Blueprint

Lumo

LUMO is a Raspberry Pi–based robotic learning assistant designed to help children with learning disabilities through interactive, multi-sensory activities. It combines a camera-guided robotic arm, an interactive screen, and simple AI to support learning through voice, visuals, and physical actions.

Why this project was made

Many children learn better when they can see, hear, and touch at the same time. LUMO was built to make learning more engaging and easier to understand, especially in situations where constant one-to-one attention from a teacher or parent is not always possible.

How to use this project

Power the system, place learning objects in front of the camera, and interact with LUMO using voice commands. The robotic arm responds by identifying, picking, and sorting objects, while the screen provides visual feedback, questions, and prompts to guide the activity.

Created by Johan

Tier 1

23 views

4 followers

Iamalive 🚀 requested changes for Lumo

Awesome robot! If you can just fully read https://blueprint.hackclub.com/about/submission-guidelines and fix your project readme and journals, that would be great :)

Johan added to the journal on 1/23/2026

GitHub profile update

All STL files related to the robotic arm have been uploaded to my GitHub profile. These files include the complete set of 3D-printable components required to assemble the mechanical structure of the arm, such as mounts, joints, brackets, and supporting parts.

In addition to the STL files, the repository also contains various program files used for operating and testing the robotic arm. These programs cover motor control, movement logic, and hardware interfacing, and are intended to support both development and experimentation.
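As a rough illustration of what the motor-control side of those programs looks like, here is a minimal servo sweep sketch, assuming the gpiozero library and a joint servo signalled from GPIO 17; the actual pin assignments and control code are the ones in the repository.

```python
from time import sleep
from gpiozero import AngularServo  # gpiozero ships with Raspberry Pi OS

# Hypothetical single-joint test: signal wire on GPIO 17, typical hobby-servo
# pulse widths. The real pins and angle limits come from the schematic and repo.
shoulder = AngularServo(17, min_angle=0, max_angle=180,
                        min_pulse_width=0.0005, max_pulse_width=0.0025)

# Sweep slowly to confirm wiring, direction, and range of motion.
for angle in range(0, 181, 15):
    shoulder.angle = angle
    sleep(0.3)
```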

image

Johan added to the journal on 1/20/2026

Circuit Schematic

I’ve created a detailed circuit schematic to help anyone who wants to replicate or understand this setup. The schematic covers the complete connection between the Raspberry Pi and six servo motors, along with supporting components such as the speaker, display screen, and microphone.

This should make it easier to follow the wiring, avoid connection errors, and adapt the circuit for similar projects. The full schematic is also available in my GitHub repository for reference and reuse.
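For anyone mirroring the wiring in software, a pin map like the hypothetical one below can keep the control code readable; the GPIO numbers here are placeholders, and the schematic and repository hold the real assignments.

```python
# Hypothetical pin map mirroring the schematic. These GPIO numbers are
# placeholders; use the ones shown in the Lumo circuit schematic.
SERVO_PINS = {
    "base": 17,
    "shoulder": 18,
    "elbow": 27,
    "wrist": 22,
    "wrist_rotate": 23,
    "gripper": 24,
}

# The other peripherals in the schematic (speaker, display screen, microphone)
# attach over audio/USB/display interfaces rather than a single GPIO line,
# so they are listed here only as labels.
PERIPHERALS = ("speaker", "display screen", "microphone")
```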
Lumo Circuit Schematic

misbahudheen123k gave kudos to Lumo

Can you help me with funding for my start-up?

M.Abdullah gave kudos to Lumo

Amazing project, keep it up

Johan submitted Lumo for review

fifageo122 gave kudos to Lumo

Go sit down and study something.

josephtom0511 gave kudos to Lumo

Very nice and inspiring. Did you study anything for the exam?

Johan added to the journal on 1/15/2026 4:40 PM

Added two containers for sorting

I added two containers to the robotic arm, giving it more flexibility and a clearer purpose during interactions. With two containers, the arm can now place objects into different sections, making sorting and grouping tasks easier to understand.

This helps visually show differences—like separating items by colour or type—and makes the whole process feel more natural and engaging. The dual-container setup adds clarity to actions and makes the learning experience more interactive and intuitive.
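A minimal sketch of the sorting rule, assuming hypothetical bin poses and colour groupings rather than the values tuned on the real arm:

```python
# Hypothetical two-container sorting rule: the detected colour decides which
# drop-off pose the arm moves to. Poses and colour groups are illustrative.
LEFT_BIN_COLOURS = {"red", "yellow"}
RIGHT_BIN_COLOURS = {"blue", "green"}

LEFT_BIN_POSE = {"base": 45, "shoulder": 60, "elbow": 30}    # joint angles in degrees
RIGHT_BIN_POSE = {"base": 135, "shoulder": 60, "elbow": 30}

def choose_bin(colour: str) -> dict:
    """Return the arm pose for the container that should receive this colour."""
    if colour in LEFT_BIN_COLOURS:
        return LEFT_BIN_POSE
    if colour in RIGHT_BIN_COLOURS:
        return RIGHT_BIN_POSE
    raise ValueError(f"No container assigned for colour: {colour}")

print(choose_bin("red"))   # -> left-container pose
```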
IMG_5402

IMG_5405

Johan added to the journal on 1/15/2026 4:36 PM

Upgraded colour detection system

I’ve made a big improvement to the colour detection system, and it can now clearly identify basic colours like red, blue, green, and yellow. This makes interactions such as sorting and recognising objects much more reliable.

However, detecting black was not consistent due to lighting and colour grading issues with the current camera. This isn’t a software problem, and it can be improved in the future by upgrading to a better camera and imaging setup.
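For reference, a common way to do this kind of basic colour classification is HSV thresholding with OpenCV; the sketch below uses typical starting ranges that would still need tuning for this camera and lighting, and it is not the exact code running on LUMO.

```python
# Hypothetical OpenCV colour classifier: threshold the frame in HSV space and
# report the dominant basic colour. Red wraps around the hue axis, so it needs
# two ranges. Black has no reliable hue, which is why it depends on exposure.
import cv2
import numpy as np

HSV_RANGES = {
    "red":    [((0, 120, 70), (10, 255, 255)), ((170, 120, 70), (180, 255, 255))],
    "green":  [((40, 70, 70), (85, 255, 255))],
    "blue":   [((95, 120, 70), (130, 255, 255))],
    "yellow": [((20, 120, 70), (35, 255, 255))],
}

def dominant_colour(frame_bgr):
    """Return the colour name with the largest number of matching pixels."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    counts = {}
    for name, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo, dtype=np.uint8),
                                np.array(hi, dtype=np.uint8))
        counts[name] = int(cv2.countNonZero(mask))
    return max(counts, key=counts.get)
```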
Screenshot 2026-01-15 161725
Screenshot 2026-01-15 162325

Johan added to the journal on 1/11/2026

Upgraded camera quality and smoother motion

The system has been enhanced with a 4-megapixel high-resolution camera, significantly improving visual clarity and object detection accuracy. This upgrade enables more precise recognition, tracking, and interaction with learning materials, resulting in smoother and more reliable performance during vision-based tasks.

In addition, the servo tracking functionality has been refined and optimised to deliver more accurate, stable, and responsive movements. The improved tracking algorithm allows the robotic arm to follow objects and gestures with greater precision, enhancing real-time interaction and overall user experience. These upgrades collectively improve the system’s reliability, engagement level, and effectiveness as an interactive learning assistant.
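The tracking code itself lives in the repository; as a rough sketch of the idea, a single proportional correction step might look like the following, with placeholder gain, limits, and frame width rather than the tuned values used on LUMO.

```python
# Hypothetical proportional tracking step: nudge the pan servo so the tracked
# object drifts toward the centre of the frame. The correction sign depends on
# how the camera and servo are mounted.
FRAME_WIDTH = 640        # pixels
KP = 0.05                # degrees of correction per pixel of error
PAN_MIN, PAN_MAX = 0, 180

def track_step(current_pan_deg: float, object_x: float) -> float:
    """Return the new pan angle given the object's x position in the frame."""
    error = object_x - FRAME_WIDTH / 2      # positive when the object is right of centre
    new_angle = current_pan_deg + KP * error
    return max(PAN_MIN, min(PAN_MAX, new_angle))
```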

Despite these improvements, the system still experiences occasional servo jitter, primarily due to the limitations of the low-quality servos currently in use. This jitter becomes more noticeable under load, where precision and stability are reduced. However, during no-load or light-load conditions, the servo performance remains moderately stable and functional.

This limitation is a hardware constraint rather than a software or control issue, and it points to a clear upgrade path for future iterations of the system. Replacing the existing servos with higher-quality units is expected to significantly improve motion smoothness, tracking accuracy, and overall reliability.

IMG_5386
IMG_5385
IMG_5387

Johan added to the journal on 1/10/2026

Natural Language Processing (NLP) and facial tracking system

The development of LUMO’s Natural Language Processing (NLP) and facial tracking modules marked a critical phase in enabling meaningful human–robot interaction. The primary objective was to allow the system to understand spoken commands, respond appropriately, and visually track user presence to enhance engagement.

The NLP pipeline was successfully configured on the Raspberry Pi 4, including voice input handling and command parsing. Initial integration and software setup proceeded smoothly; however, limitations were observed in speech-to-text recognition accuracy. These issues were primarily attributed to hardware constraints, particularly the Raspberry Pi 4’s limited processing capability and the use of a basic microphone module. While the system can recognize simple commands, consistent and accurate speech recognition—especially in real-world environments with background noise—requires a more capable processor or dedicated audio-processing hardware. Upgrading this component would significantly improve reliability, responsiveness, and overall user experience, making it a key area for future enhancement.
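As an illustration only, a minimal voice-command loop on the Pi could be built with the SpeechRecognition package, as sketched below; the speech engine, microphone handling, and command vocabulary actually used on LUMO may differ.

```python
from typing import Optional
import speech_recognition as sr   # "SpeechRecognition" package; PyAudio is needed for Microphone

# Hypothetical command words mapped to actions; LUMO's real vocabulary may differ.
COMMANDS = {"sort": "start_sorting", "pick": "pick_object", "stop": "stop_arm"}

def listen_for_command() -> Optional[str]:
    """Capture one utterance and return the matching action, or None."""
    recogniser = sr.Recognizer()
    with sr.Microphone() as source:
        recogniser.adjust_for_ambient_noise(source)       # partially compensates for background noise
        audio = recogniser.listen(source, phrase_time_limit=4)
    try:
        text = recogniser.recognize_google(audio).lower() # online STT; offline engines are also supported
    except (sr.UnknownValueError, sr.RequestError):
        return None
    for keyword, action in COMMANDS.items():              # very simple keyword parsing
        if keyword in text:
            return action
    return None
```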

In parallel, the robotic arm was fully assembled using 3D-printed components. While functional, the structure lacks optimal rigidity and long-term durability due to material and print-quality limitations. As in-house 3D printing facilities were unavailable, external printing services were used, resulting in increased costs and limited iteration flexibility. Access to a professional-grade 3D printer would allow for faster prototyping, improved mechanical strength, and refined design accuracy.

Despite using a low-resolution camera, the facial tracking system performed reliably, successfully detecting faces and maintaining user engagement. With improved camera hardware and processing capability, this module has strong potential for further accuracy and expanded emotional interaction.
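For context, the standard low-cost approach on a Raspberry Pi is OpenCV's bundled Haar cascade detector, sketched below; LUMO's tracking module may use a different detector, so treat this as an assumption-laden example rather than the project's actual code.

```python
# Hypothetical face-detection loop using OpenCV's bundled Haar cascade, which
# runs acceptably on a Raspberry Pi 4 even with a low-resolution camera.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)              # default Pi/USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("LUMO face tracking", frame)   # requires a desktop session
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```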

Overall, the system demonstrates strong foundational performance, with clear opportunities for advancement through improved hardware support.
IMG_5251
IMG_5252
IMG_5334

IMG_5267
IMG_5329
IMG_5237
IMG_5344
IMG_5289

Johan started Lumo
