2022-2023 Robotics & Computer Vision

Sentry Nerf Gun

A project integrating mechanical, electrical, and software systems to build a color- or face-tracking Nerf turret with real-time aiming and firing capabilities.

The Challenge

The team aimed to create a mobile sentry gun that autonomously detects, tracks, and fires Nerf darts at targets—whether a specific color or a human face. This involved combining computer vision with a mechanical turret system to achieve real-time aiming and firing, plus an initial plan to mount it on a Neato robot vacuum for autonomous movement.

Integrating mechanical, electrical, and software components proved more complex than anticipated, requiring a careful balance of ambition and feasibility. The project had to coordinate computer vision processing, stepper motor control, and real-time target tracking while maintaining smooth movement and accurate aiming.

Foundation System

Custom Pan/Tilt Mechanism with Stepper Motors

The Solution

We developed a functional turret prototype that successfully tracked colored objects or faces, aimed, and fired Nerf darts in real time. The solution featured two tracking modes: color-based detection for red objects and face detection using OpenCV, providing flexibility for different target types.
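As a rough sketch of the color mode, assuming an HSV threshold for red (the exact ranges the team used are not documented here), the mask can be built with OpenCV along these lines:

```python
import cv2

def find_red_target(frame_bgr):
    """Return the centroid of the largest red blob in the frame, or None.

    Illustrative HSV thresholds; red wraps around hue 0, so two
    ranges are combined. The project's actual values may differ.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
```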

The mechanical system consisted of a custom pan/tilt mechanism powered by two NEMA-17 stepper motors with a crossed-roller bearing for smooth, load-bearing rotation. The Nerf Stryfe gun required minimal permanent modifications, with a relay controlling the flywheel motors and one servo pulling the trigger for firing control.
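A minimal MicroPython sketch of the firing sequence on the Pico, assuming the relay sits on GP15, the trigger servo on GP16, and illustrative servo pulse widths and spin-up delays (pins and timings are assumptions, not the project's actual values):

```python
from machine import Pin, PWM
import time

RELAY = Pin(15, Pin.OUT)    # flywheel relay (assumed pin)
TRIGGER = PWM(Pin(16))      # trigger servo (assumed pin)
TRIGGER.freq(50)            # standard 50 Hz hobby-servo signal

def set_servo_us(pwm, pulse_us):
    """Convert a pulse width in microseconds to a 16-bit duty value."""
    pwm.duty_u16(int(pulse_us / 20_000 * 65_535))

def fire_one_dart():
    RELAY.value(1)               # spin up the flywheels
    time.sleep_ms(500)           # give them time to reach speed
    set_servo_us(TRIGGER, 2000)  # push the trigger
    time.sleep_ms(300)
    set_servo_us(TRIGGER, 1000)  # release the trigger
    RELAY.value(0)               # stop the flywheels
```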

All actuators were unified under a single microcontroller (Raspberry Pi Pico), controlled via USB serial communication. This simplified the control architecture while maintaining precise coordination between the pan/tilt system and firing mechanism.
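On the Pico side, a simple text protocol over USB serial could be parsed roughly as follows; the MOVE/FIRE command names and the handler functions are illustrative, not the project's actual protocol:

```python
import sys

def serve_commands(move_steps, fire_one_dart):
    """Read newline-terminated commands from USB serial and dispatch them.

    Expected (illustrative) commands:
        MOVE <pan_steps> <tilt_steps>
        FIRE
    """
    while True:
        line = sys.stdin.readline().strip()
        if not line:
            continue
        parts = line.split()
        if parts[0] == "MOVE" and len(parts) == 3:
            move_steps(int(parts[1]), int(parts[2]))
        elif parts[0] == "FIRE":
            fire_one_dart()
        else:
            print("ERR unknown command:", line)
```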

Computer Vision Process

Computer Vision Detection System

How It Works

The system uses OpenCV for color masking and face detection, processing the video feed in real time to identify targets. When a target is detected, the system calculates its position in the frame and drives the turret with PD controllers, whose gains were tuned for smooth, accurate tracking.
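The aiming loop can be sketched as a PD update on the pixel error between the target and the frame center; the gains below are placeholders, not the tuned values from the project:

```python
class PDAxis:
    """PD controller for one turret axis: pixel error in, step command out."""

    def __init__(self, kp, kd):
        self.kp = kp
        self.kd = kd
        self.prev_error = 0.0

    def update(self, error_px, dt):
        derivative = (error_px - self.prev_error) / dt
        self.prev_error = error_px
        return self.kp * error_px + self.kd * derivative

# Placeholder gains; the project's tuned values are not reproduced here.
pan_pd = PDAxis(kp=0.05, kd=0.01)
tilt_pd = PDAxis(kp=0.05, kd=0.01)

def aim_command(target_xy, frame_size, dt):
    """Return (pan_steps, tilt_steps) driving the target toward frame center."""
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    pan = pan_pd.update(target_xy[0] - cx, dt)
    tilt = tilt_pd.update(target_xy[1] - cy, dt)
    return int(pan), int(tilt)
```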

Coordinating two stepper motors simultaneously for diagonal aiming required non-blocking timing so neither axis stalls while the other steps. The solution employed an async I/O library to generate clean, coordinated PWM signals, reducing jitter and keeping diagonal movements smooth.
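A minimal sketch of concurrent stepping with MicroPython's asyncio support, so pan and tilt advance at the same time during diagonal moves; the step pins and timing are assumptions:

```python
import uasyncio as asyncio
from machine import Pin

PAN_STEP = Pin(2, Pin.OUT)    # assumed step/direction pins
PAN_DIR = Pin(3, Pin.OUT)
TILT_STEP = Pin(4, Pin.OUT)
TILT_DIR = Pin(5, Pin.OUT)

async def step_axis(step_pin, dir_pin, steps, step_period_ms=2):
    """Pulse one stepper driver; the sign of `steps` sets direction."""
    dir_pin.value(1 if steps >= 0 else 0)
    for _ in range(abs(steps)):
        step_pin.value(1)
        await asyncio.sleep_ms(0)               # yield so the other axis can run
        step_pin.value(0)
        await asyncio.sleep_ms(step_period_ms)  # crude step rate; tune per motor

async def move_steps(pan_steps, tilt_steps):
    """Run both axes concurrently so diagonal moves stay smooth."""
    await asyncio.gather(
        step_axis(PAN_STEP, PAN_DIR, pan_steps),
        step_axis(TILT_STEP, TILT_DIR, tilt_steps),
    )

# Example: asyncio.run(move_steps(100, -50))
```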

Serial communication via USB integrates the computer vision processing with the turret controls, allowing the Raspberry Pi to send precise positioning commands to the Raspberry Pi Pico microcontroller, which then drives the stepper motors and firing mechanism accordingly.
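On the Raspberry Pi side, commands can be sent with pyserial; the port name and the MOVE/FIRE text protocol mirror the illustrative Pico sketch above and are assumptions:

```python
import serial

# The Pico's USB serial port typically appears as /dev/ttyACM0 on Linux (assumed here).
pico = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

def send_move(pan_steps, tilt_steps):
    pico.write(f"MOVE {pan_steps} {tilt_steps}\n".encode())

def send_fire():
    pico.write(b"FIRE\n")
```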

Full Assembly

Servo-Controlled Aiming Mechanism

Challenges & Solutions

Multiple face detections presented a significant challenge, as OpenCV often detected spurious "faces" in the background. The solution involved imposing a size threshold so only sufficiently large "faces" were tracked, effectively ignoring smaller false positives and improving target accuracy.
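A sketch of the size filter, assuming OpenCV's Haar cascade face detector and an illustrative minimum bounding-box size:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

MIN_FACE_PX = 80   # illustrative threshold; smaller detections are treated as noise

def find_largest_face(frame_bgr):
    """Return the largest face (x, y, w, h) above the size threshold, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5,
        minSize=(MIN_FACE_PX, MIN_FACE_PX))   # drops small background false positives
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])
```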

Integration with ROS and the Neato vacuum proved cumbersome when attempting to run the face/color recognition code on a Raspberry Pi while also interfacing with ROS. The team dropped the Neato and ROS integration and consolidated everything into a single codebase for direct turret control, which simplified the overall system architecture.

Applications

The sentry gun project demonstrates effective integration of mechanical, electrical, and software components, serving as an excellent platform for educational exploration in computer vision fundamentals, real-time control systems, embedded programming techniques, and mechanical systems integration.

Future development opportunities include reviving the Neato robot integration plan to mount the turret on a mobile base and utilize ROS for full autonomy. Additional improvements could involve more robust face detection through refined OpenCV parameters or alternative ML methods to reduce false positives, and optimizing code for standalone operation on a Raspberry Pi without requiring a laptop.