
Cat Vision

Problem: Like many cats, my cat often requires more play-time than our family provides. Thus, she occasionally resorts to attacking us.

 

Goal: Design an autonomous cat toy capable of entertaining a cat indefinitely to foster better feline-human relations.


Solution: Cat Vision is a computer vision-driven cat toy that uses a Raspberry Pi and the YOLOv4 object detection model to locate the cat and steer the laser away from it. The device is mounted on a turret system that allows the laser to point anywhere in a room.

Design

From personal experience, I knew that the one "toy" our cat had consistently found interesting was ironically not a toy at all, but a laser. Given this knowledge, the design was built around a controllable laser pointer. Autonomy was extremely important, so computer vision was chosen as the means of making the device "smart".

 

The initial design consisted of a rotatable top stage mounted on a platform supported by three steel rods. A stepper motor, in combination with a spur gear and an inner ring gear, controls its movement, allowing precise 360° rotation. The electronics are mounted below the top stage, hung from 3D-printed spacers.
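The gearing determines how finely the stage can be positioned. As a rough sketch of how stepper steps translate into stage rotation (the tooth counts, microstepping setting, and 200-step motor below are illustrative assumptions, not measurements from the actual build):

```python
def steps_per_degree(motor_steps_per_rev, microsteps, pinion_teeth, ring_teeth):
    """Stepper steps needed to rotate the top stage by one degree.

    The spur gear (pinion) drives the ring gear, so the stage turns
    slower than the motor by the ratio of their tooth counts.
    """
    gear_ratio = ring_teeth / pinion_teeth
    steps_per_output_rev = motor_steps_per_rev * microsteps * gear_ratio
    return steps_per_output_rev / 360.0

# Example: 200-step motor, 1/8 microstepping, 16-tooth pinion, 80-tooth ring gear
print(round(steps_per_degree(200, 8, 16, 80), 1))  # 22.2 steps per degree of stage rotation
```

The larger the ring-to-pinion ratio, the more steps per degree and the finer the pointing resolution, at the cost of slower slewing.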

[Figures: first-iteration laser pointer CAD model and build photos]

Programming and Algorithms

The YOLOv4-tiny object detection model was used on the device, as it allowed faster and more accurate detection than competing models such as EfficientDet. The tiny variant is also far more memory-efficient and lightweight, well suited to the relatively limited ARM Cortex-A72 processor of the Raspberry Pi 4 I was using. A Raspberry Pi Camera v2 module provided the image input, given its compatibility with the Raspberry Pi platform. The sensor's reporting speed was somewhat limited; however, since the object detection model's framerate was the larger bottleneck, it was judged sufficient.
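Once the model reports a bounding box, its center must be converted into a turret angle. A minimal sketch of that mapping, assuming the Pi Camera v2's documented ~62.2° horizontal field of view and a simple linear pixel-to-angle model (the function and parameter names are illustrative, not the project's actual code):

```python
def pixel_to_pan_angle(box_center_x, frame_width, hfov_deg=62.2):
    """Map a detection's horizontal pixel position to a pan offset in degrees.

    0 means the cat is dead center; negative/positive means pan left/right.
    A linear mapping is an approximation that works best near the frame center.
    """
    normalized = box_center_x / frame_width - 0.5   # range -0.5 .. +0.5
    return normalized * hfov_deg

print(pixel_to_pan_angle(320, 640))  # 0.0  -> cat centered, no pan needed
print(pixel_to_pan_angle(640, 640))  # 31.1 -> cat at right edge of frame
```

The same mapping applies vertically with the camera's vertical field of view to compute the servo tilt offset.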

Limitations and Challenges

However, I soon encountered limitations in the design and implementation of the turret:

  • The rotating top stage could easily come undone, and regularly did.

  • The platform was fairly unstable due to the limited number of contact points between the base and the top stage.

  • Integrating the electrical components was difficult; the design would have required additional on-board batteries that would both hinder its movement and be much too large, since they had to power the Raspberry Pi.

  • The camera and laser pointer had a fairly limited range of movement from their high vantage point, as the design was quite tall.
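The last point can be quantified with simple trigonometry: the higher the laser is mounted and the more limited its downward tilt, the larger the unreachable dead zone on the floor around the base (the heights and tilt angle below are illustrative, not measurements):

```python
import math

def closest_reachable_distance(mount_height_m, max_down_tilt_deg):
    """Horizontal distance from the base to the nearest floor point the
    laser can hit, given mount height and steepest downward tilt angle."""
    return mount_height_m / math.tan(math.radians(max_down_tilt_deg))

# A tall 0.5 m mount limited to 30° of downward tilt vs. a low 0.15 m mount
print(round(closest_reachable_distance(0.5, 30), 2))   # 0.87 m dead-zone radius
print(round(closest_reachable_distance(0.15, 30), 2))  # 0.26 m dead-zone radius
```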

[Figure: initial-iteration laser pointer]

Changes and Improvements

Given these limitations, I decided to redesign the mechanism to be more compact and lower to the ground.

 

Additionally, a slip ring was implemented to pass power to the Raspberry Pi, eliminating the need for a battery.

 

An alternate turret design was chosen, with an external (as opposed to internal) ring gear and a NEMA 11 stepper motor driving a similar spur gear to rotate the top stage. The turret was now secured at the bottom, preventing the top stage from dislodging, and the lower mounting of the laser and camera allowed a larger range of motion.

[Figures: final-iteration laser pointer CAD model; algorithm flowchart]

Performance & Evaluation

Over the course of testing multiple versions of the algorithm, optimizing both its speed and its detection accuracy, a framerate of approximately 4 frames per second was achieved. Though not ideal, this was sufficient for basic camera-based movement.

 

A proportional speed was then applied to the stepper motor and servo so that the laser stays a constant distance from the target cat.
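This proportional rule can be sketched as a simple P-controller: motor speed scales with how far the laser-to-cat distance deviates from the desired standoff. The gain, setpoint, and speed limit below are illustrative assumptions, not the project's tuned values:

```python
def proportional_speed(distance_px, setpoint_px=120, kp=0.5, max_speed=100.0):
    """Proportional control of motor speed from the laser-to-cat distance.

    Positive output moves the laser away from the cat (cat too close);
    negative output lets the laser drift back toward it. Output is clamped
    to the motors' maximum speed.
    """
    error = setpoint_px - distance_px              # too close -> positive error
    speed = kp * error
    return max(-max_speed, min(max_speed, speed))  # clamp to motor limits

print(proportional_speed(120))  # 0.0    at the setpoint, hold position
print(proportional_speed(20))   # 50.0   cat too close, retreat quickly
print(proportional_speed(400))  # -100.0 far away, approach at clamped speed
```

A pure P-controller like this reacts instantly to distance error but cannot compensate for the fixed sensing latency, which is why the device still lags a fast-moving cat.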

 

In testing, the device was found to be effective, accurately navigating both toward and away from the cat. There was, as expected, a delay between sensor input and the movement of the device, leaving room for improvement.

Reflection and Moving Forward

Overall, the design achieved the target objectives: it successfully navigates a laser pointer toward a target destination, entertaining my cat autonomously. With a 360-degree range of pan and approximately 50 degrees of vertical motion, it can point to virtually any location a cat may occupy.


As a cat owner, the device provides an efficient way to entertain my feline friend and to ensure that we are able to coexist without the need for violence.

 

There are some limitations with the design, mainly the speed with which the device operates. This delay is a result of a combination of factors:

  • Slow algorithm performance on Raspberry Pi hardware

  • Delay in sensor input and signal processing

  • The control scripts are written in Python, which runs relatively slowly on the Raspberry Pi

 

Additionally, the implementation of the design could be improved with regard to the cleanliness of the assembly as well as the wiring. To address these challenges, possible improvements could include:

  • Improved Hardware: Popular options for on-device detection include the Jetson Nano or AI accelerator add-on boards for the Raspberry Pi, which would increase its ability to run detection models.

  • Custom PCB: A custom wiring harness or PCB would provide a cleaner implementation of the various electronics in the design, consolidating everything onto one or two boards.

  • More Modern Model Implementation: As object detection models advance with the rapid pace of AI, newer models may provide increased performance while requiring less memory, speeding up the processing of image data on the Raspberry Pi and improving latency.
