A spectacle-like wearable device that enables computer control through eye movements, designed to enhance accessibility for individuals with disabilities.
EyeCan is a wearable assistive technology that uses real-time eye movement monitoring to enable individuals with mobility impairments to control computers. The device resembles regular spectacles but incorporates a camera system that tracks eye gestures and translates them into computer commands.
Video: Working demonstration of EyeCan in action
- Real-time eye movement tracking using MediaPipe and OpenCV
- Pupil detection via the Hough Circle Transform
- Gesture classification through custom CNN models
- Hands-free computer control through calibrated eye movements
- Low latency response for natural interaction
- Comfortable wearable design for extended use
Image: OpenCV eye detection and tracking visualization
Our system uses a multi-stage approach to translate eye movements into computer commands (a code sketch of the early stages follows this list):
- Capture: Camera mounted on spectacles captures eye region
- Detection: MediaPipe facial landmark detection locates precise eye position
- Tracking: OpenCV processes eye movements in real-time
- Recognition: The Hough Circle Transform detects the pupil's position and movement
- Classification: CNN models interpret movements as specific commands
- Action: Commands are translated into computer inputs
Image: Visualization of the eye movement detection algorithm
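To make the pipeline concrete, here is a minimal sketch of the Capture, Detection, Tracking, and Recognition stages. It is illustrative only: the webcam index, Face Mesh landmark indices, crop margin, and Hough parameters are assumptions rather than EyeCan's tuned values.

```python
# Minimal sketch: capture a frame, locate the left eye with MediaPipe
# Face Mesh, crop it, and find the pupil with the Hough Circle Transform.
import cv2
import mediapipe as mp
import numpy as np

# Outer corner, inner corner, upper lid, lower lid of the left eye in
# the MediaPipe Face Mesh landmark scheme (illustrative choice).
LEFT_EYE = [33, 133, 159, 145]

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)  # assumed camera index

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        xs = [int(lm[i].x * w) for i in LEFT_EYE]
        ys = [int(lm[i].y * h) for i in LEFT_EYE]
        # Crop the eye region with a small margin.
        x0, x1 = max(min(xs) - 5, 0), min(max(xs) + 5, w)
        y0, y1 = max(min(ys) - 5, 0), min(max(ys) + 5, h)
        eye = frame[y0:y1, x0:x1]
        if eye.size:
            gray = cv2.medianBlur(cv2.cvtColor(eye, cv2.COLOR_BGR2GRAY), 5)
            # The strongest circle approximates the pupil.
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
                                       minDist=20, param1=50, param2=15,
                                       minRadius=3, maxRadius=25)
            if circles is not None:
                cx, cy, r = np.uint16(np.around(circles))[0][0]
                cv2.circle(eye, (cx, cy), r, (0, 255, 0), 1)
    cv2.imshow("EyeCan debug view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Tracking the detected pupil centre across frames yields the movement signal that the later Classification stage interprets as commands.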
- Computer Vision: OpenCV, MediaPipe
- Machine Learning: Custom CNN models (a hypothetical sketch follows below)
- Hardware: Raspberry Pi Zero (or similar), specialized camera module
- Programming: Python
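The repository's custom CNN models are not reproduced here; the following is a hypothetical PyTorch sketch of what the Classification stage could look like. The gesture labels, 64x64 input size, and architecture are all assumptions.

```python
# Hypothetical sketch of the Classification stage; EyeCan's actual
# models, labels, and input size may differ.
import torch
import torch.nn as nn

GESTURES = ["left", "right", "up", "down", "blink", "idle"]  # assumed labels

class GestureCNN(nn.Module):
    """Small CNN mapping a grayscale eye crop to a gesture class."""
    def __init__(self, n_classes=len(GESTURES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 inputs

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = GestureCNN().eval()
eye_crop = torch.rand(1, 1, 64, 64)  # stand-in for a preprocessed eye image
with torch.no_grad():
    gesture = GESTURES[model(eye_crop).argmax(dim=1).item()]
print(gesture)
```

The Action stage would then map the predicted label to an OS-level input, for example via a lookup table of mouse and keyboard calls; a library such as pyautogui could serve here, though it is not listed in the project's requirements, so treat that as an assumption.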
torch==2.1.0
ultralytics==8.0.230
opencv-python==4.10.0.84
mediapipe
numpy
pillow
streamlit
# Clone the repository
git clone https://github.com/Harish-ioc/EyeCan.git
# Navigate to the project directory
cd EyeCan
# Install dependencies
pip install -r requirements.txt
# Run the application
streamlit run main_streamlit.py
- Setup: Position the device comfortably on your face like regular glasses
- Calibration: Follow the on-screen prompts to calibrate the system (a conceptual sketch of this step follows below)
- Control: Once calibrated, use eye movements and gestures to operate the computer hands-free
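Conceptually, calibration learns a mapping from raw pupil coordinates to screen coordinates. Below is a hedged sketch assuming a simple affine model fitted by least squares; the target layout, sample values, and model choice are illustrative, not necessarily EyeCan's actual procedure.

```python
# Hedged sketch: fit an affine map screen = [px, py, 1] @ A from samples
# collected while the user looks at known on-screen calibration targets.
import numpy as np

# (pupil_x, pupil_y) measurements paired with the screen corners shown
# during calibration -- the numbers here are made up for illustration.
pupil = np.array([[210, 140], [340, 138], [212, 230], [338, 232]], float)
screen = np.array([[0, 0], [1920, 0], [0, 1080], [1920, 1080]], float)

# Homogeneous pupil coordinates -> least-squares affine fit.
X = np.hstack([pupil, np.ones((len(pupil), 1))])
A, *_ = np.linalg.lstsq(X, screen, rcond=None)

def gaze_to_screen(px, py):
    """Map a raw pupil position to an estimated screen coordinate."""
    return np.array([px, py, 1.0]) @ A

print(gaze_to_screen(275, 185))  # roughly the centre of a 1920x1080 screen
```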
This technology aims to:
- Enhance accessibility for individuals with mobility impairments
- Create new employment opportunities
- Promote digital inclusion
- Enable greater independence in computer use
- Improve gesture recognition accuracy
- Add customizable gesture mappings
- Reduce hardware size and weight
- Develop wireless version
- Create mobile application support
- Implement eye-typing with word prediction
Contributions to improve EyeCan are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.