A Raspberry Pi–powered AI chatbot with animatronic eyes, voice recognition, and an expressive LED mouth animation.
👉 The full build guide, detailed wiring, 3D printing files, and an in-depth explanation are available on my blog here:
https://www.the-diy-life.com/i-built-a-pi-5-ai-chatbot-that-talks-blinks-and-looks-around/
This repository contains two scripts:
- AIChatbot.py — The full AI chatbot code, handling speech recognition, voice output, OpenAI responses, emotion colouring, and the LED mouth animation.
- EyeMovement.py — Controls only the animatronic eye servos: idle motion, blinking, and coordinated gaze movement. It has no chatbot functionality.
Use this repository together with the build instructions on my blog to assemble the animatronic eye mechanism and connect everything to the Raspberry Pi. The physical assembly instructions for the animatronic eyes, servo mounts, wiring diagrams, and LED placement are all in the blog post linked at the beginning.
Hardware used:
- Raspberry Pi (Pi 5 recommended)
- USB microphone
- Speaker or I2S audio card
- 4× micro servos for eyes
- 2× micro servos for eyelids
- NeoPixel / WS2812B LED strip for mouth
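Once you've finished the software setup below, it can help to sanity-check the electronics before assembling anything. The following is a minimal smoke test, not code from this repo: it assumes the servos sit on channels 0–5 of a PCA9685 board driven by adafruit-circuitpython-servokit, and that the mouth strip's data line is on GPIO 18 with 8 pixels. Adjust everything to match the wiring in the blog post.

```python
import time
import board
import neopixel
from adafruit_servokit import ServoKit

# assumed: 16-channel PCA9685 servo driver, eye/eyelid servos on channels 0-5
kit = ServoKit(channels=16)

# assumed: mouth LED strip on GPIO 18 with 8 pixels
pixels = neopixel.NeoPixel(board.D18, 8, brightness=0.3, auto_write=False)

# centre each servo in turn, then flash the mouth green
for channel in range(6):
    kit.servo[channel].angle = 90
    time.sleep(0.2)

pixels.fill((0, 255, 0))
pixels.show()
```

If a servo buzzes or slams to one end, check its channel number and pulse range before mounting it in the eye mechanism.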
Follow these steps to get the software running.
Raspberry Pi OS Bookworm requires a virtual environment for most GPIO and audio libraries.
```
sudo apt update
sudo apt install python3-venv python3-dev git -y

# Create virtual environment
python3 -m venv pica-env

# Activate environment
source pica-env/bin/activate
```
Install dependencies inside the virtual environment:
```
pip install \
  openai \
  sounddevice \
  numpy \
  scipy \
  RPi.GPIO \
  gpiozero \
  adafruit-circuitpython-neopixel \
  adafruit-circuitpython-servokit \
  adafruit-circuitpython-ads1x15 \
  adafruit-circuitpython-pixelbuf \
  adafruit-circuitpython-led-animation \
  pillow \
  flask \
  python-dotenv \
  pydub \
  pygame
```
To use the chatbot you'll also need an OpenAI API key:
- Go to OpenAI Platform
- Log in
- Navigate to Dashboard → API Keys
- Create a new API key
- Copy the key
Create a .env file in your project directory:
```
nano .env
```
Add the following line:
```
OPENAI_API_KEY=your_api_key_here
```
Your AIChatbot.py file will load this automatically using dotenv.
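Roughly, that loading looks like this (a minimal sketch, not the repo's exact code):

```python
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # pulls OPENAI_API_KEY from .env into the environment

# the OpenAI client picks up OPENAI_API_KEY from the environment by default
client = OpenAI()
print(bool(os.getenv("OPENAI_API_KEY")))  # True if the key was found
```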
To test the eye mechanism on its own:
```
python EyeMovement.py
```
To run the full chatbot:
```
python AIChatbot.py
```
In AIChatbot.py, locate the voice setting near the top:
voice_name = "echo"Replace "alloy" with any supported OpenAI TTS voice, for example:
voice_name = "verse"
voice_name = "nova"
voice_name = "shimmer"Find the system prompt in AIChatbot.py:
```python
messages=[
    {"role": "system", "content": (
        "You are a calm, expressive AI. "
        "Respond concisely in 1 sentence unless necessary. "
        "Also output emotion as one of: happy, sad, neutral, angry, surprised. "
        "Format: <text> [emotion: <label>]"
    )},
```
Edit this text to change the AI's personality, style, and behaviour.
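Because every reply ends with the "[emotion: <label>]" tag, the code has to split that tag from the spoken text. A minimal sketch of that parsing (parse_reply is my illustration, not the repo's actual function):

```python
import re

def parse_reply(reply: str) -> tuple[str, str]:
    # split "<text> [emotion: <label>]" into spoken text and emotion label
    match = re.search(r"\[emotion:\s*(\w+)\]\s*$", reply)
    if match:
        return reply[: match.start()].strip(), match.group(1).lower()
    return reply.strip(), "neutral"  # fall back if the tag is missing

print(parse_reply("Nice to meet you! [emotion: happy]"))
# -> ('Nice to meet you!', 'happy')
```

The extracted label then drives the mouth colour, as described next.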
The emotion-to-colour mapping is stored as:
```python
EMOTION_COLORS = {
    "happy": (255, 255, 0),     # yellow
    "sad": (0, 0, 255),         # blue
    "angry": (255, 0, 0),       # red
    "surprised": (255, 0, 255), # purple
    "neutral": (0, 255, 0),     # default green
}
```
Adjust any RGB combination as desired.
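To tie it together, a parsed emotion label can be looked up in EMOTION_COLORS and pushed straight to the strip. A minimal sketch, reusing the assumed GPIO 18 / 8-pixel wiring from the smoke test above (show_emotion is my name for the helper, not the repo's):

```python
import board
import neopixel

EMOTION_COLORS = {
    "happy": (255, 255, 0),
    "neutral": (0, 255, 0),
}

# assumed wiring: data line on GPIO 18, 8 pixels
pixels = neopixel.NeoPixel(board.D18, 8, brightness=0.3, auto_write=False)

def show_emotion(emotion: str) -> None:
    # unknown labels fall back to the neutral colour
    pixels.fill(EMOTION_COLORS.get(emotion, EMOTION_COLORS["neutral"]))
    pixels.show()

show_emotion("happy")
```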