Commit e7a12f4

Merge pull request larymak#353 from MBSA-INFINITY/emotion-detector
Realtime Emotion Detector using Python (Google's Teachable Machine Learning)
2 parents c163e87 + 7123f21 commit e7a12f4

File tree: 11 files changed, +105 −0 lines changed
Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
<!--Please do not remove this part-->
![Star Badge](https://img.shields.io/static/v1?label=%F0%9F%8C%9F&message=If%20Useful&style=flat&color=BC4E99)
![Open Source Love](https://badges.frapsoft.com/os/v1/open-source.svg?v=103)

# Realtime Emotion Detector using Python (Google's Teachable Machine Learning)

## 🛠️ Description
This project detects a person's emotions in real time from video input, using a pre-trained **Keras** model. The model was trained with Google's [Teachable Machine Learning](https://teachablemachine.withgoogle.com/).

The project can detect the following emotions:
**Angry**, **Happy**, **Sad**, **Smile**, **Surprise**
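
Under the hood, each detected face crop is resized to 224×224, normalized, and passed to the exported model. A condensed sketch of that single-image inference step, mirroring what `main.py` in this PR does per frame (the 127.5 scale factor is from Teachable Machine's standard export snippet; `main.py` itself uses 127.0):

```python
import numpy as np
from keras.models import load_model
from PIL import Image, ImageOps

# Load the model exported from Teachable Machine (path as committed in this PR)
model = load_model('./Teachable ML Data/keras_model.h5')
labels = ["happy", "angry", "sad", "smile", "surprise"]  # index order from labels.txt

# Preprocess one face crop to the model's 224x224 RGB input
img = Image.open('image.jpg').convert('RGB')
img = ImageOps.fit(img, (224, 224), Image.LANCZOS)
data = (np.asarray(img).astype(np.float32) / 127.5) - 1  # scale pixels to [-1, 1]

prediction = model.predict(data[np.newaxis, ...])  # add the batch dimension
print(labels[int(np.argmax(prediction))])
```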
## ⚙️ Languages or Frameworks Used
- Python, MediaPipe, Keras
- Teachable Machine Learning (for model training)

## 🌟 How to run
- ### Install all the requirements
  Run `pip install -r requirements.txt` to install all the requirements (a plausible sketch of that file is shown after this list).

- ### Run the project
  To run the project, go to the terminal and run `python main.py`. This will open two windows, one capturing the `video input` and the other displaying the `emotion output`.
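
The pinned contents of `requirements.txt` are not shown in this diff. Judging purely from the imports in `main.py`, a plausible version would look like the sketch below (the exact package set and versions are assumptions):

```text
# hypothetical requirements.txt, inferred from main.py's imports
opencv-python   # cv2
mediapipe       # face detection
tensorflow      # provides the keras API used by load_model
numpy
Pillow          # PIL.Image / PIL.ImageOps
```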
> Note: The model (`.h5` file) was trained with Teachable Machine Learning, an easy-to-use ML training platform by **Google**. Do check out that platform.

## 📺 Demo
Check out the video below for a demo of the project.

[YouTube Link](https://youtu.be/ER4avLksQfU)

## 🤖 Author
GitHub - [MBSA-INFINITY](https://github.com/MBSA-INFINITY)
LinkedIn - [MBSAIADITYA](https://www.linkedin.com/in/mbsaiaditya/)
Portfolio - [MBSA](https://mbsaiaditya.in/)
Instagram - [MBSAIADITYA](https://instagram.com/mbsaiaditya)
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
0 happy
1 angry
2 sad
3 smile
4 surprise
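
These are the model's class labels in index order. `main.py` below hard-codes the same index-to-emotion mapping; a small sketch of loading it from the file instead (assuming the file is saved as `labels.txt` next to the script):

```python
# Build an {index: label} dict from the exported labels file.
# Each line has the form "<index> <label>", e.g. "0 happy".
labels = {}
with open("labels.txt") as f:
    for line in f:
        if not line.strip():
            continue  # skip blank lines
        idx, name = line.strip().split(maxsplit=1)
        labels[int(idx)] = name

print(labels)  # {0: 'happy', 1: 'angry', 2: 'sad', 3: 'smile', 4: 'surprise'}
```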
Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
import cv2
import numpy as np
import mediapipe as mp
from keras.models import load_model
from PIL import Image, ImageOps

# MediaPipe face detection setup
mpFaceDetection = mp.solutions.face_detection
mpDraw = mp.solutions.drawing_utils
faceDetection = mpFaceDetection.FaceDetection()

# Model exported from Teachable Machine
model = load_model('./Teachable ML Data/keras_model.h5')

# Camera index 1; use 0 if the default webcam is the only camera
cap = cv2.VideoCapture(1)

results_detect = {0: "😁", 1: "😠", 2: "☹️", 3: "😊", 4: "😲"}
results_detect_str = {0: "happy", 1: "angry", 2: "sad", 3: "smile", 4: "surprise"}

while cap.isOpened():
    success, img = cap.read()
    if not success:
        break
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    results = faceDetection.process(imgRGB)
    if results.detections:
        ih, iw, ic = img.shape
        for id, detection in enumerate(results.detections):
            # Convert the relative bounding box to pixel coordinates (x, y, w, h)
            bBoxC = detection.location_data.relative_bounding_box
            bBox = int(bBoxC.xmin * iw), int(bBoxC.ymin * ih), int(bBoxC.width * iw), int(bBoxC.height * ih)
            # Crop the face: rows span the height, columns span the width
            roi = img[bBox[1]:bBox[1] + bBox[3], bBox[0]:bBox[0] + bBox[2]]
            roi = cv2.resize(roi, (224, 224))
            cv2.imwrite("image.jpg", roi)

            # Re-open the crop and preprocess it the way Teachable Machine expects
            data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
            face_img = Image.open('image.jpg')
            size = (224, 224)
            face_img = ImageOps.fit(face_img, size, Image.LANCZOS)  # ANTIALIAS was removed in Pillow 10
            image_array = np.asarray(face_img)
            # Normalize to roughly [-1, 1] (Teachable Machine's exported snippet uses 127.5)
            normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
            data[0] = normalized_image_array
            prediction = model.predict(data)
            res = np.argmax(prediction)

            # Show the matching emotion image alongside the camera feed
            temp_emotion = cv2.imread(f"./emotions/{results_detect_str[res]}.jfif")
            cv2.imshow("emotion", temp_emotion)
            print(results_detect[res])

    cv2.imshow("Image", img)

    key = cv2.waitKey(1)
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
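
One design note: `main.py` round-trips every face crop through `image.jpg` on disk before re-opening it with PIL. Since the crop is already a NumPy array, the same preprocessing can happen in memory; a minimal sketch of that alternative (not part of this PR, and it assumes the model expects RGB input):

```python
import cv2
import numpy as np

def preprocess_roi(roi_bgr: np.ndarray) -> np.ndarray:
    """Resize a BGR face crop to the model's 224x224 RGB input, scaled to [-1, 1]."""
    rgb = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2RGB)   # OpenCV frames are BGR
    rgb = cv2.resize(rgb, (224, 224))
    data = (rgb.astype(np.float32) / 127.5) - 1.0    # Teachable Machine-style normalization
    return data[np.newaxis, ...]                     # add the batch dimension

# Usage inside the detection loop (hypothetical):
#   prediction = model.predict(preprocess_roi(roi))
```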
