Reconocimiento facial de emociones

clmtrackr

[Image: example of a tracked face]

clmtrackr is a JavaScript library for fitting facial models to faces in videos or images. It is currently an implementation of constrained local models fitted by regularized landmark mean-shift, as described in Jason M. Saragih's paper. clmtrackr tracks a face and outputs the coordinate positions of the face model as an array, following the numbering of the model below:

[Image: face model point numbering]
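
As a minimal sketch of typical usage, based on clmtrackr's documented API (the element ids, overlay canvas, and drawing loop are illustrative assumptions, not code from this repository):

```javascript
// Illustrative sketch: fit the face model to a <video> element and read
// back the landmark coordinates on every animation frame.
var video = document.getElementById('inputVideo');   // assumed element id
var overlay = document.getElementById('overlay');    // assumed overlay canvas
var overlayCC = overlay.getContext('2d');

var ctrack = new clm.tracker();   // tracker using the library's default face model
ctrack.init();
ctrack.start(video);              // begin fitting the model to the video stream

function drawLoop() {
  requestAnimationFrame(drawLoop);
  overlayCC.clearRect(0, 0, overlay.width, overlay.height);

  // Array of [x, y] landmark positions, ordered by the model numbering
  // shown above, or false while no face is being tracked.
  var positions = ctrack.getCurrentPosition();
  if (positions) {
    ctrack.draw(overlay);         // draw the fitted model onto the overlay
  }
}
drawLoop();
```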

Documentation: see the clmtrackr Reference and Overview pages.

The library provides some generic face models that were trained on the MUCT database and some additional self-annotated images. Check out clmtools for building your own models.

For tracking in video, it is recommended to use a browser with WebGL support, though the library should work on any modern browser.

For some more information about Constrained Local Models, take a look at Xiaoguang Yan's excellent tutorial, which was of great help in implementing this library.

License

clmtrackr is distributed under the MIT License

Authors

About

An emotion detection test based on clmtrackr's output parameters.
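
The sketch below illustrates that idea. getCurrentParameters() is part of clmtrackr's core API; the emotionClassifier, emotionModel, and meanPredict() names are taken from clmtrackr's bundled example code and are assumptions here, so the classifier actually used in this repository may differ:

```javascript
// Hedged sketch: classify emotions from clmtrackr's parameter output.
// emotionClassifier / emotionModel / meanPredict() follow clmtrackr's example
// code and are assumptions; this repository's classifier may differ.
var classifier = new emotionClassifier();
classifier.init(emotionModel);    // classifier trained for a fixed set of emotions

function detectEmotion() {
  // Pose and deformation parameters fitted for the current frame.
  var parameters = ctrack.getCurrentParameters();

  // Returns one score per emotion, e.g.
  // [{emotion: "happy", value: 0.87}, {emotion: "sad", value: 0.04}, ...]
  var prediction = classifier.meanPredict(parameters);
  if (prediction) {
    prediction.forEach(function (p) {
      console.log(p.emotion + ': ' + p.value.toFixed(2));
    });
  }
}
```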
