Segment, Recognise and Count fingers from a live video feed
Live Gesture Recognition is an application that captures and interprets human hand gestures as numbers. A gesture here refers to any physical movement of the fingers, large or small. The project is built with OpenCV and Python and is based on the concepts of object segmentation: it segments a single foreground object, in this case the hand, from a live video sequence.
The two essential parts of this project are background subtraction and contour extraction.
An efficient way to separate the foreground from the background is the running-average technique: the system observes a fixed number of frames containing only the background and keeps a running average of them. This average becomes the background model.
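The running average above can be sketched in a few lines of NumPy. This mirrors what OpenCV's `cv2.accumulateWeighted` does; the weight `alpha` used here (0.5) is an assumed value, not necessarily what this project uses:

```python
import numpy as np

def update_background(bg, frame, alpha=0.5):
    """Update the running-average background model with a new frame.

    alpha is the weight given to the newest frame (assumed value).
    """
    frame = frame.astype("float")
    if bg is None:
        # first frame seen: initialise the model with it
        return frame
    # running average: bg = alpha * frame + (1 - alpha) * bg
    return alpha * frame + (1 - alpha) * bg
```

During calibration, every incoming frame is folded into `bg`; a larger `alpha` makes the model adapt faster but also absorb the hand if it lingers in view.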
Once the hand is brought into the frame after calibration (when the application starts, it takes 30 seconds to calibrate itself against the background), the absolute difference between the background model and the current frame yields a single foreground object: the hand. This technique as a whole is known as background subtraction.
Obtaining the foreground object alone is not enough: the difference image must be thresholded so that the hand region becomes white and every other region black. Contours then help detect the motion of the fingers. The contours of the thresholded difference image are extracted, and the contour with the largest area is assumed to be the hand.
The process of counting the fingers involves five intermediate steps.
Install `virtualenv` and add it to your terminal path.
```shell
$ git clone https://github.com/abhishekbvs/Gesture-Detection.git
$ cd Gesture-Detection
$ virtualenv -p python3 .
$ source bin/activate
```
Install the dependencies listed in `requirements.txt`:
```shell
$ pip install -r requirements.txt
```

Run the application:

```shell
$ python app.py
```
This project can be extended to understand a wider range of gestures and execute commands based on them.