What Is Gesture Recognition?
To better understand what gesture recognition is, you first need to know what a gesture is.
A gesture is a movement of a part of the body, especially a hand or the head, to express an idea or meaning.
Gesture recognition refers to the mathematical interpretation of human motions using a computing device. At present, gesture recognition is mainly centered on hand-gesture recognition and facial emotion recognition.
In gesture recognition, the human body’s motions are read by a camera and the captured data is sent to a computer. The computer then makes use of this data as input to handle applications or devices.
The history of hand-gesture recognition for computer control started with the invention of glove-based control interfaces. Researchers realized that gestures inspired by sign language could be used to offer simple commands to a computer interface. This gradually evolved with the development of more accurate accelerometers, infrared cameras and even fibre-optic bend sensors (optical goniometers).
How Gesture Recognition Works
Gesture recognition is an alternative user interface for providing real-time data to a computer. Instead of typing with keys or tapping on a touch screen, a motion sensor perceives and interprets movements as the primary source of data input. This is what happens between the time a gesture is made and the computer reacts:
- A camera feeds image data into a sensing device that is connected to a computer. The sensing device typically uses an infrared sensor or projector to calculate depth.
- Specially designed software identifies meaningful gestures from a predetermined gesture library, where each gesture is matched to a computer command.
- The software then compares each registered real-time gesture against the library and interprets it as the matching gesture, if one exists.
- Once the gesture has been interpreted, the computer executes the command correlated to that specific gesture.
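The matching step above can be sketched as a nearest-neighbour lookup against a gesture library. This is an illustrative sketch only: `GESTURE_LIBRARY`, `match_gesture` and the three-element feature vectors are assumptions for the example, not part of any real gesture SDK.

```python
import math

# Hypothetical gesture library: each template is a feature vector
# (e.g. a normalised hand-motion direction) mapped to a computer command.
GESTURE_LIBRARY = {
    "swipe_left":  ([-1.0, 0.0, 0.0], "previous_page"),
    "swipe_right": ([1.0, 0.0, 0.0], "next_page"),
    "push":        ([0.0, 0.0, -1.0], "select"),
}

def match_gesture(observed, threshold=0.5):
    """Return the command for the closest template, or None when no
    template is within the distance threshold (an unknown gesture)."""
    best_cmd, best_dist = None, float("inf")
    for template, command in GESTURE_LIBRARY.values():
        dist = math.dist(observed, template)
        if dist < best_dist:
            best_cmd, best_dist = command, dist
    return best_cmd if best_dist <= threshold else None

# A real-time gesture close to the "swipe_right" template maps to its command.
print(match_gesture([0.9, 0.1, 0.0]))  # next_page
print(match_gesture([0.0, 5.0, 0.0]))  # None (no template close enough)
```

Real systems replace the toy distance check with probabilistic models over motion sequences, but the library-lookup structure is the same.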
The Most Popular Gesture Control Devices
1. Nintendo Wii
The Nintendo Wii was announced in 2006, featuring a wireless, motion-sensitive remote for its game console.
A main feature of the Wii Remote is its motion sensing capability, which allows the user to interact with – and manipulate – the items on a screen via gesture recognition and pointing, through the use of an accelerometer and optical sensor technology.
The Wii remote offers users an intuitive, natural way to play games.
2. Sony PlayStation Eye
A digital camera device for the Sony PlayStation 3. The technology uses computer vision and gesture recognition to process images taken by the camera.
This allows players to interact with games using motion and colour detection, as well as sound through its built-in microphone array.
3. Microsoft Kinect
A motion sensing input device developed for Xbox consoles and Windows PCs. A webcam-style add-on peripheral, it enables users to control and interact with their console or computer without the need for a game controller, through a natural user interface that uses body gestures and spoken commands.
The first-generation Kinect was introduced in November 2010, and Microsoft released the Kinect software development kit for Windows on June 16, 2011. The SDK allows developers to write Kinect-enabled apps and games.
4. Leap Motion
A computer hardware sensor device that supports hand and finger motions as input, analogous to a mouse, but requiring no hand contact or touching.
Before the Leap Motion controller officially went on sale, the company launched a software development program in October 2012 for developers to create applications, which would form the Airspace app store.
While Kinect focuses more on body gestures, Leap Motion specialises in detecting hand and finger movements.
5. Myo Wristband
A new competitor in the gesture control market, the Myo wristband, was announced in February 2013. Myo contains eight pods, each with a proprietary EMG sensor, to detect the muscle movements in the user's forearm and recognise hand gestures. It also contains a gyroscope, an accelerometer and a magnetometer to track arm movements such as spinning or moving.
Gesture Recognition Challenges
1. Latency
One of the key challenges in gesture recognition is that image processing can be significantly slow, creating unacceptable latency for video games and other similar applications.
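To put the latency problem in numbers: a game running at 60 frames per second (an assumed, though common, target) leaves only a small per-frame budget for all processing, including gesture recognition.

```python
# Per-frame time budget at a given frame rate. The 60 fps figure is an
# assumption used for illustration, not a number from any specific device.
fps = 60
frame_budget_ms = 1000 / fps  # milliseconds available per frame
print(round(frame_budget_ms, 1))  # 16.7
```

If image capture, depth calculation and gesture matching together exceed that budget, the user perceives lag between moving and the computer reacting.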
2. Lack of Gesture Language
Since there is no common gesture language, different users make gestures differently. When users gesture however they see fit, gesture recognition systems have difficulty identifying motions with the probabilistic methods currently in use.
3. Inaccurate Readings
Many gesture recognition systems fail to read motions accurately or optimally because of factors such as insufficient background light and high background noise.