Rapid Analysis of Video Data


Traditionally, the first step in interpreting video is to code it into a form that can be analysed systematically. Coding is currently done by hand, which is slow, difficult and prone to subjective bias. David Mawdsley (Research IT) recently presented a poster at the first “Advances in Data Science” conference explaining how we are helping Dr Caroline Jay’s group develop a way to code human behaviours quickly, allowing the rapid analysis of hours of video.

The technique being employed by Jay’s group uses object and face tracking together with machine learning to automate the coding of behaviours, with models trained on a random sample of frames from the video. David is currently working on collating video data, sensor data from wearable devices and depth-field data from an Xbox Kinect camera. By applying data science and analytical techniques to this data, the aim is to understand what a person is doing, how they are interacting with the equipment and where they are focusing their attention. This analysis of human behaviour will help to design or improve the user experience. To find out more about the work undertaken by Jay’s group, please see their latest publication.
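The post does not name the specific tools Jay’s group uses, but a minimal sketch of the general pattern might look like the following, assuming OpenCV for face detection and scikit-learn for the classifier. The feature choice and behaviour labels here are purely illustrative.

```python
import random

import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def sample_frames(video_path, n_frames=200, seed=0):
    """Draw a random sample of frames from a video for manual coding."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    random.seed(seed)
    indices = sorted(random.sample(range(total), min(n_frames, total)))
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames.append((idx, frame))
    cap.release()
    return frames


# A stock OpenCV face detector stands in for the group's tracking pipeline.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def frame_features(frame):
    """Very simple features: position and size of the largest detected face."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return np.zeros(4)
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return np.array([x, y, w, h], dtype=float)


def train_coder(coded_sample):
    """Train a behaviour coder on a hand-coded sample.

    `coded_sample` is assumed to be a list of (frame, behaviour_label)
    pairs produced by a human coder on the random sample of frames.
    """
    X = np.array([frame_features(frame) for frame, _ in coded_sample])
    y = [label for _, label in coded_sample]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model
```

In practice the features would also draw on the wearable sensor streams and the Kinect depth field, but the pattern is the same: hand-code a small random sample of frames, train a model on it, and let the model code the remaining hours of video.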

Because research software engineer (RSE) support is flexible, Jay’s research team was able to bring in David’s expertise when they needed it rather than employ a full-time RSE. This saved the research group money and allowed them to draw on different RSEs depending on the skills required at each point in the project.

If you are interested in hiring an RSE to work with your research group, please contact Robert Haines.
