Tuesday, March 6, 2012

Paper Reading #7- Kinect to Architecture

Kinect to Architecture

The beginning of the article is similar to the other articles I have covered so far, so in this review I will focus on its last section: the user study.  The participants completed the same trial twice.  The first time, the users guided an avatar to grab several rings using the Kinect-based gesture system; the second time, they used a keyboard with the gestures mapped to various keys.  The locations of the rings were randomized, but the overall distance was kept constant so that the layout would not add noise to the difference between the trials.  Afterwards, the users were given a survey that measured their satisfaction with each gesture.  The trials measured accuracy, responsiveness, and memorability; the surveys asked the users about ease of use and "fatigueness" (i.e. how tiring any particular gesture was).
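The distance-controlled randomization could be sketched like this (my own hypothetical Python; the paper does not give its placement code): each ring is placed a fixed step away from the previous one in a random direction, so every trial covers the same total distance even though the layout differs.

```python
import math
import random

def random_ring_positions(n_rings, step, start=(0.0, 0.0)):
    """Generate ring positions in random directions but a fixed step
    distance apart, so the total travel distance is identical across
    trials while the layout itself is randomized."""
    positions = [start]
    x, y = start
    for _ in range(n_rings):
        theta = random.uniform(0.0, 2.0 * math.pi)  # random direction
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        positions.append((x, y))
    return positions
```

With a fixed step, the sum of ring-to-ring distances is always `n_rings * step`, which is exactly the noise-control property the study wanted.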

Source: http://www.ivs.auckland.ac.nz/ivcnz2011_temp/uploads/1543/2-Kinect_to_Architecture_v2.pdf

Paper Reading-Recovering Missing Depth Info.

Recovering Missing Depth Information from Microsoft’s Kinect


As with all of these papers, for some reason the authors spend the first page describing the history of the Kinect and why they chose it.  According to the authors, no previous work has been published on the topic they address: recovering missing depth information from the Microsoft Kinect.  Apparently, the Kinect and other "time-of-flight" devices lose a great deal of data while capturing and compressing video.  The authors retrieve the missing depth data through a combination of RGBD segmentation and a Hough-transform voting scheme.
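As a rough illustration of what "recovering missing depth" means (this is a deliberately naive stand-in, far simpler than the paper's RGBD segmentation plus Hough voting): fill each missing depth pixel with the median of the valid depths around it.

```python
import numpy as np

def fill_depth_holes(depth, window=5):
    """Replace missing depth readings (encoded as 0) with the median
    of the valid depths in a small surrounding window.  A naive
    stand-in for the paper's segmentation-plus-voting approach."""
    filled = depth.astype(float)  # astype copies, so `depth` is untouched
    r = window // 2
    for y, x in zip(*np.where(depth == 0)):
        patch = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        valid = patch[patch > 0]
        if valid.size:
            filled[y, x] = np.median(valid)
    return filled
```

The paper's method is smarter: by segmenting on color as well as depth, it avoids averaging across object boundaries the way this local median does.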





Source:

http://www.contrib.andrew.cmu.edu/~ammarh/projects/vis_project.pdf



Thursday, March 1, 2012

Paper Reading- UI Evaluation


A Gesture Controlled User Interface for Inclusive Design and Evaluative Study of Its Usability


I chose this paper to evaluate because its material deviates from the type of article our group has normally blogged about.  This paper discusses a system for evaluating the user interface of a gesture recognition program.  Since our project is similar, even though the methods of capturing gestures differ, I felt it would be beneficial for us to study how other people conducted and evaluated user studies.

The first third of the paper focuses on previous work and the development of "GCUI" technology, or Gesture Controlled User Interfaces.  It notes that with the introduction of this technology into the games industry, increased demand has driven prices down to levels that encourage researchers to pursue the subject with far more interest than in the past.

The meat of the paper contains the information most useful to our group: an evaluation and discussion of popular research methodology, specifically for judging a project's success.  The paper presents both qualitative and quantitative approaches as valid options, each with its own benefits and drawbacks.  I will leave the specifics of the evaluation methods in the paper, since it would be pointless to restate them here, but when it comes time to evaluate our own project, I plan to revisit this paper and use it as inspiration when crafting our user studies and surveys.



Source:
http://www.scirp.org/journal/PaperDownload.aspx?paperID=7503&returnUrl=http://www.scirp.org%2Fjournal%2FHome.aspx%3FIssueID%3D1069

Tuesday, February 21, 2012

Paper Reading-Dancing With The Kinect


Evaluating a Dancer’s Performance using Kinect-based Skeleton Tracking


This paper's main purpose is applying motion-based technology (specifically the Microsoft Kinect) in a social media setting.  The goal is to create a website where experienced dancers can record themselves using the Kinect and new dancers can watch these videos to learn and improve their skills.  The researchers' application analyzes the experienced dancer's moves, compares them to the rookie's, and gives the new dancer advice on what he is doing wrong and how to improve his steps.  The system also encourages social interaction by letting students communicate with the advanced dancer or with other new dancers.

The portion of the system that analyzes the dancer's moves is based on skeleton tracking data provided by the Microsoft Kinect.  Because of the nature of the Kinect's sensors, the captured data is three-dimensional, allowing users to rotate the view of their skeleton and see it from multiple perspectives.  Testing showed that skeleton tracking remains effective even when each user does not supply his own calibration data and a general calibration data set is used instead; the researchers picked this general set to accommodate the wide variety of test subjects participating in the experiment.
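Rotating the captured skeleton for a new viewpoint is straightforward once the joints are 3D points.  A minimal sketch (my own, not the paper's code), rotating about the vertical axis:

```python
import math

def rotate_joints_y(joints, angle_deg):
    """Rotate skeleton joints, given as (x, y, z) tuples, about the
    vertical (y) axis so a recorded skeleton can be viewed from a
    different angle."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in joints]
```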

The software was tested with two professional dancers (one male, one female) and 13 amateur dancers (eight male, five female), all dancing various forms of Salsa.  The dancers were evaluated on several parameters: joint positions, joint velocities, and 3D flow error.
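Two of those scores might be computed along these lines over time-aligned skeleton sequences (a hypothetical sketch; the paper defines its own exact metrics):

```python
import math

def joint_errors(ref, test, dt=1.0 / 30.0):
    """Mean joint-position error and mean joint-velocity error between
    two aligned skeleton sequences.  Each sequence is a list of frames;
    each frame is a list of (x, y, z) joints, same joint order."""
    pos_err = vel_err = 0.0
    n_pos = n_vel = 0
    for t in range(len(ref)):
        for j in range(len(ref[t])):
            pos_err += math.dist(ref[t][j], test[t][j])
            n_pos += 1
            if t > 0:  # finite-difference velocity needs a previous frame
                v_ref = [(a - b) / dt for a, b in zip(ref[t][j], ref[t - 1][j])]
                v_tst = [(a - b) / dt for a, b in zip(test[t][j], test[t - 1][j])]
                vel_err += math.dist(v_ref, v_tst)
                n_vel += 1
    return pos_err / n_pos, vel_err / max(n_vel, 1)
```

A constant positional offset between the two dancers shows up only in the position error, not the velocity error, which is one reason to report both.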

My Thoughts
I was generally impressed with the work and concepts introduced in this paper.  I have personal experience learning various types of dance, and the biggest obstacle to becoming skilled is the lack of personal attention in most learning environments.  Classes often have so many students that individual attention from the instructor is limited to a few minutes per lesson.  The ability to see exactly what you are doing wrong would be extremely useful when learning a new skill like dancing.



Source:
http://dl.acm.org/citation.cfm?id=2072298.2072412&coll=DL&dl=GUIDE

Tuesday, February 7, 2012

Paper Reading-Hand Gesture Recognition


Robust hand gesture recognition with kinect sensor

The authors of this paper confront the problem of gesture recognition on the Microsoft Kinect.  The biggest obstacle is the relatively low resolution of the Kinect's optical sensors.  To compensate, the authors combined different approaches, using the color and depth maps together.
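The color-plus-depth fusion idea could be sketched as intersecting two masks (a simplified stand-in, not the authors' actual pipeline): keep only pixels that fall in a near-depth band and also pass a skin-color test.

```python
import numpy as np

def hand_mask(depth_mm, skin_mask, near=400, far=800):
    """Combine the two cues: keep pixels whose depth lies in the band
    where the hand is held (near the sensor) AND that the RGB image
    classified as skin.  Either cue alone is noisy at the Kinect's
    resolution; their intersection is more robust."""
    in_band = (depth_mm > near) & (depth_mm < far)
    return in_band & skin_mask
```

The `near`/`far` thresholds here are illustrative values in millimeters, not figures from the paper.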

To test their gesture recognition application they built two small programs: a rock-paper-scissors game and a very simple calculator.  Both programs focus on finger and hand gestures, something the Kinect does not support natively.  The paper did not go into depth about the actual user studies or how much data was collected; it only said that the testing yielded positive results and that, in conclusion, their application recognized the correct hand gesture 90.6% of the time.




Source:

Paper Reading-Pose Estimation With The Kinect


Learning shape models for monocular human pose estimation from the
Microsoft Xbox Kinect


This paper's primary focus is a relatively new idea: identifying human pose with the Microsoft Kinect, which captures a silhouette of the person for analysis.  According to the researchers, this differs from previous work because the Kinect lets them capture shapes and "generative models" of human bodies, as opposed to the simple geometric models used in earlier work.


The researchers conclude that using the Kinect to generate learning models for their application yielded much more accurate pose estimations.  Future work covers a wide range of directions, including extending their dataset to account for clothing and other limb variations, and investigating coupling between the mixture components of different limbs to capture correlations in shape.

Source:

Paper Reading-Brave NUI World


Brave NUI world