Intelligent Human-Computer Interaction and Vision-based Interfaces

The sensing and understanding of human motion, commands, expressions, emotional states, and even intents can lead to intelligent human-computer interaction. Toward this goal, we have been working on vision-based gesture interfaces in which the machine perceives and interprets human commands through gestures. We have built several intelligent human-computer interaction systems that accept visual inputs. Some are toys, such as an interactive "paper-rock-scissors" video game, while others are real applications, such as the Visual Panel system and the Perceptual PowerPoint (P3) system.
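For illustration only, below is a minimal sketch of how a very simple vision-based gesture front end for a game like "paper-rock-scissors" could be assembled from off-the-shelf tools (Python with OpenCV). The skin-color thresholds and the defect-counting heuristic are assumptions chosen for this sketch; the actual systems listed on this page rely on the model-based articulated hand tracking described in the publications below.

# Hypothetical skin-color + convexity-defect baseline for labeling a webcam
# frame as "paper", "rock", or "scissors". Illustrative sketch only; not the
# model-based articulated hand tracker used in the systems on this page.
import cv2
import numpy as np

# Rough HSV skin-color range (assumed values; needs per-camera tuning).
SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HIGH = np.array([25, 180, 255], dtype=np.uint8)


def classify_posture(frame):
    """Label the largest skin-colored blob as rock/scissors/paper."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "no hand"
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 2000:      # blob too small to be a hand
        return "no hand"

    # Deep convexity defects correspond to valleys between extended fingers.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    deep = 0
    if defects is not None:
        for start, end, far, depth in defects[:, 0]:
            if depth > 10000:             # depth is reported in 1/256 pixel
                deep += 1

    if deep == 0:
        return "rock"       # fist: no deep valleys
    if deep <= 2:
        return "scissors"   # roughly two fingers extended
    return "paper"          # open hand: several deep valleys


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("gesture", frame)
        print(classify_posture(frame))
        if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()

Such a color-and-shape heuristic works only under controlled lighting and a plain background; the publications below address the harder problem of tracking full articulated hand motion.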



Related Projects:
 
[1] Capturing articulated hand/finger motion from video
[2] John Lin's real-time vision-based gesture interface
[3] "Paper-Rock-Scissors" --- an interactive video game
[4] "Visual Panel" --- turning a piece of paper into a mobile visual device
[5] "Perceptual PowerPoint (P3)"


Publications:
  1. Ying Wu, John Lin and Thomas S. Huang, "Analyzing and Capturing Articulated Hand Motion in Image Sequences", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.27, No.12, pp.1910-1922, Dec., 2005.   [PDF]
  2. Ying Wu and Thomas S. Huang, "Human Hand Modeling, Analysis and Animation in the Context of Human Computer Interaction", IEEE Signal Processing Magazine, Vol.18, No.3, pp.51-60, May, 2001.   [PDF]
  3. Gang Hua and Ying Wu, "Capturing Human Body Motion from Video for Perceptual Interfaces by Sequential Variational MAP", invited, in Proc. 11th Int'l Conf. on Human-Computer Interaction (HCII'05), Las Vegas, Nevada, July, 2005.   [PDF]
  4. John Lin, Ying Wu and Thomas S. Huang, "Articulate Hand Motion Capturing Based on a Monte Carlo Simplex Tracker", in Proc. 17th Int'l Conf. on Pattern Recognition (ICPR'04), Cambridge, UK, 2004.   [PDF]
  5. John Lin, Ying Wu and Thomas S. Huang, "3D Model-Based Hand Tracking Using Stochastic Direct Search Method", in Proc. IEEE Int'l Conf. on Automatic Face and Gesture Recognition (FG'04), Seoul, Korea, 2004.   [PDF]
  6. Ying Wu, John Lin and Thomas S. Huang, "Capturing Natural Hand Articulation", in Proc. IEEE Int'l Conf. on Computer Vision (ICCV'01), Vol.II, pp.426-432, Vancouver, Canada, July, 2001.   [PDF]
  7. Ying Wu, Kentaro Toyama and Thomas S. Huang, "Self-Supervised Learning for Object Recognition Based on Kernel Discriminant-EM Algorithm", in Proc. IEEE Int'l Conf. on Computer Vision (ICCV'01), Vol.I, pp.275-280, Vancouver, Canada, July, 2001.   [PDF]
  8. Zhengyou Zhang, Ying Wu, Ying Shan and Steven Shafer, "Visual Panel: Virtual Mouse, Keyboard, and 3D Controller with an Ordinary Piece of Paper", in Proc. ACM Perceptive User Interface Workshop (PUI'01), Florida, Nov., 2001.   [PDF]
  9. Ying Wu and Thomas S. Huang, "View-independent Recognition of Hand Postures", in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'00), Vol.II, pp.88-94, Hilton Head Island, SC, June, 2000.   [PDF]
  10. Ying Wu and Thomas S. Huang, "Self-supervised Learning for Visual Tracking and Recognition of Human Hand", in Proc. AAAI 17th National Conf. on Artificial Intelligence (AAAI'00), pp.243-248, Austin, TX, 2000.   [PDF]
  11. John Lin, Ying Wu and Thomas S. Huang, "Modeling Human Hand Constraints", in Proc. Workshop on Human Motion (Humo2000), Austin, TX, Dec., 2000.   [PDF]
  12. Ying Wu and Thomas S. Huang, "Vision-Based Gesture Recognition: A Review", Lecture Notes in Artificial Intelligence, 1739:103-115, 1999.   [PDF]
  13. Ying Wu and Thomas S. Huang, "Capturing Articulated Hand Motion: A Divide-and-Conquer Approach", in Proc. IEEE Int'l Conf. on Computer Vision (ICCV'99), pp.606-611, Greece, Sept., 1999.   [PDF]
  14. Ying Wu, "Perceptual PowerPoint System", Technical Report, TR-04-2003, Computer Vision Lab, Dept. of ECE, Northwestern University, 2003.


Updated 10/2003. Copyright © 2001-2003 Ying Wu