Automated Textual Descriptions for a Wide Range of Video Events with 48 Human Actions
- 1 January 2012
- conference paper (book chapter)
- Published by Springer Science and Business Media LLC in Lecture Notes in Computer Science
Abstract
No abstract available.
This publication has 16 references indexed in Scilit:
- Automatic human action recognition in a scene from visual inputs. Published by SPIE-Intl Soc Optical Eng, 2012
- Towards coherent natural language description of video streams. Published by Institute of Electrical and Electronics Engineers (IEEE), 2011
- Increasing the security at vital infrastructures: automated detection of deviant behaviors. Published by SPIE-Intl Soc Optical Eng, 2011
- Actions in context. Published by Institute of Electrical and Electronics Engineers (IEEE), 2009
- Understanding videos, constructing plots: learning a visually grounded storyline model from annotated videos. Published by Institute of Electrical and Electronics Engineers (IEEE), 2009
- Recognizing realistic actions from videos “in the wild”. Published by Institute of Electrical and Electronics Engineers (IEEE), 2009
- Floor Fields for Tracking in High Density Crowd Scenes. Lecture Notes in Computer Science, 2008
- Actions as Space-Time Shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007
- Recognizing human actions: a local SVM approach. Published by Institute of Electrical and Electronics Engineers (IEEE), 2004
- Natural Language Description of Human Activities from Video Images Based on Concept Hierarchy of Actions. International Journal of Computer Vision, 2002