Visual Voice Activity Detection in the Wild
Research output: Contribution to journal › Article › Scientific › peer-review
Details
| | |
|---|---|
| Original language | English |
| Pages (from-to) | 967-977 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 18 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 1 Jun 2016 |
| Publication type | A1 Journal article-refereed |
Abstract
The visual voice activity detection (V-VAD) problem in unconstrained environments is investigated in this paper. A novel method for V-VAD in the wild is proposed, which exploits local shape and motion information appearing at spatiotemporal locations of interest for facial video segment description, and the bag of words model for facial video segment representation. Facial video segment classification is subsequently performed using state-of-the-art classification algorithms. Experimental results on one publicly available V-VAD dataset demonstrate the effectiveness of the proposed method, since it achieves better generalization performance on unseen users when compared to recently proposed state-of-the-art methods. Additional results on a new unconstrained dataset provide evidence that the proposed method can be effective even in cases where other existing methods fail.
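The pipeline described in the abstract (local spatiotemporal descriptors, bag of words encoding, supervised classification of facial video segments) can be illustrated with a minimal, generic sketch. This is not the authors' implementation: descriptor extraction is stubbed with random features standing in for local shape/motion descriptors, and a k-means codebook with an RBF SVM is assumed as one common BoW configuration.

```python
# Minimal sketch of a bag-of-words (BoW) pipeline for facial video segment
# classification (speaking vs. not speaking). Descriptor extraction is a
# placeholder; in practice it would compute local shape/motion descriptors
# at spatiotemporal interest points of the facial video segment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_local_descriptors(video_segment, dim=72):
    """Placeholder for spatiotemporal interest-point descriptors."""
    n_points = int(rng.integers(20, 60))
    return rng.normal(size=(n_points, dim))

# Toy training set: 40 segments with binary speaking/silent labels.
train_segments = [extract_local_descriptors(None) for _ in range(40)]
train_labels = rng.integers(0, 2, size=40)

# 1) Learn a visual codebook by clustering all training descriptors.
codebook_size = 64
kmeans = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(train_segments))

# 2) Encode each segment as a normalized histogram of codeword assignments.
def bow_histogram(descriptors):
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=codebook_size).astype(float)
    return hist / hist.sum()

X_train = np.array([bow_histogram(d) for d in train_segments])

# 3) Train a classifier on the BoW histograms (RBF SVM assumed here).
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, train_labels)

# 4) Classify an unseen facial video segment.
test_hist = bow_histogram(extract_local_descriptors(None))
print("predicted class:", clf.predict(test_hist.reshape(1, -1))[0])
```

With real descriptors, the same structure applies: the codebook is learned once on training descriptors, and each segment, seen or unseen, is reduced to a fixed-length histogram before classification, which is what allows generalization to users not present in the training set.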
Keywords
- Action Recognition, Bag of Words model, Voice Activity Detection in the wild