VEATIC: Video-based Emotion and Affect Tracking in Context Dataset: Conclusion

:::info
Authors:

(1) Zhihang Ren, University of California, Berkeley; these authors contributed equally to this work (Email: peter.zhren@berkeley.edu);

(2) Jefferson Ortega, University of California, Berkeley; these authors contributed equally to this work (Email: jefferson_ortega@berkeley.edu);

(3) Yifan Wang, University of California, Berkeley; these authors contributed equally to this work (Email: wyf020803@berkeley.edu);

(4) Zhimin Chen, University of California, Berkeley (Email: zhimin@berkeley.edu);

(5) Yunhui Guo, University of Texas at Dallas (Email: yunhui.guo@utdallas.edu);

(6) Stella X. Yu, University of California, Berkeley and University of Michigan, Ann Arbor (Email: stellayu@umich.edu);

(7) David Whitney, University of California, Berkeley (Email: dwhitney@berkeley.edu).

:::

Table of Links

Abstract and Intro
Related Work
VEATIC Dataset
Experiments
Discussion
Conclusion
More About Stimuli
Annotation Details
Outlier Processing
Subject Agreement Across Videos
Familiarity and Enjoyment Ratings and References

6. Conclusion

In this study, we proposed VEATIC, the first large-scale, context-based video dataset for continuous valence and arousal prediction. Various visualizations illustrate the diversity of our dataset and the consistency of our annotations. We also proposed a simple baseline algorithm for this task. Empirical results demonstrate the effectiveness of both our proposed method and the VEATIC dataset.
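For readers who want to sanity-check a model on this kind of task, the snippet below shows how frame-level valence or arousal predictions can be scored against continuous annotations using the concordance correlation coefficient (CCC). This is a minimal illustrative sketch, not the paper's evaluation code: the `ccc` function and the toy signals are assumptions, and CCC is simply a standard metric in continuous affect estimation.

```python
import numpy as np

def ccc(pred: np.ndarray, target: np.ndarray) -> float:
    """Concordance correlation coefficient between two 1-D signals."""
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    return 2 * cov / (var_p + var_t + (mu_p - mu_t) ** 2)

# Toy example: frame-by-frame valence predictions vs. annotations.
rng = np.random.default_rng(0)
annotated_valence = np.sin(np.linspace(0, 4 * np.pi, 300))
predicted_valence = annotated_valence + 0.1 * rng.standard_normal(300)
print(f"valence CCC: {ccc(predicted_valence, annotated_valence):.3f}")
```

Unlike plain Pearson correlation, CCC also penalizes differences in mean and scale, which matters when a model tracks the shape of an emotion trace but is offset from the annotators' ratings.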

:::info
This paper is available on arXiv under a CC 4.0 license.

:::
