1 change: 1 addition & 0 deletions Problem Statement_Yaoli_talk and engagement.txt
@@ -0,0 +1 @@
Problem Statement_Yaoli_talk and engagement

Version One: My problem is to develop a model that discovers the attention patterns (fixation duration) on visual features of the speaker and of the camera work in TED talk videos that predict audience preference (self-reported 1-5 ratings on four attitude questions: interested, agree, engaging, informative). My educational goal is to improve audience preference by manipulating the attention-related features found in the talk videos. My objective is to find the attention pattern that best predicts audience preference.

Version Two: My problem is to develop a model that discovers the visual and audio features of the speaker and of the camera work in TED talk videos that predict the attention pattern (fixation duration and location). My educational goal is to improve audience attention by manipulating the visual and audio content features found in the talk videos. My objective is to find the visual and audio features that best predict audience attention.

I have collected a dataset of 29 participants' fixations on two TED talk videos, together with their responses to four attitude questions. My current priorities are: 1) to determine the granularity of the features I am coding and entering into the model (for example, whether to code the speaker's gestures at the level of general type: beats, deictics, iconic, metaphoric; or at the level of specific shape and space: central/peripheral space, high/low span, long/short stroke); 2) to determine the cutoff for defining a fixation (500 ms, from previous literature); and 3) to decide whether to normalize fixation duration by the total length of the video or by specific visual features. See the fake dataset in the attached Excel file.
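A minimal sketch of how priorities 2) and 3) above could interact, assuming a per-feature dict of raw fixation durations. The feature names echo the fake dataset; the 5-minute video length is an assumed value for illustration, not real data:

```python
FIXATION_CUTOFF_MS = 500  # priority 2): cutoff from previous literature


def fixation_shares(fixation_ms_by_feature, video_length_ms):
    """Drop gaze events shorter than the fixation cutoff, then normalize
    fixation time on each visual feature by total video length (one of the
    two normalization options in priority 3)."""
    kept = {feat: ms for feat, ms in fixation_ms_by_feature.items()
            if ms >= FIXATION_CUTOFF_MS}
    return {feat: ms / video_length_ms for feat, ms in kept.items()}


# Illustrative numbers in the spirit of the fake dataset (participant 1, video A).
raw = {"gesture1": 500, "gesture2": 2000, "speaker_face": 20000,
       "slides": 20000, "other": 100}
shares = fixation_shares(raw, video_length_ms=300_000)  # assumed 5-minute talk
```

Note that the 100 ms glance at "other" falls below the cutoff and is excluded before normalization, so the cutoff decision changes which features even enter the model.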
Contributor


This looks very interesting. Have you coded the videos already? If not, that will be a very time-consuming activity.

Author


Yes, I've coded the videos myself, though without inter-rater reliability checks.
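Since the videos are hand-coded by one rater, one way to keep the granularity decision (priority 1 in the statement) reversible is to code once at the fine-grained level and collapse to the four general types programmatically. The fine-grained labels below are hypothetical placeholders, not the actual coding scheme:

```python
# Hypothetical fine-grained gesture codes mapped to the four general types
# named in the problem statement (beats, deictics, iconic, metaphoric).
GENERAL_TYPE = {
    "beat_short_stroke": "beats",
    "beat_long_stroke": "beats",
    "point_central_space": "deictics",
    "point_peripheral_space": "deictics",
    "shape_high_span": "iconic",
    "container_low_span": "metaphoric",
}


def collapse_codes(fine_codes):
    """Collapse fine-grained gesture codes to general types, so both
    granularity levels can be entered into the model without re-coding."""
    return [GENERAL_TYPE[code] for code in fine_codes]
```

Coding at the finest level and deriving the coarser level lets both granularities be compared in the model, which matters when re-coding the videos would be as time-consuming as the reviewer suggests.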

1 change: 1 addition & 0 deletions fakedataset_Yaoli_talk and engagement.csv
@@ -0,0 +1 @@
versionOne
participantNo,participantGender,videoNo,speakerGender,fixation on gesture1(ms),fixation on gesture2(ms),fixation on speaker face(ms),fixation on slides(ms),fixation on other objects in the scene(ms),aggregated preference score of 4 attitude questions
1,F,A,F,500,2000,20000,20000,100,10
1,F,B,M,4000,20,10000,500,30000,18
2,M,A,F,40,8000,40000,10000,18000,7
2,M,B,M,1000,800,2000,40,2000,15

versionTwo
participantNo,participantGender,videoNo,speakerGender,gesture1Counts or Duration,gesture2Counts or Duration,speakerfaceDuration(ms),slidesDuration(ms),storyinspeech(describe)Duration(ms),reasoninginspeech(reason)Duration(ms),cameraZoominDuration(ms),cameraOnAudience(ms),aggregated fixation time(ms)/counts,aggregated fixation time/counts on object
1,F,A,F,,,,,,,,,,
1,F,B,M,,,,,,,,,,
2,M,A,F,,,,,,,,,,
2,M,B,M,,,,,,,,,,