ACM Multimedia 2019

ConfLab: Meet the Chairs!

Meet peers and chairs of ACM MM 2019 while co-creating a community dataset you could use in your own research.

Sign up for ConfLab by filling in our Consent Form, agreeing to donate your data for research.

How it Works

1. Sign up

Sign up for ConfLab by filling in our Informed Consent Form, where you agree to take part and to donate your data for research purposes. Still not sure? Read our FAQ.


2. Meet the Chairs on Thursday

Our data collection will take place on Thursday from 16:30 to 17:30 in the Rhodes hall. After you arrive, we will check whether you agreed to donate your data for research. You will then fill in a short survey about your research interests and experience with MM, and we will give you our newly designed MINGLE Midge to wear around your neck. After that, you are free to meet peers and the conference's organizing chairs at this event.


3. Tutorial and Debrief on Friday

Join us in learning more about the science and technology behind ConfLab, including discussions on privacy, ethics, and data sharing.


4. Research and the Future

Your data will help progress research on social interaction analysis in the wild. It will be shared in a pseudonymised form with the research community under an End User License Agreement, to be used only for non-commercial and non-governmental research. We also hope to use it to set future grand challenges.

What data are you contributing?

We will be collecting the following data as part of this event:

Acceleration and proximity

Our newly designed MINGLE Midge wearable device records acceleration and proximity during your interactions. Acceleration readings can be used to infer some of your actions like walking and gesturing. It is worn around the neck like a conference badge.
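To illustrate how coarse acceleration readings can hint at actions like walking or gesturing, here is a minimal sketch of a windowed movement-energy cue. The sample rate, window size, and threshold are illustrative assumptions, not ConfLab's actual analysis pipeline.

```python
import math

def movement_energy(samples, window=20):
    """Mean magnitude deviation of 3-axis accelerometer readings over
    consecutive windows -- a simple cue separating sitting still from
    walking or gesturing. `samples` is a list of (x, y, z) tuples; a
    20-sample window corresponds to ~1 s at an assumed 20 Hz rate."""
    energies = []
    for i in range(0, len(samples) - window + 1, window):
        win = samples[i:i + window]
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in win]
        mean = sum(mags) / len(mags)
        energies.append(sum(abs(m - mean) for m in mags) / len(mags))
    return energies

def is_moving(energy, threshold=0.05):
    # Energy above an (illustrative) threshold suggests motion.
    return energy > threshold
```

A wearer sitting still produces near-constant magnitude (just gravity) and near-zero energy, while gesturing or walking produces fluctuating magnitudes and higher energy.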

Video

Overhead cameras will be mounted to capture the interactions. These videos will be used to annotate behavior and to detect social actions, such as speaking, as well as conversational groups.

Low-frequency audio

The MINGLE Midge will also record low-frequency audio. This low-frequency signal is enough for recognizing whether you are speaking, but not for understanding the content of your speech, giving us valuable information without compromising your privacy.
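As an illustration of why low-frequency audio suffices for speech detection, here is a minimal energy-based voice activity sketch: frame-level loudness separates speaking from silence even when the waveform is far too coarse to reconstruct intelligible words. The frame length and threshold are illustrative assumptions.

```python
import math

def frame_rms(signal, frame_len=25):
    """Root-mean-square energy per frame of a mono audio signal.
    Even at a heavily reduced sample rate, RMS energy distinguishes
    speech from silence while the words themselves stay unintelligible."""
    rms = []
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[i:i + frame_len]
        rms.append(math.sqrt(sum(s * s for s in frame) / frame_len))
    return rms

def speaking_frames(signal, threshold=0.1):
    # Frames whose energy exceeds the (illustrative) threshold
    # are marked as "speaking".
    return [r > threshold for r in frame_rms(signal)]
```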

Survey measures

Your research interests and level of experience within the MM community will be linked to the data above via a numerical identifier.
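The linking scheme described above can be sketched as follows: survey answers and sensor data share only a participant identifier, while the mapping from identifier to name stays in a separate consent record that is never distributed. All field names and values here are hypothetical, not ConfLab's actual data layout.

```python
# Consent record: kept private, never part of the released dataset.
consent = {"P017": "Jane Doe"}

# Survey and sensor data refer to the participant only by identifier.
survey = {"P017": {"interests": "multimodal analysis", "mm_visits": 2}}
sensor_log = [{"pid": "P017", "t": 0.0, "accel": (0.01, -0.02, 0.98)}]

def released_record(pid):
    """Combine survey and sensor data under the pseudonymous ID only;
    the name mapping in `consent` never enters the released set."""
    return {
        "pid": pid,
        "survey": survey[pid],
        "sensor": [r for r in sensor_log if r["pid"] == pid],
    }
```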

Hayley Hung

Associate Professor: Socially Perceptive Computing

Ekin Gedik

Postdoctoral researcher: Multi-modal Social Experience Modelling

Bernd Dudzik

PhD Student: Memory as Context for Induced Affect from Multimedia

Stephanie Tan

PhD Student: Multi-modal Head Pose Estimation & Conversation Detection

Chirag Raman

PhD Student: Multi-modal Group Evolution Modelling

Jose Vargas

PhD Student: Multi-modal Conversational Event Detection

Organizers

ConfLab is an initiative of the Socially Perceptive Computing Lab, Delft University of Technology.

We have over 10 years of experience in developing automated behavior analysis tools and collecting large-scale social behavior data in the wild.

We are partially supported by the Dutch Research Council (NWO) MINGLE project and the organization of ACM Multimedia 2019.

Have any questions?

H.Hung@tudelft.nl