

MultiSOCIAL Toolbox
An Open-Source Library for Quantifying Multimodal Social Interaction
In everyday life, most of our experiences are multimodal, yet research on interaction typically focuses on a single kind of behavior at a time. By reducing the dimensionality of our data in this way, we limit our ability to capture the dynamics of real-world interpersonal behavior. A major reason for this unimodal focus has been technological: until recently, capturing the richness of human behavior in just minutes of interaction could take over an hour of meticulous hand-coding, transcription, and annotation. Advances in computing power and software are changing that.
We present an effort to assemble open-source tools into a single platform for multimodal interaction data: the MultiSOCIAL Toolbox (MULTImodal timeSeries Open-SourCe Interaction Analysis Library). While such tools already exist for scientists with programming skills, our goal is to expand access to scholars with little or no programming experience and to accelerate discovery through a unified multimodal data-processing pipeline.
The toolbox enables any researcher who has video files to extract time-series data in three modalities (an illustrative sketch follows the list):
- Body movement, quantifying non-verbal behavior through a pose-estimation algorithm
- Transcripts of what was said during an interaction, produced by an automatic speech recognition (ASR) system
- Acoustic-prosodic characteristics of speech
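To give a sense of the kind of pipeline the toolbox automates, the sketch below extracts all three modalities from a recording using common open-source libraries: MediaPipe for pose estimation, OpenAI Whisper for transcription, and Parselmouth (Praat) for prosody. These particular libraries, file names, and parameter choices are assumptions chosen for illustration; they are not necessarily the toolbox's actual internals.

```python
# Minimal sketch of a three-modality extraction pipeline.
# NOTE: MediaPipe, Whisper, and Parselmouth are assumed stand-ins here;
# the MultiSOCIAL Toolbox itself may use different components.
# pip install opencv-python mediapipe openai-whisper praat-parselmouth
import cv2
import mediapipe as mp
import whisper
import parselmouth

VIDEO = "interaction.mp4"  # hypothetical input video
AUDIO = "interaction.wav"  # audio track, pre-extracted (e.g., with ffmpeg)

# 1) Body movement: per-frame pose landmarks as a time series.
pose_series = []
cap = cv2.VideoCapture(VIDEO)
with mp.solutions.pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            # One row per frame: (x, y) for each of the 33 body landmarks.
            pose_series.append(
                [(lm.x, lm.y) for lm in result.pose_landmarks.landmark]
            )
cap.release()

# 2) Transcript: time-stamped segments from automatic speech recognition.
asr_model = whisper.load_model("base")
transcript = asr_model.transcribe(AUDIO)
for seg in transcript["segments"]:
    print(f'{seg["start"]:.2f}-{seg["end"]:.2f}s: {seg["text"].strip()}')

# 3) Acoustic prosody: pitch (F0) and intensity contours via Praat.
sound = parselmouth.Sound(AUDIO)
f0 = sound.to_pitch().selected_array["frequency"]  # Hz; 0 where unvoiced
intensity = sound.to_intensity().values.flatten()  # dB
print(f"F0 samples: {len(f0)}, intensity samples: {len(intensity)}")
```

Each step yields a time series (or time-stamped text) that can be aligned on a common clock for joint multimodal analysis.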
The toolbox is freely available on GitHub.
In collaboration with:

Alexandra Paxton
University of Connecticut

Muneeb Nafees
Colby College

Tahiya Chowdhury
Colby College

Related Publications

Romero, V.+, Chowdhury, T.+, Paxton, A.+, & *Nafees, M. (under review). An Introduction to MultiSOCIAL Toolbox: An Open-Source Library for Quantifying Multimodal Social Interaction. Behavior Research Methods.
+ denotes equal contribution by authors
* denotes undergraduate student