--------------------------------------------------------------------------------
PLT Recordings and Transcripts:
The POETICON Lithic Tool multimodal database on spontaneous speech and
movement production on object affordances
version 1.0

Release date: 01/09/2015

Organisation:
COGNITIVE SYSTEMS RESEARCH INSTITUTE, Athens, Greece &
INSTITUTE FOR LANGUAGE AND SPEECH PROCESSING/ATHENA R.C., Athens, Greece

Related Documentation:
Argiro Vatakis and Katerina Pastra (submitted), The PLT Corpus: A multimodal
dataset of spontaneous speech and movement production on object affordances,
Scientific Data, Nature Publishing Group.

Further details:
http://csri.gr/scientific-achievements#node-835 (PLT general)
http://csri.gr/demonstrations
http://csri.gr/open-access-data

Download from: http://csri.gr/downloads/PLT

Contact:
For further information, or to report any problems, send an e-mail to:
Argiro Vatakis at avatakis@csri.gr or
Katerina Pastra at kpastra@csri.gr
--------------------------------------------------------------------------------
License:
This dataset is released under a Creative Commons Attribution-NonCommercial-
ShareAlike 4.0 (CC BY-NC-SA 4.0) license.
See http://creativecommons.org/licenses/by-nc-sa/4.0/
--------------------------------------------------------------------------------
Cite as:
Vatakis, A., & Pastra, K. (2015). PLT Recordings and Transcripts: The POETICON
Lithic Tool multimodal database on spontaneous speech and movement production
on object affordances. Athens: Cognitive Systems Research Institute. Version
1.0. http://csri.gr/downloads/CMR, ISLRN: 022-103-177-574-9
--------------------------------------------------------------------------------
Overview:
In the longstanding effort to define object affordances, a number of resources
have been developed on objects and associated knowledge. These resources,
however, have limited potential for modeling and generalization, mainly due to
the restricted, stimulus-bound data-collection methodologies adopted. To date,
therefore, no resource exists that truly captures object affordances in a
direct, multimodal, and naturalistic way. Here, we present the first such
resource of "thinking aloud", spontaneously generated verbal and motoric data
on object affordances. This resource was developed from the reports of 124
participants in three behavioural experiments with visuo-tactile stimulation,
which were captured audiovisually from two camera views (frontal/profile).
This methodology allowed the acquisition of approximately 95 hours of video,
audio, and text data covering: object-feature-action data (e.g., perceptual
features, namings, functions); Exploratory Acts (haptic manipulation for
feature acquisition/verification); gestures and demonstrations for
object/feature/action description; and reasoning patterns (e.g.,
justifications, analogies) for attributing a given characterization. The
wealth and content of the data make this corpus a one-of-a-kind resource for
the study and modeling of object affordances.
--------------------------------------------------------------------------------
Files in this package:

a. "CSRI-ReadMe-PLT_recordings_and_transcriptions.txt": the present
   explanation file.

b. "Experimental_Information.xls": spreadsheet with information on (i) the
   participant's assigned number and experiment (e.g., PN#_E#, where PN
   corresponds to the participant number and E to the experiment number),
   which serves as a guide to the corresponding video, audio, and
   transcription files; (ii) basic demographic information (e.g., gender,
   age); and (iii) the available data files for each participant, their size
   (in MB) and duration (in seconds), and potential problems with these
   files. These problems are mostly due to dropped frames in one of the
   cameras and, in some rare cases, missing files. The Excel file comprises
   three sheets, one per experiment (see the loading sketch in the
   illustrative examples below).

c. "Experiment_1.rar", "Experiment_2.rar", "Experiment_3.rar":
   experiment-specific compressed archives (.rar) that contain the video
   (.mp4), audio (.m4a), and transcription (.trs) files, organized by
   experiment and participant. For each participant, the archive includes the
   frontal (F) and profile (P) video recordings (e.g., PN1_E1_F, which refers
   to participant 1, experiment 1, frontal view), together with the
   corresponding audio and transcription files. The videos are further
   labelled according to the presentation condition: 'NH' when the object is
   shown in isolation, 'H' when the object is held by an agent, and 'T' when
   the actual, physical object is presented (e.g., PN1_E1_F_H.mp4, which
   refers to participant 1, experiment 1, frontal view, object held by an
   agent). A parsing sketch for these file names is given in the illustrative
   examples below.
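--------------------------------------------------------------------------------
Illustrative example: loading "Experimental_Information.xls"

The following Python sketch is not part of the dataset; it is only a minimal
illustration of how the spreadsheet described in item b above could be loaded.
No sheet or column names are assumed (all three sheets are read generically);
reading the legacy .xls format with pandas additionally requires the xlrd
package.

    import pandas as pd

    # Read all three sheets (one per experiment) into a dict of DataFrames,
    # keyed by sheet name; sheet_name=None avoids hard-coding the sheet names.
    sheets = pd.read_excel("Experimental_Information.xls", sheet_name=None)

    for name, table in sheets.items():
        # Each sheet lists the participant/experiment IDs (PN#_E#), basic
        # demographics, the available files with size (MB) and duration
        # (seconds), and any known problems (e.g., dropped frames).
        print(name, table.shape)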
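--------------------------------------------------------------------------------
Illustrative example: parsing the recording file names

The following Python sketch is not part of the dataset; it only illustrates the
video naming convention described in item c above. The regular expression and
the helper function are our own, and only the documented .mp4 names are
handled (whether the audio and transcription files follow exactly the same
scheme is not stated here, and the condition suffix is treated as optional).

    import re
    from pathlib import Path

    # Video names follow PN<participant>_E<experiment>_<view>_<condition>.mp4
    #   view:      F  = frontal camera, P = profile camera
    #   condition: NH = object in isolation, H = object held by an agent,
    #              T  = actual, physical object presented
    FILENAME_PATTERN = re.compile(
        r"^PN(?P<participant>\d+)_E(?P<experiment>\d+)"
        r"_(?P<view>[FP])(?:_(?P<condition>NH|H|T))?\.mp4$"
    )

    def parse_plt_video_name(path):
        """Return a dict describing a PLT video file, or None if the name
        does not follow the documented convention (hypothetical helper)."""
        match = FILENAME_PATTERN.match(Path(path).name)
        if match is None:
            return None
        info = match.groupdict()
        info["participant"] = int(info["participant"])
        info["experiment"] = int(info["experiment"])
        return info

    print(parse_plt_video_name("PN1_E1_F_H.mp4"))
    # {'participant': 1, 'experiment': 1, 'view': 'F', 'condition': 'H'}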
"Experimental_Information.xls": Spreadsheet with information on a) the participant’s assigned number and experiment (e.g., PN#_E#, where PN corresponds to the participant number and E to the experiment), which serves as a guide to the corresponding video, audio, and transcription files, b) basic demographic information (e.g., gender, age), and c) the available data files for each participant, details regarding their size (in MB) and duration (in secs), and potential problems with these files. These problems are mostly due to dropped frames in one of the cameras and in some rare cases missing files. The excel file is composed of three different sheets that correspond to the three different experiments conducted. c. "Experiment_1.rar", "Experiment_2.rar", "Experiment_3.rar": Experiment-specific compressed files (rar format) that comprise audiovisual videos (mp4), audio files (.m4a), and transcription files (.trs), organized by experiment and participant. Each participant file contains the frontal (F) and profile (P) video recordings (e.g., PN1_E1_F that refers to participant 1, experiment 1, frontal view) and the transcribed file along with the audio file. Also, the videos are labelled according to the condition with ‘NH’ when the object is in isolation, ‘H’ when the object is held by an agent, and ‘T’ when the actual, physical object is presented (e.g., PN1_E1_F_H.mp4 that refers to participant 1, experiment 1, frontal view, object held by an agent). -------------------------------------------------------------------------------- | COSMOROE Project Copyright (c) 2015 by | | Cognitive Systems Research Institute (CSRI). | | All rights reserved. | --------------------------------------------------------------------------------