This project defines a framework for the analysis of the level of trust in a traffic environment involving an automated vehicle. The jsPsych framework is used for the frontend. In the description below, it is assumed that the repo is stored in the folder trust-crowdsourced. Terminal commands below assume macOS.
Tested with Python 3.9.12. To set up the environment, run the following two commands in the parent folder of the downloaded repository (replace / with \ and possibly add --user if on Windows):
pip install -e trust-crowdsourced will set up the project as a package accessible in the environment.
pip install -r trust-crowdsourced/requirements.txt will install the required packages.
Configuration of the project needs to be defined in trust-crowdsourced/config. Please use the default.config file for the required structure. If no custom config file is provided, default.config is used. The config file has the following parameters (a loading sketch follows the list):
- appen_job: ID of the appen job.
- num_stimuli: number of stimuli in the study.
- num_stimuli_participant: subset of stimuli in the study shown to each participant.
- allowed_min_time: the cut-off for minimal time of participation used for filtering.
- num_repeat: number of times each stimulus is repeated.
- kp_resolution: bin size in ms in which data is stored.
- allowed_stimulus_wrong_duration: if the percentage of videos with abnormal length is above this value, the participant is excluded from the analysis.
- allowed_mistakes_signs: number of allowed mistakes in the questions about traffic signs.
- sign_answers: answers to the questions on traffic signs.
- mask_id: number for masking worker IDs in appen data.
- files_heroku: files with data from heroku.
- file_appen: file with data from appen.
- file_cheaters: CSV file with cheaters for flagging.
- path_source: path with source files for the stimuli from the Unity3D project.
- path_stimuli: path containing all videos included in the survey.
- mapping_stimuli: CSV file that contains all data found in the videos.
- plotly_template: template used to make graphs in the analysis.
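As an illustration of how these parameters might be consumed by the analysis code, below is a minimal Python sketch for loading the config. It assumes a JSON-style key-value format and the file locations named in the comments; both are assumptions, not the project's documented format.

```python
import json
from pathlib import Path

def load_config(repo="trust-crowdsourced"):
    """Hypothetical loader: assumes a JSON-style key-value config file.
    The real format and location of default.config may differ."""
    custom = Path(repo) / "config"            # custom config, location assumed
    default = Path(repo) / "default.config"   # fallback, location assumed
    path = custom if custom.is_file() else default
    with open(path) as f:
        return json.load(f)

config = load_config()
# Parameter names are taken from the list above.
print(config["num_stimuli"], config["num_stimuli_participant"], config["kp_resolution"])
```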
The source files of the video stimuli are output from Unity to config.path_source. To prepare them for the crowdsourced setup, run python trust-crowdsourced/preparation/process_videos.py. Videos will be output to config.path_stimuli.
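As a hedged illustration, a small sanity check one could run after processing is sketched below; the .mp4 extension and the glob pattern are assumptions about the naming of the processed files, not the script's actual output format.

```python
from pathlib import Path

def check_stimuli(path_stimuli, num_stimuli):
    """Verify that the expected number of stimulus videos was produced."""
    videos = sorted(Path(path_stimuli).glob("*.mp4"))  # extension is an assumption
    if len(videos) != num_stimuli:
        print(f"warning: found {len(videos)} videos, expected {num_stimuli}")
    return videos

# Example call with values from the loaded config (see the loading sketch above).
# check_stimuli(config["path_stimuli"], config["num_stimuli"])
```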
Check that you are indeed in the parent folder when running the command pip install -e trust-crowdsourced. This command will not work from inside the folder containing the repo.
For the analysis, plots of the AOI and KP data were made for two groups: all participants and lab-only participants (see the sketch after the list of plots below).
Plots of AOI data for Video_0 through Video_20, all participants.
Plots of AOI data for Video_0 through Video_20, lab-only participants.
Plots of KP data for Video_0 through Video_20, all participants.
Plots of KP data for Video_0 through Video_20, lab-only participants.
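For illustration, a minimal plotly sketch of a KP-style time-series plot. The data are synthetic, the interpretation of KP as a binned keypress percentage is an assumption, and plotly_white is used as a stand-in for config.plotly_template.

```python
import plotly.graph_objects as go

# Synthetic keypress percentages per time bin for a single video.
kp_resolution = 500  # bin size in ms, mirroring the kp_resolution parameter
bins = list(range(0, 10000, kp_resolution))
kp_percentage = [min(100.0, 5 + 0.008 * t) for t in bins]  # made-up values

fig = go.Figure(go.Scatter(x=bins, y=kp_percentage, mode="lines", name="Video_0"))
fig.update_layout(template="plotly_white",  # stand-in for config.plotly_template
                  xaxis_title="Time (ms)",
                  yaxis_title="Participants pressing the key (%)")
fig.show()
```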