Sonification of fMRI Data
Nik Sawe
Music 220C

Overview
• My PhD studies assess decision-making on environmental issues through neuroimaging
• Neural activation suggests underlying physiological bases for framing effects, heuristics, and affect (emotion), and for their impact on decision-making

How We Image the Brain
• Functional MRI allows us to take real-time pictures of the brain's response to stimuli
• Using headsets and hand input devices, we can present subjects with a wide range of tasks

The BOLD Signal
• BOLD: Blood Oxygenation Level-Dependent
• fMRI evaluates brain activity indirectly, by measuring changes in the local amount of oxygenated blood
• Complex regressions account for fluctuations due to heart rate, breathing, etc.
• Validity supported by optogenetic studies

Motivations for Sonification
• We can hear patterns of activation that would be less obvious through visualization of time courses
• We may be able to hear "conversations" between different brain regions that would be less obvious through traditional neuroimaging analyses
• Sonification offers an intuitive level of interpretation that may provide clues for further analytic techniques

Limitations of fMRI
• Poor temporal resolution: one pass through each brain region every 1-2 seconds (most often 2)
• For most study designs, many repeated trials in one subject are needed to get an accurate read

Translatable fMRI Outputs

Sonification Methodology
• Built in R from a simple initial formula (see the R sketch after this section):
– Pitch = 128 * (Xi - Xl) / (Xh - Xl)
– Velocity = 128 * Pi
– Xi: signal at timepoint i
– Xh: maximum signal
– Xl: minimum signal
– Pi: a given network's proportional contribution to the total signal strength of all sampled networks at timepoint i
• These values are used as MIDI pitch and velocity, then converted to a MIDI file via Java
• First trial used data from one subject in my first study (environmental philanthropy to save parks threatened with potentially destructive land-use development)
• Used three networks: attentional, visual, and default mode

Visual Cortex Quartet
• Final project: sampled from the visual cortex as the subject undergoes retinotopy

Sonification Methodology
• The program had several stages (sketched after this list):
– Scale converter: created an array of MIDI values based on the desired scale
– Instrument filter: selected valid (in-range) notes for a given instrument
– Signal-to-MIDI converter: gated signals below a threshold value (5%) and did not play them
• Velocity based on the relative prominence of a voxel's signal given other voxels' activity
• Duration based on an arbitrary equation: ((128 - note value) + velocity) / 20
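A minimal R sketch of the core mapping above (R being the language the pipeline was built in). The function and variable names (signal_to_midi, x, p) are hypothetical, and MIDI values are clamped to the valid 0-127 range, which the 128 * formula can exceed at the maximum signal.

  # Map one network's BOLD time course to MIDI pitch/velocity/duration.
  # x: numeric vector of signal values over time for one network
  # p: that network's proportional contribution at each timepoint (0-1)
  signal_to_midi <- function(x, p) {
    pitch    <- floor(128 * (x - min(x)) / (max(x) - min(x)))  # Pitch = 128 * (Xi - Xl) / (Xh - Xl)
    velocity <- floor(128 * p)                                 # Velocity = 128 * Pi
    pitch    <- pmin(pitch, 127)                               # clamp to MIDI's 0-127 range
    velocity <- pmin(velocity, 127)
    duration <- ((128 - pitch) + velocity) / 20                # the arbitrary duration rule above
    data.frame(pitch = pitch, velocity = velocity, duration = duration)
  }

Each row of the result corresponds to one ~2-second fMRI timepoint and can be handed to the downstream Java converter as a MIDI note event.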
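The scale-converter, instrument-filter, and gating stages might look like the following sketch. All names are hypothetical, the scale pattern shown is major, and applying the 5% gate to the min-max-normalized signal is an assumption about where the threshold was taken.

  # Scale converter: array of MIDI note numbers in a scale, ascending from a root.
  scale_notes <- function(root = 60, steps = c(2, 2, 1, 2, 2, 2, 1)) {
    offsets <- cumsum(rep(steps, 11))   # repeat the octave pattern up the MIDI range
    notes   <- c(root, root + offsets)
    notes[notes <= 127]
  }

  # Instrument filter: keep only scale notes inside an instrument's playable range.
  instrument_filter <- function(scale, low, high) scale[scale >= low & scale <= high]

  # Snap one raw pitch value to the nearest valid note.
  quantize <- function(pitch, scale) scale[which.min(abs(scale - pitch))]

  # Gate: timepoints below 5% of the normalized signal range are not played (NA).
  gate <- function(x, threshold = 0.05) {
    norm <- (x - min(x)) / (max(x) - min(x))
    ifelse(norm < threshold, NA, x)
  }

For a vector of pitches, the quantizer applies per note, e.g. sapply(pitches, quantize, scale = instrument_filter(scale_notes(60), 48, 84)).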
The Next Step
• Scan the whole brain while the subject watches a silent film
• May obtain complementary EEG data
• Will have PCA networks to work with, as well as a wealth of individual regions
• Signals do not all have to be mapped to pitch

Mapping Ideas
• Activity in each region's attendant PCA network helps define that region's duration and velocity, based on its relative contribution
• Talairach (spatial) coordinates define the surround-sound mapping (see the sketch at the end)

Mapping: Anterior Insula
• Handles "negative arousal": responses to physiologically as well as morally aversive stimuli
• Could control how discordant the note selection is in other regions

Mapping: Nucleus Accumbens
• Handles "positive arousal"/reward/approach behavior
• Could control weighting toward major scales
• May support a balancing equation of AI vs. NAcc (also sketched at the end)

Mapping: Amygdala
• Fear/apprehension/anxiety region
• Could control tempo, accelerating at tense moments
• Could control percussive elements
• Could trigger clusters

Mapping: Fusiform Gyrus
• Recognizes faces: triggering of voice samples?

Mapping: Parahippocampal Gyrus
• Spatial/landscape encoding
• Spatial manipulation of samples/Doppler?

Incorporation of EEG
• Since fMRI's temporal resolution is only one pass every ~2 seconds, faster variation is needed to decide the interleaving of notes between timepoints
• That interleaving could be driven by activity in relevant EEG signals

Thanks!
sawe@stanford.edu
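Two of the proposed mappings above, as hypothetical back-of-the-envelope sketches rather than implemented code; the +/-70 mm extent for Talairach x and the exact form of the balancing equation are assumptions.

  # Surround panning from Talairach x (left-negative to right-positive, in mm),
  # squashed into a pan position in [-1, 1]; roughly +/-70 mm spans the brain.
  pan_from_talairach <- function(x_mm) pmax(-1, pmin(1, x_mm / 70))

  # AI-vs-NAcc balancing equation: nacc and ai are nonnegative signal
  # magnitudes; the result lies in [-1, 1], with positive values weighting
  # note choice toward major scales and negative toward discordant selections.
  valence_balance <- function(nacc, ai) (nacc - ai) / (nacc + ai + 1e-9)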