V-4 Manipulating Flow States With Audio Delays: Early Results

Name: Oliver Durcan

School/Affiliation: Goldsmiths, University of London, London, UK; Fudan University, Shanghai, China

Co-Authors: Ozan Vardal², Aneta Sahely¹, Joydeep Bhattacharya¹, Manuel Anglada-Tort¹, Peter Holland¹

Virtual or In-person: Virtual

Abstract:

Music-making and performance often evoke flow states, characterised by fluent, effortless, and enjoyable actions. Despite this, identifying consistent neural correlates of flow remains challenging, calling for innovative research methods. Common approaches use varying task speeds to induce and compare flow and non-flow states, but this can introduce variance in motor and visual actions. Our ongoing study employs a novel method: manipulating audio feedback from piano keypresses to disrupt flow. By comparing instant vs. randomly delayed (0-350 ms) feedback during two piano tasks, we found substantial decreases in flow ratings (sight reading: d = 2.36; improvisation: d = 1.46) while engagement remained high, as indicated by consistently high Absorption scores. This method keeps the frequency of motor and visual actions similar between task conditions, thus reducing potential confounds in neural data. Next, we will analyse our EEG, MIDI, and Tetris data, which were collected in this study but are not discussed in this poster.
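The delay manipulation described above can be sketched in a few lines. Only the 0-350 ms delay range comes from the abstract; the uniform sampling distribution, the function name, and the per-keypress timing model are assumptions for illustration.

```python
import random

def feedback_onset_ms(keypress_time_ms, condition, max_delay_ms=350.0):
    """Return the audio feedback onset time for one piano keypress.

    In the "instant" condition, feedback plays at the moment of the
    keypress; in the "delayed" condition, a fresh delay is drawn per
    keypress from 0-350 ms (range from the abstract; uniform sampling
    is an assumption of this sketch).
    """
    if condition == "instant":
        return keypress_time_ms
    return keypress_time_ms + random.uniform(0.0, max_delay_ms)

# Illustrative usage: feedback onsets for three keypresses in each condition
instant = [feedback_onset_ms(t, "instant") for t in (0, 500, 1000)]
delayed = [feedback_onset_ms(t, "delayed") for t in (0, 500, 1000)]
```

Because each keypress gets an independent random delay, the performer cannot adapt to a fixed lag, while the number and timing of motor actions stays comparable across conditions.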
