[ Morphogenic Resonance ]
Electroacoustic Composition with AI
Title: Morphogenic Resonance
Duration: 12’23’’
54-channel multichannel composition
»Morphogenic Resonance« is not just a musical composition; it is a sonic exploration of whether sounds can self-organize within a given system or space. Simulations of Physarum (a genus of slime mold) and deep learning algorithms run in parallel. They converge, diverge, or ignore one another, oscillating within the system. This local oscillation creates unique sound patterns: harmonized feedback, smearing, and displacement.
Physarum's simulation data is transformed into sound. It controls oscillation, modulation, and the nonlinear musical behavior of each phase, creating patterns of spatialization.
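As a purely illustrative sketch of this kind of mapping (not the piece's actual implementation), one might drive oscillator parameters from a toy Physarum-style agent simulation. Here, each agent drifts toward the swarm's mean position as a crude stand-in for chemoattractant trails, and the resulting positions are mapped to frequency and pan; all function names and parameter choices are hypothetical.

```python
import random

def simulate_physarum(n_agents=64, steps=100, seed=0):
    """Toy Physarum-like swarm on a 1-D strip [0, 1].
    Each agent is nudged toward the swarm's mean position (a crude
    proxy for trail-following) plus a small random walk.
    Returns a list of per-step position frames."""
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(n_agents)]
    frames = []
    for _ in range(steps):
        mean = sum(pos) / len(pos)
        pos = [min(1.0, max(0.0, p + 0.02 * (mean - p) + rng.uniform(-0.01, 0.01)))
               for p in pos]
        frames.append(list(pos))
    return frames

def map_to_sound(frame, f_lo=110.0, f_hi=880.0):
    """One possible sonification mapping: agent position drives an
    oscillator frequency in [f_lo, f_hi] Hz and a pan in [-1, 1]."""
    return [(f_lo + p * (f_hi - f_lo), 2.0 * p - 1.0) for p in frame]

frames = simulate_physarum()
voices = map_to_sound(frames[-1])  # (frequency, pan) per agent
```

In a multichannel setting, the pan value would generalize to a position in a 54-channel speaker layout, and modulation depth could be driven by local agent density rather than position alone.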
Human voices, artificial hardware sounds, and sounds reminiscent of ghosts and extraterrestrials are used to train an artificial intelligence, exploring a new auditory domain and unfolding possible connections between humans and non-humans, objects and organisms, and reality and non-reality. The piece intertwines machine algorithms with sound and uses the principles of an organism's behavior to expand the possibilities of self-organizing sound and the organicity of sound in space.
Residency & Performance @ZKM, Karlsruhe, DE
Performance Photo by Dominck Kautz
More info on ZKM