Art2Mus: Bridging Visual Arts and Music Through Cross-Modal Generation
Ivan Rinaldi, Nicola Fanelli, Giovanna Castellano, Gennaro Vessio
2025-01-01
Abstract
Artificial Intelligence and generative models have revolutionized music creation, with many models leveraging textual or visual prompts for guidance. However, existing image-to-music models are limited to simple images, lacking the capability to generate music from complex digitized artworks. To address this gap, we introduce Art2Mus, a novel model designed to create music from digitized artworks or text inputs. Art2Mus extends the AudioLDM 2 architecture, a text-to-audio model, and employs our newly curated datasets, created via ImageBind, which pair digitized artworks with music. Experimental results demonstrate that Art2Mus can generate music that resonates with the input stimuli. These findings suggest promising applications in multimedia art, interactive installations, and AI-driven creative tools. The code is publicly available at: https://github.com/justivanr/art2mus_.
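The abstract's dataset-curation step — pairing digitized artworks with music via ImageBind — can be sketched as follows. This is a minimal illustration, assuming image and audio embeddings have already been extracted into a shared space (as ImageBind provides); the nearest-neighbour cosine-similarity rule and the function name are hypothetical simplifications, not the paper's exact procedure:

```python
import numpy as np

def pair_artworks_with_music(art_embs: np.ndarray, music_embs: np.ndarray) -> np.ndarray:
    """For each artwork embedding, return the index of the most similar
    music embedding by cosine similarity.

    Illustrative pairing rule only: the actual Art2Mus curation criterion
    may differ. Inputs are (n_art, d) and (n_music, d) arrays of embeddings
    from a shared cross-modal space (e.g. ImageBind)."""
    a = art_embs / np.linalg.norm(art_embs, axis=1, keepdims=True)
    m = music_embs / np.linalg.norm(music_embs, axis=1, keepdims=True)
    sims = a @ m.T  # cosine-similarity matrix of shape (n_art, n_music)
    return sims.argmax(axis=1)

# Toy example: 2 artworks and 3 music tracks in a shared 4-d space
art = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0]])
music = np.array([[0.9, 0.1, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.1, 0.9, 0.0, 0.0]])
print(pair_artworks_with_music(art, music))  # → [0 2]
```

Each artwork is matched to the music track whose embedding points in the most similar direction; in the toy example, artwork 0 pairs with track 0 and artwork 1 with track 2.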


