December 7 @ Room 27, San Diego Convention Center
Contact: aiformusicworkshop@gmail.com
Register here! (A “Workshops & Competitions” pass is required to attend the workshop.)
This workshop explores the dynamic intersection of AI and music, a rapidly evolving field where creativity meets computation. Music is one of the most universal and emotionally resonant forms of human expression. Producing, understanding, and processing music presents unique challenges for machine learning due to its creative, expressive, subjective, and interactive nature. According to the IFPI Global Music Report 2025 published by the International Federation of the Phonographic Industry (IFPI), “AI will be one of the defining issues of our time and record companies have embraced its potential to enhance artist creativity and develop new and exciting fan experiences.” AI has a tremendous impact on all aspects of music, from composition and production to performance, distribution, and education. Recent years have also seen rapidly growing interest in AI music research among the machine learning community. In this first NeurIPS workshop dedicated to music since 2011, we want to bring together the music and AI communities to facilitate a timely, interdisciplinary conversation on the status and future of AI for music.
The goal of this workshop is twofold: First, we aim to explore the latest advancements in AI’s applications to music, from analysis, creation, performance, production, and retrieval to music education and therapy. Second, we aim to discuss the impacts and implications of AI in music, including its effects on the music industry, the musician community, and music education, as well as the ethical, legal, and societal implications of AI music and what AI means for future musicians. We will emphasize networking and community building in this workshop to generate sustainable research momentum.
The workshop will feature invited talks, contributed spotlight presentations, a poster and demo session, a panel discussion, and round table discussions. We have invited six speakers from diverse backgrounds who will bring interdisciplinary perspectives to the audience. We will solicit original 4-page papers from the community. We will also call for demos accompanied by 2-page extended abstracts to accommodate the various formats AI music innovations may take.
8:00 - 8:10 | Welcoming Remarks |
8:10 - 8:40 | Invited Talk by Chris Donahue (CMU & Google DeepMind): The Expanding Horizons of Generative Music AI Research |
8:40 - 9:10 | Invited Talk by Shlomo Dubnov (UC San Diego): Music Co-creativity with AI |
9:10 - 10:10 | ☕Coffee Break + Posters & Demos I |
10:10 - 10:40 | Lightning Talks |
10:40 - 11:10 | Awards & Spotlight Presentations |
11:10 - 11:40 | Invited Talk by Ilaria Manco (Google DeepMind): Real-time Music Generation: Lowering Latency and Increasing Control |
11:40 - 12:10 | Invited Talk by Akira Maezawa (Yamaha): Assisting Music Performance through AI |
12:10 - 1:10 | 🍴Lunch Break (🥪box lunch provided) |
1:10 - 2:00 | Posters & Demos II |
2:00 - 2:30 | Invited Talk by Anna Huang (MIT): In Search of Human-AI Resonance |
2:30 - 3:00 | Invited Talk by Julian McAuley (UC San Diego): Recommendation and Personalization for Music |
3:00 - 4:00 | ☕Coffee Break + Posters & Demos III |
4:00 - 4:50 | Panel Discussion (with invited speakers) |
4:50 - 5:00 | Closing Remarks |
Chris Donahue is an Assistant Professor in the Computer Science Department at Carnegie Mellon University and a part-time research scientist at Google DeepMind. His research goal is to develop and responsibly deploy generative AI for music and creativity, thereby unlocking and augmenting human creative potential. His research has been featured in live performances by professional musicians such as The Flaming Lips, and his website Beat Sage enables hundreds of daily users to convert their favorite music into interactive content.
Shlomo Dubnov is a Professor in the Music Department and an Affiliate Professor in Computer Science and Engineering at the University of California San Diego. He is also the Director of the Center for Research in Entertainment and Learning at the Qualcomm Institute. He is best known for his research on poly-spectral analysis of musical timbre and for inventing the method of Music Information Dynamics, with applications in computer audition and machine improvisation.
Ilaria Manco is a Research Scientist at Google DeepMind. She works on music understanding and generative models for interactive, real-time music creation. Her PhD dissertation focused on developing multimodal deep learning methods to enable music understanding by learning representations from audio and language.
Akira Maezawa is a Principal Senior Engineer at Yamaha Corporation, where he leads the music informatics applications (MINA) Lab based in Yokohama. He works on AI music ensemble technology that allows AI to perform together with people, and on technology that predicts musical metadata – such as beats – from musical acoustic signals. He also interacts with customers at experience-based installations and holds discussions at universities and with artists to see how people actually use these technologies in the real world.
Anna Huang is an Assistant Professor at the Massachusetts Institute of Technology, with a shared position between Electrical Engineering and Computer Science (EECS) and Music and Theater Arts (MTA). She is the creator of the ML model Coconet that powered Google’s first AI Doodle, the Bach Doodle. In two days, Coconet harmonized 55 million melodies from users around the world. In 2018, she created Music Transformer, a breakthrough in generating music with long-term structure, and the first successful adaptation of the transformer architecture to music. She was a Canada CIFAR AI Chair at Mila.
Julian McAuley is a Professor in the Department of Computer Science and Engineering at the University of California San Diego. He works on applications of machine learning to problems involving personalization and teaches classes on personalized recommendation. He also leads the MUSAIC team at UCSD. His lab has made significant contributions to machine music understanding, symbolic music processing, music generation, NLP for music, and audiovisual learning.
Hao-Wen (Herman) Dong is an Assistant Professor in the Department of Performing Arts Technology at the University of Michigan. Herman’s research aims to augment human creativity with machine learning. He develops human-centered generative AI technology that can be integrated into professional creative workflows, with a focus on music, audio, and video content creation. His long-term goal is to lower the barrier to entry and democratize professional content creation for everyone. Herman received his PhD in Computer Science from the University of California San Diego, where he worked with Julian McAuley and Taylor Berg-Kirkpatrick. His research has been recognized by the UCSD CSE Doctoral Award for Excellence in Research, KAUST Rising Stars in AI, UChicago and UCSD Rising Stars in Data Science, ICASSP Rising Stars in Signal Processing, and the UCSD GPSA Interdisciplinary Research Award.
Zachary Novack is a PhD student in the Computer Science and Engineering department at the University of California San Diego, advised by Dr. Julian McAuley and Dr. Taylor Berg-Kirkpatrick. His research focuses on controllable and efficient music/audio generation, as well as audio reasoning in LLMs. His long-term goal is to design bespoke creative tools for musicians and everyday users alike, with adaptive control and real-time interaction, and he collaborates with top industry labs such as Adobe Research, Stability AI, and Sony AI. Zachary’s work has been recognized at numerous top-tier AI conferences, including DITTO (ICML 2024 Oral), Presto! (ICLR 2025 Spotlight), and CoLLAP (ICASSP 2025 Oral). Outside of academia, Zachary is active in the Southern California marching arts community, working as an educator for the 11-time world-class finalist percussion ensemble POW Percussion.
Yung-Hsiang Lu is a Professor in the Elmore Family School of Electrical and Computer Engineering at Purdue University. He is a Fellow of the IEEE and a Distinguished Scientist of the ACM. Yung-Hsiang has published papers on computer vision and machine learning in venues such as AI Magazine, Nature Machine Intelligence, and Computer. He is one of the editors of the book “Low-Power Computer Vision: Improve the Efficiency of Artificial Intelligence” (ISBN 9780367744700, Chapman & Hall, 2022).
Kristen Yeon-Ji Yun is a Clinical Associate Professor in the Department of Music at the Patti and Rusty Rueff School of Design, Art, and Performance at Purdue University. She is the Principal Investigator of the research project “Artificial Intelligence Technology for Future Music Performers” (US National Science Foundation, IIS 2326198). Kristen is an active soloist, chamber musician, musical scholar, and clinician. She has toured many countries, including Malaysia, Thailand, Germany, Mexico, Japan, China, Hong Kong, Spain, France, Italy, Taiwan, and South Korea, giving a series of successful concerts and master classes.
Benjamin Shiue-Hal Chou is a PhD student in Electrical and Computer Engineering at Purdue University, advised by Dr. Yung-Hsiang Lu. His research focuses on music performance error detection and the design of multimodal architectures. He is the lead author of Detecting Music Performance Errors with Transformers (AAAI 2025) and a co-author of Token Turing Machines are Efficient Vision Models (WACV 2025). Benjamin is the graduate mentor for the Purdue AIM (AI for Musicians) group and has helped organize the Artificial Intelligence for Music workshop at both AAAI 2025 and ICME 2025. He is currently interning at Reality Defender, where he works on audio deepfake detection.
(In alphabetical order)
Erfun Ackley, Julia Barnett, Luca Bindini, Kaj Bostrom, Brandon James Carone, Yunkee Chae, Ke Chen, Xi Chen, Manuel Cherep, Joann Ching, Eunjin Choi, Woosung Choi, Benjamin Shiue-Hal Chou, Annie Chu, Charis Cochran, Frank Cwitkowitz, SeungHeon Doh, Chris Donahue, Hao-Wen Dong, Shlomo Dubnov, Victoria Ebert, Hugo Flores García, Ross Greer, Timothy Greer, Jiarui Hai, Wen-Yi Hsiao, Jiawen Huang, Jingyue Huang, Manh Pham Hung, Yun-Ning Hung, Tatsuro Inaba, Purvish Jajal, Dasaem Jeong, Molly Jones, Kexin Phyllis Ju, Tornike Karchkhadze, Haven Kim, Jinju Kim, Shinae Kim, Soo Yong Kim, Yewon Kim, Yonghyun Kim, Junyoung Koh, Zhifeng Kong, Junghyun Koo, Luca A Lanzendörfer, Jongpil Lee, Junwon Lee, Wei-Jaw Lee, Wo Jae Lee, Bochen Li, Xueyan Li, Duoduo Liao, Jia-Wei Liao, Brian Lindgren, Jeng-Yue Liu, Phillip Long, Yung-Hsiang Lu, Yin-Jyun Luo, Mingchen Ma, Akira Maezawa, Ilaria Manco, Jiawen Mao, Atharva Mehta, Giovana Morais, Oriol Nieto, Stephen Ni-Hahn, Zachary Novack, Patrick O’Reilly, Seungryeol Paik, Ting-Yu Pan, Eleonora Ristori, Iran R Roman, Sherry Ruan, Koichi Saito, Rebecca Salganik, Sridharan Sankaran, Jiatong Shi, Nikhil Singh, Christian J. Steinmetz, Li Su, Chih-Pin Tan, Jiaye Tan, John Thickstun, James Townsend, Fang-Duo Tsai, Teng Tu, Ziyu Wang, Christopher W. White, Shih-Lun Wu, Yusong Wu, Weihan Xu, Xin Xu, Yujia Yan, Chao Peter Yang, Guang Yang, Qihui Yang, Yen-Tung Yeh, Jayeon Yi, Chin-Yun Yu, Kristen Yeon-Ji Yun, Yongyi Zang, Fan Zhang, Wenxin Zhang, Yixiao Zhang, You Zhang, Jingwei Zhao, Ge Zhu
Acceptance rate: 69% (74/108) | Paper: 71% (61/85) | Demo: 61% (14/23)
MGE-LDM: Joint Latent Diffusion for Simultaneous Music Generation and Source Extraction
Yunkee Chae, Kyogu Lee
Soundtrack Retrieval for Film Production
Bill Wang, Haven Kim, Leduo Chen, Minje Kim, Julian McAuley
Midi-LLM: Adapting Large Language Models for Text-to-MIDI Music Generation
Shih-Lun Wu, Yoon Kim, Cheng-Zhi Anna Huang
Advancing Multi-Instrument Music Transcription: Results from the 2025 AMT Challenge
Ojas Chaturvedi, Kayshav Bhardwaj, Tanay Gondil, Benjamin Shiue-Hal Chou, Yujia Yan, Kristen Yeon-Ji Yun, Yung-Hsiang Lu
Semitone-Aware Fourier Encoding: A Music-Structured Approach to Audio-Text Alignment
Chengze Du, JinYang Zhang, Wenxin Zhang
Zero-shot Geometry-Aware Diffusion Guidance for Music Restoration
Jia-Wei Liao, Pin-Chi Pan, Li-Xuan Peng, Sheng-Ping Yang, Yen-Tung Yeh, Cheng-Fu Chou, Yi-Hsuan Yang
AIBA: Attention-based Instrument Band Alignment for Text-to-Audio Diffusion
Junyoung Koh, Soo Yong Kim, Gyu Hyeong Choi, Yongwon Choi
My Music My Choice: Adversarial Protection Against Vocal Cloning in Songs
Ilke Demir, Gerald Pena Vargas, Alicia Unterreiner, David Ponce, Umur A. Ciftci
PANDORA: Diffusion Policy Learning for Dexterous Robotics Piano Playing with a Train-only LLM Expressiveness Reward
Yanjia Huang, Renjie Li, Zhengzhong Tu
Rhythmic Stability and Synchronization in Multi-Track Music Generation
Anonymous
Audio-to-Audio Schrodinger Bridges
Kevin J. Shih, Zhifeng Kong, Weili Nie, Arash Vahdat, Sang-gil Lee, Joao Felipe Santos, Ante Jukić, Rafael Valle, Bryan Catanzaro
Hierarchical Tokenization of Multimodal Music Data for LLM-Compatible Semantic Representations
Wo Jae Lee, Emanuele Coviello, Rifat Joyee, Sudev Mukherjee
AMBISONIC-DML: Higher-Order Ambisonic Music Dataset for Spatial AI Generation
Seungryeol Paik, Kyogu Lee
Video-to-Music Generation for Film Production: A Dataset and Framework
Haven Kim, Leduo Chen, Bill Wang, Hao-Wen Dong, Julian McAuley
Segment-Factorized Full-Song Generation on Symbolic Piano Music
Ping-Yi Chen, Chih-Pin Tan, Yi-Hsuan Yang
When Creative Machines Learn from Each Other
Haven Kim, Yusong Wu, Taylor Berg-Kirkpatrick, Julian McAuley
FlashFoley: Fast Interactive Sketch2Audio Generation
Zachary Novack, Koichi Saito, Zhi Zhong, Takashi Shibuya, Shuyang Cui, Julian McAuley, Taylor Berg-Kirkpatrick, Christian Simon, Shusuke Takahashi, Yuki Mitsufuji
Generative Multi-modal Feedback for Singing Voice Synthesis Evaluation
Xueyan Li, Yuxin Wang, Mengjie Jiang, Qingzi Zhu, Jing Zhang, Zoey Kim, Yazhe Niu
Bias beyond Borders: Global Inequalities in AI-Generated Music
Ahmet Solak, Florian Grötschla, Luca A Lanzendörfer, Roger Wattenhofer
Enhancing Text-to-Music Generation through Retrieval-Augmented Prompt Rewrite
Meiying Ding, Brian McFee, Chenkai Hu, Sunny Yang, Juhua Huang
The Ghost in the Keys: A Disklavier Demo for Human-AI Musical Co-Creativity
Louis Bradshaw, Alexander Spangher, Stella Biderman, Simon Colton
Persian Musical Instruments Classification Using Polyphonic Data Augmentation
Diba Hadi Esfangereh, Mohammad Hossein Sameti, Sepehr Harfi Moridani, Leili Javidpour, Mahdieh Soleymani Baghshah
Discovering Interpretable Concepts in Large Generative Music Models
Nikhil Singh, Manuel Cherep, Patricia Maes
Linear RNNs for autoregressive generation of long music samples
Konrad Szewczyk, Daniel Gallo Fernández, James Townsend
Music to Video Matching Based on Beats and Tempo
Aleksandr Mikheev, Ilya Makarov
Multi-bit Audio Watermarking for Music
Luca A Lanzendörfer, Kyle Fearne, Florian Grötschla, Roger Wattenhofer
LEGATO: Large-scale End-to-end Generalizable Approach to Typeset OMR
Guang Yang, Victoria Ebert, Nazif Can Tamer, Luiza Amador Pozzobon, Noah A. Smith
MuCPT: Music-related Natural Language Model Continued Pretraining
Kai Tian, Yirong Mao, Wendong Bi, Hanjie Wang, Que Wenhui
Composer Vector: Style-steering Symbolic Music Generation in a Latent Space
Xunyi Jiang, Xin Xu
Perceptually Aligning Representations of Music via Noise-Augmented Autoencoders
Mathias Rose Bjare, Giorgia Cantisani, Marco Pasini, Stefan Lattner, Gerhard Widmer
SepACap: Source Separation for A Cappella Music
Luca A Lanzendörfer, Constantin Pinkl, Florian Grötschla, Roger Wattenhofer
Ethics Statements in AI Music Papers: The Effective and the Ineffective
Julia Barnett, Patrick O’Reilly, Jason Smith, Annie Chu, Bryan Pardo
Beyond Collaborative Filtering: Using Decoders for Personalized Music Recommendation
Timothy Greer, Nicholas Capel, Emanuele Coviello, Amina Shabbeer
Do Joint Language-Audio Embeddings Encode Perceptual Timbre Semantics?
Qixin Deng, Bryan Pardo, Thrasyvoulos N Pappas
CLAM: Safeguarding Authenticity and Addressing Implications for the Music Industry
Arnesh Batra, Krish Thukral, Naman Batra, Dev Sharma, Ruhani Bhatia, Aditya Gautam
MusicSem: A Semantically Rich Language-Audio Dataset of Organic Musical Discourse
Rebecca Salganik, Teng Tu, Fei-Yueh Chen, Xiaohao Liu, Kaifeng Lu, Ethan Luvisia, Zhiyao Duan, Guillaume Salha-Galvan, Anson Kahng, Yunshan Ma, Jian Kang
Chord-conditioned Melody and Bass Generation
Alexandra C Salem, Mohammad Shokri, Johanna Devaney
Why Do Music Models Plagiarize? A Motif-Centric Perspective
Tatsuro Inaba, Kentaro Inui
StylePitcher: Generating Style-Following, Expressive Pitch Curves for Versatile Singing Tasks
Jingyue Huang, Qihui Yang, Fei-Yueh Chen, Randal Leistikow, Yongyi Zang
Embedding Alignment in Code Generation for Audio
Sam Kouteili, Hiren Madhu, George Typaldos, Mark Paul Santolucito
Using a Joint-Embedding Predictive Architecture for Symbolic Music Understanding
Rafik Hachana, Bader Rasheed
Robust Neural Audio Fingerprinting using Music Foundation Models
Shubhr Singh, Kiran Bhat, Benjamin Resnick, Xavier Riley, John Thickstun, Walter De Brouwer
DAWZY: A New Addition to AI-powered “Human in the Loop” Music Co-creation
Aaron C Elkins, Sanchit Singh, Adrian Kieback, Sawyer Blankenship, Uyiosa Philip Amadasun, Aman Chadha
A Loopy Framework and Tool for Real-time Human-AI Music Collaboration
Sageev Oore, Finlay Miller, Chandramouli Shama Sastry, Sri Harsha Dumpala, Marvin F. da Silva, Daniel Oore, Scott C. Lowe
MusPyExpress: Extending MusPy with Enhanced Expression Text Support
Phillip Long, Hao-Wen Dong, Julian McAuley, Zachary Novack
Adapting Speech Language Model to Singing Voice Synthesis
Yiwen Zhao, Jiatong Shi, Jinchuan Tian, Yuxun Tang, Jiarui Hai, Jionghao Han, Shinji Watanabe
The Name-Free Gap: Policy-Aware Stylistic Control in Music Generation
Ashwin Nagarajan, Hao-Wen Dong
Learning Music Style Through Cross-Modal Bootstrapping
Jingwei Zhao, Ziyu Wang, Gus Xia, Ye Wang
The Proxy Trap, the Capability Mismatch, and the Circular Fallacy: A Critical Analysis of the Music Captioning Task in the Era of MLLMs
Yixiao Zhang
Who Gets Heard? Rethinking Fairness in AI for Music Systems
Atharva Mehta, Shivam Chauhan, Megha Sharma, Gus Xia, Kaustuv Kanti Ganguli, Nishanth Chandran, Zeerak Talat, Monojit Choudhury
Membership and Dataset Inference Attacks on Large Audio Generative Models
Jakub Proboszcz, Paweł Kochański, Karol Korszun, Katarzyna Stankiewicz, Donato Crisostomi, Giorgio Strano, Emanuele Rodolà, Kamil Deja, Jan Dubiński
From Generation to Attribution: Music AI Agent Architectures for the Post-Streaming Era
Wonil Kim, Hyeongseok Wi, Seungsoon Park, Taejun Kim, Sangeun Keum, Keunhyoung Kim, Taewan Kim, Jongmin Jung, Taehyoung Kim, Gaetan Guerrero, Mael Le Goff, Julie Po, Dongjoo Moon, Juhan Nam, Jongpil Lee
Asura’s Harp: Direct Latent Control of Neural Sound
Kaj Bostrom
Conversational Music Recommendation with LLM Tool Calling
SeungHeon Doh, Keunwoo Choi, Juhan Nam
No Encore: Unlearning as Opt-Out in Music Generation
Jinju Kim, Taehan Kim, Abdul Waheed, Jong Hwan Ko, Rita Singh
Generating Piano Music with Transformers: A Comparative Study of Scale, Data and Metrics
Ábel Ilyés-Kun, Jonathan Lehmkuhl, Nico Bremes, Cemhan Kaan Özaltan, Frederik Muthers, Jiayi Yuan, Christian Schmidt, Karim Abou Zeid
BNMusic: Blending Environmental Noises into Personalized Music
Chi Zuo, Martin B. Møller, Pablo Martínez-Nuevo, Huayang Huang, Yu Wu, Ye Zhu
Evaluating Multimodal Large Language Models on Core Music Perception Tasks
Brandon James Carone, Iran R Roman, Pablo Ripollés
ACappellaSet: A Multilingual A Cappella Dataset for Source Separation and AI-assisted Rehearsal Tools
Ting-Yu Pan, Kexin Phyllis Ju, Hao-Wen Dong
Leveraging Diffusion Models For Predominant Instrument Recognition
Charis Cochran, Yeongheon Lee, Youngmoo Kim
Slimmable NAM: Neural Amp Models with adjustable runtime computational cost
Steven Atkinson
Towards AI Rapper: Creating an Interactive Rap Battle Experience with Generative AI
Nikita Kozodoi, Elizaveta Zinovyeva, Zainab Afolabi, Egor Krashenninikov
Enhancing Text-to-Music Generation through Retrieval-Augmented Prompt Rewrite Demo
Meiying Ding, Brian McFee, Sunny Yang, Chenkai Hu, Juhua Huang
Prompt-Based Music Discovery: A Prototype Using Source Separation And LLMs
Vansh Chugh
LyricLens: An Interactive System for Multi-Label Music Content Rating
Kai-Yu Lu, Malhar Sham Ghogare, Zihan Su, Shanu Sushmita
AI Harmonica: A Smart Electronic Harmonica for Music Learning and Co-Creativity
Sherry Ruan
HARP 3.0: Generalizing I/O and API Support for Machine Learning in Digital Audio Workstations
Frank Cwitkowitz, Christodoulos Benetatos, Qixin Deng, Huiran Yu, Nathan Pruyne, Patrick O’Reilly, Hugo Flores García, Zhiyao Duan, Bryan Pardo
E-Motion Baton: Human-in-the-Loop Music Generation via Expression and Gesture
Mingchen Ma, Stephen Ni-Hahn, Simon Mak, Yue Jiang, Cynthia Rudin
DAWZY: Human-in-the-Loop Natural-Language Control of REAPER
Aaron C Elkins, Sanchit Singh, Sawyer Blankenship, Adrian Kieback, Uyiosa Philip Amadasun, Aman Chadha
Immersive AI-Assisted Music Composition and Performance in Virtual and Mixed Reality
Strong Bear
EVxRAVE: Incorporating Neural Synthesis in an Augmented String Instrument Platform
Brian Lindgren
Robust Personalized Human-AI Collaboration with SmartLooper
Sageev Oore, Finlay Miller, Chandramouli Shama Sastry, Sri Harsha Dumpala, Marvin F. da Silva, Daniel Oore, Scott C. Lowe
Demonstrating Singing accompaniment capabilities for MuseControlLite
Fang-Duo Tsai, Yi-Hsuan Yang
Mozart AI: Browser-Based AI Music Co-Production
Pascual Merita, Petr Ivan, Immanuel Rajadurai, Sundar Arvind, Arjun Khanna
We invite contributions from the community on relevant topics of AI and music, broadly defined. This workshop is non-archival. Accepted papers will be posted on the workshop website but will not be published or archived. We welcome work that is under review or to be submitted to other venues.
Submission portal: openreview.net/group?id=NeurIPS.cc/2025/Workshop/AI4Music
We call for original 4-page papers (excluding references) on relevant topics of AI and music, broadly defined. We encourage and welcome papers discussing initial concepts, early results, and promising directions. All accepted papers will be presented in the poster and demo session. A small number of papers will be selected for 10-min spotlight presentations. The review process will be double-blind.
We call for demos of novel AI music tools and artistic work. Each demo submission will be accompanied by a 2-page extended abstract (excluding references). We encourage the authors to submit an optional short video recording (no longer than 10 min) as supplementary materials. The selected demos will each be assigned a poster board in the poster and demo session. The goal of the demo session is to accommodate the various forms that novel AI music innovations may take, and thus we will adopt a single-blind review process.
The following due dates apply to both paper and demo submissions:
Topics of interest include, but are not limited to:
Please format your paper using the NeurIPS 2025 LaTeX template, and set the workshop title with the following command:
\workshoptitle{AI for Music}
For submission, set the options as follows (use dblblindworkshop for double-blind paper submissions and sglblindworkshop for single-blind demo submissions):
\usepackage[dblblindworkshop]{neurips_2025}
\usepackage[sglblindworkshop]{neurips_2025}
For camera-ready, add the final option:
\usepackage[dblblindworkshop,final]{neurips_2025}
\usepackage[sglblindworkshop,final]{neurips_2025}
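For reference, a minimal preamble for a double-blind paper submission might look like the sketch below. This is only an illustration: the title is a placeholder, and the exact placement of \workshoptitle may differ slightly depending on the template you download.
\documentclass{article}
% Paper submissions are double-blind; a demo submission would use the
% sglblindworkshop option instead, and camera-ready versions add "final".
\usepackage[dblblindworkshop]{neurips_2025}
\workshoptitle{AI for Music}
\title{Your Paper Title}
\begin{document}
\maketitle
% Main content goes here (up to 4 pages, excluding references).
\end{document}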
Note that you do NOT need to include the NeurIPS paper checklist. You may include technical appendices (after the references in the main PDF file) and supplementary material (as a ZIP file up to 100 MB). However, it is up to the reviewers whether they choose to read them.
For demo submissions, you may provide a short video recording (10 minutes maximum) to support your work. We recommend providing a YouTube link (or equivalent) to the video recording; please make sure its visibility is set to public or unlisted. Alternatively, though less preferred, you may upload the video recording as an MP4 file (100 MB maximum).