Guest Lecture - An Emergent Art Movement: AI & Music in XR
For this joint VR class on Building Immersive Experiences, with students from MIT, Berklee College of Music and Harvard, each student was given a VR headset. Ryan Groves coordinated the creation of a custom VR world where the students met for the lecture, and speakers Ryan Groves, Roman Rappak and Dan Franke discussed their work in AI music generation for XR, live music in XR, and VR animation for music experiences.
In a keynote lecture for CCRMA, the Stanford music technology group, Ryan highlighted the musical components that make up the broad topic of automatic music composition.
Given his background in computational music theory, he emphasized the importance of building and validating machine-learning models for specific musical tasks, then leveraging those models to create artificially intelligent compositional agents capable of carrying out the entire music creation process.
SXSW Panel: The Future of Live Music, Blended Reality
Organized and moderated by Ryan Groves. XR offers a huge opportunity to reinvent the shared musical experience. Platforms have tried new approaches to live music, either by streaming concerts in VR or by creating VR-exclusive events. Most approaches, however, either fail to replicate the experience of an in-person event or explicitly exclude a co-located audience. Certain groups are now creating new music experiences using a mix of VR, AR and live venues.
Our panel consists of experts at the intersection of Music and VR. Anne McKinnon is an XR consultant, advisor and writer, focused on immersive events. Eric Wagliardo founded &Pull and created Pharos AR, a collaboration with Childish Gambino. Roman Rappak is the lead creative of Miro Shot, an XR band/collective. Ryan Groves is a music technologist and founder of Arcona.ai.
The Impact of AI on Music Creation
Organized and moderated by Ryan Groves. Artificial Intelligence is advancing at an exceptional rate, continually redefining the set of activities previously achievable only by humans. Indeed, even the creative industries have been affected. But the dialogue about AI in creative work doesn't have to be one of conflict and replacement.
Music technologists are now using AI to extend what humans can do musically - to foster collaboration, to create rapid musical prototypes, and to enable new modes of music consumption. Two pioneers in this field are the companies Landr and Melodrive. Landr uses AI to automatically master musical tracks, and also enables users to collaborate on, share and promote their music. Melodrive is creating an AI that composes - and re-composes - music, so that it can be truly adaptive.
The Next Uncanny Valley: Interaction in XR
Organized and moderated by Ryan Groves. Visual technologies have come a very long way in accurately simulating 3D environments and human appearance in controlled situations, and the so-called "Uncanny Valley" has traditionally centered on visual resemblance. With the rise of AI and broader access to interactive technology, a new uncanny valley is emerging around human-like characters and interactions - one where the focus is on narrative rather than visuals.
This creates an opportunity to redefine how people engage and interact with machines, and with each other. This new paradigm will not only require new tools, like storyboarding in VR (Galatea) and emotion-driven music (Melodrive), but also new approaches to promote introspective interactions (Where Thoughts Go), and new methodologies for the performance and direction of this new theatrical medium (Fiona Rene).
This work won the Best Paper award overall; see Awards for more information.
At the 2016 conference of the International Society for Music Information Retrieval (ISMIR), Ryan Groves presented his work on automatically reducing melodies using machine-learning techniques borrowed from Natural Language Processing (NLP), titled "Automatic Melodic Reduction Using a Supervised Probabilistic Context-Free Grammar".
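The core idea of treating melodic analysis as probabilistic parsing can be illustrated with a toy Viterbi (CKY) parser for a probabilistic context-free grammar. This is a minimal sketch only: the grammar, note symbols, and probabilities below are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of Viterbi (CKY) parsing with a toy PCFG over notes.
# Grammar and probabilities are hypothetical, for illustration only.
import math
from collections import defaultdict

# Binary rules in Chomsky normal form: (parent, left, right) -> probability.
rules = {
    ("Phrase", "Note", "Phrase"): 0.4,
    ("Phrase", "Note", "Note"): 0.6,
}
# Terminal rules: (nonterminal, terminal) -> probability.
lexicon = {
    ("Note", "C4"): 0.5,
    ("Note", "E4"): 0.3,
    ("Note", "G4"): 0.2,
}

def viterbi_parse(tokens):
    """Return best log-probability for each (start, end, symbol) span."""
    n = len(tokens)
    best = defaultdict(lambda: float("-inf"))
    back = {}
    # Width-1 spans: apply terminal rules.
    for i, tok in enumerate(tokens):
        for (nt, term), p in lexicon.items():
            if term == tok:
                best[(i, i + 1, nt)] = math.log(p)
    # Wider spans: try every split point and every binary rule.
    for width in range(2, n + 1):
        for i in range(0, n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (parent, left, right), p in rules.items():
                    score = (math.log(p)
                             + best[(i, k, left)]
                             + best[(k, j, right)])
                    if score > best[(i, j, parent)]:
                        best[(i, j, parent)] = score
                        back[(i, j, parent)] = (k, left, right)
    return best, back

best, back = viterbi_parse(["C4", "E4", "G4"])
print(best[(0, 3, "Phrase")])  # log-probability of the best parse (about -4.93)
```

In the supervised setting described by the paper's title, the rule probabilities would be estimated from annotated analyses rather than hand-set, and the backpointers (`back`) would be followed to recover the reduction tree.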
Participant, Sync Panel (2020)
Guest Lecture, Adaptive Music in Gaming (2017)
Panel Participant, Music & VR (2019)
Tutorial, Creative Applications of Music and Audio Research
Lectures: Python for Machine Learning; Algorithms (2019-)
Invited Talk, Building AI: A Systematic Approach to Music Data Problems
Internal Talk to ML Team, Applying Computational Linguistics to Music Theory Analysis
Internal Talk to ML Team, Computational Music Theory with Probabilistic Grammars