Summary: AI is changing the way music is heard, and made
Original author and publication date: Amnol Saxena – April 10, 2020
Futurizonte Editor’s Note: Anthropologists believe music is the oldest form of communication among humans, older than words. Perhaps a new way of communication is now emerging.
From the article:
It is raining outside. You are in your bed, cuddled up with your favorite book and listening to your favorite music.
There is a high probability that the music you are listening to was recommended by your music streaming application, and that it perfectly suits the weather outside and your current activity (reading).
While music tech companies – like Tencent-backed Joox, QQ Music, KKBox, and others – seem to have different value propositions, from the range of regional music they offer listeners to their monetization models, they all sing the same song today.
And that is the song of AI.
Artificial Intelligence has gained wide popularity in the music tech industry in recent years. The rise in the uptake of AI in core music streaming technology comes down to some obvious reasons, and some other not-so-obvious ones.
AI Augments Listeners’ Experiences Through Personalized Playlists:
Each artist has a personality of their own, presented through their music – while some loved the jazzy nature of Louis Armstrong, others melted every time they heard Elvis Presley sing one of his love songs. Some headbanged to The Beatles, while others swayed to The Doors.
Music streaming app companies like Joox, QQ Music and KuGou have been using AI to analyze the preferences of their listeners and recommend specially curated playlists for personalized customer experience.
By using AI-based “recommendation engines”, the music streaming applications analyze the existing history of the listeners and recommend new songs.
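The article does not describe how these recommendation engines work internally; a common baseline is collaborative filtering, which compares a listener's play history against other listeners and suggests songs the most similar listener enjoys. The sketch below illustrates that idea with a toy cosine-similarity recommender; the listener names and play counts are invented for illustration, not data from any real service.

```python
from math import sqrt

# Hypothetical play counts per listener across five songs (illustrative only).
history = {
    "alice": [12, 0, 5, 0, 3],
    "bob":   [10, 1, 4, 0, 2],
    "carol": [0, 8, 0, 9, 0],
}

def cosine(a, b):
    """Cosine similarity between two play-count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(user, history):
    """Suggest song indices the most similar listener plays but `user` has not."""
    others = {u: v for u, v in history.items() if u != user}
    nearest = max(others, key=lambda u: cosine(history[user], history[u]))
    return [i for i, (mine, theirs) in
            enumerate(zip(history[user], history[nearest])) if mine == 0 and theirs > 0]

print(recommend("alice", history))  # alice's tastes match bob's, so she gets song 1
```

Production systems layer far more on top of this – audio features, context such as time of day or weather, and learned embeddings – but the similarity-based core is the same.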
While AI is used to provide recommendations today, in my opinion there is a great possibility that the music streaming industry will try to offer features that read body vitals – heart rate, stress levels, breathing rate, maybe even neurological signals – from wearable devices, delivering biometric- and physiology-based music.
Imagine you are traveling in a crowded metro. The rush to reach the office and the press of people make you anxious. The tiny wearable over your ear may detect your anxiety and offer to play music from your favorite artist, but in a softer, calmer melody.
A feedback mechanism may autonomously track how this softer melody affects your vitals, refining the music further to deliver more curative results.
There is a possibility that AI will be able to vary a song’s melody, genre, tonal quality, harmonic rhythm, and more to suit your body vitals – essentially trying to “heal” you.
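The feedback mechanism described above is essentially a closed control loop: read a vital sign, nudge the music toward a calming target, and repeat. A minimal sketch of that loop is below; the function name, thresholds, and simulated heart-rate readings are all illustrative assumptions, not a real wearable or streaming API.

```python
# Hypothetical closed-loop sketch: nudge playback tempo toward a calming
# target as a (simulated) heart-rate reading changes.

RESTING_BPM = 70   # assumed "calm" heart rate
TEMPO_STEP = 0.02  # fraction by which tempo is adjusted per reading

def adjust_tempo(current_tempo, heart_rate):
    """Lower tempo while heart rate is elevated; restore it once the listener calms."""
    if heart_rate > RESTING_BPM + 15:   # anxious: slow the track down
        return max(0.8, current_tempo - TEMPO_STEP)
    if heart_rate < RESTING_BPM + 5:    # calm: drift back toward normal speed
        return min(1.0, current_tempo + TEMPO_STEP)
    return current_tempo                # in-between: hold steady

tempo = 1.0
for bpm in [95, 92, 90, 80, 72, 68]:    # simulated wearable readings over time
    tempo = adjust_tempo(tempo, bpm)
print(round(tempo, 2))
```

A real system would vary richer musical dimensions than tempo – the melody, genre, and harmonic rhythm the paragraph above mentions – but the read-adjust-repeat loop is the core pattern.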