Unfinished Symphony To Swan Song: What The Future May Hold

Keeping up with technological developments can sometimes feel impossible, as the changes arrive bolder and faster than ever before. For those of us past a certain age, living in 2024 often feels like watching childhood sci-fi become reality, and it brings feats as well as quandaries.

When Tupac’s hologram “performed” at Coachella in 2012, it was talked about for weeks – we re-watched it and discussed it around the proverbial water cooler time and again. Looking back, it’s astonishing just how many other technological developments have arrived in the decade since, and the relentless pace at which these creations keep coming.

Get Back To The Future

The 2021 Peter Jackson documentary The Beatles: Get Back utilised de-mix technology, meaning that the musical parts could be isolated, rebuilt, and edited in high quality with modern-day digital methods – an overall effect that hit like a person living in 1955 hearing Johnny B. Goode for the first time. By the end of 2023, the documentary team and the wizards at Abbey Road Studios had achieved the unlikely task of creating an all-new Beatles track – taking as a starting point a rough vintage demo recording of John’s vocals, adding George’s guitar parts from a 1995 session, and having Paul, Ringo, and an orchestral string ensemble record in the present day. Bearing in mind that Lennon’s demo was a 1978 tape recording of vocal and piano, it’s quite the leap to hear the 21st-century final track, Now and Then. With a creation process that spanned five decades, the emergence of this technology meant that the group could turn the “Unfinished Symphony” into a Swan Song.

Paul McCartney spoke about the decision to go ahead with the track in the mini documentary that accompanied the song’s release, saying:

“George and Ringo came down to my studio. Nice day. Fabulous day,” recalls McCartney of the ’95 reunion. “We listened to the track. There’s John in his apartment in New York City, banging away at his piano, doing a little demo. Is it something we shouldn’t do? Every time I thought like that, I thought, ‘Wait a minute. Let’s say I had a chance to ask John: Hey John, would you like us to finish this last song of yours? I’m telling you, I know the answer would’ve been: Yeah!’ He would’ve loved that.”

Just a few short months after the release of Now and Then, the long-awaited Logic Pro 11 included the new “Stem Splitter” feature, bringing this de-mix technology into the portable home studios of the world. The accessibility, low cost, and ease of use of such an advanced feature are astonishing, and it makes me wonder what possibilities lie ahead in the months and years to come.
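For the curious, the core idea behind de-mixing can be sketched in a few lines of Python. Modern stem splitters use trained neural networks to estimate a "mask" over the mixture's spectrum and apply it to pull each part out; Logic's implementation is proprietary, so the toy sketch below only illustrates the underlying principle – two artificial tones are mixed, then recovered with a simple hand-picked frequency mask (the sample rate, tone frequencies, and 400 Hz cutoff are all purely illustrative):

```python
import numpy as np

# Toy illustration of frequency-domain source separation: mix a low
# "bass" tone with a high "vocal" tone, then recover each one by
# masking the mixture's spectrum and inverting the transform.

SR = 8000                               # sample rate in Hz (illustrative)
t = np.arange(SR) / SR                  # one second of time samples
bass = np.sin(2 * np.pi * 110 * t)      # low source: 110 Hz tone
vocal = np.sin(2 * np.pi * 880 * t)     # high source: 880 Hz tone
mix = bass + vocal                      # the "recording" we want to de-mix

spectrum = np.fft.rfft(mix)                     # mixture's spectrum
freqs = np.fft.rfftfreq(len(mix), 1 / SR)       # frequency of each bin

# Binary mask: bins below 400 Hz are "bass", the rest are "vocal".
bass_est = np.fft.irfft(np.where(freqs < 400, spectrum, 0))
vocal_est = np.fft.irfft(np.where(freqs >= 400, spectrum, 0))
```

In this clean two-tone case the recovered signals match the originals almost exactly; real music is far harder because instruments overlap heavily in frequency, which is why actual stem splitters replace the hand-picked cutoff with a mask learned from thousands of multitrack recordings.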

Creatives And Computers

There have been many famous “Unfinished Symphonies” completed by others. Mozart’s Requiem remains shrouded in suspicion as to how much his faithful assistant Franz Xaver Süssmayr may have contributed to it, while the Queen album Made in Heaven was completed by the remaining three band members following Freddie Mercury’s passing. In the literary world, Eoin Colfer authored And Another Thing…, the sixth and final installment of The Hitchhiker’s Guide to the Galaxy, with the blessing of Douglas Adams’ widow Jane Belson, while David Lagercrantz did the same with Stieg Larsson’s Millennium series.

While there’s no doubt that these well-loved creations were crafted in honour and admiration, we are currently living in times that pose the question of exactly where the line lies between a homage from a friend or superfan and something more ethically ambiguous. Last year, YouTube announced the upcoming launch of its new text-to-music creation Dream Track – an AI voice and music cloning tool that will create music for YouTube Shorts “in the style” of collaborating artists including John Legend, Alec Benjamin, Charlie Puth, and Charli XCX. The technology comes from Lyria, a music generation model by Google DeepMind: users simply choose one of the artists and enter a prompt, and the result is a 30-second track with lyrics in an AI-generated voice, along with music, all in the style of the chosen artist.

Looking at how quickly de-mix technology hit the shelves, I wonder how far away we are from being able to create entire albums in the style of our favourite artists with just a few clicks from the couch. And just how easy will it be to hijack this technology and apply it to all artists and music, whether they have partnered or opted in or not? Are we looking at a day, pretty soon, when it will be possible to prompt the technology to provide us with a new “Beatles” track, singing about our exact situation in the style of our choosing, and then repeat the process ad infinitum?

Individual use of this technology admittedly sounds intriguing. However, if altered and computer-generated images of figures such as Marilyn Monroe and Albert Einstein can freely be used in advertising campaigns in the present day, what are the implications for other uses of creative works in the “style” of an artist, but which are not officially created or owned by anybody?

From the era of Tupac’s resurgence to our current deepfake confusion, it’s becoming harder to decipher what is real anymore – so will we soon hear the musical equivalent with the advent of programs such as Dream Track? And if I’m so inclined, and decide to make enough tweaks and changes to my generated “Beatles” song to make it my own, record it, and release it – did its creation truly come from The Beatles, the program or company behind it, or from me?

Looking Ahead

Experts in the technology field advise caution across the board when it comes to the use of new developments, as would be expected. One such expert, Ray Kurzweil, author of The Singularity Is Nearer, says: “Exponential growth in technology means we must prepare for changes beyond our current imagination.” I appreciate his choice of words, as the definition of imagination is always the most perplexing part of any discussion of the creative process, and is the frequent focus of current debates over generated content. Everyone from the ancient Greeks to the modern day has theorised on what the heck imagination actually is, what defines genius and originality, and even whether supernatural external forces exist and give people a hand.

Perhaps looking simply at the similarities between the way machine learning and the human brain both work with information is a good enough starting point. Our creative processing tools are certainly similar to computers in the way they are an amalgam of our retained knowledge, influences, preferences, and output intentions, the difference being they are merely wrapped in a human bow of neuroses and emotion. Many have argued that there is no such thing as true originality, and perhaps it’s fair to say the ancient philosophical dilemma has simply modernised and gone digital. There’s undoubtedly a cycle of human imagination broadening when technology provides us with more capabilities, and this spiralling dance of expansion is what Kurzweil has predicted for years – leading to the point of singularity he speaks of when the technology eventually surpasses us.

While the future is filled with potential that my mind cannot comprehend, it’s clear we are standing on the shoulders of giants, with easy access to more information and tools than ever before. Documentarian Peter Jackson has hinted that he has more footage tucked away, meaning there could be further unheard real Beatles songs to come, and of course, there are the infinite possibilities of whatever music cloning and generative tools lie ahead. It’s an exciting time to observe and be a part of, and I for one am optimistic about expanding the limits of our current capabilities.
