As a Beatles fan, I’ve always been frustrated, even baffled, by the scarcity of quality footage of the group performing. We’re stuck with grainy, chaotic black-and-white snippets and barely audible sound. How is this possible? Consider this: the Beatles played the Hollywood Bowl twice—in the entertainment capital of the world—yet the only surviving video looks like a bad home movie. They were playing in Hollywood, with perhaps 10,000 idle movie cameras within a few square miles, and nobody properly filmed the biggest show-business act of the century?

For decades, we’ve been stuck listening to the same recordings, watching the same grainy footage, accepting the limitations of 1960s technology as just part of the experience. You wanted to hear the Beatles? You dealt with the hiss, the murky mix, the fact that sometimes you couldn’t quite make out what Ringo was doing back there. But artificial intelligence is changing all that in ways that would have seemed like science fiction just a few years ago. And it’s not just making things sound cleaner—it’s actually revealing music that was always there but impossible to hear, and bringing the Beatles back to life in ways that are both thrilling and a little unsettling. 🎵

The tension is real: Are we preserving history or rewriting it? Are we revealing what the Beatles actually sounded like, or creating something new that never existed? One thing’s certain—the Beatles, with their massive catalog and wildly varying recording quality, have become the perfect test subjects for what AI can do with musical archaeology. From the pristine studio recordings at Abbey Road to the muddy basement tapes and everything in between, there’s a lot of material to work with, and technology is transforming all of it.

This issue has been bubbling up for a while. Way back in 1995, long before we were thinking about AI, Paul McCartney had reservations about releasing alternate takes and demos that differed from the official recordings ultimately released on records. In an interview with Allan Kozinn for Beatlefan magazine, Paul said:

“… If we picked take 6, [that meant] we didn’t want takes 1 through 5 [released].”

This was in the context of discussing bootlegs and the Anthology project. McCartney was specifically worried that releasing alternate takes and demos could confuse listeners—especially those who didn’t grow up with the Beatles—about which version was the “finished” or “official” version of a song.

🎬 Get Back: Peter Jackson’s Game Changer

The first breakthrough came with Peter Jackson’s “Get Back” documentary in 2021. Jackson’s team used AI technology called MAL—Machine Assisted Learning—to do something that once seemed impossible: take the original mono recordings from the 1969 Let It Be sessions and separate them into individual tracks. Until then, everything—John’s guitar, Paul’s bass, George’s amp, Ringo’s drums, all the vocals—was captured on a single microphone. There was no multitrack recording, no way to isolate anything. It was all just one big sonic mess captured together. 📼

And today, with the proliferation of AI tools, even hobbyists are uploading startlingly enhanced footage of the Fab Four to YouTube, with groundbreaking visual and audio clarity.

And the progress isn’t going to stop, as AI learns to recognize each instrument’s unique sonic signature and pull the mix apart. Suddenly, you hear Paul’s bass line clearly without the drums drowning it out. You can isolate John’s vocal without the guitar bleeding through. It’s like having a time machine: going back and recording everything properly in the first place. The AI isn’t just cleaning things up; it’s fundamentally reconstructing everything, track by track, revealing details that have been buried in the mix for over fifty years.
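For the technically curious, the core idea behind de-mixing can be sketched in a few lines of Python. This toy example separates two synthetic tones with a simple frequency mask; real systems like Jackson’s MAL use trained neural networks that learn far subtler cues than pitch, so treat this as an illustration of the principle, not the actual technology.

```python
# Toy frequency-domain "de-mixing": isolate a low tone from a mono mix.
# Everything here is synthetic and illustrative, not a real separator.
import numpy as np

SR = 8000                        # sample rate in Hz
t = np.arange(SR) / SR           # one second of audio

bass = np.sin(2 * np.pi * 110 * t)    # stand-in for Paul's bass
guitar = np.sin(2 * np.pi * 880 * t)  # stand-in for George's guitar
mix = bass + guitar                   # one "single microphone" track

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / SR)

mask = freqs < 400                    # keep only the low frequencies
bass_only = np.fft.irfft(spectrum * mask, n=len(mix))

# The reconstruction should match the bass, not the guitar.
corr_bass = np.corrcoef(bass_only, bass)[0, 1]
corr_guitar = np.corrcoef(bass_only, guitar)[0, 1]
print(round(corr_bass, 2), round(abs(corr_guitar), 2))
```

A real recording doesn’t have such neatly separated frequencies, of course—a neural de-mixer has to learn timbre, attack, and context, which is exactly why this only became possible recently.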

Jackson’s work didn’t just make for a better documentary. It made possible something nobody thought would ever happen: a new Beatles song in 2023, featuring all four Beatles, including John Lennon, who’d been dead for over forty years. “Now and Then” wouldn’t exist without AI, and we’ll get to that story in a minute. But first, let’s talk about what else AI is doing to Beatles recordings.

📼 Video Restoration and Enhancement

The visual side of this AI revolution is dramatic. Old Beatles footage—and there’s a lot of it—was shot on everything from pristine 35mm film to grainy 16mm to whatever cheap cameras could capture them playing in Hamburg clubs. For years, fans dealt with blurry, jumpy, washed-out footage because that’s all there was. But AI upscaling is transforming this material in shocking ways. 🎥

Modern AI can take old footage and upscale it to 4K resolution, adding detail that seems to appear out of nowhere. It’s not just making the image bigger—it’s intelligently filling in missing information based on what it’s learned from analyzing millions of images. The results can be startling: you can suddenly see the texture of Paul’s jacket, the individual strings on George’s guitar, the sweat on their faces during a performance. Early Ed Sullivan Show appearances that looked like they were shot through cheesecloth now look like they could have been filmed yesterday.
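To appreciate the difference, here’s the naive, non-AI baseline in Python: nearest-neighbour upscaling just repeats each pixel, so the image gets bigger but no sharper. AI super-resolution models instead predict plausible new detail learned from millions of images; this snippet exists only for contrast, on a tiny synthetic “frame.”

```python
# Nearest-neighbour 2x upscaling: bigger, but no new detail.
# The 2x2 "frame" is synthetic; real footage would be a full image array.
import numpy as np

frame = np.array([[10, 20],
                  [30, 40]], dtype=np.uint8)   # tiny grayscale frame

# Repeat every pixel into a 2x2 block, doubling both dimensions.
upscaled = np.kron(frame, np.ones((2, 2), dtype=np.uint8))

print(upscaled.shape)
```

Each original pixel simply becomes a 2×2 block of the same value—which is why old DVD upscales look blocky, and why learned super-resolution was such a leap.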

Colorization is another tool in the kit. Black and white footage of the Beatles can now be automatically colorized with surprising accuracy—the AI has learned what colors things should be, from skin tones to the specific shade of a Gretsch guitar. And frame rate adjustment makes old footage that was shot at 24 or 25 frames per second look smooth and natural when bumped up to modern standards. The jerky, old-timey quality disappears, and suddenly the Beatles look less like ancient historical figures and more like a band you could see playing tonight.
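The mechanics of frame-rate conversion can also be sketched simply. The toy function below doubles a clip’s frame rate by inserting a blended frame between each pair of originals; production AI interpolators estimate motion between frames rather than just averaging, which is why their results look smooth instead of ghosted. Everything here is synthetic and illustrative.

```python
# Toy frame-rate doubling by linear blending (no motion estimation).
import numpy as np

def double_frame_rate(frames):
    """Insert a blended in-between frame after each original frame."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        out.append((a.astype(float) + b) / 2)  # naive average of neighbors
    out.append(frames[-1])
    return out

# Three synthetic 2x2 frames standing in for a 24 fps clip.
clip_24fps = [np.full((2, 2), v, dtype=np.uint8) for v in (0, 100, 200)]
clip_48fps = double_frame_rate(clip_24fps)
print(len(clip_24fps), "->", len(clip_48fps))
```

Averaging neighboring frames blurs anything that moves; learned interpolators trace where each pixel travels between frames, which is what makes modern restorations of 1960s footage look natural rather than smeared.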

🎸 Audio De-mixing and Remixing

On the audio side, Giles Martin—George Martin’s son, who’s become the keeper of the Beatles’ sonic flame—has been using AI to create new remixes of classic albums that would have been technically impossible before. The problem he’s dealing with is that the Beatles recorded most of their groundbreaking work on four-track tape machines. That means multiple instruments were often recorded together on the same track out of necessity. You couldn’t just turn up George’s guitar in the mix because it was permanently married to the tambourine and maybe a backing vocal. 🎛️

But AI de-mixing technology can now separate instruments that were recorded together, analyzing the waveforms and learning to distinguish between different sounds occupying the same track. This is how Giles Martin created the 2022 remix of “Revolver”—widely considered one of the most experimental and important Beatles albums, but also one that sounded pretty murky in its original mix. Using AI to separate the instruments, Martin could finally give each element its own space in a modern Dolby Atmos mix. Suddenly you could hear the tambourine shaking in one corner, the guitar in another, Paul’s bass finally getting the prominence it deserved.

The Super Deluxe editions of Beatles albums that have been coming out—each one with new remixes, outtakes, and bonus material—are only possible because of this technology. It’s not about making the Beatles sound “modern” in the sense of slapping Auto-Tune on John’s vocals. It’s about revealing what they actually played, giving you the ability to hear each musician’s contribution clearly for the first time. For serious Beatles fans, this is revelatory stuff. You’ve heard these songs a thousand times, but you’ve never really heard them like this.


🎤 The “Now and Then” Breakthrough

“Now and Then” was released in November 2023 and billed as “the last Beatles song.” The story goes back to the late 1970s, when John Lennon recorded a demo at home on a cheap cassette player, just him and a piano, singing a song he was working on. After his death, Yoko gave the tape to the remaining Beatles during their Anthology sessions in the mid-1990s. They tried to work with it, but the piano was so loud and so tangled up with John’s vocal that they couldn’t separate them. They gave up. 🎹

Fast forward to 2022, and the same AI technology Peter Jackson had used on “Get Back” finally made it possible. The AI could analyze John’s voice, learn its characteristics, and extract it from the recording while removing the piano entirely. What they ended up with was John’s vocal, crystal clear, as if he’d recorded it in a professional studio instead of on a cassette machine in his apartment. Paul and Ringo could then add their parts—Paul on bass and piano, Ringo on drums—and even incorporate guitar parts George had recorded before his death in 2001.

The result was genuinely moving: all four Beatles playing together on a new song, decades after it should have been possible. But it also raised uncomfortable questions. Is this what John would have wanted? He recorded a rough demo, not a finished song. Would he have even wanted it released? The AI made technical wizardry possible, but it couldn’t answer the ethical questions. Some fans loved it; others felt like it crossed a line, that we were putting words in a dead man’s mouth—or at least putting his voice where he hadn’t intended it to go.

🤔 The Controversy: Enhancement vs. Authenticity

This gets to the heart of the debate around all this AI enhancement: Are we preserving the Beatles or changing them into something they never were? The purists make a solid argument. They say the Beatles recorded their albums with specific limitations and worked within those constraints creatively. The murky mix on “Revolver” wasn’t a mistake—it was what was possible at the time, and the Beatles made creative decisions based on that reality. When you “fix” these recordings, you’re not revealing some hidden truth; you’re creating a version that never existed. 🤔

There’s also the slippery-slope concern. Right now we’re using AI to clean up existing recordings and separate tracks that were always there. But what’s to stop someone from using AI to create entirely new Beatles songs from scratch? Deepfake technology can already convincingly mimic voices. You could theoretically generate “new” John Lennon vocals singing lyrics he never wrote, or create “lost” Beatles performances that never happened. At what point does enhancement become fabrication?

On the other hand, the pro-enhancement crowd argues that these technologies are revealing what was always there, not inventing something new. When you separate John’s guitar from Paul’s bass on a track where they were recorded together, you’re not creating new music—you’re finally hearing clearly what they actually played. The performances are authentic; the technology is just removing the technical limitations that obscured them. And for something like “Now and Then,” they’d argue that the surviving Beatles themselves tried to complete it in the 1990s but couldn’t because the technology didn’t exist yet. AI just finished what they wanted to do.

It’s worth noting that Giles Martin and the Beatles’ camp have been pretty careful about where they draw the line. They’re using AI as an archaeological tool, not as a creative partner. Nobody’s asking AI to write new melodies or generate fake performances. The rule seems to be: if the Beatles played it or recorded it, AI can help us hear it better. But AI shouldn’t create Beatles material that never existed in any form.

🔮 What’s Next?

So what else could AI do with Beatles recordings? There’s plenty of material out there that’s been considered too degraded or too badly recorded to release. The Hamburg tapes from their residency at the Star Club in 1962 exist, but the recording quality is so poor that even hardcore fans find them hard to listen to. Could AI reconstruction make them listenable? Could we finally hear those legendary early performances in anything approaching decent quality? 🎸

There’s also “Carnival of Light,” a 14-minute experimental piece the Beatles recorded in 1967 that’s never been officially released. Paul has the tape, but it’s never been deemed releasable, partly because it’s such a chaotic, avant-garde piece and partly because the recording quality is rough. Could AI clean it up enough to finally justify releasing it?

And what about the Rooftop Concert? We have the film and audio, and it’s been released multiple times, but could AI enhancement give us an even better version? Could it reconstruct crowd noise more accurately, separate the instruments more cleanly, maybe even enhance the video quality to make it look like it was shot last week instead of in 1969?

The technical possibilities are almost limitless. The question is whether they should all be pursued. Just because we can do something doesn’t mean we should.

💭 The Complex Legacy

The Beatles are in some ways the perfect test case because there’s so much material, so much fan interest, and so much variation in recording quality. What we learn from AI-enhancing Beatles recordings will inform how we approach the entire history of recorded music. Do we enhance everything? Do we leave some things alone as time capsules? Who gets to decide?

For now, the approach seems reasonable: use AI to reveal what’s there, not to create what isn’t. Clean up the muck, separate the instruments, restore the video, but don’t generate fake performances or manufacture new material. Use technology to serve the music, not to replace it.

The Beatles themselves were technological innovators who pushed the boundaries of what was possible in the studio. They’d probably be fascinated by what AI can do—and maybe a little worried about where it could lead. But that’s always been the bargain with new technology: it gives us new possibilities and new responsibilities. We get to decide how to use it. 🎼



Get full access to Beatles Rewind at beatlesrewind.substack.com/subscribe