
In Search of Logged Time


It’s the late 19th century in France. Sandwiched between the invention of the telephone and the first aircraft, the Lumière brothers are hard at work in their Lyon studio, tinkering with a device that would, within a few years, become the first all-in-one cinema camera, on which one could both shoot and view film. Five hundred kilometers northwest of the brothers, Marcel Proust is a young boy in Illiers, spending the summer holiday with his aunt. It is here, presumably in the tranquility of the afternoon, that he would eat madeleines dipped in lime-flower tea. Proust famously detailed this moment, decades later, in his magnum opus, In Search of Lost Time. Both events, the birth of the literary madeleine and the Lumière invention, share the same legacy: the rigorous and painstaking task of capturing a memory. Yet in the digital age, both the value of memories and our means of accessing them grow increasingly tenuous, as information on the internet is deleted at random while the internet itself projects an illusion of omnipresence.

More than a century after Proust’s childhood, in a different part of the world, Susan Sontag wrote, “The problem is not that people remember through photographs but that they remember only the photographs. This remembering through photographs eclipses other forms of understanding—and remembering.” Sontag was writing in the early 2000s about war photography. But in the wider world, camera phones from Kyocera had recently been released, and the result was a buzzing conversation about a new lifestyle. Suddenly, technology shifted from sturdy analog to ethereal pixels floating in a mysterious digital cyberspace; and, more important for most people, it now seemed that one could remember everything.

Cut to the present. It’s June 2024, and Google Photos, in its daily onslaught of “Remember This Day?”, shows me memories of my summers as a teenager. Over on Instagram, “On This Day” archives show that five years ago, I was eating avocado toast in London. The moments these photographs recall have no active hold in my memory. Still, when I see them—presented as definitive proof, helpfully supplemented with my own captions—my narrative of the present shifts to accommodate the information. The catchphrase that defined a generation, “pics or it didn’t happen,” has produced a reality in which every moment is micro-remembered. Nothing can be brought back decades later—with a sense of complete Proustian “ecstasy,” the fruit of the labor of spontaneous and intensive remembering—because nothing can peacefully fade away.

On one hand, the internet we knew is rapidly disappearing and taking large parts of us with it. On the other, its absence is met with a flood of content that is nonhuman, often incorrect, and ever-increasing. As more information vanishes from the internet—be it photographs of dogs we posted on a Facebook account that Meta deleted, or an online game world that no longer runs on current code—a piece of history is lost. The misinformation that can stem from generative AI’s hallucinations is the flip side of a complete lack of records. When generative AI tools like those created by OpenAI fail to understand a prompt—or are unable to find information to answer the question—they fill in the gaps with whatever seems most plausible (according to their mysterious, secretive inner workings). Not only does this produce misinformation that falls on the user to verify, but it also becomes source material for other AI models, creating a feedback loop of false memories presented as fact.

It’s hard, perhaps even impossible, for Gen Z to feel the “all-powerful joy” of Proust’s narrator as a dormant memory resurfaces through the rigorous labor of trying to remember. Now the past is a scrollable archive optimized for our pleasure, a shift that further obscures the possibilities of Sontag’s “other forms of understanding—and remembering.” If Proust were writing today, his narrator would not struggle to conjure the memory. There would be no “I drink a second mouthful, in which I find nothing more than in the first … It is time to stop; the potion is losing its magic.” The “potion” would lead him to his phone. He would, one can imagine, open an app and find the photo of the original moment—now made into an AI slideshow for passive viewing. It’s a simpler route—one that transforms thought into the physical action of scrolling—but it forgoes the pleasures of the mental gymnastics one must endure in order to remember.


Information has never been more easily accessible, but this convenience comes at a cost: most of the internet is already a sourceless, opinion-filled cache, shoving content at users who lack the bandwidth or knowledge to verify sources accurately. Media literacy is an increasingly important subject, one most Gen Alpha students will have to learn. Organizations like Media Literacy Now and online education platforms like Crash Course offer extensive coaching on identifying misinformation on the internet, finding credible sources, and doing rigorous, manual fact-checking.

But all this may be for naught, because, in an effort to provide easy answers to users, the internet has made it genuinely harder and more time-consuming to find valid information. This becomes a problem when it’s coupled with the illusion of ease. You can see the photograph of the day you met someone, but remembering the event beyond the image is a tougher task. It’s Sontag’s microscopic view on steroids. The internet seems vast and ever-expanding, producing the same exoticism and exploratory lens as a war photograph. In reality, the frame is tightened, context is actively removed, and the information gleaned seems somehow false.

The early internet was characterized as an interactive, ever-updating archive of words, sounds, and images. It was assumed that these carefully built digital spaces would last forever, and that our interactions had the same quality as handwritten letters: a way for future historians to understand us at a particular moment of technological shift. At the same time, companies like Sony, Google, and Yahoo pushed users to think of their sites as personal data stores, free of charge and forever accessible. Thousands of people followed suit, storing their memories on servers that promised protection, only for those servers to eventually shut down without warning. Books were digitized and no longer had to be carefully preserved, contributing to the closure of libraries that couldn’t afford to meet ebook demand, and streamers like Netflix came to dominate the film industry—pushing theaters into ever-shorter box office windows, and DVDs toward extinction.

But innovation comes hand in hand with obsolescence. As the technology behind the internet kept evolving, servers couldn’t keep up.

Now, the carefully curated caches of our digital histories—and, therefore, almost all of our histories—face an existential threat. The creators of internet content—that is, us—believe we own our digital material, whether it’s a blog started at age 15 or a carefully backed-up Google Drive. That notion is proving to be a lie. The “digital dark age” is a term popularized in 2013 among archivists, who noticed that much of Web 2.0—the space that has characterized the internet from the 2000s until now—faces complete obsolescence. Link rot (dead URLs) and bit rot (corrupted data) have metastasized across blog servers, video players, and chat forums. In 2019, 50 million tracks from 12 million artists on MySpace disappeared. This year, Christopher Nolan and Guillermo del Toro warned film buffs to own DVDs as an archival source in a world where you don’t own many physical things, let alone the films you watch on streamers.

Behind the immaterial language of data storage—the “cloud,” the “ether”—lie physical servers at the edges of metropolises that require vast amounts of energy, manual labor, and extensive updating. It’s an expensive job that can never be completed, because by the time data from one server has been migrated to the latest storage format, a newer one is available. The archivists behind these projects range from individuals trying to collect artifacts of the early internet to businesses like Criterion, which sell Blu-rays in a world of streaming. Kurt D. Bollacker writes in American Scientist, “With all digital media, a machine and software are required to read and translate the data into a human-observable and comprehensible form.” This becomes a problem when the machines and software in question have remarkably short lifetimes and can be rendered unusable by a single software update.


The question then becomes: How can we remember in the age of digital degeneration? Before the internet, when memories were inaccessible, our minds would fill in the gaps using surrounding clues. In one study of Holocaust survivors, many recalled minor events (particularly the loud booms of chimneys bursting) that never happened. Dori Laub found that the bursting chimney was a coping mechanism that revealed the truth of the fear and violence of the camp. So, though the chimney never burst, the fear and terror it evoked were real. Thus, even our misremembering was still valuable.

But now, memory gaps are filled with the archives on our phones; and if those are ever deleted, we may reach a point where AI hallucinates for us. When I asked GPT-4 about myself, it filled the informational gaps by listing things that never happened. It knew what school I went to, but when asked about my childhood, it used contextual clues to reconstruct my school days. Some of it is accurate, some is wrong. The campus details, for example, are true. The classes I took and the teachers I had are completely false.

And there is a mysterious section that I’m not sure about; it could be true or false, and I can’t remember. Yet ChatGPT presents it to me with absolute authority and certainty, which is even more confusing.


Internet archivists who attempt to save webpages are met with two obstacles. First, digital files have fraught ownership: if a piece of content is deemed privately distributed, it cannot be archived—even if it runs the risk of being lost. Second, the idea that a creator owns their content is tenuous. It’s ubiquitous practice to keep mirror servers that store information in case of unexpected loss. A YouTuber may delete one of their own videos, for example, but fans who hold shadow copies can easily re-upload it. Similarly, someone who wants to delete a comment of theirs from the internet has no power over any screenshots that might have been taken.

The Proustian pleasure is dual; it is both the spontaneous rush of a memory unlocked and the knowledge that it exists only in my mind, never to be known or shared by another. It is the pleasure of ownership. But when memories blur into public spaces like the internet, who controls them?

Most online data is owned by a small group of firms, and you know their names: Google, Meta, Amazon, Apple, and Microsoft. Companies like these have a brutal history of pursuing profit at all costs. There may be an illusion of free will—of uploading and deleting at your own discretion—but data can be completely destroyed, or resurrected, at the hands of a small few.

As it stands, the current trend of disappearing data is likely to accelerate as storage space tightens to accommodate generative AI content. Eventually, some of your memories will be deemed obsolete: not by the workings of your brain, but by a company that can no longer afford to host them.

And when this happens, we, with our brains weak and soft from shunning Proustian labor, will not even remember what we have lost.

Featured image: 090811 Old days. Photograph by underthesun / Flickr (CC BY-NC 2.0)


