Last month, The New York Times published a story by A.J. Jacobs titled “48 hours without AI.” Jacobs is known for these kinds of self-applied lifestyle experiments, notably his book, The Year of Living Biblically: One Man’s Humble Quest to Follow the Bible as Literally as Possible.
This month’s question is my reaction to Jacobs’ article. I’m reframing it to…
Can you go 48 hours without generative AI?
Short answer: if you were alive in 2021 and made it all the way to November 2022, you are living proof it’s possible! For a more thoughtful analysis of why I’m even asking this question, read on.
Avoiding Machine Learning vs. Generative AI
All of the outrageous “there’s AI in that” realizations in the article – Jacobs can’t listen to a podcast, pay for food, take the subway, or turn on a light – have nothing to do with generative AI. If he were only avoiding generative AI, it would be a pretty boring article. Millions of people around the world – right now – are successfully not using generative AI. It’s a non-story. But all of the cases in the article do involve machine learning. Machine learning has become a go-to technique for big data processing in a myriad of situations where pattern recognition and prediction might be useful. And while machine learning is the bigger category into which generative AI fits, the common perception of what qualifies as “AI” has shifted toward a narrower idea.
AI has always been a problematic term. It’s a floating signifier, a term without a fixed meaning that shape-shifts over time. Since the launch of ChatGPT in late 2022, the general public has come to associate the term “AI” with large language models that generate outputs. In other words, for much of the public, generative AI is “real AI,” the kind that exhibits actual intelligence or even consciousness. More than that – it’s viewed as extremely important. When organizations are told that “AI will revolutionize everything,” the people making those claims are pointing at generative AI or, even more specifically, large language models. They tell people these models show “sparks of AGI” – artificial general intelligence – another ill-defined term, but one that speaks to broad, human-level capability across all domains. There is, of course, massive self-interest at play given who makes those claims. It’s made even more confusing because much of the media reports these stories in ways that also focus on generative AI while framing it simply as “AI.” It’s hard to disentangle this terminology (though I try at every talk I give). It comes back to what people believe, and Big Tech has a fantastic PR machine.
A Misleading Spectacle
Jacobs’ broad take on AI – defining it in terms of machine learning – isn’t wrong, but it does feel a bit misleading. It’s true that “AI” writ large – whether machine learning, algorithmic systems, or expert systems – powers a lot of our digital world. It’s part of the supply chain and workflow processes for many physical goods too. To state the obvious, that’s because so much of what we rely on involves computers and data at some level. This is a point I often make in my own discussions about broadening our conceptions of AI.
If Jacobs is helping illustrate that AI is bigger than generative AI, isn’t that a good thing?
The problem is that people who believe AI is only generative AI – a large share of the general public – might miss this important nuance. The story doesn’t really call it out and instead focuses on the “AI surprises” peppered throughout our digital world. The tone of the article carries an air of inevitability. As we follow Jacobs through his day, watching him wind up writing his story by candlelight on a typewriter – even as he admits he used ChatGPT for research prior to the experiment – we’re left with the feeling that this thing many people know as AI (generative AI) is a fait accompli. It would be silly to resist.
This narrative also fits the goals of technology companies eager to push large language models into every digital nook and cranny they can find – from enterprise tools (Copilot) to search engines (AI Overviews). None of this is inevitable – there is in fact room for choice and agency. This is particularly true of generative AI, which requires user participation in a way that the machine learning systems monitoring the energy grid do not. Generative AI is also what’s driving unprecedented demands to expand the energy grid to support data centre build-outs – something the other, less compute-intensive types of AI have not demanded.
Jacobs’ experiment applies an overly broad notion of “using AI,” treating anyone in the supply chain who uses machine learning to process data as contaminating the situation. This makes for a humorous read, a kind of media spectacle, but it’s unwise to frame our choices as all-or-nothing binaries. Instead, we need to think about where our responsibility begins and take reasonable steps to exert our human agency.
Send Me Your Questions!
I would love to hear about your data dilemmas or AI ethics questions and quandaries. You can send me a note at [email protected] or connect with me on LinkedIn. I will keep all inquiries confidential and remove any potentially sensitive information – so please feel free to keep things high level and anonymous as well.
This column is not legal advice. The information provided is strictly for educational purposes. AI and data regulation is an evolving area and anyone with specific questions should seek advice from a legal professional.
Your Data Career Accelerator
The training subscription designed for the busy data professional – from foundational courses to advanced certification. (Use code Cyber2025 to save 25% through December 8, 2025!)