A Tech Veteran's Perspective
I was lucky enough to be in San Francisco during the late '90s and early 2000s to watch, and be a small part of, the first internet boom. It was amazingly exciting, then devastating when the bubble crashed. From the ashes, a second wave of the internet emerged that was even bigger.
Fast forward to what I've seen over the last year with AI, and this is even bigger, way bigger, and even more exciting. I'm so happy to still be in technology and getting to ride another wave. But there are definitely warning signs and real dangers that we can't ignore as this AI explosion happens.
I've been on my AI learning journey for a while now, but this summer vacation I decided to go deeper. I wanted to understand AI through the eyes of the people building it, warning about it, and betting their careers on it.
My Summer Reading and Viewing List
Books:
- "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI" by Karen Hao
- "The Optimist" by Keach Hagey
- "Superagency: What Could Possibly Go Right with Our AI Future" by Reid Hoffman and Greg Beato
Videos:
- "Godfather of AI: I Tried to Warn Them, But We've Already Lost Control!" (Geoffrey Hinton interview)
- "How I Use ChatGPT to Run My $30B Company" (Dharmesh Shah, cofounder of HubSpot)
These resources fundamentally changed how I think about what's coming.
The Great AI Divide
As I continued my exploration, what struck me most was how sharply divided the AI world has become. On one side, you have the "boomers" (the optimists) and on the other, the "doomers" (the pessimists). And here's what's fascinating: both camps have brilliant people with compelling arguments.
The Optimists: Reid Hoffman and Dharmesh Shah
Take Reid Hoffman, cofounder of LinkedIn and serial investor in companies like OpenAI, Coinbase, and Dropbox. His book "Superagency" (written with Greg Beato) paints a picture of AI as humanity's ultimate amplifier. He sees AI giving us superpowers, helping us work smarter, create better, and solve problems we couldn't tackle before. He argues passionately against heavy regulation, believing market forces will naturally guide AI toward beneficial outcomes. His optimism isn't naive; it's grounded in his experience building and investing in transformative technologies.
Dharmesh Shah, cofounder and CTO of HubSpot (now valued at $30 billion) and investor in over 60 startups, showed me the practical side of this optimism. In his video about using ChatGPT to run HubSpot, he demonstrates exactly how AI transforms daily business operations. He built ChatSpot, an AI chatbot for HubSpot's CRM, and uses it constantly. Watching him work with AI was like watching someone from the future: efficient, creative, and completely natural. These guys understand the technology at a level that I don't, so when they say we should embrace it, I listen.
The Warning Voice: Geoffrey Hinton
But then I watched Geoffrey Hinton's interview "Godfather of AI: I Tried to Warn Them, But We've Already Lost Control!" and felt a chill run down my spine. Hinton isn't some doomsday preacher; he's the person who helped create the foundation of modern AI. He left Google specifically so he could speak freely about AI's dangers. His message was stark: he now estimates there's a 10 to 20 percent chance AI causes human extinction within 30 years, and believes AGI (artificial general intelligence, AI that matches or exceeds human capabilities) could arrive in 5 to 20 years, not the 30 to 50 he once predicted.
When someone who helped birth this technology tells you he's worried we've already lost control, you don't dismiss it as fear mongering. You pay attention. By the way, his recommendation to young people is to become plumbers, because that will be a tough job for AI to replace, at least anytime soon.
The OpenAI Story: When Mission Meets Reality
This divide between optimism and pessimism became even more complex when I dug into the OpenAI story through two different lenses. Karen Hao's "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI" was particularly eye-opening. She had unprecedented access to OpenAI in its early days and watched its transformation from a nonprofit promising transparency and safety to something entirely different.
The book reveals how Sam Altman (CEO of OpenAI) operates with what can generously be called flexible ethics (I'm being very kind when I say this), particularly around AI safety and his dealings with the board and employees. Keach Hagey's "The Optimist" gave me additional perspective, showing how the narrative around technological progress often overshadows the messy realities of how these companies actually operate.
The Product-Market Fit Paradox
Yet here's the paradox that keeps me up at night: product-market fit trumps everything. ChatGPT is perhaps the most perfect example of product-market fit in tech history, and its success has made all other concerns (safety, ethics, transparency) seem secondary. In Silicon Valley, being right about what people want can paper over almost any problem. But when we're talking about technology that could fundamentally alter human existence, should product success be enough?
The Hidden Empire
As I dove deeper into AI, three truths emerged that most of us never see or think about:
The Human Cost
First, there's a hidden human cost that's staggering. Hao's reporting took me to Kenya, where data workers review traumatic content for pennies, helping train the models we use daily. One worker she interviewed had to review 15,000 pieces of disturbing content per month. These are the invisible humans making our "artificial" intelligence work, often suffering psychological damage in the process. It reminded me uncomfortably of other moments in tech history where innovation was built on exploitation we chose not to see.
The Environmental Impact
Second, the environmental impact dwarfs anything I imagined. Data centers consume enormous amounts of energy and water, with one planned facility set to use nearly as much power as New York City. The companies building these AI empires are essentially recreating colonial resource extraction patterns, but with computing power, data, and cheap labor instead of gold and rubber.
The Broken Promise of Openness
Third, the promise of "open" AI has become a cruel joke. Despite names like "OpenAI" and talk of democratization, AI development has become increasingly secretive. The same companies lobbying against regulation are the ones refusing to share their research or explain how their systems work. They want less oversight while building technology that could reshape civilization.
Finding Balance in the Storm
Throughout my reading, I kept returning to Keach Hagey's "The Optimist" for perspective. The book reminded me that we've been here before, not with AI specifically, but with transformative technologies that sparked both utopian dreams and existential fears. The printing press, the steam engine, the internet: each brought prophecies of doom and promises of paradise. The optimists in Hagey's story aren't naive; they're choosing to focus on potential while acknowledging risk, believing human ingenuity and adaptation will guide us through.
This historical perspective helped me refine my own position in this debate. I'm neither a pure boomer nor a complete doomer. I'm someone who's ridden tech waves before and knows they can both create and destroy, often simultaneously.
Where My Journey Has Led Me
After this summer's deep dive, added to everything else I've learned about AI, I've come to believe three things with absolute certainty:
First, this is bigger than the internet revolution. I've seen tech transformations, but nothing like this. The speed, scale, and potential impact dwarf anything I've witnessed in my decades in technology.
Second, the risks are real and cannot be ignored. When Geoffrey Hinton warns about existential risk, when we see the exploitation and environmental damage, when companies prioritize growth over safety, these aren't abstract concerns. They're present dangers requiring immediate attention.
Third, opting out isn't an option. Here's the harsh reality I tell everyone: AI will definitely take some jobs, and it will absolutely take YOUR job if you don't learn to use it. The people who thrive won't be those who resist AI, but those who learn to work with it, understand its capabilities and limitations, and use it to amplify their own skills.
The Path Forward
My ongoing AI education, enriched by this summer's reading, has convinced me that we're living through one of the most important technological shifts in human history. The divide between boomers and doomers isn't just academic; it represents two possible futures we're choosing between right now.
I want my family, friends, and coworkers to become AI native and leverage it for all the good it offers. But I also want them to do it with eyes wide open to the risks, the ethical challenges, and the hidden costs. We need to demand transparency from AI companies, support responsible development, and ensure the benefits aren't built on exploitation.
Continue experimenting with AI tools but think critically about their implications. Have conversations about AI safety. Question the narratives from both extreme optimists and pessimists. Most importantly, don't let fear or ignorance leave you behind, but don't let excitement blind you to legitimate concerns.
The future is being written right now, and we all have a role in shaping what it looks like. My journey continues, but after this summer of reading, I'm more committed than ever to engaging with AI actively but cautiously, embracing its potential while fighting for guardrails, transparency, and ethical development. Because ultimately, whether AI becomes our greatest tool or our greatest threat depends on the choices we make today.
The waves I've ridden in tech have taught me one thing: the biggest transformations happen when we're both excited and worried. That tension keeps us honest. And right now, I'm both more excited and more worried than I've ever been.
Resources That Shaped This Summer's Learning
Books:
- "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI" by Karen Hao
- "The Optimist" by Keach Hagey
- "Superagency: What Could Possibly Go Right with Our AI Future" by Reid Hoffman and Greg Beato
Videos:
- "Godfather of AI: I Tried to Warn Them, But We've Already Lost Control!" (Geoffrey Hinton interview). https://www.youtube.com/watch?v=Is--XTem56s
- "How I Use ChatGPT to Run My $30B Company" (Dharmesh Shah, cofounder of HubSpot). https://www.youtube.com/watch?v=mzSjAxYCEow&t=22s