151: The unsettling familiarity
AI is reflecting changes we've experienced before, but in new ways
For a brief period in the 90s I listened only to Soul Coughing. “Screenwriter’s Blues” in particular—yes, we are all going to Reseda…someday. And I read Mike Doughty’s The Book of Drugs. But that was long after we had hired Mike to be the solo opening act for a Volkswagen-sponsored college campus tour. I never imagined the original band would reunite. But they have! And they might have the best backstage rider of any band ever, even better than the Van Halen brown M&Ms clause.
We can’t be imaginative enough
I’ve been writing words online since roughly 1994, but reading other people’s online writing even longer. Especially Seth’s. His sense of “A possible AI future” points to both our opportunity and responsibility as consumers and stewards of AI policy, use, and acculturation. For example, what if…
“A company seeking RFPs invites all its suppliers to submit confidential overviews of their supply chain. An AI reads the material and creates Pareto optimal connections, building a confederation of several suppliers who can work together to build something faster and more efficiently than any could do alone.”
In other words, what could we do now because of AI which we couldn’t do before? (Or couldn’t do because it was too complex, too time-consuming, too expensive.) What might we unlock?
Take the concept above, and remember we finally have a tool that can teach us how to use it more effectively. This is weird, for sure. We’ve never had tools we could converse with to understand their capabilities. So, anyway, you could cut/paste Seth’s quote above into ChatGPT and ask, “How could I build this?” Better yet—cut/paste the quote, head to GPT-4o and ask, “Help me write a prompt to build a system based on the following [paste quote].” Then cut/paste the new prompt into OpenAI’s o1, which uses “reasoning” to provide a more nuanced answer. This is “Let Me Google That For You” but 10x (thanks, Greg!).
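If you’d rather script that two-step workflow than cut/paste by hand, here’s a minimal sketch using the OpenAI Python SDK. The model names (“gpt-4o”, “o1-preview”) and the exact prompt wording are illustrative assumptions, not a prescription.

```python
# Sketch of the two-step workflow: GPT-4o drafts the prompt, o1 reasons over it.
# Model names and prompt wording below are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seth_quote = (
    "A company seeking RFPs invites all its suppliers to submit confidential "
    "overviews of their supply chain. An AI reads the material and creates "
    "Pareto optimal connections, building a confederation of several suppliers "
    "who can work together to build something faster and more efficiently than "
    "any could do alone."
)

# Step 1: ask GPT-4o to write the prompt for the system we want to build.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Help me write a prompt to build a system based on the following:\n{seth_quote}",
    }],
)
generated_prompt = draft.choices[0].message.content

# Step 2: hand that generated prompt to o1, which spends more time "reasoning"
# before it answers.
answer = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": generated_prompt}],
)
print(answer.choices[0].message.content)
```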
So there’s tension in the air as AI settles into our lives, becomes routine, boring, yet remains wildly unfamiliar and unsettling.
Or as Sir John Hegarty reminds us, “in adverse conditions, the only sensible action is to try to create the future you’d want to live in.”
This is the business of creativity, of course. Framed another way, it is “our elemental desire to learn about and explore our environments in order to extract meaning from our circumstances.” That insight comes via Neil Perkin, writing about the work of author Dan Cable.
We’ve been here before, remember?
When software first arrived in your life, how long did it take you to get acclimated?
Then think about the Internet. How long did it take you to get accustomed to the new realities it created?
Then think about social media and answer the same questions.
Now let’s think about AI and its implications related to work, home, school, and society. The new thing arrives in your periphery, it takes over your focus, and then what?
We come by the exhaustion honestly. In the last 200 years humans have standardized large-scale agriculture, running water, electricity, flight, healthcare, computers, higher education, and a global Internet, to name a few. And now there’s AI. As monumental as this latest opportunity might be, relatively few have the wherewithal to engage substantially. Just think about how many small businesses didn’t have a website decades after the arrival of the World Wide Web. Google spent a lot of money trying to convince them to engage and invest.
We’re still very early in this latest adoption curve.
But it feels pretty different. How will you engage?
AI+Creativity Update
🤖 📆 Next Tuesday, September 24, Google’s hosting a series of quick Gemini at Work virtual sessions. I signed up for a few. They’re free.
🤖 “We are in for the ride of our lives,” says Oprah in last week’s ABC primetime special, “AI and the Future of Us.” The 70-year-old interview queen engages Sam Altman, Bill Gates, Marques Brownlee, and others to illuminate the world we find ourselves in. (Full program via Apple TV; seven-minute ABC preview; TechCrunch review)
🤖🔬 Ben Thompson expands on Ethan Mollick’s explorations of OpenAI’s o1. You’re not likely to ask the latest version to solve crosswords, but this is definitely a space worth spending time in. There was a time not long ago when far more rudimentary AI arrived and freaked people out.
🤖✏️ NewArc has been around a while, but deserves another look. Their premise: turn sketches into realistic images. If you have the skill to articulate (or curate) enough of an image, NewArc can help you advance an idea quickly.
🤖 🎥 Runway’s new video-to-video capabilities open up versioning options for brands—here’s an example for Red Wing Shoes (what do you think, Dave?)
🤖 🎥 Minimax is another text-to-video generator. Someone on Reddit used it to recreate iconic movie scenes.
🤖 Handy reminder of Midjourney parameters and their effects.
☝🏽 Andrej Karpathy, former Tesla and OpenAI computer scientist, makes an astute point: Large Language Models have little to do with language.
“They don't care if the tokens happen to represent little text chunks. It could just as well be little image patches, audio chunks, action choices, molecules, or whatever. If you can reduce your problem to that of modeling token streams (for any arbitrary vocabulary of some set of discrete tokens), you can ‘throw an LLM at it.’”
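To make Karpathy’s point concrete, here’s a toy sketch in Python: the “tokens” an LLM models are just integer ids over some discrete vocabulary. The vocabulary below (robot actions) is an illustrative assumption; it could equally be image patches, audio chunks, or molecule fragments.

```python
# Toy illustration: any discrete vocabulary can become a token stream.
# The action vocabulary here is a made-up example, not a real robotics API.
vocabulary = ["MOVE_LEFT", "MOVE_RIGHT", "GRASP", "RELEASE", "WAIT"]
token_id = {tok: i for i, tok in enumerate(vocabulary)}
id_token = {i: tok for tok, i in token_id.items()}

# An "episode" of actions becomes a token stream the same way a sentence does.
episode = ["MOVE_LEFT", "MOVE_LEFT", "GRASP", "MOVE_RIGHT", "RELEASE"]
token_stream = [token_id[a] for a in episode]

print(token_stream)                          # [0, 0, 2, 1, 3]
print([id_token[i] for i in token_stream])   # round-trips back to the actions
```

Once a problem is expressed as streams like that, the same sequence-modeling machinery that predicts the next word can predict the next action, patch, or molecule fragment.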