👋🏽 Welcome to all the new subscribers. I appreciate your interest. There will be two posts this week. Today is a “mixed bag,” a smorgasbord of ideas and topics related to creativity (broadly) and AI (specifically) which provoked me; maybe they’ll do the same for you. Later this week I’ll have a fresh episode of the Curiosity+Courage Podcast. New episodes always appear in your email feed, but are also available via Apple, Spotify, or YouTube. And here are a few highlights from the newsletter archives in case you want to get grounded in my approach:
Your job is to have taste — The role of humans in an age of AI-infused creativity
Four AI culture questions for leaders — Productivity, Psychology, Knowledge, Customers
Which room are you in? — It's time to strengthen your curiosity muscles
Welcome to the Subjectivity Business — How do we know any idea is the right one?
Pivotal moments — If you can choose when to enter an industry, choose a pivotal moment
🙏🏽 And a hearty thank you to the teams at 3M, University of Minnesota Duluth, and Model B for providing a chance to present philosophy and pragmatism related to AI and marketing. If your organization would benefit from a 60-minute experience, let me know.
Grand Theft Hamlet
Usurping a place (like an IKEA) to present an idea isn’t new. The trick with juxtaposition is contrast. The more, the better. Case in point: staging Shakespeare’s Hamlet inside the video game Grand Theft Auto—then making a film of the experience. Brilliant. Please don’t kill the actors.
DeepSeek = Comedy?
This is also about contrast.
The prevailing narrative (WSJ, Casey Newton, New York Times) has told us that training large language models—ChatGPT, Claude, Meta, Perplexity, et al.—is absurdly expensive, both financially and in terms of energy use. You have to have tons of money, tons of skill, and tons of the latest NVIDIA chips to have a chance.
On Monday that narrative appeared to be less true. DeepSeek, a Chinese LLM, seems able to do much of what its more expensive competitors can do at a fraction of the expense. I’ve played around with DeepSeek on my iPhone and laptop, feeding it various prompts I wrote for the other LLMs. And overall, DeepSeek worked pretty well. Maybe the research wasn’t as nuanced as ChatGPT’s, or the writing wasn’t as thoughtful as Claude’s. But it was good enough, hearkening back to the “good enough” revolution of MP3s that shook up the music industry. So, now what?
I think Bloomberg’s Matt Levine has articulated the best narrative—what if this is just comedy?
The nice thing about building an artificial intelligence model out of a quantitative hedge fund is that there are interesting ways to monetize it. A standalone AI company will probably think of ideas like “sell subscriptions to an AI chatbot” or “sell access to an application programming interface,” but with a hedge fund you can be more creative. A naïve approach might be: “We will ask our AI chatbot what stocks to buy, and buy them,” but that is probably wrong. For one thing, your chatbot is probably bad at picking stocks. Also your hedge fund probably has its own AI stock-picking models that are better. Also, if you do release your AI model to the public — if you open-source it! — then everyone else can use it to pick stocks too, so this doesn’t give you any real advantage.
There is, however, a much funnier approach. The approach is:
Build a good AI model that can compete with the leading large language models built by tech giants, but cheaply, with fewer and less sophisticated chips and less electricity.
Sell short the stocks of the tech giants with expensive AI models, and the big chipmakers, and electric utilities and everyone else who is exposed to the “AI is a gusher of capital spending” trade.
Then announce your cheap good open-source model.
Wipe out almost $1 trillion of equity market value, and take some of that for yourself.
I have no reason to think that quant fund manager and DeepSeek founder Liang Wenfeng actually did that, or even thought about it, but, man, wouldn’t it be cool if he did?
Wouldn’t that be cool, indeed.
This is a prevailing story of engineering and technology. How can I build what they built, but faster, cheaper—and maybe even of better quality?
Sports bras and ChatGPT
I’m assuming most readers of this newsletter are trying to make sense of LLMs through traditional means—doing research, organizing copy, etc. But the increasing capability of multimodal systems (i.e., what am I/you looking at?) is probably where you’ll encounter real breakthroughs. At least, that’s the premise of CatGPT’s TikTok video. She’s a good follow, btw.
Speaking of Science Fiction
The developer Simon Willison curated some of his 2024 blog highlights including a favorite of mine, “Voice and live camera mode are science fiction come to life.” Between Google’s Project Astra (here and here), and the sports bra story above, we are living in a world once only imagined.
Right now you can use Google’s Gemini Live on your phone to have conversations with an AI about what your lens is pointed at. It can translate language and handwriting, explain code, give you a recipe for items it sees in your fridge, etc. It’s not quite what has been demoed with Astra, but good grief! We couldn’t do this a year ago.
What ideas will you imagine for a world where almost everyone has an AI-enabled ability to understand and interact with their surroundings? Or is that too much to consider?
Willison expounded on the keen reality:
“LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.”
I’m reminded of the early days of websites. Unlike, say, a corporate magazine which probably followed pagination rules long established, a website could be…anything—any pixel dimension, any number of pages, hyperlinked as needed. This newfound architecture was both liberating and frustrating. Why can’t we just make normal TV commercials?
We’re living in that weird space we’ve visited before. How will you take advantage?
Whither the idea agency, circa 2025?
The history of advertising and ad agencies in the U.S. is at least two hundred years old. Imagine working alongside your former newspaper-centric creative partners as they develop their first TV commercial. Later on, and not long ago, many of us witnessed our TV-centric colleagues build their first websites, then their first Facebook ads. And today there’s everything I wrote about above. If you and your agency aren’t allocating time to experiment with AI, well, we’ve seen what happens.
The “integrated” agency of the mid-2010s was fluent in traditional forms of ideas, with an ability to both imagine and translate those skills in new realms like social. Or vice versa—they launched in the new realm, learned some lessons, then acquired and incorporated traditional skills to survive.
What’s the version of this that comes to fruition in 2025, over the next 18 months? That’s one of the fundamental questions I’m thinking about. And I’m clearly not alone.
Secret Level, founded by Jason Zada, aims to reimagine the agency model in the age of AI. You’ve seen their work for Coca-Cola, and the film The Heist, which leveraged Google’s Veo 2 platform.
RabbitHole, founded by my friend Ross Patrick, is equally bullish on the future of the agency model. Fun stuff including “My Name is Gary.”
Who am I missing? Who else is advancing the future of the agency?
Reader request: some future stories/analysis of AI-native “idea” agencies. Remember the whole divide in the early web days between creative agencies bolting “digital” onto their model vs. digital-native shops? The same thing is happening (or will be happening).