094: This time is different?
[During - Session 2] Evaluating contexts and mechanics | And more AI+Creativity
When innovations like the Internet, smartphones, or social media first arrived, artists and entrepreneurs could ignore those shifts for quite a while without detrimental effect. How long did it take you to build your first website? Or join your first social platform? How many companies held out on mobile web investment? There were clear benefits to being early, but few real penalties for showing up late. In a similar vein, has any artist ever wrestled with a moral quandary over who made their paintbrushes?
Maybe this time is different.
The range and impact of change from GenAI feels unprecedented, more like electrification in the 1920s. The entire status quo is being disrupted. So Monday evening’s second session took a step back to interrogate terminology, historical context, and cultural adoption, the better to help us gain an advantage.
First, how do we explain outcomes that are surprisingly human-like, mutating and improving literally by the hour? I think we have to return to the 1940s and ’50s and the aspirations of scientists and storytellers to comprehend human cognition through code and mechanics. Their legacy is both the technical and cultural foundation we’re operating from today. Our communal understanding of GenAI is as much non-fiction as fiction.
This piece from The Guardian is an excellent primer on the mechanics powering an LLM. And this piece from The Washington Post does an equally useful job of showing how the diffusion model of image generation works.
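If you want the mechanics those primers describe in miniature: an LLM generates text one token at a time, scoring every candidate next token and sampling from those scores. Here is a deliberately toy sketch of that autoregressive loop, with a tiny hand-made probability table standing in for the billions of learned weights a real model uses (the table and names are mine, purely illustrative):

```python
import random

# Hand-made stand-in for a trained model: for each token, the
# relative likelihood of each possible next token.
NEXT_TOKEN_SCORES = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(prompt_token, rng, max_tokens=10):
    """Autoregressive generation: repeatedly look at the last token,
    score the candidates, and sample the next one."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        scores = NEXT_TOKEN_SCORES.get(tokens[-1])
        if scores is None:
            break  # token we have no predictions for
        candidates, weights = zip(*scores.items())
        nxt = rng.choices(candidates, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the", random.Random()))
```

A real LLM replaces the lookup table with a neural network conditioned on the entire preceding context, not just the last token, but the sample-append-repeat loop is the same shape.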
Then there’s speed, adoption and mutation.
“The LLMs you have access to today, the LLMs several billion people around the world have access to today, is literally the best AI available to anyone outside a handful of people at the big AI firms. You have the same AI access if you are Goldman Sachs, or the Department of Defense, an entrepreneur in Milwaukee, or a kid in Uganda.” – Ethan Mollick
How many of us would have recognized the acronym LLM even a year ago?
Here are a handful of resources:
One Year of ChatGPT - via The New York Times
Digiday’s definitive 2023 timeline
Microsoft’s AI Pipeline - via FastCompany
A History of Generative AI - via Toloka
The pace of innovation is almost unnerving. But also empowering. One of my students talked about discovering a kind of permission in using GenAI; these tools apparently helped demystify the work and dispel the stigma and self-doubt that had inhibited creative expression.
Sounds like a win to me.
We wrapped up class by addressing the inevitable disruption caused by so much shifting power. Three stories stand out as signals of whose cage is being rattled, and where we might expect to see legislative action or shifts in policy.
1️⃣ Sept 6 2023 - U.S. Copyright Office - “rejected copyright protection for art created using artificial intelligence, denying a request by artist Jason M. Allen for a copyright covering an award-winning image he created with the generative AI system Midjourney, citing…it was not the product of human authorship.”
2️⃣ Dec 27 2023 - The New York Times Sues OpenAI, citing “unlawful copying and use of The Times’s uniquely valuable works.” “It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.”
3️⃣ Fall 2023 - Authors and independent artists sue GenAI companies for copyright infringement. The Authors Guild and 17 well-known authors, including Jonathan Franzen, John Grisham, George R.R. Martin, and Jodi Picoult, filed a lawsuit alleging OpenAI “copied plaintiffs’ works wholesale, without permission or consideration and fed the copyrighted materials into large language models.”
Next week we’re diving into generative text tools, and prompt engineering.
I’ll have an “After” reflection later this week.
AI+Creativity Update (Mostly Google stuff)
✏️ “For the first time, potentially billions of people will be confronted with the option to have software write on their behalf, in virtually every online context,” writes John Herrman in New York magazine’s Intelligencer, referring to Google’s upcoming Chrome browser feature.
📢 And the same for Search ads. “All you need to start is your website URL and Google AI will help you create optimized Search campaigns by generating relevant ad content, including creatives and keywords.”
🎬 And Google Research just announced Lumiere—a “Space-Time Diffusion Model for Video Generation,” which is, “designed for synthesizing videos that portray realistic, diverse and coherent motion -- a pivotal challenge in video synthesis.” Paper here.
🤔 “The current generative A.I. systems raise a lot of complicated copyright issues — some have called them existential — that really require us to start grappling with fundamental questions about the nature and value of human creativity,” says Shira Perlmutter, the register of copyrights at the Copyright Office. Worthy read from The New York Times.
🇺🇸 Say what you will about government, but I love seeing the word “complete” in green and all caps. Kudos to the Biden White House AI Council on getting diverse teams thinking and acting quickly across federal agencies. Especially the actions to speed up and increase hiring of AI talent. More please.
🍫 “Velvetise into Happiness.” Apparently that’s a thing GenAI can do.