(So….I wrote this mostly before the election. And I guess now I’m even less optimistic for a positive outcome on the AI issue and all other issues? Apologies for what is an even more dour edition of this newsletter than what I anticipated.)
I generally like to remain positive in this Substack. No one needs to hear me complain about everything. That’s what group text chats with friends are for. However, there are times when I just need to get something off my chest, so welcome to the first installment of a series I’m calling, What the Hell Are We Doing, People?
Generative AI. What the hell are we doing, people?
Now, I’m not saying that there can’t be actual, helpful applications of this technology. If it can be used to help folks in the sciences and medical fields, that seems great (although, as found in this study from Harvard Medical School and this report from the AP, we have a long way to go in actually making its assistance useful). I’m even open to the idea that AI could be useful in the more technical aspects of filmmaking - never as a replacement for the actual creative person, but I can imagine a scenario where AI could help with certain aspects of, for example, editing. Again, not with the creative choices, but with the technical processes. Maybe? Any actual editors in the chat, weigh in!
And I’m not unsympathetic to the idea that AI could be a tool for those low-budget, independent filmmakers who, for example, can’t afford a graphic designer or editor to make pitch materials for their project. I think that is still rife with ethical issues, but I could see a case for it.
But then there are those who seem to think that AI can and should replace the creative - artists, writers, actors, etc. Absolute buffoonery.
You’ve got Mark Zuckerberg out here trying to convince creators that, as individuals, their work is essentially valueless in AI training (okay, Mark, then stop using it).
There’s Mira Murati, the CTO of OpenAI, saying that creative jobs may go away with the integration of AI, “But maybe they [creative jobs] shouldn’t have been there in the first place” (what the hell does that even mean, Mira?).
And then there’s the guy who spent $745 in Kling credits to make a shot-for-shot “live-action” copy of the trailer for Miyazaki’s animated Princess Mononoke with AI. I won’t link to it because I don’t want to give it views, but suffice it to say this man wasted $745 in Kling credits (I can’t believe I’m saying the phrase “Kling credits”) and probably many hours of his life. And to what purpose? To recreate something, but much, much worse?
Guillermo del Toro recently boiled it down at a BFI Q&A, saying that AI basically does “semi-compelling screensavers.” Empty, shallow husks that can mimic but not truly create. “What would you risk to be in its presence?” he asks of AI-generated art. Honestly, I’d risk more to get away from it.
But all this is not actually why I say, “what the hell are we doing, people?” And it’s not even the havoc that ChatGPT has wrought upon an entire generation of students who use it to write their essays. I mean, sure, go ahead and undermine the whole purpose of your education. Who needs critical thinking skills and the ability to craft an argument? Have some goddamn intellectual curiosity, you hooligans!
Me right now:
ANYWAY.
My main beef with AI is much more fundamental: its effect on climate change. As detailed a few months ago by NPR, AI consumes an enormous amount of energy and, as a result, the emissions of companies like Google and Microsoft have soared.
“As we further integrate AI into our products, reducing emissions may be challenging,” Google’s 2024 environmental report reads. Um, excuse me???? You say that like reducing emissions is an encouraged recommendation and not a fucking existential necessity. The report also outlines plans to use AI as a tool to mitigate climate change (which to me seems like trying to get someone to stop punching you in the face by punching yourself in the face), but the plans are vague, and they fully admit that the future environmental impact of AI is “complex and difficult to predict.”
Okay, well then maybe put in a little more R&D to understand and mitigate AI’s environmental impact before you start forcing it down the throats of the general public. The idea that we’re not going to reach our climate goals anyway, so we should just double down on AI to solve the problem for us (a view recently put forth by former Google CEO Eric Schmidt), even though AI is a huge part of the problem, is insane to me. It’s defeatist and short-sighted. And I think it’s designed to quell protest against AI by making it seem inevitable. Just like when they tell you your vote doesn’t matter. But it does. And so does using your voice and your dollars to push back against the integration of AI into everything we do.
I recently canceled my Adobe subscription when they raised their price while touting new AI features. Absolutely fuck off. I die a little inside every time one of the apps or programs I use, from Instagram to Constant Contact, asks me if I’d like to use AI to write my caption/email/etc. On this very Substack, I had to make sure to opt out of allowing it to use my writing to train AI. And who knows if they actually adhere to that. At the very least, using AI should be an “opt-in” procedure and not a, “hey, all your Google searches now make you complicit in the destruction of the planet…and also the answers are shit.” Incidentally, I’ve started using Bing, because at least there I could disable the AI feature. Yeah, it is kinda weird using Bing.
So. What the hell are we doing, people? Are all these tech companies operating on the sunk-cost fallacy even as the planet burns? Can humanity please stop hitting itself in the face? Or am I just a Luddite?