Many of the topics we’ve all been discussing in technology these days seem to matter so much more, and the stakes have never been higher. So, I’ve been trying to engage in more conversations out in the world, in hopes of communicating some of the ideas that might not get shared by more traditional voices in technology. These recent conversations have been pretty well received, and I hope you’ll give them a listen when you have a moment.
Galaxy Brain
First, it was nice to sit down with Charlie Warzel, as he invited me to speak with him on Galaxy Brain (full transcript at that link), his excellent podcast for The Atlantic. The initial topic was some of the alarmist hype being raised around AI within the tech industry right now, but we had a much more far-ranging conversation, and I was particularly glad that I got to articulate my (somewhat nuanced) take on the rhetoric that many of the Big AI companies push about their LLM products being “inevitable”.
In short, while I think it’s important to fight their narrative that treats big commercial AI products as inevitable, I don’t think it will be effective or successful to do so by trying to stop regular people from using LLMs at all. Instead, I think we have to pursue a third option, which is a multiplicity of small, independent, accountable and purpose-built LLMs. By analogy, the answer to unhealthy fast food is good, home-cooked meals and neighborhood restaurants all using local ingredients.
The full conversation is almost 45 minutes, but I’ve cued up the section on inevitability here:
Revolution Social
Next up, I got to reconnect with Rabble, whom I’ve known since the earliest days of social media, for his podcast Revolution.Social. The framing for this episode was “Silicon Valley has lost its moral compass” (did it have one? Ayyyyy) but this was another chance to have a wide-ranging conversation, and I was particularly glad to get into the reckoning that I think is coming around intellectual property in the AI era. Put simply, I think the current practice by AI companies of wholesale appropriation of content from creators, without consent or compensation, is simply untenable. If nothing else, as normal companies start using data and content, they’re going to want to pay for it, just so they don’t get sued and so that the content they’re using is of known reliability. That will start to change things from the current Wild West “steal all the stuff and sort it out later” mentality. It will not surprise you to find out that I illustrated this point by using examples that included… Prince and Taylor Swift. But there’s lots of other good stuff in the conversation too! Let me know what you think.
What’s next?
As I’ve been writing more here on my site again, many of these topics seem to have resonated, and there have been more opportunities to guest on podcasts, along with invitations to speak at various events. For the last several years, I had largely declined all such invitations, both out of some fatigue over where the industry was and because I didn’t think I had anything in particular to say.
In all honesty, these days it feels like the stakes are too high, and there are too few people addressing some of these issues, so I changed my mind and started to re-engage. I may well be an imperfect messenger, and I would eagerly pass the microphone to others who want to use their voices to talk about how tech can be more accountable and more humanist (if that’s you, let me know!). But if you think there’s value in these kinds of things, let me know — or if you think there are places where I should be getting the message out, do let them know, and I’ll try to dedicate as much time and energy as I can to doing so. And, as always, if there’s something I could be doing better in communicating on these kinds of platforms, your critique and comments are always welcome!