I’ve had a really hard time understanding the wild, whiplash-inducing ride over at OpenAI this past weekend. As much as Twitter sucked, I do miss how much easier it was for me to keep up with the news there. Thankfully, someone linked me to Ben Thompson’s post at Stratechery and it’s one of the best summaries/explainers of the OpenAI implosion that I’ve seen.
I also liked the conclusion of the Newcomer piece quoted by Ben Thompson:
Altman had been given a lot of power, the cloak of a nonprofit, and a glowing public profile that exceeds his more mixed private reputation.
He lost the trust of his board. We should take that seriously.
Let’s give the board and Altman’s critics some time to explain themselves and to articulate a vision for how OpenAI might move forward without putting Altman back in charge.
Then again, I’ve also been reading plenty about how that board is a bunch of AI Doomers, and that their fears about Altman moving too quickly weren’t rooted in what I’d consider real, level-headed concerns (e.g. the dangers of relying on tools that hallucinate and confidently spread misinformation) but instead in the fear that OpenAI will cause a literal doomsday scenario. Matt Levine (whose newsletter is always required reading) touches on this in his Bloomberg post today, which also has some great detail on what’s happening at OpenAI.