This week in digital trust » Episode 89

#89 Who’s in charge here? The Altman/OpenAI saga explained

5 December 2023

This week we unpack the soap opera that was Sam Altman’s firing-then-reinstatement at OpenAI (makers of ChatGPT).

Beneath the drama, and there was a lot of it, the saga potentially stands as a commentary on the state of AI safety approaches within the tech community, and on the effectiveness of self-governance.

We also touch on the US Government’s executive order on safe, secure and trustworthy AI, and the UK government’s AI Safety Summit, both from the last month or so.


Transcript

This is an automatically generated transcript. We make our best efforts to check that it is an accurate reflection of the episode, but it may contain some errors and unedited content.

Arj
Welcome to This Week in Digital Trust, elevenM’s regular conversation about all things tech policy, privacy and cybersecurity. I’m Arj, joining you today from Awabakal country.

Jordan
And I’m Jordan, joining you from Wurundjeri country in Melbourne. And Arj, did you know that it’s AI Month here in Australia, running from the 15th of November to the 15th of December for some reason? Middle of the month to the middle of the month, so that’s weird. But let’s celebrate Australia’s amazing AI capabilities, talent and potential.

Arj
Let’s do that. These months are all about raising awareness of little-known, little-discussed topics, and no one’s been talking about AI in 2023, right? No one’s been talking about AI. So let’s have a month. And starting it from the middle of the month? I’m down with that, why not?

Jordan
Yep, good fun. So yeah, you might not have heard, but AI is kind of a thing at the moment. There are chatbots, there’s ChatGPT, it’s a bit of a thing.
There’s various other stuff going on, but most significantly, over the last few weeks there’s been just a string of amazing news out of the US. I don’t know, I quite like soap operas. I enjoy a bit of corporate drama, a bit of, you know, he said, she said. So we’ve had, first of all, a weekend, and then a couple of weeks of the tail end of it, of some pretty wild news out of OpenAI, the company that makes ChatGPT, about Sam Altman as the CEO being fired and then hired again and then maybe going to Microsoft. It’s been just the most dramatic and hilarious weekend of news.

Arj
It’s been so funny. We were setting up for a conversation about AI because there was other big news: the US government had made major announcements and executive orders, and the UK had pulled together this major international summit, with almost 30 countries from around the world sending their leaders.
So there was stuff happening, big government stuff, and we were like, oh yeah, we should tap back into this. And it’s been just blown out of the water by this amazing soap opera, exactly the right word. A quick recap, because it’s been a few weeks now and it’s hit the mainstream news, so most people listening, I imagine, are aware of it: the CEO of OpenAI, the makers of ChatGPT, Sam Altman, was fired essentially out of the blue on a Friday in November, November the 17th.

Jordan
Completely out of the blue. Apparently he was on stage representing the company like an hour before he got the call over Google Meet to be fired by the board. So yeah.

Arj
And, you know, ostensibly some sort of comms issue was the reason, that he wasn’t being sufficiently candid.
So very confusing, and as we’ll discuss, there hasn’t been a lot more detail since. But he’s fired out of the blue, and the world’s in shock, because this is obviously the most prominent AI company, one we’ve been talking about for a long time. And it doesn’t go well. Basically his co-founder says, I’m going to resign as well, I’m not happy about this. The board appoint not one but two interim CEOs to step in in the meantime. A whole bunch of staff threaten to quit, the PR battle is progressively being lost by the board, and soon enough Altman’s positioning himself to come back.

Jordan
And they’ve got Microsoft on the sidelines over the weekend, who’ve just sunk like $14 billion into OpenAI and who are relying on OpenAI’s GPT models to power their AI products, you know, Bing and Copilot and the chat solutions and stuff that they’re rolling out.
So you’ve got Microsoft on the sideline, who apparently, major stakeholder that they are, were not told about the firing of Sam Altman until it actually happened, or like a few minutes before. You’ve got them piping up saying, we’re not sure about this, and we really need the situation to be resolved by Monday when markets open, because our share price is gonna take a hit.

Arj
And then soon enough there’s a bit of an opportunism streak that runs through Silicon Valley. So you see all of these people threatening to resign from OpenAI, and then you just see blatant, open poaching on Twitter, like the Salesforce CEO saying, if you send me your CVs, I’ll hire you. And then Microsoft obviously goes after Altman and offers Altman and Brockman, his co-founder, jobs.
And then the public pressure takes its toll and there’s an about-face. Soon enough, Altman’s back, back in the chair as CEO at OpenAI. And it’s actually the board members who were positioning to get rid of him who find themselves on the outer. Just a bizarre soap opera.

Jordan
It’s bizarre. I mean, even to the point, another fun tidbit: at some point over the weekend there was a letter signed by a whole bunch of OpenAI staff saying, look, if Altman goes, we’ll go. And among the people who signed that letter, this is like two days after the board made the decision to fire him, was one of the board members who decided to fire him, Ilya Sutskever, who two days later had flipped. And Mira Murati, who was the interim CEO the board had appointed. So those two key people on the side of firing had already flipped, and they’re on his side. There’s just more and more of it, so we could talk about this forever.

Arj
And we could, there’s so much there. Sutskever is, I think, the lead researcher at OpenAI as well, so a very, very prominent employee who was in the company as well as on the board. And I think the quotes I had read were that he effectively read the room after a day and then flipped his position, which, yeah, very interesting.
But there’s so much here, so much here to talk about.

Jordan
Yeah, there really is. And I recommend, well, let’s put some links in the show notes, just scrolling through some of the coverage. There’s a great timeline on The Verge, which has so much great little detail and so many tidbits. It’s quite funny to follow along.
But it’s interesting beyond just the drama. I think it’s a really significant situation that’s played out, because, and we’ve talked about this on the podcast before, OpenAI is not just a for-profit company. It was actually set up as a non-profit with the aim of building safe artificial general intelligence for the benefit of humanity.
It’s these techies in Silicon Valley who are concerned about the future of artificial general intelligence; they’re worried about creating Skynet. And so it was originally set up as a research institute to do research for beneficial AI. In order to raise money, they adopted a split structure: there’s a holding company that’s the not-for-profit, and a subsidiary, which is actually a for-profit company that makes money for itself and for the not-for-profit to run, right?
This is how Mozilla runs, actually, with Firefox. It’s not a super uncommon setup; Signal also runs in a similar way. So the for-profit subsidiary is still owned by the not-for-profit. They can make money, they can offer staff stock options so that you can actually recruit good AI researchers who could be paid more at Microsoft, that kind of thing. You need to be able to recruit people, and you need to be able to raise money from companies like Microsoft to do the computation to build your AI systems. So they have this structure. And from the outside, anyway, it looks like the thing that happened over the weekend was really a competition between those two halves: between the board, which represents the not-for-profit, who tried to fire Sam Altman, and the interests supporting Sam, who is the head of the for-profit company that’s supposed to be in service of the not-for-profit, interests like Microsoft, who’s got $15 billion tied up in the company.
And so what seems to have happened is this conflict where the board’s not happy with the direction the company’s going. They should have the power to fire the CEO; that’s explicitly a power they have. But all of the stuff around the CEO’s charismatic leadership, the staff wanting to work for him, Microsoft getting involved, other stakeholders getting involved, stops that board from actually exerting meaningful power to direct the company. That feels like the drama that’s played out, and that feels really significant.

Arj
And the reason it hasn’t been cut and dried, I guess, is because the board haven’t said any more than that initial statement about Altman not being consistently candid in his communications. But Casey Newton, who we’ve referenced on the pod before and who has a newsletter called Platformer, quoted some employees who attended an all-hands meeting and reported that Ilya Sutskever, that board member you mentioned before, said the removal of Altman was necessary to make sure that OpenAI builds AGI that benefits all of humanity.
So essentially what you were saying: this sense from members of the board that the direction of the company was moving away from the mission statement, which was around safe and also beneficial general AI. And the other interesting stuff comes from some New York Times reporting on one of the other board members, Helen Toner, who’s actually a University of Melbourne graduate, it turns out.

Jordan
Yeah, Australian, Victorian, yeah.

Arj
Yeah, the New York Times had reported that in the weeks leading up to him being fired, Altman and Toner had a bit of a stoush over a research paper that Toner had written in her capacity as director of strategy at Georgetown University. The research paper effectively criticized ChatGPT and OpenAI for basically putting this thing out there and creating this competitive frenzy amongst tech companies to build and deploy AI before sufficiently ensuring, I guess, that it was safe and benefited humanity. So that’s the part of the board that is very concerned about that, and then you’ve got this board member writing papers that Altman felt were critical of the company. So these tensions were bubbling and playing out for weeks leading into this, and yeah, it seems reasonable to assume that the culmination of it was the removal of Altman because of that conflict.

Jordan
I mean, that narrative is complicated by the fact that it was just apparently so poorly done, right? Like the board clearly, completely misjudged it.

Arj
They botched it.

Jordan
Yeah, they had no real sense of the kind of pushback they were going to get. They didn’t appear to have a real plan for who’s the next CEO, or where that’s going, or how to engage; their comms over the weekend were virtually non-existent. So what I’m reading from this is an attempt by the board to assert the original values of the company, which may or may not be true, there might have been other politics going on, but an attempt by the board to assert those original values, and Microsoft and the for-profit incentives around the company trumping that. It may be that that’s not a fair reading, right? I’m sure there’s more going on and there are complications to that story. But what that story shows, despite any complications, is that we can’t rely on a board like that to regulate a company like that. We can’t rely on self-regulation by these companies. We can’t rely on OpenAI to do the right thing, or entrust the future of humanity and artificial general intelligence to them. These people really need outside supervision.

Arj
But both the distant and the more immediate history of OpenAI reveal that the lure of profit is just so compelling. The long history, which you talked about earlier, was: we’re going to set up this thing as a not-for-profit research institute with this mission around safe AI that benefits humanity. It didn’t hold up. Eventually it morphed into this for-profit enterprise, because that was deemed necessary to capitalize on AI and to raise the money needed to continue to build the engine.
And then the short history tells us that the profit motive ultimately wins out in these kinds of self-governing situations. I mean, it’s being written up by some people as a loss for AI safety. People are looking at this and saying AI safety is clearly not something we should be prioritizing, because AI safety played its cards in this battle and lost. And now we move on, and Altman comes back with a stronger mandate and a purer profit focus, and he rolls forward.
I think that’s a correct reading, in a descriptive way, of what’s happened. Yes, in the context of this company, Altman and his desire to continue with a profit-driven AI enterprise is the winner, and it’s the AI safety advocates on the board who have lost.
But I think it’s more suggestive of the fact that there’s a bit of a monoculture of thought around these companies: this is the way to go forward. When the battle happened, all of the people who rallied, including the employees, were in favor of that old Silicon Valley mindset where the only things we should worry about are building things and breaking things, seeing how far we can push this AI, and building for profit. And that, to your point, tells us that if we care about AI safety, it’s not gonna come from within.
It has to come from mechanisms other than self-governance: regulation and other approaches.

Jordan
Yeah, for sure. I always love these stories when they emerge, because they remind you that these are just people, and they’re not particularly special people running these companies. There’s this mystique and mysticism around Silicon Valley companies and startups. You see it even in procurement: when you’re working in companies here and there’s some Silicon Valley tech startup with a facial recognition app or an algorithm that’ll help you do recruitment or whatever it is, magically solve your problem with technology, there’s this mysticism around Silicon Valley tech companies, like they must know what they’re doing and they’re kind of magical. And really it’s a couple of nerds in a garage, or a bunch of maths nerds from the university who’ve never studied history or humanities and don’t actually know the complications of the problem. And the human fallibility of this story, such a botched attempt to exert control or power, they just hadn’t thought through how it was going to play out over the weekend, brings it back down to earth: it’s just a bunch of people in a room making a decision, and they have relationships and they know people and they make bad decisions. And that is no basis for shaping, like, these are not the people I want shaping the future of one of the most important technologies, you know, the next industrial revolution if you buy into the hype. These are humans. We can’t trust them. We can’t trust any humans. People make mistakes.

Arj
I think the thing about the mythology is that it’s not just a mythology that outsiders to Silicon Valley or tech have about tech entrepreneurs; it’s a self-driven mythology, and a self-consumed mythology as well.
And just reading some of Twitter, or X now, and seeing the proclamations that this proves the death of effective altruism, or sorry, of AI safety, which was always, you know, a totalitarian ideology, the path to totalitarianism and total control. There’s just no measured way of talking about these things, no talking about anything other than go 100 miles an hour with no restraints when you’re building tech. It seems like a caricature of a particular type of VC person, but I’m talking about the Andreessen Horowitz type people, the most prominent venture capitalists behind this industry. And yeah, we don’t want that mindset to drive a technology that is evidently going to be transformative and embedded in everything we do going forward.

As we flagged at the top of the show, there were two major government announcements or initiatives over the past month or so: one out of the US, an executive order by the White House on AI safety, and the other a summit the UK hosted, an AI Safety Summit featuring the participation of about 30 countries from around the world.

Jordan
My overwhelming reaction to both of these was initially very lukewarm: ugh, there’s not really anything material here, it’s all a bunch of platitudes. But the more I’ve looked at them and read them, the more I think, oh, maybe this is moving in the right direction. It’s a nascent technology and we’re still working out what to do. It is definitely unreasonable to expect anyone to come out with a fully baked, here-is-the-solution-to-regulating-AI answer. And once we put that desire aside, much as I wish we could have it, what both of these things do is make incremental progress towards research, standards, agreement and consistency on testing, and they get some of the way there.

Arj
Yeah, let’s go through them very quickly. We won’t go into detail, but the general outline of the US executive order was announcements around things like new standards for AI safety and security. As an example, that’s things like getting companies that are using or developing foundation models that pose a serious risk to do what they call red-team safety tests and to release the results. Another example is getting NIST, the National Institute of Standards and Technology, which does a lot of great work around setting standards, to set standards for this testing.
There’s a strong focus on privacy. So, for example, President Biden calling for bipartisan support for data privacy legislation at a federal level in the US, which they don’t have, and which we’ve talked about a lot.
There’s a focus around equity.

Jordan
Just on that privacy thing, it’s a good example of what I found really lukewarm, right? It’s an executive order that’s asking Congress to legislate. Oh, okay. That’s not doing a lot.

Arj
I agree with you, it doesn’t do a lot, but I actually had a different reaction to it. One, it was good that it put privacy front and centre in this conversation and said, this is a big deal. But also, the president calling for federal privacy legislation in the US, where it’s so contested and it’s been such a long road to get to this point: if we had had a standalone executive order from the president saying we should have federal privacy reform, I feel like we might have thought that was actually really good.

Jordan
But I’m more cynical. The problem is that the legislation is just not going to come, because…

Arj
Well, actually, just on that, I think some of this reflects the general state of play in the American legislative process, which is that the White House and the president can effectively just plant flags and make signals through executive orders, and ultimately Congress has to move things forward. So this is, I guess, their play on that front.
But a couple of other things to mention in this executive order, and I won’t go through them all. It talks about equity and civil rights: making sure things are fair for people in particularly vulnerable situations, like people receiving benefits or people who are renters, and making sure that those using AI in those contexts use it fairly. And a bunch of other stuff around supporting and standing up for users and citizens in different contexts. There’s also, I should say, a bunch of stuff around innovation and competition. So yes, it was largely about risks and safety, but a large part of it is about competition, research, and driving leadership on the international stage.

Jordan
That’s the shape of it. For me, one of the key things to understand about any executive order in the US is that it’s not law, right? It’s a direction to the government. And there are some laws which give the US president the power to make an executive order that applies to private companies.
A small part of this order relies on the old Defense Production Act, and that’s the bit with the one reporting requirement you mentioned up front: you’ve got to do red-team testing of new models and report the outcomes to the government. So that part applies to the private sector.
The rest of the order is pretty much directions to US federal government agencies to make rules, do reports, and do various other things. My first reading of that was, oh well, that’s a bit weak, right? But when you go into the detail, it’s like 100 pages of directions, 150 distinct directions to something like 50 different federal entities to do really quite concrete things over the next 30, 90, 120 days. So it actually represents a huge amount of work and policy direction and movement in the US through those decentralized government organizations.

Arj
The breadth of those directions is quite large. It’s not just existential risk or frontier models, as we’ll talk about with the UK summit shortly. It’s looking at privacy, it’s looking at security, it’s looking at discrimination, fairness and equity, and it’s looking at innovation. Like you, I had an initial reaction of, ah, it’s a bunch of nice words on a page, but they’re quite discrete directions. And particularly in the US, so much of this conversation is still up for grabs, as we see in the whole OpenAI scenario. It’s so contested whether you should care about this at all that having the White House put out quite a detailed statement laying out the scope of the problem, with specific directions on what can be done, matters. I’m like you: ultimately I think we need much more accountable regulatory action at some point, but it’s not nothing.

Jordan
So yeah, generally positive on that one. The UK AI Safety Summit is the other bit of news. That happened at Bletchley Park, symbolic because of the World War II codebreakers. I’m not sure what cryptography has to do with AI, but it’s vaguely technology-relevant, I suppose.

Arj
It’s tech.

Jordan
Yeah, it’s tech. It’s,

Arj
It’s… it’s complicated.

Jordan
That’s what Bletchley Park stands for.
Yeah, at the start of November. This was a big international summit, and the EU and 28 different countries, including, importantly, China and the US, signed a declaration at the end of it. There was a lot of powwowing and discussion, a lot of tech leaders and country leaders getting together to talk about AI risk.
Around that summit, Rishi Sunak, the UK Prime Minister, was talking a lot about existential risk and innovation, the kind of talk around AI that gets me worried, because it’s focused on the big stuff and not on doing all the little stuff that the US executive order is doing that we’re so positive about.
But yeah, out of that they signed this Bletchley Declaration, affirming that AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible. That’s a nice motherhood statement.
They’ve also agreed to have future AI summits; the next one’s in Korea, and the one after is gonna be in France. And they’re establishing a UN body modeled on the Intergovernmental Panel on Climate Change, formed from representatives of the countries attending the summit, which is going to produce an annual report on the state of AI, much like those annual climate change reports. So there’s a bit there, but it’s also feeling very international and motherhoody.

Arj
Well, yeah, same thing. I was looking for a bit of a read on it, and the one that stood out for me was from a think tank called the Carnegie Endowment for International Peace, which was actually very favorable in its write-up and described it as a major diplomatic breakthrough. That’s essentially their takeaway: it’s one thing to get the EU, the UK, the US and Australia in there, but to also get China and the BRICS countries like Brazil and India into the room is a big deal.
You know, I think we wait and see for stuff that’s concrete.
There are some tie-ins to the US stuff, inasmuch as the US and the UK are both going to set up AI safety testing centres of some kind, which will be able to do some of the safety testing we talked about with the executive order. But my major bugbear is the framing, specifically around frontier models. There was a great line in an article I read where someone said it’s like having a fire brigade conference where all you talk about is a meteor strike that obliterates the country, and not all the real fires that are actual, imminent and present threats. So that’s probably my main issue with it.
One thing I thought was interesting was in the UK government’s press statement about the summit, there are quotes from a few different governments, including the Australian government. There’s a quote from Richard Marles, the Deputy Prime Minister, who attended, and he gives a short statement that Australia welcomes a secure-by-design approach where developers take responsibility.
And then he has this line: voluntary commitments are good, but will not be meaningful without more accountability. I just thought that was interesting. I can’t find that quote in Australia’s own government statements about the summit, but it was there in that one. And it felt like maybe a little bit of a signal that we need to be a bit more bold about how we place accountabilities on the companies that develop these models, beyond voluntary commitments.

Jordan
Yeah, that’s interesting. It certainly signals a bit of intent from the Australian government to do some regulation, which, to be fair, has been signalled previously as well, right? In the recent, well, maybe six months ago now, consultation on safe and responsible AI, the Australian government signalled pretty strongly that it’s looking to regulate in some way in this area.

Arj
And speaking of that, neither the UK government’s press release, nor Prime Minister Sunak’s speech at the summit, nor the executive order used the word regulation at all.
None of these major things we’re talking about mention regulation. So it’s kind of not on the agenda of these discussions, and that’s one fairly crystal-clear signal.

Jordan
Which is on the one hand disappointing. On the other hand, again, I feel like it’s relatively early days for the attention on this stuff, and they’re setting up multilateral places for debate: that IPCC-analogous UN organization for AI, and the commitment to this State of the Science report, which I think will be an annual production to build a shared understanding of the capabilities and risks posed by certain AI systems. Just having a forum for global debate at the country level, an understanding of where the tech is, and getting some of the hype out. Like, do we really need to worry about this existential stuff? It’ll be great to have, if we can, an agreed body of experts who can say: yes, that’s a worry; no, that’s not a worry; here’s where we should focus regulation. So I’m hopeful that it’ll go a long way to maturing that debate, but…
Yeah, it’s certainly not got any answers yet.

Arj
Yeah, no, I think so. There’s a lot of literacy and understanding around the problem space still to come, but in the interim there’s also entertainment, as long as OpenAI are in the picture and carrying on.

Jordan
Yeah. Look, you go from board-level soap operas, through to the American government doing some pretty good stuff, which is not a thing I usually say, and then to international agreements and 30 countries coming together. Yeah, I’m weirdly positive this week.

Arj
It’s all happening, as I say.

Jordan
Yeah, let’s look forward. I can’t wait. That State of the Science report’s been promised, I don’t know, early next year, I think. So it’s a bit of a wait, but I’m looking forward to it.

Arj
Good one. Okay. Well, till next time.

Jordan
Till next time.

Arj
Thanks, Jordan.

Jordan
See ya.