Promise, profit and public benefit: some thoughts on Musk v OpenAI

Elon Musk’s lawsuit against Sam Altman and OpenAI raises thorny questions about AI governance and the public interest (though they aren’t the ones listed in the complaint).

In a complaint filed in San Francisco, Musk alleges that Altman, fellow co-founder Greg Brockman, and eight entities which form part of the OpenAI corporate group breached commitments for OpenAI Inc. to ‘be a non-profit developing AGI [artificial general intelligence] for the benefit of humanity’ and to open source its technology. According to the complaint, these commitments formed terms of a ‘founding agreement’ and/or bare promises which induced Musk to bring his resources, time, and ‘clout’ to bear on the endeavour. The founding agreement, the complaint says, was set ‘aflame’ when OpenAI publicly released GPT-4 without accompanying source code or documentation about its internal design, and when its major financial partner, Microsoft, acquired an observer seat on the board. On that basis, Musk seeks unspecified damages (after some equitable accounting) and performance of the allegedly breached terms and promises.

The complaint hinges on a number of contentious allegations, many of which OpenAI disputed in a public response last week. Wider scepticism about the motives and merits of Musk’s claim abounds in media coverage and commentary. Still, it touches on some fundamental questions about AI governance, such as: What does fidelity to the ‘public interest’ require in the context of AI development? Who decides? And how are those decision-makers held to account?

To open source or not to open source

OpenAI Inc’s Articles of Incorporation say that ‘the corporation will seek to open source technology for the public benefit when applicable.’

The syntax lends itself to a couple of readings, but assume it is a commitment to open source technology when doing so would benefit the public. Open sourcing is, in any case, a spectrum: OpenAI says it is making its technology ‘broadly usable’, and maintains that its mission doesn’t imply open sourcing everything.

The dispute points to a wider debate about the public benefits of open sourcing large language models (LLMs), one of the major flashpoints in AI governance, which has reportedly played out in recent summits and closed-door meetings between lawmakers and tech leaders.

Advocates of open-source sharing argue that making all or part of the source code, weights, datasets and/or documentation for LLMs available on an open-source basis is the best way to democratise access to, and control over, the future development of AI. Opponents, on the other hand, emphasise the potential for misuse by ‘sophisticated threat actors’, who might remove safeguards built into open-source models in order to deploy them for an array of harmful purposes. Open-source licensing with guardrails (OpenRAIL) is one proposed way of mitigating misuse risks, but, as many point out, such licences are ‘only as good as the ability to enforce them.’

Corporate secrecy and public accountability

Open sourcing and making LLMs open to scrutiny are related but not co-extensive objectives. Claims to proprietary interests are among the common objections to proposals to increase public scrutiny.

Other commentators have suggested that the lawsuit may have been designed to make public more information about OpenAI’s operations. Legal discovery is one way to bring to light information subject to (sometimes under-interrogated) claims of secrecy and proprietary ownership. However, litigation is costly, time-consuming and largely inaccessible; it shouldn’t be the only, or even the main, avenue for meaningful scrutiny.

Private companies and public missions: who guards against mission drift?

OpenAI Inc. remains a non-profit with a stated mission to benefit humanity; its board, reconfigured after a high-profile boardroom drama late last year, is still charged with upholding that mission. That non-profit now sits at the centre of a complex arrangement of capped-profit and for-profit corporate entities, with eye-watering valuations, established to raise money for the computing and brain power it needs to build AGI.

The legal complaint talks a lot about the OpenAI Inc. board’s fiduciary duties ‘to humanity’. Yet it lists breach of fiduciary duty to Musk (not humanity) as the third cause of action, citing (among other things) failing to publicly disclose ‘details on GPT-4’s architecture, hardware, training method, and training computation’, erecting a paywall, and giving Microsoft an observer seat on the board.

Commentators have raised valid questions about the tensions between private profit, corporate vehicles and governance, and public benefit missions. Can profit-based models be compatible with public interest missions? What kinds of structures and governance arrangements can best protect those missions and guard against mission drift?

A number of frontier AI companies, including OpenAI, have floated and experimented with various forms of public participation – ‘democratisation’ – as a way of surfacing community expectations and aligning their products with them. But democratic systems of governance hinge not only on inclusive participation but also on capabilities for public contestation – that is, opportunities and avenues to contest decisions which don’t align with public interests. As I argue in more depth elsewhere, public contestability (in varying degrees and forms) will be critical to realising public interest missions through democratising AI governance.

However it proceeds, the lawsuit is unlikely to yield satisfactory answers to any of these important governance questions. That is not necessarily a bad thing, at least in the minds of those who think that questions of whether and how AGI is pursued and deployed shouldn’t be left solely or mostly to private contracts and litigation.
