From nonprofit idealism to corporate power: OpenAI’s critical turning point

New governance model tests whether AI will serve society or investors


16 Feb 2026 7:58 AM IST

OpenAI, the maker of the most popular AI chatbot, used to say its aim was to build artificial intelligence that "safely benefits humanity, unconstrained by a need to generate financial return," according to its 2023 mission statement. But the ChatGPT maker no longer places the same emphasis on doing so "safely."

While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed "safely" from its mission statement, among other changes.

That change in wording coincided with OpenAI's transformation from a nonprofit organisation into a business increasingly focused on profits. It is especially newsworthy because OpenAI faces lawsuits related to the safety of its products.

Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims. As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word "safely" from its mission statement as a significant shift that has largely gone unreported outside highly specialized outlets.

And I believe OpenAI's makeover is a test case for how we, as a society, oversee the work of organisations that have the potential to both provide enormous benefits and do catastrophic harm.

Establishing a new structure

In October 2025, OpenAI reached an agreement with the attorneys general of California and Delaware to become a more traditional for-profit company. Under the new arrangement, OpenAI was split into two entities: a nonprofit foundation and a for-profit business.

The restructured nonprofit, the OpenAI Foundation, owns about one-fourth of the stock in a new for-profit public benefit corporation, the OpenAI Group. Both are headquartered in California but incorporated in Delaware.

The new structure is described in a memorandum of understanding (MoU) signed in October 2025 by OpenAI and the California attorney general, and endorsed by the Delaware attorney general.

Many business media outlets heralded the move, predicting that it would usher in more investment. Two months later, SoftBank, a Japanese conglomerate, finalised a USD 41 billion investment in OpenAI.

Mission statement

Most charities must file forms annually with the Internal Revenue Service with details about their missions, activities and financial status to show that they qualify for tax-exempt status. Because the IRS makes the forms public, they have become a way for nonprofits to signal their missions to the world.

In its forms for 2022 and 2023, OpenAI said its mission was "to build general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return."

That mission statement changed with OpenAI's Form 990 for 2024, which the company filed with the IRS in late 2025. It now reads: "to ensure that artificial general intelligence benefits all of humanity."

OpenAI dropped its commitment to safety from its mission statement, along with its commitment to being "unconstrained" by a need to make money for investors.

According to Platformer, a tech media outlet, OpenAI has also disbanded its "mission alignment" team. In my view, these changes signal that OpenAI is making its profits a higher priority than the safety of its products.

Legal governance structure

Nonprofit boards are responsible for key decisions and upholding their organisation's mission. Unlike private companies, board members of tax-exempt charitable nonprofits cannot personally enrich themselves by taking a share of earnings.

In cases where a nonprofit owns a for-profit business, as OpenAI did with its previous structure, investors can take a cut of profits but they typically do not get a seat on the board or have an opportunity to elect board members, because that would be seen as a conflict of interest.

Steps that might help

Several conditions in the OpenAI restructuring memo are designed to promote safety. For example, a safety and security committee on the OpenAI Foundation board has the authority to "require mitigation measures," which could include halting the release of new OpenAI products based on assessments of their risks.

(The writer is the Thomas Schmidheiny Professor of International Business at The Fletcher School and the Tisch College of Civic Life at Tufts University, Massachusetts, United States)
