The Open Source Initiative (OSI), the organization that maintains the Open Source Definition, has announced a revised definition of “open source AI.” The updated definition, intended to clarify the criteria an AI system must meet to be considered truly open source, could exclude models developed by industry giants like Meta and Google.
The OSI believes that open source principles are crucial for the development and accessibility of AI. They argue that removing barriers to learning, using, sharing, and improving AI systems will lead to significant benefits for everyone. In their recent blog post, they emphasized the need for “the same essential freedoms of Open Source” to enable developers, deployers, and users to fully reap the rewards of AI.
According to the new definition, models like Meta’s Llama 3.1 and Google’s Gemma would not qualify as open source AI. Nik Marda, Mozilla’s technical lead of AI governance, explained to PCMag that the lack of a precise definition in the past allowed companies to falsely claim their AI was open source. He believes that most large commercial AI models will fail to meet the new, stricter criteria.
The OSI’s move aims to address concerns that companies were exploiting the ambiguity of the older definition. Without a precise standard, vendors could alter the functionality of nominally open consumer AI products or disable access without notice, resulting in disrupted services, degraded performance, and increased costs for users.
While Meta and Google have not yet officially responded to the new definition, its implications for the future of AI development and accessibility are significant. The update could encourage greater transparency and collaboration in the field, ultimately leading to a more robust and accessible AI ecosystem.