Four major players in artificial intelligence (AI), namely OpenAI (the developer of ChatGPT), Anthropic, Microsoft, and Google (the owner of DeepMind), have formed the Frontier Model Forum. The newly established industry body's primary mission is to oversee the responsible development of frontier AI models, meaning systems that surpass the capabilities of the most advanced models available today.
Brad Smith, the President of Microsoft, stressed the importance of keeping AI technology safe, secure, and under human control. He described the Frontier Model Forum as a vital step towards bringing the tech sector together to advance AI responsibly and to tackle its challenges for the benefit of all humanity.
The forum has laid out several key objectives: advancing AI safety research and setting standards for evaluating models; promoting the responsible deployment of advanced AI models; engaging with policymakers and academics on trust and safety risks in AI; and supporting the use of AI for beneficial applications, such as combating climate change and detecting cancer.
The announcement of the Frontier Model Forum comes as efforts to regulate AI technology gain momentum. Recently, tech companies, including the forum's founding members, agreed to implement new AI safeguards following a meeting with US President Joe Biden. These commitments include watermarking AI-generated content to help identify misleading material, such as deepfakes, and allowing independent experts to test AI models.
While some skeptics point to the tech industry's record of failing to honor self-regulation pledges, the Frontier Model Forum aims to make a meaningful contribution to AI safety.
The forum also acknowledged the important work of bodies such as the UK government, which hosted a global summit on AI safety, and the EU, whose AI Act represents a serious legislative effort to regulate the technology.