G7 introduces voluntary AI code of conduct

Global government leaders are continuing to make it clear that they are taking AI’s risks and opportunities seriously.

Today, in the most recent government action around the evolving technology, the Group of 7 industrial countries (G7) announced the International Code of Conduct for Organizations Developing Advanced AI Systems. The voluntary guidance, which builds on the “Hiroshima AI Process” announced in May, aims to promote safe, secure, trustworthy AI.

The announcement comes on the same day that U.S. President Joe Biden issued an Executive Order on “Safe, Secure and Trustworthy Artificial Intelligence.”

It also comes as the EU is finalizing its legally binding EU AI Act and follows the UN Secretary-General’s recent creation of a new Artificial Intelligence Advisory Board. Composed of more than three dozen global government, technology and academic leaders, the body will support the international community’s efforts to govern the evolving technology.

“We… stress the innovative opportunities and transformative potential of advanced AI systems, in particular, foundation models and generative AI,” the G7 said in a statement issued today. “We also recognize the need to manage risks and to protect individuals, society and our shared principles, including the rule of law and democratic values, keeping humankind at the center.”

Leaders assert that meeting such challenges requires “shaping an inclusive governance” for AI.

An extensive 11-point framework

The G7 — consisting of the U.S., Britain, Canada, France, Germany, Italy and Japan, along with the EU — released the new 11-point framework to help guide developers in responsible AI creation and deployment.

The group of global leaders called on organizations to commit to the code of conduct, while acknowledging that “different jurisdictions may take their own unique approaches to implementing these guiding principles.”

The 11 points include:

– Take appropriate measures throughout development to identify, evaluate and mitigate risks. This can include red-teaming and testing and mitigation to ensure trustworthiness, safety and security. Developers should enable traceability in relation to datasets, processes and decisions.

– Identify and mitigate vulnerabilities and incidents and patterns of misuse after deployment. This can include monitoring for vulnerabilities, incidents and emerging risks and facilitating third-party and user discovery and incident reporting.

– Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use. This should include transparency reporting that is supported by “robust documentation processes.”

– Work towards responsible information-sharing and reporting of incidents. This can include evaluation reports, information on security and safety risks, intended or unintended capabilities and attempts to circumvent safeguards.

– Develop, implement and disclose AI governance and risk management policies. This applies to personal data, prompts and outputs.

– Invest in and implement security controls including physical security, cybersecurity and insider threat safeguards. This may include securing model weights and algorithms, servers and datasets, including operational security measures and cyber/physical access controls.

– Develop and deploy reliable content authentication and provenance mechanisms such as watermarking. Provenance data should include an identifier of the service or model that created the content and disclaimers should also inform users that they are interacting with an AI system.

– Prioritize research to mitigate societal, safety and security risks. This can include conducting, collaborating on and investing in research and developing mitigation tools.

– Prioritize the development of AI systems to address “the world’s greatest challenges,” including the climate crisis, global health and education. Organizations should also support digital literacy initiatives.

– Advance the development and adoption of international technical standards. This includes contributing to development and use of international technical standards and best practices.

– Implement appropriate data input measures and protections for personal data and intellectual property. This should include appropriate transparency of training datasets.

A ‘non-exhaustive’ living document

The G7 emphasized that AI organizations must respect the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and “humancentricity.” Advanced systems should not be introduced in a way that is harmful, undermines democratic values, facilitates terrorism, enables criminal misuse, “or poses substantial risks to safety, security and human rights.”

The group also committed to introduce monitoring tools and mechanisms to hold organizations accountable.

To ensure that it remains “fit for purpose and responsive,” the code of conduct will be updated as necessary based on input from government, academia and the private sector. The list of “non-exhaustive” principles will be “discussed and elaborated as a living document.”

The G7 leaders further assert that their efforts are intended to foster an environment where AI benefits are maximized while mitigating its “risks for the common good worldwide.” This should include developing and emerging economies “with a view of closing digital divides and achieving digital inclusion.”

Support from fellow global leaders

The code of conduct received approval from other global government officials, including Věra Jourová, the European Commission’s Vice President for Values and Transparency.

“Trustworthy, ethical, safe and secure, this is the generative artificial intelligence we want and need,” Jourová said in a statement. With the Code of Conduct, “the EU and our like-minded partners can lead the way in making sure AI brings benefits while addressing its risks.”

European Commission President Ursula von der Leyen, for her part, said that “the potential benefits of artificial intelligence for citizens and the economy are huge. However, the acceleration in the capacity of AI also brings new challenges. I call on AI developers to sign and implement this Code of Conduct as soon as possible.”
