Artificial intelligence continues to command the technological limelight, and rightly so: as we move well into the final quarter of 2023, there is wide international interest in harnessing the power of AI. But with the excitement and anticipation come some appropriate notes of caution from governments around the world, concerned that all of AI’s promise and potential has a dark flipside: it can be used as a tool by bad actors just as easily as it can by the good guys.
Thus, on October 30, 2023, US President Joe Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” while contemporaneously the G7 leaders issued a joint statement in support of the “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems,” an outgrowth of the Hiroshima Process launched in May 2023. The US executive order also references the anticipated November AI Safety Summit in the UK, which will bring together world leaders, technology companies, and AI experts to “facilitate a critical conversation on artificial intelligence.”
Understanding how AI will affect the CISO’s role is key
Amid the cacophony of international voices trying to bring order to what many see as chaos, it is important for CISOs to understand how AI and machine learning will affect their role and their ability to thwart, detect, and remediate threats. Knowing what the new policy moves entail is critical to gauging where responsibility for dealing with the threats will lie, and it provides insight into what these governmental bodies believe is the way forward.
CISOs will be well served to ensure they have visibility into the various working groups and advisory boards (e.g., the AISSB, discussed below) as they support their entity’s evolution and adoption of AI/ML tools. In addition, given the fluid nature of the global initiatives, harmonization across borders remains elusive; if guidance and regulations differ within regions or by country, downstream compliance issues could follow.
The US executive order on AI
The US executive order builds on prior White House engagement on AI and provides guidelines for industry and the government. Those entities that have a national security footprint should be especially attentive to the dual-use possibilities of AI technologies. The executive order points to seven important areas:
- Ensure safety and security.
- Protect the privacy of Americans.
- Advance equity and civil rights.
- Stand up for consumers and workers.
- Promote innovation and competition.
- Advance American leadership abroad.
- Ensure responsible and effective government use of AI.
Government agencies on the front lines of AI regulation
The National Institute of Standards and Technology (NIST) has a herculean task, which it characterized as an “opportunity” on social media: “AI provides tremendous opportunity, but we also must manage the risks. The [executive order] directs NIST to develop guidelines & best practices to promote consensus industry standards that help ensure the development & deployment of safe, secure & trustworthy AI.”
Meanwhile, the White House Office of the National Cyber Director characterized its understanding of the executive order on social media with precision: “Today’s EO establishes new standards for AI safety and security, the protection of Americans’ privacy, the advancement of equity and civil rights — it stands up for consumers and workers, promotes innovation & competition, advances American leadership around the world.”
The US Department of Homeland Security put out its own fact sheet explaining the executive order and its responsibilities, highlighting key areas:
- Formation of the AI Safety and Security Advisory Board (AISSB) to “support the responsible development of AI. This committee will bring together preeminent industry experts from AI hardware and software companies, leading research labs, critical infrastructure entities, and the U.S. government.”
- Work to develop AI safety and security guidance for use by critical infrastructure owners and operators.
- Capitalize on AI’s potential to improve U.S. cyber defense, highlighting how the Cybersecurity and Infrastructure Security Agency (CISA) is actively “leveraging AI and machine learning (ML) tools for threat detection, prevention, vulnerability assessments” (an approach sketched below).
Separately, CISA emphasized in its own social media post that it will “assess possible risks related to the use of AI, provide guidance to the critical infrastructure sectors, capitalize on AI’s potential to improve US cyber defenses, and develop recommendations for red-teaming generative AI.”
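To make the threat-detection point above more concrete, here is a minimal sketch of the kind of unsupervised anomaly detection that underpins many ML-based threat-detection tools. It is an illustrative assumption about the general technique, not a representation of CISA’s actual tooling; the session features, baseline values, and example data are all hypothetical.

```python
# Minimal sketch: anomaly-based threat detection with an unsupervised model.
# Feature names and values are illustrative assumptions, not any agency's tooling.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [bytes_out, failed_logins, distinct_ports]
baseline = np.array([
    [5_200, 0, 3],
    [4_800, 1, 2],
    [6_100, 0, 4],
    [5_500, 0, 3],
])  # sessions observed during normal operations

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# predict() returns -1 for anomalies and 1 for inliers.
new_sessions = np.array([
    [5_000, 0, 3],       # looks routine
    [950_000, 14, 160],  # heavy egress, many failed logins, port sweep
])
for features, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALY: escalate for review" if label == -1 else "normal"
    print(features, status)
```

In practice the baseline would be trained on far more telemetry, and flagged sessions would feed an analyst queue rather than trigger automatic action; the point is that the model learns “normal” and surfaces deviations, which is what “leveraging AI and ML tools for threat detection” generally means.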
Assessing the AI threat to intellectual property
The threat to intellectual property is not hypothetical, and it is front and center within the executive order. To bolster the protection of AI-related intellectual property, DHS, through the National Intellectual Property Rights Coordination Center, “will create a program to help AI developers mitigate AI-related risk, leveraging Homeland Security Investigations, law enforcement, and industry partnerships.”
Industry, in the form of IBM, chimed in with the admonishment that the “best way to address potential AI safety concerns is through open innovation. A robust open-source ecosystem with a diversity of voices — including creators, developers, and academics — will help rapidly advance the science of AI safety and foster competition in the marketplace.”
It has now been nearly a year since ChatGPT stormed into consumer hands, and the past 12 months have been nothing short of a whirlwind of adoption. CISOs must, as recommended previously, ask the hard questions and demand provenance and demonstrable test results from providers who espouse the inclusion of AI/ML in their products. While the global government initiatives are pointed in the right direction, it is clear that it will ultimately fall on the CISO’s shoulders to determine whether the arrows in their quiver are the right ones.
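On the “demand provenance” point, one simple check a CISO’s team can run is verifying that a vendor-supplied model artifact matches the digest the vendor published with its release. The sketch below is a minimal example, assuming the vendor publishes a SHA-256 digest in signed release notes; the artifact name and digest value are hypothetical placeholders.

```python
# Minimal sketch: checking a vendor-supplied model artifact against the
# SHA-256 digest published in the vendor's release notes.
# The artifact name and expected digest below are hypothetical placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so large model artifacts fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

artifact = Path("vendor_model.onnx")  # hypothetical model file from the provider
published_digest = "d2c1e9f0..."      # hypothetical value from the vendor's release notes

if not artifact.exists():
    print(f"{artifact} not found; request the artifact and its digest from the vendor")
elif sha256_of(artifact) == published_digest:
    print("Digest matches the vendor's published value; provenance check passed")
else:
    print("Digest MISMATCH: do not deploy, and escalate with the vendor")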