
Navigating the practicalities of AI regulation and legislation

by Wire Tech


What CIOs need to know about the global patchwork of existing and upcoming laws governing AI – and what they need to be doing about them

Misusing artificial intelligence (AI) can have some very clear and expensive consequences. Movie studio Lionsgate recently joined a long list of organisations discovering that quotations and citations from generative AI (GenAI) systems need to be verified like any other source; Microsoft is being sued by a German journalist after Bing Copilot suggested he had committed crimes he had instead reported on; and a US telecoms service is paying a $1m fine for simply transmitting automated calls featuring a fake AI voice mimicking President Biden.

Enterprise enthusiasm for adopting GenAI remains high, meaning organisations are busy putting various governance, risk and compliance protections in place around its use in different jurisdictions. While the main reason for restrictions on AI usage is frequently data privacy and security concerns, regulation and copyright concerns are also high on the list.

Part of the problem for chief information officers (CIOs), however, is knowing exactly which regulations apply to AI, from the legal basis of using personal data to train AI models, to questions of transparency and discrimination when using AI systems.

Many organisations focus on upcoming legislation specifically designed to set rules for those developing and deploying AI systems, alongside a mix of regulations and voluntary guidelines for AI that can be individually useful, but make up what United Nations secretary-general António Guterres rather politely called a “patchwork” of potentially inconsistent rules.

But the impact of new laws hasn’t yet been felt, and changes in government in the UK and US make it harder to predict what future legislation will dictate, especially for UK businesses caught between the US and the European Union (EU).

Meanwhile, existing regulations that don’t explicitly mention AI already apply – and are being applied. This summer, the Brazilian data protection authority temporarily forced Meta to stop using “publicly available” information collected from its users to train AI models – a practice Meta had justified under the legitimate interests basis in Brazilian legislation similar to the General Data Protection Regulation (GDPR). The company had to notify users in advance and provide easy ways to opt out.

To safely navigate this web of regulations and laws – both upcoming and existing – that cover different stages of the development and deployment of AI, enterprises must therefore urgently get to grips with the direction of travel and appetite for enforcement in the countries they operate in.

Evolving UK priorities

Although there is likely to be an AI Bill in the UK, neither of the two private members’ bills making their way through the House of Lords is a reliable guide to what future legislation might look like.

The government seems unlikely to take exactly the same “pro-innovation” approach to AI regulation as the previous one, especially as it’s signed up to the Council of Europe’s Framework Convention on AI and Human Rights (which covers the use of AI by governments and other public bodies).

Currently a research organisation, the UK AI Safety Institute may get a new role as an additional regulator, alongside the Information Commissioner’s Office (ICO), Ofcom, the Financial Conduct Authority (FCA) and the Competition and Markets Authority (CMA).

The government report on Assuring a responsible future for AI envisions a commercial ecosystem of “AI assurance” tools to guide businesses using AI to mitigate risks and harms. It promises an AI Assurance Platform with a toolkit based on standards such as ISO/IEC 42001 (covering AI management systems), the EU AI Act and the NIST AI Risk Management Framework. Indeed, existing GRC tools such as Microsoft Purview Compliance Manager are already introducing reporting templates that cover these regulations.

As for AI providers, the minister for the future digital economy and online safety, Baroness Jones, told the World Trade Organization’s AI conference the government will soon “bring forward highly targeted binding regulation on the handful of companies developing the most powerful AI systems”.

Similarly, Ofcom recently issued an open letter to online service providers reminding them that the Online Safety Act (which imposes additional duties on search engines and messaging services starting in December 2024, with more beginning next year) also applies to GenAI models and chatbots. That’s aimed at the big AI platforms, but businesses using such services to build chatbots – for customer service, for example – will want to test them thoroughly and make sure content safety guardrails are turned on.
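As a rough illustration of what “turning the guardrails on” can mean in practice, the sketch below screens a chatbot reply through a content moderation service (here, Azure AI Content Safety) before it reaches a customer. The endpoint, key and severity threshold are placeholder assumptions, and the exact SDK surface may vary between versions.

```python
# Minimal sketch: check a GenAI chatbot reply against a content safety
# service before showing it to a customer. The endpoint, key and
# severity threshold below are placeholders, not real values.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<content-safety-key>"),           # placeholder
)

def reply_is_safe(reply: str, max_severity: int = 2) -> bool:
    """Return False if any harm category exceeds the chosen severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=reply))
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

draft = "..."  # text produced by the chatbot
if reply_is_safe(draft):
    print(draft)
else:
    print("Sorry, I can't help with that request.")  # fall back to a safe canned answer
```

The point is not the particular service, but that the check sits between the model and the customer, so the business – not the AI provider – decides what severity of output is acceptable.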

Disclosure requests

The ICO has already requested fairly detailed disclosures from large platforms such as LinkedIn, Google, Meta, Microsoft and OpenAI about the data used to train their GenAI systems. Again, that’s covered by the Data Protection Act 2018, the UK implementation of GDPR. “Substantially right now, GDPR is regulating AI,” Lilian Edwards, director at Pangloss Consulting and professor of technology law at Newcastle University, told Computer Weekly.

While the upcoming Data (Use and Access) Bill doesn’t include the sweeping reforms to data protection rights proposed by the previous government, it also doesn’t provide any extra clarity about the impact of UK GDPR on AI, beyond the existing guidance from the ICO, which makes it clear that senior management needs to understand and address the complex data protection implications of the technology.

“The definition of personal data is now very, very wide: data that relates to a person who can be made identifiable,” warned Edwards. “Any AI company is almost certainly, if not intentionally, processing personal data.”

More generally, she cautioned chief information officers not to dismiss legislation that doesn’t name AI specifically, noting that all of the compliance and risk management processes already in place apply to AI. “There are plenty of laws affecting companies that have been around for donkeys’ years that people aren’t as excited about: all the normal laws that apply to businesses. Discrimination and equality: are you in some way breaking the laws that are enforced by the Equality and Human Rights Commission? Are you breaking consumer rights by putting terms and conditions into your contracts?”

Edwards further warned that AI systems delivering hallucinations can breach health and safety laws (like Amazon selling AI-generated books that misidentify poisonous mushrooms).

Diya Wynn, responsible AI lead at Amazon Web Services, said: “We all have been very familiar and accustomed to having an understanding of sensitivity of data, whether it’s confidential, PII or PHI. That awareness of data and protecting data still is foundational, whether you’re using AI or not, and that should underpin internal policies. If you would not share PII or confidential or sensitive information normally, then you absolutely don’t want to do that in AI systems that you’re building or leveraging as well.”
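In practice, that often means screening prompts and uploaded text before they leave the organisation. The sketch below is a deliberately minimal, regex-based illustration of that idea; the patterns and the redact_pii helper are assumptions made for illustration, and a production system would rely on a dedicated PII detection tool rather than hand-rolled expressions.

```python
import re

# Minimal illustration only: redact obvious identifiers before a prompt
# is sent to an external AI service. Real PII detection needs a
# dedicated tool, not a handful of regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{8,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched identifiers with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Please summarise the complaint from jane.doe@example.com, tel +44 20 7946 0958."
print(redact_pii(prompt))
# Please summarise the complaint from [EMAIL REDACTED], tel [PHONE REDACTED].
```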

International implications

Despite (or perhaps because of) being home to the major AI suppliers, the US has no comprehensive, AI-specific federal laws. Both the result of the election and the highly fragmented legal and regulatory setup (with multiple authorities at both state and federal level) make it unclear what, if any, legislation will emerge.

The executive order on AI issued by President Biden called for frameworks, best practices and future legislation to ensure AI tools don’t break existing laws about discrimination, worker rights or how critical infrastructure and financial institutions handle risk.

Although ostensibly broad, it actually applied more narrowly to government services and organisations that supply the US government, including assigning responsibilities for developing guardrails to NIST, which runs the US AI Safety Institute.

As Paula Goldman, Salesforce chief ethical and humane use officer and a member of the national AI advisory committee that advises the US president, noted: “In a policy context, there are real, legitimate conversations about some of these bigger questions, like national security risk or bad actors. In a business setting, it is a different conversation.”

That conversation is about data controls and “a higher level of attention to good governance, hygiene and documentation that can be discussed at a board level”.

The executive order did specifically mention GenAI models trained with very large amounts of computing power, but no existing systems meet the specified threshold.

Many individual states have passed laws covering AI, some implementing the principles in the executive order, others regulating decisions in critical or sensitive areas made using GenAI without significant human involvement (again, GDPR already includes similar principles). California did introduce a sweeping range of legislation covering areas such as deepfakes and AI-generated “digital replicas”, of particular interest to Hollywood.

The Republican campaign talked about replacing the executive order with an approach to AI rooted in free speech principles and the somewhat mysterious “human flourishing”. It’s unclear what that would look like in practice – especially given the conflicting interests of various donors and advisors – and organisations doing business in the US will need to deal with this patchwork of regulation for some time to come.

The EU may set the tone

On the other hand, the EU is the first jurisdiction to pass AI-specific legislation, with the EU AI Act being both the most comprehensive regulation of AI and the one that suppliers are actively preparing for. “We are specifically building into our products, in as much as we can, compliance with the EU AI Act,” Marco Casalaina, vice-president of product at Azure AI Platform, told Computer Weekly.

That’s something Goldman expects businesses to welcome. “We have definitely heard a desire from companies to make sure that these regimes are interoperable, so there’s an overarching set of standards that can apply globally,” she said. “Honestly, it’s a lot to keep track of.”

The Council of Europe’s AI Convention follows similar principles, and if this leads to global legislation aligning with it in the same way that GDPR influenced legislation worldwide, that will simplify compliance for many organisations.

That’s by no means certain (there are already AI laws in place in Australia, Brazil and China), but any business operating in the EU, or with EU customers, will certainly need to comply with it. And unlike the mostly voluntary compliance approach of the executive order, the act comes with the usual penalties – up to 7% of global turnover for prohibited uses, and up to 3% for breaching most other obligations.

The first thing to remember is that the new legislation applies in addition to a wide range of existing laws covering intellectual property, data protection and privacy, financial services, security, consumer protection, and antitrust.

“It fits into this giant pack of EU laws, so it only covers really quite a small amount,” said Edwards.

The act doesn’t regulate search, social media, or even recommender systems, which are meant to be dealt with under the Digital Services Act.


Rather than focusing on specific technologies, the EU AI Act is about demonstrating that AI products and services available in the EU comply with product safety requirements, which include data security and user privacy, in much the same way that the CE mark acts as a passport for selling physical products in the EU market.

The act covers both traditional and generative AI; the latter provisions are clearly a work in progress, mainly intended to apply to AI providers, who have to register in an EU database. As in the US executive order, general purpose models with “systemic risks” (like disinformation or discrimination) are categorised by the computing capacity used to train them, but at the level of the training systems already in use by providers such as Microsoft and OpenAI, rather than future systems yet to be built.

The most oversight (or outright bans) applies to specific higher-risk uses, which are predominantly in the public sector, but also include critical infrastructure and employment.

“If you are building a chatbot for vetting candidates for hiring,” Edwards warned, that would fall into the high-risk category. AI systems with limited risk (a category that includes chatbots) require transparency, including informing users that they are interacting with an AI system; those with minimal or no risk, such as spam filters, aren’t regulated by the act.

These high-risk AI systems require conformity assessments she described as “a long list of very sensible machine learning training requirements: transparency, risk mitigation, looking at your training and testing data set, and cyber security. It’s got to be trained well, it’s got to be labelled well, it’s got to be representative, it’s got to be not full of errors.”
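For CIOs compiling an inventory of AI use cases, even a rough triage against these tiers shows where a conformity assessment, a transparency notice or no action at all is likely to be needed. The sketch below is a hypothetical, keyword-based helper for an internal register, not a legal determination; the tier names and keyword lists are illustrative assumptions based on the categories described above.

```python
from enum import Enum

# Hypothetical triage helper for an internal AI use-case register.
# The tiers mirror the act's broad categories described above; the
# keyword lists are illustrative assumptions, not legal advice.
class RiskTier(Enum):
    HIGH = "high risk: conformity assessment required"
    LIMITED = "limited risk: transparency duties apply"
    MINIMAL = "minimal risk: not regulated by the act"

HIGH_RISK_AREAS = {"hiring", "employment", "credit scoring", "critical infrastructure", "education"}
LIMITED_RISK_AREAS = {"customer service chatbot", "content generation"}

def triage(use_case: str) -> RiskTier:
    """Very rough first-pass classification of an AI use case by keyword."""
    text = use_case.lower()
    if any(area in text for area in HIGH_RISK_AREAS):
        return RiskTier.HIGH
    if any(area in text for area in LIMITED_RISK_AREAS):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("Chatbot for vetting candidates for hiring"))   # RiskTier.HIGH
print(triage("Customer service chatbot for order tracking")) # RiskTier.LIMITED
print(triage("Spam filter for the help desk inbox"))         # RiskTier.MINIMAL
```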

AI providers have to disclose at least a summary of their training set as well as the risks involved in using these AI systems, “so you can see what risks have been mitigated and which ones have been flagged as not entirely mitigated”, said Edwards.

The original intent of the act was to regulate traditional AI systems (not models) with a specific, intended use (like training, education or judicial sentencing), and to make sure they stay fit for that purpose by tracking updates and modifications. That makes less sense for GenAI, which is rarely designed for a single purpose, but the principle remains. Organisations deploying GenAI systems have fewer responsibilities than AI providers under the act, but if they make substantial enough modifications for these to count as a new system (including putting their name or trademark on it), they will also need a conformity assessment if they fall into a high-risk category.

Code of practice

It’s not yet clear whether fine-tuning, retrieval-augmented generation (RAG) or even changing from one large language model to another would count as a substantial modification, as the code of practice for “general purpose” GenAI systems is still being written.

Until that guidance clarifies the position for businesses deploying GenAI (and also the situation for open weight models), “it’s not certain how far this will apply to everyday individuals or businesses”, Edwards told Computer Weekly.

The act comes into force gradually over the next few years, with some provisions not in force until the end of 2030. Even if the EU takes the same approach as with GDPR (of enforcing the regulations very publicly on some larger suppliers to send a message), that may take some time – although multinational businesses may already be thinking about whether they use the same AI tools in all jurisdictions.

But one thing all organisations using AI covered by the act will need to do very quickly is make sure staff have enough training in responsible AI and regulatory compliance to have “AI literacy” by the February 2025 deadline.

“It’s not just for your lawyers,” Edwards warned: anyone dealing with the operation and use of AI systems needs to understand the deployment of those systems, and the opportunities and risks involved.

That kind of awareness is important for getting the most out of AI anyway, and getting ahead of the game rather than waiting for regulation to dictate safety measures makes good business sense.

As Forrester vice-president Brandon Purcell put it: “Most savvy legal and compliance teams know that if there’s a problem with an experience that involves your brand, you’re going to be culpable in some way, maybe not in litigation, but certainly in the court of public opinion.”

Originally published at ECT News
