
Regulations, Rights & Responsibilities play a big role in AI - Frameworks can make sense of them

[Image: chips on a blue circuit board. Photo by Jonas Svidras on Unsplash]

The starting pistol has been fired in a race to bring AI under the control of lawmakers, governments and policy folk worldwide.


The EU leads the pack, having passed an AI Act which applies from this March. Next comes the US White House, which issued an Executive Order establishing standards for AI safety, security and privacy. Finally, the UK hosted an all-star AI Safety Summit, while warning of “regulatory capture” by large firms.


In response to this frenetic policy activity, organisations looking to build, commercialise and control AI have embraced a wide variety of frameworks: more than 80 of them, as charted in Artemis Ethics’ excellent review.


Frameworks are a hot topic because they offer a path through a major current challenge with AI: getting traction.

80% of executives polled by Gartner expect AI to significantly affect their business, but fewer than half believe they have the right capabilities in place to harness the benefits.

So, done right, a framework should empower organisations to make the most of a new general-purpose technology like AI, and that spark is needed right now.


At the same time, we need guardrails to stop the whole process running amok. As KPMG puts it: “It’s necessary to have a governance framework to ensure AI systems are safe, transparent, traceable, non-discriminatory and environmentally-friendly”.


Balancing these two forces is where ethics comes in. Despite criticism that they amount to dumbed-down law, ethical frameworks are popular in the boardroom as a way for organisations to communicate their values to the public.


An ethical framework is a set of codes that an individual uses to guide their behaviour: an example would be the Ten Commandments in Christianity. By my own count, around 40 ethical frameworks are being trialled and used in commercial organisations, and they share a number of common principles:


  • Support the organisation’s principles and goals

  • Support the greater good

  • Respect human rights

  • Be consistent

  • Meet external standards

  • Be defensible

  • Be clear and easy to use

  • Protect the vulnerable

  • Seek outside voices

  • Consider local conditions

  • Consider industry requirements


These are pretty high level, but they make a good set of general statements with which to engage the C-suite and other key stakeholders, get on the record with citizens and move the whole issue out of the ‘compliance’ zone.


Here’s a visual example from the World Economic Forum, applied in the case of Responsible AI: 


[Image: flow diagram of Responsible AI governance, from the WEF]

These principles become goals to which organisations can apply the same rigour and measurement as they do for production output and financial results. This isn’t ethical hand-wringing; it’s commercial sense, and smart companies get this!


Among this plethora of frameworks, a few appear to be gaining recognition as a ‘gold standard’ that could spread in the same way GDPR did five years earlier:

  1. The EU AI Act sets out a harmonised legal framework based on a hierarchy of risks, which it is already starting to flesh out with specifics on which activities are classed as ‘high’ risk, along with obligations and codes of practice (see the sketch after this list).

  2. NIST has developed a framework for managing the risks AI poses to individuals and broader society, taking trustworthiness into account when designing, building and using AI products, along with a playbook of suggested actions.

  3. ISO published the international standard ISO/IEC 42001, specifying requirements for establishing, maintaining and improving Artificial Intelligence Management Systems (AIMS) in organisations.
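
To make the idea of a risk hierarchy concrete, here is a minimal sketch of how an organisation might triage its AI use cases against the EU AI Act’s four published risk tiers (unacceptable, high, limited and minimal). The tier names reflect the Act’s categories, but the classify_use_case helper and the keyword mapping are hypothetical, purely illustrative, and no substitute for a proper legal assessment against the Act’s annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "permitted, subject to strict obligations"
    LIMITED = "transparency obligations (e.g. disclose chatbots)"
    MINIMAL = "no new obligations"

# Hypothetical mapping from internal use-case labels to tiers;
# a real assessment would follow the Act's annexes, not keywords.
EXAMPLE_TIERS = {
    "social-scoring": RiskTier.UNACCEPTABLE,
    "cv-screening": RiskTier.HIGH,        # employment is a high-risk area
    "customer-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def classify_use_case(label: str) -> RiskTier:
    """Look up a use case; default to HIGH pending a proper review."""
    return EXAMPLE_TIERS.get(label, RiskTier.HIGH)

if __name__ == "__main__":
    # Unknown cases fall back to HIGH: a deliberately cautious default.
    for case in ["cv-screening", "spam-filter", "new-credit-model"]:
        tier = classify_use_case(case)
        print(f"{case}: {tier.name} ({tier.value})")
```

The cautious default matters: under a hierarchy-of-risks model, an unreviewed system is safer treated as high risk until someone with governance responsibility says otherwise.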


Over time, a blended approach should emerge, one that brings the structure needed to manage current AI experimentation in organisations and their suppliers downstream, while controlling the centralisation of risk around a handful of dominant platforms and LLMs upstream.


This is where the human ‘value-add’ provided by governance, privacy and compliance professionals is so vital. Frameworks have an important role to play as a flexible bridge between policy and process. However, building public trust in the complex interplay of tech, data and regulation also requires choosing the right tools and fostering a culture of responsibility and fairness.


This doesn’t happen overnight, but is an ongoing process of planning, collaboration and transparency.


“AI has always stood on the shoulders of human intelligence. It may be far superior in its ability to predict our preferences or obey every traffic law, but it still has much to gain from our uniquely imperfect perspectives.” - Forbes


