
Trust, Ethics and building responsible AI: The Only Way is Ethics

Posted on 14/10/2019

In the professional services sector, the human factors which engender trust must be fully translated and sustained within AI ecosystems.

Any conversation about AI quickly turns towards trust and ethics, especially when considering how professional services can harness the power of AI to build future products. The ability to use AI responsibly, to create smart new products which customers trust and recommend, is a continuing source of competitive advantage, and there is a sense that the UK’s professional services are now at a tipping point, well placed to take advantage of the opportunities that AI provides.


AI development often holds up a mirror to the behaviours and values of the whole company, making internal decision-making processes more transparent to all. It can also amplify bias, risk, conflicts and ambiguities within the information that the company has acquired or gathered over time. In the race to build AI-driven business products there have been some costly, high-profile casualties where algorithms have exposed unconscious bias inherent in the data, all too obvious at the point of delivery. In the professional services sectors – law, accounting and insurance – trust is everything, and it is imperative that the human factors which engender trust are fully translated and sustained within AI ecosystems.


Professional bodies such as the Law Society are now actively engaged in fostering a multi-disciplinary approach to AI and ethics and are grappling with governance. Academic research is also under way at Birmingham Law School investigating how best to foster innovation in professional services while ensuring consumer protection. Guidelines – such as the EU Commission’s Ethics Guidelines for Trustworthy AI – are now being published and revised routinely, but these can be abstract and hard to put into practice, so companies need to take time to review, refine and embed their own responsible approach to AI product development and communications.


It’s clear that embedding ethical best practice from the start is the key to building better products, though the ways to achieve this vary considerably depending on the size, maturity and complexity of the company. There are various models for the process, along with open-source tools such as the Open Data Institute’s Data Ethics Canvas, which can be applied in almost any context.


The UK’s innovation agency Digital Catapult has been considering how to deliver practical support here for some time, and its Machine Intelligence Garage (MIG), which provides computational power and expertise to companies wanting to develop AI solutions, has placed ethical best practice right at the heart of its programme. The MIG has a wide network of industry practitioners serving on its Ethics Committee and last year published its seven-point Ethics Framework which, while aimed at start-ups, provides a useful blueprint for any business wishing to test and review its processes and ensure responsible product development, delivery and evolution.

MIG Ethics Framework

  1. Be clear about the benefits of your product or service
  2. Know and manage your risks
  3. Use data responsibly
  4. Be worthy of trust
  5. Promote diversity, equality and inclusion
  6. Be open and understandable in communications
  7. Consider your business model

In practice this framework acts as the starting point for deep conversations involving senior stakeholders, managers, marketeers and developers, helping everyone understand and articulate the company’s ethics and values, which can then be communicated and integrated into product development.


Once the framework has been considered, the next stage is to create a bespoke Ethics Roadmap unique to the company and closely aligned to its vision and values. Anat Elhalal, Head of AI Technology at the Digital Catapult, explains: “Guided by the framework this roadmap becomes a living document within the company. As development continues it is revisited and tested as dilemmas happen all the time. The way to drive change successfully is through the developer community and to help them anticipate issues before they become a problem.”


The key issue for Anat lies in how companies obtain training data and whether they actively consider the purpose for which that data was originally supplied. “Firms should always be asking themselves ‘Is it right to use this data for this purpose?’ If the training data has flaws then it is likely that these flaws will be amplified further down the line.” As data sources are acquired and documented, the team must remain vigilant, and the impact on customers, who may not have anticipated future uses of their data, should be carefully reviewed.


Imaginative new products based on apparently public data have run into difficulties in recent years. In the insurance sector, for example, where AI is being deployed widely to aid risk assessment, Admiral had to withdraw a 2016 plan to price car insurance based on Facebook posts, and an innovative product was substantially compromised. A recent paper from the Centre for Data Ethics and Innovation on AI and Personal Insurance provides a thorough overview of this space and concludes that “more work needs to be done to understand what the public views as an acceptable use of AI within the industry, including the types of data that insurers should be able to make use of.” This dynamic environment poses a challenge for insurers, whose legitimate business innovations may be out of step with what consumers find acceptable. Rigorous oversight and auditing of datasets, along with more accessible privacy notices, will help make transparent how customer data is gathered and used.


Larger legal, financial and insurance companies often have the governance in place, along with the resources to address these issues and the time to evolve policy, but tools for developers and smaller companies are still badly needed. Having published the Ethics Framework, the MIG is now considering how to deliver tools and modules that can remove friction in AI development. It is planning an Applied AI Ethics Hub where these tools will be made available, and in the future it hopes to offer group workshops, and possibly drop-in sessions, for start-ups and other businesses needing practical support.


AI for Services is bringing together all the parties within this dynamic ecosystem into a single network to facilitate multi-disciplinary and cross-sector discussions on salient topics such as human factors and ethics. The network is growing and now has more than 350 members, including organisations such as Allen & Overy, The Alan Turing Institute, The Office for Artificial Intelligence, BDO and Brit Plc. Register free here to become a member.


Upcoming activities include the Innovation Lab, organised by the Knowledge Transfer Network on behalf of UKRI in partnership with Via Dynamics, where leading professionals, academics working on AI and data solutions, and high-growth entrepreneurs will collaborate to develop projects and form consortia in response to the next Next Generation Services Challenge competition. The aim of this funding opportunity is to speed up the responsible adoption of artificial intelligence (AI) and data technologies and solutions in the accountancy, insurance and legal sectors by enabling better access to data. If you are interested in applying as a partner to this competition, simply register to join the network and you will receive the information.


The UK is in a great position to take advantage of the opportunities which AI brings to transform back-office operations and customer-facing products, and there is real energy in the sector as people see the potential, but there is also a need to proceed with care. As Apple CEO Tim Cook says: “We can achieve both great artificial intelligence and great privacy standards. It’s not only a possibility, it is a responsibility. In the pursuit of artificial intelligence, we should not sacrifice the humanity, creativity, and ingenuity that define our human intelligence.”

Other useful links:

The Law Society coverage of their event – AI and ethics: Plotting a path to the unanswered questions – includes VP Christina Blacklaw’s pertinent opening summary.


The Institute of Chartered Accountants in England and Wales explores some of the challenges facing its sector regarding ethics and new technologies and has published a Code of Ethics.


Cornell University’s From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practice.