DeepMind Announces AI Ethics Group

DeepMind brings in advisers from academia and the charity sector to ‘help technologists put ethics into practice’ in the face of the rise of artificial intelligence.

London-based DeepMind has launched a research unit targeting the ethical problems raised by AI. The group aims “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all”.

Amongst its expert advisers are professors Jeffrey Sachs and Nick Bostrom, as well as climate campaigner Christiana Figueres. According to the unit’s leads, Verity Harding and Sean Legassick: “These Fellows are important not only for the expertise that they bring but for the diversity of thought they represent.”

DeepMind’s services extend to hospitals in the UK. Its projects include the Streams app used in the NHS, as well as AI research on eye disease and cancer diagnosis and treatment. Following several meetings, its independent review panel released its first annual report in July 2017. However, just days beforehand, DeepMind had been reprimanded over its data-sharing partnership with the Royal Free. DeepMind is also a founding member of the Partnership on Artificial Intelligence to Benefit People and Society. Co-founded by tech giants including Facebook, Amazon, IBM and Microsoft, the partnership was created to “conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness and inclusivity”.

The creation of the research unit signals a change in trajectory for DeepMind following a string of bad publicity over its clandestine partnership with the Royal Free. Notably, the launch of Streams was criticised for a lack of transparency over its privacy terms.

The new group also responds to public concerns and anxieties about AI and its effects on society. Tech leaders such as Elon Musk and Mark Zuckerberg have debated the repercussions of relying too heavily on AI, whilst others are concerned about the capacity of technology to act on behalf of human beings.

Researchers Ryan Calo and Kate Crawford wrote in Nature: “Autonomous systems are already deployed in our most crucial social institutions, from hospitals to courtrooms. Yet there are no agreed methods to assess the sustained effects of such applications on human populations.”

Designers and researchers “need to assess the impact of technologies on their social, cultural and political settings,” Calo and Crawford added.