Ray Eitel-Porter
Ray Eitel-Porter

Ray is an Intellectual Forum Senior Research Associate.

Ray is an expert in AI Safety and Ethics. As a pioneer in the field of Responsible AI, he worked on the first AI bias and fairness tool and also designed and built Accenture's internal AI compliance program. He advises organisations across industries on how to use AI safely and responsibly: past projects included multi-year programs at a global bank, a global retailer and a major health brand. Ray has led ethical AI research collaborations with Stanford, MIT, The Alan Turing Institute and the Institute for AI Ethics at Oxford University.

What are you working on now?

As Senior Adviser for Responsible AI to Accenture, I am engaged on client projects, and through Lumyz Advisory I also work as an independent consultant on AI safety and ethics. I am preparing a book on the practical application of Responsible AI in organisations, drawing on more than six years of experience in this specific area.

How has your career to date led to this?

Most of my career has been at the intersection of consulting and technology. This included founding a software business and corporate leadership positions. Prior to my role as Accenture's Global Lead for Responsible AI, I was head of Accenture's Data and AI business in the UK. I authored "Beyond the promise: implementing ethical AI" in The Journal of AI Ethics (2020), and subsequent publications on related topics.

What one thing would you most want someone to learn from what you've done or are doing now?

Learn to use AI responsibly. Business executives and other leaders may think that ensuring AI is used safely and responsibly is up to the technical team, but that is far from the truth. Everyone should be aware of the potential unintended negative consequences of using AI and look for ways to identify and minimise that risk. This starts at the very beginning, when considering whether AI is an appropriate solution to a particular problem or opportunity, continues throughout development and user testing, and carries on during deployment. AI is capable of adding huge value to organisations and to society, but we must all play our part in ensuring it is used in the right way.

What do you think of Jesus College and the Intellectual Forum?

I was fortunate to be invited to two residential conferences for Leaders in Responsible AI hosted by the Intellectual Forum. I was energised by the community of experts I met and by the discussions, both formal and informal. I admire the mission of the Intellectual Forum to facilitate and share the thinking of experts, and I am looking forward to helping to expand the Leaders in Responsible AI program in my new association with Jesus College.
