New study has put forward a mathematical principle that could help businesses seek out the questionable strategies their AI systems may adopt.

TOP INSIGHT
Commercial artificial intelligence is likely to cheat you unless trained not to.
So, in an environment in which decisions are increasingly made without human intervention, there is a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy and reduce that risk or eliminate it entirely if possible.
Mathematicians and statisticians from the University of Warwick, Imperial, EPFL, and Sciteb Ltd, have come together to help businesses and regulators creating a new “Unethical Optimization Principle” and provide a simple formula to estimate its impact. They have laid out the full details in a paper bearing the name “An unethical optimization principle,” published in Royal Society Open Science on Wednesday, 1st July 2020.
The four authors of the paper are Nicholas Beale of Sciteb Ltd; Heather Battey of the Department of Mathematics, Imperial College London; Anthony C. Davison of the Institute of Mathematics, Ecole Polytechnique Fédérale de Lausanne; and Professor Robert MacKay of the Mathematics Institute of the University of Warwick.
Professor Robert MacKay of the Mathematics Institute of the University of Warwick said:
“The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process.”
MEDINDIA




Email




