You don’t have to agree with Elon Musk’s apocalyptic fears of artificial intelligence to be concerned that, in the rush to apply the technology in the real world, some algorithms could inadvertently cause harm.
This type of self-learning software powers Uber’s self-driving cars, helps Facebook identify people in social-media posts, and let’s Amazon’s Alexa understand your questions. Now DeepMind, the London-based AI company owned by Alphabet Inc., has developed a simple test to check if these new algorithms are safe.
Researchers put AI software into a series of simple, two-dimensional video games composed of blocks of pixels, like a chess board, called a gridworld. It assesses nine safety features, including whether AI systems can modify themselves and learn to cheat.
AI algorithms that exhibit unsafe behavior in gridworld probably aren’t safe for the real world either, Jan Leike, DeepMind’s lead researcher on the project said in a recent interview at the the Neural Information Processing Systems (NIPS) conference, an annual gathering of experts in the field.
DeepMind’s proposed safety tests come …
READ MORE ON BLOOMBERG.COM