Safety in the spotlight as AI booms
6 April 2022
In the animated Disney classic Fantasia, the sorcerer asks his apprentice – Mickey Mouse – to fill a cauldron with water. Instead of dutifully filling the pot with buckets of water, the apprentice casts a spell to make a broomstick do the work. Practising his master's tricks before learning how to control them, the apprentice forgets the spell and cannot stop the broom, which multiplies into an army of brooms. A lesson is learnt the hard way.
The film came a decade before the concept of intelligent machines became a popular research topic with the development of early computers in the 1950s. With Artificial Intelligence (AI), creating machines to carry out tasks previously done by humans was no longer the stuff of sorcery.
Federation University artificial intelligence researcher Associate Professor Peter Vamplew says AI has boomed in recent years, driven by advances in computational power, improvements in algorithms and the availability of vast amounts of data. But with that boom have come growing concerns about AI's safety and the possibility that it will eventually supersede human ability on many more tasks.
Associate Professor Vamplew and his collaborator Associate Professor Richard Dazeley were recently added to the Future of Life Institute's Artificial Intelligence Existential Safety Community. The appointment recognises the importance of their research, which applies concepts of multi-objective agency to the task of ensuring that advanced AI systems are safe and aligned with human needs and ethical values.
The Future of Life Institute is a philanthropic organisation aimed at diminishing risks to the future of humanity and has particular interests in nuclear proliferation, environmental issues, and AI safety. The AI Existential Safety Community consists of AI researchers mostly from the world's top 50 ranked universities. The collaborators are the first Australians to be admitted to the group, which will open up new opportunities to expand the international impact of their research.
"Artificial Intelligence has come on in leaps and bounds in recent years, and many people are starting to get concerned about what happens when we start rolling this out in practice in broadly applicable areas. What are the implications for society?"
"There are also concerns with aspects like fairness and if there's bias built into these decision-making systems or whether we're all going to be out of a job. These concerns about safety issues are there because what we currently have is AI that isn't necessarily very smart. It's very good at doing one thing, but if something changes or something unexpected happens, the reactions could be very different from what you'd expect – much like The Sorcerer's Apprentice." Associate Professor Peter Vamplew
The safety issue came to global prominence in 2015 when physicist Stephen Hawking, Tesla and SpaceX CEO Elon Musk and prominent AI researchers signed an open letter calling for research on the societal impacts of AI. The group wrote that while superhuman artificial intelligence could provide incalculable benefits, it also had the potential to end the human race.
"Around that time, I was doing research in a small subfield of multi-objective AI. The idea here is we don't just tell the AI to maximise one thing, we give it a list of things we care about, and it tries to find a solution that's a good compromise across all of those factors," Associate Professor Vamplew said.
"If I was building an autonomous car, for example, and I told it to get me to my destination as quickly as possible, it's going to break the law. It's going to drive too fast, and it's probably going to use more fuel than I would like it to use. So, our idea is to treat each of those parameters as a separate reward measure and then give the system some sort of guidance about how I want it to trade off between those different rewards.
"We realised as we started to do work in this area that it seemed like a very natural fit to the safety field because the more you have various factors you care about, the less chance there is of the system finding some sort of loophole that says, 'just keep maximising this one thing' because that's going to be bad with respect to the other factors."
Associate Professor Vamplew has been researching AI for about 30 years, beginning with his PhD. He said joining the AI Existential Safety Community would open the door for Australian researchers to participate in international collaborations, with the pair also likely to join the community's workshops in the United States.
"The potential benefits of AI are so great, and we're not saying 'stop, don't make it'. But we do need to start thinking about these issues of safety and ethics and so on. How do we ensure that this is actually going to be beneficial?"