Over the 60-year history of artificial intelligence (AI), the field has grown to a position where real-life applications are both surprising and expanding exponentially. Mike Seymour, a Research and PhD Fellow at Sydney University, recently presented at an ‘Innovating with Impact’ session – Deloitte’s monthly forum for leading innovators to present inspiring stories and cutting-edge perspectives. Mike took the audience through the various forms of AI, advanced applications of the technology and what the future might look like.

Mike explained in easy-to-understand, non-technical language the principles behind different forms of AI – deep learning versus traditional computer vision. Unlike traditional computer vision techniques, which rely on explicitly programmed rules for tasks such as image and object classification, deep learning uses ‘neural networks’ that are trained on examples rather than programmed. This approach delivers greater accuracy in tasks such as image classification.

To showcase how advanced applications of deep learning are being used, Mike presented a cutting-edge example of a high-end, realistic digital human (avatar) – rendered to talk and react in real time. This digital human was in fact a virtual version of himself: Real-time MIKE! The technology is advancing exponentially, to the point where digital humans can be rendered with a high level of realism. It’s no longer a stretch to imagine the day when a friend might struggle to tell the difference between you and your avatar!

But will humans ever trust avatars? At Sydney University, Mike and other researchers are interested in how to make digital agents and avatars look and act like people. But, perhaps more interestingly, they’ve begun exploring the long-term ethical and societal implications of this technology. For example, if a lie was told while interacting with Real-time MIKE, would we consider Mike or the digital human to be the liar? When Real-time MIKE was put to the test, many people found it more trustworthy than a cartoon-style avatar that used less-advanced AI.
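The trained-versus-programmed distinction Mike described can be sketched in a few lines of code. The toy dataset, feature values and perceptron below are illustrative assumptions, not material from the talk: a hand-written rule stands in for traditional computer vision, while a simple perceptron learns an equivalent decision rule purely from labelled examples.

```python
# A toy illustration of "trained rather than programmed".
# Hypothetical "images" summarised by two hand-crafted features:
# (average brightness, edge count) -> label (0 = circle, 1 = square)
data = [
    ((0.9, 2), 0), ((0.8, 3), 0), ((0.7, 2), 0),
    ((0.3, 8), 1), ((0.2, 9), 1), ((0.4, 7), 1),
]

def programmed_classifier(features):
    """Traditional computer vision style: a human writes the rule by hand."""
    brightness, edges = features
    return 1 if edges > 5 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learning style: the decision rule (weights) is fitted to examples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w, b = train_perceptron(data)

def trained_classifier(features):
    x1, x2 = features
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Both classify this toy set correctly, but only the second one
# learned its rule from the data instead of being told it.
for features, label in data:
    assert programmed_classifier(features) == label
    assert trained_classifier(features) == label
```

Real deep learning replaces the two hand-crafted features with millions of learned parameters over raw pixels, which is where the accuracy gains Mike mentioned come from, but the principle is the same: the behaviour comes from training data, not from a programmer's rule.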
In other words, as the technology improves, our instinctive trust increases. It’s an important area of research – especially when one considers that we’ll likely see virtual assistants used across organisations, education, health, entertainment and more.

We spoke with (real) Mike about the future of AI–human interaction.

How can we manage the ethical implications associated with avatars and AI? What is your biggest concern about the implementation of these technologies?

There are a few really clear ethical issues to be discussed. If I have an agent or avatar acting for me, do you see that as me or as separate from me? Identity in this world may need some type of universal distributed blockchain ledger approach, or certainly third-party validation. There are also issues of trust, and a need for an audit trail of the deep learning/machine learning algorithms that these systems use. In the end, artificial intelligence is only as good as the data we humans give it. So will there be biases in my data, and therefore in my recruitment, if an agent advises on and optimises our hiring practices? Does the illusion of human intelligence cause more faith to be placed in systems because they appear to care?

What do you think this technology could look like in a workplace like Deloitte?

Have you noticed that these days we normally need several emails just to arrange a phone call? As work becomes team- and project-focused in an ever-shifting continuous improvement model, a personal digital assistant that crosses multiple platforms could be the invaluable productivity tool that many people need. We know big steps are being made in deep learning, and more broadly in AI, but the interface side of this will need to be more human – not less. We think an agent with an actual face could provide immediate and intelligent help in our business lives.

How would you recommend people start to understand AI in layman’s terms, and not be put off by the technical language associated with it? (Or should we leave it to the experts?)

That is such a great question. AI is such a large and evolving field. For example, there is a lot of specialist work on face recognition, which is separate from the work on deep learning insights into big data in a business analytics and finance sense. It’s also exciting how quickly it’s developing and changing. Our Motus Research Lab at Sydney University is specifically designed to help research, explain and demystify AI and the complex boundaries between machine and human. We agree it’s hard to learn about, especially as many people have agendas, but the future of business has never looked more interesting. It’s an open field with huge opportunities and a lot of room for innovation.

The history of AI is long and varied, and we’ve obviously gained significant ground in recent years. But how do you see AI expanding and changing again in the future? Or is that too hard to predict?

While it’s hard to predict, there is no doubt that there is a class of problems well suited to being solved this way. We now have a data-rich world to embed these systems in, and a real desire for faster, more managed help in a range of complex situations. We’re not about to have a singularity event where the machines take over, but personally, I expect the people who use these new ‘machines’ to be the ones who succeed.