When artificial intelligence can detect how you feel, it can respond to you more appropriately. We humans express our emotions not just through speech but also through non-verbal cues, including facial expressions, body language, gestures and tone of voice. AI that can recognise a person's emotional state in the moment, much as humans do, is known as Emotional AI.
Artificial emotional intelligence, or Emotional AI, uses body, face and voice sensors to analyse how humans express emotion. It combines speech and non-verbal cues to provide deeper insight into how a particular person is feeling at a given moment.
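To make the idea of combining cues concrete, here is a minimal, purely illustrative sketch of "late fusion", one common way such systems are built: each modality (face, voice) produces its own probability scores over emotion labels, and the scores are merged with per-modality weights. The labels, weights and input numbers below are invented for illustration and do not come from any specific product described in this article.

```python
# Hypothetical late-fusion sketch: combine per-modality emotion scores
# (e.g. from a face model and a voice model) into one distribution.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse_modalities(scores_by_modality, weights):
    """Weighted average of per-modality emotion score distributions."""
    fused = {e: 0.0 for e in EMOTIONS}
    total_weight = sum(weights[m] for m in scores_by_modality)
    for modality, scores in scores_by_modality.items():
        w = weights[modality] / total_weight  # normalise weights
        for emotion, p in scores.items():
            fused[emotion] += w * p
    return fused

def predict(scores_by_modality, weights):
    """Return the emotion label with the highest fused score."""
    fused = fuse_modalities(scores_by_modality, weights)
    return max(fused, key=fused.get)

# Illustrative inputs: the face model is fairly confident the person is
# happy, while the voice model leans towards neutral.
face = {"happy": 0.7, "sad": 0.1, "angry": 0.05, "neutral": 0.15}
voice = {"happy": 0.3, "sad": 0.1, "angry": 0.1, "neutral": 0.5}

label = predict({"face": face, "voice": voice},
                {"face": 0.6, "voice": 0.4})
# label → "happy"
```

Real systems replace the hand-written score dictionaries with trained models per modality, but the fusion step often looks much like this weighted average.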
This remarkable technology is being used by various industries around the world. For example, movie and entertainment giant Disney is testing Emotional AI by outfitting a cinema with infrared cameras that record facial expressions. By matching scenes in a movie with viewers' reactions, the technology lets the company predict how audiences will respond to its films. Likewise, BBC Worldwide is using facial emotion recognition to predict whether viewers will like new programmes. Its AI system analyses social sentiment to forecast global demand for its content in specific geographical areas.
While these examples come from the entertainment and media industries, AI is also recognised as having great potential for preventing crime and improving security. Emotional AI could therefore be a powerful tool in making smart cities better places to live. On the face of it, the technology seems extremely helpful. However, a recent annual report from the research institute AI Now calls for a ban on the use of emotion recognition technology. Read on to learn more about it!
Emotional AI Use Should Be Stopped
According to AI Now, there is very little scientific evidence behind emotion recognition technology, and it should therefore be banned from use in decisions that influence people's lives. Furthermore, the report notes evidence that emotion recognition can amplify race and gender disparities. It cites a recent study by the Association for Psychological Science, which spent two years examining over 1,000 papers on emotion detection and concluded that facial expressions alone cannot reliably tell how someone is feeling.
In fact, the AI Now report asks regulators to restrict the use of the technology, and urges AI companies to stop deploying it in sensitive applications until the risks have been studied thoroughly. The report also criticises the AI industry for its "systemic racism, misogyny, and lack of diversity" and calls for mandatory disclosure of the industry's environmental impact.
One Step Ahead
Meanwhile, the UK and Japan have come together to investigate the broad and uncertain impacts Emotional AI could have on society, culture and the economy. The two countries have jointly launched six projects for this purpose, each of which will run for three years.
The projects aim to improve understanding of how AI systems affect individuals' happiness and wellbeing, and will explore the technology's potential to transform different areas. One project will recommend best practice for the use of AI in healthcare to ensure benefits in both countries. Another will develop methods to predict how advances in AI could change and automate household work. A third will explore the consequences of introducing AI into the UK's and Japan's legal systems.
All six projects are funded through UK Research and Innovation's (UKRI) Fund for International Collaboration (FIC) as part of a joint UK-Japan initiative. The Arts and Humanities Research Council and the Economic and Social Research Council, both part of UKRI, have provided £2.4 million through the FIC, while the Japan Science and Technology Agency has granted ¥180 million. Under the initiative, Northumbria University and Bangor University have been selected to research Emotional AI.
The Northumbria University Project
The project involving Northumbria University will focus on the use of Emotional AI in security and policing, exploring the ethical implications of AI systems that read human emotions in terms of both benefits and potential challenges. The project is entitled Emotional AI in Cities: Cross-Cultural Lessons from the UK and Japan on Designing for An Ethical Life. Dr Diana Miranda, a professor in Criminology and a member of Northumbria's Centre for Crime and Policing, will work on the project.
Smart cities, with their intelligent devices, buildings and infrastructure, are developing at full speed. Experts therefore anticipate that Emotional AI could improve our quality of life, particularly by helping to prevent crime and terrorism while improving security.
Nonetheless, the technology has strong ethical implications because it gathers personal data, particularly in public spaces, and this could lead to mistrust and concern among citizens. As Dr Miranda explained when discussing the project, it is essential to understand the social and ethical implications of the various practical methods that allow AI machines to read our feelings, intentions, bodies and emotional states.
Going further, the research team will investigate commercial, security and police settings, as well as the media context, including social media. They will interview organisations that are developing or introducing AI in smart cities, and will also talk to residents to learn their perspectives on Emotional AI. This information will then inform advice on how Emotional AI should be shaped in future developments.
According to Dr Miranda, Emotional AI is not about identifying an individual but about recognising their intention by reading how they behave: focusing on their voice, intonation, facial expression, and physiological characteristics such as heart rate and temperature.
For example, the technology is already being used to improve car safety by analysing the driver's emotional and physical state. That said, Dr Miranda added, although it is advancing rapidly, we still know very little about its ethical impact on our lives. Another important part of the project will be the development of a think tank to advise governments, industry, educators and other stakeholders around the world on the use of Emotional AI.
The Bangor University Project
The other project has been awarded to a research team from Bangor University, who will study ways in which people can live in harmony with AI technologies that sense human emotions.
Japan and the UK are both advanced nations in AI development and adoption, yet they have very different political, social, normative and techno-ethical histories, which gives the team rich scope for research. Other important issues include the logics of sensing technologies; the degree to which emotional display is universal across cultures; the nature of ethnocentric diversity in social media usage and the expression of emotion online; and possible differences between European and Japanese perceptions of what constitutes privacy and sensitive data.
The interview programme will be similar to Northumbria University's. Andrew McStay, Professor of Digital Life at Bangor University and head of the research team, notes that the largest organisations are adopting Emotional AI systems in cars, streets, classrooms, homes and beyond, and that the technology is entering diverse sectors of smart cities. For both Japan and the UK, he argues, it is urgent that we understand its societal implications: how will Emotional AI be deployed in our cities? How do citizens feel about the technology? Are current policies appropriate for it? And so on.