Technology company BlueSkeye AI is aiming to improve safety on board aircraft with facial analysis technology that can identify fatigue in pilots. The next phase of the project will cost £20,000, covering equipment (cameras and microcomputers), programming and analysis time (human resource costs), and rental of a small aircraft and its pilots for testing.
By the end of March, the company expects to install a prototype in a two-seater aircraft. This will be used to collect data to inform improvements, define functional requirements and support a larger-scale test.
Professor Michel Valstar, founding CEO of BlueSkeye AI, spoke with Airport Technology about how the technology works and its benefits.
Jasleen Mann (JM): How and when was BlueSkeye AI founded?
Michel Valstar (MV): BlueSkeye AI was co-founded in 2019 by myself, CTO Dr Anthony Brown, and chief machine learning and software engineer Dr Timur Almaev to commercialise over 18 years of research by myself and my team in the fields of affective computing (AC) and social signal processing (SSP).
A spin-out from the University of Nottingham’s School of Computer Science, BlueSkeye received £3.4m in a venture capital funding round led by XTX Ventures in October 2022.
JM: What are BlueSkeye AI’s main areas of focus?
MV: The company’s aim is to create the most widely used technology for ethical machine understanding of face and voice behaviour, technology trusted to measure the mind using ubiquitously available, affordable devices.
We use machine learning to objectively and automatically analyse face and voice data (we are the only company in the field using both) to interpret medically relevant expressed behaviour and help our customers improve people’s quality of life at home, in their cars, and at work.
Our clinical-grade technology is already in successful use in social robotics, FMCG, and the health and wellbeing sectors.
Motor manufacturers have been quick to identify the potential of the company’s emotion AI to add significant value to the driver and passenger journey experience by identifying elements of discomfort, such as car sickness.
This led us to develop an SDK for use in the automotive industry and prompted the interest from the aviation sector.
Our technology constantly observes you, detecting minute differences in the way you behave, the way you look and the way you move. By comparing how your behaviour changes over time, it can identify a range of medical conditions that alter the way you behave.
JM: How does your strategy differ from competitors?
MV: The analysis our software performs is based on real science. It builds on research undertaken by myself and my research team at the University of Nottingham in the UK.
Our technology has privacy and security included by design. Data collection and storage are minimised wherever practical, and we process all data on people’s own devices, without using the cloud. Users can choose who they share their data with, and when they do, it is always over end-to-end encrypted channels.
A strong ethical framework is central to the development of our technology and products. Our AI models are designed to be interpretable and transparent, with predictions about mental state and clinical conditions based on readily verifiable behaviour primitives, resulting in a multi-layer AI system where outputs can be checked independently.
JM: How does this software offer insights about human behaviour in an aircraft?
MV: The software uses a microcomputer and small near infrared camera pointing at the pilot to record and analyse facial movements. Normally, we would also record voice [inputs] but this is challenging in an aero environment with background noise from the engine.
The camera monitors the facial muscle movement underpinning facial expression, identifying how much those muscles are activated. It also determines the direction of eye gaze, and the pose of the head. Together this data is analysed over time to assess how actively engaged the pilot is (Arousal).
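The time-series analysis described above can be illustrated with a simple rolling-window proxy: per-frame facial muscle (action unit) activations are averaged over a recent window to give a smoothed engagement signal. This is a minimal, hypothetical sketch, not BlueSkeye’s actual model; the window size and the use of a plain mean are assumptions for illustration only.

```python
from collections import deque

class ArousalEstimator:
    """Toy rolling-window arousal proxy from facial action unit (AU)
    intensities. Illustrative only; not BlueSkeye's production model."""

    def __init__(self, window_size=30):  # e.g. roughly 1 second at 30 fps
        self.window = deque(maxlen=window_size)

    def update(self, au_intensities):
        """Feed one frame's AU intensities (each in [0, 1]);
        return the current smoothed arousal proxy."""
        # Mean AU activation for this frame.
        self.window.append(sum(au_intensities) / len(au_intensities))
        # Arousal proxy: average activation over the recent window.
        return sum(self.window) / len(self.window)
```

In practice a real system would also fold in gaze direction and head pose, as described above, and use a learned model rather than a simple average.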
We could go further and use the same approach to assess how positive or negative the pilot is feeling (Valence) and how able they feel to deal with the cause of the emotion (Dominance).
By plotting these three values, with Valence on the x-axis, Arousal on the y-axis and Dominance as the depth of the plot, we can pick any point or collection of points within the three-dimensional space and give it a label. Typically, we use labels commonly applied to emotion, for example anger, disgust, fear, happiness, sadness and neutral.
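One simple way to realise this labelling is nearest-prototype classification: each emotion label is assigned a point in the three-dimensional Valence-Arousal-Dominance space, and a measured point takes the label of the closest prototype. The sketch below uses hypothetical prototype coordinates chosen for illustration; they are not BlueSkeye’s calibration.

```python
import math

# Illustrative prototype coordinates in (valence, arousal, dominance)
# space, each axis in [-1, 1]. Values are hypothetical examples.
PROTOTYPES = {
    "anger":     (-0.6,  0.7,  0.5),
    "disgust":   (-0.6,  0.3,  0.2),
    "fear":      (-0.7,  0.6, -0.6),
    "happiness": ( 0.8,  0.5,  0.4),
    "sadness":   (-0.7, -0.4, -0.4),
    "neutral":   ( 0.0,  0.0,  0.0),
}

def label_emotion(valence, arousal, dominance):
    """Return the emotion label whose prototype is nearest
    (Euclidean distance) to the given VAD point."""
    point = (valence, arousal, dominance)
    return min(PROTOTYPES, key=lambda name: math.dist(point, PROTOTYPES[name]))
```

For example, a strongly positive, moderately aroused point such as `label_emotion(0.7, 0.4, 0.3)` falls nearest the "happiness" prototype.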
JM: What are the benefits of this technology?
MV: Currently, fatigue is regulated by the number of hours a pilot is allowed to fly in any given period. But our software can infer the level of fatigue in an individual. Where a pilot begins to show a certain level of tiredness, they could be rested, or may simply need a cup of coffee for a temporary lift.
Conversely, where they are not tired, they could perhaps fly for a little longer. This has potential benefits for the airlines in terms of efficiency and more importantly for the safety of the passengers and crew.
Our software also allows the inference of mood and mental state, and this can be tracked over time with the potential to support pilots and improve airline efficiency.
The technology wouldn’t just be relevant for aircrew. It could be useful for ground staff in control towers and Air Traffic Control Centres.
JM: What is your outlook for the use of face scanning software in the future?
MV: The global artificial intelligence market is massive and growing at an astonishing rate, and the UK is a leading country in this field, so I see some huge opportunities.
But it’s not about the money. Our technology has the potential to save lives in cars, planes, and a range of safety-critical industries, anywhere that focus, alertness and mood are a matter of life and death.