Between awe and anxiety: Automation and autonomy in the post-COVID world

“Robots will take over the world.” At any given moment, that ludicrous statement will draw ridicule at best. But in today’s world – where a pandemic has shut down schools, factories and restaurants, grounded airplanes and placed millions in lockdown – even the absurd sounds plausible. 

This particular claim, though, is more than plausible. In fact, experts are outright saying the post-COVID-19 world will have a much broader acceptance of automation: robots welcome to take over. Social distancing measures force people to eliminate the human element as much as possible. Customers may now prefer human-less, contactless interaction. Airports may speed up their automation efforts for touchless passenger processing. Businesses may need to reduce labour costs and are likely to look into automation as a way to do that. Automation may simply be the key to pandemic-proofing the global economy.

All elements for increased acceptance and demand for automation are at hand.

Right in the centre of the debate about automation in the post-COVID world, a new ECA paper on automation and unmanned aviation saw the light of day. The timing is a coincidence, as the drafting had started long before the COVID crisis.

But reading the paper today – through the lens of COVID-19 – brings even more weight and significance to its content. The paper argues that a competent human must always be at the centre of the system: the human must be in command. While the technological, legal and regulatory aspects still need to be sorted out, the COVID-19 crisis may have brought us considerably closer to ‘robots taking over’. How would we reconcile this with the vision of the human at the centre? Is this vision still viable, and if so, how can we defend it… as humans?

We reached out to part of the team behind the paper, with questions about the vision for automation in aviation as presented in the paper and how that may be affected by the current COVID-reality. 

Q: Niels Verhoeven (VNV) and Max Scheck (VC), experts from our UAS+ Working Group, and Paulina Marcickiewicz, ECA Policy Advisor, are part of the core team that worked on the paper. First, what is this paper about?

In a nutshell, it is about the popular subject of autonomy in automated systems, and especially in Unmanned Aircraft Systems (UAS). It addresses the problem of assigning an automation level to an automated system, given today’s lack of standards.

All this – from the aviation point of view.

Q: Where did the idea for this paper come from? Why was such a paper actually needed in the first place? 

We often hear about automation, and how it’s going to ‘change’ or even ‘save’ the world. At the same time, reading news about drones, you can stumble upon the term ‘autonomous’.

We realised that even in a professional context – among experts involved in discussions on the integration of drones into airspace – these terms are sometimes confused and used interchangeably. And there is much more nuance to their meaning than the famous ‘potato, potahto’ dilemma.

Also for legislation it can be very important to define how automated an operation actually is. At the moment there is only a definition of ‘autonomous operation’ in the ICAO RPAS manual, and you can see in the paper that this can’t really cover the different automated systems with their own specific requirements. The good news is that the automotive industry had the same problem for a while, so the Society of Automotive Engineers (SAE) came up with a standard of automation levels. We tried to apply this to aviation to prevent having the same problems all over again.
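For readers unfamiliar with the SAE framework referred to here, a minimal sketch of the six levels defined in the SAE J3016 driving-automation standard follows; the helper function is our own illustration of the human-in-the-loop boundary, not part of the standard or of the ECA paper’s aviation mapping:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six driving-automation levels defined in SAE J3016."""
    NO_AUTOMATION = 0           # human performs the entire task
    DRIVER_ASSISTANCE = 1       # automation assists with a single sub-task
    PARTIAL_AUTOMATION = 2      # automation handles several sub-tasks; human monitors
    CONDITIONAL_AUTOMATION = 3  # automation operates; human must take over on request
    HIGH_AUTOMATION = 4         # no human fallback needed within a defined domain
    FULL_AUTOMATION = 5         # no human fallback needed anywhere

def requires_human_fallback(level: SAELevel) -> bool:
    """Illustrative helper: at levels 0-3 a human must remain ready to intervene."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

The point of a graded scale like this is exactly what the interviewees describe: instead of a binary ‘autonomous or not’, each level makes explicit how much of the task, and the fallback responsibility, rests with the human.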

Generally, we felt that there was a lot of “half-baked” information out there from some self-proclaimed experts, containing a lot of half-truths.

We consider some of the concepts, as well as the propagated capabilities of ‘autonomous systems’, highly inflated – often driven by the commercial interests of some stakeholders. With our paper we want to educate and provide more realistic and balanced information.

Q: One of the core arguments is that humans must remain at the centre of the automation systems. That seems to be a lesson from manned aviation’s experience with automated systems?

Definitely. It seems like manufacturers always strive for maximum automation and there are loads of examples from the manned aviation industry where this has turned out disastrous. 

It is important to keep the human operator in the loop about what is going on, for obvious reasons. Instead of going for maximum automation, manufacturers should strive for optimum automation. A popular statement is that human error is responsible for 70% of accidents, and that because of this, more automation is always better. A better question to ask ourselves would be: in how many of all the other flights has human intervention in automated decisions prevented an accident from happening? Then you can see the value of the human operator better.

Any pilot who has flown a complex aircraft will be able to give you multiple examples of where the automation by itself would not have been able to deal with certain circumstances. A competent human pilot remains the decisive factor.  

Q: A major threat – according to the paper – lies in reducing the human to a system monitor. What’s the danger with that? 

This is a threat we already see in manned aviation. When arousal levels are low, people underperform in the task of monitoring the system. When the autopilot needs to be monitored for hours during a transatlantic flight, for example, and it is almost always behaving correctly, it can take a while for the human to spot a deviation in the system. This is just how our minds are wired.

This problem can be solved by giving the human optimal arousal levels through small tasks related to the operation – a good human–machine interface. As even the smallest tasks, like changing a frequency on the radio, have been automated away, arousal has dropped more and more, and it has become harder for any human to remain concentrated on the task for hours at a time. This is a good example of where manufacturers have passed optimum automation levels.

Q: Can humans and robots collaborate?

Definitely. We do it every day. Automation has made our lives better than they ever were. When you program your washing machine you interact with a robot and set its goals. Just see how much time these robots have given us to pursue other goals for ourselves. As we look at more complex automated systems, I am convinced we can collaborate, because ultimately the human sets the system’s goals.

Personally, I consider robots a tool and thus do not think “collaboration” in the sense of working together is an appropriate concept. I believe for true collaboration the “collaborators” need to have certain skills and capabilities – for example intentionality, purpose and meaning – which robots will not have for some time – perhaps never should have. 

However, robots have evolved into extremely sophisticated and highly helpful tools, without which a lot of things could not be accomplished as effectively and efficiently as they are today. Therefore, I think humans should – eventually probably must – use robots to achieve optimum results.

I do see a future where a robot and a human collaborate. I may be biased, as I grew up watching R2D2 & C3PO using their endless hacking and linguistic abilities to help the humans. But I do think there will be things we will never be able to accomplish or learn as human beings. And a device which would assist us could be a great complement in solving some problems.

Q: Reading the paper today, through the prism of COVID, do you see this pandemic as an accelerator of automation?

Yes. Removing a “potential carrier” from the transmission chain will undoubtedly appeal to several stakeholders – this is true for all areas of automation – not just aviation. 

The COVID-19 crisis will definitely leave an impact on society. At no other point in time have we spent as much time glued to our computer screens, relying on virtual means to remain in touch or to work.

That alone (and the context, of course) might make the public think that everything digital and computerised is safer. The fear of catching the virus from other humans might push those responsible for public spaces to invest in technologies that address such fear or anxiety. That can be shopping malls, large restaurants or airport terminals. They will want to keep people in, and they will need to look for solutions to ‘make you feel safe’. And a robot, or a computer, may become a synonym for such safety.

Q: Do you see particular threats with automation potentially getting such a powerful boost? 

The temptation to strive for maximum automation instead of optimum is definitely a real one. We should be careful to boost automation for the right reasons. Automation should only be increased when operational safety is increased.

I fear that if we do not apply a balanced approach when increasing the presence of automation, we may be heading into a dead-end street. Are systems designed predominantly by the human brain failure-free, 100% reliable?

Also, all of a sudden, questions of ethics would emerge. What is the role of a human if machines can do it better? Can WE accept the role of an assistant to a robot? Would it not downgrade or derail our development as a civilisation? Would such a second-tier role stimulate and challenge us sufficiently as human beings?

I am very concerned that the drawbacks of removing humans from the centre of highly complex systems, such as aviation, will not be adequately weighed against presumed benefits. There is currently a lot of media hype, not to say hysteria, surrounding the COVID crisis. Making fundamental decisions in such an atmosphere is not good. It has to be ensured that a holistic view is maintained, and that optimum overall safety – not just disease prevention – is the goal!

Q: Airports have made a major shift towards more automation. COVID would mean even more technologies would be welcome to replace human contact points. Do you see other aviation-related areas where this could happen?

At the moment there is a lot of pressure for single pilot aircraft. There are still a lot of technical and safety hurdles surrounding the operation of complex commercial aircraft with only one pilot. So, we should be very careful with accelerating this development. The current safety levels should definitely not be negatively impacted by this development or we risk creating a bigger problem than the one we are trying to overcome.

Just recently I saw a TV programme presenting a possible future of the travel industry. It talked about virtual reality technology and the increasing progress that has been made with it. What it proposed was basically offering you a virtually authentic experience of e.g. soaking in a Japanese onsen while staying in your own bathroom, simply diving into your own bath, armed with a VR headset. The smell and physical sensations would be there, and the visual sensations covered by VR. To get you to Japan, you could book a virtual flight ticket and go via a local company that operates a mock-up plane with first-class dining and cabin service. All without taking off from the ground (and at a much lower price). A photo session could be added, so you could post jealousy-triggering pictures on your social media account.

So yes, I do think aviation-related domains, such as tourism, might be the first ones to be affected. Especially if, due to political decisions, travelling were reduced to essential trips only, given its carbon impact.

I personally see it as a bad dream and hope it will never happen.

Q: Until not so long ago, studies showed passenger resistance to flying on pilotless or single-pilot planes. Do you think COVID will tilt the scales in favour of this, or end passenger concerns?

This will depend a lot on how long the COVID crisis lasts and how it re-shapes our societies and habits. It may become what 9/11 was for aviation security. 
If things calm down within the next year or so, I think passengers will still prefer to have pilots on board. 

As the pilots are in a closed cockpit before the passengers board the plane, and remain there until after they disembark, I don’t think this will be the deciding factor in overcoming passenger concerns. 

Q: Obviously, fewer pilots in the cockpit means fewer jobs but also different tasks and types of work. Does this fuel anxiety among pilots today? Are you – or pilots in general – ready for this transformation?

Job environments are changing all the time because of automation. Our worries are mostly about the problems created by single-pilot operation. Certain things we take for granted now, like basic cockpit philosophy, will have to change a lot. At the moment we don’t see it happening quite yet whilst maintaining our current safe operation, but it will be very interesting to see what our job looks like in the far future. As we like to be at the centre of complex technical systems, we are pretty sure there will be employment in the aviation sector and that there will still be a place for skilled and trained staff, even in the future.

This is a very interesting question. I think before COVID a lot of pilots were not overly concerned about automation taking over their jobs anytime soon. With all the current uncertainty, and in view of some of the dynamics mentioned above, I am quite sure that there is quite some anxiety among my fellow pilots. Are we ready for this transformation? Well, we must be – and perhaps our paper will help in making that happen.

It certainly fuels anxiety among young or aspiring pilots. The ones dreaming of flying since they were kids, or the older ones who had a revelation later in life, will find it challenging to go after their flying dream now. And certainly to find a decently paid, quality flying job. Even if alternatives exist, it will be sad to watch so many dreams perish and qualified people turn away from our sector.

Q: I have to ask: Will robots take over the world?

Not with the types of robots in existence today – however, as technology develops further, some fundamental philosophical base-parameters will have to be clarified and dealt with – but that is something for our children to do… 

The jury is still out on this one. Personally, I think we should be very careful which goals we give our systems. 

I do think the line between what’s real and what’s virtual will get blurrier with time. We will eventually arrive at levels of AI development allowing for independent, non-controlled operation of machines. So, yes, I think it will happen. And I agree with Niels on the need to be careful to set the right goals.