Current strategy no path to autonomy


Monday, 15 August 2016


Two analysts from the DST Group (the federal Department of Defence’s Science and Technology Group) say that the current approach to developing autonomous technology is hampering our ability to deploy it and may never deliver true autonomy. They are using this argument to inform a new strategic research program.

Jason Sholz and Darryn Reid have penned an article titled ‘So, where are all the robots?’, in which they suggest that autonomy should by now be within our reach, given that all the pieces of the puzzle, such as motors, sensors, circuits, batteries and algorithms, are readily available. However, Sholz and Reid say that we are yet to deliver “a single operationally usable autonomous system worthy of the name” and that we have, instead, achieved varying degrees of automation rather than true autonomy.

Being from the DST Group, the analysts work primarily in defence, but their hypothesis regarding our misdirected approach covers many applications. The article addresses the growing public fear that artificial intelligence may pose a significant threat to humankind, and the authors posit that continuing along a path that seeks to make automation more complex (in an endeavour to reach autonomy) will only exacerbate that “unsolvable problem”.

Sholz and Reid suggest that this approach will continue to “result in systems that are less trustworthy, less verifiable and more dependent on complex human interaction (likely to be at times that are unwelcome) in an attempt to manage the risk of ever-more-spectacular failures”.

According to the authors, the inherent problem is that we want autonomy in order to avoid the unacceptable failures likely to occur outside controlled conditions, yet we simultaneously require it to remain open to the uncertainties of the real world. They find “no other body of organised research that addresses the true nature of this problem” and suggest that the prevailing thinking is widespread: there appears to be a common belief that developing more of the same technology will somehow deliver something different... eventually.

Sholz and Reid figure that true autonomy will only be achieved if future machines are able “to deal with fundamental uncertainty, for which sample spaces of possible outcomes cannot be known in advance, if at all”.
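
To make that distinction concrete, here is a minimal, purely illustrative sketch (not from the article; the event names and responses are invented) contrasting automation, which picks responses from a sample space enumerated in advance, with the open-world situation the authors describe, where events fall outside any pre-defined sample space.

# Illustrative only: an automated controller has a scripted response for
# every event its designers enumerated in advance.
SCRIPTED_RESPONSES = {
    "obstacle_ahead": "stop",
    "low_battery": "return_to_base",
    "waypoint_reached": "proceed_to_next",
}

def automated_controller(event: str) -> str:
    """Works only while the world stays inside the pre-enumerated sample space."""
    return SCRIPTED_RESPONSES[event]  # raises KeyError for anything unanticipated

if __name__ == "__main__":
    for event in SCRIPTED_RESPONSES:  # inside the known sample space
        print(event, "->", automated_controller(event))
    try:  # outside it: fundamental uncertainty
        automated_controller("contradictory_sensor_readings")
    except KeyError:
        print("Unanticipated event: no scripted response exists; handling it "
              "without a pre-defined sample space is the autonomy problem.")

The point of the toy is simply that enumerating responses, however exhaustively, remains automation; autonomy begins where the enumeration runs out.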

The pair will develop a strategic research initiative on ‘trusted’ autonomous systems, which they define as including the machine, the human and their integration. According to Sholz and Reid, integration exists to “complement the weaknesses of some parts of the system with strengths in other parts of the system”.

The research will be based around four themes: understanding the foundations of autonomy; realising these in cognitive machines; ensuring these machines operate as trustworthy partners; and their embodiment within novel platforms, sensors and effectors to achieve new capabilities.

The objective across each of these themes is to focus specifically on autonomous capabilities for managing uncertainty. This will be one to keep an eye on, as it represents a challenge to everything done to date. Can’t wait to see how it pans out.

Image credit: © Vladislav Ociacia/Dollar Photo Club
