War Thoughts
Military funding has played a role in many key computing developments, and the armed forces clearly see advanced AI as useful in combat situations. AI combat forces could mean fewer human soldiers being put in harm’s way, and could undertake missions ruled too dangerous for human combatants.
One of the key issues is – not to put it too bluntly – how do you create a machine with sufficient intelligence to distinguish friend from foe? Even the most hardened military character would suffer a serious case of the willies at the thought of explaining what went wrong in front of a court martial, should the worst happen. This is one reason why there’s invariably an element of human operation in military bots.
We spoke to Eric Taipale, a senior-level computer systems architect for defence contractor Lockheed Martin, to discuss its unmanned aerial vehicle (UAV), Desert Hawk III. "Desert Hawk III is a small unmanned aircraft system," he explained. "It has a 54in wingspan and weighs around 8lbs, including sensors and batteries. It was originally designed to provide visual day and night surveillance capability for military bases."
Desert Hawk’s aviation electronics systems (avionics) are coded in C, while the hardware uses a surprisingly high number of commercial components. "The guidance, navigation and control systems use high-reliability automotive and military-grade versions of standard embedded processors and Field Programmable Gate Arrays (FPGAs)," Taipale explained. The ground station consists of "rugged commercial laptop equipment, industrial-grade computing hardware in form factors such as PC-104 (a standard embedded form factor), and embedded or specialty processing elements, such as FPGAs."
Desert Hawk’s AI goes beyond a standard autopilot. "Desert Hawk’s AI system is used for route planning, deciding an efficient minimum path distance to visit a number of objective points, which minimises the time between visits to an area under observation," says Taipale. "The AI also suggests new flight plan updates to follow a moving object of interest in the most effective way."
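Route planning of this sort is a version of the classic travelling salesman problem. As a rough illustration (and emphatically not Lockheed Martin’s actual code), here is a minimal sketch in C of a greedy nearest-neighbour planner over hypothetical 2D waypoints: from wherever the aircraft is, always fly to the closest unvisited objective point. It’s cheap enough to rerun mid-mission, although it doesn’t guarantee the true minimum path.

/* Hypothetical sketch of nearest-neighbour route planning over
 * observation waypoints. Purely illustrative; not Lockheed
 * Martin's implementation. Link with -lm. */
#include <math.h>
#include <stdio.h>
#include <stdbool.h>

#define NUM_POINTS 5

typedef struct { double x, y; } Waypoint;

static double dist(Waypoint a, Waypoint b) {
    return hypot(a.x - b.x, a.y - b.y);
}

/* Greedy tour: from the current position, always fly to the
 * closest unvisited objective point. Assumes n <= NUM_POINTS. */
static void plan_route(const Waypoint pts[], int n, int order[]) {
    bool visited[NUM_POINTS] = { false };
    int current = 0;              /* start at the first point */
    visited[0] = true;
    order[0] = 0;
    for (int step = 1; step < n; step++) {
        int best = -1;
        double best_d = INFINITY;
        for (int j = 0; j < n; j++) {
            if (!visited[j] && dist(pts[current], pts[j]) < best_d) {
                best_d = dist(pts[current], pts[j]);
                best = j;
            }
        }
        visited[best] = true;
        order[step] = best;
        current = best;
    }
}

int main(void) {
    Waypoint pts[NUM_POINTS] = {
        {0, 0}, {4, 3}, {1, 5}, {6, 1}, {2, 2}
    };
    int order[NUM_POINTS];
    plan_route(pts, NUM_POINTS, order);
    for (int i = 0; i < NUM_POINTS; i++)
        printf("visit point %d\n", order[i]);
    return 0;
}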
While the AI can suggest changes, a human operator still needs to approve them (you wouldn’t want the aircraft flying into another country’s airspace without approval, for instance). As Taipale puts it, "operational doctrine mandates that human operators approve and understand recommendations made by the system".
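In software terms, that doctrine amounts to an approval gate between the AI’s recommendation and the flight controls. The sketch below is purely illustrative, with hypothetical names (the real ground-station software isn’t public): the proposed update is shown to the operator and executed only on an explicit yes.

/* Hypothetical human-in-the-loop gate: an AI-proposed flight-plan
 * update is executed only once an operator explicitly accepts it.
 * Names and structure are illustrative, not Lockheed Martin's code. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    double target_lat, target_lon;
    const char *reason;           /* why the AI proposed the change */
} PlanUpdate;

/* Block until the operator answers at the ground station console. */
static bool operator_approves(const PlanUpdate *u) {
    printf("AI proposes new waypoint (%.4f, %.4f): %s\n",
           u->target_lat, u->target_lon, u->reason);
    printf("Approve? [y/n] ");
    int c = getchar();
    return c == 'y' || c == 'Y';
}

int main(void) {
    PlanUpdate u = { 34.0522, -118.2437,
                     "follow moving object of interest" };
    if (operator_approves(&u))
        printf("Uploading approved flight plan update.\n");
    else
        printf("Update rejected; aircraft continues on current plan.\n");
    return 0;
}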
It isn’t only for oversight that humans are needed, though. Taipale admits that "current AI capability tends to lose effectiveness compared with a human operator as a mission evolves dynamically. A human operator is also a better judge of many subjective matters; for example, is a collected image of sufficient quality, or is an activity under observation worthy of additional time spent overhead?"
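It’s worth seeing why "is this image good enough?" is hard to automate. A machine can compute crude proxies, such as the variance of pixel intensities as a stand-in for contrast, but a single statistic says nothing about whether the frame actually shows what the analyst needs. A minimal, purely illustrative sketch:

/* Illustrative only: a crude automated "image quality" proxy using
 * the variance of pixel intensities as a stand-in for contrast.
 * Real judgements of whether a frame is usable involve context that
 * a simple statistic can't capture -- which is Taipale's point. */
#include <stdio.h>

#define W 4
#define H 4

static double intensity_variance(const unsigned char px[H][W]) {
    double mean = 0.0, var = 0.0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            mean += px[y][x];
    mean /= (double)(W * H);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            double d = px[y][x] - mean;
            var += d * d;
        }
    return var / (double)(W * H);
}

int main(void) {
    /* A tiny grayscale frame; low variance suggests a flat,
     * low-contrast (possibly useless) image. */
    const unsigned char frame[H][W] = {
        { 10,  12,  11, 10 },
        { 11, 200, 198, 12 },
        { 10, 199, 201, 11 },
        { 12,  11,  10, 12 }
    };
    double v = intensity_variance(frame);
    printf("intensity variance: %.1f %s\n", v,
           v > 500.0 ? "(enough contrast to review)"
                     : "(likely too flat)");
    return 0;
}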
Research into ways of helping AIs cope with fuzzier, more complex situations such as these is well under way, and the Polaris project has a bearing on this. However, it seems likely that human operators will be calling the shots in the military for a long time yet.