Robotics and Law Enforcement

Science fiction is the new reality

Police worldwide are experimenting with robotic technologies. Hollywood makes us think of robots as autonomous machines that behave like people, and recent deployments do bear some resemblance to movie and TV robots. But their ability to watch and react – rather than their appearance – will determine their impact on civilian society.

Surveillance robots have arrived. For an upcoming festival, Singapore is testing a simple device that resembles a Mars rover: a surveillance camera that roams about on wheels while a police officer operates it remotely and watches its video feed.

Dubai is experimenting with human-shaped robots. These operate as mobile communication devices through which citizens can report crimes, pay fines and get information. If the public accepts their familiar shape and friendly gestures, it may grow inured to their constant video surveillance.

This year, China deployed robots that resemble R2-D2 or perhaps Doctor Who’s nemesis, the Daleks. These roam about in public places such as train stations, sniffing for fires and providing public information.

The robots in Dubai and China share a significant feature: they carry cameras feeding facial-recognition software, which compares the faces of people they see to databases of fugitives. When they find a wanted person, they alert police.

Intelligent public surveillance is not new. In 2006, police in British Columbia started using automatic licence plate recognition systems. These devices scan the licence plates of vehicles near a police cruiser and identify which ones are stolen or associated with unlicensed or prohibited drivers. In 2012, the Privacy Commissioner set rules for police using these systems, but their essential features survived her scrutiny.
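The privacy stakes turn on how that matching works. The sketch below is a minimal Python illustration of the basic lookup such a system performs, using a hypothetical hard-coded watchlist; the plate numbers, reasons and function name are invented for illustration, and real systems read plates with camera optical character recognition and query live police databases.

```python
# Minimal sketch of the matching step in an automatic licence plate
# recognition (ALPR) system. Hypothetical data only; real systems read
# plates with camera OCR and query police databases.

# Hypothetical watchlist: plates flagged as stolen or tied to prohibited
# or unlicensed drivers.
WATCHLIST = {
    "ABC123": "stolen vehicle",
    "XYZ789": "prohibited driver",
}


def check_plate(plate):
    """Return the reason a plate is flagged, or None if it is not listed."""
    return WATCHLIST.get(plate.strip().upper())


# Plates "seen" by the cruiser's camera as it drives.
for observed in ["abc123", "DEF456", "xyz789"]:
    reason = check_plate(observed)
    if reason:
        print(f"ALERT: {observed.upper()}: {reason}")
    # Non-matching plates are simply ignored here; whether and for how long
    # real systems retain them is the kind of question the 2012 rules address.
```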

Robots can operate in hostile environments. For example, the Mars Rover explores where humans cannot breathe. Police officers also work in hostile environments: they go where the trouble is. Bomb squads deploy remotely controlled machines to handle suspected explosive devices. In hostage-takings, police robots carry communication devices, cigarettes, or other tokens of good faith from negotiators to their target. A human controls the device, and, from a place of safety, reasons with the person who threatens harm. This officer-by-proxy technique could reduce violent police behaviour in disadvantaged neighbourhoods where officers fear the population.

Lethal robots are under development. In the military sphere, armed robots already exist and have killed many times; American drones, after all, are nothing more than flying robots. Last year, Russian researchers demonstrated a grim human-shaped bipedal robot that accurately fired handguns at targets – using both hands. They intend to program it to operate autonomously.

The first documented killing of a civilian by a police robot occurred in Dallas, Texas in 2016. A sniper shot and killed five police officers from an impregnable position. Police used a bomb-squad robot to place a charge of plastic explosive on a wall behind him, and detonated it. A grand jury accepted that this violence was necessary to save lives, and declined to charge the officers criminally.

Will Canadian police resist the temptation to “weaponize” robots?

The RCMP in British Columbia currently opposes this idea. At this early stage, that makes sense. But if self-driving cars prove consistently safer than human drivers, then logical risk management suggests that police reconsider. If computers driving cars reliably make life-and-death decisions more safely than humans do, then prudence suggests that police robots, not people, should carry the weapons and decide when to use them.

Forget the Hollywood robot. Surveillance technology in robots already raises policy choices between security and privacy. Autonomous police robots may raise the more challenging ethical choice: for our safety, are we willing to sacrifice control?