AI-powered police body cams raise privacy and bias concerns

Rebecca Kendall, Vice President

The increasing use of police body cameras equipped with artificial intelligence is raising concerns over privacy, racial bias, and insufficient oversight, according to a report released by the R Street Institute, a Washington-based think tank.

Initially introduced in the 2010s to enhance transparency and accountability during police interactions with the public, body-worn cameras have become standard equipment in many police departments nationwide. However, the R Street report argues that integrating AI has introduced new risks, such as misidentifying individuals and collecting sensitive data without consent.

“The line between public security and state surveillance lies not in technology, but in the policies that govern it,” warns the report. It highlights the growing use of facial recognition and real-time video analytics within law enforcement tools.

To address these issues, the report recommends stricter state regulations. These include requiring warrants for facial recognition use, setting higher accuracy standards, limiting data retention periods, and conducting regular audits to identify racial or systemic biases.

Logan Seacrest, one of the authors of the report, stressed the need for human oversight in decision-making processes involving AI technology. “Not letting the AI kind of make any final decisions by itself…that stuff really should remain in human hands,” he stated.

Privacy advocates are increasingly worried about how footage from these cameras is captured, used, and stored. Body-worn cameras do more than record crimes; they often capture individuals during medical emergencies or inside their homes. Some police departments collaborate with technology firms like Clearview AI and Palantir to analyze footage in real time without clear rules or transparency guidelines.

“Predictive systems can also open the door to invasive surveillance…these tools could be abused by bad actors,” reads part of the report.

In New Orleans, police officers drew criticism for using facial recognition, in violation of city policy, across a private network of more than 200 AI-equipped surveillance cameras. The city responded by proposing an ordinance that would allow police broader use of the technology.

Seacrest called the backlash “completely predictable,” given civil liberties concerns over warrantless algorithmic dragnets operating outside legal boundaries.

The R Street report also emphasizes how AI errors disproportionately affect communities of color. It cites a 2020 incident where Robert Williams was wrongfully arrested after being misidentified by a facial recognition system in Michigan.

Several states have acted on these concerns. California banned facial recognition on police body cameras, though that law expired in 2023. Illinois recently strengthened its Law Enforcement Officer-Worn Body Camera Act with measures such as retention limits and a prohibition on live biometric analysis.

“There’s nothing inherently incompatible about AI and civil liberties…It’s really just a matter of guardrails we put in place for it,” said Seacrest regarding balancing AI usage with democratic oversight.

The report concludes that government oversight of AI in policing remains inconsistent nationwide. Seacrest, however, sees potential benefits in leaving room for local control: “I think regulations are often best if they are created closest to people they affect…use should be done at state/local level.”


