A group of law enforcement experts has expressed concerns over the lack of transparency in the Department of Justice’s (DOJ) inventory of artificial intelligence (AI) use cases.
The omission of facial recognition and automated license plate readers from the inventory, for example, has sparked discussions among experts, who are calling for more comprehensive and transparent documentation of AI applications in federal law enforcement agencies.
The comments were raised during a February National AI Advisory Committee (NAIAC) meeting discussing the integration of AI technologies in law enforcement practices.
Farhang Heydari, a member of the committee’s Law Enforcement Subcommittee and a Vanderbilt University law professor, was surprised by the omission of some law enforcement technologies. “It just seemed to us that the law enforcement inventories were quite thin,” he said in an interview with FedScoop.
Heydari further discussed in the interview the role of transparency in understanding the implications of AI in law enforcement. “The use case inventories play a central role in the administration’s trustworthy AI practices,” he said.
The issue gained further attention when the FBI disclosed its use of Amazon’s image and video analysis software Rekognition, prompting scrutiny into the initial exclusions.
In response, the Law Enforcement Subcommittee voted unanimously in favor of edits to Federal CIO Council guidance recommendations governing which AI use cases may be excluded from agency inventories.
The committee’s goal is to clarify interpretations of exemptions and to ensure a more comprehensive inventory of AI applications in federal law enforcement agencies.
This effort aligns with Office of Management and Budget guidance issued in November, which called for additional information on safety- or rights-impacting uses, particularly relevant to agencies like the DOJ.
Jane Bambauer, chair of the Law Enforcement Subcommittee and a University of Florida law professor, called for more specificity in defining sensitive AI use cases.
“If a law enforcement agency wants to use this exception, they have to basically get clearance from the chief AI officer in their unit,” Bambauer said. “And they have to document the reason that the technology is so sensitive that even its use at all would compromise something very important.”
The committee further proposed a narrower recommendation, favoring public disclosure for every law enforcement use of AI, with exceptions limited to cases that could compromise ongoing investigations or endanger public safety.
Additionally, the subcommittee addressed the exemption for agency use of AI embedded within common commercial products, which has often led to technologies like automated license plate readers being overlooked.
“The focus should be on the use, impacting people’s rights and safety. And if it is, potentially, then we don’t care if it’s a common commercial product — you should be listing it on your inventory,” Heydari added.
In a separate recommendation, the subcommittee proposed that law enforcement agencies adopt an AI use policy setting limits on technology usage and access to related data, along with oversight mechanisms to govern implementation.
Once finalized, the recommendations will be publicly posted and forwarded to the White House and the National AI Initiative Office for consideration.
While NAIAC recommendations carry no direct authority, subcommittee members hope their efforts will contribute to building trust and transparency in AI applications within law enforcement.
According to Heydari, transparency is crucial in bridging the gap between law enforcement and communities. “If you’re not transparent, you’re going to engender mistrust,” he said.