CUSTOMER SUCCESS BRIEF

Law Enforcement on the AI Frontier: Seizing the Potential Requires Strong Fundamentals

AI can empower law enforcement agencies to expand their efforts to protect public safety

01-30-2025

Key Takeaways:


  • AI can empower law enforcement agencies to expand their efforts to protect public safety—but deployment in the field must balance significant advantages against potential concerns.
  • Law enforcement agencies are already applying AI across a number of use cases that support deductive reasoning, problem solving and crime analysis.
  • Getting the full value from AI applications requires trusted data sources, human intelligence and a modern IT infrastructure—and concerns regarding privacy, potential biases and ethical use must be addressed.


Just like their peers in government and industry, law enforcement officials are asking a fundamental question: “What does law enforcement look like in the age of AI?”

This is an important question to ask—and a complex one to answer. Many in law enforcement see AI as a game-changer. A powerful complement to human intuition, AI excels at deductive reasoning, problem-solving and crime analysis at speed and scale.

But AI can’t solve all of law enforcement’s challenges. There are implications to be understood and explored.

As with any technology, seizing AI's full potential means getting the fundamentals right. Enforcing the law and protecting our people in the age of AI requires that we use AI for what it does best (processing vast quantities of data quickly and detecting patterns) so that humans are free to do what we do best: making nuanced judgments in complex situations.

AI can process vast datasets, recognize patterns and make predictions far more reliably and quickly than humans.

AI is a teammate, not a tool

As data volumes and complexity increase, and as criminals grow more technically sophisticated, law enforcement agencies need commensurate investments in advanced technology for proactive mitigation and enforcement.

Techniques like facial recognition and biometric screening, which now use AI to enhance their performance, can quickly identify people and suspects in real time and aid in investigations. The application of these techniques has measurably reduced response times, improved public safety1 and given law enforcement officers tools to counter criminals. These tools are being used today to aid in proactively identifying and preventing drug movements2, human trafficking3, money laundering and more.

However, AI is not a standalone tool. It is a teammate. It complements human work and empowers law enforcement agents to expand their efforts to protect public safety. It can help make up for personnel shortfalls, speeding up decisions by surfacing the most relevant and timely data. This assistance can help clear the backlog of investigative leads. AI can also help agencies map the vast threat spectrum facing law enforcement and the American public, including threats that are unknown and unidentified today.

Top use cases for AI in law enforcement today

AI offers a wide range of applications for law enforcement agencies to enhance their capabilities, improve efficiencies, identify intelligence gaps and support efforts to maintain public safety4. And the technology is advancing all the time. Here are some applications that are in use today.

In addition to supporting active law enforcement work, AI can also streamline administrative tasks by automating report writing and documentation requirements. This allows law enforcement officers to focus on critical, mission-focused tasks.

Deductive reasoning

  • Identity protection. AI algorithms can monitor and analyze transactions in real time to detect unusual patterns or anomalies that may indicate identity theft or fraud. AI is used for biometric authentication and analysis of behavioral biometrics. It can detect and thwart phishing attempts, monitor social media, verify identity documents and more.
  • Real-time policing. AI-based geospatial location technology can help assess crime hotspots and trends and enable preparedness action by allocating officers accordingly. AI can also unravel patterns and spot infractions in time to stop them. AI-powered drones and vehicles provide additional surveillance and allow officers to react faster to threats.
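The transaction-monitoring idea above can be illustrated with a minimal sketch. This is a toy example, not an operational system: the amounts are fabricated, and a simple z-score threshold stands in for the far richer pattern detection a production fraud model would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily card transactions: 500 routine amounts plus two
# hypothetical fraud spikes (all values are made up for illustration).
amounts = rng.normal(loc=120.0, scale=15.0, size=500)
amounts = np.append(amounts, [900.0, 1150.0])

# Flag transactions more than 4 standard deviations from the mean.
z_scores = (amounts - amounts.mean()) / amounts.std()
flagged = np.where(np.abs(z_scores) > 4)[0]
print("flagged transaction indices:", flagged)
```

Even this crude detector isolates the two injected spikes; real systems layer many such signals (behavioral biometrics, device fingerprints, velocity checks) and learn the thresholds from data.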

Problem-solving

  • Identity recognition. Facial recognition technology is used to identify and track suspects or missing persons. The effectiveness of these algorithms improves dramatically with synthetic data, which is algorithmically generated data that mimics the properties of real data. Synthetic data can be used to generate thousands of versions of an image with different eye colors, hair styles and skin tones. These synthetic training datasets can vastly improve the performance of facial recognition algorithms. In addition, biometric markers such as voice recognition, iris scans (unique pattern capture), retinal scans (blood vessel patterns), hand geometry (hand shape and size), keystroke dynamics (typing patterns) and gait analysis are used for identity recognition.
  • Crime scene assessment. AI can help assess crime scenes, especially when used in concert with real-time trusted fusion center or records management system (RMS) data. AI can also be used for 3D scene reconstructions from images and videos to aid investigators. Further, it can enhance surveillance footage, bullet trajectory analysis, blood spatter analysis, DNA analysis and a host of other tasks to aid human investigators. The data ingest should include the National Integrated Ballistic Information Network (NIBIN) and the Combined DNA Index System (CODIS) to enhance the effectiveness of the analysis.
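The synthetic-data idea described above can be sketched in a few lines. This is a deliberately simplified stand-in: real pipelines use generative models to vary eye color, hair style and skin tone, whereas this toy version only applies flips, brightness jitter and noise to a placeholder image array.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(image: np.ndarray, n_variants: int = 8) -> list:
    """Generate simple synthetic variants of a face image array."""
    variants = []
    for _ in range(n_variants):
        v = image.astype(np.float32)
        if rng.random() < 0.5:
            v = v[:, ::-1]                  # horizontal flip
        v = v * rng.uniform(0.8, 1.2)       # brightness jitter
        v = v + rng.normal(0, 5.0, v.shape) # sensor-style noise
        variants.append(np.clip(v, 0, 255).astype(np.uint8))
    return variants

face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder image
dataset = augment(face)
print(len(dataset), dataset[0].shape)
```

Each source image yields many training variants, which is how synthetic datasets multiply the diversity available to a recognition algorithm.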

Crime analysis

  • Surveillance simulations. AI can simulate the movement and behavior of crowds in public places like sports arenas, transportation hubs and concert venues. These simulations help in training and contingency planning. AI anticipates traffic and pedestrian flow patterns and identifies proactive mitigations. Consider an incident like the shooting at Nationals Park in July 2021. AI can model movements in a stadium and surrounding community and secondary effects on modes of transportation exiting the affected area.
  • Forensic art. AI can assist forensic artists in reconstructing faces. It can aid in the initial stages of sketching, speeding up the process. It can also simulate how a person’s appearance may change over time and suggest variations to supplement gaps in witness recall.

The fundamentals of AI in law enforcement

Getting the full value from AI applications requires trusted data sources, human intelligence and a modern IT infrastructure. These elements are the fundamentals of AI that all law enforcement agencies should factor into their AI strategies.

Everything hinges on defensible data from trusted sources

The effective use of AI in law enforcement relies on access to trusted data sources that make it possible to confidently apply algorithms at the edge for real-time decisions—and an approach that balances utility and ethical use.

Data fuels AI, and operational AI models are only as good as the data on which they run. During training, supervised AI systems learn patterns and decision rules from the available data. Without sufficient, diverse, clean and trusted data, the systems cannot learn effectively.

Data is not only used for training but also for testing and validating AI models. These activities ensure that the models perform as expected on new, unseen data and continue to provide value as conditions change. Continuous monitoring and learning from feedback are necessary to stay relevant and useful: AI learns from its mistakes and continually adapts to changing conditions and new information. The value of an AI solution on Day 1,000 will far exceed its value on Day 1.
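The continuous monitoring described above often starts with a drift check: comparing live data against the data the model was trained on. The sketch below is a crude illustration with made-up numbers and a simple mean-shift test; production monitoring uses richer statistical comparisons across many features.

```python
import numpy as np

rng = np.random.default_rng(3)

def drift_detected(reference: np.ndarray, live: np.ndarray,
                   threshold: float = 0.5) -> bool:
    """Flag drift when the live feature mean moves more than `threshold`
    reference standard deviations away from the training-time mean."""
    shift = abs(live.mean() - reference.mean()) / reference.std()
    return bool(shift > threshold)

reference = rng.normal(0.0, 1.0, 1000)     # data the model was trained on
live_ok = rng.normal(0.05, 1.0, 1000)      # live data, similar conditions
live_shifted = rng.normal(2.0, 1.0, 1000)  # conditions have changed

print(drift_detected(reference, live_ok), drift_detected(reference, live_shifted))
```

When drift is flagged, the model is a candidate for retraining on current data, which is one concrete way an AI solution improves between Day 1 and Day 1,000.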

AI trained on complete and clean data is better equipped to aid human analysts. When investing in AI solutions, law enforcement agencies should never underestimate the importance of trusted data: analysts and investigators can be confident in the output of AI models only when the data inputs are defensible.

When agencies use their own intelligence to train AI models, they know that the sources are trusted. Training on trusted sources lets investigators create controlled variants of models, rather than risk a biased source magnifying errors in the model when it is applied to a real-world situation.

By training algorithms on more representative and current data, law enforcement can expand the use cases of AI. When trusted data and open-source data are combined to train models, each data stream should be assigned a reliability valuation, and that valuation should be reflected in the organization's data strategy and data management plans. It is also essential to ensure that key data elements are accessible to the AI, which can be a challenge for sensitive and proprietary datasets.

Most of the data that law enforcement uses is sensitive and needs to be guarded. Even data that is not sensitive needs to be guarded against poisoning by adversaries that may be trying to influence the algorithms. The advent of zero trust architectures and attribute-based access controls has created new opportunities to deliver value from protected data and harness the power of AI.
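The core of an attribute-based access decision can be sketched in a few lines. The attribute names here (`clearance`, `assigned_cases`, `sensitivity`) are hypothetical; real ABAC engines evaluate far richer policy languages against subject, resource and environment attributes.

```python
# Minimal sketch of an attribute-based access decision.
def can_access(subject: dict, resource: dict) -> bool:
    """Grant access only when the subject's clearance covers the
    resource's sensitivity and the subject is assigned to the case."""
    return (
        subject.get("clearance", 0) >= resource.get("sensitivity", 0)
        and resource.get("case_id") in subject.get("assigned_cases", ())
    )

analyst = {"clearance": 3, "assigned_cases": {"C-101"}}
record = {"sensitivity": 2, "case_id": "C-101"}
print(can_access(analyst, record))
```

Because the decision depends on attributes evaluated at request time rather than static role lists, protected data can be opened to AI pipelines on a need-to-know basis.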

The elimination of algorithmic bias5 is a non-negotiable priority

Law enforcement agencies can expect legislative, judicial and regulatory scrutiny around the use of AI. In this environment, addressing the potential for biases in AI systems must be a priority. Left unchecked, AI bias can lead to unfair, discriminatory and unexpected outcomes. If training datasets contain historical biases or do not reflect the diversity of the population, AI could perpetuate and even exacerbate these biases and lead to unintended outcomes6.

Protecting privacy and promoting transparency is highly nuanced

Privacy is another issue that must be addressed. Surveillance and data analysis involve the collection and processing of personal data—often without explicit consent. This raises questions of how to balance public safety with individual privacy related to AI models. Further, with agencies collecting and storing petabytes of data, any data breaches could reveal sensitive information of a large segment of the population.

Policies around data storage and retention will need to be formulated for the ever-increasing volumes and complexity of data. However, it may not always be possible to provide blanket policies around the proper use of data for AI models. For example, transparency may detract from security when it comes to financial data.

Implementing trustworthy AI is a nuanced initiative that must account for the particulars of a given use case. Ultimately, an organization’s data and AI strategy must be developed, implemented and iterated with forethought and a willingness to adapt.

Explainability is essential to make AI-based decisions defensible

Most AI algorithms today use deep learning techniques. While the results can be highly accurate, they are difficult, if not impossible, to explain. This lack of transparency is problematic because it is difficult for law enforcement agencies to defend AI-informed decisions or identify any decision-making errors. Government, academia and industry are working collaboratively to enable users to understand and trust these outputs by being transparent in how the models are trained, tested and monitored7. It is necessary to ensure the output from machine learning (ML) algorithms meets the Daubert standard8 for evidence.
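One widely used family of explainability techniques is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below applies it to a toy two-feature model; it is an illustration of the technique, not of any specific law enforcement system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: feature 0 drives the label, feature 1 is pure noise.
X = rng.normal(size=(400, 2))
y = (X[:, 0] > 0).astype(int)

# A crude linear "model" standing in for an opaque deep network.
w = X.T @ (2 * y - 1)

def predict(data):
    return (data @ w > 0).astype(int)

def permutation_importance(X, y, predict, n_repeats=10):
    """Drop in accuracy when each feature is shuffled: a simple,
    model-agnostic signal of which inputs the model relies on."""
    base_accuracy = (predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(base_accuracy - (predict(X_perm) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances

importances = permutation_importance(X, y, predict)
print("importance per feature:", importances)
```

Shuffling feature 0 destroys the model's accuracy while shuffling feature 1 barely matters, giving a defensible, model-agnostic account of what drove the prediction even when the model's internals are opaque.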

These concerns emphasize the need for ethical guidelines, transparent algorithms and a legal framework to ensure that AI and ML technologies are used responsibly. Many government organizations have internal guidelines to avoid these pitfalls, but the need for continual oversight cannot be overemphasized. Law enforcement agencies should be prepared to establish a trained board that independently evaluates data sources and assesses the impact of deployed AI solutions.

Humans always have a role in law enforcement in the era of AI

While AI is poised to transform the work of law enforcement in profound ways, it will do so in partnership with humans and depend on uniquely human intelligence. Human oversight is essential in a variety of ways. Humans must evaluate data sources. Is the data relevant? Is it trustworthy? Is it unbiased? Human analysts and investigators act on the insights that come from AI solutions. Humans make judgments in real time based on information from AI models. They must evaluate and safeguard AI outputs to protect the integrity of the public safety mission.

A modern IT infrastructure powers value from AI

Legacy technology systems severely limit the potential of AI in law enforcement, as in any sector. The processing power, scalability, data storage and management capacity, connectivity, security and collaborative tools of a modern IT infrastructure are all critical to implementing AI successfully. These modern capabilities are also essential to adapting as AI advances over time and to scaling use cases as they are developed.

Looking ahead, cost could be a barrier to adoption of AI in law enforcement, particularly for smaller state and local agencies. Creative purchasing models like grant alternatives or state-wide licensing agreements could help address this issue.

Moving toward an AI-powered future for law enforcement

The introduction of AI in law enforcement is transformative and necessary to stay ahead of criminals and adversaries. The deployment of AI in the field must, however, be navigated carefully, balancing the significant advantages against potential concerns.

AI’s role in predictive policing, real-time surveillance, identity protection, identity recognition and other applications shows its ability to improve and aid human work in law enforcement. And the use of AI in administrative tasks allows officers to focus on the most crucial aspects of law enforcement.

Despite the undeniable benefits of AI, there are valid concerns regarding privacy, potential biases and ethical use of this technology. Agencies should exercise care when developing and deploying safety- and rights-impacting AI. It is essential to address these concerns through a strict regulatory framework, continuous oversight and a focus on training personnel in the ethical use of AI. Doing so can mitigate these concerns and ensure safe and responsible use of this powerful technology.

 

Learn more

SAIC is committed to helping law enforcement agencies get maximum value from AI to deliver the public safety mission with integrity. We can help agencies navigate the rapidly changing AI frontier, developing AI solutions that are defensible, efficient and effective and modernizing the IT infrastructure to multiply the value of AI. To learn more, contact: 

Shweta Mulcare 
Managing Director, Data and AI, Mission Advisory Services
shweta.mulcare@saic.com

Daniel Board Jr. 
Director, Government Strategy and Operations
Former Chief of Staff at ATF
daniel.l.board@saic.com

 

References

1 How Assistive AI Can Be a Force Multiplier in Public Safety

2 Customs and Border Protection is Using AI to Crack Down on Fentanyl Trafficking

3 Artificial Intelligence and the Fight Against Human Trafficking

4 Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement

5 Algorithms in Policing: An Investigative Packet

6 Ethical Framework Aims to Reduce Bias in Data-Driven Policing

7 What is Explainable AI?

8 Machine Learning Evidence: Admissibility and Weight

 

 
