Report: Flawed AI system fast-tracked inexperienced ICE recruits into field work

The Trump administration reportedly used an artificial intelligence (AI) tool to speed up hiring as Immigration and Customs Enforcement (ICE) raced to bring on thousands of new officers last year. Instead of streamlining the process, the tool inadvertently routed new hires with no law enforcement experience into advanced programs intended for more seasoned recruits.

According to an NBC News report published Jan. 14, ICE used an AI system to scan résumés and flag applicants with prior law enforcement experience, routing them into the agency’s abbreviated law enforcement officer program. 

The problem, two anonymous law enforcement officials told NBC, was that the system relied on keyword matching. Applicants were flagged as experienced officers simply because their résumés included the word “officer” — a term used by people describing themselves as “compliance officers,” or even by applicants who simply wrote that they were interested in becoming ICE officers.
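NBC's reporting does not describe the tool itself beyond the keyword matching, so the snippet below is only a minimal illustrative sketch of the failure mode the officials described; the keyword list and the flag_as_experienced function are hypothetical, not ICE's actual system. It shows why a résumé mentioning "compliance officer," or an applicant's stated hope of becoming an ICE officer, would be flagged the same way as genuine law enforcement experience.

    # Illustrative sketch only: NBC's sources described a keyword-matching system,
    # but ICE's actual tool has not been published. All names here are hypothetical.

    LAW_ENFORCEMENT_KEYWORDS = {"officer", "police", "deputy", "agent"}

    def flag_as_experienced(resume_text: str) -> bool:
        """Naive keyword match: True if any keyword appears anywhere in the résumé."""
        words = resume_text.lower().split()
        return any(keyword in words for keyword in LAW_ENFORCEMENT_KEYWORDS)

    # The failure mode described in the report: résumés with no law enforcement
    # background are flagged simply because they contain the word "officer."
    print(flag_as_experienced("Five years as a compliance officer at a regional bank"))  # True
    print(flag_as_experienced("I hope to become an ICE officer"))                        # True
    print(flag_as_experienced("Warehouse supervisor, forklift certified"))               # False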

As a result, individuals with no prior law enforcement background were placed into a four-week online training track intended for experienced officers, rather than the eight-week in-person course at the Federal Law Enforcement Training Center in Georgia. That longer program includes physical fitness testing and instruction in immigration law and firearms handling. The AI error meant some recruits advanced more quickly to field offices without completing such training, according to NBC’s report. 

The misclassification was discovered in mid-fall 2025, more than a month into a hiring surge driven by congressional pressure to bring on 10,000 new ICE officers by the end of the year. The effort was backed by $50,000 signing bonuses funded through the “One Big Beautiful Bill.” While the hiring target was technically met, remedial steps meant that not all recruits were fully operational during 2025, NBC reported. 

ICE responded to the discovery of the error by manually reviewing résumés and requiring affected recruits to return to the training center for proper instruction. Anonymous officials emphasized to NBC that field offices provide additional on-site training and that those misclassified most likely received further instruction before working independently. Still, the episode delayed full operational readiness for some hires at a time when ICE was carrying out a major enforcement push, including the deployment of more than 2,000 officers to Minneapolis since late November 2025.

NBC’s report on the bureaucratic mishap within ICE comes amid a broader push to accelerate the use of AI across the federal government, including at the highest levels of military decision-making.

Two days before NBC published its report, U.S. War Secretary Pete Hegseth announced that Grok, the AI chatbot developed by Elon Musk's xAI and deployed on X, would be integrated into Pentagon networks, including both unclassified and classified systems. Speaking at SpaceX headquarters in South Texas on Jan. 12, Hegseth said the system would go live later this month and operate alongside Google's generative AI engine within the War Department's infrastructure.

“Very soon we will have the world’s leading AI models on every unclassified and classified network throughout our department,” Hegseth said. 

Emphasizing speed and “experimentation,” he described an “AI acceleration strategy” designed to reduce bureaucratic barriers and ensure military dominance. Military AI, he said, would operate “without ideological constraints that limit lawful military applications.” The Pentagon’s AI “will not be woke,” he added.

Hegseth also stressed that "AI is only as good as the data that it receives," framing the Pentagon's push as a way to make "all appropriate data" available for AI use across defense systems. What he did not specify were the guardrails: he offered no details about access levels, safeguards for classified material, or how ethical risks would be mitigated.

The rollout drew added attention because it coincided with international backlash over Grok's image-generation capabilities. As CatholicVote previously reported, findings from Copyleaks indicated the chatbot had been used to generate non-consensual sexualized deepfake images of real people, including women, girls, and children, with estimates as high as one such image generated per minute.

On the same day as Hegseth's remarks, the United Kingdom's online safety regulator, Ofcom, launched a formal investigation into X over potential violations of the Online Safety Act, as CatholicVote previously reported. Regulators in Europe, Asia, and the Indo-Pacific followed suit.

Changes to Grok’s image-generation feature were announced Jan. 14, but critics and regulators have argued that the measures remain insufficient to address the scale and nature of non-consensual deepfake abuses.

There is no evidence that the Pentagon’s use of Grok would involve image-generation features, and the focus of the War Department’s initiative is on data processing, analysis, and other military applications. Still, the announcement underscored how rapidly AI systems are being placed into high-stakes roles across the federal government.
