Greg Siskind • 2025-08-14
EOIR takes a cautious but open-minded approach in a first step before the Department of Justice issues a broader AI policy later this year. Here is what it means for lawyers.
On August 8, 2025, the Executive Office for Immigration Review (EOIR) issued Policy Memorandum 25-40—its first comprehensive guidance on the use of generative artificial intelligence (AI) in immigration court and Board of Immigration Appeals (BIA) proceedings.
EOIR takes a relatively open-minded view of AI, opening the memorandum with Chief Justice John Roberts' 2023 observation: “AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike…But any use of AI requires caution and humility.” EOIR says it is adopting the same posture.
EOIR warns lawyers about hallucinated legal citations and mischaracterizations of cited materials, reminding them that such errors harm the legal system and could subject attorneys to sanctions.
Prior to this memo, only EOIR’s Office of the Chief Administrative Hearing Officer (OCAHO) had established guidelines on the subject; the Office of the Chief Immigration Judge and the Board of Immigration Appeals had not. EOIR also noted that the Department of Justice is expected to issue department-wide guidance on the use of generative AI by the end of this year. Until then, this memorandum is intended to guide adjudicators.
No Blanket Ban—But Local Rules Possible
EOIR is taking a middle-ground approach in this memorandum. It is not taking a completely hands-off approach and letting lawyers act without guidelines, but it is also not issuing a nationwide ban or mandatory disclosure rule. Instead, it provides general cautions while allowing individual judges, courts, or the BIA to issue their own standing orders or local operating procedures.
For lawyers, EOIR recommends reviewing ABA Formal Opinion 512. Additionally, EOIR reminds lawyers that they are expected to conform to their state’s rules of professional conduct. Specifically, EOIR reminds lawyers:
Depending on the particular posture and facts of a case and the impact of the filing, the use of such content may also implicate disciplinary rules regarding “[e]ngag[ing] in conduct that constitutes ineffective assistance of counsel,” “[e]ngag[ing] in conduct that is prejudicial to the administration of justice or undermines the integrity of the adjudicative process,” “[f]ail[ing] to provide competent representation to a client,” failing to disclose adverse legal authority, and “[r]epeatedly draft[ing] notices, motions, briefs, or claims that are filed with. . .EOIR that reflect little or no attention to the specific factual or legal issues applicable to a client’s case, but rather rely on boilerplate language indicative of a substantial failure to competently and diligently represent the client.” 8 C.F.R. §§ 1003.102(k)-(o), (s), (u). Consequently, parties and attorneys before each of EOIR’s adjudicatory components should take care to confirm the accuracy of any citations or other research or drafting conducted using generative AI tools. Moreover, in certain circumstances, attorneys may need to consult with their clients regarding the use of generative AI.
Any such standing orders or local operating procedures will require management approval and must be posted publicly on EOIR’s website. That means attorneys could face different AI disclosure or usage requirements depending on the judge or jurisdiction.
Adjudicator Oversight and Reporting
The guidance is aimed not just at practitioners but at adjudicators as well. Immigration judges and BIA members are instructed to remain vigilant for filings containing nonexistent cases or inaccurate legal statements and to report suspected AI misuse to EOIR’s Attorney Discipline Program or Anti-Fraud Program.
This dual focus—on both the bar and the bench—signals that EOIR sees AI misuse as a systemic risk that could undermine the integrity of its proceedings.
Conclusion
EOIR’s first foray into regulating generative AI strikes a balance: no blanket prohibition, but a firm reminder that ethics and accuracy are non-negotiable. Notably, it offers no guidance to adjudicators on how they themselves may use AI in their work. How this memorandum will be affected by the broader Department of Justice policy expected in the next 180 days also remains an open question.
Copyright © 2025 Visalaw Ventures, LLC. All rights reserved.