Highlights from the Committee on Artificial Intelligence and the Courts’ Final Report
The Hawaii State Judiciary’s Committee on Artificial Intelligence and the Courts submitted its Final Report to the Hawaii Supreme Court on December 15, 2025. This post provides a high-level summary of the Committee’s findings and recommendations.
Status of AI Technology in the Judiciary
Legal Research Tools
The Judiciary has adopted multiple AI platforms for legal research and document analysis, granting personnel access to Paxton.ai (a generative AI tool used to summarize, analyze, and cite-check legal documents) and, less extensively, Bloomberg Law. The appellate courts have also incorporated Westlaw Precision’s AI search into their workflows, and the Judiciary is piloting Thomson Reuters’ CoCounsel to evaluate its potential for drafting and summarization.
Operational AI Uses
In May 2025, the Judiciary launched KolokoloChat, an AI chatbot trained on court rules and procedures. This resource assists users with inquiries regarding traffic fines, collections, and family law, while also helping self-represented litigants locate forms and procedural answers outside of business hours.
Operationally, the Judiciary views AI as a tool with the potential to automate tasks and mitigate staffing shortages, provided it does not encroach upon judicial independence or supplant decision-making authority. The Judiciary also recognizes the risk of a widening “knowledge and resource gap” between parties using sophisticated AI tools and those relying on free resources.
Findings in Various Areas of AI
Guidance and Policies Regarding AI Usage
Justices and Judges
The Chief Justice approved a memorandum mandating judicial competence in AI technology. Notably, Hawaii Revised Code of Judicial Conduct Rule 2.15(b) (duty to report lawyer misconduct) is implicated by improper AI use: submitting AI “hallucinations” to the court may raise questions about a lawyer’s honesty and trustworthiness, potentially triggering a judge’s duty to report the lawyer.
A judge’s own use of generative AI for research could violate Rule 2.9’s restrictions on independent investigations. “Extractive AI with retrieval-augmented generation (RAG)” should be prioritized over purely generative capabilities: RAG retrieves passages from vetted, external sources and grounds the model’s answer in them, rather than relying solely on the model to generate content from its training data. Using AI tools could also violate Rule 2.3(a) if the algorithms produce biased or unfair outputs. Finally, judges are warned against inputting confidential data into public AI platforms and are advised to educate themselves on “deepfakes” so they can make informed evidentiary rulings.
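To make the distinction concrete, below is a minimal sketch of the RAG pattern the report favors, written in Python. The `index` and `generate` parameters are hypothetical stand-ins for a search index over vetted legal sources and a language-model call; they do not correspond to any particular vendor’s API.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# `index` and `generate` are hypothetical stand-ins, not a real vendor API.

def answer_with_rag(question: str, index, generate) -> str:
    # 1. Retrieve passages from a vetted corpus (e.g., published court
    #    rules) instead of relying on the model's internal parameters.
    passages = index.search(question, top_k=3)
    context = "\n\n".join(p.text for p in passages)

    # 2. Ground the prompt in the retrieved text and instruct the model
    #    to answer only from those sources.
    prompt = (
        "Answer using ONLY the sources below. If they do not contain "
        f"the answer, say so.\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

# RAG narrows, but does not eliminate, hallucination risk; a human must
# still verify the answer against the retrieved sources.
```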
Judiciary Employees
The Judiciary’s internal Guardrails for the Acceptable Use of Artificial Intelligence strictly prohibit staff from entering confidential or personally identifiable information into AI prompts. Once entered, such information must be presumed to become public data used to train the model, and even “approved” tools do not eliminate these risks.
Human verification of AI outputs is also mandatory: staff are required to fact-check all content for hallucinations and to verify copyright attribution. Judiciary staff are further warned that outputs may reflect bias, and that their AI prompts and outputs are potentially discoverable under the Hawaii Uniform Information Practices Act.
Attorneys and Self-Represented Litigants
The Supreme Court is considering a mandatory Order Regarding Use of Artificial Intelligence Tools. If adopted, attorneys and self-represented parties would be required to declare whether AI was used to draft a filing and to affirmatively verify all citations. This framework mirrors orders from the U.S. District Court for the District of Hawaii.
With respect to ethics, the Committee found that the Hawaii Rules of Professional Conduct and the Hawaii Rules of Civil Procedure are sufficiently broad to govern AI use without amendment. The report also cites ABA Formal Opinion 512 for guidance on ethical obligations, including supervision, confidentiality, and fees.
Implementing AI Technology in Court Operations
General Implementation
The report identifies AI as a potential solution to staffing shortages, particularly among court clerks in Maui, Kauai, and Kona. The expectation is that automating repetitive tasks such as data entry, analysis, and summarization will address operational gaps. Consistent with National Center for State Courts recommendations, the report emphasizes a “human-in-the-loop” standard requiring human review of all AI outputs to ensure technology supplements rather than replaces judgment.
Intelligent Document Processing
Intelligent Document Processing (IDP) extracts information from filed documents by converting PDFs into structured data. However, current IDP tools are not sufficiently reliable to process pre-printed forms filled out by hand. The report suggests converting documents, such as petitions for Orders for Protection, into fillable PDFs requiring typed input, thus making them more amenable to automated processing.
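As a concrete illustration of why typed input helps, the short Python sketch below reads field values out of a fillable PDF using the open-source pypdf library. The file name and field names are hypothetical; a handwritten, scanned form exposes no such fields and would require OCR instead.

```python
# Sketch: pulling structured data from a typed, fillable PDF form.
# File and field names ("petitioner_name", "case_number") are hypothetical.
from pypdf import PdfReader

reader = PdfReader("petition_for_order_for_protection.pdf")

# AcroForm fields come back as a dict keyed by field name; scanned
# handwritten forms have no fields and would need OCR instead.
fields = reader.get_fields() or {}
record = {name: field.value for name, field in fields.items()}

print(record.get("petitioner_name"), record.get("case_number"))
```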
Transcripts
The Judiciary is exploring AI to automate transcripts and court minutes to address clerk shortages. A recent pilot of Fireflies.ai in the Fifth Circuit failed to meet accuracy standards, but the courts continue to seek out options to assist staff.
Using AI to Assist Self-Represented Litigants
Plain Language and Video
The report notes that tools like Claude.ai, ChatGPT, and Synthesia can translate complex legal instructions into plain language and create multilingual video guides for underserved populations. The Third Circuit has requested assistance in using this technology to explain family court procedures.
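As a rough illustration of the plain-language use case, the sketch below asks a general-purpose model to rewrite a procedural instruction at a lower reading level using the openai Python SDK. The model choice, prompt, and sample text are illustrative assumptions, not the Judiciary’s actual workflow.

```python
# Sketch: plain-language rewrite of a court instruction via an LLM.
# Model, prompt, and sample text are illustrative, not an official workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

legalese = (
    "The movant shall effectuate service of the petition upon the "
    "respondent no fewer than 48 hours prior to the hearing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": ("Rewrite court instructions in plain language at a "
                     "6th-grade reading level without changing the meaning.")},
        {"role": "user", "content": legalese},
    ],
)
print(response.choices[0].message.content)
# A human reviewer must still confirm the rewrite is legally accurate.
```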
Data Research
The report suggests AI could automate the identification of cases involving self-represented litigants. Identifying these cases would provide data on where unrepresented parties face disproportionate challenges, which may in turn lead to changes in court rules or form revisions to resolve procedural bottlenecks.
Increasing Language Access
AI machine translation is generally unreliable for courtroom use, particularly for oral testimony where accuracy is critical. A Fifth Circuit pilot confirmed that AI tools failed to produce accurate oral transcripts without human oversight.
However, the report outlines limited permissible workflows. Generative AI may be used for the preliminary translation of foreign-language documents to reduce the workload of certified translators. Additionally, sufficiently accurate AI tools may be used to assist help centers in interpreting “non-critical information.” The report further notes that exigent circumstances may warrant the use of AI translation where the absence of other resources would delay justice to the point of denial.
Legal and Ethical Issues Arising from AI Use
Hawaii Rules of Professional Conduct
Competence (Rule 1.1)
Attorneys must maintain technological competence. Because generative AI models are “black boxes” predicting statistical sequences rather than retrieving established truths, they are prone to hallucinations even in legal-specific systems. Competence requires assessing when AI use is appropriate and implementing workflows that prioritize human oversight while grounding answers in specific documents.
Confidentiality (Rule 1.6)
Inputting sensitive client data into unsecured AI tools (like standard ChatGPT accounts) places that information in third-party hands for training. The report warns that “what goes in can potentially come right back out,” as AI systems may reproduce confidential inputs for other users. Using AI tools without verifying their security protocols therefore risks breaching client confidentiality.
Communication (Rule 1.4)
Attorneys should discuss AI use with clients and obtain consent. If a client objects, the lawyer must determine the most effective way to proceed. Depending on the facts of a case, practitioners should consider disclosing AI use to all persons involved.
Independent Judgment (Rule 2.1)
Because AI tools lack transparency and can generate false citations, the “buck stops with attorneys.” Practitioners must supervise AI outputs with the same scrutiny they would apply to the work of a law clerk or new associate.
Meritorious Claims and Candor (Rules 3.1, 3.3)
The duty to verify AI content extends to scrutinizing opposing counsel’s submissions. The report cites Noland v. Land of the Free, where a party was denied attorney’s fees for failing to alert the court to fabricated authorities in an opponent’s brief. Citing Mata v. Avianca, the Judiciary also warns against viewing AI as a “super search engine.” Attorneys who fail to understand AI’s potential to invent facts risk sanctions.
Supervisory Responsibilities (Rules 5.1, 5.3)
Partners must ensure subordinate lawyers and non-lawyer assistants use AI compliantly. The report warns of “Shadow IT,” the use of unapproved AI tools by staff. The proliferation of these tools also makes the supervisory burden to “trust but verify” more onerous, as firms must implement internal policies governing external vendors and data usage.
Fees (Rule 1.5)
AI efficiency creates tension with the mandate to charge “reasonable fees.” If AI significantly reduces research time, raising hourly rates to maintain the total fee may be unreasonable. The report suggests AI usage may eventually become a standard for competence, similar to the shift from typewriters to word processors.
Hawaii Rules of Evidence
Relevance and Reliability (HRE 401, 403, 702)
While AI-generated evidence (e.g., predictive analytics or automated data classifications) may be admissible, courts should apply heightened scrutiny under Daubert. Because AI models are opaque, courts must assess factors beyond general acceptance (e.g., quality of training data, known biases, and error rates). Evidence lacking such disclosure may be excluded as substantively unhelpful or prejudicial.
Authentication (HRE 901)
The “black box” nature of AI complicates the authentication of AI-generated data under HRE 901, which may necessitate specialized expert testimony to explain the system’s inner workings. Additionally, the report warns that current standards may be insufficient to address “deepfakes” and synthetic media, which are sophisticated enough to deceive fact-finders. To prevent these fabrications from reaching the jury, the Committee notes scholarly proposals to heighten authentication criteria and potentially shift the fact-finding responsibility for authentication from the jury to the judge, thereby expanding the court’s gatekeeping role.
Practice Considerations
The report concludes that existing rules are sufficiently robust to address AI issues, and therefore adopts a “wait and see” approach rather than proposing new regulations. The Committee explicitly recommended against making AI-related Continuing Legal Education (CLE) mandatory to avoid burdening the profession, encouraging voluntary education instead.
The Committee’s Recommendations
The Judiciary should form a standing committee comprising judges, administrative personnel, and IT staff. This body would be responsible for vetting any new AI technologies to ensure they remain consistent with the court’s mission and values.
The Judiciary should collaborate with the Hawaii State Bar Association and the law school to develop resources for the legal community. This includes offering CLEs, ethics courses, and annual judicial conference presentations focused on the use of AI.
The Committee should remain active as a standing or ad-hoc body to keep abreast of rapid technological developments and further explore AI-assisted tools.