Model Protocol for Electronically Stored Information (ESI)

October 2, 2023

The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery. The potential that advanced review technologies hold for advancing a “just, speedy, and inexpensive determination of every action and proceeding” has been clear to litigants and the courts for over 15 years. The recent advent of legal applications based on AI systems, such as large language models (LLMs), brings additional hope for achieving this objective. However, attempts to realize that potential often encounter obstacles, such as costly and time-consuming meet-and-confer processes and (reasonable) uncertainty about the effectiveness of such tools.

This effort, led by Dr. Bruce Hedin, aims to provide a statistically robust method for evaluating the results of an AI-assisted review of ESI in legal discovery, thereby lowering the barriers to the effective adoption and use of such tools. The protocol also serves as a model for the rigorous assessment, in other domains, of advanced tools, including AI systems based on LLMs or foundation models.
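The Protocol's own validation procedures are specified in the documents described below. As a rough illustration of what a statistically grounded evaluation of review results can involve, the sketch that follows estimates the recall of a hypothetical AI-assisted review from a random validation sample and reports a Wilson score confidence interval. The sample counts and the wilson_interval helper are illustrative assumptions, not elements of the Protocol itself.

```python
# Minimal sketch (not the Protocol's actual procedure): estimating the
# recall of an AI-assisted review from a simple random sample of documents,
# with a Wilson score confidence interval. All figures are hypothetical.

import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% by default)."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

# Hypothetical validation sample: among the randomly sampled documents
# coded by human reviewers, 80 were found responsive, of which 72 had
# also been identified as responsive by the AI-assisted review.
responsive_in_sample = 80
found_by_review = 72

recall = found_by_review / responsive_in_sample
low, high = wilson_interval(found_by_review, responsive_in_sample)
print(f"Point estimate of recall: {recall:.1%}")
print(f"95% confidence interval: [{low:.1%}, {high:.1%}]")
```

The interval, rather than the point estimate alone, is what lets parties reason about how much confidence a given sample size actually supports.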

The Model Protocol for Electronically Stored Information publication comprises two documents: the Protocol and Commentary, and the Guidelines for Practitioners.

The Model ESI Protocol (“Protocol”) specifies procedures for validating the results of a review effort; the Commentary to that protocol (“Commentary”) provides guidance on implementing those procedures. The Protocol and Commentary will substantially meet the needs of most practitioners.

The Guidelines for Practitioners (“Guidelines”) provide an additional level of depth for circumstances that require a deeper understanding of the concepts and methods underlying the approach to validation, and of the steps to be followed in executing its procedures, such as answering a question raised by opposing counsel or adapting to a mid-review change in the data landscape.

This work was made possible with generous financial support from IEEE and gracious support from a number of distinguished legal experts. More information can be found in the “On Preparation of these Documents” and “Acknowledgements” sections of both documents.

Related resources

The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.

The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will convene in Washington, D.C. on November 30th and December 1st, 2023, to examine the risks associated with foundation models and generative AI, and explore governance mechanisms that could serve to reduce these risks.

A Blueprint for the European AI Office

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI: the technicalities involved, the requirements that should be imposed on each tier, and how those requirements should be enforced.

Giving Agency to the AI Act

Earlier this year, we conducted research comparing different institutional models for an EU-level body to oversee the implementation and enforcement of the AI Act. We're pleased to share our memo: Giving Agency to the AI Act.

Response to NIST Generative AI Public Working Group Request for Resources

TFS submitted to the U.S. NIST Generative AI Public Working Group (NIST GAI-PWG) a list of clauses to govern the development of general-purpose AI systems (GPAIS).

Response to U.S. OSTP Request for Information on National Priorities for AI

Our response put forward national priorities focused on security standards, measurement and evaluation frameworks, and an industry-wide code of conduct for GPAIS development.