Quick Note on Terminology

In this survey, we're defining "TAR" as "traditional TAR": categorization (TAR 1.0) or active learning (TAR 2.0). We're using "GenAI" to mean a Generative AI-based review process (e.g., aiR, eDiscovery AI). In many important respects, GenAI can itself be considered a form of TAR, but here we use "TAR" to mean traditional TAR only, to avoid confusion.

* 1. When reviewing for production, how many RFPs are you typically responding to, on average?

* 2. When a list of RFPs is converted into a review protocol (i.e., when writing instructions for the review team), the list of RFPs is generally summarized into a short list of responsive categories (or issues). In other words, 50 RFPs might be "collectivized" into, say, 10 "issues". How many issues are reviewers typically reviewing for?

* 3. Before running a TAR- or GenAI-based review, do you typically apply search terms to limit the population?

* 4. Which of the following have you validated using recall and precision?

* 5. What is your primary method of validating results from keyword searches?

* 6. What is your primary method of validating results from TAR 1.0?

* 7. What is your primary method of validating results from TAR 2.0?

* 8. What is your primary method of validating results from a GenAI review?

* 9. Have you ever used an elusion rate, as a single data point, to validate a project?

* 10. What is your primary review methodology?