The Problem with Human Review
We believe in human review. However, even well-trained, highly qualified subject-matter experts disagree, and their coding decisions can vary widely.
Human review is also time-consuming, costly, and hard to quantify. It is difficult to measure the quality of subjective decisions effectively, which limits how well the defensibility of a document review can be demonstrated.
Nonetheless, human review is an essential part of litigation. Even processes that augment and extend human review, such as technology-assisted review (TAR), rely on human judgment within the workflow.
iDS confronts these challenges with our Consensus CodingSM platform: a unique, innovative, web-based methodology built on Relativity®. It applies feedback mechanisms from information retrieval (like those used to train TAR engines) and pairs them with an evolving assessment of each reviewer's performance. The result is a measured reliability ranking for every document tag. The Consensus CodingSM platform assesses review quality at both the document level and the reviewer level.
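The feedback loop described above, a per-tag consensus paired with an evolving reviewer assessment, can be sketched as a reliability-weighted vote. This is a hypothetical illustration only: the function names, weights, and update rule are our assumptions, not the actual Consensus CodingSM algorithm.

```python
# Illustrative sketch, not the Consensus Coding implementation:
# reviewer names, starting weights, and the update rate are assumptions.

def consensus_tag(votes, reliability):
    """Combine reviewer tags into a reliability-weighted consensus.

    votes: dict mapping reviewer -> tag (e.g. "responsive")
    reliability: dict mapping reviewer -> weight in (0, 1]
    Returns the winning tag and its share of the total weight,
    which serves as a reliability ranking for that document tag.
    """
    totals = {}
    for reviewer, tag in votes.items():
        totals[tag] = totals.get(tag, 0.0) + reliability[reviewer]
    winner = max(totals, key=totals.get)
    return winner, totals[winner] / sum(totals.values())

def update_reliability(reliability, votes, consensus, rate=0.1):
    """Nudge each reviewer's weight toward agreement with the consensus,
    giving an evolving per-reviewer quality measure."""
    for reviewer, tag in votes.items():
        agreed = 1.0 if tag == consensus else 0.0
        reliability[reviewer] += rate * (agreed - reliability[reviewer])
    return reliability

# Hypothetical reviewers and one document's coding decisions.
rel = {"alice": 0.9, "bob": 0.6, "carol": 0.5}
votes = {"alice": "responsive", "bob": "not_responsive", "carol": "responsive"}
tag, confidence = consensus_tag(votes, rel)
rel = update_reliability(rel, votes, tag)
```

In this sketch the weighted vote yields a per-tag confidence score, and each reviewer's weight drifts toward or away from the consensus over time, mirroring the document-level and reviewer-level assessments described above.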
It’s not all about technology. The Consensus CodingSM platform is a tailored, comprehensive strategy that combines expert testimony, experienced consultants, and best practices to ensure successful management of the eDiscovery document review process for law firms, corporations, and document review companies.
QUANTIFY THE ACCURACY OF EVERY DECISION
MEASURE THE QUALITY OF EVERY REVIEWER