
Methods

NEI is built on four methodological commitments:

  1. Observable over self-reported — Indicators describe things an organization has or does, observable from outside, not things employees report experiencing.
  2. Versioned and immutable — Criteria are pinned to specific versions. Published versions are never silently modified. This is what makes evaluations reproducible.
  3. Evidence-tiered — The framework distinguishes between what can be inferred from public signals, what the organization has declared, and what has been independently verified.
  4. Separable taxonomy — Domain membership lives in tables, not in indicator definitions. Indicators can be reclassified without changing their criteria.

A good indicator:

  • Describes something the organization does or has, not something it values or intends
  • Is observable without relying on employee self-report
  • Is distinct from existing indicators
  • Can be assessed using at least one evidence category

The normalized name is used to generate the indicator ID:

normalize_name → sha256 → base32 → lowercase → first 6 chars → prefix NDI-

Normalization rules:

  • Lowercase
  • Remove characters that are not letters, digits, or spaces
  • Collapse multiple spaces to a single space
  • Trim leading/trailing whitespace

The same concept always produces the same ID. If a concept is substantially renamed, a new ID should be generated.
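The pipeline and normalization rules above can be sketched as follows. This is a minimal sketch, assuming ASCII input, UTF-8 encoding of the normalized name, and base32 over the full digest; the source specifies only the stage order (normalize → sha256 → base32 → lowercase → first 6 chars → prefix NDI-).

```python
import base64
import hashlib
import re


def normalize_name(name: str) -> str:
    # Lowercase, drop characters that are not letters, digits, or spaces,
    # collapse runs of spaces, and trim leading/trailing whitespace.
    name = name.lower()
    name = re.sub(r"[^a-z0-9 ]", "", name)
    name = re.sub(r" +", " ", name)
    return name.strip()


def indicator_id(name: str) -> str:
    # sha256 of the normalized name, base32-encoded, lowercased,
    # truncated to 6 characters, prefixed with NDI-.
    digest = hashlib.sha256(normalize_name(name).encode("utf-8")).digest()
    encoded = base64.b32encode(digest).decode("ascii").lower()
    return "NDI-" + encoded[:6]
```

Because the ID is derived from the normalized name, cosmetic variants of the same concept ("Parental  Leave!" vs "parental leave") collapse to one ID, while a substantive rename yields a new one.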

Criteria describe what a reviewer would look for. They must be:

  • Specific — exactly what is being assessed
  • Observable — determinable by a reviewer with the appropriate evidence access
  • Honest — reflect both supportive and complicating evidence

Criteria must not require access unavailable to the intended evaluator, rely on employee sentiment, or reference organizational intentions without observable evidence.

Each indicator version includes:

  • citations.supporting — research supporting the indicator’s relevance
  • citations.dissenting — research or arguments that complicate the indicator

Dissenting citations are not grounds for rejection — they reflect intellectual honesty about the evidence landscape.

Inferred evidence is collected from public sources without organizational cooperation.

Approved source types:

  • Employee review platforms (Glassdoor, Blind, Indeed)
  • Job descriptions and careers pages
  • Corporate sustainability and ESG reports
  • Annual reports and regulatory filings
  • News coverage and investigative journalism
  • Legal filings (EEOC charges, court records, NLRB filings)
  • NGO/association membership and sponsorships

Usage notes:

  • Single sources are insufficient. Look for convergent signals.
  • Employee review data is aggregate signal, not individual anecdote.
  • Document source type, URL, and date accessed for each piece of evidence.
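The documentation requirement above can be sketched as a minimal record type. The field names and the example values are illustrative assumptions; the framework only requires that source type, URL, and date accessed be recorded per piece of evidence.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class InferredEvidence:
    # Illustrative structure for one piece of inferred evidence.
    indicator_id: str   # e.g. an NDI- identifier
    source_type: str    # one of the approved source types
    url: str            # where the evidence was found
    date_accessed: date # when it was retrieved


evidence = InferredEvidence(
    indicator_id="NDI-abc123",          # hypothetical ID
    source_type="legal filing",
    url="https://example.org/case-record",  # placeholder URL
    date_accessed=date(2024, 5, 1),
)
```

Keeping records in a structure like this makes it straightforward to check later that multiple, convergent sources back a given indicator.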

Declared evidence consists of public statements in which the organization asserts that a practice exists.

Approved source types:

  • Published HR or employee-facing policies
  • Employee handbooks (publicly accessible portions)
  • Careers pages and benefits descriptions
  • Sustainability and diversity reports
  • Press releases and official announcements
  • Executive public statements

Usage notes:

  • Declared evidence establishes that the organization has stated a practice exists — not that it operates as described.
  • Record the date of declaration. Policies change.

Validated evidence is submitted to an accredited verifier who confirms the practice is in place.

Requirements for verifiers:

  • Is independent of the organization being evaluated
  • Has documented verification procedures for the specific indicator
  • Issues a verification statement referencing the indicator ID, version, evidence category, and date

Usage notes:

  • Validations expire. Assess currency of any validation statement.
  • A validation statement must reference the specific indicator ID and version.
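The required fields and the currency check above can be sketched as follows. The dictionary keys and the one-year expiry window are assumptions; the framework requires the four referenced fields but does not fix a validity period.

```python
from datetime import date

# Illustrative verification statement; key names are assumptions
# based on the required fields (indicator ID, version, evidence
# category, and date of issue).
statement = {
    "indicator_id": "NDI-abc123",   # hypothetical ID
    "indicator_version": "1.0.0",   # hypothetical version
    "evidence_category": "validated",
    "issued": date(2024, 5, 1),
}


def is_current(stmt: dict, today: date, max_age_days: int = 365) -> bool:
    # Validations expire; the default one-year window is an assumed
    # policy, not part of the framework.
    return (today - stmt["issued"]).days <= max_age_days
```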

Evaluations must specify the release version used (NDR-<version>). An evaluation conducted without a pinned release version is not reproducible.
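A reproducibility check for this rule can be sketched as below. The assumption that `<version>` is a three-part semantic version (e.g. NDR-1.4.0) is mine; the source only gives the NDR- prefix.

```python
import re

# Assumed format: NDR- followed by a semantic version, e.g. NDR-1.4.0.
NDR_PATTERN = re.compile(r"^NDR-\d+\.\d+\.\d+$")


def check_pinned_release(evaluation: dict) -> None:
    # Reject any evaluation that does not pin a specific release,
    # since unpinned evaluations are not reproducible.
    release = evaluation.get("release")
    if release is None or not NDR_PATTERN.match(release):
        raise ValueError("evaluation must pin a release version (NDR-<version>)")
```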

NEI does not define a scoring methodology. The framework defines what is observable; how observations aggregate to a score or rating is left to the evaluator or platform.

New domains are proposed with:

  • A domain name and description
  • A rationale for why existing domains are insufficient
  • An initial set of indicators to assign to the domain

Version bump by change type:

  • Minor reclassification, no structural change → Patch (NDT-x.x.1)
  • New domain added → Minor (NDT-x.1.0)
  • Structural reorganization → Major (NDT-2.0.0)

Each taxonomy version is a new pair of CSV files. Old versions are retained.
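One plausible shape for such a pair is sketched below. The file names, column names, and row contents are all assumptions; the framework specifies only that domain membership lives in tables separate from indicator definitions, so reclassifying an indicator means editing a row here rather than changing its criteria.

```
# domains.csv (assumed name and columns)
domain_id,name,description
D01,Compensation,Pay structures and transparency

# indicator_domains.csv (assumed name and columns)
indicator_id,domain_id
NDI-abc123,D01
```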