For Researchers

A structured framework for neurodivergent workplace research

NEI draws on occupational psychology, organisational behaviour, and neurodiversity research to build a framework that is rigorous, transparent, and designed to be tested and refined.

Research foundations

Every indicator in the NEI framework is grounded in peer-reviewed research, community-sourced evidence, or both. The indicator specification requires at least one supporting citation and one dissenting citation — not because we assume the evidence is evenly balanced, but because acknowledging dissenting evidence is a baseline requirement for intellectual honesty in a framework that will influence organisational practice.

The framework draws on research from occupational psychology, organisational behaviour, disability studies, and the broader neurodiversity literature. Where evidence is thin or contested, indicators are marked accordingly — the Candidate status of many current indicators reflects genuine epistemic humility about the state of the evidence.


Measurement logic

NEI indicators describe observable organisational signals, not individual experience. This is a deliberate methodological choice: the framework assesses organisational infrastructure, not individual neurodivergence, so indicators can be assessed without requiring anyone to disclose anything about themselves.

Three evidence layers reflect different degrees of observability and verifiability. Inferred evidence is observable from public sources — employee reviews, job descriptions, public reports. Declared evidence reflects organisational self-report. Validated evidence requires third-party verification. These layers are not a hierarchy of reliability in the simple sense: each captures something different, and all three can be informative.
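The three layers can be thought of as a small data model rather than a ranking. A minimal sketch in Python, assuming illustrative names (`EvidenceLayer`, `source`) that are not part of the NEI specification:

```python
from enum import Enum

class EvidenceLayer(Enum):
    """Hypothetical model of the three NEI evidence layers.

    Names and attributes here are illustrative, not drawn from
    the official specification.
    """
    INFERRED = "inferred"    # observable from public sources
    DECLARED = "declared"    # organisational self-report
    VALIDATED = "validated"  # third-party verification

    @property
    def source(self) -> str:
        # What each layer draws on, per the framework description.
        return {
            EvidenceLayer.INFERRED: "public sources (reviews, job descriptions, reports)",
            EvidenceLayer.DECLARED: "organisational self-report",
            EvidenceLayer.VALIDATED: "third-party verification",
        }[self]

# The layers are distinct, not ranked: each captures something different.
for layer in EvidenceLayer:
    print(f"{layer.value}: {layer.source}")
```

Modelling the layers as an enumeration rather than an ordered scale reflects the point above: none of the three is simply "more reliable" than the others.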

The taxonomy separates structural grouping from indicator definition. An indicator's domain assignment reflects its primary organisational context; it does not constrain how the indicator can be used in research design. Researchers can use the taxonomy to structure comparative analysis or ignore it when building custom frameworks.


Indicator design

Each indicator has a concept (permanent) and a versioned specification (evolvable). The concept defines what the indicator measures at an abstract level — this never changes once assigned. The specification defines the precise assessment criteria, evidence requirements, and domain assignment for a particular version. Specifications are immutable once released as Standard; new evidence generates a new version. This means research that cites a specific indicator version (e.g., NDI-hgbbzn-v1) is citing an immutable specification, not a moving target.
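The concept/specification split can be sketched as two types: one permanent identity and a sequence of immutable, versioned releases. A hedged Python sketch; all names, fields, and the example definition are illustrative assumptions, not the framework's actual data model:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen: a released specification never mutates
class IndicatorSpec:
    """One immutable, versioned specification (fields are illustrative)."""
    version: int
    criteria: str
    evidence_requirements: str
    domain: str
    status: str = "Standard"

@dataclass
class IndicatorConcept:
    """The permanent concept; its id never changes once assigned."""
    concept_id: str                      # e.g. "NDI-hgbbzn"
    definition: str                      # what the indicator measures, abstractly
    specs: list = field(default_factory=list)

    def release(self, spec: IndicatorSpec) -> str:
        # New evidence never edits a released spec; it adds a new version.
        assert spec.version == len(self.specs) + 1, "versions are sequential"
        self.specs.append(spec)
        return f"{self.concept_id}-v{spec.version}"  # citable, immutable reference

# Hypothetical example; the definition text is invented for illustration.
concept = IndicatorConcept("NDI-hgbbzn", "an example organisational signal")
ref = concept.release(IndicatorSpec(1, "criteria...", "inferred or declared", "example domain"))
print(ref)  # NDI-hgbbzn-v1
```

Under this sketch, a citation like NDI-hgbbzn-v1 always resolves to the same frozen specification, while the concept accumulates new versions as evidence develops.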


Open evidence development

Contributors can propose new indicators, add citations, or challenge existing criteria through the public proposal process. The framework is designed to be critiqued, tested, and refined. Dissenting citations are a first-class part of each indicator — if your research challenges an existing indicator's evidence base, that's a valuable contribution, not a problem to be managed.

The contribution process is documented and transparent. Every change to the framework is tracked as a Git commit. Proposals, reviews, and decisions are public. This creates a traceable history of how the framework has evolved and why — useful both for research that builds on the framework and for research that studies how community-governed standards develop.


Collaboration

The framework is built to be challenged. We do not have enough evidence to be certain about most of what we've defined. What we do have is a process for incorporating new evidence rigorously — and a community of people who care enough to keep improving it. If your research intersects with any indicator in the framework, we'd like to hear from you.