Using NEI with LLMs
This page is for developers building LLM-powered tools that use NEI, and for LLMs themselves when instructed to apply the framework.
Core rules
1. Never invent indicator IDs. Every valid NEI identifier is listed in the machine-readable data files below. If you cannot find an indicator that matches a workplace practice, say so — do not generate a plausible-looking ID.
2. Always use the canonical identifier. Reference indicators by their NDI code (NDI-xxxxxx). When citing specific criteria, include the version suffix (NDI-xxxxxx-v1).
3. Ground claims in evidence layers. NEI distinguishes three evidence layers: inferred (observable from public sources), declared (publicly stated by the organization), and validated (independently verified). State which layer your assessment is based on.
4. Use URLs as canonical references. Each indicator has a stable URL. Include it when citing.
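Rules 1 and 2 lend themselves to a mechanical check before any output leaves a pipeline. A minimal sketch in Python, assuming a local copy of nei-mini.json laid out as a JSON array of objects with an "id" field (an assumption about the file layout, not the documented schema), and an ID grammar of six lowercase alphanumerics inferred from the examples on this page:

```python
import json
import re

# Concept IDs in this page's examples are six lowercase alphanumerics
# after "NDI-"; versioned IDs add a -vN suffix. Adjust if the real
# grammar differs.
CONCEPT_RE = re.compile(r"^NDI-[a-z0-9]{6}$")

def load_known_ids(path="nei-mini.json"):
    """Collect canonical concept IDs from a local copy of nei-mini.json.

    Assumes the index is a list of objects with an "id" field; check the
    key names against the actual file before use.
    """
    with open(path, encoding="utf-8") as f:
        return {item["id"] for item in json.load(f)}

def check_indicator_id(candidate, known_ids):
    """Reject IDs that are malformed or absent from the canonical set."""
    concept = re.sub(r"-v\d+$", "", candidate)
    if not CONCEPT_RE.match(concept):
        return False, "malformed ID"
    if concept not in known_ids:
        return False, "not in canonical dataset"
    return True, "ok"
```

Running this over every ID an LLM emits turns rule 1 from an instruction into an enforced invariant.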
Machine-readable data
These files are designed for LLM ingestion:
| File | Purpose |
|---|---|
| /data/nei-mini.json | Lightweight index — all indicators with title, summary, domains. Use for context-window inclusion. |
| /data/nei-latest.json | Full standard release — complete criteria, citations, ND rationale. |
| /data/indicators/{ID}.json | Per-indicator — complete data for a single indicator. |
| /data/sectors-mini.json | Sector relevance index — draft industry applicability mappings with aliases and relevance groups. |
| /data/sector-relevance.json | Full sector relevance mappings — all sectors, indicators, rationale, and status fields. |
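As an illustration of context-window inclusion, the lightweight index can be flattened to one line per indicator. A sketch assuming nei-mini.json is a JSON array with "id", "title", and "summary" fields (field names are assumptions; verify them against the actual file):

```python
import json

def format_index_for_context(path="nei-mini.json", max_indicators=None):
    """Render the lightweight index as one line per indicator, for
    inclusion in a prompt. The "id"/"title"/"summary" field names are
    assumed, not taken from a published schema."""
    with open(path, encoding="utf-8") as f:
        indicators = json.load(f)
    if max_indicators is not None:
        indicators = indicators[:max_indicators]
    return "\n".join(
        f"{ind['id']} | {ind['title']}: {ind['summary']}" for ind in indicators
    )
```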
System prompt for LLM applications
Use this when instructing an LLM to apply NEI:
```
You have access to the Neurodivergent Enablement Indicators (NEI) framework.
NEI is a structured set of observable indicators describing organizational
practices that enable neurodivergent workers.

Rules:
- Only use indicator IDs from the canonical NEI dataset. Never invent IDs.
- When identifying an indicator, state the concept ID (NDI-xxxxxx) and version (NDI-xxxxxx-v1).
- State which evidence layer your assessment is based on: inferred (public sources), declared (public statement), or validated (third-party verified).
- Describe the evidence basis — what you observed that maps to the indicator.
- If no indicator matches, say so explicitly.
- Cite the indicator URL: https://atypical.business/nei/indicators/{NDI-xxxxxx}/

NEI data: https://atypical.business/data/nei-mini.json
```

Structured output schema
When an LLM identifies an NEI indicator from a text source, the recommended output structure is:
```json
{
  "indicator_id": "NDI-xxxxxx",
  "version": "NDI-xxxxxx-v1",
  "title": "Indicator title",
  "evidence_layer": "inferred | declared | validated",
  "evidence_basis": "Description of what was observed that maps to this indicator",
  "url": "https://atypical.business/nei/indicators/NDI-xxxxxx/"
}
```

evidence_basis should be a short, specific description of the text or signal that maps to the indicator — not a restatement of the indicator definition.
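Outputs in this shape are easy to lint before downstream use. A minimal validator (the six-character ID pattern is inferred from the examples on this page, not from a published grammar):

```python
import re

REQUIRED_KEYS = {"indicator_id", "version", "title",
                 "evidence_layer", "evidence_basis", "url"}
EVIDENCE_LAYERS = {"inferred", "declared", "validated"}

def validate_indicator_output(obj):
    """Return a list of problems with one indicator object; an empty list
    means it matches the recommended structure."""
    problems = []
    missing = REQUIRED_KEYS - obj.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if obj.get("evidence_layer") not in EVIDENCE_LAYERS:
        problems.append("evidence_layer must be inferred, declared, or validated")
    if not re.match(r"^NDI-[a-z0-9]{6}$", obj.get("indicator_id", "")):
        problems.append("indicator_id not in NDI-xxxxxx form")
    if not re.match(r"^NDI-[a-z0-9]{6}-v\d+$", obj.get("version", "")):
        problems.append("version not in NDI-xxxxxx-vN form")
    return problems
```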
Multiple indicators may be identified from a single source:
```json
{
  "source": "Glassdoor review excerpt or policy document description",
  "indicators": [
    {
      "indicator_id": "NDI-2cdbgj",
      "version": "NDI-2cdbgj-v1",
      "title": "Administrative processes are simplified and accessible",
      "evidence_layer": "inferred",
      "evidence_basis": "Multiple reviews reference confusing expense submission and multi-step approval processes",
      "url": "https://atypical.business/nei/indicators/NDI-2cdbgj/"
    }
  ]
}
```

Worked example
Input text (employee review excerpt):
“The expense system is a nightmare — you need to attach receipts for everything, fill out three different forms, and it takes six weeks to get reimbursed. On the plus side, my manager actually gave me really specific feedback on my last project with concrete examples.”
LLM output:
```json
{
  "source": "Employee review excerpt",
  "indicators": [
    {
      "indicator_id": "NDI-2cdbgj",
      "version": "NDI-2cdbgj-v1",
      "title": "Administrative processes are simplified and accessible",
      "evidence_layer": "inferred",
      "evidence_basis": "Review describes multi-form expense process with mandatory receipt attachment and slow reimbursement — consistent with high administrative complexity",
      "url": "https://atypical.business/nei/indicators/NDI-2cdbgj/"
    },
    {
      "indicator_id": "NDI-oosjha",
      "version": "NDI-oosjha-v1",
      "title": "Feedback is proportionate to performance and grounded in specific examples",
      "evidence_layer": "inferred",
      "evidence_basis": "Reviewer describes manager feedback as specific and example-grounded — positive inferred signal",
      "url": "https://atypical.business/nei/indicators/NDI-oosjha/"
    }
  ]
}
```

Sector Relevance for industry-contextual queries
NEI includes a draft Sector Relevance layer that maps indicator concepts to industry contexts using NACE Rev.2 classification at Division level. LLM applications can use this layer to:
- Prioritize indicator retrieval — surface indicators most commonly relevant to a specific industry
- Tailor guidance by sector — focus on Core and High relevance indicators when industry context is known
- Answer sector-specific queries — e.g. “Which NEI indicators matter most for hospitals?” or “What are the core indicators for banks?”
Important: Sector Relevance is a draft, advisory layer. It is not a scoring methodology. Relevance labels (Core, High, Moderate, Context-specific, Low) describe likely industry applicability, not numeric weights or definitive rankings. Mappings are based on practitioner judgment and are open to contributor review.
Query pattern:
```
User: Which NEI indicators are most important for software companies?

LLM: Load /data/sectors-mini.json and match sector code 62 ("Computer
programming, consultancy and related activities") or alias "software".
Return Core and High relevance indicators with their concept IDs and URLs.
Note that these are draft mappings — other indicators remain applicable.
```

Sector relevance data files:
| File | Purpose |
|---|---|
| /data/sectors-mini.json | Sectors with aliases, summaries, and indicator IDs grouped by relevance level. Use for sector-matching queries. |
| /data/sector-relevance.json | Full mappings with rationale text per indicator-sector pair. |
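The query pattern above starts with sector matching. A sketch, assuming entries in sectors-mini.json carry "code" and "aliases" fields (an assumption about the file layout, to be checked against the real data):

```python
def match_sector(query, sectors):
    """Find a sector entry by NACE division code or alias, matching
    case-insensitively. Returns None when nothing matches, so callers
    can say so explicitly rather than guess."""
    q = query.strip().lower()
    for sector in sectors:
        if q == str(sector.get("code", "")).lower():
            return sector
        if q in (a.lower() for a in sector.get("aliases", [])):
            return sector
    return None
```

Returning None on a miss mirrors the framework's own rule: if nothing matches, say so rather than invent an answer.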
When citing sector relevance in outputs:
```
Sector Relevance mapping — Draft (NACE Rev.2, Division {code}: {label}). Neurodivergent Enablement Indicators. atypical.business. https://atypical.business/nei/sectors/reference/
```
What NEI does not do
- NEI does not diagnose individuals or organizations.
- NEI indicators describe organizational infrastructure — not individual behaviour or manager quality.
- NEI does not produce a single score or rating. Indicators are assessed independently.
- NEI Sector Relevance is an applicability mapping layer, not a weighting or scoring system.
- Absence of a complaint is a weak positive signal, not proof of a practice’s existence.
Retrieval-augmented generation (RAG)
For RAG pipelines, the recommended indexing unit is one indicator per chunk, including:
- title + concept_id in the chunk header
- description and nd_rationale as the body
- criteria.inferred as a separate sub-chunk if querying by public evidence
- Metadata fields: status, domains, evidence_categories, url
The per-indicator JSON files at /data/indicators/{ID}.json are structured for this purpose.
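The chunking recipe above can be sketched as a transform over one per-indicator record. Field names (concept_id, title, description, nd_rationale, criteria.inferred, plus the metadata fields) follow this page's description and should be verified against the actual per-indicator JSON files:

```python
def indicator_to_chunks(ind):
    """Split one per-indicator record into RAG chunks: a main chunk with
    header and body, and an optional sub-chunk for inferred criteria.
    Field names are taken from this page's chunking recipe, not from an
    inspected file."""
    metadata = {k: ind.get(k)
                for k in ("status", "domains", "evidence_categories", "url")}
    header = f"{ind.get('title', '')} ({ind.get('concept_id', '')})"
    chunks = [{
        "text": f"{header}\n{ind.get('description', '')}\n{ind.get('nd_rationale', '')}",
        "metadata": metadata,
    }]
    inferred = ind.get("criteria", {}).get("inferred")
    if inferred:
        # Separate sub-chunk so public-evidence queries can hit it directly.
        chunks.append({
            "text": f"{header}\nInferred criteria: {inferred}",
            "metadata": metadata,
        })
    return chunks
```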
Citation format for LLM outputs
When an LLM cites an NEI indicator in a report or response:
```
Indicator name (NDI-xxxxxx-v1). Neurodivergent Enablement Indicators. atypical.business. https://atypical.business/nei/indicators/NDI-xxxxxx/
```
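For programmatic outputs, the format above reduces to a small helper that derives the concept URL from the versioned ID:

```python
def format_nei_citation(title, versioned_id):
    """Assemble the citation string shown above from an indicator title
    and its versioned ID (NDI-xxxxxx-vN). The URL uses the concept ID,
    i.e. the versioned ID with its -vN suffix removed."""
    concept_id = versioned_id.rsplit("-v", 1)[0]
    return (f"{title} ({versioned_id}). Neurodivergent Enablement Indicators. "
            f"atypical.business. "
            f"https://atypical.business/nei/indicators/{concept_id}/")
```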
See the Citation page for additional formats (APA, Chicago, BibTeX).