AI Bias & Fairness Statement

Last updated: March 10, 2026

1. Our Position on Fairness

AI-powered hiring tools carry a responsibility that goes beyond technical performance. ZeroPitch is built with the conviction that assessment technology should reduce bias in hiring — not amplify it.

We recognize that no system is bias-free by default. Fairness requires deliberate design, ongoing measurement, and transparency about limitations. This document describes the specific steps we take and the commitments we hold ourselves to.

2. How Structured Interviews Reduce Bias

Decades of research in industrial-organizational psychology demonstrate that structured interviews — where every candidate receives the same questions and is scored against the same rubric — produce significantly more equitable outcomes than unstructured conversations.

Unstructured interviews are susceptible to well-documented biases:

  • Similarity bias — Interviewers favor candidates who resemble them in background, communication style, or demographics.
  • Halo/horn effect — A single strong or weak impression colors the entire evaluation.
  • Order effects — Candidates interviewed earlier or later in the day receive systematically different scores.
  • Confirmation bias — Interviewers form early impressions and selectively attend to evidence that confirms them.

ZeroPitch is designed to eliminate these factors. The AI interviewer asks every candidate the same role-specific questions, in the same order, with the same neutral tone. Scoring is performed after the session using the full transcript — not during the conversation, where real-time impressions can distort judgment.

3. Bias Mitigation in Our Pipeline

3.1 Input Controls

  • No demographic data in scoring — The evaluation model receives only the session transcript and role configuration. It does not have access to candidate name, photo, age, gender, ethnicity, location, or any other protected characteristic.
  • Text-only evaluation — Scoring is performed on the transcript text, not on audio. This removes vocal characteristics such as accent, pitch, and speech cadence from the evaluation signal.
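The input controls above amount to an allowlist: only the transcript and role configuration ever reach the scoring model. A minimal sketch of the idea, assuming a flat session record — the field names here are hypothetical, not ZeroPitch's actual schema:

```python
# Hypothetical input allowlist: only the fields named in this statement
# (transcript and role configuration) are passed to scoring; candidate
# identity fields are dropped before the model ever sees the input.

SCORING_ALLOWLIST = {"transcript", "role_config"}

def build_scoring_input(session: dict) -> dict:
    """Return only allowlisted fields; protected characteristics never pass through."""
    return {k: v for k, v in session.items() if k in SCORING_ALLOWLIST}

session = {
    "transcript": "Q: Describe a recent project. A: ...",
    "role_config": {"role": "backend-engineer", "rubric_version": "v3"},
    "candidate_name": "Jane Doe",  # excluded from scoring input
    "location": "Lisbon",          # excluded from scoring input
}
scoring_input = build_scoring_input(session)
```

An allowlist is deliberately stricter than a blocklist: a new field added to the session record is excluded by default until someone explicitly decides it belongs in the scoring input.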

3.2 Scoring Controls

  • Fixed rubrics — Every dimension is scored against a pre-defined behavioral rubric, not a comparative ranking. Candidates are measured against the role requirements, not against each other.
  • Evidence anchoring — Each score must be supported by specific transcript excerpts. Scores without evidence are rejected by validation checks before reaching the report.
  • Deterministic pipeline — We use temperature-zero inference and structured output validation to ensure the same transcript produces the same evaluation, eliminating score variance from model stochasticity.
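The evidence-anchoring check described above can be sketched as a validation step that runs before any score reaches a report. This is an illustrative sketch, not ZeroPitch's actual pipeline; the score fields (`dimension`, `evidence`) are hypothetical names:

```python
# Illustrative evidence-anchoring validation: every score must carry at
# least one excerpt, and each excerpt must appear verbatim in the session
# transcript. Field names are hypothetical, not ZeroPitch's real schema.

def validate_scores(scores: list[dict], transcript: str) -> list[dict]:
    """Reject any score that is not grounded in the transcript."""
    for s in scores:
        evidence = s.get("evidence", [])
        if not evidence:
            raise ValueError(f"score for {s['dimension']!r} has no supporting evidence")
        for excerpt in evidence:
            if excerpt not in transcript:
                raise ValueError(f"excerpt {excerpt!r} not found verbatim in transcript")
    return scores
```

Combined with temperature-zero inference and structured output validation, a check like this means an ungrounded score fails loudly instead of silently reaching a report.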

3.3 Output Controls

  • Transparent reports — Every report shows the evidence behind every score. Organizations can verify that evaluations are grounded in what the candidate actually said, not in proxy signals.
  • Human-in-the-loop — ZeroPitch provides assessment data, not autonomous hiring decisions. A human reviewer always makes the final call. Our reports are designed to inform, not replace, human judgment.

4. Language and Accessibility

We acknowledge that conversational AI assessments may disadvantage candidates who are non-native speakers of the assessment language, have speech impediments, or have other communication differences. We are actively working to address this:

  • Our speech-to-text pipeline is selected for accuracy across diverse accents and speaking styles.
  • Scoring rubrics evaluate the substance and structure of responses, not fluency or pronunciation.
  • Organizations can configure accommodations such as extended session time.
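An accommodation such as extended session time could be modeled as a simple per-session setting. A hypothetical sketch — the multiplier and field names are illustrative, not documented ZeroPitch configuration:

```python
# Hypothetical accommodation handling: an organization opts a candidate
# into extended time, and the session budget scales accordingly. The 1.5x
# multiplier is illustrative, not an actual ZeroPitch setting.

def session_time_limit(base_minutes: int, accommodations: set[str]) -> float:
    """Return the session time budget, applying any extended-time accommodation."""
    multiplier = 1.5 if "extended_time" in accommodations else 1.0
    return base_minutes * multiplier

standard = session_time_limit(30, set())               # 30.0
extended = session_time_limit(30, {"extended_time"})   # 45.0
```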

We do not claim to have solved every accessibility challenge. We are transparent about current limitations and committed to improving coverage as the underlying technology advances.

5. What We Do Not Do

What we deliberately exclude is as important as what we build:

  • We do not use facial analysis, emotion detection, or sentiment analysis on candidate video or audio.
  • We do not score candidates on personality traits inferred from language patterns.
  • We do not train models on historical hiring outcomes, which can encode and amplify past discrimination.
  • We do not make autonomous hire/reject decisions. All recommendations are advisory.

6. Regulatory Alignment

We design our assessments with awareness of emerging AI hiring regulations, including:

  • NYC Local Law 144 — Requires bias audits for automated employment decision tools.
  • EU AI Act — Classifies AI systems used in employment as high-risk, requiring transparency, human oversight, and bias testing.
  • EEOC Guidance — The U.S. Equal Employment Opportunity Commission's guidance on AI and algorithmic fairness in hiring.

Our commitment to structured evaluation, evidence-based scoring, and human-in-the-loop decision-making aligns with the core requirements of these frameworks.

7. Ongoing Commitments

We commit to the following:

  • Regularly review scoring distributions across candidate demographics when sufficient anonymized data is available.
  • Engage third-party auditors for independent bias assessments as our candidate volume grows.
  • Update this statement when material changes are made to our evaluation pipeline.
  • Maintain open communication with customers about our fairness practices and limitations.

8. Contact

If you have questions or concerns about fairness in our assessments, we want to hear from you. Reach us at hello@buildzeroist.com.

For details on how our assessments are structured and scored, see our Assessment Methodology. For data handling practices, see our Privacy Policy.