Assessment Methodology
Last updated: March 10, 2026
1. Design Principles
ZeroPitch assessments are built on three core principles drawn from industrial-organizational psychology and structured interview research:
- Structured consistency — Every candidate receives the same role-specific questions in the same order, eliminating interviewer variability and ensuring comparable evaluations across applicants.
- Behavioral evidence — Scoring is anchored to observable responses, not gut feelings. Each evaluation dimension maps to concrete behavioral indicators that the AI agent tracks during the conversation.
- Multi-dimensional measurement — Candidates are evaluated across 30+ dimensions spanning technical competence, communication skills, problem-solving approach, and role-specific domain knowledge.
2. How an Assessment Works
2.1 Experience Configuration
Organizations define the assessment experience: the job role, seniority level, required competencies, and custom evaluation criteria. ZeroPitch uses this configuration to generate a role-specific question set and scoring rubric before any candidate enters the session.
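As an illustrative sketch only (names such as `RoleConfig` and `build_question_set` are hypothetical, not ZeroPitch's actual API), the configuration step can be pictured as mapping role inputs to a fixed, ordered question set before any candidate joins:

```python
from dataclasses import dataclass, field

@dataclass
class RoleConfig:
    # Hypothetical shape of the inputs an organization provides.
    role: str
    seniority: str
    competencies: list[str]
    custom_criteria: list[str] = field(default_factory=list)

def build_question_set(cfg: RoleConfig) -> list[str]:
    # Sketch: one structured question per required competency, so every
    # candidate for this role sees the same questions in the same order.
    return [
        f"Describe a time you demonstrated {c} as a {cfg.seniority} {cfg.role}."
        for c in cfg.competencies
    ]

cfg = RoleConfig(role="backend engineer", seniority="senior",
                 competencies=["API design", "incident response"])
questions = build_question_set(cfg)
```

Because the question set is generated once from the configuration, structured consistency across applicants follows by construction.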
2.2 Live Conversational Interview
Candidates join a real-time voice session with an AI interviewer. The agent asks structured questions, listens to responses via speech-to-text transcription, and adapts follow-up probes based on the candidate's answers — mirroring best-practice behavioral interviewing technique.
Sessions typically run 8–12 minutes. The AI agent maintains a professional, neutral tone throughout and does not reveal evaluations during the conversation.
2.3 Transcript and Evidence Collection
Every session produces a time-stamped transcript with speaker-labeled turns. This ground-truth record forms the basis of all downstream scoring. Audio recordings are retained per the organization's data retention settings and our Privacy Policy.
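A minimal sketch of what a speaker-labeled, time-stamped transcript record might look like (the `Turn` type and field names are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Turn:
    # One speaker-labeled, time-stamped entry in the session transcript.
    speaker: str   # "agent" or "candidate"
    start_ms: int  # offset from session start, in milliseconds
    text: str

transcript = [
    Turn("agent", 0, "Tell me about a recent project you led."),
    Turn("candidate", 4200, "Last quarter I led the migration of..."),
]

def candidate_words(turns: list[Turn]) -> str:
    # Downstream scoring reads only from this ground-truth record.
    return " ".join(t.text for t in turns if t.speaker == "candidate")
```

Keeping the transcript immutable and speaker-attributed is what lets every downstream score point back to a specific moment in the conversation.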
3. Scoring Framework
3.1 Evaluation Dimensions
Each assessment scores the candidate on a configurable set of dimensions. Default dimensions include:
- Communication intelligence — Clarity, conciseness, structured reasoning, and active listening.
- Technical depth — Domain-specific knowledge, problem decomposition, and accuracy of technical statements.
- Problem-solving approach — How candidates frame problems, evaluate trade-offs, and arrive at solutions.
- Role fit signals — Alignment with the specific responsibilities, challenges, and culture described in the job configuration.
Organizations can add custom domain metrics tailored to their hiring criteria. Each metric includes a natural-language instruction that defines what the evaluator should look for.
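The metric structure described above can be sketched as a name paired with a natural-language instruction (a simplified illustration; the `Metric` type and example instructions are assumptions, not ZeroPitch's schema):

```python
from dataclasses import dataclass

@dataclass
class Metric:
    # An evaluation dimension: a name plus a natural-language
    # instruction telling the evaluator what to look for.
    name: str
    instruction: str

default_metrics = [
    Metric("communication_intelligence",
           "Assess clarity, conciseness, structured reasoning, and active listening."),
    Metric("technical_depth",
           "Assess domain knowledge, problem decomposition, and technical accuracy."),
]

# Organizations append their own domain metrics to the default set.
custom = Metric("sql_fluency",
                "Look for correct, specific descriptions of query optimization.")
metrics = default_metrics + [custom]
```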
3.2 Evidence-Based Scoring
Scores are not generated from summary impressions. The evaluation model processes the full transcript and produces scores tied to specific quotes and behavioral evidence. Every score in a ZeroPitch report links back to the candidate's own words.
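One way to picture an evidence-linked score record (a hypothetical sketch, not the actual report format): the evidence travels with the score, so a score with no supporting quote is invalid by construction.

```python
from dataclasses import dataclass

@dataclass
class DimensionScore:
    # Hypothetical report record: every score carries the transcript
    # evidence it was derived from, never a summary impression alone.
    dimension: str
    score: float                # e.g. on a 0-5 rubric scale
    evidence_quotes: list[str]  # the candidate's own words

s = DimensionScore(
    dimension="problem_solving",
    score=4.0,
    evidence_quotes=["First I profiled the slow endpoint, then compared two caching options."],
)
assert s.evidence_quotes, "a score without linked evidence is rejected"
```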
3.3 Deterministic Pipeline
Our scoring pipeline is designed for reproducibility. Given the same transcript, the pipeline produces the same evaluation. We achieve this through structured prompts with fixed rubrics, temperature-zero inference, and validation checks that reject malformed outputs before they reach the report.
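The shape of such a pipeline can be sketched as follows (a simplified illustration under stated assumptions: `infer` stands in for a temperature-zero model call, and the rubric and validation rules are examples, not the production ones):

```python
import json

RUBRIC = 'Score 0-5. Respond as JSON: {"score": <int>, "quote": <str>}'

def validate(raw: str) -> dict:
    # Validation gate: malformed model output is rejected before it
    # can reach a report, rather than silently coerced.
    data = json.loads(raw)
    if not isinstance(data.get("score"), int) or not 0 <= data["score"] <= 5:
        raise ValueError("score out of rubric range")
    if not data.get("quote"):
        raise ValueError("missing supporting quote")
    return data

def score_transcript(transcript: str, infer) -> dict:
    # With a fixed rubric prompt and deterministic (temperature-zero)
    # inference, the same transcript yields the same evaluation on rerun.
    return validate(infer(RUBRIC + "\n" + transcript))

# A stand-in for the deterministic model call, for demonstration:
fake_infer = lambda prompt: '{"score": 4, "quote": "I profiled the endpoint first."}'
result = score_transcript("...", fake_infer)
```

The key design choice is that validation happens between inference and reporting: a malformed completion raises an error instead of producing a partially filled report.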
4. Report Generation
After scoring completes, ZeroPitch generates a structured report with two views:
- Scorecard — A 30-second executive summary with overall score, top strengths, risk areas, and a hire/pass recommendation.
- Show the Work — Full evidence backing every score: transcript excerpts, behavioral indicators observed, and dimension-by-dimension breakdowns.
Reports are available to the organization immediately after processing. Candidates do not see their reports unless the organization explicitly shares them via a time-limited link.
5. Fraud and Integrity Checks
ZeroPitch applies automated integrity checks to every session:
- Detection of text-to-speech playback or pre-recorded audio
- Tab-switching and window-focus monitoring during sessions
- Response-latency analysis to flag potential script reading
- Cross-session similarity detection for duplicate or templated answers
Integrity flags are surfaced in the report alongside scoring data so organizations can make informed decisions.
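As one concrete example of how a signal like response-latency analysis might work (the threshold and function are illustrative assumptions, not ZeroPitch's actual detection logic), uniformly instant answers to open-ended questions can indicate reading from a prepared script:

```python
from statistics import mean

def latency_flag(latencies_ms: list[int], floor_ms: int = 800) -> bool:
    # Sketch of one integrity signal: flag sessions whose average
    # response latency is implausibly low for open-ended questions.
    return bool(latencies_ms) and mean(latencies_ms) < floor_ms

assert latency_flag([300, 250, 400]) is True      # suspiciously fast
assert latency_flag([1500, 2200, 900]) is False   # normal thinking pauses
```

In practice a single signal like this would be combined with the other checks above rather than used as a verdict on its own, which is why flags are surfaced alongside scores instead of auto-rejecting candidates.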
6. Continuous Improvement
We regularly review scoring calibration, question effectiveness, and candidate and customer feedback to improve assessment quality. Our methodology also evolves with advances in natural language understanding and psychometric research. Material changes to the methodology are documented on this page.
7. Scientific Foundation
ZeroPitch's assessment design draws from established research in structured interviewing, including:
- Meta-analyses demonstrating that structured interviews predict job performance significantly better than unstructured conversations (Schmidt & Hunter, 1998; Huffcutt & Arthur, 1994).
- Behavioral interviewing frameworks that anchor evaluation to past behavior as the strongest predictor of future performance (Janz, 1982).
- Research on interviewer bias showing that standardization reduces the influence of irrelevant factors such as appearance, accent, and rapport (Levashina et al., 2014).
By automating the structured interview format, ZeroPitch eliminates common sources of human interviewer inconsistency while preserving the conversational depth that makes interviews valuable. For our specific approach to fairness and bias mitigation, see our Bias Statement.
8. Questions
If you have questions about our methodology or would like to discuss how ZeroPitch can be configured for your evaluation needs, contact us at hello@buildzeroist.com.