Sample candidate replay

See the evidence your reviewers get.

This sample mirrors Kagento's review surface for a completed candidate assessment: score, tests, duration, terminal replay, scorecard export, and internal reviewer notes. It is a demo replay, not an AI interviewer.

Bookstore API Repair

Backend assessment - completed

Score: 92/100
Tests passed: 23/25
Tests failed: 2/25
Duration: 24m 18s
AI setup: Candidate-owned tool

Session Replay

Terminal Replay

The complete shell input and output from the session.

replay - 00:17:36
$ pytest -q
2 failed, 23 passed
$ sed -n '40,70p' api/routes/books.go
return json.NewEncoder(w).Encode(books)
$ git diff -- api/routes/books.go
+ if books == nil { books = []Book{} }
$ pytest tests/test_books.py::test_pagination -q
FAILED expected page=2 offset=20
$ vim api/routes/books.go
manual correction: offset = (page - 1) * limit
$ pytest -q
25 passed
Reviewer actions
View Session Replay
Printable scorecard
Download JSON

Submitted May 6, 2026

Human-reviewed

Kagento records evidence. Your engineers decide.

No AI interviewer

The platform records candidate work; it does not conduct or grade interviews by itself.

Candidate-owned tools

Candidates use the AI setup they would use at work.

Deterministic tests

Scores come from task tests, not AI judgment.
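To make "deterministic tests" concrete: a score derived purely from pass/fail counts over a fixed suite is reproducible, with no model in the loop. A toy Go sketch follows; the function and its simple proportional weighting are hypothetical, not Kagento's actual scoring, though they do reproduce the sample scorecard's numbers.

```go
package main

import "fmt"

// scoreFromResults sketches deterministic scoring: identical test
// results always yield an identical score. Simple proportional
// weighting (hypothetical) maps the sample's 23/25 to 92/100.
func scoreFromResults(passed, total int) int {
	if total == 0 {
		return 0
	}
	return passed * 100 / total
}

func main() {
	fmt.Println(scoreFromResults(23, 25)) // prints 92
}
```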

Want to run this on your own role?

We can set up one task, invite flow, replay review, and scorecard export for a pilot.

Book a demo