
PERSONAL AI USAGE COACH

The coach that helps you use AI better.
Cut the misses. Amplify the wins.

Personal DORA metrics from your local Claude Code, Codex and Copilot sessions. Runs on your laptop. No telemetry, no ranking — just a clear weekly read of what your AI is helping with and where it is hurting you.

For developers who want a personal weekly read of their own AI use — not a corporate dashboard watching them.


AI is an amplifier — and a mirror.

Healthy teams ship more with AI; struggling teams ship more chaos with AI. The bottleneck is rarely the model — it is your loop, your goals and your verification habits. The Personal AI Usage Coach is a local-first mirror that surfaces what your AI is amplifying in your week, so you can correct it at the source.

“AI is an amplifier.”
— DORA, ROI of AI-assisted Software Development, 2026, p. 3
Figure: AI is an amplifier — and a mirror. Healthy team: +throughput. Struggling team: +rework.

Healthy team. Clear goals, fast feedback, small batches. AI compounds the throughput.

Struggling team. Vague prompts, no tests, big diffs. AI compounds the rework.

02

The J-Curve: AI value takes a while to land.

DORA's 2026 report is honest about the timeline. Productivity often dips first as teams climb a learning curve, pay a verification tax on AI-generated code, and adapt their pipelines to a new cadence. Only after that does throughput actually grow. The Personal AI Usage Coach does not promise to skip the valley — it gives you a local mirror so you can see exactly where you are in it, and what to correct next.

“Realism is essential when forecasting the timeline to ROI for AI.”
— DORA, ibid., p. 4
Figure: J-Curve of AI value realisation (original illustration) — value over time, through the learning curve, the verification tax and pipeline adaptation, to the exit where the coach helps you see where you are.

03

DORA team metrics vs Personal coach signals.

DORA's four classic metrics measure team outcomes — they are lagging by design and require a whole team to move them. The Personal AI Usage Coach surfaces the leading, individual signals you can correct this week, in the loop of your own AI sessions. Same vocabulary, different scope.

DORA team metric (lagging) | Personal coach signal (leading) | First corrective step
Deployment Frequency | throughput_per_week | Reduce sessions abandoned mid-flow.
Lead Time for Changes | time_to_deliverable_p50 | Tighten goal-clarity at session start.
Change Failure Rate | personal_failure_rate | Cut course-changes mid-session.
Time to Restore Service | recovery_days_after_chaos | Schedule a recovery day after a red day.
Wellbeing | wellbeing_flags | Cap late-night sessions; protect weekends.
Goal clarity | goal_clarity_rate | Write acceptance criteria before prompting.
Batch / iteration mix | mode_mix | Match the mode to the work — do not deliver in discovery mode.
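To make the leading signals concrete, here is a minimal sketch of how a week of local sessions could roll up into three of them. The record fields, thresholds and session data are illustrative assumptions, not the coach's real schema.

```python
from dataclasses import dataclass

# Hypothetical session record. Field names are illustrative only,
# not the coach's actual on-disk schema.
@dataclass
class Session:
    day: str
    had_written_goal: bool   # acceptance criteria written before prompting
    course_changes: int      # mid-session pivots
    delivered: bool          # session ended in a shipped/merged artefact

def weekly_signals(sessions: list[Session]) -> dict[str, float]:
    """Roll one week of local sessions into leading personal signals."""
    n = len(sessions)
    delivered = sum(s.delivered for s in sessions)
    return {
        "throughput_per_week": delivered,
        "goal_clarity_rate": sum(s.had_written_goal for s in sessions) / n,
        # A session with more than two pivots counts as a "failure" here --
        # the cutoff is an assumption for the sketch.
        "personal_failure_rate": sum(s.course_changes > 2 for s in sessions) / n,
    }

week = [
    Session("mon", True, 0, True),
    Session("tue", False, 4, False),
    Session("wed", True, 1, True),
    Session("thu", True, 0, True),
]
print(weekly_signals(week))
```

The point of the shape, not the numbers: every signal is computable from data that never leaves your machine.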

04

Your seven personal signals at a glance.

The Personal DORA radar plots the seven canonical signals exposed by the local coach. The example below uses synthetic values; on your machine the radar is computed from your real Claude Code, Codex and Copilot sessions.

Synthetic example. Real values come from your local sessions.

05

Activity is not delivery.

A noisy week of LLM sessions can look productive on a dashboard while shipping nothing. The Personal AI Usage Coach distinguishes activity (sessions, retries, course-changes, message count) from delivery (commits, tests passing, branches merged, evidence screenshots). The point is not to optimise the noise — it is to convert more of that activity into delivery.

“We don’t measure AI by the code it writes but by the bottlenecks it clears.”
— DORA, ibid., p. 7

Activity (LLM session noise)

Delivery (commits, tests passing, branches merged, evidence)

Example week

Monday and Tuesday show high activity and low delivery — many sessions, little merged. Thursday and Friday flip: fewer sessions, more shipped. The coach surfaces the inversion so you can ask whether the early days were exploration that paid off, or just churn.

If your week shows high activity and low delivery, the coach surfaces the pattern — not to shame, to redirect.
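The activity-versus-delivery split above can be sketched as a per-day gap score. The event taxonomy below is an assumption for illustration, not the coach's real one: a large positive gap marks a high-activity, low-delivery day.

```python
from collections import Counter

# Illustrative event taxonomy, assumed for this sketch.
ACTIVITY = {"session", "retry", "course_change", "message"}
DELIVERY = {"commit", "test_pass", "merge", "evidence"}

def daily_gap(events: list[tuple[str, str]]) -> dict[str, int]:
    """Per day: activity count minus delivery count.

    Positive = noise outpaced shipping; negative = the day delivered.
    """
    activity, delivery = Counter(), Counter()
    for day, kind in events:
        if kind in ACTIVITY:
            activity[day] += 1
        elif kind in DELIVERY:
            delivery[day] += 1
    days = sorted(set(activity) | set(delivery))
    return {d: activity[d] - delivery[d] for d in days}

week = [("mon", "session"), ("mon", "retry"), ("mon", "session"),
        ("thu", "session"), ("thu", "commit"), ("thu", "merge")]
print(daily_gap(week))  # mon is all noise; thu ships
```

A week whose gaps stay positive every day is the pattern the coach surfaces for redirection.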

06

Exploratory weeks are work too.

Some weeks you do not ship features — you map the terrain. The coach treats discovery as a first-class mode and refuses to penalise it. The mode-mix below shows a real exploratory week, framed for what it was: a successful expedition, not a slow delivery week.

Case: a week building a mainframe interpreter

The objective was to map the surface of a mainframe stack — PL/I, COBOL, CICS, BMS, VSAM — and prove that a local interpreter plus a Hercules host could run a non-trivial sample. By Friday a 3270 terminal connected end-to-end and CARDDEMO was running on the host. Commits were few and small; learning was vast and concrete. The coach renders this as a 70 percent discovery week, with the artefacts that did ship.

What shipped

  • Working 3270 terminal
  • CARDDEMO running on host
  • Map of the COBOL/CICS/VSAM/BMS surface

Where the time went

  • Provisioning a System/390 on cloud — external blocker
  • zxplore quota: ten executions per week
  • Climbing the learning curve, not burning time

Mode mix this week: Discovery dominant (≈70%), the remainder split across Delivery, Maintenance and Research.

A 70% discovery week is healthy when the goal is mapping new terrain. The coach surfaces the mix so you can verify the mode matched the intent.
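A mode mix like the one above is just time shares normalised to percentages. A minimal sketch, with hypothetical hour counts chosen to produce a discovery-dominant week:

```python
def mode_mix(hours: dict[str, float]) -> dict[str, float]:
    """Share of the week per mode, as percentages rounded to one decimal."""
    total = sum(hours.values())
    return {mode: round(100 * h / total, 1) for mode, h in hours.items()}

# Hypothetical hours for a discovery-heavy week (not real coach data).
week = {"delivery": 4.0, "discovery": 21.0, "maintenance": 3.0, "research": 2.0}
print(mode_mix(week))
```

Whether 70% discovery is healthy depends entirely on the intent of the week, which is why the coach shows the mix rather than scoring it.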

07

Privacy by construction.

The coach is local-first because measuring AI at the individual level only works if developers trust the tool. No telemetry leaves your machine; no leaderboard ranks you against your peers; the analysis core ships zero external dependencies.

Install the coach. See what your AI is amplifying.

Two minutes from install to your first weekly note. Works with Claude Code, Codex and Copilot sessions out of the box.

pipx install obsly-ai