# Kill Your Darlings: What Analytics Taught Us About Akiko
How a fire-and-forget analytics setup revealed that one of our most elaborate features was being ignored — and what we did about it.
## The Feature We Were Proud Of
Akiko is one of the characters in the Retropolis world — a rogue AI broker who offers high-risk, high-reward contracts. Unlike standard missions, Akiko's contracts have unusual conditions: they expire fast, they require rare items, and the rewards are substantial but unpredictable. We spent weeks on her: the writing, the UI, the reward tables, the cooldown system, the way her contracts escalate in difficulty as your reputation with her grows.
We were proud of Akiko. She felt like depth.
Three months after launch, we pulled the contract acceptance rate by type. Standard missions: 74% acceptance. Gang war contracts: 61%. Akiko contracts: 23%.
Less than one in four players who saw an Akiko contract took it.
That number started a conversation we needed to have.
## Setting Up the Analytics Stack
Before getting into what we learned, it's worth explaining how we got the data — because the tooling choice matters, and we made a deliberately minimal one.
### ClickHouse: The Right Database for Events
Game analytics is a write-heavy, read-analytical workload. You're inserting thousands of events per minute and running aggregation queries over millions of rows. A traditional relational database handles this poorly. A full-featured analytics platform (Amplitude, Mixpanel) handles it well but costs money and creates a dependency.
We went with ClickHouse — an open-source columnar database built for exactly this workload. It's fast, it's cheap to self-host, and its SQL dialect is familiar. A c3.xlarge on Fly handles our current volume without breaking a sweat.
Our schema is deliberately flat. One main events table:
```sql
CREATE TABLE game_events (
    event_time  DateTime,
    event_name  LowCardinality(String),
    user_id     String,
    session_id  String,
    platform    LowCardinality(String),
    value       Float64,
    label       String
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_time)
ORDER BY (event_name, event_time, user_id);
```
That's it. No foreign keys, no normalization, no joins. Everything interesting gets denormalized into `label` as a JSON string when needed.
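With one flat table, most questions reduce to a single `GROUP BY`. A representative sketch of the kind of aggregation this schema is built for (the seven-day window is an arbitrary choice):

```sql
-- Daily event counts and unique users per event type, last 7 days.
-- No joins needed: everything lives in game_events.
SELECT
    toDate(event_time) AS day,
    event_name,
    count()       AS events,
    uniq(user_id) AS users
FROM game_events
WHERE event_time >= now() - INTERVAL 7 DAY
GROUP BY day, event_name
ORDER BY day, events DESC;
```

Because the table is ordered by `(event_name, event_time, user_id)`, filters on event name and time range scan only the relevant granules.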
### Fire-and-Forget in Go
The critical design decision: analytics must never slow down gameplay. Every event is sent asynchronously, with no retry on failure. If ClickHouse is down, we lose some events. That's fine. We're making game decisions, not running a financial ledger.
Our tracking call looks like this:
```go
// In features/analytics/track.go
func Track(userID, event string, value float64, label string) {
	go func() {
		// Insert directly via the ClickHouse HTTP interface.
		// Any error is silently discarded.
		_ = insertEvent(context.Background(), userID, event, value, label)
	}()
}
```
Calling it from a handler is one line:
```go
analytics.Track(userID, "contract_viewed", 1, contractType)
analytics.Track(userID, "contract_accepted", 1, contractType)
```
We added tracking to every meaningful player action: map open, building visited, item crafted, mission started, mission completed, mission declined, session start, session end. The implementation cost per event is roughly 30 seconds — add two lines, deploy.
### The Dashboard
We use Grafana connected to ClickHouse via the official plugin. Zero custom frontend code. Our main dashboard has:
- **Activity heatmap** — hourly active users across the week, broken down by timezone
- **Top collected items** — what drops players are actually picking up, ranked by frequency
- **Session length distribution** — how long sessions are, and where they end
- **Mission funnel** — viewed → started → completed per mission type
- **Retention cohorts** — D1/D7/D30 by signup week
The Akiko funnel is where things got interesting.
## Reading the Akiko Data
The contract viewed → accepted funnel looked like this:
| Contract type | Views | Accepted | Rate |
|---|---|---|---|
| Standard missions | 48,200 | 35,700 | 74% |
| Gang war | 12,400 | 7,600 | 61% |
| Akiko contracts | 9,800 | 2,250 | 23% |
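A funnel table like this falls out of a single aggregation. A sketch, assuming `label` holds the contract type as in the `Track` calls above:

```sql
-- Viewed → accepted rate per contract type.
SELECT
    label AS contract_type,
    countIf(event_name = 'contract_viewed')   AS views,
    countIf(event_name = 'contract_accepted') AS accepted,
    round(accepted / views * 100, 1)          AS rate_pct
FROM game_events
WHERE event_name IN ('contract_viewed', 'contract_accepted')
GROUP BY contract_type
ORDER BY views DESC;
```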
At first we assumed it was a difficulty problem — Akiko contracts require rare items, maybe players just didn't have the prerequisites. But the data showed that 71% of players who viewed an Akiko contract met the item requirements. They could take it. They chose not to.
We dug deeper. The `label` field in our events stores a JSON blob with context, in this case the specific contract requirements. Grouping by `JSONExtractString(label, 'reward_type')` let us see whether certain reward types were driving the low acceptance.
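In query form, that grouping looks something like this sketch (assuming the contract events carry the JSON `label` described above):

```sql
-- Acceptance rate split by reward type, extracted from the JSON label.
SELECT
    JSONExtractString(label, 'reward_type') AS reward_type,
    countIf(event_name = 'contract_viewed')   AS views,
    countIf(event_name = 'contract_accepted') AS accepted,
    round(accepted / views * 100, 1)          AS rate_pct
FROM game_events
WHERE event_name IN ('contract_viewed', 'contract_accepted')
  AND JSONHas(label, 'reward_type')
GROUP BY reward_type
ORDER BY views DESC;
```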
The answer: players were rejecting Akiko contracts at high rates when the reward was cosmetic. When the reward was functional (items, resources, currency), the acceptance rate jumped to 58% — still below standard missions, but no longer alarming.
Akiko's "unpredictable rewards" weren't exciting. They were risky in a way that felt unfair. Players had become good at evaluating expected value across other mission types. Akiko introduced variance they couldn't model, and when the downside of that variance was "you spent rare items for a cosmetic you didn't want," they opted out.
## What We Changed
We adjusted the reward table to guarantee a minimum functional reward on every Akiko contract, with cosmetics as a bonus layer on top rather than a possible main outcome. We also added a small preview hint — not the exact reward, but the category ("gear," "currency," "rare drop") — so players could make a more informed decision.
Acceptance rate after the change: 54%. Not 74% — Akiko is supposed to be high-risk. But not 23%.
The more important outcome: we hadn't killed the feature. The impulse might have been to scrap the Akiko system entirely, or to simplify it into something more like standard missions. The data told us it wasn't broken — the reward structure was broken. That's a much narrower fix.
## The Broader Lesson
The value of analytics isn't proving you were right. It's catching the things you were wrong about before they ossify into permanent parts of the game.
We built Akiko over weeks. The assumption that "high variance = exciting" was never tested. It felt true in the design doc. It felt true in internal playtests where we, the designers, understood the system's intentions. It didn't feel true to a player encountering it for the first time with no context, weighing rare items against an opaque reward.
The fix took two hours once we had the data. The data took two weeks to start flowing meaningfully after we added tracking.
Add tracking earlier than you think you need it. The implementation cost is almost nothing. The data you'll want to have, retroactively, when a feature underperforms — that you can't recover.
## What Our Dashboard Tells Us Weekly
Beyond the Akiko example, here's the stuff we look at every Monday:
Activity heatmap shows us peak hours per region, which informs when we schedule maintenance and when we time major game events. Early on, we were scheduling things at times convenient for us (CET evenings) and wondering why North American engagement was low.
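The heatmap itself reduces to a small query; a sketch (UTC-only here; the per-timezone split needs a user-to-timezone mapping joined on top):

```sql
-- Hourly active users across the week, UTC.
SELECT
    toDayOfWeek(event_time) AS weekday,
    toHour(event_time)      AS hour,
    uniq(user_id)           AS active_users
FROM game_events
WHERE event_time >= now() - INTERVAL 7 DAY
GROUP BY weekday, hour
ORDER BY weekday, hour;
```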
Top collected items tells us which parts of the economy are healthy. If one item dominates the collection chart for three weeks, that's a balance flag. It means players have optimized toward it in a way we probably didn't intend.
Session length distribution shows us where sessions end. A spike at minute 4 usually means a specific screen is confusing or frustrating. We've found two UX bugs this way that never surfaced in bug reports — players just silently left.
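Finding those drop-off spikes is a histogram over paired session events; a sketch, assuming the session start/end events are named `session_start` and `session_end`:

```sql
-- Session lengths in whole minutes. HAVING drops sessions that
-- never logged a matching end event.
SELECT
    intDiv(end_time - start_time, 60) AS minutes,
    count() AS sessions
FROM (
    SELECT
        session_id,
        minIf(event_time, event_name = 'session_start') AS start_time,
        maxIf(event_time, event_name = 'session_end')   AS end_time
    FROM game_events
    WHERE event_name IN ('session_start', 'session_end')
    GROUP BY session_id
    HAVING end_time > start_time
)
GROUP BY minutes
ORDER BY minutes;
```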
Retention cohorts is the north star. Everything else is a leading indicator; this is the outcome. When a change we make shows up positively in D7 retention two weeks later, we know it worked.
None of this required a data engineering team or a six-figure analytics contract. ClickHouse, Grafana, and 30 seconds per event to add tracking. The hard part isn't the tooling. It's building the habit of looking.
## Stack Summary
For anyone setting this up from scratch:
| Component | Tool | Monthly cost |
|---|---|---|
| Event store | ClickHouse (self-hosted on Fly) | ~$30 |
| Dashboards | Grafana (self-hosted or Cloud free tier) | $0 |
| Tracking SDK | Custom fire-and-forget (50 lines of Go) | $0 |
| **Total** | | **~$30** |
The only cost is ClickHouse compute. At our current event volume (~2M events/day), a single small instance is fine. ClickHouse is fast enough that you'll hit budget limits before you hit performance limits.
Start simple. Track the five or six events that actually matter for your core loop. Add more as you have questions the current data can't answer. You'll know what you need once you're looking at the dashboard and frustrated by what's missing.
That frustration is how we found Akiko.