Sphere Chest Solver

v1.0 — 2026-04-07

Each row shows one step in the optimal click strategy. Only states reachable by following the strategy from the beginning are shown.

Board (column 1)

Colored cells are already revealed; the dashed ? cell is the recommended next click; dark cells are unrevealed. The fixed center cell is shown in medium gray. Toggle "Show red candidate positions" to overlay a red border on every unrevealed cell that could still be the red sphere on at least one consistent board. Cells without a border have been eliminated as possible red positions.

Next color? (column 2)

Click a color pill to show or hide the child rows for that outcome branch. Multiple branches can be open at once; a pill appears dimmed while its branch is open.

EV (column 4)

Three values — next-cell EV (expected kakera from clicking the recommended cell) / remaining EV (expected total from all remaining clicks from this state) / total EV (remaining EV plus the kakera already earned along this branch).
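
The relationship between the three columns can be shown with a tiny worked example. The numbers here are made up for illustration; only the formula (total = earned + remaining) comes from the description above.

```python
# Hypothetical numbers for one mid-game row of the table.
earned = 100.0        # kakera already banked on this branch
next_ev = 40.0        # expected kakera from the recommended next click
remaining_ev = 250.0  # expected kakera from all remaining clicks
                      # (this already includes next_ev's contribution)

# The "total" column is remaining EV plus what the branch has earned.
total_ev = earned + remaining_ev
```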

Board counts (column 5 and 6)

Not all board configurations are equally likely. Each red position is equally likely a priori, but different positions are consistent with different numbers of boards: a red in a corner admits fewer boards than a red near the center. As a result, each raw board in a corner-red branch carries more probability than a raw board in a center-red branch. The weighted count corrects for this: it gives each of the 24 red positions an equal share of probability, then divides that share evenly among the boards consistent with that position. Compare column 5 (raw count) with column 6 (weighted) to see how skewed a particular branch is.
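
The correction can be sketched in a few lines. This is an illustrative sketch, not the solver's code; `boards_by_red` is a hypothetical mapping from each red position to the list of boards with red there.

```python
def board_weights(boards_by_red):
    """Give each red position an equal probability share (1/24 in this
    puzzle), then split that share evenly among the boards consistent
    with that position. Returns a dict: board -> weight."""
    n_pos = len(boards_by_red)
    weights = {}
    for pos, boards in boards_by_red.items():
        share = 1.0 / n_pos / len(boards)
        for b in boards:
            weights[b] = share
    return weights
```

A corner position with few consistent boards gives each of those boards a large weight; a center position with many consistent boards gives each a small one, so every red position ends up with the same total probability.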

How the strategy is computed (Bellman DP)

The strategy is computed using Bellman dynamic programming: work backwards from the end of the game and determine the best move in every reachable state.

At the last click (click 5), the best cell is simply whichever unrevealed cell has the highest expected value given what you already know — you average over all boards still consistent with your revealed cells, weighted by their probability.

For earlier clicks, it’s a little more involved: clicking a cell reveals its color, which changes what you know, which changes the best move for future clicks. Bellman DP handles this by defining EV(state, k) = the expected total kakera from the next k clicks, given the current revealed state. It computes this recursively:

EV(state, k) = max over cells c of: ∑ P(color at c = x | state) × (value(x) + EV(state + reveal(c=x), k−1))

In plain terms: for each candidate cell, consider every color it might be. For each color outcome, add the kakera you’d get from that color to the best possible future kakera from that new state. Pick the cell that maximises the weighted average across color outcomes. This is repeated for all 5 clicks, producing the full decision tree shown here.
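
The recursion above can be sketched directly in code. This is a minimal illustration under assumed names — `boards` as a list of cell-to-color dicts, `weights` as the per-board probability weights, and a `state` as a frozenset of `(cell, color)` reveals — none of which come from the solver itself.

```python
# Assumed kakera values, matching the legend in this document.
VALUES = {"red": 150, "orange": 90, "yellow": 55,
          "green": 35, "teal": 20, "blue": 10}

def ev(state, k, boards, weights, cells):
    """Expected kakera from the next k clicks, given revealed `state`.

    state:   frozenset of (cell, color) reveals made so far
    boards:  list of dicts mapping cell -> color (all possible boards)
    weights: prior weight of each board (per-red-position correction)
    cells:   all clickable cells
    """
    if k == 0:
        return 0.0
    # Keep only boards consistent with everything revealed so far.
    alive = [i for i, b in enumerate(boards)
             if all(b[c] == col for c, col in state)]
    total_w = sum(weights[i] for i in alive)
    revealed = {c for c, _ in state}
    best = 0.0
    for c in cells:
        if c in revealed:
            continue
        # Group surviving boards by the color this cell would reveal.
        by_color = {}
        for i in alive:
            by_color[boards[i][c]] = by_color.get(boards[i][c], 0.0) + weights[i]
        # Weighted average over color outcomes of value + future EV.
        cell_ev = sum(
            (w / total_w) * (VALUES[col]
                             + ev(state | {(c, col)}, k - 1,
                                  boards, weights, cells))
            for col, w in by_color.items())
        best = max(best, cell_ev)
    return best
```

At `k = 1` this reduces to the last-click rule described earlier: pick the unrevealed cell with the highest expected value over the surviving boards. A real implementation would memoize on `(state, k)` to avoid recomputing shared subtrees.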

Kakera values: Red (150), Orange (90), Yellow (55), Green (35), Teal (20), Blue (10)
Board | Next color? | Earned so far | EV (next / remaining / total) | Boards (raw) | Boards (effective)
Click 1: (1, 1) | ? | 0 | 35.5 / 344.7 / 344.7 | 16,800 | 16,800.0