202412110000 Advent of Code 2024 Day 11
Plutonian Pebbles
Today we're visiting the ancient civilization on Pluto from 2019 day 20 and find some interesting physics-defying stones.
#blog #tech #project

The Puzzle
We are given a list of stones that have numbers on them. Every time we blink, each stone changes according to a pre-determined set of rules: a 0 becomes a 1, a stone with an even number of digits splits into two stones (its left and right halves), and any other stone has its number multiplied by 2024. We're asked how many stones there are after blinking 25 times for the first part and 75 times for the second.
Solution
Today was a case of optimizing the solution to an exponential problem. For the first part, I just applied the rules literally.
```js
// Reconstructed sketch of the brute-force version (the original
// snippet was mangled), applying the rules one generation at a time.
let cache = new Map();

// Apply one blink's rules to a single rock.
const evolve = (rock) => {
  if (cache.has(rock)) return cache.get(rock);
  const digits = String(rock);
  let result;
  if (rock === 0) result = [1];
  else if (digits.length % 2 === 0) {
    const mid = digits.length / 2;
    result = [Number(digits.slice(0, mid)), Number(digits.slice(mid))];
  } else result = [rock * 2024];
  cache.set(rock, result);
  return result;
};

// Blink `times` times over the whole list, then count the rocks.
const blink = (rocks, times) => {
  for (let i = 0; i < times; i++) rocks = rocks.flatMap(evolve);
  return rocks.length;
};
```
You might notice that I preemptively made a cache, but it isn't very good. Let's take a closer look at why.
The solution above works through each generation of rocks at once. Our outer loop is the number of generations (i.e. blinks) and the inner loop works over each rock.
The cache then saves us converting each rock to its result: if we've already transformed a rock with this number, we `return cache.get(rock)` instead of applying the rules again.
This part of our solution isn't actually the expensive piece though. The real problem is the sheer number of rocks we have to compute. Exponential problems are of the form f(x) = a·b^x. Here, a is a constant equal to f(x) when x = 0, b is some constant factor determining the growth rate of the function, and x is the independent variable. In our case a is the number of rocks in the initial input, x is the number of blinks, and b is an unknown factor that's definitely greater than 1 and probably less than 3. Doing some back-of-the-envelope math for x = 75, we see that even a modest base is ruinous: 1.5^75 ≈ 10^13, 2^75 ≈ 4×10^22, and 3^75 ≈ 6×10^35.
Realistically, this means our factor b will be somewhere towards the bottom end of the range, because the top end would be infeasible for the purposes of this puzzle. With the power of hindsight, I ran an exponential regression for the problem and found that b does indeed land near the bottom of that range.
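To make those magnitudes concrete, here's a quick numerical check; the 8 starting rocks and the exact bases are illustrative guesses on my part, not values from the real input:

```javascript
// Magnitude of a * b^x for a few plausible growth factors b,
// with a = 8 starting rocks and x = 75 blinks (both illustrative).
const rocksAfter = (a, b, x) => a * Math.pow(b, x);

for (const b of [1.5, 2, 3]) {
  console.log(`b = ${b}: ~${rocksAfter(8, b, 75).toExponential(1)} rocks`);
}
```

Even at the tame end of the range, that's far more rocks than we can afford to materialize one at a time.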
Anyway, all of that is to show that exponential equations grow really fast and we need to make some sort of cache that will reduce the total number of things we do, not simply shorten the computation per thing. Our first caching attempt failed at this. So what can we do?
There's an approach to optimization in programming that I like to term "rotating the problem". This is when you take the problem and change from a wide approach to a deep approach or vice versa. The famous change from an array of structs to struct of arrays is a good example of how rotating the data can lead to a more efficient solution. In the case of our puzzle, we rotate our processing from breadth first (computing an entire generation at once) to depth first (computing all generations for one rock before moving to the next one).
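For a miniature picture of that data rotation (a generic sketch with invented particle fields, unrelated to the puzzle itself):

```javascript
// Array of structs: each particle is its own object.
const aos = [
  { x: 1, y: 2 },
  { x: 3, y: 4 },
];

// Struct of arrays: one array per field. Summing a single field now
// walks one contiguous array instead of hopping between objects.
const soa = {
  x: [1, 3],
  y: [2, 4],
};

const sumXAos = aos.reduce((s, p) => s + p.x, 0);
const sumXSoa = soa.x.reduce((s, v) => s + v, 0);
```

Both layouts hold the same data; only the access pattern changes, which is exactly the kind of shift we want for the puzzle.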
This rotation unlocks a key caching opportunity. When we process the first rock, we store each intermediate rock and its eventual count along the way. That way, if a later generation contains two of the same rock, the second one's entire subtree is cached all the way down to its final value; and once the outer loop moves on to the next initial rock, everything that rock shares with the first comes straight from the cache as well.
I can't seem to get an explanation of this in words that feels clear, so let me try to show it visually. For this case I'm going to make the rules of the puzzle a bit simpler and only compute 5 generations. We'll have two rules as follows:
a -> b
b -> a a
First the old breadth-first solution.
╔══════════════════╤════════════╤══════════════╗
║ rocks │ generation │ computations ║
╠══════════════════╪════════════╪══════════════╣
║ a a │ 0 │ 0 (initial) ║
║ b b │ 1 │ 2 (a->b) ║
║ a a a a │ 2 │ 2 (b->a a) ║
║ b b b b │ 3 │ 4 ║
║ a a a a a a a a │ 4 │ 4 ║
║ b b b b b b b b │ 5 │ 8 (done) ║
╠══════════════════╪════════════╪══════════════╣
║ │ total │ 20 ║
╚══════════════════╧════════════╧══════════════╝
Even though we cache the simple rule transformations, we still do more and more computations each generation.
Second, let's do it depth first and cache the results.
╔══════════════════╤════════════╤══════════════╗
║ rocks │ generation │ computations ║
╠══════════════════╪════════════╪══════════════╣
║ a _ │ 0 │ 0 (initial) ║
║ b                │ 1          │ 1 (a->b)     ║
║ a a              │ 2          │ 1 (b->a a)   ║
║ b b              │ 3          │ 2            ║
║ a a a a          │ 4          │ 2            ║
║ b b b b          │ 5          │ 4 (done)     ║
╠══════════════════╪════════════╪══════════════╣
║ │ subtotal │ 10 ║
╠══════════════════╪════════════╪══════════════╣
║ _ a │ 0 │ 0 (initial) ║
║ b b b b          │ 5          │ 1 (cached)   ║
╠══════════════════╪════════════╪══════════════╣
║ │ subtotal │ 1 ║
╠══════════════════╪════════════╪══════════════╣
║ │ total │ 11 ║
╚══════════════════╧════════════╧══════════════╝
Here's the power of the new way of doing it. We populate our cache in the first tree with "an a
in generation 0 computes to 4 rocks in the end" and skip the entire second tree of computation.
The real input and rules are more complicated, but it short-circuits a huge portion of our problem space, making the problem tractable.
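The tables can be sanity-checked with a minimal memoized depth-first counter for the toy a/b rules; this is my own sketch of the idea, not the real solution:

```javascript
// Toy rules from the example: a -> b, b -> a a.
const toyRules = { a: ["b"], b: ["a", "a"] };
const memo = new Map();

// How many rocks does one rock become after `gens` generations?
const countRock = (rock, gens) => {
  if (gens === 0) return 1;
  const key = `${rock}:${gens}`;
  if (!memo.has(key))
    memo.set(
      key,
      toyRules[rock].reduce((sum, r) => sum + countRock(r, gens - 1), 0)
    );
  return memo.get(key);
};

// Two starting `a` rocks, five generations: 4 + 4 = 8 rocks,
// matching the final row of the tables above. The second rock
// never recurses at all; its count is a single cache hit.
const total = ["a", "a"].reduce((sum, r) => sum + countRock(r, 5), 0);
```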
Now that I've gone on for far too long, here's the final code.
```js
// Reconstructed sketch of the final, memoized solution; `evolve` and
// `cached` match the names discussed below.
const cache = new Map();

// Memoized front door: look up (rock, blinks) before recursing.
const cached = (rock, blinks) => {
  const key = `${rock}:${blinks}`;
  if (!cache.has(key)) cache.set(key, evolve(rock, blinks));
  return cache.get(key);
};

// How many rocks does this one rock become after `blinks` blinks?
const evolve = (rock, blinks) => {
  if (blinks === 0) return 1;
  if (rock === 0) return cached(1, blinks - 1);
  const digits = String(rock);
  if (digits.length % 2 === 0) {
    const mid = digits.length / 2;
    return (
      cached(Number(digits.slice(0, mid)), blinks - 1) +
      cached(Number(digits.slice(mid)), blinks - 1)
    );
  }
  return cached(rock * 2024, blinks - 1);
};

// Part 1: solve(input, 25); part 2: solve(input, 75).
const solve = (rocks, blinks) =>
  rocks.reduce((sum, rock) => sum + cached(rock, blinks), 0);
```
JavaScript failed me a bit here compared to Python's `@functools.cache`. I had to manually memoize the `evolve` function, which ends up being co-recursive with `cached`. You can ignore the `cached` stuff entirely if you just read the `evolve` function and pretend it's automatically memoized.
Long winded today, but kind of fun to go through and explain.
You can find the full solutions to today's AoC puzzles in my AoC GitHub repo.