
Advent of Code Catchup

·2045 words·10 mins

Finally, after years of sitting on the sidelines, I’m’a jump on the Advent of Code! I’m a few days late, so I’m’a try to do a big catch-up.

Day One - Lobby

Phew! Day one of Advent of Code took like 80 minutes? It was 2.5 pomodoros.

It was a puzzle about rotating a dial on a safe and counting the number of times you pass zero. https://adventofcode.com/2025/day/1

Part 1 was fast. Part 2 took a lot of time. I tried using modulo to do the calculations but it was really hairy keeping track of it. I also tried to use a single branch of code to handle the forward and backward turns but that turned out not to work either. There’s probably a more elegant solution out there.

I poked around a bit and haven’t found one yet. I saw a few adding or subtracting 100, though. Nothing with modulo.

Oh here we go: https://github.com/hiimjasmine00/advent-of-code/blob/master/2025/day1/part2.js

if (line.startsWith("L")) {
    for (let i = 0; i < factor; i++) {
        rotation = (((rotation - 1) % 100) + 100) % 100;
        if (rotation == 0) count++;
    }
}

Oh wait, no, they’re brute forcing it. They’re just using modulo to handle the edges.

There’s a very succinct solution written in R and posted on Reddit which has a crazy strategy. They point out the edge case where you start from 0 and move the dial left.

“I get around the edge case by noticing that when the dial goes to zero and reverses, it either counts it twice or not at all, and there’s a symmetric case for rounding classes up rather than down. So I just combine them.”

x = readr::read_lines("input.txt")
xdir = substring(x,1,1)
xlen = as.numeric(substring(x,2))
xcum = cumsum(c(50,xlen*ifelse(xdir=="L",-1,1)))
0.5 * sum(
    abs(diff(floor(xcum / 100))) +
    abs(diff(floor((xcum-1) / 100)))
)

That’s Reddit user Busy_Coffee779.

The way they’re counting zeros is crazy. I think xcum just contains the list of raw positions without wrapping. Those values get divided by 100 and floored, so now each row should have the number of full turns (zero crossings?) to get from 0 to the raw position. But if you’re going from 250 to 350 you don’t want to count all 3 of those full turns, just the one crossing, so there’s a diff in there, too.

I’m barely following this but based on the description, that line, abs(diff(floor(xcum / 100))) should give the number of crossings without handling the “left from zero” edge case.

I don’t quite follow how subtracting one from the raw values helps. It would shift the boundary over by one… I think I’d have to run some examples to wrap my head around it but unforch I better move on.
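
If I ever do circle back to it, a tiny made-up example would probably be enough. Here’s my rough Python sketch of the same trick, assuming the dial starts at 50 (like the cumsum above) and that landing exactly on 0 counts as a pass:

import math

# Made-up moves: start at 50, turn left 50 (land exactly on 0), then turn right 30.
# The true answer here is 1: the dial points at zero exactly once.
xcum = [50, 0, 30]  # raw (unwrapped) cumulative positions

def crossings(positions, shift=0):
    # Count how many times we cross a multiple of 100, optionally shifting the boundary.
    blocks = [math.floor((p - shift) / 100) for p in positions]
    return sum(abs(b - a) for a, b in zip(blocks, blocks[1:]))

print(crossings(xcum))      # 0: the touch-and-reverse at zero isn't counted at all
print(crossings(xcum, 1))   # 2: the shifted boundary counts that same touch twice
print((crossings(xcum) + crossings(xcum, 1)) / 2)  # 1.0: averaging the two gets it right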

It’s such a different way to think about the problem. Such a data scientist way, and not just because of R: instead of storing one variable for the position and manipulating that, this solution stores a vector of all the positions and processes them all at once.

Day Two - Gift Shop

https://adventofcode.com/2025/day/2

Okay, so in a two-digit number I can only have repeating patterns of one-digit numbers. Same for a three-digit number.

11 or 111 are repetitions of 1

Four-digit numbers can also have repetitions of two-digit numbers.

1212

Five-digit numbers can have repetitions of one-digit numbers.

So we’re doing factoring.

Six-digit numbers can have repetitions of one-digit, two-digit, and three-digit numbers.

111111 121212 123123

Gotta remember these ranges could span numbers with different digit counts. I could have a range from 5 to 8000.

I could visit every number in the range, then collect the digits into a pattern. That’s a start.
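
Roughly the kind of check I have in mind, as a hypothetical helper (a sketch, not the code in my repo):

def is_repeated(n):
    # True if the decimal digits of n are some shorter pattern repeated two or more times.
    s = str(n)
    for pattern_len in range(1, len(s) // 2 + 1):
        if len(s) % pattern_len == 0 and s == s[:pattern_len] * (len(s) // pattern_len):
            return True
    return False

print(is_repeated(1212))    # True: "12" twice
print(is_repeated(111111))  # True: "1" six times (or "11" three times, or "111" twice)
print(is_repeated(1213))    # False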

Ooh whoops. I submitted a guess using my test data. Oy yo yoy.

I noticed in the problem description the ID ranges sounded like they were inclusive but it wasn’t 100% obvious. But when I try using the range with the last number exclusive I get the same result. Not sure if it’s on purpose but if it is, that’s a nice bit of polish to ensure that the last number in each of the ranges isn’t one of the pattern numbers.

Reading the problem description again, though, it looks like my issue is I’m supposed to find patterns that are repeated twice, not X times. I solved a more general problem, whoops.

Reading.

Phew. Finally got part one done. And thankfully, part two is just the more general problem I already solved … hopefully! Can I just plug in my first number? Let me actually run it in case I changed something else but I don’t think so.

27180728081

Same number.

And yeah, that was the right answer! Lol. Note to self. Reading is important.

That took an hour and a bit overall. Maybe a little faster than the previous one.

My solution: https://github.com/nielmclaren/AdventOfCode/blob/main/2025/day2/main.py

Time for a break, reckon.

Day Three - Lobby

https://adventofcode.com/2025/day/3

Pick two digits from a large number, keeping their order, such that the resulting two-digit number is as high as possible. So picking 8 and 9 from the number 811111111111119 yields a value of 89. 81 is smaller. 19 is smaller. 11 is smaller.
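
Part one can be brute forced with something like this (a hypothetical helper, not my actual solution):

def best_two_digits(s):
    # Try every ordered pair of positions and keep the biggest two-digit result.
    return max(int(s[i] + s[j]) for i in range(len(s)) for j in range(i + 1, len(s)))

print(best_two_digits("811111111111119"))  # 89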

Sweet! That only took fifteen minutes. For part one, at least. We’ll see how part two goes.

Wow! That was much easier than the others. I finished both parts in one pomodoro. Maybe I’m getting a flow going? There’s a pattern to the two parts each day so far, too: a simpler problem, then a more general version of that problem.

https://github.com/nielmclaren/AdventOfCode/blob/main/2025/day3/main.py

Day Four - Printing Department

https://adventofcode.com/2025/day/4

Okay, so this one is a grid of binary values: each square either has a roll of paper or it doesn’t. How many rolls of paper have fewer than four rolls of paper in the eight adjacent squares?

I can’t even think of a solution that isn’t brute forcing it. Just iterate over each square, right?

Wow. Brain must be getting tired. I’ve made two mistakes.

First, when iterating over the adjacent squares, by going from x-1 to x+1 and y-1 to y+1, I failed to exclude the current square. I should know this! Ouchie.

Second, I got mixed up between x and y when looking at output coordinates. I was wondering why my results were so wrong, even though I was getting the correct total. Facepalm.
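
For the record, here’s a sketch of the neighbour check with the skip I originally forgot (hypothetical names, with x as the column and y as the row; my real code is in the repo linked below):

def is_accessible(grid, x, y):
    # A roll is accessible if fewer than four of its eight neighbours also hold a roll.
    if not grid[y][x]:
        return False
    neighbours = 0
    for ny in range(y - 1, y + 2):
        for nx in range(x - 1, x + 2):
            if (nx, ny) == (x, y):
                continue  # mistake #1: forgetting to skip the square itself
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                neighbours += 1
    return neighbours < 4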

On the bright side I got the answer on the first try.

Oof. Part two looks like it’ll be a bit unwieldy. Remove accessible rolls of paper until I can remove no more. How many can you remove?

This might actually be pretty cool to visualize. Or use for generating dungeons or something. Terrain weathering maybe.

But anyway, in the example, all accessible rolls are removed at each step. But I don’t necessarily have to do it that way; I could just find the first accessible roll, remove it, and repeat, no? Would the answer be the same? Can I think of a counterexample?

I ended up removing them all at each step anyway since that’s how the code worked already. I remembered to dodge the gotcha of updating the grid in place. Submitted my answer and got it on the first try, yay!

…however. Let’s try removing them one by one, just out of curiosity. I would imagine the animation would look very different but would still converge to the same solution, though I’m also very uncertain about that gut instinct.

So my answer was 7922. Let’s see how the one-by-one approach compares.

7922

Cool.

It takes much longer to reach the solution but it gets to the same one, at least for my personal problem input, which is pretty large, so I’m inclined to believe that my guess was correct.

https://github.com/nielmclaren/AdventOfCode/blob/main/2025/day4/main.py

Reflection

I compared my solutions to those of my friends for all the days and it was cool to see the different approaches (and different language syntax). One big takeaway for me is that I need to lean on libraries more. I was working on my grid by hand which was unnecessary extra work, caused more mistakes, and resulted in worse performance. And I’m expecting performance to become a bigger problem as the Advent of Code progresses.

Performance

My one-at-a-time solution for removing rolls of paper from the grid was also abnormally slow. My friends used the one-at-a-time approach as well and theirs took less than a second. Mine took seventeen seconds.

So I tried profiling Python code for the first time ever. It was remarkably easy to set up. Really delightfully surprised by that! I used cProfile, which is built in, and it produced the chart I needed on the first go.
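
For anyone curious, the invocation really is tiny. Something like this, where solve() is a hypothetical stand-in for whatever actually runs the puzzle:

import cProfile

# Profile the whole run and print the stats sorted by cumulative time.
cProfile.run("solve()", sort="cumulative")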

My solution was calling the grid accessor 170 million times. My friends profiled their code and they were calling their accessors 4.5 million and 1.5 million times. Something was clearly off.

After some head-scratching I tracked down the difference. When they remove a roll they continue looking for accessible rolls from where they left off. When I remove a roll I go back to the top left corner of the grid and start again. I end up with a huge number of calls, many of which are just telling me over and over again that, no, this roll is not accessible.

I did have a reason for restarting the search after each removal. I was avoiding a problem I’d often seen with images where reading some pixel and then changing the pixels around it would cause future pixel readings to be thrown off. The usual way to avoid that problem is double buffering. You read from one image and write to another one to keep the output from going back and polluting your input.

When making Conway’s Game of Life, for example, that kind of input pollution can make it behave very wrong.

Maybe another relevant idea is removing elements while iterating over an array and having your index all messed up.

But removing a roll can only ever make other rolls removable, never the reverse, so it was fine to modify the grid in place and to continue iterating from the same position.

With the update it’s gone down from 170 million to 1.5 million calls. Phew!

It’s even faster than the batch-wise implementation now, which has 4x the calls at 6 million.

Now one more thing. Another algorithm.

What if I traverse the grid once, add all the accessible rolls of paper to a queue, then process the queue? Every time I remove a roll, add its neighbours to the queue.
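
Roughly what I mean, reusing the hypothetical is_accessible() check from the day four sketch above (again, a sketch rather than the code in my repo):

from collections import deque

def remove_all(grid):
    # Seed the queue with every roll that's accessible right now.
    queue = deque((x, y)
                  for y in range(len(grid))
                  for x in range(len(grid[0]))
                  if is_accessible(grid, x, y))
    removed = 0
    while queue:
        x, y = queue.popleft()
        if not is_accessible(grid, x, y):
            continue  # already removed, or still boxed in; a later removal can re-queue it
        grid[y][x] = False
        removed += 1
        # Removing this roll may have freed up its neighbours, so check them next.
        for ny in range(y - 1, y + 2):
            for nx in range(x - 1, x + 2):
                if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                    queue.append((nx, ny))
    return removed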

Heck ya. I got to the same solution and it’s sooo much faster.

300k calls now. And look at the run times.

Approach           | Total calls | Run time
Batch-wise         |  10,822,573 |   1.881s
One-at-a-time      |   2,659,615 |   0.575s
Doomed neighbours  |     691,880 |   0.123s

Day Five

Heck ya that was a rush! I do like working with time constraints, haha.

https://adventofcode.com/2025/day/5

The crux of today’s puzzle is you have several ranges of numbers, some or many of which overlap. You need to determine how many numbers fall in one of those ranges. I started with a naive solution where I iterate over the numbers in every range and let it run for a little while before it crashed my poor little laptop. Then I saw how big the numbers are, lol.

After that I implemented a solution that tracked the ranges instead. The main chunk of code was an ugly if statement that could merge two ranges and output the resulting range.

def merge_ranges(a, b):
    # a starts first and overlaps into b: take a's start, b's end.
    if a[0] <= b[0] and b[0] <= a[1] and a[1] <= b[1]:
        return (a[0], b[1])
    # b starts first and overlaps into a: take b's start, a's end.
    if b[0] <= a[0] and a[0] <= b[1] and b[1] <= a[1]:
        return (b[0], a[1])
    # b sits entirely inside a.
    if a[0] <= b[0] and b[1] <= a[1]:
        return (a[0], a[1])
    # a sits entirely inside b.
    if b[0] <= a[0] and a[1] <= b[1]:
        return (b[0], b[1])
    # No overlap at all.
    return None
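
A few examples of how it behaves:

merge_ranges((1, 5), (4, 9))   # (1, 9): a overlaps into b
merge_ranges((1, 9), (3, 4))   # (1, 9): b sits inside a
merge_ranges((1, 2), (5, 6))   # None: no overlap, leave them separate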

I used a little doodle to help me get those values straight in my head.

The twist in the problem (which was generously portrayed in the example data) is that if you only do one pass, your results will still contain ranges that overlap.

In my rush to get the problem submitted I did this:

def solve(fresh_ranges):
    result = merge_all_ranges(fresh_ranges)
    print(len(result))
    result = merge_all_ranges(result)
    print(len(result))
    result = merge_all_ranges(result)
    print(len(result))
    result = merge_all_ranges(result)
    print(len(result))
    result = merge_all_ranges(result)
    print(len(result))

I feel dirty just posting it here so I’ll clean it up before I commit it. (-:
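
The cleanup I have in mind is just looping until a pass stops shrinking the list, something like this (a sketch, assuming merge_all_ranges only ever merges, so an unchanged count means it’s done):

def solve(fresh_ranges):
    result = merge_all_ranges(fresh_ranges)
    # Keep merging until a pass no longer reduces the number of ranges.
    while True:
        merged = merge_all_ranges(result)
        if len(merged) == len(result):
            break
        result = merged
    print(len(result))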

That took around 45 minutes.

https://github.com/nielmclaren/AdventOfCode/blob/main/2025/day5/main.py