GCJ – Osmos

Osmos, Round 1B 2013

This was a fun little problem in recursion. We have a mote that wants to eat other, smaller motes, and we want to enable our mote to eat all of the others by either removing uneatable motes or adding new, eatable ones. The first thing to notice is that given a set of motes to try to eat, it is always optimal to try to eat the smallest mote first: if we can’t eat it, then we can’t eat any of the larger motes either, and if we can eat it, then we’ll only be larger afterwards, able to eat at least as many of the remaining motes as we could before. Secondly, if we want to remove uneatable motes, it’s always optimal to remove the largest ones: if there are any uneatable motes, the largest must be one of them, and removing it won’t impede our ability to eat any of the remaining motes if we decide to add more motes later. Thirdly, if we do wish to add a mote to enable ours to eat others, it is always optimal to add the largest mote that we can currently eat, which will be 1 unit smaller than our current size.

Together, these suggest that the first thing we want to do is sort the motes from smallest to largest and analyze them in order.  For each mote, we have two possibilities:  first, it may be small enough to eat, in which case we eat it, our mote gets larger, and we move on to the next smallest mote.  Second, it may be too big to eat, in which case we can either add another mote just smaller than our current size in hopes that eating it will allow our mote to grow large enough to continue, or we can simply remove all the remaining motes and be finished.  Which of these we choose depends on which will cost fewer changes.  Removing all remaining motes will cost L, where L is the number of motes remaining.  Adding an additional mote will cost 1 plus the number of changes it takes to eat the remaining motes with our new larger mote.

In general, the cost of eating a list of motes M with a mote of size A is
C(A, M) = \begin{cases} \min(L(M),\; 1 + C(2A - 1, M)) & A \le M_0 \\ C(A + M_0, M_{1 \ldots L}) & A > M_0 \end{cases}
We can calculate this fairly simply with a recursive function. Our base cases are A = 1, where A can never grow and we must remove all motes (returning L(M)), and L(M) = 0, where there are no motes remaining and we simply return 0. For other cases, if we can eat the first mote, then we recursively call with A incremented by M_0 and M_0 removed from M. If we cannot, then we return the minimum of L(M) and 1 plus the recursive call with A incremented by A - 1, the extra 1 counting the mote we add.
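A sketch of this recursion in Python, assuming the mote list is already sorted ascending (`min_changes` is a hypothetical name, not the author's actual code):

```python
def min_changes(a, motes):
    # Minimum number of additions/removals needed for a mote of size
    # `a` to eat every mote in `motes` (assumed sorted ascending).
    if not motes:
        return 0               # nothing left to eat
    if a == 1:                 # a size-1 mote can never grow: remove everything
        return len(motes)
    if motes[0] < a:           # eat the smallest mote and grow
        return min_changes(a + motes[0], motes[1:])
    # Stuck: either remove all remaining motes, or pay 1 to add a mote
    # of size a - 1, eat it, and continue at size 2a - 1.
    return min(len(motes), 1 + min_changes(2 * a - 1, motes))
```

For example, with a = 2 and motes [1, 1, 2, 6], we eat the three small motes to reach size 6, can’t eat the 6, and the cheapest fix is a single change (either removing the 6 or adding a size-5 mote).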

Of note is that this can lead to a recursion depth on the order of L(M), but as this is limited to 100 in the problem statement, we shouldn’t have to worry about stack limitations with this solution.


GCJ – Bullseye

Bullseye, Round 1A 2013

This is a largely mathematical problem: the first ring will use (r + 1)^2 - r^2 units of paint, the second (r + 3)^2 - (r + 2)^2 units, and so forth, so that the k^{th} ring will require
P(k) = (r + 2k - 1)^2 - (r + 2k - 2)^2, or
P(k) = 2r + (4k - 3)
units of paint. I.e., each ring will take a constant amount of paint (2r) plus some varying amount depending on how far out it is (4k - 3). That second sequence is 1, 5, 9, 13, 17, 21, etc. But the question we want to answer is not how much paint each ring will take, but how many rings we can paint with a given amount of paint.

One way to solve this problem would be to simply add up the paint as we paint each ring until we don’t have enough paint to go on, then report the number of the last complete ring. This solution will work for the small problem, but will fail on the large problem as it is linear in k, and k in the large problem can be very very large.

We can resolve this by finding a closed form solution for the sum of all P(k) from 1 to k. We first note that there is a constant amount of paint for each ring, so we can sum up that part simply as 2rk. The first few variable sums are 1, 6, 15, 28, 45…, which we can recognize as the sequence of hexagonal numbers with closed form solution k(2k - 1). This gives us:
P_{total}(k) = 2rk + k(2k-1)
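As a quick sanity check (the function names here are my own), the closed form can be compared against summing ring by ring:

```python
def paint_for_ring(r, k):
    # Paint for the k-th ring alone: (r + 2k - 1)^2 - (r + 2k - 2)^2 = 2r + 4k - 3
    return 2 * r + 4 * k - 3

def total_paint(r, k):
    # Closed form for the first k rings: 2rk + k(2k - 1)
    return 2 * r * k + k * (2 * k - 1)

# The closed form agrees with the ring-by-ring sum:
for r in range(1, 20):
    for k in range(1, 50):
        assert total_paint(r, k) == sum(paint_for_ring(r, i) for i in range(1, k + 1))
```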

Now that we have a closed form solution, we can try to substitute in t, the total amount of paint we have, for P_{total}, solve for k, and round down to obtain the maximum number of complete rings we can draw, in constant time, much better than linear! This is a quadratic, and we obtain the (positive) solution
k = \left\lfloor \frac{\sqrt{4r^2 - 4r + 8t + 1} - (2r - 1)}{4} \right\rfloor

We’re done, right? Well, this solution will work fine for the small input, and in constant time. But it will fail dramatically on the large input. Why? Because we’re taking square roots of very large integers, which transforms them from integers (which are stored with perfect precision) to floating point numbers (which are stored with limited precision), and with the limits we’re given, the precision of floating points isn’t sufficient to give us the right answer this way.
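To see the precision loss concretely (this demo assumes Python 3.8+ for `math.isqrt`; in modern Python, `math.isqrt` alone would rescue the quadratic-formula approach, while the search described next works in any language):

```python
import math

# A perfect square slightly above 10^36 -- far beyond the 53-bit
# mantissa of a double-precision float.
n = (10**18 + 1) ** 2

exact = math.isqrt(n)        # exact integer square root: 10**18 + 1
rounded = int(math.sqrt(n))  # goes through a float and loses the low-order bits

assert exact == 10**18 + 1
assert rounded != exact      # the float round-trip cannot recover the true root
```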

So we need to use a different method. Again, what we want to do is find k such that P_{total}(k) \le t while P_{total}(k+1) > t. Simply walking through increasing values of k is too slow for the large input. So instead we can do a binary search. We’re guaranteed in the problem statement that k \ge 1. So let’s start a lower limit for our search at k_{low} = 1. We’ll start an upper limit at k_{hi} = 2. It’s probably not an upper limit yet, but we’ll make it so by iteratively doubling both our limits until it is: find the first power of 2 where P_{total}(2^i) > t. Then k is somewhere in the interval [k_{low}, k_{hi}). We can find its exact value by repeatedly subdividing this interval until it is narrowed down to a single value.
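A sketch of this search in Python, using only integer arithmetic on the closed form (`max_rings` is a hypothetical name; t is assumed large enough to paint at least one ring, as the problem guarantees):

```python
def max_rings(r, t):
    # Largest k with P_total(k) <= t, found by doubling an upper
    # bound and then bisecting.  Integer-only, so no precision loss.
    total = lambda k: 2 * r * k + k * (2 * k - 1)
    lo, hi = 1, 2
    while total(hi) <= t:          # grow hi until it's a strict upper bound
        lo, hi = hi, hi * 2
    while hi - lo > 1:             # invariant: total(lo) <= t < total(hi)
        mid = (lo + hi) // 2
        if total(mid) <= t:
            lo = mid
        else:
            hi = mid
    return lo
```

Both the doubling phase and the bisection take O(log k) steps, so the whole search is logarithmic in the answer.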


GCJ – Template Code

I’ve been participating in the Google Code Jam since last year, and I’ve been having a good time with it.  The problems have a common format, and instead of spending a lot of time rewriting the same code to read in data from a file, write it out with the correct formatting, and so forth, I went ahead and wrote a template for new problems in Python that can handle all of that by itself.

It first runs the precalculate() function to prepare anything (e.g. lookup tables and such) that I want to have before I start the main calculations. Then it reads in the input file from the command line, parses the number of cases to calculate, and uses the read_input() function to parse data for a single case.  It then passes the input data to the solve_case() function, which should solve the problem, and writes the result to the output file in the correct format.

Then to solve a new problem, I need only rewrite these three functions to do what I need. There are a couple of other helpful extras: read_input() has a bunch of parser functions inside it to quickly pull out specific types of data, and there’s a memoize decorator that I can use to give helper functions a cache if needed for efficiency.
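I can’t reproduce the template itself here, but a minimal sketch of the structure described above might look like this — the three problem-specific functions keep the names from the post, while the file handling and the memoize implementation are my own guesses:

```python
import sys
from functools import wraps

def memoize(f):
    # Cache results of a helper function (hypothetical version of the
    # decorator mentioned above; args must be hashable).
    cache = {}
    @wraps(f)
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

def precalculate():
    # Build lookup tables etc. before the main loop (problem-specific).
    pass

def read_input(f):
    # Parse the data for one case from the open input file (problem-specific).
    return f.readline().split()

def solve_case(data):
    # Solve a single case (problem-specific); placeholder result here.
    return 0

def main(path):
    precalculate()
    with open(path) as f, open(path + '.out', 'w') as out:
        cases = int(f.readline())
        for case in range(1, cases + 1):
            result = solve_case(read_input(f))
            out.write('Case #%d: %s\n' % (case, result))

if __name__ == '__main__':
    main(sys.argv[1])
```

To attack a new problem, only precalculate(), read_input(), and solve_case() need to change; the I/O scaffolding stays put.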


Chaco Plots in PySide

I’ve been working on learning how to use the Qt4 framework in Python with the PySide bindings. I’m trying to move into more app-centered development as opposed to just doing all of my work from a command line. I work in an environment with very few actual programmers, and if I can develop real apps to do analysis, then it’s much easier for me to share my tools than if I develop command-line programs. GUIs are just much more accessible to non-programmers, and I like being useful to my coworkers.

Shattered Empire

I’ve been running a Fourth Edition D&D campaign for about a year now, mostly with old college friends. It’s been both a challenge and a pleasure to keep it going. We live in different areas of the country (and our party cleric is soon going to move to another country entirely!), so we’re obliged to play over the internet. There are tools available to play remotely, but they’re imperfect and we run into problems; scheduling difficulties add another layer on top of that. But it’s just great fun to get together with old friends and hand them a group of orcs to kill.