Match Editorial
SRM 182
Saturday, February 7, 2004

Match summary

A level one problem with some nasty boundary cases and a devious level two problem made this SRM a tough one for many Division 1 participants. The coding phase saw nicka81 jump into an early lead with exceptionally fast submissions on the two harder problems. He had to resubmit the hard problem, however, which allowed jshute to pass him. Not surprisingly, John Dethridge and tomek were close behind, but John Dethridge made an unsuccessful challenge and then declared his 1000-point submission to be incorrect. His 1000-point submission did not actually fail, but his 250-point submission did, and when the dust settled, the top three finishers were jshute, tomek, and lars. Congratulations are also due to Per, who finished 5th in his Division 1 debut. Very impressive! Division 2 participants had a slightly easier time of it, and two newcomers, Tomy and snowing, finished first and second, each with over 1500 points.

The Problems

IBEvaluator
Used as: Division Two - Level One:
Value 300
Submission Rate 215 / 240 (89.58%)
Success Rate 200 / 215 (93.02%)
High Score joelu for 294.55 points (3 mins 52 secs)
Average Score 242.39 (for 200 correct submissions)

As can be seen from this problem's success rate, it was fairly straightforward for its 300 point value. First, we need to count how many predictions are off by each value (either positive or negative). This can be done in a single loop.

    int[] result = new int[7];
    int i;
    for (i = 0; i < predictedGrades.length; i++)
    {
        int gradeDiff = Math.abs( predictedGrades[i] - actualGrades[i] );
        result[gradeDiff]++;
    }
Next, we need to convert these counts into percentages. We first divide each count by the total number of predictions so as to get a fraction between 0 and 1, and we then multiply through by 100. To be safe, we could use floating point arithmetic for this, but if you do the division last, integer arithmetic works too.
    for (i = 0; i < 7; i++)
        result[i] = (result[i]*100) / predictedGrades.length;
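
Putting the two loops together, a complete method might look like the sketch below; the method name and signature are guesses for illustration, not necessarily what the problem statement uses.

    // Sketch only: the method name and return convention are assumptions.
    int[] gradeBreakdown(int[] predictedGrades, int[] actualGrades)
    {
        int[] result = new int[7];

        // Count how many predictions are off by 0, 1, ..., 6 grades.
        for (int i = 0; i < predictedGrades.length; i++)
            result[Math.abs(predictedGrades[i] - actualGrades[i])]++;

        // Convert counts to percentages, dividing last so that integer
        // arithmetic is safe.
        for (int i = 0; i < 7; i++)
            result[i] = (result[i] * 100) / predictedGrades.length;

        return result;
    }
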
SlayingDeer
Used as: Division Two - Level Two:
Value 500
Submission Rate 171 / 240 (71.25%)
Success Rate 75 / 171 (43.86%)
High Score nobody3 for 461.48 points (8 mins 20 secs)
Average Score 317.02 (for 75 correct submissions)
Used as: Division One - Level One:
Value 250
Submission Rate 177 / 184 (96.20%)
Success Rate 118 / 177 (66.67%)
High Score ploh for 243.26 points (4 mins 45 secs)
Average Score 190.36 (for 118 correct submissions)

This problem caused trouble for many, but approached in the right way, it can be done quickly and painlessly. Suppose we just simulate the chase one minute at a time. Every minute, Running Wild will move forward A meters while the deer will either move forward B meters or hold still. To figure out which case applies, remember that the deer moves during the first 30 minutes of each 45-minute span. This means it is moving if and only if (time % 45) is between 0 and 29 inclusive, which we can easily check.

The next problem is to determine when Running Wild's task is impossible. One approach here is to note that every 45 minutes, Running Wild gains 45*A - 30*B meters on the deer. If this quantity is positive, it is a multiple of 15 and therefore at least 15, so Running Wild will make up the maximum lead of 100,000 meters in around 45*100,000/15 = 300,000 minutes. If the quantity is not positive, then Running Wild makes no net progress over any 45-minute span, so if the deer is not caught within the first 45 minutes it never will be. Thus, we can simply stop the simulation after, say, 500,000 minutes and declare the task impossible. These ideas can be implemented as shown below.

    int getTime(int A, int B, int C)
    {
        // C is the deer's current lead; the chase ends once it drops to 0 or below.
        for (int time = 0; time < 500000; time++)
        {
            if (C <= 0)
                return time;
            if (time % 45 < 30)    // deer moves during the first 30 minutes of each 45
                C += B;
            C -= A;                // Running Wild always moves
        }
        return -1;                 // the deer is never caught
    }

Indeed, the fastest submission, from ploh, used almost precisely this method. It was easy to overlook certain things, however, and a number of submissions were incorrect in both divisions. Some of the more common errors were:

  • Not recognizing as impossible the case when Running Wild and the deer are exactly even over 45 minutes.
  • Replacing 500,000 with a constant that is too small.
  • Failing the case where Running Wild catches the deer after a multiple of 45 minutes due to faulty mathematics or an incorrect boundary condition.

RSABreaker
Used as: Division Two - Level Three:
Value 1000
Submission Rate 51 / 240 (21.25%)
Success Rate 35 / 51 (68.63%)
High Score vkanade for 882.93 points (10 mins 38 secs)
Average Score 607.11 (for 35 correct submissions)

This problem may have seemed intimidating at first, but it breaks up into several smaller parts that are actually quite manageable. To decrypt b, we need to come up with a valid private key. The first step in doing that is computing m. Since n is fairly small, we can do this by iterating over every positive integer less than n and checking whether it shares no prime factor with n, or equivalently, whether its greatest common divisor with n is 1. Fortunately, this can be done both concisely and efficiently using the Euclidean algorithm.

    int gcd(int a, int b)
    {
        if (b == 0) 
            return a;
        else
            return gcd(b, a%b);
    }

    int computeM(int n)
    {
        int count = 0;
        for (int i = 1; i < n; i++)
            if (gcd(i,n) == 1)
                count++;
        return count;
    }

For those unfamiliar with the Euclidean algorithm, the greatest common divisor of a and b can also be computed by looping from 1 to a and finding the largest integer that divides both a and b. Regardless, the next step is to find d such that e*d - 1 is a multiple of m. Again, since n is not very large, this can be done with a loop.

    int d = 1;
    while ( (d*e) % m != 1) 
        d++;

Finally, we need to determine the remainder when b^d is divided by n. Unfortunately, since d can be as large as 1000, we cannot just go ahead and compute b^d without overflowing. On the other hand, with a little bit of thought, you can convince yourself that (x * y) % n is equal to ( (x%n) * y) % n. Thus, suppose we calculate b^d in yet another loop by multiplying repeatedly by b, but we take the answer modulo n at each step. This will not overflow since we will never be left with a number larger than 1000 at the end of a step, and it will generate the correct answer as observed above.

    int a = 1;
    for (int i = 0; i < d; i++)
        a = (a*b) % n;
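
Assembled into one method, the whole decryption looks like the sketch below; the method name and parameter order are guesses, since the exact signature is not quoted above.

    // Sketch only: the signature is assumed for illustration.
    int decrypt(int n, int e, int b)
    {
        int m = computeM(n);          // count of integers in [1, n) coprime to n

        int d = 1;                    // private exponent: smallest d with (d*e) % m == 1
        while ((d * e) % m != 1)
            d++;

        int a = 1;                    // a becomes (b^d) % n, reduced at every step
        for (int i = 0; i < d; i++)
            a = (a * b) % n;

        return a;
    }
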

This is enough to solve the problem, but one might ask if the method could be sped up. It turns out that the extended Euclidean algorithm can be used to compute d from e extremely fast. Also, given d, we can compute a very quickly by repeatedly squaring modulo n. On the other hand, it is much more difficult to compute m quickly. In fact, n is usually chosen to be the product of two distinct primes p and q. In that case, m = (p-1)*(q-1), and finding m becomes equivalent to factoring n. For precisely this reason, finding efficient factoring algorithms has become one of the biggest problems in computer science, and fortunately for cryptography, it remains unsolved! RSA encryption is an interesting and important subject, and it is well worth reading more about.
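
As a sketch of those two speedups (the helper names below are mine, not part of the problem), modPow computes (base^exp) % n by repeated squaring, and modInverse finds d with (d*e) % m == 1 using the extended Euclidean algorithm, assuming gcd(e, m) = 1.

    // Both helpers are illustrative sketches, not part of the reference solution.
    long modPow(long base, long exp, long n)
    {
        long result = 1 % n;
        base %= n;
        while (exp > 0)
        {
            if ((exp & 1) == 1)           // this bit of the exponent is set
                result = (result * base) % n;
            base = (base * base) % n;     // square for the next bit
            exp >>= 1;
        }
        return result;
    }

    long modInverse(long e, long m)
    {
        // Extended Euclid on (e mod m, m), tracking x with oldR == x*e (mod m).
        long oldR = e % m, r = m;
        long oldX = 1, x = 0;
        while (r != 0)
        {
            long q = oldR / r;
            long tempR = oldR - q * r; oldR = r; r = tempR;
            long tempX = oldX - q * x; oldX = x; x = tempX;
        }
        // Here oldR is gcd(e, m), assumed to be 1, so oldX is the inverse.
        return ((oldX % m) + m) % m;
    }
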

LunchScheduler
Used as: Division One - Level Two:
Value 600
Submission Rate 50 / 184 (27.17%)
Success Rate 29 / 50 (58.00%)
High Score jshute for 519.76 points (11 mins 32 secs)
Average Score 362.59 (for 29 correct submissions)

As pointed out in the last match editorial, small constraints like M.length < 10 are a good indication that you should use brute force. With this problem, however, it was not clear to many people how to go about doing this. Specifically, what are all the possibilities that we need to iterate through? The first thing to notice is that the order in which people arrive and leave is all that really matters: it determines which pairs of people could have shaken hands, as well as the largest number of people in the cafe at the same time. Unfortunately, if you try all permutations of arrival and departure times, you are looking at as many as 9!*9! possibilities, which is far too many.

On the other hand, suppose we fix the order in which people arrive. Then a member cannot leave before everyone he needs to shake hands with has arrived, and conversely, there is no reason for any member to stay longer than that. Thus, departure orders that minimize overlap can easily be determined once the arrival order is fixed. Pseudo-code that solves the problem based on this observation is shown below.

    k = infinity;
    Iterate through each start ordering
    {
        thisOrderingOverlap = 0;
        Loop time from 0 to M.length-1
        {
            thisTimeOverlap = 1;
            Loop through all members i arriving before time
            {
                memberHasNotLeft = false;
                Loop through all members j not arriving before time
                {
                    if (member i shakes member j's hand)
                        memberHasNotLeft = true;
                }
                if (memberHasNotLeft)
                    thisTimeOverlap++;
            }
            thisOrderingOverlap = max(thisOrderingOverlap, thisTimeOverlap);
        }
        k = min(k, thisOrderingOverlap);
    }

In C++, iterating through each possible ordering can be done with next_permutation. If you do not know about next_permutation, or your language does not have a similar function, it can also be done recursively.

    void generateOrderings(int[] ordering, boolean[] used, int position)
    {
        if (position >= numElements)
            doSomethingWithOrdering(ordering);
        else
        {
            for (int i = 0; i < numElements; i++)
                if (!used[i])
                {
                    ordering[position] = i;
                    used[i] = true;
                    generateOrderings(ordering, used, position + 1);
                    used[i] = false;
                }
        }
    }
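
To connect the generator to the pseudocode above, doSomethingWithOrdering can compute the maximum overlap for one arrival order and keep the best value seen so far. The sketch below assumes the handshake matrix M is available as a String[] with 'Y' marking pairs that must shake hands, and that numElements and best are fields; those names are illustrative only.

    // Sketch: M, numElements, and best are assumed to be fields.
    int best = Integer.MAX_VALUE;

    void doSomethingWithOrdering(int[] ordering)
    {
        int worst = 0;
        for (int time = 0; time < numElements; time++)
        {
            int overlap = 1;                              // the member arriving now
            for (int a = 0; a < time; a++)                // members already in the cafe
            {
                boolean hasNotLeft = false;
                for (int b = time; b < numElements; b++)  // members not yet arrived
                    if (M[ordering[a]].charAt(ordering[b]) == 'Y')
                        hasNotLeft = true;
                if (hasNotLeft)
                    overlap++;
            }
            worst = Math.max(worst, overlap);
        }
        best = Math.min(best, worst);
    }
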

For a more concise (obfuscated) version, be sure to check out John Dethridge's solution.

To some extent, this algorithm can actually be improved upon. Suppose a set of members S has already arrived at the cafe. Then, members in S should still be in the cafe when the next member arrives if and only if they need to shake hands with someone who has not arrived yet. After that, we can determine the maximum overlap recursively by trying every possibility for the next member to arrive. With memoization, this gives a solution that runs in 2^n time instead of n! time. For more details, check out the writer's solution in the practice room. It is doubtful that too many improvements exist beyond that, however, since the problem is actually NP-complete. If you interpret the matrix as the adjacency matrix for a graph, the problem becomes that of computing path-width, which is of surprising importance in graph theory and theoretical computer science.
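
A sketch of that memoized search follows, again assuming the handshake matrix is a String[] with 'Y' marking required handshakes; the field and method names are mine, and this is not the writer's actual code.

    // Sketch of the 2^n memoized search over sets of arrived members.
    String[] M;
    int n;
    int[] memo;      // memo[S] = best(S), or -1 if not yet computed

    int minOverlap(String[] matrix)
    {
        M = matrix;
        n = M.length;
        memo = new int[1 << n];
        for (int i = 0; i < memo.length; i++)
            memo[i] = -1;
        return best(0);
    }

    // best(S): the smallest achievable maximum overlap over all orders in
    // which the members outside the bitmask S can still arrive.
    int best(int S)
    {
        if (S == (1 << n) - 1)
            return 0;                     // everyone has arrived
        if (memo[S] != -1)
            return memo[S];

        // A member already in S is still in the cafe exactly when he needs to
        // shake hands with someone outside S, so the overlap at the moment the
        // next member arrives is 1 (the arriving member) plus that count.
        int overlapNow = 1;
        for (int i = 0; i < n; i++)
            if ((S & (1 << i)) != 0)
                for (int j = 0; j < n; j++)
                    if ((S & (1 << j)) == 0 && M[i].charAt(j) == 'Y')
                    {
                        overlapNow++;
                        break;
                    }

        // Try every possibility for the next member to arrive.
        int bestRest = Integer.MAX_VALUE;
        for (int x = 0; x < n; x++)
            if ((S & (1 << x)) == 0)
                bestRest = Math.min(bestRest, best(S | (1 << x)));

        return memo[S] = Math.max(overlapNow, bestRest);
    }
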

PairwiseSums
Used as: Division One - Level Three:
Value 1000
Submission Rate 27 / 184 (14.67%)
Success Rate 16 / 27 (59.26%)
High Score AdrianKuegel for 700.84 points (20 mins 29 secs)
Average Score 564.54 (for 16 correct submissions)

At first glance, this problem may seem to resemble a previous TopCoder problem involving pairwise differences, but the solution does not carry over very well. Here, we need to reconstruct a list of up to fifty elements given its pairwise sums, and an exponential solution will simply not be fast enough.

We will build A up from smallest element to largest. Suppose A and P are integer lists such that P gives the pairwise sums for A, where A[0] <= A[1] <= A[2] <= ... and P[0] <= P[1] <= P[2] <= ... . Given P, we wish to find A. It is difficult to get started, but suppose that we already know A[0]. Then, since P[0] is the smallest element in P, it can only arise as A[0] + A[1]. Similarly, P[1] must equal A[0] + A[2]. Therefore, if we know A[0], we can compute A[1] and A[2].

After that, however, this pattern breaks down. P[2] could either be A[0] + A[3] or A[1] + A[2] and without prior knowledge, we cannot know which one it is. Fortunately, we do have prior knowledge! If we know A[0], we can compute A[1] and A[2] as described above, and then remove A[1] + A[2] from P. The next smallest element is then guaranteed to be A[0] + A[3], which allows us to find A[3]. Similarly, if we know A[0], A[1], A[2], and A[3], we can remove A[i]+A[j], 0 <= i < j <= 3, from P. The next smallest element of P will be A[0]+A[4], which allows us to compute A[4]. Repeating in this way, we can find all of A without ever backtracking. Pseudocode is shown below.

    int[] completeA(int smallestElementInA, int[] P)
    {
        int[] A = new int[];
        A.insert(smallestElementInA);

        // Keep looking until we can account for all of P

        while (P.length > 0)
        {
            // Figure out the next element

            int nextSmallestPairwiseSum = P[0];
            int nextElementInA = nextSmallestPairwiseSum - smallestElementInA;

            // Check off used elements, returning [] on a contradiction

            for (int i = 0; i < A.length; i++)
            if (P.contains(nextElementInA + A[i]))
                P.remove(nextElementInA + A[i]);
            else
                return new int[];  

            // Add the new element

            A.insert(nextElementInA);
        }
        return A;
    }

Thus, once we know A[0], we can compute the rest of A. For this problem, we can now simply try every possibility for A[0]. We are told that A contains only non-negative integers, and A[1] >= A[0], so A[0] must be an integer between 0 and P[0]/2 inclusive. Since P[0] <= 1000, this means we will have to try at most 501 values, which will still be fast enough.
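
A runnable version of the pseudocode together with this driver is sketched below. The method names and the exact input format (a sorted array of pairwise sums) are assumptions, and the driver simply returns the first reconstruction it finds; how ties between valid reconstructions should be broken is not restated here.

    // Sketch only: names and input format are assumed.
    // Uses java.util.ArrayList and java.util.Arrays.

    // Try to rebuild A given its smallest element; sums must be sorted in
    // non-decreasing order. Returns null on a contradiction.
    int[] completeA(int smallest, int[] sums)
    {
        ArrayList<Integer> P = new ArrayList<Integer>();
        for (int s : sums)
            P.add(s);
        ArrayList<Integer> A = new ArrayList<Integer>();
        A.add(smallest);

        while (!P.isEmpty())
        {
            // The smallest unused sum must be smallest + (the next element of A).
            int next = P.get(0) - smallest;

            // Check off next + A[i] for every element found so far.
            for (int i = 0; i < A.size(); i++)
                if (!P.remove(Integer.valueOf(next + A.get(i))))
                    return null;

            A.add(next);
        }

        int[] result = new int[A.size()];
        for (int i = 0; i < result.length; i++)
            result[i] = A.get(i);
        return result;
    }

    // Driver: A[0] lies between 0 and P[0]/2, so try every candidate.
    int[] reconstruct(int[] sums)
    {
        int[] P = sums.clone();
        Arrays.sort(P);
        for (int a0 = 0; a0 <= P[0] / 2; a0++)
        {
            int[] A = completeA(a0, P);
            if (A != null)
                return A;
        }
        return new int[0];   // no reconstruction exists
    }
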

This can actually be made to work in a more general setting as well. Recall that we can find A[0] + A[1] and A[0] + A[2] regardless of A[0], as these are just P[0] and P[1]. Then, since A[0] = ( (A[0]+A[1]) + (A[0]+A[2]) - (A[1]+A[2]) ) / 2, we can compute A[0] from A[1]+A[2]. On the other hand, this is just an element of P, and it must actually be quite early in the list. Thus, we can iterate through all of these values to find A[0] without making any extra assumptions about the values in A.

Author
By dgarthur
TopCoder Member