Match Editorial
SRM 165, Tuesday, September 23, 2003
Match summary
SnapDragon won again, surprising to no one, but it was by no means a
sure thing. SnapDragon finished all three problems first, but a
few minutes later WishingBone snuck into the lead with an exceptionally
fast time on the Hard. In the challenge phase, SnapDragon spotted
a flaw in a competitor's code, but wleite beat him to the challenge by
five seconds, denying SnapDragon 50 points that would have propelled him
into first place. It looked like WishingBone would eke out the
victory until his Hard problem timed out during system tests. Division 1
also witnessed the end of an amazing streak by Tomek, who had
until now submitted all three problems in every challenge he had entered.
In Division 2, there were lots of high scores after the coding phase, but
challenges and system tests claimed all but two of the submitted
Hard problems. Newcomer moggy won in his debut, with
vavadera close behind.
The Problems
BritishCoins
Used as: Division Two - Level One:
Value: 250
Submission Rate: 222 / 230 (96.52%)
Success Rate: 212 / 222 (95.50%)
High Score: pinting for 249.81 points (0 mins 46 secs)
Average Score: 231.07 (for 212 correct submissions)
There are 12 pennies in a shilling and 20 shillings in a
pound, so there are 12*20 = 240 pennies in a pound. One simple way to
convert pennies into the larger denominations is to use two loops, one
to convert pennies to pounds and one to convert the remaining pennies
to shillings.
int pounds = 0, shillings = 0;
while (pence >= 240) {
    pounds++;
    pence -= 240;
}
while (pence >= 12) {
    shillings++;
    pence -= 12;
}
return new int[]{ pounds, shillings, pence };
We can accomplish the same thing without any loops by using integer division and mod.
int pounds = pence / 240;
pence = pence % 240;
int shillings = pence / 12;
pence = pence % 12;
return new int[]{ pounds, shillings, pence };
Alternatively, we can write all the math in a single line as
return new int[]{ pence/240, (pence%240)/12, pence%12 };
ParallelSpeedup
Used as: Division Two - Level Two:
Value: 500
Submission Rate: 178 / 230 (77.39%)
Success Rate: 87 / 178 (48.88%)
High Score: pinting for 493.00 points (3 mins 23 secs)
Average Score: 296.54 (for 87 correct submissions)
Used as: Division One - Level One:
Value: 250
Submission Rate: 151 / 156 (96.79%)
Success Rate: 121 / 151 (80.13%)
High Score: ZorbaTHut for 246.44 points (3 mins 25 secs)
Average Score: 205.07 (for 121 correct submissions)
Assume that there are p processors. Then there are p*(p-1)/2
pairs of processors, so the communications between the various pairs take a
total of overhead*p*(p-1)/2 milliseconds. If the tasks can be
distributed evenly among the processors, then every processor gets
k/p tasks. Otherwise, some processors get one extra task.
Therefore the total time required is either
k/p + overhead*p*(p-1)/2 (if p divides evenly into k)
or
1+k/p + overhead*p*(p-1)/2 (otherwise)
We can combine these two cases into a single formula by guaranteeing that
the division
k/p rounds up instead of down, as in
(k+p-1)/p + overhead*p*(p-1)/2
To find the optimal number of processors, we loop through different
values of p, keeping track of which value yields the fastest
running time. It makes sense to start at 1 and count upwards, but
when can we stop?
Clearly, we will never use more than k processors, so one simple
strategy is to loop from 1 to k. However, as k gets large,
the calculation of p*(p-1)/2 starts to overflow 32-bit
integers, so we would have to be careful to use 64-bit integers.
A more efficient alternative that works with 32-bit integers is to stop as soon
as the communications overhead exceeds k. For example, if we have
1000000 tasks, we will never choose a number of processors whose
communications overhead is more than 1000000 milliseconds, because a
single processor alone would finish in that time. ZorbaTHut used a
variation of this reasoning to place an upper limit of 1000 on the loop.
An even more efficient alternative depends on the fact that the curve
of running times versus number of processors is convex (U-shaped). In other
words, it slopes downward until it reaches the minimum, and then slopes upward
forever. Therefore, we can exit the loop as soon as the running time in
one iteration is greater than the running time in the previous iteration.
See, for example, Yarin's and ChristopherH's solutions.
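Putting the pieces together, here is a minimal Java sketch combining the round-up formula with the early exit; the class and method names are mine, not from any competitor's solution:

```java
public class ParallelSpeedup {
    // Minimum completion time in milliseconds, trying p = 1, 2, ... and
    // stopping once the U-shaped curve starts rising again.
    static long minTime(int k, int overhead) {
        long best = Long.MAX_VALUE;
        for (int p = 1; p <= k; p++) {
            // (k + p - 1) / p rounds the division up; the long literal
            // guards against overflow in overhead*p*(p-1)/2
            long time = (k + p - 1L) / p + (long) overhead * p * (p - 1) / 2;
            if (time > best) break;  // past the minimum of the convex curve
            best = time;
        }
        return best;
    }
}
```

For example, with k = 10 tasks and overhead = 1, two processors give ceil(10/2) + 1 = 6, and three processors already cost ceil(10/3) + 3 = 7, so the loop stops there.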
ShortPalindromes
Used as: Division Two - Level Three:
Value: 1150
Submission Rate: 22 / 230 (9.57%)
Success Rate: 2 / 22 (9.09%)
High Score: vavadera for 568.18 points (38 mins 18 secs)
Average Score: 562.50 (for 2 correct submissions)
A key observation is that, to make the shortest possible palindrome out
of base, you should never add letters to both the front and back
of the string. If you were to do so, then you could make an even shorter
palindrome by removing the first and last characters that you added.
Therefore, the shortest palindrome that you can make out of base must
either start with the first letter of base or end with the
last letter of base (or both, if the first and last letters of
base are the same).
It is easy to turn this observation into a recursive algorithm:
shortest(base)
    if base is already a palindrome then
        return base
    if base has the form A...A then
        return A + shortest(...) + A
    if base has the form A...B then
        return min(A + shortest(...B) + A,
                   B + shortest(A...) + B)
where min chooses the shorter of the two strings, or the
lexicographically earlier if the strings have the same length.
Unfortunately, a naive implementation of this algorithm might take
exponential time, because a call on a string of size n can result in
two calls on strings of size n-1. Such an implementation would be
likely to time out on strings of size 25.
The second key observation is to notice that the above algorithm can end
up calling shortest multiple times on the same substring, but
that it never calls shortest on a string that is not a substring
of the original base. The first property suggests that memoization
might be a good idea, whereas the second property hints at dynamic programming.
Either will work.
To apply memoization, we modify shortest to remember which strings
it has already been called on, and the answer for each. Then
shortest begins by checking whether it has seen the current
argument before. If so, it looks up the answer in a table. Otherwise,
it computes the answer as above, but makes the recursive calls to
the new, memoizing version of shortest. Once it is done, it
saves the new answer in the table.
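As a concrete illustration, a memoized version might look like the following Java sketch (class and method names are mine, not from any competitor's solution):

```java
import java.util.*;

public class ShortPalindromes {
    static Map<String, String> memo = new HashMap<>();

    // Shortest palindrome obtainable by inserting letters into base,
    // with ties broken lexicographically.
    static String shortest(String base) {
        int n = base.length();
        if (n <= 1) return base;                 // trivially a palindrome
        String cached = memo.get(base);
        if (cached != null) return cached;       // seen this substring before
        char a = base.charAt(0), b = base.charAt(n - 1);
        String result;
        if (a == b) {
            // form A...A: wrap the answer for the middle
            result = a + shortest(base.substring(1, n - 1)) + b;
        } else {
            // form A...B: try wrapping with A and with B, keep the better
            String withA = a + shortest(base.substring(1)) + a;
            String withB = b + shortest(base.substring(0, n - 1)) + b;
            result = min(withA, withB);
        }
        memo.put(base, result);
        return result;
    }

    // shorter string, or lexicographically earlier on equal lengths
    static String min(String x, String y) {
        if (x.length() != y.length()) return x.length() < y.length() ? x : y;
        return x.compareTo(y) <= 0 ? x : y;
    }
}
```

For instance, shortest("RACE") yields "ECARACE": both "RACECAR" and "ECARACE" have length 7, and the tiebreak picks the lexicographically earlier one.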
To apply dynamic programming, we make a table of all the possible
substrings of base. Then we process the entries of the
table from shortest to longest. For each entry, we compute the
shortest palindrome of that entry using the above rules, except that we
replace the recursive calls with lookups in the table.
See newcomer moggy's solution for a nice example of this approach.
ContinuedFractions
Used as: Division One - Level Two:
Value: 600
Submission Rate: 85 / 156 (54.49%)
Success Rate: 79 / 85 (92.94%)
High Score: tjq for 503.85 points (12 mins 55 secs)
Average Score: 332.64 (for 79 correct submissions)
If you are math phobic, this problem probably gave you a severe case of
the heebie jeebies. Even if you are comfortable with math, you may
have been uncomfortable with this problem because it was deliberately
written to test a slightly different skill from most TopCoder problems.
Usually, challenge problems ask you either to come up with an algorithm
from scratch or to use an algorithm that is clearly specified.
This problem asked you to use a particular algorithm, but did not specify
how that algorithm works. Instead, you were asked to generalize the
algorithm from some sample calculations, which is something that happens
frequently in real life.
The algorithm begins by finding the integer part of the square root,
which is easily done by a loop:
int root = 0;
while (root*root <= n) root++;
root--;
root becomes the first element of the answer. The remainder is the
fraction (sqrt(n)-root)/1. We next begin a loop in which we process
remainders of the form (sqrt(n)-d)/m until we get back to
(sqrt(n)-root)/1.
At each stage, we must convert the fraction (sqrt(n)-d)/m into
the form
1
-------------------
q + (sqrt(n)-d')/m'
where
q is a positive integer and the remainder
(sqrt(n)-d')/m' is between 0 and 1.
The sample calculations went through the following steps:
sqrt(n)-d        1                      1
--------- = -----------  =  -------------------------
    m            m              m           sqrt(n)+d
             ---------      ---------  *  ---------
             sqrt(n)-d      sqrt(n)-d     sqrt(n)+d

                  1                 1
        =  ---------------  =  -----------
           m * (sqrt(n)+d)      sqrt(n)+d
           ---------------      ---------
               n - d*d          (n-d*d)/m
At this point, we can simplify (n-d*d)/m to get m'. There is no obvious
reason to expect that n-d*d will always be divisible by m, except that
the problem statement implies that this will be the case.
Next, we need to massage (sqrt(n)+d)/m' into the form
q + (sqrt(n)-d')/m', where q is an integer and
(sqrt(n)-d')/m' is between 0 and 1. This can be accomplished
by setting q = (root+d)/m' and d' = q*m' - d.
Altogether, then, our main loop looks like
repeat
    m' = (n - d*d) / m
    q = (root + d) / m'
    d' = q*m' - d
    add q to output list
    d = d'
    m = m'
until d == root and m == 1
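The whole algorithm, including the search for root, can be sketched in Java as follows (the class and method names are mine):

```java
import java.util.*;

public class ContinuedFraction {
    // Continued-fraction expansion of sqrt(n): the integer part followed
    // by one full period. Assumes n is not a perfect square.
    static List<Integer> expand(int n) {
        int root = 0;
        while ((root + 1) * (root + 1) <= n) root++;  // integer part of sqrt(n)
        List<Integer> out = new ArrayList<>();
        out.add(root);
        int d = root, m = 1;            // remainder is (sqrt(n)-d)/m
        do {
            int m2 = (n - d * d) / m;   // always divides evenly (per the statement)
            int q = (root + d) / m2;    // integer part of (sqrt(n)+d)/m2
            out.add(q);
            d = q * m2 - d;             // new d'
            m = m2;
        } while (d != root || m != 1);
        return out;
    }
}
```

For n = 158 this reproduces the sequence { 12, 1, 1, 3, 12, 3, 1, 1, 24 } discussed below.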
An interesting fact about these periodic continued fractions is that
the last element is always twice the first element. Therefore an
alternative test for the ending condition is
repeat
    ....
until q == 2*root
An even more intriguing fact about these periodic continued fractions,
albeit one that is ultimately useless for the problem, is that, if
you remove the first and last elements from the periodic continued fraction,
then the remaining elements form a palindrome.
For example, the periodic continued fraction for the
square root of 158 is
{ 12, 1, 1, 3, 12, 3, 1, 1, 24 }
The last element (24) is twice the first element (12), and
the remaining elements
(1, 1, 3, 12, 3, 1, 1) form a palindrome.
Scheduling
Used as: Division One - Level Three:
Value: 1000
Submission Rate: 13 / 156 (8.33%)
Success Rate: 7 / 13 (53.85%)
High Score: SnapDragon for 666.74 points (22 mins 37 secs)
Average Score: 563.28 (for 7 correct submissions)
This problem looks scary at first, but the constraints keep it reasonable.
What at first seems like it will require some heavy duty graph theory in
the end requires nothing fancier than memoization, a staple of
Division 1 Hard problems.
We define a recursive function solve(tasks,cur,delay) that computes the
minimum time needed to complete the set of tasks, assuming that the task
cur is currently executing and that it has delay units of time to go.
If no tasks are executing, then cur will be -1 and delay will be 0.
The initial call will be to solve(allPossibleTasks,-1,0).
There are three main cases:
- There are no more tasks to run. Then we are done as soon as the
current task finishes.
- The best move is to start a task right now. Loop through all
tasks that are eligible to start and try each one, keeping
track of the minimum time achieved. A task is eligible to run if
all the tasks it depends on have completed.
- The best move is to sit idle until the current task finishes.
This happens when the next task that we want to run depends on
the current task.
Notice that it never makes sense to sit idle until some time between
now and when the current task finishes, because any task we could start
then, we could just as easily start now.
In pseudocode, solve is
solve(tasks,cur,delay)
    if tasks is empty then return delay
    best = 999999999
    for each t in tasks
        if t is eligible to run then
            if time[t] > delay then
                total = delay + solve(tasks-{t},t,time[t]-delay)
            else
                total = time[t] + solve(tasks-{t},cur,delay-time[t])
            best = min(best,total)
    if cur != -1 then
        total = delay + solve(tasks,-1,0)
        best = min(best,total)
    return best
Unfortunately, this algorithm is too slow as written because it tries the
tasks in every possible order. There are a maximum of 12 tasks, but there
are trillions of ways to order those 12 tasks. Notice, however,
that we make many calls to solve with the same arguments. For
example, suppose tasks 1-4 are all eligible to start immediately and all run in
the same amount of time. Then we will try running 1&2 together followed
by 3&4, 1&3 together followed by 2&4, and so on. Each of
these patterns will eventually make a call to solve({5,...,12},-1,0).
We can save time by caching the value of
solve({5,...,12},-1,0) in a table the first
time it is called, and looking it up each subsequent time instead of
recomputing it. This technique is called memoization.
We modify solve to check the table at the beginning and
save the minimum time in the table at the end. The lines
marked with stars are new but the rest of solve is unchanged.
solve(tasks,cur,delay)
    if tasks is empty then return delay
  * combine tasks, cur, and delay into a single key
  * look up that key in a table
  * if it's there, return its associated value
    compute best as in the previous version
  * save key and best in the table
    return best
One final issue is how to represent the set of remaining tasks
(or, alternatively, the set of completed tasks). Most people used
bitmasks, but this was not crucial.
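A memoized sketch of the whole approach, in Java, might look like this. The input representation is my own invention for the sketch: time[i] is task i's duration and deps[i] is a bitmask of the tasks i depends on; none of these names come from the actual problem.

```java
import java.util.*;

public class Scheduling {
    // Hypothetical instance data (see lead-in): durations and dependency masks.
    static int[] time, deps;
    static Map<Long, Integer> memo = new HashMap<>();

    static int minTotal(int[] times, int[] dependencies) {
        time = times;
        deps = dependencies;
        memo.clear();
        return solve((1 << times.length) - 1, -1, 0);
    }

    // tasks: bitmask of tasks not yet started; cur/delay: the currently
    // executing task (-1 if none) and its remaining running time.
    static int solve(int tasks, int cur, int delay) {
        if (tasks == 0) return delay;
        // pack the three arguments into one table key
        long key = ((long) tasks << 40) | ((long) (cur + 1) << 32) | delay;
        Integer cached = memo.get(key);
        if (cached != null) return cached;
        int all = (1 << time.length) - 1;
        int done = all & ~tasks & ~(cur >= 0 ? 1 << cur : 0);  // finished tasks
        int best = Integer.MAX_VALUE;
        for (int t = 0; t < time.length; t++) {
            if ((tasks & 1 << t) == 0) continue;
            if ((deps[t] & ~done) != 0) continue;  // a prerequisite is unfinished
            int total;
            if (time[t] > delay)   // t outlasts the current task
                total = delay + solve(tasks & ~(1 << t), t, time[t] - delay);
            else                   // t finishes no later than the current task
                total = time[t] + solve(tasks & ~(1 << t), cur, delay - time[t]);
            best = Math.min(best, total);
        }
        if (cur != -1)  // or sit idle until the current task finishes
            best = Math.min(best, delay + solve(tasks, -1, 0));
        memo.put(key, best);
        return best;
    }
}
```

For example, with durations {3, 2, 1} where the third task depends on the first, running the first two tasks in parallel and then the third gives a total of 4.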
By
vorthys
TopCoder Member