Match Editorial
SRM 317
Thursday, August 24, 2006

Match summary

Single Round Match 317 started with a record number of 1,134 registrants. Many top ranked coders had also signed in, so the table was set for a great match. But just a couple of minutes after the start, technical problems halted the competition for about half an hour and made the remaining part strange (including almost instant submissions of the easy problems) -- and also, of course, not worth any money or rating points.

Surely this is the last time I put a joke in a match editorial about making things go wrong on purpose. (I didn't! I didn't!) Anyhow, at least the people who stayed got to do the problems for fun and practice. After all, every problem deserves to be solved -- and later explained in an editorial -- so here we go.

Disclaimer: Since there will be no statistics for the problems of this match, I won't add links to code you can follow for implementations. Instead, I suggest checking any code that passed system tests to see an implementation in your favorite language, or going to the practice room for this SRM and looking at the code there.

The Problems

ReversingBrackets

This problem was fairly easy, and most coders were able to solve it correctly. The only tricky case was correctly handling an empty pair of brackets, but that made only a few submissions fail. Depending on how comfortable you were with your language's string functionality, you could submit almost instantly or a little bit more slowly. The solution involves two simple steps:

  • Find the position i of the opening bracket and the position j of the closing bracket. If there are no brackets, return the same string as the input.
  • Return s.substr(0,i) + s.substr(i+1,j).reverse() + s.substr(j+1,n), where n is the length of s and substr(a,b) denotes the characters from index a up to, but not including, index b.
All languages have library functions for taking a substring and reversing a string, but even if you didn't know them beforehand, writing your own wasn't difficult.
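
For reference, here is a minimal C++ sketch of those two steps (the function name and signature are illustrative, not the official ones):

    #include <algorithm>
    #include <string>
    using namespace std;

    // Illustrative sketch of the two steps described above.
    string solve(string s) {
        size_t i = s.find('(');
        if (i == string::npos) return s;            // no brackets: return the input as is
        size_t j = s.find(')', i);
        string middle = s.substr(i + 1, j - i - 1); // text strictly between the brackets
        reverse(middle.begin(), middle.end());      // an empty middle is handled naturally
        return s.substr(0, i) + middle + s.substr(j + 1);
    }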


PalindromicNumbers

Solving this problem depended only on the fact that in the entire range of possibilities (integers between 1 and 10^9) there is a small number of palindromic numbers (exactly 109,998), and they are easy to generate.

The best way to generate all palindromic numbers was to handle the even and odd lengths separately.

The even length numbers can be generated by taking a number n and concatenating n with n reversed (remembering to keep the leading zeroes of the reversed form, because they will not be leading in the resulting number). Note that if n has k digits, the generated palindromic number has 2k digits, which means the biggest n that may be used in this case has 4 digits (9999).

For generating the odd length palindromic numbers there were two similar approaches. The first is to take n and a digit d and concatenate n, then d, then n reversed (note that 1 digit numbers have to be handled separately here, and the easiest way was to do exactly that). The second is to take n, reverse its digits and concatenate both, but drop the last digit of n (or, equivalently, the first digit of its reversed form). In the first approach, n can again have at most 4 digits and d has 10 choices, so in total there are at most 9999*10 possibilities. In the second, n can have up to 5 digits (because an n with k digits generates a 2k-1 digit number), so there are 99999 possibilities.

In both cases, after generating each number, see whether it lies in the specified range, and if it does, increase the counter. Most correct solutions used a variation of this approach.
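
As an illustration, a C++ sketch of this generate-and-count idea could look as follows (the function name and the A, B range parameters are assumptions about the interface, not the official signature):

    #include <algorithm>
    #include <string>
    using namespace std;

    // Counts the palindromic numbers in [A, B], with 1 <= A <= B <= 10^9 (assumed interface).
    int countPalindromes(long long A, long long B) {
        int count = 0;
        // Even length: n followed by n reversed, for n = 1..9999.
        for (long long n = 1; n <= 9999; ++n) {
            string s = to_string(n), r(s.rbegin(), s.rend());
            long long p = stoll(s + r);
            if (A <= p && p <= B) ++count;
        }
        // Odd length: n followed by n reversed without its first digit, for n = 1..99999.
        // One-digit palindromes (n = 1..9) come out of this loop as well.
        for (long long n = 1; n <= 99999; ++n) {
            string s = to_string(n), r(s.rbegin(), s.rend());
            long long p = stoll(s + r.substr(1));
            if (A <= p && p <= B) ++count;
        }
        return count;
    }

Each palindrome up to 10^9 is generated exactly once this way, so simply counting the ones that fall in the range is enough.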


OrderingCount

This problem is about counting the number of topological sorts of a given set of elements. A topological sort of any directed graph can be obtained with a well known algorithm, but that algorithm produces just one ordering satisfying all constraints (if any exists), so on its own it can only tell us whether the count is zero or non-zero.

Seeing that the number of elements is at most 20 should have made you realise this problem probably called for an exponential solution. In fact, as in many previous TC problems, the whole idea relied on expressing the number of orderings of a set as a function of the number of orderings of its subsets. Since there are only 2^20 subsets (a little above 1,000,000), this seemed like a feasible approach.

We can see any ordering of the subset S as a first element e followed by an ordering of the elements of S - {e}. We can only select as e those elements such that no element of S is required to be before e (this includes e itself, and therefore takes care of a_i < a_i requirements). In this fashion, we iterate over all possible elements, and for each valid e we add f(S - {e}) to the result.

More formally, f(S) is the sum of f(S - {e}) over all elements e of S such that no element of S is required to be before e. Of course, f of the empty set is 1 (because there is only one way to order no elements). If defining the function on the empty set is not clear enough, you can instead say that f of a single-element set is 1 if the element is not required to be before itself and 0 if it is.

This recursive function is easily memoized in an array of at most 2^20 positions, or you could write an iterative dynamic programming solution based on the same idea. There is code using both approaches among the passed solutions.
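
A memoized C++ sketch of f over bitmask subsets might look like this (mustPrecede[e] is assumed to be a precomputed bitmask of the elements required to be before e; the names and the input handling are illustrative):

    #include <vector>
    using namespace std;

    int n;                     // number of elements, at most 20
    vector<int> mustPrecede;   // mustPrecede[e]: bitmask of elements required to be before e
    vector<long long> memo;    // memo.assign(1 << n, -1) before the first call

    long long f(int S) {
        if (S == 0) return 1;                          // one way to order the empty set
        if (memo[S] != -1) return memo[S];
        long long res = 0;
        for (int e = 0; e < n; ++e)
            if ((S >> e) & 1)                          // e belongs to S
                if ((mustPrecede[e] & S) == 0)         // nothing in S has to come before e
                    res += f(S ^ (1 << e));            // place e first, order the rest
        return memo[S] = res;
    }

The answer would then be f((1 << n) - 1); note that even 20! still fits in a 64-bit integer.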


CollectingPayment

This problem was pretty classic, but coders had a little more trouble with it than with the usual mediums -- although given the overall problems that arose in this match, the coders' concentration was probably affected.

Looking at the problem, the typical dynamic programming bell rings for a regular Division 1 coder. But then a little problem appears: the final amount depends on previous events. More precisely, since interest is added to the whole amount in your account, you can't forget about previously deposited checks, because their amounts are still being increased.

But this can be solved by seeing that once you deposit a check, you can calculate how its amount is going to be increased from that point up to the end of the year and then forget about it. To clarify this, first note that increasing your account by rate*A/1000, as the problem states, is the same as multiplying the amount A by rate/1000 + 1. Similarly, having n Sundays left means multiplying n times by that number, which is the same as multiplying once by (rate/1000 + 1)^n.

More formally, if f(d) is the function that says how much money you can make after day d, we can define f(d) as the maximum over all possible days d' > d of f(d') + check(d,d')*(rate/1000 + 1)^numberOfDays(d,d'), where check(d,d') is the amount of a check covering all the payments between d and d' (you can define it as 0 if that amount is lower than the fee, since the final profit will never be lower than 0) and numberOfDays(d,d') is the number of interest-adding days between them. This recursive function leads, as usual, to a memoized or iterative DP solution of the problem.
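
As a sketch of the shape of that recursion, the following transcribes the recurrence directly; check and numberOfDays stand for the hypothetical helpers just described, and the day handling (LAST_DAY = 365) is only an assumption, not the official input format:

    #include <algorithm>
    #include <cmath>
    using namespace std;

    const int LAST_DAY = 365;        // assumed last day index of the year
    int rate;                        // interest rate from the input
    double check(int d, int d2);     // amount of a check covering the payments between d and d2,
                                     //   or 0 if it would not exceed the fee (hypothetical helper)
    int numberOfDays(int d, int d2); // interest-adding days between d and d2 (hypothetical helper)

    double memo[LAST_DAY + 1];
    bool seen[LAST_DAY + 1];

    // f(d): the most money you can make after day d, as defined above.
    double f(int d) {
        if (d >= LAST_DAY) return 0.0;
        if (seen[d]) return memo[d];
        seen[d] = true;
        double best = 0.0;
        for (int d2 = d + 1; d2 <= LAST_DAY; ++d2) {
            double factor = pow(rate / 1000.0 + 1.0, numberOfDays(d, d2));
            best = max(best, f(d2) + check(d, d2) * factor);
        }
        return memo[d] = best;
    }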

It was also possible to define the function f only on Sundays, because there is always an optimal solution that asks for all checks on Sundays. Another approach was to ask for checks only on delivery days (i.e., on days mentioned in moment). These two approaches run a little faster, but using every day was fast enough to get the problem accepted. Using only Sundays simplifies the problem a little, because the number of interest-adding days between two points becomes easier to calculate.


CuttingPaper

This problem proved to be quite difficult, and not many coders were able to solve it -- even though a good number seemed to be on the right track, it was difficult to get the whole idea together.

The first thing to notice is the "graph" smell the problem has. The graph to build here takes the vertices of the squares as nodes and the lines between squares as edges (actually, only those lines that are to be cut count as edges, i.e., lines that separate two different non-space characters). With this graph built, you can rethink the problem as "given this graph, what is the least number of disjoint simple paths that cover all of its edges", with the only additional restriction being that at most 1 path may go through each node (this arises from carefully examining example 5; note that "going through" means passing by a node, not starting or ending on it). Also, all nodes must be reachable from empty space (to make the thing cuttable), which means that in each connected component with at least one edge there must be at least one node that is a vertex of a non-papered square. We'll call these nodes "space nodes" from now on. If that condition does not hold (we can check it with a simple DFS or flood fill), we return -1.

Using some knowledge about Eulerian paths, you can see that, without the additional restriction, the answer is simply the total number of odd degree nodes divided by 2. But that restriction makes the problem a bit different. This is where good knowledge of the theory comes into play: by carefully looking at the proof of the quoted Euler theorem, one can see that it is a "greedy approach" that still works in this case, slightly modified.

Let's suppose a given node has degree d, and that p paths pass through it (i.e., use the node, but don't start or end on it). That means d - 2p paths start or end there. By the additional restriction mentioned, p is at most 1. Also, in "space nodes" p has to be 0, because a cut cannot pass through such a point. Since we want to minimize the number of cuts, it is better to have p=1 at each of the other nodes, and that's always possible because, at this point, each connected component has at least one "space node" (the only obstacle for a node of degree greater than 1 to be a passing node would be belonging to a component where every node is a passing node, because some node has to start a path; but since "space nodes" can play that part, there is no trouble assigning p=1 to every eligible node). Therefore, the final algorithm is: for each node with degree d, add d to the accumulated result if it is a space node or its degree is at most 1 (the p=0 cases), and d-2 otherwise (the p=1 case). Finally, since each path has 2 endpoints, divide the accumulated result by 2 and return that number.
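
In code, that final counting step could look like the following sketch, assuming the graph has already been built and the reachability check has already ruled out the -1 case (degree and isSpace are illustrative names for the per-node data described above):

    #include <vector>
    using namespace std;

    // degree[v]: number of to-be-cut edges incident to node v
    // isSpace[v]: whether v is a vertex of a non-papered square (a "space node")
    int countCuts(const vector<int>& degree, const vector<bool>& isSpace) {
        int endpoints = 0;
        for (int v = 0; v < (int)degree.size(); ++v) {
            if (isSpace[v] || degree[v] <= 1)
                endpoints += degree[v];        // p = 0: every incident edge starts or ends here
            else
                endpoints += degree[v] - 2;    // p = 1: one path passes through this node
        }
        return endpoints / 2;                  // each path has two endpoints
    }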

Note: Actually, in non-space nodes with d > 1 we must have p=1, because if p=0 then d cuts start or end there, and the first one to do so would violate the cutting rules (you can only start a cut at an already used vertex/node or at a "space node"). With p=1, we can first have the cut that passes through and after it the cuts that start or end there, which are then valid, as in example 5.

As a final comment, as you can see from the proof, a greedy approach was also possible: start at any possible point and cut until you can't go on. Add 1 to the counter. Cut again. Add 1... and so on. If all parts left are single-coloured, return the counter, else return -1. The reason this works is the same as the reason the analytical approach works. Of course, coming up with either is similarly difficult, and once you have the idea, the analytical approach is much easier to code. That is probably why no passed solution and no reference or tester solution used the greedy approach.

Author
By soul-net
TopCoder Member