Thursday, June 8, 2006

Match summary

This SRM attracted 774 coders. This is good, taking into account the number of participants of recent SRMs, but not as good as one would expect after seeing more than 1,000 people in several straight SRMs not long ago. Maybe everyone is concentrating on drawing logos for the TCCC? The problem set in both divisions was pretty difficult. In division 1, dskloet took home the win while marek.cygan got second, with the only 2 successful submissions of the hard problem... Wait. Some of the top ranked participants are asking the judges to review a crucial play. There's a moment of tension. Hard problems are being retested. Wow! In a major turn of events, this has become Russian coders' day, with Petr, pashka and andrewzta filling the podium based on solid performances on all problems. Also, 6 out of the top 8 coders in this SRM were from the Russian Federation. In division 2, the problem set was pretty complex and strange, with the easy and medium problems being closely related. irakli, kohaku and dramk took the first three spots by being the only 3 coders to correctly solve the hard problem. Both division champions, Petr and irakli, really showed off by each having the fastest submission on every problem in their respective problem set and finishing all three in less than half an hour. A really impressive debut for irakli and (another) astonishing performance for Petr.

The Problems

SortMachine
Used as: Division Two - Level One:
This problem was unusual for a division 2 easy, because it required an analysis a little more complex than most problems of that level. The first thing to realize is that moving the same number twice is useless: removing all but the last instruction that moves it will not affect the output of the machine and will reduce the number of instructions in the code. Now, we have to think about the output and what it will be: a sequence with two parts, first all unmoved numbers in their original order, and then all the moved numbers in the order we moved them. For the first part to be right, all numbers in it (i.e., the numbers that are not moved) have to form a prefix of the sorted sequence and have to be in the correct order. We can easily see that the numbers in the second part can be in any order we want, simply by moving them in that order. So, the problem reduces to seeing how big the first part can be (i.e., how many numbers we can avoid moving). Since the first part is a prefix of the sorted sequence, we can find it greedily by repeatedly looking for the elements of the sorted sequence in the original sequence, only moving forward (if at some point the ith element of the sorted sequence appears before the (i-1)th element, then the ith element can't be part of the prefix, so we are done). For a clean implementation of this idea see irakli's code.

BifidSortMachine
Used as: Division Two - Level Two:
Used as: Division One - Level One:
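Since this problem builds directly on SortMachine, it may help to first see that greedy spelled out in code. This is a minimal sketch of my own (class and method names are hypothetical, not from the actual problem, and I assume all values are distinct):

```java
import java.util.Arrays;

public class SortMachine {
    // Answer = n minus the longest prefix of the sorted sequence that
    // appears in order in the original sequence.
    public static int countMoves(int[] seq) {
        int[] sorted = seq.clone();
        Arrays.sort(sorted);
        int pos = 0;     // scan position in the original sequence
        int prefix = 0;  // how many sorted elements matched so far
        for (int v : sorted) {
            while (pos < seq.length && seq[pos] != v) pos++;
            if (pos == seq.length) break;  // v occurs before the previous match
            prefix++;
            pos++;
        }
        return seq.length - prefix;  // everything else is moved to the back
    }
}
```

For example, on 1 3 2 4 the unmoved prefix is {1, 2}, so two moves (3 then 4) suffice.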
Please read the editorial comment for SortMachine (directly above this one) first, since this explanation builds on it. The idea for this problem is similar, but the output sequence has 3 parts: first the MOVEFRONTed numbers, then the unmoved numbers, and then the MOVEBACKed numbers. As in the previous problem, and for the same reasons, each number will only be moved once, so the problem is to see how long the middle part (the unmoved numbers) can get. The unmoved numbers can be any consecutive subsequence of the sorted sequence that appears as a subsequence (in this case not necessarily consecutive) of the original sequence. We can use the same procedure as in the previous problem to find it, allowing the match to start at any element, trying to build the consecutive run that starts at that element, and keeping the greatest length found. For a clean implementation of this approach see this other code by irakli. You'll see the continuity in the ideas presented here: his code for both problems is almost the same, with one added loop in this one to check every possible start, and a variable to keep the length of the best subsequence found. For an extended discussion of different ideas and why they do or don't work, see this thread in the forums.

BlockDistance
Used as: Division Two - Level Three:
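Returning briefly to BifidSortMachine: the variant with the extra loop described above can be sketched as follows (again, names are mine and values are assumed distinct):

```java
import java.util.Arrays;

public class BifidSortMachine {
    // Try every possible start of the unmoved run, which must be a
    // consecutive block of the sorted sequence appearing as a
    // subsequence of the original sequence.
    public static int countMoves(int[] seq) {
        int n = seq.length;
        int[] sorted = seq.clone();
        Arrays.sort(sorted);
        int best = 0;
        for (int start = 0; start < n; start++) {
            int pos = 0, len = 0;
            // greedily match sorted[start], sorted[start+1], ...
            for (int i = start; i < n; i++) {
                while (pos < n && seq[pos] != sorted[i]) pos++;
                if (pos == n) break;
                len++;
                pos++;
            }
            best = Math.max(best, len);
        }
        return n - best;  // every number outside the run is moved exactly once
    }
}
```

For example, on 2 3 1 the run {2, 3} can stay, so a single MOVEFRONT of 1 sorts the sequence.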
This was a classical dynamic programming problem, with a little twist. There were two similar approaches that solved it in O(N^{3}); each of the 3 correct submissions used one of them. Since 1000^{3} is close to the edge of the time limit, you had to be careful and make a couple of necessary optimizations for that to run in time. There is also an O(N^{2}) solution, which was safer because 1000^{2} is way under the time limit. I'll present that solution afterwards. As usual when DP applies to strings, you need an index into each of them. Let's call i an index into oldText and j an index into newText (I'll refer to both parameters as if they were strings; the concatenation part is merely technical). The function f(i,j) means "what is the lowest number of block insertions to take A to B", where A is the substring of oldText that starts at position i, and B is the substring of newText that starts at position j.
We'll formally define f(i,j) as the minimum over two situations, match and no match. This takes O(N^{3}) (after memoizing the function or turning it into an iterative DP implementation), because there are O(N^{2}) states (each pair i,j) and in each of them you loop over several possible insertions (at most N). The difference between irakli's solution and dramk's solution is that, for the second situation, the first one finds the best insertion over all positive lengths, while the second one finds the best match for oldText[i]: since it HAS to match something, it looks for each k>j such that oldText[i] is equal to newText[k] and tries to improve the current answer with f(i,k)+1. Both solutions are clear enough to read; use either of them for reference. The other possible idea was to add to the (i,j) state another parameter b stating whether we are inside an already started block or not. In situation 1 (a match), the cost is always 0 and we recursively call with b = false; in situation 2 we only say that newText[j] is part of an inserted block, so we recursively call f(i,j+1,true). If b was false, the value this gives us is f(i,j+1,true)+1 (because we are starting a new block). If b was true, it is just f(i,j+1,true), because we are using the same block that was already counted before.
This takes O(N^{2}), because the state space is N*N*2 and each call is done in O(1) (no loop).

    int memo[][][];
    char[] s, t;
    static final int inf = 100000;

    // b==0 stands for TRUE and b==1 stands for FALSE,
    // so the cost added when inserting is b
    int calc(int i, int j, int b) {
        if (j >= t.length && i >= s.length) return 0;
        if (j >= t.length) return inf;
        if (i >= s.length) return b;
        if (memo[i][j][b] > -1) return memo[i][j][b];
        int r = inf;
        if (s[i] == t[j]) r = Math.min(r, calc(i + 1, j + 1, 1));
        r = Math.min(r, calc(i, j + 1, 0) + b);
        return memo[i][j][b] = r;
    }

    public int minDist(String[] oldText, String[] newText) {
        String s = "", t = "";
        for (int i = 0; i < oldText.length; i++) s += oldText[i];
        for (int i = 0; i < newText.length; i++) t += newText[i];
        this.s = s.toCharArray();
        this.t = t.toCharArray();
        memo = new int[s.length()][t.length()][2];
        for (int i = 0; i < s.length(); i++)
            for (int j = 0; j < t.length(); j++)
                for (int b = 0; b < 2; b++)
                    memo[i][j][b] = -1;
        return calc(0, 0, 1) < inf ? calc(0, 0, 1) : -1;
    }

LightSwitches
Used as: Division One - Level Two:
This problem was pretty unusual for a medium. Instead of the classical DP or DFS/BFS we are used to in division 1 500s, this match had a mostly theoretical problem that was much thinking and little coding. Even though it is possible to use some linear algebra theory to quickly express the problem, I'll try to explain it with as little theory as possible, so more people can follow the explanation. The important part is to formally state what the problem is asking; the solution will then be quite standard (in this, it's similar to many graph problems, where the real problem is in using the right graph). First, let's turn the Y/N matrix into a 0/1 matrix, 0 for unconnected switch/bulb pairs and 1 for connected ones. It's easy to see that the order in which we flip the switches is not important, and also that each switch is used at most once, because two uses will cancel each other out. So we must assign a number (0 or 1) to each row (switch), multiply each element in that row by that number, and add up each column using modulo 2 arithmetic (because to see if a bulb is turned on the important thing is the parity of the number of state changes); the resulting vector represents the configuration achieved by that assignment. So, formally, the problem asks how many different such resulting vectors there are, iterating over each possible assignment of 0 or 1 to each row. We say that a switch S can be simulated by a subset OS of the switches not including it if and only if flipping all switches in OS produces exactly the same transformation as flipping only S. If a switch can be simulated by the other switches, we can simply remove that switch, because it adds nothing to the possible configurations.
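As a sanity check of this formulation, a brute force over all 2^S switch subsets (feasible only for small numbers of switches; class and method names are mine) counts the distinct reachable configurations directly:

```java
import java.util.HashSet;
import java.util.Set;

public class LightSwitchesBrute {
    // Rows of grid are switches, columns are bulbs ('Y' = connected).
    // Enumerate every subset of switches and collect the resulting
    // bulb configurations as bitmasks (mod-2 addition is XOR).
    public static int countConfigurations(String[] grid) {
        int switches = grid.length, bulbs = grid[0].length();
        Set<Long> seen = new HashSet<>();
        for (int mask = 0; mask < (1 << switches); mask++) {
            long config = 0;
            for (int i = 0; i < switches; i++)
                if ((mask >> i & 1) != 0)
                    for (int j = 0; j < bulbs; j++)
                        if (grid[i].charAt(j) == 'Y')
                            config ^= 1L << j;  // toggle bulb j
            seen.add(config);
        }
        return seen.size();
    }
}
```

Note how two identical rows collapse to one reachable set, exactly as the simulation argument above predicts.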
If we keep removing switches in this fashion until every switch left is linearly independent from the others, the final number of possible configurations will turn out to be 2^{N}, where N is the number of switches left, because every combination of those switches gives a different state (remember they are independent). Going back to our mathematical formulation, to get a linearly independent set of rows we can run the well-known Gaussian elimination on the matrix (without the variables and the result vector), doing everything in modulo 2 arithmetic. The number of nonzero rows in the result is the size of the maximum linearly independent set of rows (also called the rank of the matrix). For a clean enough implementation, see antimatter's code.

TourCounting
Used as: Division One - Level Three:
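Before moving on, here is a sketch of the LightSwitches rank computation just described. As a simplification of my own I assume there are at most 64 bulbs, so each row fits in a long bitmask; class and method names are hypothetical:

```java
public class LightSwitchesRank {
    // Gaussian elimination over GF(2): each row is one switch's
    // connection pattern as a bitmask. Returns 2^rank.
    public static long countConfigurations(long[] rows) {
        int rank = 0;
        for (int bit = 63; bit >= 0; bit--) {
            // find a pivot row with this bit set, among unused rows
            int pivot = -1;
            for (int r = rank; r < rows.length; r++)
                if ((rows[r] >>> bit & 1) != 0) { pivot = r; break; }
            if (pivot < 0) continue;
            long tmp = rows[rank]; rows[rank] = rows[pivot]; rows[pivot] = tmp;
            // XOR the pivot row into every other row that has this bit
            for (int r = 0; r < rows.length; r++)
                if (r != rank && (rows[r] >>> bit & 1) != 0)
                    rows[r] ^= rows[rank];
            rank++;
        }
        return 1L << rank;  // 2^(number of independent switches)
    }
}
```

For instance, the rows 01, 10, 11 have rank 2 (the third is the XOR of the first two), giving 4 configurations.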
First, let's get rid of the modulo consideration. The modulo m is only there because numbers can get too big. Simply taking everything modulo m after each operation will keep all numbers below 10^{9} and ensure that sums and multiplications never overflow (using 64 bit integers). We will be using matrices a lot in this problem; for simple definitions of matrices, their sum, their product and other basic things you can check out this page. What we are given of the graph is its adjacency matrix, with 1's replaced by 'Y' and 0's by 'N'. So, as a first step, we must construct the mathematical adjacency matrix as a two-dimensional array of 64 bit integers. There's a useful theorem we must take into account for this problem: when raising the adjacency matrix of any graph to the kth power, the entry at position (i,j) of the result is the number of paths from i to j that have exactly k steps. Of course, the number of cycles that have exactly k steps is then the sum of the diagonal elements of the matrix (because those are the paths that start and end at the same node). This value is known as the trace of the matrix. From here on there were a lot of different approaches, each at some point on the line between linear algebra and graph theory. I invite you to check out a couple of different correct codes for this problem: it's funny how, while expressing the idea in several different ways, on careful inspection they all behave (i.e., the operations they perform when executing) pretty much the same. I'll continue to explain the idea behind Petr's implementation, which seemed the easiest one to read and understand. It is mostly based on algebra; for an example of a code based more on graph theory see dskloet's failed solution (the bug in it is minor and only in the last line, so you can safely take a look at it, or at the corrected working code he has uploaded to the practice room).
Let A be the adjacency matrix of g as described above and N be the number of nodes in g (i.e., also the number of rows and columns of A). As we said, the number of cycles of length i is the trace of A^{i}. Therefore, our goal of counting all cycles may be divided into counting all cycles of each length and then adding those together. In other words, the result we need is equal to the sum of the traces of A^{i} for all i between 2 and k-1, inclusive. Since the trace of a sum of matrices is the sum of their traces, that number is equal to the trace of the sum (in the matrix sense) of all matrices A^{i} for i between 2 and k-1. Since A^{0} is the identity matrix, which has all 1s on its diagonal and therefore has trace equal to N, and A^{1} is the original matrix, which has only 0s on its diagonal and therefore has trace 0, we finally get the following formula: sumTo(m) = sum of all A^{i} for each i between 0 and m, inclusive. Result = trace(sumTo(k-1)) - N. When m is odd, there is an even number of terms (m+1), so we can split them into two equal halves: sumTo(m) = sumTo((m-1)/2) + A^{(m+1)/2}*sumTo((m-1)/2) = (1 + A^{(m+1)/2})*sumTo((m-1)/2) (here 1 means the identity matrix). When m is even, we can simply say that sumTo(m) = sumTo(m-1) + A^{m}. This leads to a simple recursive implementation that does O(log(k)) matrix multiplications. Since matrix multiplication is O(N^{3}), the total time is O(log(k)*N^{3}), which is at most about 20*35^{3} for k ≤ 10^{6} and N ≤ 35, small enough. (Actually, the number of recursive calls to sumTo is about 2*log(k), because there is also one call for each 1 in the binary representation of k, and the same applies to the exponentiation, so the constant factor here can be a little high, but we have enough margin.) If you failed to understand this last part, please post any questions in the forums and I'll be glad to clarify what you misunderstood. As a final comment, I'd like to say that andrewzta's solution was pretty similar to my original solution and turned out to be pretty different from the rest.
It is based on building a 2N*2N matrix, with an idea similar to the fast Fibonacci matrix trick (remember that block multiplication is valid for matrices), and simply raising that matrix to a power, with O(log(k)) matrix multiplications, to get the result. I won't give any other details for this approach because it would be too long, but do inspect the code; it's a good lesson in linear algebra.
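To close, here is one way the sumTo recursion for TourCounting might look. All class and method names are mine (this is not Petr's actual code), and for simplicity this version calls pow() inside sumTo(), doing O(log^2 k) matrix multiplications instead of O(log k); that is still far within the limits for N ≤ 35 and k ≤ 10^{6}:

```java
public class TourCounting {
    static long mod;

    static long[][] identity(int n) {
        long[][] r = new long[n][n];
        for (int i = 0; i < n; i++) r[i][i] = 1 % mod;
        return r;
    }

    static long[][] add(long[][] a, long[][] b) {
        int n = a.length;
        long[][] r = new long[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                r[i][j] = (a[i][j] + b[i][j]) % mod;
        return r;
    }

    static long[][] mul(long[][] a, long[][] b) {
        int n = a.length;
        long[][] r = new long[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                if (a[i][k] != 0)
                    for (int j = 0; j < n; j++)
                        r[i][j] = (r[i][j] + a[i][k] * b[k][j]) % mod;
        return r;
    }

    static long[][] pow(long[][] a, int e) {
        long[][] r = identity(a.length);
        while (e > 0) {
            if ((e & 1) == 1) r = mul(r, a);
            a = mul(a, a);
            e >>= 1;
        }
        return r;
    }

    // sumTo(m) = A^0 + A^1 + ... + A^m, entries mod `mod`
    static long[][] sumTo(long[][] a, int m) {
        if (m == 0) return identity(a.length);
        if (m % 2 == 1)  // odd m: even number of terms, fold the two halves
            return mul(add(identity(a.length), pow(a, (m + 1) / 2)),
                       sumTo(a, (m - 1) / 2));
        return add(sumTo(a, m - 1), pow(a, m));  // even m: peel off last term
    }

    // cycles of length 2..k-1 = trace(sumTo(k-1)) - trace(A^0)
    public static long countCycles(long[][] adj, int k, long m) {
        mod = m;
        long[][] s = sumTo(adj, k - 1);
        long trace = 0;
        for (int i = 0; i < adj.length; i++) trace = (trace + s[i][i]) % mod;
        return ((trace - adj.length) % mod + mod) % mod;
    }
}
```

On the triangle graph K3, for example, there are 6 closed walks of length 2 (each edge traversed back and forth, counted from each endpoint), which this computes for k = 3.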
