

Problem Statement
Contest: LoadBalance
Problem: LoadBalance
Problem definition
In this problem you need to process a number of marathon submissions as cheaply as possible, while maintaining a reasonably small maximum and average latency.
To do this, you can order virtual machines (VMs). There are two ways to order a VM:
 On demand VM: its price is fixed ($0.165 per hour).
 Spot VM: each minute you bid for these VMs as much as you want, but if your bid is less than the current market price, all your spot VMs are taken away.
You start with 0 machines of each type. Each minute, you can order some new VMs and/or return some of your VMs back.
Once ordered, a new VM needs 7 minutes to complete its startup. During this time it is not operational.
There are two types of submissions: full and custom. Each submission consists of a certain number of queries: 92 for a full submission and customQueries for a custom one. Each VM, if operational, can perform exactly 1 query per minute. Queries of the same submission can be executed at any time and in any order, including parallel execution of several queries at the same time on different VMs.
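Since each operational VM executes exactly one query per minute and queries parallelize freely, a batch of N outstanding queries drains on k operational VMs in ceil(N/k) minutes. A minimal sketch of that back-of-the-envelope calculation (ignoring newly arriving submissions):

```java
public class QueryTime {
    // Minutes needed to drain n queries with k operational VMs,
    // assuming no new submissions arrive in the meantime.
    static int minutesToFinish(int n, int k) {
        if (n == 0) return 0;
        return (n + k - 1) / k;  // ceiling division
    }

    public static void main(String[] args) {
        // A full submission (92 queries) on 10 VMs: ceil(92/10) = 10 minutes.
        System.out.println(minutesToFinish(92, 10));
    }
}
```

This is the basic capacity-planning relation: to keep a full submission's latency under a bound L, you need roughly 92/L operational VMs available for it.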
Simulation process
You will need to operate VMs during contestLength minutes. During each minute, the following steps are performed:
 VMs that you ordered 7 minutes ago become operational.
 You receive the following information: number of VMs of each type you currently have, number of operational VMs of each type you currently have, current market price for spot VMs (in $ per hour), number of incoming full and custom submissions during this minute.
 If your bid is less than current market price, you lose all spot VMs.
 Each operational VM you have will perform a single query. The queries are distributed using the following fixed mechanism: each VM, in order, takes a query from the highest-priority submission which still has queries to be executed. Of two submissions A and B, the earlier-arriving one has the higher priority. If they were received at the same time, a full submission has a higher priority than a custom one. Submissions which were received at the same time and have the same type are prioritized in some arbitrary order.
 For each VM type, you choose whether you want to order some more VMs or return some VMs back. (Note: if you've lost your spot VMs during this minute, you no longer own them and can't return them.) There are almost no limitations on ordering (you have to pay, though), except that you're not allowed to have more than 10,000,000 VMs of the same type simultaneously. When you return VMs, there is one limitation: you can choose only the number of VMs to be returned; you can't choose exactly which VMs get returned. The exact VMs to be returned will be chosen by a process which is relatively close to random selection. (You can check the source code of the offline tester for more details.)
 You also choose your bid for the next minute (in $ per hour).
Note that steps 1-4 are performed automatically. You can only make decisions at steps 5-6.
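The fixed distribution rule in step 4 can be expressed as a comparator: earlier arrival wins, and on a tie a full submission outranks a custom one (the tester breaks remaining ties arbitrarily). A sketch, with a hypothetical Submission class standing in for the tester's internal bookkeeping:

```java
import java.util.*;

public class SubmissionPriority {
    // Hypothetical representation of a pending submission.
    static class Submission {
        int arrivalMinute;
        boolean isFull;   // true = full submission, false = custom
        Submission(int t, boolean f) { arrivalMinute = t; isFull = f; }
    }

    // Earlier arrival first; on a tie, full before custom.
    static final Comparator<Submission> PRIORITY =
        Comparator.<Submission>comparingInt(s -> s.arrivalMinute)
                  .thenComparing(s -> !s.isFull);  // full (key false) sorts first

    public static void main(String[] args) {
        List<Submission> q = new ArrayList<>(List.of(
            new Submission(5, false),
            new Submission(5, true),
            new Submission(3, false)));
        q.sort(PRIORITY);
        // Order: minute-3 custom, then minute-5 full, then minute-5 custom.
        System.out.println(q.get(0).arrivalMinute);
    }
}
```

Because this mechanism is fixed, you cannot reprioritize work; you can only control capacity (how many VMs are operational in a given minute).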
Implementation details
You need to implement two methods. The first one, init, will be called only once at the very beginning. It has two parameters, customQueries and contestLength: the number of queries in a custom submission and the simulation length, in minutes. The return value of init will be ignored.
The second method, nextOneMinute, will be called contestLength times, once for every minute. It takes the following parameters as arguments:
 int[] numVMs will contain 2 integers: number of spot and on demand VMs you currently have;
 int[] numOperational will contain 2 integers: number of operational spot and on demand VMs you currently have;
 double[] currentPrices will contain 2 numbers: current market prices of on demand and spot VMs, per hour (the first element will always be 0.165, since on demand VMs have a fixed price);
 int[] submissions will contain 2 integers: number of full and custom submissions you received at the beginning of the current minute.
The return value should be an int[] containing 3 elements. The first two are the numbers of on demand and spot VMs you want to order/return. Positive numbers mean ordering; negative numbers mean returning. The third element should be an integer X such that X/1000.0 dollars is your bid for the next minute.
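Under these conventions a minimal skeleton looks like the following. The strategy shown (grab a fixed pool of on-demand VMs up front, never bid on spot, return everything at the end) is purely illustrative and will not score well; the pool size of 100 is an arbitrary placeholder:

```java
public class LoadBalance {
    private int customQueries, contestLength, minute;

    public int init(int customQueries, int contestLength) {
        this.customQueries = customQueries;
        this.contestLength = contestLength;
        this.minute = 0;
        return 0;  // return value is ignored by the tester
    }

    public int[] nextOneMinute(int[] numVMs, int[] numOperational,
                               double[] currentPrices, int[] submissions) {
        minute++;
        int orderOnDemand = 0;
        int orderSpot = 0;        // this sketch never touches spot VMs
        if (minute == 1) {
            orderOnDemand = 100;  // arbitrary fixed pool, ordered immediately
        } else if (minute == contestLength) {
            // numVMs = {spot, on demand} per the statement; return everything.
            orderOnDemand = -numVMs[1];
        }
        int bidMilli = 0;         // X such that X/1000.0 dollars is next minute's bid
        return new int[]{orderOnDemand, orderSpot, bidMilli};
    }
}
```

Note the index conventions: the input arrays list spot before on demand, while the returned array lists on demand before spot, as stated above.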
Scoring
A submission is considered to be complete once all its queries are executed. If T2 is the time at which a submission becomes complete and T1 is the time when this submission was received, the latency for this submission is defined as T2 - T1. (Note that the latency is at least 1 minute, since this is the time needed to execute a query.)
Your solution must maintain reasonably low latency. More exactly:
 The latency of any full submission must not exceed 35 minutes.
 The average latency of all full submissions must not exceed 15 minutes.
 The latency of any custom submission must not exceed 20 minutes.
 The average latency of all custom submissions must not exceed 3 minutes.
If any of these conditions is not fulfilled, your raw score for a test case will be reported as -1.0 (this is considered to be the worst possible score).
If you maintain the proper latency, then your raw score for a test case will be the total amount of money you spent, in dollars. Each VM is charged as follows. Suppose you ordered it at time A and returned back at time B. The segment [A, B] is broken into hours like this: [A, A+60 minutes), [A+60 minutes, A+120 minutes), ... (last hour may be incomplete). If it's an on demand VM, you are charged a fixed amount of $0.165 per each hour (the last segment is charged as a complete hour, even if it's not really complete). If it's a spot VM, then for each segment the maximum market price during this segment is found and you're charged for this segment according to this maximum price. Again, last segment is charged as a complete hour (but the maximum is taken only over the segment itself). Additionally, for each (complete or incomplete) hour of usage of any VM (on demand or spot) you will be charged an extra amount which is fixed at $0.03.
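As an illustration of the on-demand billing rule: a VM ordered at minute A and returned at minute B is charged for ceil((B-A)/60) started hours, each at $0.165 plus the fixed $0.03 extra. (Spot billing works the same way per segment, but additionally needs the maximum market price over each hour segment, which depends on the full price history.) A sketch:

```java
public class OnDemandCost {
    static final double HOURLY = 0.165;  // fixed on-demand price per hour
    static final double EXTRA  = 0.03;   // fixed extra per (complete or incomplete) hour

    // Cost of one on-demand VM ordered at minute a and returned at minute b (b > a).
    // Every started hour is charged in full.
    static double cost(int a, int b) {
        int hours = (b - a + 59) / 60;   // ceil((b - a) / 60)
        return hours * (HOURLY + EXTRA);
    }

    public static void main(String[] args) {
        // Holding a VM for 61 minutes starts a second hour: 2 * 0.195 = 0.39.
        System.out.printf("%.3f%n", cost(0, 61));
    }
}
```

The per-hour rounding is why returning a VM one minute into a new hour segment wastes almost a full hour's charge.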
The normalized score for each test case is calculated as follows. If your raw score is -1.0, then the normalized score is 0.0; otherwise it is BEST_SCORE / YOUR_SCORE, where YOUR_SCORE is your raw score and BEST_SCORE is the best (minimum) score achieved for this test case by any of the competitors.
Your overall score will be equal to 1,000,000.0 * (arithmetic average of your normalized scores over all test cases).
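The two scoring formulas combine into a small sketch (a failed test case contributes a normalized score of 0.0):

```java
public class Scoring {
    // Normalized score for one test case: 0.0 if the latency constraints failed,
    // otherwise BEST_SCORE / YOUR_SCORE (both are dollars spent, so this is <= 1).
    static double normalized(double yourScore, double bestScore, boolean failed) {
        return failed ? 0.0 : bestScore / yourScore;
    }

    // Overall score: 1,000,000 times the arithmetic mean of the normalized scores.
    static double overall(double[] normalizedScores) {
        double sum = 0;
        for (double s : normalizedScores) sum += s;
        return 1_000_000.0 * sum / normalizedScores.length;
    }

    public static void main(String[] args) {
        // Spending twice the best competitor on one case, failing another:
        double[] n = { normalized(2.0, 1.0, false), 0.0 };
        System.out.println(overall(n));  // (0.5 + 0.0) / 2 * 1e6 = 250000.0
    }
}
```

Since a single failed case contributes 0.0 rather than a penalty, meeting the latency constraints on every case matters more than shaving cost on any one of them.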
Test case generation
Each test case is generated randomly; all test cases use the same generation algorithm but different seeds for the random number generator.
customQueries is chosen as one of 0, 1, 10, 100 or 1000 (each with probability of 20%). If customQueries is 0, this test case will contain only full submissions and no custom submissions.
contestLength is chosen uniformly, at random, between 40,000 and 41,000, inclusive.
Market prices are generated using the following file. Each line is formatted "LENGTH PRICE" and represents a segment of LENGTH seconds during which the market price is PRICE dollars/hour. To generate market prices for contestLength minutes, we consecutively select random segments from the file and stack them together. The process stops as soon as the total length of the chosen segments exceeds contestLength minutes. The resulting segments represent the market prices that will be used for simulation. If several segments are active during a particular minute, the market price during this minute is the maximum of those segments' prices.
Algorithm used to generate incoming full/custom submissions is relatively complicated. We'll describe only basic properties here and you can look into the offline tester if you need more details.
 numCompetitors (number of competitors) is generated randomly from 40 to 200.
 numFullSubmissions (number of full submissions) is generated as numCompetitors * (a random double from 5.0 to 30.0).
 numFullSubmissions are distributed among contestLength minutes, however, not uniformly. The probability for a particular minute T is proportional to f(T) * growthRate^T, where f is a piecewise linear function (generated randomly; 0 <= f(T) <= 1) and growthRate is a random double from 1.00002 to 1.00006.
 minActivity is generated as a random double from 0.0025 to 0.025.
 maxActivity is generated as a random double from 0.025 to 0.125.
 For each competitor, its activity is generated as a random double between minActivity and maxActivity. Custom submissions of this competitor will come during approximately activity*contestLength minutes. They will be distributed into 3 to 30 segments so that within each segment a single custom submission is made during each minute. The distribution for segment lengths is not uniform: later segments will tend to be longer.
Additionally, there will be no incoming submissions during the first 7 minutes (to account for VM startup time) and the last 35 minutes (so that you have enough time to finish processing of all submissions).
Tools
An offline tester is available. You can use it to test your solution locally. You can also inspect its code for details of test case generation, testing and scoring.
Special conditions
 All code used in your submission must be completely authored by you.
 In order to receive the prize money, you will need to fully document how your solution works. Please note that this description should not be submitted anywhere during the coding phase. Instead, if you win a prize, a TopCoder representative will contact you directly in order to collect it. The description needs to be submitted within 3 calendar days from the moment when the final results of the match become available on the website.
 This match is rated.
Definition
Class: LoadBalance
Method: nextOneMinute
Parameters: int[], int[], double[], int[]
Returns: int[]
Method signature: int[] nextOneMinute(int[] numVMs, int[] numOperational, double[] currentPrices, int[] submissions)
Method: init
Parameters: int, int
Returns: int
Method signature: int init(int customQueries, int contestLength)
(be sure your methods are public)
Notes
 The memory limit is 1024 MB and the time limit is 1 minute per test case. This includes only time spent in your code, and it is the limit on the total execution time of all contestLength+1 calls to your methods.
 There is no explicit code size limit. The implicit source code size limit is around 1 MB (it is not advisable to submit code of size close to that or larger). Once your code is compiled, the binary size should not exceed 1 MB.
 The compilation time limit is 30 seconds. You can find information about the compilers that we use and processing server specifications here.
This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2010, TopCoder, Inc. All rights reserved.

