Total prizes: $100,000
- Period 1: $21,000 (6 winners)
- Period 2: $29,000 (9 winners)
- Period 3: $55,000 (15 winners)
In addition to prizes for best overall performance (across all countries) in each scoring period,
there are separate prizes for best performance in individual countries or event types, as well as a
special prize for the best undergraduate solver. See the details of the prize structure here.
Surprise events such as the fall of the Berlin Wall, Iraq’s invasion of Kuwait, the civil unrest
that gave rise to the Arab Spring, and Russian incursions into Ukraine have forced the U.S.
government to respond rapidly, often in the absence of data related to the underlying causes of
these events.
The IARPA Mercury Challenge is looking for novel and advanced methods to provide early warning for
the U.S. Government of such events. The Mercury Challenge seeks innovative solutions and methods for
the automated generation of event forecasts in the Middle East.
The specific event classes of interest are:
- Military Activity (MA) in Egypt, Saudi Arabia, Iraq, Syria, Qatar, Lebanon, Jordan, and Bahrain.
- Non-violent Civil Unrest (CU) in Egypt and Jordan, such as demonstrations, marches and protests.
- Infectious disease in Saudi Arabia: weekly count of Middle East Respiratory Syndrome (MERS) cases.
More detail on the background of the challenge, the event types, the submission procedure, the
input data, the scoring method, and much more can be found in the "Mercury Challenge - Handbook.pdf"
and "Mercury Challenge - Handbook Appendices.pdf" documents (referred to as 'handbooks' from now on).
Find the handbooks in the contest's public
git repository (doc/handbook folder).
This document contains only a high-level overview of the challenge; you are expected to study the
details in the handbooks.
This challenge is different in several ways from a regular TopCoder marathon match. The most important differences are highlighted below. Once again, the details of all these points are given in the handbooks.
- Continuous submissions. In order to achieve a high score you will need to submit predictions often. Your predictions will be scored against real life events happening during the course of the contest.
- Updateable submissions. You have the option to revise your predictions submitted previously (but of course they still have to predict an event in the future). Your score will depend on how many updates you make and when.
- No immediate scoring. Submissions are syntactically validated and acknowledged, but scoring happens only once a month, when ground truth events of the previous 30 days are collected and entered into the scoring system.
- Multiple phases. The challenge consists of three 3-month scoring periods, each with its own prize structure.
- Multiple subtasks. You may choose to create predictions in all 3 event types or only a subset of them.
- Different submission system.
- No provisional vs final scoring, all scores you receive throughout the contest will contribute to your final result.
Historical data for the 3 event types of this challenge can be found in the /data folder of the contest's git repository. You may use it for training and back-testing your prediction system. While the contest is live, real-life events are collected and continuously added to the event database by an expert team. These events will be used to evaluate the predictive power of your system, and will be published as additional training data after each scoring period.
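To illustrate back-testing on the historical data, the sketch below splits a set of event records by date into training and evaluation sets. The record schema used here (JSON-like dicts with `Event_Type`, `Country`, and `Event_Date` fields) is an assumption for illustration only; consult the handbooks and the files under /data for the actual ground-truth format.

```python
from datetime import date

# Hypothetical ground-truth records; the field names are assumptions --
# check the files under /data for the real schema.
events = [
    {"Event_Type": "Military Activity", "Country": "Iraq", "Event_Date": "2018-05-03"},
    {"Event_Type": "Civil Unrest", "Country": "Egypt", "Event_Date": "2018-06-14"},
    {"Event_Type": "Military Activity", "Country": "Syria", "Event_Date": "2018-07-02"},
]

def split_for_backtest(records, cutoff):
    """Use events before `cutoff` for training and the rest for evaluation."""
    train = [r for r in records if date.fromisoformat(r["Event_Date"]) < cutoff]
    test = [r for r in records if date.fromisoformat(r["Event_Date"]) >= cutoff]
    return train, test

train, test = split_for_backtest(events, date(2018, 7, 1))
print(len(train), len(test))  # 2 1
```

Rolling the cutoff forward in 30-day steps mimics the monthly scoring cadence of the live contest.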
Making a submission
The required format of the predictions (also known as warnings) your system has to make is
described in the handbooks. Submitting a warning (or a set of warnings) differs from the
standard process you are used to in TopCoder marathons. This time the TopCoder platform is used only
for registration, leaderboard display, and forums; the submissions have to be made through a public
API. The submission process is described in the handbooks.
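As a rough sketch of what an API submission could look like, the snippet below builds a JSON warning payload and prepares an HTTP POST. The endpoint URL and all field names here are placeholders, not the real Mercury API; the actual warning schema and submission endpoint are specified in the handbooks.

```python
import json
import urllib.request

# Field names and endpoint are illustrative placeholders only; the real
# warning schema and API URL are defined in the handbooks.
warning = {
    "Warning_ID": "my-team-0001",
    "Event_Type": "Civil Unrest",
    "Country": "Egypt",
    "Event_Date": "2018-08-15",  # must lie in the future at submission time
}

payload = json.dumps(warning).encode("utf-8")
request = urllib.request.Request(
    "https://example.invalid/mercury/warnings",  # placeholder URL
    data=payload,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # uncomment once the real endpoint is known
print(payload.decode("utf-8"))
```

Because warnings are updatable, re-posting a payload with the same `Warning_ID` would revise an earlier prediction; remember that each update can affect your score.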
The quality of your submissions will be judged by comparing your predictions with actual real-life events happening while the contest is live. The exact scoring method differs across the 3 event types; in short, you have to predict the facets (fields) of the events (such as their location, type, count, and actors) as precisely as possible, and do so as early as possible. For more information see the handbooks. During the contest an online leaderboard will show how your system performs against the others.
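The real scoring method is defined in the handbooks; purely as an illustration of facet matching, the toy function below compares a warning against a matched ground-truth event and returns the fraction of facets predicted correctly. The facet names are hypothetical, and this ignores the earliness (lead-time) component of the actual score.

```python
def facet_match_score(warning, event,
                      facets=("Event_Type", "Country", "City", "Actor")):
    """Toy quality score: fraction of facets on which the warning agrees
    with the matched ground-truth event. The real Mercury scoring
    (including credit for early submission) is defined in the handbooks."""
    hits = sum(1 for f in facets if warning.get(f) == event.get(f))
    return hits / len(facets)

warning = {"Event_Type": "Civil Unrest", "Country": "Egypt",
           "City": "Cairo", "Actor": "Protesters"}
event = {"Event_Type": "Civil Unrest", "Country": "Egypt",
         "City": "Giza", "Actor": "Protesters"}
print(facet_match_score(warning, event))  # 0.75
```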
In this contest there is no real difference between provisional and final scoring (a
distinction you may be used to on the TopCoder platform): all submissions you make in a scoring
period contribute to the final score you achieve in that period, and there will be no additional
verification steps using held-back data. At the end of each scoring period, winners will be selected
as described in the Mercury Challenge Rules document (the highest score wins, provided that the
winner's submitted solution description sufficiently demonstrates a working, automated system that
was used to create the predictions during the scoring period). The 3 scoring periods are
independent; you may choose to participate in all or some of them. If your system is unchanged from
one period to the next, you are free to submit the same solution description to the corresponding
review process. Submissions at the end of the three-month periods will be made through the standard
Topcoder submission process and should contain both a System Diagram and a Written Report, as
described in the Mercury Challenge Rules linked above.
- Express Scorer is a lightweight testing engine that does not require a Docker installation.
- The "Resources" section of the Handbook provides a list of papers, pointers to additional data and relevant tools.
- See the "Resources" section on the challenge microsite.
- This match is NOT rated.
- Teaming is allowed. Topcoder members are permitted to form teams for this competition.
After forming a team, Topcoder members of the same team are permitted to collaborate with other
members of their team. To form a team, a Topcoder member may recruit other Topcoder members, and
register the team by completing this Topcoder Teaming Form. Each team must declare a Captain.
All participants in a team must be registered Topcoder members in good standing. All participants
in a team must individually register for this Competition and accept its Terms and Conditions
prior to joining the team. Team Captains must apportion prize distribution percentages for each
teammate on the Teaming Form. The sum of all prize portions must equal 100%. The minimum permitted
size of a team is 1 member, the maximum permitted team size is 5 members. Only team Captains
may submit a solution to the Competition. Notwithstanding Topcoder rules and conditions to the
contrary, solutions submitted by any Topcoder member who is a member of a team on this challenge
but is not the Captain of the team are not permitted, are ineligible for award, may be deleted,
and may be grounds for dismissal of the entire team from the challenge. The deadline for forming
teams is 11:59pm ET on the 21st day following the start date of each scoring period. Topcoder will
prepare a Teaming Agreement for each team that has completed the Topcoder Teaming Form, and
distribute it to each member of the team. Teaming Agreements must be electronically signed by each
team member to be considered valid. All Teaming Agreements are void, unless electronically signed
by all team members by 11:59pm ET of the 28th day following the start date of each scoring period.
Any Teaming Agreement received after this period is void. Teaming Agreements may not be changed
in any way after signature.
- The registered teams will be listed in the contest forum thread titled "Registered Teams".
- Organizations such as companies may compete as one competitor if they are registered as a team and follow all Topcoder rules.
- Relinquish - Topcoder is allowing registered competitors or teams to "relinquish". Relinquishing means the member will compete, and we will score their solution, but they will not be eligible for a prize. Once a person or team relinquishes, we post their name to a forum thread labeled "Relinquished Competitors". Relinquishers must still submit their final write ups to maintain leaderboard status.
- You may use open source languages and libraries provided they are equally free for your use, use by another competitor, or use by the client.
- If your solution includes licensed software (e.g. commercial software, open source software, etc), you must include the full license agreements with your submission. Include your licenses in a folder labeled “Licenses”. Within the same folder, include a text file labeled “README” that explains the purpose of each licensed software package as it is used in your solution.
- External data sets and pre-trained networks are allowed for use in the competition provided the following are satisfied:
- The external data and pre-trained network dataset are unencumbered with legal restrictions that conflict with its use in the competition.
- The data source or data used to train the pre-trained network is defined in the submission description.
- You are legally able to use the data source and have paid for the usage fees, if required.
- Use the match forum to ask general questions or report problems, but please do not post comments and questions that reveal information about possible solution techniques.
Award Details and Requirements to Win a Prize
See the corresponding section of the Mercury Challenge Rules document.
You are not eligible to participate in this Competition if you are a resident of the Canadian province of Quebec, Iran, Cuba, North Korea, the Crimea Region of Ukraine, Sudan, or Syria. In addition, you are not eligible to participate in any Competition if you are on the Specially Designated Nationals list promulgated and amended, from time to time, by the United States Department of the Treasury.