Image Credit: NASA/JPL/SSI
The Cassini spacecraft has returned thousands of images from its orbit around Saturn, and scientists have discovered giant propeller-shaped objects in the rings. Because the propellers are unique in their shape and in their effect on the rings, scientists currently locate them using a combination of automated processing and manual inspection. With thousands of images returned from Cassini, however, manually inspecting each one would take a prohibitive amount of time.
Our goal is to automate the identification of propeller objects in Saturn’s rings as much as possible. Each propeller’s location must be determined, and propellers that appear in multiple images must be flagged as the same object.
The propellers travel around Saturn in circular Keplerian orbits. These orbits are highly predictable, so a propeller's future position can be computed accurately even over long periods of time. For reasons not completely understood, however, the propellers also make small chaotic jumps over time that slightly change the radius of the orbit. These jumps have a very low probability of occurring over a short time span.
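For a circular Keplerian orbit, the angular rate follows directly from Kepler's third law, which is what makes the future position so predictable. The sketch below propagates a propeller's longitude forward in time; Saturn's gravitational parameter is an approximate published value, and the example radius and sign convention are illustrative assumptions, not taken from the contest code.

```java
// Sketch: propagating a propeller's longitude on a circular Keplerian orbit.
public class KeplerPropagation {
    // Saturn's gravitational parameter GM, m^3/s^2 (approximate)
    static final double GM_SATURN = 3.7931e16;

    // Mean motion n = sqrt(GM / r^3), in radians per second
    static double meanMotion(double radiusMeters) {
        return Math.sqrt(GM_SATURN / Math.pow(radiusMeters, 3));
    }

    // Predicted longitude in degrees, wrapped to [0, 360), after dtSeconds
    static double propagate(double longitudeDeg, double radiusMeters, double dtSeconds) {
        double deg = longitudeDeg + Math.toDegrees(meanMotion(radiusMeters) * dtSeconds);
        deg %= 360.0;
        return deg < 0 ? deg + 360.0 : deg;
    }

    public static void main(String[] args) {
        // Illustrative radius in the outer A ring, r ~ 136,500 km
        double r = 1.365e8;
        double periodHours = 2 * Math.PI / meanMotion(r) / 3600.0;
        System.out.printf("orbital period: %.2f hours%n", periodHours);
        System.out.printf("longitude after 1 day: %.2f deg%n", propagate(10.0, r, 86400.0));
    }
}
```

Because a small radius change alters the mean motion, a chaotic jump shows up as a slowly accumulating longitude drift relative to the old orbit.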
This match will be split into two separate phases. In Phase 1, part of the image filtering source code for the current algorithm will be released for use in this competition. In Phase 2, which begins immediately after Phase 1 completes, the complete propeller detection source code will be released, and competitors will be able to use the source code of any of the Phase 1 submissions.
The original run of this contest offered cash prizes in both rounds. This re-run has no cash prizes but instead offers a collection of NASA-related swag and goodies.
There are 643 images in the complete data set that will be used for this competition: approximately 30% for system testing, 20% for provisional testing, and 50% for local testing. The local test images can be downloaded here (~1 GB) and viewed with the NASAView program. Most images contain only one propeller, but some contain as many as five. The ground truth .csv file for the local test images can be downloaded here; it contains one line per propeller appearance, with the following comma-separated values.
- Image - Unique string assigned to this image.
- Line - Propeller pixel coordinate Y.
- Sample - Propeller pixel coordinate X.
- Radius - Orbital radius with respect to Saturn.
- Longitude - Longitudinal coordinate with respect to Saturn.
- Nickname - Name assigned to this propeller.

The a, b, dx, and dy fields represent size measurements estimated by the observer; the delta entries are measures of uncertainty.
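One line of the ground truth file can be parsed with a simple split. This sketch assumes the six columns above appear first and in that order (the real file may carry additional columns such as a, b, dx, and dy); the sample values in main are illustrative, not taken from the actual file.

```java
// Sketch: parsing one comma-separated ground-truth line into typed fields.
public class GroundTruthRecord {
    final String image, nickname;
    final double line, sample, radius, longitude;

    GroundTruthRecord(String csvLine) {
        String[] f = csvLine.split(",");
        image = f[0].trim();
        line = Double.parseDouble(f[1]);
        sample = Double.parseDouble(f[2]);
        radius = Double.parseDouble(f[3]);
        longitude = Double.parseDouble(f[4]);
        nickname = f[5].trim();
    }

    public static void main(String[] args) {
        // Illustrative values only
        GroundTruthRecord r = new GroundTruthRecord(
                "N1866365558,512.5,480.25,136505.2,215.3,Bleriot");
        System.out.println(r.nickname + " at (" + r.line + ", " + r.sample + ")");
    }
}
```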
For each image in the local training set, your trainingData method will be called with the current image data. The following parameters are supplied to your trainingData method.
Parameters 4 through 9 can be used to convert from pixel location to space coordinates using the provided Transformer class.
- imageData - Decimal format image array in horizontal scan lines. Single value per pixel. Image size can be determined from the instrumentModeId parameter (see Transformer class).
- imageId - Unique string assigned to this image.
- startTime - Time the image was taken. Format: yyyy-dddTHH:mm:ss.SSS (Example: 2005-138T17:33:19.842)
- declination - Camera posture declination
- rightAscension - Camera posture right ascension
- twistAngle - Camera posture twist angle
- scPlanetPositionVector - Vector from the Cassini spacecraft to Saturn in J2000 coordinates.
- instrumentId - Camera instrument ID
- instrumentModeId - Camera operating mode ID
- imageGroundTruth - String array of lines from the ground truth file associated with this image. Comma separated format with same values as the supplied local ground truth .csv file.
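The startTime parameter uses a day-of-year timestamp, and differencing image times is essential for linking propellers across images. A sketch of parsing it with java.time follows; the format string is derived from the example above (2005-138T17:33:19.842), where 138 is the day of the year.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch: parsing the startTime parameter (yyyy-dddTHH:mm:ss.SSS, where
// ddd is the day of the year) into a LocalDateTime.
public class StartTimeParser {
    // 'u' is the proleptic year and 'D' the day-of-year in java.time patterns
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("uuuu-DDD'T'HH:mm:ss.SSS");

    static LocalDateTime parse(String startTime) {
        return LocalDateTime.parse(startTime, FMT);
    }

    public static void main(String[] args) {
        LocalDateTime t = parse("2005-138T17:33:19.842");
        System.out.println(t); // 2005-05-18T17:33:19.842 (day 138 of 2005 is May 18)
    }
}
```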
For each image in the testing set, your testingData method will be called with the current image data. The parameters described above are supplied to your testingData method as well.
Finally, your getAnswer method will be called. This method should return all of the identified propeller objects in order from "most sure" to "least sure". You may not return more than 10,000 objects. Each element should contain the following information in comma delimited format.
- ImageID – ImageID associated with image containing this object
- Line – Pixel coordinate Y
- Sample – Pixel coordinate X
- ObjectID - User defined ID# identifying this object (integer from 1 to 10000)
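A minimal sketch of assembling the getAnswer result is shown below. The Detection record and its confidence field are stand-ins for whatever your detector produces; only the output format, the most-sure-first ordering, and the 10,000-element cap follow the statement.

```java
import java.util.*;

// Sketch: building the comma-delimited answer list for getAnswer.
public class AnswerBuilder {
    static class Detection {
        String imageId; double line, sample, confidence; int objectId;
        Detection(String id, double l, double s, double c, int o) {
            imageId = id; line = l; sample = s; confidence = c; objectId = o;
        }
    }

    static String[] getAnswer(List<Detection> detections) {
        // Most-sure first: average precision rewards correct early entries
        detections.sort((a, b) -> Double.compare(b.confidence, a.confidence));
        int n = Math.min(detections.size(), 10000); // hard cap from the rules
        String[] out = new String[n];
        for (int i = 0; i < n; i++) {
            Detection d = detections.get(i);
            out[i] = d.imageId + "," + d.line + "," + d.sample + "," + d.objectId;
        }
        return out;
    }

    public static void main(String[] args) {
        List<Detection> ds = new ArrayList<>();
        ds.add(new Detection("N1", 100.0, 200.0, 0.4, 2));
        ds.add(new Detection("N2", 50.0, 60.0, 0.9, 1));
        System.out.println(Arrays.toString(getAnswer(ds)));
    }
}
```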
For the example test case, only the first 50 local images will be used as the testing set, and the next 50 images will be used for training. Provisional and system tests will contain propellers that do not appear at all in the local test set; the example case differs in that its propellers appear in both the training and testing sets.
Scoring and Testing
Two criteria are used to determine your final score: accurately determining propeller positions and linking propellers between images. Your final score will be the sum of these two scores.
- Position Scoring: Scored using average precision. Items closer to the front of the answer list affect the score more than those further back. Maximum position score: 1,000,000
- Linking Scoring: Compares and attempts to match the correctly positioned answers' ObjectIDs with the ground truth nicknames. Maximum linking score: 200,000
numFound := 0
aIndex := 0
matched := all false
baskets := all 0
for all (a in answers)
    for all (b in groundTruth)
        if (a.imageID == b.imageID AND matched[b] == false)
            if (distance from a to the center of b is less than 10 pixels)
                numFound = numFound + 1
                score = score + (1,000,000 / groundTruth.length) * (numFound / (aIndex + 1))
                matched[b] = true
                baskets[a.objectId][b.nickname] = baskets[a.objectId][b.nickname] + 1
    aIndex = aIndex + 1
for all (B = 0 to baskets.length - 1)
    maxN = nickname N with maximum value of baskets[B][N]
    score = score + (200,000 / groundTruth.length) * (baskets[B][maxN] - 1) * (baskets[B][maxN] - 1) / (baskets[B].length - 1)
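For local testing it can help to have the scoring pseudocode in executable form. The sketch below is one reading of it in Java: the Answer/Truth records, the interpretation of baskets[B].length as the basket's total match count, and the guard against a one-element basket (where the pseudocode would divide zero by zero) are assumptions; the 10-pixel match test and the score terms follow the pseudocode directly.

```java
import java.util.*;

// Sketch: the position and linking scoring loops from the pseudocode.
public class Scorer {
    static class Answer {
        String imageId; double line, sample; int objectId;
        Answer(String i, double l, double s, int o) { imageId = i; line = l; sample = s; objectId = o; }
    }
    static class Truth {
        String imageId; double line, sample; String nickname;
        Truth(String i, double l, double s, String n) { imageId = i; line = l; sample = s; nickname = n; }
    }

    static double score(List<Answer> answers, List<Truth> truth) {
        boolean[] matched = new boolean[truth.size()];
        Map<Integer, Map<String, Integer>> baskets = new HashMap<>();
        double score = 0;
        int numFound = 0;
        for (int aIndex = 0; aIndex < answers.size(); aIndex++) {
            Answer a = answers.get(aIndex);
            for (int bi = 0; bi < truth.size(); bi++) {
                Truth b = truth.get(bi);
                if (a.imageId.equals(b.imageId) && !matched[bi]
                        && Math.hypot(a.line - b.line, a.sample - b.sample) < 10) {
                    numFound++;
                    score += (1_000_000.0 / truth.size()) * numFound / (aIndex + 1);
                    matched[bi] = true;
                    baskets.computeIfAbsent(a.objectId, k -> new HashMap<>())
                           .merge(b.nickname, 1, Integer::sum);
                }
            }
        }
        // Linking score: each object ID is rewarded for concentrating its
        // matches on a single nickname
        for (Map<String, Integer> basket : baskets.values()) {
            int max = Collections.max(basket.values());
            int total = basket.values().stream().mapToInt(Integer::intValue).sum();
            if (total > 1)
                score += (200_000.0 / truth.size()) * (max - 1) * (max - 1) / (total - 1);
        }
        return score;
    }

    public static void main(String[] args) {
        List<Answer> ans = new ArrayList<>();
        ans.add(new Answer("N1", 100, 100, 1));
        List<Truth> gt = new ArrayList<>();
        gt.add(new Truth("N1", 102, 99, "Bleriot"));
        gt.add(new Truth("N2", 400, 400, "Earhart"));
        System.out.println(score(ans, gt));
    }
}
```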
A local tester has been provided for Java solutions. The tester files can be downloaded here and here.
The local_ground_truth.csv, local_index.lbl, and local_index.tab files must be placed in the program's directory. The extracted image and label files from the local data set must also be placed in a folder "local/" in the program's directory.
Your solution is placed in the PropellerDetectorLocal class file.
In this competition, you may use and modify the provided code from the current algorithm.
The Transformer class and RingSubtractor class from the current algorithm can be downloaded here with example usage.
The current feature extraction and linking algorithms can be downloaded here, here, and here. The current linking algorithm finds features in each of the images and cross correlates these features by comparing their radius, longitude, and time values.
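A linking test in the spirit of that description can be sketched as follows: two detections are treated as the same propeller when their radii agree within a tolerance and the earlier longitude, propagated by Kepler's third law over the elapsed time, lands near the later one. The GM value, both tolerances, and the argument layout are assumptions for illustration, not the released algorithm's actual criteria.

```java
// Sketch: cross-correlating two detections by radius, longitude, and time.
public class PropellerLinker {
    // Saturn's gravitational parameter GM, m^3/s^2 (approximate)
    static final double GM_SATURN = 3.7931e16;

    // True when detection 2 (radius r2, longitude lon2Deg at time t2 seconds)
    // is consistent with detection 1 drifting on a circular Keplerian orbit.
    static boolean sameObject(double r1, double lon1Deg, double t1,
                              double r2, double lon2Deg, double t2,
                              double radiusTolMeters, double lonTolDeg) {
        if (Math.abs(r1 - r2) > radiusTolMeters) return false;
        double n = Math.sqrt(GM_SATURN / Math.pow(r1, 3)); // mean motion, rad/s
        double predicted = lon1Deg + Math.toDegrees(n * (t2 - t1));
        // Wrap the residual into [-180, 180) before comparing
        double diff = ((lon2Deg - predicted) % 360.0 + 540.0) % 360.0 - 180.0;
        return Math.abs(diff) <= lonTolDeg;
    }

    public static void main(String[] args) {
        // Same place, same time: trivially consistent
        System.out.println(sameObject(1.365e8, 10.0, 0.0,
                                      1.365e8, 10.0, 0.0, 1000.0, 1.0));
    }
}
```

The radius tolerance is what absorbs the small chaotic jumps: a tight tolerance links cleanly over short baselines, while longer baselines need more slack.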
Special Rules and Conditions
- You are not allowed to hard-code the values of known objects into your code; you are expected to process the images in order to detect the propellers.
- To receive a prize, you will need to fully document your code and explain your algorithm. If any parameters were obtained from the training data set, you must also provide the program used to generate them; there is no restriction on the programming language used for that program. None of this documentation should be submitted during the coding phase. Instead, if you win a prize, a TopCoder representative will contact you directly to collect it.