Key Information

The challenge is finished.

Challenge Overview


Back by popular demand, we are reintroducing dynamic scorecards to [topcoder].  Those who competed at Cloudspokes may remember our dynamic scorecard initiative called Madison.  Instead of generic scorecards per technology, a challenge author would create requirements with values, and those requirements became the scorecard.  The Cloudspokes data model was confined to Salesforce, but now we will front this with PostgreSQL, so we think we can make some improvements and add some functionality by leveraging PostgreSQL data types, along with some enhancements we have wanted for two years.

For this challenge we are only looking for ideas on how we should implement the requirement/scorecard schema, and more specifically the pros and cons of some choices we need to make.  The winning solution need not be more than a one- or two-page document, a quick ERD if you think it is necessary, and a script to create the tables in PostgreSQL.

If you have ever competed at [topcoder], these concepts should not be new to you; however, we want to rethink what we have done in the past to see if we can provide a simpler model with more flexibility, and to get some new ideas from you, the consumer of this process.  Attached is a conceptual model which should be used only to convey the idea of the scorecard process and should not be taken literally as a data model; that is what you will produce.  This conceptual model is simplified and not all fields are present, but the important ones are; the datatypes of these fields have been intentionally left off for you to decide in your solution.  At times in this requirement I may interject my own ideas, and when I do, I am looking for affirmation or opposition to them.  I will describe the basic flow and requirements, and then it is your turn to produce the data model.  Enough said, let's get started.

If you look at the Madison conceptual model, you can assume the challenge author has created a challenge with both the overview and title; now it is time to articulate the requirements, which will eventually become the scorecard template and later the scorecards.  I define the scorecard template simply as a group of requirements related to a single challenge, and the scorecard itself as the template that also contains the scores for each requirement, along with the appeals and appeal responses.  But more on that later.  First, let's talk about the requirement builder.

Once the challenge author has started the skeleton of the challenge, they will be presented with an interface that allows them to create new requirements or search for requirements that have been saved for reuse.  The fields depicted in the image are broken into required fields and advanced fields.  You can assume that the advanced fields are collapsed and are optional.

The first mandatory field is type [functional, technical, informational] and is a single radio button.  A functional requirement describes what a submission should do, e.g. "create a search box".  A technical requirement describes how it should be done, e.g. "use node.js" or "use angular version 1.2.1".  Both of these types of requirements will store a point value, articulated in the third mandatory field, called Point Range (1-4, or 1-10, etc.).  However, if the user sets the type to informational, that requirement will not store a point value.  For example, they may create an informational requirement that says "email me to be added to the github team".  You may argue that this does not belong in the requirements, but the idea is that, with the ability to save 'private' requirements, the user could use this to recycle common information that has no effect on the score.

The Body is simply the details of the requirement.  'Point range' is simply a picklist used to generate the scorecard and calculate the total.  If the user picks '0-4', the scorecard will allow the reviewer to give the submitter a score from 0 to 4 for that requirement.  The actual values of this range are not important at this point, but you may consider adding boolean as well, so that the requirement can also be used as a checklist.  I assume that this field is a string, and values like 1-10 or boolean would be translated in the scorecard to the appropriate types, but I leave that up to you, as you may want to store a low range and a high range instead.  Please discuss this option in your solution.
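To make the discussion concrete, here is a minimal sketch of what a requirement table might look like, showing both storage options for the point range side by side.  All table, column, and type names here are assumptions, not a prescribed design:

```sql
-- Hypothetical sketch only; names and datatypes are open questions.
CREATE TYPE requirement_type AS ENUM ('functional', 'technical', 'informational');

CREATE TABLE requirement (
    id          serial PRIMARY KEY,
    type        requirement_type NOT NULL,
    body        text NOT NULL,
    -- Option A: store the picklist value as a string ('0-4', '1-10', 'boolean')
    point_range varchar(16),
    -- Option B: store explicit bounds instead (both NULL for informational)
    point_low   smallint,
    point_high  smallint,
    -- informational requirements carry no point value
    CHECK (type <> 'informational' OR point_range IS NULL)
);
```

Option B makes range validation and score math trivial but needs a separate flag or convention for the boolean/checklist case; Option A keeps the UI picklist value verbatim but pushes parsing into the application.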

Next we move to the optional fields.  These are for the power user who wants to be able to reuse the requirements, add weights, or get statistics on each requirement.

The first optional field is 'tags', which should be pretty self-explanatory: add one or more tags for searching purposes, e.g. 'heroku', 'Salesforce', 'Kyles favorites', 'Project Serenity', 'Visualforce best practices'.  If we are going to use tags, it implies we are planning on reusing the requirement, which leads us to the next field: the ability to save the requirement to the library and make it private to the user or public to all.  It seems like the PostgreSQL array type is well suited to tags, but I am not sure.  Think fast type-ahead search when planning this field.  In Salesforce, or in my old MySQL days, we would have created a separate table with one row per tag; however, if the array datatype can be searched just as fast, it seems like a better candidate, but I don't know.  What is your opinion?

Since we are adding tags, it implies we would like to reuse this requirement.  For this we would give the user the ability to save the requirement and flag it for public or private (just that user) reuse.  The big question is: do we save it anyway, or does it just go to the scorecard?  For example: Cory creates a requirement that says 'make the icon green' and saves it public.  I decide to use it but change green to red, and I don't save it.  Should it save anyway, or just be translated to the scorecard with my change?  I don't plan on using it again.  Discuss the merits of this in your submission.  Think about 10,000 challenges a year with 20 requirements each.  How do you ensure you can still search by both the tag and a full-text search without impacting performance if you are storing 200k new requirements a year?  Maybe you store explicit saves in one table and non-saves in another.  What do you think?
The next field is difficulty; just think of a range, like movie ratings with 3½ stars, but in our case it is brains instead of stars.  We won't use this field right away, but the idea is that we can sum up the total complexity of a challenge and then compare it to other challenges.

The final optional field is necessity.  This is a field that we might not use right away, but it is a way to weight the requirements as 'must have', 'should have', 'nice to have', or 'optional'.  This is a way for the challenge author to tell the community which requirements they can live without, which are essential, and everything in between.  I think of this as a coefficient: 'Must have' = 1.0, 'Should have' = 0.75, 'Nice to have' = 0.5, and 'Optional' = 0.25.  We can then multiply by these coefficients to get a weighted score, which will help the members determine where they should spend their time.

These are only a sampling of the fields; if you have additional thoughts, we would love to hear them, so feel free to inject whatever you think is necessary.
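As one illustration of the coefficient idea, assuming scores were normalized into a per-requirement score table (all names here are hypothetical), the weighted total for a scorecard could be computed directly in SQL:

```sql
-- Sketch: weighted total using the necessity coefficients above.
-- Assumes hypothetical tables score(scorecard_id, requirement_id, score)
-- and requirement(id, necessity).
SELECT SUM(s.score * CASE r.necessity
                         WHEN 'must have'    THEN 1.0
                         WHEN 'should have'  THEN 0.75
                         WHEN 'nice to have' THEN 0.5
                         WHEN 'optional'     THEN 0.25
                     END) AS weighted_total
FROM   score s
JOIN   requirement r ON r.id = s.requirement_id
WHERE  s.scorecard_id = 1;
```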

Once the user has created all the requirements and saves the form, it creates the scorecard template.  In traditional database terms this would be a join table of the requirements, but we think using a JSON field on the challenge might be worth looking into.  Please discuss the merits of this.
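The two candidates might be sketched as follows; both assume a challenge table exists, and every name here is an assumption:

```sql
-- Option A: classic join table between challenge and requirement.
CREATE TABLE scorecard_template (
    challenge_id   integer  NOT NULL,
    requirement_id integer  NOT NULL,
    sort_order     smallint,
    PRIMARY KEY (challenge_id, requirement_id)
);

-- Option B: a jsonb column on the challenge holding a snapshot
-- of the requirements as they stood when the template was saved, e.g.
-- '[{"requirement_id": 7, "body": "...", "point_range": "0-4"}]'
ALTER TABLE challenge
    ADD COLUMN scorecard_template jsonb;
```

A point worth discussing: the jsonb snapshot naturally freezes the template even if a shared library requirement is later edited, whereas the join table stays normalized but needs versioning to achieve the same effect.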

Once the challenge closes, let's assume there are three submissions and three reviewers, or better yet, 9 iterations of the scorecard template.  Now we need to create 9 new objects so the scorecards can be filled out.


Scorecard: the record of one or more submitters and one or more reviewers 'filling out' the scores.  The score 'value' may be included in this collection, or it may be in a separate model.  This is one of my biggest questions.  On one hand there is a benefit to having the completed scores all in one place; on the other hand, this could make for a hot mess that is hard to unravel when calculating the final score.  Much like the score value, an 'appeal' and 'appeal response' need to be stored for each requirement on the scorecard.

You may notice that I used the word collection instead of table.  This was intentional, since I see this as a JSON table, but I would like your opinion.  I use the terms scorecard template and scorecard to assume these are two separate objects/collections/tables, but would love to hear your opinion.  I am not particularly happy with those two terms and would be open to suggestions; if this were a survey, we might use the terms questions and answers.

Another very important consideration is that some or all of this data will eventually roll up to Salesforce.  If we use JSON fields, we will need to store them as long text fields there and won't be able to report on them.  It is not required to report on them, or maybe we expand them out when they go into Salesforce, but I am also looking for a discussion on this.
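The "all in one place" variant might be sketched like this, with one row per (submission, reviewer) pair and the per-requirement scores, appeals, and appeal responses kept together in a jsonb document.  All names are assumptions:

```sql
-- Sketch: the 3 submissions x 3 reviewers example yields 9 rows here.
CREATE TABLE scorecard (
    id            serial  PRIMARY KEY,
    challenge_id  integer NOT NULL,
    submission_id integer NOT NULL,
    reviewer_id   integer NOT NULL,
    entries       jsonb   NOT NULL DEFAULT '[]',
    -- one entry per requirement, e.g.
    -- '[{"requirement_id": 7, "score": 3,
    --    "appeal": null, "appeal_response": null}]'
    UNIQUE (challenge_id, submission_id, reviewer_id)
);
```

The competing design would normalize entries into their own table, which makes final-score aggregation a plain SUM/GROUP BY and keeps the data reportable in Salesforce, at the cost of more rows and joins; that trade-off is exactly what the submission should weigh.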

Thanks and good luck.  I am eager to read your submissions.

Kyle



Final Submission Guidelines

Provide:

1.  A detailed document describing the data model of your design. REQUIRED

2.  An ERD if you think it is necessary. REQUIRED

3.  A script to create the tables in PostgreSQL. OPTIONAL but highly desired.

ELIGIBLE EVENTS:

2015 topcoder Open

REVIEW STYLE:

Final Review:

Community Review Board

Approval:

User Sign-Off


ID: 30045046