Before starting the design and implementation of any major feature, please contact the core team via the discussion mailing list to organize the effort and make sure there is no duplication of work.
For every change, you should ensure that:
- the code complies with PEP 8 (you can check it with the pep8 command);
- the code passes pylint;
- your name is listed in AUTHORS.txt.
You can set up automatic checks for the first three items using git hooks, as explained in the wiki.
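As an illustration, a pre-commit hook along these lines could run the style checkers over the staged files before each commit. This is only a sketch: the checker invocations and flags are our assumptions, not the wiki's recommended setup, so adapt them to your checkout.

```python
# Hypothetical pre-commit hook sketch (e.g. saved as .git/hooks/pre-commit and
# made executable). Checker commands and flags are assumptions; adapt freely.
import subprocess

def staged_python_files():
    # List the Python files that are staged for the current commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True).stdout
    return [f for f in out.split() if f.endswith(".py")]

def run_checks(files, run=subprocess.call):
    # Return True when every checker passes (or there is nothing to check).
    if not files:
        return True
    checkers = [["pep8"], ["pylint", "--errors-only"]]
    return all(run(cmd + files) == 0 for cmd in checkers)

# The hook's entry point would then be something like:
#     sys.exit(0 if run_checks(staged_python_files()) else 1)
```

Passing a custom `run` callable keeps the check logic testable without actually invoking the external tools.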
Depending on the size of your project, the review process is slightly different.
For a small to medium change (one that can reasonably be reviewed in a single pass), you should:
For a big change that requires several commits, we may want to review it in several steps that may also break CMS or the tests. Since we try to keep the master branch as clean as possible, we will create a feature branch on cms-dev, and you will issue pull requests against that branch. The review process will be the same, but you will be allowed to temporarily break tests and features (though never the style!). When the feature is complete, we will agree on how to rebase the branch on top of master and then merge it.
The following are some ideas for self-contained projects to expand CMS’s features. All of these require some knowledge of Python, SQLAlchemy and relational databases. Experience with the organization of programming contests and training camps is a plus.
If you are interested in developing one or more of these ideas, please contact us via the discussion mailing list, and we will arrange for a member of the core team to provide guidance.
Allow admins to store, for each task, a set of representative submissions intended to have a precise result (for example: a wrong submission that should score zero points, a correct one that should score full points, and an intermediate one that should score half of the points). Admins should be able to test at any moment that the representative submissions score exactly as expected.
More precisely, there should be a way to associate to tasks a set of tuples (submission, (name), expected result); moreover, AWS should be extended to allow the recomputation of the scores of the representative submissions and to warn the admins when a score differs from the expected one.
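The consistency check described above could look roughly like the following sketch. All names and the plain-tuple layout are hypothetical; the real implementation would work on CMS's SQLAlchemy objects.

```python
# Sketch of the check AWS could run over the representative submissions
# (function name and tuple layout are hypothetical, not CMS's actual schema).
def check_representative_submissions(expected, recompute_score):
    """expected: iterable of (submission_id, name, expected_score) tuples.
    recompute_score: callable that re-evaluates a submission and returns
    its score. Returns the mismatches the admins should be warned about."""
    warnings = []
    for submission_id, name, expected_score in expected:
        actual = recompute_score(submission_id)
        if actual != expected_score:
            warnings.append((submission_id, name, expected_score, actual))
    return warnings
```

An empty return value means every representative submission scored exactly as expected; anything else is surfaced to the admins as a warning.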
Expected changes.
Additional requirements: none.
Difficulty: medium.
Size: medium.
CMS currently has a functional test that runs all the services (apart from RWS), simulating the creation of contests and tasks by the admins and some submissions from the users. There are also some unit tests, but their coverage is very limited. The task is to increase the coverage of the unit tests (possibly reorganizing code to make it more testable), extend the functional tests to other aspects (like the importing of tasks), and speed up the execution of the functional tests.
Additional requirements: testing experience (especially in Python).
Difficulty: high.
Size: big.
Many online services for programmers (e.g., GitHub) are starting to offer in-browser editing of source files. It would be very cool to add these capabilities to CMS, allowing contestants to edit submissions directly in the browser. Existing open source projects, like Ace or CodeMirror, could be used. There is a lot of potential for follow-ups, such as branching and auto-snapshotting.
Expected changes.
Additional requirements: JavaScript.
Difficulty: medium to high, depending on the follow ups.
Size: big.
After an official contest, it is often nice to have easily retrievable statistics about score distributions, language usage, and so on.
Expected changes.
Additional requirements: JavaScript.
Difficulty: low.
Size: small.
There is already some experimental code in cmscontrib that tries to compute the complexity of the contestants' solutions based on the size of the input. A similar computation was suggested as a saner way of scoring at some IOI conference, though it has many reliability problems and is subject to cheating. Still, it would be interesting for admins to see this information, for example to get an idea of the cleverness of two contestants' solutions.
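As a rough illustration of the idea (not the actual cmscontrib code): assuming a solution runs in roughly O(n^k) time, the exponent k can be estimated with a least-squares regression in log-log space over the measured (input size, running time) pairs.

```python
# Illustrative sketch: estimate the exponent k of a presumed O(n^k) running
# time by least-squares linear regression in log-log space.
import math

def estimate_exponent(samples):
    """samples: list of (input_size, running_time) pairs, all positive."""
    xs = [math.log(n) for n, _ in samples]
    ys = [math.log(t) for _, t in samples]
    count = len(samples)
    mean_x = sum(xs) / count
    mean_y = sum(ys) / count
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    # The slope of log t against log n approximates the exponent k.
    return cov / var
```

For timings that grow quadratically (e.g. 1e-4 s at n=10, 1e-2 s at n=100, 1 s at n=1000), the estimate comes out close to 2. The noisiness of real timing data is exactly where the reliability problems mentioned above come from.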
Expected changes.
Additional requirements: statistical regression.
Difficulty: low.
Size: medium.
At the least, make sure that all interfaces are usable with a high number of contestants and tasks (and of contests, for RWS). This may mean having adaptive UIs above a certain threshold (for example, not showing a single page in RWS if you have 10k contestants).
Additional requirements: training camps experience.
Difficulty: medium to high, depending on how thorough the investigation is.
Size: big.
Many online contests are in the form of team competitions: multiple contestants cooperate to solve the same set of problems. The easiest way to implement this without changing a lot of code is to put the team information in the user table and add a new table for team participants (“sub-users”?). As extras, with in-browser code editing it would be nice to allow real-time cooperation, and it would also be nice for admins to be able to see who wrote a specific piece of code.
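A minimal sketch of the suggested schema change, shown on an in-memory SQLite database for illustration; in CMS this would be expressed as SQLAlchemy models, and the table and column names below are our assumptions, not CMS's actual schema.

```python
# Hypothetical schema sketch: team information lives in the user table, and a
# new table stores the team participants ("sub-users").
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    username TEXT NOT NULL,
    team_name TEXT                  -- NULL for ordinary individual users
);
CREATE TABLE team_members (         -- the "sub-users" of a team account
    id INTEGER PRIMARY KEY,
    team_user_id INTEGER NOT NULL REFERENCES users(id),
    member_name TEXT NOT NULL
);
""")
```

Keeping the team as a row in the existing user table means submissions, scores and rankings keep working unchanged; only the membership table is new.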
Expected changes.
Additional requirements: none.
Difficulty: low.
Size: medium.
Note: this is currently in progress here.
Allow spectators to be more involved with the contest by giving them the possibility of looking at the internal workings of the participants' submissions. During the evaluation process, CMS will record in the submission one or more opaque pieces of data, which will be passed to RWS; on its side, RWS will have a plugin able to translate each piece of data back into the answer that the contestant's submission gave for a simple input, and to show this answer to the spectators. Bonus points for implementing this for interactive tasks too and showing the spectators an animation.
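The opaque-data round trip might work roughly as follows; the JSON-over-base64 wire format is purely an assumption for illustration, chosen so that RWS's core can store and forward the payload without interpreting it.

```python
# Assumed wire format for the opaque data: a JSON object, base64-encoded so
# that everything between the evaluator and the RWS plugin treats it as a blob.
import base64
import json

def encode_payload(task, testcase, answer):
    # Called on the CMS side during evaluation.
    blob = json.dumps({"task": task, "testcase": testcase, "answer": answer})
    return base64.b64encode(blob.encode("utf-8")).decode("ascii")

def decode_payload(opaque):
    # Called by the RWS plugin to recover the contestant's answer for display.
    return json.loads(base64.b64decode(opaque))
```

Only the plugin needs to understand the format, so different task types can ship different payloads (and, for interactive tasks, enough state to drive an animation) without touching RWS itself.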
Expected changes.
Additional requirements: contest organization experience, JavaScript.
Difficulty: high.
Size: big.
The RankingWebServer works awfully on mobile devices (mostly phones) due to their small screens and its mouse-driven interaction model. It is advisable to redesign the client to handle these use cases better, or to develop dedicated mobile applications (Android, iOS, etc.). The latter should get their data from the same server that provides the current in-browser scoreboard (i.e., communicate over the existing HTTP API). Notifications could be interesting too.
Additional requirements: JavaScript, accessible web development; possibly app development.
Difficulty: high (there will be no guidance available from the core team for native app development).
Size: big.
Important competitions may have on-site large screens, projectors or “totems” to display the live scoreboard to the audience, so they don’t have to use their laptops or phones (which wouldn’t work anyway, see above). These use-cases have a totally different set of needs than an interactive online ranking. These are best addressed by writing a new and different scoreboard client. Again, the server should remain the same: we just need a new client.
Additional requirements: JavaScript.
Difficulty: medium.
Size: small.