Answer by Martin Ender for Loopholes that are forbidden by default

Optimising for the given test cases

This applies to challenges scored on performance and similar contests, where you write some code that is measured by a criterion like runtime or the size of your output (e.g. in compression challenges). These usually rely on a finite set of test cases, because you have to measure the metric somehow.

It's not in the spirit of such challenges if an answer optimises exclusively for those test cases (e.g. by hardcoding them, which would typically let you compress them to a single byte or run in milliseconds) while performing much worse on general or random input.
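As an illustration, here is a minimal Python sketch of the difference; the test cases, inputs, and function names are made up for this example and don't come from any particular challenge:

```python
import zlib

# Hypothetical published test cases, mapped to one-byte "compressed" outputs.
KNOWN_TEST_CASES = {
    "abcabcabcabc": b"\x01",
    "aaaaaaaaaaaa": b"\x02",
}

def hardcoded_compress(text: str) -> bytes:
    # Scores spectacularly on the published test cases (one byte each),
    # but does nothing useful for any other input -- the loophole in question.
    return KNOWN_TEST_CASES.get(text, text.encode())

def general_compress(text: str) -> bytes:
    # A genuine (if simple) algorithm: its score on the test cases is
    # representative of its behaviour on arbitrary input.
    return zlib.compress(text.encode())

if __name__ == "__main__":
    for s in ["abcabcabcabc", "some unseen input"]:
        print(len(hardcoded_compress(s)), len(general_compress(s)))
```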

For variable-sized input there is no way around test cases (one could score by something like big-O class, but that tends not to be precise enough to distinguish submissions, and it requires proofs rather than just running the code). So the expected code of conduct is to choose an algorithm for which the test cases are actually representative of the implementation's performance.

This also means that if you optimise your algorithm to perform well on the majority of cases (and worse on a handful of edge cases), and the test cases happen to be drawn from that majority, that's perfectly fine. Optimising for a minority of cases that happens to include one or more of the test cases, however, is not.
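One rough way to sanity-check representativeness is to time a submission on the given test cases and on random inputs of comparable size. The following Python sketch assumes a placeholder submission `solve` and made-up test cases:

```python
import random
import string
import time

def solve(s: str) -> int:
    # Placeholder submission: counts distinct characters.
    return len(set(s))

def total_time(inputs):
    # Total wall-clock time to run the submission on every input.
    start = time.perf_counter()
    for s in inputs:
        solve(s)
    return time.perf_counter() - start

# Assumed test cases; a real challenge would supply its own.
given_cases = ["hello world", "mississippi", "abcdefghij"]
random_cases = ["".join(random.choices(string.ascii_lowercase, k=len(s)))
                for s in given_cases]

print("given test cases:", total_time(given_cases))
print("random inputs   :", total_time(random_cases))
```

If the two timings diverge wildly, that is a hint the submission is tuned to the published cases rather than to the problem.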

