Tournament 2 - Results - Corrected

Good morning Challengers!

It is with sincere regret that we must inform the community that an error has been made during Tournament 2. Thank you to @ScottSchultz and @marvijo for bringing it to our attention.

As some of you might know, we embarked on a more optimized tournament structure this year, one designed to ensure that each bot played against every other bot. For this, we used a covering design. It’s a very interesting part of combinatorial mathematics, but I won’t bore you with the details.

Unfortunately, in this pursuit of fairness, we overlooked the fact that some bots might play more matches than others. This gave some bots more chances to build up their scores.

Luckily, this was not the case for Tournament 1, as the covering design for 13 participants gives every bot the same number of matches.

However, it did alter the results for Tournament 2.

Therefore, we are happy to announce the corrected Top 4 players for Tournament 2: congratulations @marvijo, @Jesse, @ScottSchultz and @japes. And our sincerest apologies to @ted and @Gilad for this mistake. We will endeavour to never make it again.

For the sake of transparency, we’ve identified the correct combinatorial structure for our tournaments going forward, namely a Balanced Incomplete Block Design. This will ensure that a) every bot plays against every other bot and b) all bots play the same number of matches. We will implement this after the finals, since the covering design for 8 players already gives every bot the same number of matches.
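
As a rough illustration of the counting behind such a design (a minimal sketch only, assuming 4-bot matches purely for the example, which may not be the actual match size):

```python
# Illustrative sketch: the basic counting conditions for a Balanced Incomplete
# Block Design with v bots, matches ("blocks") of size k, and every pair of
# bots meeting exactly lam times. These divisibility conditions are necessary,
# not sufficient, for such a design to exist.

def bibd_counts(v: int, k: int, lam: int = 1):
    """Return (r, b) = (matches per bot, total matches) if the counts work out, else None."""
    # Every bot must appear in r matches: lam * (v - 1) = r * (k - 1)
    if (lam * (v - 1)) % (k - 1) != 0:
        return None
    r = lam * (v - 1) // (k - 1)
    # The total number of matches b must satisfy b * k = v * r
    if (v * r) % k != 0:
        return None
    return r, (v * r) // k

# k = 4 is an assumed match size, used here only for illustration.
for v in (8, 13):
    for lam in (1, 2, 3):
        print(v, lam, bibd_counts(v, k=4, lam=lam))
```

Property b) corresponds to every bot playing the same number of matches r, and property a) to every pair of bots meeting in at least one (in fact exactly lam) of the b matches.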

To recalculate Tournament 2’s results, we reverted to the old way of generating matches: producing enough matches for each bot against every other bot that there are more than enough chances to face off against everyone. The reason we’re trying to improve this process is that it generates a lot of clusters in AWS, and each cluster has an associated cost. For example, we had to run 266 matches to verify Tournament 2’s results instead of the initial 31.
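
To give a sense of why that older process balloons, here is a minimal sketch (assuming 4-bot matches and purely random match generation as a stand-in for the real generator; neither detail is confirmed above):

```python
# Illustrative sketch: keep drawing random 4-bot matches until every pair of
# bots has faced off at least once, then report how many matches that took.
import random
from itertools import combinations

def matches_until_full_coverage(bots, match_size=4, seed=0):
    rng = random.Random(seed)
    uncovered = set(combinations(sorted(bots), 2))  # every pair that still has to meet
    matches = []
    while uncovered:
        match = tuple(sorted(rng.sample(bots, match_size)))
        matches.append(match)
        uncovered -= set(combinations(match, 2))  # mark this match's pairs as covered
    return matches

bots = [f"bot{i:02d}" for i in range(16)]  # hypothetical field of 16 bots
print(len(matches_until_full_coverage(bots)))  # far more matches than a tailored design needs
```

The gap between a count like that and a tailored design’s match count is where the AWS cluster cost mentioned above comes from.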

For the finalists, unfortunately, the deadline of 20 September for final bot submissions still stands due to the unmovable Comic Con date. Hopefully all the tweaks and improvements can be done in time.

We thank the community for their continued understanding and support and hope to see you at the finals at Comic Con on 28 September.


Thank you for your transparency regarding this.


For real guys???

This is not a great way to leave the tournament.

Good luck to the top 8.

We are very very sorry @Gilad - it is not a great place for us to be.

Rest assured we will definitely be much more careful going forward.


All the best to the top 8


Just a curiosity:
“we had to run 266 matches to verify Tournament 2’s results instead of the initial 31.”

That’s quite the overhead.

Curious: what would happen if you seeded the tournament with dead bots up to the nearest perfect cover?

Since each player will play these dead bots at least once for every seed,

Everyone will get an equal number of free points,
but you can use the dead bots to ensure that nobody plays more matches than anyone else.

I’m just spitballing…

Edit #2:
I’m even tempted to say: at round start, everyone starts with 0 points.

If they beat another player for the first time, +1 to their score;
if they lose against another player for the first time, -1 to their score.

If a player has already been played, ignore their presence completely.

So if player A has already played player B, that player doesn’t impact their match at all.
And even if another player plays 2 extra matches, they can only ever gain score against any given player once.
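
Roughly something like this sketch (the (winner, loser) results here are hypothetical head-to-head outcomes, however they would actually be derived from the real matches):

```python
# Sketch of a "first encounter only" scoring rule: a head-to-head result
# between two players only counts the first time that pair meets.
from collections import defaultdict

def score_first_encounters(results):
    """results: iterable of (winner, loser) head-to-head outcomes, in play order."""
    seen_pairs = set()
    scores = defaultdict(int)
    for winner, loser in results:
        pair = frozenset((winner, loser))
        if pair in seen_pairs:
            continue  # repeat matchup: ignore its result completely
        seen_pairs.add(pair)
        scores[winner] += 1
        scores[loser] -= 1
    return dict(scores)

print(score_first_encounters([("A", "B"), ("A", "C"), ("A", "B")]))
# -> {'A': 2, 'B': -1, 'C': -1}; the second A-vs-B result is ignored
```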

I think if I was building this I’d consider that approach.

Anyways, again, just spitballing.
And rewriting my actual bot scoring algorithm because my current one agitates me.

Even if we don’t see our scores, would it be possible to see our exact placements?
To encourage the lower guys to push, and the higher ones to fight?

Right now I don’t know if I’m first or 8th and it pains my heart.

I know it will all change anyway for the last event.
Just want encouragement.

But playing against “dead bots” isn’t fair @WillieTheron. I might easily come in 1st place and get 4 points if I play against a bot that doesn’t do anything.

We will release T2’s standings soon, but T1 and all the friendlies should be visible, no?

Yeah, but if the dead bots are filled in as players, the system will ensure that each player plays each dead bot exactly once.

That’s why I said to pad to the nearest perfect covering design where all players have the same number of matches.

Then it doesn’t matter if the dead bot is a free win,
because everyone gets it.

Example:

If the covering design for 12 players is uneven,
you add one dead bot so that it is “13” players.

Like 4, 7, 10, 13, 16

Then every pair only gets 1 match,

I think that’s the linear path to perfect matches,

It seems that as long as the number of opponents is divisible by 3, you don’t need to worry about extras,

So if you hit 14 players, you add two “dead bots” so you’re back to 16, even again.

Every player will play them and get free points evenly, so it’s fair.
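
Roughly this padding rule, just following the 4, 7, 10, 13, 16 pattern above (I haven’t checked it against the actual covering design maths, so treat it as a sketch):

```python
# Sketch: pad the field with "dead bots" up to the next player count where
# (players - 1) is divisible by 3, i.e. the 4, 7, 10, 13, 16, ... pattern.
def dead_bots_needed(players: int) -> int:
    padded = players
    while (padded - 1) % 3 != 0:
        padded += 1
    return padded - players

for n in (12, 13, 14):
    print(n, "->", dead_bots_needed(n), "dead bot(s)")
# 12 -> 1, 13 -> 0, 14 -> 2
```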
But I know nothing of this covering business;
I’m just following the patterns.

Trying to solve it, cause faster matches is better.

Massive kudos to the Entelect Team for their transparency, especially given the nature of the error. And I also have to say that the hard work that the team puts in, often after hours, is hugely appreciated.

I am curious, though, as to the rationale behind removing the two individuals who were initially told they were in the final. Why could there not be 10 finalists this year as a result of this unfortunate mistake?

Is it to do with prize-money/overhead restrictions? Or does the maths only work out if 8 people play? Or is it due to a sense of fairness, as they only made the finals due to a calculation error? Relatedly, maybe those who beat the two people in the second, larger tournament may feel slighted?

My concern is that this sets an uncomfortable precedent: in future, if someone is told that they’re part of the finals, and has informed their friends and family that they’re going through to Comic Con, made plans, etc., they will now be concerned that they’ll discover, a few days later, that “Oops, sorry! Maths, hey… amiright?” and have an awfully embarrassing time ahead of them. It becomes quite difficult, as at what point can one reliably believe one is through? When you’re actually on the stage?

And for Entelect, I would imagine the team would prefer their finalists to celebrate and publicise their achievement, rather than keep it on the down-low “just in case”.

It would be different if the leaderboard were visible with a caveat that the scores were still being finalised and everything was subject to change. This has happened in previous years, if I’m not mistaken. But, in my opinion, an official announcement of the finalists should carry more weight than that.

And, to be clear, I do not know Gilad and Purpose. But I can only imagine the unpleasantness of what they’re experiencing.


Good morning @PhoneticallyUnstable

Thank you for your feedback. We appreciate it.

There are several factors that influenced our decision, most of which you have identified.

We have not had this happen in the 13 years we have been running the Entelect Challenge. However, the point about the finalist announcement is valid. We will put extra checks and QA in place in 2025 to ensure this does not happen again.

Every participant will have the surety that the results are final and guaranteed.

We will also take leaderboard visibility into consideration for 2025 (with clear caveats and stipulations).

We continue to value and ask for the community’s support as we make the Entelect Challenge bigger and better in the upcoming years.