How do we make our bots better?

I agree that not having the replays is frustrating. Last year I sucked and wanted them. This year I am doing well and want them even more. However…

We are all operating on the same information as the competition currently stands. Any information that gets released needs to be disclosed to everyone, which is why I am against an opt-in style system. It has to be all or nothing.

CodinGame is great, and each time I watch a replay I learn something new. I also see strategies I tend to replicate. It is like watching replays of good sportsmen: we do tend to imitate. They have a massive pool of participants relative to this, though, and when an exploit is found, everyone jumps on it. In addition, they have multiple reference bots (one for each tier) and you are not guaranteed to play against certain people.

I think we should give credit to the people who found the exploit (if there is one). They deserve the lead for finding it in the first place and should not have to compete against their own advantage because someone copied them.

I do agree that a write-up after the competition, hosted with Entelect, is a great way to share the learning though. Last year you had to hunt for information after the competition to find out whether your suspicions were true or not. A kind of written question-and-answer interview with the top players would work well.

I only came 38th in the first round, but I’m happy to share the code for my bot (C#): https://github.com/aodendaal/entelect-challenge-bot-2018


I agree CodinGame is fun; I managed to snatch a shirt last year. But I only did two contests, as I just don’t always have the time, and I prefer a long-term AI to the 10-day rush. With Entelect I know I have time to really sit with it.

I feel community is important.

I am all for releasing all the replays for all the events after the challenge, but I honestly believe that by that time most people will not even bother, because the challenge would be “over”. I know I never rewatch anything.

Entelect is sadly the only AI I actually write.

I am willing to do a write-up and share my source for the past two years where I was in the top 8, as well as this year if I can keep up my current momentum: break down my logic and style, if it could help anyone.

But I am personally only comfortable with these kinds of disclosures afterwards. During the challenge, however, I generally keep my tactics to myself; some would call this selfish. If I do well in this challenge, I will do a complete write-up. (Last year, after the top 8 announcement, more than half of the players shared their bots and strategies with the world.)

For what it’s worth, I’m in favour of match replays being public. I’m a big fan of the CodinGame format, with anyone able to see any match.

I don’t think it’s a trivial thing to watch how another bot plays and figure out how to beat it, so if you manage to make your bot better that way it’s fantastic!

Like Willie, I’m also planning a write-up, but only after the competition is done :joy:


In my opinion, the biggest problem here is that this year, again, the game the bots are playing is a directly reactive, real-time game.
It feels like the way this format works suppresses the competition a bit. I’m simply programming to

  1. Beat the reference bot.
  2. Do it as efficiently as I can.

I suppose the win is worth more than it is on CodinGame, which would cause the competition to be more geared towards secrecy and closed-box AI to increase the chances of winning. But for someone like me that makes it very hard to win, and somewhat of a pot luck whether my strategy works against other players’ strategies.

I, for example, don’t use any fancy stuff like neural networks or genetic algorithms.
On CodinGame my bots are generally built from manually analysed matches.
In fact, I find that most of my time during CodinGame challenges is spent fixing incorrect decisions my bot makes against specific decisions by my opponents, rather than trying to copy their strategies.

My thought process (for example on CodinGame) usually runs the programming cycle as follows (a rough sketch of the triage step comes after the list):

  1. Think of the generally most efficient strategy to follow.
  2. Code a bot that does that.
  3. Check the rank that is reachable.
  4. Start to watch the matches it loses, analysing the reasons.
  5. If the reason is my own bot doing something stupid, resolve it.
  6. If the reason is that the opponent in that specific match beat me, try to figure out how to adjust for it to improve my chances of winning.
  7. Eventually reach a point where the bot is doing everything right, but loses because it is less efficient.
  8. The rest of the time is generally spent trying to improve efficient decision-making to raise the chances of winning.
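
For steps 4–6, a small triage script can help keep the losses organised. This is just a toy sketch, not tied to either platform’s real replay format: it assumes hypothetical per-match JSON summaries with a `won` flag and a list of tagged `notes`.

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical replay format: one JSON file per match with a "won"
# flag and tagged "notes". Neither CodinGame nor Entelect emits
# exactly this; adapt the loading to the real logs.
REPLAY_DIR = Path("replays")

buckets = Counter()
for path in REPLAY_DIR.glob("*.json"):
    match = json.loads(path.read_text())
    if match["won"]:
        continue  # step 4: only losses need analysing
    # Step 5 vs step 6: my own blunder, or the opponent outplaying me?
    if any(note["tag"] == "own-blunder" for note in match["notes"]):
        buckets["fix my bot"] += 1
    else:
        buckets["study the opponent"] += 1

for reason, count in buckets.most_common():
    print(f"{reason}: {count} losses")
```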

Unfortunately, for my Entelect bot it was:

  1. Think of the general most efficient strategy to follow.
  2. Code a bot that does that.
  3. Check whether I can beat the reference bot.
  4. Analyse losses against the reference bot.
  5. If the reason is my own bot doing something stupid, resolve it.
  6. If the reason is the reference bot outplaying me, try to figure out how to adjust for it to improve my chances of winning.
  7. Improve efficiency.
  8. Enter bot and hope it is good enough.
  9. Repeat for each round.

Alas, this massive rant is rather in bad taste, and I do want to mention that despite all of this I still enjoy entering the Entelect challenges each year, because it’s fun to try to build bots that do all of these different things for the various game modes. Although I don’t stand much of a chance of winning, this is a great way to get you thinking about programming in a different way, and I always learn new things.

This is just thinking out loud about how the challenge is formulated and what it might lack (in my opinion).

I will stop now. Well done to the top 10 people.

The process I follow (for what it is worth; I still think I was lucky), with a rough harness sketch after the list:

  • Code a bot A to beat the reference bot
  • Code another bot B to beat A (trying something different)
  • Test that B does not lose to the reference bot (this happens more often than you think)
  • Code bot C to beat B
  • Test C against B and A and reference bot
  • Rinse & repeat…
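
This isn’t my actual setup, just a minimal sketch of how such a ladder could be automated. The bot names are hypothetical and `run_match` is a stub; the real version would invoke the challenge’s game runner.

```python
import itertools
import random
from collections import Counter

BOTS = ["reference", "bot-a", "bot-b", "bot-c"]  # hypothetical names
GAMES_PER_PAIR = 20

def run_match(bot_a: str, bot_b: str) -> str:
    """Play one game and return the winner's name.
    Stub: replace with a call to the real game runner; a coin flip
    here just lets the harness run end to end."""
    return random.choice([bot_a, bot_b])

wins = Counter()
for a, b in itertools.combinations(BOTS, 2):
    for _ in range(GAMES_PER_PAIR):
        wins[run_match(a, b)] += 1

# A new bot only "graduates" when it beats every predecessor and
# the reference bot, not just the bot it was tuned against.
for bot, count in wins.most_common():
    print(f"{bot}: {count} wins")
```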

I got to B before this round. Every bot I write ends up with a strength and a weakness.
The idea is for each new bot to beat all the different strengths of the previous bots.

To make things simpler I wrote a core AI which A, B, C, D… all use, so as to speed up future bot development. As an aside, I am using a behaviour tree (minimal sketch below).
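
Not my core AI, of course; just a minimal illustration of the behaviour-tree idea, with hypothetical leaf actions standing in for real game-state checks.

```python
from typing import Callable, List

# A node is anything callable that returns True (success) or
# False (failure). Sequence and selector are the classic composites.
Node = Callable[[], bool]

def sequence(children: List[Node]) -> Node:
    # Succeeds only if every child succeeds; all() short-circuits
    # on the first failure, like a logical AND over the children.
    return lambda: all(child() for child in children)

def selector(children: List[Node]) -> Node:
    # Succeeds on the first child that succeeds: a logical OR.
    return lambda: any(child() for child in children)

# Hypothetical leaves; real ones would inspect the game state.
def enemy_in_range() -> bool:
    return True

def shoot() -> bool:
    print("shoot")
    return True

def move_closer() -> bool:
    print("advance")
    return True

root = selector([
    sequence([enemy_in_range, shoot]),  # attack if possible...
    move_closer,                        # ...otherwise close the gap
])

root()  # tick the tree once per game turn
```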


That’s pretty much what I also do :blush: In the spirit of advancing the standard of our bots, I would welcome the release of all the matches. It would allow us to make stronger bots faster.

I agree with @avanderw.

I am really hyped to get into this competition.

The above strategy is a pretty good way to go…

So far I have 16 different versions of my bot, and I keep improving as I go.
I have a copy script that copies the latest version into each (battle) folder and plays it out against all my old versions; a rough sketch of the idea follows below.

I keep all the code for my old versions, and if I am happy with the current version I just duplicate it, in case I somehow lose my bot and then I can just rebuild it.
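
The actual script stays private, so here is just a rough sketch of the idea, assuming a hypothetical `versions/` directory with one folder per bot version and a `battles/` directory for the pairings.

```python
import shutil
from pathlib import Path

VERSIONS = Path("versions")  # hypothetical: versions/v01 ... versions/v16
BATTLES = Path("battles")    # one folder per pairing against an old version

all_versions = sorted(VERSIONS.iterdir())
latest, older = all_versions[-1], all_versions[:-1]

for old in older:
    arena = BATTLES / f"{latest.name}_vs_{old.name}"
    if arena.exists():
        shutil.rmtree(arena)  # start each pairing from a clean slate
    shutil.copytree(latest, arena / latest.name)
    shutil.copytree(old, arena / old.name)
    # ...then launch the game runner on this pairing, e.g. via subprocess

# Backup step: keep a duplicate of the current version so the bot
# can be rebuilt if the working copy is ever lost.
shutil.copytree(latest, Path("backups") / latest.name, dirs_exist_ok=True)
```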

If you guys want the script, or the basics of how I do it, I will be more than happy to share it (just not my code :slight_smile: ).