How do we make our bots better?

I agree with Malman’s method…

I would very much like to see how Ralf handles his matches, along with the other top 10 players.

But one match would give me everything I need to copy their strategy…

With the top bots it's like that:

  • I see your energy tactic, your offensive tactic and your defensive tactic and I can pretty much
    pick up on your “score” logic…
  • If I can analyze it I can build it.

Regardless, I feel that part of the bot-building experience is seeing the strategy and learning how to build it; scratching your head and searching for those game-changing ideas.

If, for example, I ran 1000 tests and came to the conclusion that defensive buildings are useless (stupid example), ignored them, and won the event, then everyone else would see that I never built defensive buildings and stop building them too. Sure, I might be better at attacking than they are, but them copying my homework would not be cool. There is a little bit of human in every bot, and I do not like the idea of that part getting shared.

I personally would opt out of sharing logs. As much as I could gain, I never liked the idea of someone seeing my product before its time…

I personally loved the challenge system we had in Bomberman, where you had ±5 tokens with which you could challenge another player and see the replay from that game. If the other person thinks you will copy their strategy, they can just reject the challenge. It gives you a limited platform to improve your bot without giving your source to anybody, while you still get valuable feedback from your bot logs and replays.

Hi guys,

We are hearing a lot of frustration around this issue and so we are in the planning phases of getting an Opt-In challenge system built so that you guys can challenge anyone else who has opted in.

As I said, we are still planning this and trying to find ways of “detecting” or otherwise mitigating what @linjoehan described, where someone might upload a ‘do-nothing’ bot and face off against people, gathering info as they go.

That being said, we will not be releasing the Tournament logs, for various reasons.

We will keep you guys apprised of the progress of this feature, and if you have any ideas or suggestions, let us know and we will consider them.

5 Likes

Great to hear you are working on something for this!

What about, rather than a challenge system, running an opt-in, ongoing tournament, with the replay files available? So players who wanted to could join it, and it would play a round robin tournament with all the bots that were submitted to it, and they could see the replay files of all their bots’ matches.

By ongoing, I mean that whenever a player submits an updated bot to this tournament, all their games are replayed and the leaderboard recalculated (though the replay files of the games their previous bots played would still have to be available for other players to view). There would be a limit of one bot update per player per day, or per three days or so, to save your server, plus whatever non-exploitation ideas you come up with.
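The recompute step could be as simple as the following sketch (Python; `play_match` and the bot names are hypothetical stand-ins for the real match runner, which would only need to re-run the pairs involving the freshly updated bot):

```python
from collections import defaultdict
from itertools import combinations

def recompute_leaderboard(bots, play_match):
    # Replay a full round robin and rebuild the standings from scratch.
    # play_match(a, b) is a hypothetical callable that runs one match
    # and returns the winner's name; old replay files would stay on
    # disk for everyone to view.
    wins = defaultdict(int)
    for a, b in combinations(bots, 2):
        wins[play_match(a, b)] += 1
    return sorted(bots, key=lambda bot: wins[bot], reverse=True)

# Toy usage with a stub that always picks the alphabetically first bot.
print(recompute_leaderboard(["ada", "bert", "carl"], lambda a, b: min(a, b)))
```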

I think doing well in this kind of tournament would be a fun challenge by itself, as well as a way for people to get more insight into how to improve their bots.

2 Likes

Also, since we’re doing ideas: I think it would be interesting if all the replay files from the final round robin tournament were made available for all bots. At that point, there are no more tournaments to enter, so giving away strategy doesn’t matter, and I think a lot of people would like to be able to check out the showdowns between the best bots. I know it’s a lot of data, but I did a quick test and compressing a match directory tree with bzip2 gives around a 10x reduction in file size over ZIP! So 2360 matches could fit in around 150MB!

1 Like

That’s too much information.

I think the only things that might be released should be the command.txt files, or the game state. Logs also contain outputs players might be using to debug, which should not be public. Commands are small, very small compared to full states, though it will be a little work for players to go through them.

By log files I meant the game state for each round; I wasn’t thinking about the bot outputs. I don’t think people should have access to their opponents’ bot outputs.

I suppose the ultimate compression for a deterministic game like this would be to store each game history as a list of the moves in each round, then people can regenerate the game state using the game engine. This could even be a mode of the game engine (play a game and generate all the normal files from two lists of moves). But then players wouldn’t be able to see their bots’ own logs, and maybe it isn’t worth all that fuss to save a few hundred megs of storage (S3 is cheap!)
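As a rough sketch of that idea, with a hypothetical deterministic `engine_step` function standing in for whatever the real game engine exposes:

```python
import json

def save_history(history, path):
    # history: list of (command_a, command_b) pairs, one per round.
    with open(path, "w") as f:
        json.dump(history, f)

def regenerate(path, engine_step, initial_state):
    # engine_step is a hypothetical deterministic function mapping
    # (state, command_a, command_b) -> next state. Because the game is
    # deterministic, replaying the stored commands reproduces every
    # intermediate game state exactly, so only the moves need storing.
    with open(path) as f:
        history = json.load(f)
    states = [initial_state]
    for cmd_a, cmd_b in history:
        states.append(engine_step(states[-1], cmd_a, cmd_b))
    return states
```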

1 Like

I disagree here. This could be discouraging if one bot finds a strategy that wins 90% of the time. I would say making all matches available after the finals would be better. Otherwise: just now I crack an unbeatable strategy :stuck_out_tongue: No, but seriously, some mystery is also cool, so that nobody knows. I mean, if you’re like me it doesn’t matter anyway, because I won’t check the logs anyway, just to keep things a surprise, so honestly I don’t feel this point matters too much. But not after the first 2 events.

Last year I pulled my bot from the clan games to not give away strategy…

I don’t know about this. This might seem negative, but seeing as the issue was raised, I might as well raise my thoughts too.

Last year you didn’t have much of an effect on the opponent directly. The only direct ways you could influence your opponent were the placement of your ships, and destroying specific ships to disable abilities. But because you couldn’t really know where they were, it was mostly an efficiency AI.
This year it is a much more real-time influence: you have to directly act and react to what your opponent does.

My bot could fairly efficiently defeat the reference bot. I could eventually run any number of matches and have a 100% winrate.

Now, I didn’t place very high, which, although disappointing, isn’t a big problem. But there is no way for me to know why.
I can’t analyse my bot’s behavior against its purpose, because I can’t see the direct results of its choices, only the final result of its performance, which effectively tells me nothing about its faults or weaknesses.

What’s the point of building an AI bot if you can’t collect outcome data against anything except a single reference bot and itself? Isn’t the point of AI to help it evolve and improve? To do that you have to be able to collect real data on its purpose (defeating another bot), to analyse and determine its mistakes, wrong choices and weaknesses, because you have a direct influence on your opponent’s performance.

How do we make our bots better?

3 Likes

Build a bot that beats your current AI

A fresh one perhaps?

You beat me to it. Now, I know not everyone has that kind of time, but I currently have two different active “bots”, and I build them up against each other.

That does not mean I entered both, but one is slightly better, though they each follow a different strategy. So I try to build different bots with different strategies; if one bot completely thrashes the older one, I archive it. Currently I have 2 builds left that are close to each other. One has reached the point of “I have no idea how I will better this”; the other is a work in progress, but there’s potential.

The problem with the top-tier bots is that the moment your bot slips up (skips a move, or makes a move that causes it to trade, say, 2 attack buildings for 1 attack building), you’re at a disadvantage. I am 90% sure that if you had to skip 2 turns after round 20 against Ralf, Justin or possibly Andre, you would lose. Against me not so much, because my existing bot does not go for the game-ending kill; sometimes it plays with the opponent. So one of my problems is that I give my opponent chances to get back into the game. Cool against the lower bots for score, not as cool against the top bots, which might use that “wasted turn” of mine to turn the match.

Now, on to finding possible code improvements. Here is what I do: I play my bot against itself and view the matches in the replay viewer. Then I analyse them move for move, looking for whether there would have been a better move, and checking why the bot decided not to make it. This has helped me a bit in the past. For example, using chess: “The perfect move is knight to A5.” Oh look, it just moved a pawn. Why did it do that? Was its reasoning better than mine, or was there a flaw in its logic?
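One way to mechanise that move-for-move check is to dump the bot’s own ranking of candidate moves each round. A minimal sketch, with a hypothetical state, move list and evaluation function:

```python
def explain_choice(state, moves, score):
    # Rank every legal move with the bot's own evaluation function and
    # print the ranking, so that when a replay shows a surprising move
    # you can see exactly why the bot preferred it over the move you
    # expected: better reasoning, or a flaw in the logic.
    ranked = sorted(moves, key=lambda m: score(state, m), reverse=True)
    for rank, move in enumerate(ranked, start=1):
        print(f"{rank:2d}. {move}: {score(state, move):.2f}")
    return ranked[0]

# Toy usage with a hypothetical evaluation function.
weights = {"build attack": 0.7, "build defence": 0.4, "build energy": 0.9}
explain_choice({}, list(weights), lambda s, m: weights[m])
```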

This so-called observation is what led to most of my “ideas”. I am already working on the bot that will break my existing entry; if I had my way I would finish it tonight. But then again, my “reference” bot ended third, so there’s a rather high bar. If I do not improve, I can lose to any other player in the top 8 with a 50/50 chance, and to me that is not good enough. And nothing is stopping them from improving; every single bot can still develop.

My one and only strategy in developing my bot is to keep playing it against itself and trying to “see” a better strategy in the replays. This is the trick, I feel: you cannot implement a tactic that you cannot see, and you cannot see a tactic if you are not actually looking for one. By devising strategies against your “own” bot you are often able to build one that’s better, and if you repeat this process enough, eventually your bot will get better. Often the one who does this most will also be the one with the better bot.

Don’t know if this helps, but I’m trying not to deviate from the “How do I build a better bot” question.

3 Likes

I personally find that the theme here tends to be that leading players want to keep everything hidden, which is fair, as it gives them a better chance of winning. However, that does mean it’s not beginner-friendly.

The first time I played was Pacman, as a total beginner. It was totally fun and a decent learning experience; however, for me a lot was lost because nothing was released afterwards. I haven’t really bothered since, as I always felt I wasn’t good enough. I entered this year because I feel I’m a bit better and might have a chance now.

Contrast that with my experience on other platforms like HackerEarth or Codingame, where I was able to view replays, ask questions about my approach and get some advice, even from the leaders. At the end, the top players would post a postmortem about how they approached it. Doing those as a beginner, I felt engaged the whole time, with no discouragement. I always had something to look at to get ideas from, and when out of ideas I could just ask better players what terms to search for, etc.

That’s my rant for today.

Also, for those who have time, codingame.com has a competition starting next week. Try it; I’ll be there :wink:

3 Likes

I agree that it is a frustration not having the replays. Last year I sucked and wanted them. This year I am doing well and want them even more. However…

We are all operating under the same information as the competition currently stands. Any information that gets released needs to be disclosed to everyone. Which is why I am against an opt-in style system. It has to be all or nothing.

Codingame is great, and each time I watch a replay I learn something new. I also see strategies I tend to replicate; it is like watching replays of good sportsmen, we do tend to imitate. They have a massive pool of participants relative to this, though, and when an exploit is found, everyone jumps on it. In addition, they have multiple reference bots (one for each tier) and you are not guaranteed to play against specific people.

I think credit should go to the people who found the exploit (if there is one). They deserve the lead for finding it in the first place and should not have to compete against an advantage they discovered just because someone copied them.

I do agree that a write-up after the competition, hosted with Entelect, is a great way to share the learning though. Last year one had to hunt for information after the competition to find out whether your suspicions were true or not. A kind of written question-and-answer interview with the top players.

I only came 38th in the first round but I’m happy to share the code for my bot (C#) https://github.com/aodendaal/entelect-challenge-bot-2018

2 Likes

I agree Codingame is fun; I managed to snatch a shirt there last year, but I only did 2 comps. I just don’t always have the time, and I prefer a long-term AI to the 10-day rush. With Entelect I know I have time to really sit.

I feel community is important.

I am all for releasing all replays for all the events after the challenge, but I honestly believe that by that time most people will not even bother, because the challenge would be “over”. I know I never rewatch anything.

Entelect is sadly the only AI I actually write.

I am willing to do a write-up and share my source for the past 2 years where I was in the top 8, as well as this year if I can keep up my current momentum: break down my logic and style, if it could help anyone.

But I personally am only comfortable with these kinds of disclosures afterwards. During the challenge, however, I generally keep my tactics to myself. Some would call this selfish. After this challenge, if I do well, I will do a complete write-up. (Last year more than half of the players shared their bots and strategies with the world after the top 8 announcement.)

For what it’s worth, I’m in favour of match replays being public. I’m a big fan of the Codingame format, with anyone able to see any match.

I don’t think it’s a trivial thing to watch how another bot plays and figure out how to beat it, so if you manage to make your bot better that way it’s fantastic!

Like Willie, I’m also planning a write-up, but only after the competition is done :joy:

1 Like

In my opinion, the biggest problem here is that this year, again, the game the bots are playing is a directly reactive, real-time game.
It feels like the way this format works suppresses the competition a bit. I’m simply programming to

  1. Beat the reference bot
  2. And do it as efficiently as I can.

I suppose the win is worth more than it is on Codingame, which would push the format towards hiding and closed-box AI to increase the chances of winning. But for someone like me that makes it very hard to win, and it is somewhat pot luck whether my strategy works against other players’ strategies.

I, for example, don’t use any fancy stuff like neural networks or genetics.
On Codingame my bots are generally built from manually analysed matches.
In fact, I find that most of my time during the Codingame challenges is spent fixing incorrect decisions my bots make against specific decisions by my opponents, rather than trying to copy their strategies.

My thought process (for example on Codingame) usually runs the programming cycle as:

  1. Think of the most efficient general strategy to follow.
  2. Code a bot that does that.
  3. Check the rank that is reachable.
  4. Start watching the matches it loses, analysing the reasons.
  5. If the reason is my own bot doing something stupid, resolve it.
  6. If the reason is the opponent in that specific match beating me, try to figure out how to adjust for it to improve the chances of winning.
  7. Eventually reach a point where the bot is doing everything right, but loses because it is less efficient.
  8. The rest of the time is generally spent trying to improve efficient decision making to raise the chances of winning.

Unfortunately, for my Entelect bot it was:

  1. Think of the most efficient general strategy to follow.
  2. Code a bot that does that.
  3. Check whether I can beat the reference bot
  4. Analyse losses against the reference bot.
  5. If the reason is my own bot doing something stupid, resolve it.
  6. If the reason is the reference bot beating me, try to figure out how to adjust for it to improve the chances of winning.
  7. Improve efficiency.
  8. Enter bot and hope it is good enough.
  9. Repeat for each round.

Alas, this massive rant may be in rather bad taste, and I do want to mention that despite all of this I still enjoy entering the Entelect challenges each year, because it’s fun to try to build bots that do all of these different things for the various game modes. Although I don’t stand much of a chance of winning, it is a great way to get you thinking about programming in a different way, and I always learn new things.

This is just thinking out loud about how the challenge is formulated and what it might lack (in my opinion).

I will stop now. Well done to the top 10 players.

The process I follow (for what it is worth; I still think I was lucky), with a rough sketch in code after the list:

  • Code a bot A to beat the reference bot
  • Code another bot B to beat A (trying something different)
  • Test that B does not lose to the reference bot (this happens more often than you think)
  • Code bot C to beat B
  • Test C against B, A and the reference bot
  • Rinse & repeat…
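A rough sketch of that ladder in code (`play_match` is a hypothetical stand-in for however you run a head-to-head match):

```python
def ladder_check(new_bot, older_bots, reference_bot, play_match, games=50):
    # Hypothetical regression check: before promoting a new bot, verify
    # it beats the reference bot and every archived predecessor more
    # often than not. play_match(a, b) returns True when bot a wins.
    for rival in [reference_bot] + older_bots:
        score = sum(play_match(new_bot, rival) for _ in range(games))
        print(f"{new_bot} vs {rival}: {score}/{games}")
        if score <= games // 2:
            return False  # regression against this rival
    return True
```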

I got to B before this round. Every bot I write ends up with a strength and a weakness.
The idea is to beat all the different strengths of the previous bots.

To make things simpler I wrote a core-ai which A, B, C, D… all use, to speed along future bot development. As an aside, I am using a behavior tree.
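For anyone unfamiliar with the term, a behavior tree composes simple condition and action nodes into priority-ordered logic. A toy sketch, not the actual core-ai; the leaves are hypothetical:

```python
from typing import Callable, Dict, List

# Minimal behavior-tree sketch. A node takes the game state and returns
# True on success. A sequence succeeds only if all children succeed;
# a selector succeeds on the first child that does.
Node = Callable[[Dict], bool]

def sequence(children: List[Node]) -> Node:
    return lambda state: all(child(state) for child in children)

def selector(children: List[Node]) -> Node:
    return lambda state: any(child(state) for child in children)

# Hypothetical leaves, for illustration only.
def under_attack(state): return state.get("incoming", 0) > 0
def build_defence(state): print("build defence"); return True
def build_attack(state): print("build attack"); return True

root = selector([
    sequence([under_attack, build_defence]),  # defend when threatened
    build_attack,                             # otherwise attack
])

root({"incoming": 1})  # prints "build defence"
```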

1 Like

That’s pretty much what I also do :blush: In the spirit of advancing the bot standards, I would welcome the release of all the matches. It would allow us to make stronger bots faster.

I agree with @avanderw.

I am really hyped to get into this competition.

The above strategy is a pretty good way to go…

So far I have 16 different versions of my bot, and I keep improving as I go.
I have a copy script that copies the latest version to each battle folder and plays it out against all my old versions.

I keep all the code for my old versions, and just duplicate the current code whenever I am happy with the current version, just in case I lose my bot somehow; then I can just rebuild it.

If you guys want the script, or the basics of how I do it, I will be more than happy to share it. (Just not my code :slight_smile: )
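Not anyone’s actual script, but a minimal sketch of that kind of copy-and-battle workflow could look like this (the paths and runner command are hypothetical):

```python
import itertools
import shutil
import subprocess
from pathlib import Path

VERSIONS = Path("versions")   # hypothetical: one folder per bot version
BATTLES = Path("battles")     # hypothetical staging area
RUNNER = ["./run-match.sh"]   # hypothetical match-runner command

def stage(version: Path) -> Path:
    # Copy one bot version into its own battle folder.
    dest = BATTLES / version.name
    if dest.exists():
        shutil.rmtree(dest)
    shutil.copytree(version, dest)
    return dest

staged = [stage(v) for v in sorted(VERSIONS.iterdir()) if v.is_dir()]

# Play every version against every other and let the runner record results.
for a, b in itertools.combinations(staged, 2):
    print(f"{a.name} vs {b.name}")
    subprocess.run(RUNNER + [str(a), str(b)], check=True)
```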