Hi
It is extremely frustrating in a Tournament to see your Bot performing poorly and to have no feedback as to why.
The Tournament generates a huge volume of data, and most of it is not useful to anybody. However, is there no way that limited feedback could be provided to each Player? For instance, would it not be possible to zip and email the first five matches that the Player lost?
How do you prevent leaking the opponent's strategy?
I agree more information would be great, but you cannot guarantee the logs would be clear of what your opponent is doing, which I assume many would not be happy with.
It would be great if you got a state map and an output, though once again… that gives away “secrets” or advantages that some players may have found.
It seems to me that if you leak a small amount of strategy at the end of each Tournament, you would make the Tournament more challenging for the top players and create an opportunity for the majority of players to improve their Bots.
I agree with thinus and had the same feeling last year. It's frustrating when you are losing and have no idea why, and thus no idea how to improve. Simply being able to watch the games would be very helpful, and I think it would make the experience a lot more educational, especially for new players. In the same breath, I also think all tournament games should be public; simply publishing the move list of every game should be easy to do, but I guess it depends on what Entellect wants to achieve with this competition.
I don't know how much my opinion counts, but that's my 20 cents, because 2 cents don't exist.
Is being able to see what your opponents do really something we should stop? In my mind, the bot's logs are private, but the bot's actions in the tournament should absolutely be public. To me, the right course of action would be to give authors unrestricted access to their own bot's logs for debugging and improvement. I might even go as far as to say people should be able to see the replay of any match in the tournament (that's how https://www.codingame.com/ does it).
Looking through hundreds of previous matches between tournaments to figure out how to improve your own bot sounds like a fair strategy to me.
Also, not all programming languages have exceptions. Since error reporting hasn't been mentioned before, even the languages that do have exceptions might not be reporting them the way that you expect. If you're planning on reporting feedback to some players on how their bots did and not to others, THAT would cause an unfair advantage. A more language-agnostic approach would be to allow logging errors to the standard error output, but that would put us right back in the previous discussion on whether giving people their logs back would leak information that shouldn't be leaked.
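To illustrate what I mean by logging errors to standard error, here is a minimal sketch in Python. The entry point, error, and wiring are all made up for the example; the real engine harness will look different:

```python
import sys
import traceback

def run_bot():
    # Placeholder for the actual decision logic; raises so the
    # example has an error to report.
    raise RuntimeError("example failure while choosing a move")

def main():
    try:
        run_bot()
    except Exception:
        # Write the full stack trace to standard error. A tournament
        # runner could hand this stream back to the bot's author
        # without it containing anything about the opponent.
        traceback.print_exc(file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    main()
```

The point is only that stderr works the same whether or not the language has exceptions, so everyone gets the same feedback channel.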
I feel we need to take the actual game into account as well.
The game is very well balanced, so it's 100% AI… What this means is that even if another player sees a bot, it might not even help them. My current bot (and I hope I finish version 3 before the first deadline) is rather random in the way it builds.
Now for a case study.
2016's Bomberman was also pure AI… So could replays provide an unfair tactic leak? Well, I matched with Ralf a number of times in the practice matches, and as a result I managed to get replays of both of us. My bot always seemed to be one step behind his; or rather, his “saw” a bit further than mine. So using the replays I calibrated my bot, added depth, and built my bot against his. Then I worked on my bot until it gained the advantage over his. In doing this I used the few match replays I had as a “reference bot”.
So information is dangerous.
Now let's go to last year. It was highly psychological, because you needed to account for ship placements. A perfect bot would always attack the middle first, yet all of the top 8 players went for the sides first. The reasoning was that the perfect bot would search there last. So in last year's case we played the human and not the AI.
In this case leaking any strategy would be bad. Also, in a round-robin format where you play all players, you could easily look at what the top players are doing. I do not necessarily see this as a good thing; I do not want someone else to copy my homework, even going by the whole “Just because you can see it does not mean you can actually write it”. But some of the contestants are in a league of their own. If they see your bot once, they will likely know your tactics:
Row Based
Tile Based
Energy Focus
Attack Focus
or one of those special bots that will come to the party.
However, I still do not see any harm in this year's info being available, because of the expansions.
While others are still hitting their heads against the first version, there will be more buildings to account for, which will already be an added load.
And the map will change: a 4x4 map, for example, will be a far different kind of match compared to a 10x10.
I will likely need to rebuild my bot completely after the first event.
I'm new around these parts, so please excuse my ignorance, but: what tournament? I'm guessing this was some kind of semi-informal test-your-bots tournament that was probably announced via email to members, since I find no reference to it on the site or on these forums? I totally feel like I missed out unnecessarily.
Also, I just wanted to throw in some suggestions based on what I've read here. How about considering:
A) Providing the game state only every X rounds. Making this very sparse, like every 15 rounds, could make it very hard to decipher your opponent's strategy while still giving you at least a glimpse of what happened in the game (see the sketch after this list).
B) Providing the end-game stats, like what all the game variables were, how many rounds the game lasted, and the players' health at the end. Maybe your bot performs worse on small maps or on big maps, and knowing these end-game stats might help you focus on the right areas.
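To make suggestion A) a bit more concrete, here is a rough sketch of how the organisers could thin a replay down to every 15th round before sharing it. The replay structure, field names, and interval are purely hypothetical:

```python
import json

SNAPSHOT_INTERVAL = 15  # hypothetical: share only every 15th round

def sparse_replay(rounds):
    # `rounds` is assumed to be a list of per-round game-state dicts
    # in whatever format the engine writes. Keeping only every
    # SNAPSHOT_INTERVAL-th one leaves a rough picture of the match
    # while hiding most of the opponent's turn-by-turn decisions.
    return [state for i, state in enumerate(rounds)
            if i % SNAPSHOT_INTERVAL == 0]

if __name__ == "__main__":
    # Toy data: fake round numbers stand in for real game states.
    full_match = [{"round": i} for i in range(60)]
    print(json.dumps(sparse_replay(full_match)))
```

Even just those sparse snapshots plus the end-game stats from B) would tell you far more than a bare win/loss result does today.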