With the Box Carting tournament ongoing, tell me about your bot.

What was your approach? What will you do differently? What do you regret?

My entry bot is named sleeko. It is a simple feed-forward neural net trained to be as greedy as possible. It doesn’t appear to care about the opponent. In the two games so far it doesn’t seem to be concerned with turning or spending boosts, only hitting mud. :man_facepalming:

I got sleeko to a point where it can compete, but more importantly it will be used to train further bots. I’ll be making a few modifications to improve it slightly, making it more competitive against future bots. Recurrence is coming, but that requires longer training times than the 24 hours I gave sleeko.

Good luck and thanks for the fun!

I’ll tell you after the finals :smiley:


I think I will wait for Round 2 to really comment…

Like, because we only see 20 tiles, it is just so hard to account for future states. The best move now is not always the best move once new tiles come into view.

Secondly, I get many cases where I do well on one seed and manage to win, but the same tuning throws off other maps (because one has a heavy focus on mud while another is more open).

So all I am saying is…

I regret it all, I would change it all, but at the same time I regret nothing and would want to keep things the same.

Like, because we only see 20 tiles, it is just so hard to account for future states. The best move now is not always the best move once new tiles come into view.

I’m not sure how you can penalize a decision that turns out to be a bad one. The bot sees what it sees, and can’t make a better decision without knowing what lies ahead. But it is technically possible, though difficult, to predict the future blocks from the previous blocks. There might just be enough information at the start of the game to derive the seed. Then you could generate the full map and find the optimal route, responding to the opponent as and when.
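A minimal sketch of what that idea would look like, assuming the engine derives the map deterministically from a PRNG seed. `generate_map`, the tile names, and the tiny seed range here are all hypothetical stand-ins, not the real engine:

```python
import random

# Hypothetical stand-in for the engine's map generator: tiles derived
# deterministically from a seed. The real tile set and PRNG are unknown.
def generate_map(seed, length):
    rng = random.Random(seed)
    return [rng.choice(["EMPTY", "MUD", "OIL_SPILL", "BOOST"]) for _ in range(length)]

def recover_seed(observed_prefix, seed_space):
    # Brute force: find a seed whose map starts with the tiles we've seen.
    # Feasible for a toy range, hopeless for 2^64 seeds.
    n = len(observed_prefix)
    for seed in seed_space:
        if generate_map(seed, n) == observed_prefix:
            return seed
    return None

secret = 4242                                 # pretend this is the match seed
prefix = generate_map(secret, 20)             # the 20 tiles we can actually see
found = recover_seed(prefix, range(10_000))   # toy search space only
print(found)  # the secret seed, barring a freak collision
```

With a realistic 64-bit seed space this brute force is exactly the "few years" problem mentioned below; the toy only shows the shape of the attack.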

I don't think this would be legal, and it should not be. I feel digging into the test harness and reverse-engineering the map generation from the seed is unfair play (as it relies on “inside info”), so to speak.

If updates are extreme, maybe they give us a view of 30…

But it is an interesting idea. Hopefully they run the seed through something like SHA-256 hashing before generation to avoid this kind of strategy, cause that would kill me. :sweat_smile:

By difficult I mean very, very difficult. Like it’ll take maybe a few years to do. SecureRandom is very good at producing random-looking numbers.

Even with transformation of the seed, there is still a 1:1 mapping of a seed to the map that seed produces. But with 2^64 possible seeds, that is 1.84E19 possible maps. There is enough information in a 25x4 section of the map to pin down the seed, so you’d only need about 5 moves before you could predict the rest of the map.
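A back-of-envelope check of those numbers, assuming (hypothetically) about 5 distinct tile types:

```python
import math

SEED_BITS = 64            # 64-bit seed, as discussed above
TILE_TYPES = 5            # assumed count of distinct tile types (hypothetical)
SECTION_TILES = 25 * 4    # the 25x4 section mentioned above

seeds = 2 ** SEED_BITS
section_bits = SECTION_TILES * math.log2(TILE_TYPES)

print(f"{seeds:.2e}")       # 1.84e+19 possible seeds, hence maps
print(round(section_bits))  # 232 bits of map information, far more than 64
```

So the section carries more than enough information to identify the seed uniquely; the hard part is inverting the generator, not the information content.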

It’s an amusing thought, but not really possible.

If you put it that way, I guess the person who would put up with that deserves it.

Still anti anything that does this myself, but it is an amusing thought, as you say.

Ha, I had this thought too but I think you’ll run into the pigeonhole principle in trying to get from the map to the output of the PRNG.


Ok, so biggest mess-up that I’ll admit to right now: It is the opposite of Zoolander - it can’t turn right. So, if it starts in the left lane it’ll stay in that lane. If it starts in the right lane, it’ll eventually move over to the left lane. :man_facepalming: :man_facepalming: :man_facepalming: :man_facepalming: :man_facepalming:

See you guys next tournament!

Yeah sorry man, trade secrets, NDA, lawyers, AI holding my kids hostage … so no can do. Not yet.

But I’ll probably dump everything to a public GitHub repo when it’s all done and settled, for whatever that might be worth :slight_smile:

I will tell you this though - I have no fancy ML in there. My attempts at that hit some portal-related issues (build artifact size apparently broke the portal), and besides, the initial results were not very encouraging. So I dropped that and focused on what was actually producing results.

I will tell you that I (again) wasted too much time on stuff that didn’t really work (to scratch that “itch”).
On the other hand, if I did not go through all that, would I have reached something that actually sorta works? Can’t say for sure.

Currently, I know there are still subtle problems (I still need to look at the logs where I lost, but the day job demands attention right now, so it will need to wait).
The annoying thing is, I’m pretty sure at this point that it is not actual bugs, per se, but more fundamental problems with the approach, which are harder to track down and “fix”.

Especially since any change can only really be tested by running many games (more than 10) against a reasonably OK opponent, in order to be sure the random map aspect does not negate the tweak.
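A rough rule of thumb for how many games "many" is, treating each game as a coin flip with some true win probability. This is a standard binomial standard-error estimate, nothing from the engine itself:

```python
import math

def games_needed(delta, p=0.5, z=1.96):
    # Games required before a true win-rate shift of `delta` stands out
    # from random-map noise at roughly 95% confidence.
    return math.ceil((z * math.sqrt(p * (1 - p)) / delta) ** 2)

print(games_needed(0.05))   # 385 games to trust a 5-point win-rate bump
print(games_needed(0.20))   # 25 games if the improvement is large
```

Which is why 10 games rarely tells you whether a small tweak actually helped.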

Right, well, good luck to everybody for the next round.

I am happy with my car AI performance.
I coded a car full of IF-THEN-ELSE statements. I used a bit of game tree analysis (a modified minimax algorithm), but most of the logic in the decision making was programming how my brain thought it out.
I did experiment with some deep learning ML, but progress was way too slow and it did not come close to my own if-then-else bots (yet). I have spent a lot of time in XLS analysing complete matches and then working out how to improve.
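For anyone curious about the game-tree part, here is a plain (unmodified) textbook minimax sketch over a toy tree; the actual bot's state representation and scoring are of course not shown here:

```python
# Leaves are heuristic scores; inner lists are choice points.
def minimax(node, maximizing):
    if not isinstance(node, list):
        return node  # leaf: return its score
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two plies: we pick a branch, then the opponent picks the worst leaf for us.
tree = [[3, 12], [2, 8], [14, 1]]
print(minimax(tree, True))  # 3
```

The "modified" part in practice usually means pruning branches or replacing one side's choices with assumptions about the opponent.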

In the first tournament I managed to beat a couple of bots that are currently in the top 8, and I noticed that some of my losses were very close calls - I lost by the hair of my chinny-chin-bumber or something like that. In 2 of the games that I lost, I crossed the finish line at the same time and speed as my opponent, but my score was not good enough. Will have to work on that.


Given all the issues that were there when the engine was released (bugs/incomplete documentation), I decided to do the minimum of work. I made a hacky engine with which I look 3 moves ahead and choose the best option. I don’t consider the opponent, oil powerups, or score at all. The only allowed actions are left, right, accelerate, and boost. I don’t even consider collisions with the opponent. Whenever a rule was unclear or difficult to implement, I went for a simpler implementation.
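That brute-force 3-move lookahead can be sketched like this; `simulate` and `evaluate` are placeholders for whatever hacky engine you have, and the action names are assumptions rather than the real command set:

```python
from itertools import product

ACTIONS = ["LEFT", "RIGHT", "ACCELERATE", "BOOST"]  # assumed action names

def best_plan(state, simulate, evaluate, depth=3):
    # Enumerate every depth-long action sequence, simulate it, keep the best.
    # simulate(state, action) -> next state; evaluate(state) -> number.
    best_score, best_seq = float("-inf"), None
    for seq in product(ACTIONS, repeat=depth):
        s = state
        for action in seq:
            s = simulate(s, action)
        score = evaluate(s)
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq[0], best_seq  # play the first action, replan next tick

# Toy dynamics: state is just a position, boost moves furthest.
step = lambda pos, a: pos + {"LEFT": 0, "RIGHT": 0, "ACCELERATE": 1, "BOOST": 3}[a]
print(best_plan(0, step, lambda pos: pos))
```

With 4 actions and depth 3 that is only 64 sequences per turn, which is why ignoring the opponent keeps it cheap.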

I’m sharing these details in the hope that the next round will make such a simple approach useless.

What I would change is to drop oil when not turning and at max speed.

There’s one aspect of having an opponent that I haven’t yet been able to fully replicate in my simulation environment: they change the environment. I suspect the winner of the finals is going to be whoever has a handle on what their opposition is doing, rather than whoever has raw speed. Speed alone might get you into the finals, though. RNGesus can be mitigated by testing with more seeds and map generation settings.

AI is just a billion dynamic if-then statements :laughing:

A deep model is my ultimate goal, but I figure that final training run will take weeks.

If yours is stig, then congrats on placing 2nd.

I had a forced oil drop in my output validation if nothing else was being done. It made sense to earn a few extra points for no cost, and for these round robins score looks very valuable.
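A sketch of that kind of output-validation fallback; the command names here are made up for illustration, not the real engine's API:

```python
# Assumed command names, not the actual engine commands.
def validate_command(command, has_oil):
    if command == "DO_NOTHING" and has_oil:
        return "USE_OIL"  # free points instead of a wasted turn
    return command

print(validate_command("DO_NOTHING", True))   # USE_OIL
print(validate_command("ACCELERATE", True))   # ACCELERATE
```

Since the no-op carries no benefit, the substitution is strictly non-negative in score for these round robins.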


I don’t know what the winning strategy is, but one thing is clear:

“He who hath the most boost, doth winneth” - William Shakespeare