Why so complicated? Simple example bot available?


I have taken part in a few of the Entellect Challenges over the past years, but, this year the framework is just so confusing to me.

Unless I am missing some key information, or have the wrong project at hand, the sample bots do not do anything at all other than display their position. There are no clues in them as to how to use the framework, nor can I find any information on how to use the “runner” so that I can test.

I feel so lost and frustrated… I miss the days where all we had to do was write code, in whatever framework we preferred, that simply modifies the game state file.

Anyhow, enough ranting… may I be so bold as to ask to see a working example of a bot that reads the game state and takes action based on it? I use C#, but an example in any of the supported frameworks would suffice.

From what I see, the reference bot only gets its own position on the board and prints it out; nowhere is any sort of action taken, nor is the state of the board referenced.

Maybe I am getting old, but I am confused for sure.

Well, OK, it seems all the information is in the workshop video. It would be nice to have it in text :slight_smile:

Here is the video for anyone else having the same question:


You and me both. Hope the recording helped get you started :expressionless:

Sorry about the confusion @BoerBedonnered, it has been a hectic game this year. Hopefully the Build-a-Bot video helped you out :crossed_fingers:

To be honest, I feel the same. Since the challenge moved from simple text file editing to Docker instances and SignalR, I’ve kind of lost interest. It feels like one spends more time getting the basics up and running than working on the fun part. It also seems like the engines themselves are buggier; I assume the Entelect team needs to spend more time looking at the infrastructure.

In previous years all you needed to do was basic file IO, and you could jump in and start working on your AI. The game engines were simple and solid. I see only 36 people submitted bots for round 1 this year; in previous years there were over 100. Most of the veterans who consistently used to place in the top 10 seem to have dropped out.

All due respect to the Entelect team, I know this is volunteer work that you do outside of your full-time jobs. But I think the challenge was a lot more fun and interesting before the SignalR days.

Do any other regular players feel the same as me? Shall we write a petition for next year to KISS?

Speaking as a veteran (just not one of the WINNING kind…but at least a finalist one year long ago)

For my part… the SignalR and Docker stuff does not bother me, TBH.

Full disclosure, I use docker and containers all the time in the day job (and for hobby projects etc), so I’m pretty used to that.

There was at least one competition that used XML and SOAP - don’t know if anybody remembers THAT.
But even with the horror show that is SOAP… it’s OK. The IO channel, whatever it might be, IMHO, is not really a deal breaker.

File-based I/O definitely feels simpler (for us players), but I’m pretty sure the performance penalty is heavy when you have hundreds (or more) of games to run. It also complicates things infra-wise, since you then need shared storage of some sort so that all the producers and consumers of these file artifacts can actually talk to each other.
And remember, everybody wants to see tournament results NOW NOW NOW or earlier, so doing things that move in that direction helps us all when it comes to crunch time.

What tripped me up this year was mostly just not realizing the challenge was actually up and running.
I only realized when the “build-a-bot workshop” email went out (thanks for the video link btw, still need to watch!)… and that was, it seems, 2 months after things were already open-ish, going by some historic posts here.
So for myself, I’m 2 months behind, with a lot of “life and day job” things going on.

As for engine bugs - not sure what to say. That has been, in my experience, par for the course every year I was able to compete. Like… bugs that sometimes flat-out contradicted the published rules.
Part of the fun, I would say.
If you can identify a bug, maybe create an issue on GitHub with lots of details and example game states, and then perhaps also raise a topic here linking to the GH issue?
Even better, raise a PR with a suggested fix?

If I were to raise a peeve… it would be that, in my estimation, C# bots might have a slight advantage, seeing as the engine is also written in C#. So, if they were so inclined, C# bot authors could potentially have an easier time implementing rules (for example, collision checks and resolutions) “correctly”.

So in summary, the best advice I can give: abstract your I/O parts away from the “fun” stuff, get the I/O sorted out early, and then you can focus on the fun part.
And refer to the engine source to see how the rules are REALLY implemented; don’t trust the published rules alone :stuck_out_tongue:
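To make the “abstract your I/O away” advice concrete, here is a minimal sketch (in Python for brevity; all class, method, and command names here are invented for illustration - the real state and command shapes come from the official starter bots):

```python
from abc import ABC, abstractmethod


class GameStateSource(ABC):
    """Hides the transport (SignalR, file IO, ...) from the bot logic."""

    @abstractmethod
    def next_state(self) -> dict:
        """Block until the next tick's game state arrives."""

    @abstractmethod
    def send_command(self, command: str) -> None:
        """Send the chosen action back to the engine."""


class Bot:
    """Pure decision logic: unit-testable with no engine or network."""

    def choose_action(self, state: dict) -> str:
        # Placeholder strategy; real logic would inspect the board here.
        return "UP" if state.get("tick", 0) % 2 == 0 else "DOWN"


def run(source: GameStateSource, bot: Bot, ticks: int) -> list:
    """The only loop that ever touches I/O."""
    sent = []
    for _ in range(ticks):
        state = source.next_state()
        command = bot.choose_action(state)
        source.send_command(command)
        sent.append(command)
    return sent


class FakeSource(GameStateSource):
    """In-memory stand-in, so the bot can be tested without the engine."""

    def __init__(self) -> None:
        self.tick = 0
        self.received = []

    def next_state(self) -> dict:
        state = {"tick": self.tick}
        self.tick += 1
        return state

    def send_command(self, command: str) -> None:
        self.received.append(command)
```

The payoff is the `FakeSource` at the bottom: when the transport inevitably changes between years (file IO, SOAP, SignalR, …), only one adapter class needs rewriting, and your strategy code stays testable offline.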

By the way - is it just me, or do the published rules feel especially thin this year? They just seem to raise more questions for me…


IMO, the SignalR stuff really isn’t an issue (in fact I prefer it over file IO). The EC team provides starter bots for quite a few languages, which means you can just start working on the logic part of your bot without worrying about the basic setup. I was able to reuse my SignalR interface almost as-is from last year, which actually made things a lot easier. I am also definitely in favor of the Docker images (as opposed to the old submission format, where you submitted your code directly) - it makes setting up a proper dev environment a breeze, and makes it easy to confirm that your local build will run the same in the tournament.

I feel like this year the engine was definitely more complicated and intricate than in previous years (i.e. it was really painful to get a bot that reliably works); however, I feel like that’s part of the challenge and I see no issue with it. It might be setting the bar a bit high in terms of participation (especially for first-timers), but the fact that the EC team did a workshop to explain everything was a great initiative which hopefully addresses that.

If I could raise one thing (and I say this without knowing anything about Entelect org-structure or incentives, so please forgive me if I’m barking up the wrong tree here), I think it would go a long way if the EC team didn’t have to work on the challenge only in their free time as volunteers. The engine, cloud side, comms, and docs are all massive undertakings, and having some work-hours dedicated to them might smooth out the rougher edges of the EC, which would end up making participation easier.

In any case, thanks EC team for the work you guys do :slight_smile:


All this overhead raises the bar of entry by a lot. For seasoned devs, setting up the required frameworks is a hassle rather than a problem - but it becomes a problem if that is where all the effort from EC’s side has gone.

The problem I personally have is that there is no clear documentation to be found anywhere at all, the example bots are not much good at being examples (at least compared to when EC started this challenge many years ago), and the simulator is something I have still not managed to get working.
For example, the “steal” command is referred to in many places, but nowhere is it actually explained what happens when you command the bot to “steal”.

The output of the match logs from the dashboard is also not usable without writing a separate app to interpret the data. I am all for structured data, but logs intended to be read by humans need to be readable by humans.
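For what it’s worth, the “separate app” can be tiny if the logs are line-delimited JSON. A sketch (the field names below are invented, since the real match-log schema isn’t documented here):

```python
import json


def render_log_line(raw: str) -> str:
    """Turn one structured log line into a human-readable one.

    Assumes line-delimited JSON; "tick", "botId" and "action" are
    hypothetical field names used purely for illustration.
    """
    entry = json.loads(raw)
    tick = entry.get("tick", "?")
    bot = entry.get("botId", "?")
    action = entry.get("action", "?")
    return f"[tick {tick}] bot {bot}: {action}"


# Example: one made-up log line rendered for reading.
sample = '{"tick": 12, "botId": "A", "action": "steal"}'
print(render_log_line(sample))
```

That said, I agree with the underlying point: nobody should need to write even this much just to skim a match log.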

Also, because there have been bugs in the EC framework every year, I think more and more developers are opting to start later rather than jump on the hype to create the ultimate bot - every year, the bugs have only been resolved within the last week before important tournament dates.

I did not write a single line of code myself this year due to these frustrations; my bot was written by ChatGPT. I figured it would have a better idea of what is going on than I do. (Not joking: 100% of my code was written by ChatGPT based on the little documentation available.)