I’m new to the Entelect Challenge, so I apologise if I should know this, but what are the details of the machine our bots will be running on? I understand that each bot is allowed 2 seconds of processing time plus additional time as determined by the initial calibration phase, but I can’t seem to find any information about the computing resources we will be working with (RAM, CPU details, GPU?).
Hi Munsanje. I will go and find out for you, but as far as I understand the bots will be run on VMs, so don’t expect any GPU power. I will try to find out more about the RAM and CPU for you.
Any update on this for the 2018 challenge?
It would be really nice if there were some sort of GPU support; structured AI depends heavily on the number of computations one can perform to evaluate as far into the future as possible. It’s also a bit strange that in an AI competition there is no official information on how much computing power we have available.
We are using a Standard B4ms virtual machine on Azure for the matches. The VM has the following specs:
- 4 vCPUs
- 16 GB RAM
- 32 GB SSD
We are still investigating whether a burstable VM would actually provide a fair environment for the competition, so the machine specs are not final and we might end up going with a slightly smaller VM that offers better performance guarantees. If that is the case, it will have the same specs as the tournament machine we used for last year’s tournament.
None of these virtual machines have GPUs available, however, so do not count on that for your bot executions.
The Challenge Team
Is it possible to give us more time per turn? That would counterbalance the specs used.
I don’t think it will be feasible to allow more time per bot execution. We have to take tournament running time into account, especially for the finals, where we have an incredibly small window of time in which to actually run the bots, verify matches, re-run if required, and then get videos and presentations ready for Comic Con.
To give an example: if a match lasts on average 200 rounds and both bots use their full 2 seconds each round, we would need about 13 minutes to run each match. Given that we usually run around 400 matches in a tournament (depending on submissions), that would take about 90 hours to run all the matches. Granted, we could spin up more virtual machines, but that is a possibility we would only explore to account for more submissions than anticipated, not to allow longer execution times.
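The arithmetic above can be checked with a quick sketch. The figures (200 rounds, 2 bots, 2 seconds per move, 400 matches) are the illustrative numbers from the post, not official tournament parameters:

```python
# Worst-case tournament runtime estimate, using the numbers quoted above.

def match_seconds(rounds: int, bots: int, seconds_per_move: float) -> float:
    """Wall time for one match if every bot always uses its full budget."""
    return rounds * bots * seconds_per_move

per_match = match_seconds(rounds=200, bots=2, seconds_per_move=2.0)
total_hours = per_match * 400 / 3600

print(f"{per_match / 60:.1f} minutes per match")   # ~13.3 minutes
print(f"{total_hours:.0f} hours for 400 matches")  # ~89 hours
```

Doubling the per-move budget doubles both numbers, which is why longer execution times blow the finals schedule out.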
I hope that helps to answer your questions.
I had a look at the Azure VM you are using. Information is scarce, but as far as I can see a vCPU is roughly the equivalent of a single non-hyperthreaded core; they don’t do HT, and the ‘v’ part means you are allocated time on the CPU rather than getting a direct mapping. So the machine you are using is basically a bit better than a dual core with HT enabled.
I have made my bot execute until it uses up its time allocation, and since the bot’s move quality relies heavily on how many moves ahead it can anticipate, the number of calculations it can perform in the time frame makes a lot of difference.
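The "use the whole allocation" approach described above is commonly done with time-budgeted iterative deepening: search one ply deeper on each pass and stop when the budget is nearly spent. A minimal sketch, where `search_to_depth` is a hypothetical placeholder for whatever game-tree evaluation the bot actually runs:

```python
import time

def search_to_depth(state, depth):
    # Stand-in for a real fixed-depth search; returns (best_move, score).
    return ("noop", 0)

def best_move(state, budget_seconds=2.0, safety_margin=0.1):
    """Deepen the search until the per-turn budget is nearly exhausted.

    Keeps a safety margin so the move is returned before the deadline,
    and always returns the best result from the deepest completed pass.
    """
    deadline = time.monotonic() + budget_seconds - safety_margin
    best, depth = None, 1
    while time.monotonic() < deadline:
        best, _ = search_to_depth(state, depth)
        depth += 1
    return best
```

A real bot would also abort a pass mid-search if the deadline hits, but the pattern above is the core of why more CPU time translates directly into deeper lookahead.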
Could you consider doing something about the host machine specs? I know it’s expensive, but AI research has always been demanding on resources. I’m a bit baffled that you don’t have your own Threadripper with a 1080 for this.
Resource constraints force you to come up with clever strategies that make it a great deal more interesting - where’s the fun in running a textbook AI algorithm on a supercomputer?
Look, I’m not asking for a supercomputer; a quad core with HT and a graphics card would be fine. By not including a GPU, you are effectively disqualifying a whole field of AI research. Surely an up-to-date gaming PC is not overkill; what we are getting is a mid-range laptop.