Release 2022.3.2 · EntelectChallenge/2022-Arctica

Arctica - Release 2022.3.2 - Some Bug Fixes

Hey guys! Here is a new release with fixes for the two bugs that I have seen reported this weekend:

  • Scout Towers accidentally being added to territory
  • GameComplete step breaking due to a casting issue

Thank you for the quick reporting of bugs so far!
Please let me know if I have missed any important bugs.

Shoutout to @kobus-v-schoor and @rfnel for help debugging


@Jordan Ran into an issue using the new starter pack when running a full game; it looks like the state update messages are starting to exceed 2 MB in size :sweat_smile:

[ERROR] [Core]: Failed to run GameRunLoop with error: The server closed the connection with the following error: Connection closed with an error. InvalidDataException: The maximum message size of 2000000B was exceeded. The message size can be configured in AddHubOptions.

On this, I was wondering if the tick rate of 250ms could maybe be increased. From my own tests, parsing 2 MB of JSON text, plus the overhead of the websocket protocol on such large messages, is already taking up a significant portion of the processing time (I suspect on the runner/game engine side as well).

I did some crude testing, and maybe someone else can help confirm my suspicions, but it seems to me like marshalling + sending + receiving + unmarshalling the state updates (I tested both Python and Golang) already takes 200 ms to 230 ms towards the endgame due to the massive state updates.
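For anyone who wants to repeat the crude test, this is roughly the kind of check I did, here as a minimal Python sketch that only measures the unmarshalling half (the state_dump.json file name is just a placeholder for whatever late-game state you have captured):

import json
import time

# Load a captured late-game state update from disk; the file name is a placeholder.
with open("state_dump.json", "rb") as f:
    raw = f.read()

# Time only the JSON parse; sending/receiving over the websocket adds more on top.
start = time.perf_counter()
state = json.loads(raw)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"payload size: {len(raw) / 1e6:.2f} MB")
print(f"json.loads took: {elapsed_ms:.1f} ms")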

This is anecdotal, but my bot takes a maximum of 15-20 ms to parse the state and calculate its actions. I’m using Golang, which is a compiled language and has performant websocket/JSON libraries. Whenever I’m playing 4v4, I can’t reduce the tick rate below ~230 ms without my bot starting to miss ticks near the end of the game, because the per-tick processing exceeds the tick rate.

Technically speaking my bot isn’t really affected since I still send the tick updates in time (at least on my machine), but I think other bots (especially those using slower languages or more intensive algorithms like AI) might be severely limited. Just my 2 cents.

Ran into the below issue with the latest engine

[DEBUG] [Core]: at Microsoft.AspNetCore.SignalR.Client.HubConnection.InvokeCoreAsyncCore(String methodName, Type returnType, Object[] args, CancellationToken cancellationToken)
at Microsoft.AspNetCore.SignalR.Client.HubConnection.InvokeCoreAsync(String methodName, Type returnType, Object[] args, CancellationToken cancellationToken)
at Engine.Services.EngineService.GameRunLoop() in /home/runner/work/2022-Arctica/2022-Arctica/game-engine/Engine/Services/EngineService.cs:line 145
at Engine.Services.SignalRService.b__8_1(Task task) in /home/runner/work/2022-Arctica/2022-Arctica/game-engine/Engine/Services/SignalRService.cs:line 122
[ERROR] [Shutdown]: Shutting down due to a critical error
[ERROR] [Shutdown]: Shutting down before a winner was found. Informing the runner.
[ERROR] [Connections]: Engine informed of Disconnect. Reason: Shutdown called before a winner was found.
Disconnecting all clients and stopping
[DEBUG] [CloudCallback]: Cloud Callback called with Failed
[DEBUG] [CloudCallback]: Cloud Callback No-opped, Status: failed

Check out Release 2022.3.1 · EntelectChallenge/2022-Arctica - #12 by kobus-v-schoor

Also, this was supposed to be fixed in 3.2.

Getting this also

[DEBUG] [DisconnectEvent]: The maximum message size of 2000000B was exceeded. The message size can be configured in AddHubOptions.
[ERROR] [Core]: Failed to run GameRunLoop with error: The server closed the connection with the following error: Connection closed with an error. InvalidDataException: The maximum message size of 2000000B was exceeded. The message size can be configured in AddHubOptions.
[DEBUG] [Core]: at Microsoft.AspNetCore.SignalR.Client.HubConnection.InvokeCoreAsyncCore(String methodName, Type returnType, Object[] args, CancellationToken cancellationToken)
at Microsoft.AspNetCore.SignalR.Client.HubConnection.InvokeCoreAsync(String methodName, Type returnType, Object[] args, CancellationToken cancellationToken)
at Engine.Services.EngineService.GameRunLoop() in /home/runner/work/2022-Arctica/2022-Arctica/game-engine/Engine/Services/EngineService.cs:line 145
at Engine.Services.SignalRService.b__8_1(Task task) in /home/runner/work/2022-Arctica/2022-Arctica/game-engine/Engine/Services/SignalRService.cs:line 122
[ERROR] [Shutdown]: Shutting down due to a critical error
[ERROR] [Shutdown]: Shutting down before a winner was found. Informing the runner.
[ERROR] [Connections]: Engine informed of Disconnect. Reason: Shutdown called before a winner was found.
Disconnecting all clients and stopping
[DEBUG] [CloudCallback]: Cloud Callback called with Failed
[DEBUG] [CloudCallback]: Cloud Callback No-opped, Status: failed

Mine is running on JavaScript.

I would be wary, given the already tight timelines before the finals. At 500 ms per tick, a 2500-round match will run just over 20 minutes (2500 × 0.5 s = 1250 s).

It is a valid concern, however. I’m guessing that (like last year) the matches are run in the cloud, so another workaround might be to up the computing power a bit when running the matches to alleviate this issue.

I’m assuming (hoping) that the matches are run in parallel in a distributed fashion, which means that if the matches take longer they should be able to scale horizontally to decrease the time it takes to run the matches.

Maybe it’s just me - if anyone can confirm or deny my suspicions it would help a lot. I’ve only really started running into this issue after the last update (with full 4v4 matches) since the state updates became so large. There are two ways in which I’m now able to detect the problem:

Firstly, in my runner logs I’m seeing this (you can search for “Game Loop Time”):

[INFO] [TIMER]: Game Loop Time: 492ms
[INFO] [TIMER]: Game Loop Time: 490ms
[INFO] [TIMER]: Game Loop Time: 514ms
[INFO] [TIMER]: Game Loop Time: 464ms
[INFO] [TIMER]: Game Loop Time: 534ms
[INFO] [TIMER]: Game Loop Time: 574ms
[INFO] [TIMER]: Game Loop Time: 488ms
[INFO] [TIMER]: Game Loop Time: 507ms
[INFO] [TIMER]: Game Loop Time: 572ms
[INFO] [TIMER]: Game Loop Time: 583ms
[INFO] [TIMER]: Game Loop Time: 446ms

As you can see, each tick is taking >500 ms on average to process, and this is only halfway through the game. If the runner were working as expected, the game loop time should be around 250 ms (i.e. the tick rate).

Secondly, I initially found the issue by searching the runner’s output for invalid actions using the string “Invalid”. Once the tick skipping starts on my side, my bot ends up sending multiple commands in the same tick, leading to invalid actions being reported by the engine. Increasing the tick rate makes the errors go away.
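If anyone else wants to check their own runner logs for the first symptom, this is roughly the scan I do, sketched here in Python (the runner.log file name and the 250 ms tick rate are assumptions you’ll need to adjust for your own setup):

import re

TICK_RATE_MS = 250  # assumed configured tick rate; adjust to match your config
LOG_FILE = "runner.log"  # placeholder for your runner's log file

pattern = re.compile(r"Game Loop Time: (\d+)ms")

with open(LOG_FILE) as f:
    times = [int(m.group(1)) for line in f if (m := pattern.search(line))]

slow = [t for t in times if t > TICK_RATE_MS]
print(f"{len(slow)} of {len(times)} ticks exceeded {TICK_RATE_MS} ms")
if times:
    print(f"average: {sum(times) / len(times):.0f} ms, worst: {max(times)} ms")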


I’ve been seeing 2 GB+ files coming out of the new engine from a successful run, compared to a 750 MB file on the previous engine.

@Jordan - it appears (I haven’t proven it beyond any doubt) that the final state file contains duplicate Land entries in bots’ territory now. That could contribute to the file sizes blowing up.

I’ve also noticed, on the latest engine, that my bot can build on top of resource nodes. Is that expected behaviour?

Edit: It also looks like territory is no longer “first-come, first-served”. I just spotted what looks like the same Land node appearing in the territory list of multiple bots.

Is that intentional due to the territory changes?
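For what it’s worth, a quick way to confirm (or rule out) both suspicions against a state dump would be something like the rough Python sketch below. The key names (bots, territory, x, y) are guesses at the state schema, so they will probably need adjusting:

import json
from collections import Counter

# Load a single tick's state ("state.json" is a placeholder for your dump).
with open("state.json") as f:
    state = json.load(f)

owners_per_node = Counter()
for bot in state["bots"]:  # "bots" is a guessed key name
    positions = [(land["x"], land["y"]) for land in bot.get("territory", [])]  # guessed key names
    dupes = [pos for pos, n in Counter(positions).items() if n > 1]
    if dupes:
        print(f"bot {bot.get('id')}: {len(dupes)} land nodes listed more than once")
    for pos in set(positions):
        owners_per_node[pos] += 1

shared = [pos for pos, n in owners_per_node.items() if n > 1]
print(f"{len(shared)} land nodes appear in more than one bot's territory")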

In the first event we had condensed logs?

Is that still a thing? I’m having some trouble with size as well.
My visualiser can parse matches up to around 1024 MB in size using the full log.

I could not get a condensed log to work.

For now I’m doing 2v2 or 3v3 matches with 1 non-builder to test my tactics.
But that’s very tame.

I’m exploring JSON streaming solutions now, but those logs are huge.

The team should implement my style of indexing…

Build your whole JSON object, then set the first and second dimensions to the X and Y of the node.

That way you access the node with: obj[x][y].whatever
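In Python terms, something like this minimal sketch is what I mean (assuming the state exposes a flat list of nodes with x and y fields; the key names are placeholders):

# Build a two-dimensional index over a flat list of nodes so lookups become index[x][y].
# The "x" and "y" key names are placeholders for whatever the real schema uses.
def build_index(nodes):
    index = {}
    for node in nodes:
        index.setdefault(node["x"], {})[node["y"]] = node
    return index

# Usage (field names again placeholders): index = build_index(state["nodes"]); index[x][y]["type"]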

For now, one of my solutions is to write something that can strip any data I don’t need for the visualiser out of the JSON (something along the lines of the sketch below).
I could also run 2000 ticks, but you can’t do proper diagnosis on that.
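As a rough idea of what I mean, a minimal Python sketch that drops a couple of keys from every tick before re-saving the log (the dropped key names and the assumption that the log is a JSON array of ticks are placeholders on my side):

import json

DROP_KEYS = {"availableNodes", "gameObjects"}  # example key names only

def strip_tick(tick):
    # Keep everything except the keys the visualiser doesn't need.
    return {k: v for k, v in tick.items() if k not in DROP_KEYS}

with open("full_log.json") as f:
    ticks = json.load(f)  # assumes the log is a JSON array of per-tick states

with open("stripped_log.json", "w") as f:
    json.dump([strip_tick(t) for t in ticks], f, separators=(",", ":"))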

2 GB+ is a different game though.
I expect your entry goes berserk on buildings,
though I know they are cheaper (I have not adjusted for that yet).

I will let others know what I see as well.
But I rely heavily on the visualiser for visual cues.

So that’s something I’m working on for the moment…
If I generated the round JSON myself, it would be much, much smaller,
but very few would understand it.

I’m having fun though.
Some roadblocks to furthering my entry aren’t too bad.

Hey there @rfnel! Yes, territory ownership is now dynamic. A piece of land should never belong to more than one bot at a time though… If you see the same land object in multiple territories in the same tick, then we have a problem :joy:

@rfnel Hmmm… There is an explicit check when adding buildings that they are not placed on resource nodes, only on available nodes. It might not be rejecting it as an invalid action; however, it shouldn’t be able to place one there.

Is anyone else having issues uploading? My latest upload has been in the queue for close to an hour; it usually takes under 5 minutes.

Hey @WillieTheron,

I put together a small script that might help with the log size issue. It is Node-based, so I’m not sure if it will be compatible with your visualizer, but it might be helpful to you or someone else.

The compressed logs are still readable, but are definitely more difficult to read than the originals.

Let me know if you have any questions.

Here’s a link to the Repo: https://github.com/NoxRammus/ec-2022-arctica-log-compressor


@Jordan I’ve got an example showing the building on top of a resource node, plus the territory being added multiple times:

(Screencast from 15-08-2022 21:45:30)

The red dot is a building. The shade of red getting darker and darker is the position being listed as part of the bot’s territory multiple times, with the visualizer redrawing over the same spot.

This is pretty cool for certain. I will assess this as well.

For now I could cut my logs by 350 MB, which is enough to continue the visualiser.

I will definitely assess the above and see what can be done.


I’ve got a bit of a far-out suggestion: I’m starting to wonder if it wouldn’t be better to revert the territory takeover changes for the final round. I think it’s a great idea, but giving us just one week to figure out a strategy for such a big change while simultaneously battling (fairly severe) engine bugs is asking a bit too much :sweat_smile: The same goes for the EC team - the new changes are fairly complex, and I think it’s unrealistic to expect that all the bugs will be sorted out come Saturday (which is when most of us will probably only be able to start seriously working on our bots, which might in turn lead to new issues being uncovered).

I think there are enough changes to the engine as it currently stands to make for an interesting final round:

  1. I’m almost certain the fair resource distribution will have quite an effect on the leaderboard. From my own tests with the previous versions, at some point strategy just didn’t matter anymore, only speed. The new patches will allow differences in strategy efficiency/effectiveness to come into their own.
  2. The resource debuff feature is really nice - I think it will also have a fairly significant effect.
  3. New buildings will allow for faster territory expansion, again making room for optimizing the building strategy.

Just my 2 cents.