
Backend and Abyss Tracker update

It has been a while, but it is time to give you a small update on the backend development of EVE Workbench.

As you all know, the EWB team is pushing to move the Abyss Tracker to a more stable solution. Therefore, the first development release of the new Abyss Tracker has been deployed to the staging server, where it is being tested and tuned to run on our cluster environment. And if you are curious, it is here:

But in the meantime there are also updates in the backend of EVE Workbench, focused on ESI calls and the monitoring of them.

ESI Call changes

This part of the code dates from the beginning of EVE Workbench back in 2019. There have been changes to it since, but the way it stored and handled the data never changed.

One part of this was that every record pulled from the ESI was stored in a log table, and the pulled item (for example, a type) was stored with a foreign key to that log record.
This meant that when we received a 304 from the ESI, we pulled the item (or multiple items) from the database based on that foreign key.
At the time we thought this was a good design choice, but over the years we noticed some issues with this approach. Still, as developers sometimes think: if it is not broken, don't fix it.

For illustration, a small example of the current system:
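A rough reconstruction of that old design, using an in-memory SQLite database; the table and column names here are assumptions for illustration, not the real EWB schema:

```python
import sqlite3

# Old design (hypothetical names): every ESI pull writes an ESILog row,
# and the pulled item references that row via a foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ESILog (
        id INTEGER PRIMARY KEY,
        endpoint TEXT,
        etag TEXT
    );
    CREATE TABLE Type (
        id INTEGER PRIMARY KEY,
        name TEXT,
        esilog_id INTEGER REFERENCES ESILog(id)
    );
""")

# First pull: store the log record and the item pointing back at it.
cur = conn.execute("INSERT INTO ESILog (endpoint, etag) VALUES (?, ?)",
                   ("/universe/types/587/", 'W/"abc123"'))
log_id = cur.lastrowid
conn.execute("INSERT INTO Type (id, name, esilog_id) VALUES (?, ?, ?)",
             (587, "Rifter", log_id))

# Later, the ESI answers 304 Not Modified: the item is re-read from the
# database through the foreign key instead of from a response body.
row = conn.execute("SELECT name FROM Type WHERE esilog_id = ?",
                   (log_id,)).fetchone()
print(row[0])
```

Every 304 turns into a relational lookup against the item tables, which is exactly where the pain described below comes from.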

Well, with this design you can imagine that the database server didn't like us very much. It also put responsibilities in places where, in our opinion nowadays, they didn't belong, and with that we of course sometimes ran into trouble…

So the decision was finally made to remove all relations with the ESILog table from the database and replace them with an additional field on the ESILog table that stores the full ESI response. This also enables us to move the ESI log table to another database system, such as MongoDB.
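A minimal sketch of that new shape, again with assumed names rather than the real EWB schema: the log row carries the full response body, so no foreign keys back from item tables are needed.

```python
import json
import sqlite3

# New design (hypothetical names): the ESILog row is self-contained and
# stores the full ESI response as JSON.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ESILog (
        id INTEGER PRIMARY KEY,
        endpoint TEXT,
        etag TEXT,
        response TEXT   -- full ESI response body
    )
""")

body = {"type_id": 587, "name": "Rifter"}
conn.execute("INSERT INTO ESILog (endpoint, etag, response) VALUES (?, ?, ?)",
             ("/universe/types/587/", 'W/"abc123"', json.dumps(body)))

# On a 304 Not Modified, the item is rebuilt straight from the stored
# response; no JOIN against item tables is needed anymore.
stored = conn.execute("SELECT response FROM ESILog WHERE endpoint = ?",
                      ("/universe/types/587/",)).fetchone()[0]
print(json.loads(stored)["name"])
```

Because each row is now a self-contained document, moving the table to a document store like MongoDB becomes straightforward.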

On top of the database changes, we also added a new caching layer to the ESI call system, which can be activated in the code per call. It stores the entire ESIResult object we generate, based on the Expires header in the ESI response (of course, only for successful requests). This in turn reduces the number of requests we need to make towards the ESI.
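The idea can be sketched like this; the cache API and the shape of the cached result are assumptions, not the real ESIResult class:

```python
import time
from email.utils import parsedate_to_datetime

class ESICache:
    """Illustrative Expires-based cache for ESI results (not real EWB code)."""

    def __init__(self):
        self._store = {}  # url -> (expires_at_epoch, result)

    def put(self, url, result, expires_header):
        # Only successful responses get cached; the Expires header from
        # the ESI response decides how long the entry lives.
        expires_at = parsedate_to_datetime(expires_header).timestamp()
        self._store[url] = (expires_at, result)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(url)
        if entry is None or entry[0] <= now:
            return None  # missing or expired: caller must hit the ESI
        return entry[1]

cache = ESICache()
cache.put("/universe/types/587/", {"name": "Rifter"},
          "Sat, 01 Jan 2050 00:00:00 GMT")
print(cache.get("/universe/types/587/"))
```

A cache hit returns the stored result without touching the ESI at all; an expired or missing entry falls through to a real request.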

ESI Monitoring

Recently our staging server was blocked from the ESI, due to a bug in the code that popped up and went unnoticed because of the lack of monitoring of our ESI calls.

So, alongside the changes already being made to the ESI call system, a monitoring system is also being implemented that gives us insight into the number of requests made towards the ESI.

The graph above shows the number of calls made towards the ESI by the entire stack on the staging cluster within one hour.

The software used for this is InfluxDB, to store the outgoing call records, and Grafana, for visualization.

Other small changes

Besides the ESI changes, there were also many other changes in the backend that improve data integrity and enable us to pull missing information from the ESI at an earlier stage, plus countless other things that I will not type out here in full, because I don't want to be blamed for a broken scroll wheel on your mouse.

And as always, if you have any ideas or questions, join our Discord server.

Fly safe!
Team EVE Workbench
– Lionear
– RaymondKrah
– Ithran

Published in Development
