May 11th Downtime Report
one hour lost

Sometime after 6pm, players began reporting lag in their gameplay. Initial investigation showed lag for admins as well, but systems looked OK and latency was consistent across all of our monitored applications, so it looked like a network problem.

After running command-line tests directly on the server, we ruled out the network and dug deeper. It was at this point that we learned the MOO was running away, consuming as much CPU as it could, as fast as it could. A backup of the last checkpoint was immediately taken - this checkpoint was from 5:06PM, and it's this time difference (a little over an hour) that was lost.

To try to get control of the problem, I asked everyone to log off. Latency was so high that this proved impractical, so I forced everyone to disconnect. Despite my attempts, I was unable to pin down the exact cause and fix it from within the system.

As part of investigating this, I've identified a number of things to change so we can better handle this situation if it crops up again.

@forked - lag was so bad that our listing of what's waiting to run was failing on me. I've recoded it to handle this sort of monster lag should it happen again. This critical command helps us keep tabs on runaways. A sketch of the idea is below.
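
The real command is MOO code, but the idea translates anywhere: when the queue explodes, summarize instead of dumping every entry, so the report itself stays cheap. A minimal Python sketch, with made-up task fields:

```python
# Hypothetical sketch of the @forked fix (the real thing is MOO code).
# Show full detail only for the worst offenders and summarize the rest,
# so listing a monster queue doesn't add to the lag.
from collections import Counter

MAX_DETAIL = 50  # full detail only for the top offenders

def forked_report(tasks):
    """tasks: list of dicts like {'owner': ..., 'verb': ..., 'ticks': ...}."""
    tasks = sorted(tasks, key=lambda t: t["ticks"], reverse=True)
    lines = [f"{len(tasks)} tasks waiting"]
    for t in tasks[:MAX_DETAIL]:
        lines.append(f"  {t['owner']:<16} {t['verb']:<24} {t['ticks']} ticks")
    if len(tasks) > MAX_DETAIL:
        rest = Counter(t["verb"] for t in tasks[MAX_DETAIL:])
        lines.append(f"  ...and {len(tasks) - MAX_DETAIL} more, by verb:")
        for verb, count in rest.most_common(10):
            lines.append(f"    {count:>5} x {verb}")
    return "\n".join(lines)
```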

@panic - I've given myself a PANIC button. In the event of an emergency, I can use this to turn off some components and prevent the cascade effect they can cause as things get backed up in @forked. The sketch below shows the general idea.
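
The real @panic is also in-MOO, but the pattern is just a global kill switch that non-essential subsystems consult before queueing new work. A minimal sketch, with invented subsystem names:

```python
# Hypothetical panic-button sketch (subsystem names invented; the real
# @panic is MOO code). One switch halts non-essential background work so
# it stops piling onto an already-backed-up task queue.
PANIC = False
NON_ESSENTIAL = {"weather_ticks", "npc_ambiance", "cleanup_jobs"}

def set_panic(on=True):
    global PANIC
    PANIC = on

def should_run(subsystem):
    """Subsystems call this before forking new background tasks."""
    return not (PANIC and subsystem in NON_ESSENTIAL)
```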

cpu utilization - while we see server CPU metrics, our existing tools and monitoring weren't specifically watching the MOO process. CopperEgg was only giving us general CPU metrics and 'is this process running' checks. I've written a new component into our website app that periodically samples the CPU and memory usage of the MOO process. Then, each time we record metrics on site usage, this information from the last healthcheck is logged along with the number of connected players. Something along the lines of the sketch below.
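
I won't paste the real component here, but assuming a Python-ish website app and the psutil library, the healthcheck boils down to something like this (process name and field names are illustrative):

```python
# Hedged sketch of the per-process healthcheck using psutil; the actual
# component, process name, and storage are assumptions.
import psutil

def moo_healthcheck(process_name="moo"):
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            return {
                "cpu_percent": proc.cpu_percent(interval=1.0),      # % of one CPU
                "rss_mb": proc.memory_info().rss / (1024 * 1024),   # resident MB
                "pid": proc.pid,
            }
    return None  # process not found: a problem all by itself
```

Each time site-usage metrics are recorded, the most recent dict from this check gets logged alongside the connected player count.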

cpu use alerts! - I've configured an experimental alert that fires if the MOO process uses 95% of a CPU for 5 minutes, and escalates at 10 minutes (two alert levels). This way, if/when the MOO runs away next time, we'll get an alert about the true problem much sooner. The sketch below shows the logic.
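
The alerting itself lives in our monitoring setup, but the logic is simple enough to sketch: track how long the process has stayed at or above the threshold, then escalate at 5 and again at 10 minutes. The notify() here is a stand-in:

```python
# Hedged sketch of the two-level CPU alert; thresholds come from the post,
# everything else (names, notify) is illustrative.
import time

THRESHOLD = 95.0                                 # percent of one CPU
LEVELS = [(300, "warning"), (600, "critical")]   # seconds over threshold

breach_started = None
fired = set()

def check(cpu_percent, notify=print):
    """Call this with each healthcheck sample."""
    global breach_started
    if cpu_percent < THRESHOLD:
        breach_started = None
        fired.clear()
        return
    if breach_started is None:
        breach_started = time.time()
    elapsed = time.time() - breach_started
    for duration, level in LEVELS:
        if elapsed >= duration and level not in fired:
            fired.add(level)
            notify(f"[{level}] MOO at {cpu_percent:.0f}% CPU for {elapsed/60:.0f}+ min")
```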

And as proof I'm not bullshitting, here's the live board for CPU and RAM:

This information is now available on our status server: http://status.sindome.org/

That was a total failure. Shame on me. :)
