May 11th Downtime Report
one hour lost

Sometime after 6pm, players began reporting lag in their gameplay. Initial investigation showed the lag affected admin as well, but systems looked OK and latency was consistent across all the applications we monitor, so it looked like a network issue.

After running command-line tests directly on the server, the network was ruled out and the investigation continued. It was at this point that we learned the MOO was running away, consuming as much CPU as it could. A backup of the last checkpoint was taken immediately; that checkpoint was from 5:06PM, and it's this time difference (a little over an hour) that was lost.

To try to get control of the problem, I asked everyone to log off. Because latency was so high this proved impractical, so I forced everyone to disconnect. Despite my attempts, I was unable to pinpoint the exact cause and fix it from within the system.

In the course of investigating this, I've identified a number of things to change so we can better handle this situation if it crops up again.

@forked - lag was so bad that our list of what's waiting to run was failing on me. I've recoded it so it will handle this sort of monster lag should it happen again. This critical command helps us keep tabs on runaways.
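One way a task-queue listing can survive "monster lag" is to summarize the backlog instead of printing every queued task. This is a minimal Python sketch of that idea; the tuple layout and field names are hypothetical, not Sindome's actual @forked internals.

```python
from collections import Counter

def summarize_tasks(tasks, limit=20):
    """Summarize a (possibly huge) queue of forked tasks.

    Rather than listing every task -- which is itself slow when thousands
    are backed up -- group by verb name and report counts, capped at
    `limit` rows. Each task is a (task_id, verb_name, seconds_until_run)
    tuple; this schema is illustrative only.
    """
    counts = Counter(verb for _tid, verb, _eta in tasks)
    lines = [f"{n:6d} x {verb}" for verb, n in counts.most_common(limit)]
    lines.append(f"total: {len(tasks)} queued tasks")
    return "\n".join(lines)
```

Grouping like this keeps the command's output (and cost) roughly constant no matter how badly the queue has backed up.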

@panic - I've given myself a PANIC button. In the event of an emergency, I can use this to turn off some components and prevent the cascade effect these can cause as things get backed up in @forked.
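A panic button like this is essentially a kill switch that non-essential components check before doing work. Here's a small Python sketch of the pattern, assuming component names are simple strings; it is illustrative, not Sindome's actual @panic code.

```python
class PanicSwitch:
    """A hypothetical emergency kill switch for shedding non-essential load.

    Components call `allowed()` before scheduling work; flipping the
    switch stops them from queuing more tasks, letting the backlog drain.
    """

    def __init__(self):
        self._disabled = set()

    def panic(self, components):
        # Disable the named components until the all-clear is given.
        self._disabled = set(components)

    def all_clear(self):
        self._disabled.clear()

    def allowed(self, component):
        return component not in self._disabled
```

The point of the pattern is that shutting off ambient, low-priority work is a one-command operation, instead of hunting down each offender while the server is already struggling.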

cpu utilization - while we see server CPU metrics, our existing tools and monitoring weren't watching the MOO process specifically. CopperEgg was only giving us general CPU metrics and 'is this process running' checks. I've written a new component into our website app that periodically gets the CPU and memory usage of the MOO process. Then, each time we record metrics on site usage, this information from the last healthcheck is logged along with the number of connected players.
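On Linux, per-process CPU and memory can be sampled by reading /proc. This is a minimal sketch of that kind of healthcheck; the real Sindome component lives in their website app and its internals aren't public, so treat this as an assumption about the approach, not their code.

```python
import os

CLK_TCK = os.sysconf("SC_CLK_TCK")        # clock ticks per second
PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")    # bytes per memory page

def process_stats(pid):
    """Return (cpu_seconds, rss_bytes) for one process (Linux only)."""
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # The comm field can contain spaces and parens; the fields after the
    # final ')' are stable, per proc(5). utime/stime are stat fields 14/15.
    fields = data.rsplit(")", 1)[1].split()
    utime, stime = int(fields[11]), int(fields[12])
    cpu_seconds = (utime + stime) / CLK_TCK
    with open(f"/proc/{pid}/statm") as f:
        rss_pages = int(f.read().split()[1])  # resident set size, in pages
    return cpu_seconds, rss_pages * PAGE_SIZE
```

Polling this for the MOO's pid on a timer, and logging the result alongside the connected-player count, gives exactly the correlation described above: when CPU spikes, you can see what the player load looked like at the same moment.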

cpu use alerts! - I've configured an experimental alert that fires if the MOO process uses 95% of a CPU for 5 and 10 minutes (two alert levels). This way, if/when the MOO runs away next time, we'll get an alert about the true problem much sooner.
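The two-level alert logic amounts to tracking how long CPU usage has stayed continuously above the threshold. A minimal sketch in Python, assuming one sample per minute; the 95%/5-minute/10-minute numbers come from the post, the function itself is illustrative, not Sindome's monitoring code.

```python
def alert_level(samples, threshold=95.0, interval=60):
    """Return 0, 1, or 2 for a stream of periodic CPU% samples.

    `samples` is ordered oldest to newest, one reading per `interval`
    seconds. Level 1 fires after 5 minutes continuously at or above
    `threshold`; level 2 after 10 minutes. Any dip resets the streak.
    """
    streak = 0
    for pct in samples:
        streak = streak + 1 if pct >= threshold else 0
    sustained = streak * interval
    if sustained >= 600:
        return 2
    if sustained >= 300:
        return 1
    return 0
```

Requiring a *sustained* breach, rather than alerting on a single hot sample, is what keeps a brief legitimate spike (a big combat scene, a checkpoint) from paging anyone.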

And as proof I'm not bullshitting, here's the live board for CPU and RAM:

This information is now available on our [url=http://status.sindome.org/]status server[/url].

That was a total failure. Shame on me. :)