
May 11th Downtime Report
one hour lost

Sometime after 6 PM, players began reporting lag in their gameplay. Initial investigation showed lag for admins as well, but systems looked OK and latency was consistent across all of the applications we report on, so it looked like a network issue.

After running command-line tests directly on the server, we ruled out the network and kept investigating. It was at this point that we learned the MOO was running away, consuming as much CPU as it could get, as fast as it could. A backup of the last checkpoint was immediately taken - that checkpoint was from 5:06 PM, and it's this gap (a little over an hour) that was lost.

To try to get control of the problem, I asked everyone to log off. Latency was so high that this proved impractical, so I forced everyone to disconnect. Despite my attempts, I was unable to pin down the exact cause and fix it from within the system.

As part of investigating this, I've identified a number of things to change so we can handle this situation better if it crops up again.

@forked - lag was so bad that our list of what's waiting to run was failing on me. I've recoded it so it can handle this sort of monster lag should it happen again. This critical command helps us keep tabs on runaway tasks.

@panic - I've given myself a PANIC button. In the event of an emergency, I can use this to turn off some components and prevent the cascade effect they can cause as things get backed up in @forked.

CPU utilization - while we can see server-level CPU metrics, our existing tools and monitoring weren't watching the MOO process specifically. CopperEgg was only giving us general CPU metrics and 'is this process running' checks. I've written a new component into our website app that periodically gets the CPU and memory usage of the MOO process. Then, each time we record metrics on site usage, this information from the last healthcheck is logged along with the number of connected players.
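For the curious, here's roughly what that healthcheck boils down to. This is an illustrative Python sketch using psutil, not the actual code in the website app, and the process name "moo" is a stand-in:

import time
import psutil

MOO_PROCESS_NAME = "moo"  # stand-in; whatever the server process is actually called

def find_moo_process():
    # Look for the MOO server among running processes.
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == MOO_PROCESS_NAME:
            return proc
    return None

def take_healthcheck():
    # Sample CPU and memory use of the MOO process right now.
    proc = find_moo_process()
    if proc is None:
        return {"running": False, "cpu_percent": 0.0, "rss_mb": 0.0, "ts": time.time()}
    return {
        "running": True,
        "cpu_percent": proc.cpu_percent(interval=1.0),     # % of one CPU over a 1s sample
        "rss_mb": proc.memory_info().rss / (1024 * 1024),  # resident memory in MB
        "ts": time.time(),
    }

The website app keeps the most recent result as the 'last healthcheck' and writes it out, along with the connected player count, whenever site-usage metrics are recorded.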

CPU use alerts! - I've configured an experimental alert that fires if the MOO process uses 95% of a CPU for 5 minutes, and again at 10 minutes (two alert levels). This way, if/when the MOO runs away next time, we'll get alerted to the true problem much sooner.
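The alert itself is just sustained-threshold logic on top of those healthcheck samples. Sketched out (again illustrative Python, not the actual alerting configuration):

CPU_THRESHOLD = 95.0   # percent of one CPU
WARN_WINDOW = 5 * 60   # seconds - level 1 alert
CRIT_WINDOW = 10 * 60  # seconds - level 2 alert

def sustained_above(samples, window_seconds, now):
    # samples are healthcheck dicts like the ones above, oldest first.
    # True only if history spans the window and every sample is over the threshold.
    recent = [s for s in samples if now - s["ts"] <= window_seconds]
    if not recent or now - recent[0]["ts"] < window_seconds * 0.9:
        return False  # not enough history yet to judge
    return all(s["cpu_percent"] >= CPU_THRESHOLD for s in recent)

def alert_level(samples, now):
    # 0 = fine, 1 = runaway for 5+ minutes, 2 = runaway for 10+ minutes.
    if sustained_above(samples, CRIT_WINDOW, now):
        return 2
    if sustained_above(samples, WARN_WINDOW, now):
        return 1
    return 0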

And as proof I'm not bullshitting, here's the live board for CPU and RAM:

This information is now available on our status server: http://status.sindome.org/

That was a total failure. Shame on me. :)

http://status.sindome.org/