It has not been a good week at the LII. Yesterday at around noon our server room experienced a serious power spike. We're still not sure where it came from, but it was sufficient to cause all of our servers to reboot despite protection with UPS devices and the like. One of them didn't come back, and, as Murphy would have it, it was the director machine for our cluster. We've worked around the problem, but because the repair involves a DNS change it will take a while to propagate through the network (I'm writing this from central Connecticut more than 16 hours after the DNS change, and it has still not reached Comcast's servers here). We expect it to be net-wide within 24 hours. This is just…bad. We are still investigating the power event that led to this. More here when we know more.