Starting at 2:05 AM PDT we experienced power and network connectivity issues for some instances, and degraded performance for some EBS volumes, in the affected Availability Zone (euw2-az2). By 4:50 AM PDT, power and network connectivity had been restored to the majority of affected instances, and degraded performance for the majority of affected EBS volumes had been resolved. Since the start of the impact, we have been working to recover the remaining instances and volumes. A small number of remaining instances and volumes are hosted on hardware that was adversely affected by the loss of power. We continue to work to recover all affected instances and volumes and have opened notifications for the remaining impacted customers via the Personal Health Dashboard. For immediate recovery, we recommend replacing any remaining affected instances or volumes if possible.
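For volume replacement, one possible workflow is to snapshot the impaired volume, create a new volume from a known-good snapshot, and swap it onto the instance. The sketch below only prints the AWS CLI commands for review rather than executing them; all IDs, the Availability Zone name, and the device name are hypothetical placeholders, not values from this event.

```shell
#!/bin/sh
# Hypothetical identifiers for illustration only; substitute the IDs from
# your own Personal Health Dashboard notification.
VOLUME_ID="vol-0123456789abcdef0"     # impaired volume (placeholder)
SNAPSHOT_ID="snap-0123456789abcdef0"  # known-good snapshot (placeholder)
INSTANCE_ID="i-0123456789abcdef0"     # attached instance (placeholder)
AZ="eu-west-2b"                       # AZ name mapping to euw2-az2 varies per account
DEVICE="/dev/sdf"                     # device name (placeholder)

# 1. Snapshot the impaired volume in case data is partially recoverable.
echo "aws ec2 create-snapshot --volume-id $VOLUME_ID --description 'pre-replacement'"

# 2. Create a replacement volume from the last known-good snapshot,
#    in the same Availability Zone as the instance.
echo "aws ec2 create-volume --snapshot-id $SNAPSHOT_ID --availability-zone $AZ"

# 3. Detach the impaired volume and attach the replacement
#    (vol-NEW is a placeholder for the volume created in step 2).
echo "aws ec2 detach-volume --volume-id $VOLUME_ID"
echo "aws ec2 attach-volume --volume-id vol-NEW --instance-id $INSTANCE_ID --device $DEVICE"
```

Printing the commands first (rather than running them) lets an operator verify IDs before making changes during an ongoing event.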
Between 5:25 PM and 7:52 PM PDT, some AWS Batch Compute Environments transitioned to INVALID in the US-WEST-2 Region. The issue has been resolved and the service is working normally.