Starting at 6:41 PM PDT on September 26th, we experienced degraded performance for some EBS volumes in a single Availability Zone (USE1-AZ2) in the US-EAST-1 Region. The issue was caused by increased resource contention within the EBS subsystem responsible for coordinating EBS storage hosts. Engineering worked to identify the root cause and resolve the issue within the affected subsystem. At 11:20 PM PDT, after deploying an update to the affected subsystem, IO performance for the affected EBS volumes began to return to normal levels. By 12:05 AM PDT on September 27th, the vast majority of affected EBS volumes in the USE1-AZ2 Availability Zone were operating normally.

However, starting at 12:12 AM PDT, recovery slowed for a smaller set of affected EBS volumes, and we observed degraded performance for a small number of additional volumes in the USE1-AZ2 Availability Zone. Engineering investigated the root cause and put mitigations in place to restore performance for the remaining affected EBS volumes. These mitigations gradually improved performance, with full performance restored by 3:45 AM PDT.

While almost all affected EBS volumes have fully recovered, we continue to work on recovering a small number of remaining EBS volumes and will communicate their recovery status via the Personal Health Dashboard. Similarly, while the majority of affected services have fully recovered, we continue to recover some services, including RDS databases and ElastiCache clusters, and will communicate their recovery status via the Personal Health Dashboard. The issue has been fully resolved and the service is operating normally.