Starting at 9:58 PM PDT, we experienced increased error rates for the EC2 APIs, degraded performance for some EBS volumes, and EC2 instance connectivity issues within a single Availability Zone (ape1-az2) in the AP-EAST-1 Region. The root cause of the issue was degraded performance of the underlying EBS storage servers within the affected Availability Zone. Engineers took action to mitigate and resolve the degraded EBS storage server performance, which resolved the issue. At 12:41 AM PDT, EBS volumes with degraded performance began to recover, and by 12:58 AM PDT the vast majority of affected EBS volumes had recovered. We continue to work on a small number of EBS volumes that are still experiencing degraded performance and will provide further updates via the Personal Health Dashboard for those volumes. All services are now operating normally within the affected Availability Zone. The issue has been resolved and the service is operating normally.