Make – Delays on queue processing

Jan 9, 21:38 CET
Resolved – This incident has been resolved.

Jan 8, 22:21 CET
Monitoring – We have applied a fix and we are currently monitoring the zone.

Jan 8, 19:13 CET
Investigating – We are currently facing an issue where some users might experience a delay in the processing of queue records in their scenarios. This does not affect all scenarios.

GitHub – Issues with VNet Injected Larger Hosted Runners in East US 2

Jan 9, 20:00 UTC
Resolved – This incident has been resolved.

Jan 9, 20:00 UTC
Update – The impact to Large Runners has been mitigated. The third party incident has not been fully mitigated but is being actively monitored at https://azure.status.microsoft/en-us/status in case of recurrence.

Jan 9, 19:27 UTC
Update – We are continuing to see improvements while still monitoring updates from the third party at https://azure.status.microsoft/en-us/status.

Jan 9, 18:53 UTC
Update – We are still monitoring the third party networking updates via https://azure.status.microsoft/en-us/status. Multiple workstreams are in progress by the third party to mitigate the impact.

Jan 9, 18:18 UTC
Update – We are still monitoring the third party networking updates via https://azure.status.microsoft/en-us/status. Multiple workstreams are in progress by the third party to mitigate the impact.

Jan 9, 17:43 UTC
Update – The underlying third party networking issues have been identified and are being worked on. Ongoing updates can be found at https://azure.status.microsoft/en-us/status.

Jan 9, 17:12 UTC
Investigating – We are currently investigating this issue.

DigitalOcean – Retro – Multiple services and API

Jan 9, 18:01 UTC
Resolved – From 15:28 to 15:32 UTC, an internal service disruption may have caused users to experience errors while using the Cloud Panel or API to manage Spaces Buckets, Apps, Managed Database Clusters, or Load Balancers, as well as other actions, due to impacted downstream services.

If you continue to experience problems, please open a ticket with our support team.

Thank you and we apologize for any inconvenience.

GitHub – Some GitHub Actions may not run

Jan 9, 08:30 UTC
Resolved – This incident has been resolved.

Jan 9, 08:30 UTC
Update – Actions is operating normally.

Jan 9, 08:17 UTC
Update – We have seen recovery of Actions runs for affected repositories. We are verifying all remediations before resolving this incident.

Jan 9, 07:47 UTC
Update – We have identified the problem and are proceeding with a fail-over remediation. We anticipate this will allow Actions runs to proceed for affected repositories.

Jan 9, 07:17 UTC
Update – Approximately 1-2% of repositories may have Actions jobs that are blocked and not running, or that will be delayed. We have identified a potential cause, which we are confirming, and will be working on remediation.

Jan 9, 07:15 UTC
Investigating – We are investigating reports of degraded performance for Actions.