This incident has been resolved. Please contact our support team if you need further assistance.
Jul 23, 2018 - 07:42 UTC
The last of the immediate hardware issues has been addressed, and the backlogs have all been consumed. We're continuing to monitor, but performance should be more or less back to normal now.
Jul 23, 2018 - 07:10 UTC
We've identified some hardware issues on a portion of our load balancing layer. While the remaining portion was able to assume control of the affected IPs, as designed, the additional load on that unaffected portion caused higher-than-expected queueing and latency for requests.
We're addressing the hardware issues now, and performance is recovering. Stay tuned.
Jul 23, 2018 - 07:01 UTC
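For readers curious why failover alone caused latency, queueing theory makes the effect concrete: when the surviving pool absorbs the failed pool's traffic, its utilization jumps, and wait times grow non-linearly as utilization approaches capacity. The sketch below is purely illustrative (an M/M/1 model with made-up request rates, not our actual infrastructure or numbers):

```python
def mm1_wait(arrival_rate, service_rate):
    """Mean time a request spends in an M/M/1 queue: W = 1 / (mu - lambda).

    arrival_rate and service_rate are in requests per second; the queue is
    only stable when arrival_rate < service_rate.
    """
    assert arrival_rate < service_rate, "queue is unstable at or above capacity"
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical numbers: two pools each rated ~1100 req/s, total traffic 1000 req/s.
normal = mm1_wait(500.0, 1100.0)      # each pool handles half the traffic
failover = mm1_wait(1000.0, 1100.0)   # one pool absorbs all of it

# Even though the surviving pool is still under capacity, mean latency
# increases sixfold (1/100 s vs 1/600 s) because headroom shrank.
print(normal, failover, failover / normal)
```

The point of the model: latency degrades much faster than utilization does, which is why a pool that could "handle the load" on paper still produced higher-than-expected queueing.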
We're investigating front-end performance degradation affecting the GUI, the API, Git and Mercurial operations over HTTPS, SSH, and archive downloads. Webhooks, Pipelines, and LFS are not directly affected.
Jul 23, 2018 - 06:50 UTC
This incident affected: Website, API, SSH, Git via HTTPS, and Mercurial via HTTPS.