Update - The workers are going through the queues now and all events should be handled in the next hour or so.
Sep 2, 20:59 UTC
Monitoring - Earlier this morning we applied a fix for the issue causing our background tasks to fall behind. Our workers are slowly working through a fairly large backlog; however, processing is degraded due to a separate ongoing issue. We will continue to update the second incident, and will resolve this incident once our task queues have returned to normal levels.
Sep 2, 13:12 UTC
Investigating - We are currently experiencing an issue with background tasks not running in a timely manner. This impacts multiple systems, including activity feeds, notifications, and webhooks. We're investigating and will post an update once we know more.
Sep 2, 12:02 UTC
Website - Operational
API - Operational
SSH - Operational
Git via HTTPS - Operational
Mercurial via HTTPS - Operational
Webhooks - Operational
Source downloads - Operational
System Metrics (Month / Week / Day): Website, API, Git, and Mercurial average response times
Past Incidents
Sep 2, 2015
Resolved - SSH and HTTPS traffic is back to normal, though background tasks are still working through a backlog.
Sep 2, 20:54 UTC
Identified - We have resolved the current issues with the platform. Our workers are slowly working through a fairly large backlog of requests that have been pending since the issues originally started. Things are continuing to stabilize, but it will take another hour or two before everything is back to normal and your requests are fully processed.
Sep 2, 19:59 UTC
Investigating - The Bitbucket team is still investigating the situation and working on a fix.
Sep 2, 18:47 UTC
Identified - Partial service has been restored. Repository clones and pulls should complete without problems over both HTTPS and SSH. Pushes and pull request updates are still operating at limited capacity and may not work as expected for some users.
Sep 2, 15:05 UTC
Investigating - We are currently experiencing a major outage impacting customers attempting to interact with repositories over SSH. This impacts both pushing and pulling from repositories. Some users are also experiencing 503 errors when attempting to interact with repositories over HTTPS. Our engineers are investigating and we will post an update shortly.
Sep 2, 12:50 UTC
Sep 1, 2015
Resolved - Our background workers have finished working through their epic backlogs. All systems should be fully operational again.
Sep 1, 18:48 UTC
Monitoring - Earlier this morning we applied a fix for the issue causing our background tasks to fall behind. Our workers are slowly working through a fairly large backlog, so the site should gradually return to normal. We'll monitor our systems closely and resolve this incident once our task queues have returned to normal levels.
Sep 1, 17:10 UTC
Identified - We've identified the issue causing our queue of background tasks to get backed up. We expect to be able to resolve the problem shortly.
Sep 1, 15:06 UTC
Investigating - We are currently experiencing an issue with background tasks not running in a timely manner. This impacts multiple systems, including activity feeds, notifications, and webhooks. We're investigating and will post an update once we know more.
Sep 1, 14:41 UTC
Aug 31, 2015

No incidents reported.

Aug 30, 2015

No incidents reported.

Aug 29, 2015

No incidents reported.

Aug 28, 2015

No incidents reported.

Aug 27, 2015
Resolved - Performance has been fully restored and all services are fully operational.
Aug 27, 17:47 UTC
Monitoring - Performance has returned to normal and all systems are currently operational. We're continuing to monitor the situation to ensure no regressions.
Aug 27, 16:31 UTC
Identified - We have identified the cause of the database performance issues and are working on addressing it. Performance is improving slowly, and we'll keep working on it until we're back to normal response time.
Aug 27, 15:34 UTC
Investigating - The site is currently experiencing degraded database performance, causing occasional 500s and slow page loads. We are investigating and will update this page once we've identified the cause of the problem.
Aug 27, 14:48 UTC
Aug 26, 2015
Resolved - The networking issue affecting webhooks has been resolved. Webhooks should be firing as normal now.
Aug 26, 16:43 UTC
Monitoring - We've identified the issue with webhooks and have applied a fix. Outgoing webhooks should be working again. We will continue monitoring the situation to be sure.
Aug 26, 16:05 UTC
Investigating - We're experiencing a temporary networking issue with our outbound webhooks. Our team is currently investigating.
Aug 26, 15:43 UTC
Aug 25, 2015
Resolved - This incident has been resolved.
Aug 25, 18:24 UTC
Monitoring - API load is back to mostly-normal levels. We're continuing to monitor the situation, but (for now at least) all workers are behaving themselves.
Aug 25, 16:29 UTC
Identified - We've identified a problem in our API workers, and we're taking steps to resolve it. UI traffic should be back to normal, and Git and Mercurial traffic is still unaffected.
Aug 25, 16:09 UTC
Investigating - Bitbucket is currently experiencing higher than normal load on its front-end systems. We're investigating.
Aug 25, 14:51 UTC
Aug 24, 2015

No incidents reported.

Aug 23, 2015

No incidents reported.

Aug 22, 2015

No incidents reported.

Aug 21, 2015

No incidents reported.

Aug 20, 2015

No incidents reported.

Aug 19, 2015

No incidents reported.