All Systems Operational
Website: Operational
API: Operational
SSH: Operational
Git via HTTPS: Operational
Mercurial via HTTPS: Operational
Webhooks: Operational
Source downloads: Operational
Pipelines: Operational
Email delivery: Operational
Atlassian account signup and login: Operational
Status legend: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance
System Metrics
[Charts: Website average response time, API average response time, Git average response time (HTTPS), Mercurial average response time (HTTPS)]
Past Incidents
Feb 25, 2017

No incidents reported today.

Feb 24, 2017

No incidents reported.

Feb 23, 2017

No incidents reported.

Feb 22, 2017
Resolved - This incident has been resolved.
Feb 22, 23:46 UTC
Monitoring - We've brought our cluster of build agents back online successfully and are closely observing the system's throughput.
Feb 22, 23:08 UTC
Update - We're bringing our cluster of build agents back online for customers running Pipelines builds.

Customers may notice that their pending builds have errored out; please re-run these builds once the service is healthy again (an illustrative sketch for re-triggering a build follows this incident timeline).
Feb 22, 22:14 UTC
Identified - The customer build queue continues to decrease, and the team has identified the root cause.
Feb 22, 21:03 UTC
Update - The build queue is decreasing after we manually scaled up capacity. The team is still actively investigating the root cause and monitoring the queue size.
Feb 22, 20:54 UTC
Investigating - Customer builds are stuck pending and the cause of the delay is not yet known. We are currently investigating.
Feb 22, 20:37 UTC
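For reference, an errored build can be re-run from the Pipelines UI, or triggered again through the Bitbucket Cloud REST API. The sketch below is illustrative only: the endpoint path, payload shape, and app-password authentication are assumptions based on the 2.0 Pipelines API and may not match the API available to your account, so check the REST documentation before relying on it.

# Hypothetical sketch: re-trigger a Pipelines build on a branch after the
# incident above. The endpoint, payload, and auth method are assumptions,
# not an official Bitbucket recommendation.
import requests

def rerun_branch_pipeline(workspace, repo_slug, branch, username, app_password):
    url = f"https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/pipelines/"
    payload = {
        "target": {
            "type": "pipeline_ref_target",
            "ref_type": "branch",
            "ref_name": branch,
        }
    }
    response = requests.post(url, json=payload, auth=(username, app_password))
    response.raise_for_status()
    return response.json()  # details of the newly queued pipeline

# Example with placeholder values:
# rerun_branch_pipeline("my-workspace", "my-repo", "master", "user", "app-password")
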
Feb 21, 2017
Resolved - This incident has been resolved.
Feb 21, 16:29 UTC
Identified - Emails to Google domains are currently delayed, and you may notice delays in receiving notifications from Atlassian.
We appreciate your patience while we work on fixing the issue and will notify you once this has been addressed.
Feb 21, 11:08 UTC
Feb 20, 2017

No incidents reported.

Feb 19, 2017

No incidents reported.

Feb 18, 2017

A piece of hardware in Bitbucket's internal network infrastructure failed at roughly 17:38 UTC on Saturday, 18 February 2017. Automated failover systems kicked in and performed as intended, and the load balancers began queueing connection requests (also as intended) while the failover completed. Unfortunately, all of this intentional behavior led to an unintentional period of service degradation between roughly 17:38 and 17:45 UTC: load balancer backlogs caused many requests to wait far longer than anticipated, and some requests may have timed out as a result.

We apologize for any inconvenience, and we're re-examining our failover strategy so that the backlog stays small the next time this happens.
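
For clients that hit the degradation window, a simple retry with exponential backoff is usually enough to ride out a failover of this length. The sketch below is a minimal illustration of that idea, assuming the Python requests library; the URL, attempt count, and delay values are placeholders rather than Bitbucket-recommended settings.

# Minimal illustration: retry a request with exponential backoff so that a
# short failover window (a few minutes, as described above) does not turn
# a single timeout into a hard failure. All values here are placeholders.
import time
import requests

def get_with_retry(url, attempts=4, timeout=10, base_delay=2.0):
    for attempt in range(attempts):
        try:
            response = requests.get(url, timeout=timeout)
            if response.status_code < 500:
                return response  # success, or a client error not worth retrying
        except requests.exceptions.RequestException:
            pass  # timeout or connection error: fall through and retry
        if attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # back off: 2s, 4s, 8s, ...
    raise RuntimeError(f"{url} still failing after {attempts} attempts")

# Example with a placeholder URL:
# archive = get_with_retry("https://bitbucket.org/<workspace>/<repo>/get/master.tar.gz")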

Feb 17, 2017

No incidents reported.

Feb 16, 2017

No incidents reported.

Feb 15, 2017

No incidents reported.

Feb 14, 2017

No incidents reported.

Feb 13, 2017

No incidents reported.

Feb 12, 2017

No incidents reported.

Feb 11, 2017

No incidents reported.