After more than a day of monitoring we can now confirm that this issue has been resolved.
We found that a bug in the software running one controller of our storage infrastructure was causing network slowdowns for all file operations issued to that controller. We believe the bug was introduced early in the week, but it only manifested under high load.
On Thursday, the site was much busier than normal during our peak traffic time. We discovered some applications driving significantly higher traffic to Bitbucket's API endpoints and temporarily blocked them.
This appears to have exacerbated the problem, which then caused page timeouts across the entire site and some operations to fail.
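For readers curious what "temporarily blocking" heavy API clients can look like in practice, here is a minimal, hypothetical sketch of per-client throttling using a token bucket. This is purely illustrative, not Bitbucket's actual mechanism, and all names are invented for the example.

```python
import time

class TokenBucket:
    """Illustrative per-client throttle: allow a burst, then cap the rate."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on time elapsed since the last request.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                # request rejected (throttled)

# A burst of 15 rapid requests against a bucket sized for 10:
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
```

The first ten requests fit within the burst capacity; subsequent rapid requests are rejected until tokens replenish.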
Our storage infrastructure runs multiple controllers so we can remain highly available in the face of a failure. At approximately 1:15pm PST we chose to perform a manual failover and take-back to allow the affected controller to reset. Once the controller was reset the site began to return to normal.
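The failover-and-take-back sequence above can be sketched abstractly: service moves to the healthy partner controller, the affected controller is reset, and then service is returned to it. The code below is a simplified model under assumed names, not the vendor's actual controller software.

```python
class Controller:
    """Toy model of one storage controller in an active/standby pair."""

    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.active = False

def failover(active, standby):
    # Move service from the affected controller to its healthy partner.
    active.active = False
    standby.active = True

def take_back(original, partner):
    # After the reset clears the bad state, return service to the
    # original controller.
    original.healthy = True
    partner.active = False
    original.active = True

a = Controller("A")
b = Controller("B")
a.active = True
a.healthy = False      # the bug manifests under high load
failover(a, b)         # partner B takes over while A resets
take_back(a, b)        # A resumes service once healthy
```

Because both controllers can serve traffic, the pair stays available throughout the reset; only the brief failover transitions are visible to clients.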
As a result of this incident, we are working with our storage vendor to patch the bug that caused the slowdown. We believe we know how to keep our controllers from entering this state in the future.
We appreciate your support and patience while we worked to resolve the problem.