The United States-based cryptocurrency exchange Coinbase published a blog post explaining the technical issues that took place while the price of Bitcoin was hovering above $10K.
Why Does Coinbase Crash at Peak Times?
The Bitcoin community rejoiced when the price of BTC reached $10K, a breach that pushed the coin above the milestone; on 2 June 2020, the king coin recorded a high of $10,121. Although this post-halving high didn't last long, as Bitcoin miners sold off a large portion of their BTC, many Coinbase users faced difficulties during the rally.
The cryptocurrency exchange Coinbase faced immense backlash as it crashed just as the price of Bitcoin started to soar. This wasn't the first time Coinbase had dealt with such an issue, but the latest technical difficulty left several users comparing the exchange to traditional banks that buckle at the slightest inconvenience.
After a week of losing customers and facing the repercussions of the crash, Coinbase decided to break its silence and address the issue. The exchange revealed that it had identified the cause of the outage and had fixed it. As per the blog post, when the price of BTC hit $10K on 1 June at 16:05 PDT, the exchange experienced a 5x spike in traffic in a period of just four minutes. However, the exchange's autoscaling failed to keep pace with the significant increase in traffic.
The post further read,
“This traffic spike affected a number of our internal services, increasing latency between services. This led to process saturation of the web servers responsible for our API, where the number of incoming requests was greater than the number of listening processes, causing the requests to either be queued and timeout, or fail immediately.”
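The saturation the post describes can be sketched in miniature: a fixed pool of listeners with a bounded backlog, where requests that outrun the workers either queue up (and may time out) or fail immediately once the queue is full. This is an illustrative model only; the worker counts, queue size, and timeout below are assumed values, not Coinbase's actual configuration.

```python
import queue
import threading
import time

# Hypothetical model of API process saturation: a fixed pool of
# "listening processes" (worker threads here) behind a bounded queue.
NUM_WORKERS = 4          # fixed number of listeners (assumed value)
QUEUE_CAPACITY = 8       # bounded backlog (assumed value)
REQUEST_TIMEOUT = 0.5    # seconds a queued request may wait (assumed)

requests = queue.Queue(maxsize=QUEUE_CAPACITY)
results = {"served": 0, "timed_out": 0, "rejected": 0}
lock = threading.Lock()

def worker():
    while True:
        enqueued_at = requests.get()
        if enqueued_at is None:  # shutdown sentinel
            return
        # A request that waited past its timeout counts as a failure.
        if time.monotonic() - enqueued_at > REQUEST_TIMEOUT:
            with lock:
                results["timed_out"] += 1
        else:
            time.sleep(0.05)  # simulated handling time
            with lock:
                results["served"] += 1
        requests.task_done()

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

# Simulate a traffic spike: arrivals far outpace the drain rate.
for _ in range(100):
    try:
        requests.put_nowait(time.monotonic())
    except queue.Full:
        # Queue saturated: the request fails immediately.
        with lock:
            results["rejected"] += 1

requests.join()
for _ in threads:
    requests.put(None)
for t in threads:
    t.join()

print(results)
```

Running the sketch shows most of the burst being rejected outright, mirroring the "queued and timeout, or fail immediately" behaviour the post describes.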
Furthermore, many users faced issues communicating with coinbase.com, pro.coinbase.com, and the exchange's mobile apps, since the request error rate had reportedly risen to 50%.
Health checks also marked several instances as unhealthy, worsening the crash by taking them out of the load balancer. In the image below, the peaks are said to represent deploys while the dips mark instances classified as unhealthy.
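This failure loop can be illustrated with a toy model: when overloaded instances fail consecutive health checks, the balancer removes them, concentrating the same traffic on fewer survivors until they saturate too. All names, thresholds, and capacities below are illustrative assumptions, not Coinbase's real setup.

```python
# Hypothetical sketch of a health-check death spiral: instances that
# fail several consecutive checks are taken out of the load balancer,
# which pushes more load onto the remaining instances.
UNHEALTHY_THRESHOLD = 3       # consecutive failures before removal (assumed)
CAPACITY_PER_INSTANCE = 100   # requests/sec one instance handles (assumed)

def run_health_checks(instances, total_traffic):
    """One pass of health checks; returns instances still in rotation."""
    in_rotation = [i for i in instances if i["in_rotation"]]
    per_instance_load = total_traffic / max(len(in_rotation), 1)
    for inst in in_rotation:
        # An overloaded instance cannot answer its health check in time.
        check_passed = per_instance_load <= CAPACITY_PER_INSTANCE
        inst["failures"] = 0 if check_passed else inst["failures"] + 1
        if inst["failures"] >= UNHEALTHY_THRESHOLD:
            inst["in_rotation"] = False  # taken out of the load balancer
    return [i for i in instances if i["in_rotation"]]

instances = [{"in_rotation": True, "failures": 0} for _ in range(8)]
traffic = 1000  # a spike above the fleet's total capacity of 800

for tick in range(6):
    healthy = run_health_checks(instances, traffic)
    print(f"tick {tick}: {len(healthy)} instances in rotation")
```

After a few check intervals every instance has been removed from rotation, which is why removing saturated instances under load can deepen an outage rather than contain it.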
The exchange added,
“In an effort to mitigate the saturation, we redeployed the API at 16:20 PDT to increase the machines serving the traffic. Once this deploy completed, the previous deploy’s instances were taken out of rotation, leading to another 2 minute outage due to instances saturating and being marked unhealthy. This was handled automatically by our autoscaling.”
Additionally, the exchange claimed not only to have addressed the immediate issues but also to be planning improvements to its deployment process to reduce autoscaling difficulties.