TLDR:
- Sui Mainnet was halted for 6 hours on January 14 due to a divergence in validator consensus.
- There were no certified state forks, rollbacks, or risks to user funds during the network outage.
- The root cause can be traced back to a rare consensus commit bug under specific garbage collection conditions.
- Recovery involved removing bad data, canary deployment, validator updates, and checkpoint resumption.
On January 14, 2026, Sui Mainnet experienced a 6-hour outage caused by a rare divergence in validator consensus. Transaction submissions timed out, but reads continued from the last certified state.
No user funds were at risk and no forks occurred. The network resumed normal operation after validators deployed a fix, replayed consensus, and safely restarted checkpoint certification.
Consensus Divergence Triggers a Safe Network Halt
The Sui Mainnet outage was caused by an edge-case bug in consensus commit logic. Under certain garbage collection conditions, an optimization path led different validators to compute divergent candidate checkpoints.
When more than a third of voting stake signed conflicting checkpoint summaries, no single summary could gather a two-thirds quorum and certification stalled. The network’s security architecture responded as designed, halting progress rather than certifying an inconsistent state.
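To make the stall concrete, here is a minimal Rust sketch of quorum-based checkpoint certification. The types, names, and stake numbers are illustrative assumptions, not Sui’s actual implementation; it only shows the arithmetic of why conflicting signatures from more than a third of stake prevent any digest from reaching a two-thirds quorum.

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for a checkpoint summary digest.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct CheckpointDigest([u8; 32]);

/// A validator's signature over one candidate digest, weighted by stake.
struct Vote {
    digest: CheckpointDigest,
    stake: u64,
}

/// Returns the certified digest if any single digest gathered a quorum
/// (strictly more than 2/3 of total stake); otherwise certification stalls.
fn try_certify(votes: &[Vote], total_stake: u64) -> Option<CheckpointDigest> {
    let mut tally: HashMap<CheckpointDigest, u64> = HashMap::new();
    for v in votes {
        *tally.entry(v.digest).or_insert(0) += v.stake;
    }
    tally
        .into_iter()
        .find(|(_, stake)| *stake * 3 > total_stake * 2)
        .map(|(digest, _)| digest)
}

fn main() {
    let a = CheckpointDigest([0xaa; 32]);
    let b = CheckpointDigest([0xbb; 32]); // divergent candidate

    // 40 of 100 stake signed a conflicting digest: neither side can
    // clear the 2/3 threshold, so the network halts instead of forking.
    let votes = vec![
        Vote { digest: a, stake: 60 },
        Vote { digest: b, stake: 40 },
    ];
    assert_eq!(try_certify(&votes, 100), None); // stalled, no fork
}
```

The design choice matters here: a stalled `try_certify` means downtime, but a forked one would mean two certified histories, which is far worse.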
During the incident, transaction submissions timed out, execution stopped, and users experienced a temporary interruption in network activity. However, remote procedure call (RPC) reads continued to serve the last certified state.
Most importantly, no forks occurred and user funds were never at risk. Quarantine mechanisms and checkpoint certification prevented any divergent transactions from being finalized.
This demonstrates the effectiveness of Sui’s layered security and consensus design.
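As a rough illustration of that behavior, the sketch below uses hypothetical types and methods (not Sui’s RPC code) to show a node that fails submissions fast while certification is stalled, yet keeps answering reads from the last certified checkpoint:

```rust
/// Illustrative node state during the incident; field and method
/// names are assumptions for this sketch.
struct NodeState {
    last_certified_seq: u64,
    certification_stalled: bool,
}

enum RpcError {
    SubmissionTimeout,
}

impl NodeState {
    /// Reads always resolve against the last certified checkpoint,
    /// so uncertified (possibly divergent) execution results are
    /// never visible to clients.
    fn read_balance(&self, _address: &str) -> u64 {
        // Placeholder lookup against state as of `last_certified_seq`.
        0
    }

    /// Submissions fail while consensus is halted instead of being
    /// executed against an uncertified state.
    fn submit_transaction(&self, _tx_bytes: &[u8]) -> Result<(), RpcError> {
        if self.certification_stalled {
            return Err(RpcError::SubmissionTimeout);
        }
        Ok(())
    }
}

fn main() {
    let node = NodeState { last_certified_seq: 123_456, certification_stalled: true };
    assert!(node.submit_transaction(b"tx").is_err()); // writes time out
    let _balance = node.read_balance("0xabc");        // reads still served
    println!("serving reads at checkpoint {}", node.last_certified_seq);
}
```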
Recovery and future improvements
Recovery involved several stages. First, the team diagnosed the divergence, developed a fix, and removed the incorrect consensus data.
Validators deployed the updated binary through a canary rollout, verified checkpoint production, and safely replayed the consensus data.
Once a quorum signed the same checkpoint summary, certification resumed and network operations returned to normal.
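That staged sequence can be summarized as the hypothetical outline below. The function names stand in for the operator actions described above and are not tooling that Sui ships:

```rust
/// Illustrative recovery runbook; every function is a stub for an
/// operator action, not a real Sui command.
#[derive(Debug)]
enum RecoveryError {
    CanaryDiverged,
    QuorumNotReached,
}

fn recover() -> Result<(), RecoveryError> {
    // 1. Remove the incorrect consensus data left behind by the bug.
    purge_bad_consensus_data();

    // 2. Canary: run the patched binary on a few validators first and
    //    confirm they produce consistent checkpoints.
    if !canary_produces_consistent_checkpoints() {
        return Err(RecoveryError::CanaryDiverged);
    }

    // 3. Roll the fix out to the remaining validators and replay
    //    consensus from the last certified checkpoint.
    deploy_fix_to_all_validators();
    replay_consensus_from_last_certified();

    // 4. Resume certification only once a quorum signs the same summary.
    if !quorum_signed_same_summary() {
        return Err(RecoveryError::QuorumNotReached);
    }
    Ok(())
}

// Stubs standing in for manual and automated operator steps.
fn purge_bad_consensus_data() {}
fn canary_produces_consistent_checkpoints() -> bool { true }
fn deploy_fix_to_all_validators() {}
fn replay_consensus_from_last_certified() {}
fn quorum_signed_same_summary() -> bool { true }

fn main() {
    recover().expect("recovery completed");
}
```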
Looking ahead, Sui Labs is implementing improvements to reduce downtime during rare incidents. Faster detection and recovery processes will halt consensus sooner when inconsistencies appear.
Operator tools are being improved to automate cleanup of inconsistent internal state. Additionally, randomized consensus testing has been expanded to reliably reproduce these extreme scenarios before deployment, as sketched below.
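For a flavor of what randomized consensus testing looks like, here is a self-contained Rust sketch. A toy, order-independent commit function is driven through thousands of seeded random transaction batches and garbage collection cutoffs, and the digests are checked for agreement; any failing seed can be replayed deterministically. The commit logic is an assumption for illustration only, not Sui’s consensus code:

```rust
/// Tiny deterministic RNG so any failing seed can be replayed exactly.
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state >> 33
}

/// Toy commit: the digest a validator derives from the transactions it
/// observed. Sorting first makes it schedule-independent, which is the
/// invariant under test.
fn candidate_digest(mut txs: Vec<u64>) -> u64 {
    txs.sort_unstable();
    txs.iter().fold(0u64, |acc, &tx| acc.rotate_left(7) ^ tx)
}

fn main() {
    for seed in 0..10_000u64 {
        let mut rng = seed;
        // Random batch of transactions plus a random GC cutoff, mimicking
        // the kind of edge condition behind the incident.
        let n = (lcg(&mut rng) % 20 + 1) as usize;
        let txs: Vec<u64> = (0..n).map(|_| lcg(&mut rng)).collect();
        let gc_cutoff = lcg(&mut rng) % 20;

        // Each validator sees the surviving transactions in a different
        // order; all of them must still commit the same digest.
        let surviving: Vec<u64> = txs
            .iter()
            .copied()
            .filter(|&tx| tx % 20 >= gc_cutoff)
            .collect();
        let mut reordered = surviving.clone();
        reordered.reverse(); // stand-in for an adversarial schedule

        assert_eq!(
            candidate_digest(surviving),
            candidate_digest(reordered),
            "divergence reproduced at seed {seed}"
        );
    }
    println!("10,000 randomized schedules: no divergence");
}
```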
This incident confirms that Sui’s design prioritizes safety over liveness during extreme events. While disruptive, the outage preserved state consistency, prevented any loss of user funds, and highlighted areas to improve recovery speed and network resilience.
In conclusion, the temporary suspension of the Sui network highlights the strength of its safety-first design, which safeguarded all user funds. Real-time on-chain SUI data shows that despite the pause, network activity and wallet balances remained stable.
This reflects confidence in the chain’s architecture. The rapid recovery after validator fixes and the resumption of checkpointing demonstrates that even rare consensus divergences can be managed effectively.
This incident reinforces Sui’s commitment to secure and reliable blockchain operations while maintaining consistent on-chain performance.

