Collaboration Benchmark Report
This report evaluates Univer's collaboration engine under real-time multi-user editing and summarizes the test environment, key metrics, and results.
Introduction
Real-time collaboration performance depends heavily on the number of concurrent users. According to the paper "Performance of real-time collaborative editors at large scale: User perspective," the number of collaborators is the key factor affecting user-perceived performance. Publicly documented collaborator limits (as of 2022):
| | Office 365 | Tencent Docs | Shimo | Google Sheets | Feishu Sheets |
|---|---|---|---|---|---|
| Max collaborators | 365 | 200 | 200 | 200 | 200 |
Engine Overview
Univer supports distributed deployment; the diagram below is a simplified single-node view:

- Golang (universer): networking and dispatch
- Node.js (collaboration-server): OT Transform and Apply
The stateful collaboration-server keeps active documents in memory for fast processing.
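The Transform/Apply cycle inside the collaboration-server can be pictured roughly as below. This is a minimal TypeScript sketch under assumed, illustrative types (`Changeset`, `DocSession`) with a pass-through `transform`; it is not Univer's actual API.

```ts
// Illustrative sketch of what a stateful OT server does for each incoming
// changeset on a document it holds in memory. Names are not Univer's API.

type Op = { type: string; payload?: unknown };

interface Changeset {
  baseRevision: number; // server revision the client was at when producing these ops
  ops: Op[];
}

interface DocSession {
  revision: number;     // latest server revision
  history: Changeset[]; // applied changesets; index i was applied at revision i
}

// The real transform is defined per operation pair (insert vs insert,
// insert vs delete, ...); it is a pass-through here to keep the sketch short.
function transform(incoming: Changeset, applied: Changeset): Changeset {
  return { ...incoming, baseRevision: applied.baseRevision + 1 };
}

// Per-changeset handling: Transform against everything the sender has not
// seen yet, Apply to the in-memory document, then broadcast the result.
function handleChangeset(doc: DocSession, incoming: Changeset): Changeset {
  let cs = incoming;
  for (const applied of doc.history.slice(incoming.baseRevision)) {
    cs = transform(cs, applied);
  }
  doc.history.push(cs); // "Apply": commit to the document held in memory
  doc.revision += 1;
  return cs;            // broadcast to the other collaborators on this document
}
```

Keeping the active document and its history in memory is what makes this loop cheap per changeset; the cost that grows with concurrency is the number of transforms each incoming changeset must be rebased over.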
Key Metrics
Collaboration Latency
Time from when a changeset is sent by one client to when it is applied by the first receiving client.
CS QPS (Collaboration Concurrency)
Number of changesets submitted per second to the same document.
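To make the two definitions concrete, the sketch below shows one way to collect both metrics from the client side; the message shape and hook names are assumptions for illustration, not part of Univer's protocol.

```ts
// Hypothetical client-side instrumentation for the two metrics above.

interface ChangesetMessage {
  id: string;     // unique id attached by the sending client
  sentAt: number; // epoch milliseconds, stamped on send
}

const pending = new Map<string, number>();

// Record the send time when a client submits a changeset.
function onChangesetSent(msg: ChangesetMessage): void {
  pending.set(msg.id, msg.sentAt);
}

// Collaboration latency: the first report of the changeset being applied
// on another client stops the timer.
function onFirstClientApplied(id: string, appliedAt: number): number | undefined {
  const sentAt = pending.get(id);
  if (sentAt === undefined) return undefined;
  pending.delete(id);
  return appliedAt - sentAt; // latency in milliseconds
}

// CS QPS: changesets submitted to the same document per second of the window.
function csQps(changesetsInWindow: number, windowSeconds: number): number {
  return changesetsInWindow / windowSeconds;
}
```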
Test Setup
- Server: 4 cores, 8GB RAM
- Deployment: Docker Compose single-node
- Client edit frequency: ~0.15 ops/sec/user
- At 200 users, CS QPS ≈ 30 (200 × 0.15)
We increased the number of concurrent users step by step and, at each step, observed the 99th-percentile collaboration latency over a 5-minute window.
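As a rough check of the numbers above, here is a small TypeScript sketch of the driver parameters and the percentile calculation; the constant and function names are made up for illustration and do not reflect the actual test harness.

```ts
// Load-test parameters from this setup, plus a p99 helper.

const USERS = 200;
const OPS_PER_SEC_PER_USER = 0.15;
const WINDOW_SECONDS = 5 * 60;

// Expected collaboration concurrency on the shared document: 200 * 0.15 ≈ 30.
const expectedCsQps = USERS * OPS_PER_SEC_PER_USER;

// Each simulated user submits a changeset on average every 1 / 0.15 ≈ 6.7 s.
const editIntervalMs = 1000 / OPS_PER_SEC_PER_USER;

// 99th percentile of the latency samples collected during one window.
function p99(latenciesMs: number[]): number {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil(sorted.length * 0.99) - 1);
  return sorted[index];
}

console.log(expectedCsQps, editIntervalMs, WINDOW_SECONDS); // ≈ 30, ≈ 6667 ms, 300 s
// p99() is then applied to the latency samples gathered over each 5-minute window.
```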
Results


At 200 concurrent users, the 99th-percentile collaboration latency is around 1.3 s, comparable to mainstream products. Latency grows exponentially as more collaborators are added.

Conclusion & Next Steps
- On a 4C8G server, 200 concurrent users achieve ~1.3s latency
- Latency increases exponentially as concurrency grows
Next steps:
- Analyze Transform/Apply internal processing under load
- Study network throughput vs latency
- Evaluate multi-document concurrency capacity