Collaboration Benchmark Report


This report evaluates Univer's collaboration engine under real-time multi-user editing and summarizes environment, metrics, and results.

Introduction

Real-time collaboration performance depends heavily on the number of concurrent users. According to the paper "Performance of real-time collaborative editors at large scale: User perspective," the number of collaborators is the dominant factor. Publicly documented limits (as of 2022):

Product          Max collaborators
Office 365       365
Tencent Docs     200
Shimo            200
Google Sheets    200
Feishu Sheets    200

Engine Overview

Univer supports distributed deployment; the diagram below is a simplified single-node view:

Univer collaboration single-node

  • Golang (universer): networking and dispatch
  • Node.js (collaboration-server): OT Transform and Apply

The collaboration-server is stateful: it keeps active documents in memory, so changesets can be transformed and applied against the in-memory state without a storage round trip on every edit.
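
As a rough illustration, the sketch below shows the kind of transform-then-apply loop such a stateful server runs against its in-memory document state. The types and names here are assumptions for illustration, not the actual collaboration-server API.

```typescript
// Illustrative sketch only: the real collaboration-server API and changeset
// shape are not described in this report, so all names below are assumptions.
interface Changeset {
  revision: number; // revision of the document the client based its edit on
  ops: unknown[];   // the edit operations themselves
}

class ActiveDocument {
  private revision = 0;
  private history: Changeset[] = []; // applied changesets, kept in memory

  // Transform an incoming changeset against every changeset it has not seen,
  // apply it, and return the result for broadcast to the other clients.
  submit(incoming: Changeset): Changeset {
    const missed = this.history.filter((cs) => cs.revision > incoming.revision);
    const transformed = missed.reduce((cs, prior) => transform(cs, prior), incoming);
    const applied: Changeset = { ...transformed, revision: ++this.revision };
    this.history.push(applied);
    return applied;
  }
}

// Placeholder: a real OT transform rewrites `b.ops` so the operations still
// make sense after `a` has already been applied to the document.
function transform(b: Changeset, a: Changeset): Changeset {
  return { ...b };
}
```

Because the active document (and its recent changeset history) lives in memory, the Transform and Apply steps avoid a database round trip on every edit, which is why this path is kept on a stateful server.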

Key Metrics

Collaboration Latency

The time from when a client sends a changeset to when it is first applied by another client.
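
A minimal way to instrument this metric, assuming each changeset carries the sender's timestamp and that client clocks are reasonably synchronized (both are assumptions, not something this report specifies), is to compute the delta on the first remote client that applies it:

```typescript
// Hypothetical latency probe; field names are assumptions for illustration.
interface TimedChangeset {
  id: string;
  sentAt: number; // epoch milliseconds stamped by the sending client
}

// Called on a remote client when it applies a changeset it did not author.
function onRemoteApply(cs: TimedChangeset): void {
  const latencyMs = Date.now() - cs.sentAt; // valid only if clocks are in sync
  console.log(`collaboration latency for ${cs.id}: ${latencyMs} ms`);
}
```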

CS QPS (Collaboration Concurrency)

Number of changesets submitted per second to the same document.

Test Setup

  • Server: 4 cores, 8GB RAM
  • Deployment: Docker Compose single-node
  • Client edit frequency: ~0.15 ops/sec/user
  • At 200 users, CS QPS ≈ 30 (see the back-of-envelope check below)
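
The expected CS QPS follows directly from the user count and per-user edit rate; a quick check (a sketch, with variable names of our own choosing, not part of the test harness):

```typescript
// Back-of-envelope check of the expected changeset rate (sketch only).
const users = 200;
const editsPerSecondPerUser = 0.15;
const expectedCsQps = users * editsPerSecondPerUser; // 200 × 0.15 = 30
console.log(`expected CS QPS at ${users} users: ${expectedCsQps}`);
```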

We increased the number of users step by step and, at each step, observed the 99th-percentile latency over a 5-minute window.
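
For reference, a naive way to derive such a figure from a window of latency samples is shown below; the actual aggregation pipeline used for the benchmark is not described in this report, so treat this as an illustration only.

```typescript
// Naive 99th-percentile over a window of latency samples (illustration only).
function p99(samples: number[]): number {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil(sorted.length * 0.99) - 1);
  return sorted[index];
}

// e.g. latencies (ms) collected over a 5-minute window at one concurrency step
console.log(p99([120, 300, 980, 1250, 1400])); // -> 1400
```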

Results

Result 1

Result 2

At 200 concurrent users, collaboration latency is around 1.3 s, comparable to mainstream products. Latency grows exponentially as more collaborators are added.


Conclusion & Next Steps

  • On a 4-core, 8 GB (4C8G) server, 200 concurrent users achieve ~1.3 s collaboration latency
  • Latency increases exponentially as concurrency grows

Next steps:

  • Analyze Transform/Apply internal processing under load
  • Study network throughput vs latency
  • Evaluate multi-document concurrency capacity
