Docker Compose Deployment

Deployment Configuration

How to Modify Deployment Configuration

Univer's default deployment configuration is located in the .env file under the installation directory. If you need to modify the installation configuration, do not edit the .env file directly. Instead, create a custom configuration file named .env.custom in the same directory. Items configured in .env.custom will override the default values in the .env file.
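
The exact merge is handled internally by Univer's startup script; as a minimal illustration of the "later file wins" behavior the override relies on, assuming plain shell sourcing:

```shell
# Illustration only: Univer's startup script performs the merge itself.
# Sourcing .env first and .env.custom second shows why values in
# .env.custom override the defaults.
set -eu
workdir=$(mktemp -d)

printf 'HOST_NGINX_PORT=8000\nEVENT_SYNC=false\n' > "$workdir/.env"
printf 'EVENT_SYNC=true\n' > "$workdir/.env.custom"

set -a                      # export every variable assigned below
. "$workdir/.env"           # defaults
. "$workdir/.env.custom"    # overrides win because they are read last
set +a

echo "HOST_NGINX_PORT=$HOST_NGINX_PORT"  # kept from .env
echo "EVENT_SYNC=$EVENT_SYNC"            # overridden by .env.custom
```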

Configuration Details

Enable Identity Authentication and Permission Management

The relevant configuration items are explained in Integrate with Your System via USIP and are not repeated here. To enable USIP, use a configuration like the following:

.env.custom
# usip about
USIP_ENABLED=true # Set to true to enable USIP
USIP_URI_CREDENTIAL=https://your-domain/usip/credential
USIP_URI_USERINFO=https://your-domain/usip/userinfo
USIP_URI_ROLE=https://your-domain/usip/role
USIP_URI_COLLABORATORS=https://your-domain/usip/collaborators
USIP_URI_UNITEDITTIME=https://your-domain/usip/unit-edit-time

# apikey is optional
USIP_APIKEY=

# auth about
AUTH_PERMISSION_ENABLE_OBJ_INHERIT=false
AUTH_PERMISSION_CUSTOMER_STRATEGIES=
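
When set, AUTH_PERMISSION_CUSTOMER_STRATEGIES takes a JSON array of strategy objects. As a sketch drawn from the Configuration Example at the end of this page, where actions 3 and 6 with role 2 restrict copying and printing to the document owner:

```ini
# Restrict copy (action 3) and print (action 6) to the owner role (2);
# the action/role codes follow the Configuration Example section below.
AUTH_PERMISSION_CUSTOMER_STRATEGIES=[ {"action": 3, "role": 2}, {"action": 6, "role": 2} ]
```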

Enable Univer Event Publishing

The relevant configuration items are explained in Integrate with Your System via Univer Event Sync and are not repeated here. To enable event publishing, use a configuration like the following:

.env.custom
EVENT_SYNC=true # Set to true to enable

Use Self-Maintained Infrastructure Components

RDS

Note that the Temporal component used by Univer does not support GaussDB or DamengDB. If you choose either of these databases, Univer will install a separate PostgreSQL instance for Temporal. Temporal stores only the workflow state of import/export tasks and holds no document data; if that state is lost, in-progress import/export tasks fail, with no other impact.

Use PostgreSQL-Compatible RDS
.env.custom
# RDS config
DISABLE_UNIVER_RDS=true # When using your own RDS, disable Univer's default PostgreSQL

DATABASE_DRIVER=postgresql # Set to postgresql for PostgreSQL-compatible databases
DATABASE_HOST=your-database-host
DATABASE_PORT=your-database-port
DATABASE_DBNAME=univer # Univer database init scripts use this name by default; keep it consistent if changed
DATABASE_USERNAME=user-name # Grant select, insert, update, delete permissions
DATABASE_PASSWORD=password

Use MySQL-Compatible RDS
.env.custom
# RDS config
DISABLE_UNIVER_RDS=true # When using your own RDS, disable Univer's default PostgreSQL

DATABASE_DRIVER=mysql # Set to mysql for MySQL-compatible databases
DATABASE_HOST=your-database-host
DATABASE_PORT=your-database-port
DATABASE_DBNAME=univer # Univer database init scripts use this name by default; keep it consistent if changed
DATABASE_USERNAME=user-name # Grant select, insert, update, delete permissions
DATABASE_PASSWORD=password

Use GaussDB
.env.custom
# RDS config
DISABLE_UNIVER_RDS=true # When using your own RDS, disable Univer's default PostgreSQL

DATABASE_DRIVER=gaussdb
DATABASE_HOST=your-database-host
DATABASE_PORT=your-database-port
DATABASE_DBNAME=univer # Univer database init scripts use this name by default; keep it consistent if changed
DATABASE_USERNAME=user-name # Grant select, insert, update, delete permissions
DATABASE_PASSWORD=password

Use DamengDB
.env.custom
# RDS config
DISABLE_UNIVER_RDS=true # When using your own RDS, disable Univer's default PostgreSQL

DATABASE_DRIVER=dameng
DATABASE_HOST=your-database-host
DATABASE_PORT=your-database-port
DATABASE_DBNAME=univer # Univer database init scripts use this name by default; keep it consistent if changed
DATABASE_USERNAME=user-name # Grant select, insert, update, delete permissions
DATABASE_PASSWORD=password

Redis
.env.custom
# redis config
DISABLE_UNIVER_REDIS=true # When using your own Redis, disable Univer's built-in Redis

# if you use redis cluster, use comma ',' to separate multiple addresses
# for example: REDIS_ADDR=192.168.1.5:6001,192.168.1.5:6002,192.168.1.5:6003
REDIS_ADDR=host:port[,host:port]
REDIS_USERNAME=user-name
REDIS_PASSWORD=password
REDIS_DB=0

# redis tls config
# Supports three modes: insecure, CA verification only, and client certificate (mTLS) verification.
# CA, certificate, and private key can be provided as file paths or directly as file content.
# If using file paths, be sure to use the container virtual path after Docker volume mounting.
REDIS_TLS_ENABLED=false
REDIS_TLS_INSECURE=false
REDIS_TLS_CA=
REDIS_TLS_CERT=
REDIS_TLS_KEY=
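
The comma-separated REDIS_ADDR cluster format can be split into individual host:port pairs, for example for a pre-deployment reachability check. A minimal sketch (the redis-cli call is commented out because it needs a live node):

```shell
set -eu
REDIS_ADDR="192.168.1.5:6001,192.168.1.5:6002,192.168.1.5:6003"

# Split the cluster address list on commas, one node per iteration.
IFS=','
for node in $REDIS_ADDR; do
  host=${node%:*}   # strip the trailing :port
  port=${node##*:}  # keep everything after the last colon
  echo "node=$host port=$port"
  # Pre-flight check against a live node (requires redis-cli):
  # redis-cli -h "$host" -p "$port" --user "$REDIS_USERNAME" -a "$REDIS_PASSWORD" ping
done
unset IFS
```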

MQ
.env.custom
DISABLE_UNIVER_MQ=true # When using your own MQ, disable Univer's built-in MQ

RABBITMQ_CLUSTER_ENABLED=true # Must be set to true
RABBITMQ_CLUSTER_USERNAME=user-name # Needs Declare Exchange, Produce, Consume permissions
RABBITMQ_CLUSTER_PASSWORD=password

# use comma to separate multiple addresses
# for example: RABBITMQ_CLUSTER_ADDR=192.168.1.2:5672,192.168.1.5:5672,192.168.1.7:5672
# Every addr must be readable and writable; Univer will poll the hosts until a connection succeeds
RABBITMQ_CLUSTER_ADDR=host:port[,host:port]

# RABBITMQ_CLUSTER_VHOST is the vhost of the rabbitmq cluster. If you don't set it, the default value is /
# for example: RABBITMQ_CLUSTER_VHOST=univer
RABBITMQ_CLUSTER_VHOST=/
RABBITMQ_CLUSTER_SCHEMA=amqp

Object Storage
.env.custom
DISABLE_UNIVER_S3=true # When using your own object storage, disable Univer's built-in MinIO

# s3 config
S3_USER=user
S3_PASSWORD=password
S3_REGION=your-inner-s3like-region # If the S3-like service has no such concept, fill in any string

# S3_PATH_STYLE, can be set to true or false
# If set to true, use Path-Style to construct access URLs
# If set to false, use Virtual-Host Style to construct access URLs
S3_PATH_STYLE=true|false

# S3_ENDPOINT address provided for internal service access
S3_ENDPOINT=inner-visit-host:port

# S3_ENDPOINT_PUBLIC publicly accessible address, used when the client directly connects to object storage to download files
S3_ENDPOINT_PUBLIC=public-visit-host:port

# S3_DEFAULT_BUCKET configures which bucket to use
S3_DEFAULT_BUCKET=default-bucket-name
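
The difference between the two S3_PATH_STYLE modes is only where the bucket name appears in the object URL. A sketch using the variable names above (the endpoint, bucket, and object key are illustrative):

```shell
set -eu
S3_ENDPOINT="inner-visit-host:9000"   # illustrative endpoint
S3_DEFAULT_BUCKET="univer"
object_key="doc/123.xlsx"             # illustrative object key

# Path-style: bucket name appears in the URL path.
path_style_url="http://${S3_ENDPOINT}/${S3_DEFAULT_BUCKET}/${object_key}"

# Virtual-host style: bucket name becomes part of the hostname.
virtual_host_url="http://${S3_DEFAULT_BUCKET}.${S3_ENDPOINT}/${object_key}"

echo "S3_PATH_STYLE=true  -> $path_style_url"
echo "S3_PATH_STYLE=false -> $virtual_host_url"
```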

Enable Univer Built-in Observability Components

.env.custom
# observability config
ENABLE_UNIVER_OBSERVABILITY=true # Set to true to enable
GRAFANA_USERNAME=set-your-admin-user-name-here # Set Grafana admin username
GRAFANA_PASSWORD=set-your-admin-user-password-here # Set Grafana admin password
HOST_GRAFANA_PORT=set-the-port-you-want # Grafana exposed port
Service Capacity Configuration

.env.custom
UNIVERSER_REPLICATION_CNT=2 # universer instance count
COLLABORATION_SERVER_REPLICATION_CNT=2 # collaboration service instance count
COLLABORATION_SERVER_MEMORY_LIMIT=2048 # collaboration service max memory limit, in MB
COLLABORATION_HELPER_REPLICATION_CNT=2 # helper service instance count, note: introduced since v0.13.0
COLLABORATION_HELPER_MEMORY_LIMIT=2048 # helper max memory limit, in MB, note: introduced since v0.13.0

# import/export exchange-worker capacity-related config
EXCHANGE_WORKER_REPLICATION_CNT=1 # working exchange worker count
EXCHANGE_WORKER_MEMORY_LIMIT=4096 # MB, the memory limit of each exchange-worker.
EXCHANGE_WORKER_IMPORT_CONCURRENT=1 # how many import tasks each worker can do at the same time.
EXCHANGE_WORKER_EXPORT_CONCURRENT=1 # how many export tasks each worker can do at the same time.
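
It is worth checking the worst-case memory these limits allow against the host's RAM. A sketch using the default values above (universer has no memory-limit variable in this file, so it is excluded):

```shell
set -eu
COLLABORATION_SERVER_REPLICATION_CNT=2
COLLABORATION_SERVER_MEMORY_LIMIT=2048   # MB per instance
COLLABORATION_HELPER_REPLICATION_CNT=2
COLLABORATION_HELPER_MEMORY_LIMIT=2048
EXCHANGE_WORKER_REPLICATION_CNT=1
EXCHANGE_WORKER_MEMORY_LIMIT=4096

# Worst case (MB) if every replica hits its limit at the same time.
total_mb=$(( COLLABORATION_SERVER_REPLICATION_CNT * COLLABORATION_SERVER_MEMORY_LIMIT
           + COLLABORATION_HELPER_REPLICATION_CNT * COLLABORATION_HELPER_MEMORY_LIMIT
           + EXCHANGE_WORKER_REPLICATION_CNT * EXCHANGE_WORKER_MEMORY_LIMIT ))
echo "worst-case memory: ${total_mb} MB"  # 2*2048 + 2*2048 + 1*4096 = 12288
```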

Tiered Collaborative Editing Scheduling Configuration

Introduced in v0.15.0; disabled by default and must be enabled manually. The system decides which tier of the collaboration service cluster to schedule a workbook to based on its total cell count and its actual import time. Newly imported workbooks are scheduled by import time; workbooks created in other ways, or for which a new snapshot has been generated, are scheduled by total cell count. After each snapshot is created, the relevant metrics are persisted to the database and the scheduling decision is updated.

The essence of tiered scheduling is deploying the collaboration service as multiple clusters and specifying which cluster handles spreadsheets of different scales through rules. The data structure of the rule configuration is as follows:

unitRoutingConf:
  tiers:
    - tierName: normal
      enable: true
      upgradeCellsThreshold: 0
      downgradeCellsThreshold: 0
      upgradeImportTimeThreshold: 0
    - tierName: large
      enable: true
      upgradeCellsThreshold: 1000
      downgradeCellsThreshold: 900
      upgradeImportTimeThreshold: 10
    - tierName: huge
      enable: true
      upgradeCellsThreshold: 10000
      downgradeCellsThreshold: 9000
      upgradeImportTimeThreshold: 60

Description:

  • tiers: Describes information about each cluster; theoretically any number of clusters can be configured.
  • enable: Sets whether to enable this cluster; set to false and the cluster will be ignored.
  • upgradeCellsThreshold: Specifies the total sheet cell count required to schedule to this cluster.
  • downgradeCellsThreshold: If the sheet is currently scheduled to this cluster, it can only be downgraded to a lower-tier cluster when the total cell count falls below this value. This avoids frequent switching for sheets whose scale is near the tier boundary.
  • upgradeImportTimeThreshold: For newly imported sheets without total cell count statistics, this configuration is used for scheduling. The mechanism is the same as upgradeCellsThreshold, except the metric is actual import time.

You must ensure that at least one cluster has all metric requirements set to 0, so that every sheet can be scheduled.

Scheduling examples under the sample configuration above:

  • workbook1 has an initial cell count of 0 after creation and is scheduled to the normal cluster.
  • When workbook1 is edited and the total cell count reaches 1000, it is scheduled to the large cluster.
  • When workbook1's total cell count drops to 999, because it is still above the downgrade threshold of 900, it remains scheduled to the large cluster.
  • When workbook1's total cell count further drops to 899, it is downgraded back to the normal cluster.
  • workbook2 is imported with an actual time of 61 seconds, which exceeds the huge cluster threshold of 60 seconds, so it is scheduled to the huge cluster.

Univer's Docker Compose deployment script provides options to deploy one or two tiers (separating normal and large spreadsheets). The ENABLE_LARGE_TIER_COLLABORATION_SERVER parameter controls both whether the large-spreadsheet cluster is used for scheduling and whether Docker Compose actually deploys its services. If you need finer-grained tier control or a K8s deployment, adjust the docker compose / charts orchestration to deploy the corresponding collaboration-server clusters, then configure the scheduling routing table (the tiers array) as described above.

By default, only two tiers are configured: normal and large.

.env.custom
# Configure to enable/disable large cluster, true-enable, false-disable, default disabled
ENABLE_LARGE_TIER_COLLABORATION_SERVER=false

# If large cluster is enabled, sheets will be scheduled to the large cluster when total cell count reaches this scale, default 2M
LARGE_TIER_MIN_CELLS_COUNT=2000000

# If the sheet is currently scheduled to the large cluster, it will only be downgraded back to the normal cluster when total cell count falls below this value, default 1.95M
LARGE_TIER_DOWNGRADE_CELLS_COUNT=1950000

# If large cluster is enabled, workbooks will be scheduled to the large cluster when import time exceeds this value, default 10s
LARGE_TIER_MIN_IMPORT_SECONDS=10

# large cluster collaboration service instance count, default 2
LARGE_TIER_COLLABORATION_SERVER_REPLICATION_CNT=2

# large cluster collaboration service instance memory limit, recommended at least 8GB, default 8GB
LARGE_TIER_COLLABORATION_SERVER_MEMORY_LIMIT=8192
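
Under the default two-tier setup, these environment variables map (as a sketch) onto the routing-table structure shown earlier:

```yaml
unitRoutingConf:
  tiers:
    - tierName: normal
      enable: true
      upgradeCellsThreshold: 0
      downgradeCellsThreshold: 0
      upgradeImportTimeThreshold: 0
    - tierName: large
      enable: true                      # ENABLE_LARGE_TIER_COLLABORATION_SERVER
      upgradeCellsThreshold: 2000000    # LARGE_TIER_MIN_CELLS_COUNT
      downgradeCellsThreshold: 1950000  # LARGE_TIER_DOWNGRADE_CELLS_COUNT
      upgradeImportTimeThreshold: 10    # LARGE_TIER_MIN_IMPORT_SECONDS
```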

Network Configuration

If Univer's default exposed ports conflict with other services, you can modify them as needed:

.env.custom
# If it conflicts with existing network configuration, you can modify the CIDR used by docker
DOCKER_NETWORK_SUBNET=172.30.0.0/16

# Univer API service exposed port; if it conflicts with other host services, change to another port
HOST_NGINX_PORT=the-univer-server-api-port-you-want

# If you choose Univer's built-in MinIO as object storage and there is a port conflict, add this configuration item
# If you are using self-maintained object storage, setting this configuration item has no effect
HOST_MINIO_PORT=the-minio-port-you-want

# If you choose Univer's built-in observability components and there is a port conflict, add this configuration item
# If you are using self-maintained observability components, setting this configuration item has no effect
HOST_GRAFANA_PORT=the-grafana-port-you-want

Cross-Origin Configuration

.env.custom
# allowed cross-origins config
CORS_ALLOW_ORIGINS='["domain1", "domain2", "and more"]'
CORS_ALLOW_HEADERS='["content-type","authorization"]'

Configuration Example

The following configuration example achieves:

  • Enable USIP integration, authenticate user identity, enable permission validation
  • Allow only the document owner to copy content and print
  • Enable Univer event publishing
  • Use self-maintained PostgreSQL
  • Use self-maintained Object Storage
  • Modify Univer service exposed port
  • Set universer and collaboration-server capacity configuration

.env.custom file content:

.env.custom
# 1. enable USIP integration
# usip about
USIP_ENABLED=true
USIP_URI_CREDENTIAL=https://usip-demo.univer.ai/usip/credential
USIP_URI_USERINFO=https://usip-demo.univer.ai/usip/userinfo
USIP_URI_ROLE=https://usip-demo.univer.ai/usip/role
USIP_URI_COLLABORATORS=https://usip-demo.univer.ai/usip/collaborators
USIP_URI_UNITEDITTIME=https://usip-demo.univer.ai/usip/unit-edit-time

# 2. only owner can copy content and print
AUTH_PERMISSION_CUSTOMER_STRATEGIES=[ {"action": 3, "role": 2}, {"action": 6, "role": 2} ]

# 3. enable univer event sync
EVENT_SYNC=true # Set to true to enable

# 4. use self-maintained RDS
# postgresql config
DISABLE_UNIVER_RDS=true # When using your own RDS, disable Univer's default PostgreSQL

DATABASE_DRIVER=postgresql # Set to postgresql for PostgreSQL-compatible databases
DATABASE_HOST=univer-postgresql
DATABASE_PORT=5432
DATABASE_DBNAME=univer
DATABASE_USERNAME=universer-biz
DATABASE_PASSWORD=123456

# 5. use self-maintained object storage
# s3 config
DISABLE_UNIVER_S3=true # When using your own object storage, disable Univer's built-in MinIO

S3_USER=universer-biz
S3_PASSWORD=123456
S3_REGION=cn-sz
S3_PATH_STYLE=true
S3_ENDPOINT=univer-s3:9652
S3_ENDPOINT_PUBLIC=univer.ai:9653
S3_DEFAULT_BUCKET=univer

# 6. change univer service Port
HOST_NGINX_PORT=8010

# 7. set the service scale
UNIVERSER_REPLICATION_CNT=4 # universer instance count
COLLABORATION_SERVER_REPLICATION_CNT=5 # collaboration service instance count
COLLABORATION_SERVER_MEMORY_LIMIT=2048 # collaboration service max memory limit, in MB

Deployment SOP

After determining the required configuration, deploy the Univer back-end services with the following steps.

Docker Compose Deployment SOP

  1. Prepare the configuration file .env.custom (skip this step if no modifications are needed).
  2. Prepare the License files, including license.txt and licenseKey.txt.
  3. If you choose to use self-maintained RDS, download the database initialization script and complete RDS initialization, including universer and temporal. If using GaussDB or DamengDB, temporal initialization is not required.
  4. Obtain Univer Pro back-end services:
    • If the deployment server can access the public internet:
      • Run bash -c "$(curl -fsSL https://get.univer.ai/product)" [-- version] to download the specified version of Univer services. If no version is specified, the latest version is downloaded by default.
    • If the deployment server cannot access the public internet:
      • Click here to download the All-in-one offline installation package provided by Univer.
      • Upload the offline installation package to the deployment server and extract it.
      • Enter the extracted directory and run bash load-images.sh to load Univer back-end service images into local Docker.
  5. Copy the prepared .env.custom file into the downloaded Univer service folder, in the same directory as the default .env file.
  6. Copy the License files license.txt and licenseKey.txt into the configs/ directory of the downloaded Univer service folder.
  7. Enter the Univer service root directory and run bash run.sh start to complete service deployment.
  8. Complete regression testing.
