
High Memory Usage (1–2 GB Heap) for Add/Delete Operations on Large Tables (~1,000,000 Rows) #2219

@originalchou

Description

When a table grows to around 1,000,000 rows, performing operations such as adding or deleting fields/rows leads to extremely high memory usage on the server side.

According to the logs, both ComputedEvaluatorService and RepairAttachmentOpService pushed heap usage to 1–2 GB, with execution times of up to 28 seconds.

I'm just curious whether the current performance is expected for this edge case, or if further optimization is possible.

Logs

{"level":30,"time":1764903844542,"pid":74,"hostname":"01915a5bd33f","name":"teable","req":{"id":"cc965f6ae3d4d3a9cfc3cdc1cae92bcd","method":"POST","url":"/api/table/tblPbRic7DxF8Bf3bXK/field"},"msg":"ComputedEvaluatorService - evaluate Execution Time: 28055 ms; Heap Usage: 1038.57 MB"}

{"level":30,"time":1764903846161,"pid":74,"hostname":"01915a5bd33f","name":"teable","req":{"id":"cc965f6ae3d4d3a9cfc3cdc1cae92bcd","method":"POST","url":"/api/table/tblPbRic7DxF8Bf3bXK/field"},"msg":"RepairAttachmentOpService - getCollectionsAttachmentsContext Execution Time: 1607 ms; Heap Usage: 1114.12 MB"}

Actual Behavior

  • Heap usage spikes to 1–2 GB
  • Execution time ranges from 1.6s to 28s
  • Operations appear to scale poorly with table size

Question

Is this level of resource consumption expected for ~1M-row tables, or is there still room for performance optimization?

Environment

  • Table size: ~1,000,000 rows
  • Operation: Add/Delete Field (a minimal reproduction sketch follows this list)
  • API endpoint: /api/table/{tableId}/field
  • Server: 2 CPU cores / 4 GB RAM, Linux
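
Reproduction sketch, assuming a recent Node.js runtime with a global fetch. Only the HTTP method, endpoint, and table id come from the logs above; the field payload and the Bearer auth header are placeholders, not taken from the logs:

// Hypothetical reproduction of the slow field-creation request.
const tableId = 'tblPbRic7DxF8Bf3bXK'; // table with ~1,000,000 rows
const res = await fetch(`http://<teable-host>/api/table/${tableId}/field`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer <api-token>', // assumed auth scheme
  },
  body: JSON.stringify({ type: 'singleLineText', name: 'New Field' }), // assumed payload
});
console.log(res.status); // the request completes, but is slow and memory-hungry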
