Ceph Storage Configuration
Overview
Ceph, integrated with Proxmox VE, provides a unified, scalable, software-defined storage solution for block (RBD), file (CephFS), and object storage.
Install Ceph
# Install Ceph
pveceph install
# Or via web UI:
# Datacenter -> Storage -> Add -> Ceph
Setup Steps
- Install Ceph packages
- Create monitor
- Create OSDs
- Create pools
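Taken together, the steps above map onto the following command sequence (a sketch, assuming a blank disk at /dev/sdb and the vm-data pool name used later in this document; run on each node as appropriate):

```shell
# 1. Install Ceph packages on the node
pveceph install
# 2. Create a monitor on this node
pveceph mon create
# 3. Create an OSD from a blank disk (assumed /dev/sdb)
pveceph osd create /dev/sdb
# 4. Create a replicated pool for VM disks
pveceph pool create vm-data
```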
Ceph Components
MON (Monitor)
# Create monitor
pveceph mon create
# Check cluster status (includes monitor state)
pveceph status
OSD (Object Storage Daemon)
# Create OSD
pveceph osd create /dev/sdb
# Create OSD with a separate DB device (bluestore)
pveceph osd create /dev/sdc --db_dev /dev/sdd
MDS (Metadata Server)
# Create MDS
pveceph mds create
# Verify
ceph mds stat
Ceph Pools
Create Pool
# Create pool
pveceph pool create vm-data
# Configure pool
ceph osd pool set vm-data size 3
ceph osd pool set vm-data min_size 2
ceph osd pool set vm-data pg_num 128
Pool Configuration
| Setting | Description | Default |
|---|---|---|
| size | Number of replicas | 3 |
| min_size | Minimum replicas required to serve I/O | 2 |
| pg_num | Placement groups | 128 |
| crush_rule | Rule to use | - |
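The size and pg_num values above follow from simple arithmetic: a common rule of thumb is pg_num ≈ (OSD count × 100) / size, rounded up to a power of two, and usable capacity is raw capacity divided by size. A runnable sketch (the 12-OSD and 48 TB figures are assumptions for illustration):

```shell
osds=12                           # assumed cluster size, for illustration
size=3                            # replicas, as in the table above
target=$(( osds * 100 / size ))   # 400

# Round up to the next power of two
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"                    # -> 512

# With size=3, usable capacity is raw/3
raw_tb=48                         # assumed raw capacity, for illustration
echo "$(( raw_tb / size )) TB usable"   # -> 16 TB usable
```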
Pool Properties
# Set replicas
ceph osd pool set vm-data size 3
# Enable compression (mode must be set for the algorithm to take effect)
ceph osd pool set vm-data compression_algorithm zstd
ceph osd pool set vm-data compression_mode aggressive
# Set min size
ceph osd pool set vm-data min_size 2
Ceph CRUSH Map
Custom Rules
# Create rule
ceph osd crush rule create-replicated replicated-rule default host
# Apply rule to pool
ceph osd pool set vm-data crush_rule replicated-rule
Tunables
# View current
ceph osd crush dump
# Adjust (changing the tunables profile can trigger data movement)
ceph osd crush tunables legacy
Ceph Performance
Tuning OSD
# Tune OSD worker threading (defaults are usually fine; verify option names for your release)
ceph config set osd osd_op_num_shards 16
ceph config set osd osd_op_num_threads_per_shard 2
Cache Tiering
Note: cache tiering is deprecated in recent Ceph releases; test carefully before relying on it.
# Add a cache tier (base pool first, then cache pool)
ceph osd tier add vm-data vm-data-cache
# Configure cache mode
ceph osd tier cache-mode vm-data-cache writeback
# Route client I/O through the cache tier
ceph osd tier set-overlay vm-data vm-data-cache
# Start flushing at 50% dirty objects
ceph osd pool set vm-data-cache cache_target_dirty_ratio 0.5
Ceph Backup/Restore
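The backup commands below key snapshots to the current date; the name construction itself can be sketched and checked without a cluster (the image name is the one used in the examples):

```shell
img="vm-data/vm-100"               # image used in the examples below
snap="snapshot-$(date +%Y%m%d)"    # e.g. snapshot-20250101
spec="${img}@${snap}"
echo "$spec"
# The actual snapshot/export requires a running cluster:
#   rbd snap create "$spec"
#   rbd export "$spec" /backup/vm-100.img
```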
Backup
# RBD snapshot
rbd snap create vm-data/vm-100@snapshot-$(date +%Y%m%d)
# Export
rbd export vm-data/vm-100@snapshot-$(date +%Y%m%d) /backup/vm-100.img
Restore
# Import
rbd import /backup/vm-100.img vm-data/vm-100-restored
# Resize if needed
rbd resize vm-data/vm-100-restored --size 100G
Troubleshooting Ceph
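The health checks below lend themselves to scripting; a minimal, runnable wrapper (the `check_health` helper is hypothetical, not part of the ceph CLI) that flags anything other than HEALTH_OK:

```shell
# Hypothetical helper: succeed only when the health string is HEALTH_OK.
# In practice you would feed it live output: check_health "$(ceph health)"
check_health() {
  case "$1" in
    HEALTH_OK) return 0 ;;
    *) echo "cluster degraded: $1" >&2; return 1 ;;
  esac
}

check_health HEALTH_OK   && echo "ok"
check_health HEALTH_WARN || echo "needs attention"
```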
Check Health
# Ceph health
ceph health
# Detailed
ceph -s
# PG status
ceph pg stat
Common Issues
# OSD flagged down: mark out, then back in once recovered
ceph osd out 0
ceph osd in 0
# Stuck PGs
ceph pg dump_stuck
ceph health detail
Keywords
ceph osd mon mds pool crush replication software-defined