How a German Data Center Cut Backup Costs by 24% with Veeam Backup and FishOS Ceph

Sardina Systems blog
Jan 13, 2025


See how innovation and open source come together to bring a cost-efficient backup solution with FishOS Ceph.

The Importance of Backup in 2025

The more humanity relies on digital services for storing and processing information, the greater the importance of strong backup solutions for enterprises. The number of cyberattacks and data-loss incidents grows every year. A robust backup solution therefore addresses the problem on several levels: it maintains business continuity, reduces downtime, and helps avoid financial and reputational risks.

FishOS Ceph: The Optimal Storage Solution

In over 10 years of operations at Sardina Systems, we have worked with numerous storage options, and our FishOS cloud platform is compatible with many of them. However, we are now confident in recommending Ceph as the best solution, especially when it comes to backups.

Ceph’s capabilities include:

No Single Point of Failure: Data is distributed across multiple nodes, ensuring system reliability.

Automatic Management: Ceph automatically balances loads and restores data, reducing the need for manual intervention.

Unified Storage: The ability to use the same cluster for object and block storage simplifies management and reduces costs.

Flexible Architecture: Ceph works with almost any hardware, providing the flexibility to grow seamlessly.

Cost Efficiency: As open-source software, Ceph covers every storage type with a single solution and carries no per-feature licensing costs.

By pairing Ceph with FishOS, Sardina Systems delivers unmatched backup solutions for businesses large and small.

Implementation of the First Use Case

In the GRASS-MERKUR use case, the deployment of a Ceph cluster demonstrates the flexibility and robustness of distributed storage solutions in high-capacity environments. Below, we explore the technical aspects of this implementation, focusing on the architecture and storage efficiency.

The first step involved deploying a Ceph cluster using FishOS Ceph, configured with RADOS Block Devices (RBD) as the storage backend. The cluster comprises 12 hosts, each equipped with twelve 8 TB rotational drives (7,200 rpm) for data storage and two 1.7 TB NVMe drives dedicated to the Ceph journal. This setup provides a total raw storage capacity of 1.1 PB.
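As a sanity check, the raw and usable figures follow directly from the drive counts. A minimal sketch of the arithmetic, assuming the 6+3 erasure coding scheme and the roughly 10% OSD overhead described for this deployment:

```shell
# Capacity arithmetic for the GRASS-MERKUR cluster (sizes in decimal TB).
hosts=12
drives_per_host=12
drive_tb=8
raw_tb=$((hosts * drives_per_host * drive_tb))    # 1152 TB, i.e. ~1.1 PB raw

# 6+3 erasure coding keeps 6/9 of raw capacity as data; subtract ~10%
# for Ceph OSD overhead to get the effective usable space.
usable_tb=$(awk "BEGIN { printf \"%.1f\", $raw_tb * 6 / 9 * 0.9 }")
echo "raw: ${raw_tb} TB  usable: ${usable_tb} TB"  # raw: 1152 TB  usable: 691.2 TB
```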

We employed a 6+3 erasure coding scheme to optimize storage efficiency: 6 blocks store the original data and 3 provide parity for data protection. This approach yields roughly 67% storage efficiency; after an additional ~10% overhead for Ceph OSD, the effective usable space is 691.2 TB.
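A pool layout of this shape could be bootstrapped with commands along the following lines. This is a hedged sketch, not the case study's actual configuration: pool names and placement-group counts are illustrative. Note that RBD images cannot live entirely on an erasure-coded pool, so the usual pattern pairs a small replicated metadata pool with the EC data pool:

```shell
# Define a 6+3 erasure code profile that spreads chunks across hosts.
ceph osd erasure-code-profile set ec-6-3 k=6 m=3 crush-failure-domain=host

# EC data pool for RBD; overwrites must be enabled explicitly for RBD on EC.
ceph osd pool create rbd-data 1024 1024 erasure ec-6-3
ceph osd pool set rbd-data allow_ec_overwrites true

# Small replicated pool holding RBD image metadata.
ceph osd pool create rbd-meta 128 128 replicated

# Tag both pools for use by RBD.
ceph osd pool application enable rbd-data rbd
ceph osd pool application enable rbd-meta rbd
```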

GRASS-MERKUR then configured Veeam Backup & Replication to use the Ceph RBDs as the target backup repository. This setup involved mapping the RBD volumes to the Veeam backup servers, allowing them to function as block devices ready to store backup data.
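The mapping step can be sketched as follows, assuming the pool layout above and hypothetical image and mount-point names; the resulting mount is then added to Veeam as a Linux backup repository:

```shell
# Create an RBD image whose metadata sits in the replicated pool and whose
# data is written to the erasure-coded pool (names are illustrative).
rbd create rbd-meta/veeam-repo01 --size 100T --data-pool rbd-data

# Map the image on the Veeam backup server; it appears as e.g. /dev/rbd0.
rbd map rbd-meta/veeam-repo01

# Format and mount it like any other block device.
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /srv/veeam-repo01
```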

The 6+3 erasure coding setup was chosen to balance storage efficiency against data durability. Replication settings within Ceph were also fine-tuned to ensure data availability across the distributed storage nodes.

Testing

During the implementation of the Veeam and Ceph RBD solution, we ran key tests against the erasure-coded (EC) pool, evaluating its performance characteristics on 32 GiB and 1 TiB volumes.

Random reads: the storage cluster achieved 36,500 IOPS on small (4 KiB) blocks with Linux kernel cache utilization, and 5,534 IOPS on volumes large enough that the cache no longer affects performance. Larger (2 MiB) blocks demonstrated up to 2,212 MiB/s of bandwidth at 1,106 IOPS.

Mixed read/write (R/W): up to 2,189 IOPS in each direction with small blocks; with larger blocks, 516 IOPS bi-directionally at 1,030 MiB/s of bandwidth.

Random writes: up to 3,700 IOPS on small blocks, and 1,120 MiB/s of bandwidth at 560 IOPS on large (2 MiB) blocks.
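Results of this shape are typically gathered with a tool such as fio. The invocations below are a hedged sketch of that kind of benchmark run against a mapped RBD device, not the case study's actual test harness; device path, queue depth, and runtime are assumptions:

```shell
# Small-block (4 KiB) random-read test against the mapped RBD device.
fio --name=randread-4k --filename=/dev/rbd0 --rw=randread \
    --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting

# Large-block (2 MiB) bandwidth test.
fio --name=randread-2m --filename=/dev/rbd0 --rw=randread \
    --bs=2m --ioengine=libaio --iodepth=8 --direct=1 \
    --runtime=60 --time_based --group_reporting
```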

Cost-Efficient Backup with FishOS Ceph

Deploying Veeam with FishOS using Ceph’s API component is a swift and seamless process, typically making the system operational within a day. This combination has proven to be more cost-effective than most software-defined storage solutions, particularly for large-scale systems.

One of the key advantages of using FishOS Ceph for backup is its cost efficiency, offering approximately 24% savings per core compared to other Ceph providers. This is particularly notable for systems with substantial storage requirements, such as the 1.1 PB of raw storage (691.2 TB of effective usable space) in the GRASS-MERKUR case.

GRASS-MERKUR’s adoption of Veeam Backup on FishOS Ceph highlights the significant benefits of this pairing. By leveraging Ceph’s advanced storage capabilities and Veeam’s robust backup features, the system provider achieved a highly economical, efficient, and reliable backup solution.

Written by Sardina Systems blog

A cloud software vendor building on OpenStack & Kubernetes, delivering zero-downtime operations, scalability, no lock-in, and efficiency for any enterprise.
