
02. VMware vSAN Architectures

This is part of the VMware vSAN Technical Deep Dives and Tutorials series. You can use the following link to access and explore more objectives from the VMware vSAN series.
VMware vSAN [v8] – Technical Deep Dives and Tutorials

vSAN 8 supports two storage architectures: the Original Storage Architecture (OSA) and the Express Storage Architecture (ESA). OSA has been available since 2014, while ESA is the new architecture introduced with vSAN 8.

vSAN OSA uses a two-tier disk structure: a cache tier and a capacity tier. OSA supports either SSDs or NVMe devices for the cache tier, and SSDs, NVMe devices, or spinning disks for the capacity tier. An all-flash configuration uses flash devices for both the cache and capacity tiers, while a hybrid configuration uses flash devices for the cache tier and spinning disks for the capacity tier.
vSAN ESA, by contrast, only supports NVMe devices and uses a single-tier structure known as a storage pool, where all the disks on a host form one storage pool. There is no two-tier structure.
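To make that structural difference concrete, here is a minimal Python sketch that models an OSA disk group next to an ESA storage pool. The class and field names are purely illustrative, not part of any VMware API.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data model only -- the classes and names are hypothetical.

@dataclass
class OsaDiskGroup:
    """vSAN OSA: a two-tier disk group with 1 cache device and up to 7 capacity devices."""
    cache_device: str                                            # SSD or NVMe
    capacity_devices: List[str] = field(default_factory=list)    # SSD, NVMe, or HDD

    def __post_init__(self):
        if len(self.capacity_devices) > 7:
            raise ValueError("OSA allows at most 7 capacity disks per disk group")

@dataclass
class EsaStoragePool:
    """vSAN ESA: all NVMe devices on the host form one flat pool;
    every device can serve both reads and writes."""
    devices: List[str] = field(default_factory=list)

# Example: one OSA disk group vs. an ESA host-wide pool
osa_dg = OsaDiskGroup(cache_device="nvme0", capacity_devices=["ssd1", "ssd2", "ssd3"])
esa_pool = EsaStoragePool(devices=["nvme0", "nvme1", "nvme2", "nvme3"])
```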

vSAN OSA Disk Configuration

Focusing on vSAN OSA first, it has a two-tier storage system where a cache disk receives the writes from VMs. Once that cache tier fills to about 30%, vSAN starts destaging the data down to the capacity tier. That allows vSAN to take advantage of the fast cache disk, whether that is an SSD or NVMe device, and then store the data on the capacity tier, which can be SSDs, NVMe devices, or spinning disks.

Each ESXi host supports a maximum of 5 Disk Groups (DG). A disk group is simply a collection of disks that forms a two-tier structure. Each disk group can have a maximum of 1 cache disk and a maximum of 7 capacity disks. Whichever storage architecture we choose, hybrid or all-flash, it needs to be consistent across all of the hosts in the vSAN cluster. We don't have the ability to mix and match between hosts, because if one of our hosts fails, our VMs need to run on a different host with the same performance characteristics.
vSAN OSA supports up to 40 disks per host (5x cache and 35x capacity). The sizing of the disk groups will depend on business requirements, read and write IOPS, and failure handling.
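As a quick sanity check on those maximums, the per-host arithmetic can be written out directly; this small Python snippet only restates the limits above (5 disk groups, 1 cache disk and 7 capacity disks per group).

```python
# Per-host maximums for vSAN OSA, taken from the limits above.
MAX_DISK_GROUPS_PER_HOST = 5
MAX_CACHE_DISKS_PER_DG = 1
MAX_CAPACITY_DISKS_PER_DG = 7

max_cache_disks = MAX_DISK_GROUPS_PER_HOST * MAX_CACHE_DISKS_PER_DG        # 5
max_capacity_disks = MAX_DISK_GROUPS_PER_HOST * MAX_CAPACITY_DISKS_PER_DG  # 35
max_disks_per_host = max_cache_disks + max_capacity_disks                  # 40

print(f"Max disks per host: {max_disks_per_host} "
      f"({max_cache_disks} cache + {max_capacity_disks} capacity)")
```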

vSAN OSA All-Flash DG

This configuration has all-flash devices for both the cache tier and the capacity tier. When a VM wants to write a piece of data, the write goes to the cache tier first. Depending on the applied storage policy (RAID-0/1/5/6), all the applicable disk groups in the cluster must acknowledge that they have also written the data to their respective cache tiers.

At that point, vSAN lets the VM know it has a successful write. As the cache tier starts filling up, vSAN destages that data down to the capacity tier, which is also made up of SSDs or NVMe devices. It's important to ensure our configuration, SSDs or NVMe, is consistent across all of our ESXi hosts. When it comes to reads, we can read from any of the disks in the environment because they are all flash, unlike in a hybrid environment. If a piece of data is currently sitting in the cache tier and the VM wants to read it, vSAN returns that data to the VM from the cache tier. If a piece of data is sitting in the capacity tier, vSAN returns it to the VM from the capacity tier.
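The write-acknowledge-destage flow described above can be sketched roughly as follows. This is a simplified Python model under the assumptions already stated (writes land in every applicable disk group's cache, destaging starts around the 30% mark); the class and function names are invented for illustration and do not correspond to any vSAN internals.

```python
# A minimal sketch of the all-flash write/read flow described above.
DESTAGE_THRESHOLD = 0.30  # destaging begins when the cache tier is ~30% full

class AllFlashDiskGroup:
    def __init__(self, cache_capacity_gb):
        self.cache_capacity_gb = cache_capacity_gb
        self.cache = {}      # data currently in the cache tier
        self.capacity = {}   # data destaged to the capacity tier
        self.cache_used_gb = 0.0

    def write_to_cache(self, key, size_gb):
        self.cache[key] = size_gb
        self.cache_used_gb += size_gb
        return True  # acknowledgement from this disk group

    def maybe_destage(self):
        # Once the cache tier fills to ~30%, move data down to the capacity tier.
        if self.cache_used_gb / self.cache_capacity_gb >= DESTAGE_THRESHOLD:
            self.capacity.update(self.cache)
            self.cache.clear()
            self.cache_used_gb = 0.0

    def read(self, key):
        # All-flash: reads are served from wherever the data currently sits.
        return self.cache.get(key) or self.capacity.get(key)

def vm_write(disk_groups, key, size_gb):
    # The storage policy determines which disk groups hold the data; the write
    # is acknowledged to the VM only after every applicable disk group acks.
    acks = [dg.write_to_cache(key, size_gb) for dg in disk_groups]
    if all(acks):
        for dg in disk_groups:
            dg.maybe_destage()
        return "write acknowledged to VM"
```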
From a network bandwidth perspective, we need a dedicated or shared 10 Gbps NIC. Dedicated is preferred for maximum performance, but shared is also supported. If using a shared configuration, VMware recommends using a vDS with Network I/O Control (NIOC) to ensure vSAN receives the bandwidth it needs.

vSAN OSA Hybrid DG

The vSAN hybrid disk group model is based on a two-tier architecture:

  • Cache Tier: SSD
    • The cache tier is logically partitioned into 70 percent for reads and 30 percent for writes.
    • Writes are acknowledged when they arrive at the cache tier.
  • Capacity Tier: HDD
    • Reads are served from the cache tier only.
    • On a read cache hit, the data is returned from the read cache.
    • On a read cache miss, the data is copied from the capacity tier into the cache tier and then returned (see the sketch after this list).
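Here is a minimal sketch of that hybrid read path, assuming simple in-memory dictionaries stand in for the SSD read cache and the HDD capacity tier; the function name is illustrative only.

```python
def hybrid_read(key, read_cache, capacity_tier):
    """Hybrid reads are served from the cache tier only."""
    if key in read_cache:
        # Read cache hit: return straight from the SSD read cache.
        return read_cache[key]
    # Read cache miss: copy the block from the HDD capacity tier into the
    # read cache first, then return it from there.
    read_cache[key] = capacity_tier[key]
    return read_cache[key]
```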

A hybrid architecture is one where the cache tier is either an SSD or NVMe device and the capacity tier is made up of spinning disks, with a maximum of seven capacity disks per disk group. This allows vSAN to take advantage of the fast cache tier to receive writes, and of the cheap-and-deep storage of spinning disks for the capacity tier.
The cache tier is responsible for receiving writes from the VM. The configured storage policy, RAID-0/1/5/6, determines how many disk groups in the cluster are involved, and vSAN makes sure that all of the applicable disk groups confirm they have received the write. Once the cache tier fills to about 30%, vSAN begins to destage the data down to the capacity tier. The capacity tier is ultimately where the data will live.

vSAN logically partitions the cache disk into a 70/30 split, where 70% of the disk is allocated to reads and 30% to writes. When a VM writes a piece of data, it lands in that 30% logical partition. When that 30% section fills to roughly 30%, destaging to the capacity tier begins, which works out to only about 9% of the whole cache disk (30% of the 30% write partition). That is why it is critical to understand the read and write characteristics of our workload in order to size the cache disks appropriately.
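A quick worked example of that sizing math, using an arbitrary 1.6 TB cache device purely for illustration:

```python
# Worked example of the 70/30 cache split and the destage point described above.
cache_device_gb = 1600                     # example cache device size (illustrative)
read_cache_gb   = cache_device_gb * 0.70   # 70% logical read cache   -> 1120 GB
write_buffer_gb = cache_device_gb * 0.30   # 30% logical write buffer -> 480 GB

# Destaging begins when the write buffer is roughly 30% full,
# i.e. about 9% of the whole cache device (0.30 * 0.30 = 0.09).
destage_point_gb = write_buffer_gb * 0.30  # ~144 GB on this device

print(f"Read cache: {read_cache_gb:.0f} GB, write buffer: {write_buffer_gb:.0f} GB, "
      f"destaging starts around {destage_point_gb:.0f} GB of buffered writes")
```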

vSAN ESA Storage Pool

vSAN ESA doesn't have a two-tier structure. Instead, it takes all of the storage devices on the local host and turns them into a storage pool. All of the disks can receive writes and all of the disks can service reads, which makes for a more optimal and efficient workflow for reads and writes compared to vSAN OSA.

The minimum network configuration is 10 Gbps when using the smallest ReadyNode hardware profile, while the largest ReadyNode hardware profile requires 100 Gbps. At a high level, a ReadyNode profile specifies the hardware required to support a specific VM workload/configuration.
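A hedged sketch of what checking a host NIC against a ReadyNode profile's network requirement might look like. Only the two endpoints, 10 Gbps for the smallest profile and 100 Gbps for the largest, come from the text above; the profile names used here are placeholders, not official ESA ReadyNode profile names.

```python
# Placeholder profile names; only the 10 and 100 Gbps figures come from the text.
REQUIRED_NIC_GBPS = {
    "smallest-profile": 10,    # minimum supported ESA configuration
    "largest-profile": 100,    # highest-end ESA configuration
}

def nic_meets_profile(profile: str, host_nic_gbps: int) -> bool:
    """Return True if the host NIC speed satisfies the profile's network requirement."""
    return host_nic_gbps >= REQUIRED_NIC_GBPS[profile]

print(nic_meets_profile("smallest-profile", 25))   # True
print(nic_meets_profile("largest-profile", 25))    # False
```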

vSAN Storage Policies

vSAN uses storage policies to define how data is placed on disk.

  • Supported Policies:
    • No Data Redundancy (RAID-0)
    • Mirroring (RAID-1)
    • Erasure Coding (RAID-5/6)

The administrator selects a desired storage policy and applies it to objects. Depending on the selected storage policy, vSAN creates the appropriate components and places them on the physical disks.
A storage policy can be applied to all the VMs in a vSAN cluster, to a single VM, or to a single VMDK.
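To give a feel for what those policies mean for raw capacity, here is a rough Python sketch using the component layouts commonly documented for vSAN OSA (RAID-1 with FTT=1 keeps two full copies, RAID-5 uses 3 data + 1 parity, RAID-6 uses 4 data + 2 parity); ESA uses different erasure-coding layouts, so treat these multipliers as OSA-oriented approximations.

```python
# Rough raw-capacity multipliers per policy, based on the OSA layouts noted above.
POLICY_OVERHEAD = {
    "RAID-0 (no data redundancy)": 1.0,
    "RAID-1 (mirroring, FTT=1)": 2.0,
    "RAID-5 (erasure coding)": 4 / 3,   # ~1.33x (3 data + 1 parity)
    "RAID-6 (erasure coding)": 6 / 4,   # 1.5x  (4 data + 2 parity)
}

def raw_capacity_needed(vmdk_gb: float, policy: str) -> float:
    """Approximate raw capacity consumed on the cluster for a VMDK under a given policy."""
    return vmdk_gb * POLICY_OVERHEAD[policy]

for policy in POLICY_OVERHEAD:
    print(f"100 GB VMDK under {policy}: {raw_capacity_needed(100, policy):.0f} GB raw")
```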

Reference: VMware (by Broadcom) Docs
