Windows Server 2012 expands the range of available file and storage options. System Center 2012 Data Protection Manager (DPM) further adds to the options for providing redundancy for data and storage. Direct Attached Storage (DAS) is still used, although enterprise installations frequently use Fibre Channel Storage Area Networks (SANs) to provide block-level access to disk resources.
Network Attached Storage (NAS) is also used to provide file-level access to data. You can connect NAS devices directly to the network, where they frequently use the Server Message Block (SMB) protocol when connecting to Windows-based computers.
iSCSI and Fibre Channel
Internet SCSI (iSCSI) is another method of providing SCSI-based, block-level access to storage. iSCSI differs from Fibre Channel in that data is transferred over the existing IP network rather than a separate Fibre Channel fabric. Windows Server 2012 provides an iSCSI Target Server role service.
Several advantages of iSCSI SANs are as follows:
- Geographically dispersed storage You can use iSCSI over existing wide area networks (WANs), whereas Fibre Channel has a practical limit of 10 kilometers.
- Lower deployment costs Fibre Channel requires specialized network infrastructure, while iSCSI can use existing IP network infrastructure.
- Simplified management The Microsoft iSCSI initiator, combined with an organization's existing IP networking expertise, is all you need for implementation and management.
- Enhanced security iSCSI uses Internet Protocol Security (IPSec) and Challenge Handshake Authentication Protocol (CHAP). Specifically, three levels of security are available with IPSec: Authentication Header (AH), Encapsulating Security Payload (ESP), and AH plus ESP.
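As a sketch of how the iSCSI Target Server role service is put to work, the following Windows PowerShell commands create a VHD-backed virtual disk and expose it to a single initiator. The file path, target name, and initiator IQN are hypothetical examples.

```powershell
# Create a VHD-backed iSCSI virtual disk (path and size are examples)
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\LUN1.vhd" -Size 40GB

# Create a target and restrict it to a specific initiator (example IQN)
New-IscsiServerTarget -TargetName "AppTarget" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:appserver01.contoso.com"

# Map the virtual disk to the target so the initiator can mount it
Add-IscsiVirtualDiskTargetMapping -TargetName "AppTarget" `
    -Path "C:\iSCSIVirtualDisks\LUN1.vhd"
```

The initiator-side connection then uses the Microsoft iSCSI initiator already built into Windows, so no additional client software is required.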
Windows Server 2012 introduces a clustered Scale-Out File Server that improves reliability by making file shares for application data available on all nodes of the cluster simultaneously. Scale-Out File Server differs from traditional file-server clustering and isn't recommended for scenarios with high-volume metadata operations, in which files are opened, closed, or renamed frequently.
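On an existing failover cluster with the File Server role service installed, adding a Scale-Out File Server takes a single cmdlet; the distributed network name below is a hypothetical example:

```powershell
# Add the Scale-Out File Server role to the current failover cluster
# ("SOFS01" is a placeholder for the distributed network name clients will use)
Add-ClusterScaleOutFileServerRole -Name "SOFS01"
```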
Virtual Fibre Channel
Virtual Fibre Channel is a new feature in Hyper-V that enables guest virtual machines to connect directly to Fibre Channel storage through virtual ports on the host's Fibre Channel host bus adapters (HBAs).
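In Hyper-V, this is configured as a virtual SAN backed by a physical HBA port, with a virtual Fibre Channel adapter added to the guest. The cmdlets below sketch the idea; the SAN name, VM name, and world wide names are placeholders, not values from a real environment:

```powershell
# Define a virtual SAN backed by a physical HBA port on the Hyper-V host
# (the world wide node/port names are placeholders for a real HBA port)
New-VMSan -Name "ProductionSAN" `
    -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"

# Give a virtual machine a virtual Fibre Channel adapter on that SAN
Add-VMFibreChannelHba -VMName "SQLVM01" -SanName "ProductionSAN"
```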
Storage Spaces and storage pools
Storage Spaces is a new technology in Windows Server 2012 that groups standard disks together. A Storage Space is a virtualized disk that enables you to combine several disks as needed for redundancy and capacity expansion. Picture Storage Spaces as the software RAID (Redundant Array of Inexpensive Disks) of tomorrow. Storage Spaces have the following features:
- Resiliency You can optionally use mirroring or parity to provide for redundancy in case one of the disks fails.
- Availability Storage Spaces can use failover clustering: pools can be clustered across nodes, with the Storage Space built on top, so that if one node fails, another node takes its place.
- Administration Storage pools and Storage Spaces are integrated with Active Directory and follow a similar model, thus enabling delegation of control.
- Optimized capacity Storage Spaces take advantage of trim support to reclaim unused space. They can share disks across multiple workloads, optimizing how disk space is used and reducing waste.
- Simplified management Storage Spaces use the same concepts as storage pools, so administrators don't have to learn a new technology. Storage Spaces are also manageable through several interfaces, including Windows PowerShell.
Storage pools and Storage Spaces are created within the File and Storage Services role in Windows Server 2012. The key to planning Storage Spaces is to think of them as virtual disks created from one or more storage pools: the pool is where capacity is added and expanded, and the Storage Space is where provisioning and resiliency are defined.
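The pool-then-space relationship can be sketched with the Windows PowerShell storage cmdlets. The disk selection, friendly names, and sizes below are illustrative, not prescriptive:

```powershell
# Gather physical disks that are eligible to join a pool
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks (the pool name is an example;
# the subsystem lookup matches the built-in Storage Spaces subsystem)
$subsystem = Get-StorageSubSystem -FriendlyName "*Spaces*"
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName $subsystem.FriendlyName -PhysicalDisks $disks

# Carve a mirrored, thinly provisioned virtual disk (the Storage Space) from the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data1" `
    -ResiliencySettingName Mirror -ProvisioningType Thin -Size 500GB
```

Note how the resiliency setting and provisioning type belong to the virtual disk, while the physical disks belong to the pool, mirroring the planning guidance above.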
Data deduplication
Data deduplication is another new feature in Windows Server 2012 that removes duplicate data to preserve storage capacity. A role service within the File and Storage Services role, data deduplication breaks data into small chunks, identifies the duplicates, and maintains a single copy of each chunk. The following workloads are considered ideal for deduplication:
- General file shares house general content, home folders, and offline files.
- Software deployment shares house program setup files, images, and the like.
- VHD libraries store VHD files for provisioning.
By default, data deduplication doesn't attempt to deduplicate a file until the file is five days old, although you can change this threshold with the MinimumFileAgeDays setting. Data deduplication also uses an exclusion list with which you can manually exclude files from deduplication. You can implement data deduplication on non-removable NTFS volumes but not on system or boot volumes. Data deduplication runs a background optimization job every hour; garbage collection, which reclaims space from deleted or changed chunks, runs weekly by default.
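Enabling the feature and adjusting the file-age threshold takes only a few Windows PowerShell commands; the volume letter and age value below are examples:

```powershell
# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on a volume (E: is an example)
Enable-DedupVolume -Volume "E:"

# Lower the default five-day minimum file age to two days
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 2

# Start an optimization job immediately instead of waiting for the schedule
Start-DedupJob -Volume "E:" -Type Optimization
```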
--------------------
NOTE: DATA DEDUPLICATION
Data deduplication isn’t enabled by default. Also, Cluster Shared Volumes (CSVs), system volumes, dynamic disks, and Resilient File System (ReFS) volumes are not eligible for data deduplication. Files smaller than 32 KB and files that are encrypted aren’t processed.
--------------------
After data deduplication is installed, you can use the DDPEval.exe command-line tool to estimate capacity savings on volumes attached to Windows 7 or Windows 8 client computers as well as Windows Server 2008 R2 and Windows Server 2012 servers. The Windows PowerShell Measure-DedupFileMetadata cmdlet determines how much disk space would be reclaimed by deleting a set of files from a volume.
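For example, assuming a hypothetical volume E: and an old project folder on it, the evaluation and measurement steps might look like this:

```powershell
# Estimate potential savings on a volume before enabling deduplication
DDPEval.exe E:\

# Determine how much space deleting a folder would actually reclaim,
# accounting for chunks shared with files elsewhere on the volume
Measure-DedupFileMetadata -Path "E:\OldProjects"
```

Measure-DedupFileMetadata is useful precisely because deduplicated files share chunks: the space freed by a deletion is usually less than the logical size of the files removed.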