3D Scaling
A 3D scaling storage architecture provides the ability to scale up for improved performance, scale out for greater capacity and aggregate throughput, and scale deep to connect and manage external multi-vendor storage platforms. The design is derived from the supercomputing world and includes specialized elements within the system that increase overall performance and efficiency.

A storage system built on a 3D architecture is designed primarily for enterprise storage applications. As the number of virtual servers hosted on a physical server grows, driven by the increased processing capability of multi-core processors and larger memory, the I/O workloads of those virtual servers tend to be consolidated onto a single storage system. Such a system requires more resources and should be able to increase them dynamically to keep up with I/O demand: performance, capacity and connectivity can be expanded by adding cache, processors, connections and disks to the base system. A virtual server accessing the storage system should be able to use all of these resources as a single common pool.
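
A minimal sketch, in Python, of the single-pool idea (the class and field names are illustrative assumptions, not any vendor's API):

    from dataclasses import dataclass

    @dataclass
    class ResourcePool:
        # One shared pool of storage-system resources; all hosts draw from it.
        cache_gb: int = 64
        processors: int = 4
        ports: int = 8
        disks: int = 24

        def scale(self, cache_gb=0, processors=0, ports=0, disks=0):
            # Resources join the existing pool rather than a separate node,
            # so every attached virtual server sees the increase.
            self.cache_gb += cache_gb
            self.processors += processors
            self.ports += ports
            self.disks += disks

    pool = ResourcePool()
    pool.scale(cache_gb=64, processors=4)  # grow the pool as I/O demand grows
    print(pool)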

Scale Up

Scaling up improves performance by using multi-core and special-purpose processors to handle critical internal functions.

Scaling up can be implemented in several ways; one of the most efficient and best-performing combines general-purpose processors with special-purpose acceleration cores to form a synergistic processing framework. The framework routes applications and tasks to the processor type best suited to them, with results greater than the sum of the individual components.
The ability of a storage system to scale up supports greater virtual server consolidation, as well as better utilization of resources and lower cost.
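As a sketch of that framework (the task types and function names below are hypothetical stand-ins, not a real API), work is routed to whichever kind of core handles it best:

    # Task types assumed to benefit from special-purpose acceleration cores.
    ACCELERATED = {"parity", "compression", "encryption"}

    def run_on_accelerator(task_type, payload):
        # Stand-in for offload to a special-purpose core (e.g. a parity engine).
        return f"accelerator handled {task_type} ({len(payload)} bytes)"

    def run_on_general_cpu(task_type, payload):
        # Stand-in for ordinary processing on a general-purpose multi-core CPU.
        return f"cpu handled {task_type} ({len(payload)} bytes)"

    def dispatch(task_type, payload):
        if task_type in ACCELERATED:
            return run_on_accelerator(task_type, payload)
        return run_on_general_cpu(task_type, payload)

    print(dispatch("parity", b"data block"))    # goes to an accelerator
    print(dispatch("metadata_lookup", b"key"))  # stays on the CPU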

Scale Out

An optimal scale-out design adds processing capability and cache that is tightly coupled with the existing system, becoming part of the system's central processing capability with a global cache, rather than a separate node with its own cache operating independently.

As an alternative, many scale-out implementations add processors, cache and/or hosts without tight integration. Copies of user data or control information are stored in the cache of each processor, which introduces the need to maintain cache coherency and to use locking to prevent simultaneous updates or writes to the same record in multiple caches. These implementations typically pass status and locking information between the processors or hosts as messages to ensure coherency. The messaging and locking add latency, which increases response time. In enterprise-class storage systems, where massive amounts of information are handled, a multi-image cache, with its inherent need for locking and coherency and the messaging to manage them, is not an efficient management technique.

Within a storage system, when added cache and processors are tightly coupled with the existing cache and processors, the result is a unified system with high-performance-computing characteristics. Scaling out can improve both performance and aggregate throughput, and is beneficial when multiple servers, virtual or physical, must be served to meet I/O demand. A tightly coupled storage system lets host servers access the storage resources they need from a common pool.
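
The difference between the two scale-out approaches can be sketched as follows (hypothetical classes; the returned message count merely stands in for real coherency traffic):

    import threading

    class GlobalCache:
        # Tightly coupled design: one cache image shared by all processors.
        # A write is a single local operation; no cross-node messages needed.
        def __init__(self):
            self._data = {}
            self._lock = threading.Lock()  # local serialization only

        def write(self, block, value):
            with self._lock:
                self._data[block] = value
            return 0  # no coherency messages

    class MultiImageCache:
        # Loosely coupled design: each node keeps its own cache copy, so a
        # write must lock the record everywhere and message every peer.
        def __init__(self, nodes):
            self.caches = [{} for _ in range(nodes)]

        def write(self, block, value):
            messages = len(self.caches) - 1  # one lock/invalidate per peer
            for cache in self.caches:
                cache[block] = value         # propagate the update
            return messages                  # extra round trips add latency

    print(GlobalCache().write("blk7", b"x"))             # 0 messages
    print(MultiImageCache(nodes=4).write("blk7", b"x"))  # 3 messages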

Scale Deep

Scaling deep is the ability to connect external multi-vendor storage to a storage system and manage it through that system.

In this environment, the primary storage platform receives requests from a host. The platform analyzes each request and determines whether the requested data actually resides on an external storage platform. If so, the request is forwarded to the external storage over a link, typically a Fibre Channel connection, and the result is returned to the primary storage system, which passes it back to the requesting host.
It is important that the metadata for the external storage be maintained within the primary storage controller and not as blocks of storage on the external disks, which would result in vendor lock-in. With the metadata held in the primary storage system, it is easy to add and provision external storage: there is no need to copy data from the external storage before connecting it, because the drives do not have to be reformatted and no metadata is written to them. The external storage can be connected to the primary storage system, and as soon as the zoning and host configurations are updated to reflect the new path, applications can begin accessing the data. Deprovisioning is equally simple, since no proprietary data formats are left on the external disks and no vendor lock-in complicates a change of platform or vendors.
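
A rough sketch of this request routing, assuming the mapping of volumes to external arrays is metadata held inside the primary controller (all names and the map format are invented for illustration):

    # Metadata kept in the primary storage system, never on the external disks,
    # so external arrays can be attached or detached without reformatting.
    EXTERNAL_MAP = {
        "vol-12": ("array-B", "lun-3"),  # a multi-vendor array reached over FC
    }

    def fibre_channel_read(array, lun, block):
        # Stand-in for the I/O forwarded over the Fibre Channel link.
        return f"data from {array}/{lun}, block {block}"

    def local_read(volume, block):
        return f"data from internal disks, {volume} block {block}"

    def read(volume, block):
        if volume in EXTERNAL_MAP:  # data actually resides externally
            array, lun = EXTERNAL_MAP[volume]
            return fibre_channel_read(array, lun, block)  # relayed to the host
        return local_read(volume, block)

    print(read("vol-12", 42))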