Thursday, October 24, 2013

Seagate Kinetic – A Game Changer for Cloud Storage Hardware Architectures

Seagate recently announced a new technology platform called the Kinetic Open Storage Platform that is a genuine game changer for cloud storage hardware architectures (and perhaps other storage architectures as well). My prediction is that in 2-3 years, cloud storage hardware will be unrecognizable compared to the classic x86 architecture of today.


In January I wrote about another trend that could fundamentally alter cloud storage hardware architecture: microservers. My guess is that while each trend is powerful on its own, the combination is even more potent.

Kinetic reworks two items that have been sacrosanct in direct attached storage (DAS) for 20+ years – the SCSI/ATA logical protocol, and the SAS/SATA interfaces (I’m clubbing them together with their parallel predecessors). The Kinetic platform provides a new logical interface via a key/value protocol, and a new physical interface via Ethernet. Both of these will have a profound impact on hardware design.
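To make the logical shift concrete, here is a minimal sketch of what application code could look like when the drive itself is the key/value store. The KineticDrive class and its put/get/delete methods are illustrative stand-ins (storage is simulated with an in-memory dict), not Seagate’s actual API; real Kinetic drives speak a protobuf-framed protocol over TCP.

```python
# Hypothetical sketch: the drive as a key/value store reachable over Ethernet.
# KineticDrive and its methods are illustrative, not the actual Seagate API.

class KineticDrive:
    """Stand-in for a drive at a network address; the real device speaks a
    protobuf-framed protocol over TCP. Storage is simulated with a dict."""

    def __init__(self, address: str):
        self.address = address   # e.g. "10.0.1.10:8123" (made-up address)
        self._store = {}         # simulated on-platter key/value space

    def put(self, key: bytes, value: bytes) -> None:
        self._store[key] = value   # the drive decides physical placement

    def get(self, key: bytes) -> bytes:
        return self._store[key]

    def delete(self, key: bytes) -> None:
        self._store.pop(key, None)


drive = KineticDrive("10.0.1.10:8123")
drive.put(b"account/container/object-42", b"...object bytes...")
assert drive.get(b"account/container/object-42") == b"...object bytes..."
```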

A cloud storage software stack like OpenStack Swift will be affected in four major areas:

First, storage nodes can be a lot lighter (in terms of compute and memory) since numerous layers such as the file system, volume management, block storage management, and health checks can be thrown out wholesale. This cuts cost, power, and real estate.
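As a rough illustration of how much machinery disappears, compare a classic object-server write (which travels through a mounted file system, with volume management and the block layer underneath) to a Kinetic-style write. This is not real Swift code; kinetic_put reuses the hypothetical KineticDrive from the sketch above.

```python
import os

# Classic DAS storage node: the write passes through a kernel file system,
# volume manager, and block layer before reaching the platters.
def classic_put(mount_point: str, object_path: str, data: bytes) -> None:
    full_path = os.path.join(mount_point, object_path)
    os.makedirs(os.path.dirname(full_path), exist_ok=True)
    with open(full_path, "wb") as f:
        f.write(data)

# Kinetic-style storage node: one key/value call straight to the drive;
# no mount, no fsck, no block mapping for the node to maintain.
def kinetic_put(drive, object_name: str, data: bytes) -> None:
    drive.put(object_name.encode(), data)
```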

Second, connectivity protocols in the rack (SAS/SATA and Ethernet) can now be collapsed from two into one: Ethernet. This will provide greater simplicity and cost reductions. Of course you might argue that SAS/SATA is cheaper than Ethernet! True, but my prediction is that the combination of Kinetic and microservers is going to take care of this problem. Layer 2/3 networking is going to get completely commoditized. Switchless, supercomputer-style fabrics may also emerge to provide connectivity inside the rack; microserver vendors such as AMD or Calxeda are already providing such fabrics.

Third, Ethernet removes the distance limitation of SAS/SATA (2-8 meters). Because of this limitation, current direct attached storage has to reside either in the actual server chassis or in an adjacent JBOD (just-a-bunch-of-disks). Disks that use Ethernet could be up to 100 meters away. This could open up unique architectures where you scale raw storage and storage node compute capacity independently. You could also have some very interesting failover techniques: a storage node failing doesn’t mean the entire set of disk drives or JBODs has to be tossed out; you could simply fail over to another storage node while reusing the disks.
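A sketch of what that disaggregation could look like: a storage node’s “disks” become nothing more than a list of network addresses, so raw capacity and compute scale independently. All node names and addresses here are made up for illustration.

```python
# Each storage node owns a set of drives identified only by network address;
# the drives can sit anywhere within Ethernet reach, not just in the chassis.
drives_by_node = {
    "storage-node-a": ["10.0.1.10:8123", "10.0.1.11:8123"],  # adjacent JBOD
    "storage-node-b": ["10.0.4.20:8123", "10.0.4.21:8123"],  # across the rack
}

def add_capacity(node: str, drive_address: str) -> None:
    """Grow raw storage by assigning another drive; no new server needed."""
    drives_by_node[node].append(drive_address)
```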

Finally, Ethernet is a true switched network, unlike SAS. This means that N+1 high availability can now be realized. I am not suggesting shared storage; rather, if one storage node fails, another one could take over. That is, only one compute node would use a given Kinetic drive at any point in time.
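Continuing the sketch above, failover becomes a matter of reassigning drive addresses: the surviving node adopts the failed node’s drives and becomes their sole owner. Again, this is purely illustrative, not a real cluster manager.

```python
def failover(failed_node: str, standby_node: str) -> None:
    """The standby node takes over the failed node's drives wholesale.
    Each drive still has exactly one owner at a time (no shared storage)."""
    orphaned = drives_by_node.pop(failed_node, [])
    drives_by_node[standby_node].extend(orphaned)

failover("storage-node-a", "storage-node-b")
# storage-node-b now serves all four drives; the disks outlive the node.
```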

Below is one conceptual futuristic architecture that might make sense. Components such as shared flash and general-purpose fabrics are not available yet; but this is a blog, so why limit imagination? :)


An architecture like this would provide numerous benefits:
  • Reduced cost (lighter compute, inexpensive microservers, elimination of switches)
  • Reduced real-estate
  • Reduced power
  • Better failover architecture
  • Higher durability (smaller failure domains)
  • Better scalability (independent scaling of components)
  • Dynamic re-allocation of compute capacity to work on different parts of the storage, or even to change personality from storage node to proxy server

Here’s a photograph of the Kinetic platform with one of the key minds behind it – Jim Hughes.



Net-net, Seagate has changed the game. The next 3-5 years might create excitement that the direct-attached storage world has not seen in 20+ years.

1 comment:

  1. Amar,

    When I first read about Kinetic drives, I thought apps would directly talk to Kinetic drives using some sort of REST API, and Kinetic drives would cluster together using some sort of swarming and distribution algorithm. But it looks like middleware like OpenStack, Hadoop, etc. is still required. Is that correct?

    Thanks,
    Saqib
