HDS has had hybrid flash-based systems for a while now, including accelerated flash storage modules for the HUS series. The HFS is their latest, with two bigger brothers: the F Series (4U, 1.4M IOPS, 24 GB/s) and the G Series (2U–10U, up to 4M IOPS, up to 48 GB/s). The HFS is cheaper and smaller (higher performance density), and generally less power hungry than the majority of other comparable systems that Prometeus looked at, such as the EMC XtremIO & Pure Storage //m70. It would take a lot to make Prometeus change from HDS now. The support is excellent, the HUS 150 has been brilliant, and they got a good deal for the HFS if/when. 🙂 HDS have kept the HUS series up to date with h/w & s/w updates & additions (such as the accelerated flash storage modules), which means the investment in the HUS 150 isn't wasted and the system isn't redundant. It will be kept as their primary system, and it fully supports working with cloud-based HFS systems.
Still, there are always risks. There is no such thing as a 100% safe option. All anyone can do is determine the likely risks & decide which are acceptable & what can reasonably be done to minimize them. 🙂
My big concern with the all-flash vendors is that they are using proprietary controllers for their flash chips, and if a vendor discontinues a model or goes out of business, getting spares could get tricky. I may have some machines in my machine room for the engineering lab that are six years old now, but they are all commodity machines where I can get spare parts off of eBay without any issues because it’s all commodity parts. Not so much with a Violin or SolidFire…
A couple of the *younger* guys at Prometeus are excited by the new HDS HFS system. 😀 Hitachi say they are basically skipping over an all-SSD solution, as SSDs are the new bottleneck in high-capacity flash systems. The HFS is actually hybrid SSD/flash, but they say they are working on new flash tech that will eventually replace the need for SSDs. Given that they currently offer up to 384TB @ 1M IOPS & 8 GB/s in a 2U unit, I can see why. 😀 It also needs a LOT less power & far less space than the HUS 150 (which I think uses up to 14 KW, it varies), compared to about 3.4 KW max for the same storage capacity in HFS units (4 x 2U), so that part of the equation is a no-brainer. They designed their network from the start with this future expansion in mind (based on a Brocade 5th/6th-gen FC network), so it would be relatively easy to implement if/when… 🙂
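To put some rough numbers on that power claim, here is a back-of-envelope comparison using only the figures quoted above (the comment's approximate values, not vendor specs):

```python
# Back-of-envelope comparison of the power figures quoted above.
# Values are the comment's rough numbers ("it varies"), not vendor specs.

hus150_power_kw = 14.0        # HUS 150 max draw, as quoted
hfs_total_kw = 3.4            # four 2U HFS units for the same capacity
hfs_units = 4

per_unit_kw = hfs_total_kw / hfs_units
savings_pct = (1 - hfs_total_kw / hus150_power_kw) * 100

print(f"Per HFS unit: {per_unit_kw:.2f} kW")
print(f"Power saving vs HUS 150: {savings_pct:.0f}%")  # roughly 76%
```

Even with generous error bars on the 14 KW figure, a ~4x power reduction for equivalent capacity makes the "no-brainer" call easy to see.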
Glad to see you are being smart about your situation, unsurprisingly. 🙂 “Fools rush in…” etc. 🙂 I’ve never been an advocate of changing something that works, or spending money for the shiniest toy that isn’t needed. Though of course, I do also believe in being prepared. Things change, and sometimes can change suddenly. Been there… 🙂
I’ve looked for other hosting platforms that give me similar ability to partition and hide my virtual infrastructure, and I just can’t find anything that’s cheaper than Amazon that’ll do it. And I already told my boss that we’d need $100,000 in hardware and a full-time guy to do nothing but manage and secure the infrastructure if we were going to do it ourselves (and that full time guy at current Silicon Valley prices is $150K/year minimum). Plus I’d want to hire a security firm to do a security audit of our entire infrastructure, and that would not be cheap either. Our AWS bill isn’t anywhere near high enough to justify that.
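Summing the figures above as a first-year total makes the comparison against the AWS bill concrete (the audit line item is a hypothetical placeholder; the comment only says it "would not be cheap"):

```python
# First-year cost of self-hosting, from the comment's own figures.
# The audit figure is a hypothetical placeholder, not a quoted number.

hardware = 100_000        # one-time hardware buy-in
admin_salary = 150_000    # full-time infrastructure engineer, per year (SV rate)
security_audit = 50_000   # HYPOTHETICAL: external security audit estimate

first_year = hardware + admin_salary + security_audit
print(f"First-year self-hosting cost: ${first_year:,}")  # $300,000
```

Unless the monthly AWS bill approaches a quarter of that, staying put is the obvious call.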
When Prometeus decided they wanted a cloud-based offering (mainly due to client demand) circa 2012/13, they concluded that the offerings available then were either too expensive, too restrictive, or didn’t have what they required (such as data centers across Europe & Asia). So they created a subsidiary and designed their own. They were not going to spend a lot on a large unified storage system, but after doing the modeling for 5+ years ahead, they concluded they needed a robust, reliable and easily scalable system. So they bought the HUS 150. It needed a good redundant network environment around it, so that cost also. It was something of a gamble for a relatively small hosting company, but it’s paid off. 🙂 On the plus side, the guys there have over 2 decades of experience on average, and their priorities are security, reliability & availability. They’ve done a great job. I think they’ve had something like 30 minutes of downtime in over 3 years, and that was mitigated by their cloud (they just lost some performance, but it was basically unnoticeable). 🙂 They haven’t had a security breach in several years, not a successful one anyway. 🙂
They are looking at getting a Hitachi Virtual Storage Platform in a year or so, as the HUS will pretty much be at its limits by then, plus they want a 2nd big storage system somewhere. They do have redundant storage systems, but only one with a high-performance/scalable architecture.
So yeah, it can be done. But it can’t be done cheaply or with a wing & a prayer! And as you said, you need the right people. And they don’t usually come cheap. 🙂
The reality is that there comes a time when it’s cheaper to roll your own fully secured data center with security team etc. than to pay Amazon. Dropbox hit that mark, obviously. We’re a long ways from that. Replicating Amazon’s multitudes of security measures is well beyond anything we’re capable of doing at this point in time.
The cloud storage war is in full swing. Amazon & Google were somewhat complacent until Backblaze created their B2 system and began undercutting them. Backblaze have a problem, though, in that they only have one data center, in CA. Dropbox created their own system and moved about 90% of their data off Amazon S3/AWS (which had to hurt) into 3 data centers.
Adobe have had such a history of poor security, I wouldn’t trust them with anything.
I’m old school… If I don’t control it, I don’t trust it! 😀
As for security, our cloud product is far more secure than the on-premise product, because the major components of the cloud product are hidden behind multiple layers of professional-grade firewalls and networks, where nothing can reach those networks except through multiple layers of bastions. Each component can “see” only the components that it needs to see (nothing else), and nothing that doesn’t need to “see” a component can see it. The API servers, for example, have one port open to the load balancer servers, and can only read data or issue requests via JSON to a back-end processor (there are multiple of them). The back-end processors parse the JSON, do database operations, and return results. The database servers can only be seen by the back-end processors. The web servers are behind another set of load-balancer bastion hosts and talk to the API load balancers to reach the APIs. And so forth. And all of this is kept up to date in real time via an automated configuration management system (not accessible from the Internet) that continually checks that all software is up to date and updates it as needed, and all of it is monitored continually for signs of intrusion, DoS attacks, and so forth.
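The "see only what you need" rule described above amounts to a default-deny allow-list of component-to-component connections. A minimal sketch, with illustrative component names (not the commenter's actual topology):

```python
# A minimal sketch of default-deny segmentation: only explicitly
# listed (source, destination) pairs may connect; everything else
# is denied. Component names here are illustrative placeholders.

ALLOWED = {
    ("internet", "web_lb"),   # public entry point only
    ("web_lb", "web"),
    ("web", "api_lb"),        # web tier reaches APIs via their LB
    ("api_lb", "api"),
    ("api", "backend"),       # one port, JSON requests only
    ("backend", "database"),  # DB visible ONLY to the back end
}

def may_connect(src: str, dst: str) -> bool:
    """True only if the edge is explicitly allowed; default is deny."""
    return (src, dst) in ALLOWED

print(may_connect("backend", "database"))   # True
print(may_connect("internet", "database"))  # False: DB is unreachable
```

In practice this table would live in security-group or firewall rules managed by the configuration system, but the invariant is the same: no edge exists unless someone wrote it down.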
The on-premise product, on the other hand, we had to modify to work on a flat network, because our customers simply don’t have the sophistication to set up such a complex network topology and there’s no way for us to charge enough to send consultants in to do it for them. As a result, things like the database are hanging out on the same network as IP video cameras, the same IP video cameras that were recently massively hacked in order to do the biggest DDoS in Internet history. We have individual host-level firewalls, but not the multiple levels of network indirection and network-level bastion hosts. And so forth. As a product, it is far less secure than the cloud product, and I worry that we’re making a grave mistake wasting resources on it, even if we do have major multinational corporations willing to give us the six-figure sum to implement it for them. Once we lose control of the environment, support costs skyrocket and security plummets… and neither is a recipe, IMHO, for long-term success.
Oh yes! I fondly remember CP/M (& its brother MP/M, which I worked on for a dual-8086 system ICL were developing to control 16 terminals/users) & DR GEM, as I’ve mentioned before. 🙂
*SIGH* I may get teary… :'(
The *Cloud* is for suckers! And there are so many of them! And no matter how often it bites them, they love it! Yep! Stupidity definitely trumps common sense! (No pun intended… well, maybe a little!) LOL
Oh… speaking of Drumpf, see this?
A Trump victory may not be the worst outcome
LOL