Disrupting Cloud Business Models and Avoiding the Seven Deadly CSP Sins

Is there anything that seems cheaper than cloud pricing? Cloud users enjoy prices quoted in micro-pennies, so the expectation is a low monthly bill. For most companies, the bills are anything but.

Let’s look at cloud pricing for services offered by the major CSPs. Google’s N1 standard machine costs just $0.01 per hour for batch processing. Azure asks a minuscule $0.00099 per gigabyte to store data for a month in its archival storage tier. Amazon charges an infinitesimal $0.0000002083 per 100 milliseconds of compute for a Lambda function with 128 megabytes of memory.

These teeny-tiny numbers are deceptive. How do prices that are literally fractions of a penny add up to huge sums of money so quickly?
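To see how fast the meter spins, here is a minimal Python sketch that plugs a hypothetical mid-sized workload into the unit prices quoted above. The workload numbers are invented for illustration:

```python
# Hypothetical workload plugged into the quoted unit prices.
HOURS_PER_MONTH = 730  # average hours in a month

n1_per_hour = 0.01               # Google N1, $/hour
archive_per_gb_month = 0.00099   # Azure archive, $/GB-month
lambda_per_100ms = 0.0000002083  # AWS Lambda at 128 MB, $/100 ms

# Invented workload: 500 batch instances, 1 PB archived,
# 1 billion Lambda invocations averaging 300 ms each.
compute = 500 * n1_per_hour * HOURS_PER_MONTH     # $3,650.00
storage = 1_000_000 * archive_per_gb_month        # $990.00
functions = 1_000_000_000 * 3 * lambda_per_100ms  # $624.90

print(f"Monthly total: ${compute + storage + functions:,.2f}")  # $5,264.90
```

Micro-pennies, multiplied by hours, gigabytes, and invocations, become thousands of dollars a month before anyone notices.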

Here are seven deadly ways the major cloud service providers turn fractions of cents into real money.

1. Hidden Fees

Sometimes the headline numbers hide extras you don’t notice. Amazon’s S3 Glacier has a “Deep Archive” tier designed for long-term backups and priced at $0.00099 per gigabyte per month, which works out to about $1 per terabyte per month. It’s easy to imagine swapping out backup tapes or on-premises drives for the simplicity of Amazon’s service.

But let’s say you want to actually look at that data. If you click through their price sheet, you can see the cost for retrieval is $0.02 per gigabyte. It’s 20 times more expensive to look at the data than to store it for a month. Can you name another business that has a similar price model?

I suppose Amazon’s pricing model makes plenty of sense: they designed the product for long-term storage, not casual browsing and endless report generation. If you want frequent access, you can move to a standard S3 tier where the ratio is much lower, but you are still paying for GETs, PUTs, and other API calls, which add up. If the goal is to save on archival storage, businesses need to understand these secondary costs and plan accordingly; the sketch below shows how lopsided the store-versus-retrieve math can get.
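Here’s a minimal sketch using the quoted Deep Archive prices and a hypothetical 100-terabyte archive (request fees and retrieval tiers omitted for simplicity):

```python
store_per_gb_month = 0.00099  # quoted Deep Archive storage price
retrieve_per_gb = 0.02        # quoted retrieval price

archive_gb = 100_000  # hypothetical 100 TB archive

print(f"Store for a month: ${archive_gb * store_per_gb_month:,.2f}")  # $99.00
print(f"Retrieve it once:  ${archive_gb * retrieve_per_gb:,.2f}")     # $2,000.00
print(f"Ratio: {retrieve_per_gb / store_per_gb_month:.0f}x")          # 20x
```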

2. Location, Location, Location

Every cloud company has the obligatory map highlighting data centers around the globe, inviting us to place our data and workloads wherever we wish. The prices, though, are not always the same. Amazon may charge $0.00099 per gigabyte in Ohio, but it’s $0.002 per gigabyte in Northern California. If my data gets to enjoy an ocean view, I guess it makes sense to pay more.

Such business models are not peculiar to the large American companies. Alibaba, a Chinese company, offers low-end instances starting at just $2.50 per month outside of China, but the price jumps to $7 per month in Hong Kong and $15 per month in mainland China.

Companies choose locations for reasons of governance and regulation, latency and network performance, and even the convenience of visiting a site. It’s up to the buyer to watch these prices and choose accordingly; not all service providers charge a premium for location. A quick comparison like the one below makes the gap concrete.
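This sketch uses the two quoted per-gigabyte archival rates; the 500-terabyte figure is invented:

```python
# Quoted archival storage rates, $/GB-month
prices = {
    "Ohio": 0.00099,
    "Northern California": 0.002,
}

data_gb = 500_000  # hypothetical 500 TB archive
for region, rate in prices.items():
    print(f"{region}: ${data_gb * rate:,.2f}/month")
# Ohio: $495.00/month
# Northern California: $1,000.00/month -- twice the price for the ocean view
```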

3. Data Transfer Costs

The only problem with scrutinizing the price lists and moving your workload to the cheapest data centers is that the cloud companies charge for data movement as well. If you try to be clever and arbitrage the costs by shifting the bits around the globe searching for the cheapest computation and storage, you can end up with bigger bills for moving the data.

The costs for data flow across the network are surprisingly large. Oh, an occasional gigabyte won’t make a difference, but it can be a big mistake to replicate a frequently updated database across the country every millisecond just because some earthquake or hurricane may come along.

If latency or data governance is the issue, then your decision is not based strictly on price, and you will need to absorb the cost somehow. If you are moving data purely to chase cheaper rates, it’s worth checking whether the transfer fees eat the savings, as in the break-even sketch below.
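This back-of-the-envelope check assumes a hypothetical $0.02-per-gigabyte inter-region transfer fee alongside the storage rates quoted earlier:

```python
data_gb = 100_000        # hypothetical 100 TB to relocate
transfer_per_gb = 0.02   # assumed inter-region transfer fee, $/GB

# Monthly storage saving in the cheaper region (rates quoted earlier)
saving_per_gb_month = 0.002 - 0.00099

move_cost = data_gb * transfer_per_gb           # $2,000.00 one time
monthly_saving = data_gb * saving_per_gb_month  # $101.00 per month

print(f"Break-even after {move_cost / monthly_saving:.1f} months")  # ~19.8
```

Nearly two years to recoup a single move, and that is before the data moves back.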

4. The Data Exit Fee

Cloud companies often don’t charge you to bring data into the cloud. But if you try to ship the data out, the bill for egress is infinitely larger.

This can bite anyone, small or large, who watches demand for their data or content skyrocket. As you satisfy all the requests, the meter for egress charges spins faster and faster.
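A tiny sketch shows how fast that meter can spin; the $0.09-per-gigabyte egress rate and the 2-gigabyte average download are assumptions for illustration:

```python
egress_per_gb = 0.09  # assumed egress rate, $/GB
download_gb = 2       # assumed average download size

for downloads in (1_000, 100_000, 10_000_000):
    bill = downloads * download_gb * egress_per_gb
    print(f"{downloads:>10,} downloads -> ${bill:>12,.2f}")
#      1,000 downloads -> $      180.00
#    100,000 downloads -> $   18,000.00
# 10,000,000 downloads -> $1,800,000.00
```

Going viral is a marketing triumph and a billing catastrophe in the same moment.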

5. Sunk Cost Fallacy

There are always moments when the current machine or configuration struggles to do the job, but if you just increase the size, it will be fine. And it’s only an extra few cents per hour. If you’re already paying several dollars an hour, another few pennies won’t bankrupt you, right? The cloud companies are there to help with just a click. How convenient!

Accountants know all too well that the sunk cost fallacy, throwing good money after bad, is a big problem. The money you’ve spent is gone and won’t ever come back; new spending, however, is something you can control.

This is a particular challenge in the tech industry, for example, when you’re developing software. You often can’t be sure just how much memory or CPU a feature will require, and some of the time you really will have to turn up the power of the machines. The real challenge is keeping your eye on the budget and controlling costs along the way. The casual addition of a bit more CPU here or memory there is a clear path to a big bill at the end of the month.
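The arithmetic of “just a few more cents” is worth spelling out; the fleet size and size-bump price here are hypothetical:

```python
HOURS_PER_MONTH = 730

fleet = 40             # hypothetical number of instances
bump_per_hour = 0.05   # cost of one casual size bump, $/hour

per_instance = bump_per_hour * HOURS_PER_MONTH  # $36.50/month
print(f"One bump, one instance: ${per_instance:,.2f}/month")
print(f"Same bump, whole fleet: ${per_instance * fleet:,.2f}/month")  # $1,460.00
```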

6. Cloud Overhead

Cloud proponents might suggest that people who question these bills just don’t understand the benefits of the cloud. All of the extra layers and extra copies of the OS bring plenty of redundancy and flexibility, with instances booting and shutting down in an elaborate, well-choreographed ballet. Fantastic stuff indeed.

But the ease of recovery with Kubernetes almost encourages sloppy programming. A node failure isn’t a problem because the pod sails on as Kubernetes replaces the instance. So what if you pay a bit more for the overhead of maintaining all those extra layers? Just be thankful that you can start a clean, fresh machine without any of the junk that seems to get in the way.

7. Cloud Scale Infinity

The sticky problem with cloud computing is that its best feature, a seemingly infinite ability to scale up, is also a budgetary minefield. Is each user going to average 20 gigabytes of egress or 50 gigabytes? Will each server need 3 gigabytes of RAM or 5? When we start a project, it’s impossible to know.

The old solution of buying a fixed number of servers for a project may start to pinch when demand spikes, but at least the budget costs are understood and don’t skyrocket. The servers may overheat from all of the load and the users may complain about the slow response, but you’re not going to get a panicked call from the accounting team.

We can cobble together estimates, but no one really knows until the users show up, and then anything can happen. No one notices when the costs come in lower, but when the meter starts to spin faster and faster, the boss starts to pay attention. Unfortunately, our bank accounts don’t scale like the cloud; imagine if our savings could.
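Even a simple range estimate shows how wide the uncertainty is; the user count and the $0.09-per-gigabyte egress rate are assumptions:

```python
egress_per_gb = 0.09  # assumed egress rate, $/GB
users = 50_000        # hypothetical user count

low = users * 20 * egress_per_gb   # 20 GB/user average
high = users * 50 * egress_per_gb  # 50 GB/user average

print(f"Egress alone: ${low:,.0f} to ${high:,.0f}/month")
# Egress alone: $90,000 to $225,000/month -- a 2.5x swing on one line item
```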

RStor – The Disruptor

At RStor we believe that modern workflows need a modern business model, which is why we offer flat-rate pricing. Storage is priced at a single flat rate: there are no charges for ingress, egress, or API calls; high-speed data transfer is included at no extra charge; and the price does not vary by location. Such a model is predictable, enabling you to conduct your business with well-known costs. Check out our price calculator and see how much of a difference our flat model provides over the variable pricing associated with competitive solutions.
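As an illustration of why flat-rate pricing is easier to predict, here is a sketch comparing a flat per-gigabyte-month rate against a metered model with separate egress and API fees. Every number is hypothetical, not an actual RStor or competitor price:

```python
# Hypothetical monthly usage
stored_gb = 100_000     # 100 TB stored
egress_gb = 300_000     # 300 TB served out
api_calls = 50_000_000  # 50 million requests

# Flat model: one storage rate, nothing else metered (rate invented)
flat = stored_gb * 0.006  # $600.00

# Metered model: storage plus egress plus per-request fees (all invented)
metered = (stored_gb * 0.004             # storage
           + egress_gb * 0.09            # egress
           + api_calls / 1_000 * 0.005)  # requests, billed per 1,000

print(f"Flat:    ${flat:,.2f}")     # the same every month
print(f"Metered: ${metered:,.2f}")  # $27,650.00, dominated by egress
```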
