Monday, December 21, 2009
This is classic bundling, although some may call it a tying strategy. Microsoft, seeing that it couldn’t win each office productivity segment individually—word processing, spreadsheets, and presentations—decided to play to its strength and bundle them into a suite that no individual company could compete with. That is bundling. Tying is where Microsoft used its dominance in the operating system to tie the browser to the OS, thereby owning the browser market. In the case of Oracle, one could make a case for either bundling or tying. I’m making neither a value judgment nor a legal one about Oracle’s strategy; I’m just providing historical context.
Ellison points to Cisco and IBM, under T.J. Watson Jr., as examples of successful systems companies. But my question is simple: will this back-to-the-future strategy work against the cloud? Assembling solutions from pre-packaged systems is certainly easier than starting with more granular components like hardware and software. But does it really stack up against today’s benchmark, the cloud?
Let me use a transportation analogy:
Assembling all of the components (hardware, software, etc.): Like building a car piece by piece
Assembling systems (a la Oracle's Exadata and Cisco): Like building a car from large-grain assemblies—the chassis, wheels, engine, etc.
Using the cloud: Like buying a pre-built car off the lot
SaaS Applications: Like riding the subway
Most people are perfectly happy either buying a car or riding the subway. For really high-end performance, some may want to build their own car with components or by hand, but it’s a relatively small market.
I don’t expect any public cloud offerings to satisfy high-end enterprise demands…yet. But I have to admit, the cloud is evolving quite rapidly. Just look at Amazon and their introduction of Virtual Private Clouds, Elastic Block Store (a SAN in the sky), Boot from EBS, etc. I can launch an entire cluster with a mouse-click, without talking to IT. How can you beat that? Historical precedent is also on the side of commodity technologies, like the cloud, growing up to cannibalize the high-end. The PC cannibalized the workstation, which cannibalized the mini, which cannibalized the mainframe. From the cloud's perspective, the trend is their friend.
The cloud won’t seriously threaten large enterprise systems for quite some time, but I believe it is just a matter of time. Oracle can certainly ride a strong wave of current demand for systems. I expect that in time they will also provide a compelling suite of solutions in the cloud. But if I were a bettin’ man I’d have to bet on the cloud; they have simplicity and history on their side. On the other hand, it is hard to bet against Ellison.
Monday, December 14, 2009
TPC database benchmarks—which database vendors tune for specifically—are a useful objective comparison for buyers of databases. Unfortunately, there is no such comparison in the cloud, and the current cost/performance methodology used by TPC doesn’t fit the cloud.
Here are the problems:
1. TPC doesn’t include costs that are included in the cloud: Public cloud services bundle the costs of everything into their pricing. TPC excludes things like electricity, network connectivity, the people needed to run the service, networking equipment (e.g. switches, cables, internet connectivity), load balancers, modems, Ethernet cards, etc. Public cloud pricing is really a total cost of ownership, while TPC costs are not. So any cost/performance comparison between onsite and cloud solutions compares apples to oranges.
2. TPC assumes that the expenses included above are paid in advance for three full years. Public clouds use a pay-as-you-go model. To compare apples-to-apples here, you would need to do a net present value (NPV) calculation to account for the time-value of money.
3. And this is the BIGGEST issue: TPC tests assume full utilization of the computer. Everyone knows that in the real world you (a) use at most 80% of the CPU, to leave headroom for usage growth and extreme peaks; and (b) between the typical peaks and valleys within that 80%, average real-world usage is typically 10%-20% of the server’s capacity. The cloud, on the other hand, provides an elastic environment where you only pay for what you use. With a cloud-ready database that scales elastically, an average load factor of 10%-20% per server translates into saving 80%-90% of the cost versus a dedicated machine. In other words, an elastic cloud environment should reduce cost/transaction by 80%-90% relative to a dedicated machine.
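The time-value argument in point 2 is easy to make concrete. Below is a minimal sketch with hypothetical numbers (the dollar amounts and 8% discount rate are my assumptions, not TPC or vendor figures): a 3-year prepaid system and a pay-as-you-go cloud bill with the same nominal total are not the same cost once you discount the monthly payments to present value.

```python
# Hedged sketch: NPV of pay-as-you-go spending vs. a 3-year prepaid system.
# All figures below are illustrative assumptions, not real TPC or AWS prices.

def npv_pay_as_you_go(monthly_cost, months=36, annual_rate=0.08):
    """Net present value of a stream of equal end-of-month payments."""
    monthly_rate = (1 + annual_rate) ** (1 / 12) - 1
    return sum(monthly_cost / (1 + monthly_rate) ** m
               for m in range(1, months + 1))

upfront = 360_000   # hypothetical prepaid cost for 3 years of hardware/licenses
monthly = 10_000    # hypothetical cloud bill with the same nominal 3-year total

cloud_npv = npv_pay_as_you_go(monthly)
print(f"Prepaid, paid today (NPV): ${upfront:,.0f}")
print(f"Pay-as-you-go (NPV):       ${cloud_npv:,.0f}")
```

Even though both add up to $360,000 nominally, the discounted pay-as-you-go stream comes out meaningfully cheaper, which is exactly why an apples-to-apples comparison needs the NPV step.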
The cloud provides a normalized cost structure that reflects a more realistic total cost of ownership (TCO). It bundles costs like networking, personnel, electricity, and internet access into a pay-per-use model. But most importantly, we can measure transactions per instance and then elastically scale the number of instances as needed. This elastic pricing model gives us a real-world cost scenario, instead of assuming that a single server runs at 100% capacity, which never happens in the real world.
For these reasons, I would like to see TPC create a category of benchmarks that measure cost/performance on standard cloud infrastructures.
Here is a link to Nobu, who ran TPC-C on Amazon’s RDS.