Jun 15 2011
SAPPHIRE 2011 Wednesday Keynote – HANA, HANA, and More HANA
Author Note: I realize that SAPPHIRE is old news by now, but I felt this post still had enough to offer that I would finish and publish it.
As a technical guy myself, I tend to prefer the SAP Business Objects conference or SAP TechEd over SAPPHIRE, mostly because I find more technical content at those events. However, the Wednesday keynote address from Vishal Sikka and Hasso Plattner of SAP certainly gave me plenty to chew on from a technical perspective.
Vishal Sikka
Vishal kicked off the keynote talking about HANA, and continued that theme throughout his entire (long!) presentation. In a prior post about the conference I answered the question, “what is HANA, exactly?” very simply: HANA is a database. It can be presented in a number of different ways, but ultimately that’s the function that HANA provides. I don’t install HANA to provide new functionality. In order to do anything with it, I need what I have started calling “HANA Plus One” instead. The “plus one” can be Web Intelligence, Xcelsius, or any other query tool. It can also be application code. HANA is an accelerator or an enabler. With HANA I can do the same things I did before but much faster. Or quite possibly I can now do something I wasn’t able to do before because the process took too long. (True story: A very long time ago I was asked to optimize a daily report that was taking 20+ hours to run. By the time the report was finished it was too late. With a few report tweaks and one additional database index I got the report down to 20 seconds.)
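To make the "HANA Plus One" idea concrete, here is a minimal sketch of what the "plus one" might look like when it's application code rather than a reporting tool. This is purely illustrative: the ODBC DSN, credentials, table, and column names are all hypothetical, and any generic SQL client would do. The point is simply that HANA is queried like any other SQL database.

    import pyodbc  # any generic ODBC/SQL client would work here

    # Hypothetical DSN and schema -- placeholders, not a real system.
    conn = pyodbc.connect("DSN=HANA_DEMO;UID=report_user;PWD=secret")
    cursor = conn.cursor()

    # The same aggregate query I might run against Oracle or Teradata;
    # nothing HANA-specific about the SQL itself.
    cursor.execute("""
        SELECT region, SUM(net_sales) AS total_sales
        FROM sales_facts
        GROUP BY region
        ORDER BY total_sales DESC
    """)

    for region, total in cursor.fetchall():
        print(region, total)

    conn.close()

The "plus one" is whatever sits on top; swap in Web Intelligence or Xcelsius and the database underneath behaves the same way, just faster.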
Vishal also talked about a HANA system costing about a half-million dollars that was able to handle 460B (that’s a “B” as in billion) database records. He called it the “perfect non-disruptive technology” because it bolts onto the back end of BW or Business Objects or… whatever. Stuff that worked before still works, it just works faster. (HANA supports both MDX and SQL query languages.) His presentation continued with a lot of customer testimonials, such as one from Colgate-Palmolive. They use HANA for sales planning and profitability analysis; processes that used to take well over an hour ran in 13 seconds. He also showed customer testimonials related to analyzing traffic for a taxi company in Japan and margin analysis for Infosys. There were probably a dozen or more testimonials in all; after a while I got bored and stopped taking notes because they were all variations on a theme.
At one point Vishal appeared to want to emphasize that HANA was not developed in a vacuum; they used partners whenever possible. He gave as an example the history of innovation with Intel to optimize CPU logic and memory management. Intel now has chips with 10 cores, 30MB of cache, and support for up to 4TB of main memory. The partnership with Intel is a win-win. SAP gets a platform to sell more software, and Intel gets demand for top-line hardware.
Towards the end of his session Vishal mentioned that Adobe was going to be using a HANA-based system to evaluate what he called “unintended license usage” of their products. 🙂 They didn’t share details about that, of course.
Hasso Plattner
Hasso then took the stage, after Vishal had run well over his allocated time. Hasso was quite the comedian! He commented that since Vishal took much of his time, he would have to present everything in accelerated fashion, and coined the term “HANA Speak” followed by a bunch of gibberish that sounded like the Chipmunks after espresso. I saw more than one tweet related to that joke. 🙂
Hasso took some prearranged questions via video clips. The one I was most interested in (and therefore took notes about) was about how reliable a HANA-based system could be, considering it’s all based on RAM. Hasso had a very good answer… he basically asked why we should care. 🙂 The concept of caching is not new. I remember buying products years ago that made use of my personal computer’s RAM to cache commonly accessed information. Today my CPU has a cache, my hard drive has a cache, my video card has a cache, heck, I would not be surprised if my cache has a cache! Hasso’s point was that all systems today will cache data, but we’re relying on the database to be smart enough to manage that process. HANA doesn’t try to be smart. Instead it takes the brute-force approach and caches everything.
When my Oracle or Teradata (or any other) system is power cycled, the cache starts out empty and has to be reloaded. If a HANA system is powered off and back on again, the cache also has to be reloaded. It’s just a bigger cache.
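As a toy illustration of that distinction, here is a rough Python sketch (names made up, and emphatically not how HANA is actually implemented) contrasting a “smart” read-through cache, which loads rows on demand and has to guess what is worth keeping, with the brute-force approach of loading everything into RAM up front. Either way, a restart starts from an empty cache.

    # Toy illustration only -- not HANA internals.

    class ReadThroughCache:
        """The 'smart' approach: fetch on miss, keep what seems hot."""
        def __init__(self, backing_store):
            self.store = backing_store   # e.g., rows sitting on disk
            self.cache = {}

        def get(self, key):
            if key not in self.cache:              # cache miss
                self.cache[key] = self.store[key]  # slow disk read
            return self.cache[key]                 # fast from here on

    class LoadEverythingCache:
        """The brute-force approach: pull it all into RAM at startup."""
        def __init__(self, backing_store):
            self.cache = dict(backing_store)       # one big load on boot

        def get(self, key):
            return self.cache[key]                 # always a RAM hit

    # After a power cycle, both caches start empty and must be reloaded;
    # an in-memory system simply reloads a (much) bigger cache.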
Hasso then proceeded to show three different HANA systems. The first was HANA running on a Mac Mini (he called it a Mini-Mac, which inspired several folks around me to make McDonald’s references). By the way, don’t bother looking for the Mac Mini on the hardware support list. 🙂 It runs a custom port of the code which is not (to my knowledge) commercially available. But Hasso wanted to talk about the fact that entire companies were running on this system. Imagine being able to run your entire company application suite on hardware that small! The next system Hasso showed was a rack with some pizza boxes (blades), which is probably the more typical configuration. Finally, we went live to a data center that showed (if my memory is correct) a 1,000-CPU system. Impressive stuff, all done to show the scalability of HANA.
Keep in mind what I said earlier though… HANA isn’t a product that helps you make better business decisions, build prettier graphs, or deliver mobile solutions. It is an accelerator / enabler for all of these. It’s a database. Along those lines, SAP is taking steps to be able to run their ERP on top of HANA. The first module I heard about for this was their planning application. As Hasso put it, planning is all about seeing the future. HANA gives you brighter headlights so you can see farther and drive faster.
One final moment of amusement came when Hasso was looking at one of his slides and describing the various elements, and could not remember what the “ASIS” acronym stood for. Then he remembered that he was speaking in English and was able to put “as-is applications” into context. 🙂 That goes back to Vishal’s comment about HANA being non-disruptive. In a theoretical situation, I don’t have to change a single thing about my business processes, reporting, or mobility solutions in order to use HANA. Anything currently in place will continue to run as it currently stands (as is). Nice idea in theory, of course, but I’m sure that in practice there might be one or two things that need tweaking. 🙂
Conclusion
As I mentioned previously, the three hot items for the conference this year were mobility, cloud, and in-memory computing. Mobile devices require fast response times or they’re not useful. I can’t sit at a client site for 30 minutes waiting for a report to run or an analytic to refresh; I need it now. HANA helps me get there, and SAP is obviously very proud of their technology. I expect to hear (and see) quite a bit about this over the coming years. Perhaps even to the point where we can paraphrase Scotty (from Star Trek IV) and say:
Hard disks, how quaint.
Thanks a lot for this post.
It makes HANA a bit clearer.
Love your final quote. 🙂
Thanks for the post.
I like the way you narrated.
Keep in mind that there are also different terms in use: SAP In-Memory Appliance (SAP HANA), SAP In-Memory Computing (HANA).
HANA seems to be THE next big thing and is more than just a fast, in-memory, column-oriented, highly parallelized, compressed database.
See also: http://www.sap.com/platform/in-memory-computing
Good post. I rolled off of an engagement earlier this year where my focus was primarily Business Objects Explorer, a BI 4.0 primer for the client’s staff, and HANA (High-performance Analytic Appliance; HANA, get it?) 1.0 on the back end. I’ll break my observations down into 3 categories.
The Good:
HANA definitely delivers on its promise of high performance….I mean…WOW…do things move once you get them into HANA. Even when you put a universe over it (universes put a little I/O back into the equation when you consider that data is written to “core files” and then loaded into a micro cube).
Case in point: I was doing ETL testing; nothing extraordinary, just making sure sources equal targets. I used the merged-dimensions capability of WebI to speed things up, so I built a quick-and-dirty universe over Oracle and another over HANA (yes, this can be done in IDT, but I was under a deadline). The source was an Oracle 10g standard view and the target was HANA. After making sure the ETL guys did it right, I looked at the run times. Here’s what I saw: Source: 6 minutes, Target: 1 minute. 6 times faster!
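For anyone curious what that kind of source-equals-target check boils down to, here is a rough Python sketch of the idea (I actually did it with WebI merged dimensions, not a script, and the DSNs and table names below are hypothetical):

    import pyodbc  # assumes ODBC data sources exist for both systems

    def row_count(dsn, table):
        """Count rows in one table via a generic ODBC connection."""
        conn = pyodbc.connect(dsn)
        try:
            cur = conn.cursor()
            cur.execute("SELECT COUNT(*) FROM " + table)
            return cur.fetchone()[0]
        finally:
            conn.close()

    # Hypothetical DSNs: Oracle source view vs. HANA target table.
    source = row_count("DSN=ORA_SOURCE", "SALES_V")
    target = row_count("DSN=HANA_TARGET", "SALES_FACTS")

    print("source rows:", source)
    print("target rows:", target)
    print("match" if source == target else "MISMATCH -- check the ETL")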
The Bad:
-Integration Gaps
HANA has its own metadata. You use its module called “Modeler” to build “Analytic Views.” You can build WebI dataproviders directly over Analytic Views instead of a universe; same goes for Explorer Information Spaces. Here’s the problem with Explorer and HANA: To restrict an Explorer Information Space, you have to use an object qualified as a Pre-Defined Condition. Universes have ’em, Analytic Views don’t. Explorer won’t let you put a Dimension in the Filters section.
The only way I found to do it (after speaking with SAP’s “War Room” in Germany) was to put a filter on a column in the Analytic View itself. Here’s the problem: I’d have to spawn umpteen versions of the same view with different filter values to meet the requirement. Nah, shouldn’t be a maintenance problem….
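To see why, picture scripting the workaround: one near-identical view per filter value, and every schema change repeated across all of them. A purely hypothetical sketch (made-up names, SQL generated as plain text):

    # Hypothetical sketch of the maintenance explosion: one copy of the
    # same view definition per filter value. Imagine dozens of values.
    regions = ["NA", "EMEA", "APJ", "LATAM"]

    for region in regions:
        ddl = (
            "CREATE VIEW SALES_AV_" + region + " AS "
            "SELECT * FROM SALES_AV WHERE REGION = '" + region + "'"
        )
        print(ddl)  # N views to create, N views to alter on every change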
-Version 1 Software
Well, as with any new software there are bound to be bugs, and HANA is no exception. A lot of functions in the Modeler module don’t work at all or don’t return what you would expect. It’s also very basic in terms of overall functionality, which, I suppose, makes sense if you’re gunning for lightning speed.
The Ugly:
HANA was probably released to the market earlier than it should have been. When I rolled off this engagement earlier this year, SAP was on Revision 26… or was it 30? I lost count, since we were averaging about 2 revisions a week. There were also some issues between Data Services (formerly known as BODI) and HANA, although I can’t say exactly what.
What I’ve suggested to clients is to either wait for version 2 or, if they can’t wait, give Oracle’s Exadata a peek.
Hi, Brian, and welcome. Glad to hear from you again. 🙂 Thanks for your thoughts on HANA.
Thank you, Sir. Good to be here!