* Posts by Chris Schmid

23 publicly visible posts • joined 17 Feb 2011

HDD prices to remain 'inflated' until August

Chris Schmid

It is just pure economics...

This was entirely predictable, as early as December 2011:

http://bluside.wordpress.com/2011/12/09/a-black-swan-making-the-world-green/

Businesses are just egoistically driven by maximizing profit; that is the system we chose. So you only need to do the math, and with only 3 players left in the game, it was crystal clear that this would happen. Who wants to live off micro margins once they realize that demand for storage is inelastic? By the way, they are not even to blame; it's all of us who think storage must become a commodity or even a free good. It is just pure economics...

Dell sprays hot fluid data across storage products

Chris Schmid

This says it all

Quote :

"The compression engine available of the DX6000 has been integrated with the DocAve app, and customers can set policies to use fast or slower but stronger compression in the object store."

This says it all. The Ocarina compression inside Dell is NOT native. It needs to be integrated into every app relying on the DX6000, otherwise there are no benefits. For SharePoint, this implies that as soon as files are migrated to other, non-Dell systems, they need to be rehydrated again.

True Native Format Optimization technology such as ours (which is also available as a truly independent SharePoint infrastructure solution) doesn't do that, and its benefits are immediate, permanent and, above all, agnostic to storage and applications.

Chris Schmid, COO balesio AG

NetApp will jack up disk prices next month

Chris Schmid

Checked your dedupe ratios for unstructured files yet?

Dedupe is good for backup and for less frequently accessed files. Dedupe on pre-compressed content such as a JPEG file results in 0% space gain, and the process of decompressing, deduping the blocks, recompressing and writing again is just too cumbersome. So when you turn on dedupe on NetApp (or any other vendor), you see scarce results for frequently accessed unstructured files like Office, PDF and image data - but that is the stuff that grows more than anything else.
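To see why pre-compressed content leaves so little to gain, here is a minimal sketch (plain DEFLATE on made-up sample data, not NetApp's dedupe engine):

```python
# Minimal sketch, not any vendor's dedupe engine: re-compressing content that
# is already compressed (random bytes stand in for a JPEG payload) gains
# almost nothing, while uncompressed, repetitive text shrinks dramatically.
import os
import zlib

def compression_gain(data: bytes) -> float:
    """Space gained by DEFLATE-compressing the data, in percent."""
    return 100.0 * (1 - len(zlib.compress(data, 9)) / len(data))

jpeg_like = os.urandom(1_000_000)                                    # ~1 MB "JPEG" payload
text_like = b"quarterly report boilerplate, unchanged\n" * 25_000    # ~1 MB of plain text

print(f"pre-compressed content: {compression_gain(jpeg_like):5.1f}% gain")  # ~0%
print(f"uncompressed text:      {compression_gain(text_like):5.1f}% gain")  # >95%
```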

NetApp is after profit, so let's be clear: they would not give dedupe away for free if it saved you 50% or more of your storage space. They give it to you "for free" because they know that even with it turned on, your data still grows and you will still need more storage in the future. So that you don't look around for solutions that really help with your data growth, they give you this "patch" for free.

Chris Schmid

Time to think about storage utilization

If companies don't start thinking about storage utilization now, they will still buy the 15% more expensive disks because they have no other choice. SSDs won't become a real alternative as quickly as companies would like them to. And once storage companies, including NetApp, realize that HDD demand is rather inelastic to a 15% price increase, they won't start a race to the bottom again and will keep prices up for as long as possible.

With advanced data reduction technologies such as our Native Format Optimization, you can store 2-5 times more files on your existing disks and become more storage efficient. You need fewer HDDs and are less exposed to these price increases.
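A back-of-the-envelope sketch of that exposure, with purely illustrative numbers (100 TB of file data, 2 TB drives, a 15% price rise, and a 2.5x reduction factor picked from the 2-5x range above):

```python
# Illustrative arithmetic only: how a 2.5x data reduction changes exposure
# to a 15% HDD price increase. All figures are assumptions.
import math

data_tb = 100                       # primary file data before optimization
drive_tb = 2                        # capacity per HDD
price_after_hike = 100.0 * 1.15     # price per drive after a 15% increase
reduction = 2.5                     # within the 2-5x range quoted above

drives_plain = math.ceil(data_tb / drive_tb)                  # 50 drives
drives_optimized = math.ceil(data_tb / reduction / drive_tb)  # 20 drives

print("spend without optimization:", drives_plain * price_after_hike)      # 5750.0
print("spend with optimization:   ", drives_optimized * price_after_hike)  # 2300.0
```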

So why not start improving your storage utilization now, get really "green" and effectively reduce your data?

Imagine you had an old car which consumes 20 litres per 100 km, and you could get an add-on which lets you drive 200 km on those 20 litres. Wouldn't you rather get that now instead of waiting for the electric car in 5 years (when electricity will probably cost as much as gas does now)?

Thoughts?

Chris Schmid, COO balesio AG

EMC hikes drive prices, blames Thai flood tragedy

Chris Schmid

Time to look out for a different approach to storage utilization

Companies should not accept this and should look for other ways to increase storage efficiency and storage utilization. There are really great technologies out there - dedupe, thin provisioning, NFO (Native Format Optimization), etc. - that can make you less dependent on buying more and more storage.

It is kind of funny to see so many companies being reluctant to adopt new technologies. Consumers who buy a new car would hardly accept that the new one consumes as much gas as the old one, yet IT departments seem to have settled for the fact that they just need more storage and have counted on falling storage prices for too long...

It is time for corporate IT departments to make a real move into "green IT"!

Chris Schmid, COO balesio AG

HP also jacks up disk prices in Thai flood wake

Chris Schmid

Time to look out for a different approach to storage utilization

Companies should not accept this and should look for other ways to increase storage efficiency and storage utilization. There are really great technologies out there - dedupe, thin provisioning, NFO (Native Format Optimization), etc. - that can make you less dependent on buying more and more storage.

It is kind of funny to see so many companies being reluctant to adopt new technologies. Consumers who buy a new car would hardly accept that the new one consumes as much gas as the old one, yet IT departments seem to have settled for the fact that they just need more storage and have counted on falling storage prices for too long...

It is time for corporate IT departments to make a real move into "green IT"!

Chris Schmid, COO balesio AG

Swiss-based Balesio takes the knife to PDF files

Chris Schmid

Native Format Optimization technology

What we do is apply our Native Format Optimization (NFO) technology to PDF files. Our technology is composed of a comprehensive set of content-aware native optimization algorithms developed especially for unstructured file formats such as Microsoft Office files, PowerPoint presentations, PDF files and images.

More info and a technical White Paper can be found here:

http://www.balesio.com/technology/eng/native-format-optimization.php

Chris Schmid, COO balesio AG

NetApp loses ground again in IDC's Storage Tracker

Chris Schmid
Thumb Up

Lack of innovation...

It seems that some storage players were just too confident in their early success. They didn't innovate at all, and customers accepted that up to a certain point - until their IT budgets could no longer keep up with their explosive data growth.

Now the situation has changed dramatically: customers ask for more than just a naive storage system; they want to be as efficient as possible with thin provisioning, dedupe, NFO, HSM, etc.

If you will, customers have an implicit interest in becoming as "green" as possible in IT and stopping all waste and inefficiency.

So this "trending down" we see is actually good for the world...

Chris Schmid, COO balesio AG

Flash prices FALL

Chris Schmid

Price economics...

As an economist, a few thoughts on this topic...

First, let's agree that demand for HDDs is rather inelastic, meaning no big substitution effects will appear in the short to mid term.

Desktop users might prefer SSD over HDD in some respects, but price prevents this from happening at large scale. SSD manufacturers are aware of the ruinous race to the bottom among HDD manufacturers, which all have low to zero margins and are extremely exposed to the market. Making disks alone is a very unprofitable business. So SSD manufacturers will not enter a price war the way HDD manufacturers did; the consequence is that SSD prices will likely stay substantially higher than HDD prices.

Now, let's look at the HDD situation. It is a market with only a few players left, all operating at large scale. Before Thailand happened, they were in a race to the bottom, eroding margins and forcing market concentration. Now this external shock makes prices rise. Demand is inelastic, which means that after a short period during which companies can resist buying new disks, they will be forced to buy disks - HDDs specifically - even at high prices, because:

- the majority of disks are sold to corporations

- it is an illusion to think corporations can swap HDDs for SSDs as easily as a desktop user exchanges a laptop for a new one with an SSD.

Now HDD manufacturers realize that they will be able to sell similar quantities of disks at a substantially higher price level. They realize they can make better profit and maintain higher margins. The market will try to force them to reduce prices, but the BIG question is: with probably no new HDD competitors entering and SSD adoption not foreseeable in the near future, will the HDD manufacturer oligopoly really enter AGAIN into a race to the bottom where they don't want to be? This external shock is an opportunity for HDD manufacturers, even more than for SSD players, to get back to healthy margins and a profitable business...
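A tiny worked example of that point, with an assumed price elasticity of -0.3 (any value between 0 and -1 counts as inelastic) and an assumed 20% price rise:

```python
# Worked example with assumed figures: under inelastic demand, a price rise
# lifts revenue even though unit sales dip slightly.
elasticity = -0.3        # assumed own-price elasticity of HDD demand
price_increase = 0.20    # assumed 20% price rise after the supply shock

quantity_change = elasticity * price_increase                      # -6% units sold
revenue_change = (1 + price_increase) * (1 + quantity_change) - 1  # net revenue effect

print(f"quantity: {quantity_change:+.1%}")   # -6.0%
print(f"revenue:  {revenue_change:+.1%}")    # +12.8%
```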

HP shows off filer and dedupe monsters in Vienna

Chris Schmid

Netapp, HP, etc. miss the root of the problem

NetApp SIS is good for backup, where all the duplicate data resides, as is dedupe technology in general. However, if you are honest, you need to ask the question: where does all this data that I need to back up come from? Right, it is primary storage, and there mostly unstructured data, the fastest-growing segment of all. SIS and other dedupe technologies cannot effectively handle unstructured data; dedupe ratios on unstructured data sets are poor - why?

a) unstructured data represents pre-compressed content

b) unstructured data is similar, but not identical

c) unstructured data is highly active, moves a lot

The root of the problem is the unstructured data which accumulates on primary storage. You need to tackle this problem as early as file creation, and you will see that these primary storage reductions result in lower backup requirements as well...
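Points a) and b) can be seen with a small simulation (not any vendor's dedupe engine; the file contents and block size are made up): two documents that share the same embedded media but differ by a small edit share almost no identical fixed-size blocks once they live in a compressed container.

```python
# Minimal simulation: a short edit shifts and re-compresses everything that
# follows inside a zip-style container (docx/pptx), so fixed-size block
# dedupe finds almost nothing in common between the two versions.
import hashlib, io, os, zipfile

SHARED_MEDIA = os.urandom(400_000)   # stands in for images embedded in both files

def make_doc(edited_text: bytes) -> bytes:
    """Build a zip container, like an Office file: edited text part + shared media."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("document.xml", b"common boilerplate " * 50 + edited_text)
        z.writestr("media/image1.bin", SHARED_MEDIA)
    return buf.getvalue()

def block_hashes(data: bytes, block: int = 4096) -> set:
    """Fingerprints of fixed 4 KB blocks, as simple block dedupe would see them."""
    return {hashlib.sha256(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)}

a = block_hashes(make_doc(b"short edit by user A"))
b = block_hashes(make_doc(b"a much longer edit saved by user B " * 10))

print(f"blocks per file: ~{len(a)}, identical across both versions: {len(a & b)}")
```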

I wonder why companies like HP and NetApp do not realize that dedupe just treats the symptoms, in an effort to sell customers ever bigger, more expensive machines instead of offering a true, working solution....

Chris Schmid, COO balesio AG

World may be short 70 MILLION disk drives

Chris Schmid

This is like a Black Swan for the storage industry...

...and for all the naive players who keep up their credo and their lobbying of customers that storage will become cheaper and cheaper. Now is the right time to be really "green" in IT and stop wasting storage. Companies can, as Len pointed out, reduce their storage requirements by 50% or more using the right technologies such as virtualisation, thin provisioning, dedupe on backup and content-aware native format optimization (NFO) on primary storage.

Black swans exist - thank you, Nassim Taleb. The flooding in Thailand is a tragedy and a human catastrophe, but in the end it will contribute to greener IT...

Chris Schmid, COO, balesio AG

NetApp's STEALTH launch of ONTAP 8.1

Chris Schmid

big, bigger, biggest...

When will one of the storage vendors finally do something about data growth? All they do is put out bigger and bigger systems; nobody, not even IBM, thinks beyond the topic and works on solutions for storing information more efficiently....

BridgeSTOR in rash NAS cash splash

Chris Schmid
FAIL

Dedupe for unstructured data....

"...Just think about a dozen office workers storing a copy of an email attachment on their home directories."

Well, those "unstructured" files are not deduped because they are pre-compressed files. To get to the blocks, the card would need to decompress these files and reassemble them after dedupe - a whole mess and a major performance hit that no system would dare to take on. So the dedupe here is limited to non-pre-compressed unstructured files, which are not that frequent... Result? Zero effect from this deduplication on stored email attachments...

There is only one thing you can do to reduce unstructured files on primary storage and that is native format optimization. The truth is simple sometimes...

Quantum mid-range DXi crushes Data Domain

Chris Schmid
Happy

That is a big confession...

Isn't that the same story as with storage itself? With storage, prices fall every year, so you get more for less. Now it seems that with dedupe, you get faster dedupe for less every year...

So, what does this trend tell us?

If dedupe must be soooo fast, then apparently unstructured primary data grows even faster; otherwise you would not need that speed.

So in other words - and all these releases about dedupe speed confirm this - dedupe is no answer to unstructured primary data growth, as it cannot flatten or even reduce the unstructured data; otherwise there would be no need for ever faster dedupe throughput, right?

So this is great news: it is the big guys' confession that they can do nothing about unstructured data growth and have absolutely no solution for it.

To all who struggle with growing data: look at the source of the problem, learn about new technologies such as native format optimization and reduce your unstructured data before putting another expensive box in your data center...

Arkeia bigs up sliding windows dedupe

Chris Schmid
Thumb Down

NEVER dedupe unstructured data...

...because the doc or xls file already contains pre-compressed content; further dedupe just increases performance requirements, while the realized storage savings will typically be low and will not justify the investment.

Native Format Optimization which preserves the file format can achieve 40-75% data reduction but without the disadvantages of dedupe (single point of failure, performance) and is the best way to tackle unstructured data - dedupe was made for backup, not for unstructured data.

And I would like to understand: if the PPT file is deduped and even 50% of space is saved because one copy is stored instead of two, how the hell is this generating savings in the network? The single file is still X MB in size and is still pushed as a whole through the line...

Dell's dedupe story still unfolding

Chris Schmid

Re: ID

I did not want to do a commercial here; rather, I want people to change their perspective on data reduction and focus on the source, which is the single file. I am sure we can help you a lot with your efforts to reduce your data, and of course we don't sell magic in a black box... you are welcome to contact me if interested...

Chris Schmid

yes you can...

Native format optimization technology such as Ocarina's or balesio's is able to reduce the file sizes of pre-compressed content efficiently by applying more intelligent methods than what standard document creators (people or applications) usually do, and that is the great advantage. However, you need to do it right. Ocarina does it ACROSS files, mixing up dedupe with optimization and ending up needing a reader... Dell's big problem now, although they won't admit it. balesio stays INSIDE the file, offering true native format optimization which is compliant...

Generally, NFO is able to compress pre-compressed content further; we have seen this with clients where image data shrank from 85 TB down to 30 TB without a reader...

Chris Schmid

Not truly native...

The challenge Dell is seeing now with Ocarina is that the Ocarina technology is not truly "native" - it requires a reader if you wish to get significant data reduction, as much of the optimization technology works ACROSS files and not INSIDE them. Truly native format optimization happens INSIDE files without needing a reader; leaving the original file 100% intact is mandatory for reducing active data.

It is an illusion to think other storage platforms will integrate the Ocarina reader, which would nevertheless be necessary if Dell doesn't want to lock end users into the technology.

Storage DEATHMATCH: Permabit v Isilon

Chris Schmid

native format optimization...

Jim, good observation, but compression cannot be the answer because the data needs to be rehydrated (no matter how powerful the machine is)...

Native format optimization is the answer, as NO rehydration is needed, EVER, and data is reduced by 40-75% through optimization of the content INSIDE the files... This works especially well for big data, where dedupe doesn't add benefits. It resolves not only the storage problem but also the network problem: without any decompression, the individual data is permanently smaller, not only on disk but also on the network...

Chris Schmid

Change the perspective...

End users dealing with big data, or not-so-big data, should change their perspective: where is the big data volume coming from?

It is the result of the massive number of single files created. Attacking this problem by reducing single file sizes through advanced technologies such as native format optimization provides significant savings throughout the data lifecycle without tweaking the existing infrastructure....

It is not so much a question of choosing between an old Buick and a hybrid car. Both cars, if they are fully packed, need a lot of energy to get from A to B. If the same information content weighs 50, 60 or 70% less, both cars need less energy (gas or hybrid) to get from A to B... It is time to change the perspective: the engine is not the problem, the excess luggage is.

Problems with primary data cloud storage

Chris Schmid

Reason for latency

The reason for latency is, in the end, the single file size. Getting a 20 MB file takes longer than getting a 3 MB one: 3 MB from the cloud or a data center 500 miles away can be retrieved faster than the same file at 20 MB from your local servers.
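A rough back-of-the-envelope sketch of that claim; the round-trip times and throughput figures below are assumptions for illustration only:

```python
# Illustrative arithmetic: fetch time = round-trip latency + size / throughput,
# so a 3 MB optimized file from a distant site can arrive before a 20 MB file
# from a nearby server. All figures are assumptions.
def fetch_time_s(size_mb: float, rtt_ms: float, throughput_mbit_s: float) -> float:
    return rtt_ms / 1000 + (size_mb * 8) / throughput_mbit_s

local_20mb = fetch_time_s(20, rtt_ms=1, throughput_mbit_s=100)    # nearby server
remote_3mb = fetch_time_s(3, rtt_ms=40, throughput_mbit_s=100)    # 500 miles away

print(f"20 MB, local server:       {local_20mb:.2f} s")   # ~1.60 s
print(f" 3 MB, remote data center: {remote_3mb:.2f} s")   # ~0.28 s
```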

Here, Native Format Optimization from providers like balesio comes into play. Getting single file sizes down through natively optimizing the contents provides huge advantages in terms of latency.

New balesio appliance liposuctions fat out of files

Chris Schmid

No, it is not

We read and optimize the unstructured files (e.g. a PPTX file, a PPT file or an image) in their binary form; we don't need the application.

The technology is able to look inside these unstructured files and performs its content-aware optimization process within each single file, completely independently of the storage resource. No need for the application.
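As a minimal illustration of the general idea (this is not balesio's actual technology; the paths, quality setting and JPEG-only filter are assumptions): a PPTX is a ZIP container, so embedded JPEG media can be re-encoded in place and the deck written back out in its native format, with no special reader needed to open the result.

```python
# Illustrative sketch only, not balesio's NFO: re-encode oversized embedded
# JPEGs inside a PPTX (a ZIP container) and rewrite the file in its native
# format. Any PowerPoint can still open the result directly.
import io
import zipfile
from PIL import Image   # pip install Pillow

def optimize_pptx(src_path: str, dst_path: str, jpeg_quality: int = 75) -> None:
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            name = item.filename.lower()
            if name.startswith("ppt/media/") and name.endswith((".jpg", ".jpeg")):
                out = io.BytesIO()
                Image.open(io.BytesIO(data)).save(out, format="JPEG",
                                                  quality=jpeg_quality, optimize=True)
                if out.tell() < len(data):   # keep the re-encoded version only if smaller
                    data = out.getvalue()
            dst.writestr(item, data)         # same entry name, optimized payload

# Hypothetical usage:
# optimize_pptx("quarterly_review.pptx", "quarterly_review_optimized.pptx")
```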

Startup offers penalty-free file data reduction

Chris Schmid

True and not true

Larry,

true and not true.

1) "Scaling back" is not what we are doing because it would mean we treat every object in the same exact way. No, what we are doing is recognizing the contents (if you wish "interpreting" correctly the elements and objects there) and optimize them according to what they are. the result is a visually lossless file. If we were to scale back attributes, we would not be visually lossless.

2) True, that is a customer comment. What is the true ratio for these kinds of unstructured files with internally compressed content (PowerPoint, images, etc.)?

3) It is penalty-free because you do not need a reader or any rehydration of an optimized file. The optimization itself costs performance, but only once. Once the file is optimized, it is smaller and never needs to be rehydrated again by any application or system. And a smaller file is also loaded faster, so after the optimization less performance is required for handling that file.

In general, our approach is totally different from dedupe. We don't look across files but INSIDE files to optimize capacity. By doing so, we create an open form of capacity savings, and users can do primary dedupe and everything else in the same way after optimization.

Best,

Chris