Symbolic IO CEO insists the IRIS i1 is more than a bunch of pretty lights

The TechLive session revealed Symbolic IO's IRIS i1 box, and its tech continues to raise lots of questions. How do the components work? Why have certain design choices been made? How does the IRIS system compare to other servers? We asked the company lots of questions about its technology, and CEO and founder Brian Ignomirello …

  1. This post has been deleted by its author

  2. Jered

    Checked the calendar...

    We're still 4 weeks out from April 1 for US readers.

    Does global warming mean that the UK silly season is now starting in March?

  3. jake


    Or, if you prefer, "If it sounds too good to be true ... ".

    IOW, don't TELL me. SHOW me.

  4. tomjoyce64

    OK, I'll byte

    I never post on here (first time caller, long time listener) and I am not an engineer, but I will risk an observation or two. I have only seen a couple of these articles about Symbolic IO, so I don't know much yet. I think the launch of this company will drive a ton of skepticism based on the claims and tone of the announcement, but I am going to err on the side of optimism for the moment, until I know more....

    It sounds like a key piece of the product is encoding to stuff more through a memory bus into the CPU and back, to get full value out of the obscene performance advantage of modern CPUs relative to everything else in the server/storage stack. They seem to be saying that this is not just compression but a different class of algorithm, like Huffman encoding or stuff along those lines that the HPC people have talked about. So it comes down to the quality and defensibility of the math that the Symbolic IO engineers have done to implement this encoding, as well as things like how efficient it is and error handling. I don't know that they need to have invented the equivalent of cold fusion for this to be of value. Rather, if they have implemented some known academic techniques in a production computer for the first time, and potentially figured out some of the practical gotchas, then this might be pretty cool.
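    For readers unfamiliar with the class of algorithm being referenced, here is a minimal Huffman coder in Python — my own textbook illustration of entropy coding, not anything to do with Symbolic IO's actual (undisclosed) implementation:

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a Huffman code table from the byte frequencies in `data`.

    Frequent bytes get short bit strings, rare bytes get long ones --
    the textbook technique the comment above alludes to.
    """
    freq = Counter(data)
    # Heap entries are (frequency, tiebreak, {symbol: code-so-far});
    # the unique tiebreak keeps the dicts from ever being compared.
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate: one distinct byte
        return {sym: "0" for sym in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)     # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def huffman_encode(data: bytes) -> str:
    """Encode `data` as a string of '0'/'1' characters."""
    table = huffman_codes(data)
    return "".join(table[b] for b in data)
```

    On skewed data (e.g. six `a`s, two `b`s, one `c`) the output shrinks from 72 raw bits to 12 coded bits; on random data it does essentially nothing, which is why the defensibility of any "beyond compression" claim matters.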

    Another part seems to be an outboard processor of some sort that handles or assists this encoding and plugs into a DIMM slot. I am not sure I quite have a handle on what this does yet, so I won't try to comment, but it seems like an interesting and creative design choice.

    Then you have the matter of the work they have had to do to qualify and work with a bunch of new and not-yet-standardized non-volatile memory types, and probably do a ton of BIOS work and other pain-in-the-butt kind of stuff to make their contraption work. Sounds like the early days of storage networking, when nothing worked together.

    If the result is super fast, costs less and is reliable, then they are on to something. Better still if the outboard co-processor and other parts are deployable on some other server vendor's standard gear. The key is how good and unique that symbolic encoding math piece is. I am going to bet for the moment that there is something good there. I am also going to look past the awkward marketing; I am not sure the blinking lights, snazzy names for everything, and unproven claims based on un-named customers are helpful (but they did get my attention more than if they said they had memory bus encoding algorithms....)

  5. Steve Chalmers

    Perhaps this isn't so complicated

    Seems to me this is a server, integrating a storage system, which compresses data and then puts it in DRAM which is backed up to flash either continually or at power fail. So storage I/O happens at memory latency (plus whatever CPU time is used for the compression), and applications which are limited by storage latency run far, far faster.

    This design will be significantly simplified when byte addressable, suitably fast SCM chips are available (and priced right, and reliable, and ...).

    As an old storage system designer, I see two risks Symbolic needs to have mitigated, and it needs to be able to explain the mitigation in terms that are both understandable to a CIO and technically accurate.

    First, when a storage system acknowledges that a write is successful (a SCSI status phase comes to mind), it commits to the world that at no time in the future will a read ever, under any combination of misfortunes, be able to see what was on the disk in that place before this write. In this case, it means that even if the power goes out, the server crashes, a chip dies, or the like in the millisecond following this write, it's durable. This is traditionally really hard to do with DRAM. It is clear the Symbolic folks have put a lot of time and thought into this, in the end building a proprietary NVDIMM-N type module with purpose-built logic and some processing capability on board.
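    That commit rule is the same one any storage programmer faces. A minimal sketch in Python — my own illustration, using `fsync()` on a plain file as a stand-in for whatever DRAM-to-flash flush mechanism the NVDIMM module actually uses:

```python
import os

def durable_write(path: str, payload: bytes) -> None:
    """Only acknowledge a write after the data is on stable media.

    The rule described above: the caller must not be told "success"
    while a power failure in the very next millisecond could still
    roll the data back.  For a file that means fsync(); a DRAM-based
    design needs an equivalent hardware flush before acknowledging.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, payload)
        os.fsync(fd)  # block until the kernel reports the data is on stable storage
    finally:
        os.close(fd)
    # Only now is it safe to return success to the writer.
```

    The latency of that flush step is exactly what a memory-speed storage design has to hide, which is presumably why the module carries its own logic and backup power path.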

    Second, which I would guess is out of scope, is that part of what makes a classic storage array "reliable" is dual controllers, and RAID or mirroring of disks -- and failing independent of any particular server. I think this design's intent is more along the lines of (say) a Microsoft Exchange active/standby configuration of servers, where the application makes sure a current copy of the Exchange data exists on both servers at all times.

    Looking forward to future communication from the Symbolic technical folks which, after the patents are all filed, explains in clear language how the gains are achieved so customers can self-select based on the risks and benefits of the design.

    1. Anonymous Coward

      Re: Perhaps this isn't so complicated

      The patents actually ARE already filed. Some of them are:

      1. Jered

        Re: Perhaps this isn't so complicated

        AC, this is a rather depressing list. I don't think I can bring myself to dig through it all, but let's look at... oh, application US 13/756,921. Ignomirello is trying to patent tokenized compression. You know, that underlying concept below LZ-variants and most other lossless compression.

        I mean look at that first claim:


        1. A method for storing data on a recording medium comprising:

        i. receiving a plurality of digital binary signals, wherein the digital binary signals are organized in a plurality of chunklets, wherein each chunklet is N bits long, wherein N is an integer number greater than 1 and wherein the chunklets have an order;

        ii. dividing each chunklet into subunits of a uniform size and assigning a marker to each subunit from a set of X markers to form a set of a plurality of markers, wherein X equals the number of different combinations of bits within a subunit, identical subunits are assigned the same marker and at least one marker is smaller than the size of a subunit; and

        iii. storing the set of the plurality of markers on a non-transitory recording medium in an order that corresponds to the order of the chunklets.


        So, breaking it down:

        1) Take some data, and break it up into fixed-sized blocks,

        2a) Break up those blocks into smaller sub-blocks, [an unnecessary step, but whatever]

        2b) Assign a token to each unique sub-block, where at least one token is smaller than the fixed sub-block size, [mathematically, of course, this means in random data some will be larger],

        3) Store the tokens.

        That's basically the description of a dictionary coder, leaving out all the actual hard parts. There's no technology development here. Thankfully, the people at the USPTO are very good at stacking you up with prior art in responses, so if this gets granted that independent claim will get saddled with a whole bunch of dependent clauses like "when a self-destructing GPS card is present," or "when the moon is in the seventh house, and Jupiter aligns with Mars".
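        To see how thin the claim is, here is the entire method as literal code — my own toy sketch of steps 1–3, using 2-byte strings as the "subunits" rather than bit fields:

```python
def encode_markers(data: bytes, chunklet_len: int = 4, subunit_len: int = 2):
    """Steps 1-3 of the claim, as a naive dictionary coder:
    1) break data into fixed-size chunklets,
    2) split chunklets into uniform subunits; identical subunits share a marker,
    3) store the markers in the original order.
    Returns (markers, dictionary) -- both are needed to decode, and managing
    the dictionary efficiently is the hard part the claim leaves out.
    """
    assert len(data) % chunklet_len == 0 and chunklet_len % subunit_len == 0
    dictionary = {}   # subunit -> marker
    markers = []
    for i in range(0, len(data), subunit_len):
        s = data[i:i + subunit_len]
        if s not in dictionary:
            dictionary[s] = len(dictionary)   # identical subunits, same marker
        markers.append(dictionary[s])
    return markers, dictionary
```

        Note the pigeonhole problem baked into 2b: once the data contains more distinct subunits than there are short markers, some markers necessarily take more bits than the subunit they replace, so random data expands rather than compresses.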

        But, if you hop over to PAIR, you'll see that this application received a Final Rejection Notice two weeks ago, on 3/3/2017.

  6. chuBb.

    sounds like a challenge

    Unhackable??!!?? Ummm, nah, there will be a way.

    So we are talking about a rack-mount gaming PC case, with a smartphone display on the bezel instead of an ASCII char LCD; magic save-to-DRAM on power loss without using (internal) batteries, so reliant on a UPS, generator, or supercap where a battery should be, and hoping the alternate power source has enough grunt for the task to complete; and potentially a weird proprietary BIOS-level file system/memory manager with Apple-style vendor lock-in, ensuring that when you need a spare the ETA is weeks, not hours.

    Think I will stick with off-the-shelf boxen and wait to see what's left after the vapour has evaporated. If that's anything at all, they will have distilled it down to an add-in card for a standard x86 box, or gone bust...

  7. SirWired 1

    That interview was bizarre

    He's claiming 95% reductions in memory usage and 75% reductions in CPU usage that are somehow universally applicable, and you are asking him about why the module plugs in from the rear? Why there's a *bleep*-ing window on top of the chassis?

    What the hell is going on with you and this company? This is your fourth (at least!) article about these guys, and you have yet to, even once, dig into the most fantastical claims for their product.
