* Posts by kbuggenhout

7 posts • joined 31 Jul 2015

Cisco shoves more GPUs in AI server for deep learning, still doesn't play Crysis


And then some prove that, for machine learning, there is almost no difference between one machine with 8 or 16 V100 SXM2 GPUs and a cluster of 2- or 4-GPU nodes. The efficiency difference is less than 2%, while the cluster is a shitload more flexible at scale: you can use any number of GPUs, adapt the count to the workload, and compose the ML machine you need at job submission time.

Why fork out big $$ for a monolith that will have I/O problems and is tied up with one problem at a time, if you can truly compose the machine you need? I am very wary of these big monoliths. All for agile, modular working.

HPE primes storage networking pipes for NVMe-oF data deluge


Couldn't agree more. With fabrics going to 100 and 200 Gbps, FC is struggling to stay alive. Crippling an NVMe-oF solution by running it over a 32 Gbps link is not a smart idea. Some solutions pack 32 NVMe devices with a theoretical aggregate bandwidth of 32 × 3 GB/s. Even a single NVMe flash drive saturates FC32.
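The mismatch is easy to put in numbers. A back-of-the-envelope sketch using the figures above (3 GB/s is assumed as a typical fast NVMe read rate, and encoding overhead on the FC link is ignored):

```python
fc32_link_GBps = 32 / 8         # 32 Gbit/s FC link ≈ 4 GB/s, ignoring encoding overhead
nvme_drive_GBps = 3.0           # typical fast NVMe drive, per the comment above
drives = 32

aggregate_GBps = drives * nvme_drive_GBps       # raw flash bandwidth behind the fabric
links_needed = aggregate_GBps / fc32_link_GBps  # FC32 links required to carry it all
print(aggregate_GBps, links_needed)  # → 96.0 24.0
```

One drive at 3 GB/s already fills 75% of the link, and the full shelf would need two dozen FC32 links — which is why 100/200 Gbps fabrics make more sense here.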

I don't see the value in an FC SAN anymore. Even a SAS link is 4 × 12 Gbps these days; FC is dying a slow death.

Linux literally loses its Lustre – HPC filesystem ditched in new kernel


This is creating a lot of unnecessary confusion

Imho, and as a person working in the field of HPC, Lustre is still the most used filesystem in HPC systems worldwide. The mention of Spectrum Scale is an odd one, as Spectrum Scale is as old as Lustre; it has no support in the kernel (doesn't need it) but is horribly expensive. We are deploying clusters with Lustre every week. The development direction of Lustre is to make builds that do not depend on specific kernels, but obtain their functionality through kernel modules.

Besides that, Intel stopped its own offering of the filesystem, which was a specific patch level geared towards enterprise usage. That seemed a bit too far-fetched even for Intel. They still have the largest group of Lustre devs in house, and they still offer L3 support and community support. In the field we do encounter other filesystems, but 90% of the market is divided between Lustre, BeeGFS and Spectrum Scale. The other 10% are the newcomers that grew up in the big-data world. Scale-out NFS is rare, as NFS does not support large-scale applications that need special offloaded I/O.

To call Lustre down and out is way too premature. It has existed for most of its life without native kernel support, and with the latest efforts it will not need that native support, as it will run from loadable modules.

The article is accurate on the point that the stubs in the Linux kernel are gone, but that's all.

HDFS has no place in HPC, unless it's a hybrid system that offers the best of both worlds — and even then both filesystems will most likely be present.

Just my 2 c

Belgian court fines Skype for failing to intercept criminals' calls in 2012


It's so shortsighted of the courts. They might as well outright ban all communication and make it illegal to use any form of software telephony, messaging or encryption. Why not ban all electronic devices then, and while you're at it shut down all power plants, ban electricity, remove all vehicles, close the borders... wait, some people might love that... but really? As if that is going to happen.

And indeed, in 2012 Skype was still running on a peer-to-peer architecture (one of the reasons it didn't work well for conference calls with multiple parties). For that reason it was technically impossible to deliver anything more than the metadata of a call.

We will have to go back to writing texts with hidden messages, like during WW2.

Hmm, we already do that... Wickr, WhatsApp, FaceTime... any VoIP software, people running Asterisk at home... guess what, you can't trace those calls, and if you can, you need years of compute time to get to the message.

Array with it: What's next for enterprise storage?


Nice comment on HPE with a picture of Dell blades

Funny to see a Dell blade chassis with EqualLogic storage units pictured above an HPE article.


The incredible IT hulk: Dell + EMC - did someone say 'synergy'?


Networking hardware is not OEM'ed at Dell; Dell acquired Force10 two years ago

An inaccuracy in the article: Dell has owned Force10 for more than two years now, and Force10 designs its own hardware, of course based on ASICs designed in house under license. So there is no networking acquisition in the making; it has already been done.

Crazy Canucks heat their lab with muahaha-capable server


A comparison does not hurt

Funny article, as Dell has done the same in 1U with only 1.6 kW and just a few fewer disks. But let's face it, these machines should go into a setup with fast central storage. With these ultra-dense ones you can fit 40 of these babies in one rack, taking up a whopping 65 kW. And for the cold-plate cooling guys: it's possible, but we see better efficiency with rear-door cooling.
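The rack-level figure is simple multiplication. A quick sketch, assuming each node draws roughly the 1.6 kW cited above for the 1U Dell equivalent (the article's machine may differ slightly, hence the ~65 kW figure):

```python
node_kw = 1.6           # per-node draw of the 1U Dell box mentioned above (assumed)
nodes_per_rack = 40
rack_kw = node_kw * nodes_per_rack
print(rack_kw)  # → 64.0, in the ballpark of the ~65 kW quoted above
```

At that density, per-rack power — not floor space — becomes the limiting factor, which is exactly why the cooling approach matters.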

And they are really good at heating up your home, although the sound pressure may be a bit hard on the people living there.


Biting the hand that feeds IT © 1998–2021