Late to the news party.
Looks like this has been covered already… https://www.techtarget.com/searchVirtualDesktop/opinion/Broadcom-intends-to-divest-VMware-EUC-Whats-next
Yeah, it will eventually come, but just in case, let me apologize in advance if I decide not to in the short term…
Linus might have a rant or two, but he is not dumb: he needs a very strong talent pool, which is what drives language adoption. Furthermore, Rust leans heavily on upstream dependencies, which limits security by its very nature; you don't know what bit of code, if any, has been modified, just as we have seen in similar languages where the exposure came from a library… How long will it take for someone to look at the "safe code", see how it evaluates its use, and find new ways to attack it? It's a never-ending story. Now consider this: a new tool, few resources to review the code, and an upstream process… and you are in for a fiasco. Nothing to do with Rust per se, it could be any language; it just happens that in this case it is Rust.
Also, on all this talk of performance: it's really hard to measure performance from one code base to another unless you are comparing like with like. In most cases, Rust is doing two-level optimization, at the source (writing the code) and at the compiler level… I find it troubling unless you fix a set of factors in place and then leverage LLVM to compile the code; otherwise it's no different than performance refactoring.
Consider this: Rust is a new language, and it's a cool language, but so were many languages of the C era that never really took off. If that happens, Linux could be in a tough situation, unable to find maintainers for the Rust code base. More risks than benefits from what I can tell. Heck, just work on LLVM static code validation instead of refactoring; it might well be worth the trouble to find any flaws in what's there now rather than having to rewrite it all.
The talent pool is way too small to consider it viable at the moment… but then I'm just a dumb ass, nothing in comparison to Linus…
Look, VMware, if anything, could be called the Enterprise OS (the OS closest to the HW), which would give Broadcom quick access to the datacenters of the top enterprises. They are poised to own the main entry point for influencing an architectural change in a space dominated by the x86 architecture. Their close relationship with Ampere Computing, which is also close to TSMC, Broadcom's manufacturing partner, could give them the capacity to leverage, maybe even buy, Ampere, which would make this a much bigger deal than most are considering.

Broadcom is trying to recuperate what it lost to Qualcomm, which is now poised to compete with the likes of Apple for the consumer RISC market, and Broadcom could focus on the Enterprise market instead. If you consider this, the single biggest barrier limiting that shift is not Ampere or any other server-class AArch64 manufacturer, it's the lack of OS support. Consider some of the offerings out there where, in real-terms comparison, you can take an x86 POD { ToR networking + server + storage } and shrink it down from 46 Us to nearly 6-8 Us, and maybe get about a 5x performance boost at about 1/3 the power consumption… Maybe I'm reading into this, but if you consider vSAN, they would have a means to own it end to end, given that they would have preferential access to drive the OS to their HW with tighter coupling… which would make the transition more palatable for most Enterprises.
Well, just another POV and MHO.
Well, that statement about a custom kernel left me unimpressed, and I find the need for a custom kernel on a VM just to run the app really pointless. "When was the last time you needed a custom kernel to install an app?" Windows? Well, if you consider the mention of boot times, one of the key values of containers is that failover of a containerized app happens at the application layer, so the RTO for that app to fail over from one piece of HW to another is a substantial benefit vs VMs. I'd like to see some figures on a bare-metal OS with containers (say MariaDB, etc.) vs a hypervisor OS + guest OS + app: how much performance can you harness out of the same hardware platform in one case vs the other? That is the key value of the container platform. Now set it up to fail over to another piece of hardware; the RTOs are in the 2-4 sec range vs minutes (a rough breakdown follows below)… the container wins, IMHO.
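To make that RTO argument concrete, here is a rough back-of-the-envelope sketch in Python. Every component timing in it is an assumption for illustration only (detection, reschedule, boot, and start times vary wildly by orchestrator, hypervisor, and app); the point is just that the VM path pays for a guest OS boot that the container path skips.

```python
# Rough RTO comparison: app-level container restart vs. full VM failover.
# Every number below is an assumed, illustrative component time, not a
# measurement; real values depend heavily on the orchestrator, hypervisor,
# guest OS, and the app itself.

container_failover_s = {
    "failure detection (health check)": 1.0,
    "reschedule container on another host": 0.5,
    "start container + app process": 1.5,
}

vm_failover_s = {
    "failure detection (host heartbeat timeout)": 15.0,
    "restart VM on a surviving host": 10.0,
    "guest OS boot": 45.0,
    "app/service start": 20.0,
}

for name, steps in (("Container", container_failover_s), ("VM", vm_failover_s)):
    total = sum(steps.values())
    print(f"{name} failover RTO: ~{total:.0f} s")
    for step, seconds in steps.items():
        print(f"  {step}: {seconds} s")

# With these assumptions the container comes back in ~3 s because only the
# process restarts elsewhere, while the VM path spends most of its ~90 s
# booting a guest OS before the app even starts.
```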
If you really dig in deep, is a database not just a smart file system? Conceptually the lines are blurred between a unit of storage and a database; both are just containers of 1s and 0s. The belief is that there is no benefit to be had from storage per se (i.e. if your data has many instances of itself, then there is an opportunity to gain from data reduction, which is a measurable unit of saving), but in the end storage is the slowest denominator in the equation, so your CPU IRQs will only free themselves once you finalize the entire IO traversal. In that case, getting the right performance from storage will mitigate the performance bottlenecks in the data access path (how many times has the DBA used that as a legitimate excuse?). This has a major impact on savings: a 2ms access is not the same as a 0.5ms access. Consider that a complete data ingest/read/validation ACID cycle on most modern databases creates roughly 5 IOPS per transaction (not a science, but let's say that is the case regardless) and the database is using 4KB blocks… OLTP-type data. Even when you do IO coalescing, 2x or 8x into a larger database write, you are still subject to the IO wait time. One transaction could take 300usec to complete on the CPU, but you still need to move it off to storage on one side. I'm not going to do the full math here, but the rough shape of it is sketched below; integrate it over a workload and you will see how much CPU waiting the WRONG storage produces. So the question begs answering: does DB consolidation have a storage implication to achieve the cost savings? IMHO, yes; better do your research, and test, man, test!
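Here is that rough math as a small Python snippet. It uses the figures from the post (5 IOs per transaction, ~300 usec of CPU work) plus the two storage latencies mentioned (2 ms vs 0.5 ms); the serialized-IO assumption and everything else is illustrative, not a benchmark.

```python
# Back-of-the-envelope: CPU time vs. storage wait per transaction.
# Figures from the post: ~5 IOs per ACID transaction, ~300 usec of CPU work.
# The two storage latencies (2 ms vs 0.5 ms) and the serialized-IO assumption
# are illustrative, not benchmarks.

CPU_TIME_US = 300   # CPU work per transaction
IOS_PER_TXN = 5     # IOs generated per transaction

def txn_time_us(storage_latency_ms: float, io_depth: int = 1) -> float:
    """Wall-clock time per transaction if the IOs are issued serially
    (io_depth=1) or overlapped across io_depth outstanding IOs."""
    io_wait_us = (IOS_PER_TXN / io_depth) * storage_latency_ms * 1000
    return CPU_TIME_US + io_wait_us

for latency_ms in (2.0, 0.5):
    total = txn_time_us(latency_ms)
    wait = total - CPU_TIME_US
    print(f"{latency_ms} ms storage: {total:,.0f} usec/txn, "
          f"{wait / total:.0%} of it spent waiting on IO")

# Output with these assumptions:
#   2.0 ms storage: 10,300 usec/txn, 97% of it spent waiting on IO
#   0.5 ms storage: 2,800 usec/txn, 89% of it spent waiting on IO
```

Either way, the CPU spends the overwhelming majority of each transaction waiting on storage, which is the point: the wrong storage turns expensive compute into idle time.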