* Posts by CheesyTheClown

745 posts • joined 3 Jul 2009


FreeBSD 13.0 to ship without WireGuard support as dev steps in to fix 'grave issues' with initial implementation


I was about to

swoop in and complain about poor coding.

Whenever I write kernel modules in C (Linux, ugh), I find myself spending far too long untangling unintuitive preprocessor crap that has no place in 2021. When implementing secure protocols, most ciphers allow in-place operations, but headers usually need to be prepended, and you will never find a solution to this problem that lets the headers remain memory-aligned while the buffers are encrypted in place.

This means that to effectively build protocols which encapsulate higher-level data, good buffer management is necessary. And while C allows you to do anything you want, as efficiently as you want, almost all solutions to the problem tend to lead towards reimplementing object-oriented language features... usually in preprocessor macros, or using crazy compiler extensions for tracking offsets into buffers based on their positions in structs.

There is also the whole game of kernel buffers. Since the kernel is in privileged mode, performing allocs and frees is frowned upon. Given the structure of kernel memory space, randomly allocating memory is expensive and dangerous, especially if it may force the kernel to allocate beyond its initial pool. Since the MMU is mostly bypassed in this mode, and since C memory management is generally not relocatable, the only real solution is to overprovision needlessly.

I could (and probably should) write a book on all the numerous problems related to kernel development as well as the endless caveats of coding kernels in C, but let me simply say that while good C code is possible, it’s rare and far too many trivial tasks are managed manually and repetitively when coding C.

I don’t particularly care for the syntax of Rust or Go. But both languages run with a great concept which is... if it’s a very common task that can be added to the language at no real additional cost, do it. As such, both languages understand strings, buffers and data structures. There is no need for disgusting preprocessor hacks to support something as trivial as foreach.

C could fix these things as well. But it’s a conscious decision by the people steering the language to keep it as it is and to leave it to libraries and compiler extensions to do it instead.

I love C because if I want to write a new C compiler, I can make something usable and likely self-hosting within a few hours. But this isn’t the characteristic of a programming language I would want to use in 2021. If I were to spend my time on such a project, the first thing I’d do is build extensions for strings, buffers and data structures ... and it wouldn’t be C anymore.

Oh... and most importantly, I would drop the preprocessor and add support for domain specific language extensions. And I’d add proper RTTI. And I’d add an extension for references. And of course relocatable memory. And probably make ...

You know what... I don’t think I’d do anything other than bootstrap the new language with C and then basically just ignore the standard from there :)

Flagship Chinese chipmaker collapses before it makes a single chip or opens a factory


Re: More to this than meets the eye

They actually see what we call IP theft as free-market economics. Rather than protectionism, which is supposed to allow companies to recoup their investments through patents and such, they believe that everything that can be copied is open source, and that each copy will generally generate innovation and place pressure on all players to keep making advances.

I am currently sitting just outside a semiconductor research facility as I write this. It makes high frequency radio semiconductors for communications. Almost all the technology inside is off-the-shelf and while they obviously make a buck off patents, everyone there knows that their real protection is research and progress. We have a real problem at this location because if the US government becomes protectionist against its allies, they’d have to just shut down.

The good news is, there is another building just off to the side which is a nanotechnology research facility that can produce everything without dependence on the US. So rather than investing massive sums of money in traditional semiconductors, they focus their efforts in nanotechnology as a replacement.

When China eventually catches up to the US in semiconductor fabrication, it will quickly surpass them, as American companies will depend on protectionism. And even if the US were to undo all the restrictions Trump enacted this very moment, China would still invest heavily in its own tech and would still pass the US, since it has now learned that if it happened once, it can happen again. Not only that, but I imagine the facility I’m sitting at will start buying from China at least as much as from the US, so they will never have to worry about being completely cut off.

The sanctions placed on China by the Trump administration will likely be the biggest boost to China that could have ever been possible. It will take China time to recover from it, but when they do, it will leave the entire rest of the world without any leverage when negotiating with the Chinese on political issues. At this point, Biden’s choice of leaving the sanctions in place does nothing other than provide a buffer to let countries outside of China get a running head start in a very very long race.

Please don’t assume I’m playing the "Trump is evil" card. With the rollout of 5G, had Huawei been able to keep going as they did, China would have been able to draw trillions of dollars more from the US treasury. I don’t agree with how Trump prevented this; I would rather have seen an executive order demanding Cisco or another US company produce a competitive offering. But it did mitigate the risk of China being able to simply collapse the US economy on a whim. The European approach of working on an open source RAN was a far better one.

America, Taiwan make semiconductors their top trade priority at first-ever 'Economic Prosperity Dialogue'


What happens when...

One of the world’s top economies... heavily dependent on semiconductors is told that no one in the world is allowed to sell them semiconductors?

The easy answer is, they invest massive amounts of money, time and resources to never need to buy semiconductors from another country.

Then they build up enough manufacturing ability to produce semiconductors for every other country who doesn’t like the impending threat of being cut off.

Then they do it for a lower cost than any other country.

Then they weaponize their capacity and use extensive government grants to economically attack countries like Taiwan... after all, why not simply give semiconductors away for free until TSMC can no longer afford to keep their doors open?

Of course, in order to stay competitive, that country will innovate as well. They will make sure they’re not just competitive; after throwing money at racing to parity with publicly traded companies in the US and Taiwan, they will already have momentum in place to surpass them too.

So what happens when someone makes a decision that threatens many of China’s largest and most influential companies and treats them like this? Do you think that if Biden gives a little and agrees to sell them chips again, China will just stop its almost space-race-like effort to become entirely independent?

Who knew? Hadoop is over, says former Hortonworks guru Scott Gnau


Re: @tfb This is why

I've been saying this for some time about COBOL. (Oh and I work with FORTRAN in HPC quite often)

People make a big deal about COBOL programmers being in short supply and COBOL being an antiquated language. Honestly though, what really confuses programmers is that you don't really write programs in COBOL; it's more of a FaaS (serverless, function-as-a-service) platform. You write procedures in COBOL. The procedures are stored in the database like everything else, and when a procedure is called, it's read from the database and executed.

The real issue with "COBOL programmers" is that they don't know the platform. The platform people are usually referring to when they say "COBOL" is actually some variation of mainframe or midrange computer. Most often in 2020, they're referring to either IBM System/Z or IBM System i ... which is really just a new name for what used to be the AS/400.

The system contains a standard object storage system... or more accurately, a key/value store. And the front end of the system is typically based on CICS and JCL, which is Job Control Language. IBM mainframe terminals (and their emulators) have a language which could kind of be compared to HTML in the sense that it allows text layout and form entry as well as action buttons like "submit".

Then there's TSO/ISPF which is basically the IBM mainframe CLI.

What is funny is that, many of us when we look at AWS, all we see is garbled crap. They have a million screens and tons of options. The same is said for other services, but AWS is a nightmare. Add to that their command line tools which are borderline incomprehensible and well... you're screwed.

Now don't get me wrong: if I absolutely had to use AWS, it wouldn't take more than watching a few videos and a code-along. I'd probably end up using Python even though I don't care much for the language. I'd also use Lambda functions because, frankly... I don't feel like rolling my own platform from scratch. Pretty much anything I'd ever need to write for a business application can be done with a simple web server to deliver static resources, Lambda functions to handle my REST API, and somewhere to store data, which is probably object storage, MongoDB, and/or some SQL database.

Oddly, this is exactly what COBOL programmers are doing... and have done since 1969.

They use :

- TSO/ISPF as their command line instead of the AWS GUI or CLI tools.

- JCL to route requests to functions as a service

- CICS (in combination with a UI tech) to connect forms to JCL events as transactions... instead of using Amazon Lambda. Oh, it's also the "serves static pages" part. It's also kind of a service mesh.

- COBOL, Java or any other language as procedures which are run when events occur... like any serverless system.

It takes a few days to learn, but it's pretty simple. The hardest part is really learning the JCL and TSO/ISPF bit because it doesn't make sense to outsiders.

What's really funny is that IBM mainframes running this stuff are pretty much infinitely scalable. If you plug 10 mainframes in together, they practically do all the work for you, since the entire system is pretty much the same thing as an elastic Kubernetes cluster. You can plug in 500 mainframes and get so much more. The whole system is completely distributed.

But what you're saying, that FORTRAN is its own entire platform/ecosystem, is entirely true. Everything you would ever need for writing a FORTRAN program is kind of built in. But I will say, I would never even consider writing a business system in FORTRAN :)

Microsoft submits Linux kernel patches for a 'complete virtualization stack' with Linux and Hyper-V


Re: The way forward?

I'm not sure what you're referring to. While there are vast areas of the Linux kernel in desperate need of being thrashed and trashed, a side effect of its lack of design is that it's highly versatile (which, mind you, is what makes it so attractive for so many things).

Microsoft has managed to play quite nicely and by the rules with regard to making the majority of Windows-friendly code self-contained within its own directories, similar to other modules. It's really not much different from either the MD stack or the SCSI stack. In fact, the Hyper-V code is much easier to rip out than most other systems within the kernel, as it's organized in a pretty central place.

Rather than spamming the kernel with massive amounts of Windows-specific integrations for things like DirectX, they have done some pretty cool things to abstract the interfaces, allowing generic RDP support for redirecting Wayland compositing... a pretty nice alternative to VNC or the X11 protocol. And from what I can tell, they're working with the open source community to finally give Wayland a strong solution for headless applications over the wire.

Microsoft may be all-out embracing and extending Linux, but now their hands are so deep in Linux's pocket that extinguish is no longer an option for them. And they even play nicely enough by the rules that GPL zealots tend to just grunt rather than rampage about them these days.

Add to that that Microsoft does release massive amounts of actually genuinely useful technologies to the open source and they're almost even likeable now.

This announcement is pretty interesting to me because it will likely result in a VMM on Linux which is easy to manage and heavily consumed. Honestly, I adore KVM and use it for almost everything, but the highly generic nature of KVM, due to its qemu roots, makes it infinitely configurable and infinitely difficult to configure.

Money talks as Chinese chip foundries lure TSMC staff with massive salaries to fix the Middle Kingdom's tech gap



For the most part, what is most interesting is the knowledge required not only to produce the current generation but also to move forward.

There is little value in hiring for blueprints. The worst possible thing that could happen to Huawei and China as a whole would be getting caught producing exactly the same technology verbatim.

There is value however in hiring as many people as possible that know how to innovate in semiconductor technology.

Huawei and others are running out of chips, but they're not as desperate as you'd think. They're more than smart enough to have contingency plans in place. They have time to catch up. It's far better to get it done right than merely to get it done.

The problem of course is that by China doing all of this, it will seriously impact the Taiwanese and American semiconductor market. When China is finally able to produce 7nm using Chinese developed technology, they can start undercutting costs for fabrication.

Where the US and TSMC will build a handful of fabs for each generation, China will focus on scale. And once China catches up, they'll target becoming a leader instead.

Trump focused entirely on winning the battle. But he has absolutely no plans in place for defending in the war. History shows that every time his tactics have been tried in trade wars, they don't just backfire, they explode. The issue now is whether China can do it before Trump leaves office in 2024. If they can accelerate development and start eating away at the semiconductor market in the next 4.25 years, Trump's legacy will be that he completely destroyed the entire semiconductor market for the US and its allies.

Apple gives Boot Camp the boot, banishes native Windows support from Arm-compatible Macs



I may have missed it, but there are a lot of people who depend on hackintosh or virtualization out there. There are companies with full farms of virtualized Macs for running XCode compiler farms. There are a surprising number of people using virtualized Macs as iMessage gateways.

By making this move, Apple can make little tweaks like Apple-specific CPU instructions. They can also make their own TPM that would block any XCode-compiled application from running on a non-Apple CPU.

How about large enterprises who depend heavily on virtualized Windows on Macs... for example IBM? They actually dumped Windows PCs in favor of Macs because they could remote desktop or virtualize corporate desktops and the users would all have a pretty easy "press the key during boot to wipe your PC and reinstall it". I guess this would still work... at least remote desktop VDI.

What happens to all the developers at Microsoft carrying around Macs? If you've ever been to Microsoft Build Conference, you'd think it was sponsored by Apple.


Re: Bochs

Bochs is a nice toy for retro purposes, but it lacks much of what you would need to make this a solution. On the other hand, you're on the right track: qemu, which has a dynamic recompiler and an x86-to-ARM64 JIT, would be a solution... it won't be particularly fast though. To run Windows worth a damn today, GPU drivers are an absolute must... even if it's just an old Intel GPU, the Windows compositor really thrives on it.

Nine in ten biz applications harbor out-of-date, unsupported, insecure open-source code, study shows


Don't forget Cisco!

Cisco Prime, Cisco IOS-XE, Cisco IOS, Cisco ISE....

I can go on... but Cisco absolutely refuses to use package management, and as a result, many systems only release patches once or twice a year. When they are released, they don't upgrade nearly enough packages.

Consider Cisco ISE, which is the security portal for many thousands of networks around the world for wireless login. It's running Apache versions so old that their zero-days were disclosed years ago.

Then there's openssl and openssh... they just don't even bother. It doesn't matter how many CVEs are released or what their level is... Cisco ignores them.

Then there are the Java versions.

And there's the other key issue which is that Cisco products don't do incremental upgrades. You either upgrade everything or nothing. So even with critical security patches, the vast majority of Cisco customers DO NOT upgrade their software because there is far too much risk that it will break more than it fixes.

Of course, even with systems like Cisco DNA which automates most things, upgrades are risky and sometimes outright dangerous since there's no means of recovering remotely when your infrastructure equipment goes down.

Cisco doesn't release information, but I know of at least several dozen government organizations running Cisco ISE releases from 2-5 years old with no security patches because you can't simply run rpm update or apt upgrade on Cisco ISE... which is really really stupid when it's running on Linux.

I think Cisco might be the most insecure enterprise company out there, and the only thing keeping that from being common knowledge is that the people who actually know about these things risk losing their meal tickets by making noise about it. And what's worse is that Cisco people almost NEVER know anything about Linux... or about security, unless it's protocol security.

Uncle Sam tells F-35B allies they'll have to fly the things a lot more if they want to help out around South China Sea


As a tax payer...

I disapprove of these planes being flown. There are far too many chances for accidents or for them to be shot down. With how much these planes cost, the best option is to keep them stored in hangars, where they pose only a limited risk.

If anyone in the F-35 governments is reading this, please invest instead in F-16 and F-22 jets, which are substantially less expensive, and only consider the use of F-35 jets when the F-16 and F-22 can’t possibly do the job.

Think of it as using the 1997 Toyota Camry to drive to and from work in urban rush hour rather than the Bentley, since scratching the Camry doesn’t matter but the Bentley will cost you and your insurance company a small fortune. The F-35 series planes should never be put in the air where they can be damaged... it’s simply fiscally irresponsible.

You're always a day Huawei: UK to decide whether to ban Chinese firm's kit from 5G networks tomorrow


Treasury Notes

If Huawei is allowed into western telco networks, the governments will have to cover the purchase of this equipment by issuing treasury notes. If China does not spend those notes, which it does less and less often, and instead stockpiles them, it gains more control over them.

At some point, if China decides it needs to buy things from the world, it will use those notes as currency. When it does, if it needs to make a massive purchase (think $100 billion), whichever government it is purchasing from may decide that the risk of holding that much currency in treasury notes would be difficult to manage. So China will sell treasury notes to multiple other countries and banks, who will negotiate favorable terms of exchange for themselves. This will flood the market and therefore devalue the notes.

This is a major security risk (not as in guns and bombs, but as in financial security) for any country who holds U.S. treasury notes. Weaker economies can actually collapse because of this. Stronger economies can lose their purchasing power in China.

Leave your admin interface's TLS cert and private key in your router firmware in 2020? Just Netgear things


Re: "wanted to see some extra fields populated"

I’m not sure I agree. I keep a simple bash script on my PC, hacked together to read a few fields from a JSON file and generate a certificate that makes Chrome happy. It also informs me of all my current certificates that are about to expire. I think I got all the OpenSSL commands within the first page of links on a google search.

I think the problem is that certificates are difficult for pretty much anyone to understand, since there are no good X.509 primers out there these days. I still see a lot of enterprise certificates signed by the root CA for the domain. Who the heck actually even has a root private key? I actually wish Chrome would block certificates signed directly by root CAs. Make a root cert, sign a few subordinates, and delete the root CA altogether.

That said, Let’s Encrypt has destroyed the entire PKI since a trusted certificate doesn’t mean anything anymore. A little lock on the browser just means some script kiddy registered a domain.

Creative cloudy types still making it rain cash for Adobe


Re: F*** adobe

I generally agree... I actually stopped paying for creative cloud when Affinity Designer came out... but their Bézier curves (a fairly simple thing to get right) are somewhat of a pain in the ass.

Then there’s Affinity Photo, which still has major scaling problems that cause visual artifacts when zooming the workspace. It makes it almost unusable. My daughter is using Photoshop CS6 because it doesn’t need a subscription and it’s still quite a bit better than Affinity. Her reasoning is brushes. But she’s mostly using other Asian software now.

In a touching tribute to its $800m-ish antitrust fine, Qualcomm tears wraps off Snapdragon 865 chip for 5G phones



I often work with large enterprises, helping them train their IT staff in wireless technologies. And the message I send regularly is that there is absolutely no value in upgrading their wireless for new standards rather than growing their existing infrastructure to support better service.

I have recently begun training telecoms on planning for 5G installation. And the message I generally send is "people won't really care about 5G" and I have many reasons to back this up.

Understand that so long as pico/nano/femto/microcells are difficult to get through regulation in many countries, Wifi will continue to be a necessary evil within enterprises and businesses whose operations make wireless particularly difficult to deploy. We need Wifi mostly for things like barcode scanners and RFID scanners within warehouses. An example of this is a fishery I've worked with, where gigantic, grounded metal cages full of fish are moved around refrigerated storage all day long. Another is a mine shaft where the entire environment is surrounded by iron ore. In these places, wifi is needed, but there's absolutely no reason to run anything newer than Wireless-N except for availability. AC actually costs less than N in most cases today, but there's no practical reason to upgrade. 4x4 MIMO 802.11n is more than good enough in these environments.

5G offers very little to the general consumer. It is a great boon for IoT and for wireless backhaul networks, but for the consumer, 5G will not offer any practical improvements over LTE. 600MHz 5G is a bit of an exception though. 600MHz 5G isn't particularly fast... in most cases it's about the same as LTE. Its primary advantage is range. It will be great for farmers on their tractors. In the past, streaming Netflix or Spotify while plowing the fields has been unrealistic. 5G will likely resolve that.

People within urban environments are being told that 5G will give them higher availability and higher bandwidth. What most people don't realize is that running an LTE phone against the new 5G towers will probably provide the exact same experience. 5G will bring far more towers within urban areas, and as such, LTE to those towers will work much better than it does to the 4G towers today. 4G is also more than capable of downloading at 10 times the bandwidth most users consume today. The core limitation has been the backhaul network. Where 4G typically had 2x10Gb/s fibers to each of 4 towers within an area, 5G will have 2x100Gb/s fibers (as well as a clock sync fiber) to 9 towers within the same area. This will result in much better availability (indoors and out) as well as better bandwidth... and as a bonus, it will improve mobile phone battery life substantially, as beamforming along with shorter distances can let the phone consume as little as one fifth the power compared to the current cell network.

5G has no killer app for the consumer. 3G had serious problems across the board since 3G technologies (UMTS, CDMA, etc...) were really just poor evolutions of the classical GSM radio design. LTE was "revolutionary" in its design and mobile data went from "nice toy for rich people" to "ready for consumption by the masses". 5G (which I've been testing for over a year) doesn't offer anything of practical value other than slightly shorter latency which is likely only to be realized by the most hardcore gamers.

I certainly have no intention of upgrading either my phone or my laptop to get better mobile and wireless standards. What I have now hasn't begun to reach the capacity of what they can support today. The newer radios (wifi6 and 5G) will make absolutely no difference in my life.

If you have anyone who listens to you, you should recommend that your IT department focus on providing wireless network security through a zero-trust model. This means you could effectively ignore wireless security and, as you mentioned, use VPNs or fancy technologies like Microsoft DirectAccess to provide secure, inspected, firewalled links for wireless users. They should focus on their cabling infrastructure, as well as adding extra APs to offer location services for things like fire safety and emergency access. They shouldn't waste money buying new equipment either; used APs are 1/10th the price. In a zero-trust environment, you really don't need software updates, as the 802.11n and 802.11ac standards and equipment are quite stable today. They should simply increase their AP count, improve their cabling so the APs within a building are never cabled into one place (a closet can catch fire), and install redundant power to support emergency situations. Use purely plenum-rated cabling. Support pseudo-MAC assignment so people not carrying wireless devices can be located by signal disturbance during a fire.

Once this system is operational, it should live for the rest of the lifespan of your wifi dependence. I can easily believe that within 5-10 years, most phones from Apple, Samsung, etc... will ship without Wifi, as its presence will be entirely redundant.

Also for 5G, inform people that they should wait for a phone that actually gives them something interesting. Spending money on 5G for personal communication devices is just wasteful and, worst of all, environmentally damaging. If the market manages to sell 5G as a "killer app", we stand to see over a billion mobile phones disposed of as people upgrade. Consider that even something as small as a telephone, when you make a pile of a billion of them, is a disaster for this planet.

5G will be great for IoT... not so much 5G itself, but the proliferation of NB-IoT is very interesting. $15 or less will put an eSIM-capable 5G modem module in things like weather sensors (of which there are already tens of millions out there), radar systems, security systems, etc... We should probably see tens of billions of NB-IoT devices out there within the next few years. A friend of mine has already begun integrating it into a project of hers, for which she has funding for over 2 million sensors to be deployed around Europe.

No... you're 100% correct. Wifi's death knell has begun. It will be irrelevant within 5-10 years, and outside of warehouses and similarly radio-harsh environments, it is very likely it will be replaced by LTE, NB-IoT and 5G.

And no... 5G on a laptop is almost idiotic if you already have LTE. You should (with the right plan) be able to do 800Mbit/sec or possibly more with LTE. Even when running Windows Update, you probably don't consume more than 40Mbit/sec.

You're praying your biz won't be preyed upon? Have you heard of our lord and savior NVMe?


Why oh why

If you’re dumping SAS anything in favor of something else, then please get a distributed database with distributed index servers and drop this crap altogether.

Hadoop, Couch, Redis, Cassandra, multiple SQL servers, etc all support scale out with distributed indexing and searching often through map reduce methodologies. The network is already there and the performance gain is often substantially higher (orders of magnitude) than using old SAN block storage technologies.

Or, you can keep doing it the old way and spend millions on slow ass NVMe solutions

'Happy to throw Leo under the bus', Meg Whitman told HP after Autonomy buyout


How could this company ever be worth that much?

There was a time in history when HP was famous as a technical innovator who filed more than enough patents that they could use pretty much any technology they wanted and make deals with other companies to trade tech. They would engineer and build big and amazing things and if they panned out, they got rich, if they didn't, they'd sell them off.

Then the suits came in

HPe has become nothing more than a mergers and acquisitions company. They don't make any new technology. They "me too" a crapload of tech at times. But regarding innovation... check out HPe's labs/research website. Instead of actual innovation, it reads like a list of reasons why they shouldn't invest money in research. I mean really... they wrote one whole paragraph on why they won't waste money on quantum computing, and it's basically "We are going to prove P=NP and make a new way of saying it so if we can solve one NP problem, it will solve all NP problems."

There have been a bunch of CEOs that have converted HP from being a world leader in the creation of all things great in technology to being a shit company which spends $8 billion on a document store and search engine that "might be big one day".

Cooksie is *bam-bam* iGlad all over: Folk are actually buying Apple's fondleslabs again


Why would you buy a new one anymore?

I have a stack of old iPads laying around. I have two iPad version 1s and about 10-12 more after that. My wife uses hers... the kids stopped using theirs when they got telephones big enough to render the iPads useless, as they also have PCs.

I did get my wife a new iPad for Christmas... we actually don't know why... but I suppose it had been 2 years since the last iPad was bought... so I got her that.

To be honest, it used to be that everyone needed their own iPad... but these days, I think mom and dad just need big phones and the kids need maybe an iPad mini or so. There's no need to constantly upgrade... they already have more features than anyone will ever use. Now, it's more like "Wow... look Apple is still making iPads... at least I can buy a new one if the old one breaks... if I actually need it for something"

I used to see iPads all over every coffee shop. These days, there's laptops and telephones... but there doesn't seem to be any iPads anymore.

NAND down we goooo: Flash supplier revenues plunged in first quarter


Re: Yay!

I thought the same and then thought... why bother?

I used to spend tons of money building big storage systems... even for the house... I have a server in the closet I just can't force myself to toss which has 16TB of storage I built in 2005. These days, 500GB is generally more than enough. 1TB for game PCs.

At the office, I used to buy massive NetApp arrays... now that I have moved to Docker and Kubernetes, I just run Ceph, GlusterFS, or Windows Storage Spaces Direct and I use consumer grade SSDs.

We are soooooooo far past what we need for storage it's silly. Expanding a Ceph cluster by a usable terabyte takes 3TB of low-cost SSD, which is under $300 now... and it gives us WAY better redundancy than an expensive array. And to be fair... since almost everything is in the cloud these days, you could probably run an entire bank on 2-4TB of storage for years. It's not like a database record takes much space. Back in 1993, we ran over 100 banks on about 1GB of online storage. I'm almost sure you can run one modern bank on 4000 times that. :)
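The 1TB-usable-needs-3TB-raw figure comes from Ceph's default three-way replication; here's a quick back-of-the-envelope sketch (the ~$100/TB consumer-SSD price is my assumption, inferred from the $300 figure above):

```python
def raw_tb_needed(usable_tb: float, replicas: int = 3) -> float:
    """With N-way replication, every usable byte is stored N times,
    so raw capacity scales linearly with the replica count."""
    return usable_tb * replicas

def expansion_cost_usd(usable_tb: float, replicas: int = 3,
                       usd_per_tb: float = 100.0) -> float:
    """Rough hardware cost of growing the cluster with consumer SSDs."""
    return raw_tb_needed(usable_tb, replicas) * usd_per_tb

# Adding 1TB usable at the default 3x replication: 3TB raw, about $300.
print(raw_tb_needed(1), expansion_cost_usd(1))  # 3 300.0
```

Erasure coding would bring the overhead well under 3x, at the cost of CPU and rebuild time, but the replicated numbers are the ones that match the post.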

As for performance... once you stop running VMware and switch to... well... anything else, you just don't need that much performance. I guess video games would load faster, but ask yourself when you last actually thought "I need a faster hard drive".

Former unicorn MapR desperately seeking cash as threat of closure looms


Re: The software is quite good

Everyone always talks about Betamax as if it was infinitely better than VHS. As someone who thoroughly understands the physics, mechanics, electronics, etc... of both Betamax and VHS, from the tape all the way through the circuitry up to the phosphors, I'll make it clear... yes, Betamax was better... but the difference was negligible. The two formats were so close to being the same that it barely mattered... and when transmitting the signal over composite from the player to the TV, which... well, to be honest, was 1950s technology (a late-1970s TV was still 1950s tech... just bigger)... it was impossible to tell.

S-Video and SCART (in Europe) made a slightly noticeable difference. Actual component cabling could have mattered, but neither Betamax nor VHS could take advantage of it.

The end result was simple... playing a movie recorded on Betamax on a high-end 1970s or early 1980s TV next to the same movie on VHS, you had one big ugly player next to another, and the only pitch you could give the consumer was "Beta is more expensive because the quality is better"... and of course, it wasn't... at least not enough to notice. Often you could sell the consumer on audio quality, but on 1970s/1980s-era speakers and hi-fi, you wouldn't notice until you were far past the average consumer's threshold.

Betacam SP was actually substantially better, but by then it no longer mattered.

I used to have 400 Betamax decks and 600 VHS decks in my office... all commercial-grade duplicators with automatic tape changers. The Betamax decks existed for collecting dust. The VHS decks were constantly being serviced because they were running 24/7. I spent 10 years of my career in video technology development (I am a codec developer at heart, but I know analog too). After 10 years of working with studio/commercial-grade broadcast and duplication equipment, and knowing what I know about the technology, if I saw Betamax for $120 and VHS for $110, I'd still buy VHS.


Re: @CheesyTheClown ... Burned $300 million?

Thanks for commenting.

I honestly had no idea how MapR would sell in the first place. The problem is... it was a great product, but it was also expensive. And I don't care how good your sales team is, the website is designed to scare away developers.

I just visited there and I'm pretty sure that I've been in multiple situations where I could have seen the technologies as interesting, but the website makes it look like it's too expensive for me to use in my projects. I can use tools that cost $10,000 or less without asking anyone. But they have to be purchasable without having to spend another $10,000 on meetings where people show Gartner magic quadrants.

I can't use any tools where I can't just pop to the web site and buy a copy on my AMEX in the web shop and expense it. When we scale, we'll send it to procurement and scale, but we're not going to waste a ton of money and hours or days on meetings and telephone conferences with sales people who dress in suits... hell I run away without looking back when I see sport jackets and jeans.

Marketing failed because MapR is not an end user program and developers can't make the purchasing decisions. The entire front end of the company is VERY VERY developer unfriendly. Somehow, someone thought that companies all start off big and fancy. My company is a top-400 and we start projects as grass-roots and once we prove it works, we sell the projects at internal expos and the management chooses whether to invest more in it or not. MapR looks expensive and scary and difficult to do business with.

This is why we do things like always grow everything ourselves instead of buying stuff that would do it better. Everyone is trying to sell to our bosses and not selling to the people who actually know what it is and what it does.

I wish you luck in the future... now that I've looked a little more at you guys, I'll check the website occasionally when I start projects. If the company starts trying to sell to the people who will actually buy it (people like me) instead of to our bosses... maybe I'll buy something :)


Burned $300 million?

$200,000/year times two is $400,000, an inflated estimate of the cost of employing one overpaid SV employee. Multiply that by 200 employees. That's $80 million a year for 200 employees... to develop and market a product.

Now... let’s assume that the company actually received $300 million in investments.

Was there even one person in the whole company actually doing their job? And was that job spending money with no actual consideration for return on investment?
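Spelling out that arithmetic in a quick sketch, using the post's own figures ($400k fully loaded per head, 200 employees, $300M raised; payroll-only burn is of course a simplification):

```python
def runway_years(funding_usd: float, employees: int,
                 cost_per_employee_usd: float) -> float:
    """Years of runway if payroll were the only expense."""
    annual_burn = employees * cost_per_employee_usd
    return funding_usd / annual_burn

# 200 employees at $400k each burn $80M/year, so $300M lasts
# under four years before counting marketing, rent or hardware.
print(runway_years(300_000_000, 200, 400_000))  # 3.75
```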

Planes, fails and automobiles: Overseas callout saved by gentle thrust of server CD tray


Re: Ah the old push-out-the-cd-tray trick

Why not dump random data to the PC speaker?

'Evolution of the PC ecosystem'? Microsoft's 'modern' OS reminds us of the Windows RT days


Presented at build and only interesting to techies?

Let me get this straight... you're complaining that technologies presented at Build... Microsoft's annual developers conference... presented tools that are interesting to developers?

Ok... so... if you were to present tools that would be life changing and amazing... primarily to developers... which conference would you recommend presenting them at? And if you want the developers and techies who will use them to be present... and actually buy tickets to the event... are we still against using Build for this?

I almost couldn't read the rest of what you wrote after that... I was utterly stuck... totally lost... wondering what in the name of hell this guy was talking about.

So... let's try some stuff here.

Windows isn't built the way you seem to think it is. This is why Microsoft makes documentation. You can read it instead of just headlines.

Windows these days is built on something you might understand as containers... but not really. It's more than that. You can think of it as enclaves... if you want.

UWP also doesn't seem to work the way you think it does. You're thinking in terms of how Linux works and how languages on Linux work. Windows has extremely tight integration between programming languages and the operating system. As such, a lot has happened in compiler development which means things you would think are native code are actually .NET, and things you would think are .NET are native code. The architecture of the development tools has made what was classically thought of as "linking" a LOT more dynamic.

There's also a LOT more RTTI happening in all of Microsoft's languages, which makes things like the natural design of what generations ago was called COM pretty much transparent. All object models (especially COM) were horrible at one point because of things like IDLs, which were used to do what things like Swagger do these days. Describing and documenting the call interface between objects was sheer terror.

Windows has made it so that you can program in more or less anything and expose your APIs from pretty much anything to pretty much anything... kinda like COM did... but it's all pretty much automatic now. This means that "thunking" mechanisms can make things happen magically. So you can write something in native C++ and something in C# on .NET and make calls between them, and the OS can translate the calls... this actually requires a few special programming practices, and it's actually easier if you pretend you don't even know it's there.

There are A LOT of things going on in Windows that are kinda sorta like the things you seem to think it might do... but in many ways they're done far better.

If you want to see it look really awesome... start two sessions of Linux on WSL1. You'll find that they're not in the same enclave. They have some connections to each other... but they are actually separate. It's like running two different containers... but not really.

Now consider that Windows works a lot like that now too. Microsoft has progressively managed to get most of us to stop writing software that behaves as if everything absolutely must talk to everything else directly. As such, over time, they'll manage to finally make all processes run in entirely separate enclaves while still allowing communication between processes.

And BTW... Android and Chrome OS are sheer frigging terror.... if you want to do interesting things at least. Everything is so disconnected that 99% of the time... if you're trying to make two programs work with each other, you find yourself having to send everything through the cloud.


Re: That's what Plinston said

This is not argumentative. I'm a file system jockey and I have to admit that I'm a little bit in the dark here about the SIDL terminology.

I also wonder if you and I understand the file system in Windows differently from one another. It's been a long time since Microsoft originally added file fork support. Yeah, traditionally Windows really didn't support inodes and it was a wreck, but that hasn't been set in stone for a long time.

The main reason Windows has required reboots to update is more related to the UI. Upgrading files is no real problem. But unlike Linux, where the GUI is entirely separate from the rest of the operating system (which is probably what I like least about Linux), the Windows GUI used to be the root from which all tasks were spawned. The GUI was the parent of all tasks, which meant that if you upgraded the kernel, you'd have to restart the GUI under the new kernel.

With all the effort they've made to make the kernel less important, so that most of the OS runs either as a VM or a container, they should be able to start a new kernel now and repatriate the system call hooks to it.

Weak AF array sales at NetApp leave analysts feeling cold


Re: "End of Storage" - silliest thing ever said...

I don't disagree. I still see the occasional UltraSparc 5, AS/400 and Windows NT 4 machines in production. Legacy will always exist... but I think you're overestimating the need for low-latency on-premise storage.

As latency to the cloud decreases, bandwidth increases, and availability often rivals on-premise, location isn't the hot topic anymore.

We used low-latency storage for things like Fibre Channel because we were oversubscribing everything. But consider that massive banks still run on systems like IBM Z, which seem really amazing but performance-wise are generally obscenely over-provisioned. A well-written system can handle millions of customer transactions per day on equipment no more powerful than a Raspberry Pi... and they did for decades... on horribly slow storage.

The question is... what do you really plan to run back home anymore? Most of the workloads that needed extremely high-end storage systems in the past have moved to the cloud, where they logically belong. That means most of what you're still running back home isn't business-critical anymore.

A major company will probably have something like an in-house SAP style system and a bunch of other things like file server which no one uses anymore. Everything else will be moved to the cloud with or against IT's "better judgement". Remember, you don't need the IT guy to sign up for Slack, the boss does that with his own credit card while sitting in a meeting.

The cloud doesn't replace storage... it replaces the systems using storage.

Now... let's assume you're working for a newspaper or a television station, where you need local storage because 1000+ photos at 20-megapixel RAW or 25 hours of video at 12Gb/s needs to live somewhere. These days, you pay a lot of money for your storage, but you also have a choice of easily 10 legitimate vendors and maybe another 200 "won't make it through another funding round" vendors. Right now, there are lots of choices, and all those vendors still have enough sales to keep them in the black.

Now, as more and more services migrate to the cloud, the storage systems at most companies with more "plain vanilla" needs will free up capacity on their local storage. When they refresh their servers, they'll choose a hyperconverged solution for the next generation.

This will mean that the larger storage companies will dissolve or converge. If they dissolve, they're gone. If they converge, they'll reduce redundant products and deprecate what you already have.

As this happens, the companies with those BIG low-latency storage needs will no longer be buying a commodity product but a specialty product. Prices will increase, and the affected customers will be substantially more conservative about their refresh cycles in the future.

Storage is ending... sure, there will always be a need for it in special cases, but I think it will be a LONG time before the stock market goes storage-crazy again. And I don't think NetApp, a storage-only company, will survive it. EMC is part of Dell and 3Par is part of HP, etc... companies which sell storage to support their core business. But NetApp sells storage and only storage, so they and Pure will be hurt hardest and earliest.


Re: End of storage coming

Honestly, I think the NKS platform looks ok, but I expect that it's only a matter of time before all three clouds have their own legitimate competitors for it.

Don't get me wrong, I'm not saying it to be a jerk... as I said, it looks OK. But it's an obvious progression for K8S; I've been building the same thing for internal use on top of Ceph at work. I'm pretty sure everyone trying to run a fault-tolerant K8S cloud is doing the same. But to be honest, if you're doing K8S, you should be using document/object storage, not volume storage.

If you're running Mongo or Couch in containers, I suppose volume or file storage would be a good thing. But when you're doing "web scale applications" you really should avoid file and volume storage as much as possible.

I just don't expect NetApp to be able to compete in this market when Microsoft and Amazon decide to build a competing product and pretty much just toss it in with their existing K8S solutions.


Re: End of storage coming

I don't disagree on many points. I've seen some pretty botched cloud gambits. And those are almost always at the companies that go to the cloud by copying up their VMs as quickly as possible. It's like "If you actually need VMware in the cloud... you really did it wrong."

The beauty of the change is that the systems that genuinely belong in the cloud... like e-mail and collaboration... are going there as SaaS, and it's working GREAT. Security for email and collaboration can't ever work without economies of scale and 24/7 attention from companies who actually know what they're doing... not like Cisco AMP or ESA crap.

A lot of other systems are going SaaS as well... for example Salesforce, SAP, etc... these systems should almost be required by law to move to the cloud, if for no other reason than that it guarantees paper trails (figuratively speaking) of all business transactions that can be audited and subpoenaed. Though that's true for email and collab too.

Systems which are company-specific can come back home and then, over time, get ported to newer PaaS-type systems which can be effectively cloud-hosted.

I actually live in terror of the term "Full Stack Developer" since these days it often means "We don't actually want to pay for a DBA, we'd rather just overpay Amazon"


End of storage coming

OK, when NetApp rose, it was because companies overconsolidated and overspent. Not only that, but Microsoft, VMware and OpenStack lacked built-in storage solutions. Most storage sales were measured on the scale of a few terabytes at most. Consider that a 2TB FAS 2500-series cost a company $10,000 or more using spinning disks.

Most companies ran their own data centers and consolidated all their services into as few servers as possible. They went from running 5-10 separate servers (AD, Exchange, SQL, their business app...) costing $2000 each to 3-10 VMware servers costing $5000 each plus a SAN and an additional $2000+ in software licenses each... to run the same things.
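A toy comparison of the two setups described above, using the post's ballpark prices (the per-server, per-host and per-host license figures are the ones quoted; the SAN price is my placeholder):

```python
def physical_cost(n_servers: int, per_server: int = 2000) -> int:
    """Old model: a handful of cheap standalone servers."""
    return n_servers * per_server

def virtualized_cost(n_hosts: int, per_host: int = 5000,
                     license_per_host: int = 2000,
                     san_cost: int = 10000) -> int:
    """New model: pricier VMware hosts, per-host licenses, plus a SAN
    (the SAN figure here is a made-up placeholder)."""
    return n_hosts * (per_host + license_per_host) + san_cost

# Ten $2000 servers vs. five VMware hosts plus a SAN:
print(physical_cost(10))    # 20000
print(virtualized_cost(5))  # 45000
```

Even before staffing costs, the consolidated setup roughly doubles the hardware-and-license bill in this sketch, which is the shift the post describes.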

Performance dropped considerably when they made that shift. Sure, it was supposedly easier to manage, but management began to realize that systems which used to take one average-skilled employee and one consultant now took a team of full-time employees and a lot more consultants to run.

Performance was almost always a problem because of storage. NetApp made a fortune because they could deliver a SAN which was relatively easy to manage that could handle most small businesses data.

What got really weird is when the bosses wondered how they went from $100,000 IT costs per year (people too) to $500,000 or more and no matter how much they spent on tools to make it more reliable and more robust, they always found themselves with the same outages and increasing costs.

Enter the cloud.

Companies could move their identity, mail, sharepoint, collaboration and office tools online using a relatively easy migration tool which took a few days to weeks.

SQL and their company app could be uploaded as VMs initially with little effort and with some effort, they could move their SQL to Azure’s SQL.

Now, they can downsize to one IT person and drop their costs to about $100K a year again.

The catch is, since we no longer need a dozen IT guys and consultants, no one left knows what either NetApp or Cisco is, and they're just using simple pointy-clicky UIs to do everything. Their internal data center is being spun down and finding its way to eBay instead.

Then there’s whoever is left. They find that by replacing their servers with new servers containing disks, they can use VSAN, Storage Spaces Direct or Swift and not have to spend money on external storage which actually has a lower aggregate performance and substantially higher cost. Not only that, but they’re integrated into the systems they run on.

NetApp has no meaning for cloud vendors because MS, Google, Amazon, Facebook, Oracle can all make their own. In some cases, they even make their own hardware.

NetApp will still have a market for a while, but they will become less interesting as more services are moved to the cloud. After all, most companies depending on NetApp today probably have just enough performance to continue operations and as more systems go to the cloud, they’ll need less performance, not more.

There will be organizations like military and banks who will still need storage. And of course there are surveillance systems that require keeping video for 2-10 years depending on country. But I believe increasingly they will be able to move to more cost efficient solutions.

NetApp... I loved you, but like many others, I have now spun down 5 major NetApp installations and moved either to cloud or to OpenStack with Ceph. My company is currently spinning down another 12 major (service provider scale) NetApp solutions because we just don’t need it anymore.

I wish you luck and hope you convince HPe to buy you out like they do to everyone else in your position.

Cray's found a super scooper, $1.3bn's gonna buy you. HPE's the one


So long Cray.. we’ll miss you

So... what about the obvious implication that this leaves the US with only one supercomputer vendor? Ugh.

I mean really, if Cray can't manage to be a player with the US dumping exascale contracts on them... the US deserves to be screwed. The US government should have been dumping cash on SGI and Cray for years. Instead, they forced them into bidding wars against each other, which allowed an acquisitions-and-mergers chip shop with no supercomputing pedigree to suck them both up, leaving the US without even one legitimate HPC vendor in 3-5 years.

Do a search on SGI and find out what HPe has done since buying them... nothing. They ran what was left of the company into the ground.

What about Cray? Cray does a lot of cool things. Storage, interconnects, cooling, etc... at one time HP did this too. And if HPe didn’t suck at HPC, they wouldn’t need to buy Cray. They could actually compete head on. But, no... they have no idea what they’re doing.

Want to see what’s left of HPe... google HPe research and show me even one project which doesn’t seem as interesting as Mamma June on the cover of Hustler?


What about SGI?

They bought SGI also... they finished up those contracts and what came next? Oh... SGI who?

Nvidia keeping mum on outlook for year as data centre slows, channel chokes on crypto crap


Alienating their core?

So, gaming cards are twice as expensive as they should be.

The V100 is WAY more expensive than it should be... it is cheaper to spend more developer hours optimizing code for consumer GPUs than to use V100s, which cost a minimum of four times what they should... at least to justify the CapEx for the cards. If the OpEx for consumer GPUs is way lower than the V100 cost, why would I buy 10 V100s rather than 100 GeForces?
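The shape of that tradeoff is easy to sketch. Every number below is hypothetical ($10k per V100, $700 per consumer card, $100/hour for the extra optimization work), just to show the comparison, not actual pricing:

```python
def fleet_cost(card_price_usd: float, n_cards: int,
               extra_dev_hours: float = 0,
               hourly_rate: float = 100.0) -> float:
    """CapEx for the cards plus OpEx for extra optimization labor."""
    return card_price_usd * n_cards + extra_dev_hours * hourly_rate

# 10 V100s with no extra tuning vs. 100 consumer cards plus 200
# hours of optimization work (all prices hypothetical):
print(fleet_cost(10_000, 10))                     # 100000.0
print(fleet_cost(700, 100, extra_dev_hours=200))  # 90000.0
```

With these made-up numbers the consumer fleet wins even after paying for the tuning; flip the dev-hour figure high enough and the datacenter card wins, which is exactly the OpEx-vs-CapEx question the post raises.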

Then there's Grid... I don't even know where to start on that. If you use Grid, there is no possible way to justify the cost. It is so insanely expensive that absolutely every ROI or TCO excuse you have for running virtualized evaporates instantly. Grid actually increases TCO by a LOT, and you can't even force nVidia to sell it to you. I mean really, you knock on their door begging to buy Grid for 1000 nodes and they don't answer emails, they refuse to demo... you're sitting there with cash in hand, waving it under their nose, looking for a dotted line to sign on, and they blow you off.

They are too busy to bother with... well customers.

You know... they deliver to important customers like Microsoft, Amazon and Google. They don’t need the rest of us.

Good heavens, is it time to patch Cisco kit again? Prime Infrastructure root privileges hole plugged


Oh for the love of pizza

Ok... if you’re a network engineer who doesn’t suck, you would secure your control and management planes. If you install PI properly, it should be behind a firewall. If you install Huawei switches, the management planes should be blocked.

This is getting stupid.

Now, PI is based on a LOT of insecure tech. It’s a stinking security nightmare. You can’t run PI or DNA controllers without a massive amount of security in-between. This is because Cisco doesn’t actually design for security.

If you want a fiesta of security hell, just install Cisco ISE, which might be the least secure product ever made. Their SAML SP looks like it was written by drunken hackers. Their web login portal is practically an invitation. Let's not even talk about their insanely out-of-date Apache Tomcat.

Want to really have a blast hacking the crap out of Prime? Connect via wireless and manipulate radio management frames for RRM. You can take the whole network without even logging in. It’s almost like a highway to secure areas.

When you contact Cisco to report zero-day hacks, they actually want you to pay for them to listen to you.

How about Cisco CDP on IOS XE having rootable vulnerabilities caused by malformed packets? A well-formed malicious CDP packet can force a kernel panic and a reboot, and if you move quickly enough, you'll be on the native VLAN while it's still reading and processing the startup config. I mean, come on... it's 2019 and they still have packet-reassembly vulnerabilities because they don't know how to use sk_buff properly?

They practically ignore all complaints about it too.
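The bug class above is classic TLV parsing with an attacker-controlled length field: CDP frames carry type/length/value records with a 2-byte type and a 2-byte length that includes the 4-byte header. Here's a minimal defensive parser sketch, in Python rather than kernel C, just to show the checks a safe implementation needs:

```python
import struct

def parse_cdp_tlvs(payload: bytes) -> list:
    """Walk CDP-style TLVs, refusing to trust the wire-supplied length.

    A parser that indexes with the raw length, unchecked, is exactly
    the kind of code that panics on a malformed packet.
    """
    tlvs = []
    offset = 0
    while offset < len(payload):
        if len(payload) - offset < 4:
            raise ValueError("truncated TLV header")
        tlv_type, tlv_len = struct.unpack_from("!HH", payload, offset)
        # The length must cover its own 4-byte header and must not
        # run past the end of the buffer.
        if tlv_len < 4 or offset + tlv_len > len(payload):
            raise ValueError(f"bad TLV length {tlv_len} at offset {offset}")
        tlvs.append((tlv_type, payload[offset + 4:offset + tlv_len]))
        offset += tlv_len
    return tlvs
```

A Device-ID TLV (type 0x0001) with a sane length parses cleanly; a length of 2, or one that overruns the frame, is rejected instead of corrupting the walk, which is the whole point.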

Time to reformat the old wallet and embiggen your smartmobe: The 1TB microSD is here


Am I the only one?

I was driving yesterday and, as always, instead of paying attention to the road, I was going all sci-fi and drifting off into a weird fantasy. I thought... imagine if I blinked and found myself driving my BMW i3 in the year 1919... kinda like "Back to the Future" but without Mr. Fusion.

My car had recently been cleaned, so all I had with me was my backpack. And I freaked, because my play laptop, a Microsoft Surface Go, didn't have any development tools on it... not even VS Code. And I was like, "I have a JRPG video game, some movies, and the only programming languages I have are PowerShell, whatever is in Office, VBScript and the web browser... OK... I can code... but I don't have Google, Wikipedia, or StackOverflow."

I could make do, I told myself, and then I remembered that on my phone I have about 150 videos on multivariable calculus, chemistry and encryption. Woot!

Then I realized how screwed I was, because I didn't have the parts to build a USB-to-well-anything interface... all I had for peripherals was a USB-C to USB and HDMI dongle. I could design a USB-to-serial UART. In fact, I also have an FPGA course on my phone, and I could write a simple VHDL-to-schematic compiler in PowerShell if I had to. But of course, I would have to make my own semiconductors, and I'm not sure I could produce a stable silicon substrate capable of the 12MHz needed for USB using 1919-era laboratories.

Then I realized I had a really awesome toy with me... a 400GB MicroSD in the laptop. I don't think I could even explain to Ramanujan what 400GB is, and that's a guy who was pretty hardcore into infinite series. Could you imagine explaining to people 100 years ago that you had a chip visually the size of a fingernail which held the storage and addressing circuitry for about 4 trillion memory cells?

So, today... without even thinking of it, I found myself loading VSCode, .NET and Julia onto my laptop. Yesterday afternoon, I found myself packing a USB-RS-232 dongle too. I also realized that I had 3D Studio Max and OpenSCAD installed.

And oddly, I believe I have an Arduino and USB cable in my glove box. Though, I don’t have the software, but I think I could write an Atmel assembler from memory.

Today, if I got sucked back to 1919, I could use my laptop to design a simple field-effect transistor which I'm sure would be reliable at 50kHz, a simple cathode ray tube, a simple CPU, a reliable and reproducible carbon-film resistor, a half-assed capacitor (I don't know the chemistry on those, but I could fake it), and probably a reasonable two-sided circuit board etching and plating system... and I could probably do all this with my laptop and 1919-era tools and tech. I would have to do it at Kodak in New York.

Oddly, I could probably do most of this with just the information I have on my phone, but it would probably take me a while just to make a stable 5V 2amp power source to keep the phone running for any period of time.

To be honest, I think I’d find the closest thing to epoxy available at the time. I would use gold leaf to make traces... then I’d use a simple acid based battery. I wouldn’t trust 1919 electrical mains.

Anyway... anyone else here ever get geeky like this? Wouldn’t you love to show off a 1TB MicroSD card to people back then? Hell just try to explain the concept of what it would take to fit 10 trillion non-volatile memory cells into something that size :)

Mellanox investor proposes class action to kill Nvidia's $6.9bn mega buy


Future potential?

ARM processors are beginning to integrate 100Gb/s Ethernet to support RDMA over converged Ethernet. See Huawei’s HPC solutions for reference.

Intel has the capacity to do the same with their own chipsets used in servers and supercomputers.

NVidia, if they choose to, can do the same on their own. They clearly have a solid grasp on high-speed serial communications.

Infiniband is useful in HPC environments because it's mostly plug and play. But it comes at a premium. The HPC market is investigating alternatives because, as with technologies like ATM/SDH/SONET, a much less expensive technology... namely Ethernet... has become good enough to replace it.

I just saw a 1000 port QDR Infiniband multiplexer sitting unused in a supercomputing center this morning. It will be replaced with 100Gb/E, not more Infiniband.

They should sell now while they are still valuable.

Complex automation won't make fleshbags obsolete, not when the end result is this dumb


It’s not about becoming obsolete.

If you consider that the heart of the issue is unsustainable capitalism, it becomes clear. It has little to do with automation; it's about centralization and enhanced logistics.

We simply overproduce.

Let’s use a simple example.

Ground beef has a limited shelf life. It can survive quite a long time when frozen, but the meat will degrade and no longer be edible after a short time when thawed.

We as shoppers, however, are turned off by meat that is frozen. It looks unattractive, even though we should know that almost immediately after slaughter the meat goes into frozen storage... and even at a butcher, we're drawn to meat hanging on hooks in a freezer. When the meat is on a shelf, we'll buy the fresh, red, lovely pack, transport it home thawed, refrigerate it, and hope we use it before the "best before" date passes.

Grocery stores also know that shoppers almost never buy the last meat products on the shelf, and that they can charge more for thawed meat than frozen. The result is, they ensure there is always enough thawed meat to attract shoppers and charge them more. They also waste packaging: to make the meat last just a little longer, they use sealed packaging that keeps it pretty for a while, and the packaging now even has fancy little freshness-measuring devices... which are not recycled. In order to produce (and overproduce) enough ground beef that roughly 30% of it can be wasted (a real number here in Norway), we are left with massive amounts of other meat that must also be sold and suffers the same problems.

When you purchase meat online for home delivery, it can be kept frozen during the entire process... up to, but not necessarily including, the "last mile" delivery. We don't need to produce extra to make the meat look more attractive to consumers. We can expect the consumer to receive fresh, lovely red ground beef with no need for freshness sensors, vacuum-sealed packaging, etc...

Then there are more advanced, larger-scale marketing mechanisms. If people are buying too much ground beef, algorithms can raise the prices of cheaper meats and lower the prices of more expensive cuts to convince shoppers to eat steak instead of burgers tonight. We can sell 400 grams or 550 grams or however much, because meat will be packaged to order. We can cut deals with pet food and pig slop companies to simply give them byproducts in exchange for bartered product... "if we give you pig feed worth $1.5 million, you give us bacon worth $1.5 million"... which would probably count towards tax credits for being green and also leave the additional money in a form that can be written off.
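The price-nudging idea above is easy to sketch as a toy rule; the products, prices and the 0.5 sensitivity constant below are all made up for illustration:

```python
def rebalance_prices(products: dict) -> dict:
    """Nudge each price in proportion to how far the product's sales
    share sits from an even split: oversellers get pricier, the rest
    get discounted, steering demand toward the surplus cuts."""
    total_units = sum(p["units_sold"] for p in products.values())
    even_share = 1 / len(products)
    adjusted = {}
    for name, p in products.items():
        share = p["units_sold"] / total_units
        factor = 1 + 0.5 * (share - even_share)  # 0.5 = sensitivity knob
        adjusted[name] = round(p["price"] * factor, 2)
    return adjusted

# Ground beef outsells steak 3:1, so beef goes up and steak comes down:
demo = {
    "ground_beef": {"price": 10.0, "units_sold": 300},
    "steak": {"price": 30.0, "units_sold": 100},
}
print(rebalance_prices(demo))  # {'ground_beef': 11.25, 'steak': 26.25}
```

A real retailer would fold in stock levels, shelf life and margins, but the mechanism is the same: prices become a lever for steering demand toward what would otherwise be wasted.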

This works great because people buying online will buy based on photos and text. Marketing is easier. The product always looks perfect prior to purchase.

By needing to produce 30% less, we need 30% fewer cows. Less movement of livestock or frozen sides. We need fewer butchers. We can use more machines. We’ll use less packaging. We won’t need freshness sensors. We can package in biodegradable paper or reusable, deposit-based containers. We can eliminate the printing of fancy labels. We will reduce shipping by 50% by using more efficient packaging and shipping 30% less product to begin with. We can reduce the consumer fuel consumption, car repairs and tire degradation associated with shopping.

By enhancing logistics and centralizing as much as possible, we will eliminate massive numbers of jobs. But initially the result will be people spending more time unemployed and, believe it or not... more time humping, reproducing and creating more people with fewer jobs available to them.

As such, we need to start sharing jobs. People will work 50% of what they do today. This means they’ll have much more time to manage their household economies. They’ll eat out less and spend more time cooking. This will reduce dependence on restaurants. They will also have less disposable income, as they’ll be forced to spend more time entertaining themselves. They will think more about their meals and waste less food producing them, because when they buy chicken breast on sale, they can use half today and half two days from now. It won’t be “I planned to use the other half, but ate out because I got stuck in a meeting.”

People will order groceries for delivery, which means the grocery stores that used to be “anchor stores” will become less important, and people will stop saying “Let’s grab a taco and some ice cream next door to the grocery store while we’re out already”. As such, those smaller stores which were never anchors themselves will become less interesting.

This was a simple example, and it barely scratched the surface. It has so little to do with automation. It’s capitalism and we just have too many meat sacks to keep it maintained.

Tesla touts totally safe, not at all worrying self-driving cars – this time using custom chips


Use of investor's capital?

I've worked in a few environments where we did our own HDL development. We worked almost entirely in the FPGA world because we built too many "special purpose" algorithms which would often require field updates... an area not well suited to ASICs.

But, I believe what Tesla is doing here is a mistake.

Large scale ASIC development is generally reserved for a special category of companies for a reason. Yes, their new tensor processor is almost certainly a bunch of very small tensor cores, each of which is relatively easy to get right, and the interconnect is probably a really simple high-speed serial ring bus... so it's probably not much harder than just "daisy chaining" a bunch of cores. But even with a superstar chip designer on staff, there is a tremendous amount of cost in getting a chip like this right.

Simulation is a problem.

In FPGA, we often just simulate using squiggly lines in a simulator, then synthesize and upload the design to a chip. The trial-and-error cycle is measured in hours and hundreds of dollars.

In ASIC, all the work is often done in FPGA first, but then routing, masking and fabbing a new chip... especially one of this scale... involves a HUGE amount of work. It requires multiple iterations, and there are always going to be issues with power distribution, grounding, routing... and most importantly, heat. Heat is a nightmare in this circumstance. Intel, NVidia, Apple, ARM, etc... probably each spend 25-50% of their R&D budgets simply on putting transistors in just the right places to distribute heat appropriately. It's not really possible to properly simulate the process either... a super-star chip designer probably knows most of the tricks of the trade to make it happen, but there's more to this than intuition.

Automotive processors must operate under extreme environmental conditions... especially those used in trucks traversing mountains and deserts.

If Tesla has actually managed to make this happen and build their own processors instead of paying NVidia, AMD or someone similar to do it for them, I still see it as a pretty bad idea overall.

Of course, I'd imagine that NVidia is raking Tesla over the coals and making it very difficult for Tesla to reach self-driving in a Model 3 class car, but there has to be a better solution than running an ASIC design company within their own organization. Investing in another company in exchange for favorable prices would have made more sense, I think. Then the development costs could have been spread across multiple organizations.


Re: 144 trillion operations per second

I'd love to see something that would back your statement up.

To be honest, I'm just moving past basic theoretical understanding of neural networks and moving into application. I've been very interested in reducing transform complexity and therefore reducing the number of operations per second for a given "AI" operation. Think of me as the guy who would spend 2 months hand coding and optimizing assembler back in the 90's to draw a few pixels faster. (I did that too)

I don't entirely agree, from my current understanding, with the blanket statement that it wouldn't need that much. I believe at the moment that there are other bottlenecks to solve first, but at least in my experience, processing convolutional networks in real time from multiple high-resolution sources at multiple frequency ranges could probably use all 144 trillion operations and then some.

Do you have something that would back up your statement? I'd love to see it for a better understanding of the topic.

Better late than never: Cisco's software-defined networking platform ACI finally lands on AWS


Re: If you need ACI in AWS or Azure, you're just doing it wrong

Shitting on the competition?

What competition? NXOS vs ACI?

ACI does try to solve software problems using hardware solutions. This can’t be argued. In fact, it could be its greatest feature. In a world like VMware where adding networking through VIBs can be a disaster (even NSX blows up sometimes with VUM... which no one sets up properly anyway), moving as much networking as possible out of the software is probably a good thing.

Using a proper software-defined solution such as Docker/K8S, OpenFlow, Hyper-V extensible switch, or even NSX (if you just can’t escape VMware) with a solid layer-3 solution like NXOS... or any other BGP-capable layer-3 switch... is generally a much better design than using a solution like ACI which separates networking from the software.

It’s 2019; we don’t deploy VMs using OVFs and next-next-next-finish wizards anymore. We create description files in YAML or AWS/Azure-specific formats, automate the deployment, and define the network communication of the system as part of a single description.
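As an illustration of what such a description looks like, here is a hypothetical Kubernetes-style file (all names invented) that defines a service and its permitted network traffic in one place:

```yaml
# Illustrative only: a deployment and its allowed traffic in one description.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-ingress
spec:
  podSelector:
    matchLabels: { app: web }
  ingress:
    - from:
        - podSelector:
            matchLabels: { app: frontend }
      ports:
        - port: 8080
```

The compute and the network policy live in one versioned description, which is exactly what a hardware-first model like ACI struggles to express.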

ACI didn’t work for this. So Cisco made Contiv and by the time the market started looking at ACI+Contiv as a solution, Cisco had basically abandoned the project... which left us all with Calico or OpenFlow for example... which are not ACI friendly.

Of course, NSX doesn’t control ACI since they are different paradigms.

Hyper-V extensible switch doesn’t do ACI, so Cisco released an ACI integration they showed off at Live! a few years back and then promptly abandoned.

NXOS works well with all these systems and most of these systems document clearly how they recommend they are configured. Microsoft even publishes Cisco switch configurations as part of their SDN Express git.

So... which competition are you referring to?


Re: If you need ACI in AWS or Azure, you're just doing it wrong

Servers + fabric + VMware license + Hyperflex storage license + Windows Server Enterprise licenses + backup licenses (Veeam?) + firewall + load balancer + server engineering hours + network engineering hours + backup engineering hours + Microsoft hours...

You need two stacks of (three servers + two leaf and two spine + 2 ASR1000 or 2 border leafs + 2 firewall nodes, 2 load balancers) and whatever else I’m forgetting.

If you can get a reliable Hyperflex environment up with VMware and Microsoft license and all the hours involved for less than $1.6 million, you probably have no clue what you’re doing.... and I specifically said retail. And architecting, procuring, implementing and testing etc... a redundant Hyperflex environment requires several hundred hours of what I hope are skilled engineers.

I’ve done the cost analysis multiple times on this. We came in under $1.2 million a few times, but that was by leaving out things like connecting the servers to the UPS management system and cutting corners by using hacked fabric solutions like skipping the border leafs or trying to do something stupid like trading in core switches and trying to make the ACI fabric double as a core switch replacement. Or leaving out location independence etc...


If you need ACI in AWS or Azure, you're just doing it wrong

So... the year is 2019 and well... software defined is... um... in software.

ACI has one of the most horrible management and configuration systems ever to be presented on earth. It started off as a solution to support an "intelligent" means of partitioning services within data centers running VMware. This is because VMware really, really needed it. VMware, even with NSX, is still networking like it's 1983. So companies invested heavily in ACI, which would allow them to define services based on port-groups, describe policies to connect the services together, and even support service insertion.

Well, if you're in the 21st century and using Hyper-V, or far better yet OpenStack, or better still Docker/Kubernetes, all of these features are simply built in. In Docker Swarm mode, it's even possible to do all of this with full end-to-end encryption between all services. And since you can free up about 98% of your bandwidth from storage in a VM environment, you have lots of extra bandwidth and also extra CPU... and I mean LOTS of extra CPU... a well-written FaaS function uses 0.0001% of the resources that a similar routine on a VM would use... no exaggeration... that's the actual number... we measure resource consumption in micro-CPUs (as in one millionth of a CPU) as opposed to vCPUs when doing FaaS. For PaaS on Docker, we think in terms of milli-CPUs for similar functions.

So, we use all that idle CPU power for networking functions. And since we can truly micro-segment (not VMWare NSX crap segmentation or ACI brainless segmentation), we can have lots of load balancers and encryption engines and firewalls, etc... and still not use a 100th of what ACI would waste in resources or a millionth of what it would waste in money.

The best solution a company can take in terms of the 21st century is to start moving their systems more and more to proper modern networking and virtualization rather than wasting all that money on trying to come up with ways of scaling even further up using solutions like ACI.

What's worse is that if you're considering using ACI in the cloud, what it says is that you think none of the pretty damn awesome SDN solutions that are integral parts of the cloud provider's platform work. And instead you're willing to spend A LOT more money to add networking that doesn't do anything their offerings don't, but at least creates a bunch of new jobs for engineers who don't really understand how it works to begin with.

Having reviewed ACI in the cloud in extreme detail... the only thing I could come up with is "Why the hell would anyone want that?". I was just at a job interview with a major multi-national financial clearing house where they wanted to hire me as an architect to recover from their failed attempt at ACI... I explained that the first thing I'd do is delete ACI from the Nexus 9000 switches, upgrade to NX-OS (the legacy networking platform), set up layer-3 connectivity between nodes and use their OpenShift environment to manage the networking and handle all the software-defined networking, as it's far better suited for it. They loved the idea... we could easily reduce the complexity of the networking infrastructure by a substantial amount. In fact, by using a simple layer-3 topology (all that's needed for real SDN, which operates entirely on tunnels over layer-3) we could cut costs on people and equipment by millions per year.

Cisco has spent the last 10 years trying to make new technologies which don't actually solve problems but add complexity and therefore errors and management headaches at up to 100 times the cost of their other solutions which are actually more suitable. And I really only wish I was exaggerating those numbers. ACI actually increases costs DRASTICALLY with absolutely no chance for return on investment.

On the other hand, if your company has a VMware data center and A LOT of VMs which will take years (if ever) to replace with intelligent solutions, I would recommend buying two small HyperFlex stacks (retail cost with VMware licenses and ACI, about $1.6 million minimum configuration) which should let you cut the operations overhead substantially... possibly down to 3-5 people... until you can move more and more systems off the legacy platform.

Astronomer slams sexists trying to tear down black hole researcher's rep


Thank you!!!

I had not considered her gender to be relevant. I have a friend who is a physicist, he pees sitting down because his mother would have beaten him to death for dripping on her floor. I don’t think it’s ever been relevant to his research which way he pees, why would it matter to Katie’s research how she pees?

She has a Ph.D. from MIT and is a researcher and professor at Caltech... she’s way beyond gender issues when it comes to her profession. Even if for some reason she were at the lowest possible level of accomplishment for someone in her position, she would still be a frigging brilliant scientist. And from what I can tell, she is definitely not at the bottom.

As someone who respects the hell out of her life accomplishments, I will make some comments that border on sexist. Her presentation of her work includes a level of giddiness and bubbliness (if that is a word) that would be creepy coming from a guy, but in her case is endearing to the point of bordering on adorable. She seems to love her work as much as I love mine. I can almost imagine my next presentation of my research having cute little squeaks in it like she has done in her presentations. I just started writing a brief postulate on applying Lagrangians to the Finite Element Method to attempt a P solution to an NP problem in structural analysis by defining desired results and calculating idealized mesh coefficients by working in reverse. I am so going to roll forward on my toes and yip like Katie does so I can show my excitement. I’ll probably have rotten eggs thrown at me, but I don’t care... I love what I do and I want to show it like she does.

I have no idea how old she is, but I would love to adopt her and make her my daughter and we can do math together and be bubbly together :)

Nutanix and HPE sitting in a Greenlake: That disavowed hookup has actually happened


Re: Subscription Boxes As a Service (SBaaS)

It is out of control.

First, we had servers. A rack mount server from Dell for example cost about $2000 and was enough to run a single application.

Then as we ran more applications and had more servers, we decided that running around and changing failed hard drives was expensive. So we centralized. We did this by encapsulating SCSI packets into a new frame called Fibre Channel and this allowed us to have stacks of hard drives in a big box that allowed us to map LUNs to WWNs.

Then we wanted to support failover, to allow a backup path when a fiber failed, so we supported not just node names (WWNN) but port names (WWPN).

Then we decided that since hard drives were no longer 200MB in size but 200GB, we would make virtual drives as files on a centralized RAID and, instead of mapping node world wide names to physical SCSI devices, map them to files.

Then we decided that servers running at 10% capacity were too much trouble, so we ran software that would allow a single $2000 server from Dell to do the job of eight $2000 servers.

Then we decided that if we added more RAM and more CPU, we could consolidate further, so we scaled those servers up... requiring specialized RAM, specialized storage controllers, specialized CPUs, specialized graphics cards, specialized fiber channel adapters, specialized management software, specialized cooling, etc... so we increased the density from 8 virtual servers per $2000 server to 20 per $40,000 server... bringing us back to about $2000 a task... plus storage.

By centralizing storage and increasing density drastically, we decided that we needed crazy new technologies to sustain disk bandwidth over SCSI fabrics that lack routing and intelligent load balancing, and introduced technologies like NVMe not only as a SCSI replacement for local connectivity, but also as a network fabric. We then decided to encapsulate NVMe... which is itself a routable/switchable network fabric... within FibreChannel. By doing so, we killed most of the benefits of NVMe, which could have been retained by simply configuring UEFI and VMware correctly to use NFS or another file service.

We are now at about $4000-$5000 per virtualized server... we require specialists in storage, backup, virtualization, networking, service insertion, etc... our TCO between CapEx and OpEx has risen from about $4000 per service per year to about $12000 per service per year and we're locked into vendors with no hope of ever escaping....

And then comes subscriptions.

Instead of $4000-$5000 per virtualized server, they're trying to figure out how to charge us $3000-$4000 per virtualized server per year instead.

I did a recent cost assessment for building a greenfield minimal VMware data center design that has nothing more than 6 nodes across two data centers. The purpose of this would be to run whatever services are still not "cloud based". This minimal configuration is the minimum you should run... not the minimum you could run. It came to $1.6 million CapEx and $950,000 OpEx per year... with a $1.6-$2 million additional CapEx investment every 5 years.

Anyone who buys a current license from VMware, Microsoft, Cisco, HPe, etc... should actually be investigated for criminal activity.

Moving to the cloud is dangerous as hell because it may seem like a bargain today, but we have no guarantee the prices will stay low over time. Moving a VM to the cloud is just stupidity... move services, not VMs that were overbloated to begin with.

I would posit that it would be substantially more cost effective for most companies to throw everything away, close shop for a year and start over than it would be to manage the absolute mess they are in thanks to virtualized servers today.

Anyone who would consider buying Simplivity from HPe or HyperFlex from Cisco should be shot for saying something so amazingly stupid... especially when they already pay for licenses for all of this tech from other vendors anyway. And besides, neither HPe, Cisco nor Dell has the first idea what the hell you would actually use their servers for... so their answer is to just buy so much capacity that it should run anything and everything.

The first step to recovery is to fire most of your IT staff... especially "The smart ones" and hire a "systems analyst" who will identify your actual business needs and then hire an architect to design systems to meet those needs. Then buy a "Kubernetes in a box" solution from RedHat, Ubuntu or whoever else that could run on a few Intel NUCs at $1000-$2000 a piece. And then build what you need.

The CapEx would drop to about $16,000 every 4-5 years and the OpEx would be much lower... and most IT spending would be done on development of the systems you actually need.

Huawei savaged by Brit code review board over pisspoor dev practices


Re: Real point here

I was hoping to see a comparative study against Cisco. My company gives Cisco over a billion euro a year, and while this seems damning to Huawei, I am pretty sure Cisco is as bad.

1) Multiple OpenSSL instances are normal. They should, however, be pulled from the same repositories. There are good reasons to compile OpenSSL differently based on context; I compile it differently when using it in kernel space than in user space. Keeping OpenSSL patched is an absolute must for security... OpenSSL is the most attacked library EVER simply because of its ubiquity. But that also means it should be the fastest patched.

2) A large amount of C code in a network product, unless it’s the forwarding engine itself, is a really bad idea. Even then, companies like Ericsson write large amounts of their systems in Erlang. While I’m no fan of Erlang, it has many benefits over C in this regard. As such, it would make sense to choose Ericsson over Huawei for 5G. Cisco uses C VERY heavily, and if you were to look at much of the code Cisco has made public... let’s say they have pretty bad practices.

3) Poorly mapped safe-C type functions for string management. If you’re using “C” and “safe” in the same sentence... just don’t. Even the absolute best C code will almost certainly degrade over time. A common pattern which has grown over time in C circles is to make insane messes of goto statements for releasing objects in the “right order” at the end of functions. I have seen many cases where this degraded over time.

4) 3rd party real-time operating systems are common. If you’re developing network hardware, RTOS makes a lot of sense as opposed to Linux. One reason is because network hardware should have deterministic latency to support protocols like ATM, SDH, ISDN, T1/E1. Vxworks, QNX, GreenHills all made excellent operating systems for communication grade equipment. Most of these systems however suffer from age. SYSBIOS from TI is also great. An excellent aspect of RTOS systems often is the ability to partition the CPU based not only on time share, but also cores.

I honestly think this review might be the best thing to ever happen to Huawei. It is a roadmap to let them plan their next steps. They should really consider looking into using Redox as a foundation to something new. If they invest in building a RTOS scheduler, it could be something glorious... especially for Huawei.

HP crashed Autonomy because US tech titan's top brass 'lost their nerve', says lawyer for ex-CEO Mike Lynch


Re: "Losing their nerve" is a common theme in all of HP's acquisitions...

I generally recommend to my customers that if they purchase HPe designed and developed products, go for it. But as soon as a company is acquired by either HPe or Cisco, they should see it as a sign they should consider looking for alternatives.

HPe and Cisco sales people don’t like selling products that take work. Meaning that if they can’t count on making regular bonuses, they lose interest. This is why most HPe and Cisco customers don’t buy what they actually need and end up never getting anything working as it should.

Even when these companies buy great products, if the sales people can’t figure out how to make them fit their portfolio, they simply won’t.

Cisco and HPe also lose interest quickly. Many times, both companies have purchased companies producing products targeting new markets. They try to sell those products to their existing customers, and those customers... who are conservative and often backlogged... don’t catch on until the development teams of the products have been downsized and placed in purgatory in India.

So the moral of the story is... don’t expect a company that is as big as a government, turns over leadership as governments do, and sells mostly to governments to behave any differently than any other dysfunctional government.

Don't mean to alarm you, but Boeing has built an unmanned fighter jet called 'Loyal Wingman'


If they deliver

Let’s be honest... Boeing isn’t exactly well known for delivering anything in “maybe 12 months”. As soon as they do a half-assed demo, Boeing will claim to be out of money, and it will end as a way-late, way-over-budget, never-delivered product.

In the meantime, any country that the plane would be useful against will focus on much smaller, much cheaper, autonomous drones... because they won’t have the same stupid tender process as western governments do.

NAND it feels so good to be a gangsta: Only Intel flash revenues on the rise after brutal quarter


The almighty dollar!

Thanks to the strong dollar, the majority of the world can't afford to pay in dollars what the market is demanding. Sure, we ship more bits, but if you want to sell them at all, you have to accept that you can't negotiate in dollars. They're too damn expensive. So, of course the revenue will be lower. You have to ship the same product for fewer dollars if you want to ship at all.

Oh, there's also the issue that people are finally figuring out that enterprise SSD doesn't really pay off. You just need to stop using SANs and instead use proper scale-out file systems.

Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo


There has been progress

I do almost all my ARM development on Raspberry Pi. This is a bit of a disaster.

First of all, the Pi 3B+ is not a reliable development platform. I’ve tried Banana Pi and others as well, but only Raspberry has a maintained Linux distro.

The Linux vendors (especially Red Hat) refuse to support ARM for development on any widely available SBC; even though the Raspberry Pi is possibly the best-selling SBC ever (except maybe Arduino), they don’t invest in building a meaningful development platform on the device.

Cloud platforms are a waste because... well, they’re in the cloud.

Until ARM takes developers seriously, they will be a second class citizen. At Microsoft Build 2018, there were booths demonstrating Qualcomm ARM based laptops. They weren’t available for sale and they weren’t even attempting to seed them. As a result, 5,000 developers with budgets to spend left without even trying them.

This was probably the biggest failure I’ve ever seen by a company hoping to create a new market. They passed up the chance to get their product in front of massive numbers of developers who would make software that would make them look good.

Now, thanks to no real support from ARM, Qualcomm, Redhat, and others, I’ve made all ARM development an afterthought.

Surface Studio 2: The Vulture rakes a talon over Microsoft's latest box of desktop delight


$100 a month? Not a bad ROI

If you consider that this machine will last a minimum of 3 years, $3600 is pretty cheap actually. It's a nice-looking machine, and because of its appearance, the user will be happy to hang onto it a little longer than a normal machine. I can easily see this machine lasting 5 years, which would make it REALLY cheap.

When you're thinking in terms of return on investment, if you can get a machine which will meet the needs of the user for around $100 a month, it's a bargain. This is why I bought a Surface Book 2 15" with little hesitation. The Office, Adobe and Visual Studio Subscriptions cost substantially more per month than the laptop.

I'm considering this machine, but I have to be honest, I'd like to see a modular base. Meaning, take this precise design and make the base something that could slide apart into two pieces.

The reason for this is actually service related. This is a heavy computer. It has to be to support the screen when being used as a tablet. 80% of the problems which will occur with this PC will occur in the base. When it comes to servicing these machines, they risk easy damage by being moved around. This is not an IT guy PC, it's something which is pretty. I'd like to simply slide a latch, then slide the PC part of the system off and bring it in for service.

Upgradability would be nice using the same system as well. But I'm still waiting for Microsoft to say "Hey, bought a 15 inch Surface Book 2? We have an upgraded keyboard and GPU to sell you"


Re: Hmmmmm!

I worked in post-production for a while. We were almost exclusively a Mac shop at the time, but we did most of our rendering on the workstation. Even more so when people began using laptops for post.

The earlier comment that the hardware has far outpaced the software is true. Sure, there are some rare exceptions. And if you're working on feature-length productions rather than a 45-second commercial spot at 2k (max resolution... meaning 1125 frames at 2k), you'll need substantially more. But a GTX1060 or GTX1070 is WAY MORE than enough to manage rendering most video using current-generation Adobe tools. Even 3D rendering with ray tracing will work OK. Remember, you don't ray trace while editing (though we might get closer now with RTX 2060+ cards). Instead, we render, and even then with the settings turned way down. Ray tracing on a single workstation can generally run overnight and be ready in the morning. If it's a rush, cloud-based rendering is becoming more popular.

This machine should last 5-7 years without a problem. Most of the guys I know who still have jobs in TV (there is way too much supply and simply way too little demand for TV people) generally run 5-7 year old systems. Or more accurately, they wait that long before considering an upgrade.

UC Berkeley reacts to 'uni Huawei ban' reports: We unplugged, like, one thing no one cares about


Re: BT Infinity uses Huawei and no one seems to care

Telenor Global Services runs a Tier-1 LTE service provider on Huawei which most western governments depend on for secure communication... and Huawei has administrative credentials for all the devices since they also have the operations agreements for the hardware.

None of this is classified information if you can read Norwegian.



Biting the hand that feeds IT © 1998–2021