* Posts by CheesyTheClown

756 posts • joined 3 Jul 2009


Your job was probably outsourced for exactly the reason you suspected


What about liability?

When you outsource, you specify a project and set acceptance criteria and then pay a company who can be held liable to deliver.

Yes, there are horror stories of how this can go wrong. The British and U.S. governments are absolute experts at botching their procurement of IT systems on a biblical scale. But overall, most outsourced projects are delivered, once you account for the fact that the original price and deadlines are set by whoever can lie the best. To win a tender against similar companies, vendors tend to intentionally underbid and quote over-optimistic delivery times, knowing they can ask for more once the project is in progress. So it’s generally best to budget at least 50% extra cost and time.

Even then, most systems are extremely similar. Using mature tools and platforms, vendors can estimate quite accurately the time and resources required to deliver almost any business system. For more technical projects, they can often throw large numbers of developers at making them happen.

When you employ your own development staff, you have no one to hold accountable but yourself. You need expert knowledge in planning and executing development projects. You can see salaries double in the time it takes to complete a project, and many such projects see their entire staff turn over during their lifetime.

If you are going to farm out development to an external organization anyway, why waste time or money going local when going overseas will yield similar results? I haven’t met many great ITT grads, but I’ve known a bunch of good ones. And for most projects, you just don’t need great. You need good enough.

And don’t forget, most people in IT, no matter where you look, are about equal. Each place has its rock stars, each place has its slow-but-steadies, and each place has its “there’s gonna be a lawsuit or a lynching” cases, all in about equal proportions.

If I were to outsource, I would look to Lithuania. I have worked with IT people all around the world. I have even been in one country training their IT staff and, a month later, in Lithuania training their replacements following a surprise outsourcing. The Lithuanians really impressed me on every count.

China seems to have figured out how to make 7nm chips despite US sanctions


Re: Another shoe of the US Clyddsdale falls.

I enjoyed your perspective even if I consider your writing style to be as annoying as my own.

Intel said for quite some time that its 10nm could achieve the same density as TSMC’s 7nm, which was mostly true, as Intel and TSMC measured differently. Of course there were exceptions, but overall, Intel and TSMC evolved their processes greatly over time, and once the limits of TSMC 7 and Intel 10 were reached, they both moved a notch down in size.

Intel 4 should be able to employ 7nm fabrication to match the density and power footprint of 4nm from TSMC or Samsung.

I haven’t found details on SMIC’s 7nm node yet, but I would expect they have to work on the following:

- transistor size

- trace width

- insulator width

- yield

- more

I would suspect that SMIC has nailed the transistor size issue. But yield will probably be a beast for them at this point.

What I think is really interesting about this is that SMIC had to work around a LOT of sourcing issues. Everyone has made a lot of noise over the EUV tech, but in truth, with enough money and engineers, EUV is not actually a mystery. The patent documents cover it in a lot of detail, and they just had to figure out how to do it themselves. I would be surprised if it was actually difficult to do once China decided to ignore patent rights.

The big dog was wafer production. It takes an incredible team of people to produce crystal of such purity. It is also really easy to hide the process of purification, growth, slicing and polishing. 7nm is pretty far from subatomic, still 1-2 orders of magnitude from it. But crystal production for semiconductors probably requires at least an 85% crystal accuracy (speculating from basic chemistry knowledge) to gate accurately. Layering must also be extremely difficult, since layers are very likely less than 30nm high (7nm trace, 15-degree angle of exposure, 13.5nm EUV wavelength).

This would mean that NMOS or PMOS at this scale would require some truly insanely accurate robotics and some ridiculously accurate centrifuges for layer application.

I think what many people didn’t recognize is that once something has been accomplished by anyone, anywhere, and becomes known through patents, marketing material or other sources… reproduction of that technology is much, much easier. All that keeps people from copying it is patents and good faith. When it became public knowledge that diagonal EUV lithography did in fact successfully etch 7nm wafers… the financial risk associated with reproducing that technology was greatly mitigated. You really would only need a team of chemists, physicists and precision engineers to do it again.

What I think should also be considered is that documents published regarding sub-nanometer fabrication allow China as a whole to invest in skipping multiple generations and even beating their competitors to the goal. No one says China HAS to move to 5nm or 3nm first. They can be happy with 7nm and focus entirely on 500pm instead.

That of course raises the next issue which is that the really difficult stuff to reproduce is the software.

If China is blocked from Mentor Graphics and other synthesis tools, they will be forced to stick with versions licensed before the embargo. They can’t just call Mentor and say “I need a patch for our proprietary fab process”. They will have to produce their own synthesizers and physical and logical simulators. That actually takes a lot of time. There is a lot of graph theory in rule-constrained synthesis. Also, field theory, especially concerning the quantum effects that matter at such small gate sizes, takes teams of brilliant people years to master.

Elon Musk considering 'drastic action' as Twitter takeover in 'jeopardy'


Where is the ROI?

I have been seriously wondering what Musk would actually be buying.

Twitter isn’t a meaningful information sharing platform. It does not promote communication. It requires users to favor banter. I haven’t seen much use of Twitter as anything other than sharing links and making quirky comments. It is commonly used for taking swipes at other people on a platform which greatly limits follow up discussion.

I see absolutely no features of Twitter which provide real value. As an example, look at Facebook. As a platform, it is generally also a cesspool, but everyone tends to have a Facebook account, and often Messenger, because it is the modern equivalent of a personal phone book. Twitter doesn’t have a real networking feature. You just follow people, you don’t connect with them.

I wouldn’t consider Twitter a worthy investment. The only value I can see for Musk to buy Twitter is to hurt Donald Trump. And frankly, he is rich enough to do something petty like that.

Actual quantum computers don't exist yet. The cryptography to defeat them may already be here


Policy vs Technology

Keeping data protected for 75 years is a matter more properly managed by policy than technology.

It would be important to identify the reason you would encrypt the data and then identify why you would decrypt it.

Most data doesn’t actually need to be encrypted if it is intended for offline storage. Rather, a good physical security policy, such as long-term media storage in a secure location, would be ok.

Encryption is primarily necessary when data is to be transmitted. Until it needs to be transmitted, plain text is probably ok. But when you transmit the data and it is intercepted, if it needs to remain private for 75 years from the time it is intercepted, it is clear that data will eventually be compromised. Therefore, such data should be transported physically rather than electronically.

You’re absolutely right, in my opinion. Looking 5-10 years ahead may be achievable. Looking 25 years ahead would be entirely impossible. We honestly have little or no idea how quantum computers work today, and we have no idea what breakthroughs will occur in the next 25 years. At this time, every quantum programming language is roughly equivalent to a macro assembler. We are literally programming with gate operations like assembler op-codes and, of course, macros. 25 years ago, chip design was very similar… almost identical. Then we figured out how to write high-level code which would automatically compile to synthesized logic. And then we rocketed decades ahead overnight. We have absolutely no idea what will happen when the quantum world sees such a leap.

So, if data must be secure for extended periods, floppy-net will probably be the best option.

Cisco EVP: We need to lift everyone above the cybersecurity poverty line


Step 1

Provide software updates for old hardware.

Currently, most Cisco customers are running hardware that is locked to old ciphers and hashes. This is because their hardware is out of support.

Cisco has been pushing upgrade cycles for decades that deprive customers of patches for their equipment.

There are millions of Cisco routers and switches that use MD5 passwords, RIPEMD160 SSH hashes, outdated and insecure AES key cycling for SNMP.

In fact, there are many in production, supported network devices which use weak or seriously outdated AES and 3DES ciphers for SNMPv3.

Then there is the Cisco golden goose, YANG-centric NETCONF, for which Cisco DevNet provides lots of documentation and training that push poor security programming practices, such as storing keys insecurely in example code.

Then there are products like Cisco ISE, Cisco’s golden security tool at the heart of all Cisco security, which uses severely out-of-date Apache Tomcat versions, Java libraries with insecure LDAP implementations, extremely insecure and easily compromised SAML implementations, and more.

Then of course there are insane cases such as Cisco FirePower which make use of insecure network stacks beneath their secure implementations.

Oh… I don’t even know where to start when it comes to coarse system patching. Every single IOS-XE device runs Linux and lots of things like OpenSSL, OpenSSH, etc… but even after ~20 years of IOS-XE, you still don’t have granular software updating. You can’t just “apt update”, for example, and get new versions of most of the tools which regularly have critical CVEs; you actually have to schedule downtime and perform time-consuming full system updates, sometimes with update cycles as long as 45 minutes.

I can go on for hundreds of cases of this.

Overall, Cisco is a mess when it comes to security.

But let’s be honest, they’re not much worse than their peers.

Export bans prompt Russia to use Chinese x86 CPU replacement


Re: Russian politics aside

Thank you! Compared to your normal posts… this one was nearly a marriage proposal ;)


Re: Russian politics aside

Honestly… I am an American and I’m just excited about where the tech is going. I really never had an interest in taking sides with anyone. Let the gorillas thump their chests and grunt. From a tech perspective, there is nothing but good things to expect to come from this.


Russian politics aside


Ok... I'm in complete disagreement with the author from a technical perspective. I feel as if there's a bit too much "Queen and Country" happening here. Patriotism and propaganda are fine, but there is so much more to this article than just "We're so much better than they are." In fact, this article should make the author seriously reevaluate that.

So, not long ago, China was embargoed and they were cut off from western technology. Since then they have

- launched a multi-pronged plan to mitigate the loss

- used ARM under a strange "ARM China is not ARM LTD" clause that allowed them to keep using ARM like crazy.

- moved approximately 50-200% faster on die process advancements than their western peers... depending on how you evaluate this. But whatever the case, it simply makes sense since their peers have to do things never done before and they only have to learn from what is already known.

- either licensed, bought or created most of the peripheral infrastructure surrounding CPUs including audio, video codecs, USB, MMU, interrupt controllers, DMA controllers, ethernet and more.

- launched a slew of RISC-V based processors and advanced RISC-V technology at least in terms of synthesis more than anyone else on earth.

- managed to use loopholes in the Cyrix x86 license via Via Tech to get a hold of the x86 ISA (at least until Intel figures out how to go after this); x64 never really had these issues.

- Grabbed what I believe is S3 graphics tech, which is nothing to write songs about but is a truly solid foundation to build on. They should be able to tack on a pile of specialized RISC-V cores to provide most of the processing capacity and, with some serious focus on memory technologies... meaning finding an HBM2-like solution and solving some pretty serious cache coherency issues... make a modern GPU.

Yesterday I was in front of a classroom and a student asked me to look at his progress on a project on his laptop. His machine was a several generation old Core i3 with 8 gigs of RAM. It was sluggish, but it was entirely usable even when loaded down by a very processor intensive application. I'd only guess that if I searched for a benchmark of this machine relative to the motherboards seen in this article, they'd be quite similar.

For HPC, Intel is not a requirement. Huawei and others solved this problem by producing CPUs which are less energy efficient than Intel or AMD but using solar energy and batteries for power. If you join a meeting with Huawei to buy a super computer, they present to you systems which they can deliver at any scale (as in Top500 systems) using Chinese technology and they can deliver power through solar. This is not a problem. And frankly, so long as it has a fully functional Linux toolchain including Python, Numpy, Scipy, Julia, C, C++, Fortran... I really don't care which instruction set I'm using. The only really important missing tool on the systems is Matlab, but it does have Octave ... which isn't really a replacement, but it could be good enough.

For telephones, there's ARM and soon RISC-V

For normal desktop, this x86 solution looks like it should be perfectly suitable for most users. Toss on a copy of Deepin Linux or ReactOS and it'll be fine.

Gaming and graphics workstations... those will require something else. And while everyone loves western and Japanese game studios... China is producing a crap ton of pretty good games these days, mostly on Unity and Unreal (I think), which could cause issues. Still, I can honestly see China pushing Chinese and Chinese-owned publishers to produce for a Chinese architecture. And let's be honest... most of the best games out there these days are well known for "they could run on a toaster".

At China's current rate of progress, they should meet or exceed western tech on every front within a few years... not necessarily always by looking at benchmarks and such, but based on user experience.

We can thank Donald Trump and Joe Biden for this. If it weren't for them, China probably would have kept chugging along at a moderate pace and been happy just to keep using and licensing western tech. But thanks to the embargoes which forced China to increase their efforts so drastically, we're going to see some truly amazing advancements in tech. This will be simply because the speed China can and is moving at is IMMENSE, and soon everyone else will have to really push it into overdrive to avoid being left in the dust.

The tech world is going to be truly amazing now. I can't wait to see what comes from this.

What is so exciting is that there's absolutely nothing that can be done to slow China down now. Not only are they hellbent on never being crippled by the west again, but they need to do this for their economy. For them it's full speed ahead or bust.

Elon Musk flogs $8.4bn of Tesla shares amid Twitter offer drama


I don’t get it

So… Twitter

From my use of it,

If you write threads, people hate you because that’s not Twitter

If you write tweets

- people hate you because even a haiku author can’t express themselves in the character limit without being (intentionally) misunderstood by most people

- people hate you because you sound like a raving lunatic for writing short meaningless messages

- people hate you for writing crappy jokes

- people hate you for writing anything

I have never experienced a platform so well suited for spreading hate and discontent. It’s a toxic platform. I am sure there is someone out there who isn’t a shit tweeter, and I think Twitter sometimes is a good platform for making announcements… but, it’s just not a good place.

Then there is the issue of Musk buying the platform… has Musk ever bought a company and made it successful? I was always under the impression that he’s a maker and builder… not a buyer.

Deere & Co won't give out software and data needed for repairs, watchdog told


Stack vs heap

Deere has a very odd internal development policy. For decades, they wrote their code in C, and they have specific coding rules which require all data to be on the stack, not the heap. Also, they don’t allow structures.

This means that there are decades of impressively shitty code on John Deere devices. There is a high likelihood that the engineers who wrote the code have no ability to read it themselves.

I think John Deere must have done some sort of internal assessment and realized that if they released source to their systems, they would likely have to also provide some degree of support for the code and it is entirely possible that’s not an option.

I have seen lots of JD source code, and what I would say is “if you drive a JD tractor, you’re lucky to be alive”.

Cisco to face trial over trade secrets theft, NDA breach claims after losing attempt to swat Leadfactors lawsuit


2008 was long past the point of original thinking.

I am more than happy to talk smack about Cisco’s business practices. I think Cisco is one of the dirtiest companies on the market at this time, and the funny thing is, they don’t even realize it. It’s one big family of “We drank the Kool-Aid” out there in San Jose. I mean really, you have never met a more brainwashed group of… groupies than the Tasman campus groupies.

That said, I was working at a major competitor to Cisco in the collaboration gig at the time. In fact, Cisco acquired them for something like 1.8 gazillion dollars. And one thing I’ll tell you about the social and conferencing market is that there hasn’t been an original thought in that market since Walt Disney built Epcot.

Everything in that entire field is strictly logical evolution, and if there is even a single patentable idea in social collaboration anymore, I’ll eat my socks… and that’s just gross.

I think the last piece of innovation in that field was when Zoom first reared its ugly head and rendered video in a 3D-oriented fashion. That really threw us for a loop. But really, by 2008, there was nothing left to invent. We are soooooo long past that point.

FreeBSD 13.0 to ship without WireGuard support as dev steps in to fix 'grave issues' with initial implementation


I was about to

swoop in and complain about poor coding.

Whenever I write kernel modules in C (Linux, ugh), I find myself spending far too long detangling unintuitive preprocessor crap that has no place in 2021. When implementing secure protocols, most ciphers allow in-place operations, but headers usually need to be prepended, and you will never find a solution to this problem that permits the headers to remain memory-aligned or the buffers to be encrypted in place.

This means that to effectively build protocols which encapsulate higher-level data, good buffer management is necessary. And while C allows you to do anything you want, as efficiently as you want, almost all solutions to the problem tend to lead towards reimplementing object-oriented language features... usually in preprocessor macros, or using crazy compiler extensions for tracking offsets into buffers based on their positions in structs.

There is also the whole game of kernel buffers. Since the kernel is in privileged mode, performing allocs and frees is frowned upon. The structure of the kernel memory space makes it expensive and dangerous to randomly allocate memory, especially if it may trigger the kernel to further allocate memory beyond its initial pool. Since the MMU is mostly bypassed in this mode, and since C memory management generally is not relocatable, the only real solution is to overprovision needlessly.

I could (and probably should) write a book on all the numerous problems related to kernel development as well as the endless caveats of coding kernels in C, but let me simply say that while good C code is possible, it’s rare and far too many trivial tasks are managed manually and repetitively when coding C.

I don’t particularly care for the syntax of Rust or Go. But both languages run with a great concept, which is... if it’s a very common task that can be added to the language with no real additional cost, do it. As such, both languages understand strings, buffers and data structures. There is no need for disgusting preprocessor hacks to support something as trivial as foreach.

C could fix these things as well. But it’s a conscious decision by the people steering the language to keep it as it is and to leave it to libraries and compiler extensions to do it instead.

I love C because if I want to write a new C compiler, I can make something usable and likely self-hosting within a few hours. But this isn’t the characteristic of a programming language I would want to use in 2021. If I were to spend my time on such a project, the first thing I’d do is build extensions for strings, buffers and data structures ... and it wouldn’t be C anymore.

Oh... and most importantly, I would drop the preprocessor and add support for domain specific language extensions. And I’d add proper RTTI. And I’d add an extension for references. And of course relocatable memory. And probably make ...

You know what... I don’t think I’d do anything other than bootstrap the new language with C and then basically just ignore the standard from there :)

Flagship Chinese chipmaker collapses before it makes a single chip or opens a factory


Re: More to this than meets the eye

They actually see what we call IP theft as free-market economy. Rather than protectionism, which is supposed to allow companies to recoup their investments through patents and such, they believe that everything that can be copied is open source, and each copy will generally generate innovation and place pressure on all players to always make advances.

I am currently sitting just outside a semiconductor research facility as I write this. It makes high frequency radio semiconductors for communications. Almost all the technology inside is off-the-shelf and while they obviously make a buck off patents, everyone there knows that their real protection is research and progress. We have a real problem at this location because if the US government becomes protectionist against its allies, they’d have to just shut down.

The good news is, there is another building just off to the side which is a nanotechnology research facility that can produce everything without dependence on the US. So rather than investing massive sums of money in traditional semiconductors, they focus their efforts in nanotechnology as a replacement.

When China eventually catches up to the US in semiconductor fabrication, it will quickly surpass them, as American companies depend on protectionism. And even if the US were to undo all of Trump’s restrictions this very moment, China would still invest heavily in their own tech and would still pass the US, since they have now learned that if it happened once, it can happen again. Not only that, but I imagine the facility I’m sitting at will start buying from China at least as much as from the US, so they will never have to worry about being completely cut off.

The sanctions placed on China by the Trump administration will likely be the biggest boost to China that could have ever been possible. It will take China time to recover from it, but when they do, it will leave the entire rest of the world without any leverage when negotiating with the Chinese on political issues. At this point, Biden’s choice of leaving the sanctions in place does nothing other than provide a buffer to let countries outside of China get a running head start in a very very long race.

Please don’t assume I’m playing the “Trump is evil” card. With the rollout of 5G, had Huawei been able to keep going as they did, China would have been able to draw trillions of dollars more from the US treasury. I don’t agree with how Trump prevented this; I would have rather seen an executive order demanding Cisco or another US company produce a competitive offering. But it did mitigate the risk of China being able to simply collapse the US economy on a whim. The European approach of working on an open source RAN was a far better one.

America, Taiwan make semiconductors their top trade priority at first-ever 'Economic Prosperity Dialogue'


What happens when...

One of the world’s top economies... heavily dependent on semiconductors is told that no one in the world is allowed to sell them semiconductors?

The easy answer is, they invest massive amounts of money, time and resources to never need to buy semiconductors from another country.

Then they build up enough manufacturing ability to produce semiconductors for every other country who doesn’t like the impending threat of being cut off.

Then they do it for a lower cost than any other country.

Then they weaponize their capacity and use extensive government grants to economically attack countries like Taiwan... after all, why not simply give semiconductors away for free until TSMC can no longer afford to keep their doors open?

Of course, in order to stay competitive, that country will innovate as well. They will make sure they’re not just competitive, but after throwing money at racing to equal with publicly traded companies in the US and Taiwan, they will have a momentum already in place to also surpass them.

So what happens when someone makes a decision that threatens many of China’s largest and most influential companies and treats them like this? Do you think that if Biden gives a little and agrees to sell them chips again, China will just stop their almost space-race-like efforts to become entirely independent?

Who knew? Hadoop is over, says former Hortonworks guru Scott Gnau


Re: @tfb This is why

I've been saying this for some time about COBOL. (Oh and I work with FORTRAN in HPC quite often)

People make a big deal about COBOL programmers being in short supply and that it's an antiquated language. Honestly though, what makes programmers really confused about it is that you don't really write programs in COBOL, it's more of a FaaS (Serverless, function as a service) platform. You write procedures in COBOL. The procedures are stored in the database like everything else and when a procedure is called, it's read from the database and executed.

The real issue with "COBOL programmers" is that they don't know the platform. The platform people are usually referring to when they say "COBOL" is actually some variation of mainframe or midrange computers. Most often in 2020, they're referring to either IBM System/Z, or to IBM System i ... which is really just a new name for what used to be AS/400.

The system contains a standard object storage system... or more accurately, a key/value store. And the front end of the system is typically based on CICS and JCL, which is job control language. IBM mainframe terminals (and their emulators) have a language which could kind of be compared to HTML in the sense that it allows text layout and form entry as well as action buttons like "submit".

Then there's TSO/ISPF which is basically the IBM mainframe CLI.

What is funny is that, many of us when we look at AWS, all we see is garbled crap. They have a million screens and tons of options. The same is said for other services, but AWS is a nightmare. Add to that their command line tools which are borderline incomprehensible and well... you're screwed.

Now don't get me wrong, if I absolutely must use AWS, it wouldn't take more than watching a few videos and a code along. I'd probably end up using Python even though I don't care much for the language. I'd also use Lambda functions because frankly... I don't feel like rolling my own platform from scratch. Pretty much anything I'd ever need to write for a business application can be done with a simple web server to deliver static resources, lambda functions to handle my REST API, and someplace to store data which is probably either object storage, mongodb, and/or some SQL database.

Oddly, this is exactly what COBOL programmers are doing... and have done since 1969.

They use :

- TSO/ISPF as their command line instead of the AWS GUI or CLI tools.

- JCL to route requests to functions as a service

- CICS (in combination with a UI tech) to connect forms to JCL events as transactions... instead of using Amazon Lambda. Oh, it's also the "serves static pages" part. It's also kind of a service mesh.

- COBOL, Java or any other language as procedures which are run when events occur... like any serverless system.

It takes a few days to learn, but it's pretty simple. The hardest part is really learning the JCL and TSO/ISPF bit because it doesn't make sense to outsiders.

What's really funny is that IBM mainframes running this stuff are pretty much infinitely scalable. If you plug 10 mainframes in together, they practically do all the work for you, since their entire system is pretty much the same thing as an elastic Kubernetes cluster. You can plug in 500 mainframes and get so much more. The whole system is completely distributed.

But what you're saying, that FORTRAN is its own entire platform/ecosystem, is entirely true. Everything you would ever need for writing a FORTRAN program is kind of built in. But I will say, I would never even consider writing a business system using FORTRAN :)

Microsoft submits Linux kernel patches for a 'complete virtualization stack' with Linux and Hyper-V


Re: The way forward?

I'm not sure what you're referring to. While there are vast areas of the Linux kernel in desperate need of being thrashed and trashed, a side effect of its lack of design is that it's highly versatile (which, mind you, is what makes it so attractive for so many things).

Microsoft has managed to play quite nicely and by the rules with regards to making the majority of Windows friendly code self-contained within its own directories similar to other modules. It's really not much different than either the MD stack or the SCSI stack. In fact, the Hyper-V code is much easier to rip and remove than most other systems within the kernel as it's organized in a pretty central place.

Rather than spamming the kernel with massive amounts of Windows-specific integrations for things like DirectX, they have done some pretty cool things to abstract the interfaces, allowing generic RDP support for redirecting Wayland compositing... a pretty nice alternative to VNC or the X11 protocol. And from what I can tell, they're working with the open source community to finally give Wayland a strong solution for headless applications over the wire.

Microsoft may be all-out embracing and extending Linux, but now their hands are so deep in Linux's pocket that extinguish is no longer an option for them. And they even play nicely enough by the rules that GPL zealots tend to just grunt rather than rampage about them these days.

Add to that the fact that Microsoft releases massive amounts of genuinely useful technology as open source, and they're almost even likeable now.

This announcement is pretty interesting to me because it will likely result in a VMM on Linux which is easy to manage and widely consumed. Honestly, I adore KVM and use it for almost everything, but the highly generic nature of KVM, due to its QEMU roots, makes it infinitely configurable and infinitely difficult to configure.

Money talks as Chinese chip foundries lure TSMC staff with massive salaries to fix the Middle Kingdom's tech gap



For the most part, what's most interesting is the knowledge required not only to produce the current generation, but to move forward from it.

There is little value in hiring for blueprints. The worst possible thing that could happen to Huawei and China as a whole would be getting caught producing exactly the same technology verbatim.

There is value however in hiring as many people as possible that know how to innovate in semiconductor technology.

Huawei and others are running out of chips, but they're not as desperate as you'd think. They're more than smart enough to have contingency plans in place. They have time to catch up. It's far better to get it done right than simply to get it done.

The problem of course is that by China doing all of this, it will seriously impact the Taiwanese and American semiconductor market. When China is finally able to produce 7nm using Chinese developed technology, they can start undercutting costs for fabrication.

Where the US and TSMC will build a handful of fabs for each generation, China will focus on scale. And once China catches up, they'll target becoming a leader instead.

Trump focused entirely on winning the battle, but he has absolutely no plan in place for defending in the war. History shows that every time a trade war has used his tactics, it doesn't just backfire, it explodes. The issue now is whether China can do it before Trump leaves office in 2024. If they can accelerate development and start eating away at the semiconductor market in the next 4.25 years, Trump's legacy will be that he completely destroyed the entire semiconductor market for the US and US allies.

Apple gives Boot Camp the boot, banishes native Windows support from Arm-compatible Macs



I may have missed it, but there are a lot of people who depend on hackintosh or virtualization out there. There are companies with full farms of virtualized Macs for running XCode compiler farms. There are a surprising number of people using virtualized Macs as iMessage gateways.

By Apple making this move, they can make little tweaks like Apple-specific CPU instructions. They can also make their own TPM that would block any Xcode-compiled application from running on a non-Apple CPU.

How about large enterprises who depend heavily on virtualized Windows on Macs... for example IBM? They actually dumped Windows PCs in favor of Macs because they could remote desktop or virtualize corporate desktops and the users would all have a pretty easy "press the key during boot to wipe your PC and reinstall it". I guess this would still work... at least remote desktop VDI.

What happens to all the developers at Microsoft carrying around Macs? If you've ever been to Microsoft Build Conference, you'd think it was sponsored by Apple.


Re: Bochs

Bochs is a nice toy for retro purposes, but it lacks much of what you would need to make this a solution. On the other hand, you're on the right track: QEMU, which has a dynamic recompiler and an x86-on-ARM64 JIT, would be a solution... it won't be particularly fast though. To run Windows worth a damn today, GPU drivers are an absolute must... even if it's just an old Intel GPU, the Windows compositor really thrives on it.

Nine in ten biz applications harbor out-of-date, unsupported, insecure open-source code, study shows


Don't forget Cisco!

Cisco Prime, Cisco IOS-XE, Cisco IOS, Cisco ISE....

I can go on... but Cisco absolutely refuses to use package management, and as a result, many systems only release patches once or twice a year. When the patches are released, they don't upgrade nearly enough packages.

Consider Cisco ISE, which is the security portal for wireless login on many thousands of networks around the world. It's running on Apache versions so old that their end-of-support dates passed years ago.

Then there's openssl and openssh... they just don't even bother. It doesn't matter how many CVEs are released or what their level is... Cisco ignores them.

Then there's Java versions

And there's the other key issue which is that Cisco products don't do incremental upgrades. You either upgrade everything or nothing. So even with critical security patches, the vast majority of Cisco customers DO NOT upgrade their software because there is far too much risk that it will break more than it fixes.

Of course, even with systems like Cisco DNA which automates most things, upgrades are risky and sometimes outright dangerous since there's no means of recovering remotely when your infrastructure equipment goes down.

Cisco doesn't release information, but I know of at least several dozen government organizations running Cisco ISE releases from 2-5 years old with no security patches because you can't simply run rpm update or apt upgrade on Cisco ISE... which is really really stupid when it's running on Linux.

I think Cisco might be the most insecure enterprise company out there and the only thing keeping it from being more common knowledge is that the people who actually know about these things risk losing their meal tickets from making noise about it. And what's worse is that Cisco people almost NEVER know anything about Linux... or security unless it's protocol security.

Uncle Sam tells F-35B allies they'll have to fly the things a lot more if they want to help out around South China Sea


As a taxpayer...

I disapprove of these planes being flown. There are far too many chances of accidents or of them being shot down. With how much these planes cost, the best option is to keep them stored in hangars where they are only a limited risk.

If anyone in the F-35 governments is reading this, please invest instead in F-16 and F-22 jets, which are substantially less expensive, and only consider the use of F-35 jets when the F-16 and F-22 can’t possibly do the job.

Think of it as using the 1997 Toyota Camry to drive to and from work in urban rush hour rather than the Bentley, since scratching the Camry doesn’t matter but the Bentley will cost you and your insurance company a small fortune. The F-35 series planes should never be put in the air where they can be damaged... it’s simply fiscally irresponsible.

You're always a day Huawei: UK to decide whether to ban Chinese firm's kit from 5G networks tomorrow


Treasury Notes

If Huawei is allowed into western telco networks, the governments will have to cover the purchase of this equipment by issuing treasury notes. China increasingly stockpiles those notes rather than spending them, which gives it more control over them.

At some point, if China decides it needs to buy things from the world, it will use those notes as currency. If it needs to make a massive purchase (think $100 billion), whichever government it is purchasing from may decide that holding that much currency in treasury notes is too risky to manage. So China will sell treasury notes to multiple other countries and banks, who will negotiate favorable terms of exchange for themselves. This floods the market and devalues the notes.

That is a major security risk (not as in guns and bombs, but financial security) for any country holding U.S. treasury notes. Weaker economies can actually collapse because of this. Stronger economies can lose their purchasing power in China.

Leave your admin interface's TLS cert and private key in your router firmware in 2020? Just Netgear things


Re: "wanted to see some extra fields populated"

I’m not sure I agree. I keep a simple bash script on my PC, hacked together to read a few fields from a JSON file and generate a certificate that makes Chrome happy. It also informs me of all my current certificates that are about to expire. I think I got all the OpenSSL commands within the first page of links on a Google search.
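My script is bash, but the same idea fits in a few lines of Python. This is a minimal sketch rather than the real thing: the JSON field names (`common_name`, `key_file`, `cert_file`, `days`) are invented for illustration, and it just builds the standard `openssl req` one-liner with the `subjectAltName` extension Chrome insists on:

```python
import json

def build_openssl_cmd(config_path):
    """Read a small JSON config and build the 'openssl req' command line
    for a self-signed cert with a subjectAltName (required by Chrome).
    The JSON field names here are invented for this sketch."""
    with open(config_path) as f:
        cfg = json.load(f)
    return [
        "openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
        "-keyout", cfg["key_file"], "-out", cfg["cert_file"],
        "-days", str(cfg.get("days", 365)),
        "-subj", "/CN=" + cfg["common_name"],
        "-addext", "subjectAltName=DNS:" + cfg["common_name"],
    ]

# A config might look like:
# {"common_name": "lab.example.test", "key_file": "lab.key",
#  "cert_file": "lab.crt", "days": 825}
```

Pass the result to `subprocess.run` to actually generate the key and cert; the expiry check is a separate `openssl x509 -checkend` call per certificate.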

I think the problem is that certificates are difficult for pretty much anyone to understand, since there are no good X.509 primers out there these days. I still see a lot of enterprise certificates signed by the root CA for the domain. Who the heck actually even has a root private key? I actually wish Chrome would refuse certificates signed directly by root CAs. Make a root cert, sign a few subordinates and delete the root key altogether.

That said, Let’s Encrypt has destroyed the entire PKI since a trusted certificate doesn’t mean anything anymore. A little lock on the browser just means some script kiddy registered a domain.

Creative cloudy types still making it rain cash for Adobe


Re: F*** adobe

I generally agree... I actually stopped paying for creative cloud when Affinity Designer came out... but their Bézier curves (a fairly simple thing to get right) are somewhat of a pain in the ass.

Then there’s Affinity Photo, which still has major scaling problems that cause visual artifacts when zooming the workspace. It makes it almost unusable. My daughter is using Photoshop CS6 because it doesn’t need a subscription and it’s still quite a bit better than Affinity. Her reasoning is brushes. But she’s using other Asian software a lot more now.

In a touching tribute to its $800m-ish antitrust fine, Qualcomm tears wraps off Snapdragon 865 chip for 5G phones



I often work together with large enterprises helping them train their IT staff in wireless technologies. And the message I send regularly is that there is absolutely no value in upgrading their wireless to new standards rather than growing their existing infrastructure to support better service.

I have recently begun training telecoms on planning for 5G installation. And the message I generally send is "people won't really care about 5G" and I have many reasons to back this up.

Understand that as long as pico/nano/femto/microcells remain difficult to get through regulation in many countries, Wifi will continue to be a necessary evil within enterprises and businesses running operations where wireless is particularly difficult to deploy. We need Wifi mostly for things like barcode scanners and RFID scanners within warehouses. An example of this is a fishery I've worked with where gigantic, grounded metal cages full of fish are moved around refrigerated storage all day long. Another is a mine shaft where the entire environment is surrounded by iron ore. In these places, wifi is needed, but there's absolutely no reason to run anything newer than 802.11n except for availability. AC actually costs less than N in most cases today, but there's no practical reason to upgrade. 4x4 MIMO 802.11n is more than good enough in these environments.

5G offers very little to the general consumer. It is a great boon for IoT and for wireless backhaul networks, but for the consumer, 5G will not offer any practical improvements over LTE. 600MHz 5G is a bit of an exception though. 600MHz 5G isn't particularly fast... in most cases it's about the same as LTE. Its primary advantage is range. It will be great for farmers on their tractors. In the past, streaming Netflix or Spotify while plowing the fields has been unrealistic. 5G will likely resolve that.

For people within urban environments, they're being told that 5G will give them higher availability and higher bandwidth. What most people don't realize is that running an LTE phone against the new 5G towers will probably provide the exact same experience. 5G will bring far more towers within urban areas, and as such, LTE to those towers will work much better than it does to the 4G towers today. 4G is also more than capable of downloading at 10 times the bandwidth most users consume today. The core limitation has been the backhaul network. Where 4G typically had 2x10Gb/s fibers to each of 4 towers within an area, 5G will have 2x100Gb/s fibers (as well as a clock-sync fiber) to 9 towers within the same area. This will result in much better availability (indoors and out) as well as better bandwidth... and as a bonus, it will improve mobile phone battery life substantially, as 5G beamforming along with shorter distances can consume as much as 5 times less power on the phone compared to the current cell network.
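Back-of-the-envelope on those backhaul numbers (mine are rough planning figures, not official specs), the aggregate capacity per coverage area works out like this:

```python
def aggregate_backhaul_gbps(towers, fibers_per_tower, gbps_per_fiber):
    """Total backhaul capacity (Gb/s) serving one coverage area."""
    return towers * fibers_per_tower * gbps_per_fiber

lte_area = aggregate_backhaul_gbps(4, 2, 10)    # 4 towers, 2x10Gb/s each
nr_area  = aggregate_backhaul_gbps(9, 2, 100)   # 9 towers, 2x100Gb/s each
print(lte_area, nr_area, nr_area / lte_area)    # 80 1800 22.5
```

So roughly 22x more backhaul per area, which is where the "better availability and bandwidth without a new phone" claim comes from.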

5G has no killer app for the consumer. 3G had serious problems across the board since 3G technologies (UMTS, CDMA, etc...) were really just poor evolutions of the classical GSM radio design. LTE was "revolutionary" in its design and mobile data went from "nice toy for rich people" to "ready for consumption by the masses". 5G (which I've been testing for over a year) doesn't offer anything of practical value other than slightly shorter latency which is likely only to be realized by the most hardcore gamers.

I certainly have no intention of upgrading either my phone or my laptop to get better mobile and wireless standards. What I have now hasn't begun to reach the capacity of what they can support today. The newer radios (wifi6 and 5G) will make absolutely no difference in my life.

If you have anyone who listens to you, you should recommend that your IT department focus on providing wireless network security through a zero-trust model. That means you can effectively ignore wireless security and, as you mentioned, use VPNs or fancy technologies like Microsoft DirectAccess to provide secure, inspected, firewalled links for wireless users. They should focus on their cabling infrastructure as well as adding extra APs to offer location services for things like fire safety and emergency access. They shouldn't waste money buying new equipment either; used APs are 1/10th the price. In a zero-trust environment, you really don't need software updates, as the 802.11n and 802.11ac standards and equipment are quite stable today. They should simply increase their AP count, improve their cabling so the APs within a building are never cabled into one place (a closet can catch fire), and install redundant power to support emergency situations. Use purely plenum-rated cabling. Support pseudo-MAC assignment so people not carrying wireless devices can be located by signal disturbance during a fire.

Once this system is operational, it should live for the rest of the lifespan of your wifi dependence. I can safely believe that within 5-10 years, most phones from Apple, Samsung, etc... will ship without Wifi as its presence will be entirely redundant.

Also for 5G, inform people that they should wait for a phone that actually gives them something interesting. Spending money on 5G for personal communication devices is just wasteful and, worst of all, environmentally damaging. If the market manages to sell 5G as a "killer app", we stand to see over a billion mobile phones disposed of as people upgrade. Consider that even something as small as a telephone, when you make a pile of a billion of them, is a disaster for this planet.

5G will be great for IoT, though it's not so much 5G itself as the proliferation of NB-IoT that is very interesting. $15 or less will buy an eSIM-capable 5G modem module for things like weather sensors (of which there are already tens of millions out there), radar systems, security systems, etc... We should probably see tens of billions of NB-IoT devices out there within the next few years. A friend of mine has already begun integrating it into a project of hers, for which she has funding to deploy over 2 million sensors around Europe.

No... you're 100% correct. Wifi's death knell has begun to sound. It will be irrelevant within 5-10 years, and outside of warehouses and similarly radio-harsh environments, it is very likely it will be replaced by LTE, NB-IoT and 5G.

And no... 5G on a laptop is almost idiotic if you already have LTE. You should (with the right plan) be able to do 800Mbit/s or possibly more with LTE. Even when running Windows Update, you probably don't consume more than 40Mbit/s.

You're praying your biz won't be preyed upon? Have you heard of our lord and savior NVMe?


Why oh why

If you’re dumping SAS anything in favor of something else, then please get a distributed database with distributed index servers and drop this crap altogether.

Hadoop, Couch, Redis, Cassandra, multiple SQL servers, etc all support scale out with distributed indexing and searching often through map reduce methodologies. The network is already there and the performance gain is often substantially higher (orders of magnitude) than using old SAN block storage technologies.
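To make the map-reduce idea concrete, here's a toy sketch in plain Python of what those systems distribute across index servers (the shards and data are invented for illustration; in Hadoop or Cassandra the map phase runs in parallel on the nodes holding each shard):

```python
from collections import Counter
from functools import reduce

# Pretend each shard lives on a different index server.
shards = [
    ["alice pays bob", "bob pays carol"],
    ["carol pays alice", "alice pays alice"],
]

def map_phase(shard):
    """Each node counts terms in its own shard locally."""
    return Counter(word for line in shard for word in line.split())

def reduce_phase(a, b):
    """Merge partial counts; Counter addition sums per key."""
    return a + b

totals = reduce(reduce_phase, (map_phase(s) for s in shards))
print(totals["alice"])  # 4
```

The point is that no node ever ships raw data to a central SAN; only the small partial results travel over the network, which is where the orders-of-magnitude win comes from.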

Or, you can keep doing it the old way and spend millions on slow ass NVMe solutions

'Happy to throw Leo under the bus', Meg Whitman told HP after Autonomy buyout


How could this company ever be worth that much?

There was a time in history when HP was famous as a technical innovator who filed more than enough patents that they could use pretty much any technology they wanted and make deals with other companies to trade tech. They would engineer and build big and amazing things and if they panned out, they got rich, if they didn't, they'd sell them off.

Then the suits came in

HPe has become nothing more than a mergers and acquisitions company. They don't make any new technology. They "me too" a crap load of tech at times. But regarding innovation... check out HPe's labs/research website. Instead of actual innovation, it reads like a list of reasons why they shouldn't invest money in research. I mean really... they wrote one whole paragraph on why they won't waste money on quantum computing, and it's basically "We are going to prove P=NP and make a new way of saying it, so if we can solve one NP problem, it will solve all NP problems."

There have been a bunch of CEOs that have converted HP from being a world leader in the creation of all things great in technology to being a shit company which spends $8 billion on a document store and search engine that "might be big one day".

Cooksie is *bam-bam* iGlad all over: Folk are actually buying Apple's fondleslabs again


Why would you buy a new one anymore?

I have a stack of old iPads laying around. I have 2 iPad version 1 and about 10-12 more after that. My wife uses hers... the kids stopped using theirs when they got telephones big enough to render them useless as they also have PCs.

I did get my wife a new iPad for Christmas... we actually don't know why... but I suppose it had been 2 years since the last iPad was bought... so I got her that.

To be honest, it used to be that everyone needed their own iPad... but these days, I think mom and dad just need big phones and the kids need maybe an iPad mini or so. There's no need to constantly upgrade... they already have more features than anyone will ever use. Now, it's more like "Wow... look Apple is still making iPads... at least I can buy a new one if the old one breaks... if I actually need it for something"

I used to see iPads all over every coffee shop. These days, there's laptops and telephones... but there doesn't seem to be any iPads anymore.

NAND down we goooo: Flash supplier revenues plunged in first quarter


Re: Yay!

I thought the same and then thought... why bother?

I used to spend tons of money building big storage systems... even for the house... I have a server in the closet I just can't force myself to toss which has 16TB of storage I built in 2005. These days, 500GB is generally more than enough. 1TB for game PCs.

At the office, I used to buy massive NetApp arrays... now that I have moved to Docker and Kubernetes, I just run Ceph, GlusterFS, or Windows Storage Spaces Direct and I use consumer grade SSDs.

We are soooooooo far past our need for storage it's silly. Expanding a Ceph cluster by a terabyte of usable low-cost SSD requires 3TB of raw storage, which is under $300 now... and it gives us WAY better redundancy than an expensive array. And to be fair... since almost everything is in the cloud these days, you could probably run an entire bank on 2-4TB of storage for years. It's not like a database record takes much space. Back in 1993, we ran over 100 banks on about 1GB of online storage. I'm almost sure you can run 1 modern bank on 4000 times that much. :)
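The arithmetic behind that, assuming Ceph's default 3-way replication and roughly $100/TB for consumer SSD (my ballpark, not a quote):

```python
def raw_needed_tb(usable_tb, replicas=3):
    """Raw capacity needed for a given usable capacity
    under n-way replication (Ceph defaults to 3 replicas)."""
    return usable_tb * replicas

def cost_usd(raw_tb, usd_per_tb=100):
    """Rough consumer-SSD pricing assumption: ~$100/TB."""
    return raw_tb * usd_per_tb

raw = raw_needed_tb(1)        # 1TB usable -> 3TB raw
print(raw, cost_usd(raw))     # 3 300
```

Erasure-coded pools would cut the raw overhead further, but even straight replication is cheap enough that the expensive array loses.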

As for performance... once you stop running VMware and you switch to something... well... anything else, you just don't need that much performance. I guess video games would load faster, but ask yourself when the last time you actually thought "I need a faster hard drive"

Former unicorn MapR desperately seeking cash as threat of closure looms


Re: The software is quite good

Everyone always talks about Betamax as if it was infinitely better than VHS. As someone who thoroughly understands the physics, mechanics, electronics, etc... of both Betamax and VHS from the tape all the way through the circuitry up to the phosphors, I'll make it clear... yes Betamax was better... but the difference was negligible. The two formats were so close to being the same that it barely mattered... and when transmitting the signal over composite from the player to the TV which ... well to be honest was 1950s technology (late 1970s TV was still 1950s tech... just bigger)... it was impossible to tell.

S-Video and SCART (in Europe) made a slightly noticeable difference. Using actual component cabling could have mattered, but neither Betamax nor VHS could take advantage of that.

The end result was simple... when playing a movie recorded on Betamax on a high end 1970s or early 1980s TV next to the same movie recorded on a VHS tape, you had one big ugly player next to another and the only possible difference you could give the consumer was "Beta is more expensive because the quality is better" and of course... it wasn't... at least not enough to notice. Often you could sell the consumer on audio quality, but on 1970/1980s era speakers and hifi, you wouldn't notice until you were far past the average consumer threshold.

Betacam SP was actually substantially better, but by then it no longer mattered.

I used to have 400 Betamax decks and 600 VHS decks in my office... all commercial-grade duplicators with automatic tape changers. The Betamax decks existed for collecting dust. The VHS decks were constantly being serviced because they were running 24/7. I spent 10 years of my career in video technology development (I am a codec developer at heart, but I know analog too). In 10 years of working with studio/commercial-grade broadcast and duplication equipment, and knowing what I know about the technology, if I saw Betamax for $120 and VHS for $110, I'd still buy VHS.


Re: @CheesyTheClown ... Burned $300 million?

Thanks for commenting.

I honestly had no idea how MapR would sell in the first place. The problem is... it was a great product. But it was also expensive. And I don't really care how good your sales team is, the website is designed to scare away developers.

I just visited there and I'm pretty sure that I've been in multiple situations where I could have seen the technologies as interesting, but the website makes it look like it's too expensive for me to use in my projects. I can use tools that cost $10,000 or less without asking anyone. But they have to be purchasable without having to spend another $10,000 on meetings where people show Gartner magic quadrants.

I can't use any tools where I can't just pop to the web site and buy a copy on my AMEX in the web shop and expense it. When we scale, we'll send it to procurement and scale, but we're not going to waste a ton of money and hours or days on meetings and telephone conferences with sales people who dress in suits... hell I run away without looking back when I see sport jackets and jeans.

Marketing failed because MapR is not an end user program and developers can't make the purchasing decisions. The entire front end of the company is VERY VERY developer unfriendly. Somehow, someone thought that companies all start off big and fancy. My company is a top-400 and we start projects as grass-roots and once we prove it works, we sell the projects at internal expos and the management chooses whether to invest more in it or not. MapR looks expensive and scary and difficult to do business with.

This is why we do things like always grow everything ourselves instead of buying stuff that would do it better. Everyone is trying to sell to our bosses and not selling to the people who actually know what it is and what it does.

I wish you luck in the future.. now that I've looked a little more at you guys, I'll check the website occasionally when I go to start projects. If the company starts trying to sell to the people who will actually buy it (people like me) instead of to our bosses... maybe I'll buy something :)


Burned $300 million?

$200,000/year times 2 is $400,000, the inflated cost of employing one overpaid SV employee. Multiply that by 200 employees. That’s $80 million a year for 200 employees... to develop and market a product.
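Spelled out, the arithmetic above:

```python
fully_loaded_cost = 200_000 * 2        # salary, doubled for the inflated overhead
annual_burn = fully_loaded_cost * 200  # 200 employees
runway_years = 300_000_000 / annual_burn  # how long $300M lasts at that rate
print(annual_burn, runway_years)  # 80000000 3.75
```

At that burn rate, $300 million is not quite four years of runway, which is the point.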

Now... let’s assume that the company actually received $300 million in investments.

Was there even one person in the whole company actually doing their job? And was that job spending money with no actual consideration for return on investment?

Planes, fails and automobiles: Overseas callout saved by gentle thrust of server CD tray


Re: Ah the old push-out-the-cd-tray trick

Why not dump random data to the PC speaker?

'Evolution of the PC ecosystem'? Microsoft's 'modern' OS reminds us of the Windows RT days


Presented at build and only interesting to techies?

Let me get this straight... you're complaining that technologies presented at Build... Microsoft's annual developers conference... presented tools that are interesting to developers?

Ok... so... if you were to present tools that would be life changing and amazing... primarily to developers... which conference would you recommend presenting them at? And if you want the developers and techies who will use them to be present... and actually buy tickets to the event... are we still against using Build for this?

I almost couldn't read the rest of what you wrote after reading that... I was utterly stuck... totally lost... wondering... what in the name of hell is this guy talking about?

So... let's try some stuff here.

Windows isn't built the way you seem to think it is. This is why Microsoft makes documentation. You can read it instead of just headlines.

Windows these days is built on something you might understand as containers... but not really. It's more than that. You can think of it as enclaves... if you want.

UWP also doesn't seem to work the way you think it does. You're thinking in terms of how Linux works and how languages on Linux work. Windows has extremely tight integration between programming languages and the operating system. As such, a lot has happened in the course of compiler development which makes it so that things you would think are native code are actually .NET, and things you would think are .NET are native code. The architecture of the development tools has made what has classically been thought of as "linking" a LOT more dynamic.

There's also a LOT more RTTI happening in every language from Microsoft, which is making the natural design of what many generations ago was called COM pretty much transparent. All object models (especially COM) were horrible at one point because of things like IDLs, which were used to do what things like Swagger do these days. Describing and documenting the call interface between objects was sheer terror.

Windows has made it so that you can program in more or less anything and expose your APIs from pretty much anything to pretty much anything... kinda like COM did... but it's all pretty much automatic now. This means that "thunking" mechanisms can make things happen magically. So you can write something in native C++ and something in .NET in C#, make calls between them, and the OS can translate the calls... this requires a few special programming practices, and it's actually easier if you pretend you don't even know it's there.

There are A LOT of things going on in Windows that are kinda sorta like the things you seem to think it might do... but in many ways they're done far better.

If you want to see it look really awesome... start two sessions of Linux on WSL1. You'll find that they're not in the same enclave. They have some connections to each other... but they are actually separate. It's like running two different containers... but not really.

Now consider that Windows works a lot like that now too. Microsoft has progressively managed to get most of us to stop writing software that behaves as if everything absolutely must talk to everything else directly. As such, over time, they'll manage to finally make all processes run in entirely separate enclaves while still allowing communication between processes.

And BTW... Android and Chrome OS are sheer frigging terror.... if you want to do interesting things at least. Everything is so disconnected that 99% of the time... if you're trying to make two programs work with each other, you find yourself having to send everything through the cloud.


Re: That's what Plinston said

This is not argumentative. I'm a file system jockey and I have to admit that I'm a little bit in the dark here about the SIDL terminology.

I also wonder if you and I understand the file system in Windows differently from one another. It's been a long time since Microsoft originally added forked file support. Yeah, traditionally Windows really didn't support inodes and it was a wreck, but it's been a long time since that was set in stone.

The main reason Windows has required reboots to update is more related to the UI. Upgrading files is no real problem. But unlike Linux, where the GUI is entirely separated from the rest of the operating system (which is probably what I like least about Linux), the Windows GUI used to be the root from which all tasks were spawned. Because the GUI was the parent of all tasks, upgrading the kernel meant restarting the GUI under the new kernel.

With all the effort they've made to make the kernel less important, and with most of the OS running either as a VM or a container, they should be able to start a new kernel now and repatriate the system call hooks to the new kernel.

Weak AF array sales at NetApp leave analysts feeling cold


Re: "End of Storage" - silliest thing ever said...

I don't disagree. I still see the occasional UltraSPARC 5, AS/400 and Windows NT 4 machines in production. Legacy will always exist... but I think you're overestimating the need for low-latency on-premise storage.

As latency to the cloud decreases, bandwidth increases, and availability often rivals on-premise, location isn't the hot topic anymore.

We used low-latency storage technologies like Fibre Channel because we were oversubscribing everything. But consider that massive banks still run on systems like IBM Z, which seem really amazing but performance-wise are generally obscenely over-provisioned. A well-written system can handle millions of customer transactions per day on equipment no more powerful than a Raspberry Pi... and they did for decades... on horribly slow storage.

The question is... what do you really plan to run back home anymore? Most of the reasons you've needed extremely high-end storage systems in the past have moved to the cloud, where they logically belong. This means that most of what you're still running back home isn't really business systems anymore.

A major company will probably have something like an in-house SAP-style system and a bunch of other things like a file server which no one uses anymore. Everything else will be moved to the cloud with or against IT's "better judgement". Remember, you don't need the IT guy to sign up for Slack; the boss does that with his own credit card while sitting in a meeting.

The cloud doesn't replace storage... it replaces the systems using storage.

Now... let's assume you're working for a newspaper or a television station where you need local storage because 1000+ photos at 20-megapixel RAW or 25 hours of video at 12Gb/s need to be stored somewhere. These days, you pay a lot of money for your storage, but you also have a choice of easily 10 legitimate vendors and maybe another 200 "won't make it through another funding round" vendors. Right now, there are lots of choices and all those vendors still have lots of sales keeping them in the black.

Now, as more and more services are migrated to the cloud, the storage systems at most companies with more "plain vanilla" needs will free up capacity on their local storage. When they refresh their servers again, they'll choose a hyperconverged solution for the next generation.

This will mean that the larger storage companies will dissolve or converge. If they dissolve, they're gone. If they converge, they'll reduce redundant products and deprecate what you already have.

As this happens, the companies with those BIG low-latency storage needs will no longer be buying a commodity product but instead a specialty product. Prices will increase, and the affected customers will be substantially more conservative about their refresh cycles in the future.

Storage is ending... sure... there will always be a need for it in special cases, but I think it will be a LONG time before the stock market goes storage-crazy again. And I don't think NetApp, a storage-only company, will survive it. EMC is part of Dell and 3PAR is part of HPE etc... companies which sell storage to support their core business. But NetApp sells storage and only storage, so they and Pure will be hurt hardest and earliest.


Re: End of storage coming

Honestly, I think the NKS platform looks ok, but I expect that it's only a matter of time before all three clouds have their own legitimate competitors for it.

Don't get me wrong, I'm not saying it to be a jerk... as I said, it looks ok. But it's an obvious progression for K8S; I've been building the same thing for internal use on top of Ceph at work. I'm pretty sure everyone trying to run a fault-tolerant K8S cloud is doing the same. But to be honest, if you're doing K8S, you should be using document/object storage and not volume storage.

If you're running Mongo or Couch in containers, I suppose volume or file storage would be a good thing. But when you're doing "web scale applications" you really should avoid file and volume storage as much as possible.

I just don't expect NetApp to be able to compete in this market when Microsoft and Amazon decide to build a competing product and pretty much just toss it in with their existing K8S solutions.


Re: End of storage coming

I don't disagree on many points. I've seen some pretty botched cloud gambits. And those are almost always at the companies that go to the cloud by copying up their VMs as quickly as possible. It's like "If you actually need VMware in the cloud... you really did it wrong."

The beauty of the change is that systems which genuinely belong in the cloud... like e-mail and collaboration... are going there as SaaS, and it's working GREAT. Security for email and collaboration can't ever work without economies of scale and 24/7 attention from companies who actually know what they're doing... not like Cisco AMP or ESA crap.

A lot of other systems are going SaaS as well... for example Salesforce, SAP, etc... these systems should almost be required by law to move to the cloud, if for no other reason than that it guarantees paper trails (figuratively speaking) of all business transactions that can be audited and subpoenaed. Though that's true for email and collab too.

Company-specific systems can come back home and then, over time, get ported to newer PaaS-type systems which can be effectively cloud-hosted.

I actually live in terror of the term "Full Stack Developer" since these days it often means "We don't actually want to pay for a DBA, we'd rather just overpay Amazon"


End of storage coming

Ok, when NetApp rose, it was because companies overconsolidated and overwasted. Not only that, but Microsoft, VMware and OpenStack lacked built-in storage solutions. Most storage sales were measured on the scale of a few terabytes at most. Consider that a 2TB FAS 2500 series cost a company $10,000 or more using spinning disks.

Most companies ran their own data centers and consolidated all their services into as few servers as possible. They went from running 5-10 separate servers (AD, Exchange, SQL, their business app...) costing $2000 each to 3-10 VMware servers costing $5000 each plus a SAN and an additional $2000+ in software licenses each... to run the same things.

Performance dropped considerably when they made that shift. Sure, it was supposedly easier to manage, but management began to realize that systems which used to take one average-skilled employee and one consultant to manage now took a team of full-time employees and a lot more consultants to run.

Performance was almost always a problem because of storage. NetApp made a fortune because they could deliver a SAN which was relatively easy to manage that could handle most small businesses data.

What got really weird is when the bosses wondered how they went from $100,000 IT costs per year (people too) to $500,000 or more and no matter how much they spent on tools to make it more reliable and more robust, they always found themselves with the same outages and increasing costs.

Enter the cloud.

Companies could move their identity, mail, sharepoint, collaboration and office tools online using a relatively easy migration tool which took a few days to weeks.

SQL and their company app could be uploaded as VMs initially with little effort and with some effort, they could move their SQL to Azure’s SQL.

Now, they can downsize to one IT person and drop their costs to about $100K a year again.

The catch is, since we no longer need a dozen IT guys and consultants, no one left knows what either NetApp or Cisco is, and they're just using the simple pointy-clicky UI to do everything. Their internal data center is being spun down and finding its way to eBay instead.

Then there’s whoever is left. They find that by replacing their servers with new servers containing disks, they can use VSAN, Storage Spaces Direct or Swift and not have to spend money on external storage which actually has a lower aggregate performance and substantially higher cost. Not only that, but they’re integrated into the systems they run on.

NetApp has no meaning for cloud vendors because MS, Google, Amazon, Facebook, Oracle can all make their own. In some cases, they even make their own hardware.

NetApp will still have a market for a while, but they will become less interesting as more services are moved to the cloud. After all, most companies depending on NetApp today probably have just enough performance to continue operations and as more systems go to the cloud, they’ll need less performance, not more.

There will be organizations like military and banks who will still need storage. And of course there are surveillance systems that require keeping video for 2-10 years depending on country. But I believe increasingly they will be able to move to more cost efficient solutions.

NetApp... I loved you, but like many others, I have now spun down 5 major NetApp installations and moved either to cloud or to OpenStack with Ceph. My company is currently spinning down another 12 major (service provider scale) NetApp solutions because we just don’t need it anymore.

I wish you luck and hope you convince HPe to buy you out like they do to everyone else in your position.

Cray's found a super scooper, $1.3bn's gonna buy you. HPE's the one


So long Cray.. we’ll miss you

So... what about the obvious implications that this leaves the US with only one supercomputer vendor? ugh

I mean really, if Cray can't manage to be a player with the US dumping exascale contracts on them... the US deserves to be screwed. The US government should have been dumping cash on SGI and Cray for years. Instead, they forced them into bidding wars against each other, which allowed a mergers-and-acquisitions chip shop with no supercomputing pedigree to suck them both up, leaving the US without even one legitimate HPC vendor in 3-5 years.

Do a search on SGI and find out what HPE has done since buying them... nothing. They ran what was left of them into the ground.

What about Cray? Cray does a lot of cool things. Storage, interconnects, cooling, etc... at one time HP did this too. And if HPE didn't suck at HPC, they wouldn't need to buy Cray. They could actually compete head-on. But, no... they have no idea what they're doing.

Want to see what's left of HPE... google HPE research and show me even one project which seems as interesting as Mamma June on the cover of Hustler.


What about SGI?

They bought SGI also... they finished up those contracts and what came next? Oh... SGI who?

Nvidia keeping mum on outlook for year as data centre slows, channel chokes on crypto crap


Alienating their core?

So, gaming cards are twice as expensive as they should be.

V100 is WAY more expensive than it should be... and it is cheaper to spend more developer hours optimizing code for consumer GPUs than to use V100, which is a minimum of four times as expensive as it should be... at least to justify the CapEx for the cards. If the OpEx for consumer GPUs is way lower than the V100 cost, why would I buy 10 V100s rather than 100 GeForces?
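The CapEx argument in numbers... the prices below are rough illustrative assumptions from memory, not quotes:

```python
# Same budget, two ways to spend it. Prices are assumed, not quotes.
v100_price = 8000        # assumed datacentre card price, USD
geforce_price = 800      # assumed consumer card price, USD

budget_v100 = 10 * v100_price         # 10 V100s
budget_geforce = 100 * geforce_price  # 100 GeForces
print(budget_v100, budget_geforce)    # same $80,000 either way

# Even if each consumer card only delivers a third of a V100's
# effective throughput on your workload, 100 of them still win big:
relative_throughput = (100 * (1 / 3)) / (10 * 1.0)
print(f"{relative_throughput:.1f}x the aggregate throughput")  # 3.3x
```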

Then there's Grid... I don't even know where to start on that. If you use Grid, there is no possible way to justify the cost. It is so insanely expensive that absolutely every ROI or TCO excuse you have to run virtualized evaporates instantly with Grid. Grid actually increases TCO by a LOT, and you can't even force nVidia to sell it to you. I mean really, you knock on their door begging to buy Grid for 1000 nodes and they don't answer emails, they refuse to demo... I mean... sitting there with cash in hand, waving it under their nose and looking for a dotted line to sign on, and they blow you off.

They are too busy to bother with... well customers.

You know... they deliver to important customers like Microsoft, Amazon and Google. They don’t need the rest of us.

Good heavens, is it time to patch Cisco kit again? Prime Infrastructure root privileges hole plugged


Oh for the love of pizza

Ok... if you're a network engineer who doesn't suck, you'd secure your control and management planes. If you install PI properly, it should be behind a firewall. If you install Huawei switches, the management plane should be blocked.

This is getting stupid.

Now, PI is based on a LOT of insecure tech. It’s a stinking security nightmare. You can’t run PI or DNA controllers without a massive amount of security in-between. This is because Cisco doesn’t actually design for security.

If you want a fiesta of security hell, just install Cisco ISE, which might be the least secure product ever made. Their SAML SP looks like it was written by drunken hackers. Their web login portal is practically an invitation. Let's not even talk about their insanely out-of-date Apache Tomcat.

Want to really have a blast hacking the crap out of Prime? Connect via wireless and manipulate radio management frames for RRM. You can take the whole network without even logging in. It’s almost like a highway to secure areas.

When you contact Cisco to report zero-day hacks, they actually want you to pay for them to listen to you.

How about Cisco CDP on IOS XE having rootable vulnerabilities caused by malformed packets? A well-crafted malicious CDP packet can force a kernel panic and reboot, and if you move quickly enough, you'll be on the native VLAN while it's reading and processing the startup config. I mean come on... it's 2019 and they still have packet reassembly vulnerabilities because they don't know how to use sk_buff properly?

They practically ignore all complaints about it too.

Time to reformat the old wallet and embiggen your smartmobe: The 1TB microSD is here


Am I the only one?

I was driving yesterday and as always, instead of paying attention to the road, I was going all sci-fi and drifting off to a weird fantasy. I thought... imagine if I blinked and found myself driving my BMW i3 in the year 1919... kinda like "Back to the Future" but without Mr. Fusion.

My car had been recently cleaned, so all I had with me was my backpack. And I freaked, because I had my play laptop, a Microsoft Surface Go, and it didn't have any development tools on it... not even VS Code. And I was like "I have a JRPG video game, some movies, and the only programming languages I have are PowerShell, whatever is in Office, VBScript, the web browser... ok... I can code... but I don't have Google, Wikipedia, or StackOverflow."

I could make do, I told myself. And then I thought, on my phone I have about 150 videos on multivariable calculus, chemistry and encryption. Woot!

Then I realized how screwed I was, because I didn't have the parts I needed to build a USB-to... well, anything interface... all I had for peripherals was a USB-C to USB and HDMI dongle. I could design a USB-to-serial UART. In fact, I also have an FPGA course on my phone, and I could write a simple VHDL-to-schematic compiler in PowerShell if I had to. But of course, I would have to make my own semiconductors, and I'm not sure I could produce a stable silicon substrate capable of 12MHz for USB using 1919-era laboratories.

Then I realized I had a really awesome toy with me... a 400GB MicroSD in the laptop. I don't think I could even explain to Ramanujan what 400GB is, and that's a guy who was pretty hardcore into infinite series. Could you imagine explaining to people 100 years ago that you had a chip visually the size of a fingernail which had the storage and addressing circuitry for about 4 trillion memory cells?

So, today... without even thinking of it, I found myself loading VSCode, .NET and Julia onto my laptop. Yesterday afternoon, I found myself packing a USB-RS-232 dongle too. I also realized that I had 3D Studio Max and OpenSCAD installed.

And oddly, I believe I have an Arduino and USB cable in my glove box. Though, I don’t have the software, but I think I could write an Atmel assembler from memory.

Today if I got sucked back to 1919, I could use my laptop to design a simple field-emitting transistor which I'm sure would be reliable at 50kHz, a simple cathode ray tube, a simple CPU, a reliable and reproducible carbon-film resistor, a half-assed capacitor (I don't know the chemistry on those, but I could fake it), and probably a reasonable two-sided circuit board etching and plating system... and I could probably do all this with my laptop and 1919-era tools and tech. I would have to do it at Kodak in New York.

Oddly, I could probably do most of this with just the information I have on my phone, but it would probably take me a while just to make a stable 5V 2A power source to keep the phone running for any period of time.

To be honest, I think I’d find the closest thing to epoxy available at the time. I would use gold leaf to make traces... then I’d use a simple acid based battery. I wouldn’t trust 1919 electrical mains.

Anyway... anyone else here ever get geeky like this? Wouldn't you love to show off a 1TB MicroSD card to people back then? Hell, just try to explain the concept of what it would take to fit roughly 8 trillion non-volatile memory cells into something that size :)
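Since we're geeking out anyway, the cell-count arithmetic is simple enough to do on the spot. This assumes one cell per bit; modern TLC flash stores 3 bits per cell, so the true cell count is actually lower:

```python
# Capacity to bit count, using decimal units the way card makers do.
def bits_in(capacity_gb):
    return capacity_gb * 1_000_000_000 * 8

print(f"{bits_in(400):.1e} bits")   # 3.2e+12 -- the 400GB card
print(f"{bits_in(1000):.1e} bits")  # 8.0e+12 -- the 1TB card
```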

Mellanox investor proposes class action to kill Nvidia's $6.9bn mega buy


Future potential?

ARM processors are beginning to integrate 100Gb/s Ethernet to support RDMA over converged Ethernet. See Huawei’s HPC solutions for reference.

Intel has the capacity to do the same with their own chipsets used in servers and supercomputers.

NVidia, if they choose to, can do the same on their own. They clearly have a solid grasp of high-speed serial communications.

Infiniband is useful in HPC environments because it's mostly plug and play. But it comes at a premium cost. The HPC market is investigating alternatives to Infiniband because, as with technologies like ATM/SDH/SONET, much less expensive technologies... namely Ethernet... have become good enough to replace them.

I just saw a 1000-port QDR Infiniband multiplexer sitting unused in a supercomputing center this morning. It will be replaced with 100GbE, not more Infiniband.

They should sell now while they are still valuable.

Complex automation won't make fleshbags obsolete, not when the end result is this dumb


It’s not about becoming obsolete.

If you consider that the heart of the issue is unsustainable capitalism, it becomes clear. It has little to do with automation; it's about centralization and enhanced logistics.

We simply overproduce.

Let’s use a simple example.

Ground beef has a limited shelf life. It can survive quite a long time when frozen, but the meat will degrade and no longer be edible after a short time when thawed.

We as shoppers, however, are turned away from meat that is frozen. It looks unattractive. We should know that almost immediately after slaughter the meat goes into frozen storage, and even at a butcher we're drawn to meat hanging on hooks in the cold room... yet when the meat is on a shelf, we buy the fresh, red, lovely pack, transport it thawed to our houses, refrigerate it, and hope we'll use it before the "best before" date passes.

Grocery stores also know that shoppers almost never buy the last meat products on the shelf. They can charge more for thawed meat than frozen. The result is that they ensure there is always enough thawed meat to attract shoppers and charge them more. They also waste packaging: to make the meat last just a little longer, they use sealed packaging that keeps it pretty for a while, and the packaging now even has fancy little freshness-measuring devices... which are not recycled. In order to produce (and overproduce) enough ground beef to have enough left over to waste approximately 30% (a real number here in Norway), we are left with massive amounts of other meat that must also be sold and suffers the same problems.

When you purchase meat online for home delivery, it can be kept frozen during the entire process... up to, but not necessarily including, the "last mile" delivery. We don't need to produce extra to make the meat look more attractive to consumers. We can expect the consumer to receive fresh, lovely red ground beef with no need for freshness sensors, vacuum-sealed packaging, etc...

With more advanced, larger-scale marketing mechanisms, if people are buying too much ground beef, algorithms can raise prices of cheaper meats and lower prices of more expensive cuts to convince shoppers to eat steak instead of burgers tonight. We can sell 400 grams or 550 grams or however much, because meat will be packaged to order. We can cut deals with pet food and pig slop companies to simply give them byproducts in exchange for bartered products... "if we give you pig food worth $1.5 million, you give us bacon worth $1.5 million"... which would probably count towards tax credits for being green and also leave the additional money in a form that can be written off.
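The price-nudging idea is trivial to sketch. This is a toy with entirely made-up numbers and rules, just to show the mechanism:

```python
# Toy demand-steering: raise price when stock is short, lower it when
# overstocked. Sensitivity and stock figures are invented.
def adjust_price(base_price, stock, target_stock, sensitivity=0.1):
    """Nudge price toward whatever clears the shelf."""
    imbalance = (target_stock - stock) / target_stock
    return round(base_price * (1 + sensitivity * imbalance), 2)

# Ground beef is selling out; steak is piling up.
print(adjust_price(base_price=5.00, stock=20, target_stock=100))   # 5.4
print(adjust_price(base_price=15.00, stock=300, target_stock=100)) # 12.0
```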

This works great because people buying online will buy based on photos and text. Marketing is easier. The product always looks perfect prior to purchase.

By needing to produce 30% less, we need 30% fewer cows. Less movement of livestock or frozen sides. We need fewer butchers. We can use more machines. We'll use less packaging. We won't need freshness sensors. We can package in biodegradable paper or reusable deposit-oriented containers. We can eliminate printing of fancy labels. We will reduce shipping of product by 50% by using more efficient packaging and shipping 30% less product to begin with. We can reduce consumer fuel consumption, car repairs and tire degradation associated with shopping.

By enhancing logistics and centralizing as much as possible, we will eliminate massive numbers of jobs. But initially the result will be people spending more time unemployed and, believe it or not... more time humping, reproducing and creating more people who have fewer jobs available to them.

As such, we need to start sharing jobs. People will work 50% of what they do today. This means they'll have much more time to manage their household economies. They'll eat out less and spend more time cooking. This will reduce dependence on restaurants. They will also have less disposable income, as they'll be forced to spend more time entertaining themselves. They will think more about their meals and waste less food producing them, because they'll know that when they buy chicken breast on sale, they can use half today and half two days from now. It won't be like "I planned to use the other half, but ate out because I got stuck in a meeting."

People will order groceries for delivery, which means the grocery stores that used to be "anchor stores" will become less important, and people will stop with the "Let's grab a taco and some ice cream next door to the grocery store while we're out already". As such, those smaller stores which were never anchors themselves will become less interesting.

This was a simple example, and it barely scratched the surface. It has so little to do with automation. It’s capitalism and we just have too many meat sacks to keep it maintained.

Tesla touts totally safe, not at all worrying self-driving cars – this time using custom chips


Use of investor's capital?

I've worked in a few environments where we did our own HDL development. We worked almost entirely in the FPGA world because we did too many "special purpose" algorithms which would often require field updates... an area not well suited to ASICs.

But, I believe what Tesla is doing here is a mistake.

Large-scale ASIC development is generally reserved for a special category of companies for a reason. Yes, their new tensor processor is almost certainly a bunch of very small tensor cores, each relatively easy to get right, and the interconnect is probably a really simple high-speed serial ring bus... so it's probably not much harder than just "daisy chaining" a bunch of cores. But even with a superstar chip designer on staff, there are tremendous costs involved in getting a chip like this right.

Simulation is a problem.

In FPGA, we often just simulate using squiggly lines in a simulator. Then we can synthesize and upload it to a chip. The trial and error cycle is measured in hours and hundreds of dollars.

In ASIC, all the work is often done in FPGA first, but then to route, mask and fab a new chip... especially at this scale... there is a HUGE amount involved. It requires multiple iterations, and there are always going to be issues with power distribution, grounding, routing... and most importantly, heat. Heat is a nightmare in this circumstance. Intel, NVidia, Apple, ARM, etc... probably each spend 25-50% of their R&D budgets simply on putting transistors in just the right places to distribute heat appropriately. It's not really possible to properly simulate the process either... a super-star chip designer probably knows most of the tricks of the trade to make it happen, but there's more to it than just intuition.

Automotive processors must operate under extreme environmental conditions... especially those used in trucks traversing mountains and deserts.

Even if Tesla actually makes this happen and builds their own processors instead of paying NVidia, AMD or someone similar to do it for them, I see this as being a pretty bad idea overall.

Of course, I'd imagine that NVidia is raking Tesla over the coals and making it very difficult for Tesla to reach self-driving in a Model 3 class car, but there has to be a better solution than running an ASIC design company within their own organization. Investing in another company in exchange for favorable prices would have made more sense, I think. Then the development costs could have been spread across multiple organizations.


Re: 144 trillion operations per second

I'd love to see something that would back your statement up.

To be honest, I'm just moving past basic theoretical understanding of neural networks and moving into application. I've been very interested in reducing transform complexity and therefore reducing the number of operations per second for a given "AI" operation. Think of me as the guy who would spend 2 months hand coding and optimizing assembler back in the 90's to draw a few pixels faster. (I did that too)

From my current understanding, I don't entirely agree with the blanket statement that it wouldn't need that much. I believe at the moment that there are other bottlenecks to solve first, but at least in my experience, processing convolutional networks in real time from multiple high-resolution sources at multiple frequency ranges could probably use all 144 trillion operations and then some.
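For what it's worth, here's my own back-of-envelope on why 144 trillion ops/sec isn't obviously overkill. Every figure below is an assumption for illustration... not Tesla's actual workload:

```python
# Rough estimate of real-time multi-camera CNN inference load.
cameras = 8
fps = 36
h, w = 1280, 960            # assumed per-camera resolution
macs_per_pixel = 50_000     # assumed: ResNet-class backbone, scaled up
concurrent_networks = 4     # assumed: detection, depth, lanes, planning

ops_per_frame = h * w * macs_per_pixel * 2   # 1 MAC = 2 ops
total_ops = cameras * fps * concurrent_networks * ops_per_frame
print(f"{total_ops / 1e12:.0f} trillion ops/s")  # ~142
```

With assumptions in that ballpark, the budget is already spoken for before you add any headroom.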

Do you have something that would back up your statement? I'd love to see it for a better understanding of the topic.

Better late than never: Cisco's software-defined networking platform ACI finally lands on AWS


Re: If you need ACI in AWS or Azure, you're just doing it wrong

Shitting on the competition?

What competition? NXOS vs ACI?

ACI does try to solve software problems using hardware solutions. This can’t be argued. In fact, it could be its greatest feature. In a world like VMware where adding networking through VIBs can be a disaster (even NSX blows up sometimes with VUM... which no one sets up properly anyway), moving as much networking as possible out of the software is probably a good thing.

Using a proper software-defined solution such as Docker/K8S, OpenFlow, Hyper-V extensible switch, or even NSX (if you just can't escape VMware) with a solid layer-3 solution like NXOS... or any other BGP-capable layer-3 switch... is generally a much better design than using a solution like ACI which separates networking from the software.

It’s 2019, we don’t deploy VMs using OVFs and next-next-next-finish things anymore. We create description files like YAML or AWS/Azure specific formats and automate the deployment method and define the network communication of the system as part of a single description.
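To make the "single description" point concrete: the application and its network communication can live in one declarative document handed to the orchestrator. Sketched here as a Python dict dumped to JSON (in practice it'd be K8S YAML or an ARM/CloudFormation template); all the names are made up:

```python
# One declarative description covering both the app and its network
# policy. Service names, image and ports are hypothetical.
import json

deployment = {
    "app": "orders-service",
    "replicas": 3,
    "image": "registry.example.local/orders:1.4.2",
    "network": {
        "ingress": [{"from": "frontend", "port": 8080}],
        "egress": [{"to": "postgres", "port": 5432}],
    },
}

# The orchestrator consumes this; no next-next-next-finish involved.
print(json.dumps(deployment, indent=2))
```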

ACI didn’t work for this. So Cisco made Contiv and by the time the market started looking at ACI+Contiv as a solution, Cisco had basically abandoned the project... which left us all with Calico or OpenFlow for example... which are not ACI friendly.

Of course, NSX doesn’t control ACI since they are different paradigms.

Hyper-V extensible switch doesn't do ACI, so Cisco released an ACI integration they showed off at Cisco Live! a few years back and then promptly abandoned.

NXOS works well with all these systems and most of these systems document clearly how they recommend they are configured. Microsoft even publishes Cisco switch configurations as part of their SDN Express git.

So... which competition are you referring to?


Re: If you need ACI in AWS or Azure, you're just doing it wrong

Servers + fabric + VMware licenses + Hyperflex storage licenses + Windows Server Enterprise licenses + backup licenses (Veeam?) + firewall + load balancer + server engineering hours + network engineering hours + backup engineering hours + Microsoft hours...

You need two stacks of (three servers + two leaf and two spine + 2 ASR1000 or 2 border leafs + 2 firewall nodes, 2 load balancers) and whatever else I’m forgetting.

If you can get a reliable Hyperflex environment up with VMware and Microsoft licensing and all the hours involved for less than $1.6 million, you probably have no clue what you're doing... and I specifically said retail. And architecting, procuring, implementing and testing etc... a redundant Hyperflex environment requires several hundred hours from what I hope are skilled engineers.

I’ve done the cost analysis multiple times on this. We came in under $1.2 million a few times, but that was by leaving out things like connecting the servers to the UPS management system and cutting corners by using hacked fabric solutions like skipping the border leafs or trying to do something stupid like trading in core switches and trying to make the ACI fabric double as a core switch replacement. Or leaving out location independence etc...
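For anyone curious how that ~$1.6 million adds up, here's the shape of the spreadsheet. Every number below is an illustrative assumption, not a quote from Cisco or anyone else:

```python
# Hypothetical line items for a redundant two-stack Hyperflex build.
line_items = {
    "6x HX-series servers (2 stacks of 3)": 360_000,
    "4 leaf + 4 spine switches": 280_000,
    "2 border leafs / ASR1000s": 120_000,
    "2 firewall nodes + 2 load balancers": 180_000,
    "VMware + Hyperflex storage licenses": 250_000,
    "Windows Server + backup (Veeam?) licenses": 150_000,
    "Engineering hours (server/network/backup)": 260_000,
}

total = sum(line_items.values())
print(f"${total:,}")  # $1,600,000
```

Shave a line item or two (skip the border leafs, reuse the core) and you land near the $1.2 million corner-cutting figure instead.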



Biting the hand that feeds IT © 1998–2022