* Posts by CheesyTheClown

770 publicly visible posts • joined 3 Jul 2009


Uncle Sam is this keen to keep US CHIPS funds out of China


Wow, designed to backfire

The Chinese have managed to produce 7nm semiconductors without EUV. This is a task Intel spent billions on and failed. They will continue to innovate and advance their semiconductor production. They will continue to shrink their processes without outside technology.

Of course, their advancements mean that they will produce chips for a LOT less money than other companies.

By adding clauses blocking collaboration between China and the U.S., the U.S. will be unable to compete.

Oracle's revised Java licensing terms 2-5x more expensive for most orgs


Re: with 49,500 employees, all of whom are applicable

Help me out here. The applications I develop are, at a personal-project level, CAD software, operating systems and compilers. On a professional level, I code storage systems and physics simulations for things like bio-computing and weather prediction.

I do actually have a storage system in production which provides object storage to CERN projects and is based on Java. Its system requirements are idiotic, and if we could rewrite it in any language actually suitable for such tasks, it would save us tens of millions of dollars.

For business and accounting type systems, I code everything in C# because the language is insanely well maintained at this time.

What is a suitable application for Java? It always struck me as the square peg in the round hole. I used to develop a Java Virtual Machine and a clean room implementation of the Java libraries, and I suppose it was a fun project, and I needed to do it because I needed a platform independent JVM for embedding in a web browser on all kinds of platforms. But when DOM and HTML canvas became a thing, I didn’t really see the point anymore.

So what kind of software would you develop using Java and is that because of habit (shamefully, I still use C and C++ for some things because they’re comfortable even if they’re terrible), or does Java offer something which makes it well suited for some task?

Jury orders Google to pay $340M patent-infringement damages over Chromecast


Re: How does one get such patents in 2010

I read the patents, and the first Google search I did voided all three patents irrefutably: "DLNA".

HDMI CEC covered it too.

IR blaster as well

These are all modern enough that there are probably active patents.

So let's try RealPlayer for expired patents.

H.263 patents also covered all 3

Logitech has to have expired patents for the control aspect.

Google's lawyers were totally incompetent.

Make chips, not trade wars, says Semiconductor Industry Association


Forfeiting leadership

All the US has achieved by doing this is to motivate China to become an alternative source/supplier for all these technologies. The result will be that they compete with and undercut all western companies, forcing them to struggle. China will continue to move forward with extremely high momentum while western companies (and Taiwan) struggle to recoup their R&D investments. Eventually China will end up as the preferred supplier for all these technologies.

The wins for the west are short term and short-sighted. Even if the restrictions are lifted today, China’s forward momentum will only slow, not stop. They will cut their dependence on western suppliers.

The only good news is, by the time China is far in the lead, the people responsible for starting this will be too old (or even dead) to care… as if they’re not pretty much there already.

Bad times are just starting for India's IT outsourcers, says JP Morgan


Re: Nobody mention AI

AI as a service?

1) Open Bing chat.

2) Ask “write a bash script to download and install Alpaca on my machine”

3) Open the Alpaca URL


Re: I can't stand...

The issue is the number of people needed to be productive.

No code and low code is a big driver for business systems today. It is getting to the point where a good low code platform is enough to handle most IT development tasks.

Add ML to the mix.

Since I started using copilot and Bing Chat to assist my work, my productivity has increased drastically. Not two or three fold. Substantially more than that. I am now completing projects on my own in less time than it used to take to spec the projects out. What’s worse is that the quality has improved greatly in comparison. I expect that these very early generation LLMs are going to improve considerably in very rapid succession. And once I finish my current project, I hope to switch to running an open source LLM which I expect to perform substantially faster and increase my productivity a great deal more.

I can honestly say that I could now complete 12-20 large scale IT projects in a year with these tools compared to the past where I used 3 months just to hire an outsourcing team to do one.

I think we’re about to see a considerable collapse of labor requirements, thanks to a single LLM with one or two skilled developers greatly outpacing managed projects with 10-30 people and all the paperwork and bureaucracy.

I am tempted to design and build a large database project based on a previous project using one other developer and an LLM and establish real numbers for before and after for LLMs.

China the largest buyer of chipmaking machines as sales hit an all-time high


“Advanced technologies” aren’t always needed

This is a misunderstanding which seems to confuse a lot of people.

You don’t actually need more advanced nodes for nearly anything which the US is attempting to curtail.

Advanced nodes increase density, which is a non-issue unless portability is the requirement.

They also decrease power consumption, which is likewise a non-issue if power is available. Thanks to China’s solar, battery and wind initiatives, it's a total non-issue.

They also decrease heat radiation. No problem if you can remove the heat. There are of course many profitable ways to consume the heat from data centers. And China does have ways to do that.

They can also increase clock rates, but lower clock rates can be easily compensated for by simply adding more chips.

There is the issue of high frequency communication, which is a limiting factor due to things like quantum tunneling. I won’t make this a quantum physics or control theory lesson, but the only way to deal with this is to have more parallel channels of serial communication. A better solution is to write better code which doesn’t need so much bandwidth. Use binary data rather than JSON, for example.
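A quick illustration of the binary-versus-JSON point, with a made-up telemetry record; the field names and layout here are invented for the example:

```python
import json
import struct

# A made-up telemetry record: timestamp, sensor id, reading.
record = {"ts": 1700000000.0, "sensor": 42, "value": 21.5}

# JSON: every field name and every digit costs wire bytes.
json_payload = json.dumps(record).encode("utf-8")

# Binary: fixed little-endian layout, double + uint32 + double = 20 bytes.
binary_payload = struct.pack("<dId", record["ts"], record["sensor"], record["value"])

print(len(json_payload), len(binary_payload))  # 49 20
```

The binary form is well under half the size here, and the gap usually widens with arrays of records, before even counting the parsing cost on each end.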

The last and toughest issue is latency. Sometimes you just need to move data in less time. This is generally addressed with higher frequency buses. There are almost no places in high performance computing and AI where you genuinely need better latency. We use it when it’s available, but if you consider that most supercomputers are running like slugs because scientists tend to write their code in Python, R or Matlab, this isn’t really the issue. Many HPC applications could run on laptops with a decent video card if they were coded properly.

The only thing these advanced fabrication technologies are really needed for is things like 5G. And this is a huge problem because China’s 5G solution is much cheaper and quite a bit better than its competitors'. Even now, for example, Nokia is struggling to deliver millimeter wave or SA, two features Huawei could deliver on day one of 5G and also the two most interesting features of 5G. If the US hadn’t bullied everyone to avoid Huawei RANs, the whole western world would be shipping crates full of US treasury notes to China, never to be seen again.

Overall, these semiconductor fabrication equipment sanctions have very little impact on China, except that, by being forced to up their game, they will eventually catch up with the west and be left perfectly positioned to compete aggressively against western companies.

The ship has sailed. There is no going back. If all the sanctions were lifted today, China would work just as hard to become the world leader in semiconductor technology. What Trump and Biden have accomplished is to make it so China might simply choose to withhold advanced Chinese tech from the US and their friends.

The good news is, most of the people who initiated this inevitable end will, more than likely due to old age, not be around to see the world when the US is panhandling on the streets of Beijing begging for bread. But I’m sure they’ll go down in the history books as the presidents who forced China’s hand.

Boeing's first-ever crewed mission in Starliner ISS spacecraft delayed to late July


I didn't realize they were still trying

Honestly, I was under the impression that they had just given up. I wonder if they have a plan for how to lobby for a billion dollars a flight once BFR starts launching.

Ransomware severs 1,000 ships from on-shore servers


DaVinci Virus?

I'm pretty sure this is the DaVinci Virus and it will likely eventually cause the ballast in the ships to shift and the little boats will flip over unless a million dollars is deposited into a specific numbered account.

I recommend they track down a hacker named "The Plague".

China may have to reassess chip strategy in face of US sanctions


Re: US semiconductor industry???

Thanks for this write up. It filled in some stuff I didn't know.

I would like to point out that most of ASML's technology and the technology of their vendors are patented which means that where patent protections work, they are safe. But in countries being sanctioned in ways they see as a violation of trust, the patents serve as blueprints to reproducing said technology.

I like to point out to people that if you were to compare China to the U.S. in a very naive way which says that "Any American is twice as good as any Chinese person", considering that there are more than four times as many people in China and they're demographically distributed similarly to most other countries including the U.S., then China is still twice as good as America. Or at the very least, they have the capacity to achieve twice as much as the Americans.

This tells me that if the Chinese can outperform America consistently by at least a factor of two, the Chinese are absolutely going to pass America in every conceivable way in time.
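The back-of-envelope arithmetic can be checked directly; population figures below are rough current estimates, and the "twice as good" handicap is the deliberately naive assumption from above:

```python
# Back-of-envelope: grant every American double the per-person output
# (the deliberately naive handicap) and use rough current populations.
us_pop, cn_pop = 3.3e8, 1.41e9   # approximate populations
us_capacity = us_pop * 2.0       # each American counted double
cn_capacity = cn_pop * 1.0

ratio = cn_capacity / us_capacity
print(f"{ratio:.1f}x")  # roughly 2.1x aggregate capacity in China's favour
```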

Google datacenters use 'a quarter of all water' in one US city


I am not convinced.

If the water is already evaporating on scale, then there are some pretty interesting situations to consider.

If the evaporation is indoors, the humidity of the data centers would increase and cause condensation.

If the water is at a high enough temperature to evaporate, there is potential for extremely low cost water filtration within a farm. The water released from the data centers could be pumped into pools within greenhouses with properly shaped roofs, and the water would evaporate, condense on the roof, and slide down the sides of the system to be reclaimed. While doing so, it would also cool slightly.

In that same system, the water as it falls and cools can be used to turn low resistance turbines and the energy can be collected. The additional energy removed from the water in the process would be a few more degrees of temperature.

An evaporator system the size of several indoor Olympic swimming pools could be incredibly profitable to everyone involved.

I don't know if the water is quite hot enough for mass evaporation, but when that much water is carrying that much energy, only fools would pass up the opportunity to reclaim as much of it as possible. In fact, making sure that the evaporator system can collect even more energy from the sun would make it vastly more profitable.
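To put a number on how much energy evaporating water carries away, here is a rough estimate; the daily volume is a hypothetical figure, not Google's reported usage, and the latent heat of vaporisation of water is about 2.26 MJ/kg:

```python
# Rough estimate of the thermal energy carried away by evaporative cooling.
# Assumptions (mine, not Google's figures): 1 million US gallons evaporated
# per day; latent heat of vaporisation ~2.26 MJ/kg; 1 litre of water ~ 1 kg.
LITRES_PER_US_GALLON = 3.785
LATENT_HEAT_J_PER_KG = 2.26e6
SECONDS_PER_DAY = 86_400

kg_per_day = 1_000_000 * LITRES_PER_US_GALLON      # ~3.79 million kg/day
joules_per_day = kg_per_day * LATENT_HEAT_J_PER_KG
average_watts = joules_per_day / SECONDS_PER_DAY

print(f"{average_watts / 1e6:.0f} MW carried away on average")  # ~99 MW
```

The catch is that this is low-grade heat near ambient temperature, which is notoriously hard to turn back into useful work, and that alone may explain why nobody bothers reclaiming it.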

I think there has to be a reason the water and energy isn't being reclaimed. Either

- they are using so much water that the heat is being broadly distributed enough that it doesn't actually evaporate easily and as such energy isn't easily extracted

- they are mixing the potable water with non-potable water

- they are mixing potable water with chemicals

One thing I know for sure from following some pretty interesting people who talk about DC engineering: they can account for pretty much every milliwatt of potential energy savings within a datacenter. If there is a penny to be saved or made by exploiting the waste output of a data center, they will find it and exploit it.

Latest US blacklist spells trouble for China’s biggest domestic 3D NAND supplier


Re: Too Far?

This was a fresh perspective for me to think about.

I think I'll give you a knee jerk reaction.

China subsidizes YMTC heavily and the result has been some extremely inexpensive flash for consumer and enterprise. I'm not completely sure, but I think I just spent about $100K on YMTC flash last month, as the Huawei products were better than the Hitachi, IBM, NetApp, Dell and SuperMicro solutions I evaluated. I believe the storage system I purchased was substantially faster for the price than its competitors because they had access to much less expensive enterprise SSD for cache.

Here's the thing though, China needs flash too.

In fact, China needs flash so badly that the government is willing to do what is necessary to make it competitive and affordable. As a result, China is a world leader in Flash technologies now.

Now here's the crux.

China is definitely going to lay on the funding thick now.

Consider what you just said. You're saying that YMTC is a threat to the rest of the world because China values YMTC so much they're willing to subsidize it. In fact, you're suggesting that these subsidies will run the competition out of business.

What do you think will happen now?

If YMTC is a threat, the Chinese government will ask them "What do you need?" and they will make it happen.

Already, China is growing and producing massive amounts of silicon ingots. They've already developed technology for slicing and polishing and transporting them. It's only a matter of time before the ingots grow large enough to start production (if they haven't already done so). That will cut Chinese semiconductor cost in half or even further.

Then there's lithography.

EUV isn't really a big secret and it's an evolution on earlier technology. Do you honestly believe that China can't make their own equipment?

When they do, they'll buy the machines for $1 million instead of $100 million and cut the price of semiconductors by 80% or more.

When Trump imposed those tariffs, he guaranteed that China would respond by investing trillions in making their own semiconductors. Also, they'll invest trillions to collapse TSMC which would collapse the Taiwanese economy which would make the west lose interest in Taiwan.

All this latest decision will do is accelerate China's timelines.

Trump and Biden will go down in history as the fools who signed the death of the western semiconductor industry.

Nvidia faces lawsuit for melting RTX 4090 cables as AMD has a laugh


What will this accomplish?

NVidia is now aware of the problem and will obviously take steps to resolve it.

Some people will have been affected by the issue and it is very likely they will have to pay to fix the problems. NVidia will most likely attempt to address the financial issues and may even have to issue a recall on the damaged boards.

This lawsuit, however, will place NVidia on the defensive, which will delay financial reparations to the consumers and possibly even delay a recall, as their legal team will most likely advise them that it will cost a great deal more to admit fault.

In the end, the affected consumers will probably receive checks for $50 if they live in the USA and the lawyers will make millions.

This is an excellent example of how class action suits can seriously backfire. The consumers will be hurt and NVidia will end up paying less than if they had actually fixed the problem.

Your next PC should be a desktop – maybe even this Chinese mini machine


One step further

I'm using some relatively older NUCs for most things, but I came to the realization that it's a heck of a lot cheaper to pay for a game streaming service than to pay for the electricity on my normal desktop, which is a pretty decently specced gaming system.

We also have an XBox Series X in the house, no screen attached, we just connect remotely.

I don't think I'll ever pry the gaming rigs away from my kids, but I'm pretty sure they'll be fighting over my video card soon when I switch completely to streaming gaming.

Chip shortages still plague carmakers despite weaker semiconductor demand



Cars use chips mostly for sensors.

These chips are chosen when each part is designed. So, for example if there is a temperature sensor on each brake pad of the car, it will have a microcontroller that connects the thermal sensor to the CANBUS of the car.

The brake thermal sensor will be tested extremely well over a long and excruciatingly detailed product approval cycle. It will be tested in the lab and on the road. Once the design is certified, it’s given a SKU, and things like changing parts will not happen… maybe passive components, but certainly not temperature sensitive devices such as semiconductors.
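To illustrate why the certified interface outlives the silicon behind it, here is a hypothetical sketch of the kind of fixed-layout CAN payload such a sensor might emit; the arbitration ID and byte layout are invented for the example:

```python
import struct

# Hypothetical sketch: a brake-pad temperature reading as a fixed 8-byte
# CAN payload. The ID and byte layout are invented here; a real part's
# layout is frozen by the certification process described above.
BRAKE_TEMP_CAN_ID = 0x4A2  # invented 11-bit arbitration ID

def encode_brake_temp(pad_index: int, temp_c: float) -> bytes:
    """Byte 0: pad index; bytes 1-2: temperature in 0.1 C units,
    signed big-endian; bytes 3-7: reserved (zeroed)."""
    return struct.pack(">Bh5x", pad_index, int(round(temp_c * 10)))

def decode_brake_temp(payload: bytes) -> tuple[int, float]:
    pad_index, tenths = struct.unpack(">Bh5x", payload)
    return pad_index, tenths / 10

frame = encode_brake_temp(2, 317.5)
print(len(frame), decode_brake_temp(frame))  # 8 (2, 317.5)
```

The certified artifact is the whole part — frame layout, timing, and silicon together — which is why even a 1-to-1 compatible chip swap still triggers recertification rather than a quiet substitution.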

The car manufacturer will use that same sensor on as many cars as possible and for as long as possible because designing and certifying a new part is very expensive and, more importantly, requires a new SKU. Then every car which uses the old SKU as a replacement part will need to be certified for the new SKU. Distribution of the new SKU will take time, and all systems worldwide will need to be updated to use said SKU as a replacement. Then, even if all you did was move from a microcontroller built on 180nm to one built on 28nm with 1-to-1 compatibility, it will take years to identify, resolve or work around, and catalogue the issues with the new part.

Even now, designing new cars, if you add a brake thermal sensor to the design, shortage or not, the car company will use the old part with the old chip because the engineers who even know the requirements and parameters of such a sensor may already be retired.

That means the automotive industry has to find chips made using the same process.

The problem is, the number of cars using that part has been increasing over the years and the number of replacement parts needed has increased proportionally. So, the demand has increased but the supply has not.

Designing new SKUs is an option, but it takes years and would probably bankrupt the companies needing them.

Also, no one, even under the CHIPS Act, is scaling availability of manufacturing for those parts. Everyone wants to make new and badass stuff to be competitive.

China is always willing to scale production of this tech. In fact, they can easily scale 28-250nm production with no supplies from outside of China. But that would require actually depending on China for these chips and why should they scale to help the US car makers when the US is attacking every company that can actually solve the problem?

The CHIPS Act is great because it helps the US catch up with Asia in terms of tech. The US companies Intel and GF are willing to produce unlimited amounts of 10nm and are even willing to scale 28nm production. But that would only be helpful if the companies who actually need the chips could cover the cost of changing the process. This is not an option… unless of course the US were to invest in the companies using said chips to get new SKUs designed and certified.

We can’t cry like little babies over this shit when no one is actually listening to the people who experienced a real chip shortage.

China may prove Arm wrong about RISC-V's role in the datacenter


Nope... not a risk... we just don't use them... ARM that is

We're currently planning 5 exascale computers in the next 5 years. We had been working with ARM as we're developing our own CPUs. Then, after extensive research and money spent, and of course the Trump embargoes on China, we (a European collaboration) moved on to RISC-V. Trump and Biden taught us that no one is immune to being crippled by friends... and unlike China, we just don't have the resources to bounce back.

So, we've more or less dumped ARM beyond some POC related things. We'll build systems with a few dozen or maybe even a thousand CPU cores, but then we'll scrap them and replace them with RISC-V ISA designs.

Here's the real reason why ARM offers nothing in the datacenter over RISC-V.

There are so many varieties of ARM that compilers just aren't optimized for them. It's not like Intel or AMD, where compilers are written to optimize for each generation of chip. For example, one generation of compiler can optimize code for CPU cores organized in a ring bus, and the next generation for cores organized in a mesh. But with ARM, you never know what you're going to get, and the compilers don't optimize for a generation or even a given core. As a result, ARM processors are kinda useless when you need performance. Consider that code optimized for an NVidia processor probably isn't optimized for Qualcomm, and code written for Apple M2 certainly isn't optimized at all for NVidia. You end up using compilers which target the lowest common denominator, which means no support for many instructions or architectures.

RISC-V will be worse in this regard, as most ARM processors are at least mostly designed by one company, while RISC-V processors will be designed by pretty much anyone. They'll have nothing in common and their instruction sets are highly customizable. As a result, we will very likely become dependent on JIT compilers, which on exascale machines are actually optimal, as they can optimize code for the platform as they run, based on its current state.

Already, I can't wait to get my Alibaba RISC-V laptop to start optimizing code.

Micro Focus bought by Canada's OpenText for $6b


Successful enterprise software starts…

Successful enterprise software starts as products accessible to young and agile future engineers hoping to choose a direction.

Micro Focus is a paywall.

If Micro Focus owns a product, it will never see the light of day outside of the Micro Focus paywall. They are a buy first, pay later shop. Their entire existence is a justification for open source: even though Micro Focus already has a solution, it will be rewritten as open source, because it’s less work to develop an entire replacement system than to work with Micro Focus.

Your job was probably outsourced for exactly the reason you suspected


What about liability?

When you outsource, you specify a project and set acceptance criteria and then pay a company who can be held liable to deliver.

Yes, there are horror stories of how this can go wrong. The British and U.S. governments are absolute experts at botching their procurement of IT systems on a biblical scale. But overall, most outsourced projects are delivered, accounting for the fact that the original price and deadlines are set by whoever can lie the best. Meaning that in order to win a tender against similar companies, all the vendors tend to intentionally underbid and quote delivery times over-optimistically, knowing they can ask for more once the project is in progress. So, it’s generally best to estimate at least 50% extra cost and time.

Even then, most systems are extremely similar. Using mature tools and platforms, vendors can estimate quite accurately the amount of time and resources required to deliver most any business system. For more technical projects, they can often throw large numbers of developers at making them happen.

When you employ your own development staff, you have no one to hold accountable but yourself. You need to have expert knowledge at planning and executing development projects. You can see salaries double in the time it takes to complete a project. Many such projects can see entire project staffs rolled over during their lifetime.

If you are going to farm out development to an external organization anyway, why waste time or money going local when going overseas will yield similar results? I haven’t met many great ITT grads, but I’ve known a bunch of good ones. And for most projects, you just don’t need great. You need good enough.

And don’t forget, most people in IT, no matter where you look, are about equal. Each place has their rock stars, each place has their slow-but-steadies, and each place has their “there’s gonna be a lawsuit or a lynching”, all in about equal proportions.

If I were to outsource, I would look to Lithuania. I have worked with IT people all around the world. I have even been in one country training their IT staff, and a month later in Lithuania training their replacements following a surprise outsourcing. The Lithuanians really impressed me on every count.

China seems to have figured out how to make 7nm chips despite US sanctions


Re: Another shoe of the US Clyddsdale falls.

I enjoyed your perspective even if I consider your writing style to be as annoying as my own.

Intel said for quite some time that 10nm from Intel could achieve the same density as 7nm from TSMC which was mostly true as Intel and TSMC measured differently. Of course there were exceptions, but overall, Intel and TSMC evolved their processes greatly over time and once the limits of TSMC 7 and Intel 10 were reached, they both moved a notch down in size.

Intel 4 should be able to employ 7nm fabrication to match the density and power footprint of 4nm from TSMC or Samsung.

I haven’t found details on SMIC’s 7nm node yet, but I would expect they have to work on the following:

- Transistor size

- trace width

- insulator width

- yield

- more

I would suspect that SMIC has nailed the transistor size issue. But yield will probably be a beast for them at this point.

What I think is really interesting about this is that SMIC had to work around a LOT of sourcing issues. Everyone has made a lot of noise over the EUV tech, but in truth, with enough money and engineers, EUV is not actually a mystery. The patent documents cover it in a lot of detail and they just had to figure out how to do it themselves. I would be surprised if it was actually difficult to do once China decided to ignore patent rights.

The big dog was wafer production. It takes an incredible team of people to produce crystal of such purity. It is also really easy to hide the process of purification, growth, slicing and polishing. 7nm is pretty far from subatomic, still 1-2 orders of magnitude from it. But crystal production for semiconductors probably requires at least 85% crystal accuracy (speculating from basic chemistry knowledge) to gate accurately. Layering must also be extremely difficult since layers are very likely less than 30nm high (7nm trace, 15 degree angle of exposure, 13.5nm EUV wavelength).

This would mean that NMOS or PMOS at this scale would require some truly insanely accurate robotics and some ridiculously accurate centrifuges (spin coaters) for layer application.

I think what many people didn’t recognize is that once something has been accomplished by anyone anywhere, whether through patents or marketing material, or other sources… reproduction of that technology is much much much easier. All that keeps people from copying it is patents and good faith. When it became public knowledge that diagonal EUV lithography did in fact successfully etch 7nm wafers… the financial risk associated with reproducing that technology was mitigated greatly. You really would only need a team of chemists, physicists and precision engineers to do it again.

What I think should also be considered is that the documents published regarding sub-nanometer fabrication allow China as a whole to invest in skipping multiple generations and even beating their competitors to the goal. No one says China HAS to move to 5nm or 3nm first. They can be happy with 7nm and focus entirely on 500pm instead.

That of course raises the next issue which is that the really difficult stuff to reproduce is the software.

If China is blocked from Mentor Graphics and other synthesis tools, they will be forced to stick with versions licensed before the embargo. They can’t just call Mentor and say “I need a patch for our proprietary fab process”. They will have to produce their own synthesizers and physical and logical simulators. That actually takes a lot of time. There is a lot of graph theory in rule-constrained synthesis. Also, field theory, especially concerning the quantum effects which are unavoidable at such small gate sizes, takes teams of brilliant people years to master.

Elon Musk considering 'drastic action' as Twitter takeover in 'jeopardy'


Where is the ROI?

I have been seriously wondering what Musk would actually be buying.

Twitter isn’t a meaningful information sharing platform. It does not promote communication. It requires users to favor banter. I haven’t seen much use of Twitter as anything other than sharing links and making quirky comments. It is commonly used for taking swipes at other people on a platform which greatly limits follow up discussion.

I see absolutely no features of Twitter which provide real value. As an example, look at Facebook. As a platform, it is generally also a cesspool, but everyone tends to have a Facebook account and often has Messenger because it is the modern equivalent of a personal phone book. Twitter doesn’t have a real networking feature. You just follow people, you don’t connect with them.

I wouldn’t consider Twitter a worthy investment. The only value I can see for Musk to buy Twitter is to hurt Donald Trump. And frankly, he is rich enough to do something petty like that.

Actual quantum computers don't exist yet. The cryptography to defeat them may already be here


Policy vs Technology

Keeping data protected for 75 years is a matter more properly managed by policy than technology.

It would be important to identify the reason you would encrypt the data and then identify why you would decrypt it.

Most data doesn’t actually need to be encrypted if it is intended for offline storage. Rather, a good physical security policy such as long term media storage in a secure location would be ok.

Encryption is primarily necessary when data is to be transmitted. Until it needs to be transmitted, plain text is probably ok. But when you transmit the data and it is intercepted, if it needs to remain private for 75 years from the time it is intercepted, it is clear that data will eventually be compromised. Therefore, such data should be transported physically rather than electronically.
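There is one scheme whose security genuinely survives 75 years regardless of future computers, quantum or otherwise: the one-time pad. It fits the physical-transport argument neatly, since the key itself must be as long as the message, used exactly once, and delivered out of band. A minimal sketch, standard library only, with an illustrative message:

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR with a truly random, never-reused key as long
    as the message. Information-theoretically secure, but the key must
    travel out of band (e.g. physically)."""
    if len(key) != len(data):
        raise ValueError("key must match message length")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"keep secret for 75 years"
key = secrets.token_bytes(len(message))     # the part you transport physically
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message  # XOR is its own inverse
```

The impracticality of distributing and storing pad material at scale is exactly why policy, not technology, does the heavy lifting for multi-decade secrecy.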

You’re absolutely right in my opinion. Looking 5-10 years ahead may be achievable. Looking 25 years ahead would be entirely impossible. We honestly have little or no idea how quantum computers work today and we have no idea what breakthroughs will occur in the next 25 years. At this time, every quantum programming language is roughly equivalent to a macro assembler. We are literally programming with gate operations like assembler op-codes and of course macros. 25 years ago, chip design was very similar… almost identical. Then we figured out how to write high level code which would automatically compile to synthesized logic. And then we rocketed decades ahead overnight. We have absolutely no idea what will happen when the quantum world sees such a leap.
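The "macro assembler" comparison can be made concrete with a toy single-qubit simulator: a state is a pair of amplitudes and a gate is a 2x2 matrix, which is essentially the level today's quantum languages operate at. A pure-Python sketch, illustrative only:

```python
import math

# Toy single-qubit simulator: a state is two amplitudes (|0> and |1>),
# and a gate is a 2x2 matrix applied one "op-code" at a time.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate

def apply(gate, state):
    """Multiply a 2x2 gate matrix into a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

state = [1.0, 0.0]       # start in |0>
state = apply(H, state)  # equal superposition of |0> and |1>
state = apply(H, state)  # H is self-inverse: back to |0>

print(round(state[0], 9), round(state[1], 9))  # 1.0 0.0
```

Every program is built out of such individual gate applications; the "high level language that compiles to gates" step, the analogue of logic synthesis, is what hasn't happened yet.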

So, if data must be secure for extended periods, floppy-net will probably be the best option.

Cisco EVP: We need to lift everyone above the cybersecurity poverty line


Step 1

Provide software updates for old hardware.

Currently, most Cisco customers are running hardware that is locked to old ciphers and hashes. This is because their hardware is out of support.

Cisco has been pushing upgrade cycles for decades that deprive customers of patches for their equipment.

There are millions of Cisco routers and switches that use MD5 passwords, RIPEMD160 SSH hashes, outdated and insecure AES key cycling for SNMP.

In fact, there are many in production, supported network devices which use weak or seriously outdated AES and 3DES ciphers for SNMPv3.

Then there is the Cisco golden goose, YANG-centric NETCONF, for which Cisco DevNet provides lots of documentation and training that push poor security programming practices, such as storing keys insecurely in example code.

Then there are products like Cisco ISE, Cisco’s golden security tool at the heart of its security portfolio, which uses severely out-of-date Apache Tomcat versions, Java libraries with insecure LDAP implementations, extremely insecure and easily compromised SAML implementations and more.

Then of course there are insane cases such as Cisco FirePower which make use of insecure network stacks beneath their secure implementations.

Oh… I don’t even know where to start when it comes to coarse system patching. Every single IOS-XE device runs Linux and lots of things like OpenSSL, OpenSSH, etc… but even after ~20 years of IOS-XE, you still don’t have granular software updates. You can’t just “apt update”, for example, and get new versions of the tools which regularly have critical CVEs; you actually have to schedule downtime and perform time-consuming full-system updates, sometimes with update cycles as long as 45 minutes.

I can go on for hundreds of cases of this.

Overall, Cisco is a mess when it comes to security.

But let’s be honest, they’re not much worse than their peers.

Export bans prompt Russia to use Chinese x86 CPU replacement


Re: Russian politics aside

Thank you! Compared to your normal posts… this was nearly a marriage proposal ;)


Re: Russian politics aside

Honestly… I am an American and I’m just excited about where the tech is going. I really never had an interest in taking sides with anyone. Let the gorillas thump their chests and grunt. From a tech perspective, there is nothing but good things to expect to come from this.


Russian politics aside


Ok... I'm in complete disagreement with the author from a technical perspective. I feel as if there's a bit too much "Queen and Country" happening here. Patriotism and propaganda are fine, but there is so much more to this article than just "We're so much better than they are." In fact, this article should make the author seriously reevaluate that.

So, not long ago, China was embargoed and they were cut off from western technology. Since then they have

- launched a multi-pronged plan to mitigate the loss

- used ARM under a strange "ARM China is not ARM LTD" clause that allowed them to keep using ARM like crazy.

- moved approximately 50-200% faster on die process advancements than their western peers... depending on how you evaluate this. But whatever the case, it simply makes sense since their peers have to do things never done before and they only have to learn from what is already known.

- either licensed, bought or created most of the peripheral infrastructure surrounding CPUs including audio, video codecs, USB, MMU, interrupt controllers, DMA controllers, ethernet and more.

- launched a slew of RISC-V based processors and advanced RISC-V technology at least in terms of synthesis more than anyone else on earth.

- managed to use loopholes in the Cyrix x86 license via Via Tech to get hold of the x86 ISA (at least until Intel figures out how to go after this); x64 never really had these issues.

- Grabbed what I believe is S3 graphics tech, which is nothing to write songs about but is a truly solid foundation to build on. They should be able to tack on a pile of specialized RISC-V cores to provide most of the processing capacity and, with some serious focus on memory technologies... meaning finding an HBM2-like solution and solving some pretty serious cache coherency issues... make a modern GPU.

Yesterday I was in front of a classroom and a student asked me to look at his progress on a project on his laptop. His machine was a several generation old Core i3 with 8 gigs of RAM. It was sluggish, but it was entirely usable even when loaded down by a very processor intensive application. I'd only guess that if I searched for a benchmark of this machine relative to the motherboards seen in this article, they'd be quite similar.

For HPC, Intel is not a requirement. Huawei and others solved this problem by producing CPUs which are less energy efficient than Intel or AMD but using solar energy and batteries for power. If you join a meeting with Huawei to buy a super computer, they present to you systems which they can deliver at any scale (as in Top500 systems) using Chinese technology and they can deliver power through solar. This is not a problem. And frankly, so long as it has a fully functional Linux toolchain including Python, Numpy, Scipy, Julia, C, C++, Fortran... I really don't care which instruction set I'm using. The only really important missing tool on the systems is Matlab, but it does have Octave ... which isn't really a replacement, but it could be good enough.

For telephones, there's ARM and soon RISC-V

For normal desktop, this x86 solution looks like it should be perfectly suitable for most users. Toss on a copy of Deepin Linux or ReactOS and it'll be fine.

Gaming and graphics workstations... those will require something else. And while everyone loves western and Japanese game studios, China is producing a crap ton of pretty good games these days... mostly on Unity and Unreal (I think), which could cause issues. Still, I can honestly see China pushing Chinese and Chinese-owned publishers to produce for a Chinese architecture. And let's be honest... most of the best games out there these days are well known for "they could run on a toaster"

At China's current rate of progress, they should meet or exceed western tech on every front within a few years... not necessarily always by looking at benchmarks and such, but based on user experience.

We can thank Donald Trump and Joe Biden for this. If it weren't for them, China probably would have kept chugging along at a moderate pace and been happy just to keep using and licensing western tech. But thanks to the embargoes which forced China to increase their efforts so drastically, we're going to see some truly amazing advancements in tech. The speed China can and is moving at is IMMENSE, and soon everyone else will have to really push it into overdrive to avoid being left in the dust.

The tech world is going to be truly amazing now. I can't wait to see what comes from this.

What is so exciting is that there's absolutely nothing that can be done to slow China down now. Not only are they hellbent on never being crippled by the west again, but they need to do this for their economy. For them it's full speed ahead or bust.

Elon Musk flogs $8.4bn of Tesla shares amid Twitter offer drama


I don’t get it

So… Twitter

From my use of it,

if you write threads, people hate you because that’s not Twitter

If you write tweets

- people hate you because even a haiku author can’t express themselves in the character limit without being (intentionally) misunderstood by most people

- people hate you because you sound like a raving lunatic for writing short meaningless messages

- people hate you for writing crappy jokes

- people hate you for writing anything

I have never experienced a platform so well suited for spreading hate and discontent. It’s a toxic platform. I am sure there is someone out there who isn’t a shit tweeter, and I think Twitter sometimes is a good platform for making announcements… but, it’s just not a good place.

Then there is the issue of Musk buying the platform… has Musk ever bought a company and made it successful? I was always under the impression that he’s a maker and builder… not a buyer.

Deere & Co won't give out software and data needed for repairs, watchdog told


Stack vs heap

Deere has a very odd internal development policy. For decades, they wrote their code in C, and they have specific coding rules which require all data to be on the stack, not the heap. Also, they don’t allow structures.

This means that there are decades of impressively shitty code on John Deere devices. There is a high likelihood that the engineers who wrote the code have no ability to read it themselves.

I think John Deere must have done some sort of internal assessment and realized that if they released source to their systems, they would likely have to also provide some degree of support for the code and it is entirely possible that’s not an option.

I have seen lots of JD source code, and what I would say is: “if you drive a JD tractor, you’re lucky to be alive.”

Cisco to face trial over trade secrets theft, NDA breach claims after losing attempt to swat Leadfactors lawsuit


2008 was long past the point of original thinking.

I am more than happy to talk smack about Cisco’s business practices. I think Cisco is one of the dirtiest companies on the market at this time, and the funny thing is, they don’t even realize it. It’s one big family of “We drank the Kool-Aid” out there in San Jose. I mean really, you have never met a more brainwashed group of… groupies than the Tasman campus groupies.

That said, I was working at a major competitor to Cisco in the collaboration gig at the time. In fact, Cisco acquired them for something like 1.8 gazillion dollars. And one thing I’ll tell you about the social and conferencing market is that there hasn’t been an original thought in that market since Walt Disney built Epcot.

Everything in that entire field is strictly logical evolution, and if there is even a single patentable idea left in social collaboration, I’ll eat my socks… and that’s just gross.

I think the last piece of innovation in that field was when Zoom first reared its ugly head and rendered video in a 3D-oriented fashion. That really threw us for a loop. But really, by 2008, there was nothing left to invent. We are soooooo long past that point.

FreeBSD 13.0 to ship without WireGuard support as dev steps in to fix 'grave issues' with initial implementation


I was about to

swoop in and complain about poor coding.

Whenever I write kernel modules in C (Linux, ugh), I find myself spending far too long untangling unintuitive preprocessor crap that has no place in 2021. When implementing secure protocols, most ciphers allow in-place operations, but headers usually need to be prepended, and you will never find a solution to this problem that permits the headers to remain memory-aligned or the buffers to be encrypted in place.

This means that to efficiently implement protocols which encapsulate higher-level data, good buffer management is necessary. And while C allows you to do anything you want, as efficiently as you want, almost all solutions to the problem tend to lead towards reimplementing object-oriented language features... usually in preprocessor macros, or using crazy compiler extensions for tracking offsets into buffers based on their positions in structs.

There is also the whole game of kernel buffers. Since the kernel is in privileged mode, performing allocs and frees is frowned upon. Randomly allocating memory in kernel space is expensive and dangerous, especially if it may trigger the kernel to allocate further memory beyond its initial pool. Since the MMU is mostly bypassed in this mode, and since C memory management generally is not relocatable, the only real solution is to overprovision needlessly.

I could (and probably should) write a book on all the numerous problems related to kernel development as well as the endless caveats of coding kernels in C, but let me simply say that while good C code is possible, it’s rare and far too many trivial tasks are managed manually and repetitively when coding C.

I don’t particularly care for the syntax of Rust or Go. But both languages run with a great concept which is... if it’s a very common task that can be added to the language with no real additional cost, do it. As such, both languages understand strings, buffers and data structures. There is no need for disgusting hacks to implement things like macros that are all hacks to support something as trivial as foreach.
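For anyone who hasn't seen it, this is the kind of preprocessor hack being described... faking foreach over a fixed-size C array. It is a sketch, not a recommendation: the macro only works on true arrays (sizeof is wrong for pointers), it pastes a hidden index variable into scope, and it is exactly the trivial feature Go and Rust provide natively.

```c
#include <stddef.h>

/* A preprocessor "foreach" for fixed-size C arrays. The comma operator
 * assigns the current element to var and then yields 1 so the loop
 * condition stays true until the index runs off the end. */
#define ARRAY_FOREACH(var, arr)                              \
    for (size_t var##_idx = 0;                               \
         var##_idx < sizeof(arr) / sizeof((arr)[0])          \
         && ((var) = (arr)[var##_idx], 1);                   \
         ++var##_idx)
```

Compare `ARRAY_FOREACH(x, nums) sum += x;` with Go's `for _, x := range nums`, where the language simply understands the data structure.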

C could fix these things as well. But it’s a conscious decision by the people steering the language to keep it as it is and to leave it to libraries and compiler extensions to do it instead.

I love C because if I want to write a new C compiler, I can make something usable and likely self-hosting within a few hours. But this isn’t the characteristic of a programming language I would want to use in 2021. If I were to spend my time on such a project, the first thing I’d do is build extensions for strings, buffers and data structures ... and it wouldn’t be C anymore.

Oh... and most importantly, I would drop the preprocessor and add support for domain specific language extensions. And I’d add proper RTTI. And I’d add an extension for references. And of course relocatable memory. And probably make ...

You know what... I don’t think I’d do anything other than bootstrap the new language with C and then basically just ignore the standard from there :)

Flagship Chinese chipmaker collapses before it makes a single chip or opens a factory


Re: More to this than meets the eye

They actually see what we call IP theft as free-market economics. Rather than protectionism, which is supposed to allow companies to recoup their investments through patents and such, they believe that everything that can be copied is open source, and that each copy will generally drive innovation and place pressure on all players to keep making advances.

I am currently sitting just outside a semiconductor research facility as I write this. It makes high frequency radio semiconductors for communications. Almost all the technology inside is off-the-shelf and while they obviously make a buck off patents, everyone there knows that their real protection is research and progress. We have a real problem at this location because if the US government becomes protectionist against its allies, they’d have to just shut down.

The good news is, there is another building just off to the side which is a nanotechnology research facility that can produce everything without dependence on the US. So rather than investing massive sums of money in traditional semiconductors, they focus their efforts in nanotechnology as a replacement.

When China eventually catches up in semiconductor fabrication to the US, it will quickly surpass them as American companies will depend on protectionism. And even if the US were to undo all the restrictions Trump enacted this moment, China would still invest heavily in their own tech and will still pass the US since they have now learned that if it happened once, it can happen again. Not only that, but I imagine the facility I’m sitting at will start buying from China at least as much as from the US so they will never have to worry about being completely cut off.

The sanctions placed on China by the Trump administration will likely be the biggest boost to China that could have ever been possible. It will take China time to recover from it, but when they do, it will leave the entire rest of the world without any leverage when negotiating with the Chinese on political issues. At this point, Biden’s choice of leaving the sanctions in place does nothing other than provide a buffer to let countries outside of China get a running head start in a very very long race.

Please don’t assume I’m playing the Trump is evil card. With the rollout of 5G, if Huawei would have been able to keep going as they did, China would have been able to draw trillions of dollars more from the US treasury. I don’t agree with how Trump prevented this, I would have rather seen an executive order demanding Cisco or another US company produce a competitive offering. But it did accomplish mitigating the risk of China being able to simply collapse the US economy on a whim. The European approach of working on an open source RAN was a far better approach.

America, Taiwan make semiconductors their top trade priority at first-ever 'Economic Prosperity Dialogue'


What happens when...

One of the world’s top economies... heavily dependent on semiconductors is told that no one in the world is allowed to sell them semiconductors?

The easy answer is, they invest massive amounts of money, time and resources to never need to buy semiconductors from another country.

Then they build up enough manufacturing ability to produce semiconductors for every other country who doesn’t like the impending threat of being cut off.

Then they do it for a lower cost than any other country.

Then they weaponize their capacity and use extensive government grants to economically attack countries like Taiwan... after all, why not simply give semiconductors away for free until TSMC can no longer afford to keep their doors open?

Of course, in order to stay competitive, that country will innovate as well. They will make sure they’re not just competitive, but after throwing money at racing to equal with publicly traded companies in the US and Taiwan, they will have a momentum already in place to also surpass them.

So what happens when someone makes a decision that threatens many of China’s largest and most influential companies and treats them like this? You think if Biden gives a little and agrees to sell them chips again that China will just stop their almost space race like efforts to become entirely independent?

Who knew? Hadoop is over, says former Hortonworks guru Scott Gnau


Re: @tfb This is why

I've been saying this for some time about COBOL. (Oh and I work with FORTRAN in HPC quite often)

People make a big deal about COBOL programmers being in short supply and that it's an antiquated language. Honestly though, what makes programmers really confused about it is that you don't really write programs in COBOL, it's more of a FaaS (Serverless, function as a service) platform. You write procedures in COBOL. The procedures are stored in the database like everything else and when a procedure is called, it's read from the database and executed.

The real issue with "COBOL programmers" is that they don't know the platform. The platform people are usually referring to when they say "COBOL" is actually some variation of mainframe or midrange computers. Most often in 2020, they're referring to either IBM System/Z or IBM System i ... which is really just a new name for what used to be AS/400.

The system contains a standard object storage system... or more accurately, a key/value store. And the front end of the system is typically based on CICS and JCL which is job control language. IBM mainframe terminals (and their emulators) have a language which could be kind of compared to HTML in the sense that it allows text layout and form entry as well as action buttons like "submit".

Then there's TSO/ISPF which is basically the IBM mainframe CLI.

What is funny is that, many of us when we look at AWS, all we see is garbled crap. They have a million screens and tons of options. The same is said for other services, but AWS is a nightmare. Add to that their command line tools which are borderline incomprehensible and well... you're screwed.

Now don't get me wrong, if I absolutely must use AWS, it wouldn't take more than watching a few videos and a code along. I'd probably end up using Python even though I don't care much for the language. I'd also use Lambda functions because frankly... I don't feel like rolling my own platform from scratch. Pretty much anything I'd ever need to write for a business application can be done with a simple web server to deliver static resources, lambda functions to handle my REST API, and someplace to store data which is probably either object storage, mongodb, and/or some SQL database.

Oddly, this is exactly what COBOL programmers are doing... and have done since 1969.

They use :

- TSO/ISPF as their command line instead of the AWS GUI or CLI tools.

- JCL to route requests to functions as a service

- CICS (in combination with a UI tech) to connect forms to JCL events as transactions... instead of using Amazon Lambda. Oh, it's also the "serves static pages" part. It's also kind of a service mesh.

- COBOL, Java or any other language as procedures which are run when events occur... like any serverless system.

It takes a few days to learn, but it's pretty simple. The hardest part is really learning the JCL and TSO/ISPF bit because it doesn't make sense to outsiders.

What's really funny is that IBM mainframes running this stuff are pretty much infinitely scaleable. If you plug 10 mainframes in together, they practically do all the work for you since their entire system is pretty much the same thing as an elastic Kubernetes cluster. You can plug in 500 mainframes and get so much more. The whole system is completely distributed.

But what you're saying about FORTRAN being its own entire platform/ecosystem is entirely true. Everything you would ever need for writing a FORTRAN program is kind of built in. But I will say, I would never even consider writing a business system using FORTRAN :)

Microsoft submits Linux kernel patches for a 'complete virtualization stack' with Linux and Hyper-V


Re: The way forward?

I'm not sure what you're referring to. While there are vast areas of the Linux kernel in desperate need of being thrashed and trashed, a side effect of its lack of design is that it's highly versatile (which, mind you, is what makes it so attractive for so many things).

Microsoft has managed to play quite nicely and by the rules with regards to making the majority of Windows friendly code self-contained within its own directories similar to other modules. It's really not much different than either the MD stack or the SCSI stack. In fact, the Hyper-V code is much easier to rip and remove than most other systems within the kernel as it's organized in a pretty central place.

Rather than spamming the kernel with massive amounts of Windows-specific integrations for things like DirectX, they have done some pretty cool things to abstract the interfaces, allowing generic RDP support for redirecting Wayland compositing... a pretty nice alternative to VNC or the X11 protocol. And from what I can tell, they're working with the open source community to finally give Wayland a strong solution for headless applications over the wire.

Microsoft may be all-out embracing and extending Linux, but now their hands are so deep in Linux's pocket that extinguish is no longer an option for them. And they even play nicely enough by the rules that GPL zealots tend to just grunt rather than rampage about them these days.

Add to that that Microsoft does release massive amounts of actually genuinely useful technologies to the open source and they're almost even likeable now.

This announcement is pretty interesting to me because it will likely result in a VMM on Linux which is easy to manage and is heavily consumed. Honestly, I adore KVM and use it for almost everything, but the highly generic nature of KVM, due to its qemu roots, makes it infinitely configurable and infinitely difficult to configure.

Money talks as Chinese chip foundries lure TSMC staff with massive salaries to fix the Middle Kingdom's tech gap



For the most part, the knowledge required to not only produce current generation and the knowledge to move forward is what is most interesting.

There is little value in hiring for blueprints. The worst possible thing that could happen to Huawei and China as a whole would be getting caught producing exactly the same technology verbatim.

There is value however in hiring as many people as possible that know how to innovate in semiconductor technology.

Huawei and others are running out of chips, but they're not as desperate as you'd think. They're more than smart enough to have contingency plans in place. They have time to catch up. It's far better to get it done right than simply to get it done.

The problem of course is that by China doing all of this, it will seriously impact the Taiwanese and American semiconductor market. When China is finally able to produce 7nm using Chinese developed technology, they can start undercutting costs for fabrication.

Where the US and TSMC will build a handful of fabs for each generation, China will focus on scale. And once China catches up, they'll target becoming a leader instead.

Trump focused entirely on winning the battle, but he has absolutely no plans in place for the war. History shows that every time a trade war has used his tactics, it doesn't just backfire, it explodes. The issue now is whether China can do it before Trump leaves office in 2024. If they can accelerate development and start eating away at the semiconductor market in the next 4.25 years, Trump's legacy will be that he completely destroyed the entire semiconductor market for the US and US allies.

Apple gives Boot Camp the boot, banishes native Windows support from Arm-compatible Macs



I may have missed it, but there are a lot of people who depend on hackintosh or virtualization out there. There are companies with full farms of virtualized Macs for running XCode compiler farms. There are a surprising number of people using virtualized Macs as iMessage gateways.

By Apple making this move, they can make little tweaks like CPU instructions that are Apple specific. They can also make their own TPM that would block any XCode compiled application from running on a non Apple CPU.

How about large enterprises who depend heavily on virtualized Windows on Macs... for example IBM? They actually dumped Windows PCs in favor of Macs because they could remote desktop or virtualize corporate desktops and the users would all have a pretty easy "press the key during boot to wipe your PC and reinstall it". I guess this would still work... at least remote desktop VDI.

What happens to all the developers at Microsoft carrying around Macs? If you've ever been to Microsoft Build Conference, you'd think it was sponsored by Apple.


Re: Bochs

Bochs is a nice toy for retro purposes, but it lacks much of what you would need to make this a solution. On the other hand, you're on the right track, qemu which has a dynamic recompiler and x86/ARM64 JIT would be a solution... it won't be particularly fast though. To run Windows worth a damn today, GPU drivers are an absolute must... even if it's just an old Intel GPU, the Windows compositor really thrives on it.

Nine in ten biz applications harbor out-of-date, unsupported, insecure open-source code, study shows


Don't forget Cisco!

Cisco Prime, Cisco IOS-XE, Cisco IOS, Cisco ISE....

I can go on... but Cisco absolutely refuses to use package management, and as a result, many systems only release patches once or twice a year. When they are released, they don't upgrade nearly enough packages.

Consider Cisco ISE, which is the security portal for many thousands of networks around the world for wireless login. It's running on Apache versions so old that the zero-days against them were published years ago.

Then there's openssl and openssh... they just don't even bother. It doesn't matter how many CVEs are released or what their level is... Cisco ignores them.

Then there's Java versions

And there's the other key issue which is that Cisco products don't do incremental upgrades. You either upgrade everything or nothing. So even with critical security patches, the vast majority of Cisco customers DO NOT upgrade their software because there is far too much risk that it will break more than it fixes.

Of course, even with systems like Cisco DNA which automates most things, upgrades are risky and sometimes outright dangerous since there's no means of recovering remotely when your infrastructure equipment goes down.

Cisco doesn't release information, but I know of at least several dozen government organizations running Cisco ISE releases from 2-5 years old with no security patches because you can't simply run rpm update or apt upgrade on Cisco ISE... which is really really stupid when it's running on Linux.

I think Cisco might be the most insecure enterprise company out there and the only thing keeping it from being more common knowledge is that the people who actually know about these things risk losing their meal tickets from making noise about it. And what's worse is that Cisco people almost NEVER know anything about Linux... or security unless it's protocol security.

Uncle Sam tells F-35B allies they'll have to fly the things a lot more if they want to help out around South China Sea


As a tax payer...

I disapprove of these planes being flown. There are far too many possible chances for accidents or them being shot down. With how much these planes cost, the best option is to keep them stored in hangars where they are only a limited risk.

If anyone in the F-35 governments are reading this, please invest instead in F-16 and F-22 jets which are substantially less expensive and only consider the use of F-35 jets when the F-16 and F-22 planes can’t possibly do the job.

Think of it as using the 1997 Toyota Camry to drive to and from work in urban rush hour rather than the Bentley, since scratching the Camry doesn’t matter but the Bentley will cost you and your insurance company a small fortune. The F-35 series planes should never be put in the air where they can be damaged... it’s simply fiscally irresponsible.

You're always a day Huawei: UK to decide whether to ban Chinese firm's kit from 5G networks tomorrow


Treasury Notes

If Huawei is allowed into western telco networks, the governments will have to cover the purchase of this equipment by issuing treasury notes. If China does not spend those notes, which they do less and less often, and instead stockpiles them, they gain more control of them. At some point, if China decides it needs to buy things from the world, they will use them as currency. When they do this, if they need to make a massive purchase (think $100 billion), whichever government they are purchasing from may decide that the risk of holding that much currency in treasury notes would be difficult to manage. So China will sell treasury notes to multiple other countries and banks, who will negotiate favorable terms of exchange for themselves. This will result in flooding the market and therefore devaluing the power of said notes. This is a major security risk (not as in guns and bombs, but as in financial security) for any country who holds U.S. treasury notes. Weaker economies can actually collapse because of this. Stronger economies can lose their purchasing power in China.

Leave your admin interface's TLS cert and private key in your router firmware in 2020? Just Netgear things


Re: "wanted to see some extra fields populated"

I’m not sure I agree. I keep a simple bash script on my PC I’ve hacked together which reads a few fields from a JSON file and generates a certificate that makes Chrome happy. It also informs me of all my current certificates that are about to expire. I think I got all the OpenSSL commands within the first page of links on a google search.

I think the problem is that certificates are difficult for pretty much anyone to understand since there are no good X.509 primers out there these days. I still see a lot of enterprise certificates signed by the root CA for the domain. Who the heck actually even has a root private key? I actually wish Chrome would stop supporting certificates signed by root CAs. Make a root cert, sign a few subordinates and delete the root CA altogether.

That said, Let’s Encrypt has destroyed the entire PKI since a trusted certificate doesn’t mean anything anymore. A little lock on the browser just means some script kiddy registered a domain.

Creative cloudy types still making it rain cash for Adobe


Re: F*** adobe

I generally agree... I actually stopped paying for creative cloud when Affinity Designer came out... but their Bézier curves (a fairly simple thing to get right) are somewhat of a pain in the ass.

Then there’s Affinity Photo, which still has major scaling problems that cause visual artifacts when zooming the workspace. It makes it almost unusable. My daughter is using Photoshop CS6 because it doesn’t need a subscription and it’s still quite a bit better than Affinity; her reasoning is the brushes. But she’s using other Asian software a lot more now.

In a touching tribute to its $800m-ish antitrust fine, Qualcomm tears wraps off Snapdragon 865 chip for 5G phones



I often work with large enterprises, helping them train their IT staff in wireless technologies. And the message I send regularly is that there is absolutely no value in upgrading their wireless to new standards rather than growing their existing infrastructure to support better service.

I have recently begun training telecoms on planning for 5G installation. And the message I generally send is "people won't really care about 5G" and I have many reasons to back this up.

Understand that so long as pico/nano/femto/microcells are difficult to get through regulation in many countries, Wifi will remain a necessary evil within enterprises and businesses whose operations make wireless particularly difficult to deploy. We need Wifi mostly for things like barcode and RFID scanners within warehouses. One example is a fishery I've worked with, where gigantic, grounded metal cages full of fish are moved around refrigerated storage all day long. Another is a mine shaft where the entire environment is surrounded by iron ore. In these places Wifi is needed, but there's absolutely no reason to run anything newer than 802.11n except for availability. 802.11ac actually costs less than N in most cases today, but there's no practical reason to upgrade; 4x4 MIMO 802.11n is more than good enough in these environments.

5G offers very little to the general consumer. It is a great boon for IoT and for wireless backhaul networks, but for the consumer it will not offer any practical improvement over LTE. 600MHz 5G is a bit of an exception, though. It isn't particularly fast; in most cases it's about the same as LTE. Its primary advantage is range, which will be great for farmers on their tractors. In the past, streaming Netflix or Spotify while plowing the fields has been unrealistic; 600MHz 5G will likely fix that.

People in urban environments are being told that 5G will give them higher availability and higher bandwidth. What most people don't realize is that running an LTE phone against the new 5G towers will probably provide the exact same experience. 5G will bring far more towers to urban areas, and as such, LTE to those towers will work much better than it does to today's 4G towers. 4G is also more than capable of downloading at ten times the bandwidth most users consume today; the core limitation has been the backhaul network. Where 4G typically had 2x10Gb/s fibers to each of four towers in an area, 5G will have 2x100Gb/s fibers (plus a clock-sync fiber) to nine towers in the same area. This will mean much better availability (indoors and out) and better bandwidth. As a bonus, it will improve phone battery life substantially, since beamforming combined with shorter distances to the tower can let the phone transmit at as little as one fifth the power it needs on the current cell network.

5G has no killer app for the consumer. 3G had serious problems across the board since 3G technologies (UMTS, CDMA, etc...) were really just poor evolutions of the classical GSM radio design. LTE was "revolutionary" in its design and mobile data went from "nice toy for rich people" to "ready for consumption by the masses". 5G (which I've been testing for over a year) doesn't offer anything of practical value other than slightly shorter latency which is likely only to be realized by the most hardcore gamers.

I certainly have no intention of upgrading either my phone or my laptop to get better mobile and wireless standards. What I have now hasn't begun to reach the capacity of what they can support today. The newer radios (wifi6 and 5G) will make absolutely no difference in my life.

If you have anyone who listens to you, recommend that your IT department focus on providing wireless network security through a zero-trust model. That means you can effectively ignore wireless-layer security and, as you mentioned, use VPNs or fancier technologies like Microsoft DirectAccess to provide secure, inspected, firewalled links for wireless users. They should focus on their cabling infrastructure, and on adding extra APs to offer location services for things like fire safety and emergency access. They shouldn't waste money buying new equipment either: used APs are a tenth the price, and in a zero-trust environment you really don't need software updates, as the 802.11n and 802.11ac standards and equipment are quite stable today. They should simply increase their AP count, improve their cabling so the APs in a building are never all cabled into one place (a closet can catch fire), install redundant power to support emergency situations, and use only plenum-rated cabling. They should also support pseudo-MAC assignment so that even people not carrying wireless devices can be located by signal disturbance during a fire.

Once this system is operational, it should last for the rest of the lifespan of your Wifi dependence. I can easily believe that within 5-10 years, most phones from Apple, Samsung, etc. will ship without Wifi, as its presence will be entirely redundant.

Also, for 5G, tell people they should wait for a phone that actually gives them something interesting. Spending money on 5G for personal communication devices is just wasteful and, worst of all, environmentally damaging. If the market manages to sell 5G as a "killer app", we stand to see over a billion mobile phones disposed of as people upgrade. Consider that even something as small as a telephone becomes a disaster for this planet when you make a pile of a billion of them.

5G will be great for IoT, though not so much 5G itself; it's the proliferation of NB-IoT that is very interesting. $15 or less will put an eSIM-capable 5G modem module into things like weather sensors (of which there are already tens of millions out there), radar systems, security systems, etc. We should see tens of billions of NB-IoT devices out there within the next few years. A friend of mine has already begun integrating it into a project of hers, which has funding for over 2 million sensors to be deployed around Europe.

No... you're 100% correct. Wifi's death knell has sounded. It will be irrelevant within 5-10 years, and outside of warehouses and similarly radio-harsh environments, it is very likely to be replaced by LTE, NB-IoT and 5G.

And no... 5G on a laptop is almost idiotic if you already have LTE. You should (with the right plan) be able to do 800Mbit/sec or possibly more with LTE. Even when running Windows Update, you probably don't consume more than 40Mbit/sec.

You're praying your biz won't be preyed upon? Have you heard of our lord and savior NVMe?


Why oh why

If you’re dumping SAS anything in favor of something else, then please get a distributed database with distributed index servers and drop this crap altogether.

Hadoop, CouchDB, Redis, Cassandra, multiple SQL servers, etc. all support scale-out with distributed indexing and searching, often through map-reduce methodologies. The network is already there, and the performance gain is often substantially higher (orders of magnitude) than using old SAN block storage technologies.

Or, you can keep doing it the old way and spend millions on slow ass NVMe solutions

'Happy to throw Leo under the bus', Meg Whitman told HP after Autonomy buyout


How could this company ever be worth that much?

There was a time when HP was famous as a technical innovator that filed more than enough patents to use pretty much any technology it wanted, making deals with other companies to trade tech. They would engineer and build big, amazing things; if those panned out, they got rich, and if they didn't, they'd sell them off.

Then the suits came in

HPE has become nothing more than a mergers and acquisitions company. They don't make any new technology; they "me too" a crapload of tech at times. But regarding innovation... check out HPE's labs/research website. Instead of actual innovation, it reads like a list of reasons why they shouldn't invest money in research. I mean really... they wrote one whole paragraph on why they won't waste money on quantum computing, and it's basically "We are going to prove P=NP and rephrase it so that if we can solve one NP problem, it will solve all NP problems."

A string of CEOs has converted HP from a world leader in the creation of all things great in technology into a shit company that spends $8 billion on a document store and search engine that "might be big one day".

Cooksie is *bam-bam* iGlad all over: Folk are actually buying Apple's fondleslabs again


Why would you buy a new one anymore?

I have a stack of old iPads lying around: two iPad 1s and about 10-12 more after that. My wife uses hers... the kids stopped using theirs when they got phones big enough to make the iPads redundant, since they also have PCs.

I did get my wife a new iPad for Christmas... we actually don't know why... but I suppose it had been 2 years since the last iPad was bought... so I got her that.

To be honest, it used to be that everyone needed their own iPad... but these days, I think mom and dad just need big phones and the kids need maybe an iPad mini or so. There's no need to constantly upgrade... they already have more features than anyone will ever use. Now, it's more like "Wow... look Apple is still making iPads... at least I can buy a new one if the old one breaks... if I actually need it for something"

I used to see iPads all over every coffee shop. These days, there's laptops and telephones... but there doesn't seem to be any iPads anymore.

NAND down we goooo: Flash supplier revenues plunged in first quarter


Re: Yay!

I thought the same and then thought... why bother?

I used to spend tons of money building big storage systems, even for the house... I have a server in the closet I just can't force myself to toss, with 16TB of storage I built in 2005. These days, 500GB is generally more than enough; 1TB for gaming PCs.

At the office, I used to buy massive NetApp arrays... now that I have moved to Docker and Kubernetes, I just run Ceph, GlusterFS, or Windows Storage Spaces Direct and I use consumer grade SSDs.

We are soooooooo far past our storage needs it's silly. Expanding a Ceph cluster by a usable terabyte of low-cost SSD requires 3TB of raw capacity at the default 3x replication, which is under $300 now... and it gives us WAY better redundancy than an expensive array. And to be fair, since almost everything is in the cloud these days, you could probably run an entire bank on 2-4TB of storage for years; it's not like a database record takes much space. Back in 1993, we ran over 100 banks on about 1GB of online storage. I'm almost sure you can run one modern bank on 4,000 times that much. :)

As for performance... once you stop running VMware and switch to... well... anything else, you just don't need that much performance. I guess video games would load faster, but ask yourself when you last actually thought "I need a faster hard drive".

Former unicorn MapR desperately seeking cash as threat of closure looms


Re: The software is quite good

Everyone always talks about Betamax as if it was infinitely better than VHS. As someone who thoroughly understands the physics, mechanics and electronics of both Betamax and VHS, from the tape all the way through the circuitry up to the phosphors, I'll make it clear: yes, Betamax was better, but the difference was negligible. The two formats were so close to being the same that it barely mattered... and when transmitting the signal over composite from the player to the TV, which to be honest was 1950s technology (a late-1970s TV was still 1950s tech, just bigger), it was impossible to tell.

S-Video and SCART (in Europe) made a slightly noticeable difference. Actual component cabling could have mattered, but neither Betamax nor VHS could take advantage of it.

The end result was simple: playing a movie recorded on Betamax on a high-end late-1970s or early-1980s TV next to the same movie recorded on VHS, you had one big ugly player next to another, and the only pitch you could give the consumer was "Beta is more expensive because the quality is better". And of course it wasn't... at least not enough to notice. Often you could sell the consumer on audio quality instead, but on 1970s/1980s-era speakers and hi-fi, you wouldn't notice until you were far past the average consumer's threshold.

Betacam SP was actually substantially better, but by then it no longer mattered.

I used to have 400 Betamax decks and 600 VHS decks in my office, all commercial-grade duplicators with automatic tape changers. The Betamax decks existed to collect dust; the VHS decks were constantly being serviced because they ran 24/7. I spent 10 years of my career in video technology development (I am a codec developer at heart, but I know analog too). After 10 years of working with studio- and commercial-grade broadcast and duplication equipment, and knowing what I know about the technology, if I saw Betamax for $120 and VHS for $110, I'd still buy VHS.


Re: @CheesyTheClown ... Burned $300 million?

Thanks for commenting.

I honestly had no idea how MapR would sell in the first place. The problem is... it was a great product, but it was also expensive. And I don't really care how good your sales team is; the website is designed to scare away developers.

I just visited the site, and I'm pretty sure I've been in multiple situations where I could have found the technologies interesting, but the website makes them look too expensive for me to use in my projects. I can use tools that cost $10,000 or less without asking anyone. But they have to be purchasable without spending another $10,000 on meetings where people show Gartner magic quadrants.

I can't use any tool where I can't just pop over to the website, buy a copy on my AMEX in the web shop and expense it. When we scale, we'll send it to procurement, but we're not going to waste a ton of money and hours or days on meetings and conference calls with salespeople who dress in suits... hell, I run away without looking back when I see sport jackets and jeans.

Marketing failed because MapR is not an end-user program and developers can't make the purchasing decisions. The entire front end of the company is VERY, VERY developer-unfriendly. Somehow, someone thought that companies all start off big and fancy. My company is a top-400, and we start projects grass-roots; once we prove something works, we pitch the projects at internal expos and management chooses whether to invest more. MapR looks expensive, scary and difficult to do business with.

This is why we do things like always grow everything ourselves instead of buying stuff that would do it better. Everyone is trying to sell to our bosses and not selling to the people who actually know what it is and what it does.

I wish you luck in the future.. now that I've looked a little more at you guys, I'll check the website occasionally when I go to start projects. If the company starts trying to sell to the people who will actually buy it (people like me) instead of to our bosses... maybe I'll buy something :)


Burned $300 million?

$200,000/year times two is $400,000, the inflated cost of employing one overpaid SV employee. Multiply that by 200 employees and that's $80 million a year... to develop and market a product.

Now... let’s assume that the company actually received $300 million in investments.

Was there even one person in the whole company actually doing their job? And was that job spending money with no actual consideration for return on investment?

Planes, fails and automobiles: Overseas callout saved by gentle thrust of server CD tray


Re: Ah the old push-out-the-cd-tray trick

Why not dump random data to the PC speaker?