* Posts by guyr

83 publicly visible posts • joined 25 Oct 2012

VMware giving away Workstation Pro, Fusion Pro free for personal use

guyr

Other free options have existed for over a decade

Like others, I use VirtualBox for my personal VM needs, and have been doing so for over a decade. So I don't really understand why free VMware is such a big deal. I guess if you use the paid version in a work environment, having the same product available as a free option for non-work use makes sense. It would have made more sense 10 years ago, before people were forced to find other options.

Trying out Microsoft's pre-release OS/2 2.0

guyr

Re: Very Different

Doug 3: "If an application wasn't multi-threaded(all OS/2 apps were required to be ) you could switch to another app while that one finished doing its work."

Not true. I worked on an IBM OS/2 app from 1991-96. I was in Hong Kong on a customer support assignment. Scanner code was locking up. By running the code in a debugger, we were able (with some remote support from a large company in Redmond) to determine that the scanner code had been linked incorrectly, without the /MT flag. So, OS/2 apps were not required to be multi-threaded, though the overhead was so slight that it *should* have been the default behavior.

The task switching you describe worked because the OS itself was fully multi-threaded. Though even to the very end, OS/2 (including 2.0) had a single input queue for desktop GUI events - mouse clicks, etc. - so completely locking up the desktop was very easy and happened frequently. That was also true of Windows, but Microsoft finally fixed it in NT.

Top Linux distros drop fresh beats

guyr

Re: GNOME V 2

I also got most of my formative Linux graphical experience with GNOME 2, so when I saw the proposals for GNOME 3, I investigated how I might stay with GNOME 2 without excessive work on my part. MATE fit the bill for me. While MATE has since migrated to GTK 3+, it continues to provide the traditional look and feel of GNOME 2. As an added bonus, it is lighter weight than many other UIs, so runs well on (slightly) older computers.

Intel finally takes the hint on software optimization

guyr

Won't help the vast majority of developers in a corporate environment

I worked most of my career as a software developer in a corporate environment, i.e., software for internal use. In 35 years, the top priority was always "get it done". Perhaps Intel is only targeting its efforts at huge software vendors like NVidia or the game companies. But for corporate developers, these types of specialized routines will not be used unless they get baked into the common development tools: Java and C++ tool chains, etc. If we can get a 25% speedup (or a 25% reduction in resource use) by changing a compiler flag, that will help. If we *might* see an improvement by manually stitching an Intel-supplied module into our software build (which we then have to test extensively to make sure nothing else breaks), that will never happen.

AMD to end Threadripper Pro 5000 drought for non-Lenovo PCs

guyr

Re: Running out of I/O on a Threadripper?

Agreed. One advantage a non-Pro Threadripper retains over Ryzen is the 4 memory channels vs 2. A build effectively utilizing all those channels should see significant uptick in IO.

How legacy IPv6 addresses can spoil your network privacy

guyr

Every molecule can have its own IP address

Wasn't that the selling point of v6? If we have to randomize our v6 IP address on a daily/hourly basis to avoid tracking, doesn't that greatly diminish the value of having such an expansive address space? I guess if we can hide all that inside the ISP connection point (normally, an ISP-provided router), then we can just use NAT on all home devices and be done with it. As long as we can continue to use v4 inside the house for ease of use, and NAT that to v6 on the ISP router, that should cover the majority of use cases for home users.

Professionals get paid to deal with these headaches, so I'm not too concerned about the hoops that they have to jump through with all this.

Linux kernel edges closer to dropping ReiserFS

guyr

Re: Puzzled

Same, I used it for a bunch of years in the early 2000s. At its height of popularity, ReiserFS was great compared to the alternatives that were available. Then it stopped being maintained. I switched to BTRFS once that got past version 1.0.

Now that's wafer thin: Some manufacturers had less than five days of chip supplies, says Uncle Sam

guyr

How to strengthen the chip supply chain within the US

Hmmm, how about actually making chips in the US? Or perhaps the US Department of Commerce is unaware that over the last 2 decades, just about all state-of-the-art production has migrated to the Far East? I'm surprised that we have constrained (or no?) domestic capacity for 40nm to 250nm production. The US had lots of equipment with that technology 20-30 years ago. But I guess fabs can't just sit idle waiting for customers to show up.

Rusty Linux kernel draws closer with new patch adding support for Rust as second language

guyr

Re: Is it just me ?

Rust is not trying too hard. Rust was designed to be a systems-level language that addresses well-known weaknesses in other languages. You can read widely available resources on the whole breadth of Rust design goals, but the primary one everyone talks about is memory safety. Operating systems have been battling memory access issues forever; Windows is coming up on 40 years old, and still struggles with hacks achieved through buffer overflows, etc. So, if you could eliminate this entire category of vulnerabilities by the design of the programming language itself, that would be huge.

You are correct that you can't correct program logic errors in a programming language. But Rust helps address other major problems like the memory safety issue.

Intel teases 'software-defined silicon' with Linux kernel contribution – and won't say why

guyr

How much non-functional

Hmm, a modern CPU can have anywhere from 10-35 billion transistors, and take 3-4 months to manufacture starting from a polished wafer. I'm having trouble envisioning any CPU having significant portions of these precious transistors sitting non-functional in anticipation that a customer *might* decide somewhere down the line to enable additional features. *Maybe* take a gamble with 10% of available transistors. But with global production constrained and customers lined up for products, the vendor (AMD, Intel, etc.) would be much better off using those idle transistors to make additional CPUs.

But the people running these businesses are undoubtedly much smarter than I am, and likely have decades of accumulated data showing which features customers are likely to want.

HP Inc slurps Teradici to get better at delivering remote PCs

guyr

Sounds like VNC

"no data moves over networks – just bitmaps"

That's called VNC, which has been available - for free - for 20+ years.

Developing for Windows 11: Like developing for Windows 10, but with rounded corners?

guyr
Happy

Re: Meh to both Windows 11 and mobile 5G

Haha, like that picture. I had totally forgotten that I went to the midnight release of Windows 95. :)

guyr

Meh to both Windows 11 and mobile 5G

I actually am getting old, but I can't get excited about either Windows 11 or the next mobile standard 5G. My main computer is still running Windows 7 (I have a secondary on Windows 10, but rarely use it), and my 3 year old 4G phone on Android 10 does everything I require. Microsoft positioning Windows 11 not to run on anything older than 2 years makes my (non)-decision about Windows 11 very easy.

Samsung reveals DDR5 memory module that’s ready for Compute Express Link

guyr

Re: Don't get it

This is why USB has replaced older parallel connections

And why PCIe (serial) replaced PCI (parallel.)

More Linux love for Windows Insiders with a kernel update

guyr

Re: Windows Tax

Well, it's very convenient for developers running in a corporate network that's using Active Directory and Windows clients. You can have several entire Linux distros of your choice on that client, and you need never bother IT for permission to put a Linux machine on the corporate network.

Been doing that for well over a decade using VirtualBox. And VirtualBox has supported GUI apps - including seamless mode, which displays Linux windows independently directly on the Windows desktop - for many years. So, what unique capability does WSL add?

guyr

Re: Windows Tax

"Anything "Windows for Linux" is nothing but a cancer out to get at the Linux ecosystem from inside out."

So you're flipping Steve Ballmer's famous quote now? Strange world.

Absolutely fab: As TSMC invests $100bn to address chip shortage, where does that leave the rest of the industry?

guyr

U.S. chip companies need to cooperate

TSMC has stated they are spending $20 billion on their next generation fab. I would imagine that is a significant hurdle for U.S. companies. TSMC gets all sorts of subsidies, which the US doesn't provide. I think this will require some law changes as well, to allow US companies to cooperate without fear of being sued for anti-competitive practices. I know Intel has said they are looking to build two fabs in Arizona; I don't believe it, I think they are just trying to exert economic pressure on TSMC and others. Intel has already said their lower end processors are being farmed out to TSMC. These fabs are just too expensive for individual US companies to undertake without subsidies or other financial support.

Canonical: Flutter now 'the default choice for future desktop and mobile apps'

guyr

try installing Ubuntu Server on a box, and then try to upgrade to use the NVidia drivers.

Why would you be installing NVidia drivers on a server box? Typically, servers don't employ graphical tools.

SK Hynix boss predicts CPUs and RAM will merge, chipmakers will hold hands to make it happen

guyr

HBM?

I thought HBM was designed to address this problem. You'll never be able to solve this problem generally via merging memory into the CPU, because different applications need different amounts of memory. The latest Intel Xeon Cascade Lake has "only" 38.5 MB of onboard cache. So, figuring out how to expand HBM seems to be a better expenditure of time, energy and money.

Devuan adds third init option in sixth birthday release

guyr

I don't have a fax, I'm Internet only these days

If you have Internet, you have fax. :) I've been using fax1.com for years because there is no set monthly cost, and when you buy credits for faxing at 12 cents (US) /page, it never expires. Would likely get expensive if you fax in high volume, but I bought $10 probably 8 years ago and haven't used it up yet.


President Biden to issue executive order on chip shortages as under-pressure silicon world begs for help

guyr

the US used to HAVE fab plants of their own

This is the point people need to acknowledge. All the major semiconductor companies - IBM, Intel, AMD, etc. - did their manufacturing in the US. All those plants actually still exist, and are churning out chips today, but not the latest and greatest. Why? State-of-the-art chip manufacturing is not for the faint of heart. TSMC's current 7 nm fab cost $10B, and the next one is estimated to cost $20B. For comparison, AMD had a great year in 2020, with a net income of $2.4B, but that included a one-time income tax benefit of $1.3B, so more like $1.1B - which is still great for AMD, which had lost money for 15 years straight.

So, how does a company that makes $1.1B in its best year afford a $20B manufacturing plant every 10 years? It can't; the math doesn't work out. AMD outsourced manufacturing to TSMC for its latest chips because it had no other choice, not because it just wanted to make a couple more bucks. Building those chips on their existing (last-generation) production line would have been unsuccessful - too big, too much heat. And they didn't have the time or money to build a new current-gen fab.

So, how does TSMC do it? Government subsidies and indirect government support (tax breaks, low interest loans, etc.) TSMC can easily get that $20B loan and go 10 years showing little or no profit. No US company can do that. So how do we fix this? I'm not an economist or a miracle worker, but I doubt we can mass-hypnotize American investors to accept zero profit in their investments for 10 years at a time. So, off the top of my head, I'd say we need to reorganize how massive investments can be made while still showing profits. Public funding? Accelerated tax write-offs? Smarter minds than mine need to think about that.

Microsoft tells Biden administration to adopt Australia’s pay-for-news plan

guyr

Despite Microsoft pouring billions into loss-making media ventures, creating new options for advertisers inside Xbox games and boasting annual search-driven ad revenue in the billions, Smith blamed others for the media’s problems.

None of which has anything to do with news. I'm not a shill for Microsoft, Google or Facebook, but CNet could aid understanding by focusing on the topic at hand.

In Rust we trust: Shoring up Apache, ISRG ditches C, turns to wunderkind lang for new TLS crypto module

guyr

Re: But///but...this is routine programming

"My guess as to what's gone wrong isn't 'malloc' as such but 'new'. This can create quite large objects either from pool memory or on the stack"

"new" does not allocate objects on the stack. But the larger point is that while I haven't looked at the httpd code recently, most code written in the last 10-15 years doesn't use "new" directly, but instead relies on libraries to manage dynamically allocated memory. The problem of memory not being freed has been well-known for decades, so projects of any significant size use libraries to ensure that doesn't happen.

KDE maintainers speak on why it is worth looking beyond GNOME

guyr

Re: The "Problem" with Linux

At least Mate (based on GTK 2 last I checked)

Mate moved to GTK 3 several versions ago. I use it exclusively on all my Linux systems. Its original design goal, straight from the MATE web site https://mate-desktop.org/:

"The MATE Desktop Environment is the continuation of GNOME 2."

So, the look and feel of GNOME 2 is the essential ingredient, rather than a particular generation of GTK.

'This was bigger than GNOME and bigger than just this case.' GNOME Foundation exec director talks patent trolls and much, much more

guyr

Re: I'm just rebuilding my desktop ...

I've been running Ubuntu MATE (after starting with Mint MATE) for several years, and it has been trouble-free. You can opt for the LTS (Long Term Support) releases, which only come out every 2 years, and thus provide a more stable platform. In reality, I've found that MATE provides its own stability, so don't find the 6-month releases that disruptive. But I've adopted LTS on all my systems except for one VM I use expressly to see what's currently going on, simply to cut down on the update volume.

Third time's still the charm: AMD touts Zen-3-based Ryzen 5000 line, says it will 'deliver absolute leadership in x86'

guyr

Why skip a number series?

So the last generation of desktop processors was the 3000 series; why has AMD jumped to 5000? I see on this page:

https://www.amd.com/en/products/ryzen-processors-laptop

AMD appears to have allocated the 4000 series to mobile chips. Is that a one-time thing, or are they going to allocate even-numbered series to mobile and odd-numbered series to desktops?

And then there is this head-scratcher. On that page: "AMD Ryzen™ 3000 Series Mobile Processors with Radeon™ Graphics", under which is ... wait for it ...

AMD Ryzen™ 9 4900H, AMD Ryzen™ 7 4800H, etc. How's that for clear marketing???

Teracube whips out cheap, fixable phone with removable battery and four-year warranty

guyr

Re: A suggestion for a long life

Took a look at the specs - missing bands 66 and 71. So, forget about 5G. Unfortunately, this phone doesn't even have newer 4G bands. I like the idea, and my needs are pretty basic; I especially like the replaceable battery, which every phone before my current one had. My current phone is an LG G7, which was engineered in 2018, and has both 66 and 71. So, it doesn't seem like including these bands in 2020 should be a difficult engineering challenge.

Cross-platform app toolkit Flutter lead Tim Sneath aims Dart at an ambient computing future

guyr

Re: Cross platform?

Java is cross-platform, and is by far the largest language used for enterprise applications because it can get you where you need to be in a reasonable amount of time with reasonable resources. But Java GUI apps never really caught on because, as you say, they tend to look terrible and non-native, so people don't like them.

If Flutter can get an app on desired platforms in a short amount of time to get a feel for potential user interest, that seems like a desirable approach. Most organizations either can't afford or don't have the time to simultaneously develop for multiple native platforms. Then if sufficient interest exists, developers can weigh the cost and benefits of a native approach for specific platforms.

Oracle adds Arm-powered servers with up to 160 cores to its cloud – must be why it sunk millions into Ampere

guyr

workloads that can thrive on modest resources

"AMD is also playing nice with Oracle, with the chip slinger's Epyc Milan silicon on the way. Oracle plans to rent individual cores of the new processors to those running microservices and similar workloads that can thrive on modest resources."

This is an odd statement. AMD is presently outperforming all available Intel options by a wide margin. So why is Oracle targeting AMD cores only to workloads requiring modest resources?

Party like it's 2004: Almost a quarter of Windows 10 PCs living with the latest update

guyr

I've tried to update my single Windows 10 system to 2004 at least 10 times now. *Still* failing. Thankfully, my primary system is on Windows 7, which is safe because Microsoft abandoned it. :(

Relying on plain-text email is a 'barrier to entry' for kernel development, says Linux Foundation board member

guyr

Re: But for accurate communication between geeks at long distances, plain text wins every time.

I worked in software development for 40 years, now mostly retired but dabble in open source projects to keep my brain working. In the early years, we used rudimentary editors and makefiles. Got the job done, but the toolset was not really helpful to comprehension.

For the last 25+ years, we've used IDEs for the development task, which greatly facilitates understanding. Tools don't replace the need for a developer to understand what the code is doing, or even more important, fundamental software concepts. But it certainly helps competent developers to do their jobs with fully cross-referenced source, source-level debuggers, etc.

To your point, plain text facilitates accurate communication between remote systems. Today we have tremendous tooling that enables easier communication between people. Think distributed reviews.

WSL2 is so last year: Linux compatibility layer backported to older Windows 10 versions

guyr

"Rather than requiring Hyper-V, which is resolutely not part of Windows Home"

Ye of little faith. Please see my comment on the page in the following link. Hyper-V can be enabled on Home with some easy to follow steps.

https://docs.microsoft.com/en-us/answers/questions/29175/installation-of-hyper-v-on-windows-10-home.html

Linus Torvalds pines for header file fix but releases Linux 5.8 anyway

guyr

#ifdef/#define wrappers

Came to say exactly this. The problem of including a header more than once was solved at least 30 years ago, when I was doing C/C++ development consistently, using this #ifdef/#define pattern. I can't imagine any code passing a code review today without it.

Battle for 6GHz heats up in America: Broadcasters sue FCC to kill effort to open spectrum for private Wi-Fi

guyr

NAB is worried about it because they use it for production

Yes, that's it exactly. I found this topic interesting enough to do some searching, and found this article that explains some of it. Apparently that band is used by wireless microphones as well as microwave backhaul links. To us not in the industry, WiFi in the home wouldn't appear to impact either of those applications. However, I can see a scenario where someone is trying to do an on-site news report in a suburban neighborhood, and every house is leaking these WiFi signals, rendering the spectrum unusable for licensed users.

https://www.sportsvideo.org/2020/05/12/spectrum-faces-its-next-challenge-as-fcc-allocates-6-ghz-range-for-wi-fi-6/

CompSci student bitten by fox after feeding it McNuggets

guyr

It's likely the foxes will be euthanised

No, that's what we tell ourselves we do to our dogs, so we are able to sleep following the deed. The foxes will be simply killed.

guyr

*WE* computer folks are supposed to be smart enough to know how to use English

Same thought - apparently English is not a prerequisite for writing Reg columns.

Intel outside: Chip king Keller quits x86 giant immediately 'for personal reasons'

guyr

Smart but restless

Jim Keller is without doubt a very smart man, given his stints at every major CPU manufacturer. But he doesn't appear to like to stay in one place for very long. Given how complex the task is, I'm a little surprised the big names - Intel, AMD - are willing to make the investment. Keller joined Intel in April 2018, meaning he's been there a little over 2 years. How quickly can anyone absorb an architecture with billions of transistors to start making significant contributions? I worked in software development, not hardware, and we would allow new people a couple months to get familiar with the architecture of any project before we expected major independent work.

This'll make you feel old: Uni compsci favourite Pascal hits the big five-oh this year

guyr

Re: pascal was simply useless.

>> "Meh. Pascal appealed because it was strongly typed."

Indeed, others here are criticizing Pascal being strongly typed. They fail to recognize that this one feature is what made Pascal the preferred teaching language in the 70s. I went to a pretty rigorous engineering university, and they employed Pascal in both intro and advanced language classes. If you are instructing people who know nothing about structured programming, then you want to start them thinking in very methodical techniques. Pascal provided that structure then.

More than a billion hopelessly vulnerable Android gizmos in the wild that no longer receive security updates – research

guyr

Both manufacturers and Google are to blame really.

In my experience, the responsibility goes further than that, to the carrier. For an Android update to appear on my phone requires the cooperation of all 3: Google, the phone manufacturer, and the carrier (T-Mobile in my case). Unless all 3 are on board, the update will not show up on the phone.

Android owners – you'll want to get these latest security patches, especially for this nasty Bluetooth hijack flaw

guyr

Re: "you'll want to get these latest security patches"

I came here to say the same thing. We may *want* to get updates, but unless the maker of our phone and our carrier both provide the updates, then it ain't happening. Telling phone users to get an update accomplishes about as much as telling your dog to get a driver's license.

LibreOffice 6.4 nearly done as open-source office software project prepares for 10th anniversary

guyr

Re: "Has LibreOffice succeeded?"

"In LibreOffice, you can change chart type and hide or show the legend and that's just about it."

I've only used LO at home, where my needs are not that great. But I did add a chart to my retirement spreadsheet, and tweaked it to get what I wanted. The chart capability actually does provide quite a bit of customization, though it might not be as easy to find and apply as you'd wish. As one example I remember off the top of my head, to tailor how individual lines in a data series appear, you can't just right click the line and modify properties. Instead, you need to open the data series editor and change properties there.

Waity K8-y no more Pivotal: We'll unhook Application Service from VMware

guyr

We're now at a scale that is too massive for humans to touch.

Hmmm, why did I immediately think of SkyNet upon reading this?

There once was a biz called Bitbucket, that told Mercurial to suck it. Now devs are dejected, their code soon ejected

guyr

Re: Git

To think that we're going to be forever lumbered with this sad example of "eh, good enough" design saddens me when there are so many better systems that already exist.

Forever? Not if somebody thinks of something better. Remember, Torvalds developed git as a 1-month side project when he couldn't find an available version control system that worked as he liked, without unbearable licensing terms. The entire open-source universe has proven its meritocracy-based roots. Torvalds himself would not hesitate to throw out git if something provably better comes along.

Airbus A350 software bug forces airlines to turn planes off and on every 149 hours

guyr

Turn off and back on - patented process

I think Microsoft patented the process of turning the machine off and back on again to fix any random problem. Some serious fees are going to be paid for this.

El Reg sits down to code with .NET for Linux and MySQL, hitting some bumps along the way

guyr

Re: MS Access for Linux

In a corporate software development environment, I've never been in a group that seriously considered MySQL for production work. If open source is preferred for a database, PostgreSQL is a natural choice.

Delphi RAD tool (remember that?) gets support for Linux desktop apps – again

guyr

Re: Communicates with Delphi on Windows?

"This article is exactly about now being able to develop GUI application for Linux and no longer only console ones."

Thanks for the correction. I went back and reread this part:

"FmxLinux was developed by a third party, Eugene Kryukov. It has been licensed under “a long term distribution agreement,” says Embarcadero’s Marco Cantu in the announcement this week." and followed the link to the announcement. Makes more sense now. But I also read this on the announcement page: "active subscription to the Enterprise or Architect editions". What's up with Embarcadero? They really like to put the screws to their customers. Less pain-seeking customers can just license FmxLinux directly at a much lower total cost.

guyr

Communicates with Delphi on Windows?

"On the Ubuntu side, you have to install an agent which communicates with Delphi on Windows."

What's this? I took a 15-minute look at Kylix when it came out, and honestly I don't remember anything about it. But this quote has me scratching my head. Am I understanding correctly that to run a Delphi app on Linux, you need to shuttle work over an agent to a process running on Windows? Sounds like the mother of all kludges. I really can't see the niche for this. If you have a staff of experienced Delphi developers, and you want to deploy a bespoke app onto a Linux desktop? But if you also need to have a full Windows running to do the grunt of the work, why bother? Just run the app on that installation of Windows.

Okay, found this article on the Embarcadero site that explains things:

http://docwiki.embarcadero.com/RADStudio/Rio/en/Linux_Application_Development

The agent is only required during development, not deployment. You develop the app on Windows, and cross-compile it for Linux. GUI apps are not supported, only console apps. Meh, very niche market. If you want to run a full app on both Linux and Windows from a single source, several options are available. Java of course, but for native look and feel, wxWidgets gets you there.

Halleluja! The Second Coming of Windows Subsystem For Linux blesses Insider faithful

guyr

OS/2 failed for different reasons

I was inside the castle for pretty much the entire run of OS/2, from pre-1.0 to 3.0 Warp. From my perspective, what killed OS/2 was IBM's defiant adherence to the original design decisions, and a refusal to modernize. For example, IBM insisted it run on the 286, even though by the time it reached any measurable market, the 386 was ubiquitous. That led to numerous architectural restrictions that messed up the works. From a developer's viewpoint, the single input queue (as in all Windows versions prior to NT) was a nightmare; this was the old cooperative multitasking model, where a single badly designed program could hang the entire OS.

Microsoft addressed all these issues in NT, but IBM refused to budge with OS/2. When Microsoft demonstrated they were serious about NT, everyone gave up on OS/2. It held onto that darned single input queue to the very end!

All nodes lead to Rome: Epyc leak spills deets on second-gen Zen 32-core AMD server chippery

guyr

Re: Mostly harmless upgrades

I doubt mainstream businesses upgrade CPUs very often. If a company has a rack of dual CPU servers supporting their business and they are working acceptably, they don't risk disrupting their business by tearing apart that rack. Instead, they'll just keep those servers going until they are depreciated, then replace them completely with newer ones.

The only large scale upgrades I've ever read about were supercomputers, where they bring the system offline and replace (e.g.) 8096 processors at once. They can do that because time on that system is scheduled, and it's not used for daily purposes like processing credit card transactions.
