* Posts by Kristian Walsh

1612 posts • joined 10 Apr 2007

Businesses should dump Windows for the Linux desktop

Kristian Walsh

Your recollection is incomplete. 90% was the peak figure for user desktop replacements, but very quickly those users began to request a return to Windows as the Linux-based solutions were missing key functions.

The Linux ecosystem has no centre, and its continuous development model makes it much more of a moving target for internal application developers than Windows is, unless you expend a fair amount of effort maintaining your own distro. (Package maintainers are the real heroes of Linux - it’s a thankless, time-consuming job, but without them the whole ecosystem would collapse).

The nail in the coffin was the prospect of decentralised working. There simply wasn’t any good private-cloud solution for collaborative work. For several classes of application, Munich basically came to the realisation that to make Linux work for it, it would have to permanently invest in its own team of programmers and maintainers in addition to its existing IT staff (who also had their own grievances regarding provisioning and monitoring). It was a buy/build decision, and the real costs of “build” were now known to be far higher than simply paying for a ready-made product. After all, the city doesn’t build and maintain its own trams - the latest lot was bought from Siemens - so what made software any different?

Other organisations have fared better with Linux, but they were more pragmatic: adopting it where it was a viable and mature alternative. The mistake Munich made was the (ultimately ideological) decision to do a top-to-bottom replacement of every software system with a FOSS alternative. That really makes no sense - you might not like Exchange, but is there really a FOSS alternative that’s anywhere near as good at managing office communications?

China's 7nm chip surprise reveals more than Beijing might like

Kristian Walsh

Re: Ours

And yet American companies do build and operate state-of-the-art fabs in the EU. So, if the EU were trying to stop that, it’s going a funny way about it. Certainly, you don’t give a billion euro in grants to stop someone coming into your territory.

Intel just kicked off two more CPU fabs in Italy and Germany, to go with the one it already has ready to go in Ireland. The UK didn’t make the shortlist for the new sites partly because of Brexit (you need a stable supply-chain for a semiconductor plant), and mostly because of the lack of skilled labour.

That latter one is what has really crippled English industry (I say English, because education is devolved) - underinvestment in education, especially at secondary level, means it costs far more in training to hire a plant operative in England than it does in Germany, France, Italy or Ireland.

Your strange assertion that “diversity” is incompatible with a meritocracy shows you’re not thinking things through. I’ve worked with and for enough white thick-shits who had rich parents to get them through college to know where the real problem is in hiring practices, and it’s not ethnic minorities.

Kristian Walsh

Re: Ours

The meaning of that original statement is very clear: all of the UK Conservative Party’s bleating about being the defenders of British industry, and all of their anti-foreigner rhetoric pales into insignificance when presented with the opportunity to earn quick cash (and pocket a personal payoff as a consultant after retirement).

Software issues cost Volkswagen CEO Herbert Diess his job

Kristian Walsh

Re: Stand by for subscription services like the ones BMW recently announced.

You’re thinking of the problem as if the seat heaters were a standalone product, instead of one component of a system where there’s a power and data bus already in place, with a high-performance general-purpose computer attached to that bus. A simple solid-state current driver IC, uniquely addressable on the bus and controlled by the cabin software, is far more flexible and cost-efficient than the electro-mechanical system you propose, especially as that part can be mass-produced to suit any actuator application within any vehicle.

Also, by having these loads centrally controlled, you can very closely manage total current draw on the 12 V electrical system, and keeping maximum current down can save you a lot of money.

If that wasn’t enough reason, there’s the bill of materials: replacing the cost of your timer and relay with a common solid-state part across 4 million units a year is an example of They’re Doing It Right, and that’s before you consider the knock-on costs of maintaining supplies of additional unique parts.

Kristian Walsh

Re: Fast chargers

Everyone homes in on DC fast-charging because it looks most like the way you fill up a petrol car, but it’s really not how the majority of EV recharging will happen. Private cars spend 95% of their time parked, so given the ubiquity of electrical supply, that parked-time is when they will charge. Any residence with a driveway can use AC home charging at 7kW (rising to 22kW in countries with 3-phase domestic supplies), which is cheaper and more convenient than forecourt charging. AC charging can be rolled out across pay-parking facilities, commercial premises and workplaces too without much technical difficulty. On-street charging is still a challenge, but it’s a very solvable problem: the difficulties are mostly political and commercial, not technological.

You say filling up with petrol is “convenience”, but is it really? Having the car charge up overnight — or while you do your day’s work— is in a different league when it comes to convenience: just unplug, and go. No wasting your time detouring to a station, or queueing at the till... and it’s cheaper too.

The only place where fast-charging becomes relevant is for very long distance journeys (exceeding 75% of your battery’s range). But with current capacities, those journeys are very rare. Yes, there are people out there who have to drive more than 200 miles at a go, but they really are outliers. For them, a diesel car is still the best option, but eventually EVs will catch up there too.

The electrical grid is entirely able to accommodate EVs. Demand on the grid today is not in any way constant, and it has to be able to operate at a peak that is far in excess of the normal loading. Nightly charging actually solves a problem that has plagued electricity generators since Edison’s day: the most efficient way to operate most types of generation is to run them continuously, but demand fluctuates wildly over the day and week, and if you over-generate, you have to dump the power. (Search for “duck curve”, and follow the articles you find to learn more about this topic.)

EVs actually help the grid. They aren’t steady-state dumb-loads, they have intelligence, and are all capable of negotiating with the supply (the charging connector has a serial datacomms link, and the protocols used on it include a Vehicle-to-Grid facility). Right now, you can schedule cars to take electricity at any time of your choosing (to reduce your home bill), but there’s no technological impediment to prevent the electrical utilities and vehicles negotiating their charging periods to react to instantaneous power surpluses in the grid. There’s also the prospect of using your EV battery as a domestic demand buffer, for which the utility will reward you with cheaper electricity.

Regarding capacity, it might interest you to know that domestic electricity demand peaked in the early 1990s, and has declined since (datacentres have picked up some of the slack, but not all). A lot of infrastructure was put in place based on that level of demand, and mass adoption of EVs will just bring things back towards those projections.

But it’s not like grid infrastructure is immutable once installed. Unless you’re unfortunate enough to live in Texas, the electrical grid is constantly maintained and expanded to meet demand for electricity; and it’s not like a hundred million EVs are just going to wink into existence and plug in next week: the transition will take decades, and decades is long enough to adapt any grid.

This credit card-sized PC board can use an Intel Core i7

Kristian Walsh

Re: Fahrenheit?

... OR GIVE ME CLYDE ?

Kristian Walsh

Re: Fahrenheit? More Precise?

True, although since we’re being technical, I’m sure you remember that water actually freezes at its triple-point, which is 0.01 °C (273.16 K), not zero, and that the boiling point is only 100 °C at 1013.25 hectopascals of air pressure (= 1 atmosphere); in a hard vacuum, pure water has no liquid state, and so boils at any temperature over its triple-point, and freezes at any temperature below.

Kristian Walsh

Re: Fahrenheit? More Precise?

We have decimal places for when we want to be precise. There's no need to convert out to an odd-shaped unit with an offset baseline.

Big Tech silent on data privacy in post-Roe America

Kristian Walsh

Re: Democracy

"So if a Democratic president, a Democratic Congress and a Democratic senate were unable to stop this, exactly what was the point in voting for them ?"

For the purposes of “stopping this”, the Democrats (or, more correctly, Senators who would uphold Roe v Wade) do not hold the Senate.

Why? Well, like the President, the Justices of the Supreme Court can be impeached and removed from office by Congress. However, just like the President, doing so requires a majority of the House of Representatives to vote for impeachment, which triggers them to be sent for trial in front of the Senate, after which a two-thirds majority of Senators must vote for conviction in order to remove them from office.

The first part is easy: there are reasonable claims that the appointments of Neil M. Gorsuch, Brett M. Kavanaugh and Amy Coney Barrett were all invalid, as all three committed perjury during their appointment hearings. Under questioning, each explicitly denied that they would even consider repealing the judgement in Roe v Wade, and yet, on the very first instance that the matter came before the Court, each agreed wholeheartedly to overturn it. I can believe in Justices’ opinions shifting over many years, but such a rapid and complete about-face on this issue stretches credulity.

But impeachment isn’t enough. The conviction must be voted for by the Senate, and by 67% or better. In a normally-functioning government, a proven case of perjury would be enough to have one or more of these Justices removed from the court, but the highly partisan reality of US politics, where most of the 50 Republicans will pretty much vote down any Democratic impeachment, makes such an outcome impossible… unless the Democrats were to somehow secure 67% of the seats in the Senate.

... and that is exactly the point in voting for them.

Jeffrey Snover claims Microsoft demoted him for inventing PowerShell

Kristian Walsh

Your use of the present tense is odd in that post, given Microsoft’s current championing of PowerShell—and, for that matter, Linux—on Windows today. This is not the 1990s anymore.

And Microsoft never had any relationship with IBM that they could “split” from, apart from being a vendor of DOS and a development partner in OS/2 - both in the 1980s: anyone managing then retired long ago.

The rot in the 1990s was what happens to any dominant player in a market: gaining share becomes more difficult, so management decides instead to rent-seek, and adopt defensive positions to block competition (we’re seeing this with Apple on mobile). Of course, these tactics are contrary to the needs of customers, and can sow a long-lasting poison that damages the chances of any genuinely good product that comes from the company later. As an example, Windows Phone was the most technically advanced mobile SDK, with amazing low-resource performance, and I believe that it would have been a success had it not come from Microsoft, with all the baggage and industry distrust that that entailed...

Microsoft, like other successful companies, also attracted the bullshitters. I worked for Apple in the 1990s, and we used to say that the only good thing about working for a dying company (which it was then) was that the kind of MBA-but-no-intellect manager-class tended to avoid us when choosing a place to inflict themselves upon. Instead they flocked to Oracle, Netscape (heh), Microsoft and Sun, who were all on the up, and thus offered a low-risk environment for their inept ideas. (Not to say we didn’t have home-grown assholes, but nobody tried to bring in morale-destroying “new ways of working”.)

Kristian Walsh

Re: powershell command missing

The bash solution becomes ugly very quickly once your needs diverge from "results sorted by directory size". The PowerShell version, on the other hand, is trivial to adapt to sorting by any attribute of the filesystem objects it returns.

You can also trivially adapt the PS version to give you sizes, sorted by most-recently modified directories, or excluding directories that have not changed since a given date. Try that in a bash one-liner if you really want to learn the darker corners of bash (but really, don’t do it: complex functions belong in scripts, not one-line alias macros)
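For a flavour of what I mean, here’s a rough sketch (not the exact one-liner being discussed above, so treat it as illustrative rather than gospel): directory sizes, newest-modified first, skipping anything untouched since a cut-off date (the date below is just an example):

# Sketch only: sum file sizes per subdirectory, most recently modified first
Get-ChildItem -Directory |
  Where-Object { $_.LastWriteTime -ge [datetime]'2022-01-01' } |
  ForEach-Object {
    [pscustomobject]@{
      Name     = $_.Name
      Modified = $_.LastWriteTime
      SizeMB   = [math]::Round((Get-ChildItem $_.FullName -Recurse -File |
                   Measure-Object Length -Sum).Sum / 1MB, 1)
    }
  } |
  Sort-Object Modified -Descending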

Lest I be accused of fanboyism, I should mention that I don’t use PS very much: I overwhelmingly write scripts in bash (or, more accurately, the subset that approximates the original Bourne Shell - a habit formed from writing for embedded systems using dash and other lightweight shells). But I’m not some kind of idiotic zealot who thinks that just because something is from Microsoft, it’s inferior to whatever came from Unix. The reason I don’t use PS is that it’s not well supported on Linux, where I do most of my scripting; and on Windows, its default commands are all different to what I know from Unix.

But if you’ve ever had to parse the crap emitted by Linux tools like ip, you'd wish that Linux had something like PowerShell’s object-stream approach to pipelines.

Making the most common Linux tools produce key-value output (even if it has to be in a lousy format like JSON) would be a start, but that requires cross-project collaboration, which is something the Linux world has proven to be very bad at.
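To be fair, there has been some movement: recent iproute2 builds can emit JSON themselves. As a purely illustrative sketch (assuming an ip new enough to support -j, PowerShell 7 installed on the Linux box, and the usual ifname/mtu/addr_info fields in its output), this is roughly what the object-stream approach buys you:

# Sketch: consume `ip -j addr` as objects instead of scraping text
(ip -j addr | ConvertFrom-Json) |
  ForEach-Object {
    [pscustomobject]@{
      Interface = $_.ifname
      MTU       = $_.mtu
      Addresses = ($_.addr_info.local -join ', ')
    }
  } |
  Sort-Object Interface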

Kristian Walsh

Re: I would get it fired for inventing Powershell

I suspect you don’t do much shell programming. Consider this bash snippet:

X=009
test $X = 9

... should that return true or false? It returns false, because '=' compares text. There’s a difference between lexical comparison (the characters must all match) and value comparison (parse to numeric, then the numeric results must match), and numeric comparison is what -eq, -le, -gt and company do with the Unix `test` command ([ is an alias for test, chosen to make shell scripts a little more readable).
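(For the record, `test $X -eq 9` does return true here, because -eq parses both operands as integers before comparing them.)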

PowerShell is similar, but you first need to know that PS is a typed runtime (like Python). The equivalent code:

set X 009
if ( $X -eq 9 ) { echo "true" }

will report true, because X is a number. Change that initial assignment to be a string, and the comparison now fails.

set X "009"
if ( $X -eq 9 ) { echo "true" }

If you want to force integer comparison, you can cast the operand:

set X "009"
if ( [int]$X -eq 9 ) { echo "true" }

Datacenters in Ireland draw more power than all rural homes put together

Kristian Walsh

Re: Datacenters vs grid power.

Because that hot air is typically below 35 °C (i.e., only 10-15 °C above indoor ambient), efficient energy extraction is expensive (although it's getting better). The low temperature also precludes use for district heating except in places with high-density accommodation and harsh winters, and Ireland has neither.

Kristian Walsh

It would be a massive reduction for Ireland, given that so much of the DC equipment located in Ireland is providing services for EU and global customers.

The uptick in demand is one of the knock-on factors of Brexit. EU customers want their data-processing to be carried out within the EU. Most of the transatlantic fiber makes landfall in the British Isles, but with the UK now outside the EU, Ireland has become a preferred location, despite its thorny planning process.

The usual suspects will say this is all about tax rates of course, but datacenters are cost-centres, and their location has nothing to do with where profits are booked. Amazon, for instance, located its first AWS Europe region in Dublin, but it has no business operations in Ireland at all: even its shop website is a redirect to amazon.co.uk. Amazon’s European operations are run by a Luxembourg company.

Growing US chip output an 'expensive exercise in futility', warns TSMC founder

Kristian Walsh

Re: Wait for it, TSMC. Just wait for it.

It's not about “top notch” minds. The belief that it is is part of what got the USA into this mess, but the high cost of education in the States is the real killer for manufacturing competitiveness. (I won’t mention healthcare costs, as that cripples all sectors’ competitiveness, not just that of the manufacturers.)

High levels of college debt from even the most modest of institutions make it impossible for companies to hire high-skilled operatives at internationally-competitive wage rates.

That same system of amassing debt just to get educated discourages manufacturing workers from even bothering, leading to lower skill levels in the work-force, thus exacerbating the previous wage issue through lack of supply.

Basically, the whole education system in the USA discourages “making things” as a career option, and this is why, as Chang noted, the USA has pivoted away from mass manufacturing towards services and niche, cutting-edge technology. The alternative would require a ground-up overhaul of how Americans are educated, and that’s both impractical (districts, counties, states, private colleges and federal government all have fingers in the pie, but nobody is in charge of enough of it to change anything), and politically lethal for either party.

GNOME 42's inconsistent themes are causing drama

Kristian Walsh

Re: I hate "modern" UIs.

“Creatives” shouldn’t be let near user interfaces until those interfaces have been properly designed. UI design is, strictly speaking, a branch of ergonomics, and the people I know of who did it for a living are serious engineering types.

The problem is that a device’s user-interface is now seen as a marketing point, not a functional one. That means lots of flashy but sub-optimal solutions get chosen over those that are more functional.

Unfortunately this trend is creeping out of computing/web/phones and into areas where it’s starting to pose genuine problems. Those cars with touch-screen-only secondary controls are effectively unusable for elderly or other users with poor fine motor skills (= the ability to accurately position and work with your fingers), whose motor skills would otherwise be perfectly adequate to safely pilot a car. Previously, both the secondary (air, radio, seats...) and the primary (steering, speed, brake, indicators) controls in a car had gone through decades of usability engineering to ensure that they could be used by people with pretty much any level of fine motor control, but the shift to touch-screens for secondary controls has undone that good work. It has, however, saved money (if you think Tesla uses touchscreens because they’re cool, you should look at the costs of designing and making physical control buttons sometime).

Kristian Walsh

I suppose when you got a laser printer, you started to print all your documents in 3-point text too, just because you could...

Resolution is not the same as size. Bumping up the number of pixels on your display doesn’t give you more space to lay out your work - or improve your eyesight. And making the display itself bigger also comes with a penalty.

First off, the limiting factor for information density on any interactive display is the human operator. Text needs to be large enough to be clearly readable at the typical viewing distance. That means larger type size (in pixels) and greater line-spacing. The reason your modern UI’s default settings on a 4k monitor still use the same physical character size today as a VT100 (around 2.0 x 3.5 mm) is that the character size was not pulled out of a hat; it is the result of detailed study of human vision and ergonomics.

UI controls also need to be larger, but for a different reason. Monitors have got bigger physically. Whereas twenty years ago a single 17 inch monitor was the norm, and 30 inch the practical limit, now that norm is 24 inches, and the maximum can be multiple 35-inch displays. Because the distance you have to move the mouse cursor can now be much, much bigger than before, it is much harder to accurately position that cursor on buttons that are further away - so as the average display surface gets bigger, the controls must get bigger too for the system to remain usable. (The theoretical basis of this has been understood since the 1950s: see https://en.wikipedia.org/wiki/Fitts's_law)
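(For reference, the usual Shannon formulation of Fitts’s law is T = a + b log2(1 + D/W): the time T needed to hit a target grows with the logarithm of the distance D to it divided by its width W, so if D grows with the size of the display, W has to grow with it to keep T constant.)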

But the problems with this KDE release aren’t about density; they come from misunderstanding the rules they’re working with, and instead aping the mistakes made by an existing popular system (the fallacy of assuming that popularity means high quality). They have made the same mistake that Apple made with iOS 7 when they tried to copy Windows Phone’s minimalist UI - they went too far, and simply removed all the button-borders and other visual signifiers from their UI without providing anything else in their place. The result is a mess.

Windows Phone gets a lot of blame for this, but it’s misplaced. Anyone who has actually used a Windows Phone device will tell you that its UI may have been flat, but it was still distinguishable: yes, there were only a handful of visual cues, but they were used consistently across the whole UI - and the Windows Phone UI did feature boxed buttons; frameless buttons are an abomination that the world can thank Apple for. As an example, colour was consistently used as an interaction cue: pretty much any text drawn in the UI highlight colour would react to touch - Apple badly broke this basic rule when it did iOS 7, indiscriminately using colour both as a decoration and as an interaction cue. The irony that it was Apple being so ignorant of basic design principles isn’t lost on this ex-employee, but that’s what you get when you fire all your UI designers (the Human Interface Group was closed back in the late 1990s) and expect graphic artists to have the same skillset. “Looks like” is not “works like”.

Intel counters AMD’s big-cache PC chip with 5.5GHz 16-core rival

Kristian Walsh

Re: Let's Qualify that 5.5GHz Figure

Yeah, it’s crazy isn’t it. The Motorola 68000, the first CPU I learned to program in assembly-language, was so-named because that was the number of transistors on the die!

…and the process node for the first 68000 chips was 3.5 μm, meaning that each of those transistors was around 500 times larger than those of a modern CPU.

Kristian Walsh

Re: Let's Qualify that 5.5GHz Figure

Yes, it was fine when the node measurement was just something that was used internally at the fabs, but tech bloggers with not quite enough knowledge have unfortunately turned it into a Holy Number. No surprise, as we’ve been here before several times with “numbers”, most recently with the confusion about high versus low TDP, cores versus threads, 64-bit versus 32-bit, and of course, the granddaddy of them all: Megahertz.

Alder Lake is built on what, until very recently, Intel called its “10 nm Enhanced Super-Fin” process. Despite having that “10 nm” in its name, this is actually able to pack more gates onto a die (100 Million transistors per square mm) than the slightly older TSMC “7nm” process (92 Million transistors per square millimetre) that AMD is using right now.

However, TSMC has a newer “5nm” process (albeit only for ARM SoCs) that packs in around 165 Million transistors per square millimetre, and Intel has no answer to that until its “Intel 4” process starts up, allegedly later this year, with an expected density of around 180-200 Million transistors per square millimetre. In other words, when it comes to wafer fabrication, whoever has the newest thing has the best thing.

But AMD is expected to jump to that TSMC 5nm process later this year, and so will take back the process lead; Intel’s first “Intel 4” CPUs, meanwhile, will be “Meteor Lake” in mid-2023.

114 billion transistors, one big meh. Apple's M1 Ultra wake-up call

Kristian Walsh

Re: I saw the reveal presentation, and, while I'm no fanboy, I was amazed

If you understand your workload, why don't you just max out the RAM when you buy the system?

It’s precisely because I do understand my workloads that I am able to defer the cost of any upgrades. “Maxing out” is a clear sign that a buyer doesn’t have a handle on their real requirements. You can get away with it for one-off purchases, but those mistakes multiply quickly.

I make my living writing software systems of various sizes, operating from a few kilobytes to a terabyte in data-size. Right now, I know the work I’m doing has low RAM requirements, but as soon as I shift to anything cloud-based again, with lots of containers, I’ll buy the extra RAM. By then it will also be cheaper, both nominally and in terms of spending power. But then, maybe I’ll still be writing small systems through the life of the machine. If so, also great: I haven’t wasted a cent.

But that’s just my single PC. My point was if you’re making that decision for thirty to a hundred workstations, the consequences of such a cavalier approach to provisioning as you’re suggesting will add up quickly, even before we consider Apple’s steep price ramp.

As for downtime, it’s twenty minutes, including the memory check (five, if I do what you did and don’t self-test the system hardware before putting it into use). I’m fairly confident in my ability to safely fit a pair of DIMMs without damaging my computer. But don’t imagine that’s any kind of a boast: computers are designed to be assembled by low-skilled labour - there’s nothing magical or “elite” about putting components into one yourself.

Kristian Walsh

Re: I saw the reveal presentation, and, while I'm no fanboy, I was amazed

My experience is different: I have upgraded RAM in every system I’ve ever owned, both Mac and PC (my last Mac laptop, a 2015 MacBook, had a RAM upgrade in its lifetime too). It’s the one thing that I do tend to run out of in my workloads, but then, I had a period of developing systems that required a lot of containers and virtualised systems, and they tend to consume lots of RAM.

Storage is another thing that I have always ended up upgrading, but then it has been for speed (SATA 3 to SATA 6 to NVMe), rather than capacity. I don’t think I’ll need more than 1 Tbyte onboard for a long time to come.

The issue of expandability came to my mind because I have just replaced my own desktop system, and I fitted it with 32G of DDR5. I may end up needing 64G in a year or two, but the price of DDR5 is so crazy right now that it just wasn’t financially sound to buy that much until I really needed it. By then, the usual pricing curve for electronics will make it much cheaper for me to upgrade. Apple’s design decisions with the M1 chip take that option away from me, and I find that position difficult to swallow in something that claims to be a professional tool.

Kristian Walsh

Re: I saw the reveal presentation, and, while I'm no fanboy, I was amazed

You’ll hit that RAM barrier pretty quickly, though, and then you’ll have nowhere to go except sell and replace. If you’re a professional, your time is your income, and the downtime of buying and setting up a new system compared with unscrewing a panel and adding DIMMs is significant.

That’s my big problem with these: you’re basically given a choice of grossly overpaying for the memory you’ll need over the unit’s lifetime, or ending up at a dead-end. Also, at 128 Gbyte, that maximum memory purchase isn’t even that high.

That inability to cost-effectively expand means that this isn’t a professional system; and the specifications are more than any regular user could ever find a use for - the only mainstream high-performance computing application is gaming, and there’s almost no Mac gaming market. Even if there were, games require GPU rather than CPU muscle, and the M1 Ultra is nowhere near as competitive as a GPU.

I think that’s why the response to the M1 Ultra has been so muted. It’s great, in theory, but who would actually buy one? At the other end of the range, I also feel that there’s too much of a gap between the Mac mini and the entry-model Studio: the entry Studio model is nearly double the price of the top mini.

Kristian Walsh

Those industries are supposed to buy the Mac Pro, which can accommodate 1.5 Tbyte of RAM.

I say “supposed to” because pricing on Mac Pro lives in its own private fantasy-land - anyone paying €6k+ for a quad-core Xeon 3223 and 32 Gbytes of RAM would need their head, and/or budgetary approval privileges, examined.

Just two die for: Apple reveals M1 Ultra chip in Mac Studio

Kristian Walsh

Re: Mac Studio

I had one of those in my mac-using days. Probably the last good Mac desktop: a function-first case design, internally expandable, could be placed under a desk easily, and (surprisingly for Apple) it wasn’t priced at a multiple of competing systems (I think my 2x Xeon model cost about 25% more than an equivalent HP at the time).

This new thing is neither fish nor fowl. The low-end model is actually pretty decent for someone who needs more than the €1000 M1 Mac mini and cannot leave the world of Apple, but more than doubling the price of the top Mini to get to a €2200 entry-level Studio is a real deterrent. (I suspect some of this pricing is supply-chain related, but I don’t expect Apple to drop the prices when things improve, either.)

But it’s the upper-spec “Ultra” one that has no use-case that I can see: it’s a sort-of workstation CPU, but you can’t expand the RAM or storage, and it runs an OS that is poorly supported by vendors of the sort of software you normally need a workstation to run. At €4,500+, no rational purchaser would buy one (especially as it lacks the “tech furniture” appeal of the Mac Pro).

But, looked at in the hermetically-sealed bubble of Apple’s product-range, it does fulfil a function - it bridges the chasm in price between the midrange Mac Mini and the workstation Mac Pro, whose €6500 entry-price for an 8-core Xeon system with just 32 GB of RAM is nothing short of offensive.

Apple has missed the video revolution

Kristian Walsh

Re: Apple had spectacularly bad timing

M1 is a great personal-computer SoC, but once you get into the high end, it quickly becomes unsuitable for demanding workloads. The biggest reason is that RAM is capped at 64 Gbyte, and has to be chosen at time of purchase.

64G of RAM sounds like a lot, but it won’t be in 2-3 years, and it already isn’t enough now if you’re working with 8k video.

For context, consider that Apple’s own Xeon-based Mac Pro systems, aimed at this space, max-out at 1.5 Terabytes of RAM. That gives you an idea of the sort of configurations people are looking for in this space (although any production house with proper budgetary control won’t be buying Mac Pros if they can avoid it when comparable rack-mountable systems can be had at much lower cost)

Of course, the solution would be to design an M1 using a traditional offboard RAM architecture, but once you pull the DRAM off the package, you lose a lot of the M1’s performance advantages, which were made possible by not having to deal with different RAM specifications.

But, even if M1 kept its performance figures, it’s still outclassed in this space: once you’re on a desktop, power consumption and cooling become less of an issue, and it’s hard to argue with the much higher peak performance per core figures achieved by AMD and Intel chips.

Kristian Walsh

Re: My favorite irony…

These huge LED-matrix displays are normally driven by Windows, because the makers of the display controller hardware only supply drivers for Windows systems; even getting Linux drivers for these things is hard: the OS licence saving is a pittance compared to the complete cost of the rig, and Linux requires a lot of tuning before it will work acceptably with this kind of real-time processing.

But even if it was a macOS system, it would not be Apple hardware (you need a full, multi-slot PCI chassis, which Apple no longer makes), but Apple themselves would still have to take responsibility for keeping the hardware drivers up to date. That’s a lot of effort for one single installation.

But if you’re looking for more examples you can go further down the chain: all of the high-tech automated assembly of Apple’s products is controlled by Windows PCs, because that’s the OS that all the equipment used in PCB and computer-controlled manufacturing runs.

(Why is this? Well, apart from cost, there’s long-term support. I once had a client who came to me for advice for his business which sold a very expensive piece of specialist machinery built around a MacOS-based controller. When Apple killed support for CarbonLib, in which all of the control software had been written - for performance reasons - it became impossible for him to source new controllers, as the new macs would only run versions of OSX that did not support Carbon. Remember, you cannot legally run OSX on generic Intel boards. My advice was to hire someone to rewrite everything on Linux, by the way. “Rewrite everything” was exactly not what he wanted to hear, but it was the only way out of the trap.)

But none of it matters. For a long time, most of Intel and Microsoft’s advertising and promotional material was produced on Macs. I was an Apple fan in those days (partly because the systems were good, but also because, as an employee, I was able to buy that quite good hardware without paying the full, extortionate, prices), and this fact used to raise a smile, but really it did nothing to convince people using Windows systems that Macs were better. The same goes now, in reverse.

Kristian Walsh

Re: Apple dumbed down and threw pros under a bus

No, he means video and audio. Avid Media Composer launched on Macintosh and was Mac-only through most of the 1990s; basically the time it took to become the preeminent non-linear editing system for video production. Even today, people in film and TV production will use the term “Avid suite” as a generic term, even if the actual software is a competitor system such as DaVinci Resolve or Adobe Premiere Pro.

Part of the reason for this early advantage was that all Apple systems (until the Macintosh 4400 and later, the iMac) supported SCSI hard-drives. That higher throughput and ability to attach up to 6 external drives was essential if you were trying to do video editing in the 1990s.

I used to work at Apple in the late 1990s, and audio-video production, pre-press and education were the only markets where Apple desktops still had any traction, and it was a big worry when first Adobe, and then Avid started to produce Windows versions of their flagship products.

Apple did purchase Final Cut to keep a foot in the game, and while it had initial success at the higher end of the market, Apple’s lack of competitive workstation products throughout the 2010s meant that it has now slipped into a kind of “prosumer” space, with most large productions moving to Windows-based systems, which offer much greater flexibility (and value for money) in terms of the hardware you can run them on.

A Snapdragon in a ThinkPad: Lenovo unveils the X13s

Kristian Walsh

Re: Seduced and abandoned by yet another attempt at a non-x86 Windows?

For me, Google is still the leader when it comes to offering stuff to developers and then leaving them high and dry later. Apple also has a history of abandoning developers at dead-ends with no migration path except “rewrite everything” (OSX CarbonLib?), although it has improved immeasurably if you compare now to its Don Juan attitude in the 1990s, when APIs would be trumpeted as the Second Coming at one year’s WWDC, then completely written out of history by the time the next one arrived (just off the top of my head I can remember OpenDoc, Dylan, MacApp, OS 8 “Copland”, Appearance Manager, QuickDraw GX... and I may even have contributed my own tiny part toward this too while at Apple).

Returning to Silverlight as the example, at least there you had a forward migration path to converting apps into WinRT 8.1 applications - a class-path rename would get you about 75% straight carryover of application code, and Microsoft went to some effort to provide you with workarounds when that didn’t work, which isn’t bad considering the complete platform change underneath. Sure, you lost the opportunity to run in-browser, but by that time using these browser-plugin runtimes was rightly seen as far less efficient, and no more secure, than just downloading and running a signed application.

The issue for users with these changes is that Microsoft’s third-party devs are pragmatic when it comes to the use of their time, and are slow to rewrite apps just so they can say they are using The Latest Library (in contrast to Apple’s cohort who tend to leap over themselves to be first to have rebuilt their app against whatever Apple announces).

That’s not saying that Microsoft’s current clusterfuck with GUI toolkit APIs is in any way okay, mind you. I happen to really like WinRT, or UWP as it’s known these days (and I especially like its responsiveness on slow CPUs), but I’d be happy to abandon it and use something else that has a long-term future if only MS would just make its damned mind up about what that “something else” should be.

As for x86, Microsoft isn’t to blame for these failures. Intel has very effectively used its position as the dominant supplier of desktop and laptop chips to discourage vendors from producing non-x86 Windows machines. Remember when Windows NT was launched on about five ISAs: Microsoft really wanted to break with “WinTel”, but the hardware builders weren’t so free to do so. Same story today - x86 ISA chips have higher price and higher profit-per-unit than ARM parts, and so the suppliers of these really want hardware vendors to stick with x86. Maybe if AMD applied some of its Ryzen know-how to an ARM ISA part, we’d see real competition, but I wouldn’t hold my breath.

Kristian Walsh

Re: Seduced and abandoned by yet another attempt at a non-x86 Windows?

There was indeed a PowerPC build of Windows NT4 Workstation (for devices conforming to the PowerPC Reference Platform, so it wouldn’t work on Apple hardware). I worked for Motorola at the time, and NT was one of the options available with its “PowerStack” PPC workstations.

Microsoft gives tablets some love in latest Windows 11 build

Kristian Walsh

The middle finger has been in the emoji set since Unicode 7.0 in 2014, as U+1F595.

The closest you’ll get to “wanker” is Waving Hand, U+1F44B, which has the delightful side-effect of making its recipient think you were actually saying hello.

(... much like replying “Good For You!” to an email, with the capital letters conveying your real meaning.)
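If you’d rather produce those characters from code than hunt through an emoji picker, the code points above are easy enough to turn into strings from any .NET-based shell (purely as an illustration):

[char]::ConvertFromUtf32(0x1F595)   # U+1F595 REVERSED HAND WITH MIDDLE FINGER EXTENDED
[char]::ConvertFromUtf32(0x1F44B)   # U+1F44B WAVING HAND SIGN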

Meet Neptune OS, an attempt to give seL4 a Windows personality transplant

Kristian Walsh

"a bad programming interface".

As this is the NT kernel itself, not the Windows user-space, I don’t think Linux supporters should be shouting too loudly about bad interfaces. (x86 and x64 syscall numbering...?)

Actually, even in user-space, my fairly long experience of Linux development would still advise a more conciliatory attitude: once you get away from the core APIs and tools that were shared with Unix, Linux APIs get very inconsistent, and you see the same concepts modelled in different ways in different libraries. Whatever its deficiencies, Windows (and MacOS) user-space libraries are designed to a “house style” and the concepts learned from using one library are helpful when learning others.

Nothing’s perfect, and that inconsistency is the downside of the Linux development model, whose large number of semi-autonomous projects, each free to do what it wants, allows fast feature improvement and bugfixes.

Canalys: Foldable shipments could 'exceed 30 million by 2024'

Kristian Walsh

The “will” you refer to appears in a sub-clause of that sentence, a sub-clause which functions as the object of the main verb in the sentence, and that verb is “to forecast”. The meaning of that main verb implies that the sub-clause is in fact a conditional statement, despite looking like the simple future tense. (Consider the example: “maybe I’ll go”, or, as written longhand: “it may be that I will go”. That is not a definite statement, despite the apparent use of the future simple “will go”.)

So, because the sentence is making a conditional statement, it is entirely correct to summarise it using “could”

…or “might” if you’re really sceptical of the source.

20 years of .NET: Reflecting on Microsoft's not-Java

Kristian Walsh

Yeah, only the exact kind of writing code against a supplied API that you happen to be doing is “real” programming.

Kristian Walsh

Re: Notably missing in action...

That’s the problem everyone has on Linux. There is no native UI, just a half-dozen pretenders to the title of “Linux UI”.

I have faced it in my own work over the years: you start a project, and you first have to pick a UI library - a task that itself robs you of time, as you try to pick the facts out of various fanboy postings on the web. I’ve often regretted decisions that I made, or had forced on me, at project outset (e.g., the arse-pain of having to wrap existing datatypes in GObject boilerplate just to be able to have custom list-item views* in GTK+, or every single aspect of using Enlightenment).

In the end, I settled on Qt for Linux GUI work, but it took a lot of dead-ends to get there (and did I mention the living hell of having to write a UI in Enlightenment?).

Ironically, Windows suffers the same problem right now, with multiple competing UI kits leaving developers completely at a loss about what they should use for a new project (UWP, MFC/WPF, Win32??? Dammit, just tell me I have to use one, and I’ll use that, but you’re making me learn multiple platforms just to discard all but one of them). At least .Net 6 is fixing that issue.

__

* yes, this GTK+ example is probably over a decade out of date, but I have not forgotten how unpleasant, error-prone and time-consuming a task it was to do something so basic, and as a result, I resolved to never use GTK+ for a new project again. That’s another example of why this problem is such a problem on Linux: because there are so many other options, I bailed out on GTK+, rather than pushing the developers to fix the ball-ache, or writing a utility library to do it for me. As a result, the development of the toolkit slows down, and everyone ultimately loses out. And it’s the same for every library.

IBM looked to reinvigorate its 'dated maternal workforce'

Kristian Walsh

Re: The funny thing is they compare themselves to Accenture...

A good point. Accenture is known for its phenomenal churn-rate of young employees, and that keeps their median age artificially low (if you keep losing lots of 23-year-olds before they hit 24, and hire in new 22-year-olds to replace them, it's going to push down your numbers).

A higher median age is normally a sign of a company that people want to stay working with for the long term; so you can’t help but think that IBM is actively trying to make its working environment less hospitable...

UK science stuck in 'holding pattern' on EU funding by Brexit, says minister

Kristian Walsh

Actually, he’s closer to the truth than the Horizon website is. UK institutes are effectively in limbo - neither allowed to nor prevented from joining research calls under Horizon.

UK access to Horizon was one of the many things the EU and UK agreed to in the Withdrawal Agreement. However, the UK has been slow to follow through on its other commitments under that treaty (particularly around small-vessel fishing and the whole NI Protocol), and has threatened more than once to scrap the agreement entirely. In that environment, the EU is not comfortable with committing funds to UK educational institutes that it may never receive matching contributions for. A typical Horizon project runs for three years, so decisions have knock-on implications in future budgets.

A home-grown fund isn’t really a replacement: Horizon is not so much about the money as the way it brings together universities and industrial R&D operations from across the EU. You can throw money at the problem domestically, but if your leading experts in a field are all working in Leuven, Padua or Lund, having a stack of Pounds Sterling sitting in Oxford won’t really help your research. There is of course the same issue for EU participants if your leading experts happen to be in Glasgow, Manchester or Cambridge, which is why everyone across Europe wanted the UK partners to remain in Horizon...

... well, everyone except the UK government, it would seem, based on the scant attention they’ve been giving the matter. But I guess everything is low-priority for Boris Johnson compared to keeping Boris Johnson in his job.

Hands up who ISN'T piling in to help Epic Games appeal Apple App Store ruling

Kristian Walsh

Re: Apple is getting shafted by very guilty parties

The bigger irony is that it was Microsoft who added XMLHttpRequest to the web browser in order to allow interactive webmail without resorting to Java applets. Other browser vendors adopted the feature to remain compatible with Outlook webmail, and it eventually got pushed through the W3C standards process.

Basically, Microsoft, while afraid of web-apps, did more to create web-apps than any other browser vendor.

APNIC: Big Tech's use of carrier-grade NAT is holding back internet innovation

Kristian Walsh

Re: It's the mobile networks that seem to be stuck in CGNAT

Yes. In some parts of India, traffic goes through more than ten levels of NAT. That’s ten levels that can break, mis-route or block your traffic.

Problem is, the big network vendors make their money from making equipment that can do this kind of insanity without breaking. Asking them to adopt a technology that removes the need for their competitive advantage is always going to be difficult.

Kristian Walsh

Re: That old chestnut

First, “static” does not mean “public”; and second, by using NAT you are already “relying on the firewall” - your NAT rules are, in effect, an allow-list for outbound and inbound traffic.

Tesla driver charged with vehicular manslaughter after deadly Autopilot crash

Kristian Walsh

That story never happened; it’s an urban myth that originated with a joke from the 1970s, when the first Winnebago models were offered with cruise-control.

Kristian Walsh

Re: Who's Driving The Car?

Yes, entirely correct: the driver is liable for what happens in the car he was driving. And he was driving it: turning on the cruise-control system was a decision he, not Tesla, made, and the outcome is thus his fault.

Of course, he might try to sue Tesla for landing him in this mess (or the victims’ relatives may also end up bringing a civil case against Tesla for supplying an unsafe vehicle to the driver), but ordinary-person-sues-rich-company cases tend to only go one way…

Tesla has been playing fast and loose about this product feature ever since it launched, and the company hasn’t done anything to counter the exaggerated claims made about the product capability because those claims made Tesla look better than it was. That inaction may end up costing Tesla a lot when (not if) it faces a class-action case around these autonomy features, but calling its latest iteration of a driver-intervention-required system “Full Self Driving” shows that no lessons have been learned at all.

Epoch-alypse now: BBC iPlayer flaunts 2038 cutoff date, gives infrastructure game away

Kristian Walsh

There seems to be a lot of confusion about what C considers to be the size of an int on 64-bit machines.

Basically, the value of INT_MAX (for that is its real name) is whatever is defined in <limits.h> for your compiler; it does not necessarily have anything to do with the machine architecture.

On my 64-bit Linux with an unchanged compiler installation, I have a 64-bit time_t, a 64-bit size_t, but a 32-bit int.

$ uname -m
x86_64
$ cat test.c
#include <limits.h>
#include <time.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    printf("INT_MAX = %d, sizeof time_t = %ld\n", INT_MAX, sizeof(time_t));
    return 0;
}
$ gcc test.c
$ ./a.out
INT_MAX = 2147483647, sizeof time_t = 8

32-bit int is a de-facto default in C; after all, int is the type you use when you don’t care about exactly how big it is. If the size of the variable is important, stdint.h has the type-names you should be using instead.

If you need 64-bit ints, you probably know you do already, and are using uint64_t/int64_t…or intptr_t, ptrdiff_t, size_t and the other types that are guaranteed to match your address-space.

All of this is, however, completely irrelevant: there’s almost zero chance that the iPlayer backend was written in C. The software is most likely written in Java, which explicitly defines int as 32 bits. The fact that the developers used an ad-hoc approach to dates and times rather than a proper library also points to Java, which before 1.8 (in 2014) had a rats-nest of badly-conceived date-time functions in the standard library. (Yes, there were good third-party packages, particularly Joda-Time, which became java.time, but devs have to realise that the built-ins are crap and go looking for a better library… before they start using the built-in functions extensively in code.)

I suspect the code does indeed have a defined constant of Integer.MAX_VALUE somewhere to mean “no expiry”. That’s 2038-01-19 at 03:14:07, but when passed to the presentation code, that would most likely have been moved back to the last full day preceding (thus “January 18, 2038”), because stating “available until xx Month, year” can be read inclusively, promising availability until the end of that day. Not a big deal on something so far away, but an expiry of this Friday at midnight would generate a lot of annoyed users if presented as “available until [this Friday]”.
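If you want to sanity-check that rollover value yourself, the arithmetic is a one-liner in any .NET-based shell (this is just an illustration of the date, not a claim about how iPlayer derives it):

[DateTimeOffset]::FromUnixTimeSeconds([int]::MaxValue).ToString('u')
# 2038-01-19 03:14:07Z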

JavaScript dev deliberately screws up own popular npm packages to make a point of some sort

Kristian Walsh

Re: Widen the horizon - also affects live web, GoLang, Perl, Rust, etc.

Regarding the Rust story, there’s not much you could do about your problem #1, but you can’t pin #2 on the language; bad engineering practice works across all languages.

Un-versioned dependencies are a timebomb. At best you waste days chasing a bug that’s actually down to some anonymous developer’s commit the day before, at worst you run the risk of silently opening security holes in your code every time you do a build.

There’s also the joy of trying to recreate an old build. You might think you’d never have to do something like this, but if a legal action is taken that involves your code, it’s really, really important that you’re able to prove exactly what was built at any given date, which may have been years ago. Your source-code repo can give you your contribution to the product at any date, but if your checked-in dependency-list was “latest” for every package, that’s not even half the story...

Kristian Walsh

I know of a guy who paid for half his house with a works-with-all-edge-cases CSV library for .Net.

The target platform is the difference, though. Whatever you may think of .Net (and I don’t want to derail an interesting topic with opinions of Microsoft), it is a system used predominantly by organisations who are paying directly for the software they are using. In that scenario, there is a genuine market for components that will save time, and thus money.

libcurl and sqlite are other examples of software libraries that people would pay for, as they solve problems that are difficult, but don’t contribute much to the primary functions of the tool or application that uses them.

There’s actually nothing in the GPL that prevents you from charging for your work (go back and read it), but Stallman’s decision to make GNU free as in beer and speech set the pattern for everything following.

The problem of compensation is solvable (each project has a fund account, and you can figure out how to distribute it based on ownership of codebase changes), but it’s the culture of “work for free on a project to land a cool job in Google/IBM/RedHat/whoever” that will take a lot more time to change...

Kristian Walsh

Re: "taking control of someone else's property"

Given that we’re now nearly four decades into the idea of open-source, there are many open-source projects that are no longer actively maintained by the person who started them. What if the original author of one of these gets into a spat with the current maintainers, then decides to use their commit privileges to continuously vandalise the repo as part of their tantrum. Sure, it’s easy to revert changes, but in the meantime, every contributor has to add “dealing with that guy’s shit” to the already long list of thankless tasks involved in contributing to FOSS projects.

What would you, as a contributor to that project (let alone someone who depends on its stability) want to happen to that user’s commit privileges?

Lots of people in the Open Source community are altruists - the model wouldn’t work otherwise - some to the point of idealism. But that altruism and idealism blinds them to the fact that sometimes, just sometimes, people act like dickheads, and the common good often means that you can’t be friends with everyone.

Lots of people will jump on NPM as being the problem in this case, but it’s not, really: it’s just exposing an existing culture of developers lazily pulling in whatever’s at the tip of the tree into their code without considering what that really means. If you care about the quality of your software, you edit your dependencies list to remove all references to “latest” or “head”, and then schedule someone to do dependency updating at times that suit you.
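To make that concrete with a hypothetical package name: it’s the difference between declaring a dependency as "some-helper": "latest" and pinning it as "some-helper": "1.4.2" (and committing the lock-file), so that a build done today and a build done in five years resolve to exactly the same bits.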

Microsoft rang in the new year with a cutesy tweet in C#. Just one problem: The code sucked

Kristian Walsh

Re: Not all Yanks are wrong.

In this day and age, I would regard the use of any of the Etc/ timezones in a system as a defect that requires fixing.

However, at least you have the consolation that someone at least thought enough about things to use the IANA zone-names, rather than just specifying the zone-offset numerically — a “great idea” that condemns consumer code to an eternity of calculating Daylight Saving Time transitions (oh, and for the USA, the added joy of accommodating Arizona, most of which doesn’t observe DST at all, apart from the Navajo Nation lands).

I have a simple rule: never, ever persist any timestamp that is not UTC. For situations where the code needs to do zone-aware calculations, I will still store the timestamp as UTC, but then I store the relevant time-zone beside it: so the turn of the New Year in New York City is stored as (“2022-01-01 05:00:00”, “America/New_York”). That way, all your general queries by timestamp still work, and if you need to calculate locale-aware stuff, you can still do it easily enough.
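As a sketch of that pattern (assuming PowerShell 7.2 or newer, where .NET resolves IANA zone names on every platform; older Windows-only .NET wants "Eastern Standard Time" instead):

$utc  = [datetime]::new(2022, 1, 1, 5, 0, 0, [System.DateTimeKind]::Utc)  # what gets persisted
$zone = 'America/New_York'                                                # stored alongside it
$tz   = [System.TimeZoneInfo]::FindSystemTimeZoneById($zone)
[System.TimeZoneInfo]::ConvertTimeFromUtc($utc, $tz)                      # back to New York local midnight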

Fans of original gangster editors, look away now: It's Tilde, a text editor that doesn't work like it's 1976

Kristian Walsh

vim? Backslider!

It should be the original vi and nothing more. vim is the mark of those who have strayed from the path of righteousness.

Actually, that initial look of betrayal and horror when someone realises that vi isn’t an alias to vim on a system, but is rather the original vi editor is priceless.

[esc]:wq

When product names go bad: Microsoft's Raymond Chen on the cringe behind WinCE

Kristian Walsh
Pint

I actually hadn’t heard that, and never made the connection between VMS and WNT, but it makes perfect sense given Dave Cutler’s previous job at DEC. Also explains the evasiveness of Microsoft PR at the time when asked what “NT” actually stood for...

Have a beer.

Kristian Walsh

Drawing inspiration from an old Dilbert cartoon, I once proposed “Policy-based Heterogenous Large-scale Entity Group Manager” as a project acronym. Sadly, I then had to explain why I wasn’t serious.

In another job, I once worked in a Tools & Architecture group, abbreviated to T&A, which was something our American colleagues found hilarious but none of us had thought of.

Years ago, the German mother of a family member’s girlfriend came to visit Ireland, and on seeing the TV weather forecast map with “MIST MIST MIST MIST” written all over it, she just turned to her daughter and just said: “nun, das stimmt!” (well, that’s for sure!). “Mist” means crap, garbage or, literally, dung, and in fairness, is a fine summary of an Irish Summer.

That said, you can buy the whiskey liqueur Irish Mist under that exact name in Germany. Could be worse, they could have called it “an ideal gift” (das Gift = poison).

Speaking of booze and Germans, the -ficks problem is why you can’t buy the Irish beer sold across Europe as “Kilkenny” anywhere in Ireland. Here, it goes by its original name of Smithwicks. The irony here is that the pronunciation of the name in Ireland, /'smidhiks/, doesn’t even contain the offending “ficks” sound - it was a pre-emptive defence against German pronunciation of the name.

Windows Terminal to be the default for command line applications in Windows 11

Kristian Walsh

At launch, MacOS X was a BSD-derived Unix-based OS. Yes, that heritage means things were different to Linux (from the System V lineage), but Linux is not the definitive Unix, any more than McDonalds is the definitive meal out, or Windows the definitive GUI. Over time, the freedom to do whatever you want in MacOS has been limited (to a point where I no longer use Macs), but it fundamentally remains a Unix-based system, with fork() and file-descriptors.

I’d suggest your problem with deploying JavaFX applications is down to “JavaFX” rather than MacOS X, but the issues appear to be in not following the packaging rules for making desktop applications under MacOS. Blaming the OS for enforcing its rules, when you didn’t even check them doesn’t make the OS wrong. You might as well complain that your Linux software couldn’t dynamically link against the .so files you’d installed directly under /etc/.

That app-bundle folder layout might be a mild pain for developers, but it makes it really easy to completely remove applications on MacOS; and, having written tools to handle both, I can say that making an OS X app bundle is way simpler than writing a deb-file that ensures that everything gets restored properly on uninstall.

I don’t know how to answer your rant about icons, as I didn’t mention graphics. Still, if you felt better afterward, at least someone got benefit from it...
