* Posts by Kristian Walsh

1535 posts • joined 10 Apr 2007

What is your greatest weakness? The definitive list of the many kinds of interviewer you will meet in Hell

Kristian Walsh

Best interview questions I have been asked, and later asked:

“What is systemd?”

and

“How do you format your code?”

Neither of these was asked to elicit a “correct” answer, but to act as an asshole trigger: someone who holds such strong opinions about either of these things that they’d let loose at an interview is not someone who will be a net benefit to any development team.

Personally, I have never been asked to whiteboard a qsort algorithm or other sorts of dick-wavery, but a former employer did ask interviewees to write a trivial program (<10 lines of C) from a three-line spec in any language of their choice. You would be amazed how many people with “programmer” in their CV cannot form a simple loop and a conditional… (nobody ever offered to use a functional language)
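
(The original three-line spec isn’t something I can reproduce here, so this is purely a made-up example of the level involved - a loop and a conditional, in under ten lines:)

// Hypothetical spec (not the original): print the numbers 1 to 100,
// marking each multiple of 7 with an asterisk.
#include <cstdio>

int main() {
    for (int i = 1; i <= 100; ++i) {
        if (i % 7 == 0)
            std::printf("%d *\n", i);
        else
            std::printf("%d\n", i);
    }
}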

As Europe hopes to double its share of global chip production, Intel comes along with $20bn, plans for fabs

Kristian Walsh

Re: When they say EU

Intel could have got exactly the same tax benefits with less than 1% of the investment they’ve made in Leixlip over the last 30 years.

Latest patches show Rust for Linux project making great strides towards the kernel

Kristian Walsh

Re: C++

I think we agree that bounds-checking on every access is expensive, and 10% sounds about right if you were iterating a memory block byte-by-byte in such a language. I often get a touch of the heebie-jeebies when writing such code in C#, but I reassure myself with the knowledge that (in my code) these tend to be once-per execution operations, so the overhead is small in the grand scheme of things.

However, speculative execution on a CPU is nothing like bounds-checking on an array. Speculative execution is where the CPU pipelines one or both of the code-paths that could result from a branch instruction, and then discards the machine state arising from the “losing” branch once the actual branch condition can be resolved. If a memory access on the speculative path is to an invalid address, that exception is noted but not raised until its proper place in the instruction stream is reached; if such an access occurs on the discarded path, it will be noted, but it will never be raised because that machine state will be discarded.

Array bounds-checking is not the same as the use of guard pages to trap runaway pointers before they get a chance to do too much damage. That is a hardware feature, but it will not help you if you trash your own memory by overstepping your array boundaries. To fix that, you must use software checks, which do consume CPU cycles (or, more strictly, instruction pipeline slots).

Kristian Walsh

Re: C++

True, operator[] isn’t bounds-checked, but bounds-checking at access is the least efficient and laziest way to avoid buffer overruns; and in any case, containers also have ::at(), which does do bounds-checks, if you’re happy with the performance penalty.

But the problem with overruns happens not at access, but earlier, when you compute the index that you want to put between those brackets, and that is where C++ containers are vastly superior to the features offered by C. In my (reasonably long) experience of debugging C codebases, overruns happen because code is relying on “in-band” methods of determining the length of structures, which can be lost when those structures are copied into inadequately small buffers.

The C-string itself is the classic example of this problem: copy a string that has strlen() == 10 into a char[10] and... oops, no terminating NUL anymore - even if you used the overflow-safe strncpy(), because that function does not guarantee a NUL-terminated result, contrary to what most new developers would expect.
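
A quick sketch of that failure mode (buffer and contents invented for illustration):

#include <cstdio>
#include <cstring>

int main() {
    const char *src = "ABCDEFGHIJ";        // strlen(src) == 10
    char dest[10];
    std::strncpy(dest, src, sizeof dest);  // copies exactly 10 bytes: no room left for the NUL
    // dest is now NOT a valid C string; a plain "%s" here would read past the buffer.
    std::printf("%.10s\n", dest);          // the field width has to be limited by hand
}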

By contrast, vector::size() gives you the number of elements in the container (or bytes in the string). It’s explicit, it’s cheap to call, and it’s always present. It makes it trivial to sanitise indices at entry to modules, and prevent a whole swathe of overrun scenarios.

Rule 1: When you’re doing random-access with un-trusted indices, use ::at(), which does perform range-checking.

Rule 2: For iteration, use for (auto i = container.begin(); i != container.end(); ++i) rather than iterating by index on size(). It’s the same cost, and will still work without a recompile if you change 'container' to be a set or some other type later on. ('++i' avoids a performance penalty if some iterators have expensive copy behaviour, but most compilers do now make this substitution if you write i++ without using the result in an expression)

Neither of those options is available in C without writing your own container classes.
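
A minimal sketch of both rules (container contents invented for illustration):

#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> values {10, 20, 30};

    // Rule 1: range-checked random access with an untrusted index.
    std::size_t untrusted = 7;              // e.g. parsed from external input
    try {
        std::cout << values.at(untrusted) << '\n';
    } catch (const std::out_of_range&) {
        std::cout << "index rejected\n";    // overrun caught instead of corrupting memory
    }

    // Rule 2: iterate with iterators rather than indices derived from size().
    for (auto it = values.begin(); it != values.end(); ++it) {
        std::cout << *it << '\n';
    }
}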

Anyone still using cash? British £50 banknote honouring Alan Turing arrives

Kristian Walsh

Re: Here in Euro territory

Germans routinely use €100 notes, though; I’ve even seen German tourists in Ireland try to pass €200 notes in shops and getting annoyed when the sales assistant tells them to break it at a bank and come back. The Deutschmark was denominated up to 1000,- DM (a little over €500), and those notes were circulated too: I remember being handed one in a pay-packet once and wondering what the hell I was going to do with such a large note. (It had the Brothers Grimm on it - sehr witzig)

€50 notes are almost as common as €20 in Ireland, but €100s and higher are almost unknown.

Treaty of Roam finally in ashes: O2 cracks, joins rivals, adds data roaming charges for heavy users in EU

Kristian Walsh

Re: Colour me surprised

And yet, data plan costs did not increase when the EU banned roaming charges.

Hmmm..

Facebook granted patent for 'artificial reality' baseball cap. Repeat, an 'artificial reality' baseball cap

Kristian Walsh

Artificial Reality Cap?

Surely we have these already? They were the red ones with “Make America Great Again” written on them.

Anyone I ever saw wearing one seemed to be living in some kind of artificial reality...

Windows 11: Meet the new OS, same as the old OS (or close enough)

Kristian Walsh

Word for Windows skipped to v6 when Microsoft unified the codebases for the Macintosh version (which had reached 5.x) and the Windows one (by replacing the Mac code with the Windows code and including a poorly-debugged translation layer from Win32 to Mac Toolbox; it was as bad as it sounds).

Word 6.0 for DOS (the last DOS release) was launched around the same time as the other 6.0 releases, and like the Macintosh version, the DOS version of Word had steadily counted through 2.0, 3.0, 4.0, 5.0 and 5.5 versions while the Windows one kept its release versioning on a 2.x sequence.

Kristian Walsh

Big OS improvement, trivial Shell improvements.

Ironically, it’s in the OS, meaning the kernel and user space libraries, that the biggest changes have been made.

The kernel upgrades are significant, but unless/until you’re buying a system with a 12th-generation Intel CPU (or Zen 5) and associated chipset, you won’t see many of them. Scheduling on asymmetric x64 CPU cores, PCIe 5 and DDR5 are the headlines, but reading comments from people who have had bugs open, it seems there’s been a general bug-bash on open kernel issues, with longstanding requests finally getting a release date.

But it’s the graphical shell getting yet another minor makeover that’s most visible, even if those changes are pretty much zero net gain (okay, the “maximise to this part of the screen” thing is handy, I guess...). This time, the modernising brush is allegedly being wielded further into the system settings dialogs than before, but Microsoft has to tread carefully with those panels, because if they change the appearance of them too much, then about two decades of internal troubleshooting manuals at every corporate customer will become outdated overnight.

The rather good Windows Terminal is now the default command terminal - no need to install it anymore, and the local-desktop X11 server for WSL apps should now be included automatically when you enable WSL. Nothing new for anyone already using these, but it’s good to have the better options as the default. It also seems that some of the very optional extras (Paint 3D, etc) are no longer in the default install.

Speaking of removals, and to end on a high note: Internet Explorer is no longer in the manifest. iexplore.exe now launches Edge Chromium. And there was much rejoicing.

The best time to plant a tree is 20 years ago. The best time to build a semiconductor foundry is 5 years ago

Kristian Walsh

Re: Is this fair?

Good points. I'll add another disadvantage for Intel: its CPU designs and fabrication became co-dependent. If the chips (and packaging solutions!) that were to use the {X nm} process were delayed, then any fabrication investment to feed that product line was a major drain on profits. Some of the "7 nm" problems came from not wanting to commit resources to technologies until the company was sure it would have CPU designs that could exploit that technology.

Neither Samsung nor TSMC ever had that problem. If a big customer couldn’t tape out on schedule, they re-booked another customer instead, so investment was much easier to justify.

The experience thing is important, but it goes both ways. In general, bringing a design to production on any kind of manufacturing process helps in developing the next one; you discover the things that expose the weaknesses in a process, and especially the things that you know you couldn’t get away with if tolerances tightened. Consider this outlandish situation: Intel contracts TSMC to manufacture its main CPU families at {next process shrink}. Meanwhile, Intel’s fabrication arm skips volume production at that node size completely, and invests in the next one, in order to leapfrog TSMC.

(It’s outlandish largely because neither party would agree to it: too much IP is exposed on both sides during these projects, and it’s only okay as long as TSMC never designs competing CPUs and none of TSMC’s CPU customers get into fabrication.)

Intel finds a couple more 11th-gen Core chips, one hits 5.0GHz in laptops

Kristian Walsh

Re: Intel are idiots.

You are mistaken in thinking that quoted node size and gate density are the same thing. Intel's “10 nm” process yields chips with approximately 90 million transistors per square millimetre. TSMC’s “7 nm” process yields chips with approximately 90 million transistors per square millimetre, and its current 5 nm process raises this to 150 million transistors per square millimetre. Guess what Intel’s 7 nm process can pack onto a square millimetre... it's the same 150 million transistors. That is the measure that matters.

Microscopy scans of chips on both the Intel 10 and TSMC 7 processes do not show a 30% smaller feature size on the TSMC parts; it’s more like 5-10%, with Intel making more use of complex structures to make up that small difference. Frankly, these days “nanometre” is as much a marketing term as a real measure of process size. You may as well judge CPU architectures by MHz...
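
Back-of-envelope, using the densities above (feature size scales roughly as the inverse square root of density):

implied linear ratio, Intel 10 vs TSMC 7 = sqrt(90 / 90) = 1.0 (essentially the same geometry)
linear ratio the names imply = 7 / 10 = 0.7
density a true “10 to 7” shrink would need = 90 / 0.7² ≈ 184 MTr/mm²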

If we could see Qualcomm or Apple building their chips at Intel foundries, we’d have a better view of the real differences, but until that happens (an outside chance, as they have claimed that they will now take on third-party jobs), or until Intel hires TSMC to build CPUs (not likely), you can’t make a valid comparison. Process is just one factor in power consumption, just as core-count is largely meaningless once you change between core designs.

But really, nobody who knows what they’re talking about is claiming that Intel’s troubles are because TSMC is at “five” while Intel is still only on “seven”. The reason Intel is screwed is because TSMC got to volume production of its 150 MTr/mm² process (called “five”) about 18 months before Intel will (calling it “seven”).

Kristian Walsh

Re: Intel are idiots.

Ah CPU fanboyism. How.. sad.

10 GHz base-clock? Come back in ten years and maybe someone will be doing it. Maybe. There are good reasons why CPUs still run at a 2~3 GHz base-clock, despite the big gains in transistor switching speeds. The technologies that allow faster switching also allow increased density (i.e., more cores) instead, which gives better real-world performance than just bumping up the operating clock and having it spin while the chip waits for the DRAM.

I’ve no loyalty to any part vendor, but “nanometres” also means very little in this context. The gate density of Intel’s “10 nm” parts is pretty much the same as TSMC’s “7 nm” parts, and Intel’s “7 nm” parts have a higher density than TSMC’s “5 nm” parts. This is a stupid and confusing situation, but there’s no industry agreement on how to define “feature size”, so it persists. What is true is that Intel squandered its lead on process, has failed to get its 7 nm process up and running fast enough, and now it’s paying the price for that failure. This is good. Companies should not be allowed to fail to deliver without it hurting their bottom line.

You’re right that AMD has eaten Intel’s lunch in the enthusiast market, but Intel makes lots and lots of money selling laptop chips, and there its biggest threat isn’t AMD, but rather ARM-based hybrid designs like Apple’s M1 that provide low TDP without sacrificing single-core performance. It’s hard to believe that Intel’s switch to a hybrid-core design for the 12th-generation CPUs was unrelated to Apple dumping Intel in favour of rolling their own hybrid-core ARM chip. AMD has followed suit, and Zen 5 will now be a hybrid-core design too.

Why did automakers stall while the PC supply chain coped with a surge? Because Big Tech got priority access

Kristian Walsh

Re: They only have themselves to blame

As the article says, the car manufacturers cannot bully the silicon vendors: they don’t buy enough parts, and they mainly buy older parts that have low production costs and low prices. A contract for a small number of parts with a low production cost and a low price is not something you can use to bully a supplier.

The bullies are Apple, Samsung and the other big phone makers, who can make or break an investment in manufacturing by the foundries. You’ve tooled up for 500 million parts a year on a new process, and Apple casually mentions that it’s looking to change supplier? You’ll meet their demands, because without their volume you won’t be able to pay back your investment costs.

Your “should have...” scenario is impossible: there’s around $500 of semiconductor chips in a modern car. Multiply by 6 million units (an industry average for a volume car maker) and you’ll run out of money very quickly by just buying your pre-ordered chips into stock without using them to generate sales (for perspective, that $3 billion inventory cost is around twice the price of bringing a completely new car model from drawing-board to first sale). But: the car-makers do not directly purchase semiconductors; they purchase finished control boards, computers, and assemblies from their suppliers, so the inventory cost is double or triple the cost of the chips. That is a recipe for bankruptcy.

Apple's macOS is sub-par for security, Apple exec Craig Federighi tells Epic trial

Kristian Walsh

Re: Keeping things secure

Windows is the largest target, and the most lucrative, in that it is most used in businesses to handle money.

If you gave Linux to your typical office desktop users, you'd have as many, if not more, of the same security breaches. It's not the OS, it's the applications, plus the plugins and scripts necessary to make those applications work together for business tasks, and finally, the users who open anything that comes into their email.

Kristian Walsh

Re: He does have a point, even if it's draconian

Downvoters who assumed that this is an insult aimed at the previous poster, please read this summary of the 'goto fail; goto fail;' bug that left mac and iOS clients vulnerable to HTTPS spoofing (CVE 2014-1266): https://www.imperialviolet.org/2014/02/22/applebug.html

Better buckle up: Volkswagen puts Microsoft in driver's seat to deliver 'automated' platform

Kristian Walsh

Re: VW. Not.

To be clear about this, because it's often repeated that "they were all at it":

Every manufacturer is guilty of following the letter, but not the spirit, of the emissions regulations; they programmed their cars' engine controllers to behave differently when they detected "test-like" driving patterns. But, if you happened to drive your car that way, you'd have achieved the quoted emissions (but, let's be clear, almost nobody drives like the NEDC cycle)

But: only Volkswagen was found to have deliberately programmed their cars to detect the test situation itself, and then behave differently while the car was being tested. Because this mode only activated when the car was in a lab (front wheels turning but rear wheels not turning, driver's door open), there was no way to reproduce the result while on the road, no matter how frugal your driving style.

That was a whole new level of cheating beyond what the rest of the industry did.

AMD's Lisa Su: Our processor sales are Ryzen faster than the PC market is growing

Kristian Walsh

Intel and TSMC measure "feature size" differently. Intel is indeed behind AMD, but at "7 nm" it's behind on yield, not process. AMD's "7 nm" parts are equivalent to Intel's "10 nm" parts in terms of transistors per square micrometre (some measures actually put Intel ahead, but none claim that AMD's 7 nm parts are meaningfully denser than Intel's 10 nm ones).

Intel's forthcoming "7 nm" process is significantly more dense than TSMC's "5 nm" process, so Intel would take a lead again... however, those devices don't start production until 2022, whereas TSMC is already making 5 nm parts right now, notably for Apple.

Tim Cook 'killed' TV project about the one website Apple hates more than The Register

Kristian Walsh

I can't be arsed to look up what year MacOS X "Jag-wire" came out, but El Reg highlighting Jobs's bizarre pronunciation of the word was allegedly the trigger...

Adios California, Oracle the latest tech firm to leave California for the wide open (low tax) Lone Star State

Kristian Walsh

San Francisco is only into the kind of progressive politics that don't require effort or money. They're happy to accept your right to choose whatever pronouns you want without batting an eyelid, but can come up with a thousand well-rehearsed arguments to show that building any new housing in a city with endemic homelessness is actually a bad thing.

Another piece comes to .NET Core: Microsoft will keep the runtime patched automatically

Kristian Walsh

Re: It's a myth that it's intrinsically impossible to create bug free software

I agree that the Unix approach is a good one. However, the dominant Unix-derived OS in use today, Linux, doesn't really follow it anymore. The Linux-only tools like ip are a mess of sub-commands and special cases, each producing pre-formatted output that's unsuited to processing by other tools further down a pipeline.

Ironically, it's Microsoft who have remained closest to the old Unix idea, with PowerShell. Because PowerShell commands output machine-parseable objects, not pre-cooked text, you can do exactly the sort of chaining and division of responsibility that the K&R Unix tools let you do, even with complex output types. I can't get along with PowerShell for other reasons, mainly to do with it not providing the Unix core commands, but this is one area where it really shines.

It would be good for something similar to be implemented by the Linux system management tools (even if it has to be via something crap like JSON as an intermediary), but the problem isn't a technical one, but rather a political one of getting all the maintainers to agree to a standard way of interchanging information, and that's playing to the biggest weakness of the FOSS model.

Sod Crysis, can the 21-year-old Power Mac G4 Cube run Minecraft? The answer is yes

Kristian Walsh

Re: Why does minecraft need a beefy computer?

I just know that Java on PowerPC Macs was a dog, regardless of whose JVM was used. As Jake says, Apple did a lot of it in-house during the PPC era; in those days they also maintained their own C/C++ compilers, so there was a good deal of expertise available. I spoke to someone who was on that team once, and it was they who suggested that the JVM concepts were just a bad fit for how the PowerPC CPUs were optimised, and that it was not a CISC vs RISC thing.

Kristian Walsh

Re: "to the credit of Jony Ive, it was remarkably customisable"

From my memory (I had a very brief and small role in the product), Ive designed the casework and "pull-out" handle concept, so yes. Ease of upgrade was one of the big design goals on the Apple gear of this period (even the iMac had a simple memory-and-wifi access cover, although I concede that changing the HD was a pain): the PowerMac G3 also had a super-simple case-access mechanism, and this is from the same period. (Back when Apple's design was functional rather than just aesthetic)

I believe it was Jobs who specified that the system had to be fanless. A team of uncredited system design engineers came up with the "chimney" heatsink.

Kristian Walsh

Re: Why does minecraft need a beefy computer?

RAM is the problem, plus the overhead of running a JVM (Minecraft is a Java application). PowerPC never had good JVM performance: part of that can be blamed on a lack of development resources, but also, fundamentally, the Java Virtual Machine was modelled on a system that looks a lot more like a 32-bit x86 chip than a PowerPC (or even 680x0, given that PPC hadn't been launched when Java was being developed), so you get a small number of "registers", and the instruction stream is organised in variable-length byte chunks. So, the JVM brings the PowerPC's superior register provision down to the level of the x86, but without an x86's faster memory access and smarter pipelining to compensate for the need to keep fetching stuff from RAM.

Arm at 30: From Cambridge to the world, one plucky British startup changed everything

Kristian Walsh

Re: Who killed MIPS?

Odd how you blame Microsoft for Alpha's demise when Intel is clearly the villain in this story. "WinTel" was never the kind of close cartel that the Linux and Mac fan communities painted it; if it were, Microsoft would not have tried to get NT running on so many architectures. Truth was, Microsoft wanted to break its own dependency on Intel at a time when CISC looked like yesterday's technology and Intel was seen as clinging to the past while everyone else embraced the RISC future. NT launched with MIPS, Alpha and x86, then PowerPC was added when that hardware was launched later*.

When pressure was brought to bear to avoid Alpha (and others), it was on the hardware manufacturers. NT was cross-platform and very easy to port, so Microsoft really would not care what CPUs the hardware makers were going to use, so long as they bought NT licences for that hardware when they sold it; it was Intel that had something to lose. But Intel was also the one that had real direct leverage over those hardware vendors. Companies like Compaq and HP were faced with decisions that could have effects on the pricing of key parts for their booming x86 desktop sales, and that doubt was often all that was needed to keep them on x86.

Microsoft dropped support for NT architectures when sales no longer warranted the cost of qualification. Alpha systems did not sell in enough numbers to justify the expense of testing, qualifying and supporting a build - dropping Alpha support on NT was an effect, not a cause, of the architecture's demise. NT workstations with PowerPC also didn't make much impact, and while Motorola did take over qualification of NT updates in order to support its existing customers, that ended before NT4.

In short, all the evidence says that the plot to kill Alpha was hatched in Santa Clara, not Redmond, as Alpha, MIPS and PPC were no threat to Microsoft's business.

__

* Apple's PowerPC systems could not run NT because Apple never produced a machine that was compliant with the PowerPC Reference Platform; all of the first series PowerMacs used Apple-proprietary support and I/O chips for which no public driver sources were available (actually, there were some small parts of the Mac ROMs at this time for which no source-code at all was available: the ROM image contained a couple of binary merges from known-working driver builds which could not be re-created from any archived source-code).

Apple Arm Macs ship, don't expect all open-source apps to work without emulation – here's what you need to know

Kristian Walsh

Re: "thumbs down"?

The result of that code is a failed code review :)

This is a classic sneaky C interview question, because the expression is actually undefined in the language specification, and so the correct answer is "don't do this". gcc says 3, which is how a right-to-left evaluation would work it out, but an argument can be made that the answer should be 2 if you treat it as LOAD x <-- i; ( LOAD A <-- i; INCREMENT i; ) ADD x <-- A;

GCC throws a warning if you ask it to be picky (you should always build production code with warnings on): “operation on ‘i’ may be undefined”.
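
The original expression isn’t quoted here, but assuming it was something along the lines of i + i++, this reproduces the situation (build with warnings on, e.g. -Wall, to see the diagnostic):

#include <iostream>

int main() {
    int i = 1;
    int x = i + i++;          // unsequenced read and modification of i: undefined behaviour
    std::cout << x << '\n';   // gcc happens to print 3 here; other compilers may legitimately differ
}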

Kristian Walsh

Re: Adobe

Affinity Designer.

Seriously. Try it.

Kristian Walsh

Re: I'm going to be called a Fanboi but...

Forgive me for not gushing - I worked for Apple for a few years, so I know better than to take anything they put in a press-release at face value, but for the record, I think M1 is a significant leap forward for desktop CPUs and for the ARM ecosystem.

I read the Anandtech article. Apple have not modified the cores, but they have done a lot of interesting work with the memory interfacing, and it shows in the benchmarks. The relatively ordinary performance on crypto tasks (which are compute limited) shows that the performance gains are due to a huge improvement in memory I/O.

M1 is the first ARM design that can actually compete with traditional desktop CPUs, and Apple has done some really good work on fitting this device for laptop and desktop workloads. By comparison, Qualcomm's design for Microsoft's Surface ARM product was still in the mindset of a "mobile device" SoC, and that seriously restrained its performance (also, Windows buyers are far more cynical and change-resistant than Apple customers). Apple's approach was to ditch the UMTS support and use the extra power budget to seriously improve performance, and finally make ARM a viable alternative to an x86 ISA system.

However, while I think it's a big deal, I'm not willing to play along with the usual game of ascribing all the brilliance of a new product to the company whose brand ends up on the package, when much of it derives from the work done by suppliers - in this case ARM itself for the CPU cores, and TSMC for the new manufacturing process. Apple is the first customer of TSMC's brand-new 5 nanometre process, and that feature-size reduction has a huge part to play in the M1's impressive performance-per-Watt measurements.

But there's one final thing that struck me about the reaction to M1 - everyone's talking about how Intel will react, naturally enough, but it's also worth considering what the other ARM SoC vendors might do, too. The closely-related Apple A14 has similarly good benchmark performance to M1, and yet that A14 isn't the runaway leader in the mobile world that M1 appears to be on laptops, and generally the lead swaps between whoever has the newest device on market. If M1 lives up to the benchmarks, what odds on seeing true desktop-focused ARM SoCs from the likes of Samsung and Qualcomm before too long?

Kristian Walsh

Re: I'm going to be called a Fanboi but...

Apple sold its shares in ARM in the late 1990s when it needed the cash to keep the lights on. ARM is currently being sold by SoftBank to nVidia (the deal is still awaiting regulatory approval); SoftBank had been sole owner since 2016 - the same year Apple launched its first (and very much "me too") hybrid-core System-on-Chip, the A10.

Apple does what every other ARM customer does: they licence the CPU cores, and tweak the logic around those cores to produce a system-on-chip best suited to their needs. Then they get another company to deal with the problem of fabricating the resulting chip. In Apple's case, that company is TSMC, and pretty much all of the M1 chip's low-power wins are down to TSMC's brand-new, industry-leading 5 nanometre manufacturing process. Apple won't hold that lead for long, as TSMC does work for everyone.

However, Apple's engineers can take credit for the good benchmarking results on tasks that require high memory throughput - that's one of their optimisations over a "typical" ARM SoC.

Until we get other SoCs on the same process, it's hard to say for sure how much of this brilliance is from Cupertino, and how much from Taiwan.

Kristian Walsh

Re: Well, it's nice of El Rego to provide us with gushing quotes from other web sites, but...

I do agree, but if you take Anandtech (the site that first diagnosed the pathetic self-grounding antenna "design" on iPhone 4, if I recall) out of that list, the criticism becomes a lot more valid. Verge and Engadget are pretty lightweight when it comes to actual technology and have a habit of parroting the press release or falling into the "bigger number = better" trap, while TechCrunch is the "Supermarket Shopper" of the tech journalism world.

How Apple's M1 uses high-bandwidth memory to run like the clappers

Kristian Walsh

Re: Apple leading the way once more

That argument assumes that Apple has a runaway lead in SoC performance on Mobile, which is not true.

Apple's mobile SoCs tend to score very well on single-core benchmarks, but fall back into the pack when you look at multi-core scores. For example, Apple's A14 is the leading SoC on single-core benchmarks, but the Kirin 9000 beats it on multi-core tests, and early Snapdragon 875 results also show a significant lead in multi-core benchmarks over Apple.

Basically, Microsoft, Google, HP, Dell do have options if they want to pursue ARM for desktop, and the performance-boost of directly-attached RAM that Apple has used on M1 is not a new idea, and can be adapted to existing systems. It sacrifices any chance of upgrading RAM, though, which could limit its attractiveness in the Windows market, where enterprise IT procurement policies have a lot of power over what gets sold.

Kristian Walsh

Re: Great Block Diagram

This design doesn't replace cache; it just adds a really low-overhead method of accessing the general system RAM pool.

The L1/L2 caches are still present as normal, although big cores have more than small ones, which confuses some benchmarking software. Total L1 seems to be 192k instruction+128k data, and Apple itself says there's 12 Mbyte L2 cache. Big cores have twice as much L2 cache as small ones, but it's unclear how L1 is allocated to each core.

The evolution of C#: Lead designer describes modernization journey, breaks it down about getting func-y

Kristian Walsh

"Basically, if he stated this during a developer interview at our company... he would not have been offered a job. "

And if that's in any way indicative of how your company conducts interviews, I think he'd be glad to have dodged a bullet...

Here's the new build, Insiders... wait for it... wait for it... Is it Windows 10X's upcoming ... Oh. You can change refresh rate of the display

Kristian Walsh

Re: Windows Calculator

Actually, everyone knows that both are correct. An electronic calculator is a running-total device. Perform an operation, get the result, that result becomes the first operand of the next operation, and so on...

More sophisticated electronic calculators had bracket keys to override this behaviour and store results so that you can perform mathematically-correct ordering without mentally juggling the calculation beforehand, but they remained running-total at heart - the brackets managed an expression stack that was popped and evaluated every time you pressed the close-bracket key. When I owned Macs, you could irrevocably confuse the MacOS X calculator by not closing a bracket before typing equals. Nothing would recover it except a re-launch. I reported it with clear repro, but six years later it was still there; maybe it's still broken. [try it yourself: 17 - ( + 2 × 4 = ]

Microsoft could actually be accused of being wrong in Scientific mode, as many Scientific calculators retained the stack-machine-based running total evaluation; adopting the correct operator precedence is a concession to users. However, while engineering and scientific users might appreciate that concession, financial users are accustomed to the running-total pattern, which is why the standard mode retains that behaviour.

In other words, “Technically Wrong but Actually Useful” beats “Technically Correct But Irritating” every time.
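
To illustrate the two evaluation models, take 2 + 3 × 4 (a made-up sketch, in C++ for convenience):

#include <iostream>

int main() {
    // Running-total model (basic calculator): each key-press is applied
    // immediately to the result so far.
    int runningTotal = 2;
    runningTotal = runningTotal + 3;   // "+ 3"  -> 5
    runningTotal = runningTotal * 4;   // "x 4"  -> 20

    // Precedence model (scientific mode, and C++ itself):
    int withPrecedence = 2 + 3 * 4;    // multiplication binds tighter -> 14

    std::cout << runningTotal << ' ' << withPrecedence << '\n';   // prints "20 14"
}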

Linux 5.10 to make Year 2038 problem the Year 2486 problem

Kristian Walsh

Re: Take-over solves the problem...

What a humorous post. I love the way you used the dollar-sign to show that Microsoft is a big corporation. Swift would be proud - his legacy is in good hands.

Kristian Walsh

Re: Sigh... the K notation again.

Heh.. Not surprised one of the ones above me got deleted. I’m pretty sure I know what it was too. Our Electronics lecturer taught it as “look, there’s also another one that will guarantee you remember the order, but for god’s sake, don’t ever say it out loud”

If you're feeling down, know that we've just buried a heat sensor in an alien planet. If NASA can get through Mars soil, we can get through 2020

Kristian Walsh

Re: Which in the end will provide more data?

Again, though, sending thousands of probes versus sending one human with limited mobility across the surface (we underestimate the impact of long distance travel because we grew up with engineered roads, high-speed rail and air travel)

Frames per second? Windows Terminal brings back text animation with the VT100 blink

Kristian Walsh

Re: Waste of time

It’s a terminal. If you can’t manage making things happen by typing text, you’re in the wrong place.

Tesla to build cars made of batteries and hit $25k price tag about three years down the road

Kristian Walsh

Re: Structural Batteries?!

There's more than one type of Lithium-ion battery.

Tesla uses the kind you know best: it’s called NCA (Lithium Nickel Cobalt Aluminium-oxide) and it’s the same chemistry as your laptop or phone. It is energy-dense (which is why Tesla is able to offer greater range than competitors to date), but it is also dangerous: it has thermal risks on hard charging and discharging, and is highly flammable if mis-handled (e.g., if something were to smash into the battery pack and crush the cells). In every respect except energy density, it is the wrong choice to use in a road vehicle.

Because of the safety issues with NCA, the rest of the EV industry uses NMC (Nickel Manganese Cobalt) chemistry instead. These batteries are less energy-dense than NCA but are an order of magnitude safer - especially when mechanically shocked. When Tesla started, NMC was far behind NCA, but the latest generation of NMC has come very close to the energy density of NCA, while retaining the higher safety profile, and it may yet surpass NCA, simply because more vendors are investing in improving it.

Incidentally, not all Li batteries are fire-risks. The safest Lithium chemistry, Lithium-iron-phosphate (LiFePO4), is nearly impossible to get to catch fire (you could try dousing it in petrol, I guess...), but suffers from much lower energy density than either of the previous two. However, it does not require expensive and ethically-dubious-to-source Cobalt or Nickel, and could easily get below the magic $100/kWh figure for vehicles that are not size- or weight-constrained. (LiFePO4 is already used in applications that once used Lead-acid cells and don’t require high current at very low temperatures; in the freezing cold, lead-acid is still king, which is why all EVs still have a lead-acid battery on board)

Classy move: C++ 20 wins final approval in ISO technical ballot, formal publication expected by end of year

Kristian Walsh

Re: Is C++ becoming too large and complex?

No madness, just perhaps a misunderstanding of what a declaration is for, and an attempt to misuse a keyword for its side-effects.

"Because that is the whole point of auto - deduction of a type from an initializer. It is for exactly that purpose - to allow you not to be specific."

You can't keep hand-waving all the way to the metal. At some point, you have to be specific, and for a variety of reasons, allocated class member types are one of those places.

Incidentally, you are allowed to use auto to declare a field of a class/struct, but only where that field is both static and const (and the initializer is a constexpr or literal). Static const fields are special, though: allocated and assigned just once and shared by every instance (sometimes they don't even get allocated unless your code tries to take the field's address). This single point of initialisation may give you an idea why they can easily be allowed to use auto, while class-members are not... hint: consider how you'd declare a constructor that sets your Fred::x field.

Even if it didn't cause problems for code generation, your desired solution would only work for fields that have a default initializer, and thus it is only half a solution for the legitimate requirement of needing a field that can always contain the return-value of a given function. Luckily, the people standardising the language did consider this, and so if you need a field type that always matches the return type of a function, you should be using decltype(), which can be used regardless of initialization status. (auto can be seen as a special-case of decltype() for use when declaring variables and constants)

struct Fred { decltype("gotcha") x; decltype(foo()) y = foo(); };

... and this syntax also allows you to write a constructor, or any other function, whose argument type will match the return-type of a function, or match the type of any other declared constant.
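
For illustration - assuming, purely for the sake of the sketch, that foo() returns std::string - here's how that decltype() trick extends to a constructor:

#include <string>
#include <utility>

std::string foo();   // assumed return type, for illustration only

struct Fred {
    decltype(foo()) y;                  // y's type always tracks foo()'s return type...
    explicit Fred(decltype(foo()) v)    // ...and so can a constructor argument
        : y(std::move(v)) {}
};

int main() {
    Fred f("hello");                    // if foo() changes to return something else,
    return static_cast<int>(f.y.size());// Fred follows it without further edits
}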

Kristian Walsh

Re: Is C++ becoming too large and complex?

struct Fred { auto i = 43; };

Why would you ever want to do this? If you're not able to be specific about the datatypes used in a DECLARATION, then maybe look at using Python or something. No typed language with type inference allows this kind of "yah, whatever you're having yourself" declaration.

If I were maintaining that code, even if that syntax were possible, I would switch it back to int, so that the intention of the programmer is clearly visible.

Or, to put it another way, what I like about C++ is that I don't have to deduce what could be in a data-structure by looking at field assignments.

DPL: Debian project has plenty of money but not enough developers

Kristian Walsh

Re: Oh dear

You've hit on the major weak point of the volunteer FOSS model. While many volunteers end up being paid to work on their hobby, many do not, and end up pulled between having to make a living and fulfilling their personal commitment to a project. Shortage of time leads to poor progress, and complaints from users who just assume that package maintainers do nothing else but maintain packages.

I'll be charitable and say that it's often the thanklessness of maintaining a package that causes the aggression - any small request runs the risk of being the straw that breaks the camel's back. (That said, yes, there will always be arseholes in every sphere of human activity)

As for what to do with the money: Once it becomes practicable again, why not pay for people to meet up more often in person? It's time we stopped assuming that every programmer is an asocial hermit - it's simply not true, not even for Linux. An easier opportunity to meet contributors and maintainers would also go towards solving the problem of finding new contributors and maintainers.

In the frame with the Great MS Bakeoff: Microsoft sets out plans for Windows windows

Kristian Walsh

Re: What about what shows on screen?

That CPU argument was valid for Vista and 7's eye-candy modes, but it doesn't hold water today. Windows 10's window design requires fewer drawing operations than that used in Win2000. A standard window with a title-bar, title and the three control buttons in Windows 10 needs just eight graphic operations to draw. Shadows are done by the GPU at no CPU cost.

Kristian Walsh

I agree in principle, but not with the alternatives you offer.

Web applications are horrifically inefficient, and subject to strange behaviour depending on client. When I write a UI, there are times when I need different components to align to the exact same pixel - doing that in web still degenerates into a nightmare of rules and javascript in too many cases.

Plus, and this is a big one: right now, moving to Web means you get no code reuse from an existing C application. That's a problem for codebases that have decades of customer requirements baked into them.

Adopting a cross-platform development framework for C/C++ code makes more sense, but I'd add a recommendation to use Qt instead of Gtk+. Yes, it has those odd SLOT/SIGNAL extension macros, but at least it doesn't force you to learn how to write a graphics toolkit just to do simple things like custom list cells and data bindings, and QtQuick is still so much easier to get good results with than anyone else’s GUI markup system.
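
For anyone who hasn't met them, those macros look like this - a minimal Qt Widgets sketch (the commented-out line shows the newer, compile-time-checked connect syntax that avoids the macros entirely):

#include <QApplication>
#include <QPushButton>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QPushButton button("Quit");

    // The old-style SIGNAL/SLOT extension macros mentioned above:
    QObject::connect(&button, SIGNAL(clicked()), &app, SLOT(quit()));

    // Equivalent modern form, checked at compile time:
    // QObject::connect(&button, &QPushButton::clicked, &app, &QCoreApplication::quit);

    button.show();
    return app.exec();
}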

Kristian Walsh

Re: Two different windows are a problem ... so add a third!

Architecturally, UWP and Win32 are not compatible. UWP is designed around there being a graphics rendering pipeline, whereas Win32 has the underlying concept that there is a grid of pixels in memory somewhere for everything, and it is shared between applications.

This can be frustrating at times (e.g., in UWP, most loaded bitmaps are just lightweight handles into GPU surfaces: you cannot access the pixels in a BitmapImage control unless you explicitly render the control into your own memory buffer), but the upside is that the GUI operates tens to hundreds of times faster, never needs buffering, and you are able to offload all GUI assets into your graphics card's onboard memory rather than losing application memory for it.

This approach seems to be a way of re-writing what can be rewritten of the old API using the new architecture, and leaving the rest in a kind of compatibility "island" drawing surface within those new windows.

The closest similar problem was what Apple faced when moving MacOS to the pre-emptive multitasking kernel of OS X in 2000 (at least Win32 was designed with multitasking and thread-safety in mind; Mac Toolbox wasn’t). Apple’s approach was to simply bin everything and tell developers to use a completely new API; an approach that, as I hope is obvious, is something that would never work in the Microsoft ecosystem, where developers have a much more robust and equal relationship with the OS platform vendor.

Kristian Walsh

It's funny to see X11 proposed as a model for Microsoft to follow. You guys must really hate Microsoft…

If I want bandwidth-efficient, remote application access from a Linux system, I use SSH.

If I want a graphical UI, I accept that I have already demoted bandwidth efficiency in my priorities, so I use VNC or another system that works on raster images. Given the way GUI applications really draw their window content, as opposed to the way X advocates would like them to, there isn’t much difference between shipping window content bitmaps over the network versus the drawing instructions, and sometimes the bitmap is actually more efficient in cases where the display is composed of tens of thousands of drawing operations.

X11 was a system designed for a different, bygone, world, where centralised computers ran applications that were viewed on thin-client terminals (yes, I know the terminal isn't a "client" in X, but it is one in network terms). That is not how the majority of graphical application systems are used now, and there's a good chance that it never will be.

Anyone else noticed that the top countries for broadband speeds are well-known tax havens? No? Just us then?

Kristian Walsh

Measuring what's easy to measure, not what's significant

This survey is the equivalent of estimating a country’s road traffic journey times by using the average top speed of the vehicles sold there each year.

First error: by using speedtest data, they’re basically just measuring the speed between the user premises and the next hop in the network. That doesn’t tell you about backhaul provision (hi Cable ISPs!) or consistency of service (ibid.). Use an “off-brand” speedtest website and you can see your reported download figures tumble.

The other problem is that measuring speedtests is confounded by the pricing structures of the providers more than the actual network capabilities. If providers in Country A give a 20 Mbit service for €15/mo, an 80 Mbit service for €20/mo, and a 200 Mbit/sec service for €35/mo, people like me will take the cheapest one that exceeds our peak bandwidth requirement (which, in my case, never rises over 20 Mbit/sec). Lots of users living on their own, who want Internet for occasional use, will also go for the services with low bandwidth caps on cost grounds. But in another country, where operators only offer the 200 Mbit/sec option for €35/mo in an effort to maximise revenue, that country’s network will be considered faster, even though both networks have the same capacity.

The only legitimate way of determining this survey's results would have been the hard way: discover the share of domestic connections by carrier type (DOCSIS, xDSL, FTTP, UMTS), then apply an empirically-derived performance figure per technology (and generation) to those connection counts (thus, a 1 Gbit cable connection scores far, far less than a 1 Gbit FTTP), and look at the amount of trunk network capacity in the country.

Happy birthday to the Nokia 3310: 20 years ago, it seemed like almost everyone owned this legendary mobile

Kristian Walsh

Lipstick phone

Ah, the Nokia 7280. That's really a product of designers and engineers (both asking “how small can we make this?”) winning out over management’s requirement to make something cheap. All the 7xxx phones were basically “screw it, we’re rich: go do something mad!”.

I knew someone who had one. It’s actually much easier to use than you think once you’ve got all your phonebook on the SIM (remember, this is from the days when your contacts and last ten SMS messages were stored on your SIM card). Entering your phonebook, though…

Real Crazy Finnish Design was the 7600 (a square/leaf-shaped phone with a screen in the middle and keypad buttons along the edge), or the 3600: for some reason this had the buttons arranged in a circle on a surface that would easily have accommodated a usable keypad.

Relying on plain-text email is a 'barrier to entry' for kernel development, says Linux Foundation board member

Kristian Walsh

Re: "plain old ASCII text is a barrier to communications"

Yes and no, really. ASCII-1963 was published as an uppercase-only alphabet; CCITT and ISO used this as a base for a two-case system, which began standardisation in 1964 and was published in 1967. ASCII was then revised in 1967 to be fully compatible with the new ISO standard. I suspect that a clean-sheet ISO design would have had fewer control codes (most likely just 16) and more printable glyphs, so that the national replacement characters wouldn’t have clashed with punctuation characters that were often used as field delimiters in documents... despite ASCII having four control codes specifically for delimiting fields, in the form of FS, GS, RS and US, it seems that nobody ever used them (@jake, you've been at this game much longer than I have, maybe you've come across something? Personally I often use these as delimiters when I need to bundle multiple outputs from one *nix tool that may contain tabs/spaces/linefeeds into another shell-script. ASCII-safe and almost guaranteed not to already be in the data being processed)
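
To illustrate that bundling trick (sketched in C++ rather than shell, with invented field contents): split records on RS and fields on US, and embedded tabs, spaces and newlines survive intact:

#include <iostream>
#include <sstream>
#include <string>

int main() {
    constexpr char RS = '\x1E';   // ASCII Record Separator
    constexpr char US = '\x1F';   // ASCII Unit Separator

    // Two "records" of two "fields" each, bundled into a single string.
    std::string bundle = std::string("alpha") + US + "text with\ttabs and spaces" + RS
                       + "beta" + US + "text with\na newline";

    std::string record;
    std::istringstream records(bundle);
    while (std::getline(records, record, RS)) {
        std::istringstream fields(record);
        std::string field;
        while (std::getline(fields, field, US)) {
            std::cout << '[' << field << "] ";
        }
        std::cout << '\n';
    }
}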

But ASCII is the classic example of how an inadequate but ubiquitous standard drags everything down to its level. Even by the standards of the 1960s, and even in the fiercely monolingual USA, it had considerable shortcomings - most notably, it is incapable of correctly encoding the proper names of places in parts of the USA that were first settled by the French or Spanish. Sadly, this shortcoming was then set in stone by the US Postal Service, which disallowed accented letters in official “preferred” addresses, simply because at one time its OCR and database vendors could not handle them or their terminals couldn't enter them. (Many state DMVs did the same later with names on driving licenses)

Back when I visited the Bay Area a lot, there was a sign I used to pass on I-280 for what was originally signed as “Cañada College” but was now misleadingly spelled “Canada College” thanks to this policy — as someone who was working a lot with text encoding and processing at the time (I wrote software localization tools), what struck me about it most was that you could still see the shadow of the tilde that had been removed from the sign-face after the official street-address changed; obviously someone was perfectly well able to deal with Spanish names back before computerization came along to limit their horizons.

(Actually, a quick look at Google StreetView shows me that a newer sign for the Cañada Road exit has regained its tilde, so there is hope for the future... although that’s one ugly tilde)

But on the original topic of kernel submissions, the biggest problem with insisting on “ASCII” is that nobody uses it, because there are pretty much no 7-bit text computer systems in use anymore. That leaves “an unspecified superset of ASCII” as your de-facto standard. (If I had a buck for every time I heard someone say “8-bit ASCII” like it was an actual thing that had ever existed...). To avoid this ambiguity, Linux Kernel submissions are actually to be submitted as UTF-8, not ASCII, as this allows maintainers to use their actual names in those submissions, not some bastardised, accent-stripped version of it.

Kristian Walsh

Re: "plain old ASCII text is a barrier to communications"

"a universal communication medium. Look up the acronym. "

I didn’t need to. ASCII stands for American Standard Code for Information Interchange.

Hmmm.. Very universal. At least it’s not EBCDIC, I suppose.

Aw, Snap! But you should see the other guy – they're in dire need of a good file system consistency check

Kristian Walsh

Re: I don't care....

Make the on-card filesystems read-only, put all writable stuff into a ramfs, and there's no issue with using SD cards. Flash is destroyed by writing, not reading, and for something like this that’s just a thin-client to a web-hosted service, there’s no need for writable persistent storage on the device.

I would, however, have also designed this with a recovery image so that any corruption of that main flash could be blasted away easily by a quick dd, but as it’s a mobile unit anyway, access for repair is probably not as big an issue as for fixed stuff.
