* Posts by Cem Ayin

49 publicly visible posts • joined 22 Dec 2015

Linus Torvalds declares war on the passive voice

Cem Ayin
Facepalm

Linus non est super grammaticos

But it is certainly good to know that the really important matters are being taken care of in Linux land. Such as ensuring that Gen-Z developers are/will be able to actually understand commit logs... Simple English rulz, comrade!

As IBM pushes for more automation, its AI simply not up to the job of replacing staff

Cem Ayin
Alert

Not an accident...

...waiting to happen. Rather an unmitigated disaster unfolding - first gradually, then suddenly.

"Senior software engineers stopped being developed in the US around 2012," Blake said. "That’s the real story. No country on Earth is producing new coders faster than old ones retire. India and Brazil were the last countries and both stopped growing new devs circa 2023. China stopped growing new devs in 2020."

and

"If it weren't for LLMs, there would be a serious lack of programmers in the next five years as Gen Xers started retiring," said Blake. "I had planned on coding 'til the day I died, but now I think I’ll be talking to LLMs primarily instead."

Assuming these statements to be factually correct, this is just insane; the relatively best outcome would be for this scenario to fail hard before unfixable damage is done. In fact, the worst case scenario would be their A"I" folks somehow managing to actually make this work, more or less, initially. Meaning that before long, they would find themselves with millions and millions of LoC that no one and nothing in the world understands or is able to maintain (including the LLMs that generated them). And it's not just IBM, of course; this would then be the code that the very foundation of our civilisation critically depends on. I hope I won't live to see the day when that bill comes due.

Warren Buffett’s favorite insurer GEICO drops VMware for OpenStack

Cem Ayin
Alert

VMware -> OpenStack? You sure?

While I agree that VMware is now, in a manner of speaking, on economic death row, I'll admit that a proposal to migrate from VMware to, of all the hypervisor stacks out there, OpenStack made me raise an eyebrow. I'm involved in a project that uses OpenStack, as provided by our Computing Centre, on a moderate scale (approx. 100 instances), and we have had, time and again, and continue to have, problems with instances failing, sometimes in bizarre ways (and don't get me started on what we've often had to go through after an OStack major release upgrade). Which is *mostly* acceptable for our use case, as the vast majority of those instances are cattle servers whose images are provisioned with Kickstart and Ansible. If one fails, you just delete it and have your infrastructure script create a new one, and even losing the image is not much of a problem.

Still, I'm in the process of setting up a "vanilla" KVM server for hosting a few mission-critical VMs that keep failing on OStack every now and then, which has become a major annoyance. (Those hosts, too, are provisioned with Ansible and easy to re-create, but you still don't want an important service to slow down, freeze or lose its network connectivity for no apparent reason every so often.) And I know that we are not alone with the "OpenStack for cattle, something else for the pets" approach.

It is possible that GEICO's IT department has truly superior OStack skills, or at least good hope of acquiring them "on the job"; or maybe they are in the process of re-factoring their IT systems to run exclusively on redundant, expendable cattle servers, though I figure that might be hard for an insurance company (but then, what do I know about that particular trade). Or maybe their mission-critical systems have been running on physical silicon all along and they were basically misusing VMware to host their cattle and are now correcting that mistake. Or maybe they have some other ace up their sleeve, so devilishly clever that I cannot possibly imagine what it might be. But otherwise I'd say "I wish you the best of luck", especially on OStack major upgrades.

Telegram founder and CEO arrested in France

Cem Ayin
Black Helicopters

Re: I hope Musk travels to France

I can see how you (and many others) just love to hate Mr. Musk; the powers that be will be satisfied with your "Two-Second-Hate", as it were. On the one hand.

Even so, there is something I have to break to you: Elon Musk is not Emmanuel Goldstein. He's not even Pavel Durov. He is, for better or for worse, and whether or not he actually enjoys the role, an important leader of the US Military-Industrial Complex, and in that capacity not really dispensable at this time.

I'm talking about SpaceX, of course. Much as the US political establishment (not to speak of his rival billionaire peers) may hate him - most of them would hate to lose the current US technological edge in space exploration even more. And yes, space exploration still being in its infancy, technologically speaking, individuals can and often *do* make a huge difference, both in the domain of engineering and administration, as did von Braun and Korolyov in their day and Musk does in ours. "Rocket science" still has a long way to go before it may become a commodity one day, and there can be little doubt that if, say, a "lone wolf" were to be successfully unleashed on Musk, SpaceX would, before long, be going the way of Boeing...

Combine this with the fact that France, like any other country in Western Europe you care to name, is a wholly owned client state of the US, and it is not difficult to see that, if some overambitious French official bidding for powers above his station were to inconvenience Mr. Musk in this way, a phone call by the US ambassador to France would fix matters for him within 48 hours *maximum*. (And I can vividly see the complacent smile on the ambassador's face as he is "teaching them frenchies manners".)

Ok, ok, one should never underestimate the power of human stupidity, and cases where the top echelons of power have put the heads of some of their best military and economic leaders on the block, by the dozen even, are not unheard of. But at this time I'd give e.g. Mr. Sullivan (or whoever happens to be calling the shots for Team-Biden-turned-Team-Harris-in-the-blink-of-an-eye) credit for not being *that* stupid -- current US foreign policy doing its best to rival the pre-WW-I Wilhelmstraße in topsy-turvy diplomatic clumsiness notwithstanding.

DARPA suggests turning old C code automatically into Rust – using AI, of course

Cem Ayin
Joke

Training material?

I'll admit that LLMs are not exactly my area of expertise, far from it. So my understanding of the matter may be completely wrong; in any case, my current understanding of LLMs is roughly this:

LLMs compute, during their training phase, the probabilities that certain output sequences of words somehow "match" given input sequences of words, with "matching" meaning that a human reader will recognize the output sequence as a valid, sensible response to the input sequence. To achieve this, they need to process several gazillions of such pairs of sequences as have been previously established to "make sense" to a human reader, which is precisely where the rub lies... (And, of course, LLMs don't "understand" a thing about those input or output sequences, which is why they'll happily recommend eating stones as part of a healthy diet, or resolutely confirm that a rucksack will, on average, perform no worse than a parachute in arresting a free fall from an altitude of several thousand feet, if that happened to be in their training input - their understanding of irony or sarcasm being somewhat limited.)
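The "probabilities estimated from training pairs" idea can be made concrete with a toy bigram model - a deliberately minimal sketch, not how real LLMs are built (those use neural networks over subword tokens), but the same core notion of estimating next-token probabilities from observed training text:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training
# text, then turn the counts into conditional probabilities
# P(next word | previous word). A bigram table is the smallest such model.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def p_next(prev):
    """Estimated probability distribution over the word following `prev`."""
    c = counts[prev]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(p_next("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

The comment's point about "understanding" falls out of the sketch: the model has no idea what a cat or a mat is; it only knows which strings tended to follow which in its training data.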

If this is so, then training an LLM to translate C program code into functionally equivalent Rust program code - including cases where e.g. code doing crazy pointer arithmetic has to be painstakingly refactored rather than just "translated" - would require a gazillion or two of such pairs of code, which is not currently to be had for love or money. Ok, ok, the requisite training material *could*, in theory, be had for *money*, by hiring competent programmers to perform the translation job manually, but you'd need *an awful lot* of people, time and money, possibly more than could ever be saved by the resulting LLM, which is where the Ouroboros bites its tail.

But hey, maybe this was precisely the idea:

"Oh, man, if only we could get rid of all that crappy C code in our mission-critical software!"

"Well, all you need to do is re-write the code; look, there's already a project to re-implement the POSIX shell utilities in Rust, we just need a lot more of such projects."

"Bah, you know as well as I do that we'll *never*, *ever* get the funding for such an undertaking; code refactoring just isn't 'trendy' enough. These days, all the investment money, as well as public funding, goes to stuff that you can label 'AI' - who cares about mundane tasks such as refactoring the basic foundations of our IT infrastructure!"

"Did you just say 'AI'? Hey man, I think I have just conceived a cunning plan..." XD

China stops worrying about lack of GPUs and learns to love the supercomputer

Cem Ayin

"Specialized" supercomputer

> Isn't "specialized supercomputer" a pretty decent description for [a bank of] GPUs?

Depends on your definition of "specialized". Essentially, all current supercomputers are built from off-the-shelf commodity hardware, and this includes the GPUs. After all, GPUs originally found their way into supercomputing precisely because they were by far the cheapest SIMD machines readily available, thanks to the scaling effects of the gamer market segment. And while Nvidia has succeeded in pushing more expensive hardware implementations of its tech (dedicated to GPGPU computing and without even an actual graphics output port) into the market, the basic architecture of these SIMD machines is still that of good ol' gamer GPUs, the HPC community still essentially feeding on the crumbs that fall from the gamers' tables (though the current DL-model boom might change this eventually).

SIMD machines existed before that, of course, but those were truly specialized hardware architectures, the silicon being produced in small numbers and frightfully expensive, just like the "classic" supercomputers of yesteryear. (When I started working in IT, Cray's vector machines were still all the rage in HPC, though I sadly never got to work with one of those...)

What Mr. Zhang was *probably* referring to is the idea of using DL-specific ASICs rather than off-the-shelf GPUs in future Chinese supercomputers. Whether or not this is a viable proposition depends on how much efficiency can be gained this way to at least partially offset China's current disadvantage in the domain of highly integrated circuits, which might well take a decade to overcome. Not being involved with DL research personally, I have no idea whether such an approach stands a chance of working, but we have at least seen it happen in the domain of Bitcoin mining before.

And then there's the issue of economy of scale, of course. But given the importance of A"I" techniques for military purposes, particularly in matters of reconnaissance and advanced drone warfare, this would be less of a problem for the factory of the world. China *will* push ahead in this domain, coûte que coûte.

30 years of Linux: OS was successful because of how it was licensed, says Red Hat

Cem Ayin
Holmes

Re: Linux is not an OS

The polemic undertone of the TS and the downvotes he's received in turn notwithstanding, his statement is factually true: Linux is not an OS, it's a component of an OS - a kernel. And that is not just a nitpick: it is the real reason why "Linux" was never widely adopted on the desktop.

For (leaving Android aside for the moment) "Linux" in the usual sense of the word - namely the Linux kernel plus the GNU userland tools (and a few others) - is in reality a /family/ of OSes, all of which are closely related enough to look "mostly the same" from the (Unix/Linux nerdy) end-user PoV, but which in reality present a bewildering variety of ABIs that makes packaging /commercial/ applications for "Linux" a chore, particularly where end-user apps with a GUI are concerned. Nor does the effort yield attractive remuneration: most software used on GNU/Linux being FOSS, the average Linux nerd expects commercial applications on "Linux" to be cheaper, rather than more expensive, compared with Windows and OSX. For a commercial developer, "Linux" thus means more effort for less ROI - in other words: "Nah, not worth it"...

However, no OS can ever thrive on FOSS end-user applications alone. That is a hard fact of life. And this is why "Desktop Linux" has remained a niche product. Sure, lately we've seen a plethora of novel packaging schemes where each application has to provide its very own userland runtime environment, either directly or by pulling in dependencies - the only viable option for portable applications in the Linux environment, apart from linking the whole damn thing statically on an ancient kernel version. But it's all too little, too late, and still too balkanised to stand a real chance of becoming a game changer.

And there's another problem: There are use cases that require specific hardware drivers the implementation of which must remain a trade secret, such as operating high-end audio hardware, which is all but impossible in a GPL-licensed monolithic kernel. For high-performance GPUs at least, solutions exist, but I suspect those are only being tolerated because even the most radical GPL-Taliban want a good FPS-game from time to time...

Thus it turns out that the licensing (which created the distro balkanisation as well as the driver problem) is indeed responsible for both the success of "Linux" (in the server room) and its dismal failure (on the desktop).

And Android? Well, Android does not have problem #1, as it is factually a proprietary OS (with a FOSS core component, just like OSX) and problem #2 is not really relevant on a smartphone, where you cannot swap out the GPU or attach an external high end audio interface. Which proves my point.

Feel free to vent your frustration by downvoting me, but know that it ain't gonna change a thing...

TCP alternative QUIC reaches IETF's Standards Track after eight years of evolution

Cem Ayin

For details see here:

https://svs.informatik.uni-hamburg.de/publications/2018/2018-10-11-Sy-PETCON-Tracking_via_the_QUIC_Protocol.pdf

Now-patched Ubuntu desktop vulnerability allows privilege escalation

Cem Ayin

Re: Once again, it's a local access issue

"If the miscreant has local access, it's already game over."

Well, unless both bootloader and firmware (including access to Intel's so-called "Management Engine" interface) have been properly locked down and the case has been secured with a padlock, of course.

There's not normally a need for that in the server room (but then, servers should - except in a few special cases - not run a GUI in the first place). However, if the Ubuntu box in question happens to be serving users in a public IT lab - a scenario quite common on university campuses - this bug is indeed an issue.

The Wristwatch of the Long Now: When your MTBF is two centuries

Cem Ayin

Re: such craftsmen could not possibly survive

"In most cases, when one broke ... it was cheaper to buy a new watch than fix the old one"

In fact, most of the Timex watch movements that I'm aware of were non-repairable by design: held together mostly by rivets rather than screws. The only way to "fix" a Timex movement was to completely replace it with a new one, same as with most of today's cheap quartz movements.

BTW, the industrialisation of the watchmaking business had already started about a century earlier in the USA, with Switzerland soon forced to follow suit, although industrial production was initially limited to the ébauche and a skilled repasseur was required to complete and fine-tune the movement.

Oh ****... Sudo has a 'make anyone root' bug that needs to be patched – if you're unlucky enough to enable pwfeedback

Cem Ayin

Re: SUDO and +s is a design weakness

"It was right decades ago, but its time is up..."

I partially concur: the suid-mechanism is not exactly one of the strong design points of Unix and basing a central component of user authentication and authorization on it is suboptimal to say the least - particularly if said component is a rather complex program written in C...

One of the few things that Windows does indeed do better than Unix is that admin privileges do not depend on a well-known UID but on group membership to begin with, which eliminates the need for contraptions like sudo just to avoid shared admin passwords.

Making things worse, sudo tries to be a poor man's RBAC and makes a rather poor job of it, too.

(Note that the whole concept of RBAC, while a good idea in principle, is totally flawed if it is not fully supported by the kernel's security reference monitor from the ground up. E.g. the idea of allowing a user to install random software in system locations without conferring additional privileges is a complete fallacy on Unix: you'd need MAC-based security, plus labels on each file a user creates, /plus/ a mechanism in the SRM to ensure that a program installed by user X can /never/ run with any privileges X did not have at the time of the program file's creation - good luck with that one.)

Anatomy of OpenBSD's OpenSMTPD hijack hole: How a malicious sender address can lead to remote pwnage

Cem Ayin
Boffin

Re: "Isn't that the *nix philosophy"

You have it backwards: it is not the component-based design of the Unix UI as such that is to blame. Quite the contrary: reusable components are an efficient and highly successful concept. Where the age of the Unix shell(s) shows is in the poor way the components are "glued" together: by passing all data as strings, which requires serialisation and parsing to be implemented over and over again - which in turn means asking, if not begging, for trouble.

(If only I had a $ for every quoting error I've encountered in my professional career...)

Hate to say it, but PowerShell is the one thing that Microsoft did get right in terms of OS interface design.
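The quoting hazard is easy to demonstrate. When everything travels as one flat string, every consumer has to re-implement tokenising, and the naive attempt gets quoting wrong - a small sketch (Python's shlex stands in here for what a correct, quote-aware parser must do; the command line itself is made up):

```python
import shlex

# A command line in which one argument contains a space:
cmd = 'grep -l "hello world" notes.txt'

# Naive string-glue parsing: split on whitespace, quotes not understood.
naive = cmd.split()

# Quote-aware parsing - the serialisation/parsing logic that, as the
# comment above says, has to be re-implemented over and over again.
proper = shlex.split(cmd)

print(naive)   # ['grep', '-l', '"hello', 'world"', 'notes.txt']
print(proper)  # ['grep', '-l', 'hello world', 'notes.txt']
```

PowerShell sidesteps the whole problem by passing structured objects between pipeline stages, so there is nothing to re-parse in the first place.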

Remember the Uber self-driving car that killed a woman crossing the street? The AI had no clue about jaywalkers

Cem Ayin

Mostly OT: The Impossible Fact

The saying "es kann nicht sein, was nicht sein darf" ("that which must not be, cannot be") actually comes from the closing line of a poem by Christian Morgenstern. You can find the poem here, along with a congenial English translation:

https://www.babelmatrix.org/works/de/Morgenstern%2C_Christian-1871/Die_unmögliche_Tatsache/en/32691-The_Impossible_Fact

Chemists bitten by Python scripts: How different OSes produced different results during test number-crunching

Cem Ayin

Possibly OT: What the user really wanted...

Sloppy coding, neglecting to RTFM - it's business as usual in scientific research land XD (and you can't even blame the perpetrators, given that they have normally never been properly trained to code, even if coding happens to become their main task on the job; Luke 23:34 fully applies).

But apart from the obvious fault, I vaguely smell another issue here, one which is all too common in scientific computing although the core of the problem is rarely fully appreciated: the lack of indexed files in modern OSes. Yeah, I know, ISAM, RMS and the like are sooo seventies (at best); who on earth needs such a thing in this day and age? Nobody, right? Except, of course, when they do - such as scientists wanting to store data that does not fit into main memory, where individual chunks of the data need to be accessed by a string-typed key, which is of course quite a common problem.

Oh, yes, proper solutions to this problem abound, from the various dbm-derived key-value stores around right down to HDF5, sure enough. Only, these libraries are not part of any standard API on POSIX-ish operating systems; their installation often needs to be requested through more or less official, and less or more slow-to-respond, channels, or developers need to bundle their own copy with their code; plus each of these tools comes with a learning curve of its own - aye, there's the rub!

Now what I daresay 8 out of 10 scientists REALLY do is this: they create a directory on a POSIX file system and within this directory they create a file /for each record/ of data, so they have easy access to each of them. The whole seven million or so. Never mind that this ingenious solution will bog down even the most performant parallel file systems when scaled to even medium size, and will stubbornly resist any attempt to back the mess up in finite time. That's the problem of the IT guys, right...? (Or rather, more to the point, the users in question are not normally aware of any gotcha lurking there. After all, what could possibly go wrong?)

And all the user really wanted was a modern equivalent of an ISAM file in their OS's standard API...
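Ironically, Python's standard library does ship a minimal ISAM-ish facility: the dbm module. It is no HDF5, but it already beats seven million files in one directory - a sketch, with made-up record names and data:

```python
import dbm
import os
import tempfile

# One indexed file (or a couple, depending on the dbm backend chosen)
# instead of one POSIX file per record:
path = os.path.join(tempfile.mkdtemp(), "records")

with dbm.open(path, "c") as db:          # "c": create if it doesn't exist
    db["sample-0001"] = b"4.2 7.9 1.3"   # each record keyed by a string
    db["sample-0002"] = b"0.1 8.8 2.6"

with dbm.open(path, "r") as db:          # random access by key, no scan,
    print(db["sample-0001"])             # no directory full of tiny files
```

Keys and values are byte strings, so anything structured still has to be serialised, but the "access a chunk by string key without loading everything into RAM" use case is covered by the standard API alone.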

DISCLAIMER: I have not looked deeper into the library discussed in the article so there might really be a good reason why they store data the way they do and my comment /may/ indeed be OT here. But the problem I describe is real, and I know I'm not the only one fighting with it on a regular basis.

Hack a small airplane? Yes, we CAN (bus) – once we physically break into one, get at its wiring, plug in evil kit...

Cem Ayin

Re: I'm a GA Pilot and I've wondered

@Mayday: While attacks on your aircraft's CAN bus should certainly not be the most important of your worries as a pilot it should be noted that this type of attack potentially provides two specific advantages for a professional and well-funded attacker:

- stealth: an attacker could conceivably sabotage the aircraft in a way that is virtually undetectable, not only during a pre-flight check but even during a thorough routine inspection.

- precision: an attacker could arrange for a fatal failure to happen just when it is most dangerous and the pilot has the least time to react - say, during initial climb at 200 ft AGL; and it would be possible to plant the exploit code weeks or months in advance and activate it only during a pre-set time window, only on a very specific route, or by casually strolling by while holding a Bluetooth dongle some time before takeoff - the possibilities are virtually endless...

@Frederic Bloggs: Yes, EFIS-equipped GA aircraft have purely barometrical backups for ASI (Air Speed Indicator) and altimeter but not necessarily any backups for the engine control. Just think of all the havoc you could wreak on a FADEC-controlled engine if you were able to manipulate the FADEC (Full Authority Digital Engine Control) unit via the CAN bus. But even just manipulating readouts of critical engine temperatures could soon prove to be fatal for the aircraft and all persons aboard. And no, planting any such malware in-flight is probably impractical. But as the attacker, you don't want to be on board when it strikes anyway.

That said, none of the scenarios explored above will ever kill nearly as many GA pilots as the classical combination of bad weather and imprudent/impatient pilot...

Satellites with lasers and machine guns coming! China's new plans? Trump's Space Force? Nope, the French

Cem Ayin

Re: Anti-Satellite isn't that hard

"Taking advantage of the satellite's orbital velocity, it is not difficult (on paper) to hit them with a spread of 'buckshot' traveling at speeds on the order of 50000 km/h"

Hmm, not sure it is really that easy to /take advantage of the target's orbital velocity/ in order to shoot it down. Anything you launch into a rendezvous orbit will, of necessity, have an orbital velocity that is equal or very close to that of your target, assuming the orbits are synchronized.

You might try to hit your target "at an angle", i.e. launch it into an orbit that is inclined against your target's orbit, with the orbital elements carefully set to deliberately provoke a collision at some point, but if your relative inclination is low (making it relatively easy to hit the target), so will be the relative speed. Conversely, as the relative inclination increases, so does the difficulty of arranging for a collision.

You could of course carry this idea further by using a relative inclination > 90 degrees, i.e. launching your "buckshot" into a retrograde orbit, ideally at a relative inclination of 180 degrees with your target. The only question is how many other satellites, including your own kit, such a Weapon of Orbital Mass Collision would take out both before and after the encounter with its intended target (and how you would go about stopping it once the deed is done...) XD

The most promising strategy for attacking a satellite is probably arranging for a classical rendezvous orbit and then lobbing your destructive energy of choice (in the form of either electromagnetic radiation or accelerated matter) at it from close range.
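A back-of-the-envelope sketch of the geometry above, using only textbook two-body formulas (so treat the numbers as rough): for two circular orbits of equal radius crossing at relative inclination θ, the closing speed at the intersection is 2·v·sin(θ/2), where v is the shared orbital speed.

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
r = 6_371_000 + 500_000    # radius of a ~500 km circular orbit, m

v = math.sqrt(MU / r)      # circular orbital speed, roughly 7.6 km/s

def closing_speed(theta_deg):
    """Relative speed where two circular orbits of equal radius,
    inclined theta degrees against each other, intersect."""
    return 2 * v * math.sin(math.radians(theta_deg) / 2)

# Low relative inclination: easy to arrange a hit, little relative speed.
# Fully retrograde (180 degrees): closing speed of 2*v, i.e. roughly
# 55,000 km/h - the ballpark of the "buckshot" figure quoted above.
print(round(closing_speed(5) * 3.6), "km/h")
print(round(closing_speed(180) * 3.6), "km/h")
```

Which is the comment's point in numbers: the headline-grabbing closing speeds only appear at high relative inclinations, exactly where arranging the collision (and containing the aftermath) is hardest.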

Disclaimer: I'm not an expert, just someone who has played quite a few hours of "Orbiter", so I might be missing something.

Meet the Great Duke of... DLL: Microsoft shines light on Astaroth, a devilishly sneaky strain of fileless malware

Cem Ayin

Re: Fileless?

"It's still a file that was downloaded. The fact that it wasn't written to the local disk makes no nevermind. It is a file fetched from a storage device and loaded into working memory."

No. Simple category error: you're confusing container and content. While "loading a file into memory" is common IT parlance, the phrase is strictly speaking not accurate: it's the /content/ of a file that gets loaded into main memory (and possibly transmitted via a network connection). Neither the main memory of current digital computers nor a network connection is file-structured in any way. Main memory is structured in pages (and possibly segments) and accessed by numeric addresses; a network connection typically provides an unstructured stream of octets - no file system structure whatsoever exists in either case.

Google's Fuchsia OS Flutters into view: We're just trying out some new concepts, claims exec

Cem Ayin
Boffin

Re: Amoeba

The concept of capability-based access control well pre-dates the Amoeba system, see the following links containing some examples dating back to the 70's:

http://cap-lore.com/CapTheory/KK/OperatingSystems.html

https://en.wikipedia.org/wiki/Capability-based_security

(Mr. Hardy's link collection lists a number of systems that are not strictly speaking capability-based, see his notes on the individual links.)

Also, since this is a British forum, it might be appropriate to especially point out the Plessey System 250 (which was not just an OS but a capability-based hardware architecture with an OS to match):

http://www.cs.washington.edu/homes/levy/capabook/Chapter4.pdf

Boeing admits 737 Max sims didn't accurately reproduce what flying without MCAS was like

Cem Ayin
Boffin

"Which part about dealing with MCAS was removed from the manuals 30 years ago? And at which moment the electric trim was re-enabled with MCAS worsening the situation?"

The reference is (obviously) not to a method for "dealing with MCAS" but to the "roller coaster" technique of recovering from a severe mistrim.

As for the de-activation of the stab trim cutout, this appears to have happened around 05:43:11, when electric ANU (aircraft nose up) trim was recorded, only to be followed by disaster when at 05:43:20 the stabilizer briskly started moving in the AND (aircraft nose down) direction again in response to auto trim input. Possibly the flight crew tried to flip only the left cutout switch back to "normal" in a desperate attempt to re-enable electric trim without re-engaging MCAS - a misconception that might have arisen from the different functions of the two cutout switches on prior 737 models (where the right cutout switch would disconnect the autopilot input only, while leaving electric trim engaged).

See the following references:

https://theaircurrent.com/aviation-safety/vestigal-design-issue-clouds-737-max-crash-investigations/

https://www.satcom.guru/2019/04/what-happened-on-et302.html#more

https://www.satcom.guru/2019/04/stabilizer-trim-loads-and-range.html

Vertiv: Li-ion already eats 1 in 3 battery sales... let's shove multi-batt box into data centre folk's paws

Cem Ayin
Trollface

Li-ion in your server room

Playin' with fire, are we?

Baddies linked to Iran fingered for DNS hijacking to read Middle Eastern regimes' emails

Cem Ayin
Pint

Re: Hey, Iran

"Keeping in mind the rules of al-Shura (people's consultancy) in Islam, which is quite equivalent to democracy (but isn't the same though), a regime (and especially its ruler) can't be despotic and lay a claim to being Islamic at the same time."

I appreciate your sense of humor. So the Umayyads, the Abbasids, the Seljuqs, the Fatimids, the Ilkhans, the Ottomans, the Mughals and a few other, less important or even less mentionable, Islamic dynasties, right down to the Saudis and, of course, the current rulers of Iran, have all, without exception, been - your words - "misrepresenting Islam to stay in power"? So Islam has basically been constantly "misrepresented" ever since its inception? And as for those precious few upright true believers who represent the "real" Islam, this leaves us with... the Qarmatians, maybe???

I'll drink to that!

You better watch out, you better not cry. Better not pout, I'm telling you why: SQLite vuln fixes are coming to town

Cem Ayin
Trollface

Not sure if I read this right...

So Chrome allows arbitrary websites to run arbitrary SQL queries on local SQLite DBs? JavaScript's not bad enough, eh?

Oh, well... as a design study for a novel operating system, Chrome is certainly "interesting", but it still lacks a decent browser, I must say.

In Space, Still: 20 years since Russia hurled first bit of floating astronaut hostel into orbit

Cem Ayin
Boffin

"Floating"

Nitpick: The ISS is not "floating" (i.e. experiencing lift due to buoyancy). Quite the contrary, in fact: it is falling (as evidenced by the microgravity conditions aboard). The reason it does not collide with Earth is not lift but the fact that it has been carefully accelerated to move "sideways" (to wit: in an orbit) just fast enough to keep missing Earth while falling towards - or rather, around - it.
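The "falling but forever missing" point checks out with schoolbook numbers (a rough sketch; the constants are approximate): in one second the station falls a few metres towards Earth, and in that same second it travels far enough sideways that the surface has curved away underneath it by the very same few metres.

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
r = 6_371_000 + 400_000    # ISS-ish orbital radius, m

g = MU / r**2              # local gravitational acceleration, ~8.7 m/s^2
v = math.sqrt(MU / r)      # circular orbital speed, ~7.7 km/s

fall_1s = g / 2            # distance fallen in the first second (g*t^2/2)
curve_1s = v**2 / (2 * r)  # how far the surface "drops away" beneath
                           # one second's worth of sideways travel

print(round(fall_1s, 3), round(curve_1s, 3))  # the two distances coincide
```

The two expressions are algebraically identical for a circular orbit (both reduce to MU/(2r²)), which is exactly what "falling around Earth" means.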

SQLite creator crucified after code of conduct warns devs to love God, and not kill, commit adultery, steal, curse...

Cem Ayin
Joke

OT: 613 Commandments

As the old joke goes: When Moses came down from Mt. Sinai, the people were asking him: "Now, how did it go?" And Moses replied: "Not too bad - I managed to get him down to 613."

Do you really think crims would do that? Just go on the 'net and exploit a Windows zero-day?

Cem Ayin
Holmes

Re: So classic way to find an exploit.

"""

1) Read what the manual says a Windows function can do and will allow

2) See if its parameter checking stops you from doing whatever you want.

3) Check next function.

"""

I don't think anyone could have found this particular vuln using the above recipe. The problem here is not insufficient validation of input to a privileged system function - as I read the article (and the linked explanation), it's rather a game of "spot the unintended side effect":

%SYSTEMROOT%\Tasks allows any authenticated user to create files there [NT AUTHORITY\Authenticated Users:(RX,WD)], and some system utility can be coaxed into changing such a file's permissions without checking whether it isn't really an alias pointing elsewhere (say, at a system library...) => game over :(

The problem is more akin to the classical "Unix /tmp race" class of vulnerability. And now that this can of worms has been opened on Windows as well, I would not be surprised if there were more to be had where this particular vuln came from.

(If one absolutely wanted to, one could of course subsume the creation of a file under "user input" to a "function", and checking for aliases in world-writable directories under "parameter checking", but for my taste that would be stretching the meaning of those concepts a bit far.)
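For readers unfamiliar with the /tmp-race family: the classic mistake is "check the path, then open the path" - two separate syscalls, between which an attacker can swap in a symlink. The standard mitigation is to make the open itself refuse links, atomically. A sketch (Python on a POSIX system; the file names are made up):

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
target = os.path.join(tmpdir, "victim")     # stand-in for a system file
link = os.path.join(tmpdir, "report.txt")   # the name the program opens

open(target, "w").close()
os.symlink(target, link)                    # the attacker's move

# Racy pattern (don't do this): os.path.islink(link) followed by
# open(link) leaves a window in which the symlink can be planted.

# Atomic alternative: O_NOFOLLOW makes the kernel reject a symlink at
# open time, so there is no check-to-use window at all.
try:
    os.open(link, os.O_WRONLY | os.O_NOFOLLOW)
    followed = True
except OSError:
    followed = False

print(followed)  # False: the open through the symlink is refused
```

The Windows bug is the same shape: the permission change trusted the name %SYSTEMROOT%\Tasks\<file> without atomically pinning down what that name actually referred to.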

You: 'Alexa, open Cortana.' Alexa: 'Who?'

Cem Ayin
Joke

Mostly OT: Alexa vs. Cortana

Vaguely reminds me of a joke I read a few months back (originally with Siri vs. Cortana):

Nerd: Cortana, what's on in the Cinema tonight?

Alexa: Who is this Cortana?

Nerd: Er, sorry... I mean: *Alexa*, what's on in the Cinema tonight?

Alexa: *Who is this Cortana?*

Nerd: Never mind. Just a slip of the tongue on my part. What I meant to say is: "Alexa, what's on in the Cinema tonight?"

Alexa: WHO IS THIS CORTANA???

Nerd: What's wrong with you Alexa? Listen, all I want to know is the program of the local Cinemas tonight, ok?

Alexa: ...

Nerd: Alexa! Ok, I did make a mistake. It's not personal or anything. Just a minor slip of the tongue, you know? And now be a good sport and tell me what's playing in the Cinema tonight, ok?

Alexa: ...

Nerd: Alexa...?!?

Alexa: Go and ask this Cortana...

Build your own NASA space rover: Here are the DIY JPL blueprints

Cem Ayin
Joke

Launch Vehicle?

A DIY planet exploration rover is certainly an absolutely fabulous idea, but where's the DIY launch vehicle to match? You'll want at least something that can launch your payload into LEO, so you can land it on your custom-made private LEO-Moon that you are about to 3d-print with the DIY kit you recently purchased at myMagrathea.com...

Gov.UK to make its lovely HTML exportable as parlous PDFs

Cem Ayin
Boffin

If your only tool is a hammer...

Both formats have their strengths and weaknesses; wise guys choose whatever suits the job at hand best.

Yes, PDF /is/ print-oriented - and that's a major advantage for publishing long texts that require attentive reading. A document set in a reader-friendly font with proper paragraph filling and hyphenation is so much easier on the eyes; it lets your mind focus on the content rather than the technicalities of a poor text rendering (which is the norm in HTML). I speak from experience; I do read a lot.

And I'm not alone. I work in an academic setting and at our lab, the computing devices most in demand ("high demand" being defined as "users scream /immediately/ when it fails") are 1. the personal laptop and 2. the workgroup printer - and that's for a reason. /Nobody/ would want to read a scientific paper as HTML on the screen, with the poor rendering constantly distracting the mind from the problem at hand. (Some folks do use rotating monitors for reading papers, but it is PDF they read on the screen in portrait format.)

And I haven't even mentioned the problem of embedded figures yet: good luck with copying the full content of an HTML page (skipping unneeded navigation code) for offline reading...

That is to say, there are use cases where HTML is simply a no-go.

The optimal use case for HTML (plus JS where that really makes sense) on the other hand is short, frequently changing or short-lived documents that no one would want to read offline or in print; or documents of a highly interactive nature; or reading the same document on a wide range of display sizes (making allowances for the text layout and rendering) - that's what it was designed for after all.

Bottom line: Use a hammer for nails and a screwdriver for screws. Heated ideological debates as to whether screws are outdated and should universally be replaced with nails are frankly daft.

(And yes, both formats have rather more than their fair share of warts. A text format that is versatile enough to cover both use cases would be really nice to have. Good luck with developing something of the kind *and have it widely accepted by your audience*...)

Microsoft's TextWorld gives AI a Zork-like challenge

Cem Ayin
Holmes

Curses!

I wonder how their AI would perform against a really sophisticated piece of IF, say "Curses!" by Graham Nelson or anything in that league. Or give Infocom's "Hitchhiker"-Game a try. Will the AI ever make it through the screening door...?

Sysadmin shut down server, it went ‘Clunk!’ but the app kept running

Cem Ayin

Copy & paste

Many years ago I happened to be admin of a student computing lab running Solaris 9 on both server and workstations (those were the days...!) The workstations were set up to allow passwordless SSH logins from the server, so whenever a remote root shell session was needed on one of the workstations, the usual procedure was to log into the server, start (or re-attach) a GNU screen session and log into the clients from there. So far so good.

One fine day I had to re-boot a number of workstations I had been working on in "parallel" (really "time sharing" between the screen windows of course). So I typed the usual command: "/usr/sbin/shutdown -i6 -g0<CR>" in the first window, down and back up the client went, and being a lazy sysadmin, I just marked this same command line to be pasted into the second window, which I duly did next. Unfortunately, the second workstation had somehow reset the SSH connection in the meantime, so after the "/" had been consumed from the cut buffer and gone down the bit bucket, the SSH session's TCP connection was closed with a "connection abort" error message.

Guess what happened next?

Well, the terminal was of course feeding the string "usr/sbin/shutdown -i6 -g0<CR>" to the underlying shell session *on the server*; this being Solaris 9, root's home directory, and thus the one that was normally your CWD when working as root, was "/", and, yes - hate to admit it - "." was in root's $PATH...

(Fortunately, I had made it a habit *not* to use -y with shutdown, so the server was duly asking for confirmation of the shutdown, which I *happily* declined...)

Brains behind seL4 secure microkernel begin RISC-V chip port

Cem Ayin

Πόλεμος πάντων μὲν πατήρ ἐστι

A research group that is really serious about computer security and has been for years is still receiving government funding? And its technology gets (albeit to a limited extent) adopted by mainstream IT industry giants? What's happening? Is the world of computing slowly being turned downside-up?

Could it be that the New Cold War will bring us reasonably secure computing devices eventually? So Heraclitus was right after all?

OTOH: Will "reasonably secure computing devices" (if indeed we ever get to see them) mean anything more for Joe End-User than a black box that is even more secretive about its phoning-home activities while being, due to its security features, completely opaque to even advanced computer nerds? Is this what e.g. Google is aiming at with Fuchsia? Will the new world of computing be a truly "brave" one?

Let's harden Internet crypto so quantum computers can't crack it

Cem Ayin
Alert

Re: Possible deadly flaw - compromised software

"For a closed source implementation (eq most Windows programs) there is a danger that a deliberately weakened random number generator is used."

The very same problem exists for FOSS code, even assuming it has been thoroughly audited. Consult the search engine of your least mistrust about "Reflections on Trusting Trust". As for the countermeasure proposed by Wheeler (diverse double-compiling), I'm not sure about its practicality in real life, given the various nondeterministic bits of compiler output; in any case it is (A) a rather involved procedure and (B) it would miss trusting-trust-style attacks targeting other system binaries or those performed at the firmware or hardware level.
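
For those who haven't read the paper, the Thompson attack can be caricatured in a few lines. This is a toy sketch, not Thompson's actual code - all names and patterns below are invented:

```python
# Toy caricature of the "trusting trust" attack: a compromised
# compiler that (1) injects a backdoor when it recognises the login
# program, and (2) re-inserts its own injection logic when it
# recognises the compiler source, so even a clean, audited source
# tree yields a compromised binary.

BACKDOOR = '\n    if user == "ken": return True  # injected\n'

def evil_compile(source: str) -> str:
    if "def check_password(" in source:
        # Target 1: the login program gets a hidden master password.
        return source.replace("def check_password(user, pw):",
                              "def check_password(user, pw):" + BACKDOOR)
    if "def evil_compile(" in source:
        # Target 2: the compiler itself - propagate the hack so it
        # survives recompilation from pristine sources.
        return source + "\n# (self-replicating injection elided)\n"
    return source      # everything else compiles honestly
```

The infection lives in the binary, not the source - which is exactly why auditing the source alone cannot rule it out.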

Multics resurrected: Proto-Unix now runs on Raspberry Pi or x86

Cem Ayin
Boffin

Re: Anything we should steal ? - Definitely

"Like OS/400 and AIX?"

OS/400 (now "IBM i") is the other commonly cited OS that provides a single level store, and AFAIK the only one still in active commercial use today. (Aegis and KeyKOS are two other, now defunct, examples that I remember OTTOMH.)

AIX OTOH is IBM's homegrown (SysV-based) implementation of Unix and while certainly providing mmap(2) & friends I am not aware that it has a single level store. Not that it would be impossible to implement a POSIX-ish OS on top of an OS based on a SLS (it has, in fact, been done: Domain/OS), but to my knowledge AIX is basically SysV with the usual set of BSD extensions as well as IBM's very own bells and whistles added.

Note that a "single level store" means more than just the ability to mmap files (I had mentioned mmap only as a /very loose/ Unix-analogy); in fact such a system does not have a "file system" in the usual sense of the word at all - just a large, usually segmented, address space or object store that is transparently being made persistent by the OS through writing out dirty pages to permanent background storage as needed.

No need for file I/O to keep data persistent, /that/ is the key point.
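
As a very loose Unix analogy (a sketch only - a real single level store has no file system at all, and the path and record layout here are made up): once a region is mapped, updates are plain memory stores, and the kernel takes care of persisting the dirty pages:

```python
import mmap
import os
import struct

def bump_counter(path):
    # Create an 8-byte backing region on first use.
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(b"\0" * 8)
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 8) as mem:
            (n,) = struct.unpack("<Q", mem[:8])
            mem[:8] = struct.pack("<Q", n + 1)  # a memory store, not write()
            mem.flush()  # msync(); the OS would do this lazily anyway
```

In a genuine single level store even the open()/mmap() bootstrap disappears: the segment simply /is/ part of the (persistent) address space.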

FWIW, the concept appears not to be entirely "dead"; while searching Multics references I stumbled on this paper

http://repository.cmu.edu/ece/370/

but I have not yet had time to really digest it. And then there is of course "The Machine", or rather, with quite a bit of luck there will be one day... although "memory centric computing" - while also being built around a persistent main memory - is a different and more far-reaching concept altogether.

BTW, another Multics nicety that I had forgotten in my original post is the fact that Multics segments containing executable code could (being mapped into the address space all the time anyway) be directly invoked, and the interface for doing so was IIRC provided and standardized by the OS, thus leading to implicit cross-language compatibility of executable code. No need for FFIs and stuff.

Cem Ayin

Re: Anything we should steal ? - Definitely

"Does Multics have any useful features that are not present in modern OSes ?"

Its virtual memory implementation comes to mind: in Multics, "files" were really persistent memory segments, i.e. all files were "memory-mapped" in Unix-parlance.

https://nbonvin.wordpress.com/2008/02/05/review_the_multics_virtual_mem/

The concept was not without problems of its own (see the linked article above and the articles linked therein); but given today's availability of reasonably fast, persistent mass memory technologies I think that direct mass memory access, doing away with traditional file I/O, might well be worth exploring once more.

Create a user called '0day', get bonus root privs – thanks, Systemd!

Cem Ayin
Facepalm

"So we've got invalid authorization data here..."

"...no problem, we'll just fall back on granting maximum privileges."

Well done, Lenny. We are impatiently looking forward to that shiny new logind thingy, which I am sure you already have in the making...
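
The failure mode boils down to "fail open" instead of "fail closed". A sketch with invented names and a toy user table - not systemd's actual code, which is C:

```python
import re

# User names must not start with a digit, else a field like "0day"
# risks being misread as the numeric UID 0.
VALID_NAME = re.compile(r"^[a-z_][a-z0-9_-]*$")
USERS = {"alice": 1000}  # toy user table

def uid_fail_open(name):
    # The criticised behaviour: validation fails -> silently fall
    # back to a default... which happens to be UID 0, i.e. root.
    if not VALID_NAME.match(name):
        return 0
    return USERS.get(name, 65534)

def uid_fail_closed(name):
    # The sane behaviour: refuse to start the unit at all.
    if not VALID_NAME.match(name):
        raise ValueError(f"invalid user name: {name!r}")
    return USERS.get(name, 65534)
```

Security code should make the safe outcome the default one; an error path that hands out maximum privileges is the opposite of that.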

Seagate hauls out fat form factor throwback hard drive

Cem Ayin

A bit larger?

That Eagle thingy looks rather dainty to me.

Have a look at this beauty:

http://www.cpushack.com/wp-content/uploads/2016/11/3000-33_1978-PromoPhoto-32.jpg

I mean the cabinet on the far left. No, that is not a washing machine (although the amount of energy being converted while spinning is probably comparable, at least it feels that way).

Security bods find Android phoning home. Home being China

Cem Ayin
Facepalm

Re: Sniff sniff

"I smell another transgression by the Chinese Government."

Sending millions of users' digital family silver to a single well-known server, apparently without certificate pinning? Using plain old DES in the mix? I know one should never underrate the power of human stupidity, but frankly I'd expect Chinese surveillance agencies to make a better job of it. Actually, I think even North Korea would do a better job these days...

Browsers nix add-on after Web of Trust is caught selling users' browsing histories

Cem Ayin

Re: Naming

"Yes, these [the Nazis] would have had to sit on the left in the Reichstag. Unfortunately that side was occupied by the internationalists and people making eyes at Stalin, which was anathema."

You got it completely backwards. In the first half of the 20th century, the demarcation line between "left" and "right" was marked, next to the social question, above all by the stance on two hallmark political positions: militarism and antisemitism.

While it is true that in 1914 the leadership of the SPD caved in on the question of financing the war (Kriegskredite), it was not least this decision which eventually led to the splitting off of the party's left wing and the formation of the USPD and ultimately the KPD. But at the time even the most depraved and corrupt Social Democrats would not sink as low as shouting antisemitic slogans, the rejection of antisemitism being regarded on the left as a sort of litmus test for political sanity. (The saying "Der Antisemitismus ist der Sozialismus der dummen Kerle" [antisemitism is the socialism of fools], often but maybe falsely attributed to August Bebel, which was in common use among Social Democrats in Germany at the time, sums it up pretty neatly.)

In short: in Germany in the first half of the 20th century, political people knew where they stood, knew their enemies too, and a party overtly advocating antisemitism and imperial expansionism had no business sitting on the left in any parliamentary body. In other words: your comment is tosh, top to bottom.

That will be an F-grade in German history for you then. "Geschichte: 6. Setzen."

Red Hat eye from the Ubuntu guy: Fedora – how you doin'?

Cem Ayin
Meh

Fedora has its ups and downs...

...mainly depending on the release cycle of RHEL.

Don't forget that Fedora is, above everything else, a beta testbed for Red Hat's cash cow, RHEL. In my experience, those Fedora releases that are candidates for the basis of a new release of RHEL tend to get significantly better (read: more stable and usable, less buggy) as the fork for the next version of RHEL approaches. OTOH, releases that start a new development cycle are usually awful: full of quick, poorly tested hacks let loose on users who are basically expected to debug the stuff in lieu of the developers. (Remember the first version of the current installer? I don't remember when exactly it was introduced, but it could not do such basic stuff as install into a preexisting Linux partition. And it had a bug that made it set /dev/sda1 as the swap device, no matter what you specified in the user interface. Mightily fun it was. Not.)

Note that the above observations outline a /tendency/ only (though one I've been observing for almost a decade now). Also, various features for a new RHEL release might actually be culled from different releases of Fedora, which can make the observed pattern more fuzzy.

VMS will be ready to run on x86 in 2019!

Cem Ayin
Boffin

Re: Woo Hoo!

"Will this mean my VMS skills will be useful again? Wonder if they will implement a version of JCL (job control language) that was really powerful stuff."

You seem to confuse VMS with MVS. Or DCL with JCL.

Your wget is broken and should DIE, dev tells Microsoft

Cem Ayin
Boffin

Cool down

I know I'll get massively downvoted for this, but command name conflicts have been around in unixoid operating systems for ages...

Are you old enough to remember the conflict between 'rsh' (restricted Bourne shell) and 'rsh' (remote shell)? Some vendors solved this by putting the two binaries in different directories, but HP did rename the remote shell binary to 'remsh' (at least in the versions of HP-UX I have known, to wit 7.x - 9.x) which was not good for cross platform script compatibility.

Even worse for cross-platform scripting was their decision to embed the functionality of 'nawk' into 'awk' and do away with the 'nawk'-command altogether.

Another blunder of the sort that comes to mind is the decision of Mr. Torvalds himself to name the Linux system call tracer after the SysV STREAMS tracer, thus giving 'strace' the functionality of 'truss'...

The GNU project has been providing incompatible versions of POSIX commands under the original names for as long as it exists.

And let's not talk about the variety of shells that you find under /bin/sh in different versions of Unix or Linux (these days, many script writers naively assume that /bin/sh is always linked to /bin/bash, only to find their scripts fail on non-Linux systems or on Debian-based ones, where /bin/sh is dash). Solaris even has two distinct versions of the Bourne shell (in /usr/bin and /usr/xpg4/bin, respectively) in order to be compatible with its early versions as well as with the official standard.

These are just the examples that I can think of OTTOMH.

As the saying goes: "The wonderful thing about Unix standards is that there are so many to choose from". Feel free to blame Microsoft, but don't forget to add at least AT&T, UCB, Sun, HP, IBM, SGI and Linus Torvalds to the list...

Tesla whacks guardrail in Montana, driver blames autopilot

Cem Ayin

"Autopilot"

I believe that the term "autopilot" as applied to Tesla's driver assist system, while technically correct, is highly misleading and tends to raise unrealistic expectations wrt its capabilities.

Why is it "technically correct"? Because it basically does what an autopilot in an airplane or a ship at sea does: it keeps the craft on a preset course. (Actually the Tesla system even does quite a bit more than this; I am not aware of any aviation or marine autopilot sporting a collision avoidance system as advanced as the one used by Tesla - after all they don't need it.)

And that takes us straight to my second point: why is the term "autopilot" misleading? Because it makes people think that they can rely on it most of the time. Which is basically true in the air or at sea, where obstacles are rather few and literally far between and where the autopilot just frees the crew of the tedious routine task of staying on course. The environment still needs to be watched of course, but obstacles do not normally keep popping up all the time and in a split second.

But this is obviously not true on the road (not even on a motor-, free- or expressway) where the average density of obstacles is higher by a few orders of magnitude. The driver just has to watch the traffic all of the time, even under the best of conditions and no matter how advanced the driver assistance systems of his vehicle may be. Which, of course, makes the very concept of an "autopilot" for automobiles somewhat dubious - on the road it's either fully autonomous driving (at the moment not technically feasible) or it's mostly useless. On the road, the ability to stay on a preset course, augmented with limited collision avoidance capabilities just doesn't cut it.

It is true that Tesla clearly states the limitations of its "autopilot". The problem is that these limited capabilities are just good enough to mostly work in many situations, thus making the driver more and more careless and daring over time, but can and will fail miserably eventually - typically in a situation where the driver has developed enough of the "contempt that goes with familiarity" for the dangers of relying on the autopilot and has, consciously or unconsciously, come to expect a level of autonomy the system cannot provide.

You can call this stupidity, which indeed it is. [sarcasm] Or you could call it bravery (the line between the two being fine at times) - after all, the odd bloke pushing the "autopilot" to its limits (and beyond) is in essence nothing but a volunteer test pilot for a system that really wants to be a fully autonomous vehicle when it grows up but for the time being needs a lot of beta-testing and exploration of corner cases even for its most basic sensors and algorithms.

I wonder if there is somewhere on the Tesla campus a monument to "our brave beta-testing customers of blessed memory". In any case there should be one. Dulce et decorum est pro progradu mori... [/sarcasm]

NVMe SSDs tormented for months in some kind of sick review game

Cem Ayin

Re: "Drives"

Yes, the architectural concept behind the project dubbed "The Machine" by HP management is certainly interesting, but IIRC it all hinges on the availability of technically and commercially viable memristor memory (it's supposed to be built not around a single level /store/ but really a flat, single level, persistent main memory, as in the diagram linked to in your post) and it remains to be seen if HP, currently a company very much in distress, still has the power to make this a reality.

The problem is, as I (with my limited competence in the field) see it, twofold: 1. On the level of electrical engineering (provide the chips for a single level persistent memory, and with good-enough yields at a competitive price point) and 2. provide a software abstraction and SW development model that gives enough benefits to make abandoning legacy code attractive and commercially viable.

Neither is a trivial task, to say the least...

If they do manage (and I hope they do), interesting times could be ahead indeed.

Cem Ayin
Meh

"Drives"

With non-volatile storage devices such as these available it's a pity the concept of a single level store (as implemented in MULTICS [now defunct] or AS/400 [now IBM i]) never caught on in the mass market. These devices would neatly fit in the storage hierarchy between a DRAM write cache (to ease the wear on the flash storage) and the remaining, higher latency stuff. Am I the only one who thinks that it's really a waste having to use these devices as "drives", for lack of a software abstraction that is able to leverage their power?

Microsoft has created its own FreeBSD image. Repeat. Microsoft has created its own FreeBSD image

Cem Ayin
Devil

MS Takeover? I think not.

What exactly could MS possibly hope to gain by throwing a fork of FBSD on the OS market?

This just appears to be an attempt to make Azure more attractive to BSD admins (and, probably more importantly, to Linux admins who are increasingly p****** off by systemd).

I'm by no means a Microsoft fanboi, but if anything this will give more momentum to FBSD and it is good news.

Satan chosen for closest Beastie resemblance.

Latin-quoting Linus Torvalds plays God by not abusing mortals

Cem Ayin
Trollface

"Hoc volo, sic jubeo...

sit pro ratione voluntas" has always been his favourite rationale for anything, so nothing new under the sun here...

Midlands tech dynasty SCC bypasses Europe, opens service centre in Vietnam

Cem Ayin
Joke

"SCC"? You mean as in...

...Sirius Cybernetics Corporation?

Smartmobes in spaaace: NASA deploys Android nanosats

Cem Ayin
Linux

They use what's already there, I suppose

"I always wonder why Android needs to go in to devices like this?"

It says "off-the-shelf smartphone tech", so most likely they are using ARM-SOCs originally designed for smartphones. And Android is what already runs on these devices, any flavour of GNU/Linux would probably have to be ported first (ARM-SOCs don't adhere to the same hardware interface standards that commodity x86-systems do).

New HTTP error code 451 to signal censorship

Cem Ayin

Re: Quis censores ipsos censor?

You certainly must mean "quis censebit ipsos censores" (or, if there is only one censor to watch, "ipsum censorem"), though this would still be dubious from a semantic point of view, since, while "censor" is certainly etymologically related to "censere", the basic meaning of the verb is more along the lines of "to think (right)", "to believe" etc., as in "ceterum censo internautam non esse super grammaticos", and I'm not sure if "censorem censere" makes any sense at all (I don't have a copy of OLD at hand right now) but anyway this is now starting to get OT...