* Posts by Glen Turner 666

260 posts • joined 3 Jul 2009

Elon Musk puts Twitter deal on hold over bot numbers claim

Glen Turner 666

With the fall in the stock price of Tesla, the cost of Musk's funding for the acquisition of Twitter has risen -- Musk will need to stake more of his Tesla shares to purchase Twitter. One way to address that situation is to require less funding in the first place. Raising doubt about Twitter's number of users and the level of misuse of the platform are both ways to do that.

Why the Linux desktop is the best desktop

Glen Turner 666

Re: Linux "Desktop"

Linux has equivalent -- and better -- tools; they are just different enough that they're not on the radar of people looking for a one-for-one branded replacement.

Group membership in Linux is easily enough driven from LDAP. Red Hat has a nice sssd wrapper around that set of technologies.

Group policy is more often than not used as a proxy for configuration control, and under Linux you do that configuration control directly. Ansible is the weapon of choice. This has the huge advantage of allowing the 'playbooks' to be stored in Git, which means there's far better administrative and audit control of changes than is done with Group Policy. That's why AD is the #1 goal of hackers: change there is organisation-wide and auditing isn't done because the mechanism is too difficult. Compare that with Git pumping commit summary lines and diffs of Ansible playbooks (and thus enterprise-wide machine configuration files) into a Slack channel for continual casual review by the operational staff.
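As a sketch of what that Slack feed could look like -- the webhook URL, message format and commit details here are invented for illustration, and only the Python standard library is used:

```python
import json
import urllib.request

def format_commit_summary(commits):
    """Build a Slack message from (summary, diffstat) pairs, e.g.
    taken from `git log --oneline --stat` on the playbook repository."""
    lines = []
    for summary, diffstat in commits:
        lines.append(f"*{summary}*\n```{diffstat}```")
    return {"text": "Playbook changes:\n" + "\n".join(lines)}

def post_to_slack(payload, webhook_url):
    """POST the message to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: one recent (invented) playbook commit.
payload = format_commit_summary([
    ("a1b2c3d Open TCP/443 on web tier",
     "roles/web/tasks/firewall.yml | 4 ++--"),
])
print(payload["text"])
```

In practice this would hang off a Git post-receive hook or a CI job, so every configuration change lands in the channel as it happens.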

Linux tends to pull forward cutting-edge security technologies. The machine I am writing this on runs Secure Boot, to an encrypted drive based on a key in the TPM, with all system executable files and libraries under Integrity Management. It remotely logs to a server analysing activity for misuse patterns -- exactly the same software as used for our big internet-facing servers. Login requires a Yubikey to be inserted, if that's removed the screen locks. Windows is catching up, but in a typically Microsoft way -- using these desirable features to push their sales agenda, so use of a WebAuthn key requires cloud authentication rather than site-hosted AD.

Network equipment lead times to remain painfully long into 2023: Gartner

Glen Turner 666

Gartner was very timid

I think Gartner could have gone further. Although I do understand that even suggesting some vendor-independence is contrary to the way enterprise works and to some of Gartner's revenue.

Firstly, to warn of vendor lock-in via element manager software. If you'd chosen to do your switch port provisioning via generic tools -- say, Netbox and Ansible -- and monitoring via one of the good open SNMP platforms -- say LibreNMS -- then picking up whatever switching hardware is available doesn't raise massive integration and ongoing cost-of-management hassles.

Secondly, to fill out the suggestion for x86. A lot of 'appliance' middleboxes which do packet manipulation are already x86 underneath. There can be large savings in making that explicit. There's a spectrum of choices, from firewalls in VMs, to proprietary software in containers, to generic tools in containers. The state of the art is Linux's XDP software and the fd.io VPP software. Both of these will do high-touch packet manipulation at high rates (over 10Gbps on a modest server). Both can be run in easy-to-manage containers with little performance hit by selecting network interface cards with the SR-IOV feature.

A real cost of moving is in the firewall rules: and again avoiding firewall-specific element management can pay off (added to which, most firewall element managers lack sufficiently powerful storage of firewall rules: lacking auditing of change, a lot of them not even able to carry a JIRA issue ID to identify why the rule even exists; and lacking the abilities of modern configuration management like Git, such as to remove a faulty rule some months old without disturbing the changes made since). There's a lot to be said for maintaining the firewall rules off the firewall, in a YAML file, with symbolic names rather than IP addresses, and then 'compiling' that down to the vendor's format via a continuous integration job which then stages the change into Ansible.
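A minimal sketch of that 'compile' step -- the rule schema, host names, JIRA ID and the iptables-flavoured output are all invented for illustration; in practice the rules would live in a YAML file under Git:

```python
# Symbolic name -> IP address table (invented addresses from TEST-NET-1).
HOSTS = {"web-prod": "192.0.2.10", "db-prod": "192.0.2.20"}

# Each rule carries the issue ID recording why it exists.
RULES = [
    {"id": "NET-101", "action": "accept", "src": "web-prod",
     "dst": "db-prod", "port": 5432, "why": "app -> database"},
]

def compile_rules(rules, hosts):
    """Translate symbolic rules into a vendor syntax (iptables-style
    here, purely as an example of a compilation target)."""
    out = []
    for r in rules:
        out.append(
            f"# {r['id']}: {r['why']}\n"
            f"-A FORWARD -s {hosts[r['src']]} -d {hosts[r['dst']]} "
            f"-p tcp --dport {r['port']} -j ACCEPT"
        )
    return "\n".join(out)

print(compile_rules(RULES, HOSTS))
```

The CI job would run a compiler like this against the YAML, diff the result against the running configuration, and hand the change to Ansible to stage onto the firewall.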

Running a VPN server on x86 is another task best made explicit rather than using an appliance. By running the VPN server explicitly you can divorce its authentication from corporate user authentication -- replacing corporate passwords with corporate-issued tokens or keys. Then losing a phone or laptop doesn't leak that all-powerful password in some configuration file somewhere. A password loss via client VPN software appears to be the way the Colonial Pipeline was hacked -- that password didn't just allow access to the VPN the way a token might, but onto servers within the network too. Running your own infrastructure also allows multiple types of VPN -- say OpenVPN and Cisco IPsec -- which then allows the use of the VPN clients provided with the device. Avoiding the installation of client VPN software is a substantial saving of helpdesk hassle. Back those VPN servers with a firewall and a Zeek instance to do intrusion detection and the result is as secure as the vendor offerings, easier to use, and runs much faster (because it can be run on this year's x86, not one from 5 years ago packaged into an 'appliance').

Thirdly, these 'some assembly required' systems don't nickel-and-dime additional features. If you want active-active or routing protocol support then the questions are technical and management rather than financial -- is resilience best done with load sharing or with a proxy or with anycast; is the added complexity of a routing protocol worthwhile? There are so many firewall pairs configured as active-passive which would be better configured as active-active with a routing protocol, but the clients can't afford that 'added value' solution from a vendor.

Gartner doesn't often give the strengths of these 'some assembly required' solutions. Which is weird as they have massive mission-critical use by the FAANG networks. I would speculate that perhaps those analyses don't look beyond vendors who promote products, to investigate the full range of heavily-used software.

Worst of CES Awards: The least private, least secure, least repairable, and least sustainable

Glen Turner 666

unintelligible random sound from person in bed

> "This is a device that you put next to your bed, that if you make an unintelligible random sound, turns on and starts listening to everything you say," Doctorow said

He means sex sounds and Amazon recording sex, right?

MySQL a 'pretty poor database' says departing Oracle engineer

Glen Turner 666

Re: There is no reason not to choose Postgres

"The biggest impediment on those two vs Excel is recreating the environment"

Have a look at Jupyter Notebook. It's a cloud-hosted web presentation of Python, R and other languages in an "engineering notebook" style. To recreate the environment you just log into the page again. It's excellent for a quick analysis, since if you have to return to the analysis later then the data exploration you did is presented in a narrative, making it easy to track what you were thinking all those months ago.

Jupyter has very quickly become the weapon of choice for analyzing science data.

Typical. Crap weather halts work on subsea fibre-optic cable between UK and France

Glen Turner 666

Re: Tunnel

The rights to place or use fibre in the Channel Tunnel are exclusive.

Microsoft does and doesn't want you to know it won't stop you manually installing Windows 11 on older PCs

Glen Turner 666

Not removing authentication, but removing authentication which uses passwords and replacing it with something better.

This is good: passwords are no longer fit for purpose. People simply can't think of ten mathematically-random characters for every website they use.

Rather authentication uses an attribute of yourself: such as fingerprints or face. And maybe a button-press to signal that you do mean to login.
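To put a number on the password problem: ten characters drawn uniformly from letters and digits carry about 60 bits of entropy -- trivial for a machine to generate, hopeless for a human to invent and remember per site. A quick sketch:

```python
import math
import secrets
import string

# Entropy of a ten-character password drawn uniformly from
# letters and digits (62 symbols): 10 * log2(62) bits.
alphabet = string.ascii_letters + string.digits
bits = 10 * math.log2(len(alphabet))
print(f"{bits:.1f} bits of entropy")  # about 59.5 bits -- per website

# What people are asked to do in their heads, a machine does trivially:
password = "".join(secrets.choice(alphabet) for _ in range(10))
```

That asymmetry is exactly why hardware-backed keys beat memorised secrets.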

Asahi Linux progress: Apple Silicon OS works – though it's 'rough around the edges' and has no GUI acceleration

Glen Turner 666

Re: Warantless

Apple have been upfront about the way they scan users' phone content, and that's done without a backdoored CPU.

If you disbelieve Apple's description, then if anyone is going to find unexplained mailboxes to secret off-board processors or to find missing CPU cycles, then it's this very project.

Documentation of CPU functions doesn't help solve your concern -- Apple can simply not document the CPU functions which concern you. What you want is forward- and reverse-traceability from requirement to implementation. So you can prove that there's not a single gate in the CPU silicon which can't be traced back to a requirement. This is expensive and isn't offered outside of cryptographic processors.

Proving a design is trustworthy is so hard that it's unlikely any commodity CPU would meet this level of trust. It's just too easy for chip fabrication tooling to add covert CPU gates or chip initialisation code to your design. Similarly in verifying that the semiconductor fabrication mask doesn't have covertly-added etching. Bunnie Huang goes through this in some detail in his effort to build a trustworthy laptop.

VC's paper claims cost of cloud is twice as much as running on-premises. Let's have a look at that

Glen Turner 666

There's more choice for us folk who aren't a16z startups

Casado and Wang are talking about startups. A particular sort of startup -- an Andreessen Horowitz client, with a lot of venture capital cash and with lots of compute and storage at the core of what they do. In that case, the $1k/sq.ft to pay for a datacentre build can be funded from the opex saved by leaving the cloud. They are simply paying a cloud vendor that much.

Quinn's counterpoint is really about venture capital too. His argument is that first-to-market is everything for these types of startups -- that promise of future monopoly profits is *why* Andreessen Horowitz is giving the startup those truckloads of cash. Everything is secondary to building market share, even if that's paying too much for compute, because stopping to re-engineer compute is worse. Quinn's other argument is also very Silicon Valley -- the shortage of "good" people, which really means engineering staff across the current Silicon Valley toolset who live on the US west coast.

This is such a peculiar environment that lessons for those of us outside of the hothouse aren't that applicable.

For a start a more typical business might be more interested in cloud applications rather than cloud compute and storage. There's certainly cost savings in cloud email and specialist applications such as accounting, HR systems, payroll, and client relations.

Secondly, us mortals have a range of choices between the extremes of DIY datacenter build and cloud. There's a lot to be said for the midpoint of hiring racks in a datacenter, and then arranging two diverse dark fibre services to that. There's also a lot to be said for taking the lessons of the cloud, such as easy to provision VMs and containers, and making that available via corporate infrastructure -- in other words, OpenStack.

'I put the interests of the country first': Colonial Pipeline CEO on why oil biz paid off ransomware crooks

Glen Turner 666

Re: I Regret that I Have Only One Country to Give for My Money!

The CEO said that their finance system just came back online this week. So they're obviously happy to run the pipeline first and work out the bills later.

It was the production supervisor who used their "stop work" power to halt the pipeline, without reference up the management chain. The supervisor did this because a hack of the SCADA systems which control the pipeline could kill people across the east coast of the USA. When the pipeline was shut down it wasn't clear if the ransomware had got that far, but the production supervisor simply followed the firm's safety policy that people matter first and acted to minimise the risk to people.

As for recovery, that's a fair argument. The pipeline was restarted manually. Using the expertise of long-serving employees from the era when manual operation was the norm. Many of those employees are near retirement age. The CEO said to the Senate Committee that they'll make manual operation part of training going forward.

It took until two weeks after the incident for Mandiant, the contractors Colonial employed, to determine that the SCADA system hadn't been affected. So it's unreasonable to think that someone at Colonial could have made that decision shortly after the ransomware attack. There's a lesson there for SCADA software developers -- it shouldn't be that hard to determine the integrity of the software.

[The facts above are from the CEO's evidence to the Senate Committee, the interpretation is mine.]

Glen Turner 666

Re: Er ...

A "complex password" can't be brute-forced. So the password was picked up from the configuration of the VPN client on an employee's laptop. That's most likely because the employee's laptop was hacked and that VPN configuration file was one of the files exfiltrated and then offered for sale on a darkweb site.
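To see why brute force is out of the question, a back-of-envelope calculation -- the 12-character length, 94-symbol alphabet and guessing rate are illustrative assumptions, not Colonial's actual policy:

```python
# Back-of-envelope: why a genuinely complex password resists brute force.
# Assume a 12-character password over the 94 printable ASCII symbols,
# and an attacker testing 10 billion guesses per second.
guesses = 94 ** 12
rate = 10_000_000_000           # guesses per second (assumed)
seconds_per_year = 60 * 60 * 24 * 365
years = guesses / rate / seconds_per_year
print(f"About {years:,.0f} years to exhaust the search space")
```

At over a million years to cover the space, stealing the password from a configuration file is the only practical attack.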

Later evidence by Colonial's CEO mentioned locking accounts of departed staff. That might be a generic suggestion, or it might be a hint.

The main VPN used by the company already had 2FA. So there's also a lesson here about withdrawing old services, and in making sure replacement services fulfill the full range of requirements so that old services can be withdrawn.

Free Software Foundation urged to free itself of Richard Stallman by hundreds of developers and techies

Glen Turner 666

Re: want to control what everyone is allowed to say or think.

> He hasn't harassed any women though to my knowledge.

Women employed by FSF gave examples of their harassment on Twitter yesterday. Some women who worked in the vicinity of Stallman's office at MIT gave examples of their harassment in the leaked CSAIL email thread of September 2019.

Across decades, many senior staff at FSF and other organisations have had in-depth discussions with Stallman about his behaviour. Again, yesterday's Twitter has examples.

Stallman's defence of Minsky's sexual assault of Giuffre at Epstein's residence was the last straw -- the catalyst -- rather than sufficient reason of itself.

That the Board of the Free Software Foundation would welcome back Stallman -- there are no words. The threat to the FSF's own staff would alone be ample reason not to do it. Which is why so many other free software organisations are cutting ties with the FSF.

Australian police suggests app to record consent to sexual activity

Glen Turner 666

Re: Of course policeman is thinking how to solve the problem

The world deals with "A said, B said" all the time.

Every day insurance companies pay out on your assertion that you owned a good which you now don't have, even despite claims of the likely thief that they didn't nick the item. You can see that for property we've built an "alternative justice system" for property theft to make this crime less traumatic for its victims. The question is why we haven't done the same for cases of sexual assault. Despite many sexual assaults being less suited to the criminal justice system than property crime.

Before we go any further, most sexual assaults aren't A-said, B-said. It's common for one of the parties to have a great deal of corroborating evidence, often across some years. Unfortunately it's rare that this is treated as evidence at the time. If a friend discusses an assault with you, then do them a huge favour and make a note of that discussion.

In the criminal court case you have at the top of your mind, the issue comes down to what sort of evidence is allowed. For example, what weight is given to supporting evidence, is tendency evidence permitted, how difficult is it to join cases. There's still a great deal of double standard in dealing with evidence in sexual assault cases: take alcohol -- when a victim is drunk that suggests their evidence is less reliable, but the same argument would never fly for questioning the assailant's denial. Joint cases are even worse: two women discussing the same experience with the same assailant and then going to police to prevent further harm by this assailant is a prosecutorial disaster: the defence will argue that there was collusion to make a false accusation. An argument which would never fly if the occasion was two businessmen in casual conversation discovering a fraudster in common.

The bar for a criminal finding of guilt in a sexual assault case is very high: the High Court of Australia in Pell v The Queen said that sufficient doubt was created by the mere habit of the accused Pell standing on the steps of the church after services, and thus likely not in the sacristy raping the boy. This was sufficient to defend against the boy's excellent recollection. The defence didn't have to show that Pell was on the steps on the particular day in question: testimony from witnesses who saw Pell on the steps a few times that year was sufficient.

So it's pretty plain that the full process of the criminal law is a poor fit to most people's experience of sexual assault. Some alternative path to justice -- not necessarily outside the criminal justice system -- is needed.

'Meritless': Exam software maker under fire for suing teacher who tweeted links to biz's unlisted YouTube vids

Glen Turner 666

Proctoring software, unreliability coupled with high stakes consequences

A common configuration for "online proctoring" software is that being "flagged" by the software halts the online exam. This supposedly prevents people blatantly cheating and selling the exam answers in real time.

Even more moderate conditions still suck. If the software flags the student and then the exam's invigilators check the recorded video, and then allow the exam to continue, that's a time penalty which may result in marks lower than the student's actual knowledge and skill.

If the proctoring software fails, then the student's examination is invalid. Putting that another way, better pray your operating system is no good at detecting malware-like behaviour.

Proctoring software usually objects to other people in the room. Students can't reliably take an exam from a public library, or from a share house, or from a room with popstar posters.

The software objects to the student's face leaving the camera's frame. Better not drop a pencil, be harassed by your little sister, knock the laptop lid.

The software does eye tracking, the idea being to detect use of notes. Better not sneeze.

Some proctoring software fails to detect people present when their skin colour has insufficient contrast. Got to love that the "systematic racism" here involves two meanings of "system".

For high-stakes exams the invigilators will often want the laptop's camera taken on a tour of the room. Bedrooms are sometimes too complex a space to pass this inspection. Of course students don't know this until just before the exam.

The presence of proctoring software increases the stakes of already high-stakes exams for students. Why a university would choose to do this -- rather than rapidly redesign assessments -- in the midst of the most demanding teaching year since 1939 says a lot about the relationship of the university to its students. It's also a great illustration that education isn't only what is said in the course.

How do you fix a problem like open-source security? Google has an idea, though constraints may not go down well

Glen Turner 666

Comment on paper

It's interesting that this proposal is a one-way street.

If I buy a Google phone I cannot build my own copy of the complete source code, obtain the identical binary outputs, and so verify that Google's build systems have not been subverted.

The paper's suggestion that there should be assertions about the software process is excellent -- are all commits signed from a hardware key with a SAK keypress -- that sort of thing. But again, why is this limited to open source software? If I am giving someone money for software, surely that's more reason to be provided with assertions about the integrity of the source code and build system. At the moment all we get from commercial vendors is unverifiable bullshit -- right up until its security issue, SolarWinds was claiming that its software processes were best practice.

There's also a substantial amount of cost-shifting in the proposal. Distributions, such as Red Hat Enterprise Linux or Google Android, have substantial income to fund housekeeping tasks such as backporting bug fixes. But the paper proposes to move this expensive task to free software authors, who often don't gain much income from the infrastructure library they authored. This is a task made more expensive by Google's reluctance to share with developers in a timely way its fixes to the open source packages Google uses.

The notion of federated identities for software developers could very easily degenerate to Google authorising software developers. It won't be too long before the intelligence-security apparatus removes entire nationalities from the 'critical infrastructure trusted' community: they're unlikely to accept a Russian national. But this in turn means that Russian nationals discovering security issues will be pushed towards engagement with the malware firms surrounding the FSB.

And again this is a one-way requirement. I cannot obtain the identities of each individual who has contributed to the code on my Google phone. Let alone attributes of those identities such as nationality, criminal record and prior employment. I am required to trust Google's hiring processes.

In short the paper has some good ideas -- unsurprisingly, since they are common in the discussion around SolarWinds -- but the development of those ideas into policies is well underbaked.

Apologies for the wait, we're overwhelmed. Yes, this is the hospital. You need to what?! Do a software licence audit?

Glen Turner 666

Hospitals would be attractive for software licensors as they are still open and, notoriously, working at capacity. An audit of software licenses of a shuttered business is more likely to find compliance -- the server and desktops might even be turned off.

It is the software licensee who does the grunt work of collecting the data for the audit. In a hospital context, the hospital's IT staff. An audit is not letting someone in the door and saying "hope your PPE is good", as delightful as that would be. The licensee's staff are safe, working from home, and writing e-mails with references to the contract clauses about software audit obligations.

The cost of an audit to the licensor is low. Basically an e-mail, some administration, and the customer relations staff giving a not-at-all-meant apology.

Whenever I have pressed a software vendor on their audit clauses the sales team have always responded that the clauses would be used "responsibly". That's clearly not the case.

Personally, I think a software licensor requiring a hospital to do a compliance audit in the midst of a pandemic is something that, in a better world, the government would solve by issuing a statutory copyright licence.

Crowdfunded Asahi project aims for 'polished' Linux experience on Apple Silicon

Glen Turner 666

Re: I don't see why Apple would stand in the way of this

Apple sells around 18m laptops a year. Support for Linux would not increase sales by even a percentage point.

The attraction for Apple in allowing Linux (that is, allowing non-secure boot) would be to avoid entanglement in accusations of monopolistic practices by US hardware and software manufacturers, practices the EU has been traditionally keen to prosecute and China may be increasingly keen on pursuing.

Microsoft is designing its own Arm-based data-center server, PC chips – report

Glen Turner 666

Re: How many companies have to fail at server-side ARM64 ...

What has happened is the major buyers of sophisticated CPUs -- the cloud companies -- want performance per watt as well as performance per rack unit.

Compare the resulting pricing for AWS: Intel US$4.08ph, AMD $3.70ph, ARM $2.18ph. Graviton2 is about 20% slower than the equivalent Intel server, but about half the cost. Remember this is the second release of Amazon's ARM design up against the decades of tuning of Intel's design, and the difference is only 20%. Obviously that difference has further to shrink.
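Working those quoted prices through -- treating the Intel instance as the 1.0 performance baseline and Graviton2 as 0.8, per the 20% figure above:

```python
# Rough perf-per-dollar comparison using the AWS hourly prices quoted
# above. Performance figures are normalised: Intel = 1.0 baseline,
# Graviton2 = 0.8 (i.e. "about 20% slower").
prices = {"intel": 4.08, "amd": 3.70, "arm": 2.18}  # US$ per hour
perf = {"intel": 1.0, "arm": 0.8}

intel_ppd = perf["intel"] / prices["intel"]   # performance per dollar
arm_ppd = perf["arm"] / prices["arm"]
print(f"Graviton2 delivers {arm_ppd / intel_ppd:.2f}x "
      f"the compute per dollar of the Intel instance")
```

Roughly one-and-a-half times the compute per dollar, which is why the cloud companies are bothering.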

Other cloud providers will be facing similar pricing, but with the advantage that they use more of their compute cycles for their own services. That is, they can more readily re-target their internal services from AMD64 to ARM64 than Amazon's clients can.

I can't see that any company will take the risk of developing a server ARM chip. As you point out, plenty of startups have been burned. So the market will leave that development to the cloud providers themselves, who have abundant engineering resources to turn ARM IP into silicon.

The major difference between now and the past is Intel's years-long failure to deliver process improvements compared with its competitor TSMC. There is little reason to expect that to change. That failure alters the economics for cloud companies. In the past chips with better architecture would have their performance blitzed by Intel's process improvements. DEC's Alpha being an excellent historical example. So there was no incentive for cloud companies to explore CPU architecture. That blitzing-by-process-improvement is no longer in Intel's power to do. An architectural improvement over Intel's microarchitecture is now a long-run win. So CPU architecture is now worth cloud companies' efforts.

None of this is likely to be reflected in the "enterprise server" market. But that market is becoming increasingly odd and continually smaller. In many ways very much like the IBM mainframe business of the pre-PC 1980s. And just as likely to have a nasty surprise.

Deloitte's 'Test your Hacker IQ' site fails itself after exposing database user name, password in config file

Glen Turner 666

Tweet removed

Twitter removed the tweet from Tillie Kottmann which uncovered this issue. Presumably because the tweet breached Twitter's controversial "Distribution of hacked materials policy".

Linux kernel's Kroah-Hartman: We're not struggling to get new coders, it's code review that's the bottleneck

Glen Turner 666

"Linux has issues with code validation" isn't correct. It is clear where every line of code comes from.

"What would happen without Linus as ringmaster". Linux *has* a succession plan; at the moment that is GK-H. The risk is greater for other OSs: can you tell me who the successor to Microsoft COSINE's Jason Zander is as the engineering leadership for Windows? That's likely to be determined by business and political issues at Microsoft at the time Zander steps down from that role. The same is true for Apple's engineering leadership. I don't know why you demand a different standard for Linux's engineering leadership.

As for resolving competing priorities, the lesson of Linux is that operating systems win by addressing everyone's needs. For example, it turns out that everyone wins if the kernel is capable of realtime scheduling, even if they aren't running a realtime application. Similarly, those small realtime systems win by using filesystems with features initially designed for large enterprises in mind.

"the likelihood of it being aggressively targeted by hackers (both state funded and criminal) is a certainty". Well yes. Because it has already happened. But you are mistaken that the problem is "all those Application stack DevOps types". The DevOps technologies -- at their heart: easy, safe continuous deployment -- make preventing and responding to security issues much faster. The stronger isolation of Docker and similar technologies is also a win in limiting the fallout from security compromises. The security track record of real world deployment of this technology -- notably in the Google and Microsoft clouds -- is impressive.

Has Apple abandoned CUPS, the Linux's world's widely used open-source printing system? Seems so

Glen Turner 666

Run your own RIP

As the owner of an ancient Samsung ML-1510 laser printer: attach a Raspberry Pi to its USB port. A model with 1GB of RAM or more. Now you can configure CUPS to drive that printer directly (for Samsung, via the gdi driver). But CUPS and Avahi can also represent that printer to the outside world as an IPP Everywhere printer (ie, one which is sent PDF files and which is discoverable using mDNS). Which means driverless printing and easy printer discovery from laptops, tablets and phones.

Looking at that another way, it's basically a return to the start of the PostScript era, where the RIP (raster image processor) was a computer separate from the print drum. With the RPi RIP having 1GB of RAM it can print the most complex of PostScript jobs at full printer resolution. For the ML-1510 that hardware is 600 x 600 x 1-bit grey, but 1GB will also cover 1200 x 1200 x 4-bit grey rasterising; using an RPi to do that was cheaper than a RAM upgrade for my household's other printer, a Samsung SL-M4020ND (not a recommended purchase).
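For the curious, the raw page raster sizes show why 1GB is ample (A4 page dimensions assumed; sizes are for the uncompressed page bitmap only):

```python
# Size of an uncompressed page raster at a given resolution and depth.
# A4 is 8.27 x 11.69 inches.
def raster_bytes(dpi, bits_per_pixel, width_in=8.27, height_in=11.69):
    pixels = (dpi * width_in) * (dpi * height_in)
    return pixels * bits_per_pixel / 8

mb = 1024 * 1024
print(f"600 x 600 x 1-bit grey:   {raster_bytes(600, 1) / mb:.1f} MiB")
print(f"1200 x 1200 x 4-bit grey: {raster_bytes(1200, 4) / mb:.1f} MiB")
```

Even the higher-resolution raster is well under 100 MiB, leaving plenty of the Pi's RAM for Ghostscript to chew through complex PostScript.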

Glen Turner 666

Re: will drop PPD file support soon

CUPS uses PPD files as configuration files. This made sense when it looked like the printers of the world would mostly be PostScript. The configuration information for all other printers could then be munged to fit in a PostScript worldview, and PostScript used PPD files to describe printers.

But the world didn't end there. Today printers accept PDF and there's a network protocol to inquire about the printer's capabilities.

So it makes sense for a modern print spooler to make the printers which don't work like that -- the ones which aren't network-connected PDF spoolers -- at least fit into that worldview. That leads to an API with a set of drivers. For older, simpler printers the drivers can hardcode parameters rather than use IPP.

UK tech supply chain in dark over Brexit preparations months ahead of final heave-ho

Glen Turner 666

Re: Latest from the PM

The UK won't "become like Australia". Because we *do* have a comprehensive low-friction trade agreement with our nearest neighbour -- the Closer Economic Relations treaty with New Zealand (and our Constitution has an invitation to New Zealand to join the Commonwealth of Australia). We also have a trade agreement with our next-nearest neighbours, the ASEAN-Australia-New Zealand Free Trade Agreement. As you'd expect we also have FTAs with major trading partners like USA, China and Japan. These treaties are the result of over twenty-five years' sustained effort. Ironically an effort initiated by the UK ending Commonwealth Preference to join the EU (which caused an economic crisis in Australia).

Australia doesn't yet have a trade agreement with the EU. The issues there are around agricultural goods, and especially the ever-increasing application by the EU of 'appellation' to limit competition. So trade with the EU occurs on WTO terms. This is no great drama for Australia as the EU is on the other side of the world -- and thus not tightly integrated into production chains. Whereas for UK firms European firms are a few hundred kilometres away and production processes have become tightly interwound.

Australia's situation is in no way comparable to a UK having no trade agreement with the EU and seeking to trade with close-by nations under WTO terms.

Cisco ordered to cough up $2bn – yes, two billion dollars – plus royalties after ripping off biz's cybersecurity patents

Glen Turner 666

Read patent claims from the back if you're trying to understand the invention

You read patent claims backwards. The later claims are the more specific and are the relevant claims. The earlier broader claims are a legal tactic. Once in a while litigation does result in one of the earlier broader claims being accepted, which is one of the many reasons why patent law is such a mess.

Future airliners will run on hydrogen, vows Airbus as it teases world-plus-dog with concept designs

Glen Turner 666

Re: hydrogen engines?

"Wouldn't we have seen them in cars if this was viable?" You mean, like we see kerosene-burning turbines in cars today?

The requirements are very different. Weight is a major concern for aviation engines, less so for terrestrial engines. And fueling for aircraft can afford to be complex, because it can be limited to professionals.

Hydrogen is going to cost a lot, far more than using wind+solar to charge a battery. But it looks like batteries are going to remain too heavy to be economically practical in aircraft. So hydrogen is where aviation finds itself when looking for a power source which is not based on hydrocarbons (which make global warming worse).

Even then the economics are going to be interesting. Airbus are allocating a third of the former cabin space to fuel. That implies ticket prices rising roughly 30%, and likely more. That leads to an interesting regulatory question, with consequences for EU-US relations depending on which of Airbus and Boeing has a practical plane available for order.

Hidden Linux kernel security fixes spotted before release – by using developer chatter as a side channel

Glen Turner 666

Linux kernel doesn't do too badly with this intractable problem

The article is talking about two problems. (1) commits fixing security issues, which are intended to be silent at the time, are detectable from comparing Git versus mailing list traffic. (2) the lack of oversight by the wider public allows trusted (but perhaps not trustworthy) insiders to apply commits.

(1) It's hard to know how the first problem should be addressed; short of moving discussion of other commits off LKML, which seems undesirable.

(2) The second problem is simply a fact of life in any software development process -- "Reflections on Trusting Trust" territory.

The Linux kernel developers have done their best -- encouraging regular committers to use (freely supplied) Yubikeys to sign commits based upon a physical keypress. This goes a long way towards preventing a compromise of a major contributor's computer from producing a commit the contributor never authorised. The identity of regular committers is known, and for most has been verified by the sighting of government-issued identity documents at GPG keysigning events at Linux conferences.
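The mechanics of that are straightforward once gpg is configured to route signing operations to the token; a minimal sketch (the key ID and commit message are placeholders):

```shell
# Point git at the signing key held on the token (placeholder ID).
git config --global user.signingkey 0xDEADBEEF
# Sign every commit by default; the Yubikey then demands a physical
# touch per signature, so malware on the laptop cannot sign alone.
git config --global commit.gpgsign true
git commit -m "mm: fix off-by-one in page lookup"  # signed transparently
git log --show-signature -1                        # verify the signature locally
```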

As for an untrustworthy committer, we need to be careful in our claims. There *is* oversight: (1) retrospectively; and if the issues are complex, then (2) at the time by other selected Linux developers in a non-public forum. There is no public oversight prior to the commit, and it is difficult to know how that could be done -- do we ask those willing to exploit the bug for evil to exclude themselves? The claim reported by the article of "code commits made without review" doesn't fairly reflect the complex situation. We can be confident that an untrustworthy committer will be detected after the fact, simply because of the great public interest taken in these "silent" commits.

For the Meltdown/Spectre bugs the kernel developers did a good job of documenting the issue and the fixes. It's easy to retrospectively trace from those requirements to the historical commits. It's likely that this supply of very good documentation will be the practice for future complex security issues. It's the process for "simple" security fixes which needs focus to improve retrospective traceability from CVE to commit (this could be as simple as retrospectively tagging a commit with the CVE it fixes).
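One lightweight way to do that (a sketch, not established kernel practice) would be git notes, which attach metadata to an existing commit without rewriting history; the hash and CVE number below are hypothetical:

```shell
# Annotate an existing commit with the CVE it fixed (hypothetical IDs).
git notes add -m "Fixes: CVE-2038-12345" 1a2b3c4d
# The note is then shown alongside the commit, and notes can be pushed
# and fetched like any other ref.
git log --notes -1 1a2b3c4d
```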

Finally, I'd note that the Linux kernel's processes are not inherently inferior to software processes which happen in private industry. There would be few industry development processes which could validate the integrity of the source repository back to SAK keypresses. There are no industry development processes with so many backups of the source code under the close control of so many people, allowing blatant subversions to be detected within hours. The Linux kernel addressing major bugs with small, tight teams of people on a need-to-know basis is no different from commercial practice.

Facebook rejects Australia's pay-for-news plan, proposes its own idea: How about no more articles at all, sunshine?

Glen Turner 666

So sorry, we can't identify posts...

For a long time Facebook has been saying how hard it is to identify and remove all the posts from nazis. Even well-known nazis. Even well-known nazis who are arranging to kill people.

But it's expecting no issues identifying all posts from journalists in Australia.

Isn't it amazing what an incentive self-interest can be?

Fret not, Linux fans, Microsoft's Project Freta is here to peer deep into your memory... to spot malware

Glen Turner 666

If you run a large public VM farm -- and Microsoft does -- then identifying VMs with malware is important in stopping the VM farm from becoming the DDoS agent from hell. This isn't the only way to address the issue, but no technique has 100% coverage, so I can understand why they're building this method. It has some advantages too, such as not generating data which needs real-time analysis (unlike, say, intrusion detection systems).

This'll make you feel old: Uni compsci favourite Pascal hits the big five-oh this year

Glen Turner 666

Re: pascal was simply useless.

Pascal was useless as shipped. No linkage, no variable-length arguments (despite the language itself using them, so a Wirth Pascal compiler couldn't be written in Wirth Pascal), and the whole program needing to be in one source file. All the serious Pascal compilers fixed these shortcomings -- but none of them in the same way, which made Pascal non-portable. Of course standards committees tried to fix this, but usually by inventing yet another mechanism (insert inevitable XKCD).

But Pascal's programming tools were great: good IDEs, good sets of libraries. UCSD Pascal was a good system. Turbo Pascal was superb. This made Turbo Pascal the obvious choice for writing well-performing programs under MS-DOS.

Then two things happened to kill Pascal: UNIX and Windows.

UNIX was a joke. A minicomputer operating system of obscure commands and questionable stability and an odd security model. BSD and then Sun focussed on making UNIX not only a serious operating system, but one at the edge of operating system features. Other minicomputer operating systems didn't come close to what Sun was doing with their workstations. Even the billions spent by IBM and DEC didn't touch dynamic linkage or TCP/IP or NFS. And the language of UNIX was C.

UNIX was so obviously the future that Microsoft acquired a UNIX and used Xenix as the development platform for their MS-DOS products. So it was natural for Microsoft to seek to replace their expensive workstations and servers with PC-based C compilers and linkers, and then natural to sell that C compiler and linker, and then natural to support that C product better than the other languages it offered. In time it was natural for Microsoft to write OS/2 and Windows in C.

What also cemented C over Pascal was PJ Plauger's work on the ANSI C committee. Unlike the equivalents for Pascal, the ANSI C committee did a great job. It codified existing practice, it brought innovation where it was sorely needed and thus readily accepted (eg: function prototypes, the "void" type), and it wasn't afraid to adopt a particular vendor's solution whatever its weaknesses (eg: BSD strings.h).

Now we had a language which you could write a program in and it would run on MS-DOS and UNIX: if you were developing programming infrastructure then using C doubled your market; and if you were writing programs then using C meant you had more programming infrastructure available. Many of the GNU tools became available for MS-DOS and people developed a hankering for the real thing: UNIX on their PC at a price they could afford. Moreover the action of AT&T and Sun in seeking to massively charge for their compilers meant that UNIX systems in practice all used the same compiler for applications programming: the free GCC. So not only was there a common language for UNIX at the 10,000ft level, there was a common language at the 1in level, plus autotools papering over the differences in system libraries. Pascal simply wasn't that ubiquitous.

With Windows, C and Win32 became the common choice for applications and Pascal's largest group of users quickly left.

Later the web browser killed Win32. But since C was the language of choice for both UNIX and Windows servers, that only made C more dominant. Then the world tilted and interpreted languages roared back into fashion on the server-side: Perl, PHP and Python. C became a niche language for systems programming and performance-critical paths (and part of the appeal of Python was its ease for putting C in performance-critical paths -- usually by importing a third-party library).

Microsoft doc formats are the bane of office suites on Linux, SoftMaker's Office 2021 beta may have a solution

Glen Turner 666

Re: Zotero

Zotero and EndNote are the two most popular citation managers, so to have Zotero described in this review as "...integration with an open-source citation management system called Zotero" did make me wonder how little academic writing the review's author has done.

If you do academic writing then you need a citation manager before you even begin to read -- then you can let the citation manager record all your sources as you go. Zotero works better than EndNote for modern multi-device users and I'd strongly recommend Zotero over EndNote for PhD candidates (who aren't just writing one essay, but a multi-year series of documents). In response, EndNote offer $0 licenses to current students, but this has the effect of making your years of curated citations inaccessible when you leave the university sector (again, more of a concern for higher-degree students rather than undergrads pumping out disconnected essays they'll never revisit).

LibreOffice 6.4 nearly done as open-source office software project prepares for 10th anniversary

Glen Turner 666

LibreOffice made corporate use of Linux possible

Thanks mostly to LibreOffice, but also to the Evolution email and calendar client, it is possible to use Linux as a client operating system within a large organisation. I think that's a win the article could have mentioned.

The other notable achievement of LibreOffice is its dedication to reading a wide variety of superseded file formats.

But I'll agree with the article that the main effect has been to keep Microsoft honest with Office pricing and features (such as an easy PDF export).

Running on Intel? If you want security, disable hyper-threading, says Linux kernel maintainer

Glen Turner 666

Re: Surely...

If an attacker wants to defeat the Spectre mitigations then all they need to do is run a tight loop in their code and the mitigations will switch themselves off?

GitHub upgrades two-factor authentication with WebAuthn support

Glen Turner 666

Re: Git servers don't support 2FA on updates from Git clients

The point of WebAuthn is to replace typing that password with a button press, verifiable end-to-end, with no opportunities for keylogging or other MITM. So you'd end up with a better user experience with WebAuthn as well as it being more secure against the common issues.

The point of signing commits is a little more subtle. That protects your code from unauthorised modification to the repository and means that you can verify the commits as unchanged, so if GitHub is hacked you can check that your code has no unauthorised changes -- no need to rely upon other parties, such as assurances from GitHub. If all the developers use hardware devices for the GPG-signing (which is a pain to set up but just a keypress to use) then that's pretty unhackable -- essentially there's an unalterable path of trust from that keypress to code later cloned from the GitHub repo.

Typing a password a lot isn't great security -- it multiplies the opportunities for keyloggers, it puts false positives in the logs when people mistype them, and effective passwords (>10 random characters) are simply too hard. You'd get more security using a password database which is then secured using a cryptographic device.
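The ">10 random characters" figure is easy to sanity-check with a back-of-envelope entropy calculation (assuming uniform picks from the 95 printable ASCII characters):

```python
import math

def password_bits(length: int, symbols: int = 95) -> float:
    """Entropy in bits of `length` independent uniform picks
    from an alphabet of `symbols` characters."""
    return length * math.log2(symbols)

print(password_bits(8))   # ~52.6 bits
print(password_bits(12))  # ~78.8 bits
```

Eight characters comes in around 52 bits, which is within reach of modern cracking rigs; twelve pushes it comfortably beyond them.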

There have been two real advances in security in the past decade: cheap authentication keys (of which the Yubikey is the best known) and the replacement of firewalls and VPNs with end-to-end encrypted and authenticated sessions (eg, Google BeyondCorp).

Glen Turner 666

Git servers don't support 2FA on updates from Git clients

GitHub's 2FA works on the web interface only (the same is true for GitLab). Once U2F or WebAuthn 2FA is enabled you need to generate an SSH key or a HTTPS token (aka password) to push a commit from a laptop's command line. These methods do not request 2FA. So the use of a keylogger or theft of a developer's laptop still exposes the repository to unauthorised modification.

The Git command-line client could be updated to support U2F or WebAuthn upon a "git push" but this has not happened yet.

Lacking that support, at the moment your choice is to secure a GitHub SSH keypair or a HTTPS token using a proprietary authentication key (eg, Yubikey). This is usually a multistep process -- use the hardware key to secure a password database, then that database releases the access token after validation of the hardware key.
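OpenSSH 8.2 and later narrows this gap with FIDO2-backed key types: the private key is bound to the hardware token and every use requires a physical touch. A sketch (assumes a FIDO2-capable token such as a YubiKey 5; the file name is a placeholder):

```shell
# Generate a key pair whose private key cannot be used without a
# physical touch on the attached FIDO2 token.
ssh-keygen -t ed25519-sk -f ~/.ssh/id_github_sk
# Upload ~/.ssh/id_github_sk.pub to GitHub as usual; every
# `git push` over SSH then requires a tap on the token.
```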

You can also securely sign commits by using a proprietary authentication key which implements GPG-signing, and set the repository to require GPG-signed commits.

Unfortunately none of SSH, HTTPS or GPG exposes the security of the key storage. So the Git server can't tell whether the key exchange was with a secured keystore or with something as terrible as a passwordless SSH or GPG keystore. This is the problem U2F and WebAuthn exist to solve.

Pentagon makes case for Return of the JEDI: There's only one cloud biz that can do the job and it starts with an A (or rhymes with loft)

Glen Turner 666

Re: The arguments are solid

You forgot this option: Oracle don't need to win.

JEDI is around $10B of business. Let's say Oracle spend $10m on a lobbying effort, and because of the fuss they kick up win 10% of that business. That's a massive ROI, and Oracle will cry about losing all the way to the bank.

Google to bury indicator for Extended Validation certs in Chrome because users barely took notice

Glen Turner 666

Re: Security is hard

It *is* a matter of design, and designs around the address bar are poor but cheap. The screamingly obvious design is to prevent people entering credit card details onto a non-EV page.

Bill G on Microsoft's biggest blunder... Was it Bing, Internet Explorer, Vista, the antitrust row?

Glen Turner 666

Re: So which company do you think DID see the future often?

I'd suggest that you are overlooking Apple. It had a pretty remarkable run at computers: Apple II, Macintosh, the aluminium iBook laptops (compare with the competition from Toshiba -- one is a "modern laptop" the other isn't), the iMac. All iconic.

Then there's the non-computer products. The Newton, which although it failed said "this is what the future of handheld computing will look like". The iPod, which had a revolutionary user interface and content licensing which meant you didn't need to visit the dodgier side of the Internet. The iPad, which said "this is how slabs work" and has an ease of use the competition still can't touch. Then there's the iPhone -- remember that before the iPhone, Microsoft had spent years at the top of the smartphone market, but was irretrievably blown off that perch by the third iteration of Apple's phone. Along the way were good products in markets Apple have since left: printers and cameras.

NeXT, whilst not an Apple product, was a Steve Jobs product, designed by ex-Apple engineers.

And isn't that the real concern about the future of Apple after the death of Steve Jobs -- that without his vision and drive Apple won't see the future and won't be able to bring its considerable design skills to the product?

Dev darling Docker embraces Windows Subsystem for Linux 2

Glen Turner 666

Re: What are the benefits?

It depends upon your organisation. If you're tracking the development in MS Project, using Visio for diagrams, Sharepoint and MS Word for documents, then it makes as much sense to use Windows for Linux development as it does to use Linux for Windows-oriented corporate applications.

On the other side, Red Hat have done surprisingly well at making CentOS or Fedora into a good corporate desktop: it can authenticate via AD, do email and calendar with Exchange. So your point remains a good one.

You've also got to consider the Microsoft side of things. Companies have some pride, and not being able to effectively program their own Azure product from their own Windows clients must have stung.

UK comms watchdog mulls 5G tweaks: Operators want moooooar power

Glen Turner 666

Re: Now We Will Need Tin Hats

Are you sure? The document talks of the "terminal power limit" going to 28dBm. "Terminal" being handset.

I read the proposal as widening the spectrum allocation to match that of EU so that the beamforming (ie, active) antennas designed for EU use can be used in the UK.

Table 2 in the proposal gives the base station powers: +65dBm/5MHz (3150W) EIRP for passive antennas, +44dBm/5MHz (25W) TRP for active antennas (in an active system think of TRP as if each client has their own 25W transmitter on the base station). Note that these aggregate to considerable powers for base stations covering entire 20-80MHz allocations; you could expect the aggregated amplifier output for a base station high above terrain (ie, no limits to output power, all quadrants active, entire band lit, lots of users) to exceed 10 kW.
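For anyone wanting to check the arithmetic, dBm converts to watts with a one-liner (figures below are rounded):

```python
def dbm_to_watts(dbm: float) -> float:
    """Convert a power level in dBm to watts (0 dBm = 1 mW)."""
    return 10 ** (dbm / 10) / 1000

print(dbm_to_watts(65))  # ~3162 W -- passive-antenna EIRP limit per 5 MHz
print(dbm_to_watts(44))  # ~25 W   -- active-antenna TRP limit per 5 MHz
print(dbm_to_watts(28))  # ~0.63 W -- proposed terminal (handset) limit
```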

In any case, the inverse square law means that base station powers don't matter.

The increase in terminal power is more of a worry but there we've got to go with the longitudinal medical research which doesn't show any effects from extended handset talk use. Fortunately the amount of time smartphone handsets are held to the head is decreasing, so average risk is falling in any case.
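A crude far-field sketch shows why distance dominates. The distances below are illustrative assumptions, and a handset at 2 cm is really in the near field, so treat the numbers as order-of-magnitude only:

```python
import math

def power_density(eirp_watts: float, distance_m: float) -> float:
    """Free-space power density in W/m^2 under the inverse square law."""
    return eirp_watts / (4 * math.pi * distance_m ** 2)

base = power_density(3162, 50)       # ~0.1 W/m^2: base station 50 m away
handset = power_density(0.63, 0.02)  # ~125 W/m^2: 28 dBm handset at 2 cm
print(handset / base)                # the handset dominates by ~1000x
```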

Astronomer slams sexists trying to tear down black hole researcher's rep

Glen Turner 666

Boyer explains Dr Bouman's role

There's an excellent essay on Facebook by Misty S. Boyer explaining Dr Bouman's role in the project, with copious references. You'll need to follow the link, as the text is too large to paste into a Reg comment.

https://www.facebook.com/paganmist/posts/10156249525816313

Glen Turner 666

Bryan Cantrill tweets

There was an excellent response by Bryan Cantrill on Twitter:

"This photo of Dr. Katie Bouman seeing the first image of a black hole upon reconstruction is perhaps the most evocative photo of intellectual breakthrough that I have seen -- of anyone, ever. It captures the moment of breakthrough just perfectly: the delighted grin; the eyes that show equal part elation and relief; the clasped hands that still reflect the intense anxiety of just seconds prior. It is a look that says, in short: "IT WORKED!" Anyone who has had such a moment in their life -- of prolonged intellectual struggle followed by breakthrough -- recognizes something of themselves in this picture of Dr. Bouman. That is why this photo resonates; not just because of Dr. Bouman's team's work (though that is obviously incredible!) but because her moment of joy inspires us -- all of us -- to strive for our own breakthroughs. There are regrettably some -- few, but noisy -- who have tried to discredit or minimize Dr. Bouman's role, largely because they have misunderstood what makes it so compelling. My observation would be that anyone minimizing Dr. Bouman upon seeing this photo must not have had that feeling themselves; for these embittered few, the feeling of breakthrough must be as foreign as the specifics of interferometry used to achieve it. Let us choose to collectively ignore these detractors -- and choose instead to be inspired by not just the achievement of Dr. Bouman's team, but by the incomparable elation of breakthrough, as epitomized by Dr. Bouman herself."

Be wary, traveller: There is no going back if you step over the Windows 10 20H1 threshold

Glen Turner 666

Re: Be wary? Don't do it then.

Windows Insider Fast Track is essentially Microsoft's equivalent to Fedora Rawhide. There's a surprising number of generous people willing to run these bleeding-edge operating systems. Neither should be run on a machine used for Real Work. The advantage of the Linux alternative is that those people can grow their skills into fixing the issue, rather than merely reporting it.

Ignore the noise about a scary hidden backdoor in Intel processors: It's a fascinating debug port

Glen Turner 666

It *is* a fascinating debug feature. But as the slidepack points out, the debugging facilities can also be used for havoc. For example, burning a fuse to activate a debugging feature of the random number generator, in which the RNG always returns the same number. Being a fuse, that change will survive a reboot.

Buffer overflow flaw in British Airways in-flight entertainment systems will affect other airlines, but why try it in the air?

Glen Turner 666

Not the flight systems, the entertainment system, but still...

It's not a safety-of-flight issue, but had he dropped the entertainment system at the beginning of that transatlantic flight, people would have been rightly upset about the selfishness of entertaining himself at the cost of everyone else's boredom.

Adi Shamir visa snub: US govt slammed after the S in RSA blocked from his own RSA conf

Glen Turner 666

Re: More privacy 200 years ago

If you give the clock another twist, say 430 years ago, then that places you into the reign of Elizabeth I. Where the state took a great deal of interest in what we would regard as private affairs, such as your relationship with your chosen god. The reason for the state's invasion of your privacy? Terrorism.

Cut open a tauntaun, this JEDI is frozen! US court halts lawsuit over biggest military cloud deal since the Death Star

Glen Turner 666

About Oracle's entire future, not just Oracle Cloud v AWS

Well obviously Oracle is upset, because their future is on the line.

Oracle make an expensive on-premises database. AWS make an off-premises compute cluster, which also includes a database API. So the US Department of Defense moving to AWS and re-writing their code to use AWS's API rather than Oracle's API means that the use of Oracle's database ends, which means that the annual licensing fee paid to Oracle also ends.

The threat from DoD's AWS strategy is not limited to DoD. They are a huge employer of contracted IT staff, and many of those contractors will carry their heretical notions into other government departments.

Oracle's complaint that there should have been multiple vendors falls a little flat. It's not the job of DoD to keep Oracle afloat, but to seek to maximise DoD's own efficiency. Which using just one cloud API does. But of course Oracle is going to try it on; after all, if they win even 10% of the DoD's business that's still a billion bucks.

It's also interesting to reflect how Amazon owning AWS has allowed AWS to thrive. Oracle's usual strategy would have been to purchase this upstart system, much as they did with MySQL. But Amazon's systems are completely reliant upon AWS, so Amazon can't sell AWS without risking the availability of Amazon.Com's $0.5m per minute.

Use an 8-char Windows NTLM password? Don't. Every single one can be cracked in under 2.5hrs

Glen Turner 666

Re: Plenty of financial institutions need to buck up.

The XKCD algorithm seems suspect to me. Its basic assumption is that people can make a random choice of common words -- without reference to a dictionary and without using any random number generation.

I just asked 15 coworkers to give me three random words each -- seven words appeared twice and two words appeared four times. This sample suggests that the size of the in-practice word pool may be small.

Given the skew in lotto number selections, we know that humans can't make random choices from a pool of ~50 selections even when it is in their financial interest to do so.

Given the apparently small size of the pool and humans' poor ability to choose randomly, I suspect the in-practice XKCD-algorithm key size may be substantially less than that suggested by the author's back-of-the-envelope calculation. I'd want to see a controlled study before recommending its use.
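The effect on key size is easy to quantify (the pool sizes below are illustrative assumptions):

```python
import math

def passphrase_bits(words: int, pool: int) -> float:
    """Entropy in bits of `words` independent uniform picks from a `pool`-word list."""
    return words * math.log2(pool)

print(passphrase_bits(4, 2048))  # 44.0 bits -- the XKCD figure (~11 bits/word)
print(passphrase_bits(4, 50))    # ~22.6 bits -- a 50-word in-practice pool,
                                 # roughly 3-4 random printable characters
```

Halving the effective bits turns a decade-long cracking job into an afternoon's, which is why the uniform-random assumption matters so much.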
