Apple quietly buried its remaining unsold stock of some 2,500 Lisas in landfill in 1989
They may not inspire the auction frenzy of an Apple I, but has anyone ever considered digging them up for a retro sale?
Not sure if this represents a Five Eyes-wide decision to publicly point the finger, but New Zealand's NCSC has chosen today to also attribute recent attacks to China (APT10 isn't named in the press release, but it's strongly implied by the link to previous NCSC guidance).
A couple of theories that spring to mind:
(1) The GCSB is doing its job. They have access to classified threat information that can't be discussed in public, but it leads them to the unavoidable conclusion that it's less damaging to NZ's national interests to interfere with healthy commercial competition than to let Huawei kit in.
(2) The GCSB isn't doing its job. For murky reasons that involve keeping our Five Eyes partners happy they're screwing Spark and the country's future infrastructure over, under a smokescreen of "trust us, we know what's best for you." Or maybe they just like Vodafone.
(Or, more depressingly, (1.5) The GCSB is doing its job. They've made the hard-nosed decision that the Huawei "threat" is Trump Administration bluster, but Five Eyes is valuable enough that it's better for the country overall if we play along.)
The problem is that none of these theories are refuted by the known facts. As an NZ citizen I'd very much like to believe (1), and as a grown-up I grudgingly accept that (1.5) isn't outside the bounds of realpolitik. However, with the massive loss of public trust that the intelligence community brought upon itself with the Snowden disclosures you don't need a tinfoil hat to accept the possibility of (2). I do have some sympathy for the GCSB here, because even if they could declassify all the evidence behind their decision they'd still be accused of selective disclosure and nothing would change.
Ultimately though, it's irrelevant. Whether thanks to conspiracy or cock-up (here's looking at you, Cisco), we have to assume that any technology we import can't be trusted to behave in our national interest. I just hope that the people responsible for risk mitigation view all vendors as sceptically as they do Huawei.
Reading the comments on the GitHub issue, I can sympathise with Tarr saying "I no longer wish to be burdened with responsibility for this code, no matter how many people may have come to depend on it." Far too many of us who consume open source software feel entitled to upstream support even though that's not how the contract works (I'm guilty myself of muttering imprecations at authors of code which has cost me precisely nothing).
That said, fading support and ambiguous deprecation is a real problem, both with OSS and non-free products (non-free vendors seem a bit better at formally ending support so at least you know where you stand, but there are plenty of exceptions). If "the community" needs to maintain the integrity of an abandoned project, then we need a process for reliably doing that, but first we need clarity that the project has been abandoned - that includes walking the dependency tree to see if there's buried reliance on code that hasn't been touched in years.
Maybe that's a useful function of repositories like NPM, to help people assess the risk of using packages they host?
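As a rough illustration of the kind of signal a registry could surface, staleness could be flagged from last-publish dates. The thresholds and function name here are invented for the sketch; a real assessment would also weigh maintainer activity, open issues, and the dependency tree mentioned above.

```python
from datetime import datetime, timezone

# Hypothetical thresholds (in days) — purely illustrative.
STALE_DAYS = 2 * 365
ABANDONED_DAYS = 4 * 365

def staleness(last_publish: datetime, now: datetime) -> str:
    """Classify a package by how long since its last publish."""
    age_days = (now - last_publish).days
    if age_days >= ABANDONED_DAYS:
        return "possibly abandoned"
    if age_days >= STALE_DAYS:
        return "stale"
    return "active"
```

Even something this crude, rolled up across a dependency tree, would tell you more than a download counter does.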
Machine learning algorithms, which seem to be compulsory in any new technology, validate the instruction before sending it on to SWIFT via Microsoft's SWIFT installation in the cloud.
Let's hope they've been trained on the headline-grabbing instructions that have emanated from North Korea...
...is not evidence of absence. According to the WSJ article:
During a two-week period in late March, Google ran tests to determine the impact of the bug, one of the [unnamed WSJ sources] said. It found 496,951 users who had shared private profile data with a friend could have had that data accessed by an outside developer, the person said. [...] Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said.
So it might in fact be true that the vulnerability was fixed before it was exploited. But the claim "Google know for sure no harm was done, therefore they had no obligation to tell their customers" simply isn't justified on the face of what we know. The hypocrisy is indeed strong here.
Up to a point I can sympathise with the people making the call on the microcode upgrades. A firmware upgrade on any enterprise storage kit is a Big Deal with huge potential for problems, and nobody wants to be the customer who discovers the lurking data corruption bug in the latest release. It doesn't help that all the release notes I've seen are written from the firmware developer's perspective, so the customer is caught between vendor support saying "of course we recommend you upgrade to the latest release" and a list of micro-detailed fixes that give no clear risk guidance to the end user.
Maybe what's needed is something akin to CVSS scoring for firmware updates: I don't care which low-level firmware component had obscure bug XYZ, I want to know (1) how likely it is to affect me, (2) how bad the impact will be if it's triggered, and (3) how risky implementing the fix is. Otherwise you're left making the best call you can, and inevitably some of those calls will be wrong.
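To make the idea concrete, here's a toy scoring sketch combining those three factors. The levels, weights, and function name are all invented for illustration, not any real scoring standard:

```python
# Map qualitative ratings to numbers, CVSS-style.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def patch_priority(likelihood: str, impact: str, fix_risk: str) -> float:
    """Higher score = stronger case for applying the update now.

    Exposure (likelihood x impact) argues for patching;
    the riskiness of the fix itself argues for waiting.
    """
    exposure = LEVELS[likelihood] * LEVELS[impact]
    return round(exposure / LEVELS[fix_risk], 2)
```

A high-likelihood, high-impact bug with a low-risk fix scores 9.0 (patch now); a low/low bug with a risky fix scores 0.33 (probably wait). Even that much guidance would beat a wall of micro-detailed fix notes.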
"Their data" may mean "our data" and nobody should be able to leave that exposed.
I completely agree - by "should be able to" I meant "with the current implementation as I understand it, the worst I expect to be possible is that..." rather than "it's acceptable for an incompetent admin to be able to..." Upvoted both for the principle expressed and for catching my sloppy wording.
Both this article and the Kromtech post are missing something critical. It sounds like they're talking about Amazon's managed Elasticsearch Service offering, where you point and click (or not) through the console to have a cluster set up for you. But that service doesn't give you host access and it doesn't even let you install ES plugins of your choice, even if you are enough of a muppet to not configure any security.
The worst a dumb customer should be able to do is leave all their data exposed for the stealing and/or deleting. But Kromtech claim "the lack of authentication allowed the installation of malware on the ElasticSearch servers." If those managed ES versions can be remotely compromised through their REST APIs, wouldn't that be a fairly obvious thing for a provider to have patched?
If on the other hand we're talking customer-built (unmanaged) ES clusters, then the majority of the article is misleading if not downright wrong.
So if the Chrome team's mission is to help users be secure, why has Chrome ca. 56 made it so much harder to view certificate details? Up until recently you could right-click on the "Secure" marker in the address bar and go straight to the cert - now all that gives you is a link to a generic help page, and you have to drill down into the Developer Tools UI to find this information.
In what world is this an improvement?
Don't be so quick to dismiss the topic - Canadian academic Robert J. Smith? (yes, the question mark is deliberate) has modeled zombies and Bieber Fever as well as his more serious work on mathematical epidemiology:
Mathematicians care about the abstract model, not pesky little application details. And when the zombie apocalypse strikes, you'll be glad their work got funded.
Well for all we know, there could be 2,046,820,352 ZX81s providing that 1952GB, with 15,990,784 clustered together for each vCPU (because not even Amazon with their "everything fails, all the time" design philosophy would trust those dodgy RAM expansion cartridges).
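The arithmetic checks out, assuming a stock 1KB ZX81 and the 128 vCPUs of the instance in question:

```python
# One stock ZX81 contributes 1 KB of RAM; the instance offers
# 1952 GB of memory across (we're assuming) 128 vCPUs.
KB_PER_ZX81 = 1
VCPUS = 128

total_kb = 1952 * 1024 * 1024      # 1952 GB expressed in KB
zx81s = total_kb // KB_PER_ZX81    # one ZX81 per KB of RAM
per_vcpu = zx81s // VCPUS

print(zx81s)     # 2046820352
print(per_vcpu)  # 15990784
```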
Oracle gives its risk matrices to everyone but keeps the details of individual CVEs (Common Vulnerabilities and Exposures) to users with log-ins to its support portal.
This is incorrect. The Patch Availability documents linked to from the announcement are just that - they detail which patches to download for which product versions and link to other support docs for known issues, non-standard patching instructions, etc. They don't provide paying customers any more detail on the vulnerabilities than what Joe Public can infer from the risk matrices, which shouldn't be surprising:
(I have my own support contract with Oracle as an independent consultant, so the above is based on first-hand readings of the docs.)
In the unlikely event I ever bump into Frank Ostrowski I'll be more than happy to compensate him for any alleged loss of licence fee at the time (I was 11 or 12 so my pocket money would not have extended as far as ordering software from Germany) and buy him as many pints as he can sink in a night.
Well said, sir. A good chunk of the troubleshooting skills that keep me employed today trace right back to breaking copy protection on games I could never afford as a kid. For all it's cold comfort to the vendors of yesteryear who went out of business, it would be an honour to meet them and repay my childhood debts.
For even more resource-constrained environments there's Tiny Core Linux (http://tinycorelinux.net). A basic FLWM LiveCD image weighs in at 15MB and it'll run happily in 64MB RAM. Obviously that doesn't give you a lot of functionality, but it has a nice fine-grained package system that you can tailor to get exactly what you want and nothing else.
I'm not sure I'd be game to use it for my primary work machine (mostly because security updates are ad-hoc, AFAICT), but for special-purpose boxen it's hard to get more lightweight than this.
The IT questions in Section 27 are interesting:
Have you illegally or without proper authorization accessed or attempted to access any information technology system?
Have you illegally or without authorization, modified, destroyed, manipulated, or denied others access to information residing on an information technology system or attempted any of the above?
Have you introduced, removed, or used hardware, software, or media in connection with any information technology system without authorization, when specifically prohibited by rules, procedures, guidelines, or regulations or attempted any of the above?
If you're applying for clearance to work at the NSA, the correct answer is presumably "yes".
[Citation needed], but I'm guessing that's a reference to section 48 of the Telecommunications (Interception Capability and Security) Act 2013:
This requires network operators to advise the GCSB when they make changes within "areas of specified security interest" as defined in section 47. That section lists things like interception capability, storage of customer or network admin credentials, and parts of the network that aggregate large volumes of customer data (in flight or at rest). I'm neither a lawyer nor a network engineer, so hopefully someone better qualified can explain what this all means in practical terms.
I'd always assumed that "emoji" was a portmanteau of the "emo" in emoticon and the Japanese "ji" meaning character (as in "kanji", literally "Han [Chinese] characters"), but it's actually a Japanese word in its own right.
Kenkyusha's New Japanese-English Dictionary (5th ed.) defines it as "a pictorial symbol; picture writing; a pictograph" and gives the kanji 絵文字 (絵 "e" means picture, as in the famous ukiyo-e art style, and 文字 "moji" means written character). According to the Japanese Wikipedia article on 絵文字 the first encoded emoji was the baseball symbol in CO-59, a 1959 interchange code used by a group of large newspapers (carried into Unicode as U+26BE).
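You can confirm that codepoint from Python's standard library, which carries the official Unicode character names:

```python
import unicodedata

# The baseball symbol that started it all, as carried into Unicode.
ch = "\u26BE"
print(ch, unicodedata.name(ch))  # ⚾ BASEBALL
```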
The Hacker's Handbook was one of my most prized possessions as a spotty teenager. Reading the text now (http://www.textfiles.com/etext/MODERN/hhbk), I have to smile at gems like this:
"Hacking is an activity like few others: it is semi-legal, seldom encouraged, and in its full extent so vast that no individual or group, short of an organisation like GCHQ or NSA, could hope to grasp a fraction of the possibilities."
They sure got that right...
Here in NZ they want a blanket right to demand passwords even without reasonable cause:
But it's okay, they promise not to disclose any lawful content and we all know government agencies never abuse their powers.
Is there really "zero chance" the malware authors could hack drive firmware without access to the source code? Sure, publicly available firmware binaries are probably obfuscated in nasty ways and would require a lot of reverse engineering even after decryption, but why should that be beyond the ability of a well-resourced organisation like the NSA? There's a long tradition of amateurs hacking DVD-ROM firmware to disable region locking, for example - if J. Random Hacker can do this in the comfort of their own basement, why can't the professionals do it on a grander scale?
"You agree that access to the Support Portal, including access to the service request function, will be granted only to your designated support contacts and that the Materials may be used only in support of your authorized use of the Oracle product and/or cloud services for which you have a current support contract. Except as specifically provided in your agreement with Oracle, the Materials may not be used to provide services for or to third parties and may not be shared with or accessed by third parties."
Where it gets murky is the situation you've described, where you pick up knowledge in the course of your authorised access that happens to be helpful to a third party sometime in the future. My guess would be that saying "oh hey, I have a downloaded copy of a support article that might come in handy here" is out, but saying "I've hit this problem before and I remember what the fix was" is ok - unless Oracle want to claim they own the part of your brain holding their content, of course...
It sounds like the behaviour described in the article, offering patches you've written yourself without access to licensed support material, is quite different from what they're squabbling about in the lawsuit. Whether it contravenes some other license clause is a whole separate question.
Interesting that out of all the potential applications they chose to highlight powering aircraft. With the level of scepticism they must have expected, surely the last thing they need is to remind people of the 1950s atomic-power-will-solve-everything optimism that fuelled the Aircraft Nuclear Propulsion programme. Then again, if they could demo this puppy in a B-36 I for one would buy tickets to watch.
(Mine's the one with the lead lining.)
To be fair to Oracle, EBS has been certified with JRE 7 since December:
The Metalink notes say they also support IE9 and Firefox ESR 17 on Win7. I have a lot of gripes about how Oracle handles certification and patching in general, but in this case the criticism isn't justified.
To be fair to the company, I can see how they might have got the name. The katakana (Japanese syllabic text) stamped on the logo reads ボロックス "borokkusu". That's also how you'd transliterate the English word "blocks" into Japanese - the extra vowels turn up because Japanese is built around what we'd consider to be consonant-vowel syllables. Since the Bollox range seems to be owner-designed kitset-style homes, "blocks" almost makes sense.
Then again I'm nowhere near fluent in Japanese, so this could all be a load of borrokusu.
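For anyone curious how the transliteration works, here's a minimal sketch covering just this word. The table only includes the four kana needed; a real transliterator would need the full syllabary plus long-vowel handling:

```python
# Katakana-to-romaji mapping for just the syllables in this word.
KANA = {"ボ": "bo", "ロ": "ro", "ク": "ku", "ス": "su"}

def romaji(word: str) -> str:
    """Transliterate a katakana word using the tiny table above."""
    out = []
    for i, ch in enumerate(word):
        if ch == "ッ":
            # Sokuon (small tsu): doubles the following consonant.
            out.append(KANA[word[i + 1]][0])
        else:
            out.append(KANA[ch])
    return "".join(out)

print(romaji("ボロックス"))  # borokkusu
```

The small ッ is why "blocks" picks up the doubled "k" on the way into Japanese.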
Biting the hand that feeds IT © 1998–2020