Re: You /what/ Liam?
I used to teach ParcPlace Smalltalk. Three-day course. First hour and a half: concepts and syntax. Rest of the course: class library.
Absolutely. Smalltalk and its surrounding tooling have very much the same vibe; it's an all-encompassing environment, and one uses it very differently from almost all current systems. "Debugging a program into existence" is very much a thing, where the debugger fires up on doesNotUnderstand: and you write the code that wasn't there yet.
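For anyone who's never seen it, here's a very loose Python approximation of the doesNotUnderstand: idea - loose because real Smalltalk traps the message live, lets you write the missing method in the debugger, and resumes; Python can only tell you the method isn't there:

    class DebugIntoExistence:
        """Very loose Python analogy, not real Smalltalk: intercept messages
        the object doesn't understand, the way the Smalltalk image drops you
        into the debugger on doesNotUnderstand: so you can write the missing
        method and carry on."""

        def __getattr__(self, name):
            # Python can only report the gap; Smalltalk lets you fill it in
            # live and resume from the same point.
            raise NotImplementedError(f"#{name} isn't written yet")

    DebugIntoExistence().frobnicate()  # -> NotImplementedError: #frobnicate isn't written yet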
Keep on doing it for fun :-). When do you reckon you'll be able to remote-control a coffee pot with it?
Let's just say these boards aren't going out into the field yet, where we expect them to survive a decade or more with no visits and total data traffic of under a megabyte a month. Six months or a year from now, once the firmware starts to settle down? Sure. They're good enough for what we need, and have an excellent ecosystem around them. That ecosystem is the difference for us.
The day Atlassian announced the end of Server, I swore never to use an Atlassian product again and started migrating all existing data off Atlassian products onto open source alternatives. The risk of using open source is at least manageable (as long as you don't risk anything built on PHP or MySQL, as those tend to be architectural smells that reek of poor thinking elsewhere in the system). The risk of vendor lock-in isn't, for us.
So far, we're very happy.
When we picked Terraform a year ago, we reckoned it was too big to fail and that if it went proprietary then someone would fork it and we'd still be able to use it. Turns out we were right. We'll continue contributing PRs for the providers we use where we need more functionality.
Unfortunately, there's only one way to find a decent supplier: use one of the small folks who are building their business and therefore trade on quality and service. When they get bought, as almost all do once the founders have built up the business and want to retire on the proceeds, then they become universally awful as they're being run for profit, not service (see icon). Move at that point.
All my domains are with Mel at Herald Information Systems, for example, but I've known her for decades. Excellent on all counts, and for some reason I don't think she's interested in selling the business to Big Provider. If she did, I'd be out of there like a shot.
This is the way the world works.
Our entire business runs on Debian. It was that or a DeadRat tracker, and as we go from IoT devices to private cloud it made more sense to go with something that will scale. It's been very stable for our use cases - which, to be fair, don't include user desktops.
Take a random 1U box (a Dell R640): it's 21kg - under the recommended manual-handling limit for men, over the limit for women. Most 1U servers are likely in that zone, which might make Elfin Safety nervous.
We do dual lifts anyway, as rotating rust doesn't bounce very well if you drop the box 2 metres onto the floor.
Best server admin I know is female, by the way.
Think of a mainframe as several racks of servers and comms, rigged such that you can put a bomb in any rack and the entire workload just carries on running. It's that fault tolerance that attracts the price tag.
I heard a story (which might be apocryphal) of one mainframe that got half-destroyed when the 6kV distribution transformer next door to it went bang. The users didn't even notice. No transactions lost. If it can't do that, I'd argue that it's not a mainframe.
As soon as Shuttleworth demonstrated he was prepared to bend a Linux distro out of shape for commercial reasons, I stopped using Ubuntu. The fleets of machines I've specified and implemented for various organisations over the decade since have all run on Debian.
And, yes, I contribute effort and regular money to multiple FOSS projects and encourage the organisations who use my services to do likewise. Free-as-in-speech needs support that doesn't come for free-as-in-beer.
Also always, always, always check for cleaners with odd habits. We had a Sun workstation at a hospital in the early '90s, on a 4-way multiplug, which was plugged into the right-hand socket of the double 13A socket near the door. Note "right-hand". This is about to become important. The left-hand socket was empty, so as to leave a socket free for the cleaner.
This worked well for a year or so. Then the machine started becoming unreliable - it'd reboot between about 5.30pm and 6.30pm many weekday evenings. Aha! Must be a new cleaner or a dodgy vacuum cleaner! So we put some tape over the plug with a nice neat "Please do not unplug" notice. The following evening... reboot. The morning after that... check the socket. The tape had been peeled back, the plug presumably removed, then at some later time the plug had been put back in and the tape replaced. The empty, inviting left-hand socket next door was unused.
This was... mysterious. Time for some overtime! Hover outside the office after hours and see what happens.
The hover revealed the cleaner coming down the corridor, cleaning each room. When they got to ours, they peeled the tape back and... were just about to pull the plug on the workstation when our spy intervened. It turned out that when the new cleaner had received their training, they'd been shown how to use a vacuum cleaner (new technology for them) and the person showing them had plugged it into the right-hand socket of a double. Therefore the cleaner had assumed that they had to do it exactly the same way, and had been unplugging anything on the right-hand side of a double in order to plug in the vacuum.
Our spy promptly shut down the workstation, moved the multiplug to use the left-hand socket, and restarted the workstation. And we never had a problem again.
(Background: Yes, I have an ASC diagnosis. Yes, I'm involved in autism research. No, I'm not a great fan of Baron-Cohen's deficit model - few autistic folks are.)
It's worth looking at the double empathy problem here - https://en.wikipedia.org/wiki/Double_empathy_problem (and I recommend papers by Catherine Crompton in particular for some well-thought-out further research) where autistics communicate well with autistics, neurotypicals (NTs) communicate well with neurotypicals, and communication between the two groups is confused, confusing, and full of emotional and technical misunderstanding. "Why are they so rude? We're not even through the intros!" "Bored now. Why is he telling me his life story when the meeting's already thirty seconds in and we're still on small talk? And I can't disguise that boredom on my face."
I strongly suspect that NTs find autistic comms terse, arrogant, and entitled. Given the strong presence of ND and especially autistic people in tech - and remember, an ASC diagnosis *requires* you to have communication difficulties with neurotypicals or you don't get the label - then the fast, efficient, and open comms between autistics is going to come across as toxic. We don't dress things up. Mostly we don't say please, because the other autistic person doesn't care about hearing it and it's a wasted word. We say what we want. Toxic? Depends which side of the mutual incomprehension barrier you're on. I'd much rather receive clear, simple comms where the position and intent of the other side was clear; I spend far less time parsing it and I'm far more certain of the intent of the comms.
I'd be fascinated to see a version of this study where a group of autistic people rated the threads for toxicity, and a parallel group of non-autistic people did the same. I think the comparison of those scores would be very, very interesting.
I've spent quite a long time trying to port a moderately complex WinForms app so that it will run under WINE. I have access to the source, and it's .Net (presently Core 3.1) and, as far as we can, adheres religiously to the published APIs with no weirdness.
It doesn't port cleanly, because one third-party library (DevExpress XtraRichEdit - I ain't writing my own word processor!) does something deep in its innards that causes WINE to display cross-hatched scrollbars.
WINE might change that, but it's going to have to do so app by app, and it'll be hard to shift the line-of-business apps because at the moment the DevExpress and Infragistics of this world couldn't care less.
* You can turn off querying from organisations that break the rules.
* You can bring down the portcullis completely if you want.
* You can put a human between the request and the response, running the query past the Caldicott guardian in healthcare for example.
I was the architect of one such system.
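Very roughly, the gatekeeping logic looks something like this (a toy Python sketch; the names, rules, and record types are invented for illustration, not lifted from the real system):

    from dataclasses import dataclass

    # Illustrative policy state only - all of these values are made up.
    PORTCULLIS_DOWN = False                  # bring everything down if needed
    BLOCKED_ORGS = {"org-that-broke-the-rules"}
    NEEDS_HUMAN_REVIEW = {"patient-record"}  # types a Caldicott guardian must sign off

    @dataclass
    class Query:
        org: str
        record_type: str

    def gatekeeper(q: Query) -> str:
        if PORTCULLIS_DOWN:
            return "refused: all external queries suspended"
        if q.org in BLOCKED_ORGS:
            return "refused: requesting organisation is barred"
        if q.record_type in NEEDS_HUMAN_REVIEW:
            return "queued: awaiting human (Caldicott guardian) approval"
        return "released"

    print(gatekeeper(Query("some-hospital-trust", "patient-record")))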
An ex-housemate worked on AMULET. Apparently Steve was really quite peeved that the first silicon had more than zero bugs in it, as everything he'd previously designed had come back bug-free first try. I mean, the design was only an order of magnitude bigger than anything asynchronous that anyone else had ever attempted...
The spiritual descendant is really Furber's SpiNNaker (Spiking Neural Network Architecture) work - neuron simulation using only kilowatts of power, rather than megawatts. Worth a look.
As I get older and my eyesight gets worse, screen real-estate becomes more and more valuable. I'd love a device that I can fold to put in a pocket, then get out and open to tablet size so that I stand a chance of reading it. Doesn't need 400ppi, just needs lots of degrees across my field of vision with my varifocals!
GDPR says nothing about whether or not personal data is encrypted; it applies whenever personal data is processed.
Zoom is not a peer-to-peer network; it uses traffic routing nodes worldwide, and explicitly states in its T&Cs that it may use any node or combination of nodes to route traffic.
Net effect: your video traffic, even if allegedly "end-to-end encrypted" (show me the code, the design, and the architecture), may be processed through one of Zoom's US routing nodes on the way from an EU source to an EU destination. And if video traffic ain't personal data, I don't know what is.
Then add in users who use VPNs and deliberately appear to be in different countries, and EU offices of US organisations where the Internet traffic from the EU users pops out of a US Internet peer. Zoom has no way of knowing where any given user is physically sited, so its only recourse would be a re-architect of its entire system that routes all video and audio traffic peer-to-peer (and doesn't provide cloud recording or transcription services). Then it would only have the more common kind of user data to worry about... :-)
Possible? Clearly yes, for small enough and/or critical enough projects. One ex-colleague of mine wrote his own BCPL compiler for PDP-11, which he bootstrapped from his own PDP-11 assembler, which he originally hand-assembled. Then he wrote his own OS using that compiler. I didn't check what he used for storage and access to the PDP-11 while doing this, but it wouldn't surprise me if he went from scratch there as well.
Practical? That's a cost-benefit analysis :-).
... is to verify:
* the processor and system architectures for side-channel attacks, such as power or speculative execution;
* the microcode on the CPUs;
* the code on the management processor on each CPU die;
* the firmware on the network cards, disk controllers, and everything else that can DMA or can affect data ($deity help you with Thunderbolt);
* the microcode and firmware running on the flea on each server;
* the BIOS;
* the entire code of the kernel you're running and any loadable modules;
* the entirety of the user space of the operating system(s) you're running;
... and *then* you can get onto your own application(s) and the third-party libraries on which they depend.
No, you can't rely on these being checked against some suitably complex hash (remember that MD5 and SHA-1 are both considered compromised, so it'll have to be better than those) - how did you obtain that hash, and how do you know the channel you obtained it through hasn't been compromised?
No, you *really* can't rely on downloading the application and then comparing against the hash that you... wait for it... *downloaded from the same site*. Pure security theatre.
No, you can't rely on the browser or program you are using to download code or hash being uncompromised. Or, for that matter, the code you are using to calculate the hash.
No, you can't rely on your firewall. How do you intend to verify its firmware and its application definitions?
No, you can't rely on your network switches for data transfer. How do you intend to verify the switch's data and control planes, and its management software?
No, you can't rely on printouts. How do you intend to verify the application producing the printed version, the printer driver, the printer firmware?
No, you can't rely on your verification tools. How do you intend to verify them?
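For the avoidance of doubt, the mechanical part of a hash check is the easy bit - it's everything above that's the problem. A toy Python sketch, with placeholder values, and it only means anything if the reference digest reached you over a channel you trust more than the download itself:

    import hashlib

    # Both values are placeholders for illustration. The interesting question is
    # how EXPECTED_SHA256 reached you - if it came from the same site as the file,
    # over the same (possibly compromised) browser and network, it proves nothing.
    EXPECTED_SHA256 = "put-the-independently-obtained-digest-here"
    DOWNLOADED_FILE = "some-package.tar.gz"

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of(DOWNLOADED_FILE) != EXPECTED_SHA256:
        raise SystemExit("digest mismatch - do not install")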
Second point: "Doing it right" would cost more than the entire revenue of most businesses - which means 100% chance of failure of the business. That's a higher chance of failure than "ignore it and hope it never happens to us". So, quite correctly, businesses try to hit the sweet spot of minimum overall chance of failure of the business - which means the standard risk management approach of choosing which ones you even bother trying to mitigate.
Final point: Overall - and I expect to be roundly downvoted for this - if the risk management is done without rose-tinted glasses, *this laziness is good for humanity*. There's no point spending more effort on verification than it takes to recover from the attacks that succeeded due to missing or failed verifications.
I think that backward compatibility is going to be an awful lot of fun to define.
Imagine, for example, the race conditions that nobody has ever found in their multi-threaded code because the existing code has particular performance characteristics such that one thread always gets there ahead of the other / the code is slower than the hardware being controlled. Now consider a project that *only* varies timing, and makes no other change. You've already lost backwards compatibility, in that code that work{ed,s} in the old environment no longer works in the new one.
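A toy illustration in Python (nothing to do with any real project): the unlocked read-modify-write below can pass every test for years if the old runtime's timing means the threads never interleave mid-update; speed things up, slow things down, or change the scheduler and the latent race surfaces.

    import threading

    counter = 0

    def bump(n: int) -> None:
        global counter
        for _ in range(n):
            # Unlocked read-modify-write. If, on the old runtime, one thread
            # always finished before the other got scheduled, nobody ever saw
            # a lost update. Change the scheduling or timing and they appear.
            current = counter
            counter = current + 1

    threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # "should" be 200000; under less friendly timing it often isn't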
I confess I'm going to sit back, grab the popcorn, and watch the fun, continuing to avoid as far as I can the trio of Topsy-ish "just growed" P languages that were originally fucked by their lack of architecture and are now *utterly* fucked by their requirement for backward compatibility: PHP, Python, and Perl. Spawns of Santa, all of them, hence the icon.
Our remaining Windows boxen and VMs are finding they're having a hard time of it reporting telemetry back to the mothership; they suddenly can't resolve any of the DNS names. Might have something to do with me blocking 53 outbound for anything except the household DNS server, which is running Pi-Hole... *innocent whistle*
Ad-free on mobile is another blessed relief.
It's never much fun to invite independent auditors in who you know will publish their findings openly. The first time you do that, you *know* there's going to be stuff you hadn't seen hauled out into the open, and a certain amount of egg on face as a result.
Much kudos to the folks who chose this approach, and co-operated with it, despite the inevitable findings.
https://en.wikipedia.org/wiki/Institutional_review_board is worth the check.
As the article noted:
"The paper describes how the authors submitted what's described as subtly subversive code contributions that would introduce error conditions into the operating system software, and it claims the researchers subsequently contacted Linux maintainers to prevent any bad code ending up in an official release of the kernel.
"It further states that the experiment was vetted by the university's Institutional Review Board (IRB), which determined that the project did not constitute human research and thus granted an ethical review waiver."
*checks Firefox add-ons*
AdBlock Plus (blocked 3 items on this page)
NoScript (blocked 6 items on this page)
DecentralEyes
Facebook Container
Containerise
HTTPS Everywhere
Privacy Badger (blocked 2 items on this page)
Don't Track Me Google
... yeah, no wonder Google doesn't want add-ons being able to access arbitrary features of your browser; some of these would be impossible in upcoming Chrome versions.
As I've said on a number of occasions in these comments, the *only* way to stop this is for someone to spend a few million to a few tens of million to set up a bug-for-bug-compatible free and/or open project that exactly tracks Office. No "improvements". No "doing it our way". No "but that's patented", even. A drop-in replacement so that users don't need re-training, investment banks can be certain that their traders' complex derivatives (many of which are *defined* in Excel spreadsheets) will keep the same values, and designers can round-trip documents without fear of formatting whoopses.
Until that point, Microsoft wins.
From the article:
"I welcome a good, robust debate on all these points, conducted in the right way."
... without any way of conducting the debate because, of course, the members' forum is no longer available. I wonder what the Chairman believes "the right way" to be?
Oh, indeed - there's a reason anything "healthcare" costs 10x more than non-healthcare, and the validation and consequential license fees are one part of that. That said, we chose CentOS over RHEL because a) we knew what we'd be paying for features like virtualisation, and b) we could bring support in-house if absolutely necessary. We chose Linux over VMware for our virtualisation layer because of VMware's complete lack of LTS; having to upgrade your virt layer every couple of years to retain support sucks.
If you're working in healthcare, or a number of other areas, then you may need to "validate" your environment according to ICH-GxP or a similar standard. You really, really, *really* do not want to have to go through this more often than you have to. You have to revalidate *every time you change anything about the system*. Generally, this means re-testing everything you care about, with test scripts, with each step on each test script initialled to say it has been run and each script signed and dated and run by someone who has demonstrated the knowledge, skills, and experience to run that script and understand what they're doing. This can easily take a couple of months. Then there are days of paperwork to release onto the production systems.
Monthly security patch cadences are far too fast for validated systems. Annual... maybe, but only if you can make them coincide with other updates and test the whole lot in one go.
"Yeah, it almost works, but we had to break the principles for the filesystem."
Well... yes. And anywhere else you need to drive hardware concurrently and across multiple calls. Good luck getting a multi-tenant GPU driver working entirely in a principled way, such that you can have some cores used for (say) a physics sim, some more to render to textures for an external display that ships pixels across USB, and the rest for a game.
We usually have 2-3 cats around the house, plus two long-haired humans. We both appreciate low background noise, so both our PCs (actually midi towers from QuietPC) are fanless apart from some large slow-spinners on the graphics cards that are stopped unless we're playing 3D-intensive games. The PCs tend to last 5-7 years before needing replacement - if you're buying fanless, it's so bloody expensive that it's worth buying further up the market and extending the useful life of the boxes.
One of the unexpected advantages over the several previous generations of fanned machines is the sheer lack of crud that gets into the system. We don't get an appreciable buildup of dust/fur/hair/crud/PLA wisps from the 3D printer, even over that lifespan.