Re: Wish I knew what kind....
Look for "tenkeyless" (TKL) designs.
They have the main, cursor, and page-up/page-down/... keys as on a PC-AT; they just drop the numeric keypad.
Which means plenty of your user population now has an inkling about the power of sudo.
This was well documented years ago: https://xkcd.com/149/
Not sure where I would put this as a reply, so I'll put it here.
As the title says: each technology/approach/... should be evaluated on its own merits.
Some factors (eg. existing skills) can be mitigated (training). Attitudes of "I don't want to change" may need stronger action: if you are a developer you should expect change. All the time.
Sometimes not changing is the right choice; just remember the clock is ticking, and ask how much you are willing to pay for support/fixes/... for older platforms with few other users.
Just keeping things the same is stasis. I'm sure many in my great-grandparents' generation felt the same about oil lamps and candles over this new-fangled electrickery.
Often the change is from a limited, hacky approach that, while it works, is very much baling wire and chewing gum: it "works" only with constant intervention from developers.
The new way allows integration with diagnostics (with zero effort: the work is already done), so it requires less support.
In reality this often isn't about anything except an older developer trying to quietly make himself (IME always a man) invaluable.
People are against it because, so highly ironically, a lot of people involved in the ever-changing world of high tech...are personally resistant to change when said change means they must change.
This 1000%.
I've seen it multiple times: "senior" developers (senior in nominal experience, but not in ability) keep repeating the same approach, even after training and all the support they needed. The approach was based on an already-unsupported platform using a hack to integrate. The then-current way would have been less effort, but required a little thinking.
I have started to think that this disconnect: "meetings are key" vs "meetings are a waste" is really down to different job functions.
For managers (especially) and some others, the work is meetings. That is where decisions are considered and made; where progress on follow-ups is reviewed.
But for many others, including most individual contributors, the real work is done working on your own, with as little distraction as possible. For productivity, meetings need to be as limited as possible (some choices need to be discussed and agreed, and work in a team needs some coordination).
The problem is too many managers not realising that not everyone's work (likely not even a majority's) is done in meetings.
In some cases the lack of agility is an unwillingness to do the feedback step and make changes to the way of working.
In others it is an assumption that "agile is cheaper", which has never been a claim of the agile movement (as opposed to those making training/consultancy money off "agile"). Agile is about getting the customer what they need now (not what they thought they needed some years ago).
5 developers (or more likely 4 with a tester) can, over a few quarters, cover a lot of functionality.
In less than two years, in a team that was usually that size[1], I've delivered a complete custom e-commerce site plus a complete backend (CRM, product management, order processing through to shipment, returns, inventory management, ...).
[1] There was a brief period, a few weeks, of more people, but that was largely down to mitigating the customer changing what they wanted while not moving the deadline.
That's because most organisations/teams/departments/... claiming to be agile are not agile. They are doing the "easy" bits but not the hard parts of agile, and thus have all the disadvantages without any of the benefits (outside claiming "we're agile").
Daily scrums have their place. But they require a cohesive team working closely together for one goal, and decent leadership to keep the scrum focused (any discussion needs to happen outside the meeting, with just those who need to be included). 15 minutes should be more than enough: even at 2 minutes per person (more than enough in most cases), if you have more than 5–6 people in the call then you are too big to be an agile team.
But then that requires all the team members to know that a scrum is there purely to keep everyone in the team aware of what else is happening.
An actuarial approach could be taken: chance of finding the drive, times chance of it still working, times the expected value (allowing for bitcoin's value dropping).
Vs. the cost of search plus recovery.
Assuming the values are as claimed, I would expect the investors to take those costs times a significant multiplier as their cut (the multiplier to account for the risks) off the top first. With costs of anything over £25m, he's going to see very little.
Much like pay-outs with no-win no-fee lawyers.
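As a back-of-the-envelope sketch of that actuarial calculation (every figure here is hypothetical, chosen only to show the shape of the sum, not a real estimate):

```python
# Back-of-the-envelope expected value of digging for the drive.
# All figures are hypothetical illustrations, not real estimates.
p_find = 0.30           # chance of locating the drive in the landfill
p_readable = 0.20       # chance the data is still recoverable
wallet_value = 180e6    # assumed current value of the wallet, GBP

expected_return = p_find * p_readable * wallet_value
search_and_recovery_cost = 25e6  # assumed cost of excavation plus recovery, GBP

print(f"Expected return: £{expected_return:,.0f}")
print(f"Net of costs:    £{expected_return - search_and_recovery_cost:,.0f}")
```

With numbers anywhere near these, the expected return doesn't cover the search cost, which is the actuarial argument against digging.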
> Don't forget Excel's year 1900 bug that is still present in every version of Excel.
You mean Lotus 1-2-3's "1900 is a leap year" bug, which Excel copied because they wanted compatibility with the then-dominant spreadsheet.
This worked – who uses Lotus 1-2-3 today[1]?
And, yes, it has been over two decades since I needed to create my own date handling (MS-DOS didn't do system libraries), but even then you could, with a little reading (pre-internet days), use a day count since an epoch and store that.
[1] I expect there are some who take pride in their masochism.
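Both the bug and the epoch-day-count approach can be shown with the Python standard library (the Gregorian rule: divisible by 4, except centuries, except every 400 years):

```python
import calendar
from datetime import date

# 1900 is divisible by 100 but not by 400, so it is NOT a leap year;
# Lotus 1-2-3 (and hence Excel's serial-date scheme) treats it as one.
print(calendar.isleap(1900))   # False
print(calendar.isleap(2000))   # True: divisible by 400

# Storing a day count since an epoch sidesteps the problem entirely:
# toordinal() is the day count since 0001-01-01 (proleptic Gregorian).
days_between = date(1900, 3, 1).toordinal() - date(1900, 2, 28).toordinal()
print(days_between)            # 1: there is no 29 Feb 1900 in between
```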
"Be assured there won't be 14,408mph winds, hurricane force winds..."
Hurricane force is ≥ 73 mph, and while there is no upper limit on force 12, I think being > Mach 18 is into a whole new realm, where everything down to bedrock would be scoured off (and the bedrock would be eroded pretty quickly).
Maybe someone messed up the data feed so it was read as being in multiples of the speed of sound rather than, correctly, km/h, and something was a bit too clever in automatically converting.[1]
[1] This is the site where you can turn it up to 11.
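A quick sanity check of that theory (the speed of sound is approximate, and the wind figure is a hypothetical example of a light breeze):

```python
# If a wind speed reported in km/h were misread as a Mach number,
# a gentle breeze becomes apocalyptic. Figures are illustrative.
SPEED_OF_SOUND_MPH = 767.0   # approx, sea level, ~20 degC

wind_kmh = 18.8              # a light breeze on the Beaufort scale
bogus_mph = wind_kmh * SPEED_OF_SOUND_MPH  # treat 18.8 km/h as Mach 18.8

print(f"{bogus_mph:,.0f} mph")  # ballpark of the 14,408 mph headline figure
```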
> tab tab tab
Is indeed quicker, which is why any text editor worth the CPU/memory/... to run will automatically insert the correct number of spaces (including fewer than four if there are already characters in place), and auto-indent the next line to match (use shift-tab to reduce the indent). A serious editor (eg. what you should be using if paid to write code) will support an editorconfig file in the repo, so the team shares settings and does not depend on everyone configuring their editors the same way.
If your standard editor cannot do this, then PEBCAK.
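For illustration, a minimal `.editorconfig` at the repo root might look like this (the values shown are just one team's choice, not a recommendation):

```ini
# .editorconfig -- shared editor settings, checked into the repo
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.py]
indent_style = space
indent_size = 4
```

Any editor with editorconfig support picks these up automatically, so the tab key does the right thing for everyone.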
> This is like vi vs Emacs with 'religious overtones,’ project chief laughs
That would imply that the editor wars didn't have religious overtones... and my memory of the late-80s Usenet editor wars was very much religious.
I think the writer of the sub-head may be a little young; "sweet summer child" likely applies!
Remember "Plan on doing it twice, because you will" (typically the second time is disguised as debugging).
The benefit of working through the problem once in prototype mode is you get to work out all the edge cases and business logic before doing the real implementation. Too many times, what seemed simple initially turns out to be simple in the normal case with lots of complexity at the edges; with no real way to plan for those edge cases, because the spec/user-story/... didn't consider any of that.
It was the twenty-third launch for that booster. One individual booster with more reuse than all other rockets combined.
As failures go it is small.
(From the video it appears a landing leg failed, which could indicate that further checks for stress-crack-like problems are needed when refurbishing boosters.)
A more serious case of the same thing.
Automating the analysis of lung x-rays to detect some lung disease (can't recall which). It didn't work: the models fixated on the newer, higher-resolution images (the positive cases in the input) over the older, lower-resolution images (the negative cases). So any recent x-ray was flagged positive, whatever the state of the lungs.
The wonders of having no means to debug the model other than extreme care with input selection when building it.
I recall, some time ago at my first job (so early '90s), talking to my manager about test rigs. He talked about how much effort, for a safety-critical system, was put into testing the test rig. The test rig itself was relatively simple, but validating that it covered all necessary good and bad cases for the test target was substantial work.
Something you are pushing updates to at a rapid pace needs more than just a few tick boxes.
It isn't as if CI/CD pipelines running multiple sets of tests on all sorts of software (for example on Windows Server 2022) are hard to come by (the VM images for these are freely available).
The underlying organisational culture of "update faster, respond fast, Fast, FAST" is likely the real problem.
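As a sketch of how cheap such a pipeline is to stand up: a hosted CI job targeting a Windows Server 2022 image takes a few lines (this uses GitHub Actions; the job name and test script are placeholders, not from any real project):

```yaml
# Hypothetical workflow: run the test suite on a Windows Server 2022 runner.
name: windows-tests
on: [push, pull_request]

jobs:
  test:
    runs-on: windows-2022   # GitHub-hosted Windows Server 2022 image
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: ./run_tests.ps1   # placeholder for the project's test entry point
        shell: pwsh
```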
Remember WHQL validates that a driver calls APIs correctly, doesn't make a mess of interrupt handling, etc.
That there was a dependency on a definition file would not be in scope, because that is not the kind of thing drivers for hardware devices have done historically. It would be great to see WHQL updated to broaden its tests.
Even better would be MS saying "no new third-party drivers in the kernel" (Windows has long supported non-kernel drivers). However, that would depend on MS being willing to fight through US and European courts to show it isn't an anti-competitive measure (ie. that third-party software writers' laziness is an insufficient reason to allow kernel access).
> From what others have said crowdstrike bypassed their own rollout procedure to force the update straight onto production networks, bypassing staging.
Exactly: this isn't the failure of an individual, but an organisational operations failure where a choice to "push and be damned" led to being damned.
Deeper is the clear failure to do fuzz testing, and similar, within an integration test suite in the publication pipeline, as this would quickly have exposed the driver's lack of input validation with respect to the definitions.
Any OS will have this problem from a kernel component.
Eg. RedHat: having a kernel panic due to CrowdStrike: https://access.redhat.com/solutions/7068083
Edit: note I make no comment on whether the use of definitions + driver in kernel mode is a good idea[1] rather than the driver just being a bridge to user mode worker where everything happens[2].
[1] I don't think it is.
[2] The chance of a company like CrowdStrike getting this right (handling the worker failing...) seems low.
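The fuzz-testing idea is simple enough to sketch in a few lines; here `parse_definitions` is a hypothetical stand-in for the real definition-file parser, not anything from the actual product:

```python
import random

def parse_definitions(blob: bytes) -> list:
    """Hypothetical stand-in for a definition-file parser: a robust
    parser rejects malformed input with ValueError, it never crashes."""
    if len(blob) < 4 or blob[:4] != b"DEF1":
        raise ValueError("bad magic")
    return [blob[i:i + 8] for i in range(4, len(blob), 8)]

def fuzz(parser, rounds: int = 1000) -> int:
    """Throw random junk at the parser; count clean rejections.
    Any exception other than ValueError escapes and fails the run --
    exactly the kind of crash that should block publication."""
    rng = random.Random(0)   # deterministic seed for repeatability
    rejected = 0
    for _ in range(rounds):
        blob = rng.randbytes(rng.randrange(0, 64))
        try:
            parser(blob)
        except ValueError:
            rejected += 1
    return rejected

print(fuzz(parse_definitions))
```

A gate like this in the publication pipeline catches "driver falls over on a malformed definition file" before the file ever ships.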
First: MS did try to stop ring-0 antimalware but was threatened with anti-trust lawsuits by the usual crowd (including, IIRC, McAfee[1]). (Can't now find the reference.)
Second: CrowdStrike did the same thing to RedHat Linux last month, causing a kernel panic: https://access.redhat.com/solutions/7068083
[1] CEO of CrowdStrike was McAfee CTO when McAfee was breaking systems...
According to the status history, yesterday's Central US region issue (not at all worldwide) was a configuration-change issue in a storage cluster. Which is very much not good, but not a bad malware update.
The status page does have an alert about VMs running CrowdStrike Falcon agent having problems.
> The multiple and up to 15 times just makes it even crazier.
But it does make sense. The theory is that eventually the network stack gets enough time before the next BSOD to pull down the latest files, which don't trigger the BSOD.
I can also imagine this very much depends on the machine's internet connectivity being very low latency and very high bandwidth.
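That reasoning can be put in numbers (the per-boot success probability here is purely illustrative; real machines vary wildly with latency and bandwidth): if each boot independently offers a small window in which the update can land before the driver loads, repeated reboots compound the chances.

```python
# Chance of pulling down the fixed files within n reboots, if each boot
# independently gives probability p of the update winning the race.
# The p values below are made up for illustration only.
def success_within(n_boots: int, p: float) -> float:
    return 1 - (1 - p) ** n_boots

for p in (0.05, 0.15, 0.30):
    print(f"p={p:.2f}: after 15 reboots -> {success_within(15, p):.0%}")
```

Even at a 5% chance per boot, 15 reboots push the odds past a coin flip, which is presumably where the "up to 15 times" advice came from.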