Re: What I don't understand
If your editor cannot do this natively there may well be an extension to do it.
> tab tab tab
Is indeed quicker, which is why any text editor worth the CPU/memory/... to run will automatically insert the correct number of spaces (including fewer than four if there are already characters on the line), and auto-indent the next line to match (use shift-tab to reduce the indent). A serious editor (eg. what you should be using if paid to write code) will support an editorconfig file in the repo, so the team shares settings and does not depend on everyone configuring their editors the same.
If your standard editor cannot do this, then PEBCAK.
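As a sketch, a minimal .editorconfig committed at the repo root might look like this (the exact values are illustrative; pick whatever the team agrees on):

```ini
# Top-most EditorConfig file; editors stop searching parent directories here.
root = true

# Defaults for every file in the repository.
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
indent_style = space
indent_size = 4

# Makefiles require hard tabs.
[Makefile]
indent_style = tab
```

Most mainstream editors honour this either natively or via a small plugin, which is the point: the settings travel with the code, not with the person.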
> This is like vi vs Emacs with 'religious overtones,’ project chief laughs
That would imply that the editor wars didn't have religious overtones... and my memory of the late 80's Usenet editor wars is that they were very much religious.
I think the writer of the sub-head may be a little young; likely "sweet summer child" applies!
Remember "Plan on doing it twice, because you will" (typically the second time is disguised as debugging).
The benefit of working through the problem once in prototype mode is that you get to work out all the edge cases and business logic before doing the real implementation. Too many times, what seemed simple initially turns out to be simple in the normal case with lots of complexity at the edges; and there is no real way to plan for those edge cases, because the spec/user-story/... didn't consider any of that.
It was the twenty-third launch for that booster. One individual booster with more reuse than all other rockets combined.
As failures go it is small.
(From the video it appears a landing leg failed, which could indicate that further checks for stress-crack-type problems are needed while refurbishing boosters.)
A more serious case of the same thing.
Automating the analysis of lung x-rays to detect some lung disease (can't recall which). It didn't work: the model fixated on the newer, higher-resolution images (the positive input cases) over the older, lower-resolution images (the negative input cases). So any recent x-ray was classified positive, whatever the state of the lungs.
The wonders of having no means to debug the model building other than extreme care with input selection when building.
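One cheap sanity check (a sketch of the idea, not anything the project actually had) is to test whether a trivial feature like image resolution already predicts the labels before training anything; if it does, the model can cheat the same way. All the data below is made up:

```python
# Sketch: detect a label/resolution confound in a training set before
# building a model. If thresholding on resolution alone scores far above
# chance, the dataset has a shortcut the model will happily learn.

def resolution_confound_score(samples):
    """samples: list of (width_px, label) pairs, label 1 = positive case.
    Returns the best accuracy achievable by a width-threshold classifier
    (predict positive when width >= threshold)."""
    widths = sorted({w for w, _ in samples})
    best = 0.0
    for threshold in widths:
        correct = sum(
            1 for w, label in samples
            if (1 if w >= threshold else 0) == label
        )
        best = max(best, correct / len(samples))
    return best

# Hypothetical dataset: positives are newer, higher-resolution scans.
data = [(2048, 1), (1936, 1), (2100, 1), (512, 0), (600, 0), (480, 0)]
print(resolution_confound_score(data))  # 1.0: resolution alone separates the classes
```

A score near 1.0 says nothing about lungs and everything about where the images came from; the fix is in the data collection, not the model.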
I recall, from some time ago at my first job (so early '90s), talking to my manager about test rigs. He talked about how much effort, for a safety-critical system, was put into testing the test rig. The test rig itself was relatively simple, but validating that it covered all the necessary good and bad cases for the test target was substantial.
For something you are pushing updates to at a rapid pace, that needs more than just a few tick boxes.
It isn't as if CI/CD pipelines running multiple sets of tests on all sorts of software (for example on Windows Server 2022) are hard to come by (the VM images for these are freely available).
Underlying organisational culture about "updates faster, respond fast, Fast, FAST" is likely the real problem.
Remember WHQL validates that a driver calls APIs correctly, doesn't make a mess of interrupt handling, etc.
That there was a dependency on a definition file would not be in scope, because that is not the kind of thing that drivers for hardware devices have done historically. It would be great to see WHQL updated to broaden its tests.
Even better would be MS saying "no third party drivers in the kernel" (Windows has long supported non-kernel drivers). However, that would depend on MS being willing to fight through the US and European courts to establish that it isn't an anti-competitive measure (ie. third party software writers' laziness is an insufficient reason to allow kernel access).
> From what others have said crowdstrike bypassed their own rollout procedure to force the update straight onto production networks, bypassing staging.
Exactly: this isn't the failure of an individual, but an organisational operations failure where a choice to "push and be damned" led to being damned.
Deeper is the clear failure to do fuzz testing, and similar, within an integration test suite in the publication pipeline, as this would quickly have exposed the driver's lack of input validation with respect to the definitions.
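A minimal illustration of the idea (the file format and parser here are entirely hypothetical; the point is that malformed definition files must produce a controlled rejection, never a crash):

```python
import random
import struct

def parse_definitions(blob: bytes) -> list:
    """Hypothetical definition-file parser: a 4-byte record count followed
    by length-prefixed records. Every field is validated before use instead
    of being trusted."""
    if len(blob) < 4:
        raise ValueError("truncated header")
    (count,) = struct.unpack_from("<I", blob, 0)
    offset, records = 4, []
    for _ in range(count):
        if offset + 2 > len(blob):
            raise ValueError("truncated record length")
        (length,) = struct.unpack_from("<H", blob, offset)
        offset += 2
        if offset + length > len(blob):
            raise ValueError("record overruns buffer")
        records.append(blob[offset:offset + length])
        offset += length
    return records

# Fuzz loop: random garbage must only ever raise ValueError.
rng = random.Random(0)
for _ in range(10_000):
    blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
    try:
        parse_definitions(blob)
    except ValueError:
        pass  # controlled rejection is the correct behaviour
```

In kernel code the equivalent of any uncaught exception is a panic/BSOD, which is exactly why a loop like this belongs in the publication pipeline.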
Any OS will have this problem from a kernel component like this.
Eg. RedHat: kernel panics due to CrowdStrike: https://access.redhat.com/solutions/7068083
Edit: note I make no comment on whether the use of definitions + driver in kernel mode is a good idea[1] rather than the driver just being a bridge to user mode worker where everything happens[2].
[1] I don't think it is.
[2] The chance of a company like CrowdStrike getting this right (handling the worker failing...) seems low.
First: MS did try to stop ring-0 antimalware but was threatened with anti-trust lawsuits by the usual crowd (including, IIRC, McAfee[1]). (Can't now find the reference.)
Second: CrowdStrike did the same thing to RedHat Linux last month, causing kernel panics: https://access.redhat.com/solutions/7068083
[1] CEO of CrowdStrike was McAfee CTO when McAfee was breaking systems...
According to the status history, yesterday's Central US region issue (not at all worldwide) was a configuration-change issue in a storage cluster. Which is very much not good, but not a malware update.
The status page does have an alert about VMs running CrowdStrike Falcon agent having problems.
> The multiple and up to 15 times just makes it even crazier.
But it does make sense. The theory is that eventually the network stack gets enough time before the next BSOD to update to the latest files, which don't trigger the BSOD.
I can also imagine this very much depends on the machine's internet connectivity being very low latency and very high bandwidth.
> My experience is that when allowed to figure it out for themselves, competent people will evolve a system that works quite well for their project
This is agile.
Not a set of defined processes/outcomes, but a team working to work better for the customer.
Of course that doesn't leave much room for management who want to be in control.
> The interesting thing about this survey is that it compares Agile with projects where the requirements were fully resolved before the project started
The alternative is that the requirements are changing; but that requires an understanding that "embrace change" does not mean "change is free".
> Maybe the sort of people who *really* get into Agile are simply the sort of people who couldn't successfully complete a project in the first place?
Or more likely (and this is my usual experience) that people who think they are doing agile, are not really doing agile. They do the easy bits (eg. avoid lots of up front documentation) but not the hard parts (eg. effective feedback process on how things are done; the prioritised feature list is actively maintained with the customer deciding when enough is done in place of a fixed budget).
Ie. almost every agile project is using the easy parts of agile to avoid the hard parts of waterfall, and thus is the worst case of both.
> Where's the change notification?
I think you mean "pull request": nothing has suggested this has been deployed.
And any PR without a decent summary of the changes it introduces should never be merged. Which covers this "AI" "help" with current good practice and without huge computational resources. Even if it is just a commit, there should be notes in the commit about the change; again, no AI needed for review.
> By 2030, software developers will be using AI to cut their workload 'in half'
As others have noted this is based on a whole pile of assumptions.
But even if those assumptions are correct, the workload will not halve. There will be a combination of fewer people to do the work, and more work to do.
I don't know what the process would be IF the inquiry decides it is appropriate.
That is given in section 35 of the Inquiries Act 2005 (as amended), subsection 7:
A person who is guilty of an offence under this section is liable on summary conviction to a fine not exceeding level three on the standard scale or to imprisonment for a term not exceeding the relevant maximum, or to both.
> Digital trade association TechUK
A trade association represents the interests of that trade and should not be seen as anything but biased. I very much doubt those building bit barns want to do anything except maximise their return on investment. Any additional costs are something to fight.
OTOH to make something like this work takes political leadership to balance the medium term costs with long term benefit (in this case long term is over decades). And when has that ever been seen in the UK in recent decades?
> Learning to think would be better than learning to code.
Correct. You learn to think by doing things: analysing problems, solving those problems.... which, when it comes to computers, is programming. There are other areas (physics, engineering, ...) that do this as well.
Or, for a completely different academic area: one does not study history solely to become a historian.
> If the satellites are at the corners of an equilateral triangle, then they need to know exactly the angles in order to bounce lasers off both the other satellites at the same time.
Or make the three edges the same length, which equally defines an equilateral triangle in flat space.
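To illustrate (a quick check, nothing from the article): with three measured edge lengths the law of cosines recovers the angles directly, so measuring lengths and measuring angles are equivalent, and equal lengths give 60-60-60:

```python
import math

def triangle_angles(a, b, c):
    """Angles (in degrees) of a triangle from its three side lengths,
    via the law of cosines: cos(A) = (b^2 + c^2 - a^2) / (2bc)."""
    A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    B = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    return A, B, 180.0 - A - B

# Equal edge lengths (2.5 million km here, roughly LISA-scale) give 60-60-60.
print(triangle_angles(2.5e6, 2.5e6, 2.5e6))
```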
I, OTOH, find the key is to take notes. I don't need to (usually) go back to the notes, but the act of writing down at least a summary of the points really helps me absorb the information.
Equally, if about to write some code, I find that writing some notes on what I plan to do helps keep the focus on the desired outcome (the code does what is intended). The code is likely to end up rather different from the plan; that does not matter. Having a plan makes all the difference.
But I fully understand we're all individuals[1].
[1] It is about time to watch "Life of Brian" again.
> This sounds like a feature created by an overly eager intern.
No. Because this has a completely reasonable use case. A page is loading (eg. images below the fold) when the user navigates to another page; at this point the resources still outstanding for the first page will not be needed.
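The legitimate case can be sketched with plain task cancellation (asyncio stands in for the browser's fetch machinery here; all the names are illustrative):

```python
import asyncio

async def fetch_resource(name: str) -> str:
    # Stand-in for an in-flight image/script download.
    await asyncio.sleep(10)
    return name

async def browse() -> int:
    # Page starts loading several below-the-fold resources...
    pending = [asyncio.create_task(fetch_resource(f"img{i}.png")) for i in range(3)]
    await asyncio.sleep(0.01)  # ...then the user navigates away.
    for task in pending:
        task.cancel()  # analogous to the client resetting the stream
    results = await asyncio.gather(*pending, return_exceptions=True)
    # Count how many downloads were abandoned rather than completed.
    return sum(isinstance(r, asyncio.CancelledError) for r in results)

print(asyncio.run(browse()))  # all 3 fetches cancelled instead of completing
```

The attack is simply this pattern done maliciously: open streams and reset them as fast as possible so the server does setup/teardown work while the client pays almost nothing.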
Just opening lots of HTTP/1 sockets has its own problems, and cancellations by the client (closing the socket) allowed similar DDoS attacks. The difference is that the HTTP/1 attacks have had a couple of decades in which defences and mitigations have been built up.