* Posts by deltics2

10 publicly visible posts • joined 4 Jan 2023

Microsoft, Google do a victory lap around passkeys

deltics2

This is the part I can't get my head around...

... if I get a new device, or simply want to access a site that I already have a passkey for from a different device, I don't have the passkey previously used. Presumably there will exist some mechanism by which I "prove" that I am the person who has/had that other passkey, so that a new/additional passkey can be created.

A bit like how I used to use a password to "prove" who I was before.

Can someone explain to me how that is not a bus-sized hole in the whole thing?

JetBrains' unremovable AI assistant meets irresistible outcry

deltics2

Re: Bad expectations (was: Bad code)

I *am* management, when it comes to my own code. But if you mean is it just to hit an arbitrary coverage target... NO.

I set myself a specific and reasoned coverage target of ... 100% (after allowance for one exception). Is that unreasonable? Is it meaningless? Neither.

I am fortunate to be working in a green-field context; this isn't about retrofitting coverage onto a legacy code base where considerations are different. And you would, of course, be right to point out that 100% coverage tells me nothing about the quality of my tests. What it DOES tell me however, undeniably, is that 100% of my code can be REACHED by tests. So, if I find a problem in my code or my tests, the solution will be test-ABLE.

Before using GoLang I too subscribed to the "arbitrary test coverage targets are meaningless" school of thought, largely as a pragmatic allowance for the fact that writing testable code in some languages and particularly in legacy codebases resulted in a disproportionate diversion of effort into refactoring/creating difficult to understand code in order to make it testable. With GoLang, writing testable code is trivial, to the point that the argument against coverage targets itself becomes arbitrary.

i.e. If you CAN (reasonably and practically) achieve 100% coverage, why would you choose NOT to? As long as you don't lose sight of what the metric measures: i.e. testABILITY not correctness.

Correctness of code is determined by the quality of your tests. Coverage is a metric only of the ability to provide tests that could/should deliver that correctness.

Now, as I mentioned, my 100% coverage target is not always actually 100%. For modules (re-usable packages in GoLang), 100% does in fact mean 100%.

But for microservices 100% means 100% of code outside of the service bootstrap function (main()) - a well-defined, tightly scoped exclusion.

With the patterns I have adopted for microservices in GoLang, the bootstrap is reduced to triviality; it falls firmly in the category of "not worth the effort" (of trying to figure out a way to make it testable, because it is the hardest part to make meaningfully testable).

The same effect could be achieved by using a coverage target of some % less than 100, except that it is impossible to ascribe a fixed %age to account for main.go across any/all microservices, as the proportion of the total code that it represents can vary. Whatever number you choose leaves the possibility that some small percentage of non-bootstrap code might also be untested (and therefore potentially untestable).

It is better, easier and more accurate (and less arbitrary) to exclude it explicitly and cover 100% of what is left.
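To make the shape of that concrete, here is a minimal sketch of the kind of trivial bootstrap I mean (the function names are purely illustrative, not from any particular codebase): main() does nothing but wiring, so excluding it leaves every other line reachable by tests.

```go
package main

import (
	"log"
	"net/http"
)

// newHandler builds the service's HTTP handler; this, and everything it
// calls, can be exercised to 100% coverage using net/http/httptest.
func newHandler() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	return mux
}

// main is the well-defined, tightly scoped exclusion: nothing here but
// wiring and a blocking call that is not worth the effort of testing.
func main() {
	log.Fatal(http.ListenAndServe(":8080", newHandler()))
}
```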

However, I should also say that in my day job I am also a "manager" (strictly speaking a Chapter Lead, which is not the same thing, though I do have people leadership responsibilities).

Again I am fortunate to work in a similar context there - green-field GoLang microservices. The two squads I have touch points with have come to the same conclusions (this influenced my philosophy in personal projects; the teams aren't just doing what their manager/management tells them... quite the reverse in fact).

deltics2

Re: Bad expectations (was: Bad code)

Yep, expecting CoPilot to write your code is doomed to fail. BUT, CoPilot is an absolute God-send for writing tests and documentation. Its ability to generate a description of a function, including example usage, is perfectly usable. It's not always perfect, and it sometimes gets stuck in a rut where it trots out the same text for a function that it generated for a previous, similar function, but if you correct its mistake it is less likely to make the same mistake again. It can also work wonders in generating README.md content; in one instance that blew me away, it suggested a 90% complete and 100% accurate "Feature List" for an open-source module (GoLang) I was working on!
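To give a flavour of the function descriptions I mean, they follow the usual godoc shape, something along these lines (entirely illustrative, not actual CoPilot output):

```go
package collection

// Contains reports whether needle appears as an element of haystack.
//
// Example:
//
//	found := Contains([]string{"a", "b"}, "b") // found == true
func Contains(haystack []string, needle string) bool {
	for _, s := range haystack {
		if s == needle {
			return true
		}
	}
	return false
}
```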

Similarly for writing tests: I have shown my colleagues the seemingly magical way that CoPilot can generate test cases for me. After using a VS Code snippet to stamp out a boiler-plate data-driven test, declaring a struct to be used for each test case (which includes a "name" field describing the test case) and writing the first test, for subsequent test cases I typically only need to provide the test case name and CoPilot will generate the rest of the test case for me. Often it will then start suggesting entire test cases, complete with name.
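For anyone unfamiliar with the pattern, a minimal sketch of that boiler-plate (the function under test and all names here are illustrative, not from my codebase):

```go
package abs

import "testing"

// Abs is a trivial stand-in for whatever function is actually under test.
func Abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

func TestAbs(t *testing.T) {
	testcases := []struct {
		name   string // the descriptive name; often all I need to type
		input  int
		result int
	}{
		// After the first case, CoPilot will typically fill in the rest
		// from the name alone - and then start suggesting names too.
		{name: "positive value is unchanged", input: 3, result: 3},
		{name: "negative value is negated", input: -3, result: 3},
		{name: "zero is unchanged", input: 0, result: 0},
	}
	for _, tc := range testcases {
		t.Run(tc.name, func(t *testing.T) {
			if got := Abs(tc.input); got != tc.result {
				t.Errorf("Abs(%d) = %d, want %d", tc.input, got, tc.result)
			}
		})
	}
}
```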

The idea that CoPilot and similar AI will put good engineers out of a job is idiotic and serves only one purpose: to provide click-bait headlines. What it CAN do is become that junior assistant programmer that all good engineers sometimes wish they had, someone they could give the boring jobs to.

I for one am happy to spend less time writing tests and documentation when my AI assistant can do that for me, leaving me with much more time to spend on the fun stuff that AI sucks at: actually solving problems.

(I haven't tried the JetBrains AI so I have no idea if it is any good for these use cases. And I should also add that I use GoLang predominantly and there may be aspects of that language, being incredibly simple and highly idiomatic, that make it particularly well suited for AI to be helpful. With other languages and other AIs your mileage may vary.)

The pen is mightier than the keyboard for turbocharging your noggin

deltics2

Re: Why aren't you taking notes?

Taking instructions that you need to be able to recall for a few minutes is very different from learning something that will stay with you for weeks, months, years, or your entire life. That, I think, is the difference that this research (and the empirical evidence it is based on) is concerned with: "Learning", as opposed to "short-term recall".

For myself (and yes, everyone is different) I developed my own "short-hand" for note-taking in class, enabling me to take notes efficiently while still paying attention and "absorbing". BUT, I would then revisit my notes shortly after the lesson and transcribe them to long-hand. That probably helped as well, by further reinforcing the information through a replay-and-revise exercise while it was still fresh in my memory, in addition to the notes I was transcribing.

Digital notes offer additional advantages such as searchability, linkability, referenceability and flexible re-organisation.

These days we get the best of both worlds: for several years now I have been using an e-ink notebook (reMarkable 2) for day-to-day note-taking and for taking notes when working through things like Udemy courses etc. For day-to-day to-do lists and journalling I use a BOOX Note Air 3C - for productivity tasks (journals, to-do lists etc) the ability to use specific apps while still gaining the embedding/retention benefits (*) of hand-written interaction with the device is a God-send, but the reMarkable remains my go-to for distraction-free, general note-taking and brainstorming.

* if I plan my day by typing into a list, I have to constantly consult my list through the day to remind myself what I was setting out to accomplish. If I hand-write my to-do list I find I remember most if not all of it, only returning to the to-do list app at the end of the day to update statuses and prep the next day.

Post Office boss unable to say when biz knew Horizon could be remotely altered

deltics2

Re: One flawed computer system is unfortunate, two is a pattern...

Do we know what those unspecified "concerns" were?

If those concerns had specifically referenced inaccurate balances and/or the ability for Fujitsu to remotely access and manipulate the system without the SPO's knowledge, then fair enough. But if that were the case I would expect this to be reported as "_these_issues_ (specifically and unequivocally) were identified during the trial", rather than the more vague and ambiguous "(unspecified) concerns were raised". Frankly this smells more like just another corporate entity in this debacle (in this case, the union) covering its own rear-end against any possible suggestion that they had failed to protect the interests of their members. i.e. "don't blame us! we SAID there were problems!".

I do not think there has ever been an IT system in history that went live with ZERO defects.

All sorts of things are found in beta/pilot/trial runs of software, some of which may be significant and need to be fixed before going "live", while others are able to be dealt with by documenting and warning/educating the users until they can be fixed (if deemed necessary to fix at all) and yet other problems can go wholly undetected and only ever come to light once fully deployed and implemented in production.

In short: whatever the concerns were that were raised, they may have been entirely unrelated to the specifics that contributed to this miscarriage of justice. If they were related, that would blow this case wide open, so why not disclose it?

deltics2

One flawed computer system is unfortunate, two is a pattern...

Fujitsu have form:

https://www.rnz.co.nz/news/national/504421/fujitsu-told-to-pay-corrections-3-point-9m-after-lengthy-legal-battle

I suspect a number of factors were involved....

1. Based on the representation of the Post Office's attitude to their sub-postmasters it seems likely that there was fundamentally a lack of trust. When Horizon was introduced I suspect that one of the "features" it was sold on was the ability to identify previously undetected fraud/theft. So when such cases started showing up, instead of being a "red flag", this was seen as "Horizon is working well". This explains why the Post Office represented it that way when an objective observer might suppose the opposite; i.e. not a cover-up (at least to start with) but a case of confirmation bias.

2. By the time there was a realisation that there was, in fact, a problem, a course of action had already been established and correcting it would have involved admitting to prior mistakes. In the face of that, continuing on the current course was understandably attractive.

3. Fujitsu had a commercial interest in ensuring that the Post Office believed that the system they had paid for was working.

I suspect that the Post Office initially were acting in good faith. Misguided and misinformed good faith, but good faith nevertheless.

The key to all of this is indeed the point at which this changed from good faith to wilful cover-up. As well as a "moral responsibility" to bear some of the costs of compensation, there are bound to be individuals at Fujitsu who are guilty of perverting the course of justice and should be held accountable as such - along with any managers who were aware, right to the top if necessary.

RIP: Software design pioneer and Pascal creator Niklaus Wirth

deltics2

In addition to the terms of use restrictions, Delphi CE is a different SKU than any other edition; it is not like Visual Studio Community Edition, which is identical to the paid-for edition (or was; it has been a long time since I had cause to use it so I don't know for sure these days).

If you can stomach wading through the "feature matrix" (28 pages, listing things that really shouldn't count as "features" in 2024) you will find some perplexing omissions in the "Community Edition".

Bizarrely, code formatting tools are missing in the CE SKU but most ironic of all, given that the Terms of Use are especially compatible with developers working on free (and most likely, therefore, open source) projects, most (if not all) of the VCL source code is also not present. Aside from the irony, the VCL source is a primary learning resource for anyone new to the framework, not to mention incredibly useful when tracking down elusive bugs to identify whether the problem is in the VCL rather than your own code (which happened to me more than once in my many years as a career Delphi dev).

And AFAIK, even with the CE you still have to contend with the Embarcadero license infrastructure which appears to be a constant thorn in the side of the (dwindling) Delphi community.

Apple pops blue bubbles of Beeper Mini's iMessage service again

deltics2

Re: Also an anti-SPAM measure

Nope, because that would imply that the decision is motivated by a desire to protect and/or enhance the UX of their customers. Yet that UX is compromised and corrupted for their customers that have contacts who use Apple devices but choose NOT to use an iPhone.

Claiming a motivation for doing something is not credible if someone simultaneously and demonstrably acts AGAINST that same motivation. In that case, there must be some other motivation that they do not wish to reveal. Except in this case they have pretty much revealed it; they believe that by doing so they can coerce people into buying more iPhones; i.e. it's not about their customers, it's about adding more $'s to their bottom line. Which is why anti-competition laws may be relevant (though whether they may be constructively applied remains to be seen).

deltics2

Re: Blue bubbles?

No. Although you evidently think you were asking a rhetorical question, you weren't.

It is NOT only about the colour they show up in. In fact, I doubt that anyone gives a hoot about the colour they show up in; except Apple. After all, it was Apple that chose to use different colours in the first place... in most other cases, if a messaging app even allows different colours to be used it leaves such decisions to the user.

So no, when it comes to iMessage and Beeper Mini, it's NOT about the colours.

It's about security... messages on iMessage are encrypted. SMS messages are not.

It's about quality (of shared media)... iMessage messages allow full-resolution images and video. No, you filthy-minded individual... in a family with young children, parents often like to share moments with their young-uns with a partner who might be busy elsewhere.

It's about user experience... my wife uses an iPhone, I use Android. When I send her an SMS, she gets it on her phone. If I send her a message from my iPad or Mac, she gets it on her phone. When she sends me a message it might go via SMS or it might go via iMessage; she has to consciously choose one or the other if she wants to be sure of how it is sent. If she doesn't, and it is sent over iMessage, I get the message if I am using my personal iPad or Mac at the time, but not if I am using my work Mac (I am not allowed to sign in to personal accounts on my work devices).

I always have my phone with me - Beeper Mini meant that no matter how iMessage behaved on the iPhone, my wife and I had reliable communication.

Yeah, yeah, we could both use Signal or WhatsApp or <insert favorite/shilled messaging app here>, but scale the problem out across all of my contacts and my wife's and we end up with conversations scattered across a plethora of disconnected and fragmented messaging apps, leaving SMS as the only common denominator. Which brings us back to the fact that SMS is insecure and results in an inconvenient and downright frustrating UX for an iPhone user when messaging with someone who, yes, has Apple devices but does NOT use an iPhone.

OneDrive back on its feet, but ongoing Skype credit problem hasn't gone away

deltics2

Re: "I feel like I'm having an aneurysm with OneDrive being down"

"OneDrive, like any cloud-storage provider, isn't the problem"

Um, except when it *is*. In the early days of Windows 10, the OneDrive client had a particularly irksome bug where, for reasons that were unclear and never fully explained, it took it upon itself to delete swathes of files from the cloud rather than initially sync'ing them to a newly upgraded/installed Windows 10 machine.