* Posts by PinchOfSalt

19 publicly visible posts • joined 2 Jun 2023

Black horse down: Lloyds online banking services go dark

PinchOfSalt

A realisation of what they are

As Lloyds gets rid of all its branch network, how long do you think it will take for them to realise that the comment from many years ago that banks are 'IT companies with banking licences' is actually true?

And that to outsource your core function should make you question why you exist?

PinchOfSalt

The much talked about bank of Mum and Dad :-)

Have we stopped to think about what LLMs actually model?

PinchOfSalt

The written word

I suspect part of this challenge is that we're talking about text and language as if that were communication.

They are part of it, but a long way from the whole.

As an aside, how good are LLMs at writing jokes? Or perhaps, in line with humans, how good are they at telling jokes?

City council faces £216.5M loss over Oracle system debacle

PinchOfSalt

Add to the mix, turkeys and Christmas

As always, great points.

If I could add one further: The users in these situations are not going to suggest processes that result in them losing their jobs. Consequently, there's little by way of incentive to be more efficient with the new system.

In any change, I've tried to group people up into:

People who want the change

People who will benefit from the change

People who will be disadvantaged by the change

You need a different means of engagement for each category, and you need to be clear about which groups and individuals are in each. Conflate them and disaster will happen. Malicious obedience is very difficult to undo once it sets in.

Most projects set out with the assumption that everyone has the same intent in mind. This is rarely the case, for the reasons above. As a consequence, requirements are captured in a way that assumes they all have equal validity, which of course they can't.

On top of this, there's the challenge of those within an organisation who are experts at what they do because they've done it for years. Sure, they have hugely valuable insight into how it has been done. But that's not the point of the new system. Taking 10-year-old processes and welding them into a brand new system is usually a bad idea. They cost a small fortune to customise into the last system and will cost a large fortune to weld into the new one. What's needed is to step back with someone who knows what that process looks like at its most efficient, present that, and then see how far from 'good' the client wants to be, based on specific business / vertical needs.

As someone further up stated, this is not a path that's easy for a consultant to follow as you're instantly up against the established knowledge and as such you often have to back down to keep your job.

AI stole my job and my work, and the boss didn't know – or care

PinchOfSalt

Lots of interesting points, but...

How can we take control of the content that's used for training?

If I put a notice on my website that says it cannot be used for AI training, therefore making it explicit that this is not permissible, can I then stop them using it, or legally challenge their use of it if I find they've infringed my licence for the use of my content?

According to reports I've seen, the largest constraint on AI is going to be the amount of available material for it to ingest. The big players have already set models off to ingest what they can from the Internet, so if we want to see a fairer approach to the use of AI, we need to control the training data more effectively - that's on us to change.

I'm not sure I see a very positive outcome from this AI thing the way it's going. Much like previous revolutions, its intent is to put more power and money into fewer and fewer people's hands. Ignoring all other challenges, like power consumption and sustainability, this is, I think, its largest negative.

A good friend of mine and I were discussing the future with AI in it. He predicted that AI would take over most people's jobs in a reasonably short period of time. I asked what the social consequences of that might be, and he said he didn't know. However, what he really meant was he didn't care as he was one of those who, in the short term, would benefit.

This, I felt, was not good. We've fallen into the world of 'we can' without the counterbalance of 'whether we should'.

AI chatbots amplify creation of false memories, boffins reckon – or do they?

PinchOfSalt

Authority

This has been worrying me for a while.

To coin a phrase, we are used to, and sort of expect, 'computer says no' behaviour from our IT. In fact we strive for it, to make our systems unquestionably reliable and deterministic. We've had around 75 years of teaching people that, largely, when a computer gives you a response, you can rely upon it (Horizon excepted, obviously).

This however is being up-ended with AI. These are not computer systems that we are used to. They are non-deterministic, and almost deliberately so.

However, the general public have not been taught to differentiate between the two. And in fairness neither have most of us.

So, there is a challenge here: the scenario pits a person who knows they have a fallible memory against a computer which that person believes has a perfect memory and does not wilfully deceive or make errors of judgement.

So, in some ways there is something new here. The relationship between people and people in authority positions has been explored many times over; however, the relationship between a person and a system where the person has preconceived ideas about its fallibility is new and probably needs research.

Study backer: Catastrophic takes on Agile overemphasize new features

PinchOfSalt

Re: A flaw in the initial requirements

Taking this a little more broadly, there are circumstances where asking the users results in the wrong answer.

You'd hope that when spending money developing software that you're pre-empting the problems you're going to have as well as the problems you currently have. This may require changes in process, removal of people from process, or even fundamentally removing a process and replacing it with a piece of logic.

Asking the users to define this is often met with the 'but that's not how it works' statement and they're right. And hence asking the turkeys about how great it will be after Christmas doesn't go so well.

So, users are important to an extent, so long as you're in the faster horses line of thinking. They're unlikely to be the ones specifying cars.

Intel to shed at least 15% of staff, will outsource more to TSMC, slash $10B in costs

PinchOfSalt

They've lost the consumer

I remember the days when understanding a processor portfolio was reasonably simple.

These days it's utterly impenetrable. If you're buying a laptop, it's pretty much impossible to compare the various versions of i3, i5 and i7 across the generations to know which gives the best bang for your buck.

As a consequence of this madness, I suspect a lot of people just go for the cheapest, since there's almost no other metric which provides a clear answer.

It feels a little like their internal project codenames and SKUs have escaped into the real world without a translation table for us mere purchasers to have any idea what we should buy - as very well demonstrated by their latest erroneous claim of shipping 'AI PCs' before they've launched a product that gets close to the required specification.

I'm sure they're carving out huge numbers of marketers in this round of cuts, but frankly, I'd keep a few back to put some sense and order into their product naming and communication.

CrowdStrike unhappy about Delta's 'litigation threat,' claims airline refused 'free on-site help'

PinchOfSalt

Blame where blame's due

There's a problem on both sides here.

1, vendor releases software without testing it appropriately - that's their problem and they need to address that

2, customer installs software and deploys it to production without testing it appropriately - that's their problem and they need to address that

Of all the people on this site, how many of you have policies that allow for untested software to be deployed into production? And even further, across the entire estate?

I was somewhat incredulous that so many organisations were impacted by this.

Their end-user licence will be clear that they do not warrant the software to be bug-free... there's a reason they say that.

From network security to nyet work in perpetuity: What's up with the Kaspersky US ban?

PinchOfSalt

Embedded versions

I'm not sure if this is still the case, but the AV industry used to share core engines and signature files between them. Some of the vendors were a blend of four different engines and signature sets and Kaspersky was often used as one of these that supplied it as an OEM solution.

I wonder whether this is still prevalent, and what those vendors are now going to do.

Fragile Agile development model is a symptom, not a source, of project failure

PinchOfSalt

Re: History lessons

Spot on...

My last organisation was a content management system integrator, and we used Agile for software development, but within a time- and cost-bound structure.

We worked for the world's largest brands, implementing their front-end tools to allow them to build web experiences, and were voted by our clients the world's best at it due to our customer empathy - ie we listened to their business problem and delivered a mix of training, process, tools and software that addressed it.

We documented all high-level requirements, prioritised them, then did deeper requirements for those deemed important enough to go into the MVP, and through that process we could manage senior management expectations and the rate of change for the end-users within the business who needed to actually use the things we were building. Yes, we did allow for some changes, but that was done through a strict process during sprint planning.

Agile was not what we did. It was the way we did some things.

Having left that place and now working alongside companies that call themselves technology agencies, it's appalling to see methodologies being decided upon before even speaking to the customer. Or, Agile being used to do creative designs.

Agile is not unique in this... ITIL also went through the same cycle, with people without experience (ie consultants) and tools providers happy to take your cash and answer whatever question you have with the phrase 'ITIL solves that'.

I'm seeing the same now with the MACH alliance. Yet another sensible solution to a specific problem that's now in the hands of marketing, being spread out in an attempt to gain market share it simply should not have.

IT infrastructure scared away potential buyers of struggling e-commerce site

PinchOfSalt

A little while ago this was an Oracle success story...

https://blogs.oracle.com/retail/post/online-sports-retailer-wiggle-uses-oracle-to-support-double-digit-growth

Dublin debauchery derails Portal to NYC in six days flat

PinchOfSalt

I've seen the one in Lublin in Poland and there was no rowdiness or odd behaviour around it. It's connected to Vilnius in Lithuania and was really interesting to watch.

I do wonder whether there's a 'settling in' period where this sort of embarrassing behaviour happens, and then goes away as the shock value has gone.

I agree with the comments above though - the time zone difference is problematic.

OpenAI CEO Sam Altman is back on the company's board, along with three new members

PinchOfSalt

Under redundant, it says 'see redundant'

So Google are now providing you the tools to create terrible AI content and then also using AI to identify that content and move it down the rankings.

They're at war with themselves. Who do you think will pay the bill for all this creation and cleansing?

AI to fix UK Civil Service's bureaucratic bungling, deputy PM bets

PinchOfSalt

ROI

I sense that the AI peddlers are just the child catcher in Chitty Chitty Bang Bang.

Look at the wonderful things I can do, and it's all free today...

Then suddenly the paywall appears and starts rising, and you're trapped paying them instead of paying employees. You've got rid of your employees and now you're just an addict with a very expensive habit.

Mamas, don't let your babies grow up to be coders, Jensen Huang warns

PinchOfSalt

Given all the self-help (self congratulatory) books on being a CEO, surely this job could be done by an AI by now.

More than enough training guff in those tomes to feed the thing and then unleash it onto the stock market.

Perhaps Jensen would like to be the first turkey to vote for Christmas, now he's invented Santa Claus?

What Big Tech's balance sheets this week said – and didn't say – about real-world AI adoption

PinchOfSalt

Hiring AI

Perhaps we should apply the rules of hiring to AI.

When we hire someone, we ask a lot about their views and experience, look at their backgrounds, ask for references, and check where they were educated.

Why would AI be much different?

Deepfake CFO tricks Hong Kong biz out of $25 million

PinchOfSalt

Re: Corporate Culture

I'm not so sure.

We did a security test at my last place where the security team used my name and account to try to persuade people to do things they know they shouldn't. I was the COO and I'm naturally very calm, but this character was very demanding. A large number of people complied at all levels of seniority.

Their explanation was that because I never demanded anything in that way before, they assumed that it must have been a really bad emergency, so breaking rules was therefore justifiable.

My day off was somewhat ruined though as I got a lot of phone calls that day from people wanting to know what the emergency was!

US Air Force AI drone 'killed operator, attacked comms towers in simulation'

PinchOfSalt

Benchmark of AI

Good point raised about requirements. We haven't yet reached the point of being able to define the requirements of 'good behaviour' for humans, so I'm not sure we're articulate enough to define and codify it into a system. Note I say 'good behaviour' since 'bad behaviour' is a constantly moving feast, hence our statute books get ever longer.

I'm also curious why we are benchmarking the 'intelligence' against humans - ie whether this new thingy can be mistaken for a human. I'm just not sure how relevant that is when assessing the risk of a bad outcome. The risk is almost the inverse: its failure to 'think' like us is probably our biggest concern, as we wouldn't be able to reason with it using our own belief system, which is both genetically and socially engrained into almost all of us.

If aliens arrived this afternoon, we'd take a risk-averse approach. I'm not sure why it's any different just because we've invented our own alien.