* Posts by PinchOfSalt

54 publicly visible posts • joined 2 Jun 2023


Datacenters have a public image problem, industry confesses to The Reg

PinchOfSalt

Re: Stupid clever people

You are completely right.

When I wrote that, I questioned the use of the term 'nice', since it's subjective.

However, I very, very much doubt that people are really complaining about the water usage, etc.

The primary concern I've seen in reports from the US and UK is that these buildings are huge and disruptive to the locals who live there. They're often pushed through against the wishes of the local people because the economic benefit to the region is judged greater than the desire to maintain the current local look and feel.

The secondary things that then come up are water usage etc, once they lose the argument about maintaining the local feel of the proposed location.

PinchOfSalt

Stupid clever people

Classic IQ vs EQ problem.

How is it news that everyone wants the nice bits of something without having to suffer the consequences of that nice thing?

I want decent roads, but I don't want to pay the tax to make them decent.

I want a stable electricity supply, but I don't want any power stations or distribution network disturbing my view.

We live in a world where the gap in understanding between production and consumption is so vast that I'm bewildered that supposedly clever people are surprised.

Adobe turns subscription screw again, telling users to pay up or downgrade

PinchOfSalt

Really?

A single $56 subscription allows you to generate $10k+?

Most creative businesses have a net margin of around 8-10% once overheads and directors' salaries are properly accounted for (ie not hidden in dividends).

What sort of services are you delivering that allows for a single person to be generating more than $10k per month with just this software suite?

I'm not arguing that you don't feel it's good value; for you it might well be. But in most creative agencies, software subscriptions are a meaningful chunk of your cost base, and they're increasing far faster than the rate card can.

Sci-fi author Neal Stephenson wants AIs fighting AIs so those most fit to live with us survive

PinchOfSalt

AI vs AI battles

I'm not sure about this.

I'm less concerned about some sort of revolution of AIs taking over the world.

I'm more concerned that the use of AI will cause us to make ourselves extinct through boredom or lack of interest in other people and reality.

Anthropic’s law firm throws Claude under the bus over citation errors in court filing

PinchOfSalt

Error correction

At a previous employer we were using forms of AI and other methods to validate that marketing content complied with client brand guidelines.

It was actually reasonably good at this and better than humans.

The reason for this is that it didn't get bored, and it was just as adept at spotting patterns as anti-patterns, whereas humans are only really good at pattern matching and very quickly tire of trying to identify anti-patterns, as it's very intensive brain work.

So, having AI draft something and then asking a human to proof read it is not exactly ideal.

There is, of course, no corresponding data to show how many times citations are incorrectly made in court through pure human error, as that's not very newsworthy, so before we throw this under the bus, it's probably worth understanding that.

Trump ends Biden-era dream to cap US AI chip exports

PinchOfSalt

Copyright infringement anyone?

I'm confused.

Either there's a desire for all information to be freely available for the improvement of AI, or it isn't.

Why is NVidia getting a shout out here vs authors?

One can only wonder...

The Telegraph jumps the gun on World War III

PinchOfSalt

Process flow

As I recall, there was much fanfare about reducing the number of steps to publish from 50 odd to 17.

Sounds like it now has too few steps.

AI models routinely lie when honesty conflicts with their goals

PinchOfSalt

Conflicts of interest

I'm not sure the term lying is entirely fair.

There are two basic measures at play here:

1, Being factually accurate

2, Being likable to all users

These two things are in obvious conflict.

What we're seeing is the various developers and trainers playing with the balance between the two. I wouldn't therefore call it lying.

As a result, I don't feel this is a design flaw. It's a design feature.

Procter & Gamble study finds AI could help make Pringles tastier, spice up Old Spice, sharpen Gillette

PinchOfSalt

Game changer?

I'm not sure if this is a challenge of the summarisation of their findings, but in the main body it talks of 'being comparable to working with another human' and finishes with 'this is a game changer'.

There's nothing game-changing about being able to do what we can already do, when the existing way of doing it uses far less energy and water.

On the issue of AI copyright, Blair Institute favors tech bros over Cool Britannia

PinchOfSalt

What's the point?

If we take a step back for a moment, we have a sustainability problem.

Today, a large part of the reason for creativity is money making, whether you write a book, paint, or produce adverts.

The reason it makes money for you is that you have a reasonable expectation that people consuming your works pay for them.

If we remove that motivation, then who is going to continue to bother writing books, painting or producing adverts?

If people stop doing this because they don't get paid, then what are these AI tools going to get trained on?

The simplest possible solution in the short term could be paywalls for valuable content. Let the bots read the simplest possible versions for free, but anything of value needs a payment. This is how the record industry deals with it and it seems pretty sensible and scalable.

Asda's tech separation from Walmart nears £1B as delays mount

PinchOfSalt

Are they also working for Birmingham City council?

Just wondering...

Oracle Cloud says it's not true someone broke into its login servers and stole data

PinchOfSalt

Re: Encrypted passwords?

We use compute power to make AI cat videos.

There's more than enough compute lying around to do this.

AI models hallucinate, and doctors are OK with that

PinchOfSalt

Re: Not surprised

I've asked it before and I'll ask it again.

If / when they go down this path, what will it mean to be a 'doctor'?

If the doctors are using it, then why not give it directly to patients and cut out the meatbag altogether?

I'm not suggesting this is a good outcome, but some nugget somewhere will do it.

To some extent it makes sense to do this... It currently takes months to years to get the results of a brain scan for certain types of neurological problems, so having a means of doing this more efficiently / effectively makes sense. But I don't trust people not to get lazy and just take this system at its word without doing the due diligence, much as self-driving cars will teach drivers to be inattentive at the wheel, or GPS users to have less well developed brains for location mapping and route finding.

Under Trump 2.0, Europe's dependence on US clouds back under the spotlight

PinchOfSalt

Re: Why?

Evidence please.

Despite Wall Street jitters, AI hopefuls keep spending billions on AI infrastructure

PinchOfSalt

Distortion field

The figures from nVidia won't really show anything that we can use to determine the actual market.

The biggest part of all of this is how easily people will pay for this thing.

For example, how happy are you paying for the GPS functionality in your car you never actually use as it's utterly shite? How much are you paying for it?

To the GPS market, this shows massive adoption, revenue etc

To the car industry, it's a necessary evil that this feature box must be ticked to get people to buy said cars.

To the end consumer, they'll have bought it based on the ticked box, but use their phone based GPS anyway as it just works better, is more up to date, integrated with their contacts or whatever.

I suspect this distortion will be what we see for a long time, until the car industry looks for ways to cut licensing costs and starts marketing in-car GPS as 'so 2020', with phone integration being what all the sensible people are buying.

So, for a while, the people at the bottom will show massive revenues, the middle layer will attempt to make money out of this by feature-stuffing their already overstuffed products, and the end users will look on bemused as the software they are used to gets more and more complicated, while they stick to just the features they used 5 years ago.

And everyone claims this is a success.

Our world faces 'unprecedented' spike in electricity demand

PinchOfSalt

Re: NetZero

I think this depends on your planning horizon.

Of all the countries that I know of, China has the longest planning cycle. They think multi-generationally.

They currently produce almost 30% of their energy using clean energy sources. They're also heavily biased towards EVs.

I agree that in the short term it will be cheaper, cash-wise, to be in oil and gas. But it only takes a small trade war or a real war and suddenly that gets awfully expensive, awfully quickly.

If stability is your game, then renewable energy production and storage (water, flywheels, batteries etc) are going to be better long term I suspect.

Workday erases 8.5% of workforce because of ... AI

PinchOfSalt

Feature failure

We had it at my last place.

The US couldn't reconcile its books for several months as it couldn't do the international transfers the way they needed it to.

We also wanted Zimit as a CPQ but gave up on that once we heard that Workday was going to rewrite it after buying it.

In fairness, as with most of these things, the customer was a large part of the problem. They tried to do the implementation on the cheap by lifting and shifting existing processes into a new system. This made the thing heavily customised.

Trouble is, in whose interest is it to tell a client that they should hold off on that big software licence and that system integrator job whilst the business gets reorganised?

Trump's freshly minted meme coin passes $10B market cap

PinchOfSalt

Local implications

What are the implications of this on the UK?

The first one that comes to mind is Sir Keir's idea of UK growth through AI.

If Tangerine Man is going to 'Drill baby drill', freeing up gas and oil for burning to make electricity, whilst we take the more long-term, sustainable approach of using renewables (yes, I know someone's going to argue about this), then our power is going to be even more expensive than it already is compared to the US. So who is going to want to put a datacentre here?

Local infrastructure is driven by three things:

Maintainability - you need to get to it to do something to it

Latency - needs to be close for the application to do what it needs to do within the time allowed

Data residency - need to keep the data within the bounds of your jurisdiction

It seems to me that the first is somewhat irrelevant, given the way AI platforms are built. They're homogeneous platforms of compute capacity, so you can happily outtask this to someone else in a remote place.

The second is possibly important, but the sorts of workloads put onto AI at the moment are not that time sensitive, so this is less of a problem.

The third, well, I'm not sure that our rules are tight enough to force people to build here. We don't have a set of rules like China where the data must be recorded in country (but may be copied outside with strong business reason).

What do you think?

Apple Intelligence summary botches a headline, causing jitters in BBC newsroom

PinchOfSalt

Re: iPhone improvements???

We used to speculate that Oracle's products were built with the same design ethic.

Win the deal on features ticked on an RFP response with a UX so hideous that you'd then spend a fortune with their consultants to fix it.

GitHub's boast that Copilot produces high-quality code challenged

PinchOfSalt

Re: What will it mean to be a....

I don't think I confused anything. I asked a question, you implied that this was something I agreed with, when it isn't.

However, what you state is correct. There will be a lot of people who conflate the two and use generative AI to do things it definitely shouldn't.

I watched a post on LinkedIn several months ago where someone said they were using GenAI to summarise differences between NDAs. Complete lunacy given the risks involved, but they felt they were being clever in avoiding legal fees.

PinchOfSalt

What will it mean to be a....

There's an underlying question here which I've found troubling.

If people are going to use such tools to do things they couldn't do before, what will it mean to be a professional?

For example, to be an accountant today you have to not only study and pass exams but also practice it for a period of time. Then you can rightfully claim to be an accountant and people won't call you out for overstating your competence.

However, in future, with an AI widget telling a layman how to do accountancy, would that layman be able to claim they no longer need an accountant? And if the advice from the AI widget is sufficiently good, are they effectively an accountant themselves?

I pick this example as there is a bar set with a professional standards body, but this could apply to anything in the knowledge economy.

The workplace has become a surveillance state

PinchOfSalt

Re: "there is a growing desire among"

A friend of mine had to remove a member of his team. I would struggle to call him a Nazi for having done so, but let me explain so you can judge.

Worked an average of 2 hours per day

Took every Wednesday afternoon off without permission

Would drop off client calls or take client calls from his car whilst he was driving his children around during normal working hours

Would extend holidays without explanation, permission or apology

Didn't attend client meetings repeatedly

Persistently late with deliverables without explanation

Oddly, he got defensive when this was raised with him several times, suggesting his boss should really be focused on the outcomes he delivered and the output he produced. Sadly there was little of either, so my friend was drawn to how much time he was spending at his 'desk' to try to understand the delivery gap he was experiencing.

Unfortunately people behaving like this poison the well for others.

Microsoft rolls out AI-enabled Notepad to Windows Insiders

PinchOfSalt

What's going on with it needing to load more quickly?

What on earth has happened that a text editor on a modern computer takes enough time to load that anyone could ask for it to be quicker??

Seriously, how can that be possible?

A text editor 20 years ago took about a second. How can it be possible that with all the improvements in both I/O bandwidth and processing power that it's not able to do this faster than you can blink?

PinchOfSalt

Re: In Notepad? Really?

I'm not sure that the key here is that Notepad alone will cause you to need to buy more AI credits.

The goal will be to build so many ways into the ecosystem to burn credits that it becomes impossible to control in any sensible manner. This shifts them to having stable revenues via subscriptions, plus nice, high-margin additional spend that procurement and management can't easily constrain.

Voice-enabled AI agents can automate everything, even your phone scams

PinchOfSalt

Re: YMMV with this....

Cost is only one variable.

People are untrustworthy, so may also be telling the police what you're doing.

Also, there's far less physical infrastructure involved, so it's less easy to identify.

Third, you can run the scheme across jurisdictions, complicating any investigation, slowing down the time to capture to the point where you can pretty much just keep going forever.

OpenAI loses another senior figure, disperses safety research team he led

PinchOfSalt

It could, to a great extent, by de-skilling the job.

Instead of having to know the building regulations, be a certified electrician, etc, you can rely on the tool to tell you how to diagnose a problem, what to do to fix it and what final checks need to be done to validate the result.

There is a troubling and interesting challenge with this sort of technology, which started with the arrival of the Internet. With more and more info online, the qualifications and regulations to be in a trade have been updated more and more frequently. This helps to protect the industry, to some extent, from the DIYer who looks at a video and does the work for themselves. However, it is only a stalling exercise, since once the regulations are converted into a tool that can process the regulation and 'see' the context in which it needs to be applied, that protection no longer holds.

Put more generally, there's a question that needs to be addressed in future about what it means to know a subject and to be a specialist. What will it mean to know 'law' or to be a 'lawyer' if you have a sufficiently 'useful' AI tool that can process the problem and provide a way to resolve it within the normal bounds of that discipline?

I don't know the answer to that, but it feels like we're going to need to come up with something fairly quickly.

PinchOfSalt

A process flow

This is I think what we're seeing in action:

1, Invent something that sounds incredible, trained on data you've stolen

2, Launch it and accept it's a bit rubbish, but expect that with more development and data you can make it better

3, Do enormous levels of marketing to make everyone feel like they're an idiot if they're not using it (actually training it whilst making themselves more stupid since they're becoming dependent)

4, Use the input from the users to train the system

5, Start replacing the lower level people who have been using it as you now have sufficient training to replace them

6, By showing that you've improved the system, you encourage more people to use it as the quality has improved

7, Start eating away at the next level of people

8, Repeat until you have a pretty decent control of most jobs

9, Put your prices up since the people doing those jobs are now long gone and the systems and processes are forgotten, so now you're irreplaceable

10, Keep going as the governments will be too terrified to stop you as you are now national critical infrastructure

PinchOfSalt

Re: "remove the obligation to work for a living"

I think you're missing a piece of the puzzle.

Tax.

Today we as countries derive income from people - income tax and consumption tax and some capital gains.

If no-one is working, then that tax disappears.

There is no proven model through which we've been able to effectively tax multinationals and prevent them extracting profits to tax havens. As these corporations (or whatever name you'd like to call this new power structure) grow, they will gain power and become even more difficult to govern.

As a result, the only outcome that is foreseeable is that these same companies that already extract all their profits will do so in future, so all that money that was flowing through a country's economy will be extracted.

The net result is that the only structure able to afford to pay UBI would be the AI companies. But they will have no interest in paying, hence why they extracted the cash in the first place.

This also ignores the societal problems of living without purpose. There's a very good reason that in the UK the average period for drawing down a pension is 8 years. It's not because we retire too late to enjoy it, it's that the process of retirement removes purpose, and with it the desire to live. Watching cat videos cannot give anyone purpose. It is a time filler between having purpose and death.

Anthropic's latest Claude model can interact with computers – what could go wrong?

PinchOfSalt

Depends on which side of the fence you are...

I think that last piece had a couple of typos. What they meant was: Anthropic recommends that developers experimenting with Claude's computer use API "take the relevant action to maximise these kinds of opportunities."

Sorry, but the ROI on enterprise AI is abysmal

PinchOfSalt

Re: Remember when that carbonated drink company rebranded as a blockchain company....

I don't think that is their point.

Your definition of success is long term. If you redefine success as 'someone gives a start-up lots of money that they spend on huge salaries and nice expenses whilst not actually delivering much and just enjoying the ride', then this could be very successful.

AWS boss: Don't want to come back to the office? Go work somewhere else

PinchOfSalt

Re: It may be a security issue.

I think we're missing something here that I experience every day whilst working remotely.

There is no better communication style than face to face in the same place. This isn't an opinion, it's a well researched subject over decades.

Our 'collaboration tools' are good, but they don't replace that innate ability to communicate.

Neither do they allow the flow of information that happens in a face-to-face situation. Take a group of 10 people and put them on a call and everyone must speak one at a time. Put them in a room, and they can break slightly away from the group and form smaller groups. People can move from group to group by watching who is in which group and overhearing a little of what is under discussion. That can't happen online with anything like the fluidity of face to face.

I live in the middle of nowhere and it costs me £100 a day to go into London, so I don't do it every day. Equally, I work with many clients who are not in the country, so I have limited need to go into London. However, every time I sit down with a group of people face to face, I can see that advantage, and so I do it as often as possible.

Where it doesn't work is exactly what they've said. Being in an office 60% of the time doesn't actually change the dynamic at all. Get 5 people together and 2 of them are not in the office, so everyone has to go back to being online, or run the risk of a two-tier discussion, to the frustration of those who aren't in the room and can't participate to the same quality. Or it needs to be planned a long time in advance so that everyone clears their personal schedules to travel into the office, which takes a considerable amount of organising.

To round it off, perhaps I'm suggesting that we should stop talking only about the quantity of communication (which is what most of the discussion is about) and focus more on the quality of communication.

Get more licenses for less with SAP price tiering, advise experts

PinchOfSalt

Really?!

I'm curious about this.

I've never seen a software provider set out their pricing in such a way as to create this sort of 'saw tooth' effect of total order value going up and down as volume increases.

Usually it is laid out such that incremental volume is cheaper per unit, but the total price never goes down.

The numbers look like a normal SaaS / software pricing structure but the explanation of it doesn't.

To use an example for completeness:

If units 1-1000 are £10 each and units 1001-2000 are £5 each, then buying 2000 licences costs (1000*10)+(1000*5) = £15,000. There's no way for the total price to go down as more volume gets added. I've never seen anyone price it so that the entire 2000 are charged at £5, as that would make the 1-1000 price point pointless.
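As a minimal sketch of that conventional incremental-tier calculation (in Python, using the same illustrative £10/£5 bands, not any vendor's real price list):

```python
# Incremental tier pricing: each unit is charged at the rate of the band it
# falls into, so the total can never decrease as volume increases.
# Illustrative bands only, not a real price list.
TIERS = [(1000, 10.0), (float("inf"), 5.0)]  # (units in band, price per unit)

def incremental_price(quantity: int) -> float:
    total, remaining = 0.0, quantity
    for band_size, unit_price in TIERS:
        in_band = min(remaining, band_size)
        total += in_band * unit_price
        remaining -= in_band
        if remaining <= 0:
            break
    return total

print(incremental_price(2000))  # (1000 * 10) + (1000 * 5) = 15000.0
```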

The only logic I see for doing as is suggested in the article is to be duplicitous - get the customers to sign up and roll out more licences than they need and then change the pricing structure so you capture them once addicted. Whilst this is possible, and perhaps has even been done, it tends to have pretty bad outcomes for your reputation.

Eric Schmidt: Build more AI datacenters, we aren't going to 'hit climate goals anyway'

PinchOfSalt

Don't forget the water...

If you think the energy consumption is nuts, add in the water consumption. This was linked to in an earlier story from The Reg.

My problem isn't with AI. It's with our use cases for it. We're slathering it all over things we can already do, making ourselves more stupid and subsequently unemployable, rather than being laser-focused on things we cannot do.

A simple application of Maslow's hierarchy of needs would help this focus considerably too. If we haven't got enough water to give people safe, drinkable water, why do we feel it's appropriate to use it to cool a datacentre so someone can save themselves writing an email?

https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/

Fresh court filing accuses Oracle of creating 'maze' of options 'hidden' in 'contract'

PinchOfSalt

Not just Oracle

A couple of weeks ago my bank asked if I wanted them to review my pensions. I said yes, why not.

They then sent me a 160-page agreement to sign to allow them to do this assessment, with 10 days to review and sign it.

I'm not at all sure this would be considered reasonable or be able to be enforced in a court. I review contracts fairly regularly as part of my work, but this was just impenetrable.

Brazilian court sprays Musk's X with more fines for returning after ban

PinchOfSalt

Re: And that is what is going to bury him

Depends on which arm of Fidelity. The Johnson family also has a private investment arm which holds the likes of Colt etc.

Black horse down: Lloyds online banking services go dark

PinchOfSalt

A realisation of what they are

As Lloyds gets rid of all its branch network, how long do you think it will take for them to realise that the comment from many years ago that banks are 'IT companies with banking licences' is actually true?

And that to outsource your core function should make you question why you exist?

PinchOfSalt

The much talked about bank of Mum and Dad :-)

Have we stopped to think about what LLMs actually model?

PinchOfSalt

The written word

I suspect part of this challenge is that we're talking about text and language as if that were communication.

They are part of it, but a long way from the whole.

As an aside, how good are LLMs at writing jokes? Or perhaps, in line with humans, how good are they at telling jokes?

City council faces £216.5M loss over Oracle system debacle

PinchOfSalt

Add to the mix, turkeys and Christmas

As always, great points.

If I could add one further: The users in these situations are not going to suggest processes that result in them losing their jobs. Consequently, there's little by way of incentive to be more efficient with the new system.

In any change, I've tried to group people up into:

People who want the change

People who will benefit from the change

People who will be disadvantaged by the change

You need a different means of engagement for each category and be clear about which groups & individuals are in each. Conflate them and disaster will happen. Malicious obedience is very difficult to undo once it's set in.

Most projects set out with the assumption that everyone has the same intent in mind. This is rarely the case, for the above reasons. As a consequence, requirements are captured in a way that assumes they all have equal validity, which of course they don't.

On top of this, there's the challenge of those within an organisation who are experts at what they do because they've done it for years. Sure, they have hugely valuable insight into how it has been done. But that's not the point of the new system. Taking 10-year-old processes and welding them into a brand new system is usually a bad idea. They cost a small fortune to customise into the last system and will cost a large fortune to weld into the new one. What's needed is a step back with someone who knows what that process looks like at its most efficient, present that, and then see how far from 'good' the client wants to be, based on specific business / vertical needs.

As someone further up stated, this is not a path that's easy for a consultant to follow as you're instantly up against the established knowledge and as such you often have to back down to keep your job.

AI stole my job and my work, and the boss didn't know – or care

PinchOfSalt

Lots of interesting points, but...

How can we take control of the content that's used for training?

If I put a notice on my website that says it cannot be used for AI training, therefore making it explicit that this is not permissible, can I then stop them using it, or legally challenge their use of it if I find they've infringed my licence for the use of my content?

According to reports I've seen, the largest constraint on AI is going to be the amount of available material for it to ingest. The big players have already set models off to ingest what they can from the Internet, so if we want to see a fairer approach to the use of AI, we need to control the training data more effectively - that's on us to change.

I'm not sure I see a very positive outcome from this AI thing the way it's going. Much like previous revolutions, its intent is to put more power and money into fewer and fewer people's hands. Ignoring all other challenges like power consumption and sustainability, this is, I think, the largest negative to it.

A good friend of mine and I were discussing the future with AI in it. He predicted that AI would take over most people's jobs in a reasonably short period of time. I asked what the social consequences of that might be, and he said he didn't know. However, what he really meant was he didn't care as he was one of those who, in the short term, would benefit.

This, I felt, was not good. We've fallen into the world of 'we can' without the counter balance of 'whether we should'.

AI chatbots amplify creation of false memories, boffins reckon – or do they?

PinchOfSalt

Authority

This has been worrying me for a while.

To coin a phrase, we are used to and sort of expect 'computer says no' type of behaviour from our IT. In fact we strive for it, to make our systems unquestionably reliable and deterministic. We've had around 75 years of teaching people that largely when a computer gives you a response, you can rely upon it (Horizon excepted, obviously).

This however is being up-ended with AI. These are not computer systems that we are used to. They are non-deterministic, and almost deliberately so.

However, the general public have not been taught to differentiate between the two. And in fairness neither have most of us.

So, there is a challenge here in that the scenario pits a person who knows they have a fallible memory against a computer which the person believes has a perfect memory and does not wilfully deceive or make errors of judgement.

So, in some ways there is something new here. The relationship between people and people in authority positions is something that has been explored many times over; however, the relationship between a person and a system the person presumes to be infallible is new and probably needs research.

Study backer: Catastrophic takes on Agile overemphasize new features

PinchOfSalt

Re: A flaw in the initial requirements

Taking this a little more broadly, there are circumstances where asking the users results in the wrong answer.

You'd hope that when spending money developing software you're pre-empting the problems you're going to have as well as the problems you currently have. This may require changes in process, removal of people from a process, or even fundamentally removing a process and replacing it with a piece of logic.

Asking the users to define this is often met with the 'but that's not how it works' statement and they're right. And hence asking the turkeys about how great it will be after Christmas doesn't go so well.

So, users are important to an extent, so long as you're in the faster horses line of thinking. They're unlikely to be the ones specifying cars.

Intel to shed at least 15% of staff, will outsource more to TSMC, slash $10B in costs

PinchOfSalt

They've lost the consumer

I remember the days when understanding a processor portfolio was reasonably simple.

These days it's utterly impenetrable. If you're buying a laptop, it's pretty much impossible to compare the various versions of i3, i5, i7 across the generations to know which gives the best bang for your buck.

As a consequence of this madness, I suspect a lot of people just go for the cheapest, since there's almost no other metric which provides a clear answer.

It feels a little like their internal project codenames and SKUs have escaped into the real world without a translation table for us mere purchasers to have any idea what we should buy - as very well demonstrated by their latest erroneous claim of shipping 'AI PCs' when they haven't yet launched the product that gets close to the specification required.

I'm sure they're carving out huge numbers of marketers in this round of cuts, but frankly, I'd be keeping a few back to put some sense and order into their product naming and communication.

CrowdStrike unhappy about Delta's 'litigation threat,' claims airline refused 'free on-site help'

PinchOfSalt

Blame where blame's due

There's a problem on both sides here.

1, vendor releases software without testing it appropriately - that's their problem and they need to address that

2, customer installs software and deploys it to production without testing it appropriately - that's their problem and they need to address that

Of all the people on this site, how many of you have policies that allow for untested software to be deployed into production? And even further, across the entire estate?

I was somewhat incredulous that so many organisations were impacted by this.

Their end-user license will be clear that they do not warrant the software to be bug free... there's a reason they say that.

From network security to nyet work in perpetuity: What's up with the Kaspersky US ban?

PinchOfSalt

Embedded versions

I'm not sure if this is still the case, but the AV industry used to share core engines and signature files between them. Some of the vendors were a blend of four different engines and signature sets and Kaspersky was often used as one of these that supplied it as an OEM solution.

I wonder whether this is still prevalent, and what those vendors are now going to do.

Fragile Agile development model is a symptom, not a source, of project failure

PinchOfSalt

Re: History lessons

Spot on...

My last organisation was a content management system integrator and we used agile for software development but within a time and cost bound structure.

We worked for the world's largest brands, implementing their front-end tools to allow them to build web experiences, and were voted by our clients the world's best at it due to our customer empathy - ie we listened to their business problem and delivered them a mix of training, process, tools and software that addressed that problem. We documented all high-level requirements, prioritised them, then did deeper requirements for those deemed important enough to go into the MVP, and through that process we could manage senior management expectations and the rate of change for the end-users within the business who needed to actually use the things we were building. Yes, we did allow for some changes, but that was done through a strict process during sprint planning.

Agile was not what we did. It was the way we did some things.

Having left that place and now working alongside companies that call themselves technology agencies, it's appalling to see methodologies being decided upon before even speaking to the customer. Or, Agile being used to do creative designs.

Agile is not unique in this... ITIL also went through the same cycle, with people without experience (ie consultants) and tools providers happy to take your cash and answer whatever question you have with the phrase 'ITIL solves that'.

I'm seeing the same now with the MACH alliance. Yet another sensible solution to a specific problem that's now in the hands of marketing, being spread out in an attempt to gain market share it simply should not have.

IT infrastructure scared away potential buyers of struggling e-commerce site

PinchOfSalt

A little while ago this was an Oracle success story...

https://blogs.oracle.com/retail/post/online-sports-retailer-wiggle-uses-oracle-to-support-double-digit-growth

Dublin debauchery derails Portal to NYC in six days flat

PinchOfSalt

I've seen the one in Lublin in Poland and there was no rowdiness or odd behaviour around it. It's connected to Vilnius in Lithuania and was really interesting to watch.

I do wonder whether there's a 'settling in' period where this sort of embarrassing behaviour happens, and then goes away as the shock value has gone.

I agree with the comments above though - the time zone difference is problematic.

OpenAI CEO Sam Altman is back on the company's board, along with three new members

PinchOfSalt

Under redundant, it says 'see redundant'

So Google are now providing you the tools to create terrible AI content and then also using AI to identify that content and move it down the rankings.

They're at war with themselves. Who do you think will pay the bill for all this creation and cleansing?

AI to fix UK Civil Service's bureaucratic bungling, deputy PM bets

PinchOfSalt

ROI

I sense that the AI peddlers are just the child catcher in Chitty Chitty Bang Bang.

Look at the wonderful things I can do, and it's all free today...

Then suddenly the paywall appears, starts rising, and you're trapped paying them instead of paying employees. You've got rid of your employees and now you're just an addict with a very expensive habit.
