* Posts by Martin M

115 posts • joined 12 Apr 2016

Google kills off Stadia

Martin M

Re: Solution to a problem nobody had?

Are you sure about tethered downloads and pretty much running locally on Xbox (and other services)? Xbox for sure allows you to stream games to an Android phone - can’t see how that would work via a tethered download. Even on an Xbox console, startup time for a cloud game is so short it couldn’t be downloading substantial portions for local execution. I’m pretty sure if you’re using Xbox Cloud, it is actually running on the cloud. The amazing thing is that with a decent internet connection you’re hard pressed to notice.

What is true is that if someone has a console and likes a game, they’ll probably download it after trying it on the cloud, which reduces cloud hardware costs for Microsoft in the short term.

In the long term (as more people get decent connectivity), pure game streaming makes even more sense than music streaming did, and we know how that went. Avid gamers have extremely expensive hardware of which at least the GPU portion is going to be idling much of the time, even during some of the evening peaks. Storing millions of copies of 100GB games on high-end SSDs is a lot of wasted silicon. Both are obvious targets for pooling centrally for increased utilisation - and it’s an even bigger win with casual gamers. Especially if the pooled hardware can be put to some other easily schedulable/preemptable use (transcoding? ML training?) during the off-peak. Also, many data centres can use electricity with lower carbon intensity than home users, so there’s an environmental benefit.
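The pooling argument can be made concrete with a toy statistical multiplexing model - assuming, purely for illustration, independent gamers each active a small fraction of the time:

```python
from statistics import NormalDist

def pooled_gpus_needed(n_gamers, p_active, sla=0.999):
    """Smallest GPU pool covering demand from n_gamers, each
    independently active with probability p_active, with probability
    >= sla (normal approximation to the binomial demand)."""
    mean = n_gamers * p_active
    sd = (n_gamers * p_active * (1 - p_active)) ** 0.5
    return int(mean + NormalDist().inv_cdf(sla) * sd) + 1

# 10,000 casual gamers each using a GPU 5% of the time would need
# 10,000 dedicated GPUs, but a shared pool of under 600.
pool = pooled_gpus_needed(10_000, 0.05)
```

The numbers (5% activity, 99.9% service level) are invented, but the shape of the win - pool size tracking the mean rather than the population - holds for any plausible figures.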

I suspect many gamers will be increasingly happy to stream if it’s cheaper, greener and more convenient.

The main problem with Stadia was execution, not concept. And a raft of other Google specific problems you mention.

UK hospitals lose millions after AI startup valuation collapses

Martin M

Consent?

It would be interesting to understand how consent was obtained for data sharing, given the well documented problems with obtaining this at national level. Particularly for GOSH.

Claims of AI sentience branded 'pure clickbait'

Martin M

Re: Definition

Blake Lemoine's "revelations" do indeed seem to be rubbish, but an article by another Google VP, Blaise Agüera y Arcas, on June 9th in New Scientist is much more interesting.

In case you can't get beyond the paywall: LaMDA appears as though it might be capable of some high-order social modelling, which has been hypothesised to be closely related to consciousness. In particular, if you can model others' reactions to you, you are as a side effect modelling yourself and your relationships, and that sounds awfully close to some definitions of consciousness.

As you say though, consciousness is very hard indeed to measure directly, which was no doubt why Blaise was cautious in his claims. And he said nothing at all about sentience.

BOFH: HR's gold mine gambit – they get the gold and we get the shaft

Martin M

Clearly you didn’t have a three year old during lockdown. *Many* unscheduled appearances during meetings!

US Cyber Command shored up nine nations' defenses last year

Martin M

Re: Back doors in firewalls,

Boris is bombproof on this. Given the number of peccadilloes and indiscretions we already know about, any more would be considered a feature rather than a bug.

Twitter buyout: Larry Ellison bursts into Elon's office, slaps $1b down on the desk

Martin M

Nah, you’d be required to buy an upfront license at the beginning of the year for the number of likes you expected to receive. Then, if you got more, an audit would ensue during which it would be demonstrated that the number of Likes displayed on the tweets was actually a substantial underestimate, plus you’re responsible for your Uncle Bernard’s likes too. At list price. With RPI inflation and penalty interest backdated to the launch of Twitter.

CAST AI puts out report on extent of enterprises cloud resource overspend

Martin M

Pretty difficult to see how something like AWS Lambda + Aurora Serverless V2 doesn't give you "automated dynamic resource allocation from a very large common pool with charging based on actual usage". Or Azure Cosmos DB Autoscale and Functions. Or Google Kubernetes Engine with Autopilot. Etc. etc.

"Government digital service goes titsup on launch" predates cloud technology and is more a comment on some gov IT environments. Selecting the lowest cost supplier, insufficient or unrepresentative nonfunctional testing, extreme political pressure to go live too early etc. etc. are technology-neutral, time tested recipes for failure. A surprising number do actually get it right (or at least right enough), but obviously you tend not to hear about those ones on the news.

That said: if load spikes fast enough, all of the pooled available cloud capacity and fancy autoscaling in the world won't help you. It's fast but not instant to scale, and if you're not prepared to tolerate some user-visible errors during that time, the only answer is a bit of overprovisioning.

But scaling is much faster in cloud than (at least some) on prem environments. I have worked with more than one enterprise with capacity management so poor that rollouts have been delayed while a new datacentre hall is completed, or kit works its way through the supply chain, or while a hunt is carried out for VMs that can be killed and dedicated servers that can be pulled. In cloud, at least the wait is generally measured in minutes not months.
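The overprovisioning trade-off above can be sketched numerically. A minimal model - all figures are made-up assumptions, not provider numbers: if a spike can grow at some known rate and fresh capacity takes a few minutes to come online, the headroom you must pre-provision falls out directly:

```python
import math

def instances_needed(steady_rps, per_instance_rps,
                     spike_rps_per_min, scaleup_minutes):
    """Instances to run *now* so a spike growing linearly at
    spike_rps_per_min is absorbed during the scaleup_minutes it
    takes new capacity to come online."""
    peak_during_scaleup = steady_rps + spike_rps_per_min * scaleup_minutes
    return math.ceil(peak_during_scaleup / per_instance_rps)

# Steady 1,000 rps at 200 rps/instance is 5 instances. If load can
# jump by 500 rps/min and scaling takes 3 minutes, you need 13
# running - the other 8 are the price of tolerating zero errors.
needed = instances_needed(1000, 200, 500, 3)
```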

BT must 'prioritize' between 'shareholders and workers' says union boss

Martin M

Re: No choice

If you had bothered to look up the legislation referenced, which is the thing actually phrased like an RFC, unlike Companies House guidance:

—-

(1)A director of a company must act in the way he considers, in good faith, would be most likely to promote the success of the company for the benefit of its members as a whole, and in doing so have regard (amongst other matters) to—

(a)the likely consequences of any decision in the long term,

(b)the interests of the company's employees,

—-

MUST … have regard (amongst other matters) to … the interests of the company's employees.

You can argue about the weight that might in practice be placed on those interests amongst competing concerns, but they are required by law to be considered. As I said.

You may have meant to say “My point was that the union was demanding boards ought to prioritise workers above shareholders which they can't do.“ but what you actually said was “Legally BT management is required to act for the best interests of the shareholders and nobody else”. Those are two different statements. The first is correct. The latter is clearly not.

Martin M

Re: No choice

Legally that’s not correct. The second of the 7 duties of a director, as described by Companies House and enshrined in the Companies Act 2006 s172, is indeed to promote the success of the company for the benefit of shareholders. However under this duty they are also required to consider the consequences of decisions for various stakeholders, explicitly including employees. In their words:

“Board decisions can only be justified by the best interests of the company, not on the basis of what works best for anyone else, such as particular executives, shareholders or other business entities. But directors should be broad minded in the way that they evaluate those interests – paying regard to other stakeholders rather than adopting a narrow financial perspective.”

AWS power failure in US-EAST-1 region killed some hardware and instances

Martin M

Re: Elastic

Small business owners have lacked IT expertise/clue since the dawn of computing.

And yes, I do blame them if they're so massively naive as to unquestioningly believe marketing. Most people wise up to that when they're about 5 years old, put a toy on their Christmas list off the back of exciting puffery, and receive underwhelming plastic tat.

Luckily, nowadays most sensible small businesses don't try to train their admin assistant to juggle EC2 instances, but instead go for a collection of SaaS. Many of those are horrible, and it is a lemon market, yet it's still almost always better than them trying to muddle through themselves.

Martin M

Re: Elastic

In other words, probably the exact same people who would have screwed up on-prem.

Martin M

Re: Elastic

They don’t. Most Platform-as-a-Service products automatically fail over in the event of an AZ outage.

If you’re using EC2 then you have to engineer your own solutions, but APIs and tooling allow you to automate almost anything. If you have to do anything manual to fail over - other than possibly invoke a script - you’re doing it very wrong.
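As a sketch of that "nothing manual" principle, here is a tiny, infrastructure-agnostic failover decision rule. The probe mechanism and thresholds are illustrative, not any cloud's actual API:

```python
def decide_failover(probe_results, max_failures=3):
    """Given a sequence of health-probe outcomes for the primary
    (True = healthy), return the index at which an automated
    failover should fire, or None if it never should."""
    consecutive_failures = 0
    for i, healthy in enumerate(probe_results):
        consecutive_failures = 0 if healthy else consecutive_failures + 1
        if consecutive_failures >= max_failures:
            return i  # promote the standby here - no human involved
    return None

# Three consecutive failed probes trigger failover:
# decide_failover([True, False, False, False]) -> 3
```

Requiring consecutive failures is the usual guard against flapping on a single dropped probe; real setups also add probe timeouts and fencing of the old primary.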

JavaScript dev deliberately screws up own popular npm packages to make a point of some sort

Martin M

Re: Proof that the industry is mad

There are a huge number of excellent reasons why everyone should follow a controlled process for bringing in and caching dependencies - much reduced regression defects, assured availability of packages, reduction of some kinds of software supply chain risk, repeatable builds, robust configuration management, CI performance, bandwidth efficiency and probably many more.

However, Log4j ain't one of them. Those naively pulling in the latest version (especially if replicated all the way down the dependency chain) with every commit build were probably among the first to close that particular security risk - albeit entirely unintentionally...
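A controlled process usually boils down to pinning exact versions plus content hashes in a lockfile and verifying the cache against them. A minimal sketch of the verification step (the lockfile itself is assumed, not shown):

```python
import hashlib

def verify_artifact(path, pinned_sha256):
    """Check a cached/downloaded dependency against the SHA-256
    pinned in the lockfile; a mismatch means the build should fail
    rather than silently use a different artifact."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == pinned_sha256
```

This is essentially what `pip install --require-hashes` or a repository manager does for you; rolling your own is only worth it to understand the mechanism.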

Can you get excited about the iPhone 13? We've tried

Martin M

Re: Thanks!

Try “Secure shellfish” - scp client, gives you access to browse and download via the Files app.

Sync and share cloud services are also very slick for doing this, albeit often poorly supported on Linux desktops. I’m sure ownCloud or NextCloud would do it natively, Dropbox still has a Linux client I think, or - marginally less conveniently - rclone up to almost any service. Drag/drop to cloud synced drive, run rclone (if necessary), into the iOS app and download for offline use.

I have to say, I’ve not had a problem with this since pretty much the dawn of the App Store.

How Windows NTFS finally made it into Linux

Martin M

Re: Title to long :whaa:

I’ve actually had to re-enable hibernation recently because of the abomination that is Modern Standby being forced down my gullet by Microsoft/Lenovo/Intel. If I want my applications/open documents up and running after I lift the lid, it is now the only option that reliably avoids turning my laptop bag into a disconcertingly hot oven.

VMware to kill SD cards and USB drives as vSphere boot options

Martin M

Re: Nanny

A ‘fool reboot’ sounds a bit BOFH.

Seeing as everyone loves cloud subscriptions, get ready for car-as-a-service future

Martin M

Software defined computer

Traditionally known as a … computer.

Gives a whole new meaning to “rolling release”, particularly if the brakes stop working.

Remember the Oracle-botherers at Rimini Street? They are expanding third party support into open source database world

Martin M

Re: They missed the memo

I've just remembered a bit more about that Oracle support issue and should probably say I'm being a little bit unfair above - it was the trickiest production support issue I've ever encountered: an intermittent error that only occurred every few days and made no sense at all. I was called in to help.

It turned out to be down to an unexpected interaction between WebLogic and a caching provider, due to an ambiguity in the XA distributed transaction spec. The architecture choice was not mine ... I hate XA and have long held the opinion that anyone even considering using it (thankfully few, nowadays) should probably stop trying to hide the fact that they have a distributed system and instead ponder the benefits of idempotency.
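For the record, the idempotency alternative is tiny. A minimal sketch - a real system would persist the seen-set in the same transaction as the state change rather than in memory:

```python
def make_idempotent(handler):
    """Wrap a message handler so redeliveries of the same message id
    are applied at most once - avoiding the need for XA-style
    distributed transactions between broker and database."""
    seen = set()

    def handle(msg_id, payload):
        if msg_id in seen:
            return "duplicate-ignored"
        seen.add(msg_id)
        return handler(payload)

    return handle
```

At-least-once delivery plus an idempotent consumer gets you the same effective guarantee as 2PC, without a coordinator that can wedge everything when it hits a spec ambiguity.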

However, there was a certain amount of novelty being able to list "Sloppy language in an international technical specification" in the Root Cause Analysis :-)

Martin M

Re: They missed the memo

There’s a massive difference between knowing how to use a piece of infrastructure, which is what a full-stack engineer does, and being able to provide third-line support for it when it goes wrong and doesn’t behave as expected/designed. 2 a.m. with an outage going on is the wrong time to be trying to familiarise yourself with a huge, complex database codebase while wading through mailing lists.

Whether Rimini Street can provide good support at their relatively small scale, supporting many different databases, is a separate question. The likes of EnterpriseDB (for PostgreSQL) and Oracle (for MySQL) have been doing this for much longer and have people who properly know the code. Not that Oracle support is always great - I have in the past had to decompile bits of WebLogic to show them where the bug is.

As for MySQL for mission critical - yes, I’d take PostgreSQL over it any day, but you are aware that InnoDB has been the default storage engine for well over a decade?

Fired credit union employee admits: I wiped 21GB of files from company's shared drive in retaliation

Martin M

Re: Rather moronic

What she did was moronic. But not sure I get the logic on why her fine should depend on how moronic the company is. Restoring for less than $10k is not “clever”.

Far from restoring from actual backups, with a proper setup this should have been a case of simply restoring a NAS snapshot - 5 mins tops actual technical work, call it a half day with surrounding paperwork. This is not advanced technology - I have had it on my home server, for free, for the better part of a decade.
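For a sense of how little "technical work" that is, here is a sketch (dataset and snapshot names invented) of picking the newest snapshot and building the one-line ZFS restore command:

```python
def rollback_command(dataset, snapshots):
    """Return the zfs command that restores the most recent snapshot
    of a dataset. Assumes snapshots is ordered oldest-first, as
    'zfs list -t snapshot' emits them by creation time."""
    latest = snapshots[-1]
    return f"zfs rollback {dataset}@{latest}"

# rollback_command("tank/shared", ["hourly-01", "hourly-02"])
# -> "zfs rollback tank/shared@hourly-02"
```

One command, a few seconds of execution; the rest of the half day is paperwork.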

But it sounds like even backups didn’t exist, so they’re spaffing money on disk recovery specialists instead.

Try placing a pot plant directly above your CRT monitor – it really ties the desk together

Martin M

Re: We’re talking CRT era here

OK, multi-monitor *graphics* support then :-)

And to be fair I suspect there were actually quite a few systems doing it well before Windows did, but original context was about widespread use…

Martin M

Re: Most common fault was Magnets

We’re talking CRT era here - the first official multi screen support was in Win98 I think, before that only via expensive/fragile hacks. As corbpm says, until LCD monitors came along an unfeasibly large desk was required anyway, so few people wanted to multiscreen.

I first had dual LCD monitors in 2003, but probably only because I was working on a trading floor and it was the default, hang the (significant, at the time) expense. When I started a new job in 2006, I had to spend some time convincing the head of department to give the development teams second screens, citing academic studies on productivity increases and defect count reductions. Probably the single best thing I did for them while I was there…

Martin M

Re: Most common fault was Magnets

I had almost forgotten about degaussing aka The Wibble Button. I can’t remember a time my monitor actually needed it, but it was sufficiently satisfying that I did it anyway. Thank you for making my day! DDDONNGGGGonngonggongong…

Teen turned away from roller rink after AI wrongly identifies her as banned troublemaker

Martin M

Re: exhibit ingrained racist assumptions in the design

No apertures, exposures. D’oh.

Martin M

Re: exhibit ingrained racist assumptions in the design

Do you seriously think the CCTV cameras used there are likely to be genuine HDR?

Shooting stills in HDR is relatively easy - the camera just takes a burst at multiple aperture settings and there’s an (albeit computationally expensive) process to combine them. Although the results will not be good if anyone moves during that process.

Shooting video in HDR currently requires at least $1000 of camera, more usually $2000. I doubt those are capable of streaming the result easily, and running around with SD cards or SSDs doesn’t really work in this scenario.

I can’t imagine HDR hitting the CCTV/face recognition market for some time yet.

Cyberlaw experts: Take back control. No, we're not talking about Brexit. It's Automated Lane Keeping Systems

Martin M

Wishful thinking

"Legal terms presented to the user potentially operate to their detriment."

Legal terms presented to the user *always* operate to their detriment.

Age discrimination case against IBM leaks emails, docs via bad redaction

Martin M

Re: Claw Back the Criminals' Compensation

I know IBM is famous for big iron, is that why the skillet needs to be regularly updated?

Three things that have vanished: $3.6bn in Bitcoin, a crypto investment biz, and the two brothers who ran it

Martin M

It happens, some people actually run mixers out of jurisdictions with regulators that are interested.

https://www.coindesk.com/eu-authorities-crack-down-on-bitcoin-transaction-mixer

https://www.zdnet.com/article/ohio-man-arrested-for-running-bitcoin-mixing-service-that-laundered-300-million/

Supreme Court narrows Computer Fraud and Abuse Act: Misusing access not quite the same as breaking in

Martin M

You don’t have a water supply - do you pump from a well in your back garden? And without electricity or a mobile phone, are you typing these replies from a PC at a library or something? Wow.

There are likely to have been other laws that could be more appropriately applied. Bribery springs to mind if there was any element of benefit offered. Admittedly none will attract the death penalty, but that seems a wee bit extreme for looking up a plate.

What carries the greatest potential for state abuse - making life slightly more difficult for prosecutors in cases like these, or criminalising basically everyone and leaving them open to intrusive searches and police coercion?

If there are gaps in law resulting from interpreting this one sensibly, the best remedy is sensible new law.

Hyper-V bug that could crash 'big portions of Azure cloud infrastructure': Code published

Martin M

On the flip side, where do you think this vulnerability is likely to have been patched first - on-prem Hyper V or Azure?

Oracle vs Google: No, the Supreme Court did not say APIs aren't copyright – and that's a good thing

Martin M

Re: @LDS - Happy they killed the GPL - when it comes to dynamic linking.

The idea that the GPL depends on copyright isn't a new one and certainly not hidden.

Copyleft is purely a rhetorical term and isn't even mentioned in the license text (using GPLv2 here, for argument's sake). Copyright is, 15 times. The first time it is mentioned, in the preamble, it says:

"We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software."

This is front and centre in what must be the most distributed license text ever, and one of the shortest and most readable. So I'm pretty sure any "GPL worshipper" (whatever that is) realises this is the case.

However, interpretation of the GPL is a fairly nuanced topic. Try searching on "rosenlaw oslbook" for a flavour, and in particular a detailed discussion of linking in both the GPL and LGPL. Rosen thinks there's no difference in practice, because the LGPL is so badly drafted and refers back to the GPL anyway. Things may have moved along since then - the book was written in 2004. There may well have been case law since, and it doesn't address GPLv3.

Thousands of taxpayers' personal details potentially exposed online through councils' debt-chasing texts

Martin M

Idiots

I like to reply with "Thank you for your shortlink, but local security policies forbid me from opening them. Please enter the full link at https://bit.ly/<code>, otherwise I will be unable to read your message."

Target of the shortened URL can be varied depending on situation and level of annoyance...

Apple offends devs by asking for Developer Transition Kits back early, then offering them a measly $200 off an M1 Mac

Martin M

Re: An M1 Mac Mini isn't going to break the bank

Or just dispute the credit card payment, if you paid that way and Apple has sold a service that wasn't as described.

However - as suggested in the article - nasty things might happen to your developer account. So perhaps best to just let Apple have its wicked way ...

And just like that, Amazon Web Services forked Elasticsearch, Kibana. Was that part of the plan, Elastic?

Martin M

Re: Bad optics

I think a lot will come down to what happens next. If AWS work hard with a respected open source umbrella organisation to put in a proper governance structure and build a community in which they are a major but not the only contributor, it will look a lot better.

Elasticsearch has never had this community, really, which is why the tricks Elastic are pulling are possible. Don’t get me wrong, it’s an incredible product and building it was dependent on a lot of VC funding which would not have been possible to raise if they didn’t have control.

But no one really wants to be in a position where they come to depend purely on Amazon for the fork, particularly if they want to deploy on other clouds or on prem, distribute as part of a product or run as part of a SaaS setup.

Dropbox basically decimates workforce, COO logs off: Cloud biz promises to be 'more efficient and nimble'

Martin M

You’re right. File sync and share is really a feature now, not a product, for most. Not all, of course, but enough that Dropbox will struggle.

React team observes that running everything on the client can be costly, aims to fix it with Server Components

Martin M

Not disagreeing on the importance of good documentation, although I would rather see a small set that is completely current and addresses the load-bearing walls of the architecture than reams of useless outdated rubbish. Similar at code level. The very best code doesn’t need much at detail level because it’s obvious what it’s doing. But good code is quite rare.

However, this isn’t really about specific documentation for a particular system - it’s about fundamental distributed systems design knowledge. It’s going to be relevant forever, because information can’t travel faster than the speed of light and hence latency will always be important for globally distributed systems. It’s not like this isn’t written down either - see

https://martinfowler.com/articles/distributed-objects-microservices.html

which links back to a 2002 book, and it was widespread knowledge well before then, right back to Deutsch’s seven fallacies of distributed computing in 1994. Yet people somehow manage to graduate from Computer Science courses or have long IT careers without knowing about this or e.g. ACID, locking/deadlocks, consensus, cache invalidation, message sequencing, idempotency, CAP etc.. I’m still not quite sure how.

In 2003, on a project with a global user base, as well as having explicit architectural principles covering interface granularity I insisted on a hardware WAN emulator in front of the local dev/test servers. Turned up to Singapore latency (about 300ms IIRC). People stop designing inefficient interfaces/using them badly quite quickly when they personally feel the pain :-)
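The arithmetic the WAN emulator makes visceral is simple: sequential round trips dominate everything else once the link is slow. Numbers below are illustrative:

```python
def interaction_time_ms(round_trips, rtt_ms=300, server_ms_per_call=5):
    """Wall-clock time for an interaction making sequential calls
    over a high-latency link, e.g. the ~300 ms Singapore round trip
    mentioned above."""
    return round_trips * (rtt_ms + server_ms_per_call)

# A chatty interface making 20 sequential fine-grained calls:
chatty = interaction_time_ms(20)  # six seconds of waiting
# The same work bundled into one coarse-grained call:
coarse = interaction_time_ms(1)
```

Twenty fine-grained calls cost over six seconds; one coarse-grained call, about a third of one. No amount of server tuning fixes the former.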

Elon Musk says he tried to sell Tesla to Apple, which didn’t bite and wouldn't even meet

Martin M

Butterfly accelerator and brake for the best driver legroom ever! *

* Requires regular air dusting at 80 degree angle. Replacement only as part of sealed engine/drivetrain unit.

Unsecured Azure blob exposed 500,000+ highly confidential docs from UK firm's CRM customers

Martin M

Re: @Sandgrounder - Listen to what Teacher says..

Disagree. 10 years ago I think it’s safe to say this company would have been dumping uploads in an unsecured Apache directory. Apache and the OS wouldn’t have been patched since installation. The ‘server’ would be located in an unlocked stationery cupboard with unencrypted disks, and there would be no RAID or backup regime.

Using a cloud object store actually fixes or helps with most of these problems rather than exacerbates them, and provides some pretty powerful security constructs to those who are capable of using them. On AWS (not sure about Azure) you will be warned proactively and quite strenuously that your unsecured bucket is probably not a good idea. But ultimately, it can’t fix stupid.

HP bows to pressure, reinstates free monthly ink plan... for existing customers

Martin M

Idiots

Not just that. Anyone with primary school age kids will recognise that lockdown = hundreds of pages of bright, colourful home learning - never used a set of toner cartridges so fast in my life ...

Google Cloud (over)Run: How a free trial experiment ended with a $72,000 bill overnight

Martin M

Idiots

Not sure I buy the "not rocket science". Compared to an old-school centralised mainframe bureau or even telco caps, it would likely be a significant challenge to implement hard spend caps given the highly distributed nature of hyperscale cloud infrastructure, and the complexity and granularity of billing. Particularly without potentially impacting availability and latency in the normal case of credit being available - are you going to have each function execution or database write check a central service before executing? Or distribute information on available credit to every server in GCP? I'm not saying there aren't solutions (probably based on streaming predictive analytics etc.) but it's certainly not trivial.
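One shape a solution might take - and, to be clear, this is pure speculation, not how GCP actually works - is pre-allocating slices of the remaining credit to servers so the hot path never has to call a central service:

```python
def allocate_credit(total_credit, n_servers, reserve_fraction=0.2):
    """Split most of the remaining credit into per-server slices,
    keeping a central reserve to redistribute to busy servers."""
    distributable = total_credit * (1 - reserve_fraction)
    return distributable / n_servers

class LocalMeter:
    """Per-server meter: charges against its local slice only and
    signals when it needs a top-up from the central reserve."""
    def __init__(self, slice_credit):
        self.remaining = slice_credit

    def try_charge(self, cost):
        if cost > self.remaining:
            return False  # out of local credit - ask centre for more
        self.remaining -= cost
        return True
```

The hard parts this glosses over - reclaiming unused slices, stragglers, and what to do when the reserve empties mid-request - are exactly why it isn't trivial.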

Regardless of the technical question, this is basically an incentive alignment problem though. The clouds should really be forced (through competitive pressure, regulation, court decisions etc.) to forgive any spend above the spend cap. They'd then manage to find an appropriate balance between stopping this kind of thing occurring and the technical costs of doing so.

Not on your Zoom, not on Teams, not Google Meet, not BlueJeans. WebEx, Skype and Houseparty make us itch. No, not FaceTime, not even Twitch

Martin M

Also, the idea that there is no value is rather given the lie by the large numbers of people who choose to use some form of videoconferencing when speaking to their friends and family. A lot of this is about context, poor equipment/setup and overuse in inappropriate situations, rather than a fundamental flaw in the technology.

The poor equipment point is particularly evident for me because at the beginning of lockdown I held my nose and bought a Facebook Portal TV for me and a Facebook Portal+ for my 70+ yo parents, so we could keep in touch and they could watch my kids grow up. They're clinically extremely vulnerable and finding the lack of face-to-face very difficult, particularly as the kids change so fast.

Yes, Facebook. I know. But there still aren't any other proper dedicated consumer videoconferencing devices available as far as I can see, at least with/supporting decent size screens - which is just plain odd. They are fantastic.

Video and sound quality are awesome, and the kids interact very naturally as it works from the other side of the room and will zoom and pan as they run around. We can relax on the sofa later when they're in bed and have an hour's discussion without it being tiring. Eye contact seems pretty good so long as it's mounted on top of the TV. It's not the same as having them here but it's definitely worth having.

It's just a shame VC is so much worse on my work laptop. Completely different experience.

Martin M

As someone who changed jobs at the beginning of April, I've found it really valuable for all of its flaws. Broken eye contact is unnatural, but working closely with people for months without any idea of what they look like and their facial expressions etc. would be much more so. Lack of video chat would have left me feeling very disconnected.

That said, I often turn off video a few minutes into a meeting, especially where there are more than a couple of other people - the 'wall of faces' is not very useful for me. Although in most cases you're looking at a screen share by that point anyway.

And the lipreading/signing points are very good ones.

Shots fired! WordPress's Matt claims Jamstack's marketing is 'not intellectually honest' in debate with Netlify's Matt

Martin M

“And no, nobody gives a damn about comment boards on your dumb-ass website.”

There is substantial irony in this opinion being delivered via a comment board...

Prepare your shocked faces: Crypto-coin exchange boss laundered millions of bucks for online auction crooks

Martin M

Re: Oh , the joys of unregulated...

FATF country members are responsible for implementing recommendations on Virtual Assets and Virtual Asset Service Providers. The EU has 5AMLD which mandates that crypto exchanges have to have the same AML controls as banks. This is implemented in the UK in The Money Laundering and Terrorist Financing (Amendment) Regulations 2019 statutory instrument.

So who exactly has been saying money laundering regulation is unnecessary?

Enforcement is necessary for compliance, of course, but the regulation is there.

The perils of building a career on YouTube: Guitar teacher's channel nearly deleted after music publisher complains

Martin M

"how technology giants deal with smaller customers"

If you provide your material to Google and they sell advertising space by it and give you a fraction of that, you are not a customer. You are a supplier. And that really explains all you need to know. Small suppliers to enormous buyers always tend to get a bit screwed.

Doesn't make it right, of course.

Microservices guru says think serverless, not Kubernetes: You don't want to manage 'a towering edifice of stuff'

Martin M

Really?

"The key characteristics of a serverless offering is no server management. I'm not worried about the operating systems or how much memory these things have got; I am abstracted away from all of that".

Technically true but massively misses the point. AWS Lambda requires you to decide how to allocate "memory to your function between 128MB and 3008MB, in 64MB increments" (https://aws.amazon.com/lambda/pricing/). So now you have to capacity manage memory at function level rather than server/cluster level.
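The billing consequence is easy to see, since Lambda charges on GB-seconds (allocated memory × execution time). The rate below is the published us-east-1 price as I recall it; treat it as an assumption:

```python
def lambda_compute_cost(memory_mb, duration_ms, invocations,
                        usd_per_gb_second=0.0000166667):
    """Monthly Lambda compute cost: *allocated* (not used) memory
    times duration, summed over invocations."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000) * invocations
    return gb_seconds * usd_per_gb_second

# A function allocated 1024MB, running for 1s, a million times a month:
cost = lambda_compute_cost(1024, 1000, 1_000_000)
# Quadruple the allocation without the function getting faster and
# the compute bill quadruples - hence the capacity management point.
```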

There are lots of good things about serverless, but this ain't one.

Gartner on cloud contenders: AWS fails to lower its prices, Microsoft 'cannot guarantee capacity', Google has 'devastating' network outages

Martin M

Re: Gartner in the title of the article...

Some techies have indeed been saying for years that "cloud" only equates to "someone else's computers, somewhere". But it's only true in the same sense that a house equates to bricks.

If you're talking about manually standing up VMs and storage in a datacenter through an API or web console, maybe. Although too many companies seem to screw up building and running internal clouds that try to do even that.

But really, what is driving people to cloud providers is access to a huge number - Amazon have 160+ - of highly automated services, all integrated into the same logging, monitoring, billing and identity/access infrastructure and very often into each other as well. Container management, ESBs, data warehouses, SDN, API gateways, VDI farms, call centre infrastructure, software development tooling, ML model lifecycle management, virtual HSMs, machine learning based PII classification, scale-out graph database, managed PostgreSQL, mobile app identity management - too many to sensibly enumerate on a single web page.

These - most of which are at least reasonable, some best in class - are all available for distributed teams to use and manage in 60+ datacentres in tens of countries. With largely good documentation and no need to file a ticket and wait for weeks to get going (or months, if someone has dropped the ball on capacity management). And which can be completely reproducibly stood up via a bit of Terraform - subject to appropriate governance, of course.

If you can point me to a corporate IT department that offers anything close to those 160+ services, with a similar experience, I'll concede it's just other people's computers. I suspect you'll struggle, because the cloud providers probably invested more into R&D for just one of those services than your entire IT budget for the last few years. There are massive economies of scale in automation - cottage industries within enterprises will just struggle to compete.

Of course cloud is just a tool though, and it retains many of the issues inherent in any technology - I agree with that. It's just that it does solve a useful number of those issues.

Why cloud costs get out of control: Too much lift and shift, and pricing that is 'screwy and broken'

Martin M

Re: The problem isn't the Cloud, but poor monitoring

Sorry, I think the BS is yours.

There are specialist third-party (not provided by the clouds themselves - that would make no sense as no-one would trust them) cloud spend monitoring and optimisation tools. Some of them are expensive and indeed only make any kind of sense for very large cloud estates. But you can do a great deal with the standard, built-in, essentially free ones.

On reversing out of the cloud: if you generate truly epic quantities of data, that creates some lock-in, but it's not irreversible. Case in point: Dropbox exited 4 petabytes of data from AWS S3 when they decided they had the scale and capability to build and run their own private storage cloud.

More importantly, and similar to any proprietary vendor including any on-prem ones, there is substantial lock-in if you go for proprietary high-level services as opposed to lower level standards-based ones. There are things you can do to mitigate that a bit (Kubernetes is often one aspect of this), but these tend to increase complexity and unpick a number of benefits of going to the cloud. Essentially, you end up trading potential long term costs of lock-in against short term increased build costs. It's not a new problem, nor is it cloud-specific. The right answer usually depends on how fast you have to get to market.

I've spent a fair amount of time looking at DR for both on-prem and cloud-based services in a good number of companies, and from a practical perspective DR for cloud-based services tends to be way ahead in my experience, because the clouds make it really easy to implement setups that provide (largely) automated resilience to outages affecting a smallish geographical area (e.g. tens of miles). On-prem DR is often a shitshow on very many levels. And the clouds do effectively provide tape or equivalents - S3 Glacier is backed by it, at least the last time I checked. They won't, of course, be tapes in your possession, which I suspect is what you're fulminating about.

The one type of disaster that many people building on cloud do not address is wholesale failure of the cloud provider for either technical or business viability reasons. You have to take a view on how likely that is - the really big cloud providers seem to isolate availability zones pretty well nowadays (much better than enterprises - one I reviewed forgot to implement DR for their DHCP servers FFS, and it took a power outage for them to notice). The top three providers are probably too big to fail. If they got into trouble as businesses, the government would probably backstop, not least because they don't want their own stuff falling over. But if you want to mitigate - just sync your data back to base. There are lots of patterns for doing so.

Martin M

Re: The problem isn't the Cloud, but poor monitoring

Your time is the only significant cost, actually. You get the basics for free, in the same way most people get itemised phone bills for free. Tagging doesn't cost anything. Everything's so automated and integrated it will likely cost very much less than any cost allocation you're trying to do for on-prem services. Remember, cloud services are built from the ground up to be metered at a granular level for the cloud provider's own billing - all they've done is extend that out to customers.

From a technical perspective, there are storage charges if you want to retain for a long time, bandwidth charges to download info etc., but those are really really *tiny*. If you choose to use cloud BI services (e.g. AWS QuickSight) to do your reporting rather than desktop-based/on-prem server based analysis, of course you pay for those, but not much - think $18/mo for a dashboard author ranging down to $.30/30 min session for infrequent dashboard viewers.
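Those numbers support a quick back-of-envelope estimate. A sketch using the rates quoted above (not necessarily current pricing; real QuickSight reader pricing also caps the per-reader monthly charge, which this deliberately ignores):

```python
def quicksight_monthly_estimate(authors: int, reader_sessions: int,
                                author_rate: float = 18.0,
                                session_rate: float = 0.30) -> float:
    """Rough monthly BI cost: flat per-author fee plus per-session reader fee.
    Rates are the ones quoted in the post, not necessarily current; the
    per-reader monthly cap in real QuickSight pricing is ignored."""
    return authors * author_rate + reader_sessions * session_rate

# e.g. 2 dashboard authors and 100 viewer sessions a month
estimate = quicksight_monthly_estimate(2, 100)
```

Tens of dollars a month, i.e. noise next to almost any on-prem BI licence.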
