Posts by Martin M

98 posts • joined 12 Apr 2016

Remember the Oracle-botherers at Rimini Street? They are expanding third party support into open source database world

Martin M

Re: They missed the memo

I've just remembered a bit more about that Oracle support issue, and should probably say I'm being a little unfair above - it was the trickiest production support issue I've ever encountered. An intermittent error that only occurred every few days and made no sense at all; I was called in to help.

It turned out to be down to an unexpected interaction between Weblogic and a caching provider due to an ambiguity in the XA distributed transaction spec. The architecture choice was not mine ... I hate XA and have long held the opinion that anyone even considering using it (thankfully few, nowadays) should probably stop trying to hide the fact that they have a distributed system and instead ponder the benefits of idempotency.
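
If it helps make the idempotency point concrete, here's a minimal sketch of what I mean - a consumer that records the message ID in the same local transaction as the business update, so redeliveries are harmless. Everything here is illustrative and nothing to do with the actual system:

import sqlite3

# Minimal sketch of the idempotent alternative: each message carries a unique
# ID, recorded in the same local transaction as the business update, so a
# redelivered message becomes a no-op instead of needing XA across systems.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed (msg_id TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE balances (account TEXT PRIMARY KEY, amount INTEGER)")
conn.execute("INSERT INTO balances VALUES ('ACC-1', 0)")

def handle_payment(msg_id, account, amount):
    try:
        with conn:  # one local transaction: commits on success, rolls back on error
            conn.execute("INSERT INTO processed (msg_id) VALUES (?)", (msg_id,))
            conn.execute("UPDATE balances SET amount = amount + ? WHERE account = ?",
                         (amount, account))
    except sqlite3.IntegrityError:
        pass  # duplicate delivery - already processed, safely ignored

handle_payment("msg-42", "ACC-1", 100)
handle_payment("msg-42", "ACC-1", 100)  # redelivery: balance stays at 100
print(conn.execute("SELECT amount FROM balances").fetchone()[0])  # 100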

However, there was a certain amount of novelty being able to list "Sloppy language in an international technical specification" in the Root Cause Analysis :-)

Martin M

Re: They missed the memo

There’s a massive difference between knowing how to use a piece of infrastructure, which is what a full stack engineer does, and being able to provide third line support for it when it goes wrong and doesn’t behave as expected / designed. 2 a.m. with an outage going on is the wrong time to be trying to familiarise yourself with a huge, complex database codebase and wade through mailing lists.

Whether Rimini Street can provide good support at their relatively small scale, supporting many different databases, is a separate question. The likes of EnterpriseDB (for PostgreSQL) and Oracle (for MySQL) have been doing this for much longer and have people who will properly know the code. Not that Oracle support is always great - I have in the past had to decompile bits of Weblogic to show them where the bug is.

As for MySQL for mission critical - yes, I’d take PostgreSQL over it any day, but you are aware that InnoDB has been the default storage engine for well over a decade?

Fired credit union employee admits: I wiped 21GB of files from company's shared drive in retaliation

Martin M

Re: Rather moronic

What she did was moronic. But I'm not sure I get the logic of why her fine should depend on how moronic the company is. Restoring in less than 10k is not “clever”.

Far from needing to restore from actual backups, with a proper setup this should have been a case of simply restoring a NAS snapshot - 5 mins tops of actual technical work; call it half a day with surrounding paperwork. This is not advanced technology - I have had it on my home server, for free, for the better part of a decade.

But it sounds like even backups didn’t exist, so they’re spaffing money on disk recovery specialists instead.

Try placing a pot plant directly above your CRT monitor – it really ties the desk together

Martin M

Re: We’re talking CRT era here

OK, multi-monitor *graphics* support then :-)

And to be fair I suspect there were actually quite a few systems doing it well before Windows did, but the original context was about widespread use…

Martin M

Re: Most common fault was Magnets

We’re talking CRT era here - the first official multi screen support was in Win98 I think, before that only via expensive/fragile hacks. As corbpm says, until LCD monitors came along an unfeasibly large desk was required anyway, so few people wanted to multiscreen.

I first had dual LCD monitors in 2003, but probably only because I was working on a trading floor and it was the default, hang the (significant, at the time) expense. When I started a new job in 2006, I had to spend some time convincing the head of department to give the development teams second screens, citing academic studies on productivity increases and defect count reductions. Probably the single best thing I did for them while I was there…

Martin M

Re: Most common fault was Magnets

I had almost forgotten about degaussing aka The Wibble Button. I can’t remember a time my monitor actually needed it, but it was sufficiently satisfying that I did it anyway. Thank you for making my day! DDDONNGGGGonngonggongong…

Teen turned away from roller rink after AI wrongly identifies her as banned troublemaker

Martin M

Re: exhibit ingrained racist assumptions in the design

No apertures, exposures. D’oh.

Martin M

Re: exhibit ingrained racist assumptions in the design

Do you seriously think the CCTV cameras used in these places are likely to be genuine HDR?

Shooting stills in HDR is relatively easy - the camera just takes a burst at multiple aperture settings and there’s a (albeit computationally expensive) process to combine them. Although the results will not be good if anyone moves during that process.
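
For what it's worth, the stills merge step is a few lines with off-the-shelf tools these days - a rough sketch using OpenCV's Debevec merge, with made-up file names and exposure times:

import cv2
import numpy as np

# Rough sketch: merge an exposure-bracketed burst of stills into an HDR
# radiance map, then tone-map it back to something displayable. Filenames and
# exposure times are illustrative only.
files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
images = [cv2.imread(f) for f in files]
times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)  # seconds

hdr = cv2.createMergeDebevec().process(images, times)   # 32-bit float HDR
ldr = cv2.createTonemap(gamma=2.2).process(hdr)          # back to displayable range
cv2.imwrite("merged_hdr.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))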

Shooting video in HDR currently requires at least $1000 of camera, more usually $2000. I doubt those are capable of streaming the result easily, and running around with SD cards or SSDs doesn’t really work in this scenario.

I can’t imagine HDR hitting the CCTV/face recognition market for some time yet.

Cyberlaw experts: Take back control. No, we're not talking about Brexit. It's Automated Lane Keeping Systems

Martin M

Wishful thinking

"Legal terms presented to the user potentially operate to their detriment."

Legal terms presented to the user *always* operate to their detriment.

Age discrimination case against IBM leaks emails, docs via bad redaction

Martin M

Re: Claw Back the Criminals' Compensation

I know IBM is famous for big iron, is that why the skillet needs to be regularly updated?

Three things that have vanished: $3.6bn in Bitcoin, a crypto investment biz, and the two brothers who ran it

Martin M

It happens - some people actually run mixers out of jurisdictions with regulators that are interested.

https://www.coindesk.com/eu-authorities-crack-down-on-bitcoin-transaction-mixer

https://www.zdnet.com/article/ohio-man-arrested-for-running-bitcoin-mixing-service-that-laundered-300-million/

Supreme Court narrows Computer Fraud and Abuse Act: Misusing access not quite the same as breaking in

Martin M

You don’t have a water supply - do you pump from a well in your back garden? And without electricity or a mobile phone, are you typing these replies from a PC at a library or something? Wow.

There are likely to have been other laws that could be more appropriately applied. Bribery springs to mind if there was any element of benefit offered. Admittedly none will attract the death penalty, but that seems a wee bit extreme for looking up a plate.

What carries the greatest potential for state abuse - making life slightly more difficult for prosecutors in cases like these, or criminalising basically everyone and leaving them open to intrusive searches and police coercion?

If there are gaps in law resulting from interpreting this one sensibly, the best remedy is sensible new law.

Hyper-V bug that could crash 'big portions of Azure cloud infrastructure': Code published

Martin M

On the flip side, where do you think this vulnerability is likely to have been patched first - on-prem Hyper V or Azure?

Oracle vs Google: No, the Supreme Court did not say APIs aren't copyright – and that's a good thing

Martin M

Re: @LDS - Happy they killed the GPL - when it comes to dynamic linking.

The idea that the GPL depends on copyright isn't a new one and certainly not hidden.

Copyleft is purely a rhetorical term and isn't even mentioned in the license text (using GPLv2 here, for argument's sake). Copyright is, 15 times. The first time it is mentioned, in the preamble, it says:

"We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software."

This is front and centre in what must be the most distributed license text ever, and one of the shortest and most readable. So I'm pretty sure any "GPL worshipper" (whatever that is) realises this is the case.

However, interpretation of the GPL is a fairly nuanced topic. Try searching on "rosenlaw oslbook" for a flavour, and in particular a detailed discussion of linking in both the GPL and LGPL. Rosen thinks there's no difference in practice, because the LGPL is so badly drafted and refers back to the GPL anyway. Things may have moved along since then - the book was written in 2004. There may well have been case law since, and it doesn't address GPLv3.

Thousands of taxpayers' personal details potentially exposed online through councils' debt-chasing texts

Martin M

Idiots

I like to reply with "Thank you for your shortlink, but local security policies forbid me from opening them. Please enter the full link at https://bit.ly/<code>, otherwise I will be unable to read your message."

Target of the shortened URL can be varied depending on situation and level of annoyance...

Apple offends devs by asking for Developer Transition Kits back early, then offering them a measly $200 off an M1 Mac

Martin M

Re: An M1 Mac Mini isn't going to break the bank

Or just dispute the credit card payment, if you paid that way and Apple has sold a service that wasn't as described.

However - as suggested in the article - nasty things might happen to your developer account. So perhaps best to just let Apple have its wicked way ...

And just like that, Amazon Web Services forked Elasticsearch, Kibana. Was that part of the plan, Elastic?

Martin M

Re: Bad optics

I think a lot will come down to what happens next. If AWS work hard with a respected open source umbrella organisation to put in a proper governance structure and build a community in which they are a major but not the only contributor, it will look a lot better.

Elasticsearch has never had this community, really, which is why the tricks Elastic are pulling are possible. Don’t get me wrong, it’s an incredible product and building it was dependent on a lot of VC funding which would not have been possible to raise if they didn’t have control.

But no one really wants to be in a position where they come to depend purely on Amazon for the fork, particularly if they want to deploy on other clouds or on prem, distribute as part of a product or run as part of a SaaS setup.

Dropbox basically decimates workforce, COO logs off: Cloud biz promises to be 'more efficient and nimble'

Martin M

You’re right. File sync and share is really a feature now, not a product.

Martin M

You’re right. File sync and share is really a feature now, not a product, for most. Not all, of course, but enough that Dropbox will struggle.

React team observes that running everything on the client can be costly, aims to fix it with Server Components

Martin M

Not disagreeing on the importance of good documentation, although I would rather see a small set that is completely current and addresses the load-bearing walls of the architecture than reams of useless outdated rubbish. Similar at code level. The very best code doesn’t need much at detail level because it’s obvious what it’s doing. But good code is quite rare.

However, this isn’t really about specific documentation for a particular system - it’s about fundamental distributed systems design knowledge. It’s going to be relevant forever, because information can’t travel faster than the speed of light and hence latency will always be important for globally distributed systems. It’s not like this isn’t written down either - see

https://martinfowler.com/articles/distributed-objects-microservices.html

which links back to a 2002 book, and it was widespread knowledge well before then, right back to Deutsch’s seven fallacies of distributed computing in 1994. Yet people somehow manage to graduate from Computer Science courses or have long IT careers without knowing about this or e.g. ACID, locking/deadlocks, consensus, cache invalidation, message sequencing, idempotency, CAP etc. I’m still not quite sure how.

In 2003, on a project with a global user base, as well as having explicit architectural principles covering interface granularity, I insisted on a hardware WAN emulator in front of the local dev/test servers. Turned up to Singapore latency (about 300ms IIRC). People stop designing inefficient interfaces/using them badly quite quickly when they personally feel the pain :-)
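
If you want the back-of-the-envelope version of why that emulator changed behaviour (numbers illustrative, not from the project):

# Back-of-the-envelope: the same screen built with a chatty fine-grained
# interface vs a coarse-grained one, at LAN vs Singapore round-trip latency.
# All numbers are illustrative.
def screen_load_time(round_trips, latency_s, server_time_s=0.05):
    return round_trips * (latency_s + server_time_s)

for latency in (0.001, 0.3):  # ~LAN vs ~Singapore
    chatty = screen_load_time(round_trips=40, latency_s=latency)
    coarse = screen_load_time(round_trips=2, latency_s=latency)
    print(f"latency {latency * 1000:>5.0f}ms: chatty {chatty:5.2f}s, coarse {coarse:5.2f}s")

# On a LAN the chatty design looks fine (~2s); at 300ms it balloons to ~14s,
# which is exactly the pain the WAN emulator made people feel every day.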

Elon Musk says he tried to sell Tesla to Apple, which didn’t bite and wouldn't even meet

Martin M

Butterfly accelerator and brake for the best driver legroom ever! *

* Requires regular air dusting at 80 degree angle. Replacement only as part of sealed engine/drivetrain unit.

Unsecured Azure blob exposed 500,000+ highly confidential docs from UK firm's CRM customers

Martin M

Re: @Sandgrounder - Listen to what Teacher says..

Disagree. 10 years ago I think it’s safe to say this company would have been dumping uploads in an unsecured Apache directory. Apache and the OS wouldn’t have been patched since installation. The ‘server’ would be located in an unlocked stationery cupboard with unencrypted disks, and there would be no RAID or backup regime.

Using a cloud object store actually fixes or helps with most of these problems rather than exacerbating them, and provides some pretty powerful security constructs to those who are capable of using them. On AWS (not sure about Azure) you will be warned proactively and quite strenuously that your unsecured bucket is probably not a good idea. But ultimately, it can’t fix stupid.
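
For the avoidance of doubt, locking a bucket down is a handful of lines - e.g. a boto3 sketch with an illustrative bucket name:

import boto3

# Minimal sketch: turn on S3 Block Public Access for a bucket so a
# misconfigured ACL or policy can't expose it. Bucket name is illustrative.
s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-crm-uploads",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)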

HP bows to pressure, reinstates free monthly ink plan... for existing customers

Martin M

Idiots

Not just that. Anyone with primary school age kids will recognise that lockdown = hundreds of pages of bright, colourful home learning - never used a set of toner cartridges so fast in my life ...

Google Cloud (over)Run: How a free trial experiment ended with a $72,000 bill overnight

Martin M

Idiots

Not sure I buy the "not rocket science". Compared to an old-school centralised mainframe bureau or even telco caps, it would likely be a significant challenge to implement hard spend caps given the highly distributed nature of hyperscale cloud infrastructure, and the complexity and granularity of billing. Particularly without potentially impacting availability and latency in the normal case of credit being available - are you going to have each function execution or database write check a central service before executing? Or distribute available credit information to every server in GCP? I'm not saying there aren't solutions (probably based on streaming predictive analytics etc.) but it's certainly not trivial.
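
To illustrate the shape of the problem, a toy sketch only - the hard bit is doing something like this across a hyperscaler's metering streams without touching the request path:

# Toy sketch of a spend guard: project the current burn rate to the end of the
# billing period and trip a kill switch if it blows through the cap. A real
# hyperscaler version would have to do this across thousands of metering
# streams without adding latency to the request path - which is the hard part.
def should_throttle(spend_so_far, elapsed_hours, period_hours, cap):
    if elapsed_hours <= 0:
        return False
    burn_rate = spend_so_far / elapsed_hours                  # $/hour so far
    projected = spend_so_far + burn_rate * (period_hours - elapsed_hours)
    return projected > cap

# $72k of spend a few hours into the month against a $100 cap trips at once.
print(should_throttle(spend_so_far=72000, elapsed_hours=12,
                      period_hours=24 * 30, cap=100))  # True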

Regardless of the technical question, this is basically an incentive alignment problem though. The clouds should really be forced (through competitive pressure, regulation, court decisions etc.) to forgive any spend above the spend cap. They'd then manage to find an appropriate balance between stopping this kind of thing occurring and the technical costs of doing so.

Not on your Zoom, not on Teams, not Google Meet, not BlueJeans. WebEx, Skype and Houseparty make us itch. No, not FaceTime, not even Twitch

Martin M

Also, the idea that there is no value is rather given the lie by the large numbers of people who choose to use some form of videoconferencing when speaking to their friends and family. A lot of this is about context, poor equipment/setup and overuse in inappropriate situations, rather than a fundamental flaw in the technology.

The poor equipment point is particularly evident for me because at the beginning of lockdown I held my nose and bought a Facebook Portal TV for me and a Facebook Portal+ for my 70+ yo parents, so we could keep in touch and they could watch my kids grow up. They're clinically extremely vulnerable and finding the lack of face-to-face very difficult, particularly as the kids change so fast.

Yes, Facebook. I know. But there still aren't any other proper dedicated consumer videoconferencing devices available as far as I can see, at least with/supporting decent size screens - which is just plain odd. They are fantastic.

Video and sound quality are awesome, and the kids interact very naturally as it works from the other side of the room and will zoom and pan as they run around. We can relax on the sofa later when they're in bed and have an hour's discussion without it being tiring. Eye contact seems pretty good so long as it's mounted on top of the TV. It's not the same as having them here but it's definitely worth having.

It's just a shame VC is so much worse on my work laptop. Completely different experience.

Martin M

As someone who changed jobs at the beginning of April, I've found it really valuable for all of its flaws. Broken eye contact is unnatural, but working closely with people for months without any idea of what they look like and their facial expressions etc. would be much more so. Lack of video chat would have left me feeling very disconnected.

That said, I often turn off video a few minutes into a meeting, especially where there are more than a couple of other people - the 'wall of faces' is not very useful for me. Although in most cases you're looking at a screen share by that point anyway.

And the lipreading/signing points are very good ones.

Shots fired! WordPress's Matt claims Jamstack's marketing is 'not intellectually honest' in debate with Netlify's Matt

Martin M

"And no, nobody gives a damn about comment boards on your dumb-ass website."

There is substantial irony in this opinion being delivered via a comment board...

Prepare your shocked faces: Crypto-coin exchange boss laundered millions of bucks for online auction crooks

Martin M

Re: Oh , the joys of unregulated...

FATF country members are responsible for implementing recommendations on Virtual Assets and Virtual Asset Service Providers. The EU has 5AMLD which mandates that crypto exchanges have to have the same AML controls as banks. This is implemented in the UK in The Money Laundering and Terrorist Financing (Amendment) Regulations 2019 statutory instrument.

So who exactly has been saying money laundering regulation is unnecessary?

Enforcement is necessary for compliance, of course, but the regulation is there.

The perils of building a career on YouTube: Guitar teacher's channel nearly deleted after music publisher complains

Martin M

"how technology giants deal with smaller customers"

If you provide your material to Google and they sell advertising space by it and give you a fraction of that, you are not a customer. You are a supplier. And that really explains all you need to know. Small suppliers to enormous buyers always tend to get a bit screwed.

Doesn't make it right, of course.

Microservices guru says think serverless, not Kubernetes: You don't want to manage 'a towering edifice of stuff'

Martin M

Really?

"The key characteristics of a serverless offering is no server management. I'm not worried about the operating systems or how much memory these things have got; I am abstracted away from all of that".

Technically true but massively misses the point. AWS Lambda requires you to decide how to allocate "memory to your function between 128MB and 3008MB, in 64MB increments" (https://aws.amazon.com/lambda/pricing/). So now you have to capacity manage memory at function level rather than server/cluster level.
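
To be concrete about the knob in question - a boto3 sketch with a hypothetical function name:

import boto3

# Sketch of the knob in question: 'serverless' still means choosing a memory
# size per function. Function name is hypothetical; CPU is allocated in
# proportion to memory, so this is capacity management by another name.
lam = boto3.client("lambda")
lam.update_function_configuration(
    FunctionName="example-report-generator",
    MemorySize=1024,  # per the pricing page quoted above: 64MB increments
)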

There are lots of good things about serverless, but this ain't one.

Gartner on cloud contenders: AWS fails to lower its prices, Microsoft 'cannot guarantee capacity', Google has 'devastating' network outages

Martin M

Re: Gartner in the title of the article...

Some techies have indeed been saying for years that "cloud" only equates to "someone else's computers, somewhere". But it's only true in the same sense that a house equates to bricks.

If you're talking about manually standing up VMs and storage in a datacenter through an API or web console, maybe. Although too many companies seem to screw up building and running internal clouds that try to do even that.

But really, what is driving people to cloud providers is access to a huge number - Amazon have 160+ - of highly automated services, all integrated into the same logging, monitoring, billing and identity/access infrastructure and very often into each other as well. Container management, ESBs, data warehouses, SDN, API gateways, VDI farms, call centre infrastructure, software development tooling, ML model lifecycle management, virtual HSMs, machine learning based PII classification, scale-out graph database, managed PostgreSQL, mobile app identity management - too many to sensibly enumerate on a single web page.

These - most of which are at least reasonable, some best in class - are all available for distributed teams to use and manage in 60+ datacentres in tens of countries. With largely good documentation and no need to file a ticket and wait for weeks to get going (or months, if someone has dropped the ball on capacity management). And which can be completely reproducibly stood up via a bit of Terraform - subject to appropriate governance, of course.

If you can point me to a corporate IT department that offers anything close to those 160+ services, with a similar experience, I'll concede it's just other people's computers. I suspect you'll struggle, because the cloud providers probably invested more into R&D for just one of those services than your entire IT budget for the last few years. There are massive economies of scale in automation - cottage industries within enterprises will just struggle to compete.

Of course cloud is just a tool though, and maintains many of the inherent issues with technology - I agree with that. It's just it does solve a useful number of those issues.

Why cloud costs get out of control: Too much lift and shift, and pricing that is 'screwy and broken'

Martin M

Re: The problem isn't the Cloud, but poor monitoring

Sorry, I think the BS is yours.

There are specialist third-party (not provided by the clouds themselves - that would make no sense as no-one would trust them) cloud spend monitoring and optimisation tools. Some of them are expensive and indeed only make any kind of sense for very large cloud estates. But you can do a great deal with the standard, built-in, essentially free ones.

On reversing out of the cloud: if you generate truly epic quantities of data, that generates some lock-in, but not irreversible. Case in point: Dropbox exited 4 petabytes of data from AWS S3 when they decided they had the scale and capability to build and run their own private storage cloud.

More importantly, and similar to any proprietary vendor including any on-prem ones, there is substantial lock-in if you go for proprietary high-level services as opposed to lower level standards-based ones. There are things you can do to mitigate that a bit (Kubernetes is often one aspect of this), but these tend to increase complexity and unpick a number of benefits of going to the cloud. Essentially, you end up trading potential long term costs of lock-in against short term increased build costs. It's not a new problem, nor is it cloud-specific. The right answer usually depends on how fast you have to get to market.

I've spent a fair amount of time looking at DR for both on-prem and cloud-based services in a fair number of companies, and from a practical perspective DR for cloud-based services tends to be way ahead in my experience, because the clouds make it really easy to implement setups that provide (largely) automated resilience to outages affecting a smallish geographical area (e.g. tens of miles). On-prem DR is often a shitshow on very many levels. And the clouds do effectively provide tape or equivalents - S3 Glacier is backed by it, at least the last time I checked. They won't, of course, be tapes in your possession though, which is I suspect what you're fulminating about.

The one type of disaster that many people building on cloud do not address is wholesale failure of the cloud provider for either technical or business viability reasons. You have to take a view on how likely that is - the really big cloud providers seem to isolate availability zones pretty well nowadays (much better than enterprises - one I reviewed forgot to implement DR for their DHCP servers FFS, and it took a power outage for them to notice). The top three providers are probably too big to fail. If they got into trouble as businesses, the government would probably backstop, not least because they don't want their own stuff falling over. But if you want to mitigate - just sync your data back to base. There are lots of patterns for doing so.

Martin M

Re: The problem isn't the Cloud, but poor monitoring

Your time is the only significant cost, actually. The basics you get in the same way as most people get itemised phone bills for free. Tagging doesn't cost anything. Everything's so automated and integrated it will likely cost very much less than any cost allocation you're trying to do for on-prem services. Remember, cloud services are built from the ground up to be metered at a granular level for the cloud provider - all they've done is extend this out to customers.

From a technical perspective, there are storage charges if you want to retain for a long time, bandwidth charges to download info etc., but those are really really *tiny*. If you choose to use cloud BI services (e.g. AWS QuickSight) to do your reporting rather than desktop-based/on-prem server based analysis, of course you pay for those, but not much - think $18/mo for a dashboard author ranging down to $.30/30 min session for infrequent dashboard viewers.

Martin M

Re: Cloud is expensive

Completely agree - the push towards 'migration of everything on prem to the cloud' is not something I'm comfortable with. IMHO the technical and legal reasons can often be mitigated, but are real concerns sometimes.

Regardless, I'm unconvinced that shifting a bunch of production VMs running applications not designed for the cloud from infrastructure that's already bought, in place and stable will really offer a sensible return on investment unless that infrastructure is unusually expensive for some reason (which is sometimes the case). Doing the migration generally involves good people if it's done well, and people are expensive. If it's not done well, it risks service stability.

But for many new services, cloud can be great.

Martin M

Re: The problem isn't the Cloud, but poor monitoring

That's just completely inaccurate. You can absolutely use and automatically enforce tagging to track resource costs and report costs back to teams/projects/cost centres etc. at a really granular level. Similarly you can control who is allowed to spin up resources. E.g. for AWS there is https://docs.aws.amazon.com/whitepapers/latest/cost-management/introduction.html . It's one of only five AWS Well Architected pillars - https://aws.amazon.com/architecture/well-architected/. I'm pretty sure the other big clouds have equivalents.
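
For example, a minimal boto3 Cost Explorer sketch that reports month-to-date spend by a hypothetical cost-centre tag:

import boto3

# Minimal sketch: month-to-date spend grouped by a (hypothetical) cost-centre
# tag, so each team sees its own bill. Enforcing the tag itself would be done
# separately, e.g. via Service Control Policies or AWS Config rules.
ce = boto3.client("ce")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-06-01", "End": "2021-06-30"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "cost-centre"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])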

You have to set it up right, of course, but frankly if you don't, it's a governance failure on the enterprise side, not something inherent to cloud. I'm not saying businesses don't frequently get themselves in a pickle, but when they do it's often more to do with traditional infrastructure and operations sticking their heads in the sand, refusing to engage and bring their expertise to the table to make sure it works.

Hidden Linux kernel security fixes spotted before release – by using developer chatter as a side channel

Martin M

I’m a big fan of the cloud in general, but unless you’re talking SaaS, I’m afraid I disagree. If a company doesn’t have a basic level of infrastructure and ops maturity, moving to a platform where by default anyone can spin up anything almost instantly will very quickly make things infinitely worse.

The first thing you need to build if you are moving to one of the big clouds is your management and control infrastructure. All the tools are there and easy(ish) to deploy - certainly compared to traditional enterprise IT - but it does need thinking about and is too frequently skipped, with predictable results.

UK utility Severn Trent tests the waters with £4.8m for SCADA monitoring and management in the clouds

Martin M

Analytics computation requirements are very high when someone is running a big ad-hoc analytical query (not infrequently, tens of large servers), and zero if no-one is. Typically, there's a small number of analysts/data scientists who do not query all day, which drives a very peaky workload. Traditionally, they've been provided with quite a large set of servers which are lightly loaded most of the time and run queries horribly slowly during peak workload.

Instead, the 'serverless' (hate that term) analytics services allocate compute to individual jobs, and only charge for that. They are therefore typically cheaper because there's no idling, yet run queries at full speed when required, vastly reducing data scientist thumb-twiddling (and have you seen what a good data scientist earns?).

Post by AddieM below suggests they have racks of Oracle to support their warehouse. I can guarantee you that that storage is not cheap. Could they rearchitect to a more cost-effective, perhaps open source MPP data warehouse without forking out megabucks to an EDW vendor? In theory, but most plump for something as 'appliance-y' as possible to minimise complexity, and those are very spendy. Even equivalent cloud services with dedicated MPP compute (e.g. Redshift et al) tend to be a lot cheaper, and are fully managed.
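
The back-of-the-envelope version, with entirely made-up prices and volumes, looks something like this:

# Crude illustration of the peaky-workload arithmetic: an always-on MPP
# cluster vs usage-priced analytics. Prices and volumes are entirely made up.
CLUSTER_COST_PER_HOUR = 48.0      # hypothetical always-on warehouse
PER_TB_SCANNED = 5.0              # hypothetical usage-based price

hours_in_month = 24 * 30
queries_per_month = 400           # a handful of analysts, bursty usage
tb_per_query = 2.0

always_on = CLUSTER_COST_PER_HOUR * hours_in_month
usage_priced = PER_TB_SCANNED * tb_per_query * queries_per_month

print(f"always-on cluster: ${always_on:,.0f}/month")    # $34,560
print(f"usage-priced:      ${usage_priced:,.0f}/month") # $4,000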

Martin M

See my comment below. *Analytics* computation requirements can indeed fluctuate wildly, and that seems to be what they're talking about here. Plus lots of historical data, which means cheap, reliable storage is highly desirable.

Martin M

Makes a great deal of sense. Particularly if there is a very variable query workload you could stream the information into Azure Data Lake Storage and run queries using Azure Data Lake Analytics. That would provide cost effective storage as well as usage-priced analytics compute instead of relying on provisioning loads of expensive traditional data warehouse nodes (and their associated licenses) that are probably lying fallow most of the time, and insufficient when you do get busy.

This kind of analytical workload is normally a slam dunk for cloud over on-prem, and doesn't usually pose a direct threat to the integrity or availability of operational systems - confidentiality may obviously still be very important, depending on the nature of the data. The data flow is from the sensitive operational network to the less sensitive cloud analytics one, and you can make going the reverse way very difficult (even data diodes etc. for very high assurance).

The exception is possibly the monitoring side of things, where a DoS/compromise might slow some types of response. But it sounds like the biggest problem would be plain old non-malicious plant network reliability issues - any response would have to be resilient to that, and thus to more malicious attacks.

Putting the d'oh! in Adobe: 'Years of photos' permanently wiped from iPhones, iPads by bad Lightroom app update

Martin M

After this, I’m wondering if some kind of reconciliation is in order to make sure LR Classic hasn’t missed anything during the import.

I use LR on mobile as it’s one of the few ways of getting a RAW capture on an iPhone, given the built in camera app won’t do it.

Martin M

Re: Class action suit in 3... 2... 1...

The concern I have is that the same engineering standards are likely applied to both.

Martin M

Re: 'what if this was a more subtle bug nuking a handful of photos over a period of time'

And that’s why I regularly replicate from Lightroom on Windows to a ZFS based NAS server with mirrored storage and regular znapzend snapshots which are replicated to another ZFS server in the attic. These get progressively thinned but some of them are retained indefinitely. Plus I continuously back up from the Windows box to Backblaze, which retains versions for 30 days. The subset of RAW photos that I rate highly, develop and export to JPEG also get synced to OneDrive, Google Drive and Amazon Prime photos. Many of those get printed.

I’m not stupid, and in my day job have implemented infrastructure, DR and BCP for trading systems doing in and out payments of in excess of a billion dollars a day (much less netted obviously).

None of this, however, will protect me from a deletion that *I do not realise has occurred*, which is what I was talking about. Plus I find a DAM that deletes photos offensive.

Martin M

Idiots

I use iOS Lightroom and have paid for CC Photography Cloud since it started. Although cloud sync reduces the impact of this particular issue, it could still have affected any photos not synced. And yet three days after Adobe knew about a serious, avoidable data loss defect, they have still not emailed me to say "do not open LR mobile until you have updated the app". I had to find out via this Reg reporting on a forum post.

Putting avoiding corporate embarrassment ahead of customers' data is not a great look when selling a DAM which is first and foremost about reliably storing media. Combined with the epic fail in QA and release control, this is giving me some serious pause for thought. I'd already been taking a careful look at Exposure X5 - which is faster and can work with any cloud file sync vs Adobe's absurdly expensive cloud storage - and this may speed things up.

To those saying backup is a panacea - what if this was a more subtle bug nuking a handful of photos over a period of time? I have a proper backup regime for my main catalogue, but with over 20,000 photos (probably not uncommon for the type of person using Lightroom) this would be really difficult to notice happening, especially if I hadn't graded the photos yet. I'd just silently lose memories. Backup is necessary, but nothing can replace careful software engineering.
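
The sort of reconciliation I mention in my other comment is nothing fancy - a periodic inventory of the originals diffed against the last run, roughly like this (paths illustrative):

import hashlib
import json
from pathlib import Path

# Sketch of a crude reconciliation: hash every original, diff against the
# previous run's inventory and shout about anything that has quietly vanished.
# Paths are illustrative.
PHOTO_ROOT = Path("D:/Photos/Originals")
INVENTORY = Path("photo_inventory.json")

current = {
    str(p.relative_to(PHOTO_ROOT)): hashlib.sha256(p.read_bytes()).hexdigest()
    for p in PHOTO_ROOT.rglob("*") if p.is_file()
}

if INVENTORY.exists():
    previous = json.loads(INVENTORY.read_text())
    missing = sorted(set(previous) - set(current))
    if missing:
        print(f"WARNING: {len(missing)} files present last run are now gone:")
        for name in missing:
            print("  ", name)

INVENTORY.write_text(json.dumps(current, indent=2))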

Marketing: Wow, that LD8 data centre outage was crazy bad. Still, can't get worse, can it? Finance: HOLD MY BEER

Martin M

Re: The Cloud

Do you also design your own CPUs and motherboards, run your own internet backbone and have the ability to manufacture your own diesel supplies?

If not, it’s just a question of where you choose to draw the line. There are multiple valid answers to that.

Your approach is theoretically valid, but having seen the state of many on-prem DCs, and some of the people who run them, I have a slightly jaundiced view of what it actually looks like in practice. I accept there is some well-run on-prem out there though, and it can be great. It’s just not that common, and getting less so as the top talent is getting hoovered up by the cloud giants.

Martin M

Re: The Cloud

This was an electrical problem. Unless your onsite facilities staff are qualified and able to fix that kind of thing and have all the relevant spares to hand, you're probably going to be dependent on someone else whether you have physical access or not. I suspect Equinix might have rather more leverage on suppliers, given their size.

The visibility/comms point is very true though and that was clearly a key problem here.

If you're worried about disposal of disks you should probably look into encryption at rest, wherever they are. It's really easy in the cloud - robust key management infrastructure is there already and seamlessly integrated into lots of different types of block storage/object storage/database services. It's a nice advert for some of the advantages of using cloud infrastructure - you don't have to do all of this kind of thing yourself.
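
As a sketch of what 'really easy' looks like on AWS (names illustrative; Azure and GCP have close equivalents):

import boto3

# Sketch: create a customer-managed KMS key and make it the default
# encryption for a bucket, so disposal of the underlying disks stops being
# your problem. Names are illustrative.
kms = boto3.client("kms")
s3 = boto3.client("s3")

key_arn = kms.create_key(Description="example data-at-rest key")["KeyMetadata"]["Arn"]

s3.put_bucket_encryption(
    Bucket="example-archive-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_arn,
            }
        }]
    },
)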

Martin M

Re: The Cloud

Err, no. This is simple colocation, which dates much further back than the term cloud and means something completely different. Feel free to check the NIST definition if you're still unsure.

Worldwide Google services – from GCP to G Suite – hit with the outage stick

Martin M

Re: A clear case of all your eggs

Which systems are you aware of that were designed for 7 nines application level availability? Did they achieve it over any kind of sensible period, e.g. years? What technologies and processes did they use to achieve less than 263ms / month downtime?
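
For reference, the arithmetic those questions rest on:

# Allowed downtime per average month (365.25/12 days) for various targets.
SECONDS_PER_MONTH = 365.25 / 12 * 24 * 3600   # ~2,629,800

for label, availability in [("four nines", 0.9999),
                            ("five nines", 0.99999),
                            ("seven nines", 0.9999999)]:
    budget = SECONDS_PER_MONTH * (1 - availability)
    print(f"{label}: {budget:,.3f} seconds/month of downtime allowed")

# Seven nines leaves roughly a quarter of a second a month - not even enough
# for a single failover.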

I’ve worked on some pretty critical systems that were designed for, and achieved, 99.99% availability. They cost a flipping fortune.

One replaced a VMS cluster, usually recognised to be a pretty reliable infrastructure, where an experienced operator made a mistake and caused a two day outage. That rather screwed its availability stats, and nearly took down a very large business.

Very few applications have ever actually achieved even 99.999%.

Microsoft admits pandemic caused Azure ‘constraints’ and backlog of customer quota requests

Martin M

Favourable handling

The one thing that no-one seems to be talking about is that it appears that Microsoft decided to unilaterally divert Azure capacity to one of their own services over meeting increased capacity requirements for their customers. We'll never know the criticality of the quota requests that were turned down, but were they all less important than e.g. not turning down the Teams video encoding quality/resolution knob?

Corona coronavirus hiatus: Euro space agency to put Sun, Mars probes in safe mode while boffins swerve pandemic

Martin M

Re: Why bother putting them into safe mode?

Evidently the irony intended was far too subtle, despite the references to pajamas and Netflix.
