Posts by Martin M

71 posts • joined 12 Apr 2016


Prepare your shocked faces: Crypto-coin exchange boss laundered millions of bucks for online auction crooks

Martin M

Re: Oh, the joys of unregulated...

FATF country members are responsible for implementing its recommendations on Virtual Assets and Virtual Asset Service Providers. The EU has 5AMLD, which mandates that crypto exchanges have the same AML controls as banks. This is implemented in the UK in the Money Laundering and Terrorist Financing (Amendment) Regulations 2019 statutory instrument.

So who exactly has been saying money laundering regulation is unnecessary?

Enforcement is necessary for compliance, of course, but the regulation is there.

The perils of building a career on YouTube: Guitar teacher's channel nearly deleted after music publisher complains

Martin M

"how technology giants deal with smaller customers"

If you provide your material to Google and they sell advertising space alongside it and give you a fraction of the proceeds, you are not a customer. You are a supplier. And that really explains all you need to know. Small suppliers to enormous buyers always tend to get a bit screwed.

Doesn't make it right, of course.

Microservices guru says think serverless, not Kubernetes: You don't want to manage 'a towering edifice of stuff'

Martin M

Really?

"The key characteristics of a serverless offering is no server management. I'm not worried about the operating systems or how much memory these things have got; I am abstracted away from all of that".

Technically true, but it massively misses the point. AWS Lambda requires you to allocate "memory to your function between 128MB and 3008MB, in 64MB increments" (https://aws.amazon.com/lambda/pricing/). So now you have to capacity-manage memory at function level rather than server/cluster level.
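To make that concrete, here's a back-of-envelope sketch of the trade-off (the per-GB-second price is an assumed ballpark, not an authoritative figure, and more memory also buys more CPU, so real durations will vary):

    # Rough Lambda cost model: you pay per GB-second, so the memory setting you
    # choose per function is still a capacity-management decision.
    PRICE_PER_GB_SECOND = 0.0000166667  # assumed/illustrative price

    def monthly_cost(memory_mb, avg_duration_ms, invocations):
        gb_seconds = (memory_mb / 1024.0) * (avg_duration_ms / 1000.0) * invocations
        return gb_seconds * PRICE_PER_GB_SECOND

    # Same workload at different memory settings
    for mem in (128, 512, 3008):
        print(mem, round(monthly_cost(mem, avg_duration_ms=200, invocations=5_000_000), 2))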

There are lots of good things about serverless, but this ain't one.

Gartner on cloud contenders: AWS fails to lower its prices, Microsoft 'cannot guarantee capacity', Google has 'devastating' network outages

Martin M

Re: Gartner in the title of the article...

Some techies have indeed been saying for years that "cloud" only equates to "someone else's computers, somewhere". But it's only true in the same sense that a house equates to bricks.

If you're talking about manually standing up VMs and storage in a datacenter through an API or web console, maybe. Although too many companies seem to screw up building and running internal clouds that try to do even that.

But really, what is driving people to cloud providers is access to a huge number - Amazon have 160+ - of highly automated services, all integrated into the same logging, monitoring, billing and identity/access infrastructure and very often into each other as well. Container management, ESBs, data warehouses, SDN, API gateways, VDI farms, call centre infrastructure, software development tooling, ML model lifecycle management, virtual HSMs, machine learning based PII classification, scale-out graph database, managed PostgreSQL, mobile app identity management - too many to sensibly enumerate on a single web page.

These - most of which are at least reasonable, some best in class - are all available for distributed teams to use and manage in 60+ datacentres in tens of countries. With largely good documentation and no need to file a ticket and wait for weeks to get going (or months, if someone has dropped the ball on capacity management). And which can be completely reproducibly stood up via a bit of Terraform - subject to appropriate governance, of course.

If you can point me to a corporate IT department that offers anything close to those 160+ services, with a similar experience, I'll concede it's just other people's computers. I suspect you'll struggle, because the cloud providers probably invested more into R&D for just one of those services than your entire IT budget for the last few years. There are massive economies of scale in automation - cottage industries within enterprises will just struggle to compete.

Of course cloud is just a tool, and it retains many of the issues inherent in any technology - I agree with that. It's just that it does solve a useful number of those issues.

Why cloud costs get out of control: Too much lift and shift, and pricing that is 'screwy and broken'

Martin M

Re: The problem isn't the Cloud, but poor monitoring

Sorry, I think the BS is yours.

There are specialist third-party (not provided by the clouds themselves - that would make no sense as no-one would trust them) cloud spend monitoring and optimisation tools. Some of them are expensive and indeed only make any kind of sense for very large cloud estates. But you can do a great deal with the standard, built-in, essentially free ones.

On reversing out of the cloud: if you generate truly epic quantities of data, that generates some lock-in, but not irreversible. Case in point: Dropbox exited 4 petabytes of data from AWS S3 when they decided they had the scale and capability to build and run their own private storage cloud.

More importantly, and similar to any proprietary vendor including any on-prem ones, there is substantial lock-in if you go for proprietary high-level services as opposed to lower level standards-based ones. There are things you can do to mitigate that a bit (Kubernetes is often one aspect of this), but these tend to increase complexity and unpick a number of benefits of going to the cloud. Essentially, you end up trading potential long term costs of lock-in against short term increased build costs. It's not a new problem, nor is it cloud-specific. The right answer usually depends on how fast you have to get to market.

I've spent a fair amount of time looking at DR for both on-prem and cloud-based services in a fair number of companies, and from a practical perspective DR for cloud-based services tends to be way ahead in my experience, because the clouds make it really easy to implement setups that provide (largely) automated resilience to outages affecting a smallish geographical area (e.g. tens of miles). On-prem DR is often a shitshow on very many levels. And the clouds do effectively provide tape or equivalents - S3 Glacier is backed by it, at least the last time I checked. They won't, of course, be tapes in your possession, which is I suspect what you're fulminating about.

The one type of disaster that many people building on cloud do not address is wholesale failure of the cloud provider for either technical or business viability reasons. You have to take a view on how likely that is - the really big cloud providers seem to isolate availability zones pretty well nowadays (much better than enterprises - one I reviewed forgot to implement DR for their DHCP servers FFS, and it took a power outage for them to notice). The top three providers are probably too big to fail. If they got into trouble as businesses, the government would probably backstop, not least because they don't want their own stuff falling over. But if you want to mitigate - just sync your data back to base. There are lots of patterns for doing so.

Martin M

Re: The problem isn't the Cloud, but poor monitoring

Your time is the only significant cost, actually. You get the basics for free, in the same way most people get itemised phone bills for free. Tagging doesn't cost anything. Everything's so automated and integrated that it will likely cost very much less than any cost allocation you're trying to do for on-prem services. Remember, cloud services are built from the ground up to be metered at a granular level for the cloud provider - all they've done is extend that out to customers.
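As a minimal sketch (boto3 and Cost Explorer assumed to be enabled; the "CostCentre" tag key is illustrative), pulling a month's spend broken down by tag is a few lines:

    import boto3

    ce = boto3.client("ce")  # Cost Explorer
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2020-06-01", "End": "2020-07-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "CostCentre"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])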

From a technical perspective there are storage charges if you want to retain data for a long time, bandwidth charges to download it etc., but those are really, really *tiny*. If you choose to use cloud BI services (e.g. AWS QuickSight) for your reporting rather than desktop-based/on-prem server-based analysis, of course you pay for those, but not much - think $18/mo for a dashboard author, ranging down to $0.30 per 30-minute session for infrequent dashboard viewers.

Martin M

Re: Cloud is expensive

Completely agree - the push towards 'migrate everything on-prem to the cloud' is not something I'm comfortable with. IMHO the technical and legal objections can often be mitigated, but they are real concerns sometimes.

Regardless, I'm unconvinced that shifting a bunch of production VMs running applications not designed for the cloud from infrastructure that's already bought, in place and stable will really offer a sensible return on investment unless that infrastructure is unusually expensive for some reason (which is sometimes the case). Doing the migration generally involves good people if it's done well, and people are expensive. If it's not done well, it risks service stability.

But for many new services, cloud can be great.

Martin M

Re: The problem isn't the Cloud, but poor monitoring

That's just completely inaccurate. You can absolutely use and automatically enforce tagging to track resource costs and report costs back to teams/projects/cost centres etc. at a really granular level. Similarly you can control who is allowed to spin up resources. E.g. for AWS there is https://docs.aws.amazon.com/whitepapers/latest/cost-management/introduction.html . It's one of only five AWS Well Architected pillars - https://aws.amazon.com/architecture/well-architected/. I'm pretty sure the other big clouds have equivalents.
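As one hedged example of what "automatically enforce" can look like (tag keys and rule name are illustrative; SCPs/tag policies are the other usual route), an AWS Config managed rule can flag anything missing the required tags:

    import json
    import boto3

    config = boto3.client("config")
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "require-cost-allocation-tags",
            "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
            "InputParameters": json.dumps({"tag1Key": "CostCentre", "tag2Key": "Project"}),
        }
    )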

You have to set it up right, of course, but frankly if you don't, it's a governance failure on the enterprise side, not something inherent to cloud. I'm not saying businesses don't frequently get themselves in a pickle, but when they do it's often more to do with traditional infrastructure and operations sticking their heads in the sand, refusing to engage and bring their expertise to the table to make sure it works.

Hidden Linux kernel security fixes spotted before release – by using developer chatter as a side channel

Martin M

I’m a big fan of the cloud in general, but unless you’re talking SaaS, I’m afraid I disagree. If a company doesn’t have a basic level of infrastructure and ops maturity, moving to a platform where by default anyone can spin up anything almost instantly will very quickly make things infinitely worse.

The first thing you need to build if you are moving to one of the big clouds is your management and control infrastructure. All the tools are there and easy(ish) to deploy - certainly compared to traditional enterprise IT - but it does need thinking about and is too frequently skipped, with predictable results.

UK utility Severn Trent tests the waters with £4.8m for SCADA monitoring and management in the clouds

Martin M

Analytics computation requirements are very high when someone is running a big ad-hoc analytical query (not infrequently, tens of large servers), and zero if no-one is. Typically there's a small number of analysts/data scientists who do not query all day, which drives a very peaky workload. Traditionally, they've been provided with quite a large set of servers which are lightly loaded most of the time and run queries horribly slowly during peak workload.

Instead, the 'serverless' (hate that term) analytics services allocate compute to individual jobs, and only charge for that. They are therefore typically cheaper, because there's no idling, yet still run queries at full speed when required, vastly reducing data scientist thumb-twiddling (and have you seen what a good data scientist earns?).

The post by AddieM below suggests they have racks of Oracle to support their warehouse. I can guarantee you that that storage is not cheap. Could they rearchitect to a more cost-effective, perhaps open source, MPP data warehouse without forking out megabucks to an EDW vendor? In theory, but most plump for something as 'appliance-y' as possible to minimise complexity, and those are very spendy. Even equivalent cloud services with dedicated MPP compute (e.g. Redshift et al) tend to be a lot cheaper, and are fully managed.
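For a flavour of the per-job model (boto3 assumed; database, table and results bucket are made up), an Athena query is submitted like this and billed on data scanned rather than on idle servers:

    import boto3

    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString="SELECT site_id, avg(flow_rate) FROM telemetry GROUP BY site_id",
        QueryExecutionContext={"Database": "scada_lake"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    print(resp["QueryExecutionId"])  # nothing is charged while no queries run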

Martin M

See my comment below. *Analytics* computation requirements can indeed fluctuate wildly, and that seems to be what they're talking about here. Plus lots of historical data, which means cheap, reliable storage is highly desirable.

Martin M

Makes a great deal of sense. Particularly if there is a very variable query workload you could stream the information into Azure Data Lake Storage and run queries using Azure Data Lake Analytics. That would provide cost effective storage as well as usage-priced analytics compute instead of relying on provisioning loads of expensive traditional data warehouse nodes (and their associated licenses) that are probably lying fallow most of the time, and insufficient when you do get busy.

This kind of analytical workload is normally a slam dunk for cloud over on-prem, and doesn't usually pose a direct threat to the integrity or availability of operational systems - though confidentiality may obviously still be very important, depending on the nature of the data. The data flow is from the sensitive operational network to the less sensitive cloud analytics one, and you can make going the reverse way very difficult (even data diodes etc. for very high assurance).

The exception is possibly the monitoring side of things, where a DoS/compromise might slow some types of response. But it sounds like the biggest problem would be plain old non-malicious plant network reliability issues - any response would have to be resilient to that, and thus to more malicious attacks.

Putting the d'oh! in Adobe: 'Years of photos' permanently wiped from iPhones, iPads by bad Lightroom app update

Martin M

After this, I’m wondering if some kind of reconciliation is in order to make sure LR Classic hasn’t missed anything during the import.

I use LR on mobile as it’s one of the few ways of getting a RAW capture on an iPhone, given the built in camera app won’t do it.

Martin M

Re: Class action suit in 3... 2... 1...

The concern I have is that the same engineering standards are likely applied to both.

Martin M

Re: 'what if this was a more subtle bug nuking a handful of photos over a period of time'

And that’s why I regularly replicate from Lightroom on Windows to a ZFS based NAS server with mirrored storage and regular znapzend snapshots which are replicated to another ZFS server in the attic. These get progressively thinned but some of them are retained indefinitely. Plus I continuously back up from the Windows box to Backblaze, which retains versions for 30 days. The subset of RAW photos that I rate highly, develop and export to JPEG also get synced to OneDrive, Google Drive and Amazon Prime photos. Many of those get printed.

I’m not stupid, and in my day job have implemented infrastructure, DR and BCP for trading systems doing in and out payments of in excess of a billion dollars a day (much less netted obviously).

None of this, however, will protect me from a deletion that *I do not realise has occurred*, which is what I was talking about. Plus I find a DAM that deletes photos offensive.

Martin M

Idiots

I use iOS Lightroom and have paid for CC Photography Cloud since it started. Although cloud sync reduces the impact of this particular issue, it could still have affected any photos not synced. And yet three days after Adobe knew about a serious, avoidable data loss defect, they have still not emailed me to say "do not open LR mobile until you have updated the app". I had to find out via this Reg reporting on a forum post.

Putting avoiding corporate embarrassment ahead of customers' data is not a great look when selling a DAM which is first and foremost about reliably storing media. Combined with the epic fail in QA and release control, this is giving me serious pause for thought. I'd already been taking a careful look at Exposure X5 - which is faster and can work with any cloud file sync vs Adobe's absurdly expensive cloud storage - and this may speed things up.

To those saying backup is a panacea - what if this was a more subtle bug nuking a handful of photos over a period of time? I have a proper backup regime for my main catalogue, but with over 20,000 photos (probably not uncommon for the type of person using Lightroom) this would be really difficult to notice happening, especially if I hadn't graded the photos yet. I'd just silently lose memories. Backup is necessary, but nothing can replace careful software engineering.
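The sort of reconciliation I have in mind is nothing clever - roughly this sketch, assuming a hypothetical list of filenames exported from the catalogue, one per line:

    from pathlib import Path

    RAW_EXTENSIONS = {".dng", ".cr2", ".jpg"}

    def photo_names(root):
        # all photo filenames under a folder tree (originals or backup)
        return {p.name for p in Path(root).rglob("*") if p.suffix.lower() in RAW_EXTENSIONS}

    catalogue = set(Path("catalogue_files.txt").read_text().splitlines())
    on_disk = photo_names("/photos/originals")

    print("In catalogue but missing on disk:", sorted(catalogue - on_disk)[:20])
    print("On disk but not in catalogue:", sorted(on_disk - catalogue)[:20])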

Marketing: Wow, that LD8 data centre outage was crazy bad. Still, can't get worse, can it? Finance: HOLD MY BEER

Martin M

Re: The Cloud

Do you also design your own CPUs and motherboards, run your own internet backbone and have the ability to manufacture your own diesel supplies?

If not, it’s just a question of where you choose to draw the line. There are multiple valid answers to that.

Your approach is theoretically valid, but having seen the state of many on-prem DCs, and some of the people who run them, I have a slightly jaundiced view of what it actually looks like in practice. I accept there is some well-run on-prem out there though, and it can be great. It’s just not that common, and getting less so as the top talent is getting hoovered up by the cloud giants.

Martin M

Re: The Cloud

This was an electrical problem. Unless your onsite facilities staff are qualified and able to fix that kind of thing and have all the relevant spares to hand, you're probably going to be dependent on someone else whether you have physical access or not. I suspect Equinix might have rather more leverage on suppliers, given their size.

The visibility/comms point is very true though and that was clearly a key problem here.

If you're worried about disposal of disks you should probably look into encryption at rest, wherever they are. It's really easy in the cloud - robust key management infrastructure is there already and seamlessly integrated into lots of different types of block storage/object storage/database services. It's a nice advert for some of the advantages of using cloud infrastructure - you don't have to do all of this kind of thing yourself.
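For example (boto3 assumed; bucket name and key alias are illustrative), turning on default KMS encryption at rest for a bucket is a single call - no HSMs to rack or key ceremonies to run yourself:

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_encryption(
        Bucket="example-archive-bucket",
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-archive-key",
                }
            }]
        },
    )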

Martin M

Re: The Cloud

Err, no. This is simple colocation, which dates much further back than the term cloud and means something completely different. Feel free to check the NIST definition if you're still unsure.

Worldwide Google services – from GCP to G Suite – hit with the outage stick

Martin M

Re: A clear case of all your eggs

Which systems are you aware of that were designed for 7 nines application level availability? Did they achieve it over any kind of sensible period, e.g. years? What technologies and processes did they use to achieve less than 263ms / month downtime?

I’ve worked on some pretty critical systems that were designed for, and achieved, 99.99% availability. They cost a flipping fortune.

One replaced a VMS cluster, usually recognised to be a pretty reliable infrastructure, where an experienced operator made a mistake and caused a two day outage. That rather screwed its availability stats, and nearly took down a very large business.

Very few applications have ever actually achieved even 99.999%.
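For reference, the arithmetic behind those figures - allowed downtime per average month for a given number of nines:

    AVG_MONTH_SECONDS = 365.25 * 24 * 3600 / 12  # ~2,629,800 s

    for nines in (3, 4, 5, 7):
        downtime_s = (10 ** -nines) * AVG_MONTH_SECONDS
        print(f"{nines} nines: {downtime_s:.3f} s/month")
    # 7 nines comes out at ~0.263 s, i.e. the ~263 ms per month quoted above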

Microsoft admits pandemic caused Azure ‘constraints’ and backlog of customer quota requests

Martin M

Favourable handling

The one thing that no-one seems to be talking about is that Microsoft appears to have unilaterally diverted Azure capacity to one of their own services rather than meeting increased capacity requests from their customers. We'll never know the criticality of the quota requests that were turned down, but were they all less important than, say, not turning down the Teams video encoding quality/resolution knob?

Corona coronavirus hiatus: Euro space agency to put Sun, Mars probes in safe mode while boffins swerve pandemic

Martin M

Re: Why bother putting them into safe mode?

Evidently the irony intended was far too subtle, despite the references to pajamas and Netflix.

Martin M

Why bother putting them into safe mode?

When you could just stick in a VPN. Staff could command interplanetary spacecraft from their spare rooms while wearing pyjamas and watching Netflix on the other monitor...

Your Agile-built IT platform was 'terrible', Co-Op Insurance chief complained to High Court

Martin M

Idiots

Depends on the flavour of Agile approach. Scrum is the most common, and is all many people know, but isn't at all sensible for ops - perhaps that was what the answer was based on?

But Kanban can work rather well for teams doing a mix of incremental improvement, releases, incident response, daily tasks, support for major project deliveries etc.. It's not perfect out of the box.

Not sure if they're still doing it now, but Netflix ops were using a tweaked version of it a while ago - well prior to the crescendo of Agile hype - and they generally know what they're doing. Others too, if you Google. I've seen it work well - it's no big deal, really more of a visualisation/formalisation of what good ops teams tend to do naturally anyway. Really just a task board with some constraints on it that tend to encourage helpful behaviour. It must of course be integrated with your ticketing system to work.

Problems at Oracle's DynDNS: Domain registration customers transferred at short notice, nameserver records changed

Martin M

Idiots

Completely support the idea of going after the card company if the company you're buying services from has misrepresented and/or not supplied said services - see https://www.which.co.uk/consumer-rights/advice/can-i-claim-on-my-credit-card-when-something-goes-wrong

But is "theft of services" actually a thing in this context? I've not heard of it, and a brief Google suggests it is a term in use in the US, but applies to cases where someone forceably obtains services without paying for them, not where payment has been taken and service is not provided. Also from a legal perspective theft refers to a crime, but this would - like it or not - presumably be a civil matter under contract law.

It might be easier/more fun to use RICO if you think you can make out the case that Oracle is an organised criminal enterprise :)

Step away from that Windows 7 machine, order UK cyber-cops: It's not safe for managing your cash digitally

Martin M

What if they decide to use it to DoS things on which you depend, or for credential stuffing your mum's email account?

Martin M

Re: Upgrade from Windows 7

Fair call - I did mean ‘browser that has security defects or exposes them in the platform it sits on’.

Right now - depending on whether not being able to validate crypto signatures indirectly enables any browsable remote exploits (haven't thought about it, but it sounds a bit worrying) - that could potentially be any of them that use crypt32.dll.

Martin M

Re: Upgrade from Windows 7

As long as no one e.g. uses a browser with security defects. Plenty of ways to get compromised that don't require an inbound connection.

Martin M

Re: You want that again?

I'm not particularly a Microsoft fan. Hell, I've contributed patches to the Linux kernel in the dim and distant past. But Linux is unfortunately still not an OS for the average desktop user, if only because it's still a niche concern and therefore not a must-support platform for lots of software that people want to use. So for many people it's Windows or Mac, and the latter (including hardware) is too expensive for most, and actually not supported for as long: while newer macOS releases are free, they drop support for older hardware quite quickly.

I fully agree the methods used to push Windows 10 on people who didn't want it were pretty appalling, and the persistence of the nagging and the way Microsoft ramped it up were massively annoying. Why shouldn't someone stick with a supported OS if they choose? But for those who did want it - and there will have been plenty - it allowed them to upgrade to an OS with long-term support available without having to pay money to do so. There's clearly some value in it for the latter.

Microsoft obviously benefited by shifting people early too, of course, and that was the quid pro quo - upgrade for free now, or pay later. Although comments further down the thread suggest you can still activate W10 with W7 license keys, so maybe they never really closed the upgrade programme at all.

I don't blame Microsoft for ending support for very old software - it has to happen sometime. Resuming nagging on machines that can see the Internet would be proportionate now that support has actually ended. No-one should be running an unsupported OS connected to the Internet.

According to Krebs on Security, there's just been a leak that there's a fix for a really serious vulnerability in the Windows core crypto library in next week's Patch Tuesday. Will be interesting to see if Microsoft backport it to Windows 7 or not.

Martin M

It really would be the civilised thing to do for Microsoft to heavily promote in-OS an upgrade to some form of Windows 10, making clear it’s the only safe thing to do. Perhaps with a permanent desktop background suggesting an upgrade to Home, so as not to devalue the free upgrade programme they did up to 2016 and the money people have paid for upgrades since.

I know there are people here who will say “forcing users to upgrade to a privacy-compromising OS is bad”, but Windows 7 has had an excellent innings (much longer than most Linux LTS releases) and realistically most of those in this situation won’t have the expertise to move to Linux or the money to upgrade, and won’t care. Would you rather the Internet was awash with compromised machines that affect everyone?

Londoner who tried to blackmail Apple with 300m+ iCloud account resets was reusing stale old creds

Martin M

Re: Stale old creds...

You say just “force the user to prove it’s them”, which is a spectacularly circular argument. The definition of authentication is asking the user to prove who they are - and if you had a better method of doing so than the password, you wouldn’t need the password. It also ignores the possibility that accounts may be pseudonymous, and therefore have no corresponding real world identity at all.

Finally, it’s unclear from the article whether Apple knew which accounts were potentially affected. Forcing a password reset on the entire user population would have been disproportionate, particularly as it turns out that most of the credentials were stale and there had in fact *been no data breach* (according to Apple).

Internet Society CEO: Most people don't care about the .org sell-off – and nothing short of a court order will stop it

Martin M

Idiots

The mean domain valuation may be $100, but the effective valuation of domains that are actively being used for something important will be far higher.

Many domains will not be renewed if prices are ramped significantly. Domain speculation will abruptly become uneconomic (increased costs, and no-one wants to buy a new .org any more and be subject to rampant monopoly rent extraction) and domains bought for brand protection purposes will be dropped. As a pure guess, that might easily be 90% of all domains.

Those who stay are going to have to pay *a lot*. Which they will, as switching costs and risks are high. Would a charity's IT Director really take a decision that might pay off for their successor 10 years down the line, while incurring all of the costs and risks on their watch?

I'm sure the private equity company's economists will have a model for exactly how to price to squeeze the most out for them. But, for example, if it costs a largish organisation £150k to switch, they can probably be charged £15k a year for a domain name. The higher price (assuming there can only be one price) would force switches by - or, in the extreme, bankruptcies of - organisations on the margin, but pricing optimisations in these situations tend to result in very high prices and many fewer customers.
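The rough sums, with illustrative numbers: at £150k to switch and a ten-year planning horizon, anything under about £15k/year leaves staying put the 'rational' choice.

    switch_cost = 150_000       # illustrative one-off cost of moving off .org
    horizon_years = 10
    max_annual_fee = switch_cost / horizon_years
    print(max_annual_fee)       # 15000.0 - hence the £15k/year figure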

It may not break the bank for the likes of MSF but that doesn't stop it being truly offensive. That £15k might have saved a lot of lives. And those who are forced to switch are forced to bear additional risks and costs. Meanwhile, the private equity company makes off with their risk-free loot.

We are absolutely, definitively, completely and utterly out of IPv4 addresses, warns RIPE

Martin M

Re: Lies, damned lies, and statistics that don't lie.

Your timing's out. In 1994 I was connected from my college's computer room via 10BASE-T directly to the university MAN. There were a few very old (probably 5-10 year old) PCs in another room that used coax, but they felt pretty ancient and were only used by a few geeks prepared to read their email via Pine at the Solaris command line. They got ripped out in 1995.

The university was connected to the JANET (UK Joint Academic Network) WAN. If I remember correctly (I may not), FTP'ing large files over this WAN link often saturated the local 10 Mb/s Ethernet connection.

Modem-wise, acoustic couplers were long gone. I think I got my first modem - 2400 baud, as I was buying it from saved-up pocket money - in 1991, and it connected straight to the phone line.

It took quite a while for TCP/IP to get widespread acceptance. First implementations were in 1975 and it didn't even make it into the Windows networking stack as standard until 1995. Most homes didn't have it when I left Uni in 1997, and lots of businesses were still on older technologies. So IPv6 is actually doing pretty reasonably by some measures.

You're flowing it wrong: Bad network route between Microsoft, Apple blamed for Azure, O365 MFA outage

Martin M

Not really, distributed computing has been around for a while. Deutsch et al’s eight fallacies of distributed computing were written in 1994-1997 and are all applicable to a cloud world - https://en.m.wikipedia.org/wiki/Fallacies_of_distributed_computing

“The network is reliable” is fallacy #1. Not considering failure cases properly for a service run by a different organisation at the other end of a WAN is pretty ridiculous. Experienced designers/architects don’t trust the network between two of their own services in neighbouring racks in a physical datacentre.

(Side note: Anyone running service in a cloud should assume they’re going to eventually see some truly weird failure conditions given the multiple levels of compute and network virtualisation stacked atop each other and run on heterogenous no-name hardware. If your application’s designed and monitored correctly, it shouldn’t matter that much.)

I can understand why problems connecting to APNs would cause problems with messaging, and hence with authenticating iOS users. What is harder to understand is why a backlog formed and caused further problems. Keeping a long backlog of remote API requests (or doing unbounded retries etc.) that are irrelevant after a few tens of seconds, because they feed into an interactive system, is not a desirable property...
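A sketch of what I mean (names invented): requests feeding an interactive flow should carry a deadline and be dropped once stale, rather than piling up or being retried indefinitely.

    import time

    class PushRequest:
        def __init__(self, payload, ttl_seconds=30):
            self.payload = payload
            self.deadline = time.monotonic() + ttl_seconds

    def drain(queue, send):
        for req in queue:
            if time.monotonic() > req.deadline:
                continue  # the user has long since retried or given up; don't grow the backlog
            send(req.payload)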

Tesco parking app hauled offline after exposing 10s of millions of Automatic Number Plate Recognition images

Martin M

Re: Wtf

Note that under GDPR you don’t necessarily have to have someone’s name for them to be identifiable or identified; it’s sufficient that you can distinguish them from other individuals. As many people drive only one car, it’s at least arguable that this is the case even for companies without a KADOE contract.

https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/key-definitions/what-is-personal-data/ says it depends on the context. Sounds like lucrative fun for the lawyers.

Martin M

Re: Wtf

Not really. The BPA themselves actually advise that the ICO definitely considers VRM to be personal data in the hands of a parking operator (original context of discussion), because it can be used to identify, even if this has not yet taken place. The I is for Identifiable, not identified. Hence processing is under the scope of the GDPR - https://www.britishparking.co.uk/write/GDPR%20Events/BPA-A4-How-Does-GDPR-Affect-Me-v2.pdf .

If you’re not a parking operator with a KADOE contract it’s probably more nuanced.

However, I should correct a definite mistake I made above: GDPR does not affect information collected by individuals for household/personal purposes. Mea culpa.

Martin M

Idiots

Only as long as you don't process photos either automatically (e.g. by running ANPR) or manually by filing them as part of a structured filing system. Otherwise you fall within scope of the GDPR and would be in breach. This all applies as much to individuals as companies.

Registration numbers are PII and you must have a lawful basis for processing. Legitimate interest is used to cover parking enforcement but would not cover your example, which would require consent - which of course would not be practical to obtain. Whatever the lawful basis, you must not over-retain.

In practice you might not be *prosecuted* for doing it, but that's a whole different question.

You can trust us to run a digital currency – we're Facebook: Exec begs Europe not to ban Libra

Martin M

Re: 1:1 inequivalence

Yeah, pretty much. Except (if I remember correctly) I don’t think SDRs can be held by private parties?

Agreed on the changing value thing too. Unless you get paid in Libra and most of your expenses are in Libra, and you really want a ‘my recent transactions’ tab on your Facebook profile, it’s probably not a good idea...

Martin M

Re: 1:1 inequivalence

The 1:1 backing means that each Libra token is underpinned by a basket of currencies in fixed proportions held in reserve by the association, each stored in an appropriately denominated custody account. For example, the basket underpinning one Libra might be $0.90 and £0.10. The intrinsic value of the currency comes from a promise to redeem tokens on request for the current value of the basket. The use of custody accounts means this should (in theory) work even if the association goes bust.

This means that from the point of view of any particular currency, the value of Libra (as seen from the perspective of a given currency) will vary over time. If the dollar is at par with the pound (to keep the maths simple) and with the above basket, it would cost £1 or $1 to buy a Libra token - *excluding trading costs*. If the pound subsequently drops to $0.80, then the value of Libra in pounds rises to 0.90/0.80+0.10=£1.225, but in dollars drops to $0.90 + 0.10*0.80 = $0.98.
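Working that example through (same assumed basket of $0.90 + £0.10 per token):

    basket_usd, basket_gbp = 0.90, 0.10

    def token_value(gbp_usd_rate):
        in_usd = basket_usd + basket_gbp * gbp_usd_rate
        in_gbp = basket_usd / gbp_usd_rate + basket_gbp
        return in_usd, in_gbp

    print(token_value(1.00))  # (1.0, 1.0)    - $1 or £1 per token at par
    print(token_value(0.80))  # (0.98, 1.225) - worth less in dollars, more in pounds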

It’s basically an ETF with improved visibility of tokens in issue. The real world collateral management and auditing still relies on trusted third parties and has to be very robust.

Whether this makes sense for unsophisticated retail customers, let alone the unbanked, is a whole other question. It replaces very liquid markets for fiat money - whose intrinsic value is underpinned by government guarantees that it can be used to extinguish tax liabilities - with a token underpinned by a reserve held by a private organisation. It also exposes holders to currency fluctuations, which is probably highly undesirable.

Regarding maintaining 1:1 backing, this is solvable. When the bulk resellers buy x tokens, the association will purchase currencies in the correct quantities - e.g. with the above basket purchased using USD, and dollar at par with the pound, they’ll buy £0.10*x in USD on the FX market. They might have to pay say $0.1002*x to do this, depending on transaction costs, the (moving) exchange rate, market depth and bid-offer spread. They charge the reseller ($0.9+$0.1002)*x, put $0.90*x into a dollar reserve account, put £0.10*x into a GBP reserve account, and mint x tokens. Everything balances.

The reseller takes on trading costs and an unhedgable market risk during the issuance (or burn) transaction, acting as a market maker. They continue to be exposed to market risk for as long as they hold the tokens, and may choose to hedge this risk or not. When buying and selling on exchanges, they will buy and sell to consumers (or other intermediaries) with a wider spread to compensate for all these costs.

There is a problem if you have a basket of three currencies and can only get one of the necessary FX trades done. In practice that’s vanishingly unlikely given the depth of liquidity in the common FX currency pairs.

What is interesting is that their white paper mentions rebalancing the basket, e.g. changing the ratio of currencies, but not how it will work. It’s valuable in some circumstances but hard. Firstly because when the association does it, they will incur large trading costs to buy and sell currencies to bring the collateral into balance, and this will break the 1:1 collateralisation without an external capital injection.

Gov flings £10m to help businesses get Brexit-ready with, um... information packs

Martin M

Re: Looks like el Reg is being as disingenuous as the Biased Broadcasting Corporation

In 1975, people were voting for concrete, enactable outcomes. Enter a well defined entity - the EC - with well defined rules, or the status quo.

In the recent referendum, people voting Remain voted for a concrete outcome - the status quo. Leavers voted *against* the concrete outcome but not for anything specific and enactable out of a huge range of outcomes. It is reasonable to assume that at least some of those voting Leave had in mind that they would get the version of Brexit being put forward by the leading proponents of the Leave campaign, including (amongst other things) open trade with Europe via the easiest trade deals in history, £350m/week for the NHS, increased parliamentary sovereignty and an intact United Kingdom.

If the proponents of Leave can deliver that version of Brexit then there is no need for a second referendum. Experience suggests this is unlikely. If not, it is at least plausible that there is no support for the increasingly likely No Deal scenario - to suggest otherwise is tea-leaf reading without putting it to the vote.

Also, please define democracy - do you mean parliamentary democracy or direct democracy?

In the US? Using Medicaid? There's a good chance DXC is about to boot your data into the AWS cloud

Martin M

Re: Remember - Cloud computing

Cloud or not, I think it’s fair to say that if you’ve given any part of your infrastructure to DXC, that boat has already sailed (with the exception of the multi-user bit).

I’ve seen a lot of on-prem infrastructure in organisations large and small, and it’s been almost universally horrible. Inflexible, poorly designed, poorly maintained and offering only the most basic services. DXC have been near the epicentre of some of the worst of this, but it’s true for other suppliers and internally managed data centres too.

On the other hand, using AWS provides access to probably the most sophisticated virtual data centre infrastructure in the world, built by some of the best engineers in the world, with a wide breadth of pre-canned services that can be accessed in minutes with a few lines of Terraform rather than a multi-week/month procurement/design cycle. And then run by a company that has a track record of generally delivering on its promises.

So while it’s technically true that you’re running on someone else’s computer, in the sense that most of AWS is software, it’s kind of missing the point. Your “that others are using” comment is also mostly irrelevant nowadays; noisy neighbours haven’t been a problem for ages. If you’re worried about the sort of attacker profile that could pull off attacks through hypervisor escalation etc., fair enough, but you probably should be completely airgapping your entire desktop and server infrastructure from the Internet in that case. Rowhammer was fixed in AWS before it was even public, was your on-prem vSphere? It’s a somewhat niche requirement.

Most organisations, IMHO, would be better off concentrating on fixing their “crap” that they’re deploying rather than trying to replicate AWS internally, badly, with a fraction of Amazon’s resources and starting from where Amazon was circa 7 years ago. Perhaps where you work is the exception...

You do need platform engineers good enough to not leak keys via public GitHub repos, though. And a fair amount of up-front thinking/design on the foundational design. This is hard, but not as hard as the equivalent on-prem.

That's a nice ski speaker you've got there. Shame if it got pwned

Martin M

Re: Because skiing or snowboarding aren't dangerous enough already?

Not my experience ... as a boarder of 20 years, I absolutely keep my ears open on the slope. There's a massive blind spot to my right (I ride goofy). Skiers who don't realise this - and some boarders who should know better, too - have a habit of putting themselves right in it when they carry out a kamikaze overtaking manoeuvre. I've lost count of the number of times I've aborted a turn, potentially avoiding a nasty accident, based purely on what I've heard.

I don't really ski, but imagine it might be a bit less useful. Even then, if you're stopped and someone gets out of control on an icy patch above, you can hear scraping from a long way away and well in time to take action. Which for me, last week, wasn't getting out of the way but catching a terrified five year old about to eject themselves off the piste onto a very steep, tree-lined slope.

Bottom line: no amount of music enjoyment is worth a potentially serious accident, in my book.

Serverless is awesome (if you overlook inflated costs, dislike distributed computing, love vendor lock-in), say boffins

Martin M

Re: not true IF DONE RIGHT, and for the RIGHT projects

Implementing a low-mid usage REST/GraphQL API by exposing Lambda functions via API Gateway is an incredibly common one. In most cases you're going to be using some form of backend database and your mid-tier should be stateless in any case. It can save a lot of money - think about not just your production environment but all of your test environments that are often incredibly underutilised even on the smallest EC2 instances. I've seen a project collapse their mid-tier hosting costs from many thousands a month to about 100 quid by doing this. Production scales seamlessly with no need for cluster management, autoscaling configuration etc.
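For the avoidance of doubt, the functions themselves are tiny - a minimal sketch of a handler behind API Gateway's proxy integration (route and response are illustrative), stateless and backed by whatever database you already use:

    import json

    def handler(event, context):
        # API Gateway proxy integration passes method, path and body in the event
        if event.get("httpMethod") == "GET":
            return {"statusCode": 200, "body": json.dumps({"status": "ok"})}
        return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}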

One gotcha to this: your application must be able to handle relatively long call latencies related to cold starts during load spikes, as containers and app runtimes are dynamically spun up. Latency will depend on language; statically compiled Go will be very much faster than Java's much heavier runtime and JIT compilation. There's a clear tradeoff there for not paying for always-on infrastructure. Under steady state load, things are fine.

Lock-in is a fair point - people need to think about that and go in with their eyes open. But if it actually became an issue, I strongly suspect someone would extend something like Kubeless to create an open source AWS Lambda compatible runtime (assuming that isn't already the case).

As usage increases you might get to the point where it makes sense economically to run your own clusters over EC2 with a dedicated team to manage them. But if your API is relatively well written and doesn't needlessly piss away cycles (OK, I admit that's a minority), you'll almost certainly never get there. If you do, it's a good problem to have. Even lockin is likely not a problem - you'll probably a/ have already rewritten your API several times over anyway and b/ have the money to do so because your service is a wild success.

As others have said, benchmarking ML use cases is simply ridiculous and suggests a bias rather than neutral academic work. No-one with an ounce of sense would do that on Lambda. Also all the points about I/O limitations - the types of use cases for which Lambda is well suited are usually CPU bound.

Microsoft Azure: It's getting hot in here, so shut down all your cores

Martin M

Ouch

But the thing that made me chuckle was the inline ad: “Azure - migrate your on-premises workloads to the cloud with confidence.”

Might want to pause that campaign for the moment, marketing peeps...

Microsoft's Azure Kubernetes Service mucked my cluster!

Martin M

Re: "the customer’s workloads had been overscheduled"

Oh - and just because the forensics team can determine that the cause of the fire was that you had 10 three-bar heaters plugged into the same gang plug overnight, that doesn’t make it their fault.

Martin M

Re: blame

I have some sympathy with them here. If your application depends on expensive high availability servers and storage, with every cluster node being in the same rack and connected to the same ToR switch pair, you should not deploy it to the cloud.

It has to be able to cope with an unplanned node failure and recover swiftly and in an automated fashion. It has to be able to cope with transient network connectivity problems including partitions, one way packet loss, variable latency etc. Ideally, it needs to be capable of distribution across multiple availability zones or even regions, as failures at these levels are not unknown.

What I do not like is the marketing from most major cloud vendors saying that you can migrate your entire legacy data centre to the cloud. But if you believe every bit of marketing you read, then you’re being naive.

Martin M

Re: "the customer’s workloads had been overscheduled"

Most deployments to Kubernetes aren’t via web UI, they’re via standard Kubernetes command line tools. If you’re using it, you probably aren’t (or shouldn’t be) the type of admin depending on point and drool handholding. As for preventing people getting into trouble - well, I’ve never met a technology that can stop a determined idiot from doing this.

Regarding limiting over-scheduling, it can absolutely be a valid user decision. Particularly in non-prod environments where you may burn a lot of money if you don’t contend the workloads, and probably don’t care too much if there are very occasional problems if everything gets busy at the same time.

If the user tried to deploy to production without using the very rich set of primitives Kubernetes has for controlling scheduling, I’d definitely say they bear a significant portion of the responsibility. It’s like massively over committing a VMware cluster. RTFM, know your workloads, and test properly in a prod-like environment.
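By way of illustration (names invented), the basic primitive is just explicit requests/limits on each container - here via the official Python client - so the scheduler knows what it is packing and one workload can't starve everything else:

    from kubernetes import client

    resources = client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # what the scheduler reserves
        limits={"cpu": "500m", "memory": "512Mi"},    # hard ceiling before throttling/OOM-kill
    )
    container = client.V1Container(name="api", image="example/api:1.0", resources=resources)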

What I do think was bad was that the user’s poor decision was allowed to affect the system level services. This would have made it difficult for them to debug themselves in a managed cluster, and it shouldn’t have taken a day’s debugging by the Azure team to locate this fairly basic problem. That bit Microsoft should definitely shoulder the blame for. Still, at least they’ve fixed it (according to the HN thread).

Whisk-y business: How Apache OpenWhisk hole left IBM Cloud Functions at risk of hijacking

Martin M

The article reads as if the illustrative REST call could be remotely executed. Having had a look at the PDF written by PureSec, that doesn’t appear to be the case - the endpoint is apparently only locally accessible to the container itself and the OpenWhisk management system (hopefully not neighbouring functions!). So it’s necessary to exploit an application-level weakness already present in the function to do the POST to /init. Definitely good that it’s now made immutable after first invocation to prevent trivial escalations, but in many cases, if the application has a vulnerability that can be abused to make it do arbitrary POSTs, it may be game over anyway.

The *real* eye-opener for me is their PoC. This constructs a hopelessly insecure function with a command injection vulnerability, then shows how that command injection flaw can be used to apt-get install curl and execute it to do a local POST to the /init endpoint.

WTF? Why on earth would an application function need to be apparently running as root and able to do apt-get install inside its container? That appears unpatched, and seems to be at least as fundamental as the /init thing.
