* Posts by Martin M

50 posts • joined 12 Apr 2016

Microsoft admits pandemic caused Azure ‘constraints’ and backlog of customer quota requests

Martin M

Favourable handling

The one thing that no-one seems to be talking about is that Microsoft appears to have unilaterally diverted Azure capacity to one of their own services rather than meeting increased capacity requests from customers. We'll never know the criticality of the quota requests that were turned down, but were they all less important than, say, not turning down the Teams video encoding quality/resolution knob?

Corona coronavirus hiatus: Euro space agency to put Sun, Mars probes in safe mode while boffins swerve pandemic

Martin M

Re: Why bother putting them into safe mode?

Evidently the irony intended was far too subtle, despite the references to pajamas and Netflix.

Martin M

Why bother putting them into safe mode?

When you could just stick in a VPN. Staff could command interplanetary spacecraft from their spare rooms while wearing pyjamas and watching Netflix on the other monitor...

Your Agile-built IT platform was 'terrible', Co-Op Insurance chief complained to High Court

Martin M


Depends on the flavour of Agile approach. Scrum is the most common, and is all many people know, but isn't at all sensible for ops - perhaps that was what the answer was based on?

But Kanban can work rather well for teams doing a mix of incremental improvement, releases, incident response, daily tasks, support for major project deliveries etc. It's not perfect out of the box.

Not sure if they're still doing it now, but Netflix ops were using a tweaked version of it a while ago - well prior to the crescendo of Agile hype - and they generally know what they're doing. Others too, if you Google. I've seen it work well - it's no big deal, really more of a visualisation/formalisation of what good ops teams tend to do naturally anyway. Really just a task board with some constraints on it that tend to encourage helpful behaviour. It must of course be integrated with your ticketing system to work.

Problems at Oracle's DynDNS: Domain registration customers transferred at short notice, nameserver records changed

Martin M


Completely support the idea of going after the card company if the company you're buying services from has misrepresented and/or not supplied said services - see https://www.which.co.uk/consumer-rights/advice/can-i-claim-on-my-credit-card-when-something-goes-wrong

But is "theft of services" actually a thing in this context? I've not heard of it, and a brief Google suggests it is a term in use in the US, but applies to cases where someone forcibly obtains services without paying for them, not where payment has been taken and the service is not provided. Also, from a legal perspective, theft refers to a crime, but this would - like it or not - presumably be a civil matter under contract law.

It might be easier/more fun to use RICO if you think you can make out the case that Oracle is an organised criminal enterprise :)

Step away from that Windows 7 machine, order UK cyber-cops: It's not safe for managing your cash digitally

Martin M

What if the compromised machines are used to DoS things on which you depend, or for credential-stuffing your mum’s email account?

Martin M

Re: Upgrade from Windows 7

Fair call - I did mean ‘browser that has security defects or exposes them in the platform it sits on’.

Right now - depending on whether not being able to validate crypto signatures indirectly enables any browsable remote exploits (I haven’t thought about it, but it sounds a bit worrying) - that could potentially be any of them that use crypt32.dll.

Martin M

Re: Upgrade from Windows 7

As long as no one e.g. uses a browser with security defects. Plenty of ways to get compromised that don't require an inbound connection.

Martin M

Re: You want that again?

I'm not particularly a Microsoft fan. Hell, I've contributed patches to the Linux kernel in the dim and distant past. But Linux is unfortunately still not an OS for the average desktop user, if only because it's still a niche concern and therefore not a must-support platform for lots of software that people want to use. So for many people it's Windows or Mac, and the latter (including hardware) is too expensive for most. Nor is a Mac actually supported for as long: newer macOS releases are free, but they drop support for older hardware quite quickly.

I fully agree the methods used to push Windows 10 on people who didn't want it were pretty appalling, and the persistent nagging - and the way Microsoft ramped it up - was massively annoying. Why shouldn't someone stick with a supported OS if they choose? But for those who did want it - and there will have been plenty - it allowed them to upgrade to an OS with long-term support available without having to pay money to do so. There's clearly some value in it for the latter group.

Microsoft obviously benefited by shifting people early too, of course, and that was the quid pro quo - upgrade for free now, or pay later. Although comments further down the thread suggest you can still activate W10 with W7 license keys, so maybe they never really closed the upgrade programme at all.

I don't blame Microsoft for ending support for very old software - it has to happen sometime. Resuming nagging on machines that can see the Internet would actually be proportionate now, with support actually ended. No-one should be running an unsupported OS and be connected to the Internet.

According to Krebs on Security, there's just been a leak that there's a fix for a really serious vulnerability in the Windows core crypto library in next week's Patch Tuesday. Will be interesting to see if Microsoft backport it to Windows 7 or not.

Martin M

It really would be the civilised thing to do for Microsoft to heavily promote in-OS an upgrade to some form of Windows 10, making clear it’s the only safe thing to do. Perhaps with a permanent desktop background suggesting an upgrade to Home, so as not to devalue the free upgrade programme they did up to 2016 and the money people have paid for upgrades since.

I know there are people here who will say “forcing users to upgrade to a privacy-compromising OS is bad”, but Windows 7 has had an excellent innings (much longer than most Linux LTS releases) and realistically most of those in this situation won’t have the expertise to move to Linux or the money to upgrade, and won’t care. Would you rather the Internet was awash with compromised machines that affect everyone?

Londoner who tried to blackmail Apple with 300m+ iCloud account resets was reusing stale old creds

Martin M

Re: Stale old creds...

You say just “force the user to prove it’s them”, which is a spectacularly circular argument. The definition of authentication is asking the user to prove who they are - and if you had a better method of doing so than the password, you wouldn’t need the password. It also ignores the possibility that accounts may be pseudonymous, and therefore have no corresponding real world identity at all.

Finally, it’s unclear from the article whether Apple knew which accounts were potentially affected. Forcing a password reset on the entire user population would have been disproportionate, particularly as it turns out that most of the credentials were stale and there had in fact *been no data breach* (according to Apple).

Internet Society CEO: Most people don't care about the .org sell-off – and nothing short of a court order will stop it

Martin M


The mean domain valuation may be $100, but the effective valuation of domains that are actively being used for something important will be far higher.

Many domains will not be renewed if prices are ramped significantly. Domain speculation will abruptly become uneconomic (increased costs, and no-one wants to buy a new .org any more and be subject to rampant monopoly rent extraction), and domains bought for brand protection purposes will be dropped. As a pure guess, that might easily be 90% of all domains.

Those who stay are going to have to pay *a lot*. Which they will, as switching costs and risks are high. Would a charity's IT Director really take a decision that might pay off for their successor 10 years down the line, while incurring all of the costs and risks on their watch?

I'm sure the private equity company's economists will have a model for exactly how to price to squeeze the most out for them. But for example if it costs a largish organisation £150k to switch, they can probably be charged £15k a year for a domain name. The higher price (assuming you have to have only one price) would force switches by or in the extreme bankruptcies of organisations on the margin, but pricing optimisations in these situations tend to result in very high numbers and many fewer customers.

It may not break the bank for the likes of MSF but that doesn't stop it being truly offensive. That £15k might have saved a lot of lives. And those who are forced to switch are forced to bear additional risks and costs. Meanwhile, the private equity company makes off with their risk-free loot.

We are absolutely, definitively, completely and utterly out of IPv4 addresses, warns RIPE

Martin M

Re: Lies, damned lies, and statistics that don't lie.

Your timing's out. In 1994 I was connected from my college's computer room via 10BASE-T directly to the university MAN. There were a few very old (probably 5-10 years) PCs in another room that used coax, but they felt pretty ancient and were only used by a few geeks prepared to read their email via Pine at the Solaris command line. They got ripped out in 1995.

The university was connected to the JANET (UK Joint Academic Network) WAN. If I remember correctly (I may not), FTP'ing large files over this WAN link often saturated the local 10 Mb/s Ethernet connection.

Modem-wise acoustic couplers were long gone. I think I got my first modem - 2400 baud, as I was buying it from saved up pocket money - in 1991 and that connected straight to the phone line.

It took quite a while for TCP/IP to get widespread acceptance. First implementations were in 1975 and it didn't even make it into the Windows networking stack as standard until 1995. Most homes didn't have it when I left Uni in 1997, and lots of businesses were still on older technologies. So IPv6 is actually doing pretty reasonably by some measures.

You're flowing it wrong: Bad network route between Microsoft, Apple blamed for Azure, O365 MFA outage

Martin M

Not really - distributed computing has been around for a while. Deutsch et al’s eight fallacies of distributed computing were written in 1994-1997 and are all applicable to a cloud world - https://en.m.wikipedia.org/wiki/Fallacies_of_distributed_computing

“The network is reliable” is fallacy #1. Not considering failure cases properly for a service run by a different organisation at the other end of a WAN is pretty ridiculous. Experienced designers/architects don’t trust the network between two of their own services in neighbouring racks in a physical datacentre.

(Side note: Anyone running a service in a cloud should assume they’re going to eventually see some truly weird failure conditions, given the multiple levels of compute and network virtualisation stacked atop each other and run on heterogeneous no-name hardware. If your application’s designed and monitored correctly, it shouldn’t matter that much.)

I can understand why problems connecting to APNs would cause problems messaging and hence authenticating iOS users. What is harder to understand is why a backlog formed and caused further problems. Keeping a long backlog of remote API requests (or doing unbounded retries etc.) which are irrelevant after a few tens of seconds because they are feeding into an interactive system is not a desirable property...
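One common mitigation (a sketch of the general technique, not a claim about how Azure's MFA pipeline actually works): give queued requests a deadline, so a backlog of stale interactive requests gets shed rather than replayed.

```python
import collections
import time

class FreshQueue:
    """Bounded queue that silently sheds requests older than a TTL.

    For interactive flows (push notifications, MFA prompts), a request that
    has sat in a backlog for tens of seconds is worthless - retrying it only
    adds load while helping no-one.
    """
    def __init__(self, ttl_seconds=30.0, max_size=10_000):
        self.ttl = ttl_seconds
        self._items = collections.deque(maxlen=max_size)  # oldest dropped when full

    def put(self, request):
        self._items.append((time.monotonic(), request))

    def get(self):
        while self._items:
            enqueued_at, request = self._items.popleft()
            if time.monotonic() - enqueued_at <= self.ttl:
                return request  # still fresh enough to be worth delivering
        return None  # entire backlog was stale

q = FreshQueue(ttl_seconds=30.0)
q.put("apns-push-for-mfa-login")
print(q.get())  # apns-push-for-mfa-login
```

The same effect can be had with bounded retries plus a per-request deadline; the point is that nothing older than the interaction timeout should ever be sent downstream.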

Tesco parking app hauled offline after exposing 10s of millions of Automatic Number Plate Recognition images

Martin M

Re: Wtf

Note that under GDPR you don’t necessarily have to have someone’s name for them to be identifiable or identified - it’s sufficient that you can distinguish them from other individuals. As many people drive only one car, it’s at least arguable that this is the case even for companies without a KADOE contract.

https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/key-definitions/what-is-personal-data/ says it depends on the context. Sounds like lucrative fun for the lawyers.

Martin M

Re: Wtf

Not really. The BPA themselves actually advise that the ICO definitely considers VRM to be personal data in the hands of a parking operator (original context of discussion), because it can be used to identify, even if this has not yet taken place. The I is for Identifiable, not identified. Hence processing is under the scope of the GDPR - https://www.britishparking.co.uk/write/GDPR%20Events/BPA-A4-How-Does-GDPR-Affect-Me-v2.pdf .

If you’re not a parking operator with a KADOE contract it’s probably more nuanced.

However, I should correct a definite mistake I made above: GDPR does not affect information collected by individuals for household/personal purposes. Mea culpa.

Martin M


Only as long as you don't process photos either automatically (e.g. by running ANPR) or manually by filing them as part of a structured filing system. Otherwise you fall within scope of the GDPR and would be in breach. This all applies as much to individuals as companies.

Registration numbers are personal data, and you must have a lawful basis for processing them. Legitimate interest is used to cover parking enforcement, but would not cover your example, which would require consent - which of course would not be practical to obtain. Whatever the lawful basis, you must not over-retain.

In practice you might not be *prosecuted* for doing it, but that's a whole different question.

You can trust us to run a digital currency – we're Facebook: Exec begs Europe not to ban Libra

Martin M

Re: 1:1 inequivalence

Yeah, pretty much. Except (if I remember correctly) I don’t think SDRs can be held by private parties?

Agreed on the changing value thing too. Unless you get paid in Libra and most of your expenses are in Libra, and you really want a ‘my recent transactions’ tab on your Facebook profile, it’s probably not a good idea...

Martin M

Re: 1:1 inequivalence

The 1:1 backing means that each Libra token is underpinned by a basket of currencies in fixed proportions held in reserve by the association, each stored in an appropriately denominated custody account. For example, the basket underpinning one Libra might be $0.90 and £0.10. The intrinsic value of the currency comes from a promise to redeem tokens on request for the current value of the basket. The use of custody accounts means this should (in theory) work even if the association goes bust.

This means that from the point of view of any particular currency, the value of Libra (as seen from the perspective of a given currency) will vary over time. If the dollar is at par with the pound (to keep the maths simple) and with the above basket, it would cost £1 or $1 to buy a Libra token - *excluding trading costs*. If the pound subsequently drops to $0.80, then the value of Libra in pounds rises to 0.90/0.80+0.10=£1.225, but in dollars drops to $0.90 + 0.10*0.80 = $0.98.
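As a quick sanity check, the arithmetic above can be expressed in a few lines (illustrative basket values, not Libra's actual composition):

```python
# Hypothetical basket backing one token: $0.90 plus £0.10 (illustrative numbers).
BASKET = {"USD": 0.90, "GBP": 0.10}

def token_value_usd(fx_to_usd):
    """Value of one token in USD, given each currency's rate against the dollar."""
    return sum(amount * fx_to_usd[ccy] for ccy, amount in BASKET.items())

# Pound at par with the dollar: one token costs $1 (and £1).
assert abs(token_value_usd({"USD": 1.0, "GBP": 1.0}) - 1.0) < 1e-9

# Pound drops to $0.80: the token is worth $0.98, which is £1.225.
usd_value = token_value_usd({"USD": 1.0, "GBP": 0.80})
gbp_value = usd_value / 0.80
print(round(usd_value, 4), round(gbp_value, 4))  # 0.98 1.225
```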

It’s basically an ETF with improved visibility of tokens in issue. The real world collateral management and auditing still relies on trusted third parties and has to be very robust.

Whether this makes sense for unsophisticated retail customers, let alone the unbanked, is a whole other question. It replaces very liquid markets for fiat money - whose intrinsic value is underpinned by government guarantees that it can be used to extinguish tax liabilities - with a token underpinned by a reserve held by a private organisation. It also exposes holders to currency fluctuations, which is probably highly undesirable.

Regarding maintaining 1:1 backing, this is solvable. When the bulk resellers buy x tokens, the association will purchase currencies in the correct quantities - e.g. with the above basket purchased using USD, and dollar at par with the pound, they’ll buy £0.10*x in USD on the FX market. They might have to pay say $0.1002*x to do this, depending on transaction costs, the (moving) exchange rate, market depth and bid-offer spread. They charge the reseller ($0.9+$0.1002)*x, put $0.90*x into a dollar reserve account, put £0.10*x into a GBP reserve account, and mint x tokens. Everything balances.
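A sketch of that issuance bookkeeping, using the same assumed numbers ($0.90 + £0.10 basket, pound at par, $0.1002 all-in cost for the sterling leg). This is my reading of how it would balance, not Libra's published mechanics:

```python
def mint(x, sterling_leg_cost_usd=0.1002):
    """Mint x tokens: returns (reseller charge in USD, USD reserve, GBP reserve).

    The reseller bears the FX trading costs; the reserves stay exactly 1:1
    with the tokens in issue, so everything balances.
    """
    charge_usd = (0.90 + sterling_leg_cost_usd) * x  # what the reseller pays
    usd_reserve = 0.90 * x    # into the dollar custody account
    gbp_reserve = 0.10 * x    # into the sterling custody account
    return charge_usd, usd_reserve, gbp_reserve

charge, usd_r, gbp_r = mint(1_000_000)
print(round(charge, 2), usd_r, gbp_r)
```

Burning tokens is the mirror image: reserves are drawn down in basket proportions and the reseller receives the proceeds, again net of trading costs.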

The reseller takes on trading costs and an unhedgable market risk during the issuance (or burn) transaction, acting as a market maker. They continue to be exposed to market risk for as long as they hold the tokens, and may choose to hedge this risk or not. When buying and selling on exchanges, they will buy and sell to consumers (or other intermediaries) with a wider spread to compensate for all these costs.

There is a problem if you have a basket of three currencies and can only get one of the necessary FX trades done. In practice that’s vanishingly unlikely given the depth of liquidity in the common FX currency pairs.

What is interesting is that their white paper mentions rebalancing the basket, e.g. changing the ratio of currencies, but not how it will work. It’s valuable in some circumstances but hard. Firstly because when the association does it, they will incur large trading costs to buy and sell currencies to bring the collateral into balance, and this will break the 1:1 collateralisation without an external capital injection.

Gov flings £10m to help businesses get Brexit-ready with, um... information packs

Martin M

Re: Looks like el Reg is being as disingenuous as the Biased Broadcasting Corporation

In 1975, people were voting for concrete, enactable outcomes. Enter a well defined entity - the EC - with well defined rules, or the status quo.

In the recent referendum, people voting Remain voted for a concrete outcome - the status quo. Leavers voted *against* that concrete outcome, but not for anything specific and enactable out of a huge range of possibilities. It is reasonable to assume that at least some of those voting Leave had in their minds the version of Brexit being put forward by the leading proponents of the Leave campaign, including (amongst other things) open trade with Europe via the easiest trade deals in history, £350m/week for the NHS, increased parliamentary sovereignty and an intact United Kingdom.

If the proponents of Leave can deliver that version of Brexit then there is no need for a second referendum. Experience suggests this is unlikely. If not, it is at least plausible that there is no support for the increasingly likely No Deal scenario - to suggest otherwise is tea-leaf reading without putting it to the vote.

Also, please define democracy - do you mean parliamentary democracy or direct democracy?

In the US? Using Medicaid? There's a good chance DXC is about to boot your data into the AWS cloud

Martin M

Re: Remember - Cloud computing

Cloud or not, I think it’s fair to say that if you’ve given any part of your infrastructure to DXC, that boat has already sailed (with the exception of the multi-user bit).

I’ve seen a lot of on-prem infrastructure in organisations large and small, and it’s been almost universally horrible. Inflexible, poorly designed, poorly maintained and offering only the most basic services. DXC have been near the epicentre of some of the worst of this, but it’s true for other suppliers and internally managed data centres too.

On the other hand, using AWS provides access to probably the most sophisticated virtual data centre infrastructure in the world, built by some of the best engineers in the world, with a wide breadth of pre-canned services that can be accessed in minutes with a few lines of Terraform rather than a multi-week/month procurement/design cycle. And then run by a company that has a track record of generally delivering on its promises.

So while it’s technically true that you’re running on someone else’s computer, in the sense that most of AWS is software, it’s kind of missing the point. Your “that others are using” comment is also mostly irrelevant nowadays; noisy neighbours haven’t been a problem for ages. If you’re worried about the sort of attacker profile that could pull off attacks through hypervisor escalation etc., fair enough, but you probably should be completely airgapping your entire desktop and server infrastructure from the Internet in that case. Rowhammer was fixed in AWS before it was even public; was your on-prem vSphere? It’s a somewhat niche requirement.

Most organisations, IMHO, would be better off concentrating on fixing their “crap” that they’re deploying rather than trying to replicate AWS internally, badly, with a fraction of Amazon’s resources and starting from where Amazon was circa 7 years ago. Perhaps where you work is the exception...

You do need platform engineers good enough to not leak keys via public GitHub repos, though. And a fair amount of up-front thinking/design on the foundational design. This is hard, but not as hard as the equivalent on-prem.

That's a nice ski speaker you've got there. Shame if it got pwned

Martin M

Re: Because skiing or snowboarding aren't dangerous enough already?

Not my experience ... as a boarder of 20 years, I absolutely keep my ears open on the slope. There's a massive blind spot to my right (I ride goofy). Skiers who don't realise this - and some boarders who should know better, too - have a habit of putting themselves right in it when they carry out a kamikaze overtaking manoeuvre. I've lost count of the number of times I've aborted a turn, potentially avoiding a nasty accident, based purely on what I've heard.

I don't really ski, but imagine it might be a bit less useful. Even then, if you're stopped and someone gets out of control on an icy patch above, you can hear scraping from a long way away and well in time to take action. Which for me, last week, wasn't getting out of the way but catching a terrified five year old about to eject themselves off the piste onto a very steep, tree-lined slope.

Bottom line: no amount of music enjoyment is worth a potentially serious accident, in my book.

Serverless is awesome (if you overlook inflated costs, dislike distributed computing, love vendor lock-in), say boffins

Martin M

Re: not true IF DONE RIGHT, and for the RIGHT projects

Implementing a low-mid usage REST/GraphQL API by exposing Lambda functions via API Gateway is an incredibly common one. In most cases you're going to be using some form of backend database and your mid-tier should be stateless in any case. It can save a lot of money - think about not just your production environment but all of your test environments that are often incredibly underutilised even on the smallest EC2 instances. I've seen a project collapse their mid-tier hosting costs from many thousands a month to about 100 quid by doing this. Production scales seamlessly with no need for cluster management, autoscaling configuration etc.

One gotcha to this: your application must be able to handle relatively long call latencies related to cold starts during load spikes, as containers and app runtimes are dynamically spun up. Latency will depend on language; statically compiled Go will be very much faster than Java's much heavier runtime and JIT compilation. There's a clear tradeoff there for not paying for always-on infrastructure. Under steady state load, things are fine.
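As an illustration of the pattern (a minimal sketch - the route, names and response shape here are made up, but the event/response format is API Gateway's proxy integration):

```python
import json
import time

# Module scope runs once per container instance - this is the "cold start".
# Expensive resources (DB connections, SDK clients) belong here so that
# warm invocations reuse them instead of paying the setup cost on every call.
CONTAINER_STARTED = time.time()

def handler(event, context):
    """Minimal AWS Lambda handler behind an API Gateway proxy integration."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

Locally it can be exercised with a hand-rolled event dict, which also makes unit testing cheap - no cluster, no autoscaling config.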

Lock-in is a fair point - people need to think about that and go in with their eyes open. But if it actually became an issue, I strongly suspect someone would extend something like Kubeless to create an open source AWS Lambda compatible runtime (assuming that isn't already the case).

As usage increases you might get to the point where it makes sense economically to run your own clusters over EC2 with a dedicated team to manage them. But if your API is relatively well written and doesn't needlessly piss away cycles (OK, I admit that's a minority), you'll almost certainly never get there. If you do, it's a good problem to have. Even lockin is likely not a problem - you'll probably a/ have already rewritten your API several times over anyway and b/ have the money to do so because your service is a wild success.

As others have said, benchmarking ML use cases is simply ridiculous and suggests a bias rather than neutral academic work. No-one with an ounce of sense would do that on Lambda. Also all the points about I/O limitations - the types of use cases for which Lambda is well suited are usually CPU bound.

Microsoft Azure: It's getting hot in here, so shut down all your cores

Martin M


But the thing that made me chuckle was the inline ad: “Azure - migrate your on-premises workloads to the cloud with confidence.”

Might want to pause that campaign for the moment, marketing peeps...

Microsoft's Azure Kubernetes Service mucked my cluster!

Martin M

Re: "the customer’s workloads had been overscheduled"

Oh - and just because the forensics team can determine that the cause of the fire was that you had 10 three-bar heaters plugged into the same gang plug overnight, that doesn’t make it their fault.

Martin M

Re: blame

I have some sympathy with them here. If your application depends on expensive high availability servers and storage, with every cluster node being in the same rack and connected to the same ToR switch pair, you should not deploy it to the cloud.

It has to be able to cope with an unplanned node failure and recover swiftly and in an automated fashion. It has to be able to cope with transient network connectivity problems including partitions, one way packet loss, variable latency etc. Ideally, it needs to be capable of distribution across multiple availability zones or even regions, as failures at these levels are not unknown.

What I do not like is the marketing from most major cloud vendors saying that you can migrate your entire legacy data centre to the cloud. But if you believe every bit of marketing you read, then you’re being naive.

Martin M

Re: "the customer’s workloads had been overscheduled"

Most deployments to Kubernetes aren’t via web UI, they’re via standard Kubernetes command line tools. If you’re using it, you probably aren’t (or shouldn’t be) the type of admin depending on point and drool handholding. As for preventing people getting into trouble - well, I’ve never met a technology that can stop a determined idiot from doing this.

Regarding limiting over-scheduling, it can absolutely be a valid user decision. Particularly in non-prod environments where you may burn a lot of money if you don’t contend the workloads, and probably don’t care too much if there are very occasional problems if everything gets busy at the same time.

If the user tried to deploy to production without using the very rich set of primitives Kubernetes has for controlling scheduling, I’d definitely say they bear a significant portion of the responsibility. It’s like massively over committing a VMware cluster. RTFM, know your workloads, and test properly in a prod-like environment.
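For the avoidance of doubt, those primitives include per-container resource requests and limits - sketched below as a hypothetical pod spec (names and numbers are purely illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # illustrative
spec:
  containers:
    - name: app
      image: example/app:1.0   # illustrative
      resources:
        requests:              # what the scheduler reserves for the pod
          cpu: 250m
          memory: 256Mi
        limits:                # hard caps enforced at runtime
          cpu: 500m            # CPU is throttled above this
          memory: 512Mi        # exceeding this gets the container OOM-killed
```

Namespace-level ResourceQuota and LimitRange objects can then stop any one tenant over-committing a shared cluster.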

What I do think was bad was that the user’s poor decision was allowed to affect the system level services. This would have made it difficult for them to debug themselves in a managed cluster, and it shouldn’t have taken a day’s debugging by the Azure team to locate this fairly basic problem. That bit Microsoft should definitely shoulder the blame for. Still, at least they’ve fixed it (according to the HN thread).

Whisk-y business: How Apache OpenWhisk hole left IBM Cloud Functions at risk of hijacking

Martin M

The article reads as if the illustrative REST call could be remotely executed. Having had a look at the PDF written by PureSec, that doesn’t appear to be the case - the endpoint is apparently only locally accessible to the container itself and the OpenWhisk management system (hopefully not neighbouring functions!). So it’s necessary to exploit an application-level weakness already present in the function to do the POST to /init. Definitely good that it’s now made immutable after first invocation to prevent trivial escalations, but in many cases if the application has a vulnerability that can be abused to make it do arbitrary POSTs, it may be game over anyway.

The *real* eye-opener for me is their PoC. This constructs a hopelessly insecure function with a command injection vulnerability, then shows how that command injection flaw can be used to apt-get install curl and execute it to do a local POST to the /init endpoint.

WTF? Why on earth would an application function need to be apparently running as root and able to do apt-get install inside its container? That appears unpatched, and seems to be at least as fundamental as the /init thing.

Euro bank regulator: Don't follow the crowd. Stay off the cloud

Martin M

Re: Crufty CICS?

Presumably the people who could definitively tell him it is not supported would be either retired or dead?

Martin M


Regardless of the rest of the arguments - and I agree there are lots of things that need to be thought about carefully before putting mission-critical workloads on the cloud - I just don't understand the lock-in point. Do you have more vendor lock-in with your core banking system being:

a/ Crufty CICS code that few people understand on an IBM mainframe with infrastructure and operations outsourced to IBM, as is currently the case at many banks.

b/ A modern banking application on a commodity OS hosted using cloud IaaS services, which could (at least in theory) be hosted pretty much anywhere.

You have to understand how you would migrate data out again if you needed to - although if you're sensible you do this continuously anyway to an independent location. You need to be careful to minimise your use of cloud-specific services - if you use AWS Dynamo all bets are off (in all sorts of ways).

Migrating will still be a pain as you will have to do massive amounts of testing, but frankly that will apply if you upgrade the OS on a server in your data centre, so you need to be able to do that quickly and efficiently in any case.

Test Systems Better, IBM tells UK IT meltdown bank TSB

Martin M

Re: Vapourware


“Stories are not functional specifications.”

The highest-level story title certainly isn’t, but by the start of the Sprint in which a story is developed, it really should have been elaborated to contain pretty much everything a traditional functional spec has, including detailed acceptance criteria, error paths etc. As mentioned, Cucumber executable specs and the like often play a big part, but ultimately anything software engineers need to design and build, and testers need to create test scripts, should be attached to the story. The only difference is you’re doing it just in time rather than all up front.

“Show and tell to real users”. Yes, absolutely, if at all possible and I’ve seen it done several times - it works very well. Sometimes it’s not possible, in which case it’s really important to find the best proxy users possible. E.g. TSB could have dragooned random cashiers who also bank with them if they didn’t feel they could use customers.

Absolutely agree that process cannot compensate for people with insufficient expertise. IMO waterfall is possibly slightly more resilient to idiots than Agile, but if your team is comprised of idiots you’re probably screwed either way.

With a good team, well run Agile is generally my preference for most projects. Would I develop a compiler or nuclear power station control system that way? No. Horses for courses.

Martin M

I'm going to bite, and assume you mean 'agile' as in 'Agile software delivery process'. In most of these discussions, someone pops up and says 'Agile means no testing'. That's incorrect.

Waterfall can be done well or badly - and in the latter case testing is usually the thing that gets squeezed, as it's towards the right hand side of the plan. Agile can be done well or badly - in the latter case it's usually a poor team using Agile as an excuse and synonym for "no process and chaos" rather than actually understanding it. Using Agile requires more discipline than waterfall, not less.

In my experience of Agile being done well, there's a *lot* of investment into testing, with test specialists embedded into every team complemented by specialised test teams.

Stories (functional specs) are expressed in an executable form using Cucumber etc. for full traceability through test.

Extensive automated tests - unit tests (with continuous code coverage monitoring), code-level integration tests, UI tests, external interface tests, as much non-functional testing as possible - are integrated into a continuous delivery pipeline so you know ASAP when regressions occur. Exploratory testing by specialist testers complements the automated tests.

A formal 'Definition of done' for development work means it can't be marked as complete until the associated automated test artefacts are there.

'Show and tells' get early informal feedback from real users, with regular more formal 'business proving' tests combined with old fashioned UATs before big releases. You shouldn't find anything much by the time you do the UAT, but it's good as a belt and braces. Any non-functional testing (often security) or interface testing (e.g. some hoary old system where you have to book the test environment a month in advance) that can't be easily automated gets done periodically, but multiple times through the delivery.

Basically, the whole *point* of Agile is to surface problems - both functional and non-functional - as soon as possible so they can be dealt with, rather than leave all of the surprises until just before you go live - as so frequently happens with Waterfall.

Martin M

Re: Turning off OpenAM token validation!?

Also, most sensible microservice authentication methods (JWT etc.) are stateless, so shouldn't require calling out to OpenAM, which would obviously result in scalability issues.
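To sketch why stateless validation needs no call-out: an HS256-signed JWT can be checked locally with nothing but the shared secret. This is a toy stdlib-only illustration (token contents and helper names invented; a production service would also check the header algorithm and expiry claims, and would use a vetted library):

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(data: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_hs256(claims: dict, secret: bytes) -> str:
    """Build a signed HS256 JWT from a claims dict."""
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url_encode(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url_encode(sig)}"

def verify_hs256(token: str, secret: bytes) -> dict:
    """Statelessly verify a token: no auth-server round trip needed."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))

secret = b"shared-secret"
token = make_hs256({"sub": "svc-a"}, secret)
assert verify_hs256(token, secret)["sub"] == "svc-a"
```

The verifying microservice needs only the key material, which is the point: validating every request doesn't load the identity provider at all.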

Possibly the 'validation' being talked about here is a belt-and-braces check for token revocation or similar. Turning that off would be more defensible from a security perspective. Possibly ... but I bet it isn't. Normally access tokens aren't revocable at all; only refresh tokens are, and those wouldn't be checked on "every interaction".

Martin M

Turning off OpenAM token validation!?

The article pulls out the increased fraud risk associated with the recommendation to reduce the use of Actimize, but misses the implications of the OpenAM recommendation - effectively turning off authentication of internally accessible microservices, if I read it right.

If 'internally accessible' means accessible only to other microservices within an isolated network for the backend banking platform and nothing else, that might, just possibly, be an acceptable short term risk to run, but is very very bad practice. One microservice breach would open up every unsecured microservice.

If 'internally accessible' means accessible to every other server in the datacentre (probably including a lot of poorly secured legacy), or - worse still - the general internal bank networks with all of their desktops etc., then ... wow. Just wow.

UN's freedom of expression top dog slams European copyright plans

Martin M

Which i think is what the guy’s point is when he says that “automated filtering may be ill-equipped to perform assessments of context”.

Deciding whether something is “fair use” feels like it needs generalised AI to me. So not available soon, then.

Just when you thought it was safe to go ahead with microservices... along comes serverless

Martin M

Re: Excellent

The “each function should have its own database” thing in the article is patently absurd. If that were the case, a function to retrieve state would not be able to access state written by a mutator. Just because functions can be independently deployed and have no shared stateful store dependencies doesn’t mean that they should or will be.

In practice, even in a microservice architecture, there will typically be clusters of functions around a single stateful per-microservice store. Usually this function cluster (plus any schema changes, if required) will form the most practical deployment unit.
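To make that concrete, here's a toy sketch (all names invented, with in-memory SQLite standing in for a per-microservice store): a mutator and a reader are separately deployable functions, but they only make sense sharing one database:

```python
import sqlite3

# One stateful store per microservice, shared by all of its functions.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

def create_order(order_id: int) -> None:
    """Mutator: writes state to the shared store."""
    db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "new"))
    db.commit()

def get_order_status(order_id: int):
    """Reader: only works because it sees the same store the mutator wrote to."""
    row = db.execute("SELECT status FROM orders WHERE id = ?",
                     (order_id,)).fetchone()
    return row[0] if row else None

create_order(42)
assert get_order_status(42) == "new"
```

Give each function its own private database, as the article suggests, and the reader could never return what the mutator stored.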

And yes, design has to be carefully considered, as there will be a temptation for poor developers to either duplicate code or design systems with horrible performance by doing fine-grained functional decomposition over HTTP. But there’s no reason that has to happen as custom libraries can be deployed as part of a Lambda and abstract common logic within the context of a microservice.

Martin M

Re: Is it just me

They don't start up and down for each request - the underlying container instances, language runtimes and deployed function code are cached for a few minutes. Under steady state load it's efficient.
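A sketch of the handler pattern that exploits this caching (names illustrative, and the counter is only there to demonstrate the behaviour): expensive initialisation goes at module level, so it runs once per container rather than once per request:

```python
# Illustrative Lambda-style handler. On a cold start the module is
# imported and the expensive setup runs once; subsequent "warm"
# invocations on the same container reuse it.

import time

INIT_COUNT = 0

def expensive_setup():
    # Stands in for loading config, opening connection pools, etc.
    global INIT_COUNT
    INIT_COUNT += 1
    return {"started_at": time.time()}

CONTEXT = expensive_setup()  # module level: once per container

def handler(event, _context=None):
    # Per-request work only; reuses the cached CONTEXT
    return {"setup_runs": INIT_COUNT, "echo": event}

# Two "warm" invocations reuse the same initialisation
assert handler({"n": 1})["setup_runs"] == 1
assert handler({"n": 2})["setup_runs"] == 1
```

Under steady load almost every call hits a warm container, which is why the per-request overhead is small; the cost only shows up when a spike forces new containers (and fresh module imports) to be spun up.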

When you do have a usage spike, there can be issues if the Lambda containers can't be started up by Amazon fast enough to service inbound API gateway calls. Calls can be errored by the gateway for a short period while this happens. This at best impacts latency (if there's retry logic at the client) and at worst is user-visible.

Base container start-up times are really fast. Lambda start-up tends to be more determined by the language runtime. Java applications can be terrible for this - JVM start time is slow, the big frameworks like Spring can take a while to do the required reflection to get code running, then code runs slowly in interpreted mode until it gets dynamically compiled.

I'm glad Amazon introduced Go Lambdas recently. As a modern, compiled language it looks like a really good fit.

Hopefully they will also improve capacity prediction in future, or at least allow customers to configure greater 'headroom' in terms of pre-started containers (presumably for a cost) to smooth out any remaining issues.

IBM broke its cloud by letting three domain names expire

Martin M


Even given the incompetence of this particular company, it stuns me that they do not have an automated monitoring system watching the critical domain expiry dates and raising a P2 ticket a month before and a P1 ticket a week before. And also daily generating P3 tickets that get automatically shut by a different system, so you know if either the watcher or the watcher's watcher fails.
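A hedged sketch of that escalation logic (ticket severities and thresholds invented for illustration): given each critical domain's expiry date, derive the ticket to raise today:

```python
from datetime import date, timedelta

def expiry_ticket(expiry: date, today: date) -> str:
    """Map a domain expiry date to a ticket severity.

    P1 within a week, P2 within a month, otherwise the daily P3
    heartbeat that a second system auto-closes, so a silent failure
    of either the watcher or the watcher's watcher is itself noticed.
    """
    remaining = expiry - today
    if remaining <= timedelta(days=7):
        return "P1"
    if remaining <= timedelta(days=30):
        return "P2"
    return "P3"

today = date(2020, 1, 1)
assert expiry_ticket(date(2020, 1, 5), today) == "P1"
assert expiry_ticket(date(2020, 1, 20), today) == "P2"
assert expiry_ticket(date(2020, 6, 1), today) == "P3"
```

Feed it the registrar's expiry dates on a daily schedule and wire the result into the ticketing system; the hard part is organisational, not technical.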

'We think autonomous coding is a very real thing' – GitHub CEO imagines a future without programmers

Martin M

By the way, I should probably have said "*can be* a fairly mechanistic process" above.

There's a world of difference between producing something that functionally works and creating tight, beautiful code. The latter is more like an art form, really.

Martin M

Arguably, integration of higher level libraries and components is *harder* than lower level coding, which is a fairly mechanistic process.

As for roles for specialised computer wranglers, it's notable that as abstractions and high-level reuse have increased, the number of developers has too. Probably because increased productivity means more problems are economic to tackle. Presumably that will end sometime, but we certainly don't seem to be nearing the inflection point yet.

What *may* happen is that there's a shift in skills requirements, with the ability to talk to end users to work together on accurate requirements being a bigger part of the job. Historically that's been mostly a different set of people than the coders (business analysts, UX designers, etc.) but it works a lot better when it's one person doing both that and the coding. People did some amazing stuff to efficiently solve business problems with 4GL languages and small teams back in the day, albeit the systems were difficult to maintain and often based on very proprietary underlying platforms.

Hardcore system-level programmers will still be required to build the lower level platforms, of course.

Rejecting Sonos' private data slurp basically bricks bloke's boombox

Martin M

I use Roon as a very slick alternative to Sonos. It's expensive ($120/year or $500 for life) but you can offset against much cheaper zone hardware. RoonLab's privacy policy seems pretty proportionate - https://kb.roonlabs.com/Privacy_Policy . The UX is absolutely beautiful.

I use a Pi with iQaudio PiDAC+ in the sitting room with an existing AV receiver and speakers - £90 all in (including a nice little case) vs £350 for a Sonos CONNECT. Bedroom is a Pi/iQaudio PiAMP+ - £110 vs £500 for a CONNECT:AMP. Both sound better than Sonos and are easily assembled in about 15 mins. Office is my existing Mac mini and speakers.

Or, as others have pointed out, there are some very good Pi audio distros (Volumio etc.) that can be combined with the free Logitech Media Server to give you a system entirely within your control that doesn't leak any information. Albeit less slick.

The downside is that all of the above need an always on PC or server, but that's the price you pay for not doing everything in the cloud.

Five ways Apple can fix the iPhone, but won't

Martin M

Re: Sound

"generally accepted that 16-bit PCM divided over 96 dB (okay, 110 with dithering) isn't quite good enough to deal with the peak sensitivity of the ear"

*Generally accepted* might be overstretching rather. E.g. see https://people.xiph.org/~xiphmont/demo/neil-young.html - from the bloke who heads up the organisation that invented FLAC, for heaven's sake.

Adoption of hi-res isn't really an argument that it makes a difference to playback (as opposed to mastering, where it is useful). Certain kinds of audiophile will buy all sorts of silly things, up to and including bags of pebbles to place into the corners of their rooms (http://www.machinadynamica.com/machina31.htm).

Of course, this doesn't mean Apple shouldn't supply a hi-res DAC. It costs incredibly little, all the decent DACs have it anyway, and it will make some people feel better.

What would be great is a better quality DAC and a decent headphone amp; that would probably make an appreciable difference. Having to pay significant amounts extra for an external DAC, with all the inconvenience that entails, does seem a bit rubbish for a premium device.

So you're already in the cloud but need to come back down to Earth

Martin M

"evolve to a two-site setup" - if you're only using one availability zone ("site") in the cloud you're almost certainly doing it wrong. For anything important, you probably want a 2-AZ setup at a minimum, possibly 3-AZ with data replication to a completely separate region (snapshotting to S3 will achieve the latter).

The premise of the article appears to be that you run all apps on the public cloud with a backup on-prem, or vice versa. That's an option, but in many cases you'll be better off dividing your workloads by their characteristics. Custom-built services that need to be rapidly developed suit the public cloud, as it's generally more convenient and flexible for development teams, who can use advanced cloud services without having to wait on anyone else for their infrastructure (of course, that means the dev teams have to have good embedded platform specialists). Commodity products that have a slow upgrade cycle and just need a Linux server and a bit of storage are probably best on-prem/colo running on converged infrastructure. Bursty workloads suit the cloud; you could perhaps put the base load on-prem/colo, but then you're running a split cluster, which is often awkward.

Replicating more than the IaaS element of a cloud on site is expensive and difficult for all but the largest enterprises, who have the scale that might make it worthwhile. But unfortunately, large enterprises usually have politics, and probably either had their IT outsourced to one of the usual, woeful, suspects ages ago, or have a CIO who is thinking about it so they can make themselves look great in the short term at the expense of stuffing things up longer term. The natural result is that the outsourced IT provider spends three years building a 'Cloud', which is actually a converged infrastructure offering a fraction of public cloud functionality at 5x the cost. And doesn't work.

Don't get me wrong, on-prem IaaS is useful and done well can be valuable. Deploying Kubernetes et al gets you closer to where the public cloud was about three years ago. But don't fool yourself it will be the equivalent of the big public clouds, which are deploying new services at the rate of knots and are essentially impossible to catch up to at the moment.

At some point innovation will slow and most services that most people need will have good-enough, commoditised on-prem equivalents that can be easily deployed, at which point the equation changes. But my guess is that's probably still a good few years away.

Martin M

Re: Private hybrid public cloud

Err, not really. Yes, there's plenty of use of compute Infrastructure-as-a-Service, which is basically what you describe, but most use of the cloud is, I'd qualitatively estimate, 'Platform-as-a-Service' or 'Software-as-a-Service'.

E.g. if you're using Salesforce SaaS then (rubbish as it is) you get a fully managed application/service where you never see the virtualised OS. If you're using AWS Lambda, API gateway and RDS to deploy your custom-build apps, it's similar.

Not to mention that even when you're using IaaS, it's much more than renting CPU cycles. VPC gets you virtualised datacentre networking, and EBS, S3 and EFS get you managed enterprise block, object and NAS storage respectively. None of which are easy to run well, and in most enterprises they are ... problematic...

For me, being able to set this stuff up, programmatically and repeatably via API, in minutes is the real benefit. No waiting about for weeks for your infrastructure team to extract finger from fundament and give you what you need.

This isn't to say that private cloud *can't* be done well. I just have never seen it. It may be something to do with the fact that running a cloud is complicated, and most of the people who can do it well are employed by AWS, Microsoft, Google etc. on salaries to match.

Tomorrow, DreamHost will square up to US DoJ to avoid handing over 1.3m IP addresses of anti-Trump site visitors

Martin M

Yes, if it also involved sweeping up the details of millions of innocent people who happen to oppose a government with capricious and authoritarian tendencies. It is up to the DoJ to frame their request in a suitably proportionate fashion.

I fail to see, for example, why this request cannot be for the IP addresses of the authors or viewers of specific posts or threads.

GlobalSign screw-up cancels top websites' HTTPS certificates

Martin M

Re: Wikipedia affected

If you've upgraded to macOS Sierra this won't work. Instead you need to do:

sqlite3 ~/Library/Keychains/*/ocspcache.sqlite3 'DELETE FROM responses WHERE responderURI LIKE "%http://%.globalsign.com/%";'

And possibly a browser restart (I did on Chrome).

This doesn't seem to be well documented around the web - I found it on Apple Stack Exchange answer 257080.

Ad agency swipes 'unnamed bloggers' for calling out its cynically fake 'save a refugee' app

Martin M

If they don't want their integrity questioned...

... then perhaps it would be a good idea to update the app's website at https://iseaapp.com/ to make it clear the application is a "prototype" and that the imagery isn't real time, making it useless and simply wasting users' time? Rather than leaving the page as is and just linking to all the favourable media coverage.

Bundling ZFS and Linux is impossible says Richard Stallman

Martin M

Re: Stallman can change the GPL as well...

The GPL and CDDL may be compatible anyway; it's never been tested in court. The version of the GPL under which the Linux kernel is licensed is rather poorly worded in this area. There's a long treatment of the linking question in ch 06 of Rosen's Open Source Licensing book at http://www.rosenlaw.com/oslbook.htm. In Rosen's opinion (which he says not to rely on, and may have changed since he wrote it), he doesn't think that the anti-linking aspects of the license will stick if things actually went to court. Canonical's lawyers seem to have come to a similar conclusion.

Although he may have written the original license, Stallman can't retrospectively change licensing terms relating to others' copyrighted code (e.g. Linux). Linux developers have licensed their contributions under a specific version of the GPL license. The developers could choose to relicense under a different version or license in theory, but in practice it would be essentially impossible to coordinate. Every single individual and corporate contributor would have to participate, and some wouldn't want to.

