Re: Who is running the project?
Looks like the S/4HANA bit is already live. So does that class as shiny, or has the boat (anchor) already sailed?
If you believe https://www.erpfocus.com/erp-implementation-costs.html it's over the odds (as expected for anything involving gov procurement), but only by about 20%. If the figures in the article include license/SaaS subs then it'll be less.
Although it will no doubt run over.
To be fair, this is one of the few potential use cases for LLMs. If I ask ChatGPT
“Where in Windows would I configure the infernal device that acts as a personal version of Gutenberg’s press?”
It tells me where to look (and describes the phrase as poetical, a nice touch). Aside from latency and GPU cost (you’d want to use it only as a backup), there’s no reason this couldn’t be integrated into the start menu. Though on an optional basis, one hopes.
If you really can’t remember *anything* though, I agree it won’t help (and it can’t do anything with your question, mind you neither can I!)
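Purely to illustrate the sort of lookup I mean, here's a toy sketch. It assumes an OpenAI-style chat completions endpoint, a hypothetical OPENAI_API_KEY environment variable and an illustrative model name - it's emphatically not how any shipping start menu works:

```python
# Toy "ask an LLM where a setting lives" lookup (illustrative only).
# Assumes an OpenAI-style chat completions API; the model name is an example
# and OPENAI_API_KEY is a hypothetical environment variable.
import os
import requests

def where_is_it(question: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # any chat-capable model would do
            "messages": [
                {"role": "system",
                 "content": "Map vague descriptions of OS settings to the actual settings page."},
                {"role": "user", "content": question},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(where_is_it("Where in Windows would I configure the infernal device "
                  "that acts as a personal version of Gutenberg's press?"))
```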
I am 100% not suggesting someone should say "Oh well, too bad". I'm suggesting they rotate their keys, with extreme urgency.
Passwords dropped in the street are very unlikely to be seen and used by even an opportunistic attacker. Credentials pushed to public repos (at least, those that are recognisable as a credential) are going to be automatically scraped within seconds or minutes by bad actors around the world. Their day job is either exploiting them directly or selling them to someone who will.
By the time the errant developer notices and tells someone who has repo deletion rights, it is, with near 100% certainty, too late to stop the credentials being harvested. It may - if you're lucky, and the attack is not fully automated - be possible to prevent them being used and a foothold being established in your network.
For the people downvoting me - are you aware how quickly AWS credentials accidentally exposed on GitHub are found and abused by attackers? Honeypot tests suggest 1 minute.
https://www.comparitech.com/blog/information-security/github-honeypot/
Note that at no point in the "what to do if you've exposed credentials" section does it say "delete the public repo in the hope that this will alter the past". Magical thinking.
Having played around with this on GitHub, I will say that the message on trying to delete a repo isn't explicit enough about the unexpected (if documented) behaviour. It really ought to have a disclaimer that says "If you're trying to delete commits you wish you hadn't pushed everywhere, this won't achieve it", and a link to a page describing what will actually help.
Also - as far as I can see, it's not actually possible to create a private native GitHub fork of a public repo, so your example appears to be impossible.
You can of course push and pull commits between repos on GitHub via a local repo (https://stackoverflow.com/questions/10065526/github-how-to-make-a-fork-of-public-repository-private). But at that point, GitHub doesn't know they're at all related and if you push anything to the public upstream, that is something you explicitly asked to do.
I’m not convinced that deleting the repo meaningfully limits the damage.
Just because the commit isn’t in a fork yet doesn’t mean the secret hasn’t been accessed. If one expects that published information can be magically made private again, one’s expectations probably need adjusting. Panicking and deleting repos will probably just get in the way of effective response.
The thing that *actually* limits the damage is rotating the secret - then it doesn’t matter who’s got the old one. You should be able to do this quickly anyway, so deleting the repository shouldn’t even buy you much of a time advantage. If you can’t - well, that’s another area to take a look at.
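To make "rotate it" concrete, here's a minimal boto3 sketch for an AWS IAM access key - the user name is made up, and in real life you'd deploy the new key to whatever uses it before touching the old one:

```python
# Minimal AWS access key rotation sketch (boto3). The IAM user name is
# hypothetical; a real rotation would push the new key everywhere it is
# used, confirm it works, and only then deactivate and delete the old one.
import boto3

iam = boto3.client("iam")
USER = "ci-deploy-user"  # hypothetical user whose key was pushed to a public repo

# 1. Create the replacement key first, so there's no gap in service.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("New key id:", new_key["AccessKeyId"])

# 2. Deactivate every other key on the user - i.e. the leaked one.
for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
    if key["AccessKeyId"] != new_key["AccessKeyId"]:
        iam.update_access_key(UserName=USER,
                              AccessKeyId=key["AccessKeyId"],
                              Status="Inactive")
        # 3. Once nothing legitimate still uses it, delete it for good:
        # iam.delete_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"])
```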
If you commit a secret to a public repo, it is broadly equivalent to pasting it on the service formerly known as Twitter. Deleting the Xeet may make you feel a bit better, but you don't know who's already copied it. Deleting a repo is even more pointless with a distributed source control system that actively encourages the wide and automated distribution of commits. In both cases, you just have to treat the secret as hopelessly compromised.
Instead of whining, what you need to be focusing on is a/ rotating your secrets ASAP and b/ stopping people doing it again with secret scanning.
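By way of illustration only - real scanners like gitleaks, trufflehog or GitHub's own secret scanning are far more thorough - here's a toy pre-commit hook that blocks anything shaped like an AWS access key id:

```python
#!/usr/bin/env python3
# Toy pre-commit hook: refuse to commit a staged change containing anything
# shaped like an AWS access key id. Illustrative only - use a proper secret
# scanner (gitleaks, trufflehog, GitHub secret scanning) for real.
import re
import subprocess
import sys

AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")  # classic long-term key id shape

staged_diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [line for line in staged_diff.splitlines()
        if line.startswith("+") and AWS_KEY_ID.search(line)]

if hits:
    print("Possible AWS access key in staged changes - commit blocked:")
    for line in hits:
        print("   ", line)
    sys.exit(1)  # non-zero exit aborts the commit
```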
Agreed. But even if they didn’t, my layperson’s understanding of the UK GDPR is that images of people are personally identifying in and of themselves, provided the person is recognisable, regardless of other linked data. So their whole statement is problematic.
Furthermore (not necessarily the case here) if you process e.g. facial images with the intent of creating a biometric, they become special category data.
Until homomorphic encryption and/or secure enclaves are viable for normal workloads, the data is going to have to be decrypted before it is processed, and that means the key being in accessible RAM somewhere. So unless you don’t have to do any processing (e.g. password managers, which just pass opaque encrypted blobs to and from clients), encryption is not a panacea.
The increased patch frequency and security monitoring required to prevent your business being overrun by ransomware may have changed some of the economics on that.
SaaS wasn't really a viable alternative back then, and it should be more cost effective now due to automation and scale, at least for anything of moderate complexity.
Finally, "contractor under bus" was very much a risk with that arrangement, whereas hopefully with a decent SaaS product, they will have more than one person who knows how everything works.
I'm sure there's sometimes (often?) bad behaviour and laziness by delivery projects, and those projects really deserve what they get (or don't get). I also suspect in some cases infrastructure organisations know perfectly well they're going to run out of space but their budgets get squeezed, which is not really their fault.
However - I do also remember quite a number of cases where I have given infra orgs notice well ahead of their stated (long) SLAs, and somehow they still missed them - not by a little but by a lot. This is across a fair spread of enterprises.
In comparison, unless you have huge workloads, you never give a cloud any notice at all and what you need appears magically in minutes. There are rare exceptions - if you've failed to reserve instances and a major AZ outage happens, you're going to find yourself at the back of a very long queue for spot instances. And I think I've read new instances in Ireland(?) are currently problematic because access to power for new DCs is rationed. But in general, the massive scale of clouds just makes the problem go away from a consumer perspective.
It was a long time ago (about 2016, I think) that I was dealing with AWS on a genuinely enterprise scale, but back then they were very good at coming in and helping find and fix stupid overprovisioning. I was pretty impressed with their "what's good for the customer is long term good for us" attitude. It may have changed since.
Regardless, I've little sympathy for enterprises. Cloud providers give you tons of MI, policy tooling and so on to enable you to keep control of your cloud estate. If you want, there are even better third party tools. They just don't get used very much, or very well. The point around hard spending limits is valid at the small scale, but for most enterprises monitoring rather than blocking is fine.
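As a flavour of the "monitoring rather than blocking" end of things, a sketch that pulls a month's per-service spend from AWS Cost Explorer via boto3 - the dates and threshold are obviously made up:

```python
# Sketch: flag AWS services whose monthly spend exceeds a made-up threshold,
# using the Cost Explorer API via boto3. Monitoring, not blocking.
import boto3

ce = boto3.client("ce")
THRESHOLD = 1000.0  # illustrative monthly threshold, USD

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if cost > THRESHOLD:
        print(f"{service}: ${cost:,.2f} - someone should probably take a look")
```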
I'm cynical, but the way many enterprise IT organisations seem to do on prem capacity management is this: 1/ Make it generally slow and hard to provision new services. 2/ Suddenly realise that the datacentre/big expensive SAN/core network switch etc. is almost full. 3/ Only provision new things on an emergency exception basis while the capacity issue is fixed (which takes ages). At no point does a serious attempt at decommissioning, consolidating or optimising old services take place.
That approach makes stupid capacity management decisions slowly, at the expense of capacity not being available when needed. Moving to Cloud lifts the constraints and in the absence of proper policies, stupid capacity management decisions can now be made really fast, via API call. On the other hand, the capacity is there when you need it.
But it doesn't have to be this way, either on prem or on the cloud.
Most normal non-tech companies with that scale of IT requirement don’t run servers themselves at all - they either started SaaS native or moved to SaaS years ago. It’s generally not economical to pay two FTEs (to cover leave and illness) to look after one server. Instead they buy apps on a £/user/month basis on someone else’s multi tenanted service, and tie everything together with nasty spreadsheets.
They won’t use AWS either - that still needs IT admins / “devops engineers” etc. They don’t have or want them.
A government run fibre network wouldn't help with utility security etc. Someone would (accidentally or maliciously) bridge the network to the Internet in double-quick time.
What would help is a zero-trust approach to security, rather than magical thinking around isolated physical infrastructure that should have died about the same time Natanz happened.
They were being done before, but not in a vertically integrated way, and that's where the clouds get their significant edge. My main point was that replicating all of the myriad capabilities hyperscalers integrate tightly within a single stack and organisation is not necessarily something that a typical government should attempt.
But I mostly agree with your last paragraph, and said something similar in my final paragraph - not every service needs the proprietary shiny. Many people seem to feel their service needs to be broken down into loads of Lambda microservices with polyglot Aurora and DynamoDB persistence, Kinesis Streaming everywhere with Redshift-based BI, Cognito authentication and so on - when actually a simple monolithic Python/PostgreSQL app would be quicker to write, easier to support and cheaply deployable anywhere with zero lock-in. Sometimes the lock-in's worth it, but it should always be a considered decision.
My minimum bar for infrastructure would go beyond manually-provisioned "bog standard servers" to proper, API-driven IaaS. Infrastructure-as-Code is a very good thing to be able to do, even for simple services. If a government can provide this itself - great. If not, the cloud providers are going to find it hard to jack up the prices on you if you're just using IaaS - you've got the leverage. Unfortunately, many organisations (commercial and gov) do manage to cock up even basic IaaS private clouds really quite badly.
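For the avoidance of doubt about what I mean by "proper, API-driven IaaS": even the most basic offering should let you do something like the following (boto3 against AWS purely as an example - any half-decent IaaS, public or private, has an equivalent; the AMI id, region and tags are placeholders):

```python
# Sketch: provisioning a plain virtual machine programmatically - the minimum
# bar for "API-driven IaaS". Image id, region and tags are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image id
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "service", "Value": "bog-standard-app"}],
    }],
)

print("Provisioned", result["Instances"][0]["InstanceId"],
      "- no ticket, no procurement, no waiting on a SAN upgrade")
```

In practice you'd express this declaratively in Terraform or similar rather than a one-off script, but the point is that the API is there to be driven.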
I'm not sure what you're talking about when you say "the fibre network was designed by the government". I was talking about a physical global fibre network - do we have one of those? Or perhaps you were talking about the BT one that never got built?
The early internet protocols and some small-scale test networks were indeed some great innovations of academia and the military. However, 'the Internet itself' - e.g. the thing that billions of people use every day - has very largely been delivered by commercial, competing enterprises in a highly decentralised way.
But in any case, I asked which *should* be done by the government, not what could be done. In a mixed economy, governments don't do everything. Many loo rolls are used across government, and I have no doubt that the government could manufacture loo rolls if it wanted to. You could even make a pretty strong argument that protecting the wider loo roll supply chain is a critical national endeavour. But that needs to be weighed against a Department of Hygiene Supplies staffed by thousands of civil servants almost inevitably resulting in patchy availability of scratchy loo roll at £20 per pack.
There are definitely things which must be done by government. Monopoly situations where there can be no long-term market (I'm in favour of some renationalisation on this basis). Things the private sector won't do (too much risk with too little potential for profit, too big to fund). Things with a very long payback time (provided there is cross-party support).
I'm not sure cloud hosting falls into any of these categories. The closest is the monopoly point - but while there's the potential for lock-in to proprietary services, if you can forgo the bells, whistles and velocity, the basic IaaS services are a substitutable commodity.
The low hanging fruit in terms of bringing in house is reducing the use of consultancies to build and support one-off custom systems. Aside from possibly buying in some flex capacity for really big projects, that is bonkers.
> Almost anything that requires recurrent payment should be run by government.
That's quite a broad category. Supply of chips for the canteen? Loo rolls?
>>Design and manufacture their own servers?
>This is one off. No need unless there is a national security concern.
They're not one-off: you refresh them on a frequency similar to that of a reserved cloud instance. So why is building, integrating and running your own data centre more important than building, integrating and supporting your own server components instead of buying Dell? Is it possible your job may be related to the former?
>> Create their own hypervisors, integrated into custom silicon?
> No need. There are open-source and free hypervisors. Current commercial offerings of silicon are adequate.
Everyone loves having someone else declare on their behalf that something is "adequate". Show me the commodity equivalent of the AWS Nitro System. "No need" depends on your use cases and non-functional requirements.
> > - Run their own global fibre networks?
> This is an exception. But surely government could build a backbone network to connect data centres in different countries and could rent out surplus capacity.
> > - Design custom electrical substations?
> Energy supply should be run by the government. We should have a national supplier.
I'm picking up a theme here - it's not just clouds, you seem to think the government should run much of the economy.
> > - Design their own CPUs?
> > - Design their own AI accelerator chips?
> > - Create their own distributed databases and analytics tools, heavily integrated into the infrastructure stack?
> No need
Again, I'm sure the people who are currently spending money on these things (or on the services that are made better and cheaper by them) appreciate your opinion that they're unnecessary. You can't go and buy an equivalent of Spanner off the shelf, let alone find an open source one.
> though of course software development should be in-house as hiring big consultancies is expensive and delivers poor quality. Though this would need a reform of finances, as currently public sector by design cannot employ specialists as they are unable to pay market rates.
I mostly agree with this actually. Once they've reformed finances and shown they can deliver bog standard enterprise IT in-house, let's pick up the conversation later about trying to reproduce something the world's leading tech companies took 10-15 years to build out. Although we won't, because we'll probably be dead.
> > Being a cloud is an awful lot more than building out a datacentre...
> Nothing that cannot be built and tailored for government use and bring massive savings long term. Not to mention local jobs.
I'm not quite sure what you're basing this assertion on. Perhaps you designed and built one of the hyperscale clouds yourself, and it turned out to be unexpectedly easy?
> > they increasingly consume complex, integrated technology stacks that require engineering effort and operational investment way higher than a single mid-sized government can afford.
> That's nonsense. It can only fall apart by corruption.
Well, that's OK then. Because history teaches us that dramatic expansion of the state *never* results in rampant corruption.
Which of the following do you think they should do, all of which are now things major cloud players do?
- Design and manufacture their own servers?
- Create their own hypervisors, integrated into custom silicon?
- Run their own global fibre networks?
- Design custom electrical substations?
- Design their own CPUs?
- Design their own AI accelerator chips?
- Create their own distributed databases and analytics tools, heavily integrated into the infrastructure stack?
Being a cloud is an awful lot more than building out a datacentre, even a clutch of gov-wide datacentres. Clouds long ago moved beyond providing a bunch of servers and storage in a datacentre - AWS now has over 200 customer-facing services, all orchestratable by API, all integrated into IAM / billing / SDN / monitoring etc. etc., all very effectively capacity managed so you can almost always provision within minutes, seconds or less. These are big building blocks that enable you to e.g. build a highly scalable transactional app with mobile and web identity management, event-driven feeds to a data warehouse, a data lake, a contact centre, maybe some media transcoding services, with redundancy across three availability zones and, if data protection regs allow, another region. Provisioned in large part with a few lines of Terraform and without so much as speaking to procurement.
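To give a flavour of what "orchestratable by API" means in practice, here's a sketch standing up one of those building blocks - a fully managed database table - with boto3 (the table name and region are illustrative):

```python
# Sketch: a few lines to stand up a managed, multi-AZ replicated database
# table - one of the "big building blocks". Table name/region illustrative.
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-2")

dynamodb.create_table(
    TableName="citizen-applications",  # illustrative name
    AttributeDefinitions=[{"AttributeName": "application_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "application_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",     # no capacity planning at all
)
dynamodb.get_waiter("table_exists").wait(TableName="citizen-applications")
print("Managed, replicated table live in about a minute - and, yes, proprietary")
```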
Anyone who views this through the lens of "who's running the datacentres" has thoroughly missed the point. Teams creating business services don't consume datacentres: they increasingly consume complex, integrated technology stacks that require engineering effort and operational investment way higher than a single mid-sized government can afford.
There is an argument to be made that maybe we don't really need all this fancy stuff and the increased velocity it brings, and shouldn't focus on capability if it means lock-in. That's really the same highly-integrated-and-capable-but-proprietary vs good-enough-commodity pendulum that's been swinging back and forth forever. But it's a more nuanced argument that really needs to be applied at the workload level within a particular business context.
I suspect the livelihoods of most people reading this depend - directly or indirectly - on non-techies using computers, so personally I’m all for them being allowed to use their keyboards.
I’m no doubt a moron and idiot when it comes to servicing my car or understanding the fine details of server CPU microarchitecture. Thankfully, some clever people in those domains have put lots of effort into making their technologies easy and safe to use, rather than bitching about me.
“ You're now a paypig”
The British Library seems the poster child for an organisation that would have been much better off on a cloud provider. Yes, there’s lock-in, but better than the alternative.
“ I'm gonna laugh so hard when one of the 'cloud' providers gets taken out by the same ransomware gangs.”
None of us are going to be laughing if a major cloud gets taken out, because quite large chunks of many large societies will no longer be working. Reckon it would take more than the average ransomware gang though.
Which would no doubt be met by a serious response, which makes it less likely. Hey, it worked for nukes, even if it was a bit dicey.
“ The big snag is that once you've sacked your IT staff because you've gone into the cloud, you've got nobody left who can optimise those controls”
Agreed, but the skillsets required to run a legacy server estate and those to run a client estate are rather different. Doing the latter doesn’t mean you have to do the former.
“Apple is actually being quite flexible and accommodating. Laws of the EU do not apply in other countries and they would be well within their rights to be more aggressive.”
IANAL but Thomson Reuters seem to disagree:
“EU competition law applies to any collusive or abusive conduct that has the necessary effects on competition in the EU and trade between member states, regardless of the nationality or geographic location of the enterprises concerned or where the conduct occurred.”
https://uk.practicallaw.thomsonreuters.com/w-016-5380
Enforcement of extraterritorial laws can sometimes be tricky, but unless Apple are prepared to pull out of the EU and their execs never want to visit or do a transfer in an EU airport, I don’t think it would be a problem in this case.
Leaving aside other humans, it's impossible to truly explain a lot of what happens in my own head. Anyone who's read Kahneman's Thinking Fast and Slow etc. should recognise that much decision making is not Type 2 conscious thinking, it's happening via a high-speed Type 1 black-box association machine, with any required explanations being invented and retrofitted in retrospect.
Amazon beg to differ for Amazon Marketplace purchases. Point 11 of their Conditions of Use states that:
Amazon allows third party sellers to list and sell their products at Amazon.co.uk. In each such case this is indicated on the respective product detail page. While Amazon as a service provider helps facilitate transactions that are carried out on the Amazon website, Amazon is neither the buyer nor the seller of the seller's items. Amazon provides a service for sellers and buyers to negotiate and complete transactions. Accordingly, the contract formed at the completion of a sale for these third party products is solely between buyer and seller. Amazon is not a party to this contract nor assumes any responsibility arising out of or in connection with it nor is it the seller's agent. The seller is responsible for the sale of the products and for dealing with any buyer claims or any other issue arising out of or in connection with the contract between the buyer and seller.
To add an even more Horizon-y feel to this, a browse of their website reveals that they provide ePOS software.
It seems to connect up with their JTL-WAWI "cloud" solution with access via RDP (optionally via a VPN tunnel). They even promise to patch the Windows servers.
I don't think anybody here will be able to think of any ways that this could all go horribly, horribly wrong.
The cost of (forced) assistance should be recovered by issuing equity at the valuation the company would otherwise attract at that point in time (i.e. likely near zero). Shareholders get soaked, but not as badly as if the company had collapsed, and the taxpayer often makes a healthy profit. Exec options go to zero.
Cyber insurance - now that’s an area fraught with moral hazard.
“Surely business can set up a subsidiary (or set up different structure that creates sufficient legal separation) in the country that doesn't ban ransom payments”
Except that sounds quite a lot like money laundering to me. Not sure I’d want to risk establishing the legal precedent if I were an exec accustomed to staying away from home in rather better rooms than a cell.
This is much more sensible. Opening up the API is nice and makes it possible to write an alternative server, but it certainly doesn't guarantee that will happen, particularly for minor products. Even if it does, it helps users of this forum but not the 99% of people who don't know how to run their own server. Instead, financial incentives need to be aligned to let consumers compare the cost of products properly up front.
I'd combine your suggestion with an obligation to say in the product specifications/advertising how long the company will provide the service for (as a minimum). It's not reasonable to expect subscription-free service for ever, but it should be transparent when you buy. If they don't keep the trust fund topped up sufficiently to run it for the remainder of the time, and the product is withdrawn or the company goes bust - director liability for the shortfall.
> Compilation still takes time. On an *extremely* high end box it's under a minute
In 1995 I submitted a trivial patch for a file system endianness bug in the Linux 68k port. It took a while, largely because a kernel recompile took over a day on my Falcon 030. I can’t remember how much RAM it had (2MB?), but it’s safe to say it wasn’t enough and a swapfest ensued.
I got into the habit of reading over the code a few times before kicking it off…
Yes, it’s opt-in from the server end. As a website operator, you register and point your DNS at them, at which point you obviously need to trust them as much as your origin server(s). I would imagine there’ll be an opt-in for this specific service given it’s rewriting pages.
Yes, they MitM - that’s how CDNs work. They’re distributed caching reverse proxies, so they have to terminate TLS on your behalf before connecting to the origin server to retrieve cache misses.
In fact Cloudflare is also a CA, so it can automatically and transparently issue a domain-validated certificate if you want it to. It can also provide a certificate for your origin server to secure the second leg.
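For anyone wondering what a "caching reverse proxy that terminates TLS" actually boils down to, here's a toy stdlib sketch - the origin, port and certificate paths are placeholders, and a real CDN obviously does vastly more (cache expiry, header handling, DDoS protection, etc.):

```python
# Toy caching reverse proxy: terminate TLS from the client, serve cache hits
# locally, fetch cache misses from the origin. Origin URL, port and the
# certificate/key paths below are placeholders, not anything Cloudflare-specific.
import ssl
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

ORIGIN = "https://example.org"  # placeholder origin server
CACHE = {}                      # path -> (status, body); no expiry, it's a toy

class EdgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in CACHE:                      # cache hit: serve locally
            status, body = CACHE[self.path]
        else:                                       # cache miss: go to the origin
            with urllib.request.urlopen(ORIGIN + self.path) as resp:
                status, body = resp.status, resp.read()
            CACHE[self.path] = (status, body)
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    server = ThreadingHTTPServer(("0.0.0.0", 8443), EdgeHandler)
    # TLS terminates here, at the "edge", with a cert issued for the site.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("edge-cert.pem", "edge-key.pem")  # placeholder paths
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()
```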
It’s an interesting question, and one for a lawyer, but I suspect comes down to the context and whether it qualifies as fair use - hence the careful qualification.
Wikipedia’s take - https://en.m.wikipedia.org/wiki/Legal_issues_with_fan_fiction - explains there are no fixed rules, but when deciding fair use on a case-by-case basis courts consider:
- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
- the nature of the copyrighted work;
- the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
- the effect of the use upon the potential market for or value of the copyrighted work.
So at the extremes: if you’re doing it as part of a 500 word school assignment, you’re fine. If you’ve published your own novel sitting alongside the Song of Ice and Fire series without a license, you’ll likely have problems.
Is OpenAI making fair use? No idea, but it’s definitely commercial use. Definitely feels like one for the courts…
Firstly, most LLMs including ChatGPT are entirely capable of regurgitating quite long sequences of training data, referred to as "memorization" - https://www.theregister.com/2023/05/03/openai_chatgpt_copyright/ . Actors do this too, by altering their neural weights in a somewhat similar way, and if they wrote the play down and distributed it, that would be a breach of copyright.
But even leaving aside whether text is reproduced verbatim, case law has determined that copyright protection extends to the traits of well-delineated, central characters - distinct from the text of the works they are embodied in.
I've just typed "how would tyrion lannister describe having a baby" and "how would cersei lannister describe having a baby" and it spits out highly distinctive, extended replies very much in line with the thinking and speaking styles of those characters.
I'm no expert, but I can see how it might well breach copyright to reproduce these outside of a fair use context.
I'm not sure how Linux can be elitist when it has the majority of the server market and the vast majority of the *new* server market. My commiserations for being on the wrong side of history; it's like listening to a Solaris advocate circa 2008 or a mainframe advocate in 1995. But if there's not too much of your career left, stop tilting at windmills, sit back, relax and enjoy. Surfing the trailing edge can be lucrative - your skills are increasingly at a premium.
"effectively an unsupported platform, or practically so"
The VB6 IDE passed out of extended support *15 years* ago. No "effectively" or "practically" about it. I'd say it's remarkable it can even reliably connect to a modern Oracle database, but I have a suspicion that the one they're using will be of a similar vintage and support status.