Time for Twitler to accept dealing with unions as a cost of doing business and trim a tiny fraction off his bottom line. The Nordics are very productive so I think it’s less likely that their unions will be unreasonable, but the longer this goes on the worse it’s going to get for him.
330 publicly visible posts • joined 7 Jul 2009
I think diversity is great when it's a side-effect of good hiring - but not when it's a corporate goal or cultural bugbear.
Two principal things I look for in a candidate for roles I've interviewed for in the past:
- is the individual capable of doing the job, or can they be made so in the short term?
- can they communicate sufficiently well to facilitate this in a team?
I find that hiring individuals whose first language is not English can *sometimes* be an obstacle to the latter, although never one that's been insurmountable given enough patience. Any other identifying characteristic - age, ethnicity, gender etc - is a total irrelevance.
But I would not waste valuable time fishing around for some diversity box-ticking unicorn candidate when there are individuals who match the above criteria within reach - and would opt myself out of the hiring process if asked to do so.
Gartner wisdom at its best
On one hand, "Warrilow nonetheless advocated planning for multi-cloud adoption ... so you can diversify and avoid lock-in" and on the other, "Warrilow also advocated going cloud-native".
How can you avoid lock-in when you've gone native on two disparate platforms?
I have yet to encounter a client that has made multi-cloud work as anything other than a toilet they can flush their budget down. Every analogue between providers works somewhat differently, and tools that try to abstract those differences away (like Serverless) don't work in practice without significantly customising their naive default configurations or adding numerous third-party plugins that aren't portable between clouds.
You need twice the number of skills in-house, pay through the nose for cross-cloud traffic, and can't shift workloads between providers without a lot of effort and testing. Even vendor-neutral technologies like Kubernetes have significant variances in the way they work out of the box if you use the one supplied by the provider.
There will be a tiny fraction of tech businesses for whom the total failure of an entire cloud provider is an existential risk, where going multi-cloud for resilience is the necessary mitigation. There will be a slightly larger fraction that needs services and facilities unavailable from any single provider, and may rationally run multiple clouds in a hub-and-spoke model. But for everyone else, getting it working per the board's vision is a costly pipe dream that'll never be realised.
When decommissioning *your own* production data where you haven’t had absolute confirmation that it isn’t in use, you at least snapshot or back it up before removing it from your active set. That way you can quickly revert a potential mistake, which is ALWAYS a possibility when performing any non-standard operation on a data source.
It’s frankly imbecilic not to do this with customers’ data.
Carrier networks use carrier-grade NAT, so your phone's IP address isn't unique. They have to do this because there simply isn't enough IPv4 address space for every mobile phone to have its own. It is certainly not a reliable unique identifier. And making *any* HTTP(S) request reveals your (gateway's) IP address to the third party you're connecting to.
In this instance it's probably a simple GET request with no such privacy-violating fingerprint being sent - if it was anything else they'd show their proof.
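The address sharing above can be sketched in a few lines. This is a toy model only: the public address, port range and lookup table are all made up, and real carrier-grade NAT is far more involved, but it shows why one public IP identifies thousands of subscribers rather than one phone.

```python
# Toy carrier-grade NAT table: many subscribers share one public IP and
# are distinguished on the outside only by an allocated port.
PUBLIC_IP = "203.0.113.7"   # hypothetical gateway address
nat_table = {}
next_port = 40000

def translate(private_ip: str, private_port: int):
    """Map a subscriber's (ip, port) to the shared public (ip, port)."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = (PUBLIC_IP, next_port)
        next_port += 1
    return nat_table[key]
```

Two different handsets making requests both appear to come from `203.0.113.7`; only the port differs, and that mapping lives solely inside the carrier's gateway.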
Given that an email link can be customised to identify an individual recipient's visit it strikes me that there's practically zero legitimate need for a CAPTCHA anyway for any well built system.
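One common way such per-recipient links are built is with an HMAC token, so the server can later verify who clicked without a CAPTCHA. A minimal sketch, in which the domain, parameter names and secret are all hypothetical:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical key, never sent to clients

def make_link(recipient_id: str) -> str:
    """Build a per-recipient link whose token proves which recipient clicked."""
    token = hmac.new(SECRET, recipient_id.encode(), hashlib.sha256).hexdigest()
    return f"https://example.com/confirm?user={recipient_id}&token={token}"

def verify(recipient_id: str, token: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    expected = hmac.new(SECRET, recipient_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

A forged or swapped token fails verification, so the link itself authenticates the visit.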
"Avanan, which sells an AI-based service that competes with traditional SEGs, unsurprisingly doesn't think much of these gateways and says it has new evidence to support its claims."
I have about the same level of confidence in an AI-based system as in a rules-based one: except that with rules-based systems you can tell your users with confidence what sorts of scams they are and aren't protected against, whereas with AI it's a case of "most should be filtered out, but watch out for literally anything in the 1-2% of cases where the AI doesn't work (yet)".
"Russia needs more internet access, not less."
I may be wrong, but IIRC Cloudflare and Akamai provide nothing that could be considered access or connectivity - just edge services layered on top of that.
If that's correct then the two of them pulling out of Russia would have no long-term impact on the ordinary citizens of Russia, other than increasing latency - and mildly inconveniencing businesses using their Russian servers to reconfigure and use ones in other regions.
Debate scope is vague
I’m not sure what the proposition is - whether ‘hardware rental’ means merely renting physical office or server equipment, renting dedicated infrastructure in the cloud, virtual infrastructure, or a vendor providing fully managed IT. The content suggests it could be any or all of them and I can’t really glean which Dominic is arguing against.
Government-favoured child safety app warned it could violate the UK's Investigatory Powers Act with message-scanning tech
IANAL but I'd expect a test of this law would clarify the illegality of the interception and relay (e.g. for off-device third party analysis) - as opposed to rules-based filtering - of incoming messages on behalf of the recipient. The two are very different. If the latter were illegal then one could argue that every spam filter, SPI firewall, traffic analysis package et al would fall afoul of it.
The problem with these regulatory actions is that they're tame enough to be considered a 'cost of doing business' for the arseholes who know they can break the law and reap the rewards. $70m is probably a fraction of the profits reaped over the 5 years of dodgy business practices they're supposed to be punishing.
Fines that push the business into bankruptcy and jail time/asset forfeiture for the top levels of the org chart are the only ways to disincentivise the next lot of pricks who have a similar idea.
Fastly 'fesses up to breaking the internet with an 'an undiscovered software bug' triggered by a customer
Those harping on about the "poor reliability of cloud vs on-prem" don't appreciate what a CDN does and why it's not a simple binary of on-prem vs cloud.
A CDN is generally in place for two purposes: to mitigate traffic issues for static assets, and to keep those assets as geographically close to customers as possible so as to minimise latency for your worldwide audience. You might be able to achieve the first one yourself with a lot of on-prem racks and fat pipes in a small set of DCs, but the second is *absolutely infeasible* for even the largest orgs; you would have to have kit in as many locations as the CDNs themselves. Given how critical low latency is (perceived to be) in the modern browsing experience, that's why even Google, MS and Amazon use them and were partially affected by the Fastly outage.
And CDNs like Akamai vastly predate the advent of cloud computing.
Re: No confidential data should be in QR codes
If it’s actually needed, then the data in the QR code should be a subset of all other visible information on the certificate. If there’s more, then that’s a failure of the gov or org that designed the certificates.
That at least allows people to make their own decision on whether to air sensitive stuff on social media!
Early adopters delighted as Microsoft pulls plug on Mobile Backend as a Service. Haha, only joking – they're fuming
Re: Yea - give me that random stuff
I don't think you understand what continuous integration is - it's about doing things off the back of changes from your source code - generally on branches in a VCS - not your dependencies'.
If you have your project set up properly using things like package lock files, then the build process run under your CI will bring in the same versions of dependencies every time, until a developer commits a change that specifies different ones. And if you can't manage doing that locally on a developer's machine then you really shouldn't even be using a CI tool.
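The reproducibility point boils down to this: a lock file is just a pinned name-to-version map, and a correct build resolves exactly that map. A small sketch (package names and versions are illustrative only) of the check a build could apply:

```python
def lock_mismatches(lockfile: dict, resolved: dict) -> list:
    """Compare what the lock file pins against what a build actually resolved.

    Returns (package, pinned_version, resolved_version) for every divergence;
    a reproducible CI build should always yield an empty list.
    """
    names = set(lockfile) | set(resolved)
    return [
        (name, lockfile.get(name), resolved.get(name))
        for name in sorted(names)
        if lockfile.get(name) != resolved.get(name)
    ]
```

Any non-empty result means a dependency drifted from what was committed, which is precisely the "random stuff" a lock file exists to prevent.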
World's richest bloke battles Oz catastro-fire with incredible AU$1m donation (aka load of cheap greenwashing)
The landscape will change for sure - I predict highly skilled contractors will only be engaged by the larger firms via the big consultancies, who'll end up fighting each other for good people with particular skills. Those who don't go down this route may have to reduce their rates to work with smaller firms who aren't in the scope of IR35, or go perm. I can't imagine many will go abroad - I've noticed remote work roles in northern Europe come up increasingly often, and I expect this will happen more with the surplus of skilled people in the UK. Some clients will look overseas to fill the skills gap, wiping out tax revenue in the UK: that could swing back in a generation when government wakes up to this wilful act of self-destruction.
Personally, I'd advise making oneself as indispensable as possible to someone with a large budget and tight deadlines in the next few months.
Still, there are quite a few contractors I've worked with in the past who are not highly skilled, pleasant to work with or even decently competent. They're the ones who should be worried about April.
My previous team were forced to use Teams much to our chagrin... like most MS stuff it tries to be the be-all and end-all to the entire domain of corporate comms, and has a huge feature set, but performs abominably, at least on our Macs.
Slack by contrast does IM incredibly well but is next to useless for even basic video conferencing, for which Google Hangouts is still the king (in my experience) as long as you don’t have too many people involved.
But then again conference calls with more than 25 people involved are definitely a waste of everyone’s time.
Chef roasted for tech contract with family-separating US immigration, forks up attempt to quash protest
I can understand the (apparent) good intentions of devs for doing these things, as I wouldn't want my work used for purposes I find ethically abhorrent. But open source software can be, and is, used for evil, and the willingness of those who do this to see no evil in this regard strikes me as at least naive - possibly hypocritical, or perhaps self-serving in today's publicity-driven economy. Singling out an intermediate relationship between Chef and ICE seems a bit of a gimmicky reason to me when we accept things such as IBM's complicity in the Holocaust, and still allow them to use our OSS (and in turn use theirs).
If nothing else, this is another feather in the cap for local dependency caches...
German ministry hellbent on taking back control of 'digital sovereignty', cutting dependency on Microsoft
Re: Uncontrollable costs?
"IMHO no one has ever saved money by moving their data centers to the cloud"
Untrue - outfits I've worked with who've adopted serverless compute have definitely saved money from not having to run virtual machines 24/7 for workloads that aren't constant. I'd like to see anyone emulate the same with in-house kit in their own DC...
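The arithmetic behind that saving is straightforward. A rough sketch with entirely hypothetical prices (real cloud pricing varies by provider, region and tier):

```python
# Hypothetical prices for illustration only -- not any provider's real rates.
VM_HOURLY = 0.10                 # always-on virtual machine, $/hour
FN_PER_GB_SECOND = 0.0000166667  # serverless compute, $/GB-second

def monthly_vm_cost(hours: float = 730) -> float:
    """An always-on VM bills for every hour of the month, idle or not."""
    return VM_HOURLY * hours

def monthly_fn_cost(invocations: int, seconds_each: float,
                    memory_gb: float = 0.5) -> float:
    """Serverless bills only for actual execution time and memory."""
    return invocations * seconds_each * memory_gb * FN_PER_GB_SECOND

# A bursty workload: 200,000 invocations of 300ms each at 512MB
vm = monthly_vm_cost()
fn = monthly_fn_cost(200_000, 0.3, 0.5)
```

For that bursty workload the serverless bill comes to well under a dollar against roughly $73 for the idle-most-of-the-time VM; the gap closes, and eventually inverts, as utilisation approaches 24/7.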
Hey, it's 2019. Quit making battery-draining webpages – say makers of webpage-displaying battery-powered kit
Yeah, no thanks
Low-energy use is just a consequence of low resource utilization - so ensuring your websites/webapps make as little use of CPU, memory and network capacity as possible will produce a battery-friendly experience...
So this is just the latest rehash of what web engineers have been advocating for the past two decades, reframed against energy use instead of performance.
Browser makers might offer a low-energy mode that allocates a fixed resource budget to each page, and throw down the gauntlet to client-side devs to build experiences that run sufficiently well inside those budgets. That way, users would be in control of their own energy usage, and could vote with their feet away from garbage sites that crawl when running inside this mode.
The idea that the consumer is no worse off when updates stop is fatuous at best. A Chromebook is by definition designed to work fully only when connected to the internet, which today is a very hostile environment. Patches are *required* to keep a Chromebook working correctly in that environment, and withdrawing that support at some unspecified point in the future gives it a variable useful lifetime.
When patches stop, that device becomes fundamentally less useful because it will start to exhibit failures related to the environment for which it is designed. Yes, you can install another OS on it, but that changes the nature of the device into something like every other laptop, prone to viruses etc - which it is sold as being a secure alternative to.
Of course, the support is provided by Google and the hardware by ASUS or whoever, and consumer law won’t help you because the retailer and manufacturer are fundamentally not involved in that.
I too think it’s revolting that hardware cycles encourage such wastefulness, but I think even regulations are unlikely to make the situation any better. Mandating that devices have software support for 10 years from date of purchase will just lead to the industry sidestepping their obligations, like delegating support to a subsidiary they can just shut down, delaying patches indefinitely, or deciding that vulnerabilities don’t necessitate patches.
It's happening, tech contractors: UK.gov is pushing IR35 off-payroll rules to private sector in Finance Bill
There's simply not enough airtime being given to the scandal that HMRC's practices are the sort you'd expect from third-world dictatorships. Unlimited retrospective taxation, giving official advice to individuals trying honestly to stay on the right side of tax law and then reneging on it, and now building tools to entrap and mislead.
Coming at this as a contractor in a fairly grey area, and having consulted with a tax lawyer over IR35, I wonder if the industry could collectively bypass the current rules by rethinking contracts. I know it won't suit all organisations - particularly those using the contract market for 'temp' staff to cover roles they want to employ permies for - but giving trust and autonomy to contractors to fulfil their end of contracts as specialists, rather than as 'overpaid permies', might keep the market going.
I've seen some absolute messes from someone copy-pasting sections of employment and service provider contracts that refer to people rather than companies, and make references to health and sickness, working hours, notice periods and other expectations that you wouldn't inflict on service providers. I think we'd have a lot more chance of scoring IR35 wins if our client contracts referred to deliverables or statements of work - even fairly nebulous ones - and agreed costs instead.
You can rely on HMRC to fuck it up
Not a single online interaction with HMRC I've had running a business for 2 years has resulted in me getting through one of their obtuse workflows without some error happening along the way, including this MTD switchover shambles. In my case, I got an email telling me my direct debit had been migrated to a new bank account two weeks before I submitted my quarterly return. Submitted it, everything appeared to work properly, and then one week had a feeling something would have gone wrong, and inevitably found that nothing had changed in my VAT bank account. Cue a call to them, and it was "oh, you weren't migrated in time for the direct debit to go through, you'll have to make a manual transaction".
Then why send me an email saying I had been 3 weeks before!?
But I can't fault their people on the other end of the phone when I inevitably have to call them. They have an almost melancholic view that whatever I've done online won't have worked because of the latest fuck-up of the day and seem to be capable of sorting it out manually.
Before we lose our minds over sentient AI, what about self-driving cars that can't detect kids crossing the road?
Seems like a paid alpha for a product that will be viable in 10-15 years' time when ISPs (hopefully) improve their edge networks. I'm sure it will work soundly for a blessed few - whereas the rest of us on congested, inconsistently performant networks will just experience the same thing as most did with OnLive - noticeably laggy, jittery gameplay. I suspect this generation's services will all fail, publicly, but provide Google, MS et al with the data they need to re-launch them when the majority of the world's infrastructure is capable of supporting streamed gaming fairly reliably.
But on a different note, just how was the conclusion that this is economically unviable reached? The traffic point is pretty moot - streaming game video is just the same as streaming Netflix, and that certainly hasn't been a problem. On the hardware front, as a cloud provider Google will have plenty of spare compute capacity in its DCs, and Stadia will be niche enough to run lots of part-time casual gamer workloads with demand balancing while it works out how far to scale. Spare capacity is wasted capacity, so it may as well be put to some use...
If you want a vision of the future, imagine not a boot stamping on a face, but keystroke logging on govt contractors' PCs
Such an impressive technological solution naturally makes the assumptions that
1) All work is done on a desktop machine
2) All work involves typing or mouse activity
3) All work is done on one device
4) All work happens while online
5) Phone calls with the client aren't part of work
6) Meetings never happen
7) Business travel isn't work
...and probably a million other dumb things.
I wouldn't worry. It'll be trialled, ripped to pieces by users, and unceremoniously dumped when contractors end up taking their clients to court, or walking out en masse over unpaid billable hours, when they can demonstrably prove that they've been working during the hours the system tells the client they haven't.
Capita chief exec Jonathan Lewis, who was part of the panel being grilled by the MPs, chipped in: "That's close to 100 per cent of the margin on the contract. And if you add the incremental investment we've had to make, which is somewhere in the order of £60m, to deliver on that, over the term of this contract we will lose a very considerable sum of money," adding that the army would save "£200m" over the lifetime of the RPP contract.
A 2% margin on a £1.3bn IT contract? Oh dear. (I suspect quite a lot of executives on this project will be very well remunerated.)
So their Ericsson appliances/applications have somehow failed.
Perhaps they should have avoided this single point of failure for such a critical piece of comms infrastructure (using redundant bits of kit from the same supplier doesn't work if all instances of it are affected by the same issue).
If other operators around the world are affected then it sounds like a botched update, or a very well coordinated cyberattack. Or perhaps they're all using some cloudy service and THAT has gone down?
"Hi! Let's team up in getting this issue resolved. To start, can you tell us since when did the issue start? Have you tried clearing your mobile browser's cache and cookies? We'll be waiting for your response."
"We don't have adequate monitoring, we're putting obstacles in your path to make you think it's anything but our fault, and we won't update you until you update us."
I worked for a fintech company a while back, where regulatory compliance required explaining how significant, financially life-changing decisions were made by algorithms. At some point the company 'adopted AI' and I asked how such an 'AI' working for the company could explain its (evolving) decision-making process to a regulator.
I received no answer from the AI guys, but I suspect it would be along the lines of "Well, it all started with training data set n-458493, I thought the answer was 2, but I was told it was 3, so I adjusted one of my neurons to give 2.6 in future. Then training data set n-458492 came along, I thought the answer was 2.6, but I was told it was 1, so I adjusted two of my neurons...."
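That caricature of "adjusting neurons" is not far off what a single gradient-descent step actually does. A deliberately tiny sketch with one weight and made-up numbers loosely echoing the ones above (real networks do this across millions of weights, which is exactly why the accumulated result resists explanation):

```python
def train_step(weight: float, x: float, target: float, lr: float = 0.1) -> float:
    """One SGD step on a single linear 'neuron' y = weight * x (squared error)."""
    pred = weight * x
    grad = 2 * (pred - target) * x   # d/dw of (pred - target)^2
    return weight - lr * grad

w = 1.0
for x, t in [(2.0, 3.0), (2.6, 1.0), (1.0, 2.0)]:
    w = train_step(w, x, t)
# After millions of such nudges, the final weights carry no human-readable
# record of *why* any particular decision comes out the way it does.
```

Each update is individually trivial; the opacity a regulator would object to comes from the sheer number of them compounding.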
It's a crying shame that the no-nonsense emphasis of the code of conflict didn't make it into the code of conduct. The expectation of what some might feel were overzealous reviews was the whole point of weeding out entitled personalities who shouldn't be contributing to the most critical software project in the world. I think LT's rants have done more good than harm, in publicising that poor-quality code and unprofessionalism will get you publicly embarrassed - but ONLY for those things.
Instead, this catch-all made it in:
Examples of unacceptable behavior by participants include:
- Other conduct which could reasonably be considered inappropriate in a professional setting
...where a committee (or individual) can interpret things however it wishes, to get rid of controversial people or ones it simply doesn't like. Something like https://www.inc.com/sonya-mann/drupal-larry-garfield-gor.html springs to mind!
"It's a local scan (in web page code) not a remote port scan.
That's a big CMA difference if you ask me. (local verses remote)."
I don't know why this comment is getting downvoted. No individual or remote system is connecting to your machine, and this (invasive, I agree) action is triggered by your browser downloading some asset on a system you are using voluntarily.
I agree there are ethical ramifications as this information is reported back and used 'somewhere'. But legally, I can't see how this could be any more a violation of the CMA than almost every media website the world over checking to see if you're running an adblocker in your browser, or downloading and running a script that performs port checks on your machine using netstat.
Facebook stockholders tell Zuck to reform voting rules as data scandal branded 'human rights violation'
"Cry me a river. In a market economy if you do not like how the company is acting on your behalf sell your shares. When the shares were bought I expect the situation was the same, so if they dont like the situation they can remove themselves easily.
In the market your money is your vote, use it."
You're not wrong, but I imagine the asset managers' desire to be seen as 'caring shareholders' is eclipsed by their desire for profit, and this is simply a publicity stunt.
Re: Can I Just Point Out ...
"What if the only purpose of tracking the value is to compare against some threshold value and display an alert / send a notification / ring an alarm or some-such? Then that would explain why it managed to sail under the radar for so long.
Making sweeping assumptions on usage, without seeing the full context makes you almost as bad as the original coder."
1) The submitter's boss told him it needed fixing.
2) If that was the case then the TOTAL_EXPOSURE variable is totally misleading and will hurt future generations... and the code is still bad.
"Thus to employ people without affecting OPEX you must use CAPEX (Capital Expenditure). This is usually defined as project cost.
Thus using CAPEX instead of OPEX you separate Contractor from Employee. Therefore, Contractors are not employees. No IR35."
CAPEX is the purchase of assets that must be amortised over their lifetime, as they deliver value beyond the financial year in which they were purchased. For tax purposes they cannot be claimed as business expenses. OPEX is everything that's more or less an ephemeral operating cost (and therefore a 100% deduction from revenue) for tax calculations.
The costs of retaining employees and the services of contractors to keep your business operational both fall under the latter. Which internal 'budget' a client uses to fund this is wholly irrelevant as far as HMRC's view of the client's, and contractor's tax situations is concerned.
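The CAPEX/OPEX distinction above is easy to show numerically. A toy sketch with made-up figures, using straight-line amortisation as the simplest model (real capital allowance rules are more involved):

```python
def capex_deduction(cost: float, useful_life_years: int, year: int) -> float:
    """Straight-line amortisation: the slice of a capital cost recognised
    in a given year of the asset's useful life."""
    if 1 <= year <= useful_life_years:
        return cost / useful_life_years
    return 0.0

def opex_deduction(cost: float) -> float:
    """Operating costs (salaries, contractor fees) are deducted in full
    in the year they are incurred."""
    return cost

# A £50k server amortised over 5 years vs £50k of contractor fees:
server_year_one = capex_deduction(50_000, 5, 1)   # £10k recognised this year
contractor_fees = opex_deduction(50_000)          # £50k recognised this year
```

Both contractor fees and salaries land on the OPEX side of this split, which is why the client's internal budget label changes nothing about IR35 status.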
The article omits an important detail of the case: this is the second time the guy has been targeted by HMRC, and ALSO the second time he has beaten them. That stinks of a punitive agenda, which is against the principles of impartiality HMRC is supposed to uphold. Even worse, the cost of prosecuting the case would have been higher than the paltry sum of tax they could have claimed had they won, so this would have been a symbolic victory at best. As it stands, us taxpayers have to foot the bill for the lawyers.
""It could be prejudicial to taxpayers if allegations of tax avoidance in HMRC's statement of case, which the tribunal may later decide are unfounded, are reported in the press before the case has been heard."
AFAIK 99.9% of "taxpayers" wouldn't go to Upper Tribunal unless the sums involved were colossal and worth taking on. So doesn't this judgement simply remove the additional benefits afforded to the wealthy and/or large companies who can afford to appeal?
Funny, when VMware sued them in 2013 the Hortonworks line was:
"People have every right to freely pursue opportunities with any company they believe will advance their personal and professional objectives, and should not be limited in any way from doing so."
I was talking about this with a friend yesterday evening - businesses with enormous risk potential (i.e. banks) are incredibly risk averse and this generates IT practices that favour disconnected, non-converged silos of systems with high levels of technical debt. But at the same time, this, at least on the surface, prevents broad systemic problems from affecting the entire business' IT when something goes wrong. Since devops is so often 'sold' as realising various business efficiencies from converging these silos together, that's anathema to that way of thinking and so things stay the same.
As the company explains in its fess-up post, the source of the leak was an inadequately-secured GitHub repository: an employee wasn't using two-factor authentication. 8Tracks found out when there was an unauthorised attempt at a password change, and on investigation it found backups of database tables in the staffer's repo.
The source of the leak was storing backups in source control on a public service, plus inadequate access controls - either allowing devs access to production data, or ops access to source control! How does 2-factor auth address that?
"As Mozillan Ryan Pollock explains, “Firefox now creates up to 4 separate processes for web page content. So, your first 4 tabs each use those 4 processes, and additional tabs run using threads within those processes. Multiple tabs within a process share the browser engine that already exists in memory, instead of each creating their own.”
I'm not saying this approach can't work, but having some experience in this area, it sounds like Mozilla's devs may have created a lot of hassle for themselves in trying to combine processes and threads to achieve their desired outcome. The old problems of one misbehaving tab deadlocking the others (well, presumably only a quarter of them) may still exist, and with the added problem of having to rebalance running tabs to the other 3 processes when one becomes overloaded? I appreciate this approach saves memory, but the one-tab-per-process model devolves so much of this scheduling and resource management overhead to the OS kernel, which is what it's designed to do.
The problem with feature detection and machine learning in general is that it assumes honesty in the learning material. Emergent technologies in an environment where the spectrum for deception is effectively infinite will at best result in an AI-poisoning arms race between the liars and the engineers trying to root them out. Full Fact's solution might make it harder for liars to get their material on the web, but any system that classifies data based on relationships between statements, news organisations, past reputations and so on is completely open to being gamed.
Blockchain could be *part* of the answer, in a cryptographically reliable, extensible chain of evidence of where a fact came from. Having to publish a chain of sources when they source garbage from World Truth Tv or even Wikipedia might make journos a lot more responsible about fact checking in the first place, and out those who mutate the truth for their own ends. In the same vein, reports that come from individuals that fanatically pursue truth on the front lines are going to be that much more credible.
Engineers presumably like to build and test things incrementally in smaller test environments, perhaps using commodity or obsolete hardware no longer in use, to ensure things work acceptably before buying the production-scale hardware (which if supported by MS, should have no problem running them). It seems bizarre that they could do that using something like OpenStack, but not Azure.
The conspiracy theorist in me suspects that might just be because Azure is ever so slightly less solidly built than you'd expect from a cloud platform.
...it's a complete red herring, because no information that Jeremy Corbyn actually provided has been disclosed. His being on a train is a matter of public record, not provided information - he's already 'disclosed' his whereabouts to the media anyway and the released CCTV images do not show anything sensitive beyond that. Virgin has not breached its own policy.
Whether it has breached ICO guidelines and can be sanctioned is another story, but any such sanctions will likely be very light given that CCTV material is often released into the public domain.