Re: The bigger issue is rising Chinese power
"El Regers might know some code, but very little about the world. Do more research, children."
Get you - read one little red book and that's your entire education.
"Get a job with a Company" - This isn't Russia - why should my small company be picked on to the benefit of QinetiQ, PwC, KPMG, Leidos et al?
"Put your rates up" - thanks, hadn't thought of that. The problem with the Putin tax is it is so swingeing you cannot commercially put rates up high enough to compensate.
"Work for more than 2 people per month" - you haven't a clue about IR35, have you? It's on a role-by-role basis. You can work for one day and still be within IR35.
"The issue is, you're effectively an employee" - if I'm an employee, then EVERYONE who contracts to another company is an employee, so EVERYONE should be caught. Defence Prime supplying the MoD? Well, you're acting as pseudo-employees and working under supervision - day-rates get hit by 63% tax. You provide a service contract? Employee by definition - bang, 63% on your service charges. You do design? Then you're acting as a pseudo-employee design team - 63% off your company's charged day-rate. Then take it out of the employees' wages - maybe that will focus a few minds that are coming over as a bit blunt.
In the last 10 years, there have been 16 IR35 court cases that HMRC has instigated. Just 2 were won by HMRC, and 2 were split decisions. In 75% of the cases that HMRC took to court, they were proved outright wrong. Any other body that was wrong 75% of the time would be deemed unfit for purpose.
This doesn't include where they've investigated, shown a contractor is outside of IR35 and not pursued legal action - I know of a fair few of these.
Deliberately and spitefully manipulating the system so Contractees force a punitive tax regime on Contractors shows HMRC is rotten to the core. Virtually all contracts have been declared inside IR35, not because they are, but out of fear of an HMRC investigation, and that's not something fit for a democracy.
Er - you might want to check your maths. In the last 10 years, when there's been better legal understanding of the IR35 legislation, HMRC have chased up just SEVENTEEN businesses, won 2 judgements and had 2 split decisions. Splitting the draws, that gives about 82% in favour of contracts NOT BEING IN IR35.
Remember HMRC will only go to court if they think they've got a case. I know a fair few people who have been investigated for IR35 compliance and found outside it - these figures do not include those, and they are not publicised by HMRC either.
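Crediting each split decision as half a win to each side, the arithmetic works out like this:

```python
# IR35 court outcomes over the last 10 years (figures quoted above)
total_cases = 17
hmrc_wins = 2
split_decisions = 2

# Contractor wins outright, plus half-credit for each split decision
contractor_score = (total_cases - hmrc_wins - split_decisions) + split_decisions / 2
print(f"{contractor_score / total_cases:.0%} in favour of contracts not being in IR35")
```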
"The clients have no one with my skills in the company, so IR35 does not apply."
You plainly do not have a clue. This has never been a determination for IR35. Substitution and MOO (mutuality of obligation) have been the main discriminators.
Unless this nonsense is sorted industry wide you'll be hounded out soon. Maybe you'll last longer or maybe you won't, but the doors have slammed shut and no-one is listening.
If you're in a position today where you can decide on your clients, then woop-de-doo, but make the most of it because this industry changes fast and so will your prospects.
That's a very good point, and soon Contractors within IR35 will be suing Agencies and Contractees left, right and centre. The first time someone is seriously ill and needs substantial time off (or paternity leave, or even bank holidays) there'll be legal action, someone other than contractors is going to get stung, and this thing will either be dropped or a dose of realism put in place.
Well, I'm waiting for the determination from BAE at this very moment. The agent won't push, as he's scared of upsetting them. HMRC have made sure that there is no incentive for Contractees to come up with an accurate determination: if you put everyone inside you're fine, and there is no punishment if you falsely over-tax. And what exactly is the path for Contractors to get a review? There isn't one. This is not how a democracy works. This is not how a fit-for-purpose HMRC would work.
Still, if HMRC go after the PwCs, Andersens, McKinseys, Sercos next (a service contract must surely be disguised employment), plus they tax Amazon, Facebook and Apple at 65% of turnover, then at least all things will be equal.
I've heard so many employees say IR35 is good (taking contractors down a peg) because contractors have overpaid rates. The simple fact is it's a dumb, arbitrary rule to extort cash from small businesses. Go somewhere like NATS, where contractors work identically to, say, Leidos staff (both contracted in to NATS and answering to NATS staff). Do the Revenue bust Leidos' day-rate? Of course they don't. Why not? Explain that little conundrum.
The day the Revenue chase PwC, Fujitsu, Leidos, BT or any other big subby that puts someone on a desk doing what is an employee role, then we can chat.
Contracting is a completely different mind-set and set of skills to being an employee. Remember, contractors are there because your company cannot cope without them. The moment they can, we're off.
About neural networks: what they are, what and how they 'learn', what they do, and what the limits of their performance are. I really thought this was all understood 25 years ago, but a new generation has re-spun nnets as magic and is 're-discovering' basic properties.
And while we're at it 'researchers' should also understand how the front-end features contribute to the overall classifier performance too.
Someone mentioned on el Reg about a week ago a cookie-delete add-on for Firefox, which I then started using. It deletes cookies 15 seconds after you close the tab. That sorts out cross-site advertising.
What is amazing is that some sites can have up to 250 cookies tracking your activities, and the moment GDPR kicks in I shall be having a word as to why a site needs to inform 250 third parties about your activities...
and you're left to figure out the questions, ignore them.
Trying to come up with useful emergent properties of things from random observations is a fool's errand.
This applies to monitoring (including surveillance data), EA models, SysML models, endless cyber-security SIEM 'notifications' etc etc etc...
Anyone that threatens to diminish European harmonisation by a 'European' institution will have their arm twisted by the EU. So yes, they will have something to do with it.
Unification only goes one way and the EU will not have anything that looks like a fracturing of European unity.
You really think that you have an option on this?
Vote the wrong way and your friendly EU will twist your arm until you come up with the 'right' answer. It'll impose punitive measures on your resistance until you toe the line. Ireland has done quite nicely out of the EU, so I can't see your nation rocking the boat anytime soon.
Having looked at this, it's incredibly heavy on human intervention. It has pre-set ideas of what a 'hacker invasion' should comprise (much like SIEM with 'if this and this and this' alarming - brilliant for detecting the last war).
And yes, even with differing test/train data you can still tune the system.
I'd like an explanation of what it is classifying and why it is missing 15%. Is it to do with the PCA? If you do PCA, why do you need a neural network (apart from being sexy)? Why not use a linear combination, where at least you can see which dimension(s) are contributing?
My suspicion is the way they 'fuzz' the features and the classifier means that the performance can't get much better. You always need fine structure to get really good pattern matching, and they get shot of it.
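For what it's worth, the linear alternative is easy to sketch: PCA via SVD, then a least-squares linear classifier whose weights you can actually inspect. This is a toy on synthetic data (numpy only, all names and numbers mine), not the system under discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 200 samples, 10 features; the classes
# differ only in the first two features
X = rng.normal(size=(200, 10))
y = np.repeat([0, 1], 100)
X[y == 1, :2] += 2.0

# PCA via SVD on mean-centred data
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                          # project onto top 3 components

# Least-squares linear classifier on the PCA scores
A = np.hstack([scores, np.ones((len(y), 1))])   # add a bias column
w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)

# Unlike a neural net, the weights tell you which components matter
print("component weights:", np.round(w[:-1], 2))
print("training accuracy:", np.mean((A @ w > 0) == (y == 1)))
```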
I like the 'it's your fault if the update process can't handle things'.
I've seen multiple kernel updates in one week. It's lucky that IT support are sitting around waiting for this to happen, ready to implement system-wide regression tests on the test system before a system-wide roll-out which naturally will be faultless.
Still the proof of the pudding is in how many people will sign up and drag down his feed. Good luck to him for making other people's lives easier.
Steve Jobs wasn't interested in the user experience. He was interested in the perceived value of his products. Good user experience doesn't let you charge £500 for an i-thing. Branding and value perception do that. He was very good at making things look like they had fairy dust on them, but it was all about increasing the perceived value.
Good user experience is a differentiator, but it doesn't account for the value.
I almost bid for some Home Office work. Then, in the small-business T&Cs, they wanted all the foreground IP (OK, you're paying for that) but then all the background IP too (so all of the expertise and specialist knowledge that makes us useful as an SME).
The only reason? We get your knowledge and hand it over 'to someone who is capable of delivering the work'.
We gave up at that point, as being an SME our knowledge is our secret sauce...
IUK isn't about (re)generating geographical areas. It's about generating and exploiting potential ideas, primarily for export. If you can show that the North is prejudiced against, that's a story. More likely it's down to more (and better) applications from down south on the M4 corridor. Not that there aren't good ideas up north, but the stats are against them.
If you're not doing well in getting money (grants or clients), you might be terminally unlucky. But it's more likely that other people are better than you. So improve. IUK aren't fools when it comes to technology or business cases.
Good point - there's more than just TCP/UDP in the IP world. How about XTP, which is free and open? I coded one into a BSD stack in 1999, oddly enough to optimise comms for a telecoms company, as TCP didn't cut it for high throughput with high/low latency, etc. And then I did it again in a Linux stack for a SatCom operator.
Funny thing is TCP has lasted incredibly well, all things considered.
Once a company kicks you out, you have no ongoing responsibility to function or plan for them. If a company kicks you out then it's up to them to replace your skills or accept they don't need them.
And presumably he had better phone calls to make to ensure he could 'continue to eat'. I suspect they wouldn't pay him for any information he had.
Not true. It depends on the ratio of the intra- to inter-variance of the parameter being measured. Fingerprints are unique, but the parameters that measure them are finite. If you can measure a fingerprint accurately, it can be better than a password of some arbitrary complexity. If your fingerprint parameters can only differentiate 10 different prints then you have a problem.
The issue with speech is the parameters have a lot of intra-variance, such that when you speak you map onto other people's patterns. It's not necessarily that your voice matches; it's the parameters that represent the voice that overlap.
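The intra- vs inter-variance point can be illustrated with a toy Fisher-style ratio on synthetic data (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def separability(intra_sd, inter_sd, people=10, samples=1000):
    """Ratio of between-person to within-person variance for one
    synthetic biometric parameter: higher means easier to tell apart."""
    means = rng.normal(0.0, inter_sd, size=people)
    obs = means[:, None] + rng.normal(0.0, intra_sd, size=(people, samples))
    within = obs.var(axis=1).mean()      # intra-variance (your own spread)
    between = obs.mean(axis=1).var()     # inter-variance (spread across people)
    return between / within

# Fingerprint-like: stable measurements, well-separated people
print("fingerprint-like ratio:", round(separability(0.1, 1.0), 1))
# Voice-like: parameters so noisy you map onto other people's patterns
print("voice-like ratio:      ", round(separability(1.0, 1.0), 1))
```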
This is a very good way to justify a huge defence budget without getting shot at.
Sir - I salute you!
"Expertise in scalable algorithm development on heterogeneous parallel computing, neuro-synaptic, quantum annealing, and distributed"
When I put words like this in a funding bid people laugh!
I like the idea of commodity hardware underpinning your infrastructure (both server and network) but mixing the network chain up with the functionality looks really complex to get right and maintain in anything other than a toy demonstration.
It seems a bit: "NFV rediscovers why functional programming is difficult to maintain".
Nope, it's not absurd - have you heard of assembly? Why do you think we have compiled high-level languages in the first place?
Rather than retype common sense, check out: http://stackoverflow.com/questions/2684364/why-arent-programs-written-in-assembly-more-often
Perl can be a dog to maintain, and that's due to its features.
The practicality of code is that it's hardly ever documented, and what documentation exists is pretty much wrong (usually APIs are well documented but the code behind them isn't).
And yes, you can write poor code in any language - but pound for pound, I'd still rather not have to deal with even well-written JS.
No-one said it would create more pollution. It displaces pollution. So the cities (generally) get cleaner but the areas around power generation facilities get more polluted. For the UK this could increase pollution on the continent.
The whole-life cost (such as upgrading the power grid, new power generation facilities, lithium cell construction and renovation, rare earth metal extraction and processing) is a lot more complex than 'Ooo, electric cars that run on magic!'.
I do like your comment 'Crack the fusion power station problem or get reliability in the delivery of renewables'. Renewables will never be reliable and if all we need to do is sort fusion then we're nearly there!!!
Anyway, back to reality.
PS For all the posters going Solar! Solar! Yeah, that works during the _day_ - most people charge their cars at _night_.
Yes, there is displacement of pollution. Electric cars aren't 'free'. Looking at the Tesla page (I presume that's 'spouting the same old rubbish', eh Piro?), they recommend a 22kW charging link. That's roughly 96 Amps at the UK nominal 230V. Most domestic houses are rated at 100A input, so charging your Tesla stresses your supply quite nicely. Or pay to get a multi-phase supply in, if you can get it (not easy for a lot of people).
Oh did you want a second electric car? Or a third one? Then you'll have to get up in the middle of the night to swap the plugs over.
So whilst it would potentially reduce pollution by using up surplus night-time electric power (but not from those solar panels, folks!), we wouldn't be able to support a mass changeover to electric.
As for Norway - there ain't that many people there: population 5 million, density 15.5 per sq km.
Have you heard of somewhere called London? 8.5m, density 5,300 per sq km. There's another 56 million of us here too. You'll need grown-up power for this, so yes, there is an issue with pollution displacement and the need for beefed-up infrastructure.
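The back-of-envelope numbers, assuming the UK nominal 230 V single-phase supply (the 22 kW figure is from Tesla's page as quoted above):

```python
# Current drawn by a 22 kW charging link on a single-phase supply,
# against a typical 100 A domestic main fuse
charger_w = 22_000.0
mains_v = 230.0      # UK nominal single-phase voltage
fuse_a = 100.0

current_a = charger_w / mains_v
print(f"charger draws ~{current_a:.0f} A")                        # ~96 A
print(f"left for the rest of the house: ~{fuse_a - current_a:.0f} A")
```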
TCP is built on a bunch of assumptions that are not always relevant, but for the most part are. It assumes that packet drops are due to congestion, so it throttles back to avoid pouring petrol on a congestion fire. If you don't have a throttling mechanism you can go faster. But you can also trash the network. So you need a throttling mechanism. And it probably wouldn't end up much different.
If you want, you could use XTP and set up your own decide-when-retransmission-is-necessary logic from your own link statistics. Then you can decide if you include FEC on the data and undertake selective retransmission.
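TCP's throttling is roughly additive-increase/multiplicative-decrease. A toy sketch of the idea (not a real stack; the simplified slow start and parameters are mine):

```python
def toy_tcp_window(events, cwnd=1.0, ssthresh=64.0):
    """Toy congestion window: double per acked round in slow start,
    add one past ssthresh, and halve on a drop - which TCP assumes
    signals congestion, hence the back-off."""
    trace = []
    for event in events:
        if event == "ack":
            cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
        elif event == "drop":
            ssthresh = max(cwnd / 2, 1.0)
            cwnd = ssthresh
        trace.append(cwnd)
    return trace

print(toy_tcp_window(["ack"] * 4 + ["drop"] + ["ack"] * 3))
# -> [2.0, 4.0, 8.0, 16.0, 8.0, 9.0, 10.0, 11.0]
```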
FEC is a waste of time, bandwidth and cost if you have a great SNR.
FEC can be a waste of time if the SNR is higher than expected, and it doesn't fix things if the SNR is lower than expected.
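The bandwidth cost is just the code rate. A trivial illustration (figures invented):

```python
# Always-on FEC spends a fixed fraction of the channel on parity,
# whether the link needs it or not: goodput = link rate * k/n
def goodput_mbps(link_mbps, k, n):
    return link_mbps * k / n

print(goodput_mbps(100, 1, 2))    # rate-1/2 code: half the bandwidth gone
print(goodput_mbps(100, 9, 10))   # rate-9/10: still a 10% tax on a clean link
```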
There's a big difference between 'clever' AI simulating neural processes and shallow searches over huge amounts of data (like Google's and Facebook's). The latter are more clever engineering than fundamental research.
As for Microsoft, if they want anything really clever they buy existing academic resource in lumps (like the Cambridge speech group for their speech recognition). It's easier to buy clever than to accept that research, by definition, may not be successful.
So no, it's not hard to accept that Google, Facebook and MS didn't get invited.
Suppose you take a photo album (dataset), pull out one picture of each person you know and write their name on it (training data). You show these pictures to a stranger and then get them to match against all of the people in the rest of the photos in your album (unseen test data).
You know who they are, so you have the right category for the entire album (dataset).
The stranger has a go and gets it right 20% of the time. You can only know they get it right 20% of the time if you know the right identity of all of the other data in the first place.
In your case it would be less as inbreeding would cause a high degree of similarity in family photos.
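In code form, the point is that scoring the stranger requires you to already hold the right label for every photo (names invented):

```python
def accuracy(guesses, ground_truth):
    """You can only score the stranger if you already know the right
    name for every photo in the album (the full labelled dataset)."""
    return sum(g == t for g, t in zip(guesses, ground_truth)) / len(ground_truth)

album = ["alice", "bob", "carol", "dave", "eve"]     # labels you hold
stranger = ["alice", "eve", "carol", "bob", "bob"]   # the stranger's matches
print(accuracy(stranger, album))   # -> 0.4
```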
So you have a neural network, which is a generalised classifier. Yes, you can add more layers, but once it's generalising that's pointless, unless you tie each layer to mean something specific. The point about neural networks is you don't know how any of the simulated neurons contributes to classifying any of your feature space. Adding more nodes gives you more degrees of freedom, but you end up with the curse of dimensionality.
This bit about asynchronous training providing noise, thus enhancing recognition: eh? Are you sure? If you add noise to training data you blur it to make it look like other classes, damaging your classifier. It also means your classification accuracy is dependent on today's hardware architecture, which is a poor idea.
As for jumping out of local minima: there have been lots of approaches to this over the last 25 years - attaching inertia to your gradient descent so you shoot past local minima, conjugate gradient descent, etc. Ironically, if this was working as suggested it would mean the neural network was matching the training data more specifically, reducing performance on unseen test data - contrary to the 'adding noise to the training data makes it better' theory.
I wonder when this paper will be published...
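The inertia trick mentioned above is easy to demonstrate on a 1-D loss with a local and a global minimum (the toy function and parameters are mine):

```python
def grad(x):
    # d/dx of f(x) = (x^2 - 1)^2 + 0.3x, which has a shallow local
    # minimum near x = +0.96 and a deeper global one near x = -1.03
    return 4 * x * (x * x - 1) + 0.3

def descend(momentum, x=2.0, lr=0.01, steps=500):
    v = 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(x)   # inertia carries past small bumps
        x += v
    return x

print("plain gradient descent:", round(descend(momentum=0.0), 2))  # stuck in the local minimum
print("with inertia (0.9):    ", round(descend(momentum=0.9), 2))  # shoots past into the global one
```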
"This drastically reduces the number of moving parts, and insulates Docker from the side-effects introduced across versions and distributions of LXC. In fact, libcontainer delivered such a boost to stability that we decided to make it the default."
So Docker is the amazing thing, but libcontainer is the amazing thing; Docker does good stuff because of libcontainer. And libcontainer is included to reduce something else and make it better. Hang on. Docker wasn't stable and had side-effects, so they wrote some code to sort this out, which was Docker, but is now spun out of Docker, but is still in Docker. But is called libcontainer.
And sticking it out there will make everything work together. It was there because of the changes the lxc team kept making, so we'll let them change libcontainer. And that will work.
I wish I could write press releases.
Nope, I'm afraid you've got the sensitivity analysis bit wrong. The feature space is 19 dimensional. The sensitivity of one parameter (say IT load) depends on the values of the other 18 variables. You cannot just pick one slice through a multi-dimensional space and infer that moving 1 variable (IT load) always gives the same PUE response.
Graphs 4b and c are total nonsense, as the inputs are integers - in reality there are only 2 data points, 0 and 1 - yet the paper talks about a non-linear relationship as if you could have 0.79 of a cooling tower. That's nonsensical. Also, while we're at it: depending where you are on an exponential curve, it can look pretty straight, and where you are on the curve depends on the other 18 variables. So changing those can make the 'curve' look straight. That's why it's fundamentally wrong to extrapolate the response from just one set of values for the other 18 variables.
The cross-validation is wrong too. The data was sampled at 5-minute intervals and 30% was used as 'unseen' test data. But the dataset was shuffled chronologically. Looking at the variables, it's highly likely that data sampled every 5 minutes will be highly correlated. Removing (on average) every third data point means the test data is very highly correlated with the training data and cannot be said to be independent or unseen. That's why the prediction rate is so high: relatively speaking there are a LOT of nodes in that network, and it is basically overtrained to pieces, with test data pretty much the same as the training data.
The usual way to demonstrate this is to show the test and train data performance over the set of training epochs. The training performance generally gets better and better, whilst at some point the test data performance will get worse as the nnet becomes over-specified to the training data. We haven't seen this.
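The effect is easy to reproduce with synthetic data: shuffle a slowly-varying time series before splitting and a memorising model looks great; split chronologically and it doesn't. A sketch (numpy only, data invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# 2000 synthetic '5-minute samples': a slow random walk, so readings
# a few minutes apart are nearly identical - like the plant data
n = 2000
y = np.cumsum(rng.normal(scale=0.1, size=n))

def nearest_in_time_error(train_idx, test_idx):
    """Predict each test sample from the nearest training sample in
    time - a stand-in for a big model that memorises its training set."""
    train_idx = np.sort(train_idx)
    pos = np.clip(np.searchsorted(train_idx, test_idx), 1, len(train_idx) - 1)
    left, right = train_idx[pos - 1], train_idx[pos]
    nearest = np.where(test_idx - left <= right - test_idx, left, right)
    return float(np.mean((y[nearest] - y[test_idx]) ** 2))

idx = rng.permutation(n)
shuffled = nearest_in_time_error(idx[:1400], idx[1400:])   # 70/30 shuffled split
chrono = nearest_in_time_error(np.arange(1400), np.arange(1400, n))
print(f"shuffled split error:      {shuffled:.4f}")   # flatteringly tiny
print(f"chronological split error: {chrono:.4f}")     # the honest number
```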
The bit you've missed is that to evaluate a neural network (or any form of classifier) you need _representative_ data, not complete data. As I said, once you step outside the data range, yes, the nnet is 'extrapolating' and providing a numerical answer - but it's just guessing. If the weather is different to the data gathered over 2 years (so very hot or very cold) that system is just guessing. And anyone can do that.
The real point of nnets is providing a tool capable of modelling non-linear relationships where you don't have to have a preconceived model of the relationships. If the relationships are linear then an nnet won't outperform a linear classifier. Because of the response function in the neuron you always get a smooth transition through the feature space that looks convincing, but that's just an artifact of the maths, not the data.
To be honest, the more I look at this, the worse it gets. As I said, it isn't very good or convincing and has some really basic mistakes in it. It's basically 'nnet models training data very well' shock.
Fail: the validation in the white paper is wrong. You cannot validate a single input in a multi-input, non-linear neural net by holding the other 18 constant - that's meaningless, as if you change the value of one other input the line may go down, or wobble around, or do something entirely different. That's the point of nnets: to provide an arbitrary non-linear model of n-dimensional space.
Also, when you use a model (nnet or other) you must bound when it knows what it's talking about (because it's based on data it's already seen) and when it's just guessing. If the weather is ever hotter or colder than the data obtained over the 2 years (quite likely) the nnet will basically make a random guess at the PUE. The output is based on the internal weight space of the neural network, which has been jogged into position from a random state. Nnets don't magically extrapolate answers, and they won't magically interpolate stuff they haven't seen either. Similarly, if it hasn't seen all of the combinations of cooling towers running, it'll just guess. The PUE guess will be a smoothed extrapolation, but in reality the PUE might change quite drastically.
All in all, this isn't very good and has some quite basic errors in it.
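The guessing-outside-the-data point can be shown with any smooth fitted model; here a cubic polynomial stands in for the nnet's smooth response surface (toy ground truth and all numbers are mine):

```python
import numpy as np

rng = np.random.default_rng(3)

def true_pue(t):
    # Toy ground truth: linear in the observed range, but plant
    # behaviour changes drastically above 30C (say, a chiller limit)
    return 1.1 + 0.01 * t + 1.0 * (np.asarray(t) > 30)

# 'Two years of data': outside temperature only ever seen from 5 to 25C
temp = rng.uniform(5, 25, size=200)
model = np.poly1d(np.polyfit(temp, true_pue(temp), 3))

# Inside the data range the smooth fit is fine; outside it, the model
# still returns a smooth, convincing-looking number - a guess
print(f"15C: model {model(15.0):.2f}, truth {float(true_pue(15.0)):.2f}")
print(f"40C: model {model(40.0):.2f}, truth {float(true_pue(40.0)):.2f}")
```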