Re: Yes but
9 down and 1 up?
You lot need to research what Innovate UK is actually about.
IUK isn't about (re)generating geographical areas. It's about generating and exploiting potential ideas, primarily for exports. If you can show that the North is being prejudiced against, that's a story. Most likely it's down to more (and better) applications from down south on the M4 corridor. Not that there aren't good ideas up north, but the stats are against them.
If you're not doing well in getting money (grants or clients), you might be terminally unlucky. But it's more likely other people are better than you. So improve. IUK aren't fools when it comes to technology or business cases.
Good point, there's more than just TCP/UDP in the IP world. How about XTP, which is free and open? I coded one in a BSD stack in 1999, oddly enough to optimise comms for a telecoms company, as TCP didn't cut it for high throughput and high/low latency etc etc. And then I did it again in a Linux stack for a SatCom operator.
Funny thing is TCP has lasted incredibly well, all things considered.
Nope.
Once a company kicks you out, you have no ongoing responsibility to function or plan for them. If a company kicks you out then it's up to them to replace your skills or accept they don't need them.
And presumably he had better phone calls to make to ensure he could 'continue to eat'. I suspect they wouldn't pay him for any information he had.
Not true. It depends on the ratio of the intra- to inter-variance of the parameter being measured. Fingerprints are unique, but the parameters that measure them are finite. If you can measure a fingerprint accurately, it can be better than a password of some arbitrary complexity. If your fingerprint parameters can only differentiate 10 different prints then you have a problem.
The issue with speech is that the parameters have a lot of intra-variance, such that when you speak you map onto other people's patterns. It's not necessarily that your voice matches; it's that the parameters representing the voice overlap.
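The intra/inter-variance point can be sketched in a few lines. Everything below is illustrative (made-up speakers and measurements), just to show the ratio that decides whether a parameter discriminates:

```python
# Hypothetical sketch: a biometric parameter only discriminates when the
# between-person (inter) variance dwarfs the within-person (intra) variance.
import statistics

# Made-up measurements of one voice parameter, three sessions per speaker.
samples = {
    "alice": [4.9, 5.1, 5.0],
    "bob":   [5.2, 4.8, 5.1],   # overlaps alice -> poor discriminator
    "carol": [9.0, 9.2, 8.9],
}

def variance_ratio(samples):
    # Mean within-speaker variance (intra).
    intra = statistics.mean(statistics.pvariance(v) for v in samples.values())
    # Variance of the per-speaker means (inter).
    means = [statistics.mean(v) for v in samples.values()]
    inter = statistics.pvariance(means)
    return inter / intra

print(f"inter/intra ratio: {variance_ratio(samples):.1f}")
```

A ratio near (or below) 1 means speakers' parameter distributions overlap and the parameter is useless for telling them apart, however unique the underlying voice is.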
This is a very good way to justify a huge defence budget without getting shot at.
Sir - I salute you!
Best quote:
"Expertise in scalable algorithm development on heterogeneous parallel computing, neuro-synaptic, quantum annealing, and distributed"
When I put words like this in a funding bid people laugh!
I like the idea of commodity hardware underpinning your infrastructure (both server and network) but mixing the network chain up with the functionality looks really complex to get right and maintain in anything other than a toy demonstration.
It seems a bit: "NFV rediscovers why functional programming is difficult to maintain".
Nope, it's not absurd - have you heard of assembly? Why do you think we have compiled high-level languages in the first place?
Rather than retype common sense, check out: http://stackoverflow.com/questions/2684364/why-arent-programs-written-in-assembly-more-often
Perl can be a dog to maintain, and that's due to its features.
The practicality of code is that it's hardly ever documented, and what documentation exists is pretty much wrong (usually APIs are well documented but the code behind them isn't).
And yes, you can write poor code in any language - but pound for pound, I'd still rather not have to deal with even well-written JS.
No-one said it would create more pollution. It displaces pollution. So the cities (generally) get cleaner but the areas around power generation facilities get more polluted. For the UK this could increase pollution on the continent.
The whole-life cost (such as upgrading the power grid, new power generation facilities, lithium cell construction and renovation, rare earth metal extraction and processing) is a lot more complex than 'Ooo, electric cars that run on magic!'.
I do like your comment 'Crack the fusion power station problem or get reliability in the delivery of renewables'. Renewables will never be reliable and if all we need to do is sort fusion then we're nearly there!!!
Anyway, back to reality.
PS For all the posters going Solar! Solar! Yeah, that works during the _day_ - most people charge their cars at _night_.
Yes, there is displacement of pollution. Electric cars aren't 'free'. Looking at the Tesla page (I presume that's 'spouting the same old rubbish', eh Piro?), they recommend a 22kW charging link. That's pretty much 88 Amps on a single phase. Most domestic houses are rated at 100A input, so charging your Tesla stresses your input supply quite nicely. Or pay to get a multi-phase supply in, if you can get it (not easy for a lot of people).
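The amps figure is easy to sanity-check with I = P/V (a minimal sketch; the voltages are nominal, and treating the 22kW link as single-phase is purely for illustration - in practice a 22kW charger is usually three-phase):

```python
# Back-of-envelope: current drawn by a single-phase charging link, I = P / V.
def charging_current(power_w, voltage_v):
    return power_w / voltage_v

print(f"{charging_current(22_000, 250):.0f} A")  # 88 A at 250 V
print(f"{charging_current(22_000, 230):.0f} A")  # 96 A at the UK's 230 V nominal
```

Either way, one charger eats most of a 100A domestic supply.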
Oh, did you want a second electric car? Or a third one? Then you'll have to get up in the middle of the night to swap the plugs over.
So whilst it would potentially reduce pollution by using up surplus night-time electric power (but not from those solar panels, folks!), we wouldn't be able to support a mass changeover to electric.
As for Norway - there ain't that many people there: population 5 million, density 15.5 per sq km.
Have you heard of somewhere called London? 8.5m, density 5,300 per sq km. There's another 56 million of us here too. You'll need grown-up power for this, so yes, there is an issue with pollution displacement and the need for beefed-up infrastructure.
TCP is built on a bunch of assumptions that are not always relevant, but for the most part are. It assumes that packet drops are due to congestion, so it throttles back to avoid pouring petrol on a congestion fire. If you don't have a throttling mechanism you can go faster. But you can also trash the network. So you need a throttling mechanism. And it probably wouldn't end up much different.
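TCP's throttling is additive-increase/multiplicative-decrease: creep up while ACKs flow, halve on any loss. A toy sketch of just that idea (the event model and starting window are invented for illustration, and slow start and timeouts are ignored):

```python
# Idealised AIMD congestion-window behaviour: additive increase on an ack,
# multiplicative decrease on a loss (any drop is assumed to mean congestion).
def aimd(events, cwnd=10.0):
    history = []
    for ev in events:
        if ev == "ack":
            cwnd += 1.0                   # additive increase: probe for bandwidth
        else:                             # "loss": back off hard
            cwnd = max(1.0, cwnd / 2.0)   # multiplicative decrease
        history.append(cwnd)
    return history

print(aimd(["ack", "ack", "loss", "ack"]))  # [11.0, 12.0, 6.0, 7.0]
```

The halving is why a lossy-but-uncongested link (e.g. a noisy radio hop) makes TCP throttle back when it needn't - the assumption that loss equals congestion doesn't hold there.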
If you want, you could use XTP and set up your own decide-when-retransmission-is-necessary scheme from your own link statistics. Then you can decide whether to include FEC on the data and undertake selective retransmission.
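The selective-retransmission half of that can be sketched as gap-tracking: the receiver records which sequence numbers arrived and asks only for the missing spans. This is just the idea, not XTP's actual wire format:

```python
# Receiver-side sketch: compute which inclusive (start, end) spans of
# sequence numbers are missing, so only those get retransmitted.
def missing_spans(received, highest):
    got = set(received)
    spans, start = [], None
    for seq in range(highest + 1):
        if seq not in got and start is None:
            start = seq                    # a gap opens
        elif seq in got and start is not None:
            spans.append((start, seq - 1)) # the gap closes
            start = None
    if start is not None:
        spans.append((start, highest))     # gap runs to the end
    return spans

print(missing_spans([0, 1, 4, 5, 8], highest=8))  # [(2, 3), (6, 7)]
```

Feed your own link statistics into the decision of when to send that gap report, and you've replaced TCP's loss-means-congestion guesswork with something tuned to the actual channel.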
FEC is a waste of time, bandwidth and cost if you have a great SNR.
FEC can be a waste of time if the SNR turns out higher than expected, and it doesn't fix things if the SNR turns out lower than expected.
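To make the trade-off concrete, here's the simplest possible FEC - one XOR parity packet per group, which can rebuild any single lost packet without a retransmission round trip. The packets are made-up bytes for illustration:

```python
# Single-parity FEC sketch: XOR all packets in a group column-wise.
# Any ONE lost packet can be rebuilt from the survivors plus the parity.
from functools import reduce

def xor_parity(packets):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*packets))

group = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_parity(group)

# Packet 1 is lost in transit; recover it from the survivors + parity.
recovered = xor_parity([group[0], group[2], parity])
assert recovered == group[1]
```

The parity packet is pure overhead when nothing is lost (the great-SNR case), and it's no help at all if two packets from the same group are lost (the worse-than-expected case) - which is the point above in miniature.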
There's a big difference between 'clever' AI simulating neural processes and shallow searches on huge amounts of data (like Google and Facebook do). They're more clever engineering than fundamental research.
As for Microsoft, if they want anything really clever they buy existing academic resource in lumps (like the Cambridge speech group for their speech recognition). It's easier to buy clever than to accept that research, by definition, may not be successful.
So no, it's not hard to accept that Google, Facebook and MS didn't get invited.
Take a photo album (dataset), pull out one picture of each person you know and write their name on it (training data). Show these pictures to a stranger, then get them to match the people in the rest of the photos in your album (unseen test data).
You know who they are, so you have the right category for the entire album (dataset).
The stranger has a go and gets it right 20% of the time. You can only know they get it right 20% of the time if you know the right identity of all of the other data in the first place.
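The photo-album analogy in code - the only reason you can score the stranger is that you hold the true label for every photo (names and photo IDs below are made up):

```python
# Accuracy needs ground truth for the whole test set: without a known
# label for every photo, the 20% figure is unknowable.
def accuracy(predictions, truth):
    assert set(predictions) == set(truth), "need a truth label for every photo"
    hits = sum(predictions[photo] == truth[photo] for photo in truth)
    return hits / len(truth)

truth       = {"p1": "uncle_bob", "p2": "aunt_flo", "p3": "uncle_bob",
               "p4": "cousin_al", "p5": "aunt_flo"}
predictions = {"p1": "uncle_bob", "p2": "cousin_al", "p3": "aunt_flo",
               "p4": "aunt_flo",  "p5": "cousin_al"}

print(accuracy(predictions, truth))  # 0.2 -- one right out of five
```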
In your case it would be less as inbreeding would cause a high degree of similarity in family photos.
RIMMERWORLD!!!
So you have a neural network, which is a generalised classifier. Yes, you can add more layers, but once it's generalising that's pointless, unless you tie each layer to mean something specific. The point about neural networks is you don't know how any of the simulated neurons contributes to classifying any of your feature space. Adding more nodes gives you more degrees of freedom, but you end up with the curse of dimensionality.
This bit about asynchronous training providing noise, thus enhancing recognition. Eh? Are you sure? If you add noise to training data you blur it to make it look like other classes, damaging your classifier. It also means your classification accuracy is dependent on today's hardware architecture, a poor idea.
The bit about jumping out of local minima - well, there have been lots of approaches to this over the last 25 years: attaching inertia to your gradient descent so you shoot past local minima, conjugate gradient descent, etc. Ironically, if this was working as suggested it would mean the neural network was matching the training data more specifically, reducing performance on unseen test data - contrary to the 'adding noise to training data makes it better' theory.
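The "attaching inertia" idea is heavy-ball momentum. A toy sketch, with an invented piecewise gradient: a shallow basin at x=0 behind a barrier at x=1, and a deeper basin at x=5. Plain descent parks in the shallow basin; the velocity term carries the search over the barrier:

```python
# Heavy-ball momentum sketch: velocity accumulates past gradients,
# letting the search coast through a shallow local minimum.
def descend(grad, x, lr=0.1, momentum=0.0, steps=100):
    v = 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(x)   # inertia plus the fresh gradient
        x = x + v
    return x

# Invented gradient with two minima: a shallow one at x=0, a deeper one at x=5.
grad = lambda x: x if x < 1 else x - 5

print(descend(grad, x=-2.0))               # plain descent: stuck near 0
print(descend(grad, x=-2.0, momentum=0.9)) # with inertia: coasts on to ~5
```

All the constants (learning rate, momentum of 0.9, the starting point) are picked for the demonstration, not taken from the paper under discussion.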
I wonder when this paper will be published...
"This drastically reduces the number of moving parts, and insulates Docker from the side-effects introduced across versions and distributions of LXC. In fact, libcontainer delivered such a boost to stability that we decided to make it the default."
So Docker is the amazing thing, but libcontainer is the amazing thing; Docker does good stuff because of libcontainer. And libcontainer is included to reduce something else and make it better. Hang on. Docker wasn't stable and had side-effects, so they wrote some code to sort this out, which was Docker, but is now spun out of Docker, but is still in Docker. But is called libcontainer.
And sticking it out there will make everything work together. It was there because of the changes the LXC team kept making, so we'll let them change libcontainer instead. And that will work.
I wish I could write press releases.
Nope, I'm afraid you've got the sensitivity-analysis bit wrong. The feature space is 19-dimensional. The sensitivity of one parameter (say IT load) depends on the values of the other 18 variables. You cannot just pick one slice through a multi-dimensional space and infer that moving one variable (IT load) always gives the same PUE response.
Graphs 4b and c are total nonsense, as the inputs are integers - in reality there are only two data points, 0 and 1 - yet the paper talks about a non-linear relationship if you have 0.79 of a cooling tower. That's nonsensical. Also, while we're at it, depending where you are on an exponential curve, it can look pretty straight. Where you are on the curve depends on the other 18 variables, so changing those can make the 'curve' look straight. That's why it's fundamentally wrong to extrapolate the response from just one set of values for the other 18 variables.
The cross-validation is wrong too. The data was sampled at 5-minute intervals and 30% was used as 'unseen' test data. But the dataset was shuffled chronologically. Looking at the variables, it's highly likely that data sampled every 5 minutes will be highly correlated. Removing (on average) every third data point means the test data is very highly correlated with the training data and cannot be said to be independent or unseen. That's why the prediction rate is so high: relatively speaking there are a LOT of nodes in that network, and it is basically overtrained to pieces, with test data pretty much the same as the training data.
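The leakage is easy to demonstrate. Using integer indices as stand-ins for time-ordered 5-minute readings (everything here is illustrative, not the paper's data), count how many 'unseen' test points sit right next to a training point:

```python
# Why shuffling correlated time-series data before a train/test split leaks:
# under a random split almost every test point has a trainable neighbour.
import random

random.seed(0)
samples = list(range(1000))            # stand-ins for time-ordered readings

# Shuffled split: 30% held out at random.
shuffled = random.sample(samples, len(samples))
test_shuffled = set(shuffled[:300])

# Chronological split: hold out the final, genuinely unseen block.
test_chrono = set(samples[700:])

def neighbour_leak(test_set):
    """Fraction of test points whose immediate neighbour was in training."""
    train = set(samples) - test_set
    leaky = sum((t - 1 in train) or (t + 1 in train) for t in test_set)
    return leaky / len(test_set)

print(f"shuffled split leakage: {neighbour_leak(test_shuffled):.0%}")  # ~90%
print(f"chronological leakage:  {neighbour_leak(test_chrono):.0%}")    # ~0%
```

When adjacent readings are near-duplicates, the shuffled split is effectively testing on the training set - hence the suspiciously high prediction rate.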
The usual way to demonstrate this is to show the test and training performance over the set of training epochs. The training performance generally gets better and better, whilst at some point the test performance will get worse as the nnet becomes over-specified to the training data. We haven't seen this.
The bit you've missed is that to evaluate a neural network (or any form of classifier) you need _representative_ data, not complete data. As I said, once you step out of the data range, yes, the nnet is 'extrapolating' and providing a numerical answer - but it's just guessing. If the weather is different to the data gathered over two years (so very hot or very cold), that system is just guessing. And anyone can do that.
The real point of nnets is providing a tool capable of modelling non-linear relationships where you don't have to have a preconceived model of the relationships. If the relationships are linear then an nnet won't outperform a linear classifier. Because of the response function in the neuron you always get a smooth transition through the feature space that looks convincing, but that's just an artifact of the maths, not the data.
To be honest, the more I look at this, the worse it gets. As I said, it isn't very good or convincing and has some really basic mistakes in it. It's basically 'nnet models training data very well' shock.
Fail: the validation in the white paper is wrong. You cannot validate a single input in a multi-input non-linear neural net by holding the other 18 constant - that's meaningless, because if you change the value of one other input the line may go down, wobble around or do something entirely different. That's the point of nnets: to provide an arbitrary non-linear model of n-dimensional space.
Also, when you use a model (nnet or other) you must bound when it knows what it's talking about (because it's based on data it's already seen) and when it's just guessing. If the weather is ever hotter or cooler than the data obtained over the two years (quite likely), the nnet will basically make a random guess at the PUE. The output is based on the internal weight space of the neural network, which has been jogged into position from a random state. Nnets don't magically extrapolate answers, and they won't magically interpolate stuff they haven't seen either. Similarly, if it hasn't seen all of the combinations of cooling towers running, it'll just guess. The PUE guess will be a smoothed extrapolation, but in reality the PUE might take a pretty drastic change.
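One cheap way to 'bound when it knows what it's talking about' is to record the per-feature training ranges and flag any query outside them as extrapolation. The feature names and figures below are illustrative, not from the paper:

```python
# Record the min/max of each training feature; anything outside that box
# is extrapolation and the model's output should be treated as a guess.
def fit_bounds(training_rows):
    cols = list(zip(*training_rows))
    return [(min(c), max(c)) for c in cols]

def in_domain(row, bounds):
    return all(lo <= x <= hi for x, (lo, hi) in zip(row, bounds))

train = [(18.0, 0.55), (24.0, 0.60), (31.0, 0.72)]  # (outside temp C, IT load)
bounds = fit_bounds(train)

print(in_domain((25.0, 0.58), bounds))  # True  -- within the seen ranges
print(in_domain((38.0, 0.58), bounds))  # False -- hotter than any training day
```

A per-feature box is the crudest possible domain check (it ignores combinations of features never seen together), but even that is more than the paper offers.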
All in all, this isn't very good and has some quite basic errors in it.
There's no evidence that UC is a good idea. It hasn't been worked through. IDS won't let it be worked through. It's a fluffy-bunny idea that seems good on the outside but hasn't been instantiated into a solution.
You should check out the pilot schemes for UC. Absolutely laughable.
The team building it, though, don't need a plan, as they use 'Agile' techniques. Apparently the great thing about this is you don't need any requirements! No! What you do is keep building, changing and building until the users (whoever they may be) think it's fine. Then stop!
They'd 'written off' £250 million at the last NAO report. 'Written off' is mandarin-speak for lost, squandered or wasted. Nobody got told off for this.
UC has no clear strategy, plan or implementation. The solution to UC has no strategy, plan or implementation either. How can it?
This is a cost-plus gravy train for all concerned. Whilst at the same time it isn't improving the benefits system or reducing cost.
"In 1991 another former head of the French secret services revealed that microphones had been placed under first-class seats on Air France ..."
http://www.theaustralian.com.au/news/world/sorry-angela-but-berlin-does-it-too/story-fnb64oi6-1226747832259#mm-premium
You might want to hold fire on the French cloud thing...
Cloud - that bag-o-shite snake oil....
I was chatting with the largest UK Gov cloud provider and asked, 'Look, what do you mean by cloud?' The answer: 'Well, it's more a concept than anything concrete...'
No - read the article.
"Unified memory addressing across all processor types, for example, is a key feature of HSA. "It's fundamental that we can allocate memory on one processor," Rogers said, "pass a pointer to another processor, and execute on that data – we move the compute rather than the data.""
So a comment that you're slagging off had a perfectly reasonable rider on it, which was omitted.
CUDA works in a not-too-faraway way, utilising shared memory.
Have you ever worked on a multi-processor system? Unless you bind your process/program/app (whatever it's called this week) it will happily run wherever. If you're running virtual memory, which is pretty likely, a pointer will not equate to a physical address, and whatever processor ends up being scheduled in the next time-slice will use the same pages, pointers and data - moving the compute, not the data. Yes, internally the CPU will need to recache and sort out its pipelines, but that's internal to the compute unit.
There's more to a brand-new advanced idea than von Neumann.
The post was about using TCP - I was pointing out that it won't work. A one-liner of 'you can do this' means nothing. With regard to the 'fixes' in RFC 4987 - no, they're not being ignored:
Filtering - if you can filter, you can filter UDP
Increasing Backlog - RFC 4987 states this 'has serious negative aspects' and no-one's got it to work.
Reducing SYN-RECEIVED Timer - best you look up how long this timer stays open and how much you would cut it down to, then consider how much memory you have and how much a TCB costs.
Recycling the Oldest Half-Open TCB - this doesn't stop you being swamped, it stops you running out of memory. You can still be flooded with TCBs from a DDoS and stop it working. Also, a DDoS attack will knock out a valid connection while it's in the half-open state, so it's still broken.
SYN Cache - a decent DDoS will break this.
SYN cookies - this will increase the time before you're flooded - you still store a TCB, just smaller. And it's complicated.
Hybrid Approaches - this isn't an approach, it's a mixture of the others.
Firewalls and Proxies - use these to filter UDP and it's fixed as well.
See the above? That's called research.
So no, it won't work. You have a separate set of just-as-complex problems.
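For the curious, the core of the SYN-cookie trick can be sketched in a few lines: encode the connection identity into the sequence number you send back, so the returning ACK proves the client is real. The field layout below is heavily simplified for illustration (real stacks also pack MSS bits into the cookie):

```python
# Toy SYN-cookie sketch: derive the "initial sequence number" from a keyed
# hash of the connection 4-tuple plus a coarse timestamp, and verify it
# when the ACK comes back.
import hashlib
import hmac
import time

SECRET = b"server-secret"   # made-up key for the sketch

def syn_cookie(src_ip, src_port, dst_ip, dst_port, t=None):
    t = int(time.time() // 64) if t is None else t   # coarse time window
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{t}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")            # 32-bit "ISN"

def check_cookie(cookie, src_ip, src_port, dst_ip, dst_port, t):
    return cookie == syn_cookie(src_ip, src_port, dst_ip, dst_port, t)

c = syn_cookie("10.0.0.2", 40000, "10.0.0.1", 80, t=1000)
print(check_cookie(c, "10.0.0.2", 40000, "10.0.0.1", 80, t=1000))  # True
print(check_cookie(c, "10.0.0.9", 40000, "10.0.0.1", 80, t=1000))  # False
```

Even in this toy form you can see the 'complicated' part: the cookie has to survive round-tripping in a 32-bit field, tolerate clock windows, and still carry enough of the TCB to rebuild the connection - which is the compromise the list above is getting at.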
The comment was IL3 is labour intensive and the PSN is not handing out IL3 security passes.
And this isn't about running the country's infrastructure - it's about providing cloud services to Govt at varying sizes. If you're a large provider you can recoup the costs of IL3 later on in a way unavailable to SMEs.
Anyone who has run an SME and tried to get List-X status knows about the stitch-up in supplying secure systems. You need to handle S* to get List-X. You can't handle S* until you are List-X. You WILL NOT get a contract until you are List-X. Procurers just will not touch you, even if you have everything in place.
The Govt don't want SMEs.
Govt procurers love paying £3k a man-day for consultants. If you're a procurer, it's in your best interest to handle contracts as large as possible.
The one and only reason there is an interest in SMEs is that the Govt think they're cheaper. And they are - only because procurers will not pay SMEs what they will quite willingly pay to the 'big' consultancies.
Procurers are also completely risk averse - going with bigger, in their eyes, means going safer.
The great thing about Govt procurement is that the cash is never ending, you don't go bankrupt and you never get sacked.
Fantastic!
Rather than shrug their shoulders and go 'It's not our fault' when large corporations deliberately set up their corporate structures to avoid paying tax, it would make a change for them to show some balls and say: yes, we don't want to pay tax, so we go out of our way to set up our business with the prime aim of avoiding it.
There is no other reason for their corporate structures - they are inefficient, complex, and there purely to avoid paying tax.
Tax dodging is hard for 'the little people'. It's a piece of cake for the rich. And whilst it may be rare, it's undertaken by the very, very rich, so the sums involved are not 'rare'. Tax dodging means society misses out, or 'the little people' have to pay more to make up the difference.
Many decades ago we used a speech recognition grammar engine to produce euphemisms for, well, buffing the dolphin etc. <verb> the <noun> provided minutes of mirth.
The only one that didn't fit the grammar was 'Taking Picard to Warp Speed'.
Semantic outliers, dontcha hate em?
"Time to introduce 'descriptive analytics' and 'predictive analytics'."
A graph. And what a graph does next.
"This is already leading to the creation of advanced, almost-neural systems, which can learn complex patterns amid large data sets to predict the probability that an entity will exhibit behaviours that are of interest to the business. It is not confined to structured data."
Apart from that not making sense, learning complex patterns is a piece of piss. Matching a complex pattern in a meaningful way against previously unseen data is not. Proving it's working without lots of test data sets is tricky too.
Yep, and what bankrupts some SMEs is buying over-expensive kit that wasn't suited to the business. Not buying expensive is not the same as buying cheap. As most people have said, it's about buying what's best for the business - and the techies will generally not know what that is.
Most techies I've come across would like to buy lots and lots of kit, with each bit not doing a lot. It makes the catalogue look big but keeps it uncomplicated and easy for them to look after.
Nope, voice recognition is hard because most people speak like idiots, slur words, um and ah, and there is a lot of intuited context around the words, such as taking the meaning from 'I scream', 'ice cream' or 'eyes cream'.
People think voice is easy, as even the stupidest person can use voice. The reason it works is because you have a brain doing an immense amount of tricky processing to get it to work. That's why voice, for the foreseeable future, will only ever see marginal improvement. No-one's come up with new tech in voice recognition for 20+ years; improved CPUs just mean they handle the search space better.
Direct text is a lot easier but still really difficult and it sounds like these guys have done a cracking job.