Re: Monkeys and Peanuts
Haven't you just advised people to become a doctor - which involves going to university and doing a STEM course?
This is a pretty good summary. The challenge, however, is that what you describe is a senior engineer. The jobs that are currently delegated to juniors are often the ones that an AI will be able to manage (though just like a junior it will need guiding and prompting). Which raises the question: who will actually hire juniors and give them the opportunity to become seniors?
DBA is a job that I can see declining rapidly. Much of what they seem to bring is knowledge of the arcane configuration settings and rules of the database software, plus performance tuning methods. That type of knowledge is what AI is/will be good at, as it is essentially following patterns and basic inference. For many other tasks, other engineers will probably pick up the slack aided by the AI.
When scanning through log files you don't need comprehension or retention. Frequently you can just look at the shape of the text and spot if it suddenly goes abnormal - e.g. pages of stack traces, or error messages at the start of the line.
Other times it is the speed of the console rendering that actually slows down a noisy build tool; a console using less CPU and rendering more efficiently might actually make the build run faster.
The costs at the edge are relatively constrained. They make deals with all the major telcos to place Netflix boxes inside their networks, which has the massive advantage to the telcos that their interconnects and core routes aren't being saturated with Netflix traffic.
All of which require you to pay a stake to enter. A key question is whether signing the petition can be considered any form of payment, and so whether any sort of contract was entered into. At most the only remedy I can see is that the personal data has to be deleted to prevent future harm - but even then it's unclear whether that was covered in the terms of signing the petition.
I would argue it may violate GDPR: a CV contains lots of personal information, some of it sensitive (various hints at or explicit mentions of religious, medical or other protected information).
Gathering data with inaccurate details of how it will be stored and processed (e.g. claiming you are processing it for a job advert that doesn't exist) would contravene it.
Not that I'd expect the ICO to do anything about it.
A reversionary method is good, but pretty much by definition it will be less efficient, have much lower capacity and likely be significantly more error prone. At best, for a hospital, that involves pen and paper, telephoning requests through with details spelled out phonetically, and radiographers having to physically sit at the X-ray machine terminal to review test results. At worst it probably involves porters having to run round miles of hospital corridors delivering prescription requests, patient notes and other instructions, and some tests being impossible to perform or analyse the results of.
Turning away patients, particularly in the early stages, is entirely rational and the only sensible thing to do.
No, a significant part of the problem was senior Fujitsu employees perverting the course of justice and downright lying in court, to Parliament and to the Post Office. Claiming things like remote access wasn't possible (when it was) and that they weren't aware of defects (when they were regularly manually fixing data).
In the UK at least that isn't legally correct. There are also database rights. Despite just being 'facts', it was upheld that the collation and reporting of the day's football results do have some legal copyright protection - another newspaper couldn't just copy and reprint them the same/next day.
Government agencies have a history of skirting round the rules of not spying on their own population by asking a friendly foreign agency to do it for them. I'm certain that if this came to pass the NSA/CIA would have a fast track route to getting these certificates, whilst China would quickly compromise all these government run CAs.
The browsers on the other hand, I'm sure, will strictly follow the rules and not ban the CAs; they will just provide straightforward integrations with third-party open source databases, which may cause the CA to be banned completely independently of the browser manufacturer.
Without the ERP system it seems like they have no way to even know if the books are balanced, let alone work towards balancing them. It's not the only factor, but it is no doubt a big contributory factor and an example of how the council was running everything. As they have said on other stories, it isn't the whole story but it is the part the technical audience of El Reg are most interested in.
Fundamentally it shouldn't matter if you tell all and sundry that directory Y exists. If you want to protect it you need to have appropriate security to protect it, just hoping nobody guesses / finds out the directory name isn't security.
Now you do need to ensure that merely knowing the directory name doesn't give information away (CompanyXTakeoverBid would be a bad name), but a directory name like 'secure' doesn't tell people much.
On Windows I don't believe this, but in the days of DOS it sounds much more realistic. They didn't want the keys to display whilst the user was typing, so they avoided the standard APIs and used a low-level API which returns the key codes. (That era would also explain why someone thought a hard-coded password was a good idea.)
Now, either this was a DOS program running in Windows, or the part about remoting in was complete embellishment.
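For what it's worth, the "read the raw key codes, don't echo them" pattern is still easy to demonstrate today. A rough sketch in Python (Windows-only, using the standard msvcrt module; the hypothetical DOS-era original would have gone through BIOS/DOS interrupts instead, but the idea is the same):

import msvcrt  # standard library on Windows, exposes the old conio-style calls

def read_password():
    # Read keypresses one at a time; nothing is echoed to the screen.
    buf = []
    while True:
        ch = msvcrt.getch()           # raw byte, no echo
        if ch in (b'\r', b'\n'):      # Enter finishes the entry
            return b''.join(buf).decode('ascii', 'replace')
        if ch in (b'\x00', b'\xe0'):  # function/arrow keys arrive as two codes...
            msvcrt.getch()            # ...so swallow and ignore the second byte
            continue
        if ch == b'\x08':             # Backspace
            if buf:
                buf.pop()
            continue
        buf.append(ch)

Because a program written this way sees key codes rather than characters, it can easily misbehave when the keystrokes arrive through a remote session or a non-standard keyboard layout - which would fit the story.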
Why so negative about language evolving? You say that 'monetize' is lazy; I'd argue it's much more efficient (and according to https://www.etymonline.com/word/monetize#:~:text=monetize%20(v.),%2B%20%2Dize. the word has been around since 1856).
Longer, more complex sentences are harder for the brain to process, hence why the English language has such a large corpus of words. Normally there are also additional connotations attached to the word which may not be reflected in its basic definition. E.g. a listicle isn't just an article that is a list; it implies it is probably a list of something trivial, full of adverts and likely read just for pleasure or amusement rather than for more academic purposes.
I'm not sure what the point of them avoiding, or firms insisting on, video calls actually is. All they need to do is fake a Korean/appropriate Asian identity and fake the ID documents to match, then just do the interview. Or alternatively, as they said, just get a go-between to do the initial interviews/ID checks and then do the work.
More effective is surely to trace the money. Presumably these people are being paid in currency into a bank account (if it is crypto then there's your first problem). Surely checks must be in place to make sure the account name matches the freelancer's name. Banks are much better placed to verify ID documents and make sure fake accounts aren't being created. Of course it still doesn't help against mules, but finding, paying and trusting them surely reduces the effectiveness.
Not wanting to defend the Royal Mail, but what you have forgotten is that the total volume of mail has dropped significantly (https://www.statista.com/statistics/1006816/royal-mail-volume-of-parcels-and-letters-delivered-uk/).
The universal service obligation means many of RM's costs are fixed - the time to deliver 100 letters to a street or 200 letters to a street is pretty much identical, so the cost stays the same but the revenue halves. Cost increases are an attempt to keep sufficient revenue coming in, not to make excess profit.
It's a deadly spiral though: the fewer letters posted, the more it costs per item, which in turn means fewer letters get posted. Not helped by competitors not having a universal service obligation, so they can cherry-pick just the profitable areas without the loss-making ones.
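To put some rough numbers on that spiral (the figures are invented purely for illustration):

# Illustrative only: assume it costs a fixed 50 pounds to walk a delivery round,
# regardless of how many letters are carried.
fixed_cost = 50.0
for letters in (200, 100, 50):
    print(letters, "letters ->", round(fixed_cost / letters, 2), "per item")
# 200 letters -> 0.25 per item
# 100 letters -> 0.5 per item
# 50 letters -> 1.0 per item

Halve the volume and the per-item cost doubles, so either the price goes up or the losses do.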
The biggest problem is that you only get feedback on the candidates that you do select. Most candidates being interviewed would be at least 'ok' were you to employ them. You might think a recruitment process is good (and train your AI on that data) because all the people you recruit are good. In reality it may be that all the candidates you rejected would actually have been excellent, but you will never know that.
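A toy simulation shows why that feedback is so misleading (all numbers invented for illustration):

import random
random.seed(1)

# Most interviewed candidates would be at least 'ok' if hired (quality 0.5 to 1.0).
candidates = [random.uniform(0.5, 1.0) for _ in range(1000)]

# A deliberately useless process: hire 10% of candidates completely at random.
hired = random.sample(candidates, 100)

print("average quality of hires:", sum(hired) / len(hired))
print("average quality of everyone interviewed:", sum(candidates) / len(candidates))
# Both come out around 0.75, so the hires all look fine and the process appears
# to 'work' - yet it tells you nothing about the (possibly excellent) people it
# rejected, and an AI trained on this outcome data just learns the same blind spot.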
Low to zero. He's not exactly been favourable to them in the past, and his research focus over the years makes his views clear: https://www.cl.cam.ac.uk/~rja14/
Besides, Google, Facebook and their ilk probably wouldn't be that bothered by doing mandatory client-side scanning, as that gives them the slippery slope to include additional data in targeted advertising.
Not really: in the Roman arena it meant agreement/acceptance of the gladiator holding the sword over someone's neck killing him. Except (apparently) sometimes the gladiator asked the opposite question - should he be spared? - and then thumbs up meant agreement to him being spared.
Not quite. The issue that QUIC tries to resolve is where client A and server B both support feature X of TCP, but because a box in the middle does some 'manipulation' they can't actually use it - the middlebox breaks things even though the negotiation to activate the feature succeeded.
See comment above. UDP doesn't do head-of-line dropping, it does packet dropping. The protocol designer building on top of UDP is free to implement their own flow control and retry mechanisms, just as TCP does over IP.
The benefit is that sometimes head-of-line blocking is what you want, and other times skipping lost packets is what you want. QUIC allows the client both modes of operation, unlike TCP which mandates the behaviour whether you like it or not.
But TCP acknowledgement is implemented as a single stream with head-of-line blocking. One lost packet effectively pauses everything until the retransmission happens. (Well, OK, it's a bit more complicated than that, but the simplification is close enough to reality.)
QUIC builds acknowledgement on top of UDP (in the same way TCP builds it on top of IP). This means it has greater flexibility to evolve more complex acknowledgement protocols - such as allowing traffic for other sub-streams to continue and only holding up the sub-stream with the lost packet, or deciding it's a real-time video stream and it's better just to continue and let the error handling in the video decoder deal with some missing data.
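A very stripped-down sketch of that per-stream idea on top of plain datagrams (this is not real QUIC - the stream IDs, sequence numbers and buffering here are invented purely to illustrate the delivery logic):

from collections import defaultdict

class StreamReceiver:
    # Deliver each stream in order, but never let one stream's lost packet block the others.
    def __init__(self):
        self.next_seq = defaultdict(int)   # next expected sequence number, per stream
        self.pending = defaultdict(dict)   # out-of-order packets held back, per stream

    def on_packet(self, stream_id, seq, payload):
        delivered = []
        self.pending[stream_id][seq] = payload
        # Flush whatever is now contiguous for this stream only.
        while self.next_seq[stream_id] in self.pending[stream_id]:
            delivered.append(self.pending[stream_id].pop(self.next_seq[stream_id]))
            self.next_seq[stream_id] += 1
        return delivered

rx = StreamReceiver()
print(rx.on_packet("css", 0, b"a"))  # [b'a']       delivered straight away
print(rx.on_packet("img", 1, b"y"))  # []            img packet 0 was lost, so this waits...
print(rx.on_packet("css", 1, b"b"))  # [b'b']        ...but the css stream carries on regardless
print(rx.on_packet("img", 0, b"x"))  # [b'x', b'y']  retransmission arrives, img catches up

With a single TCP connection, the loss of that one img packet would have stalled the css bytes queued behind it as well.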
The designers of QUIC basically had 3 choices:
1. Build it on top of TCP, just like HTTP and HTTP/2. This means all the problems and limitations of TCP, especially related to flow control.
2. Create a new protocol on top of IP, alongside TCP/IP and UDP/IP (QUIC/IP). Architecturally this would have been the cleanest approach, but it would require all networking equipment and stacks to be updated to support it, and we have seen how that has worked for IPv6.
3. Layer it on top of UDP so that it can be used on the existing internet infrastructure, but create a new connection-orientated protocol - QUIC/UDP/IP.
Option 3 was definitely the wisest decision, but it does cause confusion, as people assume that means it 'is' UDP with its limitations, rather than the reality that it builds something new on top of UDP for convenience.
Historically Apple phones have been premium purchases, bought as a fashion icon rather than purely on technical merits. People who choose Android are more likely to be price sensitive and either
a) don't have enough money to spend on an iPhone or apps, or
b) are more careful with their money so don't waste it on apps
Of course some apps are good value, and some Android users will part with cash, but demographically iPhone users are likely to spend more because it is the bigger spenders who buy into the platform.
The difference is that with phone tappers there was physical electricity flowing from A to B. There was a positive charge they could charge you for so to speak.
In this case there is no direct flow of current. It may induce extra electricity to be used by your device, a tiny amount of packets may flow up the phone line (or fibre optic cable) but that's immaterial. You're into the realms of saying that if someone triggers your PIR security light then they owe you for the electricity.
Ultimately if you go to a website you run the risk of it having an animated GIF, large JPEG, autoplaying video, ad tracking or other JavaScript code. Trying to draw a legal distinction from a poorly written site using excess CPU cycles, through ads and tracking scripts, to other more dubious operations would be near impossible and ripe for political abuse.
There are many cases where you really really want a stay until the appeal is heard.
If someone can get a win against their competitor in a lower court and convince the court to grant an injunction/massive fine (such as in a copyright or patent claim), that may be enough to drive the competitor out of business.
The competitor may have no revenue coming in until the appeal(s) are all eventually heard; even if the appeal is ultimately successful (thanks to support from better lawyers/experts), it would be too late, as the damage is done.
Last time I checked, GCHQ and MI5 are part of the military command. And this being signals intelligence and needing technical skills, it would almost certainly be GCHQ the job landed with (assuming they aren't already doing it and this is just a way to make it legal/authorised/separately budgeted).
So yes, it won't be the army doing it, but it will be the military.
Was hoping for much more analysis; the article got distracted and failed to explain the pros and cons of an in-memory DB beyond a random Twitter figure, with no context of what the scenario was or how the cost compared with alternative solutions.
On face value the concept looks interesting, and allows much simpler coding and fewer bugs than having to manage both a cache and a DB (and so lower dev costs and time). So can other commenters give a more detailed analysis?
You make it sound like having the motor skills of a toddler isn't impressive - and I've not seen many toddlers who can back flip or jump that confidently and accurately.
Beyond humans there are very few animal species that could do all those actions on just two legs without a tail.
In fact, whilst most adults and older children could do the steps, sloped wedges and the transition from the blocks to the beam, that would be done with more of a stepping jump with a leading leg. I doubt many could do the transition as three two-footed standing jumps without any swaying, repositioning or needing to regain balance on landing.
"the authors point out that their work shows it has "inherent advantages over TCP in terms of its latency, versatility, and application-layer simplicity".
Those are pretty much the exact targets of QUIC's performance advantages. In most network conditions, with similarly configured congestion control algorithms, TCP and QUIC will be capable of the same throughput - which is pretty obvious, as it is the congestion control algorithms that manage the rate at which packets flow, so the only differences there are protocol overheads.
Latency is a really important factor in web browsing. The browser has to download a page, parse it, work out the links and then request those objects. Typically these objects are small and it is the round-trip times and handshaking that start to dominate; particularly if you are connecting to other HTTPS servers, the negotiation phase can be expensive. QUIC is designed to remove those round trips during the handshake and start delivering data sooner.

Other features like parallel streams and push support also help with latency reduction. Push means the server can deliver the main webpage and then immediately start delivering the associated assets without waiting for the client to request them. Streams mean the client can ask for a list of files and the server can send them interleaved; if file A takes 3 seconds of processing to be created, it can just get on with sending files B and C. In HTTP you can either pipeline, which just means you queue up the requests but they will still be delivered sequentially, or create multiple TCP connections, which is expensive for both the server and client - and due to TCP slow start it takes time before each connection can reach maximum throughput.
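The round-trip arithmetic is where most of the headline gain comes from. A back-of-envelope sketch (the 50 ms RTT is an assumption, and this ignores slow start and server processing time):

rtt_ms = 50  # assumed round-trip time to the server

handshake_rtts = {
    "HTTPS over TCP + TLS 1.2 (full handshake)": 1 + 2,  # TCP handshake + TLS 1.2
    "HTTPS over TCP + TLS 1.3 (full handshake)": 1 + 1,  # TCP handshake + TLS 1.3
    "HTTP/3 over QUIC (first connection)":       1,      # transport + crypto combined
    "HTTP/3 over QUIC (0-RTT resumption)":       0,
}

for name, rtts in handshake_rtts.items():
    # plus one round trip for the actual request/response
    print(f"{name}: ~{(rtts + 1) * rtt_ms} ms to first byte")
# ~200 ms, ~150 ms, ~100 ms and ~50 ms respectively

Multiply that saving by every small object fetched from a server the browser hasn't talked to yet and the latency difference becomes very visible.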
Another beneficial feature of QUIC is the ability to cancel file transfers. In HTTP this isn't possible; if you want to abort you have to close the connection and then re-establish a new one. TCP slow start then kicks in, where it takes time for the network stacks to calculate the optimal window size, so initially the transmission sizes are limited.
To get the best out of it, this does in particular mean you need cleverer servers that implement and exploit all the relevant features. The conclusion that "QUIC does not automatically lead to improvements in network or application performance for many use cases" is not really surprising.
The second issue with this research is that TCP has been the dominant protocol for decades, so a lot of effort has gone into optimizing it. Even commodity network cards have all sorts of optimizations in them to get the best out of TCP and offload work from the CPU (calculating checksums, packet defragmentation, etc), Linux has support for serving HTTP(S) directly from the kernel, and TLS offloading - or even serving directly from the NIC - is possible on more expensive chipsets. Historically UDP has been a second-class citizen, relegated to taking the slow path rather than the optimized TCP pipelines.
Effectively the comparison is between a highly optimized internal combustion engine and an electric milk float. In heavy traffic the milk float and the petrol car are going to get the same performance (the congestion control of the roads is the limiting factor). Over the last decade electric cars have improved rapidly, and whilst E1 cars still don't quite match F1 on time, they surely will. The same can be said of QUIC: as QUIC implementations and hardware are optimized, performance will increase significantly.
Finally, QUIC has the advantage that much less of the code is in kernel space; this means servers can theoretically be optimized much more easily for their use case - using different congestion control algorithms or other logic based upon whether they regularly serve lots of small files or a few large files. Custom TCP stacks with that flexibility are a much harder proposition.
“Importantly, the GAO’s decision will allow NASA and SpaceX to establish a timeline for the first crewed landing on the Moon in more than 50 years.”
The first time I read this it parsed as "establish a timeline in which it will take more than 50 years before the first crewed landing"; glad that on rereading it, it's the first crewed landing in more than 50 years that we now have a timeline for.
Surely the issue is whether the item is actually inventive?
The definition is: "an invention is to be taken to involve an inventive step when compared with the prior art base unless the invention would have been obvious to a person skilled in the relevant art in the light of the common general knowledge as it existed"
By definition the AI is almost certainly going to have a pool of general knowledge within which it makes connections based upon patterns (perhaps with some random permutations thrown in). Any other person (or copy of the same AI) is likely to come up with the same answer given the same inputs. If the AI can patent, then the AI must be included in the category of 'skilled in the relevant art', and as copies of the AI can be made, anything one invents would be obvious to other copies of it.
In essence the AI is just playing the classic game of "let's patent <X> on a mobile phone", where it is picking values of X arbitrarily. The patent itself may actually be inventive and useful, but only because the human has gone from a long list of brainstormed ideas to realizing that one is a useful invention and deciding to submit the patent application.
I've no problem with leverage being a verb in the British dictionaries.
Oxford dictionaries define it as:
verb
1. use borrowed capital for (an investment), expecting the profits made to be greater than the interest payable. "without clear legal title to their assets, they own property that cannot be leveraged as collateral for loans"
2. use (something) to maximum advantage. "the organization needs to leverage its key resources"
Using leverage as a verb in terms of finance is fine.
The second definition is management speak, but even so it doesn't really work with the original sentence:
"I [used] the password hash [to maximum advantage]"