Re: "or 30 per cent fewer phones"
If you read the entire paragraph, it's clearly:
Huawei down 5%
Samsung down 30%
It does not compare them. But I was originally a bit confused too…
Then if you misidentify a significant proportion of users and serve them the wrong content, your advertising is likely to be less effective than serving neutral or random ads. A lot of folk might be a bit sensitive about having their gender misidentified.
If you serve random ads then half of the time people will see ads for the wrong sex (gender, whatever). If the thing works then this ratio will be [considerably] lower – by definition.
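The by-definition claim above can be sketched with a tiny simulation. The accuracy figures are made up for illustration, and it assumes two evenly split groups (that is what makes random serving wrong half the time):

```python
import random

random.seed(42)

def misserve_rate(accuracy, n=100_000):
    """Fraction of users shown ads for the wrong group, when the
    ad is picked by a classifier with the given accuracy."""
    wrong = 0
    for _ in range(n):
        # the classifier identifies the group correctly with
        # probability `accuracy`; otherwise the ad is misserved
        if random.random() >= accuracy:
            wrong += 1
    return wrong / n

random_baseline = misserve_rate(0.5)  # random ads: roughly half wrong
classifier = misserve_rate(0.9)       # a hypothetical 90%-accurate classifier
print(random_baseline, classifier)
```

Any classifier better than a coin flip misserves fewer people than random serving; whether it is *considerably* better is exactly the advertiser's problem.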
This all depends on how you define the baseline…
If your baseline is close to non-stress environment, then stress-free tests show results closer to true skill and stress environment shows the effect of stress. This is probably a reasonable baseline – and the conclusion would be then that even though the women were in fact more skilled they performed worse under stress. You can also choose the baseline differently. The only objective conclusion is the relative shift of performance when the amount of stress changes. In other words, women's performance was more sensitive to stress.
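The baseline argument is easier to see with concrete numbers. These scores are entirely hypothetical; only the structure of the comparison matters:

```python
# made-up scores under the two test conditions
women = {"no_stress": 80, "stress": 65}
men   = {"no_stress": 75, "stress": 70}

# baseline = stress-free performance, read as 'true skill'
print(women["no_stress"] > men["no_stress"])  # women more skilled

# yet under stress the ordering flips
print(women["stress"] < men["stress"])

# the baseline-independent quantity: relative shift when stress is added
def shift(scores):
    return (scores["stress"] - scores["no_stress"]) / scores["no_stress"]

print(shift(women) < shift(men))  # women's relative drop is larger
```

Whichever baseline you pick, the relative shift stays the same, and that is the only objective conclusion: in this toy example, women's performance is more sensitive to stress.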
But unless you think stress in the work environment should be as high as possible, it is kind of difficult to argue that removing stress disadvantages someone.
Sure, and why did you have to write ‘in weight’?
Because no one says just ‘a kilo of apples and a kilo of gravel are equal’. The reaction to that would be ‘WTF?’ – and rightfully so.
Equal without qualifiers means equal in all aspects that could be considered. Equal with qualifiers limits the equality to some aspects. The qualifiers can be implicit – you might not always have to add ‘under the law’ if there is enough context to make clear you mean that kind of equal. But your kilo example is just silly.
> Inertia is a powerful thing. People are lazy and afraid of change.
That's why you need the ‘everyone uses Inkscape, it's the default for vector drawing’ approach. You will get nowhere with the ‘inferior replacement for program X’ mindset. And around here it's true that basically everyone uses Inkscape. Some creative types may do some weird things on their Macs. But it has always been so (and is not going to change), so they don't matter.
> question whether content is worth anything to anyone other than the website owners
But that's perfectly OK, because website owners can pay for hosting if they want people to see their content. That used to be normal, and many still do it. The web went to hell exactly the moment everyone decided not to pay and instead [try to] make money through ads.
I agree with the sentiment. On the other hand, getting robbed and killed is rarely a necessary part of something people want. Whereas tracking…
Even discounting the idiots who cannot find a store across the street without GPS, lots of services can only really work by identifying you, getting your location and cross-referencing it with some big database (i.e. sending it somewhere ‘to the cloud’). Often even not keeping a history of locations would break them, to varying degrees.
I do not use any such service (in fact I do not even carry a phone most of the time), but many people do, and they do like them. Trying to forbid the tracking part of tracking services may prove rather unpopular (read: impossible). You can forbid the data-selling part by law – and it should be forbidden. But will it be enough?
> Which particular rock have you been living under?
You have to be an IT guy who follows the development to find out. Most people are not. The thing they do requires (or can be done better with) some programming. If they installed Python x.y and it still works for them, that's what they still use. Even I learned about Python 2 EOL quite some time after its originally planned 2015 EOL, and compared to most, I am that IT guy...
So I will ask the opposite question: where and how was everyone supposed to find out?
Name any source and I can tell you: few people follow that.
If this was reflected in training sets and this in turn was reflected in the output of an AI trained using these sets would the result be 'biased'?
This. The problem everyone is trying to pretend does not exist.
The real world is a messy place. All kinds of things are correlated with all kinds of other things -- for reasons good, bad and random. Often the reason is simple: History took a specific course. You don't have the entire multiverse available for training.
The more inputs and outputs, the more odd correlations the classifier will find. And as you say there isn't really any way to sort them out. We would need some automated way, perhaps some sort of AI? Right...
We are always getting headlines about sex and race. But that's just the tip of an iceberg.
> All the telecom companies on Earth are stutteringly terrified by Starlink and others of it's ilk.
All? Maybe the American ones... Why would ISPs here be terrified by Starlink? They offer 200 Mb/s and better latency for 13.5 € per month right now -- not in 2027.
But I am terrified by some company usurping low orbits.
Google may have a vision, but their products are crap.
GMail is quite poor webmail UI-wise with terrible handling of threads. Its ability to filter e-mails is awesome though -- sometimes it even throws away replies to e-mails sent from GMail.
But what really made me laugh is ‘pretty good maps’. See a comparison: https://imgur.com/a/1yBHbKs. And this is still pretty much in the middle of a town, where Google Maps should be strongest. For outdoors you just get a blank rectangle from Google where other on-line maps show an actual map.
> do much less 'x' per hour.
In a corporation the problem may be doing less X per hour.
In a university or other research institution, it is usually ‘we need to do X, Y and Z, none of which has ever been tried before’. Doing new things is the job description. Especially in the hard sciences, where people regularly build their own devices, write software and invent ways to control them, you can bet at least one of X, Y and Z will be IT-related and impossible in a locked-down environment...
Automated stereotyping is actually an excellent description. We should switch to it immediately.
The goal is to classify (or otherwise map) a large and variable data set as lazily and efficiently as possible. So the NNs do the same thing we do -- pick some easy-to-spot features (proxies) that are correlated with the target in the cases encountered so far. When we state the problem like that, this is simply the solution (in our case we arrived at it by evolution). Except, unlike us, the poor NNs cannot reason about it. Not that we do it often, but anyway...
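The proxy-picking failure mode can be shown in a few lines. Everything here is hypothetical (the feature names, the correlation strengths): the label really depends on `skill`, but in the training world a `postcode` proxy happens to correlate with it, so a lazy classifier that keys on the proxy looks great -- until the historical correlation goes away:

```python
import random

random.seed(0)

def sample(corr):
    """One (proxy, label) pair; the proxy agrees with the
    true label with probability `corr`."""
    skill = random.random() < 0.5
    postcode = skill if random.random() < corr else not skill
    return postcode, skill

def accuracy(classifier, corr, n=20_000):
    return sum(classifier(p) == s
               for p, s in (sample(corr) for _ in range(n))) / n

# the lazy classifier: predict the label from the proxy alone
proxy_clf = lambda postcode: postcode

print(accuracy(proxy_clf, corr=0.9))  # training world: looks great
print(accuracy(proxy_clf, corr=0.5))  # correlation gone: coin flip
```

Nothing in the classifier is ‘wrong’ -- it found a real correlation in the data it saw. It just has no way to know the correlation was an accident of history.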
Biting the hand that feeds IT © 1998–2020