
"upskill" as many as 100,000 civil servants in the latest tech by 2030.
Sounds good!
The DeepMind AI subsidiary would work with technical experts in government to support them in "deploying and diffusing" new and emerging technologies
Ewww
1190 publicly visible posts • joined 2 Sep 2019
In an increasingly automated world, the allure of artificial intelligence is undeniable. Yet, when it comes to user reviews, a cornerstone of modern consumer decision-making, the rise of AI-generated feedback presents a significant threat to authenticity and trust. While seemingly helpful, these algorithmically crafted critiques are fraught with inherent flaws, from a lack of genuine experience to the potential for widespread deception, ultimately making them an unreliable guide for consumers.
One of the most significant dangers of AI reviews is the potential for bias and manipulation. These systems are trained on vast datasets of existing online reviews, which can themselves be skewed. This can lead to the perpetuation and amplification of existing biases, whether they be positive or negative, creating a distorted perception of a product or service. Furthermore, the technology can be easily exploited to create a deluge of fake positive reviews, artificially inflating a product's rating and misleading unsuspecting buyers. This "astroturfing" on a massive scale undermines the very foundation of a trustworthy review ecosystem.
Beyond deliberate manipulation, AI-generated reviews fundamentally lack what makes human reviews valuable: genuine experience and emotional nuance. An AI has never held a product in its hands, tasted a dish at a restaurant, or experienced the customer service of a hotel. Its reviews are merely sophisticated collages of existing text, devoid of the authentic joy, frustration, or indifference that a real customer feels. This results in reviews that are often generic, lacking the specific, anecdotal details that can be most helpful to a potential buyer. The subtle but crucial context of a human experience is lost, replaced by a sanitized and ultimately unhelpful summary.
The proliferation of AI-generated content also erodes consumer trust. As it becomes more difficult to distinguish between genuine and artificial feedback, skepticism towards all reviews is likely to increase. This not only harms consumers looking for honest opinions but also businesses that rely on legitimate positive feedback to build their reputation. When the well of information is poisoned with fake and soulless reviews, the value of the entire system diminishes.
Finally, the ethical implications of relying on AI for something as influential as consumer reviews cannot be overlooked. The widespread use of this technology could lead to a significant displacement of human-written content, devaluing the authentic voices of real customers. Moreover, the data used to train these AI models often raises privacy concerns, as it can be scraped from public platforms without the explicit consent of the original authors.
In conclusion, while AI offers powerful tools in many domains, its application in the realm of user reviews is a perilous one. The absence of genuine experience, the high potential for bias and manipulation, the erosion of consumer trust, and the significant ethical questions raised are compelling reasons to approach AI-generated reviews with extreme caution. For a truly informed decision, the discerning consumer should continue to seek out and prioritize the authentic, nuanced, and sometimes messy, but always real, voices of their fellow human beings.
The one thing they want to avoid most of all is being made directly responsible for checking the age of their users. On one hand, it would make them look like the intrusive bad guys who want your private data (well, even more of it) and even demand ID verification to prove your age; on the other hand, it makes them liable if they don't do it well.
By comparison, if there is a generally available method to prove your age, the ID verification can be done by a third party (which could be the government), and Meta can comply with the law by just checking those credentials, absolving them from figuring out whether the ID check is truly secure, whether the ID was faked, etc. Not their problem.
I would argue that it's better for users as well! If it's done properly, you can get an anonymous certificate from one website checking your age, and then use it or not use it to do whatever you want on all other websites, without having to identify yourself on each separate website, and without your activity being tied to your identity. The latter part is obviously really hard to have if it's the same website checking your age and offering the age-restricted services.
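The flow described above can be sketched in a few lines. This is a toy Python sketch, not any real protocol: it uses a symmetric MAC (so issuer and verifier share a key) purely to show the shape of the idea, whereas real proposals use blind signatures or zero-knowledge proofs so that any site can verify the claim without the issuer's secret and without linking the token to an identity.

```python
import hashlib
import hmac
import secrets

# Hypothetical issuer key. In a real scheme this would be replaced by
# public-key cryptography so verifying sites never hold a secret.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token() -> tuple[bytes, bytes]:
    """The third-party verifier checks your ID once, then hands back an
    anonymous token: a random nonce plus a MAC over the claim 'over18'.
    The token carries the claim, not your identity."""
    nonce = secrets.token_bytes(16)
    tag = hmac.new(ISSUER_KEY, b"over18:" + nonce, hashlib.sha256).digest()
    return nonce, tag

def site_accepts(nonce: bytes, tag: bytes) -> bool:
    """Any website accepts the token by checking the MAC: it learns
    the visitor is over 18, but not who they are."""
    expected = hmac.new(ISSUER_KEY, b"over18:" + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue_age_token()
print(site_accepts(nonce, tag))          # True
print(site_accepts(nonce, bytes(32)))    # False: forged tag rejected
```

The key property is that the site checking the token and the site that saw your ID are different parties, so no single service holds both your identity and your browsing activity.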
Let me be the first to state that DB should concentrate on having everyday, normal trains run on time without being cancelled. They have somehow gained a dismal reputation on that point in the past decade, one they most definitely did not have in the nineties (at the time, Italy was the butt of such jokes). This is far more relevant and important for the reputation of a rail network than the max speed you can reach in controlled tests.
Most of their budget currently comes from Google paying to be the default search engine. The way the antitrust lawsuit is going, Google will no longer be allowed to pay them for that. Even assuming it's possible to maintain a fast browser with good features on a tiny budget, it's much harder when you have been used to a steady source of income. I don't know the internal details, but I wouldn't be surprised if they're desperate to find solutions.
If we assume the time savings measurement is precise (and we probably shouldn't), it doesn't seem that much. But it also doesn't necessarily mean an increase in productivity. Experiments on reducing work time, like the four-day week, often show the counterintuitive result that overall productivity does not change significantly; and people working twice as many hours are not twice as productive either.
Indeed, we don't really need everybody to work anymore. During the past century, entertainment has soaked up more and more human activity, but even that is getting more efficient, and the industry doesn't seem like it needs to grow much more. There is already a significant part of the population that doesn't need to work: retirees. Of course, this is currently seen as the reward for having worked long enough to reach that state, but there is no fundamental need for it to be so. We can have a civilization where fewer people work, and for shorter hours, and if some want to work more to have more money they can, but it's not necessary to live. I think we can get there, though the transition will be slow and some assumptions will have to change. Universal basic income does not have widespread acceptance, but in practice many European countries already have a social safety net strong enough that, if you are willing to live cheaply, you technically don't need to work at all. I think that's going to become more and more common.
The biggest issue might be to find ways to motivate humanity not to disappear out of boredom.
Rather than having it bought by a necessarily large and well-funded tech company that will immediately start abusing users for its own ends, why not set it up as a separate, independent company? Most of the cost of maintaining a browser is actually maintaining the engine, which in this case is Chromium, maintained separately with many different companies contributing in their own interest; I doubt Google would stop contributing either.
It is probably not difficult for Chrome to find sources of revenue. The first that comes to mind is of course a sliver of ads shown in the browser from all major ad publishers, with some checks ensuring those deals are non-discriminatory.
Well, the point is that at least they have to pay for it, so it puts a price on the pollution. It technically gives them an incentive to pollute less, which otherwise they wouldn't have to care about. Now, if you think they should pay more, that's a different question. I hear carbon offsets are full of fraud, e.g. golf courses getting paid for not cutting down their trees, which makes the prices too low.
For months, it would ask me to choose whether I wanted to start paying or accept my data being used for ads. I never answered. I hoped they would eventually give up and just show me ads without using my data, but what I got is even better: I have no ads on Facebook. They probably never implemented ads that don't use private data, so they can't show me anything.
I think it's useless to try to find a grand strategy or mastermind behind any of this. The whole thing is pushed by people thinking "I can make money", "I can get promoted", or other similar intentions. They're selling the dream, and they'll get rich long before people wake up.
The effect is not the same: in each market, sales tax and VAT apply equally to all products sold in that market. People buying a car in the EU pay VAT whether it was built in the EU or the US, and people buying a car in the US pay sales tax whether it was built in the EU or the US. Import duties, on the other hand, apply only to cars built somewhere else.
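A toy calculation makes the asymmetry concrete. All prices and rates below are illustrative, not actual EU or US rates:

```python
def consumer_price(base: float, vat: float = 0.0, duty: float = 0.0,
                   imported: bool = False) -> float:
    """VAT applies to every sale in the market; an import duty is added
    only when the good crosses the border."""
    price = base * (1 + duty) if imported else base
    return price * (1 + vat)

# Market with 20% VAT and no duty: domestic and imported cars taxed equally.
print(consumer_price(30000, vat=0.20, imported=False))            # 36000.0
print(consumer_price(30000, vat=0.20, imported=True))             # 36000.0
# Add a 10% import duty: only the imported car gets more expensive.
print(consumer_price(30000, vat=0.20, duty=0.10, imported=True))  # 39600.0
```

The first two prices are identical, which is why VAT doesn't discriminate by origin; only the duty line separates them.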
Yeah, Hanlon's razor might apply here.
I'm not sure it's dynamite that policies require selling to the public? I thought that's half the work of politicians. I don't even find it controversial TBH. When politicians decide to raise the age of retirement because there are more and more old people and not enough young people to pay for their pension, I imagine doing the simple math is much easier than convincing the public that this is necessary.
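The "simple math" for a pay-as-you-go pension system can be sketched with made-up numbers (the amounts and ratios below are illustrative, not real figures for any country):

```python
def contribution_per_worker(retirees: float, workers: float,
                            annual_pension: float) -> float:
    """In a pay-as-you-go system, current workers collectively fund
    current pensions, so each worker's share is
    (retirees / workers) * pension."""
    return annual_pension * retirees / workers

# With 3 workers per retiree, each worker funds a third of a pension...
print(contribution_per_worker(1, 3, 15000))    # 5000.0
# ...with only 1.5 workers per retiree, the burden doubles.
print(contribution_per_worker(1, 1.5, 15000))  # 10000.0
```

The arithmetic takes one line; persuading voters that the ratio really is shifting, and that something has to give, is the part that takes years.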