I put mine in the industrial shredder.
Posts by mIVQU#~(p,
44 publicly visible posts • joined 27 May 2022
European consumers are mostly saying 'non' to trading in their old phones
Anthropic CEO frets about 20% unemployment from AI, but economists are doubtful
Neptune OS is Debian made easy but, boy, does it need some housekeeping
Microsoft intros Windows 365 Link, a black box that does nothing but connect to its cloud
UK's first permanent facial recognition cameras installed in South London
Samsung co-CEO Han Jong-hee dies of heart attack at 63
HP Inc to build future products atop grave of flopped 'AI pin'
I was told to make backups, not test them. Why does that make you look so worried?
WFH with privacy? 85% of Brit bosses snoop on staff
Re: Relax.
I reluctantly, somewhat, agree. Assume everything on your work laptop is monitored (and hope it isn't). Web-surfing tracking has been going on for decades, the Teams status metric is bullshit, and the keystroke logging is scary. A lot of people, I've noticed, will sign in to their personal email, WhatsApp, Facebook, etc. on a workplace machine, so a company is potentially hacking those individuals.
One simple rule: the work device is for work stuff only; personal devices are for personal/private stuff only. Never install work-related software on a personal/private device, and vice versa.
Windows 10's demise nears, but Linux is forever
Apple Intelligence turned on by default in upcoming macOS Sequoia 15.3, iOS 18.3
Microsoft to force Windows 11 24H2 on Home and Pro users
Google Gemini 2.0 Flash comes out with real-time conversation, image analysis
Both KDE and GNOME to offer official distros
Australia passes law to keep under-16s off social media – good luck with that, mate
Starlink's new satellites emit 30x more radio interference than before, drowning cosmic signals
Apple Intelligence beta lands in iOS 18.1, macOS 15.1 previews
Bing and Copilot fall from the clouds around the world
Microsoft breach allowed Russian spies to steal emails from US government
D-Link issues rip and replace order for besieged NAS drives
FBI confirms it issued remote kill command to blow out Volt Typhoon's botnet
Calculating Pi in the sky: Axiom Space plans to launch 'orbital datacenter'
Cisco intros AI to find firewall flaws, warns this sort of thing can't be free
European Space Agency grits teeth, preps contracts for SpaceX Galileo launch
AWS is running a 96-core, 192-thread, custom Xeon server
Will Flatpak and Snap replace desktop Linux native apps?
Google HR hounds threaten 'next steps' for slackers not coming in 3 days a week
The bonkers water-cooled shoe PC, hexagonal pink workstations, and IKEA-style cases of Computex 2023
With ICMP magic, you can snoop on vulnerable HiSilicon, Qualcomm-powered Wi-Fi
Amazon opens its ad-hoc Wi-Fi-sipping Sidewalk mesh to all manner of gadgets
Cisco kindly reveals proof of concept attacks for flaws in rival Netgear's kit
Zoox blurs line between workers and crash test dummies in robo-taxi trial
Brit newspaper giant fills space with AI-assisted articles
Write me an article on the ethics of AI in the style and humour of theregister.com
Artificial intelligence has come a long way since its inception. From chatbots that could barely string together a coherent sentence to advanced machine learning algorithms that can beat humans at complex games, AI has made some impressive strides. But with great power comes great responsibility, and there are some serious ethical questions that we need to address.
Let's start with the obvious: job displacement. As AI becomes more capable, there's a real risk that it will start replacing human workers in various industries. We're not just talking about low-skill jobs like factory work or data entry, either. AI is already being used to write news articles and generate financial reports, and it's only a matter of time before it starts encroaching on more advanced professions like law or medicine.
So, what do we do about this? Some argue that we need to implement a universal basic income to support those who are displaced by AI. Others say that we need to focus on retraining programs to help workers transition to new roles that can't be automated. And then there are those who argue that we need to slow down the development of AI altogether to prevent it from becoming too powerful.
But job displacement is just the tip of the iceberg when it comes to AI ethics. There are also concerns about bias and discrimination. AI algorithms are only as good as the data they're trained on, and if that data is biased, then the algorithm will be biased as well. This is already happening with facial recognition technology, which has been shown to have higher error rates for people with darker skin tones.
To make matters worse, AI is being used to make important decisions that affect people's lives, such as whether someone is approved for a loan or whether they're a good candidate for a job. If these algorithms are biased, then they could perpetuate discrimination and inequality.
So, what's the solution? For starters, we need to be more transparent about how AI algorithms work and what data they're using. We also need to make sure that diverse groups of people are involved in the development of these algorithms to prevent bias from creeping in. And we need to hold companies accountable for any harm caused by their algorithms.
AI is an incredibly powerful tool that has the potential to do a lot of good. But if we're not careful, it could also do a lot of harm. We need to take these ethical concerns seriously and make sure that we're using AI in a responsible and ethical manner.