Re: ElReg
The market can remain illogical far longer than one would expect.
172 publicly visible posts • joined 14 Dec 2022
Tesla is an American company, and America is obsessed with owning cars and was built with the expectation that you will own one. A lot of Americans live out in the suburban sprawl, where your house is in an endless maze of neighborhoods and 20 minutes from any store. The average commute to work in the US is 27 minutes, with 10% of commuters taking over an hour.
The way he talks, he sounds like a teenage edgelord on Gaia Online circa 2004 bragging about the dark magic he learned and is totally going to curse the bullies at school with.
This sounds like the code equivalent of copying a quote into your paper and then using the thesaurus to avoid plagiarism allegations. Cool, you ended up with something disgusting to look at that conveys most of the same meaning as a thing a person already made. Congrats.
I can be a bit sure of myself, but I do not think I have ever been as smugly condescending as Trollop saying "It is still quite hard to do, and only the biggest companies in the world understand this is the future they're investing in." What I hear here is: "Actually customers, you are stupid for asking whether this can generate revenue: this is the future and you have no choice except to adopt. Get on with it." Convincing people to use these tools is the job of the AI companies, not some responsibility for the customer.
The suggested example seems crazy to me. An "AI Agent" that makes decisions about when to offer exceptions to pricing (or other policies) is a terrible idea. The primary issue is not that the program doesn't have the right data; it is that machines cannot be held accountable for decisions. If you have ever been in a meeting where a decision involving large quantities of money is made, then you know that "Who is actually going to take responsibility for this decision?" is a huge factor. So what do you do when your AI Agent starts offering discounted pricing to too many customers? Or when you lose a huge contract because of a hardline stance that didn't go over well? Even if it does a good job most of the time, what do you do about a botched sale? And now that your sales team is not engaging with the customer in these discussions, who is developing those key insights that are not-yet-accessible-to-AI? So many sales decisions are based on small insights that people pick up over years: handshake agreements, offhand comments, plant tours and industry experience.
The suggestion from all these people is always more data or restructuring your data. What, will more surveillance fix this? Or do you spend man-hours taking notes on everything instead of making sales? No company will say it outright, but I am sure legal departments around the world are sweating at the idea of getting a discovery request when all your sensitive information is stored perpetually for your AI salesmen.
If a tool is unsuited for how your business works, it only makes sense to ask what effort would be required to get it working and what the payoff is. Right now the payoff is unclear (despite what AI salesmen will say) and the cost is high. I cannot imagine the smugness required to blame the lack of adoption for a product on your customer. I am in Quality, so I do not have a love for sales teams, but I would not want to start working with an 'AI Agent' instead.
I had a colleague with a PhD in biochem from a religious, but well regarded, institution. We were discussing our degrees, and I mentioned I had studied the period on Earth more than ~2.5 billion years ago. He stopped me and asked how old I thought the world was. When I told him, he went to his computer and googled it. Then he asked me about humans, I gave a less certain answer and again he googled it. Then he just said "huh.", and I quickly tried to change the subject.
Very nice guy, but that was one of the most uncomfortable conversations of my life.
While I have been quite the skeptic of LLMs for a while, I was surprised to see a few of my colleagues at work sour on it. We had a 'copilot launch party' a while back when our company enabled Copilot. A coworker of mine went and came back with Copilot-branded swag. He was summarizing his e-mail with it. Drafting e-mails. Taking meeting notes. Summarizing documents. Designing slides. We joked about his "AI persona" in e-mails because the tone was just completely ChatGPT and different from how he used to sound.
I think he is still using it kind of like a search function right now, but the past few weeks he has brought it up as a bubble, complained that it isn't reliable enough to leave any tasks to and even complained about all the extra integrations showing up.
I know it is an anecdote, but it seems that at least a few people who try to genuinely integrate it into their work are finding that it isn't saving them time.
I don't even know if taking a cut is the best for them. I am sure someday they will claw more value from 'partners' that way, but before they get there I think a better plan is for them to 1) prove this is valuable as a way to drive sales, and 2) ensure only the companies that are paying you for ad placement actually get suggested by the bot.
If you get a commission cut, you get paid by one partner when you make a sale. If you make the partners compete to buy ad space then they are all pushing to pay you more than each other.
You can tell it was started by real industry insiders because those are the only people that think LLMs are 'machine intelligence' and pose a Terminator-style threat to humanity. The rest of us are just worried about drowning in slop and destroying the environment.
That said, go for it. These companies are hoovering up the total output of the human race in order to own monopolized services they hope will replace us as workers. It is just another way to disempower most of us and further concentrate money/power into their hands. It doesn't seem to do a good job, but they are determined to make 'Fetch' happen and they have massive coffers to burn away until it does.
Google is slowly heating up the water. They clearly want more control so they are floating developer registrations to restrict people from installing software normally a.k.a. 'sideloading'. They have a duopoly on phone OSes. Now they are reducing releases which presumably makes OSes based on android harder to maintain. That isn't a coincidence.
Once they have figured out how to lock down the systems in a more Apple-like way and made sure no commercially viable offshoot is going to show up, I wonder how they plan to further enshittify Android?
Carbon capture as a way to offset increased burning of fossil fuels is an insane idea to me. We burn these fuels because they are cheap and effective and it is extremely easy for companies to externalize the costs of the harmful emissions. (Climate change and more direct health effects.) Asking corporations to pay the cost for those current externalities won't work because the whole reason they are doing this is that it is cheap. Other people paying for it is basically subsidizing the bad actors, and you are going to be competing with your capture rate (that costs money) against the fuel consumption rate (that generates money). The idea that you can use the captured CO2 for plastics sounds untenable while we are still using fossil fuels because the natural gas industry produces the raw materials for common plastics as a byproduct. We need to stop burning fossil fuels.
It isn't like these data centers are in places where renewables are totally impractical. They need something as fast and cheap as possible, and fossil fuels are the best option in those respects. If we attach these complicated carbon capture processes then what is the point? Why would they take on an added cost? Even if the capture is in a separate location, are they going to commit to building those facilities out of kindness? If we have some massive coalition willing to pay for massive carbon capture / storage infrastructure... why can't that coalition push to reduce current consumption or subsidize renewable options?
Corporations do not want to pay any avoidable costs and they hold all the power. People do not want to consume less. Heck, even if you want to consume less, many of us are having energy intensive AI forced into the things we use.
I am so glad we can all get slightly more work done by simply becoming assistants to a machine and giving up the act of creation to all be editors!
Of course, the company would need to get more work out of you to pay for the $300 worth of credits, and you certainly aren't going to work shorter hours. Now that you are just QA though, perhaps you don't need that fancy developer salary?
Do we? I feel like it could have been almost anyone in either party for this issue. If someone had to step forward and pay the political price, it would make all of them think twice before putting their names on an unpopular amendment.
(I mean, realistically, someone who is not up for re-election would do it and then head off to a cushy corporate gig. Or maybe someone in a totally safe district.)
Not sure I think a trifold is a good form factor, but I have found my folding phone to be a great upgrade. The only time I have felt any interest in a phone in forever. I regularly need to read larger documents while I am on the go and not needing a tablet or laptop while I am at a work site or traveling is fantastic. It is a tablet I can easily keep in my pocket and that is great.
I am not as good an author as Dame Thompson but I still have my own style and don't want copilot interrupting me. The mere suggestion that I would desire a rewrite is insulting. Word suggestions interfere with what I am trying to say and throw off my rhythm.
It is good that people write differently. It keeps our language interesting and evolving, and that is true for 'bad' writing too. I can understand why someone might want to average out their writing to a bland nothing on a resume, but in most communications I think it is important that everyone keeps their own voice. There really is not a best way to communicate! Your voice should be your own. I hate seeing these suggestions; even the sight of them messes up my flow, but they keep being re-enabled on my phone's Outlook app (mandated by my employer).
That is without even getting into how the suggestions are just bad if you are not engaging in the expected style of communication. They seem to want all of us to speak like business people, at which point I would rather just self-exile.
I know it is important to publish research on things that seem likely, and I am glad this paper was produced, but this is surely not coming as a surprise, right? These predictive text models were trained on the totality of the text on the internet, a place where anyone can confidently state their opinions and beliefs as fact.
I am not an expert in this field, but I am kind of concerned at how anthropomorphized all this research comes off. These researchers and so many others read more like anthropologists or psychologists than anyone studying a computer system. Just in the abstract we see phrases like "We also find that, while recent models show competence in recursive knowledge tasks, they still rely on inconsistent reasoning strategies, suggesting superficial pattern matching rather than robust epistemic understanding." That comes off to me as a wild way of saying that these models are not actually capable of understanding... a thing we already know?
That last sentence is pretty amazing.
This really is emblematic of many of the more fringe Linux features and options. The maintenance of these things is often the passion project of one or a few exceptionally dedicated individuals. Pretty shocking to hear about a 10-year-old leading the effort though.
I do worry that some of the replacements in customer service are here to stay. Not because the chatbots are good at what they do, but because being unhelpful and frustrating in certain customer service industries seems to be accepted and probably actually helps avoid costs.
While I do believe it is a bubble, I spent enough time watching other tech bubbles to know that calling the collapse is almost impossible. The investment clearly doesn't match the revenue, and the promises of firing a third of your employees seem like pretty clear lies today. We are not right around the corner from AGI, and AI in the workplace seems to either 1) require an attentive expert to babysit, or 2) just produce more work for your coworkers. Once this much money has come in behind something though, it isn't easy for them to just back out. There isn't even anything new to pivot into: the metaverse got forgotten in the AI quest, so what can they move to? Nvidia seems to be laying the groundwork for robotics.
The question I increasingly have is: What does the AI bubble leave behind?
We will certainly have some open models with uses. A lot of useless hardware (GPUs will not age as well as fiber did) and abandoned data centers? An internet that is harder to use? A massive skill deficit among newer hires?
Using an AI chat bot for information, you are always getting a filter. They are the ones deciding what is a bias, they are deciding what is acceptable and what may be bad PR or make some group or govt angry.
Forget left or right for a sec, do you trust any of these silicon valley oligarchs to be the filter through which you see the world?
Of course they are, they were sold AI as a tool to replace workers. Why else would these AI companies and Nvidia be worth 40% of the US economy? It isn't because of chatbots or a new search mode, it was because companies were going to fire half their workers. The market is still acting like the AI companies deserve those valuations, so it is no surprise that companies are pretending the product will still do as promised.
If you admit the reality of the product, it would sure seem like the market needs a correction.
I am going to ignore the AI stuff, which of course is infecting x64 anyway.
I have nothing against ARM, though Windows on ARM seems to be suffering the Linux curse to some extent. Users want all their old stuff to work exactly as it always has. For many years Linux evangelists have let people know that there are equivalents for the stuff that doesn't work, and that the bugs are pretty minimal for the stuff that uses a compatibility layer.
I don't think users care. They want their computer to work like always. Why pay more for one that doesn't?
Linux attracts people obsessed with control over their computer, free software or whatever else. Copilot+ PCs attract... I assume the casual excited AI crowd (but not the enthusiasts with workstations), and people who need/love battery life?
Not my experience at all; admitting when there is uncertainty is burned into everyone with a science PhD I know. When I was in my PhD program, you would get an earful if you ever presented a confidently wrong answer. During oral exams I frequently would break out the "I don't know. I guess I would go about finding that out by..." or "We don't really know that." for things where I could be confident there was nothing in the literature. I had professors that would specifically question you *until* you got to an "I don't know."
To this day my colleagues and I get in trouble with business teams by stating uncertainty too much.
I actually like the snipping tool additions. Adding basic highlights and whatnot to a screenshot I want to send to a coworker is great. The OCR functionality is pretty good as well.
It really is bizarre what they are doing to notepad though. They deprecated wordpad and just started shoveling all their random features into the premier editor for altering config files on your friend/family's computers.
Obviously, there are a million problems here, it is hard to know where to start.
What kind of person wants to live like this? Who has this little preference about their own life? Let's look at some of the examples here:
"Find me new white running shoes"?
"Book me a round-trip flight and a hotel in Palm Springs for the first weekend of November, with a total budget of $700."?
Who has ever uttered a sentence like this? What kind of a person cares that little about the things they are buying to allow a computer to make these decisions unsupervised? For things most people care this little about, Amazon lets you buy stuff in a couple clicks anyway.
Even if I would never touch this, they want to make my life worse. The thing about concert tickets is just... hell. I am already fighting bots every time I want a pass to a national park, a ticket to a concert or a limited release item. Now we want to just automate bot purchases for everyone and remove the low bar that was actually there to automation. This is a threat: engage with our ecosystem, or get left out.
It feels like the tech-bros in charge of these companies are either obsessed with recreating sci-fi movies, or are selling their products to venture capital that is obsessed with recreating sci-fi movies. They are out of good or original ideas, so they just do whatever already has pop-culture cachet, because they know that will get some hype. This feels very similar to when they tried to force the 'metaverse' on us (from an incredibly bad dystopian novel) despite the fact that reading text on a page is faster and more efficient. I have trouble seeing much use for a humanoid robot like this over a human or a unitask robot. Is the idea that they think AGI is real, coming, and that everyone will have a slave robot in their house?
I went to a restaurant in China earlier this year where we sat ourselves, scanned a QR code to order our food on our phones and were brought everything by a robot (wheeled, obviously). I know I am getting old, but... It sucked. It felt quiet, isolating and impersonal. Maybe people find the novelty fun today, but I really think that having the aspect of human interaction removed from the mundane parts of our lives makes things worse for us all. It certainly has when I am calling customer service. I know these rich investors and inventors think so too, because rich people stuff is all about having actual people do things for you. Sure, a lot of it is about the ego of having people serve you, but I also think the human element matters. When you go to a fancy restaurant the chef comes out to speak with you about your meal because that connection makes the meal more special for people. When you buy luxury goods, a sales attendant will personally reach out to you and maintain contact about new releases, try-on events, etc. A mailing list would work fine, but it comes from a person, because people like that.
It doesn't feel like any of this tech is about making our lives better; it feels like it is about converting labor costs into capital expenditures or utilities whose revenue can be captured. The actual products we get aren't better than what we had, they are just good enough that we can deal with them. I remember how excited I was for a flatscreen monitor, broadband internet, an iPhone. Humanoid robots should be more impressive than any of that, but I don't feel remotely excited. They weren't actually designed to improve my life. (Also, I am definitely getting older. Real "yelling at clouds" stuff.)
NASA does a lot beyond rocketry. There are fields that are highly classified, and then there are a ton of labs dedicated to more basic science. It isn't just a rocket factory. They also have background checks and programs dedicated to monitoring for foreign influence. Anyone with access to information about our rockets, satellites around Earth or orbital calculations is a heavily vetted citizen. That has been true forever. We don't just have big open facilities for anyone to wander around in. Yeah, I agree that for those things we should have background checks.
China has their own space program, obviously. The individuals who came to the US came of their own volition to study. With your analogy you are treating all these people like they are leeches, coming on behalf of China to take research. Spies exist, obviously, but who do you think is writing the research papers? People. Many of whom have come from all over to work in US labs. We get so many great people contributing to our nation because of our higher education programs here in the US. If we want to kick them all out, then yes, China will happily make room. It has actually been a huge advantage for us that people want to come here and contribute to US programs.
NASA and US space programs, like all of our research, rely heavily on foreign nationals. Trump has basically destroyed the pipeline by which we get the best people in the world to come and support us. I have a PhD in planetary science, and our department as well as our conventions were always full of very talented scientists who came to the United States because of the prestige of our research institutions, or seeking more rights. People were drawn by access to big name advisors, Apollo samples (yes, still), DoE national lab facilities and participation in NASA programs like those running the Perseverance rover or New Horizons probe. Many of them would attempt to continue on as professors at our institutions, or staff scientists.
But for a long time (before Trump even) US budgets have stagnated, while others have ramped up their spending. China is pouring money into faculty positions and their space program right now. Not just because of manned missions or whatever: they are throwing money at basic science. They are putting in money and making it a matter of national pride while the US pretends that climate change is a hoax and members of Congress make fun of research grants they do not understand.
Now under Trump there is no security in coming to the US. Talented Chinese nationals used to come here because they were afforded more freedom. Now the US is monitoring your social media for 'anti-American sentiment'. Visas are stripped midway through PhDs, and research grants are being tested for ideological purity as the US implements its own version of Lysenkoism in the medical sciences. Promising researchers are kidnapped by masked govt goons and the government is withholding research funds from institutions to try and bring them to heel.
The message is clear: the US is xenophobic and anti-science. The US will lose many of its most talented future researchers. This will slow our progress over time. The administration is only interested in space as an area to show masculine dominance. We need to prove American exceptionalism and might, but the people in charge of this seem to think these things are a given. America will be first to set foot on Mars because we are the oh-so-special America. They want the manly prestige and glory, but they HATE the people who actually do the work.
I will not claim that there isn't a risk from Chinese espionage, but I know many people in this field and guess what? If you are Chinese you were already probably nowhere near rocketry. If you mention NASA when coming into the country you are getting stopped by immigration. I have friends with PhDs in orbital dynamics and remote sensing who had to go become programmers because they could not get a job without citizenship. I have friends who wanted to stay here because of work-life balance or rights for gay people that are being forced out.
A country as contemptuous of research and truth as the United States will not be a leader in any of this for very long.
It feels like these AI companies have jumped the gun here. The AI companies obviously want to lock customers into pricing structures that lead to higher payments and externalize any of the risk associated with running the models. However, I expected them to start with more aggressive pricing and lean into these tactics once companies are committed; these contracts do get scrutinized at an enterprise level, after all. MS, Google and Slack all seem to be doing this by leveraging the capture they already have over organizations to force AI in at a cost.
For the others, this seems a bit premature for the modern tech playbook. Is it that they have these unfavorable structures in place now, but are still subsidizing the cost of the AI to keep costs low and get customers to agree to bad terms? Is the hype so strong that customers are spending no matter what? Maybe they are certain their tools will cut labor costs so drastically that companies will be happy to pay?
They are all monsters, but the old ones knew that the best thing to do when you are a monster is hide in the shadows. Create a charitable organization with your name on it and talk about science or whatever but never say anything objectionable. Whether it was because they knew they were doing terrible things, or if they just think the commoners are too stupid for their greatness, they knew to cultivate an image.
Genuine question: Does this query the LLM every single time a value in the range changes?
Ignoring the usefulness and reliability of the function, this seems like a massive waste of energy. A poorly constructed spreadsheet (not exactly a rarity) could be sending huge amounts of redundant data for costly external processing. What happens if you are offline and working on your spreadsheet?
Overall, the examples we are given for handling descriptive text are probably ill-advised, but that is data that businesses put in Excel for inexplicable reasons, and maybe some value can be had for someone. I don't like it, but some people may have a use there that isn't crazy, given what their business is handing them and how reliable their results actually need to be. I am terrified because I know people who are going to attempt statistics with this thing. I dread the day when I open up a spreadsheet and see =COPILOT("Tell me if this distribution is normal, thanks!", B2:B1200)
Not an LLM fan, but I can see your point. I have received Excel sheets exported from a poorly optimized Power BI dashboard that grabs data from half a dozen sources (SAP, SharePoint lists, random Excel files) and includes longform text. My standard for dealing with a lot of this data has been "Don't", so maybe having an LLM provide any insight, however dubious, would still be an improvement.
Well, first of all: they are not producing an intelligent or self-aware computer. This is really very far from the realities of the case, and I do think it is important to keep that in mind.
To engage with the hypothetical: of course they need to pay? If you, at home, produce an intelligent, self-aware being (known as a child) you still need to pay to purchase a book if you want to give it to them. You may disagree with the existence of copyright, but I think it is pretty clear cut that the law would require paying for copyrighted media. Maybe they are allowed to use a library? You definitely cannot just download it off some piracy website.
There are two issues I have here:
1. An LLM is not a digital person, it is predictive text. While I guess I can understand the urge a dictator has to create a talking head for themselves, I do not understand the urge people have to interact with these things. If you train an LLM on Napoleon's writings, you do not have Napoleon. It doesn't act on behalf of the person or have their internal reasoning. There is no way to answer what Napoleon would have done in hypothetical situations, because people and LLMs do not reason and adapt similarly. Most importantly though, it simply *is not the person*. They did not *do* or *say* these things. It also opens up such massive room for rewriting history and blurring the lines of reality and fiction further. Do we need anything like the PragerU simulacrum of John Adams's painted face saying "Facts don't care about your feelings"?
Even from a dictator's perspective though, I still see issues. Ideology is not consistent, nor are people. Sometimes this is for good reasons, sometimes it is opportunistic, but it happens regardless. Many followers might not think this is the case, but I have to assume the dictators in charge know this. Look at the way China covers Mao and you can SEE that they know this. Training a bot on the body of your writings and letting it loose to explain your perspective just seems like asking for trouble. You control the discussion, you control the messaging, why would you give up the direct control of the legacy to a chatbot? LLMs are not easily updated when suddenly you have always been at war with Eastasia. Is the hope that a skeptic will be convinced by the chatbot? You control the schools and social media! I do not get how this is in any way a good idea if you are at all aware of either your own ideology or the capability of LLMs.
2. No one is being made immortal with a brain interface. Sorry, the tech isn't there, it isn't close and all our world leaders (and Elon) will die of old age before it gets there. If it gets there. To the extent that we understand the brain, it does not operate similarly to a modern computer, and there is a ton that we do not understand about the brain. I have not seen it explained how a brain-computer interface at any point would lead to digital people beyond expanding our understanding of the brain.
But even if you could capture the exact physical state of your whole brain and run a physical simulation on a super computer, transmit sensory information to it and receive communication from it: is that you? It is an old quandary, not to be solved in this comments section, but I don't think it is 'you'. The meat you that experiences the world as a mostly continuous phenomenon is a separate being. There is a hard break there, you experience death and cease to exist while a copy persists. Is that satisfying to one afraid of the void of nonexistence? If you can be uploaded into such a simulation, what is stopping you from making an arbitrary number of copies? Are those 'you' as well? I do not think they are in regards to one's own sense of mortality. Digital copies would be independent beings at that point.
There are not many comforts in the world today, but "...so long as men die, liberty will never perish."