Re: Autonomous vehicles
Waymo knows how to make an AV, Tesla doesn't.
386 publicly visible posts • joined 16 Jul 2024
The current Uber model is definitely socialize the costs. Kid gets run over by an Uber driver while getting off a school bus. The Uber driver's insurance and the Uber driver pay. Not Uber's fault at all, no matter how hard the drivers have to hustle. An AV would be the AV company's responsibility.
I was surprised to see Dorsey blacklisted for selling Twitter and making "a massive profit" from it. Firstly, he reinvested most of the sale money back into shares of New Twitter, so he has actually lost over half that value. Secondly, every anti-Musk activist I heard at the time was vehemently opining that Musk must stick to his original, too-high offer and not be allowed to back out of the deal. Go back and check the comments on The Reg and Ars Technica.
Now he is a founder of Bluesky, which is open source and quite big, so a talk seems reasonable.
I worry about the activist left poisoning itself with trigger-finger cancellation as a substitute for policy.
At the very least, each customer should have a non-reusable account name, and the bucket names should be keyed off the account name, i.e., as a prefix.
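For illustration, a minimal sketch of that naming scheme (the account name and bucket suffix here are made up):

```shell
# Hypothetical scheme: every bucket name is keyed off a non-reusable account name.
account="acme-corp-7f3a"        # assumed unique per customer, never recycled
suffix="invoices"               # the customer's chosen bucket suffix
bucket="${account}-${suffix}"   # the full bucket name carries the account prefix
echo "$bucket"                  # prints acme-corp-7f3a-invoices
```

Because the prefix is never reused, an abandoned bucket's name could not later be claimed by a different customer.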
Suppose a large company abandons a bucket. However, some links inside the company still refer to the documents inside the abandoned bucket for procedural instructions, or a list of authorized contacts, etc. That's a weak link too, isn't it?
The court heard that the 25-year-old stinkerbell, Rhiannon Evans, sent a series of videos to her boyfriend's former partner of her holding her smartphone camera to her rear end, farting into it while smirking throughout. ..... The court also heard that the relationship between Evans, her partner, and Prytherch came to blows amid disputes over access to child visitation. Evans and her ex had previously been in a relationship for two years.
Who is Evans' ex? Who is Evans' boyfriend? To whom does the child belong?
Nowadays you can get DNA from a hair. But DNA can be misleading too. DNA from the daughter of the Long Island killer was found on one of his victims, because the victim rode in the killer's family car. Police confirmed the wife and daughter were outside the country on vacation at the time of that murder.
AI is already helping busy doctors automate the process of challenging insurance rejections written by AI. This kind of perpetual motion system can only be made even more efficient with AI agents. Who would have dreamed of perpetual motion in our lifetimes? What a great time to be alive (*exempting those with serious illness)!
As for profit, you could start by charging based on actual costs - within 20% of break-even, say, for usage where the data is not also being used for training.
Especially in the gratuitous pleasure market - e.g., home generation of graphics, music, the unmentionable - there is really no justification for taking a loss on those, yet OpenAI has set low prices for those uses simply because that's all the market would bear. Never mind DeepSeek, the writing is right there on the wall already.
Resource constraints and competition drive innovation. If DeepSeek is really 10x better, that is how they got there. The current US model of a king and his handful of tech lords has become the antithesis of capitalism - it harks back to the medieval economy. I am exaggerating, but that trend is real.
I would argue that the problem is not AI per se, but the pseudo-capitalist, ego-baring way in which it is developing.
I would bet the companies would exclusively employ H1Bs or outsource to experienced scammers overseas. Your average US college graduate is just too likely to not understand, or at least not care, that loose lips sink ships.
In this case, it's true the H1Bs would be doing a job that no US droid could do.
True AGI isn't even well defined. How many angels can you fit on the head of a pin? However, ML, including LLMs, is already at practical working-tool status today. That includes positive and negative (mis)uses, from the POV of human well-being. QC, on the other hand, apart maybe from that Canadian version, can't even do Hello World yet.
That's just a simple O(n) algorithm, so unless it goofed by sorting first, it's just a question of optimization. Not knowing what it did makes the article seem speculative and a little boring. (Sorry).
However, I'll just chime in now with a boring personal anecdote. Yesterday I asked Copilot to write something in bash to parse a section of a .ssh/config file. It used awk in a way I hadn't seen before and saved me some trouble. While I don't think that's AGI, I do think it was something even better, which could be called AUI (actually useful information).
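I don't have the exact awk Copilot produced, but a sketch of the general idea - pulling one field out of a Host block in ssh-config-style text (the host names and addresses here are placeholders):

```shell
# Print the HostName for a given Host block in ssh-config-style input.
parse_hostname() {
  awk -v host="$1" '
    $1 == "Host" { in_block = ($2 == host) }          # enter/leave the wanted block
    in_block && $1 == "HostName" { print $2; exit }   # first HostName inside it wins
  '
}

# Try it on a sample config (hypothetical hosts):
parse_hostname build-box <<'EOF'
Host build-box
    HostName 10.0.0.5
    User ci
Host backup
    HostName 10.0.0.9
EOF
```

The trick is that awk's default field splitting handles the indentation for free, so there's no need for sed or cut gymnastics.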
This approach may also not be appropriate for latency-sensitive workloads, like AI inferencing, where proximity to users is imperative for real-time data processing.
That's old thinking. Latency is the new genius. Delayed gratification is worth more - $2,000/mo, in fact, for "Pro" reasoning ability.
This aviation expert [ https://www.youtube.com/watch?v=BzmptA6s-1g ] "Blancolirio" notes that if the wheels are not down then the flaps and spoilers will not deploy. But the landing gear can be deployed by hand by pulling on emergency cables - the video even shows footage of the emergency cables being pulled. So if they had done that (manual deployment) then the flaps could have been deployed. The flaps/spoilers lockout will be released even if the landing gear is up when the aircraft is under 10 feet from the ground. But they were floating over the runway quite high, so ....
Paraphrasing from an elsewhere reported version of this story - Typists typing words of random letters are extremely slow compared to when they type natural language, the difference resulting from natural language being so predictable. The information is more dense in the case of words with random letters, because there is no predictable structure. Another way to look at it is that the time it takes a human typist to type a document depends more upon the size of the zip++ file that can contain it than its expanded length. (Zip++ because language complexity is more than the number of unique words).
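A quick way to see the effect (gzip standing in for "zip++"; the sample strings are made up): compress a natural-language sentence and a same-length run of random-looking letters and compare the sizes.

```shell
# Natural language compresses far better than random letters of the same length.
natural="the quick brown fox jumps over the lazy dog and the quick brown fox jumps again"
random="xq zjvkp lmwnr tybdfg hcseao uqxjzv kplmwn rtybdf ghcsea ouxqzj vkplmw nrtybq"
n_size=$(printf '%s' "$natural" | gzip -c | wc -c)
r_size=$(printf '%s' "$random"  | gzip -c | wc -c)
echo "natural: $n_size bytes, random: $r_size bytes"
```

The repeated phrase in the natural sentence shrinks nicely; the random letters barely compress at all - which is the same gap the typists show.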
That also explains the characteristic long-winded and flowery language - padded with every possible platitude, while devoid of originality - seen in the output of ChatGPT when talking of weighty but soft human issues.
Hey ChatGPT, is it possible that AGI can not be achieved until there are further fundamental improvements in computing hardware?
Yes, it is entirely possible that Artificial General Intelligence (AGI) may not be achievable until there are further fundamental improvements in computing hardware. While progress in AI has been impressive, especially in areas like deep learning, natural language processing, and reinforcement learning, achieving AGI requires more than just better algorithms—it may also require significant advances in hardware. Here are some reasons why this could be the case: ... Energy Efficiency: Deep learning models, particularly large-scale ones, are energy-intensive. AGI may require hardware that can process massive amounts of data with orders of magnitude better energy efficiency to become practical for widespread use....
There are already profitable applications for general machine learning, including some uses of LLMs. Development while having to satisfy energy-cost constraints is the secret of the evolution of life - and it's no different for human progress.
Bleeding-edge fundamental research is also fine - obviously - as long as honesty is maintained. Too much hype is counterproductive.
FYI - According to The Information [paywalled] via TechCrunch, Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup's profits ... The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. This is an important detail because Microsoft loses access to OpenAI's technology when the startup reaches AGI.
I don't think your assertion that foreign investment simply increases the deficit is 100% correct.
The US runs a huge merchandise trade deficit, also known as the "US trade deficit", which you mention. That includes goods and services.
Much of that money comes back to the US - if only because exporters need to do something with their earned dollars that doesn't cause their own currency to appreciate (because a surfeit of dollars would drive down the dollar), which would kill their exports. The main avenues for investment are (1) US Bonds, (2) Stocks, (3) Direct investment in US businesses.
Because these return investments keep the dollar from dropping, they keep foreign imports cheap, undercutting US domestic competitors. On the other hand, it is great for the financial engineers on Wall Street, for political parties that need a limitless debt ceiling to preserve their preferred status quo, for bitcoin bros, AI hype professionals, and so forth. There really is so much investment capital floating around that it makes more economic sense to surf short-term hype waves than to be stingy with it and worry about realistic long-term future returns.
So indirectly, your assertion could be correct - if a good part of that $100B goes into investments that are based more on hype than careful long term reasoning, thus ensuring that the merchandise trade deficit continues.
If however, the money is invested in ways that improve much needed US competitiveness in goods and services, then it could theoretically improve the merchandise trade deficit - contradicting your assertion.
If the ban goes through, I bet that CCP manufactured TP-link-a-like hardware will still be available, just with a name change and USDA Approved Patented Secure Firmware © on the EEPROM. (And of course a 2x higher price tag for the extra minute of labor + stock dividends + CEO bonus). Although I would be happy to be proven wrong and see it manufactured elsewhere.
Your model Archer C3200 does appear to be hardware-locked-down by design [https://forum.archive.openwrt.org/viewtopic.php?id=62821&p=2]. :(
Obviously, it is better to check whether the hardware is supported before buying. Lots of hardware is supported by OpenWRT. I don't know much about DD-WRT - I thought they had merged.
Isn't Cerebras aiming primarily for the so-called "language output inference" (a better name: "statistical inference of language output") niche? Similar to another "language output inference"-niche company, Groq? So they are not exactly competing directly with Nvidia, because Nvidia hardware can be used for both "training" and "inference".