
I think you missed an opportunity ...
Surely your opening sentence should have ended with, "has died at the digitally apropos age of 01100101."
The CUDA "moat" is a classic example of the first-mover advantage in an ecosystem.
It's not just about technical parity but also developer convenience. HIPIFY, SYCL, and ROCm offer alternatives, but often require manual intervention, have compatibility issues, or lag in performance. NVIDIA's strength lies in its relatively unified ecosystem.
Custom accelerators, however, may challenge Nvidia's moat from outside rather than within.
Displacing a dominant player is always hard, whether in computing or evolution. But disruption often emerges from an unseen niche that overturns what seemed unshakable. Either that or the equivalent of a monster asteroid strike that shakes things up a bit.
Back in the early 80s, C would have been a fairly "minor" language - Ada would have been the serious and "secure" language choice. Ada is definitely suitable for writing an OS.
I've coded in both and I like both. Ada had concurrency built into the language, type safety (even user defined types), excellent templating system, low level bit fiddling, memory safety, etc.
It wouldn't surprise me if the OS was written in Ada. The alternative would be a custom language (and, yes, lots of those existed as well).
I don't deny that IBM kept growing - at least for a while. However, a lot of this growth came from downsizing / rebalancing in one area and hiring in another. For example, in the late 80s it was almost impossible to get a full-time position at IBM in North America (though there were lots of contractor positions), while IBM was busy hiring in other parts of the world where salary costs were lower. The same happened in the 2000s, when a friend's position was "relocated" to China; they were not interested in moving to the position's new location, so they were "reassigned" to a lower position in their current location. Not that it mattered much - they were deemed "redundant" the year before they were eligible to apply for an early pension.
Consider, for a moment, the curious case of the Elephant in the room. It's not just any elephant but an Invisible one, conjured from the whispers of 'What If' and 'Just Suppose.' This elephant, unseen yet unmistakable, roams freely, feeding on the foliage of Fervent Belief without the tether of Evidence. Its presence as palpable as the air we breathe, yet as elusive as the wind—seen not with the eyes, but felt with the heart of Conviction.
Around it, a party is in full swing, with guests engaged in animated discussions. Amidst this gathering, the Invisible Elephant moves silently, its every step a testament to the power of imagination over observation. Some guests whisper of its menacing hazard, others of its majestic strength, though none can agree, for its form is shaped more by conviction than by reason.
Not that the NASA announcement is any clearer, but: "The announcement comes after a quarter of the sail was unfurled, demonstrating that the deployment technology works as expected." would be better phrased as "The announcement comes after a successful ground test in which one quarter of the solar sail was unfurled, demonstrating that the deployment technology works as expected."
My experience with LLMs shows they generate incredibly neurotypical pap. Why? Because they ingest a lot of it, then - by the Law of Large Numbers - they regurgitate the most neurotypical pap.
This is no different than experiments/research that generates "the most beautiful face". Average enough "beautiful" faces and you converge towards the one that appeals to the most people. Same with LLMs, average tons of neurotypical pap and your output will be that which converges to the pap most people like/accept.
I am very sceptical about browser usage statistics and always have been.
Especially since browsers often lie to websites to get them to work.
Since Firefox automatically updates itself (might be platform dependent), couldn't Mozilla give an estimate of how many FF browsers get updated?
Of course, we then have the fact that multiple browsers are often installed on a machine.
The report by the GCHQ-run NCSC claimed: "There is a realistic possibility that highly capable states have repositories of malware that are large enough to effectively train an AI model for this purpose."
Shouldn't that rather read, "We have used our large repositories of malware to train an AI model for this purpose."?
The claim that SEO, or low-quality content, or affiliate-link pages are the root of what ails Google is absurdly false.
I had a website many moons ago (approaching 17 years) - long before search results became horrendously useless. Did it rank well, have lots of visitors? Of course not - how is a new, niche website supposed to compete against the millions out there at the time? However, when I signed up for Adsense (why not hope for a little passive income), I was very surprised to discover my visitors jumped 10x the first month and grew after that. It wasn't because of any viral page. Google decided that since I was running ads on my site, it was better for them to promote it.
Of course, Google has changed the way it ranks pages over the years, and it seems to favour low-quality, affiliate-friendly sites with lots of ads.
Search engines now behave more like the portals/services of yesteryear (AOL, Prodigy, etc) - trying to be your one-stop interface to their vision of the Internet. I'm honestly surprised they don't just open search content inside a frame (although they are coming close with all their sidebars and stuff).
A Robber-Baroness plundering all she can, while the hoi polloi suffer and struggle?
From Wikipedia (to be fair, her quote is unreferenced, so it, probably, needs a citation tag): In 2018 she received a total of $2,458,350 in compensation from Mozilla, which represents a 400% payrise since 2008.[15] On the same period, Firefox marketshare was down 85%. When asked about her salary she stated "I learned that my pay was about an 80% discount to market. ... That's too big a discount to ask people and their families to commit to."
Well, let's see ...
The distance from Earth is 20 light hours. This is the radius. The angle is 2°.
Using basic trigonometry: tan(2°) = (distance pointing away) ÷ (distance to Earth)
0.03492 = (distance pointing away) ÷ (20 light hours)
distance pointing away = 0.03492 * 20 light hours = 0.6984 light hours ≈ 41.9 light minutes.
The sun is 8 light minutes from the Earth.
Depending on relative orbital position, the distance between Earth and Mars is anywhere from 5 to 20 light minutes.
Depending on relative orbital positions, the distance between Earth and Jupiter is anywhere from 35 to 52 light minutes.
So, to correct for a 2° aiming error, we would need to send a "reflector" somewhere out in the Jupiter-orbit zone in order to bounce a signal between Earth and Voyager.
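For anyone who wants to redo the arithmetic, here's a minimal sketch of my own (using the same 20-light-hour distance and 2° error as above):

// Back-of-the-envelope check of the numbers above (my own sketch, not from
// the article): lateral offset of a 2 degree aiming error at ~20 light hours.
#include <cmath>
#include <cstdio>

int main() {
    const double kPi         = std::acos(-1.0);
    const double distance_lh = 20.0;                   // Earth-Voyager distance, light hours
    const double error_deg   = 2.0;                    // aiming error, degrees
    const double error_rad   = error_deg * kPi / 180.0;

    const double offset_lh = distance_lh * std::tan(error_rad);  // lateral miss distance
    const double offset_lm = offset_lh * 60.0;                    // in light minutes

    std::printf("Miss distance: %.4f light hours = %.1f light minutes\n", offset_lh, offset_lm);
    // Prints about 0.6984 light hours = 41.9 light minutes - well beyond the
    // Earth-Mars separation and into the Earth-Jupiter range.
    return 0;
}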
One of the challenges in exploring the moon or planets is dust kicked up by engines during landing or activity on these distant worlds. Scientists in the Electrostatics and Surface Physics Laboratory at NASA's Kennedy Space Center in Florida are developing ways to mitigate this problem.
Electrodynamic dust shield, or EDS, technology is based on concepts originally developed by NASA as early as 1967 ...
https://web.archive.org/web/20130406081042/http://www.nasa.gov/centers/kennedy/home/mitigating_dust.html
A techy paper titled History and Flight Development of the Electrodynamic Dust Shield is available here:
https://ntrs.nasa.gov/citations/20150016082
If your preferred article style is more newsy than techy, the BBC reported (23 Aug 2010) that "Working with Nasa, Malay Mazumder from Boston University originally developed the technology to keep solar panels powering Mars rovers clean." and that "They expect the technology to be commercially available within one year."
https://www.bbc.com/news/science-environment-11057771
Lots of websites pull resources from other places - in fact, it's often encouraged. Want to display mathematical equations on your site? Link to the external, officially hosted version of MathJax - that way you also get updates without having to manage it on your end. Want to use Bootstrap? Link to the external, officially hosted version. This is true for so many (most?) web frameworks, libraries, fonts, and other resources.
If every site has to maintain its own local copy of jQuery, what is its liability for not keeping it up to date? Because we've just been told they're liable for using external 3rd party resources (which silently roll out updates when needed - which is another thorny issue).
I am not trivializing privacy concerns. I prefer to be as untracked as possible, but every single connection on the Internet requires knowledge of the endpoint (yes, you can obscure it through a VPN or proxy, but you're not "anonymous" to them). You can't simply have a policy of not logging or tracking IP addresses, because that information is useful for blocking "hostile" actors (you know, the ones who keep trying to hack your site). On the other hand, it is really, really creepy if an actor - like Google - is taking access to a "public" resource and then trying to use it to build an identifiable profile.
Does the onus belong on a website using a 3rd party resource in good faith, or does it belong on the 3rd party providing a public resource in bad faith? Never mind all the thorny issues of what counts as appropriate due diligence on a website's part.
Security and privacy on the Internet are very difficult things.
Clicking on a YT video on the Videos tab presents me with this message:
YouTube Privacy Warning
YouTube (owned by Google) does not let you watch videos anonymously. As such, watching YouTube videos here will be tracked by YouTube/Google.
Before offering me the "Watch Here" or "Watch on YouTube" buttons (unless you've been silly enough to accept the "Remember my choice" option).
>> How do you test something as vague as panspermia? The only proof would be to go somewhere else, quite far, and find life much too similar to Earth's to be a coincidence.
Or ... that would be evidence that the development of life must follow physical processes that result in life following a particular molecular trajectory.
I think that was true 20 years ago, when you found spirited, opinionated, and well-trolled comments, but ... today - if the topic crosses some perceived Left/Right divide - rationality goes out the window.
The "right-wing" nutters have always been around and engaging with them is like engaging with some homeless guy who's been off his psych meds too long and is coming down off a bender.
The "left-wing" nutters are a more recent phenomenon and engaging with them is like engaging with a tantrum throwing 3 year old who is screaming and thrashing about in the middle of a store aisle because you won't buy them "Fruity Unicorn Loopy Frosties"
As for whether $10B is sufficient ...
> "India's subsidies may not buy it a lot. Samsung is spending $17 billion on a single fab in Texas"
While there is graft in all governments, I am fairly confident it takes fewer dollars to grease the machinery in India than the US and, so, that $10B may be more than sufficient.
C and C++ are not the same language and there are a few corner cases where C code won't compile the same in C++, but ... I am digressing.
1) The programming paradigm is different between C and C++. In C++, you are "encouraged" to use C++ features - like the STL (Standard Template Library) instead of managing your data structures with malloc(), realloc(), and free(), or streams instead of printf() and FILE. So, equivalent C and C++ programs will look different.
2) C and C++ runtimes are different. C has no need for exception handling.
3) Even if you compiled a C program as a C++ program, things like structs get default constructors and destructors in C++. (The sketch below illustrates points 1 and 3.)
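As a quick illustration of points 1 and 3 - a toy example of my own, not anything from the article - here's the same small job written C-style and C++-style. Both compile as C++, but they read quite differently:

// Toy contrast: C-style allocation and printf vs std::vector and iostreams.
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>

// In C++ even this plain struct gets an implicitly declared (trivial) default
// constructor and destructor; in C it is just a block of data.
struct Point { int x; int y; };

int main() {
    // C-style: manual allocation with malloc/free, output with printf
    Point* c_points = static_cast<Point*>(std::malloc(3 * sizeof(Point)));
    if (c_points == nullptr) return 1;
    for (int i = 0; i < 3; ++i) { c_points[i].x = i; c_points[i].y = i * i; }
    for (int i = 0; i < 3; ++i) std::printf("(%d,%d) ", c_points[i].x, c_points[i].y);
    std::printf("\n");
    std::free(c_points);

    // C++-style: std::vector and iostreams handle the bookkeeping
    std::vector<Point> cpp_points(3);
    for (int i = 0; i < 3; ++i) cpp_points[i] = Point{i, i * i};
    for (const Point& p : cpp_points) std::cout << '(' << p.x << ',' << p.y << ") ";
    std::cout << '\n';
    return 0;
}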
Don't get me wrong, web browsers are pretty amazing - not to mention, extremely complex. What was once a thin client for remotely viewing and navigating documents has become a rather fat thin-client.
In fact, each passing iteration seems to be inching the browser closer and closer to being a full OS running as a guest on some target hardware (something Google anticipated back in 2009 with the release of ChromeOS).
Ah, this is obviously some strange usage of the word 'replicate' that I wasn't previously aware of.
To be fair, the paper is talking about robotic assembly of other self-similar robots. The fact that these robots are assembled from cellular matter is irrelevant to the discussion - except that ... it's really cool (or creepy) to talk about previously living cells being shredded, assembled into a sort of Frankenstein monster, and then, themselves, assembling other Frankenstein monsters from similarly shredded cellular material.
For those who didn't read the paper, the steps are as follows:
1) Take a frog stem cell and remove the contents
2) Take the cell husk and strip off the outer layer
3) Assemble the cell inner layers into a "robot"
4) Put the robots in an environment filled with the inner linings of cells and watch the robots "assemble" copies of themselves.
I see this less as a great leap in self-assembling robotics (although the shape is interesting) and more as a testament to the tenacity of life to go on even after it's been horribly mutilated. This type of experiment shows that the forces binding life aren't just about the DNA, RNA, or macro-scale organisation - they run far deeper.
>> "I'm frustrated that GitHub isn't taking its users' security and privacy seriously," Marlin told The Register in an email.
GitHub isn't a mind reader. It doesn't know which uploaded data is private and which isn't. It trusts that the user correctly commits files to public and private repositories.
Every single file that is uploaded - publicly visible or not - is a potential security threat, data leak, or doxxing.
I can think of at least two legitimate reasons for uploading and making public facing a cookies.sqlite file:
1) controlled data to work with
2) a honeypot
Yes, some idiots might "accidentally" upload that file, but they might also "accidentally" upload their banking details as well or any other file that contains "sensitive" information.
Agreed.
I assume he didn't want to have to share the ad revenue with the real ad slinging networks (like Google) that would have been actually needed to deliver the ads to real web pages. So, the cost of renting all those servers and IP addresses must have been cheaper than the cut Google et al would have taken.
Still, as @Ace2 commented, he could have made more money being legit - maybe it would have taken longer, but, at least, he wouldn't be looking at prison.
Vim doesn't catch homoglyph attacks.
It also didn't display a codepoint for the Python comment attack and, instead, displayed the disguised version of the code - mind you, odd cursor movement through the code was a tip-off.
It did display codepoints for other bidi attacks, but it seems that certain bidi codes - like RLI (and perhaps a few others) - are rendered by Vim instead of being displayed as codepoints.
I am using Vim 8.1 with patches 1-2269 on Ubuntu.
Why kWh instead of Joules?
The atmosphere is a HUGE energy exchange system. Energy is put into water, turning it into water vapour. That water vapour is transported by the atmosphere. Energy is removed from the water vapour and it precipitates or condenses out.
In the places where these systems are being proposed, the atmosphere is not pulling energy out of the water vapour and thus causing it to precipitate. Therefore, the solution is to extract the energy from the water, thus causing it to condense.
It takes a lot of energy to vaporize water. For example, to vaporize 1kg of water requires:
2,500,000 J @ 0C
2,453,000 J @ 20C
2,256,000 J @ 100C
In order to condense the water, you must remove that amount of energy. One way of doing this is with a refrigeration system of some sort which moves heat from a cool plate to a hot plate.
We tend not to use Joules because energy doesn't care about time. We prefer a timed measure of energy - such as the kWh: 1 kWh is 1 kW sustained for 1 hour, i.e. 3,600,000 J.
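As a rough sketch of what that means in practice - the 10 litres/day target below is purely a hypothetical of mine, using the 20C latent-heat figure above:

// Back-of-the-envelope: energy needed to condense water out of air, using the
// latent-heat figures above, expressed in both Joules and kWh.
#include <cstdio>

int main() {
    const double latent_heat_20C = 2453000.0;   // J per kg of water at 20C (from the table above)
    const double joules_per_kwh  = 3600000.0;   // 1 kWh = 3.6 MJ

    const double kg_per_day = 10.0;             // hypothetical target: 10 litres of water per day
    const double joules     = kg_per_day * latent_heat_20C;
    const double kwh        = joules / joules_per_kwh;

    std::printf("Condensing %.0f kg/day removes at least %.2e J = %.1f kWh per day\n",
                kg_per_day, joules, kwh);
    // ~2.45e7 J, i.e. roughly 6.8 kWh/day, before counting refrigeration inefficiency.
    return 0;
}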
-------
Why would you get less water from a higher relative humidity?
How much water the atmosphere can hold depends on temperature. Consider the following quantities of water in the atmosphere at 100% humidity levels:
4.89 g/m³ @ 0C
17.3 g/m³ @ 20C
30.4 g/m³ @ 30C
At 0C, 90% relative humidity, the maximum amount of water you could extract from a cubic metre of air is 4.89 x 0.90 = 4.40 grams (about 4.4ml).
At 30C, 30% relative humidity, the maximum amount of water you could extract from a cubic metre of air is 30.4 x 0.30 = 9.12 grams (about 9.1ml).
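A minimal sketch of the same arithmetic, using the saturation figures above (the 20C / 50% line is an extra hypothetical data point of mine):

// Maximum grams of water extractable per cubic metre of air, assuming you could
// condense it all out: saturation content (g/m^3) * relative humidity.
#include <cstdio>

int main() {
    const double sat_0C  = 4.89;   // g/m^3 at 100% RH, 0C  (engineeringtoolbox data linked below)
    const double sat_20C = 17.3;   // g/m^3 at 100% RH, 20C
    const double sat_30C = 30.4;   // g/m^3 at 100% RH, 30C

    std::printf("0C,  90%% RH: %.2f g per m^3\n", sat_0C  * 0.90);  // ~4.4 g
    std::printf("20C, 50%% RH: %.2f g per m^3\n", sat_20C * 0.50);  // ~8.7 g (hypothetical extra point)
    std::printf("30C, 30%% RH: %.2f g per m^3\n", sat_30C * 0.30);  // ~9.1 g
    return 0;
}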
While the paper references the papers it pulls its numbers from, the authors do not seem to be clear on what they are talking about, nor do they seem to understand how to present the data in a coherent manner. IMO, it reads like "publish or perish" filler fluff.
-------
Data on Heat of Vaporization:
https://www.engineeringtoolbox.com/water-properties-d_1573.html
Data on Maximum Water Content vs Air Temperature:
https://www.engineeringtoolbox.com/maximum-moisture-content-air-d_1403.html
The study opens with: Misinformation spreads rapidly on social media, before professional fact checkers and media outlets have a chance to debunk false claims.
And near the end concludes: Both models performed best when only using the evaluations from those with high political knowledge.
I wonder what sort of "facts" are best evaluated by "those with high political knowledge"?
a) Science?
b) History?
c) Philosophy?
d) Propaganda?
As someone who was born and raised (until we left) under Communism, I have no hate for anyone, nor do I see other people as my enemy.
Love of your homeland is strong in people - regardless of the government in power - but love of your homeland is not the same as love of the government.
I find the failure to distinguish between "the people" and "the government" very common among Westerners. I suspect, this occurs because Westerners have the luxury of political tribalism with its attendant hate of other political tribes. Or, in other words, people under Communism (or any Totalitarian system) don't have the luxury of political tribalism and all the "two minute hate" it rallies forth against its opponents.
Most people just want to get on with their lives without hating anyone or having anyone hate them. So you keep your head down and ignore the politics - except, it seems, in the luxurious West.
The explanation of the methodology is quite good, but the discussion in the paper is designed to push the narrative that their algorithm tends to promote conservative / right leaning tweets more than liberal / left tweets.
The raw data is missing, along with data on the political leanings of those engaging with the tweets. From what I gather on the Internet, in the US and UK, the users skew to the left. (In the US, 60% of Twitter users lean Democrat, 35% Republican : https://www.pewresearch.org/internet/2019/04/24/sizing-up-twitter-users/).
As the paper notes: The selection and ranking of Tweets is influenced, in part, by the output of machine learning models which are trained to predict whether the user is likely to engage with the Tweet in various ways (like, reTweet, reply, etc). [SI 1.14]
I think The Economist expressed it best when it took a look at Tweet favouring in 2020: The platform’s recommendation engine appears to favour inflammatory tweets https://web.archive.org/web/20200803093134/https://www.economist.com/graphic-detail/2020/08/01/twitters-algorithm-does-not-seem-to-silence-conservatives
Those inflammatory tweets are exactly the ones that are going to get engagement. As the paper notes, the only type of tweets they considered were: We then selected original Tweets authored by the legislators, including any replies and quote Tweets (where they retweet a Tweet while also adding original commentary). We excluded retweets without comment. [p3, Results]. The rationale for excluding retweets without comment was: attribution is ambiguous when multiple legislators retweet the same content [ibid]. I think there is an additional problem / bias (I suspect, but have no data to support this): people will retweet without adding a comment if they agree with / support the tweet, but are more likely to attach some editorial comment ("SO STUPID!!!") when they disagree with / oppose the tweet. [Ambiguity note: I find the paper ambiguous on whether it is only "legislator" retweets without comment that are ignored, or whether that includes any "engaged" retweet of a legislator's tweet / retweet without comment.]
Since humans - like all animals - have evolved to watch and attend to (i.e. "engage with") any real or perceived threat, tweets that are "oppositional" will garner more attention. Since (at least in the US and UK) Twitter users are left-leaning, they will engage with what they perceive to be threats - which is what the algorithm will serve up to them, and which will come from the other side of the political aisle. QED. Or to repeat what The Economist said: The platform’s recommendation engine appears to favour inflammatory tweets
-------
Why the paper is self serving:
In the main body, it argues: With the exception of Germany, we find a statistically significant difference favoring the political right wing. This effect is strongest in Canada (Liberals 43% vs Conservatives 167%) and the United Kingdom (Labour 112% vs Conservatives 176%).
Yet, when you look (in Canada) at the amplifications of individual legislators, you see that the Liberals and Conservatives (almost) perfectly mirror each other (Chart 1C) - i.e. the amplification of individual members of the Liberal or Conservative parties is pretty much the same, yet the group amplifications are very different. The paper says this "discrepancy" is explained in SI 1.E.3 (which, I think, is meant to be SI 1.5.3).
It is easy to see that if the amplification a(G) of a group G were a linear function of the amplifications of its individual members i, ... [then] individual amplification parity would imply equal group amplification [SI 1.5.3] (substance of the quote - the equations didn't come through).
However, our definition of amplification does not satisfy this requirement. To see why, consider the function f(G) = |U_{T_G}|, where T_G is the set of Tweets authored by members of the group G and U_{T_G} is the set of users who registered an impression event with at least one Tweet in T_G. The function f is a submodular set function exhibiting a diminishing-returns property: f(G ∪ H) ≤ f(G) + f(H). Equality would hold if Tweets from groups G and H reach completely non-overlapping audiences. [SI 1.5.3] (Again, apologies for the not-quite-100% quoting, but ... equation problems.)
This means that the Conservatives are putting out a much wider range of tweets than the Liberals (remember, I'm looking at the Canada result / conclusion). Recall, from Chart 1C, that individual Liberal and Conservative legislators get about the same amplification, but when we aggregate the amplification by group, the Liberals get less amplification than the Conservatives.
But the aggregate is a submodular set function: if the Liberals are all pushing the same tweet ("Conservatives Evil!"), then each individual Liberal will get their individual "amplification", but the aggregate amplification will be for that one tweet and consequently lower because of the high audience overlap; if individual Conservatives are tweeting all over the place ("Liberals Evil!" or "Crystal Skulls" or "We're not the Liberals!"), each Conservative will get their individual "amplification" (which, more or less, matches the individual Liberals), but the aggregate group amplification will be higher because there is less overlap between the tweets' audiences.
This leads to (at least) two different ways to spin it: (1) Liberals are focused and on point, Conservatives are all over the place; (2) Liberals share only one voice, Conservatives have many individual voices.
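To make the submodularity point concrete, here is a toy sketch (my own numbers, nothing from the paper): two parties whose individual members each reach exactly three users, but whose group reach - the size of the union of their audiences - differs wildly because one party's audiences overlap almost completely.

// Toy illustration of why per-individual amplification parity does not imply
// per-group parity when group reach is a submodular "unique audience" count.
#include <iostream>
#include <set>
#include <vector>

int main() {
    // Hypothetical audiences reached by three legislators in each party.
    // Each individual reaches exactly 3 users, so "individual amplification"
    // is identical across parties.
    std::vector<std::set<int>> partyA = {{1, 2, 3}, {1, 2, 3}, {1, 2, 3}};   // heavy overlap
    std::vector<std::set<int>> partyB = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};   // little overlap

    auto groupReach = [](const std::vector<std::set<int>>& members) {
        std::set<int> audience;                       // union of all members' audiences
        for (const auto& m : members) audience.insert(m.begin(), m.end());
        return audience.size();
    };

    std::cout << "Party A group reach: " << groupReach(partyA) << "\n";  // 3
    std::cout << "Party B group reach: " << groupReach(partyB) << "\n";  // 9
    return 0;
}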
Now, many countries (apart from the US) have multiple parties. The paper focuses on Comparing the amplification of the largest mainstream left- and right-wing parties in each country [SI Figure S1A] and ignores all the other parties. In Canada, there are 2 other parties listed (NDP and BQ, both leftist - indeed, the BQ has higher amplification in Canada than the Conservatives). Why aren't the Left and Right aggregated together so we can see the Left / Right amplification? Why is amplification provided only for individual parties, but then generalized as "the right wing gets more amplification"? The Liberals + NDP + BQ are 3 left voices vs the one Conservative voice in Canada.
We should also ask what the binary left / right "amplification" was for the other countries, rather than quoting only the per-party amplification and then presenting that as representative of left / right amplification:
UK : 3 left + 1 right
Germany : 3 left + 3 right
France : 3 left + 4 right
Japan : 2 left + 3 right
Depending on the jurisdiction, "opt-out" may not be considered "informed consent" and, hence, is illegal.
We've seen this with how Europe (and California, I think) deals with tracking cookies - you have to opt-in to tracking cookies, not opt-out.
Not sure I've seen debates on web-crawling, but I have on tracking, cookies, etc and I'm sure similar arguments would apply.
>> The difference is that death through disease is no longer common in the more developed parts of the world. Typhoid was part of normal life, punctuated with outbreaks of cholera and smallpox.
While true, there has been plenty of noise over the past 40 years or so about some sort of upcoming pathogen scenario that would end the Golden Age ushered in by vaccines and antibiotics.
I recall the '76 Swine Flu (over hyped), '76 Legionnaire's Disease, '81 AIDS, '03 SARS, '14 Ebola (exciting). Not to mention the oft repeated warning that we are running out of effective antibiotics as strains of resistant bacteria proliferate.
It is nice to imagine that people will become more self-conscious of personal and interpersonal hygiene - but I am not holding my breath.
So, the question is: will humans continue the Golden Age against Mother Nature or will Mother Nature smack us down?
>> It still baffles me that some people don't get it.
It's a question of risk tolerance.
Some people won't get into a plane because - you know - it's the difference between missing out on visiting a new place and ending up in a body bag.
Even if COVID's mortality rate was 90%, there would be those who would be willing to take their chances.
Life is full of risks and everyone is going to deal with them differently.