Re: Radioactivity on your lap ?
But the article was specifically about the development of an alphavoltaic system.
It's worse than that: if you want 100 W of electricity then you'll have to dissipate substantially more than that in heat, because the thermocouples that convert heat into electrical power aren't very efficient (<10%).
New technology may improve things. I recently read about a company called Fourth Power, which has demonstrated a thermophotovoltaic cell with over 40% efficiency, although they may be using temperatures somewhat higher (up to 2,400°C) than you could use with radioisotopes. Would anyone fancy a white-hot can of liquid Pu-238 inside their nuclear battery?!
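A back-of-envelope sketch of the heat budget, using the 10% thermocouple and 40% thermophotovoltaic figures above (everything else is illustrative, not a real design):

```python
import math

# Back-of-envelope heat budget for a radioisotope battery.
# Efficiencies are the figures mentioned above; nothing here is
# a real design, just arithmetic.

def thermal_power_needed(electric_w, efficiency):
    """Total heat the source must produce for a given electrical output."""
    return electric_w / efficiency

def waste_heat(electric_w, efficiency):
    """Heat that must be dissipated (total heat minus electricity out)."""
    return thermal_power_needed(electric_w, efficiency) - electric_w

# 100 W of electricity from ~10%-efficient thermocouples:
print(thermal_power_needed(100, 0.10))  # 1000.0 W of heat in total
print(waste_heat(100, 0.10))            # 900.0 W to dump somewhere

# The same 100 W from a ~40%-efficient thermophotovoltaic cell:
print(round(waste_heat(100, 0.40)))     # ~150 W to dump
```

So even the good case leaves you shedding more heat than the electricity you get out.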
None of this shit is ever going to power your everlasting-gobstopper laptop.
Icon because fire!
What's all this drivel for? So people can claim to be at the forefront of IT? All I see is complexity and lots of space for holes.
I also saw this, "Trivy version 0.69.4". 'nuff said.
"*An* answer. The point being it may not be true."
It appears that there isn't a single answer, because of how it works. Data and metadata typically have different write profiles (the default is 1 copy of data, 2 of metadata), so it would need to know the future mix of the two to give a single accurate answer. It could probably do better, though.
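A toy model of why that ambiguity exists — this is nothing like btrfs's real space accounting, just an illustration of how the answer depends on a future data/metadata mix the filesystem cannot know:

```python
# Toy model: with 1 copy of data and 2 copies of metadata, each
# logical byte costs a different number of raw bytes depending on
# whether it ends up as data or metadata.  Illustrative only.

def usable_space(raw_free, metadata_fraction, data_copies=1, metadata_copies=2):
    """Logical space that fits into raw_free for a guessed metadata fraction."""
    cost_per_byte = ((1 - metadata_fraction) * data_copies
                     + metadata_fraction * metadata_copies)
    return raw_free / cost_per_byte

raw_free_gib = 100  # GiB of raw free space on the device
print(usable_space(raw_free_gib, 0.00))  # all data, single copy: 100 GiB
print(usable_space(raw_free_gib, 0.05))  # 5% metadata (2 copies): ~95.2 GiB
print(usable_space(raw_free_gib, 1.00))  # all metadata: 50 GiB
```

Any single number `df -h` could print is a guess somewhere along that range.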
"No, that is ZFS."
I said they claim that for btrfs; I don't know that it's actually true, but I've not seen any reason to doubt it. But if writes don't happen in the correct order and are interrupted midway, then there may be a problem. I've never had a problem here, but I am quite selective in which features I use. Some parts of it appear robust; other parts are, by their own admission, not production ready. FaceFuck apparently use btrfs extensively, and contribute code to the project, but are themselves selective in which features they use.
"> I'm slightly confused (or better yet -uninformed) - what are the biggest benefits oh using bcachefs over btrfs?"
This is complex and depends on what features you want to use. The only real answer to this is to spend time reading up on them, but much of the info found by search engines is out of date. Bcachefs promises to do everything btrfs was supposed to do, and more, but I think it's still early days.
"1. It cannot reliably report free space. In other words, the `df -h` command lies to you. This means software can't check if it can safely do something without risking filling the volume."
This is annoying. btrfs's own tools can tell you the answer, but interpreting the results may require knowledge of the filesystem architecture.
"2. Any attempt to write to a full Btrfs volume _will_ corrupt the volume."
I'm sure I read somewhere that's no longer the case, and that it will reserve some space for key filesystem operations. Copy-on-write means that any operation (even a file delete) needs free space for the writes. It is worth noting that btrfs allows you to use a greater percentage of the available space than ext4, because inodes are not allocated in advance. I have a btrfs filesystem that's currently 99.9% full; it's got a bit slow. Any filesystem will be troublesome when near full.
"3. There is no working `fsck`. The repair tools do not work. SUSE relies on Btrfs and SUSE's docs say, with a bright red WARNING heading, "do not attempt to use `btrfs-repair`.""
I think the claim is that the nature of the fs is such that it cannot become inconsistent, and therefore fsck is not needed. This does depend on writes being made in the correct order, so beware of writeback caching. I have never had a problem.
There is another nasty gotcha I've read about. If you have a two-disk raid1 and one disk fails, you only get one opportunity to mount the degraded filesystem read/write, so you must replace the failed device during that mount, because replacement requires writes. Subsequent mounts of the degraded filesystem will be read-only, because it is unable to satisfy the raid1 write profile with a single disk. So, unlike mdadm raid, you can't keep using the degraded filesystem while you await delivery of a replacement disk.
"So you can have some zram as well as a swap file, and it will move the least recently used stuff from zram into disk swap as needed."
I was looking at these recently, and have been unable to find out whether data remains compressed when zram moves it to disk swap. I feel that if it did, this would be a feature worth telling people about. My understanding of the swap system's architecture is that if the zram swap device fills up, the existing data remains there, and new data swapped out of memory goes (uncompressed) to the disk swap. If data were moved out of zram swap to disk swap, it would be decompressed as it left zram.
Zswap does write compressed data to the backing swap disk, reducing the amount of writes, which is good for SSDs.
It appears to me that zram is only good if you have no disk swap, and zswap is the better option if you do.
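A sketch of why zram still pays off when it's the only swap — the 3:1 compression ratio is a ballpark I've seen quoted, not a measurement:

```python
# Rough model of zram capacity: compressed pages live in RAM, so the
# RAM zram actually consumes is the swapped-out data divided by the
# compression ratio.  Numbers are illustrative.

def ram_used_gib(swapped_gib, ratio):
    """RAM consumed by zram to hold swapped_gib of page data."""
    return swapped_gib / ratio

swapped = 3.0   # GiB of pages pushed into zram
ratio = 3.0     # assumed compression ratio
print(ram_used_gib(swapped, ratio))             # 1.0 GiB of RAM holds 3 GiB
print(swapped - ram_used_gib(swapped, ratio))   # 2.0 GiB effectively gained
```

The gain only holds while the data actually compresses that well, of course.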
I always encrypt the swap partition, because otherwise raw memory contents may be sat there in the open.
But can we be sufficiently certain of the trajectory of the asteroid (before and after an impactor) far enough ahead of time to know we've done the right thing?
An asteroid will need to be diverted years before its potential impact, it will take years to get a mission to it, and we'll probably need several years of observations to pin down its trajectory. The principle of a small change growing over time may allow us to amplify our small shunt into a larger deviation, but it also amplifies the uncertainty in its future trajectory.
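A rough illustration of how a tiny velocity change builds into a large miss distance over the years, and how any uncertainty in that change grows at exactly the same rate (illustrative numbers, simple linear drift — real orbital mechanics is messier):

```python
# A constant delta-v applied years ahead of impact drifts the asteroid
# off its old track; uncertainty in the delta-v drifts the prediction
# by the same linear rule.  Toy model, not mission maths.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def drift_km(delta_v_mm_s, years):
    """Displacement in km from a delta-v (mm/s) applied 'years' ahead."""
    return delta_v_mm_s * years * SECONDS_PER_YEAR / 1e6  # mm -> km

# A 1 mm/s nudge with 10 years of lead time:
print(round(drift_km(1.0, 10)))   # ~316 km of deviation

# A 10% uncertainty in that nudge means 10% uncertainty in the miss:
print(round(drift_km(0.1, 10)))   # ~32 km of "not sure where it went"
```

Given Earth's ~6,400 km radius, it's clear why both the nudge and the observations need to be a lot better than that.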
We're gonna need more, larger impactors and much better observations to be able to make use of this.
"Nvidia CEO Jensen Huang has often said Washington must allow export of his company’s products, to ensure US tech dominates the global AI industry."
I would expect China would use the American-designed chippery to try to get ahead in AI, rather than wait for their own chippery to catch up first. Then once the Chinese chippery does catch up, they'll shift their AI systems over to it. I'm sure they won't allow their AI models to be stuck on foreign hardware.
Old methods of plate glass manufacture couldn't get the thickness very even. It makes sense to put the heavier bit at the bottom.
Glass is solid. An experiment you could try: smash some glass, find some sharp bits, store them for a long time, taking care not to damage the edges, and see if they're still sharp years later. If glass were a liquid you would expect surface tension to blunt the edges. I haven't done this, but maybe your descendants could report back in the future.
I recall it being pronounced "you-rain-us" until the flyby, when television started fretting about the mispronunciation and gave us "yur-a-nus", this being the trigger for the Spitting Image sketch.
I always thought "yur-a-nus" was stupid, and anyone saying "your-anus" was an idiot. I've never understood why anuses are funny.
Microsoft, and maybe your clients, are the problem here, not Linux or its fanboys.
"The post to which the one above was responding even specifically pointed this out but was clearly ignored."
It wasn't ignored; the reply stated that you're stuck on the MS hamster wheel.
The free software people aren't going to all this trouble to create MS compatibility only to stop one inch short of the end just to be cunts. That last inch is clearly very difficult, or impossible. Whose fault do you think that is?
How much have you paid for Linux? I have my frustrations with Linux, but I've paid nothing for any of it, so I just have to live with it as it is, and occasionally let off a bit of steam here.
"The more I listen to the anti-AI people, the more I'm convinced that they are offended that software plus data shows humans are not unique or special in anyway."
I don't disagree with that. I always thought this would bring into question what intelligence really is, and that maybe people won't like the answer.
"Like it or not, AI has become the repository of cultural and technical memory."
My objection to all this nonsense can be summed up with one very simple question, namely "What the fuck are humans for?"
Culture has meaning because of the shared experience of humanity. How does culture (and technology) made by machines fit into this? Will we all become mind-slaves? What are humans going to do in the future that won't also be done by a machine? Pay for it?
MS-DOS never allocated drivers to the correct-sized UMB slot. It allocated to the largest one available, which was plain stupid; it should have allocated to the smallest one the driver would fit in. It was possible to direct it manually to a specific slot, although sometimes there might be another reason why that wouldn't work. It wasn't difficult to look at the memory map and figure out a better arrangement to fit more into the UMBs. MemMaker, which shipped with DOS 6, tried to do this, but was crap at it.
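The difference between the two strategies can be shown with a toy allocator — the block and driver sizes below are made up for illustration and have nothing to do with real DOS internals:

```python
# Toy upper-memory-block allocator comparing the two strategies:
# pick the largest free slot (what DOS did) versus the smallest slot
# the driver still fits in (best fit).  Sizes in KiB are invented.

def allocate(slots, drivers, best_fit):
    """Place each driver in a free slot; return the drivers that didn't fit."""
    free = sorted(slots)
    unplaced = []
    for size in drivers:
        candidates = [s for s in free if s >= size]
        if not candidates:
            unplaced.append(size)
            continue
        chosen = min(candidates) if best_fit else max(candidates)
        free.remove(chosen)
        free.append(chosen - size)  # leftover space stays available
    return unplaced

slots = [64, 40]      # two free UMB regions, KiB
drivers = [36, 60]    # two drivers to load high, KiB

print(allocate(slots, drivers, best_fit=False))  # largest-first: [60] left out
print(allocate(slots, drivers, best_fit=True))   # best fit: [] (all placed)
```

Largest-first wastes the 64 KiB slot on the 36 KiB driver, leaving nowhere for the 60 KiB one; best fit tucks the small driver into the small slot and everything loads high.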
"Isaac Asimov created the famous Three Laws of Robotics as ethical guidelines for fictional robots."
But that's all fiction. Reality is more difficult.
Asimov wrote lots of stuff. I don't know what, cos I find reading books to be mind-numbingly dull. But Wikipedia has the following sentence:
In a 1971 satirical piece, The Sensuous Dirty Old Man, Asimov wrote: "The question then is not whether or not a girl should be touched. The question is merely where, when, and how she should be touched."
"The big players aren't the root problem."
People are the root problem. Ordinary people. They use these things for their own selfish reasons without any thought as to the consequences to the greater good. The big players are simply catering to the market that they see.
I don't expect that to be a popular opinion.
That's because no alternative arrangements were made to guide people. Junctions without lights give priority to some vehicles, maybe not efficiently. Junctions with broken lights have no structure to guide people, and they're too selfish and arrogant on the whole to work together.
I don't quite understand what's been turned off. I have an old 3G phone in the UK and it's still working just fine.
I thought 2G had been turned off in some areas; years ago, when I had a 2G phone, I had no service whatsoever in central Bristol, the home of my network. That made getting a puncture sorted rather difficult.
Are you real or fake? I can't tell if you're joking. That's a future I don't want to live in. I don't get people at all, or understand how to interact with them, but seeing them being taken away and replaced with automation just leaves me hollow. Technology designed for use by non-techies is even more baffling than the non-techies. I don't see any reason currently to bother with the future. It'll be a little bit interesting to watch (until the means to see it breaks), and staggeringly depressing.