Re: what is this mmHg that you speak of
For an El Reg unit of pressure, how about PHBs? As in, PHBs "applying pressure". Or maybe you could do it in "deadlines". Same concept.
You can find leaks in high pressure air systems by squirting soapy water onto the suspected location; it'll form bubbles and foam where the leak is.
A similar substance might help find them in a vacuum. It would have to maintain a liquid state while in a vacuum though.
Alternately, how about a gas that produces a recognizable signature, glows under UV light or turns into ice crystals as it expands into the vacuum, or something equally visible that you could shine lights on and see "something" out in space... ?
Certain CFCs mixed with oil or alcohol might do this last part, and then you'd see reflective things coming out through the hole [which would then sublimate, but hopefully the right combination would show up long enough to see the leak], and hopefully show up in a visual scan of the outside.
Also a possibility, there's a kind of tire filler (like 'fix a flat') you can use for self-repairing bicycle tires, which if it is contained within a layer between an inner and outer wall of a compartment, could self-seal against most leaks. This would be pretty cool if a module were inflatable, as the outer skin could be layered and contain such a material. I guess it'd be "fix a flat" for the ISS.
Almost every business on the planet
For JVM and Python on the BACK end? You forgot to consider PHP, particularly within a Linux hosting environment.
JVM not so much either... not unless you write Java desktop applications or Android applications.
Python on the backend, seen that - with Django. I hope I never see that AGAIN.
At least with Java, future devs will be able to do client applications and not just web pages. It's like a stepping stone for a native (read: proper) language like C or C++. Heh.
Python, on the other hand, seems to be way too encumbered with its "scriptiness". For a beginner I'm sure it seems cool, just like BASIC did back in the day. However, it's a poor fit for writing maintainable and reliable code that's not overly dependent on 3rd party library HELL, or [worse yet] vulnerable to some incompatible change made to Python itself. And 'pip' is just a stopgap that hides the weakness, especially when downloading 'the latest' breaks something. And so I do not believe it is quite ready for "prime time".
Still, I think Python is great for LEARNING, and for quick scripts and prototypes and wrappers for things like GTK and WebKit. But I wouldn't write a commercial application with it.
It's also good for demonstrating an algorithm or a process to people who are novice programmers. Chances are that if your example is in Python, they will be able to run it and learn from it.
Not surprised it's top of the rankings for a school that is teaching programming.
Does anyone happen to know WHICH version of DevStudio introduced this possibility?
I've been using 2010 for a long time, mostly because I *STILL* target Windows 7 [and earlier] and I *REFUSE* to use an IDE with a 2D FLATSO interface. I do _NOT_ write "UWP" crap, either.
But now it seems that I have even MORE reasons to _NOT_ use a newer DevStudio, if project files that it opens can SPREAD MALWARE like opening a spreadsheet, or a Word document, or using Virus Outbreak (MS Outlook) for e-mail... [assuming more zero-days exist for it, as past performance would indicate]
Micros~1, you need to get your act together on security.
(captain obvious now goes back to working)
I've been having to repair a lot of things recently, for whatever reasons [probably because the stuff is just old], like game consoles, a monitor, even the KVM. More than half of it was NOT purchased on Amazon. But the rest was. Still, I think I've spent less on Amazon over the last year than in previous years.
I've also been trying to use 'other than Amazon' when I can. At the very least, my choice causes me to compare prices and delivery time/cost against whatever Amazon is offering. But sometimes it's about business, and prices, and service and "nobody else seems to have it". Still looking for those alternatives, though.
I just hope that, for any site that (unfortunately) uses captcha for anything [sometimes even government sites do this, like for renewing your car registration], this new "de-googled" version of chromium doesn't become as *BROKEN* as I perceive Firefox to have become. It has been my experience that the more heinous captchas [like the slow fade-in fade-out ones] nearly always FAIL with Firefox. Whether it is because I'm using a 1.5 year old version of Firefox, or whether the various privacy features in Firefox (even with scripting and cookies enabled) are causing it, I do not know. It is merely an anecdotal observation, along with my bombastic opinion, but I think I'm right about it.
And the API issue with existing chromium kind of supports what I'm thinking here, that non-google browsers get INFERIOR SUPPORT from google. At least, that's MY take on it...
Got the bulls (us) by the *WHAT* now???
[good thing I don't use google things, at least not directly]
I've never had a problem setting up or using bind to serve up any kind of serious DNS stuff, like a local LAN or a private domain name.
The only thing I've ever used dnsmasq for was a simple DNS+DHCP solution that let a user configure networking on a standalone embedded device via a phone or PC with a wifi connection. Since dnsmasq allows you to specify a single hard-coded name to connect to, you could set up the embedded device so that you press "the button" on the device for "config mode", use a phone to access it via wifi, then go to the web page "http://admin" (or whatever) and get a web page to configure it with, with dnsmasq also providing the DHCP address for the connected device. Simple stuff like that seems to make sense with dnsmasq, and you have to press the right buttons on the device to make it go into config mode like that [after which the device would have its wifi client set up and would go off and connect through the LAN, using the LAN's DNS and DHCP rather than its own]. So dnsmasq is never facing the public internet in this particular use.
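A minimal dnsmasq.conf for that config-mode setup might look like the following sketch - the interface name, addresses, and the 'admin' hostname are illustrative assumptions, not from any particular device:

```conf
# Only serve the device's config-mode wifi interface
interface=wlan0
# Hand a DHCP lease to whatever phone or PC connects
dhcp-range=192.168.4.10,192.168.4.50,12h
# The single hard-coded name: browsing to http://admin lands on the device
address=/admin/192.168.4.1
# Don't forward queries upstream while in config mode
no-resolv
```

Once the user finishes configuration and the device joins the real LAN as a wifi client, this whole config becomes moot, which is part of why dnsmasq fits the job.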
Trying to use something like dnsmasq to do anything MORE than "what I described" might be the actual problem...
(for my own network I've been using bind and the isc DHCP server for both IPv4 and IPv6, no observed problems, and the bind server also handles DNS for a domain I own, and I've been doing this for almost 2 decades, though a bit less for the IPv6 part)
exactly - a 'Total Inability To Sustain Usual Parameters' event resulting in a ginormous "ooh, aahh" ball of "system integrity loss" would have cost a LOT more and set them back a LOT further. 400 additional seconds of fuel would make ONE HUMONGOUS FIREBALL, after all.
A look at Apollo 2 through 6 would confirm this approach (wikipedia articles on them are interesting and appear to be accurate). After Apollo 1, they needed to be more careful to identify potential problems before they become fireballs. Similarly with the shuttle losses. Space is (currently) a dangerous business, just like flying was 100 years ago.
pile driver is interesting. Such a device would ALSO be good for MINING operations...
* use a liquid explosive that requires detonation (but is otherwise stable), similar to those diesel pile drivers [but of course little to no air] with hydrazine or something like it
* the piston could be made of a lightweight material, then bucket-filled by a scoop and arm until it's "heavy enough" - once in place, that is.
* a carefully designed pulley and cable to lift the piston (and drop it), so that the cable doesn't easily get all twisted when operating instructions arrive with hours of delay.
* autonomous droid for the most part. I think we have the tech for this kind of thing already...
Field test THAT one, yeah!
oh and if hydrazine (or a similar chemical) can be made to work like diesel fuel on Mars or in space, so much the better! Imagine, piston engine space rovers. Who'd a thunk it? [a swashplate design might work best]. Add peroxide to the hydrazine for an even better burn!
A couple of years ago I experimented with Wine and discovered there are some serious shortcomings. One of the biggest: having BOTH 32-bit AND 64-bit running at the same time basically did NOT work.
This completely screwed the ability to load things like DevStudio onto a Wine machine, or (for that matter) ANYthing that's "mostly 64-bit" but has some 32-bit executables here and there for some reason.
I was also disappointed in the *MESS* left behind when I went to uninstall the various wine packages in that particular VM. It wasn't pretty...
This _was_ more than a couple of years ago, so maybe it's fixed, now? I was actually considering the possibility of contributing to the project at that time. I wanted developer tools up and running for that reason. It was both for this _AND_ for CentOS, actually, but both seem to suffer from the same *kinds* of "Catch 22" level problems, inherent in the very nature of the projects.
Icon because I like the idea, but am disappointed in how it has dragged along...
I don't think it's 32-bit ARM that's at 'RISC' here (bad PUN-ishment) but 32-bit i386 specifically.
Raspbian/RPi OS is still 32-bit last I checked, though FreeBSD had 64-bit ARM for RPi 3 a couple of years ago.
embedded systems are still widely using 32-bit Linux for ARM (or in some cases MIPS I guess). It's smaller and runs slightly faster due to address width.
Not sure how many embedded systems [other than legacy] are using 32-bit i386 though. And maybe that's why they are considering dropping it. [although I've got some old Pentium III computers and motherboards that could be used for testing if they want 'em]
here is the ACLU's position:
(I sometimes agree with the ACLU, especially when it comes to individual rights and privacy)
Some news reports say that violent protesters used Tw[a,i]tter and Fa[e]ceB*** last year to coordinate THEIR illegal activities (riots, looting, autonomous zones). But I guess they have their OWN servers and aren't using AWS.
you can't really STOP criminals from abusing a platform. Trying to police all of it would be a monumental task. AWS was too quick to pull the plug.
For those engaging in criminal activity, USENET and IRC would have been easier, In My Bombastic Opinion, unless they were TRYING to get Parler in some kind of trouble along the way...
As for Parler using AWS, "all eggs" "one basket" and a few other things come to mind. Parler needs to NOT rely on JUST "the cloud", and particularly NOT a single cloud provider. And AWS seems to have proved themselves to be at least a _little_ hostile to their potential customers. It gives me pause for thought as to whether AWS or _any_ "megacloud" provider is worth the effort.
A 'private cloud', distributed geographically on servers and pipes that YOU own, would make a bit more sense. I've been pricing ISPs lately and have looked at quite a number of them, for a customer and for myself as well. Things *LIKE* AWS could STILL be a fallback when a sudden need exists for peak bandwidth. So the only cost of "you are off our platform" would be some temporary slowdowns.
Perhaps we should ALL consider this as a "what if this kind of 'cancelation' happens to ME" warning... that is, BEFORE putting all of our eggs into AWS's (or anyone else's) basket, and relying on NOT having some arbitrary, capricious, or even MALICIOUS decision by a provider (or group of providers) cripple our business.
I think that flash COULD have lived, but to do so, they would have needed to go open source and allow the community to assist with the security fixes.
For a while there was something called 'gnash', a 'GNU Flash', for those who've never heard of it. It worked pretty well for a while, but then Flash kept adding and changing things, making it incompatible with older players, and nobody updated gnash... so it *died*.
(Hopefully I've already described the situation well enough that the implications are obvious now and I don't have to become "Captain Obvious" and boringly explain it 'cause I'd really rather not)
From the article: Agree with other damn developers where you're putting your damn accessibility settings
First thing that entered my mind was to use the desktop settings so that it's doing "accessibility" out of the box already. FreeDesktop does a walkthrough here:
As for Windows, I _think_ it is built-in (more or less).
There's also supposed to be an Accessibility API for 'droid. I haven't actually used it (yet) but was under the impression that such settings _usually_ show up automatically...
I thought this had been settled by the OSs but maybe not.
FYI - most of the standard menu arrangements and hotkey assignments were defined by Apple and IBM.
With the Windows 3.0 SDK came a dead-tree manual on IBM's user interface spec. It was designed for OS/2 but Windows also complied with it, more or less, at that time.
I might suggest a common hotkey for application accessibility settings... maybe ALT+A or similar... (apparently i-things have a configurable button for this).
This wikipedia article has a list of common keystrokes used by winders and gnome:
For touch screens, what would work best? Needs to be easy for people with finger muscle issues or voice-only interfaces.
a general comment - default user interface colors that are NOT light blue on bright white, especially if ANY visual accessibility feature has been enabled [just assume it please]. That specific color combination [I'm talking to YOU, Apple, Google] is EXCEPTIONALLY HARD on eyes over the age of 50.
(respecting desktop themes would ALSO help a LOT, if not being done already)
Last I checked, people are still making 486-class CPUs for things like the PC/104 platform and other stuff that's mostly for embedded systems.
changing a hardware design might be difficult. But new designs should _DEFINITELY_ use something else [like ARM].
The question is whether or not these legacy systems have any new development or need for security patches...
But it's worth pointing out that, on platforms that can use both 32-bit and 64-bit [most x86 and ARM64], the 32-bit code is probably going to be a little bit faster, and a little bit smaller, due to 64-bits vs 32-bits for memory addresses. Abandoning 32-bit support in its entirety would be a MISTAKE.
But abandoning support for older processors... I guess they could just let the people who actually USE them submit patches themselves. THEN the cost of maintaining the legacy hardware vs maintaining support in the kernel might change something down the road [and devs can spend more time working on things that are more relevant to most of the Linux implementations].
sorta reminds me of the 80:20 or 90:10 [or whatever] rule, about 80% of the code taking 20% of the effort, and the other 20% taking 80% of the effort, usually supporting features that are used a fraction of the time, but taking up WAY too many resources to do it. Just a concept, but seems to be accurate In My Bombastic Opinion.
What this needs is an acronym.
From the article: the report calls for better software defect detection and remediation of identified vulnerabilities
B.S.D.D.n.R.O.I.V (ok maybe not)
But I usually solve these *kinds* of problems through Super High Intensity Testing.
And that's an acronym that's easy to remember!
NATIVE CODE is nearly ALWAYS better. Do only what's needed, and do it on the server. And it should be EFFICIENT code, and not "grab everything _AND_ the kitchen sink, 'just in case'". You don't need to thumbnail every file before you can select one, especially when a directory contains HUNDREDS or even THOUSANDS of files. For example, do a GNOME or MATE 'file open' on files in /usr/bin - see what I mean?
At some point the server operators will STOP stealing CPU from the clients and realize how inefficient their processes have been, when they NECESSARILY move it to the server side and discover the resources that doing things "that way" actually consumes!!!
You know what "real" engineers and architects do for the majority of the time? Yeah, it's documentation.
Sadly, no. [although I'm doing docs at the moment, seriously]
I run into 'lack of proper documentation' a LOT. I think most others do as well. If "Stack Overflow" is the best source for information on a programming language or platform, then the official documentation is either poor quality or missing.
LBJ's last year in office - that would be the shootings of MLK and Bobby Kennedy (former attorney general, brother of JFK), among other things. Yeah no CIA involvement in THAT, either... [like maybe Wednesday's possible "false flag" operation by SOMEONE/THING, rabble rousers probably infiltrating what should have been 100% peaceful, cameras hyper-focus on the <1% involved in illegal activities, etc.]
black helicopter icon, of course
Yesterday's mob were the wrong kind of "peaceful protestors", or useful idiots.
I have to agree with you on that one. Yet it does not explain nor refute HOW they ended up "there" doing "that". And the quotes were added by me, fixed for ya.
Once the investigation and arrests happen, we'll know more.
someone remind me of how the 1929 stock crash happened, again??
At the center it had something to do with BANK SPECULATION and the loaning of money to people to PURCHASE STOCK, as I recall...
Yeah no resemblance *HERE*. Not like crypto-currency COULD be manipulated easily or anything. I heard this happened to the GBP a few years ago. What was the name of that guy wot dun it... "broke the bank of England"... right on the tip of my tongue...
"That's the point of ~~capitalism~~ evil capitalists"
Fixed it for you.
The point of capitalism is for people to earn something of value based on the value and quality of their work, and then use that 'something of value' (like money) to purchase goods and services, etc., the way that human societies have worked since prehistoric times. It has NOTHING to do with exploitation. Evil, on the other hand, has EVERYTHING to do with exploitation. And that's the point.
but whether the people behind Qt's heading-towards-closed-source maneuver are evil capitalists... that will most likely become obvious at some point.
One of the things I like best about wxWidgets is that it's possible to port an MFC application to one that uses wxWidgets if you understand the differences well enough. Other than the names of functions, which could be handled by a set of 'sed' lines in a shell script, you have to alter how Windows messages are handled as 'events'. It's similar but not the same, and requires actual thought to re-write, but I've done it a couple of times and I like the results.
As a result, if software had been written in C++ using MFC for Windows, chances are a Linux version or a portable version that uses wxWidgets for both windows _AND_ "everything else" could be practical.
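To make the "similar but not the same" point concrete, here's a side-by-side sketch of the event wiring (class, handler, and ID names are invented for illustration; real code needs the MFC or wxWidgets headers and won't compile standalone):

```cpp
// MFC: a message map in the implementation file
BEGIN_MESSAGE_MAP(CMyDialog, CDialog)
    ON_BN_CLICKED(IDC_GO, &CMyDialog::OnGo)
END_MESSAGE_MAP()

// wxWidgets: the same wiring becomes an event table
wxBEGIN_EVENT_TABLE(MyDialog, wxDialog)
    EVT_BUTTON(ID_GO, MyDialog::OnGo)
wxEND_EVENT_TABLE()
```

The handler bodies themselves mostly port mechanically; it's this wiring, and the wxCommandEvent& parameter that wx handlers take, that needs the actual thought.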
if you make shared libs with C++, at least make sure no symbols are exported that aren't declared 'extern "C"' especially if you want 100% compatibility between, let's say, both CLANG and GCC applications using it...
what you do INSIDE the library should be abstracted and encapsulated, anyway. Anything ELSE would be bad programming habits.
then other languages (Python, Perl, etc.) could have bindings to your library without too many hoops to jump through.
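As a sketch of that idea (the `counter_*` names and the class are made up for illustration), the whole C++ implementation stays internal and only a flat C ABI gets exported:

```cpp
#include <cassert>

// Internal C++ implementation - never exposed across the .so boundary,
// so name mangling and C++ ABI differences between compilers don't matter.
class Counter {
public:
    void increment() { ++n_; }
    int value() const { return n_; }
private:
    int n_ = 0;
};

// The only exported symbols: a flat, extern "C" API over an opaque handle.
// CLANG- and GCC-built clients (and Python/Perl FFI bindings) can all use it.
extern "C" {
    void* counter_create()           { return new Counter(); }
    void  counter_increment(void* h) { static_cast<Counter*>(h)->increment(); }
    int   counter_value(void* h)     { return static_cast<Counter*>(h)->value(); }
    void  counter_destroy(void* h)   { delete static_cast<Counter*>(h); }
}
```

A Python ctypes binding then only needs those four symbol names, not any knowledge of the C++ class layout.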
I was _EXTREMELY_ disappointed when KDE appeared to go "all 2D FLATSO" like Win-10-nic and the chrome browser and "Austrails". Is KDE's 2D FLATTY-ness a direct result of changes to Qt? Because if that's the case, what's the point of anything 3D (like best-case use of 3D acceleration) in future Qt versions, if it's *IRONICALLY* 2D... [and OpenGL is still "a thing"]
it's not impossible to have dual licensing, a GPL license for any open source project that distributes it, and a private non-GPL license for people who want to ship binary-only versions. since the creators own the code they can do what they want with it, pretty much, even if the two licenses conflict.
Seriously, I like to offer both a BSD-like or MIT-like license along with GPL for stuff I put "out there" as open source, and THEN give whoever distributes it the choice of which open source license to use. Wanting to control how people use something (you can't play with MY toys unless you do it MY way) isn't very "free", In My Bombastic Opinion.
It's sorta like: if you give a gift and then dictate (too many) terms on its use, it's not a gift, it's more like a lease.
Batteries are designed to be replaced
Ideally not so often that you might as well use throw-aways
(from the article)
batteries comprised of more abundant materials
This _I_ like. I've heard good things about aluminium-ion types of designs [whether it's exactly that, or some derivative of it]. Lithium being a relatively scarce material would eventually make it more expensive. However, other materials that are much more abundant would make "better" batteries that are physically larger and heavier. For many applications the latter battery might actually be a better idea. I'm thinking hybrid cars and inexpensive laptop computers, specifically... things that an extra pound or two isn't gonna hurt, especially when minimal cost is one of your goals.
As for overall capacity, the improvements made to lead-acid batteries over the years to extend THEIR life hit a kind of plateau but still might reflect the *kinds* of things that could be done to LiPo, such as a method to increase the surface area of the lithium side, better electrolytes to improve power density and recharge cycles, yotta yotta.
But yeah, those damned laws of physics and chemistry keep getting in the way of our battery pipe dreams.
dendrites, I assume similar to 'whiskers' in electronics, except inside batteries...
one method that seems to work about half the time in old NiCd batteries is to short them out, rapidly charge at several times the C rating before it starts to overheat, then rinse and repeat until it holds a charge. I've done this both successfully AND unsuccessfully. YMMV. Don't let it catch fire.
not sure how you could address that with a charge controller. Single cell systems maybe, but multi-cell systems would get cell reversals and other serious problems. Maybe ICV detectors to indicate where the bad cell is and either auto-jumper it or shut down the battery so you can manually jumper it out. That might work, actually. But it would only extend the life of the battery array, not the cell itself... and single-cell things (like phones, slabs) probably wouldn't benefit.
During deep discharge, cell reversal is a major problem, so maybe ICV monitoring could extend discharge levels by allowing you to go beyond the usual voltage limits as long as there's no cell reversal...
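A minimal sketch of what that per-cell (ICV) check might look like in a charge controller's loop - the function names and the 0 V reversal threshold are my own assumptions, not any real BMS API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Return the indices of cells whose individual-cell voltage (ICV) has
// gone negative, i.e. reversed. Illustrative only: a real controller
// would read these voltages from per-cell sense taps.
std::vector<std::size_t> reversed_cells(const std::vector<double>& icv) {
    std::vector<std::size_t> bad;
    for (std::size_t i = 0; i < icv.size(); ++i)
        if (icv[i] < 0.0) bad.push_back(i);
    return bad;
}

// Keep discharging only while no cell has reversed - the "go past the
// usual pack-voltage cutoff safely" idea, with per-cell monitoring as
// the guard instead of a fixed pack-voltage limit.
bool safe_to_continue_discharge(const std::vector<double>& icv) {
    return reversed_cells(icv).empty();
}
```

The same index list is what an "auto-jumper" would act on: bypass the flagged cells, or shut the pack down so they can be jumpered out manually.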
I recall that back in the DOS days the chkdsk program was SO bad [creating a bunch of cross-linked files more often than not] that the ONLY way to properly fix the file system was to use Norton Utilities.
But for this NEW b0rkage, if I read the article right, you could do a chkdsk /f in "offline mode" (or is that recovery mode?) and manage to recover the disk. Or is this NOT the case?
I know how easy it is to recover a b0rked Linux system. I normally use a set of tarballs, then re-install the OS and un-tar my backups onto the system. You could even partition it yourself and use tarballs to restore the ENTIRE OS. Windows is the opposite: the registry makes that nearly IMPOSSIBLE without ghosting the entire drive. And specialized Windows backup software does NOT impress me.