Re: Nothing there...
Actually, I was hoping they'd do "confirmation" and additional science that the others have not yet done (for whatever reason), and then share the data around the world so scientists can mull over it for years to come.
So far the best password manager I've found is KeePassXC (the cross-platform C++ fork of KeePass that can be compiled from source on Linux and FreeBSD).
There's even a button to make passwords visible. I use it a LOT so I can have longer, more random ones. And though it may be possible to auto-paste into a browser, I typically just copy/paste the passphrase from the KeePassXC 'edit' dialog box directly into the browser or ssh session. Or you could use the 'make visible' button to see the password and just type it.
(and I must have about 50 of them stored in there, now, because I *REFUSE* to use FB, T, G, or Micros~1 logins)
A carefully worded comment could illustrate how you might easily get past an AI algorithm.
(but I'll leave that one as an exercise)
This also reminds me of that old phrase, "the tail wagging the dog" when "the few" (easily triggered) must control "the many" (who lose freedom).
So humans with soft-touch moderation are needed to fight off the trolls and bots. However, an AI is more likely to behave like an aggressive spam filter, where e-mail from your mom is marked 'spam', but e-mail from scammers and 'male enhancement' vendors gets through. EVERY! SINGLE! TIME!!
The internet does not HAVE to be a sewer. But it is. Maybe a click-through disclaimer is needed?
do it automatically while revealing your current cell phone number and IP address along with other personally identifying information that was gleaned the last 92 times you used this method.
Would the 'app' that you would need to make this happen ALSO upload GPS tracking data from your location over the last several days so that "they" will know where you've been?
yeah no tracking going on here. Nothing to see, move along...
[it's bad enough when you use a credit card in a store AND online and when you visit the online page you see your in-store shopping history along with online history...]
Hardware Dongle = TRACKING - your identity is NOW KNOWN to the web site, uniquely so.
As IRRITATING as a CAPTCHA is, I'd rather use CAPTCHA than GET TRACKED on that level...
Only an ad-slinging over-present cloud network would come up with THAT as a "solution".
(at least cache clearing and VPN can anonymize you a little bit, even with CAPTCHA)
Unless you are running high-frequency snapshotting (and who does that on everything, especially file systems?), restoring from backup is a guaranteed loss of data.
Some time ago I had a hard drive that was developing bad sectors in a short period of time. It was my server box. Here is how I handled it:
* do separate backup of as much critical data as I can, data that is not corrupted.
* install OS onto new hard drive, plus the basic software needed, as quickly as possible
* swap hard drive
* restore important data from most recent backup
Now it is up and running. OK I spent a day doing that. Better than a WEEK.
Then I went about analyzing the old drive to see what stuff was recoverable, and what wasn't. In the meantime, the server was RUNNING.
FIRST, get it BACK RUNNING AGAIN. *THEN* you worry about data recovery. Human safety gets shoehorned into the front of the line, as needed.
But I don't know how easily their systems could be restored, which might suggest their backup and restore process was a part of the problem. So maybe my perspective is off a bit. Still, I think they OWE us an explanation, regardless.
In any case, you can get SOMETHING running fairly fast if you set things up properly with your backups. If you're missing a week's worth of billing, at least you did not STOP THE OIL FLOW.
I'm also thinking that if I had set things up better, i.e. having a backup hard drive waiting in the wings with identical software [minus data] on it, that I could just swap in the drive and restore the most recent data from backup, and be up and running in a couple of hours, and not most of a day. BETTER planning, yeah.
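For what it's worth, the "drive waiting in the wings" plan boils down to: keep a recent archive OFF the failing disk, reinstall the OS, untar the data. Here's a toy sketch of that in a scratch directory - the paths and the tar-based dump are illustrative only; a real server would use dump(8), rsync, or similar against the live filesystem:

```shell
# Sketch only: simulate "dump critical data, reinstall, restore" with tar.
# Everything happens under a scratch directory; nothing real is touched.
set -e
WORK=$(mktemp -d)

# 1. critical data living on the failing drive
mkdir -p "$WORK/old_disk/etc"
echo "server config" > "$WORK/old_disk/etc/rc.conf"

# 2. the periodic backup, kept OFF the failing disk
tar -C "$WORK/old_disk" -cf "$WORK/backup.tar" .

# 3. after the OS reinstall, restore the data onto the new drive
mkdir -p "$WORK/new_disk"
tar -C "$WORK/new_disk" -xf "$WORK/backup.tar"
```

The point being: if step 2 already runs on a schedule, steps 1 and 3 are the only things standing between you and "up and running in a couple of hours".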
And there should be a fine of 10 times your blackmail money to prevent this kind of thing from happening.
that would be a good start, yeah. step 1.
Hey Vlad Putin, you can earn some worldwide kudos by ACTUALLY SENDING those perpetrators to the modern day equivalent of a gulag... and THAT would be an EXCELLENT "Step 2"!!!
chrome (at least the versions I have seen) does not automatically delete privacy tracking info on exit, but Firefox can. For chrome (on Linux or BSD - windows, mac YMMV) you can either delete all of chrome's files in ~/.config and ~/.cache [which gives you back the defaults], or cherry-pick and delete MOST of them until you have removed all of the ones that track you, without deleting settings you want to keep.
I saw that in my main browser, which prompted me to re-try it in the "safe-surfing sandboxed" browser that has script enabled.
I tested it with chrome on FreeBSD [a version built from ports a while back]. I initially used my "kill history" script that deletes LOTS of those files that chrome tries to use to save data across sessions. I recently increased the size of that list of files to be deleted, when I discovered that I wasn't deleting enough of them any more (certain things were starting to persist across browser sessions).
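For the curious, a stripped-down sketch of such a "kill history" script is below. The file list is my assumption based on a common Chrome/Chromium profile layout, NOT a complete list - inspect your own profile directory before trusting it:

```shell
# Delete the files chrome uses to persist state across sessions,
# while leaving Preferences (your settings) alone.
# NOTE: the file list below is illustrative, not exhaustive.
kill_history() {
    profile="$1"
    for f in Cookies History "Web Data" "Local Storage" \
             "Session Storage" "Service Worker" IndexedDB; do
        rm -rf "$profile/$f"
    done
}

# usage (hypothetical path): kill_history "$HOME/.config/chromium/Default"
```

As I found out, the list needs periodic re-auditing - chrome grows new persistence files over time.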
*ahem*
In any case, I did the "deanonymizing" test twice and got two completely different IDs. It does seem to take a while, though. You'd have to do this completely in the background for it to be effective, and over a fairly long period of time.
But a social media giant that "keeps you on the page" for a while (or runs a web bug script even after a trackable page closes) might still find it practical...
maybe we could send up a few "hotel" modules with piano lounges, restaurants, fully stocked bars, and other typical guest accommodations.
Space tea, space coffee, and space croissants at 7 AM, every day for the interplanetary breakfast bar.
And every lounge with a "stellar" view of the, er, stars.
If they simply made it POSSIBLE to load a non-app-store application [similar to Android downloading and installing a non-store APK] this whole issue would PROBABLY go away...
After following the first link in the article, I'm reminded that Apple banned Epic's game because it allowed in-game purchases outside the Apple store. But I recall _other_ applications being banned by Apple for different reasons. If there are no exploits or gross vulnerabilities, WHY ban them?
Instead, Apple has made _THEMSELVES_ the gatekeeper of iOS, with the obvious motive of PREVENTING people from switching to an Android platform (as indicated in the article), as well as preventing "apps they do not like for some reason" from being deployed on iOS.
So it looks like they got the 'evil' part of the definition right. The 'necessary' part, not so sure.
regardless, iOS is great if it's what you want - I just don't see why they need a STRANGLEHOLD on "The Store" like that. I have to wonder how many customers they LOSE because of it.
eh, that's not _ENTIRELY_ true...
You're describing "Harvard Architecture" where code/data spaces are separate things. Your typical minicomputer never did this. In fact, PDP-11 code could even be categorized as "self modifying" when you put variable parameters after the function call, directly in INSTRUCTION SPACE, by using the previous program counter as a base register, and then cleaning the stack up with the 'RTS' instruction. Soft interrupts are similar, parameters are expected after the EMT instruction and the stack gets cleaned up when you return from interrupt. And to pass those parameters, you literally poke the values into the code space before making the call.
So it's worth pointing out that many non-IBM computer systems have had code/data in the same address space, particularly microprocessors and minicomputers. The big iron machines may have had separate code/data, but not necessarily all of them.
Anyway, some computer history from 50 years ago, from someone who was there...
[worth pointing out - AVR microcontrollers use 'Harvard Architecture' so that the program runs directly from on-chip flash]
it lacks proper input sanitization, and is therefore vulnerable to code injection.
How about that - the world's oldest 0-day exploit is a code injection vulnerability!
[I was actually expecting 'buffer overrun' when I started reading the article]
so yeah - in MY book of definitions, that'd be "a vulnerability".
I've been wanting THIS for a long time.
It might cause some initial problems, due to case-sensitivity of file names and the use of '/' vs '\', as well as drive letters and different device names/handling, but I believe if they were to "embrace" Wine, and migrate to a Linux kernel with Wine on top, we'd all be better for it.
I'd pay money for that, particularly if I can keep my Mate desktop and just use the subsystem to run windows applications.
you could use memory metal so that an expected change in temperature (heating or cooling, whichever works best) opens up the "space blanket", and it wouldn't require a whole lot of expense nor electricity.
A solar sail could work the same way, actually...
The I.R.S. doesn't like tax evasion much. So if anyone bought and sold a lot of bitcoin and made real money on it, and did NOT declare all that on the tax forms [it's treated as property for tax purposes and apparently has a special place to declare it, though I always use tax software that just asks me about it], then the IRS will be wanting to find out how much money you made and bill you for the unpaid tax, with interest and a LOT of penalties. Yeah, they do that.
thanks for the reminder, I need to do backups today. not the automatic daily "do the dumps and copy the incremental changes to 3 different machines on the network" kind, but the "burn it to DVD and put it in the safe" kind. just my own stuff, but still...
(I admit, I've been a bit lax on the DVD burning)
Well, it looks like there's a pull request for this over on GitHub:
https://github.com/audacity/audacity/pull/835
It's marked as 'closed' though. I was going to put my own $.10 in but looks like the pressure is on.
One specific thing they said: "If you are compiling Audacity from source, we will provide a CMake option to enable the telemetry code. This option will be turned off by default."
I expect that FreeBSD Ports and various Linux distros will be taking advantage of this, by NOT putting the tracking in.
(probably best to read their words from the pull request)
Local applications should NEVER have ANY tracking in them, REGARDLESS of opt-in.
I think the FreeBSD port might even need to include a patch to disable tracking COMPLETELY, or perhaps make a "no tracking" build option... that is NO TRACKING by default!
(to THINK they had the AUDACITY to add tracking...)
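If that CMake option pans out, a no-tracking build would presumably look something like the sketch below. The option name "audacity_has_networking" is my assumption, not confirmed - check the actual pull request for whatever they end up calling it. The function only prints the configure command (a dry run); nothing gets built:

```shell
# Dry-run sketch: print the CMake configure command for a no-telemetry
# build. "audacity_has_networking" is an ASSUMED option name; verify it
# against the upstream pull request before relying on it.
configure_cmd() {
    echo "cmake -S audacity -B build -Daudacity_has_networking=off"
}
configure_cmd
```

Packagers (FreeBSD ports, Linux distros) would just bake that flag into the port/package Makefile so end users never see it.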
reasons for soldered-in battery:
* thin profile [people want this]
* WAY less likely to have the 'flickering' problem that flashlights get when galvanic corrosion appears on the battery terminals [even with gold and silver, dissimilar metals and moisture and electrical charges still present]
* unlikely for the battery to be easily reversed, which would probably explode it and/or seriously damage board components.
etc. - there are many good reasons for soldering the battery in place. It also means you void the warranty if you change your own battery. But I agree it SHOULD be both possible AND reasonably easy to do so. And cracking the case open should take less than a minute and be repeatable.
that might be reasonable if you bump it up a bit, to account for the time it takes to change out a battery.
Keep in mind that just looking at a broken device and determining what is wrong with it might cost more than $50 in the USA (when you factor in costs of labor, time, equipment, number of techs needed to handle the expected demand, and so on).
Back in the day, TV repair shops had a minimum fee for this reason.
However, a price cap on battery repair *MIGHT* motivate engineers to make it easier to change the batteries out, in order to save time for the tech doing the replacement and make it cost-effective, and at least 'break even' on the repairs. Profit would be better, of course.
A plus for the industry: if this only applied to FUTURE devices, they could re-design THEM to be more easily repaired, and THEN use this mandated service as a "new feature" for the new phone/slab/whatever.
(I'm currently thinking of a case modification in which you simultaneously press pins into 2 or more waterproof holes on the sides of the case, where locking hooks are, and then slide it open - this could allow it to remain "thin" while also making it possible to open in a shop [or DIY repair], AND to assemble quickly, and ALSO rework a device that fails in manufacturing or is returned for warranty repair)
strangely, i-Things may be one of the biggest violators in this regard.
LiPo batteries wear out after a few years - usually around 3. A relative of mine has an i-Pad, a hand-me-down actually, and it needs a battery. I checked the procedure for repairing it, and it involves heating the glue around the screen to separate the halves so you can access it. The batteries ARE available, but the procedure is just as likely to damage the unit as let you repair it...
(a battery compartment where you can access it directly would be nice)
[Unfortunately, sending it in to get it repaired would cost half as much as a new one]
on a related note, I've repaired laptops a few times, and they are difficult enough to deal with. This is just impossible in my view. I can only see a cracked screen as a result of getting it slightly wrong.
the whole '.Not' thing (In My Bombastic Opinion).
The attempt to be all and do all for both back-end AND desktop on EVERY possible OS is just too, too much. And when Micros~1 began to use it on the desktop, in Windows Server 2003, performance was the FIRST thing that suffered. I know because I had Win Server 2k running on the same box, but 2k3 ran like a FAT BLOATED PIG, required over twice the RAM, and 3 or 4 times the CPU clock speed to even APPROACH equivalent performance.
That kinda said it all, to me [other than the humongous bloated shared lib collection that takes an hour or so to "update the indexes" after updating the libraries, even on MODERN computers!].
VB6 had multithreading
not multi-threading per se, as I recall. VB had a 'DoEvents()' [or similarly named] function that would call the message loop handler. It could make it LOOK like you were multi-threading, but it's cooperative multitasking, not pre-emptive.
In fact, this was present in VB 1
What I used to do for time-consuming operations: disable the window that was doing the task by setting its 'Enabled' property (I think that's right) to False, then do the time-consuming function while the hourglass cursor is visible, with the polling loop calling 'DoEvents()', and then set everything back once the operation is complete. Typical example: waiting for DDE transactions or file transfers.
VB used to be a decent rapid UI development tool that could assist writing a user interface in a short time, to help you test out ideas. Shipping a VB application to customers, however, DEFINITELY had its shortcomings.
Still, the whole "Rapid development" thing made learning the basics (pun intended) worth doing.
[then you'd re-write it in C or C++ or similar and ship THAT version, once the UI bugs and features were worked out]
I had something that worked well enough with VB 1, then VB 2 broke my hacks and I re-did them, and then VB 3 released and I had to fix a few more little things, and it never really did work quite as snappily as VB 2. I assisted others with some VB 5 stuff, but that's about it. I kinda gave up on VB. That shared lib always created install problems and refused to work on newer OS's, forcing you to upgrade to a newer VB, with fresh coding challenges to make your VB hacks work again.
Too bad, because when it first came out, VB 1 was _REALLY_ cool! Windows applications in minutes/hours instead of days/weeks.
I'm betting most ISPs don't support multicast to endusers though
I don't think any do. Even IPv6 wouldn't help.
To make it work you'd need more multicast IP addresses than are available - the net blocks are simply too small to make it work outside of a small network, unless you get really, really creative with the routers.
Sendmail is the built-in for FreeBSD. I got used to its quirks and it's still supported for integration with other e-mail related things (like Cyrus IMAP), at least last time I integrated the two - which has been a while, yeah.
Exim runs by default on Debian derivatives as well, last I checked. Since it listens locally by default, it's probably not a problem unless you open up the listening ports for LAN or (worse) Internet access.
(verified, Devuan recent distro running exim4, listening port 25 only on 127.0.0.1 and ::1, default out of the box config for mail as I recall)
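That loopback-only verification is easy to script. Below is a sketch that decides, from `ss -ltn`-shaped output (column 4 = local address:port), whether every port-25 listener is bound to a loopback address. The column layout is an assumption about the iproute2 `ss` output; on FreeBSD you'd parse `sockstat -l` instead:

```shell
# Return success only if every listening socket on port 25 is bound to
# a loopback address (127.0.0.1 or [::1]). Input is text shaped like
# `ss -ltn` output, where field 4 is the local address:port.
loopback_only_25() {
    ! printf '%s\n' "$1" | awk '
        $4 ~ /:25$/ && $4 !~ /^(127\.0\.0\.1|\[::1\])/ { bad = 1 }
        END { exit !bad }'
}

# usage: loopback_only_25 "$(ss -ltn)" && echo "MTA is loopback-only"
```

Handy as a quick sanity check after an MTA package upgrade, before assuming the out-of-the-box config is still sane.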
will join civilization and the respect of privacy.
Agreed. You'd think the ACLU would have more to say about this...
Also worthy of note, it may be that SOME (or maybe even ALL) of this tracking data is related to logins. How many phone-things will allow you to log in via Fa[e]cebook or Google? And at that point, the various SDKs do the tracking on behalf of the SDK maker [in this case, FB and Google]. Anyone who has put this particular kind of login feature into a phone application (or maintained code where it was in use) should understand.
You could probably say the same thing if a "Microsoft Login" were in use for the same purpose. These cloudy login services could EASILY track EVERYTHING you do.
Google uses Firebase for this. It's kinda integrated with the "app" deployment as well, last I checked anyway [it may have changed since the last time I had to use it]. And I would expect school-related software to have some kind of login. And a kid isn't going to want to type a password every time, so would most likely accept a one-button login feature.
And so we "circle back" to the core of the problem - those login SDKs that are tied in with the ad networks and tracking and everything else.
Just wanted to point all of that out.
sometimes the people who fund these things like to donate because they get something they want for the money they spend... and in this case, maybe it's an Audacity + MuseScore thing that's beneficial to all?
As long as it stays alive and isn't pay-walled somehow, and doesn't pop ads or nag screens into our faces while using it, I don't care _WHO_ owns it.
(But I think this will work out just fine - I do music production stuff from time to time, and the beauty of Audacity is in the plugins, where vendors can make money if they want to)
well I wouldn't expect that, but again, forking is still an option... if you don't like community v pro.
I expect pay-for on the plugin side. that's where they can really make money if they want. make the thing that USES the plugins free, and the plugins cheap enough, and it'll work.
forked, yes, but MuseScore is already an open source application (at least it was - it was in FreeBSD ports last I checked, and builds from source as far as I know).
So the company may very well make their money selling plugins and things like that [I'll have to investigate, but I believe it is the case], and Audacity might be a "value add" for already-paid-for things. I suspect it will go well.
Oracle still supports the open source VirtualBox... yeah, who knew?
RHEL has the paid support that big Enterprise customers probably want.
CentOS _was_ the alternative for "the little guy" who is willing to do most of that support work himself. But being 'ahead' of RHEL (like a 'testing' branch) makes it "less stable".
So, 'Rocky Linux' will take the (former) CentOS slot of "stable" vs CentOS now being "testing".
Think of the *kinds* of issues that Windows is currently having from the way ITS patches roll out - often inadequately tested, from what I see. You don't want to be first in line unless you're prepared to deal with the consequences of a bad patch.
So the assumption is that once a patch goes out for RHEL, the Rocky Linux project will fold it into their code base as well. That adds stability: a patch that gets rapidly re-patched would only show up after the re-patch. Or that's what I'm thinking. And CentOS patches would be a "heads up, it's coming" for them to get it all ready, maybe deploy a bit faster.
If I had a choice between rapid-patch and stability, I would ALWAYS choose stability. Instability requires constant fiddling, and I'd rather not spend tons of time doing things that, to me, appear more like "scampering" and "tail chasing".