Re: Who didn't see this coming?
"I wonder how long it'll be before there's a monthly subscription charge"
'Not Soon Enough' as far as Micro-shaft is concerned
You'd think he'd be able to find a job based on his "merits"
well, not having seen the guy's resume, who knows. I'd suggest that he leave Silly Valley and go to Texas. Silly Valley has probably labeled him "troublemaker", and there's no casting couch big/wide enough for him to get his 'favor' back. OK, that last part was kinda bad. Coat, please.
well, NO discrimination is the best idea, but if you do THAT, and the hiring environment is basically what Damore said it is [mostly white men applying], then you're gonna get sued, regardless, because, lawyers and insane people who can't simply ACCEPT that they don't discriminate [until they HAVE to discriminate, because, REVERSE discrimination, which is PROBABLY true in this case, out of self preservation].
That being said...
If employees CAN be discriminated against for their POLITICS, they should just shut the hell up about it when at work. After all, business is business, and politics is politics. Happy customers/employers keep you employed and are more likely to give you raises.
And then, as long as "the workplace" doesn't use what you say online ON YOUR OWN TIME [assuming it disagrees with them] and you're not violating any laws or revealing trade secrets, if they were to discriminate against you BECAUSE of your 'after work' politics, they'd be "sue-able", I'm pretty certain. And the lawsuit would be completely justified.
Anyway, my $.10. It's not so bad being a techno-whore. If the guy with the money that hires me is a total lefty, I'll just say "yes, sir" and shut the hell up if he says something "left-ish". He's paying the bills, after all.
So - did Damore possibly INVITE the discrimination from past behavior? Just curious...
'Goodbye "cloud" I'm done with you.'
Sadly that may be the only alternative...
Still, it would seem to me that *maybe* an 'Open NAS' or equivalent might work on those drives...
(has anyone tried to load it?)
If another OS _can_ be loaded on those devices, maybe THAT is the fix?
and they're STILL hard-coding back doors into their stuff, EVEN THOUGH it has been proven time, and time, and time, and time, and time ... again that DOING! THAT! IS! BONEHEADED! STUPID!!!
Anybody got a CLUEBAT for these idiots?
There may have once been a reason for this, for vertical market systems NOT on the internet, so you could go to a customer site and un-brick "whatever they did to it". Since the 90's, that has become *INCREDIBLY* *STUPID* to do. A physical reset button with a 'password reset' command of some kind would be a better idea, but NOOooo they had to do a BACK DOOR with a HARD CODED USER/PASS combo.
Nice. Job. Not!!!
something that a "super-heavy" might be really good for...
if travelling to Mars or the moon becomes more common, it's a fair bet that ships (yes ships) would want to refuel in low earth orbit, and how do you get the fuel "up there"? With super-heavy boosters!
Also, components for building a REAL space station, like the one we see in the 2001 movie, would require "super heavy" boosters.
Note I'm suggesting a Falcon Super-Heavy here because 70 tons is kinda small when it comes to things like fuel and water+supplies for space hotels and interplanetary travel.
Q: how many additional boosters can you strap onto a Falcon Heavy before it can't handle the load?
A: let's find out! [but first, get the Heavy off of the ground, and launch something more useful than a car]
Considering what the article said about the EFF, I have to wonder about the appearance of impropriety, i.e. taking money from Google (and maybe Facebook) and then declaring that there are no privacy violations with either of them [both known to hoover up our information and track us], even though it's always "opt out" and never "opt in". And in some cases I suspect there _IS_ no 'opt out'. Youtube is apparently NOT complying with privacy settings when you select "do not track", as one example, so when I look at embedded youtube on a web page, I often see a 'privacy settings' warning [I didn't want autoplay videos anyway, so it's just as well].
The message I typically see looks like this (in lieu of the embedded video):
"This embedded content is from a site (www.youtube.com, flickr.com, etc) that does not comply with the Do Not Track (DNT) setting now enabled on your browser." And there is a button to view the embedded content.
(this was on a site that apparently serves up that particular warning if it detects you selected "do not track" options in the browser)
OK, so _HOW_ can Google (owner of youtube) get any kind of FAVORABLE acclaim from EFF regarding privacy, when they (allegedly) do NOT comply with the 'do not track' policy you select in the browser???
Or, the site that serves up that particular warning ought to stop misleading people... assuming they're NOT correct (and I suspect they _ARE_ correct).
Methinks there is a foul smell in the air, and it's not a good one for privacy for the individual.
I like a lot of what the EFF does and stands for. Some of it irritates me. If sending them money could sway their position on a few things, then I might consider it, if I _HAD_ that kind of money, at any rate...
"Actually, they don't - some small shop do, but big ones don't - and that's always been a thorn in MS side."
I would *REALLY* *LIKE* to see more evidence of that (what YOU said), because it's what I _WANT_ to hear, but I have been hearing nothing but the MS coolaid mantra for so long that maybe my perception of this situation is off... because the perception Micro-shaft wants people to have is that "everyone" is doing it Micro-shaft's way [whatever that might be this month] and, as such, if you're not on the SAME bandwagon, you're an old, stick-in-the-mud, obstructionist dinosaur that should have gone extinct already.
"The Visual Studio debugger is light years ahead of GDB in every way possible. And has been for decades."
not really. gdb was intended to have a wrapper around it, as I understand. It's a lot like the old codeview application, but simpler. Also similar to the way kernel debugging works, for those of us who've done that.
DevStudio's debugging interface isn't any better than 'ddd' as far as I am concerned. In fact, I think it's HARDER to use DevStudio nowadays (compared to '98 which was probably the BEST version for people who like to type and not mousie-clickie every damn thing), with the way the hotkeys and toolbars and displayed source files have been screwed all to hell (as far as I can tell, anyway). It was MUCH easier (and saner) in "the old days".
If you've ever used 'ddd' (a GUI wrapper around gdb) you'll see an example of GUI integration around gdb, which is as good as anything else as far as I'm concerned.
Where 'ddd' falls apart is when you set a breakpoint during event handling from X11 from within the SAME desktop as the process being debugged. Basically there's a lock on the X server so everything freezes up due to the 'deadlock'.
So, there are 2 basic solutions to that: a) use a separate desktop (which I already do) for the debugging session, and b) fix the interface (i.e. re-write your own gdb wrapper) so that it unlocks the X server across debug breakpoints. Managing the 2nd option may require some clever hacking. But I intend to give it a good try anyway.
The X11 library has a locking mechanism for multiple threads accessing the X server, mainly XLockDisplay() and XUnlockDisplay() (available once you initialize it for threaded behavior with XInitThreads(); I keep the events in the main thread to avoid problems). Additionally, you can grab/ungrab the server itself via XGrabServer() and XUngrabServer() (you sometimes need to do this with certain operations, like mouse-dragging). These may be implicit with certain kinds of X11 library calls and event handling itself. So if I spend some time digging through the X11 library, I bet I'll find something _like_ this being used during event processing, locking the X server (or the library) for concurrency reasons. I would then intercept that when I hit a breakpoint, shut it off while in the debugger GUI, and restore the state prior to returning to the program.
So yeah once that's solved, everything's good again, you can debug in X11 and Micro-shaft can keep their bloatware developer studio and any incarnations they attempt to make runnable on Linux.
[and I doubt Wayland would "fix" anything, either - it would probably make things WORSE]
"when those behind it take weird decisions, such as removing menu icons and mnemonics"
Ack. I concluded that the gnome 3 dev team is a closed "in a bubble world" set of millennial-minded "developers" that fall into the following traps:
a) they like the 2D FLATSO because THEY *FEEL* it is "cool" or something...
b) they "feel" they know better than YOU do how to use YOUR computer
c) they are 4-inchers - i.e. they do MOST things on a 4" screen
d) they lack the experience that resulted in the original 'WIMP' solution (like using DOS systems for years).
e) they INSIST on FORCING people to use THEIR way [i.e. they're ARROGANT ELITISTS]
only a very young person would even DARE to use 'soft color on white' for a user interface, because "pretty much" everyone over 35 needs glasses to even SEE that, let alone the low contrast color-only distinction. Keep in mind that rods are more common than cones in the retina, but rods respond to luminosity, and cones to color, so people over 35 generally need some pretty THICK glasses to read text that is light blue on white... and only a CHILDISH IDIOT would _INSIST_ on that in the FIRST place! Right, 'Australis' inventors? Right, Chrome "developers"? Right, Micro-shaft?
Gnome 3's devs are WAY too much like the arrogant idiots (that horked up Win-10-nic) over at Micro-shaft, for this very reason. WAY too many similarities.
It's why Mate forked, why Devuan exists, and why there is so much OUTRAGE every time you mention gnome 3, systemd, or wayland.
"It's because they don't know any better?"
more like, commercial software vendors don't know any better [and do not produce Linux versions]. They also tend to swallow Micro-shaft's coolaid, i.e. ".Not" "C-pound" and "UWP"...
collective wisdom in the decision-making positions seems to be lacking, yeah.
I've been working (for years) on a decent tool for GUI development with X11. If I could get paid for it I'd have it done by end of 2018...
(the intent is to have a Win32 layer so the same code builds/runs on both windows AND with native X11 libs).
My main motivation for NOT using GTK is the way it handles dialog boxes and edit windows. I don't like it. Instead I'm doing something that uses native X11 calls. The edit window is about half-working, the clipboard works properly, most of the dialog box features work, but it lacks completion of the edit window [including a working undo buffer], some dialog box features, a dialog box graphical layout editor, property sheets for configuring the application, a refactor tool, integrated gdb debugging, something to work around X11 server lockup if you break in the middle of an X11 call, and the "wizards".
yeah a lot left to do, but I could STILL do a basic dialog box application with it right now...
the intent is to make it work like devstudio, without the crappy/irritating interface - more focused on typists and power users instead of VB "programmers".
"won't-work-on-Wayland"
THAT explains it! @#$$%(*#@$&* WAYLAND!!! (that thing needs to *DIE* by being *MURDERED* *TO* *DEATH* and *BURNED* *WITH* *FIRE*)
Wayland: NUKE IT 'TILL IT GLOWS, then SHOOT! IT! IN! THE! DARK!!! (and buried under tons of concrete in a grave next to systemd)
ACK on the influence by Gnome 3 "developers" on Mate. I have trouble running certain mate applications (like pluma, for one) when I do the following:
su - differentuser
export DISPLAY=localhost:0.0
pluma &
it gripes like hell at me and won't load the settings properly. same with Atril.
Additionally, if I'm running a fluxbox desktop via TigerVNC (so I can use vncviewer and debug X11 applications from within a GUI without the server hanging) and I run 'mate-terminal', I can't save the settings, nor can I run it without the "--disable-factory" parameter [or it crashes]. This is on FreeBSD, by the way, and this USED to work PERFECTLY a couple of years ago with gnome 2, and so I have to ask: W.T.F. did the Mate devs _DO_ to make *THIS* a problem, now? I suggest they followed _SOMETHING_ _CRAPPY_ that the Gnome 3 "developers" did, probably with gsettings or systemd or both.
"Android? Seriously?"
ACK - the button-icon-menu (think 'Unity' yeah) interface that 'droid is famous for works very well on phones and devices (like slabs) without keyboards. Once you have a mouse and keyboard, it *STINKS*.
Apple has OS/X _and_ iOS with different interfaces that make sense for the use case. "Everybody Else" (Especially Micro-shaft) needs to STOP IT with the "one interface" crap.
If 'droid had a MATE-LIKE interface on the desktop, though, I'd be VERY happy with it! That assumes it's not 2D FLATSO. 2D FLATSO is a _major_ DEAL BREAKER with me. But Google has a history of that with Chrome. So I doubt their internal culture of arrogance would excrete ANYTHING ELSE...
"This piece sound like a panegyric to Gnome"
right, and I was thinking about Mate (and why I use Mate instead of Gnome 3) while reading it...
Cinnamon seems to have the best "windows-like" appearance, and Mate the best overall [my $.10 worth]. Gnome 3 is what the millennial "shove it up your rectum" types *FEEL* we should have. Same *kinds* of people seem to drive Firef*x Australis and Chrome's UI.
nevermind "the rest of us" particularly power users...
"The floor of the Senate" (and/or the House of Representatives) is where all of this should have been decided in the FIRST place.
Having an executive branch LEGISLATE is JUST WRONG. That's effectively what 'net neutrality' was when Obaka's administration's FCC people tried it.
Bureaucracies are supposed to IMPLEMENT and ENFORCE, not legislate.
If the Senate and H.R. pass net neutrality, and Trump signs it, it will become law.
If they do not pass it, it SHOULD NOT BE IMPLEMENTED by the F.C.C. or any OTHER agency (thus circumventing the legislature).
That's how "separation of powers" is SUPPOSED to work. It's why I'm glad Pai SCRAPPED it.
"Get security at the cost of performance by properly flushing the pipelines between task switches."
I would think this should be done within the silicon whenever you switch 'rings'. If not, the OS should most definitely do this. Does the instruction pipeline (within the silicon) stop executing properly when you switch rings, like when servicing an ISR? If not, it may be part of the Meltdown problem as well: the CPU generates an interrupt, which is serviced AFTER part of the pipeline executes. So reading memory generates a trigger for an ISR, but other instructions execute 'out of order' before the ISR is actually serviced...
I guess these are the kinds of architecture questions that need to be asked by Intel (and others): what the safest way is to do a state change within the silicon, and how to preserve (or re-start) that state without impacting anything more than re-executing a few instructions...
So I'm guessing that this would need to happen:
a) pipeline has 'tentative' register values being stored/used by out-of-order instructions, branch predictions, etc.
b) interrupt happens, including software interrupts (executing software interrupts should happen 'in order' in my opinion, but I don't know what the silicon actually does)
c) ring switch from ISR flushes all of the 'tentative' register values, as if those instructions never executed
If that's already happening, and the spectre vulnerabilities can STILL leverage reading memory across process and kernel boundaries, then I'm confused as to how it could be mitigated at ALL...
the whole idea of instruction pipelining and branch prediction was to make it such that the software "shouldn't care" whether it exists or not. THAT also removes blame from the OS, really. But that also doesn't mean that the OS devs should sit by and let it happen [so a re-architecture is in order].
But I wouldn't blame the OS makers at all. What we were told, early on, is that this would speed up the processors WITHOUT having to re-write software. THAT was "the promise" that was broken.
"OS developers decided to begin with that it was worth the risk to gain extra performance by not flushing the pipeline."
read: they used CPU features as-documented to avoid unnecessary bottlenecks
The problem is NOT the OS. It's the CPU not functioning as documented, i.e. NOT refusing to access memory when the page table says "do not access it", even if the access is only brief. The fact that a side-channel method of detecting this successful access exists does not excuse the somewhat lazy way in which Intel's silicon checks the access flags when out-of-order execution is happening. Security checks should never have been done after the fact, and yet they were.
(my point focuses mostly on meltdown; branch prediction is another animal entirely)
In short, Intel's benchmarks could have been *slightly* faster (compared to AMD, which apparently doesn't have THAT bug) because they delayed the effect of security checking just a *little* bit too long...
fixing that in microcode may not even be possible without the CPU itself slowing down. If AMD's solution was to have more silicon involved with caching page tables so that the out-of-order pipeline's memory access would throw an exception at the proper time, then Intel may have to do some major re-design.
So you could argue that NOT doing these security checks "at the proper time" within the out-of-order execution pipeline may have given Intel a competitive advantage by making their CPUs just 'faster' enough to allow the benchmarks to show them as "faster than AMD".
And it's NOT the fault of OS makers, not even a little. They were proceeding on the basis that the documentation represented what the silicon was really doing. And I bet that only a FEW people at Intel knew that the security checks on memory access were being 'delayed' a bit (to speed things up?).
It's sort of like only a FEW people at VW knew that their 'clean diesel' tech relied on fudging the smog checks by detecting that the car was hooked up to a machine and running a smog check, and thus alter the engine performance accordingly so it would pass. THAT gave VW competitive advantage over other car makers. Same basic idea, as I see it.
"What happened to BIOS initializing enough hardware to load the boot block and then handing everything else off to the OS"
Micro-shaft and DMCA and gummints - OH MY!
I'm happy to see things like "secure boot" and "management engines" and whatnot blowing up in the faces of the designers. Maybe it will *FORCE* them to adopt "the simple solution" instead...
"Whenever we had to find prime numbers at school the ones I 'found' were usually divisible by 3."
yeah too much busywork, doing all of those divisions. Imagine doing it WITHOUT an electronic calculator. That would be when _I_ was in school... through Jr. High anyway.
thinking of high school, I had a friend who came up with a really interesting way of calculating prime numbers. He proposed prime numbers "by addition", basically a set of 'for' loops that marked an array (you could use a bit array) for every value divisible by 'n' and then you just examine the array afterwards and print out anything with a zero in it. It would be significantly faster than dividing by every odd integer <= sqrt(number), but maybe not faster than dividing by "discovered prime numbers" <= sqrt(number). Anyway, for a value of this magnitude (re: article's number), I think you'd run out of RAM...
(then again it's only 2 ^ 77 million, so perhaps not?)
"only gets accepted for publication or considered by journalists after a working prototype is available"
that wouldn't be scientific, that would be like "flat earth" thinking. Publishing 'unproven' ideas for peer review, PARTICULARLY before having a working prototype, is ALWAYS a good idea. It also helps you to establish ownership [they should get a provisional patent, too].
I can think of many things that have fallen into the 'unproven' category (at least at one point in time), like Evolution, the Big Bang, nuclear power, Einstein's theories, black holes, and television. In fact, I understand that someone had constructed a model of a color picture tube using sugar cubes, and used THAT to obtain a patent, which RCA allegedly had to license before they could produce color TV picture tubes with multiple electron guns... so yeah, theory shouldn't be restricted from publication until "after a working prototype is available". That's just ridiculous.
It's also a good strategy to publish FIRST (before you have a working prototype). In this case (as an example), battery makers should NOW 'want in' on their 'iron oxide' design. Some smart battery maker will likely invest some time+money into building prototypes, licensing the design with an really good contract, and maybe even having exclusive rights (for a little while, at least).
not just Elon's money, but EVERY! LAPTOP! COMPUTER! MAKER! and EVERY! PHONE! MAKER!
This is the best news in battery tech since the announcement of Aluminum in lieu of Lithium a couple of years ago [I remember reading about it on El Reg].
But if the battery is MORE STABLE (particularly with respect to gassing, a problem I've had to deal with in hardware I've been working on), then it's even MORE awesome!
yeah, nothing good happens when your aging LiPo batteries look like pillows...
[the other day I accidentally shorted one and it swelled up like a balloon in about 5 seconds, got hot enough to melt plastic - I put it under running water and it shrank down flat almost as quickly, but couldn't hold a charge any more]
"The mitigation would be to only allow it access to low precision timing."
They should all truncate it to millisecond resolution then. Why does javascript need microsecond-level performance timers?
/me points out that I've profiled code effectively with millisecond-level resolution, MANY times. I'd explain why it works, but would probably get a dozen or so off-topic replies, half of which would contain pejoratives and whining about me using CAPITALIZATION for emphasis. I tried to explain it once on a Microshaft forum when I was profiling early insider versions of Win-10-nic that way, and I don't think they liked what I found, so I got "the flack about my methods" instead of a REAL discussion.
"On average, women have IQs that are a few points higher than men. But men's standard deviation is higher. Therefore, when you get into the high-q range, men substantially outnumber women."
do you have a reputable source for this? it sounds interesting if it IS true, but then again it's very likely to be "just a perception" based on the distribution of 'smart kids' in a typical classroom... all of those girls who are great at school work [because they're not "being boys"] and one or two geeky boys that seem to have hyper-intelligence... a perception, but maybe not a reality.
Anyway, if this has been proved for real, it could be an interesting point for a LOT of arguments.
however, I would explain some of the pay gap this way: men tend to be risk taking and aggressive, women tend to be 'safe', because it's evolution, baby. risk takers probably ask for raises more often and are willing to be aggressive about it, even to the point of getting fired or rage-quitting. but women probably wouldn't do that. that's a perception, too, yeah, but I think I'm more right than not by suggesting this.
"Any company would replace set A of workers with set B of workers if set B performs as well as A and costs 20% less."
So, I think then the implication would be: hire MORE women so you can pay them LESS?
And would they be SUING if more than 50% of the employees are women? (yeah, probably would, because the sueball throwers love to throw sueballs)
Yeah there are some confusing indicators out there, because if the perception were correct, you'd see 'Silly Valley' sweatshops filled with single working moms where daycare was company-provided and that helped to justify the lower pay, etc. like "I owe my soul to the company store" snap, snap, snap...
I wish I could find that Dilbert comic where some woman demanded she gets paid the same as the men, and then she gets a 10% pay cut [or something like that].
"those voting down who are offended the most are the ones most guilty of such crimes."
heh - yeah, I don't even consider the down-voting any more. howler monkeys and my personal fan club, mostly. except when I don't get ENOUGH of them (so I'll JAM this up with MORE caps-lock ha ha ha to see how many I can get from the fans)
"I write C++ REST services. Not for any performance reason but because I've never got on with all the Node/Java web development frameworks"
and that's the kind of thing I'm talking about.
as an extra added bonus, I did some work for a company with a poorly designed back-end, by adding C utilities that are called from the Django framework. Upload processes that WERE taking more than a minute (due to cpu-intensive activity) were shaved down to a few seconds.
But I didn't re-do all of the Django stuff. That wouldn't have bought much of a performance change. What I _did_ do made a HUGE difference, mostly because I saw a lot of "the Python way" coded into the back-end. It implied, to me, that "the established way of doing things" is simply GROSSLY inefficient, and re-coding that stuff in something that _is_ efficient (CGI via C programs, or even Perl if it's simple enough) would buy you a HUGE performance boost (in many cases where CPU-intensive operations were slowed down by anti-Meltdown patches).
If it's not CPU intensive, you probably wouldn't see a change (yeah). So for THAT, who cares.
also, I've written simple web servers in C a few times. One of them was an attempt to "genericize" something to fit on an Arduino - yes, a somewhat generic web server in under 30k of NVRAM, intended to let you configure an IoT device with a web page. Downside: you can't really do anything else with the Arduino because the web server code is still a bit too piggy, so I shelved it... [had to try it anyway in case it worked]. But I was able to change device parameters and store them in EEPROM (things like the IP address, fixed or dynamic), so there ya go.
THAT being said, to *ME*, 'C' coding is probably faster (in a significant number of cases) than doing a bunch of stuff with BLOATWARE, with 3rd party library hell, just to fit it all into "their way of doing things".
And it's that "3rd party library hell bloatware" that's slowing down the back ends WAY too much already, I bet.
(and with dis-respect to the 'random caps resolution' comment from earlier: BITE ME)
maybe it's time to re-consider server-side inefficiency. that is, instead of bloating your server side with massive libraries consisting of scripted and interpreted lingos (say 'Python' and 'Javascript'), to INSTEAD go with C language utilities and CGI-based things for otherwise CPU-intensive processes.
Yeah, that's a major infrastructure change, if you've invested a LOT of time in NodeJS or Django.
Additionally, if you're using an SQL database, you might want to consider an "efficiency re-architecture" to limit CPU utilization and I/O calls. As an example, check your 'outer join' logic to see if you're linking tables together in the primary filter query, specifically things that don't need to be linked until later. Even a 'select for update' could start with a filter that doesn't require linking any additional tables, if you design your database intelligently (and with efficiency in mind).
[this would prevent a boatload of unnecessary I/O or networking system calls, where the "fixes" for Meltdown would impact performance the most]
I've seen enough gross examples of lazy server-side code [and been tasked to fix it] for one lifetime, probably. But I'm sure it won't be the last. "Blame where blame belongs" for inefficient server-side code, because someone 'felt' that efficiency wasn't an issue. until now.
"How come the solution is to move to Windows 10 and Ms Office?"
Because it works, the required applications exist for it, and it has a lower TCO. And the users prefer it.
it's getting "shilly" in here. BRRrrrr...
Sorry. Your argument fails right away.
a) I've NEVER had a problem importing a Micro-shaft "Weird" document in Libre or Open office.
b) if the formatting changes, it's because of the use of non-standard fonts, etc.
c) Why not DUMP MICRO-SHAFT ORIFICE and adopt LIBRE as "the standard" instead? [last I checked, Libre has binaries that run on winders for those who *must* use a Micro-shaft OS]
And your attempt to use the same-old-FUD from the early noughties about "total cost of ownership" being LESS with per-seat licensing? *BORING*
I think a REAL USE CASE from that period of time might shed some REAL light on TCO...
https://www.cnet.com/news/rockin-on-without-microsoft/
it was FUD then, and it's still FUD now, that TCO is allegedly LOWER with Micro-shaft "pay to play" licensing. The Ernie Ball story says exactly the opposite: he saved enough money IN THE FIRST YEAR to cover the cost of the 'fines' that resulted from the "surprise!" audit.
In any case, I would MUCH rather see governments consider open source FIRST, for the explicit purpose of saving money for the taxpayers. Germany could ALSO make the case that open source allows them to NOT rely on some other country for their I.T. support, thereby promoting local (or at least German) businesses...
And think of this: if governments DEMAND open source, for the purpose of security audits, transparency, and the promotion of PUBLIC projects, that would do FOSS a LOT of good.
icon, because, FUD that I respond to.
" had to scrap Mint and switch to Win 10"
you COULD have set up dual-boot or run Win-10-nic in a VM...
windows-only conferencing software. *ugh*. What idiot decided THAT was a 'requirement'?
It's not just a religious argument, either. It has everything to do with privacy, licensing, and what you're now kept from doing by the Win-10-nic OS [like customizing your computer so it's not "all 2D FLATSO" like Micro-shaft seems to be shoving up our collective rectums]. This is the kind of FREEDOM you get with Mint.
So if you actually LIKE the 2D FLATSO, you can have it. But I happen to *HATE* it. So I pick a theme with Mate that doesn't do that [on Mint, Ubu, FreeBSD, Debian, or whatever].
But I _AM_ disappointed at the use of Firefox 57. I refuse to use it because I can't put the "non-FLATSO eye candy UI" extensions on it.
"If you have a pool spread across two physical disks, and ZFS detects one of the disks is having problems, what is the recommended way of replacing that disk?"
here's what I'd do:
a) do a scrub to try to clean up and recover as much as you can.
b) Install the new drive, with the old one still in place, and do a 'zpool replace' command (to replace old drive with new). I think this will work in your case.
NOTE: if you remove the old drive, I don't know what effect this will have on device naming, so you'd probably have to watch out for that. I think ZFS is smart enough to deal with device name changes from swapping SATA ports/cables and primary/secondary arrangements.
ZFS does something called 'resilver' to copy from old drives to new drives (as part of a RAID or replication or spanning multiple drives). 'man zpool' for more, maybe read up on some of the Solaris resources which go into a bit more depth than the FreeBSD (and probably Linux) docs [but the commands should be all the same for everyone, from what I can tell].
if this doesn't work, you can build a new pool with a new hard drive, and just copy the files. that works, too. Then after copying, remove old drive(s), rename pool/mount-points as necessary, and you're done.
@Tom 38
regardless of the GPL licensing "dogma" you refer to, ZFS is supported well enough that you're able to use it. That much should be obvious, at least.
/me has been using FreeBSD with ZFS for a while now, both on a workstation [boot from ZFS] and on a server [UFS+J for OS, ZFS for data and archives]. ZFS warned me about my hard drive going bad, so I was able to pretty much recover everything [built latest OS onto hard drive in a VM while server remained running, copied data files via network, swapped in hard drive, a few tweaks later, up and running!]
what's wrong with 1991-style web sites? "modern" (and the scripting/tracking/bloatware associated with it) is HIGHLY overrated... especially that 2D flatso light-blue-on-white crap (like Australis uses).
maybe slackware just doesn't want to break their 'working' web site. [or they're too busy slacking off, heh]