Re: what about my fax machine?
ISTR reading perhaps a year back they were trying to get fax removed from the universal service obligation for precisely this reason.
Ah yes, here it is.
Microsoft have always struck me as being in a "having your cake and eating it" position when it comes to trademarks. This is a company that claims a trademark on "Windows" for windowing systems and "Word" for word processors but has claimed "Internet Explorer" is a generic term and thus not trademarkable.
In the case that is the subject of the article though, it seems the (original) site owner is on solid ground: it is long established that trademark protection does not extend to cases where such use is necessary to refer to a company or its products. That is why a site like The Register does not need permission from Microsoft, Apple or indeed ARM to refer to those companies whenever they publish articles critical of them. I suspect this is lawyers simply trying it on, knowing there is no real case. They know people are scared of getting involved in expensive legal battles, especially when they are acting simply as intermediaries, such as the hosting provider here. The article is lacking in specifics, but this is particularly true in the US, where each party generally pays its own costs, which inevitably favours those with the deepest pockets.
Your analysis is valid but stops too soon. Public key cryptography is simply too expensive to get much use by itself. Instead, in practice it is used to negotiate a common key for a symmetric algorithm that carries the actual data. The symmetric algorithms are effectively proof against cryptanalysis short of somehow unravelling the keystream generator in use. It is the public key stage before that, used to agree a common key at both ends, that is theoretically vulnerable.
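For the avoidance of doubt, here is the shape of the hybrid scheme as a toy sketch: a Diffie-Hellman-style exchange with a laughably small prime, and a SHA-256 keystream standing in for a proper symmetric cipher. All the numbers are invented and none of this is fit for real use; it only illustrates the division of labour between the expensive public-key step and the cheap symmetric bulk encryption.

```python
import hashlib

# Toy public parameters (assumed values, NOT real crypto).
p, g = 4294967291, 5            # public prime and generator

alice_secret, bob_secret = 123456, 654321   # never transmitted
alice_pub = pow(g, alice_secret, p)          # sent in the clear
bob_pub = pow(g, bob_secret, p)              # sent in the clear

# Both ends arrive at the same shared value. This is the expensive,
# theoretically-vulnerable public key step, done once per session.
shared_a = pow(bob_pub, alice_secret, p)
shared_b = pow(alice_pub, bob_secret, p)
assert shared_a == shared_b

def keystream(shared, n):
    """Derive n bytes of keystream from the shared value (toy KDF)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(f"{shared}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

# The bulk data is carried by the cheap symmetric stage.
msg = b"the actual data, carried by the symmetric cipher"
ct = bytes(a ^ b for a, b in zip(msg, keystream(shared_a, len(msg))))
pt = bytes(a ^ b for a, b in zip(ct, keystream(shared_b, len(ct))))
assert pt == msg
```

Real systems do the same dance with X25519 or RSA for the key agreement and AES or ChaCha20 for the payload.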
They are comparing the harmful chemicals as proportion of the dust collected. Remove many of the sources of dust from the environment and of course the composition of dust will change, some sources are removed in their entirety and the overall amount would be expected to be lower.
Consider major sources of household dust not present on the ISS. Carpets are not continually wearing out. Mud is not being traipsed in from outside. Pollen is not being carried in on the wind. Chemicals present in carpets, mud or pollen will be much lower as a result. Anything else will as a proportion be correspondingly higher.
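A back-of-envelope illustration with entirely invented numbers shows why: remove three of the four dust sources and the remaining chemical's share of the sample jumps, even though its absolute quantity is unchanged.

```python
# Hypothetical figures purely to illustrate the renormalisation effect
# (mg collected per sample; the categories are made up).
household = {"carpet fibre": 40.0, "soil": 25.0, "pollen": 15.0,
             "skin flakes": 10.0, "flame retardant": 10.0}

iss = dict(household)
for absent in ("carpet fibre", "soil", "pollen"):
    iss[absent] = 0.0            # these sources are removed in their entirety

def proportions(sample):
    total = sum(sample.values())
    return {k: v / total for k, v in sample.items()}

# Flame retardant is 10% of the household dust...
assert proportions(household)["flame retardant"] == 0.10
# ...but 50% of the (much smaller) ISS sample, same absolute amount.
assert proportions(iss)["flame retardant"] == 0.50
```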
This was solved literally decades ago: the semiconductor properties of diamond have been well known for a long time and diamond devices are far from novel in a research setting. Various practical problems have simply prevented widespread commercial deployment. You wouldn't want to use these for generic computer chips; the higher bandgap implies higher operating voltages and therefore more heat generation, although on the flip side diamond is one hell of a thermal conductor.
This is more power electronics, as the article implies. Among the things that immediately come to mind is whether they may finally displace thermionic valves for high power radio and TV transmitters - semiconductors can be high power or high frequency, but they still can't match valves for both at the same time.
What a load of Stuart Hall International Travel. Yes, that one is legit, same Stuart Hall who was done for kiddie fiddling as part of Yew Tree.
At my first job after Uni at what was then the Inland Revenue there was an idea to have specialist teams to deal with Appeals, Reviews, and the Self Employed. It actually happened but they did alter the order of the categories.
They might try withholding lumps of the source folder tree for drivers they've not included in the kernel image for the VM, but that could be a bit tricky; the source for the kernel also includes all the drivers, and a partial copy of it might not count as "the whole source", even if chunks of it were not built into the program the customer has received.
Not an issue, the GPL was always drafted with the intent you can cut bits out and slot them in elsewhere... "That's a useful data structure, I'll have that" or "I'll rip out that function for this" is the original intent of the licence. You've never had to supply the source in full by design, only the portions you use.
It does make sense for several reasons, not least that the performance of most filesystems degrades quite markedly when you hit tens of thousands of files in a directory, since directories are often searched with a simple linear scan.
However, it is more usual to put the most significant units at the root of the hierarchy, which also makes it child's play to separate out last month's or last year's files for backup, archival or deletion purposes.
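Something along these lines, as a minimal Python sketch (all the names are invented):

```python
import tempfile
from pathlib import Path
from datetime import datetime

def dated_path(root, filename, when):
    """Return root/YYYY/MM/filename, creating directories as needed.
    Most significant unit (the year) sits at the top of the hierarchy,
    so a whole month or year can be archived or deleted in one rm/mv/tar."""
    target = Path(root) / f"{when.year:04d}" / f"{when.month:02d}"
    target.mkdir(parents=True, exist_ok=True)
    return target / filename

root = tempfile.mkdtemp()
p = dated_path(root, "report.log", datetime(2023, 7, 14))
assert str(p).endswith("2023/07/report.log")
assert p.parent.is_dir()
```

Each leaf directory then holds at most a month's worth of files, comfortably below the point where linear directory scans start to hurt.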
It isn't a modification to the licence (which would prevent it from importing GPL code), it's simply down to the terms used to designate the licence:
None of those formulations affect the terms of any version of the licence in any way whatsoever.
I had that exact thought earlier today, only it was on Amazon, in the "Buy this again..." bar. And yes, it was a cordless drill, specifically one I bought only a couple of months ago.
They really need to add some "disposable or consumable item" filter to the logic for that one. Especially because it is Amazon, who have a vested interest in making the sale as opposed to simply slinging the ad. After all if my drill has gone duff already I'm hardly likely to buy another of the exact same model am I?
Only if that forms part of the licence for the software. The GPL says nothing about side agreements, including side agreements I insist you take up before I give you the code and/or the binaries. I'd like to see the additional terms provision tested in court, it sounds like a land grab that would be thrown out, but it's an irrelevance: it isn't what IBM/RH are depending on here. Essentially there are two distinct contracts here - the subscription agreement and the software licence. The licence effectively states that additional terms may not be added to the licence: it is silent on all other matters. The subscription agreement does not form part of the licence and thus its conditions stand.
To simplify take it there is one author we will call A. Whenever and however you receive a copy of the software you get a licence from A saying you can do X, Y and Z. Then there is another party we will call RH. You make an agreement with RH that you will not do X. Now you have two separate agreements, there is your agreement with A and your agreement with RH. You are contractually bound by both: the only way you can conform to both is to forget X and use Y and Z only, regardless of what A says. Try to refer to what A says in your agreement with RH and you'll be laughed out of court - A has no standing in the agreement you reached separately with RH.
Which means that RH customers can not pass on a copy of the subscription agreement (but who would want a copy of it?).
Meanwhile, the rest of the GPLed code can be shared, as described above.
But you are subject to a contract under which you have agreed not to distribute the code. As shown above, the GPL does not appear to prevent side agreements, only additional restrictions within the program itself or its components. You can guarantee IBM lawyers will have looked at this very closely before they proceeded.
I'm not saying I like it - it's very much against the spirit of the GPL which tries hard to prevent that kind of restriction, but the crux of the issue is what the licence says, rather than the broader intentions or what you want it to say. I was surprised when the article author stated it appeared to be permissible, but looking at it I agree with him.
From the GPL ( https://www.gnu.org/licenses/gpl-3.0.html ):
"If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term."
That was my understanding, but look at what it is stated to mean - as opposed to the presumed intent:
“The Program” refers to any copyrightable work licensed under this License.
The subscription agreement with Red Hat is not under the terms of the GPL; indeed, it is entered into before, and as a prerequisite to, getting the licensed material, ergo it can't be construed as part of "The Program". External restrictions are not covered by that clause.
Sure, it's not in the spirit of the GPL but it looks to me as if they have a prima facie case. It's one of the risks of producing a licence so long and complex in the first place: that you may consider the FSF "one of the good guys" doesn't change that peril.
By the tanker load it is, it's essentially a byproduct of oxygen production. The difficulty is the logistics of small quantities. It isn't hugely exotic or difficult to obtain, just impractical for small users.
Last time I needed some LN2 - perhaps a decade ago - it was easy, the local British Oxygen keeps it in stock. Even at small quantities the economics differ wildly: 1-2L cost perhaps £10-15 including cryosafe Dewar hire, and it'll evaporate to nothing over a weekend. A 25L Dewar may be £50-60, but a month later 90% of it is still there.
*) BSD+MIT in, who gets attribution?
That's missing the point: the attribution relates to the authorship, not the licence. Even historically, if you lift code from a BSD-licensed project and place it in another BSD-licensed project (or indeed GPL code in another GPL project) and do not copy over the relevant copyright notice, that is itself a licence violation, even without a change of licence.
Generally the first thing an Ethernet signal will hit internally is an isolation transformer. Ground loops are an irrelevance since you have typically at least 1000V isolation.
Electrical storms on the other hand are more problematic. It doesn't even need to be a direct lightning strike, and even burying the cable is no guarantee.
The constant reuse of the i3, i5 monikers is confusing though: you can't look at two model numbers and compare them without reference to generations, benchmarks etc. You knew a 486 was better than a 386, or a Pentium 3 better than a Pentium 2. The i series has been reused so much you never really know.
I have better things to do than memorise comparison scores of every chip on the market and keep that updated every few months. Even the gamers can't tell - I used to work in insurance claims and one of the perennial disputes was yes, that current i3 really is faster than your five year old i5, so no, we're not paying for today's i5.
If this manager gets all excited about a Soundblaster and CDROM then he has (had) much more clue than the typical manager.
Mid 90s is more into the EIDE era. You can still find new optical drives for a pittance if you go to the right places. The days of the e.g. Matsushita interface were already past. In that era it would have been an SB16 and the Gravis wavetable daughterboard everyone lusted after. The AWE32 was on the market but wasn't as well regarded.
Back in our day, we only had a limited number of channels to scroll through, so ultimately kids hooked on TV would be exposed to A LOT of content that was not of direct interest to them and would ultimately watch other stuff.
It could still happen though. I remember at Uni, I'd pick up a Guardian every day on the way in, probably spend an hour a day reading it. Then someone mentioned some catastrophic flooding in (from memory) Bangladesh. What's that then? Supposedly it's been the top news story for the last three weeks. No, I've not heard about it. Then I picked up my paper and yes it's right there on page 1, 2, 3 and so on. Never the whole page, somehow my attention was drawn to a different story each time.
But key mappings have changed before. I'm old enough to remember when paste was shift-insert. I got over it.
In this case I have to admit I agree with the proposal. Screens are bigger than they were 30 years ago, when your choice was generally between a 12" or 14" monitor, and they hold a hell of a lot more content. It's probably only a couple of weeks since I sent a screen grab to my boss with a minimised wiki page of some actress visible in the taskbar. It may not be a disciplinary thing but it looks unprofessional. There are plenty of cases here and elsewhere of porn or confidential material being unwittingly included in screenshots.
There is still an algorithm working in a deterministic manner and actually it's fairly simple and easy to comprehend. The difficulty with AI is that the decision making process is ultimately governed by tabulated data. There's nothing wrong with that either per se - it's how e.g. Yacc has generated parsers for nigh on 50 years.
The difference is how that tabulated data is generated. In the case of something like Yacc it can be done longhand, it's time consuming but utterly predictable. The difference with AI is those tables are filled statistically based on what values give the best results. The underlying algorithm and overall behaviour is still utterly predictable - it's still scoring, comparing and if-elsing - but it follows logic never designed, considered, or explainable by a human.
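To make the contrast concrete, here is a hand-written transition table driving a trivial fixed loop, the same shape of machinery Yacc emits, except that a human wrote, and can read, every entry (the example itself is made up, a toy recogniser for optionally-signed integers):

```python
# All the behaviour lives in the table; the driver loop below it is
# fixed and trivial. A statistical model fills analogous tables with
# fitted numbers no human ever wrote down.
TABLE = {
    ("start",  "sign"):  "signed",
    ("start",  "digit"): "number",
    ("signed", "digit"): "number",
    ("number", "digit"): "number",
}

def classify(ch):
    if ch in "+-":
        return "sign"
    if ch.isdigit():
        return "digit"
    return "other"

def is_integer(s):
    """Run the fixed driver loop over the hand-written table."""
    state = "start"
    for ch in s:
        state = TABLE.get((state, classify(ch)))
        if state is None:        # no entry: reject
            return False
    return state == "number"

assert is_integer("-42") and is_integer("7")
assert not is_integer("4-2") and not is_integer("")
```

Swap in a different table and the same driver recognises a different language; the algorithm never changes, only the data, which is exactly where the AI case becomes inscrutable.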
I'm not going to be drawn into the whole up/down voting thing, really you need to grow up and understand forums such as this are not representative.
However, who gets to decide what is child abuse? It isn't a universal standard, it depends on the parent, the child, the community and broader societal standards. Parents will differ as to how much risk they will allow their children to take and again against each potential physical or moral harm.
Some parents are happy to see the kids leave the house and wander miles from home, climbing trees to heights where a fall would be fatal. Others are so obsessed with stranger danger that the kids can't go to the local playground without an escort; ideally they'd be in the front room on the PlayStation.
Equally there is the moral stuff. Some parents couldn't give a shit if their 12 year olds smoke. Others recoil in horror at the idea of their 15 year old going on a "date" to the cinema or whatever.
Who's right? You? With your particular values and prejudices? What gives you or anyone else the right to act as ultimate arbiter?
Ultimately you need to give any family quite a bit of slack and honestly subscribe to the belief that parents generally know what is best for their children. Of course child abuse exists but it must by necessity be the most extreme cases only, a middling set of values restricts freedom and liberty of both parent and child.
How do you encode that into law, especially on social media, knowing neither the abilities nor the interests of that child?
It doesn't work given the evidence available though. We know the liquid portion spins at the same rate as the surface because of the South Atlantic Anomaly, where convection currents are the "wrong" way round, causing a localised weakness in the magnetic field. Equally we know the solid core turns at the same rate as the liquid one, since fern-like crystal structures growing from the solid core, aligned with the magnetic field, are inferred from speed differences between seismic waves travelling north-south and those crossing the equator.
Theory has to fit the available evidence, and the current consensus is that it's down to plate tectonics. I do not claim to be an expert in these matters so I trust them.
Bulk composition of Venus is essentially similar to Earth, including a nickel/iron core. Venus's slightly lower density is explained by lower gravitational compression as opposed to chemical makeup.
If you really knew why Venus lacks a global magnetic field a Nobel prize would surely be on its way, because no one really knows for sure. Current thinking is that it is ultimately down to elevated temperatures boiling off all the water. That means water can't be subducted under the surface, which in turn shuts down any system of plate tectonics through lack of lubrication. That traps heat inside the planet and ultimately stops convection within the core. No convection = no magnetic field.
Like I said, it's speculative and little more than a best guess, but it's the one preferred at this moment in time. I recall hearing of a proposal to put seismometers on the Venusian surface, that is challenging given the conditions but potentially a goer with the right equipment. If memory serves that was a proposal submitted at the same time as what ultimately became Messenger.
...to cite the next time a solar powered project fails due to lack of power, whether that's dust, shadow, orientation or whatever else.
It's easy to advocate an RTG when you're not the one holding the purse strings, trying to source material or integrate them into a project. These are long standing issues and not easily overcome. Sure you can set up some research reactor to produce scientific quantities of an isotope, but for bulk quantities you really need commercial power stations to do the job at scale.
The likes of Magnox (here in the UK) are mostly history now, because of both economic and proliferation concerns.
AT&T ultimately missed a trick there. The Design and Implementation of the 4.xBSD Operating System books by Marshall Kirk McKusick et al. became the standard reference for anyone studying Unix internals.
Even recently I've been known to recommend them - there are guides on the innards of Linux, but it is a) more complex, b) changes rapidly and c) getting a bit grubby really: large parts of it are well overdue for a refactor if not a redesign.
Linux distributions provide a much wider range of choice. Don't like Gnome Shell? Use xfce instead. Prefer old-school init and admin? Slackware is still going strong. You get the drift.
There's some truth to that, but it's less true than it was historically. To my eyes Linux peaked around or shortly before the millennium, say the time of the 2.0 and 2.2 kernels. You had a decent traditional Unix system, slap a copy of Motif on there (proprietary at the time, LessTif existed but was buggy) and you had a decent enough system, a few quirks and limitations but nothing's perfect.
Then came the desktops, and other large scale projects counter to the Unix traditions such as CUPS - SystemD is just the latest in a long line. At first KDE was a library and a few applications, and Gnome had ambitions but wasn't ready for primetime. Slowly they became integrated wholes and you lost the ability to select components on a piece by piece basis, which had always been the strength of Unix. For example, say KDE has that really nice applet to configure your WiFi, but you ditch KDE in favour of something else. Sorry, can't use that now...
The ultimate evidence of this is these very reviews of each latest distro. Rarely do they discuss anything of substance, mainly it's the interface and eye candy. Sure you can change it, that doesn't mean it's easy. If it was why are there so many advocates for one distro over another?
Is that really what your insight comes down to? Essentially, I can write it down, so I may as well consider myself to own it?
SVB fell because it invested in T-bills in a declining bond market. It seems you would rather they had invested in a notepad and a pencil with which to write all those zeroes.
There's plenty of evidence that the books / gospels of the New Testament were cherry-picked from many different available ones...
The Dead Sea Scrolls didn't contain any candidate New Testament texts, most of them are too early and the rest are mostly of a secular nature.
What they did show, since it is Catholicism specifically at issue here, is the heritage of many Old Testament books. A Protestant Bible has a section in the middle labelled "Apocrypha", neither Old nor New Testament and not considered divine work. Catholic Bibles include them in the main body of the Old Testament. The Protestant argument for exclusion centred on the claim that they must be more recent, since they had never been found in Hebrew.
Until the Dead Sea Scrolls found many of them in Hebrew...
Yes, the Churches (any Church) have a lot to apologise for, but making up fantasy to suit an ad hoc argument does not really advance your cause.