Re: The universe is weird, we want a refund
Back when HP still made instruments they released a new model of atomic clock, guaranteed to lose or gain no more than one second in 234,000 years.
The guarantee lasted five years...
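That spec is easy to sanity-check: one second in 234,000 years works out to a fractional frequency error on the order of 10^-13. A quick back-of-envelope in Python:

```python
# Rough check of the spec: 1 second gained or lost in 234,000 years
# corresponds to a fractional frequency error of about 1.4e-13.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~3.156e7 s

spec_years = 234_000
total_seconds = spec_years * SECONDS_PER_YEAR  # ~7.4e12 s
fractional_error = 1 / total_seconds

print(f"{fractional_error:.2e}")               # on the order of 1e-13
```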
897 posts • joined 27 Dec 2008
Amazing how you can come to that conclusion completely on your own and then start criticising based on what is essentially an uninformed wild guess.
User input has already been checked: the file has to have been parsed to identify the need for a colour space conversion. And based on the brief description in the article it doesn't sound like a buffer overflow. My hunch would be a maths issue, a division by zero or something like that.
Now the anachronism that really makes me laugh is the preference for having the terminal emulators look like a 1980 green on black screen.
There are some good reasons for light on dark on screen, though. In the old days it minimised the effect of any screen flicker, both consciously and subconsciously. The other effect is relevant even now in that it reduces the simple amount of light shining in your face all day. Reading on screen is fundamentally different to reading on paper in that the screen is its own light source: treating it as if it was paper is a non sequitur.
You'll find that many text editors that default to dark on white are not really a white background anyway - more grey90 in X11 speak. Even that subtle reduction in brightness noticeably reduces the amount of glare in your face and reduces eyestrain, and don't forget your eyes have better resolution when your pupils are more open.
I recall reading some study that had been done into this perhaps 30 years ago that concluded the most comfortable colour combination was actually yellow on blue, but like most such studies based on users and actual data it was completely ignored. Perhaps 15-20 years ago I started configuring my terminals and editors to cream on dark blue in response, and I definitely find working on anything else does get tiresome more quickly. I also tone down the syntax highlighting in my editor to just subtle changes of hue rather than abrupt changes of colour. The only change in font is to italicise comments. I find the default highlighting settings in many editors are almost designed to distract as opposed to allowing you to focus on the code.
An obsession with the new and shiny coupled with a lack of any depth of knowledge and a healthy dose of bullshit.
Three quarters of SO respondents use assembler and hate it? I doubt three quarters of them would be able to name an assembler.
64bit will be big enough to memory map anything you like for the foreseeable future. I did some calculations a couple of years ago and 2^64 bytes was equivalent to several months global hard drive production. Just had a look for some current statistics and it still seems to be a couple of weeks to a month's worth.
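The arithmetic is easy enough to redo. The annual production figure below is purely an illustrative assumption, not a sourced statistic, but it shows the shape of the calculation:

```python
# Back-of-envelope: how long would global HDD output take to fill a full
# 64-bit address space?  The 400 EB/year production figure is an assumed
# illustrative value, not a sourced statistic.
EXABYTE = 10**18
address_space = 2**64                          # ~1.8e19 bytes (16 EiB)

assumed_annual_production = 400 * EXABYTE      # assumption for illustration
days = address_space / assumed_annual_production * 365.25
print(f"{days:.1f} days")                      # roughly two to three weeks
```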
Amazon search has always been crap, but together the marketplace and sponsored results have made it almost useless thanks to clickbait descriptions (don't describe what it isn't) and questionable sponsoring of terms.
A couple of months ago I searched for "ball pein hammer" on Amazon. Got a couple of novels back in the sponsored results... Daddy's Little Angel and Stepbrother with Benefits. I think you can imagine the genre. Although I did wonder who would pay to place those in a search for hammers.
But we do have strong evidence that it is wrong, or if you prefer, incomplete, just as relativity exposed the limitations of Newton's laws of motion. The fact that we can't create a consistent view between GR and quantum mechanics should and has set alarm bells ringing. Or have you missed all the speculation regarding string theory or other hypothetical solutions?
There isn't a maximum length in RS232 and never has been. The only mention of a specific length is in the preamble discussing the aims of the standard rather than any inherent technical limit. Lines measuring hundreds of feet are quite common and have been for decades.
Note I still use the present tense too: it may not be used as much these days, mostly for console and configuration ports, but it is still in widespread use. I still keep a couple of serial cable working sheets in my ring binder portfolio thingy - the sheet is designed to avoid silly transcription errors, which in my experience are the biggest hassle with RS232. One side is dedicated to RJ45 modular adapters, the other to more general cables. Certainly used one within the last month, though I can't remember what it was for.
Could well be something like that. The earlier 248 day issue is exactly the duration that older Unix hands will recognise as the 'lbolt' issue: a variable holding the number of clock ticks since boot overflows a signed 32 bit int after 248 days, assuming clock ticks at 100Hz, as was usual back then and is still quite common.
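The 248 days drops straight out of the arithmetic, as a quick sketch shows:

```python
# A signed 32-bit tick counter at 100 Hz overflows after ~248.5 days:
HZ = 100
max_ticks = 2**31 - 1          # largest value a signed 32-bit int can hold
seconds = max_ticks / HZ
days = seconds / 86400
print(f"{days:.1f} days")      # ~248.6 days
```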
See e.g. here. The issue has been known about and the mitigation well documented for at least 30 years. Makes you wonder about the monkeys they have coding this stuff.
The best way is to run ethernet from the router to secondary wireless access points. You've not really described your home, but I'd probably begin by putting a second WAP on the second floor, assuming your router is on the ground floor. If you are going to the effort of stringing cables through floors, though, I would strongly recommend putting jacks in on the first floor at the same time and doubling up the cables, i.e. two runs to each floor. It's comparatively little extra work, costs very little in materials compared to the labour involved, and generally gives you the biggest bang for buck in terms of being able to easily extend the network in future.
It's easy to say that after someone has already encountered, identified and rectified the issue. Essentially they have said "we've had a problem with these..." and your response is "well don't use them then". They've already said they're replacing them so you don't get brownie points for stating the bleeding obvious with hindsight. The smart thing is avoiding the issue cropping up in future after it was initially encountered.
It adapts quite well; the event-driven paradigm suits FP, where I/O is the traditional ballache. The problems tend to arise with coders who have no grounding in functional code: it is a considerably different mindset to procedural code. Essentially, instead of saying what you want to do, you make a set of statements that are true.
The F in FPGA comes from Field: the metal connection layer was the bit you programmed to set the functionality on a field of components, originally. A chip then would be made with about 10 expensive quartz masks, so for small production runs the original FPGAs were basically a shit load of components where you changed only the wiring layer and connections - two expensive masks to prototype on a relatively mass-produced blank set of components, where the cost of the other 8 became insignificant.
What you are describing there is an uncommitted logic array or ULA, and they are still very much a thing. FPGAs are not mask programmed by definition: the "field" is where they are programmed, i.e. after leaving the fab.
Not familiar with the Next but it may well be transferred to an onboard chip for initial loading, some FPGAs have only partial support for the serial memory protocols and lack e.g. clock stretching if the data source can't keep up with the device. I wouldn't worry about it "wearing out" though, some of the later chips go as far as supporting partial re-programming: you can reconfigure half the chip while the other half is still running whatever it is already programmed with.
They've been breaking it for years, if I enter n1a in the address bar I want to go to that host. I don't want to search for it - that is what a search bar is for. I certainly don't want you to try n1a.com and similar before you go to where I told you.
Just do what I damn well tell you. I'm fed up of browser vendors making arbitrary decisions about what is in my interests.
But Kraft did buy it, thereby satisfactorily netting out the financial transactions as recorded in the audit outputs. Hypothesising about what would have happened if they hadn't is utterly pointless.
No, it's a key issue. If the risk of the deal falling through is transferred along with the licence then a real, substantive service is being provided. Essentially the risk is being insured against and the £400k is the premium for this service. From Autonomy's perspective the transaction is a done deal so it may be appropriate to book it as completed.
On the other hand if no risk was transferred it is difficult to see what service of substance was provided. It is akin to paying an insurance premium but not getting any cover. The revenue isn't guaranteed at that point and it may be inappropriate to book it until it becomes so (subject to usual provisos about bad debt etc). It's then difficult to see what in substance occurs as a result of the sequence of transactions. Auditors are expected to be able to unravel that kind of pure paper shuffling and report on the actual underlying reality.
No, it's established law that you can't contract for the benefit of a criminal party. You'd get laughed out of court if you tried to enforce a lot of those provisions. Legal costs and civil damages, maybe, but you can't contract for criminal fines, community service or jail time.
change orbits, chase something down, catch up, slow down to match it's speed,
Orbital mechanics is often counter-intuitive. You need to slow down to catch something up, then speed up to 'brake'. Slowing down pulls you into a lower, faster orbit. Speeding up again raises you to the same height and speed as the target.
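The counter-intuitive bit falls out of the circular-orbit speed formula v = sqrt(mu/r): lower orbits are faster. A minimal sketch (the altitudes are arbitrary examples):

```python
import math

# Circular orbital speed v = sqrt(mu / r): a lower orbit is a faster one.
MU_EARTH = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6              # mean Earth radius, m

def circular_speed(altitude_m):
    """Speed of a circular orbit at the given altitude above the surface."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

v_400 = circular_speed(400e3)  # roughly ISS altitude
v_300 = circular_speed(300e3)  # 100 km lower
print(v_300 > v_400)           # True: the lower orbit moves faster
```

So braking drops you into a lower, faster orbit and you gain on the target ahead of you, exactly as described above.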
I usually see them in the (24 hr) McDonald's in the local Asda around 5am. Most I can recall seeing is 11 officers in the Maccies and six cars in the car park.
You see that many and stop and think "presumably that is the entire complement of Police that are supposed to be patrolling Preston."
If you really need cites it shows how little you have to contribute to the discussion. Have you ever read through the code or documentation of pretty much any GNU project? Stallman himself has commented on the issue; as I recall he referred to the GNU Emacs manual. There are several pages of copyright notices, each mandated by a chunk of BSD code incorporated into the editor. What was imported remains covered by its original licence so those notices are required, but any changes are covered by the GPL and can't flow back to the original project. This isn't some novel interpretation, it is the very core aim of the GPL.
And that is my entire point. BSD is relatively tolerant of relicensing which is how BSD code can end up in GPL projects in the first instance. The reverse is not permitted by the GPL. Once the code is within the GPL any changes made to it can't be back-ported to the BSD project because the GPL expressly prohibits that kind of relicensing. Hence GPL projects are free to leech from the wider open source community but don't give back.
In the context of the current article it also means that projects can't be relicensed. If circumstances change making the project unsustainable it dies as a result. Sure, someone else can pick up the code under either licence but someone has to do it: if it was a given there would be no market for a commercial offering.
Err, there's a little bit of a contradiction here. You claim one of the issues I cite is not an issue. As evidence of this you cite a modified version of the licence (i.e. not a pure GPL) which partially addresses the issue raised.
Which begs the question, if this isn't a problem why is the modified licence needed in the first instance?
I do feel bad for those who are trying to make a living from writing it.
I don't think you need to: in days gone by there never was any expectation of a financial reward for open source, it was put out there for the hell of it. Sure, there would be rewards, reputation, something on your CV etc but not necessarily directly financial. If money was to be made it was for commercial-level support, but outside of a few key apps that is always a non-starter: I might well be willing to pay for support on e.g. PostgreSQL, but FUSE... not a hope in hell.
The idea that you somehow deserve a living from open-source is a modern one and fundamentally at odds with its very premise. In essence it is to say "I'm sharing this with everyone, for free, use it how and for what you want". You can't then notice that someone is using it to (gasp) make money and demand some form of kick-back after the event.
Richard Stallman's view is this: keep the same level of freedom you received with the code.
Not at all. Huge chunks of GPL code are lifted verbatim from sources licensed under BSD and other agreements. This is true enough today but was especially true in the early days of the FSF, when a lot of GNU projects were essentially BSD code with a few minor additions and the entirety plastered with the much more restrictive GPL.
GPL strings: don't add new restrictions/strings on top of those imposed by the upstream provider.
See the above. It has the effect of locking the upstream provider out of changes: if one of the chunks of BSD code inside a GPL project gets modified with e.g. a one line bug fix or three line portability patch, that can't then be merged back into the original version.
This occurs even within GPL projects: the original author retains their rights, but widely used projects will inevitably end up a patchwork quilt of tiny contributions by any number of people. After a while it becomes essentially impossible to establish who has rights over any given section of code if it needs to be re-licensed for whatever reason.
The permissive licenses promote sharing: essentially do what you like subject to the minimal conditions of the license: generally, acknowledge your sources. Stallman talks the talk about sharing but doesn't walk the walk: it is at best like a private member's club: we will share this between ourselves subject to our rules. If you want to use it you have to abide by them. However, if we want to use your code we'll use it regardless of your rules. We'll then go on to apply our rules to your code.
See, no neurons have been hurt in the process.
Of course not, because you haven't even begun the process of critical thinking. Next time, try reading the very licence you pontificate about.
No, exactly the same issues arise with GPL and in that case a lot of the solutions are simply not available because of the strings of the license. GPL is essentially Richard Stallman's view of self-proclaimed egalitarianism: we'll pretend this is free provided you agree with my entire vision. If you don't we'll leech your code anyway. If you do but need to reconsider, tough, because the license is structured in a way that over time even the original author loses all rights to the code.
It could well be true. As I understand it the NBN is essentially a link between the customer and the ISP. As such it's likely to be based on low level links and virtual circuits using MPLS or similar and the carrier is moving abstract data between the two sites and whether it is IP or something else makes no difference to the carrier. IP headers, packets, segments etc aren't pertinent to the carrier's role, they are simply part of the data to be moved. Any logging, blocking etc based on the content is managed by the ISP at the other end of the link who has knowledge of what structure is imposed on the data sent.
Maybe not, since "I threw in code that wrote the variable to two locations" is clearly NOT atomic...
This immediately strikes me as one of those 'lack of formal education' comments, since it appears to misunderstand the very premise of atomicity. It does not mean the code is simple or occurs in a single cycle, it simply means it can't be separated, i.e. it all runs in a single block of time without interference or interruption. It goes far further than those elementary atomic ops you may see defined in your compiler documentation.
Lock the bus, mask interrupts, anything else you need to do, and you can have arbitrarily complex code as an atomic op.
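A minimal sketch of the same idea in userspace Python, using a mutual-exclusion lock as a stand-in for bus locking or interrupt masking (the two-field `state` dict and thread counts are arbitrary examples):

```python
import threading

# A lock makes an arbitrarily complex compound update atomic with respect
# to everyone else who takes the same lock: "atomic" means indivisible as
# observed by other participants, not "a single instruction".
lock = threading.Lock()
state = {"a": 0, "b": 0}        # invariant: a == b

def update(n):
    for _ in range(n):
        with lock:              # nobody sees a half-done update
            state["a"] += 1
            # ... arbitrarily more work could go here ...
            state["b"] += 1

threads = [threading.Thread(target=update, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state["a"] == state["b"] == 40_000)   # invariant held throughout
```

Without the lock, another thread could observe `a` incremented but not `b`; with it, the whole block behaves as one indivisible operation, however long it is.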
The human body can survive far higher Gs than commonly assumed. It's probably around 20 years since G meters began to be installed in Formula 1 and IndyCar. At the time it was assumed the maximum remotely survivable G loading was around 50. The very first crash they got the data from had a peak of (from memory) 148 Gs. The driver jumped up and out of the car completely unhurt.
Presumably someone else who hasn't read and understood the licences they pontificate about.
The GPL doesn't require you to open up your changes either if they are purely in-house. Sure, you can't redistribute the code even in binary form in that case but for the big cloud players that isn't a motivation, indeed exclusivity is a virtue.
Well, yes, that is what it is designed for.
While there are clearly some vulnerabilities here I can't help but feel that it is being over-egged in significance. In substance it amounts to an authentication bug, screaming "look what I can do" is hardly evidence of further flaws when those are documented features of the system.
Bear in mind the wider context here: these are professional grade server boards. People then pay a premium for these BMC equipped boards. If you opt to use a shared rather than dedicated IPMI port, one of the first things you are asked is what VLAN you would like the remote management on, to keep the traffic segregated. Even if you never use it (possibly because you bought an inappropriate board in the first place) the vulnerability doesn't arise, because it needs a prior login during that power cycle and before the BMC is rebooted.
So yes, there is a flaw here which has been patched. But do not start down the "dumb lusers don't understand this" road: it doesn't affect them, and if for some reason they are using what are advanced professional-grade tools it is ultimately their responsibility to mitigate the well understood risks of these features.
Core as in ALU, so essentially integer math and logic.
But that would mess up the counting since cores have had multiple integer units going back as far as the original Pentium. You wouldn't call that a dual core processor. I prefer the simultaneous thread metric proposed earlier.
Whose $300m are you going to spend to do it? Or perhaps 10 times that if you wanted to get to the interesting stuff (the water) and report back. It's a lot of money (and a lot of scientific careers) to blow on perhaps a 15 year project that would be incredibly speculative and have little chance of success, since there isn't yet the detail to properly plan such a mission. That's what these two missions will get.
It's easy to say you want the results now but resources are limited and to get any slice of the pie you have to show you are not needlessly pissing money against the wall.
...closing the Start button.
I'm not sure of the precise details but from memory you right clicked the Start button to bring up a context menu. Among the entries was a close option. If you clicked it the button disappeared - taskbar still there, space reserved for the Start button, just no actual button.
Only way I found to get it back was to bring up Task Manager and kill Explorer. It would be automatically respawned complete with a new Start button.
Biting the hand that feeds IT © 1998–2020