Re: Magazines
... have just enough content to get enough users to pay for / pick up each issue ...
Or, the good old cover disk.
Having dabbled with using Linux as the base of my computing on some spare machines over several years I am, courtesy of Windows 11, trying to make a determined effort to switch to Mint as the everyday workhorse for my new laptop. It is not going as smoothly as I would like.
The basic install and set up was quite straight-forward and had most of the applications I needed. There were some underlying differences in Linux vs Windows that took a little time to sort through, but not a big deal to me and easily resolved with online research. (I would rate this as not necessarily a barrier for a 'non tech' user as they might be unfamiliar with achieving the same thing on Windows anyway and so would need to search online for a solution).
Then there are the frustratingly stubborn problems. Getting the Printer/Scanner connected - thanks for nothing Canon (MX340). Not really Linux's fault, but it is a consequence of trying to switch to Linux. Having still got Windows machines around I switch to them when printing is needed and will get the new laptop sorted over time, but I can see that an 'average' user facing the problem would be heading back to Windows.
My current PITA is setting default icons for file types - an application I am using does not set it up during its 'installation'. Using the file browser (Nemo) I can set the icon for a particular file, but not all files with that extension. So far no amount of playing around with mime.types and such has proved successful and it is getting annoying. I am interested in learning how to get Linux working** - but that is just a side interest, there are things I want to be doing on the computer.
** that might also be part of my problem - rather than simply trying various apps that claim to do the task I am trying to establish what goes on under the hood [to some extent].
Linux distros out-of-the-box achieve a great deal and, in my experience, are very capable, but problems can arise and can be tricky to resolve. The same can be said of Windows - but in many cases the path to sorting out problems is well trodden (though sometimes a bit extreme - reinstall Windows) so solutions can be found. At the moment I am finding myself spending too much of my time relying on old machines. YMMV
How is understanding the provenance of the software going into your system an outdated attitude for those concerned with high-assurance systems?
That a lot of OSS is a bit of a free for all certainly makes establishing provenance pretty difficult (if not impossible), and that might make it undesirable to include such software in your system (unless steps are taken to establish that it has not been compromised). However, unless the need for such checks is stated (and given value) it will not happen.
Provenance should not be a sole factor in determining that a piece of software meets the assurance levels needed for the system, but it adds to the evidence that is used to build the assurance case.
The story of the 1980s construction of the US embassy in Moscow illustrates how provenance and control over the origins are important. The initial US belief had been that it was OK to let the Soviets build the embassy as they could sweep it clean when they took possession. This proved to be so impractical (due to a good deal of "mischief") that it ended up with only a specially checked/protected room inside the building being considered secure. (That story is contrasted with the construction of the Russian embassy in Washington - which had a stringent security programme throughout its construction).
[Footnote: It was a Seagate 7200.11 (ST3750330AS) and I have found the firmware update (SD15) that eliminated the problem - saved with my other 'old' driver collections - but no sign of the fix (which was more interesting from an engineering hacking point of view). I hope I printed the notes and stored them with the cables 'just in case'. Ah, memories of the good old bad old days.]
I remember [years back] having one of those HDDs that had a notorious firmware bug in which some internal counter wrapping round would brick the drive. There was a fix that involved dismantling the drive and inserting some FTDI cables connected to a serial port on a working PC, then running up a terminal window and typing in various magic incantations. Ended up doing that twice to the same drive as it bricked itself again.
I thought I had filed away the details somewhere [might come in useful one day :-)] - but can't seem to lay my hands on it. Another problem with data recovery - finding the backups.
...
"New Improved", "Oiginal", "Original Pro", ... ad nauseum (though that might be reserved for the ad based licensing model where the LLM will generate the ads locally based on your use of it. Product placement baked into LLM training data - that's one for Douglas Powers Enterprises).
There is technical debt in the existing bodies of code, but also some gained value from the years of use and bug-elimination*. Re-writing established code is not without its risks so the decision whether to re-write or not is not always a straight-forward one.
* This is known to be far from perfect as bugs are still found, from time to time, in quite old libraries.
When writing new code the risks of repeating well known** errors are greater and so adopting languages (or, as mentioned here, compiler-enforced code profiles) that prevent those errors is more desirable.
** Despite some coding errors being well-known for decades they continue to re-occur in new code; people make mistakes. Based on the evidence of continuing repetition of well known coding errors, relying on people to know that they should not do something is not a reliable means of ensuring high quality code.
Having new code written in languages that prevent certain types of error helps prevent the addition of more problems to the code base and allows V&V activities to focus on other error types. Using well-established libraries, even if they are written in a less rigorous language, avoids making other types of error (such as handling corner-cases in the requirements) that might arise during a re-write of the functionality. The hybrid code might not be perfect but will often be the most cost-effective way of reducing overall system errors and provide the best opportunities of delivering effective V&V and high quality code in a constrained development budget.
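To make the hybrid idea concrete, here is a minimal Rust sketch of new code leaning on an established C routine rather than re-writing it (strlen is just a stand-in for any long-serving library, the wrapper name is mine, and a real project would more likely pull in an existing binding crate than hand-roll the extern block):

    use std::ffi::CString;
    use std::os::raw::c_char;

    // Binding to a long-established C library routine, declared rather than re-written.
    extern "C" {
        fn strlen(s: *const c_char) -> usize;
    }

    // A safe wrapper: callers cannot hand over a dangling or unterminated pointer,
    // so the memory-safety risk is confined to this one small unsafe block.
    fn c_string_length(text: &str) -> usize {
        let c_text = CString::new(text).expect("no interior NUL bytes");
        unsafe { strlen(c_text.as_ptr()) }
    }

    fn main() {
        println!("{}", c_string_length("hybrid code")); // prints 11
    }

The new code gets the memory-safety guarantees; the battle-hardened library keeps the behaviour that years of use have shaken the bugs out of.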
I understand how key escrow works (or rather - how it is meant to work). My point was that the determined user has options that mean that intermediaries do not get the opportunity to see the private key.
If you rely on the intermediary's off-the-shelf application to do all your key set-up then, yes, this could include an escrow facility.
Buying from the US is not without its problems - even if you are buying off-the-shelf. Getting in on future development is even more challenging. The US even has problems with its internal projects with state representatives holding out for more spend in their state. Even European countries on their own have internal politics to contend with as the lamentable state of Germany's armed forces bears witness to.
Collaborative European projects are not without their unique challenges, but there are many successes and Europe does need to sort itself out as a bloc. Spending priorities in Europe have not favoured military spending and that is possibly the biggest challenge.
Tornado was good for dashing off into the skies above the North Sea, launching missiles at some unfriendlies, and getting out of range of any return fire, but it was hardly a fighter. It was a successful programme that satisfied the needs of those that operated it.
Electronics is used to allow relaxed-stability/unstable airframe designs, which confer a number of advantages. But the electronics requires effective controls in order to do its thing. It can keep you out of the danger zones of the airframe during controlled flight, giving you carefree handling. But if you have a problem like engine(s) out you have a limited amount of time before you start losing control due to lack of electrical power, lack of hydraulic power, and/or lack of airspeed. Once you have lost control the airframe will do its own thing, such as finding one of its stable modes (e.g. a flat spin), until the flight ends.
With that 1979 date it looks like you are including the technology development programmes like the Active Controls Technology Jaguar and Experimental Aircraft Programme.
The French were in and out of the early stages of the programme and had a specific need for a Carrier Aircraft that the other nations didn't - this created compromises that the other nations didn't want. Once they had got all the technical data out of the programme the French left to do their own thing (which some consider to have been their plan all along) and the remaining partners needed to re-adjust the programme.
The remaining programme also had to deal with the politics of a multi-nation programme - as well as the changing situation brought by the re-unification of Germany which particularly affected that partner.
Most of the Eurofighter nations already had a strike aircraft (Tornado - another programme considered a success), so prioritizing Typhoon's interceptor role was a natural choice; the swing role of Typhoon was always a consideration and not just "tacked on".
Flat spins are a nasty situation to find yourself in. F-14s had a bit of a reputation for being susceptible to this - until they got their Digital Flight Control System upgrade. The double engine flame-out crash of the DA6 Typhoon ended in an inverted flat-spin with the airframe pancaking into the ground. So it is not a totally surprising end to a bad day - the circumstances leading up to it are usually more "interesting".
... an argument that once code has been released under (A)GPL terms then it can never been relicensed more restrictively ...
[IANAL] My understanding of things is that a copyright holder can issue copies under whatever [reasonable] terms they choose and it is quite reasonable to license copies given to different parties differently.
Thus making an open/free copy widely available under one set of licensing terms does not preclude them also making it available under other terms. The fact that a copy has been made available under these other terms does not cancel the rights of others to use what they have.
As copyright holders they could also modify the software or use it as a base for another product and provide that under a commercial licence. If the modification has not been made available under open/free terms then others cannot simply apply the open/free terms to the modified software. Others still have the right to use the unmodified software under the open/free terms.
Things like the GPL also have a copyright - so somebody using that licence text as the basis for their licence might run into copyright issues unless the GPL text is itself published under terms that allow it to be used in the way they are using it.
I do not know which points are being argued in the case before the courts, and not being a lawyer maybe I don't just need a legal team, I need the Eagle Team.
I find them useful for running photos down to the local print shop and for swapping "too large to email" video clips with family/friends. Can ensure (to a limited value of certainty) they are erased of any other data and will not be too unhappy if they don't get promptly returned.
Yes, I know about various cloud services for data sharing - but largely prefer to avoid them because "cloud"
Well, if my A in A Level Physics taught me one thing it was that there was an awful lot more that I didn't know about a lot of things. It got even worse with my degree. And that was over 40 years ago and they have invented even more physics since then. But I still enjoy hearing about Quantum and Relativistic Physics subjects - typically Don Lincoln on the FermiLab channel or Matt O'Dowd on PBS Space Time. The universe is a weird place (even outside of social media).
Agree with all your points. My comment was to point out that there is often a more complex cost-benefit equation - particularly if quality/correctness/security have value, and I think the points you make here expand on that.
Unfortunately a "race to the bottom" on costs also has adverse consequences, many of which get regular coverage hereabouts. I think that also applies to the "skills" of those working in large parts of the programming industry.
It would be nice (IMHO) if the software professions could shift the focus of software creation so that more importance was given to eliminating classes of error we have known about for decades and still see arising. Despite all the advances made in programming language design and compiler technology there seems to be a great deal of resistance to moving on from a rather minimalist 1970s approach. (That is more of a general lament rather than a belief that Rust in particular is the way forward - in my active days Ada was the 'new way').
"If people are using pointer arithmetic, it's for a reason. And Rust won't be able to do a damn thing differently or any more securely about that."
If the reason people are using pointer arithmetic is that the language only provides pointer arithmetic as the means to operate on particular data types, whereas another language provides a more complete abstraction of such data types, then you can do something about it.
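Purely as an illustration of what a more complete abstraction buys you (a toy Rust sketch of my own, not anyone's production code): a slice carries its own length, so the pointer arithmetic disappears and every access is bounds-checked.

    // A C-style approach would walk a raw pointer and trust the caller to get the
    // length right. With a slice the length travels with the data, so the same job
    // needs no pointer arithmetic and out-of-range access cannot become silent corruption.
    fn sum(values: &[i32]) -> i64 {
        values.iter().map(|&v| v as i64).sum()
    }

    fn main() {
        let data = [3, 1, 4, 1, 5, 9];
        println!("{}", sum(&data));     // 23
        println!("{:?}", data.get(99)); // None, rather than reading past the end
    }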
Perhaps no re-skilling costs, but there might be other costs associated with achieving and maintaining high-quality C code that could be reduced by adopting Rust.
For example, the additional peer review and test costs associated with ensuring you don't have memory safety errors could be dispensed with, or the re-work rates caused by defects found during testing might be lower with Rust than with C because with Rust the defects are avoided or revealed immediately, when the code writer performs a test compilation before submitting the code.
Preventing code defects or eliminating them earlier in the development lifecycle is often more cost-effective.
Whilst the study used names to introduce an obvious race/ethnicity marker into a set of controlled data (then observed the AI making different assessments) it is also noted that in open data there are other indicators, based on the use of language, that can indicate race/ethnicity/etc and can be picked up by AIs.
One of the things I really do miss from Usenet/Newsgroups is the ability to put a thread on ignore.
Sometimes it is useful to block/ignore a Troll/Shit-poster entirely - but more often it would be nice to skip a sub-thread that has gone off-topic or descended into irrelevance (other than for the parties bickering).
Conversely, sometimes I find a very informative post and take time to look at what other things the poster has commented on to see what other pearls there might be. Often I think it would be nice to put them on a watch-list.
Ho hum - if wishes were horses ...
Supplementary Question (as we seem to have some knowledgeable types around):
How robust/protective are the PSUs for the "smarts" in the meter? Having had electronic equipment taken out by mains voltage yoyo-ing up and down as the grid attempts to restore power, it would not seem a good idea for meters to be susceptible to such events. (ISTR that a type of PSU often used in household electronics does not react nicely to the mains repeatedly switching on and off in short order).
TIA
I would be all for judicious improvements, but security is usually a trade-off with convenience (at some point the inconvenience of a security feature outweighs the benefit obtained). And absolute security is very difficult to achieve (I noticed a recent report about how German police circumvented Tor anonymity).
Given the history of marketing agencies and their enthusiasm (along with various others) for surreptitiously tracking users across the internet, there is good reason to be suspicious of a move to allow more adverts and such-like through - though the desire for revenue to fund future work is understandable.
Raising awareness of short-comings in current offerings (really suppliers should be open about such things) and campaigning for improvements (or against regression) is commendable.
They can start by making a "real" Incognito Mode based on Tor.
Please no! When I want Tor levels of anonymity I have Tor (based on Firefox) and I accept the overheads and restrictions that come with that - but it would be a PITA for most things.
The current form of Incognito/Private browsing is a good choice for a lot of things (like banking, shopping for SWMBO's Christmas Present) where I do not want to leave a load of history lying around and appreciate an additional level of sandboxing to help keep such transactions away from my other browsing activities. Problems like having to repeatedly request new Tor circuits until I get an in-country exit node so that I don't trigger bank security are not what I want, and would be the sort of thing that would cause average users to leave in droves.
Tor does Tor; Firefox needs to focus on its core purpose and provide a solid internet experience for users.
Judging by some of the recently reported behaviours there is still rampant disavowal of responsibility and obstruction of remedy. Something more forceful - such as application of a hob-nailed boot - is needed.
I saw something recently where the PO was saying that compensation delays were being caused by the lawyers they instructed taking a ‘conventional legalistic approach’. Well, instruct them differently or dismiss them!
Perhaps by making these tools open-source they make it practical for independents to get into the game and produce non-shit movies.
Of course, good movies need more than special effects and swish production tools - but having these available would help independents produce a more polished product and learn skills that would help make them commercially successful.
In my [very limited] experience amateur film makers can be very interested in the technical aspects of their 'hobby' and there are probably going to be many who enjoy working with these tools.
Well it all depends on how sad you want to be.
You could keep them and try to explain to your SO or offspring the value of keeping such things - just in case. And mark my words, the day will [eventually] come when someone will ask you if you have something that will fit some bit of old kit (Centronics cable, anyone?). You now have the added advantage of expanding your range of USB-C cables.
You could set up an eBay account and flog off your cherished antiques.
Or, you could go old school and take a stall at a local boot fair and flog them there.
Alternatively, you could take the hard step of letting go and sending them to the recycling centre (but don't forget [UK only] to take the fuses out of any mains plugs and add to your collection of spares)
If only it were possible to buy some sort of cap to fit over the socket and protect it from ingress of dirt. You might like to think that some tat bazaar would have something like that.
[ for those that don't want to think - https://www.amazon.co.uk/dp/B0BRQ7SCF6 ]
But the Turing machine is a concept machine that provides a minimalist design that can be shown to be able to complete any general computing task. It is its use in this conceptual role, rather than as a usefully practical computing engine, that makes it a foundation of modern Computer Science - in particular, being able to show that another architecture can implement a Turing Machine (that it is Turing Complete) provides a critical insight about that architecture.
The von Neumann architecture is concerned with the architectures of practical machines (but can also be applied to more abstract architectures).
I don't think anybody would seriously tout a physical implementation of a Turing Machine as a model for a usefully practical computing engine.
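For anyone who hasn't met one, a toy sketch of the idea in Rust (the names and the "flip the bits" rule are purely mine, chosen for illustration) - a tape, a head, a current state and a transition table really is all there is to it:

    use std::collections::HashMap;

    // (state, symbol read) -> (symbol to write, head move: +1 right / -1 left, next state)
    type Rules = HashMap<(u8, char), (char, i32, u8)>;

    fn run(mut tape: Vec<char>, rules: &Rules, mut state: u8, halt: u8) -> Vec<char> {
        let mut head: usize = 0;
        while state != halt {
            let symbol = tape[head];
            let &(write, step, next) = rules
                .get(&(state, symbol))
                .expect("no rule for this state/symbol pair");
            tape[head] = write;
            state = next;
            // The tape is conceptually infinite: grow it with blanks as the head wanders off the end.
            if step > 0 {
                head += 1;
                if head == tape.len() {
                    tape.push('_');
                }
            } else if head > 0 {
                head -= 1;
            } else {
                tape.insert(0, '_');
            }
        }
        tape
    }

    fn main() {
        // A one-working-state machine: flip every bit, then halt on the first blank.
        let mut rules: Rules = HashMap::new();
        rules.insert((0, '0'), ('1', 1, 0));
        rules.insert((0, '1'), ('0', 1, 0));
        rules.insert((0, '_'), ('_', -1, 1)); // state 1 is the halting state
        let tape = run("1011_".chars().collect(), &rules, 0, 1);
        println!("{}", tape.into_iter().collect::<String>()); // prints 0100_
    }

Laughably impractical as a computer, but being able to show that a real architecture could host something like the above is exactly the "Turing Complete" insight.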
"Turing might have had a bit of a blind spot when it comes to working with others"
Hmm, I think he would not be alone in that respect - even in the context of WWII where the stakes were so high there were plenty of "personalities" who seemed unable to "play nicely" with others.
I have on occasion dumped my shopping when I have become totally fed up with hunting for items after one of their "let's move stuff to different aisles for shits and giggles" events.
On other occasions, when I have time on my hands, I find one of the supervisor staff and repeatedly ask where items are. I noted on a recent occasion that the local Tesco had left placards around saying where the item that used to be there could now be found - but that is probably in response to wider customer griping.
What sort of static analysis are you thinking of? There are forms of static analysis (such as Polyspace and AbsInt for C; SPARK for Ada) which can unearth very deep errors. Proof of Absence of RunTime Errors has been around for decades and the automated proof engines have been growing in capability all the time.
This does not mean you should throw any rubbish at the tools - you need to follow sound design and implementation principles to get the best benefit. With poor code you tend to get swamped with warnings (rather than unambiguous hard errors or green lights) which you have to work to resolve.
These tools also lead in to formal verification of code behaviours. PARTE provides an interesting stepping stone in that runtime errors are indicative of erroneous code, but you don't need to formally define the behaviour since it can be derived from the language definition (or basic maths). However, PARTE often benefits from additional code annotations that define intended constraints for the code operation - the tools verify these constraints will be met and use the constraints to simplify subsequent proofs.
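SPARK annotations look nothing like this, and a prover discharges them statically rather than at the point of construction, but as a rough Rust-flavoured analogue of the "state the constraint once, then let everything downstream rely on it" idea (the type and the percentage example are mine):

    // The invariant 0..=100 is stated and checked in exactly one place...
    #[derive(Clone, Copy)]
    struct Percentage(u8);

    impl Percentage {
        fn new(value: u8) -> Option<Percentage> {
            if value <= 100 { Some(Percentage(value)) } else { None }
        }
        // ...so code taking a Percentage can assume the range holds without re-checking,
        // much as later proofs can build on an already-verified constraint.
        fn of(self, amount: u32) -> u32 {
            amount * self.0 as u32 / 100
        }
    }

    fn main() {
        let vat = Percentage::new(20).expect("within range");
        println!("{}", vat.of(250)); // 50
        assert!(Percentage::new(150).is_none()); // out-of-range values never come into existence
    }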