Well, obviously, people have been voting for the wrong corrupt populist autocratic lizards. Got any gin?
(pace Douglas Adams)
If FLOSS (e.g. GPL2) stops being maintained by whoever is doing the maintaining, anyone can (as a result of the licence) pick up the pieces, fork the project, and continue.
If closed source stops being maintained by whoever is doing the maintaining, copyright laws prevent the software from being forked and continued for anything other than private use. You'll need to negotiate for rights first to do anything else. So as a company reliant on such software, you cannot spread the cost of continuing without getting the rights to distribute/publish.
Immediately the risk becomes huge. If you are big enough, you can enter an escrow agreement so you get the source if the original owner fails or decides to walk away: but you really need to get distribution rights as well.
Back to FLOSS: if your company is dependent on a defunct FLOSS project, you can pay someone else to pick up the pieces. There's no licensing issue. You can do it in-house if you have the resources, or you can pay external programmers. You are free to do what you want with the code, and collaboration with other groups who need the same thing to spread the cost is easily possible.
Choosing closed-source is taking on a whole lot of risk, and leaving your crown jewels in the velvet-gloved fist of the software supplier. Choosing FLOSS means you have the means to control your future use of the software. Whether you choose to use that control or not is up to you.
To be fair to the numpties, it's common for login screens not to specify which part of the credentials is incorrect. The idea is that any hint makes life easier for intruders. But "An error has occurred" is a stupid way to tell the user that the credentials are wrong.
I use a particular service once every few weeks, and every time, it fails to allow me to log in. I have to use the reset password option every <insert expletive for emphasis> time. I know that the password I type in is correct (as I have it written down from last time*), but no, password reset or I'm not getting in. I know the username is correct, as that is the email address to which it sends password reset emails.
Another of life's little irritations.
*Yes, yes, I know. If I lost access to the service it would not be the end of the world, and the username and password are unique to that service. Anyone finding my credentials will be able to fail to log in in the same way as me, and be unable to try and log in on any other service. I also know that some systems force you to change the password from the one sent in the password reset email. This one doesn't. But in any case, I've tried both changing it and not changing it, and neither option works.
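A minimal sketch of the generic-message idea: one failure message for both "no such user" and "wrong password", so intruders can't enumerate usernames. The user store and bare SHA-256 hashing here are illustrative assumptions only (a real system would use a salted, slow hash like bcrypt or argon2):

```python
import hashlib
import hmac

# Illustrative in-memory user store - an assumption for this sketch.
USERS = {"alice@example.com": hashlib.sha256(b"correct horse").hexdigest()}
DUMMY_HASH = hashlib.sha256(b"dummy").hexdigest()

def login(username, password):
    stored = USERS.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    # Always perform a comparison, even for unknown users, so response
    # timing doesn't reveal whether the username exists.
    ok = hmac.compare_digest(stored or DUMMY_HASH, supplied) and stored is not None
    # One message for both failure modes - but a *meaningful* one,
    # not "An error has occurred".
    return "Welcome" if ok else "Invalid username or password"

print(login("alice@example.com", "wrong password"))   # Invalid username or password
print(login("no-such-user@example.com", "anything"))  # Invalid username or password
```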
There's an xkcd for standards which applies to 'standard' processes as well, although multiplying up to 28,000 of the things is impressive. Anyone who looks at a standard tends to think of a variation they need (want) and following that, it is a short step to multiplication of 'solutions'. Saturday Morning Breakfast Cereal makes essentially the same point about 'definitions' as applied to Social Sciences, but it is, in my experience, a generally applicable point.
As for BPA analysis, it suffers, like code, from premature optimisation. As Donald Knuth famously said in his 1974 ACM Turing Lecture:
"The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming."
Donald Knuth, "Computer Programming as an Art" (Communications of the ACM, December 1974, Volume 17, Number 12)
Removing flexibility from a process is a premature optimisation. The art of a good analysis is to leave the process flexible enough to cope with its anticipated use-cases (including the rare ones, and reasonable variations), while being standard enough to benefit from efficiencies of scale. If your process is too difficult to use, or can't be used by the users for all the things they need to do, it has failed. There's a great deal more to do than just automating the most frequently used steps and calling it a day. Sadly, that level of facile 'analysis' happens all too often.
For those who may not understand this joke, it is based on the notion that Mary, the mother of Jesus, was without sin, so she could be qualified to cast the first stone. Her aim was off though, it should not have hit Jesus, but rather the 'woman taken in adultery'.
The notion that Mary is/was* sinless is Roman Catholic dogma.
*Many people believe that Mary never died, as such, but was bodily taken up into heaven (assumed), others believe she died before being assumed. Lots of clever people have debated this kind of stuff for centuries, so I'm just presenting the bare facts without comment.
In control theory, we talk of servo control of a feedback system (servo has the same root as slave, incidentally).
As it is the concept of slavery that is objectionable, then it follows that servo is suspect, and also robot, which comes from the Czech word robota*, meaning serfdom, villeinage, or corvée with an original etymological root from a word meaning slave/slavery. Adoption of alternative forms might take a while.
Finding useful and usable neutral terms to use that are also sufficiently precise and nuanced is a challenge. You only need to look at the history of nomenclature in chemistry to see how long change takes to embed.
*Famously brought into English via Karel Čapek's play Rossum's Universal Robots (R.U.R.) (Rossumovi Univerzální Roboti)
And yet again, someone uses the ambiguous term 'blocklist' to replace 'blacklist'. Is it a list of blocks (like in a file), or a list of things to be blocked? It is not obviously the latter. I would prefer the use of 'allowlist' for 'whitelist' and 'denylist' for 'blacklist'. If you are looking to make a usage change, I would have thought people would want to use a term that is phonetically very distinct from the less-preferred term; and 'blocklist' sounds, and looks, very similar to blacklist. Perhaps some think that this similarity is an advantage?
I really hope we can generally agree on a set of neutral technical terms so that everyone who wishes to can contribute to IT without fear or favour - it seems to be the decent thing to do.
The ONS can say what it likes, and the staff may well believe what they say, but the security and intelligence services may well decide that the information should be taken (not volunteered) for their own purposes.
The Covert Human Intelligence Sources (Criminal Conduct) Act 2021 potentially gives the security and intelligence services broad cover to commit illegal acts:
(a) in the interests of national security;
(b) for the purpose of preventing or detecting crime or of preventing disorder; or
(c) in the interests of the economic well-being of the United Kingdom.
And, it is not just the security and intelligence services - the list of agencies that can apply for Criminal Conduct Authorisation are:
Any police force
The National Crime Agency
The Serious Fraud Office
Any of the intelligence services
Any of Her Majesty's Forces
Her Majesty's Revenue and Customs
The Department of Health and Social Care
The Home Office
The Ministry of Justice
The Competition and Markets Authority
The Environment Agency
The Financial Conduct Authority
The Food Standards Agency
The Gambling Commission
In theory, this is all about running covert human sources in organisations under investigation, such that the source cannot be unmasked simply by their refusal to commit a criminal act. However, should the security and intelligence services decide that it is necessary and proportionate to run a covert source within the ONS, then providing a clandestine copy of (selected) data could well be covered by a CCA.
It would be interesting to be told how many CCAs per year (or month) are issued by the above organisations, and what their durations are.
I sure hope the tire speed limit is a nice margin above Vr, otherwise take off will be with burst tires, which will result in a not so smooth landing.
The Boeing document linked above gives a case study showing:
Scheduled Ground Speed at Liftoff: 199 knots
Rated Tire Speed: 204 knots (235 miles per hour)
and goes on to say
...case study showed that a rotation rate that is 1 degree per second slower than normal can result in a 4- to 5-knot liftoff speed increase. This is in addition to the increase in all-engine takeoff distance associated with the slow takeoff rotation (see fig. 3). This illustrates how a slower-than-normal rotation rate can easily use up what may seem like a large tire-speed-limit margin, especially if paired with a higher tailwind component than accounted for in the takeoff analysis used for dispatch.
Some operators have elected to simply examine the tires after an overspeed takeoff event using the normal tire inspection criteria in Chapter 32 of the Airplane Maintenance Manual. If no damage is found, the airplanes are dispatched normally and no further maintenance actions are performed. Based on many years of service experience, this approach seems to have worked well because very few, if any, tire tread losses have been attributed to an overspeed event. Based on this service experience, Boeing has typically not objected to this practice even though there is no overspeed takeoff capability specifically designed into the tire.
I would have thought...
...that normally on takeoff, the pilot would be using everything short of War Emergency Power, throttles to the stops. I would hate to clip an obstacle at the end of the runway because the airline was trying to save a few bob's worth of fuel, and the passenger cohort had more than the normal fraction of bloaters.
Yer average commercial passenger jet's engines can suffer catastrophic consequences if they spin too fast (overspeed), or if the exhaust gas temperature (EGT) gets too high, or they try to push too great a mass of air. You need sufficient thrust to reach Vrotate before you run out of runway (and preferably V2 soon after), but importantly, not so much that you exceed the speed rating of your tyres before lifting off. This means you need to carefully control your thrust to ensure you accelerate enough to reach Vrotate before the end of the runway, but not so much as to exceed the tyre speed rating. The margins can be surprisingly small.
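A toy back-of-the-envelope showing how thin that margin is, using the 199/204-knot figures from the Boeing case study quoted earlier; the rotation-rate penalty and extra tailwind values are illustrative assumptions:

```python
# Figures from the Boeing case study quoted earlier.
TIRE_RATING_KT = 204        # rated tire speed (ground speed)
SCHEDULED_LIFTOFF_KT = 199  # scheduled ground speed at liftoff

def liftoff_ground_speed(scheduled_kt, slow_rotation_penalty_kt=0.0,
                         extra_tailwind_kt=0.0):
    """Ground speed at liftoff: a slower-than-normal rotation raises the
    speed needed to lift off, and any tailwind not accounted for in the
    dispatch analysis adds directly to ground speed."""
    return scheduled_kt + slow_rotation_penalty_kt + extra_tailwind_kt

nominal_margin = TIRE_RATING_KT - liftoff_ground_speed(SCHEDULED_LIFTOFF_KT)
print(nominal_margin)  # 5 knots - looks comfortable on paper

# Rotate 1 deg/s slower than normal (~5 kt penalty per the case study),
# with an assumed 3 kt of unaccounted tailwind:
actual = liftoff_ground_speed(SCHEDULED_LIFTOFF_KT, 5.0, 3.0)
print(actual > TIRE_RATING_KT)  # True - tire speed rating exceeded
```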
More background details here:
Strictly, the Common Travel Area (UK, Ireland, IOM and CI) applies only to British and Irish citizens.
Common Travel Area rights can only be exercised by citizens of Ireland and the UK. If you are not a citizen of Ireland or the UK, you will not be able to exercise Common Travel Area rights.
British information: GOV.UK: Common Travel Area guidance
Under the CTA, British and Irish citizens can move freely and reside in either jurisdiction and enjoy associated rights and privileges, including the right to work, study and vote in certain elections, as well as to access social welfare benefits and health services.
Hmm, you might be the DEC FS tech that told me that story a long time ago. We had 'quite a few' RA81s. I had a very nice visit to Kaufbeuren, including the high-speed drive on the autobahn from Munich, and lunch at the plant, which included beer (!), the explanation being that the German workforce would refuse to work if they could not have beer with their lunches.
Our on-site engineer was invaluable. He mentioned his 'baptism of fire', when he had been newly trained on a particular piece of DEC kit (could have been the HSC50), where the training video made it look easy: open up the cabinet, identify the failed card by the indicator LED, replace card, and away you go. In his case, he gets called out, confidently opens the cabinet to find... no indicator LED on any of the cards. A 'while' later, after most of a new HSC50 is sent from Reading, he finally got things working again, and became less trustful of the training videos.
Prestel is what kicked off the Computer Misuse Act 1990: or rather, it was the fact that someone accessed Prince Philip's Prestel mailbox, and at the time they could only be charged under the Forgery and Counterfeiting Act - a conviction which was appealed, and the Crown lost (!). The very public lack of suitable legislation meant the Computer Misuse Act came into being.
Details in the Wikipedia article Computer Misuse Act 1990
Which is why operators (or other people with appropriate access) would use a marker pen and draw a diagonal line across the top of the deck. This made reconstituting a dropped deck a great deal simpler.
If the punched cards had sequence numbers, there did exist machines for automatically reordering messed-up stacks.
The illustration in this Wikipedia article on the use of punched cards in programming shows an example of the diagonal line.
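Those sequence numbers (conventionally punched in columns 73-80 of a FORTRAN card) made re-sorting a dropped deck mechanical. A toy sketch, with hypothetical card contents:

```python
# Hypothetical shuffled deck: each "card" is an 80-column string with the
# sequence number in columns 73-80 (the common FORTRAN convention).
deck = [
    "      X = X + 1" + " " * 57 + "00000020",
    "      X = 0" + " " * 61 + "00000010",
    "      PRINT *, X" + " " * 56 + "00000030",
]

def resequence(cards):
    """Re-order a shuffled deck by the sequence field in columns 73-80
    (string indices 72:80) - what a card sorter did mechanically."""
    return sorted(cards, key=lambda card: int(card[72:80]))

for card in resequence(deck):
    print(card[72:80])
```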
Jolla ended software upgrades for the Jolla 1 phone in November last year. It won't run SailfishOS 4, so the last version for the Jolla 1 was 3.4.0 (Pallas-Yllästunturi). It was supported for 7 years.
Which is not a bad run.
It's still a shame that 7 years is regarded as an exceptional maximum. I understand the reasons: stuck on an old kernel because of binary device drivers that are not upgraded by the manufacturers. Designing and building consumer electronics for long lifespans is hard.
There are, but not in colloquial British English.
Scandinavian (roughly*) has two words meaning 'more': mere (non-countable) and flere (countable). I have no idea why the word meaning 'countably more' didn't get incorporated into English via the Vikings.
And Scandinavian (roughly*) does have two words corresponding to 'less' and 'fewer': mindre and færre respectively.
*details differ between Danish, Icelandic, Norwegian, and Swedish. Don't know about Faeroese.
I suspect not. I believe it comes from a shortening of the word refrigerator (note the lack of a 'd'), which has a Latin root in the word frigus, meaning chill or coldness. Ecclesiastical Latin pronunciation gives the 'g' a 'dj' sound, which leads to it being spelled in English with a 'd' to distinguish it from words with a hard 'g' in the middle, like magnet, regulate, cigar, bogie, sugar.
Classical Latin pronunciation would be frig, to rhyme with brig.
Since Germany has only existed since the unification in 1871*, three hundred-odd years ago they would have been Saxons, Bohemians, Silesians, Prussians, Bavarians, ... They would probably have spoken a German dialect, though.
*Wikipedia says: Prior to 1803, German-speaking Central Europe included more than 300 political entities, most of which were part of the Holy Roman Empire or the extensive Habsburg hereditary dominions.
I attribute the popularity of "impact" largely to people not knowing whether they should use "effect" or "affect", and going for something else entirely in order to avoid the risk of making themselves look stupid.
And thereby failing entirely: at least with effect and affect you have a 50% chance of getting it right, with impact you have a 100% chance of looking stupid.
Well, yes and no. With sufficient ventilation, hardwood burns quite nicely: just ask any wood-burning stove owner. With insufficient ventilation, it takes a while, but will make hardwood charcoal, which is 'rather brittle', so not something I'd like to trust structurally.
But the FPSE people do have a point: hardwood chars nicely, and char is rather a good insulator, so a structural element will form a layer of char on the outside while maintaining good strength inside for a long time. Which is why the Chinese used (impregnated) white oak re-entry shields for some of their recoverable surveillance satellites (Fanhui Shi Weixing).
The capsule for the FSW, like that of the US Discoverer/KH-1 spy satellite, was mounted heat shield-forward on top of the launch vehicle. The ablative impregnated-oak nose cap covered electrical equipment. The spherical aft dome contained the recovery parachute. The film reels for the camera were located in an intermediate compartment.
The FSW did have an oaken heatshield. I saw one on display at the bicentennial airshow (Richmond, Sydney, 1988). This craft was launched on 1987 August 5, and was previously known in the west as China 20, then variously called FSW 0-9 or FSW 0-10, depending on whose chronology you refer to. Cospar 1987 067A, NORAD 18306. The oak was charred, and some broke off when display attendants moved the spacecraft. They simply vacuumed most of it up so that the charcoal didn't mess the carpet! I was able to photograph both the exterior and interior (equipment had been removed).
I had a very similar experience at about the same time. The on-site DEC engineer pointed out the RA81s would be out of warranty if their temperatures exceeded a certain value, so the Ops Manager got the loading bay doors open* and 'a number' of large fans running. No loss of service (it was some VAXclusters and an IBM mainframe clone) and no hardware loss.
*I'm pretty certain he had the building main entrance open, and the multiple security doors through to the machine room open as well to get airflow through - in through the entrance, out through the loading bay, with people stationed to prevent unwanted visitors.
Repack all my thousands of JPEG photos with this new compression algorithm with zero loss of detail compared to the original JPEGs? As in pixel-for-pixel identical but smaller file size?
The white paper, linked to above, on page 2 says:
In terms of compression performance, key results are:
- Lossless JPEG transcoding reduces JPEG size by around 16% to 22%.
It uses the Google Brunsli JPEG repacker algorithm.
A medium level description of how it achieves this is in this conference presentation:
Search for "4.13 JPEG recompression" on the page
The legacy JPEG format has been thoroughly explored [18, 19] over the years and most of its inefficiencies are addressed in the JPEG XL recompression format:
• more robust DC coefficient prediction is used
• AC0x and ACx0 coefficients are predicted on the base of neighboring blocks
• Huffman entropy encoding is replaced with ANS and Binary Arithmetic coding
• frequently used ICC profiles, Huffman code tables, and quantization tables are encoded as template parameters
• context modeling is used to separate entropy sources
• similar to the approach described in Section 4.8, DCT coefficients are reordered in such a way that more blocks have longer series of zeros at the end. The index of the last non-zero coefficient is encoded explicitly, which is more efficient than limited RLE.
Those improvements enable 16% size savings on average in a corpus of 100 000 random images from the Internet. For larger photographic images, the savings are in the range 13%–22%, depending on the JPEG encoder and quality settings.
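The 'explicit last non-zero coefficient' trick in the list above can be sketched in a few lines. This is a toy model, not the real JPEG XL entropy coder: the block values are made up, and the point is only that the decoder can skip all trailing zeros in one step instead of walking run-length codes to an end-of-block marker:

```python
# Made-up DCT coefficient block in zig-zag order (assumption for the sketch).
block = [34, -5, 0, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

def last_nonzero_index(coeffs):
    """Return the index of the last non-zero coefficient, or -1 if all zero."""
    for i in range(len(coeffs) - 1, -1, -1):
        if coeffs[i] != 0:
            return i
    return -1

n = last_nonzero_index(block)
payload = block[:n + 1]  # only the coefficients up to the last non-zero one
print(n, payload)        # the trailing zeros never need to be coded at all
```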
If you want very low level details, look at "Annex M - Lossless JPEG1 recompression" on page 127 of this document (labelled as page 135 if you follow the internal link in the contents)
"it is a [completely unnecessary] upgrade to the ubiquitous JPEG"
As an 'end-user', you are completely correct.
However, if you are serving images from a central server, saving storage and bits/second adds up. The average individual doesn't care, but an organisation curating a large set of images can save on storage costs; and if serving images out to a large number of viewers, can benefit from the bandwidth savings - a per cent or so on a bottom line of millions of dollars/pounds/euros is worth having.
You don't have to stop using JPEG. There are, however, features of JPEG XL which might be of interest to others.
So JPEG XL offers the same, or better capabilities as JPEG, GIF, and PNG. No-one is going to force you to change, but some people might like the ability to choose and benefit from some of the optimisations and development in image compression technologies since JPEG.
It's absolutely fine that you don't want to use JPEG XL. You have that choice, and choice is good. The problem is that Microsoft's move to patent software in this area could well remove the ability to choose for many people (at least until the patents expire), and give Microsoft a revenue stream founded on work that was intended to be open and royalty free for all.
If JPEG fits your use case, that is great, and no-one is saying you can't continue to use it. JPEG XL might fit other people's use-cases better, and I think giving people the ability to choose to use it is a good thing.
JPEG XL really, really needs to be patent/royalty free, because it is a very well thought out upgrade to the ubiquitous JPEG. Not least, it allows lossless conversion/transfer from the original JPEG format into the new JPEG XL format (obviously the original JPEG was not lossless), while decreasing the file size, which seems a bit like magic.
It incorporates techniques from FLIF* (now to be deprecated in favour of JPEG XL), which allow progressive downloads, so you download only as much of the data as needed to produce an image in the size and resolution you require: i.e. you can use the same source file to deliver a thumbnail or a highly detailed multi-mega-pixel image.
When I last looked, JPEG XL had not gone through all the necessary committees, but the file format is agreed (frozen), so it won't change.
Rent-seekers on this incredibly useful image format upgrade should be extremely strongly discouraged by all legal means necessary.
*The FLIF developer says: "All the good stuff from FLIF went into JPEG XL. In lossless compression, jxl is slightly better than FLIF, while it decodes faster. I stopped working on FLIF and I think JPEG XL can do everything FLIF can do and more."
Since Hubble is widely believed to be a prototype for a line of inward-pointing spy telescopes, are all those worn out too ?
Actually, it is the other way round. NASA chose to use a 2.4 metre mirror in Hubble instead of the originally suggested 3 metre mirror as "changing to a 2.4-meter mirror would lessen fabrication costs by using manufacturing technologies developed for military spy satellites."
"A perfect 2.4 m mirror observing in the visual (i.e. at a wavelength of 500 nm) has a diffraction limited resolution of around 0.05 arcsec, which from an orbital altitude of 250 km corresponds to a ground sample distance of 0.06 m (6 cm, 2.4 inches). Operational resolution should be worse due to effects of the atmospheric turbulence."
Thank you for the clarification.
Having read elsewhere that the facility had wooden floors (!), which I can't quite believe, the concept of wood inside a metal container with holes in strikes me as 'bold' design for a data centre.
More pictures of the burnt out SBG2 in this article:
A comment in that article says OVH have started swapping out C13/C14 power supply cables due to possible insulation defects. Perhaps they have an insight into the cause of the fire.
Tweet with video of damping-down operations:
I'm really interested to see any investigation report.
I wonder if the fire-resistance of the data-centre was affected by the design: the data-centre was basically stacked shipping containers.
I thought as much seeing the pictures taken by the Sapeurs-Pompiers (Great name!)
In principle, putting things in metal boxes sounds pretty fire-resistant, but once you start cutting holes in them for the D/C infrastructure, you might be building a giant brazier. I'll be interested to read the results of any investigation, if they are made public.
I'm afraid this will demonstrate the considerable abilities of people/companies to comply assiduously with the letter of the law, while riding a coach and horses through the spirit.
There is a huge difference between 'availability of parts' and 'designing for repairability that is cost-effective and practical'. Sometimes there are arguably good reasons for designing complex integrated parts (like modern car headlight assemblies), and sometimes there may not be. I don't mind a phone being a couple of millimetres thicker and a few grams heavier if I get an easily replaceable battery and screen - but other people, apparently, do. Getting hold of foreign language keyboards is unreasonably difficult for laptops (there are, or were, sellers on eBay, but more official routes were difficult or impossible to find).
Sometimes you need special tools and/or jigs, so even if the part costs pennies, the equipment needed to replace an old part successfully costs a fortune, or is unavailable. Sometimes there are valid health and safety reasons why user-repairs are discouraged.
As others have pointed out, the embodied energy/carbon of an item makes repairing a good long-term option. I am unreasonably irritated that the vacuum cleaner hose for a Miele vacuum cleaner costs more than buying a new cleaner in the sales/special offers. The part is theoretically available, but I object to paying a king's ransom for a plastic tube that will most likely fatigue-fail in the same way as the original.
I will stop ranting on this topic for now. Triggered, I was.
I suspect a law will require some tweaks to get it working effectively.
Despite reading a fair amount of classic Sci-Fi written by people of unsound opinions to modern sensibilities, I too have not read any of Octavia Butler's work. Which is a shame. The first thing that comes to mind, much like John Brown (no body), is the 'Butlerian Jihad' from Frank Herbert's oeuvre:
"Thou shalt not make a machine in the likeness of a human mind," the creation of even the simplest thinking machines is outlawed and made taboo
There's a few people who feel that way about AIs in general.
Of course, given the sandiness of Mars, the pattern-matching mechanism in my mind could well have been weighting anything to do with Arrakis for the Butlerian idea to float to the top of the maelstrom of my thoughts. Hmm. Weight. Float. My mind is not being logical.
The problem is not what it 'sees', the problem is an inadequate representation of what it is 'seeing'.
If one were to take many apples, and attach a similar-sized label to each with different words on them, e.g. 'screen', 'keyboard', 'mouse', 'cpu', a human would see a set of apples with labels on them. The AI might well report the image as being 'a computer', or if less clever, a collection of objects like a fire-screen, a piano keyboard, a small furry creature, and an AMD Zen processor. The problem is not the size of the label, but the almost context-free processing of the information it is gaining from the analysis of the image.
Allowing a hand-written label to override what is actually there simply does not make sense. It doesn't look like the AI will easily acquire the necessary domain knowledge on its own, either.
I'm very scared of cognitive decline as I get older - I would not be surprised if I am already showing symptoms: so my issue is how long I can support myself on all this technology. I look at one of my older relatives who has trouble driving an iPad, and mixing up the WiFi password with the PC account password and the Gmail password and wonder if/when I will get to the same state with the then current technology.
My father was simply Not Interested, and paid bills by cheque at the bank, and never used the web or sent an email in his life.
So I need younger victims who will be able to support me (and understand my IT set-up) as I get older. Their experiences will contribute to future columns like this one.
The entire certificate based 'security' edifice is a sham. When was the last time you checked that all the CAs your computer is set up to trust are actually organisations you want to trust?
All Transport Layer Security does is give (limited) guarantees that people you don't trust cannot tap in and read data in transit between source and destination. It says nothing about the trustworthiness or not of the destination, which should be assured in some independent way.
When the padlock appears in the address bar, what does the fact that the connection is secure, verified by e-Szigno Root CA 2017* tell you about the site you are sending data to, and how worthy is that CA of your trust?
One of my banks used to act as its own CA, and the only way to do online banking was to install their root cert on the devices you wanted to use. Unfortunately, they gave up that approach.
It is a shame that no-one has come up with a way to make a web-of-trust easy to use. Centralised 'trust' authorities have well known problems.
*Or TUBITAK Kamu SM SSL Kok Sertificasi - Surum 1. There are plenty of other examples.
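If you want to actually run that audit, Python's standard ssl module will enumerate the roots your machine trusts; exactly which CAs appear depends on your OS trust store, so the output here is not predictable in advance:

```python
import ssl

# create_default_context() with no cafile loads the system's default
# CA bundle, so get_ca_certs() reflects what your OS actually trusts.
ctx = ssl.create_default_context()
cas = ctx.get_ca_certs()

print(len(cas), "trusted root CAs in the default bundle")
for ca in cas[:5]:
    # 'subject' is a tuple of relative distinguished names; pull out the
    # organisation name so you can see who you are implicitly trusting.
    subject = dict(rdn[0] for rdn in ca["subject"])
    print(subject.get("organizationName", subject.get("commonName")))
```

Running this on a stock install is a sobering exercise: most people have never heard of a good fraction of the organisations on the list.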
Thank you for the corrections and clarifications.
One thing to note, the GDPR "was among 69 EU legal acts incorporated into the EEA Agreement by the EEA Joint Committee in Brussels on 6 July 2018."
The regulations entered into force in the three countries of the EEA on 20 July 2018.
The practical effect of this is that when you say, "When it comes to the data subject, the test is whether he "is in the Union".", from 20 July 2018, is for practical purposes "the Union and the three EEA countries (Iceland, Liechtenstein and Norway)".
Background: EFTA: How EU Law becomes EEA law.
There have been a few cases where a suspect could prove conclusively that they could not possibly have been anywhere near the place the DNA evidence was found, yet it was a match for their DNA.
A notorious example of this the identification of the German mass murderer (the "Phantom of Heilbronn") whose DNA turned up at many different crime scenes.
The fact that someone's DNA has been found at a crime scene does not mean that they were there.
There are many ways to transfer DNA from place to place, including swabs, dirty handkerchiefs etc. You also have to have rather good lab techniques to avoid cross-contamination. Demonstrating that the forensic lab has followed those techniques is not always easy.
There has to be evidence other than DNA linking the person to the scene. Saying that person X's DNA was found somewhere says very little about whether they were there in person.
What would make sense (for me at least) would be a keyboard with a trackball located just beneath the space bar.
I saw an interesting approach on an old keyboard: a rotating bar that could slide from side to side on its axle. Rotating moved the mouse pointer up and down the screen, sliding the bar left or right on the axle moved the pointer left and right. Unfortunately, the buttons to click were separate, you could not simply depress the bar, so if anything, it required two hands to operate quickly, or one serially - move the mouse, then click a button.
It obviously never took off.
It's a shame, especially as there is a vocal group within the FLOSS ecosystem that have a visceral hatred of anything to do with Microsoft and have been migrating (and campaigning to migrate) projects off GitHub to somewhere not tainted by Microsoft. I don't know what the next best choice from that point of view is.
As pointed out by other commentators in this thread, once you move things into 'the cloud' (i.e. using other people's services on other people's computers), you have a tendency to lose control over your fate. Sometimes, it can be the right decision: the services available to small businesses can seem almost magical when compared to trying to do the same yourself - but if your business is dependent on somebody else's profit margin, you are in a vulnerable position.
It is an MBA's dream to lock in customers so that they find it very difficult to move elsewhere. A lot of services are deliberately designed to do just that: once the transition costs of moving get significantly painful, people choose the lesser pain of staying and enduring 'reasonable' price increases.
I always take a wet anti-bacterial wipe in with me simply because if I do have to touch anything I don't have to worry about using hand gel all the time, other than when I go in and when I leave. After all, I'm touching other stuff too, like the shopping bag handles, pockets, wallet, card etc. Also, the anti-bac wipe, at one layer thick, lets you use the touch screens on a self service till and not have to touch the buttons on the keypad when that has to happen. I'm also, by default, cleaning the keys for the next person who may not have had that foresight.
Given that SARS-CoV-2 is a virus, anti-bacterial wipes may not be giving you the protection you think.
SARS-CoV-2 is an enveloped virus, and it is not clear that, for example, quaternary ammonium compounds such as benzalkonium chloride (BAC) are effective at reducing viral loads on surfaces.
Make sure your hand and/or surface disinfectant is effective against SARS-CoV-2 - not all disinfectants are.
I don't get it... I'm sure I've seen plenty of posts on here whenever there's a new security issue found in Windows about how Linux doesn't have security issues...
Would you be so kind as to provide links to three of those posts (or more, if you like)?
No true Scotsman..., sorry, I'll start again: no experienced Linux techie would ever claim Linux doesn't have security issues. I expect posts claiming that to be either clueless fanbois, sarcasm (both unappreciated and explicit), or people with a tenuous grasp on reality. Linux, GNU, and FLOSS software in general definitely have security issues, but issues, once found, can be resolved and distributed by anyone: not just the copyright holder. With non-FLOSS software, even clear security problems might not be legally mitigable, and you could well be dependent on a software maintainer that requires some cold hard cash before resolving problems. Which is fine. You can choose to pay. Or not.
According to this table, there were just seven deaths registered in 2017 in England where the underlying cause was exposure to electric current at home.
You need to add domestic fire fatalities where the fire was caused by faulty electrical installations. Electrocution is not the only way to die when the electrical installation is faulty.
That said, there is an ongoing debate over whether the Part 'P' regulations, by making things more difficult for D-I-Y electrical work, encouraged people to overload and misuse trailing multi-way extension sockets, thereby making overheating problems more likely.
Oh the wailing and gnashing of teeth when PAT testing met the university engineering dept.
Testing portable appliances three times would be enough to give me paranoid schizophrenia as well (or testing the procedures that test the procedures that test portable appliances). The to and fro of regulations is just a game which will come to an equilibrium, even if people don't co-operate; which is a beautiful outcome to my mind.
The Debian installer works adequately, but doesn't give complete freedom in setting up partitioning, LVM, LUKS*, various RAID levels, and filesystems in advance of (or even during) the install. I understand giving full flexibility is difficult, but doing something non-standard is unreasonably difficult - I have to resort to chrooting and copying things once they have been set up, then fiddling with fstab and GRUB etc.
Debian is, as you say, not difficult to install, so long as it is in a way allowed for in the installer. Which is fine for most cases, but by no means all.
*A case in point: some people like to encrypt the entire hard disk (except for the ESP) then layer LVM on top of the encrypted disk. Others prefer to set up LVM, then encrypt each volume separately. Some like to use non-standard filesystems e.g. if they are using an SSD, they might want to use F2FS or NILFS2 for their root, or even bcachefs. The installer makes this difficult. At least, it did the last time I tried it. Since I don't do a full install that often, I haven't checked the most recent incarnation, and I don't have time to spin up a VM to check for this comment. Sorry, the rest of my life intrudes.
My experience of offshoring to an in-house business unit based in India was that the Indian employees used the company I worked for as a training opportunity to get more experience/polish their CV, then used that additional experience to get a better paid position elsewhere in the local technology hub/city. It led to continuous employee turnover/churn, and the need to 'on-board' inexperienced employees all the time. Basically, we were training people to get better paid positions elsewhere, which was not a viable business proposition.
Of course, the company I worked for did not want to do the obvious and improve the pay and conditions for the Indian workers, as that would have made offshoring look bad. So we had to put up with the situation. I left not long after, so I don't know how things have ultimately played out. Being unable to guarantee that technical staff would be around for the duration of project roll-outs, let alone the full term of multi-year customer contracts, was a big drag on efficiency - Fred isn't with us any more: meet Joe, who comes to us straight from university/technical college. Please help him come up to speed. Rinse and repeat at the next project meeting. Continuity 'R' us, it wasn't.
My other experience of the large Indian outsourcers is that their project teams seem to be a bubbling maelstrom of people, or to put a positive spin on it - an extremely dynamic environment. Working out whether it was cost-effective was above my pay-grade.
Well, I think Ubiquiti make some nice hardware for installing OpenWrt on.
To be fair, the installation process may be non-trivial.
(Don't all click at once, I think the OpenWrt web-server is a bit of a frayed-shoestring operation.)
The File Allocation Table is a terrible design for SSD devices because it requires that the first storage blocks (where the table is stored) are re-written over and over again, reducing the life of the device, and the table is a fixed size irrespective of the size or number of the files... but it has the single advantage of being simple (and standard - copied from CP/M).
Well, it would be if SSDs did not have wear-levelling algorithms which effectively put a Copy-on-write layer underneath whichever filesystem is layered on top. Yes, writing again and again to raw flash is a bad idea, but that was soon recognised. Essentially, every write to the FAT moves that block elsewhere on the SSD, which has pretty much constant seek times for any block read. Of course, write amplification then causes other problems, which TRIMming partially mitigates.
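The wear-levelling point can be sketched with a toy flash translation layer. This is purely illustrative - the class name, eviction policy, and block counts are made up, and real SSD controllers are far more sophisticated - but it shows how copy-on-write remapping spreads repeated writes to the same logical block (e.g. the FAT at logical block 0) across different physical blocks:

```python
# Toy flash translation layer (FTL): each write of a logical block lands on
# a fresh physical block, and the old copy is erased and recycled later.
class ToyFTL:
    def __init__(self, physical_blocks: int):
        self.free = list(range(physical_blocks))   # erased physical blocks
        self.mapping = {}                          # logical -> physical block
        self.erase_counts = [0] * physical_blocks  # wear per physical block

    def write(self, logical: int, data: bytes) -> int:
        """Copy-on-write: remap the logical block to a fresh physical one."""
        phys = self.free.pop(0)
        old = self.mapping.get(logical)
        if old is not None:
            # The old copy becomes garbage; erase it and make it reusable.
            self.erase_counts[old] += 1
            self.free.append(old)
        self.mapping[logical] = phys
        return phys

ftl = ToyFTL(physical_blocks=4)
# Eight updates to the same logical block 0, as a FAT update pattern would do.
used = [ftl.write(0, b"FAT update") for _ in range(8)]
print(sorted(set(used)))      # -> [0, 1, 2, 3]: writes spread over all blocks
print(max(ftl.erase_counts))  # -> 2: no single block absorbs all the wear
```

Without the remapping layer, all eight writes would hammer the same physical cells; with it, the wear is spread almost evenly - which is why FAT's fixed hot spot is survivable on a modern SSD.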
Biting the hand that feeds IT © 1998–2021