The irony of 3rd party "MagSafe"-like adapters...
.. is that you can not only restore the MagSafe experience to your USB-C equipped MacBook, but can also add it to your iPad or any other device that uses USB-C for charging.
Whilst not free, RemObjects Oxygene is an ultra-modern implementation of Pascal with language features that other modern languages only aspire to. You can use Visual Studio or RemObjects' own Mac and Windows native IDEs (Fire and Water, respectively). Those native IDEs are themselves implemented in RemObjects languages, as is their build system (EBuild).
Languages, plural, because as well as Oxygene they have implementations of C#, Java, Swift, Go and (coming soon) BASIC (VB.NET compatible). Why use these implementations? Not least because - as their IDEs testify - you can use any or all of these languages to develop for any or all of the platforms that the Elements compiler stack supports. That's (deep breath...) .NET, .NET Core, Win32, Win64, Linux (x64), JVM, Android, iOS, macOS, iPadOS, watchOS, tvOS and WebAssembly.
You get the idea.
Yes, we have a lot of people arriving by air, but not as many as countries with much closer neighbours.
Ireland (Republic of) makes a very good comparison. They have an almost identical population but squeezed into 1/4 the land area. Their airports see 3x the international passenger movements that NZ does (30m vs 10m), being much closer to the UK and Europe. On top of that, they also have 3 MILLION sea port passenger movements in a year. NZ by contrast receives just 300 THOUSAND cruise ship passengers.
Ireland instituted a lockdown comparable to NZ's at more or less the same time as NZ, with approximately the same number of cases at that point. Yet the outcomes were vastly different. The Irish lockdown wasn't CALLED a lockdown yet in some respects had stronger measures, e.g. a limit of a 2km radius from home for exercising. The "limit" in NZ was "keep it as local as possible", with people interpreting this in some cases to mean going on 40+ km bike rides around the Auckland region, or driving to beaches and reserves to take their exercise.
The NZ government's response certainly gave the impression of being, and claimed to be, well communicated, but in practice the messaging was vague and inconsistent. The lockdown "rules" were issued as guidance, peppered with "as possible" and "ideally", allowing people to interpret them as they saw fit. In our own neighbourhood social distancing was a joke and there were flagrant breaches of the LAW (never mind the lockdown rules), such as exercising dogs off-leash in areas where the law required them to be on-leash; the lockdown rules required them to be on-leash at ALL times, even in areas where off-leash exercise was normally permitted.
The "closure" of our borders was not "a bit late" it was ridiculously late, and followed border protection "controls" that were an absolute joke. Even when the border was finally closed to all but NZ residents and citizens and relatives, arrivals were required only to promise, hand-on-heart, honest-injun, to self isolate. The government SAID that everyone would be receiving a visit from police within 3 days to ensure that this was observed, but the REALITY as admitted by the Commissioner of Police, was that this simply did not happen and that there was no effective follow up. In the few cases that were followed up, there were a significant number of examples where the property given as the self-isolation address was found to be empty.
"Kudos to the government"?
The only people giving kudos to the government are those fawning followers of the Jacinda Ardern personality cult and those ignorant of the reality (which is sadly a great number since political engagement here is absolutely woeful).
Please don't use NZ as an example. Our contact tracing was a shambles. People who KNEW they had been in contact with one of our biggest clusters went to the press to complain that 48 hours after the cluster was identified and 48 hours after the DG of Health publicly said that comprehensive contact tracing was being URGENTLY undertaken, they still had not been contacted.
Likewise our testing rate was diabolical during the majority of the outbreak here. It was only once numbers had started to decline that testing ramped up to effective numbers.
Testing and tracing was only effective at all because of the low number of cases, not the other way around.
In fact, just about everything the government SAID they were doing was subsequently shown to be far removed from what was actually happening. Even the PM's widely applauded 20% pay cut (leaving her earning only 20% more than Boris Johnson, PM of a country with 15x the population and a commensurately larger economy and higher GDP, rather than the 50% more that she usually pockets)... that announcement was then followed by the government VETOing the change to the legislation required to make that pay cut possible, two weeks after making the promise to take the pay cut and having done nothing in those two weeks themselves.
Our low numbers had far more to do with the geographic remoteness, sparse population and lack of high density population centres than any steps taken by the government.
Indeed, the government had to be FORCED into every step they took, belatedly.
The medical community had to petition the government to close schools, with the government defending keeping them open until the last minute. One of our two largest outbreak clusters was then a school.
The opposition party had to petition the government to impose mandatory quarantine at the borders because of widely publicised breaches of the "honour system" of asking people pretty please to self-isolate upon arrival, and the acknowledgement by the Commissioner of Police that there was NO comprehensive follow-up to ensure that self-isolation was being observed - and that in the few cases that were spot-checked, there were a large number of instances of the property given as the self-isolation address being found completely empty.
Our Health Minister breached the lockdown rules and was allowed to keep his job. You might think this was because he was crucial to the government's response, except that he never fronted any questions to the media and has no relevant medical or health profession experience or expertise. What he does have is a Prime Minister who - as revealed by an Official Information Act request - did not trust her own ministers even to answer questions from the media, issuing written instructions to ministers to "dismiss" questions and requiring that any statements or responses they did make be submitted for her personal approval in advance.
The NZ government has done a masterful job of presenting itself as competent and effective through this. The reality however is completely different, despite the numbers.
The outcome speaks for itself. That is, the outcome tells us only what the outcome was. It does not tell the story of the control freak running the country with an indecisive and incompetent government.
It's not the idiocracy that was responsible for the favorable covid19 outcome. For that just look to the geographical remoteness, sparse population and lack of major population centres.
Goodness knows there is a litany of failings that can be enumerated in the NZ government's response. The only thing the government managed effectively was their own image.
Building codes also "ensure" that homes are leakproof against the elements. Even the code that applied in the 90's which led to the "NZ leaky homes" debacle. Just saying.
One thing NZ'ers love more than anything else is bureaucracy. Making sure the bureaucracy is effective? Yeah, nah, she'll be right. ;)
I suspect that what we have here is a fundamental misunderstanding of what "Economies of Scale" actually means.
Building 2 of a thing rather than 1 of a thing is not "scale" in the sense meant by that term. Yes, there will be some savings in the sharing of design effort, but these savings will be negligible compared to the savings that accrue when you start tooling up to real scale.
e.g. some mention was made of savings due to re-using tooling, but the tooling to build 2 of a thing will be very different from the tooling to build many THOUSANDS of a thing. In fact, the tooling to build 2 of a thing will likely be just the same tooling that you use to build 1, because you don't need it to withstand the rigour of knocking out hundreds and thousands of the thing, and there is no meaningful advantage to be gained by investing in more sophisticated or robust tooling that could.
So yes, re-using that tooling two or even three times saves some cost, but you will never significantly reduce the manufacturing cost of the 2nd or even 3rd item as compared to the first.
... pass UP to the agency/body that licensed Uber to operate these vehicles on public roads without first ensuring that adequate safety protocols and controls were in effect to protect the public.
Superficially it seems like an absurd flaw in the system as described.
"Yes sir, our vehicles are perfectly able to recognise when emergency braking is required to avoid a potentially fatal collision when the vehicle is operating autonomously"
"Good, so there is no possibility that such a collision could occur?"
"Well, not quite. You see, although the vehicle can determine that braking is needed, it won't actually apply the brakes when operating autonomously; it will alert the safety driver, who is then required to perform the emergency braking manoeuvre. So it really depends on the safety driver being alert, and their reaction time."
That should have been the end of the license hearing right there. Relying on a "safety driver", let alone one expected to have lightning fast reaction times, even if paying attention, in this situation is patently absurd since the circumstances under which the vehicle is going to NEED to alert the driver to the need for emergency braking are most likely to occur when the driver is NOT paying attention! For if they WERE paying attention then the need for emergency braking would almost certainly have been avoided long before it became necessary!
The fact they were allowed to operate without even this obviously ineffectual backstop operational only highlights further the negligence of the issuer of the license.
But, the license issuer being a state body and the operator being a Golden Boy "tech startup" means that it of course will have to be Joe Blow that takes the fall.
I haven't read the threads and maybe someone has tried this and found it doesn't work, but what about dongling a USB 2.0 hub via Thunderbolt? Would that take T2 out of the critical path for anything connected through the dongled USB hub, or does it still get involved?
From the information about the incident it seems this is more-or-less what they DID have, albeit the snapshot was at most 5 minutes older than the moment of droppage.
But even if the snapshot were taken immediately before the DB was dropped, there would still be an elapsed period between the DB becoming unavailable, the screw-up being realised, and the database being restored from that snapshot.
The statement that "5 minutes of data was lost" comes from El Reg and appears naively to extrapolate from the claimed maximum age of the backup, ignoring the question of just how long it was between the database ceasing to exist, the realisation of same, and the restoration of the backup.
"Re-Inventing the wheel" is a phrase that is right up there with the most over-used, abused and mis-appropriated in history.
Nobody EVER re-invented the wheel. The wheel was invented precisely ONCE (discounting for simplicity any question of parallel invention).
What DOES happen is that new types of wheel are constantly being designed to suit purposes and applications to which existing wheel designs are not desirable or well suited.
The designers of the Boeing 747 were not castigated for re-inventing the wheel when in designing the under-carriage they refused to simply slap on the wheels from a Model T Ford.
Mine's the re-invented human enclosure garment with the re-invented telecommunications device in the re-invented integrated garment storage solution.
I'd challenge the assertion that science doesn't try to prove (or disprove) things "everyone" knows won't work.
Once upon a time well over 90% of people "knew" that the Earth was the centre of the Solar System, if not the Entirety of Creation. So to keep taking observations to create a model that placed the Sun at the centre of our particular local system and establish that as just one of countless such systems would be a complete waste of time, yes ?
Once upon a time well over 90% of people "knew" that life could not exist without heat, light and oxygen. So there would be no point looking for life where one or more of that trinity was not to be found and we would still be entirely ignorant of the vast range of extremophiles that defy our previous "certainties" about where life could be found.
Once upon a time well over 90% of people "knew" that Earth is a flat disc, so to try to sail beyond the Great Ice Wall that kept everything on said disc would be to condemn oneself to an eternal descent into an infinite abyss. Of course, science would have no truck with such a reckless and self-evidently suicidal endeavour.
In a sense you are right about science being interested in "new things", but sometimes those "new things" are ideas which may very well contradict or at least undermine our previous or current known truths. i.e. scientists absolutely are concerned with proving (or disproving) what we all already "know" because surprisingly often what we all "know" turns out to be wrong.
Sometimes they do indeed simply confirm what we already know, and we ridicule them for wasting their time (and our money). But every once in a while .......
On this occasion, perhaps the thinking was that the germinated seed might react quite differently to the very different nature of the onset of Lunar Night (as opposed to any process we might have on Earth for "freezing" a germinated seed to approximate that process).
We wouldn't know unless we tried it.
And yes, it's a good PR stunt as well. But 'twas ever thus... ask the Montgolfiers about the scientific value of PR.
That's not really approximation, just inefficient representation.
0.9999..... == 1.0
Ergo 3.9999...... == 4
'Proof' (probably much over-simplified and possibly not really a 'proof'): If there are infinite 9's after the decimal point of a value that you wish to increase to the next whole number, then you need infinite 0's before the 1 to be added; but if there are infinite 0's then you can never reach the point where you can place the 1. If you cannot express the difference then there is no (meaningful) difference, other than the representation.
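For anyone wanting slightly firmer ground, the usual (still informal) algebraic version of the argument runs:

```latex
\begin{aligned}
x       &= 0.999\ldots\\
10x     &= 9.999\ldots\\
10x - x &= 9.999\ldots - 0.999\ldots = 9\\
9x      &= 9 \quad\Longrightarrow\quad x = 1
\end{aligned}
```

(The properly rigorous version treats 0.999... as the limit of the series 9/10 + 9/100 + ..., which converges to 1.)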
By extension I suppose n.0000.....N is always also equal to n.0
Mine's the one with the reciprocal of ∞ in the pocket.
Yes, GitHub is version controlled, but it doesn't mandate or enforce a standardised mechanism or convention for obtaining or specifying a specific version "X" of module "Y" without knowing the specific repository in which it is held, the layout of that repository and the particular tagging conventions used by the repository owner to indicate specific versions in the commit history.
It seems, rather, that JFrog are simply trying to establish for Go what nuget, npm and maven et al have established for other languages (and note that GitHub doesn't make those any less useful).
Having code in nuget, npm and maven doesn't make it more trustworthy, simply more convenient and reliable (*).
Same rules apply.
(* reliability in the sense that when two developers say that they use version X of package Y, they can now be sure that they are indeed talking about exactly the same thing, in a way that they might not be if they both pulled the code from two different sources which each had their own idea of which particular revision of that code should be labelled with that version. To that extent you might also consider it **more** trustworthy on those terms, even if it does nothing to help assure anyone that "this code works and doesn't contain anything dangerous")
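For context, Go's module system ended up expressing exactly that standardised convention. A minimal sketch of a go.mod (the module path and dependency here are hypothetical):

```
// go.mod for a hypothetical application
module example.com/myapp

go 1.13

// One pinned, checksummed version, whichever proxy or mirror serves it.
require github.com/some/library v1.2.3
```

Running `go get github.com/some/library@v1.2.3` records that version in go.mod and its checksum in go.sum, so two developers quoting "v1.2.3" really are talking about the same bytes - which is precisely the reliability point being made here.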
* However in certain limited applications we already have AI. The google search engine for example
The classic mistake of re-branding data-processing as AI.
* Getting a flying car that a non-pilot can fly and not crash into the myriad other flying cars is hard.
And land. Let's not forget landing. Any idiot can take off and fly and largely be successful at avoiding other things in the air. But getting safely back on the ground again? That's the REALLY hard part.
Almost as hard as building a personal vehicle with sufficient energy store and efficient propulsion to have a range that makes it useful for anything beyond saying "See! We built one!".
* Expect military applications [for driverless cars] in the next few years
But don't hold your breath for the broad and tumultuous changes to infrastructure and legal liability laws required to make widespread, meaningful personal use feasible.
* [cure] To all cancers, no. To some cancers, yes
Yes, the problem here is the categorisation of many disparate conditions with a common underlying cause into one "curable" thing. One might just as well posit a "cure for viruses". A cure for cancer is a null proposition.
* the problems [with developing fusion power] are engineering and material science not physics
That's true of any new technology. There was nothing stopping Neanderthals from developing fusion power. The physics was the same; the lack of engineering and material science was the only problem. So sure, we can have anything (subject to the laws of physics being observed), we just need something unspecified, currently unknown and possibly unknowable to make it possible. You might extrapolate from this that it is only a question of time, possibly. But to put a timeBOX on that discovery is to attempt to draw a bow whose string is not fixed at one end - try it.
And what sort of a record do we think SpaceX would have had in the 1960's (never mind that, even in their actual timeline) had they not had the benefit of learning from NASA before them (and in some cases NASA stepping in directly to correct some pretty basic blunders before they had a chance to embarrass the wunderkind).
"On the shoulders of giants" and all that.
> Falcon fairing halves missed the net, but touched down softly
> in the water. Mr Steven is picking them up.
He may not be much cop at a game of catch but it's good to know that the Death Star's Head of Catering is not above retrieving his own dropsies. Just a shame that this one is wet. And this one is wet. And this one is wet! And this one is wet! And this... What?! Did you dry these in a f*****g rain forest ?
Well, you could take the view that money was being demanded of her, in the form of the savings to the organisation in terminating an employee at a fraction of the cost of the salary that would have been paid over the period during which they were being "performance managed" out, if indeed such was the intent.
... but I don't see anyone addressing the question of WHEN to do so.
We're talking about a star system that is so far away that we're really only guessing at the distance and a probe being sent to investigate a planet that we only think is there but have no data for exactly where, let alone projecting where precisely (or even approximately) it will be at the time that the probe arrives.
Even if the distance calculation is accurate, and in astronomical terms might be considered "good enough", once you're in the locality you need to be assuredly more precise. In the same way that if I were to send a package from here (Auckland) to a friend back in the UK, just saying "Please take this package 18,360km that-a-way" (points toward London) is not going to result in a successful delivery or even arrival in the right county, let alone town.
Any probe will need to be fully autonomous, able to determine its arrival at the star system, identify and locate any and all planetary bodies, and perform the calculations necessary to either achieve a stable orbit around any particular thing of interest, avoid missing everything of interest and sailing on past into the void, or (irony of ironies) avoid colliding with the very thing it has been sent to probe (or something else whose existence we haven't yet determined from our current, remote observation station).
And it needs to be able to choose when and where to perform any delta-v itself. Anything reliant on Earth-based technology (requiring data to be sent to Earth and then commands or laser light destined for any sails sent back to the probe) would result in that delta-v occurring almost 12 years too late.
Which then just leaves the question of figuring out where Earth is once you've returned to the spot it was at when you left (assuming that you can either compensate for the expansion of The Universe, the duration of the trip is such that this expansion doesn't amount to a significant drift or that there is no such expansion after all so it simply doesn't figure).
Yeah, but that stuff is about 25 years old now, so cannot possibly be any good.
Besides, .NET "killed" COM, so for the last 20 years COM hasn't been cool, meaning we have a new cohort of developers who can re-invent this wheel all over again and think it's new. Sadly they will likely fail to learn from previous iterations because - again - it's old, so it is either forgotten or immediately discarded as irrelevant with nothing to teach us.
I wonder why this industry is seemingly so uniquely susceptible to this tendency to reject the past instead of learning from it?
How did .net Core not make the list?
I mean, .net Core is at the core of .Net, right? Oh no, wait, that would be .Net Standard, which is the thing that everyone has been running for the past nigh on 20 years, right? Oh no, wait, that's .net Framework. Standard is the common pieces shared between .net Core and .net Framework.
So we have .net Core which isn't at the heart of .net Framework and both of which are SUPER-sets of .net Standard which isn't the (previous) standard .net that everyone has been used to.
You, dude... how do you like living in a territory where a manufacturer can abdicate their responsibility for producing a product of reasonable quality after some arbitrary time period and then charge you extra for the privilege of ensuring that the product does indeed last a reasonable period of time?
Come to NZ, where the Consumer Guarantees Act (CGA) provides every consumer the assurance that every product sold in NZ will last for a "reasonable" length of time (literally what a "reasonable person would expect"), taking into account how the product has been treated by the consumer, its price and the typical longevity of that type of product.
The CGA also applies a reasonableness test to whether replacement or repair is acceptable - e.g. if a product breaks after a few days of use (possibly weeks, depending on the cost of the product), or out of the box, then a new replacement is warranted (the consumer paid for a new product and is entitled to fair use of a new product). After a few months or years, a consumer may need to accept a repair or a reconditioned unit. Consumers may also exercise their right to reject the product entirely as not being of merchantable quality (for a full refund) if it is not capable of performing as advertised/claimed or if it suffers repeated failures (the CGA continues to apply to any replacement or repair).
Manufacturers must replace or repair within that period regardless of what they may say in their warranty terms and conditions - it is impossible to opt or contract out of the CGA. Even Apple.
AppleCare or no AppleCare.
"Plex allows people to stream their content from other places – media servers, cloud storage providers like Google One and Dropbox, etc – to any device they own for $5 a month, $40 a year, or $120 for "lifetime" usage."
And this capability is not affected by the shuttering of Plex Cloud, except in so far as it means no longer being able to stand up a media server in the cloud, directly connected to cloud storage.
Anybody running their own Plex server and using Plex Pass to access their content remotely can still continue to do so - that has _never_ relied on Plex Cloud. I think that should be made clearer, lest El Reg be accused of misrepresenting facts in the interests of fomenting discontent.
Yep, a cross-platform UI for kiosks would be straightforward. Otherwise... not so much.
Kiosks are much simpler for such things because the UX is constrained by the very specific nature of the device. But the vast majority of software in the world does not run on kiosks. It runs on various different devices and form factors and operating environments, many of which specifically differentiate themselves from others by differences in the UX.
If cross-platform development were EASY, everyone would be doing it by now and it wouldn't need to be constantly re-invented to get-it-right-this-time-no-honestly-we-have-nailed-it-now. I am guessing you don't remember (perhaps weren't even born at) the time of the likes of Omnis and various other 4GLs that had "nailed" cross-platform development in the 90's - a time when there was far less diversity in platforms to contend with.
Which is of course why Omnis and its ilk went on to dominate the software development industry and why we are all using those tools now.
Oh, but wait. Then came Java with its Ultimate Solution to the write-once-run-anywhere problem. Hmmmmm.
Then .NET. Then Qt. Then FireMonkey. Then Xamarin. Then .net Core (Jeez even .NET is taking two bites at the cherry).
It's almost as if this is a bit harder than some people seem to think.
I'd also offer into evidence "By His Bootstraps".
But to be fair, my take is that the time-loop is very much at the centre of Bootstraps and Zombies, whereas in TEFL (and To Sail Beyond the Sunset) the loop was only really a device to facilitate a much wider exploration of societal and cultural norms (very much the recurring theme in Heinlein's work) through the character of LL.
I think perhaps the more pertinent question would be: where have you ever seen crotchets, minims, quavers, breves or clefs, let alone sharps or flats... in programming source code?
'#' has numerous "names" in different contexts. Its use as "sharp" in C# is purely whimsical, not due to any domain accuracy. As such, C-Octothorpe or C-Gate or C-Hash or C-Pound are equally valid whimsy (just not MS-compatible whimsy).
> Every HTTP site creates an attack surface exposing every visitor to MITM, injection, and other attacks.
Ironically of course, every HTTPS site is also by definition an HTTP site. The presence of SSL doesn't change the fact that the basic protocol is the same.
The "ironically" part therefore comes from the fact that what you say about HTTP is also true about HTTPS. As soon as you put a publicly accessible site out there you have created an attack surface exposing every visitor etc etc etc. Whether that site employs HTTP or HTTPS doesn't alter the accuracy of that statement, only the difficulty involved in exploiting the attack surface you are generously providing.
... would be to set an AI the test of getting this thing up and running in the first place, like our plucky correspondent. i.e. In the face of incomplete and inaccurate instructions can an AI actually achieve an outcome it has been given in the form of a verbal or written instruction ?
The answer of course is "Don't be ridiculous."
With that in mind, I hereby declare the end of supposedly tech-literate journalists throwing the term "AI" around as if it is either meaningful or relevant to any current technological endeavour.
... completely fed up with the way that "AI" has become the go-to term for what we used to call "Data Processing" ?
Sure, the volumes of data involved have increased dramatically, and the complexity of the algorithms along with them, but essentially the technology is no different than it has been for the past 30-50 years.
It's not AI, it's a computer. Dammit.
Is there a phone which has an entirely symmetrical design w.r.t left/right handed-ness AND the ability to re-assign button functions to suit (i.e. completely reverse/mirror the configuration between left/right hand "modes") ?
I don't think so. Even if there are buttons on BOTH sides, the buttons on one side typically do an entirely separate set of functions than the buttons on the other side.
So buttons on one side or buttons on both sides, left/right handedness is not a consideration, except in so far as the designers presumably already are taking *both* into account and ensuring that people aren't forced to resort to pinkie gymnastics, regardless of any left/right bias.
You saw the part where it was mentioned that this was running in a mainframe environment?
ISO-8601 was first published in 1988.
Chances are the data on the mainframe had been around for decades before anyone thought of the need to standardise on date formats, and was governed by more practical (at the time) considerations such as the need to save space, optimise processing efficiency etc. within the constraints of what are likely to have been some very idiosyncratic/esoteric qualities of the runtime environment.
So fine, years later these new standards come along, and processing power and storage efficiency are no longer the constraints they once were. Now your only challenge is to convince the bean counters that they really should invest their beans in refactoring ALL of the date handling code in their systems - code which currently isn't broken - at the opportunity cost of any amount of other value-add work in those systems, together with the risk of introducing defects into systems that otherwise are working perfectly.
Good luck with that.
As for the documentation part of your question: what is highly likely to have happened here is that the date conventions in use were indeed very thoroughly documented, but thanks to wilful misinterpretation of the Agile Manifesto (among other things) nobody believes in documentation these days (until AFTER they have learned how important it is). The developer might even have been told they had to read that documentation. Perhaps they even had read it in order to pass a control gate, but didn't actually take it in. Or perhaps they had read it years before, hadn't had to deal with date values in data for so long that they had forgotten, or simply overlooked what they knew.
Bottom line is: This sort of screw up happens. That's why we say that working software is more important than documentation, but "working" doesn't mean "compiles and runs". It does mean that, but so much more.
Which is why testing is so crucial and why you never run new code for the first time in PROD.
That's the real mind-blown aspect of this story.
For many vehicles a wetware driver doesn't need any technology to "see through" the vehicle in front, other than the front and rear windows already installed and their own ocular sensor array. (Translation: you can simply, literally see through the car in front.)
Now, following a truck, SUV or family saloon loaded with crap on the parcel shelf, the wetware obviously has to make a suitable adjustment.
And that's the thing - for all the talk of "advanced" tech and AI and other brands of snake-oil, the simple fact is that as "impressive" as what might have been achieved might be, it is still a long, LONG way from being able to match a human. Even one of only average intelligence.
So they're doubling their output to 100MW. That's nice. But also actually wrong.
100MW is the capacity of just ONE of Iceland's geothermal stations, with the overall total capacity somewhere closer to 705MW.
Let us know when they get close to New Zealand's 854MW.
"but all Apple major products are massively over priced for what you get"
Wrong. Linus (of Tech Tips) demonstrated this by doing a teardown of an iMac, or possibly an iMac Pro - I can't be bothered looking it up right now.
The point being that if you went out and bought the same components used in the assembly of that Apple product (with the exception of the case, which of course would have to be a commodity case, since you cannot buy the proprietary iMac chassis off the shelf), then your total spend would have been several hundred dollars MORE than buying the same components assembled by Apple in the form of an iMac/Pro.
Now, they also went on to establish that you could get equivalent or better performance with some smarter buying decisions, but in terms of a like-for-like component list, Apple was actually cheaper.
Google feeds - you USED to be able to swipe articles to dismiss them. NOW you have to stab at the individual burger menu and tap "Hide this story" from the menu thus summoned.
GMail - you can swipe email notifications, but only to snooze them or to get to the GMail settings (!), wherein you cannot make any further changes to what swipe does. As a bonus, this useless swipe action works equally pointlessly in either direction.
I'm sure things were more swipeable in the past, but if we are assured this is "progress" then I guess it must be.
Simply long press on one of the camera options in the swipe menu/list. Bingo... pick and choose which modes you want included (two are not even enabled by default: Sport and Slow Mo [as opposed to SUPER Slow Mo]).
You can even change the order of them to group your preferred/most often used modes together.
A similarly hidden AWESOME feature is the ability to long press the shutter release button and DRAG IT TO WHEREVER YOU FIND MOST COMFORTABLE!!! Sorry for shouting, but this is so ridiculously useful when trying to take selfies and getting cramp from trying to hold the phone steady and crab your finger to reach the button.
I've historically always found phone camera apps to be a frustrating mess. The S9+ app on the other hand... brilliant.
In the commercial passenger air business, passengers are already referred to - or at least thought of - as "self loading cargo".
The problem is, you can't slap a barcode or RFID tag on a passenger that they couldn't/wouldn't remove, lose or give to someone else. All that automated cargo tracking that works so well for the manually loaded cargo is precisely the bit that doesn't work for the self-loading stuff, so you have to keep checking that each piece of cargo really is the piece of cargo that the documentation it carries claims it is.
Also, a box of auto components being shipped around the world cannot start having villainous thoughts and plan to bring down the plane, or hijack it in order to fly to its preferred destination of Maranello instead of the scheduled arrival at Dagenham (via Heathrow), or secure the release of fellow auto components imprisoned in JIT parts bins in Sunderland.
If you could organise to collect passengers from their homes or designated pick-up points, verify their cargo papers at that point, and place them in a secure container (from which they cannot leave) for delivery to the airport to be directly loaded on board, luggage and all, then sure, that could work. You just need to have passengers willing to be treated even more like cargo than they already are.
The bigger problem is that, as things currently stand, biometrics are not even _as good_ as your username. Biometrics often cannot tell that I am me (poor lighting, not holding the sensors in the "correct" relative position to the bio being metered, etc).
If biometrics cannot reliably identify me (false negative), then it must be allowed that they might sometimes be confused that someone else is me (false positive). Even if that is not the case, only being able to identify me as me when particular environmental conditions pertain is equally dumb. I am only me when I am well-lit me? Hmmm, maybe that's why some people prefer to get intimate with their partners only with the lights off?
Are usernames really any better? For sure I can deliberately provide a false username if using a username as... my username... but nobody relies on _just_ a username for authentication. That would be stupid.
What in the name of all that's angled or curly does WebAssembly have to do with C/C++ ?
WebAssembly is just a VM standard. It may reference C/C++ as examples of the sorts of languages that could target it, but this is primarily because those languages are built on compiler stacks with re-targetable back-ends, not because WebAssembly is specifically aimed at C/C++ (or Rust, which also gets a nod).
RemObjects recently added WebAssembly as a back-end to their compiler stack, which means you can target WebAssembly using C#, ObjectPascal, Swift or Java, since these are the front-ends already available on that stack.