Re: I can hear the conversations already
The sad part is that I don't see how this will be any different from the current situation. A minimum wage worker reading from a script doesn't generally do any better than that.
1634 posts • joined 3 Nov 2011
"does not believe the lifted records have been used for fraud"
It has not been used for fraud yet. Once the data is out there, the chance of it not being used for fraud eventually is essentially zero. Saying you haven't noticed it happening yet is an utterly worthless statement that only suggests you're hoping everyone has forgotten about this specific leak by the time the fraud actually happens so you don't need to worry about being held responsible for it.
"But phone chargers? Most people now have loads"
Well, sort of. I have quite a few standard 0.5A USB chargers. I think I have two that support some sort of fast charging, but only the early-generation type that is nowhere near the 20-ish W of more modern standards. I have zero USB-C cables or anything capable of supplying relevant amounts of power through them. I have a variety of cables with either mini- or micro-USB connectors rated for a variety of different powers, some power-only, all unlabelled of course.
Sure, it's possible to buy them separately, but when buying brand new things costing hundreds, potentially thousands, of pounds, there's an awful lot to be said for making sure it's actually usable out of the box. If I buy a shiny new phone, I shouldn't have to worry that I'm not actually able to charge it, and I shouldn't have to do research to figure out what I actually need to be able to do so. Providing the basic equipment needed absolutely should be part of the package.
"That bowl of cereal was about the same as a pint in a pub. So is a pint also too high?"
You can get a week's worth of cereal and milk for just a couple of pounds. Meanwhile, ordering a cask from the local brewery works out at about £1.50 per pint, and even the cheapest crappy lager is legally required to cost no less than £1 per pint in Scotland and Wales. So at £6 per pint or bowl of cereal, you're looking at a markup of 500% at most for a pint, realistically more like 300-400%, while for cereal you're looking at over 2000%.
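For anyone who wants to check the arithmetic, here's a quick sketch. The per-unit costs are the rough estimates from above (the ~£0.28 per bowl assumes a couple of pounds buys about a week of cereal and milk), not exact figures:

```python
# Rough markup comparison; input costs are estimates, not exact figures.
def markup_pct(sale_price, cost):
    """Markup over cost, as a percentage."""
    return (sale_price - cost) / cost * 100

pint_cheap = markup_pct(6.00, 1.00)  # minimum-priced lager in Scotland/Wales
pint_cask = markup_pct(6.00, 1.50)   # cask from a local brewery
cereal = markup_pct(6.00, 0.28)      # ~£2 buys roughly a week (~7 bowls)

print(f"pint (cheap lager): {pint_cheap:.0f}%")  # 500%
print(f"pint (cask):        {pint_cask:.0f}%")   # 300%
print(f"bowl of cereal:     {cereal:.0f}%")      # 2043%
```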
Obviously it's the punter's choice whether any of those numbers are too high, but the cereal is significantly more expensive. The fact that pubs continue to exist thousands of years after they were first invented, while the only cereal bar has failed after just a few years suggests that the vast majority of punters consider one fine and the other somewhat less so.
"So basically the punishment for the ad campaign is "don't run the ad campaign again". Presumably it was already complete. So no harm to O2.
If the ASA genuinely believe people signed up not realising the price hike was there, then force O2 to honour the original charge through the length of the contract."
The problem is that the ASA not only has zero authority to actually do anything, it also has zero intention of doing anything even if it could. It's not a government enforcement agency, it's an industry body set up voluntarily so they could avoid having any actual regulation. Everyone involved agrees to accept the occasional slap on the wrist, just enough to keep the law from getting involved, but on the understanding that they won't ever suffer actual penalties. So the toothless ASA say "You've been very naughty, don't run those adverts that finished months ago again", and the advertisers respond with "Oh dear, what a terrible mistake we've made, we certainly won't say that exact sentence word for word again. What a good job we're all very good and there's no need to get the government or law enforcement involved here. Wink wink.".
So there's no point worrying about what the ASA should do if there really was a problem with an advert. The ASA is just a smokescreen with no intention or authority to do anything. If you have a problem with misleading adverts, you need to demand the government actually steps in and creates a real regulator.
"What the UK won't get that way is super accurate positioning and/or military applications."
This has been the stupidest part of the whole thing all along. The "super secret, amazingly accurate military system" is not actually any more accurate than the freely available services. In fact, its spec is actually worse than the basic open service and much worse than the high accuracy service. The difference is that it's supposed to be more resistant to jamming, and they promise not to turn it off in the middle of a war.
In practice, jamming is irrelevant when you're dropping bombs on people with AKs from thousands of feet up, and as long as we don't suddenly decide to invade France it's fairly unlikely that they're going to threaten to shut everything down to stop us using it. The normal commercial services are perfectly adequate for anything the UK plans on doing, military or otherwise. The problem is nothing to do with how useful the system is, it's purely about being upset at not being allowed in the club, despite us being the ones who decided to leave.
"low-latency connections required for time-sensitive IoT applications"
Pretty much all the IoT applications I've seen being hyped are pointless consumer tat of the "smart" home variety. Are things like central heating thermostats and internet-connected toasters really so time sensitive that a few milliseconds of network lag are relevant?
"You've been watching too much Monty Python. I have spent decades working in British engineering and have never seen any form of discrimination based on 'class'."
Indeed. It's fair to say that Britain used to have a class system. The landed gentry were at the top, those in trade (respectable, well-paying jobs that is, doctors and lawyers and such) in the middle, and the working class firmly at the bottom. Even if someone managed to be successful enough to buy themselves into land and nobility, they'd still be looked down on as new money, even more so if a worker somehow managed to achieve that kind of success. It was never really as institutionalised as the Indian caste system, but there was certainly a general social understanding and attitude for how it was supposed to work.
But none of that really exists any more. The gentry are increasingly irrelevant where they still actually exist at all, and if anything most normal people look down on them now rather than the other way around. The "self-made man" is no longer looked down on but is considered something to aspire to, and someone who worked their way up from the bottom is generally considered better than someone who was born into riches.
What we do still have is relatively low social mobility. If you start off at the bottom of the socioeconomic ladder, you're unlikely to climb much higher through the course of your life. And this inequality and low mobility is a real problem for a society. But it's very different from any kind of actual system of classifying or enforcing class. Poor people don't have much opportunity to become rich and successful, but if they do they're generally considered an inspirational success story, not something to be derided and hammered back down.
"Both sides have good points to make."
Agreed. I think the problem is that the law doesn't easily handle "I know it when I see it". It seems obvious that booking.com is used as a brand and company name and should therefore be trademarkable, while washingmachine.com isn't and therefore shouldn't. But how do you define the difference between them in an objective law such that a reasonable person will always reach the same decision for any given term?
The specific decision for booking.com seems fair enough, but the precedent it sets seems likely to cause all kinds of trouble in the future as people try to exploit the idea.
"OPPO says it is primarily aiming this device at a younger audience, emphasising the phone's entertainment credentials."
What entertainment credentials exactly? It has a screen, a speaker, a camera, and can connect to the internet. I'm not entirely clear how this distinguishes it from every other phone on the market.
OK, I'm drawing a blank here. It feels weird enough that "bitch" is apparently considered so hideously offensive that you can't even use the word when discussing its offensiveness, but it really gets silly when you censor the things you're discussing to the point that I can't actually figure out what it is you're discussing.
That aside, this once again really highlights the sorry state of AI and machine learning as a field. Here we have what is apparently a standard dataset used throughout the field as both a training set and a benchmark for all kinds of algorithms. But we're told that there are too many pictures, so the classification had to be done by a computer in the first place; the pictures are too small, so humans can't recognise them anyway; and no-one has ever bothered actually looking at the labels to check whether they make any sense at all. The fact that some offensive words occasionally appear seems far less important than the fact that the entire thing appears to be utterly worthless for its intended purpose. You can't train a machine learning system on unknown computer-generated data and expect to get a useful result at the end.
"To me it proves Asians are aliens, hard working and brilliant, on the other side they allow a murderous monster like Mao a place in the Sun, and support a fascistic regime, albeit with a hammer and sickle label stuck to it, to exist and put to world in fear for a 1930's scenario of escalating territorial disputes with its neighbors."
Wait, those fascists back in the 1930s were Asian? Because otherwise it seems a bit odd to call all Asians aliens for acting exactly the same way as Europeans (and everyone else for that matter; there's a reason we call it "human nature").
Not to mention that most people over 65 are retired, what with that being the normal retirement age at the moment (although soon to increase by a year or two). For an awful lot of people, the retired lifestyle looks almost exactly like lockdown, so it's hardly surprising that they cope better when forced to do exactly what they were already doing. Even more so when you consider the financial situation, given that someone living off a pension and savings doesn't need to worry about being furloughed or suddenly losing their job entirely.
Even ignoring retirement, the likelihood of being more settled down and more financially stable pretty much scales with age. So of course it's the younger people who tend to be more widely social, travel further and for longer, but don't have any kind of stable career or financial base to fall back on who are the most worried.
Meditation and "strolling" are not actually exercise. I certainly see a lot more people at the moment taking several hours to walk a few hundred metres around a park, but as iamanidiot notes that sort of thing is probably much less exercise than they used to get just working a normal job. Indeed, it was particularly notable early on in the lockdown how many people took "one exercise per day" to mean "go and sit in the park with a book once per day". And of course the entire point of meditation is to not move around and just relax, which is pretty much the exact opposite of exercise.
These sorts of things likely have benefits for mental health and such, especially given the alternative of being shut inside a house all day doing nothing, but they're not going to make any difference in terms of getting fit or losing weight. It's all very well to study how people cope with lockdown, but it seems pretty odd to proclaim that people are doing more exercise, and then go on to note that by far the biggest increases were in activities that involve no exercise whatsoever.
"TikTok is something positive, interesting and informative to watch"
But what exactly does that have to do with TikTok? It confuses me in exactly the same way as Twitter - there are already plenty of ways of posting text and videos, so how exactly does imposing a length limit as your sole distinguishing feature suddenly make it desirable? TikTok is literally just a really shit version of YouTube, or Vimeo, or even Facebook, so what exactly is the point? Hell, speaking of Twitter, anyone remember Vine? Probably not. TikTok is an exact copy of it. But no-one cared in the slightest, and even the cash-haemorrhaging Twitter decided it wasn't worth keeping. So why is TikTok suddenly considered so amazing, and full of interesting, informative stuff, when it's just a poor copy of existing services, including ones that have already failed when trying to do the same thing?
God I'm old. Now get off my lawn.
Indeed. Segway is undeniably the groundbreaker here. A simple, relatively small electric scooter that's fast enough and with enough range to make getting around towns or similar easy and convenient. The subsequent popularity of electric scooters and the disappointingly misnamed hoverboards shows that the idea was great and the timing really not bad. The problem Segway had is that the innovation pretty much stopped at the first model. As should be expected, that was too big, awkward and expensive to be truly popular, despite showing the promise the technology had. 20 years later Segways still look basically the same, but now you can buy much the same thing from any number of other companies as a child's toy.
You certainly can't call the idea a mistake, because the damn things are everywhere despite being technically illegal to actually use*. But Segway as a company certainly made a huge mistake by resting on their laurels and marketing to tiny niches instead of making the effort to turn it into something marketable to the general public.
* For those unaware, things like electric scooters are powered vehicles as far as the law is concerned, so you can't drive them on pavements or any other public spaces. But they're not road legal so you can't use them there either. Until the law catches up, you're only allowed to use them on private land. Obviously no-one actually pays any attention to this. And before it comes up, ebikes don't have this problem because they only assist in addition to pedalling, otherwise they have to be licensed as mopeds; electric wheelchairs also have to be licensed and meet regulations to be used on roads, although I think they have an exemption allowing them to be used in pedestrian areas.
"They're rechargeable. The earbuds are charged by their case and the case is recharged in the same way phones are."
Which means not only do they use incredibly inefficient inductive charging (at best about 60% efficient, and it can easily fall below 30%), but they require two separate charging steps, with an additional battery stuck between the socket and the thing you're actually trying to charge. People often dismiss things like phone power use because it's so small compared to something like a kettle, but when you have billions of the things being recharged every day, switching to wireless charging and using three times as much energy is actually pretty significant. Just because they have rechargeable batteries doesn't mean they're actually good for the environment - in terms of both energy use and waste, ear pods are far worse than regular headphones.
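The back-of-envelope version, using the efficiency figures above (the ~90% figure for a plain wired charger is my assumption of a typical value, and the 10 Wh battery size is just an illustrative number):

```python
# Wall-socket energy needed to deliver the same charge to the battery.
# Inductive efficiencies are the rough figures above; wired ~90% is assumed.
def wall_energy(battery_wh, efficiency):
    """Energy drawn from the socket to put battery_wh into the battery."""
    return battery_wh / efficiency

wired = wall_energy(10, 0.90)           # ~11.1 Wh
inductive_good = wall_energy(10, 0.60)  # ~16.7 Wh
inductive_bad = wall_energy(10, 0.30)   # ~33.3 Wh

print(f"worst-case inductive uses {inductive_bad / wired:.1f}x the energy of wired")
```

That ratio of roughly three is where the "three times as much energy" figure comes from.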
"I replaced the faulty battery in my cheap £13 earbuds. Took nothing more than a soldering iron and some patience. Expensive ones may or may not be harder to repair."
Have a look at some of the teardowns on iFixit. Both Apple and Samsung earbuds are simply impossible to do anything with, because you have to physically rip the things to pieces in order to get at anything inside.
"Lets just be clear here - Skynet is the series of MoD Satcom platforms that dates right back to the early 1970's - way, way before Arnie got into his Terminator suite."
From the article:
"All these mentions of Skynet, satellites, cybersecurity, and AI will probably remind readers of Arnie Schwarzenegger's most famous film role. But don't forget, Skynet dates back to the 1960s as a UK defence project."
"The outer covering simply decomposed and the cushions contained within went AWOL... Judging from my searches on line this is not a one off event. Be forewarned."
This is common to pretty much all fake leather and similar synthetic materials often used for headphones (and chair armrests). It degrades over time, and a lot faster when exposed to sweat. Most decent headphones have easily replaceable pads (and maybe headband padding) because they're essentially considered consumables. It's possible Logitech might be worse than normal, but I have a set of Sennheiser headphones that are well over a decade old now and on their third set of (nice and cheap, third-party) pads.
"all vehicles must be taxed and insured and therefore cycles were not allowed"
While it's not a legal requirement, it is entirely possible for a cyclist to be insured. Arguably quite sensible in fact, and one of the main reasons for joining organisations like Cycling UK (née CTC). And cyclists generally pay all taxes owed; the same amount paid by other low-emission vehicles like electric cars. So do Costa allow all vehicles if the owner can show the correct documentation, or are they just lying about why they don't want to serve certain vehicles?
"To me that means that all content will be online, players will be trapped in Sony's ecosystem and won't be able to do anything outside of what Sony allows online."
But having a disc or not isn't relevant here. There are already plenty of games that install from a disc but still require an online connection to actually play. And there are plenty of downloadable games that are perfectly playable offline once you have them. The online services horse has well and truly bolted at this point, so there's no point worrying that a new console might potentially trigger it in the future.
As for being trapped in Sony's ecosystem, that's literally the only reason consoles exist at all. If they didn't want to lock you into their walled garden, they'd be just another PC hardware retailer. They're not even pretending any more: consoles are just PCs with a branded lock on the door - you agree to lock yourself in in exchange for bribes of exclusive games.
"When one looks at the structure of Academies and these UTC's it does appear to be more a method of providing somebody with the ability to 'cream off' funding for education to put in their back pocket than to empower schools to decide their own budget."
Which should be entirely unsurprising given the Conservatives' constant push to privatise every public service they can. When it's clearly not acceptable to just sell things off wholesale, we get this endless chipping off of parts to sell in pieces. Education gets this kind of "free" school nonsense, the NHS gets its PFIs, prisons are just handed over to private companies, and most government services are now provided by Crapita or similar companies.
And of course all the services are a pile of mismanaged crap and disappearing money. The point isn't to provide useful services; it's just to make sure everything that can be sold is. As long as their own families can still pay for decent education and healthcare and their mates can make a quick buck, why would anyone make the effort required to actually do the job properly?
"This is only the first blunder, the 'A.I' is going to do this again, or something similar, all because MS would rather not pay a few salaries for a quality, intelligent edit."
Unfortunately, this is not a problem that requires AI to be involved:
"There is a simple method to preventing sponge attacks... In other words, sponge examples can be combated by stopping a model processing a specific input if it consumes too much energy."
If we're talking about things like real-time data processing for an autonomous car, ignoring inputs entirely doesn't sound a lot better than handling them too slowly. Either way, the car is unable to understand its surroundings and becomes a danger to everything around it. This method could be useful in non-critical roles - if you're doing something like bulk image recognition, you just have to compromise between a reduced data set and computational efficiency. But in any situation where any or all of the input data could be important, the attacker wins either way. If the whole point of your AI is to decide which data is important, throwing it out before that decision can be made is just as bad as taking too long.
"Maybe he would then actually get on and do something with his life."
Sadly unlikely. While much of the focus has been on the anti-5G element, the BBC report provides a few more details - https://www.bbc.co.uk/news/uk-england-merseyside-52966950
"Whitty, who had 29 previous convictions, including for assault and for possession of a firearm"
This isn't one of those rare conspiracy nuts who finally decided to get out of the basement and actually take action, he's just a run-of-the-mill petty thug who happened to use 5G as an excuse for his latest stupid crime. Perhaps better education, treatment and rehabilitation could have helped earlier in his life, but by the time a person is in their late 40s with 30 criminal convictions (and no doubt plenty more not prosecuted or let off with warnings), it's a bit late to be thinking up unusual punishments in the hopes they'll suddenly turn their life around.
From the description of events, it appears that the wombat didn't attack anyone, the humans attacked it:
"He looked up at me and just dived to get past me and I held him for a long time"
The wombat was startled and tried to run through a convenient opening, so the human grabbed hold of it for no apparent reason, and after some unspecified time of abusing the poor animal was surprised when it started biting her to try to get free. While there's not enough information to be sure, the fight probably only went on for so long because they kept attacking and trying to restrain it instead of just leaving a clear escape route.
Basically, the humans did pretty much the exact opposite of what you should do when faced with a wild animal, and were surprised when a scared, cornered animal reacted in the exact way any sane person would have expected. It doesn't matter what country you're in or how cute or inoffensive you may think an animal is, you do not try to grab hold of wild animals. Just give it an escape route and make sure you're not in the way. If that's not possible, try to contain it by closing doors or whatever and call someone competent to deal with it. Don't dive in to start a wrestling match and then blame the poor animal for fighting back.
Pretty much every video call I'm on has at least one person complaining about the problems they're suddenly having with their internet. Pretty much every time the problem is that they keep moving around and facing away from the microphone so it starts trying to filter them out as background noise. Oddly enough, no amount of disabling video, faffing around with settings, or turning it off and on again ever seems to help. It's hardly surprising that most people fall back on blaming "the internet" for their problems when even some of the supposedly more intelligent and technically competent people can't figure out how to make a simple VOIP call work.
It's also likely a lot of households will be having issues with wifi congestion. Having four or so people scattered around a house all with poor wifi signals will grind things to a halt even if the raw bandwidth needed is far less than that available. Getting better coverage with a mesh setup, or even just a better non-ISP-provided router would likely solve a lot of problems people stuck at home are having with "the internet". Or using actual wires like some kind of savage of course.
"That's some of what the author was trying to get across I think!"
That's the impression I got. Anyone familiar with Python should be aware that while it's an easy language to learn, there's a reason pretty much all the heavy lifting is done by calling C libraries. But it's interesting to see a more quantitative look at exactly how much speedup you can get from specific changes.
As for HildyJ's point about programmers saying programming is important and chip designers saying chip design is important, the obvious answer is that neither is particularly useful in isolation. This study gives a nice example of that. Parallelising the code to use all the cores gives a nice speedup. If the chip doesn't have multiple cores, that obviously isn't going to help, while if the programmer doesn't use them, it doesn't matter how many extra cores you add. Hardware and software have to develop together, otherwise you'll just end up with hardware providing options that aren't used by the code, and code trying to do things that aren't physically possible on the hardware.
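A trivial illustration of the "heavy lifting in C" point: the same reduction done in pure Python and handed off to NumPy's compiled internals produces identical results, but the latter typically runs orders of magnitude faster on large arrays (the array size here is just an arbitrary example, and the exact speedup depends on the machine):

```python
import numpy as np

# Sum of squares of the first n integers, computed two ways.
n = 100_000

# Pure Python: the interpreter executes the loop bytecode step by step.
pure = sum(i * i for i in range(n))

# NumPy: the whole loop runs inside compiled C code.
data = np.arange(n, dtype=np.int64)
vectorised = int((data * data).sum())

# Same answer either way; only the execution path differs.
assert pure == vectorised
```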
"How come the earliest start are labeled PIII and not PI?"
Because back when the difference was first noticed, no-one had any idea why the stars differed. The classification into two populations came about through spectroscopic measurements made around the 1920s, which showed that stars could mostly be grouped into those with lots of metals in them and those without. These observations were done around the same time as observations indicating that the universe might be expanding, and others suggesting that some nebulae might actually lie outside our galaxy and could be entire galaxies themselves. The 1910s-1920s were a pretty big time for cosmology.
But obviously, without even knowing that other galaxies existed or that the universe was expanding, let alone how big or old it might be, there wasn't any way to figure out that the different stellar populations were due to the time of their formation. So they were pretty much arbitrarily labelled I and II, and it just happened to turn out that it would later appear more sensible if it had been the other way around. Population III were only added much later, and simply followed on the already established trend.
See also something like electron charge being negative, which was again essentially a 50/50 guess at which direction the charge-carrying particles might be moving when an electric current flows, made before we managed to figure out the actual details.
"Vague memories of chemistry lessons, but was it the Na that fizzed and floated on a cushion of hydrogen while the Li did the same but produced enough heat to ignite the hydrogen?"
No, it was potassium that did that. As noted by someone above, lithium is less reactive than sodium. Potassium is the next one down the list and significantly more reactive. You might also have seen video of rubidium, which is even more reactive, to the point that it's generally not allowed in schools. When dropped in water, lithium gives a disappointing fizz, sodium floats around on a cushion of hydrogen, potassium ignites the hydrogen (usually in a series of pops rather than a constant flame), and rubidium blows up the water tank.
"SI is metres & millimetres. CGS uses centimetres. IIRC, the French preferred CGS. The OP mentioned cm, anybody expecting mm will be out by a factor of 10."
Which is further complicated by the fact that CGS isn't actually a single standard set of units, but several different sets that share the basic centimetre, gram and second part while being completely different for other units, mostly once you start involving electromagnetism. It's always fun reading older physics textbooks and working out which unit system is actually in use by spotting which physical constants have been set to 1, because of course the author never actually bothers to tell anyone.
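As a concrete example of the "which constants have been set to 1" game, take Coulomb's law in SI versus Gaussian CGS, where the electromagnetic prefactor is simply absorbed into the definition of charge:

```latex
F_{\text{SI}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2},
\qquad
F_{\text{Gaussian}} = \frac{q_1 q_2}{r^2}
```

Because the $4\pi\varepsilon_0$ has been folded in, charge carries different dimensions in the two systems, so you can't convert a formula just by swapping unit prefixes; you have to know which system the author was using.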
"he argued that simulated racing is a long way from the real thing, and the real-life drivers can be forgiven for not adjusting."
Yes, absolutely. Video games are very different from real racing, and it's entirely understandable that drivers would struggle when asked to suddenly start doing what is effectively an entirely different job. It's also understandable that someone used to doing well at one sport might be disappointed when they don't do as well at a new one that wasn't even their choice to start competing in. But none of that is an excuse for cheating. He's a professional who is paid to do a particular job. He refused to do it and cheated instead. "It's not the same as my normal job so I cheated in a charity competition" is not an apology, and indicates exactly how much forgiveness this individual deserves.
"so are the people who write email software that makes CC the default instead of BCC."
I don't really agree. The problem is that not everyone needs to use email in the same way. Some of us don't ever have any reason to worry about sharing email addresses, so there's no reason to ever use BCC. Others deal with sensitive information and need to use it a lot. There just isn't a single default that is actually appropriate for everyone.
Perhaps it would be better to err on the side of caution, since a bit of annoyance on my end is not as bad as having people keep accidentally splurging personal data around the place. But a better solution would be to make things more easily configurable. In Outlook, for example, as far as I can tell there is no way to make BCC the default. You can make it slightly more visible, but that's it. It really should be possible to configure your normal use case once, either individually or as a wider policy. BCC shouldn't need to always be the default for everyone, but it should be possible to make it so if that's your normal use.
"Are these guys new to the concept of capitalism or what ?
When a supermarket chain eyes a spot of land it wants it not only gets the local authorities to exempt it from taxes for ten years, but also gets roads and infrastructure made on its behalf then takes off after nine and a half because, all of a sudden, it remembers that it won't be making money soon.
And now these senators are wondering about a $12 billion plant ? Were they born yesterday ?"
See the reply to the post a couple above yours. Of course they're not new to this and they know exactly what's going on here. They're not surprised there might have been various incentives to get this deal, they're annoyed that none of the pork is coming to their constituencies. It's entirely standard behaviour. When I offer tax incentives to a business, it's an important economic stimulus supporting local jobs and the country's wellbeing. When you do it, it's blatant corruption and must be investigated.
"That wouldn't surprise me. Unfortunately, it's not just Apple doing this. Amazon and Google were both caught keeping databases of this stuff and they're almost certainly still doing it."
The main difference is that Amazon and Google are pretty open about the fact that they're constantly spying on everything and selling all the data to anyone who offers to pay enough. Perhaps not everyone realises the full extent of what they do, but by this point there's really no excuse for not understanding what you're letting yourself in for if you buy into their ecosystems.
Apple, on the other hand, is usually seen as the much more privacy-focussed alternative, which doesn't do all the advertising crap and is therefore much less intrusive about personal data. The choice is always seen as the more open systems which include your personal stuff in that openness, or the walled garden that also keeps your personal stuff walled off. So to see Apple doing exactly the same as everyone else is much more surprising and disappointing.
As for Microsoft, no-one uses Cortana on purpose, but it's essentially impossible to actually disable it so you can be sure there's plenty of spying going on there too. Just look at the mess of their "telemetry" nonsense, and ask yourself if they'd behave any differently with their voice recording.
"At 1.5x the cost which is the rough ballpark used to account for tax, pensions, office space, equipment etc that'd cost 22.5k p/a per employee so for 40 million you'd get ~1,777 people doing the job for around 0.1% of the companies yearly profit so it's not really even a case of the business model not being profitable."
Sure, but if you're going to look at the entire company's global profit, you also have to think about the cost of doing the moderation globally. Maybe you could handle the work in the UK with 2000-odd people. Then you need another 2000 or so for each other country in Europe, and for each similarly-sized one around the world. Probably 10 times that for a bigger country like the US. That means it's easily 1% of profits just moderating a single country, and probably at least 20-30% in total. And that's assuming minimum wage is enough to cut it, when they're (certainly Facebook at least, I think Google as well) already having issues with lawsuits and claims for psychological treatment even with the minimal amount of people involved. Could they do it and still be profitable? Maybe, I don't think we have anywhere near enough information here to do a real calculation. But it would certainly be a significant hit to profits even if it's not enough to kill things off entirely.
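Plugging in the quoted numbers makes the scale of the problem clearer (the per-country multipliers are guesses from the paragraph above, not data):

```python
# Sanity-checking the figures from the quoted post.
SALARY_LOADED = 22_500       # £15k salary x1.5 overhead, per the quote
UK_BUDGET = 40_000_000       # the quoted UK figure, said to be ~0.1% of profit

# Headcount the UK budget buys at that loaded cost.
print(UK_BUDGET // SALARY_LOADED)  # 1777, matching the quoted ~1,777

# Profit implied by "0.1% of yearly profit", and the "10x for the US" guess.
annual_profit = UK_BUDGET / 0.001
us_budget = 10 * UK_BUDGET
print(us_budget / annual_profit)   # 0.01, i.e. ~1% of profit for the US alone
```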
It's also worth noting that the Google/Alphabet thing isn't really worth worrying about. Google accounts for 99.4% of Alphabet's revenue. Alphabet isn't a real company, it's just a shell to collect Google's stuff together under a different name and funnel some of the money to speculative projects (like self-driving cars). Any distinction between the two names is completely meaningless; it's all either just Google, or Google wearing a false nose.
"Well duh. Google has billions in the bank and can't be arsed to hire more than a single person to manage app rejections ?"
Indeed. People love to talk about how monitoring apps, videos on YouTube, and so on isn't easy because there are just so many of them. But the fact is that it is incredibly easy - you just need to employ enough people to actually do the job. Is that more expensive than waffling about automation while refusing to actually address the problem in any meaningful way? Almost certainly. Might that mean some business models might not actually be profitable? Quite possibly. You do not have an inherent right to a profitable business.
"It’s an unusual result as Nominet’s last five years of data, lovingly presented here and depicted in our chart below, shows April is usually a pretty quiet month for registrations."
That data shows that in the last five years April has been the busiest month twice and the quietest twice. March has exactly the same record. January has consistently been near the top but first only once, while February has mostly been third and never first. March and April seem a bit more erratic than January and February, but there doesn't seem to be any particularly meaningful trend visible, and certainly nothing to say that April is usually quiet. This year doesn't seem unusual at all; it's almost identical to 2018.
Biting the hand that feeds IT © 1998–2020