I honestly first read it as "Perl nutter". I think I've spent too much time deep in Perl the last few weeks....
> Another perhaps more relevant question might be, why did Linux go with a signed number of seconds since "the beginning of the epoch"
Because sometimes you have to handle dates prior to 00:00:00 UTC on 1 January 1970. A positive integer indicates number of seconds after epoch, a negative number indicates number of seconds before the epoch.
1970 seems a long way away now, but when Unix was just getting started it was very much the near present, and people did want to store dates prior to 1970 on their machines. Linux took a lot of Unixisms, and the epoch seconds time was one of them, so for compatibility it followed the same concept.
Using signed integers, in this case 32-bit (there were other bit widths for time_t) with the "00:00:00 UTC, 1 January 1970" epoch, allowed for a range from December 1901 to January 2038, which seemed like a decent compromise at the time.
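A quick sketch of that range in Python (using timedelta arithmetic from the epoch rather than the platform's own time_t, so it works the same on any OS):

```python
from datetime import datetime, timedelta, timezone

# Signed 32-bit time_t: seconds counted from the Unix epoch,
# with negative values reaching back before 1970.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

earliest = epoch + timedelta(seconds=-2**31)   # INT32_MIN
latest = epoch + timedelta(seconds=2**31 - 1)  # INT32_MAX

print(earliest)  # 1901-12-13 20:45:52+00:00
print(latest)    # 2038-01-19 03:14:07+00:00
```

That upper bound is, of course, where the "Y2038" problem comes from: one second later the signed counter wraps.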
Yes, but if they did that they would actually have to pay more tax themselves. This way big business keeps its low/no-tax loopholes, while governments (who don't pay taxes themselves) squeeze the middle class and the poor (because at the end of the day, corporate taxes are not paid by companies, they are paid by customers, in the form of higher prices).
Funny how those that pay little (or no) taxes, are the ones who seem the most keen to push tax increases on the rest of us.
> The article says "the main source of radiation does indeed come from the Sun" which means the Sun actually does shine out of Uranus.
Well, technically it is reflected off Uranus. How much light is reflected is very much related to how much surface area is exposed to the sun.
The issue here is that there is more light emanating from Uranus than reflection alone can account for, leading the scientists to conjecture that there is an actual light emitter in Uranus causing the discrepancy.
I ended up gutting my G5 and putting an ATX motherboard in it (it had a motherboard failure). Still using it all these years as my main desktop PC, albeit with a 6 core AMD and 8GB of RAM now.
I do still have a G4 Mac mini running Tiger that I like to use (and browse the web on, with this very browser) from time to time. It is a shame the developer is ending the browser, but I totally understand, and was surprised anyone was still maintaining anything for the old Mac.
Oh well, a great thanks (and many pints) to the dev for all his dedication so far. I will keep using it until such time as it no longer renders anything properly, and then I will probably install Linux on it.
The point is that Gentoo (then bleeding edge) had it in 2004, and my Debian distros had it a year or two later, once it was considered stable.
Fact is, we had parallel init before systemd existed (which I think was back in 2010 or so), so "faster boot times" was a solved problem, not to mention a problem that was pretty much irrelevant in Linux world. For two simple reasons:
1. Most Linux servers had uptimes measured in years, and when they did reboot, the BIOS POST process was usually far longer than the actual OS boot. When it can take 15 mins for a machine to get past POST, you don't care if the OS boots 30-60 seconds faster, especially as you don't have to sit in front of a server waiting for it to come up.
2. For desktops and laptops, suspend worked fine, so you rarely needed to reboot. Even hibernate started working before the 2010s, so you could be "up and running" quickly. That said, in the days of HDDs, boot time was usually limited by how many IOPS the disk could do. SSDs have made even single-process init startups blisteringly fast now.
Beyond that, I will say that for Gentoo, back in 2004, compiling and tweaking for a specific CPU architecture did bring a noticeable benefit, one that for me was worth compiling everything from scratch. As machines got more powerful, the benefits diminished, so I stopped using it. (Funnily enough, as CPUs got more powerful, more people started to use Gentoo because it compiled faster, while I used it less, because the performance gain mattered less on faster computers.)
Nowadays I am running Devuan Linux, and am thus thankfully systemd-free, although I do have to do battle with it at work on CentOS/RHEL from time to time.
> The core idea of replacing the serial init scripts with a parallel boot process is fine, it makes server booting faster and replicates SMF on Solaris in that regard.
Yes, and it was done before systemd. My Gentoo 2004 vintage distro had parallel init booting, and it did speed things up quite a bit.
So systemd was very much a (poor) solution in search of a problem when it came out in the first place, and not much has changed, except it is getting forced down our throats by RedHat/IBM.
> Via Eden processors are now officially dead.
Crap, my main firewall is still running an old Eden mini ITX board, as is my garage PC, and my point to point wifi bridge.
I guess this means I can no longer upgrade Linux on it? That unnerves me, especially for the firewall & Wifi Bridge, which I always keep well patched and up to date.
Seems silly to have to junk perfectly working HW that is doing its job just fine, merely because existing software support is being removed.
> Doing some tidying up a couple of weeks ago I found a "new" sealed in-box unused BT Broadband router from 2010. Completely useless, yet it felt so wrong to drop it in the bin.
I've taken to re-using old routers as wifi access points. I got three of them now, each with the same SSID and channel. Everything (except my Android phone of course) seamlessly switches to the strongest signal AP, resulting in excellent coverage across the house.
I have much difficulty throwing out old stuff that still works; it just feels wrong to consign perfectly functional kit to the tip. I tend to donate it to my local charity or friends in need, or I just keep it and use it, and try to buy as little new stuff as possible.
> Maybe because the bot users could simply place fake bids to block any auctions. In addition, the retailers would have to be denied stock, at least at first. They would not be happy.
I don't understand how that would work. Say I bid £1,000 on an auction, then you bid £20,000 to "block the auction"; the price (on eBay) will probably end up around £1,500 unless someone else bids it up.
If nobody else bids, but you refuse to pay when the auction ends, what happens is that the offer goes to the next bidder in line, so I get it anyway for £1,000.
You have not blocked the auction from happening, nor did your £20,000 bid result in a silly high price visible on the auction page to put other people off.
Having read the paper you linked to, the results were inconclusive. However, nowhere did it say alarms might make things worse.
If there is one quote which can sum up more or less the results, it would be this one:
"Given previous research findings suggesting that they have been effective deterrents, it would be premature to conclude that domestic burglar alarms are (or have become) counter-productive and hence that their installation should no longer be encouraged."
So research so far shows alarms do work as deterrents, the paper just refines that to "they either help, or do nothing". They don't seem to make things worse overall.
They interviewed burglars who were in prison and burglars "in the community". The ones "in the community" generally avoided houses with alarms, while the ones in prison didn't (or would actively target them, the logic being that an alarm means there is more valuable stuff inside worth protecting, and hence worth stealing).
From what I can see, the only time your statement matches, is with the interviews of burglars in prison.
However, the sample is biased due to the fact they are in prison (i.e. they were caught).
All that tells us is that there is a higher likelihood that burglars willing to rob an alarmed house will be caught, convicted, and end up in prison, which for me is another plus point for having an alarm.
In theory an alarmed house may be seen as a more "juicy" target, but as stated in the paper, whether a house is alarmed is not the highest concern when choosing one to burgle. Rather, how visible the house is to neighbours and passers-by, and whether it is occupied, seem to be the big things burglars look for.
> Wow, that's pretty bad, I'm glad things don't work like that around here. A "restocking fee" is illegal, as simple as that
Fair enough. Round here (the UK), it's pretty common, especially on eBay. The logic is that if the item is faulty or not as described, the seller is at fault and has to absorb the cost of shipping the item back and restocking it.
However, if you just "change your mind" and there is nothing wrong with the item, then it is not the seller's fault, and you have to pay the return shipping and the seller's restocking costs.
> A shop is allowed to have their products placed in the webshop of the big retailer, but the retailer must show that it's actually a third party that's selling the product. It's not possible for a small shop to resell an item as if it's from a big retailer and just apply a markup (certainly not without informing customers).
This is a bit different. What I think people do is create the eBay shop, then scrape the top 20 most popular items off Amazon and list them on eBay with a 30% markup. If someone buys one, they order it from their own Amazon account for 30% less and ship it as a gift (which allows a delivery address different from the purchase address, and also comes without an invoice, so the buyer doesn't know how much was paid for it), pocketing the difference as profit.
It's basic arbitrage. I guess it is a way to catch out people who don't price-check across Amazon/eBay, but it also catches out people like myself who may be willing to pay more specifically to avoid buying from Amazon.
In theory, with APIs at both ends, you could well automate this, and have it just bring in money every so often, which is what I suspect they do.
> Seems to me there's a lot of work to be done in regards to consumer, as well as worker protection in your neck of the woods.
I guess so. I don't know what kind of protections exist outside of where I am, but we do have distance selling rules here and some protection.
Paying more for an item is not illegal here though, and I am not even sure how you could protect against that. People are allowed to resell new items for more money, and if someone pays that price it's a sale.
If I bought something at my local shop, and listed it online for 30% more, and someone bought it, how would that be different?
I try to as well, including buying from some of the small "eBay shops" people set up, even paying more than the price on Amazon just so I don't buy from Amazon.
Unfortunately quite a few times this has backfired. I buy something off a small eBay store, only for an Amazon gift parcel to arrive. What some people do is list stuff on eBay that is available on Amazon, but add 30% to the price.
So you buy from them thinking you are paying a bit more to avoid Amazon, only to get an Amazon parcel anyway, at a higher cost than just buying it yourself off Amazon.
The worst thing is you can't return the item as "not as described" or "faulty", as there is nothing faulty about it, and it is as described (they don't mention it is simply bought off Amazon, presumably to stop people just going there to buy it cheaper).
You can only return the item because "you changed your mind", which is your right under distance selling rules, but (a) you then have to pay the return shipping, and (b) most of these shops charge a 30% "restocking fee", so they get paid either way and you end up even more out of pocket.
None of this is illegal, so trying to boycott is very very hard to do, unless people are forced to clearly state "item comes from Amazon" when they sell things.
And also being known to suck up RAM like nothing else. Slack on my work machine, at the time of writing this post, has sucked up just over 5GB of RAM. To need 5GB of RAM to do what is effectively a slightly glorified IRC client is mental, and just demonstrates how poor the "developers" of Slack and/or Electron are.
Not to mention this general trend of a desktop app being a loose wrapper around a web-browser + website bundle is just dumb. The whole point of a desktop app is that it is not a web app. If I wanted web-based bloat, I would just use the app on my browser. Don't sell me on "we have a desktop app", which turns out to just be a locally run web server + browser + webapp in a horrendous bloated bundle. That is even worse than just using the web-app in the first place.
The only silver lining is that this level of bloat is causing my company to purchase new laptops for all of us, so we can run Slack and something actually productive at the same time (we get a bump from 8GB to 32GB of RAM).
It's a different subset of people.
On Usenet, most of the people were of the nerdy persuasion, as computers were not as prevalent back then, and most "normal people" were living and socialising physically rather than virtually.
They knew the tech, they knew the risks down the line, and guarded their privacy very strongly.
Also, machines back then were very "user centric": you did not have to fight your OS to stop it spying on you, nor "jailbreak" it in order to execute the code you wanted, so it was much easier to control what apps ran and how much data about you was collected.
Fast forward to 2020, those same people, still know the tech, still know the risks down the line, and still try to guard their privacy very strongly.
The difference is the rest of society have come online, and they don't understand (or care) about the implications of this much technology under the control of so few. They actively don't want a powerful user machine under their control, they want "appliances" that "just work", and that is where the tech is going.
Of course in the minds of the masses the phrase "Nothing in life is free" doesn't seem to occur, because then they may realise there is some purpose behind being given these virtual trinkets "for free", that goes beyond what they see.
It's an ever-growing battle to not get absorbed by the collective, and those who do care about individual privacy and control are a minority. Open source saves us a bit, allowing us to modify, tinker with and customise systems to serve us, but as we can see with Google/Amazon, it's not a cure-all.
> I guess our recycling centres will see a lot of Google Home and Alexa devices appearing very shortly.
I really wish this were true, however I am getting the impression that most people in fact don't seem to care.
Before you could claim they were not aware, or they naively believed Google/FB/amazon/etc.. when they pinky promised that they are not recording and will respect privacy. However multiple times now we have seen evidence that it's a pile of lies, and they are recording everything, yet people still use them.
Each time it happens, the company claims it was a mistake of some kind, usually due to a "developer" forgetting to disable some debug code, or something similar.
That is patently absurd to anyone who has actually worked in IT. No code makes it into a production FW image without passing through at least one other person. Even if a lone dev made a mistake, there are other devs, teams of integrators, QA people and security people, all of whom would have had to sign off on the FW update.
And they are telling me none of them noticed? Not even the massive increase in traffic as audio started getting shifted en masse to their datacentres post-update? Either said company is lying, or the amount of incompetence shown should bar them from ever working on anything more complex than a 90s-era LCD watch.
Yet, they are successful. These systems are getting more prevalent. You can notice (and avoid) the little boxes in rooms that spy on you, or just chuck them in the bin where they rightfully belong, but now phones have the same technology, as do more and more modern cars, complete with cameras and microphones monitoring you. You can't just remove the spying component, it is all integrated.
Honestly, moving to a shack in the middle of nowhere and just detaching from modern society seems more and more appealing as time goes on.
> “Just because something is newer does not mean it is better.”
True more often than not in my experience.
Nowadays "newer" tends to be a synonym for "more expensive, but built more cheaply". The desire to squeeze out the most profit tends towards this result, along with "engineered failure", which is one of those travesties of the modern world that really gets my goat.
Pint, as its Friday, and the best weather for it!
I personally love working from home. I have a better setup, higher quality monitors than the bargain basement Dells provided, and a lovely mechanical keyboard that is a pleasure to type on, but would drive my co-workers crazy if I used it in the office.
I am more productive when I work from home. I have lower costs (so my "take home" pay has gone up on the same salary). I have less wear and tear on my car. I am less distracted by random questions from people just walking up to my desk, and I avoid the office politics. I can concentrate on my work, and yes I do work longer.
That is because, from my perspective, work always started the moment I left the house and ended the moment I got back (since I can't do anything else while commuting, it is effectively "work time"). By not having to commute, I added at least 2 hours a day to my life, every day, and that is a huge extra chunk of time. Even if some days I work longer than 9-6, I still come out ahead on time overall.
A few times I got pinged at odd hours (e.g. 10pm on a Friday) or on a Sunday about some work issue, so I did end up having to make it clear that I still have "me time", and that outside of working hours, or on weekends, I will not respond unless we agreed out of hours work/on call beforehand.
It even helps the environment to not have masses of people shifting themselves to and fro every day of the year just to sit in a different room.
If you think about it, the whole concept of commuting just to sit in a different location from the one you live in is really stupid and wasteful. Yes, some jobs require physical presence, and in the days of mechanical factory work you had to be on the factory floor to do your job. However, a good chunk of our industry is now virtual, and commuting is just a holdover from before ("We always did it that way, so that is why we do it now").
I like to think that with this realisation, there would be a mass switch to remote work for those industries that can (and it also reduces the amount of traffic during the commute for those who do have to go to work, giving them benefits too).
However it is not looking good. Despite admitting that productivity is up, costs are down, and internal polls showing the majority of the company prefer remote working, my company has started pressuring people to come back to the office, so we can sit in socially distanced perspex cubicles due to Covid.
I guess senior managers like to be able to walk around "their estate" and survey worker bees in rows doing work, more than they care about the costs and drawbacks of doing so. I guess it is an ego thing, either that or they have to justify the costs of the (empty for months now) office they spent big money on to the board.
Hence I am now considering finding a different company that is more open to full teleworking. Is there somewhere specific where I can find companies that only do full remote work? I know about FB/Twitter from the news, for example, but I have no idea if they are the only ones.
> Ah, the days when things were written specifically for IE. I remember having to manually tinker with pages to get them to work properly in it.
Yeah, we are now in the days when things are written specifically for Google Chrome.
Unfortunately if Chrome decides to do something unilaterally, and others don't follow, you just get breakage, to which people's most unhelpful response is "Use Chrome". It is like we are going back to the days of "IE only" websites, although a bit better because Chrome is at least cross-platform, although more spyware-infused than I remember IE being.
> Oh, and can we please find another name for AI? Artificial it may be, but I have difficulty accepting 'intelligence' in something that doesn't have sentience. Statistics, maybe?
I tend to call it "Machine learning", which seems to describe it to me better. The machines are capable of learning, but they are not sentient, nor are they "Intelligent" in the sense humans are.
The best you can say is that by training the machine so it learns something you want it to, you have imparted a limited subset of your intelligence to the machine to solve a specific problem within a limited domain.
That does not make the machine intelligent, any more than a machine programmed normally by a human is "intelligent". Nor do I consider it "artificial", as the learning is real, as is the system it runs on.
Nah, a lot of the lockouts, restrictions, spying, etc... that I have been trying to rip out are in Android itself. While Android One may remove the annoying "undeletable" apps like FB, etc... I still have the Google spyware itself to deal with, not to mention its UI is not much better over the "customised" versions I've used.
The only phone UI I have recently used that seemed nicely designed is the Apple one (work gave me an iphone as a work phone). The phone is a pleasure to use, as long as you use it the way they want you to.
Otherwise it is too restrictive and controlling. It feels like I am being babysat all the time. I can understand the appeal if people want a phone to "just work" and don't care about the cost or flexibility/freedom, but it is not for me.
CyanogenMod used to be my go-to OS for Android phones. Since the fork to LineageOS, I have not had the luck to get it working on any phone I buy. Even if I manage to unlock the phone's bootloader, I find that the available versions of LineageOS are either unsupported or, if supported, not "finished" and still experimental.
Rarely does a LineageOS port end up "finished" to the point where you can use all the phone features as intended, at least from what I see.
I am not sure if that is because modern phones are much harder to port LineageOS to, or because since the rename they have been short of manpower. It may just be that so many phones are released nowadays, so quickly, with such a variety of HW, that it's impossible to reverse engineer and polish up a third-party image before the phone becomes "out of date" and something new comes along.
My plan for the moment, instead of lugging my laptop around, is to see if I can get one of those Gemini PDAs and load up a proper Linux on them. With tethering to a dumbphone that has 4G + bluetooth, I should be happy for my on the go computing.
> If you're a techie, things have never been better. For less than £500, you can get a tiny battery-powered computer that's faster than the workstation your boss spent £2,000 kitting you out with just 15 years prior.
I'm not sure things are better now. Yes, my phone has better specs than my workstation 15 years ago, but it can do much less.
Most of it is taken up with bloat, or apps I don't want but can't remove, on an OS that is designed to restrict and spy on me. All blanketed in a GUI that was made by people who seem to have gone out of their way to ignore all good HCI practices and make user interaction as much of a PITA as they could manage.
I end up spending a lot of time fighting the OS to get it to do what I want, and it is a losing battle.
The HW has improved by leaps and bounds, but the software has gone in full reverse. My old Nokia N900 could do more than my current phone, despite having completely anemic specs in comparison.
A fat lot of good all this power in your pocket is, when you can't make use of it.
As for the Nokia 5310. Dual sim, long battery life, headphone jack, removable battery. It would be up my street if it wasn't for the built in Facebook app, and no high speed tethering (give me 4G + unrestricted bluetooth/USB tethering and I'm sold).
The state of smartphones has gotten so bad, that when my current Android dies, I will go back to a simple phone + tethered laptop when on the go.
> What happened to horizontal computer cases? They were much more practical than having a monolith from 2001 beside the monitor, sometimes so tall as to tower above it too, while the monitor itself is at the wrong level. Now, all that remains of them is very niche product.
I used to put my case vertically because:
(a) it was easier to move it forwards/backwards to inevitably plug something into the back, without having to shift the huge weight of the CRT at the same time.
(b) I could take the panels off to twiddle something without moving said heavy CRT off to one side, assuming I had space on the desk for the CRT and the case horizontally.
Basically, I used vertical cases back then because of the weight and bulk of the CRT. With flat panels now, that issue has gone away, however nowadays you don't need to access the back of your PC often (most of the time you just want to plug a USB device in, and most monitors have built in hubs for that now), and the PCs themselves are quite small.
I still have an old tower as my desktop, but that's because it's a lovely Apple G5 case that is nice to look at. If it wasn't for that, I would probably shove it out of sight somewhere behind my desk and never think of it again (unless something went wrong).
If you want a horizontal PC case, you can do what I did for a while and buy a 2U rackmount case (there are "short depth" ones around 55cm). I find the 1Us have small fans which are too noisy, but the 2Us fit standard "silent" fans, and take normal-size PSUs well.
Surely the GUI is up to whatever application you use to listen to your internet radio station, rather than the station itself?
I use MPD (https://www.musicpd.org/) myself, and I source my stations from http://www.radio-browser.info/ (they have a free API, if you fancy writing your own radio app).
I use RadioDroid on my phone (GPL app that uses the above DB), which I connect to in my car when I want to listen to the radio. I never bothered with DAB, and indeed at this time I see no point really in broadcast type distribution of radio a la DAB(+) etc...
Whatever choice they provide, it will never match what you can find on the internet. I can listen to radio stations all over the world, even at high bitrates (320kbit/s+) if I have the bandwidth for it. Unlike DAB, where you have a fixed channel down which $x number of stations must be shoved whether or not anyone listens to them, with internet radio I only stream the data I actually want, so I can get higher quality with lower bandwidth than a DAB transmission (even ignoring that DAB uses MP2, while internet radio uses newer/better compression such as MP3/Vorbis/Opus/AAC).
Not to mention the ease at which I can rip the streams and replay them. I have actually ripped a few days worth of radio streams (using streamripper, another GPL app), which I play when I am driving abroad (avoiding roaming internet charges).
Really, to me DAB seemed like a solution in search of a problem when it was first released, and now, 20 years later, it seems to be largely irrelevant. Pretty much everyone has a smartphone with internet access, which allows them to do much more apart from listen to the radio while on the go.
Yeah, your post reminded me of an alternative to DAB that unfortunately didn't win, which is DRM, Digital Radio Mondiale (not that kind of DRM):
Basically they send digital signals down the original AM and FM frequencies, with more modern technology to pack higher quality audio into the channel.
Perhaps in future it will see use, but I suspect before then people will just move to packet switched networks and listen to internet radio (which is what I did).
Me too! I remember the episode of Horizon from 1996 that dealt with it, "Planet hunters". I still have a digitised copy which I occasionally watch for nostalgic reasons, and it is amazing to see how far we have come.
At the time of that episode, they still had no idea how many planets were out there; most stars they investigated seemed to have none. Finding the first ones (AFAIR orbiting a pulsar) was a serious event, even though they had no hope of harbouring life.
Now it seems almost every star has some kind of exoplanet, we have so many we need machines to keep track of them, and keep finding more and more planets in existing data.
>Wordpress accounts for 80% of hacks BECAUSE it is the majority player,
I am not convinced. Having been, in a former (apparently cursed) life, a web admin for a 100% WordPress webhost, I can assure you that beneath the shiny CMS frontend, WordPress is a horrid, insecure mess.
The number of times I saw WordPress do things like "exec($random_byte_string)" or "include $dynamic_path.php" in the code is frightening. It looks like it was programmed by 1st-year undergrads as a "learning PHP" project.
The two things above, coupled with a buggy file_upload.php, most likely account for 99.9% of the hacks on WordPress.
I have seen it happen, where attackers use a page to send a random PHP byte string to the underlying exec() function (with no checks or sanitisation on WordPress's part), resulting in a compromise.
The other method I have seen them use is to exploit the file_upload.php to upload their own php file, which they then execute by including their PHP in another file with a dynamic include function.
One thing I eventually did was disable exec() in PHP, which broke WordPress of course. I then went through all of the WordPress code and rewrote the chunks that depended on exec to make it more secure.
The second thing I did was make the WordPress web folders read-only to the web server. This stopped the file_upload compromise. After that we had virtually no problems at all with security.
However, it also meant that (a) you could not install any plugins/themes once set RO, nor upload any files, and (b) some plugins/themes had to themselves be rewritten in order to work without exec.
I maintained this private branch for the company while I worked there, and there were no more compromises (but a lot of moaning from clients about why $free_plugin_X does not work on "our WordPress", and "Why can't I just upload files myself?").
The content itself was held in a mysql DB, so once a wordpress site was configured with a theme and media uploaded, the text could be changed by the end user as normal.
So a "secure WordPress" can be done, but it requires a more highly skilled developer, which is what is lacking in most of the WordPress ecosystem (especially in the "free themes/plugins" area).
Fact is, even if WP was 10% of the web market, it would still get exploited like now, because their security mistakes are so basic, your average script kiddie can compromise your site.
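The exec-on-untrusted-input bug class isn't unique to PHP. As an illustration only (in Python, with made-up action names, not actual WordPress code), here is the unsafe pattern next to the kind of whitelist dispatch a rewrite like the one described would use:

```python
# UNSAFE: the pattern behind exec($random_byte_string) --
# attacker-controlled text executed as code, with no sanitisation.
def handle_unsafe(user_input: str) -> None:
    exec(user_input)  # never do this with untrusted input

# SAFER: dispatch through a whitelist, so only known actions can run.
ALLOWED_ACTIONS = {
    "greet": lambda: "hello",
    "version": lambda: "1.0",
}

def handle_safe(action: str) -> str:
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        raise ValueError(f"unknown action: {action!r}")
    return handler()
```

The whitelist turns "execute whatever arrives" into "look up one of a handful of known operations", which is the same shape as rewriting the exec-dependent chunks out of WordPress.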
I don't know, for me, tabs make more sense as an indentation method. Especially if you want 'consistency' and 'standards'.
Using tabs means that the code always has the same indentation in the form of one tab per "indent", and it is up to the editor to display that indentation as 4 spaces, 8 spaces, or whatever the user prefers.
$x number of spaces should never be used for indentation, because it is fixed in the code. Someone may like 1 space per "indent" in order to fit as much in the width of their monitor. Somebody else might like 8 spaces per "indent" because they have a really wide monitor and prefer to see it that way.
Using tabs makes accommodating the spacing of indents client side, and therefore easy to display in the users preferred tab width. The alternative is hard coding the spaces, which means each person either has to tolerate somebody else's preferences, or has to re-indent the code before working on it (and then possibly redo the previous indentation afterwards), which is error prone and a waste of time.
I always use tabs for indenting, and depending on the system I am viewing it on, the "tab space" can be between 2 and 8 for the same code.
I admit the forced use of fixed spaces in Python is one of my long-standing irritations with the language, and the only part of the PEP 8 standard that I happily ignore in my own code. Unfortunately I can't do so when working on a big project with others, due to the hard coding of spaces, so we have to stick to "4 spaces per indent", and all be equally unhappy.
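The "render tabs at whatever width you like" behaviour is easy to demonstrate: Python's own str.expandtabs does exactly the editor-side conversion described above (a toy illustration, nothing more):

```python
# One tab-indented source line, rendered at two different tab widths,
# the way an editor lets each user pick their preferred indentation.
line = "\treturn x"

print(line.expandtabs(2))  # 2-column tabs
print(line.expandtabs(8))  # 8-column tabs

# The stored file never changes; only the displayed width does.
assert line.expandtabs(2) == "  return x"
assert line.expandtabs(8) == "        return x"
```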
You can represent ipv4 addresses as hex if you want:
e.g. 216.58.210.196 => d8.3a.d2.c4 (www.google.com)
Indeed, the operating systems I use handle that just fine. For example, my browser (Pale Moon) correctly connects to http://0xd83ad2c4/ on Linux. Other things like ping work as well:
~$ ping -c 2 0xd83ad2c4
PING 0xd83ad2c4 (216.58.210.196) 56(84) bytes of data.
64 bytes from 216.58.210.196: icmp_seq=1 ttl=54 time=17.6 ms
64 bytes from 216.58.210.196: icmp_seq=2 ttl=54 time=15.4 ms
--- 0xd83ad2c4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 15.459/16.540/17.622/1.089 ms
The same works for ipv6 in reverse: you can represent it as decimal. For example, the ipv6 address you gave can be represented in decimal like so:
2001:4860:4860::8888 => 42541956123769880606220662448000886044
As ipv6 has more bits, to keep it short and easy to remember, hex rather than decimal is used.
While I don't find "2001:4860:4860::8888" particularly easy to remember, it is easier than its decimal representation.
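Both conversions are mechanical, since an IP address is just an integer underneath. A quick sketch with Python's stdlib ipaddress module, using Google's 0xd83ad2c4 and 2001:4860:4860::8888 as the examples:

```python
import ipaddress

# IPv4 dotted-quad -> hex: 216.58.210.196 is the 32-bit int 0xd83ad2c4
v4 = ipaddress.IPv4Address("216.58.210.196")
print(f"{int(v4):08x}")  # -> d83ad2c4

# IPv6 -> decimal: the address is a 128-bit integer underneath
v6 = ipaddress.IPv6Address("2001:4860:4860::8888")
print(int(v6))  # the long, unmemorable decimal form

# ...and back again, losslessly
assert ipaddress.IPv6Address(int(v6)) == v6
```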
> The bus company were reasonably happy to accept responsibility, up until they got the £20,000(!) repair bill.
To save weight, because every ounce of energy is precious (being directly related to the range before needing a long charge), BEVs are made almost completely from aluminium.
Aluminium is a PITA to weld: you need a TIG welder, and a correspondingly higher skilled welder to use it. As a result, while pretty much every garage can weld steel, only a few specialist places can weld aluminium, and the parts + labour costs are correspondingly higher.
Especially with electric vehicles, as you have to be very careful to not ignite the battery pack while welding, and make sure it is electrically isolated/safe.
Saying that, more and more ICE cars are also moving to aluminium bodies to save weight (and improve economy), so I expect they will have similar repair costs.
Yeah, it's funny how "newer" has come to mean "worse than before".
Once upon a time newer was considered better, I used to look forward to software upgrades, because useful features would be added, bugs would be fixed and performance would be improved.
Nowadays upgrades usually mean worse performance/more bloat, more lockdown, more spying, more "monetization" of every nook and cranny they can find, more "online only" subscriptions they want me to have, and usually a different set of bugs introduced.
Indeed, for some software I go out of my way to avoid upgrading as long as possible, and I am clearly not alone, as it is a big enough problem that some vendors have started doing forced upgrades (where there is no technical reason you can't use the old version anymore; they just block you until you upgrade, because they can).
Even outside of software, newer tends to mean "more flimsy" and "more cheaply built" for the same price, or more. Sometimes when (after many years of using an item) it comes time to replace it, the same item is more expensive, yet more cheaply built, and fails faster than the old one. The only good thing is that physical manufacturers have not found a way to force me onto newer items, so there is a thriving second-hand market in them.
It was not uncommon for older British cars to have a mixture of metric and imperial fittings, especially for designs with their roots in the time before the metrication of the UK.
Case in point, the Jaguar V12 engine, which had a mixture of imperial and metric threads, as the engine was originally imperial, and with time newer bits added to it were metric.
Then some cars had imperial-sized heads on their bolts, so garages could use the tools they already had, but metric-sized threads. So unless you knew in advance (or checked each bolt), you could not assume that an imperial-headed bolt/screw was imperial-threaded as well.
> it may be a pain for people doing full sky surveys of variable objects - but the Noble committee doesn't care about a bunch of stamp collectors so meh.
Pardon my ignorance in the matter, but doesn't planet hunting come under "full sky surveys of variable objects", as they have to scour the sky looking for variations in a star's brightness/position?
That does seem to be a very interesting field of study atm.
Beyond that, the starlink satellites have not yet ruined any of my amateur observations, however there are not that many up there, so the chances of hitting one are still quite small.
I don't think the objection to starlink is to the current number, but rather the future, when there may well be 30,000 of the things ( https://spacenews.com/spacex-submits-paperwork-for-30000-more-starlink-satellites/ ). How much disruption they will cause at that point I don't know, but by the time we find out it will be too late to do anything about it.
It tends to be easier to stop/alter a project when it's just getting started, rather than after it's already established, hence why people are complaining now, while things "don't look too bad" to outsiders.
> I cannot see what python3 does that perl does not do. I cannot see what python3 does that python2 does not do. I can see it is popular, but can someone explain why it is popular? What was added over time that made a break necessary? What are its direct competitors?
In my opinion, Python became popular because it was easy to copy and paste from other people's code, which made it easier for "newbies" to program.
Case in point. Back in the turn of the millennium (2000 or so), I wanted to learn to code on Linux. I was still at school and could not afford windows compilers. I could not get my head around C, so I looked at the two main options at the time. Perl was the established player, and this upstart called "Python" had just reached version 2.0.
Logically I went with perl, as it was the most popular, and tried to cobble stuff together the only way I knew: by copy-pasting other code I found online and trying to understand how it worked. Problem is, it just would not work. I would get syntax errors, or other errors, or it would just give incorrect results. I would look for "how to do $x" online and get 20 different ways of doing it. It was overwhelming, and I eventually gave up and tried Python.
Python was different. There is "only one way to do it", which meant I could copy/paste code from different projects and it would work; I could search "how to do $x" and get one overwhelmingly "correct" answer, which worked; and once I understood what a piece of code did, I could follow what other people's code was doing when I read it.
Python is what got me deep into programming, and indeed I do believe this was one of the reasons many "educational" projects for young children seem to start with Python (or Python-like) programming languages. It is literally the "Basic" of Linux. Nowadays you can code up a python program to do what you want just by copy/pasting from stackoverflow (not that I would recommend it for anything serious, but for newbies it is useful)
Now, 20 years later, I still code in Python, but less and less scripting. The flexibility and string mangling of perl beats python hands down, while for performance I prefer C. Python sits in an interesting niche, I guess roughly where Java does, as kind of "middleware", and it also has some very good libraries for statistical analysis (the "jupyter" notebooks with numpy, pyplot and stats libraries have no equal for the price).
As for the changes between 2 and 3, the only one I was ok with was the conversion of "print" from a statement to a function, beyond that the changes either made no difference to me, or made my life harder, so meh.
They also changed how they handle arithmetic division:
Python 2:
>>> 2 / 3
0
Python 3:
>>> 2 / 3
0.6666666666666666
This caused horrible breakage in some software, because it did not throw an error, it just silently computed different results. They should have had the interpreter print something like "Integer floor division warning at $line" (toggled by an interpreter flag perhaps?), as it would have helped me isolate and fix the issues.
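For reference, the change in question under Python 3 semantics; the from __future__ line mentioned in the comments was the forward-compatibility escape hatch Python 2 offered:

```python
# Python 3: "/" is always true division, "//" is explicit floor division.
# Under Python 2, "2 / 3" between two ints silently floored to 0 instead.
print(2 / 3)   # -> 0.6666666666666666
print(2 // 3)  # -> 0

# Python 2 code could opt in to the new "/" ahead of time with
#   from __future__ import division
# which surfaced this breakage before the actual 2-to-3 port.
assert 2 / 3 != 0 and 2 // 3 == 0
```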
And then there is the string handling. Python2 was simple, and you did not have to worry about it too much. In python3 you have "bytes" and "strings"; some things expect bytes, others expect strings, and you have to translate between them. The amount of hell I have had when porting things, having to decode/encode all over the place, is a real PITA. And it does not seem to have brought any real benefit to me, just a lot of headache and more code to write for the same task as before.
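The split complained about above boils down to this: str is Unicode text, bytes is what actually hits the wire or the disk, and every boundary crossing needs an explicit encode/decode. A minimal sketch:

```python
text = "naïve"              # str: a sequence of Unicode code points
raw = text.encode("utf-8")  # bytes: the on-the-wire / on-disk form

assert raw == b"na\xc3\xafve"        # the 'ï' becomes two bytes in UTF-8
assert raw.decode("utf-8") == text   # round-trips losslessly

# Mixing the two is a hard TypeError in Python 3, rather than the
# silent mojibake Python 2 could produce:
try:
    _ = "abc" + b"def"
except TypeError:
    print("str + bytes refused, as expected")
```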
I've ported a few things across to python3, but if I am honest, some I ported to other languages, because it was easier.
Overall I am not a fan of the changes to python3, some things are good, but overall they have made simple things harder and more obscure. Going forward I expect to be doing less and less Python work.
> it clings stubbornly to the aged MicroUSB standard.
Good. MicroUSB is the one standard that has managed to persist for any period of time (short of the old barrel "Nokia charger" back in the 90s/early 2000s), so why change it?
Pretty much every single device uses microUSB to charge now, not just phones. I have so many microUSB chargers, and even more cables (as you know, microUSB is more than just a charging port), and it is so nice not to have to carry a charger/cable/adapter everywhere I go, safe in the knowledge everyone has at least one microUSB cable kicking around. Or, when going on holiday, being able to take just one or two microUSB chargers to cover all my devices. I have literally not been able to do that since the aforementioned "Nokia-everywhere" era.
Why on earth, then, would I want them to move to another standard? One that is nowhere near as ubiquitous and convenient? I suspect a desire to charge us to replace all our chargers again, with ones that have DRM to make sure you only buy the official, expensive, "branded" chargers. Yeah... no.
So, microUSB is a good point for me.
> There's a 3,000mAh battery, which charges at 5W over – as you may have guessed – MicroUSB, in addition to a 3.5mm headphone jack, Bluetooth 4.2, and 2.4GHz Wi-Fi b/g/n
Honestly, this sounds like my perfect next phone, and it has a microSD slot to boot! If I find out the battery is removable and you can put LineageOS on it, it would be perfect, and a guaranteed purchase from me. Alas, the article didn't mention a removable battery, which is a shame...
Still, amazing to think that an 8 core CPU and 2GB of RAM is considered a "poor show" for a handheld mobile device. How times have changed... *glances at old Nokia n810, single-core 400MHz CPU with 128MB RAM*
> I intended "credit rating" to be understood in both literal and allusiory meanings for charge cards and social media respectively. Hope this is clearer now.
You mean a credit rating, that is social? Where have I heard that before....
Not sure that is the path I want us to be on, but I doubt the opinions of us peasants get much attention from the powers that be. At least the Chinese are up front and honest about it.
I too have no social media, which means I must have a very low rating, but I am fine with that.
If a company rejects me due to lack of social media they can use to dig into my private life, then I dodged a bullet. I would not want to work for such companies.
Aaah yes, we used to call it "Presshot", as a corruption of its core codename ( https://en.wikipedia.org/wiki/Pentium_4#Prescott ).
The P4 was excellent at turning electricity into heat, and was the CPU that first made me consider trying AMD's offerings (which were not as performant, but more efficient per watt). Since then I have stuck with AMD on my machines.
- Their mortgage tracker does not show my mortgage application currently in progress (I called them up and they said it's a technical problem and to try again later)
- Their online complaints page doesn't recognise any UK address as a valid UK address, and even if you use the "International address" option to type in your address directly, the submit form has an error (so you can't submit any complaints)
- Emails to them (marked delivered) seem to vanish into the bowels of their system, forcing someone to go hunting around for them, if they even find them.
- If you call them, they can usually pull up needed information, but do apologise as "their system is having some problems"
- Both I and other people I know have been victims of fraud on our NatWest cards in the last 2 months. In one case, the new NatWest card came pre-defrauded (before they had even used the new card for the first time, there was a fraudulent transaction from Holland for Netflix on it). I had never been a victim of fraud until 2 months ago.
Quite a mess really. Something is going on in the bowels of that bank.
It is odd how little push-back there is about this. It is like we are going back to the bad old days of IE, when one anti-competitive behemoth would implement non-standard behaviour in its web browser, forcing others to either follow or risk bits of the web breaking for their users (and, as said behemoth had majority market share, it was too large to ignore).
Perhaps Google's idea is a good one, perhaps not, but the right way to do it (IMO) would be to try to make it a standard. If everybody else agrees it is a good idea, it will quickly be ratified and adopted; if not, then changes can be proposed until it is considered good (or unsalvageable, in which case it gets rejected).
Sure that may take longer, but getting a broad consensus is better than dictating direction (same reason we prefer democracy to dictatorships, even though things get done quicker in dictatorships).
AFAIK, after MS bought Nokia, they sold Nokia Maps, which was then rebranded as "Here Maps" ( https://wego.here.com ).
They have an app, and it works great: it even gives live traffic updates if you have internet, and it can route trips >1000km (which most other apps fail at), which is useful when I do a Euro tour. It also works offline (you can download the maps to SD card beforehand over wifi).
If you forgo the live traffic etc., it is also pretty private. History is stored locally in the app, and once you have the maps downloaded you don't need the internet/cloud at all. In fact I re-purposed my old phone as a plain GPS unit, with the app, an SD card full of maps, and no SIM. An occasional update over wifi and it's good to go.
To be honest, I am not quite sure how they make money. All the above is free, although I have been told they license their maps/technology to car manufacturers for their in-car GPS units.
It is the only app I use when I go on European tours, and I highly recommend it. I do still miss my old n900, though. I also found my collection of n810s when doing some spring cleaning, so I'm wondering what to do with them (alas, the old online deb repos for them no longer exist).
> I know you can control Linux with an MDM solution, but is it easy to push an update out to thousands of machines? Is it easy to monitor that deployment?
I have used Ansible to do that (being SSH based, all you need is a running SSH server on the target and a login, which pretty much every remote administered *nix machine has).
The biggest update push I did was to circa 130,000 Linux servers/workstations at a previous place I worked. It took about an hour, which is a lot faster than doing it manually (or even writing a one-off script to do it for us). The operation is atomic on a per-host basis, so at the end you get a summary of successes and failures.
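For a flavour of what such a push looks like, here is a minimal hypothetical playbook; the inventory group name "fleet" is made up, and you would run it with ansible-playbook -i inventory update.yml:

```yaml
# update.yml -- sketch of a fleet-wide package update for Debian-family
# hosts; all it needs on each target is SSH access and Python.
- hosts: fleet
  become: yes
  tasks:
    - name: Upgrade all packages and refresh the package cache
      apt:
        upgrade: dist
        update_cache: yes
```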
Also, outside of the Windows world, it is perfectly possible to update a machine (including the kernel, via live patching) without needing a reboot. So having a system uptime of years does not mean the box is sitting unpatched. Worst case, you have to restart some services so that the updated libraries get loaded.
> Seems they have a lot to gain and little to lose leaving it there.
They have probably found out (a) a new zero day hole, and/or (b) others have discovered this hole and are using it (possibly against NSA/allied systems).
At the point where your adversaries know about and exploit the vulnerabilities you know about (or simply defend against them), that is the time to patch them and move on to some other zero-day exploit.
The NSA also has a mandate to defend against threats, it is a balance between knowing vulnerabilities (to exploit others) and disclosing them to be fixed.
Biting the hand that feeds IT © 1998–2021