Re: Forty Two
67 publicly visible posts • joined 13 Jun 2007
When a human learns, they will generally be guided by conversations with teachers, parents, and peers. This helps to create a model of what is good or truthful and what isn't.
The quality of the result depends on the quality of these extra inputs.
In the absence of any sort of guidance and mentoring, you could wind up with anything.
I suspect this has been around for a while and isn't (much of) a rush job.
After the announcement a week ago, a company has put their hand up and said "we've already got one".
Odds are the thing is a variant of one of those systems you mention but the real question is how far have they moved already.
Device drivers exist for Linux based on Google's work with Android, so most people would start further up the stack when inventing anything. Binary compatibility is also worth a lot, which encourages a lot more copying.
There's still a lot of room to improve things without changing everything at once.
"..showing users what application they're signing into and the location of the device, based on its IP address, that is being used for signing in"
I think this can be counterproductive. I'm often asked to verify that I'm logging in from Australia (a big place, so not that helpful), or from Melbourne or Sydney (300km and 800km away respectively). Location by IP address is very hit and miss in Australia.
I understand this and can ignore the silly messages. I only verify when I'm sitting next to the computer and am the one causing the alert.
Most computer users are at least a little less IT literate than me. Telling them they are being attacked by somebody 800km away will often not end well. In the end-user mind, telling somebody to ignore some details in a message is the same as telling them to just confirm every message. The topic of this article.
A further problem occurs when some installed software connects to the mothership at system start up, causing verification messages when the user isn't expecting them.
I like the idea of entering a code from the SMS message to complete the loop properly, even if I sometimes have to ring the guy who was previously in my job to get the verification code. His yacht is normally in range so this isn't much of a problem.
I don't think good reliable authentication is anywhere near a solved problem yet.
It looks like IBM mainframe disks are back in fashion (maybe they never went out of fashion; I haven't used them for 20 years or so).
Does this mean ISAM is new again and we'll all be using PDS's to store our source code and executables again?
I'm looking forward to someone studying how to store data records on small cardboard rectangles using holes punched through the cardboard.
The MagSafe connector was very nice and I miss it greatly. The USB-C connector on my current machine seems like a step backwards, but it's much better than the much older barrel connectors and it does allow me to plug power into any of the USB-C ports.
Possibly the biggest problem with the MagSafe connector was that it was Apple-only, which made third-party power supplies impossible to get. Without competition, Apple didn't need to innovate, so there was never any development of airline or car power adapters.
The best news is consigning the touch bar to hell. It is only useful if you don't know which key you want to press and can deal with looking at the keyboard. For touch typists using the machine a lot it's the definition of crap. It's worse than a waste of space, it actually slows your workflow and promotes mistakes. The lack of any feedback when touching it means you also need to watch your hands lest you accidentally hit an unintended key.
My current MacBook Pro also turns out to have the last of the stupid butterfly keyboards, so this is all a bit of a raw nerve for me. My previous machine was a 2012 MacBook Pro which finally died in early 2020. I normally get a long life out of my MacBooks, but if they can get the keyboard working properly without a touch bar then I may very well trade up quickly.
A MagSafe connector would be a nice touch. Maybe they should allow powering via either MagSafe or USB-C for some flexibility - nah, sorry, forgot we were talking about Apple for a second there.
Here was me thinking that dropping support for PPDs was Apple's way of telling the open source people to get stuffed. PPDs are very hackable. They are just text configuration files to tie the bits together for any printer.
The alternative is getting binary blob drivers from printer manufacturers that agree with your choice of OS and processor. That works well enough if you are Apple or Microsoft but stuffs the rest of us up.
Basing everything on all printers supporting IPP or some variation thereof is of course just a lie told to people who don't understand the industry.
LPRng? lpd and a couple of filters for me.
Going into a black hole isn't a problem. It's the getting out that requires faster than light speed for an escape velocity.
This is sort of the principle of black hole event horizons.
As I understand it, black holes gain mass whenever things fall in. They (very) slowly lose mass due to Hawking radiation.
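To put a number on the "escape velocity exceeds light speed" point: setting the Newtonian escape velocity equal to c recovers the Schwarzschild radius, the size of the event horizon (a back-of-the-envelope derivation; the full result needs general relativity but happens to agree):

```latex
% Escape velocity from a mass M at radius r:
\[
v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}}
\]
% Setting v_esc = c and solving for r gives the Schwarzschild radius:
\[
v_{\mathrm{esc}} = c \;\Rightarrow\; r_s = \frac{2GM}{c^2}
\]
```

Anything inside r_s would need faster-than-light speed to get out, which is exactly the event horizon idea above.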
Actually, in this case they do add the extra bloat of HTTP to the process. This isn't just DNS over TLS on its own port; this is DNS over HTTP over TLS using port 443.
That is, it's every bit as bad a design as you could hope for (actually not quite, they could cook it all up as a SOAP transaction to get some real bloat going).
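To make the bloat concrete, here's a rough sketch. The DNS wire format below is real; the HTTP framing is an illustrative minimum (DoH actually uses HTTP/2, and `doh.example` is a made-up host), but it shows how many bytes of envelope get wrapped around a tiny query:

```python
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query packet (qtype 1 = A record)."""
    # Header: ID=0, flags=0x0100 (recursion desired), 1 question, 0 answers
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # qclass 1 = IN

query = dns_query("example.com")

# A bare-minimum HTTP/1.1 envelope for the same payload (illustrative only)
http_overhead = (
    b"POST /dns-query HTTP/1.1\r\n"
    b"Host: doh.example\r\n"
    b"Content-Type: application/dns-message\r\n"
    b"Content-Length: " + str(len(query)).encode() + b"\r\n\r\n"
)
print(len(query), len(http_overhead))
```

The raw query is 29 bytes; even this stripped-down HTTP envelope is several times larger before TLS is even considered.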
Ext4 on removable devices is just as stupid as NTFS (or UFS). Both are designed for permanently mounted storage on a single device, and both contain security features that are just plain wrong for transport between devices (for a removable device, the best indicator of permission to read or modify a file is physical access, with encryption for actual content security). There is little point in having UIDs or GIDs and associated permissions or ACLs when the mapping to real users or groups is a very individual system thing.
Furthermore, the standard for Ext4 is simply 'whatever is currently in the Linux kernel', and in true Linux style that is a moveable feast (exactly the same is true for NTFS, of course).
Neither filesystem takes very kindly to being physically ejected without being unmounted first, which I would have thought is a fairly basic practical requirement for any portable filesystem.
FAT used to be the right sort of answer in the past but has problems with modern media and file sizes.
What we really need is something purpose designed for transporting data between different systems on modern media. This is what ExFAT was designed for.
The problem has been that Microsoft's licensing ($$ for each use) has made it impossible to use in open source, since there is no easy concept of 'number of uses' even if some organisation were happy to pay the money. Not being viable in open source effectively means a filesystem is dead in this day and age, so this is effectively Microsoft trying to save its life.
Now if only someone would fix the performance of ExFAT on macOS with files in the TB size range.
Voice over TCP/IP will always remain a hack that I believe even Skype only uses as a last resort. However, voice over UDP/IP using RTP is extremely common and is becoming the standard means for fixed-line phone calls around the world.
Support on actual fixed phones has been very slow taking off with most people going through a local gateway (Analog Telephone Adapter). Support from mobile devices is also a little hit and miss, generally using a customised app from your carrier of choice. Free VOIP apps tend to be difficult to configure and lacking in useful features although the situation is improving.
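For the curious, the fixed part of an RTP header (RFC 3550) is only 12 bytes on top of UDP. A minimal sketch of packing one in Python:

```python
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int, payload_type: int = 0) -> bytes:
    """Pack the minimal 12-byte RTP fixed header (RFC 3550).

    First byte 0x80: version 2, no padding, no extension, zero CSRCs.
    Second byte: marker bit 0 plus the 7-bit payload type.
    """
    return struct.pack(">BBHII", 0x80, payload_type & 0x7F, seq, timestamp, ssrc)

hdr = rtp_header(seq=1, timestamp=160, ssrc=0x1234)
```

Each packet carries its own sequence number and timestamp, so the receiver can conceal a lost packet and keep playing rather than stall waiting for a TCP retransmission, which is exactly why real-time voice rides on UDP.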
As for roaming between WiFi networks, there are a lot more problems to solve than the initial captive portal (a legal necessity, as you seemed to partly acknowledge). The change in IP address would be a much more difficult problem to solve.
Even within a single organisation, with handoff between WiFi cells on the same SSID (and therefore no new captive portal or IP address), dropping audio for a fraction of a second is still a problem that WiFi vendors are solving with incredible proprietary hacks. Add to that the fact that most WiFi devices resist handoff for as long as possible, degrading performance unnecessarily, and you shouldn't look to WiFi in the near future to make this work.
Of course what we really need for IoT devices is for them not to need to talk to the mothership at Google, Amazon, Apple, or their Chinese equivalents.
I bought an IoT power switch recently that would only work through the prescribed app. That app sent my requests off to a server in China which was also connected to the device. Coincidentally, the app also insisted on knowing my GPS coordinates from the phone. This means that there exists, somewhere in the Chinese part of the cloud, a database of devices, exactly where they are located in the world, and the means to turn them off and on. Very scary. I wanted to name my switch 'nuclear reactor purge' but my wife wouldn't let me!
The problem is that most people don't have any sort of infrastructure at home that could happily manage this sort of thing in a well-protected way (Register readers excepted!). The easy answer for lazy manufacturers wanting to get a product to market is to run a central server somewhere to manage things for everyone. It also allows them to think of ways to monetise all their connected customers sometime in the future.
The proper answer is for someone to build a suitably simple piece of hardware kit that everyone can have in their home that can manage their own devices without recourse to servers in some undefined part of the world. It would have to be based on open standards so there would be multiple compatible implementations from different vendors using different chipsets. Builders of IoT would need to support the same standards.
Wishful thinking I know. Standardised protocols are only the beginning of a very long path to enlightened happiness.
It's easy to think of this as vulnerability but I don't think that's the point.
What it means is that a bad guy™ can use the feature on their own equipment to investigate and develop new speculation attacks in the comfort of their own home. When the attack has been properly developed, it can presumably be set loose on their targets without any further need for the debugging help this 'feature' has given them.
The only system that the attacker needs root access to is the one sitting on the desk in front of them.
Just tested this on a very up-to-date FreeBSD install.
"The '-logfile' option cannot be used with elevated privileges."
% Xorg -version
X.Org X Server 1.18.4
Release Date: 2016-07-19
X Protocol Version 11, Revision 0
Build Operating System: FreeBSD 12.0-ALPHA8 arm64
Build Date: 07 October 2018 07:35:55AM
It pays to not be at the bleeding edge I guess. (The Xorg executable is setuid but obviously at this back-level version there are sufficient checks for dangerous options.)
It seems to me that this is the sort of security that should have been baked into a product like this in the first place. All updates delivered personally by a verifiable representative of the company. The only extension might be a visual comparison of a locally produced secure hash with one published on the web, to guard against rogue/compromised company reps (a visual check because the device doing the updating shouldn't be capable of connecting to the net).
Sometimes the internet isn't the right answer. This is one of those times.
Yup. I've got a bunch of them around the place.
The only one I have actually directly connected to the internet is regularly updated and has pretty minimal functionality enabled. The others are blocked by firewalls except when I'm updating them.
That being said, they are very nice flexible cheap little boxes.
You do realise that setting up normal IPv6 addressing is actually easier than DHCP. DHCP is the hard way, which we get to leave behind with IPv6 except for the really unusual corner cases.
The router advertises the network prefix regularly on the wire (or when asked). The device picks a unique address on the local network (64 bits to play with and usually based on the MAC address) and away it goes. Easy. All your modern devices do this already. Windows has been doing it since XP but your router wasn't smart enough.
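The "usually based on the MAC address" part is the classic EUI-64 scheme (modern stacks often prefer random privacy addresses instead). A sketch of deriving the 64-bit interface identifier from a MAC, assuming the traditional SLAAC rules:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 interface identifier used by classic SLAAC:
    insert ff:fe in the middle of the 48-bit MAC and flip the
    universal/local bit (bit 0x02) of the first byte."""
    b = bytes(int(octet, 16) for octet in mac.split(":"))
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    # Format as four 16-bit groups, IPv6 style (no leading zeros)
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

# Combined with the advertised /64 prefix, this gives the full address.
print(eui64_interface_id("00:1b:44:11:3a:b7"))  # → 21b:44ff:fe11:3ab7
```

No server, no lease database: the router just announces the prefix and the device does the rest.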
The only exception might be a few really stupid IoT devices that have been developed by a work experience student and shouldn't be allowed on a network anyway.
Get over it and move on.
The only secure way to communicate via email is with end-to-end encryption using something like PGP.
The fundamental problem with STARTTLS is that if the certificate on the other end fails for some reason, the sender can 1) use it anyway, 2) downgrade to non-encrypted, or 3) bounce the email back to the sender. Number 3 is pretty unfriendly for the average user to work around. Number 2 is just stupid, because the connection may be legitimate but with a self-signed certificate (or expired, wrong name, whatever), and the encryption would still defeat anyone listening to the connection. Alternative number 1 wins by default.
DANE is good (provided you can use DNSSEC to authenticate it) but support is crap. Also, because of the multi-hop nature of email, it still only protects an individual hop (although that is probably enough for uncomplicated email these days). Fake headers could be added by anyone along the way, claiming encryption when it wasn't used (why you would is beyond me; if you can fiddle the headers then you already have access to the content).
However, all this is solved if your email client encrypts the message in a way that it can't be decrypted until the destination email client decrypts it. The worst an adversary can do is stop the email from being delivered, and that already happens regularly with overzealous spam checkers, so it's an inherent problem with email anyway.
Security works by having multiple layers. It protects you against accidents and malicious attacks.
Subroutines should check their arguments. You can call this a security thing, or you could call it just being careful about bugs in other code. Personally I don't care which, but it's good coding practice. It's possible to just say 'fix all the buggy software and then you'll never need to validate arguments', but I've never heard a competent programmer advocate that. Call it security in depth.
Now I will admit that you have a problem when it's the kernel checking its own behaviour, because things can get ugly when it shoots itself. These things need to be well thought out and tested. That doesn't mean the checks shouldn't be there.
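A trivial sketch of the kind of argument checking I mean (the function and its names are made up for illustration):

```python
def withdraw(balance: int, amount: int) -> int:
    """Debit an account. The caller 'should' never pass bad values,
    but checking anyway turns someone else's bug into a clean error
    instead of silent corruption -- security in depth."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount
```

The happy path costs two comparisons; the unhappy path fails loudly at the boundary rather than mysteriously three modules later.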
It's lucky most projects don't have project managers like Linus Torvalds. This sort of behaviour is not how you get the best out of people. It is bullying behaviour that shouldn't be tolerated anywhere in this day and age.
I thought underscores were illegal in DNS host names. I know Microsoft had other ideas in the distant past, but now even they frown on them. Why the hell are Netflix using them?
Oh, and to echo everyone else: why is an init process doing DNS resolving? An init process should start things and possibly stop and/or monitor them. The tool to do DNS resolving is a DNS resolver. I would be very upset if my DNS (unbound and bind depending on system) resolver started starting processes. The reverse also applies. FFS.
The key here isn't whether SHA-1 should have been used in git in the first place.
Good practice in designing security software acknowledges that all of these algorithms become obsolete after some time, so you need to design in a framework that lets you easily migrate to a future algorithm when the need arises. Baking SHA-1 into the design is a mistake if it is then too difficult to change.
Other than that, there is no particular reason to be worried about SHA-1. It's just another warning shot: don't use it in new products, and start looking at how to turn it off in existing software. That should be simple with well-designed software.
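A sketch of the agility I mean: tag every stored hash with the algorithm that produced it, so a later migration is a data change rather than a redesign (the function and ID scheme here are mine for illustration, not git's actual object format):

```python
import hashlib

def object_id(data: bytes, algo: str = "sha1") -> str:
    """Prefix the digest with its algorithm name so old and new
    IDs can coexist during a migration to, say, sha256."""
    return f"{algo}:{hashlib.new(algo, data).hexdigest()}"

old_id = object_id(b"hello")                  # legacy sha1-based ID
new_id = object_id(b"hello", algo="sha256")   # future-proofed ID
```

Because the algorithm is named in the identifier, code can verify either form and rewrite old IDs in the background, instead of having one digest length baked into every data structure.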
I think that Linus thought the GPL was just like BSD. He now seems to defend the rights of business to use Linux any way they want, without interference from lawyers. That's the BSD model that he probably saw earlier in life.
Mind you, there are probably ways that you could move Linux to a BSD license if they really wanted to but why bother. There are plenty of good operating systems out there with a BSD license on them already.
If Linus really believed in the GPL (perpetually free software) he wouldn't be keeping the whole shooting match licensed under the very outdated, full-of-holes GPLv2. The GPLv3 does a much better job in the 21st century and other projects have easily migrated to it. Blame the contributors perhaps (contributions under GPLv2, blah, blah, blah)? No, I think that's just a convenient scapegoat for keeping it all as BSD-like as he can get it.
Remember that it was Linus (I presume) who dropped the 'or any later version' wording from the licensing on Linux and created the whole licence mess that people are now fighting over. I can't help but think that if he had talked to some good lawyers way back then, the world would be much simpler now.
I would have expected a Certification Authority to behave ethically as part of its business model.
For the CEO to claim that they were just operating within the law and that this is the cut-and-thrust of business shows that they have confused the two concepts of law and ethics. What they are doing may well be legal (I am not a lawyer, etc) but stealing a name from a non-profit is in absolutely no way ethical.
The list of trusted root authorities in our browsers represent the companies that we trust to a very high standard to make our decisions on the authenticity and legitimacy of domains on the Internet. I expect them to do this both within the bounds of law and with a very high degree of ethics.
A legitimate approach to this would be to remove Comodo from everyone's list of trusted certificate authorities since they clearly are not living up to the high standards demanded of them.
They would then go out of business, because internet sites could no longer choose to use their now-untrusted certificates.
This is business, Comodo. Sorry to see you go. Don't slam the door.
It could just be my old, faulty memory, but I thought Mac OS was the predecessor of OS X. That would put the last release (Mac OS 9) somewhere around the turn of the century. I'm too lazy to look it up exactly, but that would mean it was all obsolete about 10 years ago.
I don't know anyone still running a pre-OS X Mac. I have one (a Mac Plus running System 7) but I certainly don't fire it up and do work with it. It still works though.
Wow. I'd be very happy to have either copper or fibre.
My NBN future (guessing at least the next 10 years) will be wireless delivery. I'm really looking forward to that like a good toothache! Of course at the moment I'm stuck on ADSL 1 unless I switch over to BigPong so maybe I shouldn't complain too much. Friends who have ADSL2 in the region tell me that they are going to be moved off that to wireless in the long term.
A contact doing NBN installs suggests that they are really not very interested in anything other than wireless because it avoids playing in pits.
I'm not sure where they would be bothering to install this stuff. It might just be Malcolm Turnbull's place.
"As an example of how TCP congestion control can get in the way of network performance, the paper cites a broadcast of two packets to multiple receivers:"
I think I see a problem here... (hint for non-network people: TCP is very strictly point-to-point not broadcast).
In fairness, I couldn't find the word broadcast in the original paper, only in the story.
He's right. Eventually the iPad will be marginalised.
Something else will be the next big thing and by then Microsoft might have a competitive tablet OS and no one will care.
If Microsoft wants to survive they need to work out what the next big market will be and start working towards that. They also need to shake the belief that the answer to everything is Windows. It may be that no one will want to buy Windows for Underpants.
The iPad really is crap in an enterprise environment, and there may be a few bucks to be made building something better for that market. Unfortunately there won't be big money in it, just a few crumbs for the companies still hanging around in that space.