I reckon the proper term is 'institutional stupidity'
Seems to me about as well thought out as Microsoft hiding file extensions. And it'll likely work about as well.
Google is having another go at killing off the displaying of https and www in the URL bar of upcoming versions of Chrome, despite protests from users. The company had previously had a crack at nixing the scheme and subdomain last year, but rolled back the change after users expressed alarm. Back then the plan was to lose the …
I too find the hide-extensions-by-default part of Windows annoying. First thing I turn back on.
However, it's pretty daft that here in 2019 we're still using part of a file's name to indicate a file's type, on every single OS out there. It's totally bonkers. OS/2 introduced the idea of file type being a field in the file's metadata, nearly 30 years ago, and that was fab. We don't use file name extensions to indicate content type coming from web servers - MIME does that for us. So why do we persist in using file extensions on filesystems? </rant>
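For instance (purely an illustrative command, any URL will do):
curl -sI https://example.com/ | grep -i '^content-type'
The server declares the content type in a header; whatever the path does or doesn't end in is irrelevant.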
But if you bring a stick from another computer (or a network drive), it still needs the file extension to initially identify the file type before creating the hidden file. Then you get left with hidden files scattered all over the disk afterwards.
It copes with the use case of mounting the disk, detecting the file type from the extension, renaming the file extension away, and then using it, but that's hardly common.
Well that's true enough, though it only uses the .ext to take a guess at which application to open the file with by default. And THAT mechanism is a kludge to cope with Windows & DOS files. The original comment, though, was that it doesn't REQUIRE it, not that it can't cope with it or doesn't make use of it.
I generally find it's best just to leave the extension on. It's neither here nor there; we've grown up with it. Now... the semi-colon versioning format for VMS... I can live with that!
Right. My unix scripts use a shebang to indicate the actual file type. The shell has to actually open the file and look inside it to see what the file type is. This is so clearly superior that I don't know why we don't do the same thing for houses and cars.
So how does it tell the difference between a ZIP and an EPUB (they're both ZIPs internally)? It'd be like telling the difference between a Ford Contour and a Mercury Mystique (essentially the same car apart from the badges).
PS. Why do I get the feeling someone's going to eventually exploit a shebang for some nasty malware (a la Confused Deputy) in future?
No, it uses #! to tell it where to find the shell that's going to interpret this text file.
While this is true, the #! is still the magic number. It identifies the file to the exec(2) family as an "interpreter file", and the code for handling interpreter files then parses the remainder of the initial line of the file.
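A quick illustration (hypothetical script hello.sh; output omitted):
$ head -c 2 hello.sh        # an interpreter file starts with the magic number "#!"
$ head -c 4 /bin/ls | xxd   # a compiled binary carries its own magic (here, ELF: 7f 45 4c 46)
$ ./hello.sh                # exec(2) spots the "#!" and hands the file to the named interpreter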
The UNIX magic number system is a hack. It's a hack that has in practice worked quite well - better, in my opinion, than the filename-extension hack (which was also used by CP/M and MS-DOS, of course, and if memory serves VMS, though with a tighter format).
Some other OSes took other routes. IBM's venerable CMS (created at the Cambridge Scientific Center) put file-type information in a separate piece of metadata alongside the filename, rather than making it part of the filename proper. OS/360 and its successors up through z/OS put some file metadata in the catalog and some elsewhere, such as in the member directory entries in a PDS. No doubt there were other schemes.
And the older MacOSes used special case-sensitive four-character identifiers, both for the type of file and the program used to create it. It's a trade-off, really. Making the file type easy to change means files can be re-purposed more easily, but that very mechanism can also be exploited.
That's because *nix has only a very simplistic idea of file metadata, and it doesn't include the file type, so it has to resort to analysing the file's contents to get a clue. After all, when it was designed there were only executables and text files. And it has stayed stuck there, head in the sand.
The advantage of file extensions is that they are easily portable across file systems (and comms networks), being part of the file name. Any other solution is strongly tied to the file system, or at least the file structure, and that leads to portability issues. Mac and OS/2 had to add special files/folders when writing to file systems that didn't support their metadata, and those metadata could easily be lost when the file was copied without them.
It's also easier to create custom file types without complex procedures to register them somewhere - and requiring no external "registry" - but yes, it could lead to collisions.
Because different file types can have the same magic numbers. Take ePUBs, ODF documents, and so on. They're really just repurposed ZIP files so a magic number search will mistake them for ZIP even though there's more to them than that.
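A rough demonstration (hypothetical file names; assumes the usual EPUB layout, where the first ZIP entry is an uncompressed file literally named mimetype):
$ head -c 2 book.epub archive.zip    # both begin with the ZIP magic, "PK"
$ unzip -p book.epub mimetype        # but only the EPUB carries this entry
application/epub+zip
So a naive magic-number check stops at "it's a ZIP"; you have to look further inside to tell the formats apart.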
With the exception of Windows I don't think any other OS (worth mentioning) mandates the use of the file extension to indicate file type
Apparently the default filesystem for VxWorks does, and VxWorks is very widely used.
Just because it's not used on general-purpose end-user and server machines doesn't mean it's not important.
What I really hate with the Omnibox is the way it will ignore DNS and go to the search page if you enter a local server name.
Enter myserver and you get Google.com search myserver.
Enter myserver.mydomain and you get Google.com search myserver.mydomain
Enter http(s)://myserver and you have a good chance it will actually resolve to your server.
If the http(s):// is not relevant, why won't it go to a local website when you enter just the server's name? If that name doesn't resolve, by all means display a search page, but if it resolves to a local website, display that first!
That's because they aren't FQDNs. If you use myserver.mydomain.local for instance then it will work, or, more commonly, you just put a forward slash after it to tell it that it's a host.
e.g.
myserver/
I thought that was common knowledge?
You wouldn't really want it searching a potentially slow DNS to see if there is a result before running your search queries.
Hmm, the point of your post *seemed* to be that you hated that Google would search for a local host rather than just go directly to it. I pointed out that unless you enter something that is a valid FQDN (a single hostname is only valid because of the DNS suffix that is set on the PC, or added by a DNS server, to make it so), there isn't a way to know that it is supposed to be a host.
Unless, you just use a trailing forward slash which does tell it that, works perfectly and always has.
"You wouldn't really want it searching a potentially slow DNS to see if there is a result before running your search queries."
Yes, I would. But even more than that I'd want a proper UI with proper functional separation, which is one reason why I don't use Chrome.
Software written for lazy fools is rarely worth using, in my experience. Chrome is not an exception.
It is not possible to determine if that is true. You would be relying on a number of factors, including the fact that you may have DNS recursion, which pushes your request up to another DNS server; any of those could be running slow due to network congestion or server overload (remember, your DNS server would now be handling queries for every search term on your network). The timeout if there are DNS issues could result in having to wait 5 seconds for every search term you type.
So if you want to follow standards, a host on its own is unqualified, so it's only by automatic DNS suffix addition (based upon what you've asked your PC to provide) that a bare hostname works with some applications. Just add the trailing slash or use an FQDN to specify that it is a host you are trying to reach.
Curious to know which browser you are using that allows you to enter a host with no qualification and goes to DNS first for every query?
"To assuage the anguish, kindly old Google will allow power users to plug in an extension to turn off the scheme and subdomain hiding antics of the Omnibox…."
That's just what Mozilla said when they took away my tab groups from Firefox. Then, they took away the ability for extensions to do any such thing.
Not that I'm bitter or anything….
Simplified Tab Groups with Waterfox?
But there appear to be tab group add-ons for Firefox 57+ now (Simple Tab Groups and Panorama View).
Show a security indicator, but leave me my https://.
Make an extension that hides it and let the power users who care install that.
What is it with always having to change what we've been looking at for the past twenty years? If you don't like it, make an optional something and activate it for yourself, but leave us the fuck out of your issues.
It'll be even better when Yale and Bramah get red and green lock icons added to the ISO-fuckall character set and fucking fuckers start using $LOCK.www.example.co.uk where $LOCK is the green Yale lock character...
For Firefox, on Windows 10, you'd need characters that look like a green laminated padlock followed by a brown/gray shaded square and a character that looks like ://
Also, let me point out that moving the text in the address box left or right based on what icons are shown has to be another one of my least favorite things about chrome and google.
Does anyone have a reliable installation script for a firefox build environment?
The information is still there - it is just presented in a way that makes it less likely that users will misunderstand. In our testing (which has been conducted over many years), most users see "https:" and assume it is secure, safe, trusted, etc. when in fact, using the HTTPS protocol does not automatically make something safe or secure. The HTTPS protocol does not even require strong encryption (0 bit encryption is actually a possibility), and certificate checks could fail, or the website could mix secure and insecure content - there are lots of ways it could be insecure even if it has "https:" in the URL.
If you want to see the full gritty details of the connection security, it is in the security bubble (click the padlock or web icon).
Regarding "security is tied to the base domain", this refers to the cookie origin (websites can set cookies for their own domain but not others), JavaScript's same-origin policy and document.domain relaxing, and the user's own recognition of "what website am I on" (where the most important part in terms of actual ownership is the domain, not the subdomain)
"Obscuring the URL, a cynic might suggest, would be handy to conceal when, say, an ad giant is flinging Accelerated Mobile Page (AMP) versions of websites at users."
Nah, the cynic in me believes this new "feature" will be used to obfuscate the long strings of characters that Google (and Facebook) attach to shared web links to track users and the friends they share them with.
"Nah, the cynic in me believes this new "feature" wil be used to obfuscate the long strings of characters that Google (and Facebook) attach to shared web links to track users to their friends they share with."
Many other sites do that kind of thing. DDG does/did that as well - once upon a time they didn't, then they did it intermittently, then there was a period when they did it for weeks, but I've not noticed them doing it for the last few months.
Does anyone know how to stop the refresh that Google does when you search for something? It initially displays the vanilla links (so the browser would correctly show links that have already been visited), then it refreshes and shows its tracking links - which is bloody annoying, as then there is no easy way to tell which links I have already visited.
The tracking info in the links I have solved by writing a script for my clipboard manager so that when a google search results link is copied to the clipboard I have the option to sanitise it. When DDG did their extended period of link tracking I wrote something to sanitise those as well.
"Obscuring the URL, a cynic might suggest, would be handy to conceal when, say, an ad giant is flinging Accelerated Mobile Page (AMP) versions of websites at users."
Of course, knowing the exact URL means you can type it directly into the address bar, bypassing Google. And they would rather you do a Google search to find the website, even if you know the URL, so they can push those paid-for listings to the top of your search.
Yes, this is what I was thinking. It is about making people search for pages rather than typing www.whatever.com into their address bar. They would like for every time someone goes to CNN they type "cnn" into Google Search. Sadly, probably too many do that already today.
Sure, for many sites cnn.com will get you the same page, but that's not true everywhere. Sometimes cnn.com will get you a blank page or an error, and only the www will work. Google wants that, because in those cases you are much more likely to search for it - the average person won't know to add the "www" to an address that, whenever they visit it, is shown to them without the "www".
"While the Chocolate Factory is still keen on axing "m." at some point, it is "www." that is for the chop now."
"Long ago, we chose to hide 'https://' for this reason, and simply show a security indicator (secure or not)."
Stop hiding information from us!
On the other hand, there is exactly zero chance that I would use Chrome, so that doesn't affect me. I've never used Vivaldi, but knowing that they're in on this "hide stuff from me" bandwagon means that I know I don't need to consider it, either.
Re Vivaldi, It's entirely optional John. There's even a sub menu called Address Bar in settings where you can choose to hide or show the full address. It's the first thing I do when I install it (show it I mean). I've been using it since it came out of Beta and like it. I also use some of the other browsers at different times for different purposes, but not Chrome (obviously).
There is no need to put up with this. 20 years ago, Google was compellingly great. Its lustre is far diminished, and for the majority of things the less perv-y alternatives are more than adequate. It seems to be the cycle in tech, that “free and good enough” ritually displaces the establishment.
jezzzz
${protocol}://${host}.(${subdomain}).${domain}.${topleveldomain}
Where are the subeds checking these articles?
google would happily have us find everything in google and go that way - some of us don't need/want/wish/care to be tracked/tagged/tallied/accounted/advertised at/on. I've spent days trying to explain to my mother what she needs to include for urls to work and what she can toss on the pile.
Hmm, Google have been pretty much an advocate of switching everything over to https, so their cookies are recommended to be sent via https as well. That argument doesn't stack up.
It really doesn't look like Google's domain selling business is a high priority for them. I don't think they would give a monkeys whether you use a subdomain or not.
There's plenty of things to be upset with Google about, or worried about their motives. Those two don't stack up, at all.
If it wasn't for some people believing they knew better, we wouldn't have the internet and we certainly wouldn't have the URL standard in the first place.
I assume you believe we should still have ActiveX and plugins in browsers too?
Not sure I'm following that one. Even a wildcard cert would not cover www.example.com and example.com simultaneously. AFAIK, unless example.com is included as a Subject Alternative Name on the www.example.com cert, the browser should complain if example.com is served with the www.example.com cert. So, user goes to example.com -> complaint, user goes to www.example.com -> no complaint, even though the address bar says example.com. As if the SSL situation was not confusing enough already :-/
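If you want to see what a given cert actually covers (assuming openssl is to hand; the hostname is just an example), look for the DNS: entries in the output of something like:
$ echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'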
I've never understood why you have to have www.
bbc.co.uk is a fine domain. www.bbc.co.uk seems verbose to me. It makes sense if there is also, say, ftp.bbc.co.uk but there often isn't, and in any case the default is normally for www.bbc.co.uk to mean bbc.co.uk.
It does amaze me how many companies (local govt was fond of this) use www.bbc.co.uk but have no IP mapped to bbc.co.uk. So you have to include the www prefix. That's just ignorant.
"It does amaze me how many companies (local govt was fond of this) use www.bbc.co.uk but have no IP mapped to bbc.co.uk. So you have to include the www prefix. That's just ignorant."
No, it's not ignorant, it's just following established practice.
By convention, www has always pointed to the host that serves the main website for a domain. The base domain may have any number of other subdomains which are not the main website - or maybe not even be a website at all. As a convenience, some companies may also point the base domain at the website host, but that's not really how it was meant to be set up.
The more Google and others try to obfuscate the full URL, the less people like you are able to understand how the world-wide web was planned to work.
Exactly.
It's like complaining that you have to include the house name or number in the address to send a letter (remember those?).
41 Any Street, Some Town is not the same property as 51 Any Street, Some Town, and if you miss off the 41, then your letter won't be delivered to the right house. Just putting Any Street, Some Town isn't enough unless you have a very good postman who knows where everyone lives.
www is a required part of the address, to correctly identify the host IP.
"By convention, www has always pointed to the host that serves the main website for a domain."
I remember when web sites first started coming into existence, and even then prepending the "www." was idiotic. There's no need for a special domain because there's already a special port. Many efforts have been made to get websites to stop doing that -- which is why so many will now respond to both "www.example.com" and "example.com".
We do need to just get everyone to stop using "www." for this purpose, but regardless of common usage, it remains a fact that "www.example.com" and "example.com" are two different URLs that don't necessarily resolve to the same web site. Hiding the "www" is a terrible UI decision because it means that the browser is lying to you by reporting you're at one URL when you're actually using a different one. Aside from increasing confusion, this can also be leveraged to engage in attacks.
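Easy enough to demonstrate: the two names are looked up independently and can point at entirely different places, or at nothing at all.
$ dig +short example.com
$ dig +short www.example.com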
There's no need for a special domain because there's already a special port.
I'm sorry John but that's rubbish, it would only apply if you had a single IP address with everything serving from a single host, but even back in the day that was considered unwise.
For DNS, you can't direct to a specific IP address using just the port.
At a minimum, a domain is going to have at least two Nameserver records: so ns0.domain.com and ns1.domain.com, which should be on separate IPs, and ideally separate subnets, then probably a mail server, e.g. mail.domain.com. Back in the day you would also often have an FTP server, ftp.domain.com and then a web server www.domain.com.
It is then very clear that if you want to talk to the mail server, you connect to mail.domain.com; if you want the ftp server, you connect to ftp.domain.com; and if you want the website, you connect to www.domain.com. The parent nameservers know to delegate queries for the domain to ns0 or ns1.domain.com.
That's why the convention was adopted, and the reasons for it haven't changed, in fact they are more relevant today than ever.
As mentioned below, if you want to use a CDN or DDOS protection or a loadbalancer or any other enhancement by use of CNAME records, you need to be able to distinguish the web host from the base domain and all the other sub-domains.
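To put that in concrete terms, a minimal zone sketch (names and addresses entirely made up):
domain.com.        IN  NS     ns0.domain.com.
domain.com.        IN  NS     ns1.domain.com.
domain.com.        IN  MX 10  mail.domain.com.
ns0.domain.com.    IN  A      192.0.2.1
ns1.domain.com.    IN  A      198.51.100.1
mail.domain.com.   IN  A      192.0.2.10
ftp.domain.com.    IN  A      192.0.2.20
www.domain.com.    IN  A      192.0.2.30
Each service gets its own name, so any one of them can be moved to a different box without touching the rest.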
"That's why the convention was adopted, and the reasons for it haven't changed"
Yes, I understand why the convention was adopted. I just think that the reasoning for it is flawed -- it was adopted for convenience and expediency, not out of technical necessity.
"if you want to use a CDN or DDOS protection or a loadbalancer or any other enhancement by use of CNAME records"
This is a bit of a stronger argument, but the practice began before CDNs were a thing.
As well as being the convention (www serves your content to the World Wide Web), there is another reason you may not want to use the base domain.
If you decide you're going to serve your content via a CDN, and will do so by creating a CNAME out to that CDN, you're going to quickly come unstuck if you try to use the bare domain.
If a CNAME exists for a label, it must be the only record for that label, so if you do
example.com. IN CNAME endpoint.cdn.provider.com.
Then you now cannot create MX records to receive mail etc. You'd need to have them created in your provider's DNS for endpoint.cdn.provider.com.
The alternative being that you delegate your DNS out to their resolvers, which entails trusting them a fuck of a lot more than you'd need to if you simply CNAME out a single label - www.
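So the usual arrangement ends up looking something like this (the CDN endpoint name is hypothetical): only www is CNAMEd out, and the apex stays free for the MX and everything else.
example.com.       IN  A      192.0.2.30
example.com.       IN  MX 10  mail.example.com.
www.example.com.   IN  CNAME  endpoint.cdn.provider.com.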
it's also the protocol
Actually, the protocol part is https://, not www, and it is mandatory, according to the relevant Internet standards (not looking up the exact RFCs, but yes, that's how it is defined). A URL is not compliant without the protocol part.
Of course, Google's proposal will not break URLs comms-wise, since they will only hide the protocol part from the user in the interface. By the way, I don't use Chrome, but at least some smartphone browsers do that today. In all cases, if you copy/paste, the missing parts will be handled. Also, since time immemorial browsers have known to assume the protocol and subdomain, and even the TLD (.com), if the user didn't type them - long before the address bar got merged with search.
Having said that, any "simplification" of the URL shown is a horrible idea. I agree with the suspicion that browsers (certainly Chrome) will go to search rather than to an address if an incomplete URL is typed in, and this is probably a part of the motivation. Not showing it, especially to a non-technical user, is vile though. A housewife will not even pay attention to what is shown there and will not be confused or unduly inconvenienced. Everyone else will, however. Think of QA (and R&D after QA report bugs). They will have to play with all conceivable combinations of what is typed and what is shown to verify that everything works (the same way, in all browsers). Or Support. Among imaginable use cases that will be borked is a Hell Desk operator asking "What website does not work for you? Could you please tell me exactly what is written there?" The possibilities are endless.
URL = Uniform Resource Locator
the whole point is it's one form wherever it's used, so that systems are interoperable
Someone should print out RFC 1738 and all those that reference it, tie it neatly in a bundle and drop it on whoever had this mad idea from a great height
I mean, the type of user who notices they get different content from example.com and www.example.com is the type who would know why. By contrast most non-techy users won't know what's going on even if they see the full URL.
Is the plan to remove ALL the descriptors, or only the default ones, www & m?
Notice how Chrome hides the Bookmark manager from casual inspection. The bookmarks bar is also hidden by default. As others have said, you are supposed to ask Google every time you want a site - if you typed it in, you obviously want it badly enough to know its name, instead of just blindly clicking on something. Plus, you might click on $site's sponsored link instead!