"Then I don't want to run JS."
"I've got safer sites to surf."
Google's Accelerated Mobile Pages technology, known as AMP among web publishers, took a beating this week as an antitrust lawsuit filed by the Attorney General of Texas charged that the ad biz used AMP to hinder competition. And on Friday, Terence Eden, a member of the AMP Advisory Committee, which was formed two years ago in …
I design web applications for a living and I'll tell you right now, the trend toward client-side scripting isn't going away. The reason it exists is that users constantly demand more responsive sites with faster UIs, and the only way to do that is to bring more of the UI processing client-side.
Most companies don't have the budget to build an entire second path for users who don't want client-side scripts, and the number of people who care about this is so small that I have yet to hear a single client even raise the idea that someone might not want to run client-side scripting.
I'm not making a value judgement here, just giving the reasoning as to why we are where we are.
I use JS as little as possible, but with Google pushing "Core Web Vitals" down your throat you have to use some form of JS on a cached website.
Example - A few WordPress websites I run are cached so they load much quicker, but that leaves timestamps and other important information static; JS is then used to pull that data client-side with AJAX.
The speed difference between cached and non-cached pages is negligible to visiting users, but for "Core Web Vitals" it's the difference between your page getting seen or not.
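The cached-page pattern described above — serve static HTML, then fill in live timestamps client-side — can be sketched roughly like this. The endpoint path, element class, and data attribute are all made up for illustration; the `timeAgo` helper is a generic relative-time formatter, not WordPress code:

```javascript
// Pure helper: turn a millisecond timestamp into a rough "n units ago" string.
function timeAgo(thenMs, nowMs) {
  const seconds = Math.max(0, Math.floor((nowMs - thenMs) / 1000));
  const units = [
    ["day", 86400],
    ["hour", 3600],
    ["minute", 60],
    ["second", 1],
  ];
  for (const [name, size] of units) {
    const count = Math.floor(seconds / size);
    if (count >= 1) return `${count} ${name}${count > 1 ? "s" : ""} ago`;
  }
  return "just now";
}

// Browser-only part: the cached HTML ships placeholders such as
// <span class="js-timestamp" data-post-id="123"></span>, and a hypothetical
// endpoint returns a map of post IDs to millisecond timestamps.
if (typeof document !== "undefined") {
  fetch("/wp-json/example/v1/timestamps") // assumed endpoint, not a real WP route
    .then((res) => res.json())
    .then((stamps) => {
      for (const el of document.querySelectorAll(".js-timestamp")) {
        const ts = stamps[el.dataset.postId];
        if (ts) el.textContent = timeAgo(ts, Date.now());
      }
    });
}
```

The split matters for caching: the HTML stays byte-identical (so the cache and CDN can keep serving it), and only the small JSON payload varies per request.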
"more responsive sites with faster UIs"
The vast majority of sites neither have nor need a "UI", unless you class the page itself as a UI. Most sites don't need JS just to deliver and display the basic data. That's what HTML and CSS are for. I can't think of a single reason why a site should render as a blank page if the user has JS disabled.
> The vast majority of sites neither have nor need a "UI"
I agree with the generality of your sentiment, but any hypertext system requires a user interface. Yes, good, attractive user interfaces can be implemented without a scripting requirement in cases where content is by and large passively consumed (blogs, news, video sites, etc.), but it is still a user interface.
Don’t blame the users; they are not the ones driving web page development. They did not demand that everything including the kitchen sink should run in a browser — that’s down to the IT industry. For example, users were not demanding word processors that could run from a browser when they had a perfectly good operating system for them to run in. It was Google and Microsoft that did that, for their own benefit: it means we are connected to the internet, and to them, even when using applications that normally have no need of the internet.
When things do need a remote connection, like git (I use GitLab), they provide a web interface that’s as confusing as fuck, but hey, it looks passable. Why? They could have written a proper application that runs in the OS, using the standard familiar controls and the GUI design guidelines for that OS. It can all be done without a browser; I do as much as I can — fetch, pull, branch etc. — before I go to the web interface. They do it in a web browser for their benefit, so they don’t have to build applications for multiple OSes.
I mentioned GUI design guidelines; well, when it comes to web design there are none. Web designers are a law unto themselves. Every site is different in what the controls look like and how you navigate around it. There is no consistency: every new web site you go to is a new adventure trying to find where things are. Some sites like to keep that adventure going by having parts of their site look and navigate differently from each other.
Computers have got more powerful, so they should render pages faster than in the past; internet connections have got faster, so pages should download quicker. But web designers seem to think they are game designers and must use up every resource they can, so any performance increase is gobbled up. Why? Because it seems that web design is all about standing out. Your site must be the flashiest, most eye-catching, and have more bells and whistles than the others. Usability and performance are not important and can be sacrificed to the god of “who gives a fuck, it looks awesome”.
"For example users were not demanding word-processors that could run from a browser ..."
If you are using Gitlab (or even Github) then you are not a typical user. A typical user might not even be aware of the borderline between an OS and a browser, or between local storage and cloud storage. That type of user benefits from cloud word processors/storage - they can purchase very cheap computers or notebooks, break them, or get them stolen, and still not lose everything. The alternative for them was Windows.
> The reason it exists is that users constantly demand more responsive sites with faster UIs, the only way to do that is to bring more of the UI processing client-side.
1. Also because doing the processing client-side saves server resources¹ and (when done right!) increases security by not sending information across that doesn't need to be sent or by protecting information that does need to be sent (e.g., Germany's virus tracking application encrypts user data client-side to the recipient's public key, so data remains encrypted at rest).
If you are at say OnlineEquationSolver.com you understand that it'll probably need client-side code and that's obviously OK, but if you're at CuteCatPics.org and get a blank page, well that's not very impressive.
¹ Some dilettante is bound to say "ah! but it uses *my* resources!" Yes it does, because it's overall more efficient to have the audience contributing a bit of their idle power than to have the servers having to provision extra resources to cope with it—that scales really badly. Chances are that your computer already has more than enough power to do the job without breaking a sweat *and* you're not paying for the content or service being provided anyway. You always have the option of navigating away.
“Some dilettante is bound to say ‘ah! but it uses *my* resources!’…” It does seem that you have a valid point, but let’s see.
Today many web sites are for business; we go to those sites to do business with the company behind them. So I could argue: why should I pay with my resources when I am there as their customer? For another big chunk of web sites, the site itself is the business, making its money from ads; visiting those web sites is what generates their revenue. I would think it is a very small percentage of web sites that are there just to offer visitors a free service, at their own cost, out of the goodness of their hearts.
Now you could point out that most of us here are running ad blockers; I do on my personal PC but not on my work PC. But most internet users don’t, and it seems to me that even though a few like us have blockers, the ad business is doing very well, judging by Google’s revenues and profits. So ads are still being shown and money is still flowing to web sites. Even if I don’t see ads at home, I am still paying, because the cost of ads is included in the price of goods I buy. In fact, if I never used the internet again after today, I would still be paying towards web sites that I will now never visit.
Now you may argue there were ads before the internet, so that ad cost was already there and is now just split across one more medium. But I wonder: has the percentage that advertising adds to the cost of goods remained the same, or is it now higher? It would be interesting to compare the average percentage advertising adds to the cost of goods today with the equivalent percentage in the 1980s. I have a sneaking suspicion it is greater today.
So, the way I see it is, ah! but it uses *my* resources! Yes, they are my resources so keep your bloody hands off them and pay your own costs you freeloaders.
Your argument assumes a relationship between cost and revenue that it never establishes.
Just because you visit the site and generate, say, 1c of ad revenue does not mean you have paid for all the costs associated with a usable implementation (development, user expectations, and per-visit costs).
You also seem to assume that cost per user stays constant as the number of users grows. Keeping it constant (or extending the linear region to cover the working range of user counts) can itself necessitate splitting the load between client and server side.
The first time I saw an AMP page on my mobile (via Firefox) I looked for an add-on to make sure I didn't have to see another one. The Redirect AMP to HTML one, from memory, works (at least on my phone) by breaking the AMP page: it throws up an error message and I can then adjust the URL to point to the non-AMP version.
"is a minute not spent browsing the web and seeing Google's adverts"
is a statement that really sums up the real problem here: that Google has too much control of the platform, search, and ad markets. Not a phrase that will play well in court, though. They need to claw Chrome development away from Google/Alphabet, along with Android. They also need to be handcuffed from pushing web standards in ways that benefit themselves or block rivals. Right now it's impossible to break into those markets, because Google is dumping a "free" product fuelled by their monopoly control of search and advertising. They are overdue for some tough love from the antitrust regulators.
It's not allowed on my PC
Like the initial posting AC, my default is JS disabled unless I decide to turn it on, which varies for different websites.
- I looked at an AMP page's source (one that would not render without JS, obviously) and it was a huge amount of script and markup; the actual "content" text was a tiny proportion. Not exactly efficient loading in my view (even if I had JS enabled, which I didn't).
I use a simple rule: if I visit a "new" site (so I have no site-specific whitelist/blacklist entry, or whatever the PC name is now) and, with JS off by default, no useful content is rendered, then I leave. I expect JS to add value; it should not be needed to serve the basic content, and there are plenty more news sources out there.
Possibly what the article mentions about intentional and artificial delays introduced by Google. Either this article or one of the links explicitly mentions that AMP does not, in fact, result in faster-loading pages, which was Google's excuse for "selling" it to site owners.
You don't need AMP, just decent, well-written HTML and CSS.
Also, get rid of crappy "mobile" web pages. They are evil.
Both points are dealt with well by El Reg. (Though I hate that stupid bit of "bling" that munges post dates. I recently wanted to check which of two posts was posted first; however, the date on both was "14 hours ago".)
Exactly – it was always possible to create fast web pages, even nice looking pages, before AMP came along.
But the trend in web development has been towards bloated frameworks, which means developers don't really understand what's going on in the background — often loading tons of stuff that will never be used. Plugin culture is here: faster development perhaps, but slower, bloated pages for users.
AMP is faster because it trims out much of this cruft, but I feel that was always just a useful cover for Google's real goals.
"All we can do now is learn from the process"
And what have we learned? That you do not allow Google to be in charge of the Web and its standards.
Any body that purports to create and maintain something destined to be a standard should have a multi-stakeholder Board and should take input not from companies but from experts and public opinion.
In a transparent and public manner, while publishing the minutes of its meetings, and ensuring that what it says is being done is actually being done.
In other words, do the opposite of ICANN and you can't go wrong.
> […] the company went so far as to hinder non-AMP ads "by giving them artificial one second delays" to convince publishers not to use header bidding.
That's the sort of incompatibility by design that Microsoft used throughout the 90s with DOS and early 00s with Infernal Exploder.
Naah it's more like Standard Oil all over again.
Mind you, Bezos isn't much better. He wants you to pay him to warehouse your stock and then decide if he wants to sell it. Pretty much what Google is doing to your content with AMP.
Google decided to become evil, and it indicated this decision to us by tweaking the letter ‘e’ in their logo and making it slanted.
Of course I already know what you’re thinking: ‘this is ridiculous, what an immature and infantile thing to do.’
I should point out that we invaded Iraq in March 2003, when Mars was at its closest. Sounds like a ridiculous attribution, I know, and you or I or others might think this would be an insane strategy for success, but trust me: power goes to people’s heads and they detach from reality.
Google are evil, and they became evil around the time Alphabet was FORCED into existence. If you don’t know what the hell I am on about, I’d suggest you open them eyes occasionally.
I've challenged myself to implement 3 types of useful elements with CSS only, for perl1liner.sourceforge.io documentation.
The blue sidelines give tooltips on where you are (breadcrumbs) and allow deep bookmarking, while the hamburger menu and play button on the code examples use :hover for active functionality. I could even make the humorous post-its wiggle, but haven't implemented that yet.
There's just one regret: the menu doesn't pop back after clicking. You have to push the mouse out again :-(. I wish there was also :hover-unclicked, i.e. active when hovered but inactive again as soon as you click inside. If anybody knows a CSS solution, happy to hear it!
Though that's not the main audience, it's even OK-ish on smartphones without extra effort. Mobile Firefox and Chrome emulate :hover on tap, so even that works. Safari's not so smart; tough luck.
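On the "menu doesn't pop back after clicking" regret: one common JS-free workaround (a sketch, not taken from the site above) is the so-called checkbox hack — drive the menu off :checked instead of :hover, so a second click on the hamburger closes it again. Class names and IDs here are made up for illustration:

```css
/* Sketch of the "checkbox hack": a hidden checkbox toggles the menu,
   so clicking the hamburger a second time closes it — no JS needed. */
#menu-toggle {
  display: none;              /* the checkbox itself stays invisible */
}
.hamburger {
  cursor: pointer;            /* a <label for="menu-toggle"> styled as the icon */
}
.menu {
  display: none;              /* closed by default */
}
#menu-toggle:checked ~ .menu {
  display: block;             /* open while the checkbox is ticked */
}
```

The assumed markup order matters for the `~` sibling selector: `<input type="checkbox" id="menu-toggle">`, then `<label for="menu-toggle" class="hamburger">`, then `<nav class="menu">`, all sharing a parent.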
Biting the hand that feeds IT © 1998–2021