The simple fact that the default file browsers of most Linux desktops offer tabs already answers the title question for me :D
Here is an interesting idea to make traffic better in general:
How about we
* reduce the number of streets and parking lots in our cities by, say, 70%
* use all that free space to build rails for public transport systems
* as well as nice ways for pedestrians and bicycles to get around
* preferably shaded by some trees and flanked by some bushes and other green stuff (ya know, because it filters the air, binds a bit of CO2, cools down the immediate area by water evaporation, and looks a hell of a lot nicer than the concrete wasteland our cities have become).
* reserve road traffic primarily for the transport of goods (transportation and delivery) to stores
* and in general decide that cities, first and foremost, are LIVING AREAS, not traffic areas?
Re: ..From corporations to US law enforcement and spy agencies...
The problem is not that companies can be forced to hand data to government agencies; the problem is that companies can be forced to hand data to ANOTHER government's agencies, with no oversight, no way of even knowing it happened, and no way for the people whose private information is handed over to do anything about it.
European Agencies are governed by European Law, controlled by European Watchdogs, and have to answer to European Courts.
Re: Keep on spreading this nonsense...
Which situation is better:
a) The code is open source, so the potential for public scrutiny exists (whether or not this potential is used is a different question). If a bug is found, many people will be able to suggest fixes, or even implement fixes prior to an official release. If a bug isn't fixed, the project can be forked. If a bug isn't fixed for some prior version, people can backport a fix themselves.
b) The code is closed source. Everyone has to trust some company to check it for security loopholes. If a bug is found, many people won't even be aware of it until it is fixed, the company is the only one who can fix it, and until a fix is released people have to use some workaround (usually "turn XYZ off"), if there is one. If the bug isn't fixed, projects relying on the software have to be redesigned to use something else. If the bug isn't fixed for some prior version, no one can backport it.
A or B?
For those worried about Microsoft's Pluton TPM chip: Lenovo won't even switch it on by default in latest ThinkPads
Re: So, ETH was lost to bad code, and now new ETH has magically been added
Fraud happens regardless of the currency system used. If humanity used seashells to buy groceries, someone would still try to sell non-existing bread to people.
The question is: how much money was unrecoverably lost due to buggy software?
Re: Sure, it'll beat outsourcers
> That's where machine-generated code stands right now.
"The New Guy", as in "the guy we hired yesterday who never saw anything in our codebase", can still email accounting until someone shows him the lowcode platform...and from there he can apply what he knows about `import csv, requests, logging` et al. to produce something that works in most cases and, if he's good, only makes the senior sigh in frustration slightly.
By the time AI can do that, it would probably be a good idea to get going with that Mars colony, because then we're not far from it walking up to my desk stating: "I need your mouse, your keyboard, and your motorcycle." in monotone, heavily accented English.
Re: Just another compiler
A compiler takes in instructions (not a problem statement), written in an unambiguous, artificial, formal language (a.k.a. code). It then translates these exact instructions into other exact instructions.
This system takes in a problem statement (not instructions), written in an ambiguous, natural, contextual language (a.k.a. English). It then derives exact instructions from the problem statement.
Re: Sure, it'll beat outsourcers
The problem with most on-the-job tasks is not in the writing of the code, or in designing small algorithms; it's in the architecture. What should the code do, what goal should it achieve, how can it fit into the system?
Here is a problem that never comes up in coding challenges, and which I am pretty sure no AI will be able to solve on its own for a very long time:
"You know the lowcode-platform accounting uses, right? They get large CSVs from our new customer, and need to read them in, but the platform can't do it. We need something to bridge the gap. Oh and the bigwigs want it to log all entries in some form of summary in case we get audited...just think of something there. "
Contextual knowledge. Communicating systems. Knowing prior code. Efficiency/Usability considerations. Architectural Problems.
Huge problems for the AI...no problem for humans. The task is so simple, it's usually the kind of work people would hand to interns/new hires to see how they do. I bet by the time most programmers have read through the specification of this really simple task, they already have at least a half-formed idea of how to do it.
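To make the point concrete, here is a minimal sketch of the sort of bridge script a new hire might knock together for the task above. The column names (`invoice_id`, `amount`) and the idea that the lowcode platform ingests cleaned rows are invented for illustration; the real feed would have its own schema, and the actual hand-off to the platform is left out.

```python
import csv
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("csv-bridge")

def bridge(csv_path):
    """Read the customer's CSV, collect the rows the lowcode platform
    will ingest, and log a per-file summary for the audit trail."""
    rows, skipped = [], 0
    with open(csv_path, newline="") as fh:
        for record in csv.DictReader(fh):
            try:
                record["amount"] = float(record["amount"])
            except (KeyError, ValueError):
                skipped += 1  # malformed line: count it and move on
                continue
            rows.append(record)
    # the "some form of summary" the bigwigs asked for
    log.info("imported %d rows, skipped %d, total amount %.2f",
             len(rows), skipped, sum(r["amount"] for r in rows))
    return rows
```

Nothing clever, and a human writes it in an afternoon, precisely because the hard part was understanding the context, not the code.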
This is not to say that AlphaCode isn't a tremendous achievement. The fact that it can take a natural-language description of an algorithmic problem and then come up with a solution it hasn't seen before is beyond impressive. I hope this thing makes it into a viable product that we can use not just to boost productivity but to use in novel ways, and maybe even learn from.
That won't change the fact that we can look forward to at least 6 more months of various announcements about how this will be [the end of, a threat to, a total gamechanger for, ...] programming and software development. *sigh*
At present count, my system makes a bit over 200 fonts available for use. Font families exist, as does font stacking. This should be more than enough for most use cases.
And if a design absolutely, positively, entirely cannot exist without that one particular font, then whoever runs the site can host it himself.
And my car...
...is usable without getting in, starting the motor, turning it off again, going to my mailbox, waiting for the steering wheel to arrive, then going to the corner store, getting the paint package, painting the car, getting in the car, fixing the steering wheel, trying to start the car...
...only to discover that someone last-second-replaced the car keys in my hand with a hot dog in an attempt to sell me more of them, and now I've got mustard all over my dashboard, because I accidentally squashed it against the keyhole.
The last round of manufacturing automation was mostly about jobs doing repetitive things that were easy to describe programmatically. Because of that, it could be done with the technology available back then. The challenges were mostly in the electromechanical parts of the robots; the software was relatively straightforward.
And it created jobs, because a lot of the labor required to build, assist, control, configure and manage these machines could only be done by humans. Combined with the rise in service industry jobs (which robots couldn't do either 20 years ago) we saw a net rise in jobs.
This time, the situation is different. The jobs that will be taken are jobs done by humans because, 20 years ago, we thought only humans could do them. The advances are mostly not in the robotics but in the software controlling the robotics, so no huge manufacturing industry is going to spring into existence because of it.
The nature of the controlling software has changed as well. The robots of yesterday had to be programmed step by step to perform their tasks (creating a lot of programming, controlling, etc. jobs). The new robots learn tasks by example and are guided by machine-learning systems.
And last but not least, this time around automation will hit the service sector just as hard as manufacturing, if not harder.
> someone needs to program the computers and service the robots.
Yes, but how many people are doing that, and how many people have been replaced by the robots they service?
Also: While this means good business for me and others in IT, the people who are being displaced cannot just go find a job in these markets, as this requires special skills and knowledge.
Tougher rules on targeted ads, deepfakes, crafty web design, and more? Euro lawmakers give a thumbs up
I own that $4.5bn of digi-dosh so rewrite your blockchain and give it to me, Craig Wright tells Bitcoin SV devs
Re: OK something I've never understood in this case
Okay, let's say someone wins such a case, and a court order is issued in his favor.
And then what?
The entire point of a distributed PoW ledger is that a majority of the nodes in the network have to agree to a protocol for it to be effective. So, in order to change the rules such that these transactions are validated on the chain, 51% of miners would have to agree to run the modified rules on their rigs.
These miners are in dozens of countries, each with its own jurisdiction, laws, courts, legal system, etc.
So how would such a court ruling be enforceable?
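A toy model makes the majority requirement obvious. The numbers below (a court that can bind miners holding 10% of the hash power, against 90% that keeps the original rules) are invented purely for illustration:

```python
# Toy model: a rule change only takes effect on the chain when miners
# controlling a majority of the hash power adopt it.
def chain_accepts(tx, miners):
    """tx is effective only if miners holding >50% of hash power validate it."""
    total = sum(m["power"] for m in miners)
    in_favour = sum(m["power"] for m in miners if m["rules"](tx))
    return in_favour * 2 > total

original_rules = lambda tx: tx["signed_by_key_holder"]
court_ordered  = lambda tx: True  # "validate it anyway", says the order

# A court in one country can only bind the miners in its jurisdiction,
# say 10% of global hash power; the other 90% keeps the original rules:
miners = [{"power": 10, "rules": court_ordered}] + \
         [{"power": 30, "rules": original_rules} for _ in range(3)]

tx = {"signed_by_key_holder": False}  # the disputed coins
print(chain_accepts(tx, miners))      # False: the order changes nothing
```

Unless the ruling somehow compels a majority of hash power across all those jurisdictions at once, the chain simply carries on under the old rules.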
Re: "with your private key and your peers' public keys"
>wireguard has no ability to do this.
It doesn't have to.
It provides a solid foundation to build other things on top of, making the simple use cases simple out of the box and empowering developers of more complex solutions to build on an efficient base.
This is unix philosophy at its best, and as time has proven, it is a vastly useful (and successful) approach.
Verifying message integrity and sender authentication is useful when you are downloading critical information such as compiled software, an install ISO, etc.
It is completely pointless when I am reading a simple read-only blog or similar content. I don't need to verify message integrity when I am reading a recipe for pizza dough, looking at a webcomic, reading someone's thoughts on the social life of dolphins, or following the newest rant about why (spaces, vim, C#) are better than (tabs, emacs, Java).
Not all information is critical.
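For the cases where integrity does matter, such as that install ISO, the check is straightforward: hash the downloaded file and compare it against the checksum published alongside it. A minimal sketch (the file path and hex digest you feed it are of course whatever the publisher provides):

```python
import hashlib

def sha256_matches(path, expected_hex):
    """Hash a downloaded file in chunks (so a multi-GB ISO doesn't need
    to fit in memory) and compare against the published SHA-256."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

That one-off check at download time is the proportionate amount of verification; wrapping every webcomic page view in it buys nothing.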
Why force HTTPS on simple read-only pages
...with no login features or transactions taking place?
They display text. There is no login, there are no transactions. There are no cookies. Their visitors give up no secrets they aren't giving to their ISPs anyway.
Forcing "HTTPS Everywhere" on such pages is similar to locking every door in a house, not only the front door. It doesn't increase security,
Re: There is no reason not to choose Postgres
>Unstructured analytics are exactly what tools like spreadsheets were designed for.
True, but there is no need for that spreadsheet to be a proprietary product in a format that is hard to read, hard to write, and hard to use with anything but said proprietary software.
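The kind of open, tool-agnostic tabular format the comment has in mind can be as plain as CSV, which any language's standard library (and any spreadsheet program) can round-trip. The column names and values here are made up:

```python
import csv
import io

# A plain-text table anyone can read or write, no proprietary software needed.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["item", "qty", "price"])
writer.writerow(["walnuts", 3, 4.50])

buf.seek(0)
rows = list(csv.reader(buf))
print(rows[1])  # ['walnuts', '3', '4.5']
```

The trade-off is that CSV carries no formulas or formatting, but for unstructured analytics data the openness usually matters more.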
Re: It is probably possible but is it desirable?
>provisioning all manner of customers regardless of what the customer is using to access the systems
Let's say I launch a simple text-only web forum with an estimated 100,000 requests per day total. Or let's say a "where-is-the-nearest-ATM" app for an AR device, which would have a similarly low load.
What would even be the point of running that on big iron in a warehouse somewhere, using frameworks and infrastructure designed and built to accommodate the next Facebook, when I can spin it up with no problem on an on-premises box with code written over a long weekend?
Re: It is probably possible but is it desirable?
It's not just the people who try to solve problems, it's also the problems they try to solve.
Problem-A: I have 150g of walnuts, and I would like to crack them for a snack while watching TV.
Problem-B: A baking-products factory gets 15t of walnuts per day, to be made into crushed walnuts or whatever, so they also have to remove the shells.
I will probably use a hand nutcracker for Problem-A.
The factory will likely use some industrial-grade nut-processing machine.
Neither approach makes sense for the other problem, even though they are in the same domain, with scale being the only difference.
There is no "one-size-fits-all" solution
Not in business logic, not in programming, not in infrastructure.
Why would I use tools designed for HPC to solve something that can be done using the computational power of a wristwatch? Why would I map the job of drawing graphics on a screen to the same tools used to build large-scale IDS? Why would I build systems to wrangle a few dozen text documents every few days using the same stack used to wrangle 10000 documents every minute? Why would I build a machine learning system on the same framework used to write hardware drivers?
Yes, in theory, a screwdriver could be used both to tighten screws, and to hammer nails into walls.
Doesn't mean it should be used that way, or that craftsmen are doing it wrong by carrying 2 tools.
The problem domain defines the tools used to solve it.
Not the other way around.
Re: Lack of comprehension and imagination ...
Even fusion energy isn't "boundless". A fusion reactor has a set output maximum.
For that matter, even a Dyson Swarm or Penrose-Sphere would not be boundless.
And besides, as our civilisation reaches such heights in technological capability, it also reaches new heights in energy consumption. Sky cities, large-scale spaceships, and the industry around building a galactic civilisation and keeping those darn space amoebas away from our trade routes don't come for free.
So, long story short: even if we find better ways to get energy, there are ALWAYS better things to use said energy for than blowing it into the wind to support some new crypto-fad.
Re: Alright, so the way to save power in datacenters....
>The energy consumption differential is very real.
Did I say it isn't?
But "real" doesn't mean that the difference matters on a global scale.
Datacenters and their infrastructure account for ~1% of global energy consumption. Most of that is infrastructure we already build to be as efficient as possible...network components, OS kernels, FS drivers, etc.
So we take the fraction of that 1% that is actual application code running in these datacenters, and we shave a few percent off that. What percentage of global consumption does that save? I don't know, but I assume it's not much.
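A back-of-the-envelope calculation shows why "not much" is a fair guess. The 1% figure is from the argument above; the other two shares are invented-but-plausible assumptions, and you can plug in your own:

```python
# Back-of-the-envelope estimate with stated assumptions:
datacenter_share = 0.01  # ~1% of global energy use (figure from the comment)
app_code_share   = 0.30  # ASSUME 30% of that goes to application code
porting_savings  = 0.10  # ASSUME a rewrite shaves 10% off that slice

global_saving = datacenter_share * app_code_share * porting_savings
print(f"{global_saving:.4%} of global energy consumption")  # 0.0300%
```

Even with generous assumptions, the result stays in the low hundredths of a percent of global consumption.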
Meanwhile, new code is written and new hardware is spun up month after month, for more pointless apps, and to shuffle yet more ROT data around. And rockets are launched for space tourism, the car is still widely accepted as the ultimate mode of transportation, we still produce mountains of milk, meat, and other energy-inefficient foodstuffs, and yes, we still burn coal as an energy source.
Re: Thrashing about wildly looking for straws to clutch...
>We always assume competent Java and competent Rust developers.
Even if every single developer was perfect at his job, there are overly optimistic deadlines, badly planned projects, code written during crunch time, requirements changing halfway through, decades-old legacy code to be interfaced with new systems, etc. etc.
A language can only do so much.
It's the quality of the code written in practice that matters most.
So the best a language can do is help the developer write good code.
And in my opinion, the best way a language can achieve that is by being easy to learn and easy to read.
Re: Thrashing about wildly looking for straws to clutch...
>We can assume memory consumption would go down by 50%, based on experimental results so far.
IF the code using that memory is efficient, and IF it is possible to port the old code.
And those are two BIG ifs.
Rust isn't inherently more memory efficient. I can write inefficient code in any language. Heck, I could write inefficient assembly.
In fact, the more complicated the language, the easier it is to freck up and write something that looks okay but has huge potential for improvement. Yes, Rust can result in very efficient code. It also gives me all the complexity required to produce something that kinda works, but only as long as I throw $$$ worth of hardware at it to keep it ticking at scale.
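The "inefficient code in any language" point is easy to demonstrate. Both functions below are the same language and produce identical output, but the first copies the whole accumulated string on every iteration:

```python
# Both functions produce the same output; the language doesn't stop you
# from writing the quadratic one.
def join_lines_slow(lines):
    out = ""
    for line in lines:
        out = out + line + "\n"  # copies the whole string each pass: O(n^2)
    return out

def join_lines_fast(lines):
    # single pass over the input: O(n)
    return "\n".join(lines) + "\n" if lines else ""
```

No compiler, borrow checker, or GC saves you from the first version; only the programmer does. The same is true of memory footprint in Rust.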
As to the second point: developers' time matters. If I have 1,000,000 Java engineers, each spending 6 months learning Rust, that is 6,000,000 months, or 500,000 years, of time invested, and not a single line of code has been ported to Rust at that point. And there are billions of lines of enterprise-level Java out there that would need to be rewritten from scratch, and also tested, deployed, and maintained. Who's going to do that? The answer is "no one".
And for all that, what do we get? A single-digit improvement in an area that accounts for maybe 1% of global power consumption. Wow.
A much better use of all these countless work hours and mental resources would be figuring out how to reduce individual car traffic, improve public transport, and get people away from believing it's a good idea to burn 3l of gasoline in an SUV to get 500ml of milk from the corner store.
Alright, so the way to save power in datacenters....
...is using a programming language that provides a marginally more efficient use of electricity...
...instead of reevaluating whether we really need to store, process and distribute all these exabytes of ROT Data, or run the gazillions of pointless (cr)apps, with layer upon layer of tracking bulls... on top?
Sure, let's learn Rust, and then use this marginally more energy-efficient language to develop the next super-needed fitness tracker, daily-water-intake tracker, and cat-meme generator, and to wrangle 10 megafantastillion photos of people's food. Because our civilization desperately needs all this to function!!!
That's how you save the planet </sarcasm>