* Posts by Tom7

265 publicly visible posts • joined 3 Aug 2010


Let's take a look at those US Supreme Court decisions and how they will affect tech


Re: What about signs

The whole article is a serious beat-up. The equal protection clause applies to governments, not businesses. Even universities are free to use affirmative action in their admissions, so long as they don't receive government funding for those places.

For an article about "how [those decisions] will affect tech" there is not a lot here about how those decisions will affect tech; just a bunch of left-wing bashing of the decisions and some scaremongering "maybe this will have big implications for tech companies." What implications?

Whose line is it anyway, GitHub? Innovation, not litigation, should answer


Summarising open source as "the creator reasonably wishes people to be able to read it and put it to use" is a vast simplification. Open source licenses have conditions and those conditions are full of minefields for an AI ingesting source code. Does use of the code require attribution? How exactly does an AI coding tool comply with that? How does an AI ingesting source code even know that the repo it is ingesting from belongs to the original author and is correctly following the license terms? If it's been forked within GitHub then it's reasonably straightforward, but there are many, many examples of software being copied into GitHub from other places by other people. It's not worth the author's time to go around telling them not to do it, but that doesn't mean the author is okay with it, or that they're okay with an AI then ingesting it.

Open source AI makes modern PCs relevant, and subscriptions seem shabby



I got all interested after this article and went and figured out how to install Stable Diffusion locally. About half an hour later, I had the GPU version installed, which immediately died because I have an AMD GPU and it will only work with NVIDIA. Okay.

So I went for the CPU version. I've just run my first generation; ten and a quarter minutes to generate a 512x512 image that isn't really what I asked for.

The difference between SD and the more commercial offerings is still large. They just work, they produce reasonable results and they're fast enough to just dabble with and adjust your prompt if the output isn't quite what you were after. SD takes quite a bit of nous to know how to get it to work at all, the results are a bit disappointing and it's slow enough that you'll have got bored by the time your first image is delivered.

Fed up with Python setup and packaging? Try a shot of Rye


Re: No mention of pip and venv?

I can't figure out what was wrong with this. I blew it all away and started again and it all just worked. It had somehow got it into its head that it needed quite an old version of pylint in that one project and nothing I did seemed able to convince it otherwise.


Re: No mention of pip and venv?

As with a number of other systems, this only solves half the problem. It helps you manage a collection of other people's packages, not to package your own.


So I'm curious - how do you install Python packages without pip? Or do you insist that all the Python you write uses nothing but the standard library?


Re: No mention of pip and venv?

They work if:

* You want to use the version of Python that is installed by default on your system

* You don't want to package your work

As soon as you want to use a different version of Python for a project, or you want to publish your work on PyPI or distribute it in wheel form, there is a whole world of pain that venv and pip don't help you with.

That said, rye seems to still have some pretty rough edges. After an hour of faffing around, I still can't get it to install pylint into a venv...

Croquet for Unity: Live, network-transparent 3D gaming... but it's so much more


Re: It's not FLOSS :(

That's not how they describe it in the documentation. The reflectors also serve to serialise the event stream, ensuring that each client sees the event stream in the same order with the same timestamps and so ends up with the same simulation. At any rate, you can't use Croquet without using the reflectors and you pay for their use (subject to a free usage allowance).


Re: It's not FLOSS :(

It's also not really clear what your comments about it being "serverless" mean, since it's dependent on a network of "reflectors" that are operated by Croquet and are closed-source.


Re: It's not FLOSS :(

Yep. And "Contact Us" if you're interested in running your own reflector.

I get it. They need a revenue model. But this is a lot less exciting now.

Enter Tinker: Asus pulls out RISC-V board it hopes trumps Raspberry PI


Re: Yikes.

I hadn't thought of that, but that's really neat. An EtherCAT driver for this would be awesome; one interface for networking, the other for control. It's curious, in a way, that it's running Linux and not FreeRTOS, which I think would be a more useful OS for robotics and the like (although Yocto seems to have decent support for PREEMPT_RT, it's always a pain to actually make something do what you want in RT).

The Great Graph Database Debate: Relational can't do everything


The crux of our disagreement is simply with the claim that some future "well-architected" relational database engine could render the use of today's useful, existing, in-production graph databases unnecessary

Not a great way to start: mis-quoting (apparently deliberately) the other side. No-one has so far mentioned some future "well-architected relational database engine", but rather well-architected databases (ie schemas) within existing relational database engines.

This, and a big pile of thinly-veiled Neo4j sales-speak, appear to be the level this contribution to the debate is operating at.

How to get the latest Linux kernel on your Ubuntu box


Why do instructions of this sort always include `sudo apt update` when `add-apt-repository` has, for some years now, done this automatically?

Massive energy storage system goes online in UK


Re: Decommissioning?

Solar is a really dumb idea in the UK and makes the problem worse; all the generation is at the time of minimum demand.

Nuclear is not a bad idea but extremely expensive as it is currently implemented. Much more expensive than wind.

Hydro is a good idea but we've run out of suitable geography to build more.

Geothermal is a good idea but it's not obvious that there is a large, economic resource in the UK.

I really shouldn't have to explain why an interconnector from Moroccan solar output is not a solution here.

So wind is pretty much what we're stuck with. It is far from impossible for there to be very little wind across the whole of Europe - there have been several electricity price shocks in the past few years that have been linked to exactly that.

Yet another thing you've not taken into account is that "powering 300,000 homes" means meeting the current electricity demand of 300,000 homes. That's about 1/3 of the current energy demand, with the other 2/3 roughly split between transport and gas use. If we're going to transition to a future with no fossil fuels, that triples the electricity demand.
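A back-of-envelope sketch of that tripling, using made-up round figures (the one-third splits are the rough shares quoted above, not official statistics):

```python
# Hypothetical round numbers: final energy demand split roughly equally
# between electricity, transport fuel and gas use, per the shares above.
final_energy = 900        # TWh/yr - an invented, illustrative figure
electricity = final_energy / 3
transport = final_energy / 3
gas = final_energy / 3

# Naive 1:1 electrification of transport and gas use (ignoring the
# efficiency gains of EVs and heat pumps, which would reduce this):
future_electricity = electricity + transport + gas
print(future_electricity / electricity)   # triples current electricity demand
```

In reality heat pumps and EVs are several times more efficient than the boilers and engines they replace, so the true multiplier is smaller, but the direction of the argument stands.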


Re: Decommissioning?

Not to mention that to make wind energy actually reliable in the UK, you'd need to build somewhere around 9,000 of these - that would allow you to power 30 million homes for about a week.

Someone has to say it: Voice assistants are not doing it for big tech


"the dream of a cross-platform voice-assisted future"

I think one of the issues is that none of these assistants is actually really cross-platform.

If you own an Echo, then you probably have Alexa on your Echo, either Google Assistant or Siri on your phone and Cortana on your laptop/desktop. Amazon and Google both appear to have abandoned the desktop space. Google and Apple dominate the phone space; there is an Alexa app but it's so inconvenient to reach that it's pointless. Google has a stand-alone device of some sort but Amazon seem to dominate this space.

In a way, the disparity hurts the metaphor. People expect computers to be like people, and it's just weird when you have six different devices all called "Alexa" that appear in different places and that you interact with in different ways.

Twitter engineer calls out Elon Musk for technical BS in unusual career move



The number of RPC calls isn't the problem but... "we spend a lot of time waiting for network responses." Sounds a lot like someone playing word games with what is and isn't an RPC to win a point.

How GitHub Copilot could steer Microsoft into a copyright storm


Re: I am not a lawyer

You haven't actually read the terms of service, have you?

If you had, you would have spotted this in the definitions section:

The "Service" refers to the applications, software, products, and services provided by GitHub, including any Beta Previews

Licenses mean what they say they mean, not what you'd like them to mean.


Re: I am not a lawyer

Do you have an alternative interpretation? Those words seem pretty clear to me. Anyone who posts code on GitHub licenses it to GitHub for the purpose of providing any service that GitHub provides - including Copilot.

That doesn't extend to the people who use Copilot of course - they're just SOL. But Microsoft is covered for their use in training Copilot.


Re: No Solidarity with A.I.'s run for profit!

Check the definitions though - "the Service" is defined as any service or application provided by GitHub, not just the GitHub service itself.


Re: Liability has already been defined

What's the relevance of that? No-one denies that everything posted on GitHub remains under copyright; the point is that the authors have, by using GitHub, granted Microsoft a license to use that code.


Re: I am not a lawyer

The author of said code agreed to the GitHub terms of service, which includes a license for Microsoft to use your code for essentially any purpose "as necessary to provide the Service" (quote from the ToS). Here 'The “Service” refers to the applications, software, products, and services provided by GitHub, including any Beta Previews.'


Re: No Solidarity with A.I.'s run for profit!

PAAAAAhahaha. You posted it on GitHub! You realise that involved agreeing to a license, right? Specifically this in the GitHub terms of service:

4. License Grant to Us

We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video.


Re: Liability has already been defined

Given that it's trained on GitHub public repositories, this language in the ToS looks relevant:

You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users

If Copilot is part of the GitHub service, then anyone who posted their code on GitHub has implicitly licensed their code for this purpose.

Note that, in this agreement, the "Service" is defined as "the applications, software, products, and services provided by GitHub, including any Beta Previews."


This is very much the issue and it's not nearly as clear-cut as your example makes out.

Most people learn by looking at what others have done, then build something themselves based on what they've learned. Whether that is a copyright violation or not depends on just how close it is to what they've seen from others.

One way of looking at Copilot is that it's a tool to make that process of looking at other people's work and using what you learn a lot more efficient. But it lacks any sort of "hang on, that's too similar to what we've seen elsewhere" filter and also hides the source of the material from the human who is using the tool, so they have no reasonable basis to assess whether the code it's just produced is a copyright violation or not.

Copilot, as an AI, is not a legal person who can be sued for the copyright violation and, naturally, the Copilot terms of use make the end user completely responsible for assessing whether the output is a copyright violation or not.

The only sane course from here is to avoid Copilot like the plague.

OpenAI, Microsoft, GitHub hit with lawsuit over Copilot


Yes, absolutely it is. However, the GitHub terms of service include the grant of a license to GitHub "to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the applications, software, products, and services provided by GitHub, including any Beta Previews" (note this is a synthetic quote, generated by substituting definitions from the "Definitions" section of the ToS for the terms so defined). I expect Microsoft / GitHub will simply rely on this part of the terms of service; by posting your code, you gave them a license to use it pretty much however they like in the course of their business. The fact that Copilot wasn't conceived at the time the code was posted is irrelevant, as this section of the ToS also includes "including improving the Service [the applications, software, products and services provided by GitHub] over time."

What will be very interesting is how the court treats people who posted someone else's copyleft-licensed code. Such a person has every right to make copies of the code, make derivative works, post it all on the internet etc etc; what they don't have a right to do is to grant a non-copyleft license to GitHub, which they implicitly purport to do when they post it on the site.

It's worth noting that GitHub has separate terms of service for corporate customers. Those terms have a similar license grant, but crucially define "the Service" much more narrowly, as "GitHub's hosted service and any applicable documentation" instead of "the applications, software, products and services provided by GitHub, including any Beta Previews."

It's official: UK telcos legally obligated to remove Huawei kit


It's official

UK publications obliged to stop using the stupid Americanism "obligated".

Open source databases: What are they and why do they matter?


Free is, well, free

It seems odd to talk about how FOSS databases are dominant in startup culture without mentioning that an Oracle database license costs five figures per CPU. If you're using Postgres or similar and you become capacity constrained and want to expand, you shell out an extra $10 per month to AWS or whoever and spend a few hours configuring it all. If you run Oracle and want to do the same, you call your local salesman and tell him you're bent over, ready and waiting. In a time when software scalability is everything, what sort of startup wants to expose themselves to the risk that they'll be successful and need to buy more Oracle licenses at whatever the hell the going rate is then? It's not like you'll have a choice; shell out or your service will fall over.

The crime against humanity that is the modern OS desktop, and how to kill it


Re: It does suck

I agree that Windows 7 was the peak of Windows usability. 10 was sort of okay and sort of not. I haven't used Windows regularly since 7.

IMO the current Ubuntu / GNOME desktop gets it right. I'm keyboard-centric so the 'super key + start typing' thing works really well for me; it's the Windows 7 scheme without the folder-structure to fall back on.

The only drawback I can see is that it doesn't work very well for touch. The Android-like page after page of unsorted app icons is not exactly usability plus.

I think the author has missed one of the key reasons that OS makers keep on messing with desktops - they're still searching around for a desktop metaphor that feels equally natural when you're sat at a screen, keyboard and mouse as it does on a tablet or phone. Moan as much as you like that tablet UIs have no place on the desktop, but personally I have a laptop that folds around into a tablet and turns into a touch screen. Which UI metaphor should it use?

This tiny Intel Xeon-toting PC board can take your Raspberry Pi any day


It has 16 digital IOs. The specs make no mention of any peripheral controllers on them. It's hard to see that as more useful than the Pi's 27 GPIOs, most of which can be configured to some alternative function (I2C, SPI, UARTs, PWM).

Or, for that matter, an ESP32's extremely impressive list of peripherals and very good RTOS.


No mention of any GPIO, ADC, touch or PWM peripherals accessible on the board. Also no mention of WiFi. It's hard to see this as an RPi competitor. It's just an SFF PC. Impressively SFF, maybe, but it doesn't offer any of the things that make the RPi distinctive.

Modeling software spins up plans for floating wind turbines


Re: Oil rig technology?

Yes, for a given value of "solved". An offshore oil rig is pumping thousands of barrels of oil per day, some of them hundreds of thousands. A barrel of oil is equivalent to around 1.7MWh of energy, so an offshore platform is producing anything up to around half a million MWh per day. A 10MW turbine, operating at a 30% capacity factor, produces about 75MWh per day. Not all oil platforms are that big, but neither are all wind turbines. A turbine support structure has to cost about 15% of what an oil platform's support structure does to make the economics comparable. That's before you consider that a turbine also needs a cable installed that's capable of carrying X MW back to shore, while your average oil platform stores it all internally until a ship comes along and takes it away.
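The rough arithmetic behind those figures; every input here is a round number quoted above, not measured data:

```python
BARREL_MWH = 1.7                    # approximate energy content of a barrel of oil

platform_bbl_per_day = 300_000      # a large offshore platform's output
platform_mwh_per_day = platform_bbl_per_day * BARREL_MWH
print(platform_mwh_per_day)         # ~510,000 MWh/day - "around half a million"

turbine_mw = 10                     # rated output
capacity_factor = 0.30
turbine_mwh_per_day = turbine_mw * 24 * capacity_factor
print(turbine_mwh_per_day)          # ~72 MWh/day - the "about 75MWh" above

# On these numbers the platform out-produces the turbine thousands-fold:
print(platform_mwh_per_day / turbine_mwh_per_day)
```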

In the medium term, I think wind turbines will have chemical plants built into them that produce synthetic fuels. There is a pilot (onshore) plant in Iceland producing 1.4 million litres of methanol per year from geothermal energy; there's no particular reason that the same could not be built into a turbine tower. Then, again, the stuff could be stored until a ship comes and collects it. Similar chemistry is available to produce ethylene and ammonia, major energy-intensive feedstocks for industrial processes.


Re: Oil rig technology?

It's a crap idea. The life of a wind turbine is already largely limited by the life of the blades under constant flexing from wind loads. So someone's invented a turbine that requires much more flexing of the blades to control it. Slow clap.

The challenge with deep-offshore wind is not the bit above the water but the bit below it. People have been prototyping floating conventional windmills for well over a decade. If this new turbine was a good idea, it would be a good idea on land as well as offshore. It isn't.

The thing about deep offshore is that you can't just let it bob around aimlessly. You still need a grid connection to each turbine that's capable of carrying several MW (or whatever the rated output of the turbine is - some are up to 10MW these days). So you've still got to lay a cable on the ocean floor, and that upsets environmental types because no doubt there is some fragile sea grass somewhere on the path between the turbine and the shore. You then also need a way to anchor the turbine to that location, in a way where it's not going to break loose, snap its grid connection cable, smash up any other turbines in its path and become a hazard to navigation in rough weather. This all makes it terribly expensive to install. There are enough shallow-water locations where turbines could be installed but haven't been to make deep-water offshore wind a solution to a problem we don't have yet.

Why the end of Optane is bad news for all IT


Re: Insane

In a way, I think Optane was a good idea poorly timed.

Ten years ago we all had spinning disks in our laptops, and it was transformative to replace the spinning disk with an SSD five years or so ago. Workloads had been disk-bound for decades while everything else on the system got orders of magnitude faster; suddenly, storage caught up several orders of magnitude. For most people, most of the time, their systems are now fast enough for their needs. Most people now look at their laptop and see how much slicker it is than five or seven years ago; the idea that storage could improve by another order of magnitude just doesn't hold that much attraction. If we'd had another ten years to get used to SSDs, we might be feeling the limits a bit more and faster storage would be more attractive.

To interact a bit with the author's ideas, they write this as though we could have jumped straight back to a 1960s paradigm because Optane appeared. Never mind that back then software amounted to hundreds of bytes and running a programme was expected to take hours or days; the idea of having more than one programme running at once simply didn't make sense to people then. Attacking the filesystem as an abstraction for managing storage is all very well, but unless your software is going to go back to being a single process of a few hundred bytes, you have to have *some* sort of abstraction for managing it. No-one really seems to have done any work towards figuring out what that abstraction could be.

Saying you just install an application into primary memory and run it from there, where it maintains its state forever, is all very well; how does that work if you want to run two copies of the same piece of software? If your answer is to separate data from code and have multiple copies of the data, how do you tell your computer to run a new one or pick up an old one? There is a new category of thing that is persistent process memory; how do you identify and refer to that thing? How does that model even work for something like a compiler, where you feed it a file and it produces another file as output? Is persistent state even useful there? If not, how does the abstraction work?

UK Info Commissioner slams use of WhatsApp by health officials during pandemic


Re: we all know one big reason

This is just paranoid conspiracy-theorising.

Go have a good look at the data protection practices of your average NHS trust. Use of WhatsApp for staff communication - including discussion of patient information - is absolutely rampant. It's a data protection and management nightmare but no-one seems to be doing anything to rein it in. This has come from the bottom up, not the top down.

I'm personally aware of two cases where a whole ward were required to join a WhatsApp group and a member of staff used the resulting access to personal phone numbers to stalk other members of staff. Nothing can be proved because he deleted all the conversations from WhatsApp soon after they happened and no-one thought to screenshot them.

US Supreme Court puts Texas social media law on hold


I'm not really sure this is good news for the platforms

They argue companies have a First Amendment right to exercise editorial discretion for the content distributed on their platforms.

These platforms have spent at least the last decade arguing that they don't exercise any editorial discretion over their content, in order to benefit from the Section 230 safe-harbour provisions. If they're now arguing that they exercise editorial discretion and need to do so, haven't they just opened themselves up to liability for anything that's posted on their platform? In particular, they become the publisher of any libellous speech...

Appeals court unleashes Texas's anti-Big-Tech content-no-moderation law


Not an easy area of law

It's curious that most of the arguments against this law, at least as presented in this article, are not constitutional arguments, they're arguments on the lines of, "But if you do that, the internet will become a really bad place." It very much gives the impression that the legal argument really is, "We don't like the effects of this law, let's try to find a constitutional argument to sink it."

The difficulty for the platforms is that they want it both ways. There have been various attempts to classify the social media platforms as common carriers. They don't want that, because they want to be able to exercise some sort of control over the content they carry. But the alternative is to exercise control over the content they carry - and that then makes them responsible for the content they carry, removing the "safe harbour" protections.

It's hard to see the current situation surviving - the platforms are saying they don't exercise editorial control over content and have the safe harbour protections from liability, until the content is some type that they REALLY want to have control over.

The law needs to change here. Platforms need to be able to exercise some control over content without losing the safe harbour protections. I'm not thinking about control over political content here. But StackOverflow should be able to restrict the content people post to content about software development - exercising editorial control - without becoming liable for everything that every user posts on the site. Currently, it's not obvious that they can do this - either they exercise no editorial control and have safe harbour protections, or they exercise editorial control and are liable for content. The courts have worked around this by basically ignoring the issue, but it can't last.


Re: Only a sith deals in absolutes

I blame the people who objected to paying taxes to fund the troops protecting them in the first place.


Re: Both are un-Constitutional

That's the same as saying that a state can't ban or regulate the sale of any goods, so long as the customer is from another state - plainly ridiculous. Congress has complete power to regulate interstate commerce - but that doesn't stop the states regulating it, so long as that regulation doesn't conflict with federal regulation.

Intel energizes decades-old real-time Linux kernel project


There are two Tom7s? Wow. I've been using it since 2010, though it seems you did get there first.


Yes indeed, although how many people will manage to use it correctly is debatable - most RasPi GPIO seems to be done in Python, which rather defeats the purpose.


Not really. There's a reason that desktop operating systems don't generally use hard real-time schedulers; they don't usually produce the best user experience. TBH it's been a long time since Linux desktop performance has had problems for me other than memory exhaustion - and this won't help with that.

Ubuntu applies security fixes for all versions back to 14.04


Re: Your scheduled bit pedantry whenever shell commands are mentioned

The usual reason for multiple sudos rather than sudo -s is that it leaves visible traces of what you've done in the system log files, where sudo -s just records that someone has become root but doesn't show what they've done.


Soooooo.... are the fixes important?

OpenShell has been working on a classic replacement for Windows 11's Start menu


With WSLg now able to run Wayland sessions, how difficult would it be to replace the shell with GNOME? I'd seriously consider it. I've been running Ubuntu for my day to day work for so long now that going back to Windows is a pain, finding and relearning how to do everything. At the same time, there are a few apps (though maybe not many these days) that don't cope well running on Wine or similar. A Linux/GNOME session that can run Windows apps natively would be really attractive.

IPv6 is built to be better, but that's not the route to success


Re: Won't happen in my lifetime

You should want that. The reason Facebook, Twitter, TikTok and so on have massive amounts of power today is because IPv6 hasn't been adopted, devices don't have a public IP address and peer-to-peer networking is impossible.

Hand me a global internet where every device has a public IP address and tomorrow I'll give you a social network where you actually connect with your friends instead of connecting to Facebook. Until then, any attempt to build it will drown in user complaints that it doesn't work. Or doesn't work on some of their devices. Or doesn't work when they're at work or at their friends' house. Or doesn't work when they roam onto the wrong mobile network. Actually, none of those things; the complaint will be that it just doesn't work because the average consumer has no idea how to figure out that it's related to any of those things and shouldn't have to care.

Web3: The next generation of the web is here… apparently


Re: Ummm, Do you work in IT?

If you have one friend, it's possible but unlikely. We're talking about both of you changing your IP address at exactly the same time.

By the time you have five friends, it is vanishingly unlikely.

Recovery involves either being physically close enough to a friend for NFC to work or being on the same subnet as them.
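As a toy model of that claim: assume each device independently changes IP during the window while the notifications are in flight, with probability p (the value of p here is invented purely for illustration; real churn depends on DHCP leases, ISPs and mobile roaming):

```python
def all_change(n_friends: int, p: float) -> float:
    """Probability that you AND every one of your friends change IP
    address within the same notification window, assuming independence."""
    return p ** (n_friends + 1)

p = 0.01                   # made-up per-device probability for the window
print(all_change(1, p))    # one friend: about 1e-4 - possible but unlikely
print(all_change(5, p))    # five friends: about 1e-12 - vanishingly unlikely
```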


In what ways exactly? Not having Facebook scrape all your data for advertising? Not having advertisers insert themselves into your communications with friends? Not having nutjobs promoting content to you? Being able to communicate with people without a corporation trying to make money off it? Only sharing content with friends instead of friends+platform? Having granular levels of vouching for someone's actual identity? Being able to "delete" your identity without pleading with some corporate department?

I'm sure there are downsides, but there are some hefty upsides, too.


Yes, if everyone in your "circle" changes their IP address while the IP address change notifications are all in flight, you lose connectivity. Excuse me being skeptical whether this is a realistic situation. If only one person's IP address doesn't change while the notifications are in flight, they'll receive all the notifications and then everyone else will (eventually) ask them where to find everyone else.

True that a lost/damaged/stolen phone poses problems. At least someone can't social-engineer your phone company into giving them your ID.


The thing is, web3 should be a thing and it should be completely decentralised... it just shouldn't involve cryptocurrency. It should involve cryptography.

OpenPGP has had almost everything you need for years. Here's a brief outline of how a decentralised social network works:

You start by installing an app on a device, which we'll call The App. When you first start The App, it creates a self-signed OpenPGP identity.

Next time you see a friend of yours, you convince them to install The App. You use NFC to cryptographically sign each other's identities - in OpenPGP terms, this is a "Positive Certification". On each of your phones, The App notes at which IP address they found each other.

Once you have a circle of friends created in this way, you might accept remote friend requests by certifying someone else's identity (and them certifying yours). These work in the same way as NFC certifications, but they are "Casual Certifications" rather than positive ones. You can gauge how likely it is that a friend request really came from the person it claims to come from by seeing how many of your friends have given them positive or casual certifications; their identity can be given a score by The App on this basis.

The App keeps track of how it contacts your friends (ie their IP addresses). Whenever your device's IP address changes, it sends a message to each of your friends saying, "Hey, my IP address has changed." You use your OpenPGP identity to sign this message so they can tell it's really from you.

Whenever you post new content, The App sends a signed message to inform all of your friends. The App on their devices can decide whether to download the content immediately from you or wait until a later time or ignore it entirely, based on user preferences, network conditions etc.

If The App tries to contact a friend and gets either no response or a response not signed with the right key, it starts asking all your other friends in turn, "Do you know where this identity is?" If no-one has a valid location for them, it means that everyone's device has changed IP address simultaneously (or close enough that the address change notifications didn't get through). It's not entirely impossible - say if everyone in your circle turned their phone off overnight or there was a really major internet outage or something. But on the whole, it's pretty unlikely to happen. And it's mitigated in two ways. Firstly, The App on devices on the same subnet uses IP multicast to find each other and check whether they've signed each other's identities. And secondly, friends who are physically next to each other can use NFC to reconnect. If one person falls off the network somehow, it only takes reconnecting with one person to then reconnect with your entire network.

This is proper social networking. It's not mediated by anyone; you decide who you trust, what you want to see, what you share with whom. Nothing is stored on a server anywhere; the only server involved in the whole damn thing is the one you install the app off of. There is no way for advertisers to advertise on it. There is no way for political parties / conspiracy theorists / antivaxxers / whatever other nutjobs to push their content unless you actually know them. Implementing end-to-end encryption of all the content is trivial, if that's what you want; at any rate, it's all signed. Decided you don't like your identity and want to start fresh? Just uninstall the app and reinstall it. You'll have to reconnect with all your friends with your new identity, but that's what starting fresh is actually like.
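The location-tracking part of that scheme can be sketched in a few lines. Everything here is invented for illustration: a real implementation would use OpenPGP public-key signatures (e.g. via GnuPG) rather than the HMAC stand-in below, and real sockets rather than direct method calls.

```python
# Sketch of the signed "my IP changed" notification and the gossip-based
# recovery described above. The HMAC "signature" is a stand-in: with real
# OpenPGP, verification would use the sender's public key and the secret
# would never be shared with the recipient.
import hashlib
import hmac

class Peer:
    def __init__(self, name: str, ip: str):
        self.name = name
        self.ip = ip
        self.secret = name.encode() + b"-secret"   # stand-in for a private key
        self.locations: dict[str, str] = {}        # friend name -> last known IP

    def sign(self, message: bytes) -> bytes:
        # Stand-in for an OpenPGP signature over the message.
        return hmac.new(self.secret, message, hashlib.sha256).digest()

    def befriend(self, other: "Peer") -> None:
        # The NFC certification step: exchange identities and locations.
        self.locations[other.name] = other.ip
        other.locations[self.name] = self.ip

    def change_ip(self, new_ip: str, friends: list["Peer"]) -> None:
        # Sign and broadcast "my IP address has changed" to all friends.
        self.ip = new_ip
        msg = f"{self.name} is now at {new_ip}".encode()
        sig = self.sign(msg)
        for f in friends:
            f.receive_update(self.name, new_ip, msg, sig, self.secret)

    def receive_update(self, name, new_ip, msg, sig, sender_secret) -> None:
        # Verify before trusting (sketch shortcut: re-derive the HMAC).
        expected = hmac.new(sender_secret, msg, hashlib.sha256).digest()
        if hmac.compare_digest(expected, sig):
            self.locations[name] = new_ip

    def locate(self, name: str, friends: list["Peer"]):
        # Ask friends in turn: "do you know where this identity is?"
        if name in self.locations:
            return self.locations[name]
        for f in friends:
            found = f.locations.get(name)
            if found:
                return found
        return None

alice, bob, carol = Peer("alice", "10.0.0.1"), Peer("bob", "10.0.0.2"), Peer("carol", "10.0.0.3")
alice.befriend(bob)
bob.befriend(carol)
bob.change_ip("10.0.9.9", [alice, carol])   # signed notification to friends
print(alice.locate("bob", []))              # alice learned bob's new address
print(alice.locate("carol", [bob]))         # alice finds carol by asking bob
```

The last line is the recovery path: alice has never met carol, but can find her by asking a mutual friend, which is exactly why only one surviving connection is needed to rejoin the whole network.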

There are three problems:

* Almost every device is behind a NAT gateway these days. This makes direct connections between devices impossible to do with any reliability. Once IPv6 is universal and every device has a publicly routable IPv6 address, this problem will go away. We are not there yet. I would not be surprised to find that Facebook is actively discouraging ISPs from implementing IPv6 to prevent exactly this sort of thing.

* There is no way to monetise it. Or not that I can think of. You could perhaps sell the app. But someone will just write a compatible client. It will be worse and have crypto backdoors and will inject advertising into the network but it will be cheaper than yours and people will use it. Which leads to the third problem:

* No-one has done it yet.