Re: At Last! A Silver Lining!
As if. The memory will be wasted on giant LLMs and huge volumes of AI codeslop.
56 publicly visible posts • joined 2 Oct 2012
Typically machines where upgrading is nontrivial. eg app depends on some library where there's been a compatibility break between old and new versions - PHP5 and Python 2 being some older examples. It's easier to pay for LTS than to invest time in porting the code. Maybe the app has no future (was due to be decommissioned but its replacement timeline has slipped) and it just needs to hang in there for a bit longer.
Another one is proprietary apps that are built for an older OS, eg linking with libpng12, libncurses5, libtinfo5, libudev0 are ones I come across regularly. Personally I just hack these (symlink libudev0 to libudev1, seems to work) but if it's a regulated environment maybe you can't do that. Again it's not always trivial to update to a newer version of the app, especially if there are unwanted changes to other functionality (ie version N+1 is not a drop-in replacement for N).
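For what it's worth, the hack is just creating the old soname as a pointer to the new library - here's a sketch in Python (paths are made up; on a real box it's an `ln -s` against /usr/lib, and it only keeps working for as long as the newer ABI stays close enough):

```python
import os
import tempfile

# Sketch of the libudev0 -> libudev1 symlink hack. Everything happens
# in a temp directory so nothing on the real system is touched.
libdir = tempfile.mkdtemp()

newlib = os.path.join(libdir, "libudev.so.1")   # what the distro ships
with open(newlib, "wb") as f:
    f.write(b"\x7fELF")                         # stand-in, not a real library

oldname = os.path.join(libdir, "libudev.so.0")  # what the old binary asks for
os.symlink(newlib, oldname)                     # point the old name at the new lib

print(os.path.islink(oldname))  # True
```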
AIUI the stop sell is only where FTTP is available. ie if so, you won't be able to order FTTC or ADSL. If no FTTP is available then you can still order whatever connection you have.
I think in stop sell areas contract renewals will also push people into FTTP where they have it. Not sure what happens to people who are on rolling contracts who never re-contract.
OneNote has a complicated history.
Around 10 years ago MS released the UWP version, "OneNote for Windows 10", which was separate from the classic OneNote 2016 that was part of paid Office. ONfW10 was pre-installed on some Windows 10 systems and was a derivative of the web version, which put everything in the cloud.
Then they announced that everything was moving to the UWP version and they would sunset classic OneNote, with OneNote 2019 as the last offline version. The UWP version was still missing a lot of features from classic and the expectation was they'd be added to UWP so there was feature parity.
This didn't happen. Classic OneNote is still part of O365. Now MS are reversing course and deprecating the UWP version in favour of the original Win32 version they had all along, a codebase that dates from 2001.
So there shouldn't be any problem with ON going away, or loss of access to files (you can download .one files from OneDrive now to use offline or open in other software), it's just a useless crippled UWP which is going.
I normally run about 200-600 tabs open at a time. This is made wieldy by some userChrome.css that gives me 6 rows of tab bars, and a plugin that discards memory from idle tabs (when you click on them they reload). This is in 16GB RAM on an underpowered lightweight laptop, not a desktop monster.
Why? Tabs are my short term cache. If I'm working on a project I open tabs for all the various options. For example if I'm shopping for something then I might have dozens of ebay/AliExpress/manufacturer/... listings open. I keep them open until the project is completed. I usually have a dozen or so projects on the go at once, so if I don't find what I'm looking for I leave the tabs open until I've completed it (eg made the purchase) and then close a slew of tabs at once.
I admit that Firefox's tab closing isn't great (you can at least select a run of sequential tabs with shift-clicks), which can make the tabs get out of hand, particularly if the sites' icons aren't easy to identify. But you can also search for open tabs, which is very handy.
I could do this with bookmarks, but bookmarks persist and I don't want the overhead of managing a bookmark database. Tabs are much more conspicuous, so I can see my current open projects at a glance.
The problem with WA (and maybe Signal, I don't know) on dumbphones is those companies tend to churn their protocol quite regularly, and say 'unless you're running the app version x.y.z or later it'll stop working soon'. The WA clients built into dumbphone OSes tend not to get many updates with the result that you get maybe a couple of years of WA working and then it breaks. It's only if your dumbphone is actually running Android under the hood that you can keep up with the WA client updates.
I use FF because of the following:
1. Ublock Origin (uses the original manifest v2, not the crippled manifest v3 in chrome)
2. UserChrome.css where I install a patch to allow 6 rows of tabs. I can't use another browser where a single row of tabs limits you to 20 or so. (A vertical tab bar just wastes more space without allowing more tabs)
3. A tab sleep plugin (can't remember what it's called) that makes 200+ tabs in a window manageable on a 16GB machine.
Yes, I know, I'm weird.
Particularly for batteries - what good are 7 years of updates if the battery needs replacing after 3 or 4 years and you can't buy a replacement?
This was a habitual problem when Samsungs had removable batteries - you either had to buy a generic no brand battery, or a 'genuine' battery that was a fake because Samsung only produced them for about a year. So you could only buy sketchy batteries from dubious sellers, whether or not they had Samsung written on them.
An unlamented cloud provider did that by reflashing thousands of elderly servers with a management firmware image that failed to boot. It might have been fixable by individually desoldering and rewriting the flashes, but a big task to do that when you've just bricked thousands of machines. Probably cheaper to scrap them and start again.
It's a pity that phone brands who focus on hardware features don't have good software platforms to build on. If this was a laptop it would come with Windows and then the brand could do their hardware party tricks - like Toughbook do. Microsoft look after updates for a decade or so, the manufacturer doesn't have to worry.
But because Android is such a mess they have to buy a platform from Mediatek or Qualcomm that is abandoned in a few years and so the phone is junk long before the hardware is. I'd guess a lot more rugged phones are being scrapped for software rather than broken screens, water damage or whatever.
This will actually affect me. I have a tablet that I'm planning to fix to the 3d printer as a control panel, but since it's running a first generation Atom there's no 64 bit support. Perfectly good hardware, it's got one job to do and it'll do it just fine. No reason to upgrade, new hardware won't be the right form factor unless chosen very carefully, and will add nothing to the chosen application.
Back in 2013 I was deciding whether to buy a 15" Macbook Pro or a Chromebook Pixel. They were about the same base price (the Pixel was fancy for a Chromebook), the hardware was comparable. But I could spec the MBP with 1TB of (removable) storage, while the Pixel only had 32GB of soldered SSD.
I went with the MBP and I still use it daily (it's my videoconference and Powerpoint machine). It's not receiving OS updates any more but is still getting security patches. Meanwhile the Chromebook went out of support in 2018, and the 32GB SSD meant that I couldn't have done very much with it even if I installed Linux.
Things have improved on the storage front, but there are still Chromebooks with 64GB or 128GB of soldered SSD. Which is fine if you want to use them for some appliance function, but not so good as a daily driver.
Now if all machines were forced to include upgradeable storage, then things might be different...
'Chris' is a bot: it does speech recognition, looks for pauses in your voice and then selects a phrase to play from a dictionary. It's designed to make you agree to having an appointment, and then it says that somebody will ring back to confirm a time. Once they've got you to agree, a human will ring back. It's actually quite a smart way to screen for victims (from the scammer's perspective, anyway).
Lenovo pricing is all smoke and mirrors - I've seen 50% discounts between the web pricing and account quotes. Probably not as much on this one, but wouldn't be surprised if it drops a hundred or two.
Anyway, it's a thin-and-light, and there's often a price premium for those (Macbook Air, XPS) over a chunky laptop (Vostro, Latitude etc).
It is very much not simple, but there is a device tree for this laptop in the kernel. So I suspect a suitably-compiled kernel could be persuaded to boot on it, and then you can run a distro of choice - maybe not perfectly, but enough to try it out.
(it's the same chip as the Windows Dev Kit 2022, and that one doesn't have an official device tree. Folks are working on that)
The thing is that that economic balance can change over time. For example, just imagine there was a global pandemic and all the delivery and shipping services were disrupted - You couldn’t obtain the replacement modules, so you had to make do with what you could get. Or the manufacturer goes bust and the parts aren’t available any more. The classic car world is familiar with aftermarket suppliers and DIY lash ups for long gone parts, and this would be similar.
My interpretation of one of Louis Rossmann’s rants on this subject is that they changed the I2C address of the chip, which is a kind of thing chip manufacturers do when they put out a variant chip.
So the Apple version replies when you talk to address (for example) 0x68 and the generic one only replies when you talk to 0x60. If you swap in the generic one the Apple firmware talks to 0x68, doesn’t get an answer and fails stop. Or it carries on, but if the command is ‘turn on the main power rail’ then not much progress is going to happen beyond there. This low level code may be in the SMC microcontroller which we can’t change even if we hacked up our own OS patch.
That’s just my surmising as an informed civilian though. I could be wrong.
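To make the surmise concrete, here's a toy model of that fail-stop probe. The addresses (0x68 for the Apple variant, 0x60 for the generic part) are the illustrative ones above, not datasheet facts, and the 'bus' is just a dictionary, not real I2C:

```python
# Toy model of the SMC bringup described above: firmware that only
# knows to look for its chip at one hard-coded I2C address.

class I2CBus:
    def __init__(self, devices):
        self.devices = devices            # {address: part name}

    def probe(self, address):
        return self.devices.get(address)  # None stands in for 'no ACK'

def smc_bringup(bus):
    """Firmware that only ever asks for the Apple variant at 0x68."""
    chip = bus.probe(0x68)
    if chip is None:
        return "fail-stop: no response at 0x68"
    return f"power rail up via {chip}"

print(smc_bringup(I2CBus({0x68: "apple-variant"})))  # power rail comes up
print(smc_bringup(I2CBus({0x60: "generic-part"})))   # fail-stop
```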
The aircraft that comes out of the hangar is G-ZKBA, BA's first 787 which was delivered to LHR last week. If you watch carefully at 1:44 and 2:15 the airframe is labelled line number 233, which is apparently JY-BAF, an aircraft delivered to Royal Jordanian almost a year ago. An aero engine geek (not me) could probably tell the difference because G-ZKBA (line number 346) is fitted with Rolls-Royce Trent 1000 engines, while JY-BAF has GE GEnx-1B.
For the record G-ZKBA took exactly two months from loading to first flight... only 60x as long as the Wellington bomber.
The menu bar is a mess for me. It doesn't fit on one line, so it spills over onto a second line, which is overlaid on top of whatever the text is:
http://tinypic.com/r/igvp74/8
On comment threads, the whole content column moves to the right to avoid it:
http://i60.tinypic.com/5lq4y9.png
until I scroll down, when all the content pops to the left:
http://i61.tinypic.com/jpua8l.png
It's very annoying to have the content jumping about as I scroll up and down.
I've enlarged some of my browser fonts to make them more readable (it's a 15" retina Macbook at 2048x1280, so the native size is a bit small - I've set the minimum font size to 17pt), but I thought CSS was supposed to deal with getting the layout right?
Did a signup process for a financial institution recently. Authentication secrets in the post, all very secure. But then the letter with the random key arrives and says:
"you must enter your postcode in upper case with no spaces"
Err, perhaps somebody ought to introduce them to toupper() and isalnum() ? How hard can it be to write 3 lines of web form validation code instead of wasting the time of a million humans?
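Something like the following, even - a sketch of the normalisation the server could do instead (assuming what they want is the alphanumeric characters in upper case, as the letter implies):

```python
# The three-or-so lines of validation the site could have shipped
# instead of the instruction sheet: drop anything non-alphanumeric,
# then upper-case what's left.
def normalise_postcode(raw: str) -> str:
    return "".join(ch for ch in raw if ch.isalnum()).upper()

print(normalise_postcode("sw1a 1aa"))    # SW1A1AA
print(normalise_postcode(" EC1A-1BB "))  # EC1A1BB
```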
Just bought a Samsung 40HU6900 40 inch TV for work - I do a lot of CAD, so a big monitor is very handy. Post-World Cup prices are plummeting: at release in early May it was £1000, now it's £639. We bought from John Lewis for £729 inc 5 year guarantee (we have a ton of first-generation 2560x1600 panels with faults where they overheat and fail: worth paying a bit extra to avoid early-adopter risks like this)
It's a nice display, with two main issues. One is that the graphics cards haven't caught up: the TV has HDMI 2.0, but there's no hardware that outputs that. The alternative is DisplayPort 1.2, which uses a hack called multi-stream transport (MST) to pretend to the GPU that it's two displays. The Samsung doesn't have DisplayPort, so I'm on 3840x2160 at 30Hz and 4:2:x on my late 2013 Retina MacBook Pro 15". When a suitable HDMI 2.0 GPU comes out I'll use that on my Linux box. The chroma downsampling is slightly annoying, but it's OK, and I'm not bothered by 30Hz as I don't game.

This was about the only affordable 40" panel I found: the alternatives were a variety of 28" TN models, which I suspect all have the same panel inside. A 28" 16:9 UHD panel would have been worse than the 30" 16:10 2560x1600 panel I previously had, hence going for 40". It also has 4 HDMI inputs, while the previous one only had 1x dual-link DVI, so I had 3 monitors on my desk for different machines.
The other issue is that, being a TV, it's laden with crapware. What TF is 'football mode' and why TF would I want it? There are also tons of 'smart' (arse) features that I don't want, like a web browser and apps. However, by not connecting the TV to the internet most of these mercifully don't work. More annoying are the 'picture improvement' features, which just serve to mangle the picture. I think I've turned most of these off now - the worst was something called 'Motion Plus', which was a special 'blur all scrolling or typing' feature. The most important feature, the 'Source' button on the remote control to select input, works OK - a few more clicks than the usual monitor button, but I'll survive.
The other useful feature would have been picture-in-picture, but that only works if one source is the TV tuner. It's also slightly reflective (less than my Mac, but more than its predecessor), and doesn't have an adjustable stand. If this annoys me sufficiently I may find a VESA-mount stand. I put it on the power monitor: depending on backlight brightness it takes a fairly constant 70-150W, dropping to 50W in 'where's my signal gone' mode and 450mW in standby. Poking about with other picture settings didn't change the power numbers.
So, in summary I'm about 80-90% happy. For the money it's a decent monitor, but be prepared to turn lots of stuff off to make it usable.
Ah, appears the diversion via St John's is a research cruise to measure water circulation fluxes in the subpolar North Atlantic.
Here's our plucky adventurer from the James Clark Ross end of the telescope:
http://ukosnap.wordpress.com/2014/07/08/fantastic-views-of-rockall/
It wouldn't necessarily solve it, but it might help. The main issue is relying on a tool as both editor and version manager. If it screws up, you've lost your version history. That's an eggs-in-one-basket risk avoided by using an external tool.
If you have an external VCS you've at least got guaranteed access to everything in the history. Some of those might be corrupt, but there will be a known-good version. You can diff the last known-good against the first corrupt to see what changed. It may or may not be straightforward to port that change forward into the latest version.
Plus you can see what you're changing - if the editor decides to reduce the file size to zero bytes, diff will show you that before you commit.
The main headache is that they don't always play nicely with binary files, but as mentioned there may be a plugin to support zipped files, which would help in this case.
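For a text-ish format, the 'diff known-good against first corrupt' step is a one-liner with the standard library (file contents here are invented for illustration):

```python
import difflib

# Diff the last known-good version against the first corrupt one to
# see exactly what the editor mangled.
good    = ["title: Q3 report\n", "total: 1042\n", "status: draft\n"]
corrupt = ["title: Q3 report\n", "total: \n",     "status: draft\n"]

diff = list(difflib.unified_diff(good, corrupt,
                                 fromfile="known-good", tofile="first-corrupt"))
print("".join(diff))  # pinpoints the mangled line before you commit anything
```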
It does rather make the point that Proper Version Control (you know, those things with 3-letter names) would have no trouble here, as it's decoupled from the editor in question. I suspect it's probably not as good for diffs in Word etc, but you do at least have the history going back to commit #1.
There has been a resurgence in people building hardware. This is good.
However, a lot of it is quite simple stuff. I'm getting a bit tired of endless Arduino-style projects, involving an ATMega, a few bits and pieces wired to GPIOs, and a pile of C code. Sure, it's hardware, and sure there are plenty of gadgets out there that are like that (what does your smoke alarm do?) but how far can you go with this approach?
Are these startups building phones or laptops or servers or basestations or fancy RF things...? The hard engineering (20 layer PCB, 10GHz signalling, DDR3/4, PCIe gen 3) isn't happening outside large companies. The one place it is happening is China - the Shanzhai are making phones and tablets, which requires some decent engineering.
Maybe this is a function of commoditisation - BeagleBoards and Raspberry Pis already exist as components so we don't have to do that tedious work. But if you hit the limit of what's possible with them, you have a very steep wall to climb.
So IoT is the latest buzzword. Maybe you can do all of that with an Arduino. But if the volume's there it's almost certain that someone can do it cheaper and lower power with an ASIC - and which of these startups is doing ASICs? And if the volume isn't there, is there enough revenue to make it worthwhile except for niche products?
Someone recently said 'heavy semi[conductor development] is like steel and railroads' - in other words, needs lots of investment of money and time. Board-level stuff is less, but to do anything complex still ain't cheap or easy.
If you're a big webby company, scale up your password reset system just as you scale the rest of the site. Don't host it on a 486 in the basement, because when things like this happen...
On the question of salt, they could store each old hash with its own salt, and check the new password by hashing it with each salt in turn and seeing if it matches. That would be more work, but no less secure than individually salted hashes. The password database would be larger, but the old hashes would be purely for elimination - compromising one would only reveal a deactivated password.
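A sketch of that scheme (the choice of scrypt and its cost parameters here are illustrative, not a recommendation):

```python
import hashlib
import os

# Keep a (salt, hash) pair for each retired password, and test a
# candidate by hashing it once per old salt - more work, but each old
# hash stays individually salted.

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

def retire(password: str) -> tuple:
    """Store a deactivated password as its own (salt, hash) pair."""
    salt = os.urandom(16)
    return (salt, hash_password(password, salt))

def was_used_before(candidate: str, history: list) -> bool:
    # One hash computation per retired password.
    return any(hash_password(candidate, salt) == h for salt, h in history)

history = [retire("passwordA"), retire("passwordB")]
print(was_used_before("passwordA", history))       # True - rejected
print(was_used_before("password201405", history))  # False - sails through
```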
It's a rather curious approach though - what's the threat model from re-using old passwords? (I note Google prevents that too). It would only make sense in an enforced changing regime (when it prevents swapping between 'passwordA' and 'passwordB' every month - but can't detect 'password201405')
It seems to me that the audiophile 'industry' is a bit like the whisk(e)y industry. We solved the problem of turning grain into alcohol a long time ago - the purity of industrial distillate is pretty good these days. But pure alcohol isn't what people want. It's all about the impurities - all those peaty, smoky, earthy notes, botanicals, colourants, whatever. The more impurities it's managed to acquire the better. That's why it's left sitting around pickling bits of tree for a very long time.
I wonder whether it's the same for 'audiophiles' - actually they like small amounts of distortion and it doesn't 'sound right' if they aren't there.
The good news is this is easy to game - just add a DSP which introduces the 'right' distortion, sell it for $5000, profit.
Which, incidentally, doesn't seem far off what 'Beats Audio' does today.
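For the avoidance of doubt that it really is trivial, here's a sketch of the racket - the squared-term waveshaper and the 0.1 'warmth' blend are entirely made up, tune to taste and invoice accordingly:

```python
import math

# A waveshaper that stirs a little even-harmonic 'warmth' into the
# signal. Pure narrative joke made runnable: the distortion law and
# blend factor are arbitrary.
def audiophile_dsp(sample: float, warmth: float = 0.1) -> float:
    return (1 - warmth) * sample + warmth * sample * sample

clean = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(100)]
warm  = [audiophile_dsp(s) for s in clean]
print(any(abs(c - w) > 1e-9 for c, w in zip(clean, warm)))  # True: 'improved'
```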
So they took a bog standard £500 laptop and put in it:
An Intel Atom (why Android x86? Did they get CPUs on BOGOF from Intel or something?). Equivalent ARM SoC would be say $20.
2GB RAM: $15
16GB flash: $15
Battery 19Wh: $30
A dock connector: $5
And want to charge £900 for the privilege?
Since I haven't seen this anywhere, here's /proc/cpuinfo (I finally managed to start telnetd and get in, using ethernet):
Poky 9.0.2 (Yocto Project 1.4 Reference Distro) 1.4.2 clanton
/ # uname -a
Linux clanton 3.8.7-yocto-standard #1 Tue Oct 1 00:07:32 IST 2013 i586 GNU/Linux
/ # cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 5
model : 9
model name : 05/09
stepping : 0
cpu MHz : 399.076
cache size : 0 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : yes
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 7
wp : yes
flags : fpu vme pse tsc msr pae cx8 apic pge pbe nx smep
bogomips : 798.15
clflush size : 32
cache_alignment : 32
address sizes : 32 bits physical, 32 bits virtual
power management:
It even ships with the f00f_bug! Welcome to 1997 all over again!
Also as an application processor there is no software ecosystem. There's tons of x86 Linux distros out there, but it won't run any of them (without yet-to-be-done hackery). Yocto is just getting off the ground as an embedded system platform, but it's not a 'hacker' OS like Debian or even OpenWRT, it's more an 'appliance' OS - make a change, rebuild all the packages, run the regression tests, signoff by management, ship firmware v1.23.45.6 to the factory. You aren't really intended to ssh in and run emacs to change things.
They provide the Arduino ecosystem but it's no good as a microcontroller.
So I'm yet to work out what it /is/ good for.
NUC is advertised for 'digital signage' - which is fine if you have a mains plug, but no good if you need to integrate into an existing setup (which only has 12V for example).
PC motherboards are fine except they aren't small. I can't easily slip one inside my product as you could with a Pi-sized thing. The other problem many of these boards have is connector placement: I can't mount one on a spare bit of back panel to make the ethernet available, because I need to make space for the ruddy VGA connector to stick out, and the back panel needs to be 170mm tall to accommodate an ITX motherboard (mounted vertically, as the rest of the case is used for something else). These boards are also not thin, having big heatsinks and airflow requirements.
You might have been discussing the speed of generic I2C in an ideal world, but the rest of us were discussing the speed of the GPIO provided on the Arduino headers of this board. As far as Arduino-land is concerned it's just GPIO; it shouldn't matter how it's wired internally. Except that a flat-out rate of 230Hz (that's 2ms per transition) rules out a lot of things where you need to drive any kind of protocol via the GPIO - it's way too slow. (In my case I'd want to use it as a JTAG master, which wants kHz or MHz and has no hardware acceleration in common microcontrollers).
I/O timing is here:
https://communities.intel.com/message/207904
Essentially, I/O mediated via I2C can go at 230Hz (not kHz or MHz) maximum. The 2 pins that don't go via I2C can go at just under 3MHz (it's unclear what the jitter is).
SPI and PCIe are all very well, but the whole point of the Arduino footprint is access to raw GPIOs to wire to things. PCIe is very awkward to deal with unless you can build a gigahertz PCB (not straightforward) and an FPGA to receive it. SPI is potentially useful but needs extra chips to break out into GPIOs. All of these have increased latency over a simple GPIO.
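The arithmetic, for anyone wondering how bad 230Hz actually is for something like JTAG (framing simplified - TMS traffic and capture cycles ignored):

```python
# Back-of-envelope: why 230Hz GPIO is hopeless for bitbanged protocols.
# Rates are the ones from the posts above.

transition_rate = 230   # transitions/second through the I2C-expander pins
bits = 32               # a single 32-bit data register shift
edges_per_bit = 2       # TCK has to go high and low for every bit

seconds = bits * edges_per_bit / transition_rate
print(round(seconds, 2))  # 0.28 - over a quarter of a second per 32-bit shift

# The two direct ~3MHz pins would do the same shift in tens of microseconds.
```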
I received one of these on Monday. My previous experience was trying to use a first-generation Intel NUC in an embedded application, which led me to conclude that Intel doesn't really understand embedded (it needs a 19V power supply, WTF?). Let's see if Galileo is different.
Unboxing with Galileo, there's the board, a power supply, and a booklet of disclaimers in umpteen languages. No instructions at all.
OK, go off to the website to read the quick start guide. Set up the Arduino software, it tells me to update the firmware. But my board has newer firmware than is available in the download bundle, fail.
Right, try the LED blink demo, that works.
Now, I want this as a Linux box, so let's see if we can get Linux up. Write an SD card as per the instructions. Put it in and power up. Nothing happens. No serial output, no LED flash.
Of course there's no display so I can't see if it's booting. Reading the instructions, boot messages and EFI menus go to the serial port. Which is not the USB port I'm attached with, but the weird 3.5mm jack. For which no cable was supplied. The instructions helpfully say you need to make a serial cable. But most computers don't have serial ports any more. 3.3V serial to USB dongles are common now, and I have one available. But the jack socket is RS232 levels, which it won't do. So I need to make a 3.5mm to DB9 adaptor, and then have a full RS232 to USB adaptor so I can plug it into a computer.
To even see the boot messages.
After all this palaver (ie some minutes), the LED is flashing, so something must be happening. But I don't have ethernet handy (I'm at work, getting stuff on the network is time consuming, laptop has no ethernet port), so I have no other means of interacting with it. /dev/ttyGS0 is the Arduino programming USB serial provided over a USB Gadget driver, which is fine except ttyGS0 doesn't work until the board has booted - and you can't enable a terminal on ttyGS0 without having the board booted and already logged into it. Normal Arduino boards have a USB-serial converter onboard which solves all these problems - not this one.
The SD card is the kernel and a .ext3 file on a FAT partition, so I can't even try traditional mount-the-SD-on-a-PC tricks. (Well I could loopback mount the .ext3 but this is getting awkward)
Plus the distro is weird (Yocto). It doesn't run vanilla distros like Debian, so I can't just image an SD card and go. (Actually someone has almost managed that, but it doesn't like libpthread for some reason, and images aren't yet available). Yocto looks OK for deploying embedded Linux in a commercial environment, but rebuilding your distro from scratch isn't something you want to do when casually hacking.
Of course, none of this is mentioned in the quickstart guide - and there's only a fairly sparse forum to fall back to.
Hardware-wise, there are two full-speed I/O pins, the rest are via I2C. That's hopeless for bitbanging any kind of protocol. It's actually a Linux box running an 'Arduino environment' as the only process. A worse idea I couldn't imagine. Why did they think Arduino was a sensible environment to target?
And the Quark chip runs finger-burningly hot.
I'd really love a small, cheap, Intel board with either GPIO or USB (it's to be a JTAG server for some third-party JTAG tools that's only built for x86). But I'm wondering whether to cut my losses at this point as they clearly have no idea. Maybe someone will do a Raspbian for it and solve all the problems - until then it'll live in the ever-growing pile of abandoned dev boards.
My school was an anti-Acorn school. They had RMs and then Macs. When they had a big throw-out of hardware (RM 186s and 386s, smashed Mac Pluses) I saved them from the skip. I think there were some 380Z/480Zs, but those went before my time.
The RM Nimbus 186 was not a pretty design - ribbon cable buses for expansion cards. I spent quite a while trying to port ELKS (Linux for 8086) to it - eventually gave up because I couldn't find any documentation on the weird Nimbus hardware. Even salvaging most of the RM software when they cleared it out was no help - I have Autosketch and Windows 1.03 but nothing particularly useful - and they wouldn't run most DOS software. Most software ran in the BBC BASIC emulator. I tried for a while to find their network OS for Z-Net (their peer-to-peer serial network) - I think it was Microsoft Networks (long before MSN the online service). Never found anything useful. How hard it was to find anything before the internet age.
The RM 386sx16 was OK - at least it ran Windows 3.1. My high point was running Linux, X and Netscape in 4MB of RAM. 30 pin SIMMs were a pain though.
I still have an RM Pentium 75 - ran a floppy Linux distro as a router until a couple of years ago. I didn't touch the hard drive which still has their Window Box software - how to make Windows 95 unusable.
I think there's a pattern here - RM took theoretically decent hardware and worked out how to make it almost useless...
Once MS get over imposing this artificial cliff, there are plenty of more nuanced options they could take.
For example, charge a subscription for updates. Maybe there could be two tiers of subscription - the gold 'we support everything in XP' and the bronze 'we reserve the right to disable functionality if it's too much of a pain to secure' .
Also impose further conditions, like not being able to activate new XP licenses or transfer old ones. So it will die with the hardware. Though I haven't thought through all the second-order effects (prices of secondhand XP machines will rise, maybe a blackmarket in XP transfers).
The biggest headache is those XP machines that will stop receiving updates and become zombie fodder, because nobody is paying attention to them. I can't think of a solution for that case - short of the last update formatting the hard drive and setting fire to the network card.
So we have a face off: Microsoft v half a billion people.
MS are turning off support for XP simply because they want people to pay up for a new version. There is no other reason - it's not an edict from God or a Security Council resolution. They'll still be fixing the security holes for their 'special' clients. It's purely a commercial decision not to provide them to everyone else.
MS might find that people aren't prepared to go along with their plans, and will carry on using XP. It'll be interesting to see who yields first. My money is on MS: easier to fix Microsoft than to fix half a billion PCs.
In law, anything made available [i]is[/i] published. In the old days you'd see an advert in the back of the local paper "Secrets of Reincarnation. Send 29p to PO Box blah, London N1 blah". It doesn't matter that you got back a handwritten badly-photocopied sheet, that's a publication. Same goes for something on a random website. Doesn't matter that three people have asked for it, it's 'made available to the public'.
If it's password protected, that's not a publication. It's not made available to the public, it's made available to your Aunty Joan only. Same goes for an internal document. It may be a memo from Bill Gates to a hundred thousand minions, but it's not made available to the public and thus is not a publication.
A grey area is hidden links. I can put a private document on my website and tell only you the URL. That's not a public document. But if your email is hacked and the URL is leaked so that crawlers pick it up, arguably that becomes a publication.
The question I want answered is the one I keep asking about clouds. So, you've given me 1TB of cloud storage instead of local storage. How do you propose I get my data into this cloud on my (fast for the UK) 2Mbit domestic upload bandwidth? I make that 46 days nonstop at full throttle - and that's not accounting for the fact that I'm probably limited to a few tens of GB per month.
And I couldn't even make 3G behave itself in *central London* today - uploading my files at tens of KB/s - don't make me laugh.
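The sums behind the 46 days, in case anyone doubts them (decimal terabyte assumed, link fully dedicated to the upload):

```python
# Sanity check on the 46-day figure: 1TB up a 2Mbit/s link, flat out.
terabyte_bits = 1e12 * 8   # 1TB (decimal) in bits
uplink_bps = 2e6           # 2Mbit/s domestic upload

days = terabyte_bits / uplink_bps / 86400
print(round(days, 1))      # 46.3 days, assuming nothing else touches the line
```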
It would make a nice Linux machine, except for the braindead lack of storage.