
ZSYS due to be removed from Ubuntu installer
See "[FFe] Remove zsys from installer " https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1968150
I was expecting there to be a catch in all this since NTT doesn't mention the ACTUAL distance record they broke, but after a bit of ducking (DDG) I find that the distance record for 800Gbps was 970km, set in September 2020 [0], and that has probably been surpassed since. So this new technology from NTT is definitely a significant advance - especially as it seems to be out of the R&D stage and looking to scale into production.
[0] https://opticalconnectionsnews.com/2020/09/ciena-breaks-800g-distance-record/
... for Google.
With each passing product it becomes ever clearer that the only purpose of these products and services is to vacuum up data about people, things, places, and the relationships between them, to feed the Google advertising engine.
Once they've wrung the good stuff out of a product or service they cancel it - it was never about providing service to a user (or possibly - gasp - even a customer!)
If this OS install has grown over time with in-place incremental upgrades it makes a lot of sense. Logical Volume Management (LVM) has gained features over the years that probably were not available when LVM was first adopted.
Physical > LVM > RAID > LVM is probably due to LVM not supporting RAID modes originally, so it is likely Multiple Device (MD) RAID - probably a RAID-1 mirror.
My guess would be that the install was originally on a single HDD. As more storage is required it is far easier to manage it flexibly via OS services (e.g. LVM) than via hardware RAID. So: add more physical HDDs/SSDs, "pvcreate; vgextend", and then "lvextend" for those volumes needing more space.
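Roughly, it looks like this (a sketch - the device, volume group, and volume names are illustrative, and an ext4 filesystem is assumed):

$ pvcreate /dev/sdb                  # initialise the new disk as an LVM physical volume
$ vgextend vg0 /dev/sdb              # add it to the existing volume group
$ lvextend -L +500G /dev/vg0/data    # grow the logical volume that needs the space
$ resize2fs /dev/vg0/data            # grow the filesystem to match (or use "lvextend -r")

All of this can be done while the volumes are mounted and in use.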
So over time, without any major OS re-installation, using several physical HDD/SSDs, the host has a RAID mirror with OS and data volumes on top.
Nowadays LVM supports RAID modes natively (using the kernel Device Mapper (DM) MD RAID functionality under the hood) so the additional layer could be removed whilst the OS is operating without too much trouble (I've done this on multiple systems over the years). This is one of the delights of using LVM - being able to re-shape storage architecture quite fundamentally whilst the system is live (including more exotic options like adding iSCSI block devices as LVM PVs to create remote mirrors).
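The conversion itself is nearly a one-liner (a sketch, assuming a volume group vg0 with a spare physical volume to hold the mirror leg):

$ lvconvert --type raid1 -m 1 vg0/root       # convert a linear LV to a two-leg RAID-1, live
$ lvs -a -o name,copy_percent,devices vg0    # watch the new leg resynchronise

The LV stays online throughout; LVM resynchronises the new mirror leg in the background.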
I've also done a similar live migration from 32-bit to 64-bit in-place (original 2007 install, host still in operation). Once the kernel is switched to 64-bit it supports both 32-bit and 64-bit user-space. At that point you can create a 64-bit chroot install with all the required packages, then copy over configuration files package by package and switch each running service from the 32-bit version to the 64-bit one in the chroot.
Eventually you have a 64-bit kernel with a 32-bit base system running all-64-bit services. At that point the boot configuration can be pointed at the 64-bit root file-system (a Logical Volume) and the system rebooted.
When doing this it helps to upgrade the 32-bit packages to the target OS version first, so that the package upgrade scripts handle most of the per-package configuration file changes for you. If skipping several OS releases it's unlikely that could be relied on to correctly handle all changes, and each package configuration would have to be checked and reviewed manually. Once that's done the switch from 32-bit to 64-bit should be straightforward.
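The chroot step itself is simple on Debian-family systems (a sketch - the release name, paths, and example service are illustrative):

$ debootstrap --arch=amd64 bullseye /srv/chroot64 http://deb.debian.org/debian
$ mount --bind /proc /srv/chroot64/proc    # expose kernel interfaces inside the chroot
$ mount --bind /sys  /srv/chroot64/sys
$ mount --bind /dev  /srv/chroot64/dev
$ chroot /srv/chroot64 apt-get update
$ chroot /srv/chroot64 apt-get install exim4    # then migrate services one at a time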
Distros will be backporting the fix from mainline [0] and/or the v5.16.2 stable tree [1]
author:    Jamie Hill-Daniel <jamie@hill-daniel.co.uk>  2022-01-18 08:06:04 +0100
committer: Linus Torvalds <torvalds@linux-foundation.org>  2022-01-18 09:23:19 +0200

vfs: fs_context: fix up param length parsing in legacy_parse_param

The "PAGE_SIZE - 2 - size" calculation in legacy_parse_param() is an unsigned type so a large value of "size" results in a high positive value instead of a negative value as expected. Fix this by getting rid of the subtraction.
[0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=722d94847de29310e8aa03fcbdb41fc92c521756
[1] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=v5.16.2&id=8b1530a3772ae5b49c6d8d171fd3146bb947430f
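The failure mode is easy to demonstrate (a sketch using shell arithmetic masked to 32 bits; the kernel's size_t is wider but wraps the same way):

$ printf 'PAGE_SIZE - 2 - size = %u\n' $(( (4096 - 2 - 5000) & 0xFFFFFFFF ))
PAGE_SIZE - 2 - size = 4294966390

Instead of going negative and failing the length check, the subtraction wraps to a huge positive value.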
Agreed about the numbers - but my primary point is the sheer complexity of verifying the combined effect of the huge number of dependencies in most large applications, especially where there is a frequent commit cadence across the application and its dependencies.
Also, with respect to CI/CD, those pipelines won't be doing a rebuild-and-test cycle on each commit or PR of all those dependencies - or randomising the test harness to reflect real-world client connections.
I can easily imagine one of the many dependencies introducing subtle, conditional, behavioural changes that don't do anything different when in a test environment but could trigger malicious payloads on very specific request parameters (IP address, referrer, user-agent, date/time, request parameters, etc.).
I fully agree with the sentiments regarding the typical JavaScript ecosystem's habit of pulling in miscellaneous dependencies without review, although I feel the language itself (sans strong typing) is as good or bad as any other, depending on the project requirements.
I get most upset by 'live' dependencies in web sites on third-party-served code: the code being served can be trivially modified by the server - based on the requesting user-agent identity, IP address, and other heuristics - to deliver a highly targeted malicious payload that the web-site/application developers could never trigger.
For $deity's sake copy and serve the verified code/resource to your own server on the same domain as the primary resource!
However, a similar dependency eco-system exists with Rust crates and GoLang imports.
I was quite interested in certain GoLang projects until I dug deeper and two things stood out in particular to me and my requirements:
1. On Linux, code making syscalls needed (at the time I reviewed it - may have changed) to run a C-language co-process to call into the kernel. This aspect introduced some 'interesting' complexities and rather spoiled some of the GoLang promise (and performance - learned via 'crun' - the C-language alternative to 'runc').
2. In typical projects the source code has an alarming number of import "github.com/user/project" lines referring to external dependencies fetched using "go get ...", so these external dependencies (and the graph of dependencies in a typical application-level project) have a similar security/review-cycle issue.
A similar issue applies for point 2 with Rust crates. Each Cargo.toml may well include lines of the form "some_external_library = { git = "https://github.com/SomeRandomAccount/SomeExternalLibrary" }"
Same applies to Perl with CPAN and others.
It seems to me there's a seesaw between Convenience and Trust, and currently it is tipped too far in favour of Convenience.
Trust comes from reviewing the code - either yourself or your team, or by people you trust (a typical web of trust). For example, in Linux distributions we typically grant the package maintainers implicit trust when installing dependencies.
The problem, and challenge, for 'import the latest from $pseudo_random source' is the lack of a web of trust for each version/release/commit.
Slight tangent, but related to a point BinkyTheMagicPaperclip brought up earlier: "Never, ever, blindly pull the latest version into your product without thorough testing"
"colors.js is incorporated into almost 19,000 other npm packages and gets 23 million downloads a week."
This scares/worries the systems engineer in me.
If on a WEEKLY basis 23,000,000 downloads (requires/imports) are being done across ~19,000 dependent packages, and if a similar relationship holds for other critical dependencies, that suggests a huge number of projects frequently iterating builds and deployments.
Bearing in mind this is a single package, the security exposure created by this practice across the Node.js ecosystem seems stark.
It's important to understand that ARNS (Aeronautical Radio-Navigation Service) operates in the 4200 - 4400 MHz range for transmit and receive. The issue appears to be a combination of ARNS receivers being sensitive to (strong) signals outside the immediate band and the cellular base-station signal strength in 3700 - 4000 MHz.
Historically the neighbouring band has been used for low-power services that do not 'bleed' into adjacent frequencies, so ARNS receiver designs didn't require tight band-pass filtering.
Frequency allocations ([2] slide 7):
3700-4000 MHz: Fixed / Mobile
4000-4200 MHz: Fixed Satellite
4200-4400 MHz: Aeronautical Radio-Navigation
"It should be understood then that any interference that is unpredictable and that can mix with the linear FM waveform, thereby causing the radio altimeter to mistake the mixed signal as terrain has the potential to cause a radio altimeter to report a false altitude. "
[0] page 9 "1.1 Radio altimeter modulation and receiver sensitivity"
Affected Fleet:
"All FAA Part 135 helicopters are now required to have an operational radio altimeter
◦ Approx. 22,000 operational civil rotorcraft
◦ Some FAA Part 91 aircraft require altimeters for certain operations such as Cat II ILS, etc.
◦ Approx. 34,000 general aviation/private aircraft
◦ All large passenger aircraft
◦ Approx. 7000 US based civil aircraft
◦ Plus international carrier"
[1] slide 5 "Equipage and operation US National Example"
[0] ITU-R M.2059-0 "Operational and technical characteristics and protection criteria of radio altimeters utilizing the band 4 200-4 400 MHz" https://www.itu.int/dms_pubrec/itu-r/rec/m/R-REC-M.2059-0-201402-I!!PDF-E.pdf
[1] "Radio Altimeter Interference" https://www.icao.int/NACC/Documents/Meetings/2018/RPG/RPGITUWRC2019-P08.pdf
[2] "FAA Radar Altimeter and Compatibility with 5G presentation" https://rotor.org/wp-content/uploads/2021/08/FAA-Presentation-RA-5G-Industry-Forum-July-2021.pdf
It was a joke but, seeing as you missed that part: I never mentioned contacting letencrypt.org (shouldn't that be letSencrypt.org?); I mentioned "phoning home" - as almost all Microsoft software seems to do - to Microsoft.
If the signing certificate expired every 3 months and the system hadn't phoned home to Microsoft to fetch updates in that time things would get 'interesting'.
Scary that this appears to pre-suppose all Windows systems must be online regularly, and have to re-fetch signed applications even if the code hasn't changed (unless the signatures are detached and it can just fetch the new signature).
That could equate to a lot of bandwidth!
At least on most security-conscious Linux distributions, microcode updates are applied early in the boot process; usually they're prepended to the initial ramdisk image (initrd.img) or equivalent and applied by the kernel, or even earlier by GRUB via an additional "initrd /boot/microcode.cpio" line.
Debian and derivatives have the packages intel-microcode and amd64-microcode.
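As a sketch, the GRUB stanza looks something like this (illustrative - most distros generate it automatically), with the microcode image listed first so the kernel sees it before the main initrd:

menuentry 'Linux' {
    linux  /boot/vmlinuz root=/dev/vg0/root
    initrd /boot/microcode.cpio /boot/initrd.img
}

Whether the early update was applied can be confirmed after boot with "dmesg | grep -i microcode".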
This transcript of the trial [0] seems to show the Ping Fix was a correction to the Horizon system involving the reconciliation of Camelot/Lottery transactions, which originally were not handled by Horizon - the Ping Fix is apparently the addition of functionality to Horizon that relies on data provided by Camelot, but it still didn't get it right:
"PG so lottery and paystation weren’t part of the original Horizon design and that introduced the pre-PING issue of mistakes between terminals and post-PING it introduced the issue of dodgy TAs and integrity of the datastream?
AB yes"
From that transcript it seems the Camelot/Lottery transactions were originally reconciled almost manually at PO HQ, which introduced lots of errors; after the Ping Fix they were/should have been handled via Transaction Corrections (TCs), but those often did not receive Transaction Acknowledgements (TAs), or TCs and TAs were applied in error - the meaning of the acronyms is not spelt out there so I may have those wrong.
Sounds to me like the whole mess was due to transactions from branch A being mixed up with some other branch due to the poor system integration and reconciliation of transactions.
"AB confirms the situation of really important transaction data (as wrt to this last example) not appearing on Credence and ARQ logs (relied on in court by the Post Office to prosecute Subpostmasters) has not yet been corrected"
Overflow or signed integer ID field error anyone?
[0] https://threadreaderapp.com/thread/1107591068974047235.html
Many times slow boot (or more accurately, a long time to reach "graphical.target") is due to waiting for "network-online.target", which on most laptops is only reached once the/a WiFi network is found and connected.
This is usually a side effect of configuring a Network Manager (WiFi) connection to be available to all system users, which causes it to be brought up before desktop log-in is reached.
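One way to check and change that (a sketch - the connection name and username are illustrative):

$ nmcli -f connection.permissions connection show "HomeWiFi"
$ nmcli connection modify "HomeWiFi" connection.permissions "user:alice"

Restricting the connection to a single user means it is only raised after that user logs in, so boot no longer waits on it.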
However, for the more general case systemd provides useful tools for identifying where boot-time delays occurred:
systemd-analyze critical-chain
systemd-analyze blame
By default these assume "--system" but with "--user" the user session start-up can be analysed separately.
"critical-chain" is the most useful when one service is delaying others, such as when waiting for a network connection to become available. The numbers show the @when and the +duration of each unit. E.g: on my laptop it takes +5.649s for the WiFi connection to be established:
graphical.target @11.614s
└─multi-user.target @11.614s
  └─kerneloops.service @11.579s +34ms
    └─network-online.target @11.570s
      └─NetworkManager-wait-online.service @5.919s +5.649s
        └─NetworkManager.service @5.419s +403ms
          └─dbus.service @5.158s
            └─basic.target @5.091s
As always
man systemd-analyze
details many more useful reports and visualisations of the boot process and how to interpret the output.
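For example, the whole boot can be rendered as a timeline for closer inspection:

$ systemd-analyze plot > boot.svg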
Reports suggest spending £30 billion - which is in the ball-park of what most experts believe delivering Fibre To The Premises nationwide would cost.
According to reports in 2014, when the Government was first consulting on switching off the analogue network, and later in 2018, when BT/Openreach began consulting about withdrawing the Wholesale Line Rental (PSTN) products in favour of VoIP [2]:
"Most of the telephone network is owned by BT, some 75 million miles of wire, worth between £2.5bn and £5bn according to a 2011 estimate by Investec bank"
There was disagreement from BT at the time over the original Investec estimate that the copper could be worth £50 billion, so it isn't clear what the current value is, but it looks like, if it can be extracted cheaply, it could fund part of the switch to a full fibre diet, err, network!
Let's not knock the target - if it transpires BT/Openreach does want to rapidly convert to a full-fibre network (finally) including rural areas - cheer them on, even if the reality is it cannot be delivered to an unrealistic timetable; once the ball is rolling it is going to gain momentum.
As someone on the end of 2km of (quality) copper with VDSL hovering around 10Mbps/0.9Mbps, I for one welcome the fibre overlords!
According to the indictment:
"j. On or about November 18, 2015, defendant YAO travelled from China to O'Hare International Airport in Chicago, Illinois. At the time, he had in his possession over 3,000 unique electronic files containing Company A's proprietary and trade secret information, including nine complete copies of Company A's control system source code and the systems specifications that explained how the control system source code worked."
Unless Yao was arrested at O'Hare, how is this known - unless Yao's devices were cloned and the images later inspected?
Or is this sleight-of-hand wording to imply Yao was carrying those documents through the airport? Note the indictment uses the term "in his possession" which isn't the same as "carrying" - potentially all this means is Yao is alleged to still have copies of those trade-secret documents after ceasing to be employed by Company A. It could be this wording is being used to satisfy the 'inter-state' requirement for bringing charges.
I agree with others that this data should be published but, in terms of guessing Amazon's reasoning, I'd suspect it has to do with revealing to competitors what their energy cost is and therefore, indirectly, what their margins are. Energy is probably the largest ongoing operational expense for a data centre, so small differences in efficiency probably represent several percentage points of profit margin.
I work with people with visual and sensory impairment. One of the major problems for these people is that the technology aids designed and made for them are extremely expensive due to high R&D costs and low volume.
The advent of powerful PDAs (you may call them 'smart'phones) has lowered the cost dramatically for many aids (no more need for dedicated devices) and there is work ongoing in university labs and elsewhere to use machine learning to describe the scene the camera can see, including recognising objects, reading labels and signs, and more [0].
Some of this technology is available in dedicated devices that are very expensive, e.g. the Orcam MyEye 2 [1].
If the same technology could be enabled in the browser it would reduce the cost dramatically and expand the areas where it can usefully aid users.
[0] https://www.microsoft.com/en-us/research/product/soundscape/
[1] https://www.orcam.com/en/myeye2/
Snaps (snappy) is developed at Canonical, and originated for the now-defunct Ubuntu Phone.
Unless I missed something L. Poettering works for Red Hat and has never been a developer of Snappy/snapcraft.io/snapd et al.
The *idea* is a reasonable one - for an OS that uses system libraries that are not compatible with some application, make it possible for the application developer to publish, at will, a blob that contains all the required dependencies, and isolate it from the host OS to limit opportunities for compromise.
The bigger the delta between the host OS and the application though, the more needs to be included in the blob.
In your particular case "just a media player" vastly under-appreciates VLC. It needs all the plugin libraries, and the libraries they depend on, possibly down to libc itself.
I would assume the snap has to ship almost all plugins, rather than installing them on demand as the host OS can, so you'll end up with what is effectively another OS image.
The typical dependency tree for 'vlc' on a Debian/Ubuntu/Mint system (even ignoring Recommends: and Suggests:) is 5,700 packages! Here's the rough calculation:
$ apt-cache depends --no-suggests --no-recommends --recurse vlc | egrep 'Depends:' | cut -d: -f 2 | sort | uniq | wc -l
Sussex's Chief Constable today tells us that two drones found in the area have been ruled out of the investigation, and that there were 115 reports of drone sightings, 92 from apparently credible witnesses.
Oh, and the Police were flying their own drones in the area which could be what some witnesses reported.
Am I missing something or do the reports and reactions of the airport not stack up?
Originally we're told there were sightings of drone(s) at 21:03 on Wednesday, then further night-time reports (both sets apparently from airfield personnel). So, rather dark. Being able to see and identify a drone would require it to be extremely close; otherwise it's just "lights in the sky moving in what appears to be a controlled manner".
As a result the airport shuts down air operations.
The 'reported sightings' in daylight don't add any clarity - many may well be false and/or mis-identified reports due to people being primed to expect 'drones'.
Then this further sighting on Thursday - at this time apparently an 'unconfirmed sighting' according to the BBC's report of the police statement - again after dark, and air operations shut down again.
This seems like extreme over-reaction unless those in charge at the airport know something we've not been told. It's almost as if they had something in mind when 'drone' was reported and were reacting to that - e.g. possibly a prior threat to attack an aircraft with drones that was thought to be a hoax, so when a 'drone' is apparently detected they react to the prior threat, not this sighting.
The reason I suggest this is that we've had previous alleged near-misses and drone sightings by pilots and ground staff at various airfields across the world, and not one of them has shut down air operations like this - so why is Gatwick reacting differently?
This is where open-source and end-to-end encryption strengths really lie.
Open-source means experts in the field can verify, via reproducible builds, that any binaries match the source code, and that the source code does not allow unauthorised parties to be added.
End-to-End encryption and Perfect Forward Secrecy (correctly implemented) can properly protect against a communications provider (MITM) being able to add a party to the 'conference'.
Sounds like this is the kind of back-up data source that would aid many quasi-autonomous driver assistance systems.
If it is accurate enough to differentiate lanes, or even possibly lane-drift, it could act as a component of the position awareness/warning system. Think of drivers allowing the vehicle to drift due to being distracted, dozing, arguing with kids in the back, etc.
If we're heading for a world of 'connected' vehicles (in the sense of each transmitting its position and velocity to the immediate surroundings) it also offers options to prevent driving too close to other vehicles even in conditions where LIDAR, cameras, and other sensors become unreliable.
Intel's formerly vacant chip fabs in Santa Clara [0], maybe? There's an Intel white-paper describing their high-density, sub-1.07-PUE design [1].
Nuclear because their hot aisles can reach 54 degrees Celsius.
[0] http://datacenterfrontier.com/intel-data-center-new-heights-efficiency/
[1] http://www.intel.com/content/dam/www/public/us/en/documents/best-practices/intel-it-extremely-energy-efficient-high-density-data-centers-paper.pdf
CONFIG_HAVE_ARCH_VMAP_STACK [0]: this is a great addition. Initially for x86, but hopefully the other architectures where this is possible will follow suit sooner rather than later.
For those not understanding its purpose or operation - it simply uses the virtual memory mapper to allocate the pages of memory for each kernel task's stack, with guard pages at either end, so that any stray writes can be detected and contained almost as soon as they happen.
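A quick way to see whether a given kernel was built with it (assuming, as most distros do, that the kernel config is shipped under /boot):

$ grep VMAP_STACK /boot/config-$(uname -r)
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y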
[0] http://lwn.net/Articles/691631/
So before we can consume networked BBC iplayer content we have to enter into an additional contract involving the exchange of our (valuable) personal data?
<sarcasm> Will the over-the-air broadcasts refuse to decode if we don't provide the same data to those 'smart' TVs and radios? </sarcasm>
It seems like the iplayer content is no longer 'free'. How does this square with the BBC's current charter, which says:
13. No charge to be made for reception of the UK Public Services and associated content.
(1) The BBC must not charge any person, either directly or indirectly, in respect of the reception in the UK, by any means, of—
(a) the UK Public Services
It is arguable that requiring personal data as a condition is a (direct or indirect) charge in that the BBC requires valuable information (if it was not of value to the BBC there would be no reason to ask for it).