* Posts by Daniel Palmer

362 posts • joined 19 Apr 2007

Ex-org? Not at all! Three and a half years after X.Org Server 1.20, 1.21 is released

Daniel Palmer

>So: instead of getting millennial-defensive about Wayland's tendency of defecating on itself at random

This sounds a lot like telling other people what to do in their own time again.

I work on Open Source. You don't get to tell me what to work on.

If you want something done either pay for it or do it yourself.

Daniel Palmer

>no, Wayland is yet another thing (similar to what Poettering has done)

>being crammed at us by the 'new, shiny, it is OUR turn now"

>too-young-to-realize-what-they-are-doing types.

No one is forcing you to use anything you don't want to. The systemd situation was slightly different in that tools that were needed by everyone, not just systemd, like udev etc., got consumed. That isn't the case here.

No one has the right to force you to use wayland and equally you have no right to tell anyone what software they can work on. If you want to continue using xorg because you know better than everyone else then it's up to you to keep it working.

Using other people's work for years and contributing absolutely nothing does not mean you get to rattle your cane from your wheelchair at the whippersnappers like you are some sort of authority.

Daniel Palmer

Person that came up with Wayland: thought the current Xorg was a dead end and developed something to replace it that won out among the other similar projects.

You: complain about Wayland but apparently didn't step up to maintain Xorg, yet feel entitled to say someone should be murdered and pissed on for actually doing something.

Number one issue with working on open source is people that gob off about what other people do in their own time.

The Register just found 300-odd Itanium CPUs on eBay

Daniel Palmer

Come on lads, do some research

>The product spluttered along through new releases in 2012 and 2017, but its death notice came in 2019 –

>and in February 2021 it was ejected from the Linux kernel.

Read your own article that was linked. It got marked as orphaned in MAINTAINERS, which means no one came forward to maintain it.

It didn't get "ejected" however and the code is still in mainline to this day:

https://github.com/torvalds/linux/tree/master/arch/ia64

Whether anyone actually runs mainline on an ia64 machine is another matter, but it'll probably only get removed when someone cares enough to submit a series to remove it, and that will probably only happen after some treewide cleanup that it gets in the way of. Even then there might be one weirdo out there who puts their hand up to keep it.

IPv6 still 5–10 years away from mainstream use, but K8s networking and multi-cloud are now real

Daniel Palmer

Re: Is this the most sensible Gardner report ever?

>It's almost exactly what's happened, though. The 2 port bytes are now part of the address.

And you don't think that's a really shitty and error-prone solution? Doing this has broken the original intention of ports and means you need a bunch of crappy state-tracking logic at the gateways...

I would go as far as to say that NAT is part of the reason we have insane stuff like almost everything developed in the last 10 years being some crappy hack that encapsulates data over HTTPS, or does the HTTPS setup and then hijacks the socket.

Daniel Palmer

Re: Is this the most sensible Gardner report ever?

>>And this response is a perfect demonstration of my point.

Ok ok I get it. Everyone should suffer having to pay huge amounts of money to get a single IP, or suffer sharing an IP with tons of other people via some hacky port-based multiplexing system, because *you* don't want to fix your setup.

Anyhow, please add 2 bytes to IP addresses in the structs of your TCP/IP stack, rebuild it and see how that works out.

Daniel Palmer

Re: Is this the most sensible Gardner report ever?

>>Decades on, I still think that the winning solution would have been just to

>>add another two numbers to the scheme used by IPv4

You realise IP addresses aren't strings, right? 192.168.0.1 is not encoded on the wire as a variable-length string, and it isn't four numbers as you seem to think. An IP address is exactly one number, and that number has to be 32 bits wide because everything that speaks IPv4 expects 32 bits of address in the 32-bit space there is for an address. You can't "just add two numbers" because there is nowhere to put them, and nothing that speaks IPv4 cares about anything but the 32-bit address.

So you either need to:

- Develop a new protocol, which everything needs to be upgraded or replaced to support. You need to do it decades before the existing protocol is unworkable, and as everything needs to be upgraded anyway you should probably at least try to design something that doesn't need to be ripped out again just as it's become widely accepted. Basically the plan for IPv6.

- Create address namespaces with gateways that can translate between different IPv4 namespaces like NAT does and hope it's not a complete shit show with people hoarding huge amounts of the "global namespace" so they can keep their stuff running while the plebs are forced to share a single address with thousands of others.

Daniel Palmer

Re: Is this the most sensible Gardner report ever?

>Propoents of IPv6 think that not being able to access every single device in

>my home or office without first setting up a VPN / Port Forwarding / Reverse

>Proxy / whatever, is an issue that needs to be solved.

Maybe some people don't want to resort to crappy crap-over-crap routing hacks to do anything, aren't silly enough to think that NAT actually hides their network from the internet, and are capable of setting up their gateways to actually work properly.

Daniel Palmer

Re: Is this the most sensible Gardner report ever?

>Sure, but all you had to do to migrate to the new standard was

>allocate a bit more memory for the address,

Yeah, great, but it doesn't work like that, does it? "Just making the addresses longer" breaks everything on the same level as IPv6 does. If you have to break everything, you might as well try to fix the existing issues with IPv4.

Do people here seriously think you can just add 2 bytes to IPv4 addresses and it would just work?

Linux kernel sheds legacy IDE support, but driver-dominated 5.14 rc1 still grows

Daniel Palmer

IDE should work on that just fine (assuming it still boots; I have no idea what the situation is with really old x86 stuff). If The Register had actually known what they were writing about, it would have been clear that the legacy IDE driver was removed, not IDE support.

The legacy driver was mainly used for machines that have a few people still maintaining support for the fun of it, like m68k, SuperH etc.

Daniel Palmer

Sorry, but what you have written makes no sense.

If something is planned for 5.14, the pull request was sent somewhere between 5.13 being tagged and 5.14-rc1 being tagged. If a pull request wasn't sent in the "merge window" (which is the fancy name for the period between the last release and the next rc1), then there is no plan for it to go in.

>The fact it's a 'release candidate' was mentioned in the first candidate,

In the Linux release process, rc1 is when all major code changes are in. If it's not in rc1, and it's not fixing a bug that came in with or got exposed by all of the stuff that got merged, it's unlikely you're going to get Linus to accept your pull request. Especially for something big like adding support for a new language to the kernel build system.

You don't have to believe me. Read the linked document that comes from the kernel source code and check for yourself. Maybe once you've understood it you'll teach the guy at The Register whose job is trawling the LKML for clickbait, so they can at least write nonsense that has some relation to reality.

Daniel Palmer

>Among the notable planned inclusions in Linux 5.14 are support for the Rust programming language,...

The Register's coverage of kernel development is awful. Considering all of the articles you guys make out of single sentences in emails on the LKML, you'd think you could take the time to research how the kernel development process works.

5.14rc1 is out. This means the merge window is closed. Anything that wanted to go into 5.14 should have been merged by now.

If not it's probably not going in.

https://www.kernel.org/doc/html/latest/process/2.Process.html?highlight=merge%20window

Linux maintainer says long-term support for 5.10 will stay at two years unless biz world steps up and actually uses it

Daniel Palmer

The reason you might think NetBSD has stable interfaces is, well, because NetBSD is basically dead at this point. To use it as an example of something people would seriously consider writing drivers for outside of the hobbyist space is hilarious.

Anyhow, what you are asking for is a module interface that exposes raw bus interfaces.. which is already there and is already *stable* because that stuff has been in use for a long time.

The problem is that people who actually write drivers don't want to do that unless they explicitly don't want to use the proper subsystem for something. I.e. Nvidia shouldn't need a shim driver, and their drivers would probably be smaller if they didn't use one, but apparently they have secrets they want to hide.

No one in mainline is going to waste their time on something that would only be used by the people who would rather maintain an awful shim than upstream.

Your driver won't break between kernel versions if you actually upstream it, because whoever makes breaking API changes will also fix your driver; otherwise the changes won't go in.

Anyhow, the kernel contains a document that already covers this. I think you should try sending an email to its author and see what happens.

https://www.kernel.org/doc/html/v5.10/process/stable-api-nonsense.html

Daniel Palmer

>Google are currently developing a kernel module to host device drivers,

Got a link for that? And you realise that isn't a new idea, right? There have been shim modules for ages. There have even been non-vendor attempts at it, like the wifi compat stuff. Guess what: it never works out, because you end up with a ton of Linux in your module and you have to keep adding to that with every old kernel you want to support. You always end up with what we have now: LTS kernel releases to help out when tracking mainline isn't possible, and people whining that other people should do work for them for free because *reasons*.

Daniel Palmer

>The people who can actually lend support / help /

>effort / money to the task of maintaining Linux - vendors

Vendors that want to work with mainline are working with mainline. Have you actually ever looked at the LKML? You might want to before writing more nonsense.

Daniel Palmer

>It'll probably never get done due to the nature of Linux

That's all down to the vendor. There are vendors like NXP and Xilinx that are actually working on mainline, trying to get all of their stuff in; there are vendors like Allwinner that seem to actively avoid mainline, but there is enough hobbyist effort to reverse engineer and support their stuff to the point where it's as good as or better than the officially supported stuff. Then you have vendors that are happy to stay on some old Android kernel forever. That's not a Linux problem, that's a vendor problem.

Stop using those vendors for your products.

>but a decent, well specified driver interface would really help here.

Why should mainline shoot off its own legs to help vendors that have no intention of actually working with mainline? Linux doesn't break userland, but internally it can change dramatically, for example reworking entire subsystems that can't take the sort of features people want to add to them anymore.

What you're saying is mainline needs to maintain a lot of compat guff to allow for a Windows-esque driver ABI, to help vendors that have no interest in mainline? Do you not see how messed up that logic is? People who want to improve Linux should waste their time on compat bullshit for vendors that don't give a crap? Would you do that work? Maybe if you were getting paid by one of those vendors... and if that were the case it would be a proprietary layer they would not release to the community.

FYI: those vendors already do this, and the result is wifi drivers that are bigger than the rest of your kernel because of all the layers of HAL crap and reproduction of stuff that's already in the kernel, like mac80211.

>I knew I could target an interface that was guaranteed

>stable for an amout of time that would make my life much much easier.

No it wouldn't. As a driver writer you would like to have all of the new helpers and generic code that has been added to make writing drivers easier and reduce the duplication for hardware that does the same function with a slightly different implementation. For example: would you want an ABI where you present a block device to the OS when you're writing an SD card host driver? No, because that would mean you now have to rewrite the whole Linux MMC layer in your driver.

What you actually want is for someone else that actually understands the SD card spec to have come along and carved out all of the pieces needed to write an MMC layer driver for an SD host so you can write a driver by filling in the 5 or 6 callbacks it needs to drive your specific hardware. Actually having to look after your code after that is the payment for having someone else make your life easy in the first place.

And I can't be arsed to respond to the rest of what you wrote.

If vendors want their shit to work with mainline, and the LTS kernels coming off of mainline, they need to work at getting their stuff upstreamed. It's as simple as that. No one is going to waste time making their lives easier for the sake of it.

Daniel Palmer

Re: Please donate

Linux *mainline* is a community effort.

Supporting anything but mainline is basically what some maintainers do as part of their work funded by the Linux Foundation etc. because the big sponsors for those bodies want LTS kernels.

The silly thing here was the original poster thinking that telling Greg he should actually support 5.10 for six years, because that would be useful for them and two years isn't long enough, would result in Greg replying "oh, you're totally right, I'll bump it up to six years just for you!".

If you are doing something at the hobbyist level you shouldn't be trying to use LTS kernels; you should be trying to follow mainline as closely as possible. If you are shipping products that rely on LTS kernels because tracking mainline just isn't physically possible, you need to either offer the people needed to do the LTS work or sponsor someone like Greg to do it.

Barbs exchanged over Linux for M1 Silicon ... lest Apple's lawyers lie in wait

Daniel Palmer

Hector Martin is out of his mind..

>"they are free to submit their Linux changes for upstream inclusion directly to kernel subsystem maintainers, or to contact us to contribute."

Why the hell would they need to contribute through Asahi? Hector isn't a maintainer of anything in the kernel, so why would they bother to send anything through his team? He's hurt because they beat him to it after he made a big song and dance but hadn't done much except get a pretty logo made up.

Actual maintainers of the areas that need to be changed are already reviewing their patches and he won't get a say.

Crowdfunded Asahi project aims for 'polished' Linux experience on Apple Silicon

Daniel Palmer

Have fun waiting..

"His projects include porting Linux to the Sony PlayStation 4"

This is often quoted in the different press pieces about this.. but did you guys actually check what this port consisted of?

~40 mostly small commits to add a PS4 subarch to x86 and some small drivers, which never got mainlined and have been abandoned since 2017.

And there's been zero discussion of how this will actually happen on either the LKML or the arm kernel list, from what I can tell.

I can imagine something partially working (to console) in a few months, and then you're looking at maybe years of release cycles, if it does ever get mainlined, due to all of the specifics needed to accommodate the Apple weirdness in the Linux arm64 support without making a mess.

I would like to be proven wrong but I think this might take long enough that AMD will have something that makes the M1 look like a toy and even Intel might have gotten their stuff together.

Linus Torvalds may have damned systemd with faint praise

Daniel Palmer

Re: praise?

>In fact 'faint praise' is fairly standard English language idiom meaning the opposite of praise.

Aware of that but there was no praise at all in what Linus wrote.

I.e. had Linus written "we can rely on init to expose where the kernel needs to ignore it" that would have been *faint praise*. But he didn't. He said we can't trust it outright.

Daniel Palmer

praise?

How does saying that the kernel should be doing something itself, instead of putting any trust in init to do the right thing, indicate praise?

Linux 4.12 kernel lands: 'Go forth and use it' quoth Linus Torvalds

Daniel Palmer

Re: Why so many kernel regressions?

>Why should he have to "look at" anything? It should just work.

Unless you paid for a support contract the ball is in your court when things don't work.

Daniel Palmer

Re: Why so many kernel regressions?

@Andy

>Fair enough, you did your due diligence.

Notice how he never mentioned what the regressions are? They might not even be in the kernel; they might be in out-of-tree drivers. Exactly zero details on something that he thinks is a major issue.

>a lot of that code change in each new kernel is from the vendors making

>changes to their own previous submissions.

I would like to see stats to prove that.

>With so many proprietary driver blobs,

So not things that are in the kernel. And how many blobs do you really have aside from firmware? Maybe one for a GPU?

>it's very hard to tell what changes they've made from one time period to the next.

Git has tools built into it to help you work out exactly what commit broke something. Linus only merges patch sets that have a clear set of commits to show how things changed so those tools are useful. If you know one version works and the next doesn't you have the complete history of changes broken up into bitesize pieces to work from.

Daniel Palmer

Re: Why so many kernel regressions?

"Some of them I did. But most were related to my laptop brand,

and testers would need the exact same model (or line) to reproduce them and fix."

Probably not, actually. If you can tell people which versions work and which don't, it's very possible someone can find roughly what patch or patch series caused the change.

I suspect your issue is that you went onto a forum and said something vague like "why kernel have regression. My compooter no work no more!" and no one could help you because you left out any details of the actual issue.

"True, but there is a new Linux kernel version every 5-6 weeks now! MS handles it differently."

Unless you run gentoo or similar it's very unlikely you have anything that's close to mainline on the day it drops. Many distros use LTS kernels that are updated much less frequently.

"My grub lists over 15 kernels now, I can use whichever I want"

So open an issue on the bug tracker for your distro's kernel package and report that XYZ stopped working between kernel x.x.x and x.x.x. It's very likely that if it's a real regression it's already in their bug tracker.

"but this won't work for Linux newbies"

Did you pay for the distro you use or a support contract? You got a very nice kernel and set of software for (probably) absolutely nothing. It doesn't come with free lifetime hand holding. Most distros have support communities that go out of their way to help people but no one has unlimited time to waste on people that don't help themselves.

Daniel Palmer

Re: Why so many kernel regressions?

Did you try reporting the problems you are having somewhere that someone who can help you visits? I.e. the forums for the distro you're using, the bug tracker for the distro you're using, or maybe the LKML?

>4.12 had over a million of LOC added, is there any other software piece which

>changes that much at that frequently?

I would suspect that Windows has similar levels of change between major releases. The difference with Linux is you can if you really want back out patches until you find the one that broke something for you. If you can't do the legwork to track down the issue yourself you need to make your issue known to people that can.

U wot M8? Oracle chip designers quietly work on new SPARC CPU

Daniel Palmer

Re: Me likey

Yeah man! Can't wait to pay those lovely guys at Oracle many times the price of a Xeon system that performs just as well or better in many cases and then get gang raped per socket for software that actually does something interesting with it.

< insert meaningless drivel about how Intel's stuff is based on an 8 bit architecture, CISC is bad because RISC is good because *reasons*, Itanium being a flop etc >

Linus Torvalds slams 'pure garbage' from 'clowns' at Grsecurity

Daniel Palmer

Re: strcat and strcmp have seen me fine for years...

I don't remember when those became syscalls, but let's pretend for a moment that they are part of the kernel that userland calls directly...

So the implementation is naive. Expecting memory not to be corrupted by programming errors or tampering is expecting too much. So let's just remove those functions. What's that? Lots of shit you use doesn't work anymore? Oh dear, you'd better get fixing it then. You can't fix hundreds of libraries etc. right this moment because someone decided some perceived security benefit is more important than stuff working? Oh dear, oh dear, oh dear, whatever shall you do? You found out that expensive EDA, CAD or whatever package you use doesn't work and isn't supported by the vendor anymore? I guess what you need to do is run really old buggy versions of everything so you can avoid upgrading the one thing that is super duper secure but makes your system unusable.

Intel: Joule's burned, Edison switched off, and Galileo – Galileo is no more

Daniel Palmer

Re: I'm going to stick up for the Joule...

>Well in my day job the pay is over $300/day so my employers don't care how much the devkit costs.

So your employer isn't going to let you mess around for a few weeks to see if you can make something work on a Zynq, when you could drop the money on an Intel module or one of Nvidia's things to see if it works as an actual product first.

Daniel Palmer

Re: Hardly unexpected

>Not having source for the GPU driver doesn't stop most IoT developers (no SoC that I'm aware of has open GPU drivers,

The i.mx6 can now run Android with no binary blobs.

>and x86 systems with open GPU drivers normally significantly underperform the OEM BLOB).

Intel's drivers are opensource and they are the OEM.

>Not having source or correct details to make it talk to external devices

>through the likes of GPIO, SPI, I2C etc does.

Many of the parts people use for IoT don't have decent datasheets and the existing driver in the SDK is the only reference. One part I have worked with had massive useful features totally undocumented in the datasheet yet had code to support them in the SDK.

Daniel Palmer

Re: Another botched call by Intel

@druck

No one cares about assembler. If you have to code in assembler it had better be 8-bit and cost less than a cent per part.

Daniel Palmer

Re: "failed to get across in their marketing that this wasn't aimed at the Pi users "

>There are a lot of ARM based processors, and quite a lot of ARM based boards.

But very few that are clocked above the magical 1.something GHz that aren't phones or expensive server hardware.

Daniel Palmer

Re: I'm going to stick up for the Joule...

>Especially the FPGA-ARM hybrid devices where the computational grunt-work

So you're talking about chips like the Xilinx Zynq, where the cheapest available board is $100, the FPGA isn't all that big, and the time you spend developing your custom cores will overtake the money you would have spent using pre-existing software on something x86-based, unless your day rate is a fiver and a packet of crisps.

Daniel Palmer

Re: I'm going to stick up for the Joule...

There are people that do more than turning relays on and off. If you want to do software-defined radio or machine vision in a small space, with latencies better than sometime next week, you are basically in a tight spot.

The common ARM stuff is produced on old fabs to make it cheap and doesn't clock above ~1.2GHz.

Almost no one is shipping chips that use the higher performance cores (A15 instead of A7).

Daniel Palmer

Re: Another botched call by Intel

>Actually, ARM's ISA is pretty good.

ARM's 32-bit ISA is a mess now, and they dropped stuff that people get a woody for, like conditional execution, for AArch64.

Oh, and the code density thing is only in Thumb mode, and they didn't invent that; they licensed the patents from Hitachi.

TL;DR: no one gives a crap about the instruction set. Is it fast enough? Does it have a decent C compiler? .. No, but it has this really cool instruction that does .. stop, you can keep it.

Daniel Palmer

Re: Hardly unexpected

>The Raspberry Pi 3 is currently using quad core 64 bit A53s

Does Raspbian use a 64-bit userland yet? FYI: those A53s are slower than the 32-bit big.LITTLE cluster on the Odroid XU4.

>and is far from the only cheap SBC to use 64 bits.

OrangePi etc have 64bit Allwinner based boards that are the same price/cheaper than the actual cost of the Pi.

>capable of many tasks and I'd hardly describe them as plateauing.

Doing stuff you would like to do on them like software defined radio is difficult because of the lack of cpu power, memory bandwidth and really basic stuff like decent USB.

>Where Intel shot themselves in the foot with these IoT processors

>was in their lack of support and documentation available to mere mortals.

Not really. There are IoT products shipping in the hundreds of thousands of units that are using chips from Marvell, Broadcom (cypress now) etc that have documentation and SDKs that are only available under NDA and have strict distributor rules (i.e. you can't order their stuff from digikey without approval). If you are serious about doing something getting into vendor schemes to get NDA, access to samples etc isn't that hard. No big vendor gives a crap about some guy that is going to produce maybe 5 units in total. It's not even worth their time maintaining a github repo.

>If you want to do pretty much anything with a Pi then you'll find the details

>you need on the web somewhere.

You aren't going to use a Pi in a real world IoT product because it's far too expensive and overkill.

>With Intel it's mostly guesswork as they won't tell anyone short of a large

>OEM anything, and that's with an NDA in place even.

Weird. You keep going on about the Pi which is NDA and binary blobbed up to the eyeballs to discredit Intel when Intel are probably the only vendor that has full opensource drivers etc for their CPUs, GPUs, NICs etc in the mainline kernel.

Daniel Palmer

Re: Hardly unexpected

>more powerful boards in a market wanting small, power sipping ones.

Actually a lot of people want faster boards. The cheap ARM board has plateaued on the performance side (1.2GHz quad cortex A7) and that's not enough for a lot of applications.

Daniel Palmer

Re: Another botched call by Intel

No one actually cares what the instruction set is. They care that it's cheap as shit.

ARM is cheap as shit so that's what people are using.

Internet of snitches: Anyone who can sniff 'Thing' traffic knows what you're doing

Daniel Palmer

I dunno guys. If someone is already on your network sniffing your data to do traffic analysis it either means they already have access to your wifi or are in your house already. I think that means you have more problems than some IoT device selling the ambient temperature of where it's located to "the cloud data mafia" for big buckeroos.

Sorry, totally forgot this is el reg. I should have written:

oh noes small microcontroller based thing on my network is so much more scary than the 3 or 4 windows machines I have because it's called IoT. Whats that? IoT uses IPv6? How will I remember all the digits? What about my NAT?

Stanford Uni's intro to CompSci course adopts JavaScript, bins Java

Daniel Palmer

Re: Android and the "Language of Choice".

"Everyone I know who develops for Android does the bare minimum of Java forced on them by Google and jumps into something better as soon as possible."

You can't know many Android developers that develop apps that aren't just a full screen surface then.

Global IPv4 address drought: Seriously, we're done now. We're done

Daniel Palmer

Some of you guys worry me greatly.

Why? Because you're meant to be "tech" people (whatever the hell that means this day of the week) or sysadmins etc and your arguments are "IPv6 addresses are too long and remembering almost the same number of characters (in most cases) as an ipv4 address but with rules for making them shorter and more readable makes my head hurt" and "but NAT means I don't have to setup a firewall properly".

Good grief.

Oracle campaigns for third Android Java infringement trial

Daniel Palmer

Re: Google are switching to OpenJDK...

They are switching to the OpenJDK standard library, not the JVM, AFAIK. The reason is that the current Harmony-based libraries are stuck on Java 5 or 6.

Google's brand new OS could replace Android

Daniel Palmer

Re: How does using the Linux kernel prevent Google from distributing Android updates?

"And, every time you change the kernel, you have to recertify (both time consuming and expensive)."

Can you provide a link that proves that? Please keep in mind that a lot of the radio stuff in Android is in userland and not the kernel itself.

Daniel Palmer

Re: How does using the Linux kernel prevent Google from distributing Android updates?

>since the Linux core of Android has essentially forked from Android some time ago

Google maintain a patch set for Linux that they have been gradually getting mainlined.. not really all that different from the kernel RHEL etc use that ship with vendor patches. Hardly a fork.

Idiot flies drone alongside Flybe jet landing at Newquay Airport

Daniel Palmer

Re: Prop vs Jet

>LiPo batteries would also be blended, resulting in finely divided particles

>of reactive metal being flung about,

LiPo batteries don't contain sheets of pure lithium metal. While there are videos of people cutting them with metal objects thus internally shorting them and causing them to burst into flames most LiPo explosions happen while actively pushing energy into them not when discharging.

There are also videos of people crushing them in a hydraulic press and nothing happening.

I doubt they pose a massive problem once cut into confetti sized pieces.

Linux security backfires: Flaw lets hackers inject malware into downloads, disrupt Tor users, etc

Daniel Palmer

Re: Patch incoming in... 3,2,1

"Except for Android phones and smart TV's (and home routers?). Those will be screwed over."

Most of that stuff is running kernels so old that they don't contain the affected code.

Intel's Crosswalk open source dev library has serious SSL bug

Daniel Palmer

Re: @Daniel

@Brewster's Angle Grinder

Android apps use "shared libraries".. mainly because if you want to add native code to an app deployed in Java the native code is loaded in via a shared library.

So long story short: Android has shared libraries, system wide ones and APK wide ones.

Apps shouldn't be allowed to mess with the system-wide ones, as that would be a disaster, and the OS has no hope in hell of patching random .so files shipped with apps. You seem to think some code being in another file == totally easy to do security updates for, with no knowledge of what the .so actually is.

Daniel Palmer

That will only work for apps developed in Java as that isn't replacing the system TLS library but replacing the security provider used by the "Java" runtime to create instances of SSLSocket etc.

It doesn't actually update the system wide library that is in the read only system partition...

And this doesn't work for the SDK in question because it's a platform for writing cross platform apps so it's unlikely to be able to call into Java code to setup TLS sockets.

Daniel Palmer

Re: @Daniel

"Where the DLL resides doesn't change the problem."

It does. If a .so is packaged with an Android application, other applications shouldn't be able to load it (each application is sandboxed in its own user on Android), so if there is a problem with that .so it's limited to that application.

"Either the OS upgrades a library, with all the potential compatibility problems, or the app does it,"

We're not talking about anything special here just because it's a library. As far as the OS is concerned it's just a file owned by the application. There's no way to track every single piece of code that is shipped with third-party applications and have the OS go around fixing it. The only realistic option is to contain the damage by sandboxing the applications as much as possible.

Google has options via the Play Store to press the eject button on apps that are really serious problems, and they apparently do scan code (compiled from Java) for common defects (recently they have been sending e-mails out for issues that will stop apps working with N). But if you think they can build something into the OS that can work out where each binary in an application comes from, work out what it was really built from (app-specific patches etc.) and supply automatic replacement binaries without breaking applications, then I want some of what you're smoking.

"in which case you're waiting on the developers."

You're waiting on developers either way. If the fix for this is trivial to implement (updating the SDK and rebuilding the APK) updates will be hitting the Play store in no time.

Daniel Palmer

The term "shared library" doesn't mean "system library". It means the library can be dynamically linked at runtime and thus shared between binaries that link against it. It doesn't have to be visible system-wide to be shared.

Android has shared system libraries and most apps will be using them. There are very good reasons applications can't go messing with system libraries. Take a few moments and think about it...

Now a thought experiment: what do you do if you want to provide some functionality that isn't available in the system libraries? Do you allow random applications to fight over which version of the library gets installed system-wide, or do you just sandbox the application and let it provide whatever libraries it wants for its own use without interfering with the OS and other applications?

Google tells Android's Linux kernel to toughen up and fight off those horrible hacker bullies

Daniel Palmer

Re: About bloody time!

>implement best security practices known, at least conceptually, since the 1970s.

Known since the 70s but not implemented in probably the most widely used proper (i.e. not RTOS) kernel in the world? I wonder if there is some level of "easier said than done" to this?


Biting the hand that feeds IT © 1998–2022