* Posts by Daniel Palmer

345 posts • joined 19 Apr 2007


Barbs exchanged over Linux for M1 Silicon ... lest Apple's lawyers lie in wait

Daniel Palmer

Hector Martin is out of his mind..

>"they are free to submit their Linux changes for upstream inclusion directly to kernel subsystem maintainers, or to contact us to contribute."

Why the hell would they need to contribute through Asahi? Hector isn't a maintainer of anything in the kernel, so why would they bother to send anything through his team? He's hurt because they beat him to it after he made a big song and dance but hadn't done much except get a pretty logo made up.

Actual maintainers of the areas that need to be changed are already reviewing their patches and he won't get a say.

Crowdfunded Asahi project aims for 'polished' Linux experience on Apple Silicon

Daniel Palmer

Have fun waiting..

"His projects include porting Linux to the Sony PlayStation 4"

This is often quoted in different press pieces about this.. but did you guys actually check what this port consisted of?

~40 mostly small commits to add a PS4 subarch to x86 and some small drivers that never got mainlined and have been abandoned since 2017.

And there's been zero discussion of how this will actually happen on either LKML or the arm kernel list from what I can tell.

I can imagine something partially working (to a console) in a few months, and then you're looking at maybe years of release cycles if it does ever get mainlined, due to all of the specifics needed to accommodate the Apple weirdness in the Linux arm64 support without making a mess.

I would like to be proven wrong but I think this might take long enough that AMD will have something that makes the M1 look like a toy and even Intel might have gotten their stuff together.

Linus Torvalds may have damned systemd with faint praise

Daniel Palmer

Re: praise?

>In fact 'faint praise' is fairly standard English language idiom meaning the opposite of praise.

Aware of that but there was no praise at all in what Linus wrote.

I.e. had Linus written "we can rely on init to expose where the kernel needs to ignore it" that would have been *faint praise*. But he didn't. He said we can't trust it outright.

Daniel Palmer


How does saying that the kernel should be doing something instead of putting any trust in init to do the right thing indicate praise?

Linux 4.12 kernel lands: 'Go forth and use it' quoth Linus Torvalds

Daniel Palmer

Re: Why so many kernel regressions?

>Why should he have to "look at" anything? It should just work.

Unless you paid for a support contract the ball is in your court when things don't work.

Daniel Palmer

Re: Why so many kernel regressions?


>Fair enough, you did your due diligence.

Notice how he never mentioned what the regressions are? They might not even be in the kernel.. they might be in out of tree drivers. Exactly zero details on something that he thinks is a major issue.

>a lot of that code change in each new kernel is from the vendors making changes to their own previous submissions.

I would like to see stats to prove that.

>With so many proprietary driver blobs,

So not things that are in the kernel. And how many blobs do you really have aside from firmware? Maybe one for a GPU?

>it's very hard to tell what changes they've made from one time period to the next.

Git has tools built into it (git bisect) to help you work out exactly which commit broke something. Linus only merges patch sets that have a clear series of commits showing how things changed, so those tools are useful. If you know one version works and the next doesn't, you have the complete history of changes, broken up into bite-size pieces, to work from.

Daniel Palmer

Re: Why so many kernel regressions?

"Some of them I did. But most were related to my laptop brand, and testers would need the exact same model (or line) to reproduce them and fix."

Probably not, actually. If you can tell people which versions work and which don't, it's very possible someone can find roughly which patch or patch series caused the change.

I suspect your issue is that you went onto a forum and said something vague like "why kernel have regression. My compooter no work no more!" and no one could help you because you left out any details of the actual issue.

"True, but there is a new Linux kernel version every 5-6 weeks now! MS handles it differently."

Unless you run Gentoo or similar, it's very unlikely you have anything that's close to mainline on the day it drops. Many distros use LTS kernels that are updated much less frequently.

"My grub lists over 15 kernels now, I can use whichever I want"

So open an issue on the bug tracker for your distro's kernel package and report that XYZ stopped working between kernel x.x.x and x.x.x. It's very likely that if it's a real regression it's already in their bug tracker.

"but this won't work for Linux newbies"

Did you pay for the distro you use or a support contract? You got a very nice kernel and set of software for (probably) absolutely nothing. It doesn't come with free lifetime hand holding. Most distros have support communities that go out of their way to help people but no one has unlimited time to waste on people that don't help themselves.

Daniel Palmer

Re: Why so many kernel regressions?

Did you try reporting the problems you are having somewhere that someone who can help you actually visits? I.e. the forums for the distro you're using, the bug tracker for the distro you're using, or maybe the LKML?

>4.12 had over a million of LOC added, is there any other software piece which changes that much at that frequently?

I would suspect that Windows has similar levels of change between major releases. The difference with Linux is you can, if you really want, back out patches until you find the one that broke something for you. If you can't do the legwork to track down the issue yourself, you need to make your issue known to people who can.

U wot M8? Oracle chip designers quietly work on new SPARC CPU

Daniel Palmer

Re: Me likey

Yeah man! Can't wait to pay those lovely guys at Oracle many times the price of a Xeon system that performs just as well or better in many cases, and then get fleeced per socket for software that actually does something interesting with it.

<insert meaningless drivel about how Intel's stuff is based on an 8-bit architecture, CISC is bad because RISC is good because *reasons*, Itanium being a flop etc.>

Linus Torvalds slams 'pure garbage' from 'clowns' at Grsecurity

Daniel Palmer

Re: strcat and strcmp have seen me fine for years...

I don't remember when those became syscalls, but let's pretend for a moment that those are part of the kernel that userland calls directly...

So the implementation is naive. Expecting memory not to be corrupted by programming errors or tampering is expecting too much. So let's just remove those functions. What's that? Lots of shit you use doesn't work anymore? Oh dear, you better get fixing it then.. You can't fix hundreds of libraries etc. right this moment because someone decided some perceived security benefit is more important than stuff working? Oh dear, oh dear, oh dear, whatever shall you do? You found out that expensive EDA, CAD or whatever package you use doesn't work and it's not supported by the vendor anymore? I guess what you need to do is run really old buggy versions of everything so you can avoid upgrading one thing that is super duper secure but makes your system unusable.

Intel: Joule's burned, Edison switched off, and Galileo – Galileo is no more

Daniel Palmer

Re: I'm going to stick up for the Joule...

>Well in my day job the pay is over $300/day so my employers don't care how much the devkit costs.

So your employer isn't going to let you mess around for a few weeks to see if you can make something work on a Zynq when you could drop the money on an Intel module or one of Nvidia's things to see if it works as an actual product first.

Daniel Palmer

Re: Hardly unexpected

>Not having source for the GPU driver doesn't stop most IoT developers (no SoC that I'm aware of has open GPU drivers,

The i.MX6 can now run Android with no binary blobs.

>and x86 systems with open GPU drivers normally significantly underperform the OEM BLOB).

Intel's drivers are open source and they are the OEM.

>Not having source or correct details to make it talk to external devices through the likes of GPIO, SPI, I2C etc does.

Many of the parts people use for IoT don't have decent datasheets, and the existing driver in the SDK is the only reference. One part I have worked with had massively useful features totally undocumented in the datasheet yet had code to support them in the SDK.

Daniel Palmer

Re: Another botched call by Intel


No one cares about assembler. If you have to code in assembler it had better be 8-bit and cost less than a cent per part.

Daniel Palmer

Re: "failed to get across in their marketing that this wasn't aimed at the Pi users "

>There are a lot of ARM based processors, and quite a lot of ARM based boards.

But very few that are clocked above the magical 1.something GHz that aren't phones or expensive server hardware.

Daniel Palmer

Re: I'm going to stick up for the Joule...

>Especially the FPGA-ARM hybrid devices where the computational grunt-work

So you're talking about chips like the Xilinx Zynq, for which the cheapest available board is $100 and the FPGA isn't all that big, and the time you spend developing your custom cores is going to overtake the money you would have spent to use pre-existing software on something x86-based, unless your day rate is a fiver and a packet of crisps.

Daniel Palmer

Re: I'm going to stick up for the Joule...

There are people that do more than turning relays on and off.. If you want to do software defined radio or machine vision, in a small space, with latencies better than sometime next week, you are basically in a tight spot.

The common ARM stuff is produced on old fabs to make it cheap and doesn't clock above ~1.2GHz.

Almost no one is shipping chips that use the higher performance cores (A15 instead of A7).

Daniel Palmer

Re: Another botched call by Intel

>Actually, ARM's ISA is pretty good.

ARM's 32-bit ISA is a mess now, and for AArch64 they dropped stuff that people get excited about, like conditional execution.

Oh, and the code density thing is only in Thumb mode, and they didn't invent that. They licensed the patents from Hitachi.

TL;DR: no one gives a crap about the instruction set. Is it fast enough? Does it have a decent C compiler? No, but it has this really cool instruction that does.. stop, you can keep it.

Daniel Palmer

Re: Hardly unexpected

>The Raspberry Pi 3 is currently using quad core 64 bit A53s

Does Raspbian use a 64-bit userland yet? FYI: those A53s are slower than the 32-bit big.LITTLE cluster on the Odroid XU4.

>and is far from the only cheap SBC to use 64 bits.

OrangePi etc have 64-bit Allwinner-based boards that are the same price as or cheaper than the actual cost of the Pi.

>capable of many tasks and I'd hardly describe them as plateauing.

Doing stuff you would like to do on them, like software defined radio, is difficult because of the lack of CPU power, memory bandwidth and really basic stuff like decent USB.

>Where Intel shot themselves in the foot with these IoT processors was in their lack of support and documentation available to mere mortals.

Not really. There are IoT products shipping in the hundreds of thousands of units that are using chips from Marvell, Broadcom (Cypress now) etc. that have documentation and SDKs that are only available under NDA and have strict distributor rules (i.e. you can't order their stuff from Digikey without approval). If you are serious about doing something, getting into vendor schemes to get NDAs, access to samples etc. isn't that hard. No big vendor gives a crap about some guy that is going to produce maybe 5 units in total. It's not even worth their time maintaining a GitHub repo.

>If you want to do pretty much anything with a Pi then you'll find the details you need on the web somewhere.

You aren't going to use a Pi in a real-world IoT product because it's far too expensive and overkill.

>With Intel it's mostly guesswork as they won't tell anyone short of a large OEM anything, and that's with an NDA in place even.

Weird. You keep going on about the Pi, which is NDA'd and binary-blobbed up to the eyeballs, to discredit Intel, when Intel are probably the only vendor that has fully open-source drivers etc. for their CPUs, GPUs, NICs and so on in the mainline kernel.

Daniel Palmer

Re: Hardly unexpected

>more powerful boards in a market wanting small, power sipping ones.

Actually a lot of people want faster boards. The cheap ARM board has plateaued on the performance side (1.2GHz quad Cortex-A7) and that's not enough for a lot of applications.

Daniel Palmer

Re: Another botched call by Intel

No one actually cares what the instruction set is. They care that it's cheap as shit.

ARM is cheap as shit so that's what people are using.

Internet of snitches: Anyone who can sniff 'Thing' traffic knows what you're doing

Daniel Palmer

I dunno guys. If someone is already on your network sniffing your data to do traffic analysis, it either means they already have access to your WiFi or they are in your house already. I think that means you have more problems than some IoT device selling the ambient temperature of where it's located to "the cloud data mafia" for big buckeroos.

Sorry, totally forgot this is el reg. I should have written:

oh noes small microcontroller based thing on my network is so much more scary than the 3 or 4 Windows machines I have because it's called IoT. What's that? IoT uses IPv6? How will I remember all the digits? What about my NAT?

Stanford Uni's intro to CompSci course adopts JavaScript, bins Java

Daniel Palmer

Re: Android and the "Language of Choice".

"Everyone I know who develops for Android does the bare minimum of Java forced on them by Google and jumps into something better as soon as possible."

You can't know many Android developers that develop apps that aren't just a full screen surface then.

Global IPv4 address drought: Seriously, we're done now. We're done

Daniel Palmer

Some of you guys worry me greatly.

Why? Because you're meant to be "tech" people (whatever the hell that means this day of the week) or sysadmins etc, and your arguments are "IPv6 addresses are too long, and remembering almost the same number of characters as an IPv4 address (in most cases), with rules for making them shorter and more readable, makes my head hurt" and "but NAT means I don't have to set up a firewall properly".

Good grief.

Oracle campaigns for third Android Java infringement trial

Daniel Palmer

Re: Google are switching to OpenJDK...

They are switching to the OpenJDK standard library, not the JVM, AFAIK. The reason is that the current Harmony-based libraries are stuck on Java 5 or 6.

Google's brand new OS could replace Android

Daniel Palmer

Re: How does using the Linux kernel prevent Google from distributing Android updates?

"And, every time you change the kernel, you have to recertify (both time consuming and expensive)."

Can you provide a link that proves that? Please keep in mind that a lot of the radio stuff in Android is in userland and not the kernel itself.

Daniel Palmer

Re: How does using the Linux kernel prevent Google from distributing Android updates?

>since the Linux core of Android has essentially forked from Android some time ago

Google maintain a patch set for Linux that they have been gradually getting mainlined.. not really all that different from the kernel that RHEL etc use, which ships with vendor patches. Hardly a fork.

Idiot flies drone alongside Flybe jet landing at Newquay Airport

Daniel Palmer

Re: Prop vs Jet

>LiPo batteries would also be blended, resulting in finely divided particles of reactive metal being flung about,

LiPo batteries don't contain sheets of pure lithium metal. While there are videos of people cutting them with metal objects, thus internally shorting them and causing them to burst into flames, most LiPo explosions happen while actively pushing energy into them, not when discharging.

There are also videos of people crushing them in a hydraulic press and nothing happening.

I doubt they pose a massive problem once cut into confetti sized pieces.

Linux security backfires: Flaw lets hackers inject malware into downloads, disrupt Tor users, etc

Daniel Palmer

Re: Patch incoming in... 3,2,1

"Except for Android phones and smart TV's (and home routers?). Those will be screwed over."

Most of that stuff is running kernels so old that they don't contain the affected code.

Intel's Crosswalk open source dev library has serious SSL bug

Daniel Palmer

Re: @Daniel

@Brewster's Angle Grinder

Android apps use "shared libraries".. mainly because if you want to add native code to an app deployed in Java, the native code is loaded in via a shared library.

So, long story short: Android has shared libraries, system-wide ones and per-APK ones.

Apps shouldn't be allowed to mess with the system-wide ones, as that would be a disaster, and the OS has no hope in hell of patching random .so files shipped with apps. You seem to think some code being in another file == totally easy to do security updates for, with no knowledge of what the .so actually is.

Daniel Palmer

That will only work for apps developed in Java, as that isn't replacing the system TLS library but replacing the security provider used by the "Java" runtime to create instances of SSLSocket etc.

It doesn't actually update the system wide library that is in the read only system partition...

And this doesn't work for the SDK in question because it's a platform for writing cross platform apps so it's unlikely to be able to call into Java code to setup TLS sockets.

Daniel Palmer

Re: @Daniel

"Where the DLL resides doesn't change the problem."

It does. If a .so is packaged with an Android application, other applications shouldn't be able to load it (each application is sandboxed in its own user on Android), so if there is a problem with that .so it's limited to that application.

"Either the OS upgrades a library, with all the potential compatibility problems, or the app does it,"

We're not talking about anything special here just because it's a library. As far as the OS is concerned it's just a file owned by the application. There's no way to track every single piece of code that is shipped with third party applications and have the OS go around fixing it. The only realistic option is to contain the damage by sandboxing the applications as much as possible.

Google has options via the Play Store to press the eject button on apps that are really serious problems, and they apparently do scan code (compiled from Java) for common defects (recently they have been sending e-mails out for issues that will stop apps working with N). But if you think that they can build something into the OS that can work out where each binary in an application comes from, work out what it was really built from (app-specific patches etc) and supply automatic replacement binaries without breaking applications, then I want some of what you're smoking.

"in which case you're waiting on the developers."

You're waiting on developers either way. If the fix for this is trivial to implement (updating the SDK and rebuilding the APK) updates will be hitting the Play store in no time.

Daniel Palmer

The term "shared library" doesn't mean "system library". It means that it can be dynamically linked at runtime thus shared between binaries that link against it.. it doesn't have to be visible system-wide to be shared.

Android has shared system libraries and most apps will be using them. There are very good reasons applications can't go messing with system libraries. Take a few moments and think about it...

Now a thought experiment: what do you do if you want to provide some functionality that isn't available in the system libraries? Do you allow random applications to fight over which version of the library gets installed system-wide, or do you just sandbox the application and let it provide whatever libraries it wants for its own use without interfering with the OS and other applications?

Google tells Android's Linux kernel to toughen up and fight off those horrible hacker bullies

Daniel Palmer

Re: About bloody time!

>implement best security practices known, at least conceptually, since the 1970s.

Known since the 70s but not implemented in probably the most widely used proper (i.e. not RTOS) kernel in the world? I wonder if there is some level of "easier said than done" to this?

Brit chip bods ARM quietly piling up cash. Softbank will be happy

Daniel Palmer

Re: Does that add up?

For the microcontroller market, if royalties for ARM Cortex-M designs go up, or SoftBank does something else that pisses off silicon vendors, we will see all of them start pushing their pre-ARM stuff again. Which would be a shame. While I'm not an ARM fanboy, having decent free toolchains, debuggers etc that work across thousands of different parts from different vendors is very useful.. I don't really want to go back to the days of the 500MB zip of unworkable crap and the $200 or $300 debug tool per vendor/family.

For the medium performance stuff like phone and tablet chips I'm not sure Apple etc have anywhere else they can go aside from Intel or AMD. SoftBank buying ARM could be just what Intel have been waiting for.

Rejoice, sysadmins, there's a new glamour job nobody understands

Daniel Palmer

Re: Roll up, roll up! You don't even need to study!

Yes. Patching remotely is automated and the device has multiple copies of the firmware so it can't be bricked if an update fails. Next question.

Patch ASAP: Tons of Linux apps can be hijacked by evil DNS servers, man-in-the-middle miscreants

Daniel Palmer

I'm willing to bet there are similar issues in every libc and in the runtime environments for any "safe" language like Java, .NET, Python..

It would be nice to see proper discussion of the issue, and of what people should be doing to not get caught out by it, instead of pointless finger-pointing and sneering.

Chip company FTDI accused of bricking counterfeits again

Daniel Palmer

Re: Not Bricked

But what if a counterfeit Arduino, with a fake chip that is a clone of a real chip that says in the documents that it's not to be used in critical life support systems, is used to control the machine that stops the 100 kilo lead weights installed above all patients' heads from dropping and killing them, and fails because it couldn't handle invalid data because of bad coding practices! *DEEP BREATH*


Daniel Palmer

Re: Goodbye FTDI

>proactively avoided anything with an FTDI chip in it.

So you avoid all of the dev kits with FTDI chips as the JTAG interface.. like 90% of them, because no other vendor makes a chip like that.

>The risk is just too high and counterfeits are all over the supply chain, even in heavily controlled sourcing.

If you source from Digikey etc you should be OK. I suspect most of the people that are getting stuff bricked are using parts sourced from Random/Cheapest Parts Dealer in China.

>Imagine the liability if a counterfeit got into a medical device and FTDI's driver f*ckups killed somebody.

What if the counterfeit part dies without FTDI's driver fucking it up? Surely a system that is running critical life support services A: uses only parts that can be traced back to the original vendor, B: doesn't use Windows, or is at least fault tolerant enough that it doesn't rely on Windows being remotely stable, and C: doesn't go updating critical drivers while it's doing a critical life support task?

Have you considered that potentially counterfeits might be busted by the official driver even if it doesn't intentionally try to break them because they aren't 100% compatible?

>Sorry to say, but hopefully FTDI will be out of business before that happens.....

I doubt that will happen. They don't just make this chip, and whatever issues you have with their drivers, the alternatives don't exist or aren't as good. If you just want a decent USB-to-serial chip the Silabs CP2102 is good, but if you want a high speed multi-protocol chip like the FT2232 you have less choice.

>It's a shame really as FTDI has been the defacto standard for USB-to-serial for decades, way to destroy your business.

Because their chips work, unlike the alternatives, with the exception of the CP2102 I mentioned.

Star Wars BB-8 toy in firmware update risk, say UK security bods

Daniel Palmer

Re: Pen testing fail?

The transport used to get the firmware to the device doesn't matter if the firmware is signed. If they (the vendor) just rely on transport security to stop rogue firmware, that would be a problem, but they (the pen testers) didn't show they could change the firmware and make the device download it and run it.

All they have done is see something happens over a clear text protocol and made a noise about it.

Daniel Palmer

Re: Pen testing fail?

This is much like the Barbie hack, where they read out plain text data from the SPI flash by wiring it up to a tool that talks to SPI devices, and did WiFi scans using features that are part of the firmware and used during provisioning.. They "hacked" nothing but made it sound like they did, and sites posted their crap verbatim. Big-headed security researchers and clickbait news sites are a match made in heaven.

Linus looses Linux 4.3 on a waiting world

Daniel Palmer

>that drivers probably don't belong inside a kernel

Stuff like GPU drivers have a kernel component and a userspace component in a lot of cases, so it's not true that "GPU drivers are in the kernel" unless all you are thinking about are dumb framebuffer devices.

Bug-hunt turns up vuln in LibreSSL

Daniel Palmer

malloc either gives you a pointer to some memory of the size you asked for or it doesn't.. I'm not seeing why it would be malloc's fault if you write past the memory you asked for.

Japan begins mega-rollout of 100 million+ national IDs

Daniel Palmer

This is basically like a national insurance or social security number at the moment. I suspect the way things work currently is a massive pain in the ass to administrate and causes a ton of headache.

The income tax office knows that someone with your name, at your current or maybe 10-years-previous address, filed (or had filed for them) a tax return at some point. They apparently pass on the income details for people to their city or town office, based on the name and rough address, and then that office sends out local tax, health insurance etc details. For me, the tax office keep sending stuff to an address I moved from over a year ago, even after telling them the change of address (good on JP for forwarding everything for so long), but that address is different from the address that I have registered at the town office... I suspect that because of my foreign name they can work out how to sort it out, but I imagine they mess stuff up for people with common names, and the income tax office sends income information to the wrong city or town office fairly often.

TL;DR version: Japan gets NI numbers that make the interlinked public systems a little less fragile. Not the end of the world or some plan to document all the evil foreigners or whatever.

ARM wants you to jump into mbed with it – IoT open-source OS in beta

Daniel Palmer

Re: The beauty of embedded projects is...

>Therefore you can get operating systems like FreeRTOS/OpenRTOS which can run with very few kilobytes of RAM.

All operating systems would only need a few KB of RAM if all they did out of the box was scheduling, some multi-threading primitives and heap management.

If you need TCP/IP, TLS etc you're basically limited to the more expensive end of the microcontroller spectrum.

ONE MILLION new lines of code hit Linux Kernel

Daniel Palmer

Re: Yes but

>For microcontroller programming storage is often measured in kb not meg (and not big numbers).

The sort of microcontrollers that Linux can run on (like the H8300 port that has just been reintroduced) can access megabytes of RAM (and usually have 32-bit address spaces), so their storage would be measured in megabytes.

>Some consumer routers also don't come with 80 meg of flash even today.

You don't need to have every driver that Linux has.. you probably can't even select most of the drivers on non-x86 arches.

>I looked and without the modules the OpenWRT kernel itself is only about 1 meg. Good enough to boot a router but not much else.

Surely the 1MB kernel is enough to boot the router and load smallish modules like iptables etc to make it function as a router... otherwise you need to tell the OpenWrt guys they have been wasting their time for however many years they've been working on it.

Incoming! Linux 4.1 kernel lands

Daniel Palmer

>Talk to performance gamers.

My recent Nvidia GPU (I forget which..) works fine with whatever nvidia driver is in Debian/Sid and I can play all of the games I have in steam just fine.

>Radeon 6850 should've been near the top of the support list.

One hardware vendor not being very good with their Linux drivers isn't the fault of "the Linux community" or Linux itself. There's nothing in the kernel that stops AMD's stuff from working if they want to support it properly. If they don't, that's their problem. I'll just stick to Nvidia stuff.

>Oh? I tried Ubuntu on a old Dell Inspiron. Fell flat because no nVidia driver worked on it. Noveaux was too slow and the nVidia blob wouldn't support the chipset. Dead end. And this isn't the first time.

I would guess the chipset you were trying to use is either A: not yet supported in the latest greatest version of the driver, or B: a legacy chipset that requires one of the legacy versions of the drivers. Neither of these is an issue that is due to the kernel being Linux. Nvidia stops supporting older chipsets in their drivers for Windows too. If you have a brand new card and you want to run it with an OS that is used by less than 1% of Nvidia's customers, you should expect driver updates that support that card to take a little while. With the Nvidia drivers, at least, support for their new chipsets does eventually happen.

You can't just stick any old nvidia card in a Windows machine and expect it to just work either so your point is moot from the start.

>But the Linux community, which includes the kernel community, should be pushing for most mainstream support, but they're not, so they're in the rut they are now.

How do you push commercial entities that rely on profits from sales to produce products that would certainly lose them money?

I'm not sure why the Linux community or kernel community should be pushing for a bunch of productivity apps that you're interested in when most of us aren't interested in that stuff either way. I use Linux because I'm a developer and I have access to some of the best tools out there, and they are free and open source. The apps you're interested in are all going "cloud" based anyhow, and it won't matter what platform you're on for those in a few years' time.

"Except the desktop will continue to exist for performance applications like gaming. See the above common beef PC gamers have concerning their video cards."

If demand for Linux drivers for the latest generation of GPUs goes up then driver support for those GPUs will improve.

Daniel Palmer

>If Linux wants to be THE OS for the desktop, it will need several boosts here and there.

I'm not sure Linux (the git repo containing code for a general-purpose Unix-workalike kernel, or a compiled version of said code) wants to be anything. There are developers that want Linux to be the go-to kernel for desktop systems, there are some that are only interested in machines with hundreds of processors, there are others that are more interested in seeing it run on stupidly underpowered relics (the H8300 port might make a comeback with device tree support like ARM :P). The fact that Linux isn't targeted at one particular job is exactly why it's used literally everywhere and why it's interesting to work on.

>Support has improved considerably, yes,

Support for what exactly? Most common PC hardware works out of the box. The kernel side of things needed for The Desktop(TM) is there and has been for ages. If you want a totally integrated experience like Windows or OSX, the distribution you're looking for is Ubuntu.

>but it can still have teething issues, particularly where vendors aren't exactly

>forthcoming with hardware support for various reasons such as protection

>of trade secrets.

IMHO hardware support under Linux is far superior to any other OS. People make out like you can just plug any old shit into a Windows machine and have it work, but fail to mention all the time you have to spend hunting for drivers or returning stuff because the vendor doesn't support whatever version of Windows you happen to have.

>And then there's the software selection, particularly for the consumer

>end where people just want to put it in and work.

Linux is a kernel. It's not really the kernel's fault that the current desktop market share is mostly Windows so that's what commercial developers target. There's nothing particular about the Linux kernel that means the type of applications that run on Windows couldn't run on Linux.

>There are native applications that can do a lot such as GIMP and LibreOffice

So you're not talking about Linux. You're talking about common types of applications that desktop users need whether they are running Linux, OSX, or whatever. Anyhow, this is going to be less and less of a problem now that everyone wants to use more portable development tools so they can get their stuff running on the desktop, web, mobile etc.

>but it will always trail the bleeding edge (and that's what killed it for me since I like to game).

Not really the kernel's fault again. There are lots of games on Android so it's not like Linux can't do games.

Nvidia's GTX 900 cards lock out open-source Linux devs yet again

Daniel Palmer

Re: JustNiz

>You do realize that ARM, for example, is not remotely cross compatible,

>eh? Binary for one ARM isn't always (rarely, IME) going to work on a chip

>from another vendor.

Why Trev? Please come up with a good answer.

Hint: The main driver part of the nvidia solution runs in userland.

Daniel Palmer


>the nVidia driver potentially has access to the entire memory space.

The kernel part of the nvidia drivers has source available so you can compile it against your running kernel. You're free to audit it and tell us about what you find.

>Because of this ANY bug you are experiencing with the kernel

>cannot be ruled out as a nVidia driver problem

>(potentially other software too, but usually it's trying to track down a kernel problem).

The kernel has the "tainted" stuff because of this. But the kernel part of the nvidia drivers has source available, as I mentioned before, so you don't have to resort to disassembling kernel modules to debug it; you just won't get any help from the mainline kernel guys, as it's not their code.

>nVidia has shipped buggy drivers

There are buggy drivers in the mainline kernel too. Using staging drivers also taints the kernel IIRC.

>and it's much harder to get dev attention if you're running a tainted kernel for this reason.

The reason for the kernel tainting stuff is so that you remove all of the external drivers etc. you have loaded and reproduce your bug before reporting it; otherwise you are potentially reporting a bug to the wrong place and wasting people's time. It's not about dissing people for running code that isn't in the mainline. If it was, why provide the infrastructure to build out-of-kernel modules in the first place?
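The taint state is just a bitmask the kernel exposes at `/proc/sys/kernel/tainted`; the bit positions are documented in the kernel's tainted-kernels admin guide. A quick sketch of decoding the flags relevant here (only a handful of the documented bits are listed):

```python
# Decode the kernel taint bitmask. Bit positions are taken from
# Documentation/admin-guide/tainted-kernels.rst; only a few flags listed.
TAINT_FLAGS = {
    0: "P: proprietary module loaded",
    1: "F: module force-loaded",
    10: "C: staging driver loaded",
    12: "O: out-of-tree module loaded",
    13: "E: unsigned module loaded",
}

def taint_flags(mask):
    """Return the human-readable flags set in a taint bitmask."""
    return [desc for bit, desc in sorted(TAINT_FLAGS.items()) if mask & (1 << bit)]

if __name__ == "__main__":
    # On a live system: mask = int(open("/proc/sys/kernel/tainted").read())
    # 4097 = bits 0 and 12 set, the typical result of loading the nvidia module.
    print(taint_flags(4097))
```

So a machine running the nvidia blob typically reports P plus O, which is exactly the information a mainline developer uses to triage your bug report.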

Daniel Palmer

Re: JustNiz


>You'd be right. I'm not a developer, nor do I claim to be.

Ok, so don't try to use someone else's wicked cool skills that you think are massively impressive to size up other people, because you have no idea what you're talking about.

>Linux developer - professional or not - doesn't give him

>standing to diminish someone like Chris. There's a difference.

You're taking offence at something that wasn't written. Most people that develop on Linux, whether at the kernel level, application level or whatever, are in no way affected by the revelation that nvidia has added signing to their GPUs. It affects a tiny minority of developers that are working on opensource drivers for nvidia GPUs and almost no one else. I think you're going to massive lengths to make this look like it deeply affects your friend's hobby project, but I don't see how it does.

>No, vendors need to open source their frakking drivers so that the rest of the

>world isn't held up by their internal politics. There's a whole industry that needs

> to be able to move faster than they can.

In a perfect world, yes. But this isn't a perfect world. There are lots of vendors that are trying to get all of their stuff mainlined. One example I can think of is Marvell, which has been paying an external company to rewrite their drivers so they are acceptable, as their previous binary blobs (that everyone who signed the NDA had the source code for) had no chance of getting included. Qualcomm has apparently been assisting the guys writing free drivers for their GPUs...

If you look through the LKML, though, you'll see plenty of times where a vendor has offered their in-house driver for mainline and it has been rejected because it's poorly written. There might be a few mails back and forth to try and correct the issues, but a lot of the time the conversation dies and the driver doesn't go in.

Bottom line: open sourcing stuff that is wrapped in layers and layers of NDAs and licensed IP is not easy, and even once the open sourcing has happened it doesn't mean the code is going into the mainline and getting supported forever and ever. FYI: the kernel part of the nvidia drivers does have source available.

>So do I, and nVidia doesn't release that information with a simple NDA.

> It takes a hell of a lot of lobbying and a lot of money.

So don't use their stuff then.

"Hobbyists have a bit of a problem that they aren't very valuable to big semiconductor companies that need to ship hundreds of thousands of units to make a design profitable."

>Yeah, but fuck 'em, eh? Awesome attitude

Where did I say that, Trevor? You seem to run some sort of consultancy; if every day a bunch of charity cases walk into your office and give you their sob story, will you do work for free, or for a rate that means you lose money? You might once or twice out of the goodness of your heart, but you aren't going to do it every day until you go bust, are you? Hobbyists should stick to hobbyist-friendly vendors that release proper documentation for their products and be hard on vendors that don't. What hobbyists really don't need is people flapping their gums about stuff they don't care about or need.

>Poor support outside of x86.

You keep going on about this horrible non-x86 thing but I don't think you have a clue what you're talking about.

>Reams of WONTFIX bugs and corporate history of simply ignoring bugs

>raised are all good reasons.

The intel GPU drivers have been opensource for a long time. They still crash the whole X server when people do certain actions in kicad with some models of GPU; the bug has been there for about 5 years. Open sourcing the drivers doesn't instantly fix hard-to-fix bugs.

>1) Inability to firmware update cards (nice to have in a lot of ways)

They haven't stopped anyone from updating firmware. They have locked out firmware that isn't signed with their key.

>2) lack of open source drivers that can be recompiled on other platforms (absolute must).

This is about that non-x86 thing again, isn't it? Are you aware that nvidia have a bunch of SoCs with ARM cores and nvidia GPUs? From what I saw at Tokyo MakerFaire (I'm one of those hard-done-by electronics/computing hobbyists you are so concerned about, BTW), it looks like they even have working CUDA on ARM. It was pretty funny really.. nvidia had a stall with impressive CUDA and machine vision demos running on their SoCs, and Intel was next door trying to flog their unimpressive, buggy pentium-class crap.. but that's another story.

>Now some of my clients have a desire to get into the firmware and tweak and tinker,

What exactly are they going to tweak or tinker with? I can maybe understand that they might be able to find where values like the different core frequencies are held in the flash and overclock their cards, but I very much doubt they are in IDA disassembling the stock firmware, documenting it and re-implementing it on a daily basis. Take a read of this article by someone that has been reverse engineering GPU drivers; it might open your eyes a little bit: http://lwn.net/Articles/638908/

>because they need every erg of speed.

So, yeah, poking in a hex editor to tweak the settings of the cards which nvidia doesn't make available.

>But I think there's a much broader need for open source drivers that can be tweaked

What are you going to be tweaking exactly? I'm sure there are things to tweak but I'd like to hear a solid example.

>and recompiled for different architectures,

What architectures do you think could really do with nvidia GPUs but don't have binary drivers? Keep in mind that there are only 3 or 4 current architectures that have pci-e interfaces.

>and where bugs can be fixed that nVidia won't.

Unless the bugs are in the firmware, that has no relation to the firmware being signed or closed source. Nvidia could have opensource drivers and closed firmware (like 99.9999999% of the stuff in your machine that has a mainlined driver but requires firmware).. would you still be demanding they remove the signing if that was the case?
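The signing point can be made concrete: signing only controls *which* firmware image the chip will load, independently of whether the driver around it is open. A toy sketch of the verify-before-run step (an HMAC stands in for the real asymmetric scheme purely to keep it self-contained; real GPUs ship a vendor public key in ROM and all names here are made up):

```python
# Toy illustration of verify-before-run firmware signing. Real hardware uses
# asymmetric signatures (vendor signs with a private key, the chip holds only
# the public key); a keyed MAC is used here just to stay in the stdlib.
import hashlib
import hmac

VENDOR_KEY = b"vendor-secret"  # hypothetical stand-in for the vendor's signing key

def sign(firmware: bytes) -> bytes:
    """What the vendor does at release time."""
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def chip_will_run(firmware: bytes, signature: bytes) -> bool:
    # The boot ROM recomputes the check and refuses anything that doesn't
    # match, which is what locks out rebuilt or patched firmware images.
    return hmac.compare_digest(sign(firmware), signature)

blob = b"official firmware image"
sig = sign(blob)
print(chip_will_run(blob, sig))              # the official image runs
print(chip_will_run(b"patched image", sig))  # a modified image is rejected
```

Which is why open sourcing the driver wouldn't change anything here: the check happens on the card, before any driver code is involved.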


