The original developer of CentOS announced Rocky Linux the same day that Red Hat announced this change to CentOS.
101 posts • joined 7 Jun 2016
Red Hat defends its CentOS decision, claims Stream version can cover '95% of current user workloads'
Third time's still the charm: AMD touts Zen-3-based Ryzen 5000 line, says it will 'deliver absolute leadership in x86'
Honestly, without a workload that demands x86-64, I wonder if there is any point staying with it. For low usage like internet browsing, I am pretty sure we are entering the age of ARM laptops/desktops, which will be cheaper and serve those purposes with much lower power draw. Heck, a phone or tablet does this just fine. While PC games are still far off, the ARM-based (Nvidia Tegra) Nintendo Switch does show future potential on the gaming front too.
And I wouldn't be surprised if we see the same from RISC-V in the future.
"The Cinebench score seems remarkable given that Zen 3 tops out at 4.9GHz vs Intel's 5.3GHz.
However, Intel's hitting those frequencies at 14nm whilst AMD's now on 7nm. I wonder what's stopping AMD clocking faster? And when (if?) Intel hits 7nm, there's a good chance they'll be on top again."
I suspect power leakage is a big part of it, and that becomes more of a problem the smaller/denser the transistors get, more so below 14nm. FinFET is replacing planar MOSFET at 7nm for reasons like this, and it's where there is some* validity to Intel's stance of not rushing further down to smaller process nodes. But I'm not an expert in this area, so I'm not certain this is the reason.
*a very, very small amount of validity; Intel is still trailing massively behind the gains that smaller processes are offering.
DLSS is a nice innovation, but it requires game devs to implement it, and to implement it they have to go to Nvidia and get Nvidia to train its AI to enhance upscaled images for their game. It will not be accessible to all games; in particular, don't expect it for games associated with AMD, like Borderlands 3 for example. However, the most notable title of this year, Cyberpunk 2077, will definitely have DLSS 2.0 support.
Right now, I wouldn't even expect the 3090 to be able to keep a constant 240Hz at 4K even with DLSS; without DLSS, a constant 120Hz at 4K should be achievable at native/rasterization. At native, I expect the 3080 could potentially hit 120Hz but not hold it constant, and the 6000 series to probably top out around 100-110Hz, though I doubt it'd reach 120Hz. This is all based on estimates, not testing or empirical numbers, and I'm going off the rough average demand of games today, not next year or the year after.
8K on the 3090 is a joke: it's ONLY achievable with DLSS, and without it expect console-level frame rates (20-35fps) in most games. Most games aren't optimised for 8K anyway, so this should be expected. 8K will remain niche for years to come; 1440p is hardly widely adopted and 4K isn't much better. 1080p remains the dominant resolution for now.
Re: Watch the gap!
I don't think the price drop of the 3000 series has anything to do with AMD; I am more inclined to say it is down to the 2000 series probably not selling quite as hotly as anticipated, with pricing being one of the reasons given internally for that. There were rumours of AMD saying that their RX 6000 series will be competitive against the 3080, but going by similar things from the rumour mill, that may be a slight exaggeration and it'll land midway between the 2080 Ti and the 3080.
If the 3070 is around or slightly better than the 2080 Ti, then going by the rumour that the AMD card will be around $550 (compared to $400 for the 3070 and $800 for the 3080), it's a slightly better price for rasterization performance. However, also going by those rumours, the 6000 series only beats Turing for ray tracing, which isn't a great leap, and AMD has some competitor to DLSS, but how good that will be is questionable at this stage compared to DLSS, which is much more mature and is hardware-accelerated with Tensor cores. A hybrid approach is expected, which means performance outside of rasterization will likely be closer to the 3070...
But take it all with a grain of salt; until third-party reviewers can get their hands on these cards and actually test these things, it's hard to be certain how much of this information is correct. One thing that is a boon for AMD right now is Nvidia's lacklustre supply/production of the Ampere chips, as well as a very well-known bug in the Windows driver which was causing multiple issues for RTX 3080 users but is now patched. The flip side is that AMD has a worse reputation regarding driver and card stability.
Overall I don't think the 6000 series is going to be the Ampere killer that a lot of people would like to think it is, and AMD will need to surpass itself again given what is expected from Nvidia Hopper, which is rumoured to be a 5nm MCM design with production expected to start next year at TSMC.
Can't defend this one
Title says it all: why is this being built into SystemD? What should at most be an optional plug-in is instead built in, and more likely it's a feature that should not exist at all.
SystemD should be stripped back to being just an initialisation system, with the process manager made a plug-in, and then any other features made plug-ins/modules... OPTIONAL plug-ins/modules, so that administrators can actually control what is on their system. Stop making a stupidly monolithic solution where a monolithic solution was never wanted or needed.
Chrome suddenly using Bing after installing Office 365 Pro Plus... Yeah, that might have been us, mumbles Microsoft
*David Attenborough voice* And here we have, in the wild, a rare glimpse... of what may be... a positive IBM quarter
It's a no to ZFS in the Linux kernel from me, says Torvalds, points finger of blame at Oracle licensing
Wait... don't MS and Oracle both have Linux distributions now? Sure, for MS it's SONiC, used for their internal network routing in Azure, while Oracle has Oracle Enterprise Linux (OEL), which is based on RHEL. I don't think either would want to call Linux theft now, given that they are both using and redistributing Linux distros.
Re: "It solves a problem that people have."
Seems I skimmed a bit too fast, apologies. Personally I haven't experienced anything like that, but then I generally try to avoid DHCP in the first place where possible; static configurations have (IMO) always been more reliable, though I understand static isn't always possible, of course.
My playground server is currently down due to moving, but I suspect the behaviour mentioned here is controlled within NetworkManager.conf, perhaps the dhcp setting. I can't test to be sure, and thus can't currently make an argument about it being configuration.
Re: for the want of a nail
Who said doing things properly doesn't matter? Nobody ever made that argument. The argument is about having realistic expectations of what is actually achievable, instead of believing everything can just be fixed with software, such as hardware failures or badly configured systems/servers.
As far as run-flat tyres go, they are another example of fundamental issues: run-flats are more expensive and generally degrade faster than conventional tyres, blowouts are still possible with them, and they generally only protect against damage in specific areas; a puncture in the sidewall may leave the tyre completely unusable. Meanwhile, development of self-sealing tyres has come along, and for many small punctures they are an alternative way of keeping the tyre working while you look to get it serviced/replaced.
Re: "It solves a problem that people have."
You still seem stuck in the mindset of where NM was in CentOS/RHEL 6; it was majorly reworked for CentOS/RHEL 7 and no longer randomly breaks things. What was released in CentOS/RHEL 6 was definitely a daemon aimed at laptops, and it was terrible by all accounts.
If you want to go for the RHCSA/RHCE now, you have to learn NM properly. Today it has vastly superior tooling to where it was on CentOS/RHEL 6 and actually knows how to play ball with a server. In fact, NM has a new alternative to bonding: teaming. Further to this, you can pass it configuration in JSON, and it actually bothers to read what is in /etc/sysconfig/network-scripts/ too now. The only reason I see for disabling NM in CentOS 7 is the memory of trauma from what we were given in CentOS 6.
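For anyone curious, a team can be stood up almost entirely from nmcli. A rough sketch (interface names and the address are placeholders, not from any real box):

```shell
# Create a team interface with an active-backup runner, config passed as JSON
nmcli con add type team con-name team0 ifname team0 \
    team.config '{"runner": {"name": "activebackup"}}'
# Enslave two NICs to it
nmcli con add type team-slave con-name team0-port1 ifname eth0 master team0
nmcli con add type team-slave con-name team0-port2 ifname eth1 master team0
# Static addressing on the team, then bring it up
nmcli con mod team0 ipv4.method manual ipv4.addresses 192.0.2.10/24
nmcli con up team0
teamdctl team0 state    # inspect runner and per-port state
```

Obviously this is only meaningful on a host actually running NetworkManager with two spare NICs.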
Re: "It solves a problem that people have."
But neither the Linux kernel nor systemd is limited to supporting only single embedded systems, and nor is the software running most web sites hosted on the Internet.
And indeed, there are cases where software fails because it is a heap of crap, but this is just one of MANY cases: software also often fails when it is running on dodgy hardware, or when the supporting system is not up to spec or is ill-configured. Software also fails because of unforeseen events; it's impossible to always know what a system is going to deal with, and blaming it ALL on software developers is like blaming the car manufacturer for every punctured tyre that car gets, even if it ran over a nail.
This is why I say the previous statement was arrogant: it was so limited in scope, and assumed some idealistic world where you can just code away all the issues, if only software developers would do their jobs... at least, that is very much how it came across.
Re: "It solves a problem that people have."
Obviously I understand there is a difference, hence the very wording "there are multiple cases where"; that is not an all-encompassing phrase to begin with. I could only see your point if I had used an all-encompassing phrase, which I didn't. So why are we here?
"Well behaved computer-based systems have been achievable for decades, but have become unfashionable, especially since uncontrolled system failure became widely acceptable. Discuss..."
Why would "well behaved computer-based systems" become unpopular unless there was something fundamentally wrong with them? The answer is that there is something fundamentally wrong with them almost every time. It is impossible to build an entirely crash-proof computer, because components degrade and resources get pushed too far, and there are simply better ways to achieve uptime than relying on a design where systems are single points of failure.
Re: "It solves a problem that people have."
NetworkManager isn't something you should disable as of RHEL/CentOS 7; it's actually better (IMO) than the legacy network service in RHEL/CentOS 7. The reason most people hate NetworkManager is how terrible the implementation in RHEL/CentOS 6 was.
One of the reasons NetworkManager is better in RHEL/CentOS 7 is that it separates connections from devices, which (in my opinion at least) gives you far more power over your network configuration and more versatility all round. It does require a lot more learning, but at least bash completion exists.
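As a rough illustration of that connection/device split (the profile names and addresses are made up for the example):

```shell
nmcli device status      # the physical/virtual devices NM knows about
# Two separate connection profiles bound to the same NIC
nmcli con add type ethernet con-name office ifname eth0 \
    ipv4.method manual ipv4.addresses 10.0.10.5/24 ipv4.gateway 10.0.10.1
nmcli con add type ethernet con-name lab ifname eth0 \
    ipv4.method manual ipv4.addresses 172.16.0.5/24
nmcli con up lab         # flip the same device between profiles
nmcli con show --active
```

Switching profiles this way replaces hand-editing ifcfg files and bouncing the whole network service.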
Re: "It solves a problem that people have."
Your response seems filled with arrogance to me; it almost sounds like you're saying "if only developers could do their jobs".
Crashes happen for various reasons, and there are multiple cases where crashing is in fact preferable to the continued running of a service. There are major differences between daemons running on a server and some terrible app that runs on a smartphone. More to the point, a lot of the time daemon failures aren't even the software's fault, but the fault of the administrator for not properly configuring the application in the first place or for supplying insufficient resources for what they are trying to run. A daemon killed by the OOM killer, for example, is usually not the fault of the software but of the administrator(s) responsible for that server. Then there are hardware failures, and failures in other software/libraries, which add to the stack. No serious developer is going to write even a basic web daemon from scratch in assembly.
So this isn't about developers being amateurs or designing bad software that crashes a lot, but rather a mixture of many complex issues being over-simplified by somebody who is naively idealistic about what is truly achievable in the real world.
Re: the country that invented chocolate
When you go back over an old post and see a silly response... Chocolate, as in the solid brown bar that people recognise today, was invented by Joseph Fry. Yes, cocoa had been known for a long time, and cocoa powder was available, but it was in England that the modern-day solid chocolate bar was invented.
Re: 737 MAX
"Not only has the MAX a problem with the single sensor, its main problem , according to pilot friends, is because they fitted the extra powerful CFM LEAP engines (same as actually work on newer Airbus320) but the 737 MAX geometry, location of engines , is compromised by its lower loading height (almost no lifter required to delicately toss in the hold-bags)"
You mean the entire, well-documented reason that MCAS was installed/required in the first place.
Software can change MCAS's ability to repeatedly fight the pilot's override; MCAS was relentless, and it definitely required some re-training, which Boeing previously denied. Also, there was a vital gauge that Boeing decided to sell as a $10K optional extra.
Re: America = a cesspit
Actually it's because the repeal of NN was essentially blocked and reversed using the Congressional Review Act, so you never saw the loss of NN. Now most states have enacted their own NN laws, which the FCC would like to imagine it can pre-empt, but the courts do not appear to agree. So you'll never see NN disappear, because even if the FCC removed it at the federal level now, it'll remain at state level in almost every state.
So... maybe, just maybe, critical thinking is a skill you shouldn't be preaching about, because you clearly didn't do your research.
"Surely, if you're working at the "global scale" you claimed in this very thread, you're using redundant servers and automatic failovers, load balancers and the like - and making sure server B is functioning before rebooting server A?"
This might be a surprise, but I haven't always worked for the same company using the same set-up. I previously worked for an MSP, before that in DC ops, and before that in app/DB dev. Unfortunately, while I was at said MSP, the sales team would sell some very stupid and unsuitable solutions, and my complaints about the horrors they sold fell on deaf ears.
"Bloody hell, when I was running a couple of mate's websites off of a spare laptop in the linen cupboard under the stairs, I had a backup clone running at another mate's house where I could redirect DNS before shutting down/doing work on the main machine!"
"That's only really feasible if by "global scale" you mean - all my kit is in easy-to-reach, or otherwise manned datacentres. Which isn't always true."
Not even remotely true. You can have these things called "spares", and a lot of DCs have on-site technicians or engineers as well, if it comes to requiring physical replacement. Some companies will swap out entire racks if a single server fails, with equipment all over the world; the fact that a sysadmin might be in the US isn't going to prevent them from having a rack replaced in Austria. If you're at that scale, you aren't ever going to be relying on a single person; that'd be dumb past belief.
"Yes, you can drain the traffic and serve from elsewhere, but now you're dealing with increased latency for the duration of a remote re-install to somewhere with shakey international connectivit, or, waiting weeks for hardware to be swapped out (look how often the boat to TL is ;) ). All
because some berk didn't like text logs?"
Well, apparently the set-up I'm dealing with is considerably better than the one you're dealing with, since where I work there are solutions to ALL those issues already. If you bother to do your set-ups correctly from the start, you're not going to end up with most of these problems.
"That shakey international connectivity btw? Also a bit of a shot-in-the-head for centralised ELK/Splunk. Working on Global scale doesn't always mean you're solely in big D/Cs with masses of international uplink, sometimes you're in the back-of-beyond, closer to the users"
If you're running your systems on the customer's site, you can still have a localised syslog server. Sure, a centralised ELK/Splunk would be better, but that doesn't stop you logging to more than just the local server.
I respect your opinion. A lot of people just look at it as sysv vs. systemd, but in reality it should be weighed on the pros and cons. As I've said somewhere above, systemd is great for the enterprise but terrible for expert users. As most sysadmins are expert users, they naturally dislike systemd compared to sysvinit, which is great for expert users but (in my honest opinion) isn't a truly enterprise-worthy solution. I personally would like to see something better than either.
It's an important metric when you've got a customer shouting down your ear demanding their services be restored NOW. What you value and what your employer or your customer values are very different things. Generally, customers love high uptimes; they never like to see their services or servers go down, even if it's just to apply monthly or security patches on servers running in high availability.
There are quite a few. For starters, the lack of dependencies: processes are just loaded in a pre-defined order, so if you have one daemon reliant on another service and that first service fails, sysvinit will just continue to load the dependent service anyway. This means having to manually detect whether the prerequisite daemon is running and terminate yourself, or worse, letting the dependent service load when perhaps it should remain down.
The above plays into the fact that sysvinit really has no idea what state the processes it starts are in.
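To make the contrast concrete, here's a minimal sketch of a systemd unit (myapp and mydb are placeholder names, not real services):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My application
# Hard dependency: if mydb.service fails to start, myapp is not started,
# whereas sysvinit would have blindly run the next script in order
Requires=mydb.service
After=mydb.service network-online.target

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
```

With sysvinit you'd be writing that dependency check by hand at the top of the init script.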
Single-threaded initialisation: it's mainly a speed thing, so it's a matter of opinion, but in an enterprise environment it is actually meaningful. Longer boots mean increased downtime for any reboot, such as kernel updates. It's also not great when you have a customer on the phone shouting for their site to be brought back up right now, and you check sysvinit to see it timing out on some process, only to later find out said process was trying to run a reverse-DNS lookup that never got answered, and so just sat there for five minutes holding up the entire boot.
But the biggest issue is generally other system administrators putting kludges into init scripts, so you end up with bespoke servers using band-aid solutions that later break for various reasons (e.g. a package update replacing said script). This often comes down to the quality of the sysadmin, but in a real enterprise environment you shouldn't be running bespoke and patchy server configurations/scripts.
"And when JournalD shits itself and stops forwarding? It happens."
That's hardly an excuse not to be sending logs to a syslog server, however. Just because you're using a syslog server doesn't mean you automatically stop keeping local copies of the same log files too.
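For what it's worth, journald will happily do both at once. A sketch of the relevant journald.conf settings:

```ini
# /etc/systemd/journald.conf
[Journal]
Storage=persistent    # keep a local on-disk journal
ForwardToSyslog=yes   # also hand every message to the local syslog daemon,
                      # which can write flat files and forward off-box
```

So losing the forwarder doesn't mean losing the local copy, and vice versa.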
"That's fine if you're working on a very local scale. It doesn't work nearly so nicely when you scale up to global scale. Don't get me wrong, it's a nice tool to have, but you don't want to have to rely on it alone.
It's also not much use if journald gets stuck and stops forwarding, or if your box is failing to boot in the first place."
Considering I work primarily at global scale: no, it's definitely nicer at global scale, more so when you can use something like Splunk or ELK to compare error logs from different servers. It's also nice when, say, /var goes read-only but you still get logging occurring, because /var can also go read-only; it happens.
As for servers that have boot issues, if you're truly at global scale then you'd probably just swap the server out: drain everything it is doing to another server and get it diagnosed later, removed, replaced or re-installed. If you're at global scale, you should have decent high availability in place, after all. No point letting servers go bespoke because somebody came up with a cunning plan to fix some strange boot issue, or having to worry whether somebody really fixed the corrupted root file system, etc.
If you're using imjournal, then the logs on your syslog server will be readable; you haven't even tried looking into it, have you?
Also, systemd reports the state of the process to you: it's pretty simple, systemctl status <name>. With sysvinit you'd need a cron job or another daemon constantly checking against the script, and you'd have to hope you hadn't missed any cases in your scripting where the process might not be in the expected state, whereas with systemd you can have the process respawn itself on failure, or have it e-mail you.
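A rough sketch of what that respawn-on-failure and e-mail-on-failure behaviour looks like in a unit file (exampled and the notify-email@.service helper are hypothetical names, not stock units):

```ini
[Unit]
Description=Example daemon
# Start a (hypothetical) templated mail unit if this service enters failed state
OnFailure=notify-email@%n.service

[Service]
ExecStart=/usr/local/bin/exampled
Restart=on-failure    # respawn automatically on non-clean exits
RestartSec=5s

[Install]
WantedBy=multi-user.target
```

Then `systemctl status exampled` shows the live state, last exit code and recent log lines in one shot, no cron polling required.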
Seriously, in the time I've worked at an MSP, the number of sysadmins I've seen generate terrible little scripts that they are so proud of, thinking they're production-applicable when in reality they deserve a very fast rm -f, is too many to count. Scripts are great when done well; the problem is all the people who think their scripting is up to par when it isn't. Good scripts need heavy peer review, testing and maintenance before being deployed properly (not just cut/pasted or SFTP/rsynced into place). The 25-line script somebody slaps onto some random server just makes things bespoke, and that is so, so bad for production.
Re: Next NetworkManager
I am not saying you can't disable it; you definitely can. However, Red Hat has been building most of their tooling in RHEL 7 around it now, like nmcli. Having gone for the RHCE, I was not able to avoid using nmcli or firewall-cmd, and in some ways I can't say they are bad tools either, but they were forced on me. Not sure if it's Red Hat being too pushy with their new technologies, or those of us being too clingy to the old ways.
It's really very stupid, and there must be much easier solutions than "the user must be logged in". I am surprised he didn't defer to Cockpit or something, actually. Even a flat-file database that stores the encryption key encoded using the relevant private key sounds at least like an option... but then where would he store it... oh right, he doesn't want it in /etc, does he.
You'd need to take too narrow a view to credit this for all of systemd's adoption. Obviously, SystemD being developed by Red Hat engineers such as Poettering, we saw it first coming from the Red Hat/Fedora side. But that does not explain its adoption by other distributions like Debian or Ubuntu. I think it more likely that there are actual real-world reasons to explain this, and I'd put it down to systemd being built for the enterprise rather than for the expert user.
Obviously there are still many distributions that do not use SystemD, and some branches of both Debian and Ubuntu that do not use it.
sysvinit was seen as legacy way before SystemD became a thing, and there have been multiple previous attempts to replace it (e.g. runit). If you're relying solely on the logs on the local server, then you're already doing something wrong in the first place; at minimum have a proper syslog server and forward your logs, if not something like Splunk or ELK. Grep is great, but using it to scan 1GB+ log files on a loaded production DB server is just a no, very bad practice.
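As a sketch of the minimum-effort version of that forwarding (the hostname is a placeholder, and exact syntax depends on your rsyslog version):

```ini
# /etc/rsyslog.d/forward.conf
# Keep writing local files as normal, and also ship everything to a
# central syslog host. A disk-assisted queue stops a dead link from
# blocking local logging.
*.* action(type="omfwd" target="syslog.example.com" port="514" protocol="tcp"
           queue.type="LinkedList" queue.filename="fwd" queue.maxDiskSpace="100m"
           queue.saveOnShutdown="on" action.resumeRetryCount="-1")
```

ELK or Splunk then sits behind that, indexing what the central box receives.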
You can still use text-based files in systemd for launching services/daemons, but the issue with sysvinit is that it gives no real comprehension of process state, and that is problematic in a professional enterprise environment, which is part of the reason why sysvinit really is legacy. There's also the whole single-threaded initialisation process and multiple other issues, since sysvinit has no real comprehension of dependencies either.
So, remember when NetworkManager came out in RHEL? It was originally for laptops switching between different wifi networks, but it was slowly modified for usage on servers, and then by RHEL 7 it was basically made a requirement for RHEL servers... this whole "it's for laptops" comment concerns me that this will follow the same route NetworkManager took and eventually become a requirement.
It is easy to say that systemd leaves a lot to be desired, but the problem is that nobody else made an alternative that gained any kind of wide-scale adoption to replace the very much legacy sysvinit. So yes, while there are systems much more in line with the Linux design philosophies, they weren't popular. Generally speaking, this can be put down to the alternatives leaving far more to be desired, or failing to meet some basic criteria in real-world usage.
The last good thing I remember Atari doing was RollerCoaster Tycoon 3, but they fell out with the developers, Frontier. So when, years later, Frontier and Atari went head to head with new theme-park tycoon/sim games... avoid RollerCoaster Tycoon 4 and get Frontier's Planet Coaster instead. Atari cheaped out and went for the cheapest developers they could find; after going through two different developers they ended up with a mobile games developer... for what is supposed to be a relatively high-end PC game... it did not end well. Planet Coaster meanwhile continues to go strong.
As for Frontier, well before Planet Coaster they had already made Elite Dangerous, and since then Jurassic World Evolution, with Planet Zoo due out this November.
Geo-boffins drill into dino-killing asteroid crater, discover extinction involves bad smells, chilly weather, no broadband internet...
Re: Hooray for science!
People literally died from the heatwaves this year in the UK... great time for wearing a winter coat, for sure. We have had two extremely hot summers in a row; it is becoming far more commonplace now, with 2018 competing for the hottest summer on record and 2019 having the hottest July day on record for the UK. This is what some people would call a worrying trend.
Not sure I can even follow this one. Are you claiming that US law is better than EU law? That doesn't sound remotely true, given that in this case it is the EU and not the US that brought Qualcomm's anti-competitive behaviour to the courts. Is there perhaps a similar action in the US?... given the parent companies are both US companies... well?
Most US law is traditionally based on UK law anyway, given the Constitution drew on the Magna Carta. Also, the banning of Huawei is just Trump throwing his toys out of the pram because a Chinese company is doing better than American ones; the UK actually did a FULL analysis of the security issues within Huawei's 5G implementation and found numerous issues, but none regarding intentional backdoors or the like. Everything you're saying here just seems to be faith-based patriotism with zero facts, logic or rationale behind it.