Re: Shakespearian question?
I noticed too late that I should have said "they don't need to state a reason for their downvote...."
In principle your response is agreeable. Where I disagree is this: the upvote has value because adding weight to an existing disagreement limits the need for everyone to post the same disagreement. The downvote, however, just enables the "torches & pitchforks" model where people gang up simply because it's too easy to do. They don't need to state a reason for their downvote, nor do they need to find an existing post stating their reason that they can upvote. They just downvote because they disagree or because they want to pile on.
All of this appears to lead to things like karma farmers on Reddit (both up and down) and generally stifles conversation. If every dissenting opinion gets downvoted into oblivion, we don't gain anything good. The foundation of free speech is hearing not only what you want to hear but also the things you may not agree with. The voting system in these forums harms that, in my opinion.
All I'm saying is that, from what I've seen, these votes are divisive. As hollow and vapid as the Facebook "Like" function is, at least they haven't implemented its complementary negative aspect. If you don't like what was posted and disagree, then you have to post something yourself.
Reddit suffers from this same flaw.
Again, I view it as divisive, and it adds very little value. In turn, it makes the thumbs-up somewhat questionable too. It all starts to feel too much like an echo chamber, one way or the other.
We're not being charged by the character here.
Buyer (you) beware. We bought into Cloud Services Automation and Operations Orchestration to go along with Server Automation back in the day, only to have HP/HPE split and throw all of that stuff to Micro Focus, where software goes to die. I tend to like HPE servers, but they have made me very distrustful of any software they offer.
It seems, at its core, that this whole thing is a simple case of freedom of speech and liability. While I tend to agree with what the Supreme Court is saying regarding the number of frivolous lawsuits, I think the case should be considered under the idea that a provider (Google) should be held liable IF they were alerted to questionable content and did nothing about it, or continued to promote it. If the content in question was never reported for any reason and got a lot of hits, then it's tough to fault their algorithms for promoting it. It doesn't seem difficult to have an algorithm that also considers other criteria that would allow them to detect and review questionable content.
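As a purely hypothetical sketch (not any real platform's algorithm), "an algorithm that also considers other criteria" could be as simple as a ranking score that demotes reported content instead of promoting on hits alone. The function name and threshold are made up for illustration:

```python
# Hypothetical sketch: promote on hits, but demote on user reports,
# and pull content entirely once reports cross a review threshold.
def promotion_score(hits: int, reports: int, report_threshold: int = 10) -> float:
    """Return a ranking score; heavily reported content is not promoted."""
    if reports >= report_threshold:
        return 0.0  # pulled from promotion pending human review
    # each report chips away at the hit-driven score
    return float(hits) / (1 + reports)
```

Nothing sophisticated, but it captures the liability argument: once the provider has been alerted (reports cross the threshold), continuing to promote becomes a choice.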
Protecting freedom of speech is paramount, of course. This case isn't so much about that as it is about liability (duh), but many will treat the two ideas as the same thing.
Well, that's really my point. You're already at risk for the reasons you state, but if you lock in to what could end up being a proprietary processor that is literally not available anywhere else, then it's even worse.
Yes, AWS is doing this with Graviton already. Of course, there will be those who say that it's ARM, so it's available in the market outside of these providers, but do we have any assurance that they aren't building to a spec that no one else can use?
How long between when the lock-in occurs and when something goes wrong, at which point you can't move away without costs that go well beyond just egress charges?
Is nobody worried about the lock-in this stuff creates? What is keeping these cloud providers from implementing proprietary code to keep you locked into their service?
I honestly don't get it. While we can all complain about the Intel cost model, the truth is that cloud providers charge significantly more than any server vendor charges for Intel-based (AMD too, of course) solutions. Even when you wrap all of the "capability" of automation and APIs around their services, it's still really difficult to justify the expense.
I personally find ARM very interesting, but there seems to be very little off-the-shelf software for it that is common in the marketplace, which means a lot of people are going to have to do a lot of custom development to support those environments. (I'm sure I'll get downvoted by everyone who loves ARM and believes the software is everywhere.)
Anyway, this is definitely interesting, but it feels like a big risk for large companies to take.
I don't think you understood what I meant by that response. AWS isn't going to suffer any lost money or any increased cost that they can't mitigate by charging their customers more. Until the world sobers up from the current state of cloud drunkenness, AWS and other cloud providers will continue to let the bad behavior run rampant.
Ironically, Intel effectively killed the better alternative to Optane when it vowed not to support the memristor. While the memristor may have had an uphill battle to adoption, it would have revolutionized computing (and still probably will) by blending primary and secondary storage into the same thing. It had a path toward migrating away from the current model without completely abandoning it, but it required companies like Intel to at least consider it a viable technology. They didn't, and opted for Optane instead because they controlled it, and now look at them.
Running your data center on VMware was a very smart move in the current climate of public cloud. The reason it was smart was that it was substantially cheaper, performed better and was more transparent. This is all in the context of the kind of applications that large corporations still run and have a hard time migrating to a modern architecture.
Now, with this kind of stuff, there is almost no reason not to just suck it up and pay a public cloud provider after a lift and shift. While it may cost slightly more, it's within the range of value add that's probably justifiable.
For the record, I'm not sold on public cloud because it's just a new version of the mainframe with a lot of illusion in that walled garden.
This actually makes sense for Intel. IF RISC-V is a superior instruction set to x86 and there is a groundswell of support building, then it makes sense for Intel to fabricate a processor platform based on it. They don't have exclusive rights to x86, so it's not like they are abandoning a crown jewel; it just has a large base of support. They have wanted it to go away for a very long time and may have finally realized that having a proprietary processor instruction set just isn't going to happen.
It is my understanding that all cloud providers and major hosting companies (Facebook/Meta, for example) do this. They buy servers in full racks, and when one fails they just leave it there until the failure rate of the rack, combined with its life cycle, pulls the whole rack out.
You get a downvote. Your comment is analogous to "racism isn't real because I haven't personally seen it".
Cloud providers have earned a reputation of "fire all of your infrastructure people because DevOps".
Cloud technologies offer a lot of very good things, but the real resistance comes from the fact that a lot of companies have cut loose a lot of their key people on the premise that they won't need them in the cloud. We all know that's not true, but it's exactly what providers like AWS sell to companies.
Two key themes continue to pop up in these conversations: exceptionalism and inequality. Neither of these terms is to be taken at face value, and an entire article could/should be written on each of them. Please don't think of either of them in the pop-culture sense, but in the literal, clinical sense.
FWIW, it was one of your UK commenters who opened my eyes to exceptionalism in regard to how my country is handling so many of the challenges it faces today.
I've personally lived through the world of inequality and it's not just a matter of race, religion, sex or socioeconomics. I'll only say that everyone should consider that biology, geography and genetics all seem to contribute.
It's obvious that everyone is rushing to relate their own experiences with this article's subject matter, but why are we looking past the fact that the article itself is a mess? It's almost as if we are coming in on the back end of an argument, with the author answering a lot of questions we haven't actually heard, or soap-boxing about their own personal experiences in the job market that may or may not have relevance to anyone else at all. The whole thing is almost incoherent.
That's not to take away from the actual experiences, and I'm sure all of us have seen examples of them in some form over the years, but man, what a mess of an article.
All good points.
I would say that there are organizations in the United States, called "think tanks", that do exactly the kind of thing we're talking about. However, there have been plenty of statements in the news questioning the validity of their work because of the potential influence of their sponsors.
Government is often no better in this regard because of lobbyists, politicians, etc.
This whole thing just feels like a much bigger social problem than anything else.
Isn't this just a case of outside funding being the source of corruption? How is government any less corruptible than corporate sponsors? Isn't the very act of material exchange the source of the corrupting influence? It seems that ethics would be required at every single level of that particular mechanism to "ensure" no corruption, and that seems unlikely.
What if funding were established through a double-blind mechanism where the funding source and the researchers were shielded from each other? Could that be a way to make it work?
To add to that note: the car industry is filled with ACTUAL engineers who have a formal, scientifically based education built on a solid standards model that isn't controlled by companies selling a product. It is governed by ACTUAL governments and must follow a lot of regulations. (I know, I know, I'll get flamed for suggesting that all of us geniuses aren't ACTUAL engineers.)
Why? Because cars can cause death, injury and significant loss of real property when they malfunction or are misused. This is very basic and doesn't capture everything else involved, but it's certainly foundational.
Just for fun, imagine a world where the same rigor applied to designing, building and operating a car were applied to the IT industry. It would be very entertaining, at the least.
Diabetic ketoacidosis is such an avoidable tragedy. As the parent of a child with Type 1 diabetes, I've developed a pretty good understanding of the disease, and getting that far gone with ketoacidosis is heartbreaking. Modern medicine really does know how to handle the condition, so it makes me sad to think that they didn't manage to catch it in time.
Regardless of Dan's significant place in the world, seeing anyone succumb to such a condition really does suck.
That one is pretty easy. As much as I personally dislike Apple, they make products that work and that make their customers happy for a price they can accept.
Dell/VMware makes a combo that is no more reliable than any other hardware/software combo, offers lousy support and makes it extremely expensive. Of course, we can really thank EMC's influence for that last part. VMware does its level best to make it work, but Dell's reputation for, at times, shoddy hardware takes the cake.
Remediation for these things IS to replace the running container. Using SSH to troubleshoot a running container in production is really missing the point.
Treating a container as a simple virtual server replacement is missing the point and asking for trouble. Fix the problem in dev and roll it to production. You shouldn't need SSH in production; it's a crutch.
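The replace-don't-debug workflow can be sketched in a few lines. This is a hypothetical helper (the container and image names are made up), shown as the commands you would hand to the standard `docker` CLI rather than opening a shell into the broken instance:

```python
# Hypothetical sketch: replace a failing production container with a
# fresh instance of the known-good image instead of SSH-ing into it.
def replacement_commands(container: str, image: str) -> list[list[str]]:
    """Build the docker CLI commands that swap out a failing container."""
    return [
        ["docker", "rm", "-f", container],                     # discard the broken instance
        ["docker", "run", "-d", "--name", container, image],   # recreate from the image
    ]

# In practice you'd pass each command to subprocess.run(); an
# orchestrator like Kubernetes does this replacement for you.
```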
OK, I'll admit that I had not revisited this in some time, but a little research indicated that I was still on target, though I can see why people would vote it down.
x86-64 can operate in long mode and supports 64-bit operands there. However, it defaults to 32-bit operands.
I'll admit that I am having a hard time finding any confirming data but the general feel is that there are many apps using 64-bit virtual address space but not as many using 64-bit operands. I admit that I may be wrong about that however.
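The addressing-versus-operands distinction is easy to see from any language's view of C types. A small sketch, assuming a typical 64-bit (LP64) platform where pointers are 8 bytes while the default C `int` stays 4 bytes:

```python
import ctypes

# On a typical 64-bit platform, pointers reflect 64-bit virtual
# addressing while the default int remains a 32-bit operand.
# Sizes are platform-dependent; these are common LP64 values.
sizes = {
    "pointer (c_void_p)": ctypes.sizeof(ctypes.c_void_p),   # usually 8
    "int (c_int)": ctypes.sizeof(ctypes.c_int),             # usually 4
    "long long (c_longlong)": ctypes.sizeof(ctypes.c_longlong),  # usually 8
}
for name, n in sizes.items():
    print(f"{name}: {n} bytes")
```

That is, code can use the full 64-bit address space without ever touching 64-bit arithmetic operands.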
Thumbs up for the excellent technical discussion.
I'm not implying that clock cycles have anything to do with this. In fact, I'm suggesting exactly the opposite. The core processing on x86 is still 32-bit by default, which is why everyone in the world didn't have to recode their apps just to run at all on x86-64.
It's sad that true 64-bit processor architectures haven't taken off in the mainstream. Between Itanium, DEC Alpha, SPARC and Power (though SPARC and POWER9 are still around), and I'm sure there are others, the market had a future that could have been ramped up much like the current (polish-a-turd) x86 architecture (which is actually IA-32 for Intel and RISC64 for AMD via NexGen). With x86 we are increasing actual performance only incrementally, and we could have seen a bigger leap by now. Of course, that requires everyone to adopt 64-bit processing, and a lot of code would have to be redone no matter where you run it. It's not just 64-bit memory addressing, which didn't necessarily need such a major overhaul.
Oh well, such is life.
Computers are so expensive these days! He can't be expected to have a spare, especially when he's willing to pay someone 80K for blah, blah, blah.
Of course he could just do his research in a machine in the cloud that he destroys and recreates hourly. Nah, that would never work.