Re: New machine for a new CPU? What a waste of $!
"You wanna sell me a product worth my time, money and interest, Intel? Come back with a modern instruction set architecture."
Sorry, but they killed Alpha AXP more than a decade ago.
"There are several newer languages that look very interesting, doing approximately the same thing as Java- Rust, Go, Julia, & Swift possibly."
There is also an older language, Oberon, from which Java and, especially, Go were derived, and which arguably could do as good a job, if not better. The only problem is that it is a Wirth language, and must, as a consequence, be ignored. Of course, it does need a set of libraries that could serve as a drop-in replacement for the Java libraries, although it already has libraries that are almost functionally equivalent.
To me, Java always had characteristics of a language designed by committee; a committee that thought every fashionable thing must be included in the language.
Roo made a statement that comes closest to a question I have.
"At the end of the day it's your choice to make excuses for vendors with massive margins, personally I would like them to actually fix the defects in the products that folks buy from them."
So my question is as follows.
Which part of the Windows kernel is its trusted computing base? That is, which part is responsible for guaranteeing the invariant of the operating system?
In this case, I think it is about much more than just "defects". The problem is closer to conceptual. Who has sat down, considered, and decided the question of what will be invariant in Windows? What was the outcome of those deliberations? That is, what did they decide will be the invariant the system must guarantee?
The entire OS kernel can't be the trusted computing base; that is far too large (last I heard, the Windows kernel was more than a million lines of code). Only with a small, well-defined trusted computing base can one make sense of questions like what would motivate putting something like graphics or font drivers in the kernel.
"Don't think 6-to-4. Think 4-to-6 (as in what if it's the IPv4 device that has to connect to an IPv6 device, not the other way), using only existing IPv4 protocols."
That's just it: 6-to-4 provides for bidirectional communications, initiated from either side. The same mechanism works in both cases, and only the IPv6 networks need to be adjusted to accommodate the IPv4 nodes.
"And as others have noted, some networks shouldn't be directly-addressable, not trusting in the filtering capability of the firewall (which they feel can be bypassed), which means that aspect of IPv6 is a liability."
To the extent that a statement like "some networks shouldn't be directly-addressable" has meaning, it reflects a serious misconception. If by "directly-addressable" you mean routeable, the only network that really fits that description is an isolated network. The instant you enable transit of datagrams from that network to topologically exterior networks, whether via NAT or simple forwarding, that description breaks down. A network is either "directly-addressable" or it is unreachable. That is, a datagram addressed to a node on that network will reach that network intact, will reach it modified, or will not reach it at all. If it does not reach, then the address used as the destination was not valid; no need for further consideration. If it reaches intact, then the network was directly addressed, and the datagram was not changed in flight. If the datagram reaches its destination modified, you are now left to decide on the trustworthiness of the datagram and its contents. I don't know about you, but I like to have my data arrive intact.
Net 10 IP addressing (private IP addressing) is ultimately just a convention about router filtering rules: datagrams with one of the private IP addresses as a destination MUST NOT be forwarded onto the global Internet. Besides the fact that IPv6 does indeed specify such address types (unique local addresses), there is nothing set in silicon that makes that necessarily so. In fact, there is indeed leakage of such IPv4 datagrams onto the global Internet, usually arising from misconfigured border routers. The only advantage of Net 10 addressing is that sites using it can maintain their internal addressing without considering changes from their upstream transit providers. IPv6 makes provisions for that also.
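The "just a convention" point is easy to demonstrate with, say, Python's standard library: nothing in a datagram marks an address as private; `is_private` is simply a lookup against the reserved ranges (RFC 1918 for IPv4, the unique-local range for IPv6).

```python
# Private addressing is a table of reserved ranges, not a property of the
# datagram itself: is_private just checks membership in those ranges.
import ipaddress

print(ipaddress.ip_address("10.1.2.3").is_private)      # True (Net 10, RFC 1918)
print(ipaddress.ip_address("8.8.8.8").is_private)       # False (globally routeable)
print(ipaddress.ip_address("fd12:3456::1").is_private)  # True (IPv6 unique local, fc00::/7)
```

Whether such a datagram actually stays off the global Internet is entirely up to the filtering rules on the border routers, which is exactly where the leakage mentioned above comes from.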
"Then you have the IT people working behind the scenes, the ones who have to work the nitty-gritty of the network: especially when things go wrong. These people need to be able to talk low-level, and in terms of low-level, IPv4 was at least within reach for most: four numbers no higher than 255."
Not at all. That is just pure nonsense. In what way is a 16-octet string (the IPv6 address) more complicated than a 4-octet string (the IPv4 address), other than being longer? I troubleshoot networks every day, and besides rarely needing to go low-level and deal directly with the bit-level data on the wire, or with its hexadecimal equivalent, I am usually able to trace a connectivity or performance problem without going that deep. Are you honestly going to assert that it is prohibitively more difficult to deal with a 16-octet string represented as quads of hexadecimal digits than with a 4-octet string represented as triples of decimal digits? I actually find it easier to convert hex to binary and back than to convert between decimal and binary. Things might be different for you. But if you are honestly making that assertion, aren't you just admitting that some workers don't like to consult manuals and other documentation? In what way is that a problem of IPv6?
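For the sake of illustration (a quick sketch using Python's standard `ipaddress` module): both address families are just octet strings, and hex converts to binary mechanically, one nibble at a time, which is exactly why the hex representation is no harder to work with.

```python
# Both address families are plain octet strings; only the length differs.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(len(v4.packed), len(v6.packed))   # 4 16
print(v6.packed.hex())                  # each octet is exactly two hex digits
print(f"{0xd:04b}")                     # 1101 -- one hex digit is one nibble
```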
Another idea implied in your statement quoted above is that the IPv6 header and datagram are more complicated than their IPv4 counterparts. Again, that is not true. The IPv6 header is actually _simpler_ than the IPv4 header, because several of the options and flags that would have appeared in the IPv4 header do not exist in the IPv6 header. If options are needed, they may be placed in IPv6 extension headers. True, IPv6 defines several different extension header types, but it defines them to have a simple structure, with simple rules for processing: you process the headers in the order that they appear. Thus IPv6 datagram processing has definite semantics, based on the headers appearing in the datagram. IPv4 header processing is definitely not that straightforward.
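To illustrate the "process headers in the order they appear" rule, here is a deliberately minimal sketch (not a real parser; no error handling, and it assumes a well-formed packet): each header names the next one, so a single loop walks the whole chain.

```python
# Minimal sketch of walking an IPv6 header chain. Header numbers are from
# IANA's protocol-numbers registry; the fixed header is always 40 octets.
EXT = {0: "hop-by-hop", 43: "routing", 44: "fragment", 60: "dest-options"}

def walk_headers(pkt: bytes):
    chain = []
    nxt, off = pkt[6], 40             # Next Header field; skip the fixed header
    while nxt in EXT:
        chain.append(EXT[nxt])
        nxt, hdrlen = pkt[off], pkt[off + 1]
        off += (hdrlen + 1) * 8       # length is in 8-octet units, excluding the first 8
    chain.append({6: "TCP", 17: "UDP", 58: "ICMPv6"}.get(nxt, str(nxt)))
    return chain

# fixed header whose Next Header says "destination options" (60), followed by
# one 8-octet destination-options header whose Next Header is UDP (17)
pkt = bytes(6) + bytes([60]) + bytes(33) + bytes([17, 0, 0, 0, 0, 0, 0, 0])
print(walk_headers(pkt))              # ['dest-options', 'UDP']
```

Compare that to IPv4, where options live inside a variable-length header and the header length field has to be consulted before anything else can be parsed.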
"That assumption is part of the problem. Reality doesn't hold up to this, as there really ARE plenty of hardware fixed to IPv4 and incapable of being upgraded to IPv6."
Yes, indeed, it is not trivial to make the transition to IPv6, but it isn't as complicated as a skeleton transplant either. I really should not have tried to make it sound like it is trivial. I apologize.
That said, IPv6 was meant to coexist indefinitely with IPv4. Various 6-to-4 mechanisms have been provided to ensure that the initial islands of IPv6 can interoperate with the ocean of IPv4. Eventually, when adoption of IPv6 becomes widespread, there will still be (probably very large) islands of IPv4, and those same 6-to-4 mechanisms will be what allows them (the IPv4 nodes) to remain online. Thus I think my main point still stands: you do not need to take down your IPv4 networks to build out your IPv6 network.
I actually think what you say in your last paragraph, Charles 9, is full of misconceptions. I will answer that in another reply to your post.
"Sure, but you need to persuade me why I should upgrade now rather than wait until these unspecified new services are available."
The move toward usage of IPv6 is not so much an upgrade as it is a transition. It was never meant to be an _upgrade_ per se. It was meant to overcome several limitations that became apparent even when less than ten percent of the world's human population was connected to the Internet. It was meant to address the question of what should be done with IP when the human population greatly exceeds the roughly 4.3 billion IPv4 addresses. It was meant to address the question of what should be done when each of those human beings desires to walk around with several internet-connected devices. It was also meant to enable and support the return to the end-to-end interaction and security model that was the original vision of the global Internet.
As I said in another comment, networks exist to afford connectivity. The value of that connectivity grows with the second power of the number of connected entities. More likely than not, I would never be able to make an argument that could convince you to move immediately to IPv6, but I honestly don't believe I need to. You may begin to use IPv6 any time you like, and you should do so only when you perceive the value of connectivity to the global Internet via IPv6 exceeds that for IPv4. I could never imagine what is in the pipeline that is only available on IPv6 that will convince you to move right away.
I do believe that eventually you will, though, simply because the transition is so easy; simply because, more likely than not, you do not need to replace any equipment; simply because when you do decide to make the transition, the environment to support it will already be in place. IPv6 does not require the Internet to "take a holiday" to make the transition.
"The most common use-case of multicast is live video streaming to lots of people at the same time, eg internet TV or video conferencing. But you can do that over IPv4, and lots of people already do."
"Of course there are many people who are perfectly happy with normal HD, or even Standard Definition video, but it is something to run with. At the moment we have nothing."
Yeah, but if the world sticks with IPv4 we will have nothing. One of the biggest advantages is the potential to accommodate a collection of new users and services that FAR EXCEEDS the current size of the Internet. I don't believe that is something to sneeze at. We may find that some applications we currently deploy as unicast (multicast was just one example) might scale more reliably as multicast applications when we start talking hundreds or thousands of times more connecting parties than we have at present, but such scaling is currently precluded by a lack of address space.
But in the end, IPv6 is (in my opinion) something akin to the LASER. Initially, nobody had any idea of what would be possible.
"Except that kind of talk doesn't sink in with the laity. You gotta be able to sell the stuff in simple "buy or die" English. Otherwise, your spiel will just go WHOOSH! over a mob of glass-eyed slack-jaws."
Aha! Fair enough. But that is the classic "If we build it, will they come?" argument. IPv6 has already been built. Will they come? Well, based on the uptake statistics, you may be amazed that the uptake has exceeded all your wildest expectations. Or you may be driven to despair. I suppose it depends on where you started from.
But let's turn the argument around somewhat. If they come, will we build it? In the territories (like Asia) where it is difficult enough to come by a comfortable-sized IPv4 address block, people are indeed adopting IPv6, but in many cases, they find themselves needing to employ means like tunneling to obtain IPv6 transit service. Why should that continue? Why shouldn't all the local ISPs offer native IPv6 transit service? The upshot of that deficit of native service is that customers (or potential customers, at any rate) might give up in frustration.
I accept that "my spiel" must come out in "BUY OR DIE" simple language, but really, is it true that everyone is so dim that they do not understand opportunities for new services? That they do not understand NEW CUSTOMERS? I posit that the persons who need simpler language than that are not the persons we need to try to influence. Those are most likely the persons who will go along with whatever is already available, and not think too much about how it's delivered. But seriously, I don't know what to do with respect to that.
"One thing Google, Netflix et al could do is make Ultra HD video available only on IPv6. Then people would have a reason to switch, and ISPs could sell it as a premium service."
I'm no longer much of a believer in restricting a service to one network to compel adoption of a new network. It is the business of a network to afford connectivity, and things should "just work". By that, I mean that services like Netflix should continue to work without the user needing to think of, or even to know about, which network he uses to gain access to the service. A sort of 'principle of minimum disturbance' if I may.
It is fair to argue that you don't get much adoption if there is nothing new or unique about connecting via IPv6, but I would answer that the potential for accommodating new services and new customers with IPv6 (especially multicast-based services) so greatly exceeds what is possible with IPv4 that this could be persuasive all by itself.
"I also gotta wonder why a company that sells printers, computers, servers, enterprise services, and networking equipment is incapable of converting over to IPv6, thus permitting it to sell its two class A networks."
Anybody who has publicly routeable IPv4 addresses will probably hold on to them, at least until adoption of IPv6 reaches maybe the seven-nines percentage range globally. The primary reason, as I see it, is that the easiest way to maintain reachability between IPv4-only networks and IPv6 networks is to maintain some number of dual-stack hosts, where the IPv4 address of each such host is one of the globally valid IPv4 addresses in the pool you have. And that's just it: you keep the IPv4 addresses in a pool, allocate from that pool when an IPv6 node needs to communicate with an IPv4-only node, and release the allocation when it is no longer needed. Initially, configuration may be done manually, but as things progress, you should see movement towards automatically maintained address allocations and pools, and then towards globally shared pools once IPv6 adoption becomes widespread enough. That is when I suspect you would see current holders of IPv4 address allocations releasing them back to organisations like ICANN.
If that ever happens, one possible benefit would be a much more coherent arrangement for IPv4 routing infrastructure.
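As a rough sketch of the pooling idea described above (the class and names here are purely illustrative, not any real implementation): park the global IPv4 addresses in a pool and lease one only for the duration of a conversation with an IPv4-only peer.

```python
# Toy model of an IPv4 address pool leased to IPv6 nodes on demand.
# Everything here is illustrative; real deployments would use NAT64/DNS64
# or similar mechanisms with timeouts and persistence.
import ipaddress

class V4Pool:
    def __init__(self, cidr):
        self.free = list(ipaddress.ip_network(cidr).hosts())
        self.leases = {}                    # IPv6 node -> leased IPv4 address

    def allocate(self, v6node):
        if v6node not in self.leases:       # reuse an existing lease if present
            self.leases[v6node] = self.free.pop(0)
        return self.leases[v6node]

    def release(self, v6node):
        self.free.append(self.leases.pop(v6node))

pool = V4Pool("198.51.100.0/29")
addr = pool.allocate("2001:db8::42")        # leased for one conversation
pool.release("2001:db8::42")                # returned to the pool when done
```

The step from here to globally shared pools is "only" a matter of coordinating the lease bookkeeping between organisations, which is why I expect it to come last.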
"'Voice' is an application, if the provider can't handle the carriage of end-to-end voice communications over a mixed IPv4/IPv6 infrastructure then I suspect neither can they support the end-to-end carriage of voice between a VoIP phone and a fixed phone or mobile phone."
Oh, I wasn't talking about carrying VoIP over mixed IPv4/IPv6 infrastructure. I (though I did not make that clear; sorry) was referring to the situation where a given IPv4 customer was not able to get his IPv4 addresses via any other means than by Carrier Grade NAT. In that situation (and I can speak here from experience trying to deploy real-time communications to traverse NAT), there is significant, perceptible degradation of quality-of-service compared to native IPv4, or native IPv6. Even in the case of having to deploy in a 6-to-4 scenario there was better quality of service once you were able to otherwise keep NAT out of the picture.
"IPv6 reaching IPv4 has never been the problem. That's why there's a block set aside for the purpose. It's what happens when IPv4 has to reach IPv6 and the hardware's too old to be able to learn accommodation techniques yet are not in a position to be replaced anytime soon. Particularly if it's the IPv4 side that has to initiate the conversation."
If you are currently using a computer running IPv4-only, you are demonstrating that there is no problem the other way, either. The Internet backbone went IPv6 long ago, and thereby demonstrates support for IPv4-to-IPv6, IPv6-to-IPv4, IPv4-to-IPv4, and of course, IPv6-to-IPv6. It really isn't that much of a problem.
"Can you please explain why my ISP has to give me IP addresses for my internal devices when I only need 1 or maybe 2 public addresses that could cover all my needs ? Isn't this a waste ?"
Your ISP doesn't "have to". If you or your ISP can show that you will never need more than one subnetwork, your ISP is fully prepared to hand you a /64, which will allow you to have a single network whose nodes may take any 64-bit interface identifier under the prefix the ISP handed you. If you and your ISP agree that a given link will need no more than the single nodes at each end, the ISP is fully prepared to assign a /127 to the link. But those two are by far the less common cases.
And as another commentator replied, that arrangement actually _simplifies_ the routing computation, and in particular, reduces the space requirement.
"My company has enough public IP4 addresses to satisfy demand now and for the foreseeable distant future."
It is not a question of whether your company will need IPv6 to connect to its internal devices, or to its existing customers. Presumably your company is doing well and accumulating new customers. What are you going to do when (not if) you gain a new customer who was never able to obtain IPv4 addresses?
What would you do if you choose to adopt a VoIP provider who, in order to assure quality of service and low connection latency, uses native IPv6, rather than Carrier Grade NAT on IPv4?
IPv6 is a part of your future, and you probably can't avoid that.
"Not up on these things for a while (naughty me) but didn't I see that subscribers are dished out /48s or /32s of IPv6? Might not be as unlimited* as people think?"
I don't know what you are talking about.
Most enterprises get a /48, which allows them to create up to 65536 subnets with 64-bit interface identifiers; most ISPs get a /32, which allows them to delegate a /48 to each of up to 65536 customers.
Most individuals and home subscribers get a /56, which allows them to create up to 256 local subnetworks, each subnetwork having nodes with 64-bit addresses. Address space might not be "unlimited", but most subscribers will find what they get is definitely sufficient.
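The arithmetic behind those figures, sketched with Python's standard `ipaddress` module: the bits between the delegated prefix length and the 64-bit subnet boundary are what you number subnets with.

```python
# Subnet counts fall directly out of the prefix lengths.
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")
print(2 ** (64 - site.prefixlen))              # 65536 /64 subnets in a /48
home = ipaddress.ip_network("2001:db8:0:ff00::/56")
print(2 ** (64 - home.prefixlen))              # 256 /64 subnets in a /56
print(2 ** (48 - 32))                          # 65536 /48 customers in an ISP /32
```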
Something I think we should all remember: IPv4 only nodes will continue to be reachable as long as they remain up, even after the world has mostly adopted IPv6.
While IPv6 is meant to be a successor to IPv4, adoption of IPv6 DOES NOT require disabling IPv4. If you have IPv4-only nodes on your network, you may configure some of your IPv6 nodes (usually routers) to perform the task of transferring traffic between the IPv4 network and the IPv6 network. I suspect that this is the arrangement that will be in use for the foreseeable future. IPv6 is here, and it is not that much of a skeleton transplant to get it going.
"Install your own then, dumbass."
I take it you're a Fordnatic, then. I doubt you love Fords more than I do. I am a second-generation Ford fanatic, actually. My father is a Ford fanatic.
But while I was plugging for the manufacturers to put a kill-switch and fuel shutoff switch as OEM equipment in new cars, I dare say you have indeed put an idea into my head. I think I will install such a kill switch. You never can tell when it'll be needed.
Off to the drawing board for me!
"Note that the usual fuel cutoff mechanism won't help in the case of a diesel engine which for some reason is burning lubricating oil, which does happen. Hence the earlier suggestion of blocking the air intake."
Yeah. That. There's nothing worse than a diesel engine that's so worn that it's started burning oil. The only way to stop a runaway is indeed to block the engine's air intake. I've had to pull apart diesels that truly destroyed themselves by running away. Not a pretty sight.
Yeah. I'm on my third Ford.
Found On Road Dead
For Only Rum Drinkers
Fix Or Repair Daily
Ford Owners Recommend Datsun
Fastest On Race Day
But for utter shite, Fords don't seem to be any worse than any other make, and I've had to fix them all.
As for having a kill switch, I just wish vehicle manufacturers would consider that. Put in a kill switch that both kills power to engine ignition and computers, and shuts off the fuel supply. Thus when you hit the kill switch, the engine has no chance of running until you want it to again.
Will these "Financial Analysts" please shut up already?
Here they go again, prescribing another of those big corporate moves motivated by a desire to "increase share price", and again, no one thinks of what happens when the Customers of both firms start walking away. And they most certainly will, as surely as night follows day.
When will they ever learn?
Thanks, Paul Hovnanian.
Based on your description, it is not clear that functions like output regulation, protection and monitoring need to be disabled if the GCU software should crash owing to the overflow of a finite counter. Is the generator still able to produce output, possibly at some reduced level, when the GCU is disabled, or is the generator completely offline and unable to carry any load?
According to the FAA quote in the article, "causing that GCU to go into failsafe mode", does failsafe mode mean no generator output? That would be rather strange for a safety-critical system like an aircraft electrical power system, and not something I am accustomed to hearing about aircraft systems.
What I don't understand is what is the purpose of the GCU.
From what is described, I see a black box of some sort with a finite counter in it, that is capable of shutting down the generator when that counter overflows.
Given that a generator is a reasonably straightforward thing, what kind of improvement can one have in mind attaching to it a box that shuts it down for something as predictable and maybe trivial as an overflowing finite counter?
Help me. I really am trying to understand.
Finally! I can have my flying car.
After all the decades of waiting, I can finally have and own an object that, at the push of a button, can change itself from the second most energy intensive means of conveyance, to the most energy intensive means of conveyance. How could you not like that, all you naysayers?
"Amazon has a flair for proposing unusual delivery models."
Amazon also has a flair for patenting prior art.
A friend of mine used to operate a 3D printer out of the back of her panel van. And that was where I first heard about 3D printing, almost a decade ago.
I suppose Amazon, if awarded the patent, will figure out some way past the problem my friend encountered that led her to revert to operating the printer in a shop, on a concrete floor. She found that driving around with the machine in her van, despite her best efforts to protect the moving parts, always led to various calibrations going out-of-whack, and she had to spend a lot of time recalibrating. Maybe Amazon has found a way past that that is cheap enough to still turn a profit.
I wish 'em luck.
"Threats can be minimised with a well-thought-out patching strategy, regular penetration testing, layered security defences, threat intelligence sharing and a strategy for introducing new technologies."
That all sounds like locking the stables after the proverbial horse has already bolted.
As necessary as the items in the above list from HP are, they seem to be rather studiously ignoring the real first line of threat minimization.
How about suggesting that people run good code? Isn't it far better to write good code than to install and patch?
It is easier to build the system secure (or correct) than to try to retrofit security onto a deployed system.
That CIO was almost right. The internet is for _science_.
If it wasn't for porn, the Internet would not be half as popular as it is today; porn did for the Internet just what it did for the VCR: it made the use of it desirable.
So arguably, porn was (is?) the killer-app of the Internet.
Let's see how much profit the likes of Google, Facebook, etc., would make if all but business usage was blocked on the Internet.
Yeah. The tool is called System Center Software Asset Manager. Didn't cha know that Microsoft has always sought a way to get more sales of this System Center abomination? Now they have found a way.
I only have three words in this context:
"Ernie" "Ball" "Corporation".
"Why is space and time Minkowskian so the time dimension is singled out as having a different sign in the space time metric. Why? Why not all the same or two timelike dimensions?"
An interesting question that can lead you down quite a long and interesting intellectual path. I can't tell you the why. I can only tell you the what. Here's what I found along the path so far.
Denote by X the spacetime position of a system relative to an observer;
Denote by P the quantity called the energy momentum of that same system relative to that same observer.
Set your frame such that the first component of X is time and the first component of P is energy; the other 3 components of each 4-vector correspond to the x-, y-, and z-axes, respectively.
The first thing you notice is that for any given observer, X and P are Fourier duals of each other.
Next you notice that there is no reason (in mathematics, anyway) why there should not be more than the 4 dimensions of spacetime that we have. The most observable effect of having more than 3 space dimensions would be that the inverse-square law no longer holds: inverse power laws work as 1/r^(D-1), where D is the number of space dimensions. The upshot is that there could be no stable circular or elliptical orbits in such a universe, and structures like galaxies, solar systems, and even atoms would spontaneously collapse. We clearly don't live in a universe where D is other than 3.
If, instead, we lived in a universe where there was more than one dimension of time, we would also necessarily live in a universe where energy was no longer a scalar quantity. Instead, energy would be a vector quantity. Again that does not fit what we observe about our universe.
So how come our model of spacetime is Minkowskian? That has to do with the postulate that the laws of nature are the same for all observers, regardless of their relative positions or their relative motions, together with the recognition that no exchange of information between separated observers can take place instantaneously. That leads us to postulate a fixed, finite speed at which information may be exchanged between any two separated events, the same for all observers. That leads us to recognize that the interval of separation between two events is the same for all observers, which leads us to the idea that space and time are indeed aspects of the same mathematical object. Thus we may measure space and time with the same units. To allow for the fact that we experience space and time differently, we multiply \delta t by a conversion factor, which we call c.
So how do we arrive at different signs for the time-like part and the space-like parts of the space-time vector? We do that by recalling what we observe should we send out a flash of light (or any other message, actually). We observe that the furthest the flash could have reached after any given interval is given by (Pythagoras' Theorem) (\delta r)^2 = (c \delta t)^2, where (\delta r)^2 = (\delta x)^2 + (\delta y)^2 + (\delta z)^2. So if we bring both terms to the same side of the equals sign, and denote the resulting quantity by (\delta s)^2, we find (\delta s)^2 = (c \delta t)^2 - (\delta r)^2.
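If you want to convince yourself numerically (a throwaway check, working in units where c = 1), a Lorentz boost leaves (\delta s)^2 unchanged, which is exactly what lets all observers agree on the interval:

```python
# Check that (delta s)^2 = t^2 - x^2 is invariant under a Lorentz boost
# along x (units with c = 1, so speeds satisfy |v| < 1).
import math

def boost(t, x, v):
    g = 1.0 / math.sqrt(1.0 - v * v)      # the Lorentz factor gamma
    return g * (t - v * x), g * (x - v * t)

t, x = 5.0, 3.0
t2, x2 = boost(t, x, 0.6)
print(t * t - x * x)                      # 16.0
print(round(t2 * t2 - x2 * x2, 9))        # 16.0 -- same in the boosted frame
```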
There's more along the path, but that's about as far as I got.
Oh, like you were able to find it in OpenVMS back in 1998?
You're right. Those facilities really should be built into an operating system. To do it, though, they will actually need to start recognizing separate concepts and referring to separate concepts by name.
In those days, the VMS Process was able to achieve the desired isolation;
We used the VMS Job to achieve further isolation and grouping of related components or applications;
We used VMS Clusters to achieve scale-out and redundancy;
We used VMS Galaxy to achieve multiple operating system versions (albeit of VMS) on the same physical host.
Yea, surely those who fail to learn the lessons of VMS are doomed to reinvent it (expensively).
Too bad the operating system has, through its history, been saddled with such awful owners.
Nonetheless, I am all for what the likes of Docker and even VMware are doing.
For automated configuration, I think cfengine predated both chef and puppet. A copy of cfengine even comes with HP-UX.
cfengine uses a mathematical basis called promise theory, so you can see why it is so popular!
The authors of both chef and puppet apparently knew about cfengine. I always wondered what their objections were that prompted them to create a new entry in that same space.
To quote the Rackspace CEO: "I think people are now saying they will develop greenfield apps for the cloud purposefully."
If only that would happen. Then we could eliminate the Dirty Little Secret of virtualization and of cloud computing. That is, we would be able to eliminate quite a lot of the overhead associated with those two deployment styles.
If you're wondering what I'm going on about, I am talking about the inefficiency of the hardware emulation required to support the guest operating system instance. I'm talking about the guest operating system instance itself, together with all its upper level drivers, interfaces and abstractions. There's quite a lot of productivity to be wrung out of those deployment styles still.
Of course, to really take advantage of all that, the hosting operating systems will need to provide the necessary API's for resource allocation and release, etc. Also, the community will simply have to settle down and choose an architecture-neutral distribution format for specifying applications and resources. I remember reading about some mobile code research going on at the University of California at Irvine, that looked particularly appealing at the time, and based on what little I can remember, still looks well ahead of its time.
Maybe that is the path to the future, I don't know. Might be worth a shot, though.
To me this demonstrates something I have never been able to explain well enough to cloud advocates for them to understand.
This whole episode to me is a demonstration that Microsoft is at least smart enough to be able to make the distinction between a system and its embodiment or deployment. The cloud is not your system, folks. It is the embodiment, chosen after you have a properly specified and designed system, and after you have assessed all alternative deployments for its economic or technical advantage.
The explanation given by Microsoft suggests that they do indeed understand the distinction, and do indeed understand the need to make that assessment.
The long and short of it is that deployment to the cloud is an optimization of a more-or-less well-designed system. You choose where to deploy some or all components of your system after having done the engineering to decide what would be best for your needs. You choose to deploy to the cloud not as some organizing principle, but as a means to achieving the greatest advantage for the business's needs and circumstances, and according to the circumstances in which the deployed system must operate. Any other way is just wishful thinking.
While I'm hopeful, I am not holding my breath.
I really haven't seen anything in the cited information to indicate to me that VMS has a solid future. In particular, I don't see anything that suggests VMS will be safe from an eventual M&A&K (merge and acquire, then kill).
HP still retains all rights to VMS. VSI does not appear to be particularly robust against a possible takeover attempt by a company like Oracle who will waste no time killing it off. The stubborn refusal of the owners of OpenVMS to either popularise or open-source the operating system only serves to make it more likely that should something bad happen, the users will be forced to look for another operating system.
I'll believe it when I see it.
This brings back memories of a young man who became President of a now-defunct technology company. Later he created the position of CEO, to which he appointed himself (not quite the same as the present situation, admittedly). Later still he became the Chairman of the board.
In the months prior to having sold that company to its competitor, this young man had held the three positions concurrently, President, CEO and Chairman of the Board.
Wonder whatever became of him? All I can remember is seeing his face in a GQ Magazine interview back in the day.
Yep. That is how we know that even Microsoft has devolved to just managing its share price.
Sort of reminds me of this obscure company about a decade ago who acquired its competitor in hopes of also garnering that competitor's customers. When said customers stayed away in droves, the company's lady CEO (who later ran for political office and is associated with the USA's Republican Party) would announce a series of layoffs whenever the share price started falling. Invariably the share price would go up a bit for a few weeks. Then it would start falling again, and the cycle would begin again.
At the time, I became convinced that every publicly traded company eventually devolves to just managing its share price.
The pattern appears to continue.
Must be a synonym for groupthink. Whenever I hear the word "alignment" used in a corporate setting, it always looks on closer examination to be used as a shorthand/euphemism for "Here's what you are to think. No variations will be permitted. Any attempts to vary will be met with the most merciless browbeating. Good luck; have a nice day."
"Alignment", "Teamwork", and "Synergy", whenever used in a corporate setting make me cringe in anticipation of someone using a hammer drill on my lower molars.
Please accept my apologies; I am not normally cynical.
I can only imagine the kind of systems you could build with that.
Non-volatile, fast access, byte addressable, high-capacity storage? You got it.
Carve it onto a die with a 64-bit processor core, and what do you get? Active memory.
Given the form factor, I suppose you can have 16 processors and 16 TB in a 150x100x30 mm module.
Man the system possibilities that come to mind if I could have hardware like that.
I'd love to write the software to work with that.
Gadolinium Barium Copper Oxide is usually represented with the symbol(s), GdBaCuO.
Gd for Gadolinium;
Ba for Barium;
Cu for Copper, and O for Oxygen.
To be even more pedantic, B would be the chemical symbol for Boron;
C would be the chemical symbol for Carbon.
So did they actually use Gadolinium Barium Copper Oxide? Or did they use Gadolinium Boron Carbon Monoxide?