Well, some of us still remember rec.aquaria vs. sci.aquaria from the '80s, perhaps the first online flame war. At least, the first I encountered.
-P.
I understand that when a building is put up, the builders are supposed to supply an "as built" plan to correct the original architect's and engineer's "as planned" drawings. I don't know how often they actually do so, but either way, I wish this were done in SW development.
In an Agile project I was involved in, the spec for each sprint was published, but more often than not, something had to be changed along the way. The changes were never documented. The net result was that there was no written documentation for the program "as built"; knowledge resided in the recollections of the product and project managers and developers. Since I was an add-on to the project, in a product management role, brought in to design (and assist in implementing) a particular facility, I did quite a bit of testing to make sure it worked in context.
Once I saw something surprising happen and asked someone who had been on the product management team at a high level for a long time, "What is the program supposed to do when X happens?" He couldn't tell me. He said, "Why don't you try it and see?" Well, I had already tried it and seen, but it was not satisfactory. What I was really after was "What is SUPPOSED to happen" in this situation. I didn't know whether what I had seen was an implementation failure or a design failure.
IIRC, the late, lamented Fred Brooks said that every project of any size requires a "Librarian" to keep track of project documentation and keep it up to date. That book (The Mythical Man-Month) was written in the waterfall era, and I think technologists using more recent development paradigms have largely assumed that this role is no longer required. But my (limited) experience, described above, leads me to believe that it is needed as much in the agile era as in the waterfall one, or maybe even more so.
Not exactly. The question has always been how far the "unreasonable search and seizure" clause in the Bill of Rights extends to anything and everything a person may do privately. The upshot was that the decision about making abortion legal was thrown to the states. In the four states in which that question was put to a vote in the most recent election, all four (including Kentucky, a very conservative state) voted to make it legal.
Ruth Bader Ginsburg, a liberal of liberals and a feminist, joined the court after Roe v. Wade was originally decided. She stated publicly that she felt the case had been incorrectly decided and should have been left to the states, which is, in fact, what just happened.
@Neil Barnes
I'm dubious about the utility, except maybe in extreme cases. I wouldn't assume it's MBA driven. Surveillance and AI based upon it might be described as a solution in search of a problem (if it's a solution to ANY problem), and it's the kind of thing that extreme techies might suggest to management as an aid to performance appraisal. (Shudder.) Not to even mention that the belief that Big Brother is Watching You is not exactly conducive to a happy, healthy work environment.
As to relationships between employers and employees in the U. S., it all depends on the company and which employees you are talking about.
I am retired from a company that has over the years consistently been cited as one of the best places in NY (and even the US, I think) to work.
Amazon, on the other hand, is sometimes cited by employees as terrible. I don't know how much of that is belly-aching to arouse public support for unionization. But even some high-level employees, including one whom I know and respect, have left after a relatively short time, commenting that it is a bizarre and awful work environment.
But then I know another who spent several years there and left for a position that interested him more. He respected the Amazon Weltanschauung. He worked there over twenty years ago, a long time ago in the life of Amazon, but still, he liked it, though he was challenged and worked long hours. I suspect that warehouse workers are more likely to get surveilled than either of these friends.
Regarding key logging, etc. on employees' computers, the question is who owns the computers. If they're talking about computers owned by the employees, it's very bad. If it's computers used by employees that are owned by the employer and used exclusively for work, then I don't see a "rights" problem with it. Use your own computer when you're off the job; on the job, you already know that your email is monitored, etc. Why would you use the device for anything that's not work-related, anyway?
That's not to say that I think it's likely to be very useful to the employer.
-P.
Google's Pichai "admitted that Google's AI voice assistant sometimes fails to understand and respond appropriately to requests. 'The good news is that anyone who talks to Google Assistant — while I think it is the best assistant out there for conversational AI — you still see how broken it is in certain cases,' he said."
Wait... what if it's sentient but merely hard of hearing?
... and indeed I may be wrong.
It seems to me (as a professed amateur and dilettante) that:
1. Cloud instances usually are VMs. If so, how can you tout the Cloud while denigrating VMs? (A quick way to check from inside an instance is sketched after this list.)
2. Yes, VMs have a large attack surface, so security indeed could be a significant concern. But has this manifested itself in recent years?
3. Not all software, internally or externally developed, has been containerized; such applications constitute a rather large library in many or most commercial settings, so VMs seem like an excellent solution for keeping them running.
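On point 1, a minimal sketch (mine, not from the article) of how you might confirm this from inside a Linux cloud instance; it assumes systemd-detect-virt is installed, which it is on most modern distros:

    # Ask what hypervisor, if any, we are running under. On a typical
    # public-cloud instance this prints kvm, xen, microsoft, etc.;
    # on bare metal it prints "none" (and exits nonzero).
    import subprocess

    result = subprocess.run(["systemd-detect-virt"],
                            capture_output=True, text=True)
    print(result.stdout.strip() or "none")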
May I remind you that it is completely legal in every way to make commercial use of FOSS code? Think of all those commercial HPC clusters running Linux.
And there has never been a prohibition against reading FOSS code, seeing what you can learn from it, and using what you have learned when writing your own proprietary code. Yes, fuller attribution would have been gentlemanly, but there is no FOSS requirement to be gentlemanly, as anyone who has listened to Linus or Stallman over the years well knows.
Those who point out that most FOSS code is crap are likely right, but there is no justification for assuming that the training process treats it as all great. Though I don't know what the process was, it is extremely unlikely to make that assumption, especially since FOSS code varies strongly from case to case in style, safety and correctness.
The question was raised in the article whether Intel would optimize for ARM as well as Intel architectures. But if the focus is on HPC, they would also have to take GPUs (especially Nvidia CUDA) into account. This strikes me as a much tougher problem.
But presumably, even in its absence, there are a lot of non-HPC workflows (and HPC applications that cannot be, or have not been, CUDAfied) that would benefit.
This was my first question as well. Could not the authoring software place the answers, together with the grading infrastructure, on a separate domain that requires its own authentication?
Since this issue was identified so long ago, I wonder if it was ever reported to the companies that write the testing software. It does not sound hard to fix this at the app level.
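For illustration, a minimal sketch of the sort of fix I mean (endpoint name, token check, and answer key all hypothetical, not any vendor's actual API): the browser submits answers, and the key never leaves the separately authenticated grading server.

    # Hypothetical server-side grading service (Flask), living on its own
    # domain behind its own authentication. The answer key stays here and
    # is never shipped to the student's browser.
    from flask import Flask, request, jsonify, abort

    app = Flask(__name__)

    ANSWER_KEY = {"q1": "b", "q2": "d"}  # known only to this server

    @app.route("/grade", methods=["POST"])
    def grade():
        # Stand-in for whatever real session authentication the vendor uses.
        if request.headers.get("Authorization") != "Bearer EXAM-SESSION-TOKEN":
            abort(401)
        submitted = request.get_json(force=True)  # e.g. {"q1": "b", "q2": "a"}
        score = sum(1 for q, a in submitted.items() if ANSWER_KEY.get(q) == a)
        return jsonify({"score": score, "total": len(ANSWER_KEY)})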
The one good idea is assigned places. Everything else sounds really awful. I don't know if the leader can assign places, though, which would be useful, especially if we want to go "around the room" making comments or brief presentations. And all that wasted space in the corny elementary-school auditorium mockup. And if I want to turn off the camera, it's usually because I'm still in my underwear. Oh well; black electrician's tape might still work. (Well, not for underwear, but you know what I mean....)
Security guru Bruce Schneier recently blogged that he likes Zoom and that they have fixed the most egregious recently disclosed security flaws. He also mentioned that they have some ways to go with key management and with security for the free app, and that the web version remains unencrypted. But he still uses it, even for corporate business. And he likes the feature set.
Others have made the same point I'm making without using the words I just used.
Technical debt is when you do a rush job and don't write your code in the manner currently mandated for maintainability and extensibility.
There is no evidence that this happened when the code was written. And (pointed out by others) it's worked fine all along.
This is purely a matter of code and infrastructure being too slow to handle the suddenly increased load: a capacity problem, as someone else pointed out. They could upgrade their hardware (still possible, as others have pointed out) or contract out the excess, presumably to IBM. The latter is presumably the wiser course, because this rat will work its way through the snake in time.
You don't have it straight. You have it backwards. OSS is the big company, and they lost the case against Perens, paying $300,000.
The article concludes, "As to whether OSS's redistribution terms violate the GPL, that has yet to be tested in court."
So OSS has lost one case so far and the second case has not yet been adjudicated, or, possibly, even filed.
If Microsoft Edge Developer Experience was really a 1970s prog rock band, what role would an individual with a name like Kyle Pflug have?
(a) Lead singer
(b) Front man
(c) Manager / Publicity agent
(d) Roadie
(e) Bus driver
(f) Write in your own answer: __________________________
The contractors should get together and start an offshore operation, say in India or the (shudder) EU. That offshore operation then gets contracted for services, takes a small cut, and hires the contractors. Or the contractors could even be employees of the offshore enterprise. Perhaps the contractors themselves would cooperatively own the contracting body.
You're welcome,
SATD.
"The big guys spend 1.5-2 X more on selling as they do on actual R&D."
Yes, but all advertising has to more than pay for itself to be sensible. Without spending those big bucks on advertising, their sales would be lower, so their revenue would be lower; and therefore their research budget (which in pharma runs about 20% of revenue) would also be lower. If, say, $1bn of promotion brings in $3bn of sales, dropping it loses far more revenue than it saves, and roughly 20% of that lost revenue comes straight out of R&D.
Not, of course, that I in any way condone the practice that this article is about.
'The annual tracking of "Santa" is due to get underway shortly, and the North American Aerospace Defense Command (NORAD) will be using satellite imagery from Bing Maps to make things a little more, er, realistic for those wishing to monitor the magical elf.'
Umm, Santa is not an elf!
When the (scientific software) company I used to work for got onto the Cloud, we had a heck of a hard time getting our major pharmaceutical customers to buy into it. They were concerned about security. We would ask, "Do you really think your own security infrastructure is more robust than Amazon's?" There were some lead adopters, but not many, and yet many of these same companies were outsourcing their internal IT to outside organizations, especially IBM, in what was sometimes called an "insourcing" arrangement.
We had close relations with IBM at the time, had historically supported AIX, etc., and I remember telling our contacts at IBM that if they had a cloud, all their existing IT-management customers would buy in, because they would be willing to trust IBM even if they didn't trust Amazon. IBM would respond with some nonsense like "Well, we already have a cloud." What they had was not by any means what a cloud had already come to mean by then: a farm of supercomputers, fixed partitions of which you could lease by the day, week or month, subject to what might be a long reservation list. A cloud implied leasing virtual equipment by the hour, with provisioning expanding and contracting on demand.
Their Cloud Pak and OpenShift technologies sound good and are somewhat orthogonal to IBM's public cloud, but I feel they missed a big opportunity not that many years ago.
Admittedly, it takes a lot of water to turn a big ship around, plus they were frying other fish.
Well, umm, there were a few more problems with Android in the times you mention, including one which persists now, or might only be ending now: the inability to get OS upgrades as new versions were released. Considering that the updates included fixes for security issues, that one's a biggie.
Also, and probably a result rather than a cause of the fact that Android is not the market leader, accessories are quite slow to come out for Android phones. I never could get a battery case for my L6, and that's something I can't live without. So that's when I switched to iPhone.
It's interesting to me that virtually all the commentators so far believe that HP simply failed to do its due diligence.
However, the allegation is that Autonomy cooked the books and lied about their sales and revenue.
Lots of mergers and acquisitions don't go well, but very few lead to allegations of fraud against the principals of the purchased company.
Therefore, I'm inclined to give at least equal credence to the allegation: that HP did perform due diligence, but that the principals of Autonomy lied and committed fraud, perhaps in depth, i.e., by concocting and presenting fictitious accounting records.
I use feedly.com now and am pretty happy with it.
It seems to me that The Reg used to supply RSS links for individual authors. I used to use one to keep up with new postings by Alistair Dabbs, but I haven't been able to find an alternative for a long time. Perhaps someone can tell me if there is a way to do this that I am just missing.
I like RSS a lot and have been annoyed at its gradual demise.
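In the meantime, a minimal sketch of the workaround I've been considering, assuming the site-wide headlines feed (the URL below is my guess and may need adjusting) populates the standard author field:

    # Filter a site-wide RSS/Atom feed down to one author's posts.
    # Requires the feedparser package (pip install feedparser).
    import feedparser

    FEED_URL = "https://www.theregister.com/headlines.atom"  # assumed URL

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if "Dabbs" in entry.get("author", ""):
            print(entry.title, "-", entry.link)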
@Trevor I thought the article was quite cogent and to the point re. ETL, which IMO is conceptually apt for the problem, regardless of implementation. But I wish you had given as much detail about how the data integration platforms work as you did about the general ETL problem and its other solutions, the more so because they are now your preferred solution. Perhaps you could do this in a future article.
With apologies to Kris Kristofferson....
I can't believe how many respondents are harping on the multiple meanings of "free." Stallman told us what he meant, and we all probably agree that "free" is a crummy word to describe it, because it's bound to be misunderstood if you haven't read Stallman's gloss. But enough already about that.
Having said that, I don't subscribe to Stallman's social agenda, which is very obviously what his definition of "free" is about. He reluctantly accepted LGPL as a compromise, understanding that without such compromises the Free™ SW he espouses is unlikely to gain traction. LGPL is Free™ technically, but is in practice as usable in the commercial context as un-free open source.
Stallman was already on the slippery slope with LGPL; from there it is only a small step to the BSD, MIT, and Apache open-source licenses. But the earliest BSD license (1988) actually predated the GPL (1989), so I would alter the author's remark to state, rather, that Free™ is what open source became after the ideologues showed up.
The federal appeals court that Oracle is appealing to previously took on an appeal by Oracle, which resulted in its remanding the case to the district court for trial. The result was a unanimous jury verdict for Google, which Oracle is now appealing. That means they have to get the federal appeals court to agree that no fair jury could possibly have rendered such a verdict. Well, if so, it would seem that the court would have summarily ruled in Oracle's favor on the previous appeal. Aside from the merits of the case, the history does not augur well for Oracle.
Were they trying to run a cloud across Telco-owned routers at customer sites?
Sounds a bit like the days of Grid Computing, when an emergent concept (so-called because it never actually emerged ;-) ) was for cable companies to rent out to third parties the computational resources of the large numbers of set-top boxes the cable companies owned. The boxes usually had MIPS chips and sat on a fast network connection, so what else did you need? Or so the argument went....