Re: week-long hackathon?
"Security would never be implemented in the client (i.e. the browser) in the first place."
Hi DrXym, let me introduce you to Any Web2.0 Developer...
"> Or did the code suddenly become faster
That might be possible"
From the article: “The size of the compressed bundle didn’t change significantly pre- and post-coffee transformation, so our users shouldn’t notice anything different,” the trio write. “The site performs and behaves as before.”
So even the devs who did the recoding do not believe that the code has become significantly faster.
"> less buggy
Better, more compressed syntax means less bugs, yes"
No it doesn't. Bugs don't come from the syntax but from the coders and intermediaries. In this case, the coders remain constant, but an intermediary has been added (CS-to-JS compiler). So the potential for bugs actually increases slightly.
"> better usable on slow browsers and behind shoddy links
That's a design problem"
Which they could have worked on rather than wasting time recoding into a derivative language.
Flash is a form of NVRAM (Non-Volatile Random Access Memory) which is itself a subset of RAM (Random Access Memory).
If you are making the common mistake of confusing RAM with system operating memory, then you may want to look here: http://www.tomshardware.com/news/fusio-io-flash-ssdalloc-memory-ram,16352.html
Flash, even Flash "storage" is a type of RAM (specifically NVRAM, as opposed to DRAM and SRAM.) This is a form of memory, even if it's not the system operating memory.
While the author does use some terms ambiguously, none are used incorrectly.
Any apparent discrepancy between the article and the underlying subject matter can therefore be attributed to poor reading comprehension.
Dear Mectron,
Simply TYPING various WORDS in ALL CAPS does not make your message MORE IMPACTFUL. It DOES make people think YOU are being a JERK.
On to the meat of your argument: There are a lot of practices (e.g., marijuana sales and use, prostitution, gambling) that are illegal in some jurisdictions; this does not make them any less valid or profitable in other jurisdictions. Indeed, it doesn't make them any less profitable in the jurisdictions where they are illegal. So even if your premise is correct, your conclusion does not follow.
Thank you. Have a nice day.
Accordingly, we expect that a total of approximately 234 million shares held by employees who are employed by Facebook through October 15, 2012, will be eligible for sale in the public market as of market open on October 29, 2012.
Hmm. October 29th, eh? Surely there won't be a massive selloff of shares on such a lovely day.
Downvoted: OP was just stating his opinion, not claiming that Mr. Willis had necessarily waived any rights. Besides, a licensee has no rights* to licensed material other than those conveyed by the license he agreed to.
Furthermore, the existence of illegal or unenforceable terms does not negate the necessity of reading and understanding the terms of any contract one enters into. Most modern contracts include a severability clause, which has been held to be legal: if any terms in the contract are illegal, those terms are null and the rest of the contract stands.
Many software licenses include a non-transferable clause which has held up in court (for a good example, research the legal history of AutoDesk's AutoCAD license.) That would easily cover this "legacy" scenario. So unless Apple's lawyers really messed up, or a judge were to find a substantial and relevant difference between music licensing and software licensing, Mr. Willis is not likely to succeed.
PS. It's waive. You can wave your rights all the way to the bank, but you'll just look like a fool.
* Technically, "fair use" and other similar constructs are not licensee rights, but exceptions to rightsholders' rights of enforcement.
Sorry, but no.
"And compared with Linux, Windows Server really is a POS PITA. It's slow (databases are faster on Linux, kernel latency is bad, network performance isn't great)."
Databases are applications, and network performance is most closely tied to hardware and possibly driver settings. If you're claiming that a specific database application (MySQL, perhaps?) performs better on one OS than the other, then you may have a point -- but then the question is whether the application has an optimization issue.
But the comment that really tells me of your experience is this one:
"Stuff that should be standard has to be installed separately, and often from a third party (ssh, for one, perl for another - but I could go on)."
Since Microsoft has its own protocols for remote management and its own scripting and shell languages, why should SSH and Perl be included standard? Furthermore, you'll find that although they are included in most distributions, SSH and Perl are not part of Linux, but developed by different developers and packaged into the distribution by the distributor.
That comment alone tells me that while you may have worked in this industry for quite some time, you clearly haven't absorbed much information about how the different OS ecosystems work.
It was not written as "update 24 or newer of Java 6."
It was written as "Java 6 Update 24 or newer."
No punctuation, nothing to indicate whether "or newer" applied to the major version or just to the update. At that time, Java 7 was over a year old, so it was a reasonable assumption that the developer would have at least checked the app with Java 7 and included a note that it did not work.
In fact, it turns out that this particular app wouldn't work with any Java 6 update past 26 either. If this were the only developer who'd done this, I'd chalk it up to a bad vendor and be done. But every "enterprise" Java app I've seen has a similarly narrow range of version/update parameters.
You claim that this happens with all languages/platforms (in fact you seem to conflate language and platform a lot), but I manage systems written in C, C++, C#, VB, PHP, JS, etc. on a variety of platforms and I rarely see this kind of problem with any other system. It seems endemic to Java.
If Java changed more frequently, as you'd apparently prefer, we'd probably have to ban it altogether in our organization as the proliferation of specific versions needed and the consequent rash of security holes would make it almost impossible to administer.
Not, at least, until you can find me a way to fit into my pocket what would take up two hundred linear feet of shelf space if it were printed on paper.
If you read two hundred linear feet of shelf space's worth of books on a single flight, then you are either the fastest speed-reader on the planet, or you read the largest-print books on the market, or both.
Or I suppose they could be all children's books, where the pages are inch-thick cardboard with 3 words per page on them...
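For the curious, the shelf-space snark above can be sanity-checked with some back-of-envelope arithmetic. The spine width and page count here are my own assumptions, not figures from the post:

```python
# Back-of-envelope check: how much reading is "two hundred linear feet
# of shelf space"? Assumes a typical ~1-inch paperback spine and ~300
# pages per book (both assumptions, not figures from the post).
shelf_inches = 200 * 12            # two hundred linear feet, in inches
spine_inches = 1.0                 # assumed average spine width
books = shelf_inches / spine_inches
pages = books * 300                # assumed average page count

print(int(books), "books,", int(pages), "pages")  # 2400 books, 720000 pages
```

Even a ten-hour long-haul flight gives you 600 minutes, so that's well over a thousand pages per minute. Fastest speed-reader on the planet indeed.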
Nobody put a metaphorical gun to Samsung's head and required the South Korean giant to look to Apple's user interface as a template ... Samsung could have examined what users are actually doing
A very good point, and one which I happen to agree with. But we know that others don't, so let's see how you support this argument:
...Others have seized the opportunity. Microsoft is one. The Windows Phone Not-Metro UI – built around hubs and live tiles – may be flawed, but it demonstrates the merit of thinking about user requirements rather than simply trying to replicate the experience. RIM has also tried to focus on the user interface as a workflow in BlackBerry 10...
So to support your argument that looking to users rather than the current UI frontrunner can be a better way to design a UI, you point to: the two companies currently fighting each other for the irrelevance crown in the smartphone market. Was no better example available? How about BlackBerry 10 years ago? Or Nokia back when it actually mattered? Or how about *gasp* Apple itself with the first iPhone?
There is no such thing as unauthorized fork under GPL. Any fork is authorized fair and square as long as the original copyright notices are retained and any derivative work is GPL too.
So from your second statement, a fork which does not have the original copyright notices would be unauthorized.
Yet from your first statement, such a beast does not exist.
What law of physics prevents people from removing copyright notices? Or is your first statement false?
Bear in mind, Mr. Gumby, that structure does not necessarily mean relational structure. While that's the most common, and often the most efficient, form that data structuring falls under, there are other ways of structuring data (hierarchical structures such as XML, for example.) And normalization is a consideration if and only if the nature of the data and intended use calls for it.
Even so, you simply underscored my point. The primary concern is not whether the data can be modelled, but whether a given model* is time- and cost-efficient.
*This assumes that the model is appropriate to the answers you need from the data, because, well, why are you considering it if it isn't?
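To make the relational-vs-hierarchical distinction concrete, here is a minimal sketch with made-up sample data (the customer/order schema is purely illustrative, not anything from the discussion):

```python
# The same data, structured two ways.

# Relational: flat rows, with the relationship expressed via keys.
customers = [{"id": 1, "name": "Alice"}]
orders = [
    {"id": 10, "customer_id": 1, "item": "widget"},
    {"id": 11, "customer_id": 1, "item": "gadget"},
]

# Hierarchical: the relationship is expressed by nesting instead of keys,
# as an XML document or JSON object would do it.
nested = {
    "name": "Alice",
    "orders": [{"item": "widget"}, {"item": "gadget"}],
}

# Either form answers "what did Alice order?" -- a join-style filter in
# the relational form, a simple traversal in the hierarchical one.
alice_items = [o["item"] for o in orders if o["customer_id"] == 1]
assert alice_items == [o["item"] for o in nested["orders"]]
```

Which form is "more efficient" depends entirely on the questions you ask of the data, which is exactly the time-and-cost point above.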
It's funny you should mention address, as I've recently worked on a project that dealt with address data in a semistructured format. Specifically, we had two tables of (US) addresses, one of which was formatted Street Address (as a single line), City, State, Zip, the other of which was Address Line 1, Address Line 2, Address Line 3, City, State, Zip. Since all of the Address lines were pretty much free-form, there was a lot of mess in there. I ended up matching on City, State, and Zip first, then comparing the Street Address with each of the three lines from the other table, parsing all lines into the following format:
Street Number
Street Direction
Street Name
Street Type
Unit Type
Unit Number
All of that parsing was relatively straightforward. By far the most difficult piece was accounting for all the different variations in spelling and abbreviation. Do you know how many ways there are to abbreviate "street"!?
So I sorted that to my satisfaction (over 95% correct match rate). I agree that you will have to compromise on exactly what you can include, but the reason for structuring the data should dictate what is acceptable to lose.
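As a rough illustration of the kind of token-by-token parsing described above: the field names follow the list in the post, but the lookup tables here are tiny hypothetical samples (the real abbreviation tables were far larger -- "street" alone has many variants), and this sketch is nowhere near the real project's matching logic:

```python
import re

# Hypothetical, heavily abbreviated lookup tables mapping spelling
# variants to canonical abbreviations. The real tables were much larger.
STREET_TYPES = {
    "st": "ST", "st.": "ST", "str": "ST", "street": "ST",
    "ave": "AVE", "av": "AVE", "avenue": "AVE",
    "rd": "RD", "road": "RD",
    "blvd": "BLVD", "boulevard": "BLVD",
}
DIRECTIONS = {"n": "N", "north": "N", "s": "S", "south": "S",
              "e": "E", "east": "E", "w": "W", "west": "W"}
UNIT_TYPES = {"apt": "APT", "apartment": "APT",
              "ste": "STE", "suite": "STE", "unit": "UNIT"}

def parse_street_line(line):
    """Split a free-form street line into the six fields listed above.

    Returns a dict with keys: number, direction, name, type,
    unit_type, unit_number. Missing pieces are None.
    """
    tokens = line.lower().replace(",", " ").split()
    out = {"number": None, "direction": None, "name": [],
           "type": None, "unit_type": None, "unit_number": None}
    i = 0
    # Leading street number (possibly with a letter suffix, e.g. "12a").
    if tokens and re.fullmatch(r"\d+[a-z]?", tokens[0]):
        out["number"] = tokens[0].upper()
        i = 1
    # Optional direction before the street name.
    if i < len(tokens) and tokens[i] in DIRECTIONS:
        out["direction"] = DIRECTIONS[tokens[i]]
        i += 1
    # Remaining tokens: street name until a street type or unit marker.
    while i < len(tokens):
        t = tokens[i]
        if t in STREET_TYPES and out["type"] is None:
            out["type"] = STREET_TYPES[t]
        elif t in UNIT_TYPES:
            out["unit_type"] = UNIT_TYPES[t]
            if i + 1 < len(tokens):
                out["unit_number"] = tokens[i + 1].upper()
                i += 1
        else:
            out["name"].append(t.upper())
        i += 1
    out["name"] = " ".join(out["name"]) or None
    return out

parsed = parse_street_line("123 N Main St Apt 4B")
```

The hard part, as noted, isn't the tokenizing -- it's filling those lookup tables with every spelling and abbreviation people actually use.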
[Structuring] is easy to apply to pretty much any data -- if you have the time. I have yet to see an example of data which could not be structured.
I personally think that big data is defined primarily by the relative amount of data and the speed with which it needs to be processed. In other words, I would define big data as any data set which must be processed in less time than it would take to apply a consistent structure to the volume of data being processed.
I welcome examples of data which cannot be structured.
"We can confirm that we are not building a historical database of program and user IP data," a spokesperson told El Reg
... "It wouldn't make any sense to. Since we're moving to the same closed app-store type architecture as Apple, we'll soon have a historical database of program and credit card data -- that we have to retain for pretty much ever, because you'd complain if you couldn't reinstall that fart app. Credit card data is directly linked to an individual, so what's the point of keeping the less-traceable IP address? Sheesh!"
Well, that's false to begin with. The principle of network neutrality is that prioritization (or de-) of network traffic should be done solely for reasons of stability and performance. Perhaps the talking head was referring to a specific, poorly-written law rather than the underlying principle?
Wow. You read a lot into my flippant comment. But since you went to all that trouble to respond, I'll tell you exactly what's wrong with your response.
First off, you assume that I believe that computing devices are an end in themselves, which I have never believed. Then you attribute that absurd mindset to engineers, which I believe is an insult to engineers. The engineers I have known consider such technology to be tools, even technology they have developed themselves. And all smart engineers know that one tool, designed to operate one specific way, will not work for everyone. There must be a variety of tools, or a tool with a variety of options, to fit everyone's needs and wants.
This leads to your second mistake: the assumption that, because Apple had a resurgence at about the same time that some stupid tech journos were claiming the year of "linux on the desktop" (for about 7 years straight as I recall), and because said claim never materialized, Steve Jobs must have "slaughtered" the Linux-on-the-desktop advocates. However, the lack of a conviction, trial, accusation, or even a "could you come down to the station for a few questions" indicates that your alleged serial killing likely did not happen. Linux on the Desktop has not come about because it has never been a goal of the Linux community as a whole. Only a fool, or a stupid tech journalist, would think otherwise.
Third, you suggest that I want people dependent upon me for the everyday use of their computing devices. Frankly, I'd prefer they let me alone to do my actual job, as it's more than enough to keep me busy. Nine times out of ten, I do the following when I get a call for a simple issue: wait 20 minutes. Eight of those nine times, the person who called me has figured it out for themselves by then. The remaining two times I work with them, rather than for them, to ensure that they don't need any assistance from me in future.
Finally, you missed my entire point. I appreciate the goal of keeping things simple and easy to use. Nothing infuriates me more than a non-intuitive interface. But in my experience, Apple has been very schizophrenic on this front. For everyday, run-of-the-mill use, their interfaces are pretty intuitive. But for anything more, heck sometimes even if you want to just sort a list in a different order, the interface either becomes cumbersome or the task may even be impossible. Not because such an interface is not possible, but apparently because Apple's designers just never thought anyone might want to work that way. And therein is the unspoken problem: intuitive is a subjective term. Apple would have you believe that it's a fact of their design, but it's not. Everyone intuits in a different way.
There are VERY few people in our industry who've figured that out, but those of us who have aren't about "how smart we are" or "which OS is the best". In fact, we tend to make short, flippant comments on those subjects. We're about choice -- oh, and about having a sense of humor and humility as well.