Re: EE WiFi calling
On my iPhone, it’s an in-built option. No separate app, so no separate contacts, call history, etc.
It’s mildly useful when in areas of no/poor phone signal but good wi-fi signal.
Google have pretty much followed Microsoft's old "Embrace, Extend, Extinguish" strategy, but it's not an uncommon approach for companies that want to monetise open-source software. Plex did the same thing with their version of XBMC.
But the real problem is a total lack of GMS alternatives that app developers can reliably target (outside of China, and Amazon's FireOS at a push). That is what is needed to weaken Google's stranglehold on Android. Amazon, Samsung, etc. need to get together and build an (open-source) GMS alternative to keep AOSP viable. Then again, these companies care about profits, not users. And Google probably forbids such actions.
As long as 'obtaining a user's consent' cannot be satisfied in the way that the recently mandated cookie consent is - that is, by something which simply says 'your continued use of this site implies consent'.
It must be possible for users to opt out of all forms of tracking that are not critical to the function of a web site while still being able to make full use of that site.
1. In raw numbers, 52 million to 85 million over two years is good growth.
2. The percentage of repeat purchasers is growing year-on-year. This is also a good thing.
3. While expanding market-share is all well and good, I think Apple would prefer to have people upgrade and remain within the iOS ecosystem. The number of iOS users is hardly small (especially since that number is not just iPhone users).
4. A new user and a repeat purchaser both have to buy an iPhone. Hardware sales are where Apple make the vast majority of their money.
If an app seller needs to contact the buyer, then it should be possible to send them an e-mail through the Merchant Store system - that is, Google keeps the eventual destination of the e-mail hidden from the seller.
For physical goods, the seller would normally want the buyer's address for practical reasons, but even that is not strictly necessary. The seller could post the item to a Google-owned warehouse, which then forwards it to the buyer.
For software downloads, there is never a need for the seller to know anything about the buyer, other than that they have paid for the item. And it's not like you get the buyer's bank details when they buy from you, so Google are clearly able to keep some things secret. Why then can't they keep other things secret too...?
PS Feel free to replace Google with your favourite privacy-stealing multi-national faceless corporation.
And function bodies change too! I've seen countless functions where the name is misleadingly (and dangerously) inaccurate because the body has been changed. While comments won't prevent this, they will help explain that the function doesn't do what its name suggests.
Anyway, good comments include the "why", not just the "what".
Why do we even need channel numbers? We managed to go from ordering by broadcast frequency to ordering by number, so it shouldn't be too hard to ditch channel numbers and allow users to order their channels as they see fit on their TV.
Yes, so some clever solution would be required for old TVs that deal in numbers only, but I'm sure any such TV that has an IR port would work with a new remote (anything from a fancy touch-screen smartphone-like remote, to a cheap big-buttoned one that you manually map your 10 favourite channels to, or a bluetooth/wi-fi dongle+IR blaster combo driven by a smartphone app).
"All data is encrypted, which answers security needs. "
I'm sorry, but that's woefully short on info.
From whom is the data secure? Outsiders (i.e., a shared encryption key across all users)? Non-file-owners (i.e., user-specific encryption keys)?
What about sharing files? Does a secure key-exchange occur to allow the file to be re-encrypted for the other users? Where is the decryption/re-encryption done? If it's on the regular PCs, what's to stop malware sniffing the key? Is there a (secure) key-server infrastructure, or does the business still need a dedicated secure key server?
It seems to me that one of the biggest problems that people perceive to exist in the patent system is that too many patents are accepted without sufficient scrutiny. This is particularly true of the US patent system, where USPTO funding is linked to the number of patents accepted.
If we accept that the patent acceptance process needs to become more strict, the obvious question then is who should be responsible for the additional scrutiny that such a change would require? Patent examiners are over-worked, at least in the eyes of those people who proclaim that the system is broken, so it cannot be those individuals.
My suggestion: any and all competitors of the company which is filing the patent.
I believe that the benefit of this would be three-fold:
1. The competitors have a vested interest in finding as much prior art as possible to invalidate the patent application. I can think of no other entity which would put in as much effort, or be in as good a position, to provide evidence that the patent should not be granted.
2. If the patent were to be granted, court cases involving it would be much simpler: the claimant would have had ample opportunity to challenge the patent at filing time, so that aspect of any case would be greatly reduced. Crucially though, where appropriate, a claimant should be able to show that they had insufficient ability or knowledge to challenge the patent when it was first submitted.
3. Competitors would learn of the patent's claims sooner (i.e., at filing time, rather than at granting time), thereby making the IP available earlier and thus increasing the potential for future innovation.
Now, there are almost certainly holes in my suggestion, but I would welcome any constructive feedback.
The point of getters and setters is to provide encapsulation. If you decide to rename the internal field, it does not break every other class which uses and/or updates that value. They also control HOW you permit access to those fields: synchronisation can be handled in the class; invalid values can be overridden; related fields can be updated at the same time so as to keep the object internally consistent; changes can be logged in the one place.
Trying to write C-like code in Java will always be an exercise in frustration. Just as trying to write Java-like code in C is.
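To make that concrete, here's a minimal Java sketch (the class and field names are invented purely for illustration) of a setter doing validation, synchronisation, related-field updates and logging in one place:

    import java.util.logging.Logger;

    public class Thermostat {
        private static final Logger LOG = Logger.getLogger("Thermostat");

        private double targetTemp;    // callers never touch this directly
        private boolean heatingOn;    // kept consistent with targetTemp

        // One place to validate, synchronise, update related state and log.
        public synchronized void setTargetTemp(double celsius) {
            if (celsius < 5.0 || celsius > 30.0) {
                throw new IllegalArgumentException("out of range: " + celsius);
            }
            targetTemp = celsius;
            heatingOn = celsius > 15.0;   // related field kept consistent
            LOG.info("target temperature set to " + celsius);
        }

        public synchronized double getTargetTemp() {
            return targetTemp;
        }
    }

Rename the private field tomorrow and no caller breaks, because nobody outside the class ever touched it.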
It was more of a reply to Chris N, as well as anyone reading the comments who may not be aware of the mobile site.
I agree that the regular site should be (and is) viewable on modern smartphones, but I happen to prefer the less cluttered look of the mobile website on my phone, and it also avoids having to do the zoom-n-scroll dance to read each story headline (my eyesight is not what it used to be, and my phone does not have a comically-oversized screen).
What is the success rate for breaking "question captchas"? For example, "half of six times one". Are natural language processing techniques good enough to parse the meaning of such text?
(and if not, as VoodooTrucker pointed out, I'm sure the academics would appreciate some help from the spammers)
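For what it's worth, even a toy parser gets surprisingly far on the simplest of these. A sketch in Java (the vocabulary is hand-picked; this is nothing like real natural language processing):

    // Toy evaluator for simple arithmetic-in-words captchas. A real
    // solver would need proper NLP; this handles a tiny vocabulary,
    // purely to illustrate the parsing problem.
    import java.util.Map;

    public class WordSum {
        private static final Map<String, Integer> NUMBERS = Map.of(
            "one", 1, "two", 2, "three", 3, "four", 4, "five", 5, "six", 6);

        public static double eval(String phrase) {
            double result = 0;
            String op = "+";  // pending operation
            for (String word : phrase.toLowerCase().split("\\s+")) {
                switch (word) {
                    case "half":  op = "half"; break;
                    case "times": op = "*"; break;
                    case "plus":  op = "+"; break;
                    case "of":    break;  // filler word
                    default:
                        int n = NUMBERS.getOrDefault(word, 0);
                        switch (op) {
                            case "half": result = n / 2.0; break;
                            case "*":    result *= n; break;
                            default:     result += n; break;
                        }
                }
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(eval("half of six times one"));  // prints 3.0
        }
    }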
I'm sorry, but do you honestly believe that SSL cannot protect against a man-in-the-middle attack!? It's one of its main aims!
Your scenario fails because the ISP server cannot authenticate as my computer with the gmail server, nor as the gmail server with my computer. In other words, both my computer and the gmail server will know that someone has tampered with the communication.
For your own sake, please do some reading on the theory behind security protocols.
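For the avoidance of doubt, here's a minimal Java sketch of the client side. During the handshake, the JVM's default trust manager validates the server's certificate chain against trusted CAs and checks the hostname, so an ISP box impersonating the server makes connect() throw an SSLHandshakeException rather than silently proxying:

    import javax.net.ssl.HttpsURLConnection;
    import java.net.URL;

    public class TlsCheck {
        public static void main(String[] args) throws Exception {
            HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://mail.google.com/").openConnection();
            conn.connect();  // certificate chain + hostname verified here
            System.out.println("Server authenticated: " + conn.getCipherSuite());
            conn.disconnect();
        }
    }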
The first patent (filed in 1996) describes using the same memory for unprocessed and processed images (i.e., raw sensor data and JPEGs). This is as opposed to having separate dedicated memory (usually as part of the sensor chip) to store the unprocessed images. The main thrust of the claim was that the unified system could better support features such as burst mode, since it did not have a limited space for unprocessed image data. Additionally, the system could process images at the same time as taking new ones.
The second patent (also filed in 1996) describes a way to process images in a linear fashion using multiple image processors that can be controlled by the user. The fact the claim specifies two or more means that doing this for a single image processor had probably already been patented.
Now, both of these things are quite obvious to "someone skilled in the art"**, so I suspect that the patent applies to the "how", not the "what".
**for example, if we were still using negative spools, it would be like trying to patent the idea of having a camera with two (or more) spools to increase burst-shot speed.
But yes, Eastman Kodak have been trying to make digital cameras for a lot longer than Apple.
Interestingly, when Kodak sued Sony, they ended up with a cross-licensing agreement, so Kodak haven't patented everything to do with digital cameras!
"Apple has no financial interest in helping out these [Web] apps..."
How many Web apps require users to pay for them? Answer: very few. Therefore, if the developers turned those Web apps into native iOS Apps, they would almost certainly be given away for free. And Apple loses money on free Apps.
Ergo, it would appear to be in Apple's best interests to make Web apps as good as possible.
Now, I agree that it is odd that there are now two HTML (etc.) rendering back-ends, but I suspect that it was easier to plumb Nitro et al into Safari than into the more general UIWebView part of iOS (since that probably has to deal with varying app permissions, etc). If there is still a discrepancy come iOS 5.0, then we can start to complain.
There is this little thing called "Grand Central Dispatch" which was introduced to iOS in version 4.0 (and OS X 10.6 prior to that). Many of the API functions were modified to make use of it. At the same time, developers were encouraged to begin using "blocks" to encapsulate code that can be run in the background.
As such, even old applications that don't use "blocks" may still see a benefit simply because of the updated API code. This is in addition to any apps that will have been explicitly multi-threaded (i.e., most, if not all, games).
http://en.wikipedia.org/wiki/Grand_Central_Dispatch
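GCD itself is a C/Objective-C-level API, so the following is only a loose analogy in Java using an ExecutorService - not how iOS actually implements it - but it's the same pattern: hand a block of work to a system-managed pool instead of managing threads yourself.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BackgroundWork {
        // A shared pool sized to the machine, akin to GCD's global queues.
        private static final ExecutorService POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        public static void main(String[] args) {
            POOL.submit(() -> {
                String result = "decoded image";   // pretend heavy work
                System.out.println("done in background: " + result);
            });
            POOL.shutdown();  // let the submitted task finish, then exit
        }
    }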
I don't have a PhD. Nor am I a Trekkie. I simply couldn't think of a better name when signing up. (aren't the intarwebs great!)
However, after implying that my reasoning was flawed, you then state that it is impossible to determine the outcome of a fair coin toss - that was exactly my point!
Yes, it makes economic sense for the insurers to charge men more, but that doesn't mean that _every_ man will cost them more. For the sake of being concrete, consider the following example:
There are 1m men and 1m women insured by company X. 0.5m men and 1m women have a risk factor of 0.2, and the remaining 0.5m men have a risk factor of 0.6. This means that men have an average risk factor of 0.4 and women have a risk factor of 0.2. Statistically, men's premiums should be double that of women's. However, the reality is that the 0.5m high-risk men should have premiums three times that of everyone else.
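Running those numbers (a trivial check, but it makes the cross-subsidy explicit):

    public class RiskExample {
        public static void main(String[] args) {
            // Risk factors scaled by 10 to keep the arithmetic exact.
            int lowRisk = 2, highRisk = 6;           // i.e., 0.2 and 0.6
            int menAvg = (lowRisk + highRisk) / 2;   // 4, i.e., 0.4
            int womenAvg = lowRisk;                  // 2, i.e., 0.2
            System.out.println("men vs women premium ratio:  " + menAvg / womenAvg);   // 2
            System.out.println("high-risk vs low-risk ratio: " + highRisk / lowRisk);  // 3
            // Half the men are exactly as low-risk as the women, yet pay
            // double - that is the cross-subsidy a gender-only metric hides.
        }
    }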
I understand that insurers use a lot of metrics to try and determine an individual's risk factor, but the set of metrics they do use falls far short of what is required to give a "reasonably fair" insurance premium.
Just to be clear, I have no problem with insurers using gender as a metric to determine risk, but it should not be weighted as strongly as is currently the case.
"If I do a SELECT on my claims database, and group the output to show total payments, broken by age ranges, and split by gender, then the output, from a history of actual claims, is obvious. Younger males are a greater risk than older females. This is a fact."
Statistically, across the entire population of insured persons, that is true. However, pick a male at random and a female at random from your database. Can you guarantee that specific female will be less risky to insure than that specific male? It should be obvious that the answer is "no".
This is the crux of the problem - applying broad statistics to individuals doesn't always work.
Matt, you are obviously a smart bloke, but it sure as hell doesn't come across in your articles due to that very large anti-Apple axe you seem wont to grind.
Firstly, you make the same mistake that countless commenters have made in the past: you think Apple has a monopoly. I expect that from them, not from the article writers.
Apple has, at best, 10% of the desktop market (which I'm including laptops in). They have, at best, 30% of the worldwide smartphone market, and 5% of the worldwide mobile phone market. iOS is not a market. Microsoft, on the other hand, do have a monopoly in the desktop market, which is why they got into hot water.
Now, given that Apple don't have a monopoly, they can't be done for abuse of a monopoly. So please stop trying to suggest it's a certainty.
Regarding the Sony eReader app, Apple are well within their rights to reject it from the App Store. I may not agree with it, but I accept that they can do that. Why? Read on, dear chap.
To continue your line of argument, customers want the Sony app. By rejecting it from the App Store, Apple makes their product less desirable in the eyes of those customers. There is nothing illegal about a company making its own product less desirable. Furthermore, those customers have a choice: they can get a Sony eReader device, or get a different smartphone for which the app is available.
If I wanted to get all riled up about something, I'd complain to the EU that McDonalds doesn't sell Burger King chips. Of course, that would be just as silly.
By all means, revisit this topic when Apple have over 90% of the mobile device market. But I'm sure there are a few Android folk who will happily tell you that day is never coming...
PS - you did want people to rage, didn't you?!
So, the original work done by Ron Rivest, Adi Shamir and Len Adleman (and independently by Clifford Cocks) to develop the theory and algorithms behind public key cryptography was easy, was it?
As for the easy vs hard argument, cryptographers (the builders) and cryptanalysts (the breakers) have an equally difficult job. However, I would argue that the invention aspect of creating a new cipher makes the cryptographer's job that little bit harder.
Take a look for yourself:
http://shop.virginmedia.com/help/traffic-management/traffic-management-policy.html
For punters on the 30Mb/sec package, you can download up to 10GB between 10am and 3pm, before you get throttled to 22.5Mb/sec for 5 hours (i.e., still faster than the 20Mb/sec people upgraded from). Between 4pm and 9pm, you can download 5GB before being throttled to 22.5Mb/sec for 5 hours. Outside of those times you can download as much as you want (local bandwidth capacity permitting, of course). Upstream restrictions also apply.
On the 50Mb/sec package, you only get your upstream throttled if you upload more than 6GB between 3pm and 8pm, at which point your upload speed drops from 5Mb/sec to just over 3Mb/sec.
Everyone *may* have P2P and newsgroup traffic throttled between 5pm and midnight during the week, and midday and midnight at weekends if there is insufficient bandwidth on the local network (these are the only restrictions that apply to people on the 100Mb/sec package).
"Other than being part of the MPEG LA Patent Pool, which licences H.264, I can't see any logical reason to object to Flash without objecting to H.264"
It's very simple really. Anyone can obtain the complete H.264 spec and build a fully compliant encoder and decoder (e.g., the x264 guys). This is not possible with Flash. The OpenScreen project is only a partial spec - it does not cover any of the DRM aspects of Flash.
Consequently, the only company that can build a fully compliant Flash player is Adobe. If this were not true, Gnash would be a lot better and would be used by a lot more people. If you believe anyone can create a fully compliant Flash player, please answer the following question: why, even after so many years, has not a single person or group in the worldwide open-source community produced such a product (remember, Gnash does not support Adobe's DRM)?
Let me get this straight. Apple, who eschewed the closed and proprietary Flash in favour of the open and W3C-backed HTML5 suite of technologies, is somehow worse than Google, who are removing support for the open (albeit not free for everyone) H.264 standard, backed by TWO international standards bodies, in favour of the aforementioned closed and proprietary Flash?
Web users have been bent over a barrel by Adobe for the last decade in their reliance on Flash. Apple have done more than any other major company to try and break Adobe's stranglehold. Google are simply undermining the recent progress in order to push their own agenda.
Ask Google (or any of their supporters) about the licensing fees that content creators must pay to Adobe (via its authoring software) for producing Flash content. Google only cares about distribution fees, so they have no problem using Flash. What I don't get, though, is that, given the same quality, WebM produces slightly bigger files. This results in more storage space required, and higher bandwidth costs, for anyone storing/distributing WebM video. My only conclusion, therefore, is that Google is using WebM as a tool to leverage additional concessions out of MPEG-LA, or trying to undermine Apple in the iOS vs Android battle.
"Now, for a piece of software that has just started up, how does it know that a plug-in has been installed sneakily by another app acting as admin, rather than the user choosing to install it?"
Password-protection.
If the user wishes, they can lock their current plugins with a password. As you stated, this won't prevent admin-level installers adding new plugins. However, if the list of plugins is digitally signed using a key based upon the user's password, then the program can detect changes: upon launch, the signature is re-checked (or hash re-calculated) and if there is a difference, a warning appears (the old list can be determined by the subset of plugins which produce a valid signature/hash).
Since the user's password is not stored on the machine, there is no way for any program (even an admin-level installer) to produce a valid signature/hash for the updated plugin list.
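A minimal Java sketch of that scheme (all names invented; it assumes a key derived from the password via PBKDF2 and an HMAC over the plugin list, which is what "signature/hash" would boil down to in practice):

    import javax.crypto.Mac;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;
    import java.util.List;

    public class PluginListSeal {
        static byte[] seal(char[] password, byte[] salt, List<String> plugins)
                throws Exception {
            // Derive a MAC key from the password (the salt is stored, the key is not).
            SecretKeyFactory kf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            byte[] key = kf.generateSecret(
                new PBEKeySpec(password, salt, 100_000, 256)).getEncoded();
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(String.join("\n", plugins)
                .getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) throws Exception {
            byte[] salt = "fixed-demo-salt".getBytes(StandardCharsets.UTF_8);
            byte[] sig = seal("hunter2".toCharArray(), salt,
                List.of("flash", "java"));

            // An installer sneaks in a new plugin; the MAC no longer matches,
            // and without the password nothing can recompute a valid one.
            byte[] sig2 = seal("hunter2".toCharArray(), salt,
                List.of("flash", "java", "sneaky-toolbar"));
            System.out.println("list unchanged? " + Arrays.equals(sig, sig2)); // false
        }
    }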
As a few of the commenters have already pointed out, it is ultimately Firefox's fault in allowing newly installed plugins to be loaded without first informing the user. It should be no different to how Firefox handles updates to existing plugins - i.e., allow the user to enable/disable/remove them before continuing. Plugins installed via Firefox itself would have tacit permission to be loaded, obviously.
From the article:
"The Eee 1215PEM happens to have Gigabit Ethernet, but most netbooks and 11.6-inchers only go up to 100Mb/s, but the Air doesn't even have that."
Out of the box this is true, but for a piffling £19, you can get a 100Mb/s ethernet adapter which plugs into one of the USB ports:
http://store.apple.com/uk/product/MB442Z/A
And since an extra USB port is probably more useful to more people, it seems like a sensible trade-off.
I find it interesting that UK users of the live football data are legally allowed to access it. Contrast this with the view taken against copyright infringers who download material from foreign servers.
Is the difference simply that copyright legalese has been slapped on the original movie/song/etc?
Is it therefore possible for Football Dataco et al to attach a copyright notice to the fixture lists and get the same protection?
I know it's a bit of a grey area whether fixture lists are a statement of fact or not (I would guess there is some originality to them, so they may be given some protection; however, once a fixture has actually been played, it becomes a non-copyrightable fact).
Of course, I am not a lawyer...
I just realised that I misunderstood your post. The actual URL can only be obtained by me, a facebook friend, or someone with a packet-sniffer if the video is viewed on an unsecured network. And yes, there is typically no pattern to the URL.
My privacy concern is more aligned to the packet-sniffer scenario - not too unlikely in this world of coffee shop free wi-fi.
(Before I get really off-topic, I have similar concerns about other sites which only secure the log-in process - e.g., hotmail until quite recently)
"But if I'm not much mistaken, only the owner of the video/picture/album can access that URL"
Did you try the URL? I can access it from any computer without being logged into facebook, therefore anyone should be able to view the video.
The point I was trying to make is that many of facebook's privacy controls are only applicable when you are in facebook world (i.e., going through the site). Outside of facebook world, their privacy controls are meaningless.
Ideally, the above URL should require you to log in (and be a friend of mine) before allowing access to the video. Of course, that wouldn't stop my friends from downloading the video, but that's not something which facebook (or anyone else for that matter) can prevent.
"Each person owns her friends list, but not her friends' information. A person has no more right to mass export all of her friends' private email addresses than she does to mass export all of her friends' private photo albums"
Ignoring the fact that photo albums, etc are not private if you make them available to others, this is an easy fix. Just add a privacy setting along the lines of "allow x to export y" (where x is a person/group/etc and y is e-mail/phone no/photos/all info/etc). In other words, let _the_user_ decide which of their friends get to do more than just view their data.
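Something like this, say (a hypothetical sketch in Java - none of these names correspond to any real facebook API, it's just the shape of the check):

    import java.util.Map;
    import java.util.Set;

    public class ExportPolicy {
        // resource type -> set of friend IDs the user has allowed to export it
        private final Map<String, Set<String>> allowed = Map.of(
            "email",  Set.of("alice"),
            "photos", Set.of("alice", "bob"));

        public boolean mayExport(String friendId, String resource) {
            return allowed.getOrDefault(resource, Set.of()).contains(friendId);
        }

        public static void main(String[] args) {
            ExportPolicy p = new ExportPolicy();
            System.out.println(p.mayExport("bob", "email"));   // false: view-only
            System.out.println(p.mayExport("alice", "email")); // true: user opted in
        }
    }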
Of course, one could argue that "friending" someone is implicitly providing said person with unlimited access to the data you make available to them on facebook (this still allows for segregation of friends based upon groups/etc).
Personally, I go with this latter line of reasoning.
Of course, I still find it funny that facebook thinks I believe that marking my videos "for friends only" stops other people from accessing them. The following video is marked as such:
http://video.ak.facebook.com/video-ak-sf2p/v6812/133/30/118220332422_31321.mp4
This is what I like about science: its modus operandi is to attempt to explain how things work within the limits of our current knowledge, and to expand that knowledge. The mistake that anti-science people make is that they think science is about determining set-in-stone-forever-more facts. It's not. Any existing hypothesis (commonly referred to as "scientific fact") may be invalidated upon the discovery of conflicting evidence.
Now, for a lot of hypotheses, there is a huge body of experimental evidence backing up the idea (e.g., the Earth going round the Sun) and the likelihood of such hypotheses being invalidated is extremely small. However, science will not attempt to avoid or cover up any rigorous evidence which would invalidate such ideas.
Now consider religion (Christianity in this example).
Everything was set in stone just over 2000 years ago (if you ignore the many re-writes of the Bible and the fact it was generally created from second- or third-hand accounts of events at best, and fanciful stories at worst). Then, if something in the Bible is potentially contradicted by scientific experiment, "believers" stick their fingers in their ears, run round in circles and continually sing "la la la la" until it's time to go to church.
Now, my issue is not that I think science is the greatest thing ever, or that religion is stupid, but that science continually evolves and adapts and is not afraid to modify the current set of "accepted facts" whereas religion is all about resolutely sticking to the beliefs and writings of a few blokes two millennia ago.
Web != Internet.
How many Web servers are running Web browsers? How many VPN gateways are running Web browsers? How many e-mail relays are running Web browsers? How many Kerberos/SSL key servers are running Web browsers?
I could go on, but four examples of high-profile targets should suffice.
"The Marketplace application returns a "1" if the application isn't allowed to run, so on receiving a "1" the application displays some information to that effect and quits. But that decision tree can be changed so an application receiving a "1" decides instead to go ahead and run"
Why is it the app's decision as to whether or not it is allowed to run? Surely, that decision should be taken by the OS? For example, the OS calculates a hash of the compiled app (i.e., the bytecode), then checks with the Marketplace as to whether or not the app can run. As long as the hash doesn't change, there is no need to consult the Marketplace for subsequent launches.
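In sketch form (Java, with hypothetical names - this is the scheme suggested above, not how the Windows Phone Marketplace actually behaves). The OS, not the app, hashes the installed binary and asks the Marketplace once, caching the verdict against that hash for later launches:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.util.HashMap;
    import java.util.HexFormat;
    import java.util.Map;

    public class LaunchGate {
        private final Map<String, Boolean> cache = new HashMap<>();

        public boolean mayLaunch(Path appBinary) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(Files.readAllBytes(appBinary));
            String hash = HexFormat.of().formatHex(digest);
            // Only consult the Marketplace when this hash hasn't been seen;
            // if the binary changes, the hash changes and we ask again.
            return cache.computeIfAbsent(hash, h -> askMarketplace(h));
        }

        private boolean askMarketplace(String hash) {
            return true;  // stand-in for the real licensing query
        }
    }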
Fail because, well, they're doing it wrong...
When you order food in many pubs these days, you are given a wooden spoon with a number on it which you place on your table to identify yourself to the bar staff (instead of each table being individually numbered). Hence, if you take such a spoon in with you, you may be given food that someone else ordered, for free.
Not the best joke on the list. My favourite was no. 7, although it has been around for ages in the form of: "What do you call a dog with no legs? Anything you want because it can't chase you" (or something to that effect).
1. Sky have the money to get first-run films before other operators
2. Sky's film channels are cheaper with Sky than with Virgin (£16pcm vs £20-30pcm)
3. Consumers that want early access to the latest films are forced to get Sky or pay up to double the price
4. Sky profits increase due to non-free-market forces (step 3)
5. Go to step 1, lather, rinse, repeat.
Ofcom will consider ways to mitigate either step 1 or step 2.
Prices taken from:
http://allyours.virginmedia.com/html/tv/sky-movies-channels.html
http://www.sky.com/shop/tv/movies/