Nokia chief executive Stephen Elop might be remembered as the Thomas Watson of our time, based on his remark that you don't need quad-core processors for smartphones. "You don't need a quad-core phone unless you want to keep your hands warm in your pocket," said the Yankee CEO brought in to re-float the Finnish ship. If we're …
You only need to carry around one gadget - your phone completely replaces your laptop or desktop. On the go, you have a phone. When you're around a keyboard and monitor, you have a desktop machine. I've been waiting a while for the CPU power of phones to get good enough to do this. This probably isn't it, but in one more generation of gadgets it will be.
Yeah this would be awesome for travelling, where you mainly do a bit of web surfing, emails, the odd word document etc - except for the monitor part, may as well carry an ultrabook/air as it would be the same size as a monitor.
Like the concept just not sure how it would work in practice...
Am I the only one who would actually prefer a phone to be just a phone - single core with good battery life over anything more! If I want to do more, I'll use a tablet/laptop/desktop. A smartphone that lasts more than a day is, to me, worth 10x the ability to play Crysis on it.
The closest I've seen to something worthwhile was Nvidia's idea of a powerful multicore coupled to a weak core, so the phone runs on the weak core until it needs more power, then starts up the beefier cores. That could be useful!
The advantage of ARM chips is that they don't use battery when you're not using them.
If you profile your phone's energy usage you'll find it's the full-length screen that eats most of the battery.
If you don't turn on the internet and the massive full-length screen all the time then your phone will last longer than a Nokia 3310!
My rooted, bloatware-stripped Samsung Galaxy Note uses 1% of battery per hour when sitting on WiFi running GrooveIP. It's got a powerful, responsive dual-core CPU -- but no power penalty.
I hooked it up to an HDMI monitor, Bluetooth mouse and keyboard yesterday. Even Android ain't a bad desktop when displaying on a big monitor, at 1280x800 :)
I am a happy camper... Maybe Ubuntu can make me happier... Time will tell...
I'll tell you, you aren't the only one who just wants something "simple" (more subdued?) and would trade off performance for battery life. There are days I'd go that route. You almost need a second phone for it: one for heavy-duty power usage, one for light duty... The issue is that the software and capabilities expected of the "phone" are about to outstrip most everything already available. This is a problem for any anemic single core, and even for lower-clocked dual-core devices now. Phones will shortly be expected to virtualize/paravirtualize to keep pace with BYOD policies in the workplace, where your company's virtual phone will be running on top of your normal phone. (BTW - pluses and minuses there, and depending on which day it is, I'd argue either side of it.)
Part of the reason I went with T-Mobile was they had sole availability of a 1.5GHz dual-core SGSII, which *should* last me two years. I don't have any confidence that the 1GHz dual cores will make it that far with what I'm seeing on the horizon, like running a virtual desktop on top of your phone, or a virtual corporate device as part of BYOD. 'Course, by next year I'll probably need a six-core with unobtainium batteries to power it. :)
Anybody remember when phones were used for phone calls? (Well, Pepperidge Farm remembers!)
Elop's a fucking idiot. Everyone knows that the number of cores has nothing to do with how much heat a chip kicks out. TDP is what's responsible for that.
Right now many of the dual-core phones are 800MHz and the single cores are 1.4GHz, and both punch at roughly the same weight...
Many ARM devices can turn off cores - the Tegra 3, for example, has a fifth ultra-low-power core for when the device is not doing anything, while the other four "proper" cores are all turned off.
Elop, you really are the wrong man for the job, when Barry Shitpeas can out-talk you technically.
> Elop's a fucking idiot.
He's actually doing the smart thing here, given his situation. Microsoft needed to get an OS with a decent user interface out the door quickly, so they didn't bother supporting multicore in WinPho7.
Elop needs to sell a lot of WinPho7 devices, so he's using a standard sales FUD technique. Sheer nonsense of course, but if it drives sales Nokia does better, which is what Elop wants.
If the rumours are true and WinPho8 does support multicore (see, e.g., http://www.tomsguide.com/us/Windows-Phone-8-Windows-8-Apollo-NFC-phones,news-14074.html) then Elop will change his argument and declare that multicore technology has improved sufficiently over the last few months to warrant their inclusion in the latest Nokia device.
"Am I the only one who would actually prefer a phone to be just a phone - single core with good battery life over anything more."
Well, there are literally _thousands_ of models out there that just do that. They are called feature phones. I know the distinction between smartphone and feature phone is arbitrary, but feature phones are typically what you seem to want.
So please stop trying to make smartphones dumber and dumber. If I buy a smartphone, and pay often 10 times as much or more than for a feature phone, I don't want to have a dumbed down device, I want something full featured.
Of course what I'd really like would be a low power Linux phone with a text console and a keyboard. :) I could live without colour, though I like the screen of my N810 which turns monochrome in reflective mode.
"Am I the only one who would actually prefer a phone to be just a phone"
Strange then that you've previously posted :-
"I got an iPhone 3G at launch in 2008, I was out of the 18m contract by the end of 2009, when I got the HTC Hero. I didn't like it and found it vastly inferior to the iPhone, so replaced it with the 3GS after a few months and saw out the 18m contract. Then around the middle of last year I bought a GS2 and a sim only contract - which I gave to my dad when the Lumia launched and I bought that"
There's some interesting debate going on here and it's nice to read the different sides.
But I think I should clarify that when I say I want a phone to be a phone, that doesn't mean it shouldn't do things like web browsing, camera, music etc. I just don't see the logic in it booting a full desktop OS and becoming more than a phone...
I want a *smartphone* that remains just a phone. My sole complaint is this current race to the top in processor power will be coupled with a decrease in the usability of the device as a phone. It's nothing against dual cores, it's just a problem when their performance seems to take preference to battery life.
I'm still running an old Mesh computer that has a 900MHz Athlon processor and 384MB of RAM, which happily runs Windows XP (albeit slowly- but it's fine for web browsing/emails/document editing). So it just seems to me a smartphone could be optimised to work perfectly on lesser hardware and offer better battery life...
I don't think Unity is awesome, but maybe you find it unusable because you cannot adapt to change, rather than it being broken. It's better for someone to try and innovate even if they fuck up. There are, after all, hundreds of Linux distros; I'm sure you can live if one goes off experimenting and you find yourself incapable of adapting.
"innovation" is fine so long as it's not forced down everyone's throats. That is the Microsoft way. That's also how Apple does things. Bold experiments are fine so long as they don't trash the status quo. The fact that this is easy to do in Linux is why un-loved aspects of X11 are so handy.
Defaults should be sane, useful, usable, and represent the widest use case.
Unix does have some notion of UI guidelines.
Unix probably has some of the earliest "UI guidelines" in that there is a general consensus that UIs should be separated from the logic, and that if you have a UI it shouldn't be captive if possible, so you can always control it from another program.
Unix carries the notion that openness does not stop at the source code, but goes beyond that. An open system also uses open file formats you can process with simple and common tools.
"Defaults should be sane, useful, usable, and represent the widest use case."
The meaning of sane, useful and usable is open to interpretation, while "represent the widest use case" means everything should look like Windows.
I've been using Ubuntu since 7.04 and you know what, Unity works for me after a few minor changes. I dock the apps I use in the Launcher/Panel (whatever it's called), reduced the size to 32 pixels and disabled auto-hide.
On the broader topic, the great thing about Linux is there is choice, there is flexibility, there is the ability to customise. If you don't like what one person provides, you can change. Get Lubuntu and forget about Unity.
I'd still rather not have Android, as it's been a PITA for me already, but I like the move to real Linux for phones. If it continues and we get a full-fledged Ubuntu phone, I'll buy that.
You have to give them credit for at least trying to mainstream Linux even if thing like Unity aren't so great.
Unity is OK on a netbook. It falls apart (for me anyway) moving up into larger display Laptop and desktop machines. Other people like it though, so c'est la vie. I personally switched to Kubuntu to escape Unity.
The way things are headed with convergence, the mobile only OS's like iOS, Android and Winphone are all looking at the end of the line as the desktop versions move over onto the phones and tablets. Apple and the Linux vendors (like Google and Canonical and others) have a head start as they're already running stripped down BSD/OSX and Linux's with different layers running on the top obviously. A full blown OSX or Linux running on the phone with the iOS/Android (respectively) compatibility layer on top isn't out of the question. Microsoft is converging as well. It makes sense to only maintain one code base though.
1. not going to happen anytime soon (not until all GPRS/EDGE/UMTS/HSPA and mind you WiFi patents too expire, in any case)
2. while you wait for this, everyone else buys generations after generations of brave new LTE/gaga/googoo/blahblah iPhones and stuff and your point of reference moves ahead of you, again.
3. Return to p.1
It's *purely* an experiment so far, but truthfully it works quite well on a T-Mob SGSII T989 OC'd to 1.7GHz. Give it a quad core and it'd FLY. I used the Ubuntu Installer app on my rooted device running Juggernaut (along with several manual steps not recommended for normal users). Basically it fires up a headless Ubuntu which you then VNC into (from your phone or anything that supports VNC). Not ready for the masses yet, but it works and the performance is OK when VNC'd in from my desktop with the resolution set at 1280x800 (performance is merely acceptable if VNC'd from the device into itself).
I'm still experimenting with it, but I can see some definite advantages to this being developed further for normal consumption. One: I can have my "desktop" literally follow me wherever, with REAL desktop apps, except it's on my phone. Particularly useful if you are out and want to check email/ebay/facebook/whatever from a friend's PC (VNC'd into your phone) without potentially leaving your credentials on their PC. This way your authentication stays in your browser, rather than in the friend's browser. One of the other options is the whole VDI thing from VMware, but running this sort of thing from an overpowered phone does seem a better utilization of resources.
Bingo! Give that AC a prize. :) The other thing is that more cores, with a slower clock and newer manufacturing processes, will actually reduce power requirements while increasing performance (if the OS/apps are done correctly). Or more cores with similar clock speed on a newer process give the same power consumption as the device before, but with turbo'd performance.
What people forget is that the reason for more cores is to maintain, and sometimes reduce, power consumption while still increasing performance. Adding cores is now the easiest way to boost performance without a power plant next door (or a battery backpack, in the case of mobile devices). What the processor manufacturers ran into years ago was a near brick wall on single-core power consumption/heat dissipation. So they dropped the clock speed and popped another core in, for better performance in the same or better power envelope. Elop, however, is doing what any CEO of a company behind the 8-ball (with obsolete hardware and software) would do, and is blowing smoke to keep folks buying current product until the real products (dual/quad-core Win8 phones) get released.
I like this idea, and it warms me to Android - been hoping for a successor to my beloved Nokia N900 (best allround device out there). I would still prefer to see a native flashable Linux based OS for these devices.
Oh, and I may be weird, but I like Unity and Ubuntu and use it on my personal laptop all the time! Windows 7 feels awkward and old-fashioned when I use it.
Nice idea, and I genuinely think that a portable personal "core" that you carry and connect to shells tailored for a given task is a possible future of computing.
That said, quit pissing around and make a smartphone already! Maemo/MeeGo are dead in the water (again), and everyone and their dog's moving to dumbed-down walled-off systems that send a packet back to the manufacturer every time you pick your nose. Canonical making a phone is my last hope for a useful Linux-based device that drops the irritating hardware abstraction, lets me hack it to bits as I see fit, and is actually natively capable of performing advanced tasks and getting things done.
You know - A BLOODY SMARTPHONE!
I have to ask, I mean really.
Why the HELL is making phone chips more powerful and more thirsty regarded as a good thing? Why would I want to buy more expensive hardware to make up for the failings of a crufty and poorly optimized OS?
In my little world, software needing more horsepower is a bad thing. Your mileage clearly varies.
> more powerful and more thirsty
If you are stuck with single-core, as WP7 phones are, then more powerful is more thirsty. But multi-core does not necessarily mean more thirsty. Cores can go idle, which saves battery, but only to a certain degree. With multi-core, some cores can be completely turned off while the remaining one is put to idle. With a quad-core, instead of a 50-60% saving, you could get 75% plus 50-60% of the remaining 25%.
This is part of the reason that some phones last for days while Nokia WP7s seem to last less than a day.
The other thing is that multi-cores do not need to be symmetric. There can be one or more fast core plus one or more slower but highly efficient core that idles at much lower levels of usage (plus a GPU for example). An OS that supports this (WP7 does not) could switch in whatever is needed at the time and thus may be blazing fast when required and yet have long battery life.
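The multi-core saving argument above can be sketched with a back-of-the-envelope model. The percentages are illustrative assumptions taken from the figures in the thread, not measurements:

```python
# Back-of-the-envelope model of the multi-core battery argument.
# ASSUMPTIONS (illustrative, not measured): every core gets an equal
# share of the CPU power budget, idling a core saves ~55% of its share
# (the "50-60%" figure), and a powered-off core saves 100% of its share.

def idle_saving(total_cores, cores_powered_off, idle_fraction=0.55):
    """Fraction of the CPU power budget saved when some cores are
    switched off entirely and the rest are merely idled."""
    share = 1.0 / total_cores
    off_saving = cores_powered_off * share                # 100% per dead core
    idle_cores = total_cores - cores_powered_off
    return off_saving + idle_cores * share * idle_fraction

single = idle_saving(1, 0)   # a lone core can only idle
quad = idle_saving(4, 3)     # three cores off, one idling
print(f"single core, idling:       {single:.0%}")   # 55%
print(f"quad core, 3 off + 1 idle: {quad:.2%}")     # 88.75%
```

The point of the toy model is only that "75% plus 50-60% of 25%" lands near 90%, well above what a single idling core can save.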
That doesn't address the main point both Elop and others are making: this core race is being driven not by any user need but by a poorly crafted, bloated OS. It's patently obvious that of the four main smartphone OSes, Android is by far the most demanding on the hardware.
> obvious .. Android is by far the most demanding on the hardware
It is not 'obvious' that this is the case. It may be obvious that Android can handle more powerful, more varied, and more featureful hardware.
While an Android phone may have dual 800MHz a WP7 (such as Nokia 800) has 1.4GHz single (because it can't handle more than a single core).
Android can reduce its battery usage by using dual, or more, cores by switching some off and idling the lower speed cores, which increases the battery life.
WP7 phones (single core) are made obsolete by MS talk of dual core 'SuperPhone' and WOA (and thus implied WP8) which _requires_ dual core.
I noticed that some recent phones are qHD at 960 x 540 which makes it easier to upscale and downscale (and thus should be _much_ less demanding). WP7 is locked to only 320x480 or 800x480 (by MS directive and WP7 capability) which makes HD especially demanding on rescaling. This is supposed to be fixed by 'SuperPhone' (which also makes current WP7 phones obsolete).
I thought this was about shoddy OS implementation driving ever-hungrier chips. And you know as well as I do that all your fat dual-core smartphones guzzle juice like an American economy car.
The continued "WP7 is shit because it doesn't support hardware you shouldn't need" thing is getting pretty old, dude. As well as not really relevant.
> The continued "WP7 is shit because it doesn't support hardware you shouldn't need" thing is getting pretty old, dude. As well as not really relevant.
Whether WP7 is shit or not is independent of how many cores it supports, and hasn't been discussed. However, support for multi-cores can have a beneficial effect on the battery, as pointed out with good reasons.
It seems to me that it is the 1.4GHz single-core that guzzles juice:
"""The Nokia 800 ... it has had its fair share of issues regarding battery life and audio and camera quality."""
Yes, the first batch (reportedly - I haven't got one and don't work for Nokia or MS, so I can't state it with certainty) had shitty battery life. It was apparently an implementation thing. Later batches don't have the issue (also reportedly).
We can all be snide, you know. If I went looking, I expect I could find battery-draining reports on some early or shoddy Android implementations. Or exploding iPhone batteries. But that wouldn't do anything except make both of us look like we're trying to score cheap points instead of discussing the issue, would it?
"With multi-core then some cores can be completely turned off while the one remaining can be put to idle. With a quad-core, instead of 50-60% saving, you could get 75% plus 50-60% of 25%."
Sorry but I need more explanation. Hypothetical example of single core 1GHz and quad core 1GHz CPUs. So scaling a single core to 100MHz vs. scaling a single core to 100MHz and three cores totally shut down. Yay, a larger percentage saving for the quad core but the same net result?
> Hypothetical example of single core 1GHz and quad core 1GHz
I wouldn't put you in charge of design, then.
Typically a comparison may be made between a dual-core 800MHz and a 1.4GHz single core. These would give approximately the same level of performance on an OS that could multi-task effectively.
A quad core could have 3 x 800MHz and one, say, 400MHz. This would outperform both of the above and by shutting down the 3x800 and idling the 400 could use less battery when idle than either. When required it could switch on one, two or three of the faster ones as required so it need not use more battery than the above even when running.
How hard is that?
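The asymmetric scheme described above can be sketched as a toy core-selection policy. The three-fast-one-slow layout, the frequencies and the thresholds all come from the hypothetical in this thread, not from any real SoC; real schedulers (e.g. ARM's big.LITTLE switching) are considerably more subtle:

```python
# Toy core-selection policy for the hypothetical asymmetric quad above:
# three fast 800MHz cores plus one efficient 400MHz core. Invented for
# illustration only.

FAST_MHZ, SLOW_MHZ, FAST_COUNT = 800, 400, 3

def cores_for_load(demand_mhz):
    """Return (fast_cores_on, slow_cores_on) for a given demand."""
    if demand_mhz <= SLOW_MHZ:
        return (0, 1)                         # tick over on the efficient core
    fast_needed = -(-demand_mhz // FAST_MHZ)  # ceiling division
    return (min(fast_needed, FAST_COUNT), 0)

print(cores_for_load(100))    # (0, 1): background work, fast cores stay off
print(cores_for_load(1500))   # (2, 0): two fast cores woken
print(cores_for_load(9999))   # (3, 0): flat out, capped at three fast cores
```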
> Why the HELL is making phone chips more powerful and more thirsty regarded as a good thing? Why would I want to buy more expensive hardware in order to make up for the failings of a crufty and poorly optimized OS?
> In my little word, software needing more powerful horsepower is a bad thing. Your mileage clearly varies.
You are not being specific about which OS you refer to as "needing more horsepower" and "failings of a crufty and poorly optimized OS".
However, it seems that the Nokia 800 is basically an N9 with the screen resolution reduced (to fit WP7) and the 'horsepower' increased from 1GHz to 1.4GHz. So perhaps this is what you are referring to.
" it seems that the Nokia 800 is basically an N9 with..."
Don't be fooled by the similar enclosures: the N9 runs off a Ti OMAP system-on-chip; the Lumia 800 is a Qualcomm Snapdragon design. The camera modules are the same, but that's about it.
My 2c on Elop's original point: he said "quad-core", not "dual-core", and he's probably right. Also, he said "right now", and the lifetime of a mobile phone is shorter than that of a PC or laptop. A dual-core system using ARM's big.LITTLE architecture is far more suitable for phones than the current designs using symmetrical CPUs. Once you go to two cores, you've made the best gain; after that it's diminishing returns, as you still have to power the idle cores.
And Android devices are not "more capable of using better hardware" than other OSes. In fact, Symbian and iOS both support multiple CPUs. It is only Android that NEEDS such high-performance hardware to achieve an acceptable user experience. If you run an interpreted runtime, you will always use more CPU power for a given task than the same program written with the native instruction set. Just cos it's nice and smooth on your dual-core >1.5GHz phone with buckets of RAM doesn't mean that it's an efficient OS: you may as well call a Range Rover a sportscar because it has a good 0-60 time.
"Android that NEEDS such high performance hardware to achieve acceptable user experience"
Is this from your experience or just repeating what others have said?
I ask, because my HTC Desire (running 2.3.3) is single-core and I have it under-clocked to 800MHz so that the battery will last ~2 days. I chose 800MHz because I couldn't notice any difference to the default 1GHz (apart from some games that I don't play).
I could post a screenshot from CPU Master, but that wouldn't prove that user experience, so unfortunately you're going to have to take my word at face value.
I can't comment on WP7, having never used it, but the drive for more powerful, higher core-count chips in Android phones seems to me to be about mobile gaming, since this is a big-money area versus iOS.
> he said "quad-core", not "dual-core"
Elop has to tread a fine line. Current WP7 is limited to single core, but the next iteration 'SuperPhone' and WOA (and by implication WP8) will _require_ dual core by directive from Microsoft.
> If you run an interpreted runtime
Dalvik isn't an 'interpreter', it is a byte code VM. Just like C# on WP7.
Sorry, but a 'byte code VM' is just the name for a specific kind of interpreter. Both Dalvik and C# *are* interpreted runtimes: the interpreter converts the byte codes used by CIL or Dalvik into native machine code for execution. The only difference between this and a tokenised BASIC interpreter is that the instruction tokens being interpreted represent lower-level operations.
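To make the "a byte code VM is a kind of interpreter" point concrete, here is a minimal dispatch loop. The opcodes are invented for illustration; real VMs like Dalvik and the CLR add registers, verification and JIT compilation on top of this basic idea:

```python
# Minimal bytecode interpreter: a loop that decodes instruction tokens
# and executes native code for each one, which is all a "byte code VM"
# is at heart. Opcodes here are made up for the example.

def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)          # the native work the token names
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# (2 + 3) * 4
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))   # 20
```

Whether you call the per-token dispatch "interpreting" or "a VM" is exactly the terminology being argued over in this subthread.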
WP7 at least has the advantage that it doesn't try to do all the OS's heavy lifting in this kind of environment.
> WP7 at least has the advantage that it doesn't try to do all the OS's heavy lifting in this kind of environment.
You are confused.
Android's OS is Linux, written in C plus a small amount of assembler, compiled to machine code. Most of the middleware and core libraries are also from Linux and are written in C and compiled to machine code.
Dalvik is a framework and VM for running apps; it doesn't do the "OS heavy lifting". It also has a JIT compiler, which means that it is not 'interpreting'.
It is also possible to develop in other languages for, or on, Android, such as C/C++, Python, Lua and others.
Right now there's not much point, if you want a "full computer" somewhere you can just bring your laptop, now that MacBook Airs (and similar) are more or less tablet-size.
But imagine a nice wireless standard for monitors and speakers... some kind of wireless HDMI... and a world where all phones and TVs support it (in hotel rooms, etc.). Then you just check into a hotel and you can start using the TV as a monitor, add a bluetooth keyboard and mouse and BAM instant desktop experience anywhere you go without having to carry a laptop.
I suppose you can get this more or less if you carry around an HDMI cable for your phone, plus a bluetooth keyboard mouse... just not as convenient.
It's been noticed.
Adding the RF and getting the telco permissions might be tricky, but it's an indicator of what can be done with very little space and very few milliwatts and even fewer dollars in the right circumstances.
All being well, the Raspberries should be leaving China this week. Should be an overnight sensation fairly soon afterwards.
Elop's comment is right up there with "640KB is enough for everybody".
Of course the 640KB was a limit for MS-DOS on the PC (the IBM PC actually set the limit; MS-DOS could run to 1MB). The single core is a limit for WP7 because:
a) it doesn't multitask effectively (more like TSRs on MS-DOS).
b) It wouldn't use more cores, it can't.
What Elop fails to understand is that, with an effective multi-tasking system (such as Linux) that can use multiple cores, when it goes into power-saving mode it can completely switch off cores and then slow down the one remaining. This means that it can reduce battery drain due to the CPU by around 90%, rather than the 50 or 60% for a single core.
WOA apparently _requires_ at least dual core, but that seems to be to ensure that the GUI can continue to work when a process runs wild and chews all the cycles it can get.
This indicates that current Nokia WP7 phones will not run WOA or WP8 and so will be obsolete at the end of the year. That is, if they are not already obsolete.
"Of course the 640KB was a limit for MS-DOS on the PC (the IBM PC actually set the limit; MS-DOS could run to 1MB)."
The 640KB limit was originally due to the 20-bit address bus of the original 8086.
20 bits gives you 1MB (not Mb). Now some of that address space is required for things like the BIOS ROM, buffers, screen memory and IO so during the hurried design of the IBM PC the IBM beardies declared that there would be a maximum of 640KB available as general RAM, the rest being reserved for the system.
When DOS was ported over to the PC (from qDOS which was originally written for Z80), they essentially designed it based on the assumption that the 640KB limit defined by the hardware platform was going to be there forever.
Indeed, the first PC only came with 64KB, so 640KB probably seemed an enormous amount of RAM back then.
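The address-space arithmetic described above works out as follows (the 640KB/384KB split is the IBM PC design decision, not something inherent to the 8086):

```python
# The 20-bit address-space arithmetic from the posts above.
ADDRESS_BITS = 20                   # 8086/8088 address bus width
total = 2 ** ADDRESS_BITS           # 1,048,576 bytes = 1MB
conventional = 640 * 1024           # what IBM left as general RAM
reserved = total - conventional     # BIOS ROM, video memory, I/O, etc.
print(total)                        # 1048576
print(reserved // 1024)             # 384 (KB reserved for the system)
```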
This is the genesis of the (probable) urban myth where Bill Gates allegedly states that "640K should be enough for anyone".
Hmmm, now I feel the need for a trip down <ahem> "memory lane".
It wasn't long before the 640K limit became overly restrictive and during those early days a few schemes were devised to work around the limitation.
First, utilities appeared that allowed you to find and "recover" unused portions of the reserved area and these extra crumbs were referred to as "high memory"
.. continued ..
Then "expanded memory" (EMS) came along. This was a hardware memory expansion board that used a special software driver to swap memory from the board into a 64KB page frame in the upper memory area, where it could then be used by a CPU operating in "real mode" (aka DOS mode).
When "protected mode" arrived with the 80286 (along with an expanded address bus), "extended memory" (XMS) was introduced. XMS used a software driver to temporarily switch from real mode to protected mode in order to access physical memory above the 1MB ceiling. XMS didn't require a special hardware memory board.
Later drivers were introduced that emulated EMS in XMS without the special hardware although by then most people were using XMS unless they had an application that for some reason required EMS.
Oh, I remember the days when you had multiple config.sys and autoexec.bat files to load up just the right driver profile (usually to make some game work properly) like it was only yesterday. I honestly think I spent more time tuning those files than I did actually playing the games, but there was occasionally that special thrill you got when you managed to eke out another 16KB of RAM! Yeah baby!
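For anyone too young to have suffered it, a memory-tuning CONFIG.SYS of that era looked something like this. Paths and driver choices varied per machine (this assumes MS-DOS 5+ in C:\DOS), so treat it as an illustrative sketch rather than a canonical setup:

```
REM Load the XMS driver first, then EMM386 for upper memory blocks
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE NOEMS
REM Put DOS itself into the HMA and enable UMBs
DOS=HIGH,UMB
REM Load drivers high to keep conventional memory free for games
DEVICEHIGH=C:\DOS\ANSI.SYS
FILES=30
BUFFERS=20
```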
These days it's just another entry in my list of "complicated geeky things I have learnt that is now totally useless except for boring people during a nostalgia trip on an Internet forum."
That said though, happy days they were, although looking back that entire memory model was really just a huge shit fight most of the time.
I certainly wouldn't want to go back there.
> 20 bits gives you 1MB (not Mb).
Exactly, and I have some 8086, 8088 and even 80186 machines here where almost the whole 1MB is available for the OS and programs.
> Now some of that address space is required for things like the BIOS ROM, buffers, screen memory and IO so during the
hurried design of the IBM PC the IBM beardies declared that there would be a maximum of 640KB available as general RAM, the rest being reserved for the system.
The original 5150 'A' model only had address lines for 256KB, no matter how many 16KB or 64KB memory cards you could stuff into it. The 'B' model (which I have here) did allow memory to grow to 640KB, if you could afford the cards and could find enough slots (such as with an expansion cabinet).
Other machines had other limits. I have a Sharp PC that has around 900KB, with the BIOS and screen buffers above that.
> (from qDOS which was originally written for Z80)
I don't know where you got that from. QDOS was based on CP/M and was written (some say cloned*) to support the 8086 S100 cards that SCP were starting to produce for their Zebra S100 systems. There was no need, nor any utility, in having it written for Z80 as SCP were a full OEM for DRI's CP/M.
* At the time there were so called 'source code decompilers' for CP/M (and other) which output annotated 8085 source from the CP/M BDOS. In fact all the annotations and source were coded in the 'decompiler' but it had to be sequenced by the BDOS binary to avoid copyright issues.
Intel had also produced an 8085 ASM to 8086 ASM-86 converter. It has been alleged that these two together is how the first cut of QDOS was produced. Apparently it was from CP/M 1.4 BDOS because Gary Kildall claimed that SCP-DOS had a bug that was present in 1.4 and was fixed in 2.0 and 2.2.
To clear up the QDOS confusion, the QDOS that was the predecessor to MS-DOS was first written for 8086, and was later renamed to 86-DOS. Both Atari and Sinclair had entirely separate QDOSes of their own, but neither of these were written for Z80 (6502 and 68008 respectively).
I'd have preferred if Nokia had kept going with Maemo, but after that assassination I'll just have to keep my fingers crossed for this. I hate Unity on the desktop where I run Mint KDE so it'll be interesting to see how this works on the smaller screen. I had given Ubuntu on my Galaxy Note a very quick go but it wouldn't boot and I hadn't time to play around so I must retry that soon.
Elop's comment was that "You don't need one", not that you can't make use of multicore processors. He was alluding to the fact (well his opinion) that Android requires more processor "grunt" to operate smoothly than iOS and particularly Windows Phone. I don't know how you can fault that argument really. If the OS performance overhead is lower, surely that is a good thing? It leaves more headroom for the device to perform elective tasks.
This would be absolutely killer for my Asus Transformer! I know, I know, Unity is clunky and not always very intuitive on a normal PC, but it would be nice to see if all their "touch screen optimizations" actually bear fruit on a tablet/sudo-netbook. One has to admit that Unity is less broken now than when it was first introduced. Maybe I'm the last one left who has any faith that Ubuntu can iron out their UI issues, but I'm excited about this. ICS is great and all, but it really doesn't benefit from an attached touchpad and keyboard, since most apps are just not able to make use of them. Not sure I want this on my phone yet though...
"We really know the desktop and spent a lot in time the last two years trying to leapfrog Windows and get to a future of a fully converged set of experiences"
I could not help but keep thinking about that sentence. As a long time (K & U)buntu user, I have to say that yes, Canonical has spent the good part of the last couple of years attempting to dumb down the Ubuntu desktop to a level where it resembled a Smartphone instead of a full fledged desktop.
Now it is time for them to realize that phone users are not going to like the experience when they plug their dual-core-as-powerful-as-a-desktop-was-ten-years-ago monster phone into a docking station with a keyboard attached... because they expect a full-fledged desktop UI. Oh, the irony.
A pity, really. Everyone is entitled to have their own vision and push for it. And to spend their hard-earned dot-com millions on it. And one usually tends to sympathize with them, even if they fail, just because they were brave enough to keep trying. This time Canonical is failing, and it's not for lack of legions of unhappy Ubuntu users crying out loud.
Maybe there is a lesson here: do not ignore disturbances in the force caused by your loyal users. Yes, some critics were being extreme. Even rude. But you should have listened to them. A phone is not a desktop. A desktop is not a tablet. A tablet is not a phone. A tablet is not a desktop. A desktop is not a phone. Get it?
The worst part? I hope to be wrong, because even with the UI mess that Unity is, I would be happy for Linux to get some significant "desktop" share. Linux has already won the embedded, router, phone, server and most other markets, with the desktop and tablet being the only things keeping it from total world domination.
> dumb down the Ubuntu desktop
Yes, they have produced a _different_ UI, just as iOS is different from OS X and Metro is different from W7.
But UIs on Linux are optional. Ubuntu can run KDE, GNOME, XFCE, LXDE or several others. While Unity is the default and is rather oriented towards tablets or netbooks, the others are in the repository and in other related distros (e.g. Xubuntu).
> A phone is not a desktop. A desktop is not a tablet. A tablet is not a phone. A tablet is not a desktop. A desktop is not a phone.
You should mention that to Microsoft who seem to be 'converging' everything to Metro in the belief that they are all the same.
What Ubuntu is doing is providing an environment where the UIs _are_ different for different usages, but with one device. You can choose the UI when you login, even if it is LXDE (which I use).
Or in the case of Kubuntu (KDE), if installed on a netbook it has a different UI than if installed on a laptop/desktop. Pretty cool really, and really well tweaked for the screen real estate. I wish Canonical had done that with Unity rather than trying to apply the same interface everywhere. Hopefully MS copies KDE's work there instead of Ubuntu's, so we have different UIs on the phone and desktop. One size does not fit all!
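To make the "you can choose the UI" point concrete (a sketch; the package names below are the usual Ubuntu metapackages, worth checking against your release), installing an alternative desktop is a one-line job, and the login greeter then lists every installed session:

```shell
# Install alternative desktop environments alongside Unity.
# Each one shows up as a selectable session on the login screen.
sudo apt-get update
sudo apt-get install lubuntu-desktop    # LXDE-based session
sudo apt-get install kubuntu-desktop    # KDE session
# Log out, click the session icon on the greeter,
# and choose Unity, Lubuntu or KDE as you please.
```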
The GUI is just there to get people hooked on it, and to have multiple terminal windows as well as something for the browser.
The reason why I want Linux on a mobile device is that I can run all my non-GUI software on it. I want to be able to run olsrd on my mobile device, or openvpn, or of course ssh.
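As a sketch of what that buys you (assuming a Debian-style userland on the device, in a chroot or natively; paths and hostnames are illustrative), the usual non-GUI daemons install and run exactly as on any other box:

```shell
# Install the command-line tools the comment mentions.
apt-get install openssh-server openvpn olsrd
service ssh start                # accept inbound ssh to the phone
ssh user@host.example            # outbound ssh works too
# Bring up a VPN tunnel from the phone (config path illustrative).
openvpn --config /etc/openvpn/client.conf &
```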
You would be right except for:
Ubuntu One is the personal cloud that brings your digital life together. Now you can sync your files across all your Ubuntu or Windows computers; access your contacts or notes from anywhere; or stream your entire music collection to your mobile phone. Ubuntu One helps you do it all.
That's my verdict. If you have an OS that's not fit for both mobile and desktop use in one go (like Android 4.0), then it's game over for the user experience, I'm afraid. Most of you see this from the experienced IT guy's perspective. If you want to confuse the hell out of Joe Public, go ahead. Most people don't even know what Linux is, let alone want to readjust to an OS designed for the geek elite to run an open minefield of open applications.
Great idea in principle, but seriously: Linux distro developers need to get their heads round the mobile market and build an OS which will fit both the mobile and desktop experiences without too many headaches. A challenge? Of course. Who's already ahead in that game? Apple. When will OS X and iOS bond? Who knows. They're the only company who understand that Joe Public wants a nice GUI experience from A to B without too many complications.
Maybe suicide, maybe not. In this case, it's the same OS. Different "window" dressing. Not like it's Windows running on top of Android. So no big deal really. I'm expecting Apple to follow suit shortly, since they're running the same OS under iOS and Mac OS and the mobile hardware has caught up. Why maintain two code bases anymore? Microsoft has already signaled it is following suit by wiping out the old Windows CE/Mobile/Phone altogether and going with the Windows NT code base, so it more closely resembles the Linux/Android "agility" with relation to platform hopping, which is difficult under the current Windows. (But was easier with "Classic" NT 3.51.)
The new BYOD stuff from VMware is about to introduce VMs on your phone for a secure business "phone" to replace the remaining BlackBerrys in the corporate world, with a virtual phone in their place. (I don't know how I feel about that really, but it's happening anyway.)
I personally like the idea of different UIs for different functions, since I get something customized for the function at hand. Nothing worse than using a screwdriver as a hammer (which is what Unity and Metro feel like). For Joe Public? Dunno.
Ubuntu on the desktop has been a disaster since the Unity crap. And now they want to put it on a phone? Even if it works, you still get a very underpowered desktop. It might be good in limited situations, but it doesn't make much sense. If we had the sort of processor power and power-source technology in Star Trek, then we'd be talking about a true mobile phone/computer all-in-one device.
People just write slow, high level rubbish code these days. So it's no wonder you need so much CPU power in a phone.
I've been playing with some MIDI electronics projects recently and it's refreshing to see people who can write decent code on PICs in only a few bytes of RAM.
While a modern smartphone is obviously a lot more powerful, it needs to be if you're going to write applications in Flash then convert them into Java style code to run on a VM above a HAL which finally (after a few layers of API) does something.
Chip vendors have to go somewhere and their best improvements seem to be related to multi-core.
I wondered how a phone would benefit from this but it seems that newer chips will offer better power management opportunities, that alone makes sense for phones.
But also, phone vendors have to go somewhere to continue selling, and making the phone a thin client is a good way to offer users more tangible features from the phone platform. Multi-core gives the platform a boost, and when it's connected to a display there is presumably mains power too, so running all the cores is feasible.
Finally Ubuntu can build on the Linux part of Android and provide desktop-like functionality for little development effort. So it all makes sense.
How long before Apple brings a full OS X experience to a future iPad, iPod Touch or iPhone when docked (via an expensive accessory), coupled with iCloud storage... and then claims to be the first with such a revolutionary device?
If they were to do this (Apple, that is), monitors would start popping up all over the place with a dock connector for the purpose, and pretty soon you'd struggle to buy a monitor that didn't have the dock, I bet.
As irritating as it is sometimes, Apple does tend to make ideas catch on through sheer greed of others wanting to jump in on the action with accessories etc.
Since Apple's Mac OS/iOS is analogous to Linux/Android, I would think less than 6 months of development time. 3 months on a crash program for the hardware/software combination that Canonical is pushing here. Cisco could push it out even faster, since they already have the Cius Android tablet phone with the dock and peripherals, and at one point had Ubuntu running on it for a home electrical-monitoring program that got cancelled.
Keep in mind, the Ubuntu Installer app already in the marketplace does the basics of running the Ubuntu desktop from your ARM-based Android phone/tablet on anything that supports VNC. It is currently for the hardcore geek among us (sheepishly raises hand), and isn't for the masses due to the manual steps required. That was one guy doing the majority of that project. I don't see why Apple couldn't have this available almost immediately for the iPad 2 (which might be the only mobile product they have with the resources to do it), since they have much vaster resources. Microsoft won't be able to until its mobile codebase switches to the full Windows NT kernel instead of the badly cut-down Windows CE one, which it is doing for Win8.
"In years past, Canonical tried dual-boot on Windows PCs, a program that has failed and been withdrawn."
Of all the people you personally know running Ubuntu, how many are dual boot and how many boot from inside Windows.
A + B = 100%
Now add in the those who only use Ubuntu.
A + B + C = 100%
Your assessment of failure is tuned to the Microsoft song.
Those who still dual-boot only do so because of software they cannot do without that only runs under Windows or Mac OS.
I have a Droid 2 Global. This is a single 1.2 GHz ARM with 512 MB of RAM. I do have a Debian chroot on here, and a while ago I installed OpenOffice (or is it LibreOffice now? Whatever) and ran it on a remote X display. Not only did it work, it was SNAPPY, and even when I started whipping through menus, preference screens, etc. to try to drive up CPU usage, I couldn't get it above 5%.
So, I could easily see a phone running a full desktop environment with a dock.
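For anyone curious, the chroot-plus-remote-X recipe described above looks roughly like this (a sketch; the image path, mount point and the desktop's IP address are all illustrative):

```shell
# On the rooted phone, as root: enter a Debian chroot kept on the SD card.
mount -o loop /sdcard/debian.img /data/local/debian
mount -t proc proc /data/local/debian/proc
chroot /data/local/debian /bin/bash

# Inside the chroot: send X clients to a remote display --
# a desktop on the LAN that has allowed the phone with "xhost".
export DISPLAY=192.168.1.10:0    # illustrative desktop IP
apt-get install libreoffice
soffice &                        # the window opens on the desktop's screen
```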