
How come my sleepy eyes read that as ForeSkin..?
Analyst house IDC predicts that the traditional embedded-systems market is reaching an inflection point where a new breed of intelligent devices will take over the market and drive the current fashionable terms du jour: Big Data and "the internet of things". IDC defines intelligent systems as those that use a high-level …
When my phone answers my calls, takes a message, books my appointment, calls my wife to say I'm late, then tells me to put on my coat because it might rain...
Without me having to touch a screen or press a button, and then buys me a beer...
Then I will believe it is intelligent.
Until then it is just a block of plastic with a circuit board and battery.
IDC are getting more and more like one of those 900-xxx services that companies call to have their flagging egos stroked. "Oooh big boy, you are so huge and make me wet".
In any Internet of Things there will be growth in embedded systems. That will be a huge boost for ARM. Today, ARM is almost synonymous with anything bigger than 8-bit in embedded systems.
Even your Intel-Inside laptop has many more ARM cores than Intel cores: two or three in the hard drive, one in the Bluetooth module, another in the Wi-Fi module, and probably another in the mouse/tablet.
"How about reporting its stock and calling for replenishment based on sales ?"
Payphones have been doing this kind of thing for decades (phoning home to say my coinbox is full, or whatever).
Over ten years ago a well known global fizzy drink vendor was using a well known UK-based globally-supported supplier of payphone management equipment and services to "network" their vending machines too. Doesn't need much bandwidth, doesn't need much compute power.
So it's been possible for quite a while. The far more important question is why the KitKat machine is always nearly empty of KitKats: why doesn't it report what it's sold, so that HQ can use that info not just for planning restocking but also to reallocate the space in each machine to more closely match demand at each particular site, and thus maximise return on investment?
And the answer is probably: doing stuff cleverly like that costs time and money up front even though it might make more money in the medium term. Spending money up front without an instant return is no longer considered sound business practice.
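The restocking-plus-reallocation idea above can be sketched in a few lines. This is a minimal, purely hypothetical illustration (function names, products and slot counts are all made up): each machine reports per-slot sales, and HQ reassigns slots in proportion to demand.

```python
from collections import Counter

def reallocate_slots(sales: Counter, total_slots: int) -> dict:
    """Assign a machine's slots in proportion to each product's share of sales."""
    total_sold = sum(sales.values())
    if total_sold == 0:
        # No telemetry yet: split slots as evenly as possible
        per = total_slots // len(sales)
        return {product: per for product in sales}
    # Round each product's share, but keep at least one slot per line
    return {product: max(1, round(total_slots * n / total_sold))
            for product, n in sales.items()}

# One machine's weekly telemetry report: KitKats outsell everything else,
# so the next restock should give them more of the machine.
weekly_sales = Counter({"KitKat": 60, "Mars": 20, "Crisps": 20})
print(reallocate_slots(weekly_sales, total_slots=10))
# → {'KitKat': 6, 'Mars': 2, 'Crisps': 2}
```

The point is that the compute and bandwidth needed are trivial; the cost is in the up-front engineering, exactly as the comment says.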
Apparently ARM didn't pay IDC this year.
I see the embedded space finishing the move from 8- and 16-bit processors to ARM. The complex devices are using Linux on ARM, while the rest seem to be using one of the RTOS systems or a vendor-specific library on a Cortex-M.
That migration includes systems that used to run on various x86 controllers, and ones that previously would have 'winced'.
If the application can't justify an ARM, the alternative seems to be one of several little-known 8-bit microcontrollers. They have wacky instruction sets and weird I/O pins, but must cost almost nothing.
Those tiny microcontrollers probably won't go away. Developing for ARM requires a certain amount of initial investment, which may not be justified for simple devices. The smaller microcontrollers can simply be programmed in assembler, which is, for simple things, much quicker than having to install a C environment. Plus their hardware often is much simpler to use.
So if you need the power of an ARM, by all means get an ARM (you probably already did so), but if it's just a washing machine you want to control, you are probably better off with some 8051 clone or an ATMega or PIC or something.
I don't think that the up-front cost for embedded ARM development work is that high. For example, the eCos RTOS with GCC compiler won't cost you anything and is on a par with many commercial alternatives. If you don't mind vendor lock-in, most silicon vendors have their own free tool distributions as well.
I think there's a wider trend here. It used to cost a fortune to tool up for things like FPGA and embedded software development, but there is now so much competition from the silicon vendors that the entry level prices for these tools are tending towards zero.
Where I agree is that a good developer chooses between ARM, 8051, FPGA, etc. depending on the requirements of the job on hand. They are all just tools in the toolbox. In that context, embedded Windows on x86 is the big expensive lump hammer that's used to bludgeon any design problem into submission.
Well, setting up such an embedded toolchain is not entirely trivial; setting up some monolithic assembler is much simpler. Software licensing cost is not the issue here, since these systems are often developed by engineers who easily cost the company a thousand euros a week.
For GCC I essentially have to re-build the compiler for every platform. Vendor-provided libraries are usually not only near impossible to use, but also extremely hardware-dependent on some ARM controllers. And you need to use libraries, as the complexity of the hardware is far too high to do it all manually.
Again, I agree that ARM will probably sweep up the low-power 32-bit market, and perhaps a lot of the 16-bit market too, but that's just a fairly small niche. The majority of microcontrollers are far smaller than that, and it doesn't make any sense for those to move to ARM.
Yocto is your friend if you don't want to build a GCC toolchain yourself - Intel has already done it for you (for the following CPU architectures: i86, ARM, PPC, MIPS). ;-)
All the toolchains work with the Yocto Eclipse plug-in, which simplifies on-target profiling/debugging for UI-oriented developers. :-)
http://www.yoctoproject.org/
If the time and money cost of setting up your embedded toolchain is other than negligible, specifically if it is a significant factor in any given project's bottom line, then something has gone seriously wrong somewhere. Probably at management level, where by the sound of things they appear to be staying with what they thought they knew five years ago rather than what makes business and engineering sense this year.
"I think there's a wider trend here. It used to cost a fortune to tool up for things like FPGA and embedded software development, but there is now so much competition from the silicon vendors that the entry level prices for these tools are tending towards zero."
The entire price for dev tools, especially in the FPGA market should be zero. Completely free, full featured tools are one of the best ways for the FPGA/Microcontroller manufacturers to get their parts into designs.
I know if I am picking components for a design and I need a certain functionality then I pick parts that I don't need to pay huge money for tools to get the functionality I need.
"The entire price for dev tools, especially in the FPGA market should be zero."
I'm sure the FPGA vendors would love that. Unfortunately, their upstream tool vendors probably aren't as keen on the idea. I've no doubt that this is behind Xilinx's decision to replace Modelsim with their own simulation tools.
Back to talking about ARM processors, one of the most interesting angles out there is the Microsemi (ie, Actel) take on the Cortex-M1. They roll the ARM royalty into the price of their M1 enabled devices so that there is no upfront license fee or royalty required from the end user. It means you can basically design your own fully customized Cortex-M1 based SoC for the price of a 100 quid dev board. Quite how Intel think they're going to compete with that, I'm not sure!
IDC's conjecture assumes that there will be no "ready-to-use" off-the-shelf ARM-based platforms, and that developers who want to get something done using OTT will go Intel (and thus Windows).
Close but no cigar - there is already a healthy market of "mostly baked" ARM-based SoCs in the Far East. That is the territory into which Via retreated after being defeated by Intel.
Even if we assume that Intel hits its power targets for the new Atom, and even if we assume it hits a price point below current and future Wondermedia, OMAP, etc. SoCs, it still won't have all those GPIOs which are present on those systems. These will have to be grafted on using yet another unspecified 8- or 16-bit controller(s), which costs extra money to integrate and build.
IDC is making the mistake of extrapolating from POC development to the real system. A lot of the proof-of-concept work is being done on Windows, and early smart meters, smart devices, etc. all have a rather expensive Windows build driving them. That will not necessarily be their production build. In fact, it is the least likely candidate.
MIPS, though not mentioned, isn't idling either. IMHO, with the iPhone and Android, developers and hardware companies finally got rid of their weird x86 addiction - even Microsoft, with Windows 8.
There is no turning back; the monopoly is over, and that will be good even for Intel. They need competition in order to progress too.
The crucial words are:
"x86 systems, which currently have 8 per cent of the market, will grow to 41 per cent as manufacturers seeks to put more grunt into their systems."
Change "grunt" to "complexity and expense" and it might have been technically more accurate but it would still have been daft for anyone to actually want to do this, and it's daft for anyone claiming to have a clue to say this.
ARM (and, just as importantly, the ARM licensees designing SoCs) don't yet have a great deal to worry about from x86 (and Intel's lack of licensees designing SoCs). Even the MIPS SoC builders and the unknown microcontrollers are probably going to be more successful than Intel in the lower-end markets.
Intel, on the other hand, do rather need a success, any success, outside the x86 Wintel market. Pick a random selection of consumer or professional electronics from any time in the last few years and unless it specifically needs Windows its Intel content will likely be negligible, quite possibly zero (iRMX is irrelevant).
Intel have had various attempts outside the Wintel market (everything from I2O to WiMax to IA64 and beyond) for many many years but each one has failed. Left with only the Wintel market, Intel have been reduced to commercial tactics which in different circumstances would have resulted in people being locked up for running protection rackets - not just the well known Dell/Intel sweetheart deals etc, but also Intel buying whole formerly multiplatform companies e.g. Wind River Systems (suppliers of the multiplatform VxWorks embedded OS) and Virtutech (suppliers of the SIMICS multiplatform chip and system simulator). So who's going to use SIMICS or VxWorks for a new non-x86 design now?
Was it Terry Shannon that used to call IDC, Gartner, etc the "coin-op consultants"?
FPGAs have real ARM cores now
ARMs start at 80c for an M0
A whole board with a high-end ARM running Linux is cheaper than an x86.
Where are the industrial-I/O x86 parts? Geode, anyone?
IDC must be in La La land.
At the high end, running an OS, ARM has been displacing MIPS and x86 for years, and is still growing.
At the low end ARM didn't previously exist, and x86 can never, ever eat that market. The M0, M0+ etc. are now eating into the 8-bit market, except for the smallest, cheapest 6-pin to 28-pin devices.
Routers, industrial controllers, set-top boxes and TVs that did use x86 are now using ARM, MIPS, or the ST20 (basically a Transputer). ARM is growing there.
Microchip does 18F PIC and MIPS-core devices with USB and Ethernet.
Atmel now do ARM cores for Linux OS based devices.
This is laughable. Surely no-one in the Industry believes it, least of all Intel or AMD?
Mine's the one with NXP Cortex M family data sheets in the pocket.
I've read that a Cortex-M license brings in just under 25c. That probably means well under 25c for the bottom-end parts, and noticeably more for the latest M4. Those don't run Linux, but they are easier to write code for than the simpler parts. The code density is pretty close to the alternatives, and the power usage is usually well below that of the 8-bit parts.
Running Ubuntu, I didn't have to modify the Canonical-packaged GCC-based ARM toolchain to compile. (But I didn't need to for the AVR either.) I did end up writing my own linker definition file and many header files to match my preferences, but they now can be readily downloaded. Of course "readily" means extracting and installing myself, rather than the checkbox install for the compiler.
Colour me stupid, but isn't installing Windows on something that isn't obviously a computer, and is maybe hard to get at, a bad idea? Surely that would mean every time it blue-screens, cocks up an update or suffers bit rot, it means a service call. Perhaps that's the grand plan: to keep an army of Windows techs in alternative jobs.
You may have a point. There is a whole certified Microsoft-dependent ecosystem out there, looking ever more at risk as increasingly smart consumer electronics and professional equipment continue to prove that not many people routinely *need* Windows.
But without direct or indirect connectivity to the wider Interweb, where will these embedded Windows boxes get their Windows Updates and their anti-virus updates, and how will they ever run Windows Genuine DisAdvantage?
After all, MS and their partners have been telling us for years that these essential things must be done as dictated or the sky will fall in.
Actually, if you lock down both the hardware and software configuration (as you would in an embedded system) Windows is perfectly stable.
I've seen it used in both safety-critical applications (air traffic control, military radar control, an X-ray imaging system for use in keyhole surgery) and in remote infrastructure (as supervisors in wireless base stations). In the latter configurations the average 'tech' might not know anything about the OS. They just get told by email that a board in location X has failed; they drive out, pull out the module with the red light, put in a replacement and wait for the light to go green.
It's only when you throw in a bunch of cheap RAM from PC World, some dodgy drivers for some no-name hardware, and allow the user to install crapware from the web that Windows turns to shit.
That said, this kind of stuff is all very much at the top end of embedded systems in terms of performance. In terms of volume it is dwarfed by the number of ARM and myriad lesser embedded systems out there. As has been said elsewhere, any high end system will probably have a few low end micro-controllers running independent sub-systems.
The most surprising thing about this article is that they did not have the chutzpah to go the whole hog and extol the benefits of embedded Itanic, or the heating advantages of embedded x86 in cold environments. Having built refinery control systems (where the “internet of things” is a 30-year-old legacy), I'm surprised the “vision” seems quite so idiotic, but I guess they've considered and rejected:
1) Embedded power efficiency is not about “MIPS per watt” but “instructions per joule”; there is no point having an insulin system for diabetics that has uber-analytics but shuts down on long flights.
2) Embedded safety systems need to make crude but critical decisions (e.g. switch off the engine if the airbag activates). We might like a Formula 1-type system for our cars, but it won't replace critical systems because of testing complexity. If “smarts” is an addition to the control system, why does it need to be embedded (and not on a take-away tablet, as in F1)?
3) Programming for the physical world can use parallel algorithms; why would you want a few big cores when an array of 80 simple cores can do it faster (and 79 of them can sleep when not needed)?
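The energy point in (1) above is really about the whole duty cycle, not the instruction rate: a part that races through its work and then sleeps can beat a "more efficient" slow part. A back-of-the-envelope sketch, with entirely made-up numbers for two hypothetical chips:

```python
def mission_energy_mj(active_mw, sleep_uw, work_s, period_s):
    """Total energy (millijoules) per duty cycle: a burst of work, then sleep.
    active_mw: power while working, in milliwatts
    sleep_uw:  power while asleep, in microwatts
    """
    sleep_s = period_s - work_s
    return active_mw * work_s + (sleep_uw / 1000.0) * sleep_s

# Hypothetical parts doing the same job once per minute:
# a fast core that finishes in 50 ms, and a slow core that needs a full second.
fast = mission_energy_mj(active_mw=30.0, sleep_uw=2.0, work_s=0.05, period_s=60.0)
slow = mission_energy_mj(active_mw=5.0,  sleep_uw=2.0, work_s=1.0,  period_s=60.0)
print(f"fast: {fast:.2f} mJ/cycle, slow: {slow:.2f} mJ/cycle")
```

With these (invented) numbers the hungrier-but-faster core uses roughly a third of the energy per cycle, which is why joules per task, not peak power draw, is what keeps the insulin pump alive on a long flight.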
IMHO, “intelligent systems” is plain stupid market-speak. The smart appliance is an “active appliance” that does what it knows and escalates what it doesn't. The thing that is new is that an ARM SoC can run a full network stack and address a 4GB SD card, and GSM & GPS are now quite cheap.
Most embedded systems are cost-conscious to the max. Anything which adds even pennies to the unit cost is a non-starter.
Who is going to pay the Windows tax, or for x86, which adds dollars, not pennies, to the per-unit cost?
I, for one, will go to great lengths to avoid anything (especially a vehicle) with Microsoft software in it. I don't give a crap whether it's Linux or some commercial RTOS -- I just don't want MS (or Fruit) shite.
I work in a consultancy doing endless embedded microcontroller designs. We typically choose between something like an MSP430 from TI for the lowest power consumption, an ARM variant for decent performance, or a DSP like the Blackfin for high performance. Customers often have different processors which we are happy to work with, but they're not Intels. Intel are in old designs or used by people who don't really do embedded and stick a micro sized PC into their product.
By the way, many smart meters use MSP430s rather than PICs or ARMs.
The last thing any "intelligent systems" manufacturer wants to do is tie itself in to being supplied by Intel or AMD - the key strength of the ARM connected community is the ferocious competition among ARM partners for 'sockets' in consumer goods, where pennies (of profit) matter. The ARM product range offers a pathway from coin-cell-powered computing to chips which threaten Intel in the desktop and server space, so the conclusion that an increase in intelligence inevitably means a switch to x86 is absurd.
"might like a Formula-1 type system for our cars, but it won’t replace critical systems because of testing complexity.."
Just FYI, the latest T3-alike rag from the Institution of Engineering and Technology has a recruitment ad from a well-known F1 company. They want people familiar with DO-178, which is the standard for safety-critical software (including testing) used in the aerospace industry.
So maybe the F1 folks do take software safety seriously, which would be good.
It'd be interesting to hear the standards used in safety-critical control systems in mass market cars (random example: Toyota Prius), which in my book are really quite complex too.
I didn’t mean to suggest that F1 is not interested in safety, but a complex algorithm for brake-fade management wouldn’t cut it for regular driving, because 60 cornering scenarios means 60 crash-test scenarios, whereas ABS has only a few. The point I was trying to make is that you wouldn’t build an intelligent car; you’d build a USB or Wi-Fi interface and SD-card storage into the simple controller (à la Raspberry Pi) and let people download diagnostics to their smartphone/laptop, or upload data.
I can see a market for a breakdown app or smart metering for rental cars, but none for a powerful embedded in-car PC. If IDC think there’s a market for “intelligent systems” like a vending machine that refuses to sell candy to fat people who haven’t been to the gym, they’re dumb.
"""a new breed of intelligent devices will take over the market and drive the current fashionable terms du jour: Big Data and "the internet of things""""
Should we infer from this that all these embedded systems are going to be running SETI and the like because they are bored? CERN or JPL offloading data processing to Las Vegas slot machines? Sounds like the plot of a Douglas Adams story.
People who know more than I do have already pointed out how ARM is moving both down and up the embedded-systems value chain, with the M0 down at something like microwatts for real work and Tegra et al. already able to do heavy video processing. If El Reg is going to cover these reports, some kind of critical analysis of the press release would be appreciated.
Far from rebelling against smart systems, we won’t buy anything that isn’t active. The key verb is “active”, not “smart” or “intelligent”. One day very soon all heating boilers will text you about problems, and active meters will report usage (and help utilities find leaks). The key is simple decisions, plus active notification of everything else.
An “active” fridge might work out (via RFID) that the milk has gone off and notify you; a “smart” fridge might order more milk; an “intelligent” fridge might stop ordering more and more milk during your vacation; but in the “internet world” your schedule is in the cloud, not the fridge. If you need an avatar, you’ll have only one, in the cloud.
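The active/smart/intelligent distinction above can be written down as a tiny decision rule. This is a purely hypothetical sketch (the function, products and dates are all invented): the appliance decides what it can from local knowledge and escalates everything else.

```python
from datetime import date
from typing import Optional

def fridge_action(expiry: date, today: date, owner_on_vacation: Optional[bool]):
    """An 'active' appliance: act on what it knows locally, escalate the rest.
    owner_on_vacation is None when the fridge has no access to the cloud schedule.
    """
    if today <= expiry:
        return "ok"                                   # milk is fine, do nothing
    if owner_on_vacation is None:
        return "notify owner: milk has gone off"      # active: escalate to a human
    if owner_on_vacation:
        return "hold order"                           # 'intelligent': needs the cloud schedule
    return "order milk"                               # 'smart': reorder automatically

# An active fridge with no cloud connection just tells you:
print(fridge_action(date(2012, 9, 1), date(2012, 9, 5), owner_on_vacation=None))
# → notify owner: milk has gone off
```

Note that the only step requiring anything beyond a cheap microcontroller and a network stack is the vacation check, and that knowledge lives in the cloud, not the fridge, exactly as argued above.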