
This ain't no union shop
I predict a firing.
.NET sucks, QT sucks, GTK+ sucks, Java Swing/FX sucks... it all sucks! All these frameworks are either a bloody mess or so outdated, they make HTML5 look like the best option for developing desktop applications... I can't even tell if that is good or bad, I'm so disillusioned with it all.
In any case, it is beyond me why anyone would take the time to get their heads around any new development framework MS comes up with... think of all the man-hours wasted getting up to speed with Windows Phone app development... what a worthwhile effort that was!
How is Intel's board an ARM killer if it is taking aim at an AVR-based Arduino?
I also don't see what any of this has to do with IoT given neither board has any sort of networking capability as standard.
If the ARM-based C.H.I.P. delivers what is promised, then that'll be vastly better suited to IoT applications (built-in Wi-Fi and Bluetooth 4.0) and will supposedly only cost $9. Even the BBC micro:bit - another ARM-based board - is more relevant in the IoT space than the Arduino, thanks to its built-in Bluetooth.
I don't have anything against Intel - Very happy with all my i7 machines, but Intel blew their chances in the embedded market when they sold off the XScale range - I fully expect this Quark nonsense to go the way of the i960.
I think it is pretty likely that the development tools will involve OpenCL. Altera already provides an OpenCL SDK for its FPGAs.
I don't think time-slicing an FPGA is a particularly efficient way of sharing its capabilities across processes - Way too much state to save/restore during context switches. Given that FPGA fabric tends to be highly homogeneous, I don't think it is far-fetched to imagine a scheme where processes could request blocks of gates from the OS, not dissimilar to the way they request RAM (i.e. sbrk/mmap) albeit through some sort of driver interface.
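To make that concrete, here is a purely hypothetical sketch of what such a driver interface might look like - the device node, ioctl numbers and struct below are all made up for illustration; only open/ioctl/close themselves are real POSIX calls:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    // Hypothetical request: ask the driver for a block of FPGA fabric,
    // much as mmap asks the kernel for a block of pages.
    struct fpga_region {
        unsigned int logic_cells;  // size of the block, in logic elements
        unsigned int region_id;    // handle filled in by the imaginary driver
    };

    // Made-up ioctl numbers - purely illustrative.
    #define FPGA_ALLOC_REGION 0xF001
    #define FPGA_FREE_REGION  0xF002

    int main()
    {
        int fd = open("/dev/fpga0", O_RDWR);  // hypothetical device node
        if (fd < 0)
            return 1;

        fpga_region r = { 4096, 0 };
        if (ioctl(fd, FPGA_ALLOC_REGION, &r) == 0) {
            // ... load a partial bitstream into region r.region_id, use it ...
            ioctl(fd, FPGA_FREE_REGION, &r.region_id);
        }
        close(fd);
        return 0;
    }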
Did 100Mbit ethernet or 54Mbit wi-fi turn out to be adequate? No.
Are consumers moving away from shifting data about in the LAN to shifting it in and out of the cloud? Yes.
No, consumers aren't likely to utilize a 1Gbit connection 100% of the time, but when they upload or download something, the preference is likely to be: the quicker the better.
The basis for your argument seems to be about what people need. But the Internet isn't about what people need - Unlikely as it might sound, life can exist without the Internet! It is about what people want.
Sure, you can have wider registers for SIMD operations (e.g. NEON), but you've got to load and store them somehow. A 64-bit data path throughout your architecture means you can move more data per operation, which will benefit SIMD performance, so it is relevant. Incidentally, one of the major features of the ARMv8 architecture is enhanced SIMD support, making it standard and increasing the number of available 128-bit registers.
As mobile apps are generally audio/graphics DSP intensive, an increase in throughput is likely to be more useful to developers than the ability to address a vast and sparsely populated address space.
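As a minimal sketch of what that buys you (standard ARMv8 NEON intrinsics, nothing exotic): each vld1q/vst1q below moves a full 128 bits per operation, and one vaddq performs sixteen byte-wide additions at once.

    #include <arm_neon.h>
    #include <stddef.h>
    #include <stdint.h>

    // Add two byte arrays, 16 elements per iteration, in 128-bit NEON registers.
    void add_u8(const uint8_t *a, const uint8_t *b, uint8_t *out, size_t n)
    {
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            uint8x16_t va = vld1q_u8(a + i);      // one 128-bit load
            uint8x16_t vb = vld1q_u8(b + i);      // one 128-bit load
            vst1q_u8(out + i, vaddq_u8(va, vb));  // 16 adds, one 128-bit store
        }
        for (; i < n; ++i)                        // scalar tail
            out[i] = (uint8_t)(a[i] + b[i]);
    }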
Whether a language is a scripting language or not is related to the level of operations programs written in the language are intended to perform - It has nothing to do with the paradigm(s) a language supports. A scripting language can be procedural, object-oriented and/or functional just as much as a systems language can be.
Indeed, Javascript is more than a purely functional language, but that really stems from Netscape's insistence that Eich create something that looked more like Java (because that's what the cool kids were using) than Scheme, which is what he had originally wanted to implement - a heritage that is more than apparent if you use the language functionally.
The point I was originally trying to make was that the primary reason Javascript seems to receive so much bad press is that programmers come to it expecting to be able to structure their code into hierarchies of classes, the way they're used to. They find they have a hard time getting Javascript to work that way, because that's not how it was really intended to be used, so they dismiss it as crap. They fail to discover that, by changing their approach to suit Javascript instead of the other way around, it is actually quite a well thought-out and pleasant language to work in.
Closures are its main bloody asset! It's a functional language, and the moment you treat it as such instead of trying to bend it into something it isn't (like Java), it actually becomes quite pleasant to work with.
If Eich had got his way at Netscape, Javascript would have looked more like Scheme/LISP than a C descendant.
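For what it's worth, the same structuring trick exists in any language with closures - here is a minimal C++11 sketch of the pattern Javascript encourages: state encapsulated in a closure rather than a class.

    #include <functional>
    #include <iostream>

    // A counter built from a closure rather than a class: the returned
    // callable captures and owns its own copy of the count.
    std::function<int()> make_counter()
    {
        int count = 0;
        return [count]() mutable { return ++count; };
    }

    int main()
    {
        auto next = make_counter();
        std::cout << next() << "\n";  // 1
        std::cout << next() << "\n";  // 2
        std::cout << next() << "\n";  // 3
    }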
Sounds a bit like what ARM set out to achieve with their Thumb/Thumb2 instruction sets, but taking things one step further with proper data compression rather than just shrinking the instruction format.
The article doesn't mention what architecture this is targeting, although as it's Intel I'm guessing x86 because I can't imagine them getting back into smaller embedded stuff having dumped MCS-51, i960 and XScale a long time ago. If that's the case, it makes sense for the compression/decompression to be on chip so that code compatibility can be maintained. If compression had to be done at compile time, compatibility would go out of the window.
As for 'intelligent drapes, coffee machines, toothbrushes, baby monitors, stereos, alarm clocks, supermarket shelves, air-quality sensors, and more', surely that has been ARM's bread and butter since almost forever? It's hardly new for microcontrollers to be embedded in that kind of stuff and you don't need vast amounts of processing power or memory to achieve Internet connectivity.
Forge:
verb (used with object)
1. to form by heating and hammering; beat into shape.
2. to form or make, especially by concentrated effort: to forge a friendship through mutual trust.
3. to imitate (handwriting, a signature, etc.) fraudulently; fabricate a forgery.
(source: Dictionary.com)
Have you ever tried writing any significant amount of code in C using GTK+'s bloated piece of crap GObject system? I shudder to think how many hours of my life I've wasted manually specifying vtables and handling GObject properties. Glad I jumped that ship 20 months ago.
Yes, C is a vastly easier language to understand compared to C++ and can even be a more productive language in some circumstances... But not when you use it to manually implement everything C++ gives you out of the box minus the syntactic sugar.
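To illustrate: the dynamic dispatch below is essentially what GObject gives you, except that the compiler declares, populates and wires up the vtable itself - no type registration, no function pointers to fill in by hand.

    #include <iostream>

    struct Shape {
        virtual double area() const = 0;  // vtable entry built by the compiler
        virtual ~Shape() = default;
    };

    struct Circle : Shape {
        explicit Circle(double radius) : r(radius) {}
        double area() const override { return 3.14159 * r * r; }
        double r;
    };

    int main()
    {
        Circle c(2.0);
        const Shape &s = c;
        std::cout << s.area() << "\n";  // dispatched via the implicit vtable
    }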
Well of course it was overpriced - That's how retail works! You think I don't know that a camera I pay £400 for costs bugger all for Samsung to manufacture?
There is barely anything inside a tablet computer. In the quantities in which Samsung makes stuff, the mark-up they put on wholesale would cover the production cost of a 7" tablet several times over.
Televisions, phones, tablets and cameras are all essentially the same thing in different form factors these days, and are becoming better at inter-operating. Samsung giving away tablets with purchases of other Samsung gear both makes it more likely that a consumer would choose a Samsung product over some other make - especially at Christmas - and means that a consumer with two inter-operating Samsung products (e.g. a tablet and camera) is likely to buy another Samsung product (e.g. a television) later on, or maybe even at the same time.
How do you make teaching attractive to the talented? Those who are talented in a particular field want to get on with exploring it and pushing boundaries, because they can. Would money really be enough to compensate people of that mindset for the dying inside that they would experience, regurgitating the same old shit, year in year out?
There always seems to have been this misconception amongst embedded systems programmers (of which I am one, although seemingly more enlightened) that C++ is somehow inherently more resource-hungry than C. This simply is not true. There are actually very few language features that impose memory or execution overhead (basically just exceptions and RTTI), and even those only do so if used.
The only time raw pointers have any place in C++ is when you're wrapping up legacy C code. In some ways, I wish C++ had never attempted to be backwards compatible with C - So many programmers treat C++ as C with classes (aka structs with functions), which doesn't even come close to utilising the language properly.
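As a sketch of that 'wrapping legacy C' case: a std::unique_ptr with a custom deleter keeps the raw FILE* contained, and the file is closed on every path out of the function without a single explicit fclose at the call site.

    #include <cstdio>
    #include <memory>

    // Deleter for the legacy C resource; unique_ptr invokes it automatically
    // (and only when the pointer is non-null).
    struct FileCloser {
        void operator()(std::FILE *f) const { std::fclose(f); }
    };
    using FilePtr = std::unique_ptr<std::FILE, FileCloser>;

    FilePtr open_file(const char *path)
    {
        return FilePtr(std::fopen(path, "r"));
    }

    int main()
    {
        FilePtr f = open_file("example.txt");  // illustrative path
        if (!f)
            return 1;           // nothing leaked; the pointer is null
        char buf[256];
        if (std::fgets(buf, sizeof buf, f.get()))
            std::fputs(buf, stdout);
        return 0;               // fclose runs here, automatically
    }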
It also probably doesn't help that there are very few well-written C++ libraries/frameworks out there to at least serve as an example of how C++ should be done properly. The STL is fantastic these days, especially with all that C++11 has added. Boost libraries aren't bad, but even having been a C++ programmer for 12 years, I sometimes find the complexity rather irritating. The problem is that, for the most part, the STL and Boost are libraries of generic classes and functions. Generics are absolutely right for containers, algorithms and such, but do nothing to serve as examples of how framework-level or end-application code should be structured. That isn't a criticism of the STL or Boost, by the way - Certainly, as far as the STL is concerned, it is entirely appropriate that it be a library of generics and not much more, because C++ is a systems language and its standard library must minimize its dependencies on any particular underlying platform.
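A few lines of C++11-era STL make the point - containers, algorithms and lambdas doing the work that 'C with classes' code tends to hand-roll:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v{5, 3, 8, 1, 9, 2};
        std::sort(v.begin(), v.end());  // now 1 2 3 5 8 9

        // First element greater than 4, via an algorithm and a lambda.
        auto it = std::find_if(v.begin(), v.end(),
                               [](int x) { return x > 4; });
        if (it != v.end())
            std::cout << "first > 4: " << *it << "\n";
    }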
At least with languages like Java, Ruby, Python etc. there are plenty of libraries (for better or worse) which beginners can both use and draw inspiration from when designing their own code. C++ is always going to be bewildering to the beginner without good real-world examples to illustrate the language. Unfortunately, some of the more prevalent C++ frameworks, such as QT and wxWidgets, treat C++ like it's still 1998 (arguably for legitimate reasons) and are, I think, examples of how things should NOT be done in modern C++.
Intel has already been there and got the t-shirt. They inherited StrongARM from DEC, then went on to develop their XScale implementation of the ARMv5 architecture before flogging it to Marvell so that they could focus fully on x86. Can't see them doing a u-turn any time soon (even though they still hold some sort of ARM license apparently).