* Posts by thames

848 posts • joined 4 Sep 2014


India reveals plans to make electronics manufacturing its top industry


Following in China's footsteps

Manufacturers, including Chinese companies, have been setting up in India for a few years now. Labour is getting more expensive in China as standards of living rise, so industries that depend on paying rock-bottom wages have been moving to places like Vietnam and India.

This is exactly what both the Chinese government and economists around the world have expected. China are moving up the value chain and producing goods with a greater technology and design component, rather than just being cheap.

This in fact is part of the reason behind the current US unhappiness with China. Chinese companies are beginning to be direct competitors to US companies, or even, as in the case of Huawei, to surpass US companies in technology development, meaning that China are no longer just providers of cheap labour.

Now that India have abandoned socialism they have begun to follow in China's economic footsteps and have been seeing faster economic growth as a result. The same sort of industries that moved from Western countries to China will move from China to India (and Vietnam and elsewhere) in pursuit of the lowest wages. Meanwhile in China they are producing automobiles and aircraft, and Chinese tech providers are among the world's largest.

India meanwhile have no more intention than China of being anyone's lapdog and also see themselves as a future global power, not as cannon fodder to be used to serve someone else's agenda. Economic predictions suggest that China will surpass the US economically within the next couple of decades, and India will do the same towards the end of the century, pushing the US into third place in the world. The two global superpowers will then be China and India, and the centre of global power will have shifted decisively to Asia, perhaps forever.

The US will then find out what life is like when you drop from being number one in the world to being an also-ran, something that Britain experienced in the middle of the 20th century. I'm not sure the Americans will be able to accept that with as much equanimity as the British did.

This'll make you feel old: Uni compsci favourite Pascal hits the big five-oh this year


Re: Modula-2

With Modula-2 import statements went along the following lines:

FROM InOut IMPORT ReadCard, WriteCard, WriteString, WriteLn;

You imported everything you wanted to use explicitly. If you find that you have "too many" imports, then there is a problem with how your code is organised. Modula-2 programs are supposed to be broken up into modules, that's the whole point of the language. The greatly improved module system was one of its big advances over Pascal. The explicit imports were also an improvement over C, as it greatly reduced the chances of naming collisions. You could look at the import list and see exactly where each imported reference came from.

I'll put it another way, think of Modula-2 modules as being like objects in newer languages. Wirth was trying to solve the same problems as object oriented programming, but in a different way, and it was not obvious at that time that objects were the future of language design. Modules were not supposed to be huge with loads of imports any more than modern objects are supposed to be huge. If they were/are, your program design was/is wrong.
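For readers who never used Modula-2, Python's explicit imports are a loose analogy (this is Python, not Modula-2, but the principle is the same):

```python
# Python's from-import is a close analogue of Modula-2's
# FROM InOut IMPORT ReadCard, WriteCard, ...; every name you use must
# be imported explicitly, so the import list documents exactly where
# each reference came from and naming collisions are easy to avoid.
from math import sqrt, pi

print(sqrt(16.0))    # 4.0
print(round(pi, 2))  # 3.14
```

As in Modula-2, a module that needs "too many" of these lines is usually a sign the code should be split into smaller modules.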

Comparing Modula-2 to C++ is pointless, as Modula-2 came well before C++ and much experience had been gained in the meantime.

Cardinal numbers (unsigned integers) were also greatly welcomed at the time, as most people programming with Modula-2 were doing so on 16 bit computers and in many cases were struggling to get enough positive range out of a 16 bit integer to do what they wanted.
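The extra headroom is easy to demonstrate. Here is a small Python sketch (using the standard `struct` module as a stand-in for 16 bit machine words) showing the same 16 bits read both ways:

```python
import struct

# A 16 bit signed integer tops out at 32767, while the same 16 bits
# interpreted as unsigned (Modula-2's CARDINAL) reach 65535.
raw = struct.pack("<H", 40000)            # 40000 fits in an unsigned word...
as_unsigned = struct.unpack("<H", raw)[0]
as_signed = struct.unpack("<h", raw)[0]   # ...but overflows a signed one

print(as_unsigned)  # 40000
print(as_signed)    # -25536 (the same bits misread as signed)
```

On a 16 bit machine with no larger integer type available, that near-doubling of positive range mattered a great deal.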

If your loop termination condition contained an error, then a bug is a bug and it's the programmer's error. I personally would much rather have a bug that failed in a way that was impossible to miss than to have a subtle off by one error that produced slightly wrong results.
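The two failure modes contrasted above can be sketched in a few lines of Python (the data and function names are invented for illustration):

```python
data = [3, 1, 4, 1, 5]   # the correct total is 14

def total_loud(xs):
    s = 0
    for i in range(len(xs) + 1):   # off-by-one -> IndexError, impossible to miss
        s += xs[i]
    return s

def total_silent(xs):
    s = 0
    for i in range(len(xs) - 1):   # off-by-one -> quietly drops the last item
        s += xs[i]
    return s

try:
    total_loud(data)
except IndexError:
    print("loud failure: crashed immediately")

print(total_silent(data))  # 9 instead of 14 - easy to overlook
```

The loud failure gets found in the first test run; the silent one can ship.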

The thing that most experienced Modula-2 programmers actually complained about can be seen in the import example I used above: the I/O routines. They didn't have the degree of flexibility that Pascal's I/O had, but instead were very simple functions that did one thing with one type. As a result you needed more functions to do mixed I/O than the equivalent Pascal program. Of course those I/O functions were really fast because they did just one thing and one thing only, but drawing forms on screens or printers took more work than I would have liked.


Re: OK

I used UCSD Pascal, Turbo Pascal, and Modula-2. Of all the compilers and IDEs for any language from the MS-DOS era, the one that impressed me the most was TopSpeed Modula-2. It was way ahead of what Microsoft or Borland were offering. They later offered Pascal and C compilers as well. TopSpeed was founded by the original Danish creators of Turbo Pascal after a business falling-out with their US-based marketing partner (Philippe Kahn); they split from the company, taking their next-gen IDE and compiler with them, while Kahn (Borland) kept the Turbo product line.

One of the interesting things about the Modula-2 language was how it addressed the same issues as Object Oriented Programming (then a new concept) was trying to address, but in a different way. Modula-2 used Opaque Types and modules instead of objects.

Overall I think that objects were the better solution, but it's interesting to consider what might have been had opaque types caught on rather than objects.

UCSD Pascal is another interesting technology, in that it used an architecture independent VM long before Java was even a gleam in its creator's eye, and as you said Western Digital even built a CPU specifically to run it. There were also Basic, Fortran, and C compilers which produced code which ran on the same VM.

It also allowed for native code compilation under programmer control through the use of compiler directives ($N+ and $N-, if I recall). The VM had virtual memory management built in, which wasn't a feature that the MS-DOS OS (where I mainly used it) itself could give you.

What killed UCSD Pascal was it was licensed to a company who had exclusive rights to all newer versions and they ran it into the ground rather than continuing to develop it to keep up with progress in hardware.

This is also another "what might have been" event. If it had been open sourced rather than licensed to a single company, it might have evolved from what was fundamentally an 8 bit architecture (with split code and data pools for 16 bit machines) into the 16 and 32 bit era and still be with us today, supporting still more languages.


Re: pascal was simply useless.

Most versions of Pascal could do type casts just as well as C could. The problem was that this was one of the features in Pascal that wasn't standardised. If you had a problem then it was with the particular version of Pascal you were using.

The same is true of goto. Common versions of Pascal supported goto a label, but again the implementation was not standardised, and in some cases may need a compiler directive to enable it.

I've done binary communications programming in Pascal, and in the instances I am familiar with the basic algorithms can be translated from C code pretty much as is, if you know Pascal well enough.

The main problem with Pascal was there was a base language which was standardised and then a series of extensions to support lots of things involving OS and hardware interfacing. Professional grade compilers then were proprietary, and compiler vendors were more interested in promoting vendor lock-in than portability.

You're not getting Huawei that easily: Canadian judge rules CFO's extradition proceedings to US can continue


Re: China really shouldn’t have

Riiight. Let's hand her over to a regime whose head has openly admitted to wanting to use her as a bargaining chip in trade negotiations.

Canada just wants the whole problem to go away instead of being used as a pawn in the global dominance game between great powers.

If you miss the happier times of the 2000s, just look up today's SCADA gear which still has Stuxnet-style holes


Re: Missing the point, maybe

I don't have any arguments with the general thrust of the article itself. Bad security is probably worse than no security, as it gives the illusion of security and may lead people to assume they don't need to take further measures. We need more articles of this sort to raise awareness. I am just saying that this will be far from the worst problem present, and I gave an example of how poor to non-existent password security really is in many cases.

Security in the industrial automation field tends to range from the bad to the farcical. The main thing which probably prevents more problems from being reported is the relative obscurity of the field and the high cost of the proprietary technology which discourages researchers who don't have the niche background or the budget to probe into it. For example the Schneider M221 is at the extreme budget end of their product line, but things don't get any better from the security perspective with the larger and more expensive kit.

I originally came from the industrial automation field and started reading The Register years ago in order to get a better understanding of what was going on in the IT sector, as I realised that what the industrial automation field needs is a greater injection of IT knowledge and technology.

If I communicated my point poorly, then I apologise. What I am trying to say is that on the scale of IA security problems this is at the lower end, and there are far worse things to be found and proven. By all means keep publishing stories such as this, as the steady drip, drip, drip of bad news is the only thing that will spur people and companies into action. The only problem is that I think not enough people in the IA field are reading publications like The Register for the message to get through to the people it needs to. An easier way of finding past stories connected with industrial automation might be handy, as at present they are simply lumped in with all the other security stories. I don't think your site is set up for stories to have multiple tags, however.

In my opinion, the field needs an injection of security technology from the IT industry. The problem is that the major vendors are so focused on vendor lock-in that they continually re-invent the wheel badly, particularly when it comes to security. From their perspective inviting "the wrong type" of IT industry tech risks watering down vendor lock-in such that they can't extract the maximum amount of money possible from customers.

SCADA systems are a subset of industrial automation, much like web servers are a subset of the wider IT industry. They tend to get more attention in security news lately because of their use in critical infrastructure such as electric power, petroleum, water, and the like. However most people working in the industrial automation field can go through an entire career without ever laying eyes on a SCADA system.

SCADA systems also tend to get more attention because they use (badly, usually) more IT industry technology such as MS Windows, databases, and the like. In most cases they got an injection of IT tech in the 1990s to replace their previous proprietary platforms but haven't moved on much since then other than to maintain compatibility, and more recent ideas have largely passed them by.

PLCs such as the Schneider M221 may be used in a complete system in conjunction with a SCADA system (e.g. to control a pump the SCADA system is monitoring), but in most general manufacturing they're stand alone. In some ways this is a blessing because they never get networked, but just work quietly running stand alone machines in a factory somewhere. It also means though that they are probably a rich field for finding security vulnerabilities because not as many people have been looking for them, and they are starting to get networked more now for a variety of reasons.

So to sum up, I'm sorry if my earlier post came across differently than I intended. My real point is that in the grand scheme of things this example is just barely cracking open the lid on a very large can of worms.


Not a big deal, as industrial security is almost non-existent anyway.

SoMachine is the IDE software used to write, download, and debug programs for Schneider PLCs. SCADA systems are something else altogether.

To get to the substance of the article though, it doesn't look like a big deal for several reasons. One is that they essentially had to compromise the Windows PC being used to run the SoMachine programming software. Once you do that, all bets are off and you can probably just grab the passwords anyway. All of the industrial control security problems that I can think of off the top of my head are really Windows security problems at their root.

The other reason is that use of passwords in PLCs is very rare anyway. They're generally left blank, as using them defeats the purpose of using a PLC in the first place, which is to allow your maintenance crew to modify the program at need.

The few places where a password does see occasional use are when an OEM uses a PLC as part of a "black box" system which contains some supposedly secret process knowledge, or very occasionally for certified safety systems. The latter generally run on specialised hardware however, not bog standard PLCs.

In most PLCs that have passwords, the password is typically stored as plain text in the PLC. The programming software then says "may I have the password please" and the PLC hands it over. The programming software then compares that to whatever the user typed in and if they match it says "OK, go ahead and use me". If you write your own software and know the protocol, then you can of course just bypass that charade.
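The charade described above can be sketched in a few lines of Python. Everything here is invented for illustration (function names, the stored password, the wire format); real PLC protocols differ, but the logic is the same: the device hands over its stored password and the *client* does the comparison.

```python
def plc_read_password():
    """Stand-in for the PLC obligingly returning its stored password
    over the wire, in plain text, to anyone who asks."""
    return "secret123"

def vendor_ide_login(user_input):
    # The official programming software enforces the check client-side...
    return user_input == plc_read_password()

def attacker_login(_ignored):
    # ...so third-party software that speaks the protocol can simply
    # skip the comparison (or log the password for later).
    plc_read_password()
    return True

print(vendor_ide_login("wrong guess"))  # False
print(attacker_login("wrong guess"))    # True
```

Since the check only ever runs in the client, there is no security boundary at all on the device side.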

Oh, and if you're on the Modbus/TCP network, you can talk to anything, including controlling the I/O directly. There is no authentication.
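To illustrate how little is required: here is a sketch of a Modbus/TCP "write single coil" request built with nothing but the Python standard library. The frame layout (MBAP header plus function code 5) is from the published Modbus specification; the transaction, unit, and coil numbers are made up. Note there is no authentication field anywhere in the frame.

```python
import struct

def write_coil_frame(transaction_id, unit_id, coil_addr, on):
    """Build a Modbus/TCP request to force a single coil (output) on or off."""
    value = 0xFF00 if on else 0x0000
    # PDU: function code 5 (write single coil), coil address, value
    pdu = struct.pack(">BHH", 0x05, coil_addr, value)
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = write_coil_frame(1, 1, 0x0010, True)
print(frame.hex())  # 00010000000601050010ff00
```

Anyone who can open a TCP connection to port 502 on the device can send this (`sock.sendall(frame)`) and the output turns on. That is the entire "security model".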

In my opinion, trying to bake security into industrial controls is pointless. Doing it right is hard enough for the IT industry, so expecting automation system designers to get it right is hopeless. I think it's better to adapt IT industry technology to industrial applications as a security layer(s) than expecting the industrial sector to implement and maintain their own security systems correctly.

The rumor that just won't die: Apple to keep Intel at Arm's length in 2021 with launch of 'A14-powered laptops'


What proportion of Apple's user base are still dual-booting Windows? It's something that I read about as being possible but rarely hear about people actually making use of these days. I doubt that Apple are going to cripple the future development of their product line by maintaining a legacy use case that most of their users don't care about.

There's also nothing preventing Apple from keeping a few legacy x86 versions of their laptop line around for those customers who have a genuine use case for them, at least for a while. I say 'x86' because these could just as easily be AMD as Intel.


I don't see why Apple would bother with fat binaries. The concept was made obsolete years ago by the Internet. It only made sense when you bought your software on CD in boxes.

Linux manages to support multiple chip architectures without fat binaries. The OS has a standard installer and it knows what chip architecture it's running on. It just fetches the correct binary for that architecture from the correct repo. I run Ubuntu on x86 and Ubuntu on ARM without fat binaries and it's completely transparent to me as a user. I'm sure that Apple is capable of copying that idea if necessary.

As for performance, if you care about per core CPU performance above all else you're not likely to be running an Apple laptop anyway. I don't seriously see it as an issue for the bulk of their target market. Graphics performance is what will matter most, and that is down to the GPU.

For overall performance, they will rely on having a large number of cores for peak loads, and throttling down to a few lower powered cores whenever possible. Have a look at all the work that Apple has spent the past few years putting into multi-core process dispatching. They wouldn't have been doing that work if they didn't intend to make use of it.

If you're writing code in Python, JavaScript, Java and PHP, relax. The hot trendy languages are still miles behind, this survey says


Technology changes, the market changes, the economics change, and software tools need to change along with them.

In the end, decisions have to make sense from a business perspective so you have to balance hardware costs with development costs and time to market. As more and more computing devices have come into use in more and more places, the number of niches which need to balance those factors in different ways has also increased. The result has been newer languages which fit those application niches better.

There has also been a general trend away from the proprietary languages or language dialects of a few decades ago to open source languages. I'm sure you remember for example the proprietary 4GL languages which were supposed to, if you believed the salesmen, take over the market, but which faded from the scene along with the companies that owned them. The proprietary languages have been nearly entirely replaced by open source alternatives as the latter have nearly frictionless adoption paths which lead directly to the people who use them.


With respect to the TIOBE index, they use a different methodology drawing from a large number of search engines. As an example of how this can make a difference, if you look at TIOBE's historical graph you will see a huge shift back in 2004 when they made a major change in how they determined the rankings. You can see that over time they are not necessarily even consistent with themselves.

Redmonk on the other hand use a much simpler method with only two data sources. However, their methodology does try to distinguish between languages that are talked about versus languages that people are actively writing code in.

TIOBE sell access to their raw data set, so they have an incentive to make it as big and complex as possible.

Neither method is really "correct" in the sense of counting how much code is being written in which language, but they do provide a general idea of which languages are all hype versus which ones are seeing actual use, and they give an idea of overall trends.

On either list, once you get out of the top 10 you are looking at languages which are not seeing widespread current use outside of niche applications or else are on their way out.


Re: Failure wins?

What these sorts of lists are intended to do is sort out overall trends rather than fine gradations. If you see that language 'x' is backed by a big promotional budget from some company but isn't cracking the top 20 after 5 years, then you know it is seeing very little real-world adoption outside of the original backer.

Once you get out of the top 10 in any of these lists you are down into a very statistically noisy area, as you are talking about low single percentage point adoption rates or less. If you want to be ahead of the adoption curve for new technologies (e.g. you're looking to get in on the ground floor for the next set of lucrative consulting gigs with trendy new technologies) then you need to look at what might drive adoption still higher rather than just the current ranking level.

For example the Redmonk post itself suggests that Go seems to have reached its natural ceiling in the mid teens (so not that relevant) with nothing to drive it higher at this time. On the other hand, Dart is just below the top 20, but has been rising rapidly based on the new Flutter UI toolkit, and so might be a language to keep an eye on as an alternative to something like Electron.


Java tied for #2

Larry over at Oracle might really want to think about how being tied for #2 after having been #2 for so long is generally followed eventually by being #3 and then #4, etc. If he wants "his" language to remain relevant in future, he has to find ways of building partnerships with the parts of the IT industry that are actively growing, such as mobile.

Or perhaps he doesn't care, and just looks at Java as being another "legacy" asset to be milked for cash ruthlessly before being discarded and replaced by some other new acquisition.

I've drawn my own conclusions based on his actions and will plan accordingly.


Re: Fuchs-ya

That will be the amount of RAM needed to run the compiler on the host, not to run the resulting Fuchsia operating system. Most of that will likely be taken up by tables of optimisation data which must be kept live during the build.

This is not unusual for large software projects which are compiled from source.

Famed Apple analyst chances his Arm-based Macs that Apple kit will land next year


Re: Where are the benchmarks?

Common math libraries such as Numpy (used with Python) are available on ARM as well as x86. ARM also have an ARM specific SIMD math library.
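The portability point is easy to see in practice: the same vectorised NumPy code runs unchanged on x86 and ARM, with the library dispatching to whatever SIMD units the CPU actually provides.

```python
import numpy as np

# Identical source on either architecture; NumPy picks the SIMD
# implementation for the host CPU at runtime.
a = np.arange(8, dtype=np.float64)   # [0.0, 1.0, ..., 7.0]
b = np.full(8, 2.0)

print((a * b).sum())  # 56.0
```

From the application programmer's point of view the architecture change is invisible.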

If you use GCC, then using SIMD vector code via built-ins (pseudo-functions which patch in SIMD instructions into C code) is pretty straightforward and actually much easier and better documented with ARM than it is with x86. If you can write GCC SIMD on x86, then doing it on ARM is a breeze in comparison. The main thing is to have a good set of unit tests to automate the testing process on multiple architectures and that's something you really ought to have anyway.

If you are using LLVM, then the SIMD situation is not as good, as their built-in support is very limited. Auto-vectorisation (letting the compiler try to do it itself) on LLVM can help to only a very limited degree. It can work on very simple things, but often with SIMD you need to use a different algorithm than you would with generic C code and there's really no obvious way to express that in C.

If Apple suddenly start making major commits to LLVM relating to ARM support to fill in some of these gaps in compiler support (as compared to GCC), then it might indicate they have something up their sleeves with respect to future hardware releases.


Re: Where are the benchmarks?

Thanks, I just had a deeper look into it and it turns out that in 32 bit mode the Pi 3 is ARMv7, but in 64 bit mode it's ARMv8. Standard Raspbian OS is 32 bit only, so that apparently limits both Pi3 and Pi4 to ARMv7. I suspect that the reason for this is so they can offer a single image to cover all currently supported versions of Pi (which include the 32 bit only Pi2) instead of offering two different versions of Raspbian. Most of their user base probably won't care one way or the other.

I had just relied on looking at /proc/cpuinfo to find out what ARM version my Pi had and hadn't dug further. I still plan to buy a Pi4 for testing, but will probably put a 64 bit version of Debian or Ubuntu on it to get the best performance and improved SIMD support.

I found a blog post where someone ran both 32 and 64 bit benchmarks on a Pi3 (using Debian for the 64 bit) and he got very significant performance improvements by using 64 bit on the same hardware. Even without SIMD, 32 versus 64 bit makes a very significant difference on x86 as well, according to my testing, so that shouldn't be too surprising when it comes to ARM as well.

As Apple would undoubtedly run any hypothetical ARM laptop in 64 bit mode, that needs to go into the ARM versus x86 comparison discussion as well. Start to add all these factors up and an ARM CPU starts to look quite credible from a performance standpoint.

Thanks again, I'll be taking this into account in my future plans.


Re: Where are the benchmarks?

I've been doing a few projects in 'C' in which the tests include running benchmarks on multiple platforms. One of them is a Raspberry Pi 3.

Using a wide mix of mathematical operations on arrays, on average a Raspberry Pi 3 comes in at close to 20% of the performance of a mid-range desktop Intel-compatible CPU running at 1.8 GHz. That's an average though, and isn't normalised according to how frequently those instructions would be encountered in a typical application.

On an individual basis they can vary between the RPi 3 actually being faster than the Intel CPU to only 10% of its speed. A lot depends on the individual CPU and whether a SIMD operation is available for that task.

A Raspberry Pi 3 is ARMv7, which only has 64 bit SIMD vectors, while the Intel CPU has 128 bit vectors (256 bit is theoretically available, but the Intel architecture is such a mess of conflicting variants that it's not practical to use it on a large scale). An ARMv8, such as is found in a Raspberry Pi 4, also has 128 bit SIMD vectors. I plan on getting my hands on a Raspberry Pi 4 to conduct more testing, but essentially it should nearly double the performance of many operations just based on having twice the SIMD width. The faster clock rate will help as well. Apple's hypothetical ARM chip, whatever that turns out to be, will I suspect be even faster than a Raspberry Pi 4 and will likely have a faster memory interface as well.
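For anyone wanting to reproduce this kind of cross-platform comparison, a hypothetical sketch of the approach using the standard library looks like this (the workload and sizes are invented for illustration; my actual tests use C):

```python
import timeit

# Run the identical script on each platform and compare per-pass times.
# Taking the minimum of several repeats filters out scheduler noise.
setup = "data = list(range(100_000))"
stmt = "[x * x for x in data]"

per_pass = min(timeit.repeat(stmt, setup, number=10, repeat=3)) / 10
print(f"{per_pass * 1e3:.3f} ms per pass")
```

The ratio between platforms is what matters, not the absolute numbers, and as noted above the ratio can swing wildly from one operation to the next.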

I also benchmarked an old, low end laptop dating from the Windows XP era (I put Linux on it for the test). It came in as slower than a Raspberry Pi 3.

Many of the sort of things which do very intensive numerical operations can benefit greatly from SIMD and other such hardware acceleration. There's nothing to prevent Apple from adding special processing units or instructions targeting particular use cases such as image or video editing to accelerate those to get even better performance. GPU acceleration will play a part in this as well, and some of the really intensive tasks may make use of GPU based calculations.

To make a long story short, the actual performance of a hypothetical Apple ARM PC will depend a lot on the specific use cases and how the application software was written. UI performance will be primarily GPU bound, and I don't expect Apple to use a slower GPU.

Based on my own experience, I expect that synthetic benchmarks that look at some sort of average will be almost meaningless when comparing CPUs which have a different fundamental architecture, and we would need to see application oriented benchmarks to get a real idea of how it would perform in real life.

Nokia said to be considering sale or merger as profits tank


Re: Efficiency

Reportedly their "antenna kit", which is how most reports describe it (I think they mean RAN) has better technology than their competitors and can support more users with less equipment, saving costs on purchasing. Probably more importantly, it also apparently requires fewer base stations in areas with lower density of subscribers (i.e. anywhere outside of major cities, which accounts for a lot of base stations if a carrier is to provide coverage nearly everywhere). This is apparently why mobile operators are so keen on buying Huawei's RAN kit, even if they plan on buying the rest of their kit elsewhere.

In third world countries, which account for a big share of the global market, Huawei also provide complete, pre-engineered, turnkey systems, which saves a fair bit of money on engineering costs. In Western countries carriers tend to mix and match kit from different companies. This however is not the practice everywhere, and carriers often just want to go to a vendor and buy a complete mobile network, something I understand only Huawei have a credible offering for.

Is it a bird? Is it a plane? No, it's a flying solar panel: BAE Systems' satellite alternative makes maiden flight in Oz


In military terms it's far less of a sitting duck than a satellite. In a war with an advanced opponent your satellite communications network will be gone within days or even hours of the start of the war, while a HALE like this can be kept well back from the front lines while still acting as a communications relay and so be difficult to see or hit. This is why there is so much military interest in these. The UK are the leaders in this field.

Huawei to the danger zone: Now Uncle Sam slaps it with 16 charges of racketeering, fraud, money laundering, theft of robot arm and source code


Plan B?

I guess their extradition case in Canada must not be going too well for the US if they have to cobble together some new charges like this.

The current status of the Meng Wanzhou case in Canada is that it has to pass the hurdle of proving "double criminality", when Canada had no equivalent Iran sanctions in force (as Canada was backing the European treaty with Iran). The next hurdle is an abuse-of-process hearing regarding how she was held and questioned at customs and immigration at the request of the US. Neither of these is decided yet, but there are reasonable grounds for the case being thrown out on either one. And that isn't the end of the story either.

It is possible that this new development may be "Plan B" for the US, as it avoids the rather shaky "double criminality" question and short circuits the very unsavoury details surrounding the abuse of process events.

Wake me up before you go Go: Devs say they'll learn Google-backed lang next. Plus: Perl pays best, Java still in demand


To be taken with a grain of salt

The numbers will be skewed by a few things. For one, certain fields tend to pay more, and also tend to use certain languages. In these cases there's a lot more to getting more pay than just learning a new programming language. You also need the knowledge of how things are done in that field of endeavour.

There are also geographic factors. Again, some languages are more commonly used in certain geographic areas, and those geographic areas may have higher pay, along with higher costs of living (so the people there aren't necessarily better off so far as quality of life goes).

The "intend to learn language 'x' next" statistic also needs to be taken with a grain of salt. Lots of those people will also "intend" to lose weight and exercise more, but not get around to those either. What often prompts learning a new language is that it is needed for something they need to do, which may have not much relationship with whatever their current aspiration is. It is though an indicator of a language that they don't currently know, but have heard of. Languages which are both high on the list of ones people would like to learn and high on the list of what skills hiring managers want will probably give a better idea of what languages people will actually put the effort into learning to a useful degree.

I can believe that Perl pays fairly well, so long as we're talking about large Perl applications and not some script somebody hacked together to comb through log files. Not many people are learning Perl these days, while there are still a lot of large commercial web sites that were built on Perl in bygone years and still need maintaining. Perl programmers are like mainframe COBOL programmers in that respect.

No big deal, Rogers, your internal source code and keys are only on the open web. Don't hurry to take it down


Rogers are a big media company, owning cable, cell phone, ISP, broadcasting, and sports team assets. Annual sales are around $14 billion. Whatever their excuse is going to be for this mistake, lack of resources to do things properly can't be it.

Server-side Swift's slow support story sours some: Apple lang tailored for mobile CPUs, lacking in Linux world


Re: x5 Speed Increase on Server Side with Swift?

That article in the link you provided is an excellent analysis. The author of the benchmarks cited in the Reg story was comparing apples to oranges from multiple different perspectives.

I suspect the reason for this was that the benchmark author didn't really know much, if anything, about many of the frameworks he was "testing" and just googled some tutorial examples. The result was numbers that were measuring very different things in each case.

This is such a common problem that it might make for a good series of articles for El Reg to publish if they can get someone (or a series of people) to write it for them.


Re: x5 Speed Increase on Server Side with Swift?

Take these performance numbers with a very large grain of salt. There are several problems with these sorts of comparative benchmarks. The first is that they are always micro-benchmarks with very little real content. A particular web framework may be a top performer at one micro-benchmark and down near the bottom on another. Have the various frameworks serve real data and they may start to converge on fairly similar numbers.

Another problem is that the authors of these comparative benchmarks may know a few web frameworks, but they rarely know half a dozen or a dozen fairly well. For the ones they don't know they generally just google some tutorial examples and end up with a configuration that is very suboptimal and not uncommonly using deprecated methods. Someone who knew what they were doing with it might get a totally different set of numbers.

A common problem is that particular frameworks may address scaling across multiple cores in different ways, but the author wants to make them all the same in order to be "fair". So he ends up using the best case for his favourite (and best known to him) framework while disadvantaging other frameworks that need a totally different setup.

Yet another problem is that there was no indication of memory consumption, other than the author noted he didn't get any out of memory errors. RAM consumption is often more important than CPU load in real world web applications, but it's hard to come up with useful benchmarks without putting more work into each test than the author is willing to spend on it.

Still yet another problem is that different frameworks may be oriented towards solving different problems. One may be oriented towards high volume web sites, while another may be oriented towards being able to set up a lower volume site with minimal man-hours and cost involved. The latter (smaller web sites) are far more common than the biggest sites.

So while these comparative micro-benchmarks can be interesting at times, you generally need to know a lot about each one before you can tell if the author knew enough about what he was doing for the numbers to mean anything.

So overall, take it with a grain of salt.


Not much take up.

The problem in this case is that IBM and a few others were looking for an alternative to Java for server-side use that wasn't tied in with a potential competitor, and as Apple had just come out with Swift, they thought this would solve that problem.

However, Apple remained the driving force behind Swift and they had no real experience with or interest in third party server applications. IBM and friends on the other hand did, but were not able to build a viable community around Swift. Without that community the third party activity to build a large ecosystem of third party libraries and software tools never really developed.

There's lots of fairly good programming languages out there. The few who make it past niche applications and hobbyists into the mainstream are the ones which build up a large and active community.

For example, look at Python. It succeeded on a shoestring and without corporate backing because of the community built up around it.

I'm not surprised to hear this news about Swift, as the writing on the wall was evident years ago.

US hands UK 'dossier' on Huawei: Really! Still using their kit? That's just... one... step... beyond


If you buy Huawei kit you can either cherry pick bits and pieces that you want and then combine with kit from other manufacturers, or you can buy a complete turn key system from them.

The UK's mobile operators want to pick the best technology from a variety of suppliers and combine it into a system using their own engineering staff, or to contract companies in the UK (or elsewhere in Europe) to do it. There's already loads of Huawei kit in the UK's existing 4G networks because of this and it isn't going away regardless of 5G.

In many third world countries however the mobile operators simply buy a complete turn key system from a single supplier, and unlike many other companies Huawei is capable of doing this. It's not what UK companies want to do however.

Security comes down to system design, not the individual bits and pieces. When you buy a label there are no guarantees of security coming with it.

And in the end, the alternatives may be headquartered in Europe, but the actual kit is made in places like India, who have their own ambitions for being a world power.

Oh, and a lot of Huawei's software is written in India. The world is a much more complicated place than it was in the 19th century, which is where a lot of people who put national labels on multi-national businesses seem to have their minds stuck.


Re: Oooh! Dossier from the US!

I've seen a copy of it. It's quite chilling. It claims that China has exploits of mass destruction that can be launched in 45 minutes.

MI5 gros fromage: Nah, US won't go Huawei from dear old Blighty over 5G, no matter what we do


We've been down this road before.

The UK would look pretty silly if they banned Huawei equipment and then had to turn around and reverse themselves on that when the US decides that Huawei are no longer a "threat to national security" because the US had finally signed a good enough trade deal with China (currently under negotiation).

The US declared that Canadian steel and aluminum were "threats to US national security" as well, and slapped massive tariffs on them until Canada said we weren't going to sign a new NAFTA deal with the US until they were retracted. Then all of a sudden Canadian steel and aluminum were no longer threats to US national security after all and the tariffs vanished.

The US has been making systematic use of "national security" as a trade negotiation tactic because they see it as a loophole in WTO trade rules, despite their misuse of it being so transparently obvious. They want to rope other countries into supporting their blockade on Chinese 5G kit because the global market is so big that the US being the odd man out wouldn't really faze Huawei much.

This is the way Trump rolls. We've been dealing with this sort of thing living next to the US on a regular basis for a few years now. He's openly admitted to simply making up "facts" on the spur of the moment when dealing with the Canadian government in order to derail the conversation when negotiations weren't going his way.

Canada are also holding off on making any decision on this, partially in order to see what the UK will do. In Canada all cell phone network kit must be reviewed for security problems, not just that from Huawei or China. Canada's equivalent to GCHQ have found no serious problems with the stuff from Huawei when used in the manner which the telecoms carriers are proposing. Any problems with Huawei kit are purely diplomatic and politically related, not security or technical ones.

Reusing software 'interfaces' is fine, Google tells Supreme Court, pleads: Think of the devs!


Re: maybe

No, I want to see Google crush Oracle, to drive their salesmen before them, and to hear the lamenting of their shareholders.

Actually, I'd be happy to just see Android purged of all traces of Java, just to see the look on Larry's face when he realises that he's managed to knock yet another reason to learn Java out from under the pool of potential developers for "his" platform.

At this point you would have to be mental to develop a phone app using Java, given that if Oracle somehow wins, Google will almost certainly deprecate Java and phase it out, leaving you high and dry.

Snakes on a wane: Python 2 development is finally frozen in time, version 3 slithers on


Re: More lazyness than anything

What you describe as being difficult with handling bytes is the situation in version 2. Strings could be 8 bit ASCII bytes or they could be unicode, depending on what any particular function decided to return, which in turn could depend on your OS locale settings, the phase of the moon, or any number of other factors. A "string" of bytes could change into a string of unicode unexpectedly if you weren't careful.

In version 3, strings were made to be purely unicode all the time, and new "bytes" and "bytearray" types were added which were purely intended as arrays of raw bytes and not to be interpreted as unicode strings.

Version 3 did, however, also add a full set of byte formatting and manipulation functions which let you do string-like operations on bytes and bytearrays (searching, replacing, justifying, padding, ASCII case changes, formatting, etc.). So now you can write out bytes to image files and the like and be sure that what you are getting is bytes.

So now with version 3 strings are text and are unicode all the time, while bytes are bytes and not to be confused with text strings.

This change ironically is what the biggest complaints were about, mainly originating with people who only ever used 8 bit American ASCII and were not happy about having to change their code to better accommodate people who wanted to use funny looking accented characters as part of their alphabet.

In version 2 I was initially very uncomfortable using the "struct" module in the standard library (used to format packed binary data - very useful, have a look at it) because of its use of strings when it wasn't clear when a string was an array of bytes or when it was a sequence of unicode characters. With version 3 it uses bytes and bytearrays and there can be no confusion with inadvertent unicode.
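As a quick sketch of what that looks like in version 3 (the format string and values here are just made up for illustration):

```python
import struct

# Pack a little-endian record: 16-bit id, 32-bit counter, and a float.
record = struct.pack("<HIf", 1000, 42, 3.5)

# The result is bytes, never str, so it can't be mistaken for unicode text.
assert isinstance(record, bytes)
print(record.hex())

# Unpacking recovers the original values.
rec_id, counter, value = struct.unpack("<HIf", record)
print(rec_id, counter, value)
```

In version 2 the same pack() call returned a "string", with all the ambiguity that implied.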


While writing this I'm currently skiving off from a project which involves a set of C libraries for Python, involving both Python and C code. It runs on multiple operating systems, with multiple chip architectures, in both 32 and 64 bit, using multiple C compilers.

The Python code runs without change on all platforms, CPU architectures, bit sizes, and C compilers.

The C code on the other hand must be tested, debugged, and tweaked to run on all of them. Word sizes vary, operators have subtle differences in how they behave depending upon CPU architecture, compilers must be placated as one will issue warnings for code that another has no problem with, and obscure documents must be consulted to find the edge cases where the compiler authors simply state that the compiler will generate invalid machine code without warning and must be worked around manually.

I've written software over the decades using Fortran, Pascal, Modula-2, various flavours of Basic, various 4GL languages, assembly language, Java, C, and a host of other languages that I can't be bothered to recall. Given a choice, if I had to write software that I had to be sure would work correctly, I would take Python over any other choice, provided the application was suitable for it (there's no one size fits all language).

I was very sceptical about Python before I first tried it 10 years ago, but I put my preconceptions aside and gave it a good try and was pleasantly surprised. In my experience the people who complain about Python mainly tend to be people who have little or no real world experience with it.


Re: More lazyness than anything

Features from version 3 were backported into the later versions of 2 in order to make 2 more forward compatible with 3. Version 2.7 was a sort of half-way house between 2 and 3. That was intended to allow for easier porting of applications.

The main practical issue that people had was with unicode, as 3's fully integrated unicode support turned up a lot of latent user application bugs which had issues with handling unicode. I ran into some of these with version 2, and they could be triggered by something as simple as a locale change in the OS the application was running on.

The number one issue driving Python 3's changes was fixing the unicode problems once and for all by making the language and libraries unicode from end to end. There wasn't any way of doing this while preserving perfect backwards compatibility, but now that the nettle has been grasped Python can put that behind them.
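A minimal sketch of what that end-to-end split looks like in version 3 (the sample string is arbitrary):

```python
# In Python 3 text and bytes never mix implicitly: converting between
# them requires an explicit encode or decode with a named encoding.
text = "café"                 # str: a sequence of unicode characters
data = text.encode("utf-8")   # bytes: what actually goes over the wire

assert isinstance(data, bytes)
assert len(text) == 4         # four characters...
assert len(data) == 5         # ...but five bytes: é is two bytes in UTF-8

# Mixing them raises TypeError instead of silently corrupting data.
try:
    text + data
except TypeError:
    print("str and bytes cannot be concatenated")

assert data.decode("utf-8") == text
```

In version 2 that concatenation would have "worked" under some locales and blown up under others, which is exactly the class of latent bug that 3 flushed out.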


Re: Apple's walled garden

Red Hat are still shipping Python with version 8, it's just that they have separated the "system Python" version from the "user Python" version.

Red Hat's system tools rely on Python, so they still need to ship it. However, when you type "python" or "python3", you will get whatever version you have designated to be the user version, rather than the version which Red Hat uses for system tools. To get a user version, you need to install it via "yum".

What this means is that you can upgrade the version of Python which your applications use without affecting the version which the system tools use. That tends to matter to Red Hat, as they have very long life cycles.

When the system tools used 2.7 and newer applications used 3.x, that didn't matter, as both 2 and 3 could coexist without any problems because one was addressed as "python" and the other as "python3". The problem comes when Red Hat want to pin the version the system uses at say 3.5 but the user wants 3.8 for applications and both versions are known as "python3". This change lets two versions of 3.x coexist without conflicts.
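Roughly what that looks like in practice on a RHEL 8 box (package and path names are as I understand RHEL 8's layout; check Red Hat's own documentation for your release):

```shell
# Install a "user" Python without touching the interpreter the system
# tools use (package names here are RHEL 8's; adjust for your release).
sudo yum install python38

# The bare "python" command is undefined by default on RHEL 8; point it
# at whichever installed interpreter you want via alternatives.
sudo alternatives --config python

# System tools keep using Red Hat's pinned interpreter regardless.
/usr/libexec/platform-python --version
```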

Ever wonder how hackers could possibly pwn power plants? Here are 54 Siemens bugs that could explain things


Re: So reassuring

SPPA-T3000 is basically a big Java program that runs on a set of MS Windows servers, with operator access being via Windows client workstations. The "highways" are the networks connecting them together and to the plant equipment. It shows equipment status, records performance for analysis, and allows operators to change equipment settings as needed.

Given that software systems based on similar technologies in more routine business environments seem to have security vulnerabilities being reported all the time, it shouldn't be too surprising that we are seeing some here, even if Siemens goes to great lengths to obfuscate just what their system is.

Basically though, the security challenges here are in essence the same as in any piece of big enterprise software and there's no reason to expect it to be immune from the same vulnerabilities.

Redis releases automatic cluster recovery for Kubernetes and RedisInsight GUI tool


There's "Redis Cluster Proxy", but that's still in alpha state. I've never tried it, but it's under development on Github.


Multi-threading wouldn't necessarily give you any better performance, and may in fact reduce overall performance. In-memory databases need locks to maintain data consistency and the lock overhead can be greater than simply running the database on a single thread. This is a very different situation from on-disk databases where the database would otherwise spend considerable time waiting for I/O.

Dr. Stonebraker, who is a noted database researcher and one of the people behind Ingres, Postgres, H-Store, and many others, did some detailed research in recent years in which he demonstrated the above problems with multi-threading in this type of application. This led to VoltDB also being single-threaded.

The answer is supposedly clustering, but that comes with its own problems as noted in the article.

I used Redis in an application a while ago, and the biggest problem I had was actually the serialising/deserialising overhead inherent in their protocol at the client end. I needed something very fast and with low overhead, and that wasn't it. I'm not sure if they've improved since.

I'm not sure that anyone has yet come up with a really satisfactory solution in this area.

Thanks, Brexit. Tesla boss Elon Musk reveals Berlin as location for Euro Gigafactory


Re: No, the UK was never in the running

According to news reports in Canada (CBC), locating the plant in Berlin seems to be a quid pro quo for Germany's new electric vehicle subsidies announced a couple of days ago. The timing between that and the Tesla announcement is unlikely to be coincidental.

Germany couldn't offer a direct subsidy to build the plant, but they could offer broader electric vehicle subsidies which Tesla will benefit from to a large degree. Tesla depends heavily on such subsidies for sales, and they tend to be in places with the largest subsidy programs. They have a high enough profile in the industry that they can negotiate subsidies directly with governments. They were almost certainly consulted by the German government on the new subsidy program, hence the coordinated announcements.

If the UK wanted to bid for the new Tesla assembly plant, then they would have had to do so by topping Germany's subsidies. I suspect that the current government in the UK has no intention of getting in a subsidy war with Germany.

Ransomware freezes govt IT in Canadian territory of Nunavut, drops citizens right Inuit

This post has been deleted by a moderator

Cubans launching sonic attacks on US embassy? Not what we're hearing, say medical boffins


Re: Were previous medical reports wrong?

Yes, Canada never bought into the "high tech sonic weapon" story, regarding it as fantasy. As noted in the PS, current Canadian detailed medical investigation points to overuse of fumigants to kill mosquitoes in the embassy building and diplomatic staff residences. The fumigants were also detected in the bodies of people affected by the symptoms. The insecticide fumigants are based on neurotoxins, and so have a direct effect on the nervous system, including the brain.

At the time fumigants were being used excessively due to panic over Zika virus, which was spreading throughout South America. As a result, embassies were having their diplomatic buildings sprayed as much as five times more frequently than is normal. I suspect the Americans were doing the same in their embassy buildings.

Canadian researchers also feel that any psychological effects of tension or pressure would have played at best a minor role in the effects seen.

Any "mass hysteria" that is involved in this seems to be affecting people in Washington who seem to think the Cubans have some unimaginably advanced new high tech weapon and are testing it out on multiple embassies in Havana before presumably using it to engage in world conquest.

First Python feature release under new governance model is here, complete with walrus operator (:=)


It will be a very useful feature in list comprehensions, where it offers the potential for performance improvements as you no longer have to repeat an operation in order to use the result several times in the same expression.

Here's an example directly from PEP 572.

results = [(x, y, x/y) for x in input_data if (y := f(x)) > 0]

Without it you would have to write

results = [(x, f(x), x/f(x)) for x in input_data if f(x) > 0]

The other alternative would be to unroll the comprehension into a for loop, but comprehensions (of various types) are generally faster as they have less loop overhead. This means they are a common optimisation method, and anything which makes them even faster is generally a good thing.

I run into this type of problem on a regular basis and look forward to using it once version 3.8 percolates out into the common Linux distros.
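For anyone who wants to try it, here's a runnable version of that example with a made-up f and input_data (needs Python 3.8 or later):

```python
# f and input_data are invented stand-ins for whatever your real
# filtering function and data set happen to be.
def f(x):
    return x - 2

input_data = [1, 2, 3, 4]

# With the walrus operator, f(x) is evaluated once per element.
with_walrus = [(x, y, x / y) for x in input_data if (y := f(x)) > 0]

# Without it, f(x) has to be repeated (three calls per element).
without = [(x, f(x), x / f(x)) for x in input_data if f(x) > 0]

assert with_walrus == without
print(with_walrus)   # [(3, 1, 3.0), (4, 2, 2.0)]
```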

Chemists bitten by Python scripts: How different OSes produced different results during test number-crunching


Re: Language question

@Kristian Walsh - Just to address your point about how Python handles integers, what you describe is how Python version 2 did things. Starting with version 3 all integers are "long" integers, there are no more "short" (native) integers.

What this means is that there are no longer unexpected results caused by integers getting promoted to long (this wasn't a problem for the actual integer value at run time, but more one of the type changing causing complications when inspecting types in unit tests). Eliminating the need for Python to check whether an integer was a "short" or "long" integer each time an object was accessed meant that there was no performance penalty to simply making them all long.

For the sake of those who don't know what the above means, with Python it is impossible for integers to overflow because they are not restricted to the size of native integers but can be infinitely large. This is a big help as you don't have to add integer overflow checking to your own code.
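A quick demonstration:

```python
import math

# Python integers are arbitrary precision: they never overflow, they
# just grow as needed.
big = 2 ** 64
print(big)   # 18446744073709551616 - already past a C uint64

assert big + 1 == 18446744073709551617
assert (big * big) // big == big   # still exact at 128+ bits

# Factorials blow past machine word sizes almost immediately, with no
# special handling required.
assert math.factorial(25) == 15511210043330985984000000
```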


Re: Language question

I would suggest using Python. If the data format is some sort of standard one in the scientific field, then chances are that there is already an open source Python library which imports it.

If it's a custom format, then Python is pretty good at manipulating data. In particular have a look at using the "bytes" data type, as well as using the "struct" module.

When you open the file, you are going to want to do line-by-line data processing or at least chunks of it, unless you happen to have enough RAM to hold it all at once. Python has a lot of options for this.

I would suggest having a look at Pandas, which is a popular Python framework for handling large data sets.
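As a rough sketch of the chunked approach using "struct", with a made-up fixed-size record format standing in for whatever the real file contains:

```python
import struct
import tempfile

# A made-up fixed-size record: 32-bit id plus a double (12 bytes each).
RECORD = struct.Struct("<id")

# Write some sample records to a temporary file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    for i in range(5):
        tmp.write(RECORD.pack(i, i * 1.5))
    path = tmp.name

# Read it back one record at a time instead of loading the whole file,
# which is what you want when the file is bigger than your RAM.
totals = []
with open(path, "rb") as fh:
    while chunk := fh.read(RECORD.size):
        rec_id, value = RECORD.unpack(chunk)
        totals.append((rec_id, value))

print(totals)
```

The same pattern works with much larger read sizes if you want fewer system calls; the point is that memory use stays constant regardless of file size.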

Boris Brexit bluff binds .eu domains to time-bending itinerary


Re: Out of curiosity ...

There was a spam email campaign a few months ago operating from a set of ".eu" domains, but I haven't seen anything from them for a while. I don't know whether they have moved on or whether the spam is just getting filtered better upstream from me now.


If the EU insist that UK holders of ".eu" domains have to give them up, the obvious solution would be to say that this rule takes effect a year or two after the UK has actually left the EU. At that point the status of the UK should be clear and domain owners will have had time to sort out an alternative.

Insisting that UK registered ".eu" domains must cease to exist right on the dot of the official Brexit date introduces a lot of problems for everyone involved for no obvious rational reason and seems motivated purely out of spite.

I don't live in the UK, or anywhere in the EU for that matter, and don't have any stake in the game. However, I suspect that this sort of political gamesmanship by EURid is not exactly doing much to enhance the reputation of the EU, which already suffers from a reputation for being a regulatory morass which is a nightmare for outsiders attempting to trade with it to navigate.

Oracle demands $12K from network biz that doesn't use its software


There is a good chance that whoever is using VirtualBox in this application may be using it as part of an automated test setup. I use it to test software on multiple operating systems. I can control the VM through VBoxManage and then run all the tests via SSH. The whole thing is orchestrated automatically via a bash script.
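A stripped-down sketch of that sort of script (the VM name, user, and address here are placeholders, not anything real):

```shell
#!/bin/bash
# "testvm", "tester", and the IP address below are made-up placeholders.
VM="testvm"
HOST="tester@192.168.56.10"

# Boot the VM without a GUI, wait for SSH to come up, run the test
# suite remotely, then shut the VM down again.
VBoxManage startvm "$VM" --type headless

until ssh -o ConnectTimeout=2 "$HOST" true 2>/dev/null; do
    sleep 2
done

ssh "$HOST" 'cd tests && ./run_tests.sh'

VBoxManage controlvm "$VM" acpipowerbutton
```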

None of that requires the VirtualBox Extension Pack however, which adds some rather niche features, mainly related to USB device pass-through. The problem in this case is with the VB EP, which you can download from the VB web site, but only for personal use and evaluation.

There is the distinct possibility that whoever is using the VM in this application may not actually need or be using the Extension Pack, but only installed it because it was there for download.

Auditors bemoan time it takes for privatised RAF pilot training to produce combat-ready aviators


That problem was anticipated and allowed for. The British Commonwealth Air Training Plan (BCATP) was set up in Canada at the start of WWII to train pilots and other air crew for the UK, Canada, Australia, and New Zealand. Over 130,000 air crew, mainly pilots but also navigators, wireless operators, and others, were trained through this program in Canada during the war. Smaller numbers of aircrew from a number of other countries and colonies were trained through the program as well.

Canada's aircraft industry produced thousands of training aircraft to equip it.

At the end of the war Canada wrote off the UK's share of the costs along with the rest of the UK war debt to Canada.

A similar but much smaller program was run in South Africa and Southern Rhodesia.

Production of arms, vehicles, ships, and aircraft, as well as training and other activities, was coordinated across the Commonwealth, but it's not something you will read much about in pop culture.

Git the news here! Code quality doesn't count for much when it comes to pull requests


"other factors such as the reputation of the maintainer and the importance of the feature delivered might be more important than code quality in terms of pull request acceptance"

I didn't see anything which suggested that any of the code being accepted was actually bad. In that case then yes, I would expect that an important feature that users were waiting for would get priority over an unimportant change.

And I would also expect that a change that didn't add much value would get rejected, regardless of how well formatted and styled it was.

Backdoors won't weaken your encryption, wails FBI boss. And he's right. They won't – they'll fscking torpedo it


It's rather simple. If the US has that sort of access then every other country will demand (and get) the same or else block the messaging service from their territory. After all, what honest and law abiding person would want to provide a safe haven to "terrorists, hackers, and child predators" by not being able to decrypt messages on their territory?

And of course there can't be a different phone backdoor for each country or else "terrorists, hackers, and child predators" would just use a messaging service from another country so that the country they are resident in can't open the backdoor.

If we follow the logic of this argument, then every country needs equal access to the same backdoor. Because if you don't do that then you either don't have a messaging service that works internationally or you have to admit that the whole "we need backdoors to protect against terrorists, hackers, and child predators" argument is specious. Unless of course you are going to claim that only Americans are "terrorists, hackers, and child predators" and so only Americans need monitoring to stop them doing such things.

So the only logical end state is that every country eventually has access to every phone everywhere. And that of course will be "Totally Secure" (add spiffy logo and branded web site as required).

Google's Go team decides not to give it a try


Re: panic and recover functions.

@eldakka said: "Personally I've never understood the point of 'try'. "

Exception handling is a feature that is very useful in certain types of applications. If you need to use it you really appreciate it.

Where it is especially useful is when you have a long sequence of things to do, each of which must be checked for errors, but where the exact nature of the error doesn't really matter in terms of what you will ultimately do about it.

If you check for errors through a series of "if/else" statements, then the code ends up almost unreadable and likely containing several new errors of its own.

With exception handling you put an exception block around the code that has to be checked and it bails out automatically for you without all the repetitive boilerplate. The code becomes much more readable as you can follow the main flow without getting tangled up in the error handling. It's particularly useful in libraries where you allow the exceptions to bubble up to the higher levels where they can often be handled more sensibly in the given context.

Without exception handling what tends to happen in practice is that people forget to check for errors, especially those rising up from lower levels, and the errors manifest themselves as crash reports from users.

There can be cases where manual error handling using if/else can be the better way, but not in most cases. I would compare it to automatic memory management versus manual memory management. The latter is better in some cases, but in most cases is a notorious source of bugs and memory leaks.

With C, exception handling is sometimes emulated using a programming convention based on goto (yes, C has goto). I've used this convention in a few applications, but it's not as clear or as readable as actual exception handling.

So to sum up, "proper" exception handling is better at putting the programmer's intent clearly and concisely while being less prone to creating new errors because the intent is inherent in the syntax. There are many application domains where it is very useful and helps eliminate entire classes of common programmer errors.
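A small Python sketch of the difference (the three steps and their failure modes are invented for illustration):

```python
# Three steps that can each fail in their own way - all made up
# purely to illustrate the pattern.
def read_config(path):
    with open(path) as fh:            # may raise OSError
        return fh.read()

def parse_port(text):
    return int(text)                  # may raise ValueError

def check_port(port):
    if not (0 < port < 65536):
        raise ValueError(f"port out of range: {port}")
    return port

# With exceptions the main flow reads straight through, and a failure
# in any step bails out automatically to a single handler.
def load_port(path):
    try:
        return check_port(parse_port(read_config(path)))
    except (OSError, ValueError) as exc:
        print(f"could not load port: {exc}")
        return 8080   # fall back to a default

print(load_port("no_such_file.conf"))   # falls back: file is missing
```

The if/else equivalent would need an error check and an early return after every single step, obscuring the three-line happy path under the boilerplate.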



Biting the hand that feeds IT © 1998–2020