Virtually all competent IT departments...
... offer Linux desktops, and particularly more technical departments readily gobble them up.
It's just that incompetent IT departments are far more common than competent ones.
4830 posts • joined 9 Mar 2007
It's not just DWDM on a chip, it's OFDM on a chip. The carriers can be packed much closer together, relative to their bandwidth, than in DWDM, because they are all coherent and at well-defined spacings. Essentially the neighbouring frequencies will interfere with the middle one, but those interference patterns cancel out over a symbol period.
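That cancellation over a symbol period is just the orthogonality of subcarriers spaced at exactly 1/T, which is easy to check numerically. A sketch with made-up parameters:

```python
import cmath

T = 1.0       # symbol period
df = 1.0 / T  # OFDM subcarrier spacing: exactly 1/T
N = 10000     # integration steps

def correlate(f1, f2):
    # Correlate two subcarriers over one symbol period [0, T).
    # With spacing 1/T the integral of exp(2j*pi*(f1-f2)*t) over a
    # symbol period is zero, so the interference cancels out.
    dt = T / N
    return sum(cmath.exp(2j * cmath.pi * (f1 - f2) * k * dt) * dt
               for k in range(N))

print(abs(correlate(3 * df, 4 * df)))  # ~0: neighbours are orthogonal
print(abs(correlate(3 * df, 3 * df)))  # T: a carrier correlates with itself
```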
Honestly I do not think this will require a lot of rack space. After all this is "just" 422 times as much as the already established 100 Gbit/s, and it's apparently compatible with CMOS, so you could even place your routing logic on the same piece of silicon.
... that there may be countries somewhere where your ISP is less trustworthy than Cloudflare. Of course this doesn't apply to Europe, where your ISP could easily get shut down if they were caught exploiting your DNS traffic, whereas Cloudflare only makes a non-enforceable "promise" that they won't mess with your queries.
After all that is just a sign that can display text. Serial lines are more than adequate to get the data there. This probably uses something like RS-485 which works much better for long lines, but is otherwise nearly identical to "normal" serial lines. Also it works over normal twisted pair and doesn't need higher grade cable you would use for Ethernet. BTW you can run such a system in a unidirectional mode so an attacker on the bus can eavesdrop and modify the data, but not access the master.
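For illustration only, a hypothetical one-way signage frame (this is an invented format, not any real display protocol) might carry an address byte, the text, and a simple XOR checksum, so the sign can detect corruption without ever talking back:

```python
# Hypothetical one-way signage frame: STX, address byte, ASCII text,
# XOR checksum, ETX - as one might send it down an RS-485 bus.
def build_frame(address, text):
    payload = bytes([address]) + text.encode("ascii")
    checksum = 0
    for byte in payload:
        checksum ^= byte
    return b"\x02" + payload + bytes([checksum]) + b"\x03"

frame = build_frame(0x21, "NEXT TRAIN 12:05")
print(frame.hex())
```

The receiver just verifies the checksum and repaints the display; there is no reply channel for an attacker to reach the master through.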
I mean this makes a lot more sense than what a German railway company did, using Windows PCs to directly drive their station signage... with the obvious result that eventually they were hit by ransomware.
For example on average there are only around 1300 credit card transactions per second in the US. While this may sound like a lot, it's probably less computation than playing an MP3 file takes.
Of course there is _way_ more database activity, but we live in an age where storing your database in RAM or on fast flash memory is feasible.
To put this into context, every fixed-line call in Germany has to go through a complete lookup of the number-portability database. That's a database listing every number that has ever been ported, which means millions of records. The lookup works with a simple, barely optimized program which rarely takes more than a millisecond to look up a record, even on a very modest computer.
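The scale is easy to overestimate; a sketch with a synthetic table of a million made-up numbers (the key format and carrier IDs are invented) shows how cheap one lookup is:

```python
import time

# Synthetic stand-in for a number-portability table: a million made-up
# numbers mapped to a carrier ID. A plain hash map is enough.
ported = {f"+4930{n:07d}": n % 50 for n in range(1_000_000)}

t0 = time.perf_counter()
carrier = ported.get("+49300123456")        # one routing lookup
elapsed = time.perf_counter() - t0

print(carrier, f"{elapsed * 1000:.4f} ms")  # well under a millisecond
```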
"I wouldn't have thought there would be much reason not to do something like that as a web app, that way anything with a recent browser from a single board computer (such as a Pi) to a full on workstation costing thousands could be powering the screen."
Yes, but web standards change very quickly, and web developers always want the newest technology to fail in. Also, web browsers are hugely complex systems (more complex than operating system kernels), which are therefore likely to fail in unpredictable ways.
I think the problem is that we do not have proper "graphical multimedia terminal" standards. Sure, we have VT100, to which we have added truecolour and mouse support, but if you want to display a photograph or play a sound, your choices are severely limited.
First of all, it's extremely easy to do something and pin the blame on someone else. Want the Russians to be the culprit? Buy a Russian PC to develop your code on and leave Russian-language clues. Attribution is basically impossible, unless you are dealing with stupid people.
Then there's the whole area of side effects of doing this. If you want to make attribution easier you have to make sure that things like anonymous communication disappear. This endangers large groups of the population, from whistleblowers to homosexuals. Probably even people like security analysts.
Third, it doesn't fix anything. The security holes are still there. If they are not used by criminals, they are probably used by "lawful" organisations.
In short it's an insane idea, not well thought out and based on assumptions which have been proven wrong many times.
Since the planning of Galileo, both Russia and China have created their own satellite-based navigation systems. Most mobile phones today support all 3 fully operational systems, and they are all operated by different entities, meaning that even if one decides that Europe is evil, there are still 2 other systems.
There actually was a talk about this problem at the 36c3. The proposed solution was to mark your library, "Geek-Code"-style, to indicate whether you consider it fit for use in security-critical things.
A typical example would be a crypto library someone started because they wanted to experiment. Of course one could use it for serious things, but since it wasn't meant for that, there could be serious issues. Nevertheless, releasing such code may still be beneficial as a demonstration.
"Are you suggesting using CSVs because they are "standard"?"
No, of course not. I'm suggesting it because in 99% of cases it can be done in a very simple way. Often you don't need the delimiter character to appear in your data fields; you can simply replace it with another character or reject such input as invalid.
For example, if you just have numerical values, scanf can easily read them for you. With slightly more effort it can also read space-delimited columns of strings.
Even if you need arbitrary data, there are way simpler ways than the "Windows CSV". Just use no quoting and add an escape character. That way your parser only needs to read the input character by character and needs only 2 modes. The first is the normal mode, the second is the "after escape character" mode.
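A minimal sketch of that two-mode parser (the separator and escape characters are arbitrary choices):

```python
# Two-state parser for an escape-character format: '\' escapes the
# next character, ',' separates fields, no quoting needed.
def parse_line(line, sep=",", esc="\\"):
    fields, current, escaped = [], [], False
    for ch in line:
        if escaped:           # mode 2: take the character literally
            current.append(ch)
            escaped = False
        elif ch == esc:       # switch to "after escape character" mode
            escaped = True
        elif ch == sep:       # an unescaped separator ends the field
            fields.append("".join(current))
            current = []
        else:                 # mode 1: normal character
            current.append(ch)
    fields.append("".join(current))
    return fields

print(parse_line(r"a,b\,c,d"))  # ['a', 'b,c', 'd']
```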
One of the worst examples of how to mess up a simple format is probably the "Windows CSV", which adds things like quoting that make parsing much harder.
XML and JSON may have their advantages for complex and dynamic data structures. However, one rarely needs that. Relying on standards is not always a good idea, particularly when you need more code to use a separate library than an implementation of your own parser would take.
A good summary of the state of the art is here:
Although you can never be 100% safe, you can always lower your risk by lowering your dependencies.
For example, if you have a simple list, using XML or JSON adds complexity without providing value. If you use simple delimiter separated files you can often use standard library features to parse such a list.
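For instance, a simple list such as a hypothetical "name:port" service table parses with nothing but standard string methods:

```python
# Parsing a delimiter-separated list needs nothing beyond the standard
# library; the "name:port" service list here is a made-up example.
line = "web:80 ssh:22 dns:53"
services = dict(entry.split(":") for entry in line.split())
print(services)  # {'web': '80', 'ssh': '22', 'dns': '53'}
```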
Beware of environments where adding a new dependency is simple. Adding a dependency is a potentially dangerous thing to do, think before you do it, think before pulling in code that adds new dependencies.
Most of the JS from other domains is malware by now. Usually it's code that manages ad providers to do things like holding an auction to determine what ad will be displayed to you. I can accept advertisements, but I do not accept such behaviour.
"Native code means porting if you have more than one target, which in itself can be imperfect and can result in bugs."
Yes, but if you choose one of the sensible ways to do this, porting is easy and reliable. I maintained a large-ish software package, and there were something like 3 lines of code with ifdefs around them to handle platform differences. The platforms were Linux, Windows and MacOSX.
"I fail to see how this argument is any different from JS. That can suck your battery too."
It is in no way different from JS, but that's my point. WebAssembly is essentially like JS, except you don't even get the (potentially obfuscated) source code.
If we want to have server-dependent "Apps", we should perhaps ditch any kind of code executing locally and instead define a sort of "terminal". This doesn't need to be based on character terminals, but could instead be a DOM-tree controlled via Web sockets.
... that it's a gigantic security nightmare. Even if your sandbox is somehow "secure", it can still be used to suck your battery empty or mine cryptocurrency without your consent.
Then again, the need for a platform-independent "bytecode" for programs might have existed in the 1990s, but today we have moved on to distributing software in source form. Why turn back the clock to a time when software was distributed as opaque binary files you had to disassemble in order to adapt to your needs?
> Which raises the question of what actually is the native look and feel of Windows these days?
Well, actually that is still the same as in the Windows 9x era. You can see that when all of the modern GUI extensions crash. I think it even reverts to the "System" bitmap font.
Of course if you don't like the look of the GUI elements you can use the OwnerDraw event and draw them yourself.
Take a look at Lazarus, it's a Free (as in speech) alternative to Delphi with all the nasty bits taken out. Software natively compiles on at least Windows, Linux and MacOSX, and since it uses the native GUI toolkits it'll always look and feel native. For all of those platforms you get a fairly large (10 Megabytes) static binary you can just drop onto the system and run.
For example, the search function in Outlook can only search for whole words. So with compound nouns (as common in German), "Ticket" won't match "Carrierticket". Of course, even in the age of multi-megabyte RAM in PCs, a full-text search of the subject line still seems to be too hard for Microsoft.
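The difference is trivial to demonstrate; a sketch with made-up subject lines:

```python
# Whole-word matching misses German compound nouns; a plain substring
# search finds them. The subjects here are invented examples.
subjects = ["Carrierticket 4711", "Meeting notes", "Ticketsystem down"]

word_hits = [s for s in subjects if "ticket" in s.lower().split()]
substring_hits = [s for s in subjects if "ticket" in s.lower()]

print(word_hits)       # [] - whole-word search finds nothing
print(substring_hits)  # both compound-noun subjects match
```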
Sonos always was more of a lifestyle product aimed at people with more money than brains. I mean it was always obvious that those things were bound to happen as everything relied on proprietary and closed standards.
Normal audio equipment, on the other hand, is designed to rely on open and simple standards. The analogue line-in that virtually every device has will still work in 50 years, just as it did 50 years ago. Bluetooth and HDMI, while probably not around in 50 years, are widely supported by many different manufacturers.
and that's to boost sales, as shortages mean that it will become harder to source something. Considering this, it also makes sense to lower the prices, this also boosts sales.
To me it looks as if Intel wants to reduce its stockpiles of older processors.
It's just that more and more little toy projects get shared on github. There the idea is that someone wrote some code which isn't worth thinking about copyright, so they simply slap on some BSD license as they don't care what is being done with that code.
It's more a sign of a rise of casual code sharing on github than a fight against copyleft.
After all the Trusted Computing Initiative was not about protecting user data but about protecting business models. If it was about making computers safer they would lobby for the elimination of scripting languages in browsers, and the elimination of "Service Mode" features and "security enclaves".
"Well apparently they're willing to trade sales figures for style so I guess we'll see how that pans out.
No great surprise if they manage to sell scant few and end up scrapping the line within 18 months."
Nah, that's not how sales works. If they sell some, they'll proclaim it an ingenious idea, but say that since the market is shrinking it couldn't sell as well as previous models. If they sell very few, they will also blame it on the market.
It's not there to store data or numbers or anything like that, that should be obvious to anyone using it.
It would be nice if Microsoft would give any indication of what Excel is supposed to be. If I were allowed to put on my tinfoil hat, I'd say they won't, because that would mean they would have to actually respond to bug reports.
Cisco has had abysmally bad security for decades now... on different product lines... with completely new implementations. I mean, just by chance they should have found some competent programmers when starting a new project.
I mean even Microsoft managed to clean up their mess for a while after XP.
The de-allocation method wouldn't be a problem as such... if they didn't have weird stuff like implicit object copies, which can, if you aren't super careful, cause 2 "copies" of an object to have their members point to the same memory locations, paving the way to use-after-free and double-free.
Pascal, for example, doesn't have that, the destructive assignment operator := does not copy objects, at best it copies references to objects.
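Python has no manual deallocation, but its shallow copies show the same aliasing that makes the C++ case dangerous; a sketch (the Message class is made up):

```python
import copy

# A Python analogue of C++'s implicit member-wise copy: a shallow copy
# duplicates the object, but both copies still share the same buffer.
class Message:
    def __init__(self, data):
        self.data = data  # stands in for a pointer member in C++

a = Message([1, 2, 3])
b = copy.copy(a)   # member-wise copy, like a default C++ copy constructor

b.data.append(4)
print(a.data)      # [1, 2, 3, 4] - both "copies" share one buffer
# In C++ both destructors would now free that shared buffer: double-free.
```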
... that people who use it think it's a high-level language when it is in fact several layers of complexity piled on top of plain C. Since C++ is so insanely complex, members of a team will spend most of their time learning new features. The result is typical "beginner" code, where someone tried to use a feature for the first time without fully understanding it. This distracts people from writing good code, as the amount of "thinking" they can spend on a given amount of code is constant.
C has the same intrinsic problems as C++, but it's much simpler and much closer to assembler. So if you know some assembler, you can spend more of your "thinking" on making sure your code is correct.
Of course, what we need today is a language with about the complexity of C, but with type checking and without the problem of making it too easy to include libraries.
Slow or absent changes mean that your code will work for a long time. Any change in the language can mean that your code breaks, resulting in more of a motive to replace it. That's why the slow development of COBOL made it indispensable for many banks. If Java were changing every couple of years, no bank would seriously consider it.
I mean Java didn't even include the really sensible feature ideas of J2K yet.
But Intel and AMD have lots of patents regarding x64. Any commercial emulator would need to license those, and neither company seems open to licensing its patents.
What they could do in theory is to emulate a 15 year old version of x86. That might even work as most of the software that depends on x86 and Windows is old enough to not need anything newer. In any case it's better than nothing.
I mean, you don't know for sure if you can trust the DNS resolver of your ISP, or even your own DNS resolver. There is always a chance that someone broke into your house and bugged it.
With DoH over Cloudflare you can finally be sure that the NSA will store and archive each one of your DNS requests. After all that's the main point about using it.
Tory-rebels have voted for the Anti-Anti-No-Deal-Deal-law which would back-stop-deal the the no-no deal in case of Backstop-Hard-Border-Brexit-Prorogation.
Apparently 210 voted Aye, 145 Nay, 22 Yey and 4 Huibuh.
Can anybody explain this in normal words?
Well those "digital teletext" services never really got off the ground. In the areas where they are used, they are mostly used for displaying additional advertisements.
However DVB allows for not only Teletext, but arbitrary VBI data. This arbitrary data is encoded as 720 monochrome samples per line. I don't know if any device supports this, but in theory it could work.
Teletext is actually alive and well in most countries where it ever had any kind of presence. For example, in Germany virtually every station has some sort of Teletext offering. It's trivial to carry Teletext via DVB, and most receivers will happily support it.
New developments include "Teletwitter" where a social media person at the teletext office monitors Twitter and broadcasts the best tweets via a special subtitle page.
If you write a "Free" (as in speech) piece of software you typically don't have to maintain it for other people. You write it, make it work for you and perhaps for the people you care about, but any "power user" is expected to modify it for themselves. Of course only a fraction of the users will do so, but that should be more than enough to sustain development.
"Open Source" software, on the other hand, takes the idea of delivering source code and combines that with the traditional model of software creation and distribution. You have one entity that writes and maintains the code, but the users are not supposed to really take part in it.
Automatic dependency management tools like npm actually follow more the "Open Source" aspect of it. You have packages which are maintained by a few people and which you use without actually looking at their code. As with any dependency, this brings risks. Since such tools make it so easy to add dozens of dependencies, nobody stops to think about the risks any more. That's how you get tiny libraries doing trivial things that nobody checks for malware.
Biting the hand that feeds IT © 1998–2020