Re: Got a dead Samsung Galaxy A1 here...
Yeah, it should go without saying that if you do get it wet, wait for the micro-USB port to dry out before trying to charge it.
Indeed I guess it's their insurance that will repay Virgin's costs and losses for this, so they won't really have much of an incentive to improve.
I expect the cost of the lost service on the backhaul side will be massive.
And the consumer outages will likely tot up to tens of thousands of quid too.
Pretty poor that a status update couldn't be posted on VM's and Sky's websites though. It's not like this one is their fault.
Huh, were people seriously having issues linking Bamboo and Stash/Bitbucket into a cohesive CI solution?
Or is this some separate cloud-only offering that they're talking about? Move the Bamboo config into a file within the project, and then auto-run that within Bitbucket upon a master branch commit/merge?
The details are a bit vague, might be the writeup.
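If it is the config-in-the-repo idea, I'd expect something along these lines - purely a guess at the shape, given how vague the announcement is; the file name and keys here are my assumptions, modelled on other hosted CI systems:

```yaml
# bitbucket-pipelines.yml (name and structure assumed, not confirmed)
image: maven:3-jdk-8        # container the build runs in

pipelines:
  branches:
    master:                 # trigger on every commit/merge to master
      - step:
          script:
            - mvn clean verify
```

If that's roughly it, then it's the Travis/GitLab model: the CI definition lives and versions with the code, and the server just executes whatever the branch says.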
MongoDB really needs to stop their "install with no authentication enabled" mechanism.
Security seems to be way down their list of priorities. For a database that is often hosted in the cloud, that is abysmal behaviour. Clearly they want to remove barriers to entry.
OTOH they provide free courses online that include how to administer a cluster, so it really is the devs using it that are ultimately to blame.
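For anyone running one of these wide-open installs, locking it down is a couple of lines of server config (create an admin user via the localhost exception first). A minimal sketch:

```yaml
# mongod.conf - reject unauthenticated clients
security:
  authorization: enabled
net:
  bindIp: 127.0.0.1   # and don't listen on the public internet either
```

Two lines. There's really no excuse.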
If they've been caught doing it once, they've done it before. I suspect this may become a big issue for SFR - anyone who had repairs or claims turned down for 'it arrived like this' is now clearly going to get back in contact to reclaim or get a refund on the higher charges incurred, etc.
But yes, two rules here. Don't be a bastard customer (although we only have the idiots' word on this, and bad customer service can turn the nicest customer into a ball of anger) and don't be an idiot. Just do your job. Luckily these two will be having a long search for their next opportunity.
Ah, one possible solution is to use an SPI graphics chip, as the micro:bit features SPI on its edge connector (and I2C).
That will have its own memory as well, negating the video memory issue (16KB isn't a lot; a B&W 320x200 display needs an 8KB framebuffer)...
And I think a lot of hackers will be interested now they see it has SPI.
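Driving a display over SPI from MicroPython is only a few lines; a hedged sketch (the SPI pins are the micro:bit's documented defaults, but the chip-select pin and the display's framing expectations are my assumptions):

```python
# Stream a 320x200 1bpp frame to a display chip that holds the framebuffer
# in its own RAM, one 40-byte row at a time - so the micro:bit's 16KB never
# needs to hold a full frame.
from microbit import spi, pin13, pin14, pin15, pin16

spi.init(baudrate=8000000, bits=8, mode=0,
         sclk=pin13, mosi=pin15, miso=pin14)

row = bytearray(40)            # 320 pixels / 8 pixels per byte
pin16.write_digital(0)         # assumed chip-select, active low
for y in range(200):
    # ... fill `row` with line y of whatever is being rendered ...
    spi.write(row)
pin16.write_digital(1)
```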
I, personally, would have preferred the micro:bit to feature a character display (e.g., 40x2) rather than an LED matrix, but maybe the LED matrix is cheaper.
It should be able to run Elite (assuming Elite only needed 16KB of variable data) - the CPU is a lot more powerful than a 2MHz 6502 - but there's no video output (I'm assuming that audio can be sent over Bluetooth at least), so someone will need to hack up a software controlled video output (not easy, but has been done for PICs, etc, in the past so it should be possible) first.
Generally speaking, if an attacker has access to your physical desks, you've lost already.
So maybe the problem is accidental disclosure of passwords via photos/videos on social media or otherwise... in which case passwords in a notebook should be fine.
However, what appears to be needed is office-wide single sign-on, integrated into the services that all currently require separate passwords.
Let's not mention the placement of passwords and other sensitive information in standard waste paper bins rather than secure disposal units.
I agree that iPlayer is a loophole for the current TV license. Also, you do get quite a lot for the license fee, so it's probably overall worthwhile, although I understand the annoyance of people who never consume BBC content. But is adding TV license complexity the answer? How are they going to check iPlayer use - getting ISPs to provide addresses for IP addresses that accessed it?
How about allowing the BBC to play adverts on iPlayer content?
The user can register their TV Licence if they want to avoid this pain. Or they can pay a fee, say £10 for ten hours of pay-as-you-view.
In addition, alcohol in moderation reduces anxiety, stress, and all those other bad things.
Reducing these things is useful in a typical work environment. Companies should be forcing their employees to drink a couple of pints every lunchtime!
The new limits are such a joke that they will now simply be ignored; the old limits, pulled out of thin air as they were, were at least achievable on the odd week of sobriety.
Anyway, the stats say that 40 units have the same risk as 0 units (there's a J shaped curve, 10-15 units is ideally healthy), so that's my new upper limit (if I'm feeling risk averse that week). Of course, if you are willing to take a little risk, 50, 60, 70 units is fine.
It was designed to move from system to system, draining that system's star and firing remotely over hyperspace to the target systems.
Once the beam leaves hyperspace it presumably still has to travel some distance to the target, hence the trails in the sky.
The six planets are in the same system, some look like moons of the main planet. If planets can be moved by these societies, then it's not infeasible to consider planet/moon relocation + terraforming (or whatever the term would be for a non-terran situation!) has occurred in history too - the habitable zone is quite small. I would imagine that the starkiller attack killed upwards of 100 billion people/beings. And the audience felt a collective meh due to lack of empathy building.
They could have explained better that the star system targeted by the starkiller was the home of the galactic senate (at this stage in star wars history).
I suspect some scenes were cut for the theatrical version to fit it into a reasonable play time. And the planet scenes were a bit jarring. But what is clear is that they weren't even going to try to entertain the possibility of the senate surviving.
Make it more bearable by remembering that Jar Jar became a member of the senate, IIRC.
It's quite clear that mastery of gravity is one of the core skills of these societies. Artificial gravity on ships, managed gravity on smaller worlds, tractor beams, etc. Transporting the sun's contents by 1AU appears to be as simple as a targeted gravity tunnel/beam type thing. Probably a very very big tractor beam unit (size seems to matter in this galaxy!).
And it is either stored in hyperspace or within strong gravitational storage systems, possibly compacted to nearly a black hole in terms of density (they are fitting a star within a small planet).
I think it is possible that the Star Wars universe is simply not advanced in terms of computing power. Computing is what gave us easy replication.
Everything is mostly manual. They've mastered gravity and intergalactic travel, but everything else is pretty damn basic. One possibility is that they inherited the technology and never developed anything themselves - i.e., we are looking at a late medieval developed society with starships and some nifty weapons. They could reverse engineer the gravity and star drives to recreate them at will (simple mechanical engineering - otherwise repairing the systems using a wookie wouldn't really be viable) but not the computing aspects (if any) that these predecessors had. Other technology - tractor beams, force fields, open-air-to-space-docks, etc, I will assume are derivatives of the gravity mastery they have.
Notice most droids are battered and really old. If they were easily copyable, then they would be replaced more often. They're pretty crappy designs, as manufacturing is apparently still at the pressed steel stage. They have a learning ability, some form of brain, but are developed to the level of toddlers in many aspects.
What seems to be the case is that the droids are individually taught. C3P0 was taught, for example. I.e., there is some tech capability in terms of neural network modules and auxiliary analogue/holographic mass storage, but it's not a copyable tech like is typical with computing stuff.
And who told the developers to limit the version numbers that the application could run on, because that was what it was tested on, and the company didn't want to accept liability for people running it on a different version?
LAWYERS.
See here, lawyers caused the problem, and who makes money from these lawsuits? Lawyers. Shoot the lot of them, I say.
The other people to blame are the managers and directors in those software companies who don't support their applications in the future to run on more recent versions of the underlying runtime.
I would assume it applied to consumers, not enterprise users, who should have tools for managing client installs already, and managing server installs for the main useful and compelling use-case of Java - web application servers.
Most consumers probably only had Java for Minecraft (and even that now manages JRE installs for the user), that one BitTorrent client, and, before that, godawful applets and Yahoo! games.
And let's not even talk about Ask Toolbar bullshite.
Java for consumers is a terrible PITA and Oracle simply haven't helped at all.
Indeed we have internal apps that Java Updates still break. Works on Java 8 Update 60, but not Update 65. WTaF?!
I would rather the installer KEPT the previous version as well as the new version, whilst wiping older versions. That would allow a simple "switch to previous version" functionality for these situations.
Instead they wiped the known-working version, and kept the cruddy old versions.
Oddly enough, with the rise of single-page javascript web applications the number of "pages" per website is dropping over time... although the amount of information shown via them continues to rise.
Indeed, the biggest barrier to single-page web apps is that full page reloads make it easy to cycle even more adverts into our eyes. Which means reloading loads of mostly-unchanging content on every page load and, worse, reinitialising a load of javascript.
For example, this website could be a single page webapp (online article viewer) with most top-links being filters, and the stuff on the right being direct links to articles (again, presented by the webapp). The only other pages would be the mostly static stuff linked in the footer.
Simple form testing should have caught this really early. End-to-end tests are not difficult to write with modern testing frameworks. And obviously there should have been a calculation service within the application that could also be unit tested. There are multiple points of failure here.
But I suspect the blame lies with outsourcing the software development to a big IT house for a lot of money, and that IT house putting its cheapest developers onto the problem (often grads lured by slightly-above-average graduate salary rates).
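To illustrate the calculation-service point, a hypothetical sketch (function, names and numbers are all invented, nothing to do with the actual system):

```python
# If the calculation sits behind one small, pure function, a handful of
# unit tests catches a mis-keyed rate or threshold immediately.
def tax_due(gross: float, allowance: float = 11000, rate: float = 0.20) -> float:
    """Toy calculation 'service' - purely illustrative."""
    return max(gross - allowance, 0) * rate

# pytest-style tests
def test_above_allowance():
    assert tax_due(31000) == 4000.0

def test_below_allowance_pays_nothing():
    assert tax_due(9000) == 0
```

Five minutes of work, and exactly the kind of check that should run on every build.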
When you consider that Confluence is one of their weaker products, that should give you a hint as to how the other software performs.
Basically, Bamboo, Stash (BitBucket) and Jira are the leading software options in their field. They win by being locally installable (no need to export your data outside of your network) unlike competitor cloud offerings, and by actually working (there are issues of course, but it's software).
I think there's something about handbags and mobile phones that leads to them breaking.
Men stick 'em in their pockets where they're safe (well, front pockets are safe). Women stick them in the great tardis void of their handbag, so it's not a surprise when other hard things in the bag damage it.
However, I used to see a lot of cracked screens on my commutes; over the last year or two they have become far less common (and often have working screens under the cracked glass), so I presume a combination of more advanced Gorilla Glass, better cases, more flexible LCD assemblies, etc, has improved things.
I have cracked the screen on one phone, ironically in my front pocket at the time, when bending over in a non-health-and-safety-advised-manner to pick up something. That a toddler dropped.
So we need wireless charging in our desks to power our Intel laptops that advertise "all day" battery life anyway?
I don't carry cables with me anyway. I have enough spare micro-usb cables to leave some at work, and the phone lasts a day or two without charging anyway. If I had a laptop I used at work, I'd get work to buy me a home charger as well as the work charger (and ultimately this will be solved with USB Type C charging too).
So unless you need a 100% sealed device or a device that can't accommodate a charger socket (smartwatches), wireless charging is still a gimmick in my mind. On the other hand, at least they might get to the point of having a single standard for power delivery, even if it throws away a lot of power just to do its thing.
Intel has fabs in Israel too - that's tenuous enough a link for a true rabid IS member to target them elsewhere.
The objective of terrorism is to create terror in order to achieve an aim or get publicity for their cause. The IRA were very effective at this, as another commenter has pointed out already. Have no doubt, if the NI peace talks had failed, then London (and the UK) would not be as prosperous as it is now.
You might moan at the economic cost of a day or two of tube strikes a year. But we used to have ten times that disruption in the past due to the actions of the IRA. And we couldn't attract all that international business with that still hanging over the country.
The only upside would be that the stupid Walkie Talkie building would never have been built.
I just hope they do it in a better way than RMI was implemented in Java. Pluggable serialisation mechanisms are a good start, and hopefully the client/server libraries make it very easy to add support into an application.
I wonder what the 'common language' for defining the interface is. We've been there before with XSDs and WSDLs and code generators (I remember using Apache Axis a lot in the distant past) trying to find a basic common ground for complex data types.
And the world ultimately moved onto simple REST controllers and JSON (via a bad SOAPy dream) in all but the most performance requiring applications.
So a small increase in risk of arse cancer if you eat processed meats often...
And a list of dangerous items that has all been grouped into a mere two categories... yeah, good idea. All this does is make the list look stupid. Plutonium, Bacon. Yeah, very similar.
As with all these things, moderation is the key. Don't ban yourself (the regret when it's a false alarm may overwhelm, especially with bacon), but don't overdo it (just in case).
And what's this thing about sausages? British sausages are raw meat and some herbs - where's the processing there?
OpenGL ES 2.0 is a bit behind as well, for a gaming device.
The new Apple TV at least has 2 GB and supports Metal graphics API (and OpenGL ES 3.1 features).
This Roku needs to be cheaper, despite its 4K support, to compete. It sounds like it's not using the latest technology, though, so cheaper it may well be.
Two large, powerful A72 cores - great! This is a good inclusion.
Eight A53s? This seems an odd choice. On the other hand, they're dirt cheap in terms of silicon area, and licensing, so why not, and it might be more sensible than a quad-A57 which is larger and more expensive. They can handle worker threads just great, and by clocking some low, and other high, you cover the perf/W curve a little better.
I'd warrant that a 2xA72+4xA53 would feel pretty much the same as this, ultimately, but six is less than eight, and that wouldn't do.
Yeah, people who depend upon a very specific job for their wages are found to cut corners when given impossible targets by the board?! Never!
No-one on the board ever thought to ask why their engines didn't need urea when every other engine does? Of course they thought about it; they chose not to ask, not to know, so they could later claim ignorance.
Ultimately, we'll hear things about a sick company culture, some heads will roll, some deals will be done (to keep pensions, golden parachutes, etc) in return for leaving, and then things will continue as before.
The only solution appears to be fitting the urea tank (if there is space in the design), and making modifications to connect it up, add the "empty tank" warning light to the dash, etc.
After this, they are going to have to give free refills to all owners.
And because this appears to increase fuel consumption by 5%, then that's a pretty major refund to give to customers.
I am going to go out on a limb and assume that (at least for early designs) the car was designed with the tank in mind, before someone told the engineers to remove it. So adding it might not be too hard - just time, manufacturing, a new catalytic converter, new signalling, a new dash panel ... a couple of grand per car, maybe. At four hours per car, the ~1.2 million affected UK cars come to roughly five million hours of work. If VW has 1,000 UK car techs, that's around 5,000 hours each - well over TWO YEARS at normal working hours - just to do the retrofitting, and that's before the normal work those techs were hired to do. For later designs with no space for the tank, VW has major problems.
Now the UK government has been quite wimpy on this. They said they would rerun the tests, but nothing since. If the cars don't meet even the lame UK requirements, then I suggest that the cars should be taken off the road immediately until they are fixed (NOx is a terrible gas). Let VW provide 1.2 million cars to the people disadvantaged by the move at their own cost.
Remember - any damage done to VW is going to benefit other manufacturers, many of whom have a much bigger presence in the UK than VW. It is in the government's interest to lay down the law hard.
More likely the board at one point demanded that engineering reduce cost and also to make diesel less hassle.
The board also made it clear that they didn't want to know how it was achieved.
Engineering solved the problem, via this cheating.
The board didn't "know" about it, even though the requirement came from them, along with a clear understanding that the solution was probably going to cut corners.
The board approved the software purchase from Bosch. Bosch told them to disable that test mode in production cars. There was a whistleblower. There's simply no way that the board weren't informed enough to warrant an investigation.
They might not have actually "known", but it would have been grossly intentional ignorance on their behalf.
VW had better be hoping that the affected cars have the capability to be modified to incorporate the NOx reducing technology they left out, and then claimed they didn't need.
I.e., refillable Urea tanks (need space), integration into engine/exhaust system (might need new components to replace existing components - let's hope that doesn't require a full engine strip down), mitigation for loss of advertised performance, driver compensation, ... could be thousands per vehicle, before corporate fines (tens of billions), lost revenue from sales, loss of market position, loss of goodwill, and so on...
This will probably cost them $50B or more in the end, $100B isn't far fetched. That doesn't look good on your CV if you were CEO when this plan was put in action.
The micro:bit is based around a 16MHz ARM Cortex-M0 core (+256KB flash memory and 16KB SRAM).
I don't think it can emulate the original BBC Micro (not even the Model A) entirely, but it should be able to emulate the 2MHz 6502 just about.
Interestingly, the micro:bit's USB interface chip has a 48MHz ARM Cortex M0+ core driving the USB stack...
I wonder if we'll see interesting projects using, e.g., https://www.adafruit.com/products/618 or https://www.adafruit.com/products/931 running simple games, etc?
> he said to himself "I could do this"
Interestingly, this line of thought is why we have the ARM processor today:
Acorn people went to the Western Design Center to talk about licensing their advanced 6502-compatible processors, realised the company was a couple of people doing the designs themselves in a garage, then went away and did the same thing themselves from scratch.
£5 board design for case-less operation versus £25 board + case + microSD + all the other stuff.
One is affordable for the scale of deployment envisioned, the other isn't.
This is assuming that the school already has a suitable computer room of course. This is just an IoT thing - use a computer to program a Thing to control something physical.
Hmm.
Sea ice extent: http://iwantsomeproof.com/extimg/sie_annual_polar_graph.png and https://ads.nipr.ac.jp/vishop/data/graph/Sea_Ice_Extent_N_prev_v2_L.png
And regarding ice volume: https://sites.google.com/site/arctischepinguin/home/piomas/grf/piomas-trnd2.png
Lewis is so "unbiased" it's unfunny, and it's actually turning The Register into another disclose.tv or National Enquirer type of rag.
Luckily he is easy to disprove, but he never takes it on board. Basically, a troll. The Register should be above it.
I don't think the author understands the paper.
By 2011, we had already released 531 GtC.
This paper says that by the time we have released 600 to 800 GtC, the West Antarctic Icesheet becomes unstable. By the time it has reached around 1000 GtC, the Wilkes Basin becomes unstable.
These two sheets, once melted, will contribute 3-5m and 3-4m of sea level rise respectively, within a short time period.
We are currently releasing about 10 GtC a year into the atmosphere (accelerating at 2% a year).
We are seven years away from the lower bound for the West Antarctic Sheet to start collapsing (indeed some believe it has begun already), and certainly by 2035 it will be well underway. In ~50 years, the Wilkes Basin will start to collapse unless we get carbon emissions under control. This collapse appears to be the start of something that cannot be stopped due to feedback loops.
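The dates are just arithmetic on the numbers above (holding emissions flat at ~10 GtC/year; the 2% growth only pulls them closer):

```python
released = 531   # GtC emitted by 2011
rate = 10        # GtC per year, held flat

for threshold in (600, 800, 1000):
    print("%d GtC reached in ~%.0f years" % (threshold,
          (threshold - released) / rate))

# 600 GtC reached in ~7 years
# 800 GtC reached in ~27 years
# 1000 GtC reached in ~47 years
```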
These sea level rises are shown in Figure 2 of the paper.
Now these rises take some time to occur as the ice takes time to melt, so rather than ~8 metres melting immediately, we will see a fraction of that (20cm by 2100), but we will be committed to the full amount in due course. Just as hitting ~900 GtC commits us to over 2 degrees temperature rise eventually. http://www.washingtonpost.com/news/energy-environment/wp/2015/08/20/scientists-are-still-trying-to-figure-out-how-fast-we-could-lose-west-antarctica/ also states that the collapse rate is hard to predict/model.
Fun fact: To avoid the 2 degrees maximum warming would require us to drop global CO2 emissions 6% annually from 2020 onwards. Like heck that is going to happen - 8 GtC/year comes from coal, oil and gas burning and that ain't slowing down. 2 GtC comes from other sources.
This paper supports the IPCC scenarios, i.e., from global sources, 1m by 2100 AND steady rises for a long, long time beyond that. The IPCC scenario is widely held to be conservative.
If a manufacturer is willing to stand by their autonomous driving technology, they will be providing free insurance as part of the car sale price.
And these cars have a substantial audit trail for recreating what happened in an accident, so human interpretations and lies will not be an option. It's likely the car's data will be fed into an insurance computer to simulate the scene and assign blame.
The big losers will be taxi drivers. They will not be required. And this is where Uber wins in the future as a transport provider with autonomous cars parked ready to go in the areas statistically likely to have demand in the next hour. Google and Apple will surely also enter this market. As costs plummet for providing taxi service (cheap power, no drivers, centralised online ordering, simpler vehicles), many people who don't drive daily will simply stop bothering to own a car.
I will watch France's taxi drivers go totally batty over this development. Their mortgaged permits will be worthless in ten years' time, and they will be out of work.
IIRC Amstrad sold 7 million PCWs over the years, meaning the Raspberry Pi has quite a long way to go to beat the most popular British computer record...
But yes, the PCW was a solid, value for money machine. The printer integration proved to be a downside, of course, but that was Amstrad through and through - an entire package, for cheap.