Space Force predicted this
China needs the arm so they can cut the solar cells off the American satellites.
Also to steal the treasonous monkey.
So what you are saying is they released software under a license they didn't understand, or they have basically reneged on something they previously released.
You can't blame Stallman and FSF for this; either it is open source (and people are free to make money off the software) or it's not. It's not exactly some strange side-effect or loophole. It is right there, item #6 of the Open Source Definition.
My take is Elasticsearch has seen the $$$ AWS makes and gone... we want some of that action.
Doesn't even need a fake US ID; any country's will do.
"This looks like a legitimate Elbonian drivers license so you're all good to go on our cloud, Amanda Hugankiss"
The real worry is yes, another database to track people with. Also, depending on how small they go with the order, how good is the provider's security for that data?
It's a reasonably reliable sign, especially during the dot-com days.
If they started to muck around with coffee or fruit or whatever, it was time to either burn down your leave or find another contract. It didn't mean things were going bad tomorrow, but you had fair warning.
I live in one of those locations where it is ready for service (passed the RFS date in March) but... well, it's not quite orderable yet.
So the website helpfully says:
There’s still work to do before we connect your premises.
So, using their own statistics, which are more about stretching the truth than reporting what is actually happening, am I one of the 50% that is ready for service or the 50% that is not?
There were two parts that, to me, look like Comcast using weasel words.
First, they say they won't hand over individual data, but that doesn't mean it won't be sent in aggregate, and who knows how small those "buckets" get. It has also been shown many times that with enough aggregate data you can sometimes work out who someone is.
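A toy illustration of the small-bucket problem just described, with made-up data: once an "aggregate" bucket shrinks to a single person, it is an individual record in all but name.

```python
from collections import Counter

# Made-up aggregate rows of (postcode, birth year). Two people share
# one bucket; the third is alone in theirs.
rows = [("90210", "1948"), ("90210", "1990"), ("90210", "1990")]

buckets = Counter(rows)

# Any bucket with fewer than two members identifies an individual.
risky = [bucket for bucket, count in buckets.items() if count < 2]
print(risky)  # [('90210', '1948')] -> exactly one person matches
```

Real de-anonymisation work generalises this idea (k-anonymity), but the failure mode is the same: the smaller the buckets, the less "aggregate" the data really is.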
The opt-in can of course be covered by 5 scrolling pages of "EULA yada yada" style opt-in in the future, so it's all good.
It's Comcast right? We all know how this will end.
You would think, given this is a highly public event with some data privacy controversy floating around it, that they would know someone would try something like this.
There are methods to help against DDoS, usually using some sort of mitigation service provider.
I think the days of "we didn't think it would happen to us" or "we didn't expect it that big/that way" are long gone.
It will be all ok because of those safeguards, you know the ones that:
* Try to take down sites using 127.0.0.1
* Remove Debian CDs because they were CDRs
* Tried to nail someone because they were using bittorrent to get valgrind
It seems in the rush to find the pirates there may be a somewhat liberal interpretation of what a safeguard is. The hint: it's not "some crap we made up so you're all OK about us, which we will ignore".
Still, it's nothing new; I'm sure such things happened on the high seas in Ye Olde Days, where ships that were unknown and/or suspect got taken out.
Obviously this weasel is part of the Cyber Squirrel conspiracy. While they don't have a breakdown of all their agent types and only list successful attacks by squirrels, birds, raccoons, etc., I'm sure it was them.
You can find out what other successes they have had at http://cybersquirrel1.com/
A penguin is a bird, right? (277 successful missions so far)
You do recall correctly. People may want to rewrite history, but Qt, around the time Gnome and Gtk started, was quite hostile to open source. It was that typical "we'll call it open source but you play by our rules" attitude.
Competition from Gtk definitely put some pressure (though it was not the only reason) to open up Qt; it's all ancient history now, but that doesn't mean it didn't happen.
Yeah, 25 years seems about right. I was an employee of Dick Smith back then. Half of us had electronics interests and you could see things changing. While there was a full complement of electronic components, there was a temptation to move into consumer electronics, because that's where the cash was (telephone answering machines and "my kid's first computer").
Move forward a few years and no one had any idea, and electronic components were those under-stocked, annoying things in the corner nobody cared about. I stopped going and went to places like JB or online instead.
Strangely enough, Jaycar hasn't changed terribly much and is still going.
I suspect the admin had the best of intentions at the time. There was a time when email was newish, and he probably thought he was helping people out by fixing typoed addresses. I doubt he was thinking it would be a problem to catch work-related emails and send them on their way.
Me? I'd nuke it and then consider whether I want a catch-all anymore. Maybe just check the mail logs and add some aliases for the common problems. It's easy to be the armchair general with hindsight, though.
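A hypothetical sketch of that "check the logs, add some aliases" approach: take the rejected recipient names from the mail logs (all names here are made up) and suggest an alias for anything close to a real mailbox.

```python
import difflib
from collections import Counter

# Made-up data: real mailboxes on the server, and recipient names the
# log shows being rejected (typos, junk, etc.).
real_boxes = ["alice", "bob", "carol"]
rejected = ["alcie", "alcie", "bobb", "zzz"]

# For each rejected name (most frequent first), propose the closest
# real mailbox as an alias target; ignore anything too dissimilar.
suggestions = {}
for name, _count in Counter(rejected).most_common():
    match = difflib.get_close_matches(name, real_boxes, n=1, cutoff=0.7)
    if match:
        suggestions[name] = match[0]

print(suggestions)  # {'alcie': 'alice', 'bobb': 'bob'}
```

The output maps each common typo to a candidate alias; "zzz" is dropped because it resembles no real mailbox, which is exactly the sort of mail a catch-all would otherwise swallow.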
Not really hard to send stuff from 895M addresses; you can build programs that send from just over 4 billion addresses. Now, if they were sending from more than 5 billion addresses using IPv4, then I'd be impressed.
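The arithmetic behind the quip: an IPv4 source address is 32 bits, so a sender forging sources can draw from the whole 2**32 space.

```python
# IPv4 addresses are 32-bit, giving "just over 4 billion" possibilities.
ipv4_space = 2 ** 32
print(ipv4_space)  # 4294967296

# 895 million distinct sources is only about a fifth of what a spoofing
# program could trivially use.
print(round(895_000_000 / ipv4_space, 2))  # 0.21
```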
I'm surprised source IP filtering is still not in place (and yes, I'm quite aware of some of its pitfalls). Spoofed source addresses make no sense for consumer-type lines, or for the vast majority of commercial ones either.
You do realise that in Australia there are essentially voting machines now? All the bits of paper get counted and then the numbers are sent to a central site and put into a computer, which then does things like send it to the media, update the website and ultimately give the results.
Sure, for simple cases you could pick up fraud, e.g. voting booth A in electorate B reports 75% for Party C, but the scrutineers with their sampling might see it at only 25%, so it looks sus. More subtle changes are harder, but for the lower house it's the edge cases that get more checks.
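A hypothetical version of that scrutineer comparison, with illustrative numbers (this is not AEC procedure): a booth reports 75% for Party C, but a sample of 100 ballots shows only 25 for them. A simple z-score shows how far outside chance that is.

```python
import math

def sample_z(reported_share, sample_hits, sample_size):
    """How many standard errors the sample sits from the reported share."""
    se = math.sqrt(reported_share * (1 - reported_share) / sample_size)
    return (sample_hits / sample_size - reported_share) / se

# Booth claims 75%; scrutineer sample of 100 ballots found only 25.
z = sample_z(0.75, 25, 100)
print(round(z, 1))  # -11.5: wildly inconsistent with the reported count
```

Anything with |z| that large would never arise from sampling noise, which is why the crude cases are easy to flag; a fiddle of a few percent would produce a small z and slip through, matching the point about subtle changes being harder.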
For the senate (and the story was about senate voting), good luck with that! In theory there is a 1:1 relationship between the number of bits of paper seen and the numbers that go into the computer, but after that it gets hard real quick, especially when you get to the later preferences once the usual suspects have their quotas.
That's not to say I think AEC is fiddling the books, quite the opposite. I'm just pointing out there have been computers involved for quite some time.
The bigger problem is disenfranchising the public from senate voting, because it's almost impossible for normal humans to vote how they want in the senate. Not really an IT problem, though voting machines might help with the "tablecloth"; a change to how the senate is elected would certainly help more.
I was never sure, but I assume this is the retail provider, not the wholesale. For example, I use the iinet/internode/tpgi borg as my provider, which is the retail side, but the DSLAM is Telstra's; would that make my crappy internet a black mark against i/i/t or Telstra?
If it is based on retail, then it really is that Telstra provides crummy internet. I already know the internet is bad outside metro areas using Telstra wholesale, but then, who else would you use?
It doesn't have to be defamatory or wrong, it just needs to be old or no longer the current situation. The classic example: someone has gone bankrupt, not paid his creditors, etc., and it's reported in the paper. Fast forward a few years: he is no longer bankrupt and the debts are gone, but you search for his name and the first few hits are those old reports.
The reports are true, just old.
The problem with this sort of law is what is old and what is not relevant? If I am a politician and have done some shady stuff a few years ago, should that data be "forgotten"? What about a hotel with bad reviews?
Also, if I don't like all the other Flat Phillips and want all the hits to be about me, why not just send in a report for all those other websites so I get the first hit in a search?
Actually, the 96 cyber-attack figure sounds good at first, but depending on what it counts it could be meaningless.
You'd expect someone such as Arbor or another DDoS mitigation company to have detected far more than 96. One security vendor (yes, I know they have an incentive to inflate the number) is saying there were 25,000 attacks today.
Even if they did discover 96 attacks a day, I don't think a roughly 0.4% detection rate is impressive enough to justify having my privacy routinely invaded.
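The percentage above, worked through: 96 detected attacks against the vendor's claimed 25,000 per day.

```python
# 96 detections out of a claimed 25,000 daily attacks.
detected, claimed = 96, 25_000
rate = round(detected / claimed * 100, 2)
print(rate)  # 0.38 -> roughly 0.4%
```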
During the drafting of the Debian Free Software Guidelines (DFSG) there was a lot of heated discussion around Fields of Endeavour. People were a little uncomfortable with Debian being used on... certain things. The problem was those "certain things" varied from person to person. For some it was genetic research, for others the military, and there even used to be licenses prohibiting software use for CB radio (yes, this last one actually existed).
In the end, there seemed to be no sensible way of a) working out and agreeing on what was universally the "bad thing", and b) limiting it in a way that could go into a license or the DFSG. Debian now has item #6 as a result.
You're missing what he is talking about.
He is talking about the proposed widespread data retention scheme that may get introduced in Australia. That scheme would keep two years of data on anyone using the internet, up to a point and with exceptions.
To get around that specific scheme, just have a Big Mac, or perhaps a Frappe, hook in to the wifi and use something like Gmail. The spooks will know that someone in the Maccas accessed Gmail, but not who they were emailing.
Not exactly Mission Impossible stuff. Meanwhile, everyone else using an internet connection will have their data logged for 2 years, all ready for the movie companies or hackers to gain access to.
For those that don't understand metadata, EFF has a pretty good page about it at:
I saw this one today:
A secure connection cannot be established because this site uses an unsupported protocol.
Error code: ERR_SSL_VERSION_OR_CIPHER_MISMATCH
I think it means the website is using an old version of SSL, possibly SSLv3; maybe.
Those sorts of error messages bug me; you KNOW what is wrong, Mr Chrome, but you give me a message with OR in it. Firefox was a little better with:
Cannot communicate securely with peer: no common encryption algorithm(s). (Error code: ssl_error_no_cypher_overlap)
And IE? Well IE 8 just worked fine with no error message at all.
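A sketch of what that Chrome error usually means: modern clients ship with SSLv3 disabled, so an SSLv3-only server and the browser have no protocol version (and hence no cipher) in common. Python's `ssl` defaults mirror the browsers here, which makes the policy easy to inspect.

```python
import ssl

# Build a client context with the library's secure defaults, as a
# browser effectively does.
ctx = ssl.create_default_context()

# SSLv3 is switched off outright; a server offering only SSLv3 fails
# the handshake before any cipher can be negotiated, which is exactly
# the "version OR cipher mismatch" situation Chrome reports.
print(bool(ctx.options & ssl.OP_NO_SSLv3))  # True
```

IE 8 "just working" means it was still willing to fall back to the old protocol, which is convenient and exactly the behaviour the POODLE-era browsers removed.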
Why would they retain the IP address for billing? They don't need it.
"User 1234567 downloaded 15 MB at time X" versus "User 1234567 with IP address 127.0.0.1 downloaded 15 MB at time X" doesn't give the carrier any more information. The ones I've seen generally try to aggregate the data as soon as they can, for storage reasons. It costs 1/12th the price to store hourly data usage versus 5-minute, and for a billing dispute the two are pretty much identical. So yes, it comes off the actual production systems in short intervals, but only until it's "rolled up".
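A hypothetical sketch of that roll-up: 5-minute usage records of (user id, timestamp, bytes) collapsed into hourly totals. Twelve 5-minute rows become one hourly row, hence the 1/12th storage cost.

```python
from collections import defaultdict

def roll_up_hourly(records):
    """Collapse (user_id, epoch_seconds, nbytes) rows into hourly totals."""
    hourly = defaultdict(int)
    for user_id, ts, nbytes in records:
        # Truncate the timestamp to the start of its hour.
        hourly[(user_id, ts - ts % 3600)] += nbytes
    return dict(hourly)

# Twelve made-up 5-minute samples within the same hour for one user.
five_min = [(1234567, 36_000 + 300 * i, 1_000_000) for i in range(12)]
print(roll_up_hourly(five_min))  # {(1234567, 36000): 12000000}
```

Note what's gone after the roll-up: the fine-grained timing, and anything (like a source IP) that wasn't a key or a counter, which is the point being made about billing systems not needing it.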
Admittedly it's been a while, but it would mean it's a backward step.
There is also the required level of accuracy. The level for operations stuff (think MRTG etc.) is pretty low. The level for billing is much higher but still leaves some leeway. The level of accuracy required to say user 1234567 is a terrorist/pedo/pick-your-boogieman is higher still. Making sure something is accurate (whatever that means) costs money.
This family of functions is obsolete, and anyone needing this sort of feature should be using the more modern (and IPv6-capable) ones instead. The fact that exim is the default for some systems and is remotely vulnerable is a bit of a worry, but exim's default setup is to listen on localhost only. That moves it from a remotely exploitable bug to a privilege escalation one (if it's the default setup).
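The contrast being drawn, shown in Python terms (the stdlib wraps the same libc interfaces): `gethostbyname()` is the old IPv4-only lookup, while `getaddrinfo()` is the modern, IPv6-capable replacement.

```python
import socket

# Old interface: returns a single IPv4 address string only.
print(socket.gethostbyname("localhost"))  # e.g. '127.0.0.1'

# Modern interface: yields (family, type, proto, canonname, sockaddr)
# tuples and can return both AF_INET and AF_INET6 results.
for family, *_rest, sockaddr in socket.getaddrinfo("localhost", None):
    print(family, sockaddr)
```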
Still, it should get fixed if you have vulnerable versions. Debian Jessie and Sid aren't, so no need to update for me.
I wouldn't say many, if it has gone to court. The defence would argue that the material cannot be used as it was not obtained correctly, and that would have been picked up by some news outlet, even if only to compare it to this case.
Of course if it was one of those US courts that doesn't care about procedural fairness, such as some of the military ones, then who knows what has gone through them.
This is really all about Microsoft avoiding the situation where all of its non-US competitors would spout some (if the ruling is upheld, actually valid) FUD about any US-based cloud service and how the US government can have a sneaky peek whenever it feels like it.
The way it works now, if ANY CA's keys are compromised then any certificate can be faked. Leaving aside browser caching (remembering, if you like) of keys it has seen before, if this CA spooks you and you say you are not going to use it, that won't make your website any safer, as there is no "hard link" between a specific server certificate and the CA one. It would be lovely to be able to independently say my server uses a certificate from this specific CA, so reject signed certs from anyone else, but that doesn't exist.
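A hypothetical client-side sketch of what such a "hard link" could look like: the client remembers a SHA-256 fingerprint of the server certificate and refuses anything else, regardless of which CA signed it. The certificate bytes here are placeholders; in real use they would come from `SSLSocket.getpeercert(binary_form=True)`.

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(der_cert: bytes, expected_fp: str) -> bool:
    """Accept the peer only if its certificate matches the stored pin."""
    return fingerprint(der_cert) == expected_fp

# First contact: record the fingerprint of the (placeholder) cert bytes.
pinned = fingerprint(b"example-cert-bytes")

# Later connections: the same cert passes, anything else is rejected,
# even if a compromised CA has signed it.
print(check_pin(b"example-cert-bytes", pinned))   # True
print(check_pin(b"tampered-cert-bytes", pinned))  # False
```

The obvious downside, and part of why browsers never made this universal, is key rotation: legitimately re-issuing the server certificate breaks every stored pin.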
Someone else said they'd be worried about the daemon having some evil mode. It looks like it will be open source (there is some code on GitHub), which should at least remove any intentional nasties.
Maybe not tarred at first. Perhaps a quick peek through 2 years of web browsing history to find something that at the very least sounds dodgy (visiting a website that shares the same IP address as weloveisis.com would do it).
Then they've found the tar; a quiet word to a tame press and the job is done.