* Posts by ATeal

217 posts • joined 20 Dec 2013


National Lottery Sentry MBA hacker given nine months in jail after swiping just £5


Re: This seems out of proportion to the offense

Yeah I was just trying to think of a way to explain that the £230k or whatever they spent bringing everyone back wasn't ... his fault? And how dubious that is for securing anything: "I'm only paid to write code that assumes 'normal' input" is the next step?

If they couldn't stop brute force IP attempts that's a really really bad sign.

"blaming the rain for the leak in the roof" really is apt.

(Why am I on a moderation queue?)

Also I thought it was "Camelot lotteries LLC" or something? I don't gamble outside of the bedroom (also: charity that raises money via gambling....?)

Snakes on a wane: Python 2 development is finally frozen in time, version 3 slithers on


How spiteful are they being?

A negative reading suggests they'll refuse to point to, mention, or otherwise acknowledge any project which chooses to carry it on (like Debian, hopefully; we owe those guys a lot for stuff like this)

Will they actually shun it so badly? I can't even see it taking that much time unless some absolute clanger is found.

Amusingly this doesn't even affect me: if there were a new Python 2.7 version, it's not like I'd ever see it updated anywhere I use it now.

Anyway.... anyone got any information? I assumed nothing would happen (you know like ipv6 day)

UK's Virgin Media celebrates the end of 2019 with a good, old fashioned TITSUP*


Re: I've got a better picture:

Damn, to be fair to Virgin, that's an okay reason for service to go down.

I imagine London has lots of traffic going into it, and that's going to come down to a few big pipes at some level (like the actual "backbone" of the internet). I imagine if one of the bigger pipes went off, the others would get clogged; having alternate routes and having alternatives that can take the load are different things!

The angry tweet in the article is a bit unfair IMO - because damn look at that. This comes under "shit happens"

I should add, I am very strongly in the "small areas should be okay" camp - that is, if a street loses a link it can go via the next street's link, kinda thing, and the network should not fail if a few small areas have to go different routes. But if that line carries even 1/8th of the total that /could/ be going through it (and the article says thousands) - then yeah, that's "shit happens", not stupidity, IM(H)O.

Attention! Very important science: Tapping a can of fizzy beer does... absolutely nothing


This is crap

but not even Ig Nobel-style crap, because it was so crap at what it did. It'd be Ig Nobel-type crap if it did a decent job.

What a waste. Also 2 minutes shaking?

Rather than theorising on the sides-of-the-inside-of-the-can thing, could they not look at something in a clear container? If only glass bottles were a thing... although plastic is also a smooth surface.

With a tiny amount of time musing on it:

1) If you shake something for 2 minutes and then open it can anything stop that? We're talking drops or mild agitation usually - this is absurd.

2) Isn't blinding it overkill? We /know/ if shaken they do this, I'd get one or two controls, but ~ 250 of them? It seems absurd to randomly treat them like that but not consider the opening technique either.

Also you could probably do better making a simple model of pressure against "can turgidness" (finally a use for that word!). If you squeeze a container, it depresses slightly; this you could measure and use for internal pressure on cans and plastic (but doubtful with glass unless VERY sensitive). I'm thinking like: get a flat metal bar, glue a small mirror onto it, clamp a laser pointer with the button on, and work the angle backwards from that (the laser goes near enough dead straight, and the mirror means you can "amplify" the deflection due to depression of the can).

I'm thinking a simple vice, a metal beam fixed at one end with a weight on another, a can mounted in a U shaped bracket and this thing allowed to lean on it.
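For what it's worth, the optical-lever amplification described above is easy to put numbers on (all figures below are made up purely for illustration):

```python
import math

def spot_displacement(tilt_rad: float, wall_distance_m: float) -> float:
    """Displacement of the laser spot when the mirror tilts by tilt_rad.

    A mirror tilted by theta deflects the reflected beam by 2*theta,
    so the spot on a wall at distance L moves by about L * tan(2*theta).
    """
    return wall_distance_m * math.tan(2.0 * tilt_rad)

# A 0.1 degree tilt of the bar (a barely visible flex of the can wall)...
tilt = math.radians(0.1)
# ...moves the spot roughly 7 mm on a wall 2 m away - a ruler can see that.
print(f"{spot_displacement(tilt, 2.0) * 1000:.1f} mm")
```

That factor of 2 from the reflection, plus the long beam path, is the whole "amplify the deflection" trick.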

If they cared enough to do it, why not do it right?

PS: Carlsberg?!?!1?¬`?

Internet Society CEO: Most people don't care about the .org sell-off – and nothing short of a court order will stop it


Re: Domain name market?

Silly, don't assume it'll be a reasonable price! Or that there's any chance of KDE getting it, or that there are not other KDEs that'll fight tooth and nail for it.


Re: Domain name market?

I should perhaps emphasise my point: you /cannot/ easily change domain name, and at some point there is only one .whatever provider, right? Even if people resell through it (as is quite common), that bumps the cost of switching vastly up. If they were to charge $100k per year to a tiny .org that does something useful, switching only happens if the cost of switching is less than the cost of staying - and the lock-in pushes the switching cost up.

Anyone with a .org is at the mercy of the core .org registry (+ any resellers that put extra on that).

The long list was to show "look at all these .orgs that will result in broken links if they change"

They say lightning never strikes twice, but boffins have built an AI to show where it'll come next


Re: What for?

Are you just saying or are you implying the guy didn't know that when he said "being male"? If so someone needs a sense of humour. That someone might be....


Re: If it doesn't strike twice

uncountably infinite.

Also not striking twice in the same place is not some maxim or empirical law. Even going by square metres alone, it's nowhere near out of places to strike; make the squares smaller as needed :)

Morrisons tells top court it's not liable for staffer who nicked payroll data of 100,000 employees


Re: Depends if decent efforts at data security made by Morrisons


"dubious stuff detected, because it happened" - but hey it'll stop ... no wait, it'll only stop big multi-dippers, and if you go AI, have loads of false positives and miss stuff too.

Microsoft has made a Surface slab that mere mortals can dismantle


Surely ARM is the problem?

Is this not Windows RT all over again? Forgive me if this is ignorant of some great leaps I've missed - I don't use this stuff (or Macs - yep, one of those 12 people).

Having said that, I'd rather have x86(-64) emulated on ARM than the other way around because of the register counts - but I still can't see that being very good. Does it go full JIT? I'd love to see some stuff about how they decided what was worth JITting (the algorithm behind that). Also the article mentions Intel /32/ bit - what's the point here? Although there is loads of 32-bit stuff still in use (there's loads of everything still in use!), are pure x86 (not -64) builds still that common for things?

Bad news, developers: Apple Mac App Store tells cross-platform Electron apps to get lost


Like never really

There will always be someone using something. We may not like it (or it may be bad) but that won't stop it.

I feel for the developers here (and love wxWidgets even more), especially as the calls are not even in their stuff (it's in stuff that their actual stuff uses).

The other confusing part here is intent: what is the intention of these being private? Private to whom and relative to what? It probably means "Apple only", but if that is true then no workaround should be possible.

If people DO work around this (by adapting it to whatever they can in the public section) then surely this is AT MOST unstable API stuff?

But whatever....

Cyber-security super-brain Rudy Giuliani forgets password, bricks iPhone, begs Apple Store staff for help


Re: If he wasn't such an obviously stupid man

Also the thing is dated some time in 2017 - not sure what fuckups were before and after though. I'm not going to look it up either.

Fairytale for 2019: GNOME to battle a patent troll in court


Pretty sure that'd be INTENT to patent, surely? Otherwise that'd be a really bad mix of land grab and hell.

Lab books and stuff used to be legal documents, so if you write in your lab book "Today I wrote a for loop with a non-trivial iterate part, I added 2!" or something, then you can use that when someone tries to patent it tomorrow, going "here's the ground work, we were going that way" when you couldn't get a provisional/pending thing.

Not even /they/ are /that/ stupid. If we include such a broad action on the stupidity scale then it'd be a 10 and the previous most stupid thing would be like a 1.5, a 2 tops. This is an extraordinary "not even they're /that/ stupid"


Re: Prior art not that important

I have seen one claiming to (LOSSLESS-ly) compress ANY AND ALL data by at least something - note the impossibility there.

I remember reading about how it wasn't exempt like the other impossibility you mentioned is (perpetual motion) and that being the point of the thing I was reading.



There's something I can't be bothered to write about bijectivity here, as those who know the word know the proof already.


Think about it: if you compressed 2 bits, which could be 00, 01, 10 or 11 into 1 bit (thus there is SOME compression) - then you only have two outcomes, 0 and 1.

The decompresser needs to map that one bit back to the two it came from, but as the inputs to the decompresser are only "0" and "1" - we can only get 2 outputs from it, not 4 - unless it's randomly having a go or something.

Inductively now we have 3 bits, that's 4 outcomes for each 2 of the bits, times 2 with the third bit (so x00, x01, x10, x11 where x is the 3rd bit, we get 8 things), if this is compressed the tiniest amount we can call "compression" by 1 bit, we have 2 bits output. Now we have 8 things mapping to 4, that's okay. we can do that.

The decompresser cannot take those 4 back into 8 distinct things! It can only take it to at most 4 distinct things!

The gist here is that any LOSSLESS "compression" can only compress SOME stuff, it MUST make some other stuff bigger. We do well by hoping that the bigger cases don't happen. Or adding an extra bit (flag in practice) that if the compressed version was bigger is set to 1, indicating "don't bother "decompressing", we stored it as is"

LOSSY compression is fine: look at the 3-bits-to-2-bits case - we discarded some information and can only describe 4 out of the original 8 cases. You can never recover them; you will always be missing at least 4 of the output cases when you decompress your compressed 8 cases.
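If you'd rather count than induct, here's the pigeonhole argument above done by brute force (toy bit-string enumeration, nothing clever):

```python
from itertools import product

def bit_strings(n: int):
    """All bit strings of exactly length n."""
    return ["".join(bits) for bits in product("01", repeat=n)]

n = 3
inputs = bit_strings(n)                                  # 2**3 = 8 inputs
# Every output of a compressor that ALWAYS shrinks must be under n bits:
outputs = [s for k in range(n) for s in bit_strings(k)]  # "", 0, 1, 00..11

print(len(inputs), len(outputs))  # 8 vs 7 - pigeonhole: no injection exists
# Lossless means decompression undoes compression, i.e. the compressor is
# injective - but 8 inputs can't map injectively into 7 outputs, so at
# least two inputs collide on one output, and one of them is unrecoverable.
```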

SpaceX didn't move sat out of impending smash doom because it 'didn't see ESA's messages'


There's a restricted subset of Ada called SPARK that has the hardcore stuff; it requires human help to take it through the theorem proving.

Ada (and Haskell and OCaml) are more about typing to help stop errors. Consider the humble "length of a list": establishing the invariant "length goes up by one on an append" is not trivial, because of wrap-around / undefined overflow / memory allocation for arbitrary precision.

You can boil it down to inductive proofs that never really mention the length explicitly - but as I said, this "isn't fun" to say the least.

It gets even more restrictive/not-fun if you have concurrent things involved (that are not nice and separate).

Another example would be formally verifying kernels. These usually are "plugin" or modular designs: you formally verify the tiny core bit, but the crap that does filesystems and process scheduling, once they can communicate - whoa, it gets nasty quick.

Another area is guaranteeing that threads make progress - I blame lack of this on one of a handful of reasons Intel's transactional memory shit was shit.


Even this guy hasn't read the EULA since - because "no warranty" carries so much weight and people think about whether or not they should be doing this.

There's a "hard" real-time Linux that's separate from the kernel. I forget its name, but even it focuses on 99th percentiles; the terms and requirements have become diluted. It may rack up millions of hours of safe operation but only go through a handful of codepaths.

C'mon guys you should know this. That you don't is (crackpot as this sounds) my point!


There's LOADS of attempts, but in general proving programs is hard. There was an example I was reading on Wikipedia the other day, for random variate generation, that used C++11's thread_local to store a bool so the second call returned the 2nd value of the variate.

So rather than a function returning a struct containing two members, the first call returns the first, the 2nd call returns a stashed value in thread locals, the 3rd calculates the next pair and so forth.

Languages let you do stupid shit like this; even recognising what's an even call and what's an odd call gets very difficult (and thus: can't be done) VERY quickly. This is the main problem with formal verification.

The message here is that formal verification requires either human help / theorem proving in all its nasty generality to MAYBE do it, or extremely limited languages that stop nice things. I'm not saying the above is good. It's an example of stuff you /may/ write that makes the problem difficult - abhorrent, but it exists and came from a reference.
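For the curious, here's the same hidden-state trick sketched in Python (the Wikipedia example is C++; this analogue is my own, with a module-level variable standing in for thread_local):

```python
import math
import random

_spare = None  # hidden module-level state, standing in for C++11 thread_local

def gauss_stashed(rng=random.random):
    """Box-Muller: return one normal deviate per call, stashing its twin.

    Odd-numbered calls do the maths and compute a fresh pair; even-numbered
    calls just return the stashed value. Whether a given call hits the maths
    or the stash depends on GLOBAL call parity - exactly the hidden state
    that makes this style so unpleasant to verify formally.
    """
    global _spare
    if _spare is not None:
        value, _spare = _spare, None
        return value
    u1 = 1.0 - rng()                      # (0, 1]: avoids log(0)
    u2 = rng()
    r = math.sqrt(-2.0 * math.log(u1))
    _spare = r * math.sin(2.0 * math.pi * u2)
    return r * math.cos(2.0 * math.pi * u2)
```

A function that just returned both members of the pair would be trivially checkable; this version's correctness depends on every caller's entire call history.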


Re: So....

Ooof. Damn.


Proper computer science is maths. Formal verification and the computational hierarchy and all that abstract theoretical shit - all ours.

You can tell it's maths because "fuck it, let's not bother with that much pedantry" happens all the time and very little formally verifies.

Want examples?

PS: I am hoping I never need a pacemaker, because the way things are/are-going, that shit is going to hurt when it skips a beat traversing all those page tables to load some part of a Java VM.

One day (unfortunately) that's going to happen....

Hardcore real-time systems on those massive late-60s/early-70s IBM mainframes used to run nuclear power plants; three have transitioned to an emulator running on Windows :/

AWS celebrates Labor Day weekend by roasting customer data in US-East-1 BBQ


Re: Is it luck the snapshot survived?

Note: This does not pertain specifically to EBS and its snapshot system.


Sucks that you'll probably never read this, but: kind of.

It's actually a tree. Say you make your base image. Snapshot it - that's gone from (empty drive) to (base image crap), so it contains ALL the data on the drive plus snapshot-format data describing it. It may be compressed, but you get the idea.

Okay, now you make some changes and snapshot again - that snapshot says "hey I'm based on this other one", and any read requests will hit it first: if the changes affect those reads it returns data, never querying the base image; otherwise it passes the read to its parent (the base image here) - which may return zeros if the read is out of bounds.

This keeps going on. Yes this /could/ use garbage collection or just "fuck it, a new complete snapshot would be good" if you have layers and layers of reads.

However changes made may be in use on other snapshots with their own active drives.

As has been pointed out, they're stored on S3 which is (probably) much more durable and is designed with very different things in mind than backing a file system (in the generic sense)

This system works at the block level, where you (conceptually) ask a drive for block 123456 or say "write this to block 3471233" - the snapshot system need not have any idea how to actually understand WTF is going on. Indeed it might be of an encrypted drive (probably should be too)

The catch there is encrypted shit is hard to compress. The hard core version has no trivial base layer of 0s either. Not sure if they go that far.

It's an interesting problem because these systems are not just backups with deltas: it's common to have a base image, customise it a tiny amount for a bunch of machines, have them running, then patch the base image. Smarter ones are filesystem-aware.

A block-based one may have metadata for a server-specific configuration file in the same block as metadata for a file changed by an update to the base image - they must cry bloody murder here. So typically systems that are not filesystem-aware (most, or at least the default behaviour of most) instead know how to generate the specific images on demand. For example, "image-specifier %id" might be run with the server's ID filled in wherever it sees %id, or something. You patch the base image and then it regenerates snapshots from it.

It's really quite interesting but it does split into a bunch of different problems.
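The layered-read idea a few paragraphs up can be sketched as a toy chain (block size and API invented purely for illustration - nothing to do with EBS's actual format):

```python
class Snapshot:
    """One layer in a snapshot chain; writes overlay the parent's blocks."""

    BLOCK = 4  # toy block size in bytes

    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}               # block number -> data in THIS layer

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def read(self, block_no):
        # Newest layer first; a miss falls through to the parent, and we
        # return zeros once we run off the bottom of the chain.
        if block_no in self.blocks:
            return self.blocks[block_no]
        if self.parent is not None:
            return self.parent.read(block_no)
        return b"\x00" * self.BLOCK    # base image default: zeroed block

base = Snapshot()
base.write(0, b"BOOT")
snap1 = Snapshot(parent=base)          # "hey, I'm based on this other one"
snap1.write(1, b"LOGS")

print(snap1.read(0))  # b'BOOT' - passed through to the base image
print(snap1.read(1))  # b'LOGS' - satisfied by this layer
print(snap1.read(9))  # zeros   - no layer ever wrote it
```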


Is it luck the snapshot survived?

I also got the impression EBS was more durable than that. By "lost power" do they mean some nasty massive over-current that actually buggered the drives? I'm not going to check my notes for a comment (I actually care about durability of file systems - I know, right, crazy) ... I'm pretty sure I ticked EBS off in the "okay and will work" column (not reordering writes past a barrier...). Don't quote me (I don't use it or support anything that does), but:

The system respects flush commands as barriers (fsync-like: later writes can move before earlier ones, but earlier ones can't escape past the flush - this is why applications wait) - so WTF?

By "pretty sure" I mean "assured that it happens" by what I've read and heard, even if what I'd like to see isn't in the docs.
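For what it's worth, the application side of that barrier is the classic flush-then-fsync pattern (a generic POSIX sketch, nothing EBS-specific):

```python
import os
import tempfile

def durable_append(path: str, record: bytes) -> None:
    """Append a record and force it to stable storage before returning.

    flush() moves Python's buffer into the kernel's page cache; fsync()
    is the barrier: nothing written after it may land on the device
    before it, and once it returns the record should survive a power cut.
    """
    with open(path, "ab") as f:
        f.write(record)
        f.flush()                  # userspace buffer -> kernel page cache
        os.fsync(f.fileno())       # kernel page cache -> actual device

path = os.path.join(tempfile.mkdtemp(), "journal.log")
durable_append(path, b"txn-1 committed\n")
with open(path, "rb") as f:
    print(f.read())                # b'txn-1 committed\n'
```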

Also is it just dumb luck the snapshots were not buggered as well?

Microsoft's only gone and published the exFAT spec, now supports popping it in the Linux kernel


Re: Bring compatibility problems to Window, not the other way around


I don't know if it's still going, but I once used that at the end of my Windows days. (Nostalgia about GCSEs here.) I made a filesystem that used Python's tkinter and had a spinning disk and everything - it worked. (Nostalgia about that here.)

Anyway it worked. This was before I was aware of the semantics of filesystems though, fuse does a good job, but either way "Dokan exists".

That's Windows' FUSE. Or was?


Took a while to see a fuse comment

exFAT has a working read/write FUSE driver. It's not like it's slow, and modulo a few dark corners FUSE does very well at keeping filesystem semantics.

I'm not quite sure what the point would be in another implementation (or a large copy/paste of bits). It's not a dire situation where it's needed.

Any answers?

(I mean yeah if they can painlessly sure, but why bother? For its own sake?)

Alibaba sketches world's 'fastest' 'open-source' RISC-V processor yet: 16 cores, 64-bit, 2.5GHz, 12nm, out-of-order exec


RE: 50 instructions

It's REALLY hard to count instructions. For example, NOP and "shift left by zero" are the same on one arch (MIPS?). Also look at MOV/move instructions: are they just moves, or loads and stores too? What about MOV (x86 now, -64 too), which can load/store from a calculated address for small values? (It shares that address calculation with the LEA instruction for most common cases.)

Really.... ultimately, don't you think that each instruction should count as one? So if we have an instruction giving 3 registers, (say A+B-->C) - should that count as R^3 instructions where R is the number of (eligible) registers? Say we have 16 general purpose, that's 4096 instructions right there, okay now suppose we support 4 sizes, just shy of 16.4k instructions now - or is this just 1?

So I can well-believe it's 50, which is actually "5 instructions" in some sense with some parameters for a few of them which allow us to consider it as 50 overall.

So how many things can RISC-V do really? I haven't checked but I'd expect it to have 32 registers (integer and FP - so at least 64) at least, - okay so 131k instructions or 1?

Oh, I just thought of a good example: is LEA separate from a MOV (in this case a memory-using MOV, not the reg-reg type), or is it the same instruction with a bit set to indicate "store the address, not the data itself"?

I'd prefer a table of what they say they added TBH. So I don't doubt there are 50 in some sense.
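To put numbers on the combinatorics above (register counts and operand sizes are the guesses from this comment, not anything from the article):

```python
def encodings(opcodes: int, registers: int, operand_slots: int, sizes: int) -> int:
    """Distinct encodings one 'instruction' family expands to:
    opcodes * R^operands * sizes."""
    return opcodes * (registers ** operand_slots) * sizes

# One 3-operand op (A + B -> C), 16 GP registers, 4 operand sizes:
print(encodings(1, 16, 3, 4))   # 16384 - the "just shy of 16.4k"

# RISC-V-ish guess: 32 integer registers, same shape:
print(encodings(1, 32, 3, 4))   # 131072 - the "131k instructions or 1?"
```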

Take the bus... to get some new cables: Raspberry Pi 4s are a bit picky about USB-Cs


Re: Need to wait for

People missed the reference, crap cables don't work with USB3.0 properly and thus "would work" - see my comment above.


Re: I at least partially blame USB C

I apologise for my lack of punctuation and lack of any sort of proof-reading.


I at least partially blame USB C

This has probably been said, but not as fast as I thought. In USB-C systems a cable is not just a cable. You can screw it up now. Remember a few years ago when there was that Google guy who made a page reviewing cables, because they could actually be dangerous and damage devices?

I don't like this. Not only have we lost backwards compatibility (at least it being so easy), there are now /so many/ variables in play: what can the cable handle? What can that device output? I've been avoiding it and I'm starting to think I won't be able to much longer.

Forgive my lack of a strong argument - memories I don't want to bring back, and also I GTG now. But was it worth it to switch? Okay, so the weird flip-the-thing-3-times dance won't happen any more.

I've not worked directly with a 3.0-or-above device on the software side (I've used libusb and do have some good notes on it, but that was, oof, a good 7 years ago now), nor read much about it. Can someone answer how QoS is maintained? I know of some (dead) protocols like FireWire, before the consortium ballsed it up, where devices got timeslots basically. Your mouse would always get a chance to send stuff irrespective of what the rest of the traffic was, because it was guaranteed some regular slots. With all I've heard (but again, not looked into) about the quest to do everything over USB-C, it'd need this, surely?

Microsoft doles out PowerShell 7 preview. It works. People like it. We can't find a reason to be sarcastic about it


Re: No sarcasm you say

Addendum: I was just reading below and I can see the comments on "my team" are not very good - I'm not with those guys! Take this at face value.


Like the emperor in Star Wars, "I shall become more powerful than you can imagine" or something? It's very hard to measure this (beyond the 4 levels of computational power ending with "Turing machine"), and I'm not sure "oh yeah, in this language you can just type the letter z and I made it do that exact thing for you" is good - as that'd make whatever the fuck it is "more powerful", right?

However I'm very sceptical that PowerShell offers something 50 years of shell developers looked at and consciously decided "you know what, we don't need", rather than the status quo just being "we do it how we've always done it" (look at pipes, albeit an old example).

Having said that, I don't really care. I hear that it's got a notion of types so you can stick JSON between things, and I've not bothered to look it up to see if this is true. It didn't stop me shaking my head and sighing, thinking "when will they learn" to myself. I could write (so trivially that it's not worth checking for existing work first, considering only time and effort) JSON stuff to extract, append, etc. (cf. XML with XPath) - I could also write that parallel-for thing mentioned as a program which takes as arguments what to run, without touching the shell, just as it is possible (modulo low-hanging fruit for error messages and general speed improvements) to do with arithmetic.

Seriously it'd be easy. Easier than faffing around with job control ;)

I think we should measure power as "A is at least as powerful as B <--> for all things B can do, A can do them too" (said differently, B is no more powerful than A) - without invoking Turing! I could work in XML if I wanted to. I could not use anything on the normal PATH and use some other stuff instead....

I've admitted I'm not fit to compare the two, but is there even a point to trying? What I've described is (mostly? Dare I say all?) vanilla POSIX shell stuff!

That's a hell of Huawei to run a business, Chinese giant scolds FedEx after internal files routed via America


OMG some actual evidence!

I don't like the hysteria that seems to surround Huwiiu (nor the Kasperskydoviksky lot) because of the total lack of evidence; so it's nice to see some actual evidence (presumably undisputed).

Before drawing any conclusions, is there any legit reason for this? Like suppose it was "next day delivery" - it may go a really silly route distance-wise, but make it time-wise? Is there some really big hub at their HQ where planes come in from all over the place, sorting happens, then things are sent all over the place?

Having said that, I wouldn't put it past either of them (US/China - and large US companies; for example the relatively small RSA did some bad stuff) to "do bad things", as both of them /could/ (and one, for sure, in recent history, has - like, a lot), so maybe I should take a moment to separate out my distaste first!

Either way "us lot" are fucked though.

NPM today stands for Now Paging Microsoft: GitHub just launched its own software registry


Anyone else worrying about too much power?

I felt this pre Microsoft buying it BTW, so no EEE links please!

A lot of projects are switching from their own hosted thing as the main repo (with mirrors about) to using GitHub as /the/ central repo of truth - ignoring the irony of "it's decentralised" supposedly being a big selling point of git, and the lack of decentralised actual users. I can't be the only one worrying that GitHub, this free (as in cost) thing that is basically being used as a CDN for some projects, might not be around forever, or might not be trustworthy.

Furthermore a lot of projects with actual and useful wikis (as in mediawiki, or something proper and worthy of being called it) are now using GitHub's "wiki" - which really isn't fit for the name. It's a step back.

To my knowledge, you cannot easily view past states of a repository on github, IIRC you can scroll down an infinite-scroll page of the commit log that way, but for a busy repo finding what happened a few years ago is a NIGHTMARE that I gave up on doing. There's no easy way to navigate temporally.

It does worry me. I must confess I don't use Git for any projects (I can get git repos and update them; that's more or less it) - I work mainly with centralised ones (version numbers <3). But when I update, because I trust the committers, I don't diff the update or really read the log. Does anything stop GitHub from slipping in their own changes (if they wanted to)? Obviously they could make the website not show that step, but would any others be likely to notice?

Although not so much of an issue now, I was also worried about GitHub dying, at least with git (modulo large files?) every client has the complete history so if it did go down quick copies would be plentiful. So many scripts and project pages point to it now, some are even redirecting to a github.io subdomain.

I digress. The problem with my worries is that eventually something will happen (nothing is permanent blah blah blah) - but I make no comment about when anything might happen. "GitHub is forever" of course cannot be. Maybe it's just my general resistance to change.... I dunno.

Anyone have any comments? I'd love to hear from someone who takes the stance that projects are not migrating fast enough, why are you okay with it?

Hi! It looks like you're working on a marketing strategy for a product nowhere near release! Would you like help?


Re: Cost centers

I really do hate the "regurgitate everything" part of that. "They" seem to think that the refactoring book (written by the guy who did enterprise design patterns, Fowler?), for example, is like some sort of universal truth, along with other rules of thumb and names/concepts to help us reason about what we do.

It takes a lot of experience to realise (the implications of realising) that all this stuff is man made, not some natural or canonical (unless some hipster has gone full lambda calculus) thing.

If this was ever to be realised I suspect the dev-ops tab would vanish pretty quick ;)

Microsoft slaps the Edge name on SQL, unveils the HoloLens 2 Development Edition


Re: Differential Linguistics - AI parsing

Is this that "I have done computer science III" kid I was warned about?

Wannacry-slayer Marcus Hutchins pleads guilty to two counts of banking malware creation


Re: So now he has admitted to creating nasty malware.

man strings

man grep

IBM Watson Health cuts back Drug Discovery 'artificial intelligence' after lackluster sales


RE: No actually AI is...

I'm getting bored of bumbling around trying to sell people on functional analysis like some sort of deranged Jehovah's Witness (but armed with proof).

The problem we are all suffering from is that you can get convergence with pretty much any method. The conditions you can show sufficient for convergence are one of the rare cases where the conditions are actually really, really broad (typically, if you can show "if you have X then it converges", X is some tiny, very specific bit of information; here X is very broad and covers the "NN (structure: activation function of (sum of inputs * weights) + bias)" case) (FTW (LISP)).

Nice to see some philosophy in there with schemas of knowledge; ironic that you are bickering about what AI is, as if philosophy hasn't asked all those questions and formed an array of answers for us to muse on for 5 minutes before thinking we've come to some deep conclusions.

Seriously quantify more.

Addendum: you know how people think stats can assign a number to something which perfectly describes that thing? Well, polynomials are dense in continuous functions - that is to say, a polynomial can get as close as you like (under a metric like the sup norm - I hate being vague, I set off my own noob-dar) to any continuous function. Well, floating point and the "network" structure can get pretty damn close too. So people are imbuing it with a "statistics says okay" sort of reverence.
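If anyone wants to see the density claim in action, Bernstein polynomials are the standard constructive proof of Weierstrass's theorem - a quick sketch (the target function and degrees below are arbitrary picks of mine):

```python
from math import comb

def bernstein_approx(f, n, x):
    """Degree-n Bernstein polynomial of f on [0, 1], evaluated at x.

    B_n(f)(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k).
    Weierstrass: B_n(f) -> f uniformly for ANY continuous f on [0, 1].
    """
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)          # continuous, but not smooth at 0.5
xs = [i / 100 for i in range(101)]
for n in (5, 50, 500):
    err = max(abs(bernstein_approx(f, n, x) - f(x)) for x in xs)
    print(n, round(err, 4))         # sup-norm error shrinks as n grows
```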

We've read the Mueller report. Here's what you need to know: ██ ██ ███ ███████ █████ ███ ██ █████ ████████ █████


Re: How was it redacted?

Yeah I'd noticed how careful they were, not sure if shitty jpeg or 2nd gen either.


Re: Character spacing?

I can imagine you going around with Google Translate thinking "these X-ards speak shit X" when actually they have the hang of it.

Just for a laugh I did "Greetings simple peasant folk" into one language and then that to English (so one hop, English -> X -> Y -> English would be 2) and I got:

"Simple country farmers greetings"

Do you know how many words of various lengths there are? Fucking loads. When the redactions (of this document) start coming out as "investigative techniques of how Bush did 9/11", "the ongoing aftermath of Bush's doing of 9/11", "had a meeting to discuss how Bush did 9/11" - it'd so be accused of bias :P

Brit Watchkeeper drone fell in the sea because blocked sensor made algorithms flip out


Just to confirm:

They literally (not like "training issue") can't control where it crashes beyond working out with some trig and iffy assumptions where it'll land if we tell it to go somewhere from a certain direction?


Scare-bnb: Family finds creeper cams hidden in their weekend rental by scanning Wi-Fi



"Hidden cameras in listings"?

So what, he was fine because the camera wasn't mentioned in the listing?

Back to drawing board as Google cans AI ethics council amid complaints over right-wing member


FFS you'll find a reason to hate anyone if you look close enough

I don't see why this board had to have public names (beyond the "about us" page on some site somewhere) - I wouldn't want a board active on twitter, you'd want people who knew their shit and were fit to consider the arguments watching what goes on through the various projects in various offices.

The personal politics of members /may/ give them a stance when it comes to (a guess, and for example) "should we write software (naturally using AI and maybe blockchain - or something we can call either/both) to better manage those jail things for immigrants?" should that kind of stuff come up, but that's kind of the point of them! To walk the line between what's okay and what's not.

It's (urgh, this isn't going to look good) like animal testing. There was some lipstick example which wasn't tested and caused great harm to humans; after that, stuff was (since we now know what stuff does, the need for it has rather died down). That has always been a balancing act between "is the knowledge gained truly something we cannot learn any other way?", "to what level of harm and for how long are animals exposed?" and so on - you don't want a group that goes "never" to every question, you want a mix, an "open" (to some extent) group that can see the points and give ground as needed.

For better or worse you need someone who can see (for better or worse) lipstick and the market for it is .... beyond their scope (dare I say) and so on.

As a "for worse" side, if you have a board that never yields in either direction you can end up with what dogs endured at the hands of cigarette companies for decades, so the evidence could be fudged. For better or worse lipstick was going to stay, there were dangers to humans, and animal testing in cosmetics has a fairly good history, believe it or not (which is why it's the textbook example).

Today's "no animal testing" cosmetic industry is only here because of animal testing to work out what was safe or unsafe and what could or couldn't be used, the premium a brand could charge for this (in the earlier days) offset the extra they'd pay for not using cheaper and newer ingredients and so on.

The situation is somewhat similar. For better or worse, like cosmetics, Google are here to stay; an ethics board must fight between both sides or, unfortunately, be discarded - again this may suck, but the situation is what it is, and at least in the next few years it isn't going to change.

Two Arkansas dipsticks nicked after allegedly taking turns to shoot each other while wearing bulletproof vests


To be fair you would want to try it.

I'd want to go first and have to go afterwards ;)

Amazon consumer biz celebrates ridding itself of last Oracle database with tame staff party... and a Big Red piñata


WTF is up with that Dabbb guy?

https://forums.theregister.co.uk/user/66711/ he's all over this thread.

I'm guessing you have some sort of Oracle DB certificate thing - you know most of that knowledge will carry over, right? Since 2012 MariaDB (at least) has really started doing well at adding stuff (before that it was like "we support /the syntax of/ constraints" and stuff like that). Just yesterday I was reading about recursive "WITH" statements - the only thing I've ever actually looked up and found an Oracle DB documentation link for and little else. Sequences I found in Microsoft's "Transact-SQL" (the dialect used with MSSQL?) - IT HAS THAT TOO (now). Forgive the tangent, it's just ... they've really pulled their socks up.
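You can even try the recursive WITH stuff without installing any of these databases - Python's bundled sqlite3 accepts near-identical syntax to what MariaDB 10.2+ and Oracle take (a sketch only; the table and names here are made up, and each dialect has its own quirks):

```python
# Minimal recursive common-table-expression demo using the stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staff (id INTEGER PRIMARY KEY, name TEXT, boss_id INTEGER);
    INSERT INTO staff VALUES (1, 'Alice', NULL),
                             (2, 'Bob',   1),
                             (3, 'Carol', 2);
""")

# Walk the management chain upwards from Carol: the CTE references itself,
# joining each row's boss_id back onto staff until no boss remains.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, boss_id) AS (
        SELECT id, name, boss_id FROM staff WHERE name = 'Carol'
        UNION ALL
        SELECT s.id, s.name, s.boss_id
        FROM staff s JOIN chain c ON s.id = c.boss_id
    )
    SELECT name FROM chain
""").fetchall()

print([r[0] for r in rows])  # ['Carol', 'Bob', 'Alice']
```

Same shape of query (anchor SELECT, UNION ALL, self-referencing join) is what the MariaDB and Oracle docs show.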

But the point still stands, if you can do Oracle you can transfer those skills over. Yes ALL of them take the SQL standard "under advisement" but transactions and isolation levels really only have a few ways to go - so the knowledge carries over.

FWIW BTW I think this has been a long time coming. For example: I've never used Oracle (or MS') DB offerings - well except for MySQL. I cut my teeth on that with PHP, I then carried that over to other projects; I still use the long end-of-lifed GUI tools. I imagine I'm far from alone. It's probably only lasted this long because of the MS courses you can get certified for (probably - not my area)

Although from what I've heard there are still some rough edges around MySQL/MariaDB - for example replication. You get row-based (the binlog gives the slave a set of changes to make to the tables) or statement-based (the slave re-runs the statement and derives the changes for itself) methods, and in the latter case you have to be careful not to use non-deterministic functions, and so on. Oracle can do this better (? - I've just "heard" it). There's also the WITH statement stuff mentioned, but MariaDB has that now. The other case I imagine is indexes: I hear Transact-SQL gives you A LOT of control over table structure and index types. For a long time InnoDB (sometimes called XtraDB, but not any more) - the one with transactions - only had B-tree indexes (they could be unique; I mean the actual structure) - not hash, R-tree, or fulltext. Again, times are (FINALLY) changing!
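The statement-based hazard is easy to demonstrate outside MySQL entirely - here's a toy sketch (sqlite3 standing in for a master and a slave; the real hazard is the same with UUID() or NOW() under MySQL's STATEMENT binlog format):

```python
# Replaying the *text* of a non-deterministic statement on two databases
# gives different rows -- which is exactly why statement-based replication
# forbids/warns about non-deterministic functions.
import sqlite3

stmt = "INSERT INTO t VALUES (random())"  # non-deterministic value

master = sqlite3.connect(":memory:")
slave = sqlite3.connect(":memory:")
for db in (master, slave):
    db.execute("CREATE TABLE t (v INTEGER)")
    db.execute(stmt)  # "replicate" by re-running the statement text

m = master.execute("SELECT v FROM t").fetchone()[0]
s = slave.execute("SELECT v FROM t").fetchone()[0]
print(m == s)  # almost certainly False -- the "replicas" have diverged
```

Row-based replication avoids this by shipping the resulting row values instead of the statement.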

However I imagine loads of other people (from those I've met, and my own work) just did what I did: build around it. Join us ;)

Brit founder of Windows leaks website BuildFeed, infosec bod spared jail over Microsoft hack


Who loves windows that much?

The sick bastards.

They're so lucky the judge imagined the convo:

"What are you in for?"


*Leans in close*

"You know *the* big tech company? Well I headed a site for *real* fans where we speculated on leaks and just generally worshipped stuff - in the end I fished for some credentials and had 2 and a half weeks of internet access no fetish could rival in pleasure - I saw everything"


"Oh wow ... what secrets did you find?"


"Oh it's a surprise my friend, but I will say this: brace yourself, next time you get pestered for updates and your computer comes up from that restart you are going to ejaculate when you see the next default wallpaper"

Seriously WTF?

Huawei savaged by Brit code review board over pisspoor dev practices


Yeah I was gonna say...

I won't bitch about how shit the world is now and all that - I may copy and paste the bitching but....

It's about par for the course - all Huweiewaiwoo can say is "we take it very seriously" - like the others. A lot of gear that doesn't get much public exposure to penetration testers/security researchers/pointers-at-the-emperor's-genitals etc. is hidden in telecom stuff - and some of the most dreadful stuff is there.

Not to get all philosophical on you guys, but the more "open" "it" (the platform, to the ideal of the code and a usable toolchain*) is, the better. For example take a TV (getting a "dumb" one is hard these days), skybox, games console, etc. These are computers but locked to fuck, and there's absolutely no way, without HUGE effort, to poke at these. That barrier is bad; it was once thought to be good enough, but as a certain TLA has taught us, actually some will go to great lengths to "poke at it" and keep it secret.

It needs to be pokable.

Open-source 64-ish-bit serial number gen snafu sparks TLS security cert revoke runaround


Re: How do they know how many values are "wrong"

I've just realised that the number is simply how many were generated by this method and I feel silly now.


How do they know how many values are "wrong"

Shouldn't half have the top bit set and half not (a very small "ish" obviously) if they come from a (C)PR source?

The wording is very weird from that quote as "must be positive" - well you can always interpret some zeros and ones that way?

FFS, now I've got to go spec-diving.
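To put a number on the "half should have the top bit set" intuition - a quick sketch in pure Python (nothing from any actual CA code; the clamping line is just the common interpretation of "must be positive"):

```python
# Draw random 64-bit serials and count how many would read as negative if
# interpreted as signed integers. Roughly half do -- which is why clamping
# the top bit to satisfy "serial must be positive" leaves 63 random bits,
# not the 64 you might assume.
import random

N = 100_000
negatives = sum(1 for _ in range(N) if random.getrandbits(64) >> 63)
print(negatives / N)  # close to 0.5

# The obvious fix: clear the top bit so the value is always "positive"...
serial = random.getrandbits(64) & ~(1 << 63)
assert serial >> 63 == 0  # ...top bit gone, and one bit of entropy with it
```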

Buffer overflow flaw in British Airways in-flight entertainment systems will affect other airlines, but why try it in the air?


No flame war on the "Right Thing" to do?

I imagine *A LOT* of things with text boxes are ill-equipped to deal with this (editors should be all right - they have nice trees, can work with a file rather than it all in RAM, blah blah - I'm talking about something that is ultimately a null-terminated string). I am surprised there has been no real talk of what the program *ought* to have done.

In this case an absolute limit should be fine, but generally these are not good (some old editors have 4KiB or even 16KiB hard limits - not very future-proof, and often exceeded by generated files; Bison has options to generate small code even today because of this).

Anyway, what *should* you do, guys? C'mon - an absolute limit in *this* case, but those old editors... should they try to find out how much RAM is free and use that? (I bet that's fired some of you lot up.)
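The "absolute limit" option at least is easy to sketch - a toy paste handler (all names and the 64KiB cap are made up for illustration) that caps the buffer instead of letting repeated select-all/copy/paste grow it without bound:

```python
# Hypothetical bounded text buffer: reject or truncate pastes that would
# push it past a hard cap, rather than overflowing whatever sits underneath.
MAX_LEN = 64 * 1024  # hypothetical 64 KiB cap

def paste(buffer: str, clipboard: str, truncate: bool = True) -> str:
    room = MAX_LEN - len(buffer)
    if room <= 0:
        return buffer                      # already full: drop the paste
    if len(clipboard) <= room:
        return buffer + clipboard          # fits outright
    return buffer + clipboard[:room] if truncate else buffer

# Doubling the buffer by select-all/copy/paste stalls at the cap.
buf = "A" * 100
for _ in range(20):
    buf = paste(buf, buf)
print(len(buf))  # 65536 -- pinned at MAX_LEN
```

For an editor a hard cap like this is the unfriendly answer; the friendlier ones (rope/tree structures, spilling to disk) are exactly the "nice trees" mentioned above.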



Let's not forget:

He copied and pasted stuff. He put some random crap into the window, selected that, copied, pasted a few times, then selected that, copied and pasted a few times <--- that's it.

He plugged in a mouse right? If it was that USB device that bricks stuff you connect it to (the one that charges slowly then shoves a lot back into the port) then yeah you'd have a point.

Imagine that "try copying and pasting loads of text" becomes some standard benchmark that "average people" try for "fun", seeing if "software is up to par" - then there's no "security researcher" here.

C'mon guys get some perspective. If he attached a debugger then yeah maybe, but f'cking copying and pasting a few times?


