Re: Funding peanuts
Just big enough peanuts that they can keep a straight face when they claim credit for whatever growth happened anyway
865 publicly visible posts • joined 11 Jan 2019
Perun put out a nice summary of the damage evidence yesterday: https://www.youtube.com/watch?v=MPXs2wDv4Kc&t=1803s
12 confirmed destroyed (some not only fuelled but armed with cruise missiles, as if ready to take off)
3 confirmed damaged (clear holes punched in fuselages but impossible to assess internal damage)
Then there are a lot of FPV shots of drones flying into aircraft with no follow-up footage, which could easily boost the destroyed/damaged count a lot but likely still leaves it a long way short of Ukraine's claimed 41 (which, in any case, I think it's safe to say is a very optimistic ceiling on the real number). Cloud cover hampered satellite imagery after the attack. I would bet even Putin is struggling to get truly accurate numbers, especially with all the 'repairability' salve his lackeys are offering up.
Also discusses the very interesting fact of Tu-160s being completely left alone, which makes the inactive A-50 rotodome attacks extra interesting.
Positioning itself in US and Western telecoms is hardly an act of war if the US have been busy doing the same for decades, is it?
It's apparently gone beyond intelligence collection now though, this is why the US is suddenly rattled (in a way they weren't but probably should have been when the CCP were merely stealing next-gen stealth fighter designs etc over the last 25 years): https://www.cisa.gov/news-events/cybersecurity-advisories/aa24-038a
"The U.S. authoring agencies have confirmed that Volt Typhoon has compromised the IT environments of multiple critical infrastructure organizations—primarily in Communications, Energy, Transportation Systems, and Water and Wastewater Systems Sectors—in the continental and non-continental United States and its territories, including Guam. Volt Typhoon’s choice of targets and pattern of behavior is not consistent with traditional cyber espionage or intelligence gathering operations, and the U.S. authoring agencies assess with high confidence that Volt Typhoon actors are pre-positioning themselves on IT networks to enable lateral movement to OT assets to disrupt functions."
The most charitable explanation is that, having hired hordes of cyberwarriors and pointed them at US networks, they've ended up compromising all kinds of low-hanging fruit that wasn't particularly a target, just because they can, because it might be helpful, and because it's cheap to do. But even in that case it's easy to see how the realisation of the capability might feed back into foreign policy.
Perhaps a more realistic explanation is they want as many levers as possible to pull, or threaten to pull, by 2027 when they aim (as they would put it) to secure their territorial integrity that's been compromised for the last 75 years, without the US intervening.
Whatever the reason I think the potential for foreign-soil sabotage (particularly the grey-area warfare/deniability angle) is quite unprecedented in major power conflicts historically, and that itself makes it difficult and risky to deal with.
GDPR compliance & data sovereignty is already a key selling point for the EU cloud providers (OVH, Scaleway, Hetzner, ..). It's hard to compare apples to apples but the prices don't look uncompetitive to me. Way less features than AWS, but then AWS undeniably has far too many features.
I presume the hyper-scalers win so much business because of various network effects. It's easier to recruit someone who knows their way around AWS. It's easier to expand your online business to, say, Malaysia, should you ever wish to. You can worry less about your cloud provider going bankrupt. It is also a pain in the ass linking services between cloud providers, so easier to stick everything on one big one. And maybe, just maybe, you'll need one of those newfangled features, like Amazon's 12th different way of reimagining databases. You won't, but it's better to be safe than sorry, right?
But if it's already been accessed simply deleting your copy of the cookie might only make things worse. Better to go to the site and log out. Particularly if said site is a data-slurping tech giant that encourages and facilitates you staying logged in on multiple devices.
square metres - modifying noun, main noun. Metres but they're square ones. (Also boring poems).
metres squared - noun, verb. Metres but someone has squared them.
squared metres - adjectival verb, noun. Metres but someone has squared them and it's a bit more of a surprise.
metre's square - possessive noun, main noun. Square that belongs to the metre. Compare/contrast: yard's yard.
meted squares - adjectival verb, noun. Those are your squares and you're not getting any more!
metres square - modifying noun, main noun. Everything in this square is measured in whole metres..?
Nothing is really 'so cheap to include' when it comes to satellites. Cooling & powering that kind of hardware in space adds a lot of mass, which should be relatively proportional to cost by the time they've spread R&D and fixed costs over 2,800 satellites.
But it's compute, not AI. The first will be invaluable for filtering and pre-processing the fire-hose of data the satellites collect. The latter is just a buzzword to make a 50+ year old endeavour sound more impressive.
The map-colouring example in this article is very good for the intuition (for the words, follow the link near the top to Quanta's original ZKP article): https://www.quantamagazine.org/computer-scientists-combine-two-beautiful-proof-methods-20241004/
In a conflict, knocking the President out would cause a lot of confusion
Granted, but there isn't any major ongoing conflict where this is likely. The president could be taken out by a small well-armed terror cell, but then the confusion of succession would be manageable(*). And if there were suddenly a major threat they can avoid travel as you say.
Pretty much no other national leader goes to such lengths to stay alive, and when they do die it's overwhelmingly at home (quite often in their actual home) to domestic terrorists, a coup (albeit sometimes foreign-backed), or some other scenario under the general heading of 'civil war'. Even when it's abroad, all the cases I can find involve domestic motives and domestic assassins who've used the opportunity of the travel to fix a leader's location (the PM of Jordan in 1971 and the King of Yugoslavia in 1934, for a couple of examples). You might even have to go back to Archduke Ferdinand for a truly interesting foreign case, but even that was within-empire. So the point is, as long as The Proud Boys, and Antifa, and whatever is left of the US Mob don't have access to MANPADS, the Air Force One missile defences seem OTT.
On the other hand, if a capable military actually attacked Air Force One with fighters and air-to-air missiles it would go straight down, because it's a hulking great jet, not a nimble fighter or a stealth spy-plane. So even the theoretical threats it protects against are quite narrow. Zelensky routinely zips around an actual warzone by train when Russia would do almost anything to take him out. Apparently he does this because it's much safer than being in the air. We don't really know the military capabilities of his train or the costs. It's undoubtedly still risky. Trains don't cross oceans well, and the US rail network isn't very good, so it's not really an option for US presidents, but it's an interesting example of matching the method of travel to an actual threat.
(*) I do wonder if the US president is perceived as less expendable than many world leaders because the process of election only leaves one uncontroversial successor (the VP) before you get to senators elected under a totally different system and then rapidly to appointees. In a Westminster-style system there's an endless supply of MPs elected in the same way the PM was, and it can take a ruling party a few hours to decide on a successor if needed.
AF1 is what the President is currently in and the second shadows all missions and is kept at a separate airport nearby and if that's not possible, it's kept aloft with in-air refueling in the vicinity which isn't what they like to do. In addition, there's more military cargo aircraft that haul around the limo, security forces, supplies, etc.
I wonder just how sensible this whole expensive, redundant, anti-missile setup is with respect to plausible threats, and to what extent it's already mainly for show without adding gold-plating as well. (Consider for instance that previous US presidents who've bought it in the line of duty have done so safely on the ground in the US due to inadequate protection from small arms fire).
I had a bit of a laugh at the retrofitting for in-flight refuelling hassle because the military dayjob of the UK government's official airplane *is* air-to-air refuelling of other aircraft (including apparently, foreign government jets).
Because they're also a reputable penis enlargement therapy retailer, and that's a more wholesome pedigree than most VPN providers.
Edit: I went to the site, it's "online only", no "pumps, pills or painful surgery" and once you subscribe you'll get "Lifetime access" to whatever the service is.
Yes, lifetime access.
Why not lift the prediction to the compiler/interpreter, instead of the hardware/microcode, reduce the complexity of the hardware so you can focus on secure, robust, efficient processing.
The predictions are never 100%, so the processor will always have to deal with unwinding mispredictions regardless of whether it's responsible for the prediction in the first place. Some of the prediction complexity could be passed to the compiler though (e.g. a more elaborate version of profiling and emitting branch hints), and that would make exploiting these vulnerabilities easier, because the opaqueness of the branch predictor is one obstacle to researching them. That in turn would hopefully mean the CPU bugs get squashed quicker.
bandaid
It's not a bandaid, it's an essential way of mitigating the relatively poor latency of DRAM. The most algorithmically perfect, tightly written pieces of code benefit enormously from pipelining/speculative execution/branch prediction. In many cases they benefit more than inefficient code, because one of the reasons code ends up inefficient is unpredictable branching (that isn't necessary for the task).
It looks like only back to the 8th gen era, and even then only certain mobile chips (which I believe are 6th gen architecture or something, just to confuse things): https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files/blob/main/releasenote.md
Comet Lake / 10th gen for mainstream desktop if I read right. Maybe they will add more though, because that's similarly stingy to AMD's first offering of their microcode signature vulnerability fix and they were persuaded to go back a bit further in the end. I fear this inevitably leads to the evolving smartphone business model with fixed security update periods heavily stratified by how premium the chip is.
You should be able to KNOW the information is coming from the government by the .gov website. On X anyone can set up an account named something official sounding and release whatever they want, and people are just supposed to be able to tell those apart from the legit accounts.
Here in the UK a lot of local government information now seems to be communicated by informal and thinly-veiled political rants via the medium of Facebook posts. When you try to discover any substance, detail, or anything remotely official to back up these posts it turns out the relevant section of the proper government website hasn't been updated in several years.
Any recommendations for the unfortunate Windows user?
Thunderbird is fine. If you only use it for RSS it's pretty bloated I suppose, but only in memory footprint (like everything else these days) not in shoving features down your throat.
I don't really understand the historic attempt to bolt RSS readers onto browsers, it's a much more natural fit for an email client.
Google (if forced) would do what makes it the most money. They'll evaluate a spin-off (essentially an IPO) against selling to whoever has deep pockets. They'll also listen to their investors (a little bit) as to whether those investors want cash or shares in a standalone ChromeCo.
Actually the current proposal is that a Divestiture Trustee will divest it for them, so, a bit like a company in administration, they won't have any say in who ends up with the asset, and the trustee's mandate is something as yet not detailed but involves creating market competition. Quite what scenario that most likely leads to, I don't know.
If those jumpers still existed the functionality would probably be implemented in software by now anyway, and so prone to vulnerabilities just like the software locks that do exist.
I wonder how common it is to wipe the BIOS with the reset jumpers, which (to my knowledge) are still common, when performing a ransomware recovery? Probably not common enough. (Yes, the firmware in other devices remains an issue that could undermine that, but it significantly raises the bar.)
There's no extra memsetting or copying (at least if you get rid of the unnecessary fills you added). I meant that there's no uninitialised data at the language level. In the resulting asm of course there'll be lots, but it'll be statically guaranteed not to be treated as initialised data (give or take bugs in unsafe code).
You cannot accidentally copy a Vec in Rust (outside of the expected amortized constant time growth if you don't pre-empt the final capacity), because it doesn't implement the Copy trait. You need to be explicit by .clone() ing it, and of course you think twice before doing that because it can be costly.
The layout and access is precisely the same as in C++. Two arrays on the heap. Where on the heap? We don't know, the allocator takes care of that and it's non-deterministic in general. In C++ and Rust alike. If you care about layout you need to go closer to the metal. [Edit: I just realised I switched the Rust code from on-stack arrays like your C++ to heap-allocated Vecs at some point. I can't remember why and it isn't necessary (simply remove the two vec! to go back to arrays) but they are quite large for the stack anyway and a couple of extra one-off allocations and pointer derefs isn't hurting performance.] It's absolutely not like Python which uses garbage collection and almost every trivial thing is an object with massive heap fragmentation and constant cache misses.
I don't see the translating all the complex array code to slices being viable without introducing errors
More likely you'll find and fix bugs, but it's a lot of work so don't try it unless your old code base is known to be quite buggy or is especially security sensitive.
I can't see myself switching to Rust tbh, but it's been an interesting diversion, thank you again
Fair enough, respect for being open-minded enough to try it even though you had some strong prior views on it. I only encourage people to do that before bad-mouthing it because there is a lot of misinformation from the die-hard C/C++ crowd :)
Legibility is what you're used to. It was an adjustment for me for sure, but there's less detail hiding in the arithmetic because of the functional constructs, and fewer off-by-one errors. Less wondering: is that resource freed properly? Is that iterator valid? And so on. Because much of that checking is off-loaded to the compiler. But of course you need to know the language pretty well before you can trust what the compiler brings to the code review.
For example building in -O0 shows the memcpy calls for the slices
There's a bit of this in any compiled language, but I actually think it's one of the most intriguing things about Rust. It's a modern language in the sense that it's built with awareness of, and to rely on, the deep optimisations LLVM performs. There's a less-is-more approach to the high-level abstractions which stands in stark contrast to the low-level tinkering C, and to a lesser extent C++, encouraged. Yet it somehow works just as well. It also relies on very demanding compile-time type-checking/borrow-checking. It wouldn't have been possible 20 years ago without unreasonable compile times. Even 15 years ago the relative immaturity of compiler optimisations would have reduced its performance or forced language design compromises. So it's not intrinsically better than C++; it just leans better into the resources we're able to invest in software development at the current point in time, and as a result wins some security guarantees that have also never been more important than at the current point in time. Capabilities and requirements will continue to evolve, so no language can really afford to sit still.
/* surely this is one thread, not four */
It's actually 5. chunks_mut produces 4 chunks which are looped over and each loop iteration spawns a worker. The main thread then waits at the end of the scope block for all 4 workers to join. In my original Rayon version the main thread would become a worker so there were only 4 total (or only 4 running at a time anyway).
Ah,
vec![2i16; SLICE_LEN * MAX_WORKERS]
This expression already creates the Vec initialised with all 2s. (2i16 is 2 of type signed 16 bit integer). You don't have uninitialised data(*) in Rust, strict RAII is one of the core principles.
(*) To be more precise, there is a special MaybeUninit wrapper for low-level use which tracks memory that genuinely needs to start uninitialised and prevents it leaking into non-unsafe code, and even in unsafe code you're prevented from doing anything too dangerous with it.
Godbolt is really good at mapping the asm back to specific C++ lines. That's partly because Matt has put effort into refining it, but it's also because C/C++ specifically require functions to have unique addresses in memory. Rust code and resulting asm are more disconnected because the compiler is freer to transform the code, inlining and smearing the higher-level functionality to make it harder to map but hopefully with faster execution. (Even so I am a bit surprised how bad the mapping is in this example).
My recommendation is not to worry too much about the actual asm but to benchmark it in reasonable real world scenarios (with randomised data to thwart compiler optimisation of constants). Only when you perform instruction-level measurements of branch mis-prediction or pathological fetches etc do you really need to untangle which line of Rust is responsible for a particular asm instruction, and in those cases it's usually pretty obvious where the bottleneck is likely to be.
Okay yes, there's a pointer/size (aka slice, aka fat pointer). These aren't in an array though, there's just 8 of them produced 2 at a time (one of each pair pointing into v1 and one into v2) that are passed to the threads. The threads mutate v1 but they don't need to mutate v2, that's just how you wrote the original code. I don't really understand your last line (I'm incorrect that one array can not be mutable?). v2 isn't mutated, but it could be if needed. (I still maintain in this simple case it would be better for each thread to accumulate the mutations itself and avoid any need to synchronise the writes back to v1, but sure, maybe the side-effect of the v1 changes is somehow integral to whatever functionality we're simulating here).
There's nothing stopping you downloading libraries from repositories and keeping them without any reliance on 'cloud' (aka the internet *). Rust's cargo for example has the 'vendor' command, although there are other approaches too. There's also a very strict policy against deletion on crates.io, so no they won't disappear in a year or two just because a single dependency dies:
Crate owners can delete their crates under certain conditions: the crate has been published for less than 72 hours, or the crate has only a single owner, has been downloaded less than 500 times for each month it has been published, and is not depended upon by any other crate on crates.io. If these conditions are not met, the crate will not be deleted.
(*) It's no different to downloading a C++ library from a random .edu website. Of course you keep it locally because the postgrad who published it inevitably loses their webspace and ceases to care about the library a few years later.
No no no. I don't know where you've pulled iovec from; that's not Rust. The best way to interpret iterators is that they're an abstraction that the compiler can reason about very easily. The end result is just a begin pointer and a length (together: a slice) passed to the threads, or something mathematically equivalent that the compiler has conjured up. The compiler ensures memory safety so the asm can be as dirty and hacky as needed; there's no need for copying. In fact there's often less copying than in C++ because fewer requirements are imposed on what actually happens in memory.
You need compiler flags '--edition 2021 -C opt-level=3 -C target-cpu=native' in Godbolt. I'm predicting you're going to complain there's more asm code in the Rust version, so I'll pre-empt that by pointing out that it's mainly robust error-handling for all the edge cases the system calls might produce. The hot paths are remarkably similar in both: unrolled loops and vectorisation for both the multiplying and adding, a smattering of allocations to handle the threads. What you'd expect.
Since you're hung up on this and some Rust coders are now incorrectly implying outside crates are necessary here's the same with only std. I even moved the sum outside the threading like you wanted even though it'll be worse for performance (as it happens it made for simpler code since I can ignore the return values of the threads):
https://play.rust-lang.org/?version=stable&mode=release&edition=2024&gist=303bbb6171701f9b90bf023f2f487563
A little more verbose than with Rayon, but not really. Rayon's true power lies in creating trees of nested tasks with worker threads which work-steal when they've finished their own tasks, something which would require an extra level of abstraction in C++ too, but it's overkill for this task. I am truly sorry for misleading you into thinking a '3rd party library' was necessary.
I mirrored the signed 32-bit to unsigned 64-bit promotion too. So this is now the exact same pattern as your C++ code, with the sole exception that it will catch some underflows/overflows, depending on compile mode.
You might be interested to know Rust's scoped threads are very similar to C++'s jthread: they join automatically when references to them are dropped. In Rust however this isn't a mere convenience but is essential for sharing data with strong concurrency-aware memory-safety guarantees, because we can allow threads to borrow references to data owned by the spawning thread, safe in the knowledge that the spawning thread will outlive the spawned threads. I think that's neat but I'm sure you won't. It's also why there are no unnecessary copies, as you feared there might be. The child threads gain temporary ownership of v1/v2 but the main thread can access them again when safe to do so. The borrow-checker enforces this, so for example the same pattern without a scoped thread block won't compile. This is the kind of feature which backs up phrases like 'fearless concurrency'. I am absolutely certain other languages will get there soon so I'm not a cultist holding Rust up as some perfect forever-language, but it is absolutely doing a lot of useful stuff that C++ is currently not doing.
serialisation point
The Rust code serialises after the sum; I think you acknowledged that already, and we're in agreement the Rust code parallelises a bit better, but this has nothing to do with how difficult it is to check for correctness.
The sum of products is not always the same as the product of sums, so again this is not the same algorithm. It's unlikely to matter for integers, but again still not a like for like.
Yes, it's the same algorithm. You're misunderstanding something but not explaining well enough that I can figure out what. Can you give some combination of constants and initial vector values that gives a different answer for my code? If we were using floats, whose addition is not strictly associative, the results could differ by an epsilon or two at most, but we are not (and in that case neither result would be more correct anyway, though it would take me a few seconds to move the sum outside the threading if truly needed).
This doesn't need a third-party library as it's an array of threads that do a trivial computation
Just going to reiterate: you're using a C++20 feature compiler-makers have been slow to implement while I'm using a universally loved library that takes seconds to deploy and that's been solid since 2018 in a language that's barely that old. Someone rooting for C++ really doesn't want to get into an argument about language standards. Boost was unofficially flying the C++ language standards flag for several decades while the standards committee sat on their hands. Don't get me wrong, I'm glad that's no longer the case but please get down from that horse.
because the point is not the computation but the parallelism and the correct use of the cache
Not sure what your point is, but it feels like a good moment to mention that I mirrored the behaviour of your code. If I'd done it the performance way I wouldn't have mutated the v1 vector, because that's much more costly than reading and accumulating into one value per thread. Your code leaves v1 in a particular state I felt it was important to replicate despite the costly main-memory writes. In truth, LLVM may well optimise away those inconsequential writes, but I tried.
who can tell how many copies your example does
I can tell you, none. In Rust if you do a copy you'll either have a clone() or you're creating a new array/vec involving a light Copy type. The type here (i16) is Copy which would mean very fast SIMD memcpys, but they are not happening anyway. Even when you think you're creating a new container it's usually better at copy elision than C++. I've used Rayon in much more complicated scenarios where an extra array is needed on output and I can tell you you won't be performing those scenarios in C++ with jthread without an extra buffer either.
https://en.cppreference.com/w/cpp/thread/hardware_destructive_interference_size
Yea you can apply atomic padding in Rust. It's not mandatory, perhaps it should be but there's a heavy trade-off with memory usage in some scenarios. The usual way is just to manually align structs containing atomics to 64 bytes because everything has cache lines that long currently. It's a bit foolish trying to generalise and future proof because we don't know what future hardware will look like.
The most complex code in that example is slice indexing.
Yea, this is what I started off saying. It's needlessly complex. The danger of getting it wrong lets C/C++ developers imagine they're playing with fire but are too smart to get burned. Let's be honest, some of them really love that, and it's a problem.
No, you're using an int to accumulate. This is one of the many gotchas of C++. 0 literal is considered an int preferentially ( https://en.cppreference.com/w/cpp/language/integer_literal ), which means type T in T accumulate( InputIt first, InputIt last, T init ) is int. Only after the accumulate is your T/int implicitly cast to size_t, and it may have overflown 32 bits by then. If you don't believe me check in Godbolt with um, 8 workers, slices of 8192 and starting vectors full of 64s and 128s. (*)
I did the sum in parallel because the result will be identical but achieved with better performance. 'Lock free' is a misnomer: any time you have 'lock free' code you're offloading the threading contention to the CPU, which itself performs locking. You can see this clearly because you have costly lock asm instructions that guarantee atomicity and memory ordering. That may give better performance than heavy-duty locks sometimes, but it's not a panacea compared to just reducing inter-thread data dependencies. If the calculation is worth threading in the first place (this toy example isn't, but that's beside the point) then you definitely prefer to merge 4 primitive values rather than 4 entire slices, and the Rust code will accumulate 'along the way' while multiplying, rather than doing 2 passes over v1, so it's more cache friendly.
You almost certainly have locks anyway, they're hidden inside the vector<jthread> destructor. If you don't have 'locks' in your code or the C++ library code then you're also giving up time-slices to the OS on the main/merging thread while the other worker threads aren't ready (because the alternative, spin-locking, is crazy in these circumstances) and that again involves locks, and context switches, and completely over-shadows whatever performance you think you're getting from not having locks. Don't get me wrong, nothing drains performance like locks done badly, but locks when appropriate are just totally okay and you can bog down your code trying to get rid of them. In any case despite this side-track, rayon uses the same strategy to coordinate threads, testing atomics and giving up time slices to the OS. There are some details like maybe you spin a couple of times before handing back to the OS but it's all about trade-offs in the end.
Rayon is as close to Rust stdlib for this kind of thing as you get, it's built on threading primitives that have gradually migrated from libraries into the standard. I note jthread support is a bit patchy still with C++ compilers while Rayon is supported on all Rust compilers (:troll:) where there are OS threads present, so the situation is not greatly different.
(*) I should however be promoting to usize as that's the equivalent to size_t in Rust, both are unsigned. I think that's a mistake but if it's not, the typical reason for using size_t is that you're going to do an allocation or other memory operation and that's dangerous if you might be casting a negative number ( https://cwe.mitre.org/data/definitions/195.html ). You can however cast i32 to usize in Rust quite happily so I'm not chalking that one as a win, but you do, as I said already, need to do it explicitly which means you won't get confused about what type your accumulation is happening in.
because of some reason, you've not felt fit to elucidate
For context, this is what I'd consider equivalent Rust code: https://play.rust-lang.org/?version=stable&mode=release&edition=2024&gist=31f47b74ecca1e909cf99b9608ae9e94
- There's no indexing arithmetic to scrutinise
- No awkward +1 fudges in loop (to be fair, the C++ could have avoided this by decrementing i after the loop body, but there'd still be more going on than ideal for such simple loop logic)
- Even if you introduce non-local bugs such as making v1 and v2 different lengths the Rust code has no chance of accessing either array outside bounds
- It's clearer that we're multiplying elements in 16 bits, accumulating in 32 bits, and only then promoting the result to 64 bits (I've taken the liberty of assuming a 64-bit machine in interpreting size_t). This might be a bug if you intended to accumulate in size_t (incidentally I did not spot this in my earlier review of your code). The Rust code helps us recognise this, not only because it says i32 in black-and-white but because in debug build it'll panic if the accumulator overflows.
https://godbolt.org/z/PsMr1xdfd
As a C++ programmer for 30 years and a Rust programmer for 4 years, I find it way more difficult to review for correctness your simple bit of code than the equivalent in Rust. There's the advantage, even if you're one of the very few who can write sound C++ every time.
If that were the case then virtually all cryptocurrency transactions would be money-laundering in whatever jurisdiction you’re referring to (I say that while acknowledging cryptocurrency transactions aren't legal everywhere). As would a lot of the Panamanian/BVI/Swiss/etc fiat currency transactions involving shell companies, shady off-shore trusts and so on.
Money laundering is more commonly about obscuring the criminal nature of the money, not the source or destination. Of course, if the source is undeniably criminal then you need to obscure that in the process, but that on its own is not the sufficient condition.
Money-laundering differs quite a bit by jurisdiction but I think the simple answer is that the money doesn't become criminal until the ransom is paid, so the payer cannot be accused of laundering criminal money. Perhaps middle-men negotiators who also handle payments might be in more danger. I'm interested what specifically the lawyers said? (I'm imagining teeth-sucking and fence-sitting)
In the UK anyway there is a consent process, so you can ask the police for permission before doing anything that might constitute money laundering. Not that I recommend paying ransoms but anyone considering do so should already be in contact with the police, for a range of reasons.
Very clearly a ransom should not be concealed as 'C-suite golf membership dues' in your accounts!
You need to get off the internet if that's how it makes you feel. Seriously, it's not a place for children or the faint of heart.
I'm a member of an ethics committee at a large, prestigious university(*) and I can say this view is pretty wide-spread. If you're doing an experiment involving online individuals with all the anonymity and arms-length-ness that social media provides then concerns like consent get interpreted very differently compared to in-person experiments. I guess it's harder to empathise online, *and* people feel online is an onslaught of constant harm anyway, so what's a bit more?
(*) Not the slightest bit of truth in this but I thought it'd make my point more persuasive.
I used to use one to merge the Windows-mandated Documents folder you can't avoid stuff ending up in or properly move, with my actual documents folder. Fight cruft with cruft.
(These days I've managed to largely ignore the former, whatever's in there sure as hell ain't my documents)
Yea, I started off just saying the Brits stole it from the Chinese, but it improved the narrative to include India, especially given their huge role in actually growing it. It seems unknown, but perhaps non-commercial tea use in India provided some of the motivation for the East India Company to try to grow it there? And it is the native Indian varieties which generally remain strongest to this day (enjoying some good Assam as I type).
Sounds like an interesting book, I shall endeavour to check it out.
Tea. Which the Indians stole and made stronger. Then the British stole it and added milk and occasionally weird things like bergamot. Then the Yanks stole it and added ice, and used it as a gimmicky flavour in various baked goods, many of which they stole from the Germans.
About that.. https://cyberscoop.com/china-national-vulnerability-database-mss-recorded-future/
Last year, publication of the Microsoft Office vulnerability CVE-2017-0199 came out 57 days late on the Chinese database. In the meantime, a Chinese advanced persistent threat group exploited the vulnerability in cyber operations against Russian and Central Asian financial firms.
Apparently you have to GO to Bluetooth range (30'/10m?) for EVERY light
Trouble is, every device contains a Bluetooth/WiFi/GSM module these days, so if you have a full compromise you may well be able to worm your attack to all (or your one desired) light, at least in dense urban areas, and to begin it from some internet-connected device you can safely compromise from Mom's siheyuan.
But, still preferable to having unnecessary central control like in movies.