Well, for the price of a crappy, freezing house share in London you could get a reasonable apartment or even a full house somewhere else in the country. If you're working from home, you have the opportunity to move out of London and go somewhere cheaper.
Google extends homeworking until this time next year – as Microsoft finds WFH is terrific... for Microsoft
You need self-control, but flexibility is a good thing...
The 4 extra hours of work a week can be offset by the reduced time wasted commuting. I've worked in jobs where commuting wasted 10 hours a week, so spending 4 of those working and 6 of them relaxing is a win for both sides, and helps justify the idea to the company.
In terms of working weekends and evenings, we should take a flexible approach...
I'm happy to work some evenings and weekends if I have nothing better to do (which is frequently lately, as many of the places I could go to are closed), providing I get something in return: either the ability to take off an equivalent number of hours at times I would otherwise be working, or pay for the extra time.
I've been working from home for quite some time, and will frequently take a few hours off during a weekday if work is quiet, then work an equivalent number of hours in an evening or weekend. I complete timesheets for the hours I worked, and aim to balance them out so that I stick to my contracted hours per week on average.
Work often isn't 9-5 anyway: you deal with people in different timezones, encounter delays, have to wait for things, etc. If I was in an office I might be sat twiddling my thumbs while waiting for something; at home I can clock off, go do something else for a couple of hours, and resume work later.
Re: Interesting market effects
Setting up a new v6 network is already easier than v4: you don't need to worry about NAT, address conflicts, or conserving a limited address space.
Companies like Microsoft and Facebook are IPv6-only internally, with border devices that proxy traffic to legacy IP for when they have to communicate with outdated third parties.
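One reason setup is simpler is the sheer size of the space: a single /48 gives you 65,536 /64 subnets to hand out per VLAN or site, with no pressure to conserve anything. A minimal sketch using Python's `ipaddress` module (the prefix is the RFC 3849 documentation range, purely illustrative):

```python
import ipaddress

# Documentation prefix (RFC 3849), purely illustrative
site = ipaddress.ip_network("2001:db8::/48")

# Carve out one /64 per VLAN - no need to size each subnet carefully,
# since every /64 already holds more hosts than you will ever connect.
vlans = list(site.subnets(new_prefix=64))
print(len(vlans))   # 65536 /64s available from one /48
print(vlans[0])     # 2001:db8::/64
print(vlans[1])     # 2001:db8:0:1::/64
```

No address conflicts, no NAT rules, no agonising over whether a subnet needs a /28 or a /27.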
The ARM people are not stupid; they have priced the royalty rates where they are for a reason...
If they start cranking them up, a lot of customers would leave and move to other architectures - MIPS, RISC-V, POWER, etc. Most embedded devices are not tied to any particular architecture; Linux runs on everything, and the firmware is generally rebuilt for each new device anyway, so customers are not locked in.
Detroit Police make second wrongful facial-recog arrest when another man is misidentified by software
It's not so much the software that's at fault as officers trusting its results blindly. The software is a tool, and all it can do is reduce the number of photos that you need to check manually. You still need to do the actual detective work.
This guy needs to sue for wrongful arrest. If the costs start stacking up, the police will have an incentive to improve officer training and deal with incompetent/lazy officers. If you don't hit them in the budget, nothing will change.
I was screwed over by Cisco managers who enforced India's caste hierarchy on me in US HQ, claims engineer
On the one hand we're constantly being told to protect and respect different cultures...
And yet it's Indian culture which has this caste system and has resulted in this discrimination. It is his Indian colleagues who are discriminating; non-Indians probably wouldn't even be aware what caste he was from, or what the traditional relationships between castes are.
Some aspects of cultures like these are simply incompatible with the ideas of equality expected in Western societies, but the idea of forcing others to change their culture in order to be compatible is also supposed to be bad, and people get accused of racism for expecting immigrants to adapt to a new way of doing things.
ZFS co-creator boots 'slave' out of OpenZFS codebase, says 'casual use' of term is 'unnecessary reference to a painful experience'
Keepnet kerfuffle: Firing legal threats at bloggers did infosec biz more damage than its exposed database
"We then store this data in our own secure Elasticsearch database"
This statement has proven to be false: their database was demonstrated not to be secure.
If all it contained was a mirror of already-public data, then no one would have cared anyway.
Also, what is it with hiding known-insecure services behind a firewall? The service should have been configured to use a secure form of authentication first, and then placed behind a firewall as a second layer. If one layer fails, you still have others.
MacOS on Arm talk intensifies: Just weeks from now, Apple to serve up quarantini with Kalamata golive, reportedly
I also recall the m68k to PPC transition...
A lot of software and even key parts of the OS remained 68k code for quite a long time.
Apple stopped making 68k machines at, I believe, the 33MHz 68040, and transitioned to the 60MHz PPC601.
The early PPC machines were often slower than the high end 68k machines because of the emulation overhead.
For software that was 68k-only, running a Mac emulator on a 68060 Amiga was at one point faster than any real Mac.
The 68060 itself was fairly competitive with the 601 even running native code.
There are some pretty powerful SIMD options available for ARM too...
There's even an ARM based supercomputer:
And with the lower power usage of ARM, they could clock higher or add more cores while staying within the same power/heat budget.
GPUs are also good at a lot of the things that SIMD instruction sets are used for.
"It's OK for me, so I don't care about anyone else"
In developed countries you still get your own IPv4 address when you sign up for home internet...
In developing countries this is not the case: you are stuck behind CGN and have a second-class connection. You are an outside viewer; you are not part of the internet. Not to mention the performance overhead and extra cost caused by this setup.
Getting new IPv4 allocations is difficult and expensive, and developing countries are not exactly flush with cash.
Many popular sites and services on the internet started out as a hobby; if getting an externally addressable connection is difficult or expensive, this innovation goes away too.
Until IPv6 takes over, IPv4 will continue to stifle developing countries and the development of new services.
Re: Doomed to eternal limbo
IPv6 is less complicated than IPv4, and the absence of NAT is a big part of that. With NAT your devices have multiple addresses, and you have the added complexity of correlating them. Then, on a network of any size with interconnects, you have to worry about address overlaps.
NAT does break things: many protocols have been redesigned to work with NAT, often losing features or performance in the process, and many NAT implementations have specific kludges for certain protocols (eg FTP) that can be abused for malicious purposes. Having a single NAT gateway under your control is also nowhere near as bad as one provided by the ISP that you have no control over, or multiple layers of NAT.
NAT turns the internet into a client-server model instead of a peer-to-peer model... Communications protocols were designed to connect users directly together (eg the original ICQ, the DCC features of IRC); nowadays, since users can't connect to each other directly, all communication takes place through a third-party server, which decreases performance and reduces security/privacy.
Re: Doomed to eternal limbo
Using ipv4 addresses with AWS and similar providers has problems...
With AWS at least, all IPv4 traffic is NATted while IPv6 traffic is not, and some protocols don't like this.
Also, IPv4 addresses are recycled whereas IPv6 addresses are not. If you shut down an IPv4 instance, you have to make sure anything that was pointing to it (firewall rules, DNS records, static configurations, etc) has also been cleaned up, otherwise you could have a security breach when someone else is allocated the same address. For an example of malicious activity taking advantage of this, read the recent story about Houseparty posted here a couple of weeks back.
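The cleanup step can be mechanised: diff your DNS records against the addresses you still actually hold. A hedged sketch of the idea (the record data below is invented for illustration; in practice you'd pull it from your DNS provider's API and your cloud inventory):

```python
import ipaddress

# Hypothetical inventory: the public IPv4 addresses this account still holds
held = {ipaddress.ip_address("203.0.113.10")}

# Hypothetical A records pulled from the DNS zone
records = {
    "app.example.com": ipaddress.ip_address("203.0.113.10"),
    "old.example.com": ipaddress.ip_address("203.0.113.99"),  # instance long gone
}

# Any record pointing at an address we no longer hold is dangling:
# whoever is next allocated that address inherits the hostname.
dangling = {name for name, addr in records.items() if addr not in held}
print(dangling)  # {'old.example.com'}
```

Run something like this on a schedule and dangling records get caught before anyone else inherits the address.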
Re: in Sinofsky's defence - is iPad Pro + iPadOS heading towards achieving the Windows 8 vision...?
// The security landscape is getting worse. The app security model for WIN32 and macOS was never designed for such a hostile security landscape. Mobile app models (secure sandbox; limitations; app store; automated updates; etc) are better suited to this.
This is the key point...
Traditional operating systems were designed by and for geeks. They are complex tools that require knowledge and experience to operate correctly.
They are kit cars, whereas an iPad is a ready-to-drive vehicle. You won't get the same performance or flexibility, but you will be able to drive to work or the shops without any hassle, and that's what matters.
Fully featured computers have always been a niche product aimed at specialist use cases and only ended up being used by the masses because actual consumer oriented products were not available yet.
TCL 10L: Remember the white goods flinger that had a licence to make BlackBerrys? It made a new own-name phone
Forget BYOD, this is BYOVM: Ransomware tries to evade antivirus by hiding in a virtual machine on infected systems
Re: Use of SMBv1 for XP compat may be at the core
Encryption is not the reason to deprecate SMBv1... SMBv2 doesn't implement encryption either, and it's optional even in SMBv3.
The problem is the inherent complexity and age of the protocol; SMBv2/v3 are much cleaner and simpler.
However, they are not without problems either: on Windows the protocol is deeply embedded into the OS and runs with a high privilege level, the protocol allows a lot more than just file sharing, and there are still weaknesses in the authentication system - especially NTLM.
Houseparty denied it had been hacked... while miscreants were abusing its dot-com domain name infrastructure
Yet another reason why we need IPv6...
IPv4 addresses on AWS and other such platforms need to be recycled because there's a shortage of them; if a machine gets killed and they don't remove the DNS records, then someone else will soon inherit them. The address allocations are also random and spread all over the address space AWS owns, so if you're trying to add firewall rules, or determine where traffic comes from in a packet capture or logs, it's painful.
IPv6 allocations are based on blocks per customer, so Houseparty will be allocated a large block by AWS and all of their addresses will come from that. If they drop a machine, the address goes dead and won't be allocated to a different customer, as it still belongs to Houseparty.
Another good example of this absolute mess is Zoom:
75 separate, spread-out IPv4 blocks that belong to AWS (and does Zoom even control all the addresses in those blocks?), or a single IPv6 block that belongs exclusively to Zoom... I know which I'd rather use for monitoring and firewall rule purposes.
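The difference is easy to see with Python's `ipaddress` module: with a per-customer IPv6 block, one membership test covers everything, while with scattered IPv4 blocks you loop over the whole list. (The prefixes below are illustrative, not Zoom's actual allocations.)

```python
import ipaddress

# One hypothetical provider-assigned IPv6 block for the whole service
v6_block = ipaddress.ip_network("2001:db8:1234::/48")

# Many scattered hypothetical IPv4 blocks, as with AWS-hosted services
v4_blocks = [ipaddress.ip_network(p) for p in
             ("198.51.100.0/25", "203.0.113.64/26", "192.0.2.128/27")]

def from_service(addr: str) -> bool:
    """Would this address match a firewall rule for the service?"""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return ip in v6_block                       # one rule covers everything
    return any(ip in block for block in v4_blocks)  # 75 rules in Zoom's case

print(from_service("2001:db8:1234::9"))  # True
print(from_service("198.51.100.7"))      # True
print(from_service("192.0.2.1"))         # False - outside every listed block
```

One rule versus a list you have to keep in sync with the provider's published ranges.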
DBA locked in police-guarded COVID-19-quarantine hotel for the last week shares his story with The Register
Re: And this is why the Aussies are on top of it
What makes you think that reporting and testing in third-world countries like Myanmar and Laos is at all accurate?
Also, the vast majority of deaths have been elderly people with existing conditions. In developed countries with effective healthcare systems there are a lot of elderly and sick people still alive thanks to that healthcare; in third-world countries, people who would be in these categories are often already dead.
Many young and healthy people experience few or no symptoms, and third-world countries are full of young, healthy people because it's hard to survive there otherwise.
Happy birthday, ARM1. It is 35 years since Britain's Acorn RISC Machine chip sipped power for the first time
Re: "All issues with management blobs etc. aside, this is a bit debatable IMO"
It didn't take ARM an age to get to desktop level, it took them an age to get back.
The earliest ARM chips were used in desktops, and those machines were more than performance competitive with the common x86 and m68k designs of the time.
Re: Update it not kill it
It does; there is FTPS, which is FTP over SSL...
The problem is NAT.
FTP uses separate ports for data transfer and control, and the benefit is that you can remotely initiate transfers between two servers without the data having to touch your client (especially useful when you have slow or asymmetric connections)...
But this doesn't play well with firewalls or NAT: the firewall doesn't know which ports to open or which addresses to translate them to. There are kludges for plain FTP where the firewall will watch the FTP control traffic and intercept the requests, but this won't work if the control channel is encrypted.
There are also techniques like bounce scanning, where you can make an FTP server connect to arbitrary host/port combinations as a slow form of port scanning, so you can see what's reachable from the perspective of the FTP server.
Re: Colour me disappointed...
It's more stupid than that...
They use Cloudflare, and Cloudflare fully supports IPv6 by default; for some reason they've got it turned off, or just not bothered to create the AAAA record.
From here, the latency to Cloudflare over IPv4 is usually over 3x higher than over IPv6, because of the overloaded NAT gateway imposed on me by the ISP.
IPv6 by default
Many ISPs now provide IPv6 by default, and many providers are now using NAT for IPv4 connections - especially for mobile users...
As a consequence of this, connections going over IPv6 are generally faster and more reliable.
The more traffic goes over IPv6 the better for the ISP and the customers, as NAT gateways are considerably more expensive to operate than routers.
The lack of a NAT gateway can also reduce battery usage on mobile devices, as they can use longer sleep times for protocols like activesync without the gateway terminating the connection for being idle.
A lot of the users still stuck on IPv4 have explicitly turned IPv6 off, or are using antiquated equipment.
You. Drop and give me 20... per cent IPv6 by 2023, 80% by 2025, Uncle Sam tells its IT admins after years of slacking
Quite the contrary unless you're using ancient software...
Modern operating systems prefer IPv6 and are designed to use it; running modern systems on a legacy IPv4-only network is actually a security risk.
Same for devices: pretty much everything supports IPv6 and will prefer it. Anything that doesn't is generally either so old that it's unsupported and a security hazard in its own right, or cheap garbage from China that is just as risky.
Hosting anything requires inbound, and because of the lack of inbound connectivity you end up with devices that proxy through a third-party server run by the manufacturer. Do you trust a Chinese server having access to your CCTV more than you trust a firewall under your own control?
P2P requires inbound too - and P2P is not just for BitTorrent; it's useful for many things, especially reducing latency, which is good for gaming and VoIP. With NAT you have to push your traffic through a third-party server, which increases latency and gives them leverage over you.
NAT means you share an address with multiple users; if one of those users does something to get banned from a particular service, then you are banned too. This is quite a significant problem in some countries where every ISP uses CGNAT.
NAT makes it difficult to determine the true source of traffic. Someone complains that malware traffic is originating from your home address; you have 20 devices and occasional visits from guests, so which of them is infected with malware?
The IPv4 address space is small enough that it's practical to scan all of it, and multiple strains of malware do so, which at best just wastes your bandwidth.
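Back-of-envelope numbers show why: at a modest million probes per second, the whole IPv4 internet falls in about an hour, while even a single IPv6 /64 is hopeless. A quick sketch (the probe rate is an assumption):

```python
# Assumed probe rate: one million packets/second from a single scanning box
RATE = 1_000_000

ipv4_space = 2 ** 32      # every possible IPv4 address
one_v6_subnet = 2 ** 64   # hosts in just one IPv6 /64

minutes_for_ipv4 = ipv4_space / RATE / 60
years_for_one_64 = one_v6_subnet / RATE / 3600 / 24 / 365

print(minutes_for_ipv4)  # ~71.6 minutes to sweep all of IPv4
print(years_for_one_64)  # ~585,000 years for a single /64
```

Brute-force sweeping simply stops being a viable tactic on IPv6.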
NAT gateways generally have specific kludges for protocols like FTP.
NAT is _NOT_ a security feature; it's broken.
If you want to control inbound traffic, use a stateful firewall.
NAT requires a stateful firewall, but a stateful firewall does not require NAT. We were using stateful firewalls with routable IPv4 on both sides back when IPv4 addresses were plentiful, and we do the same thing today with IPv6.
NAT is a dirty hack; it causes problems and breaks things. The sooner it dies the better.
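The "only allow replies to outbound connections" behaviour people credit to NAT is really just connection tracking, which works fine with routable addresses on both sides. A toy sketch of the idea (a real conntrack table also tracks ports, protocol and timeouts):

```python
# Toy stateful filter: no address translation anywhere, just recorded state.
established = set()

def outbound(src: str, dst: str) -> None:
    """Record state when an inside host opens a connection outward."""
    established.add((dst, src))  # expect replies from dst back to src

def inbound_allowed(src: str, dst: str) -> bool:
    """Inbound traffic passes only if it matches previously recorded state."""
    return (src, dst) in established

outbound("2001:db8::10", "2001:db8:ffff::1")                # inside host connects out
print(inbound_allowed("2001:db8:ffff::1", "2001:db8::10"))  # True: a reply
print(inbound_allowed("2001:db8:abcd::6", "2001:db8::10"))  # False: unsolicited
```

Note that nothing here rewrites an address; the filtering and the translation are entirely separate mechanisms.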
Re: It's the hardware
For any non-trivial network, IPv6 is much easier to manage than IPv4...
You have end to end connectivity, with firewall rules allowing or blocking traffic as required. You don't have address translation confusing the matter.
You have improved security because the rules are easier to understand, and when you allow or deny an address you're allowing just that address and not other things that might be behind it.
The address you see in logs is the address of the host, not the address of an intermediate node doing address translation.
You have a large enough address space to design everything properly without having to worry about address translation hacks.
If you're merging multiple previously separate organisations, or establishing VPN connections to third parties, you don't get address conflicts.
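The conflict check itself is trivial; the pain is that with RFC 1918 space both sides have usually picked the same ranges. A sketch with `ipaddress.overlaps` (networks invented for illustration):

```python
import ipaddress

# Two merging organisations, both of which "helpfully" chose 10.x space
org_a = ipaddress.ip_network("10.1.0.0/16")
org_b = ipaddress.ip_network("10.1.128.0/17")  # sits inside org_a's range

# Their IPv6 prefixes (ULA or provider-assigned) are unique by construction
org_a6 = ipaddress.ip_network("fd12:3456::/48")
org_b6 = ipaddress.ip_network("fd9a:bcde::/48")

print(org_a.overlaps(org_b))    # True: renumber or double-NAT before the VPN works
print(org_a6.overlaps(org_b6))  # False: just route between them
```

With IPv4 someone ends up renumbering or NATting the NAT; with IPv6 you just add routes.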
IPv6 is better; IPv4 is old, broken and requires all kinds of nasty kludges to keep limping along.
That's why Microsoft are moving to IPv6 and ditching IPv4:
Data surge as more Brits work from home? Not as hard on the network as their nightly Netflix binges, claims BT
Re: What was that ?
The last-mile connection is fibre, but that doesn't mean the ISP's backhaul can cope with lots of users maxing out their fibre connections at once.
There could also be poor/limited peering between different ISPs, so even domestic traffic will clog up or take inefficient routes.
The UK is different: the last-mile connections to users are often old and poor, but the backhaul and peering are generally very good. Plus, with users on slower connections, you need many more of them to start saturating the backbone links anyway. And one user saturating their local ADSL isn't going to have any effect on other users' lines.
Corporate VPN huffing and puffing while everyone works from home over COVID-19? You're not alone, admins
Over time, software becomes commoditised; the existing versions provide all the features people actually need, so there is no money to be made selling new versions. It's the end of the line for the business model of selling software.
It's going to be replaced with open-source software or services. Open source doesn't need to make a profit, so it can quite happily go on providing only bugfixes.
Re: File Transfer Protocol
Use of NAT is also a big flaw that breaks more things than just FTP...
Move to IPv6, give each FTP server its own address, and use IP the way it was designed: end-to-end addressing.
The reason FTP uses separate data connections is so you can do FXP: open two control connections to two different FTP servers, send a STOR to one and a RETR to the other, and tell the two servers to talk directly to each other without the data having to be pulled down over your connection and uploaded again. This was especially useful in the days of extremely slow connections, but it's still useful today, where you might have servers with multi-gigabit connections to each other but clients on asymmetric connections with poor upstream performance, clients on mobile connections, clients with small data caps, etc.
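The mechanics of FXP: put one server in passive mode, feed the host/port from its 227 reply to the other server's PORT command, then start the transfer on both. A hedged sketch with Python's `ftplib` (hostnames and the transfer itself are illustrative; only the reply parsing actually runs here, and real code must also collect the final 226 completion replies):

```python
import re
from ftplib import FTP  # stdlib FTP client

def pasv_hostport(reply: str) -> str:
    """Extract 'h1,h2,h3,h4,p1,p2' from a 227 PASV reply; this exact
    string is what gets passed to the other server's PORT command."""
    m = re.search(r"(\d+,\d+,\d+,\d+,\d+,\d+)", reply)
    if not m:
        raise ValueError("not a PASV reply: " + reply)
    return m.group(1)

def fxp_copy(src: FTP, dst: FTP, path: str) -> None:
    """Server-to-server copy: the data never touches the client (sketch)."""
    arg = pasv_hostport(dst.sendcmd("PASV"))  # dst listens for the data connection
    src.sendcmd("PORT " + arg)                # src will connect directly to dst
    dst.sendcmd("STOR " + path)               # dst receives...
    src.sendcmd("RETR " + path)               # ...what src sends

# The parsing half is testable without any servers:
print(pasv_hostport("227 Entering Passive Mode (10,0,0,5,19,137)"))
# 10,0,0,5,19,137
```

This is also exactly the reply a firewall's FTP kludge has to parse, which is why an encrypted control channel defeats it.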
The idea that people aren't already looking for vulnerabilities is extremely naive...
Organisations like the NSA almost certainly have access to the source code, and probably used that access to develop the suite of vulnerabilities that leaked a couple of years ago. There's no reason to believe they don't continue to do so.
There are also almost certainly underground leaks of source code out there, also being used by people with malicious intent.
The only difference open sourcing would make is that whitehat researchers would be able to look for vulnerabilities too, and might actually fix vulnerabilities rather than trying to exploit them.
Printer and scanner?
Why do your printer and scanner require specific software?
I have a LaserJet 4200, which Wikipedia tells me is from 2002; it still works perfectly with pretty much anything. It supports PostScript and PCL, so I can print from the very latest systems (Catalina has no problem), and even from vintage Unix boxes or AmigaOS.
There's nothing to stop you running a 32-bit VM either...
I updated to Catalina when it came out, and everything just continued working. Apple has had 64-bit support since the first port to x86 (even 10.4 could run 64-bit CLI tools), and even 64-bit PPC support before that; it's not like the removal of 32-bit is a surprise to anyone.
Many vendors updated their Mac apps to 64-bit years ago, so they just continue working if you update to Catalina, and users don't notice a thing.
The problem is those who just keep kicking the can down the road until it becomes a major problem and then panic, leaving users in the lurch instead of giving them a smooth migration - the same thing that's currently happening with IPv6 deployment. Behaving like this shows contempt for your customers, so I wouldn't want to use software supplied by such a vendor.
You shouldn't be learning fixed menus anyway; you should be learning how to locate the options you need wherever they may be hidden. Software changes: there are multiple programs capable of doing a single task, and multiple different versions of each one. If you get too used to the way a particular program/version does things, you'll start having problems when it's updated and things move.
Re: "Has Excel succeeded?..." at charting?
It stems from a typical office environment where only the msoffice tools are provided to users, and users only have training in these tools.
Yes a proper database would be better, but the users aren't provided with one and don't know how to use one anyway.
It's like those people who strap huge, unsafe loads to the back of their motorbike because they don't have access to a truck. They have a bike, and they know how to ride a bike, so that's what they use even though it's a poor tool and ends up being dangerous.
Sophistication and planning?
This wasn't an attack with "a high level of sophistication and planning"; this was a poorly configured network and a guy who knew just enough to be dangerous... If he really knew what he was doing, he would have known what monitoring was in place and taken better steps to cover his tracks.
Why was a service account for a printer able to login from outside the organisation?
Why did a printer service account have admin privileges?
As for this bit about requiring inside knowledge to do the hack quickly: I've seen enough internal pentests where domain admin was compromised within 15 minutes, and given what has been disclosed about service accounts and password sharing, I can't imagine it would have been very hard at this place.
Disks are usually the first things to die, especially in harsh environments... But with a small embedded OS it's quite possible to load everything into RAM and power down the boot drive, or boot diskless over the network, etc. Proper embedded devices fail a lot less often than Windows.