* Posts by Nate Amsden

2268 posts • joined 19 Jun 2007

Running DOS on 64-bit Windows and Linux: Just because you can

Nate Amsden Silver badge

support was removed by AMD

I believe I recall (perhaps incorrectly) that the AMD64(?) instruction set dropped 16-bit support when operating in 64-bit mode (if that makes sense). So for example if you wanted to run 16-bit code natively you could, but your OS would have to be 32-bit. Perhaps it had something to do with how the registers are set up in that mode, or something?

Just doing a quick web search turns up one comment (#13) from 2004 which claims this, though I'm confident I recall seeing more official sources over the years:

https://forums.anandtech.com/threads/amd64-and-16-bit-compatibility.1292075/

Since Intel licensed the instructions from AMD (I think I recall that?), the same would be true for 16-bit code running on 64-bit Intel x86-64 chips.

Adobe apologizes for repeated outages of its Creative Cloud video collaboration service

Nate Amsden Silver badge

Good example

Just because you are hosted in the cloud doesn't mean you can leverage it if your app sucks (which it seems so many people don't understand, and probably never will). I can't count the number of times over the past 20 years of managing web applications where the performance limits were in the app and adding more servers/CPU/whatever wasn't going to do anything. Add to that, most places don't properly performance test (I've actually only seen one valid performance test in 20 years, and that was a unique situation that really can't be replicated with another type of application). I have seen countless ATTEMPTS at performance testing, all of which fell far from reality.

The org I work for did tons of performance tests (I wasn't involved in any of them, but my co-worker and manager were) before we launched our app stack in public cloud in late 2011, only to have all of those numbers tossed out within weeks and the knobs turned to 11, because the tests did not do a good enough job of simulating production workloads and cloud costs skyrocketed as a result. Of course moving out of public cloud months later (early 2012) helped a huge amount, and every day since has brought better performance, latency and availability across the board, saving $10-15M in the process (over the past decade) for a small org.

I'll always remember a quote from a QA director probably 17 years ago. He had a whole room of server equipment they used to do performance tests on for that company (the only company I've worked at that had dedicated hardware for performance testing), and his words were "if I had to sign off on performance for any given release we wouldn't release anything". At that company there were a handful of occasions, immediately following a massive software update, where we had to literally double the server capacity in production for the same level of traffic vs the day before. I ordered a lot (for us anyway) of HP DL360s back then, shipped overnight, to get the capacity in place on many occasions.

Another company I was at (the one with the good performance test) had the fastest running app I've ever seen, over 3,000 requests per second per 1U server sustained, made possible by having no external dependencies; everything the app needed was on local disk (the app ran in Tomcat). One particular release we started noticing brownouts in our facilities from hitting traffic limits that we should not have been hitting. We hadn't run a performance test in a while, and when we did we saw app throughput had dropped by 30% vs an earlier release. Developer investigation determined that new code introduced new required serialization stuff which reduced the performance; they suspected they could get back some of that decrease but far from all of it.

Then there are the DB contention issues: Oracle latch contention at a couple of different jobs, and massive MySQL row lock times (60-120+ seconds at times) at other places due to bad app design. Another quote I'll forever remember from that company 17 years ago, during a massive Oracle outage due to latch contention: "Guys, is there anything I can buy to make this problem go away?" (I wasn't responsible for the DBs, but the people who were told him no.)

OVHcloud datacenter fire last year possibly due to water leak

Nate Amsden Silver badge

Re: Ironic

There are no smart ones here. The smart ones would never have been a customer of OVH to begin with. The only reason I can see to use a provider like OVH is because you really, really don't care about just about anything (other than perhaps cost).

The big IaaS clouds are really not much better though. They too design for facility failure and expect the customer to account for that (and as we have seen, many customers do not account for that, or if they do they do a poor job of it). A lot of people still believe that big names like Amazon and Microsoft have super redundancy built into their stuff; of course they do not, because that costs $$$, so they'd rather shift that cost onto customers.

Meanwhile in my ~20 years of using co-location I have witnessed one facility failure (a power outage due to poor maintenance), and we moved out of that facility shortly after (the company was hosted there before I started in 2006). That facility suffered a fire about 2-3 years later. Customers had plenty of warnings (3+ power failures in the years prior to the fire) to leave. There are a TON of facilities I'd never host critical stuff in (probably 60-75% of them), even the facility I use to host my own personal gear in the Bay Area (which has had more power outages than my apartment over the past 7 years, but for my personal stuff, given the cost, it's not a huge deal).

My favorite facility at the moment is QTS Metro in Atlanta (look up the specs on the facility, the scale is just insane). Been there over 10 years without a single technical issue (not even a small blip), and the staff is... I don't have words for how great the staff is there. Maybe partially an artifact of being "in the south" and perhaps more friendly, but they are just amazing. Outstanding data center design as well: 400-500k+ sq ft of raised floor in the facility, N+1 on everything, and nice and clean. I put our gear in there while it was still somewhat under construction.

By contrast my most hated facility was Telecity AMS5 in Amsterdam (now owned by Equinix). I hated the facility so much (you had to put "booties" over your shoes before walking on the raised floor, WTF), and I hated the staff even more (endless stupid, pointless policies that I've never seen anywhere else). Fortunately we moved out of that place years ago (before Equinix acquired them).

Splunk dabbles in edgy hardware, lowers data ingestion

Nate Amsden Silver badge

maybe a new feature...

But I have been filtering and dropping data before it gets into Splunk for many years by sending it to the nullQueue via regular expressions in transforms.conf, a very well documented ability of Splunk.
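
As a rough illustration for anyone who hasn't done it, the filtering boils down to a pair of stanzas along these lines (the sourcetype name and regex here are just examples, not from my actual config):

    # props.conf - attach a filtering transform to a sourcetype
    [lb:access]
    TRANSFORMS-null = drop_healthchecks

    # transforms.conf - anything matching REGEX is routed to nullQueue and never indexed
    [drop_healthchecks]
    REGEX = "GET /healthcheck HTTP/1\.[01]" 200
    DEST_KEY = queue
    FORMAT = nullQueue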

Not only did it reduce the license costs but it also dramatically cut down on the amount of sheer crap that was going into the indexes, which introduced a lot more noise and made it harder to find things. Simply removing the HTTP logs from our load balancer health checks that returned success, per my notes back in 2018, saved nearly 7 million events per day. Overall at that time I removed roughly 22 million events per day for our small indexers, which at the time were licensed for 100GB/day. Included in that was 1.5 million useless Windows event logs (these were more painful to write expressions for, one of which is almost 1,500 bytes). We had only a handful of Windows systems, so it's absurd they generated so many events! 95%+ Linux shop.

The developers for our main app stack also liked to log raw SQL to the log files which got picked up by Splunk. I killed that right away of course(with the same method) when they introduced that feature. I also documented in the config the exact log entries that matched the expressions to make it useful for future maintenance.

Don't get me wrong, it wasn't a fast process; it took many hours of work and time spent with regex101.com to work out the right expressions. Would be nice (maybe fixed now, not holding my breath) to be able to make Splunk changes to the config files and not have to restart Splunk (instead perhaps just tell it to reload the configuration).

VMware ESXi syslogs are the worst though; I have 59 regexes for those, which match 200+ different kinds of ESXi events. At least with ESXi 6 the amount of noise in the log files is probably in excess of 90%. vCenter has a few useful events, though I'd guesstimate the noise ratio there at least 60-75%.

I had been using nullQueue for the past decade but really ramped up usage in 2017/2018.

Citrix research: Bosses and workers don't see eye to eye over hybrid work

Nate Amsden Silver badge

Re: Really ?

At my first paying job back in the 90s they had installed, I think it was, Internet Manager by Elron Software just before I started (I looked it up again recently on archive.org to confirm the name). My friend who was in the IT dept got me the position in a new "startup" within the company. The parent company was a 24/7 manufacturing shop. One night someone caught the "shop floor" employees browsing porn in the middle of the night, so they decided to install this software, mostly to block porn. I guess it was a transparent proxy of sorts; it routed all internet traffic somehow through my friend's desktop computer (or perhaps just the monitoring aspect), and he could see in real time every URL people were looking at, and it would flag stuff for him to block etc.

The #1 offender (by far) was the VP/brother of the owner of the company. He may have been a co-owner to some degree, I'm not sure; he was also the head of HR for a while after the HR person left. It also generated a list of top users of internet bandwidth, and I was #1 pretty much every time, I believe in large part because I used a screensaver (wow, can't believe I remember the name now) called PointCast(?), a really cool (to me anyway) news ticker thing that pulled in tons of data. They would give me shit for being the top user (by a big margin) every month.

He made a point to browse mostly non-English porn sites, which the monitoring software had trouble flagging automatically. But he would sit in his office and just browse away, and my friend would block the sites in real time on some days. I so wanted to pick up the phone and call him and say something like "Oh wow that's a great site don't you think? I'll save that for myself for later.."

This of course was well before the days of HTTPS being common so everything was clear text.

Every computer I've ever used at a company in my 25 year career has been set up by myself. The last company I was at where I did not have some control of IT systems (despite not officially being in IT since 2002) was about 2006ish. I specifically recall the IT admin guy getting frustrated with my computer, which was Windows XP but with the Explorer shell replaced with LiteStep (similar to AfterStep, which I liked at the time). He couldn't figure out how to do things, so he would ask me to do stuff like open the control panel for him so he could do something (a rare occasion). I don't recall the reason(s) why he would want/need to do something to that computer; maybe I asked him, not sure.

Lenovo halves its ThinkPad workstation range

Nate Amsden Silver badge

Can the new one use ECC RAM?

I still use a P50, and I regret not getting it with the Xeon so I could have ECC memory. Currently at 48GB (max 64GB). Not that I have (m)any issues without ECC; I'd just like it with this much memory. I don't know what I was thinking when I opted for the regular i7.

IMO anything with more than say 4GB should be on ECC where possible, and IMO again beyond say 64GB regular ECC isn't adequate anymore (feel free to do web searches on HP Advanced ECC, IBM Chipkill, and Intel Lockstep mode (introduced with Xeon 5500)), though I like HP's implementation the best by far (as it offers better-than-regular ECC protection and has zero memory overhead).

Note that HP's Advanced ECC was first introduced in 1996, it's not new technology. Shocking that so many are still relying on regular ECC these days.

SmartNICs power the cloud, are enterprise datacenters next?

Nate Amsden Silver badge

another issue..

is these things will bring more bugs with them. Take for example iSCSI offload HBAs. They're quite common, perhaps almost universal on storage arrays, but on servers themselves they are rarely used (even if the capability is present, as it often is on recent systems). While my systems primarily run on fibre channel storage (so I don't have a whole lot of recent iSCSI experience), I have read almost universally over the years that the iSCSI offload HBAs are bug ridden, and the general suggestion when using iSCSI is to use a software initiator (which by contrast gives far better results in most cases).

I remember my first experience with hardware iSCSI on a 3PAR E200 storage array in 2006. The NIC on the array (Qlogic dual port 1Gbps) had an issue where it would lock up under high load. I managed to mitigate the problem for a long time by manually balancing paths with MPIO (this was before ESX had round robin). Then maybe a month before I quit that job I rebooted the ESX hosts for software updates, and forgot to re-balance the paths again. Major issues after a couple of weeks. I remember on my last day at the company I had to hand the support case off to a co-worker as the issue was not yet resolved (and was pretty critical, impacting production). A couple of weeks later that company replaced all the iSCSI with fibre channel to solve the issue (a patch ended up being made available a few weeks after that). Felt bad to leave them hanging, but my next job started pretty quick so I couldn't stick around.

I have read several complaints over the years about network cards with TCP offload enabled causing other unexpected issues as well, and in many cases the suggestion is to disable the offload. It also makes things more difficult to diagnose: when you run a packet capture on the VM or on the host, the data differs from what actually goes over the wire, because the NIC is modifying the packets after the capture point.
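
For what it's worth, on Linux the various offloads can be checked and toggled per NIC with ethtool while troubleshooting; a quick sketch (the interface name is just an example):

    # show the current offload settings
    ethtool -k eth0

    # disable the common ones while chasing a problem
    # (tso = TCP segmentation offload, gso/gro = generic segmentation/receive offload)
    ethtool -K eth0 tso off gso off gro off

    # then compare a capture taken on the host against one taken further upstream
    tcpdump -i eth0 -w host-side.pcap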

So beyond cost, these Smart NIC people need to be sure their stuff is really robust for enterprises to consider adopting them. Hyperscalers have the staff and knowledge to deal with that extra complexity. Given history I am not really holding my breath that these vendors will be up to the task. But they could find use cases in tightly vertically integrated solutions that are sold to enterprises, rather than a generic component that you can put in your average server.

Elliott Management to WDC board: Spin out or sell flash biz

Nate Amsden Silver badge

Re: no growth in flash (for WD anyway)?

yeah but the end result for WD is they don't have flash anymore and they get the same money back they spent to get into flash in the first place?

Nate Amsden Silver badge

certainly the disk drive half. Flash can be sourced from many places. Wasn't aware hybrid drives were still around, I used the early Seagate 2.5" hybrids for several years. Doing a search for hybrid on Western Digital's store indicates they have no hybrid flash/hard disks for sale. Not sure if they ever had one. They do have hybrid storage systems for sale.

Meanwhile (after checking), it seems Seagate sells a FireCuda drive that is hybrid, though I don't see mention on the data sheet of how much flash the drives have (at least for 2.5"; for 3.5" apparently they have 8GB of flash).

Nate Amsden Silver badge

no growth in flash (for WD anyway)?

The article states WD bought SanDisk 6 years ago for $19 billion, and then states the flash business now is believed to be worth $17-20 billion (assuming that is what "enterprise value" means?). The article says the transaction was transformative, but the investor action seems to want to just cancel it and get their money back (in a sense, assuming they get ~$19 billion if it's sold off?).

I don't have an opinion on whether it is a good idea or a bad idea either way (don't care), though I believe I specifically recall Chris Mellor here on El Reg saying something along the lines of Seagate being doomed because they didn't do something similar to WD (and I think he said so many times). Now here we are years later and there's a possibility that WD undoes all that.

Just seems like an interesting situation. What's most surprising to me, I guess, is the valuation of the WD flash unit apparently having gone nowhere in 6 years (despite the massive increase in flash usage during that time). Maybe WD paid a super huge premium at the time, I don't know.

VMware walks back ban on booting vSphere from SD cards or thumb drives

Nate Amsden Silver badge

Re: what is vsphere.next

interesting ok, was fearing it was perhaps some kind of "rolling release" of vsphere.

Nate Amsden Silver badge

As the article states, on a local SSD, or I suppose a spinning disk would work fine too. All of my systems boot from SAN (over fibre channel).

Never liked the thought of a cheap crap USB flash/SD card being a point of failure for a system that cost upwards of $30k+ for hardware+software (that goes all the way back to the earliest ESXi 3.5, I think?)

If you are at large scale you may want to check out (or may already be using) stateless ESXi, which basically boots from the network directly into memory. Sounds neat (never tried it), though it seems like quite a bunch of extra configuration work is required, hence more useful at larger scales (perhaps starting in the 100+ host range).

My two personal vSphere hosts (run in a colo) boot from local SSD RAID 10. My previous personal ESXi host (built in 2011) did boot from USB (and the flash drive died at one point; fortunately it was a graceful failure), because the local RAID controller (3Ware) was not supported for boot.

Nate Amsden Silver badge

what is vsphere.next

I did a web search and found nothing.

VMware says server sprawl is back, and SmartNICs are the solution

Nate Amsden Silver badge

offloading storage better

I would think offloading of storage would be better, especially given their vSAN stack. Remove the CPU, memory etc. overhead from the host and put it on the DPU (similar to SimpliVity, except I assume that only offloaded CPU; also similar to what Nebulon does, I think). In fact perhaps VMware should just acquire Nebulon (I have no experience with it).

I assume they haven't gone that route yet because storage is more complex than networking (hence my comment about acquiring Nebulon). With these SmartNICs I haven't seen any mention of the ability to offload SSL, for example (perhaps they do and the news articles just haven't mentioned it). Commercial load balancers like BigIP and Netscaler, for example, all have SSL offload chips (at least the hardware appliances). I could see a new type of virtual hardware you could attach to a VM to map an SSL offload "virtual DPU" or something to the VM to provide the hardware acceleration (similar to virtual GPU), so a VM running intensive SSL stuff could leverage that (provided the SSL code supported leveraging the offload).

Meta strikes blow against 30% 'App Store tax' by charging 47.5% Metaverse toll

Nate Amsden Silver badge

maybe should be other way around

facebook paying for/subsidizing half of all metaworld purchases to encourage people to make things worth purchasing.

(never have had a facebook account, well maybe they have a shadow account for me I don't care either way)

Day 7 of the great Atlassian outage: IT giant still struggling to restore access

Nate Amsden Silver badge

Re: Ah....remember....."cloud" is cheaper......

really it comes down to too many eggs in one basket. Certainly service failures can occur on premises. But pretty much universally those failures affect only a single organization. Granted there can be times when multiple companies are experiencing problems but it's still tiny compared to the blast radius of a SaaS provider having a problem.

My biggest issue with SaaS, at least from a website perspective, is the seemingly constant need the provider feels to change the user interface around, convinced everyone will love the changes. Atlassian has done that tons of times and it has driven me crazy. Others are similar, so convinced all customers will appreciate the changes.

Go change the back end all you want as long as the front end stays consistent please.

At least with on prem you usually get to choose when you take the upgrade, and in some cases you can opt to delay indefinitely (even if it means you lose support).

Just now I checked again to confirm. Every few months I go through and bulk close resolved tickets (in Jira) that have had no activity for 60 days. I used to be able to add a comment to those tickets saying "no activity in 60 days, bulk closing". Then one day this option vanished. I asked Atlassian support what happened and they said that functionality was not yet implemented on their new cloud product (despite us having been hosted in their cloud product for years prior). I can only assume it is a different code base to some extent. Anyway that was probably 3-5 years ago, and I still don't have that functionality today. (There is an option to send an email to those people when the ticket closes; I don't want that, I just want to add a comment to the ticket.)

Don't get me started on the editor changes in Confluence in recent years, just a disaster. Fortunately they have backed off of their plans to eliminate the old editor (for how long I don't know, but it seems like it's about 2 years past when I expected them to try to kill it).

Then there was the time they decided to change the page width on everything in Confluence (I assume to try to make it printable); at least in that case they left a (per user) option to disable that functionality (it messed up tons of pages that weren't written for that option).

The keyboard shortcut functionality drove me insane in Confluence as well. For years (assuming it was there before; I don't know, I never used keyboard shortcuts in Confluence going back to my earliest days of using it in 2006) it was not a problem, but in the past couple of years I would inadvertently trigger a series of events on documents that I did not want, just by typing. I was able to undo it every time, and finally disabled the keyboard shortcuts a few months ago.

Atlassian Jira, Confluence outage persists two days on

Nate Amsden Silver badge

Re: Cloud vs On_Premise

While I knew the situation, I had a good laugh anyway. I recently renewed a 10 user server license for Confluence that I purchased(?) about 10 years ago (for extremely limited personal use) but that had lapsed. The cost to renew was $110, to "true up" the license to the current time. That's fine, not a big deal.

Then I saw the suggestion: hey, you can move to the data center edition. Again I knew the situation but was curious anyway. It was quite something to see the $10 price on the left for my existing license (to renew again for another year), vs the lowest cost data center offering, a mere $27,000 I think it was, on the right (for 500 users).

But at least the license is still perpetual(for the given version of the product anyway).

I've been using Confluence since early 2006, and with the cloud version (inherited from the orgs I worked at) the experience has significantly gone downhill in many aspects. My favorite version of Confluence I think was probably version 3 (guessing here, it was a long time ago; the last version to support editing wiki markup). I have been somewhat relieved that their cloud folks seem to have postponed indefinitely their forced migration to the new editor. I had so many issues with it, and tickets and phone calls. They kept saying that the new editor would be forced soon and I'd have no choice but to use it. But that was about 2 years ago now and that hasn't happened. Surprising they have not been able to address whatever edge cases the old editor allowed that the new one does not yet. At least they fixed one of my most annoying issues, which was keys getting stuck and just printing the same character over and over and over again. Took them weeks to figure it out after trying to blame my computer/browser for the issue.

I use JIRA regularly as well but much less often. I don't use any other Atlassian products.

My regular wiki at home is Xwiki which seems to work quite well, confluence is just for some other stuff that I want to be able to access that I haven't moved(yet).

Microsoft arms Azure VMs with Ampere Altra chips

Nate Amsden Silver badge

Re: AWS Graviton2 is similar

Graviton2 is a different situation I think. I believe that chip is designed by Amazon, meaning they reap more of the benefits of vertical integration (mainly cost savings, not having to pay higher margin costs to another supplier).

Nate Amsden Silver badge

what

Who would ever realistically compare an 8-core CPU with hyperthreading against a 16-core CPU (hyperthreading or not)? Also the article makes it sound like the cost of an 8-core x86 VM (with HT enabled) is the same/similar to a 16-core x86 VM. I assume this is not the case (I have never used Azure; the fixed allocation models of all of the big public clouds have been a big turn off for me starting ~12 years ago, so I really haven't paid much attention to them over the years).

Things would be a lot simpler if they just spit out some numbers from some benchmarks to compare the systems. Benchmarks are of course questionable by themselves but the performance claims being made here seem even more vague than benchmark numbers.

However, if a single modern ARM CPU core can compete with a single modern x86 CPU core in server workloads, that would be interesting. Historically anyway it seemed ARM's designs were for just tons of cores on the chip (more than the standard x86 anyway), so as an aggregate they may very well be on par with x86 (historically again they have had similar power usage from what I've read, that being 150W+/socket), but you're not comparing core-to-core performance (because the chips don't have the same number of cores - which in many cases doesn't matter, I just mention it because the article seems to focus in on core-to-core performance).

Never personally been a fan of hyperthreading myself, mainly because it's not easy to know how much extra capacity those threads give (but I haven't disabled it on any of my systems; I just measure capacity based on actual cores rather than some funny math to adjust for extra threads).

Linux kernel patch from Google speeds up server shutdowns

Nate Amsden Silver badge

HDDs too?

is this an issue with HDDs too? (I assume it would be?) I've never noticed anyone with lots of hard disks (assuming they are not abstracted by a RAID controller) complaining about slow reboots over the years.

GitHub explains outage string in incidents update

Nate Amsden Silver badge

a lot do have this issue; it's just that not many companies are public about what causes their outages. DB contention is a pretty common issue in my experience over the past 18 years of dealing with databases in high load (relative to what the app is tested for) environments. I've seen it on MySQL, MSSQL and Oracle, and in all cases I've been involved with, the fault was with the app design rather than the DB itself (which was just doing what it was told to do). (Side note: I am not a DBA but I play that role on rare occasions.)

I remember in one case on MSSQL the "workaround" was to restart the DB and see if the locking cleared; if not, restart again, and again, sometimes 10+ times before things were ok again for a while. Fortunately that wasn't an OLTP database. The most critical Oracle DB contentions involved massive downtime due to that being our primary OLTP DB. MySQL contentions mainly just limited the number of transactions the app could do; adding more app servers, more CPU, more whatever had no effect (if anything it could make the issue worse), as the row lock times were hogging up everything.
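
For the MySQL row lock case at least, the pileup is visible while it's happening if you know where to look; roughly something like this (the exact tables moved in 8.0, this is the 5.x flavor):

    -- who is waiting on whom (MySQL 5.5-5.7, InnoDB)
    SELECT r.trx_mysql_thread_id AS waiting_thread,
           b.trx_mysql_thread_id AS blocking_thread,
           r.trx_query           AS waiting_query
    FROM information_schema.INNODB_LOCK_WAITS w
    JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
    JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;

    -- MySQL 8.0 moved this into performance_schema.data_lock_waits

    -- the big picture, including long-running transactions and lock waits
    SHOW ENGINE INNODB STATUS\G

Of course, seeing the contention is the easy part; fixing the app design that causes it is the hard part.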

Hackers remotely start, unlock Honda Civics with $300 tech

Nate Amsden Silver badge

Re: almost never use remote key fob features

I don't think it transmits far when used in this manner? Because if the key is further away than a couple of feet, I think the door won't unlock (note I am not pressing any buttons on the key fob to transmit anything, just pressing the unlock button on the car door). And the car recognizes when the key is inside the car or not (fortunately, as it prevents me from locking the keys inside the car, which I have accidentally tried to do many times).

Nate Amsden Silver badge

almost never use remote key fob features

Since I first heard about these kinds of attacks many years ago I almost never use my remote key fob (at least the buttons on it). Of course I have to use it to unlock, but I just make sure I am close to the car when I hit the unlock button (on the car, not the key fob) so it can sense the proximity of the key fob to authenticate the unlock. I assume that is much harder to sniff out than pressing the unlock or lock buttons on the key fob from a distance anyway.

PlanetScale offers undo button to reverse schema migration without losing data

Nate Amsden Silver badge

Re: sounds neat

thought about it more and I'm sure I would have pointed the app at the read only standby DB, then I could blow away the primary DB and restore the updated data, then update the app to point to the primary DB, restart it, then rebuild the standby. Basically just a few seconds of downtime.

Nate Amsden Silver badge

sounds neat

But performance is usually much more important than schema changes. I mean, people have been doing schema changes for a long time, generally they get tested quite a bit, and it's a known problem to deal with if a schema change goes badly.

I had one situation at a company ~15 years ago with MySQL: we had to do a schema change but the production MySQL server lacked the disk space needed to re-write the entire table (at the time maybe it was 100-300GB, I don't remember). Fortunately it wasn't our OLTP database (which was Oracle) but rather a data store that didn't have to be updated too often (the data inside was job search results). Ironically enough I was trying to justify the need for a SAN at the time, to leverage shared storage and make it available to more hosts. I already had a "demo" SAN installed on site, though I had it installed at our HQ, not at our co-location. I did this purposely because the vendor said pretty much every customer that deployed their system ended up buying the demo unit because they had to rush it into production for some critical need, and I wanted to avoid that trap.

So here we are, a critical need for this SAN. So I came up with a crazy plan. We stopped the app(s) that updated the MySQL database; we may have even put the database in readonly mode, I don't recall. The app continued to function fine, and users were unaffected other than newer things weren't getting into the DB (the only thing that updated the DB was the app itself, users could never update that DB).

I took a USB drive, drove it to the data center, hooked it up, copied the DB over, drove it back to the office, copied it onto our demo SAN, did the schema migration, then copied the DB back to USB, drove it back to the data center and copied it back into place again. I assume we had to take an outage on that MySQL DB while copying the data back into place, but it really wasn't a big deal. Fortunately our colo was only about a 45min drive away. These days my colo is 2,000 miles away.

Manager was convinced at that point we really could benefit from shared storage so we bought that demo unit after all and had the vendor move it to our colo.

Samba 4.16 release strips away more SMB 1

Nate Amsden Silver badge

People are too paranoid about clear text protocols. If you're running FTP on your LAN (which is the only place something like SMB would be run; can't imagine anyone using it over the internet, same goes for NFS) and you are worried about man in the middle, you've got way bigger problems than FTP if you already have an "attacker" on your inside network with the ability to intercept that traffic.

FTP is probably less vulnerable, in that there are generally far fewer "exploits" against FTP servers than SMB systems.

OVHcloud datacenter 'lacked' automatic fire extinguishers, electrical cutoff

Nate Amsden Silver badge

You seem to believe the hyperscalers build their data centers to top tier standards. They do not. They really never have. Their model of operating is if a data center goes down you are still online because you built your apps to handle that failure by leveraging multiple facilities. Obviously there is a huge cost difference from a top tier facility to a lower tier facility, which is why they do it the way they do.

The only exception might be in markets where hyperscalers are leveraging co-location capacity, but they won't tell you that they expect you to make your apps more redundant.

But as we've seen in many situations most orgs don't do that(or at least do a poor job at it).

Nate Amsden Silver badge

got what you didn't pay for

Too bad for the customers not smart enough to realize the system was designed in a way so that failures would have to be handled by the customer rather than the provider. They just saw the low price and said hey let's use that, all data centers are the same right?

https://www.datacenterknowledge.com/archives/2010/05/04/terremark-extinguishes-fire-stays-online

"Early on April 30, a fire broke out in one of the data center electrical rooms at Terremark's NAP of the Capital Region in Culpeper, Va. The fire department was on site for hours, and the event was covered by local media. But the facility remained online throughout the entire event, according to Terremark, with no downtime for customers."

https://www.datacenterknowledge.com/archives/2010/11/03/damage-from-fisher-plaza-fire-6-8-million

"A 2009 fire at the Fisher Plaza data center hub in Seattle caused $6.8 million in damages[..]The July 2, 2009 incident knocked payment processor Authorize.net offline, disrupting e-commerce for thousands of web sites, while also causing lengthy downtime for Microsoft’s Bing Travel service, domain registrar Dotster, colocation company Internap and web hosting provider AdHost,"

I worked at a company that was hosted at the 2nd facility (hosted there before I started my job); I moved them to a different facility in mid 2007, if I recall right, after two full data center power outages (there were more before I started). One of the power outages was caused by a curious customer wondering what the "Emergency Power Off" button did (after that incident all customers had to attend EPO training before gaining access). Though in THIS case, the fire was contained to the electrical room and as far as I know no customer equipment was damaged.

The building ran on generator trucks for several months while they replaced the electrical system. The fire caused roughly 42 hours of downtime for the facility I think (including knocking the HQ of a local TV channel offline). I do recall being told stories of some customers freaking out because the batteries in their storage systems (there to maintain data in the cache) can of course only retain power for X number of hours, and there was uncertainty about when power would be restored.

Though some storage systems were designed to handle that better in that they ran on internal battery long enough to dump the contents of cache to an internal drive(one per controller so there's two copies of the cache data) before shutting the controller down in the event of a power outage.

Devs of bcachefs try to get filesystem into Linux again

Nate Amsden Silver badge

Re: ZFS is superior

If you have 24x7x365 requirements you'll be using a proper storage array with redundant controllers and hot swap everything. ZFS as a storage system in the best case is a tier 2 storage solution, though most ZFS configs would be hard pressed to even be tier 3 storage. Nothing against ZFS, it's better than other general file systems certainly. It's more the design/architecture of the underlying hardware.

I wrote a blog post covering this back in 2010, the general design of ZFS is pretty much the same since. I referenced this email thread as the source:

https://www.mail-archive.com/zfs-discuss@opensolaris.org/msg18898.html

The author of the email makes a great point regarding data availability(ZFS can't help much here) vs data reliability(ZFS does good here).

Nate Amsden Silver badge

Re: Snapshots

might be easier to use something like rsnapshot. I use rsnapshot for most of my home backups(seems I have 16 linux servers on my home/personal colo network). I back up things like /etc /var /home /usr etc. Though I think it's MAYBE once a year I go to the backup to look for something.

I deployed ZFS on my home file server many years ago because I wanted snapshots for situations like this. Turns out in the ~5-6 years I was using ZFS I never used the snapshots once. So I decided to stop using it, as that was my most important use case for it(at home anyway). I'm perfectly happy with hardware RAID 10 (largest array I've had at home for the past 20 years has been just 4 drives, and yes when I ran ZFS it ran on top of my 3Ware RAID card).

It certainly won't catch every write since it just runs at regular intervals. But if you lose data that frequently then you have bigger problems to solve.
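
In case anyone wants to try it, rsnapshot is just a small tab-separated config plus cron entries; a minimal sketch along the lines of what I described above (paths and retention counts are examples only):

    # /etc/rsnapshot.conf - fields must be separated by tabs, not spaces
    snapshot_root   /backup/snapshots/
    retain  daily   7
    retain  weekly  4

    # what to back up (rsync semantics, trailing slashes matter)
    backup  /etc/   localhost/
    backup  /var/   localhost/
    backup  /home/  localhost/
    backup  /usr/   localhost/

    # cron then does the rotation, e.g. in /etc/cron.d/rsnapshot:
    # 30 3 * * *   root   /usr/bin/rsnapshot daily
    # 0  4 * * 1   root   /usr/bin/rsnapshot weekly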

Nate Amsden Silver badge

Re: Say what you will about Windows . . .

I hope it's better now, about 10 years ago a friend of mine worked at MS in high tier escalation dept for their products(Windows server at least maybe others). He said one of their regular suggestions/processes was helping customers turn off dynamic disks in windows as they were a huge source of problems. I think they had some special tool or thing to flip the bits to make them non dynamic anymore? I don't remember exactly. But was surprised to hear that dynamic disks were so problematic(I had used them on several systems w/o issues though small non critical systems, less than 5% of my work involves dealing with windows).

As recently as a few years ago I had a conversation with a Windows admin who felt the same, that dynamic disks were to be avoided at all costs. Could be they got fixed and that person wasn't aware, I don't know.

By contrast I don't recall ever hearing bad things about LVM on Linux. I suppose a bad thing could be snapshot performance (I've read there's a 50% performance hit?), though honestly in the 15+ years I've been using LVM I've never once taken an LVM snapshot. All my snapshots happen on the storage array.
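
(For reference, the LVM snapshot workflow is only a few commands; the VG/LV names below are made up:)

    # create a copy-on-write snapshot of an existing logical volume
    lvcreate --snapshot --size 5G --name home_snap /dev/vg0/home

    # mount it read-only to pull files out of it
    mount -o ro /dev/vg0/home_snap /mnt/snap

    # remove it when done - the copy-on-write overhead only exists while the snapshot does
    umount /mnt/snap
    lvremove /dev/vg0/home_snap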

CafePress fined for covering up 2019 customer info leak

Nate Amsden Silver badge

maybe better than nothing

Sounds like CafePress failed at almost every level from a technical/security standpoint. $500k seems like a small fine for something that impacted millions of folks, especially given the amount of data that was stolen. Have no idea what the typical fine is for something like this. Likewise it seemed the penalties for Equifax were very light as well (well, for Equifax the penalties were a joke, but that breach got me off my ass to finally make a habit of keeping my credit report locked/frozen).

Given that the last 4 digits of credit card numbers were snagged, it wouldn't surprise me if they had lots of PCI problems as well, since obviously they seemed to collect credit card numbers even if they didn't happen to store the full number. (I remember one company I was at before PCI was a thing; you could see full credit card info in their logs if you just set the logs to DEBUG, and the logs were in DEBUG mode most of the time because the app stack was terrible.)

Maybe in the future the penalties will be much greater. How much would the penalty be if this was a GDPR violation, anyone know/guess?

Germany advises citizens to uninstall Kaspersky antivirus

Nate Amsden Silver badge

Re: Just don't use ANY anti-virus

It's only malware if it is doing things without your consent. Most people install AV with those filters and things willingly, in order to get greater protection. Sort of like saying firewalls that do SSL intercept are malware too because they can see inside your encrypted connections (and I read recently that at least Palo Alto's newer versions have no issues with TLS 1.3 either). But again, that is by design, and the customers are installing it knowing that it does those things (and wanting it to do those things).

Canonical: OpenStack is dead, long live OpenStack

Nate Amsden Silver badge

Re: Too complicated is not the whole problem

I had high hopes for OpenStack back when VMware had their brief trip down the "vRAM tax" road (looks like that was in 2012). I was convinced that vSphere 4.1 would be my last version of VMware and I'd jump to OpenStack, or more likely to plain KVM (being a long time Linux veteran, with 90%+ of our VMs being Linux).

But I learned a bit more as time went on, and VMware backtracked on their vRAM tax(my org was never affected as we didn't deploy the versions that had the tax), and decided making that technology jump didn't make sense for us anymore.

Nate Amsden Silver badge

Re: Too complicated is not the whole problem

same can be said for Kubernetes. Too complicated for most orgs. Though that hasn't stopped the hype around that tech (yet anyway). I felt Hadoop was similar as well back when it was at its peak hype. I actually had a VP of my group at the time suggest that HDFS could be our VMware storage; we didn't need a SAN, just run the VMs from HDFS. The company built a 100+ node Hadoop cluster back in 2010 after I left (using hardware from a vendor I was against), and I was told it took them over a year just to get the hardware stable (after suffering a ~30% failure rate in the cluster over an extended period, which resulted in ~50% of the hardware being offline due to quorum requirements?). A new VP came in and decided on different hardware. They still struggled with writing correct jobs, I was told; several people complained about why it was so slow, and it turned out in some cases the jobs were written in a way that prevented them from being run on more than one node. But at least they had the data, probably 15TB of new data a day. One of the folks I knew at the time was at a company which deployed Hadoop as well, but they had something like 500GB of data total. WTF, why are you deploying that? He said they wanted it.

Some forces at my current org wanted Kubernetes. Not because it was a good tool for us but because it was cool. VMs were old school; they wanted the fancy auto scale up and scale down. I knew it wouldn't work the way they expected. They spent at least 3 years I think on the project, and even got some services to production. All of it was shut down last year when the main person working on the project left. It had tons of problems, one of which they spent 6+ months trying to resolve (it ended up being an MTU problem on the containers that were built). Auto scaling didn't work as advertised (perhaps due to lack of performance testing, something I warned about many times but was ignored). Lots of Kubernetes alerts saying oh hey, I'm low on CPU, or I'm low on memory, I can't start new containers. Look at the host and it has TONS of CPU and memory; in some cases there was 10GB+ of available memory. But because of bullshit like this bug open since 2017 (https://github.com/kubernetes/kubernetes/issues/43916), the systems complained regularly. Also had a problem with Datadog monitoring where it would consume large amounts of disk I/O (upwards of 10k+ IOPS); it took again months to track down, and they eventually found the cause: it was running out of memory in the container (not sure why that would cause the I/O as there was no swap on the host), but increasing memory on the container fixed it. Datadog could not suggest to us how much memory was needed for monitoring X metrics, so we just had to monitor it and keep increasing memory over time.

The complexity of the system grew even more when they wanted to try to do upgrades without downtime. The people behind the internal project eventually acknowledged what I had been saying since before they even started - it's WAY too complicated for an org our size, and offers features we do not need. So they gave up.

My container solution, which I deployed for our previous app stack (which was LAMP), was LXC on bare metal hardware back in 2014. Took me probably 6 weeks going from not even knowing LXC existed to being fully in production running our most critical e-commerce app. Ran for 5 years pretty much flawlessly, saved a ton of $$ and really accelerated our application. I proposed the same solution, even if only as an interim, for our newer Ruby app stack but they didn't want it. Wasn't cool enough for them. I said fine, you can build your Kubernetes shit and when it's ready just switch over; I can be ready with LXC for this app in a couple of weeks and we have the hardware already. But nope, they wanted to stick to VMs until Kubernetes was ready. And of course it never got ready.

ReactOS shows off SMP support in open-source take on Windows

Nate Amsden Silver badge

Re: IT OS need Versus Machine tool need

I remember being intrigued with ReactOS when I first heard about it in the late 90s.

Solves a problem that doesn't exist really. Lots of old hardware around. Do you really think companies would be comfortable running their mission critical stuff on something like ReactOS? No, they'd rather just run the old windows. Looks like some companies are still making new embedded 486 systems today. Used hardware for XP will be around for a long time.

Sure it has security issues, but so does/will anything, especially something as complex as what ReactOS has tried to replicate over the years. At least with the old Windows you know there aren't any updates coming that will break stuff. In some cases you can probably even run it in a VM. If you care about security you'll design the network in a way which minimizes exposure, though I bet most won't bother.

By the time ReactOS replicates a real XP system that 40 year time frame you mention will have passed already. I admire the effort, they just don't have enough resources to do that complex of a job.

Thought one bit of the article was funny: at the start it says the devs say "this is a work in progress and not yet in the trunk", and the same can obviously be said for the entire project, as the article later notes "It (the ReactOS product) remains in a resolutely alpha state."

Could almost say Hurd is the ReactOS of the Linux world, and that too has gone pretty much nowhere in the last 20 years. Though at least with Hurd they have full source for all the userland components so have made more progress (I assume, haven't tracked their progress), but still, from a market perspective almost nobody is asking for what Hurd can deliver so nobody uses it.

OpenZFS 2.1.3 bugfix brings compatibility with Linux 5.16

Nate Amsden Silver badge

Re: We're absolutely firm on this

FreeBSD may have no issues with the license, but for whatever reason (I won't speculate further) it wasn't successful enough in the marketplace to remain OpenZFS's reference platform.

https://en.wikipedia.org/wiki/OpenZFS

"As of 2019, OpenZFS (on some platforms such as FreeBSD) is gradually being pivoted to be based upon ZFS on Linux, which has developed faster than other variants of OpenZFS and contains new features not yet ported to those other versions"

I run ZFS myself on Linux on several systems. I'm not a die hard fan of the system by any stretch; it has its use cases. Obviously ZFS-based storage systems never really made a dent in the enterprise storage market.

Afraid of the big bad Linux desktop? Zorin 16.1 is here

Nate Amsden Silver badge

Re: Zorin, Ideal for beginners

Doesn't ubuntu prompt you to install those codecs during the installation? I believe it has for many many years...I just did a web search and found a sample screen shot of such a page from Ubuntu 15.04. It's a checkbox during install to install codecs. Maybe it's not super clear to newbies I suppose.

I see another screenshot, from Ubuntu 20.04 specifically; it is a (default unchecked) checkbox that says "Install third party software for graphics and wifi hardware and additional media formats. This software is subject to license terms included with its documentation. Some is proprietary."

But I guess people are so used to clicking next->next->next->done and don't read what they are installing.

Linux was easy to install for the end user literally 20 years ago: the Corel installer, SuSE (I remember it even had a game you could play while the packages were installing, I think at least). Hell, my sister had zero Linux knowledge and really minimal computer experience and managed to install Yahoo! Messenger back in maybe 2003 on her SuSE system. That messenger installed via Wine. I was blown away that it worked so seamlessly. She didn't even ask me or tell me; I just noticed one day an icon on her desktop and it was installed via Wine. She must have downloaded the Windows installer not knowing any different and the system just worked. Honestly I wouldn't even expect that of a Linux desktop in 2022, let alone 2003.

Nate Amsden Silver badge

Re: @RegGuy1

In my experience that is often called "mouse over activation". I've used it myself since the late 90s on everything (started with AfterStep as my window manager). Used it with Gnome 2 on Ubuntu 10.04 until I switched to Mate. Still run Mate today and run with this option. Used it on Windows too for years, easy to turn on with... the PowerToys control panel?? Not sure if that exists anymore; I'd wager the registry option is still there, whatever it might be.

However maybe 5-6 years ago something started bugging out on it. Drives me mad. This feature breaks with the marco window manager at random times, sometimes after a day, sometimes after less than a minute (sometimes 3 times in 5 minutes). I was hoping an upgrade from Mate 17 to Mate 20 (actually a complete new computer and a fresh install) would fix it, but it did not. Still happens every day. Originally I would have to log out and log in again, but marco has a --replace option to reload it without losing anything. So I would run that. Then I got irritated enough and put a shortcut on the top menu bar so I can just click it when it fucks up. I have to click it a lot. Logs don't really show anything useful. At first I thought it was due to a VMware Workstation upgrade triggering the behavior, but I have seen it without switching to Workstation (which runs full screen in one of my 16 workspaces).

I also use edge flipping (move the mouse to the edge of the screen and it changes to the adjacent workspace), which is critical to my workflow (again, been using that for over 20 years now too). To use it I had to take some extra steps to build the now obsolete app brightside (I suppose the last Ubuntu version that shipped with it was 16). Fortunately it works flawlessly on Mate 20/Ubuntu 20 after building it from source (took some trickery). I'd wager it won't build again when Mate 20 reaches end of life and I have to switch to something else; trying not to think about that time. Could not find any other edge switching apps for Gnome last time I looked.

Probably one of the only people in the world that prefers 1 monitor. With 16 workspaces (4x4) I believe I can get a lot more done, faster, than someone who uses multiple monitors.

Uncle Sam has a datacenter waste problem

Nate Amsden Silver badge

quite a bit of growth

5000 data centers?

it was ~2000 data centers 12 years ago

https://www.datacenterknowledge.com/archives/2010/10/13/feds-discover-1000-more-data-centers

"The process defined a data center as any room larger than 500 square feet dedicated to data processing that meets the one of the four tier classifications defined by The Uptime Institute."

500 sq feet seems more like a server room than a data center (a data center I'd say should start at 10k sq feet?), but I guess they lump them all together so the non tech people don't get too confused.

DBAs massively over-provision Oracle to protect themselves: Microsoft

Nate Amsden Silver badge

lack of shared resources

One of the biggest benefits of VM environments is the ability to share resources between systems, something that has been possible for more than 15 years now. However in most public cloud environments this is not possible; things are hard allocated (akin to the days of running each system as a dedicated physical server). Most database servers probably are lightly utilized (certainly in my 20 years of experience). There are some super critical, big, expensive systems that need more special care, but those are rare.

The ability to share resources from a hardware/VM licensing perspective is a pretty good/efficient/cost effective strategy. An alternative may be to run multiple DB instances on the same server, though that is a bit more messy from an admin standpoint. Oracle licensing doesn't yet cover things like memory or disk space (at least I think so, HA HA), so you could easily, at a bare minimum, run other servers on the same storage as your Oracle (provided of course you are aware of the I/O workloads of those systems so they don't impact any critical Oracle installations). Though some Oracle DBAs I'm sure are super paranoid (perhaps rightfully so). However storage these days can be so damn fast that it probably doesn't matter anymore. Back when my org had only an array with 15K disks, one of our production MySQL servers would sometimes issue a query that was very bad; that one query consumed more disk I/O than all 500+ other VMs combined. Fortunately that was an app bug, and I put in a process to detect that query and kill it within a few seconds any time it ran.
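
A watchdog like that can be as simple as the sketch below (this is not necessarily how mine worked; the query pattern and threshold are made up for illustration, and Percona's pt-kill does much the same thing in a more polished way):

    #!/bin/sh
    # kill any copy of a known-bad query that has been running too long
    while true; do
        mysql -N -e "SELECT id FROM information_schema.PROCESSLIST
                     WHERE info LIKE '%FROM search_results%ORDER BY%'
                       AND time > 5" |
        while read id; do
            mysql -e "KILL $id"
        done
        sleep 5
    done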

I haven't seriously been in a position to use Oracle since 2008; everything since has been MySQL basically. I helped my company at that time migrate from Oracle EE to Oracle SE, and we leveraged single socket VMware hosts (quad core) to do it. Production Oracle remained on physical hardware, but the test/QA/dev/reporting Oracles were all VMs, and all shared hosts with other VMs. Storage was shared as well, with the VM hosts using iSCSI and the production hosts using fibre channel to the same back end storage. VM hosts eventually moved to fibre channel as well after I left, due to a critical bug in iSCSI on the array that we had.

I do miss Oracle for some things (I would never count myself as a DBA, though I have managed databases for many years). I was part of a team that managed the largest Oracle OLTP in the world back in 2005 though (50-60TB at the time). It ran on HP-UX Itanium with a full rack of 15K RPM disks (plus another rack for the standby). My responsibility was limited to developing custom monitoring which everyone used (my primary responsibility was on the application tiers and to a lesser extent networking; I didn't get involved in things like storage management until my next job). Some of our biggest outages were because of Oracle issues, that and Weblogic; most of it came down to poor application design (that 50-60TB OLTP size was because of heavy usage of CLOBs(?), raw XML dumps (with tags) for the app). For one outage Oracle even flew two people on site to assist, the only time I've ever seen a vendor do that.

Backblaze report finds SSDs as reliable as HDDs

Nate Amsden Silver badge

interesting but not too useful

They seemed to be ignoring many of the bigger names in SSDs, whether it's Samsung, Intel, or SanDisk. Perhaps this is due to cost, as I know Backblaze is going for the best cost they can get.

I'm guessing they use a lot of Seagate SSDs because they got a special deal on them since they do use a ton of Seagate drives. I did a quick search for SSD marketshare and found a website that claimed Seagate had just 0.3% of the SSD market, so obviously not a big player there. Samsung and Sandisk were by far the biggest players.

My personal track record over the past 8 years has been zero SSD failures across my personal systems, all of which run Samsung or Intel SSDs (one Intel-only SSD and a few HP OEM Intel SSDs). Also zero failures on SSDs on my 3PAR arrays (the oldest SSD there is from Oct 2014, with 88% write lifetime left). My sample size is small though, not even 65 drives total. They have certainly far exceeded my expectations in any case. The bigger MLC SSDs on the 3PAR probably ran north of $20,000/ea when they were new though (not uncommon on enterprise storage).

Personally I don't touch SSDs on my own systems unless it's Intel or Samsung just because of this good experience. Sandisk comes into play more in enterprise and would expect them to be used more commonly in storage arrays rather than end users buying them directly. Also not interested in touching QLC flash yet anyway.

The only other SSD I have owned was a Corsair, I think probably 10-11 years ago; I was not too happy with that one. The Samsung 850 Pro was my first real "home" SSD, which according to my email archives I bought around Aug 2014.

Linux kernel edges closer to dropping ReiserFS

Nate Amsden Silver badge

Re: Puzzled

Reiser3 certainly has some good use cases. I still use it today for those cases, mainly situations where there are tons of small files. For my personal mail server I use it for my Cyrus IMAP mail spool, which has about 1 million files in it (38G of disk space used). ext4 and others can be used as well; they just consume more disk space because of all of the small files.

Other use cases I use it for at work are deployment servers that involve a lot of source code checkouts; again, significant space savings (25-35% I think) for little or no overhead. ZFS with compression certainly can save a bunch of space in this situation as well, however it has significant overhead, both memory and CPU (especially considering ~70% of the VMs I have deployed are 1 vCPU). I do use ZFS with compression in some other use cases, and XFS in some cases as well; ext4 remains the only filesystem I use for root filesystems.
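If you want a rough idea of how much a given filesystem "wastes" on lots of small files, one way is to compare apparent file sizes against the blocks actually allocated. A quick sketch (the default path is just an example, point it at a mail spool or source checkout):

# smallfile_overhead.py - compare apparent size vs. space actually allocated,
# to estimate small-file overhead on whatever filesystem the path lives on.
import os
import sys

root = sys.argv[1] if len(sys.argv) > 1 else "/var/spool/imap"  # example path

apparent = 0
allocated = 0
count = 0
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        try:
            st = os.lstat(os.path.join(dirpath, name))
        except OSError:
            continue                       # file vanished or unreadable, skip it
        apparent += st.st_size
        allocated += st.st_blocks * 512    # st_blocks is in 512-byte units
        count += 1

if apparent:
    overhead = (allocated / apparent - 1) * 100
    print(f"{count} files, apparent {apparent/1e9:.2f} GB, "
          f"allocated {allocated/1e9:.2f} GB ({overhead:+.1f}% vs apparent)")
else:
    print("no files found")

On something like reiser3 with tail packing the allocated number can even come in below the apparent size for very small files, which is where the savings come from.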

Will be unfortunate if the day comes where reiserfs isn't available anymore. I never tried reiser4; last I heard I think it never got completed.

What is it with cloud status pages not reflecting reality?

Nate Amsden Silver badge

Dyn

Dyn had a pretty big outage, I think last Thursday; it was regional. Their status page indicated they were in a maintenance window doing some big migration - the maintenance was taking a month or more as they were taking their time to be careful. But for at least 30+ minutes DNS was unreachable in several areas. I emailed them and they finally acknowledged the issue and updated their status page (with a "partial outage" message).

https://www.dynstatus.com/incidents/kc7plp9945ng

Kind of surprised I didn't notice any news articles about it. Still waiting to see if they release a root cause, but from my perspective it was their biggest outage since the big DDoS attacks many years ago. Though it was regional, it took a long time for my external monitors to trip. The issue was related to their anycast system; BGP broke down somewhere. Dyn has a super strict SLA too (unlike Amazon), something like if you can't reach their DNS for more than 15 seconds then it's an outage. This was consistently unreachable for a very long period of time from certain regions (my home connection was unaffected).
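For what it's worth, the kind of external monitor I mean is nothing fancy - just query the authoritative servers directly from a few regions and alert if they stop answering. A minimal sketch (the nameserver IPs and hostname below are placeholders, not Dyn's real ones):

# dns_probe.py - simple reachability probe against specific authoritative
# nameservers, the sort of check an external monitor runs from several regions.
import time
import dns.resolver   # pip install dnspython

NAMESERVERS = ["203.0.113.10", "203.0.113.11"]   # placeholder IPs (TEST-NET-3)
HOSTNAME = "www.example.com"                     # placeholder zone record
TIMEOUT = 5                                      # seconds per query attempt

while True:
    for ns in NAMESERVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ns]
        resolver.lifetime = TIMEOUT
        start = time.time()
        try:
            resolver.resolve(HOSTNAME, "A")
            print(f"{ns}: OK in {time.time() - start:.2f}s")
        except Exception as exc:
            # a real monitor would page someone / increment a failure counter here
            print(f"{ns}: FAILED after {time.time() - start:.2f}s ({exc})")
    time.sleep(15)    # roughly in line with a "15 seconds unreachable" SLA window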

Two decent outages in ~13 years as a customer of Dyn (that I can recall anyway). Not perfect, but something I can live with. I think Amazon had more outages in Q4 2021 alone. I've had more outages from our CDN providers as well over the years, maybe a half dozen decent outages in the past decade.

FreeDOS puts out first new version in six years

Nate Amsden Silver badge

Re: ReactOS open source 32bit Windows, following a similar journey to maturity hopefully.

People running vintage hardware generally will have no issue running a vintage OS as well; obviously they will have vintage software to run on that hardware.

ReactOS sounded neat when I first heard about it in the late 90s I think. I don't think it really went anywhere over the past 20+ years, so don't hold your breath too much. They just don't have the resources to do it (it's certainly no small task). ReactOS is probably worse off than even Hurd, which is so obscure I even forgot what Hurd was called; I had to go browse Debian's site to refresh my memory.

I know they (the ReactOS team) do it "because they can", but it just seems like too huge a task to seriously take on.

Nate Amsden Silver badge

HP had DOS too

I remember using HP DOS 4.0 on an HP Vectra 286/12 a long time ago; it even came with a proprietary ANSI program launcher (I don't remember the name). I also remember another HP DOS utility for writing documents (just remembered it now, hadn't thought of it since the 90s) called Executive Memo Maker.

I tried to look recently but I could find no evidence on the internet that HP ever had DOS (which surprised me quite a bit given HP is a big company and sold a lot of PCs - no screenshots, no mention on Wikipedia, nothing that I could find). I even remember the box it came in. That Vectra was my first computer.

I remember the only way I knew how to edit autoexec.bat and config.sys was to do a fresh install of Windows (had 3.0 with multimedia extensions, with an MSCDEX.EXE CD-ROM driver that would corrupt itself on a regular basis). I also remember deleting vast amounts of the system files to make drivers fail to load on boot, to free up memory to play games; then I would reinstall everything again.

HP DOS 4.0 worked fine for pretty much everything except games; it used too much low memory (after that program launcher started, it would probably get down to around 450kB or less of free low memory). DOS 5 with HIMEM+EMM386 (and later the third party QEMM tool?) and of course DOS 6 were much better for those games.

But HP had DOS too! Just wanted to say that. I don't know if they had other versions than 4.0.

VMware patches critical guest-to-host vulnerabilities

Nate Amsden Silver badge

don't freak out too much

I just checked my group's VMs on ESXi 6.5 - just over 700 VMs, 98% linux - and only one of them has a USB controller (a special purpose Windows system), which should be easy enough to remove.

To check, I used the govc tool, parsed the VM listing, and then used the device.info option to list all devices attached to each VM. I know PowerShell is popular with vmware folks; as a linux person I have never really used PowerShell myself (yes, I know it's available for linux).

https://github.com/vmware/govmomi/blob/master/govc/USAGE.md
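Roughly what I did, as a sketch (going from memory here - the exact govc subcommand behaviour, the "usb" string match on the output, and the assumption that GOVC_URL/credentials are already set in the environment may need adjusting for your setup):

# find_usb_controllers.py - walk the VM inventory with govc and flag VMs that
# appear to have a USB controller attached. Assumes GOVC_URL etc are exported,
# and that "govc find . -type m" and "govc device.info -vm <vm>" behave as in
# the versions I've used.
import subprocess

def govc(args):
    # run a govc subcommand and return its stdout as text
    return subprocess.run(["govc"] + args, capture_output=True,
                          text=True, check=True).stdout

# list every VM in the inventory (inventory paths, one per line)
vms = [line.strip() for line in govc(["find", ".", "-type", "m"]).splitlines()
       if line.strip()]

for vm in vms:
    devices = govc(["device.info", "-vm", vm])
    # rough assumption: USB controllers show up with "usb" in the device listing
    if "usb" in devices.lower():
        print(f"USB controller found on: {vm}")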

Microsoft tempts G Suite customers with 60% discount

Nate Amsden Silver badge

Re: Google's Foot Gun

Don't think there is much risk here. I mean, most of those tech folks who bought into the Google cloud stuff or Office 365 stuff have already tried (or succeeded in) pushing others to use the service; the momentum is already going, they don't need your help anymore. Pay up, or get out.

(have self hosted email/web/dns since ~1997)

'Boombox' function sparks Tesla recall

Nate Amsden Silver badge

not really a recall?

Have seen reports of multiple "recalls" for Teslas recently; they all seem to be addressed by software updates. But to me a recall means you take the car to the dealer, or maybe in Tesla's case the dealer comes to you. But in any case I'm sure I've read Tesla sends software updates to the cars over the internet, I guess? No need to be near a dealer. Are there software updates that Tesla has that can only be installed by their own staff? Maybe there are, I don't know.

The last time my car needed a software update (not a Tesla) it had to go to the dealer - something to do with the exhaust system or emissions or something like that. The dealer applied the update and was honest about it, saying it was a very fickle process taking up to an hour assuming it went smoothly: they hooked the car to some equipment and then downloaded the update from the internet (why they didn't or couldn't store the update on their local systems confused me, since they said if their internet connection went out they couldn't do the update). Ended up taking about 35 mins I think.

Apple tweaks AirTags to be less useful for stalkers, thieves

Nate Amsden Silver badge

Re: One would almost call this planned…

I read a comment at one point where the person speculated the reason airtags get a lot more attention is that the network is much bigger than any of the other tracker networks - anyone with an iPhone, and no special app needs to be installed (I believe). Versus something like Tile where, as you say, you need an app installed; people who don't use Tile probably won't go out of their way to install the app for the purposes of helping others find their stuff.

Kind of surprised Samsung hasn't copied the airtag stuff yet, maybe they will soon. They seem to copy just about everything they can from Apple.
