Wouldn't surprise me in the least to find QEMU as-is in the code resources.
There are already several alternative x64 emulators out there; Parallels is behind the game for doing so.
260 publicly visible posts • joined 16 Jul 2013
Yes, you could put various L3 Sups in the 6509. Depending on your needs, and just how fat your wallet was, you could go anywhere from basic L2 to full L3 BGP Internet routing. And since the TCAM was small and fixed in size, as the Internet routing table kept growing astronomically, so would your wallet, as you swapped out the Sup engine of the day to keep up.
FWIW: the 7600 designation was the same exact chassis/cards, but marketed by a different BU at Cisco.
If one was an ISP, and had a fat enough wallet, they'd get the 7600. If you started out as Enterprise, you'd get the 6509. Same features, options, Sup's available. Just a different badge on the front, and different sales team on the backend talking to you at Cisco.
Cisco would laugh all the way to the bank either way.
Exactly. I don't know how many slanted reviews I've read that judge the worthiness of the OS on whether the installer is GUI or "way out of date".
I always choose the text installer on Linux systems anyway because it's quicker, easier to navigate, and gets the job done faster.
But none are as quick to install as FreeBSD with its text installer.
There are software virtual-port redirectors that work over the network for situations like that.
They provide a virtual serial or parallel port on a VM that talks to the hardware box holding the hardware dongle somewhere else on the network.
You did have to do weird things in the past, like figure out how to load the USB stack by hand, because the VM didn't trigger putting one in but your software needed it. It's gotten smoother now.
But once it was going, it worked a treat.
Sounds a whole lot like the crypto-miner people who applied for space in a datacenter I ran years ago.
They don't use any power, it'll just sit there, super low bandwidth. It'll be so cheap to run, you should barely charge me.
My question is why they didn't just put them in their garage then.
Oh, I don't have enough power at home, but you must just have loads at the datacenter you barely pay for.
My personal opinions on the change to Linux are:
a) the devs/users really like running side applications on their NAS boxes. The older jails/bhyve setups cost them considerable development time to maintain those side applications. Replacing that with Linux containers makes their development/maintenance burden much lower, especially with some open-source apps only being released as containerized apps.
b) Now that OpenZFS exists, and plugs easily into many Linuxes, they can put more development into OpenZFS rather than dealing with the various import dates of ZFS code into FreeBSD releases, and redoing work that FreeBSD "reverted".
Proxmox VE has an okay management interface.
There are things you have to do on the CLI.
Probably the biggest thing preventing service providers from looking at it seriously is so many products that work with VMware do not work with Proxmox VE. (especially backup software).
I wouldn't count a tar file of the disk image files as a proper backup solution (i.e. the built-in Proxmox VE backup).
The networking is pretty simplistic, although I haven't really experimented with the new virtual network features of 8.1 much.
The API is pretty light weight.
One thing I encounter that bugs me to no end is that an NFS passthrough into a container doesn't "register" with the system for the first 5-8 minutes of uptime, so the container it's passed through to won't start in any reasonable time after a host reboot. I set a cron job for 10 minutes post-boot to bring that container up from the CLI, and that seems to do the trick. Could just be my setup, but this has been consistent for me across multiple installs.
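For what it's worth, that workaround boils down to a one-line cron entry; a sketch, where the container ID 101 and the file name are made up for the example (`pct` is Proxmox's container CLI):

```shell
# /etc/cron.d/start-nfs-ct -- workaround sketch, not an official fix.
# Wait out the window where the NFS passthrough hasn't "registered" yet,
# then start the affected container by hand.
@reboot root sleep 600 && /usr/sbin/pct start 101
```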
The bigger problem with MacOS is that Apple also cuts off support for older machines in newer MacOS versions, so it may be an (artificial) hardware limitation preventing the upgrade as well.
(I know there are ways to make newer OS versions force upgrade on older hardware. )
I generally treat each generation of machine I get as a time capsule: if I want to use this set of software, I need to keep that system as-is. There are so many programs that would die off if I force-upgraded it beyond its means.
But the hardware is still completely usable for what it does. I.e. 98% of what I do can still be covered by my white MacBook (after I put in the SSL proxy software to allow modern crypto), and it is still as speedy as it was back in the day, and still runs all my old software.
OTOH, I do have the latest hardware, and almost none of those old games can run on it; even in emulation, they don't run half as well as on my ancient white MacBook.
It could be that those drives run hot because the bearings are in the process of going out.
So because the drive is at the tail end of working right, with the bearings no longer at 100%, it heats up, causing the drive to malfunction even more.
I think a lot of processes like this are a vicious cycle.
But if it gives them insight into drive failure predictability, I think it would be worth tracking.
Myself, I have smartd track drive-failure notices, and can start planning a replacement when I have, say, more than x bad sectors or so. Seems to work for me.
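The smartd side of that is just a line in smartd.conf; a sketch, where the schedule and mail address are my own habits, not anything canonical:

```shell
# /etc/smartd.conf -- monitoring sketch; address and schedule are examples.
# -a: monitor all SMART attributes (incl. reallocated/pending sector counts);
# -o on / -S on: enable offline testing and attribute autosave;
# -s (S/../../7/03): short self-test every Sunday at 03:00;
# -m: mail this address when something trips.
DEVICESCAN -a -o on -S on -s (S/../../7/03) -m admin@example.com
```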
As the article stated, many versions of SQL Server will break hard, from my testing.
My application doesn't officially support newer versions of SQL Server. So it's either: run an unsupported SQL version and hope it works 100% correctly; keep the old version and disable Microsoft's forced changes; or replace the whole thing at some unknown cost and an unknown amount of migration and dev work.
In my testing, Microsoft's TLS patches for SQL Server don't work reliably. The only fix is a major version upgrade.
I've also got old network devices online. Their management links are isolated right now, but now we'll have to keep out-of-date desktops around so they can still be managed.
Rip and replace becomes the only option, even though operationally they work totally fine for our needs. It's not a security application, but just because the UI was written long ago, the whole thing needs to be tossed.
>> PCI (I think PCI existed back then) has pretty strict rules about limiting access
Yes, PCI existed long before then.
I think PCI is applied unfairly overall. Huge corporations seem to get a free pass on horrible PCI violations, while the small and midsized get raked over the coals.
I believe it was shown that Target & TJ Maxx (another credit-card breach, that one via wifi-based roaming cash registers) were keeping way more data than PCI allows, such as the CVV and other data they should not have had.
There are little red caps you are supposed to use on 66 blocks for data lines, especially things like 56k DDS and T1 lines, to prevent this very thing.
Of course most techs (especially any of the crews in the last 25 years) didn't know what the red caps signified, and once the data line is dead, it just sits there taking up room anyway.
I've been in closets that have many many dozens of T1 NIU cabinets without a single lit line in them. Because who gets a T1 any more (for voice or data).
DNS falls back to TCP any time the response is over 512 bytes (the classic UDP limit).
With DNSSEC, SPF records, DKIM records, large MX record sets, etc. all pushing responses over 512 bytes, not being able to receive a DNS response over TCP is a severe deficiency.
If all you are doing is looking up web site addresses, it might not be such an issue, but if you are doing email in any fashion whatsoever, most likely things were failing left and right for Alpine users.
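To make the fallback concrete: the server signals a too-big UDP answer by setting the TC (truncated) bit in the DNS header, and the resolver is then expected to retry the same query over TCP. A minimal Python sketch (the header bytes below are fabricated for illustration):

```python
import struct

# A DNS header is 12 bytes (RFC 1035 section 4.1.1); bytes 2-3 hold the
# flags word. If the TC bit is set, the answer didn't fit in the UDP
# response and the client should retry over TCP.

def is_truncated(dns_message: bytes) -> bool:
    """Return True if the DNS header's TC bit is set."""
    (flags,) = struct.unpack_from("!H", dns_message, 2)
    return bool(flags & 0x0200)  # TC is bit 9 of the flags word

# Two fabricated 12-byte headers: 0x8000 marks a response (QR=1),
# and 0x0200 additionally sets TC in the second one.
full_hdr      = struct.pack("!HHHHHH", 0x1234, 0x8000, 1, 0, 0, 0)
truncated_hdr = struct.pack("!HHHHHH", 0x1234, 0x8200, 1, 0, 0, 0)

print(is_truncated(full_hdr))       # False: the UDP answer was complete
print(is_truncated(truncated_hdr))  # True: fall back to TCP for the rest
```

A stub resolver that can't do this TCP retry silently loses exactly the big responses (DNSSEC, long TXT sets) described above.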
Security Systems look ancient when you install them out of the box, let alone how long they keep them running.
The one that controls my floor has a software interface that looks like it was written in Turbo Pascal, and I know it was installed 25 years ago...
I'm shocked the drive hasn't fried itself yet.
I'm guessing that IA/GA dance was to atone for the showstopping bugs found in the vSphere 7.0, 7.0U1 & 7.0U2 releases.
Thankfully 7.0U3 seems pretty stable overall from the get-go.
All of their releases used to be GA releases, they only started the IA recently.
8.0U1 should be the first milestone patch release of 8.0 (which has been good to us so far as well). It's not like a major new version.
> I used to work where we had a 4 hour fix contract ...
Yeah, all of those are now 4 hour response, and best effort to fix.
We'll guarantee you get an initial response from T1 support via email within 4 hours. You may get parts by the end of the day. Maybe tomorrow. Maybe next week, depending on where we have them.
IMHO, telco billing is the 3rd ring of hell.
I would wager that in their yearly reporting they have categories of services they know are supposed to be cancelled, have cancel orders pending that never clear, or are outright abandoned, and that these are a huge line item on their list. But of course it gets covered up, because they can't show such blatant scamming of customers.
Back when I had to do that, it was: cancel service; check the next n months; complain to the telco monthly that they haven't disconnected the circuit yet. Once it finally came off the billing 8-14 months later, then you'd start the process to claw back the money paid out (unless AP was on the ball and was discounting its payment and dealing with the dunning notices, because the telco is going to put you to the debt collectors for money it shouldn't even be collecting).
This wasn't an isolated incident, nor any one company.
It was _every_ single time. Every telco company known.
I'd estimate that at least 30% of telco revenues come from disconnect services that just keep auto billing and being auto paid.
Halon doesn’t displace oxygen. It is safe to be around. That is a total myth that never dies. It functions chemically to prevent combustion from occurring.
The biggest dangers are what is described above: a huge inrush of gas will blow everything around violently.
Think floor tiles launching, whatever is on the ceiling, etc.
Also, if there is a fire, the stuff burning will be incredibly toxic. Halon can decompose in the fire into unpleasant chemicals you don't want to breathe either. The respirators are there for the output of a fire. You really don't want to be around anything burning in a data center.
They make one product, one that Microsoft essentially gives away for "free" to the same market audience.
I'd say 7% post-COVID growth against something already bundled in by your biggest competitor is fairly decent.
But I'm sure they want to grow by COVID numbers again, and that just ain't going to happen.
"In ransomware attacks, data recovery efforts do necessarily take significant time, both due to the nature of the attack and need to follow additional security protocols."
Yes, it may take anywhere from a couple of years to forever for the security engineers to reverse engineer the decryptor without paying the ransom.
Who leaves their FW admin interface open to exploit?
Fortigate from the start has had options to lock down the "admin" IP addresses that can access any admin protocol (i.e. SNMP, GUI, SSH), just like any FW vendor.
At a minimum, lock it to your inside addresses, though it would be better to restrict it to just the internal IPs your network admins use.
That's been SOP for us with Fortigate from the start. Something else could still be used as a springboard to the device, but if the restricted range of IPs that can even touch the box is a tiny footprint, the chance of exploit is greatly reduced.
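On FortiOS that restriction is the trusthost setting on each admin account; a sketch, where the account name and subnet are examples, not anything from a real config:

```shell
# FortiOS CLI sketch -- restrict where this admin may log in from.
config system admin
    edit "netadmin"
        # Only this internal subnet may reach GUI/SSH/SNMP as this admin.
        set trusthost1 10.10.5.0 255.255.255.0
    next
end
```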
Also, to the Reg, FortiSwitchManager is a smashup of two different products.
There is a PSIRT for FortiSwitch when they are in a security fabric with FortiGate.
And there is a separate PSIRT for FortiManager for certain versions. They generally aren't mentioned in the same breath.
I figure that 819,900 of those open servers belong to admins who don't have a clue what they are doing, followed some recipe somewhere, and now have data leaks.
Echoing from above, don't have things listening on the Internet that you don't need to, firewall them off. AND TEST YOUR EXPOSURE.
It's a rare sysadmin I've met who actually tests what is listening to the outside world, and sees what traffic they've let in.
>> discovered a "never-before-seen technique" that used malicious vSphere Installation Bundles ("VIBs") to install multiple backdoors.
Seems obvious that is how you'd need to install a backdoor on ESXi.
One could run with only VMware-signed VIBs and not allow anything else, but too bad the real world kicks in: generally any installation needs 3rd-party VIBs to function in an enterprise environment.
Tried doing SD card boot for VMware.
I had to replace the cards about every 4-6 months.
Gave up and spec'd out systems with local SSD boot after that.
Granted, some brands of server hardware seem to do better than others, but it was no surprise to me when VMware dropped SD card boot from the HCL.
This must be the reason for offering such a service.
As somebody who plays in that space, there aren't a lot of cloud customers that have their own IPv4 blocks sitting around.
The only reason I could see is if some cloud customer is really tied to OVH and wants to get away from the absolute shit reputation of OVH IPv4 address space, which gets blocked quite often elsewhere; enough so that they go get some rando IPv4 block off an IP broker.
They were probably referring to the Apple II+ picture shown. With the II+, you got a Language Card, which held the top 16k of RAM if you wanted to keep your BASIC or whatnot there and bank-switch it in.
Otherwise, you had the lower 48k for program RAM, and the top 16k for ROM and I/O space.
The 6502 could address the full 64k, but the top space was ROM and I/O and gaps.
Because they auto-remap the bad sectors behind your back.
Now the remapped sector requires an additional lookup, adding latency.
You can run out of the spare sectors that they hide from you as well.
This is one reason disk-wipe software had to develop special methods for SSDs: wiping all *active* sectors doesn't wipe *all* data off the disk.
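That's also why the usual advice for sanitizing SSDs is to use the drive's own erase commands rather than overwriting; a sketch, assuming a Linux box with hdparm/nvme-cli installed (device names are placeholders, and these commands are destructive):

```shell
# ATA Secure Erase wipes everything the controller manages, including
# remapped sectors in the spare area that normal writes can't reach.
hdparm --user-master u --security-set-pass tmp /dev/sdX  # set a throwaway password
hdparm --user-master u --security-erase tmp /dev/sdX     # issue SECURITY ERASE UNIT
# NVMe equivalent:
nvme format /dev/nvme0n1 --ses=1                         # Secure Erase Setting 1
```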