If they adjust the costings on Azure to something approaching reasonable then I'll use it. I spent three years with a company using nothing but Azure, and it's a bloated, over-engineered mess where you pay the total cost of a ten-year-old server per month for something lower spec.
Crashed a super-computer with Lode Runner clone
Well, it was a supercomputer-run system of Windows terminals (running under *nix) for the maths/compsci lab, not the entire supercomputer. But this thing was not underpowered: every semester break, banks would hire it out for number crunching.
We were doing Java, which at the time was still shiny and which everyone claimed couldn't leak memory.
So I'm trying to run the Lode Runner game I built for an assignment, and everything freezes up and the system crashes. Complaints from around the room indicate it wasn't only my machine: the entire lab of 60 systems has gone down. All the systems came back up within a minute, because the sysadmin knew what he was doing, and everyone continued with their work.
I make a couple of small changes and re-run the game; again everything crashes, again complaints from around the room.
Now I'm getting suspicious: it was about the same amount of time into my game, and I'm sure the positions of some of the enemies and my character were almost in the same spots. So I wince and run it again... now I'm certain it's something in my code. I leave the lab with the excuse of "I'll get more done on my own system" and get the hell out of there before the BOFH decides to check the logs for which terminal keeps taking out 60 systems and punish them.
On my own system, the same behaviour, but now I'm getting logs. Looking into them, I'm getting segmentation faults, something our lecturer, Mr Potato-Head (he looked like a potato and was permanently stoned in class), had claimed was impossible in Java (turns out he had also stolen the course from Stanford and just put his name in the code). I look into where it is failing and... well, my lazy programming and recursive loops in the AI logic had struck. Every time an enemy AI got near three walls (roof, right, and floor) it would go into a recursive loop between two decision functions and chew up all the RAM. A quick change to make sure decisions are processed non-recursively, and no more crashes.
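For what it's worth, the failure mode is easy to reproduce. Here's a minimal sketch (in Python rather than the original Java, with made-up function names) of two decision functions that defer to each other forever when an enemy is boxed in, and the iterative rewrite that fixes it:

```python
# Hypothetical sketch of the bug: two AI decision functions that call each
# other when an enemy is boxed in by roof, right wall, and floor.

def decide_climb(depth=0):
    # Blocked above and to the right, so defer to the fall logic...
    return decide_fall(depth + 1)

def decide_fall(depth=0):
    # ...which is blocked below, so it defers straight back: mutual recursion.
    return decide_climb(depth + 1)

def decide_iteratively(max_steps=1000):
    """The fix: loop over decision states and bail out when no progress is made."""
    state = "climb"
    for _ in range(max_steps):
        if state == "climb":
            state = "fall"      # blocked above/right
        else:
            state = "climb"     # blocked below
    return "wait"               # no valid move: stand still instead of recursing

try:
    decide_climb()
except RecursionError:
    print("stack exhausted")    # the Java version chewed through RAM instead

print(decide_iteratively())     # → wait
```

Python kills the mutual recursion with a RecursionError; the JVM of the day just kept eating memory until the machine fell over.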
Still it's my one claim to fame, I crashed a supercomputer with Java.
Equifax still as dodgy today as it was in 2016
Firstly, the breach was discovered in 2016; the public were only notified in 2017.
If you sign up to any of Equifax's many credit-checking businesses (remember, this is a company that used to steal information to sell to insurers), you will notice that they can send your data anywhere, as long as they deem you to 'have an interest' in that country, OR if they deem you would be OK with it.
Add in that their login page is STILL vulnerable to SQL injection attacks (indicating they are not sanitising database inputs).
They have had numerous breaches over the years, the most recent being either late last year or early this year, when they lost the information of every US Senator.
In the court case it was found that it was 'normal' for employees to trade shares based on client information, and that all data was stored in clear text.
Windows presents one desktop system at a time, and it is usually good for legacy support, within reason.
Linux has 3-4 desktops at a time, all doing things completely differently.
So if I want to develop a Linux GUI app, I either have to limit my audience to 1-2 of the desktops (KDE and GNOME would be my pick) or spend an awful lot of time adapting my GUI setup to work on the other desktop systems.
Even if I make a Windows 7 app today, it will work on Windows 10 and 11 and everything in between. This, IMO, is the secret to MS's GUI app dominance: they make it easy to develop for. With Linux I either use something cross-platform like Xamarin, or I don't bother. Developing for the Linux desktops (at least about 5 years ago when I gave it a crack) was a pain; it was very fiddly, and random crap would go wrong with no clear indication of why.
Now I could spend another few weeks determining why and getting it up and going on everything, but for a small market share it doesn't make financial sense.
If Linux could pull together and get a single dominant GUI that makes app development easy (or even incorporate a standardised webview system so we can use blazor for the interface) then you would get a lot more of the mainstream SME GUI applications being released.
"Over in Apple land, serious photography and video workers will still run high-powered and high-cost Macs."
- Nope and nope. When we were doing photographic prints for professional photographers (on MF 5"x6" film for print sale, oil reproductions, and gallery systems) we didn't touch a single Apple, because they couldn't handle:
1. the workload required
2. the profile setups where we had to be able to associate a printer and monitor with a single profile.
3. the colour accuracy.
For serious film very few use Apple. Why? Well, if something goes down you need to get it back up ASAP, and when a movie is costing ~$1m an hour in downtime (this is mid-budget film) and you cannot get support or a reliable system (FCP is not reliable), you don't use that product. Most serious film production uses Avid for editing, Resolve for colour, Nuke for compositing, and Houdini for VFX. Admittedly the music may use an Apple, but that is often outsourced to other houses. And the M1/M2 do NOT support the transfer formats used by these programs, and M1/M2 systems are all slower for rendering than even the low-range video production suites, let alone for hardcore VFX. Even in the magazine industry, when I was using systems for magazine layout, none of them were Apple.
Consumers and prosumers use Apple for video/photography, not the high end.
In 1st-year engineering (at decent unis) they cover a lot of the supposedly 'unknown' Mayan and Egyptian pyramid-building techniques (the Egyptians also used a clever perspective trick with a pole and a drawing on the ground, whereby an observer at the pole would see the vertical mirror image of the pyramid to be built).
And yes, the basic idea behind block and tackle lifting lets you lift a weight far greater than your own; extend this principle out and you get all kinds of cool methods for lifting large stone.
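The arithmetic behind that is simple: in an ideal (frictionless) block and tackle, the effort needed is the load divided by the number of rope segments supporting the moving block. A quick sketch with illustrative numbers:

```python
def pull_force(load_kg, supporting_lines, g=9.81):
    """Ideal block and tackle: effort = load weight / number of rope
    segments supporting the moving block (friction ignored)."""
    return load_kg * g / supporting_lines

# A 2.5-tonne stone on a 4-line tackle needs ~6.1 kN of pull
# instead of ~24.5 kN: within reach of a small hauling team.
print(round(pull_force(2500, 4)))  # newtons → 6131
```

Real tackle loses some advantage to sheave friction, but the principle scales: more supporting lines, less pull (at the cost of hauling more rope).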
Also btw, if you build it on the ground and make sure your widths & distances are right, it will be perfectly aligned with the local ground.
The conical heads are an easy one, because alongside those images were all the other images showing how they strapped children's heads with cloth/leather straps, changing these as they aged, forming the skull up into the beehive shape. Literally right next to some of the images the u-fool-ogists post about. This has been known about since the '80s at least.
Re: I have to say I would really like a laptop with the touchpad to the right
1. The entire mainboard is replaceable, as they did with the 13" to support new socket configs. This way you only pay for a CPU + mainboard instead of a whole new lappy.
2. I agree the price is a bit high, particularly for that monitor.
3. The mainboard should support the DDR5 specs the chipset supports, so you may be stuck with 5600 on Intel till a new chipset comes out, but on the new AMD platform you should be right.
4. You can get a NAS/media-centre case for the older 13" mainboards to turn them into entertainment units for the living room; I don't see why this wouldn't happen for the 16" models too.
Dunno, that site only compares compression at low quality. Contrary to that article, I have found that I can compress WebP to a much greater extent than JPEG for fullscreen images; for example, I can get a full-HD image down to around 70-100kB with WebP with the same visual fidelity as a JPEG at 200-400kB.
Re: Apple CAN'T Do VR
This is an apples-to-oranges comparison: you are talking about a SoC compared to an individual CPU, and they are not the same thing.
M1/M2 architecture contains dedicated silicon for translating the x86 instruction set and for codec reading/writing. The memory is directly integrated, which also means it is not adjustable (the memory buses are clocked slightly higher; there will be controller latency, but we should see roughly equivalent speeds). SSD access is also direct: the SSD doesn't have the controller most drives have, it is part of the SoC.
M1/M2 have also been shown to be pretty poor at high-vertex-count 3D transformations (such as you would need in Houdini and Maya), at about 1/8th the performance of a mid-tier Ryzen 5 for CPU-only results.
Take an average gaming PC with an Nvidia card and it tends to beat the M1/M2 for video transcoding in codecs the M1/M2 do not support in their firmware. I might add, the tests we conducted were on a computer approximately half the price of the higher-end M1 Pro.
The SSD size is abysmal.
Apple silicon GPUs being equivalent to 3080s? Ummm, that was a marketing presentation Apple did, and no, they are not. Apple silicon could get 60-70fps at 1080p in Tomb Raider running natively in Metal; Tomb Raider running natively in DirectX on a 3080 can do ~315fps at 1080p, and that is with the RTX settings turned on on the PC, which you cannot do on the M1/M2; turn RTX off and it goes even faster. How Apple got their score was by voltage-limiting both the CPU and GPU on the PC, which was also running a very low-end hard drive (voltage is still relevant, but not for gaming). Get up to 4K and the M1 Max was struggling at 34fps, whilst an AMD 6800M was getting 50fps.
The M2 had about a 20%-30% uplift in native games due to the removal of the voltage limiter (it's basically the same chip with a few more cores), so that would put it up with the mid-range AMD 6000 cards for gaming.
But here's the thing: we already have 8K gaming headsets with a wide FoV. Apple's device will probably only work with Apple computers, which means the walled-garden approach, and given their recent behaviour in trying to stop anyone replacing hall sensors, and their history of poor-lifespan devices (they only expect their devices to last a maximum of 4 years), I will not be going with any Apple products.
The glasses may be OK, but until they get their vertex transformation up, make sure it actually works with professional applications, and make the system actually play games well instead of at a barely playable level, architects and other pre-vis departments will keep using other devices. Until then, I see it as marketing wank.
One way of creating a UAP event is to grab two sticks of balsa in a cross and tie one of those thin black garbage bags to the four endpoints; on the balsa you put a bunch of birthday candles.
Light them all up, and a soft glowing spherical thing will rise into the sky.
Due to the way the light diffuses through the bag, size and distance are really hard for an observer to determine, so you will get a bunch of people claiming UAP/UFO evidence off it (some will claim it was moving super fast, others super slow).
This has been done since the 70's that I know of.
(Never do this in spring/summer in Australia please; we get enough bushfires from lightning strikes and fuckwits.)
Interesting, possibly more of a virtual com port issue
"and bingo – one of them reined in the Mac Pro's blindingly fast USB speeds sufficiently." - which were slower than a budget PC's at the time.
It sounds like the sampler was using a COM port over USB, a tech that has a lot of issues on Mac OS old and new, as you couldn't change the speed.
I had something similar occur with other peripherals: just change the baud rate, and sometimes the data bits or stop bits, in Linux or Windows, and off you go.
It's also odd to see a hardware sampler in that era; we had 24-bit sampling on computers in the '90s, and PCI XLR panels for direct input.
Re: SAP complaints start now
I have a lot of experience integrating with SAP from external software, both the older internal SAP systems and the cloud offerings. My brother has also had to use it in his last three jobs.
Never have I encountered such a rubbish system; it has barely changed its overall methodology from the days when it was used on production lines for single-product and chemical companies in the '80s.
I have seen their idea of a db so many times: non-normalised, unique-id hell.
To give an example, one of the typical instances of SAP stupidity is a unique ID per component per device, meaning that if you have shared components you don't just use one unique ID for each component, but a different one for each device it is used in (this in a multinational instrument supplier whose setup was recommended/advised by SAP itself).
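For contrast, the normalised version of that is first-year database design: one ID per component, and a join table recording which devices use it. A minimal sketch using SQLite (table and component names are illustrative, not SAP's):

```python
import sqlite3

# One ID per component; a join table records which devices use it.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE component (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE device (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE device_component (
        device_id INTEGER REFERENCES device(id),
        component_id INTEGER REFERENCES component(id),
        PRIMARY KEY (device_id, component_id)
    );
""")
db.execute("INSERT INTO component VALUES (1, 'flow sensor')")
db.executemany("INSERT INTO device VALUES (?, ?)",
               [(1, 'spectrometer'), (2, 'chromatograph')])
# The same component ID is shared by both devices: no per-device duplicates.
db.executemany("INSERT INTO device_component VALUES (?, ?)", [(1, 1), (2, 1)])

names = [row[0] for row in db.execute("""
    SELECT d.name FROM device d
    JOIN device_component dc ON dc.device_id = d.id
    WHERE dc.component_id = 1 ORDER BY d.name
""")]
print(names)  # → ['chromatograph', 'spectrometer']
```

One row per component, one row per device-component pairing; "which devices use this part" is a single join instead of a hunt through duplicated IDs.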
To give another example, TAFE Victoria spent 10 years implementing SAP; at the end of that process it still could not take a single enrolment. SAP had this on their website as a 'success story'.
Another example: the db irrecoverably lost the records of 'at risk' children in QLD DSS, leading to several deaths as they were then unmonitored.
QLD health had several security breaches that were tracked back to common issues (like sql injection attacks) in the SAP interface.
Non-validated input to endpoints allowing garbage to be submitted from external 'public' sources
Spurious data ingestion (for CSM) - meaning that the data output was garbage and worse, misleading
Poor performance (speed/memory) for amount of data input/output
Salespeople being given technical titles
'Technical' staff not understanding basic (I mean 3 line) SQL queries and the capabilities of the queries (often with clients being told what was being asked was impossible)
Misleading sales pitches
Taking executive staff on 'training' courses that just happen to be at resorts (not quite bribery, but close enough)
Outright bribery in some cases
ERP capabilities are very limited and restrictive
ERP in this case stresses the 'enterprise' in that title, e.g. garbage code where people were paid per line.
The fact that only 20% of European customers believe SAP's offerings are fit for their current purpose, let alone future purposes, says a lot. (SAP made the mistake of sending the survey login emails to their stats contractor, so the respondents were the staff who had to use it daily.)
Re: Replaceable batteries
The shell still exists, and the glue is inside the waterproofing and the shell (the IP waterproofing standards are pretty rubbish anyway). The glue can actually cause a puncture when attempting to remove the battery (the fire in the iPhone repair place in Spain was caused by that), and any space saved is negligible. Internal memos show a lot of these 'features' were about stopping people from extending the life of older devices; they did not allow switching off the CPU throttling feature until after the first lawsuit and the negative press started coming out.
Besides, if someone opens the device to replace the battery then they should be responsible for their own safety, as long as they are instructed how to do so (it's a battery, it shouldn't be that problematic).
The issue was not the battery; it was the artificial slowing down of the device (something they applied to most old iPhones, not just the ones with faulty batteries), and then the device causing so many issues (rather than just running out of charge quicker) when the optional part of that feature is switched off (there is also a non-optional slowdown built in too).
Then there was the attempt to hide behind the 'it's for you, not us' argument after first denying they did it, and the failure of their software to detect remaining charge, resets instead of shutdowns, etc.
Re: I'm sure this has been known for a long time - perhaps its not on the web though.
That was the Aztecs; the Maya (who used these calendars) had already died out a couple of hundred years before, I believe. (In the USA they seem to use 'Mayan' as a catch-all term for anyone from that region with pre-Spanish-settlement bloodlines.)
The number of non-normalised SAP dbs I have come across is staggering; also, the propensity of some implementers to use commands vulnerable to SQL injection attacks is very worrying.
SAP is about sales, not service. Their tables are barely related and seem mired in an '80s mentality: no, you shall have a list of unique IDs for the same component, one for each device it is used in.
Not since the horse industry have I seen such rubbish design (some of them are still on Pick dbs).
I think the basics should be common
The number of times I have seen direct SQL commands with variables concatenated into the command text (rather than passed as SQL parameters) is ridiculous (I'm looking at you, WordPress). One of the key factors in security is developing at a minimum level of competence, which means not allowing SQL injection attacks: use SQL parameters.
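The difference is a few characters of code. A minimal sketch with SQLite (hypothetical table, same pattern in any driver): the concatenated version lets hostile input rewrite the query, the parameterised version treats it as data.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, pw TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

hostile = "' OR '1'='1"   # classic login-bypass payload

# Vulnerable: the variable is spliced straight into the command text,
# so the hostile input rewrites the WHERE clause and matches every row.
bad = db.execute(
    "SELECT name FROM users WHERE pw = '" + hostile + "'").fetchall()

# Safe: the variable is passed as an SQL parameter and is only ever data.
good = db.execute(
    "SELECT name FROM users WHERE pw = ?", (hostile,)).fetchall()

print(bad)   # → [('alice',)]  injection succeeded
print(good)  # → []            injection neutralised
```

Every mainstream driver has had placeholder support for decades; there is no excuse for the concatenated form.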
Re: I don't understand the motion
The real reason NoSQL was used, and is still used, in speed-critical and reliability-critical applications such as air traffic control? Speed. Indexing in SQL for speed on highly related data often takes a lot of custom faffing around (I know, we did it for >10 years); in a good NoSQL db this is built in. Sometimes you may need to mark a field as requiring an index, but often the out-of-the-box solution (VelocityDB and Realm) is fast enough. Backups can be done within the engine, within the filesystem, or with separate programmability.
When a full SQL Server instance takes 10 hours to insert 1 million records that VelocityDB inserts in 12s on the same system, there is a big bloody problem with using SQL for any kind of large-scale input/output.
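To be fair, a chunk of that gap is usually per-row round-trips and per-statement commits rather than SQL itself. A small sketch with SQLite (illustrative numbers, not a reproduction of the VelocityDB comparison) showing the row-at-a-time pattern versus one batched transaction:

```python
import sqlite3
import time

N = 50_000
rows = [(i, "record") for i in range(N)]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER, payload TEXT)")

# Naive: one statement per row, each committed individually.
t0 = time.perf_counter()
for row in rows[:1000]:                    # only a sample: this path is slow
    db.execute("INSERT INTO t VALUES (?, ?)", row)
    db.commit()
naive = time.perf_counter() - t0

# Batched: one executemany inside a single transaction.
t0 = time.perf_counter()
with db:                                   # commits once on success
    db.executemany("INSERT INTO t VALUES (?, ?)", rows)
batched = time.perf_counter() - t0

count = db.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)                               # 1000 sample rows + 50,000 batched
print(f"naive: {naive*1e6/1000:.1f} us/row, batched: {batched*1e6/N:.1f} us/row")
```

Embedded object stores win partly because they skip the statement parsing and commit overhead entirely; but if you are stuck with SQL, batching closes a lot of the gap.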
Another reason NoSQL is often preferred by developers: no separate install. Using most SQL systems means separately installing the SQL server (or having an online one running). The SQL systems (like SQLite) that can be packaged as part of an application are large and very slow. Most NoSQL packages can be included in your codebase to run when the application runs.
Complex models - In NoSQL I can have a db model that contains a list of non-db models, each of which in turn references a different db model type in a relationship. I cannot do that in SQL and have the queries work on it; the non-db model inside the db model would itself have to be its own table, even if the lookups would never query that table's data in a solitary context.
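A sketch of that shape (plain Python dicts standing in for document-store models; the order/line-item names are made up): the embedded line items never get their own table, yet can still resolve a relationship to a separate document.

```python
import json

# Document-style model: an order embeds a list of plain line-item objects,
# each of which references a separate product document by id.
products = {101: {"id": 101, "name": "widget", "price": 3.50}}

order = {
    "id": 1,
    "customer": "ACME",
    "lines": [                       # embedded non-db models
        {"product_id": 101, "qty": 4},
        {"product_id": 101, "qty": 2},
    ],
}

# The whole order round-trips as one document; no join table needed.
assert order == json.loads(json.dumps(order))

def order_total(order, products):
    """Resolve each embedded line's product reference and sum the cost."""
    return sum(products[line["product_id"]]["price"] * line["qty"]
               for line in order["lines"])

print(order_total(order, products))  # → 21.0
```

In a relational schema those line items would have to be a separate table with foreign keys, even though nothing ever queries them outside the context of their order.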
Native models - Allows pulling data out in a native format used by the code, whether it is C#, C++, or even front-end scripting languages (although if you are after speed, scripting kinda makes that a bit pointless).
Entity Framework does NOT do this by default; it requires console commands to be run for each migration or model change, and requires careful planning to avoid breaking the model -> db or db -> model relationship. The likelihood of a user requiring SQL-native output is pretty low; usually the data goes through an application first, and this removes the conversion step SQL requires to take flat-file data and turn it into object models.
Easier migration - Handle your migration in code: either leave a new column null, add a default value, or work out the correct value of the new column from other fields on the old data.
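That sort of code-level migration is a few lines. A sketch (field names invented for illustration): old documents lack a field, so the migration derives it from existing fields, defaulting where data is missing.

```python
# Code-level migration: old documents lack a 'full_name' field,
# so derive it from existing fields, defaulting when one is absent.
old_docs = [
    {"first": "Ada", "last": "Lovelace"},
    {"first": "Grace"},                      # partial record: no 'last'
]

def migrate(doc):
    doc.setdefault("last", "")               # default value for missing field
    doc["full_name"] = (doc["first"] + " " + doc["last"]).strip()
    return doc

migrated = [migrate(d) for d in old_docs]
print([d["full_name"] for d in migrated])    # → ['Ada Lovelace', 'Grace']
```

No ALTER TABLE, no migration tooling; the transform runs once over the documents (or lazily on read) and the app carries on.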
The arguments against anything new on behalf of SQL remind me of the arguments made in favour of Pick dbs back in the '80s, and if you've ever had to use Pick you'll understand how wrong those were on a fundamental level; the complaint that SQL couldn't store multiple pieces of data in one field missed the fundamental difference, namely that SQL didn't rely on you already knowing what datatype the field had to provide. The issue I see is that you approach this like it is MSSQL or Oracle, when the way the data is organised is fundamentally different and allows for different things.
So let's take a look at your arguments briefly.
Manageable - NoSQL generally doesn't require much management; migration of data that needs to change is put into code. Sharding and other facets like geo-replication can be handled externally or manually in many ways; the speed increase allows for greater flexibility and a LOT more control in this regard, as you are not limited by the tooling within the db itself. SQL has the advantage that you can mix and match data from different sources more easily, rather than having hardcoded relationships, but you can do this with a graph db; NoSQL is built around the models being input and output, so it cannot do this without setting models up for it (although a lot do allow either SQL or SQL-like queries on the data).
Extensible - Yeah, not sure what you mean by this. Most good NoSQL dbs allow for more extensibility, in that they allow extensions to the core functionality of the db at a code level, something SQL doesn't allow except at the surface level in the case of stored procedures and programmability at the SQL level (not at the core code level). Some of them do not have things like stored procedures or views because they are redundant given the way the db works; in other cases you can do something similar to graph calls within a NoSQL db to get a view.
Reliable - Anyone using Azure MS SQL instances will know what it is like when they update the db and bugger the connection up. Or simply the price of running an instance on the equivalent of a computer from 10 years ago.
Running MSSQL yourself on your own server, it is finicky about what other services are on there and will often fail if the server is not set up in a specific manner. Running on local systems is terrible, as AV programs can and will interfere, and staying up to date on client computers (when used in applications) is not easy, as there are breaking changes between versions that disallow downgrading.
Autobackups are great, but NoSQL backups only require minor changes to code to organise, or you can set up your OS to do them.
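For an embedded db that's a single file, "backup in code" really can be this small. A sketch (paths and the file-based db are stand-ins; a live engine would want its snapshot API or a quiesced copy):

```python
import shutil
import tempfile
import time
from pathlib import Path

def backup(db_path: Path, backup_dir: Path) -> Path:
    """Copy the database file aside under a timestamped name."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{db_path.stem}-{stamp}{db_path.suffix}"
    shutil.copy2(db_path, dest)              # copy2 preserves timestamps
    return dest

# Demo against a throwaway file standing in for an embedded db.
tmp = Path(tempfile.mkdtemp())
db_file = tmp / "app.db"
db_file.write_bytes(b"records")
copy = backup(db_file, tmp / "backups")
print(copy.read_bytes() == db_file.read_bytes())  # → True
```

Hook that into a scheduler (cron, Task Scheduler, or the app itself) and you have the autobackup without a db admin in the loop.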
All the above being said, if you have a stable system then SQL will be good.
The big difference is that now you simply organise your code the way the program/application requires; no DB manager is required unless you are running in a multi-instance, web-accessible way (and then it would be a server admin, not a db admin).
The one downside seems to be scaling out with a ton of data, but VelocityDB and Realm both handle this quite well (which is why Mongo bought the cloud version of Realm to run its backend).
All of the above being said, the only time I can see a graph db being useful over both NoSQL and SQL is when you want to use it for neural networks and other association-based things. I have never come across GraphQL being more useful than SQL or NoSQL calls. (I guess maybe if you have a dynamic method of calling the data and need highly variable amounts of data for those calls?)
The above is probably poorly worded; I don't have time for anything else, sorry.
The solution is pretty simple: the responsibility for a secure system lies with the business incorporating the FOSS software, and they have to fix it if it is not up to scratch before releasing their product. If the FOSS software itself is insecure, then it should not be released for 'retail' use by itself until the security issues have been resolved.
What could it hurt?
1. SAP doesn't work for any client I have had; from big multinationals to smaller local state-based organisations, it does not do what it promised the client it would. The one time I have heard of it working was when it was used for what it was designed for, on production lines.
2. SAP doesn't really employ technical staff (at least not ones to help clients); they are all sales droids. One of them tried to convince me that the best way to submit data was via a direct JS form on a public site with no data validation or verification. Another told staff at another business that a particular function wouldn't be possible in their SQL database, which I then did in four very simple lines.
3. There is a reason SAP targets employees and CEOs above the level of the technical staff who will actually be using or implementing their product, giving them 'training' that happens to be in luxury resorts, etc.
SAP claimed the Victorian TAFE system was a success story; it took 10 years and never managed to take a single enrolment.
SAP claimed the QLD health system was a success story, but their system deleted all the records of children at risk, resulting in at least one child's death.
This guy might actually force them to build or modify a product that works as advertised.
Why? Citrix was shit for VM networking; their product did NOT work well. It had numerous issues with SQL Server and with remoting in on limited connections, and would often fail in my experience. The VM itself was incredibly touchy and had issues running legit software that worked fine on Windows XP through Win 8. Dumping that area, given they'd had 10 years to make it work, was a case of cutting your losses.
Yeah, this group is predatory, but given SAP's history, good riddance; the fewer companies out there killing off small businesses because they 'might' compete with them, the better.
This coming from the org that used one password (total) for all its staff to access all its spyware toolsets?
Not to sound paranoid, but it is also a LOT easier to reverse engineer or snoop on GC/memory-safe applications, particularly while they are running: a lot easier than native machine code, etc.
Too much for too little
$300 per month for an SQL db that runs slower than it does on a 10-year-old desktop.
Azure is over-engineered, forcing simple setups to have a ton of little tweaks and additional modules to get running properly and securely. Add in the nightmare that is MS's naming conventions and their menu system.
Re: A great way to lose your largest customers is to sue them.
The issue is that the licence is restricted to the Nuvia team and their specific development, and Qualcomm has been using it outside that team and outside that project.
This is also not the first time Qualcomm has decided they don't need to pay licensing, like when they joined Apple in refusing to pay for the Australian IP rights for WiFi.
The published journal papers I have seen on small reactors show that the emissions created in building them are far above those from using coal power for the same output wattage over the same lifetime.
Would be interested to see how these salt ones stack up against that in environmental cost of production vs the total output and cost of input.
I remember the days pre-VRML when you could hack the Nintendo Power Glove and hook it into a printer port for VR on the screen. Blew my mind at the time.
3D BOXES!!!!!! THAT I CAN MOVE!!!!
Hopefully the metaverse goes the way of Second Life. (Also, around a third of people get motion sick in VR, so there is the same issue 3D TVs had: you have just cut your potential audience by a third.)
The difference is legal,
Data Loss - the data is gone but no-one unpermitted has it.
Data theft - the data is in the hands of a third party that is not normally permitted to have it.
This argument is essentially about limiting the liability of a company and has very little to do with protecting data. If someone gets that far into your systems, they would be able to put monitors and keyloggers on your website and steal the data directly from the customers anyway, using a MITM attack between the site and the payment processor.
Re: The real issue
You (like a lot of others) are conflating a religious/ideological comment or belief that Pascal remarked on with a proof-of-concept argument. In this case it is irrelevant.
It's the same as saying that because Darwin believed in evolution he couldn't believe in God, or vice versa. Plainly untrue, as Darwin believed in both. (Notice I do not say Catholic, as that is a gatekept community with a belief system within it.)
These are two different fields (to oversimplify: it is treating 'how' as the same as 'why'), and it is a strawman argument to conflate the two.
Breadth vs depth is the core argument here. All of the responses in the examples given as 'proof' have strong relations, and the bot digs down into those; sometimes the responses seem more tangential, but not to any significant degree, being more a grouping system than anything like an allegory, and none of them exist outside standardised source sets for training ML. Some of the responses do not make sense when you look at the syntactic structure: the bot follows the linkages of associated properties, and sometimes those conflict with themselves.
First few mistakes
1. using VB as a production-ready language
2. using Access for anything (remember the old fall-over-at-65k-records issue?)
This reminds me of VB code that had >5000 lines (and this is one function) with >40 parameters, because the object that containerised them only existed in the local namespace, so they were all loaded into the object and then all passed again in another function call because it was external.
I'm sorry, but exposed PSU capacitors in the chassis are a real worry, as is the fact that swapping in another validated, working Mac Studio drive also caused failures, which leads me to believe there are going to be more T2-style shenanigans; the second port also wouldn't work with the original drive. The SSD also doesn't have the controller most SSDs have (it's on the board instead). So what exactly is repairable here?
Re: Threadripper? Deadripper more like.
Here's a bucket of salt: given that mid-range AMD CPUs are still flogging the M1 on Handbrake tests (HB using native M1 code) for multi-core, I suspect there are a lot of 'optimisation chips' that give false-positive results for benchmarking.
I would be more interested if they allowed >1 monitor to connect (where the monitors use DP).