* Posts by Steve Channell

206 publicly visible posts • joined 19 Aug 2014


Larry Ellison fought internal battle to kill Oracle's first-generation cloud

Steve Channell
Thumb Down

Beat IBM?

Oracle beat IBM to market by four years, running on DEC VAX minicomputers when IBM was touting mainframes. It won through by being first and cheaper.

"So how did we ultimately beat IBM? We were faster and cheaper and more secure and more reliable." is pure fiction. Without DEC VAX/VMS and DECnet, Oracle would have been an also-ran.

Oracle's Exadata is not cloud-ready.. "gen-2" is a story that makes a virtue of a deficiency

The Metaverse is the internet no one wants

Steve Channell

Zuckerberg in a diaper

People think they'll choose their avatar in the "metaverse", and Mark fancies a spaceman outfit.

In reality, the first hack will be to choose the avatar for others, and Mark will almost always end up as a baby in a diaper, and Putin in a BDSM gimp suit.

Rust is eating into our systems, and it's a good thing

Steve Channell

Re: Pension plan

Java is already the COBOL of the 21st century, while Java/EE is a slower version of CICS/COBOL in the 21st century. C++ might be the PL/1 of the 21st century.

By far the best thing about Rust is that you don't have to explain why you never switched to unique_ptr<> for type safety ... many of us C++ developers don't have the luxury of rancid buggy code to justify a shiny new toy.

Rust is great, but cpp2/cmake/vcpkg will eclipse it

Automating Excel tasks to come to Windows and Mac

Steve Channell

I remember writing Excel macros in 1987

Back then it was only available on the Macintosh, but it had XLM "script sheets"

Nothing new here: when you can code and use functions in a sheet, it'll be an alternative to VBA

Open source databases: What are they and why do they matter?

Steve Channell

Re: Nuff said

Fair point.. using Oracle currently - nice that the RDBMS hasn't changed since the '90s, shame that applies to the tools too

Steve Channell

Re: Nuff said

In the real world, "For developers, there is no debate. The future of the database is open source" is misleading: developers prefer open source because they don't have to pay for it, but they also don't want to be the one to support it - they would rather be an Oracle developer than a PostgreSQL DBA.

For businesses the question is not primarily about license costs, but about recovery cost after a failure, or support cost when there are performance problems. The problem comes with complex HA clusters and DR failover, where there is little interest in devoting time - and those that do, do so because they work for a cloud vendor. The total cost of ownership of PostgreSQL vs MS SQL Server is not necessarily in favor of open source, especially when management insists on a support contract.

The "investor community was pretty clear that open source is dying" line either relates to dumb investors, or to investors looking at Snowflake and thinking that a $12 billion company shouldn't exist if the answer is open source.

With the move to cloud computing, the cost driver is scalability of managed instances rather than license cost of instances.

The wild world of non-C operating systems

Steve Channell

Re: Multics & PL/I

No discussion of operating systems is complete without a reference to Multics :)

It was Multics and MVS usage of PL/1(S) that persuaded Intel to adopt PL/M

Warning over Java libraries and deserialization security weaknesses

Steve Channell


Not only has Java deserialization been a "bug" for twenty years (the remote class loader enabled injection of arbitrary code into STRUTS a decade ago, and more recently via Log4J), it has been maintained as a "feature" to prevent IKVM and Android cross-compilation of bytecode

Why Intel killed its Optane memory business

Steve Channell

Re: numa

Both Windows and Linux have NUMA support, but Optane was different in two respects: [1] it required an on-chip memory controller, so it was confined to high-end CPUs; [2] it really needed OS support for a third memory tier (e.g. cluster/OS state for restart sync) and a database/hypervisor checkpoint store.

Steve Channell

OS Software was the issue

While SSDs can be scaled across storage devices, mirrored and replicated, Optane couldn't be.

For Optane to succeed, OSes needed to provide kernel support like IBM's mainframe Hiperspace/Dataspace or Expanded Storage, but the use-cases were too narrow and the performance too marginal.

Intel's own tests showed a performance advantage only if RAM was constrained and the SSD did not have a non-volatile memory cache (battery-backed DRAM).

Arrogant, subtle, entitled: 'Toxic' open source GitHub discussions examined

Steve Channell

The problem is not Linus Torvalds, it's the mediocre who behave like Linus

I well remember Linus's post on Usenet, because I was using Minix (pcnx build) and it was a great argument for upgrading to an 80386 PC.

It would not have got anywhere if [1] Linus hadn't worked so hard, [2] he hadn't been responsive to feedback, [3] he'd allowed all the computer science PhD students to shovel code into it for bragging rights.

Fast forward thirty years: [1] he's still working hard, [2] he's still responsive, [3] people are still trying to shovel code into it. When one rogue commit can trash the whole edifice, we have to accept that as proportionate behavior.

The problem is not Linus, but the many mediocre developers who think they've acquired god-like status because a corporate policy mandates code reviews for pull requests and there is no specification to constrain the terms of a rejection/change-suggestion. There really was a time when a rejection in a code review required a reference to functional/architecture requirements, standards or a defect.

Unbelievably clever: Redbean 2 – a single-file web server that runs on six OSes

Steve Channell

Re: Clever work around for artificial incompatibility

The 8086 processor has segmented memory for compatibility with 8080 programs: for .COM files, DOS set all the segment registers to the same 64k segment, loaded the .COM file and started executing from the first instruction (whether valid or not). Calls to the OS were via processor interrupts, and would execute whatever code was referenced in the interrupt vector table. SCP/MS/PC DOS used the same interrupts (e.g. INT 16 for the keyboard) as CP/M - they were both essentially boot loaders rather than what we'd call an OS today.

It's a semantic distinction to say that 8086 .COM programs were not 8080 programs - they couldn't reliably change segment registers, so would probably fail if they included non-8080 instructions. It's fair to say that few "useful" programs written for CP/M would run on DOS, but VisiCalc was one of them.

Steve Channell

Clever work around for artificial incompatibility

Old timers will remember when MS/DOS programs were either .COM or .EXE, depending on whether they ran in a single 64k segment or needed the OS to set up the environment using header information. .COM files were 8080 programs that could also run on 8086 processors, with the PC/DOS or CP/M "operating system": while redbean is undoubtedly clever, it is not conceptually different to the 1979 design.

Object file formats date from the 1960s, with little difference until ELF was created to allow shared-object loading in Unix System V, for an ABI without a hardware dependence on the interrupt vector table. ELF vs COFF used to be a question of load time and space, but is largely irrelevant today. It's a badge of shame that modern OSes do not support multiple executable file formats.

One day we'll be able to load .dll files on Linux (without LoadLibrary) and .so files on Windows (without clang/cmake).. maybe this is the kind of initiative to get things going

Only Microsoft can give open source the gift of NTFS. Only Microsoft needs to

Steve Channell

nothing to do with OS/2

OS/2 had HPFS ("High Performance" File System). NTFS was purely an NT file-system.

Now that Windows has moved to ReFS, there's a good case for NTFS to be ported to Linux, but it should also include a DLL loader for PE files

Any fool can write a language: It takes compilers to save the world

Steve Channell
Thumb Down

Is El-Reg going down-market?

"any fool" can barely understand the rules of grammar and operator precedence; writing an unambiguous language grammar is not trivial (even with ANTLR).

'C' is a lower-level language than {Fortran, Cobol, Algol, PL/1, etc} because it has increment/decrement operators and pointer arithmetic - but that does not make it a glorified macro assembler.. it might be a punchy line for an opinion piece in a tabloid, but it has as little relevance to 'C' programming as it does to macro assemblers.

The case against 'C' is the lack of intrinsic bounds checking and automatic conversion; the case against C++ is complexity: ABI compatibility is a facet of its age and backward compatibility (e.g. name mangling), not some fundamental brain-fuzz - Rust would have the same issues if it were as old.

Oracle releases Java JDK 18 with enhanced source code documentation

Steve Channell

Still no EOL for RMI class loader vulnerabilities

Once upon a time, the idea of passing any object over a remote connection, which would load missing remote classes, seemed like a good idea. Back when Java was seen as a technology for set-top boxes or applets for the HotJava browser, it was part of the "develop anywhere, deploy everywhere" marketing.

Fast forward to the new millennium (20 years ago): the danger was clear, but bloody-mindedness prevented its deletion - triggering the STRUTS and then Log4J code-injection disasters.

GraalVM and IKVM could never fully support the RMI legacy: it's disappointing that Oracle is choosing to extend compatibility with a vulnerability rather than deprecating it. Since when were debug symbols considered an "enterprise" feature?

Nvidia reveals specs of latest GPU: The Hopper-based H100

Steve Channell

Grace Hopper

Great to see a computing platform named in honor of Admiral Grace Hopper

File Explorer fiasco: Window to Microsoft's mixed-up motivations

Steve Channell


A banner that says "you're using Notepad, have you considered VS Code" might be welcome, as would "you're using Access, have you considered uninstalling it".

Links like the next rip-off crypto scam appearing in any app {Explorer, Edge, Twitter, the Facebook} are a problem.. if MS sticks to the former, surely the question is whether we get anything for tolerating it

Deutsche Bank seeks options as sanctions threaten Russian dev unit

Steve Channell

Re: Bloody stupid PR

Anybody that used Luxoft had similar exposure.. but the "no more exposed than other financial institutions" line alluded to institutions with centres in Kiev - which closed overnight

Steve Channell

Bloody stupid PR

Let’s be honest (we’re among friends), this is just a bloody stupid comment by some stupid muppet in the PR department: DB is no more exposed than any other financial institution.

What's concerning is the apparent lack of "adult" supervision - what they really meant to say is "we have some resources in Moscow that are [insert excuse that matches visa criteria] critical, and we'd like to get them out quick".

Steve Channell

Citrix and Wyse terminals

No developers in DB have desktops, everything is virtualized

Another data-leaking Spectre bug found, smashes Intel, Arm defenses

Steve Channell

Re: Actually...

Speculative execution existed on mainframes when Intel was still primarily a memory chip manufacturer.

Intel used it aggressively to recover from the advantage that AMD achieved with its Opteron processors' unified memory model (Intel had stuck with its north/south memory bridge).

It's fair to say that performance was favored over security.. though conspiracy followers would suggest it was a deliberate design. Caches do not use address translation tables, so odd sequences of instructions can bypass virtual memory security. It only became a real risk when virtualization and cloud computing emerged (you need to inject machine code into the environment to exploit it).

IBM Cloud to offer Z-series mainframes for first time – albeit for test and dev

Steve Channell

Age of infrastructure-as-a-service

Good headline, but hardly new to the mainframe - mainframes were originally rented, and launched "time-share services" decades ago. The service died out for the same reason cloud computing might: cost.

The enabler will be tiered/delegated RACF

Red Hat signals Intel's software-defined silicon will debut in Linux 5.18

Steve Channell
Thumb Down

Regressive step

While switchable features have been part of computing from the earliest days, this represents a wholly new development, moving from the purchase of hardware to the purchase of a license. The ability to purchase core usage seems like a good idea (especially with Oracle licensing) - you could upgrade or downgrade as needs change - but there are regressive aspects.

Intel will further be able to experiment in production with features like transactional memory that do not necessarily work.. pressuring vendors to add support for features that could be enabled to break AMD or emulation.

Licensing could be used to force customers to upgrade, by expiring licenses or changing to a subscription model that borks old kit after a period of time.

An 8-core CPU discounted as a 4-core seems like a good way to reduce end-user cost, but it could be used to manipulate the market by field-upgrading Intel processors whenever AMD launches a new product.

Doesn’t seem like a consumer friendly or open initiative – Linux should have nothing to do with it.

Carked it, Diem? Zuckerberg's grand cryptocurrency thing may sell off assets for $200m

Steve Channell

An international stable-coin is not just for Terrorists, Drug dealers and thieves

It might actually have some valuable software, but it was always going to fail when tied to the Facebook.

Microsoft's do-it-all IDE Visual Studio 2022 came out late last year. How good is it really?

Steve Channell

Re: The Microsoft naming department

[1] C++ structured exception handling has finally, but it is used to close a try block.. sure, Java recognized that a finally block could include code that would otherwise be duplicated.. but it did not originate with Java. It is not replaced by RAII, because resource access is not the only source of exceptions

[2] structs (and classes) in C++ can be allocated on the heap or the stack.. C# introduced boxing to pass structs by reference. By convention (like Hungarian notation) structs are used as value types in C++. Java HotSpot supports stack allocation of classes, but only if the object cannot escape the scope of the function

[3] Properties were inherited from Visual Basic.. Java never had a notion that could be "improved on"

[4] multidimensional arrays are not "unsafe in C++"; they are dense arrays where the index is calculated explicitly. Array-of-array is preferred for efficiency, not for safety. For GPGPU, multidimensional arrays are needed to avoid pointer dereferencing in kernel functions

[5] C# 1/1.1 did not have generics (like VB); C# 2.0 acquired generics from C-omega (a research project), with explicit support in the runtime, and not GJ-style type erasure - it was not to be "better than J2EE", but to support a clean and efficient type system

[6] A std::shared_ptr<T> contains a pointer to the object and a pointer to the reference counter; there is no change to the object (or architecture), and no "garbage collection". Passing std::shared_ptr<T>& is preferred to avoid interlocked increment/decrement. The PImpl pattern is C++ specific (to avoid #define public private); it has no meaning in managed languages.

It is touching to see such affection for Java, but if we're honest the JVM was inspired by the UCSD p-System.. outside of academia, Microsoft was the main proponent of p-code with its COBOL, Pascal and Basic compilers.. it is not unreasonable to say that Java RMI was inspired by the dynamic p-code generation of DCOM proxies.

Sure, "Cool" is a tacky name, but Java got its name from the HotJava browser, which got its name from the high-caffeine coffee from the island of Java

Steve Channell

VS 2022 is 64-bit

The 64-bit IDE is the biggest change. IntelliSense was a big problem for very large projects, which drove developers to use VS Code.

Support for Clang, CMake and WSL2 debugging makes Linux development with VS straightforward. It's unlikely there will ever be a port of VS to Linux, because it is no longer needed with GUI support in WSLg (Win11's inclusion of Wayland and GPU virtualization kinda turns Windows into a hybrid Linux OS for developers)

Steve Channell

Re: The Microsoft naming department

Not true about C# and Java: J# was the increment on Visual J++. There would probably never have been a C# if Sun Microsystems had not gone to court over COM+ interop annotations. Only now, with GraalVM, is Java remedying the omission.

C# never had Java's checked exceptions (a good thing - exceptions as part of the interface were an experimental feature that was never removed).

C# always had structs as value types (including complex types like DateTime, which Java has to shred to place in arrays). structs make GPGPU interop simple

C# always had property methods, which Java only has thanks to IDE support (or via the abominable JDO)

C# always had multidimensional arrays (like Fortran), which makes GPGPU interop simple

C# 2.0 introduced proper generics without type erasure, unlike the GJ fudge in Java 1.5

C# only looks like Java, if you've never used C++

Former Oracle execs warn that Big Red's auditing process is also a 'sales enablement tool'

Steve Channell

First movers advantage

Oracle was the first RDBMS released, and helped the success first of VAX/VMS, then UNIX. Really big, important systems stuck with IMS, then DB2.

Oracle has always been simpler to work with because of multi-version concurrency control, but now that the cost of MVCC is lower (relative to network/disk latency), all the other "adult" databases (i.e. not MySQL) have added it.

Postgres is going to eat their lunch in the medium term.. long-term, licence fees will be cut.

Spruce up your CV or just bin it? Survey finds recruiters are considering alternatives

Steve Channell
Black Helicopters

Re: The Register, LinkedIn, papers, presentations and github are better than a traditional CV

Those little "like us on blah" buttons on the bottom right of the page link your posts to your social footprint (unless you're using Tor).. privacy is a concept. Anyway, rejecting an approach from GETCO was a mistake (just because they only had a banner page).

Steve Channell

The Register, LinkedIn, papers, presentations and github are better than a traditional CV

The Register comments, LinkedIn experience/comments, published papers, conference presentations and github are better than a traditional CV

In one instance, I was contacted by a High-frequency-trading company after commenting on a register article about a new transatlantic cable.

Coding tests can be a bit of fun, but software development is not about banging out code in double-quick time; it's about fashioning an algorithm that will scale reliably, with test cases, telemetry and documentation. Understanding a business problem, and being agile in the face of evolution, is more valuable than speed. The problem with coding tests is that they favor people who have the time to practice over and over again.

Going round in circles with Windows in Singapore

Steve Channell

Re: The dialog box is clearly photoshopped

Appearances can be deceptive: pre-GPU, the shadow was rendered by GDI after drawing the window box

Steve Channell

Out of memory in error handler

It's not photoshopped, it's a failure during exception recovery - probably Dr Watson trying to write a dump file, with a hard-disk failure

SmartNICs, IPUs, DPUs de-hyped: Why and how cloud giants are offloading work from server CPUs

Steve Channell

Intel playing a 3-Com strategy

Aside from mainframe front-end processors, the first to embed logic into a network card was 3Com in the 1980s, but the cards failed to keep pace with the increasing speed of CPUs, and had negligible market penetration. The reason to mention it is that 3Com's apparent technology advantage discouraged other vendors from entering the Ethernet card business, allowing 3Com to dominate the market with its dumb cards at very high margins. Intel's strategy would seem to be to prevent other vendors from dominating the market.

Discrete GPUs may not have a future for regular desktop graphics (browsing, word processing, spreadsheets), but high-performance ray tracing will still require powerful GPUs for the considerable future. NVIDIA, though, is currently disadvantaged by the time it is taking for games software to move to ray tracing for 4k and 8k games.

General-purpose GPUs will always have a place for AI and simulation.

Offloading TLS encryption to a SmartNIC/DPU is the best way to reduce CPU load, but additional functions (RDMA, block storage) will require cooperation with OS providers - something NVIDIA can do while it waits for ray-tracing games to catch on

Cryptocurrency 'rug pulls' cheated investors out of $8bn in 2021 – report

Steve Channell

Re: the stash

"Satoshi Nakamoto" spent six months mining bitcoins as an initial load to use all the small prime numbers. That initial stash of coins has never been exchanged and remains on the blockchain.

If the coins were now exchanged for fiat currency, the gain would have to be declared and tax paid. Financial libertarians don't believe the government should be able to create new money (quantitative easing), and see cryptocurrencies as a way to displace government.

With the market value of the "Satoshi Nakamoto" stash running into billions, a sensible strategy would be to diversify some of the "wealth" into other assets - the fact that it hasn't happened suggests he/she is a financial libertarian.

Steve Channell

Re: 66,239

Financial libertarianism is evidenced by "Satoshi Nakamoto" not selling the stash of coins accumulated during six months of testing before launching Bitcoin.

"Proof of waste" has proved effective as a mechanism for trustless truth, but CBDCs won't use it to scale DLT.

Since the dawn of time, a percentage of surplus wealth has always been lost to theft - crime has almost always been more expensive than tax.

Azul lays claim to massive efficiency gains with remote compilation for Java

Steve Channell

Re: 80-100 per cent faster than what you can do with Graal

The comparison is not with {C++, C#, Rust etc} but with the current GraalVM - the premise is flawed.

Graal compatibility is mentioned, but not that the specific area is RMI dynamic (remote) loading - not supporting the Log4j RCE is a feature, not a bug.

Graal is hampered by Java's lack of a module system, but dynamic loading dependency is not an issue for containers (unless using RMI)

Java HotSpot is a profile-guided code generator, but it is only reliable if the first 100-odd executions are representative - it can be significantly slower than Graal or JRockit (optimising for closed accounts)

I'm sure Azul has (relatively) great technology, but remote runtime compilation exacerbates the problem with the RMI flaw - the flaw that is driving people to ditch Java completely.

Log4j RCE latest: In case you hadn't noticed, this is Really Very Bad, exploited in the wild, needs urgent patching

Steve Channell

RMI gate

The idea of looking up a message template for culture-dependent messages is not totally bonkers, but doing it remotely is a bit dumb.

The exploit here is that the lookup is assumed to return a string, but if the LDAP server returns a Java object (e.g. com.russia.gru.nasty), RMI will make a remote class-loader call to get the class to deserialize the object.

The nasty bit is that the loaded class is now running on the app server (probably behind the DMZ, and probably running as a service account).

RMI was a bad idea 20 years ago (.NET 1 made a point of avoiding it)

Renting IT hardware on a subscription basis is bad for customers

Steve Channell
Thumb Up

Architecture absent

Time was when hardware provisioning took so long that the hardware order needed to be placed near the beginning of a project.. which meant we needed quite a good idea of the architecture early on.

Architecture models needed to be quite comprehensive, and sized to avoid ordering the wrong platform.

Without the architecture trigger, it is tempting to skip the platform architecture phase completely, and presume cloud scalability.

We're making F# more normal as a language, says its creator

Steve Channell

Re: I like playing in F major

No, it's not based on Fortran (in the same way C is not based on COBOL).

While F# is functional, it has always supported OO programming, and even mutability (you just need to declare variables as such - like Rust)

The biggest impediment to F# adoption is the number of features that have migrated to C#

Steve Channell

Re: I like playing in F major

You can program microcontrollers in F# using Wilderness Labs Meadow

UK Treasury and Bank of England starting to sound serious about 'Britcoin'

Steve Channell

Re: Will happen, will be first, will succeed, but will be more like a database

"Proof of work" is inherently slow, but slowness prevents excessive forking (an issue for Ethereum relative to the slower Bitcoin). A CBDC would only include clearing banks, other central banks, and some wholesale banking parties - no retail transactions. With a smaller group, simpler/faster hashes can be used.

In the retail channel, digital removes friction for large payments, and automates settlement for smaller payments.

The Bank of England will be amongst the first, because (unusually) it is not the only issuing bank for sterling notes (RBS, BoS, Ulster Bank etc also issue £ notes).

Steve Channell
Thumb Up

Will happen, will be first, will succeed, but will be more like a database

A CBDC will use distributed ledger technology (DLT), but will be nothing like bitcoin: there will be no mining, and participating nodes will be limited and pre-trusted.

Unlike bitcoin or a distributed database, a CBDC DLT can be extremely fast, because DLT time is relative and forking provides distributed concurrency. Bitcoin hashing provides a trust mechanism (proof of waste), but it doesn't need to be as expensive if nodes are limited.

'We will not rest until the periodic table is exhausted' says Intel CEO on quest to keep Moore's Law alive

Steve Channell

I do hope they paid you

For the gushing review of their polished mirrors and atmospheric smoke. RibbonFET might be a game-changer, but it was not demonstrated, and might not work.

The reality is that Intel needs to "promise" jam tomorrow because it has a weak position, and is about to take a beating. AMD makes faster AMD64 chips (x86-64, x64.. are aliases), Nvidia makes more scalable chips, and Apple is humiliating them with the M1 Max chip.

Nvidia nerfs RTX 3080, 3070, 3060 Ti GPUs to shoo away Ethereum miners

Steve Channell

Re: nVlidia is not a government agency

The key is "one of the definitions", not the definition: Friedrich Engels (Karl Marx's buddy) was a "socialist", but also a factory owner. The illustration was not to suggest that Nvidia is "socialist", but a preface to the proposition that it will fail.

"Proof of waste" is just about the most stupid idea for consensus in a digital ledger, and a waste of the computational power of a GPU: the answer is not regulation of supply, but innovation to remove the need

Gartner's Windows 11 adoption advice: Explore but don't rush

Steve Channell
Thumb Up

Wayland and GPU access from WSLg is worth having as a developer

Otherwise, just another incremental improvement on Windows NT 3.51

Microsoft's .NET Foundation under fire as resigning board member questions its role

Steve Channell

Funding is the key

The problem for any small developer in this space is that most users will always choose the MS option. NHibernate is one of a number of adequate ORM tools, but most people will just use Entity Framework.

This guy develops a UI framework (maybe with the thought of monetizing it at some time), but can't compete with the MS UI stack, and can't use board membership to get better billing.

The answer is for MS to do something like Google Summer of Code, and pay developers to develop interesting tools

Imagine a fiber optic cable that can sense it's about to be dug up and send a warning

Steve Channell
Big Brother

Hunter Killer

The biggest advantage of this technology has to be in sub-sea fibre optics, where access is difficult and the distances between repeaters are long.

Detecting earthquakes and anchor damage seems like a euphemism for detecting physical hacking for espionage. Israel demonstrated that fibre optics can be hacked; provided you don't splice all the fibres at the same time, it's undetectable by the owner of the cable.

Getting an early warning that somebody is interfering with a sub-sea cable is only really useful if you can have a hunter killer submarine lurking underwater to kill an attacker before they get away - for that you'd need nuclear powered submarines.

While Australia is a relative backwater (in global telecoms), it is significant that none of its sub-sea cables goes near China.. where anchor damage and earthquakes are such a problem.

Only one software giant to make impact on the robotic process automation market, says analyst

Steve Channell

Robotic Process *Analysis*, not automation

The biggest value of RPA tooling is not automation, but Analysis.

While RPA tools are great for screen scraping, the advent of single-page-application frameworks {Angular, React, Vue, Elm, etc} has rendered a generation of automations redundant, because [1] pages are more complex, and [2] web services are better suited to traditional integration (even if the integration is a node.js mashup)

*Attempting* RPA is a good way to perform process analysis, but the outcome is just as likely to provide information for an integration project as for a robot - and in a small number of low-volume cases it's actually cheaper to get people to do it.

The exception is where the integration touch-points are well understood by the vendor, or they have hooks in the API (usually because they wrote it). This is why Microsoft is likely to be the biggest winner:

[1] Outlook/Exchange/Office 365 are the largest automation surface (even if few are using Dynamics, or hosting in Azure), and Active Directory is the trust mediator

[2] RPA tooling can be discounted for platform lock-in