Always involve IT, even if it's just as an FYI at the start of a project
Quote: "The lesson for anyone thinking of deploying RPA is that they must involve IT in projects early on. Business teams thinking they can use RPA to get automation without troubling IT will find it is a false economy, said Neil Ward-Dutton, VP of AI and automation practices at IDC."
This goes for many other things, not just for RPA.
Oops, this ended up longer than expected! Sorry.
Not RPA, but in a similar vein. A good few years ago (early 2000s), the company I worked for did a project without our (IT) knowledge. I was one of the techies looking after what was basically an Integration platform. We took in customer data in various formats (CSV, EDIFACT, XML, TRADACOMS) that arrived via Internet FTPS and dial-up (UUCP and Kermit!), and converted it all to formats the internal systems could handle (XML for the newer stuff at the time, good old fixed-width for the IBM Mainframe we had).
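As a rough illustration of the kind of translation involved, here's a minimal sketch in Python (the field names and widths are invented for the example, not the real record spec):

    import csv, io

    # Hypothetical fixed-width layout for the mainframe feed:
    # (field name, width) pairs; the real layouts were far richer.
    LAYOUT = [("customer_id", 8), ("item_code", 12), ("quantity", 6)]

    def csv_to_fixed_width(csv_text):
        """Turn an incoming CSV batch into fixed-width records."""
        records = []
        for row in csv.DictReader(io.StringIO(csv_text)):
            fields = []
            for name, width in LAYOUT:
                value = row[name]
                if name == "quantity":
                    value = value.zfill(width)             # numbers zero-padded
                fields.append(value[:width].ljust(width))  # text truncated/padded
            records.append("".join(fields))
        return "\n".join(records)

    sample = "customer_id,item_code,quantity\nACME0001,WIDGET-42,15\n"
    print(csv_to_fixed_width(sample))   # -> ACME0001WIDGET-42   000015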
The peak started mid-afternoon and ended late evening (end-of-day stuff), as the majority of clients (with a very few low-volume users as the exception) did everything as batches at the end of the day, hence the peak being when it was. So we'd typically get just one or two files from each customer, but each file would contain 100s to 100,000s of data items.
As such, the platform was tuned for batch processing, and all the internal transfers and back-end systems were set up the same way, the expectation being few files per customer, but many records in each file.
Anyway, my employer had outsourced some of its IT to a certain company whose name begins with the letter 'F'. A year or so earlier they'd designed a web app that customers could use instead of building their own systems (aimed at small and mid-sized customers), and this system simply appeared to us as just another customer. The fact that there were 100s or 1000s of customers behind it didn't really matter to us. It batched up all the customer data together every 30 minutes or so and sent that through to us. All was happy with the world.
The 'Business' decided they didn't like the added latency from the batching on F's web platform (as it delayed when the data turned up on back-end systems), so they asked F to change it to near-real-time processing instead. This was implemented one weekend, without informing anyone in IT or any service managers or owners.
Everything was fine until about 10:30 on Monday morning, when one of my colleagues noticed there was some lag between data arriving on our system and when it hit any back-end system, and this lag was gradually getting worse.
I jumped onto the UNIX platform where it lived, had a look around, and found that a working directory used during a batching-up process had something like 1,000,000 tiny files in it, when we'd only expect to see a few hundred larger files at most.
We eventually figured out that F's web interface had been updated to generate a 'batch' file for every single item of data (millions a day).
Worth noting at this point that each batch file had two headers and a trailer. So, as an example, whereas a single batch of 1000 items of data would have had 1003 records in total in one file, we now had 1000 individual files of 4 records each, as each needed its own headers and trailer! That's 4000 records to process, rather than the original 1003. (This also broke the Mainframe, which created jobs based on headers, so 1 old job became 1000 new jobs; the MF team were also not happy!)
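To put numbers on that overhead, a quick back-of-the-envelope calculation using the 1000-item example above:

    ITEMS = 1000                 # data items in the original single batch
    ENVELOPE = 2 + 1             # two headers plus a trailer per batch file

    # Old behaviour: one file carrying the whole batch.
    old_files = 1
    old_records = ITEMS + ENVELOPE           # 1003 records

    # New behaviour: one 'batch' file per data item.
    new_files = ITEMS
    new_records = ITEMS * (1 + ENVELOPE)     # 4000 records

    print(old_files, "->", new_files)        # 1 -> 1000 files
    print(old_records, "->", new_records)    # 1003 -> 4000 records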
Basically we ended up (in total, with other customer data) with something like 100 times more files than expected, plus around a threefold increase in the overall volume of records. The Integration platform was already running at around 95% utilisation during peak hours (about 4 hours a day); the poor system didn't stand a chance. (It was still working, just not fast enough to keep up, and the bigger the backlog got, the slower it got!)
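That "the bigger the backlog, the slower it got" behaviour is just queueing arithmetic: once arrivals exceed what the platform can clear, the backlog grows every minute instead of draining. A toy illustration (the rates here are invented):

    CAPACITY = 100        # files/minute the platform could clear (illustrative)
    ARRIVALS = 300        # files/minute arriving after the change (illustrative)

    backlog = 0
    for minute in range(1, 241):            # a 4-hour peak window
        backlog += ARRIVALS - CAPACITY      # net growth once over capacity
        if minute % 60 == 0:
            print(f"after {minute // 60}h: backlog = {backlog} files")
    # after 1h: 12000 files ... after 4h: 48000 files, and still climbing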
The only initial workaround was to close down the feed from F's system, move all the backlogged data out of the way, and let new stuff come through at its regular speed. We also manually pushed through all the data from the other customers (as this was still in batches), so at least it only impacted the one source (although that one source accounted for a large portion of daily volumes). It was late afternoon by the time we got this done.
The 'Business' had to go back to F and get them to back out the change, as none of the systems could cope with it. Massive egg on their faces, at least internally, as it turned out they'd been selling this near-real-time service for a while and this was their grand launch! No doubt we (IT) probably got blamed behind our backs for it not working!
All they needed to do was ask one person on my team, "What would happen if we changed the batches to this?", and anyone on the team could have predicted the outcome with ease, saving everyone all the wasted time, effort, lost revenue etc., and perhaps even coming up with a solution for their business need!
About a month later the change was re-implemented, this time after engaging with our team, who developed a solution and even tested it before go-live! Worked perfectly the second time round. Go figure!