Many years ago, I was working at a financial institution that processed several million dollars in transactions every night. Like many institutions, it relied on a (t)rusty mainframe to crunch through the numbers overnight.
Those were the days when business continuity was a relatively simple affair: you simply had two of everything. So there were two overnight batch processes for payments, a primary and a backup.
The backup was only used in emergencies, and there was a process to swap systems to use the backup if need be.
It so happened that there was a developer who had grown tired of working and decided it would be a good idea to plot a heist and retire.
The plan was relatively simple: introduce an extra payment instruction transferring several million to an offshore personal account.
All that was needed was a few lines of code inserted into the overnight batch process to make it all happen.
However, the slight snag was that it wasn't easy to make any changes to the production system. It was closely monitored, and all changes were subject to strict scrutiny and several layers of approval. Any change would have raised many questions, ones for which there were no good answers. But the devilishly-minded developer had other plans. The backup batch job wasn't subject to the same rigour. So the developer figured that if they inserted the code into the backup system, it could go undetected.
As the developer anticipated, the commands were successfully inserted into the backup process without raising any alarm bells. Now, all the developer needed to do was to wait for the production system to hit an issue and for processing to take place through the backup system.
This presented a problem, though. Once the payment instruction had been executed, sending millions into their offshore account, the developer would need to get out of the country quickly, before the discrepancy was discovered.
The escape plan had two steps. First, fly to a country with no extradition treaty. Second, move the money through several other accounts, both to limit the chances of the payment being reversed and to make it difficult to track.
Everything was set and ready to go. The developer just had no way of predicting when the overnight process would switch from production to the backup system.
So the developer took a chance and booked the first flight out of the country for the next morning. Then, before leaving work that night, the developer manually forced the system to process from the backup process overnight.
It was the almost perfect heist. Almost. Police arrested the developer at the airport, just as they were about to board the plane.
Our manager was a nice guy. One of those people who cared about his team, but was perceived as being too soft to hang with the senior managers of the company.
Several weeks after the incident, he presented the timeline of events and findings on the attempted heist to his senior management. Everyone was rather pleased with the outcome, and one executive piped up: "We were lucky to catch the developer."
It was at that moment that our manager displayed an assertive trait that no one in the team thought he possessed.
"Luck? Do you mean that we were lucky to have extensive monitoring controls on the production system? Or that we were lucky to automatically raise an incident whenever an overnight job switched to the backup system? Perhaps it was luck that we captured all administrator-level access to highlight any unauthorised changes. Or that it was by pure luck that we had existing relationships with law enforcement who could arrest our suspect within a few hours. If that's what you mean by luck, then yes, absolutely, we were very lucky."
Good security isn't something that happens by chance. Luck has very little to do with designing secure systems or implementing robust monitoring and threat detection capabilities. It is a well-defined, well-thought-out process that takes time and effort. If anything, IT security needs to create its own luck.