
How AI Can Be Used to Fight Hackers Who Steal $445 Billion a Year From Global Economy

Technology RealClearLife Staff

While Russia’s cyber escapades continue to dominate the headlines, a more pressing threat to those outside of the Beltway is escaping notice.

For the average hacker, private sector networks are preferred over their public sector counterparts because they tend to be far less protected and far more lucrative targets. Each year, the global economy loses $445 billion to organized cyber crime, according to McAfee. As a bulwark against the mounting threat, financial institutions are looking to artificial intelligence to shift their cyber security from a reactive strategy to a proactive one.

The scale of the threat cyber security professionals face has grown so large, with tens of thousands of possible daily threats to investigate, that the industry has assumed getting hacked is inevitable. “There is no such thing as perfect security,” said David Thaw, a Fellow of the Information Society Project at Yale Law School.

Imagine trying to defend a bank vault from robbers while keeping the bank open for business. In this scenario, any new patron who enters the bank could just as easily be a robber as a legitimate customer. There’s always the option of locking all the doors, but then the establishment can’t do business. “The most secure vault in the world is one with no way in or out,” says Thaw. “But, that’s not a very useful vault.”

The challenge of cyber security is finding the bad guys within the dizzying stream of harmless transactions—and doing so without disrupting those transactions. Keeping the exchange of data and money flowing is especially crucial for financial institutions since that’s the whole premise of their business. For banks and the rest of the industry, security stakes are arguably much higher given the assets and liabilities involved.

Thaw believes it’s important for companies to accept that, inevitably, an attacker will slip through the cracks. Once their cyber security team is open to the fact that a hacker will breach their defenses, they can start to prioritize stopping the ones that will do the most harm. From there, Thaw says it becomes a question of “what risks do we want to mitigate?”


With cognitive technologies like machine learning, cyber security professionals can improve their accuracy in targeting legitimate threats instead of wasting time chasing down thousands of false alarms. When it comes to malware, or malicious software, the amount of time it spends on a network, called dwell time, generally determines how destructive the attack will be. The faster security experts can respond to an attack, the less damage is done. According to the latest report from cyber security firm FireEye, the average dwell time is about 146 days. By using products like Darktrace or MIT’s AI², which apply deep learning or machine learning to spot malware faster, dwell time can be cut down to minutes.
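The baselining idea behind such tools can be sketched with a toy example. The traffic numbers and the simple z-score rule below are invented for illustration, not any vendor’s actual algorithm: learn what normal network behavior looks like, then flag statistical outliers instead of waiting for a known malware signature.

```python
import statistics

def flag_anomalies(daily_bytes, threshold=3.0):
    """Flag days whose outbound traffic deviates sharply from the baseline.

    A toy stand-in for behavioral baselining: model "normal" from the
    observed data, then alert on statistical outliers rather than
    matching known malware signatures.
    """
    mean = statistics.mean(daily_bytes)
    stdev = statistics.stdev(daily_bytes)
    return [i for i, b in enumerate(daily_bytes)
            if stdev and abs(b - mean) / stdev > threshold]

# 30 days of roughly stable traffic, then a sudden exfiltration-like spike
traffic = [100 + (i % 5) for i in range(30)] + [900]
print(flag_anomalies(traffic))  # only the spike on day 30 is flagged
```

Real products model many behavioral dimensions at once, but the payoff is the same: an anomaly surfaces within minutes of the first deviation, rather than after months of dwell time.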

Compounding the challenge of investigating cyber threats, a malware strain’s purpose is not always easy to identify at first. For instance, Dyre Wolf, a strain of malware targeting banks that stole $1.5 million with each breach according to an IBM threat analysis, appears to be a run-of-the-mill computer virus until it accesses the networks of banks that process large-scale wire transactions. For smaller banks, the Dyre Wolf trojan would manifest in a completely different way than it would for large ones.

Chimera-type threats like these are more easily countered by cognitive security systems because they analyze based on a massive data set that includes threats, possible vulnerabilities, academic papers, and law enforcement reports. Not only is that data updated continuously in real-time, it’s rigorously fine-tuned by its developers to get the most accurate results possible.

However, these data sets, or training data, have their own flaws. If biased information is fed to an AI system, it will spit out biased results. Nor can cognitive security be deployed and instantly start thwarting hackers. “Over time, the system fine-tunes its monitoring and learns from its mistakes and successes, eventually becoming better at finding real breaches and reducing false positives,” says TechCrunch’s Ben Dickson.

But even the most accurate training data can itself become a weakness for hackers to exploit. If a cyber criminal learns what training data was used to teach a system, he or she can figure out what the AI-enabled system isn’t looking for and exploit the gap. That said, AI-enabled tools remain among the most advanced in the cyber security arsenal.

Although no company is immune, the financial services industry is particularly attractive to cyber criminals for two reasons. The first, as the adage goes, is “because that’s where the money is.” Cyber criminals either steal from the financial institutions themselves or use the system to shield their true intent. Insider trading, money laundering, fraud, and all sorts of financial crimes siphon money from the global economy. With the ever-growing deluge of online financial transactions, it’s more difficult than ever for institutions, and those policing them, to parse the legitimate from the illegal.

Just as important as a money haul for hackers, financial institutions hold a high concentration of valuable data. Banks, investment funds, credit card companies, and the rest of the industry all have the personal information of their clients on file. It should come as no surprise that data like Social Security numbers or taxpayer identification numbers are prized finds; they fetch between $4 and $240 on the Dark Web, KrebsOnSecurity reports.


Because of this, banks and other financial services are constantly defending their networks from online criminals. A figure as inconceivable as $445 billion becomes easier to comprehend when the daily threats cyber security professionals must investigate number in the tens of thousands. Organized cybercrime accounts for 80 percent of all cyber attacks, according to estimates from IBM Security. Like crime syndicates in the physical world, cyber criminals gain an advantage by sharing data, tools, methodology, and even skilled labor. As a result, their attacks are that much more difficult to stop because they’re prolific and adaptive.

As of March 1st, the nation’s first cybersecurity regulations took hold for financial institutions operating in New York State. On the eve of the rules going into effect, an interdisciplinary cadre of professionals gathered at Fordham Law School’s Center on Law and Information Policy to discuss how artificial intelligence and machine learning (yes, there’s a difference) could be used to fight cyber crime.

A common theme was threaded throughout the symposium: probability.

Cognitive systems are needed to defend financial institutions because “the numbers are not in our favor,” as one executive put it. Alert generation, transforming raw transactional details into red flags, was among the more exciting applications mentioned. Some systems automatically respond to threats and only alert someone when an action must be taken by an expert. “The goal here isn’t to replace humans. It’s to augment them… to make them superhuman,” Caleb Barlow, Vice President of IBM Security, said in his keynote speech. “You still want someone with that practical experience and ask ‘Did you look at this?’ and engage in that dialogue.”
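As a rough illustration of that division of labor, the sketch below turns raw transaction records into two queues: alerts the system resolves on its own, and alerts escalated to a human analyst. The rule (an unfamiliar geography, split by wire size) and the threshold are invented for illustration, not drawn from IBM or any real product.

```python
def triage(transactions, wire_limit=50_000):
    """Turn raw transaction records into prioritized alerts.

    Hypothetical rule for illustration: a transaction from a country
    outside the customer's usual profile is suspicious. Small oddities
    are handled automatically; only high-value anomalies go to an
    analyst, mirroring the "augment, don't replace" division of labor.
    """
    auto, escalate = [], []
    for tx in transactions:
        if tx["country"] not in tx["profile"]:  # unfamiliar geography
            target = escalate if tx["amount"] > wire_limit else auto
            target.append(tx["id"])
    return auto, escalate

txs = [
    {"id": "t1", "amount": 120,     "country": "US", "profile": {"US"}},
    {"id": "t2", "amount": 300,     "country": "RO", "profile": {"US"}},
    {"id": "t3", "amount": 750_000, "country": "RO", "profile": {"US"}},
]
print(triage(txs))  # t2 is auto-handled; t3 is escalated to a human
```

In production these rules would be learned rather than hard-coded, but the shape of the output is the point: tens of thousands of raw events collapse into a short list worth an expert’s time.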

Policing financial crimes online has been made easier by partially automated investigations that can be audited and reviewed to ensure legal transactions aren’t being mistaken for something else. By combining trading patterns with communication patterns from internal emails and other messages, investigators can piece together clues that tip them off to crimes sooner. In some cases, machine learning can even predict an illegal transaction before it occurs.

Financial institutions looking to leverage the computing power of machine learning or artificial intelligence should be wary of treating it like a silver bullet. However “intelligent” cyber security systems may be, there’s no shortage of vulnerabilities caused by the people who use the networks they protect. Basic “cyber hygiene,” such as using two-factor authentication or avoiding a password as simple as “123456,” is an easy way for employees without tech know-how to contribute to the company’s collective defense, on top of employing cognitive technologies.

As one industry professional at the Fordham symposium put it, “human beings will remain to be the biggest vulnerability, probably forever.”

—Matthew Reitman for RealClearLife