
AI and the law need to work together to fight cybercrime

I’ve previously written about whether the law is useful at stopping spam and how AI fights cybercrime. This post discusses how each of these approaches is critical, and why the problem can only be solved when they are used together.

Laws strictly defining a class of abuse will fail

At present, cybercrime is running rampant, virtually unchecked by laws. Laws like CAN-SPAM that define what content constitutes abuse are doomed to fail. See my post on whether the law is useful at stopping spam for more details.

These problems always have fuzzy borders, and no law can cover all of the abuse without also covering substantial amounts of legitimate behavior. The criminals are actually emboldened by a clearly articulated law.

Many of them see clearly what the government will fight, and subsequently craft their content just outside that space. They even go so far as to claim that the law proves they aren’t doing anything wrong.

AI can slow cybercrime, but not stop it

Artificial intelligence algorithms can spot attacks at very large scale. A human looking through individual pieces of data may never notice a pattern, but algorithms operating on massive datasets can surface those patterns and identify the abuse.
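To make this concrete, here’s a minimal sketch (in Python, with made-up toy data, not any real platform’s system) of the kind of aggregate pattern an algorithm can catch but a human reviewing messages one at a time would miss: a sender whose volume is wildly out of line with everyone else’s.

```python
from collections import Counter
from statistics import mean, stdev

# Toy dataset: 20 ordinary senders with 2 messages each,
# plus one bulk sender pushing 500 near-identical messages.
messages = [(f"user{i}@example.com", "hi") for i in range(20) for _ in range(2)]
messages += [("bulk@spammer.test", f"You won prize #{i}!") for i in range(500)]

volume = Counter(sender for sender, _ in messages)
counts = list(volume.values())
mu, sigma = mean(counts), stdev(counts)

for sender, count in volume.items():
    # Flag senders more than 3 standard deviations above the mean volume.
    # No single message looks suspicious; only the aggregate does.
    if sigma and (count - mu) / sigma > 3:
        print(f"Flagged {sender}: {count} messages (population mean {mu:.1f})")
```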

It’s easy to do in retrospect; it’s much harder to do in real time as abuse is coming in. In most cases, abuse-fighting techniques can reach the point where they catch 80-90% of all incoming abuse, and the bad guys treat this as a tax. They increase the quantity they send, which costs very little, and their attacks end up even bigger than before the intervention.
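The arithmetic behind that tax is simple. The numbers below are made up for illustration, but they show why a high catch rate alone doesn’t win:

```python
# Hypothetical numbers: a filter that catches 90% of abuse.
catch_rate = 0.90
baseline = 1_000_000               # messages the attacker sent pre-filter
delivered_before = baseline        # everything got through with no filter

scaled_up = baseline * 10          # sending 10x more costs the attacker little
delivered_after = scaled_up * (1 - catch_rate)

print(delivered_after == delivered_before)  # True: same reach despite the filter
```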

AI fills in a critical puzzle piece. But it isn’t enough. When abuse is detected, the criminals just try again. It’s an inherently losing battle.

When the criminal goes to jail, it becomes a lot harder to continue the attacks. But tech platforms don’t have this power. They can only block the attack and hope they recognize the attacker’s next attempt as well.

Court actions are effective when driven by AI from industry

When tech companies use AI to find cybercriminals, then use the courts to order action against the infrastructure behind the attack, it works.

In particular, malicious websites can be taken down by removing them from the centralized DNS services used by consumers around the world. And when armies of infected computers are working together to carry out attacks, courts can authorize taking control of the machines directing them.
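The DNS part of this is easy to picture: once a court-ordered takedown removes a domain from the DNS services consumers rely on, resolution simply fails and victims can no longer reach it. Here’s a minimal sketch (the domain below is a reserved placeholder, not a real takedown target):

```python
import socket

def resolves(domain: str) -> bool:
    """Return True if the domain still has a working DNS lookup."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

# After a takedown, this kind of check comes back False:
# the infrastructure may still physically exist, but it's unreachable by name.
print(resolves("malicious-c2.example"))
```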

These are limited cases, but they show that when AI and the law work together, it works.

This is still reactive and slow, and doesn’t solve the problem

These actions are reactive, in response to a crime being committed. They are generally pretty slow since they require the involvement of a court. But they’ve been effective at thwarting many attacks.

And it still doesn’t put the criminals in jail in most cases. These cases typically target John Doe, and action is taken against the computing infrastructure rather than the individuals responsible.

Still, it’s promising.

AI provides scale, law provides finality

Finding the source of a crime is best handled by AI in most cases, at least in fighting large-scale abuse on the Internet. AI can also help find instances of more targeted attacks, though the legal system does not scale as well as AI when problems occur this frequently.

The law enables actions that victims don’t have the ability to take on their own. Even when it’s just the criminal infrastructure that’s taken down, not the criminals themselves, that still prevents the infrastructure from being used in future attacks.

What’s really needed, though, is holding criminals accountable. This rarely happens today. But only the law can enable this.

There’s still much more that needs to be done

There are myriad problems to be solved in stopping cybercrime, which is currently a losing battle.

Countries that harbor cybercriminals, like Russia, China, and Brazil, need to be pressured until they either comply with law enforcement or lose their access to the Internet.

Intermediaries, like VPNs and platforms, need to log data so that governments can trace abuse back to its source and track down the criminals.
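As a concrete illustration, here is a hypothetical sketch of the minimal connection metadata an intermediary could retain; the field names and function are my own invention, not a proposed standard:

```python
import json
import time

def log_connection(src_ip: str, dst_host: str, account_id: str) -> str:
    """Serialize one connection record an intermediary might retain."""
    record = {
        "ts": time.time(),          # when the connection happened
        "src_ip": src_ip,           # where it came from
        "dst_host": dst_host,       # where it was going
        "account_id": account_id,   # the authenticated account, if any
    }
    return json.dumps(record)

# With records like this kept by each hop, investigators can follow
# an attack back through VPNs and platforms to its origin.
print(log_connection("203.0.113.7", "victim.example", "acct-42"))
```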

Efficient means need to be created to collaborate across national boundaries. Online abuse almost always crosses borders, and it can be made to cross many borders, with a different set of countries involved in each crime.

Conclusion

Artificial intelligence and the law are both failing to stop cybercrime today. People working in the tech industry tend to disregard any possible value a law could provide. People writing the laws can’t keep up with advances in artificial intelligence or develop a deep understanding of what it makes possible.

The only way to solve these problems is to get these two communities working together.
