Chad Mills


Common Concerns about AI

Have you heard these memes before?

“Not paying for Facebook? You’re not the customer; you’re the product.”

“Google collects troves of your private data and sells it to advertisers.”

People are concerned about big tech companies, and this often comes out in the form of memes like these. These memes are catchy but wrong. In both cases, there’s some truth combined with a misrepresentation.

Facebook is a multi-sided market with multiple customer segments. Users are the primary customer base, and the ones most of the company is focused on helping. Mark Zuckerberg famously spends little of his attention on ads, and every new feature starts with a user problem the team is trying to help people solve.

Google collects data to target ads, and algorithms figure out which ads a user is most likely to click on. Google shows the ads to these users without sharing any private user data with the advertiser. Only when a user chooses to click on an ad does an advertiser learn a particular person saw the ad; no personal emails or other private data changes hands.

These memes are common misconceptions that seem plausible because part of each claim is true. Facebook is free. Google collects information to target ads. But the scary part is wrong, and the people who work on these products know it, even if it seems plausible to those less familiar with the technology.

The latest technology caught up in this unfortunate saga is artificial intelligence, as well as the data collection and policy practices that surround it. Like these previous examples, the concerns I hear most about artificial intelligence are wrong. I’ll tell you why.

Data Collection

Tech companies collect massive amounts of personal data, which employees can misuse.

Employees at tech companies can look me up in their system and see all my personal data: what sites I browse, my address, who my friends are, and what we communicate about privately. They can stalk me.

Employees typically don’t have access to any data unless it’s necessary for their job, such as handling customer service requests.

When an engineer building an algorithm needs to access data, it's typically not individual records but millions of records in aggregate. And even then, these are generally not detailed profiles of users, but rather millions of examples of one particular thing, like millions of emails sent over the past week being used to train a spam filter.
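As a toy illustration of what "aggregate" access looks like, here is a minimal word-count spam filter trained purely on (text, label) pairs. The corpus, scoring rule, and function names are invented for illustration; real systems are far more sophisticated, but the engineer's view of the data is similar: labeled examples, not user profiles.

```python
from collections import Counter

# A tiny stand-in for "millions of emails in aggregate": the engineer
# sees (text, label) pairs, not who sent or received each message.
emails = [
    ("win money now", "spam"),
    ("cheap money fast", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow", "ham"),
]

# Count how often each word appears in spam vs. non-spam messages.
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in emails:
    word_counts[label].update(text.split())

def spam_score(text):
    """Score a message by comparing word frequencies in each class."""
    score = 0
    for word in text.split():
        score += word_counts["spam"][word] - word_counts["ham"][word]
    return score

def is_spam(text):
    return spam_score(text) > 0
```

The point of the sketch is the shape of the data: nothing in it identifies a person, and the model only learns which words tend to appear in which class.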

At the major tech companies, this data is carefully controlled and its use is logged and audited. At one tech company I worked at, misuse of customer data was one of only two things—along with violent or threatening behavior—that could get an employee immediately terminated.

This does not mean abuse of user data is impossible, but rather that it is minimized and taken very seriously at most companies. It is worth understanding the data sharing policies and practices of the companies you trust with your most sensitive data.

The most egregious privacy violation I’ve heard of happened at Uber, which has since reformed. This is a company that openly flouted local transportation laws in an attempt to grow, and whose culture was so toxic that it led to the departure of its founder and CEO.

It’s a good idea to pay attention to the values of a company you’re trusting with your data, because some trust is required. Fortunately, misuse of data is rare and large tech companies have enough to lose that they have to have strong data protections and privacy practices.

Control of the News People See

Social media products control the news I see, undermining democracy and distorting my view of reality.

The leaders of any social media company have particular political views they are on a mission to promote. They set policies and direct their employees to build systems and processes that support their perspective while suppressing alternative views.

Business leaders focus on building a great product that will attract many users. This approach typically means incorporating user feedback signals into artificial intelligence algorithms. Anything controversial, regardless of perspective, can get penalized simply because it draws vocal complaints.

In my time working on News Feed at Facebook, we had an entire engineering team solely focused on helping people be more in control of what posts show up in their feed. This team would work on ways to help people curate their feed to show the things they care about, overriding the algorithm, with features like See First, Snooze, and more experimental approaches.

The reality is that most users don’t want to spend the time it would take to control everything they see. And without the algorithms, what people see is much less relevant, making the product worthless.

Without algorithms, a user could log into Facebook and see the ten latest advertisements from the pizza place they liked, missing a friend’s birth announcement and a relative’s graduation. These algorithms are critical to helping people meaningfully connect with one another.
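To make that trade-off concrete, here is a hypothetical sketch contrasting a purely chronological feed with a simple relevance ranking. The posts, the "affinity" signal, and the weights are all invented for illustration, not Facebook's actual ranking features.

```python
# Invented example posts: age in hours, plus a made-up "affinity"
# signal for how much the viewer cares about the source.
posts = [
    {"text": "Pizza place ad #1", "age_hours": 1, "affinity": 0.1},
    {"text": "Friend's birth announcement", "age_hours": 9, "affinity": 0.9},
    {"text": "Relative's graduation photos", "age_hours": 6, "affinity": 0.8},
    {"text": "Pizza place ad #2", "age_hours": 2, "affinity": 0.1},
]

def chronological(posts):
    # Newest first: the pizza ads win purely on recency.
    return sorted(posts, key=lambda p: p["age_hours"])

def relevance_ranked(posts):
    # Trade recency off against how much the viewer cares about the
    # source, so the birth announcement rises to the top.
    return sorted(posts,
                  key=lambda p: p["affinity"] - 0.02 * p["age_hours"],
                  reverse=True)
```

Even this crude scoring function surfaces the birth announcement ahead of the ads, which is the whole argument for ranking over raw chronology.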

Finding the right balance of curation and control is hard, but the reality at large tech companies is much closer to building a product people want to use than trying to manipulate users.

The Singularity

Artificial intelligence will soon pass human intelligence and machines will take over the world.

Algorithms are getting better and better, even beating human-level performance in some narrow areas. As this improvement in AI continues, we will soon have machines smarter than humans. But we can’t trust such a machine, which will be capable of outsmarting and destroying us.

AI is getting better, but it only works on very narrowly defined tasks, or sets of closely related tasks, and it takes an incredible amount of effort to get it to solve one small problem. That can be worth it when the problem is encountered many times by millions of users, but this is far from a general intelligence capable of working across a wide variety of tasks.

Almost every successful production system is more than just an AI model. Most also include hand-crafted rules to solve a particular problem effectively. It will take major fundamental advances, not just iterative improvement, to get to a world with generally intelligent machines.
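Here is a sketch of what such a hybrid can look like: a stand-in "model" wrapped in hand-crafted guardrail rules. The model, the sender list, and the threshold are all invented for illustration; the point is only the structure, with rules constraining what the model is allowed to decide.

```python
def model_score(text):
    # Stand-in for a trained classifier: fraction of words that look
    # suspicious. A real model would be learned from data.
    suspicious = {"win", "money", "free"}
    words = text.lower().split()
    return sum(w in suspicious for w in words) / max(len(words), 1)

# Hand-crafted rule data: senders the user has explicitly trusted.
TRUSTED_SENDERS = {"newsletter@localtoystore.example"}

def classify(sender, text):
    # Rule: never filter mail from explicitly trusted senders,
    # no matter what the model says.
    if sender in TRUSTED_SENDERS:
        return "ham"
    # Otherwise, defer to the model's score.
    return "spam" if model_score(text) >= 0.5 else "ham"
```

The rules here are doing exactly what the paragraph describes: keeping the human-specified constraints in charge while the model handles the fuzzy middle.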

If our current systems are any indication, humans will remain in control, with systems that constrain the algorithm playing an essential role in making it work.

Purpose of this blog

Artificial intelligence is slowly transitioning from science fiction to reality, making our daily lives better. As with any change, there are many concerns. Unfortunately, many of these concerns are based on misconceptions about the nature of the technology and its capabilities.

With much of the innovation in AI happening at large tech companies, which most people don’t have visibility into, the situation can seem much more ominous than it actually is.

The purpose of this blog is to make a deeper understanding of the capabilities of AI and the way it’s used more accessible, and to use this shared understanding to facilitate a less sensational discussion about the societal implications of this new technology.

I’ll share my learnings from building AI-based systems for billions of users over the past fifteen years at companies like Facebook and Microsoft. This includes addressing real challenges, like model bias and user privacy. But overall, I hope this added context will give people interested in this technology the confidence to embrace AI in their personal lives and help bring us further into an AI-driven world at work.

Artificial intelligence is making the world a better place

AI still has a long way to go before it lives up to the standard set in sci-fi movies, but we do already have algorithms that improve our daily lives. Driven particularly by advances in machine learning over the past decade, AI is everywhere.

Google enables us to find any information we need instantly. Facebook helps us maintain meaningful relationships with an extensive network of people. Our cell phones predict traffic and remind us when to leave for an appointment.

AI helps us with targeted recommendations of a wide range of music, shows, and other products online that fulfill our particular needs and interests without even needing to search for them. It powers personal assistants that answer questions, turn on lights, and book dinner reservations.

The growth of people trained in AI has enabled these advances to start going beyond the tech sector. Media companies like Disney have launched streaming services incorporating AI technologies, which was previously possible only for tech companies like Netflix.

Driver-assist technology on newer cars keeps us safe and helps us park. Machine learning has transformed marketing, financial investing, dating, job searches, and customer support. Businesses of all types are continually finding new ways to improve with AI.

The widespread concern about AI

Despite the massive improvements in our lives brought about by AI, many find the direction technology is taking us to be concerning. Some critics are completely against the technology, but more often people embrace the incremental improvements brought by AI in their everyday lives while the concerns bubble into political convictions or negative opinions of tech companies.

At the beginning of this post I gave several examples of the concerns people have: data collection practices that power AI, algorithms manipulating the news people see, and an impending Singularity that could destroy life on Earth.

People have a broad set of concerns about AI, and these examples only scratch the surface.

The most insidious concerns, however, are those that are absorbed unconsciously from the culture by people working excitedly to bring the world into an AI-powered future. These people tend to have complete confidence that the products they’re building or the engineers they’re working with are making the world a better place.

But when they hear concerns from friends and relatives, or read articles about nefarious activities of some other company, it seems plausible. This can sap their motivation or make them less excited to share the work they love with those they care about.

More understanding will help to bridge this gap

Part of the concern about artificial intelligence comes from how unfamiliar it is. To people not actively building AI technology, it can seem almost supernatural. Even claims that AI will surpass human intelligence can seem entirely plausible from this mindset.

To a machine learning engineer struggling to get a spam filter to stop mistakenly filtering out newsletters from a local toy store, or a researcher trying unsuccessfully to detect humor in product reviews, the idea that AI is about to take over the world seems bizarre.

By building a basic understanding of how this technology works, what it’s capable of, and what its limits are, it’s possible to bridge this gap.

AI can do amazing things. Like an airplane that is capable of flight even without most of the passengers understanding how, AI can be a trustworthy ally in helping us accomplish our goals much more efficiently than would otherwise be possible.

As I work to make the capabilities of AI more widely understood and explore the societal implications of this topic, I’ll also share some insight into how AI is applied in industry.

This will also expose some rare areas where concerns are well-founded. For example, while major tech companies have strong privacy policies and data-handling practices, this happens behind closed doors and trust is required. Not every company will live up to a high standard in this area. Companies need to give visibility into how data is used and protected to earn this trust.

That said, the vast majority of concerns around AI should be resolved with more visibility into how it works. I hope to set aside many of the fears that people have about this technology and to do so while retaining the objectivity necessary to discuss meaningful challenges that the adoption of artificial intelligence poses, including topics such as cybercrime, data privacy, and public policy.

For those excited about bringing us into an AI-powered world, or those who just want to understand that world better, I’ll be sharing my thoughts here each week.

About the author


Chad currently leads applied research, ML engineering, and computational linguistics teams at Grammarly.

He's previously led ML and data science teams at companies large and small, including working on News Feed at Facebook and on Windows and Outlook at Microsoft.
