Chad Mills

How humans learn (and computers don’t)

I generally write about artificial intelligence: how machines learn. Many of the most interesting observations in this area come from comparing artificial intelligence to human intelligence. Here I’ll examine how humans learn, before drawing some high-level analogies to how AI learns, which I’ll explore further in future posts.

Reality & perception

Underlying all learning is something obvious: there’s a world out there to learn about. Our five senses are how we gather information about it; they are the basic data-collection tools our minds learn from.

On the day my daughter was born, she started out in a very insulated environment, with no light to see and no air to smell. After her birth, she suddenly had many opportunities to take in rich information about the world by opening her eyes or using any of her other senses.

It was a lot to take in, and too much to process after such a sheltered beginning. She would rarely open her eyes and would sleep most of the day. Without yet being able to focus her eyes or organize the massive amount of information available, she learned most with her mouth.

She had a built-in sucking reflex that enabled her to eat, and this was the most important thing for her survival, so this occupied much of her attention. But she wasn’t just using her mouth for taste: she used her sense of touch to feel the shape and texture of different objects she would put in her mouth.

Eventually, she learned how to focus her eyes and take full advantage of her senses. But she started off overwhelmed by all the available data, using a small subset of it to help her start making sense of the world.

At this level, she learned that matter in the world is often packaged into objects. She quickly learned to identify particular individuals, such as recognizing her mother by smell and the sound of her voice. But these were still individuals and not yet concepts, which is the next step.

Concepts
While identifying individuals is nice, concepts are even more powerful. By seeing patterns in the individual entities my daughter learned, she was able to form concepts like ball or diaper which represent countless similar objects.

Concepts are great! Every time my daughter encounters a new ball, she no longer needs to discover that it rolls. When she meets a new person, she knows that person can probably talk, move around, and much more.

It’s possible to generalize knowledge like this to apply to anything falling under a concept. This saves massive amounts of time and frees up mental capacity for learning fundamentally new things.

My daughter was able to understand concepts long before she was able to speak. I know this in part because she could understand what I was telling her well before she could produce the sounds.

Even when she was only a few months old, I would say out loud what was about to happen before changing her diaper. In response, she would visibly relax her muscles, ready for me to start moving her about.

Initially, all concepts she formed represented simple objects in the world, called basic-level categories or first-level concepts.

Later, she learned to form more abstract concepts, both going broader (from table and couch to furniture) and narrower (from table to wood, brown, and heavy).

Propositions
Propositions involve combinations of concepts, and are the basic unit of thought. For example, “dogs can bark” is a proposition.

I mentioned that generalizing knowledge across all members of a concept is one of the great advantages of forming concepts. If I know all dogs bark, I don’t need to learn whether each new dog I encounter can bark.

Propositions are the way we represent this knowledge, and since they are compositions of concepts these two ideas are inextricably linked.

While my daughter learning to say words was exciting, when she learned how to put them together into sentences—propositions—we were finally able to communicate beyond her pointing out things she wants.

Logic
Propositions are combinations of concepts, but propositions themselves can also be combined.

A simple way to do so is to combine a series of propositions into a story. However, there’s a particularly special way of combining propositions that is relevant to learning: logic.

If “all men are mortal” and “Socrates is a man,” then we can conclude “Socrates is mortal.” The conclusion can be a new proposition you didn’t know before, so logic can meaningfully lead to learning.

Logic is a set of rules for combining propositions in a way that guarantees the combination will be true if each component is true.

Logic can make strong claims about whether a particular line of reasoning is correct or incorrect. Note, however, that logic does not have any way of validating that the underlying propositions are true. It can merely show that if the underlying propositions are true, the conclusion necessarily follows.
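The Socrates syllogism above can be sketched in code. This is a minimal, illustrative toy, not any real reasoning library: facts are tuples, and the one rule encodes “all men are mortal.” Repeatedly applying the rules until nothing new appears is the classic forward-chaining idea.

```python
# Toy forward-chaining inference over the syllogism in the text.
# The representation (facts as tuples, rules as functions) is illustrative.

facts = {("man", "Socrates")}  # "Socrates is a man"
rules = [
    # "All men are mortal": if X is a man, then X is mortal.
    lambda fs: {("mortal", x) for (kind, x) in fs if kind == "man"},
]

# Apply every rule until no new facts appear.
changed = True
while changed:
    new = set().union(*(rule(facts) for rule in rules)) - facts
    changed = bool(new)
    facts |= new

print(("mortal", "Socrates") in facts)  # prints True: the conclusion follows
```

Note that the code never checks whether its premises are true; it only guarantees the conclusion given those premises, exactly as described above.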

It can also demonstrate, in some circumstances, that something is false, but only when the premises are shown to contradict one another.

Logic, then, is a valuable process but also very limited. It is not the primary means of gaining knowledge, but it can be enormously valuable in serving in an auditing role, spotting errors to correct as well as new connections between disparate pieces of knowledge.
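Logic’s auditing role can be sketched just as simply. In this toy example (the propositions and their string encoding are purely illustrative), a contradiction is flagged whenever a proposition and its negation both appear in a set of beliefs:

```python
# Toy contradiction check: flag any proposition whose negation also appears.
# Negation is marked with a "not " prefix; this encoding is illustrative only.

beliefs = {"Socrates is mortal", "not Socrates is mortal", "Socrates is a man"}

def contradictions(props):
    """Return the propositions that appear alongside their own negation."""
    return {p for p in props if not p.startswith("not ") and "not " + p in props}

print(contradictions(beliefs))  # prints {'Socrates is mortal'}
```

The check finds the error but, as with inference, says nothing about which of the two conflicting beliefs is the true one.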

A high-level comparison to artificial intelligence

Artificial intelligence functions fundamentally differently from how humans learn.

The most relevant similarity is that both operate on input data that is treated as a given. In the case of human perception, the input data reflects real information from the world; for an artificial intelligence algorithm, getting accurate and representative data is one of the hardest challenges (see Privacy or Algorithm Bias? Pick One.).

Human learning is internally motivated, starting with an infant’s need to communicate with parents about food or a diaper change. Humans are self-driven and self-motivated, with clear objectives such as surviving and achieving happiness. Computers, on the other hand, only get external guidance on goals, and often only once, when the algorithm is created.

Computers do not think in concepts, though classification algorithms can be used to try to identify an instance as falling under a concept if a human decides it’s important for a machine to do so (e.g. is this email spam?).
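The spam example can be made concrete with a deliberately tiny sketch. The labeled examples are made up, and the “classifier” is just word-count scoring rather than a real learning algorithm, but it shows the key point: the machine’s “concept” of spam comes entirely from one small labeled dataset, disconnected from everything else.

```python
from collections import Counter

# Hypothetical labeled examples: the machine's entire "concept" of spam.
examples = [
    ("win money now", "spam"),
    ("cheap pills win big", "spam"),
    ("lunch meeting tomorrow", "ham"),
    ("project status update", "ham"),
]

# Count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in examples:
    word_counts[label].update(text.split())

def classify(text):
    """Label new text by which class its words were seen with more often."""
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("win free money"))  # prints "spam"
```

Swap in a different dataset and the same code learns a completely unrelated concept; nothing carries over, which is exactly the disconnection described below.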

But even when computers learn concepts, it isn’t an all-inclusive way of thinking. Each concept requires a massive amount of work, and it generally depends on a specific dataset disconnected from every other concept the computer is taught. This means computer-learned concepts are fundamentally different and disconnected in a way concepts in a human mind are not.

What computers can do exceedingly well is logic. Without regard to whether the information involved is true or false, computers can be straightforwardly programmed to implement the rules of logic, checking for errors or inferring additional conclusions. Unfortunately, this is disconnected from perception. It can be harnessed for useful ends if a human points the logic engine at the right input data and problems, but again this is very different from the auditing role logic serves for a human.


Human learning starts with data from sense perception, and it is a self-driven, goal-oriented process. It works by grouping the phenomena encountered into concepts, building propositions from those concepts, and using logic as an auditing tool to spot mistakes made along the way to arriving at those propositions.

Computers, on the other hand, start with data that may or may not accurately represent the world, and—driven by a human setting the goals and methods—learn narrow patterns to help a human achieve some goal. These are disconnected from other patterns learned and don’t tie back to reality in a consistent and integrated manner.

Even though logic can be implemented flawlessly on a computer, this functions as one more pattern recognition algorithm on data detached from reality rather than an auditing tool over a comprehensive, reality-oriented approach to learning.

There is much more to say here, so stay tuned for future deep-dives into a variety of the topics raised here.

About the author


Chad currently leads applied research, ML engineering, and computational linguistics teams at Grammarly.

He's previously led ML and data science teams at companies large and small, including working on News Feed at Facebook and on Windows and Outlook at Microsoft.
