
The “input data” humans learn from

Human intelligence is much more advanced than artificial intelligence, but each has its own unique strengths and weaknesses. One question that’s often overlooked is: which has better input data to work with?

This may seem like a strange question, but it’s an important one. One of the core principles AI practitioners understand is the value of having lots of high-quality data to work with. This is often far more important than the specific algorithm used.
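To make that concrete, here’s a minimal sketch of the idea in Python with numpy and scikit-learn (my choice of tools here; the synthetic dataset and the 40% label-noise rate are purely illustrative assumptions). A simple model trained on clean labels beats a fancier model trained on corrupted labels for the same task:

```python
# Illustrative only: a simple model with clean labels vs. a fancier
# model with noisy labels, on the same synthetic task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the true underlying pattern

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate low-quality data by flipping 40% of the training labels.
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.4
noisy[flip] = 1 - noisy[flip]

simple_clean = LogisticRegression().fit(X_train, y_train)
fancy_noisy = RandomForestClassifier(random_state=0).fit(X_train, noisy)

print("simple model, clean data: ", simple_clean.score(X_test, y_test))
print("fancier model, noisy data:", fancy_noisy.score(X_test, y_test))
```

The specific models don’t matter; the point of this toy setup is that data quality dominates the outcome.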

Normally, people working in the field compare the quality of data across different AI systems. We can extend that same comparison to human intelligence, which is where we’ll begin.

Consciousness starts with perception

Humans are conscious. We have brains and we think. Awareness of the external world is the basis for all knowledge. Learning happens when we have awareness of the world and notice patterns in it.

There are more complicated cases. We can also direct that same awareness at people or books to learn patterns based on others’ observations. And once we’ve learned multiple related things, we can use reason to identify new connections between them and draw new conclusions.

In any of these cases, the root of all the knowledge we learn is our perception of the world. What we learn may be derivative, whether from other people’s perception or our own, but perception is the base.

Perception is advanced

Our senses seem magical. Let’s consider vision as an illustrative example.

We don’t have awareness of each individual particle of light that hits our eyes. Nor is the object of our visual awareness just an image of the world around us, not even a three-dimensional one. Only a small portion of what we see is in focus at any moment. Above all, we see objects.

Mentally, we don’t just see scattered bits of light and then consciously choose to group together all the ones associated with a chair. We’re aware of it as an object. We can choose to focus on parts of that entity, looking at the details, but we know it’s part of that object. And as we move, we perceive the same object even as it changes shape and size.

There are some cases where what we’re looking at doesn’t include objects. This could be looking through a kaleidoscope or at swirls of ink in water. In these cases, we come closest to the level of sensation. Even here, we still see a connected swirl of color or texture as one thing, or we tend toward noticing patterns in it.

Our conscious minds can’t process more than a small amount of information at a time. We don’t think about each flash of light hitting our retinas and then consciously compose them together. Our brain and subconscious do a great deal of work to make the data we work with easy to understand and use.
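As a rough computational analogy (an analogy only, not a claim about how vision actually works), grouping scattered stimuli into discrete objects is similar in spirit to connected-component labeling in image processing. In this sketch, scipy.ndimage assigns each connected blob of bright pixels a single object id:

```python
import numpy as np
from scipy import ndimage

# A tiny binary "image": 1s are bright pixels, 0s are background.
image = np.array([
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
])

# Group touching pixels into connected components ("objects").
labels, num_objects = ndimage.label(image)
print(num_objects)  # 2: the scattered pixels resolve into two objects
print(labels)       # each pixel tagged with the object it belongs to
```

Human perception does something vastly more sophisticated, but the flavor is the same: what reaches awareness is objects, not pixels.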

Humans learn from perception

When we learn knowledge, that happens in our conscious minds. We conceptualize our understanding of the objects, actions, attributes, and relationships in the world. We generalize about the patterns we find.

The input to this process is data from perception. Recall that perception is advanced: far more than raw sensory stimulus, it groups individual stimuli together, combining many different light rays into the perception of a chair as a single object.

Let’s take a simple example: learning that balls roll downhill. Learning this already requires knowing about balls, rolling, and sloped ground. These are all basic concepts that are themselves learned directly from perception.

To learn about balls rolling downhill, you watch different balls rolling and notice how they behave on flat surfaces compared to slanted ones. On flat surfaces, the balls slow down and stop rolling. On slanted surfaces, they speed up and don’t stop until they hit an object blocking their way or until the surface levels off.

By seeing examples of this with different balls and slanted surfaces, you come to the conclusion that placing a ball on a slanted surface will lead to it rolling.

Notice that none of what we’re learning here comes from analyzing individual photons of light, even though we’re using our eyes to see the ball rolling.

It’s this high-level perception of objects, actions, attributes, and relationships that forms the input data we use when we learn.
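Here’s a hedged sketch of that generalization step. The observation format is invented for illustration, but it captures the key point: the learner starts from structured records of objects and attributes, not from raw sensory data:

```python
# Each observation is already high-level: objects and attributes,
# not photons. The record format here is made up for illustration.
observations = [
    {"ball": "tennis ball", "surface": "flat",    "kept_rolling": False},
    {"ball": "marble",      "surface": "flat",    "kept_rolling": False},
    {"ball": "tennis ball", "surface": "slanted", "kept_rolling": True},
    {"ball": "soccer ball", "surface": "slanted", "kept_rolling": True},
    {"ball": "marble",      "surface": "slanted", "kept_rolling": True},
]

def induce_rule(obs, feature, outcome):
    """For each value of `feature`, check whether `outcome` held every time."""
    values = {o[feature] for o in obs}
    return {v: all(o[outcome] for o in obs if o[feature] == v) for v in values}

print(induce_rule(observations, "surface", "kept_rolling"))
# {'slanted': True, 'flat': False}: balls on slanted surfaces keep rolling
```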

Things we learn become automatized

When we’re first learning something, it’s not usually obvious: it takes testing our hypothesis on new scenarios and paying attention before we really understand it. But after we’ve had experience with what we’ve learned in many contexts, it gets much easier.

This is more straightforward for procedural learning, such as typing on a keyboard. We initially have to think about where each key is and consciously choose to press it. Once we’ve learned where all the keys are and how to place and move our fingers to press them, it becomes much easier.

We don’t have to think about where each key is. We can think at the word level about what we want to type, and we’re able to type much more quickly and effectively.

This is true not just for physical actions we learn, but also for knowledge. When you first learn how to add two and three, you might need to count on your fingers or use some other objects to work your way to the result. After mental practice, it becomes second nature.

This is not just a useful skill, but something that raises the base we learn from. We take into account not just what we perceive in the world, but also the conclusions we’ve learned to draw automatically over the course of our lives.
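As a loose programming analogy (again, an analogy rather than a claim about cognition), automatization resembles memoization: the first computation walks through every step, and afterward the result is recalled instantly.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def add(a, b):
    # The slow, deliberate procedure: like counting on your fingers.
    result = a
    for _ in range(b):
        result += 1
    return result

add(2, 3)  # first call: steps through the whole procedure
add(2, 3)  # later calls: instant cached recall, "second nature"
```

The cached result then becomes an input to further thinking, just as automatized conclusions become part of the base we learn from.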

Conclusion

When humans learn, they start with rich, structured data about the world. This shouldn’t be shocking, but it is something most people don’t think about.

Perception is the input to consciousness, which is where learning happens.

In a future post, I’ll turn to the input data computers use. Comparing the two will then help with understanding the capabilities and limitations of AI, including the places where computers have unique advantages and where humans do much better.
