You Can Learn Anything with These 10 Mental Models

Updated on October 1, 2022, at 17:15, by Michael Davis

A mental model is an idea that can be used to explain a wide range of things. Once you know where to look, you can find models like supply and demand in economics, natural selection in biology, recursion in computer science, and proof by induction in mathematics. In the same way that knowing about supply and demand helps you solve economics problems, knowing about mental models of learning will help you solve learning problems. Learning isn't usually taught as a separate class, so most of these mental models are only known to experts. In this essay, I'd like to talk about the ten that have had the most impact on me. If you want to learn more, you can check out the references.

To solve a problem, one must search

With their landmark book, Human Problem Solving, Herbert Simon and Allen Newell started the study of how people solve problems. In it, they said that people search through a problem space to find solutions. A problem space is like a maze: you know where you are and would know when you've reached the end, but you don't know how to get there. Along the way, the walls of the maze make it hard to move around.

Problem spaces can also be abstract. To solve a Rubik's cube, for example, you have to move through a large number of possible configurations. The scrambled cube is the starting point, the cube with each colour on a separate side is the end goal, and the legal twists and turns define the "walls" of the problem space. Most real-life problems aren't as well-defined as mazes or Rubik's cubes: the start point, the end point, and the exact moves aren't always clear. But "searching through a space of possibilities" is still a good description of what people do when they don't already know the answer or a method for finding it. One thing this model shows is that most problems are hard to solve without prior knowledge. A Rubik's cube can be arranged in more than 43 quintillion ways, far too many to search through blindly. Learning is the process of acquiring patterns and methods that cut down on how much you have to search.
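Searching a problem space can be sketched in code. Below is a minimal breadth-first search over a toy puzzle invented for illustration (the puzzle, the move names, and the `solve` function are all assumptions, not from the source): states are numbers, the moves are "+3" and "*2", and the search fans out from the start state until it reaches the goal.

```python
from collections import deque

def solve(start, goal, moves):
    """Breadth-first search: explore the problem space layer by layer,
    tracking the sequence of moves that led to each state."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, move in moves.items():
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # goal unreachable from start

# A toy "puzzle": start at 1, reach 10 using only +3 and *2.
moves = {"+3": lambda s: s + 3, "*2": lambda s: s * 2}
print(solve(1, 10, moves))  # -> ['+3', '+3', '+3']
```

With no prior knowledge, the search has to fan out over every reachable state; a learned pattern or heuristic would prune most of that work, which is exactly the saving the paragraph above describes.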

Memory is strengthened via use

Retrieving information from memory strengthens it more than simply seeing it again does. Testing your knowledge isn't just a way to measure how much you know; it actually improves your memory. In fact, researchers have found that testing is one of the best ways to learn.

How does retrieval help? One explanation is that the brain saves energy by only keeping memories that are likely to come in handy. If the answer is always close at hand, there's no need to remember it. Difficulty retrieving something, on the other hand, is a strong signal that you need to remember it. Of course, retrieval only works when there is something to retrieve, which is why we need books, teachers, and classes. When we can't recall an answer, we fall back on problem-solving search, which, depending on the size of the problem space, may never find the right answer at all. But once we've seen the answer, we learn more by retrieving it than by reviewing it over and over.

Knowledge grows faster and faster

How much you can learn depends on how much you know already. Research shows that how much you remember from a text depends on how much you already know about the topic. In some situations, this effect can even be more important than general intelligence.

As you learn more, you integrate new material with what you already know. This integration gives you more hooks for recalling the information later. But if you know little about a subject, you have less to attach new information to, which makes it easier to forget. Once a foundation is set, it becomes much easier to learn new things, the way a crystal grows from a seed. This process has limits, or knowledge would compound forever. Still, it's worth remembering, because the early stages of learning are often the hardest and can give a misleading impression of how difficult a field will be later on.

Creativity consists primarily of imitation

Creativity is one of the most commonly misunderstood aspects of learning. Creative people are often seen as almost magical, but in reality creativity is much more ordinary.

In an excellent review of important inventions, Matt Ridley argues that innovation comes from an evolutionary process. Rather than arriving in the world fully formed, new inventions are mostly the result of small variations on old ideas. When those ideas work, they spread and fill a new niche.

Simultaneous invention is evidence for this view. Throughout history, many people have independently come up with the same idea, which suggests those ideas were "nearby" in the space of possibilities just before they were found. Even in fine art, the importance of copying is underappreciated. Yes, many artistic revolutions were explicit rejections of what came before, but almost all of the revolutionaries were deeply trained in the traditions they sought to overthrow. To rebel against a convention, you have to know what that convention is.

Skills are specific

Transfer is when you get better at one task because you practised or trained on a different one. When researchers study transfer, they typically find the same pattern:

When you do something often, you get better at it.
Doing a task over and over again helps with similar tasks (usually ones that overlap in procedures or knowledge).
Practicing one task doesn't help much with other tasks, even if they seem to require the same general skills, like "memory," "critical thinking," or "intelligence."

Accurate predictions about transfer are hard to make, because they depend on knowing how the human mind works and how all knowledge is organised. But John Anderson has found that, in more limited domains, productions, which are IF-THEN rules that operate on knowledge, are a fairly good match for the amount of transfer observed in intellectual skills.

Even though individual skills are specific, having a wide range of them adds up to broad ability. Learning a word in a foreign language, for instance, only helps when you use or hear that word, but knowing many words lets you say a great many things. In the same way, knowing one idea may not matter much, while knowing many ideas can be powerful. Every year of schooling adds 1–5 points to your IQ, in part because what school teaches overlaps with what you need in real life (and on intelligence tests). There are no shortcuts to becoming smarter; you'll have to learn a lot. But the reverse also holds: if you've learned a lot, you're smarter than you might think.
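The idea of productions can be sketched as data: IF-THEN pairs that fire against a working state. Everything below is a hypothetical illustration (the rule names, the toy addition task, and the `run_productions` loop are invented), not Anderson's ACT-R itself; the point is that two tasks sharing a rule, like the carry rule here, is the kind of overlap that produces transfer.

```python
def run_productions(state, productions, max_steps=20):
    """Repeatedly fire the first rule whose IF-condition matches the
    current state, until no rule matches."""
    for _ in range(max_steps):
        for condition, action in productions:
            if condition(state):
                state = action(state)
                break
        else:
            return state  # no rule matched: done
    return state

# IF the running sum overflows a digit, THEN carry.
carry_rule = (
    lambda s: s["sum"] >= 10,
    lambda s: {**s, "sum": s["sum"] - 10, "carry": s["carry"] + 1},
)
# IF digits remain, THEN add the next one to the running sum.
add_rule = (
    lambda s: bool(s["digits"]),
    lambda s: {**s, "digits": s["digits"][:-1], "sum": s["sum"] + s["digits"][-1]},
)

result = run_productions({"digits": (7, 8), "sum": 0, "carry": 0},
                         [carry_rule, add_rule])
print(result)  # -> {'digits': (), 'sum': 5, 'carry': 1}, i.e. 7 + 8 = 15
```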

Mental bandwidth is exceedingly constrained

Only a few things can be in our minds at once. George Miller thought at first that the number would be seven, plus or minus two. But newer research suggests that it's more like four things. This very small space is the bottleneck through which all learning, ideas, memories, and experiences must flow if they are to stick with us for a long time. Subliminal learning doesn't work. You can't learn if you aren't paying attention. The best way to learn faster is to make sure that the things that get through the bottleneck are useful. Using bandwidth on things that don't matter may slow us down.

Since the 1980s, cognitive load theory has explained how different instructional designs help or hinder learning, given our limited mental bandwidth. This research shows:

For beginners, solving problems can be counterproductive; novices learn better from studying worked examples.
Materials should be designed so that you don't have to flip between pages or parts of a diagram to understand them.
Redundant information impedes learning.
Complicated ideas are easier to grasp when they are introduced in parts first.


Success is the greatest instructor

We learn more from what works than from what doesn't. This is because most problem spaces are big, and most solutions are wrong. When you know what works, you can eliminate a lot of options, but when you fail, you only learn that one strategy doesn't work.

A good rule of thumb is to aim for about 85% accuracy when learning. You can get there by adjusting how hard your practice is (open vs. closed book, with vs. without a tutor, simple vs. complex problems) or by seeking more training and help whenever you fall below this threshold. If you're consistently doing better than this, you're probably not seeking out hard enough problems and are rehearsing routines rather than learning new skills.
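The 85% heuristic can be sketched as a simple feedback loop. The function name, step size, and window below are illustrative choices of mine, not anything prescribed by the research:

```python
def adjust_difficulty(level, recent_results, target=0.85, step=1):
    """Nudge difficulty up when recent accuracy beats the target,
    down when it falls short. recent_results is a list of 1s (right)
    and 0s (wrong)."""
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy > target:
        return level + step          # too easy: seek harder problems
    if accuracy < target:
        return max(1, level - step)  # too hard: get more support
    return level

print(adjust_difficulty(3, [1, 1, 1, 1, 1]))  # 100% right -> level 4
print(adjust_difficulty(3, [1, 0, 0, 1, 0]))  # 40% right -> level 2
```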

Reasoning occurs through examples

How people manage to reason logically has long been a puzzle. Since Kant, philosophers have argued that logic can't be learned from experience alone: we must somehow already possess its rules, or an illogical mind could never have derived them. But if that's true, why are we so bad at the kinds of problems logicians devise?

Philip Johnson-Laird proposed a solution in 1983: we reason by constructing a mental model of the situation. To test a syllogism like "All men are mortal. Socrates is a man. Therefore, Socrates is mortal," we imagine a group of men, all of whom are mortal, and then imagine that Socrates is one of them. From this we can see that the syllogism holds. Johnson-Laird argued that this model-based reasoning also explains why we often fail at logic: we struggle most with statements that require us to examine more than one model, and the more models we have to construct and check, the more likely we are to make mistakes.
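Johnson-Laird's idea can be illustrated with a toy "mental model": represent the imagined individuals explicitly, then check the premises and conclusion against them. The entity names and the `all_are` helper are invented for the example, a sketch of the idea rather than his actual theory.

```python
def all_are(model, x, y):
    """Check 'All x are y' by scanning every individual in the imagined model."""
    return all(y in props for props in model.values() if x in props)

# One imagined model: a group of mortal men, with Socrates placed among them.
model = {
    "man 1": {"man", "mortal"},
    "man 2": {"man", "mortal"},
    "Socrates": {"man", "mortal"},
}

print(all_are(model, "man", "mortal"))  # the premise holds in this model
print("mortal" in model["Socrates"])    # and so does the conclusion
```

Because the check only inspects the one model we happened to build, arguments that demand several distinct models require building and scanning each in turn, which is where, on this account, our errors creep in.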

Daniel Kahneman and Amos Tversky's research in this area shows that this "example-based reasoning" can lead us to mistake the ease with which we remember examples for the likelihood of an event or pattern.

Experience makes knowledge less obvious

Through practice, skills become increasingly automatic. This makes us less aware of the skill, which means we no longer need to spend as much of our limited working memory on it. Think about learning to drive a car: at first, even using the turn signals and brakes took conscious effort. After years of driving, you barely think about it.

But automaticity has downsides. One is that it becomes harder to teach the skill to someone else. When knowledge is tacit, it's harder to articulate how you decide what to do. Experts often underestimate the importance of "basic" skills because those skills were automated so long ago that they no longer seem to play much of a role in daily decisions.

Another downside is that automated skills are harder to control with your mind. When you keep doing something the same way you've always done it, even if it's no longer the best way, your progress can stop moving forward. Seeking out harder problems is important because they get you out of your comfort zone and force you to think of better solutions.

Relearning occurs pretty quickly

How many of us, years after leaving school, could still pass the final exams we needed to graduate? Quizzed on classroom material, many adults are embarrassed by how little they remember.

Every skill we don't use regularly is eventually forgotten. Hermann Ebbinghaus found that memory decays at an exponential rate: steeply at first, then more slowly as time goes on.

Still, there is a silver lining: relearning is usually much faster than learning something for the first time.

Some of this can be understood as a threshold effect. Imagine memory strength ranges from 0 to 100, and below a certain level, say 35, a memory can't be accessed. If a memory's strength drops from 36 to 34, you forget what you knew. Yet even a small boost from relearning is enough to make that memory retrievable again, whereas building a new memory from scratch would take far more work.

Connectionist models, inspired by the brain's neural networks, offer another piece of evidence for fast relearning. In these models, it may take hundreds of iterations for a network to reach the optimal point. If you then "jiggle" the connections, the network forgets the right answer and responds as if at random. But, consistent with the threshold explanation above, it relearns the optimal response much faster the second time around.

Relearning is frustrating, especially when you struggle with things that used to be easy. But that's no reason to avoid learning deeply and widely, since even forgotten knowledge can be revived far faster than it can be built from scratch.
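The threshold picture can be sketched numerically, pairing Ebbinghaus-style exponential decay with a retrieval cutoff. All the numbers below (initial strength, decay rate, threshold, size of the relearning boost) are illustrative choices, not empirical estimates.

```python
import math

THRESHOLD = 35  # illustrative cutoff below which a memory is inaccessible

def strength(initial, days, decay=0.1):
    """Ebbinghaus-style exponential decay of memory strength over time."""
    return initial * math.exp(-decay * days)

def recallable(s):
    """A memory can be retrieved only if its strength clears the threshold."""
    return s >= THRESHOLD

s = strength(100, 12)      # decays to about 30, just below the cutoff
print(recallable(s))       # False: the memory is "forgotten"
print(recallable(s + 10))  # True: a small relearning boost restores access
```

A brand-new memory would have to climb from 0 all the way past the threshold, while the "forgotten" one needed only a small nudge, which is the asymmetry the paragraph above describes.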
