
Confidence: what math mistake did Musk make?

In 2016, a Tesla driver died in a crash while using the car's assisted-driving feature — the first fatality of its kind.

By eberhard · Published 2 years ago · 5 min read

Right away the media began to question the company's technology, and public opinion mostly held Tesla responsible.

To defuse the PR crisis, Tesla, the company providing the assisted-driving technology, first explained that responsibility for the fatality lay mainly with the driver, who had not kept his hands on the steering wheel for an extended period.

But the media countered: since Tesla provides an assisted-driving function, and the driver crashed while using it, the technology must be flawed.

Tesla's CEO Musk then responded that Tesla's fatal crash occurred after 130 million miles of Autopilot use, while in the U.S. overall there is, on average, one fatal accident for every 93 million miles driven. Therefore, Tesla's accident rate is below average.

When this statement came out, it drew ridicule from scientists, who quipped that Musk must not have learned his math well, because he showed no grasp of the statistical concept of confidence.

Confidence is what I am going to teach you in this lesson; it helps you measure whether a piece of information is reliable. We often talk about learning lessons, but most people do not learn them properly and end up making the same mistake Musk did.

So where did Musk go wrong? A fatal car accident is a random event: you never know when the next one will happen. Only when the statistical sample is large enough does it make sense to conclude from the results that one kind of car is safer than another.

Otherwise, by Musk's logic, if Tesla had another crash soon afterward, wouldn't its accident rate double in one stroke? Would we then say the technology is not good enough, or just that Tesla got unlucky?
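To see why a single fatality is far too small a sample, we can put a 95% confidence interval around Tesla's observed crash rate. The mileage figures are the ones cited above (one fatality in 130 million Autopilot miles, versus one per 93 million miles for U.S. driving overall); the exact-Poisson interval construction is a standard textbook technique, sketched here in Python rather than taken from anything Tesla published.

```python
import math

def poisson_ci_one_event(conf=0.95):
    """Exact confidence interval for a Poisson mean when exactly k=1 event is seen.

    Lower bound L solves P(Poisson(L) >= 1) = alpha/2, i.e. exp(-L) = 1 - alpha/2.
    Upper bound U solves P(Poisson(U) <= 1) = alpha/2, i.e. exp(-U)*(1+U) = alpha/2,
    found here by bisection.
    """
    alpha = 1 - conf
    lower = -math.log(1 - alpha / 2)            # ~0.025 events
    lo, hi = 0.0, 50.0
    for _ in range(100):                        # bisection for the upper bound
        mid = (lo + hi) / 2
        if math.exp(-mid) * (1 + mid) > alpha / 2:
            lo = mid
        else:
            hi = mid
    return lower, lo                            # ~ (0.025, 5.57) events

tesla_miles = 130e6                             # Autopilot miles, 1 fatality observed
us_rate = 1 / 93e6                              # fatalities per mile, U.S. average

lo_events, hi_events = poisson_ci_one_event()
lo_rate, hi_rate = lo_events / tesla_miles, hi_events / tesla_miles

# The interval for Tesla's true rate spans roughly 0.02 to 4.3 fatalities per
# 100 million miles -- it easily contains the U.S. average of ~1.08, so one
# fatality cannot show Autopilot is safer (or less safe) than average.
print(lo_rate * 1e8, hi_rate * 1e8, us_rate * 1e8)
```

With one observed event the interval is over two orders of magnitude wide, which is exactly the point: the data are consistent both with Autopilot being far safer than average and with it being far more dangerous.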

To help you better understand this point, let's look at the following example.

If you count the people who pass through the gate of Tsinghua University in a day, you might find 4,543 male and 2,386 female students entering and leaving, from which you can roughly conclude that "the ratio of male to female students at this school is about 2:1".

Of course, you cannot say the ratio of boys to girls is exactly 4,543:2,386, because whether any given person leaves campus on a given day is random, determined by many chance factors.

What's more, men's and women's preferences and needs for going off campus may be slightly skewed. Still, if you give a rough ratio after counting a sample of nearly 7,000, no one will challenge you.

A different situation, however, yields a different answer. Suppose you got up early on the morning of May 3, stood at Tsinghua's west gate for two minutes, saw three girls and one boy pass through, and concluded that three quarters of the students at the school are girls. People obviously would not accept that, because the result could be a complete coincidence.

Maybe the next day, May 4, you watch for two minutes again and find that all four people entering and leaving are boys — you certainly could not conclude that "this school has only boys".
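The contrast between the day-long count and the two-minute glance can be made precise with a confidence interval for a proportion. Below is a sketch using the Wilson score interval — a standard formula, not something from the article — applied to the two counts in the example.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - margin, center + margin

# Full-day count: 4,543 men out of 6,929 people -> a tight interval
lo_big, hi_big = wilson_ci(4543, 4543 + 2386)

# Two-minute glance: 3 girls out of 4 people -> an interval so wide
# it is nearly useless
lo_small, hi_small = wilson_ci(3, 4)

print(f"day-long count:    {lo_big:.3f}-{hi_big:.3f}")      # ~0.644-0.667
print(f"two-minute glance: {lo_small:.3f}-{hi_small:.3f}")  # ~0.301-0.954
```

The large sample pins the proportion of men down to about a two-percentage-point band; the four-person sample only tells you the share of girls is somewhere between roughly 30% and 95%, which is why no one would accept the 3/4 conclusion.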

So what confidence level must be reached before a conclusion counts as reliable? In engineering, including in drug trials, a level of 95% or higher is usually required.

For outcomes that can be measured quantitatively — returns on investments, for example — does the fact that fund manager A's return is 1% higher than competitor B's mean that fund A is better than fund B?

Let's say we measure once a month. At typical stock-market volatility, answering that question would require about 1,000 sample points — roughly 100 years of data.

No fund today meets this requirement, which means funds claiming a 10-year average return slightly better than the broader market are overstating their case: that small difference does not carry a high confidence level.

In other words, a statement like "such-and-such fund outperformed the broader market by 1% over 10 years" carries little information — it does not show that the fund is better than the broader market.
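The "about 1,000 sample points" figure can be sanity-checked with the standard sample-size formula n ≥ (zσ/δ)², where δ is the monthly edge to detect and σ is the monthly volatility of the fund-minus-market difference. The 1.5% monthly volatility below is an illustrative assumption of mine, not a figure from the article; with more volatile funds the required sample grows even larger.

```python
import math

def months_needed(annual_edge, monthly_sigma, z=1.96):
    """Months of data needed to detect a mean monthly edge of annual_edge/12
    at the 95% confidence level, assuming the fund-minus-market monthly
    difference has standard deviation monthly_sigma.
    Standard sample-size formula: n >= (z * sigma / delta) ** 2.
    """
    delta = annual_edge / 12                  # monthly edge to detect
    return math.ceil((z * monthly_sigma / delta) ** 2)

# Assumed inputs: a 1% annual edge, 1.5% monthly volatility of the difference
n = months_needed(annual_edge=0.01, monthly_sigma=0.015)
print(n, "months, i.e. about", round(n / 12), "years")   # ~1,245 months
```

Under these assumptions you would need on the order of 1,200 monthly observations — more than a century of track record — before a 1% annual edge clears the 95% confidence bar, which matches the article's rough estimate.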

There are many claims in the world that are difficult to verify — from historical events to anything that cannot be repeated many times — and it is very hard to draw reliable lessons from them.

At the scale of companies, the success of certain enterprises is often explained after the fact with a self-justifying theory; yet if those companies changed their environment slightly — or even tried again in the same environment — they would find the same success very hard to repeat.

At the scale of an individual, there is also a great deal of chance involved in making anything happen, and whether the same approach works next time depends on the circumstances.

I discuss the role of fate in my book "Insight". Often we have to acknowledge it, and we must not summarize lessons that do not exist — or, in more scientific terms, we should not trust information with a low confidence level.

What happens when you use experience or information that has a low confidence level to guide your actions?

It is like an army that believes in ghosts and pours its energy into finding them and devising ways to fight them, only to be wiped out easily by a few people sneaking up from behind.

You may think I am only drawing an analogy, but in fact this is happening around us. The ghost is the so-called artificial intelligence that will supposedly take control of humans.

As we have said repeatedly, from various perspectives in previous lessons, the so-called sentient, uncontrollable intelligent computer not only does not exist today but will not exist for generations to come. Yet many people remain suspicious, constantly wondering what humans should do once such robots appear — that is hunting for ghosts.

In fact, what is scary today is not AI but the big companies behind it — companies wielding the incomparable computing power of super data centers, ubiquitous monitoring systems, and enormous data-processing capability.

While we, the general public, are still worrying about whether machines have achieved intelligence and might rebel against humans, we have in fact already been controlled by those intelligent programs.

If you don't believe me, just look at how many people have changed their living habits since getting WeChat, lost the ability to actively seek out news after getting Toutiao (Today's Headlines) — even the ability to tell real news from fake — and how many people have bought piles of useless bargains since getting Taobao.

Summary of key points

We have discussed the concept of confidence. A common mistake people make when reading the news is to ignore its confidence level. For things that can be repeated, the confidence level is high only after they have been tested enough times. For things that are difficult to test repeatedly, we have to verify them in other ways, which we will discuss later.

Of course, accepting unreliable information is never without cost, so the question becomes: is it possible to quantify the damage caused by misinformation?
