
Is the key to autonomous driving hardware or software?

The arms race among automakers and intelligent-driving solution suppliers has reached a climax: lidar, 4D imaging radar, and high-performance chips are going on board, and functions are expanding from highway pilot into the urban domain. The leading players already cover the three domains of highway, city, and parking, and some brands even shout slogans about delivering L4-level autonomy, as if high-level intelligent driving were on the verge of a breakthrough. How close are we to truly working autonomous driving? Will 2022 be the first year of high-level intelligent driving? Is the bottleneck hardware, software, or experience? Who will lead the current first tier, and will new entrants join it?

By 肖湾 · Published about a year ago · 9 min read
Photo by Nicolas Orellana on Unsplash

As someone who has long worked in the intelligent-driving field, I am glad to see so many insightful peer views on this question. I would also like to take this opportunity to share some of the thinking behind the sensor fusion in RISING PILOT, the high-level intelligent driving system of Feifan Auto (RISING Auto), in the hope of exchanging ideas with you and jointly advancing intelligent-driving technology.

In recent years, new-energy electric vehicles and intelligent driving have been at the center of hot topics in the automotive industry. The former is growing rapidly with the advance and popularization of battery and charging technology; the latter, spanning automotive active safety and advanced intelligent driving, shows ever-increasing technological appeal. Tesla in particular, with excellent energy management and intelligent-vehicle technology plus continuous OTA updates, has achieved ongoing evolution of energy management, vehicle control, human-machine interaction, and intelligent-driving functions (we will set aside intelligent-manufacturing topics in this article). Each function has gradually grown from an independent, embryonic feature into a polished, product-grade capability. As more and more of these capabilities iterate and accumulate, they form Tesla's powerful technology label, helping it become a high-tech company and gradually take the lead in technological innovation. Returning from the technology to the market, this way of making steady, continuously delivered progress has also helped Tesla advance commercially and stay at the forefront of intelligent new-energy vehicles.

Let us focus on intelligent driving, starting from Traffic Light and Stop Sign Control, a function Tesla launched in North America in 2020. Below is an accident abroad caused by a failure to slow down and stop in front of a stop sign.

There are many similar scenes at home. Facing such scenarios, the handling by current on-board ADAS / intelligent-driving vehicles is even less helpful than the "slow down ahead" prompt from a phone navigation app. Likewise, the traffic-light scenario has not been handled at scale in current mass-production intelligent-driving systems, lagging far behind the navigation app's warning to "slow down at the traffic-light intersection ahead". Moreover, many people, if not most, ignore such "slow down" warnings at crossings, especially when in a hurry. For this type of scene Tesla developed Traffic Light and Stop Sign Control, which balances cost, intelligence, and safety in the experience, a combination of technology and daily life. Under the traditional OEM + supplier cooperation model, such a function would usually require a long time just to discuss feasibility and safety. And since this type of function demands very high generalization, stability, and universality, it is essentially infeasible for intelligent-driving solutions without a data-iteration capability. Tesla realizes this type of experience through its high-level intelligent-driving stack, and the function quickly lands in mass-production models. Such a commercial anchor point and development pace are hard to match for intelligent-driving solutions that lack long-term evolution capability, and they remain a peak that many OEMs cannot yet reach on the intelligence track.
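The basic logic of such a stop-sign function can be illustrated with a minimal sketch. This is my own simplification, not Tesla's implementation: given the detected distance to the stop line and the current speed, compute the constant deceleration needed to stop exactly there, and trigger braking once it reaches a comfort limit.

```python
def required_deceleration(speed_mps: float, dist_m: float) -> float:
    """Constant deceleration (m/s^2) needed to stop exactly at the stop line."""
    if dist_m <= 0:
        raise ValueError("already at or past the stop line")
    return speed_mps ** 2 / (2.0 * dist_m)

def should_brake(speed_mps: float, dist_m: float,
                 comfort_decel: float = 2.0) -> bool:
    """Start braking once the needed deceleration reaches the comfort limit."""
    return required_deceleration(speed_mps, dist_m) >= comfort_decel
```

At 20 m/s with 100 m to the line, the required deceleration is exactly the 2.0 m/s² comfort limit, so braking begins; the hard part in practice is not this arithmetic but reliably detecting the sign and stop line in the first place.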
Today, SoC computing power keeps climbing, a variety of high-performance sensors are emerging, and more and more computer-science talent is entering the intelligent-vehicle field. This lively track is fueling the industry's deep exploration of both technology and business, and many interesting solutions with long-term iteration continuity are appearing. Let us look at the key to intelligent driving from the perspectives of connotation and extension.

1. Where chips provide more computing resources, intelligent driving tends to show more vitality. This dimension is essentially the connotation dimension. Can the core technological value and user experience brought by on-vehicle intelligence, especially high-level intelligent-driving functions, give a brand more intelligent tension and user engagement? Can advanced intelligent-driving functions be developed into products with user engagement and commercial value? In Tesla's case, with its ever-growing Autopilot, the answer is clearly a resounding yes. From the start, Tesla, relying on its huge innovation power, adopted a very forward-looking overall design years ago: fully standardized, unified hardware (note: HW3.0 models are considered here) and a self-developed chip (an easily overlooked, very large investment: producing a digital chip on an advanced process node costs more than a lidar). It presents high-level intelligent driving and interactive perception display to the user, backed by closely linked R&D investment across chip design + chip tape-out + chip low-level software + intelligent-driving sensor hardware + intelligent-driving software + data closed loop. After continuous data and algorithm-software iteration, Tesla opened FSD Beta in North America and elsewhere, demonstrating the tenacious development capability of its approach. Its commercial anchor lies in the closed loop of chip, algorithm, and data, and the appeal of each link requires long-term investment and solution focus. With the support of the FSD Beta algorithms, this hardware platform shows great vitality again: FSD Beta began to add very rich traffic elements to the environment model.

This is achieved on the basis of major updates to its neural networks, accumulated data, and more. After these landed, FSD Beta, now being rolled out in North America, performs far better than the current AP. Behind all this, the hardware has not changed; what really changed is the software. Going deeper, the lifeblood of the software change is an innovative shift in perception enabled by the greater resources the chip commands: a complex environment-model description of the vehicle's full surroundings, more accurate measurement of dynamic traffic participants, complete real-time description of static traffic elements, and association between dynamic and static elements. This lays the foundation for safer, more comfortable, and smoother intelligent-driving functions.

The RISING PILOT high-level intelligent-driving system adopts the PP-CEM™ (Pixel Point Cloud Comprehensive Environment Model) platform. The hardware includes 33 sensors of different types (LUMINAR lidar, 4D imaging radar, high-specification cameras, etc.). On the software-algorithm side, the core of the PP-CEM™ scheme is to construct, in real time, the complex environment-model information around the vehicle: traffic participants such as people and cars with their position, speed, type, orientation, and other attributes; surrounding traffic information including lane lines, traffic lights, curbs, road surface, speed limits, general static obstacles, etc.; and extended attributes of traffic participants, such as the relationship between a vehicle target and traffic elements (whether it is straddling a lane line, whether it is cutting in or out, whether it is partially occluded, whether it is inside or outside the curb, how long it occupies the lane, etc.). All of this requires massive computing power to process the sensor data and perceive the surrounding environment.
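As an illustration, the kind of environment-model record described above can be sketched as a simple data structure. The field names here are my own; the actual PP-CEM™ interfaces are not public.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TrafficParticipant:
    obj_id: int
    obj_type: str                      # e.g. "car", "pedestrian"
    position: Tuple[float, float]      # (x, y) in the vehicle frame, metres
    velocity: Tuple[float, float]      # (vx, vy) in m/s
    heading: float                     # orientation, radians
    # extended attributes, as described in the article
    straddling_lane_line: bool = False
    cutting_in: bool = False
    lane_occupancy_s: float = 0.0      # how long it has occupied the lane

@dataclass
class EnvironmentModel:
    participants: List[TrafficParticipant] = field(default_factory=list)
    lane_lines: List[list] = field(default_factory=list)      # polylines
    traffic_lights: List[str] = field(default_factory=list)   # e.g. "red"
    speed_limit_kph: Optional[float] = None
```

The point of such a model is that downstream planning consumes one consistent description of the scene rather than raw per-sensor detections.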
The visual perception of the Feifan R7 adopts multi-camera joint perception: the forward cameras with different fields of view and the cameras around the vehicle body are synchronized in time and space, then jointly fed into a BEV neural network for multi-task visual inference, yielding moving traffic participants such as people and cars around the vehicle as well as static lane lines, ground markings, traffic lights, curbs, speed limits, general static obstacles, etc. The lidar perception of the Feifan R7 adopts 3D point-cloud neural-network detection and multi-class segmentation, plus traditional object-detection algorithms, to perceive moving traffic participants such as people and cars in front of the vehicle; it can also learn and extract short-range lane lines from information such as road edges, road-surface undulation, and laser reflection intensity. The 4D imaging radar of the Feifan R7 takes 4D (3D XYZ + velocity) point clouds as input and can perceive moving people and vehicles over the full 360 degrees. The Feifan R7's 4D-imaging-radar perception also uses a dedicated neural network that learns and reasons over time-sequenced 4D radar point clouds for stationary vehicles ahead, greatly improving the millimeter-wave radar's ability to detect stationary vehicles; through voxel processing of the time-series data, it also recognizes general obstacles, static curbs, and the like. As shown in the figure below, green points are stationary points in the environment detected by the millimeter-wave radar, and non-green points are moving points; moving people, cars, and road edges can all be extracted from the millimeter-wave radar.
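The stationary/moving split for radar points can be approximated by ego-motion compensation of the Doppler (radial) velocity: for a static point, the measured radial velocity should equal the projection of the ego velocity onto the line of sight. A minimal sketch, assuming a forward-moving ego vehicle and an x-forward/y-left vehicle frame, and not Feifan's actual algorithm:

```python
import math

def classify_radar_points(points, ego_speed_mps, tol=0.5):
    """Split radar returns (x, y, z, radial_velocity) into stationary vs
    moving.  For a static point at azimuth theta, ego motion alone makes
    the measured radial velocity equal -ego_speed * cos(theta)."""
    stationary, moving = [], []
    for x, y, z, v_radial in points:
        azimuth = math.atan2(y, x)                 # angle off the boresight
        expected = -ego_speed_mps * math.cos(azimuth)
        if abs(v_radial - expected) < tol:
            stationary.append((x, y, z, v_radial))
        else:
            moving.append((x, y, z, v_radial))
    return stationary, moving
```

A point straight ahead that closes at exactly the ego speed is classified stationary, while one closing slower or faster must itself be moving; real systems refine this with the full ego twist (including yaw rate) and temporal filtering.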

The Feifan R7 also adopts early fusion of the lidar point cloud and camera images. In one example of early fusion, shown in the figure below, the point cloud and the forward right-FOV camera image, after complete synchronization in time and space, are jointly fed into a neural network. This early fusion enables forward detection of vehicles and the like, and also gives more accurate joint perception of static traffic elements, such as the detection, recognition, and measurement of cones and poles.
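Early fusion of this kind starts by projecting lidar points into the camera image after time synchronization, so that each pixel can carry depth. A minimal pinhole-model sketch with illustrative calibration values, not the R7's actual calibration:

```python
import numpy as np

def project_lidar_to_image(points_xyz, extrinsic_4x4, intrinsic_3x3):
    """Project lidar points (N, 3) into pixel coordinates.

    extrinsic_4x4 maps the lidar frame into the camera frame; intrinsic
    is the usual pinhole K matrix.  Returns (N, 2) pixel coordinates and
    a mask of points that lie in front of the camera."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])       # (N, 4)
    cam = (extrinsic_4x4 @ homo.T).T[:, :3]               # camera frame
    in_front = cam[:, 2] > 0.1                            # drop points behind
    pix = (intrinsic_3x3 @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]                        # perspective divide
    return pix, in_front
```

With an identity extrinsic and principal point (320, 240), a point on the optical axis lands exactly at the image center; the projected depth values can then be appended to the image tensor before it enters the fusion network.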

At the same time, the Feifan R7 also jointly learns the point clouds of the lidar and the millimeter-wave radar, especially strengthening the detection, recognition, and measurement of stationary vehicles ahead, in order to exploit the long-range measurement advantages of the 4D imaging radar and the LUMINAR lidar and, with the algorithm software, achieve more accurate measurement of stationary targets.

After these perception stages are complete, the Feifan R7's fusion algorithm integrates the independent perception results of vision, millimeter-wave radar, and lidar, together with the early-fusion lidar/millimeter-wave + image results, as full-element observations over the whole time window, and combines them with the high-precision map information of the RISING PILOT highway-piloting function. Through full fusion of these environmental elements, the system achieves accurate measurement and judgment of the vehicle's surroundings and provides complete, complex environment-model information for the downstream functions.
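One common textbook way to combine independent estimates of the same quantity from different sensors is inverse-variance weighting; this is a generic illustration of late fusion, not necessarily the method used in RISING PILOT.

```python
def fuse_estimates(measurements):
    """Inverse-variance weighted fusion of independent estimates of the
    same quantity, e.g. the longitudinal distance to a target reported
    by camera, radar, and lidar.

    `measurements` is a list of (value, variance) pairs; returns the
    fused value and its (smaller) fused variance."""
    inv_var_sum = sum(1.0 / var for _, var in measurements)
    fused = sum(val / var for val, var in measurements) / inv_var_sum
    return fused, 1.0 / inv_var_sum
```

The fused variance is always below the best single sensor's, which is the quantitative reason a camera + radar + lidar stack can out-measure any one modality; a production tracker generalizes this idea over time with a Kalman-style filter.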
