Lidar and Multi-lidar are the Future of Autonomous Cars
Khaled Zakaria
XXXXXXXXXX
ZAKKD1804
ENSC105
Ian Brooks
2nd March, 2020
Executive Summary:
This study is an informative paper addressed to Elon Musk, CEO of Tesla, to demonstrate that Lidar and multi-lidar will help improve autonomous cars. It covers how Lidar and multi-Lidar technology works, what it costs, how it detects pedestrians and other cars, and how safe it is. Finally, the paper discusses the ethical and legal approaches to this technology.
Contents
Introduction
Lidar and multi-lidar will help and improve autonomous cars
Costs of Lidar and Multi-Lidar
Lidar and Multi-Lidar technology detects pedestrians and cars and how safe they are
Ethical and legal approaches of Lidar and multi-lidar
Conclusion
References
Lidar and Multi-lidar are the Future of Autonomous Cars
Introduction
Autonomous vehicles are no longer vague signs of an uncertain future; human-driven cars may soon be a thing of the past. Driverless cars will disrupt the automotive business and usher in a huge restructuring, and in the years to come you need only turn around to see them moving in the streets near you. However, an inevitable question arises: how do self-driving cars work? Many technologies can make a vehicle drive itself, but how do these cars change lanes and maintain a safe distance from the vehicles that zip past, and how do they find roadblocks and other such obstacles ahead?
Lidar and multi-lidar will help and improve autonomous cars
Lidar stands for light detection and ranging. As its name suggests, Lidar is used in applications ranging from factory ground robots and driverless taxis to mapping coastal areas and measuring deforestation. Self-driving cars can use Lidar for obstacle recognition and avoidance, safely navigating the environment using a rotating laser beam [1]. The range map and point-cloud output of the Lidar sensor provide the essential information the vehicle's software needs to determine the presence of probable obstacles in the environment and the position of the vehicle relative to those potential obstacles. The sensor creates a 3D representation by using the speed of light to calculate the distance each laser pulse has covered, which helps establish the location of the vehicle.

Multi-lidar: Compared with single-Lidar systems, multi-Lidar sensors can improve the environmental perception of autonomous vehicles. In a multiple-Lidar system, the placement of the sensors determines the density of the combined point cloud, and several studies have conducted preliminary work on optimal Lidar placement strategies for off-road autonomous vehicles. Obstacle detection is a significant research area with many applications in outdoor environments [3]. In particular, obstacle detection is heavily used in mobile-robot tasks such as obstacle avoidance, pre-collision warning, collision mitigation, and stopping. Moreover, some mobile robots must be capable of performing obstacle detection at high speed in unknown and dynamic environments, and for these purposes laser rangefinders are widely used. Nevertheless, even if a laser scanner is very accurate, it can only cover a very small part of the scene. To get over this restriction of a single laser rangefinder, many studies have focused on fusing multiple laser beams [5]. In particular, using a 3-D laser scanner to generate point clouds, using an INS to accumulate points from a 2-D laser scanner, and stereo vision all seem to be good solutions. However, 3-D laser scanners (for example, Velodyne) are too expensive for widespread use, and accumulated points are sensitive to the frequency and accuracy of the INS. To resolve these issues, an effective two-dimensional laser-scanner array configuration and a fast system for barrier detection in dynamic environments can be used: from the composed 3-D range data, obstacles can be detected quickly. Considering the limits of the automobile's embedded computing environment, an efficient MODT (multiple object detection and tracking) framework is used to process the combined Lidar data. An automated driving system capable of performing all driving tasks on all roads and in all environmental conditions that a human could manage is classified by SAE International (the Society of Automotive Engineers) as the highest level of automation [6].
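The ranging principle described above — timing a laser pulse at the speed of light, then placing each return in a 3D point cloud — can be sketched in a few lines of Python. This is a minimal illustration, not any real sensor's interface; the pulse time and beam angles are made-up example values:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """Range to a target from a laser pulse's round-trip time.
    The pulse travels out and back, so the one-way distance is half."""
    return C * round_trip_s / 2.0

def to_cartesian(r, azimuth_deg, elevation_deg):
    """Convert one lidar return (range plus beam angles) to an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# A pulse returning after roughly 66.7 nanoseconds hit something about 10 m away.
r = tof_distance(66.7e-9)
point = to_cartesian(r, azimuth_deg=30.0, elevation_deg=0.0)
```

A rotating unit produces thousands of such returns per revolution; a multi-Lidar rig simply transforms each sensor's points into one common vehicle frame before merging them into the combined point cloud.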
Costs of Lidar and Multi-Lidar
In everyday land conditions (driving along streets or navigating around buildings), reflected laser light is a better source of data than radio waves or sound, which is why Lidar is so popular: it is simple, reliable, and relatively low in cost. Cost-effective Lidar and multi-Lidar data can be used to make real-time maps of the streets that driverless cars are trying to navigate, or the routes that robots have to cross [10]. For example, one company sells less-expensive Lidars, including a 16-laser "puck" model that was selling for $4,000 last year, and a few companies also offer solid-state models, which they expect to eventually cost less than $1,000 at automotive volumes. However, these Lidars do not deliver the high-end performance of the spinning 64- and 128-laser models; flagship 64-laser Lidar units were selling for a reported $75,000 each.
Lidar and Multi-Lidar technology detects pedestrians and cars and how safe they are
With the help of Lidar, autonomous vehicles can drive smoothly and avoid collisions by finding obstacles ahead. This increases the safety of commuters and makes self-driving cars less prone to accidents, as there is no threat of rash driving or human negligence. Lidar acts as the eye of an autonomous vehicle: it gives the car a 360-degree view and helps it drive itself securely. As the resolution of Lidar and multi-Lidar gets higher and higher, and the sensors operate over longer ranges, new use cases have emerged in object tracking and detection. Lidar maps not only let the vehicle know exactly where it is in the world and help it navigate, but also track and detect obstacles such as pedestrians and cars [11]. The barrel-shaped objects mounted above these cars are Lidar (light detection and ranging) systems, which can generate 3D images of the car's surroundings multiple times per second. To compensate for the shortcomings of any single sensor, Lidar-camera sensor fusion is used: during sensor fusion, the computational difficulty of image processing decreases and detection performance improves.
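A common first step in the Lidar-camera fusion just described is to project each 3D Lidar point into the camera image, so that detections from the two sensors can be associated. The sketch below assumes the point has already been transformed into the camera frame (z pointing forward) and uses made-up pinhole intrinsics for an illustrative 1280x720 image, not any real calibration:

```python
def project_to_image(point_xyz, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection of a 3D point in the camera frame to pixel coordinates.
    fx, fy are focal lengths in pixels; (cx, cy) is the image centre."""
    x, y, z = point_xyz
    if z <= 0:
        return None  # point is behind the camera: nothing to fuse
    return (fx * x / z + cx, fy * y / z + cy)

# A lidar return 20 m ahead and 1 m to the right lands right of the image centre.
pixel = project_to_image((1.0, 0.0, 20.0))  # (690.0, 360.0)
```

Once a point lands inside a camera-detected bounding box, the detector's class label (pedestrian, car) can be attached to the precise range and position that the Lidar measured.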
Ethical and legal approaches of Lidar and multi-lidar
Is this technology safer than a human driver? How do we keep people safe while developing and testing the technology? When an accident happens, who is responsible: the developer who wrote the faulty software, the person in the driver's seat who failed to identify the system failure, or one of the hundreds of hands the technology passed through along the way? The need for driving innovation is clear: according to the National Safety Council, motor vehicle deaths exceeded 40,000 in 2017. A recent study by the RAND Corporation estimates that putting autonomous technology on the road will save thousands of lives once the technology is just 10% better than human drivers [1]. Industry leaders continue to advance autonomous vehicles: the Brookings Institution estimates that autonomous-vehicle technology investments have exceeded $80 billion to date. Top car, ride-sharing, and technology companies, including Uber, Lyft, Tesla, and GM, are all working on autonomous vehicle projects; GM plans to release a vehicle by 2019 that requires no human driving, with no pedals or steering wheel. Srikanth Saripalli, an associate professor in the Department of Mechanical Engineering at Texas A&M University, explained that autonomous vehicle accidents are usually caused by sensor errors or software errors. The first problem is technical: the light detection and ranging (Lidar) sensor cannot detect obstacles in fog, the camera needs the correct light, and the radar is not always accurate [2]. Sensor technology continues to evolve, but self-driving cars still need a lot of work to drive safely in cold, snow, and other adverse conditions, and inaccurate sensors can lead to system errors that the driver is unlikely to catch in time. Lawmakers have already begun making these decisions. State and municipal authorities have scrambled to host the first self-driving car tests to attract lucrative tech companies, job opportunities, and an innovation-friendly reputation. At present, the state-level laws and administrative orders that regulate autonomous vehicles are scattered; these varying laws complicate testing and eventual widespread adoption, and driverless cars are likely to require a unique set of safety regulations. Outside the United States, more specific discussions have taken place: last summer, Germany adopted the world's first code of ethics for driverless cars. The rules state that human life must take precedence over property damage and that, in the case of unavoidable personal injury, decisions cannot be made based on "age, gender, physical or mental makeup" [4].
Conclusion
In conclusion, as self-driving technology is tested, developed, and brought closer to us on the road, this dilemma outlines the moral challenges ahead. The public wants as many people as possible to be safe, but not if that means sacrificing their own safety or the safety of their loved ones. It has been concluded that Lidar and multi-lidar act as the eyes of self-driving vehicles, providing them with a 360-degree view. If people are to trust their lives to sensors and software, those systems need to make good ethical decisions on the journey toward safer roads. In a series of investigations, researchers have found that people endorse utilitarian ethics for driverless cars — in an unavoidable accident, a self-driving car should minimize human casualties — but are not keen to ride in such cars themselves.
References
[1] M. Sualeh and G. Kim, "Dynamic Multi-LiDAR Based Multiple Object Detection and Tracking", Sensors, vol. 19, no. 6, p. 1474, 2019. DOI: XXXXXXXXXX/s XXXXXXXXXX.
[2] T. Kayarga, "Multiple Object Detection and Tracking in Dynamic Environment using Real-Time Video", International Journal of Trend in Scientific Research and Development, vol. 2, no. 1, pp. XXXXXXXXXX, 2017. DOI: XXXXXXXXXX/ijtsrd7181.
[3] A. Chesworth and J. Huddleston, "Precision optical components for lidar systems developed for autonomous vehicles", Next-Generation Optical Communication: Components, Sub-Systems, and Systems VII, 2018. DOI: XXXXXXXXXX/ XXXXXXXXXX.
[4] H. Guo, D. Cao, H. Chen, Z. Sun and Y. Hu, "Model predictive path following control for autonomous cars considering a measurable disturbance: Implementation, testing, and verification", Mechanical Systems and Signal Processing, vol. 118, pp. 41-60, 2019. DOI: XXXXXXXXXX/j.ymssp XXXXXXXXXX.
[5] S. Brisken, F. Ruf and F. Höhne, "Recent evolution of automotive imaging radar and its information content", IET Radar, Sonar & Navigation, vol. 12, no. 10, pp. XXXXXXXXXX, 2018. DOI: XXXXXXXXXX/iet-rsn XXXXXXXXXX.
[6] E. Ackerman, "Lidar that will make self-driving cars affordable [News]", IEEE Spectrum, vol. 53, no. 10, pp.