
AI Drones In Search And Rescue

Search and Rescue with AI Drones

By Andrew D · Published 2 years ago · 17 min read

Search and Rescue is a serious and dangerous line of work. When a natural disaster such as a tsunami or earthquake occurs, sending in first responders carries real risk: more often than not they have to deal with the aftermath of the disaster, and they may be put in danger themselves. Human-caused disasters, such as bombings or shootings, bring risks of their own, for example radiation in the case of a nuclear explosion. Autonomous drones can help in this regard.[2] Piloted drones are already in use; during the deadly Alabama tornadoes, for instance, drones were sent in to look for anyone who might be in danger, but these were piloted, not autonomous. Autonomous drones can cover more ground in less time, and it would be far more efficient if they could assist people on their own, freeing rescuers to handle the situation more easily. This is not an easy task: no two search and rescue operations are the same, so the drones must be programmed to adapt to the different environments and hazards these disasters introduce. That is why we should begin developing ideas to improve these drones and make the problems they face more manageable.

How is the data needed for the solution to be gathered?

The data used to program the autonomous drones will be gathered with piloted drones. Piloted drones are already used in the search and rescue field, so they provide information about the different scenarios autonomous drones might face. Each situation recorded by a piloted drone came with a different solution that the pilot performed, and those solutions can be implemented in the autonomous drones' software. When a drone then enters a chaotic scene, for example after an earthquake, it can compare the current situation with previously recorded earthquakes in order to locate injured people and notify the human supervisors, who carry out the rescue. Various companies and universities have research teams working on autonomous drones. For example, to solve the problem of searching a chaotic scene, one research team has created an artificial neural network that runs on a computer on board the drone.[2] This network allows the drone to emulate some of the ways human vision works. If these research teams share their work, autonomous drones will become a reality sooner. Data can also be gathered through trial and error: at first these drones will not operate at full efficiency, so pilots could take control whenever a drone experiences errors during an operation, and those errors can be worked on afterwards so they do not repeat. Companies such as Google and Amazon have created autonomous drones capable of delivering packages on their own, which in the near future will most probably be used across the US. These delivery drones can serve as a backbone for search and rescue: they already have navigation implemented, and the same software can be reused to navigate to and from danger zones.
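The scene-matching idea above can be sketched as a simple nearest-neighbour lookup. This is a minimal illustration under our own assumptions: the scenario names and the three-value feature vectors (e.g. debris density, thermal activity, terrain roughness) are hypothetical stand-ins for whatever the onboard network would actually extract.

```python
import math

# Hypothetical feature vectors summarizing previously recorded scenes.
RECORDED_SCENARIOS = {
    "earthquake_urban": [0.9, 0.4, 0.7],
    "flood_rural":      [0.2, 0.1, 0.3],
    "wildfire_forest":  [0.3, 0.95, 0.6],
}

def closest_scenario(current, recorded=RECORDED_SCENARIOS):
    """Return the recorded scenario whose feature vector is nearest
    (Euclidean distance) to the current scene's features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(recorded, key=lambda name: dist(current, recorded[name]))

print(closest_scenario([0.85, 0.5, 0.65]))  # nearest to earthquake_urban
```

A real system would match far richer scene descriptions, but the principle is the same: reuse the response that worked in the most similar recorded situation.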

Question 3: Challenges and limitations

Automating the search and rescue process brings many challenges and boundaries that will cause inconveniences and setbacks. A search and rescue environment is dynamic and challenging, posing risks even to the highly trained individuals specialized for such missions. Introducing autonomous drones to aid in specific aspects of these missions would therefore help both the first responders and the people at risk.

Our implementation targets locating the people in need. The main challenges it faces are the highly volatile, high-risk, dynamic environments the drones will operate in. To adapt to these conditions, our drones will carry a multitude of instruments that let the intelligent system gather data about the environment and operate within it. The equipment is split into two categories: navigation and survivor spotting. For navigation, LiDAR sensors and satellite navigation will be implemented alongside the standard equipment already on the drones. For survivor spotting, the drones will carry IR-RGB cameras, thermal cameras, ultrasonic sensors, high-powered LED spotlights, and an air contamination sensor. Alongside these information-gathering tools, a loudspeaker and microphone will be fitted for communication.

Challenges such as fog, heavy downpour, low-light conditions, smoke, and many others pose risks to the traditional methods of search and rescue, namely helicopter spotters and inflatable boats. The equipment on our drones was purposely chosen to adapt to these situations and help rescuers spot and locate the people in need faster and more efficiently.

The equipment on our drones was selected specifically to deal with the variability of the environments. The IR-RGB and thermal cameras provide views of the environment across multiple spectra, allowing the A.I. system to analyze its situation in depth. IR-RGB cameras have a photographic sensor that captures light from roughly 400 nm up to 70,000 nm, extending well beyond the human visibility range (about 400-700 nm), which lets the A.I. effectively see through smoke, rain, and other semi-opaque media. For even worse visibility conditions, thermal cameras offer another spectrum, locating heat signatures and their intensity. The ultrasonic sensor adds another layer of information at close range (approx. 5-10 meters) and will mainly be used for guidance and maneuvering around obstacles. The loudspeaker and microphone are for contact with survivors: the drone would ping the survivors' location back to the rescuers and play a pre-programmed message to reassure the people in need. In the more volatile cases, the loudspeaker and microphone could be operated manually to communicate directly with the survivors.

Using drones solves or mitigates some problems, such as the large amounts of food and water needed for a big crew of rescuers and spotters, or the fatigue that comes from long periods of looking for survivors. But drones introduce a new layer of challenges: while they need no food, water, or rest, they do need to charge more frequently than rescuers need breaks. On-location generators are already common in search and rescue situations, so charging is less a problem than a minor inconvenience; rescue vehicles with onboard generators could travel alongside the drones as portable quick-charge hubs and double as first responders when survivors are found. Another limitation is range: small drones max out at around 10 km, so exploring a larger area would require the control station to move as well. Workarounds include setting up the control center on a mobile vehicle such as a customized truck, or fitting each drone with a signal relay card so drones can exceed the range by 'hopping' their transmissions off one another. This approach slows transfer speeds significantly but substantially extends the fleet's reach as the number of drones increases. Since all the drones need to send back to the control station is a location ping of where survivors were found, a weak connection line is not a big issue when only kilobytes or bytes of data are being transferred.
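The relay trade-off above can be put in rough numbers. This is a toy model under stated assumptions: a 10 km per-link range, and (purely for illustration) each relay hop halving the usable throughput.

```python
def relay_range_km(num_drones, per_link_range_km=10.0):
    """Rough upper bound on reach when drones daisy-chain their radio
    links: each additional drone adds one more hop of per-link range."""
    return num_drones * per_link_range_km

def relayed_throughput_kbps(base_kbps, hops, loss_per_hop=0.5):
    """Toy model: each relay hop halves the usable throughput."""
    return base_kbps * (1 - loss_per_hop) ** hops

print(relay_range_km(5))                 # 50.0 km reach with 5 drones
print(relayed_throughput_kbps(1000, 4))  # 62.5 kbps after 4 hops
```

Even the degraded four-hop link comfortably carries a location ping of a few dozen bytes, which is why the weak-connection drawback matters so little here.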

Overall, implementing drones for search and rescue would benefit all parties involved. Human overseers would still be needed, but the resources required, fatigue, stress, and loss of life due to time constraints would all be reduced by the ease of deploying drones.

Question 4: A.I. techniques

For our implementation to function, different A.I. techniques need to be applied and work seamlessly together. The two main aspects requiring A.I. are path planning and object detection: our drones need to pilot themselves and spot people, while taking into account their own shortcomings and the environmental challenges. The implementation consists of two A.I. systems, one plotting paths and piloting the drone, the other gathering information from the onboard equipment and analyzing it in real time. When a match for a survivor is found, the spotting A.I. notifies the piloting A.I., which then pings the survivor's longitude and latitude back to the control station.

The path-plotting and piloting system would be managed with a multi-objective path planning (MOPP) algorithm. A MOPP algorithm is a specialized autonomous path planner that takes as input the area of land to be traversed and accounts for the drones' communication range, charge capacity, and fleet size. Using satellite navigation software such as Google Maps, it takes predetermined routes (roads, footpaths, etc.) and alters them for the most efficient traversal of the entire area given the number of drones available, the communication ranges, and the charge stations available. This algorithm is highlighted in Autonomous Robots, issue 44, June 2020.[1] Deep learning neural networks trained on human drone-flight data would supplement the MOPP algorithm for flight and maneuverability.
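As a much-simplified stand-in for what a MOPP planner optimizes, the sketch below just divides an ordered sweep route among the available drones, capped by how many route legs one battery charge allows. All numbers are illustrative; a real planner would also weigh communication range and charge-station positions.

```python
def split_route(waypoints, num_drones, max_legs_per_charge):
    """Naive stand-in for a multi-objective planner: divide an ordered
    route into contiguous segments, capped by battery capacity. If the
    cap yields more segments than drones, the extra segments become
    later sorties flown after recharging."""
    per_drone = min(max_legs_per_charge,
                    -(-len(waypoints) // num_drones))  # ceiling division
    return [waypoints[i:i + per_drone]
            for i in range(0, len(waypoints), per_drone)]

route = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 3)]
print(split_route(route, 3, 4))  # three segments of two waypoints each
```

The real algorithm balances several objectives at once rather than greedily chunking a list, but the inputs it consumes (area, fleet size, charge capacity) are the same ones shown here.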

The survivor-spotting portion would be handled with deep learning object detection and classification algorithms. Using a combination of publicly available and custom-sourced videos and imagery, our neural networks would be trained to detect human heat signatures and forms, and to use the RGB photos and video feed to confirm their presence and differentiate them from non-human entities.
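One simple way to combine the two feeds is a late-fusion rule: the thermal and RGB detectors must both agree before a survivor is flagged. The thresholds below are placeholders, not tuned values.

```python
def confirm_survivor(thermal_score, rgb_score,
                     thermal_thresh=0.6, rgb_thresh=0.5):
    """Late fusion: flag a survivor only when the thermal detector and
    the RGB detector both exceed their thresholds, cutting false
    positives from warm debris or human-shaped but cold objects."""
    return thermal_score >= thermal_thresh and rgb_score >= rgb_thresh

print(confirm_survivor(0.8, 0.7))  # True: both networks agree
print(confirm_survivor(0.8, 0.2))  # False: warm, but RGB sees no person
```

Requiring agreement trades some recall for precision; in practice the thresholds would be tuned against the training imagery described above.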

Actuation:

The search and rescue drones must be capable of performing a set of actions fully autonomously in any given environment, adapting to the ever-changing conditions they are sent into. An AI drone should handle the tasks of:

Plan an optimal route.

Navigate through the environment.

Thoroughly search places of interest.

Identify living human beings.

Transmit the location of survivors and sensory data to the rescue team.

Firstly, the drone must understand and differentiate between search and rescue scenarios, chiefly outdoor vs. indoor and large-scale vs. small-scale operations. The drone should also assign a level of urgency to the situation. In all situations, the AI must handle partial observability and incorrect knowledge. One way for the drone to adapt to different scenarios is to change which resources it uses: sent on an outdoor operation, it relies on GPS and maps, while sent into an indoor area it relies more on sensors to determine its position in space and where it can fly. In both cases the drone uses the complex search algorithms for path planning mentioned in the AI techniques section; combining this with big data gathered by sensors and cameras, and with deep learning models, will improve decision making and efficiency.
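The indoor/outdoor switching described above amounts to choosing which inputs the planner trusts. A minimal sketch, with sensor names that are our own rather than any specific autopilot's:

```python
def choose_nav_sources(indoor, gps_fix_ok):
    """Return the inputs the path planner should rely on: satellite
    navigation outdoors, onboard ranging sensors indoors or whenever
    the GPS fix is lost."""
    if indoor or not gps_fix_ok:
        return ["lidar", "ultrasonic", "visual_odometry"]
    return ["gps", "satellite_map", "lidar"]

print(choose_nav_sources(indoor=True, gps_fix_ok=True))
```

Falling back to ranging sensors when the GPS fix is lost, even outdoors, is one way of handling the partial observability mentioned above.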

How will the drone interact with the environment? The AI drone does not need to change the environment to satisfy its goals; it is there to navigate and gather information. It uses computer vision and sensors to collect sensory data from the given environment.[1] The drones can detect objects and record important values such as oxygen level, radiation, visibility, temperature, and more, reporting them to the rescue team in real time. Most importantly, the drone searches the area to locate survivors, and images and locations are communicated back to the rescue team.

User interaction:

The end goal of the rescue drone is to locate survivors, but some minor goals must also be satisfied along the way: communicating an alert to the survivors and assessing their condition. Firstly, upon reaching the people, a distress message should be played. This would be a simple audio file played through the drone's speakers, something like: "Stay calm and remain in this location. Help is on its way!" The message gives hope to the survivors and encourages them to stay exactly where they are. Ideally the drone would also carry basic supplies such as water and food, but this is not done for our drone because speed and simplicity are prioritized in its build.

Secondly, the drone must quickly and effectively assess the condition of each survivor. For this to be done with little to no mistakes, the AI must run software trained to make these assessments, combining computer vision with reinforcement learning. Prior to actual missions, the AI must learn to solve computer vision tasks in order to learn about survivor conditions.[10] Using CV, the AI can identify injuries and measure body temperature, oxygen level, radiation level, and any other sensory information useful for determining living conditions (these make up the environment's states). The AI receives a large amount of sensory data and should not rely heavily on only one or two sources, since in some situations these might be inaccurate; for example, if the camera malfunctions, the AI should not use image processing to determine conditions. The action taken by the AI is setting a priority level based on the environment's states, say a ranking from 1 to 5, with 1 meaning little to no urgency and 5 a life-and-death situation. Reinforcement learning improves the decision-making part of the AI and can be optimized by simulating a large number of scenarios and examining the AI's results. In a real operation, all of this must be done quickly and communicated back to the rescue team, which acts in accordance with the priority level: reaching the higher-priority survivors first leads to more lives saved.
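A learned policy would eventually set this priority level, but the rule-based fallback below shows the shape of the decision, including the "skip a malfunctioning sensor" behaviour described above. The sensor names and thresholds are hypothetical.

```python
def priority_level(readings):
    """Rank urgency from 1 (little to no urgency) to 5 (life-and-death)
    using whichever sensors are still healthy. A faulty sensor reports
    None and is simply skipped, so no single malfunction dominates."""
    checks = [
        ("body_temp_c",    lambda v: v < 35.0 or v > 39.0),  # hypo/hyperthermia
        ("oxygen_pct",     lambda v: v < 92.0),              # low blood oxygen
        ("radiation_msv",  lambda v: v > 1.0),               # radiation exposure
        ("visible_injury", lambda v: v),                     # CV-detected injury
    ]
    score, usable = 0, 0
    for key, is_critical in checks:
        value = readings.get(key)
        if value is None:
            continue          # sensor malfunction: ignore this signal
        usable += 1
        if is_critical(value):
            score += 1
    if usable == 0:
        return 3              # nothing usable: assume moderate urgency
    return 1 + score          # 0..4 critical flags -> level 1..5

print(priority_level({"body_temp_c": 34.0, "oxygen_pct": 90.0,
                      "radiation_msv": None, "visible_injury": True}))  # 4
```

Three critical flags out of three usable sensors yields level 4; the broken radiation sensor is ignored rather than dragging the assessment toward a guess.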

Ethical Issues:

Ethics can be defined as distinguishing what is morally right from what is wrong; ethics form the foundations of culture and human life. AI machines should therefore follow ethical principles, since they affect society. When dealing with autonomous search and rescue drones, it is critical that the AI behaves ethically, and there are a number of issues to consider before implementing the system.

A common and perhaps the most important concern is accountability. Given the unforgiving nature of search and rescue, there is little room for the AI drone to fail at a task: a failure costs precious time in life-and-death situations and may even cost human lives. The error might be caused by the drone's lack of experience and prior knowledge, or by the environment, for example a bird flying into the drone and damaging it. A system failure may be caused by any number of entities, and the question at hand is who should be held accountable for it.


To deal with this issue, one must focus on the most important element: conserving human life. What can be done to limit system failure? Given the stochastic environment the AI drone navigates, it is almost impossible to simulate every scenario it will face. Care must nevertheless be taken to ensure the drones never stop reacting to the environment, which requires an intelligent system capable of resolving issues in real time. What if the system still fails to make the right decision? One answer in search and rescue is to have more than one drone search the same area: the more drones available, the lower the chance that a mistake goes uncorrected. Needless to say, this approach has drawbacks: cost is higher because multiple drones are needed, and efficiency drops because the drones cover the same area.
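The redundancy argument is easy to quantify: if each drone independently misses a survivor with probability p, then all n drones missing requires every single one to fail, with probability p^n. A worked example (the 10% figure is illustrative, not a measured miss rate):

```python
def combined_miss_probability(single_miss, num_drones):
    """Probability that every drone misses a survivor, assuming each
    drone's pass over the area fails independently."""
    return single_miss ** num_drones

# One drone missing 10% of the time vs. three drones covering the area:
print(combined_miss_probability(0.10, 1))  # 0.1
print(combined_miss_probability(0.10, 3))  # roughly 0.001, i.e. 0.1%
```

The independence assumption is optimistic (fog that blinds one drone likely blinds its neighbours too), so real gains would be smaller, but the direction of the trade-off holds.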

Other ethical issues revolve around use, privacy concerns, bias, and practical concerns.[11] These must be considered throughout the development of the drones, and politics and human bias should have limited influence on the development and implementation phases of the AI. All in all, these drones should improve society; that must be the goal when considering AI solutions to real-life problems.

Conclusion:

To conclude, this report provided a short overview of an AI solution to a real-world problem: the development and implementation of search and rescue AI drones. The challenge was to create a drone capable of navigating an environment fully autonomously and of locating and identifying survivors with a high degree of precision. We briefly discussed the AI techniques that help meet these challenges; many solutions for search and rescue AI drones are already being developed in the real world, and AI technology will facilitate and improve many tasks in society. Despite all this, it is important to build ethical machines, as discussed in the ethics part of the report. AI drones would benefit mankind by reducing the risks we face, replacing us in the most dangerous situations and helping rescue people stuck in extremely risky environments in less time, so that rescued people receive the care they require in shorter time frames. This will be achieved through the methods discussed in this document and through solutions contributed by universities and research teams at various companies; this is not an AI problem to be solved alone, but through the help of many different people.

Citations and references:

[1] Samira Hayat, "Multi-objective drone path planning for search and rescue with quality-of-service requirements", Springer Link. Available: https://link.springer.com/article/10.1007/s10514-020-09926-9

[2] Vijayan Asari, "Autonomous drones can help search and rescue after disasters", The Conversation. Available: https://theconversation.com/autonomous-drones-can-help-search-and-rescue-after-disasters-109760

[3] Alex Brokaw, "Autonomous search-and-rescue drones outperform humans at navigating forest trails", The Verge. Available: https://www.theverge.com/2016/2/11/10965414/autonomous-drones-deep-learning-navigation-mapping

[4] Oisin McGrath, "Are UAVs the future of search and rescue?", AirMed&Rescue. Available: https://www.airmedandrescue.com/latest/long-read/are-uavs-future-search-and-rescue

[5] Fintan Corrigan, "How Do Drones Work And What Is Drone Technology", DroneZon. Available: https://www.dronezon.com/learn-about-drones-quadcopters/what-is-drone-technology-or-how-does-drone-technology-work/

[6] Jonathan Feist, "Autonomous drone vs self-flying drones, what's the difference?", Drone Rush. Available: https://dronerush.com/autonomous-drone-vs-self-flying-drones-10653/

[7] National Oceanic and Atmospheric Administration, "What is lidar?", Ocean Service. Available: https://oceanservice.noaa.gov/facts/lidar.html

[8] Bob Gross, "Ultrasonic Sensors to Detect Human Presence", MaxBotix. Available: https://www.maxbotix.com/ultrasonic-sensors-detecting-people-156.htm

[9] Tristan Greene, "A beginner's guide to AI: Computer vision and image recognition", TNW. Available: https://thenextweb.com/neural/2018/07/17/a-beginners-guide-to-ai-computer-vision-and-image-recognition/

[10] Alexander Bernstein and Evgeny Burnaev, "Reinforcement Learning in Computer Vision", ResearchGate. Available: https://www.researchgate.net/publication/324558061_Reinforcement_learning_in_computer_vision

[11] Keng Siau, "Ethical and Moral Issues with AI", ResearchGate. Available: https://www.researchgate.net/profile/Keng_Siau/publication/325934375_Ethical_and_Moral_Issues_with_AI/links/5b97316d92851c78c418f7e4/Ethical-and-Moral-Issues-with-AI.pdf
