Sebastian Scherer

Please also see my RI homepage for publications.

Publications

[1] Sebastian Scherer. Low-Altitude Operation of Unmanned Rotorcraft. PhD thesis, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, 2011. [ bib | .pdf ]
Currently deployed unmanned rotorcraft rely on preplanned missions or teleoperation and do not actively incorporate information about obstacles, landing sites, wind, position uncertainty, and other aerial vehicles during online motion planning. Prior work has successfully addressed some tasks such as obstacle avoidance at slow speeds, or landing at known good locations. However, to enable autonomous missions in cluttered environments, the vehicle has to react quickly to previously unknown obstacles, respond to changing environmental conditions, and find unknown landing sites. We consider the problem of enabling autonomous operation at low altitude with contributions to four problems.

First we address the problem of fast obstacle avoidance for a small aerial vehicle and present results from over 1000 runs at speeds up to 10 m/s. Fast response is achieved through a reactive algorithm whose response is learned from observing a pilot. Second, we show an algorithm to update the obstacle cost expansion for path planning quickly and demonstrate it on a micro aerial vehicle and an autonomous helicopter avoiding obstacles.

Next, we examine the mission of finding a place to land near a ground goal. Good landing sites need to be detected, and the final touchdown goal is unknown. To detect the landing sites we present a model-based algorithm that incorporates many helicopter-relevant constraints on the landing site, approach, abort, and ground paths in 3D range data. The landing site evaluation algorithm uses a patch-based coarse evaluation for slope and roughness, and a fine evaluation that fits a 3D model of the helicopter and landing gear to calculate a goodness measure. The data are evaluated in real time to enable the helicopter to decide on a place to land. We show results from urban, vegetated, and desert environments, and demonstrate the first autonomous helicopter that selects its own landing sites.

We present a generalized planning framework that enables reaching a goal point, searching for unknown landing sites, and approaching a landing zone. In the framework, sub-objective functions, constraints, and a state machine define the mission and behavior of a UAV. As the vehicle gathers information by moving through the environment, the objective functions account for this new information. The operator in this framework can directly specify intent as an objective function that defines the mission, rather than giving a sequence of pre-specified goal points. This allows the robot to react to new information and adjust its path accordingly. The objective is used in a combined coarse planning and trajectory optimization algorithm to determine the best path the robot should take. We show simulated results for several different missions, with a particular focus on active landing zone search.

We present several effective approaches to perception and action for low-altitude flight and demonstrate their effectiveness in field experiments on three autonomous aerial vehicles: a 1 m quadrocopter, a 3.6 m helicopter, and a full-size helicopter. These techniques permit rotorcraft to operate where they have their greatest advantage: in unstructured, unknown environments at low altitude.

[2] Sebastian Scherer and Sanjiv Singh. Multiple-Objective Motion Planning for Unmanned Aerial Vehicles. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), September 2011. [ bib | .pdf ]
Here we consider the problem of low-flying rotorcraft that must perform various missions such as navigating to specific goal points while avoiding obstacles, looking for acceptable landing sites, or performing continuous surveillance. Not all such missions can be expressed as safe goal-seeking, partly because in many cases there is no obvious goal. Rather than developing singular solutions to each mission, we seek a generalized formulation that enables us to express a wider range of missions. Here we propose a framework that allows for multiple objectives to be considered simultaneously and discuss corresponding planning algorithms that are capable of running in real time on autonomous air vehicles. The algorithms create a set of initial hypotheses that are then refined by a sub-gradient-based trajectory algorithm that optimizes the multiple objectives, producing dynamically feasible trajectories. We have demonstrated the feasibility of our approach with changing cost functions based on newly discovered information. We report results in simulation of a system that is tasked with navigating safely between obstacles while searching for an acceptable landing site.
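As a rough illustration of how such a multi-objective formulation can be composed, the sketch below sums weighted sub-objective functions (obstacle clearance, landing-site quality at the trajectory endpoint, and smoothness) into a single trajectory cost. The function names, weights, and the 5 m safety margin are illustrative assumptions, not values from the paper, and the sub-gradient optimizer that refines the initial trajectory hypotheses against such a cost is not shown.

```python
import numpy as np

def obstacle_cost(traj, dist_to_obstacle):
    # Penalize waypoints closer than an assumed 5 m safety margin to obstacles.
    margin = 5.0
    d = np.array([dist_to_obstacle(p) for p in traj])
    return float(np.sum(np.maximum(0.0, margin - d) ** 2))

def landing_site_cost(traj, site_quality):
    # Reward trajectories whose endpoint lies on a well-rated landing site
    # (site_quality is assumed to return a score in [0, 1]).
    return 1.0 - site_quality(traj[-1])

def smoothness_cost(traj):
    # Penalize large accelerations (second finite differences of waypoints).
    accel = np.diff(traj, n=2, axis=0)
    return float(np.sum(accel ** 2))

def total_cost(traj, dist_to_obstacle, site_quality, weights=(1.0, 10.0, 0.1)):
    # Weighted sum of the sub-objectives; the weights are placeholders.
    w_obs, w_land, w_smooth = weights
    return (w_obs * obstacle_cost(traj, dist_to_obstacle)
            + w_land * landing_site_cost(traj, site_quality)
            + w_smooth * smoothness_cost(traj))
```

As information is gathered, a framework of this kind can swap or reweight the sub-objectives (for example, dropping the landing-site term once a site is committed to) without changing the planner itself.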

[3] Andrew D Chambers, Supreeth Achar, Stephen T Nuske, Joern Rehder, Bernd Manfred Kitt, Lyle J Chamberlain, Justin Haines, Sebastian Scherer, and Sanjiv Singh. Perception for a River Mapping Robot. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), September 2011. [ bib | .pdf ]
Rivers with heavy vegetation are hard to map from the air. Here we consider the task of mapping their course and the vegetation along the shores with the specific intent of determining river width and canopy height. A complication in such riverine environments is that only intermittent GPS may be available depending on the thickness of the surrounding canopy. We present a multimodal perception system to be used for the active exploration and mapping of a river from a small rotorcraft flying a few meters above the water. We describe three key components that use computer vision, laser scanning, and inertial sensing to follow the river without the use of a prior map, estimate motion of the rotorcraft, ensure collision-free operation, and create a three dimensional representation of the riverine environment. While the ability to fly simplifies the navigation problem, it also introduces an additional set of constraints in terms of size, weight and power. Hence, our solutions are cognizant of the need to perform multi-kilometer missions with a small payload. We present experimental results along a 2 km loop of river using a surrogate system.

[4] Rudolph Molero Fernandez, Sebastian Scherer, Lyle J Chamberlain, and Sanjiv Singh. Navigation and Control for Micro Aerial Vehicles in GPS-Denied Environments. Technical Report CMU-RI-TR-10-08, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, June 2011. [ bib | .pdf ]
Micro-air vehicles have been increasingly employed in diverse research projects in both military and civilian applications because of their high maneuverability and accurate mobility. Many of them have been successfully used in outdoor areas, while some have been operated indoors. However, very few have dedicated special attention to the case of high pitch and roll movements while doing scan-line based odometry. In this paper, we present a general approach consisting of algorithms that enable small aerial robots to fly indoors. We address the problem of large changes in pitch and roll angles by improving the standard scan matching algorithm. We also validate the effectiveness of the improved algorithm with a set of experiments that demonstrate the ability of a small quad-rotor to autonomously operate in cluttered indoor scenarios.
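One way to make scan-based odometry tolerant of large pitch and roll, consistent with the problem described above, is to project each laser scan through the vehicle's attitude into a gravity-aligned frame before matching. The sketch below is a hypothetical pre-processing step of that kind; the report's actual algorithm may differ.

```python
import numpy as np

def project_scan_to_horizontal(ranges, angles, roll, pitch):
    """Project a planar laser scan into a gravity-aligned frame using the
    vehicle's roll and pitch (radians), keeping the horizontal coordinates
    for 2D scan matching. A hypothetical pre-processing step, not the
    report's algorithm."""
    # Scan points in the sensor frame (laser plane, z = 0).
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1)
    # Rotation from the tilted sensor frame to a gravity-aligned frame.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    pts_level = pts @ (Ry @ Rx).T
    return pts_level[:, :2]
```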

[5] Lyle Chamberlain, Sebastian Scherer, and Sanjiv Singh. Self-Aware Helicopters: Full-Scale Automated Landing and Obstacle Avoidance in Unmapped Environments. In AHS Forum 67, Virginia Beach, May 2011. [ bib | .pdf ]
In this paper we present a perception and autonomy package that for the first time allows a full-scale unmanned helicopter (the Boeing Unmanned Little Bird) to automatically fly through unmapped, obstacle-laden terrain, find a landing zone, and perform a safe landing near a casualty, all with no human control or input. The system also demonstrates the ability to avoid obstacles while in low-altitude flight. The perception system consists of a 3D LADAR mapping unit with sufficient range, accuracy, and bandwidth to bring autonomous flight into the realm of full-scale aircraft. Efficient evaluation of this data and fast planning algorithms provide the aircraft with safe flight trajectories in real-time. We show the results of several fully autonomous landing and obstacle avoidance missions.

[6] Supreeth Achar, Barath Sankaran, Steve Nuske, Sebastian Scherer, and Sanjiv Singh. Self-Supervised Segmentation of River Scenes. In Proceedings International Conference on Robotics and Automation (ICRA), May 2011. [ bib | DOI | .pdf ]
Here we consider the problem of automatically segmenting images taken from a boat or low-flying aircraft. Such a capability is important for autonomous river following and mapping. The need for accurate segmentation in a wide variety of riverine environments challenges the state-of-the-art vision-based methods that have been used in more structured environments such as roads and highways. Apart from the lack of structure, the principal difficulty is the large spatial and temporal variations in the appearance of water in the presence of nearby vegetation and with reflections from the sky. We propose a self-supervised method to segment images into `sky', `river' and `shore' (vegetation + structures) regions. Our approach uses assumptions about river scene structure to learn appearance models based on features like color, texture and image location which are used to segment the image. We validated our algorithm by testing on four datasets captured under varying conditions on different rivers. Our self-supervised algorithm had higher accuracy than a supervised alternative, often significantly so, and does not need to be retrained to work under different conditions.
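A minimal sketch of the self-supervised idea, under assumed scene-structure priors (sky occupying the top rows of the image and water the bottom rows): training pixels are harvested from those regions, a simple color classifier is fit per frame, and the whole image is then labeled. The paper's richer color/texture/location features and the `shore' class are omitted here, and the 10% band sizes are arbitrary placeholders.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def self_supervised_segment(image):
    """image: HxWx3 float array. Returns an HxW label map (0 = sky, 1 = river).
    A simplified illustration of self-supervised appearance learning."""
    h, w, _ = image.shape
    # Structure prior (assumed): top 10% of rows is sky, bottom 10% is river.
    sky_pixels = image[: h // 10].reshape(-1, 3)
    river_pixels = image[-(h // 10):].reshape(-1, 3)
    X = np.vstack([sky_pixels, river_pixels])
    y = np.concatenate([np.zeros(len(sky_pixels)), np.ones(len(river_pixels))])
    # Fit a per-frame appearance model and label every pixel with it.
    clf = GaussianNB().fit(X, y)
    return clf.predict(image.reshape(-1, 3)).reshape(h, w)
```

Because the appearance model is re-estimated from each frame's own priors, this style of method adapts to new lighting and water conditions without offline retraining.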

[7] Sebastian Scherer, Lyle Chamberlain, and Sanjiv Singh. Online Assessment of Landing Sites. In AIAA Infotech@Aerospace, Atlanta, April 2010. [ bib | .pdf ]
Assessing a landing zone (LZ) reliably is essential for safe operation of vertical takeoff and landing (VTOL) aerial vehicles that land at unimproved locations. Currently an operator has to rely on visual assessment to make an approach decision; however, visual information from afar is insufficient to judge slope and detect small obstacles. Prior work has modeled LZ quality based on plane fitting, which only partly represents the interaction between vehicle and ground.

Our approach consists of a coarse evaluation based on slope and roughness criteria, a fine evaluation for skid contact, and body clearance of a location. We investigated whether the evaluation is correct using terrain maps collected from a helicopter. This paper defines the problem of evaluation, describes our incremental real-time algorithm, and discusses the effectiveness of our approach.

In results from urban and natural environments, we were able to successfully classify LZs from point cloud maps collected on a helicopter. The presented method enables detailed assessment of LZs without a landing approach, thereby improving safety. Still, the method assumes low-noise point cloud data. We intend to increase robustness to outliers while still detecting small obstacles in future work.
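A minimal sketch of the coarse evaluation step described above, assuming it amounts to least-squares plane fitting per terrain patch: the slope comes from the fitted plane normal and the roughness from the residual spread about the plane. The thresholds are placeholders rather than values from the paper, and the fine evaluation that fits a 3D model of the skids and body is not shown.

```python
import numpy as np

def coarse_patch_evaluation(points, max_slope_deg=5.0, max_roughness=0.05):
    """Evaluate one terrain patch (Nx3 array of points, meters) for slope and
    roughness via least-squares plane fitting. Thresholds are illustrative."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Plane normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:
        normal = -normal  # make the normal point upward
    # Slope: angle between the plane normal and vertical.
    slope_deg = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    # Roughness: spread of point-to-plane distances.
    roughness = np.abs(centered @ normal).std()
    return slope_deg <= max_slope_deg and roughness <= max_roughness
```

Patches that pass such a coarse test would then be handed to the more expensive fine evaluation, keeping the overall assessment incremental and real-time.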

[8] Sebastian Scherer, Dave Ferguson, and Sanjiv Singh. Efficient C-space and cost function updates in 3D for unmanned aerial vehicles. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA '09), pages 2049-2054. IEEE Press, May 2009. [ bib | DOI | .pdf ]
When operating in partially-known environments, autonomous vehicles must constantly update their maps and plans based on new sensor information. Much focus has been placed on developing efficient incremental planning algorithms that are able to efficiently replan when the map and associated cost function changes. However, much less attention has been placed on efficiently updating the cost function used by these planners, which can represent a significant portion of the time spent replanning. In this paper, we present the Limited Incremental Distance Transform algorithm, which can be used to efficiently update the cost function used for planning when changes in the environment are observed. Using this algorithm it is possible to plan paths in a completely incremental way starting from a list of changed obstacle classifications. We present results comparing the algorithm to the Euclidean distance transform and a mask-based incremental distance transform algorithm. Computation time is reduced by an order of magnitude for a UAV application. We also provide example results from an autonomous micro aerial vehicle with on-board sensing and computing.
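The sketch below illustrates the general idea of an incremental, radius-limited distance transform on a 2D grid: distances are propagated outward only from newly observed obstacle cells and only up to a maximum radius. This add-only, 4-connected version is a simplification; the Limited Incremental Distance Transform in the paper also handles removed obstacles and operates on 3D grids.

```python
import heapq
import numpy as np

def limited_distance_update(dist, new_obstacles, max_dist):
    """Propagate grid distances outward from newly observed obstacle cells,
    up to max_dist. dist: 2D float array, free cells initialized to max_dist
    (or infinity). new_obstacles: list of (row, col) tuples. Simplified,
    add-only sketch of an incremental, limited distance transform."""
    pq = []
    for cell in new_obstacles:
        dist[cell] = 0.0
        heapq.heappush(pq, (0.0, cell))
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < dist.shape[0] and 0 <= nc < dist.shape[1]:
                nd = d + 1.0  # one-cell grid step (4-connected approximation)
                if nd < dist[nr, nc] and nd <= max_dist:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist
```

The planner's obstacle cost can then be derived from the resulting distance field (for example, higher cost for cells within the safety margin), so only cells actually affected by new sensor data are touched on each update.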

[9] Sebastian Scherer, Sanjiv Singh, L Chamberlain, and M Elgersma. Flying Fast and Low Among Obstacles: Methodology and Experiments. The International Journal of Robotics Research, 27(5):549-574, May 2008. [ bib | DOI | .pdf ]
Safe autonomous flight is essential for widespread acceptance of aircraft that must fly close to the ground. We have developed a method of collision avoidance that can be used in three dimensions in much the same way as autonomous ground vehicles that navigate over unexplored terrain. Safe navigation is accomplished by a combination of online environmental sensing, path planning and collision avoidance. Here we outline our methodology and report results with an autonomous helicopter that operates at low elevations in uncharted environments, some of which are densely populated with obstacles such as buildings, trees and wires. We have recently completed over 700 successful runs in which the helicopter traveled between coarsely specified waypoints separated by hundreds of meters, at speeds of up to 10 m/s at elevations of 5-11 m above ground level. The helicopter safely avoids large objects such as buildings and trees but also wires as thin as 6 mm. We believe this represents the first time an air vehicle has traveled this fast so close to obstacles. The collision avoidance method learns to avoid obstacles by observing the performance of a human operator.

[10] Christopher Urmson, Joshua Anhalt, J Andrew (Drew) Bagnell, Christopher R Baker, Robert E Bittner, John M Dolan, David Duggins, David Ferguson, Tugrul Galatali, Hartmut Geyer, Michele Gittleman, Sam Harbaugh, Martial Hebert, Thomas Howard, Alonzo Kelly, David Kohanbash, Maxim Likhachev, Nick Miller, Kevin Peterson, Ragunathan Rajkumar, Paul Rybski, Bryan Salesky, Sebastian Scherer, Young-Woo Seo, Reid Simmons, Sanjiv Singh, Jarrod M Snider, Anthony (Tony) Stentz, William L (Red) Whittaker, and Jason Ziglar. Tartan Racing: A Multi-Modal Approach to the DARPA Urban Challenge. Technical Report CMU-RI-TR-, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, April 2007. [ bib | .pdf ]
The Urban Challenge represents a technological leap beyond the previous Grand Challenges. The challenge encompasses three primary behaviors: driving on roads, handling intersections and maneuvering in zones. In implementing urban driving we have decomposed the problem into five components. Mission Planning determines an efficient route through an urban network of roads. A behavioral layer executes the route through the environment, adapting to local traffic and exceptional situations as necessary. A motion planning layer safeguards the robot by considering the feasible trajectories available, and selecting the best option. Perception combines data from lidar, radar and vision systems to estimate the location of other vehicles, static obstacles and the shape of the road. Finally, the robot is a mechatronic system engineered to provide the power, sensing and mobility necessary to navigate an urban course. Rigorous component and system testing evaluates progress using standardized tests. Observations from these experiments shape the design of subsequent development spirals and enable the rapid detection and correction of bugs. The system described in the paper exhibits a majority of the basic navigation and traffic skills required for the Urban Challenge. From these building blocks more advanced capabilities will quickly develop.

[11] Sebastian Scherer, Sanjiv Singh, L Chamberlain, and S Saripalli. Flying Fast and Low Among Obstacles. In Proceedings IEEE International Conference on Robotics and Automation, pages 2023-2029, April 2007. [ bib | DOI | .pdf ]
Safe autonomous flight is essential for widespread acceptance of aircraft that must fly close to the ground. We have developed a method of collision avoidance that can be used in three dimensions in much the same way as autonomous ground vehicles that navigate over unexplored terrain. Safe navigation is accomplished by a combination of online environmental sensing, path planning and collision avoidance. Here we report results with an autonomous helicopter that operates at low elevations in uncharted environments, some of which are densely populated with obstacles such as buildings, trees and wires. We have recently completed over 1000 successful runs in which the helicopter traveled between coarsely specified waypoints separated by hundreds of meters, at speeds up to 10 meters/sec at elevations of 5-10 meters above ground level. The helicopter safely avoids large objects like buildings and trees but also wires as thin as 6 mm. We believe this represents the first time an air vehicle has traveled this fast so close to obstacles. Here we focus on the collision avoidance method that learns to avoid obstacles by observing the performance of a human operator.

[12] B Hamner, S Singh, and S Scherer. Learning obstacle avoidance parameters from operator behavior. Journal of Field Robotics, 23(11/12):1037-1058, 2006. [ bib | DOI | .pdf ]
This paper concerns an outdoor mobile robot that learns to avoid collisions by observing a human driver operate a vehicle equipped with sensors that continuously produce a map of the local environment. We have implemented steering control that models human behavior in trying to avoid obstacles while trying to follow a desired path. Here we present the formulation for this control system and its independent parameters and then show how these parameters can be automatically estimated by observing a human driver. We also present results from operation on an autonomous robot as well as in simulation, and compare the results from our method to another commonly used learning method. We find that the proposed method generalizes well and is capable of learning from a small number of samples.
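A sketch of how such parameters might be estimated from logged driver behavior: given recorded obstacle observations, path error, and the human steering command at each instant, choose the controller parameters that minimize the discrepancy between the controller's output and the human's. The steering rule, parameter names, and optimizer below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def steering_model(params, obstacle_bearings, obstacle_dists, path_error):
    # A simple repulsion-plus-path-following steering rule (illustrative only).
    k_obs, k_path, d0 = params
    repulsion = np.sum(-np.sign(obstacle_bearings)
                       * np.exp(-np.asarray(obstacle_dists) / d0))
    return k_obs * repulsion - k_path * path_error

def fit_parameters(logs, x0=(1.0, 1.0, 5.0)):
    """logs: list of (obstacle_bearings, obstacle_dists, path_error, human_cmd)
    tuples recorded while a human drove. Returns parameters whose steering
    commands best match the human driver's."""
    def loss(params):
        err = [steering_model(params, b, d, e) - cmd
               for b, d, e, cmd in logs]
        return float(np.mean(np.square(err)))
    return minimize(loss, x0, method="Nelder-Mead").x
```

Fitting the handful of independent parameters this way, rather than learning a full policy, is what lets the approach work from a small number of driving samples.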

[13] Bradley Hamner, Sebastian Scherer, and Sanjiv Singh. Learning to Drive Among Obstacles. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2663-2669, October 2006. [ bib | DOI | .pdf ]
This paper reports on an outdoor mobile robot that learns to avoid collisions by observing a human driver operate a vehicle equipped with sensors that continuously produce a map of the local environment. We have implemented steering control that models human behavior in trying to avoid obstacles while trying to follow a desired path. Here we present the formulation for this control system and its independent parameters, and then show how these parameters can be automatically estimated by observation of a human driver. We present results from experiments with a vehicle (both real and simulated) that avoids obstacles while following a prescribed path at speeds up to 4 m/sec. We compare the proposed method with another method based on Principal Component Analysis, a commonly used learning technique. We find that the proposed method generalizes well and is capable of learning from a small number of examples.

[14] S Scherer, F Lerda, and E Clarke. Model checking of robotic control systems. In Proc. of the 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space (iSAIRAS), Munich, Germany, September 2005. [ bib | .pdf ]
Reliable software is important for robotic applications. We propose a new method for the verification of control software based on Java PathFinder, a discrete model checker developed at NASA Ames Research Center. Our extension of Java PathFinder supports modeling of a real-time scheduler and a physical system, defined in terms of differential equations. This approach not only is able to detect programming errors, like null-pointer dereferences, but also enables the verification of control software whose correctness depends on the physical, real-time environment. We applied this method to the control software of a line-following robot. The verified source code, written in Java, can be executed without any modifications on the microcontroller of the actual robot. Performance evaluation and bug finding are demonstrated on this example.