{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Module 3: Estimation & Control \n", "\n", "\n", "[Intro to Estimation, Control, & Planning](https://docs.google.com/presentation/d/1xAL3BKDZoWbzkqT2aKxyIYz828DAJDDnAR21DR6nc8Q/edit?usp=sharing)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Unit 1: Kinematics & Dynamics\n", "\n", "__Goal:__ Students will understand the various reference frames involved with UAV navigation and how to perform transformation/rotations between frames. We will use this knowledge to practice our first autonomous flights through use of open-loop control. We will explore some of the physics behind quadrotor flight, i.e. the quadrotor's state and equations of motion. \n", "\n", "__Lectures:__\n", "\n", " 1. Linear Transformations (Khan Academy): [Video 1](https://www.khanacademy.org/math/linear-algebra/matrix-transformations/linear-transformations/v/linear-transformations), [Video 2](https://www.khanacademy.org/math/linear-algebra/matrix-transformations/linear-transformations/v/linear-transformations-as-matrix-vector-products), [Video 3](https://www.khanacademy.org/math/linear-algebra/matrix-transformations/lin-trans-examples/v/linear-transformation-examples-rotations-in-r2), and [Video 4](https://www.khanacademy.org/math/linear-algebra/matrix-transformations/lin-trans-examples/v/rotation-in-r3-around-the-x-axis)\n", " 2. [Euler Angles, Matrices, & Quaternions](https://docs.google.com/presentation/d/1GVwlcPnoXEd8YIijFUpA8aaEsqARjQlkwz9vTtguAIw/edit?usp=sharing)\n", " 3. [Equations of Motion](https://docs.google.com/presentation/d/1mnRvfEPzBNhGA3VAog5cy7RKLxdanBdgMxHPgDytt0k/edit?usp=sharing)\n", "\n", "\n", "__Practicals:__\n", "\n", " 1. [Coordinate Transforms](coordinate_transforms.html)\n", " 2. [OFFBOARD Mode & Open Loop Control](open_loop_translation_control.html)\n", " \n", "__Advanced Topics:__\n", "\n", " 1. [Using ROS tf2 Library](http://wiki.ros.org/tf2)\n", " 3. [Differential Flatness](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5980409)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## _Challenge 2:_ Aerial Choreography \n", "\n", "__Goal:__ Teams will compete to generate the best drone choreography; i.e. executing to most interesting and precise aerial maneuvers and trajectories using open loop control.\n", "\n", "### Basic Challenge: Square Trajectory\n", "\n", "Can you program your drone to trace out a perfect square in the air? Building on the code from the [open loop control practical](open_loop_translation_control.html), add functionality to trace a complete square. The instructors will judge the winning flight by taking a long-exposure photo and score how well the LEDs onboard trace a square\n", "\n", "### Advanced Challenge: Freeform Sky Writing\n", "\n", "![BW Skywriting](images/sky_write3.jpg)\n", "\n", "![MIT Mascot Skywriting](images/skywriting_mit_beaver.jpg)\n", "\n", "Now let's get creative! Building on the basic challenge, can you make more elaborate aerial trajectories? Perhaps you could try to trace out \"BW\" in the sky for \"Beaver Works\"! It's up to you to decide and encode the most creative path for your drone. Again a long-exposure photo of the flight will be used for scoring with the most interesting aerial trace winning. Note: any flight that collides with the safety netting is disqualified, so make sure you know what your drone is going to do before it does it!\n", "\n", "__Assignments:__\n", "\n", " 1. 
Challenge Report" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Unit 2: State Estimation\n", "\n", "__Goal:__ From this understanding of quadrotor dynamics, we will then gain an appreciation for the importance and challenge of estimating the state of the quadrotor system.\n", "\n", "In previous units we implemented black box localization algorithms on our drone (i.e. optical flow, ARTag identfication) so the drone has some estimate of where it is in the world, i.e. position. What about the other state variables of velocity, orientation, and angular rates? How is information from different sensors fused in such a way to provide an internally consistent, reliable estimate of the complete state of the drone? This is the role of the _state estimator_. In this unit we will gain an understanding of the importance and challenges of estimating the state of a robot as well as the techniques for doing so.\n", "\n", "__Lectures:__\n", "\n", " 2. [State Estimation Concepts](https://docs.google.com/presentation/d/1XvmpOj22YY_8G9WJtQzyF8tyoFqdUGLfzBHfnAyXjDM/edit?usp=sharing)\n", " \n", "__Practicals:__\n", "\n", " 1. [Accelerometer-based Position Estimation](https://github.com/BWSI-UAV/laboratory/blob/master/flight_log_analysis/accelerometer_filtering.ipynb)\n", " \n", "__Advanced Topics:__\n", "\n", " 1. [Kalman Filters](https://github.com/AtsushiSakai/PythonRobotics#extended-kalman-filter-localization)\n", " 2. [Kalman Filter (Linear System)](https://github.com/BWSI-UAV/laboratory/blob/master/control/linear_estimation_and_control.ipynb)\n", " 3. [Extended Kalman Filter (Nonlinear System)](https://github.com/BWSI-UAV/laboratory/blob/master/control/nonlinear_estimation_and_control.ipynb)\n", " 4. [Particle Filters](https://github.com/AtsushiSakai/PythonRobotics#particle-filter-localization)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Unit 3: Feedback Control\n", "\n", "__Goal:__ In a general sense the goal of this module is to understand how and why the output of a system can be looped back to the input in of the system in order to track a desired reference. For our purposes, the system is the quadrotor, the output is most often the state (position, velocity, orientation) as estimated by the state estimator, and the reference is some target state the quadrotor is trying to achieve (as generated by the planner). \n", "\n", "In a specific sense, we will see how the computer vision algorithms we have developed can be implemented on the quadrotor in order to achieve autonomous flight\n", "\n", "__Lectures:__\n", "\n", " 1. [Feedback Control](https://docs.google.com/presentation/d/1zNs8I7kZRAkrgk6v_GUz2FGrbknvyp5dRYudM537OWs/edit?usp=sharing)\n", " 3. [PID Control](http://bit.ly/2KGbPuy) (MATLAB Tech Talk Video)\n", " \n", "__Practicals:__\n", "\n", " 1. [Simple Feedback Control](https://github.com/BWSI-UAV/laboratory/blob/master/control/simple_feedback_control.ipynb)\n", " 2. [Advanced Feedback Control](https://github.com/BWSI-UAV/laboratory/blob/master/control/advanced_feedback_control.ipynb)\n", " 2. [Yaw-Only Color Tracker](https://bwsi-uav.github.io/website/yaw_color_tracker.html)\n", " 3. [Line Following](https://bwsi-uav.github.io/website/line_following.html)\n", " \n", "__Advanced Topics:__\n", "\n", " 1. [Nonlinear Estimation & Control](https://github.com/BWSI-UAV/laboratory/blob/master/control/nonlinear_estimation_and_control.ipynb)\n", " 2. 
 { "cell_type": "markdown", "metadata": {}, "source": [
  "## Unit 4: Motion Planning\n",
  "\n",
  "__Goal:__ Feedback control requires the input of a reference or target that the control loop is attempting to track, but how and where is this reference/target generated? This is the role of the _planner_. The goal of this unit is to understand the importance, challenges, and techniques for generating higher-level plans for a robotic system.\n",
  "\n",
  "__Lectures:__\n",
  "\n",
  " 1. [Planning Concepts](https://docs.google.com/presentation/d/1w16yrWtcI6EGMJZq0Hw-1aYM_Z9datrgxAxqIFd29Qs/edit?usp=sharing)\n",
  "\n",
  "__Practicals:__\n",
  "\n",
  " 1. [Obstacle Avoidance Maneuvers](https://bwsi-uav.github.io/website/obstacle_avoidance.html)\n",
  " 2. [Yaw Planning]\n",
  "\n",
  "__Advanced Topics:__\n",
  "\n",
  " 1. [Probabilistic Roadmap (PRM)](https://github.com/AtsushiSakai/PythonRobotics#probabilistic-road-map-prm-planning)\n",
  " 2. [Rapidly-Exploring Random Trees (RRT)](https://github.com/AtsushiSakai/PythonRobotics#probabilistic-road-map-prm-planning)\n",
  " 3. [Kinodynamic Planning: Reeds-Shepp](https://github.com/AtsushiSakai/PythonRobotics#rrt-with-reeds-shepp-path)\n",
  " 4. [Real-Time Kinodynamic Planning](http://asl.stanford.edu/wp-content/papercite-data/pdf/Allen.Pavone.RAS18.pdf)"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }
 ],
 "metadata": {
  "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" },
  "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}