
Advanced Robotics Wikibook
by Kamil Stec and Bentley Whitfield

The Future of Robotics
Robotics first focused on building an actual intelligent piece of machinery. Early robots were just legs, or arms and a head, with one simple task: move their limbs or speak. These first-generation robots required months of planning, programming, and building just to function properly. As the years have gone by and our understanding and programming abilities have improved, robots have become more advanced and able to perform more difficult tasks. A future where robots are as common as cars, and cheaper, is on the way, according to Prof. Hiroshi Ishiguro, named one of the top 100 geniuses alive in the world today, who has devoted himself to creating robots so humanlike it is hard to tell the difference. "In the future, our lives will be full of robots," he says. Ishiguro compares the evolution of robots to the evolution of cars. "Once we have developed practical robots, we can spend more and more time building autonomy," he said. Autonomous androids that look just like you could conduct your business, attend conferences, and go shopping on your behalf while you sit in the comfort of your home. A camera would monitor your facial expressions, and your android's face would mirror them. Ishiguro says there is even a psychological phenomenon whereby, if someone touches your android, you feel it. "It's a very tactile sensation," he says. In one experiment, he put an android instead of a mannequin in a shop window in Japan. "The idea of a mannequin is to show the future. Do you want to be a mannequin in the future? But no human will stand in a shop window. So we put an android in the shop window," he says. Aside from gaining hundreds of Twitter followers, Geminoid-F (as she is known) managed to clear the shop racks of every outfit she wore. "Everyone wanted the clothes of the android," says Ishiguro.
The big driver in Ishiguro's research is an interrogation of what it means to be human, and whether it is important for the robots we engage with to look like humans. "Our brain is not for using computers; our brain is for recognizing humans. Young children can't use a computer, but they can interact with a robot," says Ishiguro. The robotic future we thought we would see is completely different from the reality of how robots are evolving. Some advances will produce robots that do our bidding, but the mainstream of progress will be in robots as small as insects that can perform surgery, and in collaborative swarms built to handle larger tasks together, such as asteroid mining for research. The research potential of this rapid advancement is enormous. Instead of risking lives and billions of dollars, researchers could send a swarm of tiny robots to record data and perform simple tasks. This would not only save money but also let researchers observe from a very safe distance. Some advances make it seem that robots are catching up to human intelligence, but that isn't entirely true: humans are the ones programming and designing them, so as the robots get brighter, so do we with every advancement. It is amazing that years ago people doubted, "Why would anyone want to buy a personal computer?", and now computer technology has shrunk to fit mobile devices; the same seems to be happening with robotics. Many believe that one day we will all have our own swarm of robots to do tasks, coexist, and interact with us daily. Even simple machinery hints at this: a crane or a forklift embodies a basic robotic concept, once operated only through direct controls but now often driven by remote.
Nowadays everyone has at least one computer or mobile device; some have only mobile devices, which work as well as a computer but are much smaller and easier to carry around. Even though the future of robotics will look much different from what we first perceived, we must be ready and open our minds to the endless possibilities in robotics. We shouldn't fear a future filled with robots; we should welcome it with a positive outlook, knowing the future is bright and full of robots.

Programming Concepts & Programming Languages
Programs are long lists of code put together to perform specific tasks. As languages have changed, the programs they create have advanced and become more efficient. Old programming languages are not used as often anymore, but they are still a great base to start on. Today's languages have advanced and even changed within themselves, with new styles of typing and new, simpler commands. Coding has become a major field one can pursue: every aspect of computing, robotics, and even IT requires knowing and understanding basic programming languages in order to create new programs, automate new tasks, and program advanced robots. A program is a specific set of steps created to complete a task, and those steps follow a specific order when written to ensure the task is done correctly. Programming and programs are part of our daily lives. For instance, when we make a sandwich we know we need two slices of bread, butter, meat, and anything else we want on it, and we make it in a specific order: take two slices, butter both slices, place the meat and cheese, put the slices together, and eat. This is exactly how programs work, in a simple sense anyway. All programs use the same three basic structures to build a larger, well-functioning program. First, sequencing: we put commands in a specific order to eliminate possible errors, and we use conventional sequencing to avoid confusing anyone who reviews or looks at our program. Second, conditional statements, based on a true/false or yes/no answer. The game One Potato, Two Potato is an example: if the counter lands on your fist, then you remove that fist from the game; if both your fists are removed, then you are knocked out of the circle. A programming example of this: if a word exists in a list, then print it out; else tell the user that it doesn't exist.
Most programming languages use the conditional structure If...Then...Else. The third structure, looping, makes the computer perform a set of tasks either a set number of times or until the user makes it stop. Here are some ways looping might be done: do the following 20 times; do the following once for each word in the list; repeat the following until the user presses the option key; repeat the following as long as the option key is held down. Programs can range from setting an alarm on your phone or watch to very sophisticated instructional or business applications. Top-down design is a way of approaching a complex programming task by first mapping out the entire program and identifying the major components it will require. The programmer then uses flowcharts and general statements to represent the logical flow of the program. Once the major components are identified, the programmer focuses on each component in greater detail, finally culminating in writing the actual program code for each component. Pseudocode, from the prefix pseudo- ('false') and the root word code ('programming instructions'), is a way of representing the detailed steps your program must perform without having to worry about the specific vocabulary or syntax of a specific programming language. You use your knowledge of the basic control structures, common sense, and logic to write plain-English statements explaining in detail how you will accomplish each main step.
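The conditional and looping structures described above can be sketched in a real language. Here is a minimal illustration in Python (chosen only for readability; any language with If...Then...Else and loops would do the same job):

```python
words = ["robot", "sensor", "motor"]

def find_word(word, word_list):
    # Conditional structure: If the word exists in the list,
    # Then report it, Else tell the user it doesn't exist.
    if word in word_list:
        return word + " exists in the list"
    else:
        return word + " doesn't exist"

# Looping structure: do the following once for each word in the list.
for w in words:
    print(find_word(w, words))

print(find_word("wheel", words))  # prints: wheel doesn't exist
```

The same logic could first be written as pseudocode ("for each word: if it is in the list, print it, otherwise report it is missing") and then translated line by line into a real language, which is exactly the top-down workflow described above.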

Robot Control
The way programmers control robots has changed as robots have advanced. Early robots were controlled by wires and cables, although even then a program was embedded to tell the robot what to do. A robot can be programmed using most types of programming languages; each language has its own way of writing code, but they perform the same task in the end, and each has advantages and disadvantages depending on the commands it offers. Once the robot has been programmed, it can be controlled in a few different ways. In the first stages of robotics, robots were controlled with wires and cables. These robots were large, very simple builds: a set of legs, a body, or a face, limited to simple movements such as going forward and back, or facial expressions like a smile and frown. As robots got smaller, control moved from wires to controllers and joysticks that let a user direct the robot's movements. A simple example is a remote-control car: just a few motors and gears, using radio waves to send signals from the controller to the car. As robots have become even more advanced, we have added new ways of controlling them. Nowadays we can use Bluetooth devices, such as one NXT brick talking to another, or even our own handheld devices to control a robot's movements. Many robots today either have an external controller or carry the entire program installed and ready to use. How a robot is controlled depends on the programmer and on what the robot is being used for. If it is a short-range robot that needs little to no control, a fully initialized program sequence could be installed.
If the robot needs a more accurate degree of control, then some type of controller, such as a remote or Bluetooth device, would be best.
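Both control styles, a pre-installed program sequence and live commands from a remote or Bluetooth controller, can be viewed as streams of commands fed to the same dispatcher. The sketch below is a toy illustration in Python; the command names and the one-dimensional robot state are invented for this example, not part of any real robot API:

```python
def make_robot():
    # Toy robot state: position along a line, and whether it is moving.
    return {"position": 0, "moving": False}

def handle_command(robot, command):
    # Dispatch one command, whether it came from a stored program
    # or from a live remote/Bluetooth controller.
    if command == "forward":
        robot["position"] += 1
        robot["moving"] = True
    elif command == "back":
        robot["position"] -= 1
        robot["moving"] = True
    elif command == "stop":
        robot["moving"] = False
    else:
        raise ValueError("unknown command: " + command)

# A fully initialized program sequence, installed ahead of time:
robot = make_robot()
for cmd in ["forward", "forward", "back", "stop"]:
    handle_command(robot, cmd)
print(robot)  # net position 1, no longer moving
```

A remote-controlled robot would run the same `handle_command` function, only with commands arriving one at a time over a radio or Bluetooth link instead of from a stored list.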

Robot Hardware
The type of hardware a robot has depends on the task the creator or programmer chooses. Each design is different and unique in its own way, because the visions of the creator and programmer must be in sync if the build is to work properly with the program. For example, a program made for a vehicle that drives on all wheels and steers through rough terrain would not work properly in a robot built only for smooth terrain. The complexity of the build varies, because the build material can be anything from LEGO NXT pieces, to welded metal, to full life-size steel hardware. When robots were first designed they were huge steel assemblies with wires hanging from them. They took weeks, months, even years to design and build, and the funding needed for these builds was incredibly expensive. As designs became more advanced, robots became smaller and smaller, yet more efficient and sophisticated, and funding for this type of research has increased, enabling more study possibilities. Each build has to have a plan or base idea of what the robot will be performing. A simple task requires only a simple robot: a robot car, for example, can be easily constructed with LEGO NXT pieces and some simple programming. However, a robot doing surveillance will require a strong, versatile structure such as a steel frame, a wireless controller of some sort, a well-equipped camera, and a well-constructed, functioning program to make it work properly. A robot's hardware is complemented by what it has inside: it is not just about the type of material, but about all the other major components needed to make the robot function properly. The robot needs some type of brain, a computer, to hold and access the program, so a computer that is both powerful and small enough to fit into the design works best.
The memory of that computer needs to be sufficient to hold the whole program; sometimes there is more than one area of memory storage to accommodate the large program needed to make the robot work. A great example is the robot car, a standard car that drives itself. Each year multiple programming corporations compete and compare how each one got a vehicle to drive on its own. These cars need tons of sensors, memory capacity, and power to run. The fact that a self-driving car currently requires so much hardware and software doesn't mean we won't be seeing them on the streets in the near future.

Mathematics of Robotics
The mathematics of robotics is a much more complex way of making a robot perform certain functions or follow a specific path. If a programmer knows the exact measurements of a course, for example, they can program the exact distance and direction of each movement the robot will need to make. These calculations, however, are a little complex: there are a lot of variables to think about. According to the IEEE RAS Technical Committee, "As modern robots address real-world problems in dynamic, unstructured, and open environments, novel challenges arise in the areas of robot control algorithms and motion planning." Basically, since robots can now do more than a few simple tasks and research is aiming at real-world challenges, simple programming algorithms won't cut it anymore; these newer robots need more complex algorithms to work properly in an unknown environment. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Laboratory for Information and Decision Systems (LIDS) came up with a system that enables robots to move more efficiently, saving time and energy. Because it is more intuitive and efficient, the new algorithm also produces more predictable movement, which is crucial for robots that interact with humans. The researchers combined two innovative algorithms developed at MIT to build a new motion-planning system that calculates much more efficient trajectories through free space. Although such tasks seem intuitive and simple to us, most existing path-finding algorithms rely on collision avoidance rather than finding more efficient paths between the robot's initial state and its goal.
Instead of checking every possible path to find the most efficient, collision-free course, motion-planning algorithms tend to randomly pick points in the environment and determine whether each is reachable from the closest point that has already been evaluated. This enables lower resource consumption and faster reaction to changes in the surroundings. The amount of mathematics or calculation needed in a robot program can vary. Most such programs use collision avoidance as a base, but as the researchers at MIT discovered, a complex, combined algorithm can make the robot move more efficiently and cut the motion time in half. A simple obstacle-avoidance program (in RobotC, for an NXT robot) looks like this:

task main()
{
  // drive both motors forward until the sonar sensor
  // reads an obstacle 20 cm away or closer
  while (SensorValue(sonarSensor) > 20)
  {
    motor[motorA] = 75;
    motor[motorB] = 75;
  }
  motor[motorA] = 0;  // stop at the obstacle
  motor[motorB] = 0;
}

A more advanced programmer would likely add many calculations to ensure the robot runs more smoothly.
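The sampling strategy described above, pick random points and try to connect each one to the nearest point already explored, is the idea behind rapidly exploring random trees (RRT). The sketch below is a generic, minimal RRT in Python, not the MIT system from the text; the 10x10 workspace, step size, and circular obstacles are all assumptions made for illustration:

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, iterations=5000, seed=1):
    # Sampling-based planner: sample a random point, find the nearest
    # node already in the tree, and take one small step toward the
    # sample, rejecting steps that land inside an obstacle.
    # Obstacles are circles given as (x, y, radius).
    random.seed(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(iterations):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if any(math.dist(new, (ox, oy)) <= r for ox, oy, r in obstacles):
            continue  # this step would collide; discard it
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, goal) < step:
            path = [new]          # close enough: walk back to the start
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None  # no path found within the iteration budget

path = rrt((1.0, 1.0), (9.0, 9.0), [(5.0, 5.0, 1.5)])
```

Because the planner samples instead of checking every possible path, it typically explores only a small fraction of the space before reaching the goal, which is the resource saving the text describes.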

Task Planning and Navigation
A basic problem in robotics is to resolve specified tasks and commands and plan the resulting motions and sub-tasks. The planning system needs to transform a task-oriented problem into a plan that describes how the given problem can be solved by the robot. For this transformation, a detailed knowledge base and world model have to be available. These models give the robot a description of its environment, and thus enable it to construct the operations needed to fulfill the task. The plan generated in this way contains a sequence of action elements (for example movement, picking up items, manipulating items) with assigned resources. The motion control manager determines the start and destination of a path and plans the course and the necessary actions. The resulting motions of a robot are called trajectories or paths and consist of a sequence of desired positions, velocities, and accelerations at given points. The sequence of plan elements is called a task execution sequence. During this planning stage, all constraints and restrictions, such as closed or impassable areas, are considered, as well as target times, resources, supplies, and the processing of parallel or sequential tasks.
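A task execution sequence with assigned resources can be sketched as plain data. In this toy Python version (the action and resource names are hypothetical, chosen only for illustration), each plan element pairs an action with the resource it needs, and the executor runs an element only when its resource is available:

```python
def execute_plan(plan, available_resources):
    # Walk the task execution sequence in order; each element is
    # (action, required_resource). Actions whose resource is missing
    # are skipped, mirroring resource constraints at planning time.
    log = []
    for action, resource in plan:
        if resource in available_resources:
            log.append("do " + action)
        else:
            log.append("skip " + action + " (missing " + resource + ")")
    return log

plan = [("move_to_shelf", "wheels"),
        ("pick_up_item", "gripper"),
        ("deliver_item", "wheels")]
print(execute_plan(plan, {"wheels", "gripper"}))
```

A real planner would of course also order the elements itself and check timing constraints; the point here is only the shape of the data, a sequence of action elements with resources attached.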

Navigation System
The navigation of a mobile robot comprises localization, motion control, motion planning, and collision avoidance. Its task also includes the online, real-time re-planning of trajectories when obstacles block the pre-planned path or another unexpected event occurs. A higher-level process, called a task planner, specifies the destination and any constraints on the course, such as time. Many problems have to be solved before robots can match the sophisticated navigation abilities of people; most mobile robot algorithms abort when they encounter situations that make navigation difficult. Stated simply, the navigation problem is to find a path from start to goal and traverse it without collision. It breaks down into three related subtasks: mapping and modeling of the environment; path planning and selection; and path traversal with collision avoidance.
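The three subtasks can be exercised on a toy example. Below, the map is a hand-made occupancy grid (so the mapping step is assumed already done), and breadth-first search stands in for path planning, one of the simplest planners that finds a collision-free route; the grid values and start/goal cells are invented for this sketch:

```python
from collections import deque

def find_path(grid, start, goal):
    # Breadth-first search over an occupancy grid (0 = free, 1 = blocked).
    # Returns the shortest list of (row, col) cells from start to goal,
    # moving in the four compass directions, or None if unreachable.
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = [cell]          # reconstruct by walking parents back
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

world = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]   # the middle row forces a detour around the wall
route = find_path(world, (0, 0), (2, 0))
```

The third subtask, path traversal with collision avoidance, would then follow the returned cells while re-planning (calling `find_path` again on an updated grid) whenever a new obstacle appears, which is the online re-planning described above.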

Knowledge Based Control
In a typical vision application, a number of stages of processing are involved in going from the raw input data to the final result, and at each stage a number of alternative algorithms can be employed. Each of these algorithms, in turn, may have one or more tunable parameters, which may be continuously variable or may take discrete sets of values. Usually, due to uncertainty in the data and in the problem model, it is not possible to predict beforehand whether a given algorithm sequence will produce the desired result for a certain parameter setting. It may be necessary to start with a rough guess of the parameter values, execute the algorithm sequence, examine the results and, if necessary, modify the parameter values or the selection of algorithms, repeating the procedure until results of the desired quality are obtained. In this section, we examine the types of knowledge used by a vision specialist (and therefore required for knowledge-based control), and the implications for the design of self-tuning vision systems, which should satisfy the conflicting requirements of flexibility and convenience.
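The execute-examine-adjust cycle described above can be sketched as a generic parameter-tuning loop. In this Python sketch, the processing stage, quality function, and candidate values are all invented for illustration; a real system would plug in actual vision algorithms and a knowledge-based way of proposing the next setting:

```python
def tune_parameters(run_stage, quality_of, candidate_params, threshold):
    # The execute-examine-adjust cycle: try each candidate parameter
    # setting, run the processing stage, score the result, and stop as
    # soon as the desired quality is reached. Returns the best
    # (parameters, result) pair seen so far.
    best = None
    for params in candidate_params:
        result = run_stage(params)
        score = quality_of(result)
        if best is None or score > best[2]:
            best = (params, result, score)
        if score >= threshold:
            break  # desired quality obtained; stop tuning
    return best[0], best[1]

# Hypothetical stage: thresholding a row of pixel intensities; quality
# is how close the foreground fraction comes to an expected 50%.
image = [10, 20, 30, 40, 50, 60, 70, 80]
run = lambda t: [1 if p > t else 0 for p in image]
quality = lambda mask: 1 - abs(sum(mask) / len(mask) - 0.5)
params, mask = tune_parameters(run, quality, [15, 35, 45, 75], 0.99)
```

Here the loop settles on the threshold whose foreground fraction matches the expectation, which is exactly the "modify the parameter values and repeat" behavior of the text, only automated.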

Robot Vision
The field of robot vision guidance is developing rapidly. The benefits of sophisticated vision technology include savings, improved quality, reliability, safety, and productivity. Robot vision is used for part identification and navigation. Vision applications generally deal with finding a part and orienting it for robotic handling or inspection before an application is performed. Sometimes vision-guided robots can replace multiple mechanical tools with a single robot station. The robotics research community has developed a very large body of algorithms, which can be quite daunting for a newcomer to the field; software libraries provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. One common, more conventional approach to robotic vision is called object segmentation. To learn to find a coffee cup, a robot first examines a picture of the cup. It extracts, or segments, the cup's image out of the picture. Next, it erases anything that occludes the cup, say, a sugar bowl, and cleans up the image, maybe creating a black silhouette of the cup on a white background. The robot then estimates what the cup looks like at different scales and rotations. Researchers are also thinking about incorporating data from other senses, such as touch, into the final map, although they are not yet sure how this can be done. Any robot that interacts with its environment will benefit from having tactile senses, says Ernst Niebur, a scientist at Johns Hopkins University. Niebur feels confident that he can build a tactile map that would then feed into the final composite map. "We know the brain is capable of doing it," he says.
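The silhouette step in the segmentation pipeline above can be illustrated with a simple intensity threshold. This is a deliberately tiny Python sketch, not a real vision library; the sample image and threshold value are made up for the example:

```python
def silhouette(image, threshold):
    # Pixels darker than the threshold become the object silhouette (1);
    # everything brighter becomes the clean background (0), like the
    # black cup on a white background described in the text.
    return [[1 if pixel < threshold else 0 for pixel in row]
            for row in image]

# A tiny grayscale "photo": the low values form a dark cup-like region.
photo = [[200, 200, 200, 200],
         [200,  40,  50, 200],
         [200,  45,  55, 200],
         [200, 200, 200, 200]]
mask = silhouette(photo, 100)
```

A real system would precede this with the extraction and occluder-removal steps and follow it with the scale and rotation estimates; thresholding is only the simplest possible stand-in for the "clean silhouette" stage.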

Robots and Artificial Intelligence
Artificial intelligence is a branch of computer science that allows machines to perform human tasks using human-like intelligence. ASIMO is considered to be the most humanoid robot yet created. It was invented by Honda and has the ability to perform small human tasks with great skill. This robot was originally invented to assist those with limited mobility.

Honda began construction and programming of ASIMO in 1980, and the robot has since begun making public appearances around the world. It has come a long way: when first invented, its movement was "choppy," and smoother motions have been added over the years. Since those earlier years, ASIMO has also become wireless, giving it the ability to move freely. Current engineers on the project say the biggest challenge with such a robot was having it walk freely among people: making its movement and sensing fast enough to look natural and, most importantly, not interfere with humans. ASIMO can walk at 1.7 mph and run at 3.7 mph; its speed varies slightly with how well it can keep its footing on a flat surface.