User talk:SPcRobotics

The Future of Robots
Humans today are inventing robots with seemingly no limitations, no longer bound by old constraints on where robots can operate or what they can look like. After thousands of years of effort, the quest to build machines that emulate our own appearance, movement and intelligence is leading us to a point of no return. Once that point comes, once the accelerating pace of technological change allows us to build machines that not only equal but surpass human intelligence, we will see cyborgs and other combinations beyond what we can even imagine.

The word "robot" originated almost a century ago. Czech dramatist Karel Capek first used the term in his 1921 play R.U.R. (for "Rossum's Universal Robots"), creating it from the Czech word "robota," meaning obligatory work. In the play, he describes the invention of intelligent biomechanical machines intended as servants for their human creators. While lacking charm and goodwill, his robots brought together all the elements of machine intelligence: vision, touch sensitivity, pattern recognition, decision making, world knowledge, fine motor coordination and even a measure of common sense.

The first generation of modern robots was, however, a far cry from these anthropomorphic visions, and most robot builders have made no attempt to mimic humans. The Unimate, a popular assembly-line robot from the 1960s, was capable only of moving its one arm in several directions and opening and closing its gripper. Today there are more than two million Roomba robots scurrying around performing a task (vacuuming) that used to be done by humans, but they look more like fast turtles than maids. Most robots will continue to be utilitarian devices designed to carry out specific tasks. But when we think of the word "robot," Capek's century-old concept of machines made in our own image still dominates our imagination and inspires our goals. Robotic engineers are designing the next generation of robots to look, feel and act more human, to make it easier for us to warm up to a cold machine.

In our generation, robots are looking more realistic: hair and skin with embedded sensors will allow robots to react naturally to their environment. For example, a robot that senses your touch on its shoulder and turns to greet you. Subtle actions by robots that typically go unnoticed between people help bring them to life and can also relay nonverbal communication: artificial eyes that move and blink, slight chest movements that simulate breathing, and man-made muscles that change facial expressions. These are all must-have attributes for the socially acceptable robots of the future.

Programming Concepts
Computer programs are collections of instructions that tell a computer how to interact with the user, interact with the computer hardware and process data. The first programmable computers required the programmers to write explicit instructions to directly manipulate the hardware of the computer. This "machine language" was very tedious to write by hand, since even simple tasks such as printing some output on the screen required 10 or 20 machine language commands. Machine language is often referred to as a "low-level language," since the code directly manipulates the hardware of the computer. By contrast, higher-level languages such as C, C++, Pascal, COBOL, Fortran, Ada and Java are called "compiled languages." In a compiled language, the programmer writes more general instructions and a compiler (a special piece of software) automatically translates these high-level instructions into machine language. The machine language is then executed by the computer. A large portion of software in use today is programmed in this fashion.

We can contrast compiled programming languages with interpreted programming languages. In an interpreted programming language, the statements that the programmer writes are interpreted as the program is running: they are translated into machine language on the fly and then executed immediately. Some popular interpreted languages include Basic, Visual Basic, Perl and shell scripting languages such as those found in the UNIX environment. We can make another comparison between two different models of programming. In structured programming, blocks of programming statements (code) are executed one after another, and control statements change which blocks of code are executed next. In object-oriented programming, data are contained in objects and are accessed using special methods (blocks of code) specific to the type of object. There is no single "flow" of the program, as objects can freely interact with one another by passing messages.
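
To make the contrast concrete, here is a minimal sketch in Python (itself an interpreted language) that expresses the same behaviour in a structured style and in an object-oriented style; the names used (move_forward, Robot) are purely illustrative.

 # Structured style: blocks of statements run one after another,
 # and control statements decide which block executes next.
 def move_forward(position, distance):
     return position + distance

 position = 0
 for _ in range(3):
     position = move_forward(position, 10)
 print(position)  # 30

 # Object-oriented style: the data (position) lives inside an object
 # and is accessed only through methods specific to that object.
 class Robot:
     def __init__(self):
         self.position = 0

     def move_forward(self, distance):
         self.position += distance

 robot = Robot()
 for _ in range(3):
     robot.move_forward(10)
 print(robot.position)  # 30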

A ratio data type represents the quotient of two integers. Division of integers that cannot be reduced to an integer yields a ratio, i.e. 22/7 = 22/7, rather than a floating-point or truncated value. Ratios allow a computation to be maintained in exact numeric form, which can help avoid inaccuracies in long computations.
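
In Python, for instance, the standard fractions module provides exactly this behaviour; a minimal sketch:

 from fractions import Fraction

 # 22/7 stays an exact ratio instead of a truncated or floating-point value.
 ratio = Fraction(22, 7)
 print(ratio)         # 22/7
 print(ratio * 7)     # 22 -- exact, no rounding error accumulates
 print(float(ratio))  # 3.142857142857143, converted only on request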

Robot Control
There are many ways to design software for controlling a robot. The focus here is not on low-level coding issues, but on high-level concepts about the special situations robots will encounter and ways to address these peculiarities. The approach taken here proposes and examines some control software architectures that will comprise the brains of the robot.

Probably the biggest problem facing a robot is overall system reliability. A robot might face any combination of the following failure modes:

Mechanical Failures - These might range from temporarily jammed movements to wedged geartrains or a serious mechanical breakdown.

Electrical Failures - We hope it is safe to assume that the computer itself will not fail, but loose connections to motors and sensors are a common problem.

Sensor Unreliability - Sensors will provide noisy data (data that is sometimes accurate, sometimes not) or data that is simply incorrect (e.g., a touch sensor that fails to trigger).

The first two of the above problems can be minimized with careful design, but the third category, sensor unreliability, warrants a closer look. Before discussing control ideas further, here is a brief analysis of the sensor problem.

As an example of robot control, consider a robot interacting with a wall. In a worst-case scenario, what could happen while the robot is merrily running along, following the wall? Several possibilities:

1. The robot could run into an object or a corner, properly triggering a touch sensor.

2. The robot could run into an object or corner, not triggering a touch sensor.

3. The robot could wander off away from the wall.

4. The robot could slam into the wall, get stuck, and conditionally trigger a touch sensor.

5. The proximity sensor could fall off its mount, causing a series of incorrect sensor readings. Ideally, control software should expect occurrences of cases like those numbered #1 through #4 and be able to detect case #5.
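
To illustrate, here is a minimal Python sketch of the kind of cross-checking a controller can use; the sensor names, ranges and thresholds are hypothetical, but the idea is that a stalled drive motor with no touch event (case #4) or a long run of implausible proximity readings (case #5) is reported as a fault rather than trusted.

 def check_sensors(touch_pressed, motor_stalled, proximity_history):
     """Classify suspicious sensor patterns while wall-following."""
     # Case #4: the robot is pushing against something but the touch
     # sensor never fired -- treat a stalled motor as an undetected collision.
     if motor_stalled and not touch_pressed:
         return "collision_undetected"

     # Case #5: mostly out-of-range readings (hypothetical 0-200 cm range)
     # suggest the proximity sensor itself has failed or fallen off its mount.
     out_of_range = [r for r in proximity_history if r < 0 or r > 200]
     if len(out_of_range) > len(proximity_history) // 2:
         return "proximity_sensor_fault"

     return "ok"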

Robot Hardware
The application of new hardware in robotics is advancing every day. The use of software engineering, hardware engineering and biotechnology to recreate life or intelligence raises ethical and social issues. Robotic hardware designs are becoming more complex as the variety and number of on-board sensors increase and as greater computational power is provided in ever smaller packages on board robots. These advances in hardware, however, do not automatically translate into better software for controlling complex robots. Evolutionary techniques hold the potential to solve many difficult problems in robotics which defy simple conventional approaches, but they present many challenges as well. Numerous disciplines contribute to the technology, but there is an ethical responsibility on the part of the creator to ensure that the robot or virtual pet causes no harm. There have been many advances in the construction of robots, especially in our generation.

The next step was autonomous, humanoid robots. The mechanics of walking were not simple, but Honda had proven that those problems could be solved with the creation of its ASIMO robot at the turn of the century. Sony and other manufacturers followed Honda's lead. Over the course of two decades, engineers refined this hardware and the software controlling it to the point where they could create humanoid body forms with the grace and precision of a ballerina or the mass and sheer strength of the Incredible Hulk. Humanoid robots soon cost less than the average car, and prices kept falling. A typical model had two arms, two legs and the normal human-type sensors like vision, hearing and touch. Power came from small, easily recharged fuel cells. The humanoid form was preferred, as opposed to something odd like R2-D2, because a humanoid shape fits easily into an environment designed around the human body. A humanoid robot could ride an escalator, climb stairs, drive a car, and so on without any trouble. The hardware in these new robotic inventions is remarkable. Robotic cars and trucks are one obvious application for vision systems. There are more than 40,000 deaths in the U.S. every year because of car accidents, and human negligence causes most of them. With robots doing all the driving, the number of accidents should go way down, eliminating one of the leading causes of death in the U.S. Unfortunately, robotic vehicles will also leave every taxi driver, bus driver, truck driver and so on out of work. Robots with vision systems will be able to do all the cleaning in every hotel, store, airport and restaurant. Things will be spotless, but that could put perhaps five million people out of work.

Mathematics of Robot Control
Mathematics in robotics mainly involves robot kinematics. Robot kinematics is the study of the motion (kinematics) of robots. In a kinematic analysis the position, velocity and acceleration of all the links are calculated without considering the forces that cause this motion. The relationship between motion and the associated forces and torques is studied in robot dynamics. One of the most active areas within robot kinematics is screw theory.

Robot kinematics deals with aspects of redundancy, collision avoidance and singularity avoidance. When working with the kinematics of a robot, we treat each part of the robot by assigning a frame of reference to it, so a robot with many parts may have an individual frame assigned to each movable part. For simplicity we deal with a single manipulator arm of the robot. Each frame is named systematically with a number: for example, the immovable base of the manipulator is numbered 0, the first link joined to the base is numbered 1, the next link 2, and so on up to n for the last (nth) link.

Robot kinematics is mainly of two types: forward kinematics and inverse kinematics. Forward kinematics is also known as direct kinematics. In forward kinematics, the length of each link and the angle of each joint are given and we have to calculate the position of a point (such as the end effector) in the work volume of the robot. In inverse kinematics, the length of each link and the position of the point in the work volume are given and we have to calculate the angle of each joint.
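
For example, for a planar arm with two revolute joints (link lengths l1 and l2, joint angles theta1 and theta2), both problems reduce to trigonometry. The Python sketch below covers only this textbook two-link case, not a general solver, and returns just one of the two possible solutions of the inverse problem.

 import math

 def forward_kinematics(l1, l2, theta1, theta2):
     """Position of the end effector of a two-link planar arm."""
     x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
     y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
     return x, y

 def inverse_kinematics(l1, l2, x, y):
     """Joint angles that reach (x, y); one of the two possible solutions."""
     cos_t2 = (x ** 2 + y ** 2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
     theta2 = math.acos(cos_t2)  # raises ValueError if (x, y) is unreachable
     theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                            l1 + l2 * math.cos(theta2))
     return theta1, theta2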

Robot Programming Languages
How to Program a Robot

Step1- Buy a factory-built robot. There are a few different manufacturers, but the most popular and well-established maker of domestic robots is the iRobot company. You can visit their site by following the link posted below.

Step2- Set the internal clock on your factory-built robot. It may come with an atomic or radio-controlled clock already in it, which means you will only have to turn it on to set the time. Once the robot is set to the right date, schedule the times that you would like the robot to operate. For cleaning robots, that is usually when you are away from home. Some robots may also require the measurements of the room they will be traveling in.

Step3- Build a robot. This step is for the far more advanced robot users. The parts and construction of a robot largely depend on what the robot’s primary function will be. If you want the robot to carry things around, it will probably look like an arm mounted on wheels. Because of the large variety of different robots and the complex nature of their construction, it is advised to seek out specific plans for the robot you wish to build.

Step4- Write the code for your robot. Again, this seems like a vague and huge task for one step, and it is. There are a couple of different programming languages you can write your code in, depending on the software you are using. The code that you write will also depend on what the robot's primary function is. Since you don't want your robot to get stuck in a corner, a common piece of programming deals with what to do in such a situation. The programming should roughly resemble basic reasoning, as in the sketch below. For example, IF the left sensor detects an object THEN turn the wheels to the right. Programming requires a lot of foresight and trial and error.
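
A minimal sketch of that kind of rule in Python, assuming a hypothetical sensor-and-motor interface (the function names below are placeholders for whatever API your robot's software actually provides):

 # Hypothetical hardware interface -- replace with your robot's real API.
 def left_sensor_detects_object():
     return False  # placeholder reading

 def turn_right():
     print("turning right")

 def drive_forward():
     print("driving forward")

 # The reactive rule: IF the left sensor detects an object THEN turn right.
 for _ in range(10):  # a real controller would loop until shut down
     if left_sensor_detects_object():
         turn_right()
     else:
         drive_forward()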

Step5- Test your programming. This is important for both factory and home-built robots. Run the robot through all possible situations it may encounter and take note of how it performs. Go back and fix the code as you see fit.

Obstacle Avoidance
Obstacle avoidance is a robotic discipline with the objective of moving vehicles on the basis of sensory information. The use of these methods, as opposed to classic methods (path planning), is a natural alternative when the scenario is dynamic and behaves unpredictably. In these cases the surroundings do not remain invariable, so the sensory information is used to detect the changes and to adapt the motion accordingly.

The research conducted faces two major problems in this discipline. The first is to move vehicles in troublesome scenarios, where current technology has shown limited applicability. The second is to understand the role of the vehicle characteristics (shape, kinematics and dynamics) within the obstacle avoidance paradigm.

Most obstacle avoidance techniques do not take into account vehicle shape and kinematic constraints. They assume a point-like, omnidirectional vehicle and are doomed to rely on approximations. Our contribution is a framework to consider shape and kinematics together, in an exact manner, in the obstacle avoidance process, by abstracting these constraints away from the avoidance method itself. Our approach can be applied to many nonholonomic vehicles with arbitrary shape.

For these vehicles, the configuration space is three-dimensional, while the control space is two-dimensional. The main idea is to construct (centred on the robot at any time) the two-dimensional manifold of the configuration space that is defined by elementary circular paths. This manifold contains all the configurations that can be attained at each step of the obstacle avoidance and is thus general for all methods. Another important contribution is the exact computation of the obstacle representation in this manifold for any robot shape (i.e. the configuration regions in collision). Finally, we propose a change of coordinates of this manifold in such a way that the elementary paths become straight lines. Therefore, the three-dimensional obstacle avoidance problem with kinematic constraints is transformed into a simple obstacle avoidance problem for a point moving in a two-dimensional space without any kinematic restriction (the usual approximation in obstacle avoidance). Thus, existing avoidance techniques become applicable.
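
For reference, the "usual approximation" mentioned above can be as simple as a potential-field step for a point robot: attract towards the goal and repel from nearby obstacles. The sketch below shows that generic point-robot method, not the shape- and kinematics-aware framework described here, and its gains and influence radius are arbitrary placeholder values.

 import math

 def potential_field_step(robot, goal, obstacles,
                          k_att=1.0, k_rep=0.5, influence=1.0):
     """One velocity command for a point robot moving in the plane."""
     # Attractive component pulls the robot straight towards the goal.
     vx = k_att * (goal[0] - robot[0])
     vy = k_att * (goal[1] - robot[1])

     # Repulsive component pushes away from obstacles inside the
     # influence radius, growing stronger as the distance shrinks.
     for ox, oy in obstacles:
         dx, dy = robot[0] - ox, robot[1] - oy
         d = math.hypot(dx, dy)
         if 0 < d < influence:
             gain = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
             vx += gain * dx
             vy += gain * dy
     return vx, vy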

Task Planning and Navigation
Task planning for mobile robots usually relies solely on spatial information and on shallow domain knowledge, such as labels attached to objects and places. Although spatial information is necessary for performing basic robot operations (navigation and localization), the use of deeper domain knowledge is pivotal to endow a robot with higher degrees of autonomy and intelligence. Defining specific types of semantic maps, which integrate hierarchical spatial information and semantic knowledge, is key. Semantic maps can improve task planning in two ways: extending the capabilities of the planner by reasoning about semantic information, and improving the planning efficiency in large domains. Several experiments demonstrate the effectiveness of such solutions in a domain involving robot navigation in a domestic environment.

For any mobile device, the ability to navigate in its environment is one of the most important capabilities of all. Staying operational, i.e. avoiding dangerous situations such as collisions and staying within safe operating conditions (temperature, radiation, exposure to weather, etc.), comes first, but if any tasks are to be performed that relate to specific places in the robot's environment, navigation is a must. In the following, we will present an overview of the skill of navigation, try to identify the basic blocks of a robot navigation system and the types of navigation systems, and take a closer look at their related building components.

Robot navigation means the robot's ability to determine its own position in its frame of reference and then to plan a path towards some goal location. In order to navigate in its environment, the robot or any other mobility device requires a representation, i.e. a map of the environment, and the ability to interpret that representation. Navigation can be defined as the combination of three fundamental competences: self-localization, path planning, and map building and interpretation.
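
As an illustration of the path-planning competence, here is a minimal Python sketch that finds a shortest path on a simple occupancy-grid map with breadth-first search; a real navigation system would combine this with localization and a richer map representation.

 from collections import deque

 def plan_path(grid, start, goal):
     """Shortest path on a 4-connected grid; 0 = free cell, 1 = obstacle."""
     queue = deque([start])
     came_from = {start: None}
     while queue:
         cell = queue.popleft()
         if cell == goal:
             break
         r, c = cell
         for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
             if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                     and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                 came_from[(nr, nc)] = cell
                 queue.append((nr, nc))
     if goal not in came_from:
         return None  # the goal is unreachable on this map
     path, cell = [], goal
     while cell is not None:
         path.append(cell)
         cell = came_from[cell]
     return path[::-1]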

Robot Vision
One of the most fundamental tasks that vision is very useful for is the recognition of objects (be they machine parts, light bulbs, etc.). Evolution Robotics introduced a significant milestone in the near-real-time recognition of objects based on feature points. The software identifies points in an image that look the same even if the object is moved, rotated or scaled by some small degree. Matching these points to previously seen image points allows the software to 'understand' what it is looking at even if it does not see exactly the same image.
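
The same idea can be reproduced with openly available tools; the sketch below uses OpenCV's ORB features (not Evolution Robotics' own software) to match points between a stored reference image and the current view, with hypothetical file names and a hypothetical distance threshold.

 import cv2

 # Detect keypoints that remain recognisable when the object is moved,
 # rotated or slightly scaled, then match them against a known image.
 orb = cv2.ORB_create()

 reference = cv2.imread("object_reference.png", cv2.IMREAD_GRAYSCALE)
 scene = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)

 kp_ref, des_ref = orb.detectAndCompute(reference, None)
 kp_scene, des_scene = orb.detectAndCompute(scene, None)

 matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
 matches = matcher.match(des_ref, des_scene)

 # Many low-distance matches suggest the known object is in view.
 good = [m for m in matches if m.distance < 40]
 print(len(good), "matching feature points")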

As the hobbyist robotics market rapidly grows, so too do the machine vision choices that hobbyists have at their disposal. The CMUCam (initially created at Carnegie Mellon) is by far the most popular vision camera; it can track an object based on its color and even move the camera, if it is mounted on servos (small motors), to follow the object. Thanks to its low price and simple usage, it has become very widely used by hobby and academic roboticists.

Knowledge Based Vision Systems
A knowledge-based vision system is one that automatically configures programs for image processing and supports the recognition of objects. The system runs in two phases. In the first phase, based on primitives (curved edges, corners, etc.) and an explicit specification of the content of the image given by the user, a sequence of operators is generated and all their free parameters are computed adaptively. In this phase the system uses a rule base composed of knowledge of visual processing operators, their parameters and their interdependence. In the second phase, a hierarchical object model is formulated and edited by the user based on the primitives selected in the first phase; the system editor is specially provided for this purpose. Using the hierarchical object model facilitates a rapid interpretation of the result obtained from the previous image processing for the subsequent object recognition.

Robots and Artificial Intelligence
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it. Major AI textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."

The field was founded on the claim that a central property of human beings, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of breathtaking optimism, has suffered stunning setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.

AI research is highly technical and specialized, so much so that some critics decry the "fragmentation" of the field. Subfields of AI are organized around particular problems, the application of particular tools and longstanding theoretical differences of opinion. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.