Monday, June 28, 2010

Do Androids Dream of Electric Sheep?

What will robots be like in the future, and what will they be able to do? This is the question everybody asks. Robots already have a place in factories, labs, operating rooms, shopping centers and even homes, but they still rely on humans.

Robot designers are trying to give their creations human capacities: to be autonomous, to adapt to their environment and to learn.

With this goal comes evolutionary robotics, a new technology that attempts to bring together biology, cognitive science and artificial intelligence. The aim of these scientists is to mimic the human learning process, trying to achieve the same plasticity: the ability to recognize environmental stimuli and adapt to them.

Robots are different from any other engineered machine because their behavior is, in part, unpredictable.

Their designs are more abstract than those of other machines, which can be controlled down to the last screw, and much more difficult. Robotic technology is different, because robots are complex systems developed using neural networks and genetic algorithms.

The Argentine computer scientist Ezequiel Di Paolo, a graduate of the Instituto Balseiro who specializes in robotics and cognitive science and works as a researcher in Cognitive Science and Robotics at the University of Sussex in the UK, is developing models of biped robots that differ from Japanese robots such as Honda's "Asimo", which uses a total control system. (Here is a set of videos of Asimo:)

http://www.youtube.com/watch?v=Q3C5sc8b3xM

http://www.youtube.com/watch?v=P9ByGQGiVMg

http://www.youtube.com/watch?v=VTlV0Y5yAww

Unlike the Japanese, the British scientists' proposal is to create robots that regulate themselves and seek their own adaptive equilibrium.

This possibility can be distressing to many people, but it is unlikely that the autonomy of robots could become a threat to humans, because there is something in human beings that cannot be reproduced: the perception of ourselves, our inner life, awareness and critical judgement.

Robots cannot have emotions: they do not care what happens or what might happen, they do not grieve or rejoice at anything, and human affairs do not interest them.

In science fiction, the Three Laws of Robotics are a set of rules written by Isaac Asimov, which most of the robots in his novels and stories are designed to obey. In that universe, the laws are "mathematical formulas printed on the positronic pathways of the robots' brains" (what we would now call ROM). First appearing in the story "Runaround" (1942), they state:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

http://www.youtube.com/watch?v=AWJJnQybZlk

This wording of the laws is the conventional way the stories present them to humans; their real form would be an equivalent, far more complex set of instructions in the robot's brain. Asimov attributed the Three Laws to John W. Campbell, saying they had been written during a conversation held on December 23, 1940. However, Campbell maintained that Asimov had already thought of them, and that together they simply expressed them in a more formal way. The three laws appear in a large number of Asimov's stories: throughout his robot series, in several related stories, and in the series of novels featuring Lucky Starr. They have also been used by other authors working in Asimov's fictional universe, and they are frequently referenced in other works, both in science fiction and in other genres.

The three laws of robotics represent the moral code of the robot. A robot will always act under the imperatives of its three laws; for all intents and purposes, a robot behaves as a morally correct being. However, it is legitimate to ask: is it possible for a robot to violate any of the three laws? Can a robot "harm" a human? Most of Asimov's robot stories are based on situations in which, despite the three laws, we may answer these questions with "yes."

The intention of scientists, therefore, is eventually to build a robot that looks more like an animal than a machine and manages to be autonomous.

Although robots do not feel emotions, they can simulate them, turning themselves into devices that can interact and perhaps fulfill a social function, albeit for now a programmed and artificial one.

The idea is to create robots that acquire a kind of judgement of their own, and this is a real challenge. But the truth is that man does not yet know himself completely, so we may only be able to build such a robot when we have more knowledge about ourselves.

Japan is the country that invests most in robotic technology, with one robot for every 34 workers; Singapore, South Korea and Germany follow Japan in robot density.

According to figures from the Institute of Electrical and Electronics Engineers, in 2008 there were a million robots in the world.

In the United States, researchers at Cornell University built a machine with the ability to make copies of itself.

In England, scientists from the Universities of Aberystwyth and Cambridge have developed a computer system capable of proposing hypotheses, devising and conducting experiments, interpreting the results and starting new scientific research without any human involvement.

"Adam" is the first of a series of such robots dedicated to medical research, and the team is already building "Eve", another robot in the same series, which will be devoted to discovering drugs to combat diseases like malaria and schistosomiasis. (This is a video about Adam:)

http://www.youtube.com/watch?v=IY1sPV9e9H0

The advance of the robots cannot be stopped. Will they be our allies, or will they be like us: conquer first, and then move on to destroy us?

http://www.youtube.com/watch?v=WGoi1MSGu64

References:

http://www.cogs.susx.ac.uk/users/ezequiel/ (web page of Ezequiel Di Paolo, expert in robotics)

http://www.asimovlaws.com/articles/archives/2004/07/3_laws_dont_qui.html (article about the three laws of robotics)

http://asimo.honda.com/ (information about Asimo)

AI: Current research.

When people talk about artificial intelligence, they imagine human-shaped robots, probably inspired by a science fiction film. Maybe in the future it will be like this, but for the moment artificial intelligence is much simpler. AI is intelligence created artificially, but what characterizes intelligence?

Something with intelligence has to have mental attitudes such as beliefs and intentions, and the capacity to acquire knowledge, that is, to learn. A machine with AI does not have all its knowledge from the moment it is created; it has to acquire it through experience. This is one of its most important characteristics. It has to solve problems, even by dividing a big problem into simpler ones. It has to understand situations; in other words, to make sense of conflicting ideas where possible. A thing with AI has to draw up a plan for an assigned job, predict consequences and consider alternatives. It has to know the limits of its abilities and knowledge. It has to distinguish between similar situations: it cannot give the same solution to a whole group of situations, it has to solve each one individually, distinguishing them. It has to be original, creating new concepts and ideas, even by using analogies. It has to perceive the external world and form its own impression of it. And it has to understand and use language and symbols to communicate. If something has these characteristics, we can say that it is something with intelligence.

So we can say that AI has human characteristics such as learning, adaptation, reasoning, the correction of mistakes and a view of the world from its own point of view. Thanks to this, we can use AI in many fields.

But nowadays, a thing with AI does not need to have all these characteristics, only some of the most important ones; maybe in the future it will have strictly all of them, but not yet. The most attractive idea when speaking about AI is to think about robots, and we will speak about them, but there are many other interesting applications of AI.

Current research is focused on the service and business sectors. In the business sector, AI is used as a tool to help the worker. All of us use our computer every day. In a company, for example, AI can be used to do the accounts, and it will be more efficient than a group of humans doing them, so research aims at more efficient programs that work better and faster. The interesting point is when AI is used to take business decisions, always under human supervision. For example, in the stock exchange AI is used to make predictions: it observes the data, compares, and takes decisions. At the intersection of the service and business sectors, some groups are investigating a program that makes stock-market predictions for small "brokers". The idea came from a Spanish student, Rafael Rosillo, who created a program that helps people get into the stock exchange. Of course, people will not become rich with this, but it can be used as a tool to enter this world without so much risk of losing a lot of money. Here we have an example of how a human can learn from AI.

The other branch of current research addresses the service sector. For example, Newstracker is a program that looks for specific information: you tell the program what you are looking for and it searches internet newspapers, articles, magazines and so on, and edits a personal newspaper for you with all the information you were looking for. Today there are many programs of this kind for Digital Terrestrial Television; they look for an ideal programme for you at each moment. Another market where AI is very common is video games. War games use AI to create the enemy who tries to kill you: it does not know what move you will make, and depending on your move it performs one action or another. In football games, AI-controlled footballers simulate the real ones, each with a personality and a different style of play.

Even the simplest games, like chess, have AI. Most of these games do not have all the characteristics of intelligence, but they are still called AI, and the objective of research is to get closer to the definition of intelligence.

As we can see, AI is around us all the time in computers and digital machines. But if we put this AI inside a machine able to move and do work, then we have a robot. A robot takes information from the environment, analyzes this data, produces an answer and performs an action. There are several kinds of robots. In a car factory, robots work to build the cars. But when we speak about robots and AI, we mean machines that do not work in a single pre-programmed way: we can give them a job and they will do it in the best way possible, adapting themselves to the situation. One of the main problems with robots is the storage of information: a robot should always be learning, but for this we would need an infinite database. Current robotics research has managed to imitate human walking and to recognize objects, but not much more. Future research aims to improve the recognition of 3D space and the ability to feel sensations like pressure or temperature. A research team at Tokyo University has developed a flexible plastic that can be used as skin in new robots. With this advance, a robot can do more delicate work, such as cooking.

We cannot help thinking of a robot with human form when we speak about AI (as we said at the beginning), but here you can see an example of a robot with the form of a dog. It is called BigDog. It has many sensors that recognize the terrain, and a very modern mechanism that lets it move over all kinds of ground.

http://www.youtube.com/watch?v=cHJJQ0zNNOM

This robot is used to carry loads, for example for military purposes.

Future research aims to create robots that help humans do their work faster and better. We may think they could replace us, and it is possible that some kinds of jobs will be affected by the introduction of robots into the workplace, but we are far from a world in which robots do the work for humans. Anyway, if we ever reach such a world it will be a good thing, because research is heading that way, and humans will work in other sectors that will have to grow, like computational science.

References:

http://www.monografias.com/trabajos16/inteligencia-artificial/inteligencia-artificial.shtml

http://www.lne.es/gijon/2010/05/31/inteligencia-artificial-mejor-broker/922656.html

http://www.monografias.com/trabajos13/intar/intar.shtml

http://www.monografias.com/trabajos64/inteligencia-artificial-investigacion-sistemas-computo/inteligencia-artificial-investigacion-sistemas-computo2.shtml#xfuturo

http://www.bostondynamics.com/img/BigDog_Overview.pdf

Different perspectives within AI

Within the field of Artificial Intelligence there are two main approaches that have tried to create a computer as similar as possible to the human brain; in scientific terms, a computer able to pass the Turing test (it will be discussed in the following section). The first method was exploited between 1950 and 1970 and was oriented to discovering knowledge rules; it was called symbolic AI. After 1970 a second approach appeared, due to the poor results of the first. This new method was called sub-symbolic, and it is based on probability. These are the main features of the two approaches, but they are much more complicated than this, so here is a deeper explanation of their features.

In the early 1950s a new field of computation appeared, Artificial Intelligence, and many researchers became interested in it. At the beginning they tried to represent knowledge in the computer as a collection of basic rules. For instance, if we know that birds can fly and a finch is a bird, then the finch can fly. This is a very simple deductive reasoning, so the researchers only had to store many premises in the computer and let it analyse its inputs against those premises. However, this method has a problem: the ambiguity of normal conversation caused great difficulties for the computer, so the premises had to be stated rigorously. Furthermore, every rule has an exception; in the case of the birds, a penguin is a bird but it cannot fly, yet the computer would deduce that it can fly.
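The bird example can be sketched as a toy rule system. This is only an illustration of the idea, not any historical program; the facts and the single rule are invented for the example:

```python
# A minimal sketch of symbolic AI: knowledge stored as facts plus one
# deduction rule. The facts and the rule are illustrative assumptions.
birds = {"finch", "penguin"}

def can_fly(animal):
    # rule: every bird can fly (the system cannot represent exceptions)
    return animal in birds

print(can_fly("finch"))    # True
print(can_fly("penguin"))  # True, although a real penguin cannot fly
```

The penguin case shows exactly the weakness described above: a rigid rule base happily derives false conclusions from rules that are only usually true.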

Because of these obstacles, this field based on a simple deduction method was abandoned. In the 1970s, researchers focused AI on probability. They wanted to create a program able to learn from many examples: for instance, the researchers showed the program many pictures of birds and it had to work out what they have in common. This new method worked quite well, but the problem was that with complicated concepts such as maternity or grammar the software got confused.

This is the most popular approach in AI nowadays, also called Computational Intelligence, and it includes different techniques such as Evolutionary Computation, Swarm Intelligence and Neural Networks. The last is perhaps the most famous, because it tries to copy the operation of the human brain. It is a system in which all the neurons are interconnected in order to produce an output. Nowadays the problem with this technique is the capacity of computers: a human brain has 100,000 neuronal connections, while a powerful computer only has 10,000. Nevertheless, the Internet could help with this problem, because it gives us the opportunity of having connected computers, which makes it more feasible to approximate the human brain.

In the last year a researcher called Goodman has developed a programming language, Church, that includes basic rules like the first AI methods, but based on probability; it therefore mixes both methods. For instance, if we establish that a finch is a bird, Church will assign a probability to its ability to fly. If we give the program some extra information about the finch (its age, its illnesses), it will modify its initial probability estimate and may conclude that this particular finch cannot fly. This method is still theoretical, but it seems a huge step for AI, which in recent years has seemed stagnant, because it gives the computer the opportunity to learn by itself using probabilistic calculation. Indeed, this is the way humans learn.
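Church itself is a full probabilistic programming language; as a loose illustration of the underlying idea, here is a single Bayesian update written in plain Python. All the numbers are invented for the example:

```python
# A rough sketch of the probabilistic-rules idea behind Church.
# The prior and likelihoods below are invented, not from any real model.
def update(prior, likelihood_if_flyer, likelihood_if_not):
    """One step of Bayes' rule: revise P(can fly) given a new observation."""
    num = prior * likelihood_if_flyer
    den = num + (1 - prior) * likelihood_if_not
    return num / den

p_fly = 0.9  # prior belief: most birds fly

# New observation: the bird has an injured wing (assumed likelihoods).
p_fly = update(p_fly, likelihood_if_flyer=0.1, likelihood_if_not=0.8)
print(round(p_fly, 2))  # prints 0.53: the estimate drops sharply
```

Each extra piece of evidence (age, sickness) would apply another such update, which is exactly the "modify its initial probability estimation" behaviour described above.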

Here are a few simple examples in which we can see how these techniques work. They are all related to neural networks, and it is quite visible how the computer learns by itself how to act when we give it a certain problem.

http://www.youtube.com/watch?v=lmPJeKRs8gE

http://www.youtube.com/watch?v=oU9r64tc7yE

http://www.youtube.com/watch?v=MbtJ-Y4-T0Y&feature=related

http://www.youtube.com/watch?v=rFMBTIPLUFg&feature=related

http://www.youtube.com/watch?v=ytRi4rvnBsc&feature=related

Perhaps the most amazing is the last one, because we can see, literally, how a machine learns to face a problem and solve it. As was said before, these are simple examples with just a hundred neuronal connections; imagine what we will be able to do with millions of them. This is just the beginning.

Furthermore, this field is not found only in computational labs; there are some commercial products on the streets. For instance, a famous video game called Quake II (you have a gun and your mission is to kill as many enemies as you can) has one of its enemies based on neural networks. It is called Neuralbot, and it learns from the movements that we make during the game. Here you can find some extra information about it:

http://homepages.paradise.net.nz/nickamy/neuralbot/nb_about.htm#about

This is a simple video to explain how this enemy works:

http://www.youtube.com/watch?v=_StkH25eulg&feature=related

On the one hand, then, this is one AI approach: to develop a technology able to create machines with reasoning capacities similar to human intelligence.

On the other hand, some researchers have focused on another approach. This approach uses the computer as a simulation tool in order to validate theories: it does not aim to obtain intelligent programs but to discover what intelligence is. Because intelligent activity arises in animals, many researchers have focused their attention on what life is. This field, called Artificial Life, tries to create something alive using a combination of data and programs. The first assumption made in AL is that intelligence comes from life; furthermore, if we are able to create life we will understand better how it works and what it needs in order to exist. Nevertheless, many people have decided to separate this field from AI because of its huge scope.

Within Artificial Life, Cellular Automata are the best example of a life generator. They do not seem intelligent, but they exhibit several fundamental aspects of life. The definition of a Cellular Automaton is:

It is an ensemble of cells that are interconnected with each other. The state of each cell depends on the states of its neighbours and on its own previous state, so if we give the automaton some inputs it will produce an output according to the transition function that we gave it at the beginning.

The most famous program is the Game of Life, created by John Conway. It is quite simple: you have a screen with many cells (in this case the pixels of the computer screen), which can be white or black. If we give the program some initial instructions (for instance, a white pixel with three black neighbours turns black) and let it run, the pixels change colour and the screen evolves with time. This program became famous because, with particular initial configurations, the screen seems to evolve like something alive, and many researchers are interested in this. It is impossible to discuss this deeply in this article, because it would fill hundreds of pages, but here are some examples of how the program works.
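The whole game fits in a few lines of Python. This is a minimal sketch using Conway's standard rules (a live cell survives with 2 or 3 live neighbours; a dead cell comes alive with exactly 3); the "blinker" is a classic oscillating pattern:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count, for every cell, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                   # the vertical column {(1,0),(1,1),(1,2)}
print(step(step(blinker)) == blinker)  # True: a period-2 oscillator
```

Everything "alive" in the demos linked below (gliders, guns, oscillators) emerges from exactly this transition function applied over and over.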

http://www.collidoscope.com/cgolve/map.html

http://pentadecathlon.com/lifeNews/index.php

Here is the real program, if someone is interested in it:

http://www.bitstorm.org/gameoflife/standalone/

As you can see in the examples, it seems alive, and the question is: is there life inside the box? My answer would be affirmative: artificial life, an artificial universe. Maybe we are living in a Game of Life.

ARTIFICIAL INTELLIGENCE. THE BRAIN OF THE MICROPROCESSOR

The term "artificial intelligence" was formally coined in 1956 during the Dartmouth conference, although by then people had been working on the subject for five years, and many different definitions had been proposed, none of which had been fully accepted by the research community. AI, together with modern genetics, is one of the newest disciplines and the field where most scientists "would most like to work."

One of the big reasons for studying AI is to learn more about ourselves. Unlike psychology and philosophy, which also study intelligence, AI directs its efforts at understanding this phenomenon by both building and understanding intelligent entities.

The development of Artificial Intelligence includes several fields, such as robotics, mainly used in industry; language understanding and machine translation; machine vision, used to distinguish shapes on assembly lines; word recognition and machine learning; and expert computer systems.

According to John McCarthy, intelligence is the "ability of humans to adapt effectively to changing circumstances through the use of information about those changes." Another interesting way to illustrate intelligence is Marvin Minsky's "society of mind" theory, according to which each human mind is the result of the actions of a committee of less powerful minds that talk to each other and combine their respective skills in order to solve problems.

In a slightly more technical area, one of the definitions given for AI is "that which is used to make a given program behave intelligently without attempting to take into account the 'form of reasoning' used to achieve this behaviour." Here a dilemma arises, since by that definition any problem solvable by a computer, however simple, that a human being could also solve, could be included in the field of artificial intelligence, when it is achieved simply by applying rules one after another, to the letter: what we know as algorithms in the language of AI.

When we apply algorithms to solve problems we are not acting intelligently, even if we are being effective. The really complicated problems facing human beings are those for which no algorithm is known; they are dealt with by solution-oriented rules called heuristics, where, unlike with algorithms, there is no guarantee that applying the rules will bring us closer to the solution.

This orients us towards the creation of an artificial system capable of human cognitive processes, namely those relating to learning and adaptability; its authors are Newell and Simon of Carnegie Mellon University.

AI's attempt to build machines that apparently behave like human beings has given rise to two opposing camps: the symbolic or top-down approach, known as classical AI, and the subsymbolic approach, sometimes called connectionist.

When a task is defined by a well-specified algorithm for storage, classification or calculation, a computer can do it. But this concept of a sequential, fixed algorithm of definite operations cannot handle problems where the path of reasoning is variable and different situations must be addressed without being specified in advance.

One of the most important methods is neural networks. This model considers that a neuron can be represented by a binary unit: at any moment its state may be active or inactive. The interaction between neurons takes place through synapses; according to its sign, a synapse is excitatory or inhibitory.

The perceptron consists of inputs from external sources and output connections. In fact, a perceptron is the simplest possible neural network: one with no hidden layers.
For each configuration of states of the input neurons (the stimulus), the perceptron's response follows this dynamics: the synaptic potentials are summed and compared with an activation threshold. This weighted sum is also called the field. If the field is greater than the threshold, the neuron's response is active; otherwise it is inactive.

With an architecture as simple as the perceptron's, only one class of very simple "Boolean" functions can be computed: the linearly separable ones. These are the functions in which the input states with positive output can be separated from those with negative output by a hyperplane. A hyperplane is the set of points in the space of input states that satisfy a linear equation: in two dimensions it is a line, in three dimensions a plane, and so on.
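A tiny Python sketch makes the threshold dynamics concrete. The weights and threshold below are hand-picked assumptions for the example, not learned values:

```python
# A minimal perceptron: weighted sum of inputs compared to a threshold.
def perceptron(inputs, weights, threshold):
    """Active (1) if the field (weighted sum) exceeds the threshold."""
    field = sum(w * x for w, x in zip(weights, inputs))
    return 1 if field > threshold else 0

# AND is linearly separable: weights (1, 1) with threshold 1.5 compute it.
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, perceptron((x, y), (1, 1), 1.5))
# Only the input (1, 1) activates the neuron.
```

XOR, by contrast, is not linearly separable: no line in the plane separates {(0,1), (1,0)} from {(0,0), (1,1)}, so no choice of two weights and a threshold can make this unit compute it.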

If we want to compute more complex functions with neural networks, it is necessary to insert layers of neurons between input and output, called hidden neurons. A multilayer network can be defined as a set of perceptrons linked together by synapses and arranged in layers according to different architectures. One of the most commonly used architectures is called feedforward: the inputs connect to the hidden layers, and these to the output. The operation of a neural network is governed by rules for the propagation of activity and the updating of states.
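Hidden neurons are exactly what rescues XOR. This sketch wires two hidden threshold units into one output unit; the weights are hand-chosen assumptions (a real network would learn them by training):

```python
# A minimal feedforward network: two hidden units feeding one output unit.
def unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def xor_net(x, y):
    h_or  = unit((x, y), (1, 1), 0.5)        # hidden unit computing OR
    h_and = unit((x, y), (1, 1), 1.5)        # hidden unit computing AND
    return unit((h_or, h_and), (1, -1), 0.5)  # fires for OR but not AND

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, xor_net(x, y))  # reproduces XOR: 0, 1, 1, 0
```

One hidden layer is enough here because "x XOR y" decomposes into "(x OR y) AND NOT (x AND y)", and each of those pieces is linearly separable.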

Another methodology used for the simulation of intelligence by computer is genetic algorithms, which use selection and recombination as the main operators and mutation as a secondary (and even optional) operator. The genetic algorithm, like a neural network, functions as a black box that receives certain inputs and produces (after an indeterminate amount of time) the desired outputs. However, unlike neural networks, genetic algorithms do not need to be trained with examples of any kind: they are able to generate their own examples and counterexamples to guide the evolution from completely random initial populations.

The selection of the fittest and the sexual reproduction of the genetic algorithm are in charge of preserving the appropriate characteristics of each individual, so that the population converges towards optimal solutions.
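The loop of selection, recombination and mutation can be sketched in a few lines. This is a toy run on the classic "OneMax" problem (maximize the number of 1s in a bit string); every parameter here is an illustrative assumption:

```python
# A minimal genetic algorithm on the OneMax toy problem.
import random

random.seed(0)
N, POP, GENS = 20, 30, 60  # genome length, population size, generations

def fitness(ind):
    return sum(ind)  # count of 1s; the optimum is N

def select(pop):
    # tournament selection: the fitter of two random individuals wins
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # single-point recombination (the main operator)
    cut = random.randrange(1, N)
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.02):
    # rare bit flips (the secondary operator)
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best))  # typically at or near the optimum of 20
```

Notice that no training examples are supplied anywhere: the population starts completely random, and selection pressure alone drags it towards the optimum, exactly as described above.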

Genetic algorithms are also distinguished by not easily getting trapped in local minima, unlike most traditional search techniques, since they use probabilistic operators, which are more robust than the deterministic operators that other techniques often rely on.

However, being heuristics, they cannot always guarantee finding the optimal solution; but experience to date seems to show that, when used properly, they can provide very acceptable solutions, in most cases better than those found with other search and optimization techniques.

Although still attacked by some sections of the Artificial Intelligence community, genetic algorithms, like neural networks, have little by little been gaining recognition from researchers as an effective technique for complex problems, on the strength of their results in practical applications, as evidenced by a growing number of conferences and publications around the world in the last few years.

WELCOME!!!

In this blog we will upload some publications about artificial intelligence for the final assignment of Digital Art and Culture. We are Javier Gómez García, Francisco Sancho Valero, Alfonso Andrés García Soto and Martín Colombano Sosa. We hope you like it!!!