Monday, June 28, 2010

ARTIFICIAL INTELLIGENCE: THE BRAIN OF THE MICROPROCESSOR

The term "artificial intelligence" was formally coined in 1956 at the Dartmouth conference, although researchers had already been working on the field for five years by then. Many different definitions have been proposed, none of which has been fully accepted by the research community. AI is one of the newest disciplines and, together with modern genetics, the field in which most scientists say they "would most like to work."

One of the main reasons for studying AI is to learn more about ourselves. Unlike psychology and philosophy, which also take intelligence as their object of study, AI directs its efforts at understanding this phenomenon by both building and analyzing intelligent entities.

The development of Artificial Intelligence spans several fields: robotics, used mainly in industry; language understanding and machine translation; machine vision, used to distinguish shapes on assembly lines; speech recognition; machine learning; and expert systems.

According to John McCarthy, intelligence is the "ability of humans to adapt effectively to changing circumstances through the use of information about those changes." Another interesting illustration of intelligence is Marvin Minsky's "society of mind" theory, in which each human mind is the result of the actions of a committee of less powerful minds that talk to each other and combine their respective skills in order to solve problems.

In a slightly more technical vein, one of the definitions given for AI is "that which makes a given program behave intelligently without taking into account the 'form of reasoning' used to achieve that behavior." This raises a dilemma: under that definition, any problem solvable by a computer, however uncomplicated for a human being, could be included in the field of artificial intelligence simply by applying a fixed sequence of rules to the letter, that is, what AI terminology calls algorithms.

Applying algorithms to solve problems may be effective, but it is not acting intelligently. The really complicated problems that human beings face are those for which no algorithm is known; the solution-oriented rules that arise when tackling them are called heuristics, and unlike algorithms there is no guarantee that applying these rules will bring us closer to the solution.

Another line of work orients itself toward the creation of artificial systems capable of reproducing human cognitive processes, particularly learning and adaptability; its principal authors are Newell and Simon of Carnegie Mellon University.

The attempt to build machines that behave like human beings has given rise to two opposing camps: the symbolic or top-down approach, known as classical AI, and the subsymbolic or bottom-up approach, sometimes called connectionist.

When a task is defined by a well-specified algorithm for storage, classification or calculation, a computer can perform it. But this concept of algorithm, a sequence of fixed, predetermined operations, cannot handle problems where the path of reasoning is variable and different situations must be addressed without being specified in advance.

One of the most important connectionist methods is neural networks. This model represents a neuron as a binary unit: at any moment its state is either active or inactive. Neurons interact through synapses; depending on its sign, a synapse is excitatory or inhibitory.

The perceptron consists of inputs from external sources and output connections. In fact, the perceptron is the simplest possible neural network: one with no hidden layers.
For each configuration of states of the input neurons (the stimulus), the perceptron's response follows this dynamics: the synaptic potentials are summed and compared with an activation threshold. This weighted sum is also called the field. If the field exceeds the threshold, the neuron's response is active; otherwise it is inactive.
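The dynamics just described can be sketched in a few lines of Python (an illustrative sketch; the weights and threshold here are arbitrary values chosen for the example, not taken from any particular model):

```python
def perceptron(inputs, weights, threshold):
    """Sum the synaptic potentials (the 'field') and compare with the
    activation threshold: active (1) if the field exceeds it, else inactive (0)."""
    field = sum(w * x for w, x in zip(weights, inputs))
    return 1 if field > threshold else 0

# Two excitatory synapses (positive weights) and one inhibitory (negative):
print(perceptron([1, 1, 1], [0.6, 0.6, -0.4], 0.5))  # field = 0.8 > 0.5 -> 1
print(perceptron([0, 0, 1], [0.6, 0.6, -0.4], 0.5))  # field = -0.4 <= 0.5 -> 0
```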

With an architecture as simple as the perceptron's, only one very simple class of Boolean functions can be computed: the linearly separable functions. These are the functions in which the input states with positive output can be separated from those with negative output by a hyperplane, that is, the set of points in the space of input states that satisfy a linear equation. In two dimensions a hyperplane is a line, in three dimensions a plane, and so on.
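Linear separability can be probed directly. The sketch below brute-forces a small grid of candidate lines w1*x1 + w2*x2 > b over two-input Boolean functions (the grid and helper are assumptions made for illustration): a separating line exists for AND, while for XOR none exists at all, which is why a single perceptron cannot compute it.

```python
from itertools import product

def separable(truth):
    """Search a small grid of lines w1*x1 + w2*x2 > b for one that
    separates the positive outputs of a 2-input Boolean function
    from the negative ones (illustrative brute force)."""
    grid = [i / 2 for i in range(-4, 5)]  # candidate weights and thresholds
    for w1, w2, b in product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 > b) == bool(y)
               for (x1, x2), y in truth.items()):
            return True
    return False

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

print(separable(AND))  # True  (e.g. the line x1 + x2 = 1.5)
print(separable(XOR))  # False (no line can separate XOR)
```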

To compute more complex functions with neural networks, it is necessary to insert layers of neurons between input and output, called hidden neurons. A multilayer network can be defined as a set of perceptrons linked together by synapses and arranged in layers according to different architectures. One of the most commonly used is the feedforward architecture: connections run from the input to the hidden layers, and from these to the output. The operation of a neural network is governed by rules for the propagation of activity and the updating of states.
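A minimal feedforward network with one hidden layer shows what hidden neurons buy us: the hand-chosen weights below (an illustrative assumption; a trained network would learn its own) let a 2-2-1 network compute XOR, which a lone perceptron cannot.

```python
def step(field, threshold):
    """Binary unit: active if the field exceeds the threshold."""
    return 1 if field > threshold else 0

def xor_net(x1, x2):
    """2-2-1 feedforward network with hand-set weights.
    Hidden neuron h1 computes OR, h2 computes AND; the output
    combines them through an inhibitory synapse from h2, giving XOR."""
    h1 = step(x1 + x2, 0.5)   # OR of the inputs
    h2 = step(x1 + x2, 1.5)   # AND of the inputs
    return step(h1 - h2, 0.5) # OR but not AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # 0, 1, 1, 0
```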

Another methodology used to simulate intelligence on a computer is genetic algorithms, which use the evolutionary strategy as the main operator and mutation as a secondary (and even optional) operator. Like a neural network, a genetic algorithm functions as a black box that receives certain inputs and produces, after an indeterminate amount of time, the desired outputs. Unlike neural networks, however, genetic algorithms do not need to be trained on examples of any kind; they are able to generate their own examples and counterexamples, guiding the evolution from completely random initial populations.

The genetic algorithm's mechanisms of selection of the fittest and sexual reproduction are in charge of preserving the appropriate characteristics of each individual, so that the population converges toward optimal solutions.
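The selection, reproduction and mutation mechanisms can be sketched on a toy objective. Here the fitness function is OneMax (count the 1s in a bit string), standing in for whatever black-box objective the algorithm is given; the population size, tournament selection, and one-point crossover are assumptions chosen for a minimal example.

```python
import random

random.seed(0)  # reproducible run for this sketch

def fitness(bits):
    """Toy objective (OneMax): number of 1s in the individual."""
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, p_mut=0.02):
    # Completely random initial population
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection of the fittest: binary tournament
        parents = [max(random.sample(pop, 2), key=fitness)
                   for _ in range(pop_size)]
        # Sexual reproduction: one-point crossover between parent pairs
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, n_bits)
            children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        # Mutation as a secondary operator: rare bit flips
        pop = [[bit ^ (random.random() < p_mut) for bit in child]
               for child in children]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of 20 bits set")
```

With these parameters the population typically reaches or nearly reaches the optimum of 20, though as the article notes, being a heuristic, the optimum is not guaranteed on every run.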

Genetic algorithms are also distinguished by not getting easily trapped in local minima, unlike most traditional search techniques, and by using probabilistic operators, which are more robust than the deterministic operators that other techniques often employ.

However, being heuristics, they cannot always guarantee finding the optimal solution, but experience to date seems to show that, when used properly, they can provide very acceptable solutions, in most cases better than those found with other search and optimization techniques.

Although still criticized by some sectors of the Artificial Intelligence community, genetic algorithms, like neural networks, have little by little been gaining, on the strength of the effectiveness of their results in practical applications, recognition among researchers as an effective technique for complex problems, as evidenced by a growing number of conferences and publications around the world in recent years.
