Homo sapiens, literally “intelligent man,” has adapted to nearly every environment on the face of the earth, below it, and as far above it as we can propel ourselves. We must be doing something right. Our “intelligence” arises from interacting with other beings. We humans are the most social of animals: we live together in families, tribes, cities, and nations, behaving and thinking according to the rules and norms of our communities, adopting the customs of our fellows, including the facts they believe and the explanations they use to tie those facts together. Even when we are alone, we think about other people, and even when we think about inanimate things, we think using language, the medium of interpersonal communication.

People long ago noticed the variety of interesting collective behaviors in nature. A flock of birds sweeps across the sky. A group of ants forages for food. A school of fish swims, turns, and flees together. We call this kind of aggregate motion “swarm behavior.” More recently, biologists and computer scientists in the field of “artificial life” have studied how to model biological swarms in order to understand how such “social animals” interact, achieve goals, and evolve. Engineers, in turn, are increasingly interested in swarm behavior because the resulting “swarm intelligence” can be applied in optimization (e.g., in telecommunication systems), robotics, traffic patterns in transportation systems, and military applications.

A high-level view of a swarm suggests that its N agents are cooperating to achieve some purposeful behavior and some common goal. This apparent “collective intelligence” emerges from what are often large groups of relatively simple agents. Each agent uses simple local rules to govern its actions, and through the interactions of the entire group the swarm achieves its objectives. A kind of “self-organization” emerges from the collected actions of the group.
An autonomous agent follows neither commands from a leader nor a global plan. For example, for a bird to participate in a flock, it only adjusts its movements to coordinate with those of its flock mates, typically the “neighbors” close to it in the flock. A bird in a flock simply tries to stay close to its neighbors while avoiding collisions with them. No bird takes commands from a leader bird, because there is none; any bird can fly at the front, center, or back of the swarm. Swarm behavior gives birds several advantages, including protection from predators (especially for birds in the middle of the flock) and more efficient searching for food (each bird effectively exploits the eyes of every other bird).
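These local rules are easy to simulate. The sketch below is a minimal, illustrative flocking model in this spirit; the update weights, the all-to-all neighborhood, and the time step are invented for illustration, not taken from any specific study. Each bird steers by three local terms: separation from too-close neighbors, alignment with the average heading, and cohesion toward the perceived flock center.

```python
import random

# Minimal 2-D flocking sketch (all constants are illustrative assumptions):
# each bird adjusts its velocity using only information about its flock mates.

def step(positions, velocities, sep=1.0, dt=0.1):
    new_vel = []
    n = len(positions)
    for i in range(n):
        px, py = positions[i]
        vx, vy = velocities[i]
        ax = ay = cx = cy = sx = sy = 0.0
        # here every other bird counts as a neighbor; a real model uses a radius
        for j in range(n):
            if j == i:
                continue
            qx, qy = positions[j]
            ax += velocities[j][0]; ay += velocities[j][1]   # alignment sum
            cx += qx; cy += qy                               # cohesion sum
            dx, dy = px - qx, py - qy
            if dx * dx + dy * dy < sep * sep:                # separation
                sx += dx; sy += dy
        m = n - 1
        vx += 0.05 * (ax / m - vx) + 0.01 * (cx / m - px) + 0.1 * sx
        vy += 0.05 * (ay / m - vy) + 0.01 * (cy / m - py) + 0.1 * sy
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

random.seed(0)
pos = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(100):
    pos, vel = step(pos, vel)
```

No bird is singled out as a leader; iterating `step` nevertheless produces a group that stays together and tends to align, which is the emergent behavior the text describes.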
Models and Concepts of Life and Intelligence
Swarm intelligence (SI) is artificial intelligence based on the collective behavior of decentralized, self-organized systems. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems. SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local interactions between such agents lead to the emergence of complex global behavior. Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.
Biological Basis and Artificial Life
Researchers try to examine how collections of animals, such as flocks, herds, and schools, move in a way that appears to be orchestrated. A flock of birds moves like a well-choreographed dance troupe: the birds veer to the left in unison, then suddenly dart to the right and swoop down toward the ground. How can they coordinate their actions so well? In 1987, Reynolds created the “boid” model, a distributed behavioral model, to simulate the motion of a flock of birds on a computer. Each boid is implemented as an independent actor that navigates according to its own perception of the dynamic environment and must observe the following rules. First, the “avoidance rule” says that a boid must move away from boids that are too close, so as to reduce the chance of in-air collisions. Second, the “copy rule” says that a boid must fly in the general direction the flock is moving, obtained by averaging the other boids' velocities and directions. Third, the “center rule” says that a boid should minimize its exposure to the flock's exterior by moving toward the perceived center of the flock. Flake later added a fourth rule, “view,” which says that a boid should move laterally away from any boid that blocks its view. The boid model also makes sense from another point of view: each boid acts according to attraction and repulsion between neighbors in the flock. Repulsion produces collision avoidance, while attraction keeps the flock in shape, i.e., copying the movements of neighbors can be seen as a kind of attraction; the center rule plays a role in both attraction and repulsion. The swarm behavior of the simulated flock results from the dense interaction of the relatively simple behaviors of the individual boids.

One swarm-based robotic implementation of cooperative transport is inspired by cooperative prey retrieval in social insects: a single ant finds a prey item that it cannot move alone.
The ant recruits its nest mates by direct contact or trail laying, and a group of ants then collectively carries the large prey back to the nest. Although this scenario seems well understood in biology, the mechanisms underlying cooperative transport remain unclear, and roboticists have attempted to model it. For instance, Kube and Zhang introduced a simulation model that includes stagnation recovery through task modeling; the collective behavior of their system appears very similar to that of real ants. Resnick designed StarLogo, an object-oriented programming language based on Logo, to run a series of microworld simulations, successfully illustrating different self-organization and decentralization patterns in slime molds, artificial ants, traffic jams, termites, turtles and frogs, and so on.
Terzopoulos et al. developed artificial fishes in a 3D virtual physical world. They emulate each fish's appearance, locomotion, and behavior as an autonomous agent situated in its simulated physical domain; the simulated fish can learn to control internal muscles to locomote hydrodynamically. They also emulated complex group behaviors within this physical domain. Millonas proposed a spatially extended model of swarms in which organisms move probabilistically between local cells in space, with weights dependent on local morphogenetic substances, or morphogens. The morphogens are in turn affected by the paths of movement of the organisms; the evolution of the morphogens and the corresponding flow of the organisms together constitute the collective behavior of the group.
Learning and evolution are basic features of living creatures, and in the field of artificial life a variety of species-adaptation genetic algorithms have been proposed. Sims describes a lifelike system for the evolution and co-evolution of virtual creatures. These artificial creatures compete in physically simulated 3D environments to seize a common resource; only the winners survive and reproduce. Their behavior is limited to physically plausible actions by realistic dynamics such as gravity, friction, and collisions. Sims structures the genotype as a directed graph of nodes and connections; these genotypes determine both the morphology of the creatures and the neural systems controlling their muscle forces. Co-evolution is simulated by mutually adapting morphology and behavior during the evolutionary process, and interesting, diverse strategies and counter-strategies emerge in simulations with populations of competing creatures.
Evaluation of Swarm Intelligent System
Although many studies on swarm intelligence have been presented, there are no general criteria for evaluating a swarm intelligent system's performance. Fukuda et al. attempt an evaluation based on flexibility, which is essentially a robustness property: they proposed fault tolerance and local superiority as indices and compared two swarm intelligent systems via simulation with respect to these indices. There remains a significant need for more analytical studies.
Stability of Swarms
In biology, researchers have proposed “continuum models” for swarm behavior based on non-local interactions. Such a model consists of integrodifferential advection-diffusion equations, with convolution terms that describe long-range attraction and repulsion. The authors found that if the density dependence in the repulsion term is of a higher order than in the attraction term, then the swarm has a constant interior density with sharp edges, as observed in biological examples, and they performed a linear stability analysis for the edges of the swarm.
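A generic one-dimensional instance of such a model (a sketch of the standard advection-diffusion form, reconstructed from the description above rather than the authors' exact equations) is

```latex
\frac{\partial \rho}{\partial t}
  + \frac{\partial}{\partial x}\bigl(\rho\, v\bigr)
  = D\,\frac{\partial^{2} \rho}{\partial x^{2}},
\qquad
v(x,t) = \int K(x - x')\,\rho(x',t)\,dx' ,
```

where \(\rho\) is the swarm density, \(D\) a diffusion coefficient, and the odd kernel \(K\) encodes long-range attraction and short-range repulsion; the convolution term is what makes the interactions non-local.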
Characterizations of Stability
There are several basic principles of swarm intelligence, such as proximity, quality, response diversity, adaptability, and stability. Stability is a basic property of swarms: if it is not present, it is typically impossible for the swarm to achieve any other objective. Stability characterizes the cohesiveness of the swarm as it moves. How do we mathematically define whether a swarm is stable? The relative velocities and distances of adjacent members of a group can be used as criteria. Also, whether the swarm is biological or mechanical, there must exist attractant and repellant profiles in the environment so that the group can move to seek attractants and avoid repellants. We can therefore analyze the stability of a swarm by observing whether it stays cohesive and converges to equilibrium points of a combined attractant/repellant profile.
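This notion of stability can be illustrated numerically. In the hedged sketch below, the quadratic attractant profile, the attraction/repulsion function, and all gains are illustrative choices rather than a specific published model: agents descend the profile's gradient while a pairwise attraction/repulsion term keeps the group cohesive without letting it collapse, and the swarm center converges to the profile's minimum.

```python
import math

# 1-D swarm stability sketch (illustrative constants): agents seek the
# minimum of an attractant profile sigma(x) = (x - GOAL)^2 while pairwise
# long-range attraction and short-range repulsion keep the group cohesive.

GOAL = 3.0

def profile_grad(x):                       # gradient of the attractant profile
    return 2.0 * (x - GOAL)

def interaction(d, a=1.0, b=5.0, c=1.0):   # attraction minus local repulsion
    return -d * (a - b * math.exp(-d * d / c))

def step(xs, dt=0.01):
    out = []
    for i, x in enumerate(xs):
        f = -profile_grad(x)               # move downhill on the profile
        for j, y in enumerate(xs):
            if i != j:
                f += interaction(x - y)    # cohesion without collapse
        out.append(x + dt * f)
    return out

xs = [float(i) for i in range(10)]         # agents start spread over 0..9
for _ in range(2000):
    xs = step(xs)
center = sum(xs) / len(xs)                 # converges near GOAL
```

Because the pairwise interaction is an odd function, its contributions cancel in the swarm center's dynamics, so the center tracks only the attractant gradient; the group stays cohesive and settles around the profile minimum, which is exactly the stability criterion described above.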
Overview of Stability Analysis of Swarms
Stability of swarms is still an open problem, and a search of the current literature shows that very little work has been done in this area. Jin et al. proposed a stability analysis of synchronized distributed control of 1-D and 2-D swarm structures. Convergence under totally asynchronous distributed control is still an open problem, while convergence of simple asynchronous distributed control can be proven in a way similar to the convergence of the discrete Hopfield neural network. Beni proposed a sufficient condition for the asynchronous convergence of a linear swarm to a synchronously achievable configuration, since a large class of self-organizing tasks in distributed robotic systems can be mapped into reconfigurations of patterns in swarms. That model and stability analysis are, however, quite similar to the model and proof of stability for the load-balancing problem in computer networks.
Properties of a Swarm Intelligence System
The typical swarm intelligence system has the following properties:
- It is composed of many individuals;
- The individuals are relatively homogeneous (i.e., they are either all identical or they belong to a few types);
- The interactions among the individuals are based on simple behavioral rules that exploit only local information that the individuals exchange directly or via the environment (stigmergy);
- The overall behaviour of the system results from the interactions of individuals with each other and with their environment, that is, the group behavior self-organizes.
The characterizing property of a swarm intelligence system is its ability to act in a coordinated way without the presence of a coordinator or of an external controller. Many examples can be observed in nature of swarms that perform some collective behavior without any individual controlling the group, or being aware of the overall group behavior. Notwithstanding the lack of individuals in charge of the group, the swarm as a whole can show an intelligent behavior. This is the result of the interaction of spatially neighboring individuals that act on the basis of simple rules.
Most often, the behavior of each individual of the swarm is described in probabilistic terms: each individual has a stochastic behavior that depends on its local perception of the neighborhood.
Swarm Intelligence Applications & Algorithms
Swarm intelligence-based techniques can be used in a number of applications. The U.S. military is investigating swarm techniques for controlling unmanned vehicles. The European Space Agency is considering an orbital swarm for self-assembly and interferometry. NASA is investigating the use of swarm technology for planetary mapping. A 1992 paper by M. Anthony Lewis and George A. Bekey discusses the possibility of using swarm intelligence to control nanobots within the body to kill cancer tumors. Artists are using swarm technology as a means of creating complex interactive systems or simulating crowds. Tim Burton's Batman Returns was the first movie to use swarm technology for rendering, realistically depicting the movements of a group of penguins using the boids system. The Lord of the Rings film trilogy used similar technology, known as Massive, during battle scenes. Swarm technology is particularly attractive because it is cheap, robust, and simple.
The inherent intelligence of swarms has inspired many social and political philosophers, in that the collective movements of an aggregate often derive from independent decision making on the part of a single individual. A common example is how the unaided decision of a person in a crowd to start clapping will often encourage others to follow suit, culminating in widespread applause. Such knowledge, an individualist advocate might argue, should encourage individual decision making (however mundane) as an effective tool in bringing about widespread social change.
The use of swarm intelligence in telecommunication networks has also been researched, in the form of ant-based routing. This was pioneered separately by Dorigo et al. and Hewlett-Packard in the mid-1990s, with a number of variations since. Basically, the approach uses a probabilistic routing table in which each “ant” (a small control packet) flooding the network rewards/reinforces the route it successfully traverses. Reinforcement of the route in the forward direction, the reverse direction, and both simultaneously have all been researched: backward reinforcement requires a symmetric network and couples the two directions together, while forward reinforcement rewards a route before the outcome is known (but then you pay for the cinema before you know how good the film is). Because the system behaves stochastically and therefore lacks repeatability, there are large hurdles to commercial deployment.
Ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' - simulation agents - locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee. The ultimate application of ant-based routing methods might be on the Internet, where traffic is painfully unpredictable.
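The positive-feedback loop at the heart of ACO can be shown with a toy example. In the sketch below, the two-route graph, the deposit rule, and every constant are simplified assumptions for illustration, not a faithful ACO variant such as Ant System: ants choose a route with probability proportional to its pheromone, deposits are inversely proportional to route length, and evaporation keeps old trails from dominating.

```python
import random

# Toy ant colony optimization sketch (illustrative constants): ants choose
# between two routes from 'A' to 'D'; the shorter route receives stronger
# pheromone deposits, which in turn attracts more ants to it.

LENGTHS = {('A', 'B'): 2, ('B', 'D'): 2,    # long route, total length 4
           ('A', 'C'): 1, ('C', 'D'): 1}    # short route, total length 2
ROUTES = [[('A', 'B'), ('B', 'D')], [('A', 'C'), ('C', 'D')]]
tau = {e: 1.0 for e in LENGTHS}             # initial pheromone on each edge

random.seed(1)
for _ in range(200):                        # iterations
    for _ in range(10):                     # ants per iteration
        weights = [sum(tau[e] for e in r) for r in ROUTES]
        route = random.choices(ROUTES, weights=weights)[0]
        length = sum(LENGTHS[e] for e in route)
        for e in route:
            tau[e] += 1.0 / length          # shorter route -> bigger deposit
    for e in tau:
        tau[e] *= 0.9                       # pheromone evaporation
```

After the run, the pheromone on the short route's edges dominates, so later ants almost always take the shorter path: the colony has "found" the optimum without any ant knowing the whole graph.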
Particle swarm optimization or PSO is a global optimization algorithm for dealing with problems in which a best solution can be represented as a point or surface in an n-dimensional space. Hypotheses are plotted in this space and seeded with an initial velocity, as well as a communication channel between the particles. Particles then move through the solution space, and are evaluated according to some fitness criterion after each time step. Over time, particles are accelerated towards those particles within their communication grouping which have better fitness values. The main advantage of such an approach over other global minimization strategies such as simulated annealing is that the large numbers of members that make up the particle swarm make the technique impressively resilient to the problem of local minima.
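A minimal global-best PSO in the standard textbook form looks as follows; the inertia weight and acceleration coefficients are typical values chosen for illustration, and the sphere function f(x) = Σ x_i² stands in for an arbitrary fitness criterion.

```python
import random

# Minimal global-best PSO sketch (standard form; constants are typical
# illustrative values): minimize the sphere function f(x) = sum(x_i^2).

def f(x):
    return sum(v * v for v in x)

random.seed(0)
DIM, N = 3, 20
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                 # each particle's best position
gbest = min(pbest, key=f)                   # swarm's best position so far

for _ in range(200):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]                       # inertia
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]
```

Each particle is pulled toward both its own best-known point and the swarm's, so the population as a whole accelerates toward good regions of the space while its spread helps it skip over local minima.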
Stochastic diffusion search, or SDS, is an agent-based probabilistic global search and optimization technique best suited to problems where the objective function can be decomposed into multiple independent partial functions. Each agent maintains a hypothesis, which is iteratively tested by evaluating a randomly selected partial objective function parameterized by the agent's current hypothesis. In the standard version of SDS such partial function evaluations are binary, each agent becoming active or inactive as a result. Information on hypotheses is diffused across the population via inter-agent communication. Unlike the stigmergic communication used in ACO, SDS agents communicate hypotheses via a one-to-one communication strategy analogous to the tandem-running procedure observed in some species of ant. A positive feedback mechanism ensures that, over time, a population of agents stabilizes around the global-best solution. SDS is an efficient and robust search and optimization algorithm that has been extensively described mathematically.
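The test/diffuse cycle can be illustrated with the classic string-search toy problem; the code below is a simplified sketch, and the agent count, iteration count, and random-restart rule are assumptions made for illustration. Each agent hypothesizes where the pattern starts in the text and tests only one randomly chosen character per iteration (a partial objective function); inactive agents then poll one other agent and either copy its hypothesis or restart at random.

```python
import random

# Stochastic diffusion search sketch (classic string-search illustration,
# simplified): agents locate where PATTERN begins inside TEXT by testing
# a single randomly chosen character of their hypothesis each iteration.

random.seed(0)
TEXT, PATTERN = "xxxyzswarmzzyxy", "swarm"
N_AGENTS = 30
SITES = len(TEXT) - len(PATTERN) + 1        # candidate start positions

hyp = [random.randrange(SITES) for _ in range(N_AGENTS)]
active = [False] * N_AGENTS

for _ in range(50):
    # test phase: evaluate one binary partial function per agent
    for i in range(N_AGENTS):
        k = random.randrange(len(PATTERN))
        active[i] = TEXT[hyp[i] + k] == PATTERN[k]
    # diffusion phase: one-to-one communication (tandem-running analogue)
    for i in range(N_AGENTS):
        if not active[i]:
            j = random.randrange(N_AGENTS)
            hyp[i] = hyp[j] if active[j] else random.randrange(SITES)

best = max(set(hyp), key=hyp.count)         # largest agent cluster
```

Agents at the true position pass every partial test, stay active, and recruit the rest, so the population's largest cluster stabilizes on the pattern's location, which is the positive-feedback mechanism described above.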
The application of swarm principles to robots is called swarm robotics, currently one of the most important application areas for swarm intelligence. Swarms offer the possibility of enhanced task performance, high reliability (fault tolerance), low unit complexity, and decreased cost over traditional robotic systems, and they can accomplish tasks that would be impossible for a single robot to achieve. Swarm robots can be applied to many fields, such as flexible manufacturing systems, spacecraft, inspection/maintenance, construction, agriculture, and medicine. Many different swarm models have been proposed. Beni introduced the concept of cellular robotic systems, which consist of collections of autonomous, non-synchronized, non-intelligent robots cooperating in a finite n-dimensional cellular space under distributed control. Limited communication exists only between adjacent robots; the robots operate autonomously and cooperate with others to accomplish predefined global tasks. Hackwood and Beni propose a model in which the robots are particularly simple but act under the influence of “signpost robots.” These signposts can modify the internal state of the swarm units as they pass by, and under the action of the signposts the entire swarm acts as a unit to carry out complex behaviors. Self-organization is realized via a rather general model whose most restrictive assumption is the cyclic boundary condition: the model requires that the sensing swarm “circulate” in a loop during its sensing operation.
The behavior-based control strategy put forward by Brooks is quite well known and has been applied to collections of simple independent robots, usually for simple tasks. Other authors have also considered how a collection of simple robots can be used to solve complex problems. Ueyama et al. propose a scheme whereby complex robots are organized in tree-like hierarchies, with communication between robots limited to the structure of the hierarchy. Mataric describes experiments with a homogeneous population of robots acting under different communication constraints: the robots either act in ignorance of one another, are informed by one another, or cooperate intelligently with one another, and as inter-robot communication improves, more and more complex behaviors become possible. Swarm robots are more than just networks of independent agents; they are potentially reconfigurable networks of communicating agents capable of coordinated sensing and interaction with the environment. Considering the variety of possible designs of groups of mobile robots, Dudek et al. present a swarm-robot taxonomy of the different ways in which such swarm robots can be characterized; it helps to clarify the strengths, constraints, and trade-offs of various designs. The dimensions of the taxonomic axes are swarm size, communication range, topology, bandwidth, swarm reconfigurability, unit processing ability, and composition. For each dimension there are key sample points. For instance, swarm size includes the cases of a single agent, pairs, finite sets, and infinite numbers. Communication ranges include none, nearby neighbors, and “complete,” where every agent communicates with every other agent. Swarm composition can be homogeneous or heterogeneous (i.e., all the same agents or a mix of different agents). We can apply this swarm taxonomy to the swarm models above.
For example, Hackwood and Beni's model has multiple agents in its swarm, nearby communication range, broadcast communication topology, free communication bandwidth, dynamic swarm reconfigurability, heterogeneous composition, and its agent processing is Turing machine equivalent.
As research on decentralized autonomous robotic systems has developed, several areas have received increasing attention, including the modeling of swarms, agent planning or decision making and the resulting group behavior, and the evolution of group behavior. The latter two can be seen as part of the branch of distributed artificial intelligence in which several agents coordinate or cooperate to make decisions. Several optimization methods have been proposed for group behavior. Fukuda et al. introduced a distributed genetic algorithm for distributed planning in a cellular robotics system; they also proposed a concept of self-recognition for decision making and showed a learning and adaptation strategy. Other algorithms have been proposed as well.
Cooperative Behavior in Swarms of Robots
There are a number of swarm behaviors observed in natural systems that have inspired innovative ways of solving problems with swarms of robots. This is what is called swarm robotics; in other words, swarm robotics is the application of swarm intelligence principles to the control of swarms of robots. As with swarm intelligence systems in general, swarm robotics systems can have either a scientific or an engineering flavour. Clustering in a swarm of robots, mentioned above, is an example of an artificial/scientific system. An example of an artificial/engineering swarm intelligence system is the collective transport of an item too heavy for a single robot, a behavior also often observed in ant colonies.
NASA's use of swarms
NASA has been investigating swarms for future space exploration missions. The three sub-missions in the autonomous nanotechnology swarm (ANTS) concept mission each deploy multiple spacecraft to provide backups and ensure survival in space.
In one incarnation, a Saturn autonomous ring array will launch 1,000 picoclass spacecraft with specialized instruments—organized as 10 subswarms—to perform in situ exploration of Saturn's rings to understand their constitution and formation.
The lander amorphous rover antenna (LARA) application is a lunar-base-activities sub-mission that exploits new NASA-developed technologies in miniaturized robotics. Forming the basis for launching landers to the moon from remote sites, LARA also uses innovative techniques to move rovers in an amoeba-like fashion over the moon's uneven terrain.
The prospecting asteroid mission (PAM) involves launching a swarm of autonomous picoclass spacecraft (approximately 1 kilogram each) to explore the asteroid belt for asteroids with certain characteristics. Figure 1 provides an overview of the PAM mission concept. In this sub-mission, a transport ship launched from Earth will travel to a Lagrangian point, where gravitational forces on small objects such as picoclass spacecraft are all but negligible. Assembled en route from Earth, 1,000 spacecraft will be launched from the Lagrangian point into the asteroid belt. The spacecraft, equipped with specialized instruments, will form subswarms to collect relevant data from asteroids of interest.
Given that many of the spacecraft could collide with one another or with asteroids and become lost, multiple-spacecraft missions offer greater likelihood of survival and flexibility than single-spacecraft missions. Additionally, the self-directed swarm will exhibit intelligence, which is critical since round-trip delays in communication from Earth can stretch upward of 40 minutes. The mission could be lost before ground control is notified of a problem.
Swarm-based Network Management
The first swarm-based approaches to network management were proposed in 1996 by Schoonderwoerd et al. and in 1998 by Di Caro and Dorigo. Schoonderwoerd et al. proposed ant-based control (ABC), an algorithm for routing and load balancing in circuit-switched networks; Di Caro and Dorigo proposed AntNet, an algorithm for routing in packet-switched networks. While ABC was a proof of concept, AntNet, which is an ACO algorithm, was compared to many state-of-the-art algorithms, and its performance was found to be competitive, especially in situations of highly dynamic and stochastic data traffic such as can be observed in Internet-like networks. An extension of AntNet has been successfully applied to ad-hoc networks (Di Caro, Ducatelle and Gambardella 2005). These algorithms are another example of a successful artificial/engineering swarm intelligence system. Consider the unpredictable environment of a telecommunications network, in which a phone call from one place to another (Paris to Honolulu, for example) generally has to pass through several intermediate nodes (perhaps New York and San Francisco). Such a system requires a routing mechanism to tell each call where it should hop next to establish the connection, and a good routing method avoids congestion to minimize delays. Backup routes are especially valuable when traffic conditions change dramatically – for example, when stormy weather at an airport or a phone-in competition on television leads to localized surges of phone traffic, which require that messages be rerouted on the fly to less-congested parts of the network.
Researchers from Hewlett-Packard’s laboratories in Bristol, England, have developed a computer program based on ant-foraging principles that routes such calls efficiently. In the program, hordes of software agents roam through the telecom network and leave bits of information (think of them as “digital pheromone”) to reinforce paths through uncongested areas. Phone calls then follow the trails left by the antlike agents. To fine-tune the software, the researchers have added a mechanism that continually evaporates the digital pheromone, enabling the program to adjust quickly to changes in traffic conditions. When a previously swift route becomes congested, agents that follow it are delayed, and the evaporation mechanism overcomes the reinforcement process. Soon that route is abandoned, and the agents discover (or rediscover) alternatives and exploit them. The benefits are twofold: when phone calls are rerouted through the better parts of a network, the process not only allows those calls to get through quickly but also enables the congested areas to recover from the overload. Thus the ant-based solution has the inherent advantages of swarm-intelligent systems: flexibility, robustness, and self-organization.
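The reinforce-and-evaporate mechanics just described can be sketched as follows. This is a toy model, not HP's actual system: the two next hops, the exploration rate, the delay values, and all constants are invented for illustration. Ant packets reinforce a next hop in proportion to the inverse of the delay they experienced, evaporation continually discounts the table, and a small exploration rate keeps stale routes from being forgotten entirely.

```python
import random

# Toy ant-based routing sketch (illustrative, not a real system): a node
# keeps a pheromone table over two next hops toward a destination. Ants
# reinforce fast hops (deposit = 1/delay) and all entries evaporate, so
# the table adapts when congestion moves from one path to the other.

random.seed(0)
tau = {"via_NY": 1.0, "via_SF": 1.0}       # pheromone routing table
delay = {"via_NY": 1.0, "via_SF": 4.0}     # SF path initially congested

def send_ant():
    if random.random() < 0.1:              # occasional exploration
        hop = random.choice(list(tau))
    else:                                  # otherwise follow pheromone
        hop = random.choices(list(tau), weights=list(tau.values()))[0]
    tau[hop] += 1.0 / delay[hop]           # reinforce fast routes
    for h in tau:
        tau[h] *= 0.95                     # evaporation forgets stale info

for _ in range(500):
    send_ant()
prefer_before = tau["via_NY"] > tau["via_SF"]

delay["via_NY"], delay["via_SF"] = 4.0, 1.0   # congestion moves to NY
for _ in range(500):
    send_ant()
prefer_after = tau["via_SF"] > tau["via_NY"]
```

Before the change in delays the table strongly prefers the New York hop; once congestion moves, evaporation erodes the old trail faster than the now-slow hop can reinforce it, and the preference flips, which is exactly the adaptation described above.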
Swarmcasting is a technique that exploits distributed downloading to deliver high-resolution video, audio, and peer-to-peer data streams while significantly reducing the bandwidth required. The ACTLab Alluvium project at the University of Texas at Austin powers ACTLab TV (http://actlab.tv/), a concept personal TV station, using peer-to-peer media-streaming software. Essentially, it applies the swarm analogy by breaking video files into small pieces so that the system can download components from several machines simultaneously; the user can thus start watching the video before the download completes.
Swarmcast (http://www.swarmcast.com/), a commercial company, supports delivery of large amounts of data over networks using similar concepts and strives to be a significant contributor to the next generation of Internet TV.
A Whole New Way to Think About Business
Southwest Airlines in the U.S. was having trouble with its cargo operations. Even though the average plane was using only 7% of its cargo space, at some airports there wasn’t enough capacity to accommodate scheduled loads of freight, leading to bottlenecks throughout Southwest’s cargo routing and handling system. At the time, employees were trying to load freight onto the first plane going in the right direction – a seemingly reasonable strategy. But because of it, workers were spending an unnecessary amount of time moving cargo around and sometimes filling aircraft needlessly. To solve its problem, Southwest turned to an unlikely source: ants. Specifically, researchers looked at the way ants forage, using simple rules yet always finding efficient routes to food sources. When they applied this research to Southwest’s problem, they discovered something surprising: it can be better to leave cargo on a plane headed initially in the wrong direction. If, for example, they wanted to send a package from Chicago to Boston, it might actually be more efficient to leave it on a plane heading for Atlanta and then Boston than to take it off and put it on the next flight to Boston. Applying this insight has slashed freight transfer rates by as much as 80% at the busiest cargo stations, decreased the workload for the people who move cargo by as much as 20%, and dramatically reduced the number of overnight transfers. That has allowed Southwest to cut back on its cargo storage facilities and minimize wage costs. In addition, fewer planes are now flying full, which opens up significant opportunities for the company to generate new business. Thanks to the improvements, Southwest estimates an annual gain of more than $10 million.
Swarm intelligence may also hold important lessons for businesses seeking to find and exploit new markets. Consider how different species of ants attract their nest mates to new food sources. There are three basic ways in which ants lead their fellows to new food sources. Laying pheromone is a form of “mass recruitment”: a large mass of ants is attracted down the path where the pheromone is strongest. In some species, though, an ant that finds a food source returns to the nest and vibrates its antennae to convince one other nest mate to return to the site. That’s called “tandem recruitment.” In other cases, an ant vibrates its antennae to get a number of nest mates to follow. That’s “group recruitment.” In all three cases, individual ants can convey information about the quality of a food source, either by laying more pheromone or by increasing the frequency of their antenna vibrations.
The classic example of an organization with a faulty recruitment mechanism is Xerox. Its research facility, Xerox PARC, is the fabled birthplace of amazing technologies, including the graphical user interface, which the company has repeatedly failed to exploit in the marketplace. In a sense, Xerox has been like a huge colony of ants that relies on mass recruitment, which impedes the company’s efforts to divert the necessary resources away from its main food source – photocopier products.
For group recruitment to succeed, companies must provide the right nurturing environment. Specifically, we believe they should:
• Maintain their ability to explore new opportunities
while exploiting existing ones;
• Enable a person with an idea to recruit others;
• Allow, but not force, people to be recruited, even
when they are working in a core business;
• Let the system self-select the best ideas; and
• Support the winning ideas with sufficient resources.
At Cap Gemini Ernst & Young, researchers are working to create an “idea market,” open to the entire organization, which would match these conditions.
In essence, we believe that social insects have been so successful – they are almost everywhere in the ecosphere, from the Amazon rainforest to the Sahara desert and even our homes – because of three characteristics:
• Flexibility (the colony can adapt to a changing environment);
• Robustness (even when one or more individuals fail, the group can still perform
its tasks); and
• Self-organization (activities are neither centrally controlled nor locally supervised).
In computational science as well, swarms provide a very useful and insightful metaphor for the imprecise and robust field of soft computing. The idea of multiple potential solutions, even computer programs, interacting throughout a computer run is a relatively revolutionary and important one. Also new is the idea that potential problem solutions (system designs, etc.) can irregularly oscillate (with stochasticity, no less) through the system parameter hyperspace on their way to a near-optimal configuration. It is almost unbelievable, even to us, that the computer program that started out as a social-psychology simulation is now used to optimize power grids in Asia, develop high-tech products in the United States, and solve high-dimensional, nonlinear, noisy mathematical problems. But the paradigm is still in its youth. Perhaps the most powerful insight from swarm intelligence is that complex collective behavior can emerge from individuals following simple rules. The possible applications of swarm intelligence may be limited only by the imagination.
Book: SWARM INTELLIGENCE
Russell Eberhart, Purdue School of Engineering
Yuhui Shi, Electronic Data Systems, Inc.
James Kennedy, US Department of Labor
Report prepared by: Makhija Vijay Rajkumar