Swarmanoid control
Example videos
Adaptive eye-bot movements to guide foot-bots
In this video, eye-bots guide the foot-bots between a nest (top right) and target
(bottom left) location in the arena. The eye-bots form a communication network
between them, and derive the direction to send foot-bots in from the next hop in
the shortest route between the nest and the target in this network. On the
ground there are a number of items that form obstacles for the foot-bots, but not
for the eye-bots (e.g., bookshelves, tables, etc.). Since the eye-bots are unaware
of these obstacles, they might send foot-bots in the direction of obstacles, or
they might be positioned above an obstacle, which would make it impossible for
foot-bots to approach them in order to get instructions. In this video, the
eye-bots adapt their position in order to place themselves in a location where
they can best give directions to the foot-bots. They derive good positions from
the locations where they see the most foot-bots. In this video, arrows above the
eye-bots show the directions in which eye-bots are sending foot-bots, and circles
below the eye-bots show the area on the ground where foot-bots and eye-bots can see
each other (a different color is used for the eye-bots indicating the nest and
target locations). Finally, a short line above each foot-bot shows its intended
movement direction.
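The next-hop rule described above can be sketched in a few lines of Python. This is an illustrative reconstruction under our own assumptions (the graph representation, positions, and function names are invented, not the project's controller code):

```python
from collections import deque

def next_hop_directions(positions, edges, goal):
    """BFS from the goal over the eye-bot communication network; each
    eye-bot's pointing direction is the unit vector toward its next hop
    on the shortest route to the goal."""
    # Build an undirected adjacency list from the network links.
    adj = {n: [] for n in positions}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    # BFS outward from the goal records each node's parent, which is
    # exactly its next hop toward the goal.
    parent = {goal: None}
    queue = deque([goal])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in parent:
                parent[nb] = node
                queue.append(nb)
    # Convert each next hop into a unit direction vector.
    dirs = {}
    for n, hop in parent.items():
        if hop is None:
            continue
        (x0, y0), (x1, y1) = positions[n], positions[hop]
        dx, dy = x1 - x0, y1 - y0
        norm = (dx * dx + dy * dy) ** 0.5
        dirs[n] = (dx / norm, dy / norm)
    return dirs
```

With three eye-bots in a line and the target under the last one, the first two both point along the chain toward the target; the same rule, run from the nest, yields the opposite directions.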
Adaptive swarm navigation in a dead end situation
In this video, we have the same situation, but with more complex
obstacles. Here, eye-bots do not learn new positions (in fact, they remain
static), but they learn to adapt the direction they are sending foot-bots in.
The eye-bots maintain two policies for guiding the foot-bots, one pointing towards
the target and one pointing towards the source (represented by pink and blue
lines above the robots in the videos). The eye-bots sample from these policies to
give instructions to the foot-bots, and they observe foot-bot behavior to update
their policies. Specifically, if an eye-bot sees a foot-bot that is travelling
from the source to the target, it increases the policy pointing to the source
for the direction that the foot-bot is coming from. Also, it decreases the policy
pointing to the target for this same direction. When an eye-bot sees foot-bots
performing obstacle avoidance behavior, it decreases both policies in the
direction of those foot-bots, as obstacle avoidance behavior
points to the presence of obstacles or to congestion between foot-bots. In the
specific experiment we carry out here, we can see how the system learns to send
the foot-bots around some obstacles on the ground.
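The two-policy update rule can be sketched as follows. The sector discretisation, learning rate, and renormalisation scheme below are illustrative assumptions, not the parameters used in the experiments:

```python
import random

N_SECTORS = 8  # discretised directions around the eye-bot (assumed)

class GuidancePolicies:
    """Illustrative sketch of the two-policy update rule."""
    def __init__(self, lr=0.1):
        self.lr = lr
        # Uniform initial weights over direction sectors.
        self.to_target = [1.0 / N_SECTORS] * N_SECTORS
        self.to_source = [1.0 / N_SECTORS] * N_SECTORS

    def _renormalise(self, policy):
        total = sum(policy)
        for i in range(N_SECTORS):
            policy[i] /= total

    def saw_footbot_from_source(self, sector):
        # A foot-bot arriving from `sector` while travelling from
        # source to target: that direction leads back to the source,
        # so reinforce it in the source policy and weaken it in the
        # target policy.
        self.to_source[sector] += self.lr
        self.to_target[sector] = max(self.to_target[sector] - self.lr, 1e-6)
        self._renormalise(self.to_source)
        self._renormalise(self.to_target)

    def saw_obstacle_avoidance(self, sector):
        # Obstacle avoidance signals an obstacle or congestion in that
        # direction: weaken both policies there.
        for policy in (self.to_target, self.to_source):
            policy[sector] = max(policy[sector] - self.lr, 1e-6)
            self._renormalise(policy)

    def sample_direction(self, policy):
        # Instructions are sampled in proportion to the policy weights.
        return random.choices(range(N_SECTORS), weights=policy)[0]
```

Directions near obstacles are thus sampled less and less often, which is how the swarm routes itself around dead ends.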
Swarmanoid robots find shortest path over double bridge
Here, we use the same system as before in a different context. The foot-bots are
given the choice between two different paths between nest and target, a long one
and a short one. The experiment is designed to resemble that of Deneubourg et
al. in their experiments with ants. Our Swarmanoid adaptive navigation system is
able to find the shortest path in a majority of the cases.
Energy Efficient Deployment of Eye-bots
A major challenge with aerial robotics is the limited flight
autonomy of current systems. Therefore, efficient strategies for
minimising energy consumption are required to realise the
autonomous deployment of aerial robots for tasks such as search
and exploration. The video presents a swarm search behaviour that
exploits the eye-bots' ability to attach to the ceiling, thereby saving
energy. It also shows several deployment mechanisms that can further
decrease energy consumption by reducing the swarm's wasted flight time.
We aim to reduce wasted flight time via two premises: 1) exploiting
environment information as it is acquired during robot deployment to
better guide flying robots, and 2) obeying the law of diminishing
returns as robot group size increases.
Self-Organised Recruitment and Deployment of Foot-bots with Eye-bots
In this video, tasks are activated in sequence. An eye-bot requests 5 to 10 robots to
execute the task it is coordinating. The request is relayed to the closest eye-bot in
the recruitment area, which takes care of recruiting the needed foot-bots. When the
team is formed, the recruiting eye-bot delivers it to the requesting eye-bot. After
the execution of the task, the foot-bots are returned to the recruitment area. At
this point, another eye-bot requests 9 to 13 foot-bots for its task, and in this
case too, recruitment, delivery and return are successful.
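The recruit / deliver / return cycle can be sketched as a toy pool model. The pool size below is an illustrative assumption; the request bounds (5 to 10, then 9 to 13) are the ones from the video:

```python
class RecruitmentArea:
    """Minimal sketch of the recruit / deliver / return cycle."""
    def __init__(self, n_footbots):
        self.available = n_footbots  # foot-bots idle in the area

    def recruit(self, req_min, req_max):
        # Take as many idle foot-bots as possible, capped at the
        # requested maximum; fail if even the minimum cannot be met.
        team = min(self.available, req_max)
        if team < req_min:
            return None
        self.available -= team
        return team

    def give_back(self, team):
        # After task execution the foot-bots return to the area.
        self.available += team
```

With, say, 12 foot-bots in the pool, the first request (5 to 10) is filled with a full team of 10; the second request (9 to 13) cannot be satisfied until that team is returned, matching the sequential behaviour in the video.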
Self-Organised Recruitment and Deployment of Foot-bots with Eye-bots
In this video, we show that the recruitment system is also successful when dealing with
multiple parallel and asynchronous requests. Initially, two eye-bots request foot-bots
at the same time. One eye-bot requests 5 to 10 foot-bots, the other 7 to 13. The requests
are relayed to two eye-bots in the recruitment area. While the two foot-bot teams are
formed in parallel, a third eye-bot requests 10 to 12 foot-bots. This new request triggers
the redistribution of the already recruited foot-bots. Eventually, one team is formed and,
when the team leaves the recruitment area, further redistribution takes place, thus
allowing another group to be formed and sent to task execution. The third team is formed
when the first is returned to the recruitment area.
Self-Organised Recruitment and Deployment of Foot-bots with Eye-bots
In this video, we show how deadlocks are resolved in the system. There are 30 available
foot-bots in the recruitment area, and four simultaneous recruitment requests
(min=12, max=13) are formulated. The eye-bots form their teams in
parallel, but soon a deadlock occurs -- no eye-bot can satisfy the minimum requested
quota. When an eye-bot detects convergence to a quota below the minimum, it
spikes, with a small probability, the leaving probability it sends to the foot-bots. This
simple mechanism is sufficient to allow the system to overcome the deadlock and
continue functioning.
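The deadlock-breaking rule can be sketched as follows. For illustration, a spike is modelled simply as a stuck team releasing all its foot-bots back to the pool; the probability value and function names are assumptions, not project parameters:

```python
import random

def break_deadlock(quotas, req_min, spike_prob=0.05, rng=random):
    """Sketch of the deadlock-breaking rule: an eye-bot stuck below
    its minimum quota spikes, with small probability, the leaving
    probability it broadcasts, releasing its foot-bots back to the
    pool so that another eye-bot can complete its team."""
    released = 0
    new_quotas = []
    for q in quotas:
        if q < req_min and rng.random() < spike_prob:
            released += q        # this team's foot-bots leave
            new_quotas.append(0)
        else:
            new_quotas.append(q)  # team keeps waiting for recruits
    return new_quotas, released
```

Because each stuck eye-bot spikes only occasionally and independently, usually one or two teams dissolve while the others absorb the released foot-bots, which is enough to escape the deadlock.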
Phat-bot creation and navigation
Groups of three foot-bots coordinate in order to effectively grip and transport
a hand-bot towards a target location (we term the resulting robot aggregate a
"phat-bot").
Handbot bar lifting
Two hand-bots cooperate to lift a bar. The bar is grasped with the hand-bots' hands
and then lifted using the hand-bots' ropes. The controller of each hand-bot
independently tries to keep the bar's inclination within a certain range, resulting in
a more or less coordinated lift.
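The independent inclination-keeping controller can be sketched as a simple dead-band rule; the threshold and command names below are illustrative assumptions, not taken from the real hand-bot controller:

```python
def rope_command(inclination, dead_band=5.0):
    """Each hand-bot independently reels its rope in or out to keep
    the bar's inclination (degrees, positive meaning its own end is
    too low) inside a dead band; values are illustrative only."""
    if inclination > dead_band:
        return "reel_in"    # own end too low: shorten the rope
    if inclination < -dead_band:
        return "reel_out"   # own end too high: lengthen the rope
    return "hold"           # within range: keep the rope length
```

Because both hand-bots run this rule on their own reading of the bar, the lift stays roughly level without any explicit communication between them.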
Coming soon
High quality graphics
This is a concept video in which we depict a possible
future application of the swarmanoid. The
swarmanoid is deployed into a partially collapsed building to
find and retrieve a target object. In the video, we show
coordination of the eye-bots and foot-bots to complete the task.
Task and environment
The Swarmanoid Task
The task we have chosen is a search and retrieval task. The task can be
decomposed into the following sub-tasks that are executable asynchronously
and/or sequentially by the robots acting in the environment:
- item search
- collection/grasping
- transport
- deposition/organisation
A number of real-world problems can be seen in terms of some composition of
these basic sub-tasks. Therefore, the ability to individually and collectively
solve these sub-tasks by the swarmanoid will provide an indirect validation of
its potential effectiveness to tackle many real-world problems of interest.
Task Complexity Parameters
The various complexity parameters for the task are shown below.
The swarmanoid will have to search for one or more target objects located either
on the 2D floor or in the 3D space of a room. Once the target is detected,
it has to be grasped, collected, and transported, either individually or
grouped inside a container, to some location that can be randomly selected
(e.g., to cluster objects) or selected according to some specific criterion
(e.g., to collect the objects inside an assigned shelter). At the deposition
location the swarmanoid may be required either to group the objects into unstructured
piles, or to organise the objects into some form of structure (e.g., objects
could be stacked, or arranged in a pattern).
The default task complexity parameters will involve the swarmanoid transporting the
target objects individually to a pre-specified location, and then grouping
the objects into unstructured piles.
The Swarmanoid Environment
The swarmanoid will act in an enclosed indoor environment. We make the assumption
that the ceiling is ferromagnetic.
Environment Complexity Parameters
Below, we provide a list of environmental parameters that can be altered. A number of different
parameter categories have been chosen. For each category the complexity parameters are listed in
order of increasing behavioural sophistication required by the swarmanoid. In each category
the default parameter is indicated.
Environment Size
- Small Room --- Foot-bots can find target in reasonable time frame
- Large Room --- Eye-bots are needed to find target (default)
Environment Structure and Obstacles
- Simple Structure (One Room), No Obstacles (default)
- Obstacles (Walls, Troughs)
- Complex Structure (Corridors, Multiple rooms)
Human Interaction
- No human present in environment (default)
- Human passive --- present in environment. May move, but takes no deliberate action to affect swarmanoid
- Human collaborates with swarmanoid
- Human disrupts swarmanoid
Target Object Quantity
- 1 object to be found and retrieved (default)
- Many objects to be found and retrieved
Target Object Movability
- Single robot can move object
- Cooperation required due to nature of object (e.g., too heavy for a single robot) (default)
Target Object Grippability
- Gripping easy --- Object designed to be easily grippable (default)
- Gripping hard or requires cooperation
Target Object Location
- Target on floor
- Target raised (e.g., on shelf or table) (default)
Target Object Visibility (size, luminosity, etc.)
- Easily detectable --- Object designed to make recognition easy (default)
- Not easy to detect
Hand-bots
Work on all hand-bot components is still in its early stages, as the design and
implementation of all autonomous hand-bot control components will depend heavily
on the finalised hand-bot hardware.
Collective Lifting
This component concerns the collective lifting of a heavy object by a group of hand-bots. The term "heavy" refers to the fact that a single hand-bot cannot lift the object alone; it needs the cooperation of other hand-bots. The work is at an early stage of development: it has so far been used as a test bench for the hand-bot's simulated model in the Swarmanoid simulator "ARGoS", and to explore the behavioral possibilities of the hand-bot platform.
The design and implementation will depend heavily on the finalised hand-bot hardware.
Details of Initial Experimentation
Vertical plane exploration
This component uses two hand-bots in conjunction to explore a vertical plane; as an example, this behavior can be used to search for an object on a shelf. The idea is that the object's location is known only approximately, for example because an eye-bot has seen it, and the two hand-bots have to search for it. The work is at an early stage of development: it has so far been used as a test bench for the hand-bot's simulated model in the Swarmanoid simulator "ARGoS", and to explore the behavioral possibilities of the hand-bot platform.
The design and implementation will depend heavily on the finalised hand-bot hardware.
Details of Initial Experimentation
Eye-bots
Swarm Search
This component is concerned with robust and scalable flying robotic swarms that perform coordinated search without requiring absolute positioning, localisation capabilities, or an internal world model. The work focuses on the flying eye-bots, but the presented algorithm is generalisable to other mobile robotic platforms. We present a self-organising swarm control strategy that deploys a mobile distributed sensor and communication network; the deployed network can cover an environment with an array of sensors, provide navigational aid to other robots such as the foot-bots, and perform distributed task allocation.
Details of Initial Experimentation
Flight Control --- Attitude
Low level stability control of the Eye-bots is vital for the realisation
of safe locomotion with the autonomous Eye-bot swarm in
constrictive indoor environments.
Attitude control, which must keep the platform level
with no tilt in pitch or roll, has been achieved via a high-speed
PID (Proportional-Integral-Derivative) controller. The flight computer uses
three rate gyros that measure rotational velocities and a three-axis
accelerometer. This has resulted in minimal disturbances and a stable hover.
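A common way to combine these sensors is a complementary filter feeding a textbook PID loop, sketched below under our own assumptions about gains and signal conventions; this illustrates the technique, not the Eye-bot's actual flight code:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the gyro's rotational velocity (accurate short-term but
    drifting) with the accelerometer's gravity-derived tilt angle
    (noisy but drift-free). `alpha` is a typical value, assumed."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

class PID:
    """Textbook PID on the estimated tilt angle; gains are placeholders."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # Accumulate the integral and differentiate the error, then
        # combine the three terms into a corrective command.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)
```

One such loop per axis (pitch and roll), run at high rate, is the standard recipe for the level hover described above.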
Flight Control --- Lateral Drift
Imperfections in the blades and motors, differing air flows over certain
components on the Eye-bot, and outside forces such as air drafts can cause
the Eye-bot to move without any noticeable change in pitch or roll. These
motions cannot be detected by the IMU and so it cannot correct for this
gentle drift. In order to prevent this drift, a sensor is required that measures
absolute lateral velocity or displacement. However, to date no such sensor exists that would
be appropriate for the Eye-bot. One approach being
investigated utilises fly-inspired optical flow to measure velocity relative
to the environment. An alternative approach is to make use of the infra-red
relative positioning system that the Eye-bots will be equipped with: if one Eye-bot is static, via attachment to the ceiling,
then the drift of nearby Eye-bots can be measured, and corrected, relative to it.
Flight Control --- Altitude Control
An additional critical low level control ability of the eye-bots is autonomous
altitude control. That is, the ability to maintain a constant
height relative to the ground plane under external disturbances. To this
end, the eye-bot has been equipped with an ultrasound distance sensor that
perceives the distance to the ground. An
additional PID controller is utilised to maintain constant altitude under
varying environmental conditions. Results show stable hover with a standard
deviation in altitude of only 4.93cm (altitude set-point at
80cm), which is respectable given the limited sensor resolution
of 2.54cm. Furthermore, by utilising the autonomous
altitude control, it is possible to perform autonomous take-off: the eye-bot
ascends quickly from the ground towards the set-point, settling into a stable
hover at approximately the specified altitude.
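The altitude hold and take-off behaviour can be illustrated with a toy point-mass simulation; the mass, gains, and time step below are invented for illustration, and only the 80cm set-point comes from the experiment:

```python
G = 9.81    # gravity (m/s^2)
MASS = 0.5  # toy vehicle mass (kg), not the real eye-bot's
DT = 0.02   # control period (s), assumed

def controller_thrust(altitude, velocity, set_point=0.80, kp=8.0, kd=6.0):
    """P-D correction around the gravity-compensating hover thrust,
    mimicking an ultrasound-based altitude hold (gains assumed)."""
    error = set_point - altitude
    return MASS * (G + kp * error - kd * velocity)

def physics_step(altitude, velocity, thrust):
    """One semi-implicit Euler step of the vertical point-mass model."""
    accel = thrust / MASS - G
    velocity += accel * DT
    altitude += velocity * DT
    return altitude, velocity

# Autonomous take-off: start on the ground, run the loop for 10 s.
alt, vel = 0.0, 0.0
for _ in range(500):
    alt, vel = physics_step(alt, vel, controller_thrust(alt, vel))
```

With these (overdamped) gains the toy model climbs from the ground and settles near the 80cm set-point, mirroring the take-off behaviour described above.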
Aerial Chain Formation
This component builds on the work done for the foot-bot chaining component in the
SWARMBOTS project. The method of building chains, and chain directionality
remains the same. However, the chain is made of eye-bots rather than foot-bots.
This component could be used in the initial exploration and search for the
targets conducted by the eye-bots.
Direct Foot-bots
In this component, one or more eye-bots actively direct foot-bots in a
particular direction. It could be used in cases where the eye-bots have found the target
object and need to direct the foot-bots towards it. Work on this component is
in its early stages, as the design and implementation of this autonomous control
mechanism will depend heavily on the finalised eye-bot hardware.