Dr. Alonso was a Postdoctoral Researcher funded by the Spanish Government under the "Juan de la Cierva" programme (JCI-2011-09839). He worked (from February 2012 to October 2012) under the supervision of Dr. Luis M. Bergasa with the Robesafe (Robotics and e-Safety) research group in the Department of Electronics at the University of Alcala (UAH).
- Main Research Project
- Related Projects
- Coming Events
- Past Events
- Organization Tasks
- Teaching Activities
Main Research Project:
INFANTREE: Interpretable Fuzzy Systems for Safety Autonomous Navigation in Complex Real-world Environments shared between Humans and Robots
This project proposes a new methodology for generating interpretable fuzzy systems for mobile robot applications in the context of complex real-world environments shared by humans and robots. The ultimate goal is to enable not only cohabitation but also collaboration between humans and robots integrated in heterogeneous working teams, with the aim of performing high-level tasks that would be unfeasible (or too costly) for a single individual (robot or human). The methodology involves both theoretical and practical aspects: the development of basic concepts and structures for abstracting partial views of the operational environment and the actions and intentions of human and robot team members, and for integrating them into common situation estimates.
Following the most outstanding ideas derived from his PhD dissertation (entitled “Interpretable fuzzy systems modeling with cooperation between expert and induced knowledge”), Dr. Alonso developed HILK, a generic fuzzy modeling methodology for formalizing Highly Interpretable Linguistic Knowledge into interpretable fuzzy systems.
INFANTREE aims to enhance and upgrade HILK so that it can be effectively applied to some of the most common problems in mobile robotics: environment mapping, simultaneous localization and mapping (SLAM), safe autonomous navigation based on multi-sensor fusion, obstacle detection and avoidance, planning, learning, decision-making, etc. The main novelty is that the generated fuzzy systems will be easily understood by humans, which makes human-robot cooperation easier. The aim is not only that robots act effectively and efficiently, but also that they are equipped with advanced, human-friendly capabilities for explaining why they behave in a certain way. Namely, we will address three main challenges:
- Integration of high-level sensorial information. In collaborative architectures such as those of human-robot teams, perceptual information is typically collected by a variety of distributed multimodal sensors with different characteristics (laser, sonar, WiFi, cameras, inertial measurement units, etc.) that produce evidential data at multiple levels of resolution. Developing techniques that map between these diverse descriptions and integrate (fuse) the information as a function of its expected usage is a major need that might be addressed by approaches based on interpretable fuzzy systems.
- Advanced 3D environment mapping. Definition of new mapping techniques based on the integration of the high-level descriptions of sensorial perceptions generated in the previous point. The goal is to generate a 3D map that models the environment and can be taken as a global reference for a priori planning, with the aim of performing complex spatial movements and interactions among team members. The generation of the map must be dynamic and efficient, admitting real-time tuning and combining all available information sources, no matter whether they correspond to sensors carried by humans, mounted on board robots, or even deployed along the surrounding environment. Furthermore, the generated map must be easily understandable, at a high level of abstraction, by both robots and humans.
- Safe autonomous navigation. To attain a similar level of proficiency and ability (among robots and humans) in adapting to new situations (for instance, navigating through a wide variety of complex environments), it is necessary to develop high-level knowledge that can be employed to rapidly produce tactics applicable to new navigational challenges (e.g., adapting 3D traversal techniques to new environments). The ability to close the loop between perception and action in challenging environments will be significantly enhanced by the availability of high-level descriptions and related mappings carried out by interpretable fuzzy systems, thus permitting the association of perceptual information with preferred navigation actions.
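To make the navigation challenge above more concrete, here is a minimal sketch of an interpretable fuzzy rule base mapping a perceptual reading to a navigation action, written in plain Python. The linguistic terms, membership functions, and rules are illustrative assumptions, not INFANTREE's (or HILK's) actual rule base.

```python
# Minimal interpretable fuzzy controller sketch (illustrative only):
# terms and rules are assumptions, not the project's actual knowledge base.

def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic partition of the input "front obstacle distance" (meters).
def near(d):   return tri(d, -1.0, 0.0, 1.5)
def medium(d): return tri(d, 0.5, 1.5, 3.0)
def far(d):    return tri(d, 1.5, 3.0, 10.0)

# Each rule reads as: IF distance is <term> THEN speed is <value> (m/s).
RULES = [(near, 0.0), (medium, 0.4), (far, 1.0)]

def speed(distance):
    """Crisp output: consequents weighted by each rule's firing strength."""
    num = sum(mf(distance) * out for mf, out in RULES)
    den = sum(mf(distance) for mf, _ in RULES)
    return num / den if den > 0 else 0.0
```

Because each rule is a readable linguistic statement ("IF distance is near THEN stop"), a human teammate can inspect exactly why the robot slows down, which is the interpretability property the project emphasizes.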
1. ABSYNTHE: Abstraction, Synthesis, and Integration of Information for Human-Robot Teams
This is a project funded by the Spanish Government. It involves two partners: the European Centre for Soft Computing (TIN2011-29824-C02-01) and the University of Alcala (TIN2011-29824-C02-01).
ABSYNTHE is a research project closely related to INFANTREE. In fact, interpretable fuzzy systems designed and developed in the context of INFANTREE may also be tested in the experimental scenarios set up for ABSYNTHE. Both projects deal with the interaction between humans and robots, but the scope of ABSYNTHE is wider, since its final goal is to form and deploy human-robot teams able to collaborate effectively.
2. GUAJE: Generating Understandable and Accurate Fuzzy Systems in a Java Environment.
This is a free software tool for modeling interpretable fuzzy systems.
Please note that a new release, GUAJE-v2.0, is now available!
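As a flavor of what a tool like GUAJE models, here is a hedged plain-Python sketch (not GUAJE's own API, which is a Java tool) of an interpretable rule base that fuses two sensor readings and can explain its conclusion by naming the strongest rule. Sensor names, terms, and rules are illustrative assumptions.

```python
# Sketch of an interpretable fuzzy rule base with an explanation facility.
# Plain Python for illustration; not GUAJE's actual API or rule base.

def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms for two fused sensor readings, normalized to [0, 1].
TERMS = {
    "low":  lambda v: tri(v, -0.5, 0.0, 0.5),
    "high": lambda v: tri(v, 0.5, 1.0, 1.5),
}

# IF laser is <t1> AND sonar is <t2> THEN situation is <label> (min AND).
RULES = [
    (("high", "high"), "obstacle"),
    (("high", "low"),  "check sonar"),
    (("low",  "high"), "check laser"),
    (("low",  "low"),  "clear"),
]

def classify(laser, sonar):
    """Return (label, firing strength) of the strongest rule as explanation."""
    def strength(rule):
        (t1, t2), _ = rule
        return min(TERMS[t1](laser), TERMS[t2](sonar))
    best = max(RULES, key=strength)
    return best[1], strength(best)
```

Returning the winning linguistic rule alongside the decision is one simple way to realize the human-friendly explanation capability discussed above.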
IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2013)
I'm currently chair of the Task Force on Software Tools and Data Repository in the Standards Committee of the IEEE Computational Intelligence Society (IEEE-CIS).
I'm also vice-chair of the Task Force on Fuzzy Systems Software under the IEEE-CIS Fuzzy Systems Technical Committee.
Last semester (February to June 2012), I taught in the Digital Electronics lab. For further details, go to http://www.depeca.uah.es/depeca/docencia_prof/index.php?id=398