Project information:

AIVATAR. Artificial Intelligence based modular Architecture Implementation and Validation for Autonomous Driving (October 2022 – September 2025). Reference: PID2021-126623OB-I00. Funded by the Spanish Ministry of Science and Innovation (MICINN).


According to the United Nations (UN), more than two thirds of the world's population will live in cities by 2050. Providing safer, more sustainable and more efficient mobility for goods and people in populated cities is a priority identified in Horizon Europe and in the Spanish Plan for Scientific, Technical and Innovation Research 2021-2023. This transformational change in mobility will contribute to the Sustainable Development Goals (SDGs) of the UN's 2030 Agenda, to the European Green Deal and to the Vision Zero initiative. One key technology for future mobility is Autonomous Driving (AD), which over the last decade has attracted strong attention from both research and industry. Numerous fully autonomous vehicle projects are in various stages of development; however, autonomous vehicle technology is far from ready to be deployed at full scale. The Connected, Cooperative and Automated Mobility (CCAM) European Partnership identifies four problems for the introduction of AD in the market:
1) Insufficient demand.
2) AD technologies are not yet sufficiently mature.
3) Current R&I efforts are fragmented and lack a long-term vision.
4) Demonstration and scale-up is limited.
On the other hand, the vast amount of data that our society generates every day (Big Data) and the exponential increase in computational power have revolutionized the field of Artificial Intelligence (AI), boosting the Machine Learning (ML) and Deep Learning (DL) paradigms. Recent DL algorithms are extremely powerful tools that can contribute to solving the aforementioned problems.
Our group worked on the Tech4AgeCar project (2019-2021), developing classical techniques that provide our open-source automated car with autonomous driving capabilities, which were tested both in simulation on the challenging CARLA Leaderboard and in real field tests in restricted use cases. Building on the results of that project, AIVATAR proposes a robust and modular human-like Autonomous Driving architecture that evolves our current techniques by cloning human behaviors through deep learning, with the goal of achieving safe and sustainable navigation in challenging scenarios in both simulation and controlled real conditions. Special attention is paid to the validation methodology and to the explainability of the decisions in order to achieve user acceptance.
The proposal is disruptive and presents five main breakthroughs:
1) Implement long-term trajectory prediction with multi-head attention to infer the intentions of surrounding agents (vehicles and Vulnerable Road Users, VRUs), so that decisions can be made in advance, as humans do.
2) Address a complete planning system based on a hybrid structure, with a high-level decision-making layer based on Hierarchical Multi-agent Deep Reinforcement Learning (DRL) and maneuver control based on Model Predictive Control (MPC), with the goal of managing complex traffic decisions by imitating human behaviors.
3) Design a vehicle-to-driver (V2D) communication protocol based on question/answer interaction using natural language and automatic speech recognition/synthesis, improving the explainability of the AD decisions.
4) Provide a modular AD architecture able to be tested in simulation and in real conditions.
5) Propose a standard validation methodology that evaluates both the individual modules and the whole architecture holistically in simulation, minimizing the gap between simulation and real-world evaluation.

Research Team:

Bergasa Pascual, Luis M.
Barea Navarro, Rafael
Revenga de Toro, Pedro
Ocaña Miguel, Manuel
López Guillén, Elena
Escudero Hernanz, Marisol
García Garrido, Miguel Ángel
Llamazares Llamazares, Ángel

Work Team:

Juan Felipe Arango Vargas
Carlos Gómez Huélamo
Javier Araluce Ruiz
Rodrigo Gutiérrez Moreno
Miguel Antunes García
Santiago Montiel Marín
Pablo Pardo Decimavilla
Ricardo Ignacio Pizarro Carreño
Fabio Sánchez García

The UAH-DriveSet

The UAH-DriveSet is a public collection of data captured by our driving-monitoring app DriveSafe, gathered by several testers in different environments. The dataset aims to facilitate progress in the field of driving analysis by providing a large set of variables captured and processed through the sensors and capabilities of a smartphone during independent driving tests. The application was run by 6 different drivers, each in a different vehicle, performing 3 different behaviors (normal, drowsy and aggressive) on two types of road (motorway and secondary road). The result is more than 500 minutes of naturalistic driving with its associated raw data and additional semantic information, together with the video recordings of the trips.
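As a minimal illustration of working with the raw data (the actual file names and column layouts should be checked against the dataset documentation; the log format sketched here, a timestamp followed by whitespace-separated sensor readings, is an assumption), a recording's text logs could be parsed along these lines:

```python
import io

def parse_log(lines):
    """Parse hypothetical 'timestamp v1 v2 ...' lines into
    (timestamp, [values]) tuples, skipping blanks and comments."""
    records = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore empty lines and comment lines
        fields = line.split()
        records.append((float(fields[0]), [float(f) for f in fields[1:]]))
    return records

# Usage with an in-memory sample standing in for a real log file
sample = io.StringIO("0.0 40.4 -3.6\n0.1 40.4 -3.5\n")
records = parse_log(sample)
print(len(records))  # 2
```

Each record pairs a timestamp with its sensor values, which makes it straightforward to align signals from different logs afterwards.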



To download this dataset, click here:

The UAH-DriveSet


RobeSafe Driver Monitoring Video Dataset (RS-DMV)

The RS-DMV dataset is a set of video sequences of drivers, recorded with cameras installed over the dashboard. The dataset currently contains 10 video sequences. The drivers were fully awake, talked frequently, and were asked to look regularly at the rear-view mirrors and to operate the car sound system. The sequences contain occlusions, illumination changes and other elements that are problematic for face-tracking and driver-monitoring systems based on computer vision.

Frames are recorded in gray-scale at 30 frames per second and stored as RAW video. The frame size of outdoor videos is 960x480 pixels, and 1390x480 pixels for indoor videos. Faces in the videos have been annotated with 20 points.

The RS-DMV was created by the RobeSafe Research Group of the Department of Electronics of the UAH. The simulator sequences were recorded under the CABINTEC project.


To download this dataset, click here:

Datasets (uah.es)

Santiago Montiel Marín

Personal Information


Position: BSc student in Electronics and Industrial Automation Engineering

LinkedIn: Santiago Montiel

GitHub: Santiago Montiel

About me

My name is Santiago Montiel Marín, and I am currently in the fourth and final year of a BSc degree in Electronics and Industrial Automation Engineering at the University of Alcalá. As a research staff member of the RobeSafe Research Group (Electronics Department, University of Alcalá), I am developing my Final Degree Project, "Multi-Object Tracking System based on LiDAR and RADAR for Intelligent Vehicles Applications", under the supervision of Prof. Luis Miguel Bergasa and PhD student Carlos Gómez Huélamo. I would like this project to become the starting point of my research career. My main areas of interest are Computer Vision, Machine Learning, Deep Learning, Robotics, Embedded Systems, Perception for Intelligent Vehicles and related fields.


Research Areas

Computer Vision, Machine Learning, Deep Learning, Robotics, Embedded Systems, Perception in Intelligent Vehicles

Miguel Eduardo Ortiz Huamani

Personal Information


Position: Master's Student

GitHub: Miguel Eduardo Ortiz Huamani


About me

Graduate in Computer Engineering and Master's student in Cybersecurity at the University of Alcalá.


Research Areas

Computer Vision, Machine Learning, Deep Learning, Intelligent Vehicles, Robotics

Juan Felipe Arango Vargas

Personal Information


Position: PhD Student

LinkedIn: Juan Felipe Arango Vargas


About me

My name is Juan Felipe Arango Vargas, and I am currently a PhD candidate in Advanced Electronic Systems and Intelligent Systems at the RobeSafe Research Group (Electronics Department, University of Alcalá), under the supervision of Prof. Luis Miguel Bergasa and Prof. Pedro Revenga. I received my BSc degree in Industrial Electronics and Automation Engineering and my MSc in Industrial Engineering from the University of Alcalá.

My PhD thesis focuses on "Autonomous Navigation Techniques Validation over a Real Electric Vehicle". As a member of the RobeSafe Research Group, I work within the Tech4AgeCar and RoboCity2030 research projects. My main role consists of the automation and preparation of an electric vehicle to carry out validation tests of the autonomous navigation systems developed within the group. More specifically, my work covers the design and implementation of the Drive-By-Wire module and of the Localization module, based on the fusion of RTK-DGNSS, IMU and odometry positions, as well as the design of the validation methods for the navigation techniques in real environments.


Research Areas

Autonomous Vehicle, Drive-By-Wire System, Global Navigation Satellite System (GNSS), Real-Time Kinematic (RTK) Positioning, MEMS-IMU, Sensor Fusion, Extended Kalman Filter (EKF), Robot Operating System (ROS), Automotive Electronics, Embedded Systems.


Polytechnic School

University of Alcalá

Av. Jesuitas s/n.

Alcalá de Henares, 28871

Madrid, Spain
