Browsing by Author "Arganda-Carreras, Ignacio"
Showing 1 - 3 of 3
Item
A comparative analysis of human behavior prediction approaches in intelligent environments (MDPI, 2022-01-18)
Almeida, Aitor; Bermejo Fernández, Unai; Bilbao Jayo, Aritz; Azkune Galparsoro, Gorka; Aguilera, Unai; Emaldi, Mikel; Dornaika, Fadi; Arganda-Carreras, Ignacio
Behavior modeling has multiple applications in the intelligent environment domain. It has been used in different tasks, such as the stratification of different pathologies, the prediction of user actions and activities, or the modeling of energy usage. Specifically, behavior prediction can be used to forecast the future evolution of the users and to identify behaviors that deviate from the expected conduct. In this paper, we propose the use of embeddings to represent user actions, and we study and compare several behavior prediction approaches. We test multiple model architectures (LSTMs, CNNs, GCNs, and transformers) to ascertain the best approach to using embeddings for behavior modeling, and we also evaluate multiple embedding retrofitting approaches. To do so, we use the Kasteren dataset for intelligent environments, one of the most widely used datasets in the areas of activity recognition and behavior modeling.

Item
Egocentric vision-based action recognition: a survey (Elsevier B.V., 2022-02)
Núñez Marcos, Adrián; Azkune Galparsoro, Gorka; Arganda-Carreras, Ignacio
The egocentric action recognition (EAR) field has recently grown in popularity due to the affordable and lightweight wearable cameras available nowadays, such as GoPro and similar devices. The amount of egocentric data generated has therefore increased, triggering interest in the understanding of egocentric videos. More specifically, the recognition of actions in egocentric videos has gained popularity due to the challenge it poses: the wild movement of the camera and the lack of context make it hard to recognise actions with a performance similar to that of third-person vision solutions.
This has ignited research interest in the field and, nowadays, many public datasets and competitions can be found in both the machine learning and computer vision communities. In this survey, we aim to analyse the literature on egocentric vision methods and algorithms. To that end, we propose a taxonomy that divides the literature into various categories and subcategories, contributing a more fine-grained classification of the available methods. We also review the zero-shot approaches used by the EAR community, a methodology that could help transfer EAR algorithms to real-world applications. Finally, we summarise the datasets used by researchers in the literature.

Item
Vision-based fall detection with convolutional neural networks (Hindawi Limited, 2017-12-06)
Núñez Marcos, Adrián; Azkune Galparsoro, Gorka; Arganda-Carreras, Ignacio
One of the biggest challenges in modern societies is the improvement of healthy aging and the support of older persons in their daily activities. In particular, given its social and economic impact, the automatic detection of falls has attracted considerable attention in the computer vision and pattern recognition communities. Although approaches based on wearable sensors have provided high detection rates, some potential users are reluctant to wear them, and thus their use is not yet widespread. As a consequence, alternative approaches such as vision-based methods have emerged. We firmly believe that the emergence of the Smart Environments and Internet of Things paradigms, together with the increasing number of cameras in our daily environments, forms an optimal context for vision-based systems. Consequently, here we propose a vision-based solution using convolutional neural networks to decide whether a sequence of frames contains a person falling.
To model the video motion and make the system scenario-independent, we use optical flow images as input to the networks, followed by a novel three-step training phase. Furthermore, our method is evaluated on three public datasets, achieving state-of-the-art results on all three.
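The input representation described in the last abstract, stacking motion fields from a window of frames into a multi-channel tensor for a CNN, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the horizontal and vertical flow components are approximated here with gradient-weighted frame differences (a stand-in for a real optical flow estimator such as OpenCV's Farnebäck method), and the window length and image size are arbitrary choices for the example.

```python
import numpy as np

def flow_stack(frames, window=10):
    """Build a CNN input tensor by stacking per-frame motion fields.

    frames: array of shape (T, H, W), grayscale video.
    Returns an array of shape (2 * window, H, W): one horizontal and
    one vertical motion channel for each of the last `window` steps.
    NOTE: real optical flow (e.g. Farneback) is approximated here by
    gradient-weighted frame differences, for illustration only.
    """
    t, h, w = frames.shape
    assert t >= window + 1, "need at least window + 1 frames"
    channels = []
    for i in range(t - window - 1, t - 1):
        diff = frames[i + 1].astype(float) - frames[i].astype(float)
        # crude horizontal / vertical motion components
        gx = np.gradient(frames[i].astype(float), axis=1)
        gy = np.gradient(frames[i].astype(float), axis=0)
        channels.append(diff * np.sign(gx))
        channels.append(diff * np.sign(gy))
    return np.stack(channels)  # shape (2 * window, H, W)

# toy video: 12 frames of 64x64 noise stands in for a real sequence
video = np.random.randint(0, 256, size=(12, 64, 64))
x = flow_stack(video, window=10)
print(x.shape)  # (20, 64, 64)
```

A tensor of this shape (two channels per flow image, times the window length) is the kind of input a 2D CNN classifier would consume to decide whether the window contains a fall.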