Browsing by Author "Azkune Galparsoro, Gorka"

Showing 1 - 7 of 7
  • Item
    A comparative analysis of human behavior prediction approaches in intelligent environments
    (MDPI, 2022-01-18) Almeida, Aitor; Bermejo Fernández, Unai; Bilbao Jayo, Aritz; Azkune Galparsoro, Gorka; Aguilera, Unai; Emaldi, Mikel; Dornaika, Fadi; Arganda-Carreras, Ignacio
    Behavior modeling has multiple applications in the intelligent environment domain. It has been used in different tasks, such as the stratification of different pathologies, prediction of user actions and activities, or modeling of energy usage. Specifically, behavior prediction can be used to forecast the future evolution of the users and to identify those behaviors that deviate from the expected conduct. In this paper, we propose the use of embeddings to represent user actions, and study and compare several behavior prediction approaches. We test multiple model architectures (LSTMs, CNNs, GCNs, and transformers) to ascertain the best approach to using embeddings for behavior modeling, and also evaluate multiple embedding retrofitting approaches. To do so, we use the Kasteren dataset for intelligent environments, which is one of the most widely used datasets in the areas of activity recognition and behavior modeling.
    (An illustrative next-action prediction sketch appears after this listing.)
  • Item
    Cross-environment activity recognition using word embeddings for sensor and activity representation
    (Elsevier B.V., 2020-12-22) Azkune Galparsoro, Gorka; Almeida, Aitor; Agirre Bengoa, Eneko
    Cross-environment activity recognition in smart homes is a very challenging problem, especially for data-driven approaches. Currently, systems developed to work for a certain environment degrade substantially when applied to a new environment, where not only the sensors but also the monitored activities may be different. Some systems require manual labeling and mapping of the new sensor names and activities using an ontology. Ideally, given a new smart home, we would like to be able to deploy the system, which has been trained on other sources, with minimal manual effort and with acceptable performance. In this paper, we propose the use of neural word embeddings to represent sensor activations and activities, which comes with several advantages: (i) the representation of the semantic information of sensor and activity names, and (ii) the automatic mapping of sensors and activities of different environments into the same semantic space. Based on this novel representation approach, we propose two data-driven activity recognition systems: the first one is a completely unsupervised system based on embedding similarities, while the second one adds a supervised learning regressor on top of them. We compare our approaches with some baselines using four public datasets, showing that data-driven cross-environment activity recognition obtains good results even when sensors and activity labels differ significantly. Our results show promise for reducing manual effort, and are complementary to other efforts using ontologies.
    (An illustrative embedding-similarity sketch appears after this listing.)
  • Item
    Embedding-based real-time change point detection with application to activity segmentation in smart home time series data
    (Elsevier Ltd, 2021-12-15) Bermejo Fernández, Unai; Almeida, Aitor; Bilbao Jayo, Aritz; Azkune Galparsoro, Gorka
    Human activity recognition systems are essential to enable many assistive applications. Those systems can be sensor-based or vision-based. When sensor-based systems are deployed in real environments, they must segment sensor data streams on the fly in order to extract features and recognize the ongoing activities. This segmentation can be done with different approaches. One effective approach is to employ change point detection (CPD) algorithms to detect activity transitions (i.e. determine when activities start and end). In this paper, we present a novel real-time CPD method to perform activity segmentation, where neural embeddings (vectors of continuous numbers) are used to represent sensor events. Through empirical evaluation with 3 publicly available benchmark datasets, we conclude that our method is useful for segmenting sensor data, offering significantly better performance than state-of-the-art algorithms in two of them. In addition, we propose the use of retrofitting, a graph-based technique, to adjust the embeddings and introduce expert knowledge into the activity segmentation task, showing empirically that it can improve the performance of our method using three graphs generated from two sources of information. Finally, we discuss the advantages of our approach regarding computational cost, manual effort reduction (no need for hand-crafted features) and cross-environment possibilities (transfer learning) in comparison to others.
    (An illustrative change point detection sketch appears after this listing.)
  • Item
    Learning for dynamic and personalised knowledge-based activity models
    (Universidad de Deusto, 2015-07-15) Azkune Galparsoro, Gorka; Chen, Liming; Facultad de Ingeniería; Ingeniería para la Sociedad de la Información y Desarrollo Sostenible
    Human activity recognition is one of the key competences for human adaptive technologies. The idea of such technologies is to adapt their services to human users, so being able to recognise what human users are doing is an important step to adapt services suitably. One of the most promising approaches for human activity recognition is the knowledge-driven approach, which has already shown very interesting features and advantages. Knowledge-driven approaches allow using expert domain knowledge to describe activities and environments, providing efficient recognition systems. However, there are also some drawbacks, such as the usage of generic and static activity models, i.e. activities are defined by their generic features - they do not include personal specificities - and, once activities have been defined, they do not evolve according to what users do. This dissertation presents an approach that uses data-driven techniques to evolve knowledge-based activity models with a user's behavioural data. The approach includes a novel clustering process where initial incomplete models developed through knowledge engineering are used to detect action clusters which describe activities and aggregate new actions. Based on those action clusters, a learning process is then designed to learn and model varying ways of performing activities in order to acquire complete and specialised activity models. The approach has been tested with real users' inputs, noisy sensors and demanding activity sequences. Results have shown that 100% of complete and specialised activity models are properly learnt at the expense of learning some false positive models.
  • Item
    Nola prestatzen duzu kafea?
    (Elhuyar Fundazioa, 2017) Azkune Galparsoro, Gorka
  • Item
    Smart cities survey: technologies, application domains and challenges for the cities of the future
    (SAGE Publications Ltd, 2019-06-10) Sánchez Corcuera, Rubén; Núñez Marcos, Adrián; Sesma Solance, Jesús; Bilbao Jayo, Aritz; Mulero, Rubén; Zulaika Zurimendi, Unai; Azkune Galparsoro, Gorka; Almeida, Aitor
    The introduction of Information and Communication Technologies throughout the last decades has created a trend of providing daily objects with smartness, aiming to make human life more comfortable. The paradigm of Smart Cities arises as a response to the goal of creating the city of the future, where (1) the well-being and rights of their citizens are guaranteed, and (2) industry and (3) urban planning are assessed from an environmental and sustainable viewpoint. Smart Cities still face some challenges in their implementation, but gradually more research projects on Smart Cities are funded and executed. Moreover, cities from all around the globe are implementing Smart City features to improve services or the quality of life of their citizens. In this article, (1) we go through various definitions of Smart Cities in the literature, (2) we review the technologies and methodologies used nowadays, (3) we summarise the different domains of application where these technologies and methodologies are applied (e.g. health and education), (4) we show the cities that have integrated the Smart City paradigm into their daily functioning and (5) we provide a review of the open research challenges. Finally, we discuss the future opportunities for Smart Cities and the issues that must be tackled in order to move towards the cities of the future.
  • Item
    Vision-based fall detection with convolutional neural networks
    (Hindawi Limited, 2017-12-06) Núñez Marcos, Adrián; Azkune Galparsoro, Gorka; Arganda-Carreras, Ignacio
    One of the biggest challenges in modern societies is the improvement of healthy aging and the support of older persons in their daily activities. In particular, given its social and economic impact, the automatic detection of falls has attracted considerable attention in the computer vision and pattern recognition communities. Although approaches based on wearable sensors have provided high detection rates, some of the potential users are reluctant to wear them, and thus their use is not yet widespread. As a consequence, alternative approaches such as vision-based methods have emerged. We firmly believe that the emergence of the Smart Environments and Internet of Things paradigms, together with the increasing number of cameras in our daily environments, forms an optimal context for vision-based systems. Consequently, here we propose a vision-based solution using Convolutional Neural Networks to decide if a sequence of frames contains a person falling. To model the video motion and make the system scenario-independent, we use optical flow images as input to the networks, followed by a novel three-step training phase. Furthermore, our method is evaluated on three public datasets, achieving state-of-the-art results in all three of them.
    (An illustrative fall detection sketch appears after this listing.)
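
A minimal sketch of the embedding-plus-sequence-model idea described in "A comparative analysis of human behavior prediction approaches in intelligent environments": user actions are integer-coded, mapped to learned embeddings, and fed to an LSTM that predicts the next action. The vocabulary size, dimensions, and single-LSTM layout are illustrative assumptions; the paper also evaluates CNN, GCN, and transformer architectures plus embedding retrofitting, none of which is shown here.

```python
# Illustrative only: next-action prediction from embedded action sequences.
# Dimensions and the training setup are placeholder assumptions, not the
# configuration used in the paper.
import torch
import torch.nn as nn

class NextActionLSTM(nn.Module):
    def __init__(self, num_actions: int, emb_dim: int = 50, hidden: int = 128):
        super().__init__()
        self.emb = nn.Embedding(num_actions, emb_dim)   # learned action embeddings
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)      # scores over possible next actions

    def forward(self, action_ids: torch.Tensor) -> torch.Tensor:
        # action_ids: (batch, sequence_length) integer-coded past actions
        embedded = self.emb(action_ids)
        output, _ = self.lstm(embedded)
        return self.head(output[:, -1])                  # logits for the next action

# Toy usage: predict the action following a window of 5 observed actions.
model = NextActionLSTM(num_actions=30)
window = torch.randint(0, 30, (1, 5))
next_action = model(window).argmax(dim=-1)
```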
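
A minimal sketch of the unsupervised system described in "Cross-environment activity recognition using word embeddings for sensor and activity representation": sensor and activity names are mapped into the same semantic space by averaging pretrained word vectors, and a window of sensor activations is labelled with the most similar activity. The `word_vectors` dictionary and the simple mean aggregation are illustrative assumptions; the paper's supervised regressor variant is not reproduced.

```python
# Illustrative only: unsupervised activity recognition via word-embedding similarity.
# `word_vectors` is assumed to map lowercase words to pretrained vectors
# (e.g. from any word2vec/GloVe-style model).
import numpy as np

def phrase_vector(name: str, word_vectors: dict) -> np.ndarray:
    # Represent a sensor or activity name by the mean of its word vectors.
    vectors = [word_vectors[w] for w in name.lower().split() if w in word_vectors]
    return np.mean(vectors, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognise(sensor_names: list, activity_names: list, word_vectors: dict) -> str:
    # Map a window of sensor activations and every candidate activity into the
    # same semantic space, then pick the most similar activity label.
    window = np.mean([phrase_vector(s, word_vectors) for s in sensor_names], axis=0)
    scores = {a: cosine(window, phrase_vector(a, word_vectors)) for a in activity_names}
    return max(scores, key=scores.get)
```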
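
A minimal sketch of the idea behind "Embedding-based real-time change point detection with application to activity segmentation in smart home time series data": each sensor event is represented by an embedding, and an activity transition is flagged when the mean embeddings of two adjacent windows drift apart. The window size, the cosine-distance threshold, and the omission of retrofitting are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative only: change point detection over a stream of sensor-event embeddings.
import numpy as np

def detect_change_points(event_embeddings: np.ndarray,
                         window: int = 5,
                         threshold: float = 0.4) -> list:
    # event_embeddings: (num_events, dim), one embedding per sensor event.
    change_points = []
    for t in range(window, len(event_embeddings) - window):
        past = event_embeddings[t - window:t].mean(axis=0)
        future = event_embeddings[t:t + window].mean(axis=0)
        cos = np.dot(past, future) / (np.linalg.norm(past) * np.linalg.norm(future))
        if 1.0 - cos > threshold:          # cosine distance between adjacent windows
            change_points.append(t)        # candidate activity transition
    return change_points
```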
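
A minimal sketch related to "Vision-based fall detection with convolutional neural networks": a stack of optical flow frames (two channels per frame, horizontal and vertical flow) is classified as fall or no fall by a small CNN. The network layout is an illustrative assumption, and the paper's three-step training phase is not reproduced.

```python
# Illustrative only: binary fall / no-fall classification over stacked optical flow.
import torch
import torch.nn as nn

class FallDetector(nn.Module):
    def __init__(self, flow_frames: int = 10):
        super().__init__()
        # Each optical flow frame contributes a horizontal and a vertical channel.
        self.features = nn.Sequential(
            nn.Conv2d(2 * flow_frames, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # fall vs. no fall

    def forward(self, flow_stack: torch.Tensor) -> torch.Tensor:
        # flow_stack: (batch, 2 * flow_frames, height, width)
        return self.classifier(self.features(flow_stack).flatten(1))

# Toy usage on a random stack of 10 flow frames at 224x224 resolution.
model = FallDetector()
logits = model(torch.randn(1, 20, 224, 224))
```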