Interaction in Intelligent Environments

The main objective of this research direction is to explore the development of Ambient Intelligence technologies and their application in Smart Environments, following a ‘Human-Centered Design’ and ‘Universal Access’ approach. In this context, the Laboratory develops novel software development frameworks and methods, as well as ambient interactive systems, applications, and services, that support natural, intuitive, and high-quality interaction with the intelligent environment, employing multiple interaction techniques and diverse devices. The technological solutions developed integrate multi- and cross-disciplinary technologies, such as recognition and monitoring of user interaction with the environment, distributed processing, reasoning mechanisms, computer networks, sensor and actuator networks, and techniques for multimodal interaction.

Furthermore, the Laboratory conducts studies that aim to assess the impact of intelligent environment technologies on the individual and society as a whole, as well as to highlight the potential and the benefits of such technologies in various aspects of everyday life. At the same time, it develops prototype applications and products taking advantage of these technologies, supporting, where appropriate, technology transfer to industry.

Since 2012, these research activities have been carried out at the purpose-built Ambient Intelligence Facility of the Institute of Computer Science, which was funded by the European Commission and characterized as a flagship project of the European Union. The Facility includes simulation spaces, i.e., environments that simulate realistic everyday conditions, thus allowing the development and evaluation of innovative interactive applications that are unobtrusively embedded into the respective environment and operate therein. These spaces include, among others, a "smart" two-story house, a "smart" classroom, a "smart" entertainment space, and a "smart" greenhouse. The infrastructure of each space comprises modern, market-available equipment, alongside augmented artefacts that are technologically enhanced through specialised hardware and software. Dedicated software transforms the space into a unified intelligent environment, which dynamically adapts to the particular needs of each user and context of use.


Indicative Outcomes


Real-Time Adaptation of Context-Aware Intelligent User Interfaces, for Enhanced Situational Awareness (2021): A novel computational approach for the dynamic adaptation of User Interfaces (UIs) is proposed, which aims at enhancing the Situational Awareness (SA) of users by leveraging the current context and providing the most useful information in an optimal and efficient manner. By combining Ontology modeling and reasoning with Combinatorial Optimization, the system decides what information to present, when to present it, where to visualize it on the display, and how, taking into consideration contextual factors as well as placement constraints.
https://doi.org/10.1109/ACCESS.2022.3152743
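
To make the optimization step concrete, the following minimal sketch assigns hypothetical information items to display slots so as to maximize context-weighted utility under placement constraints; all item names, utilities, and slot capacities are invented for illustration and do not reflect the published system.

```python
# Minimal sketch (hypothetical data): assign information items to display
# slots so as to maximize context-weighted utility under placement
# constraints, via exhaustive search over slot assignments.
from itertools import permutations

SLOTS = {"top_bar": 1, "side_panel": 2}            # slot -> capacity
ITEMS = {                                          # item -> (utility, allowed slots)
    "hazard_alert": (0.9, {"top_bar", "side_panel"}),
    "route_info":   (0.6, {"side_panel"}),
    "weather":      (0.3, {"top_bar", "side_panel"}),
}

def best_assignment(items, slots):
    """Return the highest-utility feasible item -> slot assignment."""
    positions = [s for s, cap in slots.items() for _ in range(cap)]
    names = list(items)
    best, best_score = {}, 0.0
    for perm in permutations(positions, min(len(names), len(positions))):
        # Keep only placements that respect each item's allowed slots.
        assignment = {n: p for n, p in zip(names, perm) if p in items[n][1]}
        score = sum(items[n][0] for n in assignment)
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

print(best_assignment(ITEMS, SLOTS))
```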

Real-Time Stress Level Feedback from Raw ECG Signals for Personalised, Context-Aware Applications Using Lightweight Convolutional Neural Network Architectures (2021): Convolutional Neural Network architectures for stress detection and 3-level (low, moderate, high) stress classification, using ultra-short-term raw ECG signals (3 s). One architecture is suitable for running on wearable edge-computing nodes, while the other has more trainable parameters and is able to learn more complex features. The evaluation demonstrated high accuracy on both the 3-level and the 2-level stress classification tasks, surpassing the state of the art in the field, with accuracies of 83.55% and 98.77%, respectively.
https://doi.org/10.3390/s21237802
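
As a rough illustration of the lightweight branch of this work, the sketch below defines a small 1-D convolutional network (in PyTorch) that maps a raw 3 s ECG window to three stress classes; the layer sizes and the assumed 256 Hz sampling rate are illustrative choices, not the published architectures.

```python
# Sketch of a lightweight 1-D CNN for 3-level stress classification from raw
# 3 s ECG windows. Layer sizes and the 256 Hz sampling rate are illustrative
# assumptions, not the published architectures.
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, stride=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # makes the head input-length agnostic
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):              # x: (batch, 1, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = TinyECGNet()
ecg = torch.randn(4, 1, 3 * 256)       # a batch of 3 s windows at 256 Hz
logits = model(ecg)                     # (4, 3) class scores
```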

Augmented Reality platforms employed on handheld devices, allowing children to program Intelligent Environments by creating trigger-action rules about the behaviors of smart artifacts using 3D blocks (2021): The primary goal is to enable children to dictate the behavior of their surroundings in a fun and engaging way, while sharpening their computational thinking skills at the same time.
https://doi.org/10.1145/3459990.3462463
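
The following toy sketch illustrates the general shape of such trigger-action rules as plain data plus an evaluation step; artifact names, conditions, and actions are hypothetical and unrelated to the actual platform.

```python
# Toy representation of a trigger-action rule and its evaluation; artifact
# names, conditions, and actions are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    trigger_artifact: str
    condition: Callable[[Any], bool]
    action_artifact: str
    action: str

RULES = [
    Rule("door_sensor", lambda v: v == "open", "ceiling_lamp", "turn_on"),
    Rule("room_thermometer", lambda v: v > 28, "fan", "turn_on"),
]

def on_event(artifact, value):
    """Return the (artifact, action) pairs fired by an incoming sensor event."""
    return [(r.action_artifact, r.action)
            for r in RULES
            if r.trigger_artifact == artifact and r.condition(value)]

print(on_event("door_sensor", "open"))   # [('ceiling_lamp', 'turn_on')]
```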

Creation of an Ambient Intelligence framework for well-being (stress management, sleep hygiene) (2021): Two systems were designed and developed in the context of an Intelligent Home, namely CaLmi and HypnOS, aiming to assist users who struggle with stress and poor sleep quality, respectively. Both systems rely on real-time data collected by wearable devices, as well as contextual information retrieved from the ambient facilities of the Intelligent Home, so as to offer appropriate pervasive relaxation programs (CaLmi) or personalized insights regarding sleep hygiene (HypnOS) to the residents.
https://doi.org/10.3390/s21072398

A framework for shaping new types of interactive experiences in multi-screen environments (2021): Investigating how the amenities offered by Intelligent Environments can be used to shape new types of useful, exciting, and fulfilling experiences while watching sports or movies, playing board games, or ordering food for delivery. To this end, the Intelligent Living Room was equipped with an ambient media player offering live access to secondary information via the available displays, a digitally augmented version of the board game Mafia, and an integrated, multimodal environment for ordering food. All systems appropriately exploit the technological equipment so as to support natural interaction.
https://doi.org/10.1145/3452918.3465486

New methods and applications in the domain of Precision Agriculture (2021): Ambient Intelligence can cover the entire spectrum of production and streamline the synergy and interaction between people and smart environments in the Agri-food domain. The application of advanced technologies in the primary sector can increase the quantity and improve the quality of agricultural products, and optimize resource utilization in this domain, especially during this crucial period, when nutritional requirements are constantly increasing while climate change affects Agri-food at a global scale.
https://doi.org/10.3390/engproc2021009041
https://doi.org/10.1109/IE51775.2021.9486584

InPrinted Framework (2019): InPrinted constitutes a framework supporting printed matter augmentation and user interaction with Ambient Intelligence (AmI) technologies in Smart Environments. The framework provides: (a) an open architecture enabling the integration of new types of technologies for information acquisition and provision; (b) independence from the development technologies of the applications; (c) an extensible ontology-based reference model for printed matter, as well as context-awareness mechanisms; (d) implementation of printed matter augmentation mechanisms in the environment; and (e) support for multimodal natural interaction with printed matter in smart environments.
https://doi.org/10.1007/s11042-018-7088-9

New methods and tools that permit users to easily define the behavior of their Intelligent Environments (end-user development) (2019): The AmI-Solertis system offers a complete suite of tools allowing the management, programming, testing, and monitoring of all the individual artifacts (i.e., services, hardware modules, software components, etc.) of an Intelligent Environment, as well as of the entire space as a whole.
https://doi.org/10.1109/WiMOB.2017.8115850

Chatbot for interacting with and within Intelligent Environments (2019): ParlAmI is a chatbot that allows end-users to define the behavior of an intelligent environment via natural language, by creating “if-then” rules. It introduces a hybrid approach that combines natural language understanding (NLU) with semantic reasoning and service-oriented engineering so as to deliver a multimodal conversational interface that assists its users in determining the behavior of AmI environments.
https://doi.org/10.3390/technologies7010011
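
A minimal sketch of the utterance-to-rule step is shown below; it uses a single regular expression purely for illustration, whereas ParlAmI itself relies on an NLU pipeline combined with semantic reasoning. The supported phrasings are assumptions.

```python
# Toy utterance-to-rule conversion; ParlAmI's real pipeline uses NLU plus
# semantic reasoning, not a single regular expression.
import re

PATTERN = re.compile(
    r"(?:if|when)\s+(?P<trigger>.+?),?\s+(?:then\s+)?"
    r"(?P<action>(?:turn|switch|play)\s.+)",
    re.IGNORECASE,
)

def utterance_to_rule(text):
    match = PATTERN.search(text)
    if match is None:
        return None   # in the real system, a clarification dialogue follows
    return {"if": match.group("trigger").strip(),
            "then": match.group("action").strip()}

print(utterance_to_rule("When I enter the living room, turn on the lights"))
# {'if': 'I enter the living room', 'then': 'turn on the lights'}
```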

Large scale multi-touch support integration across multiple projections on arbitrary surfaces (2018): Interactive projections offer a compelling means of user interaction and can be used in applications that need to span a significant amount of space. However, commercial hardware limits the number of displayed projections, so the need for such applications cannot be satisfied out of the box. LASIMUP (LArge Scale Interactive MUlti-Projection) is a platform that bypasses this limitation through appropriate software and low-cost supplementary hardware. LASIMUP is cross-platform, easy to install, and affordable to acquire. These factors make it possible to create large-scale applications, which would otherwise be impractical to deploy, with a very small financial and operating overhead.
https://doi.org/10.1145/3197768.3197786

A novel Framework supporting the connection of recognized human-artifact actions within an Intelligent Environment with appropriate interventions (2018): A framework responsible for: (i) monitoring human behaviour inside an Intelligent Environment, (ii) detecting problematic situations, and (iii) intervening appropriately in each situation. In more detail, LECTOR, in conjunction with the AmI technologies of an Intelligent Environment, observes the activities of people (SENSE) in order to identify behaviours that require remedial actions (THINK), and provides situationally appropriate interventions to support them in their daily lives (ACT).
https://doi.org/10.1007/978-3-319-72038-8_11
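
The overall loop can be pictured as in the following sketch, where the detector rule, the intervention, and the demo environment are invented placeholders; LECTOR's actual behaviour models are far richer.

```python
# Schematic SENSE-THINK-ACT cycle; the detector rule, the intervention, and
# the demo environment are invented placeholders.
class DemoEnv:
    """Stand-in environment exposing one synthetic observation stream."""
    def poll_sensors(self):
        return {"attention_level": 0.2}
    def notify(self, message):
        print("INTERVENTION:", message)

def sense(env):
    return env.poll_sensors()                     # SENSE: observe activities

def think(observations):
    if observations.get("attention_level", 1.0) < 0.4:
        return "restore_attention"                # THINK: flag a behaviour
    return None

def act(env, intervention):
    if intervention == "restore_attention":       # ACT: intervene appropriately
        env.notify("Time for a short interactive break?")

env = DemoEnv()
act(env, think(sense(env)))
```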

Multimodal Interaction In the Intelligent Living Room (2017): A suite of input/output channels that enable interaction even when a user’s primary channel is occupied, unavailable or non-existent, including:

  • Virtual pointer. Users can control the TV interface by hovering their hand over the Leap Motion sensors embedded in the side arms of a smart sofa. A virtual cursor that follows the movements of their hand enables them to focus on and select areas of interest.
  • Mid-air gestures. Appropriate mid-air gestures, such as palm tilt, finger pinch, and hand swipe, are also available in order to permit users to complete specific actions (e.g. volume up/down, next/previous item in a list, zoom in/out etc.) quickly and in a natural manner.
  • Touch. Through a Kinect sensor installed on top of the TV facing directly at the coffee table’s surface, the coffee table becomes a touch-enabled surface. Depending on the context of use, the table is able to display various interactive touch-enabled controls (e.g. play or pause a movie, move to next or previous item on a list).
  • User posture. The force-sensitive resistors and load sensors installed in the smart sofa’s back and under its bottom pillows provide information regarding the user’s posture while seated (i.e., leaning back or forward), as illustrated in the sketch following this list. That way, when interactive controls appear on the augmented table, they are displayed within the user’s reach area.
  • User presence. The force-sensitive resistors and load sensors of the smart sofa, along with a motion sensor mounted on the ceiling, permit the detection of user presence inside the room. Knowing when one or more users are inside or leaving the room is quite important for deciding when to start or pause specific applications (e.g., turn on the TV when someone is in the living room, pause the movie when a user leaves the living room, etc.).
  • Object detection. When a physical object is placed on top of the augmented table, its presence can be identified via dedicated software. This software cannot identify the type of the object, but it can estimate the space it occupies. That way, the interfaces projected on the coffee table are rearranged in order to display the available information in areas that are not hidden by the detected object(s).
  • Remote control. A three-dimensional gyroscopic remote control can be used as a mouse or keyboard. On its front side, it includes on/off buttons, navigation arrows, and arithmetic controls. Its back side includes a keyboard that enables text input.
  • Voice messages. Users can also record short phrases as vocal messages.

https://doi.org/10.3390/s19225011
https://doi.org/10.1145/3197768.3201548
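
As a simplified illustration of how the sofa's load and pressure readings can be fused into presence and posture estimates (see the "User posture" and "User presence" items above), the sketch below uses hypothetical thresholds and units.

```python
# Simplified fusion of the sofa's load and pressure readings into presence
# and posture estimates; thresholds, units, and the 0.5 ratio are hypothetical.
def user_present(load_kg, threshold=10.0):
    """Someone is seated if the load sensors report sufficient weight."""
    return load_kg > threshold

def posture(back_pressure, seat_pressure):
    """Compare back vs. seat pressure to tell leaning back from forward."""
    if seat_pressure <= 0:
        return "unknown"
    ratio = back_pressure / seat_pressure
    return "leaning_back" if ratio > 0.5 else "leaning_forward"

print(user_present(load_kg=62.0))                        # True
print(posture(back_pressure=4.0, seat_pressure=6.0))     # leaning_back
```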

Multi-touch support integration across multiple projections on arbitrary surfaces (2017): Multiple-point touch support for projections on arbitrary surfaces. For each projection, there is a touch sensor that recognizes the points of contact with the underlying surface. The application supports the combination of multiple sensors in order to produce native touch events with respect to the overall projection on all surfaces. This solution drastically reduces the number of computer units necessary to process touch on multiple surfaces to just one.
https://doi.org/10.1145/3197768.3197786
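
A minimal sketch of the coordinate-unification step follows: each sensor is calibrated to a known sub-rectangle of the overall projection, and local contacts are mapped into one global pixel space. The layout values are hypothetical, and the real system's calibration is more involved.

```python
# Illustrative mapping of sensor-local touch contacts into one global display
# space; the layout values stand in for a real calibration step.
SENSOR_LAYOUT = {
    "sensor_a": {"offset": (0, 0),    "size": (1920, 1080)},
    "sensor_b": {"offset": (1920, 0), "size": (1920, 1080)},
}

def to_global(sensor_id, x_norm, y_norm):
    """Convert a normalized (0..1) contact on one sensor to global pixels."""
    layout = SENSOR_LAYOUT[sensor_id]
    gx = layout["offset"][0] + x_norm * layout["size"][0]
    gy = layout["offset"][1] + y_norm * layout["size"][1]
    return int(gx), int(gy)

print(to_global("sensor_b", 0.5, 0.5))   # (2880, 540): centre of the right surface
```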

CocinAR (2017): CocinAR is an Augmented Reality (AR) system that has been developed to teach pre-schoolers (including children with cognitive impairments) how to prepare simple meals. It includes a variety of exercises and mini-games aiming to instruct children: (i) which meals are appropriate for breakfast, lunch, and dinner, (ii) how to cook simple meals (e.g., bread with butter and honey, lettuce salad, pasta with tomato sauce, etc.), and (iii) the fundamental rules of safety and hygiene that should be applied during food preparation. The system supports multimodal input, utilizing tangible objects on a table-top surface, and multimedia output available in textual, auditory, and pictorial form. Profiling functionality is supported, allowing the system to adapt to the needs and preferences of each individual user, while an extensive analytics framework allows trainers to monitor the progress of their students. CocinAR consists of a computer, a high-resolution projector, a simple wooden table, an infrared camera, and a high-resolution camera. To support an immersive user experience, the system is designed to “camouflage” itself so that none of the equipment used is visible to the users, leaving visible only the plain wooden table.
https://doi.org/10.1007/978-3-319-76111-4_24

Home Game (2016): Home Game is an educational game that aims to familiarize pre-schoolers (including children with cognitive impairments) with household objects, the overall home environment, and the daily activities that take place in it. In addition to touch-based interaction, the game supports physical interaction through printed cards on a tabletop setup, by detecting and tracking the cards placed on the game board. Home Game features six types of mini-games and an extensive analytics framework that allows trainers to monitor (even in real time) the progress of their students. The system comprises a touch screen, a computer, and a high-resolution camera overlooking the area in front of the screen. A custom casing has been designed especially for this game to hide the technology from sight.
https://doi.org/10.1145/3078072.3091976

Money Game (2016): An educational game targeted at pre-school-age children and children with cognitive disabilities. The goal of the game is to familiarize children with money exchanges through virtual purchases and to foster appropriate shopping and money-exchange behavior. It can be played using the mouse, but also with real money.
https://doi.org/10.1007/978-3-319-20684-4_61

WallTouch (2016): A large interactive surface that enables users to view multimedia information through touch. WallTouch comprises a large projection area that can be used simultaneously by several visitors who wish to explore multifaceted information. User interaction is supported through the concurrent recognition of multiple fingers or hands, as well as of specific objects. Visitors can explore multimedia information, which can be freely moved and tossed around, as well as magnified or shrunk.
https://doi.org/10.1007/s11042-016-3695-5

FIRMA (2015): The FIRMA framework supports the development of multimodal, elderly-friendly, interactive applications for assistive robots in assistive environments. FIRMA provides developers with the necessary technologies, tools, and building blocks for creating such applications on (custom-built) assistive robotic platforms. Applications built with the framework are inherently friendly to elderly users and capable of adapting to their needs, the surrounding environment, and the context of use. The framework facilitates the effective and efficient development of the supported user interfaces, thus largely simplifying the developer’s work, and supports touch-based, speech-based, and gesture-based interaction, robot facial expressions, and user interface adaptation.
https://doi.org/10.1145/3056540.3076187

Accessible platform for educational content (2015): An on-line accessible system for the provision of accessible educational content to students with disabilities of the National and Kapodistrian University of Athens. The system provides a digital catalogue of educational material for the courses in which students are enrolled. Materials can be downloaded in formats appropriate for various disabilities (e.g., txt, rtf, xml, mp3, DAISY, large-print-ready, Braille-ready). The portal is W3C AAA compliant.

SYSPEAP (2013): A system for the Collection, Production, Enrichment and Exploitation of Multimedia Content. The overall goal of the system is to allow a public organization in Greece to collect audio-visual content from the media (TV and radio channels) and transform it so as to be accessible by people with disabilities. The Human-Computer Interaction Laboratory of ICS-FORTH implemented the web portal of the project, following the principles of design for all and using the Unified Web Based User Interfaces development approach.

KRIPIS “Quality of life” (2013): An integrated technological environment, including sensors and materials, aimed at enhancing the quality of life of patients and older people through health monitoring at home, thus allowing early discharge protocols from hospital, as well as facilities to ease everyday home activities. The project has been conducted in collaboration with the CBML Laboratory of ICS-FORTH, as well as the IESL and IACM Institutes of FORTH.
https://doi.org/10.3233/978-1-61499-566-1-759

Beantable (2013): Beantable is an augmented interactive table for children aged 2 to 7. The purpose of Beantable is to support children’s development through the monitored use of appropriate smart games in an unobtrusive manner. Beantable monitors children’s interactions and extracts indications of the achieved maturity level and skills by taking into account the way the child plays. Furthermore, Beantable can act as a diagnostic tool that provides educators and child-development experts with extensive data (extracted from the interaction history) that can be used for reasoning about whether the child is meeting all the necessary developmental milestones. The custom-made wooden table has been designed and built to be robust and transferable, and its height can be adjusted to fit children’s needs as they grow. All the devices required for the operation of the applications are embedded inside its construction in a way that is invisible to the eye. A main display device is located on the top side of the table, enabled with multi-touch and force-pressure-sensitive capabilities. The table screen is able to recognize the location and rotation of physical objects placed on top, provided that each physical object carries at least one fiducial marker on its bottom. Games involving physical objects, such as puzzles, were selected as a testing domain. Two jigsaw puzzles (“Winnie the Pooh” and “The Three Little Pigs”), as well as a classic memory game (Pick & Match), were developed and tested with young children.
https://doi.org/10.1016/j.ijcci.2016.10.008
https://doi.org/10.1007/978-3-319-20684-4_56
https://doi.org/10.1007/978-3-319-07788-8_48
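
As a toy illustration of the fiducial-based recognition described above, the following sketch checks whether a tagged puzzle piece has reached its target pose on the table; marker IDs, target poses, and tolerances are invented.

```python
# Toy check of whether a fiducial-tagged puzzle piece lies in its target pose;
# marker IDs, target poses, and tolerances are invented.
import math

TARGET_POSES = {7: (320.0, 240.0, 0.0)}   # marker id -> (x, y, angle degrees)

def piece_in_place(marker_id, x, y, angle, pos_tol=15.0, ang_tol=10.0):
    tx, ty, ta = TARGET_POSES[marker_id]
    angle_err = abs((angle - ta + 180) % 360 - 180)   # wrap-around difference
    return math.hypot(x - tx, y - ty) <= pos_tol and angle_err <= ang_tol

print(piece_in_place(7, 322.0, 236.0, 356.0))   # True: within both tolerances
```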

Book of Ellie (2013): The “Book of Ellie” is the augmented version of a classic schoolbook for teaching the Greek alphabet to primary school children. The book introduces the alphabet letters and possible combinations of them in increasing order of difficulty. Each letter or letter combination is accompanied by relevant images and descriptive text. The short stories for each letter are structured around the dialogues and activities of a Greek family, with the protagonist being Ellie, one of the four children. In the augmented version of the book, Ellie has become an animated character, constantly available to assist the young learner by reading phrases from the book, asking questions, or providing advice.
https://doi.org/10.1109/ICMEW.2013.6618341
https://doi.org/10.1007/s10209-014-0365-0

Smart Box (2012): A standard carton box enhanced with interaction capabilities through a 3-D accelerometer and a 3-D magnetometer, which recognize the inclination of the box. The box can be used in the context of several innovative applications, e.g., to explore virtual representations of 3-D objects, or as a steering wheel in the context of games.
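
The inclination itself can be derived from the accelerometer with the standard tilt-from-gravity formulas, as in the sketch below (the magnetometer additionally provides heading); the axis conventions are an assumption.

```python
# Standard tilt-from-gravity formulas for a 3-D accelerometer; the axis
# conventions are an assumption. A magnetometer additionally yields heading.
import math

def tilt(ax, ay, az):
    """Return (pitch, roll) in degrees from raw accelerometer axes."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(tilt(0.0, 0.0, 1.0))   # box lying flat: (0.0, 0.0)
```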

Hand, Feet and Body Gestures Navigation (2012): An innovative interaction technique based on human skeleton tracking, allowing users to interact through hand gestures, feet gestures, and body gestures (position and orientation).

Interaction techniques for persons with disabilities (2012): A head scanner for domotic control and a universal control wand have been developed, allowing users with severe motor impairments and users with vision disabilities to control the surrounding environment’s devices and interactive components.

Pupil (2011): A framework that facilitates the design, development, and deployment of pervasive educational applications that can automatically transform according to the context of use to ensure their usability. The collection of widgets incorporates both common basic widgets (e.g., buttons, images) and mini-interfaces frequently used in educational applications (e.g., bookViewer), as ready-to-use modules.

iTable (2010): iTable mainly targets the exploration of terrain-based information. Its main component is a plain wooden table, the surface of which is covered by a printed map. The map does not contain any text or other kind of data. When a visitor places a cardboard piece on the table surface, an image is projected on it, showing the area of the map located underneath the paper. Furthermore, a circled crosshair is projected on the paper’s centre, along with a virtual red string connecting the paper with the closest site of interest. If the visitor moves the paper so that the site of interest lies within the boundaries of the crosshair, a multimedia slideshow starts. The slideshow comprises a series of pages, each of which may contain any combination of text, images, and video. When the cardboard piece is lying on the table, a toolbar is projected at its bottom area, containing two buttons for moving to the next/previous page. The user can interact with these “soft” buttons using their bare fingers. If the paper is taken off the table’s surface, the buttons disappear and the user can move to the next/previous page by tilting the paper right or left, respectively. In this case, the projection is appropriately distorted, so that the visual content registers correctly on the paper surface.
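
The core proximity check can be sketched as follows: find the site of interest nearest to the cardboard piece and start the slideshow once it falls within the crosshair. Site coordinates and the crosshair radius are hypothetical.

```python
# Sketch of the proximity check: find the nearest site of interest and start
# the slideshow once it lies within the crosshair; coordinates and the radius
# are hypothetical.
import math

SITES = {"harbour": (120, 340), "temple": (610, 200), "agora": (400, 480)}

def nearest_site(paper_xy):
    return min(((name, math.dist(paper_xy, pos)) for name, pos in SITES.items()),
               key=lambda pair: pair[1])

def slideshow_should_start(paper_xy, crosshair_radius=40.0):
    return nearest_site(paper_xy)[1] <= crosshair_radius

print(nearest_site((600, 210)))            # ('temple', 14.1...)
print(slideshow_should_start((600, 210)))  # True
```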

iRoom (2010): iRoom can be used for the exploration of very large-scale artifacts at real-life size, mainly targeted at exhibitions and museums. It can present large-scale images of artifacts, with which one or more visitors can concurrently interact simply by walking around. The system is capable of location sensing, and also supports interaction through mobile phones and a kiosk.

iTouch (2010): A custom-made multi-touch screen, also supporting interaction using three objects that are detected using computer vision: a magic wand, i.e., a long stick with an IR LED and a switch at its top (when the switch is pressed against the projection screen, the LED turns on); a paper magnifying glass made of white cardboard; and an IR flashlight (which also has a visible-light LED, used as feedback so that the user knows whether the flashlight is turned on). iTouch comes with a puzzle application.

iBlow (2010): iBlow provides an alternative to the typical information kiosks and touch screens used at museums, allowing visitors to browse item collections. The system comprises a large wooden wall on wheels (for easier transportation), two framed touch screens, a webcam, two light sensors, and a windmill toy. The larger screen presents a high-resolution photo of the currently selected artifact. The smaller one presents information about the artifact and also includes some soft buttons. Item collections can be browsed through the touch screens, as well as by blowing on the windmill toy.

Informative Art (2009): A display that presents dynamic information in a subtle and aesthetically pleasing way, without obstructing the users’ primary task. Specific information semantics are mapped to parts of an existing painting, namely “The Birth of Venus” by Sandro Botticelli. The Informative Art display initially presents a view of the original painting from which the flowers have been removed. The display tracks an e-mail account and, depending on the number and type of incoming e-mails, makes painting elements appear (or disappear). For example, whenever a new message arrives a flower is added, messages from a list of colleagues appear as oranges on the tree, virus-infected messages as sharks circling Venus, etc.
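
A toy version of the mailbox-to-painting mapping might look like the sketch below; the categories and element names follow the description above, while the colleague list is hypothetical.

```python
# Toy mapping from mailbox state to painting elements, following the
# description above; the colleague list is hypothetical.
def painting_elements(messages):
    elements = {"flowers": 0, "oranges": 0, "sharks": 0}
    colleagues = {"alice@example.org", "bob@example.org"}
    for msg in messages:
        elements["flowers"] += 1          # every new message adds a flower
        if msg.get("from") in colleagues:
            elements["oranges"] += 1      # colleague mail becomes an orange
        if msg.get("infected"):
            elements["sharks"] += 1       # infected mail becomes a shark
    return elements

print(painting_elements([{"from": "alice@example.org"},
                         {"from": "spam@example.net", "infected": True}]))
# {'flowers': 2, 'oranges': 1, 'sharks': 1}
```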

AmIDesigner and AmIPlayer (2008): Two combined tools that support the automatic generation of accessible graphical user interfaces in AmI environments. The tools offer a simple and rapid design-and-play approach, and the generated user interfaces integrate non-visual feedback and a scanning mechanism to support accessibility.

CAMILE (2008): CAMILE is an interactive application for intuitively controlling multiple sources of light in AmI environments, built so that it can be used by anyone: the young, the elderly, people with visual disabilities, and people with hand-motor disabilities alike. Control is available through multiple modalities: touch-screen interaction for sighted users without motor impairments; remote-controlled operation combined with speech for visually impaired users, or tele-operation by sighted users; switch-based scanning for motor-impaired users; and speech-based interaction for all users.

ASK-IT Home Automation Application (2008): An application which facilitates the remote overview and control of the home environment through a portable device. The user interface of the application can adapt according to user needs (vision and motor impairments), context of use (alternative display types and display devices), and the presence of assistive technologies (alternative input devices).

Voyager (2004): A User Interface (UI) development framework, delivered as a C++ toolkit, for developing wireless dynamically composed wearable interfaces.

Explorer (2004): A location-aware hand-held multimedia guide for museums and archaeological sites.

Projector (2004): A C++ proxy-toolkit for Java Foundation Classes with split cross-platform execution.