Intelligent, distributed, human-centered and trustworthy IoT environments
The Internet of Things (IoT) merges physical and virtual worlds. The European Commission is actively promoting the IoT as a next step towards the digitisation of our society and economy. The EU-funded IntellIoT project will develop a framework for intelligent IoT environments that execute semi-autonomous IoT applications, enabling a suite of novel use cases in which a human expert plays a key role in controlling and teaching the AI-enabled systems. Specifically, the project will focus on agriculture (tractors semi-autonomously operated in conjunction with drones), healthcare (patients monitored by sensors) and manufacturing (automated plants shared by multiple tenants who utilise machinery from third-party vendors). It will establish human-defined autonomy through distributed AI running on intelligent IoT devices.
The traditional cloud-centric IoT has clear limitations, e.g., unreliable connectivity, privacy concerns, and high round-trip times. IntellIoT overcomes these challenges in order to enable next-generation (NG) IoT applications. IntellIoT aims to develop a framework for intelligent IoT environments that execute semi-autonomous IoT applications, which evolve by keeping the human in the loop as an integral part of the system.
Such intelligent IoT environments enable a suite of novel use cases. IntellIoT focuses on: agriculture, where a tractor is semi-autonomously operated in conjunction with drones; healthcare, where patients are monitored by sensors to receive advice and interventions from virtual advisors; and manufacturing, where highly automated plants are shared by multiple tenants who utilize machinery from third-party vendors. In all cases, a human expert plays a key role in controlling and teaching the AI-enabled systems.
Three key features of IntellIoT’s approach are highly relevant for the work programme, as they address the call’s challenges.
Managing supply chain (SC) activities today is increasingly complex. One reason is the lack of an integrated means for security officers and operators to protect their interconnected critical infrastructures and cyber systems in the new digital era. The EU-funded CYRENE project aims to enhance the security, privacy, resilience, accountability and trustworthiness of supply chains, through the provision of a novel and dynamic Conformity Assessment Process (CAP) that evaluates the security and resilience of SC services. The CAP also assesses the interconnected IT infrastructures composing these services, and the individual devices that support the operations of the SCs. A new collaborative, multilevel, evidence-driven risk-and-privacy assessment approach will be validated in the scope of realistic scenarios/conditions comprising real-life SC infrastructures and end-users.
Despite the tremendous socio-economic importance of Supply Chains (SCs), security officers and operators still have no easy, integrated way to protect their interconnected Critical Infrastructures (CIs) and cyber systems in the new digital era. CYRENE’s vision is to enhance the security, privacy, resilience, accountability and trustworthiness of SCs through the provision of a novel and dynamic Conformity Assessment Process (CAP) that evaluates the security and resilience of supply chain services, the interconnected IT infrastructures composing these services, and the individual devices that support the operations of the SCs. To meet its objectives, the proposed CAP is based on a collaborative, multi-level, evidence-driven Risk and Privacy Assessment approach that supports, at different levels, SC security officers and operators in recognizing, identifying, modelling and dynamically analysing advanced persistent threats and vulnerabilities, as well as in handling daily cyber-security and privacy risks and data breaches.
CYRENE will be validated in the scope of realistic scenarios and conditions comprising real-life supply chain infrastructures and end-users. Furthermore, the project will ensure the active engagement of a large number of external stakeholders as a means of developing a wider ecosystem around the project’s results, which will set the basis for CYRENE’s large-scale adoption and global impact.
Visionary Nature-Based Actions for Health, Wellbeing & Resilience in Cities
In an increasingly urbanised world, governments are focussing on boosting cities’ productivity and improving citizens’ living conditions and quality of life. Despite efforts to transform the challenges facing cities into opportunities, problems such as overburdened social services and health facilities, air pollution and exacerbated heat create a bleak outlook. With these challenges in mind, the EU-funded VARCITIES project aims to create a vision for future cities with the citizen and the so-called human community at the centre. It will therefore implement innovative ideas and add value by creating sustainable models for improving the health and well-being of citizens facing diverse climatic conditions and challenges around Europe. This will be achieved through shared public spaces that make cities liveable and welcoming.
In an increasingly urbanising world, governments and international corporations strive to increase the productivity of cities, recognized as hubs of economic growth, as well as to ensure a better quality of life and living conditions for citizens. Although significant effort is being made by international organisations, researchers and others to transform the challenges of cities into opportunities, visions of our urban future are trending towards the bleak. Social services and health facilities are significantly and negatively affected by the growth in urban populations (projected to reach 70% by 2050).
Air pollution is worsening and urban heat islands are intensifying. Nature will struggle to compensate in the future city, as rural land is predicted to shrink by 30%, affecting liveability. VARCITIES puts the citizen and the “human community” at the centre of the future cities’ vision. Future cities should evolve into human-centred cities. The vision of VARCITIES is to implement real, visionary ideas and add value by establishing sustainable models for increasing the health and well-being (H&WB) of citizens (children, young people, the middle-aged, the elderly) who are exposed to diverse climatic conditions and challenges around Europe (e.g. from harsh winters in Skelleftea, SE to hot summers in Chania, GR, and from deprived areas in Novo mesto, SI to increased pollution in Malta) through shared public spaces that make cities liveable and welcoming.
A low-cost outdoor location tracking solution for shoreline safety
TETRAMAX is a Horizon 2020 innovation action under the European Smart Anything Everywhere initiative, in the area of customized and low-power computing for Cyber Physical Systems and the Internet of Things. As a digital innovation hub, TETRAMAX aims to bring added value to European industry, helping it gain a competitive advantage through faster digitization. TETRAMAX started in September 2017 and runs until August 2021.
The BLEEPER project concerns enhancing safety at beaches. It uses low-cost, solar-powered, waterproof Bluetooth beacons along geographically mapped shorelines in order to provide high-accuracy localization of anyone equipped with some kind of low-cost wearable Bluetooth device (e.g. sandals, wristbands, life-vests). BLEEPER will offer coastal surveillance superior to that of other available solutions, leading to significantly increased safety levels for shoreline activities.
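Localization with low-cost Bluetooth beacons of this kind is typically built on received signal strength (RSSI). A minimal sketch of the idea, assuming the standard log-distance path-loss model; the calibration constants and function names are illustrative, not taken from BLEEPER itself:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate beacon distance in metres via the log-distance
    path-loss model: RSSI = TxPower - 10 * n * log10(d).

    tx_power_dbm: calibrated RSSI at 1 m (illustrative value).
    path_loss_exp: environment-dependent exponent (2.0 = free space).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def nearest_beacon(readings):
    """Pick the closest of several beacons from {beacon_id: rssi_dbm}."""
    return min(readings, key=lambda b: rssi_to_distance(readings[b]))
```

At the calibrated 1 m transmit power the estimate is 1 m, and a 20 dB drop corresponds to a ten-fold distance increase at n = 2; combining several beacons then narrows the wearer's position along the shoreline.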
Hardware-Assisted Decoupled Access Execution on the Digital Market: The EDRA Framework
Developing the next generation of high-performance computing (HPC) technologies, applications and systems towards exascale is a priority for the EU. The ‘Decoupled Access – Execute’ (DAE) approach is an HPC framework developed by the EU-funded project EXTRA for mapping applications to reconfigurable hardware. The EU-funded EDRA project will deploy virtual machines that integrate the EXTRA DAE architecture onto custom hardware within the cloud infrastructure of the Amazon Web Services marketplace, one of the largest cloud service providers. The project will focus on the commercialisation of EDRA on Amazon’s web service marketplace to ensure the EDRA framework is widely available and versatile for wide-ranging applications.
The FET project “EXTRA” (Exploiting eXascale Technology with Reconfigurable Architectures) aimed at devising efficient ways to deploy ultra-efficient heterogeneous compute nodes for future exascale High Performance Computing (HPC) applications. One major outcome was the development of a framework for mapping applications to reconfigurable hardware, relying on the concept of Decoupled Access – Execute (DAE) approach.
This project focuses on the commercialization of the EXTRA framework for cloud HPC platforms. More specifically, it targets the deployment of Virtual Machines (VMs) that integrate the EXTRA DAE Reconfigurable Architecture (EDRA) on custom hardware within the cloud infrastructure of one of the largest cloud service providers, the Amazon Web Services marketplace. End-users will be able to automatically map their applications to “EDRA-enhanced” VMs, and directly deploy them onto Amazon’s cloud infrastructure for optimal performance and minimal cost.
Towards a successful exploitation outcome, the project will put effort on (a) applying the required software- and hardware-level modifications on the current EXTRA framework to comply with Amazon’s infrastructure, (b) devising a business strategy to effectively address various user groups, and (c) disseminating the benefits of this solution to large public events and summits.
Energy-efficient Heterogeneous COmputing at exaSCALE
In order to sustain the ever-increasing demand for storing, transferring and, above all, processing data, HPC servers need to improve their capabilities. Scaling the number of cores alone is no longer a feasible solution, due to increasing utility costs and power-consumption limitations. While current HPC systems can offer petaflop performance, their architecture limits their capabilities in terms of scalability and energy consumption. Extrapolating from the top HPC systems, such as China’s Tianhe-2 supercomputer, we would require an enormous 1 GW of power to sustain exaflop performance, and a similar, if somewhat smaller, figure results even if we take the best system on the Green500 list as the initial reference.
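The extrapolation is easy to check. A back-of-the-envelope sketch, assuming Tianhe-2's widely reported Linpack figures (about 33.9 PFLOPS at roughly 17.8 MW, or around 24 MW including cooling); naive linear scaling lands in the hundreds of megawatts, the same order of magnitude as the gigawatt figure above:

```python
# Back-of-the-envelope check of the exascale power wall, assuming
# Tianhe-2's widely reported Linpack figures: ~33.9 PFLOPS at ~17.8 MW
# (about 24 MW including cooling).
TIANHE2_PFLOPS = 33.9
TIANHE2_MW = 17.8
TIANHE2_MW_WITH_COOLING = 24.0
EXAFLOP_IN_PFLOPS = 1000.0

scale = EXAFLOP_IN_PFLOPS / TIANHE2_PFLOPS   # ~29.5x speed-up needed
compute_mw = scale * TIANHE2_MW              # ~525 MW, compute only
total_mw = scale * TIANHE2_MW_WITH_COOLING   # ~708 MW incl. cooling

print(f"Naive exaflop power: {compute_mw:.0f}-{total_mw:.0f} MW")
```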
Apart from improving transistor and integration technology, what is needed is to refine HPC application development and HPC architecture design. Towards this end, ECOSCALE will analyse the characteristics and trends of current and future applications in order to provide a hybrid MPI+OpenCL programming environment, a hierarchical architecture, a runtime system and middleware, and shared, distributed, reconfigurable-hardware-based acceleration.
Exploiting eXascale Technology with Reconfigurable Architectures
To handle the stringent performance requirements of future exascale High Performance Computing (HPC) applications, HPC systems need ultra-efficient heterogeneous compute nodes. To reduce power and increase performance, such compute nodes will require reconfiguration as an intrinsic feature, so that specific HPC application features can be optimally accelerated at all times, even if they regularly change over time.
In the EXTRA project, we create a new and flexible exploration platform for developing reconfigurable architectures, design tools and HPC applications with run-time reconfiguration built-in from the start. The idea is to enable the efficient co-design and joint optimization of architecture, tools, applications, and reconfiguration technology in order to prepare for the necessary HPC hardware nodes of the future.
The project EXTRA covers the complete chain, from the architecture up to the application.
In conclusion, EXTRA focuses on the fundamental building blocks for run-time reconfigurable exascale HPC systems: new reconfigurable architectures with very low reconfiguration overhead, new tools that truly take reconfiguration as a design concept, and applications that are tuned to maximally exploit run-time reconfiguration techniques.
Our goal is to provide the European platform for run-time reconfiguration to maintain Europe’s competitive edge and leadership in run-time reconfigurable computing.
A Novel, Comprehensible, Ultra-Fast, Security-Aware CPS Simulator
One of the main problems CPS designers face is “the lack of simulation tools and models for system design and analysis”. This is mainly because the majority of existing simulation tools for complex CPSs efficiently handle only parts of a system, while focusing mainly on performance. Moreover, they require extreme amounts of processing resources and computation time to accurately simulate the processing of the CPS nodes. Faster approaches are available; however, as they function at high levels of abstraction, they cannot provide the accuracy required to model the exact behavior of the system under design, so as to guarantee that it meets its requirements in terms of performance and/or energy consumption.
The COSSIM project will address all those needs by providing an open-source simulation framework.
COSSIM will achieve the above by developing a novel simulator framework based on a processing simulation sub-system (i.e. a “full-system simulator”) integrated with a novel network simulator. Furthermore, innovative power-consumption and security measurement models will be developed and incorporated into the final framework. On top of that, COSSIM will also address another critical aspect of an accurate CPS simulation environment: performance, as measured in required simulation time. COSSIM will create a framework that is orders of magnitude faster, while also being more accurate and reporting more CPS aspects than existing solutions, by applying hardware acceleration through the use of field-programmable gate arrays (FPGAs), which have proven extremely efficient in relevant tasks.
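The core coupling idea, a processing simulator and a network simulator advancing in lock-step to a common time barrier and exchanging the events that cross the boundary, can be illustrated with a toy sketch. The classes, the emission schedule and the fixed latency below are illustrative stand-ins, not COSSIM's actual interfaces:

```python
class ProcessingSim:
    """Toy stand-in for a full-system (CPU/node) simulator."""
    def advance(self, until_ns, inbound):
        # Consume packets delivered by the network (ignored in this toy),
        # run up to the time barrier, and emit any packets the simulated
        # node sends in this quantum (here: one packet every 2000 ns).
        return [("pkt", until_ns)] if until_ns % 2000 == 0 else []

class NetworkSim:
    """Toy stand-in for a network simulator with a fixed link latency."""
    def __init__(self, latency_ns=500):
        self.latency_ns = latency_ns
        self.in_flight = []
    def advance(self, until_ns, new_packets):
        # Inject newly sent packets, then deliver everything whose
        # arrival time falls within this quantum.
        self.in_flight += [(p, t + self.latency_ns) for p, t in new_packets]
        delivered = [(p, t) for p, t in self.in_flight if t <= until_ns]
        self.in_flight = [(p, t) for p, t in self.in_flight if t > until_ns]
        return delivered

def cosimulate(cpu, net, quantum_ns=1000, end_ns=5000):
    """Lock-step co-simulation: both simulators advance to the same
    barrier, then exchange the events that crossed the boundary."""
    delivered, pending, time_ns = [], [], 0
    while time_ns < end_ns:
        time_ns += quantum_ns
        sent = cpu.advance(time_ns, pending)
        pending = net.advance(time_ns, sent)
        delivered.extend(pending)
    return delivered
```

The quantum size is the classic accuracy/speed knob: smaller quanta tighten event ordering across the boundary but force more synchronization points, which is exactly the overhead hardware acceleration is meant to absorb.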
A configurable real-time data processing infrastructure mastering autonomous quality adaptation
The growing number of fine-granular data streams opens up new opportunities for improved risk analysis, situation and evolution monitoring, as well as event detection. However, there are still some major roadblocks to leveraging the full potential of data stream processing, as would be needed, for example, for the highly relevant systemic risk analysis in the financial domain.
The QualiMaster project will address those roadblocks by developing novel approaches for autonomously dealing with load and need changes in large-scale data stream processing, while opportunistically exploiting the available resources to increase analysis depth whenever possible. For this purpose, the QualiMaster infrastructure will enable autonomous proactive, reflective and cross-pipeline adaptation, in addition to the more traditional reactive adaptation.
Starting from configurable stream processing pipelines, adaptation will be based on quality-aware component descriptions, pipeline optimization and the systematic exploitation of families of approximate algorithms with different quality/performance trade-offs. However, adaptation will not be restricted to the software level alone: we will go a level further by investigating the systematic translation of stream processing algorithms into code for reconfigurable hardware and the synergistic exploitation of such hardware-based processing in adaptive, high-performance, large-scale data processing.
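A family of algorithms with different quality/performance trade-offs can be illustrated with distinct counting, a common stream analytics primitive. A minimal sketch, assuming a hash-sampling approximation and a load-based switch; the functions, sampling rate and load threshold are illustrative, not QualiMaster's actual components:

```python
import hashlib

def _hash32(x):
    """Deterministic 32-bit hash (Python's built-in hash() is salted)."""
    return int.from_bytes(hashlib.md5(str(x).encode()).digest()[:4], "big")

def distinct_exact(stream):
    """Exact distinct count: precise, but memory grows with cardinality."""
    return len(set(stream))

def distinct_approx(stream, rate=0.25):
    """Approximate distinct count via hash-based sampling: keep only the
    elements whose hash lands in the lowest `rate` fraction of the hash
    space, then scale the sample's cardinality back up."""
    threshold = int(rate * 2 ** 32)
    sample = {x for x in stream if _hash32(x) < threshold}
    return round(len(sample) / rate)

def distinct_count(stream, load):
    """Family selector: switch to the cheaper approximate member of the
    family when the measured system load is high."""
    return distinct_approx(stream) if load > 0.8 else distinct_exact(stream)
```

A reactive adapter would flip the selector after the load spike; the proactive and reflective adaptation the project targets would choose the family member before the spike, based on predicted load.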
The project focuses on financial analysis based on combining financial data streams and social web data, especially for systemic risk analysis. Our user-driven approach involves two SMEs from the financial sector. Rigorous evaluation with real-world data loads from the financial domain, enriched with relevant social web content, will further stress-test the applicability of QualiMaster’s results.
Spoken Dialogue Analytics
The speech services industry has been growing both for telephony applications and, recently, also for smartphones (e.g., Siri). Despite recent progress in spoken dialogue system (SDS) technologies the development cycle of speech services still requires significant effort and expertise. A significant portion of this effort is geared towards the development of the domain semantics and associated grammars, system prompts and spoken dialogue call-flow.
We propose a semi-automated process for spoken dialogue service development and speech service enhancement of deployed services, where incoming speech service data are semi-automatically transcribed and analyzed (human-in-the-loop).
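Semi-automatic transcription of this kind is commonly implemented as confidence-based triage: the recognizer's own confidence decides which utterances a human annotator must review. A minimal sketch; the field names and threshold are illustrative, not SpeDial's actual schema:

```python
def triage_transcriptions(utterances, confidence_threshold=0.8):
    """Human-in-the-loop triage: accept high-confidence ASR hypotheses
    automatically and queue the rest for human review, so annotators
    only see the utterances the recognizer is unsure about."""
    auto, needs_human = [], []
    for utt in utterances:
        if utt["asr_confidence"] >= confidence_threshold:
            auto.append(utt)
        else:
            needs_human.append(utt)
    return auto, needs_human
```

The human-corrected transcripts then flow back into the analysis and tuning stages, which is what makes the loop pay for itself over successive service releases.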
A set of mature technologies will be used. Specifically, these will be: grammar induction, text mining for language modeling, affective modeling of speech and text data, machine translation, crowd-sourcing, speech recognition/transcription, and ontology induction. The technologies will be integrated into a service doctoring platform that will enhance deployed services using the human-in-the-loop paradigm.
Our business model is the quick deployment of a prototype service, followed by service enhancement using our semi-automated service doctoring platform. The reduced development time and time-to-market will provide significant differentiation for SMEs in the speech services area, as well as for end-users. The business opportunity is significant, especially given the consolidation of the speech services industry and the lack of major competition. Our offering is attractive for SMEs in the services area with little expertise in speech service development (B2B) and also for end-users that are developing their own in-house speech services, often with limited success (B2C).
SpeDial is built around the knowledge cascade of technologies, data and services. Automatic or machine-aided algorithms will be used to analyze the data logs from deployed speech services, and, in turn, these data will be used to tune the speech service in a cost-effective manner, using the set of algorithms and tools of the SpeDial platform.
The main S&T goal of SpeDial is to devise machine-aided methods for spoken dialogue system enhancement and customization for call-center applications. SpeDial adopts a user-centric approach to SDS design. Rather than simply rolling out algorithms from the research lab to the real world (being hopeful about their usefulness), we have tried to map the requirements of a speech services developer and emulate the logical flow being followed.
In this process, we have identified two scenarios: service enhancement, where the developer starts from an existing application and tries to improve KPI performance and user satisfaction, and service customization, where the developer addresses the special needs of a user population. Thus, our second goal is to create a platform that supports cost-effective service doctoring for those two scenarios: enhancement and customization. The platform also includes interfaces for service and user-satisfaction monitoring (an IVR analytics component).
Our third goal is to create and support a sustainable pool of developers that will be trained to use the platform. Two separate groups of users are targeted: non-commercial users including the research community and speech services developers at end-user companies. All in all, SpeDial has ambitious but realistic goals both for technological and commercial exploitation of project outputs.
Telecommunication Systems Research Institute – ΕΠΙΤΣ
Technical University of Crete
University Campus – Kounoupidiana
P.C. 73100, Chania – Crete