Although BIM and GIS come from different domains, the interoperability of IFC and CityGML is seen today as a necessary step for the planning, design, and construction of an infrastructure project. Such an approach utilizes data from both domains by converting between the two open data standards. However, the interoperability of GIS/BIM convergence with other domains, such as LandInfra, LADM, RailTopoModel, etc., is becoming increasingly important, particularly in railway projects. Thus, cooperation involves not only stakeholders within the AECO/MEP industry but also stakeholders from other domains. A decentralized, seamless data flow among different domains must be ensured by linking different domain information models. This study presents a comprehensive approach for incorporating information models with a particular focus on railways. The approach first transfers project information from BIM into GIS using geospatial ontologies and then extends this by integrating other information models from different fields.
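The ontology-based linking of domain models described above can be sketched as RDF-level alignment between an IFC entity and its CityGML counterpart. A minimal, dependency-free sketch follows; the namespaces and identifiers are hypothetical, and a production system would use an RDF library and the models' real GUIDs:

```python
# Minimal sketch (hypothetical IDs and namespaces): linking an IFC element to
# its CityGML counterpart with owl:sameAs triples, so that a query over the
# linked graph can reach BIM-level detail without converting either model.
OWL_SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

def link_models(ifc_guid: str, citygml_id: str) -> tuple[str, str, str]:
    """Return an RDF triple asserting that the IFC entity and the
    CityGML feature denote the same real-world object."""
    subject = f"http://example.org/ifc/{ifc_guid}"      # hypothetical namespace
    obj = f"http://example.org/citygml/{citygml_id}"    # hypothetical namespace
    return (subject, OWL_SAME_AS, obj)

def to_turtle(triples) -> str:
    """Serialize (s, p, o) triples as N-Triples-style Turtle statements."""
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)

triples = [link_models("2O2Fr$t4X7Zf8NOew3FLOH", "bldg_1001")]
print(to_turtle(triples))
```

The same pattern extends to LandInfra, LADM, or RailTopoModel identifiers, since each linked model keeps its native schema and only the cross-domain identity assertions are added.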
•An advanced retail electricity market is proposed for active distribution networks.
•Various flexibility resources and numerous players can be accommodated conveniently.
•Having more players improved competition, which resulted in lower prices.
•Obtaining the Nash equilibrium point in every hour guarantees a globally optimal solution.
•Local generation increased to serve more consumption via H-MG interoperation.
The concept of the active distribution network has emerged from the application of new generation and storage technologies, demand flexibility, and communication infrastructure. The main goal is to create infrastructure and algorithms that facilitate an increased penetration of distributed energy resources, apply demand response and storage technologies, and encourage local generation and consumption within the distribution network. However, managing thousands of prosumers with different requirements and objectives is a challenging task. Market mechanisms are therefore necessary to fully exploit the potential of customers, known as prosumers in this new era. This paper offers an advanced retail electricity market based on game theory for the optimal operation of home microgrids (H-MGs) and their interoperability within active distribution networks. The proposed market accommodates any number of retailers and prosumers incorporating different generation sources, storage devices, and demand response resources. It is formulated considering three different types of players, namely generator, consumer, and retailer. The optimal solution is achieved using the Nikaido-Isoda Relaxation Algorithm (NIRA) in a non-cooperative gaming structure. The uncertainties of generation and demand are also taken into account using appropriate statistical models. A comprehensive simulation study is carried out to reveal the effectiveness of the proposed method in lowering the market clearing price (MCP) by about 4%, increasing H-MG responsive load consumption by a factor of two, and promoting local generation by a factor of three. The numerical results also show the capability of the proposed algorithm to encourage market participation and improve profit for all participants.
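The Nikaido-Isoda relaxation used above can be illustrated on a toy two-player Cournot market, a stand-in for the paper's retail market with entirely hypothetical numbers. Player i chooses quantity q_i to maximize u_i = q_i(P0 - q1 - q2) - c_i q_i; for this game the maximizer of the Nikaido-Isoda function decomposes into per-player best responses BR_i(q) = (P0 - c_i - q_other)/2, and each relaxation step averages the current strategy with that maximizer:

```python
# Toy sketch of the Nikaido-Isoda relaxation on a two-player Cournot game
# (hypothetical stand-in for the paper's retail market, not its formulation).
def nira_cournot(P0=10.0, c=(1.0, 1.0), alpha=0.5, iters=200):
    q1 = q2 = 0.0
    for _ in range(iters):
        # Best responses jointly maximize the Nikaido-Isoda function Z(q).
        br1 = max(0.0, (P0 - c[0] - q2) / 2)
        br2 = max(0.0, (P0 - c[1] - q1) / 2)
        # Relaxation step: move partway from the current strategy toward Z(q).
        q1 = (1 - alpha) * q1 + alpha * br1
        q2 = (1 - alpha) * q2 + alpha * br2
    return q1, q2

q1, q2 = nira_cournot()
print(q1, q2)  # converges to the symmetric Nash equilibrium (3.0, 3.0)
```

With symmetric costs the analytic equilibrium is q_i = (P0 - c)/3 = 3, which the iteration reaches; the same fixed-point structure underlies the hourly equilibrium computation in the proposed market.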
Introduction
Fast Healthcare Interoperability Resources (FHIR) is a standard for exchanging data. It is a common framework that makes it easier to read, write, and securely transfer medical data, such as patient information. SMART is a platform that builds upon the FHIR specification and provides developers with a set of APIs to create applications on top of FHIR. These applications range from retrieving a patient's medication history to evaluating a patient's risk of cardiac arrest. The aim of this project was to help individual doctors, small teams of developers, or large medical organisations, who may not be familiar with FHIR, to discover the capabilities of SMART APIs and build applications for this next generation of digital healthcare.
Method
As part of a joint collaboration between GOSH and UCL Computer Science (CS) through the Industry Exchange Network programme, CS students developed a web application using Django, a Python framework that employs a model-template-view (MTV) pattern. The client-side templates were built using React. The application's back-end encapsulates the app's logic and interacts with the data persistence layer, a SQLite database. The code snippets are run by querying the SMART STU3 Sandbox.
Results
A functional web application was developed that collates a catalogue of modular SMART functions that developers can implement in their own applications. The platform allows non-coders to explore SMART on FHIR components and develop prototype applications. It has a library of runnable code snippets that can act as a helpful tool and reference when building SMART applications.
Conclusion
This application supports the development of SMART applications that adhere to FHIR standards in data interoperability, for developers who lack specific knowledge of FHIR standards. Such resources are likely to be of increasing importance as NHS organisations begin to develop customised local programmes using SMART on FHIR.
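A minimal sketch of the kind of runnable snippet such a catalogue might expose: extracting a display name from a FHIR STU3 Patient resource. The JSON below is a hand-written example in the shape FHIR defines, not actual sandbox output; a live snippet would GET {base}/Patient/{id} from the sandbox instead:

```python
# Hand-written example Patient resource in FHIR STU3 shape (not sandbox output).
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"use": "official", "family": "Chalmers", "given": ["Peter", "James"]}],
    "birthDate": "1974-12-25",
}

def display_name(resource: dict) -> str:
    """Return 'Given Family' from the official name, falling back to the first
    name entry if no official name is present."""
    names = resource.get("name", [])
    if not names:
        return ""
    official = next((n for n in names if n.get("use") == "official"), names[0])
    return " ".join(official.get("given", []) + [official.get("family", "")]).strip()

print(display_name(patient))  # Peter James Chalmers
```

Snippets at this granularity are what lets non-coders see, run, and reuse one SMART on FHIR capability at a time.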
Unprecedented attention towards blockchain technology is serving as a game-changer in fostering the development of blockchain-enabled distinctive frameworks. However, fragmentation unleashed by its underlying concepts hinders different stakeholders from effectively utilizing blockchain-supported services, resulting in the obstruction of its wide-scale adoption. Exploring synergies among the isolated frameworks requires comprehensively studying inter-blockchain communication approaches. These approaches broadly come under the umbrella of the Blockchain Interoperability (BI) notion, as it can facilitate a novel paradigm of an integrated blockchain ecosystem that connects state-of-the-art disparate blockchains. Currently, there is a lack of studies that comprehensively review BI, which works as a stumbling block in its development. Therefore, this article aims to articulate the potential of BI by reviewing it from diverse perspectives. Beginning with a glance at blockchain architecture fundamentals, this article discusses its associated platforms, taxonomy, and consensus mechanisms. Subsequently, it argues for BI's requirement by exemplifying its potential opportunities and application areas. Concerning BI, an architecture seems to be a missing link. Hence, this article introduces a layered architecture for the effective development of protocols and methods for interoperable blockchains. Furthermore, this article proposes an in-depth BI research taxonomy and provides an insight into the state-of-the-art projects. Finally, it determines possible open challenges and future research in the domain.
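One concrete primitive behind many inter-blockchain communication approaches of the kind surveyed here is the hash time-lock used in cross-chain atomic swaps. A toy, chain-agnostic sketch (everything below is illustrative; real HTLCs live in on-chain contracts or scripts):

```python
import hashlib

# Toy hash time-lock (HTLC) check, the primitive behind many cross-chain
# atomic swaps: funds unlock only with the preimage of a published hash,
# and only before a timeout; after the timeout the depositor can refund.
def make_hashlock(secret: bytes) -> str:
    """Publish the SHA-256 hash of the secret; the secret stays private."""
    return hashlib.sha256(secret).hexdigest()

def can_claim(hashlock: str, preimage: bytes, now: float, timeout: float) -> bool:
    """Claim succeeds iff the preimage matches before the timeout expires."""
    return now < timeout and hashlib.sha256(preimage).hexdigest() == hashlock

lock = make_hashlock(b"s3cret")
print(can_claim(lock, b"s3cret", now=100.0, timeout=200.0))  # True
print(can_claim(lock, b"wrong", now=100.0, timeout=200.0))   # False
```

Because revealing the preimage on one chain lets the counterparty claim on the other, two ledgers can coordinate a swap without any trusted intermediary, which is the basic interoperability property the layered architecture generalizes.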
The Network University Medicine projects are an important part of the German COVID-19 research infrastructure. They comprise 2 subprojects: COVID-19 Data Exchange (CODEX) and Coordination on Mobile Pandemic Apps Best Practice and Solution Sharing (COMPASS). CODEX provides a centralized and secure data storage platform for research data, whereas in COMPASS, expert panels were gathered to develop a reference app framework for capturing patient-reported outcomes (PROs) that can be used by any researcher.
Our study aims to integrate the data collected with the COMPASS reference app framework into the central CODEX platform, so that they can be used by secondary researchers. Although both projects used the Fast Healthcare Interoperability Resources (FHIR) standard, it was not used in a way that data could be shared directly. Given the short time frame and the parallel developments within the CODEX platform, a pragmatic and robust solution for an interface component was required.
We have developed a means to facilitate and promote the use of the German Corona Consensus (GECCO) data set, a core data set for COVID-19 research in Germany. In this way, we ensured semantic interoperability for the PRO data collected with the COMPASS app. We also developed an interface component to sustain syntactic interoperability.
The use of different FHIR types by the COMPASS reference app framework (the general-purpose FHIR Questionnaire) and the CODEX platform (eg, Patient, Condition, and Observation) was found to be the most significant obstacle. Therefore, we developed an interface component that realigns the Questionnaire items with the corresponding items in the GECCO data set and provides the correct resources for the CODEX platform. We extended the existing COMPASS questionnaire editor with an import function for GECCO items, which also tags them for the interface component. This ensures syntactic interoperability and eases the reuse of the GECCO data set for researchers.
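The realignment step might look roughly like the following sketch, in which QuestionnaireResponse items whose linkIds were tagged at import time are rewritten as individual Observation resources. The mapping table, LOINC code, and field layout are illustrative, not the actual GECCO profiles:

```python
# Hypothetical mapping created by the questionnaire editor's import function:
# linkId -> (code, display). Codes below are illustrative, not GECCO's.
GECCO_TAGS = {"q1": ("8310-5", "Body temperature")}

def to_observations(questionnaire_response: dict) -> list[dict]:
    """Rewrite tagged QuestionnaireResponse items as Observation resources."""
    observations = []
    for item in questionnaire_response.get("item", []):
        tag = GECCO_TAGS.get(item["linkId"])
        if tag is None:
            continue  # item carries no GECCO mapping; skip it
        code, display = tag
        observations.append({
            "resourceType": "Observation",
            "status": "final",
            "code": {"coding": [{"system": "http://loinc.org",
                                 "code": code, "display": display}]},
            "valueQuantity": item["answer"][0]["valueQuantity"],
        })
    return observations

qr = {"resourceType": "QuestionnaireResponse",
      "item": [{"linkId": "q1",
                "answer": [{"valueQuantity": {"value": 38.2, "unit": "Cel"}}]}]}
print(to_observations(qr)[0]["code"]["coding"][0]["code"])  # 8310-5
```

The key point is that the transformation is driven entirely by the tags attached at questionnaire-design time, so secondary researchers receive typed resources without the app framework having to change its general-purpose Questionnaire model.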
This paper shows how PRO data, which are collected across various studies conducted by different researchers, can be captured in a research-compatible way. This means that the data can be shared with a central research infrastructure and be reused by other researchers to gain more insights about COVID-19 and its sequelae.
Interoperability of clinical data remains an issue in healthcare despite technology advancements. FHIR, Fast Healthcare Interoperability Resources, is a standard for the electronic exchange of healthcare data that is built on web standards and is flexible, user-friendly, and easy to implement. This research aims to assess FHIR's potential as a solution for interoperability by identifying critical components of FHIR that can improve interoperability and bridge the gap between healthcare systems and sources of clinical data. By leveraging FHIR as a standard, healthcare professionals are able to communicate and access patient information more effectively, leading to better patient care and more efficient clinical operations.
How will new decentralized governance impact research?
On 25 May 2018, the European Union (EU) regulation 2016/679 on data protection, also known as the General Data Protection Regulation (GDPR), will take effect. The GDPR, which repeals previous European legislation on data protection (Directive 95/46/EC) (1), is bound to have major effects on biomedical research and digital health technologies, in Europe and beyond, given the global reach of EU-based research and the prominence of international research networks requiring interoperability of standards. Here we describe ways in which the GDPR will become a critical tool to structure flexible governance for data protection. As a timely forecast for its potential impact, we analyze the implications of the GDPR in an ongoing paradigmatic legal controversy involving the database originally assembled by one of the world's first genomic biobanks, Shardna.
Interoperability (IOP) is the ability of a product or system – whose interfaces (APIs) are publicly documented – to connect to and operate with other products or systems, without restrictions. Interoperability further enables information and usable data to be properly exchanged and ensures the alignment of different business processes in critical sectors. In addition, it is a prerequisite for transparent, domain-agnostic, and sustainable public sector digital services, where Public Administrations (PA) can efficiently interact across borders and domains by using common frameworks, standards, and processes for sharing information and data. The European Interoperability Framework (EIF) enables interoperability with guidelines for digital services. Alignment with the EIF therefore becomes pivotal for European Union (EU) countries, since regulations that facilitate and impose the implementation of European policies, such as the Single Digital Gateway (SDG) regulation and the Once-Only Principle (OOP), consider IOP a crucial technical and operational component for government digital services. This article proposes an update of the Greek National Interoperability Framework (NIF) based on the guidelines of the EIF, the OOP, and other technological trends, in conjunction with new legal and policy provisions. The proposed assessment methodology can be reused in other countries and can be further adapted for updating the EIF.
•Interoperability is essential in digital services and is a prerequisite for transparent, domain-agnostic and sustainable public sector digital services.
•This proposal contributes to digital transformation and interoperability knowledge by providing a systemic methodology for the identification of gaps and the proposal of concrete improvements and recommendations.
•The methodology incorporates technological trends such as the Once-Only Principle (OOP), the CEF Building Blocks (BB) and the Single Digital Gateway (SDG) regulation, in conjunction with new legal and policy provisions, and can be reused in other countries and for the update of the EIF.
Much biodiversity data is collected worldwide, but it remains challenging to assemble the scattered knowledge for assessing biodiversity status and trends. The concept of Essential Biodiversity Variables (EBVs) was introduced to structure biodiversity monitoring globally, and to harmonize and standardize biodiversity data from disparate sources to capture a minimum set of critical variables required to study, report and manage biodiversity change. Here, we assess the challenges of a ‘Big Data’ approach to building global EBV data products across taxa and spatiotemporal scales, focusing on species distribution and abundance. The majority of currently available data on species distributions derives from incidentally reported observations or from surveys where presence‐only or presence–absence data are sampled repeatedly with standardized protocols. Most abundance data come from opportunistic population counts or from population time series using standardized protocols (e.g. repeated surveys of the same population from single or multiple sites). Enormous complexity exists in integrating these heterogeneous, multi‐source data sets across space, time, taxa and different sampling methods. Integration of such data into global EBV data products requires correcting biases introduced by imperfect detection and varying sampling effort, dealing with different spatial resolution and extents, harmonizing measurement units from different data sources or sampling methods, applying statistical tools and models for spatial inter‐ or extrapolation, and quantifying sources of uncertainty and errors in data and models. To support the development of EBVs by the Group on Earth Observations Biodiversity Observation Network (GEO BON), we identify 11 key workflow steps that will operationalize the process of building EBV data products within and across research infrastructures worldwide.
These workflow steps take multiple sequential activities into account, including identification and aggregation of various raw data sources, data quality control, taxonomic name matching and statistical modelling of integrated data. We illustrate these steps with concrete examples from existing citizen science and professional monitoring projects, including eBird, the Tropical Ecology Assessment and Monitoring network, the Living Planet Index and the Baltic Sea zooplankton monitoring. The identified workflow steps are applicable to both terrestrial and aquatic systems and a broad range of spatial, temporal and taxonomic scales. They depend on clear, findable and accessible metadata, and we provide an overview of current data and metadata standards. Several challenges remain to be solved for building global EBV data products: (i) developing tools and models for combining heterogeneous, multi‐source data sets and filling data gaps in geographic, temporal and taxonomic coverage, (ii) integrating emerging methods and technologies for data collection such as citizen science, sensor networks, DNA‐based techniques and satellite remote sensing, (iii) solving major technical issues related to data product structure, data storage, execution of workflows and the production process/cycle as well as approaching technical interoperability among research infrastructures, (iv) allowing semantic interoperability by developing and adopting standards and tools for capturing consistent data and metadata, and (v) ensuring legal interoperability by endorsing open data or data that are free from restrictions on use, modification and sharing. Addressing these challenges is critical for biodiversity research and for assessing progress towards conservation policy targets and sustainable development goals.
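As a toy illustration of one such workflow step – aggregating heterogeneous occurrence records onto a common spatial grid and normalizing by sampling effort – consider the following sketch, with entirely hypothetical records and a naive reporting rate standing in for the model-based corrections discussed above:

```python
from collections import defaultdict
from math import floor

# Sketch of one aggregation step: bin mixed-source occurrence records into
# 1-degree grid cells and compute a naive reporting rate (detections per
# visit) as a crude effort normalization. Records below are hypothetical.
def to_cell(lat: float, lon: float, res: float = 1.0) -> tuple[int, int]:
    """Map a coordinate to its grid cell at the given resolution (degrees)."""
    return (floor(lat / res), floor(lon / res))

def reporting_rate(records):
    """records: iterable of (lat, lon, detected) tuples from mixed surveys."""
    visits, hits = defaultdict(int), defaultdict(int)
    for lat, lon, detected in records:
        cell = to_cell(lat, lon)
        visits[cell] += 1
        hits[cell] += int(detected)
    return {cell: hits[cell] / visits[cell] for cell in visits}

records = [(52.3, 4.8, True), (52.6, 4.1, False), (52.1, 4.9, True), (48.9, 2.3, True)]
print(reporting_rate(records))  # {(52, 4): 0.666..., (48, 2): 1.0}
```

A real EBV workflow would replace the raw reporting rate with occupancy or abundance models that correct for imperfect detection, but the gridding-and-normalization structure is the same.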