Service discovery and composition are crucial tasks in application development driven by Web Services. However, with RESTful Web Services replacing SOAP-based Web Services as the dominant service-providing approach, research on service discovery and composition should shift its focus accordingly. The unstructured, resource-oriented and uniform-interface characteristics of RESTful Web Services pose challenges to their discovery and composition. In this work, a framework for RESTful Web Service discovery and automatic composition based on semantic technology is proposed. Firstly, the framework uses the OpenAPI Specification (OAS), extended with resource attributes, as the RESTful Web Service description specification, and supports semantic matching-based discovery and automatic composition by attaching domain-ontology concepts to the extended OAS. Secondly, the framework is fully adapted to REST principles and provides a method for building service composition dependencies at registration time, which are used to generate composition schemes during service discovery. Finally, the framework provides a discovery method that returns RESTful Web Services to the requester either as single-point services or as service composition schemes, according to their semantic similarity to the requester's requirements. We applied the proposed methods to RESTful Web Services from three different fields, and the results show that, with the support of domain ontologies, the methods effectively calculate the similarity between single-point or composite RESTful Web Services and service requests.
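As an illustration of the extension mechanism such a framework can rely on, the sketch below shows a fragment of an OpenAPI description annotated with domain-ontology concepts via OpenAPI's standard `x-` vendor-extension fields. The concrete extension names (`x-onto-concept`, `x-resource-attribute`) and the `travel:` concept identifiers are illustrative assumptions, not the paper's exact vocabulary.

```python
# Hypothetical extended-OAS fragment as a Python dict. The `x-` prefix is
# OpenAPI's official vendor-extension mechanism; the specific field names
# below are invented for illustration.
extended_oas = {
    "openapi": "3.0.3",
    "paths": {
        "/hotels/{id}": {
            "get": {
                "summary": "Fetch one hotel resource",
                "x-onto-concept": "travel:Hotel",   # domain-ontology link
                "parameters": [{
                    "name": "id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                    "x-onto-concept": "travel:HotelId",
                }],
                "responses": {"200": {
                    "description": "Hotel representation",
                    "x-resource-attribute": "travel:HotelInfo",
                }},
            }
        }
    },
}
```

A registry can walk such a document, collect the annotated concepts, and match them against a requester's concepts during discovery.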
How to accurately predict unknown quality-of-service (QoS) data based on observed ones is a hot yet thorny issue in Web service-related applications. Recently, the latent factor (LF) model has shown its efficiency in addressing this issue owing to its high accuracy and scalability. An LF model can be improved by identifying user and service neighborhoods based on user and service geographical information. However, such information can be difficult to acquire in most applications for reasons of information security, identity privacy, and commercial interests in a real system. Besides, existing LF model-based QoS predictors mostly ignore the reliability of the given QoS data, in which noise commonly exists and causes accuracy loss. To address these issues, this paper proposes a data-characteristic-aware latent factor (DCALF) model for highly accurate QoS prediction, where 'data-characteristic-aware' indicates that it implements QoS prediction according to the characteristics of the given QoS data. Its main idea is two-fold: a) it detects the neighborhoods and noises of users and services based on the dense LFs extracted from the original sparse QoS data, and b) it incorporates a density-peaks-based clustering method into its modeling process to detect the neighborhoods and noises of QoS data simultaneously. With these designs, it precisely represents the given QoS data in spite of their sparsity, thereby achieving highly accurate predictions for unknown ones. Experimental results on two QoS datasets generated by real-world Web services demonstrate that the proposed DCALF model outperforms state-of-the-art QoS predictors, making it highly competitive for Web service selection and recommendation.
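The LF models that DCALF builds on can be sketched in a few lines: factor the sparse user-service QoS matrix into low-rank vectors trained on observed entries only, then predict a missing entry from the corresponding inner product. This is a minimal baseline sketch (plain SGD, no neighborhood or noise detection), not the DCALF model itself; all hyperparameter values are illustrative.

```python
import numpy as np

def train_lf(qos, rank=1, lr=0.03, reg=0.02, epochs=2000, seed=0):
    """Factor a sparse QoS matrix (np.nan = unobserved) into user and
    service latent-factor vectors by SGD over the observed entries only."""
    rng = np.random.default_rng(seed)
    n_users, n_services = qos.shape
    U = 0.1 + 0.1 * rng.random((n_users, rank))     # small positive init
    S = 0.1 + 0.1 * rng.random((n_services, rank))
    obs = [(u, s) for u in range(n_users) for s in range(n_services)
           if not np.isnan(qos[u, s])]
    for _ in range(epochs):
        for u, s in obs:
            err = qos[u, s] - U[u] @ S[s]
            U[u] += lr * (err * S[s] - reg * U[u])  # regularized SGD step
            S[s] += lr * (err * U[u] - reg * S[s])
    return U, S

# Toy 3x3 response-time matrix; user 0 never invoked service 2.
qos = np.array([[0.2, 0.4, np.nan],
                [0.2, 0.4, 0.6],
                [0.1, 0.2, 0.3]])
U, S = train_lf(qos)
pred = float(U[0] @ S[2])   # estimate of the missing QoS value
```

Because user 0's observed entries match user 1's, the learned factors place the missing value near 0.6.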
Complex biomedical data generated during clinical, omics and mechanism-based experiments have increasingly been exploited through cloud- and visualization-based data mining techniques. However, the scientific community still lacks an easy-to-use web service for the comprehensive visualization of biomedical data, particularly high-quality and publication-ready graphics that allow easy scaling and updating according to user demands. Therefore, we propose a community-driven modern web service, Hiplot (https://hiplot.org), with concise and top-quality data visualization applications for the life sciences and biomedical fields. This web service permits users to conveniently and interactively complete a few specialized visualization tasks that previously could only be conducted by senior bioinformatics or biostatistics researchers. With more than 240 built-in biomedical data visualization functions, it covers most of the daily demands of biomedical researchers, including basic statistics, multi-omics, regression, clustering, dimensionality reduction, meta-analysis, survival analysis and risk modelling. Moreover, to improve efficiency in the use and development of plugins, we introduced several core advantages on the client and server sides of the website, such as spreadsheet-based data importing, a cross-platform command-line controller (Hctl), multi-user plumber workers, a JavaScript Object Notation-based plugin system, easy reproduction of data, parameters, results and errors, and a real-time update mode. Meanwhile, using demo/real data sets and benchmark tests, we explored statistical parameters, cancer genomic landscapes, disease risk factors and the performance of the website based on selected native plugins. Visit and user statistics further reflect the potential impact of this web service on relevant fields. Thus, researchers devoted to the life and data sciences would benefit from this emerging and free web service.
This paper presents an approach to GWS (Geospatial Web Service) discovery through the semantic annotation of WPS (Web Processing Service) service descriptions. The rationale behind this work is that search engines that use appropriate semantic similarity measures in the matching process are more accurate, in terms of precision and recall, than those based on syntactic matching alone. The lack of semantics in service descriptions based on a standard such as WPS prevents the use of such a matching process and is considered a limitation of GWS discovery. The presented GWS discovery approach therefore incorporates semantics both in the service description method and in the matching process. The description of services is based on a semantic lightweight meta-model instantiated in the WPS 2.0 standard, extending the service description through metadata tags. The matching process is performed in three steps (a functionality matching step, an I/O (Input/Output) matching step and a non-functional matching step). Its core is a semantic similarity measure that combines logical and non-logical matching methods. Finally, the paper presents the results of an experiment applying the proposed discovery approach to a GWS corpus, showing promising results and the added value of the three-step matching process.
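The three-step matching idea can be sketched with a toy similarity measure: filter candidates on functionality similarity, then rank survivors by a weighted combination of the three steps. Jaccard overlap over annotated concept sets, the weights, and the threshold below stand in for the paper's combined logical and non-logical semantic measures; the service names and concepts are invented for illustration.

```python
def jaccard(a, b):
    """Set-overlap similarity between two collections of concepts."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_services(request, services, w=(0.5, 0.3, 0.2), func_min=0.3):
    """Step 1: filter on functionality similarity; steps 2-3: rank by a
    weighted sum of functionality, I/O and non-functional similarity."""
    ranked = []
    for name, svc in services.items():
        f = jaccard(request["func"], svc["func"])
        if f < func_min:               # functionality matching step (filter)
            continue
        score = (w[0] * f
                 + w[1] * jaccard(request["io"], svc["io"])
                 + w[2] * jaccard(request["nonfunc"], svc["nonfunc"]))
        ranked.append((round(score, 6), name))
    return sorted(ranked, reverse=True)

request = {"func": {"Interpolation"}, "io": {"PointSet", "Raster"},
           "nonfunc": {"FreeOfCharge"}}
services = {
    "idw_service":    {"func": {"Interpolation"}, "io": {"PointSet", "Raster"},
                       "nonfunc": {"FreeOfCharge"}},
    "buffer_service": {"func": {"Buffering"}, "io": {"Geometry"},
                       "nonfunc": {"FreeOfCharge"}},
}
ranking = rank_services(request, services)
```

The functionality filter mirrors the logical pre-selection step: `buffer_service` never reaches the weighted scoring stage.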
Alignment-free (AF) sequence comparison is attracting persistent interest driven by data-intensive applications. Hence, many AF procedures have been proposed in recent years, but the lack of a clearly defined benchmarking consensus hampers their performance assessment.
Here, we present a community resource (http://afproject.org) to establish standards for comparing alignment-free approaches across different areas of sequence-based research. We characterize 74 AF methods available in 24 software tools for five research applications, namely, protein sequence classification, gene tree inference, regulatory element detection, genome-based phylogenetic inference, and reconstruction of species trees under horizontal gene transfer and recombination events.
The interactive web service allows researchers to explore the performance of alignment-free tools relevant to their data types and analytical goals. It also allows method developers to assess their own algorithms and compare them with current state-of-the-art tools, accelerating the development of new, more accurate AF solutions.
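A minimal example of the kind of alignment-free method the benchmark covers is k-mer frequency comparison: map each sequence to a normalized k-mer profile and measure the distance between profiles, here with cosine distance. This is a generic textbook AF measure, not any specific tool from the benchmark.

```python
import math
from collections import Counter

def kmer_profile(seq, k=3):
    """Normalized k-mer frequency vector of a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def cosine_distance(p, q):
    """1 - cosine similarity between two sparse k-mer profiles."""
    dot = sum(p[kmer] * q.get(kmer, 0.0) for kmer in p)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return 1.0 - dot / norm

a = "ACGTACGTACGT"   # periodic sequence
b = "ACGTACGTTGCA"   # shares a prefix (and its k-mers) with a
c = "TTTTTTTTTTTT"   # no 3-mer in common with a
d_self = cosine_distance(kmer_profile(a), kmer_profile(a))
d_ab = cosine_distance(kmer_profile(a), kmer_profile(b))
d_ac = cosine_distance(kmer_profile(a), kmer_profile(c))
```

Without any alignment, the profile distance still orders the sequences by relatedness: `d_ab < d_ac`.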
Many studies and green building rating systems have addressed the social and environmental importance of site planning. Tools based on BIM and Location-Based Services (LBSs) have been developed to estimate energy consumption for material transportation and the surrounding density of sites. However, these tools are not programmable and are limited to particular project phases. This calls for solutions flexible enough to run site analyses of the social surroundings, and compatible with user programming, in the early design stage. Integrating visual programming with web service Application Programming Interfaces (APIs) can fulfill the requirements of evaluating the publicly available diverse uses of sites with custom code. This study introduces a method for integrating Dynamo BIM and Amap web service APIs to evaluate publicly available diverse uses and transportation. Additionally, use-case implementations are demonstrated, including assessments of Access to Quality Transit and Diverse Uses in LEED v4. Results from the integrated tool are analyzed and validated against survey results, and the analysis indicates that the integration method introduced in this paper is effective. The limitations, potentials and future developments are also discussed. The integration of Dynamo BIM and web service APIs may be useful for site assessments in the early design stage or even earlier.
•The paper presents a quick method for assessing building surroundings.
•Integration of a web service API and Dynamo BIM was used for the assessment.
•Validation showed the method can serve as a reference for site assessments.
With the growing number of competing Web services that provide similar functionality, Quality-of-Service (QoS) prediction is becoming increasingly important for various QoS-aware approaches to Web services. Collaborative filtering (CF), which is among the most successful personalized prediction techniques for recommender systems, has been widely applied to Web service QoS prediction. In addition to using conventional CF techniques, a number of studies extend the CF approach by incorporating additional information about services and users, such as location, time, and other contextual information from service invocations. Other studies address further challenges in QoS prediction, such as adaptability, credibility, and privacy preservation. In this survey, we summarize and analyze the state-of-the-art CF-based QoS prediction approaches for Web services and discuss their features and differences. We also present several Web service QoS datasets that have been used as benchmarks for evaluating prediction accuracy, and outline some possible future research directions.
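A minimal sketch of the conventional memory-based CF that such surveys start from: compute Pearson similarity between users over co-invoked services, then predict a missing value as the similarity-weighted average over the most similar users. Real predictors layer on the extensions discussed above (location, time, credibility, etc.); the toy data below are invented.

```python
import numpy as np

def predict_qos(qos, user, service, top_k=2):
    """Predict qos[user, service] from the top_k most similar users
    (Pearson correlation over co-invoked services); np.nan = missing."""
    sims = []
    for v in range(qos.shape[0]):
        if v == user or np.isnan(qos[v, service]):
            continue
        both = ~np.isnan(qos[user]) & ~np.isnan(qos[v])
        if both.sum() < 2:
            continue                      # too few co-invocations
        x, y = qos[user, both], qos[v, both]
        if np.std(x) == 0 or np.std(y) == 0:
            continue                      # correlation undefined
        sims.append((float(np.corrcoef(x, y)[0, 1]), v))
    sims.sort(reverse=True)
    top = [(s, v) for s, v in sims[:top_k] if s > 0]
    if not top:                           # fall back to the service mean
        return float(np.nanmean(qos[:, service]))
    return sum(s * qos[v, service] for s, v in top) / sum(s for s, _ in top)

qos = np.array([[0.2, 0.4, np.nan],   # target user; service 2 unknown
                [0.3, 0.5, 0.7],      # behaves like the target user
                [0.9, 0.1, 0.2]])     # behaves unlike the target user
pred = predict_qos(qos, user=0, service=2)
```

Only the positively correlated neighbor contributes, so the prediction follows user 1's observation.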
Web services are becoming a major utility for accomplishing complex tasks over the Internet. In practice, end-users usually search for Web service compositions that best meet their quality-of-service (QoS) requirements (i.e., QoS global constraints). Since the number of services is constantly increasing and their respective QoS is inherently uncertain (due to environmental conditions), selecting optimal compositions becomes ever more challenging. To tackle this problem, we propose a heuristic based on majority judgment that reduces the search space. In addition, we perform a constraint programming search to select the top-K compositions that fulfill the QoS global constraints. The experimental results demonstrate the high performance of our approach.
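For small candidate sets, the top-K selection problem itself can be illustrated by brute-force enumeration: aggregate each composition's QoS, discard compositions that violate the global constraints, and keep the K best. The additive aggregation and the exhaustive search below are simplifying assumptions for illustration; the paper's approach instead prunes the space with majority judgment and searches with constraint programming.

```python
from itertools import product

def top_k_compositions(candidates, max_time, max_cost, k=3):
    """Enumerate one-service-per-task compositions, keep those meeting the
    global constraints, and return the k with the lowest total time."""
    feasible = []
    for combo in product(*candidates):
        t = sum(s["time"] for s in combo)   # additive QoS aggregation
        c = sum(s["cost"] for s in combo)
        if t <= max_time and c <= max_cost:
            feasible.append((t, c, [s["name"] for s in combo]))
    feasible.sort()
    return feasible[:k]

# Two abstract tasks, two concrete candidate services each (toy data).
candidates = [
    [{"name": "a1", "time": 1, "cost": 3}, {"name": "a2", "time": 2, "cost": 1}],
    [{"name": "b1", "time": 1, "cost": 2}, {"name": "b2", "time": 3, "cost": 1}],
]
best = top_k_compositions(candidates, max_time=4, max_cost=4)
```

Two of the four compositions survive the global constraints, ranked by total response time.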
Generating highly accurate predictions for missing quality-of-service (QoS) data is an important issue. Latent factor (LF)-based QoS predictors have proven effective in addressing it. However, they rely on first-order solvers that cannot adequately address their target problem, which is inherently bilinear and nonconvex, leaving a significant opportunity for accuracy improvement. This paper proposes to incorporate an efficient second-order solver to raise their accuracy. To do so, we adopt the principle of Hessian-free optimization and avoid direct manipulation of the Hessian matrix by employing the efficiently computable product between its Gauss-Newton approximation and an arbitrary vector. Second-order information is thus innovatively integrated into the predictors. Experimental results on two industrial QoS datasets indicate that, compared with state-of-the-art predictors, the newly proposed one achieves significantly higher prediction accuracy at an affordable computational cost. Hence, it is especially suitable for industrial applications requiring highly accurate predictions of unknown QoS data.
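The key trick, avoiding the explicit Hessian by computing Gauss-Newton-vector products, can be demonstrated on a rank-one LF model: with predictions u[a]*s[b], the product (JᵀJ)v is accumulated observation by observation without ever materializing the Jacobian J. The rank-one model and variable layout are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

# Rank-one LF model: the prediction for observation (a, b) is u[a] * s[b].
# theta = concat(u, s); J is the Jacobian of all predictions w.r.t. theta,
# and the Gauss-Newton matrix is G = J^T J. We form G @ v as J^T (J v),
# touching each observation once and never building J.

def gn_vector_product(u, s, obs, v):
    """Return (J^T J) @ v matrix-free; obs is a list of (a, b) index pairs."""
    vu, vs = v[:len(u)], v[len(u):]
    gu, gs = np.zeros_like(u), np.zeros_like(s)
    for a, b in obs:
        jv = s[b] * vu[a] + u[a] * vs[b]   # the (J v) entry for this observation
        gu[a] += s[b] * jv                 # accumulate J^T (J v)
        gs[b] += u[a] * jv
    return np.concatenate([gu, gs])

# Cross-check against an explicitly built Jacobian on a toy instance.
u = np.array([1.0, 2.0])
s = np.array([3.0, 4.0])
obs = [(0, 0), (0, 1), (1, 0)]
J = np.zeros((len(obs), 4))
for i, (a, b) in enumerate(obs):
    J[i, a] = s[b]        # d(u[a]*s[b]) / d u[a]
    J[i, 2 + b] = u[a]    # d(u[a]*s[b]) / d s[b]
v = np.array([1.0, 2.0, 3.0, 4.0])
gv = gn_vector_product(u, s, obs, v)
```

With such a product in hand, a conjugate-gradient inner loop can solve the Gauss-Newton system at the cost of a handful of these passes.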
Service-oriented architecture is becoming a major software framework for complex applications, which can be dynamically and flexibly composed by integrating existing component web services provided by different providers using standard protocols. The rapid introduction of new web services into a dynamic business environment can adversely affect service quality and user satisfaction. Therefore, how to leverage and aggregate the quality-of-service (QoS) information of individual components to derive the optimal QoS of a composite service that meets users' needs remains an active research problem. This study reviews the current state of the art in web service selection and composition, especially nature-inspired computing approaches, and aims to inspire possible new ideas. Firstly, background knowledge of web services is presented. Secondly, various nature-inspired selection and composition approaches for QoS-aware web services are systematically reviewed and analysed. Finally, challenges, remarks and discussions about QoS-aware web service composition are presented.
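As one concrete instance of the nature-inspired approaches reviewed, a minimal genetic algorithm for QoS-aware selection can be sketched: each individual assigns one candidate service to each abstract task, and elitist selection, one-point crossover and mutation evolve the population toward the highest-utility composition. The additive utility and all parameter values are illustrative assumptions.

```python
import random

def evolve(tasks, fitness, pop_size=20, gens=60, seed=1):
    """Minimal GA: an individual is a list of candidate indices, one per
    task; elitist selection, one-point crossover, random-reset mutation."""
    rng = random.Random(seed)
    n = len(tasks)
    pop = [[rng.randrange(len(t)) for t in tasks] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:              # mutation: reset one gene
                g = rng.randrange(n)
                child[g] = rng.randrange(len(tasks[g]))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy problem: per-task candidate utilities (e.g. reliability minus cost).
tasks = [[0.2, 0.9, 0.5], [0.8, 0.1, 0.3], [0.4, 0.6, 0.7]]
fitness = lambda ind: sum(tasks[i][g] for i, g in enumerate(ind))
best = evolve(tasks, fitness)
```

On this 27-point search space the GA recovers the per-task optima; the same loop scales to spaces where exhaustive enumeration is infeasible, which is the setting the reviewed approaches target.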