Service discovery and composition are crucial tasks in application development driven by Web services. However, with RESTful Web services replacing SOAP-based Web services as the dominant service-providing approach, research on service discovery and composition should also shift its focus from SOAP-based to RESTful Web services. The unstructured, resource-oriented, and uniform-interface characteristics of RESTful Web services pose challenges to their discovery and composition. In this work, a framework for RESTful Web service discovery and automatic composition based on semantic technology is proposed. First, the framework uses the OpenAPI Specification (OAS), extended with resource attributes, as the RESTful Web service description specification, and supports semantic-based matching discovery and automatic composition by attaching domain ontology concepts to the extended OAS. Second, the framework is fully adapted to REST characteristics and provides a method for building service composition dependencies during registration, which is used to generate composition schemes during service discovery. Finally, the framework provides a discovery method that returns RESTful Web services to the requester as single-point services or service composition schemes according to their semantic similarity to the requester's requirements. We evaluated the proposed methods on RESTful Web services from three different fields, and the results show that, with the support of domain ontology, the methods effectively calculate the similarity between single-point or composite RESTful Web services and service requests.
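To make the concept-matching step concrete, the following is a purely illustrative sketch of ontology-based service matching. The toy taxonomy, the service annotations, and the Wu-Palmer-style similarity are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative only: a tiny domain ontology as parent links (root "Thing"
# has no parent), services annotated with ontology concepts, and a
# Wu-Palmer-style concept similarity used to rank services for a request.
TAXONOMY = {
    "Thing": None,
    "Location": "Thing",
    "City": "Location",
    "Airport": "Location",
    "Weather": "Thing",
    "Forecast": "Weather",
}

def ancestors(c):
    """Chain from concept c up to the root, c first."""
    chain = [c]
    while TAXONOMY[c] is not None:
        c = TAXONOMY[c]
        chain.append(c)
    return chain

def depth(c):
    return len(ancestors(c)) - 1

def wu_palmer(a, b):
    """Concept similarity from the depth of the least common subsumer."""
    anc_a = set(ancestors(a))
    lcs = next(c for c in ancestors(b) if c in anc_a)  # deepest shared ancestor
    da, db = depth(a), depth(b)
    return 1.0 if da + db == 0 else 2.0 * depth(lcs) / (da + db)

def match(request_concepts, services):
    """Rank annotated services by average best-match similarity to the request."""
    def score(svc_concepts):
        return sum(max(wu_palmer(r, s) for s in svc_concepts)
                   for r in request_concepts) / len(request_concepts)
    return sorted(services, key=lambda name: score(services[name]), reverse=True)
```

For example, a request annotated with `["City"]` ranks a service annotated with `["City", "Airport"]` above one annotated with `["Forecast"]`, since the latter shares only the ontology root with the request.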
Complex biomedical data generated during clinical, omics and mechanism-based experiments have increasingly been exploited through cloud- and visualization-based data mining techniques. However, the scientific community still lacks an easy-to-use web service for the comprehensive visualization of biomedical data, particularly high-quality and publication-ready graphics that allow easy scaling and updating according to user demands. Therefore, we propose a community-driven modern web service, Hiplot (https://hiplot.org), with concise and top-quality data visualization applications for the life sciences and biomedical fields. This web service permits users to conveniently and interactively complete specialized visualization tasks that previously could only be conducted by senior bioinformatics or biostatistics researchers. It covers most of the daily demands of biomedical researchers with its 240+ biomedical data visualization functions, involving basic statistics, multi-omics, regression, clustering, dimensionality reduction, meta-analysis, survival analysis, risk modelling, etc. Moreover, to improve the efficiency of using and developing plugins, we introduced some core advantages on the client/server side of the website, such as spreadsheet-based data importing, a cross-platform command-line controller (Hctl), multi-user plumber workers, a JavaScript Object Notation-based plugin system, easy reproduction of data/parameters, results and errors, and a real-time update mode. Meanwhile, using demo/real data sets and benchmark tests, we explored statistical parameters, cancer genomic landscapes, disease risk factors and the performance of the website based on selected native plugins. The statistics of visits and user numbers further reflect the potential impact of this web service on relevant fields. Thus, researchers devoted to life and data sciences will benefit from this emerging and free web service.
How to accurately predict unknown quality-of-service (QoS) data based on observed ones is a hot yet thorny issue in Web service-related applications. Recently, latent factor (LF) models have shown their efficiency in addressing this issue owing to their high accuracy and scalability. An LF model can be improved by identifying user and service neighborhoods based on user and service geographical information. However, such information can be difficult to acquire in a real system owing to considerations of information security, identity privacy, and commercial interests. Besides, existing LF model-based QoS predictors mostly ignore the reliability of the given QoS data, in which noise commonly exists and causes accuracy loss. To address the above issues, this paper proposes a data-characteristic-aware latent factor (DCALF) model to implement highly accurate QoS predictions, where 'data-characteristic-aware' indicates that it implements QoS prediction according to the characteristics of the given QoS data. Its main idea is two-fold: a) it detects the neighborhoods and noises of users and services based on the dense LFs extracted from the original sparse QoS data, and b) it incorporates a density peaks-based clustering method into its modeling process to achieve simultaneous detection of both neighborhoods and noises of QoS data. With such designs, it precisely represents the given QoS data in spite of their sparsity, thereby achieving highly accurate predictions for unknown ones. Experimental results on two QoS datasets generated by real-world Web services demonstrate that the proposed DCALF model outperforms state-of-the-art QoS predictors, making it highly competitive for Web service selection and recommendation.
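The density peaks idea underlying this kind of simultaneous neighborhood/noise detection can be sketched on a toy example. The points (standing in for dense LF vectors), the cutoff `dc`, and the simple tie-breaking rule below are invented for illustration and are not the DCALF implementation.

```python
# Illustrative density peaks sketch: points with high local density AND
# high distance to any denser point are cluster centers (neighborhood
# representatives); points with near-zero density are treated as noise.
import math

points = [(0.10, 0.10), (0.15, 0.12), (0.12, 0.08),   # neighborhood A
          (1.00, 1.00), (1.05, 0.95), (0.98, 1.02),   # neighborhood B
          (3.00, 3.00)]                                # likely noise

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

dc = 0.3  # density cutoff distance (invented)
# local density rho: number of neighbors within dc
rho = [sum(1 for q in points if q is not p and dist(p, q) < dc) for p in points]

# delta: distance to the nearest point of higher density; ties in rho are
# broken by processing points in descending-density order
order = sorted(range(len(points)), key=lambda i: -rho[i])
delta = [0.0] * len(points)
delta[order[0]] = max(dist(points[order[0]], q) for q in points)
for k, i in enumerate(order[1:], 1):
    delta[i] = min(dist(points[i], points[j]) for j in order[:k])

gamma = [r * d for r, d in zip(rho, delta)]               # center score
centers = sorted(range(len(points)), key=lambda i: -gamma[i])[:2]
noise = [i for i in range(len(points)) if rho[i] == 0]    # isolated points
```

On this toy data, one point from each tight cluster emerges as a center, while the isolated outlier is flagged as noise, mirroring the simultaneous detection described in the abstract.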
Alignment-free (AF) sequence comparison is attracting persistent interest driven by data-intensive applications. Hence, many AF procedures have been proposed in recent years, but the lack of a clearly defined benchmarking consensus hampers their performance assessment.
Here, we present a community resource (http://afproject.org) to establish standards for comparing alignment-free approaches across different areas of sequence-based research. We characterize 74 AF methods available in 24 software tools for five research applications, namely, protein sequence classification, gene tree inference, regulatory element detection, genome-based phylogenetic inference, and reconstruction of species trees under horizontal gene transfer and recombination events.
The interactive web service allows researchers to explore the performance of alignment-free tools relevant to their data types and analytical goals. It also allows method developers to assess their own algorithms and compare them with current state-of-the-art tools, accelerating the development of new, more accurate AF solutions.
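A classic family of AF methods of the kind benchmarked in such resources compares word (k-mer) count vectors instead of aligning sequences. The following minimal sketch, not tied to any specific tool, illustrates the idea with a cosine distance:

```python
# Alignment-free comparison via cosine distance between k-mer count
# vectors: identical sequences score ~0, sequences sharing no k-mers
# score 1. Sequences and k are illustrative.
from collections import Counter
import math

def kmer_counts(seq, k=3):
    """Count all overlapping k-mers of seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_distance(a, b, k=3):
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    dot = sum(ca[m] * cb[m] for m in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return 1.0 - dot / (na * nb)
```

Because it never aligns the inputs, this kind of method scales to whole genomes, which is precisely why benchmarking its accuracy against alignment-based references matters.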
Despite many years of improvement, TCP still suffers from unsatisfactory performance. For services dominated by short flows (e.g., web search and e-commerce), TCP suffers from the flow startup problem and cannot fully utilize the available bandwidth in the modern Internet: TCP starts from a conservative and static initial window (IW, 2-4 or 10), while most web flows are too short to converge to the best sending rate before the session ends. For services dominated by long flows (e.g., video streaming and file downloading), a congestion control (CC) scheme that is manually and statically configured might not offer the best performance under the latest network conditions. To address these two challenges, we propose TCP-RL, which uses reinforcement learning (RL) techniques to dynamically configure the IW and the CC scheme in order to improve the performance of TCP flow transmission. Based on the latest network conditions observed at the server side of a web service, TCP-RL dynamically configures a suitable IW for short flows through group-based RL, and a suitable CC scheme for long flows through deep RL. Our extensive experiments show that for short flows, TCP-RL can reduce the average transmission time by about 23%; for long flows, compared with the performance of 14 CC schemes, TCP-RL's performance ranks in the top 5 for about 85% of the 288 given static network conditions, and for about 90% of the conditions its performance drops by less than 12% compared with that of the best-performing CC scheme for the same conditions.
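TCP-RL's group-based RL is considerably more elaborate, but the core idea of learning a good IW from observed flow performance can be sketched as a simple epsilon-greedy bandit over candidate IWs. The candidate set, the toy network model (`simulate_flow_time`), and all parameters below are invented stand-ins.

```python
# Epsilon-greedy bandit sketch: repeatedly pick an IW, observe a flow
# completion "cost", and converge on the IW with the lowest mean cost.
import random

IW_CANDIDATES = [2, 4, 10, 16, 32]  # candidate initial windows (segments)

def simulate_flow_time(iw, bdp=20):
    # Toy assumption: flows finish fastest when IW is near the path's
    # bandwidth-delay product; too small wastes RTTs, too large causes loss.
    return abs(iw - bdp) + random.gauss(0, 0.5)

def epsilon_greedy(rounds=2000, eps=0.1, seed=7):
    random.seed(seed)
    counts = {iw: 0 for iw in IW_CANDIDATES}
    mean_cost = {iw: 0.0 for iw in IW_CANDIDATES}
    for _ in range(rounds):
        # explore until every arm is tried, then mostly exploit the best
        if random.random() < eps or 0 in counts.values():
            iw = random.choice(IW_CANDIDATES)
        else:
            iw = min(mean_cost, key=mean_cost.get)
        cost = simulate_flow_time(iw)
        counts[iw] += 1
        mean_cost[iw] += (cost - mean_cost[iw]) / counts[iw]  # running mean
    return min(mean_cost, key=mean_cost.get)
```

Under the toy model (optimum near 20 segments), the learner settles on 16, the closest candidate, illustrating why a learned IW can beat a static 2-4 or 10 on paths with larger bandwidth-delay products.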
Many studies and green building rating systems have addressed the social and environmental importance of site planning. Tools based on BIM and Location-Based Services (LBSs) have been developed to estimate energy consumption for material transportation and the surrounding density of sites. However, these tools are not programmable and are limited to the phases they serve. This calls for solutions that are flexible enough to run site analysis on social surroundings in the early design stage and compatible with user programming. Integrating visual programming with a web service Application Programming Interface (API) can fulfill the requirements of evaluating publicly available diverse uses of sites with custom code. This study introduces a method for integrating Dynamo BIM and Amap web service APIs to evaluate publicly available diverse uses and transportation. Additionally, use-case implementations are demonstrated, including assessments of Access to Quality Transit and Diverse Uses in LEED v4. Results from the integrated tool are analyzed and validated against survey results, and the analysis indicates that the integration method introduced in this paper is effective. The limitations, potentials, and future developments are also discussed. The integration of Dynamo BIM and web service APIs may be useful for site assessments in the early design stage or even earlier.
•The paper presents a quick method for assessing building surroundings.
•Integration of a Web service API and Dynamo BIM is used for the assessment.
•Validation showed this method can be used as a reference for site assessments.
With the growing number of competing Web services that provide similar functionality, quality-of-service (QoS) prediction is becoming increasingly important for various QoS-aware approaches to Web services. Collaborative filtering (CF), which is among the most successful personalized prediction techniques for recommender systems, has been widely applied to Web service QoS prediction. In addition to using conventional CF techniques, a number of studies extend the CF approach by incorporating additional information about services and users, such as location, time, and other contextual information from service invocations. Other studies address further challenges in QoS prediction, such as adaptability, credibility, and privacy preservation. In this survey, we summarize and analyze the state-of-the-art CF-based QoS prediction approaches for Web services and discuss their features and differences. We also present several Web service QoS datasets that have been used as benchmarks for evaluating prediction accuracy and outline some possible future research directions.
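A minimal user-based CF predictor of the kind this survey covers can be sketched as follows; the tiny response-time matrix (seconds, 0 meaning unobserved) and the neighborhood size are invented for illustration.

```python
# User-based CF for QoS: predict a missing response time from the top-k
# most Pearson-similar users who have observed that service.
import math

RT = [
    [0.3, 1.2, 0.0, 2.0],
    [0.4, 1.1, 0.9, 0.0],
    [2.5, 0.2, 3.0, 0.4],
]

def pearson(u, v):
    """Pearson correlation over co-observed entries of two user rows."""
    common = [(a, b) for a, b in zip(u, v) if a > 0 and b > 0]
    if len(common) < 2:
        return 0.0
    ma = sum(a for a, _ in common) / len(common)
    mb = sum(b for _, b in common) / len(common)
    num = sum((a - ma) * (b - mb) for a, b in common)
    den = math.sqrt(sum((a - ma) ** 2 for a, _ in common)) * \
          math.sqrt(sum((b - mb) ** 2 for _, b in common))
    return num / den if den else 0.0

def predict(user, item, k=2):
    """Mean-centred weighted average over the top-k similar users."""
    mu = [sum(x for x in row if x > 0) / sum(1 for x in row if x > 0)
          for row in RT]
    sims = sorted(((pearson(RT[user], RT[v]), v) for v in range(len(RT))
                   if v != user and RT[v][item] > 0), reverse=True)[:k]
    num = sum(s * (RT[v][item] - mu[v]) for s, v in sims)
    den = sum(abs(s) for s, _ in sims)
    return mu[user] + (num / den if den else 0.0)
```

The mean-centring step is what lets users with systematically slow or fast networks still contribute useful relative information, a recurring design point in the surveyed approaches.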
Generating highly accurate predictions for missing quality-of-service (QoS) data is an important issue. Latent factor (LF)-based QoS predictors have proven effective in dealing with it. However, they rely on first-order solvers that cannot adequately address their target problem, which is inherently bilinear and nonconvex, leaving significant room for accuracy improvement. This paper proposes to incorporate an efficient second-order solver into them to raise their accuracy. To do so, we adopt the principle of Hessian-free optimization and avoid the direct manipulation of a Hessian matrix by employing the efficiently obtainable product between its Gauss-Newton approximation and an arbitrary vector. Thus, second-order information is integrated into them. Experimental results on two industrial QoS datasets indicate that, compared with state-of-the-art predictors, the newly proposed one achieves significantly higher prediction accuracy at the expense of an affordable computational burden. Hence, it is especially suitable for industrial applications requiring highly accurate prediction of unknown QoS data.
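The key trick, forming the Gauss-Newton product G·v = Jᵀ(J·v) from two matrix-free passes and feeding it to conjugate gradient, can be sketched on a toy one-dimensional latent factor model r_ij ≈ p_i·q_j. The data, damping, and CG settings below are invented; this is an illustration of the Hessian-free principle, not the paper's exact algorithm.

```python
# Hessian-free second-order step for a tiny LF model: the Gauss-Newton
# matrix J'J is never formed; only Jv and J'u products are computed.
OBS = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0)]  # (user, service, QoS value)
NU, NI = 2, 2                                   # users, services

def unpack(theta):
    return theta[:NU], theta[NU:]

def residuals(theta):
    p, q = unpack(theta)
    return [p[i] * q[j] - r for i, j, r in OBS]

def jvp(theta, v):
    # (J v)_k for residual k = (i, j): q_j * v_p[i] + p_i * v_q[j]
    p, q = unpack(theta); vp, vq = unpack(v)
    return [q[j] * vp[i] + p[i] * vq[j] for i, j, _ in OBS]

def vjp(theta, u):
    # J' u: scatter each residual's contribution back to p and q
    p, q = unpack(theta)
    out = [0.0] * (NU + NI)
    for k, (i, j, _) in enumerate(OBS):
        out[i] += q[j] * u[k]
        out[NU + j] += p[i] * u[k]
    return out

def gauss_newton_vec(theta, v):
    return vjp(theta, jvp(theta, v))  # G v = J'(J v), matrix-free

def cg(matvec, b, iters=50, tol=1e-10):
    # conjugate gradient for matvec(x) = b
    x = [0.0] * len(b)
    r = b[:]; d = b[:]
    rs = sum(t * t for t in r)
    for _ in range(iters):
        Ad = matvec(d)
        alpha = rs / sum(di * adi for di, adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rs_new = sum(t * t for t in r)
        if rs_new < tol:
            break
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return x

# one damped Gauss-Newton step from a rough initial guess
theta = [1.0, 1.0, 1.0, 1.0]
lam = 1e-3                                  # Levenberg-style damping
g = vjp(theta, residuals(theta))            # gradient J' r
step = cg(lambda v: [gv + lam * vi for gv, vi in
                     zip(gauss_newton_vec(theta, v), v)],
          [-gi for gi in g])
theta = [t + s for t, s in zip(theta, step)]
```

The small damping term handles the scale ambiguity of factor models (p can grow while q shrinks), which makes the undamped Gauss-Newton matrix singular.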
Web services are becoming a major utility for accomplishing complex tasks over the Internet. In practice, end-users usually search for Web service compositions that best meet their quality of service (QoS) requirements (i.e., QoS global constraints). Since the number of services is constantly increasing and their respective QoS is inherently uncertain (due to environmental conditions), the task of selecting optimal compositions becomes more challenging. To tackle this problem, we propose a heuristic based on majority judgment that reduces the search space. In addition, we perform a constraint programming search to select the Top-K compositions that fulfill the QoS global constraints. The experimental results demonstrate the high performance of our approach.
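The Top-K selection problem itself can be illustrated with a brute-force stand-in: the paper's majority-judgment pruning and constraint programming search are replaced here by exhaustive enumeration with constraint filtering, and all QoS numbers, constraints, and utility weights are invented.

```python
# Illustrative Top-K QoS-aware selection: enumerate one candidate service
# per abstract task, keep combinations meeting the global constraints,
# and rank the survivors by a simple utility.
from itertools import product

# candidate services per task: (response_time_ms, cost, availability)
TASKS = {
    "payment":  [(120, 0.05, 0.99), (80, 0.09, 0.97), (200, 0.02, 0.999)],
    "shipping": [(300, 0.10, 0.98), (150, 0.20, 0.95)],
    "notify":   [(50, 0.01, 0.99), (40, 0.03, 0.90)],
}

# global constraints for a sequential composition
MAX_RT, MAX_COST, MIN_AVAIL = 500, 0.25, 0.90

def aggregate(combo):
    rt = sum(s[0] for s in combo)       # response times add up in sequence
    cost = sum(s[1] for s in combo)     # costs add up
    avail = 1.0
    for s in combo:
        avail *= s[2]                   # availabilities multiply
    return rt, cost, avail

def top_k(k=3):
    feasible = []
    for combo in product(*TASKS.values()):
        rt, cost, avail = aggregate(combo)
        if rt <= MAX_RT and cost <= MAX_COST and avail >= MIN_AVAIL:
            # invented utility: prefer fast, cheap, reliable compositions
            utility = avail - rt / 1000 - cost
            feasible.append((utility, combo))
    return sorted(feasible, reverse=True)[:k]
```

With only 12 combinations this enumeration is trivial; the abstract's point is that real repositories make the product space explode, which is what motivates pruning heuristics and constraint programming instead of exhaustive search.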
•Proposing a novel deep learning based hybrid service recommendation approach.
•Capturing non-linear interactions between mashups and their component services.
•Evaluating pointwise and pairwise loss functions in the recommendation task.
•The approach outperforms state-of-the-art methods on a real-world dataset.
With the rapid development of service-oriented computing and cloud computing, an increasing number of Web services have been published on the Internet, which makes it difficult to manually select relevant Web services to satisfy complex user requirements. Many machine learning methods, especially matrix factorization-based collaborative filtering models, have been widely employed in Web service recommendation. However, as a linear model of latent factors, matrix factorization struggles to capture the complex interactions between Web applications (or mashups) and their component services within an extremely sparse interaction matrix, which results in poor service recommendation performance. To address this problem, we propose a novel deep learning-based hybrid approach for Web service recommendation that combines collaborative filtering and textual content. The invocation interactions between mashups and services, as well as their functionalities, are seamlessly integrated into a deep neural network, which can characterize the complex relations between mashups and services. Experiments conducted on a real-world Web service dataset demonstrate that our approach achieves better recommendation performance than several state-of-the-art methods, indicating its effectiveness in service recommendation.
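The hybrid idea, fusing a collaborative signal with textual content, can be sketched without the deep network: below, a plain weighted blend of co-invocation similarity and bag-of-words cosine stands in for the DNN, and all mashups, services, and descriptions are invented.

```python
# Hybrid recommendation sketch: collaborative part = Jaccard similarity of
# the mashup sets that invoke each service; content part = cosine over
# description word counts. A never-invoked service can still be
# recommended through its description (cold start).
import math
from collections import Counter

INVOKES = {
    "travel-mashup": {"maps", "weather"},
    "photo-mashup": {"maps", "photos"},
}

DESCRIPTIONS = {
    "maps": "geocoding map tiles",
    "weather": "weather forecast by city",
    "photos": "photo hosting sharing",
    "forecast-pro": "detailed city weather forecast",  # never invoked yet
}

def invoked_by(service):
    return {m for m, svcs in INVOKES.items() if service in svcs}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def text_sim(d1, d2):
    c1, c2 = Counter(d1.split()), Counter(d2.split())
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def score(mashup, candidate, alpha=0.5):
    """Blend of collaborative and content similarity to the mashup's services."""
    used = INVOKES[mashup]
    cf = max(jaccard(invoked_by(candidate), invoked_by(s)) for s in used)
    content = max(text_sim(DESCRIPTIONS[candidate], DESCRIPTIONS[s])
                  for s in used)
    return alpha * cf + (1 - alpha) * content

def recommend(mashup):
    candidates = set(DESCRIPTIONS) - INVOKES[mashup]
    return max(candidates, key=lambda c: score(mashup, c))
```

In the deep approach described above, this fixed linear blend is replaced by a neural network that learns non-linear interactions between the two signals, which is exactly what plain matrix factorization cannot do.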