ABSTRACT
Many scientific investigations of photometric galaxy surveys require redshift estimates, whose uncertainty properties are best encapsulated by photometric redshift (photo-z) posterior probability density functions (PDFs). Photo-z PDF estimation methodologies abound, producing discrepant results with no consensus on a preferred approach. We present the results of a comprehensive experiment comparing 12 photo-z algorithms applied to mock data produced for The Rubin Observatory Legacy Survey of Space and Time Dark Energy Science Collaboration. By supplying perfect prior information, in the form of the complete template library and a representative training set, as inputs to each code, we demonstrate the impact of the assumptions underlying each technique on the output photo-z PDFs. In the absence of a notion of true, unbiased photo-z PDFs, we evaluate and interpret multiple metrics of the ensemble properties of the derived photo-z PDFs as well as traditional reductions to photo-z point estimates. We report systematic biases and overall over/under-breadth in the photo-z PDFs of many popular codes, which may indicate avenues for improvement in the algorithms or implementations. Furthermore, we draw attention to the limitations of established metrics for assessing photo-z PDF accuracy; although we identify the conditional density estimate (CDE) loss as a promising metric of photo-z PDF performance in the case where true redshifts are available but true photo-z PDFs are not, we emphasize the need for science-specific performance metrics.
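The conditional density estimate (CDE) loss mentioned above can be estimated using only true redshifts, without access to true PDFs. A minimal sketch, assuming each PDF is tabulated on a uniform redshift grid (the function name, array layout, and nearest-grid-point lookup are illustrative assumptions, not the collaboration's implementation):

```python
import numpy as np

def cde_loss(pdfs, z_grid, z_true):
    """Estimate the CDE loss (up to an additive constant) for an
    ensemble of photo-z PDFs evaluated on a uniform redshift grid.

    pdfs   : (n_gal, n_z) array; row i is the PDF of galaxy i on z_grid
    z_grid : (n_z,) array of uniformly spaced redshift grid points
    z_true : (n_gal,) array of true (e.g. spectroscopic) redshifts
    """
    dz = z_grid[1] - z_grid[0]
    # First term: mean integral of the squared PDF (Riemann sum).
    term1 = np.mean(np.sum(pdfs ** 2, axis=1) * dz)
    # Second term: mean PDF value at each galaxy's true redshift,
    # read off at the nearest grid point.
    idx = np.clip(np.rint((z_true - z_grid[0]) / dz).astype(int),
                  0, len(z_grid) - 1)
    term2 = np.mean(pdfs[np.arange(len(z_true)), idx])
    return term1 - 2.0 * term2
```

Lower values indicate PDFs that concentrate probability near the true redshifts; an ensemble of accurate PDFs scores lower than one whose PDFs are systematically shifted or too broad.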
Repeatability, isolation, and accuracy are the most desirable properties when testing wireless devices, yet traditional drive tests cannot guarantee them. Channel emulators play a major role in filling these gaps in testing. In this paper we present an efficient channel emulator that improves on existing commercial products in cost, remote access, support for complex network topologies, and scalability. We present the hardware and software architecture of our channel emulator and describe the experiments we conducted to evaluate its performance against a commercial channel emulator.
Cancer stem cells (CSCs), a small subpopulation within the tumor bulk, are believed to initiate tumor formation and to be responsible for resistance to cancer therapy. The proliferation and differentiation of CSCs produce heterogeneity within a tumor, which increases the chance of tumor survival and invasion. Many signaling pathways are abnormally activated or repressed in CSCs. Understanding these pathways and the metabolism of CSCs may aid targeted therapy in drug-resistant tumors. The PI3K/Akt/mTOR pathway is one of the major signaling pathways in CSCs, involved in the maintenance of stemness, proliferation, differentiation, epithelial-to-mesenchymal transition (EMT), migration, and autophagy. Thus, suppressing the PI3K/Akt/mTOR pathway with inhibitors may be a promising strategy for targeted cancer therapy. Although the pathway is well recognized and has been reviewed in bulk tumors, its functions in CSCs have received less attention. Here, we review the PI3K/Akt/mTOR signaling pathway and its functions in CSCs, and address its potential therapeutic applications in drug-resistant tumors.
Background
Medical literature searches provide critical information for clinicians. However, the best strategy for identifying relevant high‐quality literature is unknown.
Objectives
We compared search results using PubMed and Google Scholar on four clinical questions and analysed these results with respect to article relevance and quality.
Methods
Abstracts from the first 20 citations for each search were classified into three relevance categories. We used the weighted kappa statistic to analyse reviewer agreement and nonparametric rank tests to compare the number of citations for each article and the corresponding journals' impact factors.
Results
Reviewers ranked 67.6% of PubMed articles and 80% of Google Scholar articles as at least possibly relevant (P = 0.116) with high agreement (all kappa P‐values < 0.01). Google Scholar articles had a higher median number of citations (34 vs. 1.5, P < 0.0001) and came from higher impact factor journals (5.17 vs. 3.55, P = 0.036).
Conclusions
PubMed searches and Google Scholar searches often identify different articles. In this study, Google Scholar articles were more likely to be classified as relevant, had higher numbers of citations and were published in higher impact factor journals. The identification of frequently cited articles using Google Scholar for searches probably has value for initial literature searches.
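The weighted kappa statistic used above to quantify reviewer agreement penalises disagreements by their distance on the ordinal relevance scale. A minimal sketch with linear (or optional quadratic) weights; the function and argument names are illustrative:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Cohen's weighted kappa for two raters' ordinal labels in 0..n_cat-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # Observed agreement matrix, as proportions.
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= len(r1)
    # Expected matrix under independence, from each rater's marginals.
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    # Disagreement weights grow with distance between categories.
    i, j = np.indices((n_cat, n_cat))
    d = np.abs(i - j) / (n_cat - 1)
    W = d if weights == "linear" else d ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement gives kappa = 1; agreement no better than chance gives 0, and systematic disagreement goes negative.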
In several sensing applications, the parameter being sensed exhibits high spatial correlation. For example, if the temperature of a region is being monitored, there are distinct hot and cold spots: the area close to the hot spots is usually warmer than average, with a temperature gradient between the hot and cold spots. We exploit this correlation of sensor data to form a forest of logical trees, with the trees collectively spanning all the sensor nodes. The root of a tree corresponds to a sensor reporting a local peak value. The tree nodes represent the value gradient: each node's sensed value is smaller than that of its parent and greater than those of its children. GrAFS provides a mechanism to maintain some information at the local peaks and the sink. Using this information, the sink can answer several queries either directly or by probing the region of the sensor field that holds the answer. Thus, queries can be answered in a time- and/or bandwidth-efficient manner. The GrAFS approach to data aggregation can easily adapt to changes in the spatial distribution of sensed values, and can also cope with message losses and sensor node failures. Implementation on MICA2 motes and simulation experiments conducted using TinyOS quantify the performance of GrAFS.
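The forest of gradient trees described above can be sketched as a steepest-ascent parent assignment: each node adopts its highest-valued higher neighbour as its parent, and nodes with no higher neighbour (local peaks) become roots. The function name and the dict-based topology are illustrative assumptions, not the GrAFS implementation:

```python
def build_gradient_forest(values, neighbors):
    """Build a forest of gradient trees over a sensor field.

    values    : dict node -> sensed value at that node
    neighbors : dict node -> list of neighbouring node ids
    Returns a parent map; roots (local peaks) map to None.
    """
    parent = {}
    for n, v in values.items():
        higher = [m for m in neighbors[n] if values[m] > v]
        # Steepest-ascent choice: follow the largest higher neighbour;
        # a node with no higher neighbour is a local peak, i.e. a root.
        parent[n] = max(higher, key=lambda m: values[m]) if higher else None
    return parent
```

On a four-node line a(5)-b(9)-c(7)-d(4), node b is the sole local peak (root), with a and c as its children and d below c, so every path from a leaf to its root climbs the value gradient.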
Most researchers conduct wireless networking experiments in their laboratory or similar indoor environments. Such environments are veritable RF jungles, especially in the ISM bands. In this paper we examine and test several common explicit and implicit assumptions that researchers tend to make about the wireless environment. Although these assumptions are acknowledged by most researchers, the extent of their impact is often underestimated. We find that because the environment is always in flux, it is almost impossible to reproduce the results of an experiment; hence, there is a high risk of misinterpreting the data obtained from such experiments. Through this paper we caution experimenters against such risky assumptions when they venture into the RF jungle. Following a successful proof-of-concept experiment, we advocate the use of wireless networking testbeds that give experimenters better control over the RF environment by using coaxial cables, programmable attenuators, and power dividers/combiners.
Application-specific data aggregation can play a significant role in the energy-efficient operation of wireless sensor networks. Existing aggregation techniques rely heavily on the routing protocol to build shortest paths that route node measurements to the base station, and they are limited in the types of queries they support. We propose an aggregation scheme that exploits the inherent information gradients present in the network. A query is directed to the source of the information, resulting in better load sharing in the network. Through a generic query algorithm we support a variety of queries, ranging from the simple maximum, minimum, or average of the sensor node readings to more complex quantile queries such as the k highest values or the k-th highest value. The query algorithm shifts the computation to the querying agent, thus eliminating any in-network aggregation.
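Once the readings of interest reach the querying agent, the quantile queries above reduce to simple local computations. A minimal sketch (the function names are illustrative, and the routing and probing machinery of the scheme is omitted):

```python
import heapq

def k_highest(readings, k):
    """Return the k highest readings, largest first."""
    return heapq.nlargest(k, readings)

def kth_highest(readings, k):
    """Return the k-th highest reading (k = 1 gives the maximum)."""
    return heapq.nlargest(k, readings)[-1]
```

Using a heap keeps the agent-side cost at O(n log k) over n collected readings, rather than fully sorting them.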
This work focuses on the problem of compressing the Web graph by eliminating some of the edges in its link structure. An algorithm is used to divide the task so that it can be executed on parallel processors. Run on a test bed of generated Web graphs, the algorithm tangibly improved the compression ratio of both Huffman-coding schemes and Adler and Mitzenmacher's find-reference algorithm; in the find-reference case, the improvement was up to 90%.