While most of the circular economy (CE) research is engrossed in larger organizations and developed countries, there is hardly any research exploring the readiness of small- and medium-sized enterprises (SMEs) in developing countries toward the implementation of CE practices. To fill this knowledge gap, in this article, we aim to identify and evaluate the key readiness factors (RFs) that are vital for CE implementation. Initially, 15 important RFs are identified via an extensive literature review and experts' consultation, which are prioritized using the Decision-Making Trial and Evaluation Laboratory method. The proposed framework is validated with a real-world case study involving four India-based SMEs. The results reveal that "willingness of top management to implement CE practices" is the most important RF. Six RFs are classified as causal: "availing financial assistance from government and external agencies," "introducing new technology and its compatibility with existing technology," "willingness of top management to implement CE practices," "investment in infrastructural development," "pressures from competitors, business partners, and regulatory bodies to implement CE practices," and "awareness among the customers about CE benefits." The findings of this research may help managers assess CE readiness and prepare business strategies for effective implementation of CE practices.
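The prioritization step above relies on the DEMATEL (Decision-Making Trial and Evaluation Laboratory) method, which turns expert ratings of pairwise influence between factors into "cause" and "effect" groups. The following is a minimal sketch of the standard DEMATEL computation on a hypothetical 4-factor influence matrix; the factor ratings are illustrative, not the paper's actual expert data, and the normalization shown (by the largest row sum) is one common variant.

```python
import numpy as np

# Hypothetical averaged expert ratings (NOT the paper's data):
# A[i][j] = how strongly readiness factor i influences factor j, on a 0-4 scale.
A = np.array([
    [0, 3, 2, 4],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [3, 2, 1, 0],
], dtype=float)

# Normalize by the largest row sum so the Neumann series (I - D)^-1 converges.
D = A / A.sum(axis=1).max()

# Total-relation matrix T = D (I - D)^-1 captures direct plus indirect influence.
T = D @ np.linalg.inv(np.eye(len(A)) - D)

R = T.sum(axis=1)   # total influence each factor exerts on the others
C = T.sum(axis=0)   # total influence each factor receives
prominence = R + C  # overall importance of the factor
relation = R - C    # > 0: causal factor (driver); < 0: effect factor (receiver)

for i, (p, r) in enumerate(zip(prominence, relation)):
    group = "cause" if r > 0 else "effect"
    print(f"RF{i+1}: prominence={p:.2f}, relation={r:+.2f} ({group})")
```

Factors with the largest R − C are the causal drivers, which is how the six causal RFs listed in the abstract would be singled out.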
Cache-aided wireless device-to-device (D2D) networks allow significant throughput increase, depending on the concentration of the popularity distribution of files. Many studies assume that all users have the same preference distribution; however, this may not be true in practice. This work investigates whether and how the information about individual preferences can benefit cache-aided D2D networks. We examine a clustered network and derive a network utility that considers both the user distribution and channel fading effects into the analysis. We also formulate a utility maximization problem for designing caching policies. This maximization problem can be applied to optimize several important quantities, including throughput, energy efficiency (EE), cost, and hit-rate, and to solve different tradeoff problems. We provide a general approach that can solve the proposed problem under the assumption that users coordinate, then prove that the proposed approach can obtain the stationary point under a mild assumption. Using simulations of practical setups, we show that performance can improve significantly with proper exploitation of individual preferences. We also show that different types of tradeoffs exist between different performance metrics and that they can be managed through caching policy and cooperation distance designs.
The advent of large-scale bibliographic databases and powerful prediction algorithms led to calls for data-driven approaches for targeting scarce funds at researchers with high predicted future scientific impact. The potential side-effects and fairness implications of such approaches are unknown, however. Using a large-scale bibliographic data set of N = 111,156 Computer Science researchers active from 1993 to 2016, I build and evaluate a realistic scientific impact prediction model. Given the persistent under-representation of women in Computer Science, the model is audited for disparate impact based on gender. Random forests and Gradient Boosting Machines are used to predict researchers' h-index in 2010 from their bibliographic profiles in 2005. Based on model predictions, it is determined whether a researcher will become a high-performer with an h-index in the top 25% of the discipline-specific h-index distribution. The models predict the future h-index with an accuracy of R² = 0.875 and correctly classify 91.0% of researchers as high-performers and low-performers. Overall accuracy does not vary strongly across researcher gender. Nevertheless, there is indication of disparate impact against women. The models under-estimate the true h-index of female researchers more strongly than the h-index of male researchers. Further, women are 8.6% less likely than men to be predicted to become high-performers. In practice, hiring, tenure, and funding decisions based on model predictions risk perpetuating the under-representation of women in Computer Science.
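The prediction pipeline described above (regress the future h-index with gradient boosting, then threshold at the top quartile to label high-performers) can be sketched as follows. The features and data below are synthetic stand-ins, not the paper's bibliographic profiles, and the model hyperparameters are scikit-learn defaults rather than the audited model's settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for 2005 bibliographic profiles (NOT the paper's data):
# columns might represent papers, citations, coauthors, career age, current h-index.
n = 5000
X = rng.gamma(shape=2.0, scale=5.0, size=(n, 5))
# Hypothetical future h-index: driven mostly by current h-index and citations.
y = 0.8 * X[:, 4] + 0.05 * X[:, 1] + rng.normal(0.0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R^2 = {r2_score(y_te, pred):.3f}")

# Binarize: is the researcher predicted to land in the top 25% of the h-index distribution?
cut = np.quantile(y_tr, 0.75)
true_high = y_te >= cut
pred_high = pred >= cut
print(f"high-performer classification accuracy = {(true_high == pred_high).mean():.3f}")
```

A fairness audit like the paper's would additionally compare the residuals (y_te − pred) and the predicted high-performer rates across gender groups.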
The Philological Quarterly's annual bibliographies of modern studies in English neoclassical literature, published originally from 1961 to 1970, are reproduced in two volumes. Readers will find the same features that distinguished earlier compilations in the series: inclusive listing of significant works published in each year (including sections on the historical and cultural background as well as literature), authoritative reviews of important works, critical comments, and a full index that is in itself an indispensable reference tool.
Originally published in 1972.
The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These paperback editions preserve the original texts of these important books while presenting them in durable paperback editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.
Author names in bibliographic databases often suffer from ambiguity owing to the same author appearing under different names and multiple authors possessing similar names. This creates difficulty in associating a scholarly work with the person who wrote it, thereby introducing inaccuracy in credit attribution, bibliometric analysis, search-by-author in a digital library, and expert discovery. A plethora of techniques for disambiguation of author names has been proposed in the literature. In this article, we focus on the research efforts targeted to disambiguate author names specifically in the PubMed bibliographic database. We believe this concentrated review will be useful to the research community because it discusses techniques applied to a very large real database that is actively used worldwide. We make a comprehensive survey of the existing author name disambiguation (AND) approaches that have been applied to the PubMed database: we organise the approaches into a taxonomy; describe the major characteristics of each approach including its performance, strengths, and limitations; and perform a comparative analysis of them. We also identify the datasets from PubMed that are publicly available for researchers to evaluate AND algorithms. Finally, we outline a few directions for future work.
Sentiment analysis, also known as opinion mining, reveals people's opinions and emotions about certain products or services. The main problem in sentiment analysis is sentiment polarity categorization: determining whether a review is positive, negative, or neutral. Previous studies proposed different techniques, but some research gaps remain: (i) some studies include only three sentiment classes (positive, neutral, and negative), but none considered more than three classes; (ii) sentiment polarity features were considered on an individual basis, but never on both an individual and a combined basis; (iii) no previous technique considered five sentiment classes with three sentiment polarity features (verb, adverb, and adjective) and their combinations. In this study, we propose a sentiment polarity categorization technique for a large data set of online reviews of Instant Videos. A comprehensive data set of five hundred thousand online reviews is used in our research. There are five classes (Strongly Negative, Negative, Neutral, Positive, and Strongly Positive). We also consider three polarity features, verb, adverb, and adjective, and their combinations with their different senses in review-level categorization. Our experiments for review-level categorization show promising outcomes, with an accuracy of 81 percent, which is 3 percent better than many previous techniques, whose average accuracy is 78 percent.
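The core idea above, scoring a review from verb, adverb, and adjective features (alone or in combination) and mapping the result onto five polarity classes, can be sketched as follows. The lexicon, scores, and class thresholds here are entirely hypothetical toy values, not the paper's actual feature extraction or corpus.

```python
# Toy sketch of review-level five-class polarity scoring (hypothetical
# lexicon and thresholds; the paper's actual method is not reproduced).
LEXICON = {
    # word: (part of speech, polarity score)
    "love":       ("verb", 2), "hate":   ("verb", -2),
    "recommend":  ("verb", 1), "avoid":  ("verb", -1),
    "great":      ("adj", 2),  "awful":  ("adj", -2),
    "good":       ("adj", 1),  "boring": ("adj", -1),
    "absolutely": ("adv", 1),  "barely": ("adv", -1),
}

CLASSES = ["Strongly Negative", "Negative", "Neutral",
           "Positive", "Strongly Positive"]

def classify(review, use_pos=("verb", "adv", "adj")):
    """Score a review using only the selected POS features, then map
    the summed score onto the five polarity classes."""
    score = sum(s for w in review.lower().split()
                for pos, s in [LEXICON.get(w, ("", 0))] if pos in use_pos)
    if score <= -2: return CLASSES[0]
    if score == -1: return CLASSES[1]
    if score == 0:  return CLASSES[2]
    if score == 1:  return CLASSES[3]
    return CLASSES[4]

print(classify("absolutely love this great show"))           # all three POS combined
print(classify("boring and barely good", use_pos=("adj",)))  # adjectives only
```

Restricting `use_pos` to a single part of speech versus the full combination is what lets one compare individual features against combined features, as the abstract's gap (ii) and (iii) describe.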
First Published in 2005. Routledge is an imprint of Taylor & Francis, an informa company.
Introduction
PART ONE: SLAVE TRADE
1. Collections of evidence
2. General accounts
3. West Africa
4. Sudan
5. East Africa
6. Slaving Voyages
7. Medical Conditions
8. Laws and official documents
9. Economic controversy
10. Economic history
11. Biographies of slaves
12. Ethnic origins of slaves
PART TWO: ABOLITION AND ITS SUPPRESSION
13. Abolition controversy
14. Sermons
15. Legislative debates and speeches
16. Suppression controversy
17. Abolition societies and conferences
18. Laws and official documents
19. Naval blockade
20. Trials for illegal slave trading
21. Military action
22. History of the abolition movement
23. History of the abolition literature
24. Legal history
25. Biographies of abolitionists
26. Imaginative literature
The literature on coronaviruses counts more than 300,000 publications. Finding relevant papers for arbitrary queries is essential to discovering helpful knowledge. Current best information retrieval (IR) systems use deep learning approaches and need supervised training sets with labeled data, i.e., the queries and their corresponding relevant papers must be known a priori. Creating such labeled datasets is time-expensive and requires prominent experts' efforts, resources insufficiently available under pandemic time pressure. We present a new self-supervised solution, called SUBLIMER, that does not require labels to learn to search corpora of scientific papers for those most relevant to arbitrary queries. SUBLIMER is a novel efficient IR engine trained on the unsupervised COVID-19 Open Research Dataset (CORD19) using deep metric learning. The core point of our self-supervised approach is that it uses no labels but exploits the bibliography citations between papers to create a latent space where spatial proximity is a metric of semantic similarity; for this reason, it can also be applied to other domains of paper corpora. SUBLIMER, despite being self-supervised, outperforms the Precision@5 (P@5) and Bpref of the state-of-the-art competitors on CORD19, which, differently from our approach, require both labeled datasets and a number of trainable parameters that is an order of magnitude higher than ours.
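The citation-as-supervision idea above, treating a paper and the papers it cites as semantically close, is typically trained with a metric-learning objective such as a triplet loss. The following is a minimal numeric sketch of that loss on hypothetical embedding vectors; SUBLIMER's actual encoder architecture and training objective are not reproduced here.

```python
import numpy as np

# Hypothetical paper embeddings: papers that cite each other should end up
# close in the latent space, unrelated papers far apart.
rng = np.random.default_rng(1)
anchor   = rng.normal(size=8)                  # embedding of a paper
positive = anchor + 0.1 * rng.normal(size=8)   # embedding of a paper it cites
negative = rng.normal(size=8)                  # embedding of an unrelated paper

def triplet_loss(a, p, n, margin=1.0):
    """Hinge loss: zero once the cited paper is closer than the
    unrelated paper by at least `margin`."""
    d_pos = np.linalg.norm(a - p)
    d_neg = np.linalg.norm(a - n)
    return max(0.0, d_pos - d_neg + margin)

print(f"loss = {triplet_loss(anchor, positive, negative):.3f}")
```

Minimizing this loss over many citation-derived triplets shapes the latent space so that spatial proximity tracks semantic similarity, without any query-relevance labels.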