World trade in food grains (wheat, rice, and wheat flour) more than doubled from 1962 to 1983, while the number of countries participating in world trade increased 85 percent. Food grain consumption has increased faster than production in most countries, so more countries entered world trade to meet their food needs, thereby increasing the volume and changing the patterns of world trade in food grains. This report is based primarily on United Nations world trade data and the foreign trade statistics of countries not reporting to the United Nations.
The data from individual observational studies included in meta-analyses of drug effects are collected either through ad hoc methods (i.e., "primary data") or from databases that were established for non-research purposes (i.e., "secondary data"). The use of secondary sources may be prone to measurement bias and confounding due to over-the-counter and out-of-pocket drug consumption, or non-adherence to treatment. In fact, it has been noted that failing to consider the origin of the data as a potential cause of heterogeneity may change the conclusions of a meta-analysis. We aimed to assess to what extent the origin of data is explored as a source of heterogeneity in meta-analyses of observational studies.
We searched for meta-analyses of drug effects published between 2012 and 2018 in general and internal medicine journals with an impact factor > 15. We evaluated, when reported, the type of data source (primary vs secondary) used in the individual observational studies included in each meta-analysis, and the exposure- and outcome-related variables included in sensitivity, subgroup, or meta-regression analyses.
We found 217 articles, 23 of which fulfilled our eligibility criteria. Eight meta-analyses (8/23, 34.8%) reported the source of data. Three meta-analyses (3/23, 13.0%) included the method of outcome assessment as a variable in the analysis of heterogeneity, and only one compared and discussed the results considering the different sources of data (primary vs secondary).
In meta-analyses of drug effects published in seven high impact general medicine journals, the origin of the data, either primary or secondary, is underexplored as a source of heterogeneity.
Mobile devices and Internet-based applications are producing a significant volume of data that may be used to, at least partially, replace some of the hardware necessary to sense traffic systems. However, there are several issues related to such an agenda: the data are heterogeneous and unstructured, may appear in natural language, are normally not geolocated, and present balancing issues. This means that all these issues must be treated via software, especially using machine learning techniques. In this paper, a methodology is proposed that is based on: extraction and processing of relevant information from social media; determination of its context; explanation of transportation-related phenomena in terms of their contexts; and prediction of traffic conditions. The methodology was applied to a case study using data from the city of Porto Alegre, Brazil. The results show that it was possible to associate traffic-related and context data to predict the traffic conditions that were originally reported in a Twitter account.
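The pipeline outlined in this abstract (extract traffic-related posts, attach context, predict conditions) can be illustrated with a minimal sketch. All names, the keyword rules, and the rule-based predictor below are illustrative assumptions, not the authors' actual implementation, which the abstract describes only at a high level.

```python
# Hypothetical sketch of the described pipeline: filter traffic-related
# posts, attach context variables, and label the traffic condition.
from dataclasses import dataclass

TRAFFIC_KEYWORDS = {"congestion", "accident", "jam", "slow"}  # assumed lexicon

@dataclass
class Post:
    text: str
    hour: int         # context: time of day
    is_raining: bool  # context: weather

def is_traffic_related(post: Post) -> bool:
    """Step 1: keep only posts mentioning traffic terms."""
    words = set(post.text.lower().split())
    return bool(words & TRAFFIC_KEYWORDS)

def predict_condition(post: Post) -> str:
    """Steps 2-4: combine the text signal with context to label conditions."""
    if not is_traffic_related(post):
        return "normal"
    # Assumed rule: rush hour or rain aggravates a reported incident.
    if 17 <= post.hour < 20 or post.is_raining:
        return "heavy"
    return "moderate"

print(predict_condition(Post("accident on the avenue", hour=18, is_raining=False)))
# -> heavy
```

In a realistic setting the keyword filter and the rule-based predictor would be replaced by trained classifiers, but the data flow (text signal plus context, then prediction) stays the same.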
The development of modern technologies and the accessibility of data on space and the natural environment have led to their increasing use for socio-economic purposes. Data users believe that these systems reflect the reality in the field. This applies in particular to databases used for construction investment projects or as the basis for calculations of financial obligations, e.g., taxes. The Land and Property Register (LPR), which is part of the Land Administration System, serves a number of economic and legal purposes. Although it is continuously updated, this geo-system often contains information whose quality falls short of the technical potential of modern data acquisition methods. The authors propose a two-step analysis of data contained in the LPR. The first step identified the sources of discrepancies between data from the LPR and the reality in the field. The second step emphasises the importance of the factors under analysis, which include a plot's geometric parameters, its geo-location features (associated with elements of the natural environment), and factors associated with the supplementary data acquisition methods. The results show that data of sufficient quality play the main role in achieving compatibility between the data in the Land and Property Register and reality. Studies conducted so far have dealt with data on a global scale, were based on in situ data, and focused on the specific values of each plot under analysis.
The main function of the Internet of Things is to collect and transmit data. At present, data transmission in the Internet of Things lacks an effective trust attestation mechanism and a trust traceability mechanism for data sources. To solve these problems, a trust attestation mechanism for sensing-layer nodes is presented. First, a trusted group is established, and a node that is going to join the group must attest its identity and key attributes to the higher-level node. Then, the dynamic trust measurement value of the node is obtained by measuring the node's data transmission behavior. Finally, the node encapsulates the key attributes and trust measurement value and uses a short message group signature to attest its trustworthiness to the challenger. This mechanism can measure the data sending and receiving behaviors of sensing nodes and track the data source without exposing the nodes' private information, and sensing nodes can be traced effectively. The trust measurement and verification scheme for sensing nodes is applicable to the Internet of Things, and the simulation experiment shows that the trust attestation mechanism is flexible, practical, and efficient. It can also accurately and quickly identify malicious nodes, and its impact on system performance is negligible.
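The attestation flow described in this abstract (measure a node's transmission behavior, bundle it with key attributes, sign, and let a challenger verify) can be sketched as follows. The group signature is replaced here by an HMAC shared with the group manager, and the trust formula and threshold are simplifying assumptions, not the paper's actual scheme.

```python
# Hypothetical sketch of the attestation flow: dynamic trust measurement,
# encapsulation of attributes + trust value, signing, and verification.
import hashlib
import hmac
import json

GROUP_KEY = b"demo-group-key"  # assumed secret shared within the trusted group

def measure_trust(sent: int, received: int, dropped: int) -> float:
    """Dynamic trust value derived from a node's data-transmission behavior."""
    total = sent + received + dropped
    return 1.0 if total == 0 else (sent + received) / total

def attest(node_id: str, key_attrs: dict, trust: float) -> dict:
    """Encapsulate key attributes and the trust value, then sign the bundle."""
    payload = json.dumps({"id": node_id, "attrs": key_attrs,
                          "trust": round(trust, 3)}, sort_keys=True)
    tag = hmac.new(GROUP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify(evidence: dict, min_trust: float = 0.8) -> bool:
    """Challenger checks the signature, then the trust threshold."""
    expected = hmac.new(GROUP_KEY, evidence["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, evidence["tag"]):
        return False  # tampered evidence -> untrusted
    return json.loads(evidence["payload"])["trust"] >= min_trust

t = measure_trust(sent=90, received=8, dropped=2)   # well-behaved node
print(verify(attest("node-17", {"fw": "1.2"}, t)))  # -> True
```

An actual short group signature would additionally hide which group member signed, which the HMAC stand-in does not provide; the flow of measure, encapsulate, sign, verify is the same.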
General practice computers have been widely used in the United Kingdom for the last 10 years, and there are over 30 different systems currently available. The commercially available databases are based on two of the most widely used systems, VAMP Medical and Meditel. These databases provide both longitudinal and cross-sectional data on between 1.8 and 4 million patients. Despite their availability, only limited use has been made of them for epidemiological and health service research purposes. They are a unique source of population-based information and deserve to be better recognized. The advantages of general practice databases include the fact that they are population based, with excellent prescribing data linked to diagnosis, age, and gender. The problems are that their primary purpose is patient care and that the database population is constantly changing, in addition to the usual problems of bias and confounding that occur in any observational study. The barriers to the use of general practice databases include the cost of access, the size of the databases, and the fact that they are not structured in a way that easily allows analysis. Proper utilization of these databases requires powerful computers, staff proficient in writing computer programs to facilitate analysis, and epidemiologists skilled in their use. If these structural problems are overcome, the databases are an invaluable source of data for epidemiological studies.