Stirred ball mills are frequently used for ultrafine and nano-grinding in the food, pharmaceutical and chemical industries, but only a few investigations have been published on empirical or scale-up modeling of stirred ball mills. Experiments were carried out with a laboratory-scale stirred ball mill. During the experiments, the main technical parameters, such as stirrer speed, grinding media filling ratio, grinding time and solid mass concentration, were systematically varied. The particle size distribution of the mill products can be well estimated by empirical functions, so an empirical model was prepared for the laboratory mill. The relation between grinding fineness, grinding time and specific grinding work was determined for several materials, such as pumice, andesite, limestone and tailings from the ore mining industry. The power consumption of the stirred ball mill for scale-up was determined by a method based on dimensional analysis. A new scale-up model is also presented, by which industrial-size stirred ball mills can be designed on the basis of laboratory measurements.
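The dimensional-analysis scale-up idea in the abstract above can be sketched generically. This is an illustration, not the authors' model: it assumes a constant dimensionless power (Newton) number Np = P / (ρ·n³·d⁵) between geometrically similar mills, and all numerical values are invented.

```python
# Hypothetical sketch of dimensional-analysis power scale-up for a stirred mill.
# Assumption (not from the paper): the power number Np is the same for
# geometrically similar lab and industrial mills, so a lab measurement of P
# fixes Np, and Np predicts the industrial power draw.

def power_number(P, rho, n, d):
    """Dimensionless power (Newton) number from measured power P [W],
    slurry density rho [kg/m^3], stirrer speed n [1/s], stirrer diameter d [m]."""
    return P / (rho * n**3 * d**5)

def scaled_power(Np, rho, n, d):
    """Predict the power draw of a geometrically similar mill from Np."""
    return Np * rho * n**3 * d**5

# Lab mill measurement (illustrative numbers only)
Np = power_number(P=150.0, rho=1400.0, n=20.0, d=0.08)
# Industrial mill: 5x larger stirrer diameter, lower stirrer speed
P_industrial = scaled_power(Np, rho=1400.0, n=8.0, d=0.40)
```

Because density cancels, the prediction reduces to scaling the lab power by (n₂/n₁)³·(d₂/d₁)⁵; the choice of dimensionless group is the modeling assumption that the dimensional analysis in the paper pins down from measurements.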
Active current modeling for GaN HEMT devices. Lim, Hong Y.; Ng, Geok I.; Leong, Yoke C.
Microwave and Optical Technology Letters, 03/2015, Volume 57, Issue 3. Journal Article.
The maximum electron density of the F2 layer, NmF2, relative to the solar activity level is studied. The optimal period for averaging the solar activity index F10.7 is found, which gives on average the lowest error and the highest correlation coefficient (in space and in time) for describing the linear dependence of NmF2 on F10.7. This result depends substantially on the selected duration of the NmF2 data record.
Two glucose-limited realkalized fed-batch cultures of Lactococcus lactis CECT 539 were carried out in a diluted whey medium (DW) using two different feeding media. The cultures were fed either a mixture of a concentrated lactose medium (400 g/l) and a concentrated mussel processing waste (CMPW, 101.72 g glucose/l) medium (fermentation I) or a CMPW medium supplemented with glucose and KH₂PO₄ up to concentrations of 400 g glucose/l and 3.21 g total phosphorus/l, respectively (fermentation II). For an accurate description and a better understanding of the kinetics of both cultures, the growth and product formation by L. lactis CECT 539 were both modelled, for the first time, as a function of the amount of glucose (G) added and the pH gradient (VpH) generated in every realkalization and feeding cycle, using an empirical polynomial model. With this modeling procedure, the kinetics of biomass, viable cell counts, nisin, lactic acid, acetic acid and butane-2,3-diol production in both cultures were successfully described (R² values > 0.970) and interpreted for the first time. In addition, the optimum VpH and G values for each product were accurately calculated in the two realkalized fed-batch cultures. This approach appears to be useful for designing feeding strategies to enhance the production of biomass, bacteriocin, and metabolites by the nisin-producing strain in wastes from the food industry. Key words: fed-batch fermentation, empirical modeling, probiotic biomass, nisin, glucose-limited cultures
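The empirical polynomial modeling step described above can be illustrated generically. The sketch below is not the authors' fitted model: it assumes a second-order polynomial response in G and VpH fitted by ordinary least squares, and the data are synthetic.

```python
import numpy as np

# Illustrative sketch (synthetic, not the paper's model): a quadratic response
#   P = b0 + b1*G + b2*V + b3*G^2 + b4*V^2 + b5*G*V
# where G is the glucose added per cycle and V the pH gradient (VpH),
# fitted by ordinary least squares.

def design_matrix(G, V):
    """Second-order polynomial design matrix in two predictors."""
    return np.column_stack([np.ones_like(G), G, V, G**2, V**2, G * V])

rng = np.random.default_rng(0)
G = rng.uniform(0.5, 5.0, 30)    # glucose added per cycle (synthetic units)
V = rng.uniform(0.1, 1.5, 30)    # pH gradient per cycle (synthetic units)
true_b = np.array([0.2, 1.1, 0.8, -0.15, -0.4, 0.3])
P = design_matrix(G, V) @ true_b  # noiseless synthetic product response

# Least-squares fit; with noiseless data this recovers true_b
b, *_ = np.linalg.lstsq(design_matrix(G, V), P, rcond=None)
```

With a fitted quadratic of this form, the optimum (G, VpH) for each product is the stationary point of the surface, which is presumably how optima like those reported in the abstract are obtained.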
We developed three black bear (Ursus americanus) habitat models in the context of a geographic information system to identify linkage areas across a major transportation corridor. One model was based on empirical habitat data, and the other two (opinion- and literature-based) were based on expert information developed in a multicriteria decision-making process. We validated the performance of the models with an independent data set. Four classes of highway linkage zones were generated. Class 3 linkages were the most accurate for mapping cross-highway movement. Our tests showed that the model based on expert literature most closely approximated the empirical model, both in the results of statistical tests and in the description of the class 3 linkages. The expert literature-based model was also consistently more similar to the empirical model than either of two seasonal, expert opinion-based models, and among the expert models it had the strongest correlation with the empirical model. The poor performance of the expert-opinion models may be explained by experts overestimating the importance of riparian habitat relative to the literature. A small portion of the empirical data used to test the models was from the pre-berry season, which may have affected how well the models predicted linkage areas. Our empirical and expert models represent useful tools for resource and transportation planners charged with determining the location of mitigation passages for wildlife when baseline information is lacking and time constraints do not allow for data collection before construction.
Missing knowledge about certain physical phenomena, such as chemical reaction kinetics or heat and mass transfer, hinders efficient and accurate modeling of complex processes. The so-called semi-empirical or hybrid modeling approach overcomes this obstacle by supplementing readily available physical model parts with empirical models representing insufficiently understood phenomena. However, hybrid model identification is still a challenging problem since usually (i) the complexity of each empirical model has to be adjusted, and (ii) many parameters have to be estimated by solving a large nonlinearly constrained optimization problem. In this paper we present an incremental hybrid model identification approach that decomposes the original problem into a sequence of simpler problems, such as data reconciliation, the solution of nonlinear equations, parameter estimation, and the separate identification of the empirical models by means of neural network training. The incremental approach is exemplified in a case study on the identification of a steady-state model of an ethylene glycol production process.
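The incremental decomposition described in this abstract can be illustrated on a toy dynamic balance. Everything below is a synthetic stand-in for the paper's ethylene glycol case study: the derivative estimation, rate back-calculation, and empirical fitting steps are generic, and a one-parameter linear fit replaces the neural network.

```python
import numpy as np

# Toy illustration of incremental hybrid identification (synthetic data):
# (1) estimate dC/dt from measured concentrations,
# (2) back out the unknown reaction rate r from the known physical balance
#       dC/dt = -(q/Vol)*C - r,
# (3) fit an empirical model to r as a function of the state C,
# instead of estimating everything at once in one large optimization.

t = np.linspace(0.0, 5.0, 201)
k, q_over_V = 0.8, 0.2                      # "true" rate constant, dilution rate
C = np.exp(-(k + q_over_V) * t)             # synthetic concentration trajectory

# Step 1: numerical derivative of the measured concentration
dCdt = np.gradient(C, t)
# Step 2: rate inferred from the known part of the model (the balance)
r = -dCdt - q_over_V * C
# Step 3: identify the simplest empirical candidate, r = k_hat * C,
# by one-parameter least squares (a neural network would be trained here)
k_hat = np.sum(r * C) / np.sum(C * C)
# k_hat approaches the true k = 0.8, up to numerical-derivative error
```

Each sub-problem here is linear or one-dimensional, which is the point of the incremental strategy: the hard joint estimation is replaced by a sequence of simple, well-posed steps.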
"The Review of Financial Studies" has among its missions the facilitation and promotion of vigorous academic debate on unsettled questions in finance. This issue represents a cross section of views on one such debate: can our empirical models forecast the equity premium any more accurately than simply using the historical mean?
We used a simple, systematic data-analytics approach to determine the relative linkages of different climate and environmental variables with the canopy-level, half-hourly CO₂ fluxes of US deciduous forests. Multivariate pattern recognition techniques of principal component and factor analyses were used to classify and group climatic, environmental, and ecological variables based on their similarity as drivers, examining their interrelation patterns at different sites. Explanatory partial least squares regression models were developed to estimate the relative linkages of CO₂ fluxes with the climatic and environmental variables. Three biophysical process components adequately described the system-data variances. The ‘radiation-energy’ component had the strongest linkage with CO₂ fluxes, whereas the ‘aerodynamic’ and ‘temperature-hydrology’ components were weakly to moderately linked with the carbon fluxes. On average, the ‘radiation-energy’ component showed 5 and 8 times stronger carbon flux linkages than the ‘temperature-hydrology’ and ‘aerodynamic’ components, respectively. The similarity of the observed patterns among the study sites (representing gradients in climate, canopy height and soil formation) indicates that the findings are potentially transferable to other deciduous forests. The similarities also highlight the scope for developing parsimonious data-driven models to predict the potential sequestration of ecosystem carbon under a changing climate and environment. The presented data analytics provide an objective, empirical foundation for obtaining crucial mechanistic insights, complementing process-based model building with a warranted level of complexity. Model efficiency and accuracy (R² = 0.55–0.81; ratio of root-mean-square error to the observed standard deviation, RSR = 0.44–0.67) reiterate the usefulness of multivariate analytics models for gap-filling of instantaneous flux data.
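The two-stage multivariate analysis summarized above (component extraction followed by regression on component scores) can be sketched generically. The code below uses PCA plus ordinary least squares as a stand-in for the paper's factor-analysis/PLS pipeline; all variable names and data are synthetic.

```python
import numpy as np

# Sketch (synthetic data) of the two-stage analysis: extract components that
# group correlated drivers, then regress the flux on the component scores.
# This illustrates the structure of the pipeline, not the paper's results.

rng = np.random.default_rng(1)
n = 500
radiation = rng.normal(size=n)
temperature = 0.9 * radiation + 0.4 * rng.normal(size=n)  # correlated driver
wind = rng.normal(size=n)
X = np.column_stack([radiation, temperature, wind])
flux = 2.0 * radiation + 0.5 * wind + 0.1 * rng.normal(size=n)

# Stage 1: PCA via SVD of the standardized driver matrix
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt.T                          # component scores, one per sample

# Stage 2: regress the (centered) flux on the component scores
y = flux - flux.mean()
beta, *_ = np.linalg.lstsq(scores, y, rcond=None)
pred = scores @ beta
r2 = 1.0 - np.sum((pred - y) ** 2) / np.sum(y ** 2)
# r2 is high because the synthetic flux is driven by the extracted components
```

In the paper's setting, the relative sizes of the coefficients on each component play the role of the "relative linkages" between flux and the radiation-energy, temperature-hydrology, and aerodynamic groupings.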
The aim of this paper is to examine the oft-heard concern that quality or quality-of-life cannot be defined. This concern persists today, even in the presence of countless studies that claim to be assessing quality or quality-of-life. There is obviously a disconnect here that warrants some attention, if not explanation. In this study, I summarize the extent of this disconnect and offer a number of potential explanations for why this situation exists. I review the roles that operational definitions, statistical and empirical models, and content-specific definitions play in defining quality and/or quality-of-life. I conclude that none of these approaches provides a comprehensive definition of quality or quality-of-life. Instead, I argue that quality or quality-of-life represents a distinctive pattern of thinking. I establish this pattern by examining the cognitive-linguistic basis of these definitions and argue that, when this is done, it is possible to identify a universal cognitive (hybrid) construct that describes how a person thinks about all types of qualitative assessments. The implication is that for a study to claim that it is defining or assessing quality or quality-of-life, it will first have to demonstrate the presence of the elements of this hybrid construct.