Reversed-phase high-performance liquid chromatography (RP-HPLC) is the most popular chromatographic mode, accounting for more than 90% of all separations. HPLC itself owes its immense popularity to its relative simplicity and low cost, together with reliable, easy-to-operate equipment. Thanks to extensive automation, it can be run virtually unattended with multiple samples at various separation conditions, even by relatively low-skilled personnel. Currently, more than 600 RP-HPLC columns are available to end users, some of which exhibit very large differences in selectivity and production quality. Often, two similar RP-HPLC columns are not equally suitable for the requisite separation, and to date there is no universal RP-HPLC column covering the full variety of analytes. This forces analytical laboratories to keep a multitude of diverse columns. Column selection is therefore a crucial part of RP-HPLC method development, especially since sample complexity is constantly increasing. Rationally choosing an appropriate column is complicated: in addition to differences in the primary dispersive (London) interactions with analytes, individual columns can exhibit a unique character owing to specific polar, hydrogen-bond, and electron pair donor–acceptor interactions. Columns also vary in the type of packing, the amount and type of residual silanols, end-capping, ligand bonding density, and pore size, among other properties. Consequently, the chromatographic performance of RP-HPLC systems often changes considerably depending on the selected column. Although a wide body of knowledge is available on this important subject, a comprehensive review enabling objective comparison and/or selection of chromatographic columns is still lacking.
We aim for this review to be a comprehensive, authoritative, critical, and easily readable monograph of the most relevant publications regarding column selection and characterization in RP-HPLC covering the past four decades. Future perspectives, which involve the integration of state-of-the-art molecular simulations (molecular dynamics or Monte Carlo) with minimal experiments, aimed at nearly “experiment-free” column selection methodology, are proposed.
•A methodology to predict wind and solar power curtailments in CAISO.
•Machine learning-based forecasting using cross-validation and hold-out approaches.
•Analysis of six-year hourly data of power plants, power exchange, and demand.
•A correlation matrix of eight input features and the three target variables.
•Cross-validation improved the prediction of oversupply using the random forest method.
The economic viability of renewable energy is deteriorating due to its curtailment in power systems; forecasting curtailments is therefore imperative for more effective utilization. To alleviate this issue, we propose artificial intelligence-based models to accurately predict wind and solar power curtailments (WSPCs), which have not been investigated before. A prediction methodology is developed using different types of machine learning (ML) methods and evaluated with both hold-out (HO) and cross-validation (CV) approaches. The ML methods considered include regression trees (RT), gradient boosting trees (GBT), random forest (RF), feed-forward artificial neural networks (ANN), long short-term memory (LSTM), and support vector regression (SVR). The prediction models are trained on eight input features (load demand; the output power of thermal, nuclear, solar, wind, biomass/geothermal, and large hydro units; and power imports), with the two WSPC series as target variables. The predictive models are validated on historical data, i.e., hourly records of the California Independent System Operator (CAISO), and the optimal hyperparameters of each model are chosen using Bayesian optimization to attain the best results. Among all the models, the RF model yields the minimum prediction errors, and thus the best performance, under the proposed CV approach. The obtained results demonstrate the effectiveness of the proposed models in predicting WSPCs.
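The hold-out versus cross-validation evaluation protocol described above can be sketched in plain Python. This is a minimal illustration of the two evaluation schemes only: the trivial mean predictor, the fold count, and the split fraction are assumptions for demonstration, not the paper's RF/GBT/LSTM models or settings.

```python
import random

def rmse(y_true, y_pred):
    """Root mean square error between two equal-length sequences."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

def mean_predictor(train_y):
    """Stand-in 'model': always predicts the training-set mean."""
    mu = sum(train_y) / len(train_y)
    return lambda _x: mu

def hold_out_score(X, y, test_frac=0.2, seed=0):
    """HO approach: a single random train/test split."""
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    model = mean_predictor([y[i] for i in train])
    return rmse([y[i] for i in test], [model(X[i]) for i in test])

def cv_score(X, y, k=5, seed=0):
    """CV approach: average RMSE over k held-out folds."""
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = mean_predictor([y[i] for i in train])
        scores.append(rmse([y[i] for i in fold], [model(X[i]) for i in fold]))
    return sum(scores) / k
```

In the CV variant every sample is held out exactly once, which is why it gives a more stable error estimate than a single HO split on the same data.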
The rapid increase in the emission of greenhouse gases over the years signifies the urgent need to explore fuels that emit less CO2, such as biofuels. CO2 emissions can be reduced further by productively utilizing the CO2 generated during industrial processes. In this study, we have proposed a strategy to produce advanced biofuel from a macroalgal biorefinery and to utilize all waste streams from processing into value-added products. To achieve this aim, we developed a novel superstructure for biorefinery process synthesis based on Saccharina japonica (macroalgae) to determine an optimal design of the biorefinery. Process integration was performed to utilize direct greenhouse gas emissions from the biorefinery and to reduce pollutant emissions and freshwater consumption. A techno-economic and environmental mixed-integer non-linear model was formulated based on this superstructure. To achieve the economic and environmental goals, two objective functions were studied: maximization of the net present value and minimization of CO2 emissions. Comprehensive sensitivity and Monte Carlo simulation models were formulated to evaluate the effects of variation in key model parameters on the overall economics and to perform the risk assessment, respectively. The minimum ethanol selling price range for the integrated design was found to be USD 0.36–0.56/L. The optimal design achieved a 90% reduction in CO2 emissions, from 4.86 kg/s to 0.42 kg/s, as well as a 38.6% reduction in freshwater consumption. The risk of the optimal design was found to be 20–44% on the basis of the minimum selling price of ethanol. Therefore, this design can be implemented as an economically and environmentally feasible approach to biofuel production.
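The economic objective above rests on the standard net-present-value formula, NPV = Σ CFₜ/(1+r)ᵗ. A minimal sketch follows; the discount rate and cash-flow figures in the test are hypothetical round numbers, not the study's values.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows at a fixed discount rate.

    cash_flows[0] is the cash flow at t=0 (typically the negative
    initial investment); later entries are discounted by (1+rate)**t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
```

At a zero discount rate NPV is just the sum of the cash flows; any positive rate shrinks future inflows, which is why NPV-maximizing designs favor earlier revenue.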
•A superstructure-based process synthesis framework is developed.
•Process integration is performed to utilize all waste streams from the processing.
•The optimal design of the biorefinery is determined.
•A minimum ethanol selling price range of USD 0.36–0.56/L is achieved.
•The optimal design has achieved 90% CO2 reduction.
Objective
Blood–brain barrier (BBB) breakdown has been suggested to be an early biomarker in human cognitive impairment. However, the relationship between BBB breakdown and brain pathology, most commonly Alzheimer disease (AD) and vascular disease, is still poorly understood. The present study measured human BBB function in mild cognitive impairment (MCI) patients on 2 molecular scales, specifically the BBB's permeability to water and albumin molecules.
Methods
Fifty‐five elderly participants were enrolled, including 33 MCI patients and 22 controls. BBB permeability to water was measured with a new magnetic resonance imaging technique, water extraction with phase contrast arterial spin tagging. BBB permeability to albumin was determined using cerebrospinal fluid (CSF)/serum albumin ratio. Cognitive performance was assessed by domain‐specific composite scores. AD pathology (including CSF Aβ and ptau) and vascular risk factors were examined.
Results
Compared to cognitively normal subjects, BBB in MCI patients manifested an increased permeability to small molecules such as water but was no more permeable to large molecules such as albumin. BBB permeability to water was found to be related to AD markers of CSF Aβ and ptau. On the other hand, BBB permeability to albumin was found to be related to vascular risk factors, especially hypercholesterolemia, but was not related to AD pathology. BBB permeability to small molecules, but not to large molecules, was found to be predictive of cognitive function.
Interpretation
These findings provide early evidence that BBB breakdown is related to both AD and vascular risks, but their effects can be differentiated by spatial scales. BBB permeability to small molecules has a greater impact on cognitive performance. ANN NEUROL 2021;90:227–238
Greenhouse gas emissions, including carbon dioxide and non-CO2 gases, are mainly generated by human activities such as the burning of fossil fuels, deforestation, and agriculture. These emissions disrupt the natural balance of the global ecosystem and contribute to climate change. However, by investing in renewable energy, we can help mitigate these problems by reducing greenhouse gas emissions and promoting a more sustainable future. This research utilized a panel data model to explore the impact of carbon dioxide and non-CO2 greenhouse gas emissions on global investments in renewable energy. The study analyzed data from 63 countries over the period from 1990 to 2021. Firstly, the study established a relationship between greenhouse gas emissions and clean energy investments across all countries. The findings indicated that carbon dioxide had a positive effect on clean energy investments, while non-CO2 greenhouse gas emissions had a negative impact on all three types of clean energy investments. However, the impact of flood damage as a representative of climate change on renewable energy investment was uncertain. Secondly, the study employed panel data with random effects to examine the relationship between countries with lower or higher average carbon dioxide emissions and their investments in solar, wind, and geothermal energy. The results revealed that non-CO2 greenhouse gas emissions had a positive impact on investments only in wind power in less polluted countries. On the other hand, flood damage and carbon dioxide emissions were the primary deciding factors for investments in each type of clean energy in more polluted countries.
A superstructure-based approach was proposed for optimization of biorefineries that use Saccharina japonica as feedstock. The goal of this study was to determine the optimal flowsheet design to maximize the net present value by considering the mass and energy balance and the capital and manufacturing costs. Multiple design alternatives reported in the literature were considered at each biorefinery processing stage, which transformed the superstructure optimization into a mixed integer nonlinear programming (MINLP) problem. In order to efficiently compute a solution for the resulting MINLP problem, the separable programming technique was employed by approximating the initial MINLP problem as a mixed integer linear programming (MILP) problem. The results indicated that the minimum ethanol selling price for the optimal design is $1.97/gal, whereas a net present value of $61.5 million is obtained based on the current wholesale prices for both products and raw materials. Sensitivity analysis was performed to identify potential for economic improvement. The developed framework has the capacity to efficiently scan through processing alternatives to identify an economically optimal design for different potential objective functions.
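The separable-programming step, replacing each nonlinear term with a piecewise-linear approximation that a MILP solver can represent, can be illustrated in miniature. The breakpoints and the quadratic test function below are illustrative assumptions; the study's actual process-model terms are not reproduced here.

```python
def piecewise_linear(breakpoints, f):
    """Build a piecewise-linear approximation of a separable term f(x).

    In separable programming, each nonlinear single-variable term is
    replaced by linear interpolation between function values at fixed
    breakpoints, which a MILP solver can encode (e.g., via SOS2 weights).
    """
    xs = sorted(breakpoints)
    ys = [f(x) for x in xs]

    def approx(x):
        if x <= xs[0]:
            return ys[0]
        for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]):
            if x <= x1:
                w = (x - x0) / (x1 - x0)   # interpolation weight on [x0, x1]
                return (1 - w) * y0 + w * y1
        return ys[-1]

    return approx
```

At the breakpoints the approximation is exact; between them the error is bounded by the curvature of f over the segment, so finer breakpoint grids trade model size for accuracy.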
•A superstructure-based optimization model is developed.
•The optimal flowsheet of the macroalgae biorefinery is determined.
•Two different optimization scenarios are investigated.
•Sensitivity analysis elaborates potential improvements.
•Experiment-based process design of fast pyrolysis of S. japonica brown seaweed.
•Poly-generation process producing diesel-grade fuel, heat, and power.
•Process simulation using Aspen Plus and a specialized biocrude modeling method.
•Features acid wash mineral removal, a fixed-bed reactor system, and a Rankine power cycle.
•7–45 times less CO2 emissions compared to conventional crude oil processes.
Marine macroalgae or seaweeds are increasingly becoming strong candidates for sustainable biofuel feedstocks of the future. This study features a large-scale process design and comprehensive analysis of an industrial-scale (400,000 tons dry feedstock per year) poly-generation pyrolysis process that utilizes a 3rd generation biofuel feedstock, Saccharina japonica brown seaweed, and produces diesel-range hydrocarbon fuel, heat, and power. Process design relied predominantly on published experimental data regarding fast pyrolysis of S. japonica in a fixed-bed reactor system, followed by dewatering and catalytic upgrading of the produced biocrude. The design featured acid wash pretreatment for the reduction of mineral content, and subsequently a Rankine power cycle utilizing biochar. The design also considered two distinct cases of on-site hydrogen production and hydrogen purchase. Based on the experimental data, a rigorous steady-state flowsheet model was constructed using Aspen Plus for each design case. The results of comprehensive techno-economic assessment, sensitivity, and Monte Carlo analyses provided insight into the capital cost of the process, the minimum product selling price, and selling price ranges. Finally, the process was compared with traditional crude oil extraction and processing in terms of significant reductions in CO2 emissions, hence providing strong evidence of its environmental sustainability.
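The Monte Carlo treatment of selling-price ranges can be sketched as follows. Every distribution and dollar figure below is hypothetical, chosen only to show the mechanics of sampling uncertain cost inputs and reporting a percentile range; none of the values comes from the study.

```python
import random

def mc_msp(n=10_000, seed=42):
    """Monte Carlo sketch of a minimum-selling-price (MSP) range.

    Samples uncertain annualized capital cost, operating cost, and fuel
    output, computes the break-even price per sample, and reports the
    5th-95th percentile range. All numbers are hypothetical.
    """
    rng = random.Random(seed)
    prices = []
    for _ in range(n):
        capex = rng.uniform(40e6, 60e6)      # annualized capital cost, $/yr
        opex = rng.uniform(20e6, 30e6)       # operating cost, $/yr
        fuel_out = rng.uniform(90e6, 110e6)  # fuel production, L/yr
        prices.append((capex + opex) / fuel_out)  # break-even price, $/L
    prices.sort()
    return prices[int(0.05 * n)], prices[int(0.95 * n)]
```

Reporting a percentile band rather than a single point is what turns a deterministic techno-economic estimate into the "selling price range" language used in the abstract.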
•This was a retrospective study of 869 subjects with herpes zoster ophthalmicus.
•Ocular involvement occurred in 84.8%.
•A total of 51.2% had corneal involvement and 47.6% uveitis.
•Permanent vision loss (≤20/50) occurred in 9.6%.
•Vision loss was associated with older age, immunosuppression, and uveitis.
To determine the rate of moderate and severe vision loss following herpes zoster ophthalmicus (HZO) and to identify associated factors.
Retrospective cohort study.
All subjects with acute HZO seen at a single center from 2006 to 2016 were included in the study. The primary outcome measure was the proportion of individuals with moderate and/or severe loss of vision following an acute episode of HZO. Secondary outcome measures included causes and factors associated with permanent loss of vision owing to HZO.
A total of 869 patients with acute HZO were identified, with a median follow-up time of 6.3 years (interquartile range 3.7-8.9 years). Ocular involvement of HZO was diagnosed at or within the first month of presentation in 737 individuals (84.8%). The most common sites of ocular involvement were conjunctivitis (76.1%), followed by keratitis (51.2%) and uveitis (47.6%). Moderate vision loss (≤20/50) secondary to HZO occurred in 83 eyes (9.6%), while severe vision loss (≤20/200) occurred in 31 eyes (3.6%). Causes of loss of vision included corneal scarring (94.0%), corneal perforation (4.8%), and secondary glaucoma (1.2%). Severe vision loss was associated with older age (hazard ratio [HR] 1.059, P = .001), immunosuppression (HR 3.125, P = .028), poor presenting visual acuity (HR 2.821, P = .002), and uveitis (HR 4.777, P = .004) on multivariate analysis.
Among individuals with HZO, approximately 1 in 10 individuals may develop moderate or severe vision loss, primarily owing to corneal scarring. Older age, immunosuppression, and uveitis are associated with severe permanent loss of vision secondary to HZO.
•A novel MRI technique was developed to provide a non-invasive assessment of oxygen extraction fraction in the medial temporal lobe (MTL-OEF) in less than 5 minutes.
•MTL-OEF was measured to be 23.9±3.6% in healthy adults and was significantly lower (P<0.0001) than the OEF of 33.3±2.9% in superficial cortical tissues.
•In caffeine ingestion challenges, MTL-OEF was elevated by 9.1±4.0%.
•MTL-OEF increased with age (MTL-OEF = 20.997 + 0.100 × age; P=0.02).
The medial temporal lobe (MTL) is a key area implicated in many brain diseases, such as Alzheimer's disease. As a functional biomarker, the oxygen extraction fraction (OEF) of MTL may be more sensitive than structural atrophy of MTL, especially at the early stages of diseases. However, there is a lack of non-invasive techniques to measure MTL-OEF in humans. The goal of this work is to develop an MRI technique to assess MTL-OEF in a clinically practical time without using contrast agents. The proposed method measures venous oxygenation (Yv) in the basal veins of Rosenthal (BVs), which are the major draining veins of the MTL. MTL-OEF can then be estimated as the arterio-venous difference in oxygenation. We developed an MRI sequence, dubbed arterial-suppressed accelerated T2-relaxation-under-phase-contrast (AS-aTRUPC), to quantify the blood T2 of the BVs, which was then converted to Yv through a well-established calibration model. MTL-OEF was calculated as (Ya−Yv)/Ya × 100%, where Ya was the arterial oxygenation. The feasibility of AS-aTRUPC to quantify MTL-OEF was evaluated in 16 healthy adults. The sensitivity of AS-aTRUPC in detecting OEF changes was assessed by a caffeine ingestion (200 mg) challenge. For comparison, T2-relaxation-under-spin-tagging (TRUST) MRI, which is a widely used global OEF technique, was also acquired. The dependence of MTL-OEF on age was examined by including another seven healthy elderly subjects. The results showed that in healthy adults, MTL-OEF of the left and right hemispheres were correlated (P=0.005). MTL-OEF was measured to be 23.9±3.6% (mean±standard deviation) and was significantly lower (P<0.0001) than the OEF of 33.3±2.9% measured in superior sagittal sinus (SSS). After caffeine ingestion, there was an absolute percentage increase of 9.1±4.0% in MTL-OEF. 
Additionally, OEF in SSS measured with AS-aTRUPC showed a strong correlation with TRUST OEF (intra-class correlation coefficient=0.94 with 95% confidence interval 0.91, 0.96), with no significant bias (P=0.12). MTL-OEF was found to increase with age (MTL-OEF=20.997+0.100 × age; P=0.02). In conclusion, AS-aTRUPC MRI provides non-invasive assessments of MTL-OEF and may facilitate future clinical applications of MTL-OEF as a disease biomarker.
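The arterio-venous formula quoted above, OEF = (Ya − Yv)/Ya × 100%, is simple to compute once both oxygenation values are known. The sketch below covers only this final step; the T2-to-Yv calibration model is omitted, and the example oxygenation values in the test are illustrative, not measured data.

```python
def oef_percent(ya, yv):
    """Oxygen extraction fraction (in %) from arterial (ya) and venous
    (yv) oxygenation, both given as fractions in (0, 1]:

        OEF = (Ya - Yv) / Ya * 100%
    """
    if not (0 < yv <= ya <= 1):
        raise ValueError("expect 0 < Yv <= Ya <= 1")
    return (ya - yv) / ya * 100.0
```

For instance, with a typical arterial oxygenation near 0.98, a venous oxygenation around 0.745 yields an OEF of roughly 24%, consistent in magnitude with the MTL-OEF reported above.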
Quantitative structure-retention relationship (QSRR) modeling has emerged as an efficient alternative to predict analyte retention times using molecular descriptors. However, most reported QSRR models are column-specific, requiring separate models for each high-performance liquid chromatography (HPLC) system. This study evaluates the potential of machine learning (ML) algorithms and quantum mechanical (QM) descriptors to develop QSRR models that can predict retention times across three different reversed-phase HPLC columns under varying conditions. Four machine learning methods, namely partial least squares (PLS) regression, ridge regression (RR), random forest (RF), and gradient boosting (GB), were compared on a dataset of 360 retention times for 15 aromatic analytes. Molecular descriptors were calculated using density functional theory (DFT). Column characteristics such as particle size and pore size, as well as experimental conditions such as temperature and gradient time, were additionally used as descriptors. Results showed that the GB-QSRR model demonstrated the best predictive performance, with Q² of 0.989 and a root mean square error of prediction (RMSEP) of 0.749 min on the test set. Feature analysis revealed that solvation energy (SE), the HOMO–LUMO energy gap (ΔE HOMO–LUMO), total dipole moment (Mtot), and global hardness (η) are among the most influential predictors for retention time prediction, indicating the significance of electrostatic interactions and hydrophobicity. Our findings underscore the efficiency of the ensemble methods, the GB and RF models employing non-linear learners, in capturing local variations in retention times across diverse experimental setups. This study emphasizes the potential of cross-column QSRR modeling and highlights the utility of ML models in optimizing chromatographic analysis.
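Of the four regression methods compared, ridge regression has a closed-form solution that is easy to sketch. The descriptor matrix in the test is synthetic, standing in for DFT-derived descriptors such as dipole moment or the HOMO-LUMO gap; this is a minimal illustration of the RR method, not the study's fitted model.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y.

    Rows of X are analytes, columns are descriptors (here synthetic
    stand-ins for QM descriptors); y holds retention times. The L2
    penalty alpha shrinks coefficients and stabilizes correlated
    descriptors, common in QSRR feature sets.
    """
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

def ridge_predict(X, w):
    """Predict retention times from descriptors and fitted weights."""
    return X @ w
```

As alpha approaches zero the fit reduces to ordinary least squares; larger alpha trades training accuracy for robustness, which matters when descriptors are strongly collinear.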