Information technology has become a defining feature of this era, and industries of all kinds are using it to transform their established ways of working. At present, college counselors must handle complex workloads over long stretches of time, which inevitably reduces their efficiency, dampens their enthusiasm, and leads to burnout. Colleges and universities can foster counselors' interest and enthusiasm in their work through the use of information technology. This paper discusses how information technology can be used to alleviate college counselors' job burnout.
The unpredictable nature of renewable energy sources challenges power system operators in maintaining system reliability. Microgrids (MGs) can provide an effective solution to the problems of grid-connected renewable energy sources because of their controllability and flexibility. In this study, a stochastic optimization strategy was proposed for participation in energy market operations considering demand response (DR). The results indicated that the operating cost decreased when the MG implemented DR programs. The DR program could also shift energy consumption from on-peak to off-peak hours and flatten the load curve. Accordingly, a scheduling model was presented for the operation of energy carriers and reserves, considering the security constraints of the power and natural gas grids in interconnected hubs as well as responsive load participation, using the developed water wave optimization (WWO) algorithm. The objective function of this model aimed to minimize the operating cost of the sources supplying the electrical and thermal loads of the proposed MG. WWO is a meta-heuristic algorithm inspired by the behavior of water waves. The waves formed on the sea surface exhibit complex but interesting relationships that can be exploited to solve optimization problems. In this algorithm, each candidate solution is encoded as a water wave and moves through the search space according to three wave behaviors: propagation, refraction, and breaking (decay). The simplicity of its structure, its elitism, and its ability to escape local optima, owing to its multiple search operators and generational mutation, motivated the choice of this method. The results indicated a decline in operating costs through electrical and thermal responsive load participation and a thermal energy storage system.
Results of the proposed model revealed a correlation between the electricity price and natural gas consumption, indicating that multi-carrier energy grids should be examined and optimized simultaneously.
•A model is proposed for scheduling the operation of interconnected hubs in the MG.
•The proposed MG includes renewable resources, fossil fuels, batteries, energy hubs, CHP, etc.
•The effect of electrical and thermal responsive loads on reducing operating cost is presented.
•Security indicators of the power grid are included in the constraints of the proposed model.
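The three WWO behaviors described above (propagation, refraction, and breaking) can be illustrated with a minimal sketch. This is a generic, hypothetical implementation run on a toy sphere objective, not the paper's scheduling model; all parameter names and values are assumptions.

```python
import random

def wwo_minimize(f, dim, lo, hi, pop=10, iters=200,
                 h_max=6, alpha=1.01, beta=0.25, seed=1):
    # Minimal Water Wave Optimization sketch (illustrative only).
    rng = random.Random(seed)
    waves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    fit = [f(w) for w in waves]
    lam = [0.5] * pop            # per-wave wavelengths
    h = [h_max] * pop            # per-wave heights
    best = min(range(pop), key=lambda i: fit[i])

    def clip(x):
        return min(hi, max(lo, x))

    for _ in range(iters):
        fmin, fmax = min(fit), max(fit)
        for i in range(pop):
            # fitter waves get shorter wavelengths (finer local search)
            lam[i] *= alpha ** (-(fmax - fit[i] + 1e-12) / (fmax - fmin + 1e-12))
            # propagation: random move proportional to the wavelength
            cand = [clip(x + rng.uniform(-1, 1) * lam[i] * (hi - lo))
                    for x in waves[i]]
            fc = f(cand)
            if fc < fit[i]:
                waves[i], fit[i], h[i] = cand, fc, h_max
                if fc < fit[best]:
                    best = i
                    # breaking: spawn solitary waves around a new best solution
                    for _ in range(2):
                        s = [clip(x + rng.gauss(0, 1) * beta * (hi - lo))
                             for x in cand]
                        fs = f(s)
                        if fs < fit[best]:
                            waves[i], fit[i] = s, fs
            else:
                h[i] -= 1
                if h[i] == 0:
                    # refraction: a stagnant wave is redrawn toward the best wave
                    waves[i] = [clip(rng.gauss((waves[best][d] + waves[i][d]) / 2,
                                               abs(waves[best][d] - waves[i][d]) / 2))
                                for d in range(dim)]
                    fit[i], h[i] = f(waves[i]), h_max
                    if fit[i] < fit[best]:
                        best = i
    return waves[best], fit[best]

# usage: a sphere function stands in for the operating-cost objective
sol, val = wwo_minimize(lambda x: sum(v * v for v in x), dim=3, lo=-5, hi=5)
```

The height counter is what provides the escape from local optima: a wave that fails to improve for `h_max` propagations is refracted toward the current best rather than discarded.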
The PlanetScope CubeSat constellation is providing unprecedented global coverage of visible to near-infrared, atmospherically corrected, 3 m imagery. The revisit interval between successive overpasses varies in space and time in a complex manner because of a variety of factors, particularly the different sensor orbits. The temporal availability of PlanetScope imagery is quantified in this study considering all of the publicly available images acquired globally for a 12 month period from December 1st 2019 to November 30th 2020. A total of 175.8 million images were acquired by the constellation, which was composed of between 100 and 133 unique PlanetScope sensors each month across three sensor generations. The local morning overpass times of the three sensor generations were quantified; the most frequently occurring times were 10:16, 10:29, and 10:03 (to the nearest minute), with 90% of the images acquired within a range of morning overpass times of 2 h and 13 min, 1 h and 30 min, and 1 h and 50 min for PlanetScope-0, PlanetScope-1, and PlanetScope-2, respectively. Maps, histograms and summary statistics of the total number of observations and revisit intervals are derived with respect to a global grid of 4.7 million land points spaced 5.6 km apart in the equal area sinusoidal projection. The annual and monthly number of PlanetScope observations and average revisit intervals did not vary in a geographically uniform manner. This is due to several factors, including the different PlanetScope orbits, seasonal high-latitude darkness at the time of sensor overpass, and the changing number of sensors on orbit as PlanetScope sensors were decommissioned and later generations became operational over the 12 month study period. In addition, the images in each frame of sensed data are not made available if they cannot be geolocated because cloud and/or featureless or unstructured terrain precludes ground control matching.
The PlanetScope constellation provided higher temporal resolution than provided by sensors such as Landsat-8 or Sentinel-2 although 9% of the global land grid locations, predominantly in the interior of Greenland and non-coastal Antarctica, had no observations. Considering the 12 months of global observations, the median average revisit interval was only 30.3 h, and 9.6%, 71.8%, and 88.4% of the land points had average revisit intervals <24 h, <36 h, and < 48 h, respectively. Globally, the median minimum revisit interval was 25 s and the median maximum revisit interval was 9.15 days; 95.4% of the land grid points had a minimum revisit <180 s, and 89.1% had a maximum revisit <480 h (20 days). The PlanetScope images are labelled as “standard” or as “test” quality based on solar geometry, saturated pixel, and geolocation accuracy criteria. The median annual proportion of observations labelled as “standard” at each land grid point over the 12 months was 78.14%. A global cloud analysis was undertaken to quantify the probability of there being at least one and at least two cloud-free PlanetScope observations within 5, 7 and 10 day consecutive periods. Lower probabilities occurred in cloudy regions and where there were fewer observations. The global mean average probability of there being at least one cloud-free observation over the 12 study months was 0.84, 0.88 and 0.92 for the 5, 7 and 10 day periods respectively. The global mean average probability of there being at least two cloud-free observations was 0.65, 0.76 and 0.84 for the 5, 7 and 10 day periods respectively. The probabilities varied seasonally and the northern hemisphere winter (December–February) and spring (March–May) had lower and higher global mean average seasonal probabilities, respectively, than those derived over the 12 months. 
The high temporal global coverage provided by the PlanetScope constellation will benefit new applications, in particular those concerned with the assessment of rapidly changing phenomena and of phenomena that cannot be resolved at moderate and coarse resolution.
•PlanetScope temporal characteristics quantified: global, 12 months, 175.8 million images
•30.3 h global median average revisit interval
•25 s and 9.15 day global median minimum and maximum revisit intervals
•71.8% of land acquired with <36 h average revisit interval
•Local overpass times differ among the 3 PlanetScope generations
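Under a simplifying independence assumption (one observation per day, each cloudy with a fixed probability), the probability of at least m cloud-free observations within a k-day window, as analyzed in the cloud study above, reduces to a binomial tail. A hypothetical sketch; the 0.6 cloud probability is illustrative and not a value from the study:

```python
from math import comb

def p_at_least(m, k, c):
    """Probability of >= m cloud-free observations among k independent
    daily observations, each cloudy with probability c (binomial tail)."""
    return sum(comb(k, j) * (1 - c) ** j * c ** (k - j)
               for j in range(m, k + 1))

# illustrative per-observation cloud probability of 0.6
p1 = p_at_least(1, 5, 0.6)   # >= 1 clear observation in a 5-day window
p2 = p_at_least(2, 7, 0.6)   # >= 2 clear observations in a 7-day window
```

As in the reported results, the probability rises with window length and falls when requiring two clear observations instead of one.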
This paper presents a Stage 3 validation of the recently released Collection 6 NASA MCD64A1 500 m global burned area product. The product is validated by comparison with visually interpreted Landsat 8 Operational Land Imager (OLI) image pairs acquired 16 days apart. These independent reference data were selected using a stratified random sampling approach that allows for probability sampling of Landsat data in both time and space. A total of 558 Landsat 8 OLI image pairs (1116 images), acquired between March 1st, 2014 and March 19th, 2015, were selected and used to validate the MCD64A1 product. The areal accuracy of the MCD64A1 product was characterized at the 30 m resolution of the Landsat independent reference data using standard accuracy metrics derived from global and from biome-specific confusion matrices. Because a probability-based Stage 3 sampling protocol was followed, unbiased estimators of the accuracy metrics and associated standard errors could be used. Globally, the MCD64A1 product had an estimated 40.2% commission error and 72.6% omission error; the prevalence of omission errors is reflected by a negative estimated bias of the mapped global area burned relative to the Landsat independent reference data (−54.1%). Globally, the standard errors of the accuracy metrics were less than 6%. The lowest errors were observed in the Boreal Forest biome (27.0% estimated omission and 23.9% estimated commission errors), where burned areas tend to be large and distinct and remain on the landscape for long periods, and the highest errors were in the Tropical Forest, Temperate Forest, and Mediterranean biomes (estimated >90% omission error and >50% commission error). The product accuracy was also characterized at coarser scale using metrics derived from the regression between the proportion of coarse resolution grid cells detected as burned by MCD64A1 and the proportion mapped in the Landsat 8 interpreted maps.
The errors of omission and commission observed at 30 m resolution compensate to a considerable extent at coarser resolution, as indicated by the coefficient of determination (r2 > 0.70), slope (>0.79) and intercept (−0.0030) of the regression between the MCD64A1 product and the Landsat independent reference data in 3 km, 4 km, 5 km, and 6 km coarse resolution cells. The Boreal Forest, Desert and Xeric Shrublands, Temperate Savannah and Tropical Savannah biomes had higher r2 and slopes closer to unity than the Temperate Forest, Mediterranean, and Tropical Forest biomes. The analysis of the deviations between the proportion of area burned mapped by the MCD64A1 product and by the independent reference data, performed using 3 km × 3 km and 6 km × 6 km coarse resolution cells, indicates that the large negative bias in global area burned is primarily due to the systematic underestimation of smaller burned areas in the MCD64A1 product.
•Stage 3 validation of the NASA MODIS MCD64A1 burned area product.
•Independent reference data defined by visually interpreted Landsat 8 image pairs.
•558 Landsat 8 image pairs (1116 images) globally distributed over 12 months.
•Selected by stratified random sampling in space and time.
•Accuracy reported using confusion matrix-derived and regression-based metrics.
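The commission error, omission error, and relative bias reported above follow directly from confusion-matrix counts of the burned class. A minimal sketch with hypothetical pixel counts, chosen only to show how heavy omission drives a negative area bias; this is not the paper's stratum-weighted, design-based estimator:

```python
def burned_area_accuracy(tp, fp, fn):
    """Accuracy metrics from burned-class confusion-matrix pixel counts:
    tp = burned in both map and reference, fp = mapped burned but not in
    the reference (commission), fn = burned in reference but missed (omission)."""
    commission = fp / (tp + fp)          # fraction of mapped burned that is wrong
    omission = fn / (tp + fn)            # fraction of reference burned that is missed
    mapped, reference = tp + fp, tp + fn
    rel_bias = (mapped - reference) / reference
    return commission, omission, rel_bias

# hypothetical counts: omission far exceeding commission yields a strongly
# negative relative bias of the mapped burned area
c, o, b = burned_area_accuracy(tp=300, fp=200, fn=800)  # 0.4, ~0.73, ~-0.55
```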
RNA-binding proteins (RBPs) mediate the localization, stability, and translation of their target transcripts and fine-tune the physiological functions of the encoded proteins. The insulin-like growth factor (IGF) 2 mRNA-binding protein (IGF2BP, IMP) family comprises three RBPs, IGF2BP1, IGF2BP2, and IGF2BP3, capable of associating with IGF2 and other transcripts and mediating their processing. IGF2BP2 is the least understood member of this family; however, it has been reported to participate in a wide range of physiological processes, such as embryonic development, neuronal differentiation, and metabolism. Its dysregulation is associated with insulin resistance, diabetes, and carcinogenesis, and it may be a powerful biomarker and candidate therapeutic target for the relevant diseases. This review summarizes the structural features, regulation, and functions of IGF2BP2 and its association with cancer and cancer stem cells.
Flavin-based electron bifurcation is a recently discovered mechanism of coupling endergonic to exergonic redox reactions in the cytoplasm of anaerobic bacteria and archaea. Among the five electron-bifurcating enzyme complexes characterized to date, one is a heteromeric ferredoxin- and NAD-dependent FeFe-hydrogenase. We report here a novel electron-bifurcating FeFe-hydrogenase that is NADP rather than NAD specific and forms a complex with a formate dehydrogenase. The complex was found in high concentrations (6% of the cytoplasmic proteins) in the acetogen Clostridium autoethanogenum grown autotrophically on CO, which was fermented to acetate, ethanol, and 2,3-butanediol. The purified complex was composed of seven different subunits. As predicted from the sequence of the encoding clustered genes (fdhA/hytA-E) and from chemical analyses, the 78.8-kDa subunit (FdhA) is a selenocysteine- and tungsten-containing formate dehydrogenase, the 65.5-kDa subunit (HytB) is an iron-sulfur flavin mononucleotide protein harboring the NADP binding site, the 51.4-kDa subunit (HytA) is the FeFe-hydrogenase proper, and the 18.1-kDa (HytC), 28.6-kDa (HytD), 19.9-kDa (HytE1), and 20.1-kDa (HytE2) subunits are iron-sulfur proteins. The complex catalyzed both the reversible coupled reduction of ferredoxin and NADP+ with H2 or formate and the reversible formation of H2 and CO2 from formate. We propose that the complex has two functions in vivo: normally, to catalyze CO2 reduction to formate with NADPH and reduced ferredoxin in the Wood-Ljungdahl pathway, and to catalyze H2 formation from NADPH and reduced ferredoxin when these redox mediators become too reduced during unbalanced growth of C. autoethanogenum on CO (E0′ = −520 mV).
Network modeling has proven to be a fundamental tool in analyzing the inner workings of a cell. It has revolutionized our understanding of biological processes and made significant contributions to the discovery of disease biomarkers. Much effort has been devoted to reconstructing various types of biochemical networks using functional genomic datasets generated by high-throughput technologies. This paper discusses statistical methods for reconstructing gene regulatory networks from gene expression data. In particular, we highlight progress made and challenges yet to be met in estimating gene interactions, inferring causality, and modeling temporal changes in regulatory behavior. As rapid advances in technology have made diverse, large-scale genomic data available, we also survey methods for incorporating these additional data to achieve better, more accurate inference of gene networks.
•We review statistical methods for reconstructing gene regulatory networks.
•We discuss statistical and computational challenges in modeling gene interactions.
•For each method, we compare its modeling paradigm and the data types required.
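As a concrete, deliberately simple instance of the estimation problem surveyed above, a co-expression network can be reconstructed by thresholding pairwise correlations between gene expression profiles. This toy sketch is an illustration only, not one of the specific methods reviewed; the data and the 0.8 threshold are invented:

```python
import numpy as np

def correlation_network(expr, threshold=0.8):
    """Connect gene pairs whose expression profiles correlate strongly.
    expr: genes x samples matrix; returns (gene_i, gene_j, r) edges."""
    corr = np.corrcoef(expr)                 # gene-by-gene correlation matrix
    edges = []
    n = expr.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                edges.append((i, j, float(corr[i, j])))
    return edges

# toy data: genes 0 and 1 co-regulated, gene 2 independent noise
rng = np.random.default_rng(0)
g0 = rng.normal(size=50)
expr = np.vstack([g0, g0 + 0.1 * rng.normal(size=50), rng.normal(size=50)])
edges = correlation_network(expr)            # recovers the (0, 1) edge
```

Correlation edges are of course undirected and say nothing about causality, which is exactly the gap the causal-inference methods discussed above aim to close.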
This single-center study aimed to determine the effective dose and safety of remimazolam besylate for the sedation of postoperative patients undergoing invasive mechanical ventilation in the intensive care unit (ICU). Mechanically ventilated patients admitted to the ICU after surgery were included. The Narcotrend index (NTI) was used to assess the depth of sedation, and the Richmond Agitation-Sedation Scale (RASS) score was also recorded. Remimazolam besylate was administered initially at a loading dose of 0.02 mg/kg, increased by 0.005 mg/kg each time until the targeted depth of sedation was achieved (NTI 65-94). A maintenance dose of remimazolam besylate was then administered starting at 0.2 mg/kg/h and adjusted in increments or decrements of 0.05 mg/kg/h until a satisfactory depth of sedation was achieved and maintained for at least 30 min. Demographic data, anesthesia, surgery types, and hemodynamic and respiratory parameters were recorded. Adverse events and adverse drug reactions were monitored for safety. Twenty-three patients were eventually included in this study over a period of 1 year. A satisfactory depth of sedation was achieved by a single intravenous infusion of remimazolam besylate at a loading dose of 0.02-0.05 mg/kg followed by a maintenance dose of 0.20-0.35 mg/kg/h. There were no significant changes in hemodynamic or respiratory parameters within 10 min after the administration of remimazolam besylate. In addition, a significant correlation was observed between the NTI and the RASS score for assessing sedation (r = 0.721, P < 0.001). The NTI showed a predictive probability of 0.817 for the RASS score. Remimazolam besylate was effective for mild/moderate sedation of invasively mechanically ventilated postoperative patients in the ICU while maintaining excellent respiratory and hemodynamic stability. The NTI can be used as a good tool for the objective evaluation of the depth of sedation and agitation.
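The titration protocol above implies a fixed escalation ladder for the loading dose. A trivial sketch of that ladder; note the 0.05 mg/kg ceiling is taken from the observed effective range, not from a stated protocol maximum:

```python
def loading_dose_ladder(start=0.02, step=0.005, ceiling=0.05):
    """Loading-dose escalation steps (mg/kg): start at 0.02 mg/kg and add
    0.005 mg/kg per step until the sedation target (NTI 65-94) is reached;
    the ceiling here reflects the highest effective loading dose observed."""
    doses, d = [], start
    while d <= ceiling + 1e-9:       # tolerance guards against float drift
        doses.append(round(d, 3))
        d += step
    return doses

# → [0.02, 0.025, 0.03, 0.035, 0.04, 0.045, 0.05]
```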
The transcription factor ZNF143 contains a central domain of seven zinc fingers in a tandem array and is involved in 3D genome construction. However, the mechanism by which ZNF143 functions in chromatin looping remains unclear. Here, we show that ZNF143 directionally recognizes a diverse range of genomic sites directly within enhancers and promoters and is required for chromatin looping between these sites. In addition, ZNF143 is located between CTCF and cohesin at numerous CTCF sites, and ZNF143 removal narrows the space between CTCF and cohesin. Moreover, genetic deletion of ZNF143, in conjunction with acute CTCF degradation, reveals that ZNF143 and CTCF collaborate to regulate higher-order topological chromatin organization. Finally, CTCF depletion enlarges direct ZNF143 chromatin loops. Thus, ZNF143 is recruited by CTCF to CTCF sites to regulate CTCF/cohesin configuration and TAD (topologically associating domain) formation, whereas directional recognition of genomic DNA motifs by ZNF143 itself regulates promoter activity via chromatin looping.
•ZNF143 recognizes cognate SBS elements in promoters and enhancers in an anti-parallel manner
•ZNF143 deletion weakens the strength of chromatin loops between promoters and/or enhancers
•Removal of ZNF143 alters 3D genome compartments and narrows the space between CTCF and cohesin
•Acute CTCF degradation results in larger-sized ZNF143 loops
Zhang et al. show that the localization of ZNF143 between CTCF and cohesin helps CTCF block cohesin loop extrusion at CBS anchor sites. ZNF143 also regulates long-distance chromatin looping between promoters and/or enhancers via directional recognition of SBS elements in an anti-parallel manner.