We present in this paper alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most O(1/ε) iterations to obtain an ε-optimal solution, while our accelerated (i.e., fast) versions of them require at most O(1/√ε) iterations, with little change in the computational effort required at each iteration. For both types of methods, we present one algorithm that requires both functions to be smooth with Lipschitz continuous gradients and one algorithm that needs only one of the functions to be so. Algorithms in this paper are Gauss-Seidel type methods, in contrast to the ones proposed by Goldfarb and Ma in (Fast multiple splitting algorithms for convex optimization, Columbia University, 2009), where the algorithms are Jacobi type methods. Numerical results are reported to support our theoretical conclusions and demonstrate the practical potential of our algorithms.
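The basic alternating linearization idea can be illustrated on a toy problem: at each iteration, one function is linearized at the current point and a proximal step is taken on the other, then the roles are swapped. The sketch below is a minimal illustration, not the paper's algorithm; the quadratic choices f(x) = ½‖Ax − b‖² and g(x) = (λ/2)‖x‖² and their closed-form proximal maps are assumptions made for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1

# f(x) = 0.5*||Ax - b||^2 and g(x) = 0.5*lam*||x||^2, both smooth
grad_f = lambda x: A.T @ (A @ x - b)
grad_g = lambda x: lam * x

mu = 1.0 / np.linalg.norm(A.T @ A, 2)   # step size <= 1 / Lipschitz(grad f)

def prox_f(v, mu):
    # argmin_x f(x) + ||x - v||^2 / (2*mu), closed form for least squares
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + mu * (A.T @ A), v + mu * (A.T @ b))

def prox_g(v, mu):
    # argmin_x g(x) + ||x - v||^2 / (2*mu)
    return v / (1.0 + mu * lam)

x = np.zeros(5)
y = np.zeros(5)
for _ in range(2000):
    x = prox_g(y - mu * grad_f(y), mu)   # linearize f at y, prox step on g
    y = prox_f(x - mu * grad_g(x), mu)   # linearize g at x, prox step on f

# exact minimizer of f + g for comparison
x_star = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)
print(np.linalg.norm(y - x_star))
```

Because both subproblems here are quadratic, each proximal map is available in closed form; in general the per-iteration cost depends on how cheaply these two subproblems can be solved.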
A nonconformal hybrid finite difference time domain (FDTD)/finite element time domain (FETD) method was previously introduced, which implemented the hybridization through a buffer zone. Although this method has been demonstrated to be accurate and long-time stable, further efforts are still desirable to remove the buffer zone and to implement an implicit-explicit time integration from the perspective of practical applications. In this paper, a novel hybrid method is proposed, which not only successfully eliminates the necessity of the buffer zone without compromising the featured advantage (e.g., nonconformal mesh) but also effectively applies an implicit-explicit time integration scheme to improve the computational efficiency. Furthermore, the new method extends the hybridization to a broader level by incorporating the spectral element time domain (SETD) method based on the discontinuous Galerkin and domain decomposition techniques, resulting in a more general hybrid FDTD/SETD/FETD framework. The framework employs the explicit leapfrog time integration for the FDTD region while it employs the implicit Crank-Nicolson time integration for the FETD region. For the SETD region, either implicit or explicit time integration can be employed, depending on the mesh sizes in it. When the implicit region becomes large, it can be further split into multiple subdomains to reduce computational complexity. Numerical examples are included to demonstrate the performance of the proposed hybrid method, which is accurate, long-time stable and more efficient than the hybrid method with a buffer zone.
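For the FDTD region, the explicit leapfrog scheme staggers electric and magnetic field updates by half a time step. A minimal one-dimensional sketch in normalized units (illustrative only; the hybrid framework described above is three-dimensional and couples FDTD with SETD/FETD regions):

```python
import numpy as np

# 1D vacuum FDTD with explicit leapfrog time integration.
nx, nt = 200, 300
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                 # Courant-stable step (CFL number 0.5 <= 1)
Ez = np.zeros(nx)                 # E nodes at integer grid points
Hy = np.zeros(nx - 1)             # H nodes staggered half a cell

for n in range(nt):
    Hy += dt / dx * (Ez[1:] - Ez[:-1])        # H update at half time steps
    Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])  # E update at integer time steps
    Ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

print(np.abs(Ez).max())
```

Each field is advanced explicitly from the other, so no linear system is solved; this is the efficiency advantage of the leapfrog region over the implicit Crank-Nicolson region, at the price of the CFL stability limit on dt.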
•Synthesis of metal (Ag, Au, and Pt) and metal oxide (Cu2O, CuO, γ-Fe2O3, ZnO, ZnO-GS, anatase-TiO2, and rutile-TiO2) nanoparticles (NPs).
•Surface analysis of metal and metal oxide NPs.
•Influence of surface properties and surface evolution of metal and metal oxide NPs on the SERS signal; SERS enhancement factor.
This work describes biologically important nanostructures of metals (AgNPs, AuNPs, and PtNPs) and metal oxides (Cu2ONPs, CuONSs, γ-Fe2O3NPs, ZnONPs, ZnONPs-GS, anatase-TiO2NPs, and rutile-TiO2NPs) synthesized by different methods (wet-chemical, electrochemical, and green-chemistry methods). The nanostructures were characterized by microscopic, diffraction, and spectroscopic methods, including scanning/transmission electron microscopy (SEM/TEM), energy dispersive X-ray spectroscopy (EDS), X-ray diffraction analysis (XRD), X-ray photoelectron spectroscopy (XPS), ultraviolet–visible spectroscopy (UV–vis), dynamic light scattering (DLS), Raman scattering spectroscopy (RS), and infrared spectroscopy (IR). Then, a peptide (bombesin, BN) was adsorbed onto the surface of these nanostructures from an aqueous solution at pH 7 that did not contain surfactants. Adsorption was monitored using surface-enhanced Raman scattering spectroscopy (SERS) to determine the influence of the nature of the metal surface and its evolution on peptide geometry. Information from the SERS studies was compared with information on the biological activity of the peptide. The SERS enhancement factor was determined for each of the metallic surfaces.
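A commonly used definition of the SERS enhancement factor compares the per-molecule SERS signal with the per-molecule normal Raman signal. The numbers below are invented purely to show the arithmetic; they are not the paper's measured intensities or molecule counts:

```python
# EF = (I_SERS / N_surf) / (I_RS / N_bulk): per-molecule signal on the
# nanostructure relative to the per-molecule normal Raman signal.
# All values are hypothetical placeholders.
I_sers, N_surf = 2.0e5, 1.0e6     # SERS band intensity, adsorbed molecules
I_rs, N_bulk = 1.0e3, 1.0e12      # normal Raman intensity, probed molecules

EF = (I_sers / N_surf) / (I_rs / N_bulk)
print(f"{EF:.1e}")   # -> 2.0e+08
```

The difficulty in practice lies in estimating N_surf, the number of molecules actually contributing to the SERS signal, which is why reported enhancement factors vary widely between surfaces.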
Efficient multiscale electromagnetic simulations pose several major challenges that need to be addressed, such as flexible and robust geometric modeling schemes and efficient and stable time-stepping methods. Due to the versatile choices of spatial discretization and temporal integration, discontinuous Galerkin time-domain (DGTD) methods can be very promising in simulating transient multiscale problems. This paper provides a comprehensive review of different DGTD schemes, highlighting the fundamental issues arising in each step of constructing a DGTD system. The issues discussed include the selection of governing equations for transient electromagnetic analysis, different basis functions for spatial discretization, as well as the implementation of different time-stepping schemes. Numerical examples demonstrate the advantages of DGTD for multiscale electromagnetic simulations.
Pollutants such as human pharmaceuticals and synthetic hormones that are not covered by environmental legislation have increasingly become important emerging aquatic contaminants. This paper reports the development of a sensitive and selective multi-residue method for the simultaneous determination and quantification of 23 pharmaceuticals and synthetic hormones from different therapeutic classes in water samples. Target pharmaceuticals include anti-diabetic, antihypertensive, and hypolipidemic agents, a β2-adrenergic receptor agonist, an antihistamine, an analgesic, and sex hormones. The developed method is based on solid phase extraction (SPE) followed by instrumental analysis using liquid chromatography-electrospray ionization-tandem mass spectrometry (LC–ESI-MS/MS) with a 30 min total run time. River water samples (150 mL) and sewage treatment plant (STP) effluents (100 mL), adjusted to pH 2, were loaded onto MCX (3 cm3, 60 mg) cartridges and eluted with four different reagents for maximum recovery. Quantification was achieved using eight isotopically labeled internal standards (I.S.) that effectively correct for losses during sample preparation and matrix effects during LC–ESI-MS/MS analysis. Recoveries higher than 70% were obtained for most target analytes in all matrices. Method detection limits (MDL) ranged from 0.2 to 281 ng/L. The developed method was applied to determine the levels of target analytes in various samples, including river water and STP effluents. Among the tested emerging pollutants, chlorothiazide was found at the highest level, with concentrations reaching up to 865 ng/L in STP effluent and 182 ng/L in river water.
This study presents a parallel meshless solver for transient heat conduction analysis of slender functionally graded materials (FGMs) with exponential variations. In the present parallel meshless solver, a strong-form boundary collocation method, the boundary knot method (BKM), in conjunction with the Laplace transform, is implemented to solve the heat conduction equations of slender FGMs with exponential variations. This method is mathematically simple, easy to parallelize, meshless, and free of domain discretization. However, two ill-posed issues, the ill-conditioned dense BKM matrix and the numerical inverse Laplace transform process, may lead to incorrect numerical results. Here, extended precision arithmetic (EPA) and the domain decomposition method (DDM) are adopted to alleviate the effect of these two ill-posed issues on the numerical efficiency of the present method. A parallel algorithm is then employed to significantly reduce the computational cost and enhance the computational capacity for FGM structures with large length-to-width ratios. To demonstrate the effectiveness of the present parallel meshless solver for transient heat conduction analysis, several benchmark examples of slender FGMs with exponential variations are considered. The present results are compared with analytical solutions, the conventional boundary knot method, and COMSOL simulations.
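The numerical inverse Laplace transform step is a classic ill-posed operation. In the Gaver-Stehfest method sketched below, for instance, the weights V_k alternate in sign and grow rapidly with N, so roundoff is amplified in double precision, which is why extended precision arithmetic helps. This is a generic textbook method, not necessarily the inversion scheme used in the paper:

```python
import math

def stehfest_coeffs(N):
    # Gaver-Stehfest weights; N must be even. The weights alternate in sign
    # and grow quickly with N, which is the source of the ill-conditioning.
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert(F, t, N=12):
    # f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# sanity check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
t = 1.0
approx = invert(lambda s: 1.0 / (s + 1.0), t)
print(approx, math.exp(-t))
```

Raising N should improve accuracy in exact arithmetic, but in double precision the cancellation among the huge alternating weights eventually dominates, so N is typically kept modest unless extended precision is used.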
The National Institutes of Health (NIH)-funded Diversity Program Consortium (DPC) includes a Coordination and Evaluation Center (CEC) to conduct a longitudinal evaluation of the two signature national NIH initiatives - the Building Infrastructure Leading to Diversity (BUILD) and the National Research Mentoring Network (NRMN) programs - designed to promote diversity in the NIH-funded biomedical, behavioral, clinical, and social sciences research workforce. Evaluation is central to understanding the impact of the consortium activities. This article reviews the role and function of the CEC and the collaborative processes and achievements critical to establishing empirical evidence regarding the efficacy of federally-funded, quasi-experimental interventions across multiple sites. The integrated DPC evaluation is particularly significant because it rests on a collaboratively developed Consortium Wide Evaluation Plan and is the first hypothesis-driven, large-scale, systemic, national longitudinal evaluation of training programs in the history of the NIH/National Institute of General Medical Sciences.
To guide the longitudinal evaluation, the CEC-led literature review defined key indicators at critical training and career transition points - or Hallmarks of Success. The multidimensional, comprehensive evaluation of the impact of the DPC framed by these Hallmarks is described. This evaluation uses both established and newly developed common measures across sites, and rigorous quasi-experimental designs within a novel multi-method (qualitative and quantitative) framework. The CEC also promotes shared learning among Consortium partners through working groups and provides technical assistance to support high-quality internal process and outcome evaluation of each program. Finally, the CEC is responsible for developing high-impact dissemination channels for best practices to inform peer institutions, NIH, and other key national and international stakeholders.
A strong longitudinal evaluation across programs allows summative assessment of outcomes, provides an understanding of the factors common to interventions that do and do not lead to success, and elucidates the processes developed for data collection and management. This will provide a framework for the assessment of other training programs and will have national implications for transforming biomedical research training.
The single-station microtremor horizontal-to-vertical spectral ratio (MHVSR) method was initially proposed to retrieve the site amplification function and its resonance frequencies produced by unconsolidated sediments overlying high-velocity bedrock. Presently, MHVSR measurements are predominantly conducted to obtain an estimate of the fundamental site frequency at sites where a strong subsurface impedance contrast exists. Of the earthquake site characterization methods presented in this special issue, the MHVSR method is the furthest behind in terms of consensus towards standardized guidelines and commercial use. The greatest challenges to an international standardization of MHVSR acquisition and analysis are (1) the "what": the underlying composition of the microtremor wavefield is site-dependent, and thus, the appropriate theoretical (forward) model for inversion is still debated; and (2) the "how": many factors and options are involved in the data acquisition, processing, and interpretation stages. This paper briefly reviews the historical development of the MHVSR technique and the physical basis of an MHVSR (the "what"). We then summarize recommendations for MHVSR acquisition and analysis (the "how"). Specific sections address MHVSR interpretation and uncertainty assessment.
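The core MHVSR computation divides a smoothed combined horizontal amplitude spectrum by the smoothed vertical spectrum and picks the peak. A toy sketch on synthetic noise (real processing adds time-window selection, tapering, and Konno-Ohmachi smoothing, none of which are shown here):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, npts = 100.0, 2 ** 14                 # 100 Hz sampling, ~164 s record
t = np.arange(npts) / fs

# horizontals carry a synthetic 2 Hz "site resonance"; vertical is plain noise
res = lambda: 5.0 * np.sin(2 * np.pi * 2.0 * t + rng.uniform(0, 2 * np.pi))
ew = rng.standard_normal(npts) + res()
ns = rng.standard_normal(npts) + res()
ud = rng.standard_normal(npts)

freq = np.fft.rfftfreq(npts, 1.0 / fs)
amp = lambda x: np.abs(np.fft.rfft(x))

def smooth(spec, w=51):
    # crude boxcar smoothing as a stand-in for Konno-Ohmachi smoothing
    return np.convolve(spec, np.ones(w) / w, mode="same")

H = np.sqrt(smooth(amp(ew)) * smooth(amp(ns)))  # geometric mean of horizontals
hv = H / smooth(amp(ud))                        # the H/V spectral ratio

band = (freq > 0.5) & (freq < 10.0)
f0 = freq[band][np.argmax(hv[band])]            # fundamental site frequency
print(f0)
```

Away from the resonance the ratio hovers near one, and the peak frequency recovers the synthetic 2 Hz resonance; the "how" debates in the text concern exactly the choices this sketch glosses over (window selection, smoothing operator, horizontal combination).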
Background: Geospatial linked data brings into the scope of the Semantic Web and its technologies a wealth of datasets that combine semantically rich descriptions of resources with their geo-location. There are, however, various Semantic Web technologies where technical work is needed in order to achieve the full integration of geospatial data, and federated query processing is one of these technologies.
Methods: In this paper, we explore the idea of annotating data sources with a bounding polygon that summarizes the spatial extent of the resources in each data source, and of using such a summary as an (additional) source selection criterion in order to reduce the set of sources that will be tested as potentially holding relevant data. We present our source selection method, and we discuss its correctness and implementation.
Results: We evaluate the proposed source selection using three different types of summaries with different degrees of accuracy, against not using geospatial summaries. We use datasets and queries from a practical use case that combines crop-type data with water availability data for food security. The experimental results suggest that more complex summaries lead to slower source selection times, but also to more precise exclusion of unneeded sources. Moreover, we observe that the source selection runtime is (partially or fully) recovered by shorter planning and execution runtimes. As a result, the federated sources are not burdened by pointless querying from the federation engine.
Conclusions: The evaluation draws on data and queries from the agroenvironmental domain and shows that our source selection method substantially improves the effectiveness of federated GeoSPARQL query processing.
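The source selection idea can be sketched with axis-aligned bounding boxes standing in for the bounding polygons: a source whose spatial summary is disjoint from the query's extent cannot contribute answers and is skipped before any subquery is sent. All names and coordinates below are illustrative assumptions, not the paper's API or datasets:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BBox:
    # axis-aligned bounding box, the crudest possible "bounding polygon"
    min_x: float
    min_y: float
    max_x: float
    max_y: float

    def intersects(self, other: "BBox") -> bool:
        return (self.min_x <= other.max_x and other.min_x <= self.max_x and
                self.min_y <= other.max_y and other.min_y <= self.max_y)

def select_sources(sources: dict[str, BBox], query_extent: BBox) -> list[str]:
    # keep only sources whose summary overlaps the query's spatial extent;
    # disjoint sources cannot hold relevant data and are never queried
    return [name for name, box in sources.items()
            if box.intersects(query_extent)]

sources = {
    "crops_nl": BBox(3.3, 50.7, 7.2, 53.6),    # roughly the Netherlands
    "water_es": BBox(-9.4, 36.0, 3.4, 43.8),   # roughly Spain
}
print(select_sources(sources, BBox(4.0, 51.0, 6.0, 52.0)))  # -> ['crops_nl']
```

The trade-off reported in the Results section shows up directly here: a tighter summary (an actual polygon rather than a box) excludes more sources but makes the intersection test, and hence source selection itself, more expensive.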
In this paper, the variational multiscale interpolating element-free Galerkin (VMIEFG) method is developed to obtain the numerical solution of the nonlinear Darcy–Forchheimer model. We use the interpolating moving least squares method instead of the moving least squares approximation to construct meshless shape functions with delta function properties, so that the flux boundary condition of the Darcy–Forchheimer model can be handled easily. Hughes' variational multiscale (HVM) method is applied to overcome the numerical oscillation caused by equal-order bases for the velocity and pressure. Moreover, the HVM ensures that the resulting formulation in the VMIEFG method is consistent and that the stabilization parameter (or tensor) appears naturally; consequently, the stabilization parameter requires no user-defined tuning. The fixed point iteration method is used to deal with the nonlinear term. Some numerical examples are provided to illustrate the stability and performance of the proposed method for solving the nonlinear Darcy–Forchheimer model.
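The fixed point (Picard) treatment of the nonlinear term can be illustrated on a scalar Darcy-Forchheimer-type relation a*u + b*u*|u| = g: the Forchheimer term is evaluated at the previous iterate, leaving a trivial linear solve at each step. This toy example is a sketch under made-up coefficients, not the paper's discretized system:

```python
# Picard iteration for a*u + b*u*|u| = g, linearizing the quadratic
# Forchheimer drag with the previous iterate: u_{k+1} = g / (a + b*|u_k|).
a, b, g = 1.0, 0.5, 3.0   # hypothetical Darcy and Forchheimer coefficients
u = 0.0
for _ in range(100):
    u_new = g / (a + b * abs(u))
    if abs(u_new - u) < 1e-12:
        break
    u = u_new

residual = a * u + b * u * abs(u) - g
print(u, residual)
```

In the full VMIEFG method the same idea applies vector-wise: each Picard step freezes the |u| factor from the previous iterate and solves the resulting linear stabilized system until the velocity and pressure fields stop changing.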