Sensitivity analysis (SA) is en route to becoming an integral part of mathematical modeling. The tremendous potential benefits of SA are, however, yet to be fully realized, both for advancing mechanistic and data-driven modeling of human and natural systems, and in support of decision making. In this perspective paper, a multidisciplinary group of researchers and practitioners revisit the current status of SA, and outline research challenges in regard to both theoretical frameworks and their applications to solve real-world problems. Six areas are discussed that warrant further attention, including (1) structuring and standardizing SA as a discipline, (2) realizing the untapped potential of SA for systems modeling, (3) addressing the computational burden of SA, (4) progressing SA in the context of machine learning, (5) clarifying the relationship and role of SA to uncertainty quantification, and (6) evolving the use of SA in support of decision making. An outlook for the future of SA is provided that underlines how SA must underpin a wide variety of activities to better serve science and society.
•Sensitivity analysis (SA) should be promoted as an independent discipline.
•Several grand challenges hinder full realization of the benefits of SA.
•The potential of SA for systems modeling & machine learning is untapped.
•New prospects exist for SA to support uncertainty quantification & decision making.
•Coordination rather than consensus is key to cross-fertilizing new ideas.
•Calibration on the full time series is shown to be more robust than split-sample methods.
•30 bootstrapping tests on 6 cases provide evidence that this method is optimal.
•Verification on 10 independent catchment-model pairs supports the conclusions.
•Caveats of split-sampling for model performance are demonstrated.
•Parameter set robustness increases with the length of the calibration period.
This paper investigates issues related to the use of validation in hydrological model calibration. Traditionally, models are calibrated and then assessed on an independent period (split-sample) to determine their adequacy in simulating streamflow as compared to observations. In this study, two hydrological models and three North American catchments are used to evaluate how using validation to assess the robustness of model parameters affects the model's actual simulation capabilities and accuracy in simulating streamflow. The length of the calibration period is increased from 1 to 16 years, and for each case a large number of randomly selected combinations of years are used for calibration and validation with the Nash-Sutcliffe Efficiency metric. The calibrated model is then run on an independent 8-year test period to assess the model's actual performance in simulation mode under unknown conditions. The process is bootstrapped 30 times to ensure the robustness of the results. The tests pit calibration/validation methods over increasing calibration period lengths against a full calibration on the entire available dataset. Results show that calibration on the full dataset is the optimal strategy, as it generates the most robust parameter sets, provides the best model accuracy on an independent testing period, and requires no assumptions on the modeler's part. The calibrated parameter sets for each test case were also evaluated using the relative bias and correlation metrics, which revealed that the method transfers well to these two other metrics. Results also demonstrate the pitfalls of the commonly used split-sampling strategy, where good parameter sets may be discarded due to model performance discrepancies between the calibration and validation periods. The conclusions point to the need to use as many years as possible in the calibration step and, under certain conditions, to disregard the validation step entirely.
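The Nash-Sutcliffe Efficiency and relative bias metrics referenced in this abstract are standard streamflow skill scores. A minimal sketch of both, using only NumPy (the data values are illustrative, not from the study):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    is no better than the mean of the observations; negative is worse."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual_ss = np.sum((observed - simulated) ** 2)
    variance_ss = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual_ss / variance_ss

def relative_bias(observed, simulated):
    """Relative bias: 0 indicates no systematic over- or underestimation."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return (simulated.sum() - observed.sum()) / observed.sum()

# Illustrative daily flows (m^3/s)
obs = [12.0, 15.0, 30.0, 22.0, 18.0]
sim = [11.0, 16.0, 27.0, 24.0, 17.0]
print(nse(obs, sim), relative_bias(obs, sim))
```

Simulating the mean of the observations yields an NSE of exactly 0, which is why NSE > 0 is often treated as the minimal bar for a useful calibration.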
We validated seasonal RayMan and ENVI-met mean radiant temperature (TMRT) simulations to assess model performance in a sensitivity analysis from cold to extremely hot conditions. Human-biometeorological validation data were collected in Tempe, Arizona via transects during five field campaigns between 2014 and 2017. Transects were conducted across seven locations in two to three-hour intervals from 6:00 to 23:00 LST with a Kestrel meter and thermal camera (2014–2015) and the mobile instrument platform MaRTy (2017). Observations across diverse urban forms, sky view factors, and seasons covered a wide range of solar radiation regimes from a minimum TMRT of 8.7 °C to a maximum of 84.9 °C. Both models produced large simulation errors across regimes with RMSE ranging from 8 °C to 12 °C (RayMan) and 11.2 °C to 16.1 °C (ENVI-met), exceeding a suggested TMRT accuracy of ±5 °C for heat stress studies. RayMan model errors were largest for engineered enclosed spaces, complex urban forms, and extreme heat conditions. ENVI-met was unable to resolve intra-domain spatial variability of TMRT and exhibited large errors with RMSE up to 25.5 °C for engineered shade. Both models failed to accurately simulate TMRT for hot conditions. Errors varied seasonally with overestimated TMRT in the summer and underestimated TMRT in the winter and shoulder seasons. Results demonstrate that models should not be used under micrometeorological or morphological extremes without in-situ validation to quantify errors and assess directional bias due to model limitations.
•RayMan and ENVI-met fail to accurately model TMRT under extreme heat.
•Seasonal TMRT errors exceed the suggested accuracy of ±5 °C for heat stress studies.
•Models show a seasonal swing in RMSE, with increased errors under high solar radiation.
•RayMan struggled most with engineered enclosed spaces and complex urban forms.
•ENVI-met struggled with engineered shade and did not resolve spatial variability.
This study presents the development of an in-situ flexure test for imaging progressive inter- and intralaminar fracture in tape-laminate composites using X-ray computed tomography (CT). The intent of this test is to provide detailed experimental observations of ply-level damage that can be used to validate existing, and develop new, progressive damage analysis (PDA) tools. The test consists of a vertically mounted specimen which is flexed by two eccentric compressive loads using an in-situ uniaxial load stage. The flexure specimen contains a starter notch at mid-span which promotes initiation of composite failure within the X-ray field of view. Specimens with two different laminate stacking sequences were tested, imaged, and analyzed. For a quasi-isotropic laminate with large angle changes between adjacent plies, there was near-simultaneous growth of transverse cracks and delaminations below the midplane of the laminate. For a laminate with small angles between adjacent plies, there was extensive formation of transverse crack networks which penetrated the laminate thickness without delamination growth. In addition to imaging fracture, the X-ray CT data from both specimen types were used to quantify the variability in thickness and analyze the local orientation of individual plies. Overall, the proposed test and the image-data analysis methodology provided important insight into the fracture processes in tape laminates and highlighted the inherent ply-level geometrical variabilities that should be accounted for in PDA simulations.
Adsorption for water and wastewater treatment has been the subject of much research in the scientific community, focusing mainly on either equilibrium or kinetic studies. Adsorption kinetics are commonly modeled using pseudo-first-order and pseudo-second-order rate laws. Analyses of works published in the past two decades indicate that the pseudo-second-order model is considered superior, as it can represent many adsorption systems. However, critical assessment of modeling techniques and practices suggests that this superiority could be a consequence of currently accepted modeling norms, which tend to favor the pseudo-second-order model. The partiality is due to several modeling pitfalls that are often neglected. In addition, commonly used model validation tools are often applied haphazardly and redundantly; as such, they cannot sufficiently establish the validity of a model. To eliminate modeling bias, a new validation method is proposed and then employed to re-examine previously published adsorption kinetic data.
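One pitfall the adsorption-kinetics literature often flags is fitting the linearized forms of these rate laws rather than the original nonlinear ones, which distorts the error structure and tends to favor the pseudo-second-order model. A minimal sketch of nonlinear fitting of both integrated rate laws with SciPy, on hypothetical uptake data (not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    # Integrated pseudo-first-order law: q(t) = qe * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    # Integrated pseudo-second-order law: q(t) = qe^2 * k2 * t / (1 + qe * k2 * t)
    return qe**2 * k2 * t / (1.0 + qe * k2 * t)

# Hypothetical uptake data: t in min, q in mg/g (for illustration only)
t = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 120.0])
q = np.array([0.0, 4.2, 6.8, 9.0, 10.4, 10.8, 11.0])

for name, model in [("PFO", pfo), ("PSO", pso)]:
    popt, _ = curve_fit(model, t, q, p0=[q.max(), 0.05], maxfev=10000)
    rss = np.sum((q - model(t, *popt)) ** 2)
    print(f"{name}: qe={popt[0]:.2f}, k={popt[1]:.4f}, RSS={rss:.3f}")
```

Comparing models on the residual sum of squares of the nonlinear fits (or on information criteria) avoids the bias introduced by transforming the data before regression.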
As any model of real-world phenomena, soil erosion models must be tested against empirical evidence to have their performance evaluated. This is critical to develop knowledge and confidence in model predictions. However, evaluating soil erosion models is complicated due to the uncertainties involved in the estimation of model parameters and measurements of system responses. Here, we undertake a term co-occurrence analysis to investigate how model evaluation is approached in soil erosion research. The analysis illustrates how model testing is often neglected, and how model evaluation topics are segregated from current research interests. We perform a meta-analysis of model performance to understand the mechanisms that influence model predictive accuracy. Results indicate that different models do not systematically outperform each other, and that calibration seems to be the main mechanism of model improvement. We review how soil erosion models have been evaluated at different temporal and spatial scales, focusing on the methods, assumptions, and data used for model testing. We discuss the implications of uncertainty and equifinality in soil erosion models, and implement a case study of uncertainty assessment that enables models to be tested as hypotheses. A comment on the way forward for the evaluation of erosion models is presented, discussing philosophical aspects of hypothesis testing in environmental modelling. We refute the notion that soil erosion models can be validated, and emphasize the necessity of defining fit-for-purpose tests, based on multiple sources of data, that allow for a broad investigation of model usefulness and consistency.
Operational oceanography is maturing rapidly. Its capabilities are being noticeably enhanced in response to a growing demand for regularly updated ocean information. Today, several core forecasting and monitoring services, such as the Copernicus Marine services focused on global and regional scales, are well established. The sustained availability of oceanographic products has favored the proliferation of specific downstream services devoted to coastal monitoring and forecasting. Ocean models are a key component of these operational oceanographic systems (especially in a context marked by the extensive application of dynamical downscaling approaches), and progress in ocean modeling is certainly a driver for the evolution of these services. The goal of this Special Issue is to publish research papers on ocean modeling that benefit model applications supporting existing operational oceanographic services. This Special Issue is addressed to an audience with interests in physical oceanography, especially its operational applications. There is a focus on the numerical modeling needed for better forecasts in marine environments and on seamless modeling approaches to simulate global-to-coastal processes.
Lean Startup: a comprehensive historical review
Bortolini, Rafael Fazzi; Nogueira Cortimiglia, Marcelo; Danilevicz, Angela de Moura Ferreira ...
Management Decision, 08/2021, Volume 59, Issue 8
Journal Article · Peer reviewed · Open access
Purpose
The primary goal of a startup is to find a viable business model that can generate value for its customers while being effectively captured by the startup itself. This business model, however, is not easily defined, being a consequence of the application of tools involving trials, data analyses and testing. The Lean Startup (LS) methodology proposes a process for agile and iterative validation of business models. Given the popularity and importance of such methodology in professional circles, the purpose of this paper is to conduct a historical literature review of existing academic and professional literature, correlating LS concepts and activities to previous theory and alternative business model validation methods.
Design/methodology/approach
A historically oriented systematic literature review employing snowball sampling was conducted in order to identify academic and professional literature and references for iterative validation of business models. A total of 12 scholarly journals and professional magazines dealing with strategy, innovation, entrepreneurship, startups and management were used as data sources. The extensive literature review resulted in 963 exploratory readings and 118 papers fully analyzed.
Findings
The results position the LS as a practical-oriented and up-to-date implementation of strategies based on the Learning School of strategy making and the effectuation approach to entrepreneurship; the authors also identify a number of methods and tools that can complement the LS principles.
Originality/value
This paper identified and synthesized the scientific, academic and professional foundations that precede, support and complement the main concepts, processes and methods advocated by the LS methodology.
Many proteins bind transition metal ions as cofactors to carry out their biological functions. Despite binding affinities for divalent transition metal ions being predominantly dictated by the Irving-Williams series for wild-type proteins, in vivo metal ion binding specificity is ensured by intracellular mechanisms that regulate free metal ion concentrations. However, a growing area of biotechnology research considers the use of metal-binding proteins in vitro to purify specific metal ions from wastewater, where specificity is dictated by the protein's metal binding affinities. A goal of metalloprotein engineering is to modulate these affinities to improve a protein's specificity towards a particular metal; however, the quantitative relationship between the affinities and the equilibrium metal-bound protein fractions depends on the underlying binding mechanisms. Here we demonstrate a high-throughput intrinsic tryptophan fluorescence quenching method to validate binding models in multi-metal solutions for CcNikZ-II, a nickel-binding protein from Clostridium carboxidivorans. Using our validated models, we quantify the relationship between binding affinity and specificity in different classes of metal-binding models for CcNikZ-II. We further illustrate the potential relevance of data-informed models to predicting engineering targets for improved specificity.
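The link between binding affinities and equilibrium metal-bound fractions can be illustrated with the simplest competitive case: two metals competing for a single 1:1 site. This sketch is not the paper's model for CcNikZ-II; it assumes independent competitive binding with metals in excess (free ≈ total concentration), and the Kd values are purely illustrative:

```python
def bound_fractions(kd, free_metal):
    """Equilibrium site occupancies for 1:1 competitive binding at one site.

    kd, free_metal: dicts mapping metal name -> dissociation constant and
    free concentration (same units). Assumes metals are in excess, so the
    free concentrations are not depleted by binding.
    """
    # Occupancy follows a multi-ligand Langmuir isotherm:
    # f_i = ([M_i]/Kd_i) / (1 + sum_j [M_j]/Kd_j)
    ratios = {m: free_metal[m] / kd[m] for m in kd}
    denom = 1.0 + sum(ratios.values())
    return {m: r / denom for m, r in ratios.items()}

# Hypothetical affinities following the Irving-Williams trend (Ni > Co)
fractions = bound_fractions(
    kd={"Ni": 1e-8, "Co": 1e-6},          # mol/L, illustrative values
    free_metal={"Ni": 1e-7, "Co": 1e-7},  # equimolar challenge
)
print(fractions)
```

Under this mechanism, specificity at equimolar challenge is set by the ratio of Kd values; other mechanisms (shared intermediates, multiple sites) change that relationship, which is why the abstract stresses validating the binding model before choosing engineering targets.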
•We demonstrate a general microplate-based intrinsic tryptophan quenching assay to validate kinetic models of metalloprotein binding competition in multi-metal solutions.
•Our analysis reveals distinct binding competition mechanisms between Ni(II) and Co(II) compared to Ni(II) and Zn(II).
•Mathematical modelling suggests distinct design principles for improving metalloprotein specificity for different binding competition mechanisms.