► This paper reviews fracture toughness testing, evaluation and standardization at ASTM. ► It covers both linear elastic fracture mechanics and elastic–plastic fracture mechanics. ► The review describes the most important fracture mechanics parameters: G, K, J, CTOD, and CTOA. ► ASTM fracture test standards discussed are E399, E561, E813, E1152, E1737, E1290, E1820, E1921 and E2472.
The present paper gives a technical review of fracture toughness testing, evaluation and standardization for metallic materials in terms of both linear elastic fracture mechanics and elastic–plastic fracture mechanics. This includes the early investigations and recent advances in fracture toughness test methods and practices developed by the American Society for Testing and Materials (ASTM). The review describes the most important fracture mechanics parameters: the elastic energy release rate G, the stress intensity factor K, the J-integral, the crack-tip opening displacement (CTOD) and the crack-tip opening angle (CTOA), from basic concept and definition to experimental estimation, test methods and ASTM standardization practices. Attention is paid to guidelines on how to choose an appropriate fracture parameter to characterize fracture toughness for the material of interest, and how to measure the fracture toughness value, defined either at a critical point or in a resistance-curve format, using laboratory specimens. The relevant ASTM fracture toughness test standards considered in this paper are E399 for KIc testing, E561 for K–R curve testing, E813 for JIc testing, E1152 for J–R curve testing, E1737 for JIc and J–R curve testing, E1290 for CTOD (δ) testing, the combined common test standard E1820 for measuring the three parameters K, J and δ, E1921 for testing the transition reference temperature T0 and the master curve of cleavage toughness KJc, and E2472 for CTOA testing. The effects of loading rate, temperature and crack-tip constraint on fracture toughness, as well as fracture instability analysis, are also reviewed.
Exosome secretion is a notable feature of malignancy owing to the roles of these nanoparticles in cancer growth, immune suppression, tumor angiogenesis and therapeutic resistance. Exosomes are 30–100 nm membrane vesicles released by many cell types during normal physiological processes. Tumors aberrantly secrete large quantities of exosomes that transport oncoproteins and immune suppressive molecules to support tumor growth and metastasis. The role of exosomes in intercellular signaling is exemplified by human epidermal growth factor receptor type 2 (HER2) over-expressing breast cancer, where exosomes with the HER2 oncoprotein stimulate tumor growth and interfere with the activity of the therapeutic antibody Herceptin®. Since numerous observations from experimental model systems point toward an important clinical impact of exosomes in cancer, several pharmacological strategies have been proposed for targeting their malignant activities. We also propose a novel device strategy involving extracorporeal hemofiltration of exosomes from the entire circulatory system using an affinity plasmapheresis platform known as the Aethlon ADAPT™ (adaptive dialysis-like affinity platform technology) system, which would overcome the risks of toxicity and drug interactions posed by pharmacological approaches. This technology allows affinity agents, including exosome-binding lectins and antibodies, to be immobilized in the outer-capillary space of plasma filtration membranes that integrate into existing kidney dialysis systems. Device therapies that evolve from this platform allow rapid extracorporeal capture and selective retention of target particles < 200 nm from the entire circulatory system. This strategy is supported by clinical experience in hepatitis C virus-infected patients using an ADAPT™ device, the Hemopurifier®, to reduce the systemic load of virions having sizes and glycosylated surfaces similar to those of cancer exosomes.
This review discusses the possible therapeutic approaches for targeting immune suppressive exosomes in cancer patients, and the anticipated significance of these strategies for reversing immune dysfunction and improving responses to standard of care treatments.
Richard Bradley’s landmark book Decision Theory with a Human Face makes seminal contributions to nearly every major area of decision theory, as well as most areas of formal epistemology and many areas of semantics. In addition to sketching Bradley’s distinctive semantics for conditional beliefs and desires, I will explain his theory of conditional desire, focusing particularly on his claim that we should not desire events, either positively or negatively, under the supposition that they will occur. I shall argue, to the contrary, that permitting non-trivial desirabilities for events whose occurrence is known or assumed is both more intuitively plausible and more theoretically fruitful than Bradley’s approach. In the course of the discussion I will contrast Bradley’s broadly evidentialist picture of decision theory with my own more orthodox causal approach.
Environmental rating ecolabels are a new generation of ecolabels. They are intended to enable consumers to compare the environmental impacts of multiple products and make more sustainable consumption choices. Falling outside of the three types defined in the ISO 14020 environmental label and declarations series, the recent proliferation of these business-to-consumer communication instruments has resulted in the creation of a plethora of methodologies to derive product performance ratings. Interest from consumers wanting more information on the products they purchase, as well as the promise of policy instruments aiming to increase transparency and combat greenwashing, are fuelling further multiplication of schemes. A move towards more credible, evidence-based environmental rating ecolabels is therefore urgently needed to promote assessment based on scientific understanding, gain consumer trust, and realise policy objectives.
We propose a framework based on four core principles - i) relevance, ii) scientific robustness, iii) trust and transparency, and iv) feasibility (scalability, affordability) - with 18 guidelines that can be followed by rating scheme developers. We characterise the rise of environmental rating ecolabels in geographical Europe and build an inventory of 33 existing schemes, at various stages of development and implementation, to which we apply the framework. This reveals the potential for significant improvement in current schemes, indicating important areas for development. The framework provides a valuable guide for the development of new schemes or an evaluation grid for existing initiatives.
•A new type of ecolabel, environmental ratings, is emerging at pace in Europe.
•The proliferation of current schemes risks undermining potential impact and consumer trust.
•Harmonised, scientifically robust, evidence-based labelling is necessary.
•A framework is proposed to guide the development and/or evaluation of schemes.
•Evaluation of active schemes against the framework suggests areas for improvement.
While it may be impossible to accurately predict what the world will look like in the future, we can be certain that it will be different from the world of today. By extension, we know that using today's data in life cycle assessment (LCA) studies claiming to represent future scenarios is problematic. For the future impact of products to be estimated in a consistent and meaningful manner in LCA, the background system, most commonly the ecoinvent database, needs to be projected into the future alongside the foreground system modeled in a given study. Futura is a new piece of open-source software which allows LCA practitioners to create and share novel background databases representing arbitrary scenarios. It allows users to import a base database and then start making targeted changes. These changes take three main forms: adding new technologies, regionalizing new or existing technologies, and altering market compositions. All changes made are automatically added to a "recipe". This recipe file can be shared publicly and imported by other users to exactly recreate the modified database. The additive and transparent nature of this system means that initially simple scenarios can be built upon by others to progress toward more comprehensive scenarios in a stepwise manner. The inability to build on the work of others is a serious barrier to the progress of the LCA field. Futura goes some way to reduce this barrier in the field of prospective LCA.
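The recipe mechanism can be pictured as an ordered list of declarative change steps that any user can replay against the same base database. The sketch below is a hypothetical illustration in Python; the action names, dictionary structure, and database representation are illustrative assumptions, not Futura's actual file format or API.

```python
# Hypothetical sketch of the "recipe" idea: each targeted change is recorded
# as a declarative step, so a shared recipe can be replayed to exactly
# recreate a modified background database. (Not Futura's real format.)
import json

recipe = [
    {"action": "add_technology", "name": "solar_2050"},
    {"action": "regionalize", "target": "solar_2050", "region": "EU"},
    {"action": "alter_market", "market": "electricity",
     "shares": {"solar_2050": 0.4, "coal": 0.6}},
]

def replay(database, steps):
    """Apply each recorded change to a copy of the base database."""
    db = dict(database)  # leave the base database untouched
    for step in steps:
        if step["action"] == "add_technology":
            db[step["name"]] = {"region": None}
        elif step["action"] == "regionalize":
            db[step["target"]] = dict(db[step["target"]], region=step["region"])
        elif step["action"] == "alter_market":
            db[step["market"]] = {"shares": step["shares"]}
    return db

base = {"electricity": {"shares": {"coal": 1.0}}}
# Recipes round-trip through JSON, so they can be shared as plain files.
modified = replay(base, json.loads(json.dumps(recipe)))
print(sorted(modified))
```

Because every step is additive and declarative, a second user can append further steps to the same recipe rather than starting from scratch, which is the stepwise collaboration the paragraph above describes.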
FMS-like tyrosine kinase 3-internal tandem duplication (FLT3-ITD) mutations in acute myeloid leukemia (AML) are associated with early relapse and poor survival. Quizartinib potently and selectively inhibits FLT3 kinase activity in preclinical AML models.
Quizartinib was administered orally at escalating doses of 12 to 450 mg/day to 76 patients (median age, 60 years; range, 23 to 86 years; median of three prior therapies; range, 0 to 12 therapies), enrolled irrespective of FLT3-ITD mutation status in a phase I, first-in-human study in relapsed or refractory AML.
Responses occurred in 23 (30%) of 76 patients, including 10 (13%) complete remissions (CRs) of any type (two CRs, three CRs with incomplete platelet recovery [CRp], five CRs with incomplete hematologic recovery [CRi]) and 13 (17%) partial remissions (PRs). Of 17 FLT3-ITD-positive patients, nine responded (53%; one CR, one CRp, two CRis, five PRs); of 37 FLT3-ITD-negative patients, five responded (14%; two CRps, three PRs); of 22 patients with FLT3-ITD-indeterminate or untested status, nine responded (41%; one CR, three CRis, five PRs). Median duration of response was 13.3 weeks; median survival was 14.0 weeks. The most common drug-related adverse events (>10% incidence) were nausea (16%), prolonged QT interval (12%), vomiting (11%), and dysgeusia (11%); most were grade ≤2. The maximum-tolerated dose was 200 mg/day, and the dose-limiting toxicity was grade 3 QT prolongation. FLT3-ITD phosphorylation was completely inhibited in an in vitro plasma inhibitory assay.
Quizartinib has clinical activity in patients with relapsed/refractory AML, particularly those with FLT3-ITD, and is associated with an acceptable toxicity profile.
Chronic traumatic encephalopathy (CTE) is a tauopathy associated with prior exposure to repetitive head impacts, such as those incurred through American football and other collision sports. Diagnosis is made through neuropathological examination. Many of the clinical features of CTE are common in the general population, with and without a history of head impact exposure, making clinical diagnosis difficult. As is now common in the diagnosis of other neurodegenerative disorders, such as Alzheimer's disease, there is a need for methods to diagnose CTE during life through objective biomarkers.
The aim of this study was to examine tau-positive exosomes in plasma as a potential CTE biomarker.
Subjects were 78 former National Football League (NFL) players and 16 controls. Extracellular vesicles were isolated from plasma. Fluorescent nanoparticle tracking analysis was used to determine the number of vesicles staining positive for tau.
The NFL group had higher exosomal tau than the control group (p < 0.0001). Exosomal tau discriminated between the groups, with 82% sensitivity, 100% specificity, 100% positive predictive value, and 53% negative predictive value. Within the NFL group, higher exosomal tau was associated with worse performance on tests of memory (p = 0.0126) and psychomotor speed (p = 0.0093).
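The reported rates are mutually consistent, which can be checked by reconstructing plausible counts from the percentages (the counts below are inferred from the reported rates and group sizes, since the abstract gives only percentages):

```python
# Hypothetical confusion-matrix counts reconstructed from the reported rates:
# 78 NFL players with 82% sensitivity -> ~64 true positives, 14 false negatives;
# 16 controls with 100% specificity -> 16 true negatives, 0 false positives.
tp, fn = 64, 14   # NFL players above / below the exosomal tau cutoff
tn, fp = 16, 0    # controls below / above the cutoff

sensitivity = tp / (tp + fn)   # proportion of cases detected
specificity = tn / (tn + fp)   # proportion of controls correctly cleared
ppv = tp / (tp + fp)           # reliability of a positive result
npv = tn / (tn + fn)           # reliability of a negative result

print(round(sensitivity, 2), round(specificity, 2),
      round(ppv, 2), round(npv, 2))
```

With zero false positives, PPV is necessarily 100%, while the 14 false negatives pull NPV down to 16/30 ≈ 53%, matching the figures above.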
These preliminary findings suggest that exosomal tau in plasma may be an accurate, noninvasive CTE biomarker.
Purpose
The majority of LCA studies begin with the drawing of a process flow diagram, which then needs to be translated manually into an LCA model. This study presents an initial image processing pipeline, implemented in an open-source software package called lcopt-cv, which can be used to identify the boxes and links in a photograph of a hand-drawn process flow diagram and automatically create an LCA foreground model.
Methods
The computer vision pipeline consists of a total of 15 steps, beginning with loading the image file and conversion to greyscale. The background is equalised, then the foreground of the image is extracted from the background using thresholding. The lines are then dilated and closed to account for drawing errors. Contours in the image are detected and simplified, and rectangles (contours with four corners) are identified from the simplified contours as ‘boxes’. Links between these boxes are identified using a flood-filling technique. Heuristic processing, based on knowledge of common practice in drawing of process flow diagrams, is then performed to more accurately identify the typology of the identified boxes and the direction of the links between them.
Results and discussion
The performance of the image processing pipeline was tested on four flow diagrams of increasing difficulty: one simple computer-drawn diagram and three photographs of hand-drawn diagrams (a simple diagram, a complex diagram and a diagram with merged lines). A set of default values for the variables which define the pipeline was developed through trial and error. For the two simple flow charts, all boxes and links were identified using the default settings. The complex diagram required minor tweaks to the default values to detect all boxes and links. An ‘unstacking’ heuristic allowed the diagram with merged lines to be correctly processed. After some manual reclassification of link directions and process types, the diagrams were turned into LCA models and exported to open-source LCA software packages (lcopt and Brightway) to be verified and analysed.
Conclusions
This study demonstrates that it is possible to generate a fully functional LCA model from a picture of a flow chart. This has potentially important implications not only for LCA practitioners as a whole, but in particular for the teaching of LCA. Skipping the steep learning curve required by most LCA software packages allows teachers to focus on important LCA concepts, while participants maintain the benefits of experiential learning by doing a ‘real’ LCA.
Andy Egan has recently produced a set of alleged counterexamples to causal decision theory (CDT) in which agents are forced to decide among causally unratifiable options, thereby making choices they know they will regret. I show that, far from being counterexamples, CDT gets Egan's cases exactly right. Egan thinks otherwise because he has misapplied CDT by requiring agents to make binding choices before they have processed all available information about the causal consequences of their acts. I elucidate CDT in a way that makes it clear where Egan goes wrong, and which explains why his examples pose no threat to the theory. My approach has similarities to a modification of CDT proposed by Frank Arntzenius, but it differs in the significance that it assigns to potential regrets. I maintain, contrary to Arntzenius, that an agent facing Egan's decisions can rationally choose actions that she knows she will later regret. All rationality demands of agents is that they maximize unconditional causal expected utility from an epistemic perspective that accurately reflects all the available evidence about what their acts are likely to cause. This yields correct answers even in outlandish cases in which one is sure to regret whatever one does.