It can be difficult to tease apart the sources of the many chemical substances that can adversely affect indoor air quality (IAQ). These can be generated by primary sources, e.g. cleaning products, fragrances and wall decorations, or created in situ (secondary emissions). Inputs due to human activities (heating, smoking, etc.) add to the catalogue of chemical possibilities. This study examines the effect of airtightness and ventilation on volatile organic compound (VOC) concentrations in newly furnished, unoccupied residential houses. Further objectives included investigating the effects of airtightness and mechanical ventilation on internal VOC concentrations in two rooms with controlled simulated occupancy.
Chemicals that appear to have been generated directly from or supplemented by furnishings were acetone, 2-butanone, ethyl acetate and tert-butyl alcohol. In the absence of mechanical ventilation, total VOC (TVOC) concentrations were over three times greater in the more airtight room. The appearance of concentration maxima after furnishing and/or decoration was delayed in the airtight room and the subsequent decay rate was slower. With the exception of benzene, none of the individual compounds exceeded national guidelines for outdoor air (in the absence of IAQ guidelines).
Mechanical ventilation reduced TVOC concentrations by 300 μg m⁻³ (more airtight room) and > 340 μg m⁻³ (less airtight room) to below European threshold recommendations, although the reduction took approximately five days. The time lag indicates that simply providing adequate ventilation during chemical use indoors may not be sufficient to protect inhabitants from prolonged exposure following use.
• Research identified and quantified VOCs generated in timber-framed unoccupied houses.
• Some chemicals were detected at high concentrations in background samples.
• Increased airtightness delayed concentration maxima, and the decay rate was slower.
• Measured internal TVOCs (as toluene) were 74–90% higher in the absence of MVHR.
• Concentration reduction to below TVOC guidelines required 9 days of MVHR operation.
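The multi-day lag reported above before TVOC concentrations fell below guideline levels can be contrasted with simple first-order dilution. The sketch below is illustrative only: the starting concentration, threshold and air change rate are assumed values, not measurements from the study.

```python
import math

def time_to_threshold(c0, c_threshold, ach_per_hour):
    """Hours for a well-mixed room to decay from c0 to c_threshold
    (both in ug/m3) under a constant air change rate, assuming pure
    first-order dilution C(t) = c0 * exp(-ach * t) and no ongoing
    emission from furnishings."""
    return math.log(c0 / c_threshold) / ach_per_hour

# Hypothetical values: 600 ug/m3 start, 300 ug/m3 guideline, 0.5 ACH
hours = time_to_threshold(600, 300, 0.5)  # roughly 1.4 h
```

Pure dilution alone predicts a fall within hours, not the days observed, which is consistent with the abstract's implication that continued secondary emission from furnishings sustains concentrations long after ventilation begins.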
Consolidated bioprocessing (CBP) is a potential breakthrough technology for reducing costs of biochemical production from lignocellulosic biomass. Production of cellulase enzymes, saccharification of lignocellulose, and conversion of the resulting sugars into a chemical of interest occur simultaneously within a single bioreactor. In this study, synthetic fungal consortia composed of the cellulolytic fungus Trichoderma reesei and the production specialist Rhizopus delemar demonstrated conversion of microcrystalline cellulose (MCC) and alkaline pre-treated corn stover (CS) to fumaric acid in a fully consolidated manner without addition of cellulase enzymes or expensive supplements such as yeast extract. A titer of 6.87 g/L of fumaric acid, representing a 0.17 w/w yield, was produced from 40 g/L MCC with a productivity of 31.8 mg/L/hr. In addition, lactic acid was produced from MCC using a fungal consortium with Rhizopus oryzae as the production specialist. These results are a proof-of-concept demonstration of engineering synthetic microbial consortia for CBP production of naturally occurring biomolecules.
Scholz and coworkers developed a consolidated bioprocessing system that supports all of the steps required for conversion of lignocellulosic biomass to two organic acids, including cell growth, cellulase enzyme production, lignocellulose hydrolysis, and organic acid production. This process was achieved in a minimal medium, without supplementation of expensive nutrients, by pairing the cellulolytic specialist Trichoderma reesei and a fumaric acid- or lactic acid-producing fungal specialist into a functional synthetic consortium.
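The figures quoted in the abstract are internally consistent, as a quick arithmetic check shows (all values are taken from the abstract; the run duration is inferred assuming constant productivity, which the abstract does not state explicitly):

```python
titer = 6.87         # g/L fumaric acid produced
substrate = 40.0     # g/L microcrystalline cellulose supplied
productivity = 31.8  # mg/L/hr volumetric productivity

yield_ww = titer / substrate             # ~0.17 g product per g MCC, matching the stated 0.17 w/w
run_hours = titer * 1000 / productivity  # ~216 h (~9 days) implied at constant productivity
```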
A good deal of Twitter research focuses on event detection using algorithms that rely on keywords and tweet density. We present an alternative analysis of tweets, filtering by hashtags related to the 2012 Super Bowl and validated against the 2013 baseball World Series. We analyze low-volume, topically similar tweets which reference specific plays (sub-contexts) within the game at the time they occur. These communications are not explicitly linked; they pivot on keywords and do not correlate with spikes in tweets-per-minute. Such phenomena are not readily identified by current event-detection algorithms, which rely on volume to drive the analytic engine. We demonstrate the effectiveness of empirically and theoretically informed approaches, and use qualitative analysis and theory to inform the design of future event-detection algorithms. Specifically, we draw on theories of Information Grounds and "third places" to explain the sub-contexts that emerge. Conceptualizing sub-contexts as a socio-technical place advances the framing of Twitter event detection from principally computational to deeply contextual.
There is no such thing as high assurance without high assurance hardware. High assurance hardware is essential because any and all high assurance systems ultimately depend on hardware that conforms to, and does not undermine, critical system properties and invariants. And yet, high assurance hardware development is stymied by the conceptual gap between formal methods and hardware description languages used by engineers. This article advocates a semantics-directed approach to bridge this conceptual gap. We present a case study in the design of secure processors, which are formally derived via principled techniques grounded in functional programming and equational reasoning. The case study comprises the development of secure single- and dual-core variants of a single processor, both based on a common semantic specification of the ISA. We demonstrate via formal equational reasoning that the dual-core processor respects a “no-write-down” information flow policy. The semantics-directed approach enables a modular and extensible style of system design and verification. The secure processors require only a very small amount of additional code to specify and implement, and their security verification arguments are concise and readable. Our approach rests critically on ReWire, a functional programming language providing a suitable foundation for formal verification of hardware designs. This case study demonstrates both ReWire’s expressiveness as a programming language and its power as a framework for formal, high-level reasoning about hardware systems.
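The "no-write-down" policy mentioned above is an information-flow invariant: a subject at a high security level must never write to a lower-level sink (the *-property of Bell–LaPadula-style models). The toy checker below illustrates the idea on a trace of labeled write events; the two-level lattice and trace encoding are illustrative assumptions, not ReWire's actual formal model or the paper's proof technique.

```python
# Illustrative two-level security lattice: low may flow to high, not vice versa.
LEVELS = {"low": 0, "high": 1}

def respects_no_write_down(trace):
    """Each event is (subject_level, object_level) for a write.
    A write is permitted only when the subject's level is at or
    below the object's level, i.e. no information flows downward."""
    return all(LEVELS[subj] <= LEVELS[obj] for subj, obj in trace)

ok_trace  = [("low", "low"), ("low", "high"), ("high", "high")]
bad_trace = [("high", "low")]  # a high-level core writing to a low-level sink
```

Where this sketch checks individual traces at runtime, the article's contribution is to establish the property statically, for all traces, by equational reasoning over the processor's semantics.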
There is no such thing as high assurance without high assurance hardware. High assurance hardware is essential because any and all high assurance systems ultimately depend on hardware that conforms to, and does not undermine, critical system properties and invariants. And yet, high assurance hardware development is stymied by the conceptual gap between formal methods and hardware description languages used by engineers. This paper presents ReWire, a functional programming language providing a suitable foundation for formal verification of hardware designs, and a compiler for that language that translates high-level, semantics-driven designs directly into working hardware. ReWire's design and implementation are presented, along with a case study in the design of a secure multicore processor, demonstrating both ReWire's expressiveness as a programming language and its power as a framework for formal, high-level reasoning about hardware systems.
Heightened international concerns relating to security and identity management have led to an increased interest in security applications, such as face recognition and baggage and passenger screening at airports. A common feature of many of these technologies is that a human operator is presented with an image and asked to decide whether the passenger or baggage corresponds to a person or item of interest. The human operator is a critical component in the performance of the system, and it is of considerable interest not only to better understand the performance of human operators on such tasks, but also to design systems with a human operator in mind. This paper discusses a number of human factors issues which will have an impact on human operator performance in the operational environment, as well as highlighting the variables which must be considered when evaluating the performance of these technologies in scenario or operational trials, based on the Defence Science and Technology Organisation's experience in such testing.
Graphics Processing Units (GPUs) are increasingly becoming part of HPC clusters. Nevertheless, cloud computing services and resource management frameworks targeting heterogeneous clusters including GPUs are still in their infancy. Further, GPU software stacks (e.g., CUDA driver and runtime) currently provide very limited support for concurrency.
In this paper, we propose a runtime system that provides abstraction and sharing of GPUs, while allowing isolation of concurrent applications. A central component of our runtime is a memory manager that provides a virtual memory abstraction to the applications. Our runtime is flexible in terms of scheduling policies, and allows dynamic (as opposed to programmer-defined) binding of applications to GPUs. In addition, our framework supports dynamic load balancing, dynamic upgrade and downgrade of GPUs, and is resilient to their failures. Our runtime can be deployed in combination with VM-based cloud computing services to allow virtualization of heterogeneous clusters, or in combination with HPC cluster resource managers to form an integrated resource management infrastructure for heterogeneous clusters. Experiments conducted on a three-node cluster show that our GPU sharing scheme allows up to a 28% and a 50% performance improvement over serialized execution on short- and long-running jobs, respectively. Further, dynamic inter-node load balancing leads to an additional 18-20% performance benefit.
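The dynamic (rather than programmer-defined) binding of applications to GPUs described above can be pictured as a load-aware dispatcher. The sketch below is a minimal toy model under assumed names; the real runtime additionally virtualizes GPU memory, supports pluggable scheduling policies, and tolerates GPU upgrades, downgrades and failures.

```python
class GpuPool:
    """Toy dispatcher: bind each incoming job to the least-loaded GPU,
    mirroring dynamic load balancing rather than static, programmer-defined
    GPU assignment."""
    def __init__(self, gpu_ids):
        self.load = {g: 0 for g in gpu_ids}  # estimated outstanding work per GPU

    def bind(self, job_cost):
        gpu = min(self.load, key=self.load.get)  # pick the least-loaded GPU
        self.load[gpu] += job_cost
        return gpu

pool = GpuPool(["gpu0", "gpu1"])
assignments = [pool.bind(cost) for cost in (5, 3, 2, 4)]
# Jobs spread across both GPUs instead of serializing on one device.
```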
An experiment was conducted on human face recognition performance in an access control scenario. Ten judges compared fifty individuals to security-ID-style photos, where 20% of the photos were of different people assessed to look similar to the individual presenting the photo. Performance was better than that observed in the only other comparable live-to-photo experiment [1], with a false match rate of 9% (CI95%: 2%, 16%) in this study compared to 66% (CI95%: 50%, 82%), and a false reject rate of 5% (CI95%: 0%, 11%) compared to 14% (CI95%: 0.3%, 28%). These differences were attributed to divergences in experimental methodology, especially with regard to the distractor tasks used. It is concluded that the figures provided in the current study are more appropriate estimates of performance in access control scenarios. Substantial individual variation in face matching abilities, response times and confidence ratings was observed.
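Confidence intervals of the kind quoted above can be computed with a normal-approximation (Wald) interval for a proportion. The abstract does not state the number of trials behind each rate, so the trial count below is a hypothetical illustration, not a figure from the study.

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion
    p_hat observed over n Bernoulli trials, clipped at zero."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half), p_hat + half

# A 9% observed rate over a hypothetical 64 trials yields roughly (2%, 16%),
# the same shape of interval as reported in the abstract.
lo, hi = wald_ci(0.09, 64)
```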
FPGA programmability remains a concern with respect to the broad adoption of the technology. One reason for this is simple: FPGA applications are frequently implementations of concurrent algorithms that could be most directly rendered in concurrent languages, but there is little or no first-class support for concurrent applications in conventional hardware description languages. It stands to reason that FPGA programmability would be enhanced in a hardware description language with first-class concurrency. The starting point for this paper is a functional hardware description language with built-in support for concurrency called ReWire. Because it is a concurrent functional language, ReWire supports the elegant expression of common concurrency paradigms; we illustrate this with several case studies.