ABSTRACT A planet having protective ozone within the collimated beam of a gamma-ray burst (GRB) may suffer ozone depletion, potentially causing a mass extinction of existing life on a planet's surface and in its oceans. We model the dangers of long GRBs to planets in the Milky Way and utilize a static statistical model of the Galaxy, which matches major observable properties, such as the inside-out star formation history (SFH), metallicity evolution, and three-dimensional stellar number density distribution. The GRB formation rate is a function of both the SFH and metallicity. However, the extent to which chemical evolution reduces the GRB rate over time in the Milky Way is still an open question. Therefore, we compare the damaging effects of GRBs to biospheres in the Milky Way using two models. One model generates GRBs as a function of the inside-out SFH. The other model follows the SFH, but generates GRB progenitors as a function of metallicity, thereby favoring metal-poor host regions of the Galaxy over time. If the GRB rate only follows the SFH, the majority of the GRBs occur in the inner Galaxy. However, if GRB progenitors are constrained to low-metallicity environments, then GRBs only form in the metal-poor outskirts at recent epochs. Interestingly, over the past 1 Gyr, the surface density of stars (and their corresponding planets) that survive a GRB is still greatest in the inner Galaxy in both models. The present-day danger of long GRBs to life at the solar radius (R = 8 kpc) is low. We find that at least ∼65% of stars survive a GRB over the past 1 Gyr. Furthermore, when the GRB rate was expected to have been enhanced at higher redshifts, such as z ∼ 0.5, our results suggest that a large fraction of planets would have survived these lethal GRB events.
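The quoted survival fractions can be connected to a local lethal-GRB rate in a simple way if lethal events at a planet's location are treated as a Poisson process, so that the probability of escaping all events in a window is exp(−rate × time). The sketch below is only this standard statistical ingredient, not the paper's full Galactic model; the example rate is illustrative.

```python
import math

def survival_fraction(lethal_rate_per_gyr, window_gyr):
    """Fraction of planets that avoid every lethal GRB during the window,
    assuming lethal events at a planet's location follow a Poisson
    process: P(no event) = exp(-rate * time)."""
    return math.exp(-lethal_rate_per_gyr * window_gyr)

# For example, a local lethal-event rate of ~0.43 per Gyr corresponds
# to roughly 65% of stars surviving over the past 1 Gyr.
```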
The self-join finds all objects in a dataset within a threshold of each other defined by a similarity metric. As such, the self-join is a fundamental building block for the fields of databases and data mining. In low dimensionality, there are several challenges associated with efficiently computing the self-join on the graphics processing unit (GPU). Low dimensional data results in higher data densities, causing a significant number of distance calculations and a large result set, and as dimensionality increases, index searches become increasingly exhaustive. We propose several techniques to optimize the self-join using the GPU that include a GPU-efficient index that employs a bounded search, a batching scheme to accommodate large result sets, and duplicate search removal with low overhead. Furthermore, we propose a performance model that reveals bottlenecks related to the result set size and enables us to choose a batch size that mitigates two sources of performance degradation. Our approach outperforms the state-of-the-art in most scenarios.
• We advance a GPU-accelerated self-join technique that is efficient in many scenarios.
• We leverage an index designed for the GPU for efficient range queries.
• We present a duplicate search and distance calculation removal strategy.
• We leverage a performance model that reveals key sources of overhead.
• The performance model can be used to optimize parameters to improve performance.
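The core ideas above (a grid index with a bounded search, plus low-overhead duplicate removal) can be sketched on the CPU in a few lines. This is a minimal 2D illustration under assumed names, not the paper's GPU kernel: cells of width ϵ bound each query to its 9 adjacent cells, and enforcing an index ordering (j > i) removes duplicate pairs cheaply.

```python
import math
from collections import defaultdict

def grid_self_join(points, eps):
    """Find all pairs (i, j), i < j, with Euclidean distance <= eps,
    using a uniform grid with cell width eps so each query only
    inspects the 3^d adjacent cells (here d = 2)."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(x // eps), int(y // eps))].append(idx)

    pairs = []
    for (cx, cy), ids in grid.items():
        for i in ids:
            px, py = points[i]
            # Bounded search: only the 9 neighboring cells can hold matches.
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy), ()):
                        if j <= i:  # skip self-pairs and duplicate (j, i) pairs
                            continue
                        qx, qy = points[j]
                        if math.hypot(px - qx, py - qy) <= eps:
                            pairs.append((i, j))
    return pairs
```

Because each cell has side ϵ, any pair within distance ϵ must lie in adjacent cells, so the bounded search loses no results while avoiding an exhaustive scan.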
Abstract
We present here the design, architecture, and first data release for the Solar System Notification Alert Processing System (SNAPS). SNAPS is a solar system broker that ingests alert data from all-sky surveys. At present, we ingest data from the Zwicky Transient Facility (ZTF) public survey, and we will ingest data from the forthcoming Legacy Survey of Space and Time (LSST) when it comes online. SNAPS is an official LSST downstream broker. In this paper we present the SNAPS design goals and requirements. We describe the details of our automatic pipeline processing in which the physical properties of asteroids are derived. We present SNAPShot1, our first data release, which contains 5,458,459 observations of 31,693 asteroids observed by ZTF from 2018 July to 2020 May. By comparing a number of derived properties for this ensemble to previously published results for overlapping objects we show that our automatic processing is highly reliable. We present a short list of science results, among many that will be enabled by our SNAPS catalog: (1) we demonstrate that there are no known asteroids with very short periods and high amplitudes, which clearly indicates that in general asteroids in the size range 0.3–20 km are strengthless; (2) we find no difference in the period distributions of Jupiter Trojan asteroids, implying that the L4 and L5 clouds have different shape distributions; and (3) we highlight several individual asteroids of interest. Finally, we describe future work for SNAPS and our ability to operate at LSST scale.
Abstract
We present photometric data for minor planets observed by the Transiting Exoplanet Survey Satellite during its Cycle 1 operations. In total, we extracted usable detections for 37,965 objects. We present an examination of the reliability of the rotation period and light-curve amplitudes derived from each object based upon the number of detections and the normalized Lomb–Scargle power of our period fitting and compare and contrast our results with previous similar works. We show that for objects with 200 or more photometric detections and a derived normalized, generalized Lomb–Scargle power greater than 0.2, we have an 85% confidence in that period; this encompasses 3492 rotation periods we consider to be highly reliable. We independently examine a series of periods first reported by Pál et al.; periods derived in both works found to have similar results should be considered reliable. Additionally, we demonstrate the need to properly account for the true proportion of slow rotators (P > 100 hr) when inferring shape distributions from sparse photometry.
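The reliability criterion described above (at least 200 detections and a normalized generalized Lomb–Scargle power above 0.2) amounts to a simple catalog filter. The function names below are hypothetical; the thresholds are the ones stated in the text.

```python
def is_reliable_period(n_detections, ls_power,
                       min_detections=200, min_power=0.2):
    """Flag a derived rotation period as reliable when the object has
    enough photometric detections and a strong enough normalized
    generalized Lomb-Scargle peak (thresholds from the text)."""
    return n_detections >= min_detections and ls_power > min_power

def filter_catalog(candidates):
    """Keep only highly reliable periods.
    candidates: iterable of (object_id, n_detections, ls_power, period_hr)."""
    return [c for c in candidates if is_reliable_period(c[1], c[2])]
```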
Previous studies of the galactic habitable zone have been concerned with identifying those regions of the Galaxy that may favor the emergence of complex life. A planet is deemed habitable if it meets a set of assumed criteria for supporting the emergence of such complex life. In this work, we extend the assessment of habitability to consider the potential for life to further evolve to the point of intelligence--termed the propensity for the emergence of intelligent life, φI. We assume φI is strongly influenced by the time durations available for evolutionary processes to proceed undisturbed by the sterilizing effects of nearby supernovae. The times between supernova events provide windows of opportunity for the evolution of intelligence. We developed a model that allows us to analyze these window times to generate a metric for φI, and we examine here the spatial and temporal variation of this metric. Even under the assumption that long time durations are required between sterilizations to allow for the emergence of intelligence, our model suggests that the inner Galaxy provides the greatest number of opportunities for intelligence to arise. This is due to the substantially higher number density of habitable planets in this region, which outweighs the effects of a higher supernova rate in the region. Our model also shows that φI is increasing with time. Intelligent life emerged at approximately the present time at Earth's galactocentric radius, but a similar level of evolutionary opportunity was available in the inner Galaxy more than 2 Gyr ago. Our findings suggest that the inner Galaxy should logically be a prime target region for searches for extraterrestrial intelligence and that any civilizations that may have emerged there are potentially much older than our own.
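The window-time idea above can be illustrated with a toy simulation: draw sterilization times from a Poisson process and count the undisturbed intervals long enough for intelligence to evolve. This is only a sketch of the concept under assumed parameter values, not the paper's calibrated Galactic model.

```python
import random

def opportunity_windows(sn_rate_per_gyr, total_time_gyr, min_window_gyr, seed=0):
    """Simulate sterilizing supernovae as a Poisson process and count
    the undisturbed windows longer than min_window_gyr -- a toy proxy
    for the propensity metric phi_I described in the text."""
    rng = random.Random(seed)
    t, events = 0.0, [0.0]
    while True:
        t += rng.expovariate(sn_rate_per_gyr)  # exponential waiting time
        if t >= total_time_gyr:
            break
        events.append(t)
    events.append(total_time_gyr)
    windows = [b - a for a, b in zip(events, events[1:])]
    return sum(1 for w in windows if w >= min_window_gyr)
```

A higher supernova rate chops the same total time into shorter windows, which is why the metric must be weighed against the higher planet density of the inner Galaxy.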
K Nearest Neighbor (KNN) joins are used in scientific domains for data analysis, and are building blocks of several well-known algorithms. KNN-joins find the KNN of all points in a dataset. This paper focuses on a hybrid CPU/GPU approach for low-dimensional KNN-joins, where the GPU may not yield substantial performance gains over parallel CPU algorithms. We utilize a work queue that prioritizes computing data points in high density regions on the GPU, and low density regions on the CPU, thereby taking advantage of each architecture’s relative strengths. Our approach, HybridKNN-Join, effectively augments a state-of-the-art multi-core CPU algorithm. We propose optimizations that (i) maximize GPU query throughput by assigning the GPU large batches of work; (ii) increase workload granularity to optimize GPU utilization; and, (iii) limit load imbalance between CPU and GPU architectures. We compare HybridKNN-Join to one GPU and two parallel CPU reference implementations. Compared to the reference implementations, we find that the hybrid algorithm performs best on larger workloads (dataset size and K). The methods employed in this paper show promise for the general division of work in other hybrid algorithms.
• A hybrid CPU/GPU KNN algorithm is proposed.
• A shared CPU/GPU work queue is used to send work to each architecture.
• The GPU executes KNN searches in the high density regions.
• The CPU executes KNN searches in the low density regions.
• Workload imbalance caused by the work queue is largely mitigated.
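The density-based division of work above can be sketched as a simple partitioning step: estimate each query point's local density and route dense-region queries to the GPU queue, sparse-region queries to the CPU queue. The names and threshold below are illustrative assumptions, not HybridKNN-Join's actual implementation.

```python
import math

def partition_by_density(points, radius, high_density_threshold):
    """Assign each query point index to the GPU queue if it lies in a
    high density region (many neighbors within `radius`), else to the
    CPU queue -- mirroring the idea that the GPU's throughput pays off
    where the most distance calculations occur."""
    gpu_queue, cpu_queue = [], []
    for i, (px, py) in enumerate(points):
        neighbors = sum(
            1 for j, (qx, qy) in enumerate(points)
            if j != i and math.hypot(px - qx, py - qy) <= radius
        )
        (gpu_queue if neighbors >= high_density_threshold else cpu_queue).append(i)
    return gpu_queue, cpu_queue
```

In a real system the density estimate would come from an index (e.g., grid cell counts) rather than this O(n²) scan, and the queues would be consumed concurrently by both processors.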
Given two datasets (or tables) A and B and a search distance ϵ, the distance similarity join, denoted as A ⋉_ϵ B, finds the pairs of points (p_a, p_b), where p_a ∈ A and p_b ∈ B, such that the distance between p_a and p_b is ≤ ϵ. If A = B, then the similarity join is equivalent to a similarity self-join, denoted as A ⋈_ϵ A. We propose in this paper Heterogeneous Epsilon Grid Joins (HEGJoin), a heterogeneous CPU-GPU distance similarity join algorithm. Efficiently partitioning the work between the CPU and the GPU is a challenge. Indeed, the work partitioning strategy needs to consider the different characteristics and computational throughput of the processors (CPU and GPU), as well as the data-dependent nature of the similarity join, which affects the overall execution time (e.g., the number of queries, their distribution, the dimensionality, etc.). For HEGJoin, we design in this paper one dynamic and two static work partitioning strategies. We also propose a performance model for each static partitioning strategy to distribute the work between the processors. We evaluate the performance of all three partitioning methods by considering the execution time and the load imbalance between the CPU and GPU as performance metrics. HEGJoin achieves a speedup of up to 5.46× (3.97×) over the GPU-only (CPU-only) algorithm on our first test platform, and up to 1.97× (12.07×) over the GPU-only (CPU-only) algorithm on our second test platform.
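The set definition of the similarity join opening this abstract translates directly into a brute-force reference implementation, shown below as a minimal sketch. HEGJoin's actual epsilon-grid index and CPU-GPU partitioning are not reproduced here; this only fixes the semantics that any optimized version must match.

```python
import math

def similarity_join(A, B, eps):
    """Reference (brute-force) distance similarity join A ⋉_eps B:
    all pairs (p_a, p_b) with p_a in A, p_b in B, and
    Euclidean distance(p_a, p_b) <= eps."""
    return [
        (pa, pb)
        for pa in A
        for pb in B
        if math.dist(pa, pb) <= eps
    ]

def self_join(A, eps):
    """When A = B the join becomes the self-join A ⋈_eps A."""
    return similarity_join(A, A, eps)
```

Note that the self-join over n points returns self-pairs and both orderings of each match; optimized implementations typically suppress these duplicates.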
In this study, we combine bibliometric techniques with a machine learning algorithm, the sequential information bottleneck, to assess the interdisciplinarity of research produced by the University of Hawaii NASA Astrobiology Institute (UHNAI). In particular, we cluster abstract data to evaluate Thomson Reuters Web of Knowledge subject categories as descriptive labels for astrobiology documents, assess individual researcher interdisciplinarity, and determine where collaboration opportunities might occur. We find that the majority of the UHNAI team is engaged in interdisciplinary research, and suggest that our method could be applied to additional NASA Astrobiology Institute teams in particular, or other interdisciplinary research teams more broadly, to identify and facilitate collaboration opportunities.
Lattice and code cryptography can replace existing schemes such as elliptic curve cryptography because of their resistance to quantum computers. In support of public key infrastructures, the distribution, validation, and storage of the cryptographic keys then become more complex when handling longer keys. This paper describes practical ways to generate keys from physical unclonable functions, for both lattice- and code-based cryptography. Handshakes between client devices containing the physical unclonable functions (PUFs) and a server are used to select sets of addressable positions in the PUFs, from which streams of bits called seeds are generated on demand. The public and private cryptographic key pairs are computed from these seeds together with additional streams of random numbers. The method allows the server to independently validate the public key generated by the PUF, and act as a certificate authority in the network. Technologies such as high-performance computing and graphics processing units can further enhance security by preventing attackers, equipped only with less powerful computers, from making this independent validation.
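The handshake flow above (server-selected addresses, PUF responses forming a seed, key pairs derived from the seed plus extra randomness) can be sketched abstractly. Everything below is a stand-in under assumed names: SHA-256/HMAC replaces the lattice- or code-based key generation, and the PUF is modeled as a byte array; the point is only that both parties can recompute the same keys from the same seed.

```python
import hashlib
import hmac

def puf_response(puf_cells, addresses):
    """Read the server-selected addressable positions of the PUF;
    the concatenated responses form an on-demand seed."""
    return bytes(puf_cells[a] for a in addresses)

def derive_key_pair(seed, random_stream):
    """Stand-in key derivation: mix the PUF seed with an additional
    random stream. A real scheme would expand this seed into lattice-
    or code-based key material; SHA-256/HMAC here only illustrates
    deterministic derivation from shared secrets."""
    private_key = hmac.new(seed, random_stream, hashlib.sha256).digest()
    public_key = hashlib.sha256(private_key).digest()  # placeholder one-way map
    return private_key, public_key

# A server holding an image of the PUF can re-derive and validate the
# same public key independently, acting as a certificate authority.
```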
Blockchain is a game-changing technology that enhances security for the supply chain of smart additive manufacturing. Blockchain enables the tracking and recording of the history of each transaction in a ledger stored in the cloud that cannot be altered, and when blockchain is combined with digital signatures, it verifies the identity of the participants with its non-repudiation capabilities. One of the weaknesses of blockchain is the difficulty of preventing malicious participants from gaining access to public–private key pairs. Groups of adversaries often interact freely with the network, and this is a security concern when cloud-based methods manage the key pairs. Therefore, we propose end-to-end security schemes that both insert tamper-resistant devices in the hardware of the peripheral devices and use ternary cryptography. The tamper-resistant devices, which are designed with nanomaterials, act as Physical Unclonable Functions to generate secret cryptographic keys. One-time-use public–private key pairs are generated for each transaction. In addition, the cryptographic scheme incorporates a third logic state to mitigate man-in-the-middle attacks. The generation of these public–private key pairs is compatible with post-quantum cryptography. The third scheme we propose is the use of noise injection techniques, together with high-performance computing, to increase the security of the system. We present prototypes to demonstrate the feasibility of these schemes and to quantify the relevant parameters. We conclude by presenting the value of blockchains in securing the logistics of additive manufacturing operations.
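The one-time-use key pairs per transaction mentioned above can be illustrated with a classic hash-based construction. The paper's ternary scheme is not reproduced here; as a sketch of post-quantum-compatible, single-use signing keys, below is a minimal Lamport one-time signature over an 8-bit message (each key pair must sign exactly one transaction).

```python
import hashlib
import secrets

def H(b):
    return hashlib.sha256(b).digest()

def lamport_keygen(bits=8):
    """One-time key pair: two random secrets per message bit; the
    public key is their hashes. Each pair must sign only once."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def lamport_sign(sk, msg_bits):
    """Reveal, for each bit, the secret matching that bit's value."""
    return [pair[bit] for pair, bit in zip(sk, msg_bits)]

def lamport_verify(pk, msg_bits, sig):
    """Check each revealed secret hashes to the published commitment."""
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, msg_bits))
```

Because security rests only on the hash function, such signatures resist quantum attacks on number-theoretic assumptions, which is why hash-based one-time keys pair naturally with blockchain transactions.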