Abstract
We present a multiwavelength photometric and spectroscopic analysis of 13 super-Chandrasekhar-mass/2003fg-like Type Ia supernovae (SNe Ia). Nine of these objects were observed by the Carnegie Supernova Project. The 2003fg-like SNe have slowly declining light curves (Δm15(B) < 1.3 mag) and peak absolute B-band magnitudes between −19 and −21 mag. Many of the 2003fg-like SNe are located in the same part of the luminosity–width relation as normal SNe Ia. In the optical B and V bands, the 2003fg-like SNe look like normal SNe Ia, but at redder wavelengths they diverge. Unlike other luminous SNe Ia, the 2003fg-like SNe generally have only one i-band maximum, which peaks after the epoch of the B-band maximum, while their near-IR (NIR) light-curve rise times can be ≳40 days longer than those of normal SNe Ia. They are also at least 1 mag brighter in the NIR bands than normal SNe Ia, peaking above M_H = −19 mag, and generally have negative Hubble residuals, which may be the cause of some systematics in dark-energy experiments. Spectroscopically, the 2003fg-like SNe exhibit peculiarities such as unburnt carbon well past maximum light, a large spread (8000–12,000 km s⁻¹) in Si II λ6355 velocities at maximum light with no rapid early velocity decline, and no clear H-band break at +10 days. We find that SNe with a larger pseudo-equivalent width of C II at maximum light have lower Si II λ6355 velocities and more slowly declining light curves. There are also multiple factors that contribute to the peak luminosity of 2003fg-like SNe. The explosion of a C–O degenerate core inside a carbon-rich envelope is consistent with these observations. Such a configuration may come from the core-degenerate scenario.
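For context, the luminosity–width relation referred to above is commonly written in its linear (Phillips) form, sketched here schematically; the coefficients a and b are calibration constants fit to a sample of normal SNe Ia, not values derived in this work:

```latex
% Linear (Phillips) form of the luminosity--width relation:
% peak absolute B-band magnitude as a function of the decline
% rate \Delta m_{15}(B); a and b are fit to a calibration sample.
M_{B,\max} = a + b\,\left[\Delta m_{15}(B) - 1.1\right]
```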
Abstract
We present a study of the optical and near-infrared (NIR) spectra of SN 2013ai along with its light curves. These data range from discovery until 380 days after explosion. SN 2013ai is a fast-declining Type II supernova (SN II) with an unusually long rise time, 18.9 ± 2.7 days in the V band, and a bright V-band peak absolute magnitude of −18.7 ± 0.06 mag. The spectra are dominated by hydrogen features in the optical and NIR. The spectral features of SN 2013ai are unique in their expansion velocities, which, when compared to large samples of SNe II, are more than 1,000 km s⁻¹ faster at 50 days past explosion. In addition, the long rise time of the light curve more closely resembles SNe IIb than SNe II. If SN 2013ai is coeval with a nearby compact cluster, we infer a progenitor zero-age main-sequence mass of ∼17 M⊙. After performing light-curve modeling, we find that SN 2013ai could be the result of the explosion of a star with little hydrogen mass, a large amount of synthesized ⁵⁶Ni (0.3–0.4 M⊙), and an explosion energy of 2.5–3.0 × 10⁵¹ erg. The density structure and expansion velocities of SN 2013ai are similar to those of the prototypical SN IIb, SN 1993J. However, SN 2013ai shows no strong helium features in the optical, likely due to the presence of a dense core that prevents the majority of γ-rays from escaping to excite helium. Our analysis suggests that SN 2013ai could be a link between SNe II and stripped-envelope SNe.
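The synthesized ⁵⁶Ni mass enters such light-curve models through the radioactive heating of the ⁵⁶Ni → ⁵⁶Co → ⁵⁶Fe decay chain, which scales linearly with M_Ni. The standard textbook form (not an equation taken from this work) is:

```latex
% Instantaneous radioactive heating from the ^{56}Ni -> ^{56}Co -> ^{56}Fe
% chain; \epsilon_{\mathrm{Ni}} and \epsilon_{\mathrm{Co}} are the decay
% energy-generation rates per unit mass, with e-folding times
% \tau_{\mathrm{Ni}} \approx 8.8 d and \tau_{\mathrm{Co}} \approx 111.3 d.
L_{\mathrm{heat}}(t) = M_{\mathrm{Ni}}
\left[\left(\epsilon_{\mathrm{Ni}} - \epsilon_{\mathrm{Co}}\right)
e^{-t/\tau_{\mathrm{Ni}}}
+ \epsilon_{\mathrm{Co}}\, e^{-t/\tau_{\mathrm{Co}}}\right]
```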
We present a polarimetric study of the RCW121 star-forming region to derive the orientation of the sky-projected magnetic field component traced by the polarization vectors, the morphology of which tends to follow the cloud's structure. Individual polarization-angle values are consistent across the different bands, having a broad distribution towards the RCW121 H II region. We estimate the corresponding magnetic field orientation in the H II region to have a mean value of −12° ± 7°. RCW121 shows an elongated shape in the same direction as the magnetic field orientation, which may be evidence that magnetic pressure opposes the H II region expansion. Serkowski's relation was used to determine individual values of the total-to-selective extinction ratio (R_V) distribution and a weighted mean value of R_V = 3.17 ± 0.05. We derive a foreground component of the polarization degree that is consistent with the literature value for this Galactic region.
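As a rough illustration of this step: Serkowski's relation, P(λ) = P_max exp[−K ln²(λ_max/λ)], is log-quadratic in λ, so it can be fit to multiband polarization degrees with an ordinary polynomial fit, and λ_max then yields R_V through the empirical relation R_V ≈ 5.6 λ_max (λ_max in μm). The band wavelengths and synthetic measurements below are illustrative assumptions, not data from this study:

```python
# Sketch: fit Serkowski's relation to multiband polarimetry and convert
# lambda_max to R_V via the empirical relation R_V ~ 5.6 * lambda_max
# (lambda_max in microns). All numbers below are illustrative.
import numpy as np

def serkowski(lam, p_max, lam_max, k=1.15):
    """Serkowski law: P(lam) = P_max * exp(-k * ln^2(lam_max / lam))."""
    return p_max * np.exp(-k * np.log(lam_max / lam) ** 2)

# Effective wavelengths (microns) of typical B, V, R, I bands
lam = np.array([0.44, 0.55, 0.64, 0.79])

# Synthetic polarization degrees (percent), generated from the model itself
p_obs = serkowski(lam, p_max=2.0, lam_max=0.566)

# ln P is quadratic in x = ln(lam):  ln P = c2*x^2 + c1*x + c0,
# with c2 = -k and lambda_max = exp(-c1 / (2*c2))
c2, c1, c0 = np.polyfit(np.log(lam), np.log(p_obs), 2)
k_fit = -c2
lam_max_fit = np.exp(-c1 / (2.0 * c2))
r_v = 5.6 * lam_max_fit  # empirical total-to-selective extinction ratio

print(round(r_v, 2))  # prints 3.17 for lambda_max = 0.566 micron
```

In practice each star's P(λ) measurements carry errors, so a weighted fit per star gives the R_V distribution and weighted mean quoted above.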
In the last few years, the use of ontologies has spread thanks to the rise of the Semantic Web. They have become a crucial tool in information systems, as they explicitly state the meaning of information, making it possible to share it and to achieve higher levels of interoperability. However, as knowledge representation models, they can also extend the capabilities of other fields. In particular, in the context of Embodied Conversational Agents (ECAs), they can provide agents with semantic knowledge and therefore enhance their intellectual skills. In this paper, we propose an approach to explore the synergies between these technologies. Thus, we have developed a multimodal ECA that exploits the knowledge provided by the Linked Data initiative to help users in their information-search tasks. Based on a semantic-guided keyword search, our approach is flexible enough to: 1) deal with different Linked Data repositories and 2) handle different search/knowledge domains in a multilingual way. To illustrate the potential of our approach, we have focused on the case of DBpedia, as it mirrors the information stored in Wikipedia, providing a semantic entry point to it.
BSSRDF Estimation from Single Images
Munoz, Adolfo; Echevarria, Jose I.; Seron, Francisco J.
Computer Graphics Forum, April 2011, Volume 30, Issue 2
Journal Article
Peer reviewed
Open access
We present a novel method to estimate an approximation of the reflectance characteristics of optically thick, homogeneous translucent materials using only a single photograph as input. First, we approximate the diffusion profile as a linear combination of piecewise constant functions, an approach that enables a linear system minimization and maximizes robustness in the presence of suboptimal input data inferred from the image. We then fit to a smoother monotonically decreasing model, ensuring continuity on its first derivative. We show the feasibility of our approach and validate it in controlled environments, comparing well against physical measurements from previous works. Next, we explore the performance of our method in uncontrolled scenarios, where neither lighting nor geometry are known. We show that these can be roughly approximated from the corresponding image by making two simple assumptions: that the object is lit by a distant light source and that it is globally convex, allowing us to capture the visual appearance of the photographed material. Compared with previous works, our technique offers an attractive balance between visual accuracy and ease of use, allowing its use in a wide range of scenarios including off-the-shelf, single images, thus extending the current repertoire of real-world data acquisition techniques.
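The first stage described above can be sketched as a linear least-squares problem: with a piecewise constant basis, each radial sample of the observed falloff constrains one bin of the diffusion profile. The exponential "observed" profile and bin layout below are illustrative assumptions, not the paper's data or exact formulation:

```python
# Sketch: approximate a radial diffusion profile R(r) as a linear
# combination of piecewise constant basis functions, reducing profile
# estimation to linear least squares. The observed profile is synthetic.
import numpy as np

edges = np.linspace(0.0, 1.0, 9)        # radial bin edges (8 bins)
r = np.linspace(0.005, 0.995, 200)      # radii sampled from the image
observed = np.exp(-4.0 * r)             # synthetic observed falloff

# Design matrix: A[i, j] = 1 if radius r[i] falls in bin j, else 0
A = np.zeros((r.size, edges.size - 1))
for j in range(edges.size - 1):
    A[:, j] = (r >= edges[j]) & (r < edges[j + 1])

# Least-squares coefficients; with disjoint indicator columns, each
# coefficient is simply the mean of the observed samples in its bin
coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
```

The paper's second stage then fits a smooth, monotonically decreasing model (with a continuous first derivative) to these constants; that refinement is omitted from this sketch.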
This paper presents, as far as the authors are aware, a complete and extended new taxonomy of shape specification modeling techniques and a characterization of shape design systems, all based on the relationship of users’ knowledge to the modeling system they use to generate shapes. In-depth knowledge of this relationship is not usually conveyed in regular university training courses such as bachelor’s, master’s and continuing education. For this reason, we believe that it is necessary to modify the learning process, offering a more global vision of all the currently existing techniques and extending training in those related to algorithmic modeling techniques. We consider the latter to be the most powerful current techniques for modeling complex shapes that cannot be modeled with the usual techniques known to date. Therefore, the most complete training should include everything from the usual geometry to textual programming. This would take us a step further along the way to more powerful design environments. The proposed taxonomy could serve as a guideline to help improve the learning process of students and designers in a complex environment with increasingly powerful requirements and tools. The term “smart” is widely used nowadays, e.g. smart phones, smart cars, smart homes, smart cities, and similar terms such as “smart shape modeling” have followed. At present, the term smart is applied from a marketing point of view whenever an innovation is used to solve a complex problem; this is the case for what is currently called smart shape modeling. In the future, however, this concept should mean a much better design environment than today’s. The smart future requires better trained and skilled engineers, architects, designers and technical students.
This means that they must be prepared to contribute to the creation of new knowledge, to the use of innovations to solve complex problems of form, and to the extraction of the relevant pieces of intelligence from the growing volume of knowledge and technologies accessible today. Our taxonomy is presented from the point of view of methods that are possibly furthest away from what is considered today as “intelligent shape modeling” up to the limit of what is achievable today, which the authors call the “Generic Shape Algorithm”. Finally, we discuss the characteristics that a shape modeling system must have to be truly “intelligent”: it must be “proactive” in applying innovative ideas to achieve a solution to a complex problem.
Rendering participating media is important for a number of domains, ranging from commercial applications (entertainment, virtual reality) to simulation systems (driving, flying, and space simulators) ...and safety analyses (driving conditions, sign visibility). This article surveys global illumination algorithms for environments including participating media. It reviews both appearance-based and physically-based media methods, including the single-scattering and the more general multiple-scattering techniques. The objective of the survey is the characterization of all these methods: identification of their base techniques, assumptions, limitations, and range of utilization. It concludes with some reflections about the suitability of the methods depending on the specific application involved, and possible future research lines.
The biosynthesis of structural and signaling molecules depends on intracellular concentrations of essential amino acids, which are maintained by a specific system of plasma membrane transporters. We ...identify a unique population of nutrient amino acid transporters (NATs) within the sodium-neurotransmitter symporter family and have characterized a member of the NAT subfamily from the larval midgut of the Yellow Fever vector mosquito, Aedes aegypti (aeAAT1, AAR08269), which primarily supplies phenylalanine, an essential substrate for the synthesis of neuronal and cuticular catecholamines. Further analysis suggests that NATs constitute a comprehensive transport metabolon for the epithelial uptake and redistribution of essential amino acids including precursors of several neurotransmitters. In contrast to the highly conserved subfamily of orthologous neurotransmitter transporters, lineage-specific, paralogous NATs undergo rapid gene multiplication/substitution that enables a high degree of evolutionary plasticity of nutrient amino acid uptake mechanisms and facilitates environmental and nutrient adaptations of organisms. These findings provide a unique model for understanding the molecular mechanisms, physiology, and evolution of amino acid and neurotransmitter transport systems and imply that monoamine and GABA transporters evolved by selection and conservation of earlier neuronal NATs.
Motion Blur Rendering: State of the Art
Navarro, Fernando; Serón, Francisco J.; Gutierrez, Diego
Computer Graphics Forum, March 2011, Volume 30, Issue 1
Journal Article
Peer reviewed
Open access
Motion blur is a fundamental cue in the perception of objects in motion. This phenomenon manifests as a visible trail along the trajectory of the object and is the result of the combination of relative motion and light integration taking place in film and electronic cameras. In this work, we analyse the mechanisms that produce motion blur in recording devices and the methods that can simulate it in computer-generated images. Light integration over time is one of the most expensive processes to simulate in high-quality renders; as such, we make an in-depth review of the existing algorithms and categorize them in the context of a formal model that highlights their differences, strengths and limitations. We conclude this report by proposing a number of alternative classifications that will help the reader identify the best technique for a particular scenario.
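The time-integration view described above can be illustrated with a minimal sketch: average many instantaneous exposures of a moving object over the shutter interval. The 1-D "sensor", box shutter, and linear motion are illustrative assumptions, not a technique from the survey:

```python
# Minimal sketch of motion blur as light integration over the shutter
# interval: the final image is the average of instantaneous exposures of
# a moving 1-pixel object, producing a trail along its trajectory.
import numpy as np

width, samples = 32, 16        # sensor pixels, temporal samples
image = np.zeros(width)
x0, x1 = 5, 15                 # object position at shutter open / close

for i in range(samples):
    t = i / (samples - 1)              # normalized shutter time in [0, 1]
    x = round(x0 + t * (x1 - x0))      # instantaneous object position
    image[x] += 1.0 / samples          # accumulate this exposure's light

# Energy inside the trail equals one static exposure's worth of light
print(image[x0:x1 + 1].sum())  # prints 1.0
```

The trail carries the same total energy as a static exposure but spreads it along the trajectory, which is why moving objects appear dimmer per pixel; production renderers replace this uniform temporal sampling with stochastic or analytic integration.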