TESS (Transiting Exoplanet Survey Satellite) was launched in 2018 to observe bright stars in the solar neighbourhood and search for transiting exoplanets. After the completion of the two-year nominal mission, TESS has provided 2\,minute cadence photometry of over 200\,000 stars. This large collection of light curves opens the possibility of studying the statistical and temporal properties of this ensemble of stars. Most of the currently available data pipelines are designed to work on a single sector at a time. We present a new TESS data pipeline called {\tt Taranga}, which merges multi-sector light curves, performs a period search for all the observed stars, and stores the statistical results in a database. The {\tt Taranga} pipeline has three components, which 1) process the PDCSAP fluxes of each sector and create a merged PDCSAP light curve, 2) perform a similar operation on the SAP fluxes, and 3) generate the periodograms of the merged SAP and PDCSAP light curves and perform peak identification. For all 232\,122 stars observed in short cadence during the nominal TESS mission, we provide the merged PDCSAP and SAP light curves along with their periodograms, together with a database containing the statistics of all results produced by {\tt Taranga} for these stars.
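The two core steps described above, merging per-sector fluxes and searching the merged series for a periodic signal, can be sketched in a few lines. The function names and the brute-force least-squares periodogram below are illustrative assumptions, not {\tt Taranga}'s actual API or implementation:

```python
import numpy as np

def merge_sectors(sectors):
    """Merge per-sector light curves into one relative-flux series.

    Each sector's flux is divided by its own median so that
    sector-to-sector offsets cancel. `sectors` is a list of
    (time, flux) array pairs.
    """
    times, fluxes = [], []
    for t, f in sectors:
        times.append(t)
        fluxes.append(f / np.median(f))
    t = np.concatenate(times)
    f = np.concatenate(fluxes)
    order = np.argsort(t)
    return t[order], f[order]

def periodogram_peak(t, f, freqs):
    """Return the trial frequency with the largest sine-fit power.

    At each trial frequency the mean-subtracted flux is projected
    onto sine and cosine terms and the resulting power is recorded.
    """
    y = f - f.mean()
    powers = []
    for nu in freqs:
        s = np.sin(2 * np.pi * nu * t)
        c = np.cos(2 * np.pi * nu * t)
        powers.append((y @ s) ** 2 / (s @ s) + (y @ c) ** 2 / (c @ c))
    return freqs[int(np.argmax(powers))]
```

In practice a tool such as {\tt astropy}'s Lomb-Scargle periodogram would replace the brute-force loop, but the sketch shows why median normalisation is needed before sectors taken months apart can share one periodogram.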
Personalized electronic program guides help users overcome information overload in the TV and video domain by exploiting recommender systems that automatically compile lists of novel and diverse video assets, based on implicitly or explicitly defined user preferences. In this context, we assume that user preferences can be specified by program genres (documentary, sports, …) and that an asset can be labeled by one or more program genres, thus allowing an initial and coarse preselection of potentially interesting assets. As these assets may come from various sources, program genre labels may not be consistent among these sources, or may not be given at all, while we assume that each asset has a possibly short textual description. In this paper, we tackle this problem by considering whether those textual descriptions can be effectively used to automatically retrieve the most related TV shows for a specific program genre. More specifically, we compare a statistical approach called logistic regression with an enhanced version of the commonly used vector space model, called random indexing, where the latter is extended by means of a negation operator based on quantum logic. We also apply a new feature generation technique based on explicit semantic analysis for enriching the textual description associated with a TV show with additional features extracted from Wikipedia.
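The logistic-regression baseline in this comparison amounts to turning each textual description into a feature vector and fitting per-genre weights. The sketch below is a minimal, assumed setup with a binary bag-of-words and plain gradient descent; the paper's actual feature pipeline (random indexing, explicit semantic analysis) is much richer:

```python
import numpy as np

def bag_of_words(texts, vocab):
    """Binary bag-of-words features over a fixed, illustrative vocabulary."""
    X = np.zeros((len(texts), len(vocab)))
    for i, text in enumerate(texts):
        words = set(text.lower().split())
        for j, w in enumerate(vocab):
            X[i, j] = 1.0 if w in words else 0.0
    return X

def train_logistic(X, y, lr=0.5, steps=500):
    """Fit logistic-regression weights and bias by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted genre probability
        g = p - y                               # gradient of the log loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b
```

One binary classifier of this kind per program genre, ranked by predicted probability, yields the genre-specific retrieval the abstract describes.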
This paper presents an analysis that searched for systematic effects within the CoRoT exoplanet field light curves. The analysis identified a systematic effect that modified the zero point of most CoRoT exposures as a function of stellar magnitude. We could find this effect only after preparing a set of learning light curves that were relatively free of stellar and instrumental noise. Correcting for this effect, rejecting outliers that appear in almost every exposure, and applying SysRem, reduced the stellar RMS by about 20%, without attenuating transit signals.
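SysRem, applied in the last step above, alternately fits a per-star coefficient and a per-exposure coefficient whose product models a shared systematic. The sketch below is a minimal single-component version with uniform uncertainties assumed; the full algorithm weights every point by its error bar and removes several components in turn:

```python
import numpy as np

def sysrem_iteration(resid, n_inner=10):
    """One SysRem component: fit resid[i, j] ~ c[i] * a[j].

    `resid` is a (stars x exposures) matrix of mean-subtracted
    light curves. Returns the rank-one systematic model
    c[:, None] * a[None, :], which is subtracted from the data.
    """
    n_stars, n_exp = resid.shape
    a = np.ones(n_exp)          # per-exposure systematic amplitude
    c = np.zeros(n_stars)       # per-star sensitivity to the systematic
    for _ in range(n_inner):
        c = resid @ a / (a @ a)     # best c for fixed a (least squares)
        a = c @ resid / (c @ c)     # best a for fixed c
    return np.outer(c, a)
```

Because the fit is rank-one, a genuine transit confined to one star barely projects onto the shared component, which is why the RMS improvement quoted above comes without attenuating transit signals.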
The movie industry produces thousands of feature films and TV series annually. Such massive data volumes would take consumers more than a lifetime to watch. Therefore, summarization of narrative media, which aims to provide concise and informative video summaries, has become a popular topic of research. However, most of the summarization solutions so far aim to represent just the overall atmosphere of the video at the expense of the story line. In this paper we describe a novel approach for automated creation of summaries for narrative videos. We propose an automated content analysis and summarization framework for creating moving-image summaries. We aim to preserve the story line to the level that users can watch the summary instead of the original content. Our solution is based on textual cues available in subtitles and movie scripts. We extract features like keywords and main characters' names and presence, and combine them in an importance function to identify the moments most relevant for preserving the story line. We develop several summarization methods and evaluate the quality of the resulting summaries in terms of user understanding and user satisfaction through a user test.
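The importance function described above combines textual features into a per-segment score. The sketch below is one assumed form, a weighted count of keyword and character-name mentions per subtitle segment, with illustrative weights rather than the paper's calibrated combination:

```python
def importance(segments, keywords, characters, w_kw=1.0, w_char=0.5):
    """Score subtitle segments for story-line relevance.

    Each segment is a text snippet; its score is a weighted count of
    keyword occurrences plus main-character name mentions.
    """
    scores = []
    for text in segments:
        words = text.lower().split()
        kw = sum(words.count(k.lower()) for k in keywords)
        ch = sum(words.count(c.lower()) for c in characters)
        scores.append(w_kw * kw + w_char * ch)
    return scores

def top_segments(segments, scores, k):
    """Pick the k highest-scoring segments, kept in story order."""
    ranked = sorted(range(len(segments)), key=lambda i: -scores[i])[:k]
    return [segments[i] for i in sorted(ranked)]
```

Keeping the selected segments in their original order, as `top_segments` does, is what lets the summary play back as a coherent story rather than a highlight reel.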
The detection of radial and non-radial solar-like oscillations in thousands of G-K giants with CoRoT and Kepler is paving the road for detailed studies of stellar populations in the Galaxy. The available average seismic constraints allow a precise and largely model-independent determination of stellar radii (hence distances) and masses. We here briefly report on the distance determination of thousands of giants in the CoRoT and Kepler fields of view.
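The "average seismic constraints" here are the frequency of maximum power nu_max and the large frequency separation Delta_nu, which enter the standard seismic scaling relations. A minimal sketch, using commonly adopted solar reference values (exact choices vary between studies):

```python
# Solar reference values (commonly used; individual studies differ slightly).
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN = 135.1       # muHz
TEFF_SUN = 5777.0     # K

def seismic_radius(nu_max, dnu, teff):
    """Radius in solar units: R ~ nu_max * dnu^-2 * Teff^(1/2)."""
    return (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5

def seismic_mass(nu_max, dnu, teff):
    """Mass in solar units: M ~ nu_max^3 * dnu^-4 * Teff^(3/2)."""
    return (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
```

Given the radius, an apparent magnitude and an effective temperature then yield the distance, which is how the largely model-independent distance determinations mentioned above are obtained.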
No planet around HD 219542 B
Desidera, S.; Gratton, R. G.; Endl, M.; ...
Astronomy and Astrophysics (Berlin), 06/2004, Vol. 420, Issue 3
Journal Article, Peer-reviewed, Open access
The star HD 219542 B has been reported by us (Desidera et al. 2003) to show low-amplitude radial velocity variations that could be due to the presence of a Saturn-mass planetary companion or to stellar activity phenomena. In this letter we present the results of the continuation of the radial velocity monitoring as well as a discussion of literature determinations of the chromospheric activity of the star (Wright et al. 2004). These new data indicate that the observed radial velocity variations are likely related to stellar activity. In particular, there are indications that HD 219542 B underwent a phase of enhanced stellar activity in 2002 while the activity level has been lower in both 2001 and 2003. Our 2003 radial velocity measurements now deviate from our preliminary orbital solution and the peak in the power spectrum at the proposed planet period is severely reduced by the inclusion of the new data. We therefore dismiss the planet hypothesis as the cause of the radial velocity variations.
Recommender systems typically require feedback from the user to learn the user's taste. This feedback can come in two forms: explicit and implicit. Explicit feedback consists of ratings provided by the user for a number of items, while implicit feedback comes from observing user actions on items. These actions have to be interpreted by the recommender system and translated into a rating. In this paper we propose a method to learn how to translate user actions on items to ratings on these items by correlating user actions with explicit feedback. We do this by associating user actions to rated items and subsequently applying naive Bayesian classification to rate new items with which the user has interacted. We apply and evaluate our method on data from a web-based music service and we show its potential as an addition to explicit rating.
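The naive Bayes step above can be sketched as follows: count how often each action co-occurs with each explicit rating, then pick the maximum a posteriori rating for a new item's action set. The action names below are illustrative, not the music service's actual event types:

```python
import math
from collections import defaultdict

def train_nb(examples):
    """Learn rating priors and per-rating action counts.

    `examples` is a list of (actions, rating) pairs, where `actions`
    is the set of observed actions on that item (e.g. 'played_full',
    'skipped' -- illustrative names).
    """
    prior = defaultdict(int)            # rating -> item count
    cond = defaultdict(int)             # (rating, action) -> co-occurrence count
    for actions, r in examples:
        prior[r] += 1
        for a in actions:
            cond[(r, a)] += 1
    return prior, cond, len(examples)

def predict_rating(actions, model, ratings=(1, 2, 3, 4, 5)):
    """MAP rating, with Laplace smoothing for unseen combinations."""
    prior, cond, n = model
    best, best_score = None, float("-inf")
    for r in ratings:
        score = math.log((prior[r] + 1) / (n + len(ratings)))
        for a in actions:
            score += math.log((cond[(r, a)] + 1) / (prior[r] + 2))
        if score > best_score:
            best, best_score = r, score
    return best
```

The predicted rating can then feed the same collaborative filtering machinery that consumes explicit ratings, which is the "addition to explicit rating" the abstract refers to.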
In a recommender-based digital video recorder, TV programs are considered for automatic recording on a hard disk. The choice of which programs to record depends on (i) the scores assigned to the programs by the recommender, (ii) the times and channels at which the programs are broadcast, and (iii) the number of tuners available for recording. For a given set S of n programs that are broadcast in a given time interval, and a given number m of tuners, we consider the problem of determining a subset S' ⊆ S of programs with a maximum sum of scores that can be recorded with the m tuners. We show that this problem can be formulated as a min-cost flow problem and can be solved to optimality in O(mn²) time. In addition, we indicate how the min-cost flow approach can be adapted to take into account practical considerations such as uncertainties in the actual broadcast times of programs and programs that are broadcast multiple times in the given time interval. We present experimental results that suggest that, for realistic settings, near-optimal subsets can be determined on low-cost hardware.
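To see the structure of the problem, consider the single-tuner case (m = 1), which reduces to classic weighted interval scheduling and has a simple dynamic-programming solution. This is only an illustrative simplification; the paper's general m-tuner solution is the min-cost flow formulation:

```python
import bisect

def best_recordings(programs):
    """Maximum total score of non-overlapping programs for ONE tuner.

    Each program is a (start, end, score) triple; a program ending at
    time t is compatible with one starting at t. Weighted interval
    scheduling DP over programs sorted by end time.
    """
    progs = sorted(programs, key=lambda p: p[1])
    ends = [p[1] for p in progs]
    n = len(progs)
    best = [0.0] * (n + 1)   # best[i] = optimum over the first i programs
    for i, (start, end, score) in enumerate(progs, 1):
        # Index of the last earlier program that ends by `start`.
        j = bisect.bisect_right(ends, start, 0, i - 1)
        best[i] = max(best[i - 1],        # skip this program
                      best[j] + score)    # record it
    return best[n]
```

With m tuners the "skip or record" choice no longer decomposes this way, since up to m programs may overlap; modelling tuners as units of flow capacity is what recovers an optimal polynomial-time solution.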
Scientific data collected at ESO's observatories are freely and openly accessible online through the ESO Science Archive Facility. In addition to the raw data straight out of the instruments, the ESO Science Archive also contains four million processed science files available for use by scientists and astronomy enthusiasts worldwide. ESO subscribes to the FAIR (Findable, Accessible, Interoperable, Reusable) guiding principles for scientific data management and stewardship. All data in the ESO Science Archive are distributed according to the terms of the Creative Commons Attribution 4.0 International licence (CC BY 4.0).