• Novel fully-convolutional models are proposed to improve multi-site wind forecasting.
• The fully-convolutional models specialize in processing zonal and meridional velocities.
• Specialized processing of zonal and meridional velocities improves wind forecasting.
• The approach is suitable for predicting phenomena composed of multiple factors.
The increasing presence of intermittent renewables in modern power systems motivates the development of methods for renewables forecasting, since more accurate forecasts may imply lower operational costs for power systems. In this context, this paper proposes a family of architectures based on fully convolutional neural networks for wind speed prediction: the ComPonentNet (CPNet) family. CPNet produces multi-site spatio-temporal forecasts for phenomena that may be decomposed into multiple components (e.g., wind, which may be decomposed into u- and v-wind). The CPNet family includes three architectures: the core CPNet, the fully-fused CPNet and the bottom-fused CPNet. Each architecture processes the components of the phenomenon in a different manner: in separate branches of convolutional operations, in the same branch, or in a mix of separate and joint branches. This paper investigates the performance of each CPNet architecture in forecasting multi-site spatio-temporal wind speed, and compares the CPNet framework against the U-Net architecture. The results indicate that the proposed framework is promising and that splitting the processing of wind components may be beneficial to spatio-temporal forecasting, with results that outperform the U-Net.
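The paper's exact CPNet layer configurations are not reproduced here, but the core component-splitting idea can be sketched minimally. In this illustration, all shapes, kernels, and the additive fusion step are hypothetical; each wind component flows through its own convolutional branch before the branch outputs are fused:

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2D cross-correlation of a single-channel map x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
u_wind = rng.standard_normal((8, 8))  # hypothetical zonal (u) field on an 8x8 grid
v_wind = rng.standard_normal((8, 8))  # hypothetical meridional (v) field

# Core-CPNet style: each component is processed in its own branch ...
k_u = rng.standard_normal((3, 3))
k_v = rng.standard_normal((3, 3))
feat_u = np.maximum(conv2d(u_wind, k_u), 0.0)  # conv + ReLU, u branch
feat_v = np.maximum(conv2d(v_wind, k_v), 0.0)  # conv + ReLU, v branch

# ... and only afterwards are the branch outputs fused into joint features.
# (A fully-fused variant would instead stack u and v as input channels of
# a single branch from the start.)
fused = feat_u + feat_v
print(fused.shape)  # (6, 6): valid convolution shrinks each side by 2
```

The bottom-fused variant sits between these two extremes, mixing separate early layers with joint later ones.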
The increasing penetration of intermittent renewable energy in power systems brings operational challenges. One way of addressing them is to enhance the predictability of renewables through accurate forecasting. Convolutional Neural Networks (Convnets) are a successful technique for processing space-structured multi-dimensional data. In our work, we propose the U-Convolutional model to predict hourly wind speed at a single location using spatio-temporal data with multiple explanatory variables as input. The U-Convolutional model is composed of a U-Net part, which synthesizes the input information, and a Convnet part, which maps the synthesized data into a single-site wind prediction. We compare our approach with advanced Convnets, a fully connected neural network, and univariate models. We use time series from the Climate Forecast System Reanalysis as datasets and select temperature and the u- and v-components of wind as explanatory variables. The proposed models are evaluated at multiple locations (totaling 181 target series) and multiple forecasting horizons. The results indicate that our proposal is promising for spatio-temporal wind speed prediction, showing competitive performance on both time horizons for all datasets.
Ad hoc teamwork is a research topic in multi-agent systems whereby an agent (the “ad hoc agent”) must successfully collaborate with a set of unknown agents (the “teammates”) without any prior coordination or communication protocol. However, research in ad hoc teamwork has focused predominantly on agent-only teams rather than agent-human teams, which we believe are an exciting research avenue with enormous application potential in human-robot teams. This paper taps into this potential by proposing HOTSPOT, the first framework for ad hoc teamwork in human-robot teams. Our framework comprises two main modules, addressing the two key challenges in the interaction between a robot acting as the ad hoc agent and human teammates. First, a decision-theoretic module is responsible for all task-related decision-making (task identification, teammate identification, and planning). Second, a communication module uses natural language processing to parse all communication between the robot and the human. To evaluate our framework, we use a task in which a mobile robot and a human cooperatively collect objects in an open space, illustrating the main features of our framework in a real-world task.
Latent Trees for Coreference Resolution Fernandes, Eraldo Rezende; dos Santos, Cícero Nogueira; Milidiú, Ruy Luiz
Computational Linguistics - Association for Computational Linguistics,
12/2014, Volume 40, Issue 4
Journal Article
Peer reviewed
Open access
We describe a structure learning system for unrestricted coreference resolution that explores two key modeling techniques: latent coreference trees and automatic entropy-guided feature induction. The latent tree modeling makes the learning problem computationally feasible because it incorporates a meaningful hidden structure. Additionally, using an automatic feature induction method, we can efficiently build enhanced nonlinear models using linear model learning algorithms. We present empirical results that highlight the contribution of each modeling technique used in the proposed system. Empirical evaluation is performed on the multilingual unrestricted coreference CoNLL-2012 Shared Task datasets, which comprise three languages: Arabic, Chinese and English. We apply the same system to all languages, except for minor adaptations to some language-dependent features such as nested mentions and specific static pronoun lists. A previous version of this system was submitted to the CoNLL-2012 Shared Task closed track, achieving the best official score among the competitors. The unique enhancement added to the current system version is the inclusion of candidate arcs linking nested mentions for the Chinese language. By including such arcs, the score increases by almost 4.5 points for that language. The current system achieves a higher score, corresponding to a further error reduction, and is the best performing system for each of the three languages.
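The latent-tree idea can be illustrated on a toy document: each mention selects an antecedent arc (or an arc from an artificial root), and coreference clusters fall out as the subtrees hanging from the root. The mentions and arc scores below are hypothetical; the actual system learns its scores from entropy-guided induced features rather than using fixed values:

```python
mentions = ["Bill", "he", "the car", "it"]
ROOT = -1
# score[(a, m)]: hypothetical score for the arc antecedent a -> mention m.
score = {
    (ROOT, 0): 0.0,
    (ROOT, 1): -1.0, (0, 1): 2.0,
    (ROOT, 2): 0.5, (0, 2): -3.0, (1, 2): -2.0,
    (ROOT, 3): -0.5, (0, 3): -1.0, (1, 3): -2.0, (2, 3): 1.5,
}

# Because each mention has exactly one incoming arc, the best tree
# decomposes into an independent argmax per mention.
parent = {m: max([ROOT] + list(range(m)), key=lambda a: score[(a, m)])
          for m in range(len(mentions))}

# Coreference clusters are the subtrees directly below ROOT.
def chain_root(m):
    while parent[m] != ROOT:
        m = parent[m]
    return m

clusters = {}
for m in range(len(mentions)):
    clusters.setdefault(chain_root(m), []).append(mentions[m])
print(list(clusters.values()))  # [['Bill', 'he'], ['the car', 'it']]
```

The hidden structure is "latent" because training data only provides the clusters, never which specific tree produced them.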
Entropy Guided Transformation Learning: Algorithms and Applications (ETL) presents a machine learning algorithm for classification tasks. ETL generalizes Transformation Based Learning (TBL) by solving the TBL bottleneck: the construction of good template sets. ETL automatically generates templates using decision tree decomposition. The authors also describe ETL Committee, an ensemble method that uses ETL as the base learner. Experimental results show that ETL Committee improves the effectiveness of ETL classifiers. The application of ETL is presented for four Natural Language Processing (NLP) tasks: part-of-speech tagging, phrase chunking, named entity recognition and semantic role labeling. Extensive experimental results demonstrate that ETL is an effective way to learn accurate transformation rules, and that it shows better results than TBL with handcrafted templates for all four tasks. By avoiding handcrafted templates, ETL extends the applicability of transformation rules to a greater range of tasks. Suitable for both advanced undergraduate and graduate courses, the book provides a comprehensive introduction to ETL and its NLP applications.
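As background for the bottleneck ETL addresses, here is a minimal sketch of one TBL step with a single hand-written template. The toy sentence, tagset, and template are illustrative only; ETL's contribution is generating such templates automatically via decision tree decomposition instead of writing them by hand:

```python
from collections import Counter, defaultdict

# Toy gold-tagged sentence (hypothetical tagset).
words = ["we", "can", "stamp", "the", "can"]
gold  = ["PRON", "AUX", "VERB", "DET", "NOUN"]

# Baseline: tag every word with its most frequent gold tag.
freq = defaultdict(Counter)
for w, t in zip(words, gold):
    freq[w][t] += 1
tags = [freq[w].most_common(1)[0][0] for w in words]

# One hand-written template: "change tag A to B when the previous tag is P".
# Candidate rules are instantiated from the current errors, scored by
# (errors fixed) - (correct tags broken), and the best rule is applied.
def rule_score(rule):
    a, b, p = rule
    s = 0
    for i in range(1, len(words)):
        if tags[i] == a and tags[i - 1] == p:
            s += (gold[i] == b) - (gold[i] == a)
    return s

candidates = {(tags[i], gold[i], tags[i - 1])
              for i in range(1, len(words)) if tags[i] != gold[i]}
best = max(candidates, key=rule_score)
a, b, p = best
tags = [b if i > 0 and tags[i] == a and tags[i - 1] == p else tags[i]
        for i in range(len(words))]
print(best, tags)
```

A full TBL learner repeats this greedy selection until no rule improves the score; the quality of the result hinges entirely on the template set, which is exactly what ETL induces automatically.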
Structured prediction provides a flexible modeling framework for several relevant problems. Sequences, trees, disjoint intervals and matchings are useful examples of the types of structures we would like to predict. An elegant learning scheme for this prediction setting is the Structured Perceptron algorithm, which is guaranteed to converge under certain linear separability conditions. We propose SPN, a framework that integrates a very simple structured layer on top of a latent-costs network. Our key contribution is a novel loss function that incorporates structural information and simplifies learning. The effectiveness of this framework is illustrated with sequence prediction problems. Since our experiments concern NLP tasks, we explore LSTM neural network architectures to model the latent-costs layer. We perform basic experiments with chunking in English, where the SPN predictor outperforms its CRF equivalent. Our initial findings strongly indicate that SPN is a versatile framework with a powerful learning strategy.
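The Structured Perceptron for sequence labeling can be sketched in a few lines: decode the best sequence with Viterbi, then promote gold features and demote predicted ones whenever they differ. The toy data and linear feature scheme below are hypothetical; in SPN the linear scores would instead come from a latent-costs network such as an LSTM:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best label sequence under additive emission + transition scores."""
    T, L = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

def perceptron_update(W, Tr, x, gold, lr=1.0):
    """One structured-perceptron step: promote gold features, demote predicted."""
    pred = viterbi(x @ W, Tr)
    if pred != gold:
        for t, (g, p) in enumerate(zip(gold, pred)):
            W[:, g] += lr * x[t]
            W[:, p] -= lr * x[t]
        for t in range(1, len(gold)):
            Tr[gold[t - 1], gold[t]] += lr
            Tr[pred[t - 1], pred[t]] -= lr
    return pred

# Tiny separable task: one-hot token features, label = token identity.
X = [np.eye(2)[[0, 1, 0]], np.eye(2)[[1, 1, 0]]]
Y = [[0, 1, 0], [1, 1, 0]]
W = np.zeros((2, 2))   # emission weights: feature -> label score
Tr = np.zeros((2, 2))  # transition scores between consecutive labels
for _ in range(5):
    for x, y in zip(X, Y):
        perceptron_update(W, Tr, x, y)
print([viterbi(x @ W, Tr) for x in X])  # [[0, 1, 0], [1, 1, 0]]
```

On this linearly separable toy set the updates stop after the first epoch, matching the convergence guarantee mentioned above.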
SPTP is a model for the pipeline transportation of petroleum products. It uses a directed graph G, where arcs represent pipes and nodes represent locations. In this paper, we analyze the complexity of finding a minimum-makespan solution to SPTP, a problem we call SPTMP. We prove that, for any fixed ε > 0, there is no η^{1−ε}-approximate algorithm for SPTMP unless P = NP, where η is the input size. This result also holds if G is both planar and acyclic. If G is acyclic, then we give an m-approximate algorithm for SPTMP, where m is the number of arcs in G.
Given an alphabet {a_1, …, a_n} with a corresponding list of weights w_1, …, w_n, and a number $L \geq \lceil \log n \rceil$, we introduce the WARM-UP algorithm, a Lagrangian algorithm for constructing suboptimal length-restricted prefix codes. Two implementations of the algorithm are proposed. The first one has time complexity $O(n \log n + n \log \overline{w})$, where $\overline{w}$ is the largest weight in the list. The second one runs in $O(nL \log (n/L))$ time. The number of additional bits per symbol generated by WARM-UP, compared to Huffman encoding, is not greater than $1/\psi^{L-\lceil \log (n+ \lceil \log n \rceil -L) \rceil-2}$. Even though the algorithm is approximate, it presents optimal behavior in practical settings. An important feature of the proposed algorithm is its implementation simplicity: it is basically a selected sequence of Huffman tree constructions for modified weights. The approach gives some new insights into the problem.
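The "sequence of Huffman tree constructions for modified weights" can be illustrated with a naive reweighting loop. The doubling floor below is a stand-in schedule chosen only for illustration, not the Lagrangian multiplier search WARM-UP actually performs:

```python
import heapq

def huffman_lengths(weights):
    """Code lengths of a Huffman tree for the given positive weights."""
    heap = [(w, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    depth = [0] * len(weights)
    while len(heap) > 1:
        w1, s1 = heapq.heappop(heap)
        w2, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # every symbol under the merge gets deeper
            depth[i] += 1
        heapq.heappush(heap, (w1 + w2, s1 + s2))
    return depth

weights = [1, 1, 2, 4, 8, 16, 32]
L = 4
print(huffman_lengths(weights))    # unrestricted: longest code has 6 bits

# Raising small weights to a growing floor (a hypothetical stand-in for
# WARM-UP's modified weights) pushes rare symbols up the tree until the
# length restriction max length <= L holds.
floor = 1
lengths = huffman_lengths(weights)
while max(lengths) > L:
    floor *= 2
    lengths = huffman_lengths([max(w, floor) for w in weights])
print(lengths, max(lengths) <= L)
```

Each iteration is just an ordinary Huffman construction on modified weights, which is what makes the approach simple to implement.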
A strategy for searching with different access costs Laber, Eduardo Sany; Milidiú, Ruy Luiz; Pessoa, Artur Alves
Theoretical Computer Science,
09/2002, Volume 287, Issue 2
Journal Article, Conference Proceeding
Peer reviewed
Open access
Let us consider an ordered set of keys A = {a_1 < ⋯ < a_n}, where the probability of searching a_i is 1/n, for i = 1, …, n. If the cost of testing each key is similar, then the standard binary search is the strategy with minimum expected access cost. However, if the cost of testing a_i is c_i, for i = 1, …, n, then the standard binary search is not necessarily the best strategy.
In this paper, we prove that the expected access cost of an optimal search strategy is bounded above by 4C ln(n+1)/n, where C = ∑_{i=1}^{n} c_i. Furthermore, we show that this upper bound is asymptotically tight up to constant factors. The proof of this upper bound is constructive and generates a 4 ln(n+1)-approximation algorithm for constructing near-optimal search strategies. This algorithm runs in O(n^2) time and requires O(n) space, which can be useful for practical cases, since the best known exact algorithm for this problem runs in O(n^3) time and requires O(n^2) space.
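To make the setup concrete, the following sketch compares the expected access cost of standard binary search against an exact dynamic program in the O(n^3)-time style mentioned above, on a toy cost vector. The instance is hypothetical, and the paper's cheaper O(n^2) approximation algorithm is not reproduced here:

```python
from functools import lru_cache

def optimal_cost(c):
    """Exact DP: min expected access cost with uniform probabilities 1/n
    and per-key test costs c[i]. A strategy is a binary search tree; the
    cost of finding a key is the sum of test costs on its root-to-key path."""
    n = len(c)

    @lru_cache(maxsize=None)
    def best(i, j):
        if i > j:
            return 0.0
        # Every key in [i, j] pays the root's test cost c[r].
        return min(c[r] * (j - i + 1) + best(i, r - 1) + best(r + 1, j)
                   for r in range(i, j + 1))

    return best(0, n - 1) / n

def binary_search_cost(c):
    """Expected access cost of the standard midpoint binary search."""
    n = len(c)
    def cost(i, j):
        if i > j:
            return 0.0
        m = (i + j) // 2
        return c[m] * (j - i + 1) + cost(i, m - 1) + cost(m + 1, j)
    return cost(0, n - 1) / n

c = [1.0, 1.0, 100.0, 1.0, 1.0]   # testing the middle key is expensive
# Binary search probes the costly middle key first for every search;
# the optimal strategy pushes that probe deep into the tree instead.
print(binary_search_cost(c), optimal_cost(c))
```

On this instance binary search pays the expensive probe on every search, while the optimal tree roots at a cheap key, which is exactly why nonuniform access costs break the optimality of binary search.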