Every minute, 500 h of footage is uploaded to YouTube.com, and ∼1900 h of footage is livestreamed on Twitch.tv. It can therefore be challenging for viewers to find the content they are most likely to enjoy. Highlight videos can entertain users who did not watch a broadcast, e.g. due to a lack of awareness, availability, or willingness. Furthermore, livestream content creators can grow their audiences by using highlights as advertisement, while also engaging casual followers who do not watch full broadcasts. However, hand-generating these videos is laborious; thus, automatic highlight detection is an active research challenge. We examine automatic highlight detection by focusing on esports broadcasts. Esports are an emerging genre of sport played using video games. We focus on League of Legends, a popular title with multiple professional leagues. Esports broadcasts are high-quality and professionally produced, mirroring traditional sports. We tackle the problem in a weakly supervised manner, utilising two datasets: one of ‘crowd-sourced’ highlight videos and one of unedited broadcasts. These datasets allow us to leverage massive data while hugely reducing the human cost of data curation and annotation. We propose two novel extensions to state-of-the-art rank-based highlight detection architectures: firstly, a multimodal hybrid-fusion architecture that enables audio-visual highlight detection, and secondly, a smoothing step that incorporates context into decision making. Both extensions show significant improvement over state-of-the-art ranking models, in places performing nearly twice as well as competing architectures. Additionally, we examine the effectiveness of each modality and compare ranking models with classification-based systems.
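The two ingredients named above, a rank-based objective and a context-incorporating smoothing step, could be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the hinge margin, the function names, and the simple moving-average kernel are all illustrative assumptions.

```python
import numpy as np

def pairwise_ranking_loss(pos_scores, neg_scores, margin=1.0):
    """Hinge ranking loss: segments drawn from highlight videos should
    outscore segments from unedited broadcasts by at least `margin`.
    (Margin value and pairing scheme are illustrative assumptions.)"""
    diffs = margin - (pos_scores[:, None] - neg_scores[None, :])
    return np.maximum(diffs, 0.0).mean()

def smooth_scores(scores, window=5):
    """Moving-average smoothing so each segment's highlight score
    incorporates its temporal context before thresholding."""
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="same")
```

A well-separated pair (e.g. a positive score of 3.0 against a negative of 1.0 with margin 1.0) incurs zero loss, and smoothing leaves an already-flat score sequence unchanged away from its edges.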
Competitive video game playing, an activity called esports, is increasingly popular to the point that there are now many professional competitions held for a variety of games. These competitions are broadcast in a professional manner similar to traditional sports broadcasting. Esports games are generally fast paced, and due to the virtual nature of these games, camera positioning can be limited. Therefore, knowing ahead of time where to position cameras, and what to focus a broadcast and associated commentary on, is a key challenge in esports reporting. This gives rise to moment-to-moment prediction within esports matches which can empower broadcasters to better observe and process esports matches. In this work we focus on this moment-to-moment prediction and in particular present techniques for predicting if a player will die within a set number of seconds for the esports title Dota 2. A player death is one of the most consequential events in Dota 2. We train our model on ‘telemetry’ data gathered directly from the game itself, and position this work as a novel extension of our previous work on the challenge. We use an enhanced dataset covering 9,822 Dota 2 matches. Since the publication of our previous work, new dataset parsing techniques developed by the WEAVR project enable the model to track more features, namely player status effects, and more importantly, to operate in real time. Additionally, we explore two new enhancements to the original model: one data-based extension and one architectural. Firstly, we employ learnt embeddings for categorical features, e.g. which in-game character a player has selected, and secondly, we explicitly model the temporal element of our telemetry data using recurrent neural networks. We find that these extensions and additional features all aid the predictive power of the model, achieving an F1 score of 0.54 compared to 0.17 for our previous model (on the new data).
We improve this further by experimenting with the length of the time-series in the input data and find that using 15 time steps further improves the F1 score to 0.62. This compares to an F1 score of 0.1 for a standard RNN on the same task. Additionally, a deeper analysis of the Time to Die model is carried out to assess its suitability as a broadcast aid.
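The two architectural ingredients described above, a learnt embedding for categorical features such as the selected character, and a recurrent pass over a 15-step telemetry window, could be sketched roughly as below. All sizes, weight initialisations, and the vanilla-RNN cell are illustrative stand-ins, not the trained model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_HEROES, EMB_DIM = 124, 8       # illustrative vocabulary / embedding sizes
HIDDEN, N_FEATS, T = 16, 12, 15  # 15 time steps, as in the best reported model

# Learnt embedding table for the categorical "selected hero" feature
# (random here; in practice trained jointly with the network).
hero_embedding = rng.normal(size=(N_HEROES, EMB_DIM))

# Vanilla RNN weights, stand-ins for the trained recurrent model.
W_x = rng.normal(size=(N_FEATS + EMB_DIM, HIDDEN)) * 0.1
W_h = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
w_out = rng.normal(size=HIDDEN) * 0.1

def predict_death(telemetry, hero_id):
    """telemetry: (T, N_FEATS) window of per-tick numeric features.
    Returns an estimate of P(player dies within the horizon)."""
    emb = hero_embedding[hero_id]
    h = np.zeros(HIDDEN)
    for x_t in telemetry:                      # recurrent pass over time steps
        inp = np.concatenate([x_t, emb])       # numeric features + embedding
        h = np.tanh(inp @ W_x + h @ W_h)
    logit = h @ w_out
    return 1.0 / (1.0 + np.exp(-logit))        # sigmoid -> probability
```

The embedding replaces a sparse one-hot hero encoding with a dense learnt vector, and the recurrent state lets the prediction depend on how the telemetry evolved over the window rather than on a single snapshot.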
• Moment-to-moment prediction in esports is important for informing the audience.
• We use neural networks to predict if a player will die in Dota 2 matches.
• Our prediction uses ‘telemetry’ data gathered directly from the game itself.
• We enhance our previous work using new data and a new neural architecture.
• The new model works in real time with an F1 score of 0.62, compared to 0.17 previously.
Many constraint satisfaction and optimisation problems can be solved effectively by encoding them as instances of the Boolean Satisfiability problem (SAT). However, even the simplest types of constraints have many encodings in the literature with widely varying performance, and the problem of selecting suitable encodings for a given problem instance is not trivial. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however, we obtain better performance with a new set of features specifically designed for pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes. Our results compare favourably to AutoFolio when using the same feature set. We discuss the relative importance of instance features to the task of selecting the best encodings, and compare several variations of the machine learning method.
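The core idea of per-instance encoding selection can be illustrated with a toy selector. The feature names, training values, and the 1-nearest-neighbour rule below are all assumptions for illustration; the paper's actual selector is a trained supervised model over a richer feature set.

```python
import numpy as np

# Hypothetical instance features (e.g. number of PB constraints, mean
# coefficient size) paired with the encoding that performed best on
# that training instance. Values are invented for illustration.
train_feats = np.array([[10, 2.0], [200, 8.5], [15, 1.5], [180, 9.0]])
best_encoding = np.array(["sequential", "tree", "sequential", "tree"])

def select_encoding(feats):
    """1-nearest-neighbour stand-in for the supervised selector:
    choose the encoding that worked best on the most similar
    training instance."""
    dists = np.linalg.norm(train_feats - feats, axis=1)
    return best_encoding[np.argmin(dists)]
```

A new instance whose features resemble the small, low-coefficient training instances would be routed to the "sequential" encoding, while one resembling the large instances would get "tree".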
This paper presents a generalization of the graph-based genetic programming (GP) technique known as Cartesian genetic programming (CGP). We have extended CGP by utilizing automatic module acquisition, evolution, and reuse. To benchmark the new technique, we have tested it on: various digital circuit problems, two symbolic regression problems, the lawnmower problem, and the hierarchical if-and-only-if problem. The results show the new modular method evolves solutions more quickly than the original nonmodular method, and the speedup is more pronounced on larger problems. Also, the new modular method performs favorably when compared with other GP methods. Analysis of the evolved modules shows they often produce recognizable functions. Prospects for further improvements to the method are discussed.
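For readers unfamiliar with CGP, a genotype is a fixed-length list of nodes, each selecting a primitive function and wiring its inputs to program inputs or earlier nodes. The minimal feed-forward evaluator below is a generic illustration of this representation (the function set and indexing scheme are illustrative, and module acquisition is not shown).

```python
# Primitive function set for a Boolean circuit (illustrative choice);
# the NOT gate ignores its second input.
FUNCS = [lambda a, b: a and b,      # 0: AND
         lambda a, b: a or b,       # 1: OR
         lambda a, b: not a]        # 2: NOT

def eval_cgp(genotype, inputs, output_node):
    """Each node is (function_index, input_a, input_b); inputs index
    into the combined list of program inputs and earlier node outputs
    (feed-forward CGP)."""
    values = list(inputs)
    for f, a, b in genotype:
        values.append(FUNCS[f](values[a], values[b]))
    return values[output_node]

# Example genotype computing XOR: (a OR b) AND NOT (a AND b).
xor_genotype = [(0, 0, 1),   # node 2: a AND b
                (1, 0, 1),   # node 3: a OR b
                (2, 2, 2),   # node 4: NOT node 2
                (0, 3, 4)]   # node 5: node 3 AND node 4
```

Modular CGP, as benchmarked above, additionally lets evolution capture sub-graphs like this XOR as reusable modules and call them as new primitives.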
The notion of self-play, albeit often cited in multiagent reinforcement learning as a process by which to train agent policies from scratch, has received little effort towards being taxonomized within a formal model. We present a formalized framework, with clearly defined assumptions, which encapsulates the meaning of self-play as abstracted from various existing self-play algorithms. This framework is framed as an approximation to a theoretical solution concept for multiagent training. Through a novel qualitative visualization metric, on a simple environment, we show that different self-play algorithms generate different distributions of episode trajectories, leading to different explorations of the policy space by the learning agents. Quantitatively, on two environments, we analyze the learning dynamics of policies trained under different self-play algorithms captured under our framework and perform cross self-play performance comparisons.
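A cross self-play comparison of the kind mentioned above amounts to a round-robin evaluation: every trained policy plays every other, producing a matrix of head-to-head results. The sketch below assumes an environment-specific `play_match` callable supplied by the caller; the function name and win/loss interface are illustrative, not the paper's API.

```python
import itertools

def cross_play_matrix(policies, play_match, episodes=100):
    """Round-robin evaluation: win-rate of the row policy against the
    column policy, averaged over `episodes` matches.
    `play_match(a, b)` -> 1 if policy a wins, else 0 (environment-specific)."""
    names = list(policies)
    matrix = {}
    for a, b in itertools.product(names, repeat=2):
        wins = sum(play_match(policies[a], policies[b])
                   for _ in range(episodes))
        matrix[(a, b)] = wins / episodes
    return matrix
```

Reading such a matrix row-wise reveals, for instance, whether a policy trained under one self-play algorithm dominates policies trained under another, or whether results are cyclic.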
Field programmable gate arrays (FPGAs) are widely used in applications where online reconfigurable signal processing is required. Speed and function density of FPGAs are increasing as transistor sizes shrink to the nanoscale. As these transistors reduce in size, intrinsic variability becomes more of a problem, and time-consuming statistical simulations become necessary to reliably create electronic designs according to specification; even with accurate models and statistical simulation, the fabrication yield will decrease as every physical instance of a design behaves differently. This paper describes an adaptive, evolvable architecture that allows for correction and optimization of circuits directly in hardware using bioinspired techniques. Similar to FPGAs, the programmable analog and digital array (PAnDA) architecture introduced provides a digital configuration layer for circuit design. Accessing additional configuration options of the underlying analog layer enables continuous adjustment of circuit characteristics at runtime, which enables dynamic optimization of the mapped design's performance. Moreover, the yield of devices can be improved postfabrication via reconfiguration of the analog layer, which can overcome faults induced due to variability and process defects. Since optimization goals are generic, i.e., not restricted to reducing stochastic variability, power consumption or increasing speed, the same mechanisms can also enhance the device's fault-tolerant abilities in the case of component degradation and failures during its lifetime or when exposed to hazardous environments.
Hyperinflation and price volatility in virtual economies have the potential to reduce player satisfaction and decrease developer revenue. This paper describes intuitive analytical methods for monitoring volatility and inflation in virtual economies, with worked examples on the increasingly popular multiplayer game Old School RuneScape. Analytical methods drawn from mainstream financial literature are outlined and applied in order to present a high level overview of virtual economic activity of 3467 price series over 180 trading days. Six-monthly volume data for the top 100 most traded items is also used both for monitoring and value estimation, giving a conservative estimate of exchange trading volume of over £60m in real value. Our worked examples show results from a well functioning virtual economy to act as a benchmark for future work. This work contributes to the growing field of virtual economics and game development, describing how data transformations and statistical tests can be used to improve virtual economic design and analysis, with applications in real-time monitoring systems.
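The data transformations involved are standard in the financial literature: convert each item's price series to log returns, then track a rolling standard deviation as a volatility monitor. The sketch below assumes daily prices and a 30-day window; both the window length and function names are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def log_returns(prices):
    """Daily log returns, the standard transformation applied to a
    price series before volatility analysis."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

def rolling_volatility(prices, window=30):
    """Rolling sample standard deviation of log returns over `window`
    days: a simple per-item volatility monitor. A sustained rise in
    this series would flag an item for closer inspection."""
    r = log_returns(prices)
    return np.array([r[i - window:i].std(ddof=1)
                     for i in range(window, len(r) + 1)])
```

Applied across thousands of price series, a monitor like this gives the high-level overview described above: a perfectly stable item yields zero volatility throughout, while speculative items stand out immediately.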
The project Meeting the Design Challenges of nano-CMOS Electronics (http://www.nanocmos.ac.uk) was funded by the Engineering and Physical Sciences Research Council to tackle the challenges facing the electronics industry caused by the decreasing scale of transistor devices, and the inherent variability that this exposes in devices and in the circuits and systems in which they are used. The project has developed a grid-based solution that supports the electronics design process, incorporating usage of large-scale high-performance computing (HPC) resources, data and metadata management and support for fine-grained security to protect commercially sensitive datasets. In this paper, we illustrate how the nano-CMOS (complementary metal oxide semiconductor) grid has been applied to optimize transistor dimensions within a standard cell library. The goal is to extract high-speed and low-power circuits which are more tolerant of the random fluctuations that will be prevalent in future technology nodes. Using statistically enhanced circuit simulation models based on three-dimensional atomistic device simulations, a genetic algorithm is presented that optimizes the device widths within a circuit using a multi-objective fitness function exploiting the nano-CMOS grid. The results show that the impact of threshold voltage variation can be reduced by optimizing transistor widths, and indicate that a similar method could be extended to the optimization of larger circuits.
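The genetic-algorithm loop described above, a population of candidate width vectors scored by a multi-objective fitness function over circuit simulation, could be sketched generically as below. The selection scheme, mutation parameters, objective weights, and the `simulate` interface are all illustrative assumptions; the actual system scores candidates via statistically enhanced circuit simulations on the nano-CMOS grid.

```python
import random

random.seed(0)

def fitness(widths, simulate):
    """Multi-objective score: `simulate` returns (delay, power,
    vth_spread) for a candidate set of transistor widths; lower is
    better on all three, so we negate a weighted sum. Weights are
    illustrative, not the paper's."""
    delay, power, vth_spread = simulate(widths)
    return -(1.0 * delay + 0.5 * power + 2.0 * vth_spread)

def evolve(simulate, n_devices=6, pop=20, gens=50, w_min=0.1, w_max=2.0):
    """Simple generational GA over transistor width vectors."""
    population = [[random.uniform(w_min, w_max) for _ in range(n_devices)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda w: fitness(w, simulate),
                        reverse=True)
        parents = scored[:pop // 2]                    # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_devices)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_devices)            # gaussian mutation
            child[i] = min(w_max, max(w_min, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fitness(w, simulate))
```

Swapping the toy `simulate` for a grid-dispatched circuit simulation turns this skeleton into the kind of width optimizer the paper describes, with each fitness evaluation farmed out to HPC resources.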
Human beings use compositionality to generalise from past experiences to novel experiences. We assume a separation of our experiences into fundamental atomic components that can be recombined in novel ways to support our ability to engage with novel experiences. We frame this as the ability to learn to generalise compositionally, and we will refer to behaviours making use of this ability as compositional learning behaviours (CLBs). A central problem to learning CLBs is the resolution of a binding problem (BP). While resolving the BP is another feat of intelligence that human beings perform with ease, it remains out of reach for state-of-the-art artificial agents. Thus, in order to build artificial agents able to collaborate with human beings, we propose to develop a novel benchmark to investigate agents' abilities to exhibit CLBs by solving a domain-agnostic version of the BP. We take inspiration from the language emergence and grounding framework of referential games and propose a meta-learning extension of referential games, entitled Meta-Referential Games, and use this framework to build our benchmark, the Symbolic Behaviour Benchmark (S2B). We provide baseline results and error analysis showing that our benchmark is a compelling challenge that we hope will spur the research community towards developing more capable artificial agents.