
Search results



hits: 56
1.
  • Adaptive Semiparametric Language Models
    Yogatama, Dani; de Masson d’Autume, Cyprien; Kong, Lingpeng. Transactions of the Association for Computational Linguistics, 01/2021, Volume: 9
    Journal Article
    Peer reviewed
    Open access

    We present a language model that combines a large parametric neural network (i.e., a transformer) with a non-parametric episodic memory component in an integrated architecture. Our model uses ...
2.
  • Questions Are All You Need to Train a Dense Passage Retriever
    Sachan, Devendra Singh; Lewis, Mike; Yogatama, Dani, et al. Transactions of the Association for Computational Linguistics, 06/2023, Volume: 11
    Journal Article
    Peer reviewed
    Open access

    We introduce a new corpus-level autoencoding approach for training dense retrieval models that does not require any labeled training data. Dense retrieval is a central challenge for open-domain ...
4.
  • Relational Memory-Augmented Language Models
    Liu, Qi; Yogatama, Dani; Blunsom, Phil. Transactions of the Association for Computational Linguistics, 05/2022, Volume: 10
    Journal Article
    Peer reviewed
    Open access

    We present a memory-augmented approach to condition an autoregressive language model on a knowledge graph. We represent the graph as a collection of relation triples and retrieve relevant relations ...
5.
  • Jointly learning sentence embeddings and syntax with unsupervised Tree-LSTMs
    Maillard, Jean; Clark, Stephen; Yogatama, Dani. Natural Language Engineering, 07/2019, Volume: 25, Issue: 4
    Journal Article
    Peer reviewed

    We present two studies on neural network architectures that learn to represent sentences by composing their words according to automatically induced binary trees, without ever being shown a correct ...
6.
  • Grandmaster level in StarCraft II using multi-agent reinforcement learning
    Vinyals, Oriol; Babuschkin, Igor; Czarnecki, Wojciech M., et al. Nature (London), 11/2019, Volume: 575, Issue: 7782
    Journal Article
    Peer reviewed

    Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an ...
7.
  • Syntactic Structure Distillation Pretraining for Bidirectional Encoders
    Kuncoro, Adhiguna; Kong, Lingpeng; Fried, Daniel, et al. Transactions of the Association for Computational Linguistics, 01/2020, Volume: 8
    Journal Article
    Peer reviewed
    Open access

    Textual representation learners trained on large amounts of data have achieved notable success on downstream tasks; intriguingly, they have also performed well on challenging tests of syntactic ...
10.
  • Modelling Latent Skills for Multitask Language Generation
    Cao, Kris; Yogatama, Dani. arXiv (Cornell University), 02/2020
    Paper, Journal Article
    Open access

    We present a generative model for multitask conditional language generation. Our guiding hypothesis is that a shared set of latent skills underlies many disparate language generation tasks, and that ...