Search results


Results: 1,328
1.
  • KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation
    Wang, Xiaozhi; Gao, Tianyu; Zhu, Zhaocheng ... Transactions of the Association for Computational Linguistics, 01/2021, Volume: 9
    Journal Article
    Peer-reviewed
    Open access

    Pre-trained language representation models (PLMs) cannot well capture factual knowledge from text. In contrast, knowledge embedding (KE) methods can effectively represent the relational facts in ...
Full text

PDF
2.
  • ProtTrans: Toward Understanding the Language of Life Through Self-Supervised Learning
    Elnaggar, Ahmed; Heinzinger, Michael; Dallago, Christian ... IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-10-01, Volume: 44, Issue: 10
    Journal Article
    Peer-reviewed
    Open access

    Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new ...
Full text

PDF
3.
  • ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models
    Xue, Linting; Barua, Aditya; Constant, Noah ... Transactions of the Association for Computational Linguistics, 03/2022, Volume: 10
    Journal Article
    Peer-reviewed
    Open access

    Most widely used pre-trained language models operate on sequences of tokens corresponding to word or subword units. By comparison, models that operate directly on raw text (bytes or characters) have ...
Full text
4.
  • ChatGPT and a new academic reality: Artificial Intelligence‐written research papers and the ethics of the large language models in scholarly publishing
    Lund, Brady D.; Wang, Ting; Mannuru, Nishith Reddy ... Journal of the Association for Information Science and Technology, 05/2023, Volume: 74, Issue: 5
    Journal Article
    Peer-reviewed
    Open access

    This article discusses OpenAI's ChatGPT, a generative pre‐trained transformer, which uses natural language processing to fulfill text‐based user requests (i.e., a “chatbot”). The history and ...
Full text
5.
  • How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering
    Jiang, Zhengbao; Araki, Jun; Ding, Haibo ... Transactions of the Association for Computational Linguistics, 09/2021, Volume: 9
    Journal Article
    Peer-reviewed
    Open access

    Recent works have shown that language models (LM) capture different types of knowledge regarding facts or common sense. However, because no model is perfect, they still fail to provide appropriate ...
Full text

PDF
6.
  • How Can We Know What Language Models Know?
    Jiang, Zhengbao; Xu, Frank F.; Araki, Jun ... Transactions of the Association for Computational Linguistics, 01/2020, Volume: 8
    Journal Article
    Peer-reviewed
    Open access

    Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a _ by profession”. These prompts are usually manually ...
Full text

PDF
7.
  • What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models
    Ettinger, Allyson. Transactions of the Association for Computational Linguistics, 01/2020, Volume: 8
    Journal Article
    Peer-reviewed
    Open access

    Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon ...
Full text

PDF
8.
  • Efficient Content-Based Sparse Attention with Routing Transformers
    Roy, Aurko; Saffar, Mohammad; Vaswani, Ashish ... Transactions of the Association for Computational Linguistics, 02/2021, Volume: 9
    Journal Article
    Peer-reviewed
    Open access

    Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic computation and memory requirements with ...
Full text

PDF
9.
  • oLMpics-On What Language Model Pre-training Captures
    Talmor, Alon; Elazar, Yanai; Goldberg, Yoav ... Transactions of the Association for Computational Linguistics, 01/2020, Volume: 8
    Journal Article
    Peer-reviewed
    Open access

    Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are ...
Full text

PDF
10.
  • Supervised and Unsupervised Neural Approaches to Text Readability
    Martinc, Matej; Pollak, Senja; Robnik-Šikonja, Marko. Computational Linguistics (Association for Computational Linguistics), 04/2021, Volume: 47, Issue: 1
    Journal Article
    Peer-reviewed
    Open access

    We present a set of novel neural supervised and unsupervised approaches for determining the readability of documents. In the unsupervised setting, we leverage neural language models, whereas in the ...
Full text

PDF