Akademska digitalna zbirka Slovenije (Academic Digital Collection of Slovenia)

Search results



Results: 98
1.
  • A Generative Model for Punctuation in Dependency Trees
    Li, Xiang Lisa; Wang, Dingquan; Eisner, Jason Transactions of the Association for Computational Linguistics, 11/2019, Volume 7
    Journal Article
    Peer reviewed
    Open access

    Treebanks traditionally treat punctuation marks as ordinary words, but linguists have suggested that a tree’s “true” punctuation marks are not observed (Nunberg, 1990). These latent “underlying” ...
Full text
Available to: NUK, UL, UM, UPUK

2.
  • Few-Shot Recalibration of Language Models
    Xiang Lisa Li; Khandelwal, Urvashi; Guu, Kelvin arXiv (Cornell University), 03/2024
    Paper, Journal Article
    Open access

    Recent work has uncovered promising ways to extract well-calibrated confidence estimates from language models (LMs), where the model's confidence score reflects how likely it is to be correct. ...
Full text
Available to: NUK, UL, UM, UPUK
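The snippet above concerns extracting well-calibrated confidence estimates from LMs. As background only (this is the standard temperature-scaling baseline, not the paper's few-shot method), a minimal sketch with toy logits; the grid range and the overconfident toy data are illustrative choices:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given temperature."""
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def avg_nll(examples, temperature):
    """Average negative log-likelihood of the true labels."""
    loss = 0.0
    for logits, label in examples:
        loss -= math.log(softmax(logits, temperature)[label])
    return loss / len(examples)

def fit_temperature(examples):
    """Coarse grid search for the temperature minimizing held-out NLL."""
    grid = [0.5 + 0.1 * i for i in range(46)]  # 0.5 .. 5.0
    return min(grid, key=lambda t: avg_nll(examples, t))

# An overconfident toy model: ~96% mass on its top class, but right only
# 60% of the time, so the fitted temperature comes out well above 1.
examples = [([4.0, 0.0, 0.0], 0)] * 6 + [([4.0, 0.0, 0.0], 1)] * 2 + [([4.0, 0.0, 0.0], 2)] * 2
fitted = fit_temperature(examples)
```

Dividing the logits by a temperature above 1 softens the distribution until stated confidence roughly matches observed accuracy.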
3.
  • Learning to Compress Prompts with Gist Tokens
    Mu, Jesse; Xiang Lisa Li; Goodman, Noah arXiv (Cornell University), 02/2024
    Paper, Journal Article
    Open access

    Prompting is the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and repeatedly encoding the same prompt is ...
Full text
Available to: NUK, UL, UM, UPUK
4.
  • AutoBencher: Creating Salient, Novel, Difficult Datasets for Language Models
    Xiang Lisa Li; Liu, Evan Zheran; Liang, Percy ... arXiv.org, 07/2024
    Paper, Journal Article
    Open access

    Evaluation is critical for assessing capabilities, tracking scientific progress, and informing model selection. In this paper, we present three desiderata for a good benchmark for language models: ...
Full text
Available to: NUK, UL, UM, UPUK
5.
  • Benchmarking and Improving Generator-Validator Consistency of Language Models
    Xiang Lisa Li; Shrivastava, Vaishnavi; Li, Siyan ... arXiv (Cornell University), 10/2023
    Paper, Journal Article
    Open access

    As of September 2023, ChatGPT correctly answers "what is 7+8" with 15, but when asked "7+8=15, True or False" it responds with "False". This inconsistency between generating and validating an answer ...
Full text
Available to: NUK, UL, UM, UPUK
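The 7+8 example in the snippet above shows a generator-validator mismatch. As an illustration only, here is what such a consistency check looks like, with a hypothetical `model` callable standing in for an LM; the stub mimics the abstract's example rather than any real system:

```python
def gv_consistent(model, query):
    """Ask the model a question, then ask it to validate its own answer."""
    answer = model(query)
    verdict = model(f"{query} = {answer}, True or False?")
    return verdict.strip().lower().startswith("true")

def inconsistent_stub(prompt):
    """Toy stand-in for an LM, mimicking the abstract's 7+8 example."""
    if "True or False" in prompt:
        return "False"  # rejects its own (correct) answer
    return "15"         # correctly answers the generator query
```

A model is GV-consistent on a query exactly when the validator pass accepts what the generator pass produced.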
6.
  • On the Learnability of Watermarks for Language Models
    Gu, Chenchen; Xiang Lisa Li; Liang, Percy ... arXiv (Cornell University), 01/2024
    Paper, Journal Article
    Open access

    Watermarking of language model outputs enables statistical detection of model-generated text, which has many applications in the responsible deployment of language models. Existing watermarking ...
Full text
Available to: NUK, UL, UM, UPUK
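For context on the statistical detection the snippet mentions: a common scheme checks how often consecutive tokens land in a pseudorandom "green list" seeded by the previous token, in the style of Kirchenbauer et al.'s green-list watermark. This toy sketch is not necessarily the scheme this paper studies, and the hash and constants are arbitrary illustrative choices:

```python
import hashlib
import math

VOCAB_SIZE = 50_000
GAMMA = 0.5  # fraction of the vocabulary that is "green" at each step

def is_green(prev_token, token):
    """Green iff a hash seeded by the previous token maps it into the first GAMMA share."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    return (token * 2654435761 + seed) % VOCAB_SIZE < GAMMA * VOCAB_SIZE

def detection_z_score(tokens):
    """z-score of the green-token count; large values indicate a watermark."""
    n = len(tokens) - 1
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

# A "watermarked" sequence built by always choosing a green token; every
# consecutive pair is green by construction, so the z-score is sqrt(n).
watermarked = [0]
for _ in range(99):
    prev = watermarked[-1]
    watermarked.append(next(t for t in range(VOCAB_SIZE) if is_green(prev, t)))
```

Detection needs only the hashing rule, not the model, which is what makes the statistical test cheap to run on arbitrary text.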
7.
  • Prefix-Tuning: Optimizing Continuous Prompts for Generation
    Xiang Lisa Li; Liang, Percy arXiv (Cornell University), 01/2021
    Paper, Journal Article
    Open access

    Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a ...
Full text
Available to: NUK, UL, UM, UPUK
8.
  • Contrastive Decoding: Open-ended Text Generation as Optimization
    Xiang Lisa Li; Holtzman, Ari; Fried, Daniel ... arXiv (Cornell University), 07/2023
    Paper, Journal Article
    Open access

    Given a language model (LM), maximum probability is a poor decoding objective for open-ended generation, because it produces short and repetitive text. On the other hand, sampling can often produce ...
Full text
Available to: NUK, UL, UM, UPUK
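The snippet above contrasts maximum-probability decoding with sampling. For illustration, a minimal sketch of one decoding step under the contrastive objective (expert log-probability minus amateur log-probability, restricted to tokens the expert finds plausible); the toy distributions and the alpha threshold are illustrative values, not from the paper:

```python
import math

def contrastive_step(expert_logprobs, amateur_logprobs, alpha=0.1):
    """Pick the token maximizing log p_expert - log p_amateur among
    tokens whose expert probability is >= alpha * max expert probability."""
    best = max(expert_logprobs.values())
    threshold = best + math.log(alpha)
    plausible = [t for t, lp in expert_logprobs.items() if lp >= threshold]
    return max(plausible, key=lambda t: expert_logprobs[t] - amateur_logprobs[t])

# Toy next-token distributions: the amateur loves generic tokens like "the",
# so the contrastive score favors the more specific continuation.
expert = {"dog": math.log(0.4), "cat": math.log(0.5), "the": math.log(0.1)}
amateur = {"dog": math.log(0.1), "cat": math.log(0.2), "the": math.log(0.7)}
```

The plausibility constraint matters: raising `alpha` shrinks the candidate set toward the expert's top choice, guarding against implausible tokens that merely score well contrastively.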
9.
  • Diffusion-LM Improves Controllable Text Generation
    Xiang Lisa Li; Thickstun, John; Gulrajani, Ishaan ... arXiv (Cornell University), 05/2022
    Paper, Journal Article
    Open access

    Controlling the behavior of language models (LMs) without re-training is a major open problem in natural language generation. While recent works have demonstrated successes on controlling simple ...
Full text
Available to: NUK, UL, UM, UPUK
10.
  • Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP
    Khattab, Omar; Santhanam, Keshav; Xiang Lisa Li ... arXiv (Cornell University), 01/2023
    Paper, Journal Article
    Open access

    Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tasks using frozen language models (LM) and retrieval models (RM). Existing work has ...
Full text
Available to: NUK, UL, UM, UPUK
