Academic Digital Collection of Slovenia

Search results



Hits: 194
41.
  • ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages
    Jesson, Andrew; Lu, Chris; Gupta, Gunshi ... arXiv (Cornell University), 11/2023
    Paper, Journal Article
    Open access

    This paper introduces an effective and practical step toward approximate Bayesian inference in on-policy actor-critic deep reinforcement learning. This step manifests as three simple modifications to ...
Full text
Available to: NUK, UL, UM, UPUK
42.
  • Form follows Function: Text-to-Text Conditional Graph Generation based on Functional Requirements
    Zachares, Peter A; Hovhannisyan, Vahan; Mosca, Alan ... arXiv (Cornell University), 11/2023
    Paper, Journal Article
    Open access

    This work focuses on the novel problem setting of generating graphs conditioned on a description of the graph's functional requirements in a downstream task. We pose the problem as a text-to-text ...
Full text
Available to: NUK, UL, UM, UPUK
43.
  • Quantifying Ignorance in Individual-Level Causal-Effect Estimates under Hidden Confounding
    Jesson, Andrew; Mindermann, Sören; Gal, Yarin ... arXiv (Cornell University), 02/2022
    Paper, Journal Article
    Open access

    We study the problem of learning conditional average treatment effects (CATE) from high-dimensional, observational data with unobserved confounders. Unobserved confounders introduce ignorance -- a ...
Full text
Available to: NUK, UL, UM, UPUK
44.
  • Stochastic Batch Acquisition: A Simple Baseline for Deep Active Learning
    Kirsch, Andreas; Farquhar, Sebastian; Atighehchian, Parmida ... arXiv (Cornell University), 09/2023
    Paper, Journal Article
    Open access

    We examine a simple stochastic strategy for adapting well-known single-point acquisition functions to allow batch active learning. Unlike acquiring the top-K points from the pool set, score- or ...
Full text
Available to: NUK, UL, UM, UPUK
45.
  • Fine-tuning can cripple your foundation model; preserving features may be the solution
    Mukhoti, Jishnu; Gal, Yarin; Torr, Philip H S ... arXiv (Cornell University), 07/2024
    Paper, Journal Article
    Open access

    Pre-trained foundation models, due to their enormous capacity and exposure to vast amounts of data during pre-training, are known to have learned plenty of real-world concepts. An important step in ...
Full text
Available to: NUK, UL, UM, UPUK
46.
  • Test Distribution-Aware Active Learning: A Principled Approach Against Distribution Shift and Outliers
    Kirsch, Andreas; Rainforth, Tom; Gal, Yarin. arXiv (Cornell University), 11/2021
    Paper, Journal Article
    Open access

    Expanding on MacKay (1992), we argue that conventional model-based methods for active learning - like BALD - have a fundamental shortfall: they fail to directly account for the test-time distribution ...
Full text
Available to: NUK, UL, UM, UPUK
47.
  • Contrastive Representation Learning with Trainable Augmentation Channel
    Koyama, Masanori; Minami, Kentaro; Miyato, Takeru ... arXiv (Cornell University), 11/2021
    Paper, Journal Article
    Open access

    In contrastive representation learning, data representation is trained so that it can classify the image instances even when the images are altered by augmentations. However, depending on the ...
Full text
Available to: NUK, UL, UM, UPUK
48.
  • Estimating the Hallucination Rate of Generative AI
    Jesson, Andrew; Beltran-Velez, Nicolas; Chu, Quentin ... arXiv (Cornell University), 06/2024
    Paper, Journal Article
    Open access

    This work is about estimating the hallucination rate for in-context learning (ICL) with Generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and asked to make a ...
Full text
Available to: NUK, UL, UM, UPUK
49.
  • LLM Censorship: A Machine Learning Challenge or a Computer Security Problem?
    Glukhov, David; Shumailov, Ilia; Gal, Yarin ... arXiv (Cornell University), 07/2023
    Paper, Journal Article
    Open access

    Large language models (LLMs) have exhibited impressive capabilities in comprehending complex instructions. However, their blind adherence to provided instructions has led to concerns regarding risks ...
Full text
Available to: NUK, UL, UM, UPUK
50.
  • Revisiting Automated Prompting: Are We Actually Doing Better?
    Zhou, Yulin; Zhao, Yiren; Shumailov, Ilia ... arXiv (Cornell University), 06/2023
    Paper, Journal Article
    Open access

    Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and prompting significantly increases their performance on a range of downstream tasks in a few-shot ...
Full text
Available to: NUK, UL, UM, UPUK
