
From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples

From Papers Read on AI

Length: 37 minutes
Released: Apr 16, 2024
Format: Podcast episode

Description

We analyze how well pre-trained large language models (e.g., Llama 2, GPT-4, Claude 3) can do linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with performance rivaling, or even outperforming, that of traditional supervised methods such as Random Forest, Bagging, or Gradient Boosting. For example, on the challenging Friedman #2 regression dataset, Claude 3 outperforms many supervised methods such as AdaBoost, SVM, Random Forest, KNN, and Gradient Boosting. We then investigate how the performance of large language models scales with the number of in-context exemplars. We borrow the notion of regret from online learning and empirically show that LLMs are capable of obtaining sub-linear regret.
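The setup described in the abstract is easy to reproduce in outline: serialize numeric (input, output) pairs as text, append a query input, and ask the model to complete the output. Below is a minimal Python sketch of that idea, using scikit-learn's make_friedman2 to generate the Friedman #2 benchmark mentioned above. The prompt template is an illustration rather than the paper's exact format, and query_llm is a hypothetical stand-in for a GPT-4 or Claude 3 API call.

```python
# Sketch of prompting an LLM to do regression from in-context examples.
# The prompt template is an assumption; `query_llm` is hypothetical.
from sklearn.datasets import make_friedman2

# Friedman #2 benchmark: 4 input features, one numeric target.
X, y = make_friedman2(n_samples=51, noise=0.0, random_state=0)
train_X, train_y, test_x = X[:-1], y[:-1], X[-1]

# Serialize the (input, output) pairs as plain text, one exemplar per
# line, then leave the output of the held-out input blank for the model
# to complete.
lines = []
for xs, target in zip(train_X, train_y):
    feats = ", ".join(f"x{i}: {v:.2f}" for i, v in enumerate(xs))
    lines.append(f"{feats} -> y: {target:.2f}")
feats = ", ".join(f"x{i}: {v:.2f}" for i, v in enumerate(test_x))
lines.append(f"{feats} -> y:")
prompt = "\n".join(lines)

# prediction = float(query_llm(prompt))  # hypothetical LLM call
print(prompt.splitlines()[-1])  # the query line the model must complete
```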

2024: Robert Vacareanu, Vlad-Andrei Negru, Vasile Suciu, M. Surdeanu



https://arxiv.org/pdf/2404.07544v1.pdf
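The regret result says that the model's cumulative prediction error, measured against the best fixed predictor chosen in hindsight, grows slower than linearly in the number of in-context examples, so the average per-step gap shrinks toward zero. The toy sketch below illustrates that bookkeeping only; a ridge-regression learner stands in for the LLM and squared error is assumed as the loss, neither of which is taken from the paper.

```python
# Toy illustration of cumulative regret: online losses minus the losses
# of the best fixed predictor in hindsight. Sub-linear regret means the
# per-step average vanishes as T grows. Learner and loss are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T = 200
X = rng.normal(size=(T, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=T)

# Online learner: at step t, fit on the first t points (the "in-context
# examples") and predict point t; this stands in for the LLM.
online_losses = []
for t in range(1, T):
    model = Ridge(alpha=1.0).fit(X[:t], y[:t])
    pred = model.predict(X[t : t + 1])[0]
    online_losses.append((pred - y[t]) ** 2)

# Best fixed comparator in hindsight: a single fit on all T points.
hindsight = Ridge(alpha=1.0).fit(X, y)
hindsight_losses = (hindsight.predict(X[1:]) - y[1:]) ** 2

regret = np.cumsum(online_losses) - np.cumsum(hindsight_losses)
print(f"regret at T={T - 1}: {regret[-1]:.3f}, "
      f"per-step: {regret[-1] / (T - 1):.4f}")
```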

Titles in the series (100)

Keeping you up to date with the latest trends and best-performing architectures in this fast-evolving field of computer science. Selecting papers by comparative results, citations, and influence, we educate you on the latest research. Consider supporting us on Patreon.com/PapersRead with feedback and ideas.