
Learning, Fast and Slow: Towards LLMs That Adapt Continually [R]

Large language models (LLMs) are trained for downstream tasks by updating their parameters (e.g., via RL). However, updating parameters forces them to absorb task-specific information, which can result in catastrophic forgetting and loss of plasticity. In contrast, in-context learning with fixed LLM parameters can cheaply and rapidly adapt to task-specific requirements (e.g., prompt optimization), but typically cannot by itself match the performance gains available through updating LLM parameters. There is no good reason to restrict learning to being either in-context or in-weights. Moreover, humans likely learn at different time scales (e.g., System 1 vs. System 2). To this end, we introduce a fast-slow learning framework for LLMs, with model parameters as "slow" weights and optimized context as "fast" weights. These fast "weights" learn from textual feedback to absorb task-specific information, allowing the slow weights to stay closer to the base model and preserve general reasoning behaviors. Fast-Slow Training (FST) is up to 3x more sample-efficient than slow learning alone (RL) across reasoning tasks, while consistently reaching a higher performance asymptote. Moreover, FST-trained models remain closer to the base LLM (up to 70% less KL divergence), resulting in less catastrophic forgetting than RL training. This reduced drift also preserves plasticity: after training on one task, FST-trained models adapt more effectively to a subsequent task than parameter-only trained models. In continual learning scenarios, where task domains change on the fly, FST continues to acquire each new task while parameter-only RL stalls.
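
To make the fast/slow split concrete, below is a minimal toy sketch of how such a loop could be organized: a context string acts as the "fast" weights and is revised from textual feedback at every step, while parameter ("slow") updates happen less often and are damped toward the base model. All names here (ToyLLM, revise_context, fast_slow_round) and the update rules are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class ToyLLM:
    drift_from_base: float = 0.0  # toy stand-in for KL divergence from the base model

    def generate(self, context: str, prompt: str) -> str:
        # A real model would condition on the optimized context plus the prompt.
        return f"[{context}] {prompt} -> answer"

    def rl_update(self, reward: float, kl_coeff: float = 0.1) -> None:
        # "Slow" weights: an occasional parameter update; the kl_coeff term is a
        # toy proxy for a KL penalty that limits drift away from the base model.
        self.drift_from_base += reward * (1.0 - kl_coeff)


def revise_context(context: str, feedback: str) -> str:
    # "Fast" weights: fold task-specific textual feedback into the context
    # (prompt optimization); the parameters are untouched here.
    return f"{context}; {feedback}".strip("; ")


def fast_slow_round(model: ToyLLM, context: str, prompt: str) -> str:
    for _ in range(4):                                    # several cheap fast steps
        output = model.generate(context, prompt)
        feedback = f"critique of {output!r}"              # e.g., an error trace or judge comment
        context = revise_context(context, feedback)
    reward = 1.0                                          # toy reward; RL would score the outputs
    model.rl_update(reward)                               # one slow parameter update per round
    return context


if __name__ == "__main__":
    llm, ctx = ToyLLM(), ""
    for task_prompt in ["solve equation", "prove lemma"]:
        ctx = fast_slow_round(llm, ctx, task_prompt)
    print(ctx, llm.drift_from_base)
```

The point of the split, per the abstract, is that most task-specific adaptation lands in the cheap, disposable context, so parameter drift from the base model (and hence forgetting and loss of plasticity) stays small.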

https://arxiv.org/abs/2605.12484v1

submitted by /u/LakshyAAAgrawal


Tagged with

#natural language processing
#Large Language Models
#Fast-Slow Training
#parameters
#catastrophic forgetting
#in-context learning
#continual learning
#task-specific information
#reasoning tasks
#plasticity