
[R] Interested in recent research into recall vs recognition in LLMs

I've casually noticed LLMs correctly verifying exact quotations that they either couldn't or wouldn't quote directly for me. I'm aware that they're trained to avoid reproducing potentially copyrighted content, and of the implications of that, but it made me wonder a few things:

  1. Can LLMs verify knowledge more (or less) accurately than they can recall it?
    1b. Is the set of knowledge LLMs can accurately verify larger (or smaller) than the set they can accurately recall?
  2. What research exists on LLM accuracy in recalling facts vs. verifying facts?
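To make the recall/recognition distinction concrete, here is a minimal sketch of how one might probe both on the same fact: a free-recall prompt asks the model to produce a quotation from scratch, while a recognition prompt asks it to verify candidate quotations yes/no. The `query_model` function is a hypothetical stand-in (a stub, not a real API) chosen so the harness runs end to end and mimics the behavior described above: the model recognizes the correct quote but declines to recall it.

```python
def query_model(prompt: str) -> str:
    # Stub standing in for a real LLM API call; swap in an actual client.
    # This toy model "recognizes" the correct quote but refuses to quote it,
    # mirroring the behavior described in the post.
    if prompt.startswith("Is the following"):
        return "yes" if "To be, or not to be" in prompt else "no"
    return "I can't reproduce that quotation."

def recall_correct(fact: str) -> bool:
    # Recall probe: ask the model to produce the quotation unaided.
    answer = query_model("Quote the opening line of Hamlet's soliloquy.")
    return fact in answer

def recognize_correct(fact: str, distractor: str) -> bool:
    # Recognition probe: ask the model to verify a true and a false candidate.
    yes = query_model(f'Is the following the opening line of Hamlet\'s soliloquy? "{fact}"')
    no = query_model(f'Is the following the opening line of Hamlet\'s soliloquy? "{distractor}"')
    return yes.strip().lower() == "yes" and no.strip().lower() == "no"

fact = "To be, or not to be"
distractor = "Now is the winter of our discontent"
print("recall:", recall_correct(fact))                      # False for this stub
print("recognition:", recognize_correct(fact, distractor))  # True for this stub
```

A real study would run both probes over a large fact set and compare the two accuracy rates; this sketch only illustrates the experimental contrast, not any published methodology.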
submitted by /u/Acoustic-Blacksmith

