• @[email protected]
    12 • 2 months ago

    Anecdotally, this was my experience as a student when I tried to use AI to summarize and outline textbook content. The result was almost always incomplete, such that I’d have to have already read the chapter to catch what the model missed.

    • just another dev
      4 • 2 months ago

      I’m not sure how long ago that was, but LLM context sizes have grown rapidly in the past year, from 4k tokens to over a hundred thousand. That doesn’t necessarily improve the quality of the output, but you can’t expect a model to summarize what it can’t hold in memory.
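
      To make the point concrete, here is a minimal sketch of why context size matters for summarization. It uses the common rule of thumb of roughly 4 characters per token for English text (an assumption, not an exact tokenizer count) to estimate whether a chapter fits in a given context window, and how many pieces it would need to be split into otherwise:

```python
# Rough sketch: estimating whether text fits in an LLM context window.
# The ~4 chars/token ratio is a rule of thumb for English, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

def chunk_for_context(text: str, context_tokens: int) -> list[str]:
    """Split text into pieces that each fit the given token budget."""
    budget_chars = context_tokens * 4
    return [text[i:i + budget_chars] for i in range(0, len(text), budget_chars)]

chapter = "word " * 10_000  # ~50k characters of stand-in chapter text
print(estimate_tokens(chapter))                  # ~12,500 tokens
print(len(chunk_for_context(chapter, 4_000)))    # pieces needed at a 4k window
print(len(chunk_for_context(chapter, 128_000)))  # one piece at a 128k window
```

      At a 4k-token window the chapter has to be summarized piecewise and the summaries stitched together, which is exactly where omissions creep in; at 128k the whole chapter fits in one pass.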