Anthropic describes dreaming as a scheduled process in which sessions and memory stores are reviewed and specific memories are curated. This matters because LLM context windows are limited, and important information can be lost over lengthy projects. On the chat side, many models use a process called compaction: lengthy conversations are periodically analyzed, and the model attempts to drop irrelevant information from the context window while keeping what actually matters for the ongoing conversation, project, or task. That process, however, is usually limited to a single conversation with a single agent. "Dreaming" is a periodically recurring process in which past sessions and memory stores are analyzed across agents, and important patterns are identified and saved to memory for future use. Users will be able to choose between an automatic process and reviewing changes to memory directly.
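The cross-agent review described above could be sketched roughly as follows. This is a hypothetical illustration, not Anthropic's implementation: the function name, session format, and occurrence threshold are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical sketch of a "dreaming" pass: review session logs from
# multiple agents and promote recurring facts into a shared memory store.
# All names and the threshold below are illustrative assumptions.

def dream(session_logs, memory_store, min_occurrences=2):
    """Curate memory by keeping facts that recur across sessions
    and discarding one-off details."""
    counts = Counter()
    for session in session_logs:            # sessions may come from different agents
        for fact in set(session["facts"]):  # dedupe within a single session
            counts[fact] += 1
    # A fact seen in several independent sessions is treated as a pattern
    # worth saving for the future.
    for fact, n in counts.items():
        if n >= min_occurrences and fact not in memory_store:
            memory_store.append(fact)
    return memory_store

# Example: two different agents both observed the same project detail.
logs = [
    {"agent": "coder",    "facts": ["deploy branch is 'release'", "user prefers tabs"]},
    {"agent": "reviewer", "facts": ["deploy branch is 'release'"]},
]
memory = dream(logs, [])
# memory keeps the cross-session fact and drops the one-off preference.
```

An automatic mode would apply the curated list directly, while a review mode would surface the proposed additions to the user before committing them, matching the choice the summary describes.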
