Experience-related brain activity patterns have been found to reactivate during sleep, wakeful rest, and brief pauses from active behavior. In parallel, machine learning research has shown that experience replay can yield substantial performance improvements in artificial agents. Together, these lines of research have significantly expanded our understanding of the potential computational benefits replay may provide to biological and artificial agents alike. We provide an overview of replay findings from neuroscience and machine learning and summarize the computational benefits an agent can gain from replay that cannot be achieved through direct interaction with the world alone. These benefits include faster learning and greater data efficiency, reduced forgetting, prioritization of important experiences, and improved planning and generalization. Beyond the benefits of replay for improving an agent’s decision-making policy, we highlight the less well-studied role of replay in representation learning, wherein replay could provide a mechanism for learning the structure and relevant aspects of the environment. Replay might thus help the agent build task-appropriate state representations.
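As a minimal illustration of the machine-learning side, experience replay is commonly implemented as a fixed-capacity buffer of past transitions from which the agent samples during learning. The sketch below is our own illustrative example, not taken from any particular library; class and parameter names are assumptions.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity store of past (state, action, reward, next_state) transitions."""

    def __init__(self, capacity):
        # A deque with maxlen drops the oldest experience once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Sampling uniformly at random breaks the temporal correlations between
        # consecutive experiences and lets each transition be reused many times,
        # which underlies the data-efficiency benefits discussed above.
        return random.sample(self.buffer, batch_size)


# Illustrative usage: store 50 dummy transitions, then draw a learning batch.
buf = ReplayBuffer(capacity=1000)
for t in range(50):
    buf.add(state=t, action=t % 4, reward=1.0, next_state=t + 1)
batch = buf.sample(8)
print(len(batch))  # 8 transitions drawn for a single learning update
```

Prioritized variants replace the uniform draw in `sample` with sampling weighted by, for example, the magnitude of each transition's learning error, reflecting the prioritization of important experiences noted above.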