Explanations are often considered a byproduct of our causal understanding of the world: if we knew the actual causal relations, we could provide adequate explanations. In contrast, this work places explanations at the forefront of learning. We argue that explanations provide a strong signal for learning causal relations. To this end, we propose Explanatory World Models (EWM), a type of world model in which explanations drive learning. We provide an implementation of EWM based on an attention mechanism called look-ahead attention, trained in an unsupervised fashion. We showcase this approach on the credit assignment problem in reinforcement learning and show that explanations offer a better solution to this problem than current heuristics.