This paper presents a novel framework for studying partially observable Markov decision processes (POMDPs) with finite state, action, and observation sets and discounted rewards. The new framework is based solely on future-reward vectors associated with future policies, which is more parsimonious...