The model of a non-Bayesian agent facing a repeated game with incomplete information against Nature is an appropriate tool for modeling general agent-environment interactions. In such a model the environment state (controlled by Nature) may change arbitrarily, and the reward function is initially unknown. The agent is non-Bayesian: he forms a prior probability neither on Nature's state-selection strategy nor on his own reward function. Two basic feedback structures are considered. In one of them, the perfect monitoring case, the agent observes the previous environment state as part of his feedback; in the other, the imperfect monitoring case, all that is available to the agent is the reward obtained. Both settings refer to partially observable processes, where the current environment state is unknown to the agent. Our main result concerns the competitive-ratio criterion in the perfect monitoring case: we prove the existence of an efficient stochastic policy that ensures the competitive ratio is obtained at almost all stages with arbitrarily high probability, where efficiency is measured in terms of rate of convergence. We further show that no such optimal strategy exists in the imperfect monitoring case. Moreover, we prove that in the perfect monitoring case no deterministic policy satisfies our long-run optimality criterion. In addition, we discuss the maxmin criterion and prove that under this criterion a deterministic, efficient, optimal strategy does exist in the imperfect monitoring case. Finally, we show that our approach to long-run optimality can be viewed as qualitative, which distinguishes it from previous work in this area.
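For concreteness, one common way to formalize such a competitive-ratio guarantee in online decision making is sketched below; it is stated here only as an illustration, and the symbols $r_t$, $r_t(a)$, $T$, $c$, and $\varepsilon$ are our notation, not necessarily the definitions used in the body of the paper. Writing $r_t$ for the reward the agent obtains at stage $t$ and $r_t(a)$ for the reward that playing the fixed action $a$ at stage $t$ would have yielded, a policy attains competitive ratio $c \in (0,1]$ up to $\varepsilon > 0$ at stage $T$ if
\[
\frac{1}{T}\sum_{t=1}^{T} r_t \;\ge\; c \cdot \max_{a}\,\frac{1}{T}\sum_{t=1}^{T} r_t(a) \;-\; \varepsilon .
\]
Under this reading, the main result asserts that for every $\varepsilon > 0$ and confidence level, a stochastic policy exists whose average reward satisfies an inequality of this form at all but finitely many stages, with the required probability.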