We study mixed optimal control/stopping problems for $f$-expectations in the Markovian framework. We first establish a dynamic programming principle, which generalizes the well-known principle for the classical linear expectation. This requires special techniques from stochastic analysis and the theory of backward stochastic differential equations to handle the difficulties arising from the nonlinearity of the expectation. Using this result and properties of reflected backward stochastic differential equations, we prove that the value function of our mixed control problem is a viscosity solution of a nonlinear Hamilton-Jacobi-Bellman variational inequality. Uniqueness of the viscosity solution is obtained under additional assumptions. Illustrative examples from mathematical finance are given.
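For orientation, a schematic formulation of the problem and of the associated variational inequality can be written as follows; the notation (controlled diffusion $X^{t,x,\alpha}$, reward function $h$, second-order generator $\mathcal{L}^{\alpha}$, diffusion coefficient $\sigma^{\alpha}$) is assumed here for illustration only and does not reproduce the precise setting of the paper:
\[
u(t,x)\;=\;\sup_{\alpha\in\mathcal{A}}\,\sup_{\tau\in\mathcal{T}_{t,T}}\ \mathcal{E}^{f,\alpha}_{t,\tau}\big[h\big(\tau,X^{t,x,\alpha}_{\tau}\big)\big],
\]
where $\mathcal{E}^{f,\alpha}$ denotes the (conditional) $f$-expectation induced by a backward stochastic differential equation with driver $f$. In such a schematic setting, the value function $u$ is characterized as a viscosity solution of a variational inequality of the form
\[
\min\Big(u(t,x)-h(t,x),\; -\partial_t u(t,x)-\sup_{\alpha}\big[\mathcal{L}^{\alpha}u(t,x)+f\big(t,x,u(t,x),(\sigma^{\alpha})^{\!\top}D_x u(t,x),\alpha\big)\big]\Big)=0,
\qquad u(T,x)=h(T,x).
\]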