A new memory gradient method for unconstrained optimization problems is presented. The method uses both the current and the previous iterative information to generate a descent direction, and determines the step size at each iteration by either an exact line search or an inexact Wolfe line search. Global convergence and a linear convergence rate are proved under mild conditions. Numerical experiments show that the new method is efficient in practical computation.
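The general scheme described above can be sketched as follows. This is a minimal illustration, not the paper's method: the abstract does not specify the memory parameter, so the choice of `beta` below is a hypothetical one, picked only so that the combined direction provably remains a descent direction; the Wolfe step is found by a standard bisection search.

```python
import numpy as np

def wolfe_step(f, grad, x, d, c1=1e-4, c2=0.5, max_iter=50):
    """Bisection search for a step length satisfying the weak Wolfe conditions."""
    lo, hi, alpha = 0.0, np.inf, 1.0
    fx, gTd = f(x), grad(x) @ d          # gTd < 0 for a descent direction
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * gTd:   # sufficient decrease fails
            hi = alpha
        elif grad(x + alpha * d) @ d < c2 * gTd:       # curvature condition fails
            lo = alpha
        else:
            return alpha
        alpha = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
    return alpha

def memory_gradient(f, grad, x0, tol=1e-8, max_iter=1000):
    """Generic memory gradient iteration: d_k = -g_k + beta_k * d_{k-1}."""
    x = np.asarray(x0, dtype=float)
    d = -grad(x)                          # steepest descent on the first step
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Hypothetical memory parameter (not the paper's choice): this bound
        # guarantees g @ d_new <= -0.5 * ||g||**2, i.e. a descent direction.
        beta = 0.5 * (g @ g) / max(abs(g @ d), g @ g)
        d = -g + beta * d
        x = x + wolfe_step(f, grad, x, d) * d
    return x

# Usage: minimize the convex quadratic f(x) = x^T A x / 2 - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = memory_gradient(f, grad, np.zeros(2))
```

For the quadratic above the iterate converges to the unique minimizer `A⁻¹b`; swapping in the paper's actual memory parameter would change only the `beta` line.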