Navigation tasks for mobile robots have been studied extensively over the past several years. More recently, there have been many attempts to apply machine learning algorithms to them. Deep learning techniques are of particular importance because they have achieved excellent performance in various fields, including robot navigation. Deep learning methods, however, require a considerable amount of data to train their models, and their results can be difficult for researchers to interpret. To address these issues, we propose a novel model for mobile robot navigation based on deep reinforcement learning. In our navigation tasks, the robot is given no prior information about the environment, and the positions of the obstacles and the goal change in every episode. To succeed under these conditions, we combine several Q-learning techniques that are considered state-of-the-art. We first describe our model and then verify it through a series of experiments.
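The abstract does not specify which Q-learning variants are combined, so as background only, the following is a minimal sketch of the tabular temporal-difference update that all such methods build on; the state and action names are hypothetical and chosen purely for illustration.

```python
# Hypothetical minimal sketch: the core Q-learning update rule.
# This is NOT the paper's model, only the textbook rule that
# state-of-the-art deep Q-learning variants extend.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q(s, a) toward the TD target."""
    best_next = max(q[next_state].values()) if q[next_state] else 0.0
    target = reward + gamma * best_next          # bootstrap from next state
    q[state][action] += alpha * (target - q[state][action])
    return q[state][action]

# Toy example: from state 0, action "right" yields reward 1 and leads
# to terminal-like state 1 (whose values are never updated, so its
# max stays 0 and Q(0, "right") converges to the reward of 1).
q = {0: {"left": 0.0, "right": 0.0}, 1: {"left": 0.0, "right": 0.0}}
for _ in range(100):
    q_update(q, 0, "right", 1.0, 1)
print(round(q[0]["right"], 3))  # → 1.0
```

Deep Q-learning replaces the table `q` with a neural network and adds stabilizing techniques (e.g., experience replay and target networks) on top of this same update.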