Convolutional neural networks (CNNs) have revolutionized image and pattern recognition, surpassing human performance on standard benchmark datasets. The strongest feature of CNNs is that they learn features automatically from training examples, removing the need for humans to hand-select intuitive features for the model. CNNs also exploit the 2D structure of images and therefore achieve higher accuracy than a standard neural network operating on flattened inputs. This approach outperforms explicit feature-decomposition approaches, such as detecting lanes or neighboring cars, because the network itself decides which features are most useful to extract from the image.

The dataset published on Udacity's GitHub repository is used for the CNN, with 80% of the data used for training and 20% for validation. The training data, in addition to the recordings from the human driver, consists of images of the vehicle at various lateral shifts from the center of the lane and rotations from the direction of the road. Time-stamped steering angles are extracted from the .bag files of the training data using the Robot Operating System (ROS) and paired with the corresponding images to form (image, steering angle) tuples for input to the CNN, which then computes a proposed steering command (see the sketches below).

This section presents a methodology for implementing a level-2 autonomous vehicle in a relatively sparsely occupied environment. A CNN is trained on the Udacity dataset and used to compute the optimal steering angle from the image captured by the camera. When the path is obstructed, three ultrasonic sensors decide in which direction the vehicle should turn to continue on its path.
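As a concrete illustration of the extraction step, the sketch below pairs camera frames from a .bag file with the most recent steering report using the ROS Python API. The topic names, message fields, and bag filename are assumptions based on typical Udacity bags; the text does not fix them.

```python
# Sketch: extract (image, steering angle) pairs from a ROS .bag file.
import rosbag
from cv_bridge import CvBridge

IMAGE_TOPIC = "/center_camera/image_color"   # assumed topic name
STEERING_TOPIC = "/vehicle/steering_report"  # assumed topic name

bridge = CvBridge()
pairs = []           # list of (image, steering_angle) tuples
latest_angle = None  # most recently seen steering angle

with rosbag.Bag("dataset.bag") as bag:  # assumed filename
    for topic, msg, t in bag.read_messages(topics=[IMAGE_TOPIC, STEERING_TOPIC]):
        if topic == STEERING_TOPIC:
            # Remember the latest time-stamped steering angle (assumed field name).
            latest_angle = msg.steering_wheel_angle
        elif latest_angle is not None:
            # Pair the camera frame with the nearest preceding steering angle.
            img = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            pairs.append((img, latest_angle))
```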
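The text does not specify a network architecture, so the following is only a minimal sketch of a steering-angle regression CNN with the 80/20 train/validation split described above. The layer sizes follow NVIDIA's PilotNet as a plausible choice, and the random placeholder arrays stand in for the extracted (image, steering angle) pairs.

```python
# Sketch: steering-angle regression CNN with an 80/20 split (architecture assumed).
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models

# Placeholder data; in practice these come from the (image, steering angle) pairs.
images = np.random.rand(100, 66, 200, 3).astype("float32")
angles = np.random.rand(100).astype("float32")

# 80% training, 20% validation, as described in the text.
X_train, X_val, y_train, y_val = train_test_split(
    images, angles, test_size=0.2, random_state=42)

model = models.Sequential([
    layers.Input(shape=(66, 200, 3)),
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(50, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(1),  # proposed steering command (regression output)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)
```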
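The obstacle-avoidance rule can be sketched as follows. The clearance threshold, turn magnitude, and function name are hypothetical, since the text only states that the three ultrasonic range readings decide the turn direction when the path is obstructed.

```python
# Sketch: choose a steering command from three ultrasonic range readings.
SAFE_DISTANCE_CM = 50  # assumed clearance threshold
MAX_TURN_ANGLE = 0.5   # assumed turn magnitude, radians

def choose_steering(left_cm, center_cm, right_cm, cnn_angle):
    """Return a steering command given left/center/right range readings.

    If the path ahead is clear, follow the CNN's proposed angle;
    otherwise turn toward the side with more free space.
    """
    if center_cm > SAFE_DISTANCE_CM:
        return cnn_angle        # no obstruction: use the CNN's proposal
    if left_cm > right_cm:
        return -MAX_TURN_ANGLE  # more room on the left: turn left
    return MAX_TURN_ANGLE       # otherwise turn right
```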