Opposition-based learning (OBL) was originally introduced by Tizhoosh [27] as a machine intelligence scheme for reinforcement learning. It has since been employed to improve soft computing methods such as fuzzy systems [28] and artificial neural networks [29], [30]. In addition, Rahnamayan et al. [31] demonstrated the capabilities of OBL by combining it with differential evolution to solve continuous optimization problems. OBL has also been incorporated into a wide range of evolutionary algorithms, including biogeography-based optimization [32], particle swarm optimization [33], ant colony optimization [34], and simulated annealing [35].

Most evolutionary algorithms start with a random initial population, without any preliminary knowledge of the solution space, and their computation time depends directly on the quality of the initial solutions and their distance from the optimal solution. Two questions therefore arise: how can the initial population, and the population generated in each iteration, be enriched? And does the simultaneous consideration of randomness and opposition offer an advantage over pure randomness? To answer these questions, this paper explores the simultaneous use of both approaches (randomness and opposition) in generating solutions. After the population of solutions is generated, it is given a second chance: the opposite solutions (the opposite population) are evaluated, and the fittest solutions from the combined populations are selected to start the algorithm. As a diversity mechanism, OBL aims to enhance the performance of the proposed meta-heuristic algorithms and to enrich the Pareto fronts. However, since the solution space in this paper is binary, a new version, called the binary opposition-based scheme, is proposed. To this end, the concept of OBL in continuous spaces is first presented and then modified for use in a binary solution space.
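The opposition-based initialization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fitness function, population size, and dimensionality are hypothetical, and the standard definitions of the continuous opposite (x̆ = a + b − x on an interval [a, b]) and the binary opposite (bit flipping) are assumed.

```python
import random


def opposite_continuous(x, a, b):
    # Standard OBL opposite of x in the interval [a, b]: x_opp = a + b - x
    return a + b - x


def opposite_binary(solution):
    # In a binary space the opposite solution is obtained by flipping every bit
    return [1 - bit for bit in solution]


def opposition_based_init(pop_size, dim, fitness):
    # Step 1: generate a random binary population (no prior knowledge assumed)
    population = [[random.randint(0, 1) for _ in range(dim)]
                  for _ in range(pop_size)]
    # Step 2: give the population a "second chance" via its opposite population
    opposite_pop = [opposite_binary(ind) for ind in population]
    # Step 3: keep the pop_size fittest solutions from the combined set
    combined = population + opposite_pop
    combined.sort(key=fitness, reverse=True)
    return combined[:pop_size]


# Hypothetical usage: maximize the number of ones in an 8-bit solution
best = opposition_based_init(pop_size=10, dim=8, fitness=sum)
```

Because each solution and its opposite partition the search space symmetrically, at least one member of every pair lies in the "better half" under this toy fitness, which is the intuition behind starting from the fitter of the two.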