The above-mentioned deep-learning-based methods relate the robot's success or failure in grasping objects to the appearance of objects in images. However, whether the robot succeeds in grasping an object depends on the physical relations between the robot hand and the object it contacts [36]. The method (Dex-Net) proposed by Mahler et al. takes into account not only the relations with images but also the physics of contact [37–40]. Reference [37] defined a physical model of grasping success for a two-finger gripper and detected stable grasping points through pose analyses of 3D CAD models of the object and the gripper. Using deep learning, they trained the robot on stable grasping points for numerous objects so that stable grasping points of unknown objects could be detected. Reference [38] improved the method's versatility by analyzing grasping points on depth images created by physical simulations. Reference [39] dealt with a suction gripper, and Reference [40] made it possible to analyze grasping stability using the robot hand and object models registered by users on a cloud service. Because the proposed method is highly refined both conceptually and technically, an increasing number of companies are expected to put it into practical use in the future.
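To make the idea of physics-aware grasp evaluation concrete, the following is a minimal sketch (not Dex-Net's actual pipeline) of scoring candidate two-finger grasps on a depth image. It uses a simple antipodal criterion: the surface normals at the two contact points should oppose each other along the grasp axis, a common proxy for force closure. The toy depth image, the function names, and the scoring rule are all illustrative assumptions.

```python
import numpy as np

def surface_normals(depth):
    """Approximate per-pixel surface normals from a depth image via finite differences."""
    gy, gx = np.gradient(depth)              # gradients along rows (y) and columns (x)
    n = np.dstack([-gx, -gy, np.ones_like(depth)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

def antipodal_score(normals, p1, p2):
    """Score a two-finger grasp between pixels p1 and p2 (row, col).

    A simple antipodal proxy: each contact normal should point along the
    grasp axis toward the opposing finger; the score is the product of the
    (clamped) alignments, so flat or misaligned contacts score 0.
    """
    # grasp axis in (x=col, y=row, z) coordinates, matching the normal layout
    axis = np.array([p2[1] - p1[1], p2[0] - p1[0], 0.0])
    axis /= np.linalg.norm(axis)
    n1, n2 = normals[p1], normals[p2]
    return max(0.0, float(np.dot(n1, axis))) * max(0.0, float(np.dot(n2, -axis)))

# Toy depth image: a raised box on a flat table (smaller depth = closer to camera)
depth = np.full((20, 20), 1.0)
depth[6:14, 6:14] = 0.7

normals = surface_normals(depth)
s_side = antipodal_score(normals, (10, 6), (10, 13))  # opposite sides of the box
s_flat = antipodal_score(normals, (2, 2), (2, 17))    # two points on the flat table
```

Here `s_side` is positive because the box's left and right faces present opposing normals, while `s_flat` is zero since the table normals are perpendicular to the grasp axis. A learned model such as Dex-Net's GQ-CNN effectively replaces this hand-coded score with one trained on large numbers of simulated grasps.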