Edges are crucial features for object segmentation and classification in both image and point cloud processing. Although many research efforts have been devoted to edge extraction and enhancement in both areas, each approach is limited by the characteristics of its own data modality. This paper presents a new approach that integrates edge pixels in the 2D image with boundary data in the 3D point cloud by establishing a mapping relationship between the two types of data, thereby representing the 3D edge features of the object. The 3D edge extraction, which adopts the Microsoft Kinect as the 3D sensor, involves three steps: first, generating a range image from the point cloud of the object; second, extracting edges in the range image and in the digital image; and finally, integrating the edge data by referring to the correspondence map between point cloud data and image pixels.
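The core of the pipeline is the correspondence between image pixels and 3D points. A minimal sketch of that idea, assuming a pinhole camera model with hypothetical intrinsics (fx, fy, cx, cy) and a simple depth-discontinuity edge detector (the paper's actual edge operators and Kinect calibration are not specified here):

```python
import numpy as np

def range_edges(depth, thresh=0.05):
    """Mark depth discontinuities in a range image as edge pixels.

    A simple gradient-magnitude threshold stands in for whatever
    edge operator the method actually uses on the range image.
    """
    gy, gx = np.gradient(depth)
    return np.hypot(gx, gy) > thresh

def backproject_edge_pixels(depth, edge_mask, fx, fy, cx, cy):
    """Map 2D edge pixels back to 3D points via the pinhole model.

    This realizes the pixel-to-point correspondence: each edge pixel
    (u, v) with depth z maps to ((u-cx)z/fx, (v-cy)z/fy, z).
    """
    v, u = np.nonzero(edge_mask)          # pixel coordinates of edges
    z = depth[v, u]
    valid = z > 0                          # drop pixels with no depth
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])      # (N, 3) array of 3D edge points

# Example: a synthetic range image with a depth step produces a
# vertical 3D edge at the discontinuity.
depth = np.ones((10, 10))
depth[:, 5:] = 2.0                         # step from 1 m to 2 m
mask = range_edges(depth)
edge_points = backproject_edge_pixels(depth, mask, fx=100, fy=100, cx=5, cy=5)
```

In the full method, edge masks from the range image and the digital image would both be carried through such a correspondence map and merged into one 3D edge set; the sketch above covers only the range-image branch.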