Hsi-Jian Lee and Chin-Tsing Deng
Department of Computer and Information Science
National Chiao Tung University
Hsinchu, Taiwan 300, R.O.C.
This paper proposes a new approach for determining the location and orientation of a camera mounted on an autonomous vehicle. We choose road boundaries and objects with vertical edges, which are commonly found in both indoor and outdoor environments, as the calibration objects. The camera models are derived from two or three consecutive images under the assumption that the height of the camera and the distance traveled between exposures are known in advance. Since the direction of a line in 3D space is perpendicular to the normal of the plane formed by this line and the camera center, the tilt and swing angles can be computed from the first image. The pan angle can then be derived analytically from the projections of vertical edges in three consecutive images. If the angle between the direction of movement of the autonomous vehicle and the direction of the road boundaries is large enough, the variation in the projections of the road boundaries is apparent, and two images are sufficient to find the pan angle. If the moving direction is parallel to the direction of the road boundaries, only one image is needed to determine the pan angle. Finally, two translational parameters can be found analytically by employing the projections of vertical edges in two or three consecutive images.
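The geometric fact underlying the tilt and swing computation can be illustrated with a minimal numerical sketch (the values below are illustrative, not from the paper): the camera center and a 3D line span an interpretation plane, and the line's direction is perpendicular to that plane's normal.

```python
import numpy as np

# Camera center at the origin; a 3D line through point p with direction d.
# (Hypothetical values chosen for illustration only.)
p = np.array([1.0, 0.5, 4.0])
d = np.array([0.2, 0.0, 1.0])
d /= np.linalg.norm(d)

# Two points on the line define two viewing rays from the camera center.
q1, q2 = p, p + 2.0 * d

# Normal of the interpretation plane spanned by the two rays.
n = np.cross(q1, q2)
n /= np.linalg.norm(n)

# The line direction lies in that plane, so it is perpendicular to n.
print(abs(np.dot(n, d)))
```

In practice the two rays would come from image points on the projected road boundary, so the plane normal, and hence a constraint on the camera's tilt and swing, is recoverable from a single image.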
Keywords: autonomous vehicle, camera model, location and orientation, multiple frames, calibration object
Received December 1, 1994; revised July 31, 1995.
Communicated by Jhing-Fa Wang.