Floating-point numbers and floating-point arithmetic are widely used in numerical computations. As computational environments continue to advance, tractable problem sizes quickly become large-scale. As the number of floating-point operations grows, problems caused by accumulated rounding errors become increasingly critical: in the worst case, an approximate solution obtained by a numerical computation can be severely inaccurate. Verified numerical computations are therefore becoming increasingly important. This paper presents a survey of the basics of verified numerical computation, focusing on floating-point arithmetic, interval arithmetic, rounding error analyses, and error-free transformations of floating-point operations.
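As an illustration of the last topic, a classical error-free transformation is Knuth's TwoSum, which computes both the rounded sum of two floating-point numbers and the exact rounding error of that sum. The sketch below, in Python (whose `float` is an IEEE 754 binary64 number), uses a function name of our own choosing:

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's TwoSum: return (s, e) with s = fl(a + b) and
    a + b = s + e exactly (no rounding error is lost)."""
    s = a + b          # rounded sum
    bb = s - a         # the part of b that made it into s
    e = (a - (s - bb)) + (b - bb)  # exact error of the rounded sum
    return s, e

# The tiny addend 2**-60 is lost entirely in the rounded sum,
# but TwoSum recovers it exactly in the error term.
s, e = two_sum(1.0, 2.0 ** -60)
print(s, e)  # s == 1.0, e == 2**-60
```

Here `s + e` equals `a + b` exactly as a real-number identity, which is what makes such transformations useful building blocks for verified computations and compensated summation.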