In this paper, we propose an approximate dynamic programming algorithm to solve a Markov decision process (MDP) formulation for the admission control of elective patients. To manage elective patients from multiple specialties equitably and efficiently, we maintain a waiting list and assign each patient a time-dependent dynamic priority. Taking the random arrivals of patients into account, we then make sequential decisions on a weekly basis: at the end of each week, we select from the waiting list the patients to be treated in the following week. By minimizing the cost function of the MDP over an infinite horizon, we seek the best trade-off between patients' waiting times and the over-utilization of operating rooms and downstream resources. To address the curses of dimensionality that arise in realistically sized problems, we first analyze the structural properties of the MDP model and propose an algorithm that accelerates the search for greedy actions. We then develop a novel approximate dynamic programming algorithm based on recursive least-squares temporal difference learning as the solution technique. Experimental results show that the proposed algorithms require far less computation time than conventional dynamic programming methods. Moreover, the algorithms compute high-quality near-optimal policies for realistically sized problems.
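To make the named solution technique concrete, the following is a minimal sketch of generic recursive least-squares temporal difference (RLS-TD(0)) policy evaluation with linear value-function approximation, the family of methods the abstract refers to. The toy cyclic-chain environment, the one-hot features, and all parameter values here are illustrative assumptions, not the paper's actual MDP, feature design, or algorithm.

```python
import numpy as np

def rls_td_update(theta, P, x, x_next, r, gamma):
    """One RLS-TD(0) update via the Sherman-Morrison identity.

    theta : weight vector of the linear value estimate V(s) ~ phi(s) @ theta
    P     : running inverse of A = sum_t phi_t (phi_t - gamma * phi_{t+1})^T
    """
    d = x - gamma * x_next                # "difference" feature vector
    Px = P @ x
    k = Px / (1.0 + d @ Px)               # gain vector
    theta = theta + k * (r - d @ theta)   # TD-error-driven correction
    P = P - np.outer(k, d @ P)            # rank-one update of the inverse
    return theta, P

# Illustrative toy problem (not from the paper): a deterministic 3-state
# cycle with reward 1 per step and tabular (one-hot) features, so the true
# value of every state is 1 / (1 - gamma) = 10.
gamma = 0.9
n = 3
phi = np.eye(n)                           # one-hot features
theta = np.zeros(n)
P = 100.0 * np.eye(n)                     # large P0 = weak regularization

state = 0
for t in range(300):
    nxt = (state + 1) % n                 # deterministic cycle 0 -> 1 -> 2
    theta, P = rls_td_update(theta, P, phi[state], phi[nxt], 1.0, gamma)
    state = nxt

print(np.round(theta, 2))                 # entries near 1/(1-gamma) = 10
```

Because the update applies Sherman-Morrison exactly, after 300 steps `theta` equals the regularized least-squares solution in closed form, which here is within about 0.01 of the true value 10 in every state; the sample efficiency of such least-squares updates over gradient-style TD is one motivation for using them in large approximate dynamic programming problems.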