This paper presents a deep reinforcement learning (DRL)-based task scheduling algorithm for an FPGA-based real-time digital simulation (FRTDS) system, which generates task arrangements that minimize the makespan of a task sequence under limited resources. The algorithm consists of two parts: synthetic cost construction and DRL-driven arrangement generation. The synthetic cost captures the cost of each candidate arrangement in terms of both resource usage and the probability of blocking subsequent arrangements. This cost is used to evaluate the state-action value function in a deep Q-network (DQN) procedure that produces an optimized scheduling strategy. The reinforcement learning strategy generation process is established by instantiating the computing components in the hardware as agents, and the RAM resources and communication I/O ports as the environment. A hardware-design-based decision rule is constructed to distribute the computing variables as evenly as possible across storage while making full use of the pipelining characteristics of the FPGA. A compiler is written to generate the binary stream that drives the FRTDS. The accuracy and performance of the proposed method are verified and evaluated: simulation results of the proposed method are compared with those of a classic method, and the makespan obtained by the proposed method is significantly shorter. This corresponds to higher computing power and the ability to handle larger-scale real-time simulation.
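
The abstract does not specify implementation details, but the cost-driven DQN update it describes can be illustrated with a minimal sketch. The following code is a hypothetical outline only, not the paper's implementation: the state encoding of RAM/port occupancy, the discrete set of candidate arrangements as actions, the `synthetic_cost` placeholder, and all dimensions are assumptions, with the reward taken as the negative synthetic cost.

```python
# Minimal DQN sketch for cost-driven arrangement selection.
# Hypothetical illustration only: state encoding, action set, and
# synthetic_cost() are assumptions, not the paper's implementation.
import random
import torch
import torch.nn as nn

STATE_DIM = 32    # assumed encoding of RAM/port occupancy (the "environment")
N_ACTIONS = 16    # assumed number of candidate arrangements per step
GAMMA = 0.9       # assumed discount factor for future synthetic cost

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def synthetic_cost(state, action):
    """Placeholder for the paper's synthetic cost: resource usage plus
    the probability that this arrangement blocks later arrangements."""
    return random.random()  # stand-in value

def dqn_step(state, next_state, epsilon=0.1):
    # Epsilon-greedy selection over candidate arrangements.
    q_values = q_net(state)
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = int(q_values.argmax())
    # Reward is the negative synthetic cost: cheaper arrangements score higher.
    reward = -synthetic_cost(state, action)
    # One-step temporal-difference target and update of Q(s, a).
    with torch.no_grad():
        target = reward + GAMMA * q_net(next_state).max()
    loss = (q_values[action] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return action
```

In this reading, each call to `dqn_step` selects one arrangement for the current task, and the makespan reduction reported in the paper would emerge from the learned Q-function favoring arrangements with low synthetic cost over many scheduling episodes.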