Data-intensive computing applications, such as object recognition, time series prediction, and optimization tasks, are becoming increasingly important in several fields, including smart mobility, health, and industry. Because of the large amount of data involved in the computation, the conventional von Neumann architecture suffers from excessive latency and energy consumption due to the memory bottleneck. A more efficient approach is in-memory computing (IMC), where computational operations are carried out directly within the memory that stores the data. IMC can take advantage of the rich physics of memory devices, such as their ability to store analog values for use in matrix-vector multiplication (MVM) and their stochasticity, which is highly valuable in the context of optimization and constraint satisfaction problems (CSPs). This article presents a stochastic spiking neuron based on a phase-change memory (PCM) device for the solution of CSPs within a Hopfield recurrent neural network (RNN). In the RNN, the PCM cell serves as the integrating element of a stochastic neuron, supporting the hardware solution of a typical CSP, namely a Sudoku puzzle. Finally, the ability of RNNs with PCM-based neurons to solve Sudoku puzzles of increasing size is studied with a compact simulation model, thus supporting our PCM-based RNN for data-intensive computing.

INDEX TERMS Phase-change memory (PCM), artificial synapses, Hopfield neural network, stochastic process, optimization.

I. INTRODUCTION

OPTIMIZATION problems are among the most computationally intensive tasks in several application fields, such as industry, finance, and transport. In general, optimization is carried out over several iterations to identify the global minimum of a certain cost function. In each iteration, a conventional digital system must access the memory to fetch input data and write back the temporary output, which is time- and energy-consuming.
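The iterative search described above can be illustrated with a minimal software sketch of a stochastic binary Hopfield update, of the kind that the article later realizes in hardware with PCM-based neurons. The toy problem below (a single one-hot constraint, i.e., "exactly one of N options is active," a building block of Sudoku-like CSPs), the inhibitory weight values, the sigmoid firing rule, and the annealing schedule are all illustrative assumptions, not the article's implementation; in the hardware, integration and stochasticity arise from the PCM device itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CSP: exactly one of N units should be active (a single
# Sudoku-like one-hot constraint). Mutual inhibition penalizes
# states with more than one active neuron; a positive bias
# penalizes the all-zero state.
N = 4
W = -2.0 * (np.ones((N, N)) - np.eye(N))  # inhibitory synapses, no self-connection
b = np.ones(N)                            # bias favoring one active unit

x = rng.integers(0, 2, size=N).astype(float)  # random binary initial state

def energy(x):
    # Hopfield energy; its minima are the constraint-satisfying states
    return -0.5 * x @ W @ x - b @ x

T = 1.0  # "temperature" controlling the neuron's stochasticity
for step in range(200):
    i = rng.integers(N)                 # pick one neuron at random
    u = W[i] @ x + b[i]                 # membrane potential: one MVM row (a MAC)
    p = 1.0 / (1.0 + np.exp(-u / T))    # stochastic firing probability
    x[i] = 1.0 if rng.random() < p else 0.0
    T = max(0.05, T * 0.98)             # anneal the noise toward a deterministic net

print(x, energy(x))  # typically converges to exactly one active neuron
```

Note that the per-neuron update is itself a multiply-accumulate over one row of the synaptic matrix, which is exactly the operation that IMC architectures accelerate within the memory array.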
To enable more efficient optimization, a non-von Neumann architecture can be adopted to eliminate the latency and energy spent shuttling data between the memory and the central processing unit (CPU) [1]. An example of a non-von Neumann computing architecture is in-memory computing (IMC), where the computation is executed directly within the memory array. For instance, IMC can efficiently accelerate the typical multiply-accumulate (MAC) operation, which is the foundation of modern digital accelerators for artificial intelligence (AI) and optimization [2]. Emerging memory devices, such as phase-change memory (PCM) [3], [4] and resistive random access memory (RRAM) [5], [6], offer scalable, efficient, and CMOS-compatible solutions to store analog information as