When designing an artificial neural network system in hardware, the implementation of the activation function is an important consideration. The hyperbolic tangent activation function is the most popular, and many approaches exist to approximate it, with varying trade-offs between area utilization and delay. Unfortunately, there is little data available reporting the minimum accuracy required of the activation function approximation in order to obtain good system-level performance; this is particularly the case for table-based approximation methods. In this paper, we demonstrate that table-based approximation methods are very well suited to implementing the tanh activation function, as well as its derivative, in a variety of feed-forward artificial neural network topologies which employ the popular RPROP or Levenberg-Marquardt training algorithms. It is shown that when these training methods are used, an activation function with a relatively high maximum error can still produce results comparable to floating point. This finding suggests that these table-based methods can be employed with extreme efficiency in terms of area and speed, rendering them a promising option for any VLSI or FPGA artificial neural network hardware design.

Artificial neural networks (ANNs) are a powerful computational model originally inspired by the workings of the human brain. They are used extensively in many applications for function approximation and pattern classification [1]. An important part of any ANN is the non-linear activation function (AF) and its derivative, which are used for the feed-forward and training operations of the ANN, respectively. With the advent of powerful FPGAs, as well as continued interest in both embedded systems and large-scale, high-performance VLSI implementations, there is a constant effort to design better AFs in hardware [2], [3], [4], [5].

It is generally accepted that a maximum representation error of 1% or less is needed for the network to function properly [6], [7], [8]; however, these results seem to apply specifically to the case where the classic backpropagation algorithm is used for training. The majority of these works implement the AF and its derivative using piecewise linear approximations or a custom function generator. Another approach is to store these functions in lookup tables (LUTs); however, this is often shunned due to the perception that very large table sizes are required to achieve acceptable levels of performance [9].

Recently, some table-based approaches have been proposed [10], [11] which are very competitive in terms of speed and area utilization, as long as the maximum approximation error permitted is no less than approximately 1%. It is desirable to determine how these AFs and their derivatives perform in a system-level test in order to characterize their performance and identify their strengths and weaknesses.

In this work, a wide series of experiments is carried out in order to determine how well an ANN system performs when table-based AFs are employed. Performance is characterized over an ar...
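To make the table-based approach concrete, the sketch below builds a small lookup table for tanh and measures its maximum approximation error, the figure of merit discussed above. This is a minimal Python/NumPy illustration under assumed parameters (a saturation range R = 4, N = 256 entries sampled at bin centers, and sign folding via the odd symmetry of tanh), not the specific designs of [10], [11]; the names tanh_lut, R, N, and STEP are illustrative only.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper):
R = 4.0    # saturation range: tanh(4) ~ 0.99933, so outputs clamp to +/-1 beyond it
N = 256    # number of table entries covering [0, R)
STEP = R / N

# Sample tanh at bin centers; odd symmetry (tanh(-x) = -tanh(x)) lets the
# table cover only non-negative inputs. In hardware these values would be
# stored as fixed-point words in a ROM or block RAM.
table = np.tanh((np.arange(N) + 0.5) * STEP)

def tanh_lut(x):
    """Approximate tanh(x) by table lookup with saturation and sign folding."""
    s = np.sign(x)
    a = np.abs(x)
    idx = np.minimum((a / STEP).astype(int), N - 1)
    return s * np.where(a >= R, 1.0, table[idx])

# Measure the maximum approximation error against double-precision tanh.
xs = np.linspace(-8.0, 8.0, 200001)
max_err = np.max(np.abs(np.tanh(xs) - tanh_lut(xs)))
print(f"max |tanh(x) - tanh_lut(x)| = {max_err:.5f}")  # ~0.0078 with these parameters
```

With these assumed parameters the measured maximum error comes out just under the roughly 1% threshold cited above; halving N approximately doubles the error, which is the table-size-versus-accuracy trade-off that motivates the system-level experiments in this work.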