Fairness is an important aspect of queuing systems. Several fairness measures have been proposed for queuing systems in general and parallel job scheduling in particular. Generally, a scheduler is considered unfair if some jobs are discriminated against while others are favored. Some of the metrics used to measure fairness for parallel job schedulers can imply unfairness where there is no discrimination (and vice versa). This makes them inappropriate. In this paper, we show how the existing approach misrepresents fairness in practice. We then propose a new approach for measuring fairness for parallel job schedulers. Our approach is based on two principles: (i) since jobs have different resource requirements and encounter different queue/system states, they need not have the same performance for the scheduler to be fair, and (ii) to compare two schedulers for fairness, we compare how the schedulers favor/discriminate against individual jobs. We use performance and discrimination trends to validate our approach. We observe that our approach can deduce discrimination more accurately. This is true even in cases where the most discriminated jobs are not the worst performing jobs.

Performance metrics, in some cases, may not accurately represent the user's needs. They may misrepresent them in specific circumstances, leading users to draw wrong deductions. AJSD, for example, may exaggerate poor performance in short jobs. A job stream with many short jobs will show misleadingly poor performance if the AJSD metric is used. The implication of a performance metric also depends on the system setup. Differences in system setups may call for differences in the deductions drawn from the metric values. For example, in space-slicing systems, AWT and ART can be used interchangeably. This is because ART = AWT + the mean execution time, and the mean execution time is independent of the scheduler. However, in time-slicing systems, the two metrics do not lead to related conclusions.
This is because job response time cannot be deduced from the time it starts processing. Therefore, a lot of care has to be taken when choosing a performance metric [2].

Even when an appropriate performance metric is used, the average metric value can give misleading results. This is because it gives a global view of performance but does not show internal discrimination/favoritism among the jobs. A scheduler may have an impressive (average) metric value even though some jobs perform well at the expense of others. Such a scheduler is unfair. Unfair schedulers may have impressive performance metric values that hide the underlying discrimination, leading to user dissatisfaction [3]. There are many performance metrics [1], and specific metrics are appropriate in specific scenarios. Fairness metrics [4-6] also exist. However, they misrepresent fairness (discrimination/favoritism) in some cases. This may lead to counterintuitive deductions.

In this paper, we study how fairness/discrimination is represented in three common approaches used to evaluate fairness for parallel job schedulers. The approaches considered...
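The relationships among these metrics can be illustrated with a minimal sketch, assuming a run-to-completion (space-slicing) model in which each job's response time is its wait time plus its execution time. The job records (arrival, start, finish times) are hypothetical:

```python
# Hypothetical jobs as (arrival, start, finish) times.
jobs = [(0, 0, 10), (2, 10, 14), (3, 14, 44)]

waits = [start - arrival for arrival, start, _ in jobs]
execs = [finish - start for _, start, finish in jobs]
resps = [finish - arrival for arrival, _, finish in jobs]
slowdowns = [r / e for r, e in zip(resps, execs)]

awt = sum(waits) / len(jobs)       # average wait time (AWT)
art = sum(resps) / len(jobs)       # average response time (ART)
ajsd = sum(slowdowns) / len(jobs)  # average job slowdown (AJSD)

# In this model, ART = AWT + mean execution time, so the two metrics
# rank schedulers identically.
assert abs(art - (awt + sum(execs) / len(jobs))) < 1e-9

# AJSD is dominated by short jobs: the 4-unit job that waited 8 units
# contributes a slowdown of 3.0, far above the other jobs, illustrating
# how AJSD can exaggerate poor performance for short jobs.
```

In a time-slicing model this identity breaks down, since a job's execution is interleaved with other jobs and its response time can no longer be decomposed as wait plus execution from the start time alone.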