In the public transport industry, travellers’ perceived satisfaction is a key element in understanding their evaluation of, and loyalty to, the service. Despite its importance, customer satisfaction is under-represented in the literature, and most previous studies are based on survey data collected from a single city only, which precludes comparison across different transport systems. To address this gap, this paper reports on a study of train passengers’ satisfaction with the fare paid for their most recent home-based train trip in five Australian capital cities: Sydney, Melbourne, Brisbane, Adelaide, and Perth. Two data sources are used: a nation-wide survey and objective information on the train fare structure in each of the targeted cities. Satisfaction with train fares is modelled as a function of socio-economic factors and train trip characteristics, using a random parameters ordered logit model that accounts for unobserved heterogeneity in the population. Results indicate that gender, city of origin, transport mode from home to the train station, eligibility for a student or senior concession fare, one-way cost, and waiting time, as well as five interaction variables between city of origin and socio-economic factors, are the key determinants of passenger satisfaction with train fares. In particular, female respondents tend to be less satisfied with their train fare than their male counterparts. Interestingly, respondents who take the bus to the train station tend to be more satisfied with their fare than other respondents. In addition, notable heterogeneity is detected in respondents’ perceived satisfaction with train fares, specifically with regard to one-way cost and waiting time. An intercity comparison reveals that a city’s train fare structure also affects travellers’ perceived satisfaction with their fare.
The findings of this research are significant for both policy makers and transport operators, allowing them to understand traveller behaviours, and to subsequently formulate effective transit policies.
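The ordered logit structure at the core of the model described above can be sketched in a few lines. The sketch below is a minimal illustration of a standard (fixed-parameter) ordered logit, not the paper's random parameters specification, and the coefficients, covariates, and cut-points are hypothetical values, not the study's estimates:

```python
import numpy as np

def ordered_logit_probs(x_beta, cutpoints):
    """Category probabilities under an ordered logit model.

    P(y <= k) = logistic(kappa_k - x*beta); the probability of each
    satisfaction category is the difference of successive cumulative
    probabilities.
    """
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints)[:, None] - x_beta[None, :])))
    cum = np.vstack([np.zeros_like(x_beta), cum, np.ones_like(x_beta)])
    return np.diff(cum, axis=0).T  # shape: (n_obs, n_categories)

# Hypothetical coefficients for female (0/1), one-way cost, and waiting
# time; all values are illustrative assumptions, not the paper's results.
beta = np.array([-0.3, -0.08, -0.05])
X = np.array([[1, 4.5, 10.0],
              [0, 3.0, 5.0]])
probs = ordered_logit_probs(X @ beta, cutpoints=np.array([-2.0, -1.0, 0.5, 1.5]))
print(probs.sum(axis=1))  # each row sums to 1
```

Negative coefficients on cost and waiting time shift probability mass toward the lower satisfaction categories, which is the qualitative pattern the abstract reports.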
Background
False discovery rate (FDR) control is commonly accepted as the most appropriate error control in multiple hypothesis testing problems. The accuracy of FDR estimation depends on the accuracy of the p-values estimated from each test and on the validity of the underlying distributional assumptions. However, in many practical testing problems, such as in genomics, the p-values can be under-estimated or over-estimated for many known or unknown reasons. FDR estimation is then distorted and loses its veracity.

Results
We propose a new extrapolative method called Constrained Regression Recalibration (ConReg-R) that recalibrates the empirical p-values by modeling their distribution to improve the FDR estimates. ConReg-R is based on the observation that accurately estimated p-values from true null hypotheses follow a uniform distribution, and that the observed p-value distribution is a mixture of the distributions of p-values from true null hypotheses and from true alternative hypotheses. Hence, ConReg-R recalibrates the observed p-values so that they exhibit the properties of an ideal empirical p-value distribution. The proportion of true null hypotheses (π0) and the FDR are estimated after recalibration.

Conclusions
ConReg-R provides an efficient way to improve the FDR estimates. It requires only the p-values from the tests and avoids permutation of the original test data. We demonstrate that the proposed method significantly improves FDR estimation on several gene expression datasets obtained from microarray and RNA-seq experiments.

Reviewers
The manuscript was reviewed by Prof. Vladimir Kuznetsov, Prof. Philippe Broet, and Prof. Hongfang Liu (nominated by Prof. Yuriy Gusev).
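The uniform-null observation that motivates ConReg-R can be illustrated with standard estimators. The sketch below uses a Storey-type π0 estimate and the Benjamini-Hochberg adjustment on simulated p-values; it demonstrates the mixture idea only, not ConReg-R's constrained regression recalibration itself, and the mixture proportions are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated mixture: 80% true nulls (uniform p-values) and 20% alternatives
# (p-values skewed toward zero); the proportions are illustrative.
p = np.concatenate([rng.uniform(size=8000), rng.beta(0.3, 4.0, size=2000)])

def pi0_estimate(p, lam=0.5):
    """Storey-type estimate of the true-null proportion pi0: above lam,
    mostly null p-values remain, and nulls are uniform on [0, 1]."""
    return np.mean(p > lam) / (1.0 - lam)

def bh_fdr(p):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    n = len(p)
    order = np.argsort(p)
    adj = p[order] * n / np.arange(1, n + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]  # enforce monotonicity
    out = np.empty(n)
    out[order] = np.minimum(adj, 1.0)
    return out

print(round(pi0_estimate(p), 2))  # close to the true value of 0.8
q = bh_fdr(p)
```

If the null p-values were mis-calibrated (not uniform), both estimates would be biased; that is the failure mode ConReg-R is designed to correct by recalibrating the p-values before π0 and FDR estimation.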
Developing and disseminating system-wide traffic volume data are critical objectives of traffic monitoring programs. Jurisdictions commonly use maps to disseminate traffic volume data because maps can easily communicate spatial traffic patterns throughout a highway network. Linear referencing systems (LRSs) are essential in this process, yet little research is available on methods to appropriately sequence a highway network and attribute traffic volume data to these sequences. This research aims to fill this knowledge gap by developing and applying procedures (1) to segment a linear-referenced highway network into sequences of homogeneous traffic flow and (2) to attribute traffic volume data to the segmented highway network. The research uses traffic and spatial data collected in Manitoba; however, the procedures are transferable to other jurisdictions. It develops four criteria for segmenting the highway network into sequences based on the locations of traffic sources and sinks, such as highway intersections or urban areas. It also develops three multipart principles for attributing traffic data to highway sequences, considering the type of count site, the proximity of the site to the sequence, the recency of data collection, and the presence of traffic sources and sinks. ArcGIS® tools facilitate the iterative consideration of these criteria and principles. Applying the sequencing and attribution procedures enables practitioners to improve the spatial representativeness of a traffic volume map and reveals the importance of re-evaluating the traffic monitoring program when changes are made to the highway network or sampling program.
Traffic monitoring agencies collect traffic data samples to estimate annual average daily traffic (AADT) at short duration count sites. The steps to estimate AADT from sample data introduce error that manifests as uncertainty in the AADT statistic and its applications. Past research suggests that the assignment of a short duration count site to a traffic pattern group (TPG), characterized by known traffic periodicities, represents a significant but poorly quantified source of error. This paper presents an approach to quantify the range of errors arising from such assignments and to mitigate these errors using a novel data-driven assignment method. The approach uses simulated 48-hour short duration counts, sampled from continuous count sites with known AADT, to develop a benchmark of the total error expected when AADT is estimated from such samples. Likewise, the analysis produces a set of AADT estimates using temporal factors from pre-defined TPGs to quantify the range of assignment errors. The data-driven assignment method aims to mitigate these errors by minimizing the mean absolute deviation in AADT estimates produced from multiple short duration counts in a single year. The approach is applied to traffic data collected in Manitoba, Canada, as a case study. The results indicate that the mean absolute error from 48-hour short duration counts is 6.40% of the true AADT and that improper assignment can produce a 9% range in mean absolute errors. When applied to previously unassigned sites, the data-driven assignment method reduced the mean absolute error from 10.32%, using a conventional assignment method, to 7.86%.
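The factoring step that turns a 48-hour short duration count into an AADT estimate, and the error metric quoted above, can be sketched as follows. The expansion factor, daily volumes, and true AADT below are assumed values for illustration, not Manitoba data or the paper's TPG factors:

```python
import numpy as np

# Illustrative expansion factor for one traffic pattern group (TPG);
# in practice such factors are derived from continuous count data.
factor = {"jul_weekday": 0.85}  # assumed AADT-to-sampled-day ratio

def aadt_from_short_count(daily_volumes, expansion_factor):
    """Estimate AADT from a short duration count: average the sampled
    daily volumes, then scale by the assigned TPG expansion factor."""
    return np.mean(daily_volumes) * expansion_factor

# A hypothetical 48-hour count taken on a pair of July weekdays
est = aadt_from_short_count([5200, 5350], factor["jul_weekday"])
true_aadt = 4600.0
abs_pct_error = abs(est - true_aadt) / true_aadt * 100
print(f"AADT estimate: {est:.0f}, absolute error: {abs_pct_error:.2f}%")
```

Assigning the site to the wrong TPG substitutes a different expansion factor into this calculation, which is the assignment error source the paper quantifies and mitigates.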