In March 1998, a survey was conducted on trains running in Switzerland to gauge public interest in a proposed maglev train system operated by Swissmetro. The survey results are often used as data for trip mode prediction models. To explore the double descent hypothesis, which posits that overparameterizing models helps generalization rather than harming it, this study compares a multinomial logit model with regularization against overparameterized neural-network models. The neural network ultimately achieved 67% accuracy, compared with 44% for the multinomial logit model. The models are evaluated on a common test set, and their accuracy is examined. Additionally, to further investigate the choice of parameters commonly used in prediction models, several causal inference studies are carried out to adjust the effect of the survey location (as a proxy for existing mode choice) and travelers' high-income status against possible confounders. Ultimately, causal inference further supported survey location and traveler income as effective predictors, especially after adjustment for confounders.
We use a variety of techniques to optimize the single-threaded operation C = C + A * B. A number of optimization techniques yielded significant speedups, including multi-level blocking, copy optimizations, and loop adjustments. However, we also tried a number of other optimizations that did not improve our program, including prefetching. In this report, we describe each of the optimizations in our final submission, present results with evidence that they work, and describe attempted optimizations that did not noticeably improve our overall performance. Unless otherwise noted, all of our benchmarks operate on matrices of size 1024x1024. In our attempts at optimizing matrix multiplication, we find that taking full advantage of spatial locality (through techniques like loop rearrangement and copy optimizations) and instruction-level parallelism greatly improves performance. Furthermore, techniques like prefetching can be a double-edged sword: prefetching can help performance when used correctly but can also slow things down. Finally, we find that the compiler applies optimizations out of the box, such as loop unrolling, that help performance under the hood.
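The blocking and loop-ordering ideas above can be sketched as follows. This is a minimal illustration, not the submitted implementation: the tile size BLOCK, the function name dgemm_blocked, and the column-major layout are assumptions chosen for clarity, and the actual submission layers multi-level blocking and copy optimizations on top of this basic scheme.

```c
#include <stddef.h>

/* Hypothetical tile size; in practice it would be tuned so that one
 * BLOCK x BLOCK tile of each operand fits in a level of cache. */
#define BLOCK 64

/* One-level blocked C = C + A * B for n x n column-major matrices.
 * The j-k-i ordering of the innermost loops walks each column of A and C
 * contiguously in memory, exploiting spatial locality. */
static void dgemm_blocked(int n, const double *A, const double *B, double *C)
{
    for (int jj = 0; jj < n; jj += BLOCK)
        for (int kk = 0; kk < n; kk += BLOCK)
            for (int ii = 0; ii < n; ii += BLOCK)
                /* Multiply one tile; the min-style bounds handle matrix
                 * sizes that are not multiples of BLOCK. */
                for (int j = jj; j < jj + BLOCK && j < n; j++)
                    for (int k = kk; k < kk + BLOCK && k < n; k++) {
                        double bkj = B[k + (size_t)j * n];
                        for (int i = ii; i < ii + BLOCK && i < n; i++)
                            C[i + (size_t)j * n] += A[i + (size_t)k * n] * bkj;
                    }
}
```

Reordering the tile loops (for example, iterating ii innermost versus outermost) and hoisting B[k + j*n] out of the inner loop are the kinds of "loop adjustments" the report refers to; the correct result is the same for any ordering, only the memory access pattern changes.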
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.