This work investigates a class of models of lexical semantics derived from the hyperspace analog to language (HAL; Burgess, 1998; Burgess & Lund, 2000), a computational model of word meaning that derives semantic relationships from lexical co-occurrence. Although the original HAL model was well specified, it contains several parameters whose values were set without formal or empirical justification. We have created a novel, freely available implementation of the HAL model, called High Dimensional Explorer (HiDEx), that allows users to systematically vary those parameters, creating a class of models that are algorithmically identical but parameterized differently. Given the absence of any a priori formal justification for parameter values in HAL, we have elected to assess different parameter settings by how well they perform at predicting human behavioral data on two lexical access tasks: lexical decision and semantic decision. In this article we explain how HiDEx works and how we were able to use it to explore HAL's parameter space.

We begin with a brief overview of the HAL class of models and some related models. HAL uses word co-occurrence to build a vector space that contains contextual information for every word in a specified dictionary. A vector space is a geometric representation of data in which an ordered set of N numbers is associated with each point in an N-dimensional space. Each such set of numbers defines the point's location in the space and is called its vector. HAL space is made up of vectors with one dimension for each word in the language. In the original HAL work, these word vectors had more than 100,000 dimensions.

Each element of a word's vector is a weighted count of the number of times another word co-occurs with that word in a corpus of text. Words can co-occur when they are adjacent or when they are separated by a small number of intervening words. The maximum distance between words considered to co-occur is called the window size. Window size is one of the free parameters in the HAL model. In the original model, words were considered to have co-occurred if they occurred within 10 words of each other, in either direction.

Words in another word's co-occurrence window are weighted according to their proximity to that word, using a weighting function. The original HAL model used a linear weighting function, called a linear ramp, as a multiplier that gives more weight to words that co-occur closer to the center of the window. The words immediately adjacent to the center word of the window were assigned 10 co-occurrence points, their outside neighbors were assigned 9 co-occurrence points, and so on, down to a single point for a word that occurred 10 words away from the center word. This weighting function is another free parameter in HAL that has no a priori justification and can be changed in HiDEx.

Lexical memories in the HAL model are built by having the model read the words in a text one window at a time and then slide the window forward one word. This process of counting local co-occurrences is repeated until the entire corpus has been read.
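To make the counting scheme concrete, the following is a minimal sketch in Python of the sliding-window, ramp-weighted co-occurrence counting just described. It is illustrative only: the function and variable names are our own and are not taken from HiDEx, and for brevity the sketch counts only the words that precede each center word, whereas the full model considers co-occurrence in either direction.

```python
from collections import defaultdict

def ramp_weight(distance, window_size):
    """Linear ramp: window_size points for immediately adjacent words,
    decreasing to 1 point at the edge of the window (10, 9, ..., 1
    when window_size is 10, as in the original HAL model)."""
    return window_size - distance + 1

def build_hal_counts(tokens, window_size=10):
    """Slide a window through `tokens` one word at a time, adding a
    ramp-weighted count for every word within `window_size` positions
    before the current (center) word."""
    counts = defaultdict(lambda: defaultdict(int))
    for i, center in enumerate(tokens):
        for distance in range(1, window_size + 1):
            j = i - distance
            if j < 0:
                break  # ran off the start of the text
            counts[center][tokens[j]] += ramp_weight(distance, window_size)
    return counts

# Toy demonstration on a 12-word "corpus" with a window size of 3.
corpus = "the cat sat on the mat the dog sat on the rug".split()
space = build_hal_counts(corpus, window_size=3)
print(dict(space["sat"]))  # {'cat': 3, 'the': 4, 'dog': 3, 'mat': 1}
```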
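In a space built this way, a word's vector is its row of weighted counts, with one (mostly zero) entry for every other word in the vocabulary; the sparse dictionary-of-dictionaries in the sketch simply stands in for the very high-dimensional vectors described above. Raising window_size widens the context that contributes to each vector, which is exactly the kind of parameter variation HiDEx is designed to make explorable.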