To investigate a possible connection between language type frequency and learnability, we systematically compared how neural network models learn all possible combinations of three linguistic strategies for encoding grammatical relations: word order, nominal case marking, and verbal agreement (cross-reference) affixes. We also varied other linguistic dimensions: the choice between accusative and ergative case-marking systems, the consistency of genitive marking, and the complexity of the grammars used to generate the artificial languages. The results of our simulations mesh well with some of the typological tendencies observed among the languages of the world: e.g., Subject-before-Object languages are more frequent than their Object-before-Subject counterparts; ergative languages are less common than accusative languages; and SOV languages almost always have a nominal case-marking system. In general, the networks were able to learn the attested language types, but typically had severe problems learning the unattested types. However, learnability differences alone do not explain why some language types are more frequent than others.
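The following is a minimal sketch of the combinatorial design described above: it enumerates the cross-product of word order, case system, and agreement, and generates one toy transitive clause per language type. The lexicon, the affixes (`-ak`, `-er`), and the helper `make_sentence` are invented stand-ins for illustration; they are not the stimuli, grammars, or network architecture used in the actual study.

```python
# Illustrative sketch of the combinatorial language-type design.
# All lexical items and affixes are hypothetical placeholders.
import itertools
import random

NOUNS = ["pilu", "kadu", "moti", "seba"]           # toy noun stems
VERBS = ["ranta", "gomi"]                          # toy verb stems
WORD_ORDERS = ["SOV", "SVO", "VSO", "VOS", "OVS", "OSV"]
CASE_SYSTEMS = ["none", "accusative", "ergative"]  # alignment of case marking
AGREEMENT = [False, True]                          # does the verb cross-reference the subject?

def make_sentence(order, case, agree, rng):
    """Generate one transitive clause for a given language type."""
    subj, obj = rng.sample(NOUNS, 2)
    verb = rng.choice(VERBS)
    # Hypothetical affixes: accusative marks the object, ergative marks
    # the transitive subject, agreement copies the subject's first syllable.
    if case == "accusative":
        obj += "-ak"
    elif case == "ergative":
        subj += "-er"
    if agree:
        verb += "-" + subj[:2]
    slots = {"S": subj, "O": obj, "V": verb}
    return " ".join(slots[role] for role in order)

rng = random.Random(0)
# 6 word orders x 3 case systems x 2 agreement settings = 36 language types.
for order, case, agree in itertools.product(WORD_ORDERS, CASE_SYSTEMS, AGREEMENT):
    label = f"{order:3s} case={case:10s} agreement={agree}"
    print(label, "->", make_sentence(order, case, agree, rng))
```

In the simulations proper, each such combination defined a separate artificial language from which a training corpus was generated, and a network was trained per language type so that learning performance could be compared across the full design.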