Turkish Cypriot paternal lineages appear largely autochthonous, with their closest genetic connections to the neighbouring Near Eastern populations. These observations are further underscored by the fact that the haplogroups associated with the spread of the Neolithic Agricultural Revolution from the Fertile Crescent (E1b1b/J1/J2/G2a) dominate (>70%) the Turkish Cypriot haplogroup distribution.
This dissertation studies how to build a multiagent system that can learn to execute high-level strategies in complex, dynamic, and uncertain domains. We assume that agents do not explicitly communicate and operate autonomously with only their local view of the world. Designing multiagent systems for real-world applications is challenging because of the prohibitively large state and action spaces. Dynamic changes in an environment require reactive responses, and the complexity and uncertainty inherent in real settings require that individual agents remain committed to common goals despite adversity. Therefore, a balance between reaction and reasoning is necessary to accomplish goals in real-world environments. Most work in multiagent systems approaches this problem using bottom-up methodologies. However, bottom-up methodologies are severely limited because they cannot learn alternative strategies, which are essential for dealing with highly dynamic, complex, and uncertain environments where convergence to single-strategy behavior is virtually impossible to obtain. Our methodology is knowledge-based and combines top-down and bottom-up approaches to problem solving in order to take advantage of the strengths of both. We use symbolic plans that define what individual agents in a collaborative group need to do to achieve multi-step goals that extend over time but, initially, do not specify how to implement these goals in each given situation. During training, agents acquire application knowledge using case-based learning; using this training knowledge, agents then apply plans in realistic settings. During application, they use a naïve form of reinforcement learning that allows them to make increasingly better decisions about which specific implementation to select for each situation.
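The combination of case-based learning (storing candidate plan implementations per situation) and naïve reinforcement (weighting those candidates by observed outcomes) can be sketched roughly as follows. This is a hypothetical illustration, not the dissertation's actual algorithm: the class name, the weight-update rule, and the epsilon-greedy selection are all assumptions introduced for exposition.

```python
import random
from collections import defaultdict

class NaiveSelector:
    """Hypothetical sketch: pick a plan implementation for a situation,
    then nudge its weight toward the observed reward (naive reinforcement)."""

    def __init__(self, epsilon=0.1, alpha=0.2, seed=0):
        self.weights = defaultdict(dict)  # situation -> {implementation: weight}
        self.epsilon = epsilon            # exploration rate (assumed)
        self.alpha = alpha                # learning rate (assumed)
        self.rng = random.Random(seed)

    def add_case(self, situation, implementation):
        # Training phase: case-based learning stores a candidate
        # implementation for this situation with a neutral weight.
        self.weights[situation].setdefault(implementation, 0.0)

    def select(self, situation):
        # Application phase: mostly exploit the best-weighted candidate,
        # occasionally explore an alternative.
        options = self.weights[situation]
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(options))
        return max(options, key=options.get)

    def reinforce(self, situation, implementation, reward):
        # Naive update: move the weight a fraction alpha toward the reward.
        w = self.weights[situation][implementation]
        self.weights[situation][implementation] = w + self.alpha * (reward - w)
```

Repeated calls to `reinforce` after each application episode make the better-performing implementation increasingly likely to be selected, which matches the abstract's claim of "increasingly better decisions" over time.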
Experimentally, we show that, as the complexity of plans increases, the version of our system with naïve reinforcement learning performs increasingly better than both the version that retrieves and applies unreinforced training knowledge and the version that reacts to dynamic changes using search.