This paper shows how a preference aggregation problem can be solved using prioritized aggregation operators (PAOs), where a given group of candidates is aggregated based on their satisfaction of a collection of ranked places. The comparison results show that the proposed method can accurately aggregate a preference ranking system with less computation and without the need to solve any mathematical model. A metasearch application also demonstrates the usefulness of applying the PAO to preference aggregation.
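The paper's own operator definitions are not reproduced in this abstract, but the general idea behind prioritized aggregation can be sketched as follows. This is a minimal illustration in the style of Yager's prioritized averaging, assuming criteria (here, ranked places) are listed from highest to lowest priority and each satisfaction degree lies in [0, 1]; the weight of each criterion depends on how well the higher-priority criteria are satisfied, so priority is enforced without solving an optimization model.

```python
def prioritized_average(satisfactions):
    """Aggregate satisfaction degrees ordered from highest to lowest priority."""
    # Importance of each criterion: T_1 = 1; T_i = T_{i-1} * s_{i-1} for i > 1,
    # so a poorly satisfied high-priority criterion dampens everything below it.
    importances = []
    t = 1.0
    for s in satisfactions:
        importances.append(t)
        t *= s
    total = sum(importances)
    # Normalize the importances into prioritized weights and average.
    weights = [imp / total for imp in importances]
    return sum(w * s for w, s in zip(weights, satisfactions))

# Illustrative example: two candidates scored on three ranked places
# (highest-priority place first); the candidate strong on the top place wins.
a = prioritized_average([0.9, 0.6, 0.8])
b = prioritized_average([0.5, 0.9, 0.9])
print(a > b)  # prints True
```

Note that the operator is idempotent: if every place is satisfied to the same degree s, the aggregate is s, which is the behavior expected of an averaging operator.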
Purpose - The purpose of this paper is to introduce two new automatic methods for evaluating the performance of search engines. The reported study uses the methods to experimentally investigate which of three popular search engines (Ask.com, Bing and Google) gives the best performance.
Design/methodology/approach - The study assesses the performance of three search engines. For each one, the weighted average of the similarity degrees between its ranked result list and those of its metasearch engines is measured. These measures are then compared to establish which search engine gives the best performance. To compute the similarity degree between the lists, two measures called the "tendency degree" and the "coverage degree" are introduced; the former assesses a search engine in terms of results presentation and the latter evaluates it in terms of retrieval effectiveness. The performance of the search engines is experimentally assessed on the 50 topics of the 2002 TREC web track. The effectiveness of the methods is also compared with human-based ones.
Findings - Google outperformed the others, followed by Bing and Ask.com. Moreover, significant degrees of consistency - 92.87 percent and 91.93 percent - were found between the automatic and human-based approaches.
Practical implications - The findings of this work could help users select a truly effective search engine. The results also give vendors of web search engines motivation to improve their technology.
Originality/value - The paper presents two novel automatic methods for evaluating the performance of search engines and provides valuable experimental results on three popular ones.
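The evaluation pipeline described above can be sketched in outline. The paper's "tendency degree" and "coverage degree" have their own definitions, so the sketch below substitutes a simple illustrative coverage measure (the fraction of one top-k list that also appears in the other) and combines per-query similarities with a weighted average; the function names and data are assumptions for illustration only.

```python
def coverage(list_a, list_b):
    """Fraction of results in list_a that also appear in list_b
    (a stand-in for the paper's coverage degree)."""
    if not list_a:
        return 0.0
    return len(set(list_a) & set(list_b)) / len(list_a)

def weighted_average_similarity(pairs, weights):
    """Weighted average of per-query similarity degrees between an
    engine's result lists and a metasearch engine's result lists."""
    sims = [coverage(a, b) for a, b in pairs]
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)

# Two hypothetical queries: the engine's top results vs. the metasearch
# engine's results, equally weighted.
pairs = [(["u1", "u2", "u3"], ["u2", "u3", "u4"]),
         (["u5", "u6"], ["u6", "u5"])]
print(weighted_average_similarity(pairs, [1.0, 1.0]))  # prints 0.8333...
```

An engine whose ranked lists consistently overlap more with the metasearch lists would receive a higher weighted-average score, which is the comparison criterion the study uses to rank the three engines.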
Introduction
The amount of information on the web is enormous, and finding relevant information among an overwhelming amount of irrelevant information is a major concern in the electronic environment. The web search engine is an effective and efficient web information search tool developed to help users solve this problem. As the volume of information on the internet increases dramatically, the web search engine has become an indispensable tool for searching and locating the required information. However, it is virtually impossible for any single search engine to index the entire web.
Prior to every academic semester, each department's administrator is required to offer the best overall set of courses to meet student requirements, instructor needs and department regulations. The key contributions of this research are, firstly, determining the potential factors that influence student behavior in choosing online courses; secondly, modeling the course offering problem and fitting a function to a training set of data using a neural network approach; thirdly, designing and implementing a decision support system to help the department's administrator simulate student behavior in the course selection process and support his or her decisions on the courses to be offered; and lastly, employing the proposed decision support system to perform what-if analysis and goal seeking. The experimental samples came from 298 online graduate courses in 14 academic terms from 2005 to 2011. The results revealed high prediction accuracy on the experimental data. The performance of the introduced decision support system was also compared with three well-known regression techniques - support vector regression, k-nearest neighbors and decision trees - and a traditional approach. The findings showed that the proposed decision support system significantly outperformed the others.
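To make the comparison concrete, here is a minimal pure-Python sketch of one of the baselines mentioned above, k-nearest neighbors regression. The paper's actual models, features and enrollment data are not reproduced here; the feature vectors and targets below are hypothetical placeholders.

```python
def knn_predict(train_x, train_y, query, k=3):
    """Predict a value for `query` as the mean target of its k nearest
    training points (squared Euclidean distance on numeric features)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y)
    )
    nearest = [y for _, y in dists[:k]]
    return sum(nearest) / len(nearest)

# Hypothetical data: (term index, seats offered) -> observed enrollment
train_x = [(1, 30), (2, 30), (3, 40), (4, 40)]
train_y = [25.0, 27.0, 35.0, 36.0]
print(knn_predict(train_x, train_y, (3, 40), k=2))  # prints 35.5
```

Each baseline would be fit to the same training set and scored on held-out terms, so the prediction accuracies of the neural-network-based system and the baselines can be compared directly.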