HIPs, or Human Interactive Proofs, are challenges meant to be easily solved by humans while remaining too hard to be economically solved by computers. HIPs are increasingly used to protect services against automated script attacks. To be effective, a HIP must be difficult enough to discourage script attacks by raising the computational and/or development cost of breaking it to an unprofitable level. At the same time, it must be easy enough that humans are not discouraged from using the service. Early HIP designs successfully met these criteria [1]. However, the growing sophistication of attackers and the correspondingly increasing profit incentives have rendered most currently deployed HIPs vulnerable to attack [2,7,12]. Yet most companies have been reluctant to increase the difficulty of their HIPs for fear of making them too complex or unappealing to humans. The purpose of this study is to identify the visual distortions that are most effective at foiling computer attacks without hindering humans. The contribution of this research is the discovery that 1) automatically generating HIPs by varying particular distortion parameters yields HIPs that are too easy for computer attackers to break, yet still difficult for humans to recognize, and 2) it is possible to build segmentation-based HIPs that are extremely difficult and expensive for computers to solve while remaining relatively easy for humans.
ACM Classification
H.5.2 [Information Interfaces and Presentation (HCI)]: User Interfaces - Graphical user interfaces (GUI).
Abstract - Traditional investigations with evolutionary programming (EP) for continuous parameter optimization problems have used a single mutation operator with a parameterized probability density function (pdf), typically a Gaussian. Using a variety of mutation operators that can be combined during evolution to generate pdfs of varying shapes could hold the potential for producing better solutions with less computational effort. In view of this, a linear combination of Gaussian and Cauchy mutations is proposed. Simulations indicate that both the adaptive and nonadaptive versions of this operator are capable of producing solutions that are statistically as good as, or better than, those produced when using Gaussian or Cauchy mutations alone.
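The combined operator described above can be sketched as a per-parameter perturbation that mixes one Gaussian and one Cauchy sample. The function name, the step size `sigma`, and the mixing weight `alpha` below are illustrative assumptions, not the paper's notation; a minimal sketch:

```python
import math
import random

def mixed_mutation(x, sigma=0.1, alpha=0.5):
    """Mutate a parameter vector with a linear combination of Gaussian
    and Cauchy perturbations. alpha weights the Gaussian component;
    (1 - alpha) weights the Cauchy component. Names are illustrative."""
    mutated = []
    for xi in x:
        g = random.gauss(0.0, 1.0)                        # standard Gaussian sample
        c = math.tan(math.pi * (random.random() - 0.5))   # standard Cauchy sample
        mutated.append(xi + sigma * (alpha * g + (1.0 - alpha) * c))
    return mutated
```

Setting `alpha` to 1 or 0 recovers pure Gaussian or pure Cauchy mutation, respectively; an adaptive version would evolve `alpha` (and `sigma`) alongside the object parameters.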
Intelligence pertains to the ability to make appropriate decisions in light of specific goals and to adapt behavior to meet those goals in a range of environments. Mathematical games provide a framework for studying intelligent behavior in models of real-world settings or restricted domains. The behavior of alternative strategies in these games is defined by each individual's stimulus-response mapping. Limiting these behaviors to linear functions of the environmental conditions renders the results little more than a façade: effective decision making in any complex environment almost always requires nonlinear stimulus-response mappings. The obstacle then lies in choosing the appropriate representation and learning algorithm. Neural networks and evolutionary algorithms provide useful means for addressing these issues. This paper describes efforts to hybridize neural and evolutionary computation to learn appropriate strategies in zero- and nonzero-sum games, including the iterated prisoner's dilemma, tic-tac-toe, and checkers. With respect to checkers, the evolutionary algorithm was able to discover a neural network that can be used to play at a near-expert level without injecting expert knowledge about how to play the game. The implications of evolutionary learning with respect to machine intelligence are also discussed. It is argued that evolution provides the framework for explaining naturally occurring intelligent entities and can be used to design machines that are also capable of intelligent behavior.
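A nonlinear stimulus-response mapping of the kind the abstract calls for can be sketched as a minimal feedforward network with a tanh hidden layer; the weight layout and function name are illustrative assumptions, not the paper's architecture:

```python
import math

def forward(inputs, w_hidden, w_out):
    """Minimal feedforward net: a nonlinear stimulus-response mapping from
    environmental conditions (inputs) to a scalar response in (-1, 1).
    w_hidden is a list of per-hidden-unit weight rows; w_out weights the
    hidden activations. Bias terms are omitted for brevity."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return math.tanh(sum(w * h for w, h in zip(w_out, hidden)))
```

In the hybrid approach described, the weights of such a mapping are not trained by gradient descent but evolved based on game outcomes.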
An experiment was conducted where neural networks compete for survival in an evolving population based on their ability to play checkers. More specifically, multilayer feedforward neural networks were used to evaluate alternative board positions and games were played using a minimax search strategy. At each generation, the extant neural networks were paired in competitions and selection was used to eliminate those that performed poorly relative to other networks. Offspring neural networks were created from the survivors using random variation of all weights and bias terms. After a series of 250 generations, the best-evolved neural network was played against human opponents in a series of 90 games on an internet website. The neural network was able to defeat two expert-level players and played to a draw against a master. The final rating of the neural network placed it in the "Class A" category using a standard rating system. Of particular importance in the design of the experiment was the fact that no features beyond the piece differential were given to the neural networks as a priori knowledge. The process of evolution was able to extract all of the additional information required to play at this level of competency. It accomplished this based almost solely on the feedback offered in the final aggregated outcome of each game played (i.e., win, lose, or draw). This procedure stands in marked contrast to the typical artifice of explicitly injecting expert knowledge into a game-playing program.
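The generational loop described above can be sketched as follows, treating each network as a flat vector of weights and bias terms. The fitness interface (a simple callable here, standing in for pairwise tournaments over minimax-searched games) and all parameter names are simplifying assumptions:

```python
import random

def mutate(weights, sigma=0.05):
    # Offspring vary all weights and bias terms with Gaussian noise.
    return [w + random.gauss(0.0, sigma) for w in weights]

def evolve(population, fitness, generations=250, sigma=0.05):
    """Each generation, candidates are scored by competition (abstracted
    here as `fitness`), the worse-performing half is eliminated, and the
    survivors produce mutated offspring. Illustrative sketch only."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: len(population) // 2]
        population = survivors + [mutate(w, sigma) for w in survivors]
    return max(population, key=fitness)
```

Because survivors carry over unchanged, the best score in the population never decreases; the selective pressure comes entirely from the aggregated game outcomes, matching the abstract's point that no expert features beyond the piece differential were supplied.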