This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state. We focus on the impact of adversarial agents on the performance of consensus-based non-Bayesian learning, in which non-faulty agents combine local learning updates with consensus primitives. In particular, we consider the scenario where an unknown subset of agents suffer Byzantine faults; agents suffering Byzantine faults behave arbitrarily. We propose two learning rules.
- In our first update rule, each agent updates its local beliefs as (up to normalization) the product of (1) the likelihood of the cumulative private signals and (2) the weighted geometric average of the beliefs of its incoming neighbors and itself. Under reasonable assumptions on the underlying network structure and the global identifiability of the network, we show that all the non-faulty agents asymptotically agree on the true state almost surely. For the case when every agent is failure-free, we show that (with high probability) each agent's beliefs on the wrong hypotheses decrease at rate O(exp(−Ct²)), where t is the number of iterations and C is a constant.
- In general, when agents may be adversarial, the network identifiability condition required by the above learning rule scales poorly in the number of candidate states m. In addition, the computational complexity per agent per iteration of this learning rule is forbiddingly high. Thus, we propose a modification of our first learning rule, whose complexity per iteration per agent is O(m²n log n), where n is the number of agents in the network. We show that this modified learning rule works under a much weaker network identifiability condition. In addition, this new condition is independent of m.
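The first update rule described above can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions (function and variable names are my own, not the paper's): each agent forms the weighted geometric average of its incoming neighbors' beliefs and its own, multiplies by the likelihood of its cumulative private signals, and normalizes. Working in log space keeps the product numerically stable.

```python
import numpy as np

def update_beliefs(neighbor_beliefs, weights, cumulative_log_likelihood):
    """One step of a geometric-averaging belief update (illustrative sketch).

    neighbor_beliefs: (k, m) array, row j is the belief vector of the j-th
        incoming neighbor (the agent's own belief is included as one row).
    weights: (k,) array of nonnegative consensus weights summing to 1.
    cumulative_log_likelihood: (m,) array, log-likelihood of the agent's
        cumulative private signals under each of the m hypotheses.
    Returns the normalized updated belief vector over the m hypotheses.
    """
    # Weighted geometric average in log space: sum_j w_j * log b_j(theta)
    log_geo_avg = weights @ np.log(neighbor_beliefs)
    # Multiply by the likelihood of the cumulative signals (add in log space)
    log_unnorm = log_geo_avg + cumulative_log_likelihood
    # Normalize, subtracting the max first for numerical stability
    log_unnorm -= log_unnorm.max()
    belief = np.exp(log_unnorm)
    return belief / belief.sum()

# Example: two hypotheses, two incoming belief vectors (both uniform),
# equal weights; the update is then driven entirely by the likelihood term.
beliefs = np.array([[0.5, 0.5],
                    [0.5, 0.5]])
w = np.array([0.5, 0.5])
log_lik = np.log(np.array([0.9, 0.1]))
updated = update_beliefs(beliefs, w, log_lik)
```

With uniform incoming beliefs the geometric average is itself uniform, so the updated belief simply tracks the (normalized) likelihood, here [0.9, 0.1]. Note this sketch omits the Byzantine-resilient filtering that the paper's adversarial setting requires.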