In certain approaches to quantum computing the operations between qubits are non-deterministic and likely to fail. For example, a distributed quantum processor would achieve scalability by networking together many small components; operations between components should be assumed to be failure prone. In the logical limit of this architecture each component contains only one qubit. Here we derive thresholds for fault tolerant quantum computation under such extreme paradigms. We find that computation is supported for remarkably high failure rates (exceeding 90%) provided that failures are heralded; meanwhile the rate of unknown errors should not exceed 2 in $10^4$ operations.

The field of quantum information processing (QIP) has seen many experimental successes, but the challenge of scaling from a few qubits to large scale devices remains unsolved. One can argue that the issue is so crucial that it should dictate the choice of fundamental architecture for the machine. For example, in the concept of distributed QIP a plurality of small components, each similar in complexity to systems already realised experimentally, are networked together to constitute a full scale machine. The components may be trapped atoms or ions, or solid state nanostructures such as quantum dots or NV centres [1]. Each component can be presumed to be under good control, and it is understood that the key task is then to entangle the physically remote components. An attractive method of achieving this entangling operation (EO) is to arrange for each component to emit a photon that is correlated with the internal state of the component, before performing a joint measurement (with the aid of simple linear optical elements) of the photons. A considerable number of such entanglement schemes have been advanced since the first ideas in 1999 [2,3]. An important step was the realisation that photon loss can be detected, or heralded, within such a protocol [4,5]. Generally in these remote entanglement protocols, one employs optical measurements that jointly observe two, or even four [6], components simultaneously. This principle for generating entanglement has in fact been demonstrated experimentally: first with ensemble systems [7] and subsequently with individual atoms [8].

It is understood that the remote EOs may be failure prone. However, these failures are assumed to be heralded: the experimentalist is aware when a failure occurs. The appropriate strategy for dealing with such failures depends on the level of complexity within each component. If each component incorporates multiple qubits, then we can nominate one 'logical qubit' and use the other(s) to make repeated attempts at remote entanglement; when we eventually succeed, we can transfer the entanglement to the logical qubits [9,10]. However, many physical systems may have only very limited complexity, and moreover it is always desirable to minimise the required complexity. Therefore it is interesting to consider the case of just one qubit in each component.
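To illustrate why heralded failures are so much more benign than unknown errors, the following minimal Python sketch estimates the cost of a repeat-until-success strategy for the EO described above. This is not part of the paper's analysis; the parameters `p_fail` and `n_trials` are illustrative assumptions, with `p_fail = 0.9` chosen to match the >90% heralded-failure regime quoted in the abstract.

```python
import random

# Illustrative parameters (assumptions, not values from the analysis):
p_fail = 0.9        # heralded failure probability of one EO attempt
n_trials = 100_000  # Monte Carlo samples

def attempts_until_success(p_fail: float) -> int:
    """Count EO attempts until the first heralded success.

    Because every failure is heralded, a failed attempt costs only
    time: we simply retry. The attempt count is geometrically
    distributed with mean 1 / (1 - p_fail).
    """
    attempts = 1
    while random.random() < p_fail:
        attempts += 1
    return attempts

mean_attempts = sum(attempts_until_success(p_fail)
                    for _ in range(n_trials)) / n_trials
print(f"mean attempts at p_fail={p_fail}: {mean_attempts:.2f} "
      f"(analytic: {1 / (1 - p_fail):.2f})")
```

The sketch makes the asymmetry concrete: heralded failures impose only a time overhead of roughly 1/(1 - p_fail) attempts per entangled pair (about 10 at a 90% failure rate), whereas unknown errors corrupt the computation silently and must therefore be suppressed to the far lower rates quoted above.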