This paper presents a framework for incremental multiagent learning in structured networks, that is, systems in which the communication links between agents are constrained. Learning examples are incrementally distributed among the agents, and the agents must build a common hypothesis that is consistent with all the examples present in the system. We recently proposed layered mechanisms that enable agents to coordinate their hypotheses at different levels and that have been shown to theoretically guarantee global consistency, but: (i) this is only one aspect of their effectiveness; and (ii) such a guarantee would be of little practical interest were these mechanisms to incur a great loss of efficiency (for instance, a prohibitive communication cost). We explore these questions theoretically and experimentally, using several Boolean formula learning problems.