Motivated by crowdsourced computation, peer grading, and recommendation systems, Braverman, Mao and Weinberg [STOC'16] studied the query and round complexity of fundamental problems such as finding the maximum (max), finding all elements above a certain value (threshold-v), and computing the top k elements (Top-k) in a noisy environment. For example, consider the task of selecting papers for a conference. This task is challenging due to the crowdsourced nature of peer review: the results of reviews are noisy, and it is necessary to parallelize the review process as much as possible. We study the noisy value model and the noisy comparison model. In the noisy value model, a reviewer is asked to evaluate a single element: "What is the value of paper i?" (e.g., accept). In the noisy comparison model (introduced in the seminal work of Feige, Peleg, Raghavan and Upfal [SICOMP'94]), a reviewer is asked to perform a pairwise comparison: "Is paper i better than paper j?"
In this paper, we show optimal worst-case query complexity for the max, threshold-v, and Top-k problems. For max and Top-k, we obtain optimal worst-case upper and lower bounds on the round vs. query complexity in both models. For threshold-v, we obtain optimal query complexity and nearly-optimal round complexity (i.e., optimal up to a factor O(log log k), where k is the size of the output) in both models.
We then go beyond the worst case and address the question of the importance of knowledge of the instance by providing, for a large range of parameters, instance-optimal algorithms with respect to the query complexity. We complement these results by showing that for some families of instances, no instance-optimal algorithm can exist.
Furthermore, we show that the value and comparison models are asymptotically equivalent for most practical settings (for all of the problems mentioned above); on the other hand, in the special case where the papers are totally ordered, we show that the value model is strictly easier than the comparison model.
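To make the two query models concrete, the following is a minimal simulation sketch. The specific noise assumption used here (each individual query is answered correctly with probability 2/3, independently) is a common convention in this line of work, not something stated above; the boosting-by-repetition and the linear-scan max are simple illustrative baselines, not the optimal algorithms of the paper.

```python
import random

ERR = 1 / 3  # assumed probability that any single query is answered incorrectly


def noisy_value(labels, i, rng):
    """Noisy value model: 'What is the value of paper i?' (e.g. accept/reject).
    Returns the true Boolean label, flipped with probability ERR."""
    truth = labels[i]
    return truth if rng.random() > ERR else not truth


def noisy_compare(scores, i, j, rng):
    """Noisy comparison model: 'Is paper i better than paper j?'
    Returns the true comparison outcome, flipped with probability ERR."""
    truth = scores[i] > scores[j]
    return truth if rng.random() > ERR else not truth


def robust_compare(scores, i, j, rng, reps=61):
    """Boost a noisy comparison by majority vote over `reps` repetitions;
    by a Chernoff bound, the error probability drops exponentially in reps."""
    votes = sum(noisy_compare(scores, i, j, rng) for _ in range(reps))
    return votes > reps / 2


def noisy_max(scores, rng):
    """Find the maximum via a linear scan of boosted comparisons --
    a simple baseline, not the round/query-optimal algorithm."""
    best = 0
    for cand in range(1, len(scores)):
        if robust_compare(scores, cand, best, rng):
            best = cand
    return best
```

For instance, `noisy_max([3, 9, 1, 7], random.Random(0))` finds the index of the best paper with high probability; driving the per-comparison error down via repetition is exactly the kind of query overhead that the worst-case and instance-optimal bounds above quantify.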