Streamflow time series often contain gaps of varying length and location. However, the influence of these gaps on trend detection is poorly understood and cannot be estimated a priori in trend detection studies. We simulated the effects of varying gap size (1, 2, 5, and 10 years) and gap location (one quarter, one third, and halfway into the record) on the detection rate of significant monotonic trends in annual maxima and peaks-over-threshold series, using the most commonly used trend tests, for time series of varying length (15 to 150 years) and trend magnitude (slope β1). Results show that, relative to the complete time series, the loss in trend detection rate tends to grow with (1) increasing gap size, (2) increasing gap distance from the middle of the time series, (3) decreasing β1 slope, and (4) decreasing time series length. Based on these findings, we provide objective recommendations and cautionary remarks on the maximum gap allowable in trend detection for extreme streamflow time series.
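The experimental design described above can be sketched as a small Monte Carlo experiment: generate annual-maxima-like series with an imposed linear trend, remove a block of years to mimic a gap, apply a monotonic trend test, and record how often the trend is detected. This is a minimal illustration only; it assumes the Mann-Kendall test (implemented here via Kendall's tau between time and values), Gumbel-distributed noise, and hypothetical parameter names (`gap_start`, `gap_len`, `n_sim`) that are not taken from the paper's actual simulation setup.

```python
import numpy as np
from scipy.stats import kendalltau


def mann_kendall_p(t, x):
    """p-value of a monotonic trend, via Kendall's tau between time and values."""
    tau, p = kendalltau(t, x)
    return p


def detection_rate(n_years=50, slope=0.5, gap_start=None, gap_len=0,
                   n_sim=500, alpha=0.05, seed=0):
    """Fraction of simulated series in which a significant trend is detected.

    A contiguous gap of `gap_len` years starting at index `gap_start` is
    removed before testing (illustrative design; the paper's may differ).
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_years)
    hits = 0
    for _ in range(n_sim):
        # Linear trend (slope per year) plus Gumbel noise, a common
        # assumption for annual flood maxima.
        x = slope * t + rng.gumbel(loc=100.0, scale=20.0, size=n_years)
        keep = np.ones(n_years, dtype=bool)
        if gap_len:
            keep[gap_start:gap_start + gap_len] = False
        if mann_kendall_p(t[keep], x[keep]) < alpha:
            hits += 1
    return hits / n_sim


full = detection_rate()                              # complete record
gapped = detection_rate(gap_start=5, gap_len=10)     # 10-year gap near the start
print(f"detection rate, complete: {full:.2f}; with gap: {gapped:.2f}")
```

Comparing `full` and `gapped` across gap sizes, gap positions, slopes, and record lengths reproduces the kind of detection-rate loss curves the study evaluates.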