The goal of this paper is to further develop an approach to inverse problems with imperfect forward operators that is based on partially ordered spaces. Studying the dual problem yields useful insights into the convergence of the regularised solutions and allows us to obtain convergence rates in terms of Bregman distances, as usual in inverse problems, under an additional assumption on the exact solution called the source condition. These results are obtained for general absolutely one-homogeneous functionals. In the special case of TV-based regularisation we also study the structure of regularised solutions and prove convergence of their level sets to those of an exact solution. Finally, using the developed theory, we adapt the concept of debiasing to inverse problems with imperfect operators and propose an approach to pointwise error estimation in TV-based regularisation.

Keywords: inverse problems, imperfect forward models, total variation, extended support, Bregman distances, convergence rates, error estimation, debiasing

1. Introduction

We consider the inverse problem

    $Au = f$,    (1.1)

where $A \colon L^1(\Omega) \to L^\infty(\Omega)$ is a linear operator and $\Omega \subset \mathbb{R}^m$ is a bounded domain. We assume that there exists a non-negative solution of (1.1). For an appropriate functional $J(\cdot) \colon L^1 \to \mathbb{R}_+ \cup \{\infty\}$ we consider non-negative $J$-minimising solutions, i.e. solutions of the following problem:

    $\min_u J(u)$  s.t.  $Au = f$, $u \geq 0$.    (1.2)

We assume that the feasible set in (1.2) contains at least one point with a finite value of $J$ and denote a (possibly non-unique) solution of (1.2) by $\bar u_J$. Throughout this paper it is assumed that the regularisation functional $J(\cdot)$ is convex, proper and absolutely one-homogeneous.

In practice the data $f$ are not known precisely and only a perturbed version $\tilde f$ is available. In this case we cannot simply replace the constraint $Au = f$ in (1.2) with $Au = \tilde f$, since the solutions of the original problem (1.1) would no longer be feasible. Therefore, we need to relax the equality in (1.2) to guarantee the feasibility of solutions of the original problem (1.1).
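Before turning to this relaxation, it may help to see a concrete instance of an absolutely one-homogeneous functional. The following sketch (ours, not from the paper) uses the discrete anisotropic total variation of a 1-D signal and checks numerically that $J(cu) = |c|\,J(u)$ for a scalar $c$:

```python
import numpy as np

def tv(u):
    # Discrete (anisotropic) total variation of a 1-D signal:
    # J(u) = sum_i |u_{i+1} - u_i|
    return np.abs(np.diff(u)).sum()

u = np.array([0.0, 1.0, 3.0, 2.0])
# Absolute one-homogeneity: J(c u) = |c| J(u) for any scalar c
print(tv(u))            # J(u)
print(tv(-2.5 * u))     # |c| J(u) with c = -2.5
```

Note that absolute one-homogeneity (as opposed to plain one-homogeneity) requires invariance under sign changes of the argument, which is why a negative scalar is used in the check. We now return to how the equality constraint in (1.2) is relaxed.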
This is the idea of the residual method [20, 23]. If the error in the data is bounded by some known constant $\delta$, the residual method amounts to solving the following constrained problem:

    $\min_u J(u)$  s.t.  $\|Au - \tilde f\| \leq \delta$, $u \geq 0$.    (1.3)

The fidelity function becomes in this case the characteristic function of the convex set $\{u \colon \|Au - \tilde f\| \leq \delta\}$. In the linear case, the residual method is equivalent to Tikhonov regularisation

    $\min_{u \in L^1} \tfrac{1}{2}\|Au - \tilde f\|^2 + \alpha J(u)$

with the regularisation parameter $\alpha = \alpha(\tilde f, \delta)$ chosen according to Morozov's discrepancy principle [23].

In many practical situations not only do the data contain errors, but the forward operator that generated the data is also not perfectly known. In order to guarantee the feasibility of solutions of the original problem (1.1) in the constrained problem (1.3), one needs to account for the errors in the operator in the feasible set. If the errors in the operator are bounded by a known constant $h$ (in the operator norm), the feasible set can be amended as follows in order to guarantee feasibility of the solutions of the original problem (1.1):

    $\{u \geq 0 \colon \|\tilde A u - \tilde f\| \leq \delta + h\|u\|\}$,

where $\tilde A$ is the noisy operator. This optimi...
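Morozov's discrepancy principle mentioned above can be illustrated numerically. The following sketch is ours, not from the paper: for simplicity it replaces the one-homogeneous functional $J$ by a quadratic penalty $\|u\|^2$, so the Tikhonov minimiser has a closed form, and it bisects on $\alpha$ until the residual $\|Au_\alpha - \tilde f\|$ matches the noise level $\delta$ (all problem sizes and the noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 15
A = rng.standard_normal((m, n))          # toy linear forward operator
u_true = np.abs(rng.standard_normal(n))  # non-negative exact solution
noise = rng.standard_normal(m)
noise *= 0.1 / np.linalg.norm(noise)     # scale noise to norm exactly 0.1
f_noisy = A @ u_true + noise
delta = 0.1                              # known noise level ||f - f_noisy||

def tikhonov(alpha):
    # Closed-form minimiser of (1/2)||A u - f_noisy||^2 + (alpha/2)||u||^2
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f_noisy)

def residual(alpha):
    return np.linalg.norm(A @ tikhonov(alpha) - f_noisy)

# The residual is monotonically increasing in alpha, so we can bisect
# on a log scale until residual(alpha) == delta (discrepancy principle).
lo, hi = 1e-8, 1e4
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if residual(mid) < delta:
        lo = mid
    else:
        hi = mid
alpha_star = np.sqrt(lo * hi)
print(alpha_star, residual(alpha_star))
```

The resulting $u_{\alpha^\ast}$ satisfies the discrepancy constraint of the residual method with (near) equality, which is the sense in which the two formulations are equivalent in the linear case.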