Procedurally defined implicit functions, such as CSG trees and recent neural shape representations, offer compelling benefits for modeling scenes and objects, including infinite resolution, differentiability, and trivial deformation, all at a low memory footprint. The common approach to fitting such models to measurements is to solve an optimization problem involving the function evaluated at points in space. However, the computational cost of evaluating the function makes it challenging to use visibility information from range sensors and 3D reconstruction systems. We propose a method that uses visibility information while requiring a number of function evaluations per iteration that is proportional to the scene area. Our method builds on recent results for bounded Euclidean distance functions, introducing a coarse-to-fine mechanism that avoids the requirement for correct bounds. This makes our method applicable to a greater variety of implicit modeling techniques, for which deriving the Euclidean distance function or appropriate bounds is difficult.
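To illustrate the common fitting approach the abstract refers to (not the proposed method itself), here is a minimal sketch of optimizing an implicit function's parameters so that it evaluates to zero at measured surface points. The implicit function, its parameters (center `c`, radius `r`), and the helper names are hypothetical, chosen for simplicity: a sphere signed distance function fit by gradient descent on the squared residuals.

```python
# Minimal sketch (assumption, not the paper's method): fit the parameters of a
# simple implicit function -- a sphere SDF with center c and radius r -- so that
# it evaluates to ~0 at measured surface points.
import numpy as np

def sphere_sdf(points, c, r):
    # Signed distance from each point to the sphere surface.
    return np.linalg.norm(points - c, axis=1) - r

def fit_sphere(points, steps=500, lr=0.05):
    c = points.mean(axis=0)  # initial center guess
    r = 0.5                  # initial radius guess
    for _ in range(steps):
        d = np.linalg.norm(points - c, axis=1)
        res = d - r  # SDF residual at each measured surface point
        # Analytic gradients of 0.5 * sum(res**2) w.r.t. c and r.
        dirs = (c - points) / d[:, None]
        grad_c = (res[:, None] * dirs).sum(axis=0)
        grad_r = -res.sum()
        c = c - lr * grad_c / len(points)
        r = r - lr * grad_r / len(points)
    return c, r

# Fit to noiseless samples drawn from a unit sphere centered at the origin.
rng = np.random.default_rng(0)
v = rng.normal(size=(200, 3))
pts = v / np.linalg.norm(v, axis=1, keepdims=True)
c, r = fit_sphere(pts)
```

Note that every iteration evaluates the function (and its gradient) at all sample points; for expensive procedural or neural implicit functions, this per-iteration cost is exactly the bottleneck the abstract highlights.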