This paper considers the problem of distributed bandit online convex optimization with time-varying coupled inequality constraints. This problem can be defined as a repeated game between a group of learners and an adversary. The learners attempt to minimize a sequence of global loss functions while satisfying a sequence of coupled constraint functions. The global loss and coupled constraint functions are the sums of local convex loss and constraint functions, respectively, which are adaptively generated by the adversary. The local loss and constraint functions are revealed in a bandit manner, i.e., only the values of the loss and constraint functions at sampled points are revealed to the learners, and each learner holds the revealed function values privately. We consider two scenarios, one- and two-point bandit feedback, and propose a corresponding distributed bandit online algorithm for each. We show that both algorithms achieve sublinear expected regret and constraint violation, provided that the accumulated variation of the comparator sequence also grows sublinearly. In particular, we show that O(T^{θ1}) expected static regret and O(T^{7/4−θ1}) constraint violation are achieved in the one-point bandit feedback setting, and O(T^{max{κ,1−κ}}) expected static regret and O(T^{1−κ/2}) constraint violation in the two-point bandit feedback setting, where θ1 ∈ (3/4, 5/6] and κ ∈ (0, 1) are user-defined trade-off parameters. Finally, these theoretical results are illustrated by numerical simulations of a simple power grid example.
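The abstract does not spell out how the learners act on bandit feedback; as background, algorithms of this kind typically build on the standard one-point and two-point gradient estimators from the bandit convex optimization literature, which recover an (approximate) gradient from function values alone. The sketch below is illustrative, not the paper's algorithm; the function names and the choice of exploration radius `delta` are assumptions.

```python
import numpy as np

def random_unit_vector(d, rng):
    """Sample u uniformly from the unit sphere in R^d."""
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def one_point_estimate(f, x, delta, rng):
    """One-point feedback: only the single value f(x + delta*u) is observed.

    The estimate (d/delta) * f(x + delta*u) * u is an unbiased gradient
    estimate of a smoothed version of f, but has high variance.
    """
    d = x.size
    u = random_unit_vector(d, rng)
    return (d / delta) * f(x + delta * u) * u

def two_point_estimate(f, x, delta, rng):
    """Two-point feedback: f is queried at two symmetric points per round.

    The symmetric difference cancels the f(x) term, which is why two-point
    feedback admits the tighter regret bounds stated in the abstract.
    """
    d = x.size
    u = random_unit_vector(d, rng)
    return (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
```

For a quadratic loss f(x) = x·x (gradient 2x), averaging many two-point estimates converges to the true gradient, while a single one-point estimate is unbiased only in expectation and far noisier, which is the intuition behind the gap between the O(T^{θ1}) and O(T^{max{κ,1−κ}}) regret rates.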