The core idea of meta-learning is to leverage prior experience to design solutions that can be quickly adapted to new, unseen tasks. Most existing studies consider the case where the feasible parameter space is continuous. Recently, [1] developed a discrete variant of meta-learning, called submodular meta-learning, which treats each task as a discrete optimization problem: the goal is to select a group of items that maximizes the average expected utility over all tasks. Motivated by their framework, we study the submodular meta-learning problem under the adaptive setting. In particular, we assume that each item has a random state drawn from a known prior distribution, and an item must be selected before its realized state is observed. Given a task, the utility function is defined over items and their states. Our goal is to adaptively select a group of items, where each selection is based on feedback from past selections, so as to maximize the average expected utility over all tasks. Following the framework of standard meta-learning, we propose an effective two-stage policy: in the first stage, we pre-compute an initial set of items, called the initial solution set, based on previously visited tasks; in the second stage, once a new task is revealed, we adaptively add more items to the initial solution set to complete the selection process. We show that our policy achieves a 1/32 approximation ratio when the utility function of each task is adaptive submodular. Our policy enjoys the benefits of providing a personalized solution to each task while reducing the computation cost at test time.
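To make the two-stage structure concrete, the following is a minimal illustrative sketch, not the paper's actual policy. The item set, the Bernoulli state priors, and the coverage-style utility (the number of selected items whose realized state is active) are all hypothetical assumptions introduced here for illustration; in particular, the paper computes the initial solution set from previously visited tasks, whereas this toy version greedily uses the prior alone.

```python
import random

# Hypothetical toy instance (illustrative, not from the paper):
# each item i has a Bernoulli state with known prior P(state_i = 1),
# and the utility of a selection is the number of active items chosen.
ITEMS = list(range(6))
PRIOR = {i: 0.3 + 0.1 * i for i in ITEMS}  # assumed known prior distribution

def expected_marginal_gain(item):
    # For this coverage-style utility, the expected marginal gain of an
    # unselected item is simply its prior probability of being active.
    return PRIOR[item]

def two_stage_policy(initial_budget, test_budget, rng):
    candidates = set(ITEMS)

    # Stage 1 (training time): pre-compute an initial solution set greedily
    # before any task-specific feedback is observed. (In the paper this set
    # is derived from previously visited tasks; here we use the prior only.)
    initial = []
    for _ in range(initial_budget):
        best = max(candidates, key=expected_marginal_gain)
        initial.append(best)
        candidates.remove(best)

    # Observe the realized states of the initial items once the new task
    # arrives; each selection was committed before its state was seen.
    realized = {i: int(rng.random() < PRIOR[i]) for i in initial}

    # Stage 2 (test time): adaptively add items, observing each item's
    # realized state before making the next selection.
    chosen = list(initial)
    for _ in range(test_budget):
        best = max(candidates, key=expected_marginal_gain)
        chosen.append(best)
        candidates.remove(best)
        realized[best] = int(rng.random() < PRIOR[best])

    utility = sum(realized[i] for i in chosen)  # count of active items selected
    return chosen, utility
```

Because most of the selection work (the initial solution set) is done offline, only a few greedy steps remain at test time, which is the source of the reduced test-time computation cost mentioned above.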