Code summarization (CS) has become a promising area in language understanding, aiming to automatically generate sensible natural-language descriptions for programs given their source code, which greatly eases developers' work. It is well known that programming languages are highly structured, so previous works apply structure-based traversal (SBT) or non-sequential models such as Tree-LSTM and graph neural networks (GNNs) to learn structural program semantics. Surprisingly, however, incorporating SBT into an advanced encoder such as Transformer, rather than LSTM, has been shown to yield no performance gain, leaving GNNs as the only remaining means of modeling such necessary structural clues in source code. To remove this limitation, we propose the structure-induced Transformer, which encodes sequential code inputs together with multi-view structural clues through a newly-proposed structure-induced self-attention mechanism. Extensive experiments show that our structure-induced Transformer achieves new state-of-the-art results on benchmarks.
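To make the core idea concrete, the structure-induced self-attention described above can be sketched as standard scaled dot-product attention whose scores are restricted by a 0/1 structural adjacency mask (e.g., derived from AST or data-flow edges). This is a minimal illustrative sketch; the function name, the hard-masking scheme, and the single-view mask are assumptions, not the paper's exact formulation.

```python
import numpy as np

def structure_masked_attention(Q, K, V, mask):
    """Q, K, V: (n, d) token representations.
    mask: (n, n) 0/1 matrix, mask[i, j] = 1 iff token j is
    structurally related to token i (an assumed single-view mask)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)              # raw attention logits
    scores = np.where(mask > 0, scores, -1e9)  # block non-structural pairs
    # row-wise softmax over the surviving (structurally linked) positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: 4 tokens with a chain-structured adjacency (self + neighbors).
rng = np.random.default_rng(0)
n, d = 4, 8
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
mask = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
out = structure_masked_attention(Q, K, V, mask)
print(out.shape)  # (4, 8)
```

Under this sketch, each token can only attend to its structural neighbors, so structural views simply swap in different masks over the same sequential input.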