Abstract — Applications running inside data centers (DCs) are enabled through the cooperation of thousands of servers, arranged in racks and interconnected through the data center network (DCN). Current DCN architectures based on electronic devices are neither scalable enough to face the massive growth of DCs, nor flexible enough to efficiently and cost-effectively support highly dynamic application traffic profiles. The FP7 European project LIGHTNESS foresees extending the capabilities of today's electrical DCNs through the introduction of optical packet switching (OPS) and optical circuit switching (OCS) paradigms, together realizing an advanced and highly scalable DCN architecture for ultra-high-bandwidth and low-latency server-to-server interconnection. This article reviews the current DC and high-performance computing (HPC) outlooks, followed by an analysis of the main requirements for future DCs and HPC platforms. As the key contribution of the article, the LIGHTNESS DCN solution is presented, elaborating on the envisioned DCN data plane technologies as well as on the unified SDN-enabled control plane architecture that will empower OPS and OCS transm...

Data centers (DCs) are currently the largest closed-loop systems in the information technology (IT) and networking worlds, continuously growing toward multi-million-node clouds [1]. DC operators manage and control converged IT and network infrastructures in order to offer a broad range of services and applications to their customers. Typical services and applications provided by current DCs range from traditional IT resource outsourcing (storage, remote desktop, disaster recovery, etc.) to a plethora of web applications (e.g., browsers, social networks, online gaming). Innovative applications and services are also gaining momentum, to the point that they will become the main representatives of future DC workloads. Among them are high-performance computing (HPC) and big data applications [2]. HPC encompasses a broad set of computationally intensive scientific applications aiming to solve highly complex problems in areas such as quantum mechanics, molecular modeling, and oil and gas exploration. Big data applications target the analysis of massive amounts of data collected from people on the Internet in order to analyze and predict their behavior.

All these applications and services require huge data exchanges between servers inside the DC, supported by the DC network (DCN): the intra-DC communication network. The DCN must provide ultra-large capacity to ensure high throughput between servers. Moreover, very low latencies are mandatory, particularly in HPC, where parallel computing tasks running concurrently on multiple servers are tightly interrelated.

Unfortunately, current multi-tier hierarchical tree-based DCN architectures relying on Ethernet or InfiniBand electronic switches suffer from bandwidth bottlenecks, high latencies, manual operation, and poor scalability with respect to the expected DC growth forecasts [3]. These limitations have mandated a renewed investigation
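To make the scalability concern concrete, the multi-tier tree-based DCNs criticized above are commonly dimensioned as k-ary fat-trees built from identical k-port electronic switches. The sketch below (our own illustration, not part of LIGHTNESS; the function name is hypothetical) shows the standard fat-tree arithmetic: reaching multi-million-server scale forces either very large switch radices or additional tiers, each extra tier adding hops and hence latency.

```python
def fat_tree_capacity(k: int) -> dict:
    """Dimensions of a k-ary fat-tree built from k-port switches.

    Standard fat-tree arithmetic: k pods, each with k/2 edge and k/2
    aggregation switches, plus (k/2)^2 core switches, supporting
    k^3/4 hosts at full bisection bandwidth.
    """
    if k % 2:
        raise ValueError("port radix k must be even")
    half = k // 2
    return {
        "hosts": half * half * k,           # k^3 / 4
        "edge_switches": half * k,          # k/2 per pod, k pods
        "aggregation_switches": half * k,   # k/2 per pod, k pods
        "core_switches": half * half,       # (k/2)^2
    }

# Host count grows only cubically in the switch radix, so scaling a
# three-tier design toward millions of servers quickly exhausts
# realistic port counts:
for k in (16, 48, 128):
    print(k, fat_tree_capacity(k)["hosts"])  # 1024, 27648, 524288 hosts
```

Even with 128-port switches, a three-tier fat-tree tops out at roughly half a million hosts, which motivates the search for flatter, optically switched fabrics such as the one LIGHTNESS proposes.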