Virtualization and Network Virtualization — Building Blocks for Resilience in Future Networks

Kurt Tutschku

Kurt Tutschku obtained his diploma and Ph.D. in Computer Science from the University of Wuerzburg, Germany. In 2008, he received the Habilitation degree ("State Doctoral Degree") in Computer Science and Communication Networks from the University of Wuerzburg. From February 2008 to August 2008, he worked for the National Institute of Information and Communications Technology (NICT) in Tokyo, Japan.

Since September 2008, Prof. Dr. Kurt Tutschku has held the Chair of "Future Communication" (endowed by Telekom Austria) at the University of Vienna. His main research interests include future generation communication networks, network virtualization, network federation, Quality-of-Experience, and the modeling and performance evaluation of future network control mechanisms and services in the emerging Future Internet.

Kurt Tutschku maintains close collaboration with industry. He leads research projects with companies such as Telekom Austria, Nokia Siemens Networks, BTexact, DATEV e.G., Bosch and Bertelsmann AG. In addition, Kurt Tutschku conducts multiple funded academic collaborations, such as the WWTF project on the "Optimization of the Future, Federated Internet". Furthermore, he and his team participate in various testbed projects such as CoreLab (Japan), GENI/GpENI (US) and G-Lab (Germany). Kurt Tutschku is also a member of the steering board of the European FP7 Network-of-Excellence "EuroNF: Anticipating the Network of the Future: From Theory to Design" and co-coordinates the joint research agenda and work in EuroNF. In addition, he leads the EuroNF work package JRA.1.6 on "Overlays for Network Control and Support of Evolved Services Infrastructures", which addresses the topic of network virtualization.

Kurt Tutschku is a member of the steering committee of the Future Internet Assembly and co-chair of the Future Internet Symposium 2011, serves as workshop and tutorial co-chair of the 2011 International Teletraffic Congress, and organized the 20th ITC Specialist Seminar on Network Virtualization.


Virtualization and Network Virtualization are technologies that allow the independent and simultaneous operation of multiple logical systems on a single physical platform: multiple virtual PCs may be executed on a single machine, or multiple overlay networks may be embedded in a single physical substrate. Moreover, Network Virtualization permits distributed participants to create, almost instantly, their own network with application-specific naming, routing, and resource management mechanisms, just as server virtualization enables users to employ even a whole computing center as their own personal computer. Network virtualization has therefore recently received tremendous attention, since it is expected to be one of the major paradigms for the Future Internet, as proposed by numerous international initiatives on future networks, e.g. PlanetLab (USA, international), GENI (USA), CoreLab (Japan), OneLab2 (Europe), and G-Lab (Germany).

The aim of this tutorial is to provide an introduction to the concepts of virtualization and network virtualization and to outline how these concepts might support resilience in future networks. The tutorial is split into six parts.

First, we provide an introduction to the various modes of virtualization (sharing vs. aggregation). After that, we outline the major techniques and mechanisms for virtualization, such as paravirtualization, virtual machine monitors (VMMs), and scheduling mechanisms. Subsequently, we discuss the parallels between virtualization and resilience. Next, we extend virtualization to the concept of network virtualization. Afterwards, we provide practical examples of network virtualization (including concepts for link, router, and service virtualization), outline their performance requirements, and provide performance models, e.g. for resource pooling. Finally, we discuss initial approaches for using virtualization and network virtualization to improve resilience, and address challenges and potential pitfalls (i.e. what should be avoided) in building resilient/dependable services with virtualization.
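The gain from resource pooling mentioned above can be illustrated with the classical Erlang-B loss formula: a single virtualized pool of servers blocks fewer requests than the same total capacity split into isolated groups. The sketch below is illustrative only; the group sizes and offered loads are assumptions for the example, not figures from the tutorial.

```python
def erlang_b(servers, offered_load):
    """Blocking probability of an M/M/n/n loss system (Erlang-B),
    computed with the standard recursion
    B(0) = 1,  B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Two isolated resource groups of 5 servers, each offered 4 Erlangs,
# versus one virtualized pool of 10 servers offered the combined 8 Erlangs.
separate = erlang_b(5, 4.0)
pooled = erlang_b(10, 8.0)
print(f"separate groups: {separate:.4f}  pooled: {pooled:.4f}")
assert pooled < separate  # pooling the capacity reduces blocking
```

The same statistical-multiplexing argument underlies why consolidating virtualized resources tends to improve utilization at a given blocking target.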

Robustness of Complex Networks

Piet Van Mieghem

Piet F. A. Van Mieghem has been a professor at the Delft University of Technology, holding the chair in telecommunication networks, and chairman of the section Network Architectures and Services (NAS) since 1998. His main research interests lie in the modelling and analysis of complex networks (such as biological, brain, social, and infrastructural networks) and in new Internet-like architectures and algorithms for future communications networks.

Professor Van Mieghem received his Master's and Ph.D. degrees in Electrical Engineering from the K.U. Leuven (Belgium) in 1987 and 1991, respectively. Before joining Delft, he worked at the Interuniversity Micro Electronics Center (IMEC) from 1987 to 1991. From 1993 to 1998, he was a member of the Alcatel Corporate Research Center in Antwerp, where he was engaged in the performance analysis of ATM systems and in network architectural concepts of both ATM networks (PNNI) and the Internet.

He was a visiting scientist at MIT (Department of Electrical Engineering, 1992-1993) and a visiting professor at UCLA (Department of Electrical Engineering, 2005) and at Cornell University (Center of Applied Mathematics, 2009).

He was a member of the editorial board of the journal Computer Networks from 2005 to 2006. Currently, he serves on the editorial boards of the IEEE/ACM Transactions on Networking and of Computer Communications.

He is the author of three books: Data Communications Networking, Performance Analysis of Communications Networks and Systems, and Graph Spectra for Complex Networks.


Our society depends more strongly than ever on large, complex networks such as transportation networks, telephone networks, the Internet, social networks and power grids, while our health depends on an understanding of biological networks (metabolic, DNA, and brain networks). We are clearly surrounded by complex networks, but do we really understand how they operate? Around 2000, several remarkable, quite universal phenomena in complex networks gave birth to a new wave of research that is still continuing and expanding from physics and mathematics to engineering, biology, the medical sciences, the social sciences, and even finance.

In this tutorial, we first discuss our current understanding of complex networks: how they are represented, what their universal properties are, and what tools we have to describe them. Next, we concentrate on the theme of robustness of complex networks by posing the general high-level question "Given a network, is it good?" If not, what needs to be changed to achieve the desired level of robustness? The definitions of "good" and "robust" are, of course, related to the purpose of the network. Network metrics (topological, spectral and service-related) will be discussed. We will sketch why a general framework to compute network robustness is still lacking, while at the same time the societal demand for robust networks increases (due to our growing dependence on networks, terrorism, cybercrime, etc.). Finally, we will apply the tools of complex network theory to some case studies of real-world networks.
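One concrete example of the spectral metrics mentioned above is the number of spanning trees of a graph, obtainable from the graph Laplacian via Kirchhoff's matrix-tree theorem; loosely, more spanning trees means more redundant ways to keep the network connected. The sketch below (pure Python, exact rational arithmetic) compares a 6-node ring with a more meshed variant; the example graphs are illustrative assumptions, not networks from the tutorial.

```python
from fractions import Fraction

def laplacian(n, edges):
    # Graph Laplacian L = D - A of an undirected simple graph.
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    return L

def spanning_trees(n, edges):
    """Matrix-tree theorem: the number of spanning trees equals any
    cofactor of the Laplacian; here we take the (0,0) minor and
    compute its determinant by Gaussian elimination over Fractions."""
    L = laplacian(n, edges)
    M = [[Fraction(L[i][j]) for j in range(1, n)] for i in range(1, n)]
    m = n - 1
    det = Fraction(1)
    for col in range(m):
        pivot = next((r for r in range(col, m) if M[r][col] != 0), None)
        if pivot is None:
            return 0  # singular minor: the graph is disconnected
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m):
                M[r][c] -= f * M[col][c]
    return int(det)

ring = [(i, (i + 1) % 6) for i in range(6)]   # 6-node cycle
meshed = ring + [(0, 3), (1, 4), (2, 5)]      # add three long chords
print(spanning_trees(6, ring))    # a ring has exactly n spanning trees
print(spanning_trees(6, meshed))  # the extra links multiply the redundancy
```

Comparing such counts (or related spectral quantities such as the algebraic connectivity) for candidate topologies is one way the "is it good?" question can be made quantitative.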