
  • 8/6/2019 New Generation Network Architecture (1)

    1/136

New Generation Network Architecture
AKARI Conceptual Design

AKARI Project
Original Publish (Japanese) April 2007
English Translation October 2007
Copyright 2007 NICT


AKARI Project Members: Masaki Hirabaru, Masugi Inoue, Hiroaki Harai, Toshio Morioka, Hideki Otsuki, Kiyohide Nakauchi, Sugang Xu, Ved Kafle, Hiroko Ueda, Masataka Ohta, Fumio Teraoka, Masayuki Murata, Hiroyuki Morikawa, Fumito Kubota, and Tomonori Aoyama

This document presents the conceptual design of a new generation network architecture. It is based on the discussions at 14 meetings and 2 seminars, which were held during an 11-month period beginning in May 2006 and attended primarily by the Network Architecture Group of the New Generation Network Research Center of the National Institute of Information and Communications Technology (NICT).

What the name of AKARI indicates
The codename for New Generation Network R&D in NICT
"A small light in the dark pointing to the future"


    AKARI Conceptual Design Summary

    AKARI Project Goals and Conceptual Design

The future holds a computing environment characterized by embedded and pervasive computing and networking that will benefit society worldwide, not just the current state in which computers and networks are proliferating widely. The current Internet, which was not designed with this kind of pervasive information-networked society in mind, cannot handle this societal transition, leaving it unable to further mankind's potential. To realize this kind of information-networked society envisioned for the next two or three decades, a new generation network must be created before the current Internet reaches its limits. This new generation network must seamlessly integrate real-world computing and networking with virtual space.

The primary goal of the AKARI Project is to design a network of the future. The AKARI Project aims to implement a new generation network by 2015, developing a network architecture and creating a network design based on that architecture. Our philosophy is to pursue an ideal solution by researching new network architectures from a clean slate without being impeded by existing constraints. Once these new network architectures are designed, the issue of migration from today's conditions can be considered using these design principles. Our goal is to create an overarching design of what the entire future network should be. To accomplish this vision of a future network embedded as part of societal infrastructure, each fundamental technology or sub-architecture must be selected and the overall design simplified through integration.

The AKARI Project, which was launched one year ago, identifies a list of societal requirements and the design principles needed to support them. It also introduces future basic design technologies and associated design principles, and includes conceptual design examples of several key portions based on the design principles, as well as requirements for testbeds that must be built for verifying them. Some parts of Chapters 2 and 4 are extracted and introduced below. These parts include societal and design requirements of the new generation network era (Chapter 2) and basic design principles for a new generation network architecture and network architecture design based on an integration of science and technology (Chapter 4).

Societal Considerations and Design Requirements of the New Generation Network Era

Network requirements and considerations for the Internet of tomorrow include:

(1) Peta-bps class backbone network, 10 Gbps FTTH, e-Science

    (2) 100 billion devices, machine to machine (M2M), 1 million broadcasting stations

    (3) Principles of competition and user-orientation

(4) Essential services (medical care, transportation, emergency services), 99.99% reliability

(5) Safety, peace of mind (privacy, monetary and credit services, food supply traceability, disaster services)

(6) Affluent society, disabled persons, aged society, long-tail applications

(7) Monitoring of global environment and human society


    (8) Integration of communication and broadcasting, Web 2.0

    (9) Economic incentives (business-cost models)

    (10) Ecology and sustainable society

    (11) Human potential, universal communication

To deal with these societal requirements, our goal is to contribute to human development by designing a new generation network architecture based on the following design principles.

(1) Large capacity. Increased speed and capacity are required to satisfy future traffic needs, which are estimated to be approximately 1000 times current requirements in a decade.

(2) Scalability. The devices that are connected to the network will be extremely diverse, ranging from high-performance servers to single-function sensors. Although little traffic is generated by a small device, their number will be enormous, and this will affect the number of addresses and states in the network.

(3) Openness. The network must be open and able to support appropriate principles of competition.

(4) Robustness. High availability is crucial because the network is relied on for important services such as medical care, traffic light control and other vehicle traffic services, and bulletins during emergencies.

(5) Safety. The architecture must be able to authenticate all wired and wireless connections. It also must be designed so that it can exhibit safety and robustness according to its conditions during a disaster.

(6) Diversity. The network must be designed and evaluated based on diverse communication requirements without assuming specific applications or usage trends.

(7) Ubiquity. To implement pervasive development worldwide, a recycling-oriented society must be built. A network for comprehensively monitoring the global environment from various viewpoints is indispensable for accomplishing this.

(8) Integration and simplification. The design must be simplified by integrating selected common parts, not by just packing together an assortment of various functions. Simplification increases reliability and facilitates subsequent extensions.

(9) Network model. To enable the information network to continue to be a foundation of society, the network architecture must have a design that includes a business-cost model so that appropriate economic incentives can be offered to service providers and businesses in the communications industry.

(10) Electric power conservation. As network performance increases, its power consumption continues to grow, and as things stand now, a router will require the electrical power of a small-scale power plant. The information-networked society of the future must be more Earth friendly.

(11) Extendibility. The network must be sustainable. In other words, it must have enough flexibility to enable the network to be extended as society develops.
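The capacity estimate in principle (1), roughly 1000 times today's traffic within a decade, can be sanity-checked as a compound growth rate (a quick arithmetic sketch):

```python
# Compound growth: if traffic grows ~1000x over 10 years, the implied
# annual growth factor is 1000**(1/10), i.e. traffic roughly doubles
# every year.
annual_factor = 1000 ** (1 / 10)
print(f"annual growth factor: {annual_factor:.3f}")   # ~1.995

# Check: compounding that factor over a decade recovers the 1000x estimate.
decade_growth = annual_factor ** 10
print(f"growth over 10 years: {decade_growth:.0f}x")  # 1000x
```

In other words, the estimate amounts to assuming traffic doubles about once a year, every year, for ten years.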


    Basic Design Principles for a New Generation Network Architecture

We identified the following three principles as our core design principles for creating a new generation network architecture: KISS (Keep It Simple, Stupid), Sustainable and Evolutionary, and Reality Connection.

    (1) KISS principle

The KISS principle is an important guide for increasing Internet diversity, expandability, and reliability, thereby reducing possible complications that can easily arise. We have chosen the following design principles to support the KISS principle.

End-to-End: A basic principle of Internet architecture is that a network should not be constructed based on a specific application or with the support of a specific application as its objective.

Crystal Synthesis: When selecting from among many technologies and integrating them in order to enable diverse uses, simplification is the most important principle. The design must incorporate "crystal synthesis," a kind of simplification of technologies to reduce complexity even when integrating functions.

Common Layer: In a network model with a layer structure, each layer's independence is maintained. Each layer is designed independently and its functions are extended independently. However, one of the reasons for the success of the Internet is that the IP layer is a common layer. If we assume that the network layer exists as a common layer, other layers need not have the functions that are implemented in that common layer. Therefore, we concluded that the design of the new generation network architecture will have a common layer and will eliminate redundant functions in other layers, collapsing functions that are duplicated across multiple layers.
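As a loose illustration of the common-layer idea, the hypothetical Python sketch below (all class, host, and address names are invented for this example) shows one layer owning a function, here addressing, that an upper layer delegates to rather than re-implements:

```python
# Hypothetical sketch: a layered stack in which one common layer owns
# addressing, so other layers delegate to it instead of duplicating it.

class CommonLayer:
    """The single common layer (analogous to IP) that owns addressing."""
    def __init__(self):
        self.addresses = {}              # node name -> address

    def assign(self, node, address):
        self.addresses[node] = address

    def resolve(self, node):
        return self.addresses[node]

class TransportLayer:
    """An upper layer that reuses the common layer's addressing
    instead of keeping its own copy of that function."""
    def __init__(self, common):
        self.common = common

    def connect(self, src, dst):
        # Delegation: no second addressing mechanism lives in this layer.
        return (self.common.resolve(src), self.common.resolve(dst))

common = CommonLayer()
common.assign("hostA", "10.0.0.1")
common.assign("hostB", "10.0.0.2")
transport = TransportLayer(common)
print(transport.connect("hostA", "hostB"))   # ('10.0.0.1', '10.0.0.2')
```

The point of the sketch is structural: because addressing exists exactly once, extending or fixing it never requires touching the layers that depend on it.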

    (2) Sustainable and Evolutionary principle

The new generation network architecture must be designed as a sustainable network that can evolve and develop in response to changing requirements. It is important for the network to have a simple structure and for service diversity to be ensured in end or edge nodes. To accomplish this, the following network control or design methods must be followed to enable a sustainable network to be continuously developed over 50 or 100 years.

Self-* properties: To construct a sustainable network that can be continuously developed, that network must be adaptive. Therefore, the network must be designed so that individual entities within the network operate in a self-distributed manner and the intended controls are implemented overall. In other words, a self-organizing network must be designed. Also, the hierarchical structure of the network will continue to be an important concept in the future from the perspectives of function division and function sharing. A network must be designed having an adaptable control structure for upper and lower layer states without completely dividing the hierarchy as is traditionally done. In other words, a self-emergent network must be designed.
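A classic toy example of self-distributed local behavior producing an intended global result, in the spirit of the self-organization described above (this gossip-averaging sketch is a generic illustration, not part of the AKARI design):

```python
import random

# Generic sketch of self-organizing control: repeated pairwise "gossip"
# averaging. Each interaction is purely local, yet the network as a
# whole converges to the global average with no central controller.

random.seed(0)                          # fixed seed so the run is repeatable
n = 10
values = [float(i) for i in range(n)]   # node i starts with local value i
target = sum(values) / n                # 4.5, the global mean

for _ in range(1000):
    # Any pair may interact (a fully connected toy topology; sparser
    # topologies converge too, just more slowly).
    i, j = random.sample(range(n), 2)
    values[i] = values[j] = (values[i] + values[j]) / 2  # local averaging only

# Every node now holds (approximately) the global mean.
print(max(abs(v - target) for v in values) < 1e-3)       # True
```

No node ever sees the global state, yet the intended overall control (agreement on the mean) emerges, which is the property the text asks of a self-organizing design.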

Robust large-scale network: As the scale or complexity of a system increases, multiple simultaneous breakdowns normally occur, rather than single independent failures. In addition, software bugs are more likely to be introduced, and human error is more likely to occur during operations management. The new generation network architecture must be designed to handle simultaneous or serious failures that may occur.

Controls for a topologically fluctuating network: In a mobile network or P2P network, communication devices are frequently created, eliminated, or moved. It is essential for mobility to be taken into consideration when designing a network. For example, when the topology frequently changes, controls for finding resources on demand are more effective than controls for maintaining routes or addresses. However, since the overhead for on-demand control is high, it is important to enable routing to be implemented according to the conditions of topology fluctuation.
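The trade-off above can be illustrated with a toy on-demand discovery sketch (hypothetical Python, with an invented four-node topology): nothing is precomputed, so topology changes cost nothing between queries, but each query pays a flooding overhead:

```python
from collections import deque

# Hypothetical sketch: on-demand route discovery by flooding (BFS).
# No routing tables are maintained, so topology churn between queries
# costs nothing; each query instead pays the cost of exploring the network.

def discover_route(topology, src, dst):
    """Breadth-first flood from src; return the first route found to dst."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in topology.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # dst unreachable in the current topology

topology = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(discover_route(topology, "A", "D"))    # ['A', 'B', 'C', 'D']

topology["B"] = ["A"]                        # link B-C disappears (mobility)
topology["A"].append("C")                    # a new link A-C appears
print(discover_route(topology, "A", "D"))    # ['A', 'C', 'D']
```

Note that the second query simply works against the changed topology with no route maintenance in between, which is exactly the behavior favored when the topology fluctuates frequently.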

Controls based on real-time traffic measurement: Failures become more commonplace as the scale of a network increases. As a result, precision-optimized real-time traffic measurements over the time scale required for control are important, and these must be applied to routing. Also, to pursue more autonomous actions in end hosts, it is important to actually measure or estimate the network status.
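One standard way to smooth real-time traffic samples down to the time scale a control loop needs is an exponentially weighted moving average; the sketch below is a generic illustration (the load figures are invented), not a mechanism specified in this document:

```python
# Generic sketch: smoothing real-time traffic samples with an exponentially
# weighted moving average (EWMA), so a routing control loop reacts to
# sustained shifts in load rather than to momentary spikes.

def ewma(samples, alpha=0.2):
    """Return smoothed estimates: est = (1 - alpha) * est + alpha * sample."""
    estimate = samples[0]
    history = [estimate]
    for sample in samples[1:]:
        estimate = (1 - alpha) * estimate + alpha * sample
        history.append(estimate)
    return history

# A one-off spike barely moves the estimate; a sustained shift pulls it up.
load = [10, 10, 100, 10, 10, 10, 50, 50, 50, 50, 50, 50]
smoothed = ewma(load)
print([round(x, 1) for x in smoothed])
```

The smoothing constant alpha sets the effective time scale: a smaller alpha filters more noise but tracks real load shifts more slowly, which is the "time scale required for control" trade-off the text refers to.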

Scalable, distributed controls: To sufficiently scale controls even in large-scale or topologically varying networks, it is important to introduce self-organizing controls or pursue autonomous actions at each node.

Openness: Providing openness to users to facilitate the creation of new applications is also important to the network.

    (3) Reality Connection principle

Internet problems occur because entities in the network's virtual space are disassociated from real-world society. To smoothly integrate relationships between these entities and society, addressing must be separated into physical and logical address spaces, mappings must be created between them, and authentication or traceability requests based on those mappings must be satisfied.

Separation of physical and logical addressing: We must investigate the extent to which physical and logical addressing should be separated. Various problems have been caused on the Internet by the appearance of new types of host connection scenarios that had not previously existed, such as mobility or multi-homing scenarios, and by handling physical and logical addresses in the same way.
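A minimal sketch of what such a separation could look like (hypothetical Python; identifier and locator formats are invented, and this illustrates the general identifier/locator-split idea, not the AKARI scheme):

```python
# Hypothetical sketch of separating logical and physical addressing:
# a stable identifier names the node, a locator says where it currently
# attaches, and mobility only rewrites the identifier -> locator mapping.

class MappingSystem:
    def __init__(self):
        self.locator_of = {}                  # logical id -> physical locator

    def register(self, node_id, locator):
        self.locator_of[node_id] = locator

    def move(self, node_id, new_locator):
        # Mobility: anything keyed on node_id is unaffected by the move.
        self.locator_of[node_id] = new_locator

    def resolve(self, node_id):
        return self.locator_of[node_id]

mapping = MappingSystem()
mapping.register("host-42", "net-A/port-3")
session_peer = "host-42"                      # upper layers bind to the identifier

mapping.move("host-42", "net-B/port-9")       # the host attaches elsewhere
print(mapping.resolve(session_peer))          # net-B/port-9
```

Because the session is bound to the stable identifier, mobility and multi-homing become mapping updates rather than address changes visible to upper layers, which is the failure mode the text attributes to handling both kinds of address in the same way.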

Bi-directional authentication: A network should be designed so that bi-directional authentication is always possible. Also, authentication information must be located under the control of the particular individual or entity.
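A generic way to sketch bi-directional authentication is a mutual challenge-response over a pre-shared key, shown below using Python's standard hmac and secrets modules (an illustration only; this document does not specify such a protocol):

```python
import hashlib
import hmac
import secrets

# Generic sketch of bi-directional authentication: a shared-key
# challenge-response in which EACH side must answer the other's fresh
# challenge, so both ends are authenticated, not just one.

def respond(key, challenge):
    """Prove knowledge of the key for a given fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

key = secrets.token_bytes(32)       # shared in advance, under its owner's control

# A challenges B, and B challenges A, with independent fresh nonces.
challenge_from_a = secrets.token_bytes(16)
challenge_from_b = secrets.token_bytes(16)

b_response = respond(key, challenge_from_a)   # B answers A's challenge
a_response = respond(key, challenge_from_b)   # A answers B's challenge

a_accepts_b = hmac.compare_digest(b_response, respond(key, challenge_from_a))
b_accepts_a = hmac.compare_digest(a_response, respond(key, challenge_from_b))
print(a_accepts_b and b_accepts_a)            # True
```

The fresh nonces prevent replay, and keeping the key with its owner matches the text's requirement that authentication information stay under the control of the individual or entity.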

Traceability: Individuals or entities must be traceable to reduce attacks on the network. Traceability must be a basic principle when designing addressing and routing, as well as transport over them. To reduce spam, systems must be traceable from applications to actual society.

Network Architecture Design Based on an Integration of Science and Technology

To build a new generation network architecture, it is important to design the network architecture by integrating technological techniques and theoretical (scientific) techniques. Setting up the architecture technologically, based on properties that were obtained by scientific methods, is the essence of architecture construction. Specifically, the following procedure is required.

(1) One architecture that can be entirely optimized and can flexibly adopt new functions is constructed.

(2) Then, to refine that architecture, a model is created based on network science, and its system properties are discovered according to mathematical analysis or actual inspections.

(3) Specific methods for achieving further global optimization (such as moderate interactions between layers or moderate interactions between different modules in the same layer) are created and new functions are adopted.

    This causes the network system to grow.

(4) The entire process in which new properties for that system are discovered from a scientific standpoint and new technologies are adopted is repeatedly executed.

In other words, network development can be promoted through a feedback loop containing repeated scientific and technological processes.

Network science provides basic theories and methodologies for network architectures. However, the network system itself must be understood. New discoveries or principles can be obtained and system limitations can be learned by understanding system behavior through basic theories and methodologies. These theories and methodologies can also help clarify what makes good protocols or control mechanisms.

When a network architecture is designed through network science research, whether or not the architecture is truly useful is clarified and implementation is promoted based on the following five criteria.

    (1) Has a new design policy been developed?

    (2) Has a new communication method been implemented?

    (3) Was a new abstraction, model, or tool conceived?

    (4) Were results commercialized and accepted by the user community?

    (5) Were solutions given for real-world problems?

    Summary

The AKARI Conceptual Design is a first step towards implementing a new generation network architecture. As mentioned earlier, this paper introduces societal considerations, future basic technologies, and design principles to be used when designing a new network architecture. It also includes conceptual design examples of several key portions based on the design principles, as well as requirements for testbeds that must be built for verifying them. Our approach is to focus our energy on continuing to design a new generation network and to use testbeds to investigate and evaluate the quality of that design. Therefore, the existence of design principles is crucial to achieving a globally optimized, stabilized architecture. Until the final design is completed, even the design principles themselves are not fixed, but can be changed according to feedback through repeated design and evaluation.

The network architecture is positioned between the top-down demands of solving societal problems and the bottom-up conditions of future available component technologies. Its role is to maximize the quality of life for the entire networked society and to provide it with sustainable stability. A new sustainable design must support human development for 50 or 100 years, not just 2 or 3 decades, as it functions as the information infrastructure underlying our society. This new architecture must avoid the same dangers confronting the current Internet.


CONTENTS

Preface

Chapter 1  Goals of the New Generation Network Architecture Design Project AKARI
  1.1 AKARI Project Objective
  1.2 AKARI Project Targets
  1.3 AKARI Project Themes
  1.4 Network Architecture Definitions and Roles
  1.5 Opportunity for Redesigning Network Architecture from a Clean Slate
  1.6 Conceptual Positioning of New Generation Network and Its Approach
  1.7 Two Types of NGN: NXGN and NWGN
  1.8 Comparison of NXGN and NWGN

Chapter 2  Current Problems and Future Requirements
  2.1 Internet Limitation
  2.2 Future Frontier
  2.3 Traffic Requirements 10 Years Into the Future
  2.4 Societal Requirements and Design Requirements

Chapter 3  Future Enabling Technologies
  3.1 Optical Transmission
  3.2 New Optical Fiber
  3.3 Wavelength and Waveband Conversion
  3.4 Optical 3R
  3.5 Optical Quality Monitoring
  3.6 Optical Switch
  3.7 Optical Buffer
  3.8 Silicon Photonics
  3.9 Electric Power Conservation
  3.10 Quantum Communication
  3.11 Time Synchronization
  3.12 Software-Defined Radio
  3.13 Cognitive Radio
  3.14 Sensor Networks
  3.15 Power Conservation for Wireless Communications in the Ubiquitous Computing Era

Chapter 4  Design Principles and Techniques
  4.1 Design Principles for a New Generation Network
  4.2 Network Architecture Design Based on an Integration of Science and Technology
  4.3 Measures for Evaluating Architectures
  4.4 Business Models

Chapter 5  Basic Configuration of a New Network Architecture
  5.1 Optical Packet Switching and Optical Paths
  5.2 Optical Access
  5.3 Wireless Access
  5.4 PDMA
  5.5 Transport Layer Control
  5.6 Addressing and Routing
  5.7 Layering
  5.8 Security
  5.9 QoS Routing
  5.10 Network Model
  5.11 Robustness Control
  5.12 Overlay Network
  5.13 Layer Degeneracy

Chapter 6  Testbed Requirements

Chapter 7  Related Research
  7.1 NewArch
  7.2 GENI / FIND
  7.3 Euro-NGI / Euro-FGI

Chapter 8  Conclusions

Appendix  Definitions of Terms


    Preface

Packet switching was invented over 40 years ago. This technology, which gave rise to the Internet, is the information foundation of society today. About a century before the invention of packet switching, the telephone was invented as an improvement over the telegraph, and the telephone network based on circuit switching came to occupy a firmly entrenched position within society. Following the failure of Asynchronous Transfer Mode (ATM), the telephone network became the Next Generation Network (NGN), and an attempt is now being made to absorb it into a network based on packet switching. Through the transition from a simple network for connecting telephones to an information network for connecting computers, the network not only has supported societal aims, but has also become an indispensable part of our world today. In the ubiquitous computing society of the future, an information network will permeate our society and its terminals will be processing devices that are neither telephones nor computers.

As the complexity and diversity of human society increases in the future and people and information become more closely interconnected, the network itself cannot help but reflect this diversity and complexity. Computers and networks will be ubiquitous and information networks will be embedded in the real world to benefit society en masse. The information network that supports the diversification of human life will give birth to a new culture and science. The network will enable real-world society to incorporate virtual space so that the two spaces are integrated seamlessly and people will be unaware of passing back and forth between these spaces. The current Internet, which was not designed with this kind of pervasive information network-oriented society in mind, cannot handle this societal transition, leaving it unable to further mankind's future potential. Indeed, we are already experiencing problems associated with the gap between the real world and virtual space. To realize this kind of information network-oriented society envisioned for the next two or three decades, we must have a new generation network that can integrate the real world and virtual space and deal with them seamlessly.

Improvements have often been made to the Internet by the Internet Engineering Task Force (IETF), its standards organization. Because of improvements made over dozens of years, its protocols have become more complex. Also, innovative ideas are not readily accepted into Internet technologies that have already been established. IPv6 simply broadens the address space, and we cannot expect the IETF to produce a new network architecture. Our vision is that we must create this new generation network before the Internet reaches its limits. The aim of new generation network research is to create a network for people of the next generation, not to create a network based on next generation technologies.

A network architecture, which is a set of design principles for designing a network, is consistent with the general rules of human society. The Internet architecture was developed along with competition based on market principles and globalization, which the Internet supported. Both the rules of society and the Internet try to welcome and engage turning points. A sustainable society increasingly demands not only liberalization, but also peace of mind and safety.

To apply technologies that will be available in the future to resolve both social problems that cannot be resolved by modifying the current network and problems that are expected to become serious in the future, we must select, integrate, and simplify techniques and technologies based on a network architecture designed according to new design principles. The network architecture is positioned between the top-down demands of solving societal problems and the bottom-up conditions of future available component technologies. Its role is to maximize the quality of life of the entire network-oriented society and to provide it with sustainable stability.

New generation network research must design the network from a clean slate, regardless of current technologies. A new sustainable design must support human development for the following 50 or 100 years. We should design an ideal network that can be realized at a future point in time and then consider the issue of migration from existing conditions later. We must not improve the current technology without looking at future courses of action.

This conceptual design is a collection of techniques and technologies that were selected and simplified based on design principles conforming to its concepts. Since the techniques and technologies that are included have not yet been evaluated, they are only suggestions to be included in a new generation network and act simply as guidelines indicating the first step in advancing our research.

This conceptual design is organized as follows. Chapter 1 introduces the aims of the new generation network architecture design project AKARI. To clarify the current problems and future requirements, Chapter 2 describes the design requirements that are called for in this conceptual design. Chapter 3 describes future component technologies that can be used by the new generation network. Chapter 4 discusses design principles and techniques that are used in this conceptual design. Chapter 5 deals with the basic configuration of the new generation network architecture and various related technical areas. Chapter 6 describes the requirements for testbeds to be used as prototypes for verifying the new generation network architecture. Chapter 7 introduces related research, and Chapter 8 presents conclusions.


Chapter 1. Goals of the New Generation Network Architecture Design Project AKARI [Hirabaru, Otsuki, Aoyama, Kubota]

This chapter initially describes the objectives and targets of the AKARI Project. Then, to clarify the aims of the project, it describes the importance of the network architecture definitions and roles, the conceptual positioning and approach of the AKARI Project, and the differences between a next generation and new generation network.

    1.1. AKARI Project Objective

The objective of the AKARI Project is to design the network of the future. The AKARI Project aims to implement a new generation network by 2015 by establishing a network architecture and creating a network design based on that architecture. Our motto is "a small light (akari in Japanese) in the dark pointing to the future." Our philosophy is to pursue an ideal solution by researching new network architectures from a clean slate, without being impeded by existing constraints. Then the issue of migration from existing conditions can be considered. Our goal is to create an overarching design of what the entire future network should be. To accomplish this vision of a future network embedded as part of societal infrastructure, each fundamental technology or sub-architecture must be selected and the overall design simplified through integration.

    1.2. AKARI Project Targets

The targets of the AKARI Project are to develop a new generation network architecture and to design a new generation network based on it. The design will take into consideration the various design requirements discussed in Chapter 2 and present assessment evidence. Our first year goal is to create a conceptual design and present the initial design principles. These initial design principles will be revised to create a more detailed design in the second year. A development plan will be determined in the third year, a prototype will be developed in the fourth year, and demonstration experiments will be conducted and evaluated in the fifth year to show the effectiveness of this design.

1.3. AKARI Project Themes

The AKARI Project will create a blueprint for a new generation network to be incorporated throughout Japan. This network will be based on future leading-edge technologies and will act as a foundation for supporting all communication services. The blueprint not only will be a design of the entire new generation network, but it will also indicate the directions of next generation network technologies for the industrial world with which the network will be interacting. The AKARI Project will evaluate the network using testbeds through cooperation with universities and industries and lead the way towards future standardization. To accomplish this, we identified the following guidelines:

Lead by indicating future actions and ensuring neutral innovations for competitive industries


Design based on basic principles that are common overall, not local improvements of efficiency or progress in specific component technologies

Create an overarching vision of what the future network should be for more than a decade hence and utilize established design capabilities based on practical experience

    1.4. Network Architecture Definitions and Roles

A network architecture is a set of abstract design principles. These design principles become criteria for making decisions when confronted with choices from among many design alternatives. Expressed in another way, a network architecture is a fusion of science and technology. Although we can evaluate whether or not a specific network satisfies certain requirements, there is no general methodology for designing a network that satisfies these requirements. Therefore, a network architecture aims to assist the design process so that the requirements are met more satisfactorily through repeated trials or more stable results are obtained. It is conceptually positioned at an intermediate location to match user requests with the development of component technologies. An excellent network architecture fills the gaps between these requests and developments to bring about a more optimized and stable network.

[Figure: the network architecture shown as a common infrastructure with no vertical division, positioned between future requirements from diverse users and society and evolving, future fundamental technologies; it is flexible to adopt new user requirements, enjoys fundamental technology advances, and serves as the information infrastructure of society, providing global optimization and sustainable stability.]

Fig. 1.4.1. Roles of the Network Architecture


1.5. Opportunity for Redesigning Network Architecture from a Clean Slate

[Figure: the original Internet architecture (L2 datalink, L3 Internet, and L4 transport layers) onto which functions were rapidly added on top of each other (multicast, mobility, hierarchical addressing, complicated routing, IPSEC, MPLS, anycast, local addressing, NAT, GMPLS, flow-label) and layers were rapidly inserted (L2.5 MPLS, L3.5 IPSEC, L3.5 Mobile IP, L4.5 Bundle, L4.5 Platform, LX.5 Overlay), leaving open questions such as guaranteed service, universal communication, small devices, authentication, and dependability. The time for redesigning the Internet from a clean slate is approaching.]

Fig. 1.5.1. Problems with the Internet Architecture

[Figure: timeline of clean-slate initiatives, 2000-2009: NewArch (DARPA), SIGCOMM FDNA, the 100x100 Clean Slate Project (NSF), Euro-NGI (EU), Autonomic Communication (EU), FIND (NSF), the announcement of the GENI Initiative, UNS Strategic Programs (JP) for a non-IP New Generation Network, and the Future (New) Generation Network Architecture (NICT).]

Fig. 1.5.2. Initiatives for Recreating a Network Architecture from a Clean Slate

1.6. Conceptual Positioning of New Generation Network and Its Approach

We must have a vision for the network of the future. Although it is difficult to predict what it will be like 10 or 15 years in the future, there should be an ideal network-oriented society, and research and development should be conducted concerning the network for implementing it. This network should only be accountable to the ideal future that it aims to achieve and should not be tied to network systems that are currently in use in our present-day society or to the technological assets involved in those systems. The new generation network will not be able to be implemented immediately, but will act as a reference for future research and development and point to a course of action for research and development in this field.

There are concerns that if research and development is based on current technologies, the development of the network-oriented society will be directed by corporate interests or reduced to local optimizations. In addition, a large gap may open between research and development based on current technologies and the next-generation technologies needed when the limits of the current Internet are reached. However, we believe that milestones for current network research and development projects can be determined, and steps toward the future can be taken, with an ideal solution in mind.

Many current network research and development projects end up adhering to piecemeal improvements of Internet technologies or to the further spread of the Internet. There is a strong tendency to carry out development with the current Internet in mind, which inhibits movement toward new innovation. It is our philosophy that network research and development linked to future innovations is possible only by starting from a clean slate, with no preconceptions carried over from the current Internet.

[Figure: a timeline from 2005 to 2015. From the past and present network, path 1) "new paradigm" leads directly to the New Generation Network (NWGN), while path 2) "modification" leads through the Next Generation Network (NXGN) to a revised NXGN.]

Fig. 1.6. Conceptual Positioning of New Generation Network


    1.7. Two Types of NGN: NXGN and NWGN

We propose that the next generation network based on IP be referred to as NXGN, and that the new generation network of at least a decade into the future, which will not necessarily adhere to IP, be referred to as NWGN. In addition, we want to point out that a new paradigm is likely to be introduced in NWGN.

Next Generation Network (NXGN)

    The basic architecture and service conditions of the Internet are maintained, and quadruple-play services (telephone, data, broadcasting, and mobile devices) are implemented.

New Generation Network (NWGN)

    Future ubiquitous services are conceived in a form that differs from current Internet architecture and services, using a new paradigm called the New Paradigm Network (NPN).

The NGN that is the focus of the ITU-T, a typical example of an NXGN, is a short-term research and development project covering the next five years with an aim to improve current technologies. On the other hand, AKARI, a typical example of an NWGN, is a long-term research and development project that begins with a clean slate and aims to design a network for more than a decade into the future.

The following figure explains the typical configuration of an NWGN. At the center is the common network (shaded portion in the figure), a common layer that will be newly developed to replace IP. An underlay network below it will encompass several technologies and provide diverse means of transmission and access. On the other hand, an overlay network above the common network will provide a flexible, customizable layer on which applications will run. A cross-layer control mechanism will operate among the layers to enable them to cooperate and to provide users with services in the appropriate layer, such as A, B, and C in the figure.


[Figure: users A, B, and C are served through a user interface by applications with universal access, running on a customizable and flexible overlay network; at the center, a broadband, ubiquitous, scale-free, and secure common network replaces IP; below it, an underlay network provides photonic, mobile, and sensor transmission; a cross-layer control mechanism spans all layers.]

Fig. 1.7. Conceptual Diagram of the New Generation Network Configuration

    1.8. Comparison of NXGN and NWGN

NGN is the next generation network architecture for which the ITU-T is conducting standardization work. The core of this architecture consists of a function architecture called the service stratum and a transmission network called the transport stratum, which are linked by IP. NGN aims to create a carrier network architecture that can not only extend telephone services to multimedia and mobile network services, by using the conventional IP network as infrastructure and adding security, authentication, and QoS functions, but can also provide new services extending into the future.

The goal for implementation is roughly 2010, and the targeted services are triple-play services that encompass existing telephone and IMS-based multimedia services. Session management based on service definitions in the service stratum will be performed for all NGN services; IMS-based multimedia services will also be session-based. Since the infrastructure is an IP network, services implemented on the Internet are expected to be fully supported by the transport stratum. However, since a carrier network is assumed, the degree to which it will interconnect with the Internet as infrastructure is still not known. Also, one technology that is gaining attention is application services via the Application Network Interface (ANI). This not only enables services to be provided by carriers, but also enables users to receive extended services. However, since this technology does not directly control the infrastructure, its prospects will depend not only on ANI functions that may be created in the future, but also on the degree to which they are made publicly available.

Although the possibility of future growth of NGN as a carrier network is anticipated and it is expected to be used as infrastructure in place of traditional communication networks, the following concerns cannot be ignored.


QoS tasks

Using IP for QoS tasks could run into the limits of the IP network. In particular, QoS is difficult to guarantee: although applications receive priority control per class, it is clear that bandwidth cannot easily be guaranteed.
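As a rough illustration of why per-class priority control falls short of a bandwidth guarantee, the following sketch simulates a strict-priority link. All numbers and the scheduling model are hypothetical, chosen only for illustration:

```python
# Sketch: strict per-class priority scheduling can starve a lower class
# entirely, so per-class priority control is not a bandwidth guarantee.
from collections import deque

def simulate(link_capacity, arrivals_high, arrivals_low, slots):
    """Serve up to link_capacity packets per slot, always preferring high class."""
    high, low = deque(), deque()
    served = {"high": 0, "low": 0}
    for _ in range(slots):
        high.extend(["h"] * arrivals_high)   # high-class arrivals this slot
        low.extend(["l"] * arrivals_low)     # low-class arrivals this slot
        for _ in range(link_capacity):       # the link serves this many per slot
            if high:
                high.popleft()
                served["high"] += 1
            elif low:
                low.popleft()
                served["low"] += 1
    return served

# If the high class alone offers as much traffic as the link can carry,
# the low class receives no service at all, however long we wait.
print(simulate(link_capacity=2, arrivals_high=2, arrivals_low=1, slots=100))
# -> {'high': 200, 'low': 0}
```

The point is structural: priority ordering says who goes first, but only explicit per-flow bandwidth reservation bounds how much a class can take.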

Scalability and Capacity

Since all services undergo session management, scalability is a concern. There are also uncertainties concerning the transaction management required for authentication of terminals and individuals to ensure security. The scalability of management-information searches in the location information databases used for mobility is also uncertain. These uncertainties are worrisome because control is centralized even though the services take a distributed form using IP. Since existing terminals and applications are integrated, a tera-bps- to peta-bps-class network is probably required in terms of capacity. However, we will be unable to build such capacity if these scalability uncertainties are not resolved. This is a major concern for future implementation technologies.

    Electric Power

Since the infrastructure is based on IP routers, router performance is directly related to QoS and network performance. If we consider peta-bps-class processing based on high-end IP routers, several hundred routers consuming kilowatts of power per node are required, resulting in megawatt-class power requirements.
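The megawatt estimate can be reproduced with back-of-the-envelope arithmetic. The per-router throughput and power figures below are illustrative assumptions, not measurements of any particular product:

```python
# Rough arithmetic behind the power concern: peta-bps aggregate throughput
# built from high-end IP routers implies megawatt-class consumption.
target_bps = 1e15          # assumed aggregate capacity target: 1 peta-bps
router_bps = 2.5e12        # assumed throughput of one high-end router node
router_power_w = 10e3      # assumed power draw per node: 10 kW

routers_needed = target_bps / router_bps
total_power_mw = routers_needed * router_power_w / 1e6

print(f"{routers_needed:.0f} routers, ~{total_power_mw:.0f} MW total")
# With these assumptions: 400 routers and ~4 MW, i.e. several hundred
# nodes and megawatt-class power, as the text notes.
```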

    Flexibility, Robustness, and Sustainability

The possibility of future growth may be inhibited by the ANI implementation and by non-technical limitations. On the other hand, robustness can be ensured, since security will be under the strict control of the operating businesses. Also, support for emergency calls is obligatory, and these calls will be processed with high priority. Since the goal is the replacement of existing services by services that guarantee a certain degree of flexibility, long-term sustainability (exceeding 50 or 100 years) is not a primary goal.

[Figure: the NGN consists of a service stratum and a transport stratum linked by IP. Users connect via the UNI, other networks via the NNI, and application providers via the ANI.]

Fig. 1.8. NGN Configuration


Table 1.8. Differences between the Next Generation Network (NGN) and the New Generation Network (NWGN).

Assumed implementation time
    NGN:  By 2010
    NWGN: 2015 or later

Creation method
    NGN:  Add QoS and authentication to existing IP
    NWGN: Create a new network without being committed to IP

Trunk line capacity
    NGN:  O-E-O conversion: less than peta-bps capacity
    NWGN: All-optical: greater than peta-bps capacity

Assumed terminals and applications
    NGN:  Integration and creation of advanced versions of existing terminals and applications, such as triple- or quadruple-play services
    NWGN: Unknown but highly diverse, ranging from devices acting in conjunction with massive information servers to tiny communication devices such as sensors

Power consumption
    NGN:  Several megawatts (transformer substation scale)
    NWGN: Power conservation by a factor of at least 1/100 through multi-wavelength optical switching

Security
    NGN:  Successive principle-violating additions such as firewalls, IPSec, and IP traceback
    NWGN: Control of spam and DoS attacks by address tracing and by end-to-end and inter-network security

Robustness
    NGN:  Supported by enhancement of management functions by businesses
    NWGN: Robustness is provided by the network itself

Routing control
    NGN:  Distributed/centralized control following IP; MPLS required for high-speed rerouting; long fault detection time
    NWGN: Introduction of completely distributed control; increased failure resistance and adaptability; inclusion of sensor nets and ad-hoc nets

Relationship between users and the network
    NGN:  Reliability is increased, although there are some constraints on openness stipulated by UNI, ANI, and NNI
    NWGN: Provides openness from a neutral standpoint; users can bring in new services

Quality assurance
    NGN:  Priority control for each class using IP
    NWGN: Quality assurance including bandwidth for each flow, using packet switching or paths as appropriate

Layer configuration
    NGN:  Thick layer structure
    NWGN: Layer degeneracy and cross-layer control centered on a thin common layer

Integration model
    NGN:  Vertical integration orientation
    NWGN: Vertical or horizontal integration possible

Basic principles
    NGN:  Set from a business standpoint while using IP
    NWGN: Set from a clean slate to match future requirements

Sustainable evolution
    NGN:  Has limitations due to IP
    NWGN: Has sustainable evolution capability that can adapt to a changing society

Access
    NGN:  Up to 1 Gbps for each user
    NWGN: Over 10 Gbps for each user

Wired-wireless convergence
    NGN:  IMS
    NWGN: Context aware

Mobility
    NGN:  (Under investigation)
    NWGN: ID/locator separation

Number of terminals
    NGN:  Up to 10 billion
    NWGN: Over 100 billion

    References

[1-1] David Clark, et al., "NewArch Project: Future-Generation Internet Architecture," http://www.isi.edu/newarch/, 2003.

[1-2] Larry Peterson, et al., "GENI: Global Environment for Network Innovations," http://www.geni.net/, 2006.

[1-3] Daniel Kofman, et al., "Euro-NGI," http://eurongi.enst.fr/, 2006.

[1-4] Mikhail Smirnov, et al., "Autonomic Communication," http://www.autonomic-communication.org, 2006.

[1-5] The Telecommunications Council, "Research and Development for Ubiquitous Network Society," UNS Strategic Programs, http://www.soumu.go.jp/s-news/2005/pdf/050729_7_2.pdf, July 29, 2005.

[1-6] Hirabaru et al., "Network Architecture Group," http://nag.nict.go.jp/, 2006.


Chapter 2. Current Problems and Future Requirements [Ohta, Hirabaru, Nakauchi, Aoyama, Morikawa, Inoue, Kubota]

2.1. Internet Limitations

Loss of transparency on the Internet is often attributed to the widespread use of Network Address Translation (NAT), adopted because of an insufficient number of IP addresses. However, this is not the only problem. Many parts of the Internet are already breaking down. When a new protocol is introduced and an attempt is made to use it together with existing protocols, or with other newly introduced protocols whose interactions are unknown, the protocols may gradually become incompatible with those that previously worked together efficiently. To prevent these Internet limitations, the relationships between protocols must be reassessed and the protocols redesigned without regard to past usage.

This is not only occurring in the lower layers, but is also seen in the upper layers. For example, the Session Initiation Protocol (SIP), which is supposed to be used to match media formats between end users in NGN, will also be used for reserving resources between network providers. However, if a lower layer is designed appropriately, either an upper layer can easily be redesigned or, in many cases, the upper-layer protocol becomes unnecessary. For example, SIP need not be used between business users if resources can be reserved appropriately in a lower layer (the transport layer). Therefore, this section focuses entirely on lower-layer limitations.

    2.1.1. Multicast Routing Limitations

The limitation of multicast routing is an obvious example of Internet protocol limitations. The original concept (grand design) of multicast routing followed unicast routing and permitted various types of routing methods within a domain. The routes of multiple multicast groups were to be aggregated to curb the growth of routing tables, and an attempt was made to integrate these control methods in a common inter-domain multicast routing protocol. The various routing protocols available within a domain (DVMRP, MOSPF, CBT, PIM-DM, PIM-SM, etc.) fail when the domain grows larger or the number of groups increases, because the number of route advertisements increases dramatically.

However, it is generally impossible to aggregate the routes of multiple multicast groups. For unicast routing, when only a region having a certain address range is used, routing table entries are conserved at locations distant from that region by using the same route for all addresses in that range. For multicast routing, however, the transmission destination is not a host but a set of hosts spanning the entire Internet. Therefore, separate routing table entries are required for groups with different members, even if their sets of destinations are similar or share many members. Moreover, similarity of multicast destinations and similarity of multicast addresses are generally unrelated, just as the viewers receiving adjacent TV channels are generally not alike.
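The asymmetry described above can be sketched concretely. The group addresses and member routers below are hypothetical, chosen only to illustrate the contrast:

```python
# Unicast routes aggregate because nearby addresses share a next hop;
# multicast forwarding state is keyed by each group's exact member set,
# so even near-identical groups cannot share an entry.
import ipaddress

# Unicast: 256 host routes collapse into a single prefix entry.
unicast = [ipaddress.ip_network(f"192.0.2.{i}/32") for i in range(256)]
aggregated = list(ipaddress.collapse_addresses(unicast))
print(aggregated)   # one entry, 192.0.2.0/24, covers all 256 addresses

# Multicast: two adjacent group addresses whose member sets differ by one.
groups = {
    "224.1.1.1": frozenset({"R1", "R2", "R3"}),
    "224.1.1.2": frozenset({"R1", "R2", "R3", "R4"}),
}
# Distinct member sets mean distinct forwarding entries, despite the
# adjacent addresses and the three shared members: nothing aggregates.
print(len(set(groups.values())))   # 2
```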

In other words, the grand design of multicast routing has been flawed from the start, and the inter-domain multicast routing protocol BGMP, which was proposed to aggregate routes, has not accomplished its goal.


Currently, only PIM-SM, for which the number of advertisements does not increase even when the domain gets larger, is used. However, it is limited to use within each domain, statically configured so that the number of advertisements does not increase dramatically even when the number of groups grows.

Although the resource reservation protocol RSVP was designed to support all of the multicast protocols that have now failed, it contains the same problems as those protocols.

Another limitation of multicast routing has been the introduction of IGMP. IGMP was introduced so that end terminals could be supported by IGMP alone, without regard to the multicast routing method. However, this not only made the multicast routing methods (which only a router understands) unnecessarily complicated, but also moved functions that the terminal should have into the network, an overt violation of the end-to-end principle. In fact, IGMP has been functionally extended twice because of the introduction of new multicast routing methods, although it was supposed to be unrelated to individual multicast routing methods. IGMP has obviously failed.

    2.1.2. ATM Limitations

At one time, even part of the Internet community expected ATM to be the foundation of future telecommunications. However, guaranteeing QoS on ATM was just as complex as on a packet-switched network such as the Internet, and ATM failed in a manner similar to RSVP. Since its average packet (cell) length was about 1/10 that of the Internet and the speed it achieved was also only approximately 1/10 that of the Internet, it is hardly used anymore. However, the earlier attempts to make ATM coexist with the Internet placed strains on Internet protocols, among them broadcast avoidance and excessive expectations for multicasting. When an ATM network is selected as the data link layer and the Internet runs on top of it, the assumption of the Internet's basic model (the CATENET model) that a data link consists of a small number of devices no longer holds. An IP broadcast to the data link (that is, to the whole ATM network) is therefore certainly not realistic. However, point-to-multipoint (P-to-MP) communication using ATM is realistic, and since this was misunderstood as being equivalent to IP multicasting, multicasting was mistakenly considered realistic as well. This resulted in excessive expectations for multicasting. Instead, a method should have been used that did not break down the CATENET model, with broadcasting simulated by virtually building small data links on the ATM network. If multicasting is used excessively, IGMP traffic is generated uselessly; and if the IGMP query interval is extended to reduce that traffic, response to changes in group membership is delayed. Therefore, it is effective to use broadcasting only in an environment where most terminals use only IP, as is done today.

    2.1.3. Inter-Domain Routing Limitations

BGP, which is an inter-domain routing protocol, selects an alternate path when a failure occurs. Although most users are not aware of it, recovery takes a long time (often minutes), which is clearly a limitation where mission-critical uses are concerned. Recovery takes so long because arbitrary policies are permitted at each AS and because there are too many ASs.

Another limitation lies in the large number of global routing table entries, currently exceeding 200,000. Since multihoming is currently performed using routing, a multihomed site requires its own independent entries in the BGP global routing tables, and most advertised global routing entries are used for multihoming. As long as multihoming depends on routing, the number of global routing table entries is expected to continue to increase quickly.
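Why routing-based multihoming defeats aggregation can be sketched as follows. All prefixes below are hypothetical (drawn from documentation address ranges):

```python
# A customer prefix reachable through two providers must appear in the
# global table as its own entry, because neither provider's aggregate
# covers the path through the other provider.
import ipaddress

provider_a = ipaddress.ip_network("198.51.0.0/16")
customer = ipaddress.ip_network("198.51.100.0/24")   # multihomed to A and B

# Single-homed inside provider A: the /24 is subsumed by A's aggregate,
# so no extra global entry is needed.
print(customer.subnet_of(provider_a))                # True

# Multihomed: provider B cannot announce A's /16, so the /24 itself must
# be advertised globally -- one extra table entry per multihomed site.
global_table = {provider_a, ipaddress.ip_network("203.0.113.0/24")}
global_table.add(customer)                           # the multihoming cost
print(len(global_table))                             # 3 entries
```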

Attempting to perform inter-domain routing using BGP leads to another limitation. For example, a method in which MPLS path assignment depends on BGP advertisements causes a significant increase in BGP advertisement information.

    2.1.4. Network Layer-Specific Time Interval Limitations

One common factor shared by many Internet limitations is the introduction of time into the network layer. Because the network layer is constructed using connectionless IP, which has no concept of a timeout, introducing the concept of time there is inefficient and violates the original Internet design principles. The data link layer and transport layer, on the other hand, have had the concept of time from the start: they often rely on timeouts to resend packets in response to packet loss and to detect failures from the lack of a response from the destination. The property of having no notion of time is well maintained by the IP protocol itself. Although the TTL in IPv4 originally indicated a number of seconds, it was actually used as a hop count and so lost its meaning as a time, and the concept of time was officially eliminated from the TTL in IPv6.

However, because of inappropriate protocol design or operation in the layers above or below the network layer, time has been introduced throughout the network layer, causing limitations. The most striking example is NAT. To forcibly share addresses among terminals, a NAT reclaims addresses that have been unused for a long time according to a network-layer timeout, without regard to any transport-layer timeout. As a result, a transport connection that is still active may end up being disconnected.
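A minimal sketch of this failure mode follows; the timeout value and flow identifier are hypothetical:

```python
# A NAT expires mappings on a network-layer idle timer, with no knowledge
# of whether the transport connection behind a mapping is still open.
class NatTable:
    def __init__(self, idle_timeout):
        self.idle_timeout = idle_timeout
        self.mappings = {}          # flow -> time a packet was last seen

    def packet(self, flow, now):
        self.mappings[flow] = now   # any traffic refreshes the mapping

    def expire(self, now):
        dead = [f for f, t in self.mappings.items()
                if now - t > self.idle_timeout]
        for f in dead:
            del self.mappings[f]    # network-layer state is discarded
        return dead

nat = NatTable(idle_timeout=300)    # e.g. a 5-minute idle timer
nat.packet("10.0.0.5:4321 -> 203.0.113.7:80 (tcp)", now=0)

print(nat.expire(now=299))   # [] -- the mapping survives while recently used
# The TCP connection is merely quiet, not closed; at t=301 the NAT
# nevertheless discards the mapping, and the still-open connection breaks.
print(nat.expire(now=301))
```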

Another example is routing protocol timeouts. To verify network-layer connectivity, a routing protocol generally also monitors the data link layer. Since the appropriate timeout differs according to the data link, the routing protocol generally should not use fixed timing. On a slow data link, monitoring packets cannot be transmitted very frequently, so detailed monitoring is impossible, while on a high-speed data link, monitoring packets should be exchanged frequently to increase monitoring precision. Also, on a long-distance data link, the wait for a response must be longer than the RTT. However, since the current mainstream routing protocols were designed for a time when data links were slow, either a timeout cannot be set according to the data link or, if it can, it can often only be set in units of seconds. Therefore, connectivity cannot be verified frequently.
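The scaling rule argued for above can be sketched as a small function. This particular formula is an illustrative assumption, not part of any standardized protocol:

```python
# A monitoring (hello) interval that adapts to the data link, instead of
# being fixed in units of seconds. Two constraints from the text: never
# wait less than the RTT for a reply, and keep probe traffic a small
# fraction of link capacity.
def hello_interval(rtt_s, link_bps, probe_bits=800, overhead_fraction=1e-3):
    """Return the spacing between monitoring packets, in seconds."""
    budget_bps = link_bps * overhead_fraction   # bandwidth we may spend on probes
    min_spacing = probe_bits / budget_bps       # spacing that respects the budget
    return max(rtt_s, min_spacing)

# Slow long-distance link: probes must be sparse (an interval in seconds).
print(round(hello_interval(rtt_s=0.2, link_bps=64e3), 3))     # 12.5
# Fast local link: millisecond-scale monitoring becomes affordable,
# bounded here by the RTT rather than by bandwidth.
print(round(hello_interval(rtt_s=0.001, link_bps=10e9), 6))   # 0.001
```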

Similarly, routing advertisements can also only be spaced at intervals of seconds or longer. This causes failure recovery by changing routes to be unnecessarily delayed.

    2.1.5. IPSec Limitations

IPSec was an attempt to provide a common security method for various types of protocols. However, since the functions required of a security method vary with the application, and the applications were ignored when those functions were standardized, IPSec contained inconsistencies from the beginning. Security cannot be standardized by concentrating on a specific layer; it must be implemented in an appropriate layer according to application requirements.

IPSec also contains public-key encryption limitations. Generally, to implement security, it must be theoretically impossible for secret information that must be shared between specific parties to be obtained by an unknown third party. However, in attempting to resolve this problem with public-key encryption, an additional third party called a certification authority (CA) was introduced without taking the reliability of the CA into consideration (the CA can be trusted only to about the same degree as the ISP, and if the ISP could be trusted from the start, IPSec would be unnecessary). This is inconsistent.

The IPSec protocol that was actually defined as a compromise is unsuitable for most applications and is of little use.

2.1.6. IPv4 Limitations

Since the IPv4 address length is only 32 bits, the number of Internet devices was quickly recognized to be limited to approximately 4 billion. In this sense, this was clearly a failing of IPv4. Various means of creating address hierarchies were therefore designed to use the limited number of addresses more efficiently. Since addresses were being held back at the same time, the address-conservation technique NAT prevailed, and the end-to-end transparency of the Internet was considerably compromised. The drastic solution to this problem was to extend the address length, which was implemented as IPv6. However, since IPv6 and its related protocol group have directly inherited many limitations of the current Internet, the introduction of IPv6 alone will not prevent the eventual collapse of the Internet.
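The arithmetic behind the 4 billion figure is simply the size of a 32-bit space, which falls far short of the terminal counts anticipated in Table 1.8:

```python
# A 32-bit address caps the number of distinct addresses at 2^32,
# roughly 4.3 billion -- well below the 100+ billion terminals the
# new generation network anticipates.
ipv4_addresses = 2 ** 32
print(ipv4_addresses)        # 4294967296

# IPv6's 128-bit addresses (about 3.4e38) remove this particular cap,
# though, as the text argues, not the other inherited limitations.
print(2 ** 128 > 10 ** 38)   # True
```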

    2.1.7. IPv6 and ND Limitations

IPv6 increased the number of address bits as a successor to the IPv4 protocol. It directly followed most of the other conventions of IPv4 and also added several "improvements" such as neighbor discovery (ND). As a result, it is a protocol group that carries on the limitations of the existing Internet protocol group. Many of those limitations are plainly manifested in ND, the "standard" protocol for linking the IP layer with the lower layers.

One of the advantages of IP is that, being simple, it can run on a great variety of lower layers. As a result, a means of implementing IP must be devised according to the special characteristics of each lower layer. Although ND was designed as a universal protocol for implementing IP on all lower layers, only Ethernet, PPP, and ATM were actually assumed as the data link layer, and only conventional methods of using IP on them were assumed. As a result, if ND is used to run IPv6 on various types of data links, new kinds of limitations will occur.

As the result of a mistaken investigation that took ATM into consideration, IPv6 does not have link broadcasting but provides only multicasting. Multicasting causes IPv6 to directly inherit IPv4's IGMP protocol along with its limitations. Consequently, ND frequently uses multicasting, since it cannot use broadcasting. Moreover, since IPv6 uses not only IGMP but also an ND-specific timeout, it is forced to use a timeout value denominated in seconds, which ignores the special characteristics of the data link. The upper and lower limits of the ND timeout value, which were determined without any particular justification, make high-speed handover impossible. This is the most recently recognized limitation of IPv6. Although the specifications were changed for just this part, the change was merely an improvement of minor details. For example, in a wireless LAN, since multicast and broadcast packets are not resent when a collision occurs, they are not as reliable as Ethernet broadcasting or wireless-LAN unicasting. Although congestion causes processing performance to drop significantly, this problem has not been solved.

With ND, an attempt was made to have unicast routing (not just multicast routing) distinguish between simple terminals and routers, so that a simple terminal would not need to understand the routing protocol. However, reducing terminal functions and relying on routers is a violation of the end-to-end principle.

IPv6 differs from IPv4 in that the minimum Maximum Transmission Unit (MTU) has been significantly increased. For many upper-layer technologies, it is sufficient if the standard MTU can be used. The value actually required by the upper layers is the Path MTU (PMTU), the minimum MTU along a path spanning multiple hops, and PMTU discovery is an IPv6 option. However, the PMTU varies as the route varies, so monitoring at a suitable interval is required. PMTU discovery was implemented in the network layer, and timeouts and the concept of time were thereby introduced there as well. Currently, PMTU discovery cannot actually be used.
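The PMTU notion used above can be stated in one line: the narrowest link on the path governs. The link MTU values below are hypothetical examples:

```python
# The Path MTU of a route is the minimum of the per-link MTUs along it,
# so it changes whenever the route changes and must be re-probed.
def path_mtu(link_mtus):
    return min(link_mtus)

route_now = [9000, 4352, 1500, 9000]    # e.g. jumbo Ethernet, FDDI, Ethernet
route_after_reroute = [9000, 9000, 9000]  # an all-jumbo alternate path

print(path_mtu(route_now))              # 1500: the narrowest link governs
# After a reroute the PMTU can grow as well as shrink, which is why a
# one-shot discovery with no periodic re-probing goes stale.
print(path_mtu(route_after_reroute))    # 9000
```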

The increase in the number of global routing table entries for inter-domain routing had been recognized at the initial stage of IPv6 development as a problem no less important than the pressure on the address space, and address structures and address assignment methods that would suppress the number of global routing table entries were proposed. However, since the multihoming problem has not been solved, multihoming requests from ISPs cannot currently be resisted, and the unlimited increase in the number of global routing table entries is not likely to stop. Although there have also been experiments attempting to make IPv6 deal with the multihoming problem, many of them try to introduce even more timeouts into the network layer, in a manner similar to NAT, which only worsens the situation.

IPSec has also been integrated as a standard feature in IPv6. However, no attempt has been made to resolve the key sharing problem, and security is not particularly increased by IPSec.

    2.1.8. Avoiding New Generation Packet Network Limitations

IPv6, which was introduced to overcome the IPv4 limitation of exhausted address resources, not only is powerless against the other causes of limitations but, as described above, even accelerates them. Looking at sensor networks, for example, packet switching will also be required in a new generation network (whether or not to call it the "Internet" is a matter of personal choice). However, to avoid the limitations of packet switching technology in the new generation network, its surrounding technologies should be reconsidered even more radically than IP itself.


    2.2. Future Frontier

    2.2.1. Long Tail Applications

The Long Tail theory is an economic theory stating that high sales and profits can be obtained by small-lot production of a wide range of niche products, without relying on large-volume sales of hit products, since an enormous number of products can be handled at low cost through online sales using the Internet. The name represents the following situation: when a graph is drawn with sales volume on the vertical axis and products arranged in decreasing order of sales volume along the horizontal axis, the part indicating products with small sales volumes, which stretches out over a long distance, has the appearance of a long tail.

The long tail theory can be applied to research and development of information networks as follows [2-1]. Consider a graph with the number of users on the vertical axis and link speed increasing to the right along the horizontal axis. Home users, the greatest number of users, use ADSL or FTTH links ranging in speed from several Mbps to 100 Mbps. To the right of home users are corporate users using LANs ranging in speed from 100 Mbps to 10 Gbps; there are significantly fewer of these users than general home users. Still further to the right are scientific and technical research and development groups, an extremely small number of users who require speeds of 10 Gbps to 1 Tbps. Although such speeds cannot currently be used in the computing environments of these groups, link speeds of this kind will potentially be required. The graph therefore has a declining hyperbolic shape.
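The hyperbolic shape can be sketched numerically. The inverse-proportional form and the total-user constant below are illustrative assumptions, chosen only to reproduce the shape of the curve, not measured data:

```python
# User count falling roughly as the inverse of required link speed:
# a long tail of ever-fewer users needing ever-faster links.
def users_at(speed_mbps, total=1e8):
    return total / speed_mbps       # assumed 1/x ("long tail") shape

for label, speed_mbps in [("home ADSL/FTTH", 100),
                          ("corporate LAN", 10_000),        # 10 Gbps
                          ("R&D (toward 1 Tbps)", 1_000_000)]:
    print(f"{label}: ~{users_at(speed_mbps):,.0f} users")
```

With these assumptions the head of the curve holds about a million home users while the research tail holds about a hundred, mirroring the text's point that innovation historically starts in the sparse tail.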

[Figure: "Two kinds of Long Tail" -- number of customers plotted against data speed, showing a long tail for business and a long tail for R&D, with long tail applications marked at points A, B, and C.]

    Short-term development performed in corporations cannot help but emphasizetechnologies that are adaptable to the area of the graph to the left in which there are manyusers. However, if we look at the history of ICT research and development since thedawn of the Internet, it is apparent that innovative technologies were created fromresearch targeting the long tail part of the graph where there was an extremely smallnumber of users and that those technologies gradually expanded into the areas to the leftuntil they finally spread to the part with an enormous number of ordinary users. TheInternet, World Wide Web, search techniques, and other technologies all started fromresearch intended for the long tail part, which targeted an extremely small number ofresearchers. Of course, it takes a long time for these technologies to spread from the

    extremely small number of special users to the enormous number of general users.However, the corporation that accomplishes this before any others will dominate as the


    current ICT champion. Even when designing the new generation network architecture, it is important to emphasize variety and the ease of introducing new services from the viewpoint described above.

    2.2.2. Scale Free

    From ultra-high definition video applications to Web 2.0 or sensor networks, which are described later, bandwidth and usage frequency are extremely varied and wide ranging, and there exist no characteristic typical values of the system. In the future, high-resolution video streaming will be performed for each household in an evolved form of IPTV, distribution systems for uncompressed ultra-high resolution digital cinema of at least 4K will become commonplace, and the upper limits of network capacity will continue to be increased. Therefore, communication methods based on circuit switching must also be investigated rather than creating a network using only packet switching as in the NGN.

    [Figure: Contents in the ubiquitous society - from tiny to huge, scale free. Capacity of content [bit] (K, M, G, T, P) versus access frequency [page/day] (K, M, G). Examples: Web content ~10 kB/page (Yahoo, 300 M page/day), Internet TV (11 M page/day), MP3 music > MB, SDTV/DVD > GB, digital cinema > 100 GB; domains include e-commerce (B2C, B2B), IPTV, HDTV, Cine-grid, sensor & RFID (S2M), and P2P (both directions)]

    2.2.3. Sensor Network

    Sensor networks that connect sensor nodes consisting of sensors equipped with signal processing functions, wireless communication functions, and a power source will be used to measure and analyze worldwide society en masse. For example, by deploying sensor nodes worldwide to monitor temperature and soil pollution, sensor networks are useful for environmental preservation. Also, arranging sensor nodes throughout cultivated land to monitor weather conditions enables the provision of a safe supply of food. Equipping automobiles with sensors for measuring pollutants, temperature, and speed can be useful for environmental preservation, performance improvement, or analysis of the causes of accidents.


    Dramatic Increase in Nodes

    Connecting sensors to networks will cause the number of nodes that are connected to the networks to increase dramatically. Several application examples are given below.

    Applications are being considered for dealing with aging populations and eliminating problems of insufficient medical resources by converting from regular monitoring of the conditions of patients following medical treatment to preventive medicine. Therefore, models have been designed in which sensors are installed for monitoring health conditions on an individual basis and detection data is sent to the network. Since the world population is predicted to be 7.5 billion by 2025, the number of sensor nodes will probably range from several billion to 10 billion.

    In a model for using a sensor network to monitor all cultivated land on earth in order to eliminate food shortages and provide a safe supply of food, if sensors are distributed over the 1.4 billion hectares of cultivated land so that there is one sensor per hectare, there will be 1.4 billion nodes.

    Automobiles are equipped with many sensors. By connecting these to a network and using the information obtained from them, various applications can be considered to improve automobile performance, determine whether accidents or breakdowns occur, and measure environmental conditions. The number of automobiles owned worldwide in 2003 was estimated to be 840 million, and it is expected to reach several billion by 2020, mainly due to the increase in ownership in developing countries.

    Sensor networks for environmental measurement can be considered to help preserve the Earth's environment by monitoring its deterioration. For example, assume that the urban areas throughout the world are covered by sensor networks. The total area of the land surface of the Earth is 149 million sq. km, and 10% of that, or 15 million sq. km, comprises urban areas. If 10 sensor nodes were deployed per sq. km, there would be 150 million nodes.

    When considering the increase in the number of nodes, besides the sensor networks described above, we must also take into consideration the increase in existing nodes for mobile devices, home networks, and appliances.
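    The node-count estimates above follow from simple arithmetic on the stated assumptions. As a sketch (all input figures are the document's own assumptions, not measurements):

```python
# Back-of-the-envelope node counts for the sensor scenarios above.
# All inputs are assumptions stated in the text, not measured data.

def urban_sensor_nodes(urban_area_km2=15e6, nodes_per_km2=10):
    """Environmental sensors covering the world's urban areas (10% of land surface)."""
    return urban_area_km2 * nodes_per_km2

def farmland_sensor_nodes(hectares=1.4e9, nodes_per_hectare=1):
    """One sensor per hectare of cultivated land."""
    return hectares * nodes_per_hectare

print(f"urban environmental nodes: {urban_sensor_nodes():.3g}")   # 1.5e+08 (150 million)
print(f"farmland nodes:            {farmland_sensor_nodes():.3g}")  # 1.4e+09 (1.4 billion)
```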

    2.2.4. Web 2.0

    Web 2.0 is a term coined by Tim O'Reilly in an essay entitled "What Is Web 2.0," published in September 2005, which refers to various phenomena that began appearing on the Internet in 2004 [2-2]. O'Reilly identified the following seven items as components of the changes in the Internet.

    (1) The web as platform

    (2) Harnessing collective intelligence

    (3) Data is the next Intel inside

    (4) End of the software release cycle

    (5) Lightweight programming models

    (6) Software above the level of a single device

    (7) Rich user experiences


    that enables the service to be provided only to a specific group that depends on metadata such as the terminal owner or physical location. Private networks must be able to be freely constructed based on such information as the terminal owner, terminal location, or billing information.

    2.2.6. Frameworks for Collecting Users' Personal Information

    Typical examples of frameworks for collecting users' personal information include Amazon and TiVo. Amazon accumulates users' purchase history information to provide a recommendation service that uses collaborative filtering. TiVo is a system that learns a user's preferences and automatically records TV shows that the user likes. By accumulating users' behavior patterns, these frameworks implement services suited to the users.

    Most services up to now have targeted content that exists on the Internet. A typical example of this is a search service. However, from now on, accumulated personal information about individual users will itself become the targeted content. In other words, context-aware technologies will be required. Context is a word that includes a variety of meanings, such as user context (user profile, location, behavior), physical context (brightness, noise, traffic conditions, temperature), computing context (network connectivity, communication cost, neighboring devices), and temporal context. As these kinds of context information circulate within the Internet, we can expect new services to be created.

    The most important information among the diverse context information is position information. This is because the real world in which we live is often modeled based on position. For example, if temperature is to be measured by a sensor network, information indicating where the temperature is measured will be required, and if information indicating whether a person is walking or standing still also contains the location where that person is standing still, the service is more likely to provide finer details. By using information indicating whether a user is in a "movie theater" or riding on "public transportation," or information such as the number of people in a room or the number of conversations being conducted, natural communications can also be implemented without changing the device that can be used accordingly or increasing the burden on the user.

    In addition to directly monitoring the real-world context such as the temperature, degree of soil pollution, or engine revolutions, sensors also share the data they obtain with other sensors. They also initiate the execution of physical actions through actuators. New services will also be possible, such as distributing appropriate advertisements to users according to the user's circumstances, which are estimated from information obtained from acceleration sensors built into mobile devices.

    User behavior modeling information can be applied to nursing support, office design, and facility systems. A nursing support system can use behavior modeling information to detect any unusual behavior and can help significantly reduce labor for the nursing care of elderly patients who have cognitive disabilities. Also, by accumulating a history of contacts with people or flow paths within an office, the personal networks of individuals in the office or the flows of information that existed can be quantified. This enables a next-generation office design to be implemented, which can improve business processes or increase intellectual productivity. Moreover, by linking the living environment of the


    entire floor with a sensor network, the environment surrounding inhabitants can be optimized, and the energy consumption of the entire floor can be reduced.

    To implement these kinds of context-aware services, basic technologies such as a context acquisition mechanism, context representation model, distributed context database mechanism, context filtering mechanism (privacy, security, policy), and context estimation mechanism must be developed. In particular, a context estimation technology that estimates high-level context information based on physical information obtained from sensors is extremely important for developing real world-oriented applications. However, although the word "context" is used without qualification, there certainly will exist various levels of context granularity required by applications. Consider position information as an example. Even if there are applications that require coordinate information, there will also be applications that require more abstract information, such as "movie theater." Context information platforms that can appropriately provide the various types of granularity required by applications cannot be developed in a short time. Development must proceed while gaining experience in constructing and operating prototype systems.
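    As a minimal sketch of the granularity issue discussed above, a single context platform might serve both raw coordinates and abstract place labels derived from the same position data. The registered place names, coordinates, and radii below are illustrative assumptions, not part of any proposed design:

```python
# Illustrative sketch: serving position context at two levels of granularity.
# The registered places and radii are made-up toy values.
import math

PLACES = [  # (label, latitude, longitude, matching radius in degrees)
    ("movie theater", 35.6595, 139.7005, 0.001),
    ("office",        35.6812, 139.7671, 0.001),
]

def coordinate_context(lat, lon):
    """Fine-grained context: just the raw coordinates."""
    return {"lat": lat, "lon": lon}

def place_context(lat, lon):
    """Coarse-grained context: an abstract place label, if the position matches."""
    for label, plat, plon, radius in PLACES:
        if math.hypot(lat - plat, lon - plon) <= radius:
            return label
    return "unknown"

print(place_context(35.6595, 139.7005))  # "movie theater"
print(place_context(0.0, 0.0))           # "unknown"
```

    An application wanting coordinates calls one interface; an application wanting "movie theater" calls the other, with the abstraction done inside the platform.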

    2.3. Traffic Requirements 10 Years into the Future

    To estimate the performance of switching nodes and transmission equipment in the new generation network, we observed the increasing trend of traffic at a typical Internet exchange point (IX) in Japan. This trend matches Moore's Law (the level of integration of semiconductors doubles every 18 months). Gilder's Law [2-3], which is related to bandwidth, can be expressed in terms of the doubling of Moore's Law, and although the factor (the period in which doubling occurs) differs, a similar trend is known to occur [2-4]. If we estimate aggressively here and assume that the traffic doubles every year, then after 10 years it will grow by 2^10, which is approximately 1000 times.

    We can estimate that this exchange point, which must already accommodate traffic on the order of 100 Gbps, will have to handle 100 Tbps in 10 years. If we assume that access also shows the same trend, then access speed at homes will be 10 Gbps. The exchange capacity of current high-end routers is at the terabit-per-second level, and we can estimate that petabit-per-second routers will be required in 2015. The data link layer speed in that case will reach 10 Tbps.
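    The extrapolation above can be reproduced directly. A short sketch (the 2005 starting values and the one-year doubling period are the document's assumptions):

```python
# Extrapolating the figures above under the "traffic doubles every year" assumption.
# Starting values are the 2005 numbers from the text; the doubling period is the
# document's aggressive estimate, not a measured growth rate.

def project(value, years, doubling_period_years=1.0):
    """Scale a value forward assuming one doubling per doubling period."""
    return value * 2 ** (years / doubling_period_years)

print(2 ** 10)                     # 1024 -- ten annual doublings, roughly 1000x
print(project(100e9, 10) / 1e12)   # 102.4 -- IX traffic: 100 Gbps to ~100 Tbps
print(project(10e6, 10) / 1e9)     # 10.24 -- home access: 10 Mbps to ~10 Gbps
```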

    [Figure: Year 2015 traffic forecast at the backbone, assuming doubling per year for 10 years from 2005, extrapolated from the increasing trend at JPIX, 1999-2006 (http://www.jpix.ad.jp/jp/techncal/traffic.html), alongside Moore's Law.
    IX traffic volume: 100 Gbps -> 100 Tbps
    Home access speed: 10 Mbps -> 10 Gbps
    Backbone node capacity: Tbps -> Pbps
    Backbone link: 10 Gbps -> 10 Tbps]

    Fig. 2.3. Traffic Forecast for 10 Years into the Future


    architecture. A balance between network providers and network users is important, and a high degree of control by users as well as user-oriented diversity is also required. Therefore, the network must be open and must be able to support appropriate principles of competition. Standardization of interfaces or the technologies used by them is important. The World Wide Web was invented because networks were open, and networks should have a degree of openness that brings out users' creative originality and enables networks to fully prosper. Mechanisms that enable users to provide services and control networks are required. In this case, there will be no distinction between users and service providers. Functions should be provided to enable users to easily bring services to the network.

    Design Requirement 4: Robustness

    To be able to rely on networks as part of our societal infrastructure, we must be able to use them for medical care, traffic light control and other vehicle services, or bulletins during emergencies. We must be able to entrust important services to networks just like we entrust our lives and well-being to doctors. The existing telephone network provides us with a benchmark of 99.99% availability. Networks must provide an even higher availability.

    Design Requirement 5: Safety

    Network privacy is not just the hiding of information, but the ability of the entity that owns information to control that information. On the other hand, the tracking of food or other commodities means that the recipient traces back along the information path of that commodity. Safety that enables the flow of information to be controlled or information to be traced in the reverse direction is an important network function. To enable safety to be used with monetary and credit services, certification of individuals is required, as well as mutual certification, which also enables the individual to certify the communication destination, such as a bank. The architecture must be able to certify all wired and wireless connections. It also should be designed so that it can exhibit safety and robustness according to its conditions during a disaster.

    Design Requirement 6: Diversity

    Current network design practices have pursued volume or efficiency objectives and have mainly targeted large numbers of users. In the future, an information network-oriented society that also serves smaller numbers of users should be constructed. The diversity of society will also be carried forward onto the network. From a technical standpoint as well, there has been a move from a usage scenario like telephony, for which traffic can be predicted, to computer-centric traffic, which cannot be predicted, and the diversity of small sensors and connected devices will also increase. A network must be able to be designed and evaluated based on diverse communication requirements without assuming specific applications or usage trends.


    Design Requirement 7: Ubiquity

    To implement sustainable development worldwide, a recycling-oriented society must be built. To accomplish this, a network for comprehensively monitoring the global environment from various viewpoints is indispensable. However, monitoring the natural environment alone is not enough. Human activities also must be monitored. But privacy must be taken into consideration where human monitoring is concerned. When designing a network, there is a tradeoff between transparency and privacy protection, and a means must be provided for controlling the balance between them.

    Design Requirement 8: Integration and Simplification

    The time when networks were constructed for individual applications is fading away. Information networks are shared by all applications. In addition, not only broadcasting stations but also individuals are sending transmissions to widely scattered recipients, and a large number of data sources, including devices such as sensors, are pouring information into the network. Network design must be simplified by integrating selected common parts, not by simply packing together an assortment of various functions. Simplification increases reliability and facilitates subsequent extensions.

    Design Requirement 9: Network Model

    To enable the information network to continue to be a foundation of society, it should be developed in a sustainable manner. To accomplish this, appropriate economic incentives must be offered to service providers and businesses in the communications industry. In addition, the network architecture must have a design that includes a business-cost model.

    Design Requirement 10: Electric Power Conservation

    As network performance increases, its power consumption continues to grow, and in 2004, network power consumption reached approximately 5.5% of total power consumption [2-5]. In addition, the traffic volume is expected to increase, and if we assume that traffic volume increases at an annual rate of 40% and that there is no change in electronic technology, then by 2020, network power consumption is estimated to reach 48.7% of total power consumption [2-6]. In particular, as things stand now, a router at a traffic exchange point will require the electrical power of a small-scale power plant. The information network-oriented society of the future must be more Earth friendly.

    Design Requirement 11: Extendibility

    The network must be sustainable. In other words, it must have enough flexibility to enable the network to be extended as society develops. A network that cannot self-reform will end up being repeatedly scrapped and rebuilt. The network will support universal communication that will overcome the obstacles of language, culture, distance, or physical ability and contribute to the creation of human "wisdom." Since it cannot easily


    be replaced once it is embedded in society, the network architecture must be able to be developed in a sustainable manner for 50 or 100 years.

    References

    [2-1] Tomonori Aoyama, Digital Musings (e-Zuiso) "Two Long Tails," Denkei Shimbun, August 14, 2006 (in Japanese).

    [2-2] Tim O'Reilly, "What Is Web 2.0," http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html, September 2005.

    [2-3] C. A. Eldering, M. L. Sylla, and J. A. Eisenach, "Is There a Moore's Law for Bandwidth?," IEEE Communications Magazine, Vol. 37, No. 10, pp. 117-121, Oct. 1999.

    [2-4] G. Gilder, Telecosm: How Infinite Bandwidth Will Revolutionize Our World, The Free Press, NY, 2000.

    [2-5] Survey Data Concerning Electric Power Conservation Techniques in Networks, Mitsubishi Research Institute, Inc., February 20, 2004 (in Japanese).

    [2-6] http://innovation.nikkeibp.co.jp/etb/20060417-00.html, Nikkei BP, Emerging Technology Business, April 17, 2006 article (in Japanese).


    Chapter 3. Future Enabling Technologies [Morioka, Otsuki, Harai, Inoue, Morikawa]

    This chapter describes optical and wireless enabling technologies that are expected to be used in the new-generation network. It also describes quantum and time-synchronization technologies that must be taken into consideration as part of the basic technologies for future networks.

    3.1. Optical Transmission

    3.1.1. Serial Transmission

    Serial transmission technologies include electrical time division multiplexing (ETDM) using digital electrical multiplexing and optical time division multiplexing (OTDM) using optical delay multiplexing. ETDM is a commercially deployed technology that is currently being installed to commercialize a 40 Gbit/s system. In addition, to increase the transmission rate and to use bandwidth more efficiently, research and development of multi-level modulation/demodulation technologies has been accelerating recently. As a result, transmission experiments with transmission rates exceeding 100 Gbit/s and total capacity exceeding 10 Tbit/s by using carrier-suppressed return-to-zero differential quadrature phase shift keying (CSRZ-DQPSK) or return-to-zero quadrature phase shift keying (RZ-QPSK) have been reported [3-1], [3-2], [3-3], [3-4]. If we consider that modulation rates will approach 100 Gbaud in the future, multi-level modulation/demodulation techniques may be able to implement serial transmission rates of several hundred Gbit/s.

    Fig. 3.1.1. Multi-level Modulation/demodulation Schemes
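    The rates quoted in this section follow from simple symbol-rate arithmetic: bit rate = symbol rate x bits per symbol x number of multiplexed polarizations. A hedged sketch (the 50 Gbaud example is an illustrative round number, not taken from the cited experiments):

```python
# Line-rate arithmetic for multi-level modulation and polarization multiplexing.
import math

def line_rate_gbps(symbol_rate_gbaud, constellation_size, polarizations=1):
    """Bit rate = symbol rate x log2(constellation points) x polarization branches."""
    return symbol_rate_gbaud * math.log2(constellation_size) * polarizations

# QPSK/DQPSK carries 2 bits per symbol (a 4-point constellation).
print(line_rate_gbps(50, 4))      # 100.0 -- 100 Gbit/s at an assumed 50 Gbaud
print(line_rate_gbps(50, 4, 2))   # 200.0 -- doubled again by polarization multiplexing
print(line_rate_gbps(640, 2, 2))  # 1280.0 -- the 1.28 Tbit/s OTDM figure (on-off keying, 2 pol.)
```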

    On the other hand, 100 Gbit/s transmission experiments using OTDM were reported in 1993, and 1.28 Tbit/s per wavelength (640 Gbit/s x 2 polarization division multiplexing (PDM)) experiments were reported in 2000. Although the pulse width, which will be on the order of sub-picoseconds, will easily be affected by the dispersion of the transmission optical fibers, OTDM has the potential to be used on ultra-fast links exceeding several hundred Gbit/s over short and medium distances in the future.


    Fig. 3.1.2.2. 10,000 Generation Technology

    3.2. New Optical Fiber

    Research and development of new optical fibers that can control wavelength dispersion properties, nonlinear effects, and input power resistance properties is currently focusing on photonic crystal fiber (PCF) and photonic band gap fiber (PBF). If nonlinearity is increased, the fiber can be used as various types of nonlinear devices, and if wavelength dispersion properties and input power resistance properties are controlled, the threshold for fiber fusing, which was mentioned above, can be increased and the fiber may be able to be used for ultra-wideband transmission.

    Fig. 3.2. Cross Section of (a) Photonic Crystal Fiber, (b) Photonic Bandgap Fiber


    3.3. Wavelength and Waveband Conversion

    Wavelength conversion is useful for preventing wavelength collision in an optical path network in which wavelength is used as an identifier, and transponders (OEO conversion) are currently used for wavelength conversion. In the future, all-optical wavelength conversion or waveband (i.e., a group of wavelengths) conversion will probably also be necessary in dynamic networks in which the frame or modulation format and the transmission rate on the wavelength channel will vary. Currently, the three main types of all-optical wavelength converters are optical switching, parametric wavelength conversion, and supercontinuum (SPM-based) conversion. Among these, only the parametric wavelength conversion type optical wavelength converter maintains optical phase information, and since this can be applied to waveband conversion, more and more research on this type of converter is being conducted. Fig. 3.3.1 shows a typical example concerning waveband switching nodes.

    Fig. 3.3.1. Waveband Conversion at a Waveband Node

    Research has been conducted on a quasi-phase-matched lithium niobate (QPM-LN) waveguide as a material to be used for parametric wavelength conversion (Fig. 3.3.2) [3-8]. Since actual experimental results showing conversion gain with little degradation have also been reported recently, further progress is expected.


    conversion, which tends to be costly (power consumption, number of parts, monetary cost), is unnecessary.

    A representative application of an optical fiber delay line buffer is an optical packet switch buffer. The following figure shows a typical configuration for an optical fiber delay line buffer. The figure on the left below is an example of a 4-input, 1-output optical buffer consisting of an optical switch, optical fiber delay lines with different lengths, and an optical coupler. A B-type delay is assigned. By controlling the optical switch appropriately, a delay is obtained by directing information to the optical fiber with the appropriate length. Even if information arrives from different input lines simultaneously, a collision can be avoided by controlling the switch appropriately. Since there is no optical logic circuit, using electronic processing for control is more realistic. To create a larger buffer without a large-scale optical switch, multiple 1xN or NxN optical switches can be combined as shown in the figure on the right below. To make the system more compact in this case, it is necessary to create fiber wiring sheets or ribbons and an array of switches.

    [Figure: Optical fiber delay line buffer configurations. Inputs (0)-(7) enter 1x8 optical switches; an 8x8 switch directs packets to delay lines d0-d31, a 32:1 combiner produces the output (OUT), and a discard port drops overflow packets, all under the control of a buffer manager.]
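    The collision-avoidance control described above can be sketched as a scheduler that assigns each arriving packet the shortest delay line whose departure slot is still free, and discards the packet when no line is free. This is a simplified toy model with unit-length packets; the 32-line size matches the figure, but everything else is an assumption, not the document's control algorithm:

```python
# Toy model of an electronic controller for an optical fiber delay-line buffer.
# Each delay line d_i delays a packet by i time slots; a packet entering at
# slot t through line d_i reaches the combiner at slot t + i.  The controller
# picks the smallest delay whose output slot is unclaimed, else discards.

NUM_DELAY_LINES = 32  # d0 .. d31, as in the figure

def schedule(arrival_slots, num_lines=NUM_DELAY_LINES):
    """Return ({packet index: assigned delay}, [discarded packet indices])."""
    busy_output_slots = set()
    assigned, discarded = {}, []
    for idx, t in enumerate(arrival_slots):
        for delay in range(num_lines):
            if t + delay not in busy_output_slots:
                busy_output_slots.add(t + delay)
                assigned[idx] = delay
                break
        else:  # all candidate output slots taken: buffer overflow
            discarded.append(idx)
    return assigned, discarded

# Three packets arriving in the same slot leave the combiner one slot apart.
assigned, discarded = schedule([0, 0, 0])
print(assigned)   # {0: 0, 1: 1, 2: 2}
print(discarded)  # []
```

    With 33 simultaneous arrivals, the first 32 receive delays d0-d31 and the last is sent to the discard port, mirroring the overflow behavior of the switch in the figure.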