Bandwidth demands on service provider networks continue to grow exponentially, driven by packet-based multimedia services such as video streaming, videoconferencing, and online gaming. With cloud networking, content and resources shift in real time, creating complex and dynamic traffic patterns. To help overcome these challenges, IP and optical networks require better integration: operating the layers separately makes it difficult to increase service velocity, adapt to dynamic cloud topologies, enhance resiliency, and decrease total cost of ownership (TCO).
Simple service requests can take months to fulfill. That simply is not good enough for the next-generation Internet of mobile, video, and cloud services. This document examines a new multi-layer control plane architectural approach that increases agility and programmability for IP and optical networks. By employing this approach, service providers can reduce network capital expenditures (CapEx) and operational expenses (OpEx) while meeting or improving service-level agreements (SLAs) for mobile, video, and cloud services.
Modern service providers face a host of challenges, many stemming from the complex, dynamic traffic patterns of mobile, video, and cloud services and from growing bandwidth demand coupled with high expectations for quality of service.
The cost of meeting these customer expectations challenges many service provider revenue models: providers must balance the cost of infrastructure upgrades against the return on investment.
To remain competitive and profitable, service providers must be capable of offering new and improved services while gaining increased efficiencies from the network to lower costs. The standard network architectures that exist today make adapting to evolving demands costly and difficult.
Most existing networks are divided across layer boundaries, with IP (Layer 3) and optical transport (Layer 1) designed, deployed, and operated almost entirely independently. Bringing these layers together within a single, unified network environment will increase efficiencies in the network, improve time to revenue, and decrease network TCO. To accomplish this, service providers need a multi-layer control plane that allows relevant information to be exchanged across layers, enabling them to automate many functions across both IP-packet and optical transport domains.
Such a multi-layer control plane must provide sufficient scale to support service provider networks, allow for interlayer communication without overburdening network elements, respect organizational boundaries, and respect the organizational knowledge base that service providers have developed for each network layer.
Service providers recognize that a multi-layer control plane offers many advantages. Currently, the industry has developed two primary models for multi-layer control planes: the peer model and the overlay model.
However, both multi-layer models have inherent flaws. Figure 1 illustrates the peer and overlay models, listing their advantages and their flaws.

Peer Model. Optical network elements are treated the same as routing elements, with both layers participating in a single, shared control plane.

Overlay Model. This model treats the optical layer as an autonomous administrative domain running its own control plane, while the routing layer runs its own separate control plane. Each layer is completely independent of the other, but the layers share a user-network interface (UNI) between them to allow for minimal communication, including turning up or tearing down a circuit.
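The practical difference between the two models lies in what crosses the layer boundary. The toy Python sketch below contrasts the full topology sharing of the peer model with the narrow UNI signaling of the overlay model; all class and method names are illustrative, not taken from any real controller API:

```python
# Toy contrast of the two multi-layer control plane models (illustrative only).

class PeerModel:
    """One shared control plane: the routing layer sees every optical link."""
    def __init__(self, optical_links):
        # The routing layer imports the full optical topology (a scaling burden).
        self.visible_topology = list(optical_links)

class OverlayModel:
    """Two independent control planes joined by a narrow UNI."""
    def __init__(self, optical_links):
        self._optical_links = list(optical_links)  # hidden from the routing layer
        self.circuits = []

    # The UNI exposes only minimal operations: circuit setup and teardown.
    def uni_setup_circuit(self, a, z):
        self.circuits.append((a, z))
        return (a, z)

    def uni_teardown_circuit(self, circuit):
        self.circuits.remove(circuit)

optical = [("olt1", "olt2"), ("olt2", "olt3"), ("olt1", "olt3")]
peer = PeerModel(optical)
overlay = OverlayModel(optical)
overlay.uni_setup_circuit("rtr-a", "rtr-z")

# Peer: every router carries all optical state.
print(len(peer.visible_topology))   # 3 links visible to the routing layer
# Overlay: the routing layer sees only its own circuits, not optical links.
print(len(overlay.circuits))        # 1
```

The sketch shows why each model fails in its own way: the peer model floods the routing layer with optical state, while the overlay model's UNI admits too little information for protection and disjoint routing decisions.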
While the peer and overlay models offer multi-layer control plane solutions, they both fall short of real-world requirements for modern service providers. The overlay model solves the bandwidth efficiency problems that are endemic to the peer model. However, in optimizing bandwidth utilization, the overlay model leaves too little information to be shared between layers. The overlay model also struggles with circuit protection, disjoint circuit routing, and efficient mapping of bundles (Figure 2).
All three of these inefficiencies are directly related to the lack of awareness of the IP layer within the underlying dense wavelength division multiplexing (DWDM) layer.

The radio protocol architecture for LTE can be separated into control plane architecture and user plane architecture, as shown below. On the user plane side, the application creates data packets that are processed by protocols such as TCP, UDP, and IP, while on the control plane, the radio resource control (RRC) protocol writes the signalling messages that are exchanged between the base station and the mobile.
In both cases, the information is processed by the packet data convergence protocol (PDCP), the radio link control (RLC) protocol, and the medium access control (MAC) protocol before being passed to the physical layer for transmission. Different tunneling protocols are used depending on the interface. The control plane additionally includes the radio resource control (RRC) layer, which is responsible for configuring the lower layers. The control plane handles radio-specific functionality that depends on the state of the user equipment, which has two states: idle and connected.
The grey region of the stack indicates the access stratum (AS) protocols. The lower layers perform the same functions as for the user plane, with the exception that there is no header compression function for the control plane.
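The layer ordering described above can be pictured as a simple processing pipeline: data descends through the stack with each layer adding its framing, and ascends in reverse on the receiving side. This is a toy illustration (the layer names come from the text; the bracketed "header" strings are invented):

```python
# Toy LTE protocol pipeline: each layer wraps the payload on the way down.
USER_PLANE = ["PDCP", "RLC", "MAC", "PHY"]
CONTROL_PLANE = ["RRC"] + USER_PLANE  # RRC sits above PDCP on the control plane

def send(payload, plane):
    """Simulate a packet descending the stack: each layer adds its 'header'."""
    for layer in plane:
        payload = f"{layer}[{payload}]"
    return payload

def receive(frame, plane):
    """Reverse the process on the receiving side, layer by layer."""
    for layer in reversed(plane):
        prefix = layer + "["
        assert frame.startswith(prefix) and frame.endswith("]")
        frame = frame[len(prefix):-1]
    return frame

frame = send("signalling-msg", CONTROL_PLANE)
print(frame)                           # PHY[MAC[RLC[PDCP[RRC[signalling-msg]]]]]
print(receive(frame, CONTROL_PLANE))   # signalling-msg
```

Note how the only difference between the two planes in this model is the extra RRC layer at the top, matching the description above.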
The user equipment camps on a cell after a cell selection or reselection process where factors like radio link quality, cell status and radio access technology are considered.
The UE also monitors a paging channel to detect incoming calls and acquire system information. In this mode, control plane protocols include cell selection and reselection procedures.

Current RAN architecture is undergoing a transformation to increase deployment flexibility and network dynamicity, so that networks will be able to meet the performance requirements demanded by applications such as extreme mobile broadband and long-range massive MTC.
To stop total cost of ownership from soaring, the proposed architecture will be software-configurable and split between general-purpose and specialized hardware, in a way that enables ideal placement of network functions. The changes to the architecture include the capability to place selected functions closer to the network edge, for example, and the ability to increase RAN resilience.
Cost is naturally a factor, as spectrum availability and site infrastructure continue to dominate operator expenditure for wide-area systems. The evolution of RAN architecture therefore needs to include measures for enhanced spectrum efficiency that are harmonized with other improvements in the areas of hardware performance and energy efficiency.
In light of these cost and performance requirements, a number of capabilities are shaping the evolution path of RAN architecture. The best combination of any radio beam within reach of a user should be used for connectivity across all access network technologies, antenna points, and sites. This capability will be achieved by applying carrier aggregation, dual connectivity, CoMP, and a number of MIMO and beamforming schemes.
Some 5G requirements, such as ultra-low latency and ultra-high throughput, require a highly flexible RAN architecture and topology. This will be enabled by splitting RAN functions, including the separation of the user plane (UP) and the control plane (CP) in higher layers. The capability to configure, scale, and reconfigure logical nodes through software commands enables the RAN to dynamically adjust to changing traffic conditions, hardware faults, and new service requirements.
This capability will be achieved by separating out logical nodes suitable for virtualization on a general-purpose processor (GPP) and designing functions that require specialized hardware to be dynamically reconfigurable on a special-purpose processor (SPP).
Deployment flexibility enables an operator to deploy and configure the RAN with maximum spectrum efficiency and service performance regardless of the site topology, transport network characteristics, and spectrum scenario. This is achieved through a correct split of the RAN architecture into logical nodes, combined with the future-proof freedom to deploy each node type in the sites that are most appropriate given the physical topology and service requirements.
This process is illustrated in Figures 1, 2, and 3. Throughout the process, and as a result of the functional decomposition, new inter-node interfaces emerge, whose characteristics need to be taken into consideration to ensure that the underlying transport network can support the various deployment scenarios.
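Under the constraints described above, the split into logical nodes amounts to mapping each RAN function to the site that can satisfy its interface-latency requirement. The sketch below illustrates this placement logic; the function names, latency budgets, and site latencies are assumed values for illustration, not normative 3GPP figures:

```python
# Illustrative placement of RAN functions onto sites by latency budget.
# All budgets and site latencies below are assumptions, not normative values.
FUNCTIONS = {
    # function: maximum tolerable latency to the radio, in ms (assumed)
    "scheduling":     0.5,    # TTI-synchronous, must stay near the antenna
    "rlc_mac":        0.5,
    "pdcp_multipath": 10.0,   # asynchronous to the TTI, can sit deeper in the network
    "rrc_control":    50.0,
}
SITES = {"antenna_site": 0.1, "local_hub": 5.0, "regional_dc": 30.0}

def place(functions, sites):
    """Assign each function to the deepest feasible site.

    'Deepest' here means the feasible site with the highest latency, on the
    assumption that more centralized sites are cheaper to operate.
    """
    placement = {}
    for fn, budget in functions.items():
        feasible = [s for s, lat in sites.items() if lat <= budget]
        placement[fn] = max(feasible, key=lambda s: sites[s])
    return placement

print(place(FUNCTIONS, SITES))
```

Running the sketch places the TTI-synchronous functions at the antenna site, PDCP/multipath handling at a local hub, and RRC control in a regional data center, mirroring the functional decomposition the text describes.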
Below the high-level specification, 3GPP leaves room for innovation to enhance the network with RAN-internal value-add features, a flexibility that has, over a number of years, resulted in continuous improvement in many areas, including spectrum efficiency (in the form of scheduling algorithms, power control algorithms, and various RRM features), energy efficiency, and enhancements to service characteristics such as lower latencies.
To determine the optimal architectural split, however, the RAN architecture needs to be examined at a finer level of granularity than that offered by 3GPP. Figure 1 illustrates the logical RAN architecture, which, for the purposes of simplification, shows the UL and DL instances of each function combined; solid lines indicate user plane functions.
In this way, a single UE can simultaneously receive and send data over different radio channels (for example, one NR and one LTE channel) that are connected to different sites. The RAN functions participating in the loop work synchronously with the air-interface TTI, while the PDCP and multipath handling function feed and receive packets traveling to and from the RLC layer asynchronously, which has implications for the split architecture. Runtime control functions can generally be divided into three categories, depending on whether they act on a per-user basis (U-RRM), control spectrum on a system level (S-RRM), or manage infrastructure and other common resources.
The U-RRM functions include measurement reporting, selection of modulation and coding schemes, per-UE bearer handling, and handover execution. In contrast, the U-RRM function UE-handling works on a time scale of 10ms and above, including bearer handling, per-UE policy handling, handoff control, and more.
Functions that control spectrum on the system level include radio scheduling, distribution of the power budget across active UEs, and system-initiated load-sharing handovers. System-area handlers — such as load sharing, system information control, and dual-connectivity control — control spectrum on a 10ms time scale, or slower.
Functions that control infrastructure and common resources — other than spectrum — include handling of transport, connectivity, hardware, and energy.
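The three categories above differ mainly in scope and in the time scale on which they act. A small sketch makes the grouping concrete; the category labels follow the text, while the specific function list and per-function periods are illustrative assumptions:

```python
# Categorize runtime control functions by scope and acting time scale.
# Scope labels follow the text (U-RRM, S-RRM, infrastructure); the
# per-function time scales (in ms) are assumptions for illustration.
CONTROL_FUNCTIONS = [
    ("measurement_reporting", "U-RRM", 1),
    ("ue_bearer_handling",    "U-RRM", 10),
    ("radio_scheduling",      "S-RRM", 1),
    ("load_sharing_handover", "S-RRM", 10),
    ("transport_handling",    "infrastructure", 1000),
    ("energy_management",     "infrastructure", 1000),
]

def by_scope(functions):
    """Group control functions by their scope category."""
    groups = {}
    for name, scope, period_ms in functions:
        groups.setdefault(scope, []).append((name, period_ms))
    return groups

groups = by_scope(CONTROL_FUNCTIONS)
for scope, fns in groups.items():
    print(scope, [name for name, _ in fns])
```

Grouping by acting time scale rather than by scope would yield the fast/slow division the text draws between TTI-synchronous functions and 10 ms-and-above handlers.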
By allowing the control functions for spectrum, transport, infrastructure, and connectivity to interact, a holistic control system for RAN resources can be built. The less time spent on interface signaling, the more time is available for processing, which translates into lower cost for hardware and for energy consumption. As traffic moves to the right in Figure 4, the requirement on interface latency gradually relaxes.
Figure 4: The logical interfaces in the RAN architecture and their characteristic requirements.
The CPRI bandwidth scales with the effective carrier bandwidth and the number of antenna elements.

The latest Istio release has many new features, but those features are dwarfed by a major improvement.
Initially, Istio used a microservice architecture to support the way our teams within Istio were working. At the time, we were delivering updates and releases for services on different schedules, we were using API contracts between our teams, and we needed independent horizontal scaling.
As we wrapped up development of the previous Istio release, the Environments working group led the technical debates, and the istiod concept was born! During the Istio Technical Oversight Committee meeting that followed, the Environments working group proposed using the istiod binary. Put simply, istiod is the reversal of the microservice architecture model. So, why did we make the choice to adopt the istiod binary? One main reason was that as Istio development matured, we began to work as a single team with a single delivery, so we no longer met the criteria for needing a microservices architecture.
Other specific reasons include extensibility: WebAssembly (Wasm) modules can be dynamically loaded while the proxy continues to serve traffic. There is no vendor lock-in that limits the types of services you can use with Istio.
Plus, Istio can run in any cloud model: public cloud, private cloud, on-premises, or hybrid cloud. Our strong, vibrant community makes Istio special. The Istio project benefits from an active, diverse community of developers drawn from many organizations, and the Istio ecosystem has countless additional contributors working to make Istio a success.
The Istio service mesh technology is open source. Istio relies on an active community of contributors to improve the technology. Steven Dake is an open source leader at IBM. He is a maintainer within the Istio project, and serves as a workgroup lead within the Environments Working Group.
This blog post was written by Steven Dake and published on March 5. What exactly is istiod? It consolidates functionality that was previously delivered by separate services: sidecar injection (provided by the sidecar-injector service, not pictured), telemetry and policy (provided by the Mixer istio-telemetry and istio-policy services and their adapter plugins), sidecar proxy configuration generation and serving (provided by the Pilot service), and certificate handling (provided by the Citadel service).

The enterprise landscape is continuously evolving. There is greater demand for mobile and Internet-of-Things (IoT) device traffic, SaaS applications, and cloud adoption. In addition, security needs are increasing, applications require prioritization and optimization, and as this complexity grows, there is a push to reduce costs and operating expenses.
High availability and scale continue to be important. Legacy WAN architectures are facing major challenges under this evolving landscape.
Issues with these architectures include insufficient bandwidth along with high bandwidth costs, application downtime, poor SaaS performance, complex operations, complex workflows for cloud connectivity, long deployment times and policy changes, limited application visibility, and difficulty in securing the network.
In recent years, software-defined wide-area networking (SD-WAN) solutions have evolved to address these challenges. SD-WAN builds on software-defined networking (SDN), a centralized approach to network management that abstracts the underlying network infrastructure away from its applications. This decoupling of data plane forwarding and the control plane lets you centralize the intelligence of the network and allows for more network automation, operations simplification, and centralized provisioning, monitoring, and troubleshooting.
The Cisco SD-WAN solution fully integrates routing, security, centralized policy, and orchestration into large-scale networks. It is multitenant, cloud-delivered, highly automated, secure, scalable, and application-aware with rich analytics. Among its benefits: due to the separation of the control plane and data plane, controllers can be deployed on premises or in the cloud, and Cisco WAN Edge routers can be physical or virtual and can be deployed anywhere in the network.
This guide discusses the architecture and components of the solution, including the control plane, data plane, routing, authentication, and onboarding of SD-WAN devices. It also covers NAT, firewall, and other deployment planning considerations. The topics in this guide are not exhaustive; lower-level technical details for some topics can be found in the companion prescriptive deployment guides or in other white papers.
See Appendix A for a list of documentation references. The guide covers several use cases: secure automated WAN; application performance optimization, which improves the application experience for users at remote offices; secure direct Internet access, which locally offloads Internet traffic at the remote office; and multicloud connectivity.

The secure automated WAN use case focuses on providing secure connectivity between branches, data centers, colocations, and public and private clouds over a transport-independent network. It also covers streamlined device deployment using ubiquitous and scalable policies and templates, as well as automated, no-touch provisioning for new installations.
The WAN Edge router automatically discovers its controllers, fully authenticates to them, and downloads its prepared configuration before proceeding to establish IPsec tunnels with the rest of the existing network.
Automated provisioning helps to lower IT costs. Traffic can be offloaded from higher-quality, more expensive circuits like MPLS to broadband circuits, which can achieve the same availability and performance for a fraction of the cost. Application availability is maximized through performance monitoring and proactive rerouting around impairments. Traffic that enters the router is assigned to a VPN, which not only isolates user traffic but also provides routing table isolation.
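The VPN isolation just described can be pictured as one routing table per VPN: a lookup in one VPN can never match a route installed in another, even for the same prefix. A minimal sketch, with made-up VPN numbers, prefixes, and next-hop names:

```python
import ipaddress

# One routing table per VPN: user traffic and route lookups are isolated.
# VPN IDs, prefixes, and next-hop names below are invented for illustration.
vrf_tables = {
    10: {"10.1.0.0/16": "mpls-tunnel-1"},   # "corporate" VPN
    20: {"10.1.0.0/16": "inet-tunnel-7"},   # "guest" VPN: same prefix, no clash
}

def lookup(vpn_id, dst_ip):
    """Longest-prefix lookup restricted to the VPN's own routing table."""
    table = vrf_tables[vpn_id]
    addr = ipaddress.ip_address(dst_ip)
    matches = [p for p in table if addr in ipaddress.ip_network(p)]
    if not matches:
        return None
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return table[best]

print(lookup(10, "10.1.2.3"))   # mpls-tunnel-1
print(lookup(20, "10.1.2.3"))   # inet-tunnel-7: same address, different VPN
```

Because each VPN carries its own table, overlapping address space across tenants is harmless, which is the point of the routing table isolation mentioned above.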
There are a variety of different network issues that can impact the application performance for end-users, which can include packet loss, congested WAN circuits, high latency WAN links, and suboptimal WAN path selection. Optimizing the application experience is critical in order to achieve high user productivity.
During periods of performance degradation, traffic can be directed to other paths if SLAs are exceeded. The figure below shows that for application A, paths 1 and 3 are valid, but path 2 does not meet the SLAs, so it is not used in path selection for transporting application A traffic.
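The path-selection behavior described above can be sketched directly: each path is continuously measured, and only paths whose metrics fall within the application's SLA class remain candidates. The thresholds and path statistics below are invented for illustration, not Cisco defaults:

```python
# Application-aware routing sketch: keep only paths meeting the app's SLA.
# All thresholds and measurements below are illustrative values.
SLA_APP_A = {"loss_pct": 1.0, "latency_ms": 150, "jitter_ms": 30}

paths = {
    "path1": {"loss_pct": 0.2, "latency_ms": 40,  "jitter_ms": 5},
    "path2": {"loss_pct": 3.5, "latency_ms": 200, "jitter_ms": 45},  # degraded
    "path3": {"loss_pct": 0.8, "latency_ms": 90,  "jitter_ms": 12},
}

def valid_paths(paths, sla):
    """A path is usable only if every measured metric is within the SLA."""
    return [name for name, metrics in paths.items()
            if all(metrics[k] <= sla[k] for k in sla)]

print(valid_paths(paths, SLA_APP_A))   # ['path1', 'path3']
```

As in the figure, the degraded path drops out of the candidate set automatically and returns once its measurements come back within the SLA.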
Together, these features are designed to minimize the delay, jitter, and packet loss of critical application flows.

OpenStack is designed to be massively horizontally scalable, which allows all services to be distributed widely.
However, to simplify this guide, we have decided to discuss services of a more central nature, using the concept of a cloud controller. A cloud controller is a conceptual simplification. In the real world, you design an architecture for your cloud controller that enables high availability so that if any node fails, another can take over the required tasks.
In reality, cloud controller tasks are spread out across more than a single node.
The cloud controller provides the central management system for OpenStack deployments. Typically, the cloud controller manages authentication and sends messaging to all the systems through a message queue. For many deployments, the cloud controller is a single node. Its services track current information about users and instances, for example, in a database, typically with one database instance managed per service. Authentication and authorization services indicate which users can do what actions on certain cloud resources (quota management, however, is spread out among services). Scheduling services indicate which resources to use first, for example, spreading out where instances are launched based on an algorithm.

Each service running on a designated cloud controller may be broken out into separate nodes for scalability or availability. As another example, you could use pairs of servers for a collective cloud controller, one active and one standby, providing redundant nodes for a given set of related services, such as:
Front-end web for API requests, the scheduler for choosing which compute node to boot an instance on, Identity services, and the dashboard. Now that you have seen the myriad designs for controlling your cloud, read on for further considerations that can help with your design decisions.
In this guide, we assume that all services are running directly on the cloud controller. Cloud controller hardware sizing considerations contains common considerations to review when sizing hardware for the cloud controller design. Size your database server accordingly, and scale out beyond one cloud controller if many instances will report status at the same time or if scheduling where a new instance starts up needs computing power.
If many users will make multiple requests, make sure that the CPU load for the cloud controller can handle it. The dashboard makes many requests, even more than the API access, so add even more CPU if your dashboard is the main interface for your users.
How many nova-api services do you run at once for your cloud? Starting instances and deleting instances is demanding on the compute node but also demanding on the controller node because of all the API queries and scheduling needs. External systems such as LDAP or Active Directory require network connectivity between the cloud controller and an external authentication system.
Also ensure that the cloud controller has the CPU power to keep up with requests.
While our example contains all central services in a single location, it is possible and indeed often a good idea to separate services onto different physical servers. This deployment used a central dedicated server to provide the databases for all services. This approach simplified operations by isolating database server updates and allowed for the simple creation of slave database servers for failover.
This deployment ran central services on a set of servers running KVM. A dedicated VM was created for each service (nova-scheduler, rabbitmq, database, etc.). This assisted the deployment with scaling because administrators could tune the resources given to each virtual machine based on the load it received (something that was not well understood during installation).
This deployment had an expensive hardware load balancer in its organization. It ran multiple nova-api and swift-proxy servers on different physical servers and used the load balancer to switch between them.

Windows Virtual Desktop is a desktop and application virtualization service that runs in the Azure cloud.
Enterprise-scale solutions generally cover 1,000 virtual desktops and above. Microsoft manages the infrastructure and brokering components, while enterprise customers manage their own desktop host virtual machines (VMs), data, and clients. By connecting Windows Virtual Desktop host pools to an Active Directory domain, you can define network topology to access virtual desktops and virtual apps from the intranet or internet, based on organizational policy.
You can connect Windows Virtual Desktop to an on-premises network using a virtual private network (VPN), or use Azure ExpressRoute to extend the on-premises network into the Azure cloud over a private connection. Azure AD integration applies Azure AD security features like conditional access, multi-factor authentication, and the Intelligent Security Graph, and helps maintain app compatibility in domain-joined VMs.
Windows Virtual Desktop session hosts run in host pools. Each host pool can have one or more app groups, which are collections of remote applications or desktop sessions that users can access. The Windows Virtual Desktop workspace (or tenant) is a management construct used to manage and publish host pool resources. Personal desktop solutions, sometimes called persistent desktops, allow users to always connect to the same specific session host.
Users can typically modify their desktop experience to meet personal preferences, and save files in the desktop environment. Personal desktop solutions:. Pooled desktop solutions, also called non-persistent desktops, assign users to whichever session host is currently available, depending on the load-balancing algorithm. Because the users don't always return to the same session host each time they connect, they have limited ability to customize the desktop environment and don't usually have administrator access.
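The load-balancing choice for pooled host pools is commonly one of two algorithms: breadth-first, which spreads new sessions across hosts, and depth-first, which fills one host before moving to the next, up to a session limit. The code below is a toy sketch of both; the host names and session limit are invented:

```python
# Toy session assignment for a pooled host pool (illustrative only).
def assign(hosts, algorithm, max_sessions=10):
    """Pick a session host for a new user connection.

    hosts: dict mapping host name -> current session count.
    Returns the chosen host name, or None if every host is full.
    """
    available = {h: n for h, n in hosts.items() if n < max_sessions}
    if not available:
        return None
    if algorithm == "breadth-first":
        # Spread load: choose the least-loaded host.
        return min(available, key=available.get)
    # depth-first: fill the busiest host that still has capacity.
    return max(available, key=available.get)

hosts = {"host-a": 7, "host-b": 2, "host-c": 9}
print(assign(hosts, "breadth-first"))  # host-b
print(assign(hosts, "depth-first"))    # host-c
```

Breadth-first favors user experience by keeping per-host load low, while depth-first favors cost by letting idle hosts be deallocated; which trade-off applies depends on the deployment.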
There are several options for updating Windows Virtual Desktop desktops. Deploying an updated image every month guarantees compliance and state. Numbers in the following sections are approximate; they are based on a variety of large customer deployments and might change over time. The Windows Virtual Desktop service is scalable to more than 10,000 session hosts per workspace.
You can address some Azure platform and Windows Virtual Desktop control plane limitations in the design phase to avoid changes in the scaling phase. For more information about Azure subscription limitations, see Azure subscription and service limits, quotas, and constraints.
Virtual machine sizing guidelines lists the maximum suggested number of users per virtual central processing unit (vCPU) and minimum VM configurations for different workloads.
This data helps estimate the VMs you need in your host pool. Use simulation tools to test deployments with both stress tests and real-life usage simulations. Make sure the system is responsive and resilient enough to meet user needs, and remember to vary the load sizes when testing. Architect your Windows Virtual Desktop solution to realize cost savings.
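The sizing guidance above reduces to simple arithmetic: total users divided by the per-vCPU density gives the vCPUs needed, which in turn gives the VM count. A hedged example, where the users-per-vCPU density and VM size are placeholders to be replaced with the real figures from the sizing guidelines:

```python
import math

def hosts_needed(total_users, users_per_vcpu, vcpus_per_vm):
    """Estimate the number of session-host VMs for a host pool.

    Both density figures are inputs; take the real values from the
    published sizing guidelines for your workload type.
    """
    vcpus = math.ceil(total_users / users_per_vcpu)
    return math.ceil(vcpus / vcpus_per_vm)

# Example: 500 users, an assumed density of 6 users/vCPU, 8-vCPU VMs.
print(hosts_needed(500, 6, 8))   # 11
```

An estimate like this is only a starting point; as the text notes, stress tests and real-life usage simulations with varied load sizes should validate the final sizing.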