The data center is continuing to go through a fundamental shift to support higher-speed interfaces at the access layer. This shift is being driven largely by compute virtualization and is seen across multiple target markets. One of the biggest factors is the adoption of cloud services offered by service providers; however, enterprises, government agencies, and financial and research institutions are also adopting compute virtualization and seeing the same need for high-speed interfaces. Specifically, the shift is from 1GbE to 10GbE interfaces in the access layer. To support high-density 10GbE interfaces, the core and aggregation layers need to support even higher-speed interfaces, such as 40GbE, to maintain a standard oversubscription ratio of 3:1.

Another trend is that storage and data are being collapsed onto the same network infrastructure. Whether it's via Fibre Channel over Ethernet (FCoE), Internet Small Computer System Interface (iSCSI), or Network File System (NFS), converging storage onto the data network further increases the port density, speed, and latency requirements of the network.

Over the past couple of years, an architecture called Software-Defined Networking (SDN) was created, refined, and is taking shape, as shown in Figure 1-1. One aspect of SDN is that it makes the network programmable. OpenFlow provides an API to networking elements so that a centralized controller can precalculate and program paths into the network. One early challenge with SDN was how to approach compute and storage virtualization and provide full integration and orchestration with the network. VXLAN introduced a concept that decouples the physical network from the logical network. Being able to dynamically program and provision logical networks, regardless of the underlying hardware, quickly enabled integration and orchestration with compute and storage virtualization.
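As a back-of-the-envelope check on the 3:1 figure mentioned above, the oversubscription ratio is simply southbound (access) bandwidth divided by northbound (uplink) bandwidth. The sketch below is illustrative only; the specific port counts (48 x 10GbE access, 4 x 40GbE uplinks) are an assumed example configuration, not a statement about any particular switch model.

```python
def oversubscription(access_ports: int, access_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of total access-facing bandwidth to total uplink bandwidth."""
    return (access_ports * access_gbps) / (uplink_ports * uplink_gbps)

# Assumed example: a ToR switch with 48 x 10GbE access ports
# and 4 x 40GbE uplinks toward the aggregation layer.
ratio = oversubscription(48, 10, 4, 40)
print(f"{ratio:g}:1")  # 480 Gbps down / 160 Gbps up
```

With these assumed numbers the ratio works out to exactly 3:1, which is why 40GbE uplinks pair naturally with dense 10GbE access tiers.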
Let's start with a little bit of history to explain the problem and demonstrate the need for the Juniper QFX5100. It all starts in 2008, when Juniper Networks decided to officially enter the data center and campus switching market. Juniper released its first switch, the EX4200, a top-of-rack (ToR) switch that supports 48 1-Gigabit Ethernet (GbE) ports and two 10GbE interfaces. The solution's differentiation is that multiple switches can be connected together to create a virtual chassis: a single point of management, dual routing engines, and multiple line cards, but distributed across a set of switches.

Juniper released its first 10GbE ToR switch running Junos in 2011, the Juniper EX4500, which supports 48 10GbE ports. The Juniper EX4200 and EX4500 can be combined to create a single virtual chassis that can accommodate a mixed 1GbE and 10GbE access tier.

More than four years in the making, Juniper QFabric was released in 2011. QFabric is a distributed Ethernet fabric that employs a spine-and-leaf physical topology but is managed as a single, logical switch. The solution's differentiation is that the core, aggregation, and access data center architecture roles can now be collapsed into a single Ethernet fabric that supports full Layer 2 and Layer 3 services. The QFabric solution comes in two sizes: the Juniper QFX3000-M scales up to 768 10GbE ports and is often referred to as the "micro fabric," while the much larger Juniper QFX3000-G scales up to 128 ToR switches and 6,144 10GbE ports.