Episode 97 — Virtual Private Cloud Designs: Hub-and-Spoke and Peering

This is Episode Ninety-Seven: Virtual Private Cloud Designs — Hub-and-Spoke and Peering.
Virtual Private Clouds, commonly referred to as V P Cs, provide cloud users with the ability to define isolated, customizable network environments within a public cloud platform. Each V P C allows an organization to create subnets, attach gateways, define routing tables, and enforce firewall policies. V P Cs are foundational to secure and scalable cloud design, acting as the network perimeter for virtual machines, containers, and services. Because V P Cs are logically isolated, they ensure tenant separation while giving administrators full control over internal traffic and external exposure.
The Cloud Plus exam expects candidates to understand the building blocks of V P Cs and how those blocks are assembled into broader network topologies. Hub-and-spoke and V P C peering are two critical architectural patterns used to connect multiple V P Cs together while maintaining varying levels of control, visibility, and segmentation. These topologies are not just academic—they directly affect how traffic is routed, how services are shared, and how security policies are enforced. Designing and implementing these models correctly ensures operational efficiency and avoids costly misconfigurations.
A V P C is more than just a container for cloud resources—it’s a full-featured network that mirrors traditional on-premises designs. Each V P C includes private I P address ranges defined by the administrator, and within each V P C are subnets that segment the address space. Internet gateways and virtual private network gateways provide external connectivity, while route tables define how traffic is handled within and across subnets. Security groups and access control lists provide fine-grained control over which traffic is allowed into and out of resources.
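The address-space structure described above can be sketched with Python's standard `ipaddress` module. The CIDR ranges and subnet sizes below are illustrative assumptions, not tied to any specific cloud provider:

```python
import ipaddress

# Hypothetical VPC address range defined by the administrator.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 subnets and take the first three,
# for example one per availability zone.
subnets = list(vpc_cidr.subnets(new_prefix=24))[:3]

for i, subnet in enumerate(subnets):
    print(f"subnet-{i}: {subnet} ({subnet.num_addresses} addresses)")

# Every subnet must fall inside the parent VPC range.
assert all(s.subnet_of(vpc_cidr) for s in subnets)
```

Route tables, gateways, and security groups then operate on these prefixes, which is why deliberate CIDR planning comes before everything else.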
The hub-and-spoke model organizes multiple V P Cs into a centralized structure. In this topology, one V P C acts as the hub, hosting shared services such as domain name resolution, directory authentication, logging platforms, or firewalling. Other V P Cs, referred to as spokes, connect to the hub but not directly to each other. Traffic between two spokes must traverse the hub, allowing for centralized policy enforcement and visibility. This model is particularly well suited to large organizations with multiple departments or development environments that require shared access to common resources.
Use cases for hub-and-spoke architectures include centralizing security inspection, hosting core identity services, and simplifying billing and policy management. By concentrating shared infrastructure in the hub, organizations avoid duplicating services across every V P C. This design also eases operational overhead by creating a clear control point for monitoring, logging, and change management. For instance, updating a firewall policy in the hub can instantly affect traffic flow to all spokes, eliminating the need to duplicate changes across environments.
Routing in hub-and-spoke architectures typically involves the hub V P C managing connectivity between spokes. Each spoke V P C defines routes pointing to the hub as the next hop for relevant traffic. The hub, in turn, defines routes for each spoke. Security policies can be written to allow or deny traffic between specific spokes or services in the hub. This centralized approach supports zero trust network models and reduces the risk of lateral movement in the event of a security breach.
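The spoke-routes-to-hub pattern just described can be modeled as a small next-hop lookup. The VPC names and CIDRs are hypothetical; the point is that spoke-to-spoke traffic always transits the hub:

```python
import ipaddress

# Which CIDR each VPC owns (illustrative ranges).
owns = {
    "hub":     ipaddress.ip_network("10.0.0.0/16"),
    "spoke-a": ipaddress.ip_network("10.1.0.0/16"),
    "spoke-b": ipaddress.ip_network("10.2.0.0/16"),
}

# Spokes point all inter-VPC prefixes at the hub; the hub routes per spoke.
routes = {
    "spoke-a": {"10.0.0.0/16": "hub", "10.2.0.0/16": "hub"},
    "spoke-b": {"10.0.0.0/16": "hub", "10.1.0.0/16": "hub"},
    "hub":     {"10.1.0.0/16": "spoke-a", "10.2.0.0/16": "spoke-b"},
}

def trace(src, dst_ip):
    """Return the VPC-level path a packet takes from src toward dst_ip."""
    ip = ipaddress.ip_address(dst_ip)
    hops = [src]
    current = src
    while ip not in owns[current]:
        next_hop = next(hop for prefix, hop in routes[current].items()
                        if ip in ipaddress.ip_network(prefix))
        hops.append(next_hop)
        current = next_hop
    return hops

print(trace("spoke-a", "10.2.5.9"))  # spoke-to-spoke traffic transits the hub
```

Because every inter-spoke path passes through the hub, a single inspection point there can see and control all east-west traffic, which is what enables the zero trust posture mentioned above.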
V P C peering, by contrast, is a direct link between two V P Cs. Peering allows the two environments to communicate privately without traversing the public internet. Each V P C maintains its own security controls and identity systems, but routing is configured to allow traffic to flow over the peering link. V P C peering is simpler than hub-and-spoke in terms of setup and can be used for tightly coupled services or environments that need low-latency access to each other.
A peering connection carries traffic in both directions at the link level, but effective communication is governed by the routing and firewall configuration on each side. For bidirectional communication, both V P Cs must configure route tables and update firewall rules or security groups to allow inbound traffic from the other. A common mistake is to establish the peering connection but fail to allow traffic through on one or both sides. Such misconfigurations are frequently tested on the Cloud Plus exam and must be understood in depth.
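The configuration checklist behind a working peering link can be sketched as a small validation routine. The field names below are illustrative placeholders, standing in for each side's route table entry and security group rule:

```python
def peering_ok(side_a, side_b):
    """Report what is missing for two-way traffic over a peering link."""
    problems = []
    for name, side in (("A", side_a), ("B", side_b)):
        if not side.get("route_to_peer"):
            problems.append(f"VPC {name}: no route toward the peer CIDR")
        if not side.get("allow_inbound_from_peer"):
            problems.append(f"VPC {name}: firewall blocks inbound peer traffic")
    return problems

# Classic mistake: the link exists, but one side never opened its firewall.
a = {"route_to_peer": True, "allow_inbound_from_peer": True}
b = {"route_to_peer": True, "allow_inbound_from_peer": False}
print(peering_ok(a, b))  # ['VPC B: firewall blocks inbound peer traffic']
```

An empty problem list means both routes and both firewall openings are in place, which is exactly the two-sided check the exam scenarios probe.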
Cross-region peering extends V P C peering capabilities across geographic regions. Some cloud providers allow direct peering between V P Cs in different continents, which is useful for multi-national applications, disaster recovery designs, or globally distributed microservices. Cross-region peering introduces additional latency and may come with data transfer costs, but it offers high availability and global resource access when designed properly. For the exam, candidates must evaluate when cross-region peering is appropriate based on performance, cost, and fault tolerance requirements.
V P C peering does come with limitations. Peering is not transitive—if V P C A peers with V P C B, and V P C B peers with V P C C, then A cannot reach C through B. Additionally, overlapping I P address spaces are not supported in peered configurations. These limitations make peering less suitable for large-scale networks unless carefully planned. At scale, organizations may migrate from peering to more advanced constructs like transit gateways.
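Both limitations can be demonstrated in a few lines. The peering pairs and CIDRs are hypothetical; `ipaddress.ip_network.overlaps` does the real overlap test:

```python
import ipaddress

# A<->B and B<->C are peered; A<->C is not.
peerings = {("A", "B"), ("B", "C")}

def can_reach(src, dst):
    """Direct peering only: no transit through an intermediate VPC."""
    return (src, dst) in peerings or (dst, src) in peerings

print(can_reach("A", "C"))  # False: B does not forward A's traffic to C

def overlap(cidr_a, cidr_b):
    """True if two CIDR ranges overlap and therefore cannot be peered."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Overlapping ranges would make return routes ambiguous.
print(overlap("10.0.0.0/16", "10.0.128.0/17"))  # True: peering would fail
print(overlap("10.1.0.0/16", "10.2.0.0/16"))    # False: safe to peer
```

The non-transitivity check is why a full mesh of peerings, or a transit gateway, is required once more than two V P Cs need mutual connectivity.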
Design decisions involving V P Cs must consider scalability, security, and operational complexity. Hub-and-spoke models offer centralized control and simplified access to shared services but introduce dependency on the hub’s availability. Peering is easier to deploy and requires fewer components but does not scale well to dozens or hundreds of V P Cs. I P address planning is essential in both models to avoid overlap and ensure route reachability. Cloud Plus candidates must be able to evaluate the benefits and constraints of each topology and apply the correct design based on given requirements.
Monitoring is an essential component of managing V P C connectivity, especially in environments where multiple peering links or spoke networks exist. Administrators must use cloud-native logging and metric collection tools to observe how traffic flows between V P Cs and identify anomalies. This includes auditing route table usage, security group changes, and dropped packets across peered connections. Monitoring tools often provide APIs and dashboards that display peering health, packet counts, and error states. For Cloud Plus candidates, the ability to verify inter-V P C traffic behavior is key to operational visibility.
When V P Cs are interconnected, shared services often need to be centrally accessible. Common examples include load balancers, virtual private network gateways, and internet egress paths. These components are typically placed in the hub V P C to consolidate access control and reduce duplicated infrastructure. When peered V P Cs use centralized appliances, routing must ensure return traffic follows the correct path and respects firewall policies. This requires careful coordination of route tables and next-hop definitions to ensure traffic flows predictably and securely.
Interconnecting V P Cs can introduce cost considerations that must be accounted for in architecture design. Some cloud providers charge for data transferred over peering links, especially when the traffic crosses regions or organizations. Costs may apply in both directions, and billing varies between providers. To optimize costs, administrators may choose to aggregate services in a central hub or limit cross-region traffic with replication filters. Cloud Plus exam scenarios may ask you to evaluate design patterns not just for functionality, but for cost efficiency based on projected data flow.
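A back-of-envelope cost comparison like the one exam scenarios ask for can be reduced to simple arithmetic. The per-gigabyte rates below are placeholders for illustration only, not any provider's actual pricing:

```python
# Assumed rates for the sketch; real pricing varies by provider and region.
INTRA_REGION_RATE = 0.01   # $/GB over a same-region peering link
CROSS_REGION_RATE = 0.02   # $/GB when the link crosses regions

def monthly_cost(gb_per_month, cross_region):
    """Estimated monthly data transfer cost over a peering link."""
    rate = CROSS_REGION_RATE if cross_region else INTRA_REGION_RATE
    return gb_per_month * rate

# 5 TB/month of replication traffic, same region versus cross region:
print(monthly_cost(5000, cross_region=False))  # 50.0
print(monthly_cost(5000, cross_region=True))   # 100.0
```

Even with placeholder rates, the exercise shows why projected data flow, not just functionality, should drive the choice between keeping traffic in-region and replicating across regions.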
Designing for high availability in V P C architectures involves ensuring that traffic can reroute in the event of failure. Hub-and-spoke environments must provide redundant paths to shared services and may include multiple hubs in separate availability zones or regions. Peering links should be monitored for health and configured with failover routes when possible. Some cloud platforms support route health injection or dynamic route updates, allowing failover to occur automatically. For the exam, candidates should understand how these mechanisms contribute to resilient network design.
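The failover behavior described above—shifting traffic when a route's health check fails—can be sketched as priority-based route selection. The route entries and priority values are illustrative, not a specific platform's API:

```python
# Lower priority value wins; only healthy routes are eligible.
routes = [
    {"next_hop": "hub-primary", "priority": 10, "healthy": True},
    {"next_hop": "hub-standby", "priority": 20, "healthy": True},
]

def select_route(table):
    """Pick the lowest-priority healthy route, or None if all are down."""
    live = [r for r in table if r["healthy"]]
    if not live:
        return None
    return min(live, key=lambda r: r["priority"])["next_hop"]

print(select_route(routes))   # hub-primary
routes[0]["healthy"] = False  # simulate a failed health check
print(select_route(routes))   # traffic shifts to hub-standby
```

Route health injection and dynamic route updates on real platforms automate exactly this selection, so failover happens without operator intervention.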
Security policies across interconnected V P Cs must be tightly controlled. Administrators can use identity-based policies, such as those tied to tags or roles, to limit which users or systems can communicate across networks. These policies should be enforced with supporting firewall rules, subnet isolation, and network access control lists. Policies must also be consistently applied across environments to avoid gaps. Policy centralization—whether through a management hub or orchestration tool—ensures auditability and reduces misconfiguration. Exam questions often highlight failures caused by misaligned security policies across V P Cs.
Centralized D N S and directory services in the hub V P C are a best practice in hub-and-spoke designs. This allows spoke networks to rely on shared name resolution, authentication, and configuration management. However, proper forwarding rules and route permissions must be configured to ensure that spokes can reach the centralized services. Cross-account or cross-region D N S forwarding may require additional setup. Cloud Plus candidates should recognize how D N S integration affects service discovery and application performance in multi-V P C environments.
As organizations scale, managing dozens or hundreds of V P Cs through individual peerings becomes unmanageable. Transit gateways provide a solution by allowing all V P Cs to connect to a central routing and forwarding service. This many-to-many communication model enables greater scalability and observability. Migrating from a peering model to a transit gateway involves updating route tables, coordinating I P addressing, and often overlapping old and new paths temporarily. Candidates must know how and when to transition to more scalable architectures as the number of V P Cs grows.
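The scaling pressure that motivates transit gateways is easy to quantify: a full mesh of pairwise peerings grows quadratically, while a transit gateway needs only one attachment per V P C:

```python
def mesh_peerings(n):
    """Peering links needed for a full mesh of n VPCs: n*(n-1)/2."""
    return n * (n - 1) // 2

def tgw_attachments(n):
    """Attachments needed with a central transit gateway: one per VPC."""
    return n

for n in (5, 20, 100):
    print(n, mesh_peerings(n), tgw_attachments(n))
# At 100 VPCs, a full mesh needs 4950 peerings versus 100 attachments.
```

The crossover point where mesh management becomes untenable arrives quickly, which is why large estates migrate to a hub routing service long before they reach hundreds of V P Cs.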
The Cloud Plus exam includes diagram-based questions where candidates must identify misconfigured routes, missing peering links, or incomplete firewall rules. These questions test not only technical knowledge but also the ability to visually assess network design. Mastery of V P C topologies enables quick identification of problems like asymmetric routing, incomplete path definitions, or inaccessible shared services. Real-world knowledge of how V P Cs behave under various conditions is essential for answering these performance-based questions accurately.
Design best practices in V P C topologies include planning for growth, redundancy, and ease of management. Hub-and-spoke models are best for environments that share infrastructure and require centralized control. Peering is useful for lightweight, decentralized communication where only two V P Cs need to interact. Regardless of the model, clear I P space planning, policy enforcement, route visibility, and logging must be in place to maintain a secure and scalable cloud network. Cloud Plus candidates must apply these principles when selecting topologies or troubleshooting network behavior.
To conclude, understanding virtual private cloud designs such as hub-and-spoke and peering is essential for anyone working in cloud network architecture. These models define how connectivity, control, and resource sharing are implemented in multi-V P C deployments. Choosing the right design depends on the environment’s size, growth expectations, security needs, and administrative preferences. For the Cloud Plus certification, mastering these concepts is vital for success on design, implementation, and troubleshooting exam questions.
