Full Design, Component Specification, Rationale, and Performance Analysis for a Campus-Grade Enterprise Network Deployed in a Multi-Wing Residential Compound with Underground Facility
| Rev. | Date | Author | Description of Change | Status |
|---|---|---|---|---|
| 0.1 | 2026-03-01 | F.G.D. Robledo | Initial draft — topology and ISP layer defined | DRAFT |
| 0.5 | 2026-03-18 | F.G.D. Robledo | Wireless section expanded; Wi-Fi 7 MLO parameters added; VLAN schema first draft | DRAFT |
| 0.9 | 2026-04-02 | F.G.D. Robledo | Power budget, PoE calculations, cabling spec, QoS framework added | REVIEW |
| 1.0 | 2026-04-10 | F.G.D. Robledo | Baseline release — all sections complete; approved for design phase | APPROVED |
This Technical Specification, designated NGRF-NET-001, establishes the complete, normative design intent and engineering specification for the residential campus-grade network infrastructure of the principal residence of F.G.D. Robledo. It encompasses, without limitation, all hardware components, logical configurations, physical cabling, wireless radio parameters, performance targets, redundancy requirements, security postures, and operational procedures necessary for the deployment, commissioning, and long-term operation of said network.
The scope of this document extends across the entirety of the described compound, including all wings of the primary above-ground residential structure, all subterranean tunnel corridors and underground facility levels, all outbuilding structures within the compound perimeter, and any external relay points required to extend network connectivity beyond the perimeter of the principal building.
This document is intended for use by qualified network engineers, IT infrastructure specialists, low-voltage cabling contractors, and authorized personnel of the principal. It is not intended for general public distribution. All specifications contained herein represent the design-phase intent and shall govern all procurement decisions. Deviations from this specification, whether arising from component unavailability, regulatory constraint, or engineering discovery during deployment, shall be documented via a formal Engineering Change Notice (ECN) and appended to this document at the next applicable revision cycle.
The design, components, and configurations described in this specification conform to or incorporate, in whole or in relevant part, the following standards, specifications, and regulatory frameworks. Where a listed standard has been superseded or amended, the most recent published revision as of the date of this document shall apply.
| Standard / Reference | Governing Body | Applicability | Relevance |
|---|---|---|---|
| IEEE 802.11be-2024 | IEEE | Mandatory | Wi-Fi 7 (EHT) PHY/MAC layer specification; all wireless APs in this design |
| IEEE 802.11ax-2021 | IEEE | Informative | Wi-Fi 6E backward compatibility reference; OFDMA precedent |
| IEEE 802.3bt-2018 | IEEE | Mandatory | PoE++ (802.3bt Type 3/4) — governs all PoE switch and AP power delivery |
| IEEE 802.1Q-2022 | IEEE | Mandatory | Virtual LAN (VLAN) tagging; all switching infrastructure |
| IEEE 802.1AX-2020 | IEEE | Mandatory | Link Aggregation Control Protocol (LACP); uplink bonding |
| IEEE 802.3ae-2002 | IEEE | Mandatory | 10 Gigabit Ethernet; core and uplink fiber runs |
| IEEE 802.1D / 802.1w | IEEE | Mandatory | Spanning Tree Protocol (STP) / Rapid STP; loop prevention in switching fabric |
| RFC 5798 (VRRP v3) | IETF | Mandatory | Virtual Router Redundancy Protocol v3; edge router HA |
| RFC 4271 (BGP-4) | IETF | Mandatory | Border Gateway Protocol; multi-ISP policy routing at edge |
| RFC 2328 (OSPFv2) | IETF | Mandatory | Open Shortest Path First; internal dynamic routing |
| RFC 4601 (PIM-SM) | IETF | Advisory | Protocol Independent Multicast; future multicast media distribution |
| ITU-T G.984 / G.987 | ITU-T | Informative | GPON / XG-PON; governs ISP ONT interface characteristics |
| TIA-568.2-D | TIA | Mandatory | Structured cabling specification; Cat 6A horizontal runs |
| TIA-568.3-D | TIA | Mandatory | Fiber optic cabling; OS2 single-mode backbone specification |
| WPA3 (SAE / 802.1X-EAP) | Wi-Fi Alliance | Mandatory | Wireless authentication; WPA3-Enterprise (EAP) on secured SSIDs, WPA3-Personal (SAE) on guest and IoT SSIDs |
| Wi-Fi 7 Certification | Wi-Fi Alliance | Mandatory | Device certification baseline for all APs and clients |
| NTC MC 05-08-2020 | NTC (Philippines) | Mandatory | National Telecommunications Commission frequency allocation; 6 GHz band local regulations |
| NFPA 70 / PEC 2017 | NFPA / PEC | Mandatory | Philippine Electrical Code; governs all electrical and PoE installation |
| All IEEE standards referenced above carry their most current published amendment as of April 2026. | |||
The following definitions, abbreviations, and notational conventions are established for use throughout this document. Unless otherwise specified contextually, these definitions shall apply in their entirety to all sections, tables, diagrams, and appendices.
The subject compound is a multi-wing residential campus of significant scale, designed to accommodate a large number of permanent and concurrent users across multiple functional zones. The compound is organized into four primary above-ground wings (designated Wing A through Wing D), an underground tunnel corridor system interconnecting all wings, and one or more subterranean facility levels beneath the primary structure. The following characterization of each zone governs the wireless coverage planning, cabling infrastructure routing, and access switch placement described in subsequent sections.
| Zone Designation | Description | Est. Area (m²) | Est. Floors | Dominant Use | AP Density | Uplink Req. |
|---|---|---|---|---|---|---|
| Wing A | Primary residential wing; master suite, principal office, library, entertainment suites | ~800 | 3–4 | High-density residential, VR, 8K streaming | Very High (1 AP / ~40m²) | 2×10G (LACP) |
| Wing B | Secondary residential wing; guest suites, common areas, dining | ~600 | 3 | Mixed residential, IoT, casual use | High (1 AP / ~55m²) | 2×10G (LACP) |
| Wing C | Operational wing; server room, workshop, design studio, lab spaces | ~500 | 2–3 | High-bandwidth technical, server access, NAS | High (1 AP / ~50m²) | 2×10G (LACP) |
| Wing D | Recreation wing; gymnasium, garage, VR arena, simulation bay | ~700 | 2 | VR-primary, ultra-low latency, high concurrent clients | Very High (1 AP / ~35m²) | 2×10G (LACP) |
| Underground Corridors | Tunnel network interconnecting wings; also houses primary conduit runs | ~400 | 1 (below grade) | Transit, cabling, emergency comms | Medium (1 AP / ~80m²) | 1×10G |
| Subterranean Facility | Below-grade facility; Shatterdome bay, mechanical, utility, storage | ~1200+ | 2 (below grade) | Operational, security monitoring | High (1 AP / ~60m²) | 2×10G (LACP) |
| Exterior / Perimeter | Outdoor areas, circuit track perimeter, helipad, gardens | ~5000+ | N/A | Outdoor coverage, security cameras, perimeter comms | Low (sector / area APs) | 1×2.5G per AP |
| Area estimates are approximate and based on preliminary compound layout planning. Actual AP counts shall be determined following final architectural drawings and RF survey. | ||||||
The network architecture described in this specification is designed according to the three-layer hierarchical network model, adapted and extended for campus-scale residential deployment. The three layers — the WAN edge, the core/distribution fabric, and the access layer — map cleanly to the physical component hierarchy and provide a principled framework for scalability, troubleshooting, and expansion.
The guiding design principles, in order of priority, are as follows:
1. High Availability (HA) Above All. No single point of failure shall be permitted at any layer of the network that would result in a complete loss of connectivity. Redundancy is implemented at the ISP level (three providers), the edge router level (VRRP with dual physical routers), the core switching level (dual chassis, cross-connected), and at every distribution uplink (dual 10G LACP trunks). Even at the access layer, dual-homed APs to separate switches are specified where architectural feasibility permits.
2. Bandwidth Far in Excess of Current Demand. The network shall be provisioned for a bandwidth envelope substantially exceeding anticipated peak demand, providing headroom for future expansion, additional users, and as-yet-unanticipated high-bandwidth applications. A factor of not less than 3× overprovisioning at all aggregation points is the design target.
3. Wi-Fi 7 as the Universal Wireless Standard. No access point or wireless device below the IEEE 802.11be (Wi-Fi 7) specification shall be deployed as part of the primary wireless infrastructure. All APs shall support the quad-band radio configuration described in Section 11. Legacy device support shall be achieved via backward-compatible SSID configuration, not through the deployment of legacy hardware.
4. Enterprise Management and Observability. All network devices shall be centrally managed, with full telemetry streaming to a network management system (NMS). No device shall operate in a standalone, unmanaged configuration. This requirement applies equally to APs, switches, and routers.
The WAN layer constitutes the topological entry point of the compound network into the global internet. Three geographically and administratively distinct Internet Service Providers are engaged simultaneously, each delivering independent fiber connectivity to the compound. This multi-ISP architecture is the foundational prerequisite for the high availability posture of the overall network and is not considered an optional enhancement; it is a mandatory structural requirement.
| Provider | Plan | Download | Upload | Technology | ONT Interface | Role | Monthly Cost (PHP) |
|---|---|---|---|---|---|---|---|
| PLDT | PLDT Fiber 10G | 10,000 Mbps | 10,000 Mbps | XGS-PON (ITU-T G.9807.1) | 10G SFP+ or 10GBase-T | PRIMARY | ~₱9,999–₱12,000 |
| Globe | Globe At Home GFiber MAX 1G | 1,000 Mbps | 500 Mbps | GPON (ITU-T G.984) | 1GBase-T (RJ45) | SECONDARY | ~₱2,499–₱3,799 |
| Converge | Converge FiberX 1G | 1,000 Mbps | 1,000 Mbps | GPON (ITU-T G.984) | 1GBase-T (RJ45) | TERTIARY | ~₱2,499–₱3,499 |
| ¹ Pricing is indicative as of Q1 2026 and subject to change. Actual contracted rates may vary. ² All three connections are provisioned as unlimited-data plans with no FUP cap. | |||||||
The deployment of three simultaneous ISP connections at this compound is motivated by a combination of uptime requirements, aggregate bandwidth demands, and the inherent unreliability of any single telecommunications provider operating in the Philippine market. The following analysis establishes the technical and economic justification.
Uptime and Availability. The compound network shall target a minimum service availability of 99.9% (three nines), corresponding to a maximum tolerated downtime of approximately 8.76 hours per calendar year. No single ISP in the Philippine residential market guarantees this level of availability on its own. Independent failure modes across three separate provider networks (separate physical infrastructure, separate central offices, separate backbone providers) reduce the probability of a simultaneous outage across all three circuits to a negligibly small figure, yielding compound availability well above 99.99% even with imperfect individual provider reliability.
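As a first-order check on this claim, the sketch below computes compound availability from assumed per-provider availability figures; the 99.5% and 99.0% values are illustrative placeholders rather than contracted SLAs, and statistical independence of failures is taken as given per the paragraph above.

```python
# Compound WAN availability from independent ISP failure modes.
# Per-ISP availability figures are illustrative assumptions, not SLA values.
isp_availability = {
    "PLDT":     0.995,   # ~43.8 h downtime/yr
    "Globe":    0.990,   # ~87.6 h downtime/yr
    "Converge": 0.990,
}

# Probability that all three circuits are down simultaneously,
# assuming statistically independent failures.
p_total_outage = 1.0
for a in isp_availability.values():
    p_total_outage *= (1.0 - a)

compound_availability = 1.0 - p_total_outage
hours_per_year = 8760

print(f"P(all three down)      = {p_total_outage:.2e}")
print(f"Compound availability  = {compound_availability:.7%}")
print(f"Expected full outage   = {p_total_outage * hours_per_year * 60:.3f} min/yr")
```

Even with these pessimistic individual figures, the expected simultaneous outage time is a fraction of a minute per year, supporting the three-ISP requirement.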
Aggregate Throughput. The PLDT 10G plan alone provides 10 Gbps of WAN capacity, already far in excess of typical residential demand. However, the addition of Globe and Converge provides two additional gigabits per second (12 Gbps aggregate) at modest marginal cost relative to the total infrastructure investment. This additional capacity matters in failure scenarios, where all demand is concentrated on two or even one surviving circuit, and in load-balancing scenarios where routing policy distributes traffic across all three providers simultaneously.
Routing Diversity. PLDT, Globe, and Converge each maintain distinct peering relationships and transit paths to major internet exchange points and content providers. For latency-sensitive traffic (gaming, VR, real-time communication), the edge routers can implement BGP policy to prefer the ISP offering the lowest-latency path to a given destination, regardless of which ISP carries the most traffic by volume.
Each ISP shall install its Optical Network Terminal (ONT) at the designated Network Demarcation Room (NDR), located in the main telecommunications intake of the compound, ideally adjacent to the primary network equipment room housing the edge routers. The demarcation point between ISP responsibility and compound network responsibility is defined as the Ethernet output port of each ONT. All equipment beyond this point, including the edge routers and all downstream infrastructure, is the property and responsibility of the compound owner.
The PLDT ONT shall be connected to the primary edge router via a direct 10G SFP+ fiber connection, requiring a compatible SFP+ transceiver module matched to the ONT's optical interface type. The Globe and Converge ONTs shall connect via 1GBase-T copper Ethernet to the 1G management port or a 10G/1G combo SFP+ port on the edge routers, as configured by the deploying engineer.
The edge routing layer constitutes the logical boundary between the external WAN environment and the internal compound network. It is responsible for all functions associated with WAN connectivity, network address translation, stateful packet inspection, inter-VLAN routing policy, and traffic engineering across multiple upstream providers. This layer is implemented as a dual-chassis High Availability pair, providing transparent failover in the event of any single router hardware failure.
The two MikroTik CCR2004 routers shall be configured as a Virtual Router Redundancy Protocol version 3 (VRRPv3, RFC 5798) pair. This configuration presents a single virtual gateway address on the edge-core transit subnet to all downstream devices, while the active physical router handling traffic (designated MASTER) can be transparently replaced by the BACKUP router in the event of failure, without requiring any reconfiguration of downstream switches, APs, or client devices.
Border Gateway Protocol version 4 (BGP-4, RFC 4271) is employed at the edge layer to manage routing across three upstream ISP connections. Each ISP is assigned a distinct path weight and local preference value, implementing a deterministic primary/secondary/tertiary traffic routing hierarchy. Under normal operating conditions, all internet-bound traffic is routed via the PLDT 10G connection, capitalizing on its superior bandwidth. In the event of PLDT circuit failure, traffic is automatically diverted to the Globe 1G connection via BGP route withdrawal and re-advertisement. If both PLDT and Globe circuits fail, the Converge 1G circuit assumes all internet traffic. OSPF is used internally within the compound network for distributing routes between the edge and core layers.
| ISP | BGP Local Preference | Weight | Condition for Use | Traffic Share (Normal) | Traffic Share (PLDT Down) |
|---|---|---|---|---|---|
| PLDT 10G | 300 | 1000 | Circuit operational | ~83% (10G / 12G) | 0% |
| Globe 1G | 200 | 500 | PLDT down, or policy route | ~8.3% (1G / 12G) | ~50% |
| Converge 1G | 100 | 200 | PLDT + Globe down, or policy route | ~8.3% (1G / 12G) | ~50% |
| Load balancing percentages are nominal and vary with actual link utilization. Specific traffic classes (gaming, VR) may be policy-routed to the ISP offering lowest measured latency to the target destination regardless of the above hierarchy. | |||||
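The deterministic selection implied by the table can be modeled directly. The sketch below is a simplification of BGP best-path selection using only the weight and local-preference values above; a real best-path run also compares AS-path length, origin, MED, and other attributes.

```python
# Deterministic WAN exit selection from the hierarchy in the table above.
# A sketch of the decision logic only, not a full BGP best-path algorithm.
CIRCUITS = [
    {"isp": "PLDT 10G",    "local_pref": 300, "weight": 1000},
    {"isp": "Globe 1G",    "local_pref": 200, "weight": 500},
    {"isp": "Converge 1G", "local_pref": 100, "weight": 200},
]

def active_exit(up):
    """Return the ISP carrying default-route traffic, given the set of
    operational circuits (highest weight, then highest local-pref, wins)."""
    candidates = [c for c in CIRCUITS if c["isp"] in up]
    if not candidates:
        return None
    best = max(candidates, key=lambda c: (c["weight"], c["local_pref"]))
    return best["isp"]

print(active_exit({"PLDT 10G", "Globe 1G", "Converge 1G"}))  # PLDT 10G
print(active_exit({"Globe 1G", "Converge 1G"}))              # Globe 1G
print(active_exit({"Converge 1G"}))                          # Converge 1G
```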
The central core switching layer constitutes the primary high-speed switching fabric of the compound network. Two Cisco Catalyst 9300X chassis, cross-connected and operating as a redundant pair, provide the electrical and optical distribution backbone to all wing core switches, the edge routers, and any directly-attached core infrastructure (servers, NAS, hypervisors, network management appliances). The switching capacity of this layer exceeds the aggregate of all conceivable traffic demands that could simultaneously arise from all downstream devices.
The two core switches are physically cross-connected via a dedicated inter-chassis link aggregate (ICL), implemented as a 2×25G SFP28 LACP bundle using the native SFP28 ports of the C9300X-24Y chassis, providing 50 Gbps of bidirectional inter-chassis bandwidth — a 2.5× improvement over a 2×10G ICL and commensurate with the increased per-wing uplink density delivered by the 100G QSFP28 wing-core links described in Section 9. This cross-connect enables real-time forwarding-table and VLAN-database synchronization between chassis, permits cross-chassis traffic forwarding without hairpinning through the edge layer, and provides the physical substrate required for HSRP and MLAG failover coordination.
Each wing core switch (see Section 9) terminates two 100G QSFP28 uplinks to the central core layer — one physical fiber to CORE-SW-01 and one to CORE-SW-02 — forming a Multi-Chassis Link Aggregation Group (MLAG) bundle presenting 200 Gbps of aggregate logical bandwidth per wing with full link-level redundancy. The central core switches each require one Cisco C9300X-NM-4C network module to provide the four QSFP28 100G ports necessary to terminate the uplinks of all four above-ground wing cores (Wings A–D, one 100G uplink per wing per core chassis). The underground facility wing core connects via 2×25G SFP28 LACP to the native SFP28 ports of both core chassis, consistent with its lower aggregate traffic profile. The edge routers connect to the core via the 25G SFP28 native ports of each core chassis, one 25G link per router per core.
The central core switches operate as the primary Layer 3 routing engine for the internal compound network. All inter-VLAN routing (traffic passing between VLANs, e.g., a device on the Management VLAN initiating a connection to a device on the Trusted Client VLAN) is performed at the core switching layer via Switched Virtual Interfaces (SVIs). Each VLAN defined in Section 12 is assigned a corresponding SVI on both core switches, with HSRP providing a single virtual gateway IP for client devices regardless of which physical core switch is active. OSPF distributes the internal routing table between the core switches and the edge routers.
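Because HSRP version 2 derives its virtual MAC address deterministically from the group number (0000.0C9F.Fxxx, where xxx is the 12-bit group ID in hex), the gateway identity of every SVI can be tabulated in advance. The sketch below assumes, as a convention of this example rather than a stated requirement, that each SVI uses its VLAN ID as its HSRP group number.

```python
# HSRPv2 virtual MAC derivation: 0000.0C9F.F000 + group (group 0-4095).
# Assumes (illustrative convention) HSRP group number == VLAN ID.
def hsrp_v2_virtual_mac(group: int) -> str:
    if not 0 <= group <= 4095:
        raise ValueError("HSRPv2 group must be 0-4095")
    return f"0000.0c9f.f{group:03x}"

# Gateways follow the 10.0.<VLAN>.1 pattern used throughout Section 12.
for vlan in (1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99):
    print(f"VLAN {vlan:>2}: gateway 10.0.{vlan}.1  vMAC {hsrp_v2_virtual_mac(vlan)}")
```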
The wing core distribution layer provides per-wing aggregation and distribution services, sitting logically between the central core switches and the floor-level access switches. Each discrete wing of the compound — Wing A (Primary Residential), Wing B (Guest and Common), Wing C (Technical and Operations), Wing D (Recreation and VR), and the Underground Facility — is served by a dedicated Cisco Catalyst 9300X-24Y distribution switch equipped with a C9300X-NM-4C network module. The C9300X-24Y was selected over the previously considered C9300X-24UX specifically because it is one of only three Catalyst 9300X models compatible with the C9300X-NM-4C QSFP28 module — the others being the C9300X-48HX and C9300X-48TX, both of which are unnecessarily port-dense for a wing distribution role. The C9300X-24Y's 24 native 25G SFP28 ports provide the downlink density required to serve all floor-level access switches at 2×25G LACP each, while the NM-4C's four QSFP28 ports simultaneously provide 100G uplinks to the central core pair, without port-group contention.
This one-to-one correspondence between wings and distribution switches ensures that each wing's traffic is isolated, managed, and routed independently, preventing a single access layer event from affecting other wings. The 100G QSFP28 uplink architecture introduced at this tier represents a step-change in inter-layer bandwidth: each wing presents 200 Gbps of logical uplink capacity to the central core via a 2-member Multi-Chassis Link Aggregation Group (MLAG), compared to 20 Gbps in a conventional 2×10G design — a tenfold increase that eliminates the distribution-to-core link as any conceivable bottleneck for the lifetime of this infrastructure.
The Cisco C9300X-NM-4C is a field-replaceable, hot-swappable network expansion module providing four QSFP+/QSFP28 ports, each capable of operating at either 40 Gigabit Ethernet (40GBASE-SR4, 40GBASE-LR4) or 100 Gigabit Ethernet (100GBASE-SR4, 100GBASE-LR4, 100GBASE-CWDM4, 100GBASE-PSM4), auto-negotiated based on the installed transceiver. The module provides up to 400 Gbps of raw port bandwidth and integrates natively with the UADP 2.0sec ASIC of the C9300X, enabling line-rate switching and hardware-accelerated IPsec at 100G speeds. In this deployment, the NM-4C is configured in 100G mode exclusively, with 100GBASE-LR4 QSFP28 transceivers installed for compatibility with the existing OS2 single-mode fiber plant.
The wing-to-core uplink design employs a cross-chassis Multi-Chassis Link Aggregation Group (MLAG) pattern, a well-established enterprise design in which two physical links from a single source device terminate on two different core chassis, forming a single logical aggregated link. From the wing core's perspective, the two 100G uplinks (one to CORE-SW-01, one to CORE-SW-02) appear as a single Port-Channel interface of 200 Gbps. From the central core pair's perspective, MLAG coordination via the ICL (inter-chassis link) ensures that both chassis present a unified LACP partner to the wing core. The result is that any single core chassis failure causes zero connectivity loss to any wing: the surviving 100G link continues to carry full traffic, and the MLAG subsystem on the surviving core chassis promotes itself to sole active peer within sub-second convergence.
The underground facility (UG-CORE) is a partial exception: it connects to the central core via 2×25G SFP28 LACP (one 25G to each central core chassis, cross-chassis MLAG, 50G aggregate), using the native SFP28 ports of the C9300X-24Y. This reflects the lower aggregate traffic demand of the underground zone (transit corridors, monitoring, security cameras) and reserves the NM-4C QSFP28 ports of the UG-CORE for future expansion capacity. The 50G aggregate uplink for the underground zone exceeds its realistic peak demand by a factor of at least 3× at any foreseeable loading level.
| Switch ID | Wing / Zone | Model + Module | Uplink to CORE-SW-01 | Uplink to CORE-SW-02 | MLAG Aggregate | Downlink to Access Switches | Est. # Access Switches |
|---|---|---|---|---|---|---|---|
| WING-A-CORE | Wing A (Primary Res.) | C9300X-24Y + NM-4C | 1×100G QSFP28 (NM-4C) | 1×100G QSFP28 (NM-4C) | 200G MLAG | 2×25G SFP28 LACP per floor switch (native ports) | 4–6 |
| WING-B-CORE | Wing B (Guest / Common) | C9300X-24Y + NM-4C | 1×100G QSFP28 (NM-4C) | 1×100G QSFP28 (NM-4C) | 200G MLAG | 2×25G SFP28 LACP per floor switch (native ports) | 3–5 |
| WING-C-CORE | Wing C (Technical / Lab) | C9300X-24Y + NM-4C | 1×100G QSFP28 (NM-4C) | 1×100G QSFP28 (NM-4C) | 200G MLAG | 2×25G SFP28 LACP per floor switch (native ports) | 3–4 |
| WING-D-CORE | Wing D (VR / Recreation) | C9300X-24Y + NM-4C | 1×100G QSFP28 (NM-4C) | 1×100G QSFP28 (NM-4C) | 200G MLAG | 2×25G SFP28 LACP per floor switch (native ports) | 2–4 |
| UG-CORE | Underground Facility | C9300X-24Y + NM-4C | 1×25G SFP28 (native port) | 1×25G SFP28 (native port) | 50G MLAG | 2×25G SFP28 LACP per zone switch (native ports) | 2–3 |
| All uplinks use OS2 single-mode fiber (LC Duplex). Wings A–D uplinks use 100GBASE-LR4 QSFP28 transceivers. Underground and all access switch downlinks use 25GBASE-LR SFP28 transceivers. NM-4C ports 3–4 on all wing cores remain unloaded as reserved expansion capacity. Exact access switch counts subject to final RF survey. | |||||||
The access switching layer is the point at which the structured cabling infrastructure of each floor connects to the network. All wired endpoints — access points, wired workstations, IP cameras, rack-mounted servers, smart home controllers, and other network-attached devices — physically terminate at access layer switches. For the purposes of this specification, the Ubiquiti UniFi Switch Pro XG 24 PoE (USW-Pro-XG-24-PoE) is the designated access switch for all floor-level deployments. This model was selected over the older USW-Enterprise-24-PoE for a critical architectural reason: it provides two native 25G SFP28 uplink ports, enabling a 2×25G LACP aggregate (50 Gbps) to the wing core's C9300X-24Y native 25G SFP28 downlink ports — a 2.5× increase in per-switch uplink bandwidth compared to the 2×10G SFP+ available on the legacy model. This upgrade ensures the access-to-distribution uplink does not become the bottleneck in a switching hierarchy now delivering 200G at the distribution-to-core tier.
The USW-Pro-XG-24-PoE also improves the downlink port configuration: its sixteen 10GbE RJ45 PoE++ ports (auto-sensing at 100M/1G/2.5G/5G/10G) allow AP connections to operate at 2.5G for current Wi-Fi 7 APs while providing an in-place upgrade path to 5G or 10G wired uplinks as future AP hardware demands higher single-port throughput — without any switch replacement. All 24 RJ45 ports deliver IEEE 802.3bt Type 4 PoE++ at up to 90W per port, exceeding the maximum draw of any AP in the specified wireless layer and providing substantial headroom for next-generation access point hardware.
The access switch presents 50 Gbps of aggregate uplink bandwidth to the wing core via its 2×25G SFP28 LACP bundle. The aggregate downlink capacity of its 24 RJ45 ports — sixteen at 10G and eight at 2.5G — reaches a theoretical maximum of 180 Gbps if all ports were simultaneously saturated at their ceiling speeds. In practice, Wi-Fi 7 APs operate at 2.5G on current hardware, yielding a realistic concurrent aggregate of 60 Gbps (24 ports × 2.5G), producing a downlink-to-uplink oversubscription ratio of 1.2:1 — effectively non-blocking for the access layer. Even when 10G port capacity is considered against realistic concurrent wireless load, the 50G uplink remains more than adequate. As future Wi-Fi 8 access points with 10G wired interfaces begin to enter service, the 10G RJ45 ports of the USW-Pro-XG-24-PoE accommodate them natively, and the 50G LACP uplink remains appropriate at a 3.2:1 oversubscription ratio for mixed-speed access deployments.
At the wing level, up to twelve access switches connect to a single wing core via 2×25G LACP each, presenting a maximum theoretical aggregate of 12 × 50G = 600G of downlink-facing bandwidth against the wing core's 200G MLAG uplink — a 3:1 distribution-tier oversubscription ratio, which is standard enterprise practice and appropriate given that no real-world deployment will simultaneously saturate all floors at their maximum port speeds.
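The oversubscription arithmetic above can be reproduced directly from the port counts and link speeds in this section; the sketch below restates the three quoted ratios.

```python
# Oversubscription arithmetic for the access and distribution tiers,
# reproducing the ratios quoted above (all figures in Gbps).
uplink_access = 2 * 25            # 2x25G LACP per access switch = 50G

# Access tier, current deployment: 24 ports at 2.5G (current Wi-Fi 7 APs).
realistic_downlink = 24 * 2.5     # 60G
print(realistic_downlink / uplink_access)   # 1.2  -> 1.2:1

# Access tier, future mixed 10G APs on the sixteen 10G-capable ports.
future_downlink = 16 * 10         # 160G
print(future_downlink / uplink_access)      # 3.2  -> 3.2:1

# Distribution tier: up to 12 access switches per wing core.
wing_downlink = 12 * uplink_access          # 600G
wing_uplink = 2 * 100                       # 200G MLAG
print(wing_downlink / wing_uplink)          # 3.0  -> 3:1
```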
IEEE 802.11be, commercially designated Wi-Fi 7 and technically classified as Extremely High Throughput (EHT), represents the seventh major revision of the 802.11 wireless LAN standard and constitutes the most significant advancement in wireless networking since the introduction of OFDMA in 802.11ax (Wi-Fi 6). The standard was ratified in 2024 and defines a comprehensive set of PHY and MAC layer enhancements that collectively enable theoretical maximum aggregate throughputs of up to 46 Gbps per access point. The following table summarizes the key PHY parameters that define the 802.11be standard and distinguishes it from its predecessors.
| Parameter | Wi-Fi 5 (802.11ac) | Wi-Fi 6/6E (802.11ax) | Wi-Fi 7 (802.11be) | Improvement vs Wi-Fi 6E |
|---|---|---|---|---|
| Standard Designation | 802.11ac | 802.11ax | 802.11be | — |
| Technical Name | VHT | HE | EHT | — |
| Max Channel Width | 160 MHz | 160 MHz | 320 MHz | 2× channel width |
| Max Modulation | 256-QAM | 1024-QAM | 4096-QAM | 4× modulation density |
| Max Coding Rate | 5/6 | 5/6 | 5/6 | Equal |
| Max Spatial Streams (per band) | 8 | 8 | 16 | 2× per band |
| Multi-Link Operation (MLO) | No | No | Yes (defining feature) | Revolutionary |
| Multi-RU (Resource Unit) | No | No | Yes | New |
| Max Theoretical Rate (single band) | 6.9 Gbps | 9.6 Gbps | 23.1 Gbps | 2.4× improvement |
| Max Theoretical Rate (4-band total) | N/A | N/A (tri-band only) | ~46+ Gbps | N/A (new capability) |
| OFDMA | No | Yes (DL/UL) | Yes (DL/UL, enhanced) | Enhanced |
| MU-MIMO | 4×4 DL only | 8×8 DL + UL | 16×16 DL + UL | 2× streams |
| Frequency Bands | 2.4 / 5 GHz | 2.4 / 5 / 6 GHz | 2.4 / 5 / 6 GHz (×2) | +1 additional 6 GHz band |
| Target Wake Time (TWT) | No | Yes | Yes (enhanced) | Enhanced |
The quad-band radio architecture is the defining characteristic of the AP platform specified in this document and the primary justification for the selection of Wi-Fi 7 hardware. Whereas prior Wi-Fi 6E routers and APs operated across three bands (2.4 GHz, 5 GHz, and 6 GHz), the quad-band Wi-Fi 7 platforms specified herein add a second, independent 6 GHz radio, operating in a distinct non-overlapping frequency segment of the 6 GHz band. This configuration provides four simultaneous, independent radio interfaces per physical AP unit.
| Radio | Frequency Range | Max Channel BW | Max Spatial Streams | Max Modulation | Max PHY Rate | Primary Use Case | Regulatory Notes |
|---|---|---|---|---|---|---|---|
| Radio 1 — 2.4 GHz | 2.400 – 2.500 GHz | 40 MHz | 4×4 MIMO | 4096-QAM (EHT) | ~1.1 Gbps | Legacy device support, IoT, long-range backhaul reach | PH: 100 mW EIRP max; channels 1/6/11 non-overlapping |
| Radio 2 — 5 GHz | 5.150 – 5.850 GHz | 160 MHz | 4×4 MIMO | 4096-QAM (EHT) | ~5.8 Gbps | General client association, mid-throughput devices | PH: DFS required channels 52–144; EIRP limits per NTC |
| Radio 3 — 6 GHz Low | 5.925 – 6.425 GHz | 320 MHz | 4×4 MIMO | 4096-QAM (EHT) | ~11.5 Gbps | High-throughput clients, VR primary, gaming | PH: NTC MC 05-08-2020; Low Power Indoor (LPI) mode |
| Radio 4 — 6 GHz High | 6.425 – 7.125 GHz | 320 MHz | 4×4 MIMO | 4096-QAM (EHT) | ~11.5 Gbps | High-throughput clients, AP backhaul, dedicated VR band | PH: Subject to NTC allocation; Very Low Power (VLP) or LPI |
| ¹ Maximum PHY rates shown are for 4×4 MIMO with 4096-QAM, 5/6 coding rate, and maximum channel width. Real-world throughput will be lower. ² 6 GHz availability in the Philippines is subject to NTC regulatory confirmation; the design assumes LPI operation as currently enacted. | |||||||
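The PHY rates in the table follow from the standard first-order rate formula: data subcarriers × bits per subcarrier × coding rate × spatial streams, divided by the OFDM symbol duration. The sketch below reproduces the quoted figures; subcarrier counts follow the EHT tone plan, and overheads such as preambles and OFDMA scheduling are deliberately ignored.

```python
# First-order 802.11be PHY rate, reproducing the "~11.5 Gbps" figure above.
N_SD = {20: 234, 40: 468, 80: 980, 160: 1960, 320: 3920}  # data subcarriers

def eht_phy_rate_gbps(width_mhz: int, n_ss: int,
                      bits_per_symbol: int = 12,    # 4096-QAM
                      coding_rate: float = 5 / 6,
                      gi_us: float = 0.8) -> float:
    """Peak PHY rate for one 802.11be link (no OFDMA sharing, no overheads)."""
    t_symbol_us = 12.8 + gi_us                      # OFDM symbol + guard interval
    bits = N_SD[width_mhz] * bits_per_symbol * coding_rate * n_ss
    return bits / t_symbol_us / 1e3                 # bits/us = Mbps; /1e3 -> Gbps

print(eht_phy_rate_gbps(320, 4))    # ~11.5  (Radios 3/4, 4x4, 320 MHz)
print(eht_phy_rate_gbps(160, 4))    # ~5.8   (Radio 2, 4x4, 160 MHz)
print(eht_phy_rate_gbps(320, 8))    # ~23.1  (single-band maximum, 8x8)
print(eht_phy_rate_gbps(320, 16))   # ~46.1  (16 streams, the 46 Gbps headline)
```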
Multi-Link Operation (MLO) is the singular most important technical feature introduced in IEEE 802.11be and the central reason that Wi-Fi 7 represents a qualitative, rather than merely quantitative, advancement over Wi-Fi 6E for the applications targeted by this specification. MLO fundamentally changes the relationship between a Wi-Fi client device and an access point by allowing what appears to the application layer as a single network connection to simultaneously utilize multiple frequency bands and channels at the PHY and MAC layers.
Prior to Wi-Fi 7, a multi-band AP could make available multiple SSIDs (or a single SSID on multiple bands), and a client device would associate with exactly one band at a time. Sophisticated firmware could implement "band steering" to coax clients from the congested 2.4 GHz band to the less congested 5 GHz or 6 GHz bands, but this was a soft advisory mechanism only, and the client could ignore it. More critically, even with band steering, a client had a single active radio association — if that band experienced interference or congestion, the client's performance suffered, and re-association to another band incurred observable latency (typically 50–300 ms) due to the need to complete a new association handshake.
With MLO in 802.11be, a client device capable of MLO operation (designated an MLO Multi-Link Device, or MLD) establishes a single logical link with the AP's MLD entity that simultaneously encompasses two or more physical RF links across different bands. Traffic can be dynamically scheduled across any active link simultaneously, with the MAC layer handling all multiplexing transparently. The benefits are profound: effective throughput is the aggregate of all participating links; if one link suffers interference or congestion, traffic is dynamically shifted to other links with sub-millisecond latency; effective round-trip time is reduced because packets can always be transmitted on whichever link offers the earliest transmission opportunity; and reliability is substantially improved because a burst of interference on one band cannot cause packet loss if alternative links are available.
| MLO Parameter | Value / Setting | Notes |
|---|---|---|
| MLO Bands in Use | 5 GHz + 6 GHz Low + 6 GHz High (3-link MLO) | 2.4 GHz excluded from MLO owing to band congestion, narrow channel width, and legacy-device airtime overhead |
| Primary Link (Anchor) | 6 GHz Low (Radio 3) | Lowest latency link; preferred for latency-critical traffic classes |
| Secondary Links | 5 GHz (Radio 2) + 6 GHz High (Radio 4) | Load-balanced for throughput; failover to primary if needed |
| Max Aggregate MLO Throughput | ~28.8 Gbps (5G: 5.8 + 6GL: 11.5 + 6GH: 11.5) | Theoretical; real-world ~40–60% of theoretical |
| MLO Latency Target | ≤ 2 ms per link (intra-wing) | MLO aggregate latency ≤ 1 ms effective (best available link) |
| MLO Client Requirement | Wi-Fi 7 MLO-capable client (MLD) | Legacy Wi-Fi 6/5 clients associate normally on a single band |
| Traffic Steering in MLO | AP-directed, dynamic per-packet scheduling | Low-latency traffic prioritized to least-congested, lowest-latency link |
| MLO Mode | Enhanced MLO (eMLSR + STR modes supported) | STR (Simultaneous Transmit and Receive) preferred for max throughput |
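The per-packet steering behavior described in the table can be illustrated with a toy earliest-transmission-opportunity model. This is a conceptual sketch only, not vendor scheduler firmware; the link rates are the theoretical PHY figures from the radio table above.

```python
# Conceptual model of AP-directed per-packet MLO scheduling: each frame is
# assigned to the link offering the earliest transmission opportunity, so
# effective latency tracks the best available link.
import heapq

LINKS = {"5GHz": 5.8, "6GHz-Low": 11.5, "6GHz-High": 11.5}  # Gbps PHY rates

def schedule(frames_bytes, links=LINKS):
    """Greedy earliest-finish-time assignment of frames to MLO links."""
    free_at = [(0.0, name) for name in links]   # (time link is free in us, link)
    heapq.heapify(free_at)
    assignment = []
    for size in frames_bytes:
        t_free, name = heapq.heappop(free_at)
        airtime_us = size * 8 / (links[name] * 1e3)   # bits / (bits per us)
        assignment.append((name, t_free))
        heapq.heappush(free_at, (t_free + airtime_us, name))
    return assignment

for link, start in schedule([1500] * 8):        # eight full-size frames
    print(f"{link:<9} tx starts at {start:6.2f} us")
```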
One Asus ROG Rapture GT-BE98 Pro is designated as the anchor access point per floor or zone. The anchor AP serves as the primary high-capacity node for that zone, handling the densest client load and providing the reference BSSID for roaming coordination within the zone.
The Asus ZenWiFi Pro ET12 is deployed as the primary density-filling access point throughout all residential and mixed-use zones. Multiple ET12 units are deployed per floor at spacing intervals determined by RF modeling and site survey, supplementing the anchor GT-BE98 Pro and ensuring comprehensive, overlap-redundant coverage throughout each zone.
Wing C (Technical / Lab) utilizes the Asus ProArt series Wi-Fi 7 AP, optimized for workstation and creative-professional clients with larger attached files, NAS access, and high-sustained-throughput requirements. The ProArt aesthetic and management profile integrates cleanly with the technical character of Wing C.
AP placement is governed by three requirements simultaneously: coverage (every point in the compound receiving a usable signal from at least one AP), redundant overlap (every point receiving an adequate signal from at least two APs, enabling seamless roaming without coverage gaps), and capacity (sufficient APs deployed per zone such that no single AP serves more clients than its OFDMA/MU-MIMO scheduling engine can simultaneously serve at acceptable per-client throughput).
| Zone Type | Target AP Spacing | Target Client Density | Min. RSSI at Cell Edge | Primary Band (MLO Anchor) | Handoff Protocol |
|---|---|---|---|---|---|
| VR Arena / Wing D | 1 AP per ~35 m² | ≤ 20 clients/AP | −60 dBm (6 GHz) | 6 GHz Low (320 MHz) | 802.11r Fast BSS + 802.11k Neighbor Reports |
| Wing A Primary Residential | 1 AP per ~40 m² | ≤ 15 clients/AP | −65 dBm (6 GHz) | 6 GHz Low / High (MLO) | 802.11r + 802.11v BSS Transition |
| Wing B Guest / Common | 1 AP per ~55 m² | ≤ 25 clients/AP | −67 dBm (5 GHz) | 5 GHz + 6 GHz (MLO) | 802.11r + 802.11k |
| Wing C Technical | 1 AP per ~50 m² | ≤ 10 clients/AP | −65 dBm (6 GHz) | 6 GHz High (320 MHz) | 802.11r (wired-like roaming) |
| Underground Corridors | 1 AP per ~80 m² | ≤ 10 clients/AP | −70 dBm (2.4 GHz) | 2.4 GHz (long range) + 5 GHz | 802.11r |
| Exterior Perimeter | Sector APs at perimeter points | ≤ 30 clients/AP | −72 dBm (5 GHz) | 5 GHz | 802.11k neighbor-guided |
The compound network is segmented into distinct VLANs to enforce security boundaries, isolate traffic classes, simplify access control policy, and optimize broadcast domain size. Each VLAN is assigned a dedicated IPv4 subnet from the private address space (10.0.0.0/8) and carries an associated SSID for wireless clients that belong to that segment. All inter-VLAN traffic routing is performed at the core switching layer via SVIs as described in Section 8.2.
| VLAN ID | VLAN Name | IPv4 Subnet | Gateway (HSRP VIP) | DHCP Pool Range | SSID Mapping | Security Tier | Description |
|---|---|---|---|---|---|---|---|
| 1 | NATIVE (Mgmt) | 10.0.1.0/24 | 10.0.1.1 | 10.0.1.10–200 | None (wired only) | ★★★★★ Critical | Network management — switches, routers, APs. Isolated from all user VLANs. SSH/SNMP access only. |
| 10 | TRUSTED-PRIVATE | 10.0.10.0/23 | 10.0.10.1 | 10.0.10.10 – 10.0.11.200 | NGRF-PRIVATE | ★★★★ High Trust | Principal and family devices. Full internal and WAN access. QoS Priority 1 for VR and gaming. |
| 20 | WORKSTATION | 10.0.20.0/24 | 10.0.20.1 | 10.0.20.10–200 | NGRF-WORK | ★★★★ High Trust | Desktop workstations, NAS clients, creative workstations. Full LAN access. WAN access permitted. |
| 30 | VR-PRIORITY | 10.0.30.0/24 | 10.0.30.1 | 10.0.30.10–200 | NGRF-VR | ★★★ Medium Trust | VR headsets and gaming consoles. Strict QoS Priority 1 (VR) / Priority 2 (gaming). Low-latency path enforced. |
| 40 | GUEST-WIFI | 10.0.40.0/23 | 10.0.40.1 | 10.0.40.10 – 10.0.41.254 | NGRF-GUEST | ★★ Low Trust | Guest wireless clients. Internet access only. No inter-VLAN routing to any internal segment. Rate-limited per client. |
| 50 | IOT-ISOLATED | 10.0.50.0/23 | 10.0.50.1 | 10.0.50.10 – 10.0.51.254 | NGRF-IOT | ★ Very Low Trust | Smart home IoT devices (thermostats, lighting, appliances). Internet access only (for cloud services). Strict ACL: no LAN access. |
| 60 | SECURITY-CAM | 10.0.60.0/24 | 10.0.60.1 | 10.0.60.10–200 | NGRF-CCTV (hidden) | ★★★ Medium Trust | IP security cameras, NVR. Inbound to NVR only. No internet access. ACL: camera to NVR only. |
| 70 | SERVER-LAN | 10.0.70.0/24 | 10.0.70.1 | Static only | None (wired only) | ★★★★★ Critical | Servers, NAS, hypervisors, NMS. Strictly controlled inbound access from VLAN 10/20 only. No DHCP (all static IPs). |
| 80 | VOIP-QoS | 10.0.80.0/24 | 10.0.80.1 | 10.0.80.10–200 | None / wired | ★★★ Medium Trust | VoIP handsets, intercom systems. DSCP EF marking enforced. Strict priority queuing at all switch layers. |
| 90 | DMZ | 10.0.90.0/28 | 10.0.90.1 | Static only | None (wired only) | ★★★★ High Trust | Externally-accessible services (web server, game server, VPN endpoint). Stateful firewall between DMZ and all internal VLANs. |
| 99 | QUARANTINE | 10.0.99.0/24 | 10.0.99.1 | 10.0.99.10–250 | Dynamic (enforcement) | ☆ Untrusted | Dynamically assigned by 802.1X / NAC to devices that fail authentication or compliance checks. Internet access only; blocked from all internal resources. |
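The addressing plan above can be validated mechanically with the Python standard library. The sketch below derives the gateway and usable-host range for each subnet and asserts that no two VLAN subnets overlap.

```python
# Sanity-checking the VLAN addressing plan against the table above.
import ipaddress

VLANS = {
    1:  "10.0.1.0/24",  10: "10.0.10.0/23", 20: "10.0.20.0/24",
    30: "10.0.30.0/24", 40: "10.0.40.0/23", 50: "10.0.50.0/23",
    60: "10.0.60.0/24", 70: "10.0.70.0/24", 80: "10.0.80.0/24",
    90: "10.0.90.0/28", 99: "10.0.99.0/24",
}

nets = [ipaddress.ip_network(c) for c in VLANS.values()]
# No VLAN subnet may overlap any other.
assert not any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])

for vid, cidr in VLANS.items():
    hosts = list(ipaddress.ip_network(cidr).hosts())
    print(f"VLAN {vid:>2} {cidr:<14} gateway {hosts[0]}  "
          f"usable {hosts[0]} - {hosts[-1]} ({len(hosts)} hosts)")
```

The first usable host of every subnet matches the HSRP virtual gateway listed in the table, confirming the 10.0.&lt;VLAN&gt;.1 convention.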
A comprehensive Quality of Service framework is implemented across all layers of the compound network — from the edge routers through the core and wing switches to the wireless APs. The QoS framework ensures that latency-sensitive, mission-critical traffic classes receive guaranteed minimum bandwidth and maximum latency treatment, even during periods of network congestion. The framework follows the DiffServ (Differentiated Services) model, using DSCP markings applied at the edge and honored throughout the switching and routing fabric.
| Traffic Class | Applications | DSCP Value | Per-Hop Behavior | WFQ Queue | Max Latency Target | Bandwidth Guarantee |
|---|---|---|---|---|---|---|
| Class 1 — VR Realtime | VR headset streaming, haptic feedback | EF (46) | Expedited Forwarding | Q0 (Strict Priority) | ≤ 2 ms (intra-wing) | Reserved 20% WAN |
| Class 2 — Interactive Gaming | Online gaming (UDP), game downloads | CS4 (32) | Assured Forwarding 41 | Q1 | ≤ 10 ms (intra-wing) | 15% WAN |
| Class 3 — VoIP / Video Call | Zoom, Teams, phone calls, intercom | EF (46) / CS3 | Expedited Forwarding | Q0 (Strict Priority) | ≤ 5 ms | 5% WAN (reserved) |
| Class 4 — Streaming Video | 4K/8K/HDR Netflix, YouTube, VoD | AF41 (34) | Assured Forwarding 41 | Q2 | ≤ 50 ms | 30% WAN |
| Class 5 — Business-Critical | NAS I/O, hypervisor traffic, backups | AF31 (26) | Assured Forwarding 31 | Q2 | ≤ 100 ms | 15% WAN |
| Class 6 — General Web / Apps | HTTP/S browsing, app traffic | CS0 (0) / AF21 | Best Effort / Assured | Q3 | Best Effort | 10% WAN |
| Class 7 — Bulk Transfer | Software updates, large downloads, torrents | CS1 (8) | Scavenger / Lower Effort | Q4 | Best Effort (deprioritized) | 5% WAN (remaining) |
| Class 8 — Network Control | OSPF, VRRP, BGP, STP BPDUs, SNMP | CS6 (48) / CS7 | Network Control | Q0 (Strict Priority) | ≤ 1 ms | Always served first |
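For DSCP markings to be honored end to end, traffic must be marked at the source (or re-marked by an edge classifier). The sketch below shows source-side marking on a Linux host: the ToS byte carries DSCP in its upper six bits, so the socket option value is the DSCP value shifted left by two. The VR telemetry destination address is hypothetical.

```python
# Marking application traffic at the source with DSCP values from the
# QoS class table above (Linux; ToS byte = DSCP << 2).
import socket

DSCP = {"EF": 46, "CS4": 32, "AF41": 34, "AF31": 26, "CS1": 8, "CS6": 48}

def udp_socket_with_dscp(dscp: int) -> socket.socket:
    """UDP socket whose outbound packets carry the given DSCP marking."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

# Example: a VR telemetry sender marked Expedited Forwarding (Class 1).
# 10.0.30.50 is a hypothetical host on the VR-PRIORITY VLAN (30).
vr_sock = udp_socket_with_dscp(DSCP["EF"])
vr_sock.sendto(b"frame-timing-probe", ("10.0.30.50", 9999))
```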
The security architecture of the compound network employs a defense-in-depth strategy, in which multiple independent security controls exist at every layer of the network stack. No single security control is relied upon exclusively. The compromise of any one layer or control does not grant an attacker unrestricted access to compound network resources. The following table summarizes the security controls applied at each layer.
| Layer | Security Control | Mechanism / Protocol | Enforcement Point |
|---|---|---|---|
| WAN / ISP | Ingress filtering; DDoS mitigation | BGP blackholing, ISP-level scrubbing | Edge routers + ISP |
| Edge | Stateful firewall; NAT; IDS/IPS | MikroTik RouterOS firewall chains; connection tracking; Suricata IDS integration | Edge routers (RTR-01/02) |
| Edge | VPN gateway | WireGuard + IPsec IKEv2 for remote access | Edge routers |
| Core / Distribution | VLAN isolation; ACLs | IEEE 802.1Q; IP ACLs at SVI level on core switches | CORE-SW-01/02, Wing Cores |
| Access | Port security; DHCP snooping | MAC limiting per port; DHCP snooping binding table; DAI (Dynamic ARP Inspection) | All access switches |
| Access | 802.1X port authentication | IEEE 802.1X with RADIUS back-end (FreeRADIUS or Cisco ISE) | Wired ports; wireless SSID |
| Wireless | WPA3-Enterprise authentication | IEEE 802.11be WPA3-Enterprise with 192-bit security suite (EAP-TLS) | All APs — NGRF-PRIVATE, NGRF-WORK, NGRF-VR SSIDs |
| Wireless | WPA3-Personal | SAE (Simultaneous Authentication of Equals) with strong passphrase | NGRF-GUEST, NGRF-IOT SSIDs |
| Wireless | SSID isolation; AP isolation | Client isolation per VLAN; no peer-to-peer traffic on GUEST / IOT SSIDs | All APs |
| Wireless | Management Frame Protection | IEEE 802.11w (PMF — Protected Management Frames) mandatory on all SSIDs | All APs |
| Network-wide | DNS security | DNS-over-TLS (DoT) or DNS-over-HTTPS (DoH) to upstream resolver; internal DNS server for split-horizon resolution | Edge router + internal DNS |
| Network-wide | NTP authentication | Authenticated NTP (NTPsec) synchronization; all devices locked to internal NTP server | All managed devices |
| Management | OOB management network | Separate management VLAN (VLAN 1) accessible only via jump server; no direct management access from user VLANs | All infrastructure devices |
| Management | Encrypted management protocols | SSHv2 only (no Telnet); HTTPS-only web management; SNMPv3 with auth+privacy | All infrastructure devices |
The physical cabling infrastructure is the passive foundation upon which all active network components operate. Deficiencies in the physical cabling plant will limit the performance of all overlying active equipment, regardless of how capable that equipment may be. This specification therefore imposes strict requirements on all cabling media, connectors, termination quality, and conduit installation, conforming to TIA-568.2-D (copper) and TIA-568.3-D (fiber).
| Application | Cable Category / Type | Connector | Max Segment Length | Supported Data Rate | Deployment Zone | |
|---|---|---|---|---|---|---|
| AP to Access Switch (PoE++ uplink) | Cat 6A U/FTP (23 AWG, shielded) | RJ45 (T568B) | 100m channel (90m permanent link + 10m patch) | 2.5GBASE-T (2.5 Gbps) | All floors, all wings | |
| Wired workstation drops | Cat 6A U/FTP (23 AWG, shielded) | RJ45 (T568B) | 100m channel (90m permanent link + 10m patch) | 2.5GBASE-T / 10GBASE-T (2.5 or 10 Gbps) | All wings (workstation zones) | |
| Access Switch → Wing Core (uplink) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | 10 km (well within compound) | 25GBASE-LR (25 Gbps) — 2 fibers per link; 2×25G LACP = 50G per switch | IDF to Wing Core MDF; 25GBASE-LR SFP28 transceivers both ends | |
| Wing Core → Central Core (Wings A–D uplink) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | 10 km | 100GBASE-LR4 (100 Gbps) — 1×100G to each core chassis; 2×100G MLAG = 200G per wing | Wing MDF to Central MDF; 100GBASE-LR4 QSFP28; C9300X-NM-4C required on both ends | |
| Wing Core → Central Core (Underground uplink) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | 10 km | 25GBASE-LR (25 Gbps) — 1×25G to each core chassis; 2×25G MLAG = 50G | UG MDF to Central MDF; 25GBASE-LR SFP28; native C9300X-24Y SFP28 ports | |
| Central Core → Edge Router (uplink) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | 10 km | 25GBASE-LR (25 Gbps) — via C9300X-24Y native SFP28 ports (NM-4C ports reserved for wing uplinks) | Core MDF to Edge Router; 25GBASE-LR SFP28 transceivers | |
| ISP ONT → Edge Router | OS2 SM Fiber (9/125 μm) or Cat 6A | LC/UPC or RJ45 | 10 km / 100m | 10GBASE-LR / 1GBASE-T | NDR room to Equipment Room | |
| Underground tunnel backbone | OS2 SM Fiber (9/125 μm) — armored | LC/UPC | Per run — up to 500m | 10GBASE-LR | All underground conduit runs | |
| Exterior perimeter runs | OS2 SM Fiber — direct burial armored | LC/UPC (weatherproof enclosures) | Per run | 10GBASE-LR | Outdoor conduit, direct burial | |
| Core inter-chassis cross-connect (ICL) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | < 5m (within equipment room) | 25GBASE-LR (25 Gbps) — 2×25G LACP = 50G ICL aggregate; native C9300X-24Y SFP28 ports (SFP28 DAC, or 25GBASE-SR over OM4 multimode, acceptable at <5m) | Equipment room — CORE-SW-01 to CORE-SW-02 | |
| ¹ Cat 6A shielded (U/FTP) is mandatory throughout for PoE++ runs. Unshielded Cat 6A may be acceptable for non-PoE wired drops subject to engineer approval. ² All fiber runs shall be tested with an OTDR at 1310 nm and 1550 nm wavelengths post-installation. Test results shall be archived. ³ All Cat 6A runs shall be tested to the TIA-568.2-D Cat 6A specification minimum. | | | | | | |
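A fiber loss-budget estimate complements the mandated OTDR acceptance testing. The sketch below uses typical OS2 attenuation and connector/splice loss coefficients (assumed values, to be superseded by measured results) and compares a worst-case intra-compound run against an approximate 100GBASE-LR4 channel budget.

```python
# Optical loss-budget estimate for the longest plausible OS2 runs.
# Coefficients are typical assumed values, not measurements.
ATTEN_DB_PER_KM = {1310: 0.35, 1550: 0.22}   # typical OS2 attenuation
CONNECTOR_LOSS_DB = 0.3                      # per mated LC/UPC pair (typical)
SPLICE_LOSS_DB = 0.1                         # per fusion splice (typical)

def link_loss_db(length_km: float, connectors: int, splices: int,
                 wavelength: int = 1310) -> float:
    return (ATTEN_DB_PER_KM[wavelength] * length_km
            + CONNECTOR_LOSS_DB * connectors
            + SPLICE_LOSS_DB * splices)

# Longest intra-compound run is well under 1 km; e.g. a 500 m underground
# backbone segment with two patch-panel connector pairs and two splices:
loss = link_loss_db(0.5, connectors=2, splices=2)
print(f"{loss:.2f} dB")   # ~0.98 dB, far inside an ~6 dB 100GBASE-LR4 budget
```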
| Component | Qty. | Power per Unit (W) | Total (W) | Notes |
|---|---|---|---|---|
| MikroTik CCR2004 Edge Router | 2 | ~75W (loaded) | 150W | Includes dual PSU overhead |
| Cisco Catalyst 9300X Central Core | 2 | ~250W (loaded, no PoE) | 500W | Switching fabric + SFP+ transceivers |
| Cisco Catalyst 9300X Wing Core | 5 | ~180W (loaded, no PoE) | 900W | Per-wing distribution, no PoE at this tier |
| Ubiquiti USW-Pro-XG-24-PoE (switch only) | ~20 | ~50W (switch, excluding PoE load) | 1,000W | Estimate based on 4 floors × 5 wings; actual count TBD |
| PoE Budget — APs per switch | ~20 switches × 12 APs avg. | ~40W per AP (GT-BE98 Pro) | ~9,600W | 240 APs × 40W worst case; realistic mixed draw ≈ 8,100W per the AP rows below (≈ 405W average PoE load per switch) |
| Asus ROG GT-BE98 Pro (PoE++) | ~40 (anchor APs) | ~40W | 1,600W | Included in PoE budget above |
| Asus ZenWiFi Pro ET12 (PoE++) | ~200 (density APs) | ~30–35W | ~6,500W | Included in PoE budget above |
| Server / NAS infrastructure | ~6 units (est.) | ~300W avg. | ~1,800W | VLAN 70 rack equipment |
| NMS / monitoring server | 1 | ~150W | 150W | Unifi Controller + NMS + logging |
| Cooling (network rooms only) | Per room | ~500W | ~2,500W | 4 wing MDF rooms + central equipment room (5 rooms × 500W) |
| ESTIMATED TOTAL NETWORK INFRASTRUCTURE DRAW | | | ~16,600W | ~16.6 kW peak; design UPS/generator for ≥ 25 kW with headroom |
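The roll-up below recomputes the table's total from its row values; the AP rows are excluded because they are already folded into the PoE budget line. It confirms the ~16.6 kW figure and the margin against the 25 kW UPS/generator design target.

```python
# Recomputing the infrastructure power roll-up from the table rows.
# AP draw rows are folded into the PoE budget line, so they are excluded
# here to avoid double counting.
loads_w = {
    "edge_routers":    2 * 75,
    "central_cores":   2 * 250,
    "wing_cores":      5 * 180,
    "access_switches": 20 * 50,
    "poe_ap_budget":   240 * 40,    # worst case: all 240 APs at 40 W
    "servers_nas":     6 * 300,
    "nms_server":      1 * 150,
    "cooling":         5 * 500,     # 4 wing MDFs + central equipment room
}
total_w = sum(loads_w.values())
print(f"Total: {total_w} W  ({total_w / 1000:.1f} kW)")   # 16600 W = 16.6 kW
assert total_w < 25_000, "exceeds the 25 kW UPS/generator design target"
```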
The following performance benchmarks represent the design-phase targets for the compound network under normal operating conditions (no active equipment failures, no extreme concurrent load events). These figures shall form the basis of internal performance validation testing during network commissioning and shall be re-validated periodically thereafter. All benchmark figures are accompanied by the measurement conditions under which they apply.
| Benchmark Metric | Target Value | Measurement Conditions | Acceptable Minimum |
|---|---|---|---|
| WAN Downstream — PLDT 10G (single host) | ≥ 9,500 Mbps (9.5 Gbps) | iperf3 to external speedtest node; single wired client on VLAN 10 | ≥ 8,000 Mbps |
| WAN Aggregate (all 3 ISPs) | ≥ 11,500 Mbps | Simultaneous multi-stream to distinct external endpoints via each ISP | ≥ 10,000 Mbps |
| Wired LAN throughput (intra-core) | ≥ 2,400 Mbps per client (2.5G link) | iperf3 between two wired clients on same access switch; Cat 6A 2.5G ports | ≥ 2,200 Mbps |
| Wi-Fi 7 Single Client (6 GHz, 320 MHz, 4×4) | ≥ 4,000 Mbps (real-world) | Wi-Fi 7 MLO client at 2m distance from AP; iperf3 UDP; 6 GHz Low radio | ≥ 3,000 Mbps |
| Wi-Fi 7 MLO Aggregate (3 bands) | ≥ 9,000 Mbps (real-world) | MLO-capable Wi-Fi 7 client at 2m; iperf3 multi-stream across all MLO links | ≥ 7,000 Mbps |
| Intra-wing wireless RTT (same AP) | ≤ 1 ms | Ping between two Wi-Fi 7 clients on same AP; 1000-packet ICMP test | ≤ 2 ms |
| Intra-wing wireless RTT (adjacent APs) | ≤ 3 ms | Ping between two clients on adjacent APs, same floor; 802.11r roaming established | ≤ 5 ms |
| Cross-wing wireless RTT (core path) | ≤ 8 ms | Ping between clients on Wing A and Wing D; traffic path through core | ≤ 12 ms |
| VR streaming latency (Wing D intra-wing) | ≤ 3 ms (one-way) | Simulated VR workload; UDP 72 Mbps stream; timestamped packet RTT / 2 | ≤ 6 ms |
| WAN failover time (PLDT → Globe) | ≤ 3 seconds | Hard disconnect PLDT ONT; measure time to restored internet on test host | ≤ 5 seconds |
| Core switch failover time (CORE-SW-01 failure) | ≤ 1 second | Hard power-off CORE-SW-01; measure interruption time on active TCP session | ≤ 2 seconds |
| Edge router VRRP failover | ≤ 3 seconds | Hard power-off EDGE-RTR-01; measure interruption time on active TCP session | ≤ 5 seconds |
| Wireless roaming transition (802.11r) | ≤ 20 ms | Mobile client walking between adjacent APs; measure RSSI and re-auth time | ≤ 50 ms |
| Concurrent wireless clients (compound-wide) | ≥ 500 simultaneous clients | All APs loaded with simulated clients via Wi-Fi performance test framework | ≥ 300 clients |
| Network availability (uptime) | ≥ 99.95% per calendar year (≤ 4.4 hrs downtime) | Continuous monitoring via NMS; includes all planned maintenance windows | ≥ 99.9% |
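Commissioning validation against these targets can be partially automated. The sketch below wraps iperf3 (assumed installed, with a server listening on a hypothetical VLAN 20 test host) and compares measured throughput against a table minimum.

```python
# Minimal commissioning harness: run iperf3 against a benchmark target and
# compare measured throughput to the table's acceptable minimum.
import json
import subprocess

def measure_gbps(server: str, streams: int = 8, seconds: int = 10) -> float:
    out = subprocess.run(
        ["iperf3", "-c", server, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, check=True, text=True,
    )
    result = json.loads(out.stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

# Example: wired LAN benchmark, acceptable minimum 2.2 Gbps on a 2.5G port.
# 10.0.20.100 is a hypothetical iperf3 server on the WORKSTATION VLAN (20).
gbps = measure_gbps("10.0.20.100")
print(f"{gbps:.2f} Gbps -> {'PASS' if gbps >= 2.2 else 'FAIL'}")
```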
The following failure mode analysis enumerates each credible single-component failure scenario, the redundant path that absorbs it, the failover mechanism involved, and the expected downtime and service impact.
| Failure Scenario | Failed Component | Redundant Path | Failover Mechanism | Est. Downtime | Impact |
|---|---|---|---|---|---|
| Primary ISP failure | PLDT ONT / circuit | Globe 1G + Converge 1G | BGP route withdrawal, auto-failover | < 3 seconds | WAN speed reduced to 2 Gbps; zero LAN impact |
| Secondary ISP failure (during PLDT outage) | Globe circuit | Converge 1G | BGP withdrawal, route to Converge | < 3 seconds | WAN speed reduced to 1 Gbps; LAN unaffected |
| Edge router failure (RTR-01) | EDGE-RTR-01 (MASTER) | EDGE-RTR-02 (BACKUP) | VRRP promotion; BACKUP → MASTER | ~3 seconds | Brief TCP interruption; UDP (gaming/VR) resumes < 1s |
| Central core switch failure | CORE-SW-01 | CORE-SW-02 via cross-link | LACP uplink failure; wing cores re-route via surviving core | < 1 second | Minimal — cross-chassis LACP handles transparently |
| Wing core switch failure | e.g., WING-A-CORE | None (single wing core per wing) | N/A — Wing A access switches lose uplink | Until replaced | Wing A LAN and Wi-Fi offline until replacement |
| Access switch failure | e.g., ACCSS-A-FL2 | None (single access switch per floor) | N/A — Floor 2 Wing A ports offline | Until replaced | Floor 2 Wing A APs and wired drops offline; other floors unaffected |
| Single AP failure | Any one AP | Adjacent APs (overlap coverage) | 802.11k/r — clients roam to neighbor APs automatically | < 1 second (client roam) | Minor coverage reduction; no connectivity loss for mobile clients |
| Edge router PSU failure | One PSU on RTR-01 | Secondary PSU on same router | Automatic (hot-swap PSU redundancy) | Zero | None |
| Core switch PSU failure | One PSU on CORE-SW-01 | Secondary PSU on same switch | Automatic (hot-swap) | Zero | None |
| UPS failure (mains present) | UPS battery/inverter | Mains power continues | Bypass relay | < 20 ms (bypass relay transfer) | UPS battery protection lost; equipment remains online on mains |
| Power outage (mains loss) | Mains electricity | UPS (30 min) → Generator | UPS instantaneous; generator typically starts in 10–30s | Zero (UPS covers generator spin-up) | None for network equipment within UPS/generator coverage |
| ¹ Wing Core switches represent the only layer in the design without hardware redundancy. A future revision of this specification may address this with dual wing core switches per wing for critical wings (A and D). ² AP count and placement provide inherent redundancy at the wireless layer; a single AP failure is invisible to mobile clients within standard deployment density. | |||||
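Failover times in the table are measured during acceptance testing by deliberately failing a component while a connectivity probe runs. The sketch below is a minimal Linux-based stopwatch for that procedure; it requires the standard `ping` CLI.

```python
# Failover stopwatch: poll an external anchor with single ICMP probes and
# report how long connectivity is lost when a component is failed (Linux).
import subprocess
import time

def outage_seconds(target: str = "1.1.1.1", interval: float = 0.2) -> float:
    """Block until an outage is observed, then return its duration."""
    def up() -> bool:
        return subprocess.run(
            ["ping", "-c", "1", "-W", "1", target],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode == 0

    while up():                 # wait for the induced failure
        time.sleep(interval)
    t0 = time.monotonic()
    while not up():             # wait for the redundant path to take over
        time.sleep(interval)
    return time.monotonic() - t0

print(f"Observed outage: {outage_seconds():.1f} s")  # expect < 3 s WAN failover
```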
All network infrastructure components in this design are fully managed devices, exposing complete management interfaces for centralized configuration, monitoring, and telemetry collection. The compound shall maintain a dedicated Network Management System (NMS) instance, deployed on the SERVER-LAN VLAN (VLAN 70) on dedicated server hardware within the equipment room. The NMS provides the single pane of glass through which all network infrastructure is observed and administered.
| Device Class | Management Platform | Protocol | Key Functions |
|---|---|---|---|
| MikroTik Edge Routers | The Dude (MikroTik) + Grafana + Prometheus | RouterOS API, SNMP v3, REST, SSH | Real-time traffic graphs, BGP state, VRRP health, firewall hit counters, CPU/RAM utilization |
| Cisco Catalyst Core/Wing Switches | Cisco DNA Center or Catalyst Center + SNMP | NETCONF/YANG, gRPC telemetry, SNMP v3, SSH | Interface utilization, STP topology, VLAN state, MAC table, spanning tree events, QoS queue statistics |
| Ubiquiti Access Switches | Ubiquiti UniFi Network Controller | Proprietary UniFi, SNMP v3 | PoE budget real-time, port utilization, VLAN assignment, firmware management, topology view |
| Asus Wi-Fi 7 Access Points | Asus AiMesh Controller + UniFi integration (where applicable) | AiMesh proprietary, SNMP, TR-069 | Client association, RF channel utilization, RSSI heatmaps, roaming event logs, MLO band state |
| All devices (unified) | Grafana + InfluxDB + Prometheus stack | SNMP polling, gRPC streaming, syslog | Unified dashboard — all KPIs, alerts, historical trending, SLA reporting |
| Syslog collection | Graylog or OpenSearch / ELK Stack | Syslog (UDP 514 / TCP 6514 TLS) | Centralized log aggregation, security event correlation, audit trail |
| Network Time | Internal NTP server (Chrony) | NTPv4 | Authoritative time for all network devices and servers; GPS-disciplined if available |
| DNS | Pi-hole + Unbound (dual server) | DNS-over-TLS upstream; standard DNS internal | Split-horizon DNS, ad filtering, internal hostname resolution, DHCP integration |
The network infrastructure defined in this document, designated NGRF-NET-001 Revision 1.0, constitutes a fully realized, enterprise-grade, campus-scale network designed for deployment in a multi-wing residential compound. The architecture provides an aggregate WAN capacity of twelve gigabits per second across three independent ISP connections, a switching fabric with a combined central core capacity of nine hundred and sixty gigabits per second, and a wireless fabric capable of delivering approximately nine to nineteen gigabits per second of aggregate Wi-Fi 7 throughput to a single client at close range, with compound-wide wireless capacity scaling proportionally with the number of deployed access points.
The design achieves its stated goals across all five primary design dimensions:
High Throughput is achieved through the deployment of PLDT's 10G fiber plan as the primary WAN, 2×100G MLAG uplinks from every wing core, 2×25G LACP uplinks from every access switch, multi-gigabit PoE++ at every AP drop point, and IEEE 802.11be Wi-Fi 7 with 320 MHz 6 GHz channels and Multi-Link Operation across all access points. No link in the path from ISP to client is a bottleneck relative to the client's maximum achievable throughput.
Redundancy and High Availability is achieved through a comprehensive layered redundancy strategy: three ISPs, dual edge routers in VRRP, dual core switches in cross-connected HA, LACP uplinks at every distribution tier, redundant PSUs in all critical equipment, and UPS plus generator backup power. The result is a network that survives virtually any single hardware failure without service interruption.
Seamless Wireless Roaming is achieved through dense Wi-Fi 7 AP deployment conforming to per-zone coverage engineering targets, universal deployment of IEEE 802.11k/r/v roaming assistance protocols, and the MLO capability of Wi-Fi 7 which inherently reduces roaming latency by maintaining multiple simultaneous radio associations. A client moving at walking pace through the compound will experience zero perceptible wireless connectivity interruption.
VR Readiness is achieved through the combined effect of the 6 GHz quad-band Wi-Fi 7 platform, dense AP spacing in Wing D (VR arena), strict QoS enforcement placing VR traffic in the Expedited Forwarding (EF, DSCP 46) queue at all network layers, and the MLO architecture's inherent latency advantages; a client-side marking sketch follows the Scalability paragraph below. The target wireless latency of ≤ 3 ms one-way (≤ 6 ms RTT) within Wing D sits comfortably within the ≤ 7 ms tolerance assumed for consumer and professional VR headsets.
Scalability is achieved by specifying enterprise-grade components at all switching tiers, all of which support expansion of port count, additional stacking members, and additional VLANs without architectural redesign. The addition of new floors, wings, or facilities to the compound requires only the addition of further wing core and access switching capacity, with the central core and edge layer having abundant capacity reserve to absorb substantial growth.
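To make the Expedited Forwarding requirement concrete: EF corresponds to DSCP code point 46, and an endpoint can request it per socket, as in the minimal Python sketch below. The address and port are placeholders, not values from this specification, and the actual enforcement point in this design is the switch and AP QoS configuration, which must be set to trust (or re-mark) the field.

```python
"""Illustrative sketch: tagging a UDP stream with DSCP EF (46) so the
switching fabric's Expedited Forwarding queue can classify VR traffic.
Destination address and port are placeholders."""
import socket

EF_TOS = 46 << 2  # DSCP 46 occupies the upper six bits of the IP TOS byte -> 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every datagram sent from this socket now carries DSCP EF end-to-end,
# provided intermediate devices are configured to trust the marking.
sock.sendto(b"vr-frame-payload", ("10.0.30.50", 9000))
```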
| Roadmap Item | Description | Priority | Dependencies | Est. Timeline |
|---|---|---|---|---|
| Wing Core Redundancy | Add a second Cisco 9300X per wing (dual wing cores) to eliminate the one remaining single point of failure per wing | HIGH | Budget authorization; additional rack space per wing MDF | Year 2 post-deployment |
| 25G Access Layer Upgrade | Replace 2.5G access switches with 25G multi-gigabit switches as Wi-Fi 7 AP maximum practical throughput approaches 10G per AP | MEDIUM | Client device Wi-Fi 7 adoption; AP firmware maturity | Year 3–5 |
| Wi-Fi 7 Rev. 2 / Wi-Fi 8 Ready | Cabling, PoE, and management infrastructure are sized to support next-generation AP hardware with no changes beyond AP replacement | LOW (pre-planned) | IEEE 802.11bn (Wi-Fi 8) ratification; compatible hardware availability | Year 5–7 |
| PLDT 100G Upgrade | Upgrade PLDT ISP connection to 100G if/when commercially available in PH; edge router replacement to CCR2216 or equivalent required | LOW | PLDT commercial 100G residential availability in PH | Year 5+ |
| Private 5G / mmWave Overlay | Deploy private 5G NR small cells or mmWave 802.11ay point-to-point links for highest-density zones (Wing D VR arena) as complementary ultra-low-latency layer | RESEARCH | NTC licensing for private 5G; hardware maturity and cost | Year 3–5 |
| SD-WAN Overlay | Implement SD-WAN software overlay (Cisco Viptela or MikroTik MPLS) across all three ISPs for application-aware intelligent path selection beyond basic BGP policy | MEDIUM | SD-WAN platform licensing | Year 2 |
| Zero Trust Network Access (ZTNA) | Implement ZTNA platform (e.g., Cloudflare Zero Trust, Cisco Duo) for all remote access and inter-VLAN access policy, replacing static ACL-based controls | MEDIUM | Identity provider integration; client agent deployment | Year 2–3 |
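The SD-WAN roadmap item turns on application-aware path selection, i.e., steering each traffic class to whichever ISP currently scores best rather than following a static BGP policy. The toy sketch below illustrates the decision logic under assumed per-path probe metrics and weights; the metric values, weightings, and the non-PLDT ISP names are illustrative placeholders, not values drawn from this specification.

```python
"""Toy model of application-aware WAN path selection across the three ISPs.
Probe metrics, class weights, and ISP names other than PLDT are illustrative
placeholders; a production SD-WAN platform derives these from live telemetry."""
from dataclasses import dataclass

@dataclass
class PathMetrics:
    latency_ms: float
    jitter_ms: float
    loss_pct: float

# Hypothetical instantaneous probe results, one per ISP uplink.
# PLDT is shown with transient 0.8% loss to illustrate class-dependent steering.
PATHS = {
    "PLDT-10G":  PathMetrics(latency_ms=3.0,  jitter_ms=0.5, loss_pct=0.8),
    "ISP-B-1G":  PathMetrics(latency_ms=7.0,  jitter_ms=1.2, loss_pct=0.1),
    "ISP-C-1G":  PathMetrics(latency_ms=12.0, jitter_ms=3.0, loss_pct=0.4),
}

# Per-application weighting: VR punishes jitter and loss far harder than bulk.
WEIGHTS = {
    "vr":   (1.0, 8.0, 50.0),   # (latency, jitter, loss) weights
    "bulk": (0.2, 0.1, 1.0),
}

def best_path(app: str) -> str:
    """Return the ISP whose weighted cost is lowest for this traffic class."""
    wl, wj, wp = WEIGHTS[app]
    cost = lambda m: wl * m.latency_ms + wj * m.jitter_ms + wp * m.loss_pct
    return min(PATHS, key=lambda name: cost(PATHS[name]))

print("vr  ->", best_path("vr"))    # avoids the lossy path
print("bulk->", best_path("bulk"))  # stays on the lowest-latency path
```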
| Network Block | Prefix (usable hosts) | VLAN | Gateway (VRRP VIP) | DHCP Range | Static Range | Purpose |
|---|---|---|---|---|---|---|
| 10.0.1.0 | /24 (254) | VLAN 1 | 10.0.1.1 | 10.0.1.10–200 | 10.0.1.201–254 | Network Management / OOB |
| 10.0.10.0 | /23 (510) | VLAN 10 | 10.0.10.1 | 10.0.10.10–10.0.11.199 | 10.0.11.200–254 | Trusted Private (Principal + Family) |
| 10.0.20.0 | /24 (254) | VLAN 20 | 10.0.20.1 | 10.0.20.10–200 | 10.0.20.201–254 | Workstations / Creative Lab |
| 10.0.30.0 | /24 (254) | VLAN 30 | 10.0.30.1 | 10.0.30.10–200 | — | VR Headsets / Gaming (QoS Priority) |
| 10.0.40.0 | /23 (510) | VLAN 40 | 10.0.40.1 | 10.0.40.10–10.0.41.254 | — | Guest Wi-Fi (Internet-only) |
| 10.0.50.0 | /23 (510) | VLAN 50 | 10.0.50.1 | 10.0.50.10–10.0.51.254 | — | IoT Devices (Isolated) |
| 10.0.60.0 | /24 (254) | VLAN 60 | 10.0.60.1 | 10.0.60.10–200 | 10.0.60.201–254 | Security Cameras / NVR |
| 10.0.70.0 | /24 (254) | VLAN 70 | 10.0.70.1 | Static only | 10.0.70.2–254 | Servers / NAS / Hypervisors |
| 10.0.80.0 | /24 (254) | VLAN 80 | 10.0.80.1 | 10.0.80.10–200 | 10.0.80.201–254 | VoIP / Intercom |
| 10.0.90.0 | /28 (14) | VLAN 90 | 10.0.90.1 | Static only | 10.0.90.2–14 | DMZ (externally accessible services) |
| 10.0.99.0 | /24 (254) | VLAN 99 | 10.0.99.1 | 10.0.99.10–250 | — | Quarantine (non-compliant devices) |
| 10.0.200.0 | /30 (2) | — | — | Static | 10.0.200.1–2 | VRRP Link — RTR-01 ↔ RTR-02 |
| 10.0.201.0 | /30 (2) | — | — | Static | 10.0.201.1–2 | OSPF Link — RTR-01 ↔ CORE-SW-01 |
| 10.0.202.0 | /30 (2) | — | — | Static | 10.0.202.1–2 | OSPF Link — RTR-01 ↔ CORE-SW-02 |
| 10.0.203.0 | /30 (2) | — | — | Static | 10.0.203.1–2 | OSPF Link — RTR-02 ↔ CORE-SW-01 |
| 10.0.204.0 | /30 (2) | — | — | Static | 10.0.204.1–2 | OSPF Link — RTR-02 ↔ CORE-SW-02 |
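The addressing plan above can be validated mechanically. The sketch below, which transcribes three representative rows by hand, uses Python's standard-library ipaddress module to confirm usable-host counts and gateway membership; extending it to every row is straightforward.

```python
"""Sanity-checking the VLAN addressing plan with the standard-library
ipaddress module. The PLAN dict transcribes three rows of the table above;
it is illustrative, not a generated artifact of this specification."""
import ipaddress

PLAN = {
    10: ("10.0.10.0/23", "10.0.10.1"),   # Trusted Private
    30: ("10.0.30.0/24", "10.0.30.1"),   # VR Headsets / Gaming
    90: ("10.0.90.0/28", "10.0.90.1"),   # DMZ
}

for vlan, (cidr, gw) in PLAN.items():
    net = ipaddress.ip_network(cidr)
    gateway = ipaddress.ip_address(gw)
    usable = net.num_addresses - 2          # subtract network + broadcast
    assert gateway in net, f"VLAN {vlan}: gateway outside subnet"
    print(f"VLAN {vlan}: {cidr} -> {usable} usable hosts, gateway {gw} OK")
```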
| Line | Component | Part Number / SKU | Qty. | Unit Price (USD est.) | Extended (USD est.) | Lead Time |
|---|---|---|---|---|---|---|
| 1 | MikroTik CCR2004-1G-12S+2XS Edge Router | CCR2004-1G-12S+2XS | 2 | $1,450 | $2,900 | 2–4 wks |
| 2 | Cisco Catalyst 9300X-24Y-A Switch (central core ×2 and wing core ×5 — all tiers now unified on same chassis) | C9300X-24Y-A | 7 | $12,000 | $84,000 | 4–8 wks |
| 3 | Cisco C9300X-NM-4C Network Module (4× QSFP28 100G/40G dual-rate) — installed in all 7 chassis | C9300X-NM-4C= | 7 | $3,800 | $26,600 | 2–4 wks |
| 3b | 100GBASE-LR4 QSFP28 Transceiver, OS2 SM, LC Duplex, 10km (wing-to-core; 2 per wing × 4 wings × 2 ends = 16) | QSFP-100G-LR4-S= | 16 | $650 | $10,400 | 1–2 wks |
| 4 | Ubiquiti UniFi Switch Pro XG 24 PoE (16×10G + 8×2.5G PoE++ RJ45, 2×25G SFP28 uplink, 720W PoE budget) | USW-Pro-XG-24-PoE | ~20 (TBD) | $1,799 | ~$35,980 | 1–3 wks |
| 4b | 25GBASE-LR SFP28 Transceiver, OS2 SM, LC Duplex, 10km (access-to-wing + UG uplinks + ICL; ~4 per access switch pair + core) | SFP-25G-LR-S= | 120 | $85 | $10,200 | 1–2 wks |
| 5 | Asus ROG Rapture GT-BE98 Pro (Wi-Fi 7 Quad-Band) | GT-BE98 Pro | ~40 (TBD) | $699 | ~$27,960 | 1–2 wks |
| 6 | Asus ZenWiFi Pro ET12 (Wi-Fi 7 Tri/Quad-Band) | ZenWiFi Pro ET12 | ~200 (TBD) | $399 | ~$79,800 | 1–3 wks |
| 7 | OS2 SM Fiber Bulk Roll (9/125, LSZH) — 1km | OS2-SM-LSZH-1000M | 10 rolls | $280 | $2,800 | 1 wk |
| 8 | Cat 6A U/FTP Cable Bulk Roll (23 AWG, LSZH) — 305m | Cat6A-UFTP-305M | 60 rolls | $185 | $11,100 | 1–2 wks |
| 9 | 10G SFP+ LC/UPC Single-Mode Transceiver (10GBASE-LR) | SFP-10G-LR | 80 | $45 | $3,600 | 1 wk |
| 10 | 25G SFP28 LC/UPC Single-Mode Transceiver (25GBASE-LR) | SFP28-25G-LR | 20 | $120 | $2,400 | 1–2 wks |
| 11 | LC/UPC Duplex Fiber Patch Cord (OS2, 3m) | OS2-LC-LC-3M | 200 | $12 | $2,400 | 1 wk |
| 12 | 24-port LC Fiber Patch Panel (1U) | FPP-24LC-1U | 20 | $95 | $1,900 | 1 wk |
| 13 | 24-port Cat 6A Keystone Patch Panel (1U, shielded) | PP-CAT6A-24SH | 30 | $120 | $3,600 | 1 wk |
| 14 | APC Smart-UPS SRTL 10kVA 208V (network room UPS) | SRTL10KRM4U | 5 | $6,800 | $34,000 | 3–6 wks |
| 15 | 19" 42U Network Equipment Rack (with cable management) | NetRack-42U-800 | 8 | $650 | $5,200 | 1–2 wks |
| 16 | Server (NMS + UniFi Controller) | Custom / Dell R550 equiv. | 1 | $4,500 | $4,500 | 2–4 wks |
| 17 | RADIUS Server / DNS Server (VM or dedicated) | VM on NMS server | 1 | $0 (software) | $0 | — |
| 18 | Ubiquiti UniFi Network Controller License (Perpetual) | UniFi-Network-Enterprise | 1 | $0 (self-hosted) | $0 | — |
| 19 | FreeRADIUS / Cisco ISE VM License | ISE-VM-K9 or FOSS | 1 | $0–$3,000 | ~$1,500 | — |
| 20 | Grafana + InfluxDB + Prometheus Stack (self-hosted) | OSS software | 1 | $0 | $0 | — |
| 21 | Cat 6A RJ45 Shielded Keystone Jacks (bag of 50) | KJ-CAT6A-SH-50 | 20 bags | $38 | $760 | 1 wk |
| 22 | Armored OS2 SM Fiber — Direct Burial (per 500m reel) | OS2-ARM-DB-500M | 4 reels | $520 | $2,080 | 1–2 wks |
| 23 | Cisco IOS-XE DNA Advantage License (per switch, 3yr) | C9300-DNA-A-48-3Y | 7 | $1,800 | $12,600 | Electronic |
| 24 | MikroTik Rack Mount Kit for CCR2004 | CCR2004-RM-KIT | 2 | $25 | $50 | 1 wk |
| 25 | Structured Cabling Installation Labor (est.) | Contractor / LOE | 1 (lot) | — | ~$15,000–$25,000 | Per schedule |
| ESTIMATED TOTAL MATERIAL + EQUIPMENT COST (USD) — REV. 1.1 | ~$366,330 | Excl. labor, taxes, import duties, and contingency. Reflects unified C9300X-24Y chassis across all tiers, NM-4C modules ×7, 100GBASE-LR4 QSFP28 transceivers, 25GBASE-LR SFP28 transceivers, and the USW-Pro-XG-24-PoE access switch upgrade. ||||
| Add 20% contingency + 12% VAT + import duties (PH) | ~$510,000–$545,000 | All-in total estimate (USD); subject to final procurement | ||||
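For traceability, the roll-up arithmetic behind the two total rows can be reproduced directly from the extended-price column. The sketch below transcribes every priced BOM line; applying contingency and VAT multiplicatively to material plus the labor mid-point is an assumption, since the table does not state the order of operations, and import duties are excluded because no duty rate is given.

```python
"""Reproducing the BOM roll-up from the table above. Quantities and unit
prices are transcribed from the extended-price column; the contingency/VAT
treatment (multiplicative on material + labor mid-point) is an assumption."""

# (qty, unit_price_usd) for every priced BOM line, in table order
BOM = [
    (2, 1450), (7, 12000), (7, 3800), (16, 650), (20, 1799), (120, 85),
    (40, 699), (200, 399), (10, 280), (60, 185), (80, 45), (20, 120),
    (200, 12), (20, 95), (30, 120), (5, 6800), (8, 650), (1, 4500),
    (1, 1500), (20, 38), (4, 520), (7, 1800), (2, 25),
]

material = sum(q * p for q, p in BOM)
labor_mid = 20_000                              # mid-point of the $15k-$25k labor estimate
all_in = (material + labor_mid) * 1.20 * 1.12   # +20% contingency, +12% PH VAT

print(f"material + equipment: ${material:,}")     # -> $366,330
print(f"all-in (excl. duties): ${all_in:,.0f}")   # -> ~$519,228, within the stated range
```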