Google Cloud Network Engineer Sample Questions for Certification Success
- CertiMaan
- Oct 24, 2025
- 45 min read
Updated: Mar 3
Get exam-ready with expertly curated Google Cloud Network Engineer sample questions tailored to the certification exam blueprint. These practice questions cover key areas including Virtual Private Cloud (VPC) design, hybrid connectivity, network services, security, and automation. Whether you're a network professional or cloud architect aiming for the Professional Cloud Network Engineer certification, this resource provides hands-on experience through scenario-based questions that reflect the actual Google exam pattern. Perfect for validating your skills, assessing knowledge gaps, and boosting confidence before test day. Start preparing smarter and achieve success in your Google Cloud networking career.
Google Cloud Network Engineer Sample Questions List:
1. You have recently taken over responsibility for your organization's Google Cloud network security configurations. You want to review your Cloud Next Generation Firewall (Cloud NGFW) configurations and ensure there are no rules that are allowing ingress traffic to your VMs and services from the internet. You want to avoid manual work. What should you do?
Enable the Network Analyzer API and review the "VPC Network" category insights
Review the firewall policy rules associated with the VPC, and filter for rules that allow ingress from 0.0.0.0/0.
Run Connectivity Tests from multiple external sources to double-check ingress traffic settings
Enable "Overly permissive rules insights" in Firewall Insights. Review results for rules that show allowed ingress traffic from internet sources
2. You have an application running on Compute Engine that uses BigQuery to generate some results that are stored in Cloud Storage. You want to ensure that none of the application instances have external IP addresses. Which two methods can you use to accomplish this? (Choose two.)
Enable Private Google Access on the VPC
Enable Private Google Access on all the subnets
Create network peering between your VPC and BigQuery
Create a Cloud NAT, and route the application traffic via NAT gateway
Enable Private Services Access on the VPC
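For context on the mechanisms these options describe: Private Google Access is a per-subnet setting, and Cloud NAT is built from a Cloud Router plus a NAT configuration. A minimal sketch, assuming hypothetical VPC, subnet, and region names:

```shell
# Hypothetical VPC, subnet, and region names.
# Private Google Access is enabled per subnet, not per VPC:
gcloud compute networks subnets update app-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access

# Cloud NAT alternative: a Cloud Router plus a NAT config for the region.
gcloud compute routers create nat-router \
    --network=my-vpc --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```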
3. You have two Google Cloud projects in a perimeter to prevent data exfiltration. You need to move a third project inside the perimeter; however, the move could negatively impact the existing environment. You need to validate the impact of the change. What should you do?
Enable Firewall Rules Logging inside the third project
Modify the existing VPC Service Controls policy to include the new project in dry run mode
Enable VPC Flow Logs inside the third project, and monitor the logs for negative impact
Monitor the Resource Manager audit logs inside the perimeter
4. You created a new VPC network named Dev with a single subnet. You added a firewall rule for the network Dev to allow HTTP traffic only and enabled logging. When you try to log in to an instance in the subnet via Remote Desktop Protocol, the login fails. You look for the Firewall rules logs in Stackdriver Logging, but you do not see any entries for blocked traffic. You want to see the logs for blocked traffic. What should you do?
Create a new firewall rule with priority 65500 to deny all traffic, and enable logs
Try connecting to the instance via SSH, and check the logs
Check the VPC flow logs for the instance
Create a new firewall rule to allow traffic from port 22, and enable logs
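Worth noting when working through this scenario: the implied deny-ingress rule never generates log entries; only explicitly created rules with logging enabled do. A sketch of a logged, low-priority deny-all rule on the Dev network (the rule name is hypothetical):

```shell
# Rule name is hypothetical; network "dev" is from the question.
# Implied deny rules are never logged, so an explicit rule is needed.
gcloud compute firewall-rules create deny-all-ingress \
    --network=dev \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --priority=65500 \
    --enable-logging
```

With a rule like this in place, blocked RDP attempts should appear in the firewall logs.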
5. You are implementing a VPC architecture for your organization by using a Network Connectivity Center hub and spoke topology: • There is one Network Connectivity Center hybrid spoke to receive on-premises routes. • There is one VPC spoke that needs to be added as a Network Connectivity Center spoke. Your organization has limited routable IP space for their cloud environment (192.168.0.0/20). The Network Connectivity Center spoke VPC is connected to on-premises with a Cloud Interconnect connection in the us-east4 region. The on-premises IP range is 172.16.0.0/16. You need to reach on-premises resources from multiple Google Cloud regions (us-west1, europe-central1, and asia-southeast1) and minimize the IP addresses being used. What should you do?
1. Configure a Private NAT gateway and NAT subnet in us-west1 (192.168.1.0/24), europe-central1 (192.168.2.0/24), and asia-southeast1 (192.168.3.0/24). 2. Add the VPC as a spoke and configure an export include policy to advertise only 192.168.1.0/24, 192.168.2.0/24, and 192.168.3.0/24 to the hub. 3. Enable global dynamic routing to allow resources in us-west1, europe-central1, and asia-southeast1 to reach the on-premises location through us-east4
1. Configure a Private NAT gateway instance in us-west1 (192.168.1.0/24), europe-central1 (192.168.2.0/24), and asia-southeast1 (192.168.3.0/24). 2. Add the VPC as a spoke and configure an export exclude policy on the VPC spoke to advertise only the NAT subnets 192.168.1.0/24, 192.168.2.0/24, and 192.168.3.0/24 to the hub. 3. Enable global dynamic routing to allow resources in us-west1, europe-central1, and asia-southeast1 to reach the on-premises location through us-east4
1. Configure a Private NAT gateway instance in us-west1 (172.16.1.0/24), europe-central1 (172.16.2.0/24), and asia-southeast1 (172.16.3.0/24). 2. Add the VPC as a spoke and configure an export include policy on the VPC spoke to advertise only the NAT subnets 172.16.1.0/24, 172.16.2.0/24, and 172.16.3.0/24 to the hub. 3. Enable global dynamic routing to allow resources in us-west1, europe-central1, and asia-southeast1 to reach the on-premises location through us-east4
1. Configure a Private NAT gateway instance in us-east4 (192.168.1.0/24). 2. Add the VPC as a spoke and configure an export include policy on the VPC spoke to advertise 192.168.1.0/24 to the hub. 3. Enable global dynamic routing to allow resources in us-west1, europe-central1, and asia-southeast1 to reach the on-premises location through us-east4
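The Private NAT mechanism referenced in these options relies on subnets created with the PRIVATE_NAT purpose. A hedged sketch for one of the regions, assuming hypothetical VPC and subnet names; the same pattern repeats per region with its own /24:

```shell
# Hypothetical VPC and subnet names; repeat per region with its own /24.
# A Private NAT subnet is reserved exclusively for NAT translation.
gcloud compute networks subnets create nat-us-west1 \
    --network=spoke-vpc \
    --region=us-west1 \
    --range=192.168.1.0/24 \
    --purpose=PRIVATE_NAT
```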
6. You have deployed an HTTP(s) load balancer, but health checks to port 80 on the Compute Engine virtual machine instance are failing, and no traffic is sent to your instances. You want to resolve the problem. Which commands should you run?
gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --destination-ranges 130.211.0.0/22,35.191.0.0/16 --direction EGRESS
gcloud compute instances add-access-config instance-1
gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --source-ranges 130.211.0.0/22,35.191.0.0/16 --direction INGRESS
gcloud compute health-checks update http health-check --unhealthy-threshold 10
7. You recently reviewed the user behavior for your main application, which uses an external global Application Load Balancer, and found that the backend servers were overloaded due to erratic spikes in the rate of client requests. You need to limit the concurrent sessions and return an HTTP 429 Too Many Requests response back to the client while following Google-recommended practices. What should you do?
Create a Cloud Armor security policy, and associate the policy with the load balancer. Configure the security policy's settings as follows: action: throttle; conform action: allow; exceed action: deny-429
Configure the load balancer to accept only the defined amount of requests per client IP address, increase the backend servers to support more traffic, and redirect traffic to a different backend to burst traffic
Create a Cloud Armor security policy, and apply the predefined Open Worldwide Security Application Project (OWASP) rules to automatically implement the rate limit per client IP address
Configure a VM with Linux, implement the rate limit through iptables, and use a firewall rule to send an HTTP 429 response to the client application
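For reference, a Cloud Armor rate-limiting rule is expressed with a throttle action plus conform/exceed actions, then attached to the load balancer's backend service. A sketch under assumed names (the policy name, backend service name, and thresholds are hypothetical):

```shell
# Hypothetical policy and backend service names; thresholds are illustrative.
gcloud compute security-policies create throttle-policy

# Throttle rule: allow conforming traffic, return 429 beyond the limit,
# keyed per client IP address.
gcloud compute security-policies rules create 1000 \
    --security-policy=throttle-policy \
    --src-ip-ranges="*" \
    --action=throttle \
    --rate-limit-threshold-count=100 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=IP

# Attach the policy to the load balancer's backend service.
gcloud compute backend-services update web-backend \
    --security-policy=throttle-policy --global
```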
8. Your organization's current architecture has one Shared VPC host project (SH_HOST_PRJ) that contains a single VPC (SH_VPC) and two Shared VPC service projects (SP_ONE_PRJ and SP_TWO_PRJ) that do not contain any VPCs. Each Shared VPC service project belongs to a different team: TEAM_ONE manages SP_ONE_PRJ and TEAM_TWO manages SP_TWO_PRJ. You must design a solution that allows each team to create their own DNS private zones and DNS records only in their respective Shared VPC service projects. Workloads in SP_ONE_PRJ must be able to resolve all the DNS private zones defined in SP_TWO_PRJ and conversely. Your design must have the least amount of set up effort. What should you do?
1. TEAM_ONE creates a new VPC (SP_ONE_VPC) in the Shared VPC service projects (SP_ONE_PRJ). TEAM_ONE creates Cloud DNS private zones and DNS records in SP_ONE_PRJ, and binds the zones to the new VPC (SP_ONE_VPC). TEAM_ONE creates a VPC Network Peering relationship between SP_ONE_VPC and the VPC (SH_VPC) in the Shared VPC host project (SH_HOST_PRJ). 2. TEAM_TWO completes the same actions for the SP_TWO_PRJ project
1. TEAM_ONE uses cross-project binding and creates Cloud DNS private zones and DNS records in SP_ONE_PRJ, and binds the zones to the Shared VPC host project (SH_HOST_PRJ). 2. TEAM_TWO creates Cloud DNS private zones and DNS records in SP_TWO_PRJ, and uses cross-project binding to connect the zones to the Shared VPC host project (SH_HOST_PRJ)
1. TEAM_ONE uses cross-project binding and creates Cloud DNS private zones and DNS records in SP_ONE_PRJ, and binds the zones to the VPC (SH_VPC) in the Shared VPC host project (SH_HOST_PRJ). 2. TEAM_TWO creates DNS private zones and DNS records in SP_TWO_PRJ and uses cross-project binding to connect the zones to the VPC (SH_VPC) in the Shared VPC host project (SH_HOST_PRJ)
1. TEAM_ONE creates a new VPC (SP_ONE_VPC) in the Shared VPC service projects (SP_ONE_PRJ). TEAM_ONE creates Cloud DNS private zones and DNS records in SP_ONE_PRJ, and binds the zones to the new VPC (SP_ONE_VPC). TEAM_ONE creates a Cloud DNS peering relationship between SP_ONE_VPC and the VPC (SH_VPC) in the Shared VPC host project (SH_HOST_PRJ). 2. TEAM_TWO completes the same actions for the SP_TWO_PRJ project
9. You have configured Cloud CDN using HTTP(S) load balancing as the origin for cacheable content. Compression is configured on the web servers, but responses served by Cloud CDN are not compressed. What is the most likely cause of the problem?
The web servers behind the load balancer are configured with different compression types
You have to configure the web servers to compress responses even if the request has a Via header
You have configured the web servers and Cloud CDN with different compression types
You have not configured compression in Cloud CDN
10. Your company deployed a hub and spoke architecture in Google Cloud to host their workloads. They use VPC network peerings to connect the hub and the spokes. You need to replicate the design and use Network Connectivity Center. What should you do?
Choose a Network Connectivity Center mesh topology. Configure the spokes as Network Connectivity Center spokes
Choose a Network Connectivity Center star topology. Deploy the hub VPC in the center group. Deploy the spoke VPCs in the edge group
Choose a Network Connectivity Center mesh topology. Configure the hub and the spokes as Network Connectivity Center spokes
Choose a Network Connectivity Center star topology. Deploy the spoke VPCs in the center group. Deploy the hub VPC in the edge group
11. You have configured a Compute Engine virtual machine instance as a NAT gateway. You execute the following command: gcloud compute routes create no-ip-internet-route \ --network custom-network1 \ --destination-range 0.0.0.0/0 \ --next-hop-instance nat-gateway \ --next-hop-instance-zone us-central1-a \ --tags no-ip --priority 800 You want existing instances to use the new NAT gateway. Which command should you execute?
gcloud compute instances add-tags [existing-instance] --tags no-ip
sudo sysctl -w net.ipv4.ip_forward=1
gcloud builds submit --config=cloudbuild.yaml --substitutions=TAG_NAME=no-ip
gcloud compute instances create example-instance --network custom-network1 \ --subnet subnet-us-central \ --no-address \ --zone us-central1-a \ --image-family debian-9 \ --image-project debian-cloud \ --tags no-ip
12. Your organization wants to deploy HA VPN over Cloud Interconnect to ensure encryption-in-transit over the Cloud Interconnect connections. You have created a Cloud Router and two VLAN attachments. The BGP sessions are operational. You need to complete the deployment of the HA VPN over Cloud Interconnect. What should you do?
Create an HA VPN gateway and associate the gateway with your two VLAN attachments. Use the existing Cloud Router for HA VPN, the peer VPN gateway resources, and the HA VPN tunnels
Create an HA VPN gateway and associate the gateway with your two VLAN attachments. Create a new Cloud Router for HA VPN, the peer VPN gateway resources, and the HA VPN tunnels
Enable MACsec on Partner Cloud Interconnect
Enable MACsec on the VLAN attachments
13. Your organization has a hub and spoke architecture with VPC Network Peering, and hybrid connectivity is centralized at the hub. The Cloud Router in the hub VPC is advertising subnet routes, but the on-premises router does not appear to be receiving any subnet routes from the VPC spokes. You need to resolve this issue. What should you do?
Create custom routes at the Cloud Router in the hub to advertise the subnets of the VPC spokes
Create a BGP route policy at the Cloud Router, and ensure the subnets of the VPC spokes are being announced towards the on-premises environment
Create custom learned routes at the Cloud Router in the hub to advertise the subnets of the VPC spokes
Create custom routes at the Cloud Router in the spokes to advertise the subnets of the VPC spokes
14. You have deployed a new internal application that provides HTTP and TFTP services to on-premises hosts. You want to be able to distribute traffic across multiple Compute Engine instances, but need to ensure that clients are sticky to a particular instance across both services. Which session affinity should you choose?
Client IP
None
Client IP, port and protocol
Client IP and protocol
15. You are using the gcloud command line tool to create a new custom role in a project by copying a predefined role. You receive this error message: INVALID_ARGUMENT: Permission resourcemanager.projects.list is not valid What should you do?
Add the resourcemanager.projects.setIamPolicy permission, and try again
Add the resourcemanager.projects.get permission, and try again
Try again with a different role with a new name but the same permissions
Remove the resourcemanager.projects.list permission, and try again
16. You are configuring a new HTTP application that will be exposed externally behind both IPv4 and IPv6 virtual IP addresses, using ports 80, 8080, and 443. You will have backends in two regions: us-west1 and us-east1. You want to serve the content with the lowest-possible latency while ensuring high availability and autoscaling, and create native content-based rules using the HTTP hostname and request path. The IP addresses of the clients that connect to the load balancer need to be visible to the backends. Which configuration should you use?
Use External HTTP(S) Load Balancing with URL Maps and an X-Forwarded-For header
Use External HTTP(S) Load Balancing with URL Maps and custom headers
Use Network Load Balancing
Use TCP Proxy Load Balancing with PROXY protocol enabled
17. You are responsible for designing a new connectivity solution between your organization's on-premises data center and your Google Cloud Virtual Private Cloud (VPC) network. Currently, there is no end-to-end connectivity. You must ensure a service level agreement (SLA) of 99.99% availability. What should you do?
Use two Dedicated Interconnect connections in a single metropolitan area. Configure one Cloud Router and enable global routing in the VPC
Use a Direct Peering connection between your on-premises data center and Google Cloud. Configure Classic VPN with two tunnels and one Cloud Router
Use one Dedicated Interconnect connection in a single metropolitan area. Configure one Cloud Router and enable global routing in the VPC
Use HA VPN. Configure one tunnel from each interface of the VPN gateway to connect to the corresponding interfaces on the peer gateway on-premises. Configure one Cloud Router and enable global routing in the VPC
18. You are configuring an HA VPN connection between your Virtual Private Cloud (VPC) and on-premises network. The VPN gateway is named VPN_GATEWAY_1. You need to restrict VPN tunnels created in the project to only connect to your on-premises VPN public IP address: 203.0.113.1/32. What should you do?
Configure an access control list on the peer VPN gateway to deny all traffic except 203.0.113.1/32, and attach it to the primary external interface
Configure a firewall rule accepting 203.0.113.1/32, and set a target tag equal to VPN_GATEWAY_1
Configure a Google Cloud Armor security policy, and create a policy rule to allow 203.0.113.1/32.
Configure the Resource Manager constraint constraints/compute.restrictVpnPeerIPs to use an allowList consisting of only the 203.0.113.1/32 address
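The Resource Manager constraint named in the last option can be set from the CLI. A sketch, assuming a hypothetical project ID:

```shell
# Hypothetical project ID; constraint name is from the option above.
gcloud resource-manager org-policies allow \
    compute.restrictVpnPeerIPs 203.0.113.1/32 \
    --project=my-project
```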
19. Your organization has over 250 autonomous business units that currently operate in a decentralized manner. Due to the organization's maturity, there is limited routable private IP address space, which is insufficient to accommodate all of the necessary workloads. You need to create a cloud-first network design that uses the same IP address space across business unit workloads where possible. These business units require communication between units, and access to their on-premises data center. What should you do?
Create a hub and spoke design that incorporates a centralized network virtual appliance (NVA) in the hub to perform routing and NAT between spokes
Create a hub and spoke model that incorporates VPC Network Peering with hybrid connectivity centralized within the hub
Create a Network Connectivity Center design that incorporates Private NAT to facilitate communication between VPC spokes, and a Routing VPC to exchange dynamic routes from the on-premises environment
Create a Network Connectivity Center design that incorporates Private Service Connect to provide bidirectional communication between VPC spokes, and a Routing VPC to exchange dynamic routes from the on-premises environment
20. Your on-premises data center has 2 routers connected to your Google Cloud environment through a VPN on each router. All applications are working correctly; however, all of the traffic is passing across a single VPN instead of being load-balanced across the 2 connections as desired. During troubleshooting you find: • Each on-premises router is configured with a unique ASN. • Each on-premises router is configured with the same routes and priorities. • Both on-premises routers are configured with a VPN connected to a single Cloud Router. • BGP sessions are established between both on-premises routers and the Cloud Router. • Only one of the on-premises routers' routes is being added to the routing table. What is the most likely cause of this problem?
The on-premises routers are configured with the same routes
You do not have a load balancer to load-balance the network traffic
The ASNs being used on the on-premises routers are different
A firewall is blocking the traffic across the second VPN connection
21. Your company uses VPC firewall rules and denies all egress traffic. You need to allow some VMs to contact external websites based on their fully qualified domain name (FQDN). You apply the new configuration, but the traffic is still denied. You need to adjust your setup so that the new configuration takes effect. What should you do?
Update the default policy and rule evaluation order to AFTER_CLASSIC_FIREWALL
Raise the priority of the network firewall policy rules
Lower the priority of the network firewall policy rules
Update the default policy and rule evaluation order to BEFORE_CLASSIC_FIREWALL
22. You are designing a new application that has backends internally exposed on port 800. The application will be exposed externally using both IPv4 and IPv6 via TCP on port 700. You want to ensure high availability for this application. What should you do?
Create a TCP proxy that uses backend services containing an instance group with two instances
Create a network load balancer that uses a target pool backend with two instances
Create a network load balancer that uses backend services containing one instance group with two instances
Create a TCP proxy that uses a zonal network endpoint group containing one instance
23. You need to enable Private Google Access for use by some subnets within your Virtual Private Cloud (VPC). Your security team set up the VPC to send all internet-bound traffic back to the on-premises data center for inspection before egressing to the internet, and is also implementing VPC Service Controls in the environment for API-level security control. You have already enabled the subnets for Private Google Access. What configuration changes should you make to enable Private Google Access while adhering to your security team’s requirements?
1. Create a private DNS zone with a CNAME record for *.googleapis.com to restricted.googleapis.com, with an A record pointing to Google's restricted API address range. 2. Create a custom route that points Google's restricted API address range to the default internet gateway as the next hop
1. Create a private DNS zone with a CNAME record for *.googleapis.com to restricted.googleapis.com, with an A record pointing to Google's restricted API address range. 2. Change the custom route that points the default route (0/0) to the default internet gateway as the next hop
1. Create a private DNS zone with a CNAME record for *.googleapis.com to private.googleapis.com, with an A record pointing to Google's private API address range. 2. Change the custom route that points the default route (0/0) to the default internet gateway as the next hop
1. Create a private DNS zone with a CNAME record for *.googleapis.com to private.googleapis.com, with an A record pointing to Google's private API address range. 2. Create a custom route that points Google's private API address range to the default internet gateway as the next hop
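The restricted virtual IP range referenced in these options is 199.36.153.4/30, which is reachable only from within Google's network. A sketch of the DNS and routing pieces, assuming hypothetical zone, VPC, and route names:

```shell
# Hypothetical zone, VPC, and route names.
gcloud dns managed-zones create googleapis-zone \
    --description="Send Google API traffic to the restricted VIP" \
    --dns-name=googleapis.com. \
    --visibility=private \
    --networks=my-vpc

# A records for restricted.googleapis.com -> the 199.36.153.4/30 range.
gcloud dns record-sets create restricted.googleapis.com. \
    --zone=googleapis-zone --type=A --ttl=300 \
    --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7

# CNAME the wildcard at the restricted name.
gcloud dns record-sets create '*.googleapis.com.' \
    --zone=googleapis-zone --type=CNAME --ttl=300 \
    --rrdatas=restricted.googleapis.com.

# Route the restricted range to the default internet gateway; this
# traffic stays on Google's network.
gcloud compute routes create restricted-apis-route \
    --network=my-vpc \
    --destination-range=199.36.153.4/30 \
    --next-hop-gateway=default-internet-gateway
```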
24. Your organization wants to deploy HA VPN over Cloud Interconnect to ensure encryption-in-transit over the Cloud Interconnect connections. You have created a Cloud Router and two encrypted VLAN attachments that have a 5 Gbps capacity and a BGP configuration. The BGP sessions are operational. You need to complete the deployment of the HA VPN over Cloud Interconnect. What should you do?
Create an HA VPN gateway and associate the gateway with your two encrypted VLAN attachments. Configure the HA VPN Cloud Router, peer VPN gateway resources, and HA VPN tunnels. Use the same encrypted Cloud Router used for the Cloud Interconnect tier
Enable MACsec for Cloud Interconnect on the VLAN attachments
Create an HA VPN gateway and associate the gateway with your two encrypted VLAN attachments. Create a new dedicated HA VPN Cloud Router, peer VPN gateway resources, and HA VPN tunnels
Enable MACsec on Partner Interconnect
25. You are developing an HTTP API hosted on a Compute Engine virtual machine instance that must be invoked only by multiple clients within the same Virtual Private Cloud (VPC). You want clients to be able to get the IP address of the service. What should you do?
Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Clients should use this IP address to connect to the service
Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/
Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[API_NAME]/[API_VERSION]/
Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service
26. You are deploying your infrastructure in the us-central1 region. Your on-premises data center is located in New York City, and the Google Cloud region closest to New York City is us-east4. Your Cloud Interconnect is located in Ashburn, Virginia (VA), United States. You need to use Cloud Interconnect to connect your application infrastructure with backend systems in your data center location. You do not expect the application bandwidth to exceed 500 Mbps. You want to minimize latency and cost. What should you do?
Create a Cloud Router and VLAN attachments in the us-central1 region attached to your physical Interconnect in Ashburn, VA
Create a Cloud Router and VLAN attachments in the us-east4 region attached to your physical Interconnect in Ashburn, VA. Enable global routing in your VPC. Set the bandwidth on the VLAN attachments to 500 Mbps
Create a Cloud Router in the us-central1 region and VLAN attachments in the us-east4 region attached to your physical Interconnect in Ashburn, VA. Enable global routing in your VPC
Create a Cloud Router and VLAN attachments in the us-east4 region attached to your physical Interconnect in Ashburn, VA. Enable global routing in your VPC
27. You need to configure a Google Kubernetes Engine (GKE) cluster. The initial deployment should have 5 nodes with the potential to scale to 10 nodes. The maximum number of Pods per node is 8. The number of services could grow from 100 to up to 1024. How should you design the IP schema to optimally meet this requirement?
Configure a /28 primary IP address range for the node IP addresses. Configure a /24 secondary IP range for the Pods. Configure a /22 secondary IP range for the Services
Configure a /28 primary IP address range for the node IP addresses. Configure a /25 secondary IP range for the Pods. Configure a /21 secondary IP range for the Services
Configure a /28 primary IP address range for the node IP addresses. Configure a /25 secondary IP range for the Pods. Configure a /22 secondary IP range for the Services
Configure a /28 primary IP address range for the node IP addresses. Configure a /28 secondary IP range for the Pods. Configure a /21 secondary IP range for the Services
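To see how such ranges map onto a cluster, here is a hedged sketch of a cluster created with explicit IP ranges. The cluster name and CIDR values are hypothetical illustrations, not an answer key:

```shell
# Hypothetical cluster; CIDR values are illustrative, not an answer key.
# GKE reserves, per node, twice the configured max Pods rounded up to a
# power of two (2 x 8 -> 16 addresses, i.e., a /28 slice per node).
gcloud container clusters create demo-cluster \
    --zone=us-central1-a \
    --num-nodes=5 \
    --enable-autoscaling --min-nodes=5 --max-nodes=10 \
    --max-pods-per-node=8 \
    --enable-ip-alias \
    --cluster-ipv4-cidr=10.64.0.0/24 \
    --services-ipv4-cidr=10.96.0.0/22
```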
28. You have a storage bucket that contains two objects. Cloud CDN is enabled on the bucket, and both objects have been successfully cached. Now you want to make sure that one of the two objects will not be cached anymore, and will always be served to the internet directly from the origin. What should you do?
Create a new storage bucket, and move the object you don't want to be cached anymore inside it. Then edit the bucket setting and enable the private attribute
Add a Cache-Control entry with value private to the metadata of the object you don't want to be cached anymore. Invalidate all the previously cached copies
Ensure that the object you don't want to be cached anymore is not shared publicly
Add an appropriate lifecycle rule on the storage bucket containing the two objects
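For reference, object-level cache behavior is controlled through Cache-Control metadata, and previously cached copies can be invalidated at the load balancer. A sketch with hypothetical bucket, object, and URL map names:

```shell
# Hypothetical bucket, object, and URL map names.
gsutil setmeta -h "Cache-Control:private" gs://my-bucket/my-object

# Invalidate the copies already cached by Cloud CDN.
gcloud compute url-maps invalidate-cdn-cache my-url-map --path="/my-object"
```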
29. Your team deployed two applications in GKE that are exposed through an external Application Load Balancer. When queries are sent to www.mountkirkgames.com/sales and www.mountkirkgames.com/get-an-analysis, the correct pages are displayed. However, you have received complaints that www.mountkirkgames.com yields a 404 error. You need to resolve this error. What should you do?
Review the Service YAML file. Add a new path rule for the * character that directs to the base service. Reapply the YAML
Review the Ingress YAML file. Add a new path rule for the * character that directs to the base service. Reapply the YAML
Review the Service YAML file. Define a default backend. Reapply the YAML
Review the Ingress YAML file. Define the default backend. Reapply the YAML
30. Your organization's security policy requires that all internet-bound traffic return to your on-premises data center through HA VPN tunnels before egressing to the internet, while allowing virtual machines (VMs) to leverage private Google APIs using private virtual IP addresses 199.36.153.4/30. You need to configure the routes to enable these traffic flows. What should you do?
Announce a 0.0.0.0/0 route from your on-premises router with a MED of 500. Configure another custom route 199.36.153.4/30 with a priority of 1000 whose next hop is the VPN tunnel back to the on-premises data center
Configure a custom route 0.0.0.0/0 with a priority of 500 whose next hop is the default internet gateway. Configure another custom route 199.36.153.4/30 with priority of 1000 whose next hop is the VPN tunnel back to the on-premises data center
Announce a 0.0.0.0/0 route from your on-premises router with a MED of 1000. Configure a custom route 199.36.153.4/30 with a priority of 1000 whose next hop is the default internet gateway
Configure a custom route 0.0.0.0/0 with a priority of 1000 whose next hop is the internet gateway. Configure another custom route 199.36.153.4/30 with a priority of 500 whose next hop is the VPN tunnel back to the on-premises data center
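One fixed point in this scenario: traffic to 199.36.153.4/30 never leaves Google's network, even when routed via the default internet gateway. A sketch of such a custom route, assuming hypothetical VPC and route names (the priority is shown only for illustration):

```shell
# Hypothetical VPC and route names; priority shown for illustration only.
# Lower priority values win when destination ranges are equally specific.
gcloud compute routes create private-google-apis \
    --network=my-vpc \
    --destination-range=199.36.153.4/30 \
    --next-hop-gateway=default-internet-gateway \
    --priority=1000
```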
31. You have two VPCs: VPC A in Project A and VPC B in Project B. The VPCs are peered, and each VPC has VM instances in four zones. You are using the Network Intelligence Center Performance Dashboard to investigate the packet loss for traffic flows that start in VPC A and terminate in VPC B. You need the reported packet loss metric to have at least a 90% confidence level. What should you do?
Ensure that each zone in each of the VPC networks has at least 9 compute instances. Look in Project B for the reported metric
Ensure that each zone in each of the VPC networks has at least 10 compute instances. Look in Project B for the reported metric
Ensure that each zone in each of the VPC networks has at least 10 compute instances. Look in Project A for the reported metric
Ensure that each zone in each of the VPC networks has at least 9 compute instances. Look in Project A for the reported metric
32. You are configuring a new application that will be exposed behind an external load balancer with both IPv4 and IPv6 addresses and support TCP pass-through on port 443. You will have backends in two regions: us-west1 and us-east1. You want to serve the content with the lowest possible latency while ensuring high availability and autoscaling. Which configuration should you use?
Use global TCP Proxy Load Balancing with backends in both regions
Use Network Load Balancing in both regions, and use DNS-based load balancing to direct traffic to the closest region
Use global external HTTP(S) Load Balancing with backends in both regions
Use global SSL Proxy Load Balancing with backends in both regions
33. You want to create a service in GCP using IPv6. What should you do?
Configure an internal load balancer with the designated IPv6 address
Configure a TCP Proxy with the designated IPv6 address
Configure a global load balancer with the designated IPv6 address
Create the instance with the designated IPv6 address
34. You are troubleshooting an application in your organization's Google Cloud network that is not functioning as expected. You suspect that packets are getting lost somewhere. The application sends packets intermittently at a low volume from a Compute Engine VM to a destination on your on-premises network through a pair of Cloud Interconnect VLAN attachments. You validated that the Cloud Next Generation Firewall (Cloud NGFW) rules do not have any deny statements blocking egress traffic, and you do not have any explicit allow rules. Following Google-recommended practices, you need to analyze the flow to see if packets are being sent correctly out of the VM to isolate the issue. What should you do?
Verify the network/attachment/egress_dropped_packets_count Cloud Interconnect VLAN attachment metric
Create a packet mirroring policy that is configured with your VM as the source and destined to a collector. Analyze the packet captures
Enable VPC Flow Logs on the subnet that the VM is deployed in with SAMPLE_RATE = 1.0, and run a query in Logs Explorer to analyze the packet flow
Enable Firewall Rules Logging on your firewall rules and review the logs
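For hands-on context, the packet mirroring approach in the second option can be sketched with gcloud. The project, network, collector forwarding rule, and instance names below are illustrative assumptions:

```shell
# Sketch only — my-project, my-vpc, collector-fr, and mirror-vm are placeholders.
# Mirrors all traffic from one VM to a collector internal load balancer
# so the packet captures can be analyzed.
gcloud compute packet-mirrorings create vm-mirror-policy \
    --region=us-west1 \
    --network=my-vpc \
    --collector-ilb=collector-fr \
    --mirrored-instances=projects/my-project/zones/us-west1-a/instances/mirror-vm
```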
35. You are troubleshooting connectivity issues between Google Cloud and a public SaaS provider. The connectivity between the two environments is through the public internet. Your users are reporting intermittent connection errors when using TCP to connect; however, ICMP tests show no failures. According to users, errors occur around the same time every day. You want to troubleshoot and gather information by using Google Cloud tools that are most likely to provide insights to what is occurring within Google Cloud. What should you do?
Enable and review Cloud Logging on your Cloud NAT Gateway. Look for logs with errors that match the destination IP address of the public SaaS provider
Create a Connectivity Test. Review the results for configuration issues in the VPC routing table
Enable the Firewall Insights API. Set the Deny rule insights observation period to one day. Review Insight results to assure there are no firewall rules denying traffic
Enable and review Cloud Logging for Cloud Armor. Look for logs with errors that match the destination IP address of the public SaaS provider
36. Your company uses Compute Engine instances that are exposed to the public internet. Each compute instance has a single network interface with a single public IP address. You need to block any connection attempt that originates from internet clients with IP addresses that belong to the BGP_ASN_TOBLOCK BGP ASN. What should you do?
Create a new Cloud Armor edge security policy, and use the --network-src-asns parameter
Create a new Cloud Armor network edge security policy, and use the --network-src-asns parameter
Create a new Cloud Armor backend security policy, and use the --network-src-asns parameter
Create a new firewall policy ingress rule, and use the --network-src-asns parameter
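For reference, a Cloud Armor network edge security policy with a source-ASN match can be sketched as follows. The policy name, region, and the ASN value 64496 are placeholders:

```shell
# Sketch only — network edge security policies are regional and support
# source-ASN matching through the --network-src-asns flag.
gcloud compute security-policies create block-asn-policy \
    --type=CLOUD_ARMOR_NETWORK \
    --region=us-central1

gcloud compute security-policies rules create 1000 \
    --security-policy=block-asn-policy \
    --region=us-central1 \
    --network-src-asns=64496 \
    --action=deny
```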
37. Your team is developing an application that will be used by consumers all over the world. Currently, the application sits behind a global external application load balancer. You need to protect the application from potential application-level attacks. What should you do?
Create a Google Cloud Armor security policy with web application firewall rules, and apply the security policy to the backend service
Create a VPC Service Controls perimeter with the global external application load balancer as the protected service, and apply it to the backend service
Create multiple firewall deny rules to block malicious users, and apply them to the global external application load balancer
Enable Cloud CDN on the backend service
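A minimal sketch of the first option, assuming a backend service named app-backend:

```shell
# Create a security policy, add a preconfigured WAF rule (XSS detection
# shown as an example), and attach the policy to the backend service.
gcloud compute security-policies create app-waf-policy

gcloud compute security-policies rules create 1000 \
    --security-policy=app-waf-policy \
    --expression="evaluatePreconfiguredWaf('xss-v33-stable')" \
    --action=deny-403

gcloud compute backend-services update app-backend \
    --security-policy=app-waf-policy \
    --global
```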
38. You are designing an IP address scheme for new private Google Kubernetes Engine (GKE) clusters. Due to IP address exhaustion of the RFC 1918 address space in your enterprise, you plan to use privately used public IP space for the new clusters. You want to follow Google-recommended practices. What should you do after designing your IP scheme?
Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected: --enable-ip-alias and --enable-private-nodes
Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters. Re-use the secondary address range for the pods across multiple private GKE clusters
Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected: --disable-default-snat, --enable-ip-alias, and --enable-private-nodes
Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters, Re-use the secondary address range for the services across multiple private GKE clusters
39. You are creating a design that will connect your single on-premises data center to a VPC in Google Cloud by using an IPsec VPN connection. The connection must have a minimum SLA of 99.99%. There is a single VPN termination device located in your on-premises data center. The VPN termination device can be configured only with a single public IP address. Your design must also have the least amount of setup effort. What should you do?
1. Replace the existing on-premises VPN termination device with a new device that is configured with two different public IP addresses. 2. Create one HA VPN gateway. 3. Create one tunnel for each of the two HA VPN gateway interfaces. 4. Terminate each of the two tunnels on one of the two public IP addresses that is configured on the new VPN termination device located in your on-premises data center
1. Create one Classic VPN gateway and one HA VPN gateway. 2. Create one tunnel on the interface of the Classic VPN gateway and one tunnel on interface 1 of the HA VPN gateway. 3. Terminate each of the two tunnels on the single public IP address that is configured on the VPN termination device located in your on-premises data center
1. Create two HA VPN gateways. 2. Create one tunnel on interface 0 of one gateway and create one tunnel on interface 1 of the other gateway. 3. Terminate each of the two tunnels on the single public IP address that is configured on the VPN termination device located in your on-premises data center
1. Create one HA VPN gateway. 2. Create one tunnel for each of the two HA VPN gateway interfaces. 3. Terminate each of the two tunnels on the single public IP address that is configured on the VPN termination device located in your on-premises data center
40. Your organization is using a Shared VPC model. Service project owners want to independently manage their DNS zones in service projects. All service project workloads must be able to resolve all private zones that are defined in other service projects. You need to create a solution that meets these goals. What should you do?
Create a Cloud DNS private zone in each service project. Use Cloud DNS peering zones that target the Shared VPC in the host project
Create a Cloud DNS response policy zone in each service project. Use Cloud DNS peering zones that target the Shared VPC in the host project
Create a Cloud DNS private zone in each service project. Use cross-project binding to associate the zones to the Shared VPC in the host project
Create a Cloud DNS private zone in each service project. Use a Cloud DNS forwarding zone to forward queries to the Shared VPC in the host project
41. One instance in your VPC is configured to run with a private IP address only. You want to ensure that even if this instance is deleted, its current private IP address will not be automatically assigned to a different instance. In the GCP Console, what should you do?
Add custom metadata to the instance with key internal-address and value reserved
Assign a new reserved internal IP address to the instance
Change the instance's current internal IP address to static
Assign a public IP address to the instance
42. You have deployed a proof-of-concept application by manually placing instances in a single Compute Engine zone. You are now moving the application to production, so you need to increase your application availability and ensure it can autoscale. How should you provision your instances?
Create a single managed instance group, specify the desired region, and select Multiple zones for the location
Create an unmanaged instance group in a single zone, and then create an HTTP load balancer for the instance group
Create a managed instance group for each region, select Single zone for the location, and manually distribute instances across the zones in that region
Create an unmanaged instance group for each zone, and manually distribute the instances across the desired zones
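The first option can be sketched as follows, assuming an existing instance template named app-template:

```shell
# A regional managed instance group spreads instances across multiple
# zones in the region; autoscaling is then attached to the group.
gcloud compute instance-groups managed create app-mig \
    --region=us-central1 \
    --template=app-template \
    --size=3

gcloud compute instance-groups managed set-autoscaling app-mig \
    --region=us-central1 \
    --min-num-replicas=3 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.6
```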
43. You are designing a hybrid cloud environment for your organization. Your Google Cloud environment is interconnected with your on-premises network using Cloud HA VPN and Cloud Router. The Cloud Router is configured with the default settings. Your on-premises DNS server is located at 192.168.20.88 and is protected by a firewall, and your Compute Engine resources are located at 10.204.0.0/24. Your Compute Engine resources need to resolve on-premises private hostnames using the domain corp.altostrat.com while still resolving Google Cloud hostnames. You want to follow Google-recommended practices. What should you do?
1. Create a private forwarding zone in Cloud DNS for ‘corp.altostrat.com’ called corp-altostrat-com that points to 192.168.20.88. 2. Configure your on-premises firewall to accept traffic from 10.204.0.0/24. 3. Set a custom route advertisement on the Cloud Router for 10.204.0.0/24
1. Create a private zone in Cloud DNS for ‘corp.altostrat.com’ called corp-altostrat-com. 2. Configure DNS Server Policies and create a policy with Alternate DNS servers to 192.168.20.88. 3. Configure your on-premises firewall to accept traffic from 35.199.192.0/19. 4. Set a custom route advertisement on the Cloud Router for 35.199.192.0/19.
1. Create a private forwarding zone in Cloud DNS for ‘corp.altostrat.com’ called corp-altostrat-com that points to 192.168.20.88. 2. Configure your on-premises firewall to accept traffic from 10.204.0.0/24. 3. Modify the /etc/resolv.conf file on your Compute Engine instances to point to 192.168.20.88
1. Create a private forwarding zone in Cloud DNS for ‘corp.altostrat.com’ called corp-altostrat-com that points to 192.168.20.88. 2. Configure your on-premises firewall to accept traffic from 35.199.192.0/19. 3. Set a custom route advertisement on the Cloud Router for 35.199.192.0/19.
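The forwarding-zone approach can be sketched as follows. The VPC name my-vpc and Cloud Router name my-router are assumptions; 35.199.192.0/19 is the fixed range that Cloud DNS forwards queries from:

```shell
# Forward corp.altostrat.com queries to the on-premises DNS server.
gcloud dns managed-zones create corp-altostrat-com \
    --description="Forwarding zone for corp.altostrat.com" \
    --dns-name=corp.altostrat.com. \
    --visibility=private \
    --networks=my-vpc \
    --forwarding-targets=192.168.20.88

# Advertise the Cloud DNS forwarding range back to on-premises,
# keeping the existing subnet advertisements.
gcloud compute routers update my-router \
    --region=us-west1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=35.199.192.0/19
```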
44. Your organization deployed a mission-critical application that is expected to be a new revenue source. As part of the planning and deployment process, you have recently implemented a security profile with the default set of threat signatures provided by Cloud Next Generation Firewall (Cloud NGFW). This application is the only application running on this project. You need to increase the security posture of the application to log the threat and drop the related packets. What should you do?
Set up a Linux VM as the frontend gateway for the application. Create iptables rules to drop all packets, excluding the application port
Configure Cloud Scheduler to run a task that checks the Cloud NGFW logs to verify the threats. Configure the task to create a security profile with each signature ID set to override the default action
Configure a new default threat signature with Deny All to all severity options. Review the logs to understand the impact
For all severity options (critical, high, medium, low and informational) in the security profile, change the default override action to Deny
45. You are creating an instance group and need to create a new health check for HTTP(s) load balancing. Which two methods can you use to accomplish this? (Choose two.)
Create a new legacy health check using the gcloud command line tool
Create a new health check, or select an existing one, when you complete the load balancer's backend configuration in the GCP Console
Create a new health check using the gcloud command line tool
Create a new legacy health check using the Health checks section in the GCP Console
Create a new health check using the VPC Network section in the GCP Console
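The non-legacy gcloud method can be sketched in one command; the name, port, and path are illustrative:

```shell
# Creates a standard HTTP health check usable by HTTP(S) load balancing.
gcloud compute health-checks create http app-health-check \
    --port=80 \
    --request-path=/healthz \
    --check-interval=10s \
    --timeout=5s
```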
46. You have just deployed your infrastructure on Google Cloud. You now need to configure the DNS to meet the following requirements: • Your on-premises resources should resolve your Google Cloud zones. • Your Google Cloud resources should resolve your on-premises zones. • You need the ability to resolve “.internal” zones provisioned by Google Cloud. What should you do?
Configure both an inbound server policy and outbound DNS forwarding zones with the target as the on-premises DNS resolver. Configure your on-premises DNS resolver to forward Google Cloud zone queries to Google Cloud's DNS resolver
Configure an outbound DNS server policy, and set your alternative name server to be your on-premises DNS resolver. Configure your on-premises DNS resolver to forward Google Cloud zone queries to Google Cloud's DNS resolver
Configure an outbound server policy, and set your alternative name server to be your on-premises DNS resolver. Configure your on-premises DNS resolver to forward Google Cloud zone queries to Google's public DNS 8.8.8.8.
Configure Cloud DNS to DNS peer with your on-premises DNS resolver. Configure your on-premises DNS resolver to forward Google Cloud zone queries to Google's public DNS 8.8.8.8.
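The inbound-policy-plus-outbound-forwarding approach can be sketched as follows. The network name my-vpc, the domain corp.example.com, and the resolver address 10.10.0.53 are placeholders:

```shell
# Inbound server policy: lets on-premises resolvers send queries to
# Cloud DNS through forwarding addresses created in the VPC.
gcloud dns policies create inbound-policy \
    --description="Inbound DNS forwarding from on-premises" \
    --networks=my-vpc \
    --enable-inbound-forwarding

# Outbound forwarding zone: sends on-premises-domain queries to the
# on-premises DNS resolver.
gcloud dns managed-zones create onprem-zone \
    --description="Forwarding zone for the on-premises domain" \
    --dns-name=corp.example.com. \
    --visibility=private \
    --networks=my-vpc \
    --forwarding-targets=10.10.0.53
```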
47. Your company has recently expanded their EMEA-based operations into APAC. Globally distributed users report that their SMTP and IMAP services are slow. Your company requires end-to-end encryption, but you do not have access to the SSL certificates. Which Google Cloud load balancer should you use?
SSL proxy load balancer
Network load balancer
HTTPS load balancer
TCP proxy load balancer
48. You need to create a GKE cluster in an existing VPC that is accessible from on-premises. You must meet the following requirements: ✑ IP ranges for pods and services must be as small as possible. ✑ The nodes and the master must not be reachable from the internet. ✑ You must be able to use kubectl commands from on-premises subnets to manage the cluster. How should you create the GKE cluster?
• Create a private cluster that uses VPC advanced routes. • Set the pod and service ranges as /24. • Set up a network proxy to access the master
• Create a VPC-native GKE cluster using user-managed IP ranges. • Enable a GKE cluster network policy, set the pod and service ranges as /24. • Set up a network proxy to access the master. • Enable master authorized networks
• Create a VPC-native GKE cluster using GKE-managed IP ranges. • Set the pod IP range as /21 and service IP range as /24. • Set up a network proxy to access the master
• Create a VPC-native GKE cluster using user-managed IP ranges. • Enable privateEndpoint on the cluster master. • Set the pod and service ranges as /24. • Set up a network proxy to access the master. • Enable master authorized networks
49. You have a storage bucket that contains the following objects: [1] [1] [1] [1] Cloud CDN is enabled on the storage bucket, and all four objects have been successfully cached. You want to remove the cached copies of all the objects with the prefix folder-a, using the minimum number of commands. What should you do?
Disable Cloud CDN on the storage bucket. Wait 90 seconds. Re-enable Cloud CDN on the storage bucket
Add an appropriate lifecycle rule on the storage bucket
Issue a cache invalidation command with pattern /folder-a/*
Make sure that all the objects with prefix folder-a are not shared publicly
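The single-command invalidation can be sketched as follows; the URL map name my-url-map is an assumption (for a backend bucket, the invalidation is issued against the load balancer's URL map):

```shell
# Removes all cached objects whose path starts with /folder-a/.
gcloud compute url-maps invalidate-cdn-cache my-url-map \
    --path="/folder-a/*"
```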
50. You have an HA VPN connection with two tunnels running in active/passive mode between your Virtual Private Cloud (VPC) and on-premises network. Traffic over the connection has recently increased from 1 gigabit per second (Gbps) to 4 Gbps, and you notice that packets are being dropped. You need to configure your VPN connection to Google Cloud to support 4 Gbps. What should you do?
Configure the maximum transmission unit (MTU) to its highest supported value
Configure the remote autonomous system number (ASN) to 4096
Configure a second Cloud Router to scale bandwidth in and out of the VPC
Configure a second set of active/passive VPN tunnels
51. Your organization has a new security policy that requires you to monitor all egress traffic payloads from your virtual machines in region us-west2. You deployed an intrusion detection system (IDS) virtual appliance in the same region to meet the new policy. You now need to integrate the IDS into the environment to monitor all egress traffic payloads from us-west2. What should you do?
Enable firewall logging, and forward all filtered egress firewall logs to the IDS
Create an internal HTTP(S) load balancer for Packet Mirroring, and add a packet mirroring policy filter for egress traffic
Enable VPC Flow Logs. Create a sink in Cloud Logging to send filtered egress VPC Flow Logs to the IDS
Create an internal TCP/UDP load balancer for Packet Mirroring, and add a packet mirroring policy filter for egress traffic
52. You are configuring the final elements of a migration effort where resources have been moved from on-premises to Google Cloud. While reviewing the deployed architecture, you noticed that DNS resolution is failing when queries are being sent to the on-premises environment. You log in to a Compute Engine instance, try to resolve an on-premises hostname, and the query fails. DNS queries are not arriving at the on-premises DNS server. You need to use managed services to reconfigure Cloud DNS to resolve the DNS error. What should you do?
Validate that the Compute Engine instances are using the Metadata Service IP address as their resolver. Configure an outbound forwarding zone for the on-premises domain pointing to the on-premises DNS server. Configure Cloud Router to advertise the Cloud DNS proxy range to the on-premises network
Validate that there is network connectivity to the on-premises environment and that the Compute Engine instances can reach other on-premises resources. If errors persist, remove the VPC Network Peerings and recreate the peerings after validating the routes
Ensure that the operating systems of the Compute Engine instances are configured to send DNS queries to the on-premises DNS servers directly
Review the existing Cloud DNS zones, and validate that there is a route in the VPC directing traffic destined to the IP address of the DNS servers. Recreate the existing DNS forwarding zones for . to forward all queries to the on-premises DNS servers
53. You want to use Partner Interconnect to connect your on-premises network with your VPC. You already have an Interconnect partner. What should you do first?
Log in to your partner's portal and request the VLAN attachment there
Ask your Interconnect partner to provision a physical connection to Google
Run gcloud compute interconnects attachments partner update <attachment> \ --region <region> --admin-enabled
Create a Partner Interconnect type VLAN attachment in the GCP Console and retrieve the pairing key
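The first step can be sketched as follows; the router, region, and attachment names are placeholders:

```shell
# Create the Partner VLAN attachment; this generates the pairing key
# that you then hand to your Interconnect partner.
gcloud compute interconnects attachments partner create my-attachment \
    --region=us-west1 \
    --router=my-router \
    --edge-availability-domain=availability-domain-1

# Retrieve the pairing key from the new attachment.
gcloud compute interconnects attachments describe my-attachment \
    --region=us-west1 \
    --format="value(pairingKey)"
```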
54. You work for a multinational enterprise that is moving to GCP. These are the cloud requirements: • An on-premises data center located in the United States in Oregon and New York with Dedicated Interconnects connected to Cloud regions us-west1 (primary HQ) and us-east4 (backup) • Multiple regional offices in Europe and APAC • Regional data processing is required in europe-west1 and australia-southeast1 • Centralized Network Administration Team Your security and compliance team requires a virtual inline security appliance to perform L7 inspection for URL filtering. You want to deploy the appliance in us-west1. What should you do?
• Create 1 VPC in a Shared VPC Host Project. • Configure a 2-NIC instance in zone us-west1-a in the Host Project. • Attach NIC0 in us-west1 subnet of the Host Project. • Attach NIC1 in us-west1 subnet of the Host Project. • Deploy the instance. • Configure the necessary routes and firewall rules to pass traffic through the instance
• Create 2 VPCs in a Shared VPC Host Project. • Configure a 2-NIC instance in zone us-west1-a in the Host Project. • Attach NIC0 in VPC #1 us-west1 subnet of the Host Project. • Attach NIC1 in VPC #2 us-west1 subnet of the Host Project. • Deploy the instance. • Configure the necessary routes and firewall rules to pass traffic through the instance
• Create 2 VPCs in a Shared VPC Host Project. • Configure a 2-NIC instance in zone us-west1-a in the Service Project. • Attach NIC0 in VPC #1 us-west1 subnet of the Host Project. • Attach NIC1 in VPC #2 us-west1 subnet of the Host Project. • Deploy the instance. • Configure the necessary routes and firewall rules to pass traffic through the instance
• Create 1 VPC in a Shared VPC Service Project. • Configure a 2-NIC instance in zone us-west1-a in the Service Project. • Attach NIC0 in us-west1 subnet of the Service Project. • Attach NIC1 in us-west1 subnet of the Service Project. • Deploy the instance. • Configure the necessary routes and firewall rules to pass traffic through the instance
55. Your organization's application is running on a VPC-native GKE Standard cluster with public IP addresses. You need to configure access to the remote address range 35.100.0.0/16 through Cloud NAT, instead of using the GKE nodes' external IP addresses. SNAT is enabled on the cluster and needs to be configured. What should you do?
Configure nonMasqueradeCIDRs in the ip-masq-agent ConfigMap. Include the 35.100.0.0/16 range in the list
Configure Cloud NAT and create an exclusion rule for any SNAT address translation
Configure Cloud NAT with nonMasqueradeCIDRs, and enable SNAT with the same configuration to allow traffic to 35.100.0.0/16.
Configure nonMasqueradeCIDRs in the ip-masq-agent ConfigMap. Remove the 35.100.0.0/16 range from the list
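For context, the ip-masq-agent configuration lives in a ConfigMap in the kube-system namespace; a sketch of its shape follows (the ranges shown are illustrative):

```yaml
# Traffic to CIDRs listed under nonMasqueradeCIDRs keeps the pod source IP;
# traffic to any range not listed is SNATed to the node IP address.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/8        # example internal range
      - 35.100.0.0/16     # include or remove per the desired NAT behavior
    masqLinkLocal: false
    resyncInterval: 60s
```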
56. You are deploying a global external TCP load balancing solution and want to preserve the source IP address of the original layer 3 payload. Which type of load balancer should you use?
Network load balancer
HTTP(S) load balancer
TCP/SSL proxy load balancer
Internal load balancer
57. You are configuring HA VPN for your organization to connect your on-premises environment to your Google Cloud network. Your on-premises environment is closest to the us-west1 Google Cloud region. You have Google Cloud resources in us-west2, which requires a throughput of 300,000 packets per second (PPS) and an approximate bandwidth of 4 Gbps. You need to have predictable bandwidth management and maintain an SLA of 99.99% with minimal costs. What should you do?
Create two HA VPN gateways, each with two tunnels. Configure BGP on each of the gateways' tunnels with tunnel 0 configured with a base routing priority metric of 100 and tunnel 1 with a base routing priority metric of 100. Configure the on-premises router with the same corresponding multi-exit discriminator (MED) value
Create an HA VPN gateway with four tunnels. Configure BGP on four tunnels with tunnel 0 configured with a base routing priority metric of 100, tunnel 1 with a base routing priority metric of 200, tunnel 2 with a base routing priority of 300, and tunnel 3 with a base routing priority of 400. Configure the on-premises router with the corresponding multi-exit discriminator (MED) value
Create an HA VPN gateway with two tunnels. Configure BGP on both tunnels with tunnel 0 configured with a base routing priority metric of 100 and tunnel 1 with a base routing priority metric of 100. Configure the on-premises router with the corresponding multi-exit discriminator (MED) value
Create an HA VPN gateway with two tunnels. Configure BGP on both tunnels with tunnel 0 configured with a base routing priority metric of 100 and tunnel 1 with a base routing priority metric of 200. Configure the on-premises router with the corresponding multi-exit discriminator (MED) value
58. You are planning to use Terraform to deploy the Google Cloud infrastructure for your company. The design must meet the following requirements: • Each Google Cloud project must represent an internal project that your team will work on. • After an internal project is finished, the infrastructure must be deleted. • Each internal project must have its own Google Cloud project owner to manage the Google Cloud resources. • You have 10-100 projects deployed at a time. While you are writing the Terraform code, you need to ensure that the deployment is simple and the code is reusable with centralized management. What should you do?
Create a single project and single VPC for each internal project
Create a single project and additional VPCs for each internal project
Create a Shared VPC and service project for each internal project
Create a single Shared VPC and attach each Google Cloud project as a service project
59. You decide to set up Cloud NAT. After completing the configuration, you find that one of your instances is not using the Cloud NAT for outbound NAT. What is the most likely cause of this problem?
An external IP address has been configured on the instance
The instance has been configured with multiple interfaces
The instance is accessible by a load balancer external IP address
You have created static routes that use RFC1918 ranges
60. Your company has a Virtual Private Cloud (VPC) with two Dedicated Interconnect connections in two different regions: us-west1 and us-east1. Each Dedicated Interconnect connection is attached to a Cloud Router in its respective region by a VLAN attachment. You need to configure a high availability failover path. By default, all ingress traffic from the on-premises environment should flow to the VPC using the us-west1 connection. If us-west1 is unavailable, you want traffic to be rerouted to us-east1. How should you configure the multi-exit discriminator (MED) values to enable this failover path?
Use global routing. Set the us-east1 Cloud Router to a base priority of 1000, and set the us-west1 Cloud Router to a base priority of 1
Use global routing. Set the us-east1 Cloud Router to a base priority of 100, and set the us-west1 Cloud Router to a base priority of 1
Use regional routing. Set the us-east1 Cloud Router to a base priority of 1000, and set the us-west1 Cloud Router to a base priority of 1
Use regional routing. Set the us-east1 Cloud Router to a base priority of 100, and set the us-west1 Cloud Router to a base priority of 1
61. Your organization has a single project that contains multiple Virtual Private Clouds (VPCs). You need to secure API access to your Cloud Storage buckets and BigQuery datasets by allowing API access only from resources in your corporate public networks. What should you do?
Create a firewall rule to block API access to Cloud Storage and BigQuery from unauthorized networks
Create a VPC Service Controls perimeter for your project with an access context policy that allows your corporate public network IP ranges
Create a VPC Service Controls perimeter for each VPC with an access context policy that allows your corporate public network IP ranges
Create an access context policy that allows your VPC and corporate public network IP ranges, and then attach the policy to Cloud Storage and BigQuery
62. Your company has 10 separate Virtual Private Cloud (VPC) networks, with one VPC per project in a single region in Google Cloud. Your security team requires each VPC network to have private connectivity to the main on-premises location via a Partner Interconnect connection in the same region. To optimize cost and operations, the same connectivity must be shared with all projects. You must ensure that all traffic between different projects, on-premises locations, and the internet can be inspected using the same third-party appliances. What should you do?
Configure the third-party appliances with multiple interfaces. Create a hub VPC network for all projects, and create separate VPC networks for on-premises and internet connectivity. Create the relevant routes on the third-party appliances and VPC networks. Use VPC Network Peering to connect all projects’ VPC networks to the hub VPC. Export custom routes from the hub VPC and import on all projects’ VPC networks
Configure the third-party appliances with multiple interfaces and specific Partner Interconnect VLAN attachments per project. Create the relevant routes on the third-party appliances and VPC networks
Configure the third-party appliances with multiple interfaces, with each interface connected to a separate VPC network. Create separate VPC networks for on-premises and internet connectivity. Create the relevant routes on the third-party appliances and VPC networks
Consolidate all existing projects’ subnetworks into a single VPC. Create separate VPC networks for on-premises and internet connectivity. Configure the third-party appliances with multiple interfaces, with each interface connected to a separate VPC network. Create the relevant routes on the third-party appliances and VPC networks
63. You successfully provisioned a single Dedicated Interconnect. The physical connection is at a colocation facility closest to us-west2. Seventy-five percent of your workloads are in us-east4, and the remaining twenty-five percent of your workloads are in us-central1. All workloads have the same network traffic profile. You need to minimize data transfer costs when deploying VLAN attachments. What should you do?
Order a new Dedicated Interconnect for a colocation facility closest to us-east4, and use VPC global routing to access workloads in us-central1
Order a new Dedicated Interconnect for a colocation facility closest to us-central1, and use VPC global routing to access workloads in us-east4
Keep the existing Dedicated Interconnect. Deploy a VLAN attachment to a Cloud Router in us-east4, and deploy another VLAN attachment to a Cloud Router in us-central1
Keep the existing Dedicated interconnect. Deploy a VLAN attachment to a Cloud Router in us-west2, and use VPC global routing to access workloads in us-east4 and us-central1
64. Your product team has web servers running on both us-east1 and us-west1 regions in the prod-servers project. Your security team plans to install an intrusion detection system (IDS) in their own Google Cloud project to inspect the incoming network traffic. What should you do?
Create a host project and a Shared VPC for the security team. Make prod-servers a service project, and relocate the web servers to shared subnets in both regions. Create an internal load balancer and the IDS system in a subnet in either us-east1 or us-west1. Enable Packet Mirroring, and create a packet mirroring policy inside the host project
Create a host project and a Shared VPC for the security team. Make prod-servers a service project, and relocate the web servers to shared subnets in both regions. Enable IP forwarding on all the web servers. Create the IDS system in a non-shared subnet of us-east1 or us-west1. Configure the web servers to forward the packets to the IDS system
Create a new project and a VPC for the security team. Peer the new VPC with the web servers’ VPC in the prod-servers project. Enable IP forwarding on all the web servers. Install the IDS system in both us-east1 and us-west1. Configure the web servers to forward the packets to the IDS system
Create a new project and a VPC for the security team. Peer the new VPC with the web servers’ VPC in the prod-servers project. Create an internal load balancer and the IDS system in both us-east1 and us-west1. Enable Packet Mirroring, and create packet mirroring policies inside the new project
65. Your company uses Network Connectivity Center to connect its VPCs in Google Cloud. They plan to connect their on-premises data center to one of these VPCs by using HA VPN. The CIDR range of your on-premises network overlaps with the IP addresses in Google Cloud. You want your VMs in Google Cloud to connect directly to the IP address of the on-premises hosts. What should you do?
Configure a subnet of purpose PRIVATE_NAT and use Private NAT for the Network Connectivity Center spokes
Configure a subnet of purpose PRIVATE_NAT and use Hybrid NAT
Configure a subnet of purpose REGIONAL_MANAGED_PROXY and use a Google Cloud TCP proxy load balancer
Configure a subnet of purpose REGIONAL_MANAGED_PROXY and use a Google Cloud application load balancer
66. You are implementing firewall controls to protect your computer resources in a newly created VPC. To make the protection process easier to manage and control, you've defined the hierarchical firewall policies, global network firewall policies, and VPC firewall rules. The configuration of rules defines the following characteristics: • The hierarchical firewall policy, bound at the organization level, is allowing/denying spe-cific external traffic. • There is a global network firewall policy with rules that enforce intrusion prevention sys-tem (IPS) capabilities for specific external inbound/outbound traffic. • The VPC firewall rules allow internal communication from RFC 1918 defined subnets communications. • The VPC firewall contains an explicit deny rule with logs enabled. This configuration was successful in multiple preexisting VF'Cs. However, you noticed that the logs were missing when you were reviewing a newly created VPC. All external communications are hanging, but internal traffic is working as expected. You want to fix the connectivity issue. What should you do?
Create a new VPC and migrate existing resources to the new VPC. Delete the old VPC, and reapply the firewall policies and rules in the new VPC
Lower the priority numbers of the firewall policy rules and raise the priority numbers of the VPC firewall rules
Raise the priority numbers of the firewall policy rules and lower the priority numbers of the VPC firewall rules
Review the order in which the VPC firewall rules and policies are evaluated. If the VPC firewall rules are being evaluated before firewall policies, switch the order
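Relevant to the last option: each VPC network carries a setting that controls whether network firewall policies are evaluated before or after classic VPC firewall rules, and it can be inspected and changed per network. A hedged sketch (the network name is a placeholder):

```shell
# Check the current enforcement order for the network
gcloud compute networks describe my-vpc \
    --format="value(networkFirewallPolicyEnforcementOrder)"

# Evaluate the network firewall policy before classic VPC firewall rules
gcloud compute networks update my-vpc \
    --network-firewall-policy-enforcement-order=BEFORE_CLASSIC_FIREWALL
```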
67. You are designing a new global application using Compute Engine instances that will be exposed by a global HTTP(S) load balancer. You need to secure your application from distributed denial-of-service and application layer (layer 7) attacks. What should you do?
Configure hierarchical firewall rules for the global HTTP(S) load balancer public IP address at the organization level
Configure VPC Service Controls and create a secure perimeter. Define fine-grained perimeter controls and enforce that security posture across your Google Cloud services and projects
Configure VPC firewall rules to protect the Compute Engine instances against distributed denial-of-service attacks
Configure a Google Cloud Armor security policy in your project, and attach it to the backend service to secure the application
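For reference, a Google Cloud Armor security policy is created in the project and then attached to the load balancer's backend service. A minimal sketch, assuming hypothetical policy and backend service names:

```shell
# Create a Cloud Armor security policy for DDoS and layer 7 protection
gcloud compute security-policies create web-armor-policy \
    --description="DDoS and L7 protection for the web app"

# Attach the policy to the global load balancer's backend service
gcloud compute backend-services update web-backend \
    --security-policy=web-armor-policy --global
```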
68. You have several VMs across multiple VPCs in your cloud environment, which require access to internet endpoints. These VMs cannot have public IP addresses due to security policies, so you plan to use Cloud NAT to provide outbound internet access. Within your VPCs, you have several subnets in each region. You want to ensure that only specific subnets have access to the internet through Cloud NAT. You want to avoid any unintentional configuration issues caused by other administrators, and align to Google-recommended practices. What should you do?
Create a firewall rule in each VPC at priority 500 that targets all instances in the network and denies egress to the internet, 0.0.0.0/0. Create a firewall rule at priority 300 that targets all instances in the network, has a source filter that maps to the allowed subnets, and allows egress to the internet, 0.0.0.0/0. Deploy Cloud NAT, and configure all primary and secondary subnet source ranges
Create a firewall rule in each VPC at priority 500 that targets all instances in the network and denies egress to the internet, 0.0.0.0/0. Create a firewall rule at priority 300 that targets all instances in the network, has a source filter that maps to the allowed subnets, and allows egress to the internet, 0.0.0.0/0. Deploy Cloud NAT, and configure a custom source range that includes the allowed subnets
Deploy Cloud NAT in each VPC, and configure a custom source range that includes the allowed subnets. Configure Cloud NAT rules to only permit the allowed subnets to egress through Cloud NAT
Create a constraints/compute.restrictCloudNATUsage organizational policy constraint. Attach the constraint to a folder that contains the associated projects. Configure the allowedValues to only contain the subnets that should have internet access. Deploy Cloud NAT and select only the allowed subnets
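As background for this scenario: a Cloud NAT gateway can be scoped to specific subnets at creation time, so subnets left out of the custom range list have no NAT egress path. A hedged sketch (router, region, and subnet names are placeholders):

```shell
# NAT only the approved subnets; other subnets in the region get no egress path
gcloud compute routers nats create nat-gw \
    --router=my-router --region=us-east1 \
    --nat-custom-subnet-ip-ranges=allowed-subnet-a,allowed-subnet-b \
    --auto-allocate-nat-external-ips
```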
69. Your organization has a Google Cloud Virtual Private Cloud (VPC) with subnets in us-east1, us-west4, and europe-west4 that use the default VPC configuration. Employees in a branch office in Europe need to access the resources in the VPC using HA VPN. You configured the HA VPN associated with the Google Cloud VPC for your organization with a Cloud Router deployed in europe-west4. You need to ensure that the users in the branch office can quickly and easily access all resources in the VPC. What should you do?
Set the advertised routes to Global for the Cloud Router
Configure the VPC dynamic routing mode to Global
Create custom advertised routes for each subnet
Configure each subnet’s VPN connections to use Cloud VPN to connect to the branch office
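For context: a VPC's dynamic routing mode determines whether a Cloud Router advertises and learns routes only for its own region or for all regions. A one-line sketch, assuming a hypothetical network name:

```shell
# Switch the VPC to global dynamic routing so the europe-west4 Cloud Router
# advertises and learns routes for subnets in every region
gcloud compute networks update my-vpc --bgp-routing-mode=global
```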
70. Your organization has multiple VMs running on Google Cloud within a VPC. The VMs require connectivity to certain Google APIs. You need to enable Private Google Access for VM connectivity to Cloud Storage. What should you do?
Enable Private Google Access on the VPC, create a default route that points to the default internet gateway, and enable the Cloud Storage API
Enable Private Google Access on the project, remove the default route that points to the default internet gateway, and enable the Cloud Storage API
Enable Private Google Access on the subnet, create a default route that points to the default internet gateway, and enable the Cloud Storage API
Enable Private Google Access on the VM, remove the default route that points to the default internet gateway, and enable the Cloud Storage API
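Worth noting when weighing these options: Private Google Access is a per-subnet setting. A minimal sketch, assuming hypothetical subnet and region names:

```shell
# Private Google Access is enabled on the subnet, not the VPC, project, or VM
gcloud compute networks subnets update my-subnet \
    --region=us-east1 --enable-private-ip-google-access
```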
71. All the instances in your project are configured with the custom metadata enable-oslogin value set to FALSE and to block project-wide SSH keys. None of the instances are set with any SSH key, and no project-wide SSH keys have been configured. Firewall rules are set up to allow SSH sessions from any IP address range. You want to SSH into one instance. What should you do?
Open Cloud Shell and SSH into the instance using gcloud compute ssh
Generate a new SSH key pair. Verify the format of the public key and add it to the project. SSH into the instance using a third-party tool like PuTTY or ssh
Set the custom metadata enable-oslogin to TRUE, and SSH into the instance using a third-party tool like PuTTY or ssh
Generate a new SSH key pair. Verify the format of the private key and add it to the instance. SSH into the instance using a third-party tool like PuTTY or ssh
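For reference on the first option: gcloud handles key management itself, which is why no pre-existing keys are needed. A sketch with placeholder instance and zone names:

```shell
# gcloud generates a key pair and pushes the public key to instance metadata,
# so no manual key setup is needed even with project-wide SSH keys blocked
gcloud compute ssh my-instance --zone=us-east1-b
```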
72. You recently deployed Cloud VPN to connect your on-premises data center to Google Cloud. You need to monitor the usage of this VPN and set up alerts in case traffic exceeds the maximum allowed. You need to be able to quickly decide whether to add extra links or move to a Dedicated Interconnect. What should you do?
In the Google Cloud Console, use Monitoring Query Language to create a custom alert for bandwidth utilization
In the VPN section of the Google Cloud Console, select the VPN under hybrid connectivity, and then select monitoring to display utilization on the dashboard
In the Network Intelligence Center, check for the number of packet drops on the VPN
In the Monitoring section of the Google Cloud Console, use the Dashboard section to select a default dashboard for VPN usage
73. You need to centralize the Identity and Access Management permissions and email distribution for the WebServices Team as efficiently as possible. What should you do?
Create a new Cloud Identity Domain for the WebServices Team
Create a new Custom Role for all members of the WebServices Team
Create a Google Group for the WebServices Team
Create a G Suite Domain for the WebServices Team
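As background for this scenario: a Google Group serves both as an email distribution list and as a single IAM principal. A hedged sketch, assuming hypothetical group, domain, and project names (the exact flags of gcloud identity groups may differ):

```shell
# Create a group that can receive email and be granted IAM roles
gcloud identity groups create webservices-team@example.com \
    --organization="example.com" --display-name="WebServices Team"

# Grant a role to the whole team in one binding
gcloud projects add-iam-policy-binding my-project \
    --member="group:webservices-team@example.com" \
    --role="roles/viewer"
```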
74. You need to create the network infrastructure to deploy a highly available web application in the us-east1 and us-west1 regions. The application runs on Compute Engine instances, and it does not require the use of a database. You want to follow Google-recommended practices. What should you do?
Create one VPC with one subnet in each region. Create a global load balancer with a static IP address. Enable Cloud CDN and Google Cloud Armor on the load balancer. Create an A record using the IP address of the load balancer in Cloud DNS
Create one VPC with one subnet in each region. Create a regional network load balancer in each region with a static IP address. Enable Cloud CDN on the load balancers. Create an A record in Cloud DNS with both IP addresses for the load balancers
Create one VPC with one subnet in each region. Create an HTTP(S) load balancer with a static IP address. Choose the standard tier for the network. Enable Cloud CDN on the load balancer. Create a CNAME record using the load balancer’s IP address in Cloud DNS
Create one VPC in each region, and peer both VPCs. Create a global load balancer. Enable Cloud CDN on the load balancer. Create a CNAME for the load balancer in Cloud DNS
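For context on the DNS portion of these options: a global load balancer gets a single static IP address, which is then published as an A record. A hedged sketch (address name, DNS zone, domain, and IP are illustrative placeholders):

```shell
# Reserve a global static IP for the load balancer's frontend
gcloud compute addresses create web-ip --global

# Point the domain at the load balancer with an A record
gcloud dns record-sets create www.example.com. \
    --zone=my-zone --type=A --ttl=300 \
    --rrdatas=203.0.113.10
```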
75. Your organization wants to deploy an internal application named app-1 in VPC-1. The application will consume services from another internal application named app-2 in VPC-2. VPC Network Peering will connect both applications. You need to apply microsegmentation between these two applications and VPCs. What should you do?
Assign network tags to these applications: secure-tag-app-1 to app-1 and secure-tag-app-2 to app-2. Configure an ingress VPC firewall rule that allows traffic from secure-tag-app-1 to secure-tag-app-2. Leave the default deny ingress rule and the default allow egress rule
Assign secure tags to these applications: secure-tag-app-1 to app-1 and secure-tag-app-2 to app-2. Configure a network firewall policy that is attached to VPC-2 with an ingress rule that allows traffic from secure-tag-app-1 to secure-tag-app-2. Leave the default deny ingress rule and the default allow egress rule
Assign network tags to these applications: secure-tag-app-1 to app-1 and secure-tag-app-2 to app-2. Configure a hierarchical firewall policy with an ingress rule that allows traffic from secure-tag-app-1 to secure-tag-app-2. Leave the default deny ingress rule and the default allow egress rule
Assign secure tags to these applications: secure-tag-app-1 to app-1 and secure-tag-app-2 to app-2. Configure a hierarchical firewall policy with an ingress rule that allows traffic from secure-tag-app-1 to secure-tag-app-2. Leave the default deny ingress rule and the default allow egress rule
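As background: network firewall policy rules can match on secure tags for both source and target, which is what enables microsegmentation across peered VPCs. A heavily hedged sketch (the policy name, tag references, and port are placeholders; the exact secure-tag reference format may differ):

```shell
# Allow app-1 to reach app-2 by secure tag in a network firewall policy
gcloud compute network-firewall-policies rules create 100 \
    --firewall-policy=app-policy --global-firewall-policy \
    --direction=INGRESS --action=allow \
    --src-secure-tags=secure-tag-app-1 \
    --target-secure-tags=secure-tag-app-2 \
    --layer4-configs=tcp:443
```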
76. You created a VPC network named Retail in auto mode. You want to create a VPC network named Distribution and peer it with the Retail VPC. How should you configure the Distribution VPC?
Create the Distribution VPC in custom mode. Use the CIDR range 10.128.0.0/9. Create the necessary subnets, and then peer them via network peering
Rename the default VPC as "Distribution" and peer it via network peering
Create the Distribution VPC in custom mode. Use the CIDR range 10.0.0.0/9. Create the necessary subnets, and then peer them via network peering
Create the Distribution VPC in auto mode. Peer both the VPCs via network peering
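For context: auto-mode VPCs allocate their subnets from 10.128.0.0/9, so a VPC that will be peered with one must use ranges outside that block, such as 10.0.0.0/9. A hedged sketch, assuming hypothetical subnet names and ranges:

```shell
# Create the custom-mode VPC with a range that cannot collide with
# the auto-mode Retail VPC's 10.128.0.0/9 allocations
gcloud compute networks create distribution --subnet-mode=custom
gcloud compute networks subnets create distribution-us-east1 \
    --network=distribution --region=us-east1 --range=10.0.0.0/16

# Peering must be created from both sides to become ACTIVE
gcloud compute networks peerings create retail-to-distribution \
    --network=retail --peer-network=distribution
gcloud compute networks peerings create distribution-to-retail \
    --network=distribution --peer-network=retail
```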
FAQs
1. What is the Google Professional Cloud Network Engineer certification exam?
The Google Professional Cloud Network Engineer exam validates your ability to design, implement, and manage network architectures on Google Cloud.
2. How do I become a Google Professional Cloud Network Engineer certified professional?
You must pass the Professional Cloud Network Engineer exam, which tests your skills in networking, hybrid connectivity, and optimizing Google Cloud networks.
3. What are the prerequisites for the Google Professional Cloud Network Engineer exam?
There are no formal prerequisites, but Google recommends 3+ years of industry experience including 1+ year of hands-on work with Google Cloud.
4. How much does the Google Professional Cloud Network Engineer certification cost?
The exam costs $200 USD, but pricing may vary by region or currency.
5. How many questions are in the Google Professional Cloud Network Engineer exam?
The exam consists of 50–60 multiple-choice and multiple-select questions, with a 2-hour time limit.
6. What topics are covered in the Google Professional Cloud Network Engineer exam?
It covers VPC design, hybrid connectivity, network services, security, load balancing, and automation.
7. How difficult is the Google Professional Cloud Network Engineer certification exam?
It’s an intermediate to advanced-level exam, requiring strong networking and Google Cloud experience.
8. How long does it take to prepare for the Google Professional Cloud Network Engineer exam?
Most candidates take 8–10 weeks to prepare thoroughly, depending on their background in networking.
9. What jobs can I get after earning the Google Professional Cloud Network Engineer certification?
You can work as a Cloud Network Engineer, Network Architect, or Infrastructure Engineer specializing in Google Cloud solutions.
10. How much salary can I earn with a Google Professional Cloud Network Engineer certification?
Certified professionals typically earn between $110,000–$150,000 per year, depending on role and experience.
