
Welcome to Haposoft Blog

Explore our blog for fresh insights, expert commentary, and real-world examples of project development that we're eager to share with you.
Dec 16, 2025
20 min read
AWS VPC Best Practices: Build a Secure and Scalable Cloud Network
A well-built AWS VPC creates clear network boundaries for security and scaling. When the core layers are structured correctly from the start, systems stay predictable, compliant, and easier to operate as traffic and data grow.

What Is a VPC in AWS?

A Virtual Private Cloud (VPC) is an isolated virtual network that AWS provisions exclusively for each account, essentially your own private territory inside the AWS ecosystem. Within this environment, you control every part of the network design: choosing IP ranges, creating subnets, defining routing rules, and attaching gateways. Unlike traditional on-premises networking, where infrastructure must be built and maintained manually, an AWS VPC lets you establish enterprise-grade network boundaries with far less operational overhead.

A well-designed VPC is the foundation of any workload deployed on AWS. It determines how traffic flows, which components can reach the internet, and which must remain fully isolated. Thinking of a VPC as a planned digital neighborhood makes the concept easier to grasp: each subnet acts like a distinct zone with its own purpose, access rules, and connectivity model. This structured layout is what enables secure, scalable, and resilient cloud architectures.

Standard Architecture Used in Real Systems

When designing a VPC, the first step is understanding the core networking components that every production architecture is built on. These components define how traffic moves, which resources can reach the Internet, and how isolation is enforced across your workloads. Once these fundamentals are clear, the three subnet layers (Public, Private, and Database) become straightforward to structure.

Core VPC Components

- Subnets: the VPC is divided into logical zones:
  - Public: can reach the Internet through an Internet Gateway
  - Private: no direct Internet access; outbound traffic goes through a NAT Gateway
  - Isolated: no Internet route at all (ideal for databases)
- Route Tables: control how each subnet sends traffic:
  - Public → Internet Gateway
  - Private → NAT Gateway
  - Database → local VPC routes only
- Internet Gateway (IGW): allows inbound/outbound Internet connectivity for public subnets
- NAT Gateway: enables outbound-only Internet access for private subnets
- Security Groups: stateful, resource-level firewalls controlling application-to-application access
- Network ACLs (NACLs): stateless rules at the subnet boundary, used for hardening
- VPC Endpoints: enable private access to AWS services (such as S3 and DynamoDB) without traversing the public Internet

Each component above plays a specific role, but they only become meaningful when arranged into subnet layers. An IGW only makes sense when attached to public subnets. A NAT Gateway is only useful when private subnets need outbound access. Route tables shape the connectivity of each layer, and Security Groups control access between tiers. This is why production VPCs are structured into three tiers: Public, Private, and Database. Now we can dive into each tier.

Public Subnet (Internet-Facing Layer)

Public subnets contain the components that must receive traffic from the Internet, such as:

- Application Load Balancer (ALB)
- AWS WAF for Layer-7 protection
- CloudFront for global edge delivery
- Route 53 for DNS routing

This ensures inbound client traffic always enters through tightly controlled entry points, never directly into the application or database layers.
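To make the entry layer concrete, here is a minimal Terraform sketch of an Internet-facing ALB placed in two public subnets behind a Security Group that only accepts HTTPS. The resource names (aws_vpc.main, aws_subnet.public_a, aws_subnet.public_b) are illustrative and assume the VPC and subnets are defined elsewhere in the same configuration, as in the Terraform walkthrough later in this post; WAF, CloudFront, and Route 53 are left out for brevity.

```hcl
# Security Group for the public-facing ALB: HTTPS in from anywhere,
# all outbound allowed so the ALB can forward to private targets.
resource "aws_security_group" "alb" {
  name   = "public-alb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "HTTPS from the Internet"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Internet-facing Application Load Balancer spanning both public subnets.
resource "aws_lb" "public" {
  name               = "public-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = [aws_subnet.public_a.id, aws_subnet.public_b.id]
}
```

Listeners, target groups, and the ACM certificate attach to this load balancer; the application targets themselves stay in the private subnets described next.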
Private Subnet (Application Layer)

Private subnets host the application services that should not have public IPs. These typically include:

- ECS Fargate or EC2 instances for backend workloads
- Auto Scaling groups
- Internal services communicating with databases

Outbound access (for package updates, calling third-party APIs, and so on) is routed through a NAT Gateway placed in a public subnet. Because connections can only be initiated from the inside out, this layer protects your application from unsolicited Internet access while allowing it to function normally.

Database Subnet (Isolated Layer)

The isolated subnet contains data stores such as:

- Amazon RDS (Multi-AZ)
- Other managed database services

This layer has no direct Internet route and is reachable only from the application tier via Security Group rules. This strict isolation prevents any external traffic from reaching the database, greatly reducing risk and helping organizations meet compliance standards like PCI DSS and GDPR.

AWS VPC Best Practices You Should Apply in 2025

Before applying any best practices, it's worth checking whether your current VPC is already showing signs of architectural stress. Common indicators include running out of CIDR space, applications failing to scale properly, or difficulty integrating hybrid connectivity such as VPN or Direct Connect. When these symptoms appear, it's usually a signal that your VPC needs a structural redesign rather than incremental fixes.

To address these issues consistently, modern production environments follow a standardized network layout: Public, Private Application, and Database subnets, combined with a controlled, one-directional traffic flow between tiers. This structure is widely adopted because it improves security boundaries, simplifies scaling, and ensures compliance across sensitive workloads.

#1: Public Subnet (Internet-Facing Layer)

Location: two subnets distributed across two Availability Zones (10.0.1.0/24, 10.0.2.0/24)

Key components:
- Application Load Balancer (ALB) with ACM SSL certificates
- AWS WAF for Layer-7 protection
- CloudFront as the edge CDN
- Route 53 for DNS resolution

Route table: 0.0.0.0/0 → Internet Gateway

Purpose: this layer receives external traffic from web or mobile clients, handles TLS termination, filters malicious requests, serves cached static content, and forwards validated requests into the private application layer.

#2: Private Subnet (Application Tier)

Location: two subnets across two AZs (10.0.3.0/24, 10.0.4.0/24)

Key components:
- ECS Fargate services: backend APIs (Golang) and frontend build pipelines (React)
- Auto Scaling groups adapting to CPU/memory load

Route table: 0.0.0.0/0 → NAT Gateway

Purpose: this tier runs all business logic without exposing any public IPs. Workloads can make outbound calls through the NAT Gateway, but inbound access is restricted to the ALB. This setup ensures security, scalability, and predictable traffic control.

#3: Database Subnet (Isolated Layer)

Location: two dedicated subnets (10.0.5.0/24, 10.0.6.0/24)

Key components:
- RDS PostgreSQL with primary + read replica
- Multi-AZ deployment for high availability

Route table: 10.0.0.0/16 → local (no Internet route)

Security:
- Security Group: allow only connections from the application-tier SG on port 5432
- NACL rules: allow inbound 5432 from 10.0.3.0/24 and 10.0.4.0/24; deny all access from public subnets; deny all other inbound traffic
- Encryption at rest (KMS) and TLS in transit enabled

Purpose: ensures the database remains fully isolated, protected from the Internet, and reachable only through controlled, auditable application-layer traffic.
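As a sketch of how this isolation is typically expressed in Terraform, the snippet below defines a DB subnet group over the two database subnets and a Security Group whose only ingress rule references the application-tier Security Group. Names such as aws_subnet.db_a, aws_subnet.db_b, and aws_security_group.app are illustrative and assumed to exist elsewhere in the configuration; the NACL rules and custom KMS key listed above are omitted here.

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

# Subnet group so RDS only launches into the isolated database subnets.
resource "aws_db_subnet_group" "db" {
  name       = "db-subnet-group"
  subnet_ids = [aws_subnet.db_a.id, aws_subnet.db_b.id]
}

# Database Security Group: the single ingress rule points at the
# application-tier SG, so nothing outside the app layer can connect.
resource "aws_security_group" "rds" {
  name   = "rds-postgres-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description     = "PostgreSQL from the application tier only"
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.app.id]
  }
}

# Multi-AZ PostgreSQL instance, encrypted at rest, attached to both.
resource "aws_db_instance" "postgres" {
  identifier             = "app-postgres"
  engine                 = "postgres"
  instance_class         = "db.t3.medium"
  allocated_storage      = 50
  multi_az               = true
  storage_encrypted      = true
  db_subnet_group_name   = aws_db_subnet_group.db.name
  vpc_security_group_ids = [aws_security_group.rds.id]
  username               = "appuser"
  password               = var.db_password
  skip_final_snapshot    = true
}
```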
#4: Enforcing a Secure, One-Way Data Flow

No packet from the Internet ever reaches RDS directly. No application container has a public IP. Every hop is enforced by Security Groups, NACL rules, and IAM policies.

Purpose: this structured, predictable flow minimizes the blast radius, improves auditability, and ensures compliance with security frameworks such as PCI DSS, GDPR, and ISO 27001.

Deploying This Architecture With Terraform (Code Example)

Using Terraform to manage your VPC (the classic aws vpc terraform setup) turns your network design into version-controlled, reviewable infrastructure. It keeps dev/stage/prod environments consistent, makes changes auditable, and prevents configuration drift caused by manual edits in the AWS console. The configuration that builds the VPC and all three subnet tiers according to the architecture above breaks down into the following steps (a sketch of the networking pieces, steps 1 to 4, follows this list):

1. Create the VPC: defines the network boundary for all workloads.
2. Public Subnets + Internet Gateway + Route Table: public subnets require an Internet Gateway and a route table allowing outbound traffic.
3. Private Application Subnets + NAT Gateway: allows outbound Internet access without exposing application workloads.
4. Database Subnets with no Internet path: database subnets must remain fully isolated with local-only routing.
5. Security Group for the ECS backend: restricts inbound access to trusted ALB traffic only.
6. Security Group for RDS, with only ECS allowed: ensures the database tier is reachable only from the application layer.
7. Attach the networking to the ECS Fargate service: runs the application inside private subnets with the correct security boundaries.
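The original code listing is not reproduced here, so the following is a minimal sketch of steps 1 to 4, using the CIDR ranges from the sections above and a single Availability Zone for brevity (a production layout would duplicate the subnets, and ideally the NAT Gateway, across two AZs). Resource names and the AZ are illustrative.

```hcl
# 1. The VPC: the network boundary for all workloads.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags                 = { Name = "main-vpc" }
}

# 2. Public subnet, Internet Gateway, and a route table with a default
#    route to the IGW.
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public_a" {
  subnet_id      = aws_subnet.public_a.id
  route_table_id = aws_route_table.public.id
}

# 3. Private application subnet with outbound access via a NAT Gateway
#    that lives in the public subnet.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}

resource "aws_subnet" "app_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_route_table" "app" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "app_a" {
  subnet_id      = aws_subnet.app_a.id
  route_table_id = aws_route_table.app.id
}

# 4. Database subnet with no Internet path: the route table has only the
#    implicit local route for 10.0.0.0/16.
resource "aws_subnet" "db_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.5.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_route_table" "db" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table_association" "db_a" {
  subnet_id      = aws_subnet.db_a.id
  route_table_id = aws_route_table.db.id
}
```

Steps 5 to 7 follow the same pattern: the ECS Security Group admits traffic only from the ALB Security Group, the RDS Security Group admits traffic only from the ECS Security Group (as sketched earlier), and the Fargate service's network_configuration points at the private subnets with assign_public_ip set to false.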
Common VPC Mistakes Teams Make (And How to Avoid Them)

Many VPC issues come from a few fundamental misconfigurations that appear repeatedly in real deployments.

1. Putting Databases in Public Subnets

A surprising number of VPCs place RDS instances in public subnets simply because initial connectivity feels easier. The problem is that this exposes the database to unnecessary risk and breaks most security and compliance requirements. Databases should always live in isolated subnets with no path to the Internet, and access must be restricted to application-tier Security Groups.

2. Assigning Public IPs to Application Instances

Giving EC2 or ECS tasks public IPs might feel convenient for quick access or troubleshooting, but it creates an unpredictable security boundary and drastically widens the attack surface. Application workloads belong in private subnets, with outbound traffic routed through a NAT Gateway and operational access handled via SSM or private bastion hosts.

3. Using a Single Route Table for Every Subnet

One of the easiest ways to break VPC isolation is attaching the same route table to public, private, and database subnets. Traffic intended for the Internet can unintentionally propagate inward, creating routing loops or leaking connectivity between tiers. A proper design separates route tables: public subnets route to the IGW, private subnets to NAT Gateways, and database subnets stay local-only.

4. Choosing a CIDR Block That's Too Small

Teams often underestimate growth and allocate a VPC CIDR so narrow that IP capacity runs out once more services or subnets are added. Expanding a VPC later is painful and usually requires migrations or complex peering setups. Starting with a larger CIDR range gives your architecture room to scale without infrastructure disruptions.

Conclusion

A clean, well-structured VPC provides the security, scalability, and operational clarity needed for any serious AWS workload. Following the 3-tier subnet model and enforcing predictable data flows keeps your environment compliant and easier to manage as the system grows. If you're exploring how to apply these principles to your own infrastructure, Haposoft's AWS team can help review your architecture and recommend the right improvements. Feel free to get in touch if you'd like expert guidance.
Nov 26, 2025
10 min read
Submit App To Google Play Without Rejection: Handling Closed Testing Failures
When you submit an app to Google Play, most early failures surface in Closed Testing, not the final review. What we share here comes from real testing practice, and it's what made handling those failures predictable for us.

What Google Play Closed Testing Is

Closed Testing is where Google first checks your app using real user activity, so it matters to understand what this stage actually requires.

Where Closed Testing Fits in the Submission Process

When you submit an app to Google Play, it doesn't go straight to the final review. Before reaching that stage, every build must pass through Google's testing tracks: Internal Testing → Closed Testing → Open Testing. Closed Testing sits in the middle of this flow and is the first point where Google expects real usage from real users. If the app fails here, it never reaches the actual "Submit for Review" step. That's why many teams face repeated rejections without realizing the root cause comes from this stage, not the final review.

Google Play Closed Testing in Simple Terms

Google Play Closed Testing is a private release track where your app is shared with a small group of testers you select. These testers install the real build you intend to ship and use it in everyday conditions. The goal is straightforward: Google wants to see whether the app behaves like a complete product when real people interact with it. In this controlled environment, Google observes how users move through your features, how data is handled, and whether the experience matches what you describe in your Play Console settings. This is essentially Google's early check to confirm that the app is stable, transparent, and built for genuine use, not just something assembled to pass review.

What Google Expects During Closed Testing

The core function of Google Play Closed Testing is to verify authenticity. Google wants evidence that your app is functional, transparent, and ready for real users, not a rushed build created solely to pass review. To make this evaluation, Google looks for a few key signals:

- Real testers using real, active Google accounts
- Real usage patterns, not one-off opens or artificial interactions
- Consistent engagement over time, typically around 14 days for most app types
- Actions inside your core features, not empty screens or placeholder flows
- Behavior that aligns with your Data Safety form, privacy details, and feature declarations
- Evidence that the app is "alive", such as logs, events, and navigation patterns generated from authentic interactions

Google began tightening its review standards in 2023 after more unfinished and auto-generated apps started slipping into the submission flow. Instead of relying only on manual checks, Google now leans heavily on the activity recorded during Closed Testing to understand how an app performs under real use. This gives the review team a clearer picture of stability, data handling, and readiness, making Closed Testing a much more decisive step in whether an app moves forward.

Why Google Play Closed Testing Is So Hard to Pass

Most teams fail Closed Testing because their testing behavior doesn't match the evaluation signals Google actually uses. The comparison below pairs common developer mistakes with Google's real criteria, so you can see exactly why each issue leads to rejection.

Issue 1: Teams treat Closed Testing like internal QA. Testers only tap around the interface and rarely complete real user journeys.
What Google checks: Full, natural flows. It expects onboarding → core action → follow-up action. Shallow tapping does not confirm real functionality, so Google marks the test as lacking behavioral proof.

Issue 2: Testers open the app once or twice and stop. Most activity happens on day 1, then engagement drops to zero.
What Google checks: Multi-day usage patterns. It needs recurring activity to evaluate stability and real adoption. One-off launches look like artificial or incomplete testing → fail.

Issue 3: Core features remain untouched because testers don't find or understand them. Navigation confusion prevents users from triggering important flows.
What Google checks: Whether declared core features are actually used. If users don't naturally reach those flows, Google cannot validate them → flagged as "unverified behavior."

Issue 4: Permissions are declared, but no tester enters the flows that use them (camera, location, contacts, or other data-related actions never get triggered).
What Google checks: Declared permissions cross-checked against real behavior. If a permission never activates during testing, Google treats the Data Safety form as unverifiable → extremely high rejection rate.

Issue 5: Engagement collapses after the first day. Testers lose interest quickly, resulting in long periods of zero activity.
What Google checks: Consistency over time (≈14 days). When usage dies early, the system sees weak, unreliable activity that does not resemble real-world usage → rejection.

Passing Google Play Closed Testing: A Real Case Study

Closed Testing turned out to be far stricter than we expected. What looked like a simple pre-release step quickly became the most decisive part of the review, and our team had to learn this the hard way, through three consecutive rejections before finally getting approved.

The Three Issues That Held Us Back in Closed Testing

These were the three recurring problems that blocked our app from moving past Google Play's Closed Testing stage.

#Issue 1: Having Testers, but Not Enough "Real" Activity

In the first attempt, we invited only one person to join the test, so the app barely generated any meaningful activity. Most of the usage stopped at simple screen opens, and none of the core features were exercised in a way Google could evaluate. With such a small and shallow pattern, the system couldn't treat it as real user participation. The build was rejected right away for not meeting the minimum level of authentic activity.

#Issue 2: Misunderstanding the "14-Day Activity" Requirement

For the second round, we expanded the group to twelve testers, but most of them stopped using the app after just a few days. The remaining period showed almost no engagement, which meant the full 14-day window Google expects was never actually covered. Although the number of testers looked correct, the lack of continuous usage made the test inconclusive. Google dismissed the submission because the activity dropped off too early.

#Issue 3: No Evidence of Real Activity (Logs, Tracking, or Records)

By the third attempt, we finally kept twelve testers active for the entire duration, but we failed to capture what they did. There were no logs showing feature flows, no tracking to confirm event sequences, and no recordings of actions tied to sensitive permissions. From Google's viewpoint, the numbers in the dashboard had nothing to support them. Without verifiable evidence, the review team treated the activity as unreliable and rejected the build again.
What Finally Helped Us Pass Google Play Closed Testing

To fix the issues from the earlier attempts, the team reorganized the entire test instead of adding more testers at random. Everything was structured so Google could see consistent, authentic behavior from real users.

A larger tester group created a more reliable activity curve
The previous rounds didn't generate enough meaningful activity, so we increased the number of people involved. The larger group created a more natural engagement pattern that gave Google more complete usage signals to review.

Extending the testing period from 14 to 17 consecutive days
To avoid the early drop-off that hurt our earlier attempts, we kept the test running a little longer than the minimum 14 days. The longer duration prevented mid-test gaps and helped Google see continuous interaction across multiple days.

Introducing a detailed daily checklist so testers covered the right flows
Instead of letting everyone tap around freely, we provided a short list of the core actions Google needed to observe. A clear checklist guided testers through specific actions each day, producing consistent evidence for the features Google needed to verify.

Enabling device-level tracking and full system logs
Earlier data was too thin to validate behavior, so we enabled device-level tracking and full system logs that we could review and later align with Google's dashboard. This fixed the "invisible activity" issue from the earlier rounds and gave the review team something concrete to validate.

Having testers record short videos of their actions
Some flows involving permissions weren't reflected clearly in logs, so testers recorded short clips while performing these tasks. These videos provided direct confirmation of how the camera, file access, and upload flows worked.

Adding small features and content to encourage natural engagement
The previous builds didn't encourage repeated use, so we added minor features and content updates to create more realistic daily engagement. These adjustments helped testers interact with the app in a way that resembled real usage, not surface-level taps.

Release Access Form: A Commonly Overlooked Step in the Approval Process

After Closed Testing is completed, Google requires developers to submit the Release Access Form before the app can move forward in the publishing process. It sounds simple, but the way this form is written has a direct influence on the final review. Taking the form seriously, paired with the testing evidence we had already prepared, helped our final submission go through smoothly on the fourth attempt. Here's what became clear when we worked through it:

- The answers must reflect the real behavior of the app, especially the sections on intended use and where user data comes from. Any mismatch creates doubt.
- Google expects clear descriptions of features, user actions, and the scope of testing. Vague explanations often slow the process down.
- Looking at how other developer communities handled this form helped us understand the phrasing that aligns with Google's criteria.

Final Thoughts

Closed Testing is ultimately about proving that your app behaves like a real, ready-to-ship product. Most teams lose time because they only react after a rejection; we prevent 80% of those rejections long before you ship. If you want fewer surprises and a tighter, lower-risk review cycle, talk to us and Haposoft will run the entire review cycle for you.

Subscribe to Haposoft's Monthly Newsletter

Get expert insights on digital transformation and event updates straight to your inbox

Let’s Talk about Your Next Project. How Can We Help?

©Haposoft 2025. All rights reserved