F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
Introduction

For those of you following along with the F5 Hybrid Security Architectures series, welcome back! If this is your first foray into the series and you would like some background, have a look at the intro article. This series uses the F5 Hybrid Security Architectures GitHub repo and CI/CD platform to deploy F5-based hybrid security solutions built on DevSecOps principles. This repo is a community-supported effort to provide not only a demo and workshop, but also a stepping stone for using these practices in your own F5 deployments. If you find any bugs or have any enhancement requests, open an issue, or better yet, contribute!

In this first example solution, we will use Terraform to deploy an application server running the OWASP Juice Shop application, serviced by an F5 BIG-IP Advanced WAF Virtual Edition. We will supplement this with F5 Distributed Cloud Web App and API Protection to provide complementary security at the edge. Everything will be tied together using GitHub Actions for CI/CD and Terraform Cloud to maintain state.

Distributed Cloud WAF: Available for SaaS-based deployments in a distributed environment that reduces operational overhead with an optional fully managed service.
BIG-IP Advanced WAF: Available for on-premises / data center and public or private cloud (virtual edition) deployment, for robust, high-performance web application and API security with granular, self-managed controls.

XC WAF + BIG-IP Advanced WAF Workflow

GitHub Repo: F5 Hybrid Security Architectures

Prerequisites:
- F5 Distributed Cloud Account (F5 XC)
- Create an F5 XC API certificate
- AWS Account - Due to the assets being created, a free tier will not work. NOTE: You must be subscribed to the F5 BIG-IP AMI being used in the AWS Marketplace.
- Terraform Cloud Account
- GitHub Account

Assets:
- xc: F5 Distributed Cloud WAAP
- bigip-base: F5 BIG-IP base deployment
- bigip-awaf: F5 BIG-IP Advanced WAF config
- infra: AWS infrastructure (VPC, IGW, etc.)
- juiceshop: OWASP Juice Shop test web application

Tools:
- Cloud Provider: AWS
- Infrastructure as Code: Terraform
- Infrastructure as Code State: Terraform Cloud
- CI/CD: GitHub Actions

Terraform Cloud:

Workspaces: Create a workspace for each asset in the chosen workflow.
Workflow: xc-bigip
Workspaces: infra, bigip-base, bigip-awaf, juiceshop, xc

Workspace Sharing: Under the settings for each workspace, set Remote state sharing to share with each workspace created. Your Terraform Cloud console should resemble the following:

Variable Set: Create a Variable Set with the following values. IMPORTANT: Ensure sensitive values are appropriately marked.
- AWS_ACCESS_KEY_ID: Your AWS Access Key ID - Environment Variable
- AWS_SECRET_ACCESS_KEY: Your AWS Secret Access Key - Environment Variable
- AWS_SESSION_TOKEN: Your AWS Session Token - Environment Variable
- VOLT_API_P12_FILE: Your F5 XC API certificate. Set this to api.p12 - Environment Variable
- VES_P12_PASSWORD: Set this to the password you supplied when creating your F5 XC API key - Environment Variable
- ssh_key: Your SSH key for access to created BIG-IP and compute assets - Terraform Variable
- admin_src_addr: The source address of your administrative workstation - Terraform Variable
- tf_cloud_organization: Your Terraform Cloud Organization name - Terraform Variable

Your Variable Set should resemble the following:

GitHub:

Fork and Clone Repo: F5 Hybrid Security Architectures

Actions Secrets: Create the following GitHub Actions secrets in your forked repo:
- P12: The base64 encoded F5 XC API certificate
- TF_API_TOKEN: Your Terraform Cloud API token
- TF_CLOUD_ORGANIZATION: Your Terraform Cloud Organization
- TF_CLOUD_WORKSPACE_workspace: Create one for each workspace used in your workflow. EX: TF_CLOUD_WORKSPACE_BIGIP_BASE would be created with the value bigip-base

Your GitHub Actions Secrets should resemble the following:

Terraform Local Variables:

Step 1: Rename infra/terraform.tfvars.examples to infra/terraform.tfvars and add the following data:

```
project_prefix = "Your project identifier"
resource_owner = "You"
aws_region     = "Your AWS region" # ex: us-west-1
azs            = "Your AWS availability zones" # ex: ["us-west-1a", "us-west-1b"]

#Assets
nic       = false
nap       = false
bigip     = true
bigip-cis = false
```

Step 2: Rename bigip-base/terraform.tfvars.examples to bigip-base/terraform.tfvars and add the following data:

```
f5_ami_search_name     = "F5 BIGIP-16.1.3* PAYG-Adv WAF Plus 25Mbps*"
aws_secretmanager_auth = false

#Provisioning set to nominal or none
asm = "nominal"
apm = "none"
```

Step 3: Rename bigip-awaf/terraform.tfvars.examples to bigip-awaf/terraform.tfvars and add the following data:

```
awaf_config_payload = "awaf-config.json"
```

Step 4: Rename xc/terraform.tfvars.examples to xc/terraform.tfvars and add the following data:

```
api_url         = "https://<YOUR TENANT>.console.ves.volterra.io/api"
xc_tenant       = "Your tenant id available in F5 XC Administration section Tenant Overview"
xc_namespace    = "Your XC Namespace"
app_domain      = "Your APP FQDN"
xc_waf_blocking = true
```

Step 5: Commit your changes

Deployment Workflow:

Step 1: Check out a branch for the deploy workflow using the following naming convention. xc-bigip deployment branch: deploy-xc-bigip
Step 2: Push your deploy branch to the forked repo
Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build
Step 4: Once the pipeline completes, verify your assets were deployed to AWS and F5 XC. Note: Check the Terraform outputs of the bigip-base job for the randomly generated password for BIG-IP GUI access.

F5 BIG-IP Terraform Outputs:

Step 5: Verify your app is available by navigating to the app domain FQDN you provided in the setup. Note: The autocert process takes time. It may be 5 to 10 minutes before Let's Encrypt has provided the cert.

F5 XC Terraform Outputs:

Destroy Workflow:

Step 1: From your main branch, check out a new branch for the destroy workflow using the following naming convention. xc-bigip destroy branch: destroy-xc-bigip
Step 2: Push your destroy branch to the forked repo
Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your workflow
Step 4: Once the pipeline completes, verify your assets were destroyed in AWS and F5 XC

Conclusion

In this article, we have shown how to utilize the F5 Hybrid Security Architectures GitHub repo and CI/CD pipeline to deploy a tiered security architecture utilizing F5 XC WAF and BIG-IP Advanced WAF to protect a test web application. While the code and security policies deployed are generic and not inclusive of all use cases, they can be used as a stepping stone for deploying F5-based hybrid architectures in your own environments.
Workloads are increasingly deployed across multiple diverse environments and application architectures. Organizations need the ability to protect their essential applications regardless of deployment or architecture circumstances. Equally important is the need to deploy these protections with the same flexibility and speed as the apps they protect. With the F5 WAF portfolio, coupled with DevSecOps principles, organizations can deploy and maintain industry-leading security without sacrificing the time to value of their applications. Not only can Edge and Shift Left principles exist together, but they can also work in harmony to provide a more effective security solution.

Teachable Course: You can access the hands-on course for F5 Hybrid XC WAF with BIG-IP Advanced WAF through the following link: Training Course

Article Series:
- F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
- F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
- F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
- F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
- F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
- F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

For further information or to get started:
- F5 Distributed Cloud Platform
- F5 Distributed Cloud WAAP Services
- F5 Distributed Cloud WAAP YouTube series
- F5 Distributed Cloud WAAP Get Started
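Before moving on to the next article in the series, here is a small companion to Step 5 of the deployment workflow above, where the Let's Encrypt autocert process can take 5 to 10 minutes. It is a hedged Python sketch that simply polls the app domain until it answers over HTTPS; the FQDN is a placeholder for the app_domain you configured in xc/terraform.tfvars.

```python
# Hedged sketch: poll the deployed app's FQDN until HTTPS answers.
# "app.example.com" is a placeholder for your own app_domain value.
import time
import requests

APP_FQDN = "app.example.com"  # placeholder - replace with your app domain

def wait_for_app(fqdn: str, timeout_s: int = 900, interval_s: int = 30) -> bool:
    """Return True once https://<fqdn>/ answers with any HTTP status."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            resp = requests.get(f"https://{fqdn}/", timeout=10)
            print(f"{fqdn} responded with HTTP {resp.status_code}")
            return True
        except requests.exceptions.RequestException as exc:
            # Cert not issued yet or DNS still propagating - retry after a pause.
            print(f"Not ready yet ({exc.__class__.__name__}), retrying in {interval_s}s")
            time.sleep(interval_s)
    return False

if __name__ == "__main__":
    wait_for_app(APP_FQDN)
```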
Mitigation of OWASP API Security Risk: BOPLA using F5 XC Platform

Introduction:

OWASP API Security Top 10 - 2019 has two categories, "Mass Assignment" and "Excessive Data Exposure", which focus on vulnerabilities that stem from manipulation of, or unauthorized access to, an object's properties. For example, consider user information in JSON format: {"UserName": "apisec", "IsAdmin": "False", "role": "testing", "Email": "apisec@f5.com"}. In this object payload, each detail is considered a property, and so vulnerabilities around modifying or exposing sensitive properties like email/role/IsAdmin fall under these categories. These risks shed light on the hidden vulnerabilities that might appear when modifying object properties, and they highlighted the need for a security solution that validates user access to functions/objects while also ensuring access control for specific properties within objects. Role-based access, sanitizing user input, and schema-based validation play a crucial role in safeguarding your data from unauthorized access and modifications. Since these two risks are similar, the OWASP community felt they could be brought under one radar and merged them as "Broken Object Property Level Authorization" (BOPLA) in the newer version of OWASP API Security Top 10 - 2023.

Mass Assignment:

A Mass Assignment vulnerability occurs when client requests are not restricted from modifying immutable internal object properties. Attackers can take advantage of this vulnerability by manually crafting requests to escalate user privileges, bypass security mechanisms, or otherwise exploit the API endpoints in an illegal/invalid way. For more details on the F5 Distributed Cloud mitigation solution, check this link: Mitigation of OWASP API6: 2019 Mass Assignment vulnerability using F5 XC

Excessive Data Exposure:

Application Programming Interfaces (APIs) sometimes lack restrictions and expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN) and Social Security Numbers (SSN). Because of these issues, they are among the most exploited building blocks used to gain access to customer information, and identifying the sensitive information in these huge chunks of API response data is crucial for data safety. For more details on this risk and the F5 Distributed Cloud mitigation solution, check this link: Mitigating OWASP API Security Risk: Excessive Data Exposure using F5 XC

Conclusion:

Wrapping up, this article covers an overview of the newly added BOPLA category in the OWASP Top 10 - 2023 edition. We have also provided details on each section of this risk, along with reference articles to dig deeper into F5 Distributed Cloud mitigation solutions.

Reference links or to get started:
- F5 Distributed Cloud Services
- F5 Distributed Cloud WAAP
- Introduction to OWASP API Security Top 10 2023
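To make the mass assignment scenario above concrete, here is a hedged sketch of what such a request can look like from the client side. The endpoint, token, and writable fields are hypothetical and only mirror the example user object shown earlier; real applications will differ.

```python
# Hedged sketch of a mass-assignment style request, modeled on the user object
# shown above. The endpoint /api/user/profile and the bearer token are hypothetical.
import requests

BASE_URL = "https://vulnerable-app.example.com"  # placeholder
session = requests.Session()
session.headers.update({"Authorization": "Bearer <token>"})  # placeholder token

# A legitimate profile update would only send fields the user may change:
legit_update = {"UserName": "apisec", "Email": "apisec@f5.com"}

# An attacker instead injects an internal property the API should never bind
# from client input, hoping the backend mass-assigns it to the stored object:
malicious_update = {**legit_update, "IsAdmin": "True", "role": "admin"}

resp = session.put(f"{BASE_URL}/api/user/profile", json=malicious_update, timeout=10)
print(resp.status_code, resp.text)
# If the API blindly binds request properties to the object, the account is now
# privileged; schema validation or an allow list of writable fields prevents this.
```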
Mitigating OWASP API Security Risk: Excessive Data Exposure using F5 XC Platform

This is part of the OWASP API Security Top 10 mitigation series, and you can refer here for an overview of these categories and F5 Distributed Cloud Platform (F5 XC) Web Application and API Protection (WAAP).

Introduction to Excessive Data Exposure

Application Programming Interfaces (APIs) are the foundation stone of modern, evolving web applications that are driving the digital world. They are part of all phases of the product development life cycle, from design and testing to end customers using them in their day-to-day tasks. When APIs don't have restrictions in place, they sometimes expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN) and Social Security Numbers (SSN). Because of these issues, they are among the most exploited building blocks in cybercrime to gain access to customer information, which can be sold or further used in other exploits like credential stuffing. Most of the time, the design stage doesn't include this security perspective and relies on 3rd party tools to sanitize the data before displaying the results to customers. Identifying the sensitive information in these huge chunks of API response data is sophisticated, and most of the available security tools in the market don't support this capability. So instead of relying on third-party tools, it's recommended to follow shift-left strategies and add security as part of the development phase. During this phase, developers must review and ensure that the API returns only required details instead of providing unnecessary properties, to avoid sensitive data exposure.

Excessive data exposure attack scenario-1

To showcase this category, we are exposing sensitive details like CCN and SSN in one of the product reviews of the Juice Shop application (refer to the links for more info) as below.

Overview of Data Guard:

Data Guard is an F5 XC load balancer feature which shields responses from exposing sensitive information like CCN/SSN by masking these fields with a string of asterisks (*). Depending on the customer's requirements, multiple rules can be configured to apply or skip processing for certain paths and routes.

Preventing excessive data exposure using F5 Distributed Cloud:

Step 1: Create an origin pool - refer here for more information
Step 2: Create a Web Application Firewall (WAF) policy - refer here for details
Step 3: Create an HTTPS load balancer (LB) with the above created pool and WAF policy - refer here for more information
Step 4: Upload your application swagger file and add it to the above load balancer - refer here for more details
Step 5: Configure Data Guard on the load balancer with action and path as below
Step 6: Validate that the sensitive data is masked:
- Open Postman or a browser, check the product reviews section/API, and validate that these details are hidden and not exposed as in the original application.
- In the Distributed Cloud Console, expand the security event and check the WAF section to understand the reason why these details are masked, as below.

Excessive data exposure attack scenario-2

In this demonstration we are using an API-based vulnerable application, VAmPI (VAmPI is a vulnerable API made with Flask, and it includes vulnerabilities from the OWASP API Security Top 10; for more info follow the repo link). Follow the steps below to bring up the setup:

Step 1: Host the VAmPI application inside a virtual machine.
Step 2: Log in to the XC console, create an HTTP LB, and add the hosted application as an origin server.
Step 3: Access the application to check its availability.
Step 4: Now enable API Discovery and configure the sensitive data discovery policy by adding all the compliance frameworks in your HTTP LB config.
Step 5: Hit the vulnerable API endpoint '/users/v1/_debug', which exposes sensitive data like username, password, etc.
Step 6: Navigate to the security overview dashboard in the XC console and select the API Endpoints tab. Check for vulnerable endpoint details.
Step 7: In the Sensitive Data section, click the ellipsis on the right side to get options for action.
Step 8: Clicking the option 'Add Sensitive Data Exposure Rule' will automatically add the entries for the sensitive data exposure rule to your existing LB configs. Apply the configuration.
Step 9: Now hit the vulnerable API endpoint '/users/v1/_debug' again. In the above image, you can see masked values in the response: all letters are changed to 'a' and numbers are converted to '1'.
Step 10: Optionally, you can also manually configure a sensitive data exposure rule by adding details about the vulnerable API endpoint:
- Log back in to the XC console.
- Start configuring the API Protection rule in the created HTTP LB.
- Click Configure in the Sensitive Data Exposure Rules section.
- Click Add Item to create the first rule.
- In the Target section, enter the path that will respond to the request. Also enter one or more methods with responses containing sensitive information.
- In the Values field of the Pattern section, enter the JSON field value you want to mask. For example, to mask all emails in the array users, enter "users[_].email". Note that an underscore between the square brackets indicates the array's elements.
- Once the above rule gets applied, values in the response will be masked as follows: all letters will change to a or A (matching case) and all numbers will convert to 1.
- Click Apply to save the rule to the list of Sensitive Data Exposure Rules. Optionally, click Add Item to add more rules.
- Click Apply to save the list of rules to your load balancer.
Step 11: After completing Step 10, hit the vulnerable API endpoint again. Here also, you can see masked values in the response as per the configurations done in Step 10.

Conclusion

As we have seen in the above use cases, sensitive data exposure occurs when an application does not protect sensitive data like PII, CCN, SSN, auth credentials, etc. Leaking such information may lead to serious consequences. Hence it becomes extremely critical for organizations to reduce the risk of sensitive data exposure. As demonstrated above, the F5 Distributed Cloud Platform can help protect against the exposure of such sensitive data with its easy-to-use API Security solution offerings.

For further information check the links below:
- OWASP API Security - Excessive Data Exposure
- OWASP API Security - Overview article
- F5 XC Data Guard Overview
- OWASP Juice Shop
- VAmPI
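As a quick way to validate the masking behavior configured in Steps 8-10 above, the following hedged Python sketch requests the VAmPI debug endpoint through the load balancer. The host name is a placeholder for your own HTTP LB domain.

```python
# Hedged sketch: re-check the VAmPI debug endpoint after the sensitive data
# exposure rules are applied. Replace the host with your load balancer's domain.
import requests

LB_HOST = "https://vampi.example.com"  # placeholder for your HTTP LB domain

resp = requests.get(f"{LB_HOST}/users/v1/_debug", timeout=10)
print(resp.status_code)
print(resp.json())
# Before the rules: usernames, emails and passwords come back in clear text.
# After Steps 8/10 above: letters in matched fields come back as 'a'/'A' and
# digits as '1', confirming the response is being masked at the edge.
```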
Mitigating OWASP API Security Risk: Unrestricted Resource Consumption using F5 Distributed Cloud Platform

Introduction:

An Unrestricted Resource Consumption vulnerability occurs when an API allows end users to over-utilize resources (e.g., CPU, memory, bandwidth, or storage) without enforcing proper limitations. This can lead to overwhelming of the system, performance degradation, denial of service (DoS), or complete unavailability of the services for valid users.

Attack Scenario:

In this demo, we are going to generate a large volume of traffic and observe the server's behaviour along with its response time.

Fig 1: Using Apache JMeter to send an arbitrary number of requests to an API endpoint continuously in a very short span of time.
Fig 2: (From left to right) Response time during normal traffic and during huge traffic.

The above results show a higher response time when abnormal traffic is sent to a single API endpoint compared to normal usage. With further increases in volume, the server can become unresponsive, deny requests from real users, and result in a DoS condition.

Fig 3: Attackers performing an arbitrary number of API requests to consume the server's resources.

Customer Solution:

F5 Distributed Cloud (XC) WAAP helps solve the above vulnerability by rate limiting the API requests, thereby preventing complete consumption of memory, file system storage, CPU resources, etc. This protects against traffic surges and DoS attacks. This article aims to provide the F5 XC WAAP configurations to control the rate of requests sent to the origin server.

Steps to configure Rate Limiting in F5 XC:

These are the steps to enable the Rate Limiting feature for APIs and validate it:
1. Add API Endpoints with Rate Limiter values
2. Validate the request rate by violating the threshold limit
3. Verify blocked requests in the F5 XC console

Step 1: Add API Endpoints with Rate Limiter values

Log in to the F5 XC console and navigate to Home > Load Balancers > Manage > Load Balancers. Select the load balancer to which API Rate Limiting should be applied. Click the menu in the Actions column of the app's load balancer and click Manage Configurations, as shown below, to display the load balancer configs.

Fig 4: Selecting the menu to manage configurations for the load balancer

Once the load balancer configurations are displayed, click the Edit configuration button at the top right of the page. Navigate to Security Configuration, select "API Rate Limit" in the Rate Limiting dropdown, and click "Add Item" under the API Endpoint section.

Fig 5: Choosing API Rate Limit to configure API endpoints
Fig 6: Configuring a rate limit for an API endpoint

The rate limit is configured for GET requests to the API endpoint "/product/OLJCESPC7Z". Click the Apply button at the bottom right of the screen, then click "Save and Exit" for the above configuration to be saved to the load balancer.

Step 2: Validate the request rate by violating the threshold limit

Fig 7: Verifying the request for the first time

A request sent for the first time after configuring the API endpoint returns the response with status code 200. Requests to the same API endpoint beyond the threshold limit are blocked, as shown below.

Fig 8: Rate limiting the API request

Step 3: Verify blocked requests in the F5 XC console

From the F5 XC console homepage, navigate to WAAP > Apps & APIs > Security and select the load balancer. Click Requests to view the request logs, as below.

Fig 9: Blocked API request details in the F5 XC console

You can see that requests beyond the rate limiter value get dropped and the response code is 429.
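If you prefer a scripted alternative to the JMeter run shown earlier, the following hedged Python sketch sends a small burst of requests to the rate-limited endpoint and counts the status codes. The load balancer host is a placeholder; the endpoint path is the one configured above.

```python
# Hedged sketch: send a burst of GET requests to the rate-limited endpoint and
# count how many are rejected with HTTP 429. The host is a placeholder.
from collections import Counter
import requests

LB_HOST = "https://shop.example.com"      # placeholder for your LB domain
ENDPOINT = "/product/OLJCESPC7Z"          # endpoint configured in the rate limiter
BURST = 50

statuses = Counter()
for _ in range(BURST):
    resp = requests.get(f"{LB_HOST}{ENDPOINT}", timeout=10)
    statuses[resp.status_code] += 1

print(dict(statuses))
# Expected shape: a handful of 200s up to the configured threshold, then 429s
# for every request beyond it.
```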
Conclusion:

In this article, we have seen that when an application receives an abnormal amount of traffic, F5 XC WAAP protects APIs from being overwhelmed by rate limiting the requests. XC's Rate Limiting feature helps in preventing DoS attacks and ensures service availability at all times.

Related Links:
- API4:2019 Lack of Resources and Rate Limiting
- API4:2023 Unrestricted Resource Consumption
- Creating Load Balancer Steps
- F5 Distributed Cloud Security WAAP
- F5 Distributed Cloud Platform
Mitigation of OWASP API Security Top 10 2023 using F5 Distributed Cloud Platform

The OWASP API Security project aims to help organizations by providing a guide with a list of the latest top 10 most critical API vulnerabilities and steps to mitigate them. As part of updating the old OWASP API Security risk categories of 2019, the OWASP API Security Top 10 2023 was released.

List of vulnerabilities:

API1:2023 Broken Object Level Authorization

Broken Object Level Authorization (BOLA) is a vulnerability that occurs when there is a failure to validate a user's permissions to perform a specific task over an object, which may eventually lead to leakage, modification, or destruction of data. To prevent this vulnerability, proper authorization mechanisms should be followed, proper checks should be made to validate a user's actions on a certain record, and security tests should be performed before deploying any production-grade changes.

API2:2023 Broken Authentication

Broken Authentication is a critical vulnerability that occurs when an application's authentication endpoints fail to detect attackers impersonating someone else's identity and allow partial or full control over the account. To prevent this vulnerability, observability and understanding of all possible authentication API endpoints is needed. Re-authentication should be performed for any confidential changes, and multi-factor authentication, captcha challenges, and effective security solutions should be applied to detect and mitigate credential stuffing, dictionary, and brute-force attacks. A detailed explanation of the vulnerability, with a demo showcasing the mitigation using F5 Distributed Cloud, can be found here.

API3:2023 Broken Object Property Level Authorization

Broken Object Property Level Authorization is one of the new risk categories in the OWASP API Security Top 10 2023. This vulnerability occurs when a user is allowed to access an object's property without validation of their access permissions. Excessive Data Exposure and Mass Assignment, which were initially part of OWASP API Security Top 10 2019, are now part of this new vulnerability. To prevent this vulnerability, the access privileges of users requesting a specific object's property should be scrutinized before exposure by the API endpoints. The use of generic methods and automatic binding of client inputs to internal objects or code variables should be avoided, and schema-based validation should be enforced. Detailed explanations of the vulnerabilities, with demos showcasing the mitigation using F5 Distributed Cloud, can be found here (Excessive Data Exposure, Mass Assignment).

API4:2023 Unrestricted Resource Consumption

An Unrestricted Resource Consumption vulnerability occurs when the system's resources are being unnecessarily consumed, which could eventually lead to degradation of services and performance latency issues. Although the name has changed, the vulnerability is still the same as Lack of Resources & Rate Limiting. To prevent this vulnerability, rate limiting, maximum sizes for input payloads/parameters, and server-side validation of requests should be enforced. A detailed explanation of the vulnerability, with a demo showcasing the mitigation using F5 Distributed Cloud, can be found here.

API5:2023 Broken Function Level Authorization

Broken Function Level Authorization occurs when vulnerable API endpoints allow normal users to perform administrative actions, or a user from one group is allowed to access a function specific to users of another group. To prevent this vulnerability, access control policies and administrative authorization checks based on the user's group/roles should be implemented.

API6:2023 Unrestricted Access to Sensitive Business Flows

Unrestricted Access to Sensitive Business Flows is also a new addition to the list of API vulnerabilities. While writing API endpoints, it is extremely critical for developers to have a clear understanding of the business flows they expose, to avoid exposing any sensitive business flow and to limit its excessive usage, which, if not considered, might eventually lead to exploitation by attackers and cause serious harm to the business. This also includes securing and limiting access to B2B APIs that are consumed directly and often integrated with minimal protection mechanisms. By putting automation to work, attackers can nowadays bypass traditional protection mechanisms. An API's inability to detect automated bot attacks not only causes business loss but can also adversely impact the services for real users. To overcome this vulnerability, enterprises need a platform that identifies whether a request is from a real user or an automated tool by analyzing and tracking patterns of usage. Device fingerprinting, integrating a captcha solution, and blocking Tor requests are a few methods which can help minimize the impact of such automated attacks. For more details on automated threats, you can visit OWASP Automated Threats to Web Applications. Note: Although the vulnerability is new, it contains some references to API10:2019 Insufficient Logging & Monitoring. A detailed explanation of the vulnerability, with a demo showcasing the mitigation using F5 Distributed Cloud, can be found here.

API7:2023 Server-Side Request Forgery

After finding a place in the OWASP Top 10 web application vulnerabilities of 2021, SSRF has now been included in the OWASP API Security Top 10 2023 list as well, showing the severity of this vulnerability. A Server-Side Request Forgery (SSRF) vulnerability occurs when an API fetches an internal server resource without validating the URL supplied by the user. Attackers exploit this vulnerability by manipulating the URL, which in turn helps them retrieve sensitive data from internal servers. To overcome this vulnerability, input data validation should be implemented to ensure that client-supplied input data obeys the expected format. Allow lists should be maintained so that only trusted requests/calls are processed, and HTTP redirections should be disabled. A detailed explanation of the vulnerability, with a demo showcasing the mitigation using F5 Distributed Cloud, can be found here.

API8:2023 Security Misconfiguration

Security Misconfiguration is a vulnerability that may arise when security best practices are overlooked. Unwanted exposure of debug logs, unnecessarily enabled HTTP verbs, the latest security patches not applied, a missing repeatable security hardening process, and improper implementation of CORS policy are a few examples of security misconfiguration. To prevent this vulnerability, systems and the entire API stack should be kept up to date without missing any security patches. A continuous security hardening and configuration tracking process should be carried out. Make sure all API communications take place over a secure channel (TLS) and that all servers in the HTTP server chain process incoming requests. The Cross-Origin Resource Sharing (CORS) policy should be set up properly, and unnecessary HTTP verbs should be disabled. A detailed explanation of the vulnerability, with a demo showcasing the mitigation using F5 Distributed Cloud, can be found here.

API9:2023 Improper Inventory Management

An Improper Inventory Management vulnerability occurs when organizations don't have much clarity on their own APIs as well as the third-party APIs they use, and lack proper documentation. Unawareness with regard to the current API version, environment, access control policies, data shared with third parties, etc. can lead to serious business repercussions. Clear understanding and proper documentation are the keys to overcoming this vulnerability. All the details related to API hosts, API environment, network access, API version, integrated services, redirections, rate limiting, and CORS policy should be documented correctly and maintained up to date. Documenting every minor detail is advisable, and authorized access should be given to these documents. Exposed API versions should be secured along with the production version. A risk analysis is recommended whenever newer versions of APIs are available. A detailed explanation of the vulnerability, with a demo showcasing the mitigation using F5 Distributed Cloud, can be found here.

API10:2023 Unsafe Consumption of APIs

Unsafe Consumption of APIs is again a newly added vulnerability, covering a portion of the API8:2019 Injection vulnerability. This occurs when developers tend to apply very little or no sanitization to data received from third-party APIs. To overcome this, we should make sure that API interactions take place over an encrypted channel. API data evaluation and sanitization should be carried out before using the data further, and precautionary actions should be taken to avoid unnecessary redirections by using allow lists. A detailed explanation of the vulnerability, with a demo showcasing the mitigation using F5 Distributed Cloud, can be found here.

Related OWASP API Security article series:
- Broken Authentication
- Excessive Data Exposure
- Mass Assignment
- Lack of Resources & Rate limiting
- Security Misconfiguration
- Improper Assets Management
- Unsafe consumption of APIs
- Server-Side Request Forgery
- Unrestricted Access to Sensitive Business Flows
- OWASP API Security Top 10 - 2019
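As an illustration of the API7 guidance above (allow lists and disabled redirections), here is a hedged Python sketch of server-side URL validation before fetching a client-supplied resource. The allowed host names are purely illustrative.

```python
# Hedged sketch of the SSRF guidance above: validate a client-supplied URL
# against an allow list before fetching it, and disable redirects.
from urllib.parse import urlparse
import requests

ALLOWED_HOSTS = {"images.example.com", "cdn.example.com"}  # illustrative allow list

def fetch_remote(url: str) -> bytes:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("only https URLs are allowed")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"host {parsed.hostname!r} is not on the allow list")
    # allow_redirects=False prevents the server being bounced to an internal address
    resp = requests.get(url, timeout=5, allow_redirects=False)
    resp.raise_for_status()
    return resp.content

# fetch_remote("https://images.example.com/logo.png")      # permitted
# fetch_remote("http://169.254.169.254/latest/meta-data")  # rejected
```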
Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization

Introduction

Red Hat OpenShift Virtualization is a feature that brings virtual machine (VM) workloads into the Kubernetes platform, allowing them to run alongside containerized applications in a seamless, unified environment. Built on the open-source KubeVirt project, OpenShift Virtualization enables organizations to manage VMs using the same tools and workflows they use for containers.

Why OpenShift Virtualization?

Organizations today face critical needs such as:
- Rapid Migration: "I want to migrate ASAP" from traditional virtualization platforms to more modern solutions.
- Infrastructure Modernization: Transitioning legacy VM environments to leverage the benefits of hybrid and cloud-native architectures.
- Unified Management: Running VMs alongside containerized applications to simplify operations and enhance resource utilization.

OpenShift Virtualization addresses these challenges by consolidating legacy and cloud-native workloads onto a single platform. This consolidation simplifies management, enhances operational efficiency, and facilitates infrastructure modernization without disrupting existing services. Integrating F5 Distributed Cloud Customer Edge (XC CE) into OpenShift Virtualization further enhances this environment by providing advanced networking and security capabilities. This combination offers several benefits:
- Multi-Tenancy: Deploy multiple CE VMs, each dedicated to a specific tenant, enabling isolation and customization for different teams or departments within a secure, multi-tenant environment.
- Load Balancing: Efficiently manage and distribute application traffic to optimize performance and resource utilization.
- Enhanced Security: Implement advanced threat protection at the edge to strengthen your security posture against emerging threats.
- Microservices Management: Seamlessly integrate and manage microservices, enhancing agility and scalability.

This guide provides a step-by-step approach to deploying XC CE within OpenShift Virtualization, detailing the technical considerations and configurations required.

Technical Overview

Deploying XC CE within OpenShift Virtualization involves several key technical steps:

Preparation
- Cluster Setup: Ensure an operational OpenShift cluster with OpenShift Virtualization installed.
- Access Rights: Confirm administrative permissions to configure compute and network settings.
- F5 XC Account: Obtain access to generate node tokens and download the XC CE images.

Resource Optimization
- Enable CPU Manager: Configure the CPU Manager to allocate CPU resources effectively.
- Configure Topology Manager: Set the policy to single-numa-node for optimal NUMA performance.

Network Configuration
- Open vSwitch (OVS) Bridges: Set up OVS bridges on worker nodes to handle networking for the virtual machines.
- NetworkAttachmentDefinitions (NADs): Use Multus CNI to define how virtual machines attach to multiple networks, supporting both external and internal connectivity.

Image Preparation
- Obtain XC CE Image: Download the XC CE image in qcow2 format suitable for KubeVirt.
- Generate Node Token: Create a one-time node token from the F5 Distributed Cloud Console for node registration.
- User Data Configuration: Prepare cloud-init user data with the node token and network settings to automate the VM initialization process.

Deployment
- Create DataVolumes: Import the XC CE image into the cluster using the Containerized Data Importer (CDI).
- Deploy VirtualMachine Resources: Apply manifests to deploy XC CE instances in OpenShift.

Network Configuration

Setting up the network involves creating Open vSwitch (OVS) bridges and defining NetworkAttachmentDefinitions (NADs) to enable multiple network interfaces for the virtual machines.

Open vSwitch (OVS) Bridges

Create a NodeNetworkConfigurationPolicy to define OVS bridges on all worker nodes:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-vms
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
      - name: ovs-vms
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          options:
            stp: true
          port:
            - name: eno1
    ovn:
      bridge-mappings:
        - localnet: ce2-slo
          bridge: ovs-vms
          state: present
```

Replace eno1 with the appropriate physical network interface on your nodes. This policy sets up an OVS bridge named ovs-vms connected to the physical interface.

NetworkAttachmentDefinitions (NADs)

Define NADs using Multus CNI to attach networks to the virtual machines.

External Network (ce2-slo): Connects VMs to the physical network with a specific VLAN ID. This setup allows the VMs to communicate with external systems, services, or networks, which is essential for applications that require access to resources outside the cluster or need to expose services to external users.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-slo
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-slo",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "f5-ce/ce2-slo",
      "mtu": 1500,
      "vlanID": 3052,
      "ipam": {}
    }
```

Internal Network (ce2-sli): Provides an isolated Layer 2 network for internal communication. By setting the topology to "layer2", this network operates as an internal overlay network that is not directly connected to the physical network infrastructure. The mtu is set to 1400 bytes to accommodate any overhead introduced by encapsulation protocols used in the internal network overlay.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-sli
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-sli",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "f5-ce/ce2-sli",
      "mtu": 1400,
      "ipam": {}
    }
```

VirtualMachine Configuration

Configuring the virtual machine involves preparing the image, creating cloud-init user data, and defining the VirtualMachine resource.

Image Preparation
- Obtain XC CE Image: Download the qcow2 image from the F5 Distributed Cloud Console.
- Generate Node Token: Acquire a one-time node token for node registration.

Cloud-Init User Data

Create a user-data configuration containing the node token and network settings:

```yaml
#cloud-config
write_files:
  - path: /etc/vpm/user_data
    content: |
      token: <your-node-token>
      slo_ip: <IP>/<prefix>
      slo_gateway: <Gateway IP>
      slo_dns: <DNS IP>
    owner: root
    permissions: '0644'
```

Replace the placeholders with actual network configurations. This file automates the VM's initial setup and registration.

VirtualMachine Resource Definition

Define the VirtualMachine resource, specifying CPU, memory, disks, network interfaces, and cloud-init configurations.
- Resources: Allocate sufficient CPU and memory.
- Disks: Reference the DataVolume containing the XC CE image.
- Interfaces: Attach NADs for network connectivity.
- Cloud-Init: Embed the user data for automatic configuration.
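The article points to the attached PDF for the full manifests, so the following is only a hedged sketch of how such a VirtualMachine resource could be assembled. It builds the manifest as a Python dictionary and prints it as YAML; the VM name, DataVolume name, CPU/memory sizing, and the exact KubeVirt fields are assumptions, so verify them against your cluster's kubevirt.io/v1 CRDs and the PDF guide before applying anything.

```python
# Hedged sketch: assemble a minimal KubeVirt VirtualMachine manifest for an XC CE
# instance and print it as YAML. Field names follow the kubevirt.io/v1 API as
# commonly documented; names, sizing, and cloud-init content are placeholders.
import yaml  # PyYAML

# Placeholder: paste the #cloud-config write_files block shown above here.
USER_DATA = "#cloud-config\n# <cloud-init user data from the previous section>\n"

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "xc-ce-1", "namespace": "f5-ce"},  # assumed names
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "cpu": {"cores": 8},  # size to your CE flavor
                    "resources": {"requests": {"memory": "32Gi"}},
                    "devices": {
                        "disks": [
                            {"name": "rootdisk", "disk": {"bus": "virtio"}},
                            {"name": "cloudinitdisk", "disk": {"bus": "virtio"}},
                        ],
                        "interfaces": [
                            {"name": "slo", "bridge": {}},
                            {"name": "sli", "bridge": {}},
                        ],
                    },
                },
                # Attach the two NADs defined earlier via Multus.
                "networks": [
                    {"name": "slo", "multus": {"networkName": "f5-ce/ce2-slo"}},
                    {"name": "sli", "multus": {"networkName": "f5-ce/ce2-sli"}},
                ],
                "volumes": [
                    # DataVolume created by CDI from the XC CE qcow2 image (assumed name).
                    {"name": "rootdisk", "dataVolume": {"name": "xc-ce-dv"}},
                    {"name": "cloudinitdisk", "cloudInitNoCloud": {"userData": USER_DATA}},
                ],
            }
        },
    },
}

print(yaml.safe_dump(vm_manifest, sort_keys=False))
```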
Conclusion

Deploying F5 Distributed Cloud CE in OpenShift Virtualization enables organizations to leverage advanced networking and security features within their existing Kubernetes infrastructure. This integration facilitates a more secure, efficient, and scalable environment for modern applications. For detailed deployment instructions and configuration examples, please refer to the attached PDF guide.

Related Articles:
- BIG-IP VE in Red Hat OpenShift Virtualization
- VMware to Red Hat OpenShift Virtualization Migration
- OpenShift Virtualization

Accelerate Your Initiatives: Secure & Scale Hybrid Cloud Apps on F5 BIG-IP & Distributed Cloud DNS

It's rare now to find an application that runs exclusively in one homogeneous environment. Users are now global, and enterprises must support applications that are always-on and available. These applications must also scale to meet demand while continuing to run efficiently, continuously delivering a positive user experience with minimal cost.

Introduction

In F5's 2024 State of Application Strategy Report, hybrid and multicloud deployments are pervasive. With the need for flexibility and resilience, most businesses will deploy applications that span multiple clouds and use complex hybrid environments. In the following solution, we walk through how an organization can expand and scale an application that has matured and now needs to be highly available to internal users while also being accessible to external partners and customers at scale. Enterprises using different form factors such as F5 BIG-IP TMOS and F5 Distributed Cloud can quickly right-size and scale legacy and modern applications that were originally only available in an on-prem datacenter.

Secure & Scale Applications

Let's consider the following example. Bookinfo is an enterprise application running in an on-prem datacenter that only internal employees use. This application provides product information and details that the business' users access from an on-site call center in another building on the campus. To secure the application and make it highly available, the enterprise has deployed an F5 BIG-IP TMOS in front of each endpoint. An endpoint is the combination of an IP, port, and service URL. In this scenario, our app has endpoints for the frontend product page and backend resources that only the product page pulls from.

Internal on-prem users access the app with internal DNS on BIG-IP TMOS. GSLB on the device sends another class of internal users, who aren't on campus and access by VPN, to the public cloud frontend in AWS. The frontend that runs in AWS can scale with demand, allowing it to expand as needed to serve an influx of external users. Both internal users who are off-campus and external users will now always connect to the frontend in AWS through the F5 Global Network and Regional Edges with Distributed Cloud DNS and App Connect.

With the frontend for the app enabled in AWS, it now needs to pull data from backend services that still run on-prem. Expanding the frontend requires additional connectivity, and to do that we first deploy an F5 Distributed Cloud Customer Edge (CE) to the on-prem datacenter. The CE connects to the F5 Global Network, and it extends Distributed Cloud services, such as DNS and Service Discovery, WAF, API Security, DDoS, and Bot protection, to apps running on BIG-IP. These protections not only secure the app but also help reduce unnecessary traffic to the on-prem datacenter.

With Distributed Cloud connecting the public cloud and on-prem datacenter, Service Discovery is configured on the CE on-prem. This makes a catalog of apps (virtual servers) on the BIG-IP available to Distributed Cloud App Connect. Using App Connect with managed DNS, Distributed Cloud automatically creates the fully qualified domain name (FQDN) for external users to access the app publicly, and it uses Service Discovery to make the backend services running on the BIG-IP available to the frontend in AWS.

Here are the virtual servers running on BIG-IP. Two of the virtual servers, "details" and "reviews," need to be made available to the frontend in AWS while continuing to work for the frontend that's on-prem.
To make the virtual servers on BIG-IP available as upstream servers in App Connect, all that's needed is to click "Add HTTP Load Balancer" directly from the Discovered Services menu.

To make the details and reviews services that are on-prem available to the frontend product page in AWS, we advertise each of their virtual servers on BIG-IP to only the CE running in AWS. The menu below makes this possible with only a few clicks, as service discovery eliminates the need to find the virtual IP and port for each virtual server. Because the CE in AWS runs within Kubernetes, the name of the new service being advertised is recognized by the frontend product page and is automatically handled by the CE.

This creates a split-DNS situation where an internal client can resolve and access both the internal on-prem and external AWS versions of the app. The subdomain "external.f5-cloud-demo.com" is now resolved by Distributed Cloud DNS, and "on-prem.f5-cloud-demo.com" is resolved by the BIG-IP. When combined with GSLB, internal users who aren't on campus and use a VPN will be redirected to the external version of the app.

Demo

The following video explains this solution in greater detail, showing how to configure connectivity to each service the app uses, as well as how the app looks to internal and external users. (Note: it looks and works identically! Just the way it should be, and with minimal time needed to configure it.)

Key Takeaways

BIG-IP TMOS has long delivered best-in-class service with high availability and scale to enterprise and complex applications. When integrated with Distributed Cloud, freely expand and migrate application services regardless of the deployment model (on-prem, cloud, and edge). This combination leverages cloud environments for extreme scale and global availability while freeing up resources on-prem that would otherwise be needed to scrub and sanitize traffic.

Conclusion

Using the BIG-IP platform with Distributed Cloud services addresses key challenges that enterprises face today: whether it's making internal apps available globally to workforces in multiple regions or scaling services without purchasing more fixed-cost on-prem resources. F5 has the products to unlock your enterprise's growth potential while keeping resources nimble. Check out the select resources below to explore more about the products and services featured in this solution.

Additional Resources
- Solution Overview: Distributed Cloud DNS
- Solution Overview: One DNS – Four Expressions
- Interactive Demo: Distributed Cloud DNS at F5
- DevCentral: The Power of &: F5 Hybrid DNS solution
- F5 Hybrid Security Architectures: One WAF Engine, Total Flexibility
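To illustrate the split-DNS behavior described above, here is a hedged Python sketch that resolves both names and prints the answers. It assumes the demo hostnames from this article; run it once from an internal client and once from an external one to compare the results.

```python
# Hedged, illustrative check of split DNS: resolve the internal and external
# names for the app and print what each resolves to from the current client.
import socket

NAMES = ["on-prem.f5-cloud-demo.com", "external.f5-cloud-demo.com"]

for name in NAMES:
    try:
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(name, 443)})
        print(f"{name:35s} -> {', '.join(addrs)}")
    except socket.gaierror as exc:
        # Internal-only names will not resolve from outside the campus network.
        print(f"{name:35s} -> not resolvable from here ({exc})")
```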