
Remediating XSS: Does a single fix work?

Synopsys Editorial Team

Jul 20, 2018 / 4 min read

A very common type of injection defect is cross-site scripting (also known as XSS or HTML injection). Many developers struggle to remediate XSS because they misunderstand the difference between validation, sanitization, and normalization/canonicalization.

Lately, even some security vendors have started suggesting “fixing” injection defects close to the source rather than close to the sink. This seems appealing because a fix close to the source has the potential to fix many defects with only a single code change. However, this suggestion suffers from two painful deficiencies. The first is confusion between validation and sanitization. The second is a misunderstanding of the regression risk associated with broad-impact changes. Once those items are better understood, it’s possible to formulate a viable plan for defect remediation.

Validation isn’t enough to remediate XSS

Input validation involves ensuring that “input data falls within the expected domain of valid program input.” As an example, if we are expecting a dollar amount as input, only numerals and a decimal point are acceptable input characters. In some cases, validation of input data ensures that there are no special characters in the input and, as a side effect, will indeed prevent an injection attack.
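As a concrete sketch of that kind of allow-list validation, here is a hypothetical dollar-amount check in Java; the class name and pattern are illustrative rather than taken from any particular framework:

    import java.util.regex.Pattern;

    public class AmountValidator {
        // Accept only digits with an optional decimal point and up to two decimal places,
        // e.g. "19.99" or "1200"; anything else is rejected outright.
        private static final Pattern DOLLAR_AMOUNT = Pattern.compile("^\\d+(\\.\\d{1,2})?$");

        public static boolean isValidAmount(String input) {
            return input != null && DOLLAR_AMOUNT.matcher(input).matches();
        }

        public static void main(String[] args) {
            System.out.println(isValidAmount("19.99"));                          // true
            System.out.println(isValidAmount("19.99<script>alert(1)</script>")); // false
        }
    }

For a field like this, validation alone does block injection, because the attack characters fall outside the valid domain.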

However, many inputs are free-form text, where special characters and keywords are acceptable inputs. As an example, many names contain an apostrophe, such as the name “O’Malley.” And although there have been no recorded incidents of this yet, it’s likely that at some point at least one jokester will name their child “DROP TABLE” or even “script alert.” Many business names have an ampersand. I actually had a friend in college whose last name was “Ampersand” (the string literal spelled out). Because of this, the domain of valid input for many (if not most) text fields is the entire Unicode character set. Validating input in a common location near the source is generally good advice, but if the set of valid inputs is the entire Unicode space, validation won’t remove any injection defects.
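To make that concrete, here is a hypothetical "letters only" rule of the kind teams often reach for on a name field; it does reject the attack string, but only by also rejecting names that real users have:

    import java.util.regex.Pattern;

    public class NameValidator {
        // A naive allow-list that looks reasonable until real names show up.
        private static final Pattern LETTERS_ONLY = Pattern.compile("^[A-Za-z ]+$");

        public static void main(String[] args) {
            System.out.println(LETTERS_ONLY.matcher("O'Malley").matches());                  // false
            System.out.println(LETTERS_ONLY.matcher("Joe's B&B").matches());                 // false
            System.out.println(LETTERS_ONLY.matcher("<script>alert(1)</script>").matches()); // false
        }
    }

The only way to stop rejecting legitimate users is to widen the allowed character set until it no longer excludes the characters an attacker needs.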

Escaping at the source won’t fix XSS either

For injection defects like SQL injection (SQLi), the proper remediation technique is neither input validation nor sanitization but rather the appropriate parameterized query. For HTML injection (XSS), however, it is necessary to sanitize the user-controlled data, or more precisely, to escape it. Escaping is context-dependent: if the same data is used in multiple places, a different escaping may be necessary for each. There are at least five distinct output contexts (HTML body, HTML attribute, JavaScript, CSS, and URL), and there is simply no way to universally escape the data for all of them.
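To illustrate the difference, here is a sketch that assumes a JDBC connection for the SQLi case and the OWASP Java Encoder (org.owasp.encoder.Encode) for the XSS case; the table, method names, and markup are made up for the example:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import org.owasp.encoder.Encode; // assumes the OWASP Java Encoder is on the classpath

    public class InjectionExamples {

        // SQLi: the fix is a parameterized query; the nickname is bound as data,
        // never concatenated into the SQL text.
        static boolean nicknameExists(Connection conn, String nickname) throws SQLException {
            String sql = "SELECT 1 FROM users WHERE nickname = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, nickname);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }

        // XSS: the fix is output encoding, and the encoder depends on where the value lands.
        static String renderProfileFragment(String nickname) {
            return "<div title=\"" + Encode.forHtmlAttribute(nickname) + "\">" // attribute context
                    + Encode.forHtml(nickname)                                 // HTML body context
                    + "</div><script>var nick='" + Encode.forJavaScript(nickname) + "';</script>"; // JavaScript string context
        }
    }

The same nickname needs a different encoder in each of the three places it appears, which is exactly why a single escape applied at the source cannot be correct for all of them.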

We could imagine a pathological case in which a single input value is used in one, and only one, HTML context, multiple times. In that case, we could conceivably escape at the source of the data. But if the data were ever used in a different context, it would need a different encoding. More importantly, even in this imagined case, the regression risk would simply be unacceptable.

Let’s say you have a user whose nickname is “Joe’s B&B.” It’s determined that an account management page is subject to XSS because the nickname is output without encoding. In addition to the security defect, the page probably also looks broken to this user, because the ampersand is treated as the start of an HTML character reference and the text may not render as intended. Still, that’s just a minor annoyance.

It’s very common for every page that displays user information to use the same model class. Escaping the data when populating the model would fix the XSS on the account management page. But what regressions would be introduced? A reasonable authorization measure for an account update might verify that the nickname of the logged-in user matches the nickname of the account being updated. But now that we’ve escaped the nickname, that comparison will always fail, because the nickname in the database is the unescaped version. This is a fairly simple example, but it illustrates the point quite well.
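A minimal sketch of that regression, again assuming the OWASP Java Encoder; the stored value and the comparison are hypothetical but representative of how a shared model gets reused:

    import org.owasp.encoder.Encode;

    public class NicknameRegression {
        public static void main(String[] args) {
            String storedNickname = "Joe's B&B"; // as persisted in the database

            // "Fix at the source": escape while populating the shared model.
            String modelNickname = Encode.forHtml(storedNickname); // e.g. "Joe&#39;s B&amp;B" or similar

            // Authorization check elsewhere that reuses the same model field...
            boolean authorized = modelNickname.equals(storedNickname);
            System.out.println(authorized); // false -- legitimate account updates now fail
        }
    }

The account management page stops being vulnerable, but every other consumer of the model now sees encoded data it never expected.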

Move toward the sink, not the source

As you move remediation further from the sink and closer to the source, the risk of regression increases dramatically. Therefore, in most cases, the “best fix” location is just before the tainted data reaches the sink.
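In code, that means keeping the raw value in the model and encoding at the moment it becomes HTML. A sketch, assuming the javax.servlet API and the OWASP Java Encoder (the same idea applies in a template engine with contextual auto-escaping):

    import java.io.IOException;
    import java.io.PrintWriter;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.owasp.encoder.Encode;

    // Encoding at the sink: the model keeps the raw nickname, and the value
    // is escaped only at the point where it is written into the HTML response.
    public class AccountPageServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String nickname = (String) req.getAttribute("nickname"); // raw, as stored
            resp.setContentType("text/html;charset=UTF-8");
            PrintWriter out = resp.getWriter();
            out.println("<p>Signed in as " + Encode.forHtml(nickname) + "</p>");
        }
    }

The change is local to the page that renders the data, so the blast radius of the fix is small and easy to test.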

Moving the fix closer to the source because you can “fix more bugs with less effort” is a false ROI that makes sense only in the minds of security “experts” who have never actually used a compiler to produce working software. It’s an appealing story, as it provides a seductive underestimation of the remediation effort. Unfortunately, the result is to place an impossible burden on already overworked development teams.

Rather than pursue this strategy, let’s explore some alternatives.

Remediate XSS by preventing XSS

First and foremost, get your static application security testing (SAST) running within the CI pipeline, not as an afterthought, and fix new security issues as they are introduced. This is the cheapest, easiest way to deal with the problem. Second, if an application was developed with no concern for security, retrofitting it is going to be expensive and time-consuming. It may be cheaper to either end-of-life the application or rewrite it from scratch with an actual security architecture. If an application doesn’t have good regression tests, factor the cost of writing them into that decision. If the app can’t be fixed or shut down, consider fixing only the subset of defects that can be found with penetration testing and using a runtime application self-protection (RASP) solution to help the application limp along until it can be replaced.

In conclusion, remediating security defects after the fact is expensive, difficult, and high-risk. The “best fix” location isn’t the one that involves writing the least code. It’s the one that involves the least cost. And writing code is considerably cheaper than debugging regressions. Use a secure software development cycle that prevents the introduction of security risks rather than engaging in desperate attempts to “bolt security on” afterward. And if you do find yourself with a security disaster, don’t be fooled by quick-fix solutions that end up being more expensive than proper remediation.
