Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Confusion Attacks: Exploiting Hidden Semantic Ambiguity in Apache HTTP Server! - 1467

Orange Tsai     Reference →Posted 1 Year Ago
  • The Apache HTTP server is built from modules, with 136 listed in the documentation and about half of those in common use. To the author, this was a bad code smell: a giant request_rec structure is passed around to every module, and if two modules ever disagree about the meaning of one of its fields, bad things can happen. That kind of disagreement is what this research is about.
  • The structure contains a field called filename that represents a filesystem path. However, some modules treat it as a full URL, which can lead to security issues: a ? in the path can be used to truncate entries. For instance, mod_rewrite lets sysadmins easily rewrite a path pattern with the RewriteRule directive, and supplying an encoded question mark causes the rewritten path to be truncated, resulting in access to an unintended file.
  • The other interesting filename-confusion issue is an ACL bypass. It's common to use the Files directive to require authentication for access to a file. Using the confusion with a URL-encoded question mark, one path gets verified while another is actually used: for instance, requesting admin.php%3Fooo.php makes the access check run against the ooo.php at the end, while admin.php is what actually gets served.
  • The next bug is crazy. When httpd processes a request with certain rewrite rules, it first tries the rewritten target as an exact spot on the filesystem, and only then falls back to the configured document root. Most of the time that first path doesn't exist, so it doesn't matter — but it means that if the prefix of a RewriteRule target is controllable, the entire filesystem can be accessed!
  • Well, sorta. Because the rewrite rule appends a suffix (like .html), we can only access files that match it. Additionally, Apache has built-in protection against accessing some files. The first primitive lets us truncate the path, though, combining into a super primitive. Using this bug, the author found they could disclose arbitrary source code.
  • Even though there are restrictions on what can be accessed by default, we can use gadgets. The LibreOffice file at /usr/share/libreoffice/help/help.html contains an XSS. Some libraries, such as WordPress plugins, could be used for LFI via their tutorial files. They mention a few other ways to exploit this, including abusing symbolic links.
  • In Apache, there are two directives that do the same thing: AddHandler and AddType. Under the hood, some magic dating back to 1996 allows both to work: when the handler field is empty, the content_type field is used as the module handler. This yields a new primitive — the ability to overwrite the function handler.
  • The first instance of this being exploited was in ModSecurity. When an error occurred while processing a path, it wasn't handled correctly and the Content-Type was overwritten. As a result, the wrong handler was executed, returning PHP source code instead of the executed result. This technique could be combined with other content-type changes as well.
  • Next, if an attacker can control the Content-Type header in the response, then ANY handler can be invoked. Even though this processing happens late, server-side redirects make it exploitable to hit any CGI implementation on the server. The author mentions an SSRF with controlled headers or CRLF injection as potential ways to do this.
  • How does this become exploitable? Getting an image file processed as a PHP script can quickly lead to RCE. mod_proxy leads to a full SSRF or direct access to Unix sockets. Finally, they found that the PEAR code included in PHP's Docker images can be leveraged for RCE.
  • At the end of the article, they say this is promising ground for more research. The author only focused on issues in a few impactful fields, but there may be other fields that cause as much havoc; the more complex a code base is, the more unique vulnerabilities are likely lurking there. Amazing research, as always, by Orange Tsai :)
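The two filename-confusion primitives above can be illustrated with a hypothetical configuration. The directives are real Apache directives; the rule patterns, paths, and file names are invented for illustration:

```apache
# Hypothetical rewrite: map /html/xxx to xxx.html on disk. Requesting
# /html/usr/share/doc/app/secret%3F (an encoded "?") rewrites to
# /usr/share/doc/app/secret?.html — and because some modules treat
# filename as a URL, the "?" truncates it to /usr/share/doc/app/secret.
RewriteRule ^/html/(.*)$ /$1.html

# Hypothetical ACL: requesting /admin.php%3Fooo.php makes the access
# check run against the trailing ooo.php (so this rule never fires),
# while the PHP handler truncates at the encoded "?" and serves admin.php.
<Files "admin.php">
    AuthType Basic
    AuthName "Admin Panel"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Files>
```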

0.0.0.0 Day: Exploiting Localhost APIs From the Browser - 1466

Avi Lumelsky - Oligo Security    Reference →Posted 1 Year Ago
  • Browsers can request any data via HTTP using JavaScript. From a website, it's possible to make requests to hosts on the local network, such as localhost. Should this be allowed? IP scanning and attacks on the LAN are very possible here.
  • All major browsers have CORS — but it only restricts reading response data, not sending the request. So Chrome introduced a standard called Private Network Access (PNA), which extends CORS to restrict the ability to send requests to private-network addresses.
  • PNA has a large list of addresses that fall into the private category. While researching this topic, they noticed that 0.0.0.0 was not in the list. Is this bad? 0.0.0.0 has multiple uses, but it commonly just means localhost.
  • Since requests can be made to 0.0.0.0, this completely bypasses PNA for localhost. Many local apps skip CSRF or authentication checks solely because they assume only the local user can reach them.
  • They found that an application called Ray used by developers could be exploited for RCE. Selenium Grid had a similar issue as well as PyTorch.
  • How do we fix this? PNA preflights: before making such a request, the browser sends an Access-Control-Request-Private-Network: true header, and to allow it the website must respond with Access-Control-Allow-Private-Network: true, similar to how CORS works. Good bug write-up and a good explanation of an incoming feature!
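A minimal sketch of why 0.0.0.0 behaves like localhost: a service bound to the wildcard address 0.0.0.0 (a common default for dev tools) is reachable via loopback — and on Linux and macOS, a browser request addressed *to* 0.0.0.0 likewise lands on the local machine. The service, banner, and port choice here are arbitrary:

```python
import socket
import threading

def serve_once(server_sock: socket.socket) -> None:
    """Accept one connection and send a banner, like a local dev API would."""
    conn, _ = server_sock.accept()
    conn.sendall(b"hello from a local service\n")
    conn.close()

# Bind to 0.0.0.0: "all interfaces", the wildcard many local apps default to.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))          # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The same service is reachable through loopback — so a page that is allowed
# to reach 0.0.0.0 effectively reaches localhost, sidestepping a PNA list
# that only names 127.0.0.1 and friends.
client = socket.create_connection(("127.0.0.1", port))
banner = client.recv(1024)
client.close()
server.close()
print(banner.decode().strip())
```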

Bypassing Rockwell Automation Logix Controllers’ Local Chassis Security Protection - 1465

Sharon Brizinov - Team82    Reference →Posted 1 Year Ago
  • ControlLogix 1756 is a series of programmable automation controllers from Rockwell for highly scalable industrial automation. The chassis serves as the enclosure for the various cards and their connections. It communicates using the Common Industrial Protocol (CIP).
  • When an operator at an engineering workstation wants to communicate with the PLC, traffic is routed over CIP to the CPU card via a network card in the same chassis. The chassis has a security feature called trusted slot that is designed to prevent untrusted network cards from injecting traffic into the backplane; the idea is that data from an untrusted network card will be refused.
  • In CIP routing, a path is the sequence of devices a message travels through, from a source device to a destination device. On the chassis, each slot has a unique path structure. Once a packet is on the chassis, checks ensure the originating network card is trusted, or the packet is simply dropped.
  • Since all slots are connected to the backplane and CIP supports path routing, a packet could be crafted to route through a trusted card FIRST, before reaching the CPU. By hopping through trusted slots before the final hop, the CPU believed the route was valid even though the packet originated from an untrusted card.
  • An interesting abuse of built-in mechanics within a protocol! A super fun bug within an important security feature. I wish they had included a path example, but that's okay.
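Since the write-up omits a path example, here is a rough sketch of what one could look like. CIP routing paths are built from (port, link-address) byte pairs, with port 1 conventionally meaning the backplane and the link address naming a slot; the slot numbers and the exact bounce sequence below are invented for illustration:

```python
def cip_route(*hops: tuple) -> bytes:
    """Encode a CIP routing path as (port, link/slot) byte pairs.
    Port 1 conventionally addresses the backplane; the link byte is the slot."""
    path = bytearray()
    for port, slot in hops:
        path += bytes([port, slot])
    return bytes(path)

BACKPLANE = 1
CPU_SLOT = 0        # hypothetical slot holding the CPU card
TRUSTED_SLOT = 3    # hypothetical trusted network card

# Direct route from an untrusted card: refused by the trusted-slot check.
direct = cip_route((BACKPLANE, CPU_SLOT))

# Bounced route: hop to a trusted slot first, then onward to the CPU, so
# the check sees the final hop arriving from a trusted card.
bounced = cip_route((BACKPLANE, TRUSTED_SLOT), (BACKPLANE, CPU_SLOT))
print(direct.hex(), bounced.hex())  # -> 0100 01030100
```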

Listen to the whispers: web timing attacks that actually work - 1464

James Kettle - PortSwigger    Reference →Posted 1 Year Ago
  • James Kettle starts off with a graph of timing attacks he's actually pulled off vs. ones he's only read about: the differences he's exploited in practice are around 1,300ms, while the literature describes detecting differences as small as 5ns. For this research, he was curious how far timing attacks could be taken against real black-box targets.
  • The timing of a request has many points where it can change: latency, jitter, internal latency, internal jitter and processing delay. In 2020, Timeless Timing Attacks showed that network jitter can be avoided by using HTTP/2 to send two requests concurrently and measuring the difference. Sadly, the first request is still processed slightly earlier, but this can be fixed by adding an arbitrary delay to the first one to match the second.
  • Building on the earlier single-packet attack research, they implemented this method for timing attacks: send MOST of the data for both requests up front, then a small final piece right at the end. The problem is that some servers parse a request as soon as the header frames have been received, and the data and header frames can't be interleaved.
  • Server-side noise is a major problem too: caching, server-specific behavior and much more. To maximize signal and reduce noise, push the target down the slower code path — for instance, use multiple headers with the same prefix. He claims ORMs and GraphQL are great targets for timing expansion techniques.
  • After putting this together, they had a machine. But it was too powerful! It finds so many timing differences that you can misunderstand what you're seeing. For instance, the author thought they had found a hidden parameter named exec; in reality, the WAF was doing extra parsing on that parameter.
  • Now what? They used it for parameter discovery and tried it against server-side injection issues. They found a fully blind SQL injection, but it was a dupe; in retrospect, they note that truly powerful injection attacks don't need timing tricks. Many of their findings were useful for recon rather than exploitation, though.
  • They ended up using this to discover Nginx misconfigurations around the Host header leading to scoped SSRF. Additionally, they used it to bypass firewall restrictions. Finally, they detected when user-controlled headers were forwarded to a backend where the proxy would otherwise change or set them itself.
  • Overall, I think this is a step in the right direction for exploitation. However, detecting the tiny differences needed to attack something like a password reset token still doesn't seem possible, sadly. Kettle's understanding of HTTP is bonkers at this point!
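The jitter-cancellation idea behind paired requests can be simulated in a few lines. This is a toy model, not PortSwigger's methodology: two code paths differ by 1ms of server work, network jitter dwarfs that difference when requests travel separately, but when both requests share one packet the jitter is common to the pair and cancels in the difference. All the timing constants are invented:

```python
import random
import statistics

random.seed(7)

SLOW_PATH = 5.0   # hypothetical server-side work for request A, in ms
FAST_PATH = 4.0   # hypothetical server-side work for request B, in ms

def sequential_pair() -> float:
    """Two requests sent separately: each picks up independent network jitter."""
    a = SLOW_PATH + random.gauss(0, 10)
    b = FAST_PATH + random.gauss(0, 10)
    return a - b

def single_packet_pair() -> float:
    """Both requests ride the same packet: the network jitter is shared and
    cancels in the difference, leaving only small server-side noise."""
    shared = random.gauss(0, 10)
    a = SLOW_PATH + shared + random.gauss(0, 0.2)
    b = FAST_PATH + shared + random.gauss(0, 0.2)
    return a - b

seq = [sequential_pair() for _ in range(2000)]
paired = [single_packet_pair() for _ in range(2000)]
# The 1ms signal is buried in the sequential samples but obvious in the
# paired ones, whose spread is orders of magnitude tighter.
print(f"sequential  diff stdev: {statistics.stdev(seq):6.2f} ms")
print(f"same-packet diff stdev: {statistics.stdev(paired):6.2f} ms")
```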

Beyond TCP's 65535 Byte Limit - 1463

Flatt Security - RyotaK    Reference →Posted 1 Year Ago
  • James Kettle published research on exploiting race conditions more reliably by putting requests in the same packet. However, the author of this post ran into a limitation of this: the size allowed by TCP and other layers. They looked into how to bypass this limitation.
  • The single-packet attack has a roughly 1,500-byte limit, which means only 20-30 requests can be sent at a time. This limitation comes from Ethernet's maximum frame size of 1,518 bytes. What's weird, though, is that a single IP packet supports up to 65,535 bytes — how? IP fragmentation divides the original packet across multiple Ethernet frames, yet they are processed together, since the packet isn't handed up the stack until all fragments have arrived.
  • Even with this, the packet size is limited. How else can we expand it? TCP is ordered by sequence number. By sending segments in reverse order, nothing is delivered to the application until the first segment arrives, so all of the data is processed at the same time — genius! Using this, payloads can be effectively unlimited in size without worrying about timing issues.
  • HTTP/2 throws a wrench into this, sadly. There is a maximum number of concurrently open streams, under 250 in most default settings for Apache, Nginx and Go, but unlimited for Node.js and some others.
  • How do we make this happen? First, they configured iptables not to send RST packets. Then they wrote their own client to control TCP segment ordering. Using this, it's also possible to exploit limit-overrun issues against rate limiting.
  • The tool is a little rough right now: it doesn't support HTTPS, TCP window updates, or any proxy tools. Overall, an awesome post to hype up the possibility of race conditions and other issues.
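The sequence-number trick can be modeled with a toy receiver, assuming nothing about the real tool: a TCP endpoint only delivers bytes to the application once they are contiguous, so withholding the *first* segment keeps every later segment buffered, and sending it last releases the whole payload at once:

```python
class ToyTcpReceiver:
    """Buffers out-of-order segments; delivers only contiguous data, like TCP."""
    def __init__(self) -> None:
        self.next_seq = 0                 # next byte offset the app expects
        self.buffer = {}                  # seq -> bytes, waiting for the gap
        self.delivered = []               # what the application has seen

    def on_segment(self, seq: int, data: bytes) -> None:
        self.buffer[seq] = data
        # Flush everything that is now contiguous, in order.
        while self.next_seq in self.buffer:
            chunk = self.buffer.pop(self.next_seq)
            self.delivered.append(chunk)
            self.next_seq += len(chunk)

rx = ToyTcpReceiver()
requests = [b"GET /a", b"GET /b", b"GET /c"]   # stand-ins for racing requests
offsets = [0, 6, 12]                           # byte offsets of each segment

# Send every segment EXCEPT the first; nothing reaches the application yet.
for seq, data in list(zip(offsets, requests))[1:]:
    rx.on_segment(seq, data)
assert rx.delivered == []

# The withheld first segment arrives: all requests are released together.
rx.on_segment(0, requests[0])
print(rx.delivered)
```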

Evmos Precompile State Commit Infinite Mint - 1462

Jason Mattyser    Reference →Posted 1 Year Ago
  • My co-worker Jason just published a super sick bug in the main implementation of EVM integration in Cosmos. Under the hood, execution is done with Geth, but integrating it with Cosmos correctly is complicated, with many different areas of state to consider.
  • The StateDB in Geth contains a single journal of all state changes in the current transaction. When a new execution context is created, such as on a function call, a snapshot is taken; this records a new revision with an index into the journal. In case of a revert, all changes past a particular journal index can be undone.
  • In Cosmos, Commit() on a cached store is what writes the data into permanent storage. It is crucial to ensure that the Cosmos storage and the Geth journal line up. However, during Evmos-specific precompiles (such as the Staking and Distribution modules), it's possible to desync the two.
  • The steps are outlined in the post well with a simple looking yet specific PoC contract:
    1. Contract calls an external function within a try/catch block.
    2. A new contract is created for a contract that will transfer ETH.
    3. A call is made to the Distribution precompile contract. This will trigger the Commit(). In particular, the balance of the ETH is saved in the target contract.
    4. Revert the call within the try/catch block. The rollback done by the EVM's state is not accurate now! Even though the contract shouldn't exist, it's still in storage and holds a balance.
    5. Withdraw the balance from the target contract that should no longer exist.
  • In point 4 above, they have some notes on WHY this happens. The rollback on the revert is not reflected in permanent storage. Since the contract creation happened after the snapshot, its dirty mappings are removed by the revert; and since only dirty accounts are touched at commit time, nothing undoes the created contract. As a result, when the actual state is updated at the end of execution, the target contract is valid and alive, even though it should have been destroyed.
  • This vulnerability required very deep insight into how state is handled by the EVM execution and by Evmos. Overall, a solid vulnerability that was hard to wrap my head around but is amazing nonetheless.
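The steps above can be reduced to a stripped-down sketch (the names are mine, not Evmos's): a journal-based state supporting snapshot/revert sits next to a "permanent" store that a mid-call Commit() writes into. Reverting the journal after the commit leaves the committed balance behind:

```python
class JournaledState:
    """Toy EVM-style state: a live dict plus an undo journal for reverts."""
    def __init__(self) -> None:
        self.live = {}        # current in-transaction state
        self.journal = []     # list of (addr, previous value or None)
        self.permanent = {}   # the "Cosmos store" a Commit() flushes into

    def snapshot(self) -> int:
        return len(self.journal)

    def set_balance(self, addr: str, value: int) -> None:
        self.journal.append((addr, self.live.get(addr)))  # record old value
        self.live[addr] = value

    def revert(self, snap: int) -> None:
        # Undo every journal entry made after the snapshot, newest first.
        while len(self.journal) > snap:
            addr, old = self.journal.pop()
            if old is None:
                del self.live[addr]
            else:
                self.live[addr] = old

    def commit(self) -> None:
        # Like the mid-precompile Commit(): flush live state to permanent
        # storage, with no matching journal entry to ever undo it.
        self.permanent.update(self.live)

state = JournaledState()
snap = state.snapshot()                 # try/catch frame begins
state.set_balance("new_contract", 100)  # contract created and funded
state.commit()                          # precompile call flushes to storage
state.revert(snap)                      # the call reverts...
# ...the journal no longer knows the contract, but permanent storage does.
print(state.live.get("new_contract"), state.permanent.get("new_contract"))
```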

Abusing Subtle C++ Destructor Behavior for a UAF - 1461

Jack Dates - RET2    Reference →Posted 1 Year Ago
  • Pwn2Own has an automotive category for hacking cars. They decided to tackle the CHARX system because A) the product was very different from other similar products and B) the firmware was easy to obtain. It runs an embedded Linux on 32-bit ARM with SSH enabled for easy access.
  • Much of the code on the system was compiled Python, but they did find the Controller Service Agent, which was written in C++. It communicated between the various CHARX units and managed AC charging and a vehicle-to-grid protocol, with comms over UDP, TCP and the HomePlug Green PHY protocol.
  • The first vulnerability they found was a null pointer dereference in the HomePlug Green PHY protocol. The parsing code for the minimal implementation was reading the size of a structure at bytes 4 and 5 instead of 5 and 6. As a result, parsing goes haywire and eventually leads to a null pointer dereference. Off-by-one strikes again!
  • The second bug is more interesting. While using GDB, they found that exit handlers were causing crashes. In the C++ binary, many exit handlers are implicitly added by the compiler for static objects; since these objects are global, exit handlers must tear them down. Additionally, the binary registers several signal handlers as well.
  • The destruction order of static objects is effectively unspecified when not explicitly controlled. The authors give a toy example where the destructor of one object type runs after that of another it depends on. With this ordering, if one object's destructor interacts with the already-destroyed other, it leads to a UAF!
  • In the Controller Agent code, this exact bug occurs in a more complicated way: a list has already been destroyed but is still being accessed, leading to a UAF! Since we want this destructor path to fire at will, the null pointer dereference is a perfect bug to trigger it. In the second post, they go through the exploitation of this bug.
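The hazard is a C++ one, but the shape of it can be mimicked in a few lines of Python (an analog of the authors' toy example, not their code): when teardown order is arbitrary and object A's cleanup touches object B, running B's cleanup first leaves A poking at reclaimed state:

```python
class Resource:
    """Stand-in for a static C++ object; 'freed' mimics reclaimed memory."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.freed = False

    def destroy(self) -> None:
        self.freed = True

class Consumer:
    """Its destructor-equivalent touches another object on the way out."""
    def __init__(self, dep: Resource) -> None:
        self.dep = dep

    def destroy(self) -> str:
        if self.dep.freed:
            return "use-after-free!"  # in C++ this is undefined behavior
        return "clean shutdown"

shared = Resource("session_list")
consumer = Consumer(shared)
# Exit handlers for statics can run in an order the programmer never chose:
shared.destroy()           # the dependency is torn down first...
print(consumer.destroy())  # ...then the consumer dereferences freed state
```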

A SquareSpace Retrospective - 1460

samczsun    Reference →Posted 1 Year Ago
  • A large number of crypto companies had their domains stolen. The only similarity between the domains was that they were all SquareSpace domains migrated over from Google Domains after the acquisition. This article explains the incident response that was done.
  • When migrating ownership of a domain, the domain owner and any collaborator were granted the domain manager permission on SquareSpace. Since most Google Domains users did not have SquareSpace accounts, SquareSpace pre-emptively mapped each Google email to a SquareSpace account; once that email holder logged in, they had access to the domain.
  • SquareSpace has many login options, such as Continue with Google, Facebook and regular email logins. Since the migration was coming from Google, the developers likely assumed that all of the domains would be owned by Gmail accounts. The threat actor stole a lot of domains and planted plenty of backdoors in the systems for when they got caught. SEAL coordinated the recovery of many domains and helped mitigate these backdoor techniques.
  • The author of the post has a few notes for security teams...
    • First, defense in depth matters. YubiKey 2FA and monitoring with alarms are great things to have.
    • Second, re-evaluate the attack surface of your system when external things change.
    • Third, minimize special cases in your system; the assumptions you made before in security can break with a small change like this one.
    • Fourth, emails need to be validated. Skipping such a simple check is what enabled all of this.
  • Overall, a great post on a really big incident for the industry, with some great lessons along the way.

Akash Network Authentication Bypass - 1459

Chainlight    Reference →Posted 1 Year Ago
  • Akash is a decentralized cloud computing platform built for the Cosmos ecosystem. It appears to offer a product similar to EC2 instances on AWS.
  • There are four main parties involved:
    • Blockchain layer: Handles payments of tokens and used for governance.
    • Application layer: Intermediary between buyers and sellers. Sellers want people to use their resources and buyers need resources.
    • Provider layer: The instances where computational resources are located. A daemon from Akash integrates with providers to give users access.
    • User layer: Where users buy the resources.
  • On the Akash network, authentication is done solely through TLS certificates. Here's the flow:
    1. A user creates a certificate and submits it to the blockchain.
    2. The user initiates an mTLS connection with the provider.
    3. Provider verifies the client certificate to ensure that it's valid. They check the common name, subject and serial number.
    4. The cert is added to the certificate pool of valid users that can access the system.
  • You know what's really important about certificates? The signatures! In this case, the fingerprint is never compared to confirm that the presented client certificate and the registered certificate are the same. So a self-signed certificate with the spoofed fields is good enough to bypass this.
  • To do this, we generate a self-signed root CA with the target's address in the common name, set the serial number to match the target's certificate, and use this cert to run arbitrary commands on the instance, giving us free access to everything.
  • How did they fix this? Check the fingerprints! I would guess that since the design intentionally allows self-signed CAs, they didn't consider the ability to spoof all of the fields. Great find by the team at Chainlight!
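The broken check can be sketched schematically (field names and the dict representation are illustrative, not Akash's actual code): the provider compares mutable certificate fields against the on-chain record but never the fingerprint, so any self-signed certificate with copied fields passes:

```python
import hashlib

def broken_verify(presented: dict, registered: dict) -> bool:
    """Mimics the flaw: compares spoofable fields, never the fingerprint."""
    return (presented["common_name"] == registered["common_name"]
            and presented["subject"] == registered["subject"]
            and presented["serial"] == registered["serial"])

def fixed_verify(presented: dict, registered: dict) -> bool:
    """The fix: the certificate bytes (hence the fingerprint) must match."""
    fingerprint = lambda cert: hashlib.sha256(cert["der"]).hexdigest()
    return fingerprint(presented) == fingerprint(registered)

registered = {"common_name": "akash1victim", "subject": "akash1victim",
              "serial": 42, "der": b"victim-cert-bytes"}
# Attacker self-signs a fresh cert and copies every field visible on-chain.
spoofed = {"common_name": "akash1victim", "subject": "akash1victim",
           "serial": 42, "der": b"attacker-cert-bytes"}

print(broken_verify(spoofed, registered))  # True: the attacker is let in
print(fixed_verify(spoofed, registered))   # False: fingerprints differ
```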

AmberWolf Uncovers Critical Vulnerabilities in Cato Client - 1458

Amber Wolf    Reference →Posted 1 Year Ago
  • Cato Networks has a client application that provides access to resources from the Internet, cloud, SaaS apps or the data center. The authors of this post looked into the application and found many bad bugs.
  • The application registers a custom URI handler. One parameter for this, external_browser, feeds into calls to Process.Start. Since this is user controllable, it can be used to start an arbitrary application. When specifying something like notepad.exe, though, the actual executable can't be found — the code is trying to open a URL in the given browser, so a URL parameter gets appended to the lookup. Luckily for them, providing a %00 (NULL) strips that trailing parameter from the call and allows launching an arbitrary application.
  • Sadly, Process.Start cannot be passed arguments here. However, an SMB share can host a malicious executable that we control! By setting this up and passing in the UNC path, they achieved RCE on the device. This same block of code can also be reached from the authentication process.
  • Besides the RCE, they found two local privilege escalation vulnerabilities. On startup, the program searches for (and fails to find) an OpenSSL config. Since the location doesn't exist, an attacker can create the folder and supply the config themselves. The engines section can then be used to dynamically load an arbitrary DLL into the process, giving SYSTEM-level code execution once the service restarts. They found a method to trigger an exception and force a process restart to make this easier to do.
  • The next privilege escalation bug again involved folder permissions. During the client download, code executes from the /Windows/TEMP directory, and the program tries to run a non-existent msiexec.exe from that location. By writing a file with this name there, the authors got code execution in the context of SYSTEM.
  • Are we done? No! The CatoClient.exe process communicates with the highly privileged winvpnclient.cli.exe process. An IPC handler for installing a root certificate was exposed but never actually used by the CatoClient process. So simply submitting this command would add the certificate to the system store, which is really bad. Sometimes unused functions lie dormant and contain real issues!
  • Overall, a fun series of bugs in the client. Windows hacking isn't my thing, but the issues were explained well enough for me to follow.
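The OpenSSL engine trick follows a well-known pattern: when OpenSSL loads a config file from a directory the attacker can create, an engine section can point at an arbitrary library that gets loaded into the privileged process. A minimal illustrative config — the section names and DLL path below are hypothetical, not taken from the write-up:

```ini
openssl_conf = openssl_init

[openssl_init]
engines = engine_section

[engine_section]
evil = evil_engine

[evil_engine]
engine_id = evil
; Loaded into the privileged process at startup -- attacker-controlled DLL.
dynamic_path = C:\attacker\evil_engine.dll
init = 1
```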