Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

A Realistic Breakdown of Optimism - Part 1 - 1567

Trust Security    Reference →Posted 1 Year Ago
  • TrustSec has contributed to the Optimism ecosystem through contests, audits, and two paid bug bounties. In this post, they talk about the security of Optimism and some of the more interesting bugs they discovered. There is a lot of background on the one bug they go through in the blog post.
  • L2s need a way to communicate with the L1s and vice versa. In Optimism, going from the L2 to the L1 requires a Merkle proof from the trusted L2 state root. When going from Ethereum to Optimism, OptimismPortal is used which emits an event that is translated to an ETH minting call. Both of these give the capability to send arbitrary data between the chains.
  • There is a limitation to what's above though: calls can become stuck. So, the CrossDomainMessenger (XDM) is where the other bridging functionality, like the ERC20/ETH bridge and the ERC721Bridge, is implemented. XDM supports resending failed messages via mappings of successful and failed messages.
  • On the revert flow, there is a very important check... If the XDM call failed, then the ETH has already been supplied. Additionally, the successfulMessages mapping prevents the same withdrawal from being executed more than once.
  • When making a call from the L2, the default L2 sender is 0xDEAD. The variable is xDomainMsgSender. On a cross-chain message, this is set to the calling user. In effect, this acts as a reentrancy protection as well. At the end of the call, the storage value is set back to the default.
  • The audit they did was for the SuperchainConfig contract upgrade. This was a simple contract that allowed designated roles to pause or unpause a core contract. During this upgrade, they manually switched off the initialized bit of the contract to allow for recalling initialize() instead of using the reinitializer modifier. Such a small line of code seems so simple. So, what's wrong?
  • The xDomainMsgSender is 0xDEAD at all times except during the withdrawal process. Within the initialization code (which gets retriggered), this value is set to 0xDEAD but actually defaults to zero. Normally, this would be fine (since it should be a NOP) but that's NOT true in the context of the withdrawal code!
  • Upgrades are permissionless once enough signers have agreed to the upgrade. Here's the flow of the attack:
    1. Store a failed delivery in the smart contract's failedMessages mapping.
    2. Wait for the upgrade to be in the mempool and ready to go.
    3. Reattempt the failed delivery:
      1. Perform the upgrade with the parameters. Now, xDomainMsgSender is set to 0xDEAD.
      2. Reenter into relayMessage, passing in the withdrawal request. This will succeed because the DEFAULT 0xDEAD address was set back.
      3. xDomainMsgSender is set to the L2 sender again when it shouldn't be.
    4. A double withdrawal has occurred because of the double setting of the xDomainMsgSender global variable. The stolen amount comes from any failed withdrawals.
  • The setting of the xDomainMsgSender global variable doesn't seem important until you have the better context of what it does and why it's important. It's crazy how this reentrancy/replay protection was touched by this simple upgrade code. What an awesome find!
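
The attack flow above can be condensed into a toy Python model (my own sketch, not Optimism's Solidity; relay(), initialize() and the payouts list are simplified stand-ins for relayMessage(), the re-run initializer, and the actual ETH transfers):

```python
DEFAULT_SENDER = "0xdead"
payouts = []                     # each entry = one (possibly duplicate) payout

class Messenger:
    """Toy XDM: the sentinel value doubles as the reentrancy/replay guard."""
    def __init__(self):
        self.x_domain_sender = DEFAULT_SENDER
        self.successful = set()

    def initialize(self):
        # The buggy upgrade path: re-running init writes the default
        # sentinel back into storage, even in the middle of a withdrawal.
        self.x_domain_sender = DEFAULT_SENDER

    def relay(self, msg_id, sender, target):
        # Guard: sentinel must be at its default and message not yet relayed.
        assert self.x_domain_sender == DEFAULT_SENDER, "reentrancy blocked"
        assert msg_id not in self.successful, "replay blocked"
        self.x_domain_sender = sender
        target(self)                          # external, attacker-controlled call
        self.x_domain_sender = DEFAULT_SENDER
        self.successful.add(msg_id)

reentered = False

def attacker(m):
    global reentered
    payouts.append("1 ETH")                   # the withdrawal pays out here
    if not reentered:
        reentered = True
        m.initialize()                        # reset the guard via the upgrade
        m.relay("w1", "attacker", attacker)   # reenter: the guard passes again

m = Messenger()
m.relay("w1", "attacker", attacker)           # one message, paid out twice
```

Without the initialize() call inside the external call, the reentrant relay() would trip the sentinel assertion and the message could only pay out once.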

"Invariant inversion" in memory-unsafe languages- 1566

sha1lan - pacibsp    Reference →Posted 1 Year Ago
  • The author begins the post with an invisible C bug. After staring at the code for a while, I couldn't find it. The bug is simply that a boolean could have a value other than 0 or 1. Why does this matter though?
  • In memory-unsafe languages like C, the invariants used to uphold memory safety are programmer-created invariants. By breaking these assumptions of the program, safe-looking code can be broken via subtle memory unsafety issues. This is the main concern of the post.
  • Why does having a boolean that is not a 0 or a 1 matter? Because it's a boolean, the compiler assumes that this byte will only ever be a 0 or a 1. Because of this, it will make some optimizations around this. When it's a non-binary value, this breaks the logic of the optimization and leads to memory corruption in the program.
  • In a typical C codebase, you would look for memory-unsafe accesses in things like keep[index] that actually perform the access. The author compares this bug to reviewing JIT compilers. They try to enforce invariants early on in the program, and then the rest of the code assumes that the invariant is true. If the invariant is ever violated, then you have a memory corruption bug.
  • According to the author, the similarity is that the memory safety violation does not come from the exact line of code like with a bad access. Instead, it's the violation of an invariant that another part of the code relies on further down the line.
  • This is the invariant inversion. Languages can create chains of invariants leaning on other invariants leaning on other invariants... until it's a crazy mess and web of invariants. Because of this, breaking a single one of these upper-level chained invariants can have much larger consequences than you realize. Unfortunately, managing this web of invariants in your head is impossible because it becomes a huge graph quickly.
  • In the case of the bool-typed variables only having a 0 or a 1, they consider this an inverted invariant because it's "higher level" than memory safety yet it is relied upon for "lower level" safety properties later.
  • Why does this all matter? It's a new way of finding bugs! Currently, we are asking ourselves "where is the memory unsafety occurring?", which is only relevant in languages like C. Instead, we should be asking ourselves "where was the first violation of an invariant that is relied upon?" This different view of the world seems more reliable since it's finding the safety bug first rather than backtracing to where the bad access could occur. Great post!
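
The post's C example doesn't port to a memory-safe language, but the shape of the argument does. Here's a hypothetical Python sketch (my own, not the author's) of an invariant enforced once at a boundary and assumed everywhere else, so breaking it corrupts state far from the offending line:

```python
class Ring:
    """Toy buffer. Invariant: self.pos is always a valid index into buf."""
    def __init__(self, size):
        self.buf = [0] * size
        self.pos = 0

    def seek(self, pos):
        # The invariant is enforced exactly once, at this boundary...
        if not 0 <= pos < len(self.buf):
            raise ValueError("bad position")
        self.pos = pos

    def poke(self, value):
        # ...and every other method just assumes it still holds.
        self.buf[self.pos] = value

ring = Ring(4)
ring.seek(1)
ring.poke(7)       # fine: the invariant holds
ring.pos = -1      # the "bool == 2" moment: invariant broken somewhere else
ring.poke(9)       # no error on this line -- it silently corrupts buf[3]
```

In C, the equivalent broken invariant (a bool holding 2) can escalate to out-of-bounds access; here Python's negative indexing keeps it "safe" but still silently wrong, which is exactly the invariant-first view of the bug.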

Immunefi Contest Results- 1565

Immunefi    Reference →Posted 1 Year Ago
  • Immunefi has contests, similar to C4 and Sherlock. Uniquely, they publish all of their findings for people to see. I just found this and wanted to have a public record of it for my own sanity later.

New era of slop security reports for open source- 1564

Seth Michael Larson    Reference →Posted 1 Year Ago
  • Seth Larson is a security report triager for CPython, pip and many other open source projects, mainly in the Python ecosystem. Recently, they got a large uptick in the amount of bad reports. These were LLM-hallucinated, low quality, and overall spammy. What's so hard is that many of these look legitimate at first glance!
  • Responding to security reports is an expensive operation for projects. It requires a lot of time to understand an issue and to see if it's relevant or not. This leads to confusion, stress, and frustration. The author then goes through what programs, reporters and maintainers can do.
  • For platforms, like HackerOne, it comes down to incentives: being able to "name and shame" repeat offenders and banning folks with too many false positives. Additionally, removing public recognition can be helpful. They mention preventing new users from reporting issues, but I disagree with this approach since some people just publish on the platform where they happen to find a bug.
  • The list for reporters is a lot longer. Effectively, it comes down to: stop being an idiot and make sure it's a real bug before reporting. The only thing of note is coming with a patch alongside a security issue.
  • For maintainers, the author talks about putting the same amount of effort that the reporter put in. If you receive a report that is spam or AI generated, then give zero effort. If it's garbage then they won't respond. If it's real then admit your mistake and move on.
  • When trying to audit whether it's going to be low-quality or not, look for a few things:
    • If the account has no public identity, no public reports of value or multiple invalid/bad reports, then it's likely spam.
    • Is the vulnerability in the code usage itself, and does the report even include a PoC?
  • Most people are acting in good faith :) Some people are just new. Overall, a good post on responding to bug bounty programs.

The Full Story of CVE-2024-6386: Remote Code Execution in WPML- 1563

WordpressSec    Reference →Posted 1 Year Ago
  • WordPress Multilingual Plugin (WPML) has 1 million active installations. It's a premium plugin that provides automatic language translation features.
  • Templates have become more popular in recent years. They are pre-built web pages with placeholders in the code that take input for customization of the web page. The embedded templating code can also have logic like loops, if statements and much more in it.
  • WordPress has a feature called Shortcode Blocks that function similarly to templates. An example is adding an image to the page - it will handle all of the custom HTML formatting for you. In WordPress, custom Shortcodes can be registered and then used in the program.
  • The WPML plugin added three custom shortcodes: language switcher, selector widget and selector footer. The language switcher shortcode mixed its content into a Twig template before it was evaluated.
  • Unfortunately, it appears that the input from the user was being double evaluated, resulting in template injection. Although the article shows the code, it does not discuss the reason for this vulnerability occurring or the fix for it, which is a bummer. With template injection, {{7 * 7}} will be evaluated as 49 when returned, which is how the author found it.
  • In WordPress, all single and double quotes are escaped, which made exploitation difficult. They found that some functions could be called without parameters that returned strings. Then, they could use a string slicing method in order to get the character that they wanted from the function call. Using this, they were able to generate arbitrary strings as inputs to execute bash commands. A good and impactful bug!
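
To make the two tricks concrete - the {{7 * 7}} probe and quote-free string building - here is a toy template engine in Python (an eval()-based stand-in for an unsandboxed Twig, not WPML's actual code):

```python
import re

def render(template):
    # Toy template engine: evaluates {{ ... }} expressions the way an
    # engine with no sandbox would. NOT Twig -- just enough to show the idea.
    return re.sub(r"\{\{(.*?)\}\}",
                  lambda m: str(eval(m.group(1))), template)

# The classic probe from the post: only a template engine turns this into 49.
probe = render("Hello {{7 * 7}}")

# With quotes escaped, build strings by slicing characters out of strings
# that zero-argument calls already give you. str(dict) == "<class 'dict'>",
# so indexes 9 and 8 yield 'i' and 'd' -- the string "id", no quotes needed.
no_quotes = render("{{ str(dict)[9] + str(dict)[8] }}")
```

The same slicing idea, applied to quote-free PHP/Twig function calls, is how the author assembled arbitrary command strings.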

I’m Lovin’ It: Exploiting McDonald’s APIs to hijack deliveries and order food for a penny- 1562

Eaton Works    Reference →Posted 1 Year Ago
  • First, I love the fries animation they add for the cursor - I got a good kick out of this. The blog post is about bug hunting on McDonald's McDelivery service.
  • Digging through the website, they noticed it was Angular. They pulled the routes from the minified JavaScript. With no idea what the IDs looked like on the requests, they just tried 0 and 1. To their horror, they were able to get card information of a random cart on the website with this! They found that the order IDs were sequential by playing around with it. A similar issue was found on the ratings API too.
  • They noticed that when a user goes onto the website, they're given a "Guest" JWT. To me, the proper handling of "Guest" users is complicated. You want the guest to be able to buy things and have their orders be trackable without logging in but it also needs to only be accessible to them. It's a hard problem to solve.
  • The same IDOR on the order ID worked on the map for the order, receipts, and submitting feedback. This seemed to be all over the website.
  • The payment flow for an order worked by clicking add to cart and then redirecting to the payment processor Juspay. When going to checkout as a POST request, it was creating the order. If you tried to modify the order information, it wouldn't work because there is an RSA signature generated on the server side. This prevents tampered requests or state issues.
  • Besides the POST request, there is a PUT request for modifying the order. Unfortunately, this endpoint was vulnerable to a mass assignment vulnerability. Using this, they could update the price and many other fields of the order. Crazy!
  • This same bug could be used to steal people's orders. It was possible to change the destination address of another cart and then reassign the order to your account, but only after they paid of course. This requires some crazy timing. But, given the other bugs with incrementing IDs and information disclosure, it seems fairly reasonable to pull off.
  • The final bug was an issue with scope on JWT tokens. On the McDelivery admin panel, a single API would accept consumer website JWTs. This API had KPI reports on it, leading to a serious information disclosure.
  • Overall, a really fun read! I enjoyed the narrative nature of the post and the notes of complexity on the various components that they tested. The vulnerabilities were nothing crazily fancy but just required some knowledge of the application. For their hard work, they received $240, which is criminally undervalued.
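
The mass assignment bug on the PUT endpoint can be sketched in a few lines (hypothetical field names, not McDonald's actual API):

```python
def update_order_vulnerable(order, payload):
    # Mass assignment: every client-supplied field is copied onto the order,
    # including fields the client should never control (like price).
    order.update(payload)

def update_order_fixed(order, payload):
    # The usual fix: an explicit allow-list of client-editable fields.
    ALLOWED = {"items", "address"}
    order.update({k: v for k, v in payload.items() if k in ALLOWED})

evil = {"items": ["fries"], "price": 0.01}     # attacker sets their own price

order_a = {"id": 1001, "items": [], "price": 2.99}
update_order_vulnerable(order_a, evil)          # price is now a penny

order_b = {"id": 1002, "items": [], "price": 2.99}
update_order_fixed(order_b, evil)               # price survives the update
```

With the allow-list, server-controlled fields like price simply never come from the client.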

Exploiting Reflected Input Via the Range Header- 1561

Attack Ships on Fire    Reference →Posted 1 Year Ago
  • The author of this post decided to take a look at the Range header. In HTTP, the Range header is used to request only a portion of a resource instead of the whole thing. For instance, you can ask for bytes 2-6 of the response.
  • The other insight is that most browsers will happily render 206 Partial Content responses. To me, this is fairly surprising, since a partial response should just be fetched as data and not be rendered.
  • Putting these two concepts together, if an attacker can get a particular content range to be used in the request with the Range header, the reflected input can be used to get XSS! The post focuses on getting a header injection vulnerability on the request in order to exploit this.
  • I had personally never seen this trick so I thought it was pretty fun. It's weird to me that modern browsers will render the 206 response but every other part of it makes sense.
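
A minimal model of the trick (my own sketch; a real attack also needs the header-injection primitive the post describes): the attacker picks a byte range that carves the reflected payload out of its harmless surrounding context, and the browser renders the 206 body as the entire page.

```python
def serve_range(body: bytes, range_header: str):
    # Minimal model of an HTTP range responder: handles a single
    # "bytes=start-end" range (inclusive) and returns 206 Partial Content.
    unit, _, spec = range_header.partition("=")
    if unit != "bytes":
        raise ValueError("unsupported range unit")
    start_s, _, end_s = spec.partition("-")
    start, end = int(start_s), int(end_s)
    return 206, body[start:end + 1]

# Reflected input that is harmless in context (here, inside an HTML comment):
page = b"<html><!-- q=<script>alert(1)</script> --></html>"

# Carving out exactly the reflected bytes strips the neutralizing context.
status, partial = serve_range(page, "bytes=13-37")
```

The slice returned for bytes 13-37 is just the script tag, with the comment markers gone, so a browser that renders the 206 body executes it.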

How an obscure PHP footgun led to RCE in Craft CMS- 1560

Asset Note    Reference →Posted 1 Year Ago
  • PHP is full of security footguns. Many of them have been fixed, such as 'abc' == 0. However, there are some that still remain. In the case of Craft CMS, a popular PHP based CMS, there are still some footguns lurking.
  • PHP can be used both on the command line and as a web server. So, PHP exposes the global variables $_SERVER['argc'] and $_SERVER['argv']. If you run it as a web server, then PHP will take argv from the query string, if you have enabled this! In the case of PHP, this functionality was on in the default Docker container.
  • In the case of Craft CMS, there is some code that runs for parsing CLI options. If the argv entry is empty, then it skips this though. Normally, from a website, it would be empty. Like many programs, there is a lot of option processing when running the application.
  • Here's the crazy bug though: $_SERVER['argv'] can be controlled from the web! This allows an attacker to control the configuration of the running website. For example, --configPath=/aaa will set the configuration path to a non-existent location, remotely.
  • At this point, the vulnerability feels like it could lead to RCE, but the path needs to be found. They tried some easy wins but no dice. The path that looked interesting was a request to get a file remotely. If a PHP file could be included, then this would mean RCE. However, there was a file_exists check before this that prevented using HTTP, PHP and many other types of stream wrappers.
  • Upon further inspection, the ftp:// URI supports file existence checks. Using a PHP file here is blocked by the allow_url_include security feature. But, a template CAN be included! So, they created an FTP server with anonymous access and a file called index.twig with {{7*7}} in it. When Craft loads the file, it gets evaluated!
  • Craft CMS attempts to sandbox the Twig template renderer. So, simple calls to system are denied. They found that using {{ ['system', 'id'] | sort('call_user_func') }} bypassed the verification - presumably because the sort filter invokes its comparator via call_user_func('system', 'id'), which runs system('id') - though I don't fully understand how it works.
  • Understanding esoteric portions of languages and frameworks can be useful, but it's sometimes hard to see the payoff until something like this happens. Great find!
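
The entry point can be simulated in a few lines (a Python re-creation of PHP's register_argc_argv behavior as I understand it, for illustration only):

```python
def php_web_argv(query_string: str):
    # With register_argc_argv enabled, PHP populates $_SERVER['argv'] for
    # web requests by splitting the raw query string on '+' characters.
    # An empty query string yields no argv entries at all.
    return query_string.split("+") if query_string else []

# A CLI-style option smuggled in via the URL, as in the Craft CMS bug:
argv = php_web_argv("--configPath=/aaa")
```

So a request like /index.php?--configPath=/aaa hands the application a CLI-style flag it never expected to receive from the web.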

Arc Browser UXSS, Local File Read, Arbitrary File Creation and Path Traversal to RCE- 1559

Renwa    Reference →Posted 1 Year Ago
  • The Arc browser had just announced their bug bounty program. As a result, the author decided to search through it for some low-hanging fruit. Quickly, they found some interesting endpoints: arc://boost/v2/js and arc://boost/v2/css.
  • The functionality is a nice UI for creating boosts - effectively an extension with some more special configuration. Looking at the paths, they found the /play endpoint. It took base64-encoded data that was being converted to JSON, which was used for configuring the boosts.
  • The installed boost UI can have custom styling. This means that it's possible to change the contents via CSS to look like one boost but actually be another. Given that installation requires a click, this trickery can be used to confuse a user into installing it.
  • When the boost is added, its information is stored in a folder as several files. In the JSON that was provided, you control the paths of the various files being stored. Naturally, these were vulnerable to directory traversal attacks on the file write. So, this gave them an arbitrary file write vulnerability.
  • LaunchAgent plist files are run whenever a user logs in or the system starts. By adding a file to this location, arbitrary commands will be executed. When the system restarts after the file write, the attacker has arbitrary command execution on the system.
  • After doing this research, they found that the /play endpoint was not mentioned anywhere. To the author, this indicated that the functionality was never meant for public use. To patch this, the functionality for the legacy boost builder was removed. They got a nice 10K bounty for reporting the vulnerability.
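
The standard defense against this kind of traversal is to resolve the user-supplied name and verify it stays under the intended directory. A hedged sketch (my own, not Arc's code):

```python
import os

def safe_join(base: str, user_name: str) -> str:
    # Resolve the user-supplied name against the base directory, then check
    # that the result is still inside it -- the usual traversal defense.
    base_real = os.path.realpath(base)
    full = os.path.realpath(os.path.join(base_real, user_name))
    if os.path.commonpath([full, base_real]) != base_real:
        raise ValueError("path traversal attempt")
    return full
```

With this check, a boost file named ../../Library/LaunchAgents/evil.plist is rejected instead of landing in a LaunchAgents folder.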

Compromising OpenWrt Supply Chain via Truncated SHA-256 Collision and Command Injection- 1558

RyotaK - Flatt Security    Reference →Posted 1 Year Ago
  • OpenWrt is open source router firmware. While the researcher was updating their router, they noticed that there was a service called attended sysupgrade that builds and hosts the firmware remotely. So, they decided to take a look into it.
  • Why look at this though? Building a firmware image from user-provided packages is very dangerous because of the amount of control a user has over this process. Proper isolation must be done as well.
  • The first goal was getting code execution within the context of the container. There is a Makefile that is used to build the router firmware. The make command will expand shell variables before executing a command. Since the package name is controlled by the user and set as an environment variable, this can be used to execute arbitrary bash commands on the server.
  • To me, this is somewhat by design though. When building user controlled code, there is going to be a way to execute arbitrary code. Of course, this is why the code runs in a container. Regardless, the make command expanding the variables before executing as bash was interesting to me!
  • When determining if a build is unique or not, the method generates a hash of the request. While reviewing this code, they noticed that the package hash being used was truncated to only 12 hex characters - just 48 bits of entropy!
  • 48 bits is vulnerable to brute force attacks. The idea is to create a cache key collision by brute forcing inputs until the 12 truncated characters match those of another build. Once successful, the build will overwrite the other build, resulting in users pulling the wrong firmware.
  • They wanted to prove that this could be brute forced. They attempted to write their own OpenCL program to brute force using the GPU, but it was very slow. To get partial hash match support, they made a quick patch to hashcat. They played around with the settings until it was running fast and using the proper characters. Eventually, it was running at 18 billion hashes per second, with a collision being found within an hour.
  • Once they found the issue, the OpenWrt team fixed it and released a version in 3 hours. Overall, a super interesting vulnerability and exploitation of a crypto-usage issue. Good finds!
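
The weakness is easy to reproduce at a smaller scale. This sketch (mine, not the OpenWrt code) truncates SHA-256 to 6 hex characters instead of the service's 12 and finds a collision with a birthday search - the real attack needed on the order of 2^48 targeted attempts on GPUs, but the principle is the same:

```python
import hashlib
from itertools import count

def truncated(data: bytes, hex_chars: int = 6) -> str:
    # The bug in miniature: a full SHA-256, but only the first few hex
    # characters are kept as the cache key. (The real service kept 12.)
    return hashlib.sha256(data).hexdigest()[:hex_chars]

def find_collision(hex_chars: int = 6):
    # Birthday search: remember every prefix seen until two distinct
    # inputs share one. Roughly 2^(4 * hex_chars / 2) attempts expected,
    # so a 6-hex-char (24-bit) key collides after a few thousand tries.
    seen = {}
    for i in count():
        data = b"request-%d" % i
        key = truncated(data, hex_chars)
        if key in seen and seen[key] != data:
            return seen[key], data
        seen[key] = data

a, b = find_collision()
```

Two different inputs now share a truncated key even though their full SHA-256 digests differ, which is exactly the cache-poisoning condition.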