Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

The HamsterWheel: An In-Depth Exploration of a Novel Attack Vector on the Sui Blockchain- 1197

Certik    Reference →Posted 2 Years Ago
  • Move-based blockchains are still pretty rare; only Sui and Aptos use the language to my knowledge. Move by itself is not completely safe from cross-contract alterations and other weird problems. To address this, a static verifier runs at compile time.
  • Sui implements a distinct memory model compared to the original Move implementation by using a customized version of the Move VM. Upon adding these features, Sui decided to add custom verifiers to ensure the safety of the programs being executed such as the bounds checker. All of this is achieved with the high level concept of the abstract interpreter framework designed for doing security analysis on bytecode.
  • The abstract interpreter builds a control flow graph and knows which states a basic block may jump to prior to executing it. Without loops, validating the flow is a simple linear pass. With loops, the states at the join points must be merged.
  • The Sui blockchain uses an object-centric global storage model, which differs from the original Move design. Objects can have a unique ID with the key ability. A verifier is run to ensure that the ID is unique per object. So, where's the bug at? Still more background!
  • The verifier integrates with the Move Abstract Interpreter in the AbstractState::join() function. This function merges and updates state values iteratively, like we mentioned before. For each local variable in the incoming state, it compares the incoming value to its current value. If the two values are unequal, the changed flag is set, an AbstractValue::join() call is performed, and the analysis iterates again.
  • There is an order of operations problem here though. AbstractState::join() may indicate a change due to the differing new and old values but the state value after the update might remain the same. This occurs because the AbstractState is processed before the AbstractValue. By triggering this state, it's possible to initiate an infinite analysis loop.
  • This infinite loop would bring the network to a complete halt, and recovering would have required a hard fork of the software. As a result, this is a critical severity issue. To fix the problem, Sui swapped the order of operations between the AbstractValue and AbstractState updates. On top of this, the verifier can now time out as well, mitigating the impact of these types of bugs in the future.
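The ordering flaw is easy to see in miniature. Here is a minimal Python sketch (the names and the merge lattice are invented for illustration; this is not the real Sui/Move verifier code) of a fixed-point loop that decides `changed` by comparing the old and incoming values *before* the merge, next to the corrected ordering:

```python
def merge(old, new):
    # A toy lattice join where "Top" absorbs everything: merge(Top, x) == Top.
    return "Top" if old == "Top" else new

def join_buggy(state, incoming, max_iters=100):
    """'changed' is decided from old != incoming BEFORE the merge runs.
    When merge(old, new) == old, nothing ever changes, yet 'changed'
    stays true forever -- the infinite analysis loop described above."""
    iters, changed = 0, True
    while changed and iters < max_iters:
        iters += 1
        changed = False
        for var, new in incoming.items():
            old = state.get(var)
            if old != new:
                changed = True                # decided before the merge...
                state[var] = merge(old, new)  # ...which may be a no-op
    return iters

def join_fixed(state, incoming, max_iters=100):
    """Patched ordering: decide 'changed' from the post-merge value."""
    iters, changed = 0, True
    while changed and iters < max_iters:
        iters += 1
        changed = False
        for var, new in incoming.items():
            old = state.get(var)
            merged = merge(old, new)
            if merged != old:
                changed = True
            state[var] = merged
    return iters
```

The buggy variant spins until the iteration cap (a real verifier without a cap never terminates); the fixed variant settles in one pass.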

Escaping Parallels Desktop with Plist Injection- 1196

pwn.win    Reference →Posted 2 Years Ago
  • Parallels Desktop is a virtualization platform on macOS. Obviously, being able to escape this would be a huge deal! They started by looking into Toolgate, the protocol for communication between the guest and the host.
  • Toolgate requests are sent to the host by writing to a physical address that corresponds to a specific I/O port. There are restrictions on which Toolgate messages userland processes can send to the host using Parallels Tools; the only operations allowed are file system operations on the host.
  • Shared Applications are a Parallels feature that allow the opening of files on a Mac in a guest application, and vice versa. A file extension and URL scheme can be related to this application, with shortcuts even being created. Parallels handles syncing of running guest apps to the host. This is done as follows:
    1. Parallels Tools detects that an application is launched in the guest.
    2. A toolgate request (TG_REQUEST_FAVRUNAPPS) is made to the host to notify it of the app.
    3. If a helper exists, then the helper app is launched. If not, a new bundle is created.
    4. The app bundle is created from a template, which is filled with information supplied by the guest. This information is written to several areas, including the Info.plist of the application.
  • This sounds like a classic attack: a privileged entity takes in information and writes it to a sensitive location. If the input is not properly sanitized, it could be possible to inject malicious content. It turns out that two of the fields were not sanitized of XML document data. As a result, the guest user could inject arbitrary data into the plist file.
  • Why is this useful? The initial attack vector the author went after was the LSEnvironment key, setting DYLD_INSERT_LIBRARIES to force the loading of an arbitrary dylib. Still, this isn't enough for execution just yet, so they went looking for an arbitrary file write vulnerability to plant a dylib themselves and then execute it. The best place to look for these bugs would be a shared folder service.
  • The shared folders are implemented using the Toolgate functionality as well. The only operations available here are opening, reading, and writing files. When performing these operations, there are validations that the path doesn't contain ../, symlinks, or anything else suspicious. It looks perfect. Except, there is a time-of-check/time-of-use (TOCTOU) bug here that allows for the circumvention of this check.
  • Using this bug, an attacker can read or write to arbitrary files on the host! To bring these bugs together, we can use the arbitrary file write to create a dylib file of our choosing at a known location. Then, we can use the first bug to execute this dylib file. Damn, that's a pretty hype chain!
  • A novel plist injection technique chained into a classic TOCTOU bug. Good finds and good chaining! I wonder if there are other bugs in this part of the ecosystem; my guess is that there are.
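To see why unsanitized plist substitution is so dangerous, here is a hedged Python sketch. The template, field name, and payload below are invented, but the injection shape matches the attack described above: a guest-controlled string closes its `<string>` element and smuggles an LSEnvironment dictionary into the generated Info.plist.

```python
import plistlib

# Illustrative only -- not Parallels' actual template or field names.
TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleName</key>
    <string>{app_name}</string>
</dict>
</plist>
"""

def render_plist(app_name: str) -> bytes:
    # Vulnerable: the guest-supplied value is substituted with no XML escaping.
    return TEMPLATE.format(app_name=app_name).encode()

# A guest "app name" that breaks out of its element and injects the
# LSEnvironment / DYLD_INSERT_LIBRARIES vector from the post.
payload = (
    "Calculator</string>"
    "<key>LSEnvironment</key>"
    "<dict><key>DYLD_INSERT_LIBRARIES</key>"
    "<string>/tmp/evil.dylib</string></dict>"
    "<key>Padding</key><string>junk"   # re-balance the surrounding XML
)

# The rendered plist parses cleanly and contains the injected dictionary.
info = plistlib.loads(render_plist(payload))
```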

The OverlayFS vulnerability CVE-2023-0386- 1195

Datadog Security Labs    Reference →Posted 2 Years Ago
  • The overlay file system (OverlayFS) allows a user to merge file systems together into a single unified file system. An OverlayFS setup has distinct layers: lower, upper, and the overlay that unifies them. When a file from a lower directory is modified through the overlay, it is copied up to the upper directory. Writes to the upper layer never propagate down to the lower one; this appears to be a design feature for isolation. Changes made through the overlay are only ever reflected in the upper directory.
  • Recap: lower->overlay, upper<->overlay and lower->upper. When the kernel copies a file from the overlay file system to the upper directory, there is no validation of the owner of this file within the current namespace. Using this oversight, a lower directory could smuggle a SETUID binary into the upper directory using OverlayFS.
  • How could this be exploited? The steps are below:
    1. Create a FUSE file system. This will allow us to create a binary owned by root with the setuid bit on it.
    2. Create a new namespace.
    3. Create a new OverlayFS mount with the lower directory within the FUSE FS from the previous step.
    4. Trigger a copy of our SETUID binary from the overlay FS to the upper directory. This can be done by simply creating the binary. We now have a setuid binary under the upper directory, even though this was from the OverlayFS setup.
    5. Exit the user namespace from step 2 to execute the SETUID binary!
  • The vulnerability allows for a privilege escalation to root by not handling namespaces correctly. This is why defense-in-depth with limiting syscalls and other things is important. Good writeup!
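The missing check can be modeled in a few lines. This toy Python model (not kernel code; the names and the namespace check are simplified guesses at the shape of the logic) shows a copy-up that blindly preserves a setuid-root file presented by an attacker-controlled FUSE lower layer, next to a patched variant:

```python
from dataclasses import dataclass

S_ISUID = 0o4000  # setuid mode bit

@dataclass
class File:
    name: str
    uid: int   # owner recorded by the lower fs (FUSE: fully attacker-chosen)
    mode: int

def copy_up_vulnerable(lower_file, ns_uid_map):
    # CVE-2023-0386 behaviour (sketch): metadata copied verbatim, with no
    # check that the owner uid is actually mapped in the user namespace.
    return File(lower_file.name, lower_file.uid, lower_file.mode)

def copy_up_fixed(lower_file, ns_uid_map):
    # Patched behaviour (sketch): strip setuid when the owner uid has no
    # valid mapping in the current user namespace.
    if lower_file.uid not in ns_uid_map:
        return File(lower_file.name, lower_file.uid,
                    lower_file.mode & ~S_ISUID)
    return File(lower_file.name, lower_file.uid, lower_file.mode)

# The attacker's FUSE fs presents a root-owned setuid binary in the lower
# dir; the unprivileged namespace only maps the attacker's own uid (1000).
evil = File("exploit", uid=0, mode=0o4755)
ns_map = {1000}
```

In the vulnerable model, the root-owned setuid file lands in the upper directory intact, which is exactly what step 4 above relies on.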

Bypass IIS Authorisation with this One Weird Trick - Three RCEs and Two Auth Bypasses in Sitecore 9.3- 1194

AssetNote    Reference →Posted 2 Years Ago
  • Sitecore is a CMS written in .NET. They pwned this in 2019 but wanted to see if any new bugs had been added or if they missed anything big years ago. To start with, they do a large amount of recon to look at the functionality without authentication. Additionally, they looked into the Web.config to see how the routing was working.
  • The Sitecore APIs had a standard MVC setup, where the executed route was api/sitecore/{controller}/{action}. Digging in, they found they could instantiate nearly anything, with only a few restrictions: the object must be a .NET type that implements the IController interface. A super large attack surface!
  • The class Sitecore.Mvc.DeviceSimulator.Controllers.SimulatorController had the action Preview with the parameter previewPath. This was calling Server.Execute under the hood with the parameter that we control. This allows for arbitrary internal redirects within the application itself, without fancy 302s. Damn, that creates a pretty neat authorization bypass!
  • Server.Execute had no restrictions on where it could redirect to. All it had to be was something within the webroot. This function does not rerun the HTTP pipeline (including auth), allowing for bypasses of the IIS setup. Using this, they were able to leak the Web.config file by reading backups.
  • Sitecore's dependency Telerik UI had a known deserialization CVE that requires knowledge of the Telerik encryption keys. Since we have those keys from the leaked Web.config, we can exploit the deserialization bug to get RCE.
  • While auditing the code base for more things, the path /sitecore/shell/Invoke.aspx caught their eye for obvious reasons. It allows for the arbitrary instantiation of a class and execution of any method, with restrictions: no static items were allowed, a user had to be authenticated, and it could only take string parameters. They decided to look for sinks for RCE gadgets.
  • Eventually, they came to DeserializeObject() within the Telerik UI. They followed this back up to find a method that sets this value within a class! Now, they can send in a deserialization method once again to get code execution. They wanted this to be unauthenticated though. A third similar deserialization issue exists as well.
  • Anonymous users couldn't hit the shell endpoint directly, but the endpoint allowed ANY authenticated identity; even when the user wasn't fully authenticated, the page was still run and the execution still happened. Within the EXM mailing list functionality, the user is set to the Renderer user. They used the Server.Execute issue from before to hit this code and trigger the second deserialization attack mentioned above. Neat!
  • I had no idea about the internal redirects causing so many problems. It is super interesting seeing the subtle flaws that can be built with .NET applications. Good read!
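The internal-redirect problem can be modeled compactly. This toy Python model (the routes and handlers are invented; the real stack is IIS/ASP.NET) shows why a Server.Execute-style dispatch bypasses authorisation: the auth rules run once against the URL the client requested, and the internal dispatch jumps straight to the target handler without re-entering the pipeline.

```python
PROTECTED_PREFIXES = ("/sitecore/admin",)

def execute_internal(path):
    # Like Server.Execute: dispatch directly to the handler, no auth re-check.
    return HANDLERS[path]({})

HANDLERS = {
    # Reachable route that internally executes an attacker-chosen path.
    "/api/sitecore/Simulator/Preview":
        lambda req: execute_internal(req["previewPath"]),
    # Route the URL-based auth rules are supposed to protect.
    "/sitecore/admin/backup.aspx":
        lambda req: "contents of the Web.config backup",
}

def handle_request(path, authenticated, params=None):
    # The pipeline only ever checks the externally requested URL.
    if path.startswith(PROTECTED_PREFIXES) and not authenticated:
        return "401 Unauthorized"
    return HANDLERS[path]({"previewPath": params} if params else {})
```

Hitting the protected path directly is blocked, but routing through the "preview" handler reaches it unauthenticated.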

Uniswap's Financial Alchemy- 1193

Dave White    Reference →Posted 2 Years Ago
  • Uniswap is one of the best Automated Market Makers in the DeFi space. Because of their innovation, they had to create solid mathematical models in order to ensure everyone makes a profit. How did they do this?
  • In Uniswap, there is the concept of a pool: a collection of assets contributed by various individuals. The users who provide value or tokens to a pool are liquidity providers (LPs). When another user wants to perform a trade, they trade one asset for another within the pool. The LPs earn rewards in the form of fees charged to the users performing trades.
  • This comes with a problem: the price of the assets in the pool moves randomly. Additionally, the model assumes that all traders are informed, meaning that the price gets arbitraged to the proper level. In other words, every liquidity provider would seem to lose money from impermanent loss on their assets.
  • Market makers demand a lower price to buy than to sell; they directly profit when assets don't move in price and buys and sells arrive evenly. The arbitrageur comes in and corrects the market price toward the real-world price, stealing value along the way. It seems like the LPs would always lose money. So, where's the magic?
  • The concept of volatility harvesting comes into play here. It is possible to outperform any static portfolio of two assets by periodically rebalancing them. When the market gets arbitraged, the LPs are effectively paying a fee to the market to have their portfolio rebalanced. By redistributing the portfolio over time (instead of keeping it static), it tracks reality more accurately.
  • The next concept is volatility drag. Because betting returns compound multiplicatively, the results can be devastating. For instance, say we start with $100 and have an equal chance of the asset going up by 75% or dropping by 50%. This sounds like a wonderful deal. However, in reality, it is very hard to recover from a loss.
  • The expected value of a single bet is $50/2 + $175/2 = $112.50 in isolation. But if we consider compounding, it's different: a 50% loss followed by a 75% gain leaves us with 87.5% of the value, and the same in the other direction. The effects of compounding on gaining back wealth are devastating. This is the idea behind the Kelly Criterion optimal betting strategy.
  • So, what's the lesson? Keep some of your money in reserve! Don't put all of the eggs in one basket, as they say. Instead of betting all of your money, only bet a portion of it. This way, your positive-edge bet will win in the long run. For instance, keeping $75 in reserve and betting $25 (rebalancing back to that 25% stake each round) changes the outcome: a 75% gain then a 50% loss leaves about $103.91, a profit instead of the $87.50 left by betting everything. This is because we kept some of the value out of harm's way on the losing step.
  • In Uniswap, they learned that making the fee as cheap as possible to incentivize rebalancing is important. It pays to be an LP vs. simply holding onto the asset in cash if the fee is not zero and the volatility is between (2 * sqrt(fee))/sqrt(3) and 2 * sqrt(fee). What's going on here? If the asset is too volatile or doesn't move at all, you're better off keeping the asset. Within that middle zone, though, we can overcome volatility drag and make a profit from it.
  • This is an interesting post on the finances of the market. I would love to learn more about market making and how the math works behind this in the future. Good read!
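The volatility-drag arithmetic above is easy to verify. A short Python example (the 25% stake and the per-round rebalancing are my assumptions, chosen to match the keep-$75/bet-$25 illustration):

```python
def all_in(wealth, returns):
    # Bet the whole bankroll every round.
    for r in returns:
        wealth *= (1 + r)
    return wealth

def fractional(wealth, returns, stake=0.25):
    # Keep (1 - stake) in reserve and rebalance to `stake` each round.
    for r in returns:
        wealth = wealth * (1 - stake) + wealth * stake * (1 + r)
    return wealth

up_down = [0.75, -0.50]   # one +75% round, one -50% round
# all_in(100, up_down)     -> 100 * 1.75 * 0.5 = 87.5  (volatility drag)
# fractional(100, up_down) -> 118.75, then 103.90625   (a net gain)
```

Despite the per-round expected value of $112.50, the all-in compounder ends below its starting wealth, while the rebalanced fractional bettor ends above it; that gap is the value that rebalancing harvests.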

LPE and RCE in RenderDoc: CVE-2023-33865, CVE-2023-33864, CVE-2023-33863- 1192

Qualys    Reference →Posted 2 Years Ago
  • RenderDoc is a graphics debugger that allows for quick and easy introspection of graphics applications. It supports many different APIs such as Vulkan, D3D11, OpenGL and more. This is a write-up of three vulnerabilities in RenderDoc and full exploits for them.
  • librenderdoc.so is LD_PRELOADed into the target application using a library load function. On startup, it creates the directory /tmp/Renderdoc/ and opens a log file in this location in append mode. However, there is no validation of who owns that file, in case a malicious user wins the race to create it first.
  • The data written is partially controllable by sending data to the TCP debugger server. Sadly, each log line carries a mandatory header string at the front that makes traditional configuration files not viable targets for escalation. How did they get around this? There's a call to fgets() within the systemd processing that only reads 512 bytes at a time. By sending a very long string, we can get a line that consists only of our data.
  • An additional way to escalate privileges: using the truncation mentioned above, they write to .config/user-dirs.defaults with SYSTEMD=.config/systemd to create a systemd directory in a user-controlled location. By writing a configuration file into this directory, code execution is trivial to achieve. To bypass the header issue again, the authors abuse a difference in delimiters (\r) to add their own lines.
  • The second vulnerability is a heap-based buffer overflow caused by unexpected sizes when processing the client name. This overflow does not seem very useful at first, but these people are wizards! When a thread of RenderDoc starts up, glibc malloc allocates a new heap for the thread within an mmaped section of memory. This is always 64MB, aligned in 64MB groups, and mprotected for extra security. This section is close to the libraries, but there is a gap in memory.
  • The authors used a technique that I've documented here. In glibc malloc, mmap chunks carry a special bit. When free() is called on an mmap chunk, the chunk isn't put into a free list; it's literally unmapped with munmap(). Arbitrarily mapping and unmapping memory is an incredibly powerful primitive. The munmap and mmap calls are the attack method, but there is a very crazy strategy to it.
  • They have a text diagram explaining the setup. Using the vulnerability from before on the 64KB buffer, they are able to corrupt another mmap chunk directly ahead of it, updating its size to be extremely large so that the munmap() call on it succeeds. The goal is to punch a hole of exactly 8MB+4KB, which is the size of a thread stack and its guard page.
  • With the gap ready to go, we can allocate our data into that location that we want. This is done by simply connecting to the server, which creates a new thread in this gap, then disconnecting for this to be reused. After this, a large allocation of the client name (without triggering the vulnerability) will overlap with this section. I don't fully understand why this is the case but I'm assuming it's weirdness with munmapping memory.
  • With a long-lived connection whose client name data is readable and overlaps the stale thread stack, we have a great primitive for reading and writing data. First, they force a new stack into this area. By doing this, a bunch of libc addresses, stack addresses and much else can be leaked. After this, they use it to overwrite a saved return address (RIP) on the thread stack and hijack control flow into a large ROP chain. It's weird that this works given stack canaries, though.
  • The final vulnerability is a fairly straightforward integer overflow that leads to a large amount of data being written to a small buffer. Overall, an amazing and innovative post. The shout out to me was appreciated and made my day!
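The fgets() truncation trick is worth a small simulation. This Python sketch (the header string and exact sizes are illustrative, not RenderDoc's real log format) shows how reading a long line in 511-byte pieces makes the attacker's tail land at the start of its own "line", free of the log-header prefix:

```python
FGETS_SIZE = 512  # fgets(buf, 512) returns at most 511 bytes per call

def fgets_lines(data: bytes):
    """Yield 'lines' the way a naive loop around fgets(buf, 512) would:
    a line longer than 511 bytes comes back in several pieces."""
    out, i = [], 0
    while i < len(data):
        nl = data.find(b"\n", i, i + FGETS_SIZE - 1)
        if nl == -1:
            out.append(data[i:i + FGETS_SIZE - 1])  # truncated piece
            i += FGETS_SIZE - 1
        else:
            out.append(data[i:nl + 1])
            i = nl + 1
    return out

header = b"RDOC 001234: "          # log prefix the attacker cannot remove
payload = b"INJECTED=value"        # line the attacker wants parsed alone
# Pad so the payload starts exactly at the 511-byte read boundary.
padding = b"A" * (FGETS_SIZE - 1 - len(header))
log_line = header + padding + payload + b"\n"

pieces = fgets_lines(log_line)
# pieces[0] is header + padding; pieces[1] is purely attacker data.
```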

Striking Gold at 30,000 Feet: Uncovering a Critical Vulnerability in Q Blockchain for $50,000- 1191

Blockian    Reference →Posted 2 Years Ago
  • Q (lolz) is a proof-of-stake, EVM-compatible blockchain. Its native currency is the Q token, which is used for voting, staking and much more.
  • The voting mechanism has four components:
    1. A proposal is created.
    2. Voters lock their Q Tokens.
    3. Voters cast their votes on the proposal.
    4. The proposal is either accepted or rejected.
  • The _vote() function counts votes proportional to the amount of tokens the voter has. After enough time, the proposal is either accepted or rejected.
  • To use a Q token for voting, the token is locked. Users can delegate their voting power to other users as well. The voting power of a user is calculated based upon the quantity of Q tokens that have been locked until the end of the proposal voting period.
  • The vulnerability lies in how the votes are counted. Using a particular flow, we can get tokens to be counted twice.
    1. User A delegates their Q tokens to User B.
    2. User B votes on a proposal, incorporating the voting power delegated from User A.
    3. User A announces the unlocking of their Q Tokens.
    4. User A votes on the same proposal with the same Q tokens being counted twice.
  • Why does announcing the unlock make this possible? It seems like such a weird flow! The logic does not consider the case where a delegate has already voted when the delegator decides to unlock their tokens. At that point, it is assumed the person will withdraw their tokens; but they are still able to vote with them.
  • Voting bugs are always fun! Double-counting bugs in voting are terrible and compromise the ecosystem. The write-up could have been clearer on WHY this happens, instead of the teaser they included.
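Since the write-up is light on the WHY, here is my own toy Python model of the double-count flow (class and method names are invented; the real logic lives in Q's Solidity contracts). The bug shape: announcing an unlock clears the delegation, restoring the owner's voting power, without reducing the delegate's already-cast vote.

```python
class Voting:
    def __init__(self):
        self.locked = {}        # user -> tokens locked
        self.delegated_to = {}  # user -> delegate (None after announce)
        self.votes = {}         # proposal -> total counted weight
        self.voted = set()      # (proposal, user) pairs

    def lock_and_delegate(self, user, amount, delegate):
        self.locked[user] = amount
        self.delegated_to[user] = delegate

    def power(self, user):
        # Own tokens count only while not delegated away; inbound
        # delegations count toward the delegate.
        own = self.locked.get(user, 0) \
            if self.delegated_to.get(user) is None else 0
        inbound = sum(a for u, a in self.locked.items()
                      if self.delegated_to.get(u) == user)
        return own + inbound

    def vote(self, proposal, user):
        assert (proposal, user) not in self.voted
        self.votes[proposal] = self.votes.get(proposal, 0) + self.power(user)
        self.voted.add((proposal, user))

    def announce_unlock(self, user):
        # BUG (sketch): clearing the delegation restores the owner's power,
        # but the delegate's already-counted vote is never reduced.
        self.delegated_to[user] = None
```

With 100 locked tokens, delegate B votes with weight 100, A announces the unlock, then A votes with the same 100 tokens: the proposal counts 200.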

Strategy v2 Burn Bug Post Mortem- 1190

Alberto Cuesta Canada    Reference →Posted 2 Years Ago
  • The Yield Protocol is a fixed-rate borrowing and lending protocol on Ethereum. As demonstrated by the name "Yield", getting yield from the assets provided is an extremely important part of this protocol.
  • With ERC20 liquidity provider (LP) tokens, the mint() and burn() functions are the common way to add liquidity to and remove it from the protocol. mint() creates LP tokens from the asset tokens provided; burn() destroys the LP tokens and gives back the original asset tokens. These tokens represent portions of the pool rewards.
  • In the burn() function, all of the user's tokens are burned. Then, based upon the amount of tokens burned and their share in the pool, it gives them the underlying tokens back. The code for this is below:
    // Burn every strategy token the contract currently holds.
    uint256 burnt = _balanceOf[address(this)];
    _burn(address(this), burnt);

    // Vulnerable: pool.balanceOf() is the *live* pool token balance, which a
    // donation can inflate; the fix reads the cached balance instead.
    poolTokensObtained =
      pool.balanceOf(address(this)) *
      burnt /
      totalSupply_;
    
  • The attacker can donate a large amount of tokens to inflate the balance of the pool. This leads to the pool sending more tokens to a user than it should. Crazily enough, these donated funds are not even lost! The attacker can call mint(), which uses the difference between the live balance and the cached balance. So, inflating the amount of tokens sent to the attacker doesn't cost them anything.
  • The article has some interesting insights into the development process. First, cache-based contracts are known to be vulnerable to donation attacks if the developer is not careful. The author mentions going through the YieldSpace-TV project and validating that every single use of balanceOf() was not vulnerable.
  • This new feature was audited, but the bug was missed. They mention that the complexity of this bug warranted a full code-based audit; time pressure led to an internal audit instead. In this case, the bug bounty program saved the day, which is amazing! Having multiple layers like this prevents major hacks from happening.
  • Once the vulnerability was discovered, the protocol decided to use the eject() function to take all of the funds out. They learned a few lessons from this warroom. First, having a pause() function would have allowed them to explore their options without an attack being viable. Second, the contract is not upgradable and relies on the eject() functionality to recover funds. Having the ability to upgrade contracts would have made restoring the protocol much easier.
  • Overall, an amazing post into the world of bug bounties, handling issues and protocol design. The fix is a single line change to use the cached version instead of the balance of the pool.
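The donation attack is easiest to see with toy numbers. This Python sketch (all values invented; the real code is the Solidity above) pairs a burn() keyed to the live balance with a mint() that credits the difference against a cache, so the donation inflates the payout and then comes straight back:

```python
class Strategy:
    def __init__(self, pool_balance, total_supply):
        self.pool_balance = pool_balance  # live pool tokens held (balanceOf)
        self.cached = pool_balance        # cached balance used by mint()
        self.total_supply = total_supply  # strategy LP tokens outstanding

    def donate(self, amount):
        # A plain transfer raises the live balance but not the cache.
        self.pool_balance += amount

    def burn(self, lp_tokens):
        # Vulnerable (matches the snippet above): payout is computed from
        # the *live* balance, which the donation just inflated.
        out = self.pool_balance * lp_tokens // self.total_supply
        self.total_supply -= lp_tokens
        self.pool_balance -= out
        return out

    def mint(self):
        # Rough sketch: credits the caller for balanceOf() - cache,
        # i.e. exactly the donated amount, minting LP tokens for it.
        credited = self.pool_balance - self.cached
        self.cached = self.pool_balance
        self.total_supply += credited
        return credited
```

Burning 100 of 1000 LP tokens against a 1000-token pool fairly pays 100; donate 1000 first and the same burn pays 200, while mint() hands the 1000-token donation right back as LP tokens.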

PoC for libssh Auth Bypass - CVE-2023-2283- 1189

Kevin Backhouse    Reference →Posted 2 Years Ago
  • SSH is used by everyone to log in to servers. Finding a vulnerability in its authentication process would be catastrophic, and that is exactly what the author found here! In this case it is libssh, not OpenSSH, meaning that we cannot simply log into other people's servers.
  • The function pki_verify_data_signature is used during the public key authentication check. In particular, it's checking to see if we've provided the proper signature to authenticate. At the beginning of the function, the rc (return code) is set to SSH_ERROR in order to prevent accidentally returning the improper value in case of a jump to the end.
  • However, one of the later function calls sets the variable while doing a hash comparison check; the idea was to reuse rc across various calls. But this comes with a problem: if code returns rc assuming it still holds the original default error value when it was actually set to a success value, we can spoof a success! In several places, there is a goto that makes this assumption. A good find for code snippets is here. But in what cases?
  • There is one directly before the signature verification! How do we trigger this error path? The function is trying to allocate an object via malloc, which would only fail under extreme memory pressure. So, we need a memory leak or something similar to trigger this.
  • Kevin's PoC generates a large amount of memory pressure by sending a large number of service requests that require zlib compression. By not reading the server's replies, the data is kept in memory. Even though this isn't hard to do, it's complicated to make the server run out of memory at exactly the right time on this tiny 72-byte allocation.
  • Analysis of the memory state at this point did not work very well. So, instead of doing further work, they simply embraced the chaos: they kept triggering out-of-memory errors in the wrong locations, so they ran the same PoC over and over until it worked! By bombarding the service, an attacker will eventually get lucky.
  • This bug is only exploitable in very memory-constrained systems. The author uses a container where only 256MB are allowed to be used. Additionally, since this is a library, it depends on how the library is used. The author was using the demo ssh server from the examples directory to test this out.
  • Overall, a simple bug once you see it but a crazy hard thing to find and exploit. Good work!
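The rc-reuse pattern can be sketched in a few lines of Python (the helper functions are stubs standing in for libssh's real hash and signature routines; only the control flow mirrors the bug):

```python
SSH_OK, SSH_ERROR = 0, -1

def sha256(data):              # stub for the real hash routine
    return "digest"

def expected_hash(key):        # stub: pretend the hash always matches
    return "digest"

def check_signature(sig, key): # stub for the real crypto verification
    return sig == "valid"

def verify_signature(sig, key, malloc_fails=False):
    rc = SSH_ERROR  # defensive default, set once at the top

    # rc is *reused* to hold the result of an unrelated hash check.
    rc = SSH_OK if sha256(sig) == expected_hash(key) else SSH_ERROR
    if rc != SSH_OK:
        return rc

    if malloc_fails:
        # BUG: this error path returns rc assuming it still holds the
        # defensive SSH_ERROR, but the hash check above set it to SSH_OK.
        return rc

    return SSH_OK if check_signature(sig, key) else SSH_ERROR
```

Under normal conditions a forged signature is rejected; induce the allocation failure at the right moment and the same forged signature "verifies".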

Jimbo's Protocol Hacked- 1188

Rekt    Reference →Posted 2 Years Ago
  • Jimbo creates a semi-stablecoin via rebalancing. This is version 2 of the protocol, an attempt to fix a first version that had too many bugs in it.
  • The whole point of this protocol is to rebalance itself (buying or selling accordingly) based upon the current state of the market. By doing this, the pool keeps a specific percentage of resources throughout, which would hopefully make the coin stable.
  • This rebalancing for stability sounds like a blessing. However, it assumes that the prices feeding the rebalance are fair. In Jimbo's case, rebalancing at attacker-manipulated prices was possible, and the protocol lost an insane amount of money ($7 million). With an inflated price of JIMBO, the JimboController would transfer the contract's ETH back into the pool; by selling JIMBO back to the pool, the attacker could make off with the extra profit.
  • To hit this vulnerability, the attacker took out a large flash loan then performed the following actions:
    1. Swap a large amount of ETH to get JIMBO from the Uniswap and Trader Joe pools. NOTE: This causes a major surge in the price of Jimbo compared to ETH.
    2. Call shift() to rebalance the contracts assets for the Jimbo Controller.
    3. Use the now extremely valuable Jimbo tokens to get back the ETH.
    4. Leave the protocol in complete shambles.
    5. Do steps 1-4 over and over again.
    6. Repay the flash loan and keep everything else as profit.
  • According to PeckShield, the issue was a lack of slippage control on the protocol-owned liquidity being invested. In particular, a time-weighted average price (TWAP) or price-change check should have been added to account for these large, attacker-controlled swings.
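The missing guard can be sketched in a few lines of Python (the 5% tolerance, function names, and TWAP source are invented; this is the shape of the fix, not Jimbo's actual code): compare the spot price against a TWAP before letting the rebalance run.

```python
MAX_DEVIATION = 0.05  # 5% tolerance, purely illustrative

def safe_to_rebalance(spot_price, twap_price, max_dev=MAX_DEVIATION):
    # Reject rebalances when spot has moved too far from the time-weighted
    # average -- the signature of a flash-loan price pump.
    deviation = abs(spot_price - twap_price) / twap_price
    return deviation <= max_dev

def shift(spot_price, twap_price):
    # Guarded version of the rebalance entry point the attacker called.
    if not safe_to_rebalance(spot_price, twap_price):
        raise ValueError("spot deviates too far from TWAP; possible manipulation")
    return "rebalanced"
```

A 1% drift rebalances normally; the flash-loan pump in step 1 above (spot far above TWAP) would be rejected instead of handing the attacker the controller's ETH.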