Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Carelessness versus craftsmanship in cryptography - 1903

Opal Wright - Trail of Bits (ToB)    Reference →Posted 1 Month Ago
  • Two popular AES libraries, aes-js and pyaes, provide a default IV in their AES-CTR API. Although this was seen as helpful from the API standpoint, it actually creates a terrible vulnerability.
  • Why is reusing a key/IV pair so bad? If you encrypt two messages in CTR mode with the same key and nonce, XORing the ciphertexts cancels the keystream and yields the XOR of the plaintexts, from which the plaintexts can be recovered. Being able to recover plaintexts is fairly catastrophic. Even pyaes's default example does this. So, this is likely all over the place.
  • strongMan is a management tool in the strongSwan VPN suite. It allows credential and user management, as well as creating VPN connections. It uses an encrypted SQLite database protected by AES in CTR mode, along with the aforementioned library. This allowed X.509 certificates and private key information to be leaked from the database. The strongMan developers immediately fixed the issue and asked the library developer to fix this footgun.
  • The article names and shames the developers of the open-source library, while praising the strongMan developers for immediately remediating the issue. I'm unsure how I feel about this. On the one hand, it's open-source software that is probably maintained by one person... if you name and shame them, maybe they stop maintaining it altogether, which is worse than having a security issue. On the other hand, we need to make sure this footgun gets fixed. Regardless, good technical article and bug discovery.
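The keystream-reuse math above takes only a few lines to demonstrate. This sketch models CTR mode as what it fundamentally is, an XOR with a keystream derived from the key and nonce, rather than calling any particular AES library:

```python
import os

def ctr_xor(keystream: bytes, message: bytes) -> bytes:
    # CTR mode is a stream cipher: ciphertext = plaintext XOR keystream,
    # where the keystream is derived from the key and the nonce/IV.
    return bytes(k ^ m for k, m in zip(keystream, message))

keystream = os.urandom(32)          # same key + same IV => same keystream
m1 = b"attack the bridge at dawn"
m2 = b"retreat to the east road!"
c1 = ctr_xor(keystream, m1)
c2 = ctr_xor(keystream, m2)

# The keystream cancels out: c1 XOR c2 == m1 XOR m2
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in zip(m1, m2))

# Knowing (or crib-dragging a guess of) one plaintext reveals the other
recovered = bytes(a ^ b for a, b in zip(xored, m1))
assert recovered == m2
```

This is why a fixed default IV in an API is not a convenience but a trap: any two messages encrypted with the library defaults leak each other.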

Transient Storage Clearing Helper Collision Bug - 1902

SolidityLang    Reference →Posted 1 Month Ago
  • The main Solidity code generator had a compiler bug in the intermediate representation (IR) pipeline. This is the story and impact of the bug, which was present from version 0.8.28 to 0.8.33.
  • The IR pipeline generates reusable Yul helper functions during code generation. These helpers are deduplicated by name, so there can only be a single Yul function per name. The helper responsible for clearing a storage slot (delete) derives its name from the Solidity type being cleared: for instance, storage_set_to_zero_t_address for clearing a slot holding an address.
  • This is where the bug lives: the name does not include the storage location, persistent or transient. Since there can only be one helper per name, whichever clearing operation the compiler encounters first determines the implementation used for both. There are two failure cases: a transient clear when a persistent one is expected, and a persistent clear when a transient one is expected. Notably, this writes data to the wrong kind of slot!
  • The article gives an example of the persistent helper being used when the transient one is expected. In this case, the owner address is in persistent storage slot 0 and a locking address variable is in transient slot 1. Upon clearing the transient lock, the owner value is unintentionally modified.
  • The other example goes in the opposite direction. The contract has a mapping of uint256 to address in persistent storage and an address for the caller in transient storage. The code walks through an approval, a revoked approval, and a run to demonstrate the issue: if you try to call revoke(), it simply will not work. Neat!
  • The Solidity team's severity assessment seems fair. First, projects that run their test suite with --via-ir before deployment would have noticed this behaviour. That feels off at first, but I definitely don't trust the compiler, so I've tested across multiple settings like this before. Following the bug report, the team found three affected contracts, whose developers were notified and have fixed them.
  • There's a separate blog post about this from Hexens, the discoverers of the bug. While the Solidity team thinks this was low impact, Hexens believes it was critical. I think that assessing the impact of developer tools is hard: is the impact the worst possible outcome, or impact weighted by likelihood? If you go by the first, it's really bad; if you go by the second, the overall impact isn't severe. The transient keyword isn't too popular yet, so this would have been more likely to bite in the future. Compare the Vyper reentrancy bug, which affected a wide range of contracts with very concrete security implications.
  • Overall, a great blog post on the cause of a compiler bug and the issues that resulted from it. I appreciated the concrete examples. The post doesn't mention how the bug was found, which I would also like to know.
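The name-collision mechanics can be modeled in a few lines of Python. This is a toy model of the dedup cache, not actual compiler code; the helper name format is from the post, everything else is illustrative:

```python
# Toy model of the IR pipeline's helper cache: helpers are deduplicated
# by name, and the name is derived from the Solidity type only. Because
# the storage location (persistent vs. transient) is missing from the
# name, whichever clear the compiler generates first wins for both.

helpers: dict[str, str] = {}

def get_clear_helper(type_name: str, location: str) -> str:
    name = f"storage_set_to_zero_{type_name}"   # BUG: location omitted
    if name not in helpers:
        helpers[name] = location                # first encounter wins
    return helpers[name]

# A persistent clear is generated first...
assert get_clear_helper("t_address", "persistent") == "persistent"
# ...so a later transient clear silently reuses the persistent helper,
# writing the zero to the wrong kind of storage slot.
assert get_clear_helper("t_address", "transient") == "persistent"
```

The fix is exactly what you would guess from the model: include the storage location in the helper's name so the two cases can no longer collide.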

Trailing Danger: exploring HTTP Trailer parsing discrepancies - 1901

sebsrt    Reference →Posted 1 Month Ago
  • HTTP smuggling occurs when two HTTP parsers interpret the same data differently, and that discrepancy can be used to smuggle unintended data through the pipeline. A simple example would be Nginx in front of NodeJS; both implementations need to parse the same data.
  • HTTP trailers are extra header fields transmitted after the body of a chunked transfer encoding in HTTP/1.1. Although they are defined in the specification, they are rarely used in practice outside gRPC. Many servers, such as HAProxy, simply ignore or discard trailers altogether. The specification states that only an allowlist of headers, such as Digest, should be mergeable; anything else, such as Content-Length, must be ignored.
  • By abusing implementations that simply merge all headers, it's possible to bypass various security protections. For instance, you can spoof the host header or smuggle in the x-forwarded-for header. An additional attack vector is manipulating request boundaries by smuggling in Content-Length or Transfer-Encoding headers.
  • lighttpd merged trailers post-dechunking, which allowed overwriting Content-Length to change the packet's meaning entirely. lighttpd also adds an extra Connection: close, which should make the attack useless. There's a theoretical workaround for this, but I'm unsure how practical it is: some HTTP servers will only honour close if it's the only entry in the Connection header, and if lighttpd sees a Transfer-Encoding, it will add that to the Connection header too. If the downstream server ignores the close because of this extra value, the smuggling is still possible.
  • Apache Traffic Server and Pound do not validate trailers, allowing hidden HTTP headers to be added. Eventlet, after reading the chunked body of an HTTP request, skips trailer parsing entirely. If the front-end server sees the request with trailers but Eventlet ignores them, Eventlet is forced to parse an additional request.
  • In http4s, the trailer parser terminates early: if a trailer line doesn't contain a colon, parsing stops completely. This again makes the server parse more than one request out of the original request. Overall, using the HTTP Garden, they found 13 variations of this across HTTP servers. Some were just header smuggling, while others were real request smuggling.
  • Most HTTP clients do not support trailers, so to do this research the author had to create a tool. They even have an intentionally vulnerable app to play around with, and a CTF challenge too. The post seems to take inspiration from an earlier post but takes it a step further. It pays to create unique tooling and to read up on what else is happening in the space. Great work!
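The vulnerable merge-everything behaviour can be illustrated with a toy parser. This is a sketch of the pattern the post describes, not any server's real code; the spec requires that framing fields like Content-Length appearing in a trailer be ignored, and merging them is exactly the bug:

```python
# Toy parser: de-chunk the body, then (incorrectly) merge *every* trailer
# field back into the header map, letting an attacker rewrite how a
# downstream parser frames the request.

def parse_chunked(raw: bytes):
    head, _, rest = raw.partition(b"\r\n\r\n")
    headers = {}
    for line in head.split(b"\r\n")[1:]:        # skip the request line
        k, _, v = line.partition(b": ")
        headers[k.decode().lower()] = v.decode()

    body = b""
    while True:
        size_line, _, rest = rest.partition(b"\r\n")
        size = int(size_line, 16)
        if size == 0:
            break                               # rest now holds the trailers
        body += rest[:size]
        rest = rest[size + 2:]                  # skip chunk data + CRLF

    for line in rest.split(b"\r\n"):            # BUG: merge all trailers
        if b": " in line:
            k, _, v = line.partition(b": ")
            headers[k.decode().lower()] = v.decode()
    return headers, body

raw = (b"POST / HTTP/1.1\r\nHost: victim.example\r\n"
       b"Transfer-Encoding: chunked\r\n\r\n"
       b"5\r\nhello\r\n0\r\nContent-Length: 9999\r\n\r\n")
headers, body = parse_chunked(raw)
# The smuggled trailer now masquerades as a normal request header
assert headers["content-length"] == "9999" and body == b"hello"
```

If a front-end and back-end disagree on whether that Content-Length counts, the bytes after the chunked body get framed differently on each side, which is the request-smuggling primitive.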

From WebView to Remote Code Injection - 1900

djini.ai - Lyes Mohammed    Reference →Posted 1 Month Ago
  • The author of this post was reverse-engineering a mobile application, looking for weird handling of deeplinks. While doing this, they found a browsable intent-filter with a custom scheme. Additionally, the app had a WebView with no host restrictions and a NativeMessage handler that uses postMessage. Altogether, this functionality created a large attack surface to explore, so they decided to dig in.
  • The NativeMessageHandler() JSInterface (webview to native) only had a single exported message: postMessage. It had two types of actions: Native and Standard. After registering yourself as a sender, there were actions like sharing/saving files and more. This code contained an arbitrary file write via a path traversal. Classic!
  • The author then asked themselves what the consequence of this was. The application they were testing used React Native over-the-air (OTA) updates to update JavaScript bundles without going through app store review. After playing around with these directories, they found the right files to write in order to gain RCE on the Android device after an app crash.
  • The deeplink didn't work for ALL URLs; it had a server-side check that verifies whether an endpoint is trusted or not. One of the trusted domains was google.com and its subdomains. Google has a subdomain called sites.google.com that allows for loading arbitrary webpages through an iframe. From this iframe, it was possible to use postMessage to trigger the bug once again.
  • This is the full exploit path:
    1. Route a browser on Android to a deeplink to the application with the special Google site iframe from above.
    2. Register the native handler.
    3. Overwrite the OTA configuration and plant a malicious React Native bundle.
    4. Restart the application. This can be done by crashing the app or waiting for the user to restart.
  • The authors run an AI security company and were able to replicate the finding of this vulnerability using their tool, though it appeared to take some guidance. A very crazy chain of issues that led to a sick RCE. Great post!
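The arbitrary file write at the core of the chain is a classic path traversal. Here is a hedged sketch of the bug class in Python; the directory layout and function names are hypothetical, not the app's actual code:

```python
import os

APP_FILES = "/data/data/com.example.app/files"   # hypothetical app dir

def unsafe_save_path(filename: str) -> str:
    # BUG: attacker-supplied filename is joined without normalization
    return os.path.join(APP_FILES, filename)

def safe_save_path(filename: str) -> str:
    # Fix: normalize, then verify the result stays inside APP_FILES
    path = os.path.normpath(os.path.join(APP_FILES, filename))
    if os.path.commonpath([APP_FILES, path]) != APP_FILES:
        raise ValueError("path escapes the app directory")
    return path

# "../" segments walk out of the intended directory, e.g. toward the
# OTA bundle location described in the post
evil = "../ota/bundle.js"
assert os.path.normpath(unsafe_save_path(evil)) == \
    "/data/data/com.example.app/ota/bundle.js"
```

Once the write can reach a directory that the app later executes code from (the OTA bundle location here), the file write escalates to code execution.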

Tachyon: Saving $600M from a time-warp attack - 1899

Qedaudit    Reference →Posted 1 Month Ago
  • CometBFT is a BFT consensus implementation used by Cosmos blockchains, in which block timestamps are derived from a weighted median of validator votes. In theory, this ensures that the median timestamp falls within the range proposed by honest validators, even when up to 1/3 of the voting power is malicious. A Commit is the finalization proof for an accepted block, bundling the block ID with a set of signatures; each commit signature is a validator's vote attesting to that block.
  • The Commit structure stores the entirety of the block information. The signatures are a list of CommitSig objects, each containing an address, timestamp, and signature. When performing commit signature verification, the index of the signature is used to find the amount of voting power. When computing the median time, the validator address is used instead. If the address is not present in the current validator set, then it's simply skipped.
  • This small difference allows different values to be used for different things. For signature verification, the ValidatorAddress doesn't matter; it's only the index of the signature. So, the submitter of a block can use an invalid ValidatorAddress to force the lookup of an invalid value for the median time difference! The example exploit makes the attacker's validator address the ONLY valid address and index, allowing them to set the block timestamp arbitrarily.
  • On most chains, it's possible to cause a chain halt by forcing an overflow on the block time. On chains with time-based rewards, increasing the time horizon enables the creation of large amounts of assets; notably, Babylon and Celestia would see significant token inflation. The "$600M" figure feels slightly exaggerated, since these funds would become unusable almost immediately after exploitation.
  • The vulnerability is pretty rad! Missing input validation on a single field leads to a weird edge case that breaks everything. Awesome find! To fix this, they suggest validating the address against the validator at that index and returning an error if it cannot be found. Anytime errors are silently ignored, it's probably going to be a problem!
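The index-vs-address mismatch is easiest to see in code. This is a toy Python model of the behaviour described above, with illustrative names rather than the real CometBFT API:

```python
# Signatures are verified by their *index* into the validator set, but the
# median-time computation weights each timestamp by looking up the
# signature's *address*, silently skipping addresses it cannot find.

def median_time(commit_sigs, powers_by_address):
    weighted = []
    for sig in commit_sigs:
        power = powers_by_address.get(sig["address"])
        if power is None:
            continue            # the silent skip at the heart of the bug
        weighted.append((sig["timestamp"], power))
    weighted.sort()
    total = sum(p for _, p in weighted)
    acc = 0
    for ts, power in weighted:
        acc += power
        if 2 * acc >= total:    # weighted median
            return ts

powers = {"attacker": 10, "val-b": 45, "val-c": 45}
honest = [  # honest validators' votes, but with mangled address fields
    {"address": "bogus-1", "timestamp": 1_700_000_000},
    {"address": "bogus-2", "timestamp": 1_700_000_001},
]
evil = [{"address": "attacker", "timestamp": 9_999_999_999}]

# Signature checks still pass (they use indices), but only the attacker's
# address resolves, so they alone decide the "median" block time.
assert median_time(honest + evil, powers) == 9_999_999_999
```

With all addresses intact, the attacker's 10 units of power would be drowned out by the honest 90; skipping the unresolvable addresses is what hands them sole control of the median.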

GNU InetUtils telnetd Authentication Bypass Vulnerability - 1898

Offsec    Reference →Posted 1 Month Ago
  • Telnetd uses /usr/bin/login for authentication. To login via telnet, you need to pass in a valid username and password.
  • When calling /usr/bin/login, the %U placeholder in the command template gets replaced with the USER environment variable. telnetd performs no sanitization of the USER value when substituting it in. So, /usr/bin/login -h [hostname] "%U" becomes /usr/bin/login -h [hostname] "USER".
  • Setting the user to be -f root will skip authentication remotely and grant a shell to the specified user. The local exploit can be performed with the regular telnet command: USER='-f root' telnet -a [ipaddr]. Obviously, this is really bad if it's exposed to the Internet. If you were exposing telnet in the first place, then you probably have other problems though.
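The substitution flaw reduces to a one-liner. This is a minimal sketch of the pattern, not InetUtils source; the template shape follows the post, the function name is illustrative:

```python
# telnetd-style command construction: the client-controlled USER value is
# dropped into the login command line verbatim, with no sanitization.

def build_login_cmd(hostname: str, user: str) -> str:
    return "/usr/bin/login -h {} {}".format(hostname, user)

assert build_login_cmd("client.example", "alice") == \
    "/usr/bin/login -h client.example alice"

# USER='-f root': the "username" slot is now parsed as login(1)'s -f flag,
# which tells login the user is already authenticated.
assert build_login_cmd("client.example", "-f root") == \
    "/usr/bin/login -h client.example -f root"
```

Anywhere untrusted input is spliced into an argument position that can start with a dash, flag injection like this is on the table.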

The AI Vampire - 1897

Steve Yegge    Reference →Posted 1 Month Ago
  • The main idea of the post is that people are using AI too much and are overworking. CEOs dream of this level of productivity, but it's impossible to maintain 10x output all the time.
  • While productivity increases, your salary doesn't increase with it. The employer captures 100% of the value of you using AI; you get nothing from it. This is why the post is called the AI vampire. If you decide to work only 1 hour a day to offset this productivity bonus, you'll be out of a job, because others are delivering the 10x increase. So, what's the solution? Somewhere in the middle: maybe you work fewer hours, but the company still gets more productivity.
  • Another thought I had: would they really let others go and pay you more? The reality is that it's a supply-and-demand game. If there are more developers than there are jobs, the price of the job goes down. With the mass layoffs that the AI revolution could cause, the cost of a software developer will drop dramatically. More devs than demand means lower prices for devs, even if they are producing more value than before.
  • At a previous job at Amazon (around 2001-2003, in the years after the IPO), they noticed people were killing themselves working. Many of the ones who got through those hard times are now millionaires. But the reality is that most startups don't pan out, and your effort isn't worth anything in the end. To combat this, they considered the $/hr ratio: you control the number of hours you put in but not the amount you get paid, so more hours doesn't mean more money.
  • The AI vampire is real and people need to know about it. The idea is that the AI value capture needs to be shared between the company and the employees, striking a balance between competitiveness and sustainability. Although I want this to be true, incentives drive the future, and this doesn't line up with the money-now perspective of America. Even if it's not sustainable, CEOs are happy to just hire another developer after the previous one burns out.
  • They claim that AI has turned us all into Jeff Bezos: the easy stuff is already automated and the hard decisions are what is left. As the author said, you can only do 4 hours of incredibly deep work like this per day. Most of the time, you're just coasting. Now, AI has made coasting a thing of the past. 8 hours of this is humanly impossible for long periods of time.
  • I thought this was an interesting post on where AI is taking us. Frankly, I'm worried about the capabilities and what this will do to technology jobs in the future. To me, developers will still be needed for some time to come, at the very least for A) the maintainability of software, B) design-level decisions, and C) verifying the output of LLMs. Without deep knowledge as a developer, simply using Claude Code, the software will become very bad very quickly.

Cross Curve $1.4M Implementation Bug [Explained] - 1896

Quill Audits    Reference →Posted 1 Month Ago
  • Axelar is a cross-chain protocol similar to Wormhole and LayerZero. Normally, upon finalization, the Axelar network sends a message to the core contract. Then, the calling contract checks whether the command exists and can be executed. With the Express functionality, all of this changes.
  • Axelar includes an express feature that executes transactions before finalization is triggered. Practically, this means that some actor is fronting the funds, assuming they will be repaid. Since there's no command ID saved on Axelar because it's before the command has been sent, how do we know it's valid? We don't! So, the express functionality is a super-duper trusted action.
  • CrossCurve used the expressExecute() interface. It checks whether a commandId already exists and rejects the command if it does, but there's still no validation of the message itself. So, an attacker could simply call expressExecute() with whatever data they wanted to execute cross-chain actions.
  • On Twitter, sujith posted a screenshot of them submitting this issue to Axelar on Immunefi. This appears to be a poorly designed feature: the relayer in this model is a trusted entity, but it isn't included in the standard contracts to inherit from.
  • MixBytes has a good tweet discussing the issue as well. I understand that Axelar expects additional layers of authentication; the CrossCurve team attempted this but failed. Personally, I think this vulnerable-by-default pattern is bad. Good write-ups explaining the root cause of the issue.
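The broken check reduces to very little code. This is a toy Python model of the pattern described above, not the actual Solidity interface:

```python
# expressExecute() only checks that the commandId is fresh. Nothing proves
# the payload was ever approved by the Axelar network, so any caller with
# a never-before-seen commandId gets their payload executed.

executed: set[bytes] = set()

def express_execute(command_id: bytes, payload: str) -> str:
    if command_id in executed:
        raise ValueError("commandId already used")
    executed.add(command_id)
    # BUG: no proof that (command_id, payload) was validated upstream;
    # the express model assumes a trusted relayer fronted real funds
    return f"executing: {payload}"

# An attacker invents a fresh commandId and supplies their own payload
assert express_execute(b"\x01" * 32, "attacker payload") == \
    "executing: attacker payload"
```

The freshness check only prevents replay; it says nothing about authenticity, which is the whole problem.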

Claude Skill for Solidity Smart Contract Vulnerabilities - 1895

kadenzipfel    Reference →Posted 1 Month Ago
  • The repository contains a set of Claude Skills for Solidity smart contract vulnerabilities. They range from authorization via tx.origin to more nuanced, contextualized things like access control checks. Many of these are already found by Slither, but it's the new ones that are interesting to me. Cheatsheet

Improving UserOperation Execution Safety in EntryPoint v0.9 - 1894

ERC4337    Reference →Posted 1 Month Ago
  • The ERC4337 (Account Abstraction) implementation assumes that a UserOperation will execute as if the intended user had sent it directly to the contract on the blockchain from an EOA. In reality, the transaction does NOT have to run in an isolated context.
  • Reentrancy guards and flash loans are great examples of this. The state of an executing contract can be modified prior to execution of the UserOperation. In both cases, it would be possible to force the transaction to fail by triggering the reentrancy guard. This would grief users for the gas they spent.
  • UserOperations can be observed in the public transaction mempool or in the off-chain, gossip-based ERC4337-specific mempool. Both are perfectly viable ways to front-run these calls.
  • Operations like simple transfers on UserOperations are not affected. More complex contracts, such as flash loans and those with reentrancy guards, would have been affected. The discoverers of the vulnerability from TrustSecurity received a $50K bounty. This is at the top of the high category in the program. It was a unique issue identified through a deep understanding of the ERC's context. Good report!
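The griefing vector above can be sketched with a standard mutex-style reentrancy guard. This is a toy Python model under that assumption, not real ERC-4337 or Solidity code:

```python
# If an attacker can put the target contract into a guarded state before
# the victim's UserOperation runs (e.g. mid-flash-loan, via a callback),
# the operation reverts and the victim still pays for the gas consumed.

class GuardedContract:
    def __init__(self):
        self.locked = False

    def call(self, action):
        if self.locked:
            raise RuntimeError("ReentrancyGuard: reentrant call")
        self.locked = True
        try:
            return action()
        finally:
            self.locked = False

target = GuardedContract()

def attacker_callback():
    # Runs while the guard is held; the victim's operation lands inside it
    try:
        target.call(lambda: "victim user operation")
        return "victim succeeded"
    except RuntimeError:
        return "victim reverted, gas burned"

assert target.call(attacker_callback) == "victim reverted, gas burned"
```

The victim's operation was perfectly valid at simulation time; only the attacker-controlled ordering made it revert, which is exactly why simple transfers are unaffected but guard-bearing contracts are.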