Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Billion-Dollar Bait & Switch: Exploiting a Race Condition in Blockchain Infrastructure - 1873

Mav Levin    Reference → Posted 2 Months Ago
  • In web3, a random user is selected to be the block creator. To maximize profit, this role is split into three parties: builder, relayer, and validator. The builder is the trader willing to pay for specific transaction ordering within the block. The relayer is a trusted auctioneer that identifies the highest bid from the builders. The validator is the one who actually creates the block.
  • The process of MEV extraction is a player-vs-player (PvP) contest. Thousands of trading bots see the same data as everyone else, but only one can win. Unlike traditional finance, which competes on speed, Web3 competes on price alone because block times on ETH are 12 seconds long. Of course, different chains have different requirements. Many builders send bids to the relayer, and the validator chooses the highest bid to construct the block; the most popular marketplace is Flashbots.
  • When a trader submits a transaction to the relay, the relay recalculates the top bid (the current winner). When setting the top bid, the user's bid is retrieved from the Redis cache, and then the user is written as the winning address along with the bid amount. A separate API sets the cache key holding the bid.
  • There's a classic time-of-check to time-of-use (TOCTOU) issue here. If you set the bid very high, the code tries to update the highest bidder. While it does, the bid amount can be reset to a very low value. The race window is tight, so the attack must be spammed over and over. The result: winning the auction without paying anything!
  • To fix the issue, the atomic Redis COPY command was used instead. Given the impact, it was surprising that the payout was only $5K. A good takeaway: concurrency is hard to get right and should always be considered.
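The check-then-write pattern above can be sketched in a few lines of Python. This is an illustrative model only: a plain dict stands in for Redis, and `set_bid`, `update_top_bid`, and the key names are hypothetical, not the relay's actual code.

```python
# Minimal TOCTOU sketch: a dict stands in for Redis; all names hypothetical.
cache = {}

def set_bid(bidder, amount):
    # Separate bid-setting API: writes straight into the cache.
    cache[f"bid:{bidder}"] = amount

def update_top_bid(bidder, race_hook=None):
    # Time of check: read and validate the bid.
    amount = cache[f"bid:{bidder}"]
    if amount > cache.get("top:amount", 0):
        if race_hook:
            race_hook()  # stands in for the tight race window
        # Time of use: re-reads the (possibly changed) value.
        cache["top:bidder"] = bidder
        cache["top:amount"] = cache[f"bid:{bidder}"]

def update_top_bid_fixed(bidder):
    # Fix in spirit: record exactly the value that was checked
    # (the real fix used Redis's atomic COPY command instead).
    amount = cache[f"bid:{bidder}"]
    if amount > cache.get("top:amount", 0):
        cache["top:bidder"] = bidder
        cache["top:amount"] = amount

# Bid huge to win the check, then reset to zero inside the window.
set_bid("attacker", 10**18)
update_top_bid("attacker", race_hook=lambda: set_bid("attacker", 0))
print(cache["top:bidder"], cache["top:amount"])  # attacker wins at 0
```

The fixed variant shows why the atomic copy works: the value that passed the check is the value that gets recorded, so the racing write can no longer split the two.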

GnuPG Fail - 12 Vulns - 1872

gnu.fail    Reference → Posted 2 Months Ago
  • The research focuses on GNU's implementation of Pretty Good Privacy (PGP), which is used for many things, like verifying downloads. This started as a for-fun code review but turned into a lot of vulnerabilities.
  • The first bug was plaintext injection. The implementation didn't sign headers but claimed to only allow the SHA256 hash header. In reality, another header could effectively contain arbitrary data. Since this value is a C string, adding a null byte, vertical tabs, and carriage returns makes the data effectively skipped during signing while still looking legitimate! With a MitM, an attacker could inject arbitrary text into the message being signed.
  • Within another tool, minisign, they found a similar issue with comments. If you add a newline or carriage return to the comment, it gets replaced with a null byte before the signature check, but when the string is displayed, the full line is used. So signed\ninjected verifies as just signed but displays all of the content. The tl;dr: C strings are hard!
  • In GnuPG and Sequoia, they found a signature-wrapping issue. Given any signed message from a user, you can add data to the top of the file that will be shown, while the original data is what gets verified. This came from slight differences between what the user saw and what was verified: notably, a header without the correct name was effectively ignored by verification, even though the user would still see it.
  • There are full signatures, where the sig is included with the data, and detached signatures, where the sig lives in a separate file. When iterating over each packet (group of bytes), a bad state machine resets a byte to zero on an invalid type, effectively skipping verification. This allows marking unsigned content as verified. Neat!
  • ANSI escape sequences allow arbitrary styling in the terminal. Many tools, such as gzip, specifically refuse to write untrusted binary to the terminal. Here, it's possible to prompt the user to write data to a file while displaying something different, which effectively turns into clickjacking in the terminal.
  • Age, an alternative to PGP, supports plugins. Plugin loading is based on part of the recipient's name, and a path traversal made it possible to execute an arbitrary binary. Since this was an issue with the specification, it led to A) a change in the spec and B) 5 vulnerable implementations.
  • From this, they started looking for memory corruption issues in the C libraries. They found a for loop that double-incremented an index, leading to an out-of-bounds operation. Triggering it required exploiting an integer overflow in the same function's return value across multiple calls. An additional issue, an uninitialized variable, led to a downgrade to SHA1 signatures in some cases.
  • They found bad caching logic that allowed an unverified key to get linked to an account when loaded from the DB. Simply attempting verification of a key was enough to link it, without the verification succeeding. In practice, this key linking would allow a wide range of verification bypasses.
  • So much impact with zero exploits against the actual cryptography: most of the bugs were in signature parsing and format-quirk abuse. A really cool talk on PGP from non-crypto people!
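The C-string mismatch that runs through several of these bugs can be modeled in Python. This is a hedged sketch of the general pattern, not minisign's actual code; the function names and the use of SHA-256 are illustrative assumptions.

```python
# Sketch of the sign/display mismatch: the signer treats the comment as
# a C string (stops at the first NUL), the UI displays every byte.
import hashlib

def sign_as_c_string(data: bytes) -> str:
    # C's strlen() stops at the first NUL byte, so everything after it
    # is invisible to the signing code.
    visible_to_signer = data.split(b"\x00", 1)[0]
    return hashlib.sha256(visible_to_signer).hexdigest()

def display(data: bytes) -> str:
    # The display path prints the raw bytes, NULs and all.
    return data.decode("latin-1")

benign   = b"trusted comment"
injected = b"trusted comment\x00INJECTED TEXT"

# Both inputs carry the same signature, but display differently.
assert sign_as_c_string(benign) == sign_as_c_string(injected)
assert "INJECTED TEXT" in display(injected)
```

Any time the length used for verification and the length used for rendering disagree, the bytes in between are a free injection channel.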

A >$10M protocol drain missed in an audit contest - vulnerability write-up - 1871

samuraii77    Reference → Posted 2 Months Ago
  • dHEDGE is an asset management protocol. Users deposit assets, and managers/traders generate yield for them. The codebase intentionally allowed integrations, but with minimal trust required of the manager/trader. For instance, when using Uniswap, the code enforced a slippage range. These checks are called contract guards; their goal is to prevent users from being drained by the strategy's owner. Designing a protocol this way is great for users, but very hard to do securely.
  • The vulnerability existed within the 1inch integration. Each integration call goes through a tx guard, the provider call, and invariant checks at the end. The 1inch integration exposed multiple functions: swap, unoswap, unoswap2, and unoswap3, which allow swaps via different pools, such as Uniswap and Curve. The bug is within the unoswap function.
  • The calldata for unoswap is decoded to get the associated token information, notably the source token and the pool address. The source token is a uint256 where the least significant 160 bits are the address and the remaining bits are flags. Given a source token and a pool, it deduces the destination asset.
  • The vulnerability lies in some very simple-looking code used to get the destination token. The unoswap integration sometimes uses a different value as the source token; in particular, with UniswapV3, it uses the bit flags from the earlier pool value. So there's a desync between validation and usage.
  • For example, say the manager provides WETH as the source asset for a USDC/WETH pool with the USDC flag set. Even though WETH was provided as the source asset, the swap will be done from USDC->WETH. This circumvents the slippage protection because it's checking the wrong value.
  • Exploiting this as described won't work, because the slippage logic will revert when the source asset's balance increases. By specifying multiple pools (one with a malicious token and another legitimate), the fake token can return wrong values from balanceOf during the slippage checks. Then, on the second leg of the trade, all of the funds can be extracted.
  • This vulnerability was missed in a contest that the author of this post actually participated in. The bug itself was pretty simple, just requiring some integration knowledge of the various protocols, but the exploit required a very deep knowledge of the protocol to work, which was pretty awesome. Good find!
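The validation/usage desync can be sketched abstractly. The bit layout and flag position below are simplified assumptions made up for illustration, not the real 1inch encoding; the point is only that the guard checks one token while the swap uses another.

```python
# Hypothetical model of the guard-vs-swap desync. Layout is illustrative:
# low 160 bits of a word = address, a high bit = swap-direction flag.
ADDR_MASK = (1 << 160) - 1
REVERSE_FLAG = 1 << 255  # made-up "direction" bit in the pool word

WETH, USDC = 0xAAAA, 0xBBBB  # stand-in token addresses

def guard_check(src_token_word):
    # The guard validates the manager-supplied source token...
    return src_token_word & ADDR_MASK  # address the guard thinks is sold

def actual_source(pool_word, token0, token1):
    # ...but the swap derives the real source from the pool word's flags.
    return token1 if pool_word & REVERSE_FLAG else token0

pool = 0x1234 | REVERSE_FLAG             # USDC/WETH pool, direction flag set
checked = guard_check(WETH)              # guard believes WETH is being sold
used = actual_source(pool, WETH, USDC)   # swap actually sells USDC
assert checked == WETH and used == USDC  # validation and usage disagree
```

Whenever the same fact (here, "which token is being sold") is computed twice from different inputs, an attacker only has to make the two computations disagree.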

0-click Exploit Chain For Pixel 9 Part 3: Where do we go from here? - 1870

Natalie Silvanovich - Project Zero    Reference → Posted 2 Months Ago
  • The previous two blog posts in this series contained a 0-click exploit to compromise the audio rendering on Android and then a kernel driver on the Pixel to compromise the device. The third and final post is about where to go from here.
  • The audio-parsing 0-click attack surface is a bad one: many audio formats, many crappy libraries, giving attackers plenty to look at. Natalie recommends removing uncommonly used decoders, such as Dolby UDC, from the 0-click attack surface. What's reachable via 0-click or 1-click isn't commonly considered; it's wise to think through these decisions more deeply.
  • The Dolby UDC bug was found after a single one-week team hackathon, and the second kernel bug in a single day, so discovery was relatively quick. The driver had multiple bad vulnerabilities, and GTIG has detected and reported 16 Android driver vulnerabilities; the security of these drivers appears to be pretty poor.
  • The Dolby UDC chain took eight person-weeks, and the BigWave vulnerability took 3 weeks for a basic proof of concept. Given the amount of money threat actors have, this is a very small time commitment to find this issue.
  • The mitigations are an interesting discussion. For the UDC bug, the seccomp filter was turned off, and the binary was not compiled with the bounds protections that would have fixed it. Things like MTE wouldn't have helped because of the custom allocator in use. For the kernel bug, the weakness in kASLR made exploitation much easier. They hope that Pixel and other Android manufacturers will follow Apple's lead on memory safety features going forward.
  • The patching for this wasn't great, mainly because of the ecosystem's complexity. The first bug was reported to Dolby in June 2025 and patched in Chrome in September 2025, but Pixel did not receive Dolby patches until October, Samsung patched in November 2025, and Pixel didn't ship fully until January 2026. In total, this 0-click exploit took 139 days to patch from disclosure. Dolby thought the exploitability of this bug was low for some reason, but Project Zero believes otherwise.
  • Just because another bug is needed to fully compromise the phone doesn't mean it's not worth patching right away: at least one bug in the full chain, ideally the hardest part, should be patched immediately. Diffusion of responsibility is likely why this took so long. Overall, an amazing series on the real world of exploit development.

0-click Exploit Chain For Pixel 9 Part 2: Cracking the Sandbox with a Big Wave - 1869

Seth Jenkins - Project Zero    Reference → Posted 2 Months Ago
  • The previous blog post in the chain got code execution within the mediacodec sandbox on Android, which is constrained by SELinux. After reviewing the available attack surface, they found the driver /dev/bigwave accessible from the sandbox. This is a Pixel SoC block that accelerates AV1 decoding, meaning it was intended to be reachable.
  • Several issues popped out immediately in this driver. One was reported in February 2024 but was still unfixed when this research was conducted a year later. Another was a double-free. They found one bug that was better than those, though: a UAF.
  • Whenever the device driver is opened, a new kernel struct labeled inst is created to store private data. Within it is a job that tracks register values and job status. To submit work to the bigo hardware, an ioctl places the job in a queue to be consumed by a separate kernel thread. In practice, this meant an object whose lifetime was bound to a file descriptor was accessed by a kernel thread without any validity checks.
  • The actual UAF required a little trickery. The BIGO_IOCX_PROCESS ioctl submits a job to the bigo worker thread and enters wait_for_completion_timeout with a 16-second timeout, after which the job is removed from the priority queue. If enough jobs are queued, the waiting thread may exit early. If userland then closes the file descriptor associated with BigWave, the job is destroyed while the worker thread still references it.
  • By spraying attacker-controlled kmalloc allocations, such as Unix domain socket messages, it's possible to control job->regs, dictating both where and what gets written. This creates a 2144-byte arbitrary write! From previous research, they knew that 0xffffff8000010000 (.data) is static and contains many useful kernel globals, so there's no need to defeat ASLR at all.
  • With some reliability work, the exploit yields a very dependable arbitrary read/write. From there, they set SELinux to permissive, fork a new process, and set the new process's task creds to init_cred. They now have root credentials with SELinux disabled.
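The lifetime bug boils down to a pattern that can be modeled without any kernel code. Everything below, the class, the queue, the "spray", is a hypothetical pure-Python stand-in for the driver behavior described above, not the actual driver logic.

```python
# Model of the fd-bound-lifetime UAF: a job tied to a file descriptor
# is handed to a worker that never checks whether it is still alive.
class Job:
    def __init__(self, regs):
        self.regs = regs
        self.freed = False

queue = []

def ioctl_submit(job):
    queue.append(job)  # job enqueued for the worker thread

def close_fd(job):
    job.freed = True   # driver release path: frees the inst/job...
    # ...and the heap spray reuses the freed memory with chosen data.
    job.regs = "ATTACKER-SPRAYED DATA"

def worker_run():
    job = queue.pop(0)
    # Missing check: the worker dereferences job.regs without
    # verifying the backing object is still alive -> use-after-free.
    return job.regs

job = Job(regs="legit register values")
ioctl_submit(job)
close_fd(job)        # userland closes the fd before the worker runs
print(worker_run())  # worker consumes attacker-controlled "memory"
```

In Python the stale reference just reads a mutated attribute; in the kernel, the same ordering hands the worker attacker-controlled register values, which is what turned into the 2144-byte write.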

0-click Exploit Chain For Pixel 9 Part 1: Decoding Dolby - 1868

Natalie Silvanovich - Project Zero    Reference → Posted 2 Months Ago
  • AI-powered features have added 0-click attack surfaces to mobile phones on top of SMS, RCS, and more. Because audio decoders are now 0-click targets on phones, Project Zero decided to investigate. One format they looked into was Dolby Digital, including AC-3 and EAC-3.
  • While reviewing the audio format specification, they noticed an apparent problem: no limit was specified for the emdf_payload_size parameter. Ivan Fratric found that the copy of this content into the heap contained a simple integer overflow. Using this, along with the emdf_container_length acting as a stop point, a buffer can be under-allocated without the copy turning into a wild one.
  • There's a useful pointer called skip that can be overwritten. By carefully filling the heap with multiple payloads, it's possible to overwrite the pointer and have arbitrary attacker-controlled data written through it.
  • Making this exploit work without an ASLR leak was super complicated and beyond my experience. It required a deep understanding of the data structures in play and the code interacting with them. After a lot of work, they achieved a repeatable out-of-bounds write, and using relative writes let them avoid issues with ASLR. They were then able to overwrite a function pointer within a table. Neat!
  • Android has several mechanisms that make running shellcode within mediacodec difficult, primarily SELinux. Co-workers Jann Horn and Seth Jenkins came up with a plan to work around these limitations: use ROP to repeatedly open /proc/self/mem so its file descriptor is easy to guess, then use pwrite to overwrite a function's code with shellcode. This works because /proc/self/mem allows any memory in a process to be overwritten for debugging purposes.
  • Due to ASLR guessing, the exploit worked 1 in 255 times, taking about 6 minutes to run with process restarts; they believe the two sources of 1/16 could be removed with several months of effort. They also reflect on Android's platform mitigations. ASLR on the lower bits of pointers made this much harder than expected, and all parts of the process were sufficiently randomized. mediacodec has seccomp rules that block syscalls it doesn't need, but they were left out of the Pixel 9; these would have prevented the pwrite strategy used in this post and forced several more weeks of effort to implement the entire exploit in ROP.
  • scudo, the heap allocator, didn't feel sufficiently hardened; part of the exploit tricked it into moving allocations. Most of the time this wouldn't be possible because of the guard pages, but it's still worth considering. On macOS and iOS devices that use the -fbounds-safety flag, the exploit isn't possible.
  • The blog's perspective isn't just "let's find a bug"; it's "can we create a reliable exploit chain like a threat actor?" That's a unique perspective that I love to see, as it makes the security work feel more real.
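The under-allocation trick is a classic integer-overflow pattern. Here is a sketch with 32-bit wraparound; the constants and header size are illustrative, not the real Dolby decoder's values.

```python
# Sketch of the emdf_payload_size overflow pattern, modeled with
# explicit 32-bit wraparound. Constants are made up for illustration.
U32 = 0xFFFFFFFF

def alloc_size(payload_size, header_size=16):
    # Vulnerable pattern: size arithmetic wraps modulo 2**32, so a huge
    # attacker-supplied payload_size yields a tiny allocation...
    return (payload_size + header_size) & U32

payload_size = 0xFFFFFFF8          # attacker-controlled, near UINT32_MAX
size = alloc_size(payload_size)    # 0xFFFFFFF8 + 16 wraps to 8
assert size == 8                   # buffer badly under-allocated
# ...while the copy loop still trusts payload_size, writing far past
# the 8-byte buffer. Stopping the copy early (here, via the container
# length) keeps it a controlled heap overflow instead of a wild copy.
```

The same shape appears in C as `malloc(payload_size + HEADER)` followed by a loop bounded by `payload_size`; the fix is an explicit overflow check (or a bounds-checked size type) before allocating.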

Self-XSS in Facebook payments flow leads to Instagram and Facebook account takeovers- 1867

ysamm    Reference → Posted 2 Months Ago
  • Facebook's payments and billing flows use third-party financial services providers. To perform these bank payments, Facebook embeds external services via iFrames that perform cross-window communication.
  • One integration is the ACH Direct Debit flow, which embeds a third-party iframe into m.facebook.com with strict origin verification. The cross-window message-handling mechanism allows direct HTML injection, assumed to be safe because it comes from a trusted domain. In reality, all messages from this domain were simply treated as trustworthy. Practically, origin validation is only as strong as the security of the trusted origin.
  • The handler for direct_debit_ach_initialization's learnMore event injects HTML directly into the DOM, creating an XSS opportunity if we can find more bugs. On one of the third-party providers, they found code that loads a remote configuration file from a URL and calls eval() on its cmd parameter.
  • Since we can control a message from the third-party domain, we can communicate with Facebook to trigger the XSS sink. All it takes is a postMessage from the iframe and we have XSS on Facebook! This is actually self-XSS, though, because the endpoint has an unguessable nonce. So, how is this useful?
  • With XSS on the page, it's possible to initiate the OAuth flow in an iframe and, being same-origin, read the OAuth codes out of the iframe's location.href to compromise the account.
  • Here's the full exploit chain:
    1. Login CSRF is used to get a valid account setup.
    2. Attacker loads the ACH page with the vulnerable third-party provider.
    3. XSS is triggered on the third party iframe that calls postMessage to get XSS on Facebook.
    4. Attacker initializes the OAuth flow in an iFrame to steal the OAuth codes.
  • By itself, the XSS isn't bad enough to warrant an account takeover, so they found some quirks to make it possible via Save Login and by keeping the malicious page alive using a Blob URL. Facebook has a GraphQL endpoint for account switching, and with XSS on the site, this API can be called. Because the payload is loaded via a Blob URL, Facebook cannot reload away the compromised execution context, so the XSS remains persistent.
  • If there are no device-switchable accounts, there's another way: trigger a Google OAuth flow for Facebook, capture the authorization code after the redirect back to Facebook, and use it to link the attacker's account to the victim's. This works without user interaction but requires that only one Google account be signed in to the victim's browser.
  • To make the impact worse, this meant third-party payment providers could remotely execute arbitrary code within Facebook; to target all users, a threat actor would only need to compromise one of them. For this bug, they were rewarded $62K. Great work!

Two-click Facebook account takeover via FXAuth token and blob theft - 1866

ysamm    Reference → Posted 2 Months Ago
  • Facebook and Instagram accounts are deeply integrated through Accounts Center. This allows users to link identities, share authentication methods, and manage global settings. The integration relies on native SSO flows and redirect-based handoffs between applications. Of course, issues within authentication can be catastrophic.
  • On Facebook, the native SSO login endpoint has three parameters: app_id, token (FXAuth token), and extra_data. The extra_data commonly contains a redirect path, which is verified by the application that depends on it. This endpoint allows redirects to /accounts_center/ for the Instagram application. By using double URL-encoding and path traversal, it's possible to bypass the normally strict redirect validation.
  • The end goal in OAuth-based SSO issues is leaking the tokens. With the ability to redirect to any endpoint on Instagram, we don't have the tokens yet, but it's a good starting point. The author found an endpoint that sends a postMessage with a * target origin, including the token in its payload: exactly the leak we wanted. This ONLY works if the nonce is set correctly.
  • There's a catch, though: the nonce must be legitimate. To get around this, the attacker creates their own account to generate a valid nonce and uses that in the payload. Additionally, the user must be logged in for this primitive to work, so login CSRF is used. Finally, the attacker generates their own FXAuth token, signed via accountscenter.
  • The attack is as follows:
    1. Victim visits the attacker's site.
    2. Attacker uses a login CSRF primitive to log in the user into their account.
    3. Attacker website opens a new window with the crafted native SSO URL.
    4. Victim confirms the Instagram app.
    5. The redirect goes to the vulnerable endpoint to leak the token. This creates a post message to the page to steal the full redirect URL, including the token.
    6. Attacker captures the message and extracts the blob to log in to the victim's account. They now have access to the accounts center to manage settings. This leads to a complete account takeover.
  • The exploitation had four parts to it: FXAuth token reuse, weak validation of the redirect parameter, token leakage via postMessage, and email-based CSRF. I appreciate the ability to chain all of these together for an account takeover that requires only two clicks. For this, Facebook paid $30K.
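The double-encoding redirect bypass can be sketched generically. The prefix and paths below are hypothetical stand-ins; the point is the decode-once-then-check, decode-again-then-use pattern.

```python
# Sketch of a double URL-encoding redirect bypass. The validator decodes
# once and checks a prefix, but a later layer effectively decodes again.
from urllib.parse import unquote

ALLOWED_PREFIX = "/accounts_center/"  # hypothetical allowed redirect root

def validate_redirect(raw_path):
    # Validation layer: one decode, then a simple prefix check.
    return unquote(raw_path).startswith(ALLOWED_PREFIX)

def resolve_redirect(raw_path):
    # Routing layer decodes again, reviving the %2F-encoded traversal
    # (a real router would then also normalize the ".." segments).
    return unquote(unquote(raw_path))

crafted = "/accounts_center/..%252F..%252Fvulnerable_endpoint"
assert validate_redirect(crafted)  # passes: still looks in-prefix
assert resolve_redirect(crafted) == "/accounts_center/../../vulnerable_endpoint"
```

Validating one representation of a path while routing on another is the same desync shape as the unoswap bug above, just in URL space instead of calldata.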

Can't find Criticals? The problem is either your strategy, your execution, or both. - 1865

infosec_us_team    Reference → Posted 2 Months Ago
  • The author of this post had a DM conversation with a security researcher who has proven results on multiple platforms but has been doubting their skills due to a lack of recent bounties. They adapted the DMs for a wider audience and posted them for others to read. They claim the issue is one of three things: unreasonable goals, bad bug-hunting strategy, or poor execution.
  • If your goal is finding a critical in Aave (a million-dollar program) within only a 10-day window, you're likely to find nothing. Another bad example: aiming for 6 figures per year while joining 2-month-long contests with small rewards. Your goals need to line up with your choices.
  • The rest of the article is examples that have a revenue goal, strategy, and execution plan all in place. The first has a goal of $100K per year. The strategy: participate only in large contests and hunt on programs that offer $20K-$50K for criticals. The execution: 1) read all previous findings to see if fixes can be bypassed, 2) look for low-hanging fruit, and 3) spend no more than 10 days on a codebase. In this case, they say to only hunt on programs that push code updates more often than they get reviewed.
  • The second example targets $200K per year. First, do contests with prize pools over $300K. Next, hunt on bug bounty programs that offer $50K-$200K per critical, mostly DLT/blockchain protocols without much public auditing. The execution: dive into the nitty-gritty details of the codebase, looking for low-hanging fruit and then obscure edge cases, staying on a project for at most 2 months. From there, move on to another codebase, but capitalize on the accumulated knowledge in that project's contests and by monitoring its code updates.
  • Having a solid plan and reasonable goals is just as important as finding the bug itself. I appreciated that they gave real examples of strategies in this post. If your plan isn't working, come up with a new plan and try again.

Datr cookie theft and AI leads to Facebook account takeover via trusted device recovery- 1864

ysamm    Reference → Posted 2 Months Ago
  • Facebook uses long-lived device identifiers to reduce friction for returning users and to distinguish legitimate from illegitimate activity. A device that logs in repeatedly is considered trusted by the application, which relaxes some of the security requirements. One of these identifiers is datr.
  • The page https://www.facebook.com/recover/account/ is used to verify an account via email or phone number. When requests originate from a trusted device, an alternative flow can be used to recover the account by uploading a document. This process is automated and is supposed to help legitimate users regain access easily. A core invariant of this flow is that a trusted device cannot be easily impersonated.
  • The Facebook OAuth implementation, when interacting with the GraphQL API, can leak the datr value: for an application with Facebook login, the machine_id field is the same as this cookie. Although this data cannot be queried directly, Facebook allows chaining batched API requests. By having later requests reference earlier responses, it's possible to propagate the machine_id into attacker-viewable output.
  • Here's the full attack flow:
    1. Generate your own access code information for OAuth; this just makes the calls require fewer interactions from the user.
    2. Get user to visit your malicious website.
    3. Within an iframe, use the BATCH API to trigger the OAuth call that will return the machine_id and then post that to your own Facebook account.
    4. Initiate account recovery with the new datr value. This should be easy to bypass with public information and fake documents.
  • A sick blog post on an account takeover on Facebook. I appreciate the knowledge around the importance of datr and the Batch API referencing previous values. Both of these require a lot of context, specifically on this target. They were awarded $24K for the bug, which is a solid payment. Another amazing write-up!
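The batch-chaining primitive can be sketched as a payload builder. The endpoint paths and field names below are made up for illustration; only the `{result=name:$.path}` response-referencing mechanism, which Facebook's Graph API batch requests support, is the point of the sketch.

```python
# Hedged sketch of chaining batched requests: a later request pulls a
# value out of an earlier response via a JSONPath-style reference.
# Paths/fields here are hypothetical, not the real endpoints used.
import json

batch = [
    {
        # First request: hypothetical OAuth call whose response
        # includes the machine_id tied to the datr cookie.
        "method": "GET",
        "name": "leak",
        "relative_url": "oauth/endpoint?fields=machine_id",
    },
    {
        # Second request: posts the value somewhere attacker-readable,
        # referencing the first response by its "name".
        "method": "POST",
        "relative_url": "attacker/feed",
        "body": "message={result=leak:$.machine_id}",
    },
]
payload = {"batch": json.dumps(batch)}
print(payload["batch"])
```

The server resolves the `{result=leak:...}` placeholder before executing the second request, which is exactly what lets a value that "cannot be queried directly" flow into attacker-viewable output.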