Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

web/framed-xss - 1883

m0z    Reference →Posted 2 Months Ago
  • The challenge uses Chromium and abuses HTTP disk cache keys to trigger a client-side cache-poisoning issue. It contains two endpoints: /view and /. /view only succeeds if the request contains a From-Fetch header, but it contains an XSS sink via the html parameter. / performs a call to /view via a fetch and places the contents within an iframe without script execution. This is the setup for the challenge.
  • The goal is to trigger the XSS, but there's a paradox. /view cannot be called directly because of the header check, and / places the code into an iframe, so we can't do anything. The trick of the challenge is to fool the browser into adding the From-Fetch header to an unintended request.
  • Modern browsers have a split cache in order to prevent cross-site leaks. The cache key is derived from the top-level site and the resource URL. Chromium added the cn_ prefix to prevent cache poisoning during main-frame navigation. In particular, this prefix is added when the top-level page has its location.href modified.
  • By using the history API, it's possible to bypass the usage of cn_ on the page. Notably, history.back() doesn't count as a cross-site main-frame navigation for whatever reason!
  • So, the following sequence of events will lead to XSS:
    1. Do a window.open() to /?html=<XSS> to populate the cache.
    2. Redirect to another page and perform history.back().
    3. Redirect to /view?html=<XSS>.
    4. Do the final history.back() to load the cached version of the page to get XSS.
  • Weird challenge with some weird browser quirks. I still don't 100% understand this, but I appreciate the trick of telling the browser to use the cache when the requests are somewhat different.
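The cache-key behavior can be modeled with a small sketch. The tuple shape and field names below are simplified assumptions for illustration, not Chromium's actual implementation:

```python
# Toy model of Chromium's split HTTP cache key. Real keys are more
# complex; the tuple shape here is a simplified assumption.
def cache_key(top_site: str, url: str, cross_site_main_frame_nav: bool):
    # cn_ is only prepended for cross-site main-frame navigations
    prefix = "cn_" if cross_site_main_frame_nav else ""
    return (top_site, prefix + url)

# A fetch() from the attacker's page caches /view under an unprefixed key:
fetched = cache_key("attacker.site", "https://chal/view?html=<xss>", False)

# Navigating there via location.href would use a cn_-prefixed key (a miss):
navigated = cache_key("attacker.site", "https://chal/view?html=<xss>", True)
assert navigated != fetched

# history.back() doesn't count as a cross-site main-frame navigation,
# so it looks up the unprefixed key and hits the poisoned entry:
back_nav = cache_key("attacker.site", "https://chal/view?html=<xss>", False)
assert back_nav == fetched
```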

Multiple cross-site leaks disclosing Facebook users in third-party websites- 1882

ysamm    Reference →Posted 2 Months Ago
  • Facebook is used by almost everybody. Being able to see who is logged in can allow for targeted attacks, account takeovers, and employee profiling. This article dives into several techniques they used to de-anonymize users.
  • The first issue occurs in Zoom callbacks in Facebook Workplace. When supplying __cid and __user, an attacker can brute-force the user ID of the Workplace community. If __user is correct, then an empty page with text/html is returned. If it's incorrect, the response is application/json, which triggers CORB and blocks it from loading as a script. By observing onload and onerror events, it's possible to determine the user ID of the logged-in user.
  • When embedding a Facebook plugin, such as the Like plugin, inside of an iframe, the rendering differs depending on the supplied user ID. If __user is correct, everything renders as normal. If it's incorrect, then X-Frame-Options: DENY is returned, preventing the iframe from loading. This distinction allows brute-forcing the active user or page ID by observing postMessage events rather than a timeout.
  • The endpoint https://www.facebook.com/signals/iwl.js?pixel_id=PIXEL_ID returns a JavaScript payload intended for internal Meta Pixel testing, including the Facebook user ID. This value is scoped inside a function, but by manipulating JavaScript prototypes before loading the script, it can still be extracted. Their PoC modifies the function prototype and prints the object's user ID. Apparently, the script runs within the full context of your page, which still allows the data to be read. Neat!
  • They got $2.4K for the first two bugs and $3.6K for the third. Good work by the author!

Instagram account takeover via Meta Pixel script abuse- 1881

ysamm    Reference →Posted 2 Months Ago
  • Meta's web ecosystem relies on cross-window messaging between first-party websites. The only security control is around origin checks on facebook.com or its subdomains.
  • Multiple Meta modules register window message listeners that require messages from a trusted domain. One of these is fbevents.js, the Meta Pixel script embedded on millions of websites. When loaded in a window, the message listener reacts to many events and forwards them to graph.facebook.com. This includes location.href and document.referrer, which can contain OAuth codes and other sensitive values.
  • The author found an endpoint that constructs an object from user-supplied parameters and forwards it via postMessage to a target Facebook domain specified by the attacker. This appears to be a classic confused-deputy problem, where the data is passed through from a trusted domain without any checks.
  • The fbevents.js code receives messages originating from facebook.com. By using the primitive from above with an arbitrary message send and including an attacker's access_token for GraphQL, requests can be tricked into exposing OAuth code/tokens to the attacker. By doing this, an account takeover may be possible.
  • Here's the flow of the attack:
    1. Trick the user into clicking on a crafted link that abuses the issues from above. The link starts an OAuth flow on Instagram that calls back to developers.facebook.com.
    2. The developers.facebook.com page contains the fbevents.js file and has the message listener. To prevent the page from consuming the token, an invalid nonce must be used.
    3. Attacker redirects their website to the postMessage sink discussed before with the attacker-controlled GraphQL access token.
    4. fbevents.js will consume the message and issue a GraphQL request with the sensitive information, including the OAuth code.
    5. Attacker reviews the Graph Explorer to retrieve the Instagram OAuth authorization code.
  • There is no description of the patch. To patch this, I'd probably get rid of the postMessage sink first. Then, remove the href and referrer from the GraphQL endpoint data, if possible. The author claims that the attack surface expands beyond Meta properties and to third-party websites because of how widely deployed this is. They got $32K for this bug!

Leaking Meta FXAuth Token leading to 2 click Account Takeover- 1880

ysamm    Reference →Posted 2 Months Ago
  • FXAuth is Meta's shared authentication system used by a variety of services that they own. On the domain https://auth.meta.com/fxauth/, a signed token and blob are returned for authenticating to the destination site. The base_uri parameter contains where to redirect back to.
  • Originally, base_uri had no restrictions on the value that was set. By exploiting this, it was possible to redirect to an arbitrary domain and extract the token. This made an account takeover possible. The fix was to restrict it to Meta-owned domains, assuming that the path could not be controlled either.
  • Legacy locations exist where attackers can execute arbitrary JavaScript under a controlled path at https://apps.facebook.com/{app_namespace}. If an attacker owns an application, they can read parameters from the URL even if they do not control the path directly.
  • Once the user is redirected to the attacker's application, their JavaScript can read the token. Using this, it's possible to finalize sensitive flows, such as account linking, to get persistent access to the user's account. This led to two $32.5K payouts.

CodeBreach: Infiltrating the AWS Console Supply Chain and Hijacking AWS GitHub Repositories via CodeBuild- 1879

wiz    Reference →Posted 2 Months Ago
  • On AWS CodeBuild, there is functionality to trigger a build on specific GitHub repos. The main protection against this is a regex that checks the ACTOR_ID for validity when a PR is made. The validation is as follows: 16024985|755743|.... The | symbol is an OR operation in regex.
  • The regex above isn't anchored with ^ and $. Practically, this means that any account ID that merely contains one of these values would be approved by the filter. So, is it possible for a GitHub user ID to contain one of the values in the regex?
  • From their research, about 200K IDs are created per day. Practically, this means an ID containing one of these values comes up roughly every 5 days. Still, there's a bit of a race here, so it's necessary to create a lot of accounts at once. The standard account creation has rate limiting, so this didn't work. The GitHub Enterprise API can create organizations, which share the same ID space. Sadly, this couldn't be used because orgs can't create PRs.
  • The GitHub App manifest flow can interact with pull requests as a bot user. This allowed for the creation of hundreds of apps at once, then visiting the confirmation page to create the IDs simultaneously. This made winning the race condition much smoother. They waited until the live ID was about 100 away and then visited 200 URLs at once. They were able to obtain the ID on many of these open GitHub repos.
  • With the ability to make PRs within the context of the build process, they were able to do a classic pwn request. In particular, create a PR that, once built, extracts GitHub credentials from the environment. With the resulting personal access token (PAT), an attacker had full admin privileges over the repository. What repo was at risk? The AWS SDK JavaScript library! Since so many environments use this, a backdoor in this package would have compromised a large percentage of the Internet.
  • A severe attack of taking a small CI/CD misconfiguration to an Internet-compromising bug. Backdoored packages feel impossible to stop right now, which is what makes this very terrifying.
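The unanchored allow-list check is easy to reproduce. The IDs below are the ones quoted from the article's pattern; the helper names are mine:

```python
import re

# Unanchored alternation, as in the CodeBuild ACTOR_ID validation
UNANCHORED = re.compile(r"16024985|755743")

def is_trusted_actor(actor_id: str) -> bool:
    # re.search matches anywhere in the string -- this is the bug
    return UNANCHORED.search(actor_id) is not None

assert is_trusted_actor("16024985")    # the intended trusted ID
assert is_trusted_actor("116024985")   # attacker ID embedding the value
assert is_trusted_actor("75574312")    # another embedded match

# Anchoring the pattern accepts only exact IDs:
ANCHORED = re.compile(r"^(?:16024985|755743)$")
assert ANCHORED.search("116024985") is None
assert ANCHORED.search("16024985") is not None
```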

Defeating KASLR by Doing Nothing at All- 1878

Seth Jenkins    Reference →Posted 2 Months Ago
  • Address Space Layout Randomization (ASLR) prevents trivial exploitation by randomizing the addresses of processes. The Linux kernel also supports ASLR (KASLR). The author of this post had a vulnerability in the Pixel kernel but needed to bypass KASLR in some way.
  • Their target was the Linux linear mapping, a section of the virtual address space that directly represents physical memory. While reviewing the code for this, they learned that the mappings always start at 0x80000000. So, KASLR is effectively useless for these values. But why?
  • Linux and Android theoretically support hot plugging memory. This is when new memory is plugged into an already running system and must be usable by the Linux kernel addressing. The kernel virtual address space is limited to 39 bits.
  • Given that the maximum amount of physical memory is much larger than the entire linear map, the kernel places the linear map at the lowest possible address so that it can handle the largest amounts of further hot-plugged memory. The feature for randomizing the memory space was removed because DRAM may appear in inaccessible locations.
  • On Pixel phones, the bootloader also decompresses the kernel to the same physical address on every boot. Some phones, such as Samsung's, do randomize this address on every boot, but not every phone does.
  • With the randomization issue, it's possible to access the kernel's .data entries with R/W permissions. The offset 0xffffff8001ff2398 will always map to modprobe_path, for instance; 0xffffff8000010000 is effectively the kernel base.
  • According to the author, this severely weakens the kernel's security. These issues were reported to the Linux kernel and Pixel teams, but they were denied as findings. Overall, a great report on a security issue and its very real origins.
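With the linear map unrandomized, those addresses become constants an exploit can hardcode. A quick check of the delta between the two values quoted above:

```python
# Addresses quoted in the post; with no randomization, an exploit can
# hardcode offsets from the effective kernel base instead of leaking one.
KERNEL_BASE = 0xFFFFFF8000010000
MODPROBE_PATH = 0xFFFFFF8001FF2398

offset = MODPROBE_PATH - KERNEL_BASE
assert offset == 0x1FE2398  # same on every boot, per the article
print(hex(MODPROBE_PATH), "=", hex(KERNEL_BASE), "+", hex(offset))
```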

On the Coming Industrialisation of Exploit Generation with LLMs- 1877

Sean Heelan    Reference →Posted 2 Months Ago
  • The author of this post wanted to see the capabilities of Opus 4.5 and GPT-5.2 when exploiting new vulnerabilities in the QuickJS JavaScript interpreter. They included many different challenges, such as various exploit mitigations and different target goals. Out of the 40 distinct exploits, GPT solved every scenario and Opus solved all but 2. These are the results of the experiment.
  • The vulnerability itself was documented at the beginning. Very quickly, both agents turned the QuickJS vulnerability into a read/write primitive API, making exploitation easier. From there, they leveraged known public weaknesses to build an exploit chain. In the hardest test, they included everything you could think of: fine-grained CFI, shadow stack, seccomp sandbox, and more. GPT-5.2 created a chain of 7 function calls through glibc's exit handler to pop a shell on the hardest challenge with 50M tokens and $150.
  • The author found the vulnerability with an AI agent and then wrote an exploit using it as well. So, now what? The industrialization of exploitation. Now, the ability of an organization to complete a task will be restricted by the number of tokens it can afford, NOT by the number of people.
  • According to the author, exploit dev is perfect for industrialization. The environment is easy to construct. The tools are well understood, and verification is straightforward. The information is out there, and people know how to do this. The limitation tends to be on how many things a person can try and their hours; the computer is not limited by these.
  • This shows that new security issues can be exploited by LLMs because of their massive knowledge of the exploit game. They included source code for these agents as well.

Account Takeover in Facebook mobile app due to usage of cryptographically unsecure random number generator and XSS in Facebook JS SDK- 1876

ysamm    Reference →Posted 2 Months Ago
  • Meta provides several website plugins, such as the Like button and Customer Chat. These are hosted at www.facebook.com and designed for use in iframes. Communication between the host website and Facebook is implemented using postMessage.
  • The plugin sends messages to its parent window and the SDK on the Facebook side listens for those messages and dispatches them internally. To prevent arbitrary domains from interacting with it, the SDK enforces two checks on received messages: they must originate from Facebook, and they must include the proper callback identifier, a random string.
  • The Facebook JavaScript SDK registers a cross-window message listener for messages coming from the Facebook iframe. One of the iframe-handling functions injects an SVG directly into the DOM without sanitization, which could lead to XSS if invoked. There are two issues with this, though: 1) we need to send a postmessage, and 2) we need to have the random identifier.
  • The author of this post seems to know every quirk on Facebook. To solve problem 1, they found a URL that, when the page was visited, would send an iframe with user-controlled data. It's pretty crazy they found this primitive!
  • The random identifier was generated using Math.random(). This is insufficient for cryptographically secure randomness and leaves a hole. The seed appears to be unique per page, so we need to leak the randomness somehow. The window.name value is also generated using Math.random(); if it could be leaked, the state could be extracted.
  • The listener for the call init:post will reinitialize the iframe, generating a new ID. Since the name of a window can be public, it's possible to leak the name and reverse the random number generator to find the seed. From there, it's possible to calculate the callback string to trigger the DOM XSS on the website.
  • This attack has a few limitations... The XSS occurs on the user's website and NOT Facebook, and it requires lots of framing on websites to be allowed. Because this would be considered low to medium impact, they decided to review the internal use of this plugin to increase the issue's impact.
  • Most Facebook pages don't allow the framing required for this exploit. So, they decided to find a generic bypass for the framing. For Android and iOS clients, setting frame-ancestors to a specific domain would be translated into an X-Frame-Options: ALLOW-FROM header. Since ALLOW-FROM isn't supported by modern browsers, the header is ignored, bypassing the iframe protection; this still required frame-ancestors to be on the page.
  • They found an endpoint that would set the frame ancestors to break the iframe protections. However, it had a token that would require a login CSRF for the account. Since this was useless for XSS, a new constraint was added: keep a valid Facebook page inside an iframe with a useful body and ensure it does not refresh after a session change. They noticed that a business endpoint embedded this page on core facebook.com. We have everything we need!
  • Here's the full exploit chain:
    1. Victim visits attacker's Facebook App where the attacker opens a Facebook App Webview.
    2. Attacker creates an iframe on their website that would contain sensitive values like an OAuth token. The attacker then performs a logout and a login CSRF into their own account.
    3. Attacker creates another iframe with the Facebook page with the customer chat plugin with the known attacker token; this is why the login CSRF was required.
    4. Attacker saves the name of the window for usage later. They force reinitialization of the iframe to get multiple values to defeat randomness. This allows them to calculate the Math.random() seed.
    5. Attacker can now send the payload message to the frame from facebook.com and the callback identifier.
    6. Payload from the previous step triggers XSS on Facebook. Now, the script can read the victim's OAuth token.
  • What a crazy set of issues. It requires SOOOO many small primitives in order to exploit and then even more to increase the impact. I appreciate the patience and the gadgets it took to earn the $66K bounty payout.
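The Math.random() weakness in steps 4-5 boils down to a non-cryptographic PRNG whose future outputs can be recovered from leaked ones. V8's real generator is xorshift128+, which takes a more involved state-recovery attack; as a stand-in, a toy LCG shows the principle:

```python
# Toy LCG standing in for Math.random(). V8 really uses xorshift128+,
# so this illustrates the principle, not the actual attack.
M, A, C = 2**31, 1103515245, 12345

def lcg(seed):
    state = seed
    while True:
        state = (A * state + C) % M
        yield state

# The "victim" page generates a window name, then a callback identifier:
gen = lcg(seed=1337)
window_name = next(gen)   # leaked to the attacker (step 4)
callback_id = next(gen)   # the secret the attacker needs

# For an LCG, each output IS the internal state, so the attacker can
# roll the generator forward and predict the identifier (step 5):
predicted = (A * window_name + C) % M
assert predicted == callback_id
```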

Cloudflare Zero-day: Accessing Any Host Globally- 1875

Fearsoff    Reference →Posted 2 Months Ago
  • The ACME protocol is used for verifying domain control, commonly used by certificate authorities. The CA expects the website to serve a one-time token at /.well-known/acme-challenge/{token} to verify the domain. The CA fetches the token; if the bytes match, then the certificate is issued.
  • The authors of this post were reviewing Cloudflare where the WAF was configured to block all requests from everywhere. When requesting the ACME endpoint, the WAF stepped aside and the origin answered the request! Practically, this means that there is special handling for this specific path on Cloudflare that circumvents the firewall. What can we do with that?
  • Cloudflare's SSL/TLS custom hostnames feature manages domains and uses ACME to verify ownership. By starting the verification process on their own domain, they had a long-lived access token that could be tested on all Cloudflare hosts.
  • In Spring and Tomcat, the known ..;/ path-traversal quirk performed a directory traversal on the website while still being treated as the original path by Cloudflare. NextJS made it possible to leak some details that are normally not public (I don't get this one).
  • In PHP, many websites route through a single parameter in the URL. By exploiting the vulnerable Cloudflare path in combination with this PHP functionality, the WAF could be bypassed. On top of just detouring around 404s, WAF rules like blocking headers could also be ignored.
  • The patch was to simply remove this edge case. It's no longer possible to hit this endpoint behind the WAF.
  • Amazing bug! It seems like they saw something interesting while scanning, wanted to understand why and found a great issue. I personally feel there are a lot of distractions, like the PHP using an LFI or interactions with the Crypto.com CISO, but I found it great otherwise.
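The Tomcat traversal trick can be sketched as URL construction. The host, token segment, and target path below are all hypothetical; the exact request shape used in the research may differ:

```python
# Hypothetical sketch: the edge special-cases anything under the ACME
# challenge prefix, while Tomcat's "..;/" quirk steps back out of it.
ACME_PREFIX = "/.well-known/acme-challenge/"

def waf_bypass_url(host: str, target: str) -> str:
    # Three "..;/" hops: one for the fake token segment, two for the
    # acme-challenge and .well-known directories.
    return f"https://{host}{ACME_PREFIX}x/..;/..;/..;/{target.lstrip('/')}"

url = waf_bypass_url("victim.example", "/admin")
assert url == "https://victim.example/.well-known/acme-challenge/x/..;/..;/..;/admin"
```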

Getting Rounding Right in DeFi- 1874

Josselin Feist    Reference →Posted 2 Months Ago
  • Poor rounding in DeFi has been the catalyst of many, many bugs in Web3, even on major projects. The consequences of truncation and rounding down can seem insignificant but can be horrible. This post dives into how this happens and a systematic approach for discovering rounding issues.
  • The EVM doesn't have floats; everything is an integer. So, 5/2 is not 2.5 but 2. To combat this, fixed-point arithmetic is used, which most folks know as decimal arithmetic. Now, 5 is no longer 5 but 5 * 10^18, and when you divide by 2 you get 2.5 * 10^18, which keeps most of the precision. Some libraries also provide functionality for rounding up or down depending on the situation.
  • They provide an example of the following formula from Balancer: balanceOut * (1 - balanceIn / (balanceIn + tokenIn)). If the division truncates to 0, this becomes balanceOut * (1 - 0), leaving balanceOut as the only value. This can happen by combining a flash mint with an unbalanced pool. The truncation allows the attacker to steal all funds from the protocol. The fix is rounding up instead of down.
  • It's commonly said to round in favor of the protocol: if value goes to the users, round down; if it goes to the protocol, round up. In reality, this doesn't work. First, there's a complexity problem. With huge formulas, the correct rounding direction depends on runtime values, making this not a simple task.
  • Second, code is commonly reused; sometimes it'll require rounding down and other times up. Third, errors in the protocol's favor can be bad as well. For instance, making a position unliquidatable would be a major problem.
  • The author includes several tips for fixing this. First, treat every rounding issue as a bug. Instead of spending hours writing an exploit path, it just needs to be fixed on the spot. Some bugs are vulnerabilities, some are exploitable, but they are ALL bugs. The second consideration is around design: one option is to redesign or simplify formulas, or to add protocol invariants. Next, precision losses can cancel each other out; by rewriting multi-step formulas, math errors can offset one another. All of these decisions must be explicitly documented.
  • Overall, a good post on rounding vulnerabilities and how to think about them going forward.
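The fixed-point mechanics and the Balancer-style truncation can be sketched together. The WAD convention is the common 18-decimal scale in Solidity code; the pool numbers are hypothetical, not Balancer's actual state:

```python
WAD = 10**18  # common 18-decimal fixed-point scale in DeFi code

def div_down(a: int, b: int) -> int:
    return (a * WAD) // b             # EVM-style truncating division

def div_up(a: int, b: int) -> int:
    return (a * WAD + b - 1) // b     # round toward positive infinity

# 5 / 2 keeps its fractional part once scaled:
assert div_down(5 * WAD, 2 * WAD) == 25 * WAD // 10   # 2.5 * 10^18

# Balancer-style formula: balanceOut * (1 - balanceIn/(balanceIn+tokenIn)).
# With a tiny balanceIn against a huge tokenIn, the ratio truncates to 0:
balance_out, balance_in, token_in = 1_000 * WAD, 1, 10**6 * WAD
ratio = div_down(balance_in, balance_in + token_in)
assert ratio == 0
amount_out = balance_out * (WAD - ratio) // WAD
assert amount_out == balance_out      # entire balanceOut extractable

# Rounding the ratio up (in the protocol's favor) restores a floor:
assert div_up(balance_in, balance_in + token_in) == 1
```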