People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!
The SCI protocol looked quite similar to the FINE protocol (which is well documented in the manual). There are some notes on the two protocols, but the author leaves the rest of the reversing for someone else to do.

Using view-source:, they noticed an interesting file path being used for the environment. This directory, home/ec2-user/anaconda3/envs/JupyterSystemEnv/share/jupyter/lab/static, had several HTML, JavaScript and CSS files inside of it. By modifying the HTML page, the author trivially achieved XSS! But here is the thing: this was a self-XSS. What can we really do with that? Since the instance is hosted on <my_instance>.<region>.sagemaker.aws, the self-XSS could potentially be escalated away from the hosted instance. They noticed that all of the cookies were scoped to ONLY .sagemaker.aws! In particular, the anti-CSRF token was on this domain.

The next obstacle was the SameSite cookie flag. The origin validation was non-existent, so that part was fine. In the land of browsers, CORS becomes a major problem for CSRF attacks because a pre-flight OPTIONS request is made. As a result, only certain types of requests, known as simple requests, can be used. This disallows setting custom headers, using JSON, and any non-GET/POST request. The author used a Content-Type of text/plain in order to get the JSON sent but still interpreted as JSON! That's a new trick for me.

SameSite has three settings: none, lax and strict. The default in Chrome and Firefox is lax, but the default in Safari is none. When the setting is lax, some cross-domain requests are allowed, such as top-level GET navigations. strict will never send cookies on cross-domain requests. If the SameSite attribute is never set (defaulting to lax), then there is a 2-minute grace period where all requests will contain the cookies. So, the author found a GET request (which is not affected by SameSite with the lax setting) to reset the AuthToken of the user! Since the cookie was now set less than 2 minutes ago, the auth cookie will be sent.
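To make the smuggling trick concrete, here is a minimal sketch (the field names and token value are made up, not from the article): a cross-site form with enctype="text/plain" is a simple request, and the browser sends its single field as name=value, so splitting the JSON payload such that the stray = lands inside a throwaway string keeps the body valid JSON.

```python
import json

# A cross-site form with enctype="text/plain" triggers no CORS pre-flight
# and sends its one field as "name=value". Split the JSON payload so the
# stray "=" lands inside a throwaway string value.
# (Field names and the token are illustrative, not from the article.)
field_name = '{"authtoken": "NEW_TOKEN", "padding": "'
field_value = '"}'
body = field_name + "=" + field_value

# A server that parses the body as JSON regardless of Content-Type sees
# a perfectly well-formed object:
parsed = json.loads(body)
print(parsed["authtoken"])  # NEW_TOKEN
```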
Damn, that is a real fancy workaround! The JSON was smuggled through as a text/plain simple request, with the CSRF value set as a parameter. Finally, the trick of resetting the auth token to abuse the Chrome grace period on the SameSite cookie header was really awesome. Overall, a crazy article about how a self-XSS led to a compromise.

Next is the '\@' trick, which even the original RFC gets wrong. A validator given https://[your_domain]\@jobs.googleapis.com thinks the authority is jobs.googleapis.com, but the library making the request sends it to [your_domain] with a path of /@jobs.googleapis.com. Hence, the verification differing from the usage causes the vulnerability. The initial fix only checked for '\' and '@' directly next to each other. So, adding anything between these (such as the URL https://[your_domain]\_@jobs.googleapis.com) still caused the SSRF. After fixing this, they found ANOTHER issue: there were still old vulnerable versions of the AppEngine app running, which needed to be patched.

JWTs have a header field called alg, which is short for the algorithm used for the signature. For instance, this could be set to RS256 for RSA or HS256 for HMAC.

User input ended up in a template with JavaScript. Later on, this input gets executed when clicking on the search bar. XSS! The fix was to blocklist __proto__ and a few other items. I hate this solution; if another way is found to reference the prototype, then this turns into a large list of findings.

The bug: a memcpy into a fixed-size buffer.

To talk to the NPU, /dev/msm_npu is used. This driver has many IOCTL calls, such as allocating/unmapping a DMA buffer, loading/unloading a neural network model, and several other operations. Most of the commands are synchronous, with a few being asynchronous. On npu_close, the client pointer is removed from the network. By racing npu_close and an async npu_exec_network at close to the same time, the client is used but the NPU is never cleaned up! This leads to a use-after-free on a pointer in the global buffer. By replacing the client object with a fake object, arbitrary kernel functions, with 1 parameter of control, can be called. In npu_exec_network_v2, a stats_buf can be specified to collect some debugging information. But, this never worked?
Instead of specifying the buffer address, an additional dereference was used! &kevt->reserved[0] should have been kevt->reserved[0]. As a result, the copy wrote out the stats_buf address rather than copying the contents. This allowed the attacker to learn where this buffer was in memory and partially defeat KASLR. What a stupid bug that leads to another step in the chain.

struct npu_kevent contained a UNION with four potential elements. In C, the compiler sizes a UNION to its largest element. The largest element (uint8_t data[128]) is an auxiliary buffer of size 128. When the copy happens while a small UNION field is in use, such as struct msm_npu_event_execute_v2_done exec_v2_done, the rest of the data is never initialized. The copy uses sizeof(struct msm_npu_event), which takes the size of the struct containing the largest field in the UNION. So, even though the used parts of the UNION were initialized, the rest of the buffer was not. Damn, this is an awesome bug! The attacker now knows the address of stats_buf, which is important for creating a fake object. The first vulnerability can then, on the use-after-free, use a fake object that calls a function pointer to get code execution. The target was __bpf_prog_run32 with a bytecode pointer that should be executed in the kernel. Since the parameters were not set up properly, they needed to find a function to control the second parameter. Moving from parameter 1 to parameter 2 was easy because of the large number of small wrappers in Linux.

When sock_getsockopt handles the option SO_PEERCRED, there is no lock when copying the data to userspace. Why is this bad? Because of the missing lock, the object could be deleted and then sent back to the user, a use-after-free vulnerability. The affected structure is sk->sk_peer_cred: the creds are replaced, then freed. If another process/thread is accessing the structure at that moment, a use-after-free could potentially occur.
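The union-sizing behavior behind that uninitialized-copy leak can be demonstrated from userspace with ctypes. The layout below only loosely mimics the article's structs and is not the real driver definition.

```python
import ctypes

# Loosely modeled on the article's structs; NOT the real driver layout.
class ExecV2Done(ctypes.Structure):
    # stand-in for struct msm_npu_event_execute_v2_done
    _fields_ = [("network_hdl", ctypes.c_uint32),
                ("exec_result", ctypes.c_uint32)]

class EventPayload(ctypes.Union):
    # a union is sized to its largest member: the 128-byte aux buffer
    _fields_ = [("exec_v2_done", ExecV2Done),
                ("data", ctypes.c_uint8 * 128)]

print(ctypes.sizeof(EventPayload))   # 128
# If the kernel initializes only exec_v2_done (8 bytes) but copies
# sizeof(union) bytes to userspace, everything past the small member is
# stale kernel memory -- the infoleak:
print(ctypes.sizeof(EventPayload) - ctypes.sizeof(ExecV2Done))  # 120
```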
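Circling back to the '\@' trick from the SSRF write-up, the parser mismatch is easy to reproduce. Python's RFC-3986-style urlsplit stands in for the validator here (evil.example is a placeholder for the attacker's domain); the fetching side of the mismatch is described in the comments rather than executed.

```python
from urllib.parse import urlsplit

# RFC-3986-style parsing: "\" is not a delimiter, so "evil.example\" is
# read as userinfo and the host appears to be the trusted one.
# (evil.example is a placeholder for the attacker-controlled domain.)
url = "https://evil.example\\@jobs.googleapis.com/v4/jobs"
print(urlsplit(url).hostname)  # jobs.googleapis.com

# A WHATWG-style parser (what browsers, and the vulnerable client, do)
# treats "\" like "/", ending the authority at evil.example and turning
# "@jobs.googleapis.com/v4/jobs" into the path -- so validation sees the
# trusted host while the request goes to the attacker's server.
```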