Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Solidity Mutation Testing- 1327

Rare Skills    Reference → Posted 2 Years Ago
  • Finding bugs dynamically via testing frameworks is amazing for a development team: fewer security issues and general bugs slip through, and it takes less person power to catch them. There are many ways to go about testing. In this article, Rare Skills introduces the concept of mutation testing.
  • The idea is simple: let's intentionally introduce bugs into the code and see if the test suite catches them. Removing a modifier, flipping comparison operators and deleting code are all great examples of this. By doing this, you test the capability of the test suite to actually find bugs.
  • Doing this manually would theoretically work. However, it would be time consuming. So, the people at Rare Skills built a tool that does this for Solidity! The tool, Vertigo-rs, a Foundry add-on, finds gaps in test suites by randomly mutating the code under test. Overall, an interesting way to test code; I'm curious to see how much this takes off.
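The core loop can be sketched in a few lines of Python (a hand-rolled toy, not Vertigo-rs; the `withdraw` function and its weak suite are made up for illustration):

```python
# Mutation testing sketch: flip an operator, re-run the tests, and see
# whether any test fails ("kills" the mutant).

SRC = '''
def withdraw(balance, amount):
    if amount <= balance:  # the guard we will mutate
        return balance - amount
    raise ValueError("insufficient funds")
'''

def load(src):
    ns = {}
    exec(src, ns)
    return ns["withdraw"]

def run_tests(fn):
    """A weak suite: it never probes the amount == balance edge case."""
    try:
        return fn(100, 30) == 70
    except ValueError:
        return False

original = load(SRC)
mutant = load(SRC.replace("<=", "<", 1))  # mutation: flip <= into <

print(run_tests(original))  # True -- the suite passes on the original
print(run_tests(mutant))    # True -- the mutant SURVIVES: coverage gap!
```

Because the suite never checks `withdraw(100, 100)`, the mutant survives; a surviving mutant is exactly the signal that a test is missing.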

[GitLab] Account Takeover via password reset without user interactions- 1326

DayZeroSec    Reference → Posted 2 Years Ago
  • GitLab is a platform similar to GitHub. Recently, a user found an awful password reset issue that borks the security of the entire system.
  • I love the opening line from the DayZeroSec folks: "Dynamic typing strikes again!" Languages like Java, C# and others are strict about the types being passed in. In Ruby, PHP, Python and others, there are virtually no rules. I've definitely written code over the years that returns different types in different situations, though I know I shouldn't.
  • When passing in an array for the email instead of a string, weird things happened. The lookup function for emails took in an array OR a string, but it would only parse the first email in the list.
  • When actually sending out the password reset tokens, though, all of the emails in the array would be used. According to Z on the audio version of the podcast, the function for looking up the user to reset had email in the name while the one that sent the tokens had emails in it.
  • Using this, an attacker can trigger a password reset on a victim and have the link delivered to the attacker's own email. The fix: you can't even specify an email anymore. Instead, it's derived from the user record itself, which is much more secure. How do people find these types of bugs!? Gotta love the creativity of these folks.
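The array-versus-string confusion can be modeled in a few lines of Python (standing in for GitLab's actual Ruby; all function names here are illustrative):

```python
# Toy model: the lookup reads only the FIRST element of an array,
# while the sender mails the token to EVERY address given.

USERS = {"victim@example.com": "victim-account"}
OUTBOX = []

def find_user_by_email(email):
    # Accepted a string OR an array -- only the first element counts.
    if isinstance(email, list):
        email = email[0]
    return USERS.get(email)

def send_reset_tokens(user, emails):
    # The sender, however, delivers to every address in the parameter.
    token = f"reset-token-for-{user}"
    for addr in (emails if isinstance(emails, list) else [emails]):
        OUTBOX.append((addr, token))

def request_password_reset(email_param):
    user = find_user_by_email(email_param)
    if user:
        send_reset_tokens(user, email_param)

# Attacker submits an array: the lookup matches the victim, but the
# reset token also lands in the attacker's inbox.
request_password_reset(["victim@example.com", "attacker@evil.com"])
print(OUTBOX)
```

The fix described in the post (deriving the address from the user record) removes the attacker-controlled parameter entirely.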

Socket Incident Report 16 Jan- 1325

SocketTech    Reference → Posted 2 Years Ago
  • Socket Tech allows for interoperability between all of the major wallets. On January 16th, they were exploited in a major way.
  • Socket Gateway hosts various modules that can only be added by administrators. When deploying one of these modules, a developer first deploys it, then an admin attaches it to the contract.
  • The goal was to update the contract WrapperTokenSwappgerImpl. When doing this, the development team had a mixup over which version should be deployed - pre-review vs post-review. For whatever reason, the pre-review module got deployed and attached to the contract.
  • The original code had an arbitrary call vulnerability where both the address being called and the data, such as the selector, could be set by the caller. As a result, an attacker called transferFrom() on all of the token contracts that had large approvals from users. This is a good example of why token approvals should NOT be infinite.
  • Overall, the bug is pretty simple. The interesting part to me is how the bug got released into the wild. The team had reviewed the code and found the bug but released the wrong version. I suppose a more rigorous CI/CD program for deployment could have stopped this issue.
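The arbitrary-call primitive can be sketched as a toy Python model (the real code is Solidity; the class names, the "gateway" caller identity and the infinite approval are all illustrative):

```python
# Toy model: the pre-review module lets the caller pick both the target
# contract and the "calldata" (function name + arguments), so standing
# approvals to the gateway can be drained via transferFrom().

class Token:
    def __init__(self):
        self.balances = {"user": 1000}
        # An infinite approval granted to the gateway contract.
        self.approvals = {("user", "gateway"): float("inf")}

    def transfer_from(self, caller, src, dst, amount):
        if self.approvals.get((src, caller), 0) < amount:
            raise PermissionError("no approval")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

class VulnerableModule:
    """Pre-review module: forwards an arbitrary call as the gateway."""
    def perform_action(self, target, fn_name, args):
        getattr(target, fn_name)("gateway", *args)

token = Token()
module = VulnerableModule()
# Attacker freely chooses the target and the "selector":
module.perform_action(token, "transfer_from", ("user", "attacker", 1000))
print(token.balances)
```

With a bounded approval instead of an infinite one, the same call would only drain up to the approved amount, which is the point of the advice in the write-up.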

ECDSA is Weird- 1324

Kelby Ludwig    Reference → Posted 2 Years Ago
  • ECDSA has many unexpected properties that can cause security issues if people are not completely sure of how it works. I can imagine many of these issues being found in blockchain-land, since the public nature of all data gives everyone more access than anticipated.
  • The first, and most well-known, issue is signature malleability. The curves in use all have the form y² = x³ + Ax + B. Because of the y² term, the entire curve is reflected perfectly over the x axis. As a result, every valid point has a mirror image, so every signature (r, s) has a second valid form (r, n − s). The math to generate the other one is trivial.
  • In blockchain, the usage of signatures is common. To prevent replay and double-spend attacks, verifying the orientation of the signature (enforcing one canonical half) is crucial. Otherwise, any scheme that uses the signature itself as a uniqueness key can be bypassed with the flipped duplicate.
  • Given a signature, it's trivial to generate a keypair that produces that same signature for a chosen message. In our replay attack example, this doesn't do us any good. However, if there is a scheme that assumes signatures are unique and anybody can call it, then this can be a problem: we now have the ability to create arbitrary messages with the same signature. Super weird issue but interesting in practice.
  • The next one is not as common but pops up from time to time. It's super important to hash the data that is provided and NOT trust an incoming hash. If an incoming hash is trusted, an attacker can construct hash-and-signature pairs that verify under a public key without knowing the private key. One of the examples is an app that tried to prove ownership of Satoshi's address this way.
  • The final two have to do with knowledge of the random k value. Any knowledge of the nonce k makes it trivial to find the private key. Additionally, if two signatures from a user share the same k, it's also trivial to recover the private key using similar techniques.
  • All of the issues above have a PoC in the accompanying code, which is super nice as well. Cryptography is absolute black magic and we all need to be careful when using it. The author also linked their inspiration, which has lots more content about cryptography issues.
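Both the malleability and the nonce-reuse pitfalls can be reproduced with a toy pure-Python ECDSA over secp256k1 (illustration only: variable-time math, hard-coded nonces, integers standing in for message hashes):

```python
# secp256k1 domain parameters (field prime, group order, generator).
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):                     # affine point addition
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                # P + (-P) = point at infinity
    if P == Q:
        m = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        m = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (m * m - P[0] - Q[0]) % p
    return (x, (m * (P[0] - x) - P[1]) % p)

def mul(k, P):                     # double-and-add scalar multiply
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(d, z, k):
    r = mul(k, G)[0] % n
    s = pow(k, -1, n) * (z + r * d) % n
    return (r, s)

def verify(Q, z, sig):
    r, s = sig
    w = pow(s, -1, n)
    P = add(mul(z * w % n, G), mul(r * w % n, Q))
    return P is not None and P[0] % n == r

d = 0x1234567890ABCDEF            # toy private key
Q = mul(d, G)

# 1) Malleability: (r, n - s) is a second valid signature for the same z.
r, s = sign(d, z=111, k=999)
assert verify(Q, 111, (r, s)) and verify(Q, 111, (r, n - s))

# 2) Nonce reuse: two signatures sharing k leak the private key.
r1, s1 = sign(d, 111, k=999)
r2, s2 = sign(d, 222, k=999)
k_rec = (111 - 222) * pow((s1 - s2) % n, -1, n) % n
d_rec = (s1 * k_rec - 111) * pow(r1, -1, n) % n
assert k_rec == 999 and d_rec == d
```

The recovery uses s₁ − s₂ = k⁻¹(z₁ − z₂) mod n, so k = (z₁ − z₂)/(s₁ − s₂) and then d = (s₁·k − z₁)/r, exactly the "similar techniques" the post describes.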

Permission denied - The story of an EIP that sinned- 1323

Trust Security    Reference → Posted 2 Years Ago
  • EIP-2612 is an extension of the ERC20 standard that adds the Permit() function. This removes the burden of paying gas for a call to approve(). Instead, a user can sign a permit offline, hand it to another party and make it usable for them to transfer funds from the signer's account. Good idea that saves lots of gas!
  • There are two key items to verify: the signature is valid and the deadline has not passed. Crucially, the msg.sender of the call does NOT need to be validated. This is a known limitation but was brushed off, as "The end result is the same for the Permit signer..." The authors of this post asked themselves if this is true or not.
  • Many times, the call to Permit() sits in the middle of lots of other code. So, if an attacker frontruns the call to that outer function, extracts the signature and uses it directly in a call to Permit(), the victim's transaction reverts and they lose the ability to use the functionality after that point. The authors went around and started looking for cases where this is true.
  • The case they saw over and over again was within custom EIP712 functions, mostly in deposit() functions. With these, there's a permit, a transfer then some custom logic. In the example, the logic called _creditUser(). Since we can frontrun this call, the final step will never happen, losing the user some value.
  • The author has a very good point on this: "The issue is a great example of how important it is to be security-focused when defining widely used standards." When creating standards for everyone to use, they better be well thought out. The payouts for the reports were mixed: some projects paid, some were supposed to per Immunefi and didn't... Just how the life of a bug bounty hunter goes.
  • They claim this falls under the griefing category, which is a medium severity: a bug that can be used to hurt an individual user or protocol for a small period of time but offers no incentive or profit to the attacker. To me, this doesn't fall under griefing, since the user permanently loses access to the functionality. Overall, good write-up on an issue that appears to be everywhere. I'm curious to see if this will turn into the next ERC20 approval frontrun bug in terms of reporting.
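The griefing flow can be sketched as a toy Python model (the real contracts are Solidity; the Vault class, the "vault" spender and the _creditUser-style crediting are illustrative, and signature/deadline checks are elided):

```python
# Toy model: anyone may submit a valid permit, and each nonce is
# single-use -- so a frontrun burns the nonce out from under deposit().

class PermitToken:
    def __init__(self):
        self.nonces = {}

    def permit(self, owner, spender, amount, nonce):
        # Signature and deadline verification elided for brevity.
        if self.nonces.get(owner, 0) != nonce:
            raise ValueError("permit reverted: nonce already used")
        self.nonces[owner] = nonce + 1

class Vault:
    def __init__(self, token):
        self.token = token
        self.credits = {}

    def deposit(self, owner, amount, nonce):
        self.token.permit(owner, "vault", amount, nonce)
        # transfer elided ...
        self.credits[owner] = self.credits.get(owner, 0) + amount  # _creditUser()

token = PermitToken()
vault = Vault(token)

# Attacker lifts the signature from the mempool and submits the
# permit directly, consuming the nonce...
token.permit("user", "vault", 100, nonce=0)

# ...so the victim's deposit() now reverts before the crediting step.
try:
    vault.deposit("user", 100, nonce=0)
except ValueError as e:
    print(e)          # permit reverted: nonce already used
print(vault.credits)  # {}
```

Without the frontrun, deposit() would consume the nonce itself and credit the user; the attacker spends a little gas purely to deny that, which is why the severity debate centers on griefing.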

Code Vulnerabilities Put Proton Mails at Risk- 1322

Paul Gerste - Sonar Source    Reference → Posted 2 Years Ago
  • Proton Mail is a privacy-centric email service. Being able to extract secrets from a service whose whole point is secrecy would be devastating. Under the hood, it uses the state-of-the-art HTML sanitizer DOMPurify to avoid XSS in incoming emails.
  • After the sanitization via DOMPurify, the author noticed that some DOM manipulation was being done. In particular, the code would find <svg> elements and replace them with <proton-svg>. It may be possible to use this to break the parsing of the HTML!
  • HTML has its own parsing rules, but SVG and MathML content follows different ones. For the <style> tag, the handling differs: in HTML, everything up to the next closing style tag is raw text; in SVG, <style> can contain child elements. The same markup seen in different contexts can cause major issues.
  • When the element is changed from svg to proton-svg, major changes occur to the parsing. With the payload <style><a alt="</style><img..."> the style gets parsed differently depending on the context. Originally, the markup was kept inert inside the svg, since it was valid there. But the transformation changes the context, potentially leading to XSS.
  • Adding an onerror="javascript..." attribute now leads to XSS! But, we still have two more lines of defense. First, there's an iframe. Second, there's a CSP. On Safari, the iframe's sandbox includes the allow-scripts directive, which allows attackers to execute JS to go after the top frame.
  • The allow-popups-to-escape-sandbox directive lets JS in a popup escape the sandbox of the iframe that opened it. For other browsers, the attacker needs a victim to click on a link that opens in a new tab, which will then have access to the rest of the content on the website.
  • The final thing is bypassing the CSP. The CSP restricts which origins content can be loaded from, and this one allowed the blob: URI scheme for scripts. Blob URLs are temporary URLs that can be created dynamically and then loaded. If we can convince the browser to load our blob, we'd be able to execute arbitrary JS.
  • The blob URLs live at long random UUIDs, so we need a way to learn where they are. To do this, the author used the ability to render remote images and inline styles to leak the blob URL, then used that URL in a different payload later.
  • Overall, an awesome post on contexts for HTML parsers, escaping iFrame sandboxes and CSP bypasses. I really enjoyed the post and learned a ton along the way.

draw.io CVEs- 1321

lude.rs    Reference → Posted 2 Years Ago
  • Draw.io is a website for drawing diagrams. The first vulnerability is a simple SSRF bug because of a bad and manual blacklisting technique. The second issue is much cooler though.
  • The website supports OAuth from third-party providers like GitHub. If we can force a redirect during this flow, we can steal the OAuth token, which would be awesome. However, absolute URLs are not allowed in the redirect parameter - only relative ones. Regardless, the author decided to take a look to see if this could be bypassed.
  • The verification code checks whether the URL is absolute or not, and the library doing this follows the specification perfectly. If the URL is invalid, the code assumes it's a relative path! So, what if we found a URL that is invalid per the spec but processed as an absolute one by the browser?
  • The author did some fuzzing and manual testing of this. Chrome, ever nice, does not conform to the RFC! In particular, if there is a space after the protocol, it will just remove the space. However, this is an invalid URL per the spec, which triggers our error path. An example is https:// @evil.com/, with the space being the important thing here.
  • Since the check is bypassed, the browser performs the redirect to an attacker-controlled website. This steals the OAuth code, leading to a compromise of the user. Overall, amazing post on the bug. I love the idea of "what if we have a URL that is invalid by the RFC but valid to Chrome?" Even though the issue was not immediately exploitable, the idea was right there in the bad error handling.
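The mismatch can be sketched in Python; the RFC-style check below is hypothetical code in the spirit of the validator, and Chrome's space-stripping is simulated with a `replace()` rather than a real browser:

```python
import re
from urllib.parse import urlparse

def is_absolute_per_rfc(url):
    # Per RFC 3986, an absolute URL is scheme "://" authority..., and a
    # raw space is not a legal character anywhere in a URI.
    return re.match(r"^[A-Za-z][A-Za-z0-9+.-]*://\S+$", url) is not None

def redirect_target(url):
    # The vulnerable pattern: an invalid URL falls through to the
    # "must be a relative path, therefore safe" branch.
    if is_absolute_per_rfc(url):
        raise ValueError("absolute redirects not allowed")
    return url

payload = "https:// @evil.com/"
safe = redirect_target(payload)          # passes the check!

# Chrome (simulated): strip the stray space and navigate.
print(urlparse(safe.replace(" ", "")).hostname)  # evil.com
```

A spec-conformant validator and a lenient browser disagree on the same string, and the error-handling default ("invalid means relative") turns that disagreement into an open redirect.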

Code Vulnerabilities Put Skiff Emails at Risk- 1320

Paul Gerste - Sonar Source    Reference → Posted 2 Years Ago
  • Skiff is an email provider that really doesn't want XSS on their website. First, they sanitize emails using DOMPurify. After that, they do various transformations on the data, which is the crux of the issue. They also stick the email rendering into an iFrame and have a good CSP. Let's bypass all of them!
  • Mutation XSS (mXSS) is a type of XSS that results from the browser "fixing" markup in a way that changes its expected meaning. A good example of this can be seen here.
  • In Skiff, the content is run through DOMPurify then processed some more. During this processing, previously quoted emails are put into a thread, which inserts an empty div before the first element with the attribute data-injected-id=last-email-quote. So, what's the big deal with this small change?
  • In HTML, a div is invalid within an svg tag, so when the browser re-parses the markup it moves the entire div element outside of the svg. Many elements that are safe inside the svg are unsafe in the normal HTML context. Using some weirdness with style tags closing inside double quotes in the HTML context but not the SVG context allows for the smuggling of an image tag with an onerror event! This gives us XSS within the iFrame.
  • The iFrame for Skiff has three directives on it: allow-same-origin, allow-popups and allow-popups-to-escape-sandbox. The goal is to get code that we can execute on the page. To do this, they first noticed that images are rendered as inline blobs. Since blobs inherit the origin they are on, we can create an attachment with the necessary information in a blob. The blobs have a random UUID though. So, using a technique in a previous post, they use CSS to leak the UUID to themselves.
  • Once they know the UUID of the attachment, they put the attachment into a link for the victim to click in a follow-up email. By having the link contain target="_blank", this will be opened in another tab with the content being controlled by us.
  • The final thing was bypassing the CSP. The CSP contains script-src 'unsafe-eval' http://hcaptcha.com. This is known to have an XSS gadget. So, an attacker can simply use one of these existing functions to get the XSS working.
  • Overall, a pretty crazy XSS bug with a full CSP bypass and sandbox escape. To me, CSPs and iFrames seem inescapable. So, finding posts that circumvent these protections is pretty amazing.

SSRF Cross Protocol Redirect Bypass- 1319

Szymon Drosdzol - doyensec    Reference → Posted 2 Years Ago
  • Server-side request forgery (SSRF) is a popular and impactful vulnerability when exploited correctly. In order to prevent this attack, processing is done on URLs to ensure that no internal URLs are used. The title of this post says it all: switching protocols to bypass the protections.
  • One common bypass is reaching out to a public domain that then redirects to an internal IP. The authors of this post had found this pattern multiple times and recommended clients use the anti-SSRF library ssrfFilter, which appeared to solve the problem.
  • When messing around with the library, going from HTTP to HTTP was blocked for localhost redirects. However, going from HTTPS to HTTP (or vice versa) on localhost wasn't.
  • What happened? Within the request library, whenever the protocol changes the request agent is deleted to ensure the right client is used. However, the SSRF prevention is based on the agent's createConnection handler! So the mitigation never runs, since the hook is thrown away along with the agent.
  • Overall, a fairly crazy/weird bypass in the protections for SSRF issues. Sometimes, dynamic blackbox testing with weird things is more fruitful than seeing the code. There's no way anybody could have found this reading the code as a security researcher.
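The agent-hook failure mode can be modeled in Python (the real bug lives in the Node.js request library; every class and method name here is illustrative):

```python
# Toy model: the SSRF filter is attached to the agent, and a
# cross-protocol redirect silently replaces the agent.

class FilteringAgent:
    """SSRF filter hooked on the agent's connection routine."""
    BLOCKED = {"localhost", "127.0.0.1"}

    def create_connection(self, host):
        if host in self.BLOCKED:
            raise PermissionError(f"SSRF blocked: {host}")
        return f"conn:{host}"

class PlainAgent:
    """Default agent with no filtering at all."""
    def create_connection(self, host):
        return f"conn:{host}"

class Client:
    def __init__(self, agent):
        self.agent = agent

    def request(self, scheme, host, redirect=None):
        conn = self.agent.create_connection(host)
        if redirect is None:
            return conn
        new_scheme, new_host = redirect
        if new_scheme != scheme:
            # The bug: on a cross-protocol redirect the agent is
            # swapped out -- and the SSRF hook goes with it.
            self.agent = PlainAgent()
        return self.request(new_scheme, new_host)

# Same-protocol redirect to localhost: blocked, as expected.
try:
    Client(FilteringAgent()).request("http", "evil.com", ("http", "localhost"))
except PermissionError as e:
    print(e)                     # SSRF blocked: localhost

# Cross-protocol redirect: the filter is silently dropped.
print(Client(FilteringAgent()).request("https", "evil.com", ("http", "localhost")))
```

The general lesson holds outside Node: if a security hook lives on an object that the library replaces under some code path, the hook dies with the object.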

CVE-2022-4908: SOP bypass in Chrome using Navigation API- 1318

Johan Carlsson    Reference → Posted 2 Years Ago
  • The Navigation API is supposed to be a replacement for the old History API, solving the problems of SPA client-side navigations. The navigation.entries() function is used to access the history list for a given window session. Ideally, this will only give history entries for pages that have the same origin as the current page. Each history entry contains the full URL including fragments, making it ripe for attack.
  • When reading the specification, the author noticed that the API allows for the interception of navigation events. Immediately, the author saw good potential for abuse: it could violate SOP, redirect navigations and more.
  • The author tried some things but then found a post by Gareth Heyes with a tool. Using some ideas from this, they set up an iFrame, attached a hijacker to the iFrame's navigation API, then redirected it to about:blank. Upon doing this, the history array was returned!
  • The history entries were only returned for items that were same-site instead of same-origin. Still, getting XSS or a subdomain takeover and then using this could leak information cross-origin, which is pretty bad. For OAuth, which commonly has secrets in the URL, this would be a complete account takeover if somebody visited the website.
  • They decided to test this with an imaginary XSS on the GitLab forums against the real GitLab. By having XSS (again, not a real one) on the forums, the OAuth codes could be exfiltrated from the history information of the Navigation API. They also learned the difference between an eTLD and a TLD for same-site, which is not talked about as much as it should be.
  • To really drive home the point, they wanted to find something worse and real to exploit this. codesandbox.io hosts code that can be executed by others on subdomains. If a user was logged in to the site via an SSO provider like Github or Google, an attacker could access the history information with the OAuth codes from the history! Damn, that's real bad. It should be noted that a window reference is all that is needed; either through opening a tab or an iFrame.
  • The ticket has some insight into what happened. When copying information for the entries in the about:blank navigation, the developer did not consider that a cross-origin request could be made. Luckily enough, site isolation (which isolates processes for different sites) keeps the leak from going cross-site.
  • Overall, a super interesting vulnerability that actually has some real impact. Following your gut for features that look dangerous works out a lot of the time!