Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

How Broken OTPs and Open Endpoints Turned a Dating App Into a Stalker’s Playground - 1659

Alex Schapiro    Reference → Posted 10 Months Ago
  • Another rushed app launch, and another set of horrific vulnerabilities. Writing secure code is hard; it takes time and a lot of effort to get right. This is a prime example of what can go wrong: the author briefly reviewed a dating app called Cerca and found some bad issues.
  • First, they downloaded the app and opened it in a proxy. The app uses OTP-based sign-in (a code sent to your phone number). Looking at the response to the sign-in request, the OTP was simply included in the body. Obviously, this means you can access anyone's account with just their phone number. Yikes.
  • The website had an openapi.json file describing all of the endpoints on the site. The goal was to find a way to enumerate users, get their phone numbers, and compromise every account. The endpoint /user/{user_id} returns exactly this. Since the IDs were sequential, they could brute-force all accounts very quickly.
  • The data accessible to them was vast—sexual preferences, passport information, personal messages—all of the good stuff. This is a complete invasion of privacy. The company fixed the vulnerabilities once they were reported, but made no public announcement about it—this is likely to avoid a PR nightmare.
  • Privacy is hard to get correct and requires careful design. Should a user be easily identifiable and findable with just an ID? How about a phone number? These considerations depend on the app, but it's always something to think about.
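A minimal Python sketch of the two flaws described above. The server logic and names here are my own reconstruction for illustration, not Cerca's actual code:

```python
import secrets

def request_otp(phone_number):
    """Flawed sign-in endpoint: it generates an OTP for the phone number
    but also echoes it back in the response body."""
    otp = f"{secrets.randbelow(10**6):06d}"
    return {"status": "sent", "phone": phone_number, "otp": otp}  # BUG: leaks the OTP

def attacker_takeover(phone_number):
    """An attacker watching a proxy just reads the OTP out of the response."""
    return request_otp(phone_number)["otp"]

def enumerate_users(fetch_user, start=1, count=100):
    """Sequential IDs on /user/{user_id} make full enumeration trivial."""
    return [fetch_user(user_id) for user_id in range(start, start + count)]
```

The fix for the first bug is simply to never return the OTP to the client; the fix for the second is non-sequential IDs plus authorization checks on the endpoint.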

One-Click RCE in ASUS’s Preinstalled Driver Software - 1658

Mr Bruh    Reference → Posted 10 Months Ago
  • The author of this post bought an ASUS motherboard for their PC. Under the hood, it installed a bunch of software into the OS. One of these pieces of software was DriverHub, whose job is installing software from driverhub.asus.com via a background process.
  • The website uses RPC to talk to a background process running on the system, which hosts an application locally on 127.0.0.1, port 53000. Given that any website can interact with 127.0.0.1 on your local system, this was a pretty interesting attack surface. The ability to install arbitrary software would be pretty cool!
  • The driver had a check to ensure the origin was set to driverhub.asus.com. However, the origin check was flimsy: it appeared to be a startsWith check, so driverhub.asus.com.mrbruh.com was also accepted. After a long while of reverse engineering the .exe, they found a list of callable functions, including InstallApp and UpdateApp. UpdateApp would take a URL (again poorly validated) and run any executable signed by ASUS. The signature check would seem to rule out RCE.
  • The way UpdateApp works has some nuances though. Here's the flow:
    1. Saves the file with the name specified at the end of the URL.
    2. If the file is signed by ASUS, it is executed with admin permissions.
    3. If the file fails the signing check, then it does NOT get deleted.
  • The author looked into the packaging of the WiFi driver. It contained a ZIP file with an executable, a command script and a configuration file. The AsusSetup.exe from this package is a signed installer that uses other components inside of the zip file to install things. Based upon the information within the configuration file, it would execute SilentInstallRun without any signature checks. Additionally, adding the -s flag made this not even pop up a box for installation.
  • Here's the full exploit:
    1. Create a website with a domain matching driverhub.asus.com.* .
    2. The website makes a request to download a binary via UpdateApp. This is not executed right away.
    3. Call UpdateApp again with the custom AsusSetup.ini file.
    4. Call UpdateApp one final time to trigger the vulnerability.
  • Overall, a great find and a solid bug report!
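The origin-check flaw is worth seeing in miniature. Here's a Python sketch of the reported behavior (a reconstruction, not ASUS's actual code):

```python
def origin_allowed_buggy(origin: str) -> bool:
    # A prefix check accepts any attacker-controlled suffix domain
    return origin.startswith("driverhub.asus.com")

def origin_allowed_fixed(origin: str) -> bool:
    # An exact host comparison closes the hole
    return origin == "driverhub.asus.com"
```

With the buggy version, driverhub.asus.com.mrbruh.com passes the check, which is exactly the bypass the author used.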

Statistical Analysis to Detect Uncommon Code - 1657

Tim Blazytko    Reference → Posted 10 Months Ago
  • Statistical analysis is used all the time in computer science to solve hard problems. In particular, machine learning has hit a big boom lately. Sometimes simple statistical analysis can solve hard problems instead of the insanity of LLMs, and in this post we get one of those cases.
  • n-gram statistical analysis is common in linguistics. Simply put, it takes a grouping of tokens, such as words, and measures how likely that grouping is to occur. Based on this, it's possible to predict text by choosing the most likely next word.
  • The author has chosen to apply this technique to machine code for binary analysis. From testing, they found that 3-grams work well without overfitting. I'm guessing they tried several different n-gram sizes. Previous work has shown the ability to identify anomalies in code and to find patterns that help reverse engineer unknown ISAs.
  • To do this analysis, the author lifted the binary into a Binary Ninja intermediate language. Additionally, they removed registers and memory addresses to make the representation more general. From there, they analyzed a large number of binaries to get a ground truth. Now they can start analyzing new binaries to look for anomalies!
  • While looking into malware, they were able to identify control-flow-flattening obfuscation. Every function identified by the heuristic either was obfuscated or pinpointed a helper function managing the obfuscation state. In the Windows kernel, they analyzed the Warbird virtual machine. By finding an obscure pattern of code in the assembly, they were able to find VM handlers that were obfuscated inside the VM.
  • They also analyzed a mobile DRM system that plays encrypted multimedia content. Using the technique, they were able to identify areas obfuscated with Mixed Boolean Arithmetic, as well as usages of hardware encryption. This was enough to demonstrate they were looking in the proper area.
  • Stats don't lie! Statistics is useful for many things, including binary analysis. Great post on using techniques from other disciplines in the realm of security.
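The core idea can be sketched in a few lines of Python (my own toy version, not the author's tooling): build 3-gram counts over "normal" token streams, then score new functions by how rare their n-grams are:

```python
from collections import Counter

def ngrams(tokens, n=3):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def train(corpus, n=3):
    """Build ground-truth n-gram counts from a set of 'normal' token streams."""
    counts = Counter()
    for tokens in corpus:
        counts.update(ngrams(tokens, n))
    return counts

def anomaly_score(tokens, counts, n=3):
    """Average rarity of a function's n-grams: higher means more unusual code."""
    grams = ngrams(tokens, n)
    if not grams:
        return 0.0
    return sum(1.0 / (1 + counts[g]) for g in grams) / len(grams)
```

A function full of never-before-seen instruction sequences (e.g. MBA-style xor/rotate soup) scores near 1.0, while ordinary prologue/call/ret patterns score near 0.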

The Path to Memory Safety is Inevitable - 1656

Shaen Chang - Hardened Linux    Reference → Posted 10 Months Ago
  • Memory safety is a common topic when discussing programming languages. However, what is actually being talked about is often not well-defined. It can mean any of:
    • Compiler-based static analysis.
    • Compiler-based code creation that is expected to be memory-safe.
    • Runtime mechanisms like garbage collection and array bounds checking.
    • Security hardening, aka just trying to make the system safe against attackers.
  • Lisp, traditionally a memory-safe language, has features that may allow for memory corruption. C/C++, traditionally considered memory-unsafe languages, can also be written safely: static analysis tools, strict code review, and good runtime detectors go a long way. This demonstrates that memory safety isn't the sole responsibility of the compiler or the runtime - it's a coordinated effort. The author of this post comes from a project called Hardened Linux, whose goal is to create a version of Linux that is resistant to compromise.
  • The lifecycle of vulnerability has a few stages:
    1. Identifying a bug and assessing whether it can be exploited.
    2. Writing a PoC.
    3. Adapting the PoC into a stable exploit.
    4. Digital arms dealers integrating it into a weaponized framework.
  • Most of the effort in preventing vulnerabilities goes toward ensuring a bug doesn't exist in the first place. However, there are other ways to keep users safe besides removing bugs - we could just make them unexploitable.
  • Fil-C, a project from Epic Games, is a customization of the Clang/LLVM compiler that catches many memory-safety vulnerabilities through a combination of garbage collection and capability checking on pointer accesses. Buffer overflows, type confusions, use-after-frees, and many other classes of vulnerabilities can be prevented this way.
  • Another strategy is around mitigation techniques at the hardware or software level. NX, CET, etc. are good examples of this. Many vulnerabilities would have been harder to exploit with some of these protections, if not outright impossible. Every protection is another roadblock that makes it less likely that exploitation will occur.
  • Practically speaking, I like this take on simply rewriting software: "Rewriting software and libraries using memory-safe languages is an expensive endeavor. If you have thoroughly considered this approach and decide to proceed, please consider rewriting them in Lisp/Scheme." Great post on the practicalities of making systems hard to exploit!
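As a conceptual model (my own analogy, not Fil-C's actual implementation), capability checking means every pointer carries bounds and liveness information that gets validated on each access:

```python
class CheckedPtr:
    """Toy model of a capability-carrying pointer: every access is validated
    against the bounds and liveness of the allocation it points into."""
    def __init__(self, memory):
        self.memory = memory
        self.live = True

    def free(self):
        self.live = False  # the capability is revoked on free

    def load(self, index):
        if not self.live:
            raise RuntimeError("use-after-free caught")
        if not 0 <= index < len(self.memory):
            raise RuntimeError("out-of-bounds access caught")
        return self.memory[index]
```

In a raw-pointer world, both failing cases would silently read whatever happens to be in memory; here, they turn into deterministic errors.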

CVE-2025-30147 - The curious case of subgroup check on Besu - 1655

Antonio Sanso - Ethereum    Reference → Posted 10 Months Ago
  • Elliptic-curve cryptography is the basis of most signature verification, hence identity, in modern blockchains. Prior to the recent Pectra release, only the bn254 elliptic curve was allowed. Precompiles for curve pairing checks and multiplication/division were defined in previous releases for gas-efficient computation.
  • Invalid-curve attacks are a known issue for elliptic-curve systems. For non-prime-order curves, it's important that points lie in the proper subgroup; if a point is not in the correct subgroup, cryptographic operations can be manipulated or compromised. To check that a point is valid, there are two things to verify: it must be on the curve, and it must belong to the correct subgroup. If P is untrusted, these verifications are crucial.
  • In the Besu implementation of the EVM, is_valid_point was not checking whether the point was on the curve - it was only checking whether it was in the subgroup. So, can you create a point that lies in the correct subgroup but off the curve? This requires a very carefully chosen curve - in particular, an isomorphic curve. There are more details on the math, but I don't really understand them :)
  • Why does all of this matter, though? In this case, the main issue was a consensus failure. Since the Besu implementation was the only one with this particular issue, it would have diverged from the other clients, potentially leading to a chain fork. Besides this, they imply there are other security concerns but didn't specify them.
  • To me, uptime is not a huge concern compared to the benefit of multiple clients. If there's a loss-of-funds bug to be exploited in the EVM, it would have to appear in 66% of the clients; this is the benefit of client diversity. A good bug that was very specific to cryptography, nonetheless.
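To make the two checks concrete, here's a toy Python version over a tiny curve (y² = x³ + 7 over F_17, whose group has order 18). This is an illustration of the missing on-curve check, not Besu's code or the real bn254 parameters:

```python
# Toy curve: y^2 = x^3 + 7 over F_17; the full group has order 18.
p, a, b, n = 17, 0, 7, 18
O = None  # point at infinity

def on_curve(P):
    if P is O:
        return True
    x, y = P
    return (y * y - (x * x * x + a * x + b)) % p == 0

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O  # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):  # double-and-add scalar multiplication
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def in_subgroup(P):
    return mul(n, P) is O  # Lagrange: the order of P divides the group order

def is_valid_point(P):
    # Besu's bug: only a subgroup-style check ran; both checks are required.
    return on_curve(P) and in_subgroup(P)
```

Dropping the on_curve call is exactly the shape of the Besu bug: the subgroup arithmetic still "works" on an off-curve point, it just computes on a different curve than you think.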

Solana: The hidden dangers of lamport transfers - 1654

OtterSec - Nicola Vella    Reference → Posted 10 Months Ago
  • Lamports are the smallest denomination of SOL on Solana. Sending SOL to an account can cause major havoc to an executing program in certain situations.
  • They use a game called King of SOL as a demonstration. At a high level, whoever has donated the most SOL wins, and the program reimburses 95% of the funds to the previous king. However, several DoS bugs are lurking in this codebase.
  • In Solana, an account (a place where data is stored) needs a minimum balance of lamports to stay alive - storage has a cost, and this combats account-spam DoS attacks. Rent exemption is itself an attack vector, though. Consider a transfer from one account to another: if the source account would drop below the rent-exemption minimum, the transaction will always fail.
  • Accounts in Solana have a few properties: readable, writable, and executable. An account that is executable cannot receive SOL via set_lamports. So, forcing a transfer to happen this way will also lead to a DoS.
  • Some accounts are silently downgraded from writable to read-only. This happens for reserved system programs/accounts. In Anchor, marking an account as writable is common. By combining both of these, we can create situations where a transfer of lamports will always fail.
  • Overall, this is an interesting article on transferring lamports and the security consequences associated with it. I didn't know all of these!
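A simplified Python model of the failure modes above. The rent-exemption constant and the rules are illustrative stand-ins, not Solana's exact runtime logic:

```python
RENT_EXEMPT_MINIMUM = 890_880  # illustrative lamport value, not the real rate

class Account:
    def __init__(self, lamports, executable=False, writable=True):
        self.lamports = lamports
        self.executable = executable
        self.writable = writable

def transfer(src, dst, amount):
    """Each branch models one of the DoS conditions described in the post."""
    if dst.executable:
        raise ValueError("executable accounts cannot receive lamports this way")
    if not src.writable or not dst.writable:
        raise ValueError("account was silently downgraded to read-only")
    if src.lamports - amount < RENT_EXEMPT_MINIMUM:
        raise ValueError("source would drop below the rent-exempt minimum")
    src.lamports -= amount
    dst.lamports += amount
```

The attacker's goal in each case is the same: arrange accounts so one of these branches always fires, and any instruction that performs the transfer is permanently bricked.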

Post-Mortem: PT Collateral Pricing Incident - 1653

Loopscale    Reference → Posted 10 Months Ago
  • Loopscale is a modular lending protocol deployed on Solana. It recently suffered a $5.7M hack, which affected many of the platform's users. So, what was the bug?
  • In Solana, all programs and accounts being interacted with must be specified beforehand. A program's behavior can drastically change if these addresses are not properly checked. In this situation, a cross-program invocation was being made to the RateX vault, but the vault account passed to the call was not correctly verified.
  • I'm not sure what value was supposed to be returned from the RateX contracts, but it was something important for tracking assets. From reading tweets, it appears that the prices were being manipulated. Of course, if you can specify the incorrect price, you can perform trades at terrible price points to steal money.
  • Otherwise, the program had a good design. The exploit was limited to RateX principal tokens, which meant no other vaults or lending positions were affected. Market isolation and collateral segregation really helped reduce the impact. In the future, they are adding time-based limits, exposure limits, and manual approval for giant loans, giving the protocol further control. Finally, several updates will be gated behind a multisig.
  • Going forward, they will expand their audit coverage. Small changes can have devastating consequences, so to combat this issue, they plan on having all code reviewed before launching. They also plan on launching a bug bounty program. Overall, an interesting report and set of takeaways from a real world hack.
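The missing check can be sketched like this. The names and the expected address are hypothetical, and this models the on-chain logic in Python purely for illustration:

```python
EXPECTED_RATEX_VAULT = "RateXVault1111111111111111111111"  # hypothetical address

class Vault:
    def __init__(self, address, price):
        self.address = address
        self.price = price

def collateral_value_buggy(vault, amount):
    # BUG: trusts whatever vault account the caller supplied
    return vault.price * amount

def collateral_value_fixed(vault, amount):
    # Verify the CPI target before trusting its price data
    if vault.address != EXPECTED_RATEX_VAULT:
        raise ValueError("unexpected vault account")
    return vault.price * amount
```

With the buggy version, an attacker supplies their own "vault" reporting an inflated price and borrows against worthless collateral; the fixed version rejects the substituted account outright.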

AI Slop Is Polluting Bug Bounty Platforms with Fake Vulnerability Reports - 1652

Sarah Gooding    Reference → Posted 10 Months Ago
  • Bug bounty programs allow security researchers to disclose vulnerabilities so they can get patched. Many of these programs pay money for reported issues. Given that there's money on the line, there's an incentive to seek a payout even when there's no real vulnerability.
  • LLMs are great at generating content. Unfortunately, they can generate content about anything, including bug bounty reports. Security is very contextual, and subtle things can change whether something is exploitable or not. Because of this, incorrect LLM-generated reports are becoming a major issue in the security realm.
  • The problem with these reports is that, at a glance, they seem legitimate. Disproving an issue requires a large amount of context on the codebase and a deep understanding of security. Historically, we have assumed "good faith" research, but this is starting to be abused, and triaging these issues takes a large amount of time.
  • Some projects do not have the bandwidth to handle these security reports. So, they end up just paying a small bounty to avoid the delay and PR fallout. It's just cheaper to pay for the bug than hire an expert to perform the true analysis.
  • In the case of curl, they have a large number of LLM-generated reports to handle. curl has very technical folks and is able to deal with them; they are usually able to identify fake reports, but it still takes time. If this keeps up, bug bounty programs may add restrictions on the users submitting them.
  • What's the solution? Detectors and verification, in my opinion. A few detectors:
    • It's common for these reports to omit reproduction steps, making the vulnerability impossible to reproduce. So, a hard requirement for PoCs that actually run would be useful.
    • It's common for reports to link to illegitimate code. If the code being linked doesn't exist, the report is likely trash.
    • Needlessly complex descriptions of the vulnerability.
    • The distinct styling of ChatGPT and other LLMs: Markdown with a lot of bullets.
  • On the other side is verification. Platforms like HackerOne need better account verification. Once an account has been flagged for spam, the platform should ban the account, the IP, and the email going forward - sort of like cheat-detection repercussions on chess websites. Eventually, the beg-bounty people would likely stop reporting things altogether.
  • This is a hard problem to solve but it'll eventually be worked out!
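Toy Python versions of a few of the detectors above. The thresholds and patterns are invented for illustration; a real triage pipeline would combine signals like these with human review:

```python
import re

def slop_signals(report, known_paths=()):
    """Return a list of heuristic red flags for a bug bounty report."""
    signals = []
    # 1. No reproduction steps / PoC mentioned anywhere
    if not re.search(r"(?i)steps to reproduce|proof of concept|poc", report):
        signals.append("no reproduction steps")
    # 2. References source files that don't exist in the project
    for path in re.findall(r"`([\w./-]+\.(?:c|h|py|go|rs|js))`", report):
        if known_paths and path not in known_paths:
            signals.append(f"links nonexistent file: {path}")
    # 3. Bullet-heavy Markdown styling typical of LLM output
    if len(re.findall(r"^\s*[-*]\s", report, re.MULTILINE)) >= 10:
        signals.append("bullet-heavy LLM styling")
    return signals
```

None of these is conclusive on its own, but a report tripping several at once is a strong candidate for the slop pile.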

XNU VM_BEHAVIOR_ZERO_WIRED_PAGES behavior allows writing to read-only pages - 1651

Ian Beer    Reference → Posted 10 Months Ago
  • The proof of concept starts by writing a bunch of A's to a file that is owned by root and read-only. Next, they run a C program that calls mlock on that file. The file is still read-only and owned by root, but now contains a bunch of 0's.
  • VMEs define the privileges a particular map has over a region's vm_object. The behavior VM_BEHAVIOR_ZERO_WIRED_PAGES can be set by a task on any vm_entry; however, there are no permission checks on this, so the zero_wired_pages flag gets set regardless. In vm_map_delete, the unwire path looks up the page of the underlying object and zeroes part of it out. Again, no permissions are checked.
  • The next challenge is getting an interesting page wired. mlock is a wrapper around mach_vm_wire_kernel, which can wire pages. Using this, it's possible to mmap an interesting part of a file, mark it with VM_BEHAVIOR_ZERO_WIRED_PAGES, mlock the page, and have parts of the data zeroed out.
  • A pretty classic, yet complicated to exploit, permissions issue. Neat!

Bug Disclosure: Reentrancy Lock Bypass - 1650

Bunni    Reference → Posted 10 Months Ago
  • BunniHub is a pool contract. There was a vulnerability that allowed calling back into this code, via a user-defined hook, while the pool was in an unintended state - classic reentrancy. Inevitably, this would have led to lost user funds. Pashov Audit Group found this reentrancy vulnerability during their audit.
  • To mitigate the original issue, they introduced a pair of functions to prevent reentrancy: lockForRebalance and unlockForRebalance. The rebalance was locked before the order and unlocked once the order executed. These locks are per contract, not per pool.
  • A Bunni pool can have a hook contract, registered by anyone, that triggers this functionality. Since the locks are global, an attacker can create their own hook contract, call it, and disable the reentrancy lock themselves. Now the manipulation works the same as before and leads to loss of funds. Cyfrin, a web3 auditing company, found this bypass.
  • To patch the issue immediately, they created a whitelist of who is able to execute rebalancing actions. The attack was prevented, theoretically. To be cautious, they asked Cyfrin whether any other reentrancy attacks were still possible, and Cyfrin did more research. They found a similar vulnerability where interacting with a malicious ERC-4626 vault broke the pool's accounting, allowing withdrawal of more assets than should be possible. To resolve this new issue, all functionality was paused until a proper fix could be made.
  • The contracts were audited by Pashov Audit Group and Trail of Bits, and they are currently being audited by Cyfrin as part of the Uniswap Foundation Security Fund. Patching vulnerabilities is hard; patches need to be taken really seriously when they're suggested. Otherwise, you'll end up with more issues like this.
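A toy Python model of the bypass and the whitelist patch (my own simplification of the contract logic, not Bunni's Solidity):

```python
class BunniHubModel:
    """Toy model: one contract-wide rebalance lock. Pre-patch, any registered
    hook could toggle it; whitelist=None models the vulnerable version."""
    def __init__(self, whitelist=None):
        self.locked = False
        self.whitelist = whitelist

    def lock_for_rebalance(self, caller):
        self._authorize(caller)
        self.locked = True

    def unlock_for_rebalance(self, caller):
        self._authorize(caller)
        self.locked = False

    def _authorize(self, caller):
        # The patch: only whitelisted callers may touch the rebalance lock
        if self.whitelist is not None and caller not in self.whitelist:
            raise PermissionError("caller not whitelisted for rebalancing")
```

Because the lock is contract-wide rather than per pool, the pre-patch model lets an attacker's own hook clear a lock that some other pool's rebalance depends on; the whitelist closes that path.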