Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Mozilla VPN Clients: RCE via file write and path traversal- 1843

Trein - HackerOne    Reference →Posted 2 Months Ago
  • Mozilla's VPN client software has a live_reload command available over a websocket. This command reaches out to a server and writes the file to /tmp on the local machine.
  • This code contains a classic directory traversal. The path for the remote server is the same as the one that is written to. By adding a ../, it's possible to overwrite DLLs on a Windows system. This would, in all likelihood, lead to RCE on Windows.
  • The exploit required that staging servers be enabled, which seems to be a non-default setting. A classic vulnerability in 2025. Crazy!

Using Mintlify to Hack Fortune 500 Companies- 1842

eva, hackermon & MDL    Reference →Posted 2 Months Ago
  • Mintlify is a b2b software as a service documentation platform that allows companies to make documentation via MDX files then host them with styling. Some of their clients include Discord, Twitter, and Vercel.
  • The MDX to render docs is provided by their customers. The author was curious on how the pages were rendered on the server-side for static page generation for search engines/bots. MDX is effectively JSX (react) combined with markdown. So, you can literally add js expressions in the markdown. So, they added a simple payload to just eval things on the server and it worked! After some work, they were able to extract all of the environment variables from a serverless environment. This attack could be used to enable mass XSS, defacement, and other issues. Yikes!
  • The route /_mintlify/static/[subdomain]/ is used to allow static images to be loaded. Surprisingly, websites will load from other domains! The author created an SVG containing an XSS payload and shared the link https://discord.com/_mintlify/static/evascoolcompany/xss.svg on Discord. This is XSS on everything now. This becomes particularly problematic because cookies are unlikely to be scoped per subdomain. Mintlify patched the targeted XSS via ensuring that it was an absolute path. This was vulnerable to a directory traversal though.
  • On top of these issues, they found an IDOR that exposed GitHub repo fields for private repos to the GitHub API. Additionally, the previously deployed versions on Vercel were accessible via direct branch references. So, the exploit could be run on here still. After all of this effort, they were rewarded with 5K from Mintlify.
  • There was another hacker involved with this: MDL. Instead of just popping an alert via XSS, they wanted to create MORE impact on the specific platforms. Some companies treat third-party vendors as untrusted input, and some grant them admin access to everything. So, they wanted to test the XSS from above to take this further.
  • Some companies had an extensive CORS policy that whitelisted all *.company.com. In this case, it's possible to send requests on the user's behalf on the website. This was made even worse by companies that scoped the authentication cookies to the entire domain namespace. Finally, most companies didn't bother configuring CSP's because it's just documentation.
  • In the other camp was explicit distrust. The best defense was explicit domain separation. Some companies didn't scope cookies to the entire domain, limiting the impact too. They planned to disclose on their websites, based on the findings, who was vulnerable and who paid out. However, after getting approval, they got blasted by lawyers with threatening letters, so they decided to anonymize it.
  • I have always found bugs in third-party components being reported to bug bounty programs to be hit or miss on payouts. On one hand, the goal of a bug bounty program is to find anything that can negatively affect customers. On the otherhand, the company did nothing wrong and is being punished for a bug in somebody else's code. If we go on impact and impact alone, it makes sense to pay out. Otherwise, no research would be done into smaller products/open source things.

ORM Leaking More Than You Joined For- 1841

Alex Brown - elttam    Reference →Posted 2 Months Ago
  • Beego is a popular Object Relational Mapper (ORM) in Golang. Its filtering syntax is heavily based on Django ORM. Because of these similarities, techniques from the Django ORM article plORM worked on Golang as well. The main requirement was the ability to control the filterExpression fully.
  • They decided to check out GitHub for vulnerable projects using SourceGraph. With a simple search, they ended up on Harbor. A user-controlled query parameter was being concatenated to the key of a filter with __icontains field. By using email as the input, it would return all email addresses. Additionally, it would be possible to filter based on internal sensitive fields like password and salt.
  • The Harbor team tried patching this by limiting what fields could be put in the filter. The authors of this post noticed that if a__b was used, then b would be parsed by the ORM but the filtering engine would see a. The second patch tried to limit the amount of __ in the filter. This was bypassed by using the concatenation described above to only have a single __ in the input, but actually use two in the real filter.
  • The authors claim that these issues are common in their client engagements and in bug bounty targets. Overall, a good post on an ORM leak issue that somewhat resembles NOSQL injection.

One Bug Per Day- 1840

onebugperday    Reference →Posted 2 Months Ago
  • The website takes a previously known vulnerability, and it explains it. You can "roll" random bugs, which is pretty neat. If you're looking to learn about new vulnerabilities in existing projects, this is a great way to learn.

Bugs that survive the heat of continuous fuzzing- 1839

Antonio Morales - GitHub    Reference →Posted 2 Months Ago
  • OSS-Fuzz is most of the most impactful security initiatives for open source software. Over the years, it has found thousands of vulnerabilities. Continuous fuzzing isn't a silver bullet. The author of this post dissects why many bugs still exist in these projects, regardless of the large amounts of fuzzing they have done. Since they have reviewed many of these projects and found bugs, they have a good insight into this to create a process for writing good fuzz tests.
  • In 2024, the author of this post found 29 vulnerabilities in the default multimedia framework for GNOME desktop GStreamer. The public OSS-Fuzz statistics show that only two active fuzzers had a code coverage of 19%. For a project like OpenSSL, it has 139 fuzzers; for bzip2, it has 93% code coverage. OSS-Fuzz requires human intervention to write new fuzzers for uncovered code, and to be serious about doing so.
  • Poppler, the default PDF parsing library on Ubuntu, is used to render PDFs. It has 16 fuzzers and a coverage of around 60%. Kevin Backhouse found a 1-click RCE in this. How? They exploited a dependency that wasn't included in the coverage metrics. Software is only as secure as its weakest link.
  • Exiv2 is a library used for reading, writing, deleting and modifying metadata information from images. It's used by GIMP and LibreOffice. In 2021, the Kevin Backhouse added OSS-Fuzz support to Exiv2, finding multiple vulnerabilities along the way. In 2025, several vulnerabilities were discovered in the project; this is after 3+ years of fuzzing. The discovered vulnerabilities were not in decoding but in encoding. Since the encoding isn't looked at nearly as much for security, this led to a hole in the system.
  • The first step of five is to prepare the code for fuzzing. Removing checksums and reducing randomness are examples of this. Not all code is written in a way that is easily fuzzable. So, it may take time to do this. They have an entire article already dedicated to this step.
  • Step 2 is increasing code coverage to above 90%. This mainly falls into two categories of work: adding new fuzzers and creating new inputs to trigger corner cases. To hit the magical 90% number, sophisticated techniques like fault injection will be needed.
  • Step 3 is improving context-sensitive coverage. Most fuzzers track code coverage at the edge level, where an edge is the transition between two basic blocks. This works well but has a major limitation: it doesn't cover the order in which these blocks are executed. Context-sensitive coverage not only tracks the edges executed but also the edges directly before them. This can be 2, 3, or N edges. AFL++ has support for this. With edge coverage, it's impossible to hit the 90% number though.
  • Step 4 is adding Value coverage. This defines the range of values a variable can take. This is necessary because an edge may have many possible states that enter into the basic block, where only a few of them have bugs. Because of the large amount of possible values, these should either be A) done on strategic values that are security sensitive or B) done in buckets of values to limit large amounts of states.
  • Even with these optimizations, many bugs still fit through the cracks. They have noticed that big input cases tend to slip through. This is because fuzzer inputs sizes are capped or it times out. Some vulnerabilities require a long time to trigger, such as a reference-counter integer overflow. The reality is that not all bugs can be found by fuzzers.
  • Fuzzing is powerful, but it is really an art rather than a fire-and-forget solution. Overall, this was a great post on the effectiveness of fuzzing and where it sometimes falls short in the real world.

Defending LLM applications against Unicode character smuggling- 1838

Amazon    Reference →Posted 2 Months Ago
  • AI applications accept text and then act based upon that. If text is hidden to the user but consumed by the AI, this becomes a problem. When code executes in a multitude of languages, from Python to Java to C, these differences are important.
  • Unicode Tag blocks are a range of characters that span from U+E0000 to U+E007F used for formatting tag characters for emojis that mirrors ASCII. An example of this is adding text to a flag.
  • For example, let's take an email client set to assist users by reading and summarizing emails. A bad actor could embed a malicious instruction into an ordinary email. When the email is processed, the assistant might only summarize the embedded instruction but then execute the hidden data, such as deleting the entire inbox.
  • Because of issues around these characters, it's common to strip them. Removing sets of characters in code is complicated because of issues around nesting. This approach is similar to HTML sanitization. Overall, a good post on a new attack vector affecting AI applications.

Returndata Bombing RAI's Liquidation Engine- 1837

Trust    Reference →Posted 2 Months Ago
  • RAI (Reflexer Finance) is an ETH-backed stable asset with a managed float regime. DAI pegs to $1 through governance-controlled mechanisms, RAI only uses ETH as collateral. With this, it contains an algorithmic controller to manage the redemption price.
  • On top of this, it's a lending protocol. When taking out a loan, the collaterialization ratio must not go below a certain threshold. If this is the case, then the loan is liquidated. This is an important aspect of keeping the system solvent and safe.
  • RAI introduced a feature called Safe Saviours. These are contracts that attempt to rescue an underwater position during liquidation to inject additional collateral. The contracts are written by the community, as long as they meet certain requirements, but must be approved by Governance.
  • The liquidation engine calls saveSAFE() on the Safe Saviours contract on liquidation. If anything goes wrong then the error is caught via a try/catch block and the liquidation happens anyway. It's important that loans are always liquidatable. Otherwise, the protocol would be left with a lot of bad debt and lose money.
  • The error handling has two peculiar things. First, the amount of gas is NOT passed to the contract. By default, this means that 63/64 of the remaining gas is sent in order to allow the contract to finish execution even if callee contract uses all the gas. Second, the catch clause will emit an event with the revert reason without a limit on the amount of data. Returning data and event emissions both use gas.
  • The issue is that an attacker can return a large amount of data from the contract call and force an out of gas error to occur. In practice, this violates a key invariant of lending protocols that all loans must be liquidatable. The bad debt would accumulate over time and would be permanent in the protocol.
  • When this was reported, RAI claimed the vulnerability was out of scope because it required the contract to be whitelisted by Governance. Although this is true, there is a process for this to happen that is explicitly documented and encouraged. Hence, I effectively see this as unprivileged.
  • Immunefi initially ruled this as a medium citing it as gas griefing. After being closed again by RAI, Immunefi closed as None impact. From the arguments I see from the article, I tend to side the researcher. For all intents and purposes, this functionality was exploitable and effectively unprivilged through normal flow. It led to protocol unsolevnecy and the ability to make bets to guarantee profit.
  • To find the vulnerability, they looked for dangerous external calls. Additionally, they noted that try/catch is commonly mishandled in Solidity because it gives a false sense of security on error handling. According to the author, developers should use ExcessivelySafeCall for arbitrary untrusted calls to limit return data, cap gas on calls to external contracts and treat error messages as untrusted input.
  • From the reporting side, they have some good points. First, Immunefi mediation is non binding, as previous rulings can be changed. Second, Governance rubber stamps are not a security boundary. If the approval process can't detect the bug, then it's not a mitigation. Overall, a pretty good post on calldata bombing.

The Ultimate Guide to the Top for Security Researchers: Setting Sail- 1836

Shealtielanz - Sigma Prime    Reference →Posted 3 Months Ago
  • This article is the start of a four part series about the process of being a security researcher in web3. This first part is Setting Sail — The Intro & Foundation. It starts with defining what "success" is. They mention doing well in contests, earning large bounties, working for a big security firm and doing private audits.
  • They go into age old ideas around motivations, and goals. You need to know your "why" to do well. Having goals for your why is helpful for making it to the next step. They have three core pillars: relationships, skill set and social media presence.
  • For social media presence, the claim is around it opening doors that other things cannot. Building influence, either by sharing knowledge, lessons or big wins, gives you opportunities. From there, it's about building the relationships; it's not what you know it's who you know. With a combination of meeting people and being on social media, you will start to get job offers, opportunities to collaborate and other types of opportunities. They claim to go to discord channels, DMs with good questions, conferences and other things.
  • The most important thing is competence. Being able to find bugs and exploit vulnerabilities should be valued above all else. Read articles, do contests... hone your skills and keep improving. If you don't have skills then the relationships don't matter.
  • About the skills... the author says to focus on niche things above breadth. "The more you niche, the less you compete, and the more you earn." The next thing is around staying active. This is a marathon, not a sprint. Still, it's a race though; the faster you run compared to others, the better you will do. Just don't burn out. "Discipline sustains motivation when it fades." The next tip is around collaboration. Working in teams can expand your thinking. It can help you find things that you missed as well. I enjoy working at a company to learn from others.
  • The final section is around traps and what to avoid. I personally find this section to be the most valuable. First, they mention not being kept up by pride. As you grow stronger, it's easy to feel like you've made it and loss your edge. Enjoy your winds, set new goals and repeat the same success as before.
  • Another big one is around consistency. The turtle wins the race because it goes the whole time. If you're inconsistent, you will never get good. Keep up to date on the opportunities. This could be learning new languages, new bug bounties and many other things. It's a fast paced game!
  • The final one is not taking chances, which is summed up with a good quote: "“A ship in harbor is safe, but that is not what ships are built for." Whether it's pride, time issues, being scared of failure... take changes that make sense. It won't always work out but fortune favors the bold! Overall, a good post on breaking into the security space for Web3.

External calls are dangerous- 1835

Alex Lazar    Reference →Posted 3 Months Ago
  • In both EVM and Solana programs, a common security issue is not validating external calls properly. This can led to DOS issues, reentrancy or loss of funds bugs. This article has a list of 7 issues to consider.
  • There are many ways that calls can fail. "If you don't know how it can fail, you don't know enough" is a great title for this section. In EVM, contracts without receive() cannot receive ETH. In Solana, there are multiple ways this can happen that was already documented in another post. Apparently, ATA creation in Solana fails if the address has already been created.
  • In EVM, gas griefing can be used to make the main function work but have external calls fail. If the errors are not handled correctly then partial state updates can occur. This isn't possible in Solana because it will always completely rollback on errors.
  • In EVM, reentrancy is really the output of bad validation of the state and the caller. Sometimes, you do need to make calls to an arbitrary callee though. Solana doesn't have this specifically but it DOES have issues with account reloading. In Solana, once a runtime account has been loaded in an instruction, they are not automatically reloaded after making a CPI. So, if the CPI changes the data you will be left with accounts with stale data.
  • The final bug is the dreaded arbitrary CPI in Solana. This is when the address of a program being used for a CPI is not properly validated. I've talked a lot about this here already. This can be used to skip function calls, such as token transfers or be used to abuse permissions. Regardless, they can be very bad.
  • Overall, an interesting piece of literature. Most posts are very focused on one specific set of technology; I enjoy the back and forth between EVM and Solana. It's good to have this stuff documented!

Reverse Engineering EVM Storage- 1834

wavey    Reference →Posted 3 Months Ago
  • Ethereum storage is very simple: a 32-byte slot with 32-byte values. Mapping these slots back to meaningful variable names and use cases is difficult to do though. This post is about going from storage back to the usage of the data. EVM itself has no concept of variable names. To begin with, everything just starts from slot 1. Maps and other dynamic types compute the slot number using a hash, which is unrecoverable of course.
  • To figure out this, we need the execution information of a given transaction. debug_traceTransaction can be used to replay the transaction and return trace data. Notably, with prestateTracer, we can get a summary of the before and afters of slots. Sadly, this is only the final state though.
  • structLogs is a trace format of every single EVM step. It includes opcodes, stack, memory and everything else. From this, the author extracts the SSTORE for immediate writes and SHA3 operations for preimages of mapping slots. This is much more powerful than the previous tracer but is too bulky. A mixture of these is used to make it faster.
  • delegateCall allows contract B to write to contract A's storage, as long as the delegate call was originally made from contract A. structLogs doesn't include the address field on each step. So, the stack must be manually tracked to know the code context that is being written to.
  • The strategy for mapping SHA3 calls to get the preimage of a hash works well. In some cases the compiler will optimize the code>SHA3 away and just use a constant. In this case, they parse the source code to get the value of it.
  • Their code needed to handle the decoding of all types with care, nested writes and proxy detection for Solidity. Vyper had it's own differences in writing data. The constructor also had some weird quirks in it. They created SlotScan to make this easier to see. Pretty sick stuff!