Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

A New Era of macOS Sandbox Escapes: Diving into an Overlooked Attack Surface and Uncovering 10+ New Vulnerabilities - 1537

Mickey Jin    Reference → Posted 1 Year Ago
  • In macOS, most processes run in a restricted sandbox with the com.apple.security.app-sandbox entitlement. These sandbox restrictions are applied before the app's main function runs, via containerization logic in the dyld library. Files dropped by the app are quarantined by default, and forked processes inherit these properties.
  • Service sandboxing, used by Apple daemon services, invokes the sandbox manually via app-specific sandbox properties. In this mode, the quarantine functionality has to be invoked manually rather than being applied automatically. A common sandbox escape in the past was targeting Mach services running in the System or User domains. The PID domain appears to be accessible to all sandboxed apps, giving us more attack surface without extra entitlement checks.
  • The Application service type is a newer kind of XPC service. When an application loads, its XPC services appear to be registered automatically. The key insight is an access control issue: sandboxed applications are able to call this API with lots of permissions. To exploit this, they enumerated all applications exposing XPC Application services registered in the PID domain.
  • SystemShoveService.xpc holds the powerful com.apple.rootless.install entitlement to work around System Integrity Protection (SIP). The XPC service does not check the incoming client. This allows us to drop an app folder that will not be quarantined, or a DMG file to be executed. They have a separate blog post on this one.
  • storagekitfsrunner only had a single function, which took in an executable path and arguments. Obviously, this leads to the ability to start a process that isn't sandboxed and escape.
  • Many of the other vulnerabilities follow this pattern: call XPC from the sandbox to execute a privileged action. "Privileged" in this case is interesting, though. If there is any file handling, these files will be created without quarantine, making them directly executable. Figuring out which apps were vulnerable required a ton of reverse engineering.
  • Another one was an app with the Full Disk Access TCC entitlement. Its purpose is to give an app complete read/write access to the file system, which is done by calling sandbox_extension_issue_file under the hood to issue a file token. This pattern of proxying permissions from an XPC app to the underlying app is somewhat common but can suffer from a confused deputy problem. Another attack uses this to access Photos and the camera directly, bypassing a TCC permission check.
  • Several of the issues required funny symlink or folder creations to exploit properly. All in all, they ended up with 10+ vulnerabilities, with 5 still in the patching queue. Once you find a new attack surface that seems unexpected, hit it hard until all of the bugs are gone! Good post on the root causes of the vulns and how they were exploited.
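
The quarantine behavior discussed above is visible as an extended attribute on downloaded files. As a toy illustration, here's a hedged Python sketch that parses a com.apple.quarantine attribute value. The field layout (hex flags, hex timestamp, agent name, UUID) is based on commonly observed values rather than official documentation, and the sample string is made up:

```python
# Sketch: parsing a com.apple.quarantine extended-attribute value.
# The attribute is a semicolon-separated string; the field meanings here
# are an assumption based on commonly observed values, not Apple docs.

def parse_quarantine(value: str) -> dict:
    """Split a quarantine xattr value like '0083;60f1c3a1;Safari;UUID'."""
    fields = value.split(";")
    return {
        "flags": int(fields[0], 16),       # quarantine flags (hex)
        "timestamp": int(fields[1], 16),   # download time (hex epoch seconds)
        "agent": fields[2],                # app that created the file
        "uuid": fields[3] if len(fields) > 3 else None,
    }

sample = "0083;60f1c3a1;Safari;9E2B0A4F-1C2D-4E5F-8A9B-0C1D2E3F4A5B"
info = parse_quarantine(sample)
print(info["agent"])  # Safari
```

A file created by one of the vulnerable XPC services simply lacks this attribute entirely, which is why it is directly executable.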

What Are My OPTIONS? CyberPanel v2.3.6 pre-auth RCE - 1536

DreyAnd    Reference → Posted 1 Year Ago
  • CyberPanel is a free web hosting control panel. Under the hood, it's a fairly simple Django app. Its main purpose is setting up services like FTP, SSH, etc. on a box. Of course, it has a login screen to prevent just anyone from doing this.
  • While reviewing the code, they noticed that authentication checks were added manually to every API endpoint instead of globally through middleware. From reading about previous findings, they learned of an earlier authentication bypass for file upload caused by a missed authentication check, and determined that command injection and authentication issues were likely bug patterns. Do your homework, kids!
  • Using Semgrep, they stumbled across upgrademysqlstatus. This endpoint is missing authentication and executes arbitrary commands on the OS. The best of both worlds!
  • Unfortunately, the command injection didn't work because of a recently added secMiddleware that was validating inputs to prevent these types of issues. After fuzzing it and trying some Linux tricks, they didn't find anything. However, they did notice a funny design flaw in the input validation!
  • secMiddleware only checked the inputs IF the request was a POST request. However, a Django view can be reached with more than one HTTP verb. So, by making an OPTIONS request, the validation is bypassed. This means we have a successful pre-auth command injection. They found another variant of this as well.
  • Good write-up! I like that the author did their homework on previous bugs in order to identify pervasive bug patterns in the code base. The bypass for the input validation was quite funny as well.
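
The flaw boils down to a middleware that only inspects one verb while the view runs for others. A minimal Python simulation of that logic (the function names and dangerous-token list are illustrative, not CyberPanel's actual code):

```python
# Minimal simulation of the secMiddleware flaw: validation only fires for
# POST, but the handler runs for other verbs too.

DANGEROUS = [";", "&&", "|", "`"]

def sec_middleware(method: str, body: str) -> bool:
    """Return True if the request is allowed through."""
    if method == "POST":                       # the flaw: only POST is checked
        return not any(tok in body for tok in DANGEROUS)
    return True                                # OPTIONS, PUT, ... skip validation

def upgrademysqlstatus(method: str, body: str) -> str:
    if not sec_middleware(method, body):
        return "blocked"
    return f"os.system({body!r})"              # stand-in for command execution

print(upgrademysqlstatus("POST", "status; id"))     # blocked
print(upgrademysqlstatus("OPTIONS", "status; id"))  # injection reaches the shell stand-in
```

The fix, of course, is to validate every verb (or reject verbs the view doesn't need) rather than special-casing POST.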

Fuel Network Argument Parsing Vuln - 1535

minato7namikazi    Reference → Posted 1 Year Ago
  • The Fuel Network ran an Immunefi contest for the entire network. From their custom VM to compilers to the bridge... lots of attack surface. The author of this post dove into the compiler and contract runtime.
  • When contract A calls contract B, there is an ABI to enforce type safety. The arguments are encoded into raw bytes in order to make the actual call into the contract. In the bytecode of the callee contract, there is implicit type information based upon the source code. One side has compile-time checks and the other has runtime verification of the values.
  • In the EVM, extra data at the end or in the middle of a structure is ignored. If the type is completely incorrect (like a string where an integer should be), then it reverts. This is more a Solidity compiler added protection than an actual protection added by the VM.
  • In Fuel, if an extra value is added to a struct, it's not ignored - it corrupts the next value! For instance, suppose a struct only had a 32-byte value called key, but we passed in an extra u8. The value of the u8 is just added into the next type instead of being ignored. All types keep their size but can be changed to unexpected values. I'm guessing that this corruption happens after the verification of the type, but I'm not entirely sure from the post.
  • Why is this useful? The boolean type is usually guaranteed to be either a 0 or a 1. Since the compiler knows this, it does checks in ways that may be bypassable. The author provides an if statement with two branches, option == true and option == false, without an else clause. Since a boolean value of 100 doesn't fall into either branch, we can break logic that assumes a boolean is binary.
  • An additional impact is that a boolean could be stored with a non-zero value in storage. This could cause a DoS when loading the value or cause more corruption as well. An interesting impact is that since this happened in the compiled code, all deployed code would have to be redeployed with a new version of the compiler.
  • I'm slightly confused on why the corruption must happen. From my end, it appears that we could just make a boolean any value. My guess is that there is verification in the compiled code that happens first then the decoding happens that corrupts the values. Interesting bug and thanks for sharing!
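
The corruption can be modeled as sequential decoding of a raw byte buffer: an extra byte appended by the caller shifts what the callee reads into later fields. A toy Python sketch with an invented layout (a 32-byte key followed by a bool), not Fuel's actual encoding:

```python
# Toy model of the decoding issue: fields are read sequentially from raw
# argument bytes, so an extra byte from the caller bleeds into the next
# field instead of being ignored.

def encode(key: bytes, extra: bytes = b"") -> bytes:
    return key + extra             # caller appends an unexpected u8

def decode(raw: bytes) -> dict:
    key = raw[:32]                 # declared field: 32-byte key
    # a later bool field is read from the next byte of the buffer
    flag_byte = raw[32] if len(raw) > 32 else 0
    return {"key": key, "flag": flag_byte}

honest = decode(encode(b"A" * 32))
evil = decode(encode(b"A" * 32, extra=bytes([100])))
print(honest["flag"])   # 0
print(evil["flag"])     # 100 -- a "boolean" that is neither 0 nor 1

# downstream logic that assumes a binary bool silently falls through
opt = evil["flag"]
if opt == 1:
    branch = "true-branch"
elif opt == 0:
    branch = "false-branch"
else:
    branch = "neither"             # reachable because the bool was corrupted
print(branch)                      # neither
```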

Why exploits prefer memory corruption - 1534

sha1lan    Reference → Posted 1 Year Ago
  • Memory corruption vulnerabilities make up 60%-70% of the issues exploited in the wild. There are many other classes of bugs, so why are these so popular? That is what the article tackles. Ironically, they're often the simplest way to get the job done.
  • It all comes down to how expressive and unconstrained memory corruption vulnerabilities are. A call to system() runs attacker-controlled input on a computer, giving us lots of freedom. Memory corruption is the same: we can create our own path through the infinite space of a weird machine. This expressive nature is offered by only a few bug classes.
  • A simple logic bug, like an authorization issue, is limited in nature: it has much more narrowly defined capabilities. Meanwhile, things like MTE and the movement towards memory-safe languages like Golang and Python are making memory corruption bugs harder and harder to exploit.
  • The author does make a distinction between memory corruption and memory unsafety. Memory corruption is commonly the effect of a memory unsafe bug. They reference a type confusion bug leading to memory corruption as an example. The author believes that true memory corruption vulnerabilities, like page table management in the kernel, will stay around but memory unsafety bugs will start to die.
  • At the end of the day, memory corruption vulnerabilities are still likely to be used. They provide huge capabilities that cannot be paralleled. Additionally, they are easily abstractable: if I find an arbitrary read/write primitive, I can hide its details behind an API of sorts and keep reusing my exploits. This does not work well with logic bugs most of the time.
  • Overall, a good post on why we like memory corruption vulnerabilities so much! It creates reusable primitives in an environment that can be repeated. Other bugs can't provide the same things, making them harder to find and harder to exploit for real gain.
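
The "abstractable primitive" point can be sketched as a tiny API. This is a toy model around a fake memory target, not any real exploit framework; the class and method names are invented:

```python
# Sketch of abstracting a corruption primitive behind an API: once you
# have arbitrary read/write, later exploit stages talk to this interface
# and never care how the underlying bug works.

class FakeMemory:
    """Stand-in target; a real exploit would wrap its corruption bug here."""
    def __init__(self):
        self.mem = {}

class ReadWritePrimitive:
    def __init__(self, target: FakeMemory):
        self.target = target

    def read64(self, addr: int) -> int:
        return self.target.mem.get(addr, 0)

    def write64(self, addr: int, value: int) -> None:
        self.target.mem[addr] = value

# later exploit stages reuse the interface, not the underlying bug
prim = ReadWritePrimitive(FakeMemory())
prim.write64(0x1000, 0x41414141)
print(hex(prim.read64(0x1000)))  # 0x41414141
```

Swap the bug out under the hood and everything built on read64/write64 keeps working; that reuse is exactly what a logic bug rarely offers.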

Code auditing is not the same as vulnerability research - 1533

sha1lan    Reference → Posted 1 Year Ago
  • Cybersecurity is an incredibly broad topic. Even the category of offensive cybersecurity is quite broad. In this article, they do a comparison between code auditing and vulnerability research.
  • Vulnerability research is all about understanding the practical threat landscape of a system or area of code. In this work, finding vulnerabilities is not enough; we care about how exploitable the bugs are and the real impact they can have given the constraints of real attackers. A working proof of concept can even be part of the output.
  • Code auditing has the goal of improving security within an area of code over a given time frame. This is usually about finding the greatest number of bugs without an emphasis on real exploitability. Code quality or configuration improvements, like missing binary protections, can be found here as well as actual bugs.
  • Both of these are valuable but serve different purposes. If it's a new codebase that's about to ship, then a code audit to find many issues is a good idea. If vulnerability research were done instead, it would likely surface only a few horrible things while leaving behind many risks and bugs that weren't worth tracking down.
  • Sometimes it's the opposite, though - vulnerability research is needed instead of a code audit. A large, risky codebase in a merger and acquisition, or a bug bounty program, are good examples of when this is necessary. If fuzzing is done on a library with little to no exposure to the outside world and lots of shallow bugs are found, it's not a realistic view of the security of the application. Instead, decisions should be made about the most impactful locations, and bug-finding effort should be focused there.
  • According to the author, the latter case is more likely to happen. A common issue is a client signaling that a higher quantity of bugs is better than a few high-impact ones, which leads to a code audit instead of vulnerability research. A good way to assess this (to me) is with likelihood and impact metrics.
  • Overall, a good article on the differences between a code audit and vulnerability research! They are similar but distinct activities, and conflating them leads to issues within various organizations.

Filecoin Boost: Clients can create PublishConfirmed but never-AddedPiece (handoff) deals - 1532

Qiuhao Li    Reference → Posted 1 Year Ago
  • Filecoin is a decentralized p2p network allowing users to store and retrieve files on the Internet. Users (data owners) pay storage providers (computers that store files) to hold their files. Filecoin uses a blockchain to record all of the information, with IPFS under the hood.
  • A deal is a contract between the user who owns the data and the storage provider, agreeing to store the information for them. Using Proposal.StartEpoch, the code checks that a proposed deal's start hasn't already elapsed past a certain time frame. This is to ensure there's enough time to perform the operation.
  • In AddPiece(), the code is run by the miner every 5 minutes for up to 6 hours.
  • The deal's start epoch (an epoch is a group of blocks) is checked against the current epoch plus a sealing buffer (480 epochs). Creating, accepting, and closing a deal takes time. An attacker can create a deal in which the start epoch is close to the current epoch, which will pass verification. However, after the deal is published but before it's added, the current epoch will grow larger than the specified start epoch.
  • This exploits the weird boundary on timing between various actions: one check doesn't take the StartEpochSealingBuffer into consideration while the other does. By doing this, AddPiece() will always fail! This wastes gas for the storage provider. Additionally, it could lead to a denial of service if the collaterals reach their limits.
  • Race condition vulnerabilities are commonly hard to find and understand but can reveal a fundamental weakness in the software design. Concurrency is nearly impossible to get 100% correct. Good write-up, but I do wish there was a little more background, since I had no idea what Filecoin was prior to reading this.

$150,000 Evmos Vulnerability Through Reading Documentation - 1531

jayjonah.eth    Reference → Posted 1 Year Ago
  • EVMOS is a Cosmos SDK blockchain that integrates the EVM into it. From reading the documentation (quoted in the next bullet point), they sent the distribution module some tokens. As stated in the documentation, this broke an invariant and crashed the chain.
  • The author talks about just reading documentation to find the vulnerability but I think there is a lot more going on here! The docs say: "The x/bank module accepts a map of addresses that are considered blocklisted from directly and explicitly receiving funds. Typically, these addresses are module accounts. If these addresses receive funds outside the expected rules of the state machine, invariants are likely to be broken and could result in a halted network."
  • So, what's really going on? The Cosmos SDK has a set of invariants that run at the end of every block. In the distribution module, one of these is that the accounting and the actual tokens must line up. By sending tokens directly to the module, this invariant breaks and crashes the blockchain.
  • So, why can we send tokens to this account then? The Cosmos SDK bank module's initialization contains a list of blockedAddrs. According to the documentation, this should block all module accounts, as hitting them may brick the chain. In the case of EVMOS, the list did not include all of the modules whose invariants could break.
  • The EVMOS project has not been on Immunefi for a long time - I'd guess two years. So, this vulnerability is quite old. If I had to guess, the author of the post popped every chain they could with this misconfiguration and only then published this. It's funny how the news picked up on this considering how old this vulnerability must have been.
  • Overall, a good vulnerability, but the post is somewhat deceptive. Although it was "just reading documentation," the why and the how are important for popping this. Additionally, not talking about disclosure timelines also feels wrong. I'm curious to see whether Cosmos changed the invariants that led to this vulnerability as well.
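
The invariant break can be sketched in a few lines. All names here are illustrative stand-ins, not actual Cosmos SDK code:

```python
# Sketch of the broken invariant: the distribution module tracks what it
# thinks it holds; sending coins straight to the module account desyncs
# the two, and the end-of-block invariant check halts the chain.

class DistributionModule:
    def __init__(self):
        self.actual_balance = 0     # tokens at the module address
        self.accounted = 0          # what the module's books say

    def reward(self, amount: int):
        self.actual_balance += amount
        self.accounted += amount    # both sides updated: invariant holds

def run_invariants(mod: DistributionModule):
    if mod.actual_balance != mod.accounted:
        raise SystemExit("invariant broken: halting chain")

mod = DistributionModule()
mod.reward(100)
run_invariants(mod)                 # fine: books match

mod.actual_balance += 5             # attacker sends tokens directly
try:
    run_invariants(mod)
except SystemExit as e:
    print(e)                        # invariant broken: halting chain
```

The blockedAddrs list is exactly the guard meant to make that last direct transfer impossible; EVMOS's list was incomplete.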

Exploiting a Blind Format String Vulnerability in Modern Binaries: A Case Study from Pwn2Own Ireland 2024 - 1530

synacktiv    Reference → Posted 1 Year Ago
  • Pwn2Own is a prestigious hacking competition for various devices. This entry was for the Synology TC500 camera running ARM 32-bit. The authors found a format string vulnerability in a custom print_debug_msg function that was passing inputs into vsnprintf.
  • Since the format string was written to a debug log, the vulnerability was blind. Additionally, ASLR, NX, Full RELRO, and PIE were all enabled on the device. On top of this, the payload was restricted to 128 bytes and could not contain null bytes or characters below 0x1F.
  • Format string vulnerabilities are ridiculously powerful: the specifiers allow reading and writing arbitrary spots in memory if you know what you're doing. Initially, they used the vulnerability to perform a single-byte overwrite of a stack pointer to a loop variable, redirecting it elsewhere on the stack. Since that variable was then written with our input, we could control where data was written relative to the stack, giving an effective relative out-of-bounds write primitive.
  • Once they had an arbitrary write on the stack, they needed to build a ROP chain; they used the unused stack space in the vulnerable function. Using the format specifier %*X$c, it's possible to read a value on the stack at a specific offset, which is then added to printf's internal character counter. Using %Y$c will increase the count further by a value we control. Since the first value can come from the stack and we control the second one, we can effectively bypass ASLR and PIE!
  • Once the counter is set, %Z$n can be used to write the value onto the stack. Doing this over and over again gave them a full ROP chain eventually calling system(). To hijack control flow, the same relative write trick was used to overwrite the return address on the stack to point to the ROP chain.
  • Modern binary protections are not enough for security with capable folks like the ones at synacktiv. An awesome post on their exploit path for this. It's sad that this was patched before the competition :(
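
The character-counter arithmetic behind the %*X$c trick can be sketched as plain math: one specifier adds a leaked stack value to printf's counter, attacker-chosen padding adds the delta, and %n writes the counter out. The addresses and offsets below are invented for illustration:

```python
# Back-of-the-envelope model of the %*X$c / %n trick. %*X$c pads with a
# number of spaces equal to a value read from the stack (e.g. a leaked
# code pointer); further padding adds an attacker-chosen delta; %n then
# writes the resulting character count to memory.

def chars_written(leaked_stack_value: int, pad: int) -> int:
    return leaked_stack_value + pad

def pad_for_target(leaked: int, target: int) -> int:
    """Pad needed so the counter lands exactly on `target` (an ASLR-slid address)."""
    return (target - leaked) % 2**32   # counter modeled as wrapping at 32 bits

leaked = 0x76f21000                    # hypothetical leaked pointer
gadget = leaked + 0x12345              # hypothetical ROP gadget address
pad = pad_for_target(leaked, gadget)

assert chars_written(leaked, pad) % 2**32 == gadget
print(hex(pad))  # 0x12345
```

Because the slide cancels out in the subtraction, the exploit never needs to know the absolute addresses, which is what defeats ASLR and PIE blind.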

An analysis of the Keycloak authentication system - 1529

Maurizio Agazzini - HN Security    Reference → Posted 1 Year Ago
  • Keycloak is a single sign-on provider. While on a project for a client, they identified a flaw in its authentication system.
  • In Keycloak, the levels of security depend on the level of authentication: level 1 is just a username and password, while level 2 is username, password, and OTP. According to the setup guide, the default browser flow is used by most apps.
  • This levels system sounds good in theory but has a flaw: level 1 authentication has access to account settings. An attacker could log in with credentials to a level 1 website, add a new OTP method, then use it on the level 2 website. This creates a really dumb bypass of 2FA. This vulnerability was known, according to the security team, but took 10 months to fix.
  • Several of the administrative endpoints were reachable by an unprivileged user. Of these, testLDAPConnection was the most serious because it could be used to steal LDAP creds by pointing the connection at an attacker-controlled location. This required some information that could be queried using the same class of vulnerability on a different API.
  • The final issue was poor brute force protection. The protections were turned off by default and were insufficient anyway: it was possible to send multiple requests simultaneously to get more login attempts than should be allowed. Use those locks!
  • Overall, a series of fairly simple yet impactful bugs. Good writeup!
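
The brute-force race is a classic check-then-act window. A deterministic Python sketch of the interleaving an attacker forces with simultaneous requests (Keycloak's real code differs; this just shows the window a lock would close):

```python
# Deterministic sketch of the brute-force race: N "simultaneous" requests
# all read the attempt counter before any of them increments it, so every
# request passes the check.

MAX_ATTEMPTS = 3

def racy_batch(n_requests: int) -> int:
    counter = 0
    # all requests read the counter first (the interleaving an attacker forces)
    reads = [counter for _ in range(n_requests)]
    allowed = sum(1 for seen in reads if seen < MAX_ATTEMPTS)
    return allowed                    # every request was allowed

def locked_batch(n_requests: int) -> int:
    counter = 0
    allowed = 0
    for _ in range(n_requests):       # check and increment as one atomic step
        if counter < MAX_ATTEMPTS:
            allowed += 1
        counter += 1
    return allowed

print(racy_batch(10))    # 10 login attempts slip through
print(locked_batch(10))  # 3, as intended
```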

Cracking into a Just Eat / Takeaway.com terminal with an NFC card - 1528

Marcel    Reference → Posted 1 Year Ago
  • Takeaway.com is an online food delivery system. The author of this post found an Android-based kiosk online for super cheap so they decided to buy one.
  • Their goal was a kiosk escape, letting an actor use the system to perform various bad actions. After several dead ends, such as keyboard shortcuts, they found that Android will open apps automatically via NFC. So, they wrote a particular package name to an NFC card, and Android opened it! In their example, they use the Android settings app.
  • They used the settings to enable the status and navigation bars, making it much easier to work with the Android device. Using a file system app on the device, they were able to extract the APK to reverse engineer. They found two hardcoded codes: 14611 sent the device into a factory test menu and 59047 opened an app launcher.
  • Using a male-to-male USB cable, it would be possible to connect via ADB, since the device runs a userdebug ROM in production. This would allow dumping the file system, overwriting the OS, and many other things. Good jailbreak post!
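
The package-name trick most likely relies on an NDEF Android Application Record (external type android.com:pkg), which tells Android which package to launch when the tag is scanned. A hedged sketch of building such a record's raw bytes; the byte layout follows the NDEF short-record format, and the package name is just an example:

```python
# Sketch: building an NDEF Android Application Record (AAR). A single
# short record is [header][type length][payload length][type][payload];
# the header sets MB, ME, SR and TNF=0x04 (external type).

def android_application_record(package: str) -> bytes:
    rtype = b"android.com:pkg"
    payload = package.encode()
    header = 0xD4                  # MB | ME | SR, TNF=0x04 (external type)
    return bytes([header, len(rtype), len(payload)]) + rtype + payload

record = android_application_record("com.android.settings")
print(record[:3].hex())  # d40f14
```

Writing these bytes to a cheap NFC tag is all it takes; the kiosk never gets a say in whether the app launch is appropriate.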