Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Race condition leads to Inflation of coins on Reddit- 663

yashrs - HackerOne    Reference → Posted 4 Years Ago
  • Reddit has coins that can be purchased. These coins can be used to give awards and do other things on Reddit. The payment API depends on the platform, since some purchases go through PayPal, some through the Apple App Store and some through the Google Play Store.
  • When calling the verify_purchase endpoint (which contains information from the Google payment), there exists a Time-of-Check vs. Time-of-Use (TOCTOU) vulnerability. Verification is being done; however, by making the same request several times concurrently, the coins get credited multiple times.
  • In the report, the developers at Reddit mention that they guard against this type of issue with a DB lock. But the bug appears to be in the memcache lock allowing multiple entries under concurrent requests. Actually retesting is important to verify a fix, as complicated ecosystems produce unexpected outcomes.
  • Overall, a great and impactful bug in the Reddit coin handling. Damn, race conditions are so fun!
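To see the race concretely, here is a toy Python sketch of a check-then-credit flow. All names are made up, and a Barrier stands in for several concurrent HTTP requests landing at once — this is the bug class, not Reddit's actual code:

```python
import threading

redeemed = set()          # receipts that have already been paid out
credits = []              # one entry per coin credit that went through
barrier = threading.Barrier(5)

def credit_racy(receipt_id, amount):
    seen = receipt_id in redeemed   # time of check
    barrier.wait()                  # five concurrent requests all pass the check...
    if not seen:
        redeemed.add(receipt_id)    # ...before any of them records the receipt
        credits.append(amount)      # time of use: the same purchase is credited again

threads = [threading.Thread(target=credit_racy, args=("rcpt-1", 500)) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
racy_total = sum(credits)
print(racy_total)                   # 2500: one 500-coin purchase credited five times

lock = threading.Lock()
def credit_locked(receipt_id, amount):
    with lock:                      # check and credit are now one atomic step
        if receipt_id not in redeemed:
            redeemed.add(receipt_id)
            credits.append(amount)

credits.clear(); redeemed.clear()
threads = [threading.Thread(target=credit_locked, args=("rcpt-1", 500)) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sum(credits))                 # 500: duplicates are rejected
```

Holding one lock across both the check and the credit is essentially what Reddit's memcache lock was supposed to provide.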

Stored XSS in Mermaid when viewing Markdown files- 662

saleemrashid - HackerOne    Reference → Posted 4 Years Ago
  • GitLab does some crazy shenanigans in their Markdown engine. One of these additions is the ability to inline Mermaid, a chart renderer, in Markdown.
  • Mermaid supports HTML labels when using flowcharts. However, this is only possible with specific configurations that GitLab does not use: namely, the securityLevel configuration cannot be strict. If we could get HTML into this, we could likely escalate it to XSS.
  • Mermaid supports adding directives, which can change the configuration. For obvious security reasons, several options cannot be changed; secure and securityLevel are the two important ones to note here. By passing flowchart.htmlLabels as the string "false" (not the boolean), we can get it past this filter, since the value is checked for existence rather than as a boolean.
  • Since flowchart.htmlLabels is set to some truthy value, the variable controlling it ends up true. With this, the labels now render HTML directly, resulting in HTML injection. But what about JavaScript?
  • The page has a fairly strict CSP. Because the page uses nonces for inline scripts, injecting script that way is not possible. To bypass this, the author serves the payload through Workhorse (which serves pipeline artifacts) with an auto-detected Content-Type. Since the JS now lives on the GitLab domain, the browser believes the JavaScript is coming from the same domain as the page. This satisfies the CSP.
  • With the JavaScript code on the GitLab domain, we can insert whatever we want directly into the DOM. innerHTML does not execute <script> tags, so instead we pass the script into an iframe srcdoc to get XSS on the page.
  • The triage discussion for this report is super interesting. The author makes a few notes on how GitLab should remediate this. First, they mention that GitLab should add another item to the denylist to cover the flowchart.htmlLabels directive, which would prevent this attack. Secondly, they should not allow potentially malicious Content-Types from Workhorse. Finally, they mention that htmlLabels should not be possible anyway.
  • The bug finder mentions that a lot of the security-related code in Mermaid is quite broken. For instance, the anti-script settings should block all script execution, but the author quickly found multiple ways around them, not even counting the bug above. In reality, the project could use an upgrade in code quality.
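The truthy-string trick generalizes well beyond Mermaid. Here is a loose Python analogue — NOT Mermaid's actual sanitizer, all names hypothetical — of a filter that blocks the boolean but tests the value for truthiness at use time:

```python
# Loose analogue of the truthy-string bypass; NOT Mermaid's actual sanitizer.
SECURE_KEYS = {"secure", "securityLevel"}   # directives may never touch these

def sanitize_directive(directive):
    clean = {}
    for key, value in directive.items():
        if key in SECURE_KEYS:
            continue                        # denied outright
        if value is True:                   # naive filter: only the boolean is blocked
            continue
        clean[key] = value
    return clean

def render_labels(config):
    # At use time the option is tested for truthiness, not compared to True.
    return "html" if config.get("htmlLabels") else "text"

config = sanitize_directive({"htmlLabels": "false"})   # the string, not the boolean
print(render_labels(config))                           # html: "false" is truthy
```

The fix is to normalize the value's type before filtering, or to filter on the key rather than the value.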

Blackswan - 7 Microsoft 0 Days- 661

Erik Egsgard - Field Effect    Reference → Posted 4 Years Ago
  • When sending I/O control requests on sockets, the request codes are verified to ensure that internal functions cannot be reached directly. However, the TdxIssueIoControlRequest function accepts codes without doing this validation. This is labeled as the first vulnerability.
  • With the ability to call internal functions unexpectedly, many other bugs fell out of this. From this research, four exploitation paths surfaced: an arbitrary increment, an arbitrary read/write via access to a pointer, a TOCTOU on a buffer and an info leak.
  • The other two bugs are TOCTOU bugs. Windows IOCTLs have three different modes: buffered (the user buffer is copied into the kernel), direct I/O (the buffer is mapped to a kernel address) and neither (where the kernel operates directly on a shared user mapping).
  • The two other TOCTOU bugs were found in the neither category. Because a user can easily write to this type of memory while the kernel is using it, validation is hard to do, which leads to many TOCTOU bugs.
  • To me, there are three bugs: the two TOCTOU bugs and the code-validation bypass. By simply fixing the validation bypass, you also make the other bugs completely unexploitable. But, I suppose the more CVEs the better!
  • The article has a bunch of background on the Windows OS, which was good to see. Overall, a good article with unique and hard-to-find bugs.
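The neither-mode double fetch can be sketched deterministically. A hook stands in for the attacker's thread, and the names and sizes are illustrative only:

```python
# Toy double-fetch: the "kernel" validates a length read from shared memory,
# then re-reads it for the copy. Names and sizes are illustrative only.
shared = {"length": 8}      # stands in for a user mapping the kernel reads twice
KERNEL_BUF = 16

def handle_ioctl(flip_between_fetches=False):
    if shared["length"] > KERNEL_BUF:      # first fetch: validation
        return "rejected"
    if flip_between_fetches:
        shared["length"] = 4096            # attacker rewrites the shared value
    n = shared["length"]                   # second fetch: the copy trusts it
    return "overflow" if n > KERNEL_BUF else "ok"

print(handle_ioctl())                          # ok
shared["length"] = 8
print(handle_ioctl(flip_between_fetches=True)) # overflow: validated 8, used 4096
```

The standard fix is to copy the value once into kernel-owned memory and only ever use that copy.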

Squirrel Sandbox Escape allows Code Execution in Games and Cloud Services- 660

Simon Scannell & Niklas Breitfeld - SonarSource    Reference → Posted 4 Years Ago
  • Squirrel is an interpreted language used by video games and cloud services to allow custom programming. In CS:GO it is used to enable custom game modes and maps. The language runs in a sandbox to prevent exploitation of the machines hosting and playing games.
  • The main Squirrel implementation is written in C++. As a result, a series of memory corruption vulnerabilities could be used to break out of the sandbox. Squirrel is also an object-oriented programming (OOP) language and looks similar to PHP.
  • When creating a class, there are two dynamic arrays: one for values and one for methods. Additionally, the _members field maps the name of an attribute to its index in one of these arrays.
  • To know which array to index into, a bitflag within the index stored in _members is used. This bitflag is 0x02000000. Holding bitflags inside a used value is similar to the size bits of chunks in glibc malloc. Is the usage of the bitflag done securely?
  • Since the bitflag is at 0x02000000, could we create a class definition with 0x02000000 methods or variables? If we add 0x02000000 methods and then try to access one as a variable, the program immediately crashes! We have a type confusion vulnerability.
  • Here's an example flow:
    1. Create 0x02000005 methods and 1 field.
    2. The attacker accesses the method with the corresponding index 0x02000005.
    3. The _isfield() macro returns true for this index as the bitflag 0x02000000 is set.
    4. The _defaultvalues array is accessed with index 0x5. However, it only contains 0x1 entries and thus the attacker has accessed out of bounds.
  • Using the type confusion vulnerability, we can use the value accessor to read and write values IF we can create a proper fake object (lots of indirection).
  • A good use of this indirection was setting _value to retrieve an array type. Using an OOB access, we could control the base address and the number of entries in the array. Now, by reading or writing through this array, we have a beautiful arbitrary read/write primitive.
  • Mixing real values and metadata bits can be very dangerous. In this case, the lack of overflow validation allowed for a bad type confusion, eventually leading to code execution.
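Here is a scaled-down Python model of the index-plus-flag scheme. The flag 0x10 stands in for the real 0x02000000 so the demo stays tiny; nothing here is the engine's actual code:

```python
# Scaled-down model of the member-index scheme; the flag 0x10 stands in for the
# real 0x02000000 so the demo stays tiny. This is NOT the engine's actual code.
FIELD_FLAG = 0x10

def is_field(idx):
    return bool(idx & FIELD_FLAG)     # the engine trusts this bit blindly

def payload_index(idx):
    return idx & ~FIELD_FLAG

class Klass:
    def __init__(self):
        self.members = {}             # attribute name -> encoded index
        self.methods = []
        self.defaultvalues = []

    def add_method(self, name):
        self.members[name] = len(self.methods)   # no check that the flag bit is clear
        self.methods.append(f"<method {name}>")

    def add_field(self, name, value):
        self.members[name] = len(self.defaultvalues) | FIELD_FLAG
        self.defaultvalues.append(value)

    def get(self, name):
        idx = self.members[name]
        if is_field(idx):             # type confusion once enough methods exist
            return self.defaultvalues[payload_index(idx)]
        return self.methods[payload_index(idx)]

k = Klass()
k.add_field("x", 42)
for i in range(FIELD_FLAG + 6):       # 0x16 methods: indices 0x10..0x15 carry the flag
    k.add_method(f"m{i}")

print(k.get("m16"))                   # 42: method m16 (index 0x10) misread as field #0
try:
    k.get("m21")                      # index 0x15 -> field #5: out of bounds
except IndexError:
    print("OOB read into defaultvalues")
```

The fix amounts to rejecting member counts that could ever collide with the flag bit.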

nt!ObpCreateSymbolicLinkName Race Condition Write-Beyond-Boundary- 659

WALIEDASSAR    Reference → Posted 4 Years Ago
  • In the Windows operating system, you can create symbolic links using kernel syscalls. Once a reference has been made, the handle is passed back to the user for later use. Symlinks can also be deleted.
  • When a symlink is being created, the valid handle is created quite early in the process. Why is this a problem? An attacker can predict this handle value and access it from another thread! Because no lock was applied (the object is not even finished being created), the rest of the creation process can use an unexpectedly changed symbolic link handle.
  • In the proof of concept, the author has one thread continually closing (removing) symlink handles and another creating them. Eventually the race is won, resulting in a crash in the symbolic link creation path.
  • I had never considered this before! In the future, I will remember the creation process as an interesting place to validate that locks are done properly.
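The bug class (an object reachable before its creation finishes) can be modeled in a few lines of Python. The race window here is a hook instead of a real thread so the outcome is deterministic; everything is illustrative:

```python
# Toy model of the bug class: the handle becomes visible before creation
# finishes, so another thread can close it mid-creation. Illustrative only.
handles = {}

def create_symlink(name, race_window=lambda: None):
    handles[name] = {"finalized": False}  # handle already visible to other threads
    race_window()                         # window where a second thread can act
    link = handles.get(name)              # creation keeps using the early handle
    if link is None:
        return "creation crashed: handle vanished"
    link["finalized"] = True
    return "ok"

print(create_symlink("A"))                                        # ok
print(create_symlink("B", race_window=lambda: handles.pop("B")))  # creation crashed
```

The fix is to publish the handle only after the object is fully initialized, or to hold the object's lock for the whole creation path.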

BYPASSING LOCKS IN ADOBE READER- 658

Mark Vincent Yason - Zero Day Initiative (ZDI)    Reference → Posted 4 Years Ago
  • The author was running a fuzzer on Adobe Reader, with both a JavaScript section and a PDF portion. Let's triage!
  • The CPDField objects of a PDF are internal AcroForm.api C++ objects used to represent text fields, buttons and many other things. In the PoC, there is a CPDField object that is a child of another object. When calling JavaScript on the parent with a callback that performs state-changing actions on the child, we crash. But why?
  • CPDField has an internal property called LockFieldProp to prevent concurrent access issues. This field is checked every time a change happens on the object. However, when using a custom callback (as mentioned above), a recursive call can be made that frees the child object, since the child was never locked.
  • When the recursive call goes back up the call stack, the object pointer has been freed, resulting in a use-after-free vulnerability. The initial patch ONLY locked the direct child of an object. Hence, the author wrote a PoC that modified the grandchild of a field, which triggered the same vulnerability as before.
  • This bug appears to be extremely exploitable! In JavaScript, the freed CPDField is easy to reclaim via a heap spray of similarly sized objects. Once the freed CPDField has been swapped out with an object that we control, it is game over! The PoC submitted to ZDI demonstrated control of a virtual function pointer once dereferenced.
  • Overall, an interesting find that appears to be extremely exploitable. Seeing deep into the crash analysis of a real bug the author found while fuzzing was quite the insight.
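A toy model of the lock-scoping mistake: only the object being modified is locked, so a callback can still free an unlocked descendant. The names are illustrative, not Adobe's code:

```python
# Toy model of the per-object lock: only the object being modified is locked,
# so a script callback can still free an unlocked child. Names are illustrative.
class Field:
    def __init__(self, name):
        self.name = name
        self.locked = False
        self.freed = False
        self.children = []

def set_value(field, on_change):
    field.locked = True              # the parent is protected...
    on_change(field)                 # ...but the callback runs with children unlocked
    field.locked = False
    for child in field.children:     # the rest of the operation still uses the child
        if child.freed:
            return f"use-after-free on {child.name}"
    return "ok"

parent = Field("parent")
parent.children.append(Field("child"))

def evil_callback(f):
    f.children[0].freed = True       # mimics JS recursively freeing a descendant

result = set_value(parent, evil_callback)
print(result)                        # use-after-free on child
```

Locking only the direct child (as the initial patch did) just pushes the same problem one level down to the grandchild.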

Discourse SNS webhook RCE- 657

joernchen    Reference → Posted 4 Years Ago
  • The Discourse project is used for mailing lists, discussion forums and chat rooms. They have a very nice Security Guide as well, if you are looking for something to read.
  • While staring at the code in this project, the author saw an interesting piece of code: open(subscribe_url). Ruby's open function can be injected into for OS command injection, since an argument beginning with a pipe character is executed as a command.
  • The problem is that this code path has a ton of verification, including requiring a proper AWS PEM file from SNS. The URL must be within the SNS service and end with a .pem extension. Since we do not control the PEM file served by SNS, this causes us issues.
  • The code itself is intended to send push notifications to registered endpoints, and the snippet in question grabs the .pem file. Could this verification be bypassed?
  • The regex verification allows any SNS endpoint to be used, which means any SNS operation can be invoked. The first option was triggering an X509 certificate error by sending a strange-looking URL. But we need a 200 response for this to work, darn.
  • The SNS operation GetEndpointAttributes has a field called CustomUserData. Using this operation, it was possible to have a valid X509 certificate returned from the API.
  • With this out of the way, the SubscribeURL on the message sent with the certificate could be used for command injection. At this point, we can pop a shell on the Discourse instance, even though we clearly should not be able to!
  • Overall, great writeup on how to read the docs and source code in order to find impactful exploits. Using cloud services to build your service is a complicated affair, which really reminds me of a HashiCorp Vault vulnerability.
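To see why "right host plus right extension" is not enough, here is a hypothetical stand-in for the certificate-URL check — the regex is mine for illustration, not Discourse's actual pattern:

```python
import re

# Hypothetical stand-in for the certificate-URL check: the host must be SNS and
# the path must end in .pem, but nothing pins it to the real signing certificate.
ALLOWED = re.compile(r"^https://sns\.[\w-]+\.amazonaws\.com/.*\.pem$")

legit = "https://sns.us-east-1.amazonaws.com/SimpleNotificationService.pem"
# Any SNS API operation passes too, as long as ".pem" appears at the end:
smuggled = "https://sns.us-east-1.amazonaws.com/?Action=GetEndpointAttributes&fake=.pem"

print(bool(ALLOWED.match(legit)))     # True
print(bool(ALLOWED.match(smuggled)))  # True: the whole SNS API is reachable
```

Pinning the full expected URL (or fetching the certificate from a fixed location) closes this class of bypass.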

SuDump: Exploiting suid binaries through the kernel- 656

Itai Greenhut - Aleph Security    Reference → Posted 4 Years Ago
  • In Linux, a coredump is the log of a crash of an executable. This is generated when a process receives a variety of different signals, such as SIGSEGV. The coredump can be used to explore the memory of the process at the time of a crash.
  • Every process has many attributes. One of these attributes is the dumpable property. This is used to determine whether to generate a core file for a crashing process. There are three values for this:
    • 0 - Process is not dumpable, and core file won’t be created.
    • 1 - Process is dumpable.
    • 2 - Dump created only if core_pattern is an absolute path or a pipe to a program (suidsafe).
  • Linux gives the ability to run a program as another user via setuid. Of course, we do not want a coredump to occur here, as this could leak secrets from the program. To prevent this, the dumpable value of a setuid binary is set to 0 instead of 1 at process creation.
  • This check is done by comparing the real ID of a user or group to the effective ID. If the process has a real user ID (the actual uid of the user) that differs from the effective one (the executed binary's user), then the dumpable attribute is set to 0.
  • This begs the question: "What would be the dumpable value of a child of a suid process?" A child process would have a dumpable value of 1 if the suid process dropped its privileges. Here is the attack idea: if a suid binary creates a child process that is not suid without dropping the permissions, then a core dump could occur. With this insight, it is time to find an application where this holds and which we can cause to crash!
  • After investigating, they found that sudo (in some non-standard configurations) was a good candidate. In some setups of sudo, all users can execute a binary as any other user. In the example they use true, which has literally no logic whatsoever. When true is executed via sudo, the fork call does NOT drop permissions. Because true does not have the setuid bit, the dumpable value is set to 1 when we execute it.
  • If we can find a crash, this will work! Luckily for us, there are many environmental ways to make a program easily crash:
    • Using 'RLIMIT_CPU' to limit the CPU time available to the program.
    • CTRL + \ or SIGQUIT will stop the process immediately and trigger a coredump.
  • When using sudo, only some environment variables are passed through. Because of this, we need to be careful about how we make a coredump happen at a location of our choosing. One variable that survives is XAUTHORITY. By including data that we control in it, we control part of the coredump that gets written.
  • The program logrotate is generally used for automatic rotation, compression, removal, and mailing of log files. Logrotate is extremely lax in how it interprets files (it ignores binary data), making it an ideal candidate once we have a file write primitive. By writing a coredump into the logrotate directory, we can get the configuration string we planted in XAUTHORITY to be executed.
  • The exploit flow goes like this:
    1. Set XAUTHORITY to our logrotate configuration so it will be in the child’s memory.
    2. chdir into /etc/logrotate. Now, the coredump will occur here.
    3. Crash the program. This can be done with the SIGQUIT signal or by playing with the CPU limits.
    4. execve our privileged sudo command (the true binary in our example) via logrotate.
  • A second approach was found using Pluggable Authentication Modules (PAM) in Linux. PAM is used by su, which eventually uses several external binaries. A code path that does this can be reached only if SELinux is enabled. By using the same method as before with su and the PAM modules, a coredump can be placed into logrotate to execute code down the line.
  • The most interesting part of the post is that there is no true fix for this at the moment. Coredumps are an important part of Linux debugging that need to be created. This post used a subtle quirk in the checking of the dumpable property (that is NOT trivial to fix), plus expected crash behavior, to cause a coredump in a bad place.
  • Overall, amazing article with great insight into how Linux and the coredump mechanics work. Not all of the bugs are fixable or caused by memory corruption; some of them are logic flaws.
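The dumpable decision and why the sudo child escapes it can be sketched like this (heavily simplified: the real kernel logic also involves fsuid, core_pattern and prctl):

```python
# Toy model of the dumpable decision described above (heavily simplified).
def dumpable(ruid, euid):
    # A mismatch means we are running with elevated (suid) credentials.
    return 0 if ruid != euid else 1

print(dumpable(1000, 0))   # 0: sudo itself is suid root -> no core file
# sudo runs /bin/true as root without dropping privileges; true is not suid,
# so after execve the child's real and effective IDs match again.
print(dumpable(0, 0))      # 1: a crash in the child writes a core dump as root
```

The subtlety is that both answers are "correct" in isolation — the root-privileged child simply no longer looks like a suid process to this check.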

RCE via Directory Traversal in CivetWeb HTTP server - 655

Denys Vozniuk & Shachar Menashe - JFrog    Reference → Posted 4 Years Ago
  • CivetWeb is an embeddable web server/library. It can be used as a standalone web server or to add web server functionality to an existing application.
  • CivetWeb has built-in file upload functionality, with the API mg_handle_form_request used for uploading files. The code has directory traversal (../) sanitization. However, there is a platform-dependent logic bug in the compilation: the protection only works on Windows builds. As a result, macOS and Linux builds are vulnerable to this attack.
  • According to the post, the missing validation is a build-specific issue. There is a conditional compilation check where the else branch simply verifies that the string does not contain a '&'. In reality, this code SHOULD have been checking for a forward slash '/' instead.
  • The fix removes all of the 'dot' segments from the path, which prevents the standard directory traversal vulnerability. Overall, a good post and an impactful find.

Improper Validation at Partners Login- 654

HackerOne    Reference → Posted 4 Years Ago
  • One-time-password (OTP) functionality is commonly used for 2FA. A common form of this is to send a code via SMS to the user and then verify it.
  • The OTP request had several fields. The two of note are phone_number and country_isd. The phone number is obvious, but country_isd is not: it is simply the country dialing prefix for the phone. For instance, the US is '+1'.
  • Although the phone number had proper verification before the OTP was sent, the country_isd did not. Since it was prepended to the phone number, we could change the number actually passed to the SMS service!
  • Initially, I thought to add a new phone number and then put a '#' to act as a comment. However, the author had a much cleverer idea: the SMS provider accepted comma-delimited recipients, sending the message to multiple numbers!
  • By setting the country_isd to a full phone number followed by a comma, another number can be added. For instance, if the attacker's number was 9999999999 and the validated number was 8888888888, the resulting recipient string would be 9999999999,8888888888. Since the message is sent to both numbers, the attacker receives a valid OTP.
  • This attack required a lack of input validation and a decent understanding of the backend service. Sometimes, a misunderstanding of the full workflow of a service can lead to devastating bugs, such as this one.
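The injection is just string concatenation. A quick sketch — the function name is made up, and the comma-splitting gateway behavior is taken from the writeup:

```python
# Sketch of the recipient injection described above; build_recipient is a
# hypothetical name, only the concatenation mirrors the bug.
def build_recipient(country_isd, phone_number):
    # Only phone_number is validated; country_isd is prepended untouched.
    return f"{country_isd}{phone_number}"

victim = "8888888888"            # the number that passed OTP validation
attacker_isd = "9999999999,"     # attacker's own number plus a comma
recipients = build_recipient(attacker_isd, victim)
print(recipients)                # 9999999999,8888888888
# A comma-aware gateway now delivers the OTP to both numbers:
print(recipients.split(","))     # ['9999999999', '8888888888']
```

Validating the fully assembled recipient string (not just one component) is what the check needed to do.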