Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Robots with Lasers and Cameras but No Security: Liberating Your Vacuum - DEF CON 29 - 593

Dennis Giese    Reference →Posted 4 Years Ago
  • IoT devices are notoriously insecure, even though things are getting better. As a result, people need a way to verify a company's claims about privacy and security. Additionally, the only way to know a used device is clean is to root it yourself. So, being able to root these devices is important for functionality, security and claim validation. The author gave a similar talk in 2018 about another Roomba-like device and discusses the state of rooting for these.
  • This new cleaner stepped up its game in security though! The Ubuntu version had an obfuscated root password, a custom ADB version, a watchdog enforcing copy protection and a firewall via iptables to prevent malicious access (though this only worked on IPv4, not IPv6). Additionally, the firmware is not signed and each vacuum has its own encryption keys. Good steps up!
  • The author noticed an open UART connection on the device. However, the root password was obfuscated. Dennis de-obfuscated the password and used it to gain access, but never mentions how the de-obfuscation was done. An additional way in was enabling single-user mode in U-Boot while the device was starting up. At the time, neither of these methods was restricted.
  • Prior to this research, 3 different robots were unrootable. While looking for a new method of rooting, the author noticed that all of the previous exploits had been removed and decided to reverse the PCB. Allwinner SoCs all have FEL, a flashing/recovery mode. By disabling the flash IC or pulling the FEL pin on the chip, we can boot our own OS on the system! Since FEL mode is burned into the BootROM of the device, it cannot be removed.
  • This approach sounds simple in theory. However, actually loading a proper version of the Linux kernel is complicated because NAND support is proprietary. The steps are as follows:
    1. Extract kernel config from the RockRobo kernel. This was likely done by JTAG or from previous rooting attempts.
    2. Create a file system with custom tools on it.
    3. Compile a minimal kernel using the Nintendo NES Classic source code. This works because both devices use the same chip.
    4. Create a custom U-Boot version with the extracted Roborock configuration.
    5. Trigger the FEL mode by shorting TPA17 to GND.
    6. Load everything (U-Boot, kernel and new FS) via USB.
    7. Patch original OS with our own. Now, our OS should just run! :)
  • ADB uses special authentication with a challenge-response method. This is based upon a secret file, and the mode is controlled via an adb.conf file. Luckily for us, both are stored on an unencrypted and unprotected partition. By using in-system programming (ISP) or replacing the chip entirely, we can change the configuration or the secret. We now have access to the device!
  • The next step is disabling SELinux. Currently, access to /dev and the network is blocked. However, bind mounts and kill are usable. By replacing the client with our own bash script via a bind mount and killing the currently running client, the watchdog will attempt to restart the client, which is now just our bash script. SELinux is disabled.
  • Finally, we need persistent access. There is custom ELF signature verification running within the kernel, which means we cannot add custom code to the device. However, there is a backdoor that allows any file named librrafm.so to run. Now, we have rooted the vacuum cleaner! What else can we do? OP-TEE, which uses ARM TrustZone, will decrypt firmware updates if we ask nicely. With this, we can reverse the firmware to find other issues.
  • Another device that the author was looking at had a debug interface for UART, USB and easy access to the boot selection pin. Using the FEL, this device could also be rooted quite easily. The author mentions getting the firmware off of the device as well.
  • This device (Dreame) has a backdoor in it that is accessible from the cloud. The credentials for the server, which is used for development, are publicly exposed. To make matters worse, the user has sudo privileges. There is also an open FTP server hosting debug scripts that could be altered. These devices have predictable root passwords: base64(SHA1(serial number)). The password for debug firmwares is #share!#, making it trivial to break into these devices from the internet.
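The predictable password scheme is trivial to implement; a sketch (how the serial is normalized before hashing is an assumption — the talk only gives the base64(SHA1(serial number)) construction):

```python
import base64
import hashlib

def predictable_root_password(serial: str) -> str:
    """Derive the root password as base64(SHA1(serial number)).

    Serial formatting (case, separators) before hashing is an
    assumption; the talk only states the overall construction.
    """
    digest = hashlib.sha1(serial.encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")
```

A 20-byte SHA-1 digest always base64-encodes to 28 characters, so every device's password has the same shape — derivable by anyone who can read the serial number off the unit.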
  • This talk had enough content for 5 talks! It's amazing how much information is crammed in and how much this researcher got done. I hope to see more rooted vacuums in the future and to get better at hardware hacking, like this hacker.

Zoom RCE from Pwn2Own 2021 - 592

Thijs Alkemade & Daan Keuper - Sector7    Reference →Posted 4 Years Ago
  • Most users mainly know the video chat functionality, but Zoom also includes a quite full-featured chat client, with the ability to send images, create group chats, and more. Within meetings, there's of course audio and video, but also another way to chat, send files, share the screen, etc. The authors made a few premium accounts too, to make sure they saw as many of the features as possible.
  • When starting to reverse the application, they found that a large portion of the libraries were part of an open source SDK. As a result, they decided to target these. Most of the Zoom code is written in C++, which removes many of C's foot-guns.
  • The authors noticed an interesting but benign issue in how OpenSSL functions were being used. The buffer for a base64 decode only needs to be 3/4 of the input size; instead, a buffer was created by shifting the size by 4. Although this is not a vulnerability, it is a code smell! Because of it, they decided to look at all of the OpenSSL integration code.
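As a sketch of the sizing rule at issue (this is the textbook rule, not Zoom's actual expression):

```python
def b64_decoded_max_len(encoded_len: int) -> int:
    """Upper bound on base64-decoded output: every 4 input characters
    decode to at most 3 bytes, so 3/4 of the input length suffices."""
    return (encoded_len // 4) * 3
```

Any allocation computed some other way (such as an ad-hoc shift) forces a reviewer to re-derive whether the buffer is big enough — which is exactly why it reads as a smell even when it happens to over-allocate.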
  • The paid version of Zoom includes Advanced Chat Encryption. With this enabled, a handshake process takes place where some encryption is done. While decrypting, a fixed-size buffer of 1024 bytes is used for AES. However, unlike the RSA path, there is no validation that the AES decryption result will fit into the buffer.
  • As a result, a heap-based buffer overflow could be triggered by sending a message that needed to be decrypted. This overflow is fairly ideal since we control the size and all of the bytes being sent. But exploiting bugs in real life is MUCH more complicated than in a CTF.
  • This article goes deep into how the Windows heap allocator works. I'll summarize a few points here:
    • Windows has two different heaps: Segment Heap for very specific applications and NT Heap for everything else. In the NT heap, there are the front-end/Low-Fragmentation Heap and the back-end allocator.
    • The back-end allocator is fully deterministic and functions like the glibc malloc implementation. The LFH is used for sizes that are requested often and involves a bit of randomization.
    • If more than 17 blocks of a specific size range are allocated and still in use, then the LFH will start handling that specific size from then on.
    • Each heap allocation (of less than 16 kB) has a header of eight bytes. The first four bytes are encoded, the next four are not. The encoding uses a XOR with a random key, which is used as a security measure against buffer overflows corrupting heap metadata.
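A toy model of that XOR encoding (the real Windows header layout is more involved; this only illustrates why a blind overflow corrupts the metadata):

```python
import secrets

# Per-heap random key, chosen once when the heap is created.
HEAP_ENCODE_KEY = secrets.randbits(32)

def encode_header(raw_header: int) -> int:
    """XOR the first four header bytes with the per-heap key."""
    return (raw_header ^ HEAP_ENCODE_KEY) & 0xFFFFFFFF

def decode_and_check(encoded: int, expected: int) -> bool:
    """The allocator decodes on free; a header blindly overwritten by
    an overflow will almost never decode to a sane value."""
    return ((encoded ^ HEAP_ENCODE_KEY) & 0xFFFFFFFF) == expected
```

An attacker who cannot read the key cannot craft bytes that decode to valid metadata, so straightforward header-corruption tricks fail.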
  • Grooming the heap in just the right way took a significant amount of effort and testing to get right. They needed to line up the proper object, with a function pointer into OpenSSL (no CFG on these functions), close enough to the overflow to overwrite it. Additionally, they required an information leak to break ASLR.
  • For the information leak, they targeted a different part of the application. By finding a link that they could control, they could overwrite the URL that was being sent back for the connection. This overwrite would clobber the NULL terminator, resulting in a bunch of extra data being sent back. A little more grooming had to be done to place an OpenSSL object right after the URL.
  • After hours upon hours of trying different TLS settings, orderings and other variations, they got the leak to work! They found that using TLS renegotiation made the exploit much more stable by spraying the object they wanted to leak over and over again.
  • With the leak armed, they could tackle the code-execution problem. The class FileWrapperImpls had an insane number of function pointers to overwrite. By lining this object up next to the overflow, we can corrupt these values and jump wherever we want!
  • Getting code execution via a function pointer means starting with COP, or call-oriented programming. The next step is a stack pivot; but we still need to know where to write. That second problem was solved by sending a bunch of GIFs to fill up the address space. With the address space filled, we can make an educated guess of where the new stack will be.
  • The COP chain starts by calling a gadget that lets us control the RSP value afterwards. This gadget pushes a value we control and then pops it directly into RSP: how convenient! With control over RSP, we can start a standard ROP chain.
  • The ROP chain first calls a function to get the address of the region where our GIF data is likely to be, then calls VirtualProtect over it. Since we control all of the contents of the GIF, this is a perfect location to make executable. Finally, we have arbitrary code execution!
  • Overall, this article has very good insights into competing in Pwn2Own. Additionally, the methodology and bugs found are amazing for learning how to make real exploits.

Accellion Kiteworks Vulnerabilities - 591

Adam Boileau - Insomnia Sec    Reference →Posted 4 Years Ago
  • Accellion has a large collection of products meant to secure the ecosystem against outside attackers, such as encrypted email, secure file sharing and many other features. Kiteworks is a content firewall product.
  • The application has its own hand-written SQL query builder library. However, it provides no safe or standard way of protecting ORDER BY or LIMIT clauses, which cannot use bound parameters. As a result, there are two SQL injections. With the ability to stack queries in the application, this vulnerability allows you to call UPDATE or exfiltrate arbitrary data.
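Because ORDER BY can't be bound as a query parameter, the standard defense is an allow-list; a minimal sketch (column names are hypothetical, not Kiteworks's schema):

```python
# Hypothetical column names; the point is that user input never
# reaches the SQL text directly.
ALLOWED_SORT_COLUMNS = {"name", "created_at", "size"}

def build_order_clause(column: str, descending: bool = False) -> str:
    """ORDER BY cannot use bound parameters, so validate against an
    allow-list instead of interpolating raw user input."""
    if column not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"unsupported sort column: {column!r}")
    direction = "DESC" if descending else "ASC"
    return f"ORDER BY {column} {direction}"
```

A query builder that skips this step and splices the caller's string straight into the clause is exactly the pattern that produced these two injections.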
  • Using the UPDATE to change or create a user with admin privileges is enough to compromise the application. However, the author wanted to go from user to shell!
  • The product supports several SMS backends, including a generic option that sends arbitrary HTTP requests from the Kiteworks host. Additionally, there is a test method to check that the SMS service is working, which sends a message through the API to the specified phone number. A generic backend, data sent back and a test endpoint just scream exploitable SSRF!
  • Using the SSRF, JWT tokens could be requested from the internal web server. Additionally, the backend runs the Apache Solr search engine. Having remote streaming enabled is a known misconfiguration, which allows for arbitrary file reads from the operating system. Boom!
  • The arbitrary file read allows us to steal the HMAC key the admin uses to make intra-cluster calls. This becomes particularly bad because an attacker who can obtain a valid JWT and the HMAC key material can simply call into an endpoint like /dbapi/cli_exec/execute via the internet-exposed front-end webserver and have arbitrary shell commands run.
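With the key in hand, forging a valid intra-cluster signature is mechanical; a sketch (the hash algorithm and encoding are assumptions — the article only says the calls are HMAC-authenticated):

```python
import hashlib
import hmac

def sign_cluster_request(stolen_key: bytes, payload: bytes) -> str:
    """Forge the HMAC an internal endpoint such as
    /dbapi/cli_exec/execute would accept (SHA-256 and hex encoding
    are assumptions, not the documented scheme)."""
    return hmac.new(stolen_key, payload, hashlib.sha256).hexdigest()
```

The lesson is that HMAC only authenticates holders of the key; once a file read leaks the key, the signature check proves nothing.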
  • The user runs as uid=500, i.e. without root permissions. There is a script that attempts to protect against root-level access to particular binaries and scripts, and it does an insane amount of validation. However, after patiently digging through the operating system, the author noticed a permissions issue on the directory holding some of the binaries. As a result, one of the protected binaries could simply be swapped out for another script to become root.
  • To make this attack spicier, there is a reflected XSS vulnerability in one of the APIs. Using this, the entire vulnerability chain above could be performed. From unauthenticated to popping a shell!
  • The article ends with a failed attempt to get code execution via parsing of read-only documents. The author touches on many different approaches that were partially explored, including components with known vulnerabilities. Additionally, the app runs within a sandbox called Firejail.
  • The sandbox itself is considered secure; it limits access to the file system, network resources and many other things. The network was restricted by blocking access to all inet sockets. However, the application uses Nginx as a frontend that forwards requests to the backend via domain sockets. PHP is deployed using the FastCGI mechanism, where a long-running PHP server receives requests to invoke scripts, avoiding the cost of process start-up.
  • The socket restrictions apply only to inet sockets, not Unix sockets. As a result, a Unix stream socket connection to the PHP FastCGI server can be made to execute a PHP script within the directory. With this, we have code execution outside of the sandbox.
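The escape hinges entirely on the filter's scope; a minimal sketch of the observation (the socket path is a hypothetical example, not Kiteworks's actual path):

```python
import socket

def connect_to_fastcgi(sock_path: str = "/var/run/php-fpm.sock") -> socket.socket:
    """Firejail's rule blocked inet sockets only, so an AF_UNIX stream
    connection to the PHP FastCGI socket still succeeds from inside
    the sandbox (path shown is hypothetical)."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    return s
```

Speaking the FastCGI record protocol over this connection is what ultimately invokes an attacker-chosen PHP script outside the sandbox.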
  • Overall, the SSRF exploitation, privilege escalation and sandbox escapes were unique and enjoyable to read. Seeing the full attack and the scrapes of notes at the end were awesome.

The Complete Guide to Prototype Pollution Vulnerabilities - 590

Daniel Elkabes - WhiteSource    Reference →Posted 4 Years Ago
  • Prototype Pollution is a vulnerability specific to JavaScript (JS), and it requires a deep understanding of JS. In JS, there are Objects, which are collections of key-value pairs (similar to a dictionary in Python). A Prototype is an attribute of an Object that allows objects to inherit features from one another. Even a prototype can have a prototype; this is called a prototype chain.
  • The __proto__ attribute of an object has some unique and interesting traits:
    • It is a special attribute that refers to the Prototype of an object
    • all Objects have __proto__ as their attribute (Prototype)
    • __proto__ is also an Object
    • __proto__ was meant to be a feature, to support processes like inheritance of all attributes
  • What if we could alter the root prototype object? If we could, then all objects would inherit from it! In the context of JavaScript, this would allow us to change the behavior of every other object of the same type being used.
  • On the frontend, this commonly leads to XSS. On the backend, this could even lead to RCE. The whole point is that we are altering or pre-setting fields that can alter the flow of the program.
  • How do you find this vulnerability? Deserialization of a string to a JSON object and recursive merge operations are good places to look. Here's an additional video by Intigriti.
  • Prototype Pollution is interesting by itself but difficult to find. Keep an eye out for it in future testing.

Wodify Security Advisory - 589

Bishop Fox    Reference →Posted 4 Years Ago
  • The Wodify gym management web application is designed to help gyms grow. It is heavily used among CrossFit boxes, mainly in the US, but also across other continents and countries.
  • The application had three vulnerabilities. The first two are fairly standard: stored XSS (four instances) and insufficient access controls via an IDOR.
  • The final bug was a bit more interesting though! A specific page exposes the user's hashed password and JWT, but only to that user. At first glance, this does not seem like a terrible security problem, since only the user can see it.
  • However, one of the stored XSS vulnerabilities mentioned above could be used in order to exfiltrate this information. Now, this is definitely an issue and should be fixed as a defense-in-depth finding.
  • Just because the authorization works properly does not mean that an information disclosure is not valid. To me, anything that allows persistent access to an account from a single vulnerability or a single view should be cause for concern. For instance, the ability to change a password without knowing the current password would be an issue. Interesting callout!

You're Doing IoT RNG - 588

Dan Petro - Bishop Fox    Reference →Posted 4 Years Ago
  • Random numbers are very important to security. For instance, they are used for encryption keys, authentication tokens and business logic. But even though random numbers are important, computers are terrible at generating them. By design, computers are deterministic; 1 + 1 should always equal 2.
  • There are two types of random number generators: hardware and software. Hardware generators are the focus of this article/video. Hardware RNG design has two common implementations: an analog circuit and clock timings. The analog circuit has a bit that floats between the two values 0 and 1, which is mostly random. The other approach measures the drift between two clock timings.
  • Hardware random number generators have their own issues. With the analog circuit method, you must give the circuit time to settle into the next cycle; otherwise, the same number will be returned twice. With the clock method, it is possible for the clocks to sync up. So, as long as you're not calling the function too often, you are likely okay.
  • Lots of IoT devices do not run an operating system. As a result, a call to a HAL (hardware abstraction layer) is made to reach the hardware RNG. This function returns two values: an output variable (the random number) and a return code. It turns out that no one checks the return code. This can result in all zeros being returned, or only partial entropy.
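The failure mode is easy to model with a stub HAL (names here are hypothetical; real vendor HALs differ):

```python
HAL_OK = 0
HAL_NOT_READY = 1

def read_rng_unchecked(hal_trng_read) -> int:
    """Anti-pattern from the talk: the return code is silently
    dropped, so on failure the 'random' buffer may still be all
    zeros."""
    out = bytearray(4)
    hal_trng_read(out)  # status ignored -- the bug
    return int.from_bytes(out, "little")

def read_rng_checked(hal_trng_read) -> int:
    """Correct pattern: refuse to use the output on any error."""
    out = bytearray(4)
    if hal_trng_read(out) != HAL_OK:
        raise RuntimeError("TRNG not ready; output must not be used")
    return int.from_bytes(out, "little")
```

In the unchecked variant, a failed read quietly yields a zeroed buffer — exactly the all-zeros "key" the talk warns about.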
  • Instead, use a cryptographically secure pseudorandom number generator (CSPRNG). It never blocks execution, the API never fails and it pools from many sources. Most operating systems gather hardware randomness, timing, network activity and many other items for entropy. Then, this randomness is used to seed the CSPRNG. How do we fix this problem? Instead of just getting hardware random numbers, like most IoT devices do, there needs to be a built-in CSPRNG subsystem.
  • How can this actually be exploited? It really depends on the device and business logic! One common trigger is generating a key for asymmetric encryption, which will likely eat up all of the entropy and cause non-random numbers to be returned.
  • In general, there are two ways for blackbox approaches:
    • View the output of the RNG from the application. For instance, the RSA keys or certs mentioned above.
    • Tax (constantly call) the RNG. This will likely cause the RNG to lose entropy or return zeros.
    For whitebox approaches, look for return codes that are not validated and other common security issues.
  • The authors claim that code interfacing with the hardware RNG is often also at fault. For instance, one vendor's datasheet describes (on pages 1006 and 1052) how to properly use the RNG output for security-based events: after reading a 32-bit number, the next 32 calls to the RNG had to be thrown out. Otherwise, the numbers would not be properly random.
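That datasheet guidance translates to a pattern like this (the function names are hypothetical stand-ins for the vendor HAL):

```python
def read_trng_word(hal_read_u32):
    """Follow the vendor guidance described in the talk: after taking
    one 32-bit value, throw away the next 32 reads so the generator
    can re-accumulate entropy."""
    value = hal_read_u32()
    for _ in range(32):
        hal_read_u32()  # discarded on purpose, per the datasheet
    return value
```

Code that skips the discard loop and consumes every read back-to-back is exactly the misuse the authors say is widespread.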
  • The authors looked at a few chips to examine their random number generation. On the MediaTek 7687, statistical analysis showed the output was nowhere near uniformly random; some numbers occurred much more frequently than others. The Nordic nRF52840 had a problem where the 0x50th (or thereabouts) byte was always 0x0.
  • There are a lot of good tools for doing statistical analysis, such as dieharder, number circle and many others.

Breaking Secure Bootloaders Part 2 - DEFCON 2021 - 587

Christopher Wade    Reference →Posted 4 Years Ago
  • The NXP PN533 is an NFC chip used in mobile phones. These chips use the ARM Cortex-M architecture. The chip communicates over the I2C interface (/dev/nq-nci) and uses a custom protocol for updates. The update path sounds promising, but how do we trigger an update?
  • The Android phone had two firmware files on it. By changing the names of these files, the firmware updater notices that the version numbers differ from the current one. As a result, an update occurs, which can be snooped via logcat.
  • The firmware update uses a hash-chaining process. The write command can target any location in memory we would like, but each block needs a valid hash.
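A hash chain like the one described can be sketched as follows (SHA-256 and the block framing are assumptions; the talk doesn't specify the algorithm):

```python
import hashlib

def build_chain(payloads):
    """Each block carries the digest of the *next* block's payload,
    so block N vouches for block N+1 all the way down the update."""
    blocks = []
    for i, payload in enumerate(payloads):
        nxt = (hashlib.sha256(payloads[i + 1]).digest()
               if i + 1 < len(payloads) else b"")
        blocks.append((payload, nxt))
    return blocks

def verify_chain(blocks) -> bool:
    """Reject the update if any block's payload doesn't match the
    hash its predecessor committed to."""
    for i in range(len(blocks) - 1):
        _, expected = blocks[i]
        if hashlib.sha256(blocks[i + 1][0]).digest() != expected:
            return False
    return True
```

The scheme is only as strong as the integrity of its intermediate state; the attack described later works because that state sat in writable memory.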
  • After reversing the file format, the update process and much more, the author wrote a targeted fuzzer for the firmware-update and NCI interfaces. From fuzzing, the author discovered vendor-specific NCI commands. One of these was an NCI Config Write command. Although something useful may have been possible here, the author bricked the chip's firmware by corrupting its configuration.
  • While fuzzing, the author noticed that the last block of the firmware update could be written multiple times. This implied that the hash of the previous block was still in memory, global in some sense. Because of this, the author went looking for a buffer overflow to corrupt parts of the firmware. When sending an invalid command with the same size as a firmware-update block, the update would fail, implying a buffer overflow in static RAM.
  • What does this mean? The author could plant a modified hash to authorize a write to any portion of memory. Because this is a hash chain and they could overwrite its state, the security of the chain was now broken. By repeating this overwrite, the author could write to any memory block on the chip, including parts of memory the firmware itself was using.
  • Now it was time to patch in new features! First, the author changed the NCI version command to read from an arbitrary location in memory and send the result out. The author found that the global pointer pointed to 0x100007, which could be used to dump the bootloader directly.
  • The entire bootloader was dumped using the read primitive above. With this in hand, the author noted that the firmware on the chip could be overwritten in arbitrary ways for a persistent backdoor or just extended NFC functionality. The PN5180 had the same exact vulnerability, which was likely present on all similar chipsets.
  • The reverse engineering and blackbox testing are incredible to see in action. Without access to GDB, very subtle assumptions need to be made for this to work. Even though the vulnerabilities were fairly straightforward, the hard part lies in actually finding them and figuring out how to exploit them in a blackbox setting. Great research!

Breaking Secure Bootloaders Part 1 - DEFCON 2021 - 586

Christopher Wade    Reference →Posted 4 Years Ago
  • Smartphone manufacturers often use signature verification to protect their firmware. In order to get root access, the signature verification mechanism needs to be disabled. This requires contacting the manufacturer to get the phone unlocked. Besides this, custom tooling is required to unlock the bootloader from the device. If you own it, you should be able to pwn it!
  • On the author's Android device, signature verification of the firmware is performed by the bootloader. When updating the device over USB, most Android bootloaders speak fastboot, a basic USB interface with a myriad of commands for flashing, updating and gathering information. Since most bootloaders are open source and then modified, it is important to analyze the firmware directly with a disassembler.
  • Since custom modifications are a great place to find bugs, the author looked there first. They noticed that the flash command had been modified to allow flashing of specific custom partitions, even when the bootloader was locked. While writing a custom fastboot binary, the author accidentally caused a crash with an improper ordering of commands. This appeared to be a buffer overflow in some parsing functionality.
  • But how do you find the cause of a crash without a debuggable setup? You cannot just attach GDB to this! In addition, a hard reset is required to get the phone working again, which makes it hard to dump the phone's memory and learn about its current state. So, the author wrote an automated script that would overflow by a SINGLE byte and check whether a crash occurred. If not, it moved on to the next byte; if the phone crashed, it tried another value. Although this is not perfect, it is good enough for identifying the corruption.
  • The author viewed the data from the crash and determined that it was opcodes. From there, they searched for similar patterns and values in the disassembled bootloader and found it was part of the bootloader itself! The buffer overflow was overwriting the bootloader's own code in RAM.
  • The author tested the same vulnerability on a different phone and found the same issue, just with a different number of bytes until the crash. This implies the vulnerability is present but the phones use different memory layouts. The issue affected Qualcomm's SDM660 chip.
  • The Qualcomm chip encrypts the userdata partition, preventing chip-off analysis via an internal security mechanism on the chip. If an unlocked bootloader tries to access the partition, it is identified as corrupted. The keys are inaccessible (even with code execution) and the EFI API that decrypts the partition is not modifiable. The API verifies whether the bootloader is unlocked and whether the firmware is signed before allowing access to the keys. The new goal is to bypass this to decrypt the partition.
  • The author looked at how the flow of execution works and noticed a large gap between where the verification is done and where the execution happens; this is the classic bug known as time-of-check/time-of-use (TOCTOU). The author had to modify the bootloader in very particular ways to exploit this:
    • Verify with one image, then actually use another, malicious one.
    • Change the boot command to be accessible. Since the bootloader still appears locked, the Android image can access the keys. Game over!
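The TOCTOU shape above can be modeled in a few lines (a toy model, not the actual bootloader code):

```python
import hashlib

class FirmwareSlot:
    """Stands in for the flash partition the bootloader boots from."""
    def __init__(self, image: bytes):
        self.image = image

def verify(slot: FirmwareSlot, expected_sha256: str) -> bool:
    """Check time: hash whatever is in the slot right now."""
    return hashlib.sha256(slot.image).hexdigest() == expected_sha256

def boot(slot: FirmwareSlot) -> bytes:
    """Use time: re-read the slot. Nothing ties this read to the image
    that was verified -- that gap is the TOCTOU window."""
    return slot.image
```

The fix is to verify and use the same in-memory copy, so nothing can be swapped between the two steps.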
  • This video is really long and covers two different exploits on two different chips, so this is part 1 of my analysis.

OTA remote code execution on the DEF CON 27 badge via NFMI - 585

Seth Kintigh    Reference →Posted 4 Years Ago
  • Near Field Magnetic Induction (NFMI) is a short-range physical layer that communicates by coupling a tight, non-propagating magnetic field between devices. It is similar to radio waves but only works over short distances, using two coils to communicate.
  • The DEF CON 27 badges communicated over NFMI. The MCU does the bulk of the work, and the NFMI chip, connected over UART, handles communication with the other badges.
  • For debugging, the badge has a JTAG interface, a serial interface and SWD, which together allow full control of the device: you can dump the firmware, read registers, and more.
  • While reverse engineering the firmware with IDA Pro, the author found a horrible buffer overflow. The code writes bytes into a static buffer until it finds the character E. However, there is no limit on the number of bytes we can write!
  • For a quick PoC, the author connected to the badge over SWD and placed a large packet into the transmit ring buffer. Of course, this caused a crash!
  • But how do we exploit this over NFMI with custom code? First, the center frequency is somewhere between 10.56MHz and 10.579MHz. A few scattered sources provided some details, but a bunch of reverse engineering of the signals was still required. The author goes into how the signal works, but the notes for this are not included.
  • Eventually, they took to reversing the firmware of the NFMI chip itself. They needed SWD access to the NFMI chip, but the traces were in the middle of the board. After finding the reset line, the author scratched off a layer of the board, cut the line to the MCU and soldered a wire onto it.
  • After connecting over SWD, the author tried a bunch of different configurations until one eventually worked. Reversing the firmware revealed code that drops all packets over 11 bytes. So, what happened? The badge had more buggy firmware!
  • When receiving the packet over UART from the NFMI chip, a few weird things are done:
    • The badge copies the data from UART byte by byte. If it runs out of space, a partial packet is used without the proper delimiter.
    • There is an off-by-one error: the code checks that two bytes are free when only one is being copied. This allows odd-sized packets.
  • By truncating data in just the right way, we can convince the firmware that a packet is MUCH larger than it actually is! Even though a packet cannot be larger than 11 bytes, we can make one look larger than 11 bytes. Now, the buffer overflow we originally saw is exploitable!
  • Since the data being sent is limited to certain characters (because of the encoding), it is possible to crash the badge but not to get code execution. Still, this is super interesting!

Response Smuggling: Pwning HTTP/1.1 Connections - 584

Martin Doyhenard    Reference →Posted 4 Years Ago
  • Requests do not simply go from client to server nowadays. There are proxies in between, redirects and many other things going on. What would happen if two different hops understood the request or response differently?
  • This attack was originally used on requests to trick the service about which request was actually being sent. This can be done by sending multiple Content-Length headers, both Transfer-Encoding and Content-Length, or anything else that makes two parsers see two different requests. This article discusses a new way to cause a desync, but via the response pipeline.
  • The Connection header specifies connection information in a request; in particular, it tells how persistent a connection should be. It is a hop-by-hop header, which means it is dropped between proxies.
  • The Connection header also names which other headers are tied to the specific connection. These connection-specific headers are then removed from the request when it is forwarded to the next part of the pipeline. What if we listed the Content-Length header there?
  • With the Content-Length header removed from the request, the body of the original request will be interpreted as the start of the next request, while the original request is seen as having an empty body; this is a vulnerability in the RFC itself! Can it be exploited?
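The proxy behavior being abused looks roughly like this (simplified; real proxies normalize headers more carefully):

```python
def strip_hop_by_hop(headers: dict) -> dict:
    """A forwarding proxy drops the Connection header plus every
    header it names. Listing Content-Length there makes the proxy
    strip it, so the next hop sees the request body as the start of
    a brand-new request."""
    named = {h.strip().lower()
             for h in headers.get("Connection", "").split(",")
             if h.strip()}
    named.add("connection")
    return {k: v for k, v in headers.items() if k.lower() not in named}
```

The attacker never sends malformed data; the desync comes purely from the proxy faithfully applying the hop-by-hop rule.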
  • A few ideas from the original request smuggling:
    • Bypass FrontEnd controls with the new smuggled request.
    • Change the response of a different user with the desynced queue.
    • Web Cache Attacks.
    • Make an existing vulnerability, such as reflected XSS, much more impactful.
  • We can do something better though! By smuggling two requests within a single request, we can desynchronize the response queue! The response to the second attacker request will go back to the victim, leaving the victim's own response in the queue. By making one final request, we will receive the response to the victim's request instead of our own. Damn!
  • There is an issue with getting the correct response back though. If a response arrives but no connection is waiting in the queue, the response is dropped. As a result, the smuggled request should be a time-consuming operation so that its response arrives late enough to be sent to our victim.
  • It is also possible to concatenate responses back to the victim from a user's request. This is done by smuggling in a HEAD request whose response contains a Content-Length header but no body, which is against the RFC but very common; subsequent response bytes are then treated as that body. Since the second smuggled request targets a reflected endpoint, we can send arbitrary data back to the victim.
  • The article contains a few other techniques that work as denial of services as well. Overall, this is amazing research that will help many researchers find bugs in the future!