Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Timeless Timing Attacks- 583

Tom Van Goethem & Mathy Vanhoef    Reference →Posted 4 Years Ago
  • Timing attacks are used all over the place in order to implicitly figure out data. Timing attacks are common with cryptosystems to leak information about the key. A timing attack is a specific version of a side channel attack.
  • On the web, this attack is significantly harder because of network jitter: the higher the jitter, the lower the success rate of the timing attack. By moving closer to the target, sending more requests and a few other tricks, it is possible to statistically analyze the results to figure out the timing of some action. Can this be improved?
  • At this point, absolute response timing is inconsistent because of network jitter. Let's remove this! This can be done by exploiting concurrency to force all of the requests to have the same network jitter on the response.
  • Instead of viewing the response time we only care about the response order. In order to make this possible, requests need to meet the following requirements:
    1. Requests need to arrive at the same time
    2. The server needs to process the requests concurrently.
    3. The response order needs to reflect the difference in execution time.
  • For item #1, there are a few ways to do this. With HTTP/2 or HTTP/3, multiplexing allows multiple requests to travel in the same packet and be processed at the same time. With HTTP/1.1, we can use network encapsulation with either Tor or a VPN to achieve this.
  • For item #2, this is application dependent. For item #3, the ordering SHOULD be the same, but it may require inspecting the TCP ordering fields to validate. Both of these are doable though.
  • This new technique blows the old way out of the water! The traditional attack depends on the location of the server and the number of requests that can be made, reaching a precision of roughly 10 microseconds at best. The new technique allows nanosecond-scale measurement, achieving 5 microseconds of precision within only 50 requests! Damn, this is a game changer.
  • The authors took this new knowledge and implemented it in a few places. They used it for a cross-site search attack on HackerOne and the WPA3 WiFi Protocol handshake for EAP-pwd. Exploitation of timing attacks just became more practical!
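The core idea — that shared jitter cancels out when two requests travel together, leaving only response *order* to analyze — can be illustrated with a quick simulation. The numbers and noise model below are invented for illustration and are not taken from the paper:

```python
import random

def absolute_timing_trial(delta_us, jitter_us=5000):
    """Classic attack: each request is measured separately, so each
    response carries its own independent network jitter."""
    base = random.gauss(0, jitter_us)
    target = delta_us + random.gauss(0, jitter_us)
    return target > base  # did the slower code path *look* slower?

def concurrent_trial(delta_us, residual_us=1):
    """Concurrency-based attack: both requests ride the same packet, so
    jitter cancels and only tiny server-side scheduling noise remains."""
    return delta_us + random.gauss(0, residual_us) > 0

random.seed(7)
N = 50
delta = 5  # a 5-microsecond secret-dependent timing difference
abs_hits = sum(absolute_timing_trial(delta) for _ in range(N))
conc_hits = sum(concurrent_trial(delta) for _ in range(N))
print(f"absolute timing:    {abs_hits}/{N} trials ordered correctly")
print(f"concurrent (order): {conc_hits}/{N} trials ordered correctly")
```

With millisecond-scale jitter, absolute timing is a coin flip for a 5µs difference, while the order-based signal is essentially perfect — which is why so few request pairs suffice.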

Snapcraft Packages Come with Extra Baggage- 582

Amy Burnett - Ret2    Reference →Posted 4 Years Ago
  • Snapcraft is a newer Ubuntu package management system, similar in purpose to apt-get.
  • The initial discovery of the bug happened during a CTF. While the author was building a pwnable challenge with Docker, Docker segfaulted. Since Docker NEVER segfaults, they explored the issue further. From the strace output, they noticed that the crash happened when loading a local copy of LibC!
  • This bug looked similar to DLL hijacking on Windows machines. That technique exploits the search path used when looking for libraries: if a library is not found in one location, the loader moves on to the next. The idea is that if we control one of those locations for a privileged process, we can get our own code to run within it.
  • The PID of the crashed Docker process was associated with snap. Snap preaches security through containerization, but most applications include the home plug interface, which makes the home directory accessible inside the container. This is why the local LibC was loaded!
  • Snap packages require a wrapper to launch the container around the application. So, this is likely the case of a bad LD_LIBRARY_PATH environment variable. The path has a small bug in it: a stray ::. Although this does not seem like an issue at first, the empty entry is parsed as the current directory! Damn, that's horrible.
  • This bug allows for the loading of arbitrary code into the bulk of applications wrapped with snap, including Docker, VLC and many others. This application is sandboxed though; is there anything that we can do? Can we escape the container?
  • A large number of Snap applications are GUIs, which utilize the x11 plug. This exposes the /tmp/.X11-unix/X0 domain socket to the container, which lets us send the same commands that other windows can. This allows us to send keystrokes or mouse input to the system. For instance, we can send keystrokes to a terminal in order to pop a shell :)
  • A few takeaways for me:
    • Be observant of strange or unexpected behaviors. There may be a bug lurking close by.
    • Containerized does not necessarily mean secure! Even within a containerized environment, the author was able to escalate privileges some of the time.
    • Any application setting LD_LIBRARY_PATH should be diligent in ensuring it does not introduce sideloading of libraries from unintended (i.e. relative) directories.
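The `::` bug comes down to how library search paths are split. A minimal sketch of the loader's behavior — the `search_dirs` helper is hypothetical (real ELF loaders implement this in C), but the empty-entry-means-cwd convention is real:

```python
# A stray "::" in LD_LIBRARY_PATH produces an empty path component,
# which the dynamic loader conventionally treats as the current working
# directory -- exactly what made the snap wrapper's path exploitable.
def search_dirs(ld_library_path: str):
    dirs = []
    for entry in ld_library_path.split(":"):
        # an empty entry is interpreted as "."
        dirs.append(entry if entry else ".")
    return dirs

print(search_dirs("/snap/core/lib::/usr/lib"))
```

An attacker who can get a victim to run the wrapped binary from a directory they control (and drop a malicious `libc.so.6` there) wins, because `.` is searched before the system paths.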

Potential remote code execution in PyPI- 581

Ryotak    Reference →Posted 4 Years Ago
  • PyPI is the package registry for Python's package manager, pip. This is where most Python packages are stored. It should be noted that the source code for PyPI is open source on GitHub.
  • PyPI once had a documentation-hosting feature, but it was removed since it never caught on. Because of the lack of popularity, a feature was added to delete those documents. The documents are stored in S3, where each project gets its own key prefix.
  • The code for deleting the documents used the S3 Python SDK, calling list_objects_v2 with the prefix parameter. The prefix parameter matches all object keys that start with some text. For example, examp will match examp, example and any other variation starting with that prefix. This resulted in the ability to delete the documentation of arbitrary projects. This was the first vulnerability found.
  • PyPI has a permission management feature for packages. In this feature, the project owner can grant or remove permissions. When removing a role, the role is never checked for ownership. As a result, any user could delete a role on another project by simply knowing the id of the role.
  • GitHub Actions allows for actions to be performed on different repository events. A security hole in this open source repository would allow editing the source code of the repo, compromising everything downstream.
  • The workflow combine-prs.yml collects pull requests whose branch names start with dependabot and merges them into a single pull request.
  • The workflow has no validation on the author of the PR! This could allow malicious users to inject code into the repo. On the downside (for the attacker), the combined PR would still require a manual review, but it is still something to consider.
  • In this workflow, the authors found the usage of echo on a branch name. According to the docs, anything inside ${{ }} is evaluated and substituted into the script before it is passed to bash. As a result, we have a classic command injection that allows escaping the context of echo to run other commands.
  • With command injection within the GitHub Actions workflow, we can assume the permissions of the process. Since the Action has write permission to the GitHub repo, we can use this to take control of the content of the repo. The final payload would be to create a branch with a name like dependabot;"cat TOKEN"#.
  • Overall, a series of interesting bugs! The RCE is peculiar to the extra interpolation done by GitHub Actions. It would be worth the time to check the Actions of other repos for similar vulnerabilities.
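The `${{ }}` injection works because the expression is substituted into the script text before the shell ever parses it. A Python analogue of the vulnerable and fixed patterns (the helper names are invented; in a real workflow the fix is to pass untrusted values through `env:` rather than interpolating them into `run:`):

```python
import subprocess

def run_step_unsafe(branch_name: str) -> str:
    # Mirrors `run: echo "${{ github.head_ref }}"`: the untrusted value is
    # textually substituted into the script *before* the shell sees it.
    script = f'echo "{branch_name}"'
    return subprocess.run(script, shell=True, capture_output=True,
                          text=True).stdout

def run_step_safe(branch_name: str) -> str:
    # Passing untrusted data as an argument keeps it out of the shell's
    # parser entirely (the env-var approach in Actions has the same effect).
    return subprocess.run(["echo", branch_name], capture_output=True,
                          text=True).stdout

evil = 'dependabot"; id; echo "'
print(run_step_unsafe(evil))  # the embedded `id` command actually runs
print(run_step_safe(evil))    # the branch name is printed verbatim
```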
  • Extended Berkeley Packet Filter (eBPF) is a way for a user-mode application to run code in the kernel without needing to install a kernel module. This is important because the kernel is fast, and allowing anybody to add kernel modules is a security hazard. eBPF is used for tracing, instrumentation, hooking of system calls, debugging and packet capturing/filtering.
  • eBPF programs are written in a higher-level language that is compiled into eBPF bytecode. The VM has a simple instruction set with eleven 64-bit registers, a program counter and a 512-byte stack. Prior to loading the bytecode, the verifier runs to ensure the security of the code being added to the kernel. Once loaded, the program is attached to a hook point and runs in an event-driven fashion.
  • The sysctl knob kernel.unprivileged_bpf_disabled determines whether unprivileged users can load eBPF programs into the kernel. When it is left at 0 (as it was on many Linux distros at the time), this is a great attack surface for local privilege escalation.
  • The verifier, as you can imagine, is extremely complicated. How do you even begin to determine if code is safe or not? For more information on the verifier, read this post yourself or read it at ZDI. But, in general, the verifier validates the following:
    • No back edges, loops or unreachable instructions.
    • No pointer comparisons. Only scalar values can be added to or subtracted from a pointer.
    • Pointer arithmetic cannot leave the safe bounds of the memory allocated to it. This is done by verifying the upper and lower bounds of the values of each register.
    • No pointers can be stored in maps or stored as a return value, in order to avoid leaking kernel addresses to user space.
  • When tracking the logical operations (AND, OR and XOR), the 32-bit bounds-tracking code has a small flaw. When updating the expected bounds, the 32-bit operation contains code that defers the update to the 64-bit logic — but the decision of whether to do this update comes from two different variables!
  • The 32-bit operation uses the function tnum_subreg_is_const and the 64-bit code uses the function tnum_is_const. The difference is that the 32-bit function returns true if the lower 32 bits of the register are known constants, while the latter returns true only if the entire 64 bits are constant. This becomes an issue if the operation involves registers whose lower 32 bits are known but whose upper 32 bits are unknown. This breaks the assumption mentioned in the code comments!
  • The article contains a large amount of information not related to the exploit, such as tricks for runtime debugging, logging and many other things. If you are looking to dig into eBPF, this would be a wonderful article to read, with an insane number of reference articles as well.
  • The author's goal was to leave registers with invalid boundaries in order to widen the safe range for pointer arithmetic. By doing this, an out-of-bounds read becomes possible, leaking a kernel address. It should be noted that this worked because the bug above had unexpected consequences, allowing the conversion of a pointer type into a scalar type to skip all sanity checks.
  • With an initial leak, we want an arbitrary read/write primitive via an out-of-bounds access on an array through eBPF map pointers. Actually making this happen with the bug comes down to tricking the verifier into believing a value is 0 when it is actually 1 at runtime. From there, the author defers to the ZDI article above for turning this into an arbitrary read/write primitive (which is game over).
  • The exploitation is insanely complicated and relies upon a deep understanding of how eBPF works. To make matters even crazier, eBPF does dynamic patching of the bytecode in order to make it faster. The above is an overview of how it works, but there is much more to explore in how eBPF works internally.
  • At the end of the article, the author makes an interesting point. In a recent exploit by Qualys, they used a vulnerability to corrupt eBPF code after it had been verified in order to achieve a read/write primitive in the kernel. In order to prevent this exploit method from being usable in the future, it would be a good idea to mark the region read-only after verification.
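The tnum_is_const / tnum_subreg_is_const mismatch can be sketched in a few lines. This is a toy model of the kernel's tnum structure (known bits in `value`, unknown bits flagged in `mask`), not the actual verifier code:

```python
from collections import namedtuple

# Toy model of the verifier's tracked-number ("tnum") representation:
# a bit set in `mask` means that bit of the register is unknown.
Tnum = namedtuple("Tnum", ["value", "mask"])

def tnum_is_const(t):          # 64-bit check: ALL bits must be known
    return t.mask == 0

def tnum_subreg_is_const(t):   # 32-bit check: only the low half matters
    return (t.mask & 0xFFFFFFFF) == 0

# A register whose low 32 bits are a known constant but whose upper
# 32 bits are completely unknown -- exactly the dangerous case:
reg = Tnum(value=0x1, mask=0xFFFFFFFF_00000000)

print(tnum_subreg_is_const(reg))  # the 32-bit path says "constant"
print(tnum_is_const(reg))         # the 64-bit path disagrees
```

Because the 32-bit ALU path consulted the first check but deferred the bounds update to logic gated on the second, a register like `reg` ended up with bounds the verifier believed were exact but weren't.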

No Key? No PIN? No Combo? No Problem! P0wning ATMs For Fun and Profit- 579

Roy Davis    Reference →Posted 4 Years Ago
  • The dream of any hacker: dumping all of the money out of an ATM. The goal of the research was to have a fully operational ATM in the author's office.
  • Purchasing ATMs is surprisingly easy! The author simply looked on eBay and found one for $220. The ATM was bolted to the ground, and they had no key and no PIN; not a great start! The ATM had been purchased from a business that shut down and sold everything off. The author used a jackhammer to remove the ATM from the building's cement.
  • The ATM uses a cylindrical lock. Using a special tool (unnamed), the lock comes out instantly. Alternatively, you can simply buy the key for the enclosure on eBay. We are inside the physical ATM now!
  • The PIN is encrypted at the PIN pad level, which means the computer never sees it. The ATM communicates over the internet in order to reach the actual bank. To see this traffic, the author became a licensed ATM handler, which cost around $5K. As for the encrypted traffic, the ATM allows you to use your own self-signed certificate.
  • On this ATM, the combination for pulling up the admin panel is enter, cancel, 1, 2, 3. There are also three default users, but their default passwords did not work. After trying and trying, the author could not figure out the password. They ended up doing a factory reset to get back to the default firmware; but this required that the vault with the money be open!
  • Well, the "vault open" signal is just a sensor, right? The author followed the wire and noticed that it is accessible from the front of the ATM. It turns out the sensor fails open if removed: now the ATM thinks the vault is open even though it is not! We still want to open the vault though.
  • The author bought the same safe lock used for the vault and reverse engineered how it worked. The main thing to note is that the voltage changes depending on the key being pressed. After reversing this process, they created a device to sniff the traffic on the keypad. If somebody entered the code, it could be recorded!
  • The lock has a tiny hole that can be used to power it externally; applying power opens the lock for us! Additionally, shorting two pins and applying power resets the combination to 555555. Once inside, thousands of dollars were still in the vault. Time well spent for a hack :)

My other car is your car: compromising the Tesla Model X keyless entry system- 578

Lennert Wouters    Reference →Posted 4 Years Ago
  • There are two ways of opening a Tesla (or most cars). First, pressing a button uses rolling codes. Second, a challenge-response model is used for passive entry, such as being close to the car.
  • The key fob has quite a few electronic components: a BLE microcontroller, a certified secure element(?) and a range extender. In order to see what was going on, the author set up an elaborate logic analyzer to watch all data flowing between the different chips.
  • The remote entry goes from the microcontroller, to the secure element and then transmits over BLE. The passive entry does the same thing, except that we receive a challenge from the car prior. Only the secure element knows the secret to let us in.
  • The BLE chip is simply a peripheral, so a phone app, or some other device, can connect to it. The BLE interface exposes an Over-the-Air (OTA) download service, which allows us to change the firmware of the device. Additionally, the APDU interface allows for direct communication with the secure element, though some actions are blocked over BLE.
  • The Over The Air (OTA) update performs a signature check, but the result is then ignored. This allows arbitrary firmware to be installed onto the key fob. So, how do we modify the firmware?
  • The attackers decided not to write custom firmware or patch the original by hand. Instead, they changed occurrences of jumps (in case statements) and checked whether the block list was gone. If so, we're good. They brute forced these patches until everything just worked.
  • The current flow requires that the battery be taken out of the remote, which is not ideal. To get around this, the car's BCM (body control module) can be used to send a wake-up command to the device.
  • The flow for the attack works like this:
    1. Send a wake-up command with a known VIN to a remote. This opens the BLE interface.
    2. Connect via BLE to do the OTA update and install custom firmware. This removes the blocked APDU commands.
    3. With the removal of the blocklist, we can request a code for unlocking the car. This will be returned to the attacker.
    This gives an attacker access to the inside of the car. But, can we drive off with it?
  • The hackers looked into how to create a key fob that would allow them to drive away with the car. To do this, an insane amount of reverse engineering was done on the Tesla engineering Toolbox, UDS and Toolbox's key-provisioning process. If a key can be re-provisioned for the car, then we can take the car for ourselves.
  • The pairing protocol and key-provisioning process are crazily complicated. However, for whatever reason, the secure provisioning process is NOT enforced, allowing a fake key fob to be added trivially. The final step of the attack is to connect to the diagnostic port and pair the modified key fob to the car.
  • The author leaves us with a few points:
    • OTA updates are a blessing and a curse
    • A Secure Element does not guarantee a secure system. Cryptography is typically bypassed, not penetrated. This was essentially an encryption oracle attack.
    • Everything has more attack surface with bluetooth.
    • Embedded systems live in a hostile environment. General purpose MCUs handling security-critical operations are quite bad.
  • The author leaves their Twitter handle and email. There is also a paper, a presentation and a video of the actual attack in action. Overall, amazing research where the attacks were not that complicated; they just took quite a bit of effort in understanding how the system works.

Time Turner - Hacking RF Attendance Systems To Be in Two Places- 577

Vivek Nair    Reference →Posted 4 Years Ago
  • In Harry Potter, Hermione Granger wants to be in two classes at once. With magic, this is possible. But, what about real life?
  • Big schools use dedicated attendance-system devices. These clicker systems are used to run quizzes and to know whether somebody is in class or not.
  • The device is really simple. In order to see how the communication worked, they intercepted the SPI interface. The newer devices have a program fuse burned, making it impossible to dump the firmware. However, the original version of the device forgot to do this, making it vulnerable.
  • The protocol is only 5 bytes. The first 3 bytes are encrypted (poorly, with a substitution cipher). The fourth byte is an index into a lookup table for the packet's answer. The final byte is a weak checksum.
  • A cheap Arduino board can emulate the functionality for us; the firmware for this can be found here. Now, we can emulate a remote with any ID, as any user.
  • But, we still don't KNOW the answer. Now what? We can create a device that listens to all of the other students! By emulating the base station, we can take the most popular answer and send that with any student ID we want. This allows us to NEVER go to class and always get the right answer.
  • Besides this, you can see all other votes. Or, even worse, you can send in votes for other people. System completely broken!
  • To patch this, students should not be able to see other people's votes, alter them or overwhelm the service. To fix availability, they could use frequency hopping (FHSS). To fix confidentiality, they could use mutual key exchange to encrypt the data. For integrity, a unique fingerprint per device would make it nearly impossible to emulate devices other than the ones that are owned.
  • The storytelling of this talk is awesome! They really lean into the story of a student needing to be in two classes at once, and it works.
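A toy reconstruction of the 5-byte packet makes the weaknesses concrete. The substitution table, answer encoding and checksum below are invented for illustration — the real values come from the dumped firmware:

```python
import random

# Hypothetical substitution cipher: a fixed shuffled byte table, so the
# same ID byte always encrypts to the same value (trivially invertible).
rng = random.Random(1234)
SBOX = list(range(256))
rng.shuffle(SBOX)
INV_SBOX = [0] * 256
for i, v in enumerate(SBOX):
    INV_SBOX[v] = i

ANSWERS = "ABCDE"

def build_packet(student_id: int, answer: str) -> bytes:
    ids = student_id.to_bytes(3, "big")
    body = bytes(SBOX[b] for b in ids) + bytes([ANSWERS.index(answer)])
    checksum = sum(body) & 0xFF        # weak: a plain byte sum
    return body + bytes([checksum])

def parse_packet(pkt: bytes):
    assert sum(pkt[:4]) & 0xFF == pkt[4], "bad checksum"
    sid = int.from_bytes(bytes(INV_SBOX[b] for b in pkt[:3]), "big")
    return sid, ANSWERS[pkt[3]]

pkt = build_packet(0xA1B2C3, "C")
print(parse_packet(pkt))
```

Since anyone with the firmware can rebuild SBOX, there is nothing stopping an attacker from parsing every vote off the air or forging packets with any student ID — exactly the breaks described above.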

Cross Over Episode: The Real-Life Story of the First Mainframe Container Breakout- 576

Ian Coldwater and Chad Rikansrud    Reference →Posted 4 Years Ago
  • A mainframe hacker and a container hacker get together to hack the planet. The talk is them joining forces to do some interesting research on containers for mainframes. Mainframes are old and slow; containers are fast and new.
  • zCX runs Docker directly on z/OS for IBM mainframes. ATMs, retail sites, governments, airplanes and many other things use mainframes; they are not just super old relics. zCX is a hypervisor-backed Linux environment on z/OS, and it uses Docker.
  • cgroups control what a process is able to use. Namespaces control what a process is able to see, such as other processes. From the kernel's perspective, a container is no different from any other process. This is the basis of Docker containerization. Namespace remapping (TODO). Two versions of Docker (TODO).
  • The mainframe hacker (Chad) initially looked at the weird installation process for the mainframe. With some static analysis of the bash scripts and encrypted file systems, they started to reverse engineer the system. They found a flag in a bash script that could be turned on to enable debugging.
  • The default user for Docker on z/OS had a fundamental security issue: its permission group had the ability to run containers, which could be used to get root privileges on the host immediately. Additionally, there was an auth plugin that blocked a ton of nice Docker functionality. So, how does this work?
  • After reading ALL OF THE DOCS, they understood what the plugin was blocking (thanks IBM)! The restrictions were missing quite a few things, though; reading the docs was quite helpful here. The restrictions were implemented with pattern matching on the standard Docker commands.
  • Making the Docker socket read-only is a small security measure that only blocks docker run. The Docker binary exposes an HTTP API that can be used to work around all of the restrictions mentioned above. TODO - Getting around namespace remapping.
  • The security of the container relies on a set of keys. Once Ian escaped the container, they found a set of keys (including a signing private key) and handed them to Chad. The encryption keys for the file system were shared across instances, giving a false sense of security.
  • The AuthPlugin was interacting with different services, then doing pattern matching to determine codes for system exits. A system exit is similar to an interrupt plugged into the mainframe. While Ian was confused about this, Chad took time to understand what the system exits in the AuthPlugin were.
  • The reference slides for the talk have a bunch of background information. If interested, these are great resources for learning how to go about this kind of hacking. The storytelling and back-and-forth of the talk is amazing. The cross-disciplinary collaboration helped them both learn a bunch!

Adventures in Buttplug Penetration Testing- 575

Smea    Reference →Posted 4 Years Ago
  • Everything is on the internet. This device is an IoT buttplug that can be controlled from your phone, or remotely by other people. The major usage for this is by sex workers on the internet. The creator of this device holds a patent on tip-based control of the plug.
  • The device connects to a USB dongle via BLE. The dongle connects to the computer and the computer is connected to the internet. The computer application allows for chat, video sending and remote controlling of the buttplugs. From the attacker point of view, we can go from the internet to the buttplug, or in the reverse direction. Both of these are discussed in the article.
  • Now, how does the flow actually work under the hood? The application is built in Electron, which means the JavaScript is only partially obfuscated, making it pretty easy to reverse. When checking for updates, the binary for the USB dongle was saved locally on the machine. Additionally, the dongle itself still had test points on it, making it possible to dynamically debug by soldering a few wires onto it.
  • The buttplug itself had a nice SWD (Serial Wire Debug) interface that allows for easy debugging. With this, it was possible to dump the firmware as well.
  • The JSON parser on the USB dongle had a vulnerability when parsing escape sequences. When using a \u unicode escape sequence, the parser skips 6 bytes. However, if an incomplete unicode escape sequence sits at the end of the string, the parser skips over the null terminator byte. As a result, the copy runs past the allocated string length.
  • The dongle's binary has no protections (NX, ASLR, etc.). The JSON parser's data is placed on the heap. Using an fd-poisoning-style attack on the custom allocator, we can place data at arbitrary locations in memory. With this, we trivially control the flow of execution and run shellcode. The dongle has a DFU (device firmware update) mode that can be used to get persistent code execution on the device. USB dongle compromised!
  • The DFU mode on the buttplug is also insecure. With control over the USB dongle, we can send an update command to the buttplug to take control of it too. The compromise could be used for ransomware or for physically hurting people with the buttplug. With code execution on the dongle and the plug, can we compromise them from the internet?
  • The Electron application's parsing of dongle messages does a few things. Of particular interest to us is the debug log: this function logs the incoming serial data to console.log and throws the contents into a new DOM element. Because this is not sanitized, it results in JavaScript execution, known as XSS. XSS in an Electron app means code execution in the context of the application!
  • The payloads are restricted to 32 characters. Although this does not seem like enough (with only ~10 characters of JS at a time), it is! We can create an array and add our payload to it chunk by chunk, then join the chunks into a string and finally eval it to execute the JavaScript. Although this takes a while, it does work!
  • This still was not enough though... they wanted to be able to compromise the USB dongle from the buttplug. The device includes an entire BLE stack from the semiconductor manufacturer. A statically sized buffer is iterated over depending on the number of BLE handles; however, that number is not validated, resulting in a basic stack smashing attack.
  • The BLE firmware has no binary protections either. As a result, the return address on the stack of the vulnerable function can be overwritten with data from the BLE packet, letting us execute arbitrary code on the device. To work with Thumb instructions, they had to deal with alignment restrictions; this could be handled by allocating packets in the ring buffer beforehand to get the required alignment.
  • The remote control functionality can be used to compromise the dongle with the same bug as before. There is an attempt at validating the vibration amount as an integer; however, the check only verifies that the value is not less than 0 in JavaScript. That logic works for integers but NOT for strings or other objects, so the check fails entirely, allowing one of the bugs from before.
  • The chat portion of the application also has an XSS vulnerability. Because this is Electron, this means code execution on the computer. The author notes that this could be used in a wormable attack to take over every buttplug connected to a computer, simply by sending a message that gets read.
  • The author of the talk made buttplug ransomware with a live demo! This was hilarious to see. The speaker was a tad awkward but had amazing visuals throughout the presentation.
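The dongle's escape-sequence bug is easy to model: an incomplete `\u` at the end of a string makes the parser step over the NUL terminator and keep copying adjacent heap memory. This is a simplified Python reconstruction of the bug class, not the dongle's actual C code:

```python
def buggy_copy(memory: bytes) -> bytes:
    """Copy a C-style string out of `memory`, mimicking the flawed parser."""
    out, i = bytearray(), 0
    while memory[i] != 0:                 # copy until the NUL terminator...
        if memory[i:i + 2] == b"\\u":
            out += memory[i:i + 6]
            i += 6                        # ...but \u unconditionally skips 6
        else:
            out.append(memory[i])
            i += 1
    return bytes(out)

# A 5-byte allocation holding `ab\u` (incomplete escape) plus its NUL,
# followed by unrelated "heap" contents:
heap = b"ab\\u\x00" + b"SECRET-HEAP-DATA" + b"\x00"
leaked = buggy_copy(heap)
print(f"copied {len(leaked)} bytes out of a 5-byte string")
```

The 6-byte skip jumps over the terminator at offset 4, so the loop happily copies the neighboring allocation until it finds another zero byte — an overflow of the destination and an infoleak in one.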

WebContent->EL1 LPE: OOBR in AppleCLCD / IOMobileFrameBuffer- 574

saaramar    Reference →Posted 4 Years Ago
  • The author was reversing IOServices accessible from the app sandbox. While reviewing the decompiled code, they found a fairly trivial bug (in theory) within AppleCLCD/IOMFB.
  • The bug is an arbitrary write with a controlled 32-bit integer index for accessing an IOSurface object. There is no validation on the value of the index being accessed! How can this flow be triggered? The entitlement com.apple.private.allow-explicit-graphics-priority is required. The app sandbox does not have this entitlement, but WebKit.WebContent does.
  • The PoC is super simple! Obtain a user client to AppleCLCD/IOMobileFramebuffer and call IOConnectCallMethod with selector=83, setting the scalar input to a very large number. With the right entitlements, this causes a kernel panic.
  • To exploit this, the author can either craft a fake IOSurface object or use an actual one. They chose the latter because the primitive also allows freeing the object, creating a UAF on the object in its original location. The IOSurface represents a userspace buffer that is shared with the kernel.
  • This object is amazing for exploitation on iOS because it can easily be created and freed via specific functions, and it is perfect for spraying controlled data thanks to the s_set_value function. The author includes three links on using this struct for exploitation on iOS.
  • The first step is to get an information leak. By spraying an insane number of IOSurface objects, the author found a usable offset of 0x1200000 bytes. Using Corellium's EL1 debugging on a hosted/emulated iOS made testing this significantly easier.
  • The article ends with the bug being triggered at the proper offset to dereference a pointer that we control. As a result, this is quite an exploitable bug! The author stops at this point... they planned on posting a full r/w primitive, but the bug was patched before they got there. Overall, I really enjoyed the description of the bug and the author's explanations!
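The bug class — an attacker-controlled 32-bit index used without a bounds check, paired with a heap spray that plants data at a known offset — can be sketched as follows. All offsets, sizes and layout here are illustrative, not the real IOMFB structures:

```python
# Stand-in for kernel memory: a small flat buffer containing a 4-entry
# object array, with "heap" beyond it that we pretend to have sprayed.
ENTRY_SIZE = 8
memory = bytearray(0x20000)
ARRAY_OFF, ARRAY_LEN = 0x100, 4

def read_entry(index: int) -> bytes:
    # The vulnerable pattern: no `if index >= ARRAY_LEN: raise` check,
    # so the index scales straight into an out-of-bounds offset.
    off = ARRAY_OFF + index * ENTRY_SIZE
    return bytes(memory[off:off + ENTRY_SIZE])

# Spraying fills a predictable offset with attacker-controlled data
# (the article's 0x1200000 offset plays this role on the real heap):
SPRAY_OFF = 0x10000
memory[SPRAY_OFF:SPRAY_OFF + 8] = b"FAKEOBJ\x00"

oob_index = (SPRAY_OFF - ARRAY_OFF) // ENTRY_SIZE
print(read_entry(oob_index))   # the "kernel" fetches our sprayed object
```

This is why the spray matters: the OOB access itself only reaches a fixed attacker-chosen offset, so exploitation hinges on reliably landing a controlled object there.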