Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

Ghost in the Block: Ethereum Consensus Vulnerability- 1497

Giuseppe Cocomazzi - Asymmetric Research    Reference → Posted 1 Year Ago
  • Simple Serialize (SSZ) is used by Ethereum clients in the consensus protocol and in peer-to-peer communication. SSZ's soundness depends on two properties: the round-trip (involutive) property, which says that serializing a value and then deserializing it yields the original value, and the injective property, which says that distinct values have distinct serializations - equivalently, if serialized A equals serialized B, then A must equal B. Some of these properties didn't hold, which resulted in a vulnerability.
  • SSZ relies on offsets and lengths to locate encoded objects. The serialized block object we want to send (SignedBeaconBlockDeneb) has multiple layers of nesting in order to transfer all of its information: within a block is a body, and so on. Starting from the block's offset of 0x64 and adding the body's offset of 0x54 within the block type puts us at 0xB8.
  • The body contains its own set of values with their own offsets into the block data. This whole system of offsets makes the serialization scheme work well, but it should be a requirement that there are no gaps in the data. However, by shifting the offsets of objects (which have set lengths), "ghost regions" - unreferenced gaps - can be inserted into the data.
  • By itself, this isn't a huge deal. However, not all clients handle it the same way - many will reject such block data outright. Since Prysm accepts blocks with ghost regions while Lighthouse rejects them, this leads to a consensus failure in the protocol. Inserting a ghost region does not modify the hash tree root either. When the author set this up locally, the network simply stopped entirely.
  • An interesting takeaway from the author: "Paradoxically enough, the same design choice of favoring multiple implementations has brought a new vulnerability class, that of “consensus bugs”, on which we hopefully shed some new light." Overall, a great article on a subtle difference in the Ethereum serialization code.
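The offset scheme and the ghost-region trick can be sketched as follows. This is a toy illustration in Python, not real SSZ: the encoder and decoder are simplified stand-ins, though the 4-byte little-endian offset layout mirrors the spec.

```python
import struct

def encode(fields, gap=0):
    """Toy SSZ-style container of fixed-size fields addressed by
    4-byte offsets. `gap` inserts unreferenced bytes (a "ghost
    region") before the second field."""
    fixed = 4 * len(fields)
    offsets, data, pos = [], b"", fixed
    for i, f in enumerate(fields):
        if i == 1 and gap:
            data += b"\x00" * gap
            pos += gap
        offsets.append(pos)
        data += f
        pos += len(f)
    header = b"".join(struct.pack("<I", o) for o in offsets)
    return header + data

def decode(blob, sizes, strict):
    offsets = [struct.unpack_from("<I", blob, 4 * i)[0]
               for i in range(len(sizes))]
    if strict:
        # A strict client requires the regions to be contiguous:
        # any unreferenced gap means the encoding is rejected.
        pos = 4 * len(sizes)
        for o, s in zip(offsets, sizes):
            if o != pos:
                raise ValueError("ghost region in serialized data")
            pos = o + s
    return [blob[o:o + s] for o, s in zip(offsets, sizes)]

good = encode([b"aa", b"bbb"])
bad = encode([b"aa", b"bbb"], gap=7)
# Two different byte strings decode to the same value for a lax
# client (injectivity broken), while a strict client rejects the
# second one: a recipe for a consensus split.
assert good != bad
assert decode(bad, [2, 3], strict=False) == decode(good, [2, 3], strict=True)
```

A lax client and a strict client disagree on whether `bad` is a valid block, which is exactly the consensus-failure shape described above.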

The unreasonable success of Fuzzing- 1496

Halvar Flake    Reference → Posted 1 Year Ago
  • Fuzzing is a technique that many of us know and love. But why is it so effective? This talk aims to go through the origins of fuzzing and why it works as well as it does.
  • The origins stem back to software being bad in the 90s and early 2000s. For a while people felt that "you fuzz if you're too stupid to audit code". Over time, this perception changed. At this point, you could send random data to most programs and get a crash from it. This included a remote OpenSSH bug, RealServer (music streaming) RCE, Cisco IKE and an Acrobat font bug.
  • After introducing fuzzing and its effectiveness, the author gives reasons why it's so good. First, it's crazily efficient: it parallelizes well, it's limited only by computing power, and it produces very few false positives. They do mention that it's worth "being clever" to make it faster, which can make a big difference in some situations.
  • Next, it scales with the complexity of the project: it finds weird states that a human doesn't have time to think about. A related point is that fuzzers are generally simple designs compared to tools that require fully understanding a project - sending random data and setting that up is much simpler than static analyzers, solvers, and pure code review.
  • The final section discusses the similarities between AI and fuzzing, based on "The Bitter Lesson" - the observation that computation and search beat human intuition. The article linked above walks through the history of chess and Go AI and ends with computer vision. I personally fall into the trap of assuming my personal knowledge will beat a computer doing something, but that's almost always wrong. Combining the human's ability to optimize and make the computers faster is what we should focus on.
  • In the case of fuzzing, they see the same pattern: fuzzing requires lots of computing power, with the smarts of the person who set it up determining how efficient it is. The success of fuzzing depends on large tree searches.
  • They go through the issues with code coverage as the main metric for fuzzers, which is limited by the program's implicit state machine. How can this be improved? Should the state machine be modeled explicitly?
  • The end asks whether the future will be more clever fuzzing or more systems engineering to make the fuzzer run more iterations. I think it's a combination of both, but it draws interesting parallels to an industry that I had not considered much in security.
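The coverage-guided loop at the heart of modern fuzzers can be sketched in a few lines. This is a toy illustration, not any real fuzzer: the target function and its branch-ID "coverage" signal are made-up stand-ins for an instrumented program.

```python
import random

def target(data: bytes) -> set:
    """Toy target: returns the set of branch IDs it executed.
    A real fuzzer gets this signal from instrumentation."""
    branches = {0}
    if len(data) > 3:
        branches.add(1)
        if data[0] == ord("F"):
            branches.add(2)
            if data[1] == ord("U"):
                branches.add(3)  # pretend this branch crashes
    return branches

def mutate(data: bytes) -> bytes:
    buf = bytearray(data or b"\x00")
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(rounds: int = 200_000):
    corpus, seen = [b"seed"], set()
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        cov = target(candidate)
        if not cov <= seen:       # new coverage: keep this input
            seen |= cov
            corpus.append(candidate)
        if 3 in cov:
            return candidate      # the "crash" was reached
    return None

random.seed(1)
crash = fuzz()
# The corpus walks toward an input starting with b"FU": random
# mutation plus a keep-what's-new feedback loop does the searching.
```

The loop embodies the talk's thesis: almost no human insight, just computation and a cheap feedback signal, yet it homes in on states a reviewer might never enumerate.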

SSH Keystroke Obfuscation Bypass- 1495

Philippos Maximos Giavridis     Reference → Posted 1 Year Ago
  • SSH has a problem where a passive observer can deduce information from metadata, which violates what we expect of an encrypted protocol. By default, each keystroke is sent as its own clearly identifiable, timestamped packet. To combat this, SSH started obfuscating the keystrokes.
  • The obfuscation hides the keystroke packets among a wave of fake packets that should look the same. When a keystroke is made, a flood of these chaff packets goes out to hide the real keystrokes.
  • The author decided to do some analysis on the sizes of these packets to see if the protection actually worked. While analyzing, they noticed that some packets were substantially larger than the rest! The chaff packets should be the same as the keystroke packets in size in order to mask them but this doesn't appear to be the case. What's going on?
  • After reviewing the source code, Wireshark captures, and SSH verbose-mode logs, they understood what was going on... SSH can group multiple requests together into a single packet. Starting with the second keystroke, the real keystroke is packaged up with a PING packet, creating a packet twice the size of a normal keystroke plus two server-side responses.
  • Using this knowledge, it's possible to recover the same information as before: how many keystrokes were made and at what intervals. They created a pretty cool tool for doing this! Typing certain commands has a specific rhythm (such as sudo apt upgrade), making it possible to recover the actual command sent from the packets. Overall, a good post on side-channel analysis and how easy it is to mess up these types of protections.
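The core of the side channel fits in a few lines. This is a toy model, not the author's tool: chaff packets and lone keystrokes share one padded size, while a keystroke coalesced with a PING is roughly double. The packet sizes below are made-up stand-ins, not real OpenSSH wire lengths.

```python
BASE = 36  # assumed on-the-wire size of a chaff/keystroke packet

def classify(sizes):
    """Return indices of packets that betray a real keystroke:
    the oversized ones where a keystroke was coalesced with a
    PING message."""
    return [i for i, s in enumerate(sizes) if s >= 2 * BASE]

# A hypothetical capture: mostly uniform chaff, three giveaways.
capture = [36, 36, 72, 36, 36, 72, 36, 72, 36]
print(classify(capture))  # [2, 5, 7]
```

Pair those indices with packet timestamps and you have keystroke timing back, which is all the rhythm analysis needs.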

Feeld dating app – Your nudes and data were publicly available- 1494

Bogdan Tiron - Fortbridge    Reference → Posted 1 Year Ago
  • A dating app is an absolute mess in terms of access control. Shocker...
  • The first bug really sets the tone - non-premium users can access premium functionality via direct request; in the mobile app, it's just not shown to the user. Classic bug.
  • After the first vulnerability, it becomes a pile of access control vulnerabilities that are mostly uninteresting from a technical standpoint. Using IDORs on the GraphQL APIs, you could read other users' messages, update another person's profile, get a like from any user, send messages in another person's chat, and view other people's matches.
  • It was possible to view another user's attachments as well. This was a fairly standard IDOR, except that prepending v1 to the URL bypassed all authorization checks. Fuzzing does wonders when done correctly, but this is a fairly weird thing to fuzz for.
  • The other interesting bug was that attempting to re-delete a message would return the contents of the message. Why it keeps a message around after deletion, I'm not sure, but it's an interesting case of an IDOR leading to information disclosure in a weird place. The same bug could be used to delete and edit messages as well.
  • The main reason I wrote this up was how bad the access control of this was and the impact of it. Sometimes, the things without bug bounties are worth looking at in order to make the world a more secure place.
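The v1-prefix bypass is a recognizable bug class: authorization middleware bound to current route prefixes while a forgotten legacy prefix stays mounted. This hypothetical sketch shows the shape of it - the paths, handler, and prefix list are invented, not Feeld's actual API.

```python
# Hypothetical router: authorization is enforced per route prefix,
# but only the current API version is in the guarded list.
AUTHZ_PREFIXES = ["/v2/"]

def handle(path, user, owner):
    """Serve an attachment, checking ownership only on guarded routes."""
    protected = any(path.startswith(p) for p in AUTHZ_PREFIXES)
    if protected and user != owner:
        return 403, None
    return 200, "attachment-bytes"  # legacy /v1/ falls straight through

print(handle("/v2/attachments/123", user="alice", owner="bob"))  # (403, None)
print(handle("/v1/attachments/123", user="alice", owner="bob"))  # (200, 'attachment-bytes')
```

Same handler, same data, different prefix - which is why fuzzing path segments in front of an API route occasionally pays off.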

Zero-Click Calendar invite — Critical zero-click vulnerability chain in macOS- 1493

Mikko Kenttala    Reference → Posted 1 Year Ago
  • macOS Calendar is paired with all of the other macOS services like Mail. The author found a bug in it to get RCE, which is terrifying. They don't just show the bug - they show how to steal photos too!
  • Calendar invites can have attachments. When the attachment's name is used as part of a path, it is not sanitized. This gives us a classic directory traversal, which I cannot believe actually happened in something this important. The result is an arbitrary file write, or an arbitrary file delete if the event/attachment is deleted.
  • Gaining RCE from this was not an easy task and required writing many files and using the Open File functionality of Calendar. First, they create a calendar entry that has Siri Suggested content, which will open other injected files in the future. The next attachment converts old calendar formats to the new format to make sure this attack will work.
  • The next attachment is a .dmg file containing a background image that points to an external Samba server. For whatever reason, even though this has the quarantine flag, it will not be subject to quarantine. The next injected file is used to open a URL, triggered from the mounted Samba share, to open an app. Finder will attempt to open this application, indexing the file and registering a custom URL type.
  • The final file (triggered by the Siri events mentioned before) opens the custom URL that was just registered. When this URL is opened, it executes the binary! This is possible because the quarantine flag is not set on the Samba-loaded file. When the file is executed, it pops a shell - or does something more interesting, like stealing photos...
  • TCC in macOS should prevent access to photos. However, they found a clever trick to steal them anyway. By abusing the RCE, the configurations of Photos can be changed to control the iCloud settings. This allows them to control the location where the files are downloaded to! When the sync happens, they can recover the sensitive files.
  • An amazing blog post! Many of the techniques for taking this to zero-click RCE were interesting and specific to macOS, which probably took a lot of reverse engineering. Using the Siri autoloading to open links, Samba-downloaded files not being quarantined, and forcing the indexing of the custom URI were all awesome finds. The bug was simple, but the exploitation was not!
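The underlying filename bug class is worth seeing concretely. This is a generic Python sketch of path traversal via an attacker-supplied attachment name - the directory and filenames are illustrative, not the real Calendar paths.

```python
import os.path

ATTACH_DIR = "/tmp/calendar/attachments"  # hypothetical destination dir

def save_path_unsafe(filename):
    """Joins the attacker-supplied name directly: '..' escapes the dir."""
    return os.path.normpath(os.path.join(ATTACH_DIR, filename))

def save_path_safe(filename):
    """Keeps only the final path component before joining."""
    return os.path.join(ATTACH_DIR, os.path.basename(filename))

evil = "../../../Users/victim/Library/LaunchAgents/evil.plist"
print(save_path_unsafe(evil))  # escapes the attachment directory
print(save_path_safe(evil))    # stays inside it
```

An arbitrary write target like a LaunchAgents directory is exactly why an unsanitized attachment name is such a strong primitive.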

URL validation bypass cheat sheet- 1492

Portswigger    Reference → Posted 1 Year Ago
  • URLs are notoriously hard to parse. This article is a list of easy-to-try URL domain bypasses, including absolute URLs, CORS bypasses, and weird Host headers.
  • The payloads mix different encodings (URL encoding), classic parser differentials such as semicolons and https://\\, and the use of usernames/passwords in the URL.
  • I had been writing a CTF challenge for the Spokane Cyber Cup. From this article, I found 3 bypasses for one of my challenges immediately. Solid techniques!
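The username/password trick from the cheat sheet is easy to demonstrate. A naive prefix check sees the trusted domain, but the URL's real host hides after the userinfo "@". The domain names here are illustrative.

```python
from urllib.parse import urlsplit

def naive_check(url):
    """A broken validator: string prefix instead of parsed host."""
    return url.startswith("https://trusted.example")

url = "https://trusted.example@evil.example/path"
print(naive_check(url))        # True  - the validator is fooled
print(urlsplit(url).hostname)  # evil.example - where a browser actually goes
```

The fix is always the same: parse first with a real URL parser, then compare the extracted hostname, never the raw string.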

Writeup of CWA-2023-004- 1491

CertiK    Reference → Posted 1 Year Ago
  • In CosmWasm, a module for running Wasm on Cosmos blockchains, the maximum Wasm payload is 800KB. Before a contract is saved to disk, it goes through sanity checks, including one to ensure it's not too big. The bug is effectively a zip bomb used to slow the chain down.
  • When compiling the Wasm bytecode, function signatures can end up inlined multiple times in the compiled code. By using a large signature with many references, it's possible to make the loaded module balloon to megabytes or gigabytes in size. If it grows larger than 2GB, CosmWasm can panic.
  • The cosmwasm-vm crate uses the Mutex type to guard against race conditions on the inner caching of the module. If code panics while holding the mutex, the lock becomes poisoned and unusable, creating a denial of service whenever the object is used. Since all CosmWasm calls now fail, this takes down major parts of contract processing.
  • From the user's perspective, this translates to the blockchain stalling on processing any transaction, akin to a network outage. To fix the issue, additional restrictions were added on the maximum number of functions, parameters, and total function parameters. This limits the size of a payload but doesn't really fix the root cause. Interesting!
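The amplification is back-of-the-envelope arithmetic: one large signature, referenced many times, gets duplicated per reference when inlined. The numbers below are illustrative, chosen only to show how an under-cap upload can cross the 2GB panic threshold - they are not from the actual advisory.

```python
MAX_UPLOAD = 800 * 1024      # the 800KB upload cap

sig_size = 100 * 1024        # one ~100KB function signature (illustrative)
refs = 25_000                # references to it elsewhere in the module
upload = sig_size + refs * 4 # assume each reference costs ~4 bytes
assert upload <= MAX_UPLOAD  # the bomb sails past the size check

inlined = refs * sig_size    # signature duplicated at every reference
print(f"{upload / 1024:.0f} KB uploaded -> {inlined / 2**30:.1f} GB inlined")
```

A sub-200KB upload expanding past 2GB after compilation is why checking the payload size alone could never catch this; the post-fix limits on function and parameter counts bound the expansion factor instead.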

Exploiting Misconfigured GitLab OIDC AWS IAM Roles- 1490

Nick Frichette    Reference → Posted 1 Year Ago
  • OpenID Connect (OIDC) is a common authorization protocol. Of course, AWS supports a way for services outside of AWS to assume IAM roles using it. Besides this post, the authors have many other cases where OIDC permissions are incorrect, leading to privilege escalation. The service of focus this time is GitLab.
  • The default trust policy for GitLab OIDC authentication contains the principal (gitlab.com), an action of AssumeRoleWithWebIdentity, and an optional condition key of gitlab.com:sub identifying the group, project, or branch that is allowed to assume the role.
  • The misconfiguration comes from the condition key being optional - in other words, it fails open. The sub field on the JWT - who is permitted to assume the role - is not a required field. If it is not included, there is a wide variety of ways to assume the role in AWS.
  • The example policy used for the test does not include the sub at all, only the aud. To exploit this, an attacker needs a valid JWT for the sts:AssumeRoleWithWebIdentity invocation. That only requires having an account on GitLab and creating a project with CI and support for JWT generation. In the CI job, we can simply output the GITLAB_OIDC_TOKEN and it will work for us.
  • In AWS, we can then use the token in a call to sts:AssumeRoleWithWebIdentity to assume the role. A trust policy for GitLab generated in the AWS console is insecure by default, which is terrifying. In the case of GitHub Actions and Terraform Cloud, AWS made changes to require specific fields. Overall, a good and concise write-up on a common AWS misconfiguration.
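A vulnerable trust policy has roughly this shape (the account ID and values are illustrative, not taken from the post). Note that the only condition is on aud, which a token from any GitLab.com project can satisfy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/gitlab.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "gitlab.com:aud": "https://gitlab.com"
        }
      }
    }
  ]
}
```

The fix is to also pin gitlab.com:sub in the StringEquals block to the intended project and branch (GitLab's sub claim looks like project_path:mygroup/myproject:ref_type:branch:ref:main, values here hypothetical), so a token from an arbitrary GitLab account no longer matches.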

Unauthenticated Access to GCP Dataproc Can Lead to Data Leak- 1489

Roi Nisimi    Reference → Posted 1 Year Ago
  • Google Dataproc is a managed service that runs Apache Spark and Hadoop clusters for data analytics workloads. When creating an instance, the default allows no internet access, but computers in the same VPC can access the service completely.
  • The Dataproc cluster contains a YARN Resource Manager on port 8088 and HDFS on port 9870. Neither of these require any authentication on them.
  • If an attacker has access to a vulnerable compute instance via an RCE bug, they can then access the Dataproc clusters. If they access the HDFS endpoint, they can browse through a file system to obtain sensitive data.
  • Their key takeaway - using an OSS project and hosting it without considering the security consequences - is a good callout. To me, though, the issue is on Google for using this incorrectly. To fix it, I'd personally add better default network permissions to prevent this from happening. The authors are right: shells happen, and if the public instance doesn't need access to the service, it shouldn't have network access to it.

Persistent XSS on Microsoft Bing.com by poisoning Bingbot indexing- 1488

Supakiad S. (m3ez)    Reference → Posted 1 Year Ago
  • Bing is the Microsoft search engine. BingBot is the web crawler used to keep Bing up to date with search results.
  • When a user searches for a video on Bing, the search engine retrieves the content from its index with all of the video's details. Even though the data is stored as JSON, the returned Content-Type is text/html for some reason.
  • Since the metadata associated with a video is completely attacker-controlled, the browser may interpret the response as a loadable HTML page! The author created a video on several different platforms with script tags in the metadata. Once the indexer picked this up, visiting the exact page for it leads to stored XSS on Bing. A user must click the link in order to be exploited, though.
  • Another Content Type mishap! I feel like I've been seeing more and more write ups about this. Good find!
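The core mistake fits in a few lines of Python. json.dumps does not escape angle brackets, so if the serialized JSON is served with a text/html Content-Type, an embedded script tag reaches the browser intact. The field names below are made up, not Bing's actual schema.

```python
import json

# Attacker-controlled video metadata, as a search index might store it.
video = {"title": "<script>alert(document.domain)</script>", "views": 42}

body = json.dumps(video)
headers = {"Content-Type": "text/html"}  # the bug: should be application/json

# json.dumps leaves the markup intact, so a browser told this is
# HTML will parse the body and execute the script tag.
print("<script>" in body)  # True
```

Serving the same body as application/json (ideally with X-Content-Type-Options: nosniff) keeps the browser from ever treating it as a page.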