Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

How I Reversed Amazon's Kindle Web Obfuscation Because Their App Sucked - 1758

Pixelmelt    Reference →Posted 5 Months Ago
  • The author of this post bought an e-book on Amazon that required the Kindle app on their Android phone. As soon as they opened it, the app crashed. They tried re-downloading it and backing it up, but nothing worked. They had bought the book but couldn't read it, so they decided to reverse engineer the app to get the book into a usable format for themselves.
  • The Web API downloads a TAR file containing several JSON blobs. Upon trying to read this information, they realized that the API heavily obfuscates its responses. There was a mapping of glyphs to character IDs - a simple substitution cipher that changed on every API request.
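As a rough illustration of the obfuscation described above - not Amazon's actual scheme, just a minimal sketch with invented glyph IDs - a per-request glyph-to-character substitution table might look like this:

```python
import random

# Hypothetical sketch of a per-request substitution cipher: each API
# response ships its own glyph-ID -> character mapping, so a decoder
# must rebuild the table for every single request.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def make_request_mapping(seed: int) -> dict[int, str]:
    """Server side: assign a fresh random glyph ID to every character."""
    rng = random.Random(seed)
    glyph_ids = rng.sample(range(1000), len(ALPHABET))
    return dict(zip(glyph_ids, ALPHABET))

def encode(text: str, mapping: dict[int, str]) -> list[int]:
    reverse = {ch: gid for gid, ch in mapping.items()}
    return [reverse[ch] for ch in text]

def decode(glyphs: list[int], mapping: dict[int, str]) -> str:
    return "".join(mapping[g] for g in glyphs)

# Two requests for the same text produce different glyph streams,
# but each decodes correctly with its own per-request table.
m1, m2 = make_request_mapping(1), make_request_mapping(2)
assert decode(encode("hello world", m1), m1) == "hello world"
assert encode("hello world", m1) != encode("hello world", m2)
```

Because the table changes every request, a scraper can't build one static lookup; it has to recover the mapping from the glyph shapes each time, which is what made the reverse engineering painful.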
  • Interestingly enough, browsers handled the data fine because of native Path2D support. A custom SVG parser didn't work, and SVG libraries produced spurious lines everywhere. There were even 4 font variants!
  • After a large amount of effort, they figured out how to render the book. It required a lot of visual mapping code that I'm sure was a pain to write. Although it wasn't perfectly precise, it was good enough to get the job done. The end of the post makes a great point: "Was it worth it? To read one book? No. To prove a point? Absolutely."

More Than DNS: The 14 hour AWS us-east-1 outage - 1757

thundergolfer    Reference →Posted 5 Months Ago
  • AWS had its worst outage in 10 years. It lasted around 14 hours and took down 140 services, including EC2. The author of this post doesn't work at Amazon, but they are an experienced developer who aggregated information from several sources.
  • DynamoDB had a service failure at about 7am. AWS dogfoods much of its own stack internally, so when DynamoDB went down, so did practically everything else that was using it. So, what was the issue? A race condition in DNS registration.
  • DynamoDB has an automated DNS load-balancing system with three DNS Enactors (adders of rules) operating across the three availability zones, without any coordination between them. The Enactor in us-east-1a was running very, very slowly - likely 10x-100x slower than normal.
  • The DNS service automatically gets rid of old plans via a "keep last N" strategy. The DNS plan that was still being enacted fell outside the safety of N, meaning an active DNS plan was removed! This time-of-check vs. time-of-use issue led to everything going down, and the system had no fallback to fix itself once DNS broke.
  • There are two issues here: a TOCTOU bug and a missing check for whether a stale plan was still active. They mention the Swiss cheese model: the more holes in a system, the more likely something is to happen; much of the time, several things need to go wrong at once. The DNS outage itself lasted about 3 hours, but the damage cascaded much further.
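The keep-last-N race described above can be sketched in a few lines (hypothetical names; the real system involves distributed Enactors and DNS state, not an in-process dict):

```python
# Minimal sketch of the TOCTOU bug: a slow Enactor checks that a plan
# exists (time of check), stalls, and by the time it finishes applying
# it (time of use), cleanup has already deleted it -- because cleanup
# never checks whether an old plan is still active.

plans = {}          # plan_id -> DNS records
KEEP_LAST_N = 3

def publish_plan(plan_id: int) -> None:
    plans[plan_id] = f"records-for-plan-{plan_id}"

def cleanup() -> None:
    """Delete everything but the newest N plans -- with no check for
    whether an older plan is still being enacted somewhere."""
    for pid in sorted(plans)[:-KEEP_LAST_N]:
        del plans[pid]

# A slow Enactor picks up plan 1 (time of check)...
publish_plan(1)
active_plan = 1
assert active_plan in plans

# ...meanwhile faster Enactors publish newer plans and cleanup runs.
for pid in range(2, 7):
    publish_plan(pid)
cleanup()

# Time of use: the plan the slow Enactor is still applying is gone.
assert active_plan not in plans   # the active DNS plan was deleted
```

Either a re-check at enact time or a "plan still active?" guard in cleanup closes the hole - both were missing, which is the two-issue point above.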
  • The EC2 service uses DynamoDB for metadata management, so current instances could keep running, but you couldn't create new ones or stop existing ones. Once DynamoDB came back up, EC2 still didn't work: the DropletWorkflow Manager (DWFM) holds a large list of active leases (on the order of 10^6), of which around 10^2 were broken, meaning those connections needed to be reestablished. Heartbeat timeouts climbed to around 10^5, creating a gigantic queue that led to congestive collapse. This was only fixed by manual intervention.
  • The author argues that software is much buggier than we realize. If AWS, the giant of the cloud industry, has these kinds of issues lying around, then there are many more waiting to be discovered. Overall, a good post on the outage.

Blu-ray Disc Java Sandbox Escape via two vulnerabilities - 1756

theflow0    Reference →Posted 5 Months Ago
  • Blu-ray Disc Java (BD-J) runs Java applications called Xlets for menus and bonus-feature functionality. Since the Blu-ray player's manufacturer doesn't trust the disc's code, it runs in a Java sandbox. An Xlet can render menus, use its own memory, and talk to other apps; it cannot do things like read or write files on the hard drive, among many other restrictions.
  • A core component of the sandbox is the Security Manager. Whenever a sensitive operation is requested through Java's internal APIs, a privilege check is performed; if it's rejected, the code fails.
  • The post describes two vulnerabilities. The first is a bypass of vulnerability 3 discussed here. The Security Manager performs a check to ensure that classes under com.sony.gemstack.org.dvb.io.ixc.IxcProxy cannot call invokeMethod, but an attacker can write a subclass of the target class to perform the operation anyway. To fix this, the code now checks the call stack to see whether the class is present.
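The stack-walking fix described above can be sketched in Python (the real check lives in BD-J's Java Security Manager; the class and function names here are hypothetical stand-ins):

```python
import inspect

# Sketch of a caller-based privilege check: walk the call stack and
# refuse the operation if any frame belongs to a restricted class --
# which also catches attacker-written subclasses.

class RestrictedProxy:
    """Stand-in for the IxcProxy-style class that must never reach
    the privileged operation, even via a subclass."""

def stack_contains(cls) -> bool:
    for frame_info in inspect.stack():
        self_obj = frame_info.frame.f_locals.get("self")
        if isinstance(self_obj, cls):   # catches subclasses too
            return True
    return False

def privileged_operation() -> str:
    if stack_contains(RestrictedProxy):
        raise PermissionError("restricted class on the call stack")
    return "ok"

class EvilSubclass(RestrictedProxy):
    def try_call(self):
        return privileged_operation()

assert privileged_operation() == "ok"   # direct call is allowed
try:
    EvilSubclass().try_call()           # subclass is still caught
    blocked = False
except PermissionError:
    blocked = True
assert blocked
```

The next bullet shows why this alone isn't enough: if whitelisted classes sit on the stack between the attacker and the check, the restricted class never appears in the walked frames.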
  • There are still cases where this code needs to be triggered, requiring a whitelist of sorts. By going through allowed classes that call invokeMethod, it's possible to perform the same attack. An attacker can extend IxcRegistryImpl and create a remote proxy for it; by calling bind at the privileged location, verification is skipped. This allows registering arbitrary classes, which shouldn't be possible.
  • The function com_sun_xlet_execute is called via a wrapper of remoteMethod in a doPrivileged block that is accessible from the sandbox. This can be used to overwrite important functions and objects within the runtime. They use it to install a custom security manager that does nothing, leading to a complete sandbox escape. Neat!

Position Spoofing Post Mortem - 1755

Panoptic    Reference →Posted 5 Months Ago
  • Panoptic is a company that maintains trading of perpetual (long/short) contracts. When you take out a short, you are stating "I think this token will drop." Such a contract is called a position that a user takes. The vulnerability was a spoofing issue around the position data, received as a bug bounty report. The reporter got a $250,000 payout - the maximum on the program.
  • The position is stored entirely within a uint256, packed into different sections for gas-efficiency reasons. These include four legs, each consisting of an asset ID, an option ratio, a boolean indicating whether it's a long, a strike price, and several other fields. Additionally, there's a one-time section for Uniswap pool information. So, what's the big deal?
  • A user can have up to 25 open positions. Instead of storing each position ID on-chain, the positions' hashes are combined into a single fingerprint. When the user wants to interact with the contract, the positions are passed in, hashed, and verified against the user's current fingerprint. Fingerprint generation works as follows:
    1. Hash the provided number associated with the position.
    2. Take the lower 248 bits of the hash output.
    3. XOR the lower 248 bits with the accumulated hash, which defaults to 0x00.
    4. Take the number of legs associated with the position and add it to the upper bits of the fingerprint, so the contract knows how many legs are associated with the user.
    5. Repeat steps 1-4 for every position until you're left with a single value.
  • The usage of XOR is a significant problem here. XOR is associative, commutative, and self-canceling. This resembles the hash construction XHASH, which is known to be broken. By treating the fingerprint as a system of linear equations, it's possible to use Gaussian elimination to find colliding combinations within large sets of candidate positions. This allows you to create fake positions whose fingerprint still matches. It requires brute-forcing keccak256 hashes for the input values, but the problem isn't particularly computationally intensive.
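The fingerprint steps and the XOR weakness can be sketched as follows (a minimal model with a hypothetical field layout; the real contract uses keccak256, for which Python's sha3_256 stands in here):

```python
import hashlib

# Sketch of a keep-lower-248-bits, XOR-accumulated fingerprint.
# sha3_256 is a stand-in for keccak256 (related but distinct hashes);
# the leg-count-in-upper-bits packing is a simplified assumption.

MASK_248 = (1 << 248) - 1

def position_hash(position_id: int) -> int:
    digest = hashlib.sha3_256(position_id.to_bytes(32, "big")).digest()
    return int.from_bytes(digest, "big") & MASK_248

def fingerprint(positions: list[tuple[int, int]]) -> int:
    """positions: list of (position_id, leg_count) pairs."""
    acc, legs = 0, 0
    for pid, leg_count in positions:
        acc ^= position_hash(pid)   # step 3: XOR into the accumulator
        legs += leg_count           # step 4: leg count in upper bits
    return (legs << 248) | acc

honest = [(101, 2), (202, 4)]
# XOR is self-canceling: a duplicated fake position XORs to zero, so a
# padded list produces the SAME fingerprint as the honest one (here the
# fake entries carry 0 legs so the leg counter also stays unchanged).
spoofed = honest + [(999, 0), (999, 0)]
assert fingerprint(spoofed) == fingerprint(honest)
```

The full attack doesn't need exact duplicates - with enough candidate positions, Gaussian elimination over the 248 XOR bits finds non-trivial subsets that cancel, which is what makes the giant unvalidated position lists in the next bullet so dangerous.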
  • Another issue that made this MUCH easier was that fingerprint usage was not limited to 25 positions. There was also very little input validation, since it was assumed the fingerprint couldn't be spoofed. This made it possible to pass in giant position lists, making the solving easier.
  • To exploit this, an attacker would do the following:
    1. Borrow assets via a flash loan.
    2. Use the funds to deposit a lot of collateral.
    3. Open two very deep in-the-money positions, a call and a put, with leverage.
    4. Call the withdrawal methods on the contract with a list of zero-collateral-requirement positions. This credits your account without triggering the collateral subtraction.
  • They were unable to upgrade the contracts while this major vulnerability was present, so they commenced a migration process. Over the course of two days they messaged all Panoptic users to remove funds from the protocol. Additionally, two wallets held 70% of the $1.6M; they were able to find the users who owned those wallets, and those users withdrew. That left $550K in the protocol.
  • Time for the crazy part: the whitehat hack. Their plan was the same as the attack described above: take a flash loan, borrow a lot of tokens to open positions, and withdraw the tokens using the spoofed IDs. They took the reporter's PoC and made it work within a single transaction. They didn't post it to GitHub or other tools out of fear of leaking, so the PoC lived on a single dev's machine.
  • They tried using the Flashbots RPC to recover the funds - a requirement because, otherwise, a bot could frontrun the whitehat hack and steal them. Unfortunately, Flashbots rejected the transaction because it required more state reads than allowed. After they contacted Flashbots, the rate limit was raised, and the recovery succeeded. It was done first on the Ethereum L1 and immediately after on the L2s. They claim that 98% of customer funds were safe. To let users claim their funds, the team created a Merkle-root verification system.
  • Awesome post! They have some good lessons learned:
    • Several related issues were discovered on the C4 contest platform, but nobody put the whole spoofing picture together. This quote is perfect: "If we had spent time ourselves trying to come up with a position-list-spoofing method, we may have been able to foresee this vulnerability."
    • Recovery methods, outside of whitehat hacks, are massively important. A large amount of the funds was recovered through reaching out to users. I find this risky because one of the users could have looked for the vulnerability and exploited it. The more people who know, the more likely it is to get exploited.
    • A rule as old as time: use well-established tools and methods at all times. This is especially true with cryptography.
    • If you need to do a whitehat hack, prepare to be perfect. RPC listening, mempool watching, compromised dev machines, GitHub snooping... any of these would have allowed an attacker to carry out the attack themselves. This is a case where perfection is required.

Hunting Evasive Vulnerabilities - 1754

James Kettle - Portswigger Labs    Reference →Posted 5 Months Ago
  • Many vulnerabilities, both whole classes and individual instances, are missed by hackers. It's easy to stay in the comfort zone and look for the standard bugs over and over again. There's a big problem with this, though: over the years an application exists, more and more niche security issues lie dormant, ready to be discovered. This presentation goes through James Kettle's perspective on hunting security issues that others miss. He is the perfect person to publish this research!
  • One reason is a highly visible defense. In the web browser, this may be an X-Frame-Options header, for instance. In James Kettle's case, they wrote a proof of concept for a side-channel leak that the header should have made impossible - and Mozilla said as much. But James hadn't noticed the defense, tried the PoC anyway, and it worked: a bug in Firefox made it possible. What's the takeaway? Write PoCs without considering the defenses and figure them out later. Modern applications are too complicated to fully understand by simply reading the code.
  • A lot of the time, it's a vulnerability fad. A good recent example of this is HTTP request smuggling. Vulnerability classes get popular, many of the bugs get found, and then the class floats into obscurity until the cycle happens again. The advice here is to review old research and techniques and apply them to today's applications.
  • But, there's a catch... techniques get corrupted over time. HTTP request smuggling was originally a desync between proxies; over time, it became merely the ability to bypass WAFs. James recommends going to the original source of the research. It will have the most complete information and give you details that were lost over time.
  • The next reason is fear of failure. Again, HTTP smuggling is an example: the technique isn't feasible, it's too complicated, it's not there... etc. James says to just go ahead and try it! If a technique is very new or very old, then there are likely a lot of good bugs left to find with it.
  • Another reason is the invisible link: application-specific or context-specific knowledge - for instance, a website that uses a custom authorization scheme. Acquiring it is inconvenient and time-consuming, but essential for finding great bugs!
  • The author thinks automation can help find better bugs as well - not full automation, but fuzzing specific inputs. Scan for interesting behavior, not bugs; use the scanner as a lead, not as the finding itself. This helps narrow a huge attack surface down to the juicy bits. James calls this curiosity-powered hacking: test a hypothesis, ask questions, and iterate.
  • To do this effectively, make the questions cheap to ask - the longer you spend on something, the more sure you should be. Next, eliminate noise and be specific with the questions you're asking. Finally, do non-default things.
  • The end of the presentation has the best advice: make it your own. If you do the same thing that everyone else does then you'll be crowded by noise and not do well. There's no winning formula; there are only different formulas. Great talk!

A Story About Bypassing Air Canada's In-flight Network Restrictions - 1753

Ramsay Leung    Reference →Posted 5 Months Ago
  • The author of this post was on a twelve-hour flight from Canada to Hong Kong. The plane had WiFi, but full access required paying $30.75; without paying, the WiFi offered free texting only.
  • acwifi.com is the captive portal and asks for an Aeroplan payment. So some websites work, such as https://acwifi.com, but others, like github.com, do not. Can we circumvent this!?
  • Initially, they tried to disguise the domain. They edited /etc/hosts so that acwifi.com pointed at a proxy server, effectively rebinding the DNS record. When they tried to ping the IP, it failed; their best hypothesis was that ICMP and TLS were blocked.
  • Much of the time, DNS arbitrarily works, and that was the case here as well - for both UDP- and TCP-based DNS queries. This tells us one thing: the firewall allows all traffic through port 53. So they set up a proxy on port 53 and connected to it. Boom! WiFi without paying for it ;) They also think DNS tunneling would have worked.
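The bypass boils down to running an ordinary proxy on a port the firewall trusts. A minimal sketch, with a local echo server standing in for the remote proxy and an ephemeral port standing in for 53 (binding to 53 itself needs root):

```python
import socket
import socketserver
import threading

# Sketch of the idea: the firewall passes ALL TCP on port 53, so any
# plain TCP service listening there on a server you control can carry
# normal traffic through. Here the "proxy" just echoes what it gets.

class EchoProxy(socketserver.BaseRequestHandler):
    def handle(self):
        # A real proxy would forward to the destination; we just echo
        # to show arbitrary bytes traverse the "DNS" port untouched.
        data = self.request.recv(4096)
        self.request.sendall(b"TUNNELED:" + data)

server = socketserver.TCPServer(("127.0.0.1", 0), EchoProxy)
port = server.server_address[1]        # would be 53 in the real setup
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: github.com\r\n\r\n")
    reply = sock.recv(4096)

server.shutdown()
assert reply.startswith(b"TUNNELED:")  # arbitrary TCP made it through
```

In practice you'd point an HTTP/SOCKS proxy at port 53 on a rented server and configure the client to use it; the firewall, filtering by port rather than protocol, waves it all through.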
  • Another mechanism for bypassing the protections would be ARP spoofing. By taking on a different MAC address, you simply become another user who has already paid, as far as the network is concerned. This is slightly more criminal, though, so they decided not to do it.

PROMISQROUTE: GPT-5 AI Router Novel Vulnerability Class Exposes the Fatal Flaw in Multi-Model Architectures - 1752

Adversa    Reference →Posted 5 Months Ago
  • When you use a major AI service like ChatGPT, there is more than one model you're talking to. How does it decide which model to use? More AI! According to this post, very fast neural networks - known as the router - choose which model to use. Some of the backend models are more powerful, while others are less powerful.
  • This creates a potential security issue when it comes to jailbreaking: what if you can trick the router into using a less powerful model? Tricking the router makes jailbreaking much, much easier.
  • This is more of an abuse issue than anything else. You could likely get ChatGPT to generate inappropriate content, such as recipes for bombs. Being able to downgrade past jailbreak detection is interesting!

CVE-2025-59489: Arbitrary Code Execution in Unity Runtime - 1751

RyotaK    Reference →Posted 5 Months Ago
  • To support debugging applications written in Unity, the Android library adds a handler on UnityPlayerActivity for an intent containing unity data. While Android does manage feature flags, it does not prevent other applications from sending this intent.
  • The unity field contained a lot of extra flags. While reverse engineering the library, they found that xrsdk-pre-init-library could be used as an argument to dlopen to load arbitrary libraries. This opens the threat of RCE in the application!
  • A malicious Android application can trigger the intent with its own crafted library. By doing this, its code runs with the same permissions as the Unity application.
  • Exploitation from the browser is somewhat nebulous, though. Because dlopen needs a local file path, the user has to be tricked into downloading a file. By good design on Android, SELinux prevents dlopen from loading files in the downloads directory. Nice protection!
  • This isn't foolproof, though. dlopen doesn't require a file to have the .so extension, and since /data is allowed, any application that writes attacker-controlled data to device storage can supply the malicious library. Good find!

Remote code execution through a vulnerability in Facebook Messenger for Windows (June 2024) - 1750

Dzmitry    Reference →Posted 5 Months Ago
  • Meta's Facebook Messenger can use end-to-end encryption: you can select a friend and start an encrypted conversation with them. Because the chat is encrypted, everything must be verified client-side. This creates a pretty large attack surface, which the author of this post looked into.
  • The author was playing around with Android, sending attachments over encrypted chat to a user on a Windows computer. They tried a trick as old as security itself: path traversal. They added some ../ to the path to see what would happen. It turned out that if a victim can receive messages from you, you can drop a file into any location on their Windows machine!
  • This has two crucial limitations: files cannot be overwritten, and there's a 256-character limit because of the Windows path-length limit. The path that the file name is appended to is 212 characters, leaving 44 to work with. After traversing to the main C: drive, only 12 characters are left. What to do?
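The character budget above works out as follows (a sketch; the 256/212/12 figures are from the post, while the payload name is hypothetical):

```python
# Windows caps the full path at 256 characters, and the attachment
# directory prefix already consumes 212 of them.
MAX_PATH = 256
PREFIX_LEN = 212
budget = MAX_PATH - PREFIX_LEN   # characters left for the filename
assert budget == 44

# Per the post, the ../ chain needed to escape to the C: drive root
# eats all but 12 of those characters.
traversal_cost = budget - 12
assert traversal_cost == 32

# So directory name + DLL name must fit in 12 characters -- which is
# why short program names like Slack and Viber were the targets.
payload = "Slack/x.dll"          # hypothetical 11-character payload
assert len(payload) <= 12
```

This arithmetic is the whole reason the next step (DLL hijacking against short-named programs) works at all; anything with a longer install path simply doesn't fit in the budget.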
  • Slack and Viber are very short names, so the author decided to target those directories. Using DLL hijacking, they were able to plant a DLL that those programs would execute. Naturally, this led to RCE on victim devices.
  • Initially, they received a payout of $35K. They pointed to a bug bounty page about payouts and argued that the information provided was insufficient. After doing that, they were awarded another $75K. It's essential to push back on your payouts!

Compliance is a snake eating its tail, and that's a good thing - 1749

Nabla    Reference →Posted 5 Months Ago
  • A lot of people hate compliance. There's always some new standard to follow. Compliance is a snake eating its own tail. This is a good thing!
  • The tech industry is constantly evolving. If the standards stayed the same, then they would be out of date. That doesn't mean that the original standard was bad - it was just meant for a different time. The next standard will be good for the current cycle, but it will eventually go out of date as well.
  • Sometimes, things become too cumbersome and need to be rethought. Other times, there are more important things to consider than the original design. A pretty short article, but I like the rebirth mentality of compliance standards.