People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!
The contract eventually performs a call() to a function, and the surrounding Yul code is worth discussing further.
The code copies the calldata of the program into EVM memory and then reads swapAmount from the calldata via a user-controlled index. This is where the fun begins! swapAmountInDataIndex is a 32-bit integer.
The offset of the value is computed as ptr + 36 (0x24) + swapAmountInDataIndex * 32 (0x20).
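A minimal sketch of this computation (the ptr value is hypothetical; all EVM arithmetic is on 256-bit words, so it wraps modulo 2**256):

```python
# Toy model of the offset calculation from the Yul code.
MOD = 2**256

def data_offset(ptr: int, index: int) -> int:
    # ptr + 36 (0x24) + swapAmountInDataIndex * 32 (0x20)
    return (ptr + 0x24 + index * 0x20) % MOD

ptr = 0x80  # hypothetical location of the calldata copy in memory

# A small index stays inside the copied calldata:
assert data_offset(ptr, 1) == ptr + 0x24 + 0x20

# A huge index overflows the multiplication and wraps the offset back
# around, landing *before* the calldata copy:
evil = (MOD - 0x40) // 0x20  # chosen so index * 0x20 wraps to -0x40
assert data_offset(ptr, evil) == ptr - 0x1c  # 28 bytes before the copy
```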
In the Yul arithmetic, however, the swapAmountInDataIndex variable is a full uint256. Unfortunately, there's an integer overflow in this calculation: the multiplication on the index can overflow, and with a specially crafted value it's possible to wrap back around and modify the function selector that had previously been verified. An arbitrary call in the context of a Solidity smart contract is effectively game over.

A change made account.data point directly to the host's buffer for the Solana memory region. Because everything now uses a shared pointer instead of a per-call copy, validation must happen on each write, an approach referred to as copy-on-write. This changes a key invariant of the system: originally, all of the data was read directly from the underlying DB implementation, so a copy was added to memory and updated once it was written to. After a CPI call, the MemoryRegions structure for the previous call still points to the old buffer, so the runtime grabs vm_data_addr to find the memory region of the original mapping and update it. CallerAccount.vm_data_addr, however, is stored entirely in the VM's heap memory. By modifying the AccountInfo.data pointer in VM memory before triggering a CPI call, an attacker can forge an arbitrary vm_data_addr value. This causes the wrong MemoryRegion to have its host address updated, mapping it to an arbitrary location in virtual memory. By setting a recognizable value on the account data, the target region can be located after some searching; from there, its host_addr can be overwritten and its state set to writable.

This article describes a lot of reverse engineering of a device with a tar-based firmware format, and a firmware downgrade vulnerability found in the process. /opt/cookey.txt is found to contain the encryption key. After some review of a customized kernel driver, the authors are able to decrypt the firmware locally, but cannot modify it because the data is signed. The update is shipped as cs.tar, and the second partition contains recipes, cloud settings, and a recovery firmware image. The version section is what we're after.
This contains three values: date, comment, and force_flag, with the first two being arrays. The original usage of this contained a security issue: the firmware could be downgraded by swapping firmware update file sections between versions. That's a classic replay issue, but it had been fixed in the past, so a new vulnerability was needed, this time by swapping these individual sections around.

The sections are encrypted with AES-EAX mode, which combines AES-CTR for encryption with an OMAC-based tag for integrity. Each section is RSA-signed, but the nonce and tag are excluded from the signature, meaning they can be tampered with. We know the encryption algorithm in this case, but we're unable to modify anything because of the signature. Or can we? In CTR mode, P0 = C0 XOR K0, where K0 is the first keystream block. This can be rearranged to K0' = C0 XOR P0': XORing the plaintext we want with the fixed ciphertext tells us which keystream block (and therefore which nonce starting value) we need. Since we control the nonce and know the key, we can run the encryption forward to search for a nonce that matches. Neat! force_flag is the value we want set to 1, and by brute forcing enough nonces it's possible to do exactly that. All of this works because A) the nonce is not verified and B) the header information with the date, comment, and force_flag is a single encrypted piece of data with nothing else in it. Personally, I find it weird that the signature is unique per section.

Ion uses ExtractLinearSum to convert a value into a linear sum expression. For instance, (x+(2+3)) - (-3) can be transformed into x+8. This type contains three parameters.
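The transformation described above ((x+(2+3)) - (-3) into x+8) can be sketched with a toy model; this is hypothetical and greatly simplified, not Ion's actual representation:

```python
# Toy linear-sum extraction: fold an expression tree into (term, constant).
# Expressions are ints, variable names, or ('add'|'sub', lhs, rhs) tuples.

def extract_linear_sum(expr):
    if isinstance(expr, int):
        return (None, expr)   # pure constant
    if isinstance(expr, str):
        return (expr, 0)      # bare variable term
    op, lhs, rhs = expr
    lt, lc = extract_linear_sum(lhs)
    rt, rc = extract_linear_sum(rhs)
    if op == "add" and (lt is None or rt is None):
        # x + c or c + x: keep the single term, fold the constants
        return (lt if lt is not None else rt, lc + rc)
    if op == "sub" and rt is None:
        # x - c: fold the constant into the sum
        return (lt, lc - rc)
    return (expr, 0)          # not a simple linear sum; leave as-is

# (x + (2 + 3)) - (-3)  folds to  x + 8
assert extract_linear_sum(("sub", ("add", "x", ("add", 2, 3)), -3)) == ("x", 8)
```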
ExtractLinearSum is used in multiple places in the Ion compiler, one of which is folding, or simplifying, linear expressions. The function TryEliminateBoundsCheck tries to merge bounds checks on the same object to simplify things. For instance, array[i+4]; array[i+7] will generate two bounds checks. To merge them, it creates a bounds check object that keeps track of what's going on, eventually leading to a single value of 7 being checked against the length. While the MathSpace property is useful, it's not rigorously verified, and in the case of bounds checks that seems pretty important! Modulo makes sense in some math cases, but it doesn't make sense for bounds checks; Infinite does. So, what if we can find a way to force the numbers used in this operation into the Modulo space on a bounds check? Consider i slightly less than 2^32: array[(i+5)|0]; array[(i+10)|0]. The |0 forces the value to 32 bits. The merged check will overflow because the MathSpace is set to Modulo, leading to a faulty bounds check. This is only possible with really large arrays, so typed arrays are required to make it practically feasible. Map objects were nice for getting an addrOf and a fakeObj primitive; once there, exploitation is trivial.

Halo2 stores evaluations in a map keyed by (Commitment, QueryPoint), mapping to a Value. This key isn't unique enough! It's possible for a "query collision" to occur, where two independent queries have the same key even though their values are expected to differ. In the context of Halo2, the consequence is horrible: one evaluation can silently overwrite the other. This means it's possible to forge proofs in many situations.

Short positions are opened via increasePosition, which did NOT update globalShortAveragePrices in the ShortsTracker contract. Only later, when the position is decreased, is the value updated: decreases update the entry, but increases do not. This is not really a vulnerability by itself, just a quirk of the protocol. Another piece of the puzzle: the protocol calls enableLeverage on the core code before performing any of the trades.
There was a backend off-chain service, the keeper, that would trigger this functionality. While the keeper made this call, it was possible to redirect execution and call the GMX contracts while leverage was still enabled; this is the vulnerability that makes the attack possible. The keeper calls PositionManager to enable leverage. The Orderbook would then execute executeDecreaseOrder(), update the attacker's position, and pass execution to the attacker's contract because the collateral token is WETH. The attacker's fallback function would transfer 3000 USDC to the vault and open a 30x leverage short against WBTC using increasePosition. Because of the design quirk described earlier, globalShortAveragePrices was not updated at that point. During a future call to the ShortsTracker contract, globalShortAveragePrices would finally be updated, dropping the recorded price of WBTC to about 57x less than it should have been. From there, the attacker would call mintAndStakeGlp to mint a lot of GLP. Next, they would call increasePosition to deposit a large amount of USDC on WBTC, updating globalShortSizes and increasing AUM dramatically. Finally, they would call unstakeAndRedeemGlp to redeem far more tokens than they were entitled to. But why? globalShortSizes reflected the huge new position, but the average price it was measured against was still the manipulated one. When performing calculations on the trades, the market price sat far above the manipulated average, making the shorts appear deeply unprofitable. Naturally, this increases AUM by a lot. By doing this over and over, they took more funds out of the GLP trades than they ever should have been able to.

An unprotected initialize function can be called by an attacker before the real user, setting malicious settings. In reality, if this happened, a legitimate developer should recognize the failure and just try again. At least, that's the argument I've been hearing for a long time. So, what's different here?

The target was guest.microsoft.com. Once logged in via a phone number, no information was shown; this seemed like it wasn't meant to be publicly accessible. The first interesting endpoint was /api/v1/config/, which took a JSON parameter called buildingIds.
Since they had not visited any buildings, no information was provided: the array of buildings came back empty. By providing an ID of 1, however, they were able to see some building information. The next endpoint was /api/v1/host: by providing an email, PII about the employee, such as phone number, office location, mailing address, and more, was returned. The same issue existed for guests based upon their email as well... By requesting ..%2f..%2f..%2f (../../../ URL-encoded), they were able to reach an Azure Functions page. But why!? The proxy was decoding the URL-encoded / characters, and the decoded path was then used by the actual Azure function. Neat! That led them to /api/visits/visit/test, and eventually they managed to get this working to retrieve a wide range of invitation and meeting information. Sadly, they got nothing for the vulnerability: it was moved to review/repo, fixed, and no payment was ever made. Regardless, it was a good set of vulns!
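That proxy double-decoding behavior can be modeled in a few lines; this is a simplified sketch, not the actual Azure setup, and the /admin suffix is purely illustrative:

```python
from urllib.parse import unquote

def proxy_forward(path: str) -> str:
    # Simplified model: the proxy percent-decodes the path once before
    # handing it to the backend function.
    return unquote(path)

raw = "/api/v1/config/..%2f..%2f..%2fadmin"

# Before decoding, the raw path contains no literal "../", so a naive
# front-end check sees nothing suspicious:
assert "../" not in raw

# The backend, however, receives a classic path traversal:
assert proxy_forward(raw) == "/api/v1/config/../../../admin"
```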