Platform: Code4rena
Start Date: 01/03/2024
Pot Size: $60,500 USDC
Total HM: 4
Participants: 18
Period: 21 days
Judge: Lambda
Total Solo HM: 3
Id: 344
League: POLKADOT
Rank: 2/18
Findings: 3
Award: $15,225.87
Selected for report: 1
Solo Findings: 1

Selected for report: zhaojie
12549.5721 USDC - $12,549.57
An expired cached value can be recovered. A malicious user or contract can exploit this bug to fool other users or cause other, unknown problems.
The LocalCache#set_expire function does not check whether the key has already expired when setting a new timeout:
```rust
pub fn set_expire(&mut self, id: Cow<[u8]>, key: Cow<[u8]>, expire: u64) {
    //@audit key values that have timed out may not be deleted
    self.maybe_clear_expired();
    if expire == 0 {
        let _ = self.remove(id.as_ref(), key.as_ref());
    } else if let Some(v) = self
        .storages
        .get_mut(id.as_ref())
        .and_then(|storage| storage.kvs.get_mut(key.as_ref()))
    {
        //@audit You can increase the timeout period of a key value that has expired
        v.expire_at = now().saturating_add(expire)
    }
}

fn maybe_clear_expired(&mut self) {
    self.sets_since_last_gc += 1;
@>  if self.sets_since_last_gc == self.gc_interval {
        self.clear_expired();
    }
}
```
As the code above shows, set_expire first calls maybe_clear_expired. As the maybe_* prefix suggests, this function does not necessarily delete expired keys: it only cleans them up once the GC counter reaches gc_interval. Therefore, keys that have already expired can still be found in storages, and their expiration time can be reset. In other words, set_expire can reactivate an expired key.
Let's look at another function, LocalCache#get:
```rust
pub fn get(&self, id: &[u8], key: &[u8]) -> Option<Vec<u8>> {
    let entry = self.storages.get(id)?.kvs.get(key)?;
@>  if entry.expire_at <= now() {
        None
    } else {
        Some(entry.value.to_owned())
    }
}
```
The get function returns None once it finds that the key has expired, which makes the cached data inconsistent.
When a key value (such as a balance, signature, or debt) has expired, an attacker can declare that the value no longer exists. The victim is then asked to take some action, queries LocalCache#get, and finds that the value indeed no longer exists. The problem is that the attacker can later use set_expire to restore this value.
Another attack scenario: the developer assumes that an expired key value (such as an NFT) no longer exists, but a malicious user can keep that value alive indefinitely as long as set_expire can be called.
Note: LocalCache#set does not have this problem, because it calls Storage#set, which first calls self.remove to delete the existing key.
vscode, manual
```diff
pub fn set_expire(&mut self, id: Cow<[u8]>, key: Cow<[u8]>, expire: u64) {
-   self.maybe_clear_expired();
+   self.clear_expired();
    if expire == 0 {
        let _ = self.remove(id.as_ref(), key.as_ref());
    } else if let Some(v) = self
        .storages
        .get_mut(id.as_ref())
        .and_then(|storage| storage.kvs.get_mut(key.as_ref()))
    {
        v.expire_at = now().saturating_add(expire)
    }
}
```
Other
#0 - c4-pre-sort
2024-03-25T04:45:13Z
141345 marked the issue as primary issue
#1 - c4-pre-sort
2024-03-25T04:45:22Z
141345 marked the issue as sufficient quality report
#2 - 141345
2024-03-25T04:51:15Z
the expired cache will only be cleared once sets_since_last_gc reaches a certain value, which makes it possible to recover an expired cache entry
#3 - kvinwang
2024-03-26T03:55:11Z
This is a good catch.
I'm not sure if this should be classified as High Risk
level. The purpose of the local cache is to store volatile data, such as information fetched from an HTTP server. This data is expected to be lost at any time and may vary between different workers for the same contract. If the data is lost, the contract will re-fetch it from the external system.
Therefore, I don't think it is suitable for storing on-chain assets that users can query.
#4 - c4-sponsor
2024-03-26T03:55:17Z
kvinwang (sponsor) confirmed
#5 - c4-sponsor
2024-03-26T03:55:20Z
kvinwang marked the issue as disagree with severity
#6 - c4-judge
2024-03-27T15:13:07Z
OpenCoreCH changed the severity to 2 (Med Risk)
#7 - OpenCoreCH
2024-03-27T15:15:08Z
Good finding, but agree that Medium is more appropriate as the finding does not show any direct way how this can be used to steal funds, but only speculates about potential methods (which I am not sure if they can happen in practice and even if, they would have many assumptions)
#8 - c4-judge
2024-03-27T15:15:20Z
OpenCoreCH marked the issue as selected for report
#9 - DadeKuma
2024-03-28T15:43:52Z
I'm a bit skeptical about this finding. First of all, the docs state that the cache will store off-chain computations and not on-chain data that users can query/call:
```rust
//! The LocalCache provides a local KV cache for contracts to do some offchain computation.
//! When we say local, it means that the data stored in the cache is different in different
//! machines of the same contract. And the data might loss when the pruntime restart or caused
//! by some kind of cache expiring machanism.
```
There is no proof that these off-chain computations can be leveraged to impact the protocol or leak value in any way.
For these reasons, I believe this finding should be capped to QA/Low, not Medium risk.
#10 - zhaojio
2024-03-29T09:57:29Z
Although the user cannot query the cached data directly, the user can query it indirectly through the contract. The cache stores off-chain data, which can also cause problems if it is recovered. For example, with price data, an attacker who revives an expired cache entry may end up with 2 different prices, which can lead to price manipulation. The report says that storing xxxx in the cache is just an assumption.
Severity is determined by the judge, and I think it should at least remain Medium.
#11 - DadeKuma
2024-03-29T10:39:25Z
The set_expire function is never called inside the contest repository, nor in the main Phala repository, and the only proof of how it will be used is inside the docs in the file itself, which points to off-chain computations.
Using this cache to store on-chain data would be a misuse and a user error (and I don't think it would even make sense, as the data is volatile/incongruent between the workers). This is also confirmed by the Sponsor in the comment above.
Of course, the Judge will decide the final severity. I was just adding some details that might have been missed in the initial submission.
EDIT: set_expire is actually called; GitHub search is broken, see the comment below. My point on docs/normal usage stands.
#12 - zhaojio
2024-03-29T10:50:56Z
> The set_expire function is never called inside the contest repository, nor in the main Phala repository, and the only proof of how it will be used is inside the docs in the file itself, which points to off-chain computations. Using this cache to store on-chain data would be a misuse and a user error (and I don't think it would even make sense as the data is volatile/incongruent between the workers). This is also confirmed by the Sponsor in the comment above.
>
> Of course, the Judge will decide the final severity. I was just adding some details that might have been missed in the initial submission.

You should search for set_expiration:
```rust
pub fn set_expiration(contract: &[u8], key: &[u8], expiration: u64) {
    with_global_cache(|cache| cache.set_expire(contract.into(), key.into(), expiration))
}
```
#13 - OpenCoreCH
2024-03-30T16:00:47Z
> First of all, the docs state that the cache will store off-chain computations and not on-chain data that users can query/call:
I agree with that. If it were used for on-chain data such as balances, High would be more appropriate. However, even for off-chain data, this behaviour could lead to problems. For instance, if the cache is used for caching API / web responses (which seems to be one of the most common use cases for the cache), it is not unreasonable that a developer wants to enforce a maximum age for the response (and with cache_set_expire, there is an exposed function for doing exactly that, which does not always work correctly, as the warden has shown). In these cases, it also does not matter that the data is volatile / different between workers, because you care about the maximum age, and fetching newer data is fine. One example I can think of is betting on the result of an API service that refreshes daily, where you would set the expiration such that no new requests are made until the next refresh. Of course, this has some assumptions, but I think they are pretty reasonable for such a runtime, and it is well possible that there could be contracts that trigger this issue.
2606.4496 USDC - $2,606.45
https://github.com/code-423n4/2024-03-phala-network/blob/a01ffbe992560d8d0f17deadfb9b9a2bed38377e/phala-blockchain/crates/pink/chain-extension/src/lib.rs#L100 https://github.com/code-423n4/2024-03-phala-network/blob/a01ffbe992560d8d0f17deadfb9b9a2bed38377e/phala-blockchain/crates/pink/chain-extension/src/lib.rs#L184
An attacker can perform a DoS attack on a worker.
Pink Extension provides two functions for making HTTP requests: http_request and batch_http_request.

The default timeout of http_request is 10 * 1000 ms:
```rust
impl<T: PinkRuntimeEnv, E: From<&'static str>> PinkExtBackend for DefaultPinkExtension<'_, T, E> {
    // .....
    fn http_request(&self, request: HttpRequest) -> Result<HttpResponse, Self::Error> {
        http_request(request, 10 * 1000).map_err(|err| err.display().into())
    }
    // .....
}
```
The problem is that batch_http_request accepts a timeout of any value. It caps the number of HTTP requests at 5, but there is no limit on the timeout period:
```rust
pub fn batch_http_request(requests: Vec<HttpRequest>, timeout_ms: u64) -> ext::BatchHttpResult {
    const MAX_CONCURRENT_REQUESTS: usize = 5;
    if requests.len() > MAX_CONCURRENT_REQUESTS {
        return Err(ext::HttpRequestError::TooManyRequests);
    }
    // .....
}
```
This enables DoS attacks. An attacker can launch a large number of requests with an effectively infinite timeout. A large number of in-flight HTTP requests then accumulate in the worker, which may lead to problems such as memory exhaustion.
An attacker can also set up a malicious web server that keeps the connection alive as long as possible, for example by continuously trickling data to the client.
In addition, the Extension ultimately runs in the worker as a .so file, so the HTTP request code is not executed in the VM, and the DoS cannot be prevented by charging gas.
vscode, manual
```diff
pub fn batch_http_request(requests: Vec<HttpRequest>, timeout_ms: u64) -> ext::BatchHttpResult {
    const MAX_CONCURRENT_REQUESTS: usize = 5;
    if requests.len() > MAX_CONCURRENT_REQUESTS {
        return Err(ext::HttpRequestError::TooManyRequests);
    }
+   if timeout_ms > MAX_TIMEOUT_MS {
+       return Err(ext::HttpRequestError::TooLargeTimeout);
+   }
}
```
DoS
#0 - c4-pre-sort
2024-03-25T01:00:37Z
141345 marked the issue as duplicate of #43
#1 - c4-judge
2024-03-28T08:44:34Z
OpenCoreCH changed the severity to 2 (Med Risk)
#2 - c4-judge
2024-03-28T08:44:58Z
OpenCoreCH marked the issue as satisfactory
Selected for report: Cryptor
Also found by: 0xTheC0der, Bauchibred, Daniel526, XDZIBECX, ihtishamsudo, zhaojie
69.8535 USDC - $69.85
An attacker can perform a DoS attack on a worker.
async_http_request in chain-extension/lib iterates over request.headers and adds the requested headers to another map:
```rust
async fn async_http_request(
    request: HttpRequest,
    timeout_ms: u64,
) -> Result<HttpResponse, HttpRequestError> {
    // .....
    let mut headers = HeaderMap::new();
    for (key, value) in &request.headers {
        let key =
            HeaderName::from_str(key.as_str()).or(Err(HttpRequestError::InvalidHeaderName))?;
        let value = HeaderValue::from_str(value).or(Err(HttpRequestError::InvalidHeaderValue))?;
        headers.insert(key, value);
    }
    // .....
```
The request is passed in by the user at invocation time and has no size limit. If a malicious user sends a large request.headers, and sends many such requests at the same time, memory may be exhausted, and the for loop will consume CPU time, resulting in worker failure, network delays, and other problems.
Another factor enabling a DoS attack is that the HTTP request code is not executed in the VM, so the DoS cannot be prevented by charging gas: the HTTP request code is compiled into a native .so file.
vscode, manual
Limit the size of request.headers.
DoS
#0 - c4-pre-sort
2024-03-25T04:45:12Z
141345 marked the issue as primary issue
#1 - c4-pre-sort
2024-03-25T04:45:24Z
141345 marked the issue as sufficient quality report
#2 - 141345
2024-03-25T04:51:52Z
huge header size results in memory overflow
#3 - kvinwang
2024-03-26T05:50:30Z
The guest contract can send a maximum of 2MB of headers to the runtime due to the VM memory limit. Therefore, memory should not be a problem. However, it may construct 2MB of headers with empty values to consume CPU resources. There is a scheduler in the worker that, if a query to a contract takes too much time, will decrease the priority of subsequent queries to the same contract.
Anyway, a header size check is good to have in the runtime. Might be QA level.
#4 - c4-sponsor
2024-03-26T05:50:36Z
kvinwang (sponsor) confirmed
#5 - c4-sponsor
2024-03-26T05:50:44Z
kvinwang marked the issue as disagree with severity
#6 - c4-judge
2024-03-27T15:16:27Z
OpenCoreCH changed the severity to QA (Quality Assurance)
#7 - OpenCoreCH
2024-03-27T15:17:15Z
Agree with the sponsor's take, limit is a good idea to have, but it is not clear (and I doubt it because of the memory limit) that it could actually be used to cause a DoS.
#8 - c4-judge
2024-03-27T18:36:04Z
OpenCoreCH marked the issue as grade-b