Dataset Viewer (auto-converted to Parquet)

Columns:
code: string, length 11 to 306k
docstring: string, length 1 to 39.1k
func_name: string, length 0 to 97
language: string, 1 distinct value
repo: string, 959 distinct values
path: string, length 8 to 160
url: string, length 49 to 212
license: string, 4 distinct values
pub fn with_cycles(block: ResponseFormat<BlockView>, cycles: Option<Vec<Cycle>>) -> Self { BlockResponse::WithCycles(BlockWithCyclesResponse { block, cycles }) }
Wraps the block and its cycles into a `WithCycles` block response.
with_cycles
rust
nervosnetwork/ckb
util/jsonrpc-types/src/blockchain.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/blockchain.rs
MIT
pub fn from_ext(ext: packed::EpochExt) -> EpochView { EpochView { number: ext.number().unpack(), start_number: ext.start_number().unpack(), length: ext.length().unpack(), compact_target: ext.compact_target().unpack(), } }
Creates the view from the stored ext.
from_ext
rust
nervosnetwork/ckb
util/jsonrpc-types/src/blockchain.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/blockchain.rs
MIT
pub fn new(hardforks: &core::hardfork::HardForks) -> Self { HardForks { inner: vec![ HardForkFeature::new("0028", convert(hardforks.ckb2021.rfc_0028())), HardForkFeature::new("0029", convert(hardforks.ckb2021.rfc_0029())), HardForkFeature::new("0030", convert(hardforks.ckb2021.rfc_0030())), HardForkFeature::new("0031", convert(hardforks.ckb2021.rfc_0031())), HardForkFeature::new("0032", convert(hardforks.ckb2021.rfc_0032())), HardForkFeature::new("0036", convert(hardforks.ckb2021.rfc_0036())), HardForkFeature::new("0038", convert(hardforks.ckb2021.rfc_0038())), HardForkFeature::new("0048", convert(hardforks.ckb2023.rfc_0048())), HardForkFeature::new("0049", convert(hardforks.ckb2023.rfc_0049())), ], } }
Returns a list of hardfork features from a hardfork switch.
new
rust
nervosnetwork/ckb
util/jsonrpc-types/src/blockchain.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/blockchain.rs
MIT
pub fn new_rfc0043(deployment: Deployment) -> SoftFork { SoftFork::Rfc0043(Rfc0043 { status: SoftForkStatus::Rfc0043, rfc0043: deployment, }) }
Constructs a new RFC0043 soft fork.
new_rfc0043
rust
nervosnetwork/ckb
util/jsonrpc-types/src/blockchain.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/blockchain.rs
MIT
pub fn new_buried(active: bool, epoch: EpochNumber) -> SoftFork { SoftFork::Buried(Buried { active, epoch, status: SoftForkStatus::Buried, }) }
Constructs a new buried soft fork.
new_buried
rust
nervosnetwork/ckb
util/jsonrpc-types/src/blockchain.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/blockchain.rs
MIT
pub fn new(rfc: &str, epoch_number: Option<EpochNumber>) -> Self { Self { rfc: rfc.to_owned(), epoch_number, } }
Creates a new hardfork feature from an RFC identifier and an optional epoch number.
new
rust
nervosnetwork/ckb
util/jsonrpc-types/src/blockchain.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/blockchain.rs
MIT
pub fn new(inner: [u8; 10]) -> ProposalShortId { ProposalShortId(inner) }
Creates the proposal id from a byte array.
new
rust
nervosnetwork/ckb
util/jsonrpc-types/src/proposal_short_id.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/proposal_short_id.rs
MIT
pub fn into_inner(self) -> [u8; 10] { self.0 }
Converts into the inner bytes array.
into_inner
rust
nervosnetwork/ckb
util/jsonrpc-types/src/proposal_short_id.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/proposal_short_id.rs
MIT
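A minimal round-trip sketch for the two `ProposalShortId` entries above. It uses a stand-alone newtype that mirrors the wrapper shown in the entries, not the actual jsonrpc-types struct, so it compiles without any ckb dependency:

```rust
// Stand-alone mirror of the ProposalShortId newtype shown above
// (the real type lives in ckb's jsonrpc-types crate).
#[derive(Debug, PartialEq)]
struct ProposalShortId([u8; 10]);

impl ProposalShortId {
    /// Creates the proposal id from a byte array.
    fn new(inner: [u8; 10]) -> ProposalShortId {
        ProposalShortId(inner)
    }

    /// Converts into the inner bytes array.
    fn into_inner(self) -> [u8; 10] {
        self.0
    }
}

fn main() {
    let bytes = [0u8, 1, 2, 3, 4, 5, 6, 7, 8, 9];
    let id = ProposalShortId::new(bytes);
    // `into_inner` returns exactly the bytes passed to `new`.
    assert_eq!(id.into_inner(), bytes);
}
```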
pub fn new(objects: Vec<T>, last_cursor: JsonBytes) -> Self { IndexerPagination { objects, last_cursor, } }
Constructs a new IndexerPagination.
new
rust
nervosnetwork/ckb
util/jsonrpc-types/src/indexer.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/indexer.rs
MIT
pub fn new<U>(start: U, end: U) -> Self where U: Into<Uint64>, { IndexerRange { inner: [start.into(), end.into()], } }
Constructs a new range.
new
rust
nervosnetwork/ckb
util/jsonrpc-types/src/indexer.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/indexer.rs
MIT
pub fn start(&self) -> Uint64 { self.inner[0] }
Return the lower bound of the range (inclusive).
start
rust
nervosnetwork/ckb
util/jsonrpc-types/src/indexer.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/indexer.rs
MIT
pub fn end(&self) -> Uint64 { self.inner[1] }
Return the upper bound of the range (exclusive).
end
rust
nervosnetwork/ckb
util/jsonrpc-types/src/indexer.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/indexer.rs
MIT
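Taken together, the three `IndexerRange` entries above describe a half-open interval: `start()` is inclusive and `end()` is exclusive. A simplified sketch of that semantics, using a stand-alone mirror of the two-element inner array (the real type stores JSON `Uint64` values):

```rust
// Simplified mirror of IndexerRange: a half-open [start, end) interval
// backed by a two-element array, as in the entries above.
struct IndexerRange {
    inner: [u64; 2],
}

impl IndexerRange {
    fn new(start: u64, end: u64) -> Self {
        IndexerRange { inner: [start, end] }
    }
    /// Lower bound (inclusive).
    fn start(&self) -> u64 { self.inner[0] }
    /// Upper bound (exclusive).
    fn end(&self) -> u64 { self.inner[1] }
    /// Block-range style membership test: start <= n < end.
    fn contains(&self, n: u64) -> bool {
        self.start() <= n && n < self.end()
    }
}

fn main() {
    let range = IndexerRange::new(100, 200);
    assert!(range.contains(100));  // start is inclusive
    assert!(!range.contains(200)); // end is exclusive
}
```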
pub fn tx_hash(&self) -> H256 { match self { IndexerTx::Ungrouped(tx) => tx.tx_hash.clone(), IndexerTx::Grouped(tx) => tx.tx_hash.clone(), } }
Returns the transaction hash.
tx_hash
rust
nervosnetwork/ckb
util/jsonrpc-types/src/indexer.rs
https://github.com/nervosnetwork/ckb/blob/master/util/jsonrpc-types/src/indexer.rs
MIT
pub fn update_extra_logger(name: String, filter_str: String) -> Result<(), String> { let filter = Self::build_filter(&filter_str); let message = Message::UpdateExtraLogger(name, filter); Self::send_message(message) }
Updates an extra logger by its name.
update_extra_logger
rust
nervosnetwork/ckb
util/logger-service/src/lib.rs
https://github.com/nervosnetwork/ckb/blob/master/util/logger-service/src/lib.rs
MIT
pub fn remove_extra_logger(name: String) -> Result<(), String> { let message = Message::RemoveExtraLogger(name); Self::send_message(message) }
Removes an extra logger.
remove_extra_logger
rust
nervosnetwork/ckb
util/logger-service/src/lib.rs
https://github.com/nervosnetwork/ckb/blob/master/util/logger-service/src/lib.rs
MIT
pub fn init(env_opt: Option<&str>, config: Config) -> Result<LoggerInitGuard, SetLoggerError> { setup_panic_logger(); let logger = Logger::new(env_opt, config); let filter = logger.filter(); log::set_boxed_logger(Box::new(logger)).map(|_| { log::set_max_level(filter); LoggerInitGuard }) }
Initializes the [Logger](struct.Logger.html) and runs the logging service.
init
rust
nervosnetwork/ckb
util/logger-service/src/lib.rs
https://github.com/nervosnetwork/ckb/blob/master/util/logger-service/src/lib.rs
MIT
pub fn init_silent() -> Result<LoggerInitGuard, SetLoggerError> { log::set_boxed_logger(Box::new(SilentLogger)).map(|_| LoggerInitGuard) }
Initializes the [SilentLogger](struct.SilentLogger.html).
init_silent
rust
nervosnetwork/ckb
util/logger-service/src/lib.rs
https://github.com/nervosnetwork/ckb/blob/master/util/logger-service/src/lib.rs
MIT
pub fn flush() { log::logger().flush() }
Flushes any buffered records.
flush
rust
nervosnetwork/ckb
util/logger-service/src/lib.rs
https://github.com/nervosnetwork/ckb/blob/master/util/logger-service/src/lib.rs
MIT
pub fn init_for_test(filter: &str) -> Result<LoggerInitGuard, SetLoggerError> { setup_panic_logger(); let config: Config = Config { filter: Some(filter.to_string()), color: true, log_to_stdout: true, log_to_file: false, emit_sentry_breadcrumbs: None, file: Default::default(), log_dir: Default::default(), extra: Default::default(), }; let logger = Logger::new(None, config); let filter = logger.filter(); log::set_boxed_logger(Box::new(logger)).map(|_| { log::set_max_level(filter); LoggerInitGuard }) }
Only used by unit tests. Initializes the [Logger](struct.Logger.html) and runs the logging service.
init_for_test
rust
nervosnetwork/ckb
util/logger-service/src/lib.rs
https://github.com/nervosnetwork/ckb/blob/master/util/logger-service/src/lib.rs
MIT
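The `init_for_test` entry above spells out the full `Config` literal used by the logger service. A hedged usage sketch of initializing the service and keeping the returned guard alive; the crate path `ckb_logger_service`, the assumption that `init`, `flush`, and `Config` are importable from the crate root, and the filter string are all illustrative, while the `Config` fields follow the entry above:

```rust
// Sketch only: crate path and re-exports are assumptions; the Config
// fields follow the literal in the init_for_test entry above.
use ckb_logger_service::{flush, init, Config};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config {
        filter: Some("info".to_string()), // hypothetical filter string
        color: true,
        log_to_stdout: true,
        log_to_file: false,
        emit_sentry_breadcrumbs: None,
        file: Default::default(),
        log_dir: Default::default(),
        extra: Default::default(),
    };
    // `init` installs the boxed logger and returns a guard;
    // keep the guard alive for the lifetime of the program.
    let _guard = init(None, config)?;
    log::info!("logger service initialized");
    // Flush any buffered records before exiting (see the flush entry above).
    flush();
    Ok(())
}
```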
pub fn extract_raw_data(script: &Script) -> Vec<u8> { [ script.code_hash().as_slice(), script.hash_type().as_slice(), &script.args().raw_data(), ] .concat() }
Helper fn that extracts the raw data of the script fields (code_hash, hash_type, and args).
extract_raw_data
rust
nervosnetwork/ckb
util/indexer/src/indexer.rs
https://github.com/nervosnetwork/ckb/blob/master/util/indexer/src/indexer.rs
MIT
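The `extract_raw_data` helper above flattens a script into `code_hash ++ hash_type ++ args`, and the prefix filters in `get_cells`/`get_cells_capacity` compare against exactly this byte layout. A stand-alone sketch of the same concatenation over plain byte slices (the real code operates on the molecule-packed `Script` type, and the example values are illustrative):

```rust
// Stand-alone mirror of extract_raw_data over plain byte slices.
// In ckb the code_hash is 32 bytes and hash_type is a single byte.
fn extract_raw_data(code_hash: &[u8; 32], hash_type: u8, args: &[u8]) -> Vec<u8> {
    let parts: [&[u8]; 3] = [code_hash, &[hash_type], args];
    parts.concat()
}

fn main() {
    let code_hash = [0u8; 32];
    let args = [0xde, 0xad, 0xbe, 0xef];
    let raw = extract_raw_data(&code_hash, 1, &args);
    // 32-byte code_hash + 1-byte hash_type + args
    assert_eq!(raw.len(), 32 + 1 + args.len());
    // A filter prefix covering (code_hash, hash_type) matches this script.
    assert!(raw.starts_with(&[0u8; 32]));
}
```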
pub fn new( ckb_db: SecondaryDB, pool_service: PoolService, config: &IndexerConfig, async_handle: Handle, ) -> Self { let store_opts = Self::indexer_store_options(config); let store = RocksdbStore::new(&store_opts, &config.store); let sync = IndexerSyncService::new( ckb_db, pool_service, &config.into(), async_handle, config.init_tip_hash.clone(), ); Self { store, sync, block_filter: config.block_filter.clone(), cell_filter: config.cell_filter.clone(), request_limit: config.request_limit.unwrap_or(usize::MAX), } }
Constructs a new indexer service instance from the secondary database, pool service, and IndexerConfig.
new
rust
nervosnetwork/ckb
util/indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/indexer/src/service.rs
MIT
pub fn handle(&self) -> IndexerHandle { IndexerHandle { store: self.store.clone(), pool: self.sync.pool(), request_limit: self.request_limit, } }
Returns a handle to the indexer. The returned handle can be used to get data from the indexer, and can be cloned to allow moving the Handle to other threads.
handle
rust
nervosnetwork/ckb
util/indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/indexer/src/service.rs
MIT
pub fn spawn_poll(&self, notify_controller: NotifyController) { self.sync.spawn_poll( notify_controller, SUBSCRIBER_NAME.to_string(), self.get_indexer(), ) }
Spawns the process that handles block cells; it is expected to run on the tokio runtime.
spawn_poll
rust
nervosnetwork/ckb
util/indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/indexer/src/service.rs
MIT
pub fn index_tx_pool(&mut self, notify_controller: NotifyController) { self.sync .index_tx_pool(self.get_indexer(), notify_controller) }
Indexes the tx pool.
index_tx_pool
rust
nervosnetwork/ckb
util/indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/indexer/src/service.rs
MIT
pub fn get_indexer_tip(&self) -> Result<Option<IndexerTip>, Error> { let mut iter = self .store .iter([KeyPrefix::Header as u8 + 1], IteratorDirection::Reverse) .expect("iter Header should be OK"); Ok(iter.next().map(|(key, _)| IndexerTip { block_hash: packed::Byte32::from_slice(&key[9..41]) .expect("stored block key") .unpack(), block_number: core::BlockNumber::from_be_bytes( key[1..9].try_into().expect("stored block key"), ) .into(), })) }
Gets the indexer's current tip.
get_indexer_tip
rust
nervosnetwork/ckb
util/indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/indexer/src/service.rs
MIT
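The `get_indexer_tip` entry above reads the tip by seeking just past the `Header` key prefix, stepping backwards, and slicing the key as `prefix (1 byte) | block number (8 bytes, big-endian) | block hash (32 bytes)`. A stand-alone sketch of that key layout; the prefix constant is illustrative, not the real `KeyPrefix::Header` value:

```rust
// Sketch of the header key layout used in get_indexer_tip above:
// [ prefix: 1 byte | block number: 8 bytes big-endian | block hash: 32 bytes ].
// The prefix value 0x40 is illustrative only.
const HEADER_PREFIX: u8 = 0x40;

fn encode_header_key(number: u64, hash: &[u8; 32]) -> Vec<u8> {
    let mut key = Vec::with_capacity(41);
    key.push(HEADER_PREFIX);
    key.extend_from_slice(&number.to_be_bytes());
    key.extend_from_slice(hash);
    key
}

fn decode_header_key(key: &[u8]) -> (u64, [u8; 32]) {
    // Same slicing as the entry above: key[1..9] and key[9..41].
    let number = u64::from_be_bytes(key[1..9].try_into().expect("stored block key"));
    let hash: [u8; 32] = key[9..41].try_into().expect("stored block key");
    (number, hash)
}

fn main() {
    let hash = [0xabu8; 32];
    let key = encode_header_key(12_345, &hash);
    assert_eq!(key.len(), 41);
    assert_eq!(decode_header_key(&key), (12_345, hash));
    // Big-endian block numbers keep keys ordered by height,
    // so the first key seen by a reverse iterator is the current tip.
}
```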
pub fn get_cells( &self, search_key: IndexerSearchKey, order: IndexerOrder, limit: Uint32, after_cursor: Option<JsonBytes>, ) -> Result<IndexerPagination<IndexerCell>, Error> { if search_key .script_search_mode .as_ref() .map(|mode| *mode == IndexerSearchMode::Partial) .unwrap_or(false) { return Err(Error::invalid_params( "the CKB indexer doesn't support search_key.script_search_mode partial search mode, \ please use the CKB rich-indexer for such search", )); } let limit = limit.value() as usize; if limit == 0 { return Err(Error::invalid_params("limit should be greater than 0")); } if limit > self.request_limit { return Err(Error::invalid_params(format!( "limit must be less than {}", self.request_limit, ))); } let (prefix, from_key, direction, skip) = build_query_options( &search_key, KeyPrefix::CellLockScript, KeyPrefix::CellTypeScript, order, after_cursor, )?; let filter_script_type = match search_key.script_type { IndexerScriptType::Lock => IndexerScriptType::Type, IndexerScriptType::Type => IndexerScriptType::Lock, }; let script_search_exact = matches!( search_key.script_search_mode, Some(IndexerSearchMode::Exact) ); let filter_options: FilterOptions = search_key.try_into()?; let mode = IteratorMode::From(from_key.as_ref(), direction); let snapshot = self.store.inner().snapshot(); let iter = snapshot.iterator(mode).skip(skip); let mut last_key = Vec::new(); let pool = self .pool .as_ref() .map(|pool| pool.read().expect("acquire lock")); let cells = iter .take_while(|(key, _value)| key.starts_with(&prefix)) .filter_map(|(key, value)| { if script_search_exact { // Exact match mode, check key length is equal to full script len + BlockNumber (8) + TxIndex (4) + OutputIndex (4) if key.len() != prefix.len() + 16 { return None; } } let tx_hash = packed::Byte32::from_slice(&value).expect("stored tx hash"); let index = u32::from_be_bytes(key[key.len() - 4..].try_into().expect("stored index")); let out_point = packed::OutPoint::new(tx_hash, index); if pool .as_ref() .map(|pool| pool.is_consumed_by_pool_tx(&out_point)) .unwrap_or_default() { return None; } let (block_number, tx_index, output, output_data) = Value::parse_cell_value( &snapshot .get(Key::OutPoint(&out_point).into_vec()) .expect("get OutPoint should be OK") .expect("stored OutPoint"), ); if let Some(prefix) = filter_options.script_prefix.as_ref() { match filter_script_type { IndexerScriptType::Lock => { if !extract_raw_data(&output.lock()) .as_slice() .starts_with(prefix) { return None; } } IndexerScriptType::Type => { if output.type_().is_none() || !extract_raw_data(&output.type_().to_opt().unwrap()) .as_slice() .starts_with(prefix) { return None; } } } } if let Some([r0, r1]) = filter_options.script_len_range { match filter_script_type { IndexerScriptType::Lock => { let script_len = extract_raw_data(&output.lock()).len(); if script_len < r0 || script_len >= r1 { return None; } } IndexerScriptType::Type => { let script_len = output .type_() .to_opt() .map(|script| extract_raw_data(&script).len()) .unwrap_or_default(); if script_len < r0 || script_len >= r1 { return None; } } } } if let Some((data, mode)) = &filter_options.output_data { match mode { IndexerSearchMode::Prefix => { if !output_data.raw_data().starts_with(data) { return None; } } IndexerSearchMode::Exact => { if output_data.raw_data() != data { return None; } } IndexerSearchMode::Partial => { memmem::find(&output_data.raw_data(), data)?; } } } if let Some([r0, r1]) = filter_options.output_data_len_range { if output_data.len() < r0 || output_data.len() >= r1 { return 
None; } } if let Some([r0, r1]) = filter_options.output_capacity_range { let capacity: core::Capacity = output.capacity().unpack(); if capacity < r0 || capacity >= r1 { return None; } } if let Some([r0, r1]) = filter_options.block_range { if block_number < r0 || block_number >= r1 { return None; } } last_key = key.to_vec(); Some(IndexerCell { output: output.into(), output_data: if filter_options.with_data { Some(output_data.into()) } else { None }, out_point: out_point.into(), block_number: block_number.into(), tx_index: tx_index.into(), }) }) .take(limit) .collect::<Vec<_>>(); Ok(IndexerPagination::new(cells, JsonBytes::from_vec(last_key))) }
Gets cells by the specified params.
get_cells
rust
nervosnetwork/ckb
util/indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/indexer/src/service.rs
MIT
pub fn get_transactions( &self, search_key: IndexerSearchKey, order: IndexerOrder, limit: Uint32, after_cursor: Option<JsonBytes>, ) -> Result<IndexerPagination<IndexerTx>, Error> { let limit = limit.value() as usize; if limit == 0 { return Err(Error::invalid_params("limit should be greater than 0")); } if limit > self.request_limit { return Err(Error::invalid_params(format!( "limit must be less than {}", self.request_limit, ))); } if search_key .script_search_mode .as_ref() .map(|mode| *mode == IndexerSearchMode::Partial) .unwrap_or(false) { return Err(Error::invalid_params( "the CKB indexer doesn't support search_key.script_search_mode partial search mode, \ please use the CKB rich-indexer for such search", )); } let (prefix, from_key, direction, skip) = build_query_options( &search_key, KeyPrefix::TxLockScript, KeyPrefix::TxTypeScript, order, after_cursor, )?; let (filter_script, filter_block_range) = if let Some(filter) = search_key.filter.as_ref() { if filter.script_len_range.is_some() { return Err(Error::invalid_params( "doesn't support search_key.filter.script_len_range parameter", )); } if filter.output_data.is_some() { return Err(Error::invalid_params( "doesn't support search_key.filter.output_data parameter", )); } if filter.output_data_len_range.is_some() { return Err(Error::invalid_params( "doesn't support search_key.filter.output_data_len_range parameter", )); } if filter.output_capacity_range.is_some() { return Err(Error::invalid_params( "doesn't support search_key.filter.output_capacity_range parameter", )); } let filter_script: Option<packed::Script> = filter.script.as_ref().map(|script| script.clone().into()); let filter_block_range: Option<[core::BlockNumber; 2]> = filter .block_range .as_ref() .map(|r| [r.start().into(), r.end().into()]); (filter_script, filter_block_range) } else { (None, None) }; let filter_script_type = match search_key.script_type { IndexerScriptType::Lock => IndexerScriptType::Type, IndexerScriptType::Type => IndexerScriptType::Lock, }; let script_search_exact = matches!( search_key.script_search_mode, Some(IndexerSearchMode::Exact) ); let mode = IteratorMode::From(from_key.as_ref(), direction); let snapshot = self.store.inner().snapshot(); let iter = snapshot.iterator(mode).skip(skip); if search_key.group_by_transaction.unwrap_or_default() { let mut tx_with_cells: Vec<IndexerTxWithCells> = Vec::new(); let mut last_key = Vec::new(); for (key, value) in iter.take_while(|(key, _value)| key.starts_with(&prefix)) { if script_search_exact { // Exact match mode, check key length is equal to full script len + BlockNumber (8) + TxIndex (4) + CellIndex (4) + CellType (1) if key.len() != prefix.len() + 17 { continue; } } let tx_hash: H256 = packed::Byte32::from_slice(&value) .expect("stored tx hash") .unpack(); if tx_with_cells.len() == limit && tx_with_cells.last_mut().unwrap().tx_hash != tx_hash { break; } last_key = key.to_vec(); let block_number = u64::from_be_bytes( key[key.len() - 17..key.len() - 9] .try_into() .expect("stored block_number"), ); let tx_index = u32::from_be_bytes( key[key.len() - 9..key.len() - 5] .try_into() .expect("stored tx_index"), ); let io_index = u32::from_be_bytes( key[key.len() - 5..key.len() - 1] .try_into() .expect("stored io_index"), ); let io_type = if *key.last().expect("stored io_type") == 0 { IndexerCellType::Input } else { IndexerCellType::Output }; if let Some(filter_script) = filter_script.as_ref() { let filter_script_matched = match filter_script_type { IndexerScriptType::Lock => snapshot .get( Key::TxLockScript( 
filter_script, block_number, tx_index, io_index, match io_type { IndexerCellType::Input => indexer::CellType::Input, IndexerCellType::Output => indexer::CellType::Output, }, ) .into_vec(), ) .expect("get TxLockScript should be OK") .is_some(), IndexerScriptType::Type => snapshot .get( Key::TxTypeScript( filter_script, block_number, tx_index, io_index, match io_type { IndexerCellType::Input => indexer::CellType::Input, IndexerCellType::Output => indexer::CellType::Output, }, ) .into_vec(), ) .expect("get TxTypeScript should be OK") .is_some(), }; if !filter_script_matched { continue; } } if let Some([r0, r1]) = filter_block_range { if block_number < r0 || block_number >= r1 { continue; } } let last_tx_hash_is_same = tx_with_cells .last_mut() .map(|last| { if last.tx_hash == tx_hash { last.cells.push((io_type.clone(), io_index.into())); true } else { false } }) .unwrap_or_default(); if !last_tx_hash_is_same { tx_with_cells.push(IndexerTxWithCells { tx_hash, block_number: block_number.into(), tx_index: tx_index.into(), cells: vec![(io_type, io_index.into())], }); } } Ok(IndexerPagination::new( tx_with_cells.into_iter().map(IndexerTx::Grouped).collect(), JsonBytes::from_vec(last_key), )) } else { let mut last_key = Vec::new(); let txs = iter .take_while(|(key, _value)| key.starts_with(&prefix)) .filter_map(|(key, value)| { if script_search_exact { // Exact match mode, check key length is equal to full script len + BlockNumber (8) + TxIndex (4) + CellIndex (4) + CellType (1) if key.len() != prefix.len() + 17 { return None; } } let tx_hash = packed::Byte32::from_slice(&value).expect("stored tx hash"); let block_number = u64::from_be_bytes( key[key.len() - 17..key.len() - 9] .try_into() .expect("stored block_number"), ); let tx_index = u32::from_be_bytes( key[key.len() - 9..key.len() - 5] .try_into() .expect("stored tx_index"), ); let io_index = u32::from_be_bytes( key[key.len() - 5..key.len() - 1] .try_into() .expect("stored io_index"), ); let io_type = if *key.last().expect("stored io_type") == 0 { IndexerCellType::Input } else { IndexerCellType::Output }; if let Some(filter_script) = filter_script.as_ref() { match filter_script_type { IndexerScriptType::Lock => { snapshot .get( Key::TxLockScript( filter_script, block_number, tx_index, io_index, match io_type { IndexerCellType::Input => indexer::CellType::Input, IndexerCellType::Output => { indexer::CellType::Output } }, ) .into_vec(), ) .expect("get TxLockScript should be OK")?; } IndexerScriptType::Type => { snapshot .get( Key::TxTypeScript( filter_script, block_number, tx_index, io_index, match io_type { IndexerCellType::Input => indexer::CellType::Input, IndexerCellType::Output => { indexer::CellType::Output } }, ) .into_vec(), ) .expect("get TxTypeScript should be OK")?; } } } if let Some([r0, r1]) = filter_block_range { if block_number < r0 || block_number >= r1 { return None; } } last_key = key.to_vec(); Some(IndexerTx::Ungrouped(IndexerTxWithCell { tx_hash: tx_hash.unpack(), block_number: block_number.into(), tx_index: tx_index.into(), io_index: io_index.into(), io_type, })) }) .take(limit) .collect::<Vec<_>>(); Ok(IndexerPagination::new(txs, JsonBytes::from_vec(last_key))) } }
Gets transactions by the specified params.
get_transactions
rust
nervosnetwork/ckb
util/indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/indexer/src/service.rs
MIT
pub fn get_cells_capacity( &self, search_key: IndexerSearchKey, ) -> Result<Option<IndexerCellsCapacity>, Error> { if search_key .script_search_mode .as_ref() .map(|mode| *mode == IndexerSearchMode::Partial) .unwrap_or(false) { return Err(Error::invalid_params( "the CKB indexer doesn't support search_key.script_search_mode partial search mode, \ please use the CKB rich-indexer for such search", )); } let (prefix, from_key, direction, skip) = build_query_options( &search_key, KeyPrefix::CellLockScript, KeyPrefix::CellTypeScript, IndexerOrder::Asc, None, )?; let filter_script_type = match search_key.script_type { IndexerScriptType::Lock => IndexerScriptType::Type, IndexerScriptType::Type => IndexerScriptType::Lock, }; let script_search_exact = matches!( search_key.script_search_mode, Some(IndexerSearchMode::Exact) ); let filter_options: FilterOptions = search_key.try_into()?; let mode = IteratorMode::From(from_key.as_ref(), direction); let snapshot = self.store.inner().snapshot(); let iter = snapshot.iterator(mode).skip(skip); let pool = self .pool .as_ref() .map(|pool| pool.read().expect("acquire lock")); let capacity: u64 = iter .take_while(|(key, _value)| key.starts_with(&prefix)) .filter_map(|(key, value)| { if script_search_exact { // Exact match mode, check key length is equal to full script len + BlockNumber (8) + TxIndex (4) + OutputIndex (4) if key.len() != prefix.len() + 16 { return None; } } let tx_hash = packed::Byte32::from_slice(value.as_ref()).expect("stored tx hash"); let index = u32::from_be_bytes(key[key.len() - 4..].try_into().expect("stored index")); let out_point = packed::OutPoint::new(tx_hash, index); if pool .as_ref() .map(|pool| pool.is_consumed_by_pool_tx(&out_point)) .unwrap_or_default() { return None; } let (block_number, _tx_index, output, output_data) = Value::parse_cell_value( &snapshot .get(Key::OutPoint(&out_point).into_vec()) .expect("get OutPoint should be OK") .expect("stored OutPoint"), ); if let Some(prefix) = filter_options.script_prefix.as_ref() { match filter_script_type { IndexerScriptType::Lock => { if !extract_raw_data(&output.lock()) .as_slice() .starts_with(prefix) { return None; } } IndexerScriptType::Type => { if output.type_().is_none() || !extract_raw_data(&output.type_().to_opt().unwrap()) .as_slice() .starts_with(prefix) { return None; } } } } if let Some([r0, r1]) = filter_options.script_len_range { match filter_script_type { IndexerScriptType::Lock => { let script_len = extract_raw_data(&output.lock()).len(); if script_len < r0 || script_len > r1 { return None; } } IndexerScriptType::Type => { let script_len = output .type_() .to_opt() .map(|script| extract_raw_data(&script).len()) .unwrap_or_default(); if script_len < r0 || script_len > r1 { return None; } } } } if let Some((data, mode)) = &filter_options.output_data { match mode { IndexerSearchMode::Prefix => { if !output_data.raw_data().starts_with(data) { return None; } } IndexerSearchMode::Exact => { if output_data.raw_data() != data { return None; } } IndexerSearchMode::Partial => { memmem::find(&output_data.raw_data(), data)?; } } } if let Some([r0, r1]) = filter_options.output_data_len_range { if output_data.len() < r0 || output_data.len() >= r1 { return None; } } if let Some([r0, r1]) = filter_options.output_capacity_range { let capacity: core::Capacity = output.capacity().unpack(); if capacity < r0 || capacity >= r1 { return None; } } if let Some([r0, r1]) = filter_options.block_range { if block_number < r0 || block_number >= r1 { return None; } } 
Some(Unpack::<core::Capacity>::unpack(&output.capacity()).as_u64()) }) .sum(); let tip_mode = IteratorMode::From(&[KeyPrefix::Header as u8 + 1], Direction::Reverse); let mut tip_iter = snapshot.iterator(tip_mode); Ok(tip_iter.next().map(|(key, _value)| IndexerCellsCapacity { capacity: capacity.into(), block_hash: packed::Byte32::from_slice(&key[9..41]) .expect("stored block key") .unpack(), block_number: core::BlockNumber::from_be_bytes( key[1..9].try_into().expect("stored block key"), ) .into(), })) }
Gets cells_capacity by the specified search_key.
get_cells_capacity
rust
nervosnetwork/ckb
util/indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/indexer/src/service.rs
MIT
pub fn unix_time() -> Duration { Duration::from_millis(unix_time_as_millis()) }
Gets the system's unix time.
unix_time
rust
nervosnetwork/ckb
util/systemtime/src/lib.rs
https://github.com/nervosnetwork/ckb/blob/master/util/systemtime/src/lib.rs
MIT
pub fn default_assume_valid_targets() -> Vec<&'static str> { vec![ // height: 500000; https://explorer.nervos.org/block/0xb72f4d9758a36a2f9d4b8aea5a11d232e3e48332b76ec350f0a375fac10317a4 "0xb72f4d9758a36a2f9d4b8aea5a11d232e3e48332b76ec350f0a375fac10317a4", // height: 1000000; https://explorer.nervos.org/block/0x7544e2a9db2054fbe42215ece2e5d31f175972cfeccaa7597c8ff3ec5c8b7d67 "0x7544e2a9db2054fbe42215ece2e5d31f175972cfeccaa7597c8ff3ec5c8b7d67", // height: 2000000; https://explorer.nervos.org/block/0xc0c1ca7dcfa5862b9d2afeb5ea94db14744b8146c9005982879030f01e1f47cb "0xc0c1ca7dcfa5862b9d2afeb5ea94db14744b8146c9005982879030f01e1f47cb", // height: 3000000; https://explorer.nervos.org/block/0x36ff0ea1100e7892367b5004a362780c14c85fc2812bb6bd511e1c3a131c3fda "0x36ff0ea1100e7892367b5004a362780c14c85fc2812bb6bd511e1c3a131c3fda", // height: 4000000; https://explorer.nervos.org/block/0xcd925c9baa8c3110980546c916dad122dc69111780e49b50c3bb407ab7b6aa1c "0xcd925c9baa8c3110980546c916dad122dc69111780e49b50c3bb407ab7b6aa1c", // height: 5000000; https://explorer.nervos.org/block/0x10898dd0307ef95e9086794ae7070d2f960725d1dd1e0800044eb8d8b2547da6 "0x10898dd0307ef95e9086794ae7070d2f960725d1dd1e0800044eb8d8b2547da6", // height: 6000000; https://explorer.nervos.org/block/0x0d78219b6972c21f33350958882da3e961c2ebbddc4521bf45ee47139b331333 "0x0d78219b6972c21f33350958882da3e961c2ebbddc4521bf45ee47139b331333", // height: 7000000; https://explorer.nervos.org/block/0x1c280be16bf3366cf890cd5a8c5dc4eeed8c6ddeeb988a482d7feabb3bd014c6 "0x1c280be16bf3366cf890cd5a8c5dc4eeed8c6ddeeb988a482d7feabb3bd014c6", // height: 8000000; https://explorer.nervos.org/block/0x063ccfcdbad01922792914f0bd61e47930bbb4a531f711013a24210638c0174a "0x063ccfcdbad01922792914f0bd61e47930bbb4a531f711013a24210638c0174a", // height: 9000000; https://explorer.nervos.org/block/0xcf95c190a0054ce2404ad70d9befb5ec78579dd0a9ddb95776c5ac1bc5ddeed1 "0xcf95c190a0054ce2404ad70d9befb5ec78579dd0a9ddb95776c5ac1bc5ddeed1", // height: 10000000; https://explorer.nervos.org/block/0xe784f617bf1e13a3ac1a564e361b7e6298364193246e11cd328243f329f3592d "0xe784f617bf1e13a3ac1a564e361b7e6298364193246e11cd328243f329f3592d", // height: 11000000; https://explorer.nervos.org/block/0xe9b97767424dd04aa65a1f7ad562b0faf8dd0fbf2a213d1586ea7969160f5996 "0xe9b97767424dd04aa65a1f7ad562b0faf8dd0fbf2a213d1586ea7969160f5996", // height: 12000000; https://explorer.nervos.org/block/0x2210a9bd5a292888f79ec7547ac3ea79c731df8bfe2049934f3206cabdc07f54 "0x2210a9bd5a292888f79ec7547ac3ea79c731df8bfe2049934f3206cabdc07f54", // height: 13000000; https://explorer.nervos.org/block/0xcffc6a0a1f363db8fdbe2fea916ab5cd8851dd479bc04003dab88c9379dca1d0 "0xcffc6a0a1f363db8fdbe2fea916ab5cd8851dd479bc04003dab88c9379dca1d0", // height: 14000000; https://explorer.nervos.org/block/0xf283cacaa21556957b9621b8ac303a0b2c06434c26a1b53b1e590219d2c7313a "0xf283cacaa21556957b9621b8ac303a0b2c06434c26a1b53b1e590219d2c7313a", latest_assume_valid_target::mainnet::DEFAULT_ASSUME_VALID_TARGET, ] } } /// testnet pub mod testnet { use crate::latest_assume_valid_target; /// get testnet related default assume valid targets pub fn default_assume_valid_targets() -> Vec<&'static str> { vec![ // height: 500000; https://testnet.explorer.nervos.org/block/0xf9c73f3db9a7c6707c3c6800a9a0dbd5a2edf69e3921832f65275dcd71f7871c "0xf9c73f3db9a7c6707c3c6800a9a0dbd5a2edf69e3921832f65275dcd71f7871c", // height: 1000000; https://testnet.explorer.nervos.org/block/0x935a48f2660fd141121114786edcf17ef5789c6c2fe7aca04ea27813b30e1fa3 
"0x935a48f2660fd141121114786edcf17ef5789c6c2fe7aca04ea27813b30e1fa3", // height: 2000000; https://testnet.explorer.nervos.org/block/0xf4d1648131b7bc4a0c9dbc442d240395c89a0c77b0cc197dce8794cd93669b32 "0xf4d1648131b7bc4a0c9dbc442d240395c89a0c77b0cc197dce8794cd93669b32", // height: 3000000; https://testnet.explorer.nervos.org/block/0x1d1bd2a6a50d9532b7131c5d0b05c006fb354a0341a504e54eaf39b27acc620d "0x1d1bd2a6a50d9532b7131c5d0b05c006fb354a0341a504e54eaf39b27acc620d", // height: 4000000; https://testnet.explorer.nervos.org/block/0xb33c0e0a649003ab65062e93a3126a2235f6e7c3ca1b16fe9938816d846bb14f "0xb33c0e0a649003ab65062e93a3126a2235f6e7c3ca1b16fe9938816d846bb14f", // height: 5000000; https://testnet.explorer.nervos.org/block/0xff4f979d8ab597a5836c533828d5253021c05f2614470fd8a4df7724ff8ec5e1 "0xff4f979d8ab597a5836c533828d5253021c05f2614470fd8a4df7724ff8ec5e1", // height: 6000000; https://testnet.explorer.nervos.org/block/0xfdb427f18e03cee68947609db1f592ee2651181528da35fb62b64d4d4d5d749a "0xfdb427f18e03cee68947609db1f592ee2651181528da35fb62b64d4d4d5d749a", // height: 7000000; https://testnet.explorer.nervos.org/block/0xf9e1c6398f524c10b358dca7e000f59992004fda68c801453ed4da06bc3c6ecc "0xf9e1c6398f524c10b358dca7e000f59992004fda68c801453ed4da06bc3c6ecc", // height: 8000000; https://testnet.explorer.nervos.org/block/0x2be0f327e78032f495f90da159883da84f2efd5025fde106a6a7590b8fca6647 "0x2be0f327e78032f495f90da159883da84f2efd5025fde106a6a7590b8fca6647", // height: 9000000; https://testnet.explorer.nervos.org/block/0xba1e8db7d162445979f2c73392208b882ea01c7627a8a98be82789d6f130ce35 "0xba1e8db7d162445979f2c73392208b882ea01c7627a8a98be82789d6f130ce35", // height: 10000000; https://testnet.explorer.nervos.org/block/0xf64c95cfa813e0aa1ae2e0e28af4723134263c9862979c953842511381b7d8c6 "0xf64c95cfa813e0aa1ae2e0e28af4723134263c9862979c953842511381b7d8c6", // height: 11000000; https://testnet.explorer.nervos.org/block/0x0a9e4de75031163fefc5e7c0d40adadb2d7cb23eb9b1b2dae46872e921f4bcf1 "0x0a9e4de75031163fefc5e7c0d40adadb2d7cb23eb9b1b2dae46872e921f4bcf1", // height: 12000000; https://testnet.explorer.nervos.org/block/0x9f24177a181798b7ad63dfc8e0b89fe0ce60c099e86743675070f428ca1037b4 "0x9f24177a181798b7ad63dfc8e0b89fe0ce60c099e86743675070f428ca1037b4", // height: 13000000; https://testnet.explorer.nervos.org/block/0xc884fb5ca8cc2acddf6ce4888dc7fe0f583bb0dd4f80c5be31bed87268b1ca2f "0xc884fb5ca8cc2acddf6ce4888dc7fe0f583bb0dd4f80c5be31bed87268b1ca2f", // height: 14000000; https://testnet.explorer.nervos.org/block/0xfb7da0ff926540463e3a9168cf0cd73113c24e4692a561525554c87c62aa3475 "0xfb7da0ff926540463e3a9168cf0cd73113c24e4692a561525554c87c62aa3475", latest_assume_valid_target::testnet::DEFAULT_ASSUME_VALID_TARGET, ] }
Gets the mainnet-related default assume-valid targets.
default_assume_valid_targets
rust
nervosnetwork/ckb
util/constant/src/default_assume_valid_target.rs
https://github.com/nervosnetwork/ckb/blob/master/util/constant/src/default_assume_valid_target.rs
MIT
pub fn new( ckb_db: SecondaryDB, pool_service: PoolService, config: &IndexerConfig, async_handle: Handle, ) -> Self { let mut store = SQLXPool::default(); async_handle .block_on(store.connect(&config.rich_indexer)) .expect("Failed to connect to rich-indexer database"); let sync = IndexerSyncService::new( ckb_db, pool_service, &config.into(), async_handle.clone(), config.init_tip_hash.clone(), ); Self { store, sync, block_filter: config.block_filter.clone(), cell_filter: config.cell_filter.clone(), async_handle, request_limit: config.request_limit.unwrap_or(usize::MAX), } }
Constructs a new RichIndexerService instance.
new
rust
nervosnetwork/ckb
util/rich-indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/service.rs
MIT
pub fn spawn_poll(&self, notify_controller: NotifyController) { self.sync.spawn_poll( notify_controller, SUBSCRIBER_NAME.to_string(), self.get_indexer(), ) }
Spawns a poller to sync data from the ckb node.
spawn_poll
rust
nervosnetwork/ckb
util/rich-indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/service.rs
MIT
pub fn index_tx_pool(&mut self, notify_controller: NotifyController) { self.sync .index_tx_pool(self.get_indexer(), notify_controller) }
Indexes the tx pool.
index_tx_pool
rust
nervosnetwork/ckb
util/rich-indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/service.rs
MIT
pub fn handle(&self) -> RichIndexerHandle { RichIndexerHandle::new( self.store.clone(), self.sync.pool(), self.async_handle.clone(), self.request_limit, ) }
Returns a handle to the rich-indexer. The returned handle can be used to get data from the rich-indexer, and can be cloned to allow moving the Handle to other threads.
handle
rust
nervosnetwork/ckb
util/rich-indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/service.rs
MIT
pub fn async_handle(&self) -> AsyncRichIndexerHandle { AsyncRichIndexerHandle::new(self.store.clone(), self.sync.pool(), self.request_limit) }
Returns a handle to the rich-indexer. The returned handle can be used to get data from the rich-indexer, and can be cloned to allow moving the Handle to other threads.
async_handle
rust
nervosnetwork/ckb
util/rich-indexer/src/service.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/service.rs
MIT
pub fn new( store: SQLXPool, pool: Option<Arc<RwLock<Pool>>>, async_handle: Handle, request_limit: usize, ) -> Self { Self { async_handle: AsyncRichIndexerHandle::new(store, pool, request_limit), async_runtime: async_handle, } }
Constructs a new RichIndexerHandle instance.
new
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer_handle/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer_handle/mod.rs
MIT
pub fn get_indexer_tip(&self) -> Result<Option<IndexerTip>, Error> { let future = self.async_handle.get_indexer_tip(); self.async_runtime.block_on(future) }
Gets the indexer's current tip.
get_indexer_tip
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer_handle/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer_handle/mod.rs
MIT
pub async fn get_transactions( &self, mut search_key: IndexerSearchKey, order: IndexerOrder, limit: Uint32, after: Option<JsonBytes>, ) -> Result<IndexerPagination<IndexerTx>, Error> { let limit = limit.value(); if limit == 0 { return Err(Error::invalid_params("limit should be greater than 0")); } if limit as usize > self.request_limit { return Err(Error::invalid_params(format!( "limit must be less than {}", self.request_limit, ))); } search_key.filter = convert_max_values_in_search_filter(&search_key.filter); let mut tx = self .store .transaction() .await .map_err(|err| Error::DB(err.to_string()))?; match search_key.group_by_transaction { Some(false) | None => { let mut last_cursor = None; if let Some(after) = after { if after.len() != 12 { return Err(Error::Params( "Unable to parse the 'after' parameter.".to_string(), )); } let (last, offset) = after.as_bytes().split_at(after.len() - 4); let last = decode_i64(last)?; let offset = decode_i32(offset)?; last_cursor = Some((last, offset)); }; let txs = get_tx_with_cell( self.store.db_driver, search_key, &order, limit, last_cursor, &mut tx, ) .await?; let mut last_id = 0; let mut count = 0i32; let txs = txs .into_iter() .map(|(id, block_number, tx_index, tx_hash, io_type, io_index)| { if id == last_id { count += 1; } else { last_id = id; count = 1; } IndexerTx::Ungrouped(IndexerTxWithCell { tx_hash: bytes_to_h256(&tx_hash), block_number: block_number.into(), tx_index: tx_index.into(), io_index: io_index.into(), io_type: match io_type { 0 => IndexerCellType::Input, 1 => IndexerCellType::Output, _ => unreachable!(), }, }) }) .collect::<Vec<_>>(); let mut last_cursor = last_id.to_le_bytes().to_vec(); let mut offset = count.to_le_bytes().to_vec(); last_cursor.append(&mut offset); Ok(IndexerPagination { objects: txs, last_cursor: JsonBytes::from_vec(last_cursor), }) } Some(true) => { let txs = get_tx_with_cells( self.store.db_driver, search_key, &order, limit, after, &mut tx, ) .await?; let mut last_cursor = 0; let txs = txs .into_iter() .map(|(id, block_number, tx_index, tx_hash, io_pairs)| { last_cursor = id; IndexerTx::Grouped(IndexerTxWithCells { tx_hash: bytes_to_h256(&tx_hash), block_number: block_number.into(), tx_index: tx_index.into(), cells: io_pairs .into_iter() .map(|(io_type, io_index)| { ( match io_type { 0 => IndexerCellType::Input, 1 => IndexerCellType::Output, _ => unreachable!(), }, io_index.into(), ) }) .collect::<Vec<_>>(), }) }) .collect::<Vec<_>>(); Ok(IndexerPagination { objects: txs, last_cursor: JsonBytes::from_vec(last_cursor.to_le_bytes().to_vec()), }) } } }
Gets transactions by the specified params.
get_transactions
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer_handle/async_indexer_handle/get_transactions.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer_handle/async_indexer_handle/get_transactions.rs
MIT
pub async fn get_cells( &self, search_key: IndexerSearchKey, order: IndexerOrder, limit: Uint32, after: Option<JsonBytes>, ) -> Result<IndexerPagination<IndexerCell>, Error> { let limit = limit.value(); if limit == 0 { return Err(Error::invalid_params("limit should be greater than 0")); } if limit as usize > self.request_limit { return Err(Error::invalid_params(format!( "limit must be less than {}", self.request_limit, ))); } let mut param_index = 1; // sub query for script let script_sub_query_sql = build_query_script_sql( self.store.db_driver, &search_key.script_search_mode, &mut param_index, )?; // query output let mut query_builder = SqlBuilder::select_from("output"); query_builder .field("output.id") .field("output.output_index") .field("output.capacity"); match search_key.script_type { IndexerScriptType::Lock => { query_builder .field("query_script.code_hash AS lock_code_hash") .field("query_script.hash_type AS lock_hash_type") .field("query_script.args AS lock_args") .field("type_script.code_hash AS type_code_hash") .field("type_script.hash_type AS type_hash_type") .field("type_script.args AS type_args"); } IndexerScriptType::Type => { query_builder .field("lock_script.code_hash AS lock_code_hash") .field("lock_script.hash_type AS lock_hash_type") .field("lock_script.args AS lock_args") .field("query_script.code_hash AS type_code_hash") .field("query_script.hash_type AS type_hash_type") .field("query_script.args AS type_args"); } } query_builder .field("ckb_transaction.tx_index") .field("ckb_transaction.tx_hash") .field("block.block_number"); match search_key.with_data { Some(true) | None => { query_builder.field("output.data as output_data"); } Some(false) => { query_builder.field("NULL as output_data"); } } query_builder.join(format!("{} AS query_script", script_sub_query_sql)); match search_key.script_type { IndexerScriptType::Lock => { query_builder.on("output.lock_script_id = query_script.id"); } IndexerScriptType::Type => { query_builder.on("output.type_script_id = query_script.id"); } } query_builder .join("ckb_transaction") .on("output.tx_id = ckb_transaction.id") .join("block") .on("ckb_transaction.block_id = block.id"); match search_key.script_type { IndexerScriptType::Lock => query_builder .left() .join(name!("script";"type_script")) .on("output.type_script_id = type_script.id"), IndexerScriptType::Type => query_builder .left() .join(name!("script";"lock_script")) .on("output.lock_script_id = lock_script.id"), } .and_where("output.is_spent = 0"); // live cells // filter cells in pool let mut dead_cells = Vec::new(); if let Some(pool) = self .pool .as_ref() .map(|pool| pool.read().expect("acquire lock")) { dead_cells = pool .dead_cells() .map(|out_point| { let tx_hash: H256 = out_point.tx_hash().unpack(); (tx_hash.as_bytes().to_vec(), out_point.index().unpack()) }) .collect::<Vec<(_, u32)>>() } if !dead_cells.is_empty() { let placeholders = dead_cells .iter() .map(|(_, output_index)| { let placeholder = format!("(${}, {})", param_index, output_index); param_index += 1; placeholder }) .collect::<Vec<_>>() .join(","); query_builder.and_where(format!("(tx_hash, output_index) NOT IN ({})", placeholders)); } if let Some(after) = after { let after = decode_i64(after.as_bytes())?; match order { IndexerOrder::Asc => query_builder.and_where_gt("output.id", after), IndexerOrder::Desc => query_builder.and_where_lt("output.id", after), }; } build_cell_filter( self.store.db_driver, &mut query_builder, &search_key, &mut param_index, ); match order { IndexerOrder::Asc => 
query_builder.order_by("output.id", false), IndexerOrder::Desc => query_builder.order_by("output.id", true), }; query_builder.limit(limit); // sql string let sql = query_builder .sql() .map_err(|err| Error::DB(err.to_string()))? .trim_end_matches(';') .to_string(); // bind let mut query = SQLXPool::new_query(&sql); query = query .bind(search_key.script.code_hash.as_bytes()) .bind(search_key.script.hash_type as i16); match &search_key.script_search_mode { Some(IndexerSearchMode::Prefix) | None => { query = query .bind(search_key.script.args.as_bytes()) .bind(get_binary_upper_boundary(search_key.script.args.as_bytes())); } Some(IndexerSearchMode::Exact) => { query = query.bind(search_key.script.args.as_bytes()); } Some(IndexerSearchMode::Partial) => match self.store.db_driver { DBDriver::Postgres => { let new_args = escape_and_wrap_for_postgres_like(&search_key.script.args); query = query.bind(new_args); } DBDriver::Sqlite => { query = query.bind(search_key.script.args.as_bytes()); } }, } if let Some(filter) = search_key.filter.as_ref() { if let Some(script) = filter.script.as_ref() { query = query .bind(script.code_hash.as_bytes()) .bind(script.hash_type.clone() as i16); // Default prefix search query = query .bind(script.args.as_bytes()) .bind(get_binary_upper_boundary(script.args.as_bytes())) } if let Some(data) = &filter.output_data { match &filter.output_data_filter_mode { Some(IndexerSearchMode::Prefix) | None => { query = query .bind(data.as_bytes()) .bind(get_binary_upper_boundary(data.as_bytes())); } Some(IndexerSearchMode::Exact) => { query = query.bind(data.as_bytes()); } Some(IndexerSearchMode::Partial) => match self.store.db_driver { DBDriver::Postgres => { let new_data = escape_and_wrap_for_postgres_like(data); query = query.bind(new_data); } DBDriver::Sqlite => { query = query.bind(data.as_bytes()); } }, } } } if !dead_cells.is_empty() { for (tx_hash, _) in dead_cells { query = query.bind(tx_hash) } } // fetch let mut last_cursor = Vec::new(); let cells = self .store .fetch_all(query) .await .map_err(|err| Error::DB(err.to_string()))? .iter() .map(|row| { last_cursor = row.get::<i64, _>("id").to_le_bytes().to_vec(); build_indexer_cell(row) }) .collect::<Vec<_>>(); Ok(IndexerPagination { objects: cells, last_cursor: JsonBytes::from_vec(last_cursor), }) }
Gets cells by the specified params.
get_cells
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer_handle/async_indexer_handle/get_cells.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer_handle/async_indexer_handle/get_cells.rs
MIT
pub fn new(store: SQLXPool, pool: Option<Arc<RwLock<Pool>>>, request_limit: usize) -> Self { Self { store, pool, request_limit, } }
Constructs a new AsyncRichIndexerHandle instance.
new
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer_handle/async_indexer_handle/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer_handle/async_indexer_handle/mod.rs
MIT
pub async fn get_indexer_tip(&self) -> Result<Option<IndexerTip>, Error> { let query = SQLXPool::new_query( r#" SELECT block_hash, block_number FROM block ORDER BY id DESC LIMIT 1 "#, ); self.store .fetch_optional(query) .await .map(|res| { res.map(|row| IndexerTip { block_number: (row.get::<i64, _>("block_number") as u64).into(), block_hash: bytes_to_h256(row.get("block_hash")), }) }) .map_err(|err| Error::DB(err.to_string())) }
Gets the indexer's current tip.
get_indexer_tip
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer_handle/async_indexer_handle/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer_handle/async_indexer_handle/mod.rs
MIT
fn escape_and_wrap_for_postgres_like(data: &JsonBytes) -> Vec<u8> { // 0x5c is the default escape character '\' // 0x25 is the '%' wildcard // 0x5f is the '_' wildcard let mut new_data: Vec<u8> = data .as_bytes() .iter() .flat_map(|&b| { if b == 0x25 || b == 0x5c || b == 0x5f { vec![0x5c, b] } else { vec![b] } }) .collect(); new_data.insert(0, 0x25); // Start with % new_data.push(0x25); // End with % new_data }
Escapes special characters and wraps data with '%' for PostgreSQL LIKE queries. This function escapes the characters '%', '\' and '_' in the input `JsonBytes` by prefixing them with '\'. It then wraps the processed data with '%' at both the start and end for use in PostgreSQL LIKE queries. Note: This function is not suitable for SQLite queries if the data contains NUL characters (0x00), as SQLite treats NUL as the end of the string.
escape_and_wrap_for_postgres_like
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer_handle/async_indexer_handle/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer_handle/async_indexer_handle/mod.rs
MIT
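The escaping rule described in the entry above is easy to check in isolation: each of `%`, `\`, and `_` gets a `\` prefix, and the whole value is wrapped in `%` wildcards. A stand-alone copy of the same logic over a plain byte slice (instead of `JsonBytes`), with a small check of the expected output:

```rust
// Stand-alone copy of the escaping logic above, operating on a plain
// byte slice instead of JsonBytes.
fn escape_and_wrap_for_postgres_like(data: &[u8]) -> Vec<u8> {
    let mut new_data: Vec<u8> = data
        .iter()
        .flat_map(|&b| {
            // 0x25 = '%', 0x5c = '\', 0x5f = '_'
            if b == 0x25 || b == 0x5c || b == 0x5f {
                vec![0x5c, b]
            } else {
                vec![b]
            }
        })
        .collect();
    new_data.insert(0, 0x25); // leading '%' wildcard
    new_data.push(0x25); // trailing '%' wildcard
    new_data
}

fn main() {
    // "a%b" becomes "%a\%b%": the inner '%' is escaped,
    // the outer ones are wildcards for the LIKE query.
    assert_eq!(
        escape_and_wrap_for_postgres_like(b"a%b"),
        b"%a\\%b%".to_vec()
    );
}
```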
pub async fn get_cells_capacity( &self, search_key: IndexerSearchKey, ) -> Result<Option<IndexerCellsCapacity>, Error> { // sub query for script let mut param_index = 1; let script_sub_query_sql = build_query_script_id_sql( self.store.db_driver, &search_key.script_search_mode, &mut param_index, )?; // query output let mut query_builder = SqlBuilder::select_from("output"); query_builder.field("CAST(SUM(output.capacity) AS BIGINT) AS total_capacity"); query_builder.join(format!("{} AS query_script", script_sub_query_sql)); match search_key.script_type { IndexerScriptType::Lock => { query_builder.on("output.lock_script_id = query_script.id"); } IndexerScriptType::Type => { query_builder.on("output.type_script_id = query_script.id"); } } let mut joined_ckb_transaction = false; if let Some(ref filter) = search_key.filter { if filter.block_range.is_some() { query_builder .join("ckb_transaction") .on("output.tx_id = ckb_transaction.id") .join("block") .on("ckb_transaction.block_id = block.id"); joined_ckb_transaction = true; } } if self.pool.is_some() && !joined_ckb_transaction { query_builder .join("ckb_transaction") .on("output.tx_id = ckb_transaction.id"); } if let Some(ref filter) = search_key.filter { if filter.script.is_some() || filter.script_len_range.is_some() { match search_key.script_type { IndexerScriptType::Lock => { query_builder .left() .join(name!("script";"type_script")) .on("output.type_script_id = type_script.id"); } IndexerScriptType::Type => { query_builder .left() .join(name!("script";"lock_script")) .on("output.lock_script_id = lock_script.id"); } } } } query_builder.and_where("output.is_spent = 0"); // live cells // filter cells in pool let mut dead_cells = Vec::new(); if let Some(pool) = self .pool .as_ref() .map(|pool| pool.read().expect("acquire lock")) { dead_cells = pool .dead_cells() .map(|out_point| { let tx_hash: H256 = out_point.tx_hash().unpack(); (tx_hash.as_bytes().to_vec(), out_point.index().unpack()) }) .collect::<Vec<(_, u32)>>() } if !dead_cells.is_empty() { let placeholders = dead_cells .iter() .map(|(_, output_index)| { let placeholder = format!("(${}, {})", param_index, output_index); param_index += 1; placeholder }) .collect::<Vec<_>>() .join(","); query_builder.and_where(format!( "(ckb_transaction.tx_hash, output_index) NOT IN ({})", placeholders )); } build_cell_filter( self.store.db_driver, &mut query_builder, &search_key, &mut param_index, ); // sql string let sql = query_builder .sql() .map_err(|err| Error::DB(err.to_string()))? 
.trim_end_matches(';') .to_string(); // bind let mut query = SQLXPool::new_query(&sql); query = query .bind(search_key.script.code_hash.as_bytes()) .bind(search_key.script.hash_type as i16); match &search_key.script_search_mode { Some(IndexerSearchMode::Prefix) | None => { query = query .bind(search_key.script.args.as_bytes()) .bind(get_binary_upper_boundary(search_key.script.args.as_bytes())); } Some(IndexerSearchMode::Exact) => { query = query.bind(search_key.script.args.as_bytes()); } Some(IndexerSearchMode::Partial) => match self.store.db_driver { DBDriver::Postgres => { let new_args = escape_and_wrap_for_postgres_like(&search_key.script.args); query = query.bind(new_args); } DBDriver::Sqlite => { query = query.bind(search_key.script.args.as_bytes()); } }, } if let Some(filter) = search_key.filter.as_ref() { if let Some(script) = filter.script.as_ref() { query = query .bind(script.code_hash.as_bytes()) .bind(script.hash_type.clone() as i16); // Default prefix search query = query .bind(script.args.as_bytes()) .bind(get_binary_upper_boundary(script.args.as_bytes())) } if let Some(data) = &filter.output_data { match &filter.output_data_filter_mode { Some(IndexerSearchMode::Prefix) | None => { query = query .bind(data.as_bytes()) .bind(get_binary_upper_boundary(data.as_bytes())); } Some(IndexerSearchMode::Exact) => { query = query.bind(data.as_bytes()); } Some(IndexerSearchMode::Partial) => match self.store.db_driver { DBDriver::Postgres => { let new_data = escape_and_wrap_for_postgres_like(data); query = query.bind(new_data); } DBDriver::Sqlite => { query = query.bind(data.as_bytes()); } }, } } } if !dead_cells.is_empty() { for (tx_hash, _) in dead_cells { query = query.bind(tx_hash) } } let mut tx = self .store .transaction() .await .map_err(|err| Error::DB(err.to_string()))?; // fetch let capacity = query .fetch_optional(&mut *tx) .await .map_err(|err| Error::DB(err.to_string()))? .and_then(|row| row.try_get::<i64, _>("total_capacity").ok()); let capacity = match capacity { Some(capacity) => capacity as u64, None => return Ok(None), }; let (block_hash, block_number) = SQLXPool::new_query( r#" SELECT block_hash, block_number FROM block ORDER BY id DESC LIMIT 1 "#, ) .fetch_optional(&mut *tx) .await .map(|res| { res.map(|row| { ( bytes_to_h256(row.get("block_hash")), row.get::<i64, _>("block_number") as u64, ) }) }) .map_err(|err| Error::DB(err.to_string()))? .unwrap(); tx.commit() .await .map_err(|err| Error::DB(err.to_string()))?; Ok(Some(IndexerCellsCapacity { capacity: capacity.into(), block_hash, block_number: block_number.into(), })) }
Gets cells_capacity by the specified search_key.
get_cells_capacity
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer_handle/async_indexer_handle/get_cells_capacity.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer_handle/async_indexer_handle/get_cells_capacity.rs
MIT
pub fn new( store: SQLXPool, pool: Option<Arc<RwLock<Pool>>>, custom_filters: CustomFilters, async_runtime: Handle, request_limit: usize, ) -> Self { Self { async_rich_indexer: AsyncRichIndexer::new(store, pool, custom_filters), async_runtime, request_limit, } }
Constructs a new RichIndexer instance.
new
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer/mod.rs
MIT
fn tip(&self) -> Result<Option<(BlockNumber, Byte32)>, Error> { let indexer_handle = RichIndexerHandle::new( self.async_rich_indexer.store.clone(), self.async_rich_indexer.pool.clone(), self.async_runtime.clone(), self.request_limit, ); indexer_handle .get_indexer_tip() .map(|tip| tip.map(|tip| (tip.block_number.value(), tip.block_hash.0.pack()))) .map_err(|err| Error::DB(err.to_string())) }
Retrieves the tip of the indexer
tip
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer/mod.rs
MIT
fn append(&self, block: &BlockView) -> Result<(), Error> { let future = self.async_rich_indexer.append(block); self.async_runtime.block_on(future) }
Appends a new block to the indexer
append
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer/mod.rs
MIT
fn rollback(&self) -> Result<(), Error> { let future = self.async_rich_indexer.rollback(); self.async_runtime.block_on(future) }
Rolls back the indexer to a previous state.
rollback
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer/mod.rs
MIT
fn get_identity(&self) -> &str { SUBSCRIBER_NAME }
Returns the identity.
get_identity
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer/mod.rs
MIT
fn set_init_tip(&self, init_tip_number: u64, init_tip_hash: &H256) { let future = self .async_rich_indexer .set_init_tip(init_tip_number, init_tip_hash); self.async_runtime.block_on(future) }
Sets the initial tip.
set_init_tip
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer/mod.rs
MIT
pub fn new( store: SQLXPool, pool: Option<Arc<RwLock<Pool>>>, custom_filters: CustomFilters, ) -> Self { Self { store, pool, custom_filters, } }
Constructs a new AsyncRichIndexer instance.
new
rust
nervosnetwork/ckb
util/rich-indexer/src/indexer/mod.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/indexer/mod.rs
MIT
fn extract_raw_data(script: &Script) -> Vec<u8> { [ script.code_hash().as_slice(), script.hash_type().as_slice(), &script.args().raw_data(), ] .concat() }
Helper fn that extracts the raw data of the script fields (code_hash, hash_type, and args).
extract_raw_data
rust
nervosnetwork/ckb
util/rich-indexer/src/tests/query.rs
https://github.com/nervosnetwork/ckb/blob/master/util/rich-indexer/src/tests/query.rs
MIT
pub fn check_if_identifier_is_valid(ident: &str) -> Result<(), String> { const IDENT_PATTERN: &str = r#"^[0-9a-zA-Z_-]+$"#; static RE: std::sync::OnceLock<Regex> = std::sync::OnceLock::new(); // IDENT_PATTERN is a correct regular expression, so unwrap here let re = RE.get_or_init(|| Regex::new(IDENT_PATTERN).unwrap()); if ident.is_empty() { return Err("the identifier shouldn't be empty".to_owned()); } if !re.is_match(ident) { return Err(format!( "Invalid identifier \"{ident}\", the identifier pattern is \"{IDENT_PATTERN}\"" )); } Ok(()) }
Checks whether the given string is a valid identifier. This function considers non-empty string containing only alphabets, digits, `-`, and `_` as a valid identifier. ## Examples ``` use ckb_util::strings::check_if_identifier_is_valid; assert!(check_if_identifier_is_valid("test123").is_ok()); assert!(check_if_identifier_is_valid("123test").is_ok()); assert!(check_if_identifier_is_valid("").is_err()); assert!(check_if_identifier_is_valid("test 123").is_err()); ```
check_if_identifier_is_valid
rust
nervosnetwork/ckb
util/src/strings.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/strings.rs
MIT
pub fn new() -> LinkedHashSet<T, DefaultBuildHasher> { LinkedHashSet { map: LinkedHashMap::default(), } }
Creates a linked hash set. ## Examples ``` use ckb_util::LinkedHashSet; let set: LinkedHashSet<i32> = LinkedHashSet::new(); ```
new
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
pub fn with_capacity(capacity: usize) -> LinkedHashSet<T, DefaultBuildHasher> { LinkedHashSet { map: LinkedHashMap::with_capacity_and_hasher(capacity, Default::default()), } }
Creates an empty linked hash set with the given initial capacity. ## Examples ``` use ckb_util::LinkedHashSet; let set: LinkedHashSet<i32> = LinkedHashSet::with_capacity(42); ```
with_capacity
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
pub fn contains(&self, value: &T) -> bool { self.map.contains_key(value) }
Returns `true` if the set contains a value. ``` use ckb_util::LinkedHashSet; let mut set: LinkedHashSet<_> = LinkedHashSet::new(); set.insert(1); set.insert(2); set.insert(3); assert_eq!(set.contains(&1), true); assert_eq!(set.contains(&4), false); ```
contains
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
pub fn capacity(&self) -> usize { self.map.capacity() }
Returns the number of elements the set can hold without reallocating.
capacity
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
pub fn len(&self) -> usize { self.map.len() }
Returns the number of elements in the set.
len
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
pub fn is_empty(&self) -> bool { self.map.is_empty() }
Returns `true` if the set contains no elements.
is_empty
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
pub fn insert(&mut self, value: T) -> bool { self.map.insert(value, ()).is_none() }
Adds a value to the set. If the set did not have this value present, true is returned. If the set did have this value present, false is returned.
insert
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
pub fn iter(&self) -> Iter<T> { Iter { iter: self.map.keys(), } }
Gets an iterator visiting all elements in insertion order. The iterator element type is `&'a T`.
iter
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
pub fn clear(&mut self) { self.map.clear(); }
Clears the set, removing all values.
clear
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
fn default() -> LinkedHashSet<T, DefaultBuildHasher> { LinkedHashSet { map: LinkedHashMap::default(), } }
Creates an empty `LinkedHashSet<T>` with the `Default` value for the hasher.
default
rust
nervosnetwork/ckb
util/src/linked_hash_set.rs
https://github.com/nervosnetwork/ckb/blob/master/util/src/linked_hash_set.rs
MIT
fn should_be_ok(self) -> T; } // Use for Option impl<T> ShouldBeOk<T> for Option<T> { fn should_be_ok(self) -> T { self.unwrap_or_else(|| panic!("should not be None")) } }
Unwraps an `Option` or a `Result` with confidence, assuming that it is impossible to fail.
should_be_ok
rust
nervosnetwork/ckb
util/gen-types/src/prelude.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/prelude.rs
MIT
pub fn as_utf8(&self) -> Result<&str, str::Utf8Error> { str::from_utf8(self.raw_data()) }
Converts self to a string slice.
as_utf8
rust
nervosnetwork/ckb
util/gen-types/src/conversion/primitive.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/conversion/primitive.rs
MIT
pub unsafe fn as_utf8_unchecked(&self) -> &str { str::from_utf8_unchecked(self.raw_data()) }
Converts self to a string slice without checking that the string contains valid UTF-8. # Safety This function is unsafe because it does not check that the bytes passed to it are valid UTF-8. If this constraint is violated, undefined behavior results, as the rest of Rust assumes that [`&str`]s are valid UTF-8.
as_utf8_unchecked
rust
nervosnetwork/ckb
util/gen-types/src/conversion/primitive.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/conversion/primitive.rs
MIT
pub fn is_utf8(&self) -> bool { self.as_utf8().is_ok() }
Checks whether self contains valid UTF-8 binary data.
is_utf8
rust
nervosnetwork/ckb
util/gen-types/src/conversion/primitive.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/conversion/primitive.rs
MIT
pub fn check_data(&self) -> bool { self.transactions().check_data() }
Recursively checks whether the structure of the binary data is correct.
check_data
rust
nervosnetwork/ckb
util/gen-types/src/extension/check_data.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/check_data.rs
MIT
pub fn check_data(&self) -> bool { self.transactions().check_data() }
Recursively checks whether the structure of the binary data is correct.
check_data
rust
nervosnetwork/ckb
util/gen-types/src/extension/check_data.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/check_data.rs
MIT
pub fn check_data(&self) -> bool { self.block().check_data() }
Recursively checks whether the structure of the binary data is correct.
check_data
rust
nervosnetwork/ckb
util/gen-types/src/extension/check_data.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/check_data.rs
MIT
pub fn serialized_size_in_block(&self) -> usize { self.as_slice().len() + molecule::NUMBER_SIZE }
Calculates the serialized size of a [`Transaction`] in a [`Block`]. Putting each [`Transaction`] into a [`Block`] occupies extra space to store [an offset in header]; its size is [`molecule::NUMBER_SIZE`]. [`Transaction`]: https://github.com/nervosnetwork/ckb/blob/v0.36.0/util/types/schemas/blockchain.mol#L66-L69 [`Block`]: https://github.com/nervosnetwork/ckb/blob/v0.36.0/util/types/schemas/blockchain.mol#L94-L99 [an offset in header]: https://github.com/nervosnetwork/molecule/blob/df1fdce/docs/encoding_spec.md#memory-layout [`molecule::NUMBER_SIZE`]: https://docs.rs/molecule/0.6.1/molecule/constant.NUMBER_SIZE.html
serialized_size_in_block
rust
nervosnetwork/ckb
util/gen-types/src/extension/serialized_size.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/serialized_size.rs
MIT
pub fn serialized_size_without_uncle_proposals(&self) -> usize { let block_size = self.as_slice().len(); let uncles_proposals_size = self .uncles() .iter() .map(|x| x.proposals().as_slice().len() - molecule::NUMBER_SIZE) .sum::<usize>(); block_size - uncles_proposals_size }
Calculates the serialized size of a [`Block`] without [uncle proposals]. # Computational Steps - Calculate the total serialized size of the [`Block`]; mark it as `B`. - Calculate the serialized size of the [`ProposalShortIdVec`] for each uncle block; mark them as `P0, P1, ..., Pn`. - Even if an uncle has no proposals, its [`ProposalShortIdVec`] still has [a header containing its total size]; that header's size is [`molecule::NUMBER_SIZE`], marked as `h`. - So the serialized size of the [`Block`] without [uncle proposals] is: `B - sum(P0 - h, P1 - h, ..., Pn - h)` [`Block`]: https://github.com/nervosnetwork/ckb/blob/v0.36.0/util/types/schemas/blockchain.mol#L94-L99 [uncle proposals]: https://github.com/nervosnetwork/ckb/blob/v0.36.0/util/types/schemas/blockchain.mol#L91 [`ProposalShortIdVec`]: https://github.com/nervosnetwork/ckb/blob/v0.36.0/util/types/schemas/blockchain.mol#L25 [a header containing its total size]: https://github.com/nervosnetwork/molecule/blob/df1fdce/docs/encoding_spec.md#memory-layout [`molecule::NUMBER_SIZE`]: https://docs.rs/molecule/0.6.1/molecule/constant.NUMBER_SIZE.html
serialized_size_without_uncle_proposals
rust
nervosnetwork/ckb
util/gen-types/src/extension/serialized_size.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/serialized_size.rs
MIT
pub fn serialized_size_in_block() -> usize { packed::Header::TOTAL_SIZE + 5 * molecule::NUMBER_SIZE }
Calculates the serialized size of an UncleBlock in a Block. When the block gains 1 more uncle: - the block will have 1 more offset (+NUM_SIZE) in UncleBlockVec - UncleBlockVec has 1 more UncleBlock; an UncleBlock comes with 1 `total` field and 2 field offsets (+NUM_SIZE * 3) - an UncleBlock contains a Header (+208) and empty proposals (only one total_size, +NUM_SIZE, because it is a fixVec) The total is +NUM_SIZE*5 + Header.size() = 228; see the test block_size_should_not_include_uncles_proposals.
serialized_size_in_block
rust
nervosnetwork/ckb
util/gen-types/src/extension/serialized_size.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/serialized_size.rs
MIT
pub fn serialized_size() -> usize { 10 }
Return the serialized size
serialized_size
rust
nervosnetwork/ckb
util/gen-types/src/extension/serialized_size.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/serialized_size.rs
MIT
pub fn calc_data_hash(data: &[u8]) -> packed::Byte32 { if data.is_empty() { packed::Byte32::zero() } else { blake2b_256(data).pack() } }
Calculates the hash for cell data. Returns the empty hash if no data, otherwise, calculates the hash of the data and returns it.
calc_data_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_raw_data_hash(&self) -> packed::Byte32 { blake2b_256(self.raw_data()).pack() }
Calculates the hash for raw data in `Bytes`. Returns the empty hash if no data, otherwise, calculates the hash of the data and returns it.
calc_raw_data_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_script_hash(&self) -> packed::Byte32 { self.calc_hash() }
Calculates the hash for [self.as_slice()] as the script hash. [self.as_slice()]: ../prelude/trait.Reader.html#tymethod.as_slice
calc_script_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_lock_hash(&self) -> packed::Byte32 { self.lock().calc_script_hash() }
Calls [`ScriptReader.calc_script_hash()`] for [`self.lock()`]. [`ScriptReader.calc_script_hash()`]: struct.ScriptReader.html#method.calc_script_hash [`self.lock()`]: #method.lock
calc_lock_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_proposals_hash(&self) -> packed::Byte32 { if self.is_empty() { packed::Byte32::zero() } else { let mut ret = [0u8; 32]; let mut blake2b = new_blake2b(); for id in self.iter() { blake2b.update(id.as_slice()); } blake2b.finalize(&mut ret); ret.pack() } }
Calculates the hash for proposals. Returns the empty hash if no proposals short ids, otherwise, calculates a hash for all proposals short ids and return it.
calc_proposals_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_tx_hash(&self) -> packed::Byte32 { self.calc_hash() }
Calculates the hash for [self.as_slice()] as the transaction hash. [self.as_slice()]: ../prelude/trait.Reader.html#tymethod.as_slice
calc_tx_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_tx_hash(&self) -> packed::Byte32 { self.raw().calc_tx_hash() }
Calls [`RawTransactionReader.calc_tx_hash()`] for [`self.raw()`]. [`RawTransactionReader.calc_tx_hash()`]: struct.RawTransactionReader.html#method.calc_tx_hash [`self.raw()`]: #method.raw
calc_tx_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_witness_hash(&self) -> packed::Byte32 { self.calc_hash() }
Calculates the hash for [self.as_slice()] as the witness hash. [self.as_slice()]: ../prelude/trait.Reader.html#tymethod.as_slice
calc_witness_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_pow_hash(&self) -> packed::Byte32 { self.calc_hash() }
Calculates the hash for [self.as_slice()] as the pow hash. [self.as_slice()]: ../prelude/trait.Reader.html#tymethod.as_slice
calc_pow_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_pow_hash(&self) -> packed::Byte32 { self.raw().calc_pow_hash() }
Calls [`RawHeaderReader.calc_pow_hash()`] for [`self.raw()`]. [`RawHeaderReader.calc_pow_hash()`]: struct.RawHeaderReader.html#method.calc_pow_hash [`self.raw()`]: #method.raw
calc_pow_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_header_hash(&self) -> packed::Byte32 { self.calc_hash() }
Calculates the hash for [self.as_slice()] as the header hash. [self.as_slice()]: ../prelude/trait.Reader.html#tymethod.as_slice
calc_header_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_header_hash(&self) -> packed::Byte32 { self.header().calc_header_hash() }
Calls [`HeaderReader.calc_header_hash()`] for [`self.header()`]. [`HeaderReader.calc_header_hash()`]: struct.HeaderReader.html#method.calc_header_hash [`self.header()`]: #method.header
calc_header_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_proposals_hash(&self) -> packed::Byte32 { self.proposals().calc_proposals_hash() }
Calls [`ProposalShortIdVecReader.calc_proposals_hash()`] for [`self.proposals()`]. [`ProposalShortIdVecReader.calc_proposals_hash()`]: struct.ProposalShortIdVecReader.html#method.calc_proposals_hash [`self.proposals()`]: #method.proposals
calc_proposals_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_uncles_hash(&self) -> packed::Byte32 { if self.is_empty() { packed::Byte32::zero() } else { let mut ret = [0u8; 32]; let mut blake2b = new_blake2b(); for uncle in self.iter() { blake2b.update(uncle.calc_header_hash().as_slice()); } blake2b.finalize(&mut ret); ret.pack() } }
Calculates the hash for uncle blocks. Returns the empty hash if no uncle block, otherwise, calculates a hash for all header hashes of uncle blocks and returns it.
calc_uncles_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_header_hash(&self) -> packed::Byte32 { self.header().calc_header_hash() }
Calls [`HeaderReader.calc_header_hash()`] for [`self.header()`]. [`HeaderReader.calc_header_hash()`]: struct.HeaderReader.html#method.calc_header_hash [`self.header()`]: #method.header
calc_header_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_proposals_hash(&self) -> packed::Byte32 { self.proposals().calc_proposals_hash() }
Calls [`ProposalShortIdVecReader.calc_proposals_hash()`] for [`self.proposals()`]. [`ProposalShortIdVecReader.calc_proposals_hash()`]: struct.ProposalShortIdVecReader.html#method.calc_proposals_hash [`self.proposals()`]: #method.proposals
calc_proposals_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_uncles_hash(&self) -> packed::Byte32 { self.uncles().calc_uncles_hash() }
Calls [`UncleBlockVecReader.calc_uncles_hash()`] for [`self.uncles()`]. [`UncleBlockVecReader.calc_uncles_hash()`]: struct.UncleBlockVecReader.html#method.calc_uncles_hash [`self.uncles()`]: #method.uncles
calc_uncles_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_extension_hash(&self) -> Option<packed::Byte32> { self.extension() .map(|extension| extension.calc_raw_data_hash()) }
Calculates the hash for the extension. If there is an extension (unknown for now), calculate the hash of its data.
calc_extension_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_tx_hashes(&self) -> Vec<packed::Byte32> { self.transactions() .iter() .map(|tx| tx.calc_tx_hash()) .collect::<Vec<_>>() }
Calculates transaction hashes for all transactions in the block.
calc_tx_hashes
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_tx_witness_hashes(&self) -> Vec<packed::Byte32> { self.transactions() .iter() .map(|tx| tx.calc_witness_hash()) .collect::<Vec<_>>() }
Calculates transaction witness hashes for all transactions in the block.
calc_tx_witness_hashes
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_header_hash(&self) -> packed::Byte32 { self.header().calc_header_hash() }
Calls [`HeaderReader.calc_header_hash()`] for [`self.header()`]. [`HeaderReader.calc_header_hash()`]: struct.HeaderReader.html#method.calc_header_hash [`self.header()`]: #method.header
calc_header_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_alert_hash(&self) -> packed::Byte32 { self.calc_hash() }
Calculates the hash for [self.as_slice()] as the alert hash. [self.as_slice()]: ../prelude/trait.Reader.html#tymethod.as_slice
calc_alert_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_alert_hash(&self) -> packed::Byte32 { self.raw().calc_alert_hash() }
Calls [`RawAlertReader.calc_alert_hash()`] for [`self.raw()`]. [`RawAlertReader.calc_alert_hash()`]: struct.RawAlertReader.html#method.calc_alert_hash [`self.raw()`]: #method.raw
calc_alert_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn calc_mmr_hash(&self) -> packed::Byte32 { self.calc_hash() }
Calculates the hash for [self.as_slice()] as the MMR node hash. [self.as_slice()]: ../prelude/trait.Reader.html#tymethod.as_slice
calc_mmr_hash
rust
nervosnetwork/ckb
util/gen-types/src/extension/calc_hash.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/calc_hash.rs
MIT
pub fn zero() -> Self { Self::default() }
Creates a new `Byte32` whose bits are all zeros.
zero
rust
nervosnetwork/ckb
util/gen-types/src/extension/shortcut.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/shortcut.rs
MIT
pub fn max_value() -> Self { [u8::MAX; 32].pack() }
Creates a new `Byte32` whose bits are all ones.
max_value
rust
nervosnetwork/ckb
util/gen-types/src/extension/shortcut.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/shortcut.rs
MIT
pub fn is_zero(&self) -> bool { self.as_slice().iter().all(|x| *x == 0) }
Checks whether all bits in self are zeros.
is_zero
rust
nervosnetwork/ckb
util/gen-types/src/extension/shortcut.rs
https://github.com/nervosnetwork/ckb/blob/master/util/gen-types/src/extension/shortcut.rs
MIT
End of preview.

Rust Code Dataset

Dataset Description

This dataset contains Rust functions with their documentation comments extracted from GitHub repositories.

Features

  • code: The Rust function code
  • docstring: Documentation comment for the function
  • func_name: Function name
  • language: Programming language (always "rust")
  • repo: Source repository name
  • path: File path within the repository
  • url: GitHub URL to the source file
  • license: License of the source code
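
For orientation, here is a minimal sketch of reading those fields back from a single record with the `datasets` library; the record index and the printed slices are illustrative choices, not properties of the dataset.

from datasets import load_dataset

# Load the dataset and take one record from the training split
dataset = load_dataset("Shuu12121/rust-codesearch-dataset-open")
sample = dataset["train"][0]

# Each record is a plain dict keyed by the fields listed above
print(sample["func_name"], sample["language"])
print(sample["repo"], sample["path"], sample["license"])
print(sample["url"])
print(sample["docstring"][:80])  # start of the documentation comment
print(sample["code"][:80])       # start of the function body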

Dataset Structure

The dataset contains the following splits:

  • train: 298110 examples
  • validation: 34374 examples
  • test: 22487 examples

Usage

from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Shuu12121/rust-codesearch-dataset-open")

# Access the training split
train_data = dataset["train"]

# Example: Print the first sample
print(train_data[0])
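
Building on this, the records can be turned into (docstring, code) pairs, which is the typical shape for a code-search task. The sketch below is illustrative only; the non-empty-docstring filter and the length threshold are arbitrary choices, not part of the dataset's intended use.

from datasets import load_dataset

# Load only the training split (same dataset as above)
train_data = load_dataset("Shuu12121/rust-codesearch-dataset-open", split="train")

# Keep only records with a non-trivial documentation comment (threshold is arbitrary)
filtered = train_data.filter(lambda ex: len(ex["docstring"].strip()) >= 10)

# Pair each documentation comment (query) with its function body (candidate)
pairs = [(ex["docstring"], ex["code"]) for ex in filtered]
print(len(pairs), "query/code pairs")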

Source

This dataset was created by scraping Rust code from GitHub repositories. Each function includes its documentation comment and license information.

License

This dataset contains code from various repositories with different licenses. Each sample includes its original license information in the license field.
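
If downstream license compliance matters, one possible way to inspect and restrict the data by the license field is sketched below; which license strings are acceptable, and which values actually occur, is left for the user to decide and verify.

from collections import Counter
from datasets import load_dataset

train_data = load_dataset("Shuu12121/rust-codesearch-dataset-open", split="train")

# Count how many samples carry each license string
license_counts = Counter(ex["license"] for ex in train_data)
print(license_counts.most_common(10))

# Keep only samples under a chosen set of licenses (example set only; adjust as needed)
allowed = {"MIT", "Apache-2.0"}
subset = train_data.filter(lambda ex: ex["license"] in allowed)
print(len(subset), "samples retained")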

