---
sidebar_position: 3
---
# Creating Accounts
You might want to create an account from a contract for many reasons. One example:
You want to [progressively onboard](https://www.youtube.com/watch?v=7mO4yN1zjbs&t=2s) users, hiding the whole concept of NEAR from them at the beginning, and automatically create accounts for them (these could be sub-accounts of your main contract, such as `user123.some-cool-game.near`).
Since an account with no balance is almost unusable, you probably want to combine this with the token transfer from [the last page](./token-tx.md). You will also need to give the account an access key. Here's a way to do it:
```js
NearPromise.new("subaccount.example.near")
  .createAccount()
  .addFullAccessKey(near.signerAccountPk())
  .transfer(BigInt(250_000_000_000_000_000_000_000)); // 2.5e23 yoctoNEAR = 0.25 NEAR
```
In the context of a full contract:
```js
import { NearBindgen, NearPromise, call, near } from "near-sdk-js";

@NearBindgen({})
export class Contract {
  @call({ privateFunction: true })
  createSubaccount({ prefix }) {
    const subaccountId = `${prefix}.${near.currentAccountId()}`;
    return NearPromise.new(subaccountId)
      .createAccount()
      .addFullAccessKey(near.signerAccountPk())
      .transfer(BigInt(250_000_000_000_000_000_000_000)); // 2.5e23 yoctoNEAR = 0.25 NEAR
  }
}
```
Things to note:
- `addFullAccessKey` – This example passes in the public key of the human or app that signed the original transaction that resulted in this function call ([`signerAccountPk`](https://github.com/near/near-sdk-js/blob/d1ca261feac5c38768ab30e0b24cf7263d80aaf2/packages/near-sdk-js/src/api.ts#L187-L194)). You could also use [`addAccessKey`](https://github.com/near/near-sdk-js/blob/d1ca261feac5c38768ab30e0b24cf7263d80aaf2/packages/near-sdk-js/src/promise.ts#L526-L548) to add a Function Call access key that only permits the account to make calls to a predefined set of contract functions.
- `{ privateFunction: true }` – if you have a function that spends your contract's funds, you probably want to protect it in some way. This example does so with a perhaps-too-simple [`{ privateFunction: true }`](../contract-interface/private-methods.md) decorator parameter.
Thresholded Proof Of Stake
DEVELOPERS
September 11, 2018
A central component of any blockchain is its consensus. At its core, a blockchain is a distributed storage system where every member of the network must agree on what information is stored there. Consensus is the protocol by which this agreement is achieved.
The consensus algorithm usually involves:
Election. How a set of nodes (or single node) is elected at any particular moment to make decisions.
Agreement. What is the protocol by which the elected set of nodes agrees on the current state of the blockchain and newly added transactions.
In this article, I want to discuss the first part (Election) and leave the exact mechanics of agreement to a subsequent article.
Proof of Work:
Examples: Bitcoin, Ethereum
The genius of Nakamoto presented the world with Proof of Work (PoW) consensus. This method allows participants in a large distributed system to identify who the leader is at each point in time. To become the leader, everybody races to solve a complex puzzle, and whoever gets there first is rewarded. This leader then publishes what they believe is the new state of the network, and anybody else can easily verify that the leader did this work correctly.
There are three main disadvantages to this method:
Pooling. Because of the randomness involved in finding the answer to the cryptographic puzzle which solves a given block, it’s beneficial for a large group of independent workers to pool resources and find the answer more predictably. With pooling, if anybody in the pool finds the answer, all members of the pool get a fraction of the reward. This increases the likelihood of a payout at any given moment, even if each reward is much smaller. Unfortunately, pooling leads to centralization: for example, it is known that 53%+ of the Bitcoin network is controlled by just three pools.
Forks. Forks are a natural thing in the case of Nakamoto consensus, as there can be multiple entities that find the answer within the same few seconds. The fork that is ultimately chosen is the one that more of the other network participants end up adopting. This leads to a longer wait until a transaction can be considered “final”, as one can never be sure that the current last block is the one that most of the network is building on.
Wasted energy. A huge number of customized machines operate around the world solely performing computations for the sake of identifying who should be the next block leader, consuming more electricity than all of Denmark. Iceland, for example, spends a significant percentage of all the electricity it produces on mining Bitcoin.
Proof of Stake.
Examples: EOS, Steemit, Bitshares
The most widely adopted alternative to Proof of Work is Proof of Stake (PoS). As an idea, it means that every node in the system participates in decisions proportionally to the amount of money they have.
One of the main ways of using PoS in practice is called Delegated Proof of Stake (DPoS). In this system, the whole network votes for “delegates” — participants who maintain the network and make all of the decisions on behalf of the other members. Depending on the implementation, delegates must either personally stake a large sum of money or use it to campaign for their election. Both of these mean that only high net worth individuals or consortia can be delegates.
In theory, this is very similar to how equity in companies works. In that case, small equity holders participate in elections and elect a small number of decision makers (the board of directors), who are typically large equity holders. These large equity holders then make all of the major decisions on the behalf of all shareholders.
Depending on the specifics of the consensus itself, a Proof of Stake system can address the forking and wasted energy that are problematic with Proof of Work. This method still has the downside of centralization, because either a small number of individual nodes (EOS) or centrally controlled pools end up participating in network maintenance. Also, in specific implementations of DPoS, the fact that all delegates know each other means that slashing (penalizing a delegate for wrongdoing by taking their stake) may not happen, because a majority of delegates must vote for it.
Ultimately, these factors mean that a small club of the rich gets richer, perpetuating some of the systemic problems that blockchains were created to address.
Thresholded Proof Of Stake:
Examples: NEAR
NEAR uses an election mechanism called Thresholded Proof of Stake (TPoS). The general idea is that we want a deterministic way to have a large number of participants maintaining the network, thereby increasing decentralization and security and establishing fair reward distribution. The closest analogue to our method is an auction, where people bid for a fixed number of items; at the end, the top N bids win, each receiving a number of items proportional to the size of the bid.
In our case, we want a large pool of participants (we call them “witnesses”) to be elected to make decisions during a specific interval of time (we default to one day). Each interval is split into a large number of block slots (we default to 1,440 slots, one every minute), with a reasonably large number of witnesses per block (we default to 1,024). With these defaults, we end up needing to fill 1,474,560 individual witness seats.
Example of selecting set of witnesses via TPoS process
Each witness seat is defined by the stake of all participants that indicated they want to be signing blocks. For example, if 1,474,560 participants stake 10 tokens each, then each individual seat is worth 10 tokens and each participant will have one seat. Alternatively, if 10 participants stake 1,474,560 tokens each, an individual seat still costs 10 tokens and each participant is awarded 147,456 seats. Formally, if X is the seat price and {Wi} are the stakes of each individual participant:
Formula to identify single witness seat threshold
To participate in network maintenance (become a witness), any account can submit a special transaction that indicates how much money it wants to stake. As soon as that transaction is accepted, the specified amount of money is locked for at least 3 days. At the end of the day, all of the new witness proposals are collected together with all of the participants who signed blocks during the day. From there, we identify the cost of an individual seat (with the formula above), allocate a number of seats to everybody who has staked at least that amount, and perform a pseudorandom permutation.
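To make the seat arithmetic above concrete, here is a sketch in JavaScript. It assumes the seat price is chosen as the largest value such that the submitted stakes still fill all the seats; the function names `findSeatPrice` and `allocateSeats` are illustrative, not part of the protocol.

```javascript
// Sketch: derive the witness seat price from submitted stakes, then award
// floor(stake / price) seats to each participant. Illustrative only; the
// exact rule used by the production protocol may differ.
function findSeatPrice(stakes, numSeats) {
  // Largest price X such that sum(floor(stake / X)) >= numSeats,
  // found by binary search (BigInt division floors for positive values).
  const seatsAt = (price) => stakes.reduce((acc, s) => acc + s / price, 0n);
  let lo = 1n;
  let hi = stakes.reduce((a, b) => a + b, 0n);
  while (lo < hi) {
    const mid = (lo + hi + 1n) / 2n;
    if (seatsAt(mid) >= BigInt(numSeats)) lo = mid;
    else hi = mid - 1n;
  }
  return lo;
}

function allocateSeats(stakes, numSeats) {
  const price = findSeatPrice(stakes, numSeats);
  return stakes.map((s) => Number(s / price)); // seats per participant
}
```

With the defaults above (1,440 slots × 1,024 witnesses = 1,474,560 seats), ten participants staking 1,474,560 tokens each get a seat price of 10 tokens and 147,456 seats apiece, matching the example in the text.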
NEAR Protocol uses inflationary block rewards and transaction fees to incentivize witnesses to participate in signing blocks. Specifically, we propose that the inflation rate be defined as a percentage of the total number of tokens. This encourages HODLers to run a participating node to maintain their share of the network value.
When a participant signs up to be a witness, a portion of their money is locked and cannot be spent. The stake of each witness is unlocked a day after the witness stops participating in block signing. If during this time a witness signed two competing blocks, their stake is forfeited to the participant who noticed and proved the “double sign”.
The main advantages of this mechanism for electing witnesses are:
No pooling necessary. There is no reason to pool stake or computational resources, because the reward is directly proportional to the stake. Put another way, two accounts holding 10 tokens each will give the same return as 20 tokens in a single account. The only exception is if you have fewer tokens than the threshold, which is counteracted by the very large number of witnesses elected.
Less Forking. Forks are possible only when there is a serious network split (with fewer than ⅓ of participants adversarial). In normal operation, a user can observe the number of signatures on a block and, if there are more than ⅔+1, the block is irreversible. In the case of a network split, participants can clearly observe the split by comparing how many signatures there are versus how many there should be. For example, if the network forks, the majority of network participants will observe blocks with fewer than ⅔ (but likely more than ½) of the necessary signatures and can choose to wait for a sufficient number of blocks before concluding that a past block is unlikely to be reversed. The minority of network participants will see blocks with fewer than ½ of the signatures, will have clear evidence that a network split might be in effect, and will know that their blocks are likely to be overwritten and should not be used for finality.
Security. Rewriting a single block or performing a long-range attack is extremely hard to do, since one must obtain the private keys of witnesses who held ⅔ of the total stake over the two days in the past. This assumes that each witness participated for only one day, which will rarely happen given the economic incentive to participate continuously while holding tokens in the account.
A few disadvantages of this approach:
Witnesses are known well in advance, so an attacker could try to DDoS them. In our case, this is very difficult, because a large portion of witnesses will be on mobile phones behind NATs, not accepting incoming connections. Attacking specific relays will just lead to the affected mobile phones reconnecting to their peers.
In PoS, the total reward is divided among the current pool of witnesses, which makes it slightly unfavorable for them to include new witnesses. In our approach, we add extra incentives for including new witness transactions in a block.
This system is a framework that can incorporate a number of other improvements as well. We are particularly excited about Dfinity’s and Algorand’s research into using Verifiable Random Functions to select a random subset of witnesses for producing the next block, which helps with protecting against DDoS attacks and reduces the requirement to keep track of who is a witness at a particular time.
In future posts of this series, we will discuss the Agreement part of our consensus algorithm, sharding of state and transactions and design of distributed smart contracts.
To follow our progress you can use:
Twitter — https://twitter.com/nearprotocol
Discord — https://near.chat
https://upscri.be/633436/
Thanks to Ivan Bogatyy for feedback on the draft. Thanks to Alexander Skidanov, Maksym Zavershynskyi, Erik Trautman, Aliaksandr Hudzilin, and Bowen Wang for helping to put together this post.
# Metadata
## [NEP-177](https://github.com/near/NEPs/blob/master/neps/nep-0177.md)
Version `2.1.0`
## Summary
An interface for a non-fungible token's metadata. The goal is to keep the metadata future-proof as well as lightweight. This will be important to dApps needing additional information about an NFT's properties, and broadly compatible with other token standards such that the [NEAR Rainbow Bridge](https://near.org/blog/eth-near-rainbow-bridge/) can move tokens between chains.
## Motivation
The primary value of non-fungible tokens comes from their metadata. While the [core standard](Core.md) provides the minimum interface that can be considered a non-fungible token, most artists, developers, and dApps will want to associate more data with each NFT, and will want a predictable way to interact with any NFT's metadata.
NEAR's unique [storage staking](https://docs.near.org/concepts/storage/storage-staking) approach makes it feasible to store more data on-chain than other blockchains. This standard leverages this strength for common metadata attributes, and provides a standard way to link to additional offchain data to support rapid community experimentation.
This standard also provides a `spec` version. This makes it easy for consumers of NFTs, such as marketplaces, to know if they support all the features of a given token.
Prior art:
- NEAR's [Fungible Token Metadata Standard](../FungibleToken/Metadata.md)
- Discussion about NEAR's complete NFT standard: #171
## Interface
Metadata applies at both the contract level (`NFTContractMetadata`) and the token level (`TokenMetadata`). The relevant metadata for each:
```ts
type NFTContractMetadata = {
spec: string, // required, essentially a version like "nft-2.0.0", replacing "2.0.0" with the implemented version of NEP-177
name: string, // required, ex. "Mochi Rising — Digital Edition" or "Metaverse 3"
symbol: string, // required, ex. "MOCHI"
icon: string|null, // Data URL
base_uri: string|null, // Centralized gateway known to have reliable access to decentralized storage assets referenced by `reference` or `media` URLs
reference: string|null, // URL to a JSON file with more info
reference_hash: string|null, // Base64-encoded sha256 hash of JSON from reference field. Required if `reference` is included.
}
type TokenMetadata = {
title: string|null, // ex. "Arch Nemesis: Mail Carrier" or "Parcel #5055"
description: string|null, // free-form description
media: string|null, // URL to associated media, preferably to decentralized, content-addressed storage
media_hash: string|null, // Base64-encoded sha256 hash of content referenced by the `media` field. Required if `media` is included.
copies: number|null, // number of copies of this set of metadata in existence when token was minted.
issued_at: number|null, // When token was issued or minted, Unix epoch in milliseconds
expires_at: number|null, // When token expires, Unix epoch in milliseconds
starts_at: number|null, // When token starts being valid, Unix epoch in milliseconds
updated_at: number|null, // When token was last updated, Unix epoch in milliseconds
extra: string|null, // anything extra the NFT wants to store on-chain. Can be stringified JSON.
reference: string|null, // URL to an off-chain JSON file with more info.
reference_hash: string|null // Base64-encoded sha256 hash of JSON from reference field. Required if `reference` is included.
}
```
A new function MUST be supported on the NFT contract:
```ts
function nft_metadata(): NFTContractMetadata {}
```
A new attribute MUST be added to each `Token` struct:
```diff
type Token = {
token_id: string,
owner_id: string,
+ metadata: TokenMetadata,
}
```
### An implementing contract MUST include the following fields on-chain
- `spec`: a string that MUST be formatted `nft-n.n.n` where "n.n.n" is replaced with the implemented version of this Metadata spec: for instance, "nft-2.0.0" to indicate NEP-177 version 2.0.0. This will allow consumers of the Non-Fungible Token to know which set of metadata features the contract supports.
- `name`: the human-readable name of the contract.
- `symbol`: the abbreviated symbol of the contract, like MOCHI or MV3
- `base_uri`: Centralized gateway known to have reliable access to decentralized storage assets referenced by `reference` or `media` URLs. Can be used by other frontends for initial retrieval of assets, even if these frontends then replicate the data to their own decentralized nodes, which they are encouraged to do.
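As a quick illustration of the `spec` requirement above, a consumer could validate the field like this (a sketch; the regex and function name are ours, not part of the standard):

```javascript
// Sketch: check that a contract's `spec` string follows the `nft-n.n.n`
// pattern described above, e.g. "nft-2.0.0". Illustrative, not normative.
function isValidNftSpec(spec) {
  return /^nft-\d+\.\d+\.\d+$/.test(spec);
}
```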
### An implementing contract MAY include the following fields on-chain
For `NFTContractMetadata`:
- `icon`: a small image associated with this contract. Encouraged to be a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs), to help consumers display it quickly while protecting user data. Recommendation: use [optimized SVG](https://codepen.io/tigt/post/optimizing-svgs-in-data-uris), which can result in high-resolution images with only 100s of bytes of [storage cost](https://docs.near.org/concepts/storage/storage-staking). (Note that these storage costs are incurred to the contract deployer, but that querying these icons is a very cheap & cacheable read operation for all consumers of the contract and the RPC nodes that serve the data.) Recommendation: create icons that will work well with both light-mode and dark-mode websites by either using middle-tone color schemes, or by [embedding `media` queries in the SVG](https://timkadlec.com/2013/04/media-queries-within-svg/).
- `reference`: a link to a valid JSON file containing various keys offering supplementary details on the token. Example: `/ipfs/QmdmQXB2mzChmMeKY47C43LxUdg1NDJ5MWcKMKxDu7RgQm`, `https://example.com/token.json`, etc. If the information given in this document conflicts with the on-chain attributes, the values in `reference` shall be considered the source of truth.
- `reference_hash`: the base64-encoded sha256 hash of the JSON file contained in the `reference` field. This is to guard against off-chain tampering.
For `TokenMetadata`:
- `title`: The name of this specific token.
- `description`: A longer description of the token.
- `media`: URL to associated media. Preferably to decentralized, content-addressed storage.
- `media_hash`: the base64-encoded sha256 hash of content referenced by the `media` field. This is to guard against off-chain tampering.
- `copies`: The number of tokens with this set of metadata or `media` known to exist at time of minting.
- `issued_at`: Unix epoch in milliseconds when token was issued or minted (note that a 32-bit integer cannot hold epoch milliseconds; a 64-bit integer is required)
- `expires_at`: Unix epoch in milliseconds when token expires
- `starts_at`: Unix epoch in milliseconds when token starts being valid
- `updated_at`: Unix epoch in milliseconds when token was last updated
- `extra`: anything extra the NFT wants to store on-chain. Can be stringified JSON.
- `reference`: URL to an off-chain JSON file with more info.
- `reference_hash`: Base64-encoded sha256 hash of JSON from reference field. Required if `reference` is included.
### No incurred cost for core NFT behavior
Contracts should be implemented in a way to avoid extra gas fees for serialization & deserialization of metadata for calls to `nft_*` methods other than `nft_metadata` or `nft_token`. See `near-contract-standards` [implementation using `LazyOption`](https://github.com/near/near-sdk-rs/blob/c2771af7fdfe01a4e8414046752ee16fb0d29d39/examples/fungible-token/ft/src/lib.rs#L71) as a reference example.
## Drawbacks
* When this NFT contract is created and initialized, the storage use per token will be higher than in an NFT Core version. Frontends can account for this by attaching extra deposit when minting. This could be done by padding with a reasonable amount, or by using the [RPC call detailed here](https://docs.near.org/docs/develop/front-end/rpc#genesis-config) to fetch the genesis configuration and determine precisely how much deposit is needed.
* The convention that `icon` be a data URL, rather than a link to an HTTP endpoint that could contain privacy-violating code, cannot be enforced on deploy or update of contract metadata; it must be enforced on the consumer/app side when displaying token data.
* If the on-chain `icon` uses a data URL or is not set, but the document given by `reference` contains a privacy-violating `icon` URL, consumers & apps of this data should not naïvely display the `reference` version, but should prefer the safe version. This is technically a violation of the "`reference` setting wins" policy described above.
## Future possibilities
- Detailed conventions that may be enforced for versions.
- A fleshed out schema for what the `reference` object should contain.
## Errata
* **2022-02-03**: updated `Token` struct field names. `id` was changed to `token_id`. This is to be consistent with current implementations of the standard and the rust SDK docs.
The first version (`1.0.0`) had confusing language regarding the fields:
- `issued_at`
- `expires_at`
- `starts_at`
- `updated_at`
It gave those fields the type `string|null` but it was unclear whether it should be a Unix epoch in milliseconds or [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html). Upon having to revisit this, it was determined to be the most efficient to use epoch milliseconds as it would reduce the computation on the smart contract and can be derived trivially from the block timestamp.
Aurora Partners With ConsenSys, Bringing MetaMask, Infura and More Ethereum Tools to NEAR
COMMUNITY
December 2, 2021
Aurora, the Ethereum scaling solution that allows projects built on Ethereum to utilize the cutting-edge technology of the NEAR Protocol, is partnering with ConsenSys, the enterprise blockchain company, to provide access to its suite of developer tools.
The partnership aims to empower both the NEAR and Ethereum ecosystems by improving developer facilities – through the availability of the product suite – with the goal of increasing cross-chain interoperability.
The product suite features projects including MetaMask, Infura, ConsenSys Quorum, Truffle, Codefi, and Diligence. MetaMask is the primary way a global user base of over 21 million monthly active users interact with applications on Web3. Over 350,000 developers use Infura to access Ethereum, IPFS and Layer 2 networks. In addition, 4.7 million developers create and deploy smart contracts using Truffle and ConsenSys Diligence has secured more than $25 billion in smart contracts with its hands-on dapp audits and testing tools.
“We are thrilled to join forces with ConsenSys on our shared mission of empowering the Ethereum ecosystem and extending its economy,” said Alex Shevchenko, the CEO of Aurora Labs.
The partnership means ConsenSys will now have official participation in the ongoing development of the AuroraDAO.
“We are excited to be partnering with the talented Aurora and NEAR Protocol teams. Developer interest in EVM-compatible scaling solutions continues to grow at the same pace as the rapid expansion of the Web3 ecosystem,” says E.G. Galano, Co-Founder and Head of Engineering at Infura.
“We believe developers will benefit from the addition of Aurora to the Infura product suite by enabling them to utilize the NEAR network with the EVM tooling they are already familiar with.”
Infura is directly involved in the AuroraDAO, which hands over key decision making to its community as a part of its vision to create a scaling solution that is as decentralized as possible.
About Aurora
Aurora is an EVM built on the NEAR Protocol, delivering a turn-key solution for developers to operate their apps on an Ethereum-compatible, high-throughput, scalable and future-safe platform, with low transaction costs for their users.
The State of Self Sovereign Identities
CASE STUDIES
September 30, 2019
What do we mean by identity?
Throughout our lives, we take on different identities. What we identify ourselves with ranges from the state identity provided at birth to the identities that we assign to ourselves and that are given to us by others. Depending on the communities that we interact with, we are inclined to tell different stories about ourselves. The goal may either be to stand out and portray a unique individual or to blend in with the crowd.
Trade-offs
In personal interactions, we can shift our attitude and character depending on the group we engage with, so our counterparty can recognize and respond with an adequate form of interaction. This form of selective disclosure of information is not possible in formal interactions, such as with government-regulated entities, nor on social media. Users are forced to choose between providing all information and gaining access to a service or product, or refusing and risking censorship, exclusion, or even becoming subject to state-enforced violence. As a result, personal credentials are scattered across platforms.
When signing up for a new service, the user has to weigh various trade-offs: one is usability versus privacy; another is ease of use and the level of inclusion in one’s social circles. The less you know about someone, the harder it is to interact with that person. Similarly, no single entity has the need, nor should it be given the right, to access all data that has been generated and gathered on a user.
However, throughout the centralisation of social networks and user-centric services, access to those vast amounts of information has unrightfully been claimed and exploited.
Solution
The following section provides an overview of several projects that are working on alternatives to the problems mentioned above. In the most general form, solutions allow users to register and store their identity credentials. Depending on the entity that they are interacting with, the user can provide access to individual attributes that make up their identity. Imagine this to be similar to Facebook Sign-In, with the difference of you owning your data instead of Facebook. Being able to modify the set of information disclosed on every sign-on will allow users to shift and shape their identity in accordance with social situations.
Digital Identities
We can differentiate between two broad types of identities, self-sovereign identities and centralised trusted identities. Self-sovereign identities allow users to own and control their identity without the need or influence of an external entity. In contrast, centralised trusted identities rely on a centralised body to provide and verify documentation. Both are designed for different use cases.
The following section provides an overview of three different digital identities, none of which utilise a blockchain.
State-maintained digital identities
The first is state-owned and maintained identities. If you have travelled abroad, purchased a car, or signed a rental agreement, the chances are high that you needed a passport or ID card. The main problem with formal documentation is that it requires the establishment of an authority that is globally recognised. According to the World Bank, an estimated 1 billion people do not have access to any documentation. To participate in daily activities, these individuals have to rely on the trust of their community. Interpersonal trust is not only highly time-consuming to establish but also makes participation in legal interactions, such as voting or buying a house, impossible.
Several European countries started developing, testing and implementing digital identities, intending to make government services more inclusive. The principle is that once citizens have access to a government-issued identity, they can reuse the credential for all digital services. The most established implementation is in Estonia. Estonia’s government provides an e-residency to all of its citizens. Once users have the government-issued ID card, they can access a wide range of online services, including health records, medical prescriptions, sign e-documents, and vote.
A similar implementation has been provided by u-Port throughout a pilot program in Zug, Switzerland. U-Port allows its users to register their identity and interact with the Ethereum blockchain. After enrolling with the city hall and u-Port, users have to go to the city hall to verify their identity. Once approved, users can interact with financial services.
Germany and other states are currently working on identity solutions based on the same principles. As a result, these will all run into the same problem: government-issued digital identities depend on the government to provide the necessary infrastructure. If this infrastructure is not available, the e-identity will not be of much use, resulting in a chicken-and-egg problem.
Decentralised approaches
Self-sovereign identities rely on cryptographic solutions, such as the web of trust, to establish higher confidence in the information that users provide on a platform. The premise hereby is that a user has more credibility within a given system, the more people know him/her and approve of her/his identity. Note that the purpose is not to uniquely identify someone’s identity based on whom they claim to be, but instead based on who they are in relation to everybody else.
If the only credential known in the system is a user’s public key, members of the system can sign off on each other’s public keys to build trust relations. Depending on the implementation of the web of trust, users gain more trust from other users in the system the more people have signed off on their public key. The same mechanism can be implemented in the form of a voting ring, whereby users have to become endorsed by those users they verified prior. A user will only be trusted if (s)he is part of a cycle of trust. Blockchain-based implementations that rely on the web of trust are Sovrin and brightID.
An alternative to the web of trust is to identify users based on what they have. Consensus solutions rely on a similar premise. To gain trust in the system, users have to provide value. In case they behave maliciously, the value provided will be taken away. Ultimately, the more value a user is willing to provide to the network, the less likely (s)he will want it to be slashed, and the more trustworthy the user will be.
An example of a non-blockchain-based system is Scuttlebutt, a decentralised social network in which users identify themselves with their public key, which is linked to the user’s device. All data that the user generates and collects from other users lives on the user’s machine. As a result, the relationships between humans correlate with the relationships between computers.
Moving forward
Currently, there is no “solve it all” solution for decentralised identities. Depending on the use case, different identity solutions will be utilised to generate, store and maintain the user’s identity. Blockchain will become most important in providing secure data storage of people’s credentials, whether those are issued by a central authority or approved by a network of users. Currently, people keep their most relevant information, such as health records, offline and linked to an analogue, state-issued ID. While the data can be lost, it can only be copied if someone gains physical access to the document. In contrast, any information that is kept on a centralised server is vulnerable to unauthorised access if it has not been encrypted on the user side with the user’s key.
Utilising blockchain architecture can remove the need for middlemen while control remains with the user. Done right, this will not only lead to higher security but also empower users to take ownership of how their identity is generated, maintained and used. Once creators no longer have to base their business models around use-case-specific identity implementations, unprecedented applications can emerge.
TLDR
We rely heavily on personal, organisational, and state-issued identities in our day-to-day lives, often forcing us to compromise privacy for usability. While government-issued IDs rely on a trust-enhancing infrastructure, organisations collect and exploit user data in exchange for access to online services.
Solutions are either based on self-sovereign identities or centralised trusted identities, with the common goal of empowering users to hold and grant access to their personal information. Where the infrastructure is in place, governments are working on ID-based e-identities. These employ centralised storage, providing a common attack vector. In contrast, decentralised identity solutions utilise the web of trust or the unique value an individual offers to the network; identity is then based on the trust people extend to each other through their interactions.
While several projects experiment with decentralised, user-owned identities, an application has yet to emerge that provides users with self-sovereignty, usability, and trust.
To follow our progress and learn how you can get involved, check out the following links:
Discord (http://near.chat/)
Beta Program (https://pages.near.org/beta/)
---
NEP: 455
Title: Parameter Compute Costs
Author: Andrei Kashin <andrei.kashin@near.org>, Jakob Meier <jakob@near.org>
DiscussionsTo: https://github.com/nearprotocol/neps/pull/455
Status: Final
Type: Protocol Track
Category: Runtime
Created: 26-Jan-2023
---
## Summary
Introduce compute costs decoupled from gas costs for individual parameters to safely limit the compute time it takes to process the chunk while avoiding adding breaking changes for contracts.
## Motivation
For NEAR blockchain stability, we need to ensure that blocks are produced regularly and in a timely manner.
The chunk gas limit is used to ensure that the time it takes to validate a chunk is strictly bounded by limiting the total gas cost of operations included in the chunk.
This process relies on accurate estimates of gas costs for individual operations.
Underestimating these costs leads to *undercharging* which can increase the chunk validation time and slow down the chunk production.
As a concrete example, in the past we undercharged contract deployment.
The responsible team has implemented a number of optimizations but a gas increase was still necessary.
[Meta-pool](https://github.com/Narwallets/meta-pool/issues/21) and [Sputnik-DAO](https://github.com/near-daos/sputnik-dao-contract/issues/135) were affected by this change, among others.
Finding all affected parties and reaching out to them before implementing the change took a lot of effort, prolonging the period during which the network was exposed to attack.
Another motivating example is the upcoming incremental deployment of Flat Storage, where during one of the intermediate stages we expect the storage operations to be undercharged.
See the explanation in the next section for more details.
## Rationale
Separating compute costs from gas costs will allow us to safely limit the compute usage for processing the chunk while still keeping the gas prices the same and thus not breaking existing contracts.
An important challenge with undercharged costs is that they cannot be disclosed widely, because that knowledge could be used to increase the chunk production time, thereby impacting the stability of the network.
Adjusting the compute cost for an undercharged parameter eliminates the security concern and allows us to publicly discuss ways to resolve the undercharging (optimizing the implementation or the smart contract, or increasing the gas cost).
This design is easy to implement and simple to reason about and provides a clear way to address existing undercharging issues.
If we don't address the undercharging problems, we increase the risks that they will be exploited.
Specifically for Flat Storage deployment, we [plan](https://github.com/near/nearcore/issues/8006) to stop charging TTN (touching trie node) gas costs; however, the intermediate implementation (read-only Flat Storage) will still incur these costs during writes, introducing undercharging.
Setting temporary high compute costs for writes will ensure that this undercharging does not lead to long chunk processing times.
## Alternatives
### Increase the gas costs for undercharged operations
We could increase the gas costs for the operations that are undercharged to match the computational time it takes to process them according to the rule 1ms = 1TGas.
Pros:
- Does not require any new code or design work (but still requires a protocol version bump)
- Security implications are well-understood
Cons:
- Can break contracts that rely on current gas costs, in particular steeply increasing operating costs for the most active users of the blockchain (aurora and sweat)
- Doing this safely and responsibly requires prior consent by the affected parties which is hard to do without disclosing security-sensitive information about undercharging in public
In case of flat storage specifically, using this approach will result in a large increase in storage write costs (breaking many contracts) to enable safe deployment of read-only flat storage and later a correction of storage write costs when flat storage for writes is rolled out.
With compute costs, we will be able to roll out the read-only flat storage with minimal impact on deployed contracts.
### Adjust the gas chunk limit
We could continuously measure the chunk production time in nearcore clients and compare it to the gas burnt.
If the chunk producer observes undercharging, it decreases the limit.
If there is overcharging, the limit can be increased, up to at most 1000 TGas.
To make such adjustment more predictable under spiky load, we also [limit](https://nomicon.io/Economics/Economic#transaction-fees) the magnitude of change of gas limit by 0.1% per block.
Pros:
- Prevents moderate undercharging from stalling the network
- No protocol change necessary (as this feature is already [a part of the protocol](https://nomicon.io/Economics/Economic#transaction-fees)), we could easily experiment and revert if it does not work well
Cons:
- Very broad granularity --- undercharging in one parameter affects all users, even those that never use the undercharged parts
- Dependence on validator hardware --- someone running overspecced hardware will continuously want to increase the limit, others might run with underspecced hardware and continuously want to decrease the limit
- Malicious undercharging attacks are unlikely to be prevented by this --- a single 10x undercharged receipt still needs to be processed using the old limit.
Adjusting by 0.1% per block means that over 100 chunks the limit can change by at most ~1.1x, and over 1000 chunks by up to ~2.7x
- Conflicts with transaction and receipt limit --- A transaction or receipt can (today) use up to 300Tgas.
The effective limit per chunk is `gas_limit` + 300Tgas since receipts are added to a chunk until one exceeds the limit and the last receipt is not removed.
Thus a gas limit of 0gas only reduces the effective limit from 1300Tgas to 300Tgas, which means a single 10x undercharged receipt can still result in a chunk with compute usage of 3 seconds (equivalent to 3000TGas)
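The arithmetic above can be sketched in a few lines (values in TGas; the 10x factor is the illustrative undercharging ratio from the text):

```javascript
// Receipts are added to a chunk until one exceeds the gas limit, and that
// last receipt (up to 300 TGas) is not removed, so the effective limit is:
function effectiveLimitTGas(gasLimitTGas) {
  return gasLimitTGas + 300;
}

// Compute-equivalent usage of gas that is K-times undercharged.
function computeUsageTGas(gasUsedTGas, underchargeFactor) {
  return gasUsedTGas * underchargeFactor;
}

effectiveLimitTGas(1000); // 1300 TGas
effectiveLimitTGas(0);    // still 300 TGas
// A single 300 TGas receipt undercharged 10x costs the equivalent of
// 3000 TGas of compute, i.e. ~3 seconds at the 1 TGas = 1 ms rule.
computeUsageTGas(300, 10); // 3000
```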
### Allow skipping chunks in the chain
Slow chunk production in one shard can introduce additional user-visible latency in all shards as the nodes expect a regular and timely chunk production during normal operation.
If processing the chunk takes much longer than 1.3s, it can cause the corresponding block and possibly more consecutive blocks to be skipped.
We could extend the protocol to produce empty chunks for some of the shards within the block (effectively skipping them) when processing the chunk takes longer than expected.
This way will still ensure a regular block production, at a cost of lower throughput of the network in that shard.
The chunk should still be included in a later block to avoid stalling the affected shard.
Pros:
- Fast and automatic adaptation to the blockchain workload
Cons:
- For the purpose of slashing, it is hard to distinguish situations when the honest block producer skips chunk due to slowness from the situations when the block producer is offline or is maliciously stalling the block production. We need some mechanism (e.g. on-chain voting) for nodes to agree that the chunk was skipped legitimately due to slowness as otherwise we introduce new attack vectors to stall the network
## Specification
- **Chunk Compute Usage** -- total compute time spent on processing the chunk
- **Chunk Compute Limit** -- upper-bound for compute time spent on processing the chunk
- **Parameter Compute Cost** -- the numeric value corresponding to the compute time it takes to include an operation in the chunk
Today, gas has two somewhat orthogonal roles:
1. Gas is money. It is used to avoid spam by charging users
2. Gas is CPU time. It defines how many transactions fit in a chunk so that validators can apply it within a second
The idea is to decouple these two by introducing parameter compute costs.
Each gas parameter still has a gas cost that determines what users have to pay.
But when filling a chunk with transactions, parameter compute cost is used to estimate CPU time.
Ideally, all compute costs should match corresponding gas costs.
But when we discover undercharging issues, we can set a higher compute cost (this would require a protocol upgrade).
The stability concern is then resolved when the compute cost becomes active.
The ratio between compute cost and gas cost can be thought of as an undercharging factor.
If a gas cost is 2 times too low to guarantee stability, compute cost will be twice the gas cost.
A chunk will be full 2 times faster when gas for this parameter is burned.
This deterministically throttles the throughput to match what validators can actually handle.
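As a sketch of this throttling, with hypothetical receipt objects that carry both a gas and a compute usage:

```javascript
// Fill a chunk by accumulating compute usage instead of gas usage.
// As with gas today, receipts are included until one pushes the total
// past the limit; that last receipt stays in the chunk.
function fillChunk(receipts, computeLimit) {
  const included = [];
  let computeUsage = 0;
  for (const receipt of receipts) {
    included.push(receipt);
    computeUsage += receipt.computeUsage; // = gasUsage * undercharging factor
    if (computeUsage >= computeLimit) break;
  }
  return { included, computeUsage };
}

// With a 2x undercharging factor, the chunk fills twice as fast:
const receipts = Array(10).fill({ gasUsage: 100, computeUsage: 200 });
fillChunk(receipts, 1000).included.length; // 5 receipts instead of 10
```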
Compute costs influence the gas price adjustment logic described in https://nomicon.io/Economics/Economic#transaction-fees.
Specifically, we're now using compute usage instead of gas usage in the formula to make sure that the gas price increases if chunk processing time is close to the limit.
Compute costs **do not** count towards the transaction/receipt gas limit of 300TGas, as that might break existing contracts by pushing their method calls over this limit.
Compute costs are static for each protocol version.
### Using Compute Costs
Compute costs different from gas costs are only a temporary solution.
Whenever we introduce a compute cost, we as the community can discuss this publicly and find a solution to the specific problem together.
For any active compute cost, a tracking GitHub issue in [`nearcore`](https://github.com/near/nearcore) should be created, tracking work towards resolving the undercharging. The reference to this issue should be added to this NEP.
In the best case, we find technical optimizations that allow us to decrease the compute cost to match the existing gas cost.
In other cases, the only solution is to increase the gas cost.
But the dApp developers who are affected by this change should have a chance to voice their opinion, suggest alternatives, and implement necessary changes before the gas cost is increased.
## Reference Implementation
The compute cost is a numeric value represented as `u64` in time units.
Value 1 corresponds to `10^-15` seconds or 1fs (femtosecond) to match the gas costs scale.
By default, the parameter compute cost matches the corresponding gas cost.
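With this scale, the familiar 1 TGas ≈ 1 ms correspondence carries over directly; a minimal sketch:

```javascript
// 1 compute-cost unit = 1 femtosecond = 1e-15 s, matching the gas scale,
// where 1 TGas (1e12 gas units) is targeted at 1 ms of CPU time.
const UNITS_PER_MS = 10n ** 12n; // 1e12 fs in a millisecond

function computeUnitsToMs(units) {
  return units / UNITS_PER_MS; // whole milliseconds
}

computeUnitsToMs(10n ** 12n);         // 1n   -> 1 TGas-equivalent = 1 ms
computeUnitsToMs(1000n * 10n ** 12n); // 1000n -> a full 1-second chunk
```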
Compute costs should be applicable to all gas parameters, specifically including:
- [`ExtCosts`](https://github.com/near/nearcore/blob/6e08a41084c632010b1d4c42132ad58ecf1398a2/core/primitives-core/src/config.rs#L377)
- [`ActionCosts`](https://github.com/near/nearcore/blob/6e08a41084c632010b1d4c42132ad58ecf1398a2/core/primitives-core/src/config.rs#L456)
Changes necessary to support `ExtCosts`:
1. Track compute usage in [`GasCounter`](https://github.com/near/nearcore/blob/51670e593a3741342a1abc40bb65e29ba0e1b026/runtime/near-vm-logic/src/gas_counter.rs#L47) struct
2. Track compute usage in [`VMOutcome`](https://github.com/near/nearcore/blob/056c62183e31e64cd6cacfc923a357775bc2b5c9/runtime/near-vm-logic/src/logic.rs#L2868) struct (alongside `burnt_gas` and `used_gas`)
3. Store compute usage in [`ActionResult`](https://github.com/near/nearcore/blob/6d2f3fcdd8512e0071847b9d2ca10fb0268f469e/runtime/runtime/src/lib.rs#L129) and aggregate it across multiple actions by modifying [`ActionResult::merge`](https://github.com/near/nearcore/blob/6d2f3fcdd8512e0071847b9d2ca10fb0268f469e/runtime/runtime/src/lib.rs#L141)
4. Store compute costs in [`ExecutionOutcome`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/lib.rs#L266) and [aggregate them across all transactions](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/lib.rs#L1279)
5. Enforce the chunk compute limit when the chunk is [applied](https://github.com/near/nearcore/blob/6d2f3fcdd8512e0071847b9d2ca10fb0268f469e/runtime/runtime/src/lib.rs#L1325)
Additional changes necessary to support `ActionCosts`:
1. Return compute costs from [`total_send_fees`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/config.rs#L71)
2. Store aggregate compute cost in [`TransactionCost`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/config.rs#L22) struct
3. Propagate compute costs to [`VerificationResult`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/runtime/runtime/src/verifier.rs#L330)
Additionally, the gas price computation will need to be adjusted in [`compute_new_gas_price`](https://github.com/near/nearcore/blob/578983c8df9cc36508da2fb4a205c852e92b211a/core/primitives/src/block.rs#L328) to use compute cost instead of gas cost.
## Security Implications
Changes in compute costs will be publicly known and might reveal an undercharging that can be used as a target for the attack.
In practice, it is not trivial to exploit the undercharging unless you know the exact shape of the workload that realizes it.
Also, after the compute cost is deployed, the undercharging should no longer be a threat for the network stability.
## Drawbacks
- Changing compute costs requires a protocol version bump (and a new binary release), limiting their use to undercharging problems that we're aware of
- Updating compute costs is a manual process and requires deliberately looking for potential underchargings
- The compute cost would not have a full effect on the last receipt in the chunk, decreasing its effectiveness to deal with undercharging.
This is because 1) a transaction or receipt today can use up to 300TGas and 2) receipts are added to a chunk until one exceeds the limit and the last receipt is not removed.
Therefore, a single receipt with 300TGas filled with undercharged operations with a factor of K can lead to overshooting the chunk compute limit by (K - 1) * 300TGas
- Underchargings can still be exploited to lower the throughput of the network at unfair price and increase the waiting times for other users.
This is inevitable for any proposal that doesn't change the gas costs and must be resolved by improving the performance or increasing the gas costs
- Even without malicious intent, the effective peak throughput of the network will decrease when the chunks include undercharged operations (as the stopping condition based on compute costs for filling the chunk becomes stricter).
Most of the time, this is not the problem as the network is operating below the capacity.
The effects will also be softened by the fact that undercharged operations comprise only a fraction of the workload.
For example, the planned increase for TTN compute cost alongside the Flat Storage MVP is less critical because you cannot fill a receipt with only TTN costs, you will always have other storage costs and ~5Tgas overhead to even start a function call.
So even with 10x difference between gas and compute costs, the DoS only becomes 5x cheaper instead of 10x
## Unresolved Issues
## Future possibilities
We can also think about compute costs smaller than gas costs.
For example, if we charge gas instead of token balance for extra storage bytes as proposed in [NEP-448](https://github.com/near/NEPs/pull/448), it would make sense to set the compute cost to 0 for the part that covers on-chain storage, should throttling due to the increased gas cost become problematic.
Otherwise, the throughput would be throttled unnecessarily.
A further option would be to change compute costs dynamically without a protocol upgrade when block production has become too slow.
This would be a catch-all, self-healing solution that requires zero intervention from anyone.
The network would simply throttle throughput when block time remains too high for long enough.
Pursuing this approach would require additional design work:
- On-chain voting to agree on new values of costs, given that inputs to the adjustment process are not deterministic (measurements of wall clock time it takes to process receipt on particular validator)
- Ensuring that dynamic adjustment is done in a safe way that does not lead to erratic behavior of costs (and as a result unpredictable network throughput).
Having some experience manually operating this mechanism would be valuable before introducing automation
and addressing challenges described in https://github.com/near/nearcore/issues/8032#issuecomment-1362564330.
The idea of introducing a chunk limit for compute resource usage naturally extends to other resource types, for example RAM usage, Disk IOPS, [Background CPU Usage](https://github.com/near/nearcore/issues/7625).
This would allow us to align the pricing model with cloud offerings familiar to many users, while still using gas as a common denominator to simplify UX.
## Changelog
### 1.0.0 - Initial Version
This NEP was approved by Protocol Working Group members on March 16, 2023 ([meeting recording](https://www.youtube.com/watch?v=4VxRoKwLXIs)):
- [Bowen's vote](https://github.com/near/NEPs/pull/455#issuecomment-1467023424)
- [Marcelo's vote](https://github.com/near/NEPs/pull/455#pullrequestreview-1340887413)
- [Marcin's vote](https://github.com/near/NEPs/pull/455#issuecomment-1471882639)
### 1.0.1 - Storage Related Compute Costs
Add five compute cost values for protocol version 61 and above.
- wasm_touching_trie_node
- wasm_storage_write_base
- wasm_storage_remove_base
- wasm_storage_read_base
- wasm_storage_has_key_base
For the exact values, please refer to the table at the bottom.
The intention behind these increased compute costs is to address the issue of
storage accesses taking longer than the allocated gas costs, particularly in
cases where RocksDB, the underlying storage system, is too slow. These values
have been chosen to ensure that validators with recommended hardware can meet
the required timing constraints.
([Analysis Report](https://github.com/near/nearcore/issues/8006))
The protocol team at Pagoda is actively working on optimizing the nearcore
client storage implementation. This should eventually allow the compute cost
parameters to be lowered again.
Progress on this work is tracked here: https://github.com/near/nearcore/issues/8938.
#### Benefits
- Among the alternatives, this is the easiest to implement.
- It allows us to publicly discuss undercharging issues before they are fixed.
#### Concerns
No concerns that need to be addressed. The drawbacks listed in this NEP are minor compared to the benefits that it will bring. And implementing this NEP is strictly better than what we have today.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
- https://gov.near.org/t/proposal-gas-weights-to-fight-instability-to-due-to-undercharging/30919
- https://github.com/near/nearcore/issues/8032
## Live Compute Costs Tracking
Parameter Name | Compute / Gas factor | First version | Last version | Tracking issue |
-------------- | -------------------- | ------------- | ------------ | -------------- |
wasm_touching_trie_node | 6.83 | 61 | *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)
wasm_storage_write_base | 3.12 | 61 | *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)
wasm_storage_remove_base | 3.74 | 61 | *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)
wasm_storage_read_base | 3.55 | 61 | *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)
wasm_storage_has_key_base | 3.70 | 61 | *TBD* | [nearcore#8938](https://github.com/near/nearcore/issues/8938)
---
id: chain-signatures
title: Chain Signatures
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import {CodeTabs, Language, Github} from "@site/src/components/codetabs"
Chain signatures enable NEAR accounts, including smart contracts, to sign and execute transactions across many blockchain protocols.
This unlocks the next level of blockchain interoperability by giving ownership of diverse assets, cross-chain accounts, and data to a single NEAR account.
:::info
This guide will take you through a step-by-step process for creating a Chain Signature.
⭐️ For a deep dive into the concepts of Chain Signatures see [What are Chain Signatures?](/concepts/abstraction/chain-signatures)
⭐️ For complete examples of a NEAR account performing transactions in other chains:
- [CLI script](https://github.com/mattlockyer/mpc-script)
- [web-app example](https://github.com/near-examples/near-multichain)
- [component example](https://test.near.social/bot.testnet/widget/chainsig-sign-eth-tx)
:::
---
## Create a Chain Signature
There are five steps to create a Chain Signature:
1. [Deriving the Foreign Address](#1-deriving-the-foreign-address) - Construct the address that will be controlled on the target blockchain
2. [Creating a Transaction](#2-creating-the-transaction) - Create the transaction or message to be signed
3. [Requesting a Signature](#3-requesting-the-signature) - Call the NEAR `multichain` contract requesting it to sign the transaction
4. [Reconstructing the Signature](#4-reconstructing-the-signature) - Reconstruct the signature from the MPC service's response
5. [Relaying the Signed Transaction](#5-relaying-the-signature) - Send the signed transaction to the destination chain for execution
![chain-signatures](/docs/assets/welcome-pages/chain-signatures-overview.png)
_Diagram of a chain signature in NEAR_
:::info MPC testnet contracts
If you want to try things out, these are the smart contracts available on `testnet`:
- `multichain-testnet-2.testnet`: MPC signer contract
- `canhazgas.testnet`: [Multichain Gas Station](multichain-gas-relayer/gas-station.md) contract
- `nft.kagi.testnet`: [NFT Chain Key](nft-keys.md) contract
:::
---
## 1. Deriving the Foreign Address
Chain Signatures use [`derivation paths`](../../1.concepts/abstraction/chain-signatures.md#one-account-multiple-chains) to represent accounts on the target blockchain. The external address to be controlled can be deterministically derived from:
- The NEAR address (e.g., `example.near`, `example.testnet`, etc.)
- A derivation path (a string such as `ethereum-1`, `ethereum-2`, etc.)
- The MPC service's public key
We provide code to derive the address, as it's a complex process that involves multiple steps of hashing and encoding:
<Tabs groupId="code-tabs">
<TabItem value="Ξ Ethereum">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/ethereum.js" start="14" end="18" />
</TabItem>
<TabItem value="₿ Bitcoin">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/bitcoin.js" start="14" end="18" />
</TabItem>
</Tabs>
:::tip
The same NEAR account and path will always produce the same address on the target blockchain.
- `example.near` + `ethereum-1` = `0x1b48b83a308ea4beb845db088180dc3389f8aa3b`
- `example.near` + `ethereum-2` = `0x99c5d3025dc736541f2d97c3ef3c90de4d221315`
:::
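The real derivation is considerably more involved (it operates on the MPC service's secp256k1 public key with elliptic-curve point arithmetic, as the linked examples show). Purely to illustrate the determinism property, here is a conceptual sketch that is **not** the actual scheme:

```javascript
import { createHash } from "node:crypto";

// Conceptual only: NEAR's real derivation uses elliptic-curve math on the
// MPC public key, not a plain hash. This just shows that the same three
// inputs always map to the same address-sized identifier.
function sketchDeriveAddress(mpcPublicKey, accountId, derivationPath) {
  return createHash("sha256")
    .update(`${mpcPublicKey},${accountId},${derivationPath}`)
    .digest("hex")
    .slice(0, 40); // 20 bytes, Ethereum-address length
}

const a1 = sketchDeriveAddress("secp256k1:...", "example.near", "ethereum-1");
const a2 = sketchDeriveAddress("secp256k1:...", "example.near", "ethereum-2");
// a1 is stable across calls; a2 differs because the path differs
```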
---
## 2. Creating the Transaction
Constructing the payload to be signed (transaction, message, data, etc.) varies depending on the target blockchain; generally, what gets signed is the hash of the transaction or message.
<Tabs groupId="code-tabs">
<TabItem value="Ξ Ethereum">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/ethereum.js"
start="32" end="48" />
In Ethereum, constructing the transaction is simple since you only need to specify the address of the receiver and how much you want to send.
</TabItem>
<TabItem value="₿ Bitcoin">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/bitcoin.js"
start="28" end="80" />
In Bitcoin, you construct a new transaction by using all the Unspent Transaction Outputs (UTXOs) of the account as input, and then specify the output address and the amount you want to send.
</TabItem>
</Tabs>
---
## 3. Requesting the Signature
Once the transaction is created and ready to be signed, a signature request is made by calling `sign` on the [MPC smart contract](https://github.com/near/mpc-recovery/blob/develop/contract/src/lib.rs#L298).
The method requires two parameters:
1. The `transaction` to be signed for the target blockchain
2. The derivation `path` for the account we want to use to sign the transaction
<Tabs groupId="code-tabs">
<TabItem value="Ξ Ethereum">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/ethereum.js"
start="57" end="61" />
</TabItem>
<TabItem value="₿ Bitcoin">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/bitcoin.js"
start="87" end="98" />
For Bitcoin, all UTXOs are signed independently and then combined into a single transaction.
</TabItem>
</Tabs>
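The shape of the request can be sketched as below; the argument names `payload` and `path` follow the linked examples, but treat them as assumptions that may differ between contract versions:

```javascript
// Build the arguments for the MPC contract's `sign` method. The payload
// is the 32-byte hash to be signed, reversed into little-endian order.
function buildSignArgs(payloadBytes, derivationPath) {
  return {
    payload: Array.from(payloadBytes).reverse(), // copy, then reverse
    path: derivationPath,                        // e.g. "ethereum-1"
  };
}

buildSignArgs([1, 2, 3], "ethereum-1");
// -> { payload: [3, 2, 1], path: "ethereum-1" }

// The request itself is then an ordinary function call, e.g. with
// near-api-js: account.functionCall({ contractId, methodName: "sign",
// args: buildSignArgs(hash, path), gas })
```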
:::tip
Notice that the `payload` is being reversed before requesting the signature, to match the little-endian format expected by the contract
:::
:::info
The contract will take some time to respond, as the `sign` method starts recursively calling itself waiting for the **MPC service** to sign the transaction.
<details>
<summary> A Contract Recursively Calling Itself? </summary>
NEAR smart contracts cannot halt execution and await the completion of an external process. To work around this, the contract can call itself again and again, checking on each iteration whether the result is ready.
**Note:** Each call takes one block, which equates to ~1 second of waiting. After some time, the contract will either return the result provided by the external party, or run out of gas while waiting and return an error.
</details>
:::
---
## 4. Reconstructing the Signature
The MPC contract will not return the signature of the transaction itself, but the elements needed to reconstruct the signature.
This allows the contract to generalize the signing process for multiple blockchains.
<Tabs groupId="code-tabs">
<TabItem value="Ξ Ethereum">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/ethereum.js"
start="62" end="71" />
In Ethereum, the signature is reconstructed by concatenating the `r`, `s`, and `v` values returned by the contract.
The `v` parameter is a parity bit that depends on the `sender` address. We reconstruct the signature using both possible values (`v=0` and `v=1`) and check which one corresponds to our `sender` address.
</TabItem>
<TabItem value="₿ Bitcoin">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/bitcoin.js"
start="105" end="116" />
In Bitcoin, the signature is reconstructed by concatenating the `r` and `s` values returned by the contract.
</TabItem>
</Tabs>
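For the Ethereum case, the concatenation can be sketched as follows (the `27 + v` offset is the legacy Ethereum convention; EIP-155 transactions encode `v` differently):

```javascript
// Assemble a 65-byte Ethereum signature from its components.
// r and s are 32-byte hex strings (no 0x prefix); v is the parity bit (0 or 1).
function buildEthSignature(r, s, v) {
  return "0x" + r + s + (27 + v).toString(16);
}

const sig = buildEthSignature("ab".repeat(32), "cd".repeat(32), 0);
// sig.length === 132: "0x" + 64 hex chars (r) + 64 (s) + 2 (v)
```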
---
## 5. Relaying the Signature
Once we have reconstructed the signature, we can relay it to the corresponding network. This will once again vary depending on the target blockchain.
<Tabs groupId="code-tabs">
<TabItem value="Ξ Ethereum">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/ethereum.js"
start="80" end="84" />
</TabItem>
<TabItem value="₿ Bitcoin">
<Github language="js"
url="https://github.com/near-examples/near-multichain/blob/main/src/services/bitcoin.js"
start="119" end="127" />
</TabItem>
</Tabs>
:::info
⭐️ For a deep dive into the concepts of Chain Signatures see [What are Chain Signatures?](/concepts/abstraction/chain-signatures)
⭐️ For complete examples of a NEAR account performing Eth transactions:
- [web-app example](https://github.com/near-examples/near-multichain)
- [component example](https://test.near.social/bot.testnet/widget/chainsig-sign-eth-tx)
:::
---
id: meta-transactions
title: Meta Transactions
---
# Meta Transactions
[NEP-366](https://github.com/near/NEPs/pull/366) introduced the concept of meta
transactions to Near Protocol. This feature allows users to execute transactions
on NEAR without owning any gas or tokens. In order to enable this, users
construct and sign transactions off-chain. A third party (the relayer) is used
to cover the fees of submitting and executing the transaction.
## Overview
![Flow chart of meta
transactions](https://raw.githubusercontent.com/near/NEPs/003e589e6aba24fc70dd91c9cf7ef0007ca50735/neps/assets/nep-0366/NEP-DelegateAction.png)
_Credits for the diagram go to the NEP authors Alexander Fadeev and Egor
Uleyskiy._
The graphic shows an example use case for meta transactions. Alice owns an
amount of the fungible token `$FT`. She wants to transfer some to John. To do
that, she needs to call `ft_transfer("john", 10)` on an account named `FT`.
The problem is, Alice has no NEAR tokens. She only has a NEAR account that
someone else funded for her and she owns the private keys. She could create a
signed transaction that would make the `ft_transfer("john", 10)` call. But
validator nodes will not accept it, because she does not have the necessary Near
token balance to purchase the gas.
With meta transactions, Alice can create a `DelegateAction`, which is very
similar to a transaction. It also contains a list of actions to execute and a
single receiver for those actions. She signs the `DelegateAction` and forwards
it (off-chain) to a relayer. The relayer wraps it in a transaction, of which the
relayer is the signer and therefore pays the gas costs. If the inner actions
have an attached token balance, this is also paid for by the relayer.
On chain, the `SignedDelegateAction` inside the transaction is converted to an
action receipt with the same `SignedDelegateAction` on the relayer's shard. The
receipt is forwarded to Alice's account, which unpacks the
`SignedDelegateAction` and verifies that it is signed by Alice with a valid
nonce, etc. If all checks are successful, a new action receipt with the inner
actions as its body is sent to `FT`. There, the `ft_transfer` call finally
executes.
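Per NEP-366, the `DelegateAction` Alice signs carries roughly the following fields; it is sketched here as a plain object, with field names following the NEP (the concrete action payload is illustrative):

```javascript
// The DelegateAction Alice signs off-chain (field names per NEP-366).
const delegateAction = {
  sender_id: "alice.near",     // Alice, who signs but pays no gas
  receiver_id: "ft.near",      // the FT contract receiving the inner actions
  actions: [                   // actions the relayer executes on her behalf
    { functionCall: { methodName: "ft_transfer",
                      args: { receiver_id: "john.near", amount: "10" } } },
  ],
  nonce: 42,                   // checked against Alice's access-key nonce
  max_block_height: 1_000_000, // bounds how long the action stays valid
  public_key: "ed25519:...",   // key the signature is verified against
};
```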
## Relayer
Meta transactions only work with a [relayer](relayers.md). This is an application-layer
concept, implemented off-chain. Think of it as a server that accepts a
`SignedDelegateAction`, performs some checks on it, and eventually forwards it
inside a transaction to the blockchain network.
A relayer may choose to offer its service for free, but that is unlikely to be
financially viable long-term. Instead, it could have users pay through other
means outside of the Near blockchain. With some tricks, the service can even be
paid for using fungible tokens on Near.
In the example visualized above, the payment is done using $FT. Together with
the transfer to John, Alice also adds an action to pay 0.1 $FT to the relayer.
The relayer checks the content of the `SignedDelegateAction` and only processes
it if this payment is included as the first action. In this way, the relayer
is paid within the same transaction that pays John.
:::warning Keep in mind
The payment to the relayer is still not guaranteed. It could be that
Alice does not have sufficient `$FT` and the transfer fails. To mitigate this,
the relayer should check Alice's `$FT` balance first.
:::
Unfortunately, this still does not guarantee that the balance will be high
enough once the meta transaction executes. The relayer could waste NEAR gas
without compensation if Alice reduces her `$FT` balance at just the right
moment. Some level of trust between the relayer and its users is therefore
required.
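A relayer's first-action check, as described above, might be sketched as follows (the action shape and names are illustrative, not a real relayer API):

```javascript
// Only process the delegate action if its first inner action pays the relayer
// at least `minAmount` of $FT via `ft_transfer`.
function firstActionPaysRelayer(delegateAction, relayerId, minAmount) {
  const [first] = delegateAction.actions;
  if (!first || !first.functionCall) return false;
  const { methodName, args } = first.functionCall;
  return (
    methodName === "ft_transfer" &&
    args.receiver_id === relayerId &&
    BigInt(args.amount) >= BigInt(minAmount)
  );
}

// Alice's delegate action: pay the relayer first, then transfer to John.
const da = {
  actions: [
    { functionCall: { methodName: "ft_transfer", args: { receiver_id: "relayer.near", amount: "100" } } },
    { functionCall: { methodName: "ft_transfer", args: { receiver_id: "john.near", amount: "10" } } },
  ],
};
console.log(firstActionPaysRelayer(da, "relayer.near", "100")); // true
```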
## Limitations
### Single receiver
A meta transaction, like a normal transaction, can only have one receiver. It's
possible to chain additional receipts afterwards. But crucially, there is no
atomicity guarantee and no roll-back mechanism.
### Accounts must be initialized
Any transaction, including a meta transaction, must use nonces to avoid replay
attacks. The nonce must be chosen by Alice and compared to a nonce stored on
chain. This nonce is stored on the access key information that gets initialized
when the account is created.
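A simplified sketch of this replay protection: the nonce in the signed action must be strictly greater than the nonce stored on the signer's access key, and the stored nonce advances on acceptance (the data shapes here are illustrative, not nearcore types):

```javascript
function checkAndBumpNonce(accessKey, actionNonce) {
  if (actionNonce <= accessKey.nonce) return false; // stale or replayed
  accessKey.nonce = actionNonce;                    // persisted on chain
  return true;
}

const aliceKey = { nonce: 7 }; // initialized when the account/key was created
console.log(checkAndBumpNonce(aliceKey, 8)); // true  (accepted)
console.log(checkAndBumpNonce(aliceKey, 8)); // false (replay rejected)
```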
## Constraints on the actions inside a meta transaction
A transaction is only allowed to contain one single delegate action. Nested
delegate actions are disallowed and so are delegate actions next to each other
in the same receipt.
## Gas costs for meta transactions
Meta transactions challenge the traditional ways of charging gas for actions.
Let's assume Alice uses a relayer to
execute actions with Bob as the receiver.
1. The relayer purchases the gas for all inner actions, plus the gas for the
delegate action wrapping them.
2. The cost of sending the inner actions and the delegate action from the
relayer to Alice's shard will be burned immediately. The condition `relayer
== Alice` determines which action `SEND` cost is taken (`sir` or `not_sir`).
Let's call this `SEND(1)`.
3. On Alice's shard, the delegate action is executed, thus the `EXEC` gas cost
for it is burned. Alice sends the inner actions to Bob's shard. Therefore, we
burn the `SEND` fee again. This time based on `Alice == Bob` to figure out
`sir` or `not_sir`. Let's call this `SEND(2)`.
4. On Bob's shard, we execute all inner actions and burn their `EXEC` cost.
Each of these steps should make sense and not be too surprising. But the
consequence is that the implicit costs paid at the relayer's shard are
`SEND(1)` + `SEND(2)` + `EXEC` for all inner actions plus `SEND(1)` + `EXEC` for
the delegate action. This might be surprising but hopefully with this
explanation it makes sense now!
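The cost breakdown above can be worked through with made-up fee numbers (`sir`, short for "sender is receiver", applies when the condition like `relayer == Alice` holds; all values here are hypothetical):

```javascript
const fees = { sir: 100, notSir: 150 };
const sendFee = (sender, receiver) => (sender === receiver ? fees.sir : fees.notSir);

function relayerGasPurchase({ relayer, alice, bob, innerExec, delegateExec }) {
  const send1 = sendFee(relayer, alice); // SEND(1): burned sending toward Alice
  const send2 = sendFee(alice, bob);     // SEND(2): burned when Alice forwards to Bob
  // inner actions: SEND(1) + SEND(2) + EXEC; delegate action: SEND(1) + EXEC
  return send1 + send2 + innerExec + (send1 + delegateExec);
}

console.log(relayerGasPurchase({
  relayer: "relayer.near", alice: "alice.near", bob: "bob.near",
  innerExec: 500, delegateExec: 200,
})); // 150 + 150 + 500 + (150 + 200) = 1150
```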
## Gas refunds in meta transactions
Gas refund receipts work exactly as they do for normal transactions. At every step, the
difference between the pessimistic gas price and the actual gas price at that
height is computed and refunded. At the end of the last step, additionally all
remaining gas is also refunded at the original purchasing price. The gas refunds
go to the signer of the original transaction, in this case the relayer. This is
only fair, since the relayer also paid for it.
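The refund arithmetic described above can be sketched with hypothetical prices (in arbitrary units of yoctoNEAR per gas unit; this is an illustration of the rule, not nearcore's implementation):

```javascript
// At each step, refund the gas burnt times the difference between the
// pessimistic price and the actual price at that height; at the end,
// refund all unspent gas at the original purchase price.
function totalRefund(steps, gasPurchased, purchasePrice) {
  const priceDiffRefund = steps.reduce(
    (sum, s) => sum + s.gasBurnt * (s.pessimisticPrice - s.actualPrice), 0);
  const gasBurnt = steps.reduce((sum, s) => sum + s.gasBurnt, 0);
  const unspentRefund = (gasPurchased - gasBurnt) * purchasePrice;
  return priceDiffRefund + unspentRefund; // sent to the tx signer, i.e. the relayer
}

const refund = totalRefund(
  [
    { gasBurnt: 100, pessimisticPrice: 12, actualPrice: 10 }, // step 1: 100 * 2 = 200
    { gasBurnt: 300, pessimisticPrice: 12, actualPrice: 11 }, // step 2: 300 * 1 = 300
  ],
  1000, // gas purchased up front
  10    // original purchase price
);
console.log(refund); // 200 + 300 + (1000 - 400) * 10 = 6500
```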
## Balance refunds in meta transactions
Unlike gas refunds, the protocol sends balance refunds to the predecessor
(a.k.a. sender) of the receipt. This makes sense, as we deposit the attached
balance to the receiver, who has to explicitly reattach a new balance to new
receipts they might spawn.
In the world of meta transactions, this assumption is also challenged. If an
inner action requires an attached balance (for example a transfer action) then
this balance is taken from the relayer.
The relayer can see what the cost will be before submitting the meta transaction
and agrees to pay for it, so nothing wrong so far. But what if the transaction
fails execution on Bob's shard? At this point, the predecessor is `Alice`, and
therefore she receives the refunded token balance, not the relayer. This is
something relayer implementations must be aware of since there is a financial
incentive for Alice to submit meta transactions that have high balances attached
but will fail on Bob's shard.
## Function access keys in meta transactions
Function access keys can limit the allowance, the receiving contract, and the
contract methods. The allowance limitation behaves somewhat unexpectedly with
meta transactions.
But first, both the methods and the receiver will be checked as expected. That
is, when the delegate action is unwrapped on Alice's shard, the access key is
loaded from the DB and compared to the function call. If the receiver or method
is not allowed, the function call action fails.
For allowance, however, there is no check. All costs have been covered by the
relayer. Hence, even if the key's allowance is insufficient to make the call
directly, the call still works when routed through a meta transaction.
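The checks described above might be sketched like this (the key and call shapes are illustrative, not nearcore's types): receiver and method name are enforced when the delegate action is unwrapped, while allowance is deliberately not consulted, since the relayer already paid for the gas:

```javascript
function checkFunctionCallKey(key, call) {
  if (key.receiverId !== call.receiverId) return { ok: false, reason: "wrong receiver" };
  if (key.methodNames.length > 0 && !key.methodNames.includes(call.methodName)) {
    return { ok: false, reason: "method not allowed" };
  }
  return { ok: true }; // note: key.allowance is never checked here
}

// Even a zero-allowance key passes, as long as receiver and method match.
const key = { receiverId: "ft.near", methodNames: ["ft_transfer"], allowance: 0n };
console.log(checkFunctionCallKey(key, { receiverId: "ft.near", methodName: "ft_transfer" }).ok);  // true
console.log(checkFunctionCallKey(key, { receiverId: "dex.near", methodName: "ft_transfer" }).ok); // false
```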
This behavior is in the spirit of allowances limiting how much of an account's
financial resources a user can spend. But note that if someone limits a
function access key to one trivial action by setting a very small allowance,
that limit can be circumvented by going through a relayer. This is an
interesting twist that comes with the addition of meta transactions.
Public Chains 2020: NEAR Protocol's Road to the Open Web
COMMUNITY
February 19, 2020
“Into the Open Web”, China Community AMA.
Conducted by NEAR Protocol Co-Founder Illia Polosukhin and China Lead Amos Zhang.
1、你是如何进入区块链行业又做了NEAR公链的?
Hi Illia! How did you enter the blockchain space and come up with the idea of Near Protocol?
@illia:
Alex and I previously had been working on an AI company NEAR.ai. Though we were doing cutting edge research in the field of program synthesis (automating software engineering), we were lacking real data and real users. As part of our work, we built a crowdsourcing platform that would employ engineers across the world to solve programming tasks to allow us to train better models.
We faced multiple issues, starting from payment across the world to the fact that we couldn’t ourselves provide it [the platform] with enough tasks. We started to look at how to make this platform into a marketplace and blockchain seemed like a perfect platform for this.
Alex comes from the background of building sharded databases at MemSQL, and I worked at Google Research on large distributed machine learning systems – we went down the rabbit hole of learning about blockchain, consensus and generally surrounding technologies. As we were learning, we stumbled upon the fact that we didn’t find a fitting solution that we would be able to use. Both from a technology standpoint, and even more importantly, from a usability standpoint.
We had a chat with some of our friends from MemSQL and Google on July 4th and realized that in that room we had great systems engineers who are all excited about the technology and also have experience building distributed systems.
Thus NEAR Protocol was born; we grew the team from the 3 people we had at NEAR.ai to 9 people over a week. Now we have 30+ people all over the world.
2、NEAR的分片设计是什么样的,和目前已有的分片方案有什么不同?
What is NEAR’s sharding solution, and how will NEAR differentiate with other sharding solutions?
@illia:
First and foremost NEAR is a developer platform. Meaning that we are focused on delivering the best experience for developers to build applications without limiting the types of experience they can build for their users.
This means that we really focused on tooling, APIs, common programming languages and making things really easy to develop. Second, we focused on allowing a common non-crypto user to easily start using applications built on NEAR – you don’t need to have tokens, wallets or prior knowledge of private/public keys to start using things.
Sharding and scalability are emerging as the outcome of this – blockchain should not be blocking developers or users from using applications. Hence there should not be limitations on the infrastructure layer.
Our sharding is designed to be hidden from the developer. For example, instead of shard chains we shard blocks. This means that developers do not have to be concerned with the shard they are on nor with other applications on that shard and the gas prices among shards. Instead, developers have the convenience of interacting with the NEAR network as they do with a single blockchain now. To achieve that, we have designed a novel sharding approach called Nightshade, you can read more about it https://near.ai/nightshade or check out this video https://www.youtube.com/watch?v=4CKvfYJTjxk.
Chinese version: https://blog.csdn.net/sun_dsk1/article/details/102763593
There is also a GitHub version: https://github.com/marco-sundsk/NEAR_DOC_zhcn/blob/master/whitepaper/nightshade_cn.md
Additionally, economics is extremely important for any chain, and in sharded or multi-chain setups this becomes even more crucial. We have successfully made strides to both hide complexity and solve some of the burning needs of developers – https://near.ai/economics.
Chinese version: https://blog.csdn.net/sun_dsk1/article/details/102763595
3、你认为分片带来的最大的可用性挑战是什么,NEAR打算如何应对?
What are the biggest usability challenges due to sharding, and how do you plan to address them ?
@illia:
The biggest challenge for developers building on sharded blockchains compared to blockchains like Ethereum is the fact that cross-contract calls become asynchronous. In Ethereum, when we send a transaction, if something fails midway through its execution across many contracts, the system reverts all the changes.
This is highly unscalable in nature. And if we look at any distributed system used in Web2, we see that everything is operating asynchronously. You might have seen the DevCon commentary by James Prestwich about how this would hurt the experience and composability.
There are a few things we are doing to address this:
The Nightshade design makes cross-shard communication reliable and executed at the next block produced by the network. Because of this, we removed the need for developers to care whether the contract they are calling is in the same shard or not. All cross-contract calls get executed in the same block, even if routing across shards has happened.
Because different contracts might have different usage, it also means different shards might get more or fewer transactions. Dynamic resharding is done every epoch to rebalance the contracts and accounts between shards, and sometimes even change the number of shards, to keep usage of each shard relatively even.
The economic design (https://near.ai/economics) aims to provide predictable fees for developers and users. One of the problems of the auction-based systems that Bitcoin and Ethereum use is that transaction pricing can change dramatically within a short period of time. In NEAR, the price changes predictably from block to block depending on network usage, which allows developers and users to understand how much operations will cost. Additionally, the price is the same across all shards, removing the need for developers to manage that as well.
Because all cross-contract calls are executed in the next block, they are done asynchronously. This is different from the Ethereum model, where calling another contract is synchronous and returns results back into your function. To give developers the ability to work with this, we have built a promises API that also supports callbacks. We have SDKs for Rust and AssemblyScript (a TypeScript-to-WebAssembly compiler) that provide an API similar to futures in their respective ecosystems. Developers who are familiar with asynchronous programming will be able to pick it up relatively easily. For example, we had a workshop where people not familiar with our tech stack managed to implement a Map-Reduce job across shards.
Additional tooling is added to make locking safe in a blockchain environment. This is required because, unlike a normal setting where your program/service is only called by other authorized services, on a blockchain anyone can call your contract. This locking mechanism allows developers to write contracts that lock parts of the state within a sequence of cross-contract asynchronous calls and be sure that the lock will be released when the sequence (transaction) finishes. This allows building complex sets of contracts that operate in a similar way as on Ethereum, propagating errors or reverting changes to the state of other contracts.
4、NEAR为什么要过渡到PoST?纯PoS有什么问题呢?如何实现呢?
Why are you transitioning to PoST? What’s wrong with pure PoS? And how will you make it happen?
@illia:
PoST is currently still in research. We have identified a few core issues with PoS, some of which are described in this video – https://www.youtube.com/watch?v=XiJI7EhNsmc&list=PL9tzQn_TEuFW_t9QDzlQJZpEQnhcZte2y.
One of the core problems in Proof of Stake is long-range attacks. The general agreement is to require a weak subjectivity assumption (https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/). Another problem is that most consensus algorithms that are built on top of Proof of Stake require a specific set of validators to be elected per period of time and that set of validators needs to be online to select the next set.
These are problems to which PoW has a much better answer, at the cost of huge energy consumption, variance in rewards (which leads to pooling and centralization), and increased block times and latency due to requiring block propagation across the network.
PoST addresses these issues as well as makes the “mining” of space proofs fairer as it doesn’t require specialized hardware and cheap electricity to participate.
We are not planning to have any PoST work in the upcoming MainNet. This will be researched and developed post-MainNet and presented to the community to decide if it's worth upgrading to.
5、这次NEAR中国行活动主题是:「区块链技术在政府和企业中的应用」,NEAR在企业应用上的解决方案有哪些优势?
We know the theme of the NEAR China tour is [Blockchain adoption in enterprise and government]. In China, many companies offer permissioned chain solutions. What are the advantages of NEAR's blockchain solutions for enterprises and governments?
What are the possible applications of NEAR technology for businesses and enterprises?
@illia:
What NEAR’s design provides at its core is cross contract communication when contracts are not operating under the same chain. This is important for sharding public chains, but it also allows for enterprise use cases where a business can run their own shard. We call this Private Shard.
In systems like Hyperledger or Corda, you would need to set up a set of participants, get them to agree to share all of the data and contracts they put into this chain. The benefit of Private Shard is that it doesn’t require setting up consortia or any upfront investment. It’s also easy to set which other Private Shards can access what data and contracts from a given company or if this company wants others to participate in the same shard later.
Private Shard is also simple for businesses to grasp because for them it's a SaaS model, where a business can spin up its own Private Shard in its private cloud or data center and start using it as a backend for its applications. The benefit is that these applications share a global namespace across the whole universe and can communicate both with public blockchain applications and with other private shards. When another business B wants to interact with business A, given the shared global namespace of contracts and proofs of state checked into the public chain, it can easily call into the contracts of business A.
There are lots of benefits, starting from managing public assets (like digital real estate or monetary value) to the ability to easily grow the network of applications across different enterprises that can rely on the common protocol to communicate.
6、NEAR称使用开发模版的话 15 分钟就可以基于平台开发一个 APP,并即时发布,对开发者十分友好。可以具体介绍一下开发者如何参与到NEAR生态做App,一龙有没有什么想法可以启发大家?
How can a developer take advantage of the NEAR platform to make some tools and DApps?
And Illia, would you like to share some ideas to inspire our developers?
@illia:
We have quite a few tools for developers:
Online IDE to quickly start building: https://near.dev
Documentation: https://docs.nearprotocol.com
Rust bindings that also include examples of a few contracts: https://github.com/nearprotocol/near-bindgen/tree/master/examples
Nearlib, a JS SDK for building frontends and integrating with the blockchain: https://github.com/nearprotocol/nearlib
Example NFT for Corgis – https://github.com/nearprotocol/crypto-corgis-solution
We have a live online hackathon for Chinese developers and have published some ideas here: https://github.com/nearprotocol/hackathon/blob/master/ideas.md
7、是否可以回顾总结2019年NEAR的项目进展结果,再简单陈述在2020年NEAR有哪些计划?
What is your review on NEAR 2019, and what are the exciting plans of NEAR 2020?
@illia:
In 2019 NEAR:
Went from a small team in San Francisco, to the global community.
Changed our sharding design and implemented it, running a pre-release TestNet with external validators around the globe.
Onboarded the first batch of application developers who are building exciting apps on NEAR and will launch with us at MainNet. Helped some of them raise money and scale up to deliver a good experience on day 1.
Had tons of meetups, workshops and 8 hackathons around the world, with developers giving us valuable feedback on how to improve the platform.
For 2020 our first and foremost goal is to launch MainNet and start growing the Open Web community.
We believe that there are a ton of opportunities to bring new developers and entrepreneurs to build the next wave of businesses that are more aligned with users and that we can power part of this transition with NEAR.
We really think 2020 will be the year of growth – developers in the ecosystem, applications launched, usage by regular consumers and adoption by big companies.
8.一龙说2020 会是The year of Open Web,这个预测的依据是什么,能谈谈你理解的OpenWeb吗?
Illia has said 2020 will be the year of Open Web. Could you explain this? How do you think of Open Web?
@illia:
Open Web is the new paradigm of businesses and applications which are aligned around users.
Currently the incentives in Web2 are to build moats and maximize revenue, even against users' benefit. The goal of Open Web is to bring control back to the user: over their money, assets, and data.
We already have movements across the globe that are starting in this direction, with GDPR and data portability laws promoting privacy and self-sovereignty, all trying to change the status quo. But until now there were no real alternatives to the centralizing power of zero marginal costs in Web2.
An example of this is any social network. They all start by trying to acquire users, being genuinely open to applications built on top of them, and serving smaller groups' needs. But as the network effect compounds, the value of playing nice disappears, and it instead becomes more about acquiring the competition before it grows too big, and growing revenue.
A way to turn this around is to commoditize the social graph itself: make it user-owned, instead of owned by the company that provides the software, and make it portable and usable across any application. This removes the moat around a user's data or friends and will instead force companies to build better products that serve users' needs more attentively. It also means there is room for more niche applications that can leverage this data to provide a good experience for a small community or social group, which right now is impossible because no local app would gather enough of a network and no global app would focus on building something for a small local community.
Even enterprises, as they adopt more blockchain technology, will participate in this movement, unlocking users' data and allowing more interoperability.
We think in 2020 we will start seeing the first applications that deliver on this promise and see long-term alignment of applications and users.
NEAR & Gaming: How Web3 is Disrupting Gaming
COMMUNITY
August 11, 2022
The world of gaming is undergoing a radical shift thanks to cryptocurrency and Web3 technologies. Traditional top-down, centralized Web2 models of gaming are being challenged left and right. Community-operated games, new economic structures like play-to-earn, and other developments benefiting both players and indie game developers alike are just the beginning.
“A lot of smaller, independent game developers are restricted in Web2,” says Chase Freo, founder of OP Games, a native crypto gaming platform. “They simply don’t have the money to hit number one in the App or Google Play stores. But crypto and Web3 are changing all that. We’re eliminating gatekeepers and changing the way both developers and players participate in gaming.”
And while decentralization is a key part of how Web3 is disrupting the gaming industry, there’s much more to the story than that. Game development can be community managed and co-created via decentralized autonomous organizations (DAOs). Radical new economic and financial models within games are emerging daily. Global brands even realize that they need to become involved with crypto gaming in some way.
In Part 3 of the “NEAR & Gaming” series, we’re exploring a few ways that Web3’s unique set of tools and infrastructure are disrupting the status quo. And why the NEAR ecosystem is at the forefront of this massive sea change.
Enabling composable fandom in eSports
The rise of eSports and competitive gaming has been a major industry trend in recent years. In many ways it lends itself perfectly to decentralization, cryptocurrencies, and Web3 technologies. In particular, digital collectibles and NFTs represent a significant opportunity to bring fans, creators, and brands closer to eSports teams and competitors.
“The industry now has tools to create digital collectibles as a medium for fan engagement,” explains Bridge Craven, CEO of ARterra Labs. “This can span a broad spectrum of use cases, like a utility NFT that grants fans tickets to a tournament or certain access to their favorite player through special Twitch emotes or Discord channels.”
ARterra is a Web3 fan engagement infrastructure platform for the eSports and gaming market. A major part of Craven’s vision is to make Web3 a user-friendly entry point for gamers, fans, and collectors, with win-win economic incentives for all parties involved.
“We chose NEAR because of the benefits in terms of scalability and user experience,” Craven continues. “We subsidize gas costs and are the custodian of the users’ private keys, so people don’t need to worry about the crypto interaction.”
Craven explains that Web3 is leading towards what he calls Composable Fandom. The idea that Web3 facilitates more interactive, long-lasting, and flexible ways for fans to interact with their favorite gamers, eSports teams, and even brands.
“Ownership is something that’s been missing from gaming, whether it’s in-game assets or something that travels with you like a passport,” Craven says. “But with NFTs and Web3, players have more opportunities to make a living, and fans can own their assets, and their fandom.”
Mobile taking blockchain gaming mainstream
Gaming is also onboarding more users into Web3 through mobile devices, and vice versa. So far, most successful Web3 games have been browser-based, while Web2 corporations and gaming studios continue to dominate mobile.
But this model is changing. And PlayEmber, a Web3 gaming platform focused on the mobile market, is helping to lead the charge. PlayEmber’s Web3 software development kit (SDK), combined with a native NEAR wallet, is reshaping the Web3 mobile gaming landscape for developers and players alike.
“Everyone is talking about blockchain gaming, which is great. But if you look at GameFi 1.0, for example, it was all desktop,” observes Jon Hook, advisor and CMO at PlayEmber. “But nobody was really talking about bringing blockchain gaming to mobile. So we thought, why not use blockchain to make the games people love to play even more fun?”
Hugo Furn, founder of PlayEmber, adds that once Web3 gaming focuses more on building experiences that are as fun as Web2, the blockchain industry will finally crack the code for bringing blockchain gaming to the masses.
“We already know that NEAR is going to be this amazing on-ramp,” Furn says. “But instead of starting with things like tokenomics or yield farming mechanisms, we’re helping developers keep the mobile gaming experience at the center of what they do, then incorporate in-game tokens or NFTs to boost what players can do.”
Both Hook and Furn envision a future where mobile blockchain gaming—powered by PlayEmber’s SDK and NEAR on the backend—makes blockchain gaming as seamless and accessible as Web2. The games will not only be fun but based on a decentralized economic model that provides greater benefits to gamers of all stripes.
“You could play a mobile game and earn NFTs to sell on an open marketplace, for example,” Hook predicts. “Or if brands partner with games, you could redeem those tokens for a free coffee or flight upgrade. Mobile is critical to making blockchain games as popular as the ones featured at events like E3 and Comic Con.”
Building a resourceful, collaborative metaverse
While the seeds of what we now know as the blockchain-enabled metaverse were sown in Web2, new advancements, projects, and thought leaders are taking the metaverse to the next level. Atlantis World is one such project, aiming to build a more social, accessible metaverse for gaming, productivity, and whatever uses that inhabitants can dream up.
“At Atlantis World, we’re building a more resourceful metaverse,” explains Rev Miller, founder of Atlantis World. “And what I mean by that is connecting things like audio, video, text chats, and conferencing into a lightweight metaverse experience that can be completely stored on a thumb drive.”
Miller sees Atlantis World as an intersection between a metaverse gaming experience and a platform where people can create their own customizable virtual spaces.
“We see Atlantis World as kind of a Layer Zero for the metaverse,” Miller continues. “We want to enable users to find their own creative ways to engage and solve problems. For instance, we’re planning to integrate AstroDAO into Atlantis World to potentially help solve governance challenges. DAOs will be able to meet, discuss, and vote entirely in the metaverse to hopefully increase engagement and voter turnout.”
Atlantis World represents an important shift in the metaverse from Web2 to Web3, and even current ways in which the blockchain metaverse functions.
“We want to make it easier for users by providing them with no-code tooling and enabling them to build spaces on their own,” says Miller. “At the same time, we want to challenge the current notion of metaverse lands. It doesn’t make sense to have scarce resources and limited plots, so we want Atlantis World to be flexible and expansive in that respect.”
Enhancing gaming’s economic incentives
In Web2, economic activity and incentives are typically a one-way street: game studios and companies monetize players and users. Web3 has the potential to upend this model by making economic interactions and incentives fairer. Current Web3 projects on NEAR are already demonstrating how this will take place.
“We want to allow both players and developers to participate in the economic incentives that Web3 provides,” says Chase Freo of OP Games. “Imagine yourself as a player who walks into an arcade, except the games aren’t owned by a company or the arcade owner. They’re owned and operated by the developers themselves. That’s the gap we’re bridging.”
At OP Games, Web2 native gaming developers can easily migrate to Web3 with an SDK that features what Freo calls “Lego Blocks”—modular components for building and operating blockchain-based games. Those games can then be placed on the Arcadia platform, which is primarily for consumers. There are over 4,000 games, primarily tournament-based, which players can join based on their individual skill level. Players can participate in tournaments, NFT wagering, and other gaming activities that reward participation rather than simply taking their money to play.
“Another thing we’re working on is what we call NFT sharding for games,” Freo continues. “It’ll be the ability for game developers to transform their entire games into an NFT, fractionalize it, and sell those multiple pieces to the community. So then the community, along with the developers, run the game together as an enterprise.”
Freo adds that this community-owned and driven model for Web3 games will completely disrupt the economic models that gaming has traditionally existed under. This will move closer to becoming a reality as blockchain gaming experiences match what players are accustomed to from Web2 apps.
“Games don’t necessarily have to be fun, per se, but they need to evoke emotions. Games that are scary, keep you up at night or test your morality,” Freo says. “These are the types of things that keep players coming back for more, and I think that we need more of in the Web3 space.”
The dawn of Web3 gaming disruption
Last year’s newsworthy boom in blockchain gaming and the metaverse was just the beginning of how Web3 is changing the gaming industry—hopefully for the better. Fans of eSports can get closer to their favorite stars and own collectibles with real-world utility and value. Mobile gaming on the blockchain promises to reward users more, instead of simply extracting time and attention.
The metaverse also promises to become more open, lightweight, and collaborative if projects like Atlantis World are any indication of what’s to come. But most importantly, economic models and incentives around the interaction between creators and players are ripe for disruption.
In Web3, players will have a bigger voice in allocating in-game resources and how their overall world exists. And developers will be empowered more creatively and economically in terms of the games, worlds, and experiences they create.
---
id: introduction
title: Introduction
sidebar_label: Introduction
---
<blockquote className="info">
<strong>Did you know?</strong><br /><br />
The [NEAR Platform overview](/concepts/welcome) clarifies much of the language in this section.
</blockquote>
## The life of a transaction: {#the-life-of-a-transaction}
- A client creates a transaction, computes the transaction hash and signs this hash to get a signed transaction. Now this signed transaction can be sent to a node.
- The RPC interface receives the transaction and routes it to the correct physical node using `signer_id`. Since the `signer_id` must be a NEAR Account ID which lives on a single shard, the account is mapped to a shard which is followed by at least one validator running at least one machine with an IP address.
- When a node receives a new signed transaction, it validates the transaction for signer, receiver, account balance, cost overflow, signature, etc. ([see here](https://nomicon.io/RuntimeSpec/Scenarios/FinancialTransaction.html#transaction-to-receipt)) and gossips it to all peers following the same shard. If a transaction has an invalid signature or would be invalid on the latest state, it is rejected quickly and returns an error to the original RPC call.
- Valid transactions are added to the transaction pool (every validating node has its own independent copy of a transaction pool). The transaction pool maintains transactions that are not yet discarded and not yet included into the chain.
- A pool iterator is used to pick transactions from the pool one at a time, ordered from the smallest nonce to largest, until the pool is drained or some chunk limit is reached (max number of transactions per chunk or max gas burnt per chunk to process transactions). Please refer to articles on the [pool iterator](https://nomicon.io/ChainSpec/Transactions.html?highlight=pool#pool-iterator) and [gas](/concepts/protocol/gas) for more details.
- To accommodate the distributed nature of a sharded blockchain, all transactions are subsequently returned to a segmented transaction pool having 3 distinct layers: accepted transactions (which will be processed on the next chunk), pending transactions (which exceeded the limits of the current chunk and will be included in a later round of processing) and invalid transactions (which will be rejected at the next available opportunity).
- Before producing a chunk, transactions are ordered and validated again. This is done to produce chunks with only valid transactions across a distributed system.
- While a transaction is being applied to a chunk, any errors raised by the application of its actions are also returned via RPC.
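The nonce ordering and per-chunk limits described above can be sketched in a few lines. This is an illustrative model only: the field names (`signerId`, `nonce`, `gas`) are hypothetical, and the real nearcore pool iterator round-robins across signers and enforces additional limits.

```javascript
// Illustrative sketch of picking transactions from a pool for a chunk.
// NOT nearcore's actual pool iterator; shapes and fields are assumptions.
function selectTransactions(pool, maxGasPerChunk, maxTxPerChunk) {
  // Order each signer's transactions from smallest nonce to largest.
  const ordered = [...pool].sort((a, b) =>
    a.signerId === b.signerId
      ? a.nonce - b.nonce
      : a.signerId.localeCompare(b.signerId)
  );

  const selected = [];
  let gasUsed = 0;
  for (const tx of ordered) {
    if (selected.length >= maxTxPerChunk) break;     // max transactions per chunk
    if (gasUsed + tx.gas > maxGasPerChunk) continue; // max gas burnt per chunk
    gasUsed += tx.gas;
    selected.push(tx);
  }
  return selected; // "accepted" txs; the rest stay pending for a later chunk
}
```

Transactions that exceed the chunk limits are simply left in the pool for a later round, mirroring the "pending" layer of the segmented pool described above.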
## NEAR Platform Errors {#near-platform-errors}
Errors raised by the NEAR platform are implemented in the following locations in `nearcore`:
- [nearcore/core/primitives/src/errors.rs](https://github.com/near/nearcore/blob/master/core/primitives/src/errors.rs)
- [nearcore/runtime/near-vm-errors/src/lib.rs](https://github.com/near/nearcore/blob/master/runtime/near-vm-errors/src/lib.rs)
This page includes:
- **RuntimeError and subtypes**: errors raised when a transaction is first received by the destination node and again before it's processed and applied to a chunk
- **TxExecutionError and subtypes**: errors raised while a transaction and its component action(s) are being validated and applied to a chunk
- **VMError and subtypes**: errors raised during the execution of a Wasm contract by the NEAR VM
### RuntimeError and subtypes {#runtimeerror-and-subtypes}
```text
RuntimeError Error returned from `Runtime::apply`
StorageError Unexpected error which is typically related to the node storage corruption.
BalanceMismatchError An error happens if `check_balance` fails, which is likely an indication of an invalid state
InvalidTxError An error happened during TX verification and account charging
InvalidAccessKeyError Describes the error for validating access key
ActionsValidationError Describes the error for validating a list of actions
TotalPrepaidGasExceeded The total prepaid gas (for all given actions) exceeded the limit.
TotalNumberOfActionsExceeded The number of actions exceeded the given limit.
AddKeyMethodNamesNumberOfBytesExceeded The total number of bytes of the method names exceeded the limit in an Add Key action.
AddKeyMethodNameLengthExceeded The length of some method name exceeded the limit in an Add Key action.
IntegerOverflow Integer overflow during a compute.
InvalidAccountId Invalid account ID.
ContractSizeExceeded The size of the contract code exceeded the limit in a DeployContract action.
FunctionCallMethodNameLengthExceeded The length of the method name exceeded the limit in a Function Call action.
FunctionCallArgumentsLengthExceeded The length of the arguments exceeded the limit in a Function Call action.
```
### TxExecutionError and subtypes {#txexecutionerror-and-subtypes}
```text
TxExecutionError Error returned in the ExecutionOutcome in case of failure
InvalidTxError An error happened during Transaction execution
InvalidAccessKeyError Describes the error for validating access key
ActionsValidationError Describes the error for validating a list of actions
TotalPrepaidGasExceeded The total prepaid gas (for all given actions) exceeded the limit.
TotalNumberOfActionsExceeded The number of actions exceeded the given limit.
AddKeyMethodNamesNumberOfBytesExceeded The total number of bytes of the method names exceeded the limit in an Add Key action.
AddKeyMethodNameLengthExceeded The length of some method name exceeded the limit in an Add Key action.
IntegerOverflow Integer overflow during a compute.
InvalidAccountId Invalid account ID.
ContractSizeExceeded The size of the contract code exceeded the limit in a DeployContract action.
FunctionCallMethodNameLengthExceeded The length of the method name exceeded the limit in a Function Call action.
FunctionCallArgumentsLengthExceeded The length of the arguments exceeded the limit in a Function Call action.
ActionError An error happened during Action execution
ActionErrorKind The kind of ActionError happened
RuntimeCallError
ReceiptValidationError Describes the error for validating a receipt
ActionsValidationError Describes the error for validating a list of actions
TotalPrepaidGasExceeded The total prepaid gas (for all given actions) exceeded the limit.
TotalNumberOfActionsExceeded The number of actions exceeded the given limit.
AddKeyMethodNamesNumberOfBytesExceeded The total number of bytes of the method names exceeded the limit in an Add Key action.
AddKeyMethodNameLengthExceeded The length of some method name exceeded the limit in an Add Key action.
IntegerOverflow Integer overflow during a compute.
InvalidAccountId Invalid account ID.
ContractSizeExceeded The size of the contract code exceeded the limit in a DeployContract action.
FunctionCallMethodNameLengthExceeded The length of the method name exceeded the limit in a Function Call action.
FunctionCallArgumentsLengthExceeded The length of the arguments exceeded the limit in a Function Call action.
```
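As a rough sketch of how these errors surface to clients: a failed transaction's execution outcome carries a `Failure` status whose payload is one of the variants above. The helper below distinguishes the two top-level kinds; the exact JSON shapes are assumptions based on typical RPC responses, so treat it as illustrative and consult the `nearcore` sources linked above for the authoritative definitions.

```javascript
// Illustrative helper: classify the `status` field of an execution outcome
// returned over RPC. Object shapes here are assumptions for the sketch.
function classifyOutcomeStatus(status) {
  if (!status || !status.Failure) {
    return "success"; // e.g. { SuccessValue: "..." } or { SuccessReceiptId: "..." }
  }
  if (status.Failure.ActionError) {
    // An action failed while being applied to a chunk.
    return `ActionError: ${JSON.stringify(status.Failure.ActionError.kind)}`;
  }
  if (status.Failure.InvalidTxError) {
    // The transaction itself failed validation.
    return "InvalidTxError";
  }
  return "unknown failure";
}
```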
### VMError and subtypes {#vmerror-and-subtypes}
```text
VMError An error that occurs in the NEAR virtual machine
ExternalError Serialized external error from External trait implementation
InconsistentStateError An error that is caused by an operation on an inconsistent state (i.e. an integer overflow using a value from the given context)
IntegerOverflow Math operation with a value from the state resulted in an integer overflow
FunctionCallError
LinkError
WasmTrap
MethodResolveError
MethodEmptyName
MethodUTF8Error
MethodNotFound
MethodInvalidSignature
HostError
BadUTF16 String encoding is bad UTF-16 sequence
BadUTF8 String encoding is bad UTF-8 sequence
GasExceeded Exceeded the prepaid gas
GasLimitExceeded Exceeded the maximum amount of gas allowed to burn per contract
BalanceExceeded Exceeded the account balance
EmptyMethodName Tried to call an empty method name
GuestPanic Smart contract panicked
IntegerOverflow IntegerOverflow happened during a contract execution
InvalidPromiseIndex `promise_idx` does not correspond to existing promises
CannotAppendActionToJointPromise Actions can only be appended to non-joint promise.
CannotReturnJointPromise Returning joint promise is currently prohibited
InvalidPromiseResultIndex Accessed invalid promise result index
InvalidRegisterId Accessed invalid register id
IteratorWasInvalidated Iterator `iterator_index` was invalidated after its creation by performing a mutable operation on trie
MemoryAccessViolation Accessed memory outside the bounds
InvalidReceiptIndex VM Logic returned an invalid receipt index
InvalidIteratorIndex Iterator index `iterator_index` does not exist
InvalidAccountId VM Logic returned an invalid account id
InvalidMethodName VM Logic returned an invalid method name
InvalidPublicKey VM Logic provided an invalid public key
ProhibitedInView `method_name` is not allowed in view calls
NumberOfLogsExceeded The total number of logs will exceed the limit.
KeyLengthExceeded The storage key length exceeded the limit.
ValueLengthExceeded The storage value length exceeded the limit.
TotalLogLengthExceeded The total log length exceeded the limit.
NumberPromisesExceeded The maximum number of promises within a FunctionCall exceeded the limit.
NumberInputDataDependenciesExceeded The maximum number of input data dependencies exceeded the limit.
ReturnedValueLengthExceeded The returned value length exceeded the limit.
ContractSizeExceeded The contract size for DeployContract action exceeded the limit.
CompilationError
CodeDoesNotExist
WasmerCompileError
PrepareError Error that can occur while preparing or executing Wasm smart-contract
Serialization Error happened while serializing the module
Deserialization Error happened while deserializing the module
InternalMemoryDeclared Internal memory declaration has been found in the module
GasInstrumentation Gas instrumentation failed. This most likely indicates the module isn't valid
StackHeightInstrumentation Stack instrumentation failed. This most likely indicates the module isn't valid
Instantiate Error happened during instantiation. This might indicate that `start` function trapped, or module isn't instantiable and/or unlinkable
Memory Error creating memory
```
:::tip Got a question?
<a href="https://stackoverflow.com/questions/tagged/nearprotocol"> Ask it on StackOverflow! </a>
:::
# Account Receipt Storage
This page defines all of the keys and values stored in account storage.
## Received data
*key = `receiver_id: String`,`data_id: CryptoHash`*
*value = `Option<Vec<u8>>`*
Runtime saves incoming data from a [DataReceipt](Receipts.md#data) until every entry in [input_data_ids](Receipts.md#input_data_ids) of the [postponed](#Postponed-receipts) [ActionReceipt](Receipts.md#ActionReceipt) for the `receiver_id` account is satisfied.
## Postponed receipts
For each awaited `DataReceipt` we store
*key = `receiver_id`,`data_id`*
*value = `receipt_id`*
Runtime saves incoming [ActionReceipt](Receipts.md#ActionReceipt) until all ``
NEARCON ‘23 Day Three: Shaping the Future of Decentralized Governance
NEAR FOUNDATION
November 10, 2023
NEARCON ’23’s grand finale is here, packed with groundbreaking sessions you won’t want to miss. It’s been a great ride so far, with engaging sessions, a mind-blowing hackathon, and the chance to network in person with open web experts from all over the globe. Today’s agenda has some of the most impactful talks yet, so get ready to send NEARCON ‘23 out with a bang!
Start with a coffee at 9:00 AM and enjoy breakfast at the venue with other attendees before diving into our final day of immersive sessions. By 10:45 AM, the curtain will lift on the main stage for NEARCON ‘23’s final foray into Web3 — including a special, surprise announcement from NEAR CEO Illia Polosukhin.
Remember, you can stay connected throughout the day on Calimero Chat — a private, decentralized comms platform built on the NEAR blockchain.
*Note: There is no bag check or cloak room at any NEARCON venue.*
Can’t-miss sessions on Day Three
What 2024 Looks Like for Decentralized Democracy (1:00 PM – 1:30 PM, Layer 1 Stage).
Join us for a deep dive into the future of governance on NEAR, where the visionaries behind NEAR’s political framework dissect past triumphs and unfold the master plan for 2024. Hear from our panelists, including Joe Spano of ShardDog and Blaze from Cheddar.
Special Announcement (12:00 PM – 12:30 PM, Layer 1 Stage). Don’t miss NEAR Co-founder and NEAR Foundation CEO Illia Polosukhin’s major announcement. When Illia speaks, the entire Web3 world listens, so grab a quick bite and head on over to the main stage for what’s sure to be Illia’s mic-drop moment of NEARCON ‘23.
NEAR Governance Forum (11:05 AM – 12:00 PM, Layer 2 Stage). Curious about NEAR’s decentralized governance model? Get your questions answered by Alejandro Betancourt of the Transparency Commission and Cameron Dennis of Banyan Collective, with the conversation facilitated by Gus Friis of NEARWEEK.
NEAR Campus: Unleashing the Potential of University Partnerships for DevHub (11:00 AM – 11:20 AM). NEAR DevHub’s Boris Polania will showcase the NEAR Campus’s role in revolutionizing developer engagement. Explore the relationship between NEAR and academia, and how NEAR Campus will serve as a touchpoint for global campuses in the future.
Sensational speaker highlights
Unveiling the Web3 BUILD Incubator Fellowship (11:20 AM – 11:30 AM, Block Zero Stage)
Max Mizzi from Major League Hacking will introduce the BUILD Incubator Fellowship, an innovative collaboration with NEAR Horizon. Discover the inception, operation, and success stories of the program’s trailblazing summer cohort.
It’s Honest Work: How DeFi Is Growing Up (11:30 AM – 12:00 PM, Layer 1 Stage).
From ‘DeFi Summer’ to today, decentralized finance remains one of the most promising areas of crypto — and it’s inevitably maturing. Drop by as an incredible panel, including Arjun Arora from Orderly Network and Mike Cahill of Douro Labs, charts recent developments that are reshaping DeFi’s landscape on NEAR and beyond.
Aurora Unleashed: Igniting NEAR’s Potential (11:15 AM – 11:30 AM, Layer 1 Stage).
Alex Shevchenko of Aurora Labs takes the spotlight to discuss Aurora’s impactful innovations within the NEAR ecosystem. Expect insights into Aurora’s network developments, Borealis infrastructure, and their cross-contract call SDK, all of which amplify NEAR’s technological prowess.
IRL Hackathon grand finale and LX Factory party
Judgment day is finally here, as winners of the IRL Hackathon will claim their awards on the Main Stage from 2:35 PM – 3:50 PM. Huge thanks to all who participated and submitted projects. Every hacker, coder, builder, and developer is what makes the NEAR ecosystem special and the NEAR Foundation looks forward to engaging each and every one in the future.
Finally, NEARCON ‘23 is going out with a major bang at one of the most iconic Lisbon party venues — LX Factory. An epic bash brought to you by The Littles, it’s a going away soiree that every NEARCON ‘23 attendee simply can’t miss. Grab a drink, do some networking, and groove your night away as NEARCON ‘23 sails off into the Portuguese sunset.
A few important reminders
Here are today’s venue hours:
Armazem (Hacker HQ) closes at 12:30 PM.
Garagem (Community HQ) closes at 5:30 PM, immediately after the happy hour.
Convento Do Beato Layer 2 Stage closes at 12:00 PM. The Layer 1 Stage and venue closes at 4:00 PM.
Closing party at LX Factory (Fábrica) from 9:00 PM – 4:00 AM. Open to badge holders.
Shoutout to Day Three sponsors
Of course, this epic Day Three wouldn’t be possible without the support of today’s generous and amazing sponsors:
The Littles
SailGP
Day Three will be all about forging connections, gaining insights, and celebrating all that you and the collective NEAR ecosystem have accomplished not only over the past three days – but the entirety of 2023. So grab a coffee, head on over to the venue, and let’s make the final day count towards making NEARCON ‘23 the most iconic edition ever.
---
title: IT Guide
sidebar_label: IT Guide
sidebar_position: 5
---
_The purpose of this handbook is to provide guidelines regarding IT security._
Many of us are not IT experts, so it is important to educate ourselves on cybersecurity best practices and to practice diligence with systems, tools, and websites in order to avoid hacks and viruses.
# Recommendations
## Onboarding:
* Always use strong passwords for all Company Accounts
* Use 1Password, protected by a **_very_** strong password, to store secure passwords. Additionally, set up an authenticator app or Yubikey for multi-factor authentication.
* Create a separate account: if you or your employees use the same laptop for personal use as for work, we suggest you create a new user account on your computer and log into one account for work and the other for personal use. Log out of your personal Google account or make sure you use separate Chrome profiles
* Ensure your laptop is encrypted at rest using FileVault (OSX) or Bitlocker (Windows). Don’t assume it came configured correctly. Backup the recovery keys offline or in a password vault. Losing an unencrypted laptop is potentially dangerous to you, your company, and can be incredibly frightening
* Always lock your laptop if you are away from it or not in immediate possession of it when in public places.
## External applications
* Only download secure, well-known applications to your phone and laptop. Verify that you are downloading the official application from the correct source.
* It’s ok to utilize “sign in with google” for SaaS platforms, except when they require permissions beyond “get your basic profile.” Restrict “sign in with Google” to only SaaS apps officially in use by your company. Otherwise, if excessive permissions are requested, use strong passwords with MFA where available.
* Applications such as Telegram are safe to use, but be aware of:
* Fake “official” pages / groups
* Attachments / gifs - do not download from chats. Disable automatic downloads.
* Rampant fraud
* Confidence scams
* Restrict messaging regarding sensitive company information and transfer requests, etc. to your company’s Slack account or similar.
* Be extremely cautious about installing extensions to your browser. Extensions like Honey are recording essentially everything you do in your browser. Do you trust a company with that data? Other extensions may be poorly developed and thus easily exploited, contain malware designed to steal data, or could be purchased from a developer by malicious actors and become part of a supply-chain hack. In simple terms, don’t install extensions unless you absolutely know what you’re doing. Don’t trust app stores (for your OS or browser) to be able to ensure that the apps they host are safe to use.
## Crypto security
* Avoid using a named address that contains information that may identify you (no firstnamelastname.near, mascotgradyear.near, etc.) or stick with implicit accounts.
* Feel free to “squat” on crypto addresses with identifiable information to prevent ownership by others, but avoid using them to hold significant token assets or to conduct transactions.
* If you have whatever you consider to be significant assets in a wallet, we strongly advise and urge you to use a Ledger or other form of crypto hardware wallet (cold storage). Only leave a small amount in “hot wallets” such as those where you are able to “recover” within the browser directly through an emailed link or by entering a passphrase.
* Note that staking tokens has the side effect of providing additional security due to the 2-3 days required to unstake tokens before they become available for transfer.
## Additional Security Tips
**1. Get a Hardware Wallet for your Crypto Assets**
Using a software interface to a wallet that requires a password or seed phrase is risky because it requires you to enter information that could be compromised. A hardware wallet, most commonly a Ledger, is the most secure way to access accounts, transfer funds, and protect your private key (the thing that allows you to do anything with your account). A Ledger is a physical device that stores a user's private key inside a dedicated piece of hardware that never transmits your key; instead, it signs transactions with that key. You can lose or destroy a Ledger without worry: as long as you still have the seed phrase you used to set it up, you also have your private key. That seed phrase can be kept in multiple places: buried, in a safety deposit box, even encrypted in a file on a CD or DVD. Most importantly, neither your private key nor your seed phrase should be accessible anywhere online or on a computer when any significant sums are involved. “Significant” is an amount you decide on.
**2.** **Keep Your Private Keys or Recovery Phrases Secure**
Do not store your private keys and recovery phrases online or share them with anyone. In fact, don’t even store them on your computer. A standard feature of contemporary malware is to search for keys, credentials, and recovery phrases on the devices they infect. Using a Ledger prevents you from having to store a private key anywhere, as it is on the device and never traverses your device or network when used to approve transactions.
Keep your pass phrases and your Ledger or other cold storage devices safe and secure. Consider utilizing a safety deposit box or other method for preventing physical access to a device and or recovery phrases so that they can’t be stolen and you cannot be compelled to disclose or unlock them.
Create an additional wallet or purchase an additional Ledger for “day to day” use, holding a value of assets you can afford to lose. Fund this from time to time with your more secure wallet(s). It’s worth noting that if you have an account which was not created with a Ledger, you can always update the account to use a Ledger.
Note, while a Ledger is a powerful tool for securing a wallet, it is possible that it can be hacked or that you can be forced to disclose its pin. Ensure that your Ledger associated with your most secure wallet is kept somewhere secure at all times.
Note also that the passphrase associated with a Ledger can be used to recover your private keys on another Ledger–this is by design, so that a failure or the destruction of the device doesn’t cause you to lose access to your accounts. As a ramification of this, it is absolutely critical that you do not lose the passphrase you used to set up a Ledger or other cold storage device.
Finally, other methods for securing a wallet are available. These include “multi-sig” and other types of key assignment, but require a more thorough understanding of the risks and operational requirements associated with them.
**3. Use Strong Passwords**
Always use strong passwords when creating accounts. Never reuse passwords across accounts. Do not use your browser’s password storage feature. Humans are terrible at remembering and creating strong passwords, so utilize a simpler solution…
**4.** **Use a Password Manager**
Use a trusted and reliable password manager to create, store, and fill all your passwords. This achieves a number of objectives and simplifies security. 1Password and LastPass are currently the gold standard. When setting up 1Password, use a very strong passphrase that you can remember. If you aren’t good at remembering a typical complex password, you can instead utilize a long passphrase you will not forget, such as (but not this):
Foolish catapults swim solemnly in retrograde fashion
or
Correct Horse Battery Staple
This seems simple, but is very secure. Do not, however, utilize a common phrase or something taken from a book. For example, don’t use this:
I came I saw I conquered
See [Password Strength](https://xkcd.com/936/) for a fun explanation.
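For intuition on why the random-passphrase approach works: a passphrase of N words drawn uniformly at random from a wordlist of size W carries about N × log2(W) bits of entropy. A quick sketch (the 7,776-word figure is the standard Diceware wordlist size, used here as an example):

```javascript
// Rough entropy estimate for a randomly generated passphrase.
// Assumes each word is chosen uniformly at random -- this does NOT
// hold for phrases you invent yourself or take from a book.
function passphraseEntropyBits(wordCount, wordlistSize) {
  return wordCount * Math.log2(wordlistSize);
}

// Four random words from a 7,776-word list: ~51.7 bits.
// Six words pushes that to ~77.5 bits.
const bits = passphraseEntropyBits(4, 7776);
```

The math also shows why a famous quote is weak: attackers try known phrases first, so its effective entropy is far below the uniform-random estimate.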
Use MFA with your password manager. In order of security, you can use a Yubikey Universal Second Factor (U2F) hardware device (and a second as backup in case you lose the primary one), a push notification through the DUO app, or a time-based or HMAC-based one time password application on your phone such as the Google Authenticator or DUO Mobile app. Why is this important? Because if your computer is hacked, a malicious actor could capture the passphrase you use to access your password manager and remotely exfiltrate all your stored credentials.
As a final note, ensure you record and secure your secret key. 1Password creates an “emergency kit” when you first create an account. Do not lose this. Physically secure this in a safe place where it cannot be destroyed or discovered. Lastpass provides a similar mechanism.
**5.** **Use Multi-Factor Authentication BUT NOT SMS Two-Factor Authentication**
Significantly increase your account security with a MFA. We’ve already mentioned the importance of doing this for your password vault, but you should enable multi-factor authentication in every account with which it is available. With this form of increased security in place, it becomes incredibly hard for hackers to use stolen credentials to access your accounts–in most cases this is even true if the device you are logging in with is already compromised.
As mentioned above, MFA can be enabled through push notifications to your phone, one-time password generators (also on your phone), a Yubikey or similar device.
SMS and email notifications are also forms of MFA. However, they should not be used. They are simply not secure. If you use software that only allows for SMS 2FA, you should consider not subscribing. Sadly, many banks are still using SMS 2FA for security.
It should also be noted that the “additional factor” should not be used on the same device. Generally the second factor will be on your mobile device (which should be protected with a PIN code and also encrypted at rest–standard for most new devices).
One additional method for ensuring account security is to use “Login with Google.” This isn’t technically MFA, but a set of tools that operate behind the scenes to determine that you are who you say you are, that you are using your device, and that you are supposed to have access to whatever you are attempting to log into. This is robust, secure, and extremely resistant to hacking. When it is available, and when apps aren’t asking for excessive data permissions, feel free to use this with your company email account for company-related software.
**6. Cold Storage & Safekeeping**
Are you holding large amounts of crypto for the long term? If so, it's time to move any funds off of hot wallets and into cold storage. With your private keys stored safely offline, you’ll rest easy with a hardware wallet. Move any crypto that you are not actively trading out of exchange-based hot wallets and into software, desktop, or mobile wallets; these types of (hot) wallets are still vulnerable, but they are a more secure option than an exchange. Exchange hacking is a reality. Better to be safe than sorry.
Natural disasters cannot be prevented, but you can protect your money from disasters. Print all 2FA backup codes, password manager emergency kits and hardware wallet recovery sentences. Then put these items (along with paper wallets and other crypto planning documents) in a safe and fireproof location. You can keep your recovery seed safe from physical damage by engraving it on a steel plate. Engraving tools are fairly inexpensive and are especially worth the investment if you have more than one recovery seed to protect. Of course, as stated above, Ledgers are your best bet.
**7. Write Your Recovery Sentence Down & Verify**
Make sure all words are spelled correctly and are in the correct order. Then triple-check and test it with a small number of funds.
**8. Never Input Your Recovery Phrase Online**
This includes:
* On your phone
* On your computer
* Taking a photo or screenshot
* In an email
* Cloud services, including Google Drive/Dropbox/Evernote
When it comes to recovery phrases, simply consider that the device you are using is compromised, and someone can read what you are typing or data you have stored on that device.
**9.** **Choose a Unique PIN**
We recommend a pin that’s 6–9 digits. This should be obvious, but please don’t use important dates that are easy to guess — such as your birthday, or “1–2–3–4.”
**10.** **Be Watchful of Potential Phishing Sites and Emails**
A very common scam technique used by hackers is to create a fake, identical version of the exchange or web wallet page their victims use and email them the link. The scammers will usually include convincing messages persuading victims to log in and take immediate action. Many people then visit these sites and enter their credentials, which the hackers capture and use as they please.
To avoid phishing, always check that the link displayed in your browser perfectly matches the link in your exchange or web wallet. Also ensure you have no certificate warnings and that you are securely connected with HTTPS:// and not HTTP://
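The “perfectly matches” check is about the actual hostname, not how the link looks. A hedged sketch using Node's built-in `URL` parser, with example domains (any real wallet domain would be substituted in practice):

```javascript
// Compare the parsed hostname of a link against the expected domain.
// Domains here are examples only. Lookalike (homoglyph) characters can
// still fool a visual check, which is why parsing beats eyeballing.
function isExpectedHost(link, expectedHost) {
  try {
    return new URL(link).hostname === expectedHost;
  } catch {
    return false; // not a parseable URL at all
  }
}
```

Note how `https://wallet.example.com.evil.io/login` fails the check even though it visually begins with the expected domain, which is a common phishing trick.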
**11. Keep Your Device Safe**
Make sure you have the latest version of your antivirus installed and your firewall enabled. Do not install any software whose safety you are unsure of. As good practice, never download suspicious attachments, and research the reputation of any software you're trying to install. Ensure your system is automatically downloading security updates.
**12. Keep your Holdings Private**
It is crucial never to tell anyone how much virtual currency you have. This is especially true when talking to strangers at blockchain tech conferences, cryptocurrency gatherings, and social media.
Also, don’t “signal” that you have significant holdings or wealth. It’s a good way to attract the wrong kind of attention.
The same guidance applies to your family and friends. Request that they also keep quiet about your holdings.
**13.** **Avoid Public WiFi**
It’s very difficult to determine whether or not an available wifi access point is malicious. Just assume that they are. While TLS generally prevents snooping on your web sessions, you may not notice a Man in the Middle attack that is able to read everything you transmit. Many hackers will convince you to install a certificate into your device’s trust store. If you do this, it’s quite possible that you won’t notice that your device has been compromised.
If you need to connect to an unfamiliar access point:
* Never install anything the access point may request you to install
* Never open any document that the access point may attempt to download to your computer
* Never use credentials from other accounts (i.e. gmail) to login to an access point
* Don’t be fooled into thinking an access point is secure because it requires a password
* Use a VPN. VPNs provide an additional layer of defense from man in the middle attacks and other forms of snooping
* If your VPN is blocked, don’t use the access point. It’s suspicious at best, and likely malicious.
* Get a mobile hot spot or use your cell phone as a hot spot (make sure you set a strong password, and don’t tape the password to the hotspot–a common practice)
**14.** **Educate your Friends and Family about Crypto**
Knowledge is power! The more everyone knows about crypto security, the faster we will move toward mass adoption.
**15. Messaging Software Safety**
* Safety on Telegram (_this applies to similar features on other platforms)_:
* A reminder that messaging platforms that can share files enable threat actors to easily target entire groups with malware.
* Recently, there were reports of a RAT (Remote Access Trojan) being distributed by "Smokes Night" on Telegram crypto trading forums. See the linked report for details. To be clear, this sort of thing is happening on a continuous basis, and is an example of a very common type of malware.
* [Threat Report: Echelon Malware Detected in Mobile Chat Forums](https://www.safeguardcyber.com/blog/echelon-malware-crypto-wallet-stealer-malware)
* TL;DR for the report: If you have Telegram auto-download enabled (which you probably do--it's the default config), there is a high probability that malware can be dropped and run on your computer and steal your <insert anything>.
* If your company relies on Telegram make sure your employees configure the following:
* Disable "Automatic Media Download" for chats, groups, and channels under "Advanced" settings on desktop OSes or "Data and Storage" on Android/iPhone.
* Enable "Ask download path for each file" which is also under "Advanced settings" on desktops (does not appear on mobile devices).
* Do this for all of your devices running telegram.
* You should know that when auto-download is enabled, clicking on a message with an attached file will open that file. You may or may not get a warning prompt from your OS.
* With auto-download disabled, you first have to click the down arrow before clicking the message for a file to execute. This should help prevent accidental execution of files.
* With "ask download path for each file" enabled, you get an extra "UX" warning that makes it clear you are downloading a file, which also prevents accidental file execution. It also appears to prevent automatic downloading across the board--but you should still disable Automatic Media Download regardless.
* If you are installing a fresh copy of Telegram or any messaging platform for that matter, make sure to disable risky configuration defaults before use.
* As an added layer of protection, consider disabling Auto-download of images as well as Auto-play of media and Streaming. Every once in a while someone finds a way to exploit a media decoder. This will make Telegram (and other apps) significantly less fun, but more secure.
* Finally, when installing any software--especially software designed for sharing, messaging, etc.--make it a point to walk through the configuration options to ensure that you understand the operating parameters of the default settings--and adjust them where it makes sense.
**16. See Something, Say Something**
It’s a good idea to create a #Security slack channel and/or email where security concerns can be fielded and handled by your IT staff.
NEARCON ’23 Unwrapped: Lisbon Steps into the Open Web in Iconic Fashion
NEAR FOUNDATION
November 16, 2023
Over the course of three epic days, the NEAR ecosystem showed up and showed out to make NEARCON ‘23 the most iconic edition yet. Groundbreaking partnerships were announced, new bonds forged, and a new vision for AI and the open web was unveiled by some of the brightest minds in the blockchain business.
“The open web is an internet where all people control their own assets, data, and power of governance,” remarked NEAR co-founder and newly minted CEO of NEAR Foundation, Illia Polosukhin, on Day One of NEARCON ‘23. “And that vision extends beyond NEAR to the entire space, with the B.O.S. acting as the single entry point to an open web.”
Illia’s remarks truly encapsulated the spirit of NEARCON ‘23, as innovative collaborations with the likes of Polygon and EigenLayer highlighted a cross-chain, multi-ecosystem future with NEAR enabling multiple key facets of tomorrow’s truly open web.
So ICYMI, here’s all the key happenings, hubbub, and highpoints of an iconic NEARCON ‘23.
Partnerships, IRL Hackathon, and NCON make a splash
Things started off with a bang before the doors of the Convento Do Beato in Lisbon even opened, with the debut of the NCON token. The first ever native NEARCON token, attendees could download the NEARCON app and collect NCON tokens by completing tasks, bounties, and viewing sessions throughout the event.
NEARCON-goers could then redeem NCON tokens for swag or a bite from the various food trucks, or even send NCON to other attendees via a slick native wallet on the NEARCON app. All told, over 110,000 NCON tokens were distributed during the event, making it a smash hit amongst attendees.
This year’s NEARCON also took place across three cool and contrasting venues: NEARCON HQ, Hacker HQ, and Community HQ, each providing a unique experience for those of varying interests. Hacker HQ was popping in particular thanks to the three-day IRL Hackathon, held in a stunning seaview venue; the winners will be announced next week, so stay tuned.
Once everyone settled in for Day One, the ecosystem was treated to a few major game-changing partnership announcements. Here’s a roundup of the major collabs that are already making an impact for NEAR builders, developers, and the community at large:
NEAR releases the Data Availability Layer for ETH Rollups and Ethereum developers
By far the biggest news of Day One was NEAR unveiling the NEAR Data Availability Layer (NEAR DA), offering robust, low-cost blockchain data to developers for modular open web development. NEAR DA will help developers reduce costs and enhance the reliability of rollups, while keeping the security of Ethereum. NEAR DA is a key part of tomorrow’s open web development stack, with early users including StarkNet, Caldera, Movement Labs, and others.
NEAR x EigenLayer streamline Ethereum
NEAR Foundation announced its partnership with EigenLayer, which will speed up transaction times and slash costs on the Ethereum blockchain. The collaboration will introduce a fast finality layer to Ethereum rollups, boosting their efficiency and scalability. The fast finality layer showcases the strengths of NEAR’s technology while making the open web more usable.
LayerZero brings interoperability to NEAR
In other big news, LayerZero integrated with NEAR, bringing over 40 blockchain networks into the open web. LayerZero brings trustless cross-chain messaging into the fold, making NEAR a more versatile ecosystem. The announcement highlighted NEAR’s commitment to seamless communication across various blockchain networks.
Day One highlights: bringing the open web vision to life
While there were a ton of amazing sessions and panels on the first day — from open web gaming and sports to crypto economics — one of the major themes was how Illia would realize his vision for a truly open internet in his new role as CEO of the NEAR Foundation. With a renewed focus on developers in the ecosystem, it only makes sense for him to steer the ship.
Piggybacking on Illia’s “NEAR: The Open Web Vision” keynote, new NEAR COO Chris Donovan made a case for the open web, while also sitting down to discuss Web3 regulatory issues with Coindesk’s Michael Casey. And in other developer tooling news, NEAR Data Availability (DA) was announced, giving developers low-cost access to modular, scalable blockchain data.
Day Two highlights: the AI is NEAR track takes flight
The second day of NEARCON ‘23 was perhaps the boldest to date, with an epic “AI is NEAR” speaker and programming track that was simply mind blowing. Things kicked off with sessions featuring thought leaders from the likes of Pantera Capital and NEAR Tasks, and heated up even more with NEAR co-founder Alex Skidanov’s talk on generative AI and the open web.
Illia returned in the afternoon to discuss “AI on NEAR: The Future of Work and Governance,” painting a picture of how AI will impact the future of governance, work, asset ownership, and beyond. He was then joined by Mike Butcher of TechCrunch, with the two unpacking the intersection of AI, blockchain, and global policy.
“AI has the potential to be a huge driver for productivity,” Illia explained. “AI agents, for example, are redefining the future of work. They have the unique ability to manage transactions and resources on blockchain platforms, communicate with secure cryptographic verification, and act as autonomous entities on people’s behalf — and directly for their benefit.”
The AI is NEAR track was capped off when Illia then joined Michael Casey for a Coindesk podcast, where the two dove as deep as one can imagine down the AI rabbit hole. In particular, Illia shared his experience, views, and predictions regarding AI, blockchain, and the open web – simply a must listen. (Stay tuned for the release of this Coindesk podcast with Illia and Michael.)
Day Three highlights: governance and hackathon on stage
Governance and current happenings with the NEAR Digital Collective (NDC) were front and center in a spirited Day Three. One of the most unique panels was the NEAR Governance Forum, featuring AVB from the Transparency Committee, Cameron Dennis from Banyan, and Blaze of Cheddar.
All three gave their takes on what’s gone right — and wrong — with the NDC since its inception at last year’s NEARCON. And in a “Round Robin” format, the three fielded a variety of questions from the audience, some of whom were NDC members themselves. Blaze and AVB then joined NEAR’s Yadira Blocker in an afternoon session discussing Decentralized Democracy in 2024.
The final session of NEARCON ‘23 featured presentations from the IRL Hackathon, where builders unveiled their work and ideas. Judges and panelists included Oleg Fomenko, CEO of Sweat Economy, with talented NEAR builders putting their best foot forward. They showcased some truly amazing ideas and tech, so you’ll want to keep your eyes peeled for the winners.
And what’s Day Three of any NEARCON without a party? This year’s bash was held at the LX Factory, an enormous space where all attendees got their final chance to network, share ideas, and have a blast. The Littles sponsored the party, bringing an added element of excitement by setting up carnival games at the venue along with The Fun Pass to reward anyone who played.
NEARCON ‘23: as iconic as it gets
If you joined us in Lisbon, a huge thanks for making this year’s edition so fun, spectacular, and rewarding. If you weren’t able to attend, we sincerely hope to see you next year. From transformative partnerships to leading the Web3 pack in AI, NEARCON ‘23 showcased the passion, momentum, and ingenuity of the entire NEAR ecosystem.
If you weren’t able to make it to Lisbon for NEARCON, don’t worry — you can still check out all of the talks. Visit NEAR Protocol’s YouTube page for the full livestreams from the Layer 1 and Layer 2 stages, or click the links below.
NEARCON Day One Livestream – Layer 1 Stage, Layer 2 Stage
NEARCON Day Two Livestream – Layer 1 Stage, Layer 2 Stage
NEARCON Day Three Livestream – Layer 1 Stage, Layer 2 Stage
Anyone who built furiously at the Hacker HQ or clinked a cocktail at the Glass Movers & Shakers Happy Hour will likely tell you something similar: that NEARCON ‘23 showed the importance of a truly open web — and how together we can all achieve it.
---
id: fungible-tokens
title: Fungible tokens
sidebar_label: Fungible Tokens
---
## Introduction {#introduction}
Please see the [spec for the fungible token standard](https://nomicon.io/Standards/FungibleToken/) and an [example implementation](https://github.com/near-examples/FT) for reference details.
One notable aspect of the standard is that method names are prefixed with `ft_`. This will be a helpful convention when querying for transactions related to fungible tokens.
## Get balance {#get-balance}
Using the abstraction of the [NEAR CLI](/tools/near-cli) tool, we can check the balance of a user's account with [`near view`](/tools/near-cli#near-view):
`near view ft.demo.testnet ft_balance_of '{"account_id": "mike.testnet"}'`
Returns:
```
View call: ft.demo.testnet.ft_balance_of({"account_id": "mike.testnet"})
'1000000'
```
Alternatively, you can [call a contract function](/api/rpc/setup#call-a-contract-function) using the `query` RPC endpoint. Below is an example using HTTPie:
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=ftbalance method=query \
params:='{
"request_type": "call_function",
"finality": "final",
"account_id": "ft.demo.testnet",
"method_name": "ft_balance_of",
"args_base64": "eyJhY2NvdW50X2lkIjogIm1pa2UudGVzdG5ldCJ9"
}'
```
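The `args_base64` field is simply the JSON arguments encoded as Base64. A minimal sketch of producing it in Node.js (note that Base64 is byte-exact, so whitespace inside the JSON string changes the encoding):

```javascript
// Encode JSON arguments for the `args_base64` field of a `call_function` query.
// Base64 is whitespace-sensitive: '{"a": 1}' and '{"a":1}' encode differently.
const args = '{"account_id": "mike.testnet"}';
const argsBase64 = Buffer.from(args).toString("base64");
console.log(argsBase64); // eyJhY2NvdW50X2lkIjogIm1pa2UudGVzdG5ldCJ9
```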
Returns:
```bash
HTTP/1.1 200 OK
Alt-Svc: clear
Via: 1.1 google
access-control-allow-origin:
content-length: 176
content-type: application/json
date: Thu, 27 May 2021 12:53:38 GMT
{
"id": "dontcare",
"jsonrpc": "2.0",
"result": {
"block_hash": "3mvNHpZAsXiJ6SuHU1mbLVB4iXCfh5i5d41pnkaSoaJ5",
"block_height": 49282350,
"logs": [],
"result": [ 34, 49, 48, 48, 48, 48, 48, 48, 34 ]
}
}
```
As mentioned earlier, the `result` is an array of bytes. There are various ways to convert bytes into a more human-readable form such as the [dtool CLI](https://github.com/guoxbin/dtool#installation).
`dtool a2h '[34,49,48,48,48,48,48,48,34]' | dtool h2s`
Returns:
`"1000000"`
**Note:** The fungible token balance of the account `mike.testnet` is `1000000`, wrapped in double quotes. Amounts given in arguments and results must be serialized as Base-10 strings, e.g. `"100"`. This avoids JSON's maximum safe integer limit of 2**53 − 1, which fungible token amounts can easily exceed.
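The same conversion can be done without extra tooling. A sketch in Node.js that decodes the `result` byte array from the RPC response:

```javascript
// The RPC `result` field is an array of UTF-8 bytes holding a JSON value.
const resultBytes = [34, 49, 48, 48, 48, 48, 48, 48, 34];
const resultJson = Buffer.from(resultBytes).toString("utf8"); // '"1000000"'
const balance = JSON.parse(resultJson); // unwraps the quotes
console.log(balance); // 1000000
```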
## Get info about the FT {#get-info-about-the-ft}
You can get `name`, `decimals`, `icon`, and other parameters by calling the following function:
- using NEAR CLI:
```bash
near view <contract_account_id> ft_metadata
```
Result:
```bash
View call: ft.demo.testnet.ft_metadata()
{
spec: 'ft-1.0.0',
name: 'Example Token Name',
symbol: 'MOCHI',
icon: null,
reference: null,
reference_hash: null,
decimals: 24
}
```
- with JSON RPC call:
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=ftmetadata method=query \
params:='{
"request_type": "call_function",
"finality": "final",
"account_id": "<contract_account_id>",
"method_name": "ft_metadata",
"args_base64": ""
}'
```
Example response:
```bash
HTTP/1.1 200 OK
Alt-Svc: clear
Via: 1.1 google
access-control-allow-origin:
content-length: 604
content-type: application/json
date: Wed, 02 Jun 2021 15:51:17 GMT
{
"id": "ftmetadata",
"jsonrpc": "2.0",
"result": {
"block_hash": "B3fu3v4dmn19B6oqjHUXN3k5NhdP9EW5kkjyuFUDpa1r",
"block_height": 50061565,
"logs": [],
"result": [ 123, 34, 115, 112, 101, 99, 34, 58, 34, 102, 116, 45, 49, 46, 48, 46, 48, 34, 44, 34, 110, 97, 109, 101, 34, 58, 34, 69, 120, 97, 109, 112, 108, 101, 32, 84, 111, 107, 101, 110, 32, 78, 97, 109, 101, 34, 44, 34, 115, 121, 109, 98, 111, 108, 34, 58, 34, 77, 79, 67, 72, 73, 34, 44, 34, 105, 99, 111, 110, 34, 58, 110, 117, 108, 108, 44, 34, 114, 101, 102, 101, 114, 101, 110, 99, 101, 34, 58, 110, 117, 108, 108, 44, 34, 114, 101, 102, 101, 114, 101, 110, 99, 101, 95, 104, 97, 115, 104, 34, 58, 110, 117, 108, 108, 44, 34, 100, 101, 99, 105, 109, 97, 108, 115, 34, 58, 50, 52, 125 ]
}
}
```
Decoded result in this case is:
```json
{
"spec": "ft-1.0.0",
"name": "Example Token Name",
"symbol": "MOCHI",
"icon": null,
"reference": null,
"reference_hash": null,
"decimals": 24
}
```
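With `decimals` from the metadata you can render raw balances in human-readable units. A sketch of the conversion using `BigInt` (the function name here is illustrative, not part of any standard):

```javascript
// Convert a raw token amount (a Base-10 string) into a decimal string,
// using the `decimals` value from `ft_metadata`.
function formatTokenAmount(raw, decimals) {
  const base = 10n ** BigInt(decimals);
  const value = BigInt(raw);
  const whole = value / base;
  // Pad the remainder to full width, then strip trailing zeros.
  const frac = (value % base).toString().padStart(decimals, "0").replace(/0+$/, "");
  return frac ? `${whole}.${frac}` : whole.toString();
}

console.log(formatTokenAmount("2500000000000000000000000", 24)); // "2.5"
```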
## Simple transfer {#simple-transfer}
To follow this guide, please check the [step by step instructions](/integrations/create-transactions#low-level----create-a-transaction) on how to create a transaction first.
In order to send a fungible token to an account, the receiver must have a storage deposit. This is because each smart contract on NEAR must account for the storage it uses, and each account on a fungible token contract is a key-value pair that takes up a small amount of storage. For more information, please see [how storage works in NEAR](/concepts/storage/storage-staking). To check whether an account has deposited storage for this FT, do the following:
Get the storage balance of the account. The `storage_balance_of` function returns the amount of deposited storage, or `null` if there is no deposit.
- using NEAR CLI:
```bash
near view <contract_account_id> storage_balance_of '{"account_id": "<user_account_id>"}'
```
Result:
```bash
View call: ft.demo.testnet.storage_balance_of({"account_id": "serhii.testnet"})
null
```
- with JSON RPC call:
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=storagebalanceof method=query \
params:='{
"request_type": "call_function",
"finality": "final",
"account_id": "ft.demo.testnet",
"method_name": "storage_balance_of",
"args_base64": "eyJhY2NvdW50X2lkIjogInNlcmhpaS50ZXN0bmV0In0K"
}'
```
Example response:
```bash
HTTP/1.1 200 OK
Alt-Svc: clear
Via: 1.1 google
access-control-allow-origin:
content-length: 173
content-type: application/json
date: Wed, 02 Jun 2021 14:22:01 GMT
{
"id": "storagebalanceof",
"jsonrpc": "2.0",
"result": {
"block_hash": "EkM2j4yxRVoQ1TCqF2KUb7J4w5G1VsWtMLiycq6k3f53",
"block_height": 50054247,
"logs": [],
"result": [ 110, 117, 108, 108 ]
}
}
```
Decoded result in this case is `null`.
Get the minimum storage required for FT. (The storage used for an account's key-value pair.)
- using NEAR CLI:
```bash
near view <contract_account_id> storage_balance_bounds
```
Result:
```bash
View call: ft.demo.testnet.storage_balance_bounds()
{ min: '1250000000000000000000', max: '1250000000000000000000' }
```
- with JSON RPC call
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=storagebalancebounds method=query \
params:='{
"request_type": "call_function",
"finality": "final",
"account_id": "<contract_account_id>",
"method_name": "storage_balance_bounds",
"args_base64": ""
}'
```
Example response:
```bash
HTTP/1.1 200 OK
Alt-Svc: clear
Via: 1.1 google
access-control-allow-origin:
content-length: 357
content-type: application/json
date: Wed, 02 Jun 2021 15:42:49 GMT
{
"id": "storagebalancebounds",
"jsonrpc": "2.0",
"result": {
"block_hash": "Fy3mBqwj5nvUDha3X7G61kmUeituHASEX12oCASrChEE",
"block_height": 50060878,
"logs": [],
"result": [ 123, 34, 109, 105, 110, 34, 58, 34, 49, 50, 53, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 34, 44, 34, 109, 97, 120, 34, 58, 34, 49, 50, 53, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 34, 125 ]
}
}
```
Decoded result should look similar to:
```json
{
"min": "1250000000000000000000",
"max": "1250000000000000000000"
}
```
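Combining the two queries, the registration check can be sketched as a small helper (illustrative, not a standard API): if `storage_balance_of` returned `null`, the sender should attach at least `storage_balance_bounds.min` to `storage_deposit`.

```javascript
// Decide how much to attach to `storage_deposit`, given the results of
// `storage_balance_of` (null, or { total, available }) and `storage_balance_bounds`.
function requiredStorageDeposit(balance, bounds) {
  if (balance === null) return BigInt(bounds.min); // account not registered yet
  return 0n; // already registered, nothing to attach
}

const bounds = { min: "1250000000000000000000", max: "1250000000000000000000" };
console.log(requiredStorageDeposit(null, bounds)); // 1250000000000000000000n
```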
Basic fungible tokens are simple smart contracts that don't have variable storage, as compared to a smart contract that might store free-form text, for instance. The only storage needed is for an account's key-value pair, which will always be covered by the `1250000000000000000000` yoctoⓃ storage balance.
If there is not enough deposit for the storage, or the returned value is `null`, deposit more storage with the following command:
- using NEAR CLI, don't forget to convert from yoctoⓃ to Ⓝ:
```bash
near call <contract_account_id> storage_deposit '{"account_id": "<user_account_id>"}' --accountId <sender_account_id> --deposit <deposit in Ⓝ>
```
Result example:
```bash
Scheduling a call: ft.demo.testnet.storage_deposit() with attached 0.125 NEAR
Transaction Id 9CMrMMt3UzeU63FFrUyFb1gNGuHXxvKfHqYJzyFTAk6z
To see the transaction in the transaction explorer, please open this url in your browser
https://testnet.nearblocks.io/txns/9CMrMMt3UzeU63FFrUyFb1gNGuHXxvKfHqYJzyFTAk6z
{ total: '1250000000000000000000', available: '0' }
```
- with JSON RPC call:
At the top of this section is a link detailing how to [construct a transaction](/integrations/create-transactions#low-level----create-a-transaction) without the full abstraction of the [`near-api-js` library](https://www.npmjs.com/package/near-api-js). For this and future examples that use the [RPC method `broadcast_tx_commit`](https://docs.near.org/api/rpc/setup#send-transaction-await), we provide a JSON-like object meant to act like [pseudocode](https://en.wikipedia.org/wiki/Pseudocode), imparting only the high-level details of a transaction. The code block below is the first such example, detailing the transaction for the `storage_deposit` method discussed here.
```yaml
Transaction: {
block_hash: `456…abc`,
signer_id: "serhii.testnet",
public_key: "ed25519:789…def",
nonce: 123,
receiver_id: "ft.demo.testnet",
actions: [
FunctionCall(
FunctionCallAction {
method_name: storage_deposit,
args: `{"account_id": "robertyan.near"}`,
gas: 300000000000000,
deposit: 1250000000000000000000,
},
),
]
}
```
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=broadcast_tx_commit \
params:='["DgAAAHNlcmhpaS50ZXN0bmV0AEKEp54fyVkp8dJE2l/m1ErjdhDGodBK8ZF6JLeHFMeZi/qoVEgrAAAPAAAAZnQuZGVtby50ZXN0bmV0JYbWPOu0P9T32vtUKnZSh+EaoboQqg0/De2i8Y+AjHIBAAAAAg8AAABzdG9yYWdlX2RlcG9zaXQCAAAAe30AQHoQ81oAAAAAILSd2XlDeBoAAAAAAAAAZF7+s4lcHOzy+re59VErt7LcZkPMMUVgOJV8LH5TsLBBv+8h/5tZ6+HFwxSp605A4c46oS9Jw4KBRXZD07lKCg=="]'
```
<details>
<summary>**Example Response:**</summary>
```json
{
"id": "myid",
"jsonrpc": "2.0",
"result": {
"receipts": [
{
"predecessor_id": "serhii.testnet",
"receipt": {
"Action": {
"actions": [
{
"FunctionCall": {
"args": "e30=",
"deposit": "125000000000000000000000",
"gas": 100000000000000,
"method_name": "storage_deposit"
}
}
],
"gas_price": "186029458",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "4urgFabknn1myZkjTYdb1BFSoEimP21k9smCUWoSggA7",
"receiver_id": "ft.demo.testnet"
},
{
"predecessor_id": "ft.demo.testnet",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "123750000000000000000000"
}
}
],
"gas_price": "186029458",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "7neJYE45vXnQia1LQqWuAfyTRXHy4vv88JaULa5DnNBd",
"receiver_id": "serhii.testnet"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "19200274886926125000"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "2c59u2zYj41JuhMfPUCKjNucmYfz2Jt83JLWP6VyQn1S",
"receiver_id": "serhii.testnet"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "18587201427159524319124"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "kaYatRKxcC1NXac69WwTqg6K13oXq2yEuy4LLZtsV2G",
"receiver_id": "serhii.testnet"
}
],
"receipts_outcome": [
{
"block_hash": "6Gz6P8N3F447kRc7kkxEhuZRZTzfuTUEagye65bPVGb",
"id": "4urgFabknn1myZkjTYdb1BFSoEimP21k9smCUWoSggA7",
"outcome": {
"executor_id": "ft.demo.testnet",
"gas_burnt": 4258977405434,
"logs": [],
"receipt_ids": [
"7neJYE45vXnQia1LQqWuAfyTRXHy4vv88JaULa5DnNBd",
"kaYatRKxcC1NXac69WwTqg6K13oXq2yEuy4LLZtsV2G"
],
"status": {
"SuccessValue": "eyJ0b3RhbCI6IjEyNTAwMDAwMDAwMDAwMDAwMDAwMDAiLCJhdmFpbGFibGUiOiIwIn0="
},
"tokens_burnt": "425897740543400000000"
},
"proof": []
},
{
"block_hash": "J6YXMnLPfLEPyvL3fWdhWPzpWAeW8zNY2CwAFwAg9tfr",
"id": "7neJYE45vXnQia1LQqWuAfyTRXHy4vv88JaULa5DnNBd",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 223182562500,
"logs": [],
"receipt_ids": [
"2c59u2zYj41JuhMfPUCKjNucmYfz2Jt83JLWP6VyQn1S"
],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "22318256250000000000"
},
"proof": [
{
"direction": "Right",
"hash": "D6tGfHwKh21PqzhBTsdKCsTtZvXFDkmH39dQiQBoGN3w"
}
]
},
{
"block_hash": "HRRF7N1PphZ46eN5g4DjqEgYHBYM76yGiXSTYWsfMGoy",
"id": "2c59u2zYj41JuhMfPUCKjNucmYfz2Jt83JLWP6VyQn1S",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": []
},
{
"block_hash": "J6YXMnLPfLEPyvL3fWdhWPzpWAeW8zNY2CwAFwAg9tfr",
"id": "kaYatRKxcC1NXac69WwTqg6K13oXq2yEuy4LLZtsV2G",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": [
{
"direction": "Left",
"hash": "1uVJZ8vNQpHMwPA38DYJjk9PpvnHDhsDcMxrTXcwf1s"
}
]
}
],
"status": {
"SuccessValue": "eyJ0b3RhbCI6IjEyNTAwMDAwMDAwMDAwMDAwMDAwMDAiLCJhdmFpbGFibGUiOiIwIn0="
},
"transaction": {
"actions": [
{
"FunctionCall": {
"args": "e30=",
"deposit": "125000000000000000000000",
"gas": 100000000000000,
"method_name": "storage_deposit"
}
}
],
"hash": "6sDUF1f5hebpybUbipNJer5ez13EY4HW1VBJEBqZjCEm",
"nonce": 47589658000011,
"public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN",
"receiver_id": "ft.demo.testnet",
"signature": "ed25519:31PfuinsVvM1o2CmiZbSguFkZKYqAtkf5PHerfexhDbC3SsJWDzRFBpoUYTNDJddhKeqs93GQ3SHtUyqaSYhhQ9X",
"signer_id": "serhii.testnet"
},
"transaction_outcome": {
"block_hash": "5XiuxzTpyw6p2NTD5AqrZYNW7SHvpj8MhUCyn1x2KSQR",
"id": "6sDUF1f5hebpybUbipNJer5ez13EY4HW1VBJEBqZjCEm",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 2427959010878,
"logs": [],
"receipt_ids": [
"4urgFabknn1myZkjTYdb1BFSoEimP21k9smCUWoSggA7"
],
"status": {
"SuccessReceiptId": "4urgFabknn1myZkjTYdb1BFSoEimP21k9smCUWoSggA7"
},
"tokens_burnt": "242795901087800000000"
},
"proof": []
}
}
}
```
</details>
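The `SuccessValue` field in the response above is itself Base64-encoded JSON. A sketch of decoding it in Node.js:

```javascript
// Decode the Base64-encoded `SuccessValue` returned by `storage_deposit`.
const successValue = "eyJ0b3RhbCI6IjEyNTAwMDAwMDAwMDAwMDAwMDAwMDAiLCJhdmFpbGFibGUiOiIwIn0=";
const decoded = JSON.parse(Buffer.from(successValue, "base64").toString("utf8"));
console.log(decoded); // { total: '1250000000000000000000', available: '0' }
```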
Transfer the tokens:
- using NEAR CLI:
```bash
near call <contract_account_id> ft_transfer '{"receiver_id": "<receiver_account_id>", "amount": "1"}' --accountId <sender_account_id> --amount 0.000000000000000000000001
```
Result example:
```bash
Scheduling a call: berryclub.ek.near.ft_transfer({"receiver_id": "volovyk.near", "amount": "1"}) with attached 0.000000000000000000000001 NEAR
Receipt: GDeE3Kv1JHgs71A22NEUbgq55r2Hvcnis8gCMyJtQ2mx
Log [berryclub.ek.near]: Transfer 1 from serhii.near to volovyk.near
Transaction Id 3MkWKbXVP8wyy4pBofELqiE1pwx7ie2v3SKCwaobNcEe
To see the transaction in the transaction explorer, please open this url in your browser
https://nearblocks.io/txns/3MkWKbXVP8wyy4pBofELqiE1pwx7ie2v3SKCwaobNcEe
''
```
- with JSON RPC call:
Transaction representation:
```yaml
Transaction: {
block_hash: `456…abc`,
signer_id: "serhii.near",
public_key: "ed25519:789…def",
nonce: 123,
receiver_id: "berryclub.ek.near",
actions: [
FunctionCall(
FunctionCallAction {
method_name: ft_transfer,
args: `{"receiver_id": "volovyk.near", "amount": "1"}`,
gas: 300000000000000,
deposit: 1,
},
),
]
}
```
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=broadcast_tx_commit \
params:='["CwAAAHNlcmhpaS5uZWFyAAmQpgZcJM5nMc6f3tqmw/YI4eAvc84ZgsKMRRRzhY/6CQAAAAAAAAARAAAAYmVycnljbHViLmVrLm5lYXLLWPIiUOElkDF3u4hLAMJ0Sjeo1V338pDdHIp70va3ewEAAAACCwAAAGZ0X3RyYW5zZmVyKwAAAHsicmVjZWl2ZXJfaWQiOiJ2b2xvdnlrLm5lYXIiLCJhbW91bnQiOiIxIn0AQHoQ81oAAAEAAAAAAAAAAAAAAAAAAAAA7fDOZQt3zCtdS05Y8XaZFlwO/Gd5wkkNAHShzDiLQXk4Q4ixpraLPMJivs35PZD0gocXl1iGFbQ46NG3VllzCA=="]'
```
To get details of this transaction:
```bash
http post https://archival-rpc.mainnet.near.org jsonrpc=2.0 method=EXPERIMENTAL_tx_status \
params:='["2Fy4714idMCoja7QLdGAbQZHzV2XEnUdwZX6yGa46VMX", "serhii.near"]' id=myid
```
<details>
<summary>**Example Response:**</summary>
```json
{
"id": "myid",
"jsonrpc": "2.0",
"result": {
"receipts": [
{
"predecessor_id": "serhii.near",
"receipt": {
"Action": {
"actions": [
{
"FunctionCall": {
"args": "eyJyZWNlaXZlcl9pZCI6InZvbG92eWsubmVhciIsImFtb3VudCI6IjEifQ==",
"deposit": "1",
"gas": 100000000000000,
"method_name": "ft_transfer"
}
}
],
"gas_price": "186029458",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.near",
"signer_public_key": "ed25519:eLbduR3uJGaAHLXeGKEfo1fYmYFKkLyR1R8ZPCxrJAM"
}
},
"receipt_id": "ExhYcvwAUb3Jpm38pSQ5oobwJAouBqqDZjbhavKrZtur",
"receiver_id": "berryclub.ek.near"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "18418055677558685763688"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.near",
"signer_public_key": "ed25519:eLbduR3uJGaAHLXeGKEfo1fYmYFKkLyR1R8ZPCxrJAM"
}
},
"receipt_id": "EAPh8XrBMqm6iuVH5jsfemz4YqUxWsV8Mz241cw5tjvE",
"receiver_id": "serhii.near"
}
],
"receipts_outcome": [
{
"block_hash": "6Re4NTkKzD7maKx3MuoxzYVHQKqjgnXW8rNjGjeVx8YC",
"id": "ExhYcvwAUb3Jpm38pSQ5oobwJAouBqqDZjbhavKrZtur",
"outcome": {
"executor_id": "berryclub.ek.near",
"gas_burnt": 6365774114160,
"logs": [
"Transfer 1 from serhii.near to volovyk.near"
],
"receipt_ids": [
"EAPh8XrBMqm6iuVH5jsfemz4YqUxWsV8Mz241cw5tjvE"
],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "636577411416000000000"
},
"proof": [
{
"direction": "Left",
"hash": "2eUmWnLExsH5jb6mALY9jTC8FiQH4FcuxQ16tn7RfkYr"
},
{
"direction": "Right",
"hash": "266d5QfDKXNbAWJgXMJXgLP97VwoMiC4Qyt8wH7xcs1Q"
},
{
"direction": "Right",
"hash": "EkJAuJigdVSZj41yGXSZYAtDV7Xwe2Hv9Xsqcv6LUZvq"
}
]
},
{
"block_hash": "3XMoeEdm1zE64aByFuWCrZaxfbvsjMHRFcL8Wsp95vyt",
"id": "EAPh8XrBMqm6iuVH5jsfemz4YqUxWsV8Mz241cw5tjvE",
"outcome": {
"executor_id": "serhii.near",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": [
{
"direction": "Right",
"hash": "EGC9ZPJHTbmCs3aQDuCkFQboGLBxU2uzrSZMsp8WonDu"
},
{
"direction": "Right",
"hash": "EsBd1n7bDAphA3HY84DrrKd1GP1VugeNiqFCET2S5sNG"
},
{
"direction": "Left",
"hash": "H4q3ByfNB7QH9QEuHN3tcGay7tjhsZwjXx3sq3Vm3Lza"
}
]
}
],
"status": {
"SuccessValue": ""
},
"transaction": {
"actions": [
{
"FunctionCall": {
"args": "eyJyZWNlaXZlcl9pZCI6InZvbG92eWsubmVhciIsImFtb3VudCI6IjEifQ==",
"deposit": "1",
"gas": 100000000000000,
"method_name": "ft_transfer"
}
}
],
"hash": "2Fy4714idMCoja7QLdGAbQZHzV2XEnUdwZX6yGa46VMX",
"nonce": 10,
"public_key": "ed25519:eLbduR3uJGaAHLXeGKEfo1fYmYFKkLyR1R8ZPCxrJAM",
"receiver_id": "berryclub.ek.near",
"signature": "ed25519:5eJPGirNkBUbMeyRfEA4fgi1FtkgGk8pmbbkmiz3Faf6zrANpBsCs5bZd5heSTvQ6b3fEPLSPCPi2iwD2XJT93As",
"signer_id": "serhii.near"
},
"transaction_outcome": {
"block_hash": "EAcwavyaeNWZnfhYP2nAWzeDgMiuiyRHfaprFqhXgCRF",
"id": "2Fy4714idMCoja7QLdGAbQZHzV2XEnUdwZX6yGa46VMX",
"outcome": {
"executor_id": "serhii.near",
"gas_burnt": 2428041740436,
"logs": [],
"receipt_ids": [
"ExhYcvwAUb3Jpm38pSQ5oobwJAouBqqDZjbhavKrZtur"
],
"status": {
"SuccessReceiptId": "ExhYcvwAUb3Jpm38pSQ5oobwJAouBqqDZjbhavKrZtur"
},
"tokens_burnt": "242804174043600000000"
},
"proof": [
{
"direction": "Right",
"hash": "GatQmy7fW5uXRJRSg7A315CWzWWcQCGk4GJXyW3cjw4j"
},
{
"direction": "Right",
"hash": "89WJwAetivZLvAkVLXUt862o7zJX7YYt6ZixdWebq3xv"
},
{
"direction": "Right",
"hash": "CH3wHSqYPJp35krLjSgJTgCFYnv1ymhd9bJpjXA31VVD"
}
]
}
}
}
```
</details>
You can get the same info later using the transaction hash from the previous call:
- using NEAR Explorer: https://nearblocks.io
<!--
- using NEAR CLI:
near tx-status <transaction_hash> --accountId <transaction_signer>
-->
- with JSON RPC call
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=EXPERIMENTAL_tx_status \
params:='[ "2Fy4714idMCoja7QLdGAbQZHzV2XEnUdwZX6yGa46VMX", "sender.testnet"]'
```
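When processing a transaction result programmatically, you can distinguish success from failure by inspecting the top-level `status` object. A sketch of such a helper (field names follow the RPC responses shown on this page):

```javascript
// Inspect the top-level `status` object of a tx-status RPC response.
function txOutcome(status) {
  if ("SuccessValue" in status) {
    // Success: the returned value (if any) is Base64-encoded.
    return { ok: true, value: Buffer.from(status.SuccessValue, "base64").toString("utf8") };
  }
  if ("Failure" in status) {
    return { ok: false, error: status.Failure };
  }
  return { ok: false, error: "pending or unknown status" };
}

console.log(txOutcome({ SuccessValue: "" })); // { ok: true, value: '' }
```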
Let's create a test transaction that should fail and investigate the response. We will try to send more tokens than are available on this account:
- using NEAR CLI:
```bash
near call <contract_account_id> ft_transfer '{"receiver_id": "<user_account_id>", "amount": "10000000000"}' --accountId <sender_account_id> --amount 0.000000000000000000000001
```
- with JSON RPC call:
Transaction representation:
```yaml
Transaction: {
block_hash: `456…abc`,
signer_id: "serhii.near",
public_key: "ed25519:789…def",
nonce: 123,
receiver_id: "berryclub.ek.near",
actions: [
FunctionCall(
FunctionCallAction {
method_name: ft_transfer,
args: `{"receiver_id":"volovyk.near","amount":"10000000000000000000"}`,
gas: 300000000000000,
deposit: 1,
},
),
]
}
```
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=broadcast_tx_commit \
params:='["DgAAAHNlcmhpaS50ZXN0bmV0AEKEp54fyVkp8dJE2l/m1ErjdhDGodBK8ZF6JLeHFMeZofqoVEgrAAAgAAAAZGV2LTE2MjMzMzM3OTU2MjMtMjEzOTk5NTk3NzgxNTm8Xq8BTIi6utG0424Gg7CknYzLH8RH/A409jq5o0zi7gEAAAACCwAAAGZ0X3RyYW5zZmVyPwAAAHsicmVjZWl2ZXJfaWQiOiJkZXYtMTYyMzMzMzkxNjM2OC01ODcwNzQzNDg3ODUzMyIsImFtb3VudCI6IjEifQBAehDzWgAAAQAAAAAAAAAAAAAAAAAAAABCwjqayKdpWgM6PE0ixzm/Gy0EtdpxVn0xehMTBReVfVAKIBTDPoPSaOdT8fAhk343F5uOMfSijhTqU2mWV3oD"]'
```
To get details of this transaction:
```bash
http post https://archival-rpc.mainnet.near.org jsonrpc=2.0 method=EXPERIMENTAL_tx_status \
params:='["CKHzodHvFw4C87PazsniycYZZHm37CEWLE2u8VUUMU7r", "serhii.near"]' id=myid
```
<details>
<summary>**Example Response:**</summary>
```json
{
"id": "myid",
"jsonrpc": "2.0",
"result": {
"receipts": [
{
"predecessor_id": "serhii.near",
"receipt": {
"Action": {
"actions": [
{
"FunctionCall": {
"args": "eyJyZWNlaXZlcl9pZCI6InZvbG92eWsubmVhciIsImFtb3VudCI6IjEwMDAwMDAwMDAwMDAwMDAwMDAwIn0=",
"deposit": "1",
"gas": 100000000000000,
"method_name": "ft_transfer"
}
}
],
"gas_price": "186029458",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.near",
"signer_public_key": "ed25519:eLbduR3uJGaAHLXeGKEfo1fYmYFKkLyR1R8ZPCxrJAM"
}
},
"receipt_id": "5bdBKwS1RH7wm8eoG6ZeESdhNpj9HffUcf8RoP6Ng5d",
"receiver_id": "berryclub.ek.near"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "1"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "system",
"signer_public_key": "ed25519:11111111111111111111111111111111"
}
},
"receipt_id": "Td3QxpKhMdi8bfVeMiQZwNS1VzPXceQdn6xdftoC8k6",
"receiver_id": "serhii.near"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "18653463364152698495356"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.near",
"signer_public_key": "ed25519:eLbduR3uJGaAHLXeGKEfo1fYmYFKkLyR1R8ZPCxrJAM"
}
},
"receipt_id": "DwLMVTdqv9Z4g9QC4AthTXHqqeJVAH4s1tFXHQYMArW7",
"receiver_id": "serhii.near"
}
],
"receipts_outcome": [
{
"block_hash": "DTruWLgm5Y56yDrxUipvYqKKm8F7hxVQTarNQqe147zs",
"id": "5bdBKwS1RH7wm8eoG6ZeESdhNpj9HffUcf8RoP6Ng5d",
"outcome": {
"executor_id": "berryclub.ek.near",
"gas_burnt": 4011776278642,
"logs": [],
"receipt_ids": [
"Td3QxpKhMdi8bfVeMiQZwNS1VzPXceQdn6xdftoC8k6",
"DwLMVTdqv9Z4g9QC4AthTXHqqeJVAH4s1tFXHQYMArW7"
],
"status": {
"Failure": {
"ActionError": {
"index": 0,
"kind": {
"FunctionCallError": {
"ExecutionError": "Smart contract panicked: The account doesn't have enough balance"
}
}
}
}
},
"tokens_burnt": "401177627864200000000"
},
"proof": [
{
"direction": "Right",
"hash": "6GHrA42oMEF4g7YCBpPw9EakkLiepTHnQBvaKtmsenEY"
},
{
"direction": "Right",
"hash": "DCG3qZAzf415twXfHmgBUdB129g2iZoQ4v8dawwBzhSh"
}
]
},
{
"block_hash": "F9xNWGhJuYW336f3qVaDDAipsyfpudJHYbmt5in3MeMT",
"id": "Td3QxpKhMdi8bfVeMiQZwNS1VzPXceQdn6xdftoC8k6",
"outcome": {
"executor_id": "serhii.near",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": [
{
"direction": "Right",
"hash": "CJNvis1CoJmccshDpPBrk3a7fdZ5HnMQuy3p2Kd2GCdS"
},
{
"direction": "Left",
"hash": "4vHM3fbdNwXGMp9uYzVKB13abEM6qdPUuZ9rfrdsaDzc"
}
]
},
{
"block_hash": "F9xNWGhJuYW336f3qVaDDAipsyfpudJHYbmt5in3MeMT",
"id": "DwLMVTdqv9Z4g9QC4AthTXHqqeJVAH4s1tFXHQYMArW7",
"outcome": {
"executor_id": "serhii.near",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": [
{
"direction": "Left",
"hash": "BR3R7tjziEgXMiHaJ7VuuXCo2yBHB2ZzsoxobPhPjFeJ"
},
{
"direction": "Left",
"hash": "4vHM3fbdNwXGMp9uYzVKB13abEM6qdPUuZ9rfrdsaDzc"
}
]
}
],
"status": {
"Failure": {
"ActionError": {
"index": 0,
"kind": {
"FunctionCallError": {
"ExecutionError": "Smart contract panicked: The account doesn't have enough balance"
}
}
}
}
},
"transaction": {
"actions": [
{
"FunctionCall": {
"args": "eyJyZWNlaXZlcl9pZCI6InZvbG92eWsubmVhciIsImFtb3VudCI6IjEwMDAwMDAwMDAwMDAwMDAwMDAwIn0=",
"deposit": "1",
"gas": 100000000000000,
"method_name": "ft_transfer"
}
}
],
"hash": "CKHzodHvFw4C87PazsniycYZZHm37CEWLE2u8VUUMU7r",
"nonce": 12,
"public_key": "ed25519:eLbduR3uJGaAHLXeGKEfo1fYmYFKkLyR1R8ZPCxrJAM",
"receiver_id": "berryclub.ek.near",
"signature": "ed25519:63MC3f8m5jeycpy97G9XaCwmJLx4YHRn2x5AEJDiYYzZ3TzdzWsrz8dgaz2kHR2jsWh35aZoL97tw1RRTHK6ZQYq",
"signer_id": "serhii.near"
},
"transaction_outcome": {
"block_hash": "7YUgyBHgmbGy1edhaWRZeBVq9zzbnzrRGtVRQS5PpooW",
"id": "CKHzodHvFw4C87PazsniycYZZHm37CEWLE2u8VUUMU7r",
"outcome": {
"executor_id": "serhii.near",
"gas_burnt": 2428084223182,
"logs": [],
"receipt_ids": [
"5bdBKwS1RH7wm8eoG6ZeESdhNpj9HffUcf8RoP6Ng5d"
],
"status": {
"SuccessReceiptId": "5bdBKwS1RH7wm8eoG6ZeESdhNpj9HffUcf8RoP6Ng5d"
},
"tokens_burnt": "242808422318200000000"
},
"proof": [
{
"direction": "Right",
"hash": "Agyg5P46kSVa4ptG9spteHpZ5c8XkvfbmDN5EUXhC1Wr"
},
{
"direction": "Right",
"hash": "3JDKkLCy5bJaAU3exa66sotTwJyGwyChxeNJgKReKw34"
},
{
"direction": "Right",
"hash": "7GXEmeQEJdd4c2kgN7NoYiF2bkjzV4bNkMmkhpK14NTz"
}
]
}
}
}
```
</details>
Was the fungible token transfer successful?
- Look at `result` » `transaction_outcome` » `outcome` » `status` and see if `SuccessReceiptId` is a key.
- If `SuccessReceiptId` is not a key, this fungible token transfer failed.
- If it does have that key, get its value, which is a receipt ID.
- Loop through `result` » `receipts_outcome` until you find an object with that receipt ID under the `id` key.
- In that object, check whether `outcome` » `status` has `SuccessValue` as a key.
- If `SuccessValue` is a key, the fungible token transfer succeeded; if not, it failed.
To determine how many fungible tokens were transferred, look at:
- `result` » `transaction` » `actions` » `FunctionCall` » `args`
- `base64`-decode the `args` value; this gives you a JSON payload, in which you should look for the `amount` key.
- `amount` contains a stringified number representing how many fungible tokens were successfully transferred.
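Putting both checks together, here's a minimal Node.js sketch. (This is illustrative only; `parseFtTransfer` is a hypothetical helper name, and `result` is assumed to be the parsed `result` field of the JSON-RPC response shown above.)

```javascript
// Sketch: given the parsed `result` object from an EXPERIMENTAL_tx_status
// response, check whether an ft_transfer succeeded and decode the amount.
function parseFtTransfer(result) {
  const txStatus = result.transaction_outcome.outcome.status;
  // No SuccessReceiptId key means the transfer failed outright.
  if (!("SuccessReceiptId" in txStatus)) return { success: false };

  // Find the receipt outcome whose id matches SuccessReceiptId.
  const receiptId = txStatus.SuccessReceiptId;
  const receipt = result.receipts_outcome.find((r) => r.id === receiptId);
  const success =
    receipt !== undefined && "SuccessValue" in receipt.outcome.status;

  // The transferred amount is in the base64-encoded FunctionCall args.
  const argsB64 = result.transaction.actions[0].FunctionCall.args;
  const args = JSON.parse(Buffer.from(argsB64, "base64").toString("utf8"));
  return { success, amount: args.amount };
}
```

Running this against the failed example response above would return `success: false`, since the first receipt outcome's status contains `Failure` rather than `SuccessValue`.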
## Transfer and call {#transfer-and-call}
If the idea of a fungible token using "transfer and call" is new, please review the comments above the function in [the Nomicon spec](https://nomicon.io/Standards/Tokens/FungibleToken/Core#reference-level-explanation). Also, see a similar idea [from EIP-677](https://github.com/ethereum/EIPs/issues/677).
For this example we will build and deploy FT contracts from [near-sdk-rs/examples/fungible-token](https://github.com/near/near-sdk-rs/tree/master/examples/fungible-token).
Let's call the `ft_transfer_call` function on the `ft` contract (the receiver of the transaction) and examine successful and unsuccessful scenarios.
### Successful transfer and call {#successful-transfer-and-call}
Let's send 10 fungible tokens to a `DeFi` contract that takes only 9.
- using NEAR CLI
```bash
near call <ft_contract_id> ft_transfer_call '{"receiver_id": "<defi_contract_id>", "amount": "10", "msg": "take-my-money"}' --accountId <user_account_id> --amount 0.000000000000000000000001
```
- with JSON RPC call
Transaction representation:
```yaml
Transaction: {
block_hash: `456…abc`,
signer_id: "serhii.testnet",
public_key: "ed25519:789…def",
nonce: 123,
receiver_id: "dev-1623333795623-21399959778159",
actions: [
FunctionCall(
FunctionCallAction {
method_name: ft_transfer_call,
args: `{"receiver_id":"dev-1623693121955-71667632531176","amount":"10","msg":"take-my-money"}`,
gas: 300000000000000,
deposit: 1,
},
),
]
}
```
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=broadcast_tx_commit \
params:='["DgAAAHNlcmhpaS50ZXN0bmV0AEKEp54fyVkp8dJE2l/m1ErjdhDGodBK8ZF6JLeHFMeZqPqoVEgrAAAgAAAAZGV2LTE2MjMzMzM3OTU2MjMtMjEzOTk5NTk3NzgxNTn9j4g2IJ8nGQ38i3+k+4WBAeJL1xP7ygQhC7CrvEG4NQEAAAACEAAAAGZ0X3RyYW5zZmVyX2NhbGxWAAAAeyJyZWNlaXZlcl9pZCI6ImRldi0xNjIzNjkzMTIxOTU1LTcxNjY3NjMyNTMxMTc2IiwiYW1vdW50IjoiMTAiLCJtc2ciOiJ0YWtlLW15LW1vbmV5In0AQHoQ81oAAAEAAAAAAAAAAAAAAAAAAAAANY2lHqJlAJYNDGEQiUNnmfiBV44Q1sdg45xNlNvlROOM+AtN1z3PSJqM6M6jAKXUwANoQTzFqXhIMHIjIPbTAA=="]'
```
To get details of this transaction:
```bash
http post https://archival-rpc.testnet.near.org jsonrpc=2.0 method=EXPERIMENTAL_tx_status \
params:='["5n1kwA3TQQyFTkddR2Jau3H1Pt8ebQNGaov6aCQ6TDp1", "serhii.testnet"]' id=myid
```
<details>
<summary>**Example Response:**</summary>
```json
{
"id": "myid",
"jsonrpc": "2.0",
"result": {
"receipts": [
{
"predecessor_id": "serhii.testnet",
"receipt": {
"Action": {
"actions": [
{
"FunctionCall": {
"args": "eyJyZWNlaXZlcl9pZCI6ImRldi0xNjIzNjkzMTIxOTU1LTcxNjY3NjMyNTMxMTc2IiwiYW1vdW50IjoiMTAiLCJtc2ciOiJ0YWtlLW15LW1vbmV5In0=",
"deposit": "1",
"gas": 100000000000000,
"method_name": "ft_transfer_call"
}
}
],
"gas_price": "186029458",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "Hw6z8kJ7CSaC6SgyQzcmXzNX9gq1gaAnLS169qgyZ2Vk",
"receiver_id": "dev-1623333795623-21399959778159"
},
{
"predecessor_id": "dev-1623333795623-21399959778159",
"receipt": {
"Action": {
"actions": [
{
"FunctionCall": {
"args": "eyJzZW5kZXJfaWQiOiJzZXJoaWkudGVzdG5ldCIsImFtb3VudCI6IjEwIiwibXNnIjoidGFrZS1teS1tb25leSJ9",
"deposit": "0",
"gas": 70000000000000,
"method_name": "ft_on_transfer"
}
}
],
"gas_price": "186029458",
"input_data_ids": [],
"output_data_receivers": [
{
"data_id": "EiDQi54XHfdD1KEcgiNzogXxXuwTpzeQfmyqVwbq7H4D",
"receiver_id": "dev-1623333795623-21399959778159"
}
],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "EB69xtJiLRh9RNzAHgBGmom8551hrK2xSRreqbjvJgu5",
"receiver_id": "dev-1623693121955-71667632531176"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "13116953530949529501760"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "AkwgvxUspRgy255fef2hrEWMbrMWFtnTRGduSgDRdSW1",
"receiver_id": "serhii.testnet"
},
{
"predecessor_id": "dev-1623333795623-21399959778159",
"receipt": {
"Action": {
"actions": [
{
"FunctionCall": {
"args": "eyJzZW5kZXJfaWQiOiJzZXJoaWkudGVzdG5ldCIsInJlY2VpdmVyX2lkIjoiZGV2LTE2MjM2OTMxMjE5NTUtNzE2Njc2MzI1MzExNzYiLCJhbW91bnQiOiIxMCJ9",
"deposit": "0",
"gas": 5000000000000,
"method_name": "ft_resolve_transfer"
}
}
],
"gas_price": "186029458",
"input_data_ids": [
"EiDQi54XHfdD1KEcgiNzogXxXuwTpzeQfmyqVwbq7H4D"
],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "4Tc8MsrJZSMpNZx7u4jSqxr3WhRzqxaNHxLJFqz8tUPR",
"receiver_id": "dev-1623333795623-21399959778159"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "761030677610514102464"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "9rxcC9o8x4RX7ftsDCfxK8qnisYv45rA1HGPxhuukWUL",
"receiver_id": "serhii.testnet"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "2137766093631769060520"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "H7YWFkvx16Efy2keCQ7BQ67BMEsdgdYLqJ99G4H3dR1D",
"receiver_id": "serhii.testnet"
}
],
"receipts_outcome": [
{
"block_hash": "B9yZz1w3yzrqQnfBFAf17S4TLaHakXJWqmFDBbFxaEiZ",
"id": "Hw6z8kJ7CSaC6SgyQzcmXzNX9gq1gaAnLS169qgyZ2Vk",
"outcome": {
"executor_id": "dev-1623333795623-21399959778159",
"gas_burnt": 20612680932083,
"logs": [
"Transfer 10 from serhii.testnet to dev-1623693121955-71667632531176"
],
"receipt_ids": [
"EB69xtJiLRh9RNzAHgBGmom8551hrK2xSRreqbjvJgu5",
"4Tc8MsrJZSMpNZx7u4jSqxr3WhRzqxaNHxLJFqz8tUPR",
"H7YWFkvx16Efy2keCQ7BQ67BMEsdgdYLqJ99G4H3dR1D"
],
"status": {
"SuccessReceiptId": "4Tc8MsrJZSMpNZx7u4jSqxr3WhRzqxaNHxLJFqz8tUPR"
},
"tokens_burnt": "2061268093208300000000"
},
"proof": []
},
{
"block_hash": "7Z4LHWksvw7sKYKwpQfjEMG8oigjtRXKa3EopN7hS2v7",
"id": "EB69xtJiLRh9RNzAHgBGmom8551hrK2xSRreqbjvJgu5",
"outcome": {
"executor_id": "dev-1623693121955-71667632531176",
"gas_burnt": 3568066327145,
"logs": [
"Sender @serhii.testnet is transferring 10 tokens using ft_on_transfer, msg = take-my-money"
],
"receipt_ids": [
"AkwgvxUspRgy255fef2hrEWMbrMWFtnTRGduSgDRdSW1"
],
"status": {
"SuccessValue": "IjEi"
},
"tokens_burnt": "356806632714500000000"
},
"proof": [
{
"direction": "Right",
"hash": "5X2agUKpqmk7QkUZsDQ4R4HdX7zXeuPYpVAfvbmF5Gav"
}
]
},
{
"block_hash": "CrSDhQNn72K2Qr1mmoM9j3YHCo3wfZdmHjpHJs74WnPk",
"id": "AkwgvxUspRgy255fef2hrEWMbrMWFtnTRGduSgDRdSW1",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": [
{
"direction": "Right",
"hash": "4WG6hF5fTAtM7GSqU8mprrFwRVbChGMCV2NPZEjEdnc1"
}
]
},
{
"block_hash": "CrSDhQNn72K2Qr1mmoM9j3YHCo3wfZdmHjpHJs74WnPk",
"id": "4Tc8MsrJZSMpNZx7u4jSqxr3WhRzqxaNHxLJFqz8tUPR",
"outcome": {
"executor_id": "dev-1623333795623-21399959778159",
"gas_burnt": 6208280264404,
"logs": [
"Refund 1 from dev-1623693121955-71667632531176 to serhii.testnet"
],
"receipt_ids": [
"9rxcC9o8x4RX7ftsDCfxK8qnisYv45rA1HGPxhuukWUL"
],
"status": {
"SuccessValue": "Ijki"
},
"tokens_burnt": "620828026440400000000"
},
"proof": [
{
"direction": "Left",
"hash": "BzT8YhEDDWSuoGGTBzH2Cj5GC4c56uAQxk41by4KVnXi"
}
]
},
{
"block_hash": "3Q2Zyscj6vG5nC2vdoYfcBHU9RVaAwoxHsHzAKVcAHZ6",
"id": "9rxcC9o8x4RX7ftsDCfxK8qnisYv45rA1HGPxhuukWUL",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": []
},
{
"block_hash": "7Z4LHWksvw7sKYKwpQfjEMG8oigjtRXKa3EopN7hS2v7",
"id": "H7YWFkvx16Efy2keCQ7BQ67BMEsdgdYLqJ99G4H3dR1D",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": [
{
"direction": "Left",
"hash": "61ak42D3duBBunCz3w4xXxoEeR2N7oav5e938TnmGFGN"
}
]
}
],
"status": {
"SuccessValue": "Ijki"
},
"transaction": {
"actions": [
{
"FunctionCall": {
"args": "eyJyZWNlaXZlcl9pZCI6ImRldi0xNjIzNjkzMTIxOTU1LTcxNjY3NjMyNTMxMTc2IiwiYW1vdW50IjoiMTAiLCJtc2ciOiJ0YWtlLW15LW1vbmV5In0=",
"deposit": "1",
"gas": 100000000000000,
"method_name": "ft_transfer_call"
}
}
],
"hash": "5n1kwA3TQQyFTkddR2Jau3H1Pt8ebQNGaov6aCQ6TDp1",
"nonce": 47589658000040,
"public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN",
"receiver_id": "dev-1623333795623-21399959778159",
"signature": "ed25519:256qp2jAGXhhw2t2XfUAjWwzz3XcD83DH2v9THwDPsZjCLWHU8QJd6cuA773NP9yBmTd2ZyYiFHuxVEkYqnbsaSb",
"signer_id": "serhii.testnet"
},
"transaction_outcome": {
"block_hash": "96k8kKzFuZWxyiUnT774Rg7DC3XDZNuxhxD1qEViFupd",
"id": "5n1kwA3TQQyFTkddR2Jau3H1Pt8ebQNGaov6aCQ6TDp1",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 2428149065268,
"logs": [],
"receipt_ids": [
"Hw6z8kJ7CSaC6SgyQzcmXzNX9gq1gaAnLS169qgyZ2Vk"
],
"status": {
"SuccessReceiptId": "Hw6z8kJ7CSaC6SgyQzcmXzNX9gq1gaAnLS169qgyZ2Vk"
},
"tokens_burnt": "242814906526800000000"
},
"proof": []
}
}
}
```
</details>
Now, let's try to follow the steps described in the previous section and determine whether this transaction was successful. In addition, let's analyze the various receipts in the series of cross-contract calls to determine how many fungible tokens were transferred. This is the most complex case we'll look at.
1. Check that `result` » `transaction_outcome` » `outcome` » `status` has `SuccessReceiptId` as a key. If not, no fungible tokens were transferred.
2. Take the value of the `SuccessReceiptId` key. In the case above it's `Hw6z8kJ7CSaC6SgyQzcmXzNX9gq1gaAnLS169qgyZ2Vk`.
3. Now, under `result` » `receipts`, loop through the array until you find a receipt whose `receipt_id` matches the value from step 2. (Note that in this receipt, under `actions`, there's an element with `"method_name": "ft_transfer_call"`.) At the same level of the JSON there's an `args` key, whose value is the base64-encoded arguments passed to the method. Decoded, it is:
```json
{"receiver_id":"dev-1623693121955-71667632531176","amount":"10","msg":"take-my-money"}
```
4. Loop through `result` » `receipts_outcome` until you find the object whose `id` equals the value from step 2. Similar to step 1, this object will also contain a `status` field that should contain the key `SuccessReceiptId`. Again, if this key isn't there, no fungible tokens were transferred; otherwise, get the value of `SuccessReceiptId`. In the above example, this value is `4Tc8MsrJZSMpNZx7u4jSqxr3WhRzqxaNHxLJFqz8tUPR`.
5. Similar to the previous step, loop through `result` » `receipts_outcome` until you find the object whose `id` matches the value from step 4. In that object, check that `outcome` » `status` has the `SuccessValue` field. This `SuccessValue` is the return value of `ft_resolve_transfer` and tells you how many fungible tokens were actually used by the receiving contract. Note that in the example above the value is `Ijki`, which is the base64-encoded version of `"9"`. At this point, we know that 10 fungible tokens were sent (from step 3) and 9 were taken.
For additional clarity, let's take a look at one more optional aspect. In step 4 we isolated an object in `result` » `receipts_outcome`. There's an array of `receipt_ids` in that object that's particularly interesting. The first element in the array is the receipt ID `EB69xtJiLRh9RNzAHgBGmom8551hrK2xSRreqbjvJgu5`. If we loop through `result` » `receipts_outcome` and find this value under the `id` key, we'll see what happened in the `ft_on_transfer` function, which runs on the contract receiving the fungible tokens. In that object, `status` » `SuccessValue` is `IjEi`, which is the base64-encoded value of `"1"`.
In summary:
1. A user called the fungible token contract with the method `ft_transfer_call` specifying the receiver account, how many tokens to send, and custom info.
2. The receiver account implemented `ft_on_transfer`, returning `"1"` to the callback function on the fungible token contract.
3. The fungible token contract's callback is `ft_resolve_transfer` and receives this value of `"1"`. It knows that 1 token was returned, so subtracts that from the 10 it intended to send. It then returns to the user how many tokens were used in this back-and-forth series of cross-contract calls: `"9"`.
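The five steps above can be sketched as a small Node.js helper. (Illustrative only; `tokensUsedInTransferCall` is a hypothetical name, and `result` is assumed to be the parsed `result` field of the JSON-RPC response.)

```javascript
// Sketch: compute how many tokens were actually used in an ft_transfer_call,
// following the SuccessReceiptId chain described in steps 1-5 above.
function tokensUsedInTransferCall(result) {
  const txStatus = result.transaction_outcome.outcome.status;
  if (!("SuccessReceiptId" in txStatus)) return null; // step 1: nothing transferred

  // Steps 2-4: find the ft_transfer_call receipt outcome; its status points
  // at the ft_resolve_transfer receipt.
  const firstId = txStatus.SuccessReceiptId;
  const first = result.receipts_outcome.find((r) => r.id === firstId);
  if (!first || !("SuccessReceiptId" in first.outcome.status)) return null;

  // Step 5: that receipt's SuccessValue is the base64-encoded,
  // JSON-stringified number of tokens used (e.g. "Ijki" -> '"9"' -> "9").
  const resolveId = first.outcome.status.SuccessReceiptId;
  const resolve = result.receipts_outcome.find((r) => r.id === resolveId);
  if (!resolve || !("SuccessValue" in resolve.outcome.status)) return null;
  const b64 = resolve.outcome.status.SuccessValue;
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}
```

Run against the example response above, this follows `Hw6z…` to `4Tc8…` and returns `"9"`.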
### Failed transfer and call {#failed-transfer-and-call}
Let's try to send more tokens than the account has:
- using NEAR CLI
```bash
near call <ft_contract_id> ft_transfer_call '{"receiver_id": "<defi_contract_id>", "amount": "1000000000", "msg": "take-my-money"}' --accountId <user_account_id> --amount 0.000000000000000000000001
```
Transaction representation:
```yaml
Transaction: {
block_hash: `456…abc`,
signer_id: "serhii.testnet",
public_key: "ed25519:789…def",
nonce: 123,
receiver_id: "dev-1623333795623-21399959778159",
actions: [
FunctionCall(
FunctionCallAction {
method_name: ft_transfer_call,
args: `{"receiver_id":"dev-1623333916368-58707434878533","amount":"1000000000","msg":"take-my-money"}`,
gas: 300000000000000,
deposit: 1,
},
),
]
}
```
- with JSON RPC call
```bash
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=broadcast_tx_commit \
params:='["DgAAAHNlcmhpaS50ZXN0bmV0AEKEp54fyVkp8dJE2l/m1ErjdhDGodBK8ZF6JLeHFMeZn/qoVEgrAAAgAAAAZGV2LTE2MjMzMzM3OTU2MjMtMjEzOTk5NTk3NzgxNTnrbOQ93Wv9xxBwmq4yDYrssCpwKSI2bzjNNCCCHMZKNwEAAAACEAAAAGZ0X3RyYW5zZmVyX2NhbGxeAAAAeyJyZWNlaXZlcl9pZCI6ImRldi0xNjIzMzMzOTE2MzY4LTU4NzA3NDM0ODc4NTMzIiwiYW1vdW50IjoiMTAwMDAwMDAwMCIsIm1zZyI6InRha2UtbXktbW9uZXkifQBAehDzWgAAAQAAAAAAAAAAAAAAAAAAAABQh3k+7zG2m/Yz3O/FBrvLaBwR/5YRB5FbFnb27Nfu6BW/Wh77RFH7+ktBwGLBwFbJGxiumIcsqBiGXgg1EPMN"]'
```
To get details of this transaction:
```bash
http post https://archival-rpc.testnet.near.org jsonrpc=2.0 method=EXPERIMENTAL_tx_status \
params:='["FQsh44pvEsK8RS9AbK868CmGwfhUU2pUrizkQ6wCWTsB", "serhii.testnet"]' id=myid
```
<details>
<summary>**Example response**:</summary>
```json
{
"id": "myid",
"jsonrpc": "2.0",
"result": {
"receipts": [
{
"predecessor_id": "serhii.testnet",
"receipt": {
"Action": {
"actions": [
{
"FunctionCall": {
"args": "eyJyZWNlaXZlcl9pZCI6ImRldi0xNjIzMzMzOTE2MzY4LTU4NzA3NDM0ODc4NTMzIiwiYW1vdW50IjoiMTAwMDAwMDAwMCIsIm1zZyI6InRha2UtbXktbW9uZXkifQ==",
"deposit": "1",
"gas": 100000000000000,
"method_name": "ft_transfer_call"
}
}
],
"gas_price": "186029458",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "83AdQ16bpAC7BEUyF7zoRsAgeNW7HHmjhZLvytEsrygo",
"receiver_id": "dev-1623333795623-21399959778159"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "1"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "system",
"signer_public_key": "ed25519:11111111111111111111111111111111"
}
},
"receipt_id": "Euy4Q33DfvJTXD8HirE5ACoXnw9PMTQ2Hq47aGyD1spc",
"receiver_id": "serhii.testnet"
},
{
"predecessor_id": "system",
"receipt": {
"Action": {
"actions": [
{
"Transfer": {
"deposit": "18681184841157733814920"
}
}
],
"gas_price": "0",
"input_data_ids": [],
"output_data_receivers": [],
"signer_id": "serhii.testnet",
"signer_public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN"
}
},
"receipt_id": "6ZDoSeV3gLFS2NXqMCJGEUR3VwBpSxBEPjnEEaAQfmXL",
"receiver_id": "serhii.testnet"
}
],
"receipts_outcome": [
{
"block_hash": "BohRBwqjRHssDVS9Gt9dj3SYuipxHA81xXFjRVLqgGeb",
"id": "83AdQ16bpAC7BEUyF7zoRsAgeNW7HHmjhZLvytEsrygo",
"outcome": {
"executor_id": "dev-1623333795623-21399959778159",
"gas_burnt": 3734715409940,
"logs": [],
"receipt_ids": [
"Euy4Q33DfvJTXD8HirE5ACoXnw9PMTQ2Hq47aGyD1spc",
"6ZDoSeV3gLFS2NXqMCJGEUR3VwBpSxBEPjnEEaAQfmXL"
],
"status": {
"Failure": {
"ActionError": {
"index": 0,
"kind": {
"FunctionCallError": {
"ExecutionError": "Smart contract panicked: The account doesn't have enough balance"
}
}
}
}
},
"tokens_burnt": "373471540994000000000"
},
"proof": []
},
{
"block_hash": "4BzTmMmTjKvfs6ANS5gmJ6GQzhqianEGWq7SaxSfPbdC",
"id": "Euy4Q33DfvJTXD8HirE5ACoXnw9PMTQ2Hq47aGyD1spc",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": [
{
"direction": "Right",
"hash": "5ipmcdgTieQqFXWQFCwcbZhFtkHE4PL4nW3mknBchpG6"
}
]
},
{
"block_hash": "4BzTmMmTjKvfs6ANS5gmJ6GQzhqianEGWq7SaxSfPbdC",
"id": "6ZDoSeV3gLFS2NXqMCJGEUR3VwBpSxBEPjnEEaAQfmXL",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 0,
"logs": [],
"receipt_ids": [],
"status": {
"SuccessValue": ""
},
"tokens_burnt": "0"
},
"proof": [
{
"direction": "Left",
"hash": "9tcjij6M8Ge4aJcAa97He5nw8pH7PF8ZjRHVahBZD2VW"
}
]
}
],
"status": {
"Failure": {
"ActionError": {
"index": 0,
"kind": {
"FunctionCallError": {
"ExecutionError": "Smart contract panicked: The account doesn't have enough balance"
}
}
}
}
},
"transaction": {
"actions": [
{
"FunctionCall": {
"args": "eyJyZWNlaXZlcl9pZCI6ImRldi0xNjIzMzMzOTE2MzY4LTU4NzA3NDM0ODc4NTMzIiwiYW1vdW50IjoiMTAwMDAwMDAwMCIsIm1zZyI6InRha2UtbXktbW9uZXkifQ==",
"deposit": "1",
"gas": 100000000000000,
"method_name": "ft_transfer_call"
}
}
],
"hash": "FQsh44pvEsK8RS9AbK868CmGwfhUU2pUrizkQ6wCWTsB",
"nonce": 47589658000031,
"public_key": "ed25519:5UfEFyve3RdqKkWtALMreA9jzsAGDgCtwEXGNtkGeruN",
"receiver_id": "dev-1623333795623-21399959778159",
"signature": "ed25519:2cPASnxKtCoQtZ9NFq63fg8RzpjvmmE8hL4s2jk8zuhnBCD3AnYQ6chZZrUBGwu7WrsGuWUyohP1bEca4vfbsorC",
"signer_id": "serhii.testnet"
},
"transaction_outcome": {
"block_hash": "FwHUeqmYpvgkL7eBrUUAEMCuaQshcSy5vm4AHchebhK1",
"id": "FQsh44pvEsK8RS9AbK868CmGwfhUU2pUrizkQ6wCWTsB",
"outcome": {
"executor_id": "serhii.testnet",
"gas_burnt": 2428166952740,
"logs": [],
"receipt_ids": [
"83AdQ16bpAC7BEUyF7zoRsAgeNW7HHmjhZLvytEsrygo"
],
"status": {
"SuccessReceiptId": "83AdQ16bpAC7BEUyF7zoRsAgeNW7HHmjhZLvytEsrygo"
},
"tokens_burnt": "242816695274000000000"
},
"proof": []
}
}
}
```
</details>
Let's examine this response.
* `result` » `transaction_outcome` » `outcome` » `status` » `SuccessReceiptId` is `83AdQ16bpAC7BEUyF7zoRsAgeNW7HHmjhZLvytEsrygo`
* Check `result` » `receipts_outcome` » `0` » `outcome` » `status` and find a `Failure` status there
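If you need the failure reason programmatically, here's a sketch. (Hypothetical helper; `result` is assumed to be the parsed JSON-RPC `result` object, and the path shown matches `FunctionCall` failures like the one above.)

```javascript
// Sketch: pull the human-readable error out of a failed receipt outcome.
// For FunctionCall failures the message lives under
// Failure.ActionError.kind.FunctionCallError.ExecutionError.
function extractFailureMessage(result) {
  for (const receipt of result.receipts_outcome) {
    const status = receipt.outcome.status;
    if ("Failure" in status) {
      const kind = status.Failure.ActionError?.kind;
      return (
        kind?.FunctionCallError?.ExecutionError ??
        JSON.stringify(status.Failure)
      );
    }
  }
  return null; // no failed receipts
}
```

Against the example above, this surfaces `Smart contract panicked: The account doesn't have enough balance`.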
:::tip Got a question?
<a href="https://stackoverflow.com/questions/tagged/nearprotocol"> Ask it on StackOverflow! </a>
:::
---
sidebar_position: 5
sidebar_label: "Cross-contract calls, etc."
title: "Adding cross-contract calls, access key shuffling, etc."
---
import {Github} from "@site/src/components/codetabs"
# Updating the contract
import shuffleKeys from '/docs/assets/crosswords/shuffle-keys.gif';
import clionSuggestion from '/docs/assets/crosswords/clion-suggestion.gif';
import carpenterAddingKey from '/docs/assets/crosswords/create-key-carpenter-near--carlcarlkarl.near--CarlCarlKarl.jpg';
import recycleKey from '/docs/assets/crosswords/remove-key-recycle--eerie_ram.near--eerie_ram.png';
To reiterate, we'd like anyone to be able to participate in the crossword puzzle, even folks who don't have a NEAR account.
The first person to win will "reserve their spot" and choose where to send the prize money: either an account they own or an account they'd like to create.
## Reserving their spot
### The plan
When a user first visits the crossword, they only see the crossword. No login button and no fields (like a `memo` field) to fill out.
On their first visit, our frontend will create a brand new, random seed phrase in their browser. We'll use this seed phrase to create the user's unique key pair. If a random seed phrase is already there, it skips this part. (We covered the code for this in [a previous section](02-use-seed-phrase.md#generate-random-seed-phrase).)
If the user is the first to solve the puzzle, the frontend discovers the function-call access key and calls `submit_solution` with that key. It's essentially using someone else's key, as this key lives on the crossword account.
**We'll be adding a new parameter** to `submit_solution` so the user can include the random public key we just stored in their browser.
During the execution of `submit_solution`, because contracts can use Promises to perform Actions, we'll remove the solution public key and add the user's public key.
This will lock out other attempts to solve the crossword puzzle and ensure there is only one winner.
<img src={shuffleKeys} width="600"/><br/><br/>
This means a puzzle can be in one of three states:
1. Unsolved
2. Solved and not yet claimed (not paid out)
3. Claimed and finalized
In the previous chapter [we discussed enums](../02-beginner/02-structs-enums.md#using-enums), so this is simply a matter of modifying the enum variants.
### The implementation
First, let's see how the `submit_solution` will verify the correct solution.
<Github language="rust" start="145" end="151" url="https://github.com/near-examples/crossword-tutorial-chapter-3/blob/ec07e1e48285d31089b7e8cec9e9cf32a7e90c35/contract/src/lib.rs" />
Instead of hashing the plaintext, we simply check that the public key matches what we know the answer is. (The answer being the series of words representing the solution to the crossword puzzle, used as a seed phrase to create a key pair, including a public key.)
Further down in the `submit_solution` method we'll follow our plan by **adding a function-call access key** (that only the winner has) and removing the access key that was discovered by the winner, so no one else can use it.
<figure>
<img src={carpenterAddingKey} alt="Illustration of a carpenter who has created a key. Art by carlcarlkarl.near" width="400"/>
<figcaption className="small">Our smart contract is like this carpenter adding a key to itself.<br/>Art by <a href="https://twitter.com/CarlCarlKarl" target="_blank">carlcarlkarl.near</a></figcaption>
</figure>
<br/>
<Github language="rust" start="175" end="181" url="https://github.com/near-examples/crossword-tutorial-chapter-3/blob/ec07e1e48285d31089b7e8cec9e9cf32a7e90c35/contract/src/lib.rs" />
The first promise above adds an access key, and the second deletes the access key on the account that was derived from the solution as a seed phrase.
<figure>
<img src={recycleKey} alt="Book showing pagination of hashes. Art created by eerie_ram.near" width="600"/>
<figcaption>We delete the function-call access key so there is only one winner.<br/>Art by <a href="https://twitter.com/eerie_ram" target="_blank">eerie_ram.near</a></figcaption>
</figure>
<br/>
Note that the new function-call access key is able to call two methods we'll be adding:
1. `claim_reward` — when the user has an existing account and wishes to send the prize to it
2. `claim_reward_new_account` — when the user doesn't have an account, wants to create one and send the prize to it
Both functions will make cross-contract calls and use callbacks to check the result. We finally get to the meat of this chapter, so let's go!
## Cross-contract calls
### The traits
We're going to be making a cross-contract call to the linkdrop account deployed to the `testnet` account. We're also going to have callbacks for that, and for a simple transfer to a (potentially existing) account. We'll create the traits that define both those methods.
<Github language="rust" start="19" end="45" url="https://github.com/near-examples/crossword-tutorial-chapter-3/blob/ec07e1e48285d31089b7e8cec9e9cf32a7e90c35/contract/src/lib.rs" />
:::tip
It's not necessary to create the trait for the callback as we could have just implemented the functions `callback_after_transfer` and `callback_after_create_account` in our `Crossword` struct implementation. We chose to define the trait and implement it to make the code a bit more readable.
:::
### `claim_reward`
Again, this function is called when the user solves the crossword puzzle and wishes to send the prize money to an existing account.
Seems straightforward, so why would we need a callback? We didn't use a callback in the previous chapter when the user logged in, so what gives?
It's possible that while claiming the prize, the user accidentally fat-fingers their username, or their cat jumps on their keyboard. Instead of typing `mike.testnet` they type `mike.testnzzz` and hit send. In short, if we try to send the prize to a non-existent account, we want to catch that.
For brevity, we'll skip some code in this function to focus on the Promise and callback:
```rust
pub fn claim_reward(
&mut self,
crossword_pk: PublicKey,
receiver_acc_id: String,
memo: String,
) -> Promise {
let signer_pk = env::signer_account_pk();
...
Promise::new(receiver_acc_id.parse().unwrap())
.transfer(reward_amount)
.then(
Self::ext(env::current_account_id())
.with_static_gas(GAS_FOR_ACCOUNT_CALLBACK)
.callback_after_transfer(
crossword_pk,
receiver_acc_id,
memo,
env::signer_account_pk(),
),
)
}
```
:::tip Your IDE is your friend
Oftentimes, the IDE can help you.
For instance, in the above snippet we have `receiver_acc_id.parse().unwrap()` which might look confusing. You can lean on code examples or documentation to see how this is done, or you can utilize the suggestions from your IDE.
:::
This `claim_reward` method will attempt to use the `Transfer` Action to send NEAR to the account specified. It might fail on a protocol level (as opposed to a smart contract failure), which would indicate the account doesn't exist.
Let's see how we check this in the callback:
<Github language="rust" start="381" end="411" url="https://github.com/near-examples/crossword-tutorial-chapter-3/blob/ec07e1e48285d31089b7e8cec9e9cf32a7e90c35/contract/src/lib.rs" />
:::info The `#[private]` macro
Notice that above the function, we have declared it to be private.
This is an ergonomic helper that checks to make sure the predecessor is the current account ID.
We actually saw this done "the long way" in the callback for the linkdrop contract in [the previous section](03-linkdrop.md#the-callback).
Every callback will want to have this `#[private]` macro above it.
:::
The snippet above essentially says it expects there to be a Promise result for exactly one Promise, and then sees if that was successful or not. Note that we're not actually getting a *value* in this callback, just if it succeeded or failed.
If it succeeded, we proceed to finalize the puzzle, like setting its status to be claimed and finished, removing it from the `unsolved_puzzles` collection, etc.
### `claim_reward_new_account`
Now we want to handle a more interesting case. We're going to do a cross-contract call to the smart contract located on `testnet` and ask it to create an account for us. This name might be unavailable, and this time we get to write a callback that *gets a value*.
Again, for brevity, we'll show the meat of the `claim_reward_new_account` method:
```rust
pub fn claim_reward_new_account(
&mut self,
crossword_pk: PublicKey,
new_acc_id: String,
new_pk: PublicKey,
memo: String,
) -> Promise {
...
ext_linkdrop::ext(AccountId::from(self.creator_account.clone()))
.with_attached_deposit(reward_amount)
.with_static_gas(GAS_FOR_ACCOUNT_CREATION) // This amount of gas will be split
.create_account(new_acc_id.parse().unwrap(), new_pk)
.then(
// Chain a promise callback to ourselves
Self::ext(env::current_account_id())
.with_static_gas(GAS_FOR_ACCOUNT_CALLBACK)
.callback_after_create_account(
crossword_pk,
new_acc_id,
memo,
env::signer_account_pk(),
),
)
}
```
Then the callback:
<Github language="rust" start="413" end="448" url="https://github.com/near-examples/crossword-tutorial-chapter-3/blob/ec07e1e48285d31089b7e8cec9e9cf32a7e90c35/contract/src/lib.rs" />
In the above snippet, there's one difference from the callback we saw in `claim_reward`: we capture the value returned from the smart contract we just called. Since the linkdrop contract returns a bool, we can expect that type. (See the comments with "NOTE:" above, highlighting this.)
## Callbacks
The way the callback works is that you start with `Self::ext()` and pass in the current account ID using `env::current_account_id()`. This essentially says that you want to call a function that lives on the current account ID.
You then have a couple of config options that each start with `.with_*`:
1. You can attach a deposit of Ⓝ, denominated in yoctoⓃ, with the `.with_attached_deposit()` method; it defaults to 0. (1 Ⓝ = 1,000,000,000,000,000,000,000,000 yoctoⓃ, or 10^24 yoctoⓃ.)
2. You can attach a static amount of gas with the `.with_static_gas()` method; it defaults to 0.
3. You can attach an unused gas weight with the `.with_unused_gas_weight()` method; it defaults to 1. The unused gas will be split among all the function calls in the current execution according to their weights. If there is only one function call, any nonzero weight results in all the unused gas being attached to it; a weight of 0, however, attaches none. If there are two function calls, one with a weight of 3 and one with a weight of 1, the first gets 3/4 of the unused gas and the second gets 1/4.
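To make the weight arithmetic concrete, here is a standalone sketch (plain Rust, not the SDK; the function name is ours) of the proportional split. The real accounting happens inside the NEAR runtime, not in contract code:

```rust
// Illustrative only: models how unused gas is divided among receiving
// function calls in proportion to their declared weights.
fn split_unused_gas(unused_gas: u64, weights: &[u64]) -> Vec<u64> {
    let total: u64 = weights.iter().sum();
    if total == 0 {
        // All weights zero: no function receives the unused gas.
        return vec![0; weights.len()];
    }
    weights.iter().map(|w| unused_gas * w / total).collect()
}

fn main() {
    // Two functions with weights 3 and 1: 3/4 and 1/4 of the unused gas.
    assert_eq!(split_unused_gas(100, &[3, 1]), vec![75, 25]);
    // A weight of 0 attaches no unused gas to that function.
    assert_eq!(split_unused_gas(100, &[0, 1]), vec![0, 100]);
}
```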
After you've added the desired configurations to the call, you execute the function and pass in the parameters. In this case, we call the function `callback_after_create_account` and pass in the crossword public key, the new account ID, the memo, and the signer's public key.
This function will be called with static gas equal to `GAS_FOR_ACCOUNT_CALLBACK` and no deposit attached. In addition, since `with_unused_gas_weight()` wasn't called, the weight defaults to 1, meaning the callback will split the unused gas with the `create_account` function, added on top of `GAS_FOR_ACCOUNT_CALLBACK`.
```rust
.then(
    // Chain a promise callback to ourselves
    Self::ext(env::current_account_id())
        .with_static_gas(GAS_FOR_ACCOUNT_CALLBACK)
        .callback_after_create_account(
            crossword_pk,
            new_acc_id,
            memo,
            env::signer_account_pk(),
        ),
)
```
:::tip Consider changing contract state in callback
It's not always the case, but often you'll want to change the contract state in the callback.
The callback is a safe place where we have knowledge of what's happened after cross-contract calls or Actions. If your smart contract is changing state *before* doing a cross-contract call, make sure there's a good reason for it. It might be best to move this logic into the callback.
:::
So what parameters should you pass into a callback?
There's no one-size-fits-all answer, but some advice can help.
Try to pass parameters that would be unwise to trust coming from another source. For instance, if an account calls a method to transfer some digital asset and you need to do a cross-contract call, don't rely on the results of that call to determine ownership. If the original function call determines the owner of a digital asset, pass that to the callback.
Passing parameters to callbacks is also a handy way to avoid fetching data from persistent collections twice: once in the initial method and again in the callback. Instead, just pass it along and save some CPU cycles.
## Checking the public key
The last simple change in this section is to modify the way we verify if a user has found the crossword solution.
In previous chapters we hashed the plaintext solution and compared it to the known solution's hash.
Here we're able to simply check the signer's public key, which is available in the `env` object [under `signer_account_pk`](https://docs.rs/near-sdk/latest/near_sdk/env/fn.signer_account_pk.html).
We'll do this check both when the solution is submitted and when the prize is claimed.
### When the crossword is solved
```rust
// The solver_pk parameter is the public key generated and stored in their browser
pub fn submit_solution(&mut self, solver_pk: PublicKey) {
    let answer_pk = env::signer_account_pk();
    // check to see if the answer_pk from signer is in the puzzles
    let mut puzzle = self
        .puzzles
        .get(&answer_pk)
        .expect("ERR_NOT_CORRECT_ANSWER");
```
### When prize is claimed
```rust
pub fn claim_reward(
    &mut self,
    crossword_pk: PublicKey,
    receiver_acc_id: String,
    memo: String,
) -> Promise {
    let signer_pk = env::signer_account_pk();
    ...
    // Check that puzzle is solved and the signer has the right public key
    match puzzle.status {
        PuzzleStatus::Solved {
            solver_pk: puzzle_pk,
        } => {
            // Check to see if signer_pk matches
            assert_eq!(
                signer_pk, puzzle_pk,
                "You're not the person who can claim this, or else you need to use your function-call access key, friend."
            );
        }
        _ => {
            env::panic_str("puzzle should have `Solved` status to be claimed");
        }
    };
    ...
}
```
NEAR’s Private Shard: Infrastructure for Enterprise, Built on the Open Web
DEVELOPERS
September 29, 2021
For business, privacy is important, whether that means transferring confidential information to partners or protecting customers' data. Blockchain has privacy baked into its DNA, but attempts at bringing enterprise and blockchain together have struggled.
Historically, businesses approached the ground-breaking technology by adopting the consortium model. Company A would invite its partners or companies in the same ecosystem to build a blockchain. But almost as soon as these consortiums started, they ran into issues.
Privacy is baked into blockchain’s DNA, but businesses have struggled to harness its potential.
Concerns around control and access merged with the headache of trying to move data from one company’s IT systems – often built on legacy software – to the blockchain in a way that could be interpreted by another company’s proprietary software. Then you have the thorny issue of throughput, fees and other associated costs with writing and storing data on the blockchain.
This is an issue we at NEAR have been thinking about deeply, and we believe we have the perfect solution to help enterprises embrace blockchain and the Open Web. We call it Private Shards.
Private Shards: Blending the public and private seamlessly and securely
Since its inception, NEAR was designed to be a sharded network: an interconnected global system of users, businesses, and infrastructure providers. Part of NEAR's design is the ability to partition shards to suit different use cases.
In consortium models, blockchains aren’t interoperable.
Consider a hospital's medical data, information on students at a university, or sensitive manufacturing data. Each of these datasets has different requirements. On NEAR, a private shard can be created to suit each of them, without each business building a blockchain from scratch.
In the consortium model, each of those use cases would need a separate system catering to its particular requirements. Not on NEAR.
The NEAR blockchain gives enterprises the ability to build private shards that can still be connected to the public blockchain. Why would a business want such a feature? Let’s look at the use cases above.
Let’s say a hospital housed its records on a private shard, but a patient gets sick overseas and a doctor needs access to his or her records. On NEAR, a doctor – whose identity could be publicly verifiable – could make a request to the private shard for access to those files simply, and securely.
In the factory example, there are aspects of manufacturing that need to be private, but then there are other aspects that need to be public. Let’s say a customer wants to know when a product has been built and shipped. A private shard could publish that data to the public blockchain seamlessly.
In order for mass adoption to occur, blockchain needs to be able to blend the public and the private seamlessly, but safely. Private Shards are NEAR’s solution to that challenge, and it’s as simple as starting a node on AWS or AliCloud. Let’s explore how.
How do Private Shards work?
Because private shards operate as shards on the NEAR network, public chain contracts can call into private shard contracts and vice versa.
This is done via the same mechanism that handles cross-shard routing, which is completely transparent to users and developers and doesn't require any additional work (public contracts don't even need to know they are interacting with private ones). Let's look at a use case: two private shards want to interact with each other without routing through public shards. How is this achieved? Through the shared identity space.
In private shards, blockchains can share information without compromising on security.
Each private shard gets its own name, similar to domains on the web. For example, if University of Berkeley and Tencent are using this system, they will have “berkeley.edu” and “tencent.com” accounts.
Inside their private shards, specific applications will then have a sub-account. For example, if both of them are using some application to track ownership of real estate: “properties.berkeley.edu” and “properties.tencent.com”. Selling a property between these two entities would then require a cross-private-shard transaction, with potential public chain settlement later if the information needs to be proven to public parties.
Applications that these companies use will be built exactly like other applications on NEAR: smart contracts written in Rust or TypeScript. This allows creators to build frontends that interact with these smart contracts, including sending cross-private and public shard transactions.
NEAR's mission has been to bridge the gap between the internet of today and the blockchains that will power the future. There is already a company actively working on this solution: Calimero's Private Shards.
Private Shards is a core part of that mission. It helps create an ecosystem where businesses, users, and partners can interact, and we invite anyone to join us in the mission to create a more open and inclusive web.
---
title: Legal Checklist
description: A basic list of legal processes to go through when creating and organizing a DAO.
---
# Legal Checklist
## Legal Setup
### Corporate Structuring
Make sure all shares are distributed according to the shareholders agreement, all meeting minutes are fully structured, and all necessary resolutions are filed with the relevant authorities. When considering which jurisdiction works best for you and your business, one of your top priorities should be to find a jurisdiction that can provide some certainty in an uncertain blockchain environment. Here you can find basic information on the latest development of blockchain regulations in [various jurisdictions](https://www.globallegalinsights.com/practice-areas/blockchain-laws-and-regulations).
### Structuring of IP Rights
Transfer IP rights to your company with every freelancer/contractor/employee who creates anything for your product (software, logo, design, texts, etc.)
### Audit of Open Source Licenses
Make sure that your developers document every “open source license” used in software development. If your business model includes software distribution and an “open license” requires publishing the whole software code, such licenses can cause additional costs for rewriting the code.
### Trademark Registration
Register your brand's trademark early on to avoid becoming a “victim” of an unscrupulous competitor, who might “squat” on your name before you do and then register all similar domains on that basis.
### Customized Terms of Use for Website/dApp
Order Terms of Use from a lawyer specifically for your business. Do not copy Terms of Use from other websites, and not only because this can result in copyright infringement. First of all, such Terms may not comply with the laws of your place of business. Second, they will most likely lack the relevant disclaimers that protect you from unjustified claims.
### Specification of Privacy Policy/GDPR Compliance
Consult with a lawyer about specific rules for working with personal data in the countries/regions where your product operates. Some of them limit where data may be stored, while others impose additional disclosure obligations that must be implemented in the Policy.
### Internal Data Protection Policy
You need more than just a Privacy Policy to avoid fines from regulators. Due diligence also covers agreements with third parties with whom the company shares personal data and implementation of relevant internal policies regarding data breaches and how to deal with requests to delete personal data.
### Signing Founders Agreement
Fix all understandings and agreements reached between the founders before you start your company in the Founders Agreement. This agreement is similar to a “constitution” for founders: it covers the distribution of equity, founders’ roles, procedures for “on-boarding” new partners and exits of existing ones. The more detailed the agreement is – the longer and more productive the cooperation will be.
### Issuing Tokens/Equity to Employees/Contributors
Develop incentive systems in the format of policies that document what employees/contributors have to achieve in order to receive a token/equity option. Make sure the team is well motivated for future endeavours. For token-based awards in the USA, make sure to familiarize yourself with Rule 701. Consult a lawyer for a written plan. [https://docs.google.com/document/d/1Vc3bH42KdTH-muHflU4LP7nFAnfK5s5hGuX-4XerWa8/edit](https://docs.google.com/document/d/1Vc3bH42KdTH-muHflU4LP7nFAnfK5s5hGuX-4XerWa8/edit)
## HR
### Initial Organisational Design
Process of aligning the structure of an organisation with its objectives, with the ultimate aim of improving efficiency and effectiveness. Understanding the business processes, workflows, roles and responsibilities, volumes of work, activity analysis and resources.
### Hiring Plan for 6-Months
[Template to track hiring plans](https://docs.google.com/spreadsheets/d/14o4B7hG\_GtIvQZ9mq9VwSKoV8sWnxwZPepkpLPkoLa0/edit?usp=sharing)
### Job Descriptions or Expectations for Each Person
Generally speaking there should be a universal understanding of what the profile is and why we are looking for it (i.e. what are the immediate challenges/needs and what is the charter for this role). The focus should be on developing a set of non-negotiable attributes you are interviewing for. There does not need to be universal alignment, but at least identify where there is a difference of opinion and address accordingly. This will drive a lot of consistency during the interview process. From our experiences and feedback received, candidates take notice and it makes a big difference. A job description makes things crystal clear for everyone involved in what we are evaluating in candidates, helps in engaging candidates, and will accelerate your chosen candidate’s ability to hit the ground running as they know what they signed up for. Think of it as a blueprint, more or less, for framing the profile. However, at a minimum scope out immediate challenges/needs that you'll ask this new hire to solve and share with the candidate early in your discussions.
### Labor Relations
Make sure to have excellent employment/service agreement and contracting support process to prevent work stoppages. The misclassification of employees as independent contractors or freelancers is a hot topic and an area of enforcement to watch closely. With a steady increase in the number of claims filed against employers, it’s more important than ever for companies to consult with human resources and legal experts about whether some actions may be considered discrimination, harassment or retaliation. Knowledge of current case law is important in today’s fluid environment.
### Local Employment Laws
Employment-related laws vary significantly by country, state/province, or even city in some places. This can impact your hiring process, as well as your responsibilities as an employer in such areas as interview questions, statutory leave, and terminations. Before you enter a location by hiring someone, do your research.
### Compensation and Benefits Programs
Whether the recruiter lists the wage as an hourly, weekly, monthly, or annual rate, candidates see it as the most critical part of any job offer. Benefits cover indirect pay. This can be health insurance, tokens, or any perks offered to employees. All of these things are critical in any job offer. Two jobs that offer identical salaries may vary wildly in the benefits category, making one a better financial proposition than the other. Be creative :)
### Business Immigration and Visa Processes
Businesses must closely monitor the legislative environment and stay informed on the business immigration and visa issuance related topics that are driving significant changes in the law and best practices in human resources management. The best bet for employers is to partner with an HR consultant or employment law adviser to stay ahead of the curve.
## Finance/Tax/Audit
### Budget
Build a detailed budget to have visibility on cash flows over \~18 months of operations. The focus will be on expenses, where a granular structure will help you foresee all expense items and realistic capital requirements. Token issuance might come as a substantial funding source, but it also typically brings high costs of legal advice, structuring, and regulatory compliance (often across several jurisdictions). NF-funded spinoffs are required to prepare a budget per the provided template (and it's recommended for all spinoffs): https://docs.google.com/spreadsheets/d/1TwavFQPB0XgZVNpf4N6xU-7pdnMRL6ppAa8FN2bD2Bs/edit?ts=60decc90#gid=
Update on the Grassroots Support Initiative
NEAR FOUNDATION
August 28, 2023
One of the things the NEAR Foundation has always been most proud of is the passion and enthusiasm of our builders and grassroots projects — something we have seen demonstrated again since the recent post about the NDC. Over the past week alone, over 30 projects have reached out to the Ecosystem Team and a number of conversations are already underway. This has allowed NEAR Foundation to begin mapping out current needs, and understand the best way to provide support that will ensure the NEAR ecosystem remains a vibrant, open, and accessible place to build.
For us at the Foundation, hearing directly from builders provides invaluable insight into the state of the ecosystem and how we can best continue to support the people and projects that make it what it is. What we have heard so far is that while financial support is important as the NDC fully ramps up, every project has a unique set of needs that requires a holistic approach to provide the right support, and that simply re-starting a grants program is not the right solution. To make sure that we get as many perspectives from across the ecosystem, we will be leaving the form open for one more week (until September 4th) to give anyone who needs support a chance to have their voice heard.
Once the form is closed and we have had a chance to review all of the inputs, we will then begin setting out a broader plan — in alignment with the NDC — to ensure projects get the support they need as we continue moving forward on the path to decentralization. The moment we have the full insights from these conversations, which should be within the next few weeks, we will share them in a follow-up blog post along with a clear plan on how we will be moving forward.
We are excited to have so much interest from builders, and look forward to creating a plan that can help keep grassroots projects engaged and building.
You can sign up for a time to speak with the Ecosystem Relations Team here.
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
# Fungible Token Standard
<DocCardList items={useCurrentSidebarCategory().items}/>
---
id: account
title: Account
sidebar_label: Account
---
You can interact with, create or delete NEAR accounts.
### Load Account {#load-account}
This will return an Account object for you to interact with.
```js
const account = await nearConnection.account("example-account.testnet");
```
[<span className="typedoc-icon typedoc-icon-class"></span> Class `Account`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html)
### Create Account {#create-account}
Create a sub-account.
```js
// creates a sub-account using funds from the account used to create it.
const account = await nearConnection.account("example-account.testnet");
await account.createAccount(
"sub.example-account.testnet", // sub-account name
"8hSHprDq2StXwMtNd43wDTXQYsjXcD4MJTXQYsjXcc", // public key for sub account
"10000000000000000000" // initial balance for new account in yoctoNEAR
);
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.createAccount`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#createAccount)
For creating .near or .testnet accounts please refer to the [cookbook](https://github.com/near/near-api-js/tree/master/packages/cookbook/accounts).
### Delete Account {#delete-account}
```js
// deletes account found in the `account` object
// transfers remaining account balance to the accountId passed as an argument
const account = await nearConnection.account("example-account.testnet");
await account.deleteAccount("beneficiary-account.testnet");
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.deleteAccount`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#deleteAccount)
### Get Account Balance {#get-account-balance}
```js
// gets account balance
const account = await nearConnection.account("example-account.testnet");
await account.getAccountBalance();
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.getAccountBalance`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#getAccountBalance)
### Get Account details {#get-account-details}
Returns information about an account, such as authorized apps.
```js
// gets account details in terms of authorized apps and transactions
const account = await nearConnection.account("example-account.testnet");
await account.getAccountDetails();
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.getAccountDetails`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#getAccountDetails)
### Deploy a Contract {#deploy-a-contract}
You can deploy a contract from a compiled WASM file. This returns an object with transaction and receipts outcomes and status.
```js
const account = await nearConnection.account("example-account.testnet");
const transactionOutcome = await account.deployContract(
fs.readFileSync("example-file.wasm")
);
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.deployContract`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#deployContract)
[<span className="typedoc-icon typedoc-icon-interface"></span> Interface `FinalExecutionOutcome`](https://near.github.io/near-api-js/interfaces/_near_js_types.provider_response.FinalExecutionOutcome.html)
### Send Tokens {#send-tokens}
Transfer NEAR tokens between accounts. This returns an object with transaction and receipts outcomes and status.
```js
const account = await nearConnection.account("sender-account.testnet");
await account.sendMoney(
"receiver-account.testnet", // receiver account
"1000000000000000000000000" // amount in yoctoNEAR
);
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.sendMoney`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#sendMoney)
[<span className="typedoc-icon typedoc-icon-interface"></span> Interface `FinalExecutionOutcome`](https://near.github.io/near-api-js/interfaces/_near_js_types.provider_response.FinalExecutionOutcome.html)
### State {#state}
Get basic account information, such as amount of tokens the account has or the amount of storage it uses.
```js
const account = await nearConnection.account("example-account.testnet");
const accountState = await account.state();
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.state`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#state)
[<span className="typedoc-icon typedoc-icon-interface"></span> Interface `AccountView`](https://near.github.io/near-api-js/interfaces/near_api_js.providers_provider.AccountView.html)
### Access Keys {#access-keys}
You can get and manage keys for an account.
#### Add Full Access Key {#add-full-access-key}
```js
// takes public key as string for argument
const account = await nearConnection.account("example-account.testnet");
await account.addKey("8hSHprDq2StXwMtNd43wDTXQYsjXcD4MJTXQYsjXcc");
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.addKey`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#addKey)
#### Add Function Access Key {#add-function-access-key}
```js
const account = await nearConnection.account("example-account.testnet");
await account.addKey(
"8hSHprDq2StXwMtNd43wDTXQYsjXcD4MJTXQYsjXcc", // public key for new account
"example-account.testnet", // contract this key is allowed to call (optional)
"example_method", // methods this key is allowed to call (optional)
"2500000000000" // allowance the key can use to call methods (optional)
);
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.addKey`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#addKey)
#### Get All Access Keys {#get-all-access-keys}
```js
const account = await nearConnection.account("example-account.testnet");
await account.getAccessKeys();
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.getAccessKeys`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#getAccessKeys)
[<span className="typedoc-icon typedoc-icon-interface"></span> Interface `AccessKeyInfoView`](https://near.github.io/near-api-js/interfaces/near_api_js.providers_provider.AccessKeyInfoView.html)
#### Delete Access Key {#delete-access-key}
```js
const account = await nearConnection.account("example-account.testnet");
await account.deleteKey("8hSHprDq2StXwMtNd43wDTXQYsjXcD4MJTXQYsjXcc");
```
[<span className="typedoc-icon typedoc-icon-method"></span> Method `Account.deleteKey`](https://near.github.io/near-api-js/classes/near_api_js.account.Account.html#deleteKey)
The NEAR Token Sale and Unforkable Community
COMMUNITY
August 14, 2020
The NEAR community utterly humbled us this week. The NEAR token sale, which occurred on CoinList and was available to many non-US participants, sold out completely in just over 2 hours. We didn’t even remotely expect this level of excitement and appreciation but we are deeply grateful.
The goal of the sale was to expand the reach and participation of the NEAR community as we move along the final steps to making MainNet fully community-governed. By many measures, this was an outstanding success — we now have the opportunity to welcome over 1,500 new token holders, who collectively purchased 100M tokens. This is an overwhelming show of support!
In this post, I’ll review some of the early details of the sale but also introduce a new opportunity for those who wanted to participate but were unable to and highlight how they’ll be supported in the future.
A Summary of the Sale
The sale generated a tremendous amount of interest within the broader community and resulted in the following statistics:
Over 9,500 unique registrants
Over 1,500 successful participants
Over $152M in allocation requests made
Over $30M in commitments made
135 minutes (and 2 attempts) to sell out
Please note that specific numbers aren’t yet available because it takes several days to finish some of the financing and KYC processes.
Interest in the blockchain space has increased in recent months as fees on Ethereum have again skyrocketed. On our side, the NEAR team has spent 2 years building a platform that both solves the scaling challenges and allows developers to easily build decentralized apps that real humans can use. This sale is a fantastic endorsement of that platform as a solution to today’s problems.
A Second Chance
The conditions of the sale meant that over 6,000 verified registrants were not able to participate. Many of them experienced congestion and server issues on the CoinList platform which made it challenging to take part. This group includes both longtime community members who have been excited about the project for years and brand new enthusiasts who just caught the bug.
We’ve spent the past several days listening to what these people had to say — the good, the bad and the ugly. We heard frustration about some of the technical challenges but, ultimately, it was mostly because people just wanted the chance to take part and felt like they missed the opportunity.
We’d like to give everyone who has that kind of enthusiasm for the project a chance to participate and to join this community. In order to do this, we have the following offer:
Anyone who already registered for the sale but didn’t receive an allocation will have the opportunity to purchase up to 3,000 NEAR tokens at the original sale price. Participants can choose either the 12-month linear release or the 24-month linear release option. To ensure things go smoothly, they will have a full seven days to purchase tokens and there is no preference given to earlier purchases so all eligible participants who want an allocation will receive one.
This second chance sale will take place from August 18 to August 25. CoinList will send an e-mail out to all eligible participants by early next week with details on how to participate.
While we realize this may not be the exact allocation everyone originally anticipated and represents an imperfect solution, we hope this helps people who want to have a meaningful part in the NEAR ecosystem to join us on this journey.
A Continued Commitment to Community
Beyond just providing people with an opportunity to join the community, both CoinList and the NEAR team want to make it clear that their efforts to build that community will be supported in the future. Thus CoinList will be seeding the first ever NEAR community fund with a $750,000 donation, which will be matched in kind by the NEAR Foundation.
This $1.5M commitment is just the start. It will support future projects on the platform and help underwrite new community-focused initiatives.
We hope this fund will provide our community with an opportunity to contribute to the evolution and growth of the NEAR ecosystem. If you are interested in getting involved directly, reach out to us at [email protected].
We are so excited to build alongside you!
Thank you.
FAQ
Who can participate? This is not open to any new registrants. It has the same geographical restrictions as before, including being unavailable to US-based participants. Participants will still need to pass the full Swiss KYC and anti-bot checks to participate in the sale if they have not already. Participants who already received allocations are not eligible.
Are new tokens being created? No, there are no new tokens being created to do this. Tokens are being reshuffled from other categories, where they would otherwise support future activities, in order to serve this community now.
How does this affect circulating supply? The net result on token balances and circulating supply will depend on how many people participate in this, but we expect it to reduce circulating supply in early days (because some tokens are being moved from an unlocked category) and slightly increase it during later periods (because some tokens are being moved from longer-term categories). The token distribution post will be updated but only after we have finished this activity because it heavily depends on the final participation rate.
Where can I go to participate? You will receive an email with instructions from CoinList if you are eligible. You do not need to ask us if you are eligible!
Am I guaranteed an allocation if I’m eligible? Yes, you will be able to purchase up to 3,000 NEAR tokens. We have reserved enough space for everyone who was not able to participate the first time to do so now. You will have between August 18 to August 25 to claim this. There is no rush because you can participate as long as you are in before time expires.
I already purchased NEAR tokens. Can I participate? This is only available to people who were not able to participate in the sale already. It doesn’t matter which option you chose the first time — if you were able to receive an allocation, you are not eligible for this sale.
Where can I ask questions about this? Please email the CoinList team at [email protected]
When can we learn more about the community fund? We’re not ready to release details on this yet, but stay tuned. We’ll notify the community via the mailing list at https://pages.near.org/newsletter.
# Proposals
This section contains the NEAR Enhancement Proposals (NEPs), which cover fully developed concepts for NEAR. Before an idea becomes a proposal, it is fleshed out and discussed on the [NEAR Governance Forum](https://gov.near.org).
These subcategories are great places to start such a discussion:
- [Standards](https://gov.near.org/c/dev/standards/29) — examples might include new protocol standards, token standards, etc.
- [Proposals](https://gov.near.org/c/dev/proposals/68) — ecosystem proposals that may touch tooling, node experience, wallet usage, and so on.
Once an idea has been thoroughly discussed and vetted, a pull request should be made according to the instructions at the [NEP repository](https://github.com/near/NEPs).
The proposals shown in this section have been merged and exist to offer as much information as possible including historical motivations, drawbacks, approaches, future concerns, etc.
Once a proposal has been fully implemented it can be added as a specification, but will remain a proposal until that time. |
NEARCON: The Ultimate Web3 Education for Only $99!
NEAR FOUNDATION
September 22, 2023
As NEAR’s flagship conference, NEARCON is the place to learn all of the latest on Web3 technologies and the open web. NEARCON 2023 will be the ultimate Web3 education — and for only $99.
Let’s explore what you can expect from the NEARCON crash course — from recent Web3 developments in venture capital, decentralization, and AI to NEAR technologies like the Blockchain Operating System (B.O.S), and much more.
What you’ll learn at NEARCON 2023
The Web3 ecosystem consists of several key roles and sectors, and we’re lucky to have some of the biggest names attending NEARCON. Over 4 days, you will hear from venture capital figures like Samantha Bohbot (RockawayX) and Nathalie Oestmann (Outlier Ventures) as well as accelerator leaders like Nicolai Reinbold (CV Labs). There will also be a number of entrepreneurs and founders, including Michael Casey (CoinDesk), Alexander Skidanov (NEAR Protocol), Rebecca Allen (Contented), Dave Balter (Flipside Crypto), Aurelie Boiteux (Nansen), Mitchell Amador (Immunefi), Marc Goldich (Proximity Labs), and many others.
Featured talks, panels, and discussions will include learnings on:
Open Web — Learn how open source technologies and ideals are colliding with blockchain technologies like NEAR to create an open web where everyone can be fairly rewarded for their data, ideas, and effort.
AI & Blockchain — Learn from the OG of AI himself, NEAR co-founder Illia Polosukhin, on how Blockchain and Artificial Intelligence are intersecting in 2023, and how they will develop in the near future.
Decentralization — See how the NDC (NEAR Digital Collective) is helping to decentralize governance to the NEAR community.
Regulation & Policy — Get updated on the latest Web3 regulatory developments.
Your Web3 education doesn’t stop at NEARCON — take the next step
If you love what you learn at NEARCON, you can keep going with your NEAR and open web education. The NEAR ecosystem offers a number of resources to help you find success in Web3.
Thanks to the NEAR Horizon team — who will be on hand at NEARCON — you can get help in your journey of building, getting funding, and networking as a Web3 builder, founder, and entrepreneur.
See you in Lisbon! |
---
title: 6.2 Digital Real Estate Economies
description: Understanding Real Estate as a component of active Metaverse projects
---
# 6.2 Decentraland, Sandbox, and Digital Real Estate Economies
This second lecture on the Metaverse is a deep dive into one of the most popular and misunderstood components of active Metaverse projects: _Digital Real Estate_ and _Property_. Both Metaverses and Virtual Worlds center on the capacity to have ‘land’, ‘space’, or ‘planets’ within which users can move, interact, purchase, and transact. Decentraland (MANA) and Sandbox (SAND) dominate this domain, and provide a nice case study for understanding the early versions of how people started to think about value in the Metaverse.
## Property Rights and The Geopolitical Theory of Crypto
Loosely, our geopolitical analogy from the very beginning finds us once more, this time in the concentrated domain of property rights: countries around the world hold laws pertaining to how land can be owned, what can be built on that land, and how it can be passed down from generation to generation. Digital property translates this into the world of smart contracts: value centers on the blockchain that the digital land is built upon, and then on the smart-contract functionality and permissioning that allow users of different qualification levels to purchase, create, modify, and monetize that land.
The underlying logic of property in the physical world is mirrored in the digital world: property in close proximity to certain social locales possesses higher value. Scarcity of property and finite attributes of land (water, mountains, etc.) may equally increase the value of the land. Land can also be bought, constructed for some purpose, monetized, and then re-sold, similar to how homes are renovated and flipped in the physical world. We have seen small tastes of these opportunities, but none at scale.
## Decentraland (MANA)
Decentraland was one of the first Metaverse plays on Ethereum, which garnered serious attention and market share. Importantly, Decentraland describes itself as a ‘Virtual Reality’ from which ‘users can create, experience, and monetize their content and applications’ ([Decentraland](https://docs.decentraland.org/player/general/introduction/)).
From a high level, Decentraland allows users to customize and design their own character, purchase parcels of land in the virtual world currency $MANA, and then monetize other items or applications they might have built, inside of their virtual world.
The optionality offered to users vis-à-vis parcels of land is limited to the following:
![](@site/static/img/bootcamp/mod-em-6.2.1.png)
Users can then also create custom items and structures on their land that can be monetized separately from the parcels of land themselves. All such items are paid for in $MANA, which operates as the de facto currency of the virtual world.
![](@site/static/img/bootcamp/mod-em-6.2.2.png)
The world of Decentraland is visible from a high-level view, so that property purchases, pricing of different areas, and visibility into what others have built are also possible.
## Sandbox ($SAND)
Slightly different - and in some ways more sophisticated and complex - is Sandbox. While branded as a game, Sandbox describes itself as a virtual world living on Ethereum from which users can create, monetize, and interact with one another, through uniquely designed avatars.
The pillars of Sandbox include a token for the in-world economy, $SAND (which is staked to create assets); a token specifically for real estate, $LAND; and in-world NFTs capable of being used to customize avatars.
![](@site/static/img/bootcamp/mod-em-6.2.3.png)
Uniquely, Sandbox has created a tiered system of value for a finite number of parcels of its virtual world.
_“The Sandbox Metaverse is made up of LANDS, that are parts of the world, owned by players to create and monetize experiences. There will only ever be 166,464 LANDS available, which can be used to host games, build multiplayer experiences, create housing, or offer social experiences to the community”_
Each of these parcels, in turn, contains a number of GEMS and Catalysts, which add value to the parcel.
![](@site/static/img/bootcamp/mod-em-6.2.4.png)
Beyond the tokens, users are free to monetize assets and experiences inside of the virtual world, and can do so either by owning and constructing their own experience, or staking SAND in order to create assets that they can then resell.
The three primary ‘activities’ for users in the Sandbox provide an early window into the future of Metaverse based engagement and monetization strategies:
* **[Gamemaker](https://www.sandbox.game/en/create/game-maker/)**: Allows users to design, edit, and launch games within the Sandbox for other users to play.
* **[VoxEdit](https://www.sandbox.game/en/create/vox-edit/)**: Allows users to create and customize non-fungible assets in the Sandbox metaverse.
* **[Marketplace](https://www.sandbox.game/en/shop/)**: Allows users to buy and sell collections, equipment and assets.
## Why are these virtual worlds valuable?
Interestingly, both virtual worlds discussed above contain their own notion of property rights, their own economies, and their own governance systems and mechanics for users to interact within the virtual world being created. But at the end of the day, _why_ does the capacity to create and interact in a virtual world hold value?
If we think about the importance of land in the real world - there are two driving factors: (1) Social purposes, like living in downtown Manhattan, or (2) Natural resources and geography - for oil rigs, conservation, or farming.
Digitally, the thesis is more powerful if one realizes that (1) more people can flock to a digital parcel of land more easily than to a physical one, and (2) within certain limits, the users of the metaverse can be given the freedom to create resources and valuable additions to their property themselves - once the world has been jump-started.
Altogether, these two theses are speculative, insofar as we are only in the emergent phase of Metaverses in crypto. The main value propositions to date include the following:
* **Visibility and Awareness:** Of services, assets, and products inside of the Metaverse.
* **Engagement:** Between users, brands, and within games.
* **Users:** To market to, gain exposure to an asset, and launch new products or collections.
* **VR / AR future:** To create an environment and world, capable of being connected with the physical world.
The real value underlying digital real estate lies in being able to create a world, and an underlying mechanism design for that world, that is able to capture value as more users begin to live digitally and the ‘digital-physical’ divide becomes more intermingled. The creators of a successful metaverse hold the initial power to decide what kind of world potentially millions of people will inhabit in the future.
These rules apply to:
* How more of a reality can be created.
* What the cost of engaging is.
* What items or objects can enter this world from another dApp.
* Who is allowed to create certain objects inside of the metaverse.
* What prior requirements or reputational elements certain objects require in order to be wielded or created.
* The scarcity of the digital property.
* The public infrastructure built into the privately monetizable map.
* The commodities or elements inside of the world.
* The spectrum of possibilities for creating an item, to be used inside of the metaverse.
_From a high level, creating a virtual world, with digitally scarce property, is about creating a framework for a game, that appreciates in value with the amount of people interested in inhabiting that world and playing by the rules of the set game. The underlying bet of these projects, is that it is more convenient, intuitive, and enjoyable for users to work, socialize, and collaborate in a virtual world, than from the website screens of their desktop._
## Decentraland and Sandbox Are The Tip Of The Iceberg
The most important takeaways from this lecture center on the balance between individual freedom and creativity - to explore, monetize, purchase, and dress up one's identity - and a communally governed virtual world equipped with scarce parcels, activities for users to partake in, and composable integrations of non-fungible assets.
With that being said, however, it is very clear that the metaverse in its current state remains in its infancy, largely because of the complexity required to build a metaverse, coupled with the need for an existing community and/or market of users interested in participating in the services offered by the Metaverse.
|
Stake Wars Week 3 Retro
DEVELOPERS
November 25, 2019
On Nov 18, 8am PST, we had the third Stake Wars call with around 10 people. We again opened registration for validators to the public through our customized form. However, due to insufficient testing of the form, some bugs went unnoticed until Sunday, which may have caused some registrations to be lost. As a result, only five external validators actually signed up for Stake Wars this week at genesis.
The network started as expected during the call. However, very soon we realized that the new state sync code did not work properly: it panics on empty state. After fixing the bug, we restarted the network on Monday afternoon with the same set of validators. However, due to timezone differences, none of the external validators was able to restart their nodes at the same time as us, which caused the finality gadget to stall since fewer than 2/3 of validators were online.
Shortly afterwards, we noticed that it became difficult for nodes to sync to the stake wars network. After some debugging, we realized that the network forked and nodes had difficulty syncing because they incorrectly rejected known headers so that they cannot do state sync. We fixed the bug in header sync and changed adaptive waiting time for block production to reduce forkfulness. The network was restarted again on Tuesday night.
The new network ran normally for a couple of days before Murphy’s law kicked in. This time, the network itself seemed fine, but no transactions could go through, which meant that no one could register an account on our Stake Wars wallet or send a staking transaction. Looking into it, we noticed that no chunks were included in the newly produced blocks.
At the same time, all our validators had been kicked out by that point, making it harder to debug. As a workaround, we figured out a way to impersonate the block producing node by customizing the node’s code to see why chunks were not included, which soon led to the discovery of a simple but hard-to-notice bug in our block production logic.
Here is a summary of the issues we found from running stake wars this week:
State sync crashes on empty state. The new state sync code incorrectly crashed on receiving an empty state, which is fixed in https://github.com/nearprotocol/nearcore/pull/1716.
Incorrect rejection of old block headers during header sync, which is fixed in https://github.com/nearprotocol/nearcore/pull/1740.
In block production, chunks might be incorrectly removed if not all chunks are ready when the block production function is invoked, which is fixed in https://github.com/nearprotocol/nearcore/pull/1765.
Overall, this week’s experience indicates that our stability has regressed, mostly due to the new code that was merged last week. Although it is hard to stabilize our code while still developing new features, we will focus on improving test coverage for more complex setups to make sure that new code doesn’t break the system. For example, we now have a nightly test infrastructure set up so that we can run expensive multi-node integration tests nightly to test the stability of the system as a whole. Our goal is to see a steady increase in the stability of the network over the next few weeks.
```rust
use std::collections::HashMap;

use near_sdk::json_types::U128;
use near_sdk::{env, ext_contract, log, near_bindgen, AccountId, Promise, PromiseError};

// Validator interface, for cross-contract calls
#[ext_contract(ext_amm_contract)]
trait ExternalAmmContract {
    fn get_deposits(&self, account_id: AccountId) -> Promise;
}
// Implement the contract structure (the `Contract` struct is assumed to hold an `amm_contract: AccountId` field)
#[near_bindgen]
impl Contract {
#[private] // Public - but only callable by env::current_account_id()
pub fn external_get_deposits_callback(&self, #[callback_result] call_result: Result<HashMap<AccountId, U128>, PromiseError>) -> Option<HashMap<AccountId, U128>> {
// Check if the promise succeeded
if call_result.is_err() {
log!("There was an error contacting external contract");
return None;
}
// Return the pools data
let deposits_data = call_result.unwrap();
return Some(deposits_data);
}
pub fn get_contract_deposits(&self) -> Promise {
let promise = ext_amm_contract::ext(self.amm_contract.clone())
.get_deposits(env::current_account_id());
    return promise.then( // Chain a callback to external_get_deposits_callback
Self::ext(env::current_account_id())
.external_get_deposits_callback()
)
}
}
```
|
Stake Wars Episode II
DEVELOPERS
May 19, 2020
Return of the Validators
NEAR’s MainNet recently launched into its first phase, called “POA” (see full roadmap). This means that a small handful of validating nodes are currently being run by the core team. In order to progress to the next phase, “MainNet: Restricted”, the operation of the network will be handed off to a large group of node operators called validators.
The goal of Stake Wars: Episode II is to onboard those validators, test the stability of the system and begin introducing some of the unique aspects of NEAR’s delegation system in preparation for the next phase of MainNet itself.
Stake Wars: Episode I occurred in late 2019 as a way of stress-testing the network. It helped to expose key areas for stability improvements and drove improvement of release processes. We hope that Episode II will be similarly helpful for enhancing the stability of the system but, additionally, it is about bringing new and old validators up to speed so they can begin staking immediately at the launch of MainNet: Restricted.
This post will discuss the unique features of validation and delegation on NEAR, show how Stake Wars: Episode II will work and describe the rewards for successful participation.
Contract-Based Delegation
One of the key features that NEAR offers which differentiates it from many Proof-of-Stake networks is contract-based delegation.
“Delegation” is when one token holder lends their tokens to a validating node to use on the delegator’s behalf. This is important because not everyone wants to — or is able to — run a full validating node. While the minimum requirements for running a validating node are not technically challenging, the operational efforts are considerable: ensuring that updates are deployed at the same time as other validators, and building robust infrastructure optimized for uptime and security. Since these requirements can demand a professional level of oversight and expense, more casual token holders generally prefer not to do this.
Other protocols typically implement delegation at the protocol level, meaning that it is exactly the same across all validators. Validators generally compete with each other purely based on what price they offer — for example, if the protocol is providing a 5% reward for validation, these validators may pass 4% of that on to people who delegate to them and keep the 1% for themselves. This generally results in a price war where the only differentiation between validators is what return they offer and reputational factors like how many people already delegate to them. Also, custodial centralized exchanges frequently take a large fraction of the delegation market because they can offer additional financial instruments that regular validators can’t.
Because delegation in NEAR is done through smart contracts, it is far more flexible. Each validator could theoretically produce its own delegation contract or configure the parameters of a widely trusted contract to offer a broad range of services. For example, one validator might offer delegators a better return if they lock up their capital for a long period of time while another might offer better returns for larger size delegations.
This contract-based delegation makes it easier to pipe together Open Finance components, so you can imagine contracts which dynamically allocate delegators’ funds to lending protocols or validators depending on the prevailing interest rates and return in the market. Essentially, staking becomes a core component of the Open Finance ecosystem while still providing security to the system as intended.
How Delegation Works
Delegation on NEAR is done by transferring funds to the validator’s account via a secure, trustless smart contract. There is a reference implementation of such a smart contract available now on Github, which we encourage you to explore to better understand the mechanics of delegation.
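The core bookkeeping of such a delegation contract can be sketched with a small, dependency-free model. This is a hypothetical simplification (the `StakingPool` type, the flat fee in basis points, and the account names are illustrative only; the real reference contract also handles share-based accounting, unstaking delays, and epoch mechanics):

```rust
use std::collections::HashMap;

/// A highly simplified model of a staking-pool delegation contract.
/// Hypothetical sketch, not the reference implementation.
struct StakingPool {
    /// Deposited balance per delegator account.
    deposits: HashMap<String, u128>,
    /// Fraction of rewards kept by the validator, in basis points.
    fee_bps: u128,
}

impl StakingPool {
    fn new(fee_bps: u128) -> Self {
        Self { deposits: HashMap::new(), fee_bps }
    }

    /// A delegator transfers tokens into the pool.
    fn deposit(&mut self, account: &str, amount: u128) {
        *self.deposits.entry(account.to_string()).or_insert(0) += amount;
    }

    /// Total stake the validator can bid with (the validator's own
    /// stake is modeled as just another deposit).
    fn total_staked(&self) -> u128 {
        self.deposits.values().sum()
    }

    /// Distribute an epoch reward pro rata, after the validator fee.
    fn distribute_reward(&mut self, reward: u128) {
        let fee = reward * self.fee_bps / 10_000;
        let to_delegators = reward - fee;
        let total = self.total_staked();
        // Snapshot balances so shares are computed on pre-reward stakes.
        let snapshot: Vec<(String, u128)> =
            self.deposits.iter().map(|(a, b)| (a.clone(), *b)).collect();
        for (account, balance) in snapshot {
            let share = to_delegators * balance / total;
            *self.deposits.get_mut(&account).unwrap() += share;
        }
    }
}

fn main() {
    let mut pool = StakingPool::new(1_000); // validator keeps 10% of rewards
    pool.deposit("alice.near", 750);
    pool.deposit("bob.near", 250);
    assert_eq!(pool.total_staked(), 1_000);

    pool.distribute_reward(100); // 10 to the validator, 90 shared 3:1
    assert_eq!(pool.deposits["alice.near"], 817); // 750 + floor(90 * 750 / 1000)
    assert_eq!(pool.deposits["bob.near"], 272);   // 250 + floor(90 * 250 / 1000)
}
```

Even this toy version shows why contract-based delegation is flexible: the fee and reward-sharing rules live in ordinary contract code, so each validator can vary them rather than being locked to one protocol-wide formula.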
Over time, it is expected that validators will roll out more features for contracts like this, for example, tax optimization for different regions, staking tokens to provide better liquidity for validators and delegators, or any of the previously-described return optimization strategies.
Delegation during Stake Wars: Episode II will occur through direct interaction with these contracts via the command line tools but, in the future, explorers and wallets will support a user interface on top of this tooling to make it easy for non-technical users to participate (check out the code example and video walkthrough if you want to build this into your wallet/explorer/tool).
Validator Participation
Validators are important participants of the NEAR network. As mentioned, they provide the core operation of the network, ongoing security guarantees and participation in technical governance. They run the nodes that generate new blocks, and are instrumental in rolling out technical upgrades and security patches across their systems, coordinating with the NEAR core development team, and other validators. Their voice is heard through the direct aspects of technical governance (upgrades) as well as participation in voting processes which support other areas of network governance.
During the rollout of MainNet, validators play a particularly important role because their voting power will determine when transfers are unlocked and MainNet officially enters its final community-governed stage.
While some validators may participate with only their own stake (for example if they have a sufficient allocation of tokens to begin with), many are professionals who rely on the support of delegators to source enough stake and participate early in validation. Thus, in order to earn the trust of prospective delegators, it is important that such validators are visible and vocal in the community.
In the early days of MainNet, the minimum stake required to become a validator is fairly high because the total number of “seats” available for validation is determined by the number of shards the network has been broken into. NEAR initially contains a single shard with 100 seats. As the usage of NEAR grows, the number of shards will grow as well and, with it, the number of seats will grow too.
With 100 validator seats available during the initial rollout of MainNet, it is expected that 1-4M NEAR tokens will be required to take one seat on MainNet. This is determined by the overall distribution of tokens staked – see more details in the Economics blog post. To be clear, the tokens that a validator bids in for validation are the sum of their own tokens and those tokens which have been delegated to them, thus delegation will be quite important for many validators to achieve sufficient balances to participate in running nodes.
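A back-of-the-envelope version of this relation can be written down directly. This is only a rough illustration with hypothetical numbers: the actual protocol derives the seat price from the full distribution of staking proposals, not a simple average.

```rust
/// Rough approximation of the per-seat stake requirement.
/// Hypothetical simplification: the real seat price depends on the
/// distribution of individual staking proposals.
fn approx_seat_price(total_staked: u128, seats: u128) -> u128 {
    total_staked / seats
}

fn main() {
    // If roughly 200M NEAR were staked across 100 seats, a seat would
    // cost about 2M NEAR, inside the 1-4M range mentioned above.
    let price = approx_seat_price(200_000_000, 100);
    assert_eq!(price, 2_000_000);
    println!("approximate seat price: {} NEAR", price);
}
```

This also makes the role of delegation concrete: a validator with 0.5M NEAR of their own would need roughly 1.5M NEAR of delegated stake to reach such a seat price.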
In upcoming months, more seats will become available as the number of shards grows and a security feature called “hidden validators” will be released. This will provide more opportunities for validators to participate in the network operations with lower capital requirements.
Stake Wars will take place on the BetaNet network and not MainNet, so it uses the native tokens of the BetaNet test network, which are allocated to participants upon registration. Validators who didn’t already submit their application can create their account, set up their node and begin participating in validation. This initiative is already quite popular: there were 180+ applications from the previous phase of Stake Wars even before this announcement, with over 60 active validators and 100 nodes currently running.
Initially, 75k BetaNet $NEAR (NOT MainNet tokens) will be provided to the new applicants of Stake Wars. Depending on how popular Stake Wars turns out to be, this amount may become insufficient to earn one seat, so delegation could become increasingly important. Additionally, to make room for interested parties, we will work to span Stake Wars: Episode II over multiple test networks (see below for details).
Path to Community Governed MainNet
The overall goal is for validators and token holders to take over technical governance of MainNet (see the MainNet Roadmap for more details). To achieve this, Stake Wars is an opportunity to identify the best validators, have them transition from BetaNet to TestNet and then MainNet, and provide them with the ability to attract delegations.
Every validator will go through these phases:
Join Stake Wars on BetaNet
Successfully complete BetaNet Validator challenges
Get promoted to TestNet
Successfully complete TestNet Validator challenges
Start staking and accepting delegations on MainNet
Vote for unlocking transfers
The teams who will transition from one network to the other will be asked to unstake their tokens, and focus on the new network. This process will both make room for new Stake Wars entries on BetaNet, and will progressively increase the number of validators running on MainNet.
Note: every reward will pass case-by-case evaluation and KYC controls, to discourage automated scripting intended to bias metrics, as well as participants not interested in running a node on MainNet.
For more details on the differences between BetaNet and TestNet, check out the last section of the Roadmap to MainNet blog post.
Judgement Criteria and Rewards
As a validator, one of the main criteria for success is running secure and live infrastructure. This means setting up infrastructure for updating software, including a hot-swap setup to maintain uptime during updates (NEAR has the unique ability to atomically switch staking from one node to another). Additionally, it’s about participating in discussions, helping other community members and attracting more delegations.
A new leaderboard will rank validators based on:
Uptime
Capacity to update the node and closely follow latest releases
Correct deployment of the delegation contract
Involvement in community discussions and helping other members
Building open source tools and other code contributions
The leaderboard will be published in the Stake Wars Repo on Github; some parameters will also be shared, if requested, during 1:1 conversations and reviews.
The primary reward for participating successfully in Stake Wars: Episode II is that top operators will be onboarded as the initial set of validators of MainNet. This makes them the initial stewards of the network and leaders in the community, which is very helpful for attracting the delegation of other token holders from across the ecosystem.
Additionally, because “MainNet Restricted” doesn’t have inflation yet, to cover costs and motivate validators to get into this set, such validators will receive 10,000 $NEAR a month.
Stake Wars: Episode II is a dynamic program that will evolve over time. It will introduce increasingly difficult challenges on BetaNet and will progressively migrate to TestNet. Activities for validators will include hard forks, unplanned restarts, deploying new node releases, updating delegation contracts, and following best practices on their infrastructure. On a bi-weekly basis, new challenges will be announced in the community channels, and participating will unlock additional rewards, including the opportunity to be officially invited to join TestNet and then MainNet.
These challenges will unlock additional rewards: NEAR Foundation allocated up to 1 Million NEAR tokens in total for the participants of these initiatives.
Validator Advisory Board
We are also launching the Validator Advisory Board, a selected group of professional validators who, over time, will become key voices in the technical governance of the community.
These validators are engaged in group discussions, product advice and feedback, testing beta releases, and suggesting features that support other validators, and the ecosystem at large.
The initial members of this board were the first participants who started running validator nodes on BetaNet, helping the NEAR Collective with technical details of validation and supporting fellow validators with setting everything up. Going forward, this group will stay at the forefront of NEAR’s advances in staking, providing product feedback and building tooling.
Initial members of this Validator Advisory Board are: Bison Trails, Buildlinks, Figment Networks, HashQuark, Sparkpool and Staked.
There are still a few vacant spots on the Board. If you are a professional validator participating in Stake Wars and want to join this group, reach out to us.
How to join Stake Wars: Episode II
There are a few steps to follow:
Open the initiative’s official page at nearpages.wpengine.com/stakewars.
If you haven’t already, you must sign up for NEAR’s BetaNet Wallet at this link. It will allocate you the few test tokens necessary to deploy a delegation contract.
If you haven’t already, you have to enroll in the Stake Wars program via this form, to subscribe to our technical bulletin and receive information about new releases.
Follow the indications on Github, at the address https://github.com/nearprotocol/stakewars, to deploy your own node and add it to the VALIDATORS.md list.
Deploy the staking pool smart contract, to enable delegation on your node.
Once the contract is deployed, you will receive extra tokens: unlike in past weeks, this time the tokens will come in the form of delegation, not tokens available in your wallet.
Join the official community channels on Discord or Telegram and follow any weekly updates or actions required (such as updating your node to a new release).
All node operators who are already running their node on BetaNet will only have to deploy the Staking Pool Contract and update the VALIDATORS.md file on Github accordingly.
NEAR Stake Wars is waiting for you: start your validator journey today at nearpages.wpengine.com/stakewars.
Getting Involved beyond Stake Wars
Even if you aren’t planning to participate in Stake Wars, there are a number of things you might be interested in:
Tokens: Some validators will use their own stake and others will receive delegation. If you would like to get a stake in the network via acquiring tokens, be sure to sign up for the token list to hear news of any opportunities to do so.
Developers: Check out the docs quickstart for information about how to get started building on TestNet and ask questions in the chat. If you are ready to deploy to MainNet, register for the Developer Program.
Interested Contributors: If you run your own community and we can help out or if you are interested in helping out directly, learn about our community programs or ask questions in the chat.
Startup Founders: NEAR is a supporter of the Open Web Collective, a protocol-agnostic community of startup founders who are focused on building on the decentralized web. They provide education, networking and support during this process. Learn more and join at https://openwebcollective.com.
Business Leaders: If you are curious about how to integrate with NEAR or whether it might be a good fit for your business needs, reach out to [email protected].
# Light Client
The state of the light client is defined by:
1. `BlockHeaderInnerLiteView` for the current head (which contains `height`, `epoch_id`, `next_epoch_id`, `prev_state_root`, `outcome_root`, `timestamp`, the hash of the block producers set for the next epoch `next_bp_hash`, and the merkle root of all the block hashes `block_merkle_root`);
2. The set of block producers for the current and next epochs.
The `epoch_id` refers to the epoch to which the block that is the current known head belongs, and `next_epoch_id` is the epoch that will follow.
Light clients operate by periodically fetching instances of `LightClientBlockView` via a particular RPC end-point described [below](#rpc-end-points).
The light client doesn't need to receive a `LightClientBlockView` for every block. Having the `LightClientBlockView` for block `B` is sufficient to verify any statement about state or outcomes in any block in the ancestry of `B` (including `B` itself). In particular, having the `LightClientBlockView` for the head is sufficient to locally verify any statement about state or outcomes in any block on the canonical chain.
However, to verify the validity of a particular `LightClientBlockView`, the light client must have verified a `LightClientBlockView` for at least one block in the preceding epoch; thus, to sync to the head, the light client has to fetch and verify one `LightClientBlockView` per epoch passed.
## Validating Light Client Block Views
```rust
pub enum ApprovalInner {
    Endorsement(CryptoHash),
    Skip(BlockHeight)
}

pub struct ValidatorStakeView {
    pub account_id: AccountId,
    pub public_key: PublicKey,
    pub stake: Balance,
}

pub struct BlockHeaderInnerLiteView {
    pub height: BlockHeight,
    pub epoch_id: CryptoHash,
    pub next_epoch_id: CryptoHash,
    pub prev_state_root: CryptoHash,
    pub outcome_root: CryptoHash,
    pub timestamp: u64,
    pub next_bp_hash: CryptoHash,
    pub block_merkle_root: CryptoHash,
}

pub struct LightClientBlockLiteView {
    pub prev_block_hash: CryptoHash,
    pub inner_rest_hash: CryptoHash,
    pub inner_lite: BlockHeaderInnerLiteView,
}

pub struct LightClientBlockView {
    pub prev_block_hash: CryptoHash,
    pub next_block_inner_hash: CryptoHash,
    pub inner_lite: BlockHeaderInnerLiteView,
    pub inner_rest_hash: CryptoHash,
    pub next_bps: Option<Vec<ValidatorStakeView>>,
    pub approvals_after_next: Vec<Option<Signature>>,
}
```
Recall that the hash of the block is
```rust
sha256(concat(
    sha256(concat(
        sha256(borsh(inner_lite)),
        sha256(borsh(inner_rest))
    )),
    prev_hash
))
```
The fields `prev_block_hash`, `next_block_inner_hash` and `inner_rest_hash` are used to reconstruct the hashes of the current and next block, and the approvals that will be signed, in the following way (where `block_view` is an instance of `LightClientBlockView`):
```python
def reconstruct_light_client_block_view_fields(block_view):
    current_block_hash = sha256(concat(
        sha256(concat(
            sha256(borsh(block_view.inner_lite)),
            block_view.inner_rest_hash,
        )),
        block_view.prev_block_hash
    ))

    next_block_hash = sha256(concat(
        block_view.next_block_inner_hash,
        current_block_hash
    ))

    approval_message = concat(
        borsh(ApprovalInner::Endorsement(next_block_hash)),
        little_endian(block_view.inner_lite.height + 2)
    )

    return (current_block_hash, next_block_hash, approval_message)
```
The light client updates its head with the information from `LightClientBlockView` iff:
1. The height of the block is higher than the height of the current head;
2. The epoch of the block is equal to the `epoch_id` or `next_epoch_id` known for the current head;
3. If the epoch of the block is equal to the `next_epoch_id` of the head, then `next_bps` is not `None`;
4. `approvals_after_next` contain valid signatures on `approval_message` from the block producers of the corresponding epoch (see next section);
5. The signatures present in `approvals_after_next` correspond to more than 2/3 of the total stake (see next section);
6. If `next_bps` is not none, `sha256(borsh(next_bps))` corresponds to the `next_bp_hash` in `inner_lite`.
```python
def validate_and_update_head(block_view):
    global head
    global epoch_block_producers_map

    current_block_hash, next_block_hash, approval_message = \
        reconstruct_light_client_block_view_fields(block_view)

    # (1)
    if block_view.inner_lite.height <= head.inner_lite.height:
        return False

    # (2)
    if block_view.inner_lite.epoch_id not in [head.inner_lite.epoch_id, head.inner_lite.next_epoch_id]:
        return False

    # (3)
    if block_view.inner_lite.epoch_id == head.inner_lite.next_epoch_id and block_view.next_bps is None:
        return False

    # (4) and (5)
    total_stake = 0
    approved_stake = 0

    epoch_block_producers = epoch_block_producers_map[block_view.inner_lite.epoch_id]

    for maybe_signature, block_producer in zip(block_view.approvals_after_next, epoch_block_producers):
        total_stake += block_producer.stake

        if maybe_signature is None:
            continue

        approved_stake += block_producer.stake

        if not verify_signature(
            public_key=block_producer.public_key,
            signature=maybe_signature,
            message=approval_message
        ):
            return False

    threshold = total_stake * 2 // 3
    if approved_stake <= threshold:
        return False

    # (6)
    if block_view.next_bps is not None:
        if sha256(borsh(block_view.next_bps)) != block_view.inner_lite.next_bp_hash:
            return False

        epoch_block_producers_map[block_view.inner_lite.next_epoch_id] = block_view.next_bps

    head = block_view
```
## Signature verification
To simplify the protocol we require that the next block and the block after next are both in the same epoch as the block that `LightClientBlockView` corresponds to. It is guaranteed that each epoch has at least one final block for which the next two blocks that build on top of it are in the same epoch.
By construction, by the time the `LightClientBlockView` is being validated, the block producer set for its epoch is known. Specifically, when the first light client block view of the previous epoch was processed, due to (3) above `next_bps` was not `None`, and due to (6) above it corresponded to the `next_bp_hash` in the block header.
The sum of all the stakes of `next_bps` in the previous epoch is `total_stake` referred to in (5) above.
The signatures in the `LightClientBlockView::approvals_after_next` are signatures on `approval_message`. The `i`-th signature in `approvals_after_next`, if present, must validate against the `i`-th public key in `next_bps` from the previous epoch. `approvals_after_next` can contain fewer elements than `next_bps` in the previous epoch.
`approvals_after_next` can also contain more signatures than the length of `next_bps` in the previous epoch. This is due to the fact that, as per [consensus specification](./Consensus.md), the last blocks in each epoch contain signatures from both the block producers of the current epoch, and the next epoch. The trailing signatures can be safely ignored by the light client implementation.
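The index-wise matching described above can be sketched as follows. This is a minimal illustration rather than the reference implementation: `verify_signature` is a placeholder for the actual signature check, and block producers are modeled as plain `(public_key, stake)` pairs.

```python
def check_approvals(approvals_after_next, next_bps_prev_epoch,
                    approval_message, verify_signature):
    # next_bps_prev_epoch: (public_key, stake) pairs from the previous
    # epoch's next_bps. zip() truncates, so trailing signatures beyond
    # len(next_bps_prev_epoch) are safely ignored, and approvals_after_next
    # may also be shorter than next_bps_prev_epoch.
    total_stake = sum(stake for _, stake in next_bps_prev_epoch)
    approved_stake = 0
    for signature, (public_key, stake) in zip(approvals_after_next,
                                              next_bps_prev_epoch):
        if signature is None:
            continue  # missing approval: contributes no stake
        if not verify_signature(public_key, signature, approval_message):
            return False  # a present but invalid signature rejects the block
        approved_stake += stake
    return approved_stake > total_stake * 2 // 3
```

Note how the `zip` truncation implements both relaxations at once: shorter and longer `approvals_after_next` lists are handled without extra code.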
## Proof Verification
[Transaction Outcome Proof]: #transaction-outcome-proofs
### Transaction Outcome Proofs
To verify that a transaction or receipt happened on chain, a light client can request a proof through RPC by providing an `id`, which is of type
```rust
pub enum TransactionOrReceiptId {
    Transaction { hash: CryptoHash, sender: AccountId },
    Receipt { id: CryptoHash, receiver: AccountId },
}
```
and the block hash of the light client head. The RPC will return the following struct
```rust
pub struct RpcLightClientExecutionProofResponse {
    /// Proof of execution outcome
    pub outcome_proof: ExecutionOutcomeWithIdView,
    /// Proof of shard execution outcome root
    pub outcome_root_proof: MerklePath,
    /// A light weight representation of block that contains the outcome root
    pub block_header_lite: LightClientBlockLiteView,
    /// Proof of the existence of the block in the block merkle tree,
    /// which consists of blocks up to the light client head
    pub block_proof: MerklePath,
}
```
which includes everything that a light client needs to prove the execution outcome of the given transaction or receipt.
Here `ExecutionOutcomeWithIdView` is
```rust
pub struct ExecutionOutcomeWithIdView {
    /// Proof of the execution outcome
    pub proof: MerklePath,
    /// Block hash of the block that contains the outcome root
    pub block_hash: CryptoHash,
    /// Id of the execution (transaction or receipt)
    pub id: CryptoHash,
    /// The actual outcome
    pub outcome: ExecutionOutcomeView,
}
```
The proof verification can be broken down into two steps: execution outcome root verification and block merkle root verification.
#### Execution Outcome Root Verification
If the outcome root of the transaction or receipt is included in block `H`, then `outcome_proof` includes the block hash
of `H`, as well as the merkle proof of the execution outcome in its given shard. The outcome root in `H` can be
reconstructed by
```python
shard_outcome_root = compute_root(sha256(borsh(execution_outcome)), outcome_proof.proof)
block_outcome_root = compute_root(sha256(borsh(shard_outcome_root)), outcome_root_proof)
```
This outcome root must match the outcome root in `block_header_lite.inner_lite`.
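The `compute_root` helper used above is not spelled out in this document; the following is a minimal sketch, assuming each `MerklePath` item carries the sibling hash together with which side (`Left`/`Right`) that sibling sits on:

```python
from hashlib import sha256 as _sha256

def sha256(data: bytes) -> bytes:
    return _sha256(data).digest()

def compute_root(item_hash: bytes, merkle_path) -> bytes:
    # merkle_path: iterable of (sibling_hash, direction) pairs, leaf first.
    # At each level, concatenate in the order dictated by the sibling's side
    # and hash the pair to move one level up the tree.
    hash_so_far = item_hash
    for sibling_hash, direction in merkle_path:
        if direction == "Left":
            hash_so_far = sha256(sibling_hash + hash_so_far)
        else:
            hash_so_far = sha256(hash_so_far + sibling_hash)
    return hash_so_far
```

An empty path means the item is the root itself; a path of length `n` proves membership in a tree of depth `n`.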
#### Block Merkle Root Verification
Recall that the block hash can be computed from `LightClientBlockLiteView` by
```rust
sha256(concat(
    sha256(concat(
        sha256(borsh(inner_lite)),
        sha256(borsh(inner_rest))
    )),
    prev_hash
))
```
The expected block merkle root can be computed by
```python
block_hash = compute_block_hash(block_header_lite)
block_merkle_root = compute_root(block_hash, block_proof)
```
which must match the block merkle root in the light client block of the light client head.
## RPC end-points
### Light Client Block
There's a single end-point that full nodes expose that light clients can use to fetch new `LightClientBlockView`s:
```
http post http://127.0.0.1:3030/ jsonrpc=2.0 method=next_light_client_block params:="[<last known hash>]" id="dontcare"
```
The RPC returns the `LightClientBlock` for the block as far into the future from the last known hash as possible for the light client to still accept it. Specifically, it either returns the last final block of the next epoch, or the last final known block. If there's no newer final block than the one the light client knows about, the RPC returns an empty result.
A standalone light client would bootstrap by requesting next blocks until it receives an empty result, and then periodically request the next light client block.
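The bootstrap loop just described could look like the sketch below, with dependency-injected helpers: `fetch_next_block` stands in for the `next_light_client_block` RPC call and is assumed to return a `(block_hash, block_view)` pair, or `None` once there is nothing newer, while `validate_and_update_head` is the routine defined earlier.

```python
def sync_to_head(last_known_hash, fetch_next_block, validate_and_update_head):
    # Keep requesting the next light client block until the node reports
    # nothing newer; validate every block before advancing the head.
    while True:
        result = fetch_next_block(last_known_hash)
        if result is None:
            return last_known_hash  # fully synced; poll again later
        block_hash, block_view = result
        if not validate_and_update_head(block_view):
            raise ValueError("received an invalid LightClientBlockView")
        last_known_hash = block_hash
```

Because each fetched block skips to the end of an epoch (or to the last final block), the number of iterations is proportional to the number of epochs passed since the last sync, not the number of blocks.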
A smart contract-based light client that enables a bridge to NEAR on a different blockchain naturally cannot request blocks itself. Instead, external oracles query the next light client block from one of the full nodes and submit it to the light client smart contract. The smart contract-based light client performs the same checks described above, so the oracle doesn't need to be trusted.
### Light Client Proof
The following RPC end-point returns the `RpcLightClientExecutionProofResponse` that a light client needs for verifying execution outcomes.
For a transaction execution outcome, the RPC is
```
http post http://127.0.0.1:3030/ jsonrpc=2.0 method=EXPERIMENTAL_light_client_proof params:='{"type": "transaction", "transaction_hash": <transaction_hash>, "sender_id": <sender_id>, "light_client_head": <light_client_head>}' id="dontcare"
```
For a receipt execution outcome, the RPC is
```
http post http://127.0.0.1:3030/ jsonrpc=2.0 method=EXPERIMENTAL_light_client_proof params:='{"type": "receipt", "receipt_id": <receipt_id>, "receiver_id": <receiver_id>, "light_client_head": <light_client_head>}' id="dontcare"
```
How Unrealistic is Bribing Frequently Rotated Validators?
DEVELOPERS
October 24, 2018
Many proposed blockchain protocols today use some sort of committee to propose and validate blocks. Committees are at the core of Delegated Proof of Stake, and no sharding protocol is feasible without having only a subset of nodes operate each shard.
The majority of DPoS-based protocols go as far as to claim that once a block is finalized by the committee, it is irreversible.
When analyzing the security of such committees, it is often assumed that all the participants in the entire population can be separated into honest and malicious in advance, and that honest participants will remain honest after they become validators. For example, such an assumption is at the core of the security analysis in the latest MultiVAC sharding yellowpaper.
In Ethereum Sharding FAQ multiple security models are analyzed, including a Bribing Attacker model. In this model, an attacker has a budget which it can use to arbitrarily corrupt validators in the network. According to the FAQ, some people claim the model is unrealistically adversarial.
In this short write-up, I will do some analysis on how unrealistically adversarial the bribing attacker model is, as well as how easy this model allows an adversary to corrupt a shard. Let’s consider such an adversary, with a budget proportional to the stakes in a single shard (which in a system with hundreds of shards is less than a percent of the total stake) that wants to compromise one shard and see how they go about it.
The Approach
Let’s start with the simplest approach. We will assume the adversary has a good media channel to reach out to many participants of the protocol, and also for simplicity assume the stake of each participant is equal and is exactly 32 tokens (the constant Ethereum 2.0 uses).
The adversary then uses the media channel to advertise that in 15 minutes they will pay 40 tokens for the private key of any validator that will at that time be assigned to shard #7. As long as a validator trusts the adversary to actually pay the promised 40 tokens, it makes a lot of sense for them to take advantage of the offer. The assumption of trust, however, is rather strong.
Reducing Trust
Unless some smart Algorand-style way of assigning validators is used, the public keys of the validators are known in advance. The adversary can create a smart contract in a chain that is different from the one they are trying to corrupt (without loss of generality, let’s say the chain being corrupted is NEAR, and the chain the smart contract is published on is Ethereum). In its simplest form, the smart contract includes all the public keys of the validators that will be assigned to the shard to be compromised at the time the adversary wants to attack it, and the smart contract would immediately send 40 tokens to any user who submits a private key that matches any of the public keys in the contract.
While this makes the transaction involve no trust, it has a drawback that the private key will become known to everybody in the system, not just the attacker.
To get around it, we can improve the smart contract. The adversary now publishes their own public key pka, and any participant that wants to submit their private key for 40 tokens encrypts their private key with pka before submission. The contract then enters a 15-minute challenge period during which the adversary can submit their own private key ska to decrypt the message submitted by the participant and show that it is not encrypting a valid private key of a validator. If no challenge is submitted within 15 minutes, 40 tokens are released.
This way the private key of the validator is only known to the validator and the adversary, and the validator is guaranteed to get their 40 tokens. At this point no reasonable validator who has no stake in the platform besides the 32 tokens they staked will pass up the opportunity to effectively get 8 tokens for free.
Footnote: A few further improvements are possible that would reduce spam, such as the adversary posting some message a validator needs to encrypt with their private key so that the smart contract can verify the validator has it, or validators staking some tokens that are released back to them unless the adversary challenges their submission.
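The escrow mechanics described above can be sketched as a toy state machine. This is purely illustrative Python rather than an actual on-chain contract: `decrypt` and `is_valid_validator_key` are placeholders for the real cryptography, and the 40-token bounty and 15-minute challenge window are the numbers from the text.

```python
import time

CHALLENGE_PERIOD = 15 * 60  # seconds
BOUNTY = 40                 # tokens promised per key

class KeyPurchaseEscrow:
    def __init__(self, validator_pks):
        self.validator_pks = set(validator_pks)
        self.submissions = {}  # submitter -> (ciphertext, submitted_at)

    def submit(self, submitter, encrypted_key, now=None):
        # A validator submits their private key encrypted with the
        # adversary's public key pka.
        submitted_at = now if now is not None else time.time()
        self.submissions[submitter] = (encrypted_key, submitted_at)

    def challenge(self, submitter, decrypt, is_valid_validator_key):
        # The adversary reveals ska to decrypt the submission and prove it
        # does not encrypt a valid validator key; spam earns no payout.
        ciphertext, _ = self.submissions[submitter]
        if not is_valid_validator_key(decrypt(ciphertext), self.validator_pks):
            del self.submissions[submitter]
            return True
        return False

    def claim(self, submitter, now=None):
        # After an unchallenged 15-minute window the bounty is released.
        if submitter not in self.submissions:
            return 0
        _, submitted_at = self.submissions[submitter]
        if (now if now is not None else time.time()) - submitted_at < CHALLENGE_PERIOD:
            return 0
        del self.submissions[submitter]
        return BOUNTY
```

The key property is that neither side must trust the other: the validator is paid unless the adversary publicly proves the submission was bogus, and the adversary never pays for garbage.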
Further Analysis
The approach above makes two major assumptions: that there exists a media channel that can reach out to a large percentage of the validators, and that many validators are easily corruptible. While both assumptions are somewhat strong, neither is completely unreasonable.
It is also worth noting that once a validator submits a private key to the adversary, the validator themselves still has access to the key. At this point, they can use it to exercise some malicious slashable behavior that somehow benefits them (since their stake at this point is doomed to be slashed no matter what, and the payment for the private key was already received). While a valid argument, it is worth pointing out that if the adversary’s goal is to compromise the shard, then a large set of validators simultaneously acting maliciously is not really against their interests.
Another important observation is that by advertising the exact time and shard that will be attacked, the adversary attracts significant attention to that shard at that time, so whatever malicious intent they plan to exercise will be immediately slashed with a proper system design, possibly before any benefit from such intent can be realized.
Outro
I will be travelling to Prague for DevCon4 next week. Many founders and engineers from recognized sharded blockchain projects such as Cosmos, Ethereum and Polkadot will be there. The primary purpose of my trip is to figure out their thinking on adaptive and bribing adversaries and on defending against them in sharded blockchains. Any good insights I accumulate will be posted in our blog.
I write on different topics related to building distributed blockchains and work full time on building NEAR Protocol, a decentralized distributed applications platform. If you are interested in what we build and write, follow our progress:
NEAR Twitter — https://twitter.com/nearprotocol,
Discord — https://discord.gg/kRY6AFp, we have open conversations on tech, governance, and economics of blockchain on our discord.
https://upscri.be/633436/
---
sidebar_label: NFT Indexer
---
# Building an NFT indexer
:::note Source code for the tutorial
[`near-examples/near-lake-nft-indexer`](https://github.com/near-examples/near-lake-nft-indexer): source code for this tutorial
:::
## The End
This tutorial ends with a working NFT indexer built on top of [NEAR Lake Framework JS](/concepts/advanced/near-lake-framework). The indexer watches for `nft_mint` [Events](https://nomicon.io/Standards/EventsFormat) and prints some relevant data:
- `receiptId` of the [Receipt](/build/data-infrastructure/lake-data-structures/receipt) where the mint has happened
- Marketplace
- NFT owner account name
- Links to the NFTs on the marketplaces
The final source code is available on GitHub: [`near-examples/near-lake-nft-indexer`](https://github.com/near-examples/near-lake-nft-indexer)
## Motivation
NEAR Protocol introduced a nice feature: [Events](https://nomicon.io/Standards/EventsFormat). Events allow a contract developer to add standardized logs to the [`ExecutionOutcomes`](/build/data-infrastructure/lake-data-structures/execution-outcome), thus allowing themselves or other developers to read those logs in a more convenient manner via API or indexers.
The Events have a field `standard` which aligns with NEPs. In this tutorial we'll be talking about [NEP-171 Non-Fungible Token standard](https://github.com/near/NEPs/discussions/171).
In this tutorial our goal is to show you how you can "listen" to the Events contracts emit and how you can benefit from them.
As an example we will build an indexer that watches all the NFTs minted following the [NEP-171 Events](https://nomicon.io/Standards/Tokens/NonFungibleToken/Event) standard, assuming we're collectors who don't want to miss a thing. Our indexer should notice every single NFT minted and give us a basic set of data, like in which Receipt it was minted, and show us the link to a marketplace (we'll cover [Paras](https://paras.id) and [Mintbase](https://mintbase.io) in our example).
We will use the JS version of [NEAR Lake Framework](/concepts/advanced/near-lake-framework) in this tutorial. The concept is the same for Rust, but we want to show more people that it's not that complex to build your own indexer.
## Preparation
:::danger Credentials
Please ensure you have the credentials set up as described on the [Credentials](../running-near-lake/credentials.md) page. Otherwise you won't be able to get the code working.
:::
You will need:
- `node` [installed and configured](https://nodejs.org/en/download/)
Let's create our project folder
```bash
mkdir lake-nft-indexer && cd lake-nft-indexer
```
Let's add the `package.json`
```json title=package.json
{
  "name": "lake-nft-indexer",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "tsc && node index.js"
  },
  "dependencies": {
    "near-lake-framework": "^1.0.2"
  },
  "devDependencies": {
    "typescript": "^4.6.4"
  }
}
```
You may have noticed we've added `typescript` as a dev dependency. Let's configure TypeScript; we'll need to create a `tsconfig.json` file for that
```json title=tsconfig.json
{
  "compilerOptions": {
    "lib": [
      "ES2019",
      "dom"
    ]
  }
}
```
:::warning ES2019 edition
Please note the `ES2019` edition used. We require it because we are going to use `.flatMap()` and `.flat()` in our code. These methods were introduced in `ES2019`, though you can use an even more recent edition.
:::
Let's create an empty `index.ts` in the project root and thus finish the preparations.
```bash
npm install
```
Now we can start the real work.
## Set up NEAR Lake Framework
In the `index.ts` let's import `startStream` function and `types` from `near-lake-framework`:
```ts title=index.ts
import { startStream, types } from 'near-lake-framework';
```
Add the instantiation of `LakeConfig` below:
```ts title=index.ts
const lakeConfig: types.LakeConfig = {
  s3BucketName: "near-lake-data-mainnet",
  s3RegionName: "eu-central-1",
  startBlockHeight: 66264389,
};
```
Just a few words on the config: we have set `s3BucketName` for mainnet, the default `s3RegionName`, and a fresh-ish block height for `startBlockHeight`. You can go to [NEAR Explorer](https://nearblocks.io) and get **the freshest** block height for your setup, though you can use the same one we do.
Now we need to create a callback function that will be called to handle the [`StreamerMessage`](/build/data-infrastructure/lake-data-structures/toc) our indexer receives.
```ts title=index.ts
async function handleStreamerMessage(
  streamerMessage: types.StreamerMessage
): Promise<void> {
}
```
:::info Callback function requirements
In the `near-lake-framework` JS library the handler has to be presented as a callback function. This function has to:
- be asynchronous
- accept an argument of type [`StreamerMessage`](/build/data-infrastructure/lake-data-structures/toc)
- return nothing (`void`)
:::
And the actual start of our indexer at the end of `index.ts`:
```ts title=index.ts
(async () => {
  await startStream(lakeConfig, handleStreamerMessage);
})();
```
The final `index.ts` at this moment should look like the following:
```ts title=index.ts
import { startStream, types } from 'near-lake-framework';

const lakeConfig: types.LakeConfig = {
  s3BucketName: "near-lake-data-mainnet",
  s3RegionName: "eu-central-1",
  startBlockHeight: 66264389,
};

async function handleStreamerMessage(
  streamerMessage: types.StreamerMessage
): Promise<void> {
}

(async () => {
  await startStream(lakeConfig, handleStreamerMessage);
})();
```
## Events and where to catch them
First of all, let's find out where we can catch the Events. We hope you are familiar with the [Data Flow in NEAR Blockchain](/concepts/data-flow/near-data-flow), but let's revise our knowledge:
- Minting an NFT is an action in an NFT contract (it doesn't matter which one)
- Actions are located in a [Receipt](/build/data-infrastructure/lake-data-structures/receipt)
- The result of a Receipt's execution is an [ExecutionOutcome](/build/data-infrastructure/lake-data-structures/execution-outcome)
- The `ExecutionOutcome`, in turn, captures the logs a contract "prints"
- [Events](https://nomicon.io/Standards/EventsFormat) are built on top of those logs
This leads us to the realization that we can watch only for ExecutionOutcomes and ignore everything else `StreamerMessage` brings us.
Also, we need to define an interface to catch the Events. Let's copy the interface definition from the [Events Nomicon page](https://nomicon.io/Standards/EventsFormat#events) and paste it before the `handleStreamerMessage` function.
```ts title=index.ts
interface EventLogData {
  standard: string,
  version: string,
  event: string,
  data?: unknown,
};
```
## Catching only the data we need
Inside the callback function `handleStreamerMessage` we've prepared in the [Preparation](#preparation) section, let's start filtering the data we need:
```ts title=index.ts
async function handleStreamerMessage(
  streamerMessage: types.StreamerMessage
): Promise<void> {
  const relevantOutcomes = streamerMessage
    .shards
    .flatMap(shard => shard.receiptExecutionOutcomes)
}
```
We have iterated through all the [Shards](/build/data-infrastructure/lake-data-structures/shard) and collected the lists of all ExecutionOutcomes into a single list (in our case we don't care which Shard the mint happened on).
Now we want to deal only with those ExecutionOutcomes that contain logs in the Events format. Such logs start with `EVENT_JSON:` according to the [Events docs](https://nomicon.io/Standards/EventsFormat#events).
Also, we don't require all the data from an ExecutionOutcome, so let's handle that:
```ts title=index.ts
async function handleStreamerMessage(
  streamerMessage: types.StreamerMessage
): Promise<void> {
  const relevantOutcomes = streamerMessage
    .shards
    .flatMap(shard => shard.receiptExecutionOutcomes)
    .map(outcome => ({
      receipt: {
        id: outcome.receipt.receiptId,
        receiverId: outcome.receipt.receiverId,
      },
      events: outcome.executionOutcome.outcome.logs.map(
        (log: string): EventLogData => {
          const [_, probablyEvent] = log.match(/^EVENT_JSON:(.*)$/) ?? []
          try {
            return JSON.parse(probablyEvent)
          } catch (e) {
            return
          }
        }
      )
      .filter(event => event !== undefined)
    }))
}
```
Let us explain what we are doing here:
1. We walk through the ExecutionOutcomes
2. We construct a list of objects containing the `receipt` (its id and receiver) and `events` containing the Events
3. In order to collect the Events we iterate through the ExecutionOutcome's logs, trying to parse an Event using a regular expression. We return `undefined` if we fail to parse `EventLogData`
4. Finally, once the `events` list is collected, we filter it, dropping the `undefined` entries
Fine, so now we have a list of objects that contain some Receipt data and the list of successfully parsed `EventLogData`.
The goal of our indexer is to return useful data about minted NFTs that follow the NEP-171 standard, so we need to drop Events from irrelevant standards:
```ts title=index.ts
  .filter(relevantOutcome =>
    relevantOutcome.events.some(
      event => event.standard === "nep171" && event.event === "nft_mint"
    )
  )
```
## Almost done
So far we have collected everything we need according to our requirements.
We can print everything at the end of `handleStreamerMessage`:
```ts title=index.ts
relevantOutcomes.length && console.dir(relevantOutcomes, { depth: 10 })
```
The final look of the `handleStreamerMessage` function:
```ts title=index.ts
async function handleStreamerMessage(
  streamerMessage: types.StreamerMessage
): Promise<void> {
  const relevantOutcomes = streamerMessage
    .shards
    .flatMap(shard => shard.receiptExecutionOutcomes)
    .map(outcome => ({
      receipt: {
        id: outcome.receipt.receiptId,
        receiverId: outcome.receipt.receiverId,
      },
      events: outcome.executionOutcome.outcome.logs.map(
        (log: string): EventLogData => {
          const [_, probablyEvent] = log.match(/^EVENT_JSON:(.*)$/) ?? []
          try {
            return JSON.parse(probablyEvent)
          } catch (e) {
            return
          }
        }
      )
      .filter(event => event !== undefined)
    }))
    .filter(relevantOutcome =>
      relevantOutcome.events.some(
        event => event.standard === "nep171" && event.event === "nft_mint"
      )
    )

  relevantOutcomes.length && console.dir(relevantOutcomes, { depth: 10 })
}
```
And if we run our indexer we will catch `nft_mint` events and print the data in the terminal.
```bash
npm run start
```
:::note
Having troubles running the indexer? Please, check you haven't skipped the [Credentials](../running-near-lake/credentials.md) part :)
:::
Not so fast! Remember we were talking about having the links to the marketplaces to see the minted tokens? We're gonna extend our data with links whenever possible. At least we're gonna show you how to deal with the NFTs minted on [Paras](https://paras.id) and [Mintbase](https://mintbase.io).
## Crafting links to Paras and Mintbase for NFTs minted there
At this moment we have an array of objects we've crafted on the fly that exposes the receipt, execution status and event logs. We know that all the data we have at this moment is relevant for us, and we want to extend it with links to those minted NFTs, at least for the marketplaces we know.
We know and love Paras and Mintbase.
### Paras token URL
We did the research for you, and here's how the URL to a token on Paras is crafted:
```
https://paras.id/token/[1]::[2]/[3]
```
Where:
- [1] - Paras contract address (`x.paras.near`)
- [2] - The first part of the `token_id` (`EventLogData.data` for Paras is an array of objects with a `token_ids` key in it. Those IDs are numbers separated by a colon `:`)
- [3] - The `token_id` itself
Example:
```
https://paras.id/token/x.paras.near::387427/387427:373
```
Let's add the interface for later use somewhere after `interface EventLogData`:
```ts
interface ParasEventLogData {
  owner_id: string,
  token_ids: string[],
};
```
### Mintbase token URL
And again we did the research for you:
```
https://www.mintbase.io/thing/[1]:[2]
```
Where:
- [1] - `meta_id` (`EventLogData.data` for Mintbase is an array of stringified JSON that contains `meta_id`)
- [2] - Store contract account address (basically Receipt's receiver ID)
Example:
```
https://www.mintbase.io/thing/70eES-icwSw9iPIkUluMHOV055pKTTgQgTiXtwy3Xus:vnartistsdao.mintbase1.near
```
Let's add the interface for later use somewhere after `interface EventLogData`:
```ts
interface MintbaseEventLogData {
  owner_id: string,
  memo: string,
}
```
Now it's a perfect time to add another `.map()`, but it might be too much, so let's proceed with a for loop to craft the output data we want to print.
```ts title=index.ts
let output = []
for (let relevantOutcome of relevantOutcomes) {
  let marketplace = "Unknown"
  let nfts = []
}
```
We're going to print the marketplace name and the Receipt id, so you'll be able to search for it on [NEAR Explorer](https://nearblocks.io), along with the list of links to the NFTs and the owner account name.
Let's start crafting the links. Time to say "Hi!" to [Riqi](https://twitter.com/hdriqi) (just because we can):
```ts title=index.ts
let output = []
for (let relevantOutcome of relevantOutcomes) {
  let marketplace = "Unknown"
  let nfts = []
  if (relevantOutcome.receipt.receiverId.endsWith(".paras.near")) {
    marketplace = "Paras"
    nfts = relevantOutcome.events.flatMap(event => {
      return (event.data as ParasEventLogData[])
        .map(eventData => ({
          owner: eventData.owner_id,
          links: eventData.token_ids.map(
            tokenId => `https://paras.id/token/${relevantOutcome.receipt.receiverId}::${tokenId.split(":")[0]}/${tokenId}`
          )
        })
      )
    })
  }
```
A few words about what is going on here. If the Receipt's receiver account name ends with `.paras.near` (e.g. `x.paras.near`) we assume it's from Paras marketplace, so we are changing the corresponding variable.
After that we iterate over the Events and its `data` using the `ParasEventLogData` we've defined earlier. Collecting a list of objects with the NFTs owner and NFTs links.
Mintbase's turn. We hope [Nate](https://twitter.com/nategeier) and his team have already [migrated to NEAR Lake Framework](../../lake-framework/migrating-to-near-lake-framework.md), so let's say "Hi!" and craft the link:
```ts title=index.ts
} else if (relevantOutcome.receipt.receiverId.match(/\.mintbase\d+\.near$/)) {
marketplace = "Mintbase"
nfts = relevantOutcome.events.flatMap(event => {
return (event.data as MintbaseEventLogData[])
.map(eventData => {
const memo = JSON.parse(eventData.memo)
return {
owner: eventData.owner_id,
links: [`https://mintbase.io/thing/${memo["meta_id"]}:${relevantOutcome.receipt.receiverId}`]
}
})
})
}
```
Almost the same story as with Paras, but a little bit more complex. The nature of the Mintbase marketplace is that it's not a single marketplace! Every Mintbase user has their own store and a separate contract. And it looks like those contract addresses follow the same principle: they end with `.mintbaseN.near`, where `N` is a number (e.g. `nate.mintbase1.near`).
Once we've determined that the ExecutionOutcome receiver is from Mintbase, we do the same things as with Paras:
1. Changing the `marketplace` variable
2. Collecting the list of NFTs with owner and crafted links
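The memo parsing can be sketched in isolation with hypothetical event data (the owner account is made up; the `meta_id` and store name mirror the example URL shown earlier):

```ts
// Hypothetical Mintbase-style event data: the memo field is stringified
// JSON carrying the meta_id used to build the "thing" URL.
const receiverId = "vnartistsdao.mintbase1.near"
const eventData = {
  owner_id: "bob.near", // hypothetical owner
  memo: '{"meta_id": "70eES-icwSw9iPIkUluMHOV055pKTTgQgTiXtwy3Xus"}',
}

// Same store-address check as in the handler above
const isMintbase = /\.mintbase\d+\.near$/.test(receiverId)
const memo = JSON.parse(eventData.memo)
const link = `https://mintbase.io/thing/${memo["meta_id"]}:${receiverId}`
// link → "https://mintbase.io/thing/70eES-icwSw9iPIkUluMHOV055pKTTgQgTiXtwy3Xus:vnartistsdao.mintbase1.near"
```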
And if we can't determine the marketplace, we still want to return something, so let's return Events data as is:
```ts title=index.ts
} else {
nfts = relevantOutcome.events.flatMap(event => event.data)
}
```
It's time to push the collected data to `output`:
```ts title=index.ts
output.push({
receiptId: relevantOutcome.receipt.id,
marketplace,
nfts,
})
```
And make it print the output to the terminal:
```ts title=index.ts
if (output.length) {
console.log(`We caught freshly minted NFTs!`)
console.dir(output, { depth: 5 })
}
```
All together:
```ts title=index.ts
let output = []
for (let relevantOutcome of relevantOutcomes) {
let marketplace = "Unknown"
let nfts = []
if (relevantOutcome.receipt.receiverId.endsWith(".paras.near")) {
marketplace = "Paras"
nfts = relevantOutcome.events.flatMap(event => {
return (event.data as ParasEventLogData[])
.map(eventData => ({
owner: eventData.owner_id,
links: eventData.token_ids.map(
tokenId => `https://paras.id/token/${relevantOutcome.receipt.receiverId}::${tokenId.split(":")[0]}/${tokenId}`
)
})
)
})
} else if (relevantOutcome.receipt.receiverId.match(/\.mintbase\d+\.near$/)) {
marketplace = "Mintbase"
nfts = relevantOutcome.events.flatMap(event => {
return (event.data as MintbaseEventLogData[])
.map(eventData => {
const memo = JSON.parse(eventData.memo)
return {
owner: eventData.owner_id,
links: [`https://mintbase.io/thing/${memo["meta_id"]}:${relevantOutcome.receipt.receiverId}`]
}
})
})
} else {
nfts = relevantOutcome.events.flatMap(event => event.data)
}
output.push({
receiptId: relevantOutcome.receipt.id,
marketplace,
nfts,
})
}
if (output.length) {
console.log(`We caught freshly minted NFTs!`)
console.dir(output, { depth: 5 })
}
```
OK, how about the date and time of the NFT mint? Let's add this to the beginning of the `handleStreamerMessage` function:
```ts title=index.ts
const createdOn = new Date(streamerMessage.block.header.timestamp / 1000000)
```
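Why divide by 1,000,000? Block header timestamps are in nanoseconds, while a JavaScript `Date` counts milliseconds. A quick sanity check, using a timestamp matching the example output below:

```ts
// Block header timestamps are in nanoseconds; JavaScript Dates count
// milliseconds, hence the division by 1,000,000.
const timestampNanos = 1653384957831000000
const createdOn = new Date(timestampNanos / 1000000)
// createdOn.toISOString() → "2022-05-24T09:35:57.831Z"
```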
Update the `output.push()` expression:
```ts title=index.ts
output.push({
receiptId: relevantOutcome.receipt.id,
marketplace,
createdOn,
nfts,
})
```
And that's it! Run the indexer to watch for NFT minting and never miss a thing.
```bash
npm run start
```
:::note
Having trouble running the indexer? Please check that you haven't skipped the [Credentials](../running-near-lake/credentials.md) part :)
:::
Example output:
```
We caught freshly minted NFTs!
[
{
receiptId: '2y5XzzL1EEAxgq8EW3es2r1dLLkcecC6pDFHR12osCGk',
marketplace: 'Paras',
createdOn: 2022-05-24T09:35:57.831Z,
nfts: [
{
owner: 'dccc.near',
links: [ 'https://paras.id/token/x.paras.near::398089/398089:17' ]
}
]
}
]
We caught freshly minted NFTs!
[
{
receiptId: 'BAVZ92XdbkAPX4DkqW5gjCvrhLX6kGq8nD8HkhQFVt5q',
marketplace: 'Mintbase',
createdOn: 2022-05-24T09:36:00.411Z,
nfts: [
{
owner: 'chiming.near',
links: [
'https://mintbase.io/thing/HOTcn6LTo3qTq8bUbB7VwA1GfSDYx2fYOqvP0L_N5Es:vnartistsdao.mintbase1.near'
]
}
]
}
]
```
## Conclusion
What a ride, yeah? Let's sum up what we have done:
- You've learnt about [Events](https://nomicon.io/Standards/EventsFormat)
- Now you understand how to listen for Events
- Since, as a contract developer, you can emit your own custom Events, you now know how to create an indexer that follows them
- We've had a closer look at the NFT minting process; you can experiment further and find out how to follow `nft_transfer` Events
The material from this tutorial can be extrapolated to literally any event that follows the [Events format](https://nomicon.io/Standards/EventsFormat).
Not to mention, you now have a dedicated indexer to find out about the newest NFTs minted out there and to be the earliest bird to collect them.
Let's go hunt doo, doo, doo 🦈
:::note Source code for the tutorial
[`near-examples/near-lake-nft-indexer`](https://github.com/near-examples/near-lake-nft-indexer): source code for this tutorial
:::
|
---
sidebar_position: 3
sidebar_label: "Using structs and enums"
title: "How to think about structs and enums when writing a Rust smart contract on NEAR"
---
import basicCrossword from '/docs/assets/crosswords/basics-crossword.jpg';
import enumBox from '/docs/assets/crosswords/enum-a-d-block--eizaconiendo.near--eiza_coniendo.png';
# Structs and enums
## Overview
### Structs
If you're not familiar with Rust, it may be confusing that there are no classes or inheritance like in other programming languages. We'll be exploring how to [use structs](https://doc.rust-lang.org/book/ch05-01-defining-structs.html), which are somewhat similar to classes, but perhaps simpler.
Remember that there will be only one struct that gets the [`#[near_bindgen]` macro](/sdk/rust/contract-structure/near-bindgen) placed on it; our primary struct or singleton if you wish. Oftentimes the primary struct will contain additional structs that may, in turn, contain more structs in a neat and orderly way. You may also have structs that are used to return data to an end user, like a frontend. We'll be covering both of these cases in this chapter.
### Enums
Enums are short for enumerations, and can be particularly useful if you have entities in your smart contract that transition to different states. For example, say you have a series of blockchain games where players can join, battle, and win. There might be an enumeration for `AcceptingPlayers`, `GameInProgress`, and `GameCompleted`. Enums are also used to define discrete sets of options, like the months in a year.
For our crossword puzzle, one example of an enum is the direction of the clue: either across (A) or down (D) as illustrated below. These are the only two options.
<figure>
<img src={enumBox} alt="Children's toy of a box that has blocks that only fit certain shapes, resembling the letters A and D. Art created by eizaconiendo.near" width="600"/>
<figcaption>Art by <a href="https://twitter.com/eiza_coniendo" target="_blank">eizaconiendo.near</a></figcaption>
</figure>
<br/>
Rust has an interesting feature where enums can contain additional data. You can see [examples of that here](https://doc.rust-lang.org/rust-by-example/custom_types/enum.html).
## Using structs
### Storing contract state
We're going to introduce several structs all at once. These structs are addressing a need from the previous chapter, where the puzzle itself was hardcoded and looked like this:
<img src={basicCrossword} alt="Basic crossword puzzle from chapter 1" width="600" />
In this chapter, we want the ability to add multiple, custom crossword puzzles. This means we'll be storing information about the clues in the contract state. Think of a grid where there are x and y coordinates for where a clue starts. We'll also want to specify:
1. Clue number
2. Whether it's **across** or **down**
3. The length, or number of letters in the answer
Let's dive right in, starting with our primary struct:
```rust
#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize, PanicOnDefault)]
pub struct Crossword {
puzzles: LookupMap<String, Puzzle>, // ⟵ Puzzle is a struct we're defining
unsolved_puzzles: UnorderedSet<String>,
}
```
:::note Let's ignore a couple of things…
For now, let's ignore the macros above the structs that begin with `derive` and `serde`.
:::
Look at the fields inside the `Crossword` struct above, and you'll see a couple types. `String` is a part of Rust's standard library, but `Puzzle` is something we've created:
```rust
#[derive(BorshDeserialize, BorshSerialize, Debug)]
pub struct Puzzle {
status: PuzzleStatus, // ⟵ An enum we'll get to soon
/// Use the CoordinatePair assuming the origin is (0, 0) in the top left side of the puzzle.
answer: Vec<Answer>, // ⟵ Another struct we've defined
}
```
Let's focus on the `answer` field here, which is a vector of `Answer`s. (A vector is nothing fancy, just a bunch of items, or a "growable array" as described in the [standard Rust documentation](https://doc.rust-lang.org/std/vec/struct.Vec.html).)
```rust
#[derive(BorshDeserialize, BorshSerialize, Deserialize, Serialize, Debug)]
#[serde(crate = "near_sdk::serde")]
pub struct Answer {
num: u8,
start: CoordinatePair, // ⟵ Another struct we've defined
direction: AnswerDirection, // ⟵ An enum we'll get to soon
length: u8,
clue: String,
}
```
Now let's take a look at the last struct we've defined, which has cascaded down from fields on our primary struct: the `CoordinatePair`.
```rust
#[derive(BorshDeserialize, BorshSerialize, Deserialize, Serialize, Debug)]
#[serde(crate = "near_sdk::serde")]
pub struct CoordinatePair {
x: u8,
y: u8,
}
```
:::info Summary of the structs shown
There are a handful of structs here, and this will be a typical pattern when we use structs to store contract state.
```
Crossword ⟵ primary struct with #[near_bindgen]
└── Puzzle
└── Answer
└── CoordinatePair
```
:::
### Returning data
Since we're going to have multiple crossword puzzles that have their own, unique clues and positions in a grid, we'll want to return puzzle objects to a frontend.
:::tip Quick note on return values
By default, return values are serialized in JSON unless explicitly directed to use Borsh for binary serialization.
For example, if we call this function:
```rust
pub fn return_some_words() -> Vec<String> {
vec!["crossword".to_string(), "puzzle".to_string()]
}
```
The return value would be a JSON array:
`["crossword", "puzzle"]`
While somewhat advanced, you can learn more about [changing the serialization here](/sdk/rust/contract-interface/serialization-interface#overriding-serialization-protocol-default).
:::
We have a struct called `JsonPuzzle` that differs from the `Puzzle` struct we've shown. It has one difference: the addition of the `solution_hash` field.
```rust
#[derive(Serialize, Deserialize)]
#[serde(crate = "near_sdk::serde")]
pub struct JsonPuzzle {
/// The human-readable (not in bytes) hash of the solution
solution_hash: String, // ⟵ this field is not contained in the Puzzle struct
status: PuzzleStatus,
answer: Vec<Answer>,
}
```
This is handy because our primary struct has a key-value pair where the key is the solution hash (as a `String`) and the value is the `Puzzle` struct.
```rust
pub struct Crossword {
puzzles: LookupMap<String, Puzzle>,
// key ↗ ↖ value
…
```
Our `JsonPuzzle` struct returns the information from both the key and the value.
We can move on from this topic, but suffice it to say, sometimes it's helpful to have structs where the intended use is to return data in a more meaningful way than might exist from the structs used to store contract data.
### Using returned objects in a callback
Don't be alarmed if this section feels confusing at this point, but know we'll cover Promises and callbacks later.
Without getting into detail, a contract may want to make a cross-contract call and "do something" with the return value. Sometimes this return value is an object we're expecting, so we can define a struct with the expected fields to capture the value. In other programming languages this may be referred to as "casting" or "marshaling" the value.
A real-world example of this might be the [Storage Management standard](https://nomicon.io/Standards/StorageManagement.html), as used in a [fungible token](https://github.com/near-examples/FT).
Let's say a smart contract wants to determine if `alice.near` is "registered" on the `nDAI` token. More technically, does `alice.near` have a key-value pair for herself in the fungible token contract.
```rust
#[derive(Serialize, Deserialize)]
#[serde(crate = "near_sdk::serde")]
pub struct StorageBalance {
pub total: U128,
pub available: U128,
}
// …
// Logic that calls the nDAI token contract, asking for alice.near's storage balance.
// …
#[private]
pub fn my_callback(&mut self, #[callback] storage_balance: StorageBalance) {
// …
}
```
The crossword puzzle will eventually use a cross-contract call and callback, so we can look forward to that. For now just know that if your contract expects to receive a return value that's not a primitive (unsigned integer, string, etc.) and is more complex, you may use a struct to give it the proper type.
## Using enums
In the section above, we saw two fields in the structs that had an enum type:
1. `AnswerDirection` — this is the simplest type of enum, and will look familiar from other programming languages. It provides the only two options for how a clue is oriented in a crossword puzzle: across and down.
```rust
#[derive(BorshDeserialize, BorshSerialize, Deserialize, Serialize, Debug)]
#[serde(crate = "near_sdk::serde")]
pub enum AnswerDirection {
Across,
Down,
}
```
2. `PuzzleStatus` — this enum can actually store a string inside the `Solved` structure. (Note that we could have simply stored a string instead of having a structure, but a structure might make this easier to read.)
As we improve our crossword puzzle, the idea is to give the winner of the crossword puzzle (the first person to solve it) the ability to write a memo. (For example: "Took me forever to get clue six!", "Alice rules!" or whatever.)
```rust
#[derive(BorshDeserialize, BorshSerialize, Deserialize, Serialize, Debug)]
#[serde(crate = "near_sdk::serde")]
pub enum PuzzleStatus {
Unsolved,
Solved { memo: String },
}
```
|
---
sidebar_position: 5
sidebar_label: "Access keys and login 1/2"
title: "Covering access keys and login"
---
import chapter1Correct from '/docs/assets/crosswords/chapter-1-crossword-correct.gif';
import accessKeys from '/docs/assets/crosswords/keys-cartoon-good--alcantara_gabriel.near--Bagriel_5_10.png';
import functionCallAction from '/docs/assets/crosswords/function-call-action.png';
import tutorialAccessKeys from '/docs/assets/crosswords/access-keys.png';
# Logging in with NEAR
## Previously…
In the previous chapter we simply displayed whether the crossword puzzle was solved or not, by checking the solution hash against the user's answers.
<img src={chapter1Correct} width="600"/><br/><br/>
## Updates to transfer prize money
In this chapter, our smart contract will send 5 Ⓝ to the first person who solves the puzzle. For this, we're going to require the user to have a NEAR account and log in.
:::note Better onboarding to come
Later in this tutorial we won't require the user to have a NEAR account.
Since logging in is important for many decentralized apps, we'll show how this is done in NEAR and how it's incredibly unique compared to other blockchains.
:::
This transfer will occur when the first user to solve the puzzle calls the `submit_solution` method with the solution. During the execution of that function it will check that the user submitted the correct answer, then transfer the prize.
We'll be able to see this transfer (and other steps that occurred) in [NearBlocks Explorer](https://testnet.nearblocks.io).
But first let's talk about one of the most interesting superpowers of NEAR: access keys.
## Access keys
You might be familiar with other blockchains where your account name is a long string of numbers and letters. NEAR has an account system where your name is human-readable, like `friend.testnet` for testnet or `friend.near` for mainnet.
You can add (and remove) keys to your account on NEAR. There are two types of keys: full and function-call access keys.
The illustration below shows a keychain with a full-access key (the large, gold one) and two function-call access keys.
<figure>
<img src={accessKeys} width="600" alt="A keychain with three keys. A large, gold key represents the full-access keys on NEAR. The two other keys are gray and smaller, and have detachable latches on them. They represent function-call access key. Art created by alcantara_gabriel.near" />
<figcaption>Art by <a href="https://twitter.com/Bagriel_5_10" target="_blank">alcantara_gabriel.near</a></figcaption>
</figure>
### Full-access keys
Full-access keys are the ones you want to protect the most. They can transfer all the funds from your account, delete the account, or perform any of the other [Actions on NEAR](03-actions.md).
When we used the `near login` command in the [previous chapter](../01-basics/01-set-up-skeleton.md#creating-a-new-key-on-your-computer), that command asked the NEAR Wallet to use its full-access key to perform the `AddKey` Action, creating another full-access key: the one we created locally on our computer. NEAR CLI uses that new key to deploy, make function calls, etc.
### Function-call access keys
Function-call access keys are sometimes called "limited access keys" because they aren't as powerful as full-access keys.
A Function-call access key will specify:
- What contract it's allowed to call
- What method name(s) it's allowed to call (you can also specify all functions)
- How much allowance it's allowed to use on these function calls
It's only allowed to perform the `FunctionCall` Action.
<img src={functionCallAction} alt="List of NEAR Actions with a highlight on the FunctionCall Action" width="600"/>
### Example account with keys
Let's look at this testnet account that has one full-access key and two function-call access keys. As you can see, we use the NEAR CLI [command `keys`](https://docs.near.org/tools/near-cli#near-keys) to print this info.
<img src={tutorialAccessKeys} alt="Terminal screen showing the access keys for an account, there is one full-access key and two function-call access keys"/>
Let's look deeper into each key.
#### First key
```js
{
access_key: {
nonce: 72772126000000, // Large nonce, huh!
permission: {
FunctionCall: {
allowance: '777000000000000000000000000', // Equivalent to 777 NEAR
method_names: [], // Any methods can be called
receiver_id: 'puzzle.testnet' // This key can only call methods on puzzle.testnet
}
}
},
public_key: 'ed25519:9Hhm77W4KCFzFgK55sZgEMesYRaL8wV1kpqh8qntnSPV'
}
```
The first key in the image above is a function-call access key that can call the smart contract `puzzle.testnet` on **any method**. If you don't specify which methods it's allowed to call, it is allowed to call them all. Note the empty array (`[]`) next to `method_names`, which indicates this.
We won't discuss the nonce too much, but know that in order to prevent the possibility of [replay attacks](https://en.wikipedia.org/wiki/Replay_attack), the nonce for a newly-created key is large and includes info on the block height as well as a random number.
The allowance is the amount, in yoctoNEAR, that this key is allowed to use during function calls. This **cannot** be used to transfer NEAR. It can only be used in gas for function calls.
The allowance on this key is intentionally large for demonstration purposes. `777000000000000000000000000` yoctoNEAR is `777` NEAR, which is unreasonably high for an access key. So high, in fact, that it exceeded the amount of NEAR on the contract itself when created. This shows that you can create an access key that exceeds the account balance, and that it doesn't subtract the allowance at the time of creation.
So the key is simply allowed to use the allowance in NEAR on gas, deducting from the account for each function call.
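Since yoctoNEAR amounts don't fit in a plain JavaScript number, `BigInt` is a handy way to sanity-check them. A quick sketch converting the allowance above back to NEAR:

```js
// yoctoNEAR amounts exceed Number.MAX_SAFE_INTEGER (2^53 - 1),
// so BigInt is the safe choice: 10^24 yoctoNEAR = 1 NEAR.
const allowance = BigInt('777000000000000000000000000');
const oneNear = BigInt('1000000000000000000000000'); // 10^24
const allowanceInNear = allowance / oneNear;
// allowanceInNear → 777n
```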
#### Second key
```js
{
access_key: {
nonce: 72777733000000,
permission: {
FunctionCall: {
allowance: '250000000000000000000000', // 0.25 NEAR, which is a typical allowance
method_names: [ 'foo', 'bar' ], // Can call methods foo and bar only
receiver_id: 'puzzle.testnet'
}
}
},
public_key: 'ed25519:CM4JtNo2sL3qPjWFn4MwusMQoZbHUSWaPGCCMrudZdDU'
},
```
This second key specifies which methods can be called, and has a lower allowance.
Note that the allowance for this key (a quarter of a NEAR) is the default allowance when a person "logs in" with the NEAR Wallet.
In NEAR, "logging in" typically means adding a key like this to your account. We'll cover this more in a moment.
#### Third key
```js
{
access_key: { nonce: 72770704000019, permission: 'FullAccess' },
public_key: 'ed25519:FG4HjEPsvP5beScC3hkTLztQH8k9Qz9maTaumvPDa5t3'
}
```
The third key is a full-access key.
Since this key can perform all the Actions, there aren't additional details or restrictions like the function-call access keys we saw.
## What does "log in" mean in a blockchain context?
Let's take a step back from NEAR and talk about how login works broadly using web3 wallets.
A web3 wallet (like Ethereum's MetaMask, Cosmos's Keplr, or the NEAR Wallet) stores a private key for an account. When interacting with decentralized apps, a user will typically use the wallet to sign transactions and send them to the blockchain for processing.
However, web3 wallets can also be used to sign any kind of message without sending anything to the blockchain. This is sometimes called "offline signing," and protocols will sometimes create standards around how to sign data.
In other ecosystems, the idea of "logging in" with a web3 wallet uses this offline signing. A user is asked to sign a structured message and a backend can confirm that the message was signed by a given account.
NEAR keys can also sign and verify messages in this manner. In fact, there are a couple simple examples of how to achieve this in the [`near-api-js` cookbook](https://github.com/near/near-api-js/blob/master/packages/cookbook/utils/verify-signature.js).
There are potential drawbacks to this offline signing technique, particularly if a signed message gets intercepted by a malicious party. They might be able to send this signature to a backend and log in on your behalf. Because this all takes place offline, there's no mechanism on-chain to revoke your login or otherwise control access. We quickly see that using a web3 wallet for signed typed data runs into limitations.
So signing a message is fine, but what if we could do better?
With NEAR, we can leverage access keys to improve a user's login experience and give the power back to the user.
If I log into the [Guest Book example site](https://github.com/near-examples/guest-book-examples/tree/main/frontend), I create a unique key just for that dApp, adding it to my account. When I'm done I can remove the key myself. If I suspect someone has control of my key (if a laptop is stolen, for example) I can remove the key as long as I have a full-access key in my control.
Logging in with NEAR truly gives the end user control of their account and how they interact with dApps, and does so on the protocol level.
---
The concept of access keys is so important that we've spent longer than usual on the topic without actually implementing code for our improved crossword puzzle.
Let's move to the next section and actually add the login button.
|
---
id: predeployed-contract
title: Pre-deployed Contract
sidebar_label: Pre-deployed Contract
---
Create your first non-fungible token by using a pre-deployed NFT smart contract which works exactly like the one you will build in this tutorial.
---
## Prerequisites
To complete this tutorial successfully, you'll need [a NEAR Wallet](https://testnet.mynearwallet.com/create) and [NEAR CLI](/tools/near-cli#setup)
:::tip
You can install near-cli through the following command:
```bash
npm install -g near-cli
```
:::
---
## Using the NFT contract
Minting an NFT token on NEAR is a simple process that involves calling a smart contract function.
To interact with the contract you will first need to log in to your NEAR account through `near-cli`.
<hr class="subsection" />
### Setup
Log in to your newly created account with `near-cli` by running the following command in your terminal:
```bash
near login
```
Set an environment variable for your account ID to make it easy to copy and paste commands from this tutorial:
```bash
export NEARID=YOUR_ACCOUNT_NAME
```
<hr class="subsection" />
### Minting your NFTs
We have already deployed an NFT contract to `nft.examples.testnet` which allows users to freely mint tokens. Let's use it to mint our first token.
Run this command in your terminal; remember to replace `token_id` with a string of your choice. This string will uniquely identify the token you mint.
```bash
near call nft.examples.testnet nft_mint '{"token_id": "TYPE_A_UNIQUE_VALUE_HERE", "receiver_id": "'$NEARID'", "metadata": { "title": "GO TEAM", "description": "The Team Goes", "media": "https://bafybeidl4hjbpdr6u6xvlrizwxbrfcyqurzvcnn5xoilmcqbxfbdwrmp5m.ipfs.dweb.link/", "copies": 1}}' --accountId $NEARID --deposit 0.1
```
<details>
<summary>Example response: </summary>
<p>
```json
Log [nft.examples.testnet]: EVENT_JSON:{"standard":"nep171","version":"nft-1.0.0","event":"nft_mint","data":[{"owner_id":"benjiman.testnet","token_ids":["TYPE_A_UNIQUE_VALUE_HERE"]}]}
Transaction Id 8RFWrQvAsm2grEsd1UTASKpfvHKrjtBdEyXu7WqGBPUr
To see the transaction in the transaction explorer, please open this url in your browser
https://testnet.nearblocks.io/txns/8RFWrQvAsm2grEsd1UTASKpfvHKrjtBdEyXu7WqGBPUr
''
```
</p>
</details>
:::tip
You can also replace the `media` URL with a link to any image file hosted on your web server.
:::
<hr class="subsection" />
### Querying your NFT
To view tokens owned by an account you can call the NFT contract with the following `near-cli` command:
```bash
near view nft.examples.testnet nft_tokens_for_owner '{"account_id": "'$NEARID'"}'
```
<details>
<summary>Example response: </summary>
<p>
```json
[
{
"token_id": "Goi0CZ",
"owner_id": "bob.testnet",
"metadata": {
"title": "GO TEAM",
"description": "The Team Goes",
"media": "https://bafybeidl4hjbpdr6u6xvlrizwxbrfcyqurzvcnn5xoilmcqbxfbdwrmp5m.ipfs.dweb.link/",
"media_hash": null,
"copies": 1,
"issued_at": null,
"expires_at": null,
"starts_at": null,
"updated_at": null,
"extra": null,
"reference": null,
"reference_hash": null
},
"approved_account_ids": {}
}
]
```
</p>
</details>
**Congratulations!** You just minted your first NFT token on the NEAR blockchain! 🎉
Now try going to your [NEAR Wallet](https://testnet.mynearwallet.com) and view your NFT in the "Collectibles" tab.
---
## Final remarks
This basic example illustrates all the required steps to call an NFT smart contract on NEAR and start minting your own non-fungible tokens.
Now that you're familiar with the process, you can jump to [Contract Architecture](/tutorials/nfts/skeleton) and learn more about the smart contract structure and how you can build your own NFT contract from the ground up.
***Happy minting!*** 🪙
:::note Versioning for this article
At the time of this writing, this example works with the following versions:
- near-cli: `4.0.13`
- NFT standard: [NEP171](https://nomicon.io/Standards/Tokens/NonFungibleToken/Core), version `1.1.0`
:::
|
NEAR is in the Top 50 – Crypto Valley VC Report
COMMUNITY
March 3, 2021
NEAR is happy to announce that it has been listed as one of the Top 50 Blockchain Technology firms in Crypto Valley! NEAR is included in the 6th edition of the Crypto Valley VC Top 50 Report, created by CV VC AG in collaboration with its technology partners PwC and inacta. The Top 50 is chosen based on funding, valuation, and employees. Together, the Top 50 had a total valuation of US$254.9 billion.
The Crypto Valley VC Top 50 is a periodical report on market valuation developments from Crypto Valley, which includes Switzerland and Liechtenstein. The CV VC Top 50 Report aims to highlight the impact of Crypto Valley on the Swiss economy and job market. The report shows the growth of the crypto and blockchain space and highlights the best companies in terms of funding, valuation, and employees.
Top 50 organizations – CV VC report.
This is the 6th edition of the CV VC Top 50 report and in this edition, NEAR is in the Top 50 under the Blockchain and Protocol sector!
To read the full report, please visit: https://cvvc.com/top50.
NEAR Report – by Mally Anderson
More than a decade after Bitcoin’s genesis block and five years since the Ethereum mainnet launched, blockchain networks support trillions of dollars in value and have millions of users worldwide, but they have not achieved anything close to mainstream global adoption. Many use cases have been unavailable to blockchain builders––especially in the last year, with the rise of DeFi on Ethereum and the resulting network congestion––because of slow confirmation, high gas fees, and high hurdles to onboarding users. The crypto ecosystem will likely evolve to become more multichain to address these bottlenecks and a range of layer-one protocols and layer-two solutions will apply their own strategies to address scalability and usability.
NEAR’s approach is to focus on the developer and user experience. By focusing on accessibility and usability, it can become just as easy to develop a decentralized application on a blockchain as it is to build any other kind of application on today’s internet––and just as easy to use. NEAR Protocol is a fully decentralized, community-controlled layer-one blockchain protocol and smart contract platform. The mission of the NEAR collective is to accelerate the world’s transition to open technologies by growing and enabling a community of developers and creators.
Beyond ease of use to support global adoption, the network must be able to scale to meet that volume of demand––which NEAR can, using dynamic sharding and a novel consensus mechanism. Dynamic sharding divides the system into parallel shards that each handle a subset of the computation and which can be added or removed based on usage. This horizontal scaling approach (as opposed to vertical scaling, as with Ethereum layer-twos such as rollups) raises the network’s potential throughput to more than 1000x that of Ethereum 1.0, across 100 shards. Combined with gas fees between 1000x and 10,000x cheaper than those on ETH, this horizontal scaling approach also opens the door to greater usability.
Developer experience and end-user accessibility are the top priorities for NEAR, all the way down to the protocol level. Developers can use familiar tools, such as Rust or AssemblyScript, to build on NEAR’s WASM-based runtime, and the application build/test/deploy cycle is much faster and simpler than on most networks. Transaction fees are predictable and developers can earn a rebate of 30% of the gas moving through their smart contracts.
NEAR believes the future of blockchain relies on progressive onboarding for users new to crypto. NEAR’s unique contract-based account model provides the flexibility to onboard users to an application who either don’t hold NEAR tokens or have never interacted with a blockchain––a process that can take dozens of steps and complex fiat onramps on most other networks. Human-readable, named account addresses (e.g. cryptovalley.near) replace long, clunky hex strings and can support multiple names within a single public key. With progressive onboarding, the crypto elements of using decentralized apps on NEAR can be 100% abstracted away from the user and the complexities of gas fees, storage costs, seed phrases, and keys are obscured from their experience until they are ready to claim their wallet and hold their own tokens.
Because NEAR is a sharded scalable blockchain, the easiest point of comparison is Ethereum 2.0––but this does not mean they are competitors. Quite the opposite, in fact: NEAR is compatible and interoperable with Ethereum 1.0 today. The hope is that NEAR solves many developers’ pain points and blockers around cost and scalability today, without having to commit fully to one network or the other.
NEAR will soon interoperate with Ethereum via a fully decentralized asset bridge and EVM support, which will allow Solidity contracts to run on NEAR without any code rewrites. The ideal for many developers and application builders in today’s blockchain landscape is to run a product on multiple blockchains and get the best of each: they can leverage community and liquidity on one, while leveraging performance on another. Using the ETH-NEAR bridge, a developer can have the same asset on both blockchains and let apps communicate across the bridge.
In order to realize the NEAR collective’s vision of a world where all people have control of their money, data, and power of governance, we need a positive-sum approach to growing adoption of all open technologies and decentralized networks. Rebuilding a truly open, decentralized web is not about performance and features alone, but about empowering builders and entrepreneurs to make their ideas a reality and onboard users without barriers.
ABOUT CV VC
CV VC is an early-stage venture capital investor with a focus on startups that build on blockchain technology. In addition to the venture capital investments, we operate our own incubator and ecosystem business under the CV Labs brand, consisting of co-working spaces, advisory and events.
Learn more about our opportunities to invest in the next generation of blockchain startups, our one-of-a-kind blockchain incubation program and how startups get funded, our corporate advisory and consulting services, and our community-focused co-working spaces.
ABOUT NEAR
NEAR exists to accelerate the world’s transition to open technologies by growing and enabling a community of developers and creators. NEAR is a decentralized application platform that secures high value assets like money and identity with the performance necessary to make them useful for everyday people, putting the power of Open Finance and the Open Web in their hands. NEAR’s unique account model allows developers to build secure apps that consumers can actually use similarly to today’s web apps, something which requires multiple second-layer add-ons on other blockchains.
If you’re a developer, be sure to sign up for a developer program and join the conversation in NEAR Discord chat. You can also keep up to date on future releases, announcements, and event invitations by subscribing to NEAR newsletter, or following @NEARProtocol on Twitter for the latest news.
NEAR 2022: A Year in Review
NEAR FOUNDATION
December 23, 2022
It’s been a whirlwind year for NEAR – and it’s hard to believe that 2022 is coming to an end. It’s been a year filled with massive milestones and achievements, including the biggest NEARCON ever, record-breaking new wallet creation, and much more.
With 2023 around the corner, it’s a great time to reflect and take stock of all the exciting happenings and announcements that took place in 2022. NEAR Foundation congratulates every developer, community member, and partner that made 2022 a blast.
Without further ado, here’s everything that happened with NEAR in 2022, how the ecosystem is successfully navigating bear market conditions, and why NEAR is primed for explosive growth heading into 2023!
Protocol Progress and JavaScript SDK
The last 12 months saw huge progress on the protocol level, with an exciting roadmap charting the course for 2023 and beyond. From staking upgrades to new developer tools, the NEAR protocol made huge strides in onboarding new builders and the next 1 billion users.
NEAR introduced the brand new JavaScript SDK, enabling developers to build on NEAR using the most popular programming language in the world. Brendan Eich, the inventor of JavaScript and co-founder of privacy-first browser Brave, even joined a panel at NEARCON to discuss the new SDK, Brave’s new support of Aurora, and why he’s excited about NEAR.
NEAR also saw the introduction of meta-transactions, allowing third parties to cover the transaction costs of any account. Users can then be onboarded to NEAR apps without owning any NEAR tokens. Meta transaction development will continue through next year and will be critical to new wallet growth.
Stake Wars’ latest iteration also began in 2022, marking another step towards decentralizing the network. Stake Wars will increase the total number of validators as chunk-only producers for the next phases of sharding. The chunk-only producer role will be more accessible to new validators who don’t have sufficient $NEAR to run a Block Producer node.
Phase 1 of Nightshade Sharding Commences
Stake Wars was a critical step in the transition from Simple Nightshade to Phase 1 of sharding in 2022. As Phase 1 continues, total validators will increase from around 100 to 300, with a significantly lower seat price. Phase 1 was crucial to facilitate scaling, improve decentralization, and bring the Open Web to mass adoption.
There will be an 86% decrease in collateral requirements to become a chunk-only producer as Phase 1 of Nightshade concludes in 2022 and continues into the next year. Phase 1 went live on mainnet in September 2022 with key contributions and assistance from the Pagoda team.
As NEAR co-founder Illia Polosukhin told CoinDesk in the lead-up to the Phase 1 roll-out, “the more users the network gets, the more decentralized the network gets as well.” This allows NEAR to add more validators in response to more demand for the network, delivering on the promise of speed, scalability, and efficiency.
Major Strides in Funding and Transparency
A thriving NEAR ecosystem requires resources and trust, both of which made major strides. The birth of Transparency Reports assured the community that all core stakeholders in NEAR are operating in good faith and from a position of financial strength. These reports provide important information about the health of the protocol and ecosystem, including staking distribution, daily transactions, and new accounts created. (Read the Q3 Transparency Report.)
NEAR Foundation CEO Marieke Flament also hosted a post-FTX AMA, re-assuring the ecosystem of NEAR’s runway and explaining why the current bear market is a time for a conviction to build.
To see how NEAR Foundation and other ecosystem funding projects have been distributing portions of the $800M in funding throughout 2022, check out Q1/Q2 and Q3 Transparency Reports. Key areas of funding include Proximity Labs and DeFi, DAOs, NFT infrastructure, and Regional Hubs. The NEAR Digital Collective (NDC) was also announced and launched at NEARCON, with one of the goals being to further decentralize and democratize grants giving and decision-making processes.
Ecosystem Growth
NEAR’s 2022 was one of huge growth and innovation. New partnerships like Sweatcoin and SailGP were major stepping stones toward bringing Web3 to the masses. Projects in areas such as gaming, music, and NFTs showcased that the NEAR ecosystem is thriving and poised for new heights in 2023.
The NEAR protocol experienced 15x growth in cumulative accounts over the past year with 22M+ today. NEAR also has 900K monthly active wallets, marking a major increase from this time last year. And in 2022, the NEAR ecosystem attracted $330M of external capital to projects building on NEAR.
Movement Economy
One of the biggest catalysts for new NEAR wallet and account creation was the partnership with Sweatcoin. The $SWEAT token rewards users for every step they take throughout the day, encouraging users to live healthy lifestyles. Sweatcoin migrating to NEAR pushed total wallets from 2 million at the beginning of the year to over 20 million by November. As Flament pointed out during the Sweatcoin keynote at NEARCON, the movement economy is in its infancy, and NEAR is poised to be a leader in the space with the help of Sweatcoin.
DAO Innovation
Another huge addition to the NEAR ecosystem was the world-renowned boat racing league, SailGP. SailGP partnered with NEAR to pioneer the intersection of sports, Web3, and Decentralized Autonomous Organizations (DAOs). In addition to offering NFT collectibles on the NEAR blockchain to fans, SailGP will use AstroDAO tooling to create the first fan-owned team as a DAO. In addition to the growth of ecosystem projects like Kin DAO for equity and inclusion, it was a banner year for NEAR and DAOs.
Blockchain Music
You may not have noticed, but 2022 saw a massive surge in interest about music and Web3. And one of the most innovative projects in this area was in the NEAR ecosystem, with the launch of Endlesss. The music creation, marketing, and community development platform’s NEAR integration went live in the summer as a virtual gathering place blending social media features with music production tools. Endlesss enables musicians of all skill levels to conduct “jam sessions” on the NEAR blockchain and mint their music as NFTs.
Gaming
The NEAR ecosystem got a huge dose of star power with the unveiling of Armored Kingdom. Backed by Hollywood star Mila Kunis, Armored Kingdom will be an immersive gaming, NFT, storytelling, and metaverse experience built on the NEAR blockchain. The project kicked off with a first edition NFT comic book airdrop at Consensus, Austin. NEAR also announced the launch of the South Korea Regional Hub with a focus on bringing the NEAR blockchain to the massive local game development community.
PlayEmber also established itself as a key player in the NEAR gaming ecosystem, taking a mobile-first approach to Web3 gaming and bringing advertisers into the space. PlayEmber’s games now have over 4.2 million monthly active users, and recently closed a $2.3 million pre-seed raise led by Shima Capital.
NFTs
With core NFT infrastructure maturing in the NEAR ecosystem, 2022 was a year of innovative use cases and groundbreaking projects. NEARCON saw the announcement of a key grant to Few and Far, a premium NFT marketplace on NEAR with a seamless UX and simple minting solutions. One of NEAR’s biggest NFT projects, Mintbase, received over $12 million in funding this August. Mintbase empowers niche creators with the ability to mint NFTs of any type with little technical know-how, exemplifying NEAR’s commitment to making Web3 easy for everyone.
Looking Ahead
From the launch of Phase 1 sharding to protocol upgrades and ecosystem growth, 2022 was a huge leap for the NEAR ecosystem in many respects. It’s the year that the “Create Without Limits” vision was introduced, the biggest NEARCON ever took place, and some of the most important partnerships in the history of NEAR were cemented. It was also a year in which NEAR committed to more transparency and communication with the community.
Looking forward, NEAR will continue to champion Web3 as a catalyst for change, in addition to environmental sustainability as a carbon-neutral blockchain. In 2023, the NEAR community can expect even more partnerships that push boundaries and support projects that will enhance and empower a prosperous NEAR ecosystem.
NEAR Foundation to fund USN Protection Programme
COMMUNITY
October 24, 2022
The NEAR Foundation has set aside $40m USD for a USN Protection Programme grant, designed to protect users from a recent issue relating to USN by ensuring eligible USN holders can redeem their USN on a 1:1 basis with USDT.e. The Programme is now live, run by a subsidiary of Aurora Labs, and can be found here.
Decentral Bank (DCB) recently contacted the NEAR Foundation to advise that USN – a NEAR-native stablecoin created and launched by DCB independently of NEAR Foundation – had become undercollateralised. DCB confirmed that this was due to an initial algorithmic version of USN (v1), which is no longer algorithmic since its upgrade to v2 in June, that was susceptible to undercollateralisation during extreme market conditions.
This collateral gap of $40m USD is fully covered by the USN Protection Programme. This gap is fixed and not linked to the $NEAR token price in any way (and $NEAR has never had a hardcoded burn/mint relationship to USN).
Given the issues described above, the NEAR Foundation is recommending that DCB wind-down USN in an orderly manner. To assist with this process, the NEAR Foundation has provided a $40m USD grant to a subsidiary of Aurora Labs – one of the NEAR ecosystem’s most prominent contributors – to set up the USN Protection Programme which is now live. Redemptions will begin once DCB has taken certain steps to ensure an orderly wind down. This Programme will be available for 1 year (until 24th October 2023). Eligible USN holders can choose to redeem their USN through the USN Protection Programme or can exchange their USN via any other available route. We’re confident this action best safeguards users and the wider NEAR ecosystem by funding the known collateral gap.
The NEAR Foundation is confident that as the ecosystem grows and matures, this type of intervention should not be required in the future. Moving forward, the NEAR Foundation expects to be working with the NEAR Digital Collective (NDC) and wider community to set up a funded initiative with the remit of developing robust community standards and guardrails, in particular in relation to stablecoins, to help ensure that users are protected in situations of rapid innovation, and evolving markets and regulations.
Note: this article is a summary of and based on the assumptions in a longer article describing events relating to USN and information on the USN Protection Programme (including eligibility requirements and excluded jurisdictions incl. the US) in more detail – this can be found here
```bash
near call token.v2.ref-finance.near storage_deposit '{"account_id": "alice.near"}' --depositYocto 1250000000000000000000 --accountId bob.near
```
---
NEP: 508
Title: Resharding v2
Authors: Waclaw Banasik, Shreyan Gupta, Yoon Hong
Status: Draft
DiscussionsTo: https://github.com/near/nearcore/issues/8992
Type: Protocol
Version: 1.0.0
Created: 2023-09-19
LastUpdated: 2023-11-14
---
## Summary
This proposal introduces a new implementation for resharding and a new shard layout for the production networks.
In essence, this NEP is an extension of [NEP-40](https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md), which was focused on splitting one shard into multiple shards.
We are introducing resharding v2, which supports splitting one shard into two within one epoch at a pre-determined split boundary. The NEP includes performance improvements that make resharding feasible at the current state size, as well as an actual resharding of mainnet and testnet (specifically, splitting the largest shard into two).
While the new approach addresses critical limitations left unsolved in NEP-40 and is expected to remain valid for the foreseeable future, it does not serve all use cases, such as dynamic resharding.
## Motivation
Currently, NEAR protocol has four shards. With more partners onboarding, we started seeing that some shards occasionally become over-crowded with respect to total state size and number of transactions. In addition, with state sync and stateless validation, validators will not need to track all shards and validator hardware requirements can be greatly reduced with smaller shard size. With future in-memory tries, it's also important to limit the size of individual shards.
## Specification
### High level assumptions
* Flat storage is enabled.
* Shard split boundary is predetermined and hardcoded. In other words, the necessity of shard splitting is decided manually.
* For the time being resharding as an event is only going to happen once but we would still like to have the infrastructure in place to handle future resharding events with ease.
* Merkle Patricia Trie is the underlying data structure for the protocol state.
* Epoch is at least 6 hrs long for resharding to complete.
### High level requirements
* Resharding must be fast enough so that both state sync and resharding can happen within one epoch.
* Resharding should work efficiently within the limits of the current hardware requirements for nodes.
* Potential failures in resharding may require intervention from node operator to recover.
* No transaction or receipt must be lost during resharding.
* Resharding must work regardless of number of existing shards.
* No apps, tools or code should hardcode the number of shards to 4.
### Out of scope
* Dynamic resharding
* automatically scheduling resharding based on shard usage/capacity
* automatically determining the shard layout
* Merging shards or boundary adjustments
* Shard reshuffling
### Required protocol changes
A new protocol version will be introduced specifying the new shard layout which would be picked up by the resharding logic to split the shard.
### Required state changes
* For the duration of the resharding the node will need to maintain a snapshot of the flat state and related columns. As the main database and the snapshot diverge this will cause some extent of storage overhead.
* For the duration of the epoch before the new shard layout takes effect, the node will need to maintain the state and flat state of shards in the old and new layout at the same time. The State and FlatState columns will grow up to approx 2x the size. The processing overhead should be minimal as the chunks will still be executed only on the parent shards. There will be increased load on the database while applying changes to both the parent and the children shards.
* The total storage overhead is estimated to be on the order of 100GB for mainnet RPC nodes and 2TB for mainnet archival nodes. For testnet the overhead is expected to be much smaller.
### Resharding flow
* The new shard layout will be agreed on offline by the protocol team and hardcoded in the reference implementation.
* The first resharding will be scheduled soon after this NEP is merged. The new shard layout boundary accounts will be: ```["aurora", "aurora-0", "kkuuue2akv_1630967379.near", "tge-lockup.sweat"]```.
* Subsequent reshardings will be scheduled as needed, without further NEPs, unless significant changes are introduced.
* In epoch T, past the protocol version upgrade date, nodes will vote to switch to the new protocol version. The new protocol version will contain the new shard layout.
* In epoch T, in the last block of the epoch, the EpochConfig for epoch T+2 will be set. The EpochConfig for epoch T+2 will have the new shard layout.
* In epoch T + 1, all nodes will perform the state split. The child shards will be kept up to date with the blockchain up until the epoch end first via catchup, and later as part of block postprocessing state application.
* In epoch T + 2, the chain will switch to the new shard layout.
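The epoch timeline above can be summarized in a short sketch (illustrative only, not nearcore code): given the upgrade epoch T, the old layout stays in effect through T + 1, the state split runs during T + 1, and the new layout activates in T + 2.

```python
# Illustrative sketch of the resharding schedule described above.
# `upgrade_epoch` is T, the epoch in which nodes vote to switch to the
# new protocol version.

def shard_layout_version(epoch: int, upgrade_epoch: int) -> str:
    """Return which shard layout is in effect for a given epoch.

    The new layout is set in the EpochConfig for T + 2, so epochs T and
    T + 1 still use the old layout.
    """
    return "new" if epoch >= upgrade_epoch + 2 else "old"

def is_splitting(epoch: int, upgrade_epoch: int) -> bool:
    """The state split itself is performed during epoch T + 1."""
    return epoch == upgrade_epoch + 1

T = 100
assert shard_layout_version(T, T) == "old"      # voting epoch
assert shard_layout_version(T + 1, T) == "old"  # split in progress
assert shard_layout_version(T + 2, T) == "new"  # new layout live
assert is_splitting(T + 1, T)
```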
## Reference Implementation
The implementation heavily re-uses the implementation from [NEP-40](https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md). Below are listed the major differences and additions.
### Code pointers to the proposed implementation
* [new shard layout](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/core/primitives/src/shard_layout.rs#L161)
* [the main logic for splitting states](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/chain/chain/src/resharding.rs#L280)
* [the main logic for applying chunks to split states](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/chain/chain/src/update_shard.rs#L315)
* [the main logic for garbage collecting state from parent shard](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/chain/chain/src/store.rs#L2335)
### Flat Storage
The old implementation of resharding relied on iterating over the full trie state of the parent shard in order to build the state for the children shards. This implementation was suitable at the time but since then the state has grown considerably and this implementation is now too slow to fit within a single epoch. The new implementation relies on iterating through the flat storage in order to build the children shards quicker. Based on benchmarks, splitting the largest shard by using flat storage can take around 15 min without throttling and around 3 hours with throttling to maintain the block production rate.
The new implementation will also propagate the flat storage for the children shards and keep it up to date with the chain until the switch to the new shard layout in the next epoch. The old implementation didn't handle this case because the flat storage didn't exist back then.
In order to ensure consistent view of the flat storage while splitting the state the node will maintain a snapshot of the flat state and related columns as of the last block of the epoch prior to resharding. The existing implementation of flat state snapshots used in State Sync will be used for this purpose.
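The flat-storage-based split can be pictured as a single linear scan that routes each entry to the correct child. The sketch below is a simplification, assuming flat state is keyed by account id alone (real flat storage keys also encode the trie key type):

```python
# Illustrative sketch (not the nearcore implementation) of splitting a
# parent shard by iterating its flat storage: each entry is routed to a
# child shard by comparing the account id against the split boundary.

def split_shard(flat_state: dict, boundary: str):
    """Split a parent shard's flat state at `boundary`.

    Accounts below the boundary go to the left child, the rest to the
    right child. Because flat storage is a flat key-value view of the
    trie, the children can be built in one linear scan instead of a
    full trie traversal.
    """
    left, right = {}, {}
    for account_id, value in flat_state.items():
        (left if account_id < boundary else right)[account_id] = value
    return left, right

parent = {"alice": b"1", "bob": b"2", "carol": b"3"}
left, right = split_shard(parent, "bob")
assert left == {"alice": b"1"}
assert right == {"bob": b"2", "carol": b"3"}
```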
### Handling receipts, gas burnt and balance burnt
When resharding, extra care should be taken when handling receipts in order to ensure that no receipts are lost or duplicated. The gas burnt and balance burnt also need to be correctly handled. The old resharding implementation for handling receipts, gas burnt and balance burnt relied on the fact in the first resharding there was only a single parent shard to begin with. The new implementation will provide a more generic and robust way of reassigning the receipts to the child shards, gas burnt, and balance burnt, that works for arbitrary splitting of shards, regardless of the previous shard layout.
### New shard layout
The first release of resharding v2 will contain a new shard layout where one of the existing shards is split into two smaller shards. Additional reshardings can be scheduled in subsequent releases without further NEPs, unless significant changes call for one. Each new shard layout will be determined, scheduled, and executed with the next protocol upgrade. Resharding will typically happen by splitting one of the existing shards into two smaller shards. The new shard layout is created by adding a new boundary account, determined by analysing the storage and gas usage metrics within the shard and selecting a point that divides the shard roughly in half according to those metrics. Other metrics can also be used as requirements evolve.
### Removal of Fixed shards
Fixed shards was a feature of the protocol that allowed for assigning specific accounts and all of their recursive sub accounts to a predetermined shard. This feature was only used for testing and was never used in production. Fixed shards feature unfortunately breaks the contiguity of shards and is not compatible with the new resharding flow. A sub account of a fixed shard account can fall in the middle of account range that belongs to a different shard. This property of fixed shards made it particularly hard to reason about and implement efficient resharding.
For example, in a shard layout with boundary accounts [`b`, `d`] the account space is cleanly divided into three shards, each spanning a contiguous range of account ids:
* 0 - `:b`
* 1 - `b:d`
* 2 - `d:`
Now if we add a fixed shard `f` to the same shard layout, we'll have 4 shards but none of them is contiguous. Accounts such as `aaa.f`, `ccc.f`, `eee.f` that would otherwise belong to shards 0, 1 and 2 respectively are now all assigned to the fixed shard and create holes in the shard account ranges.
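Without fixed shards, assigning an account to a shard is a simple ordered lookup against the boundary accounts. A minimal sketch of that contiguous assignment (illustrative only; account names are from the example above):

```python
import bisect

# Illustrative sketch: boundary accounts partition the account id space
# into contiguous shards. Shard i holds accounts in the half-open range
# [boundary[i-1], boundary[i]).

def account_to_shard(account_id: str, boundary_accounts: list) -> int:
    # bisect_right places an account equal to a boundary into the shard
    # that starts at that boundary.
    return bisect.bisect_right(boundary_accounts, account_id)

boundaries = ["b", "d"]
assert account_to_shard("aaa", boundaries) == 0
assert account_to_shard("b", boundaries) == 1    # boundary starts shard 1
assert account_to_shard("ccc.f", boundaries) == 1
assert account_to_shard("eee.f", boundaries) == 2
```

Note how `aaa.f`, `ccc.f`, and `eee.f` land in three different shards under this scheme; a fixed shard `f` would have pulled them all into one shard and broken the contiguity.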
It's also worth noting that there is no benefit to having accounts colocated in the same shard. Any transaction or receipt is treated the same way regardless of crossing shard boundary.
This was implemented ahead of this NEP and the fixed shards feature was **removed**.
### Garbage collection
In epoch T+2 once resharding is completed, we can delete the trie state and the flat state related to the parent shard. In practice, this is handled as part of the garbage collection code. While garbage collecting the last block of epoch T+1, we go ahead and clear all the data associated with the parent shard from the trie cache, flat storage, and RocksDB state associated with trie state and flat storage.
### Transaction pool
The transaction pool is sharded, i.e. it groups transactions by the shard where each transaction should be converted to a receipt. The transaction pool was previously sharded by ShardId. Unfortunately, ShardId is insufficient to correctly identify a shard across a resharding event, because ShardIds are reused across shard layouts. The transaction pool was migrated to group transactions by ShardUId instead, and a transaction pool resharding was implemented to reassign transactions from the parent shard to the children shards right before the new shard layout takes effect. The ShardUId contains the version of the shard layout, which allows differentiating between shards in different shard layouts.
This was implemented ahead of this NEP and the transaction pool is now fully **migrated** to ShardUId.
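The distinction between ShardId and ShardUId can be shown with a tiny sketch (field names are for illustration; see nearcore for the real type): a parent shard before the split and one of its children after the split may share the same shard id, but never the same ShardUId.

```python
from dataclasses import dataclass

# Illustrative sketch of why ShardUId (layout version + shard id) can
# tell shards apart across a resharding while a bare ShardId cannot.

@dataclass(frozen=True)
class ShardUId:
    version: int   # shard layout version
    shard_id: int  # shard id within that layout

# Shard 3 in the old layout and shard 3 in the new layout are different
# shards; a pool keyed by ShardUId keeps their transactions separate.
parent = ShardUId(version=1, shard_id=3)
child = ShardUId(version=2, shard_id=3)
assert parent != child            # distinct keys across layouts
assert parent.shard_id == child.shard_id  # bare ShardId would collide
```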
## Alternatives
### Why is this design the best in the space of possible designs?
This design is simple, robust, safe, and meets all requirements.
### What other designs have been considered and what is the rationale for not choosing them?
#### Alternative implementations
* Splitting the trie by iterating over the boundaries between children shards for each trie record type. This implementation has the potential to be faster, but it is more complex and would take longer to implement. We opted for the much simpler flat storage approach, given that it is already quite performant.
* Changing the trie structure to have the account id first and type of record later. This change would allow for much faster resharding by only iterating over the nodes on the boundary. This approach has two major drawbacks without providing too many benefits over the previous approach of splitting by each trie record type.
1) It would require a massive migration of trie.
2) We would need to maintain the old and the new trie structure forever.
* Changing the storage structure by having the storage key to have the format of `account_id.node_hash`. This structure would make it much easier to split the trie on storage level because the children shards are simple sub-ranges of the parent shard. Unfortunately we found that the migration would not be feasible.
* Changing the storage structure by having the key format as only node_hash and dropping the ShardUId prefix. This is a feasible approach, but it adds complexity to garbage collection and data deletion, especially once nodes start tracking only one shard. We opted for the much simpler approach of keeping the existing scheme of prefixing storage entries by shard uid.
#### Other considerations
* Dynamic Resharding - we have decided to not implement the full dynamic resharding at this time. Instead we hardcode the shard layout and schedule it manually. The reasons are as follows:
* We prefer incremental process of introducing resharding to make sure that it is robust and reliable, as well as give the community the time to adjust.
* Each resharding increases the potential total load on the system. We don't want to allow it to grow until full sharding is in place and we can handle that increase.
* Extended shard layout adjustments - we have decided to only implement shard splitting and not implement any other operations. The reasons are as follows:
* In this iteration we only want to perform splitting.
    * The extended adjustments are currently not justified. Both merging and boundary moving may be useful in the future when traffic patterns change and some shards become underutilized. In the nearest future we only predict needing to reduce the size of the heaviest shards.
### What is the impact of not doing this?
We need resharding in order to scale up the system. Without resharding eventually shards would grow so big (in either storage or cpu usage) that a single node would not be able to handle it. Additionally, this clears up the path to implement in-memory tries as we need to store the whole trie structure in limited RAM. In the future smaller shard size would lead to faster syncing of shard data when nodes start tracking just one shard.
## Integration with State Sync
There are two known issues in the integration of resharding and state sync:
* When syncing the state for the first epoch where the new shard layout is used. In this case the node would need to apply the last block of the previous epoch. It cannot be done on the child shards, as on chain the block was applied on the parent shards and the trie related gas costs would be different.
* When generating proofs for incoming receipts. The proof for each of the children shards contains only the receipts of the shard but it's generated on the parent shard layout and so may not be verified.
In this NEP we propose that resharding should be rolled out first, before any real dependency on state sync is added. We can then safely roll out the resharding logic and solve the above mentioned issues separately. We believe at least some of the issues can be mitigated by the implementation of new pre-state root and chunk execution design.
## Integration with Stateless Validation
The Stateless Validation requires that chunk producers provide proof of correctness of the transition function from one state root to another. That proof for the first block after the new shard layout takes place will need to prove that the entire state split was correct as well as the state transition.
In this NEP we propose that resharding should be rolled out first, before stateless validation. We can then safely roll out the resharding logic and solve the above mentioned issues separately. This issue was discussed with the stateless validation experts and we are cautiously optimistic that the integration will be possible. The most concerning part is the proof size and we believe that it should be small enough thanks to the resharding touching relatively small number of trie nodes - on the order of the depth of the trie.
## Future fast-followups
### Resharding should work even when validators stop tracking all shards
As mentioned above under the 'Integration with State Sync' section, the initial release of resharding v2 will happen before the full implementation of state sync, and we plan to tackle the integration between resharding and state sync after the next shard split. (This won't need a separate NEP, as the integration does not require a protocol change.)
### Resharding should work after stateless validation is enabled
As mentioned above under the 'Integration with Stateless Validation' section, the initial release of resharding v2 will happen before the full implementation of stateless validation, and we plan to tackle the integration between resharding and stateless validation after the next shard split. (This may need a separate NEP, depending on implementation details.)
## Future possibilities
### Further reshardings
This NEP introduces both an implementation of resharding and an actual resharding to be done in the production networks. Further reshardings can also be performed in the future by adding a new shard layout and setting the shard layout for the desired protocol version in the `AllEpochConfig`.
### Dynamic resharding
As noted above, dynamic resharding is out of scope for this NEP and should be implemented in the future. Dynamic resharding includes, but is not limited to, the following:
* Automatic determination of split boundary based on parameters like traffic, gas usage, state size, etc.
* Automatic scheduling of resharding events
### Extended shard layout adjustments
In this NEP we only propose supporting splitting shards. This operation should be more than sufficient for the near future but eventually we may want to add support for more sophisticated adjustments such as:
* Merging shards together
* Moving the boundary account between two shards
### Localization of resharding event to specific shard
As of today, at the RocksDB storage layer, we have the ShardUId, i.e. the ShardId along with the ShardVersion, as a prefix in the key of trie state and flat state. During a resharding event, we increment the ShardVersion by one, and effectively remap all the current parent shards to new child shards. This implies we can't use the same underlying key value pairs for store and instead would need to duplicate the values with the new ShardUId prefix, even if a shard is unaffected and not split.
In the future, we would like to potentially change the schema in a way such that only the shard that is splitting is impacted by a resharding event, so as to avoid additional work done by nodes tracking other shards.
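The effect of the ShardUId prefix can be illustrated with a toy sketch (the field widths and encoding here are illustrative, not nearcore's actual serialization):

```javascript
// Toy sketch: because the ShardVersion is part of every storage key,
// bumping it remaps all keys, even for shards that were not split.
function trieKey(shardVersion, shardId, keyBytes) {
  const prefix = Buffer.alloc(8);
  prefix.writeUInt32LE(shardVersion, 0); // ShardVersion
  prefix.writeUInt32LE(shardId, 4);      // ShardId
  return Buffer.concat([prefix, Buffer.from(keyBytes)]);
}

const before = trieKey(1, 0, [0xde, 0xad]);
const after = trieKey(2, 0, [0xde, 0xad]); // same shard, same trie node...
console.log(before.equals(after)); // false -- the value must be duplicated
```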
### Other useful features
* Removal of shard uids and introducing globally unique shard ids
* Account colocation for low-latency cross-account calls - if we start considering a synchronous execution environment, colocating associated accounts (e.g. accounts making cross-contract calls to each other) in the same shard can increase efficiency
* Shard purchase/reservation - when someone wants to secure the entirety of a single shard's limits (e.g. the state size limit) for themselves, they can 'purchase/reserve' the shard so it can be dedicated to them (similar to how Aurora is set up)
## Consequences
### Positive
* Workload across shards will be more evenly distributed.
* The space required to maintain state (either in memory or on persistent disk) will be smaller. This is useful for in-memory tries.
* State sync overhead will be smaller with smaller state size.
### Neutral
* Number of shards would increase.
* Underlying trie structure and data structure are not going to change.
* Resharding will create dependency on flat state snapshots.
* The resharding process, as of now, is not fully automated. Analyzing shard data, determining the split boundary, and triggering an actual shard split all need to be manually curated and tracked.
### Negative
* During resharding, a node is expected to require more resources as it will first need to copy state data from the parent shard to the child shard, and then will have to apply trie and flat state changes twice, once for the parent shard and once for the child shards.
* Increased potential for apps and tools to break without proper shard layout change handling.
### Backwards Compatibility
Any light clients, tooling, or frameworks external to nearcore that have the current shard layout or the current number of shards hardcoded may break and will need to be adjusted in advance. The recommended fix is to query an RPC node for the shard layout of the relevant epoch and use that information in place of the previously hardcoded shard layout or number of shards. The shard layout can be queried using the `EXPERIMENTAL_protocol_config` RPC endpoint and reading the `shard_layout` field from the result. A dedicated endpoint may be added in the future as well.
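The query described above might look like this (the JSON-RPC request shape is standard; the endpoint URL is an example, and the response is abbreviated to the relevant field):

```javascript
// Build the JSON-RPC request for the protocol config of the latest final block.
const request = {
  jsonrpc: "2.0",
  id: "dontcare",
  method: "EXPERIMENTAL_protocol_config",
  params: { finality: "final" },
};

// e.g. with fetch:
// const res = await fetch("https://rpc.mainnet.near.org", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(request),
// });
// const { result } = await res.json();
// console.log(result.shard_layout); // use this instead of a hardcoded layout

console.log(JSON.stringify(request).length > 0); // true
```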
Within nearcore we do not expect anything to break with this change. However, shard splitting introduces additional complexity for replayability: since both the target shard of a receipt and the shard an account belongs to can change when a shard is split, the shard split must be replayed along with transactions at the exact epoch boundary.
## Changelog
[The changelog section provides historical context for how the NEP developed over time. Initial NEP submission should start with version 1.0.0, and all subsequent NEP extensions must follow [Semantic Versioning](https://semver.org/). Every version should have the benefits and concerns raised during the review. The author does not need to fill out this section for the initial draft. Instead, the assigned reviewers (Subject Matter Experts) should create the first version during the first technical review. After the final public call, the author should then finalize the last version of the decision context.]
### 1.0.0 - Initial Version
> Placeholder for the context about when and who approved this NEP version.
#### Benefits
> List of benefits filled by the Subject Matter Experts while reviewing this version:
* Benefit 1
* Benefit 2
#### Concerns
> Template for Subject Matter Experts review for this version:
> Status: New | Ongoing | Resolved
| # | Concern | Resolution | Status |
| --: | :------ | :--------- | -----: |
| 1 | | | |
| 2 | | | |
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
```js
const ammContract = "v2.ref-finance.near";
const result = Near.view(
ammContract,
"get_pools",
{
from_index: 0,
limit: 1000
}
);
```
<details>
<summary>Example response</summary>
<p>
```js
[
{
pool_kind: 'SIMPLE_POOL',
token_account_ids: [ 'token.skyward.near', 'wrap.near' ],
amounts: [ '51865812079751349630100', '6254162663147994789053210138' ],
total_fee: 30,
shares_total_supply: '1305338644973934698612124055',
amp: 0
},
{
pool_kind: 'SIMPLE_POOL',
token_account_ids: [
'c02aaa39b223fe8d0a0e5c4f27ead9083c756cc2.factory.bridge.near',
'wrap.near'
],
amounts: [ '783621938569399817', '1100232280852443291118200599' ],
total_fee: 30,
shares_total_supply: '33923015415693335344747628',
amp: 0
}
]
```
</p>
</details>
---
sidebar_position: 4
---
# Deploying Contracts
You might want your smart contract to deploy subsequent smart contract code for a few reasons:
- The contract acts as a Factory, a pattern where a parent contract creates many child contracts ([Mintbase](https://www.mintbase.xyz/) does this to create a new NFT store for [anyone who wants one](https://docs.mintbase.xyz/creating/store/deploy-fee); [Rainbow Bridge](https://near.org/bridge/) does this to deploy separate Fungible Token contracts for [each bridged token](https://github.com/aurora-is-near/rainbow-token-connector/blob/ce7640da144f000e0a93b6d9373bbc2514e37f3b/bridge-token-factory/src/lib.rs#L311-L341))
- The contract [updates its own code](../../../2.build/2.smart-contracts/release/upgrade.md#programmatic-update) (calls `deploy` on itself).
- You could implement a "contract per user" system that creates app-specific subaccounts for users (`your-app.user1.near`, `your-app.user2.near`, etc) and deploys the same contract to each. This is currently prohibitively expensive due to NEAR's [storage fees](https://docs.near.org/concepts/storage/storage-staking), but that may be optimized in the future. If it is, this sort of "sharded app design" may become the more scalable, user-centric approach to contract standards and app mechanics. An early experiment with this paradigm was called [Meta NEAR](https://github.com/metanear).
If your goal is to deploy to a subaccount of your main contract like Mintbase or the Rainbow Bridge, you will also need to create the account. So, combining concepts from the last few pages, here's what you need:
```js
import { includeBytes, NearPromise, near } from "near-sdk-js";
const CODE = includeBytes("./res/contract.wasm");
NearPromise.new("subaccount.example.near")
.createAccount()
.addFullAccessKey(near.signerAccountPk())
.transfer(BigInt(3_000_000_000_000_000_000_000_000)) // 3e24yN, 3N
.deployContract(CODE);
```
Here's what a full contract might look like, showing a naïve way to pass `code` as an argument rather than hard-coding it with `includeBytes`:
```js
import { NearPromise, near, validateAccountId } from "near-sdk-js";
const INITIAL_BALANCE = BigInt(3_000_000_000_000_000_000_000_000); // 3e24yN, 3N
@NearBindgen({})
export class Contract {
@call({ privateFunction: true })
createAccount({ prefix, code }) {
const subAccountId = `${prefix}.${near.currentAccountId()}`;
validateAccountId(subAccountId);
return NearPromise.new(subAccountId)
.createAccount()
.addFullAccessKey(near.signerAccountPk())
.transfer(INITIAL_BALANCE)
.deployContract(code);
}
}
```
Why is this a naïve approach? It could run into issues because of the 4MB transaction size limit – the function above would deserialize and heap-allocate a whole contract. For many situations, the `includeBytes` approach is preferable. If you really need to attach compiled Wasm as an argument, you might be able to copy the approach [used by Sputnik DAO v2](https://github.com/near-daos/sputnik-dao-contract/blob/a8fc9a8c1cbde37610e56e1efda8e5971e79b845/sputnikdao2/src/types.rs#L74-L142).
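One workaround, which may be what the linked Sputnik DAO code is doing, is to accept the Wasm as a base64-encoded string rather than a JSON array of bytes, which keeps the serialized argument compact. A minimal illustration (the 8-byte value here is just the Wasm magic number and version, not a real contract):

```javascript
// Decode base64-encoded Wasm bytes. "AGFzbQEAAAA=" is the standard
// 8-byte Wasm header: "\0asm" magic followed by version 1.
const codeB64 = "AGFzbQEAAAA=";
const code = Buffer.from(codeB64, "base64");

console.log(code.length); // 8
console.log(code.subarray(0, 4).toString("latin1")); // "\0asm"
```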
ETH-NEAR Rainbow Bridge
DEVELOPERS
August 19, 2020
There are lots of blockchains and scalability solutions, and it is hard to decide which one to use when we are building a product, especially when it means committing with the assets or data. Ideally, we don’t want to commit at all. We want to freely move our assets and data between the blockchains, or even better — run our product on several blockchains at the same time and leverage each of them. For example, we can leverage performance on one blockchain while leveraging community and ecosystem on another blockchain.
At NEAR, we do not want Ethereum developers to choose between NEAR and Ethereum and commit to only one. We want them to have the same asset on both blockchains and even have apps that seamlessly communicate across the boundary. So we built a bridge, called Rainbow Bridge, to connect the Ethereum and NEAR blockchains, and we created the lowest possible trust level one can have for an interoperability solution — you only need to trust what it connects, the NEAR and Ethereum blockchains, and you don’t need to trust the bridge itself. There is no authority outside Ethereum miners and NEAR validators.
Specifically, to trust the bridge, you need to:
Trust that Ethereum blocks are final after X confirmations. Currently, the bridge implementation decides X for the app developer, but soon app developers will be able to determine X for themselves. It can be 25 if you are a typical app developer or 500 if you are super cautious;
Trust that at no time is 2/3 of the validator stake on the NEAR blockchain dishonest. Not only the bridge but all other applications on NEAR operate under this assumption;
Until EIP665 is accepted, you will need to trust that it is not possible to exponentially increase the minimum gas price of Ethereum blocks by more than 2x with every block for more than 4 hours. Assuming a base gas price of 40 gwei and a ~14-second block time, you can easily calculate that attempting to double the gas price with every block will cause it to exceed any reasonable limit well before the end of the 4 hours. We will explain below where this restriction comes from.
Since the Rainbow Bridge does not require the users to trust anything but the blockchains themselves, we call it trustless.
This trustless model results in the following latency number for interactions across the bridge:
For ETH->NEAR interactions, the latency is the speed of producing X Ethereum blocks, which is about 6 minutes for 25 blocks;
For NEAR->ETH interactions, the latency is 4 hours, and it will be about 14 seconds once EIP665 is accepted.
Notice how significantly the latency would drop if only EIP665 was accepted. Given that many scalability solutions require up to 7 days of waiting time for interoperability, we consider our bridge rapid. The speed is only limited by the lack of EIP665 and the number of confirmations for Ethereum blocks, which is the same limitation that applies to any Ethereum-based project.
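A quick sanity check of the ETH->NEAR number, assuming the ~14-second Ethereum block time:

```javascript
// Back-of-the-envelope: latency of an ETH->NEAR interaction is the time
// to produce X confirmation blocks on Ethereum.
const blockTimeSeconds = 14; // assumed average Ethereum block time
const confirmations = 25;
const ethToNearSeconds = blockTimeSeconds * confirmations;

console.log(ethToNearSeconds); // 350
console.log((ethToNearSeconds / 60).toFixed(1)); // "5.8" -- about 6 minutes
```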
As we will expand later, our bridge does not require special permission to deploy, maintain, or use. Anyone can deploy a new bridge, use an existing bridge, or join the maintenance of an existing bridge without getting approval from anyone else, not even NEAR Foundation, which makes our bridge decentralized.
Rainbow Bridge is also generic. Any information that is cryptographically provable on NEAR is usable in Ethereum contracts and the other way around. The following information is cryptographically provable for both blockchains:
Inclusion of a transaction in a block;
Execution of a transaction with a specific result;
The state of the contract.
Additionally, blockchain-specific information is provable, like the content of a specific block header, which in Ethereum would include things like information about the miner, and in NEAR would include information about the validators. Cryptographically-provable information allows us to build a variety of use-cases:
we can bridge fungible tokens, non-fungible tokens, or any kind of asset;
we can write Ethereum contracts that use the state of a contract or validator from NEAR;
we can do cross-contract calls across the bridge.
While the variety of use cases looks unlimited, we currently only have out-of-the-box support for transferring ERC20 tokens from Ethereum to the NEAR blockchain and back. However, we will add out-of-the-box support for other use cases based on demand. Additionally, anyone can jump in and add support for their own use case without waiting for us or working with our codebase.
To understand why this and other properties that we listed above hold, we need to understand its design.
*If you are a developer or designer, join us Sept. 15-30, 2020 for NEAR’s open online Hack The Rainbow 🌈 hackathon.
Design
Anton Bukov developed a large portion of the Rainbow Bridge design during his work in NEAR. He is now the CTO of 1inch exchange, but he still guides the high-level design of the bridge.
The core idea behind the bridge is that it implements two light clients:
An Ethereum light client implemented in Rust as a NEAR contract
A NEAR light client implemented in Solidity as an Ethereum contract
If you are familiar with the concept of a light client, this short outline already explains the above-listed guarantees. In short, a blockchain light client is a specification or an implementation of this specification that tracks the state of the blockchain without running heavy computation, but which can still verify the state that it tracks in a trustless way. The main focus is to be able to track and verify the state with a small amount of computation.
We realized that the amount of computation could be so small that it might be possible to run a light client inside a contract. This was the key to making the Rainbow Bridge feasible.
The Ethereum light client is more resource-intensive since it requires tracking every single header of the Ethereum blockchain, and it requires Ethash verification. The NEAR light client is less resource-intensive since it requires tracking only one block per epoch, where an epoch is approximately 43k blocks (this is a necessary quality since NEAR produces blocks much faster than Ethereum, which makes tracking all NEAR headers in an Ethereum Solidity contract challenging and frankly prohibitively expensive due to gas costs). Fortunately, NEAR gas limits allow more expensive computations than Ethereum. So we can run a more expensive Ethereum light client in the more computationally liberal NEAR blockchain while running a less expensive NEAR light client in the more computationally conservative Ethereum blockchain. What a fantastic coincidence! 🙂 You can find the specification of the NEAR light client here: https://nomicon.io/ChainSpec/LightClient.html
Light clients
Here is the simplified diagram of the light clients operating in the bridge:
Notice how, in addition to the smart contracts that implement the light clients, we have two services, called relays, that regularly send headers to the light clients. Eth2NearRelay sends every single header to the EthOnNearClient contract, while Near2EthRelay sends one header every 4 hours to the NearOnEthClient contract. For a given pair of EthOnNearClient and NearOnEthClient contracts, there can be several pairs of Eth2NearRelay and Near2EthRelay services. Each bridge maintainer can run its own pair of services. These pairs of services can either compete with each other (they would try submitting the same blocks simultaneously, and only one would succeed each time) or back each other up (some services would only submit blocks if others did not submit them in time). Our current implementation of the services runs in the first mode.
EthOnNearClient
EthOnNearClient, as we already said, is an implementation of the Ethereum light client in Rust as a NEAR contract. It accepts Ethereum headers and maintains the canonical chain; it assumes that blocks with finalized_gc_threshold confirmations cannot leave the canonical chain, and it memorizes up to hashes_gc_threshold blocks from the canonical chain, where hashes_gc_threshold >= finalized_gc_threshold. By default, finalized_gc_threshold=46k, which roughly corresponds to 7 days' worth of headers. This is done so that the state of EthOnNearClient does not grow endlessly. Remember that in NEAR one is required to have a certain amount of locked tokens to be able to use state (more info), which would require a very large, ever-growing number of locked tokens if we were storing the hash of every single Ethereum canonical header in a single contract. Therefore we store a limited number of hashes of Ethereum headers. As a consequence, the bridge can only be used to prove events that happened within this time horizon. So if you ever start an ERC20 transfer from Ethereum to NEAR, please make sure to finish it within 7 days if it gets interrupted in the middle.
Another important nuance of EthOnNearClient is how it verifies Ethereum headers. It wouldn’t be possible to verify Ethereum PoW directly inside the contract since it would require storing the Ethereum DAG file, which would require prohibitive memory usage. Fortunately, every Ethereum block uses only a subset of the elements from the DAG file, and there is only one DAG file per Ethereum epoch. Moreover, DAG files can be precomputed for future epochs in advance. We precompute DAG files for about 4 years in advance and merkelize each of them. EthOnNearClient contract then memorizes the merkle roots of the DAG files for the next 4 years upon initialization. EthOnNearClient then only needs to receive the Ethereum header, the DAG elements, and the merkle proofs of these elements, which allows it to verify PoW without having the entire DAG file in memory. We took this approach from the EOS bridge used by Kyber network, and we even reuse some of their code. Fortunately, the Ethereum header’s verification uses only 1/3 of the max transaction gas limit and 1/10 of the block gas limit.
NearOnEthClient
NearOnEthClient is an implementation of the NEAR light client in Solidity as an Ethereum contract. Unlike EthOnNearClient it does not need to verify every single NEAR header and can skip most of them as long as it verifies at least one header per NEAR epoch, which is about 43k blocks and lasts about half a day. As a result, NearOnEthClient can memorize hashes of all submitted NEAR headers in history, so if you are making a transfer from NEAR to Ethereum and it gets interrupted you don’t need to worry and you can resume it any time, even months later. Another useful property of the NEAR light client is that every NEAR header contains a root of the merkle tree computed from all headers before it. As a result, if you have one NEAR header you can efficiently verify any event that happened in any header before it.
Another useful property of the NEAR light client is that it only accepts final blocks, and final blocks cannot leave the canonical chain in NEAR. This means that NearOnEthClient does not need to worry about forks.
However, unfortunately, NEAR uses Ed25519 to sign messages of the validators who approve the blocks, and this signature is not available as an EVM precompile. It makes verification of all signatures of a single NEAR header prohibitively expensive. So technically, we cannot verify one NEAR header within one contract call to NearOnEthClient. Therefore we adopt the optimistic approach where NearOnEthClient verifies everything in the NEAR header except the signatures. Then anyone can challenge a signature in a submitted header within a 4-hour challenge window. The challenge requires verification of a single Ed25519 signature which would cost about 500k Ethereum gas (expensive, but possible). The user submitting the NEAR header would have to post a bond in Ethereum tokens, and a successful challenge would burn half of the bond and return the other half to the challenger. The bond should be large enough to pay for the gas even if the gas price increases exponentially during the 4 hours. For instance, a 20 ETH bond would cover gas price hikes up to 20000 Gwei. This optimistic approach requires having a watchdog service that monitors submitted NEAR headers and challenges any headers with invalid signatures. For added security, independent users can run several watchdog services.
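The bond sizing above can be checked with simple arithmetic: a ~500k gas challenge at a 20,000 gwei gas price costs 10 ETH, which is exactly the challenger's half of a 20 ETH bond:

```javascript
// Worked example of the challenge economics (all values in wei, via BigInt).
const challengeGas = 500_000n;                 // cost of one Ed25519 check
const gasPriceWei = 20_000n * 10n ** 9n;       // 20,000 gwei, the spike case
const challengeCostWei = challengeGas * gasPriceWei;

const ethCost = Number(challengeCostWei / 10n ** 18n);
console.log(ethCost); // 10 -- covered by half of the 20 ETH bond
```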
Once EIP665 is accepted, Ethereum will have the Ed25519 signature available as an EVM precompile. This will make watchdog services and the 4-hour challenge window unnecessary.
At its bare minimum, Rainbow Bridge consists of EthOnNearClient and NearOnEthClient contracts, and three services: Eth2NearRelay, Near2EthRelay, and the Watchdog. We might argue that this already constitutes a bridge since we have established a cryptographic link between two blockchains, but practically speaking it requires a large portion of additional code to make application developers even consider using the Rainbow Bridge for their applications.
Provers
What would make clients more useful is the ability to prove that something happened on a specific blockchain. For that, we implement EthOnNearProver NEAR contract in Rust and NearOnEthProver Ethereum contract in Solidity. EthOnNearProver can verify Ethereum events, while the NearOnEthProver contract can verify NEAR contract execution results. Both EthOnNearProver and NearOnEthProver contracts are implemented separately from EthOnNearClient and NearOnEthClient for multiple reasons:
Separation of concerns — clients are responsible for keeping track of the most recent blockchain header, while provers are responsible for verifying specific cryptographic information;
Scalability — since NEAR is a sharded blockchain, if a certain instance of the bridge becomes extremely popular, users will be able to scale the load by deploying multiple copies of a given prover;
Specificity — in addition to the provers described above, one can implement other kinds of provers. One could verify a contract’s state, the specific content of a header, or the inclusion of a transaction. Additionally, different implementations of provers could use different optimizations.
Currently, we have implementations only for verifying Ethereum events and NEAR contract execution results, which is sufficient for moving tokens between the blockchains. But anyone is welcome to contribute prover implementations.
(Caption: One EthOnNearClient can have multiple provers talking to it replicated across multiple shards, while NearOnEthClient will have a single copy of each kind of the prover talking to it).
ERC20 use case
The provers enable us to build a set of contracts that allow asset transfer or cross-chain communication. However, these contracts will be very different depending on what exactly we want to interoperate across the bridge. For example, ERC20 requires an entirely separate set of contracts from ERC721 or native token transfer. Currently, the Rainbow Bridge has out-of-the-box support for generic ERC20 token transfers, and we will add more use cases in the future. In this post, we focus only on the ERC20 use case.
Suppose there is an existing ERC20 token on Ethereum, e.g., DAI. For us to move it across the bridge, we need to deploy two additional contracts:
TokenLocker Ethereum contract implemented in Solidity;
MintableFungibleToken NEAR contract implemented in Rust.
When a user wants to transfer X amount of DAI from Ethereum to NEAR, they first lock this X DAI in the TokenLocker contract, and then mint X nearDAI in the MintableFungibleToken NEAR contract. When they want to transfer some tokens back from NEAR to Ethereum, they first burn Y nearDAI in MintableFungibleToken and then unlock Y DAI in TokenLocker.
Currently, each instance of ERC20 token would require deploying a separate pair of TokenLocker/MintableFungibleToken; however, we are going to lift this restriction in the future and use the same TokenLocker for multiple ERC20 tokens. Note that if someone wants to add support of ERC721 or another type of asset transfer, they would need to implement similar Locker/Asset pair with a different external interface and internal implementation, but the high-level design would remain the same — the Locker would lock/unlock assets while Asset would mint/burn the NEAR version of this asset.
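The lock/mint and burn/unlock flow can be modeled in a few lines. This is a toy model using the contract names from the post; the real contracts are on two different chains and involve proofs and asynchronous calls. The invariant it illustrates: nearDAI minted on NEAR always equals DAI locked in the TokenLocker.

```javascript
// Toy single-process model of TokenLocker + MintableFungibleToken.
class BridgeModel {
  constructor() {
    this.locked = 0n;            // DAI held by TokenLocker (Ethereum side)
    this.minted = new Map();     // nearDAI balances (NEAR side)
  }
  // Ethereum -> NEAR: lock X DAI, then mint X nearDAI for the beneficiary.
  lockAndMint(who, x) {
    this.locked += x;
    this.minted.set(who, (this.minted.get(who) ?? 0n) + x);
  }
  // NEAR -> Ethereum: burn Y nearDAI, then unlock Y DAI.
  burnAndUnlock(who, y) {
    const bal = this.minted.get(who) ?? 0n;
    if (bal < y) throw new Error("insufficient nearDAI");
    this.minted.set(who, bal - y);
    this.locked -= y;
  }
  totalMinted() {
    return [...this.minted.values()].reduce((a, b) => a + b, 0n);
  }
}

const bridge = new BridgeModel();
bridge.lockAndMint("bob.near", 100n);
bridge.burnAndUnlock("bob.near", 40n);
console.log(bridge.locked === bridge.totalMinted()); // true, always
```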
Usage flow
Users will be able to use either RainbowCLI or any app that integrates with RainbowLib, like NEAR Wallet. In rare cases, someone might choose to work with the bridge manually by calling Ethereum and NEAR contracts from their shell using low-level contract calls. Currently, we only support RainbowCLI, but we are working on RainbowLib and Wallet integrations.
From the user perspective, the transfer of a token should be straightforward: they provide credentials, choose beneficiary and amount, and initiate the transfer either from RainbowCLI, Wallet, or other applications. After some time, the transfer succeeds, and the user receives visual confirmation. Behind the scenes however, RainbowLib will perform the following complex operations:
Suppose Alice wants to transfer X DAI to Bob on NEAR blockchain and she initiates the transfer from RainbowCLI/RainbowLib;
RainbowLib first sets an allowance to transfer X DAI from Alice to TokenLocker;
It then calls TokenLocker to grab those tokens resulting in TokenLocker emitting event “Alice locked X tokens in favor of Bob”;
RainbowLib then waits until EthOnNearClient receives the Ethereum header that contains this event, plus 25 more blocks for confirmation (see the note on Ethereum finality in the opening section);
Then RainbowLib computes the proof of this event and submits it to the MintableFungibleToken contract;
MintableFungibleToken contract then verifies that this proof is correct by calling EthOnNearProver;
EthOnNearProver, in turn, verifies that the header of the proof is on the canonical chain of EthOnNearClient, and it has the required number of confirmations. It also verifies the proof itself;
MintableFungibleToken then unpacks the Ethereum event and mints X nearDAI for Bob, finishing the transfer.
Similarly, RainbowLib will be able to perform transfers of non-ERC20 tokens and do contract calls.
Hard forks
We want existing Rainbow Bridges not to break when NEAR or Ethereum protocol change is happening. And we want to be able to upgrade bridge contracts when serious performance upgrades become available, e.g., if EIP665 is accepted. However, we don’t want it to become less trustless or decentralized by introducing a centralized way of initiating the upgrade.
There is a large variety of proxy patterns developed by the Ethereum community that allow contract updates. However, we think the safest upgrade pattern is when the upgrade decision is delegated to the users themselves. An equivalent of this user-controlled upgrade for the bridge is the following:
Suppose the user is using the bridge to transfer DAI between Ethereum and NEAR. Suppose they have X DAI locked in TokenLocker and X nearDAI available on the NEAR side;
Suppose one day they receive an announcement that NEAR or Ethereum are making a protocol change and they need to migrate to bridge V2 within a week;
They transfer nearDAI back to DAI using the old bridge, then transfer DAI into nearDAI V2 using bridge V2.
This approach, however, has downsides:
Unlike regular contracts, the bridge is a more complex system that requires maintenance: someone needs to run the relays constantly, otherwise the light clients will go out of sync with the blockchains and become useless. That means we cannot ask users to manually migrate from bridge V1 to bridge V2 at their convenience. If we shut down the relays of the V1 bridge before literally everyone migrates their assets out, the remaining assets might be permanently locked without the ability to move to V2. If we run the relays for too long, waiting until every user migrates, we will be spending gas on fees daily, approximately 40 USD per day at a 40 gwei gas price;
Applications that integrate with our bridge would need to be aware of the migration as they would need to switch from an old MintableFungibleToken contract to a new MintableFungibleToken contract. Some of them might need to perform this migration themselves or guide the user through it with the UI.
Given the above downsides, we have decided to implement the following upgradability option into our bridge. We will be using one of the proxy patterns to allow EthOnNearClient, EthOnNearProver, NearOnEthClient, and NearOnEthProver to automatically switch to new contract versions when they have received a proof that 2/3 of the NEAR stake has voted for a new set of contracts. TokenLocker and MintableFungibleToken would not need to change in most, if not all, cases, and so the upgrade will be transparent to the users and the application developers. This will not affect the bridge's trust model since we need to trust 2/3 of the NEAR stake anyway. We will be using the same voting contract as for other governance mechanisms at NEAR. We will also try to minimize the chance of this mechanism being used as a backdoor by creating a 7-day delay on the upgrade so that users can observe it, verify it, and, if they don't like it, liquidate their tokens or move them manually to a different bridge. We will, however, leave a possibility for the NEAR validators to vote for an emergency upgrade, but we do not expect to exercise it.
Incentives
The current Rainbow Bridge does not implement incentives for the maintainers who pay for gas by relaying the headers. Major users of the bridge are expected to run their own relays, and at least one pair of relays will be run by the NEAR Collective. Future versions of the bridge will have an option to charge the users who perform transfers, making them pay for usage in gas, native tokens, or some other means. Designing a sound incentive system is, however, tricky. The most naive approach would accumulate a tax from users of the bridge and distribute it among the maintainers who run relays. Unfortunately, such a system does not protect against death-spiral scenarios, e.g. when the bridge experiences a hiccup that causes users to stop using it for some time and consequently makes maintainers lose interest in paying for gas. Our organization is taking time to develop a well-thought-through incentive system for the bridge; meanwhile, we will maintain it at our own cost.
Trying it out and getting involved
We’ve been running Rainbow Bridge between Ropsten and NEAR TestNet for several weeks now, but we haven’t opened it to the public yet. However, you can experiment with bridge transfers by running a local NEAR node and Ganache. Please make sure to install all dependencies first, as their number is quite extensive: Rust+Wasm, Golang, pm2, node <=13. We are working on removing some of these dependencies.
If you are a developer or designer, sign up for NEAR’s open online Hack The Rainbow 🌈 hackathon. You can also get involved in our Bridge Guild where we will be discussing various integrations, new use cases, and building projects that use the bridge. Please also follow our blog as we are going to have a series of posts on bridge use cases and integrations. |
---
id: actions
title: Transfers & Actions
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Smart contracts can perform specific `Actions` such as transferring NEAR, or calling other contracts.
An important property of `Actions` is that they can be batched together when acting on the same contract. **Batched actions** act as a unit: they execute in the same [receipt](../../../1.concepts/protocol/transactions.md#receipt-receipt), and if **any fails**, then they **all get reverted**.
:::info
`Actions` can be batched only when they act on the **same contract**. You can batch calling two methods on a contract,
but **cannot** call two methods on different contracts.
:::
---
## Transfer NEAR Ⓝ
You can send $NEAR from your contract to any other account on the network. The Gas cost for transferring $NEAR is fixed and is based on the protocol's genesis config. Currently, it costs `~0.45 TGas`.
<Tabs className="language-tabs" groupId="code-tabs">
<TabItem value="js" label="🌐 JavaScript">
```js
import { NearBindgen, NearPromise, call } from 'near-sdk-js'
import { AccountId } from 'near-sdk-js/lib/types'
@NearBindgen({})
class Contract{
@call({})
transfer({ to, amount }: { to: AccountId, amount: bigint }) {
return NearPromise.new(to).transfer(amount);
}
}
```
</TabItem>
<TabItem value="rust" label="🦀 Rust">
```rust
use near_sdk::{near, AccountId, Promise, NearToken};
#[near(contract_state)]
#[derive(Default)]
pub struct Contract { }
#[near]
impl Contract {
pub fn transfer(&self, to: AccountId, amount: NearToken){
Promise::new(to).transfer(amount);
}
}
```
</TabItem>
</Tabs>
:::tip
The only case where a transfer will fail is if the receiver account does **not** exist.
:::
:::caution
Remember that your balance is used to cover the contract's storage. When sending money, always leave enough to cover future storage needs.
:::
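Transfer amounts in these snippets are denominated in yoctoNEAR (1 Ⓝ = 10²⁴ yoctoNEAR). A minimal conversion sketch is shown below; the helper name is illustrative, and production code should prefer a tested utility such as `parseNearAmount` from `near-api-js`:

```javascript
// Convert a NEAR amount given as a decimal string into yoctoNEAR (BigInt).
// Sketch only: no input validation, assumes at most 24 fractional digits.
function toYocto(near) {
  const [whole, frac = ""] = near.split(".");
  // Right-pad the fractional part to 24 digits, then combine with the whole part.
  return BigInt(whole) * 10n ** 24n + BigInt((frac + "0".repeat(24)).slice(0, 24));
}

console.log(toYocto("0.001")); // 1000000000000000000000n (0.001 Ⓝ)
```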
---
## Function Call
Your smart contract can call methods in another contract. In the snippet below we call a method
in a deployed [Hello NEAR](../quickstart.md) contract, and check in the callback whether everything went
right.
<Tabs className="language-tabs" groupId="code-tabs">
<TabItem value="js" label="🌐 JavaScript">
```js
import { NearBindgen, near, call, bytes, NearPromise } from 'near-sdk-js'
import { AccountId } from 'near-sdk-js/lib/types'
const HELLO_NEAR: AccountId = "hello-nearverse.testnet";
const NO_DEPOSIT: bigint = BigInt(0);
const CALL_GAS: bigint = BigInt("10000000000000");
@NearBindgen({})
class Contract {
@call({})
call_method({}): NearPromise {
const args = bytes(JSON.stringify({ message: "howdy" }))
return NearPromise.new(HELLO_NEAR)
.functionCall("set_greeting", args, NO_DEPOSIT, CALL_GAS)
.then(
NearPromise.new(near.currentAccountId())
.functionCall("callback", bytes(JSON.stringify({})), NO_DEPOSIT, CALL_GAS)
)
.asReturn()
}
@call({privateFunction: true})
callback({}): boolean {
let result, success;
try{ result = near.promiseResult(0); success = true }
catch{ result = undefined; success = false }
if (success) {
near.log(`Success!`)
return true
} else {
near.log("Promise failed...")
return false
}
}
}
```
</TabItem>
<TabItem value="rust" label="🦀 Rust">
```rust
use near_sdk::{near, env, log, Promise, Gas, NearToken, PromiseError};
use serde_json::json;
#[near(contract_state)]
#[derive(Default)]
pub struct Contract { }
const HELLO_NEAR: &str = "hello-nearverse.testnet";
const NO_DEPOSIT: NearToken = NearToken::from_near(0);
const CALL_GAS: Gas = Gas::from_tgas(5);
#[near]
impl Contract {
pub fn call_method(&self){
let args = json!({ "message": "howdy".to_string() })
.to_string().into_bytes().to_vec();
Promise::new(HELLO_NEAR.parse().unwrap())
.function_call("set_greeting".to_string(), args, NO_DEPOSIT, CALL_GAS)
.then(
Promise::new(env::current_account_id())
.function_call("callback".to_string(), Vec::new(), NO_DEPOSIT, CALL_GAS)
);
}
#[private]
pub fn callback(&self, #[callback_result] result: Result<(), PromiseError>){
if result.is_err(){
log!("Something went wrong")
}else{
log!("Message changed")
}
}
}
```
</TabItem>
</Tabs>
:::warning
The snippet shown above is a low-level way of calling other methods. We recommend making calls to other contracts as explained in the [Cross-contract Calls section](crosscontract.md).
:::
---
## Create a Sub Account
Your contract can create direct sub accounts of itself, for example, `user.near` can create `sub.user.near`.
Accounts do **NOT** have control over their sub-accounts, since they have their own keys.
Sub-accounts are simply useful for organizing your accounts (e.g. `dao.project.near`, `token.project.near`).
<Tabs className="language-tabs" groupId="code-tabs">
<TabItem value="js" label="🌐 JavaScript">
```js
import { NearBindgen, near, call, NearPromise } from 'near-sdk-js'
const MIN_STORAGE: bigint = BigInt("1000000000000000000000") // 0.001Ⓝ
@NearBindgen({})
class Contract {
@call({payableFunction:true})
create({prefix}:{prefix: String}) {
const account_id = `${prefix}.${near.currentAccountId()}`
return NearPromise.new(account_id)
.createAccount()
.transfer(MIN_STORAGE)
}
}
```
</TabItem>
<TabItem value="rust" label="🦀 Rust">
```rust
use near_sdk::{near, env, Promise, NearToken};
#[near(contract_state)]
#[derive(Default)]
pub struct Contract { }
const MIN_STORAGE: NearToken = NearToken::from_millinear(1); //0.001Ⓝ
#[near]
impl Contract {
pub fn create(&self, prefix: String){
let account_id = prefix + "." + &env::current_account_id().to_string();
Promise::new(account_id.parse().unwrap())
.create_account()
.transfer(MIN_STORAGE);
}
}
```
</TabItem>
</Tabs>
:::tip
Notice that in the snippet we are transferring some money to the new account for storage
:::
:::caution
When you create an account from within a contract, it has no keys by default. If you don't explicitly [add keys](#add-keys) to it or [deploy a contract](#deploy-a-contract) on creation then it will be [locked](../../../1.concepts/protocol/access-keys.md#locked-accounts).
:::
<hr className="subsection" />
#### Creating Other Accounts
Accounts can only create immediate sub-accounts of themselves.
If your contract wants to create a `.mainnet` or `.testnet` account, then it needs to [call](#function-call)
the `create_account` method of `near` or `testnet` root contracts.
<Tabs className="language-tabs" groupId="code-tabs">
<TabItem value="js" label="🌐 JavaScript">
```js
import { NearBindgen, near, call, bytes, NearPromise } from 'near-sdk-js'
const MIN_STORAGE: bigint = BigInt("1820000000000000000000"); //0.00182Ⓝ
const CALL_GAS: bigint = BigInt("28000000000000");
@NearBindgen({})
class Contract {
@call({})
create_account({account_id, public_key}:{account_id: String, public_key: String}) {
const args = bytes(JSON.stringify({
"new_account_id": account_id,
"new_public_key": public_key
}))
return NearPromise.new("testnet")
.functionCall("create_account", args, MIN_STORAGE, CALL_GAS);
}
}
```
</TabItem>
<TabItem value="rust" label="🦀 Rust">
```rust
use near_sdk::{near, Promise, Gas, NearToken };
use serde_json::json;
#[near(contract_state)]
#[derive(Default)]
pub struct Contract { }
const CALL_GAS: Gas = Gas::from_tgas(28);
const MIN_STORAGE: NearToken = NearToken::from_yoctonear(1_820_000_000_000_000_000_000); //0.00182Ⓝ
#[near]
impl Contract {
pub fn create_account(&self, account_id: String, public_key: String){
let args = json!({
"new_account_id": account_id,
"new_public_key": public_key,
}).to_string().into_bytes().to_vec();
// Use "near" to create mainnet accounts
Promise::new("testnet".parse().unwrap())
.function_call("create_account".to_string(), args, MIN_STORAGE, CALL_GAS);
}
}
```
</TabItem>
</Tabs>
---
## Deploy a Contract
When creating an account you can also batch the action of deploying a contract to it. Note that for this, you will need to pre-load the byte-code you want to deploy in your contract.
<Tabs className="language-tabs" groupId="code-tabs">
<TabItem value="rust" label="🦀 Rust">
```rust
use near_sdk::{near, env, Promise, NearToken};
#[near(contract_state)]
#[derive(Default)]
pub struct Contract { }
const MIN_STORAGE: NearToken = NearToken::from_millinear(1100); //1.1Ⓝ
const HELLO_CODE: &[u8] = include_bytes!("./hello.wasm");
#[near]
impl Contract {
pub fn create_hello(&self, prefix: String){
let account_id = prefix + "." + &env::current_account_id().to_string();
Promise::new(account_id.parse().unwrap())
.create_account()
.transfer(MIN_STORAGE)
.deploy_contract(HELLO_CODE.to_vec());
}
}
```
</TabItem>
</Tabs>
:::tip
If an account with a contract deployed does **not** have any access keys, this is known as a locked contract. When the account is locked, it cannot sign transactions therefore, actions can **only** be performed from **within** the contract code.
:::
---
## Add Keys
When you use actions to create a new account, the created account does not have any [access keys](../../../1.concepts/protocol/access-keys.md), meaning that it **cannot sign transactions** (e.g. to update its contract, delete itself, transfer money).
There are two options for adding keys to the account:
1. `add_access_key`: adds a key that can only call specific methods on a specified contract.
2. `add_full_access_key`: adds a key that has full access to the account.
<br/>
<Tabs className="language-tabs" groupId="code-tabs">
<TabItem value="js" label="🌐 JavaScript">
```js
import { NearBindgen, near, call, NearPromise } from 'near-sdk-js'
import { PublicKey } from 'near-sdk-js/lib/types'
const MIN_STORAGE: bigint = BigInt("1000000000000000000000") // 0.001Ⓝ
@NearBindgen({})
class Contract {
@call({})
create_hello({prefix, public_key}:{prefix: String, public_key: PublicKey}) {
const account_id = `${prefix}.${near.currentAccountId()}`
return NearPromise.new(account_id)
.createAccount()
.transfer(MIN_STORAGE)
.addFullAccessKey(public_key)
}
}
```
</TabItem>
<TabItem value="rust" label="🦀 Rust">
```rust
use near_sdk::{near, env, Promise, NearToken, PublicKey};
#[near(contract_state)]
#[derive(Default)]
pub struct Contract { }
const MIN_STORAGE: NearToken = NearToken::from_millinear(1100); //1.1Ⓝ
const HELLO_CODE: &[u8] = include_bytes!("./hello.wasm");
#[near]
impl Contract {
pub fn create_hello(&self, prefix: String, public_key: PublicKey){
let account_id = prefix + "." + &env::current_account_id().to_string();
Promise::new(account_id.parse().unwrap())
.create_account()
.transfer(MIN_STORAGE)
.deploy_contract(HELLO_CODE.to_vec())
.add_full_access_key(public_key);
}
}
```
</TabItem>
</Tabs>
Notice that what you actually add is a "public key". Whoever holds its private counterpart, i.e. the private key, will be able to use the newly added access key.
---
## Delete Account
There are two scenarios in which you can use the `delete_account` action:
1. As the **last** action in a chain of batched actions.
2. To make your smart contract delete its own account.
<Tabs className="language-tabs" groupId="code-tabs">
<TabItem value="js" label="🌐 JavaScript">
```js
import { NearBindgen, near, call, NearPromise } from 'near-sdk-js'
import { AccountId } from 'near-sdk-js/lib/types'
const MIN_STORAGE: bigint = BigInt("1000000000000000000000") // 0.001Ⓝ
@NearBindgen({})
class Contract {
@call({})
create_delete({prefix, beneficiary}:{prefix: String, beneficiary: AccountId}) {
const account_id = `${prefix}.${near.currentAccountId()}`
return NearPromise.new(account_id)
.createAccount()
.transfer(MIN_STORAGE)
.deleteAccount(beneficiary)
}
@call({})
self_delete({beneficiary}:{beneficiary: AccountId}) {
return NearPromise.new(near.currentAccountId())
.deleteAccount(beneficiary)
}
}
```
</TabItem>
<TabItem value="rust" label="🦀 Rust">
```rust
use near_sdk::{near, env, Promise, NearToken, AccountId};
#[near(contract_state)]
#[derive(Default)]
pub struct Contract { }
const MIN_STORAGE: NearToken = NearToken::from_millinear(1); //0.001Ⓝ
#[near]
impl Contract {
pub fn create_delete(&self, prefix: String, beneficiary: AccountId){
let account_id = prefix + "." + &env::current_account_id().to_string();
Promise::new(account_id.parse().unwrap())
.create_account()
.transfer(MIN_STORAGE)
.delete_account(beneficiary);
}
pub fn self_delete(beneficiary: AccountId){
Promise::new(env::current_account_id())
.delete_account(beneficiary);
}
}
```
</TabItem>
</Tabs>
:::warning Token Loss
If the beneficiary account does not exist the funds will be [**dispersed among validators**](../../../1.concepts/basics/token-loss.md).
:::
:::warning Token Loss
Do **not** use `delete_account` to try to fund a new account. Since the account does not exist, the tokens will be lost.
:::
---
sidebar_position: 3
---
# Imports
## npm
When importing npm packages, they are fetched in the user's browser from esm.sh, an npm package CDN.
**This means that you can import npm packages directly in your BWE component without having to install them.**
Note that not every npm package will function within the BWE environment
### Supported Packages
By default, `preact` and `react` (via `preact/compat`) are available via the container's `importmap`.
The BWE team has a tracker [here](https://bos-web-engine.vercel.app/webengine.near/NPM.Tracker) which categorizes known compatibility of packages. Expect the list to grow over time.
If you have certain packages which you would like to use within BWE, please chime in on [this thread](https://github.com/near/bos-web-engine/discussions/166)
#### Expected Incompatibility
Some packages are expected to not work within BWE due to its architecture and security model. Packages which rely on the following are expected to not work:
- direct access to the `window` or `document` objects
- usage of certain React hooks, in particular `useContext` and DOM manipulation via `useRef`
- state management across iframes
- React implementation details not in parity with Preact
In general, external component libraries (e.g. Chakra UI) and state management libraries are not well-supported in the current version.
## BWE Components
Other BWE components can be imported and embedded within your component.
### Dedicated Import Syntax
Any BWE Component can be imported using the following syntax
```
near://<account-id>/<Component>
```
e.g.
```tsx
import Message from 'near://bwe-web.near/Message'
// ...
// in use
<Message />
```
Since components use default exports, you can import them using any name you like. Note the difference between the imported name and the component path:
```tsx
import Foo from 'near://bwe-web.near/Bar'
```
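Conceptually, such a specifier just names an account and a component path. A rough illustration of splitting it apart (the function name is hypothetical and the real resolution logic is internal to BWE):

```javascript
// Split a near:// specifier into its account and component parts.
// Illustration only -- not the actual BWE resolver.
function parseBweSpecifier(specifier) {
  const match = specifier.match(/^near:\/\/([^/]+)\/(.+)$/);
  return match ? { accountId: match[1], component: match[2] } : null;
}

console.log(parseBweSpecifier("near://bwe-web.near/Message"));
// { accountId: 'bwe-web.near', component: 'Message' }
```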
### Relative Imports
Components published by the same NEAR account and in the same directory can be imported using relative paths.
```tsx
import Foo from './Foo'
```
This only works for `./` paths. Other relative imports such as `../Foo` are not implemented.
:::tip
Directory support is a work in progress. If you place `/` separators in your component name when working in the sandbox, it will be treated as a directory separator.
From a component named `Foo/Bar.tsx`, relative imports will only be resolvable for other components starting with `Foo/`.
:::
---
id: upgrade_alert
title: Validator Node Upgrade Alert
sidebar_label: Validator Node Upgrade Alert
description: How to setup an alert for validator nodes upgrading.
sidebar_position: 5
---
# Alerting for Validator Node Upgrades
Please note that once 80% of the validator nodes switch to a new protocol version, the upgrade will occur in 2 epochs. Any validator node that doesn't upgrade in time will be kicked out. The following provides a network upgrade ratio, which lets validators see what percentage of validator nodes have upgraded to a new protocol version.
<blockquote class="warning">
<strong>Heads up</strong><br /><br />
Information about node version and build is available on-chain. A metric based on on-chain data would be more reliable, and that metric is already under construction.
</blockquote>
Validator nodes periodically submit telemetry data, which is stored in a publicly-accessible PostgreSQL database.
Note that the nodes can opt out of submitting the telemetry data, and the telemetry data isn't guaranteed to be accurate.
With these caveats clarified, let's define an upgrade alert. The instructions were tested only in Grafana.
Step 1. Add a PostgreSQL data source with the following credentials:
* `telemetry_testnet` for testnet and `telemetry_mainnet` for mainnet: https://github.com/near/near-indexer-for-explorer/#shared-public-access
Step 2. Add a dashboard with a Graph panel with the following SQL query. Grafana only supports alerts on Graph panels; therefore, this needs a workaround to fit the Table data into a Graph format.
```sql
SELECT $__time(time_sec)
,upgraded_ratio
FROM (
SELECT NOW() - INTERVAL '1' SECOND AS time_sec
,(
(
SELECT CAST(COUNT(*) AS FLOAT)
FROM nodes
WHERE agent_version > (
SELECT MIN(agent_version)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
)
AND is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
) / (
SELECT COUNT(*)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
)
) AS upgraded_ratio
) AS inner_table
WHERE $__timeFilter(time_sec);
```
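In application terms, the query computes the fraction of validator nodes seen in the last 15 minutes that run a version newer than the oldest such version. The same computation, sketched in JavaScript over hypothetical telemetry rows (note that, like the SQL, it compares `agent_version` values as strings):

```javascript
// Fraction of validators, seen in the last 15 minutes, whose agent_version is
// strictly greater than the minimum version among them (mirrors the SQL above).
function upgradedRatio(nodes, now) {
  const fresh = nodes.filter(
    (n) => n.is_validator && now - n.last_seen < 15 * 60 * 1000
  );
  if (fresh.length === 0) return 0;
  const minVersion = fresh.map((n) => n.agent_version).sort()[0];
  return fresh.filter((n) => n.agent_version > minVersion).length / fresh.length;
}

const now = Date.now();
const sample = [
  { is_validator: true, last_seen: now, agent_version: "1.35.0" },
  { is_validator: true, last_seen: now, agent_version: "1.36.0" },
  { is_validator: true, last_seen: now, agent_version: "1.36.0" },
  { is_validator: true, last_seen: now, agent_version: "1.36.0" },
];
console.log(upgradedRatio(sample, now)); // 0.75
```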
Step 3. Go to the alert tab and change the condition to `WHEN last () OF query (A, 10s, now) IS ABOVE 0.65` or similar.
Step 4. Optionally add a table with min and max version of the validator nodes:
```sql
SELECT (
SELECT MIN(agent_version)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
) AS min_version
,(
SELECT MAX(agent_version)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
) AS max_version
,(
SELECT COUNT(*)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
) AS num_validators
,(
SELECT COUNT(*)
FROM nodes
WHERE agent_version = (
SELECT MAX(agent_version)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
)
AND is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
) AS upgraded_validators
,(
(
SELECT CAST(COUNT(*) AS FLOAT)
FROM nodes
WHERE agent_version = (
SELECT MAX(agent_version)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
)
AND is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
) / (
SELECT COUNT(*)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
)
) AS upgraded_ratio;
```
Step 5. Optionally add a table with the number of nodes running a newer version than your node:
```sql
SELECT (
SELECT COUNT(*)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
) AS num_validators
,(
SELECT COUNT(*)
FROM nodes
WHERE agent_version > (
SELECT agent_version
FROM nodes
WHERE moniker = '$YOUR_VALIDATOR_MONIKER'
)
AND is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
) AS upgraded_validators
,(
(
SELECT CAST(COUNT(*) AS FLOAT)
FROM nodes
WHERE agent_version > (
SELECT agent_version
FROM nodes
WHERE moniker = '$YOUR_VALIDATOR_MONIKER'
)
AND is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
) / (
SELECT COUNT(*)
FROM nodes
WHERE is_validator
AND (now() - last_seen) < INTERVAL '15 MINUTE'
)
) AS upgraded_ratio;
```
> Got a question?
> <a href="https://stackoverflow.com/questions/tagged/nearprotocol">Ask it on StackOverflow!</a>
---
id: get-started
title: "Getting Started"
---
# Getting Started
:::tip Using the JS SDK on Windows
You can develop smart contracts on Windows using Windows Subsystem for Linux (WSL2).
:::
In order to use WSL2, follow these steps:
- Run `PowerShell` as Administrator
- Execute `wsl --install` to install Ubuntu and do additional setup automatically. Check more details [here](https://learn.microsoft.com/en-us/windows/wsl/install)
- Restart your machine
- `WSL2` will continue the setup process on start. Set up your username and password when prompted.
- Check [this](https://learn.microsoft.com/en-us/windows/dev-environment/javascript/nodejs-on-wsl) guide to setup `npm`, `node`, `npx`, `VSCode` and other tools of your choice in order to start developing.
If you run into any issues setting up WSL2, make sure that:
- Your Windows OS is up to date
- Virtualisation is turned on in BIOS
- `Windows Subsystem for Linux` and `Virtual Machine Platform` are turned on in `Windows Features` (Start -> Search -> Turn Windows Feature On or Off)
## Install Node
To install Node, follow the instructions on the [Node.js website](https://nodejs.org/en/download/).
## Create a new project
The best way to create a new NEAR app connected with a frontend is through [create-near-app](https://github.com/near/create-near-app). When initializing the project, be sure to select creating a project in TypeScript with a frontend option of your choice.
```bash
npx create-near-app
```
If you only wish to develop and deploy a JS contract, the [`hello-near-ts`](https://github.com/near-examples/hello-near-examples/tree/main/contract-ts) repository is great to use as a template or one of the [examples in the SDK repository](https://github.com/near/near-sdk-js/tree/develop/examples/src).
If you would like to generate a new project manually with `npm init`, make sure you include the following configuration in the generated `package.json`:
```json
"dependencies": {
"near-sdk-js": "*"
}
```
```rust
// Validator interface, for cross-contract calls
#[ext_contract(ext_nft_contract)]
trait ExternalNftContract {
fn nft_mint(&self, token_series_id: String, receiver_id: AccountId) -> Promise;
}
// Implement the contract structure
#[near_bindgen]
impl Contract {
#[payable]
pub fn nft_mint(&mut self, token_series_id: String, receiver_id: AccountId) -> Promise {
let promise = ext_nft_contract::ext(self.nft_contract.clone())
.with_static_gas(Gas(30*TGAS))
.with_attached_deposit(env::attached_deposit())
.nft_mint(token_series_id, receiver_id);
return promise.then( // Create a promise to callback query_greeting_callback
Self::ext(env::current_account_id())
.with_static_gas(Gas(30*TGAS))
.nft_mint_callback()
)
}
#[private] // Public - but only callable by env::current_account_id()
pub fn nft_mint_callback(&self, #[callback_result] call_result: Result<TokenId, PromiseError>) -> Option<TokenId> {
// Check if the promise succeeded
if call_result.is_err() {
log!("There was an error contacting NFT contract");
return None;
}
// Return the token data
let token_id: TokenId = call_result.unwrap();
return Some(token_id);
}
}
NEAR’s March Town Hall Highlights
COMMUNITY
April 7, 2022
In the first quarter of 2022, the NEAR Foundation saw an explosion of activity in the ecosystem. The NEAR Town Hall is a great place for the growing global NEAR community to see these new projects in action, as well as meet, learn, teach, and push the Web3 movement and NEAR adoption forward.
The March NEAR Town Hall explored ecosystem updates, MetaBUILD 2 winners, DAOs, and more. Here are the highlights from the NEAR Town Hall, which can be seen on NEAR’s YouTube channel or below.
NEAR in the news amid strong ecosystem growth
In her opening remarks, NEAR Foundation CEO Marieke Flament gave an update on the NEAR community’s Ukrainian members. As Flament explained, many of them have settled in Lisbon, Portugal, one of the NEAR ecosystem’s major European hubs.
Flament reiterated the Foundation’s commitment to raising platform awareness, supporting projects with grants, and creating a safe path to decentralization via engagement with policymaking and regulation. She also showed some updated numbers from the NEAR community.
Through March, the NEAR community has onboarded 40,000 students and 120 NEAR-certified teachers, while grants grew to $124 million in funds spread across 509 recipients. The ecosystem also grew from 300 to 429 projects and DAOs, 110 million transactions (up from 95 million in February), and 5.2 million wallets—a growth of 1 million wallets in a single month.
Flament also highlighted recent news coverage on the Ukrainian fundraising effort Unchain Fund. This included a Cointelegraph feature, NEAR Co-founder Illia Polosukhin’s Wall Street Journal op-ed, and Sail GP’s NEAR announcement.
Tune into Marieke’s talk here.
New brand partnerships: SailGP, Unchain Fund, and more
Chris Ghent, NEAR Foundation’s Global Head of Brand Strategy and Partnerships, went into greater detail on the SailGP announcement, and Unchain Fund.
“Sail GP is one of the most exciting and rapidly growing sports,” said Ghent. “The goal here is to look at the property as something we can drive really deep integrations with. The league is looking to announce a DAO, so we’re looking to see that evolve over the coming weeks and months as Season 3 approaches in May. From ticketing to NFTs, this is very much a deep tech integration as much as it is a global go-to-market.”
Ghent also detailed NEAR’s ongoing partnership with privacy-preserving browser Brave—specifically, the homepage takeover for Unchain Fund and NEAR’s education initiatives. He also highlighted NEAR’s support of “Play Magnus,” a charity chess event that benefitted Unchain Fund’s humanitarian efforts in Ukraine.
Ghent’s segment also features “Marketing by the Numbers”, with talk of 700,000,000 global impressions and explosive social media engagement.
Catch Chris’ talk here.
MetaBUILD 2: the largest hackathon in NEAR history
Pagoda’s Maria Yarotska, MetaBUILD 2’s Hackathon Coordinator, announced winners of the MetaBUILD 2 at the NEAR Town Hall.
“I’m happy to tell you that we just wrapped up the largest hackathon in NEAR’s history with a $1 million prize fund and almost 4,000 participants,” said Yarotska. “I was managing the judging process while fleeing the war in Ukraine, and I’m really proud of the community that made it happen and delivered some really exciting projects.”
“Congratulations to Voog, MetaAds, and NEAR Playground,” she added. “Enjoy your bounties and don’t forget to respond to all the potential investors.”
Yarotska also encouraged teams who didn’t win this time to get involved in the next MetaBUILD hackathon. For full details on the results, read “MetaBUILD 2 Hackathon Winners” on the NEAR blog.
You can see Maria’s talk here.
Paris Blockchain Week Summit and other upcoming NEAR events
Yadira Blocker, Experiential Marketing Lead at NEAR Foundation, spoke on NEAR’s involvement at Paris Blockchain Week Summit. NEAR speakers will be at PBWS (April 12-14th), so be sure to find them at the summit’s Discovery Stage.
For more information on NEAR’s events at PBWS, head to nearpages.wpengine.com/pbws.
From April 18-25th, NEAR heads to DevConnect in Amsterdam (more details to come), then Consensus 2022 in Austin (June 9-12th).
“[Consensus] is very big for the NEAR ecosystem,” said Blocker. “We’re coming on as a Block 4 partner, and we’ll have more information to share soon.”
Stay tuned to NEAR’s events calendar at nearpages.wpengine.com/events and watch Yadira’s talk here.
Global NEAR Education and NEAR Grants updates
Sherif Abushadi said NEAR Education continues to onboard new teachers through its global education initiatives. While 5,000 individuals registered for programs in the last week of March alone, Abushadi noted that only 120 people have become NEAR-certified.
“Certifications are still low—it’s not easy to get NEAR-certified,” said Abushadi. “You have to build an original project. We want to maintain a high bar but also make this program more accessible. So, you’ll see some new innovations from the education team in the coming weeks and months.”
Abushadi also spotlighted considerable growth in education fellowships in India, Nigeria, Mexico, Venezuela, and other countries.
Want to become NEAR-certified? Visit near.university/certify.
You can see Sherif’s segment here.
Next, Josh Daniels, Head of Funding at NEAR Foundation, spoke on the Grants team’s core funding vision and statistics. To date, the Foundation has issued 234 direct grants committed for $15.4 million in 2022. Additionally, 8 projects have raised ~$70 million, with 84 currently fundraising for a total of $350 million.
Daniels also noted how NEAR Grants is currently exploring grant pools for select projects as part of the ecosystem fund.
“This is something that we announced at NEARCON in October of last year,” said Daniels. “We’re very much committed to continuing to move this forward.”
Daniels also spoke on the expansion of the NEAR Regional Hub effort. Funding has been provided to four hubs to support local initiatives and projects in Kenya, the Balkans, Latin America, and Ukraine.
Check out Josh’s talk here.
Open Web Collective on latest accelerator batch
Mildred Idada, Head of Open Web Collective, had several updates at the NEAR Town Hall, including an announcement for OWC Accelerator’s Batch 4.
“This is going to be our biggest batch yet,” said Idada. “Just like other batches before, we want to see projects from all industries and sectors. So, if you’re focused on DeFi, NFTs, dev tooling—we want to see it all.”
She also noted how every project in the accelerator gets up to $650,000 after Demo Day.
“That means when you join Day 1 of the program you’ll have $150,000,” Idada said. “At the close of Demo Day you’ll get another $500,000. Really, this allows you all to focus and build.”
Check out Idada’s segment for more OWC Accelerator updates here.
Panel on NEAR’s growing DAO ecosystem
The March Town Hall also featured a panel on NEAR DAOs, moderated by sports business analyst and ESPN alum Darren Rovell. The panel featured Ben Johnson (Strategy & Commercial, SailGP), Don Ho (Founder, Orange DAO), Rev Miller (Unchain Fund), and Julian Weisser (ConstitutionDAO).
After Rovell’s smooth introduction, SailGP’s Ben Johnson kicked things off with background on how SailGP partnered with NEAR to help create a groundbreaking sports team DAO. Unchain Funds’s Rev Miller talked about creating Unchain Fund DAO using Astro DAO, a NEAR-based DAO launchpad, to create the Unchain Fund DAO for humanitarian efforts in Ukraine.
Next, Julian Weisser explained why Constitution DAO was important in demonstrating what is possible when people assemble around a shared goal.
“This wasn’t about saying, ‘Hey, we’re going to buy a copy of the Constitution, then we’re going to try and flip it 5 years from now, or we’re going to have a token and have the token appreciate in value,’” he said. “The reason people were participating in this, including people who had never created a crypto wallet before, is that they wanted to be part of something that had emotional resonance to them. The constitution, like many artifacts, has significant cultural resonance to a lot of people—some positive, some negative.”
“There is also the fact that it’s funny,” he added. “There’s this meme component to it. You combine that with a 7-day deadline. If we had started 90 days out we wouldn’t have had the traction.”
Don Ho, Managing Director at Quantstamp, talked about why the DAO is an important structure to Orange DAO, a crypto collective formed by Y Combinator alums.
“I think DAOs are interesting, specifically in the context of Orange DAO, because it allows the community to actually own the community’s efforts,” Ho said. “What is Orange DAO? It’s the largest collective of builders in web3. To start, we have over 1,200 YC founders who are all part of this DAO, all who want to build in web3—that represents 20% of all YC founders.”
“The craziest thing, though, is the YC community itself doesn’t have an inherent mechanism to capture the value it creates,” he added. “So, at Orange we want to create a community-owned fund and body of things that can promote this advancement of web3.”
Check out the talk here.
Read NEAR in March: Unchain Fund, SailGP, and DAOs to see what the NEAR community was up to in March.
---
title: Onboarding and Engagement
description: Overview of seamless onboarding to NEAR BOS with Fast Auth
sidebar_position: 6
---
# FastAuth
### Highlights
* A seamless web2 style onboarding experience that allows users to easily create an account for any app on the BOS without the need for crypto.
* It dramatically lowers the threshold for adoption and opens the door to bringing billions of web2 users into the web3 space.
### What it is:
FastAuth lets users quickly and easily create accounts for any website or app that integrates with the Blockchain Operating System (BOS). Putting user experience at the center, FastAuth creates a familiar, web2 style onboarding experience where users can create a free account using biometrics, phone prompts and an email address. This allows users the chance to quickly interact with an app, as well as the ability to easily re-authenticate to the app using the same or different devices. By combining this with decentralized email recovery, users no longer need to remember seed phrases or other difficult passwords.
### Why it matters:
One of the challenges to onboarding users in Web3 has been the complicated process and the need to remember long passwords or seed phrases. Since most users are accustomed to centralized authentication methods like Google, they can find this off-putting, creating a barrier for entry, and making it challenging to quickly or easily onboard new people into the space.
Beyond streamlining onboarding, FastAuth also removes the need for users to have or acquire crypto to open an account or transact, making it possible for anyone to get started right away. Along with this, FastAuth removes the need to download any specialized software or applications, because everything works seamlessly right from the browser on your desktop or mobile device.
By creating an easy, user-centric experience, FastAuth makes the web3 space accessible to everyone and opens the door to mainstream adoption.
### Who it’s for:
* Developers - The streamlined onboarding can significantly increase conversion rates for people trying your app, as well as the total addressable audience for your app and website by making it accessible to mainstream users.
* Enterprises - FastAuth is the easiest way to integrate web3 and crypto technology into your business. With just a few lines of code you can onboard your existing users into powerful new community and commerce experiences that are accessible, highly-secure, and decentralized.
* Users - Getting started using Web3 apps and experiences is now easy and accessible for everyone. Setting up a secure account, over which you have full control, now only takes seconds – it’s even easier than the passwords and usernames that you are already used to.
### How it works:
* Fast registration
  * Users can register with biometrics (fingerprint, faceId, OS user/password challenge) and an email address.
  * Users get an auto-generated, customizable username based on an email address.
* Easy Login and Passkeys
  * Passkeys ([Apple](https://support.apple.com/guide/iphone/sign-in-with-passkeys-iphf538ea8d0/ios#:~:text=A%20passkey%20consists%20of%20a,held%20only%20by%20your%20devices) & [Google](https://developers.google.com/identity/passkeys)) are used to facilitate logging-in using biometrics from any device that is capable of sharing compatible passkeys with the original authenticating device.
* SSO Account Recovery
  * Users can recover their accounts via an SSO sign-in process with the email used for registration.
  * Account recovery is decentralized via multi-party computation and does not give custodial access to full access keys to any single custodian.
* Meta Transactions and Zero Balance Accounts
  * These technologies allow for account registration and creation with zero cost to the user.
* Relayers
  * Relayers, which are definable by the developer, are used to sponsor initial interactions for new users at no cost.
  * On Near.org, new users will be able to interact with social functions without needing to purchase $NEAR.
![](@site/static/img/fast-auth_account.png)
![](@site/static/img/fast-auth_keys.png)
![](@site/static/img/fast-auth_recovery.png)
### What’s coming next:
* The ability to extend relayers and FastAuth to additional gateways beyond near.org
* Further MPC decentralization
* Multi-chain compatibility
* Two-factor authentication |
Why NEAR’s Ecosystem Remains Primed for Growth in 2022
COMMUNITY
May 20, 2022
For many, 2022 has been a time of uncertainty. Volatility in global markets and the crypto sector has had a radical effect on business, community and creativity. While this is not the first time turbulence has crept into Web3, change can create a fear of the unknown.
But know this: one thing that will not change is NEAR’s commitment to building a lasting Open Web.
Collectively, NEAR ecosystem members continue to create a vibrant, inclusive environment for anyone to take part in. For creators, developers or community leaders looking to develop the next generation of decentralized apps, there has never been a better time to do it.
Let’s take a quick look at the NEAR ecosystem’s fundamentals. From capitalization to strategic partnerships and the app ecosystem, this is an exciting time to be part of the NEAR community.
Funding & Team
NEAR has raised significant capital from the likes of Tiger Global, Andreessen Horowitz (a16z), Republic Capital, FTX Ventures, Hashed and Dragonfly Capital, amongst others.
This capitalization ensures that NEAR can facilitate the building of ecosystem apps with grant funds, as well as support from the NEAR Foundation, a Swiss non-profit. It also means the Foundation team can keep growing to deliver crucial support and other resources to developers and entrepreneurs in NEAR’s ecosystem.
NEAR Foundation’s executive leadership team includes: CEO Marieke Flament (Mettle, Circle), Chris Ghent, Global Head of Brand Strategy & Partnerships (Tezos), CISO John Smith (Nielsen), CFO Yessin Schiegg (former director, BlackRock), CMO Jack Collier (Mettle, Circle) and Head of Business Development, Robbie Lim (Twitch).
Strategic Partnerships
Since early 2022, NEAR has forged a number of strategic partnerships. Recently announced partnerships include SailGP, a global sailing league co-founded by Oracle’s Larry Ellison, and Orange DAO, a collective of over 1,000 Y-Combinator alums, who chose NEAR as their preferred Layer 1 blockchain for Web3 startups.
Other NEAR partners include: Wharton Business School, Unchain Fund, Elliptic, Kyve, Woo Network, The Graph, and the Opera and Brave browsers.
The good news is that more partnerships are in the works. This means more visibility for NEAR ecosystem projects, and better avenues for onboarding millions of new users onto the blockchain.
Grants & Education
NEAR’s healthy capitalization ensures that it is well-equipped to offer grant funding to individuals and teams building apps. In 2022, NEAR has awarded more than $45 million in grants to more than 800 projects, helping founders from across Web2 and Web3 reimagine their worlds.
NEAR Education has helped more than 5,000 students learn and build on NEAR, paving the way for the next generation of apps. Through NEAR University, developers and entrepreneurs can learn NEAR developer skills (for free) through courses, guided workshops, instructional videos, and more.
Scalability, Sustainability and Security
With Nightshade, the protocol’s unique sharding approach, NEAR’s ecosystem will be infinitely scalable by Q1 of 2023. NEAR rolled out Phase 1 of Nightshade—Chunk-Only Producers—early this year, expanding the number of validators and further decentralizing the blockchain.
With an infinitely scalable blockchain, more developers and entrepreneurs can join the NEAR ecosystem, both easily and cheaply. These individuals and teams will be the builders of the open web future. The innovators of a more fair and decentralized internet.
NEAR’s climate commitment is a major part of its long-term future. In 2021, South Pole, a leading low-carbon project developer, awarded NEAR its Climate Neutral label. NEAR’s energy usage is a fraction of other chains, like Bitcoin. The NEAR network consumes as much energy in 10 years as Bitcoin does every 10 minutes, creating a sustainable platform for future growth.
Security is another major component of the NEAR ecosystem’s long-term plans. The NEAR Foundation’s security team, as well as security partners like Elliptic, are hard at work making sure decentralized applications and user wallets remain safe and secure.
From the NEAR Foundation leadership to the community leaders and the founders creating and innovating on the blockchain, NEAR is here for the long term.
|
---
title: 2.7 Measuring Success Against other Ecosystems
description: How can we measure the success of one L1 ecosystem against another?
---
# 2.7 Measuring Success Against other Ecosystems
It is fair to say that crypto is a war of ecosystems, in a similar way that big tech companies fought for control of user behavior and data through social network participation. Yet at the early stage of the game we are in at present (2022), it remains to be seen what ‘success’ for an L1 ecosystem looks like comparatively: Is a rapidly increasing token price, but no real products or services (Cardano), to be preferred over a strong social focus, but little funding and public visibility (Celo), or fantastic tech, but no community or spotlight (Elrond)? In short - how can we measure the success of one L1 ecosystem against another?
## The Analogy of Geopolitics Once More
L1 ecosystems are the digital cities and countries of the future. But analogously, there is not necessarily exceptional clarity on what it means for one country to be more successful than another: Would we say Tibet, for example, is more or less successful than Costa Rica, when the two have entirely different cultures, priorities, and foci?
In crypto, however, things are a little more pragmatic in the following sense: Every L1 ecosystem requires active updates and development in order to be maintained. In this sense, the social or communal support for an L1 ecosystem - either via financial support or social commitment to maintenance of the protocol - is the necessary condition for keeping an L1 system operational. From there every L1 must fight to create a culture, community, and ecosystem of users and builders to inhabit and pursue their vision of the future.
## Variables:
Much discussion and debate centers on the different metrics and analytics that should be tracked in order to gauge ecosystem health and overall performance. The following variables should all be considered and known in any discussion pertaining to measuring success in an ecosystem.
Hint: everything here is connected. One variable is closely linked to most other variables, except over small and anomalous behavioral trends and time frames.
* **Total Value Locked:** The amount of value held in the ecosystem, or particular dApp or protocol.
* **Number of Native Tokens:** The number of unique, liquid, and tradeable tokens according to the standard of the ecosystem.
* **Number of Listed Tokens:** Number of tokens in the ecosystem listed on centralized and decentralized exchanges.
* **Non-Fungible Asset Value:** The total value in non-fungible assets, most usually PFP or art work but also music and entertainment.
* **Accounts Created / New Users:** The number of new accounts on a daily, monthly, and annual basis.
* **Daily Active Wallets / Daily active users:** The number of active wallets or accounts on a daily, monthly, and annual basis.
* **Active Developers Building:** Usually measured via GitHub commits, or internal ecosystem polling / estimates of the number of active devs building.
* **Total Number of dApps by Category:** Total number of active decentralized applications often segmented by category as done on [awesomenear.com](https://awesomenear.com/).
* **Liquidity and Volume of Fungible and Non-Fungible Tokens:** The depth of liquidity in token markets, and / or the amount of daily volume on different token markets (both fungible and non-fungible).
* **Number of Validator Nodes:** The number of validator nodes on the network - sometimes indicative of the level of decentralization in a network.
* **Transaction Fee Burn Rate (Deflation):** The amount of value spent on transaction fees, and burned (in certain proof of stake chains like NEAR and ETH) to indicate an overall burn rate / ecosystem wide deflation rate.
* **Total Daily Transactions:** The number of total transactions executed on a daily basis on a network. This can also be expanded to average for weekly, monthly, and annually.
* **Bridged Value (inflows and outflows):** The amount of value brought into or sent out of the ecosystem, on a daily, weekly or monthly basis.
* **Institutional Exposure:** The amount of institutional value, or number of institutions actively holding, participating, LPing, or integrated with dApp’s inside of an ecosystem.
* **Total Native Token Supply Held in Smart Contract:** The amount of the native L1 token locked in a smart contract, either for storage, staking, or other uses.
* **Ecosystem Generated Value:** The amount of value natively generated inside of the ecosystem - created value, that does not originate elsewhere.
* **Ecosystem Exported Value:** The amount of value that leaves the ecosystem on a moving basis.
## The Traditional and The Emerging Game
From 2014 to 2020 crypto analytics concentrated on financial metrics including market cap, daily volume, number of fungible tokens, stock to flow, tokens locked in smart contracts, trading volume, and total value locked. This _traditional game_ was largely focused on the financial component of crypto, due to the growth in payment tokens, as well as the lack of product-market fit for other verticals.
However, since that time, with the emergence of new products and infrastructure, there is ongoing debate as to which variables are indeed indicative of ecosystem health and success. This _Emerging Game_ focuses beyond the financial components of an ecosystem’s success, and looks at active developers, types of dApps being built in the ecosystem, the number of dApps being built, as well as ecosystem generated value (EGV).
* **The traditional game:**
  * Token market cap
  * Total Value Locked
  * Active users
* **The emerging game:**
  * Developer commits
  * dApps in the ecosystem
  * Daily active users
  * Ecosystem Generated Value (EGV)
**A Note on Exchanges, Custodians, and Institutional / Retail exposure:**
Notably, some of the largest institutions and infrastructure providers in crypto have not successfully ‘kept up’ with the _emerging game_ and tend to focus strongly on the _traditional game_. This means that tokens are listed on Coinbase, Kraken, and Binance not necessarily due to their innovative design, but rather the number of users, the volume on the token, and the belief in the token’s ability to grow and strengthen over time.
Despite this lag, L1 ecosystems remain incredibly dependent on listings, custodians, off-ramps, and institutional exposure to fungible tokens inside of their ecosystem: ecosystems with large exposure via exchanges and custodians naturally tend to perform better than ecosystems with minimal or singular exposure (usually in the form of the L1 token).
## Main Takeaways on the Evaluation of Ecosystems
Some of the most unresolved questions to date in evaluating an ecosystem could be summarized in the following questions:
* **_Are all these variables arbitrary?_** Yes and no. Different ecosystems may prioritize different variables, or specialize in certain variables. This aligns quite nicely with Haseeb’s thesis on L1 cities, whereby Solana is LA, NEAR is SF, and AVAX is Chicago. Each ecosystem can foster a specific culture by focusing on certain variables. However, all ecosystems are dependent upon a healthy token price and active users participating in the dApps within that ecosystem.
* **_Does token price matter_**? Yes. And this answer could be argued from two perspectives: First, on its own, token price matters because of the psychological effect and the establishment of flywheels (explained below). In short, a strong and rising token price attracts attention from the outside world, and prioritizes the ecosystem in the eyes of newcomers, Web2 devs crossing over, and new entrepreneurs. Second, an L1 token is a lot like a national currency - and in this case - it refers to the purchasing power of an ecosystem relative to other products, and assets in other ecosystems. A strong token price, like a strong currency makes it easier for that ecosystem to bring and maintain value inside of it.
* **Flywheels and How They Can Be Leveraged:** The flywheel in simple terms, is summarized in the following sequence:
_Brand → Users and Builders → Products, Services, and Communities → Native Value → Original Innovation → token appreciation → Brand._
To break this down fully: The brand and visibility of an ecosystem brings in users and builders, who produce, collaborate, and service the ecosystem (dApps, DAOs, LPs, traders, gamers, etc.), from which value is created and exchanged, and eventually from which new forms of value and dApps can be created. If executed well, this should lead to token appreciation, which enhances the visibility of the brand and brings more users into the ecosystem, from which the process can repeat itself.
## Conclusion
Success of an ecosystem is an open discussion today that revolves around a number of key variables. Ecosystems generate cultures and communities around specialization in certain variables. While all of these different variables are connected, token price still plays an exceptionally important role. It is up to the ecosystem itself to decide what variables it would like to prioritize and how it would like to strategically develop its community and brand.
|
---
id: implicit-accounts
title: Implicit Accounts
sidebar_label: Implicit Accounts
---
## Background {#background}
Implicit accounts work similarly to Bitcoin/Ethereum accounts.
- They allow you to reserve an account ID before it's created by generating an ED25519 key-pair locally.
- This key-pair has a public key that maps to the account ID.
- The account ID is a lowercase hex representation of the public key.
- An ED25519 public key contains 32 bytes that map to a 64-character account ID.
- The corresponding secret key allows you to sign transactions on behalf of this account once it's created on chain.
## [Specifications](https://nomicon.io/DataStructures/Account.html#implicit-account-ids) {#specifications}
## Creating an account locally {#creating-an-account-locally}
For the purpose of this demo, we'll use the `betanet` network.
### Set `betanet` network {#set-betanet-network}
```bash
export NEAR_ENV=betanet
```
### Generating a key-pair first {#generating-a-key-pair-first}
```bash
near generate-key --saveImplicit
```
Example Output
```
Seed phrase: lumber habit sausage used zebra brain border exist meat muscle river hidden
Key pair: {"publicKey":"ed25519:AQgnQSR1Mp3v7xrw7egJtu3ibNzoCGwUwnEehypip9od","secretKey":"ed25519:51qTiqybe8ycXwPznA8hz7GJJQ5hyZ45wh2rm5MBBjgZ5XqFjbjta1m41pq9zbRZfWGUGWYJqH4yVhSWoW6pYFkT"}
Implicit account: 8bca86065be487de45e795b2c3154fe834d53ffa07e0a44f29e76a2a5f075df8
Storing credentials for account: 8bca86065be487de45e795b2c3154fe834d53ffa07e0a44f29e76a2a5f075df8 (network: testnet)
Saving key to '~/.near-credentials/testnet/8bca86065be487de45e795b2c3154fe834d53ffa07e0a44f29e76a2a5f075df8.json'
```
#### Using the Implicit Account
We can export our account ID to a bash env variable:
```bash
export ACCOUNT="8bca86065be487de45e795b2c3154fe834d53ffa07e0a44f29e76a2a5f075df8"
```
Assuming you've received tokens on your new account, you can transfer from it using the following command:
```bash
near send $ACCOUNT <receiver> <amount>
```
You can also replace `$ACCOUNT` with your actual account ID, e.g.
```bash
near send 98793cd91a3f870fb126f66285808c7e094afcfc4eda8a970f6648cdf0dbd6de <receiver> <amount>
```
## Transferring to the implicit account {#transferring-to-the-implicit-account}
Let's say someone gives you their account ID `0861ea8ddd696525696ccf3148dd706c4fda981c64d8a597490472594400c223`. You can just transfer to it by running:
```bash
near send <your_account_id> 0861ea8ddd696525696ccf3148dd706c4fda981c64d8a597490472594400c223 <amount>
```
## BONUS: Converting public key using python (for learning purposes) {#bonus-converting-public-key-using-python-for-learning-purposes}
For this flow we'll use `python3` (version `3.5+`) with the `base58` library.
You can install this library with `pip3`:
```bash
pip3 install --user base58
```
Start python3 interpreter:
```bash
python3
```
The first thing is to get the data part from the public key (without `ed25519:` prefix). Let's store it in a variable `pk58`:
```python
pk58 = 'BGCCDDHfysuuVnaNVtEhhqeT4k9Muyem3Kpgq2U1m9HX'
```
Now let's import base58:
```python
import base58
```
Finally, let's convert our base58 public key representation to bytes and then to hex:
```python
base58.b58decode(pk58).hex()
```
Output:
```
'98793cd91a3f870fb126f66285808c7e094afcfc4eda8a970f6648cdf0dbd6de'
```
This gives us the same account ID as `near-cli`, so this is encouraging.
**Note:** The default network for `near-cli` is `testnet`. If you would like to change this to `mainnet` or `betanet`, please see [`near-cli` network selection](/tools/near-cli#network-selection) for instructions.
:::tip Got a question?
<a href="https://stackoverflow.com/questions/tagged/nearprotocol"> Ask it on StackOverflow! </a>
:::
|
NEAR Worldwide | September 24th, 2019
COMMUNITY
September 24, 2019
We’re traveling all over the world to talk to you! Hopefully we will see you (or have already seen you) at the events we are participating in at Shanghai Blockchain week, the BlockchainUA conference in Ukraine, Korea Blockchain week, and of course DevCon in Japan. As usual, if you see any one of us, don’t be shy and come say hello! We’re always happy to talk to members of our community. Speaking of which, the ambassador program is in full swing. That means you should keep an eye out for even more events near you. If there are no events near you, you can always become an ambassador for fun and excitement. Finally, for the devs out there, we’ve totally redesigned our documentation portal. On top of that, NEAR app integration is working with the Ledger hardware wallet (more to come!).
COMMUNITY AND EVENTS
Illia was in China again this past week. He’s been talking to a ton of community members in Shanghai including Math Wallet, our friends over at IOSG, and many others. He was on several panels and presented on our Nightshade Sharding Design. Thanks to everyone who came out! In other news, Max presented on Developer Experience at BlockchainUA in Ukraine and helped out (shoutout to Frol as well) with the hackathon. Here are some highlights:
Illia at Wanxiang Summit (Shanghai International Blockchain Week):
Illia at Old Friends United (Co sponsored by IOSG, NEAR, Polkadot, and CasperLab):
Max presenting at BlockchainUA:
Max and Frol helping out at the hackathon:
WRITING AND CONTENT
We’ve got a blog post for you to check out on Long Range Attacks, one of the largest unsolved problems in Proof-of-Stake blockchains. We also recently had a chance to catch up with Benny from Dapper Labs. He helped create crypto kitties, and drops some serious knowledge on creative marketing ideas in the new Fireside chat. Sasha wrote a new post on the future of Open Web. In other news: Canaan wrote a great post on migrating to NEAR from Loom SDK, and Jan from WorkbnDAO wrote a roundup of what they’ve learned in the crypto space since January 2018.
Long Range Attacks: https://pages.near.org/blog/long-range-attacks-and-a-new-fork-choice-rule/
Dapper Labs in Fireside Chat Ep 3: https://www.youtube.com/watch?v=Ww8XDdpw2Pk
Sasha’s post on Open Web: https://hackernoon.com/power-to-the-people-how-open-web-will-reshape-the-society-in-the-next-decade-5t2wd3193
Canaan from Stardust on why they’re moving from Loom to NEAR: https://medium.com/stardustplatform/stardust-joins-the-near-protocol-beta-program-47f3c630f2e0
Jan from WorknB on what they’ve learned since January 2018 about the crypto space: https://medium.com/worknb/our-learnings-at-worknbdao-since-january-2018-1b4cc2a18606
ENGINEERING HIGHLIGHTS
Some fun stuff to announce this week. Vlad was able to get the Ledger Nano hardware wallet to work with a NEAR app (you can see it running in the pic below). There’s also a little peek at our documentation portal, which we’ve redesigned from scratch. Finally, we’ve added two major features to core: batch transactions and the System Runtime API. That means you can do stuff like creating a contract factory! Super cool.
108 PRs across 20 repos by 19 authors. Featured repos: nearcore, nearlib, near-shell, near-wallet, near-bindgen, and borsh.
NEW DOCS PORTAL
Proper filtering of transactions. Every transaction in a chunk is now valid.
“Delete account” added to near shell and nearlib
Generic decoder added to AssemblyScript runtime
Trust wallet integration in progress
Block queries optimized in explorer (2s -> 55ms)
Batch transactions exposed in Rust bindgen
Transaction signing in Ledger hardware wallet 🙂 (see pic above)
Batched transactions implemented
NEAR <> Ethereum bridge is underway https://github.com/nearprotocol/near-bridge
Shipped nearlib 0.13.3
System Runtime API implemented
Contract reward added to core
Now can query block by hash, not just index in core
Continuing epic refactor in near-runtime-ts
Tests for invalid input added to BORSH
HOW YOU CAN GET INVOLVED
Join us: there are new jobs we’re hiring across the board!
If you want to work with one of the most talented teams in the world right now to solve incredibly hard problems, check out our careers page for openings. And tell your friends!
Learn more about NEAR in The Beginner’s Guide to NEAR Protocol. Stay up to date with what we’re building by following us on Twitter for updates, joining the conversation on Discord and subscribing to our newsletter to receive updates right to your inbox.
https://upscri.be/633436/ |
# Economics
**This is under heavy development**
## Units
| Name | Value |
| - | - |
| yoctoNEAR | smallest indivisible amount of the native currency *NEAR* |
| NEAR | `10**24` yoctoNEAR |
| block | smallest on-chain unit of time |
| gas | unit to measure usage of blockchain |
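As a quick sanity check of the unit relationship above, yoctoNEAR amounts can be computed with exact integer arithmetic (this helper is illustrative, not part of the spec):

```python
YOCTO_PER_NEAR = 10**24  # 1 NEAR = 10^24 yoctoNEAR

def near_to_yocto(whole_near: int, milli_near: int = 0) -> int:
    """Convert a NEAR amount (whole NEAR + milliNEAR) to yoctoNEAR.

    Integer arithmetic avoids the precision loss floats would
    introduce at 10^24 scale.
    """
    return whole_near * YOCTO_PER_NEAR + milli_near * (YOCTO_PER_NEAR // 1000)
```

For example, `near_to_yocto(0, 250)` yields `250_000_000_000_000_000_000_000` yoctoNEAR (0.25 NEAR).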
## General Parameters
| Name | Value |
| - | - |
| `INITIAL_SUPPLY` | `10**33` yoctoNEAR |
| `MIN_GAS_PRICE` | `10**5` yoctoNEAR |
| `REWARD_PCT_PER_YEAR` | `0.05` |
| `EPOCH_LENGTH` | `43,200` blocks |
| `EPOCHS_A_YEAR` | `730` epochs |
| `INITIAL_MAX_STORAGE` | `10 * 2**40` bytes == `10` TB |
| `TREASURY_PCT` | `0.1` |
| `TREASURY_ACCOUNT_ID` | `treasury` |
| `CONTRACT_PCT` | `0.3` |
| `INVALID_STATE_SLASH_PCT` | `0.05` |
| `ADJ_FEE` | `0.01` |
| `TOTAL_SEATS` | `100` |
| `ONLINE_THRESHOLD_MIN` | `0.9` |
| `ONLINE_THRESHOLD_MAX` | `0.99` |
| `BLOCK_PRODUCER_KICKOUT_THRESHOLD` | `0.9` |
| `CHUNK_PRODUCER_KICKOUT_THRESHOLD` | `0.6` |
## General Variables
| Name | Description | Initial value |
| - | - | - |
| `totalSupply[t]` | Total supply of NEAR at given epoch[t] | `INITIAL_SUPPLY` |
| `gasPrice[t]` | The cost of 1 unit of *gas* in NEAR tokens (see Transaction Fees section below) | `MIN_GAS_PRICE` |
| `storageAmountPerByte[t]` | keeping constant, `INITIAL_SUPPLY / INITIAL_MAX_STORAGE` | `~9.09 * 10**19` yoctoNEAR |
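The constant `storageAmountPerByte` follows directly from the parameters above; a quick check (illustrative, not part of the spec):

```python
INITIAL_SUPPLY = 10**33           # yoctoNEAR
INITIAL_MAX_STORAGE = 10 * 2**40  # bytes (10 TB)

# Integer division here; the exact value is
# INITIAL_SUPPLY / INITIAL_MAX_STORAGE ~= 9.09 * 10**19 yoctoNEAR per byte.
storage_amount_per_byte = INITIAL_SUPPLY // INITIAL_MAX_STORAGE
```

This comes out to roughly `9.09 * 10**19` yoctoNEAR per byte, matching the table.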
## Issuance
The protocol sets a ceiling for the maximum issuance of tokens, and dynamically decreases this issuance depending on the amount of total fees in the system.
| Name | Description |
| - | - |
| `reward[t]` | `totalSupply[t]` * `REWARD_PCT_PER_YEAR` * `epochTime[t]` / `NUM_SECONDS_IN_A_YEAR` |
| `epochFee[t]` | `sum([(1 - DEVELOPER_PCT_PER_YEAR) * block.txFee + block.stateFee for block in epoch[t]])` |
| `issuance[t]` | The amount of token issued at a certain epoch[t], `issuance[t] = reward[t] - epochFee[t]` |
Where `totalSupply[t]` is the total number of tokens in the system at a given time *t* and `epochTime[t]` is the
duration of the epoch in seconds.
If `epochFee[t] > reward[t]` the issuance is negative, and thus `totalSupply[t]` decreases in the given epoch.
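Putting the issuance formulas above together, a minimal sketch (variable names are illustrative; the protocol uses exact rational arithmetic, floats here are for readability):

```python
REWARD_PCT_PER_YEAR = 0.05
NUM_SECONDS_IN_A_YEAR = 365 * 24 * 60 * 60

def epoch_reward(total_supply: int, epoch_time: float) -> float:
    # reward[t] = totalSupply[t] * REWARD_PCT_PER_YEAR * epochTime[t] / NUM_SECONDS_IN_A_YEAR
    return total_supply * REWARD_PCT_PER_YEAR * epoch_time / NUM_SECONDS_IN_A_YEAR

def issuance(total_supply: int, epoch_time: float, epoch_fee: float) -> float:
    # issuance[t] = reward[t] - epochFee[t]; negative when fees exceed the
    # reward, in which case totalSupply decreases over the epoch.
    return epoch_reward(total_supply, epoch_time) - epoch_fee
```

With `EPOCHS_A_YEAR = 730`, an epoch lasting `NUM_SECONDS_IN_A_YEAR / 730` seconds and zero fees would issue about `0.05 / 730 ≈ 0.00685%` of the supply.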
## Transaction Fees
Each transaction, before inclusion, must buy enough gas to cover the cost of bandwidth and execution.
Gas unifies execution and bandwidth (bytes) usage of the blockchain. Each WASM instruction or pre-compiled function is assigned an amount of gas based on measurements on a common-denominator computer. The same goes for weighting the bandwidth used, based on general unified costs. For specific gas mapping numbers see [???](#).
Gas is priced dynamically in `NEAR` tokens. At each block `t`, we update `gasPrice[t] = gasPrice[t - 1] * (1 + (gasUsed[t - 1] / gasLimit[t - 1] - 0.5) * ADJ_FEE)`.
Where `gasUsed[t] = sum([sum([gas(tx) for tx in chunk]) for chunk in block[t]])`.
`gasLimit[t]` is defined as `gasLimit[t] = gasLimit[t - 1] + validatorGasDiff[t - 1]`, where `validatorGasDiff` is a parameter with which each chunk producer can either increase or decrease the gas limit based on how long it took to execute the previous chunk. `validatorGasDiff[t]` can be only within `±0.1%` of `gasLimit[t]` and only if `gasUsed[t - 1] > 0.9 * gasLimit[t - 1]`.
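The gas price update rule above can be sketched as follows (illustrative; the protocol uses exact rational arithmetic rather than floats):

```python
ADJ_FEE = 0.01

def next_gas_price(prev_price: float, gas_used: int, gas_limit: int) -> float:
    # gasPrice[t] = gasPrice[t-1] * (1 + (gasUsed[t-1] / gasLimit[t-1] - 0.5) * ADJ_FEE)
    return prev_price * (1 + (gas_used / gas_limit - 0.5) * ADJ_FEE)
```

At exactly half-full blocks the price is unchanged; a completely full block raises it by 0.5%, and an empty block lowers it by 0.5%.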
## State Stake
The amount of `NEAR` on an account represents the right for that account to occupy a portion of the blockchain's overall global state. Transactions fail if the account doesn't have enough balance to cover the storage it requires.
```python
def check_storage_cost(account):
    # Compute requiredAmount given the size of the account.
    requiredAmount = sizeOf(account) * storageAmountPerByte
    return Ok() if account.amount + account.locked >= requiredAmount else Error(requiredAmount)

# Check when a transaction is received to verify that it is valid.
def verify_transaction(tx, signer_account):
    # ...
    # Update the signer's account with the amount it will have after executing this tx.
    update_post_amount(signer_account, tx)
    result = check_storage_cost(signer_account)
    # Fail unless there is enough balance OR the account is being deleted by its owner.
    if not result.ok() and DeleteAccount(tx.signer_id) not in tx.actions:
        raise LackBalanceForState(signer_id=tx.signer_id, amount=result.err())

# After an account is touched / changed, check that it still has enough balance to cover its storage.
def on_account_change(block_height, account):
    # ... execute transaction / receipt changes ...
    # Validate the post-condition and revert if it fails.
    result = check_storage_cost(account)
    if not result.ok():
        raise LackBalanceForState(signer_id=account.id, amount=result.err())
```
Where `sizeOf(account)` includes size of `account_id`, `account` structure and size of all the data stored under the account.
An account can end up with insufficient balance if it gets slashed. Such an account becomes unusable, as all transactions it originates will fail (including deletion).
The only way to recover it in this case is by sending extra funds from a different account.
## Validators
NEAR validators provide their resources in exchange for a reward `epochReward[t]`, where `t` represents the considered epoch.
| Name | Description |
| - | - |
| `epochReward[t]` | `= coinbaseReward[t] + epochFee[t]` |
| `coinbaseReward[t]` | The maximum inflation per epoch[t], as a function of `REWARD_PCT_PER_YEAR / EPOCHS_A_YEAR` |
### Validator Selection
```rust
struct Proposal {
    account_id: AccountId,
    stake: Balance,
    public_key: PublicKey,
}
```
During the epoch, the outcomes of staking transactions produce `proposals`, which are collected in the form of `Proposal` structs.
There are separate proposals for block producers and chunk-only producers; see [Selecting Chunk and Block Producers](../ChainSpec/SelectingBlockProducers.md) for more information.
At the end of every epoch `T`, the following algorithm is executed to determine validators for epoch `T + 2`:
1. For every chunk/block producer in `epoch[T]` determine `num_blocks_produced`, `num_chunks_produced` based on what they produced during the epoch.
2. Remove validators, for whom `num_blocks_produced < num_blocks_expected * BLOCK_PRODUCER_KICKOUT_THRESHOLD` or `num_chunks_produced < num_chunks_expected * CHUNK_PRODUCER_KICKOUT_THRESHOLD`.
3. Collect chunk-only and block producer `proposals`; if a validator was also a validator in `epoch[T]`, the considered stake of the proposal is `0 if proposal.stake == 0 else proposal.stake + reward[proposal.account_id]`.
4. Use the chunk/block producer selection algorithms outlined in [Selecting Chunk and Block Producers](../ChainSpec/SelectingBlockProducers.md).
### Validator Rewards Calculation
Note: all calculations are done in Rational numbers.
Total reward every epoch `t` is equal to:
```python
total_reward[t] = floor(totalSupply * max_inflation_rate * epoch_length / num_blocks_per_year)
```
where `max_inflation_rate`, `num_blocks_per_year`, `epoch_length` are genesis parameters and `totalSupply` is
taken from the last block in the epoch.
After that a fraction of the reward goes to the treasury and the remaining amount will be used for computing validator rewards:
```python
treasury_reward[t] = floor(total_reward[t] * protocol_reward_rate)
validator_reward[t] = total_reward[t] - treasury_reward[t]
```
Validators that didn't meet the threshold for either blocks or chunks get kicked out and don't receive any reward; otherwise the uptime
of a validator is computed:
```python
pct_online[t][j] = (num_produced_blocks[t][j] / expected_produced_blocks[t][j] + num_produced_chunks[t][j] / expected_produced_chunks[t][j]) / 2
if pct_online[t][j] > ONLINE_THRESHOLD_MIN:
uptime[t][j] = min(1, (pct_online[t][j] - ONLINE_THRESHOLD_MIN) / (ONLINE_THRESHOLD_MAX - ONLINE_THRESHOLD_MIN))
else:
uptime[t][j] = 0
```
where `expected_produced_blocks[t][j]` and `expected_produced_chunks[t][j]` are the numbers of blocks and chunks, respectively, that validator `j` is expected to produce in epoch `t`.
The reward for a specific validator `j` in epoch `t` is then proportional to that validator's fraction of the total stake:
```python
validatorReward[t][j] = floor(uptime[t][j] * stake[t][j] * validator_reward[t] / total_stake[t])
```
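Putting the formulas above together, here is a sketch of the per-epoch reward computation using exact rational arithmetic, as the text requires. The threshold and genesis values below are illustrative examples, not mainnet parameters:

```python
from math import floor
from fractions import Fraction

def epoch_rewards(total_supply, max_inflation_rate, num_blocks_per_year,
                  epoch_length, protocol_reward_rate):
    """Return (total_reward, validator_reward) for one epoch."""
    total_reward = floor(Fraction(total_supply) * max_inflation_rate
                         * epoch_length / num_blocks_per_year)
    treasury_reward = floor(total_reward * protocol_reward_rate)
    return total_reward, total_reward - treasury_reward

def uptime(produced_blocks, expected_blocks, produced_chunks, expected_chunks,
           threshold_min=Fraction(9, 10), threshold_max=Fraction(99, 100)):
    """Uptime factor of a single validator for the epoch."""
    pct_online = (Fraction(produced_blocks, expected_blocks)
                  + Fraction(produced_chunks, expected_chunks)) / 2
    if pct_online <= threshold_min:
        return Fraction(0)
    return min(Fraction(1),
               (pct_online - threshold_min) / (threshold_max - threshold_min))

def validator_reward(up, stake, epoch_validator_reward, total_stake):
    """Reward of one validator, proportional to uptime and stake share."""
    return floor(up * stake * epoch_validator_reward / total_stake)
```

Using `Fraction` keeps all intermediate values exact, with `floor` applied only at the points the formulas above specify.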
### Slashing
#### ChunkProofs
```python
# Check that chunk is invalid, because the proofs in header don't match the body.
def chunk_proofs_condition(chunk):
# TODO
# At the end of the epoch, run update validators and
# determine how much to slash validators.
def end_of_epoch_update_validators(validators):
# ...
for validator in validators:
if validator.is_slashed:
validator.stake -= INVALID_STATE_SLASH_PCT * validator.stake
```
#### ChunkState
```python
# Check that chunk header post state root is invalid,
# because the execution of previous chunk doesn't lead to it.
def chunk_state_condition(prev_chunk, prev_state, chunk_header):
# TODO
# At the end of the epoch, run update validators and
# determine how much to slash validators.
def end_of_epoch(..., validators):
# ...
for validator in validators:
if validator.is_slashed:
validator.stake -= INVALID_STATE_SLASH_PCT * validator.stake
```
## Protocol Treasury
The treasury account `TREASURY_ACCOUNT_ID` receives a fraction of the reward every epoch `t`:
```python
# At the end of the epoch, update treasury
def end_of_epoch(..., reward):
# ...
accounts[TREASURY_ACCOUNT_ID].amount += treasury_reward[t]
```
## Contract Rewards
A contract account is rewarded with 30% of the gas burned during the execution of its functions.
The reward is credited to the contract account after applying the corresponding receipt with a [`FunctionCallAction`](../RuntimeSpec/Actions.md#functioncallaction); gas is converted to tokens using the gas price of the current block.
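As a sketch of this rule (the 30% ratio comes from the text above; the exact rounding in nearcore may differ):

```python
from math import floor
from fractions import Fraction

# Fraction of burnt gas credited to the contract account (from the text).
CONTRACT_REWARD_RATIO = Fraction(30, 100)

def contract_reward(gas_burnt, gas_price):
    """Tokens credited to the contract for executing a function call."""
    return floor(gas_burnt * CONTRACT_REWARD_RATIO * gas_price)
```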
You can read more about:
- [receipts execution](../RuntimeSpec/Receipts.md);
- [runtime fees](../RuntimeSpec/Fees/Fees.md) with description [how gas is charged](../RuntimeSpec/Fees/Fees.md#gas-tracking).
NEAR DA Integrates with Polygon CDK for Developers Building Ethereum ZK Rollups
DEVELOPERS
January 18, 2024
The NEAR Foundation and Polygon Labs announced today that the latest technical integration for NEAR DA’s efficient and highly scalable data availability is now available for Polygon CDK, the tech stack that allows developers to launch their own ZK-powered Layer 2s custom-fitted to their needs.
The NEAR Data Availability layer (NEAR DA) is a highly efficient and robust data availability layer, designed to help Ethereum rollup builders simplify their network and lower costs, while ensuring they can scale like the NEAR Protocol. Polygon CDK is a simple-to-build L2 stack and a scaling solution for Ethereum that lets developers build custom L2 chains with their own configurations. Builders can enjoy customisable features (transaction costs, native token, throughput), while still enjoying familiar EVM-compatibility and trustless security.
With the latest NEAR DA integration, rollups can benefit from cheaper data availability costs to significantly reduce their overall rollup overheads. This is especially relevant considering data publishing costs are at all-time highs for L2s.
NEAR DA paves the way for modular blockchain development
This integration empowers rollup builders on Polygon CDK to use NEAR DA as a complete, out-of-the-box modular DA solution.
“We’re excited to make NEAR DA available to Polygon CDK rollup builders and share the benefits of cost-effective, lightning-fast data availability that scales with NEAR’s sharding for the modular Ethereum ecosystem,” said Illia Polosukhin, Co-Founder of NEAR Protocol. “This is another exciting collaboration between NEAR Foundation and Polygon Labs to drive Chain Abstraction and create better experiences for developers and users.”
As of December 2023, 231 kB of calldata on NEAR costs $0.0016, while the same calldata on Ethereum L1 costs users $140.54 and on Celestia costs users $0.046.
NEAR DA helps developers reduce costs and enhance their rollup’s reliability, while maintaining the security guarantees provided by Ethereum. Another upside to NEAR DA is that high quality projects launching an app-chain or L2 will be able to get out-of-the-box NEAR DA compatibility and support.
Develop ZK L2s within the Polygon CDK ecosystem enabled by NEAR DA
The NEAR-Polygon CDK integration allows developers building their own rollups to be part of the Polygon ecosystem, a network of blockchains which includes Polygon PoS, Polygon zkEVM and Polygon CDK-enabled blockchains. This is the first NEAR DA integration with a ZK-based L2 stack, increasing optionality for developers looking for scalable DA solutions.
This integration also builds upon the NEAR-Polygon research collaboration to build zkWASM, a new type of prover for WASM blockchains. In the future, builders could even create zkWASM chains built on NEAR DA. Together, NEAR DA and zkWASM technology will play significant roles in scaling EVM and Wasm ecosystems in parallel while maximizing interoperability for a multi-chain future.
Interested teams who want to work with NEAR DA are invited to fill out this form, with information about your project and how you would like to integrate with NEAR DA.
---
id: state-sync
title: State Sync
sidebar_label: State Sync Configuration
sidebar_position: 4
description: State Sync Configuration
---
# Overview
See [State Sync from External Storage](https://github.com/near/nearcore/blob/master/docs/misc/state_sync_from_external_storage.md)
for a description of how to configure your node to sync State from an arbitrary
external location.
Pagoda provides state dumps for every shard of every epoch since the release of
`1.36.0-rc.1` for testnet and `1.36.0` for mainnet.
## Enable State Sync
The main option you need is `state_sync_enabled`; then specify how to get
the state parts provided by Pagoda in the `state_sync` option.
The following snippet is included in the reference `config.json` file and
provides access to the state of every shard every epoch:
```json
"state_sync_enabled": true,
"state_sync": {
"sync": {
"ExternalStorage": {
"location": {
"GCS": {
"bucket": "state-parts"
}
},
"num_concurrent_requests": 4,
"num_concurrent_requests_during_catchup": 4
}
}
}
```
## Reference `config.json` file
Run the following command to download the reference `config.json` file:
```shell
./neard --home /tmp/ init --download-genesis --download-config --chain-id <testnet or mainnet>
```
The file will be available at `/tmp/config.json`.
## Troubleshooting
If you notice that your node runs state sync and it hasn't completed after 3 hours, please check the following:
1. Config options related to the state sync in your `config.json` file:
* `state_sync_enabled`
* `state_sync`
* `consensus.state_sync_timeout`
* `tracked_shards`
* `tracked_accounts`
* `tracked_shard_schedule`
* `archive`
* `block_fetch_horizon`
The best way to see the exact config values used is to visit a debug page of your node: `http://127.0.0.1:3030/debug/client_config`
Check whether state sync is enabled, and check whether it's configured to get state parts from the right location mentioned above.
2. Has your node run out of available disk space?
If all seems to be configured fine, then disable state sync (set `"state_sync_enabled": false` in `config.json`) and try again.
If that doesn't help, then restart from a backup data snapshot.
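The checklist in step 1 can be partially automated. Here is a sketch that flags a few common misconfigurations; the messages and the interpretation of `archive` are assumptions, not official diagnostics:

```python
# Sketch: flag state-sync-related options from the checklist above that
# commonly cause a stuck sync. In practice, load your node's config.json
# with json.load() and pass the resulting dict here.
def check_state_sync_config(config):
    issues = []
    if not config.get("state_sync_enabled", False):
        issues.append("state_sync_enabled is false: state sync is disabled")
    if "state_sync" not in config:
        issues.append("state_sync section is missing: no external storage configured")
    if config.get("archive", False):
        issues.append("archive is true: archival nodes do not use state sync")
    return issues

# Example: a config that never finishes state sync because it is disabled
example = {"state_sync_enabled": False, "tracked_shards": [0]}
for issue in check_state_sync_config(example):
    print(issue)
```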
## Running a Chunk-Only Producer that tracks a single shard
Enable State Sync as explained above.
And then configure the node to track no shards:
```json
"tracked_shards": [],
"tracked_accounts": [],
"tracked_shard_schedule": [],
```
It is counter-intuitive but it works. If a node stakes and is accepted as a
validator, then it will track the shard that it needs to track to fulfill its
validator role. The assignment of validators to shards is done by the consensus
of validators.
Note that in different epochs a validator may be assigned to different shards. A
node switches tracked shards by using the State Sync mechanism for a single
shard. See [catchup](https://github.com/near/nearcore/blob/master/docs/architecture/how/sync.md#catchup).
---
id: running-a-node-windows
title: Run a Node on Windows
sidebar_label: Run a Node (Windows)
sidebar_position: 4
description: How to run a NEAR node using `nearup` on Windows
---
*If this is your first time setting up a validator node, head to our [Validator Bootcamp 🚀](/validator/validator-bootcamp). We encourage you to set up your node with Neard instead of Nearup, as Nearup is not used on Mainnet. Please head to [Run a node](/validator/compile-and-run-a-node) for instructions on how to set up an RPC node with Neard.*
This doc is written for developers, sysadmins, DevOps, or curious people who want to know how to run a NEAR node using `nearup` on Windows.
<blockquote class="warning">
<strong>Heads up</strong><br /><br />
This documentation may require additional edits. Please keep this in mind while running the following commands.
</blockquote>
## `nearup` Installation {#nearup-installation}
You can install `nearup` by following the instructions at https://github.com/near-guildnet/nearup.
<blockquote class="info">
<strong>Heads up</strong><br /><br />
The README for `nearup` (linked above) may be **all you need to get a node up and running** in `testnet` and `localnet`. `nearup` is exclusively used to launch NEAR `testnet` and `localnet` nodes. `nearup` is not used to launch `mainnet` nodes. See [Deploy Node on Mainnet](deploy-on-mainnet.md) for running a node on `mainnet`.
</blockquote>
1. If Windows Subsystem for Linux is not enabled, open PowerShell as administrator and run:
```sh
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
```
Then restart your computer.
2. Go to your Microsoft Store and look for Ubuntu; this is the Ubuntu Terminal instance. Install and launch it.
3. You might be asked for a username and password; do not use `admin` as the username.
4. Your Ubuntu instance needs some initial setup before the next steps:
```sh
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential pkg-config libssl-dev
```
5. Install OpenSSL, which is required to run the node. To download it, run the following commands in the Ubuntu Terminal:
```sh
cd /tmp
wget https://www.openssl.org/source/openssl-1.1.1.tar.gz
tar xvf openssl-1.1.1.tar.gz
```
6. After OpenSSL has finished downloading, run the following commands to install it:
```sh
cd openssl-1.1.1
sudo ./config -Wl,--enable-new-dtags,-rpath,'$(LIBRPATH)'
sudo make
sudo make install
```
The files will be installed under `/usr/local/ssl`.
7. Once this is finished, ensure that Ubuntu uses the right version of OpenSSL by updating the path for man pages and binaries. Run the following command:
```sh
cd ../..
sudo nano /etc/manpath.config
```
8. A text file will open, add the following line:
```sh
MANPATH_MAP /usr/local/ssl/bin /usr/local/ssl/man
```
Once this is done, press Ctrl+O; when asked to save the file, press Enter. Then press Ctrl+X to exit.
9. To make sure that OpenSSL is installed run:
```sh
openssl version -v
```
This should show you the installed version. More info can be found in the [Ubuntu manpages](https://manpages.ubuntu.com/manpages/bionic/man1/version.1ssl.html).
10. Now run the following commands to install all necessary software and dependencies:
```sh
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y git jq binutils-dev libcurl4-openssl-dev zlib1g-dev libdw-dev libiberty-dev cmake gcc g++ protobuf-compiler python3 python3-pip llvm clang
```
Install Rustup
```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
rustup default nightly
```
Great! All set to get the node up and running!
11. Clone the `nearcore` repository.
First, we need to check which version is currently working in `testnet`:
```sh
curl -s https://rpc.testnet.near.org/status | jq .version
```
You'll get something like `"1.13.0-rc.2"`; `1.13.0` is the branch we need to clone to build our node for `testnet`.
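You can derive the branch name from the reported version with shell parameter expansion; the version string is hardcoded here for illustration:

```sh
# Example version string as reported by the RPC status endpoint
# (hardcoded here for illustration)
version="1.13.0-rc.2"
# Strip everything from the first "-" onward to get the release branch
branch="${version%%-*}"
echo "$branch"   # prints 1.13.0
```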
```sh
git clone --branch 1.13.0 https://github.com/near/nearcore.git
```
12. This created a `nearcore` directory; change into it and build the node:
```sh
cd nearcore
make neard
```
13. Install nearup
```sh
pip3 install --user nearup
export PATH="$HOME/.local/bin:$PATH"
```
14. Finally, run the node on `testnet`:
```sh
nearup run testnet --binary-path ~/nearcore/target/release/neard
```
To make sure the node is running, you can check the logs:
```sh
nearup logs --follow
```
You might be asked for a validator ID; if you do not want to validate, simply press enter. For validation, please follow the [Validator Bootcamp](/validator/validator-bootcamp).
>Got a question?
<a href="https://stackoverflow.com/questions/tagged/nearprotocol">Ask it on StackOverflow!</a>
---
sidebar_label: Migrating to NEAR Lake framework
---
# Migrating to NEAR Lake Framework
We encourage everyone who doesn't have a hard requirement to use the [NEAR Indexer Framework](/concepts/advanced/near-indexer-framework) to consider migrating to the [NEAR Lake Framework](/concepts/advanced/near-lake-framework).
In this tutorial we'll show you how to migrate a project, using [indexer-tx-watcher-example](https://github.com/near-examples/indexer-tx-watcher-example) as a showcase.
:::info Source code
The source code for the migrated indexer can be found on GitHub https://github.com/near-examples/indexer-tx-watcher-example-lake/tree/0.4.0
:::
:::info Diffs
We've [posted the diffs for reference at the end](#diffs) of the article; you can scroll down to them if diffs are all you need to migrate your indexer.
:::
## Changing the dependencies
First of all, we'll start with the dependencies in `Cargo.toml`:
```toml title=src/Cargo.toml
[package]
name = "indexer-tx-watcher-example"
version = "0.1.0"
authors = ["Near Inc <hello@nearprotocol.com>"]
edition = "2018"
[dependencies]
actix = "=0.11.0-beta.2"
actix-rt = "=2.2.0" # remove it once actix is upgraded to 0.11+
base64 = "0.11"
clap = "3.0.0-beta.1"
openssl-probe = { version = "0.1.2" }
serde = { version = "1", features = ["derive"] }
serde_json = "1.0.55"
tokio = { version = "1.1", features = ["sync"] }
tracing = "0.1.13"
tracing-subscriber = "0.2.4"
near-indexer = { git = "https://github.com/near/nearcore", rev = "25b000ae4dd9fe784695d07a3f2e99d82a6f10bd" }
```
- Update `edition` to `2021`
- Drop `actix` crates
- Drop `openssl-probe` crate
- Add `futures` and `itertools`
- Add features to `tokio` as we will be using tokio runtime
- Add `tokio-stream` crate
- Replace `near-indexer` with `near-lake-framework`
In the end we'll have:
```toml title=src/Cargo.toml
[package]
name = "indexer-tx-watcher-example"
version = "0.1.0"
authors = ["Near Inc <hello@nearprotocol.com>"]
edition = "2021"
[dependencies]
base64 = "0.11"
clap = { version = "3.1.6", features = ["derive"] }
futures = "0.3.5"
serde = { version = "1", features = ["derive"] }
serde_json = "1.0.55"
itertools = "0.9.0"
tokio = { version = "1.1", features = ["sync", "time", "macros", "rt-multi-thread"] }
tokio-stream = { version = "0.1" }
tracing = "0.1.13"
tracing-subscriber = "0.2.4"
near-lake-framework = "0.4.0"
```
## Change the clap configs
Currently we have the `Opts` structure, which has a subcommand with `Run` and `Init` commands. Since [NEAR Lake Framework](/concepts/advanced/near-lake-framework) doesn't need `data` and config files, we don't need `Init` at all, so we'll combine some structures into `Opts` itself.
```rust title=src/config.rs
...
/// NEAR Indexer Example
/// Watches for stream of blocks from the chain
#[derive(Clap, Debug)]
#[clap(version = "0.1", author = "Near Inc. <hello@nearprotocol.com>")]
pub(crate) struct Opts {
/// Sets a custom config dir. Defaults to ~/.near/
#[clap(short, long)]
pub home_dir: Option<std::path::PathBuf>,
#[clap(subcommand)]
pub subcmd: SubCommand,
}
#[derive(Clap, Debug)]
pub(crate) enum SubCommand {
/// Run NEAR Indexer Example. Start observe the network
Run(RunArgs),
/// Initialize necessary configs
Init(InitConfigArgs),
}
#[derive(Clap, Debug)]
pub(crate) struct RunArgs {
/// account ids to watch for
#[clap(long)]
pub accounts: String,
}
#[derive(Clap, Debug)]
pub(crate) struct InitConfigArgs {
...
}
...
```
We are going to:
- Drop `InitConfigArgs` completely
- Move the content from `RunArgs` to `Opts` and then drop `RunArgs`
- Drop `home_dir` from `Opts`
- Add `block_height` to `Opts` to know from which block height to start indexing
- Refactor `SubCommand` to have two variants, mainnet and testnet, to define which chain to index
- Add the `Clone` derive to the structs for later
```rust title=src/config.rs
/// NEAR Indexer Example
/// Watches for stream of blocks from the chain
#[derive(Clap, Debug, Clone)]
#[clap(version = "0.1", author = "Near Inc. <hello@nearprotocol.com>")]
pub(crate) struct Opts {
/// block height to start indexing from
#[clap(long)]
pub block_height: u64,
/// account ids to watch for
#[clap(long)]
pub accounts: String,
#[clap(subcommand)]
pub subcmd: SubCommand,
}
#[derive(Clap, Debug, Clone)]
pub(crate) enum SubCommand {
Mainnet,
Testnet,
}
```
At the end of the file there is one implementation we need to replace.
```rust title=src/config.rs
...
impl From<InitConfigArgs> for near_indexer::InitConfigArgs {
...
}
```
We want to be able to cast `Opts` to `near_lake_framework::LakeConfig`. So we're going to create a new implementation.
```rust title=src/config.rs
impl From<Opts> for near_lake_framework::LakeConfig {
fn from(opts: Opts) -> Self {
let mut lake_config =
near_lake_framework::LakeConfigBuilder::default().start_block_height(opts.block_height);
match &opts.subcmd {
SubCommand::Mainnet => {
lake_config = lake_config.mainnet();
}
SubCommand::Testnet => {
lake_config = lake_config.testnet();
}
};
lake_config.build().expect("Failed to build LakeConfig")
}
}
```
And the final move is to change `init_logging` function to remove redundant log subscriptions:
```rust title=src/config.rs
...
pub(crate) fn init_logging() {
let env_filter = EnvFilter::new(
"tokio_reactor=info,near=info,stats=info,telemetry=info,indexer_example=info,indexer=info,near-performance-metrics=info",
);
tracing_subscriber::fmt::Subscriber::builder()
.with_env_filter(env_filter)
.with_writer(std::io::stderr)
.init();
}
...
```
Replace it with
```rust title=src/config.rs
...
pub(crate) fn init_logging() {
let env_filter = EnvFilter::new("near_lake_framework=info");
tracing_subscriber::fmt::Subscriber::builder()
.with_env_filter(env_filter)
.with_writer(std::io::stderr)
.init();
}
...
```
Finally we're done with `src/config.rs` and now we can move on to `src/main.rs`
## Replacing the indexer instantiation
Since we can use the `tokio` runtime and make our `main` function asynchronous, it's shorter to show how to recreate the `main` function than to walk through the refactoring.
Let's start with the imports section.
### Imports before
```rust title=src/main.rs
use std::str::FromStr;
use std::collections::{HashMap, HashSet};
use clap::Clap;
use tokio::sync::mpsc;
use tracing::info;
use configs::{init_logging, Opts, SubCommand};
mod configs;
```
### Imports after
We're adding the `near_lake_framework` imports and removing the redundant import from `configs`.
```rust title=src/main.rs
use std::str::FromStr;
use std::collections::{HashMap, HashSet};
use clap::Clap;
use tokio::sync::mpsc;
use tracing::info;
use near_lake_framework::near_indexer_primitives;
use near_lake_framework::LakeConfig;
use configs::{init_logging, Opts};
```
### Creating `main()`
Let's create an async `main()` function, call `init_logging` and read the `Opts`.
```rust title=src/main.rs
#[tokio::main]
async fn main() -> Result<(), tokio::io::Error> {
init_logging();
let opts: Opts = Opts::parse();
```
Let's cast `LakeConfig` from `Opts` and instantiate [NEAR Lake Framework](/concepts/advanced/near-lake-framework)'s `stream`
```rust title=src/main.rs
#[tokio::main]
async fn main() -> Result<(), tokio::io::Error> {
init_logging();
let opts: Opts = Opts::parse();
let config: LakeConfig = opts.clone().into();
let (_, stream) = near_lake_framework::streamer(config);
```
Copy/paste the code that reads the `accounts` arg into a `Vec<AccountId>` from the old `main()`:
```rust title=src/main.rs
#[tokio::main]
async fn main() -> Result<(), tokio::io::Error> {
init_logging();
let opts: Opts = Opts::parse();
let config: LakeConfig = opts.clone().into();
let (_, stream) = near_lake_framework::streamer(config);
let watching_list = opts
.accounts
.split(',')
.map(|elem| {
near_indexer_primitives::types::AccountId::from_str(elem).expect("AccountId is invalid")
})
.collect();
```
Now we can call the `listen_blocks` function we used before, when our indexer was built on top of the [NEAR Indexer Framework](/concepts/advanced/near-indexer-framework), and return `Ok(())` so our `main()` is happy.
### Final async main with NEAR Lake Framework stream
```rust title=src/main.rs
#[tokio::main]
async fn main() -> Result<(), tokio::io::Error> {
init_logging();
let opts: Opts = Opts::parse();
let config: LakeConfig = opts.clone().into();
let (_, stream) = near_lake_framework::streamer(config);
let watching_list = opts
.accounts
.split(',')
.map(|elem| {
near_indexer_primitives::types::AccountId::from_str(elem).expect("AccountId is invalid")
})
.collect();
listen_blocks(stream, watching_list).await;
Ok(())
}
```
We're done. That's pretty much the entire `main()` function. Drop the old one if you haven't yet.
## Changes in other functions related to data types
Along with the [NEAR Lake Framework](/concepts/advanced/near-lake-framework) release, we extracted the structures created for indexers into a separate crate. This was done to avoid a dependency on `nearcore`: now you can depend on a separate crate that is already [published on crates.io](https://crates.io/crates/near-indexer-primitives), or on NEAR Lake Framework, which exposes that crate.
### `listen_blocks`
The function signature needs to be changed to point to the new location of the data types:
```rust title=src/main.rs
async fn listen_blocks(
mut stream: mpsc::Receiver<near_indexer::StreamerMessage>,
watching_list: Vec<near_indexer::near_primitives::types::AccountId>,
) {
```

Replace it with:

```rust title=src/main.rs
async fn listen_blocks(
mut stream: mpsc::Receiver<near_indexer_primitives::StreamerMessage>,
watching_list: Vec<near_indexer_primitives::types::AccountId>,
) {
```
There are another three places where `near_indexer::near_primitives` needs to be replaced with `near_indexer_primitives`:
```rust title=src/main.rs
if let near_indexer_primitives::views::ReceiptEnumView::Action {
```
```rust title=src/main.rs
if let near_indexer_primitives::views::ReceiptEnumView::Action {
```
```rust title=src/main.rs
if let near_indexer_primitives::views::ActionView::FunctionCall {
```
### `is_tx_receiver_watched()`
The final data-type change is in the function `is_tx_receiver_watched()`:
```rust title=src/main.rs
fn is_tx_receiver_watched(
tx: &near_indexer_primitives::IndexerTransactionWithOutcome,
watching_list: &[near_indexer_primitives::types::AccountId],
) -> bool {
watching_list.contains(&tx.transaction.receiver_id)
}
```
## Credentials
[Configure the credentials](./running-near-lake/credentials.md) in order to access the data from NEAR Lake Framework.
## Conclusion
And now we have an indexer completely migrated to the [NEAR Lake Framework](/concepts/advanced/near-lake-framework).
We are posting the complete diffs below for reference.
## Diffs
```diff title=Cargo.toml
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -2,18 +2,18 @@
name = "indexer-tx-watcher-example"
version = "0.1.0"
authors = ["Near Inc <hello@nearprotocol.com>"]
-edition = "2018"
+edition = "2021"
[dependencies]
-actix = "=0.11.0-beta.2"
-actix-rt = "=2.2.0" # remove it once actix is upgraded to 0.11+
base64 = "0.11"
-clap = "3.0.0-beta.1"
-openssl-probe = { version = "0.1.2" }
+clap = { version = "3.1.6", features = ["derive"] }
+futures = "0.3.5"
serde = { version = "1", features = ["derive"] }
serde_json = "1.0.55"
-tokio = { version = "1.1", features = ["sync"] }
+itertools = "0.9.0"
+tokio = { version = "1.1", features = ["sync", "time", "macros", "rt-multi-thread"] }
+tokio-stream = { version = "0.1" }
tracing = "0.1.13"
tracing-subscriber = "0.2.4"
-near-indexer = { git = "https://github.com/near/nearcore", rev = "25b000ae4dd9fe784695d07a3f2e99d82a6f10bd" }
+near-lake-framework = "0.4.0"
```
```diff title=src/configs.rs
--- a/src/configs.rs
+++ b/src/configs.rs
@@ -1,99 +1,50 @@
-use clap::Clap;
+use clap::Parser;
use tracing_subscriber::EnvFilter;
/// NEAR Indexer Example
/// Watches for stream of blocks from the chain
-#[derive(Clap, Debug)]
+#[derive(Parser, Debug, Clone)]
#[clap(version = "0.1", author = "Near Inc. <hello@nearprotocol.com>")]
pub(crate) struct Opts {
- /// Sets a custom config dir. Defaults to ~/.near/
- #[clap(short, long)]
- pub home_dir: Option<std::path::PathBuf>,
- #[clap(subcommand)]
- pub subcmd: SubCommand,
-}
-
-#[derive(Clap, Debug)]
-pub(crate) enum SubCommand {
- /// Run NEAR Indexer Example. Start observe the network
- Run(RunArgs),
- /// Initialize necessary configs
- Init(InitConfigArgs),
-}
-
-#[derive(Clap, Debug)]
-pub(crate) struct RunArgs {
+ /// block height to start indexing from
+ #[clap(long)]
+ pub block_height: u64,
/// account ids to watch for
#[clap(long)]
pub accounts: String,
+ #[clap(subcommand)]
+ pub subcmd: SubCommand,
}
-#[derive(Clap, Debug)]
-pub(crate) struct InitConfigArgs {
- /// chain/network id (localnet, testnet, devnet, betanet)
- #[clap(short, long)]
- pub chain_id: Option<String>,
- /// Account ID for the validator key
- #[clap(long)]
- pub account_id: Option<String>,
- /// Specify private key generated from seed (TESTING ONLY)
- #[clap(long)]
- pub test_seed: Option<String>,
- /// Number of shards to initialize the chain with
- #[clap(short, long, default_value = "1")]
- pub num_shards: u64,
- /// Makes block production fast (TESTING ONLY)
- #[clap(short, long)]
- pub fast: bool,
- /// Genesis file to use when initialize testnet (including downloading)
- #[clap(short, long)]
- pub genesis: Option<String>,
- /// Download the verified NEAR genesis file automatically.
- #[clap(long)]
- pub download_genesis: bool,
- /// Specify a custom download URL for the genesis file.
- #[clap(long)]
- pub download_genesis_url: Option<String>,
- /// Download the verified NEAR config file automtically.
- #[clap(long)]
- pub download_config: bool,
- /// Specify a custom download URL for the config file.
- #[clap(long)]
- pub download_config_url: Option<String>,
- /// Specify the boot nodes to bootstrap the network
- #[clap(long)]
- pub boot_nodes: Option<String>,
- /// Specify a custom max_gas_burnt_view limit.
- #[clap(long)]
- pub max_gas_burnt_view: Option<u64>,
+#[derive(Parser, Debug, Clone)]
+pub(crate) enum SubCommand {
+ Mainnet,
+ Testnet,
}
pub(crate) fn init_logging() {
- let env_filter = EnvFilter::new(
- "tokio_reactor=info,near=info,stats=info,telemetry=info,indexer_example=info,indexer=info,near-performance-metrics=info",
- );
+ let env_filter = EnvFilter::new("near_lake_framework=info");
tracing_subscriber::fmt::Subscriber::builder()
.with_env_filter(env_filter)
.with_writer(std::io::stderr)
.init();
}
-impl From<InitConfigArgs> for near_indexer::InitConfigArgs {
- fn from(config_args: InitConfigArgs) -> Self {
- Self {
- chain_id: config_args.chain_id,
- account_id: config_args.account_id,
- test_seed: config_args.test_seed,
- num_shards: config_args.num_shards,
- fast: config_args.fast,
- genesis: config_args.genesis,
- download_genesis: config_args.download_genesis,
- download_genesis_url: config_args.download_genesis_url,
- download_config: config_args.download_config,
- download_config_url: config_args.download_config_url,
- boot_nodes: config_args.boot_nodes,
- max_gas_burnt_view: config_args.max_gas_burnt_view,
- }
+impl From<Opts> for near_lake_framework::LakeConfig {
+ fn from(opts: Opts) -> Self {
+ let mut lake_config =
+ near_lake_framework::LakeConfigBuilder::default().start_block_height(opts.block_height);
+
+ match &opts.subcmd {
+ SubCommand::Mainnet => {
+ lake_config = lake_config.mainnet();
+ }
+ SubCommand::Testnet => {
+ lake_config = lake_config.testnet();
+ }
+ };
+
+ lake_config.build().expect("Failed to build LakeConfig")
}
}
```
```diff title=src/main.rs
--- a/src/main.rs
+++ b/src/main.rs
@@ -2,11 +2,14 @@
use std::collections::{HashMap, HashSet};
-use clap::Clap;
+use clap::Parser;
use tokio::sync::mpsc;
use tracing::info;
-use configs::{init_logging, Opts, SubCommand};
+use near_lake_framework::near_indexer_primitives;
+use near_lake_framework::LakeConfig;
+
+use configs::{init_logging, Opts};
mod configs;
@@ -15,60 +18,34 @@
/// We want to catch all *successful* transactions sent to one of the accounts from the list.
/// In the demo we'll just look for them and log them but it might and probably should be extended based on your needs.
-fn main() {
- // We use it to automatically search the for root certificates to perform HTTPS calls
- // (sending telemetry and downloading genesis)
- openssl_probe::init_ssl_cert_env_vars();
+#[tokio::main]
+async fn main() -> Result<(), tokio::io::Error> {
init_logging();
let opts: Opts = Opts::parse();
- let home_dir = opts.home_dir.unwrap_or_else(near_indexer::get_default_home);
+ let config: LakeConfig = opts.clone().into();
- match opts.subcmd {
- SubCommand::Run(args) => {
- // Create the Vec of AccountId from the provided ``--accounts`` to pass it to `listen_blocks`
- let watching_list = args
- .accounts
- .split(',')
- .map(|elem| {
- near_indexer::near_primitives::types::AccountId::from_str(elem)
- .expect("AccountId is invalid")
- })
- .collect();
-
- // Inform about indexer is being started and what accounts we're watching for
- eprintln!(
- "Starting indexer transaction watcher for accounts: \n {:#?}",
- &args.accounts
- );
-
- // Instantiate IndexerConfig with hardcoded parameters
- let indexer_config = near_indexer::IndexerConfig {
- home_dir,
- sync_mode: near_indexer::SyncModeEnum::FromInterruption,
- await_for_node_synced: near_indexer::AwaitForNodeSyncedEnum::WaitForFullSync,
- };
+ let (_, stream) = near_lake_framework::streamer(config);
- // Boilerplate code to start the indexer itself
- let sys = actix::System::new();
- sys.block_on(async move {
- eprintln!("Actix");
- let indexer = near_indexer::Indexer::new(indexer_config);
- let stream = indexer.streamer();
- actix::spawn(listen_blocks(stream, watching_list));
- });
- sys.run().unwrap();
- }
- SubCommand::Init(config) => near_indexer::indexer_init_configs(&home_dir, config.into()),
- }
+ let watching_list = opts
+ .accounts
+ .split(',')
+ .map(|elem| {
+ near_indexer_primitives::types::AccountId::from_str(elem).expect("AccountId is invalid")
+ })
+ .collect();
+
+ listen_blocks(stream, watching_list).await;
+
+ Ok(())
}
/// The main listener function the will be reading the stream of blocks `StreamerMessage`
/// and perform necessary checks
async fn listen_blocks(
- mut stream: mpsc::Receiver<near_indexer::StreamerMessage>,
- watching_list: Vec<near_indexer::near_primitives::types::AccountId>,
+ mut stream: mpsc::Receiver<near_indexer_primitives::StreamerMessage>,
+ watching_list: Vec<near_indexer_primitives::types::AccountId>,
) {
eprintln!("listen_blocks");
// This will be a map of correspondence between transactions and receipts
@@ -120,7 +97,7 @@
&execution_outcome.receipt.receiver_id,
execution_outcome.execution_outcome.outcome.status
);
- if let near_indexer::near_primitives::views::ReceiptEnumView::Action {
+ if let near_indexer_primitives::views::ReceiptEnumView::Action {
signer_id,
..
} = &execution_outcome.receipt.receipt
@@ -128,19 +105,20 @@
eprintln!("{}", signer_id);
}
- if let near_indexer::near_primitives::views::ReceiptEnumView::Action {
- actions,
- ..
+ if let near_indexer_primitives::views::ReceiptEnumView::Action {
+ actions, ..
} = execution_outcome.receipt.receipt
{
for action in actions.iter() {
- if let near_indexer::near_primitives::views::ActionView::FunctionCall {
+ if let near_indexer_primitives::views::ActionView::FunctionCall {
args,
..
} = action
{
if let Ok(decoded_args) = base64::decode(args) {
- if let Ok(args_json) = serde_json::from_slice::<serde_json::Value>(&decoded_args) {
+ if let Ok(args_json) =
+ serde_json::from_slice::<serde_json::Value>(&decoded_args)
+ {
eprintln!("{:#?}", args_json);
}
}
@@ -156,8 +134,8 @@
}
fn is_tx_receiver_watched(
- tx: &near_indexer::IndexerTransactionWithOutcome,
- watching_list: &[near_indexer::near_primitives::types::AccountId],
+ tx: &near_indexer_primitives::IndexerTransactionWithOutcome,
+ watching_list: &[near_indexer_primitives::types::AccountId],
) -> bool {
watching_list.contains(&tx.transaction.receiver_id)
}
```
---
title: DApps
description: DApps are unique to blockchain.
---
# dApps
Decentralized Applications (dApps) are applications developed for blockchains.
They combine smart contracts with a user-friendly frontend to make it easier to interact with smart contracts.
dApps are decentralized, meaning they typically aren't controlled by a single authority.
There is a wide variety of dApps for different purposes, including gaming, social media, and finance.
To find dApps on NEAR, take a look at [Awesome NEAR](https://awesomenear.com/), a community owned resource that lists dApps available on NEAR.
If you're interested in building dApps on NEAR, have a look at [our tutorial](https://learnnear.club/how-to-build-on-near-starting-guide/) or dive right into our [developer documentation](https://docs.near.org/).
---
id: enumeration
title: Enumeration
sidebar_label: Enumeration
---
import {Github} from "@site/src/components/codetabs"
In the previous tutorials, you looked at ways to integrate the minting functionality into a skeleton smart contract. In order to get your NFTs to show in the wallet, you also had to deploy a patch fix that implemented one of the enumeration methods. In this tutorial, you'll expand on and finish the rest of the enumeration methods as per the [standard](https://nomicon.io/Standards/Tokens/NonFungibleToken/Enumeration).
Now you'll extend the NFT smart contract and add a couple of enumeration methods that can be used to return the contract's state.
---
## Introduction
As mentioned in the [Upgrade a Contract](/tutorials/nfts/upgrade-contract/) tutorial, you can deploy patches and fixes to smart contracts. This time, you'll use that knowledge to implement the `nft_total_supply`, `nft_tokens` and `nft_supply_for_owner` enumeration functions.
---
## Modifications to the contract
Let's start by opening the `src/enumeration.rs` file and locating the empty `nft_total_supply` function.
**nft_total_supply**
This function should return the total number of NFTs stored on the contract. You can achieve this by returning the length of the `nft_metadata_by_id` data structure.
<Github language="rust" start="5" end="9" url="https://github.com/near-examples/nft-tutorial/blob/main/nft-contract-basic/src/enumeration.rs" />
**nft_tokens**
This function should return a paginated list of `JsonTokens` that are stored on the contract regardless of their owners.
If the user provides a `from_index` parameter, you should use that as the starting point from which to iterate through the tokens; otherwise, iteration should start from the beginning. Likewise, if the user provides a `limit` parameter, the function should stop after reaching either the limit or the end of the list.
:::tip
Rust has useful methods for pagination that allow you to skip to a starting index and take the first `n` elements of an iterator.
:::
<Github language="rust" start="11" end="26" url="https://github.com/near-examples/nft-tutorial/blob/main/nft-contract-basic/src/enumeration.rs" />
**nft_supply_for_owner**
This function should look for all the non-fungible tokens for a user-defined owner, and return the length of the resulting set.
If there isn't a set of tokens for the provided `AccountId`, the function shall return `0`.
<Github language="rust" start="28" end="43" url="https://github.com/near-examples/nft-tutorial/blob/main/nft-contract-basic/src/enumeration.rs" />
Next, you can use the CLI to query these new methods and validate that they work correctly.
---
## Redeploying the contract {#redeploying-contract}
Now that you've implemented the necessary enumeration logic, it's time to build and re-deploy the contract to your account. Using cargo-near, deploy the contract as you did in the previous tutorials:
```bash
cargo near deploy $NFT_CONTRACT_ID without-init-call network-config testnet sign-with-keychain send
```
---
## Enumerating tokens
Once the updated contract has been redeployed, you can test and see if these new functions work as expected.
### NFT tokens
Let's query for a list of non-fungible tokens on the contract. Use the following command to query for the information of up to 50 NFTs starting from the 10th item:
```bash
near view $NFT_CONTRACT_ID nft_tokens '{"from_index": "10", "limit": 50}'
```
This command should return an output similar to the following:
<details>
<summary>Example response: </summary>
<p>
```json
[]
```
</p>
</details>
<hr class="subsection" />
### Tokens by owner
To get the total supply of NFTs owned by the `goteam.testnet` account, call the `nft_supply_for_owner` function and set the `account_id` parameter:
```bash
near view $NFT_CONTRACT_ID nft_supply_for_owner '{"account_id": "goteam.testnet"}'
```
This should return an output similar to the following:
<details>
<summary>Example response: </summary>
<p>
```json
0
```
</p>
</details>
---
## Conclusion
In this tutorial, you have added three [new enumeration functions](/tutorials/nfts/enumeration#modifications-to-the-contract), and now you have a basic NFT smart contract with minting and enumeration methods in place. After implementing these modifications, you redeployed the smart contract and tested the functions using the CLI.
In the [next tutorial](/tutorials/nfts/core), you'll implement the core functions needed to allow users to transfer the minted tokens.
:::note Versioning for this article
At the time of this writing, this example works with the following versions:
- near-cli: `4.0.13`
- cargo-near: `0.6.1`
- NFT standard: [NEP171](https://nomicon.io/Standards/Tokens/NonFungibleToken/Core), version `1.1.0`
- Enumeration standard: [NEP181](https://nomicon.io/Standards/Tokens/NonFungibleToken/Enumeration), version `1.0.0`
:::
```js
const result = Near.view(
"nearweek-news-contribution.sputnik-dao.near",
"get_proposals",
{ from_index: 9262, limit: 2 }
);
```
<details>
<summary>Example response</summary>
<p>
```js
[
{
id: 9262,
proposer: 'pasternag.near',
description: 'NEAR, a top non-EVM blockchain, has gone live on Router’s Testnet Mandara. With Router Nitro, our flagship dApp, users in the NEAR ecosystem can now transfer test tokens to and from NEAR onto other supported chains. $$$$https://twitter.com/routerprotocol/status/1727732303491961232',
kind: {
Transfer: {
token_id: '',
receiver_id: 'pasternag.near',
amount: '500000000000000000000000',
msg: null
}
},
status: 'Approved',
vote_counts: { council: [ 1, 0, 0 ] },
votes: { 'brzk-93444.near': 'Approve' },
submission_time: '1700828277659425683'
},
{
id: 9263,
proposer: 'fittedn.near',
description: 'How to deploy BOS component$$$$https://twitter.com/BitkubAcademy/status/1728003163318563025?t=PiN6pwS380T1N4JuQXSONA&s=19',
kind: {
Transfer: {
token_id: '',
receiver_id: 'fittedn.near',
amount: '500000000000000000000000',
msg: null
}
},
status: 'InProgress',
vote_counts: { 'Whitelisted Members': [ 1, 0, 0 ] },
votes: { 'trendheo.near': 'Approve' },
submission_time: '1700832601849419123'
}
]
```
</p>
</details>
---
id: validator-bootcamp
title: NEAR Validator Bootcamp
sidebar_label: NEAR Validator Bootcamp 🚀
sidebar_position: 2
description: NEAR Validator Bootcamp
---
# NEAR Validator Bootcamp 🚀
---
### Validator Onboarding FAQ's
***What’s the current protocol upgrade that will increase the number of validators on Mainnet?***
The next upgrade to increase the number of mainnet validators will introduce Chunk-Only Producers, and is currently slated for Q3 2022.
***How do I join NEAR as a validator on the Mainnet? What steps do I need to take?***
1. To find out more about how to become a validator, head to https://near.org/validators/
2. Join the [Open Shards Alliance Server](https://discord.com/invite/t9Kgbvf) to find out more on how to run nodes and participate on the guildnet.
***What are the future plans for NEAR Protocol?***
Learn about the protocol roadmap here https://near.org/blog/near-launches-simple-nightshade-the-first-step-towards-a-sharded-blockchain/.
---
## LESSON 1 - OVERVIEW OF NEAR
#### Blockchain Technology
Blockchain is a new technology originally developed by Bitcoin. The term most often used to describe blockchain technology is a “public ledger”. An easy way to describe it, is that it’s similar to your banking statement that keeps track of balances, debits, and credits, with several key differences:
- Limitless – Transactions can be added as long as the network is up and running
- Unchangeable – Immutable: once written (validated), it cannot be modified
- Decentralized – Maintained by individual nodes called validators worldwide
- Verifiable – Transactions are verified via consensus by decentralized validators via a Proof of Stake (POS) or Proof of Work (POW) algorithm
#### Smart Contracts
Not all blockchains are equal. While some only allow the processing of transactions (balances, debits, and credits), blockchains like NEAR and Ethereum offer another layer of functionality that enables programs to run on the blockchain known as smart contracts.
An easy way to think about it is that “smart contract” enabled blockchains are like supercomputers. They allow for users and developers to pay a fee to use CPU/Memory (resources), and Disk (storage) on a global scale.
However, it does not make sense to store everything on the blockchain as it would be too expensive. Web front-ends (HTML, CSS, and JS) are served from web hosting, Github, or Skynet, and images are often stored on InterPlanetary File Systems (IPFS) like Filecoin or Arweave.
#### Sharding
As the founding blockchains, Bitcoin and Ethereum, began to grow, limitations presented themselves. They could only manage to process a specific number of transactions in a block/space of time. This created severe bottlenecks with processing, wait times, and also significantly increased the transaction fee required to have transactions processed.
Sharding is a technological advancement that allows blockchains to scale and to provide more throughput, while decreasing wait times and keeping transaction fees low. It accomplishes this by creating additional shards as the use of the network grows.
#### About NEAR
NEAR Protocol was designed with a best-of-breed approach in mind. It focuses on fixing the bottlenecks of the earlier blockchains while also enhancing the user experience to enable broader adoption of blockchain technology by existing Web2 users and developers. NEAR is a sharded Proof of Stake (POS) blockchain.
#### Key Features
- Award-Winning Team – Top Programmers in the world
- Proof of Stake (POS)
- Unlimited Shards
- User Experience (UX)
- Developer Experience (DX)
- Rainbow Bridge to ETH – Bridge assets to and from ETH
- Aurora – Run native ETH apps built-in solidity
#### Technology Stack
- Rust – Primary Smart Contract Language
- AssemblyScript – Alternate Smart Contract Language (similar to TypeScript)
- NodeJS – Tooling
- Javascript / React / Angular – Frontends
---
## LESSON 2 - NEAR-CLI
NEAR-CLI is a command-line interface that communicates with the NEAR blockchain via remote procedure calls (RPC):
* Setup and installation of NEAR-CLI
* View Validator Stats
> Note: For security reasons, it is recommended that NEAR-CLI be installed on a different computer than your validator node and that no full access keys be kept on your validator node.
### Setup NEAR-CLI
First, let's make sure the Debian machine is up-to-date.
```
sudo apt update && sudo apt upgrade -y
```
#### Install developer tools, Node.js, and npm
First, we will start with installing `Node.js` and `npm`:
```
curl -sL https://deb.nodesource.com/setup_17.x | sudo -E bash -
sudo apt install build-essential nodejs
PATH="$PATH"
```
Check `Node.js` and `npm` version:
```
node -v
```
> v17.x.x
```
npm -v
```
> 8.x.x
#### Install NEAR-CLI
Here's the GitHub repository for NEAR-CLI: https://github.com/near/near-cli. Unless you are logged in as root (which is not recommended), you will need to use `sudo` to install NEAR-CLI so that the near binary is saved to `/usr/local/bin`:
```
sudo npm install -g near-cli
```
### Validator Stats
Now that NEAR-CLI is installed, let's test out the CLI and use the following commands to interact with the blockchain as well as to view validator stats. There are three reports used to monitor validator status:
##### Environment
The environment will need to be set each time a new shell is launched to select the correct network.
Networks:
- GuildNet
- TestNet
- MainNet
Command:
```
export NEAR_ENV=<network>
```
Replace `<network>` with `guildnet`, `testnet`, or `mainnet`.
##### Proposals
A proposal by a validator indicates they would like to enter the validator set. In order for a proposal to be accepted, it must meet the minimum seat price.
Command:
```
near proposals
```
##### Validators Current
This shows a list of active validators in the current epoch, the number of blocks produced, the number of blocks expected, and the online rate. Use it to monitor whether a validator is having issues.
Command:
```
near validators current
```
##### Validators Next
This shows validators whose proposal was accepted one epoch ago, and that will enter the validator set in the next epoch.
Command:
```
near validators next
```
---
## LESSON 3 - SETUP UP A VALIDATOR
In this lesson you will learn about:
* Setting up a Node
* Difference between MainNet and TestNet
* Compiling Nearcore for MainNet
* Genesis file (genesis.json)
* Config file (config.json)
### Server Requirements
For Block Producing Validators, please refer to the [`Validator Hardware`](/validator/hardware)
For Chunk-Only Producers (an upcoming role on NEAR), please see the hardware requirement below:
| Hardware | Chunk-Only Producer Specifications |
| -------------- | --------------------------------------------------------------- |
| CPU | 4-Core CPU with AVX support |
| RAM | 8GB DDR4 |
| Storage | 500GB SSD |
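Before provisioning, you can sanity-check a machine against these minimums from the shell. This is a rough sketch using standard Linux tools (`nproc`, `free`, `df`), not an exhaustive hardware audit:

```bash
# Rough hardware sanity check against the Chunk-Only Producer minimums
echo "CPU cores: $(nproc)"                        # want 4+
free -g | awk '/Mem:/ {print "RAM (GB): " $2}'    # want 8+
df -h / | tail -n 1 | awk '{print "Root disk size/avail: " $2 " / " $4}'   # want ~500GB SSD
```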
### Install required software & set the configuration
#### Prerequisites:
Before you start, you may want to confirm that your machine has the right CPU features. For more hardware-specific information, please take a look at the [Hardware requirements](/validator/hardware).
```
lscpu | grep -P '(?=.*avx )(?=.*sse4.2 )(?=.*cx16 )(?=.*popcnt )' > /dev/null \
&& echo "Supported" \
|| echo "Not supported"
```
> Supported
Next, let's make sure the Debian machine is up-to-date.
```
sudo apt update && sudo apt upgrade -y
```
#### Install developer tools:
```
sudo apt install -y git binutils-dev libcurl4-openssl-dev zlib1g-dev libdw-dev libiberty-dev cmake gcc g++ python docker.io protobuf-compiler libssl-dev pkg-config clang llvm cargo
```
#### Install Python pip:
```
sudo apt install python3-pip
```
#### Set the configuration:
```
USER_BASE_BIN=$(python3 -m site --user-base)/bin
export PATH="$USER_BASE_BIN:$PATH"
```
#### Install Building env
```
sudo apt install clang build-essential make
```
#### Install Rust & Cargo
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
You will see the following:
![img](/images/rust.png)
Press 1 and press enter.
#### Source the environment
```
source $HOME/.cargo/env
```
### Clone `nearcore` project from GitHub
First, clone the [`nearcore` repository](https://github.com/near/nearcore).
```
git clone https://github.com/near/nearcore
cd nearcore
git fetch origin --tags
```
Check out the branch you need. The latest unstable release is recommended if you are running on testnet, and the latest stable release if you are running on mainnet. Please check the [releases page on GitHub](https://github.com/near/nearcore/releases).
```
git checkout tags/<version> -b mynode
```
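For example, the newest stable (non-RC) tag can be picked out of `git tag` output with a small pipeline. The version numbers below are illustrative sample data, not current releases:

```bash
# Pick the newest non-RC tag; in a real clone, replace `printf ...` with `git tag -l`
printf '1.26.0\n1.26.1-rc.1\n1.26.1\n' | grep -v -- '-rc' | sort -V | tail -n 1
```

This prints `1.26.1`, which you would then pass to `git checkout tags/<version> -b mynode`.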
### Compile `nearcore` binary
In the `nearcore` folder run the following commands:
```
make neard
```
The binary path is `target/release/neard`. If you run into issues, it is possible that the `cargo` command is not found; make sure it is installed (`sudo apt install cargo`). Compiling the `nearcore` binary may take a while.
### Initialize working directory
In order to work properly, the NEAR node requires a working directory and a couple of configuration files. Generate the initial required working directory by running:
```
./target/release/neard --home ~/.near init --chain-id <network> --download-genesis
```
![img](/images/initialize.png)
This command will create the directory structure and generate `config.json`, `node_key.json`, and `genesis.json` for the network you have passed. The genesis file for `testnet` is big (6GB+), so this command will run for a while with no progress shown.
- `config.json` - Configuration parameters responsible for how the node will work. The config.json contains the information a node needs to run on the network, how to communicate with peers, and how to reach consensus. Although some options are configurable, in general validators have opted to use the default config.json provided.
- `genesis.json` - A file with all the data the network started with at genesis. This contains initial accounts, contracts, access keys, and other records which represent the initial state of the blockchain. The genesis.json file is a snapshot of the network state at a point in time: it contains accounts, balances, active validators, and other information about the network. On MainNet, it is the state of what the network looked like at launch. On testnet or guildnet, it can include a more intermediate state if the network was hard-forked at one point.
- `node_key.json` - A file which contains a public and private key for the node. Also includes an optional `account_id` parameter which is required to run a validator node (not covered in this doc).
- `data/` - A folder in which a NEAR node will write its state.
### Replace the `config.json`
From the generated `config.json`, there are two parameters to modify:
- `boot_nodes`: If you did not specify boot nodes during init, the generated `config.json` shows an empty array, so you will need to replace it with a full one specifying the boot nodes.
- `tracked_shards`: In the generated `config.json`, this field is an empty array. You will have to replace it with `"tracked_shards": [0]`.
```
rm ~/.near/config.json
wget -O ~/.near/config.json https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/<network>/config.json
```
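If you would rather keep the rest of your generated `config.json` and only change `tracked_shards`, a `sed` one-liner works too. This sketch runs on a stand-in file; on a real node the target would be `~/.near/config.json`:

```bash
# Demo on a stand-in copy; on a real node, target ~/.near/config.json instead
printf '{\n  "tracked_shards": []\n}\n' > /tmp/demo-config.json
sed -i 's/"tracked_shards": \[\]/"tracked_shards": [0]/' /tmp/demo-config.json
cat /tmp/demo-config.json
```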
### Get data backup
The node is ready to be started. However, you must first sync up with the network. This means your node needs to download all the headers and blocks that other nodes in the network already have.
First, please install AWS CLI:
```
sudo apt-get install awscli -y
```
Then, download the snapshot using the AWS CLI:
```
aws s3 --no-sign-request cp s3://near-protocol-public/backups/<testnet|mainnet>/rpc/latest .
LATEST=$(cat latest)
aws s3 --no-sign-request cp --recursive s3://near-protocol-public/backups/<testnet|mainnet>/rpc/$LATEST ~/.near/data
```
NOTE: The .tar file is around 147GB (and will grow), so make sure you have enough disk space to unpack it inside the data folder.
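Before downloading, it can save time to confirm the data volume has enough headroom. A minimal sketch (the 200GB threshold is an assumption that leaves room for growth):

```bash
# Check free space on the home volume before unpacking the snapshot
NEEDED_GB=200   # assumed headroom: ~147GB snapshot plus growth
FREE_GB=$(df -BG --output=avail "$HOME" | tail -n 1 | tr -dc '0-9')
if [ "$FREE_GB" -ge "$NEEDED_GB" ]; then echo "enough space"; else echo "free up disk first"; fi
```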
### Run the node
To start your node simply run the following command:
```
cd nearcore
./target/release/neard --home ~/.near run
```
![img](/images/download.png)
The node is now running, and you can see log output in your console. Your node should find peers, download headers to 100%, and then download blocks.
----
#### Using NearUp
You can set up a node using neard on Mainnet and Testnet. On Guildnet, you have the option to use NearUp to set the node. However, NearUp is not recommended or supported for Mainnet. We recommend that you use neard consistently on Guildnet, Testnet, and Mainnet.
However, if you choose to use NearUp, NearUp will download the necessary binaries and files to get up and running. You just need to provide the network to run and the staking pool id.
* Install NearUp:
```
pip3 install --user nearup
```
* Install latest NearUp Version:
```
pip3 install --user --upgrade nearup
```
#### Create a wallet
- MainNet: https://wallet.near.org/
- TestNet: https://wallet.testnet.near.org/
- GuildNet: `https://wallet.openshards.io/`
#### Authorize Wallet Locally
A full access key needs to be installed locally to be able to send transactions via NEAR-CLI.
For Guildnet
```
export NEAR_ENV=guildnet
```
* You need to run this command:
```
near login
```
> Note: This command launches a web browser allowing for the authorization of a full access key to be copied locally.
1 – Copy the link in your browser
![img](/images/1.png)
2 – Grant Access to Near CLI
![img](/images/3.png)
3 – After Grant, you will see a page like this, go back to console
![img](/images/4.png)
4 – Enter your wallet and press Enter
![img](/images/5.png)
For Guildnet
* Download the latest genesis and config files:
```
cd ~/.near/guildnet
wget -c https://s3.us-east-2.amazonaws.com/build.openshards.io/nearcore-deploy/guildnet/config.json
wget -c https://s3.us-east-2.amazonaws.com/build.openshards.io/nearcore-deploy/guildnet/genesis.json
```
* Launch this command to set the Near guildnet environment:
```
export NEAR_ENV=guildnet
```
You can also run this command to make the Near guildnet environment persistent:
```
echo 'export NEAR_ENV=guildnet' >> ~/.bashrc
```
* Running command is:
```
nearup run $NEAR_ENV --account-id <staking pool id>
```
Where the account id is `xx.stake.guildnet`, and `xx` is your pool name, for example `bootcamp.stake.guildnet`.
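In other words, the full account id is just your pool name joined to the fixed guildnet factory suffix. A small illustration with a hypothetical pool name:

```bash
# Hypothetical pool name; the factory suffix is fixed for guildnet
POOL_NAME="bootcamp"
echo "${POOL_NAME}.stake.guildnet"
```

This prints `bootcamp.stake.guildnet`.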
For Testnet
* Download the latest genesis, config files:
```
cd ~/.near/testnet
wget -c https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/testnet/config.json
wget -c https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/testnet/genesis.json
```
* Download the latest snapshot:
```
mkdir ~/.near/testnet/data
cd ~/.near/testnet
wget -c https://near-protocol-public.s3.amazonaws.com/backups/testnet/rpc/data.tar -O - | tar -xf -
```
* Launch this command to set the Near testnet environment:
```
export NEAR_ENV=testnet
```
* You can also run this command to make the Near testnet environment persistent:
```
echo 'export NEAR_ENV=testnet' >> ~/.bashrc
```
* Running command is:
```
nearup run $NEAR_ENV --account-id <staking pool id>
```
Where the account id is `xx.pool.f863973.m0`, and `xx` is your pool name, for example `bootcamp.pool.f863973.m0`.
On the first run, NearUp will ask you to enter a staking pool id. Provide the staking pool id set up previously, in the form {pool id}.{staking pool factory}.
> **Note: This is the Chicken and the Egg.
> You have not created the staking pool, but need to provide the name.**
#### Check the validator_key.json
* Run the following command:
For Guildnet
```
cat ~/.near/guildnet/validator_key.json
```
For Testnet
```
cat ~/.near/testnet/validator_key.json
```
> Note: If a validator_key.json is not present, follow these steps to create one
Create a validator_key.json for Guildnet
* Generate the Key file:
```
near generate-key <pool_id>
```
* Copy the file generated to Guildnet folder:
Make sure to replace YOUR_WALLET by your accountId
```
cp ~/.near-credentials/guildnet/<YOUR_WALLET>.json ~/.near/guildnet/validator_key.json
```
* Edit “account_id” => `xx.stake.guildnet`, where xx is your PoolName
* Change `private_key` to `secret_key`
> Note: The account_id must match the staking pool contract name or you will not be able to sign blocks.\
File content must be in the following pattern:
```
{
"account_id": "xx.stake.guildnet",
"public_key": "ed25519:HeaBJ3xLgvZacQWmEctTeUqyfSU4SDEnEwckWxd92W2G",
"secret_key": "ed25519:****"
}
```
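Since a mismatched `account_id` silently prevents block signing, it can be worth scripting a quick check. A sketch, assuming the guildnet paths above and a hypothetical pool name:

```bash
# Sanity-check that validator_key.json carries the expected pool account (hypothetical name)
POOL_ID="xx.stake.guildnet"
KEY_FILE="$HOME/.near/guildnet/validator_key.json"
if grep -q "\"account_id\": \"$POOL_ID\"" "$KEY_FILE" 2>/dev/null; then
  echo "account_id matches pool"
else
  echo "account_id mismatch or file missing"
fi
```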
Create a `validator_key.json` for Testnet
* Generate the Key file:
```
near generate-key <pool_id>
```
* Copy the file generated to Testnet folder:
Make sure to replace YOUR_WALLET by your accountId
```
cp ~/.near-credentials/testnet/YOUR_WALLET.json ~/.near/testnet/validator_key.json
```
* Edit “account_id” => xx.pool.f863973.m0, where xx is your PoolName
* Change `private_key` to `secret_key`
> Note: The account_id must match the staking pool contract name or you will not be able to sign blocks.\
File content must be in the following pattern:
```
{
"account_id": "xx.pool.f863973.m0",
"public_key": "ed25519:HeaBJ3xLgvZacQWmEctTeUqyfSU4SDEnEwckWxd92W2G",
"secret_key": "ed25519:****"
}
```
#### Step 8 – Check all files were generated
Command for Guildnet
```
ls ~/.near/guildnet
```
Command for Testnet
```
ls ~/.near/testnet
```
You should have: **validator_key.json node_key.json config.json data genesis.json**
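The same check can be scripted so it is easy to rerun after changes. A sketch using the testnet paths (swap in `guildnet` as needed):

```bash
# Report which of the required files are present in the working directory
for f in validator_key.json node_key.json config.json genesis.json; do
  if [ -f "$HOME/.near/testnet/$f" ]; then echo "ok: $f"; else echo "missing: $f"; fi
done
[ -d "$HOME/.near/testnet/data" ] && echo "ok: data/" || echo "missing: data/"
```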
# Setup using NEARCore (MainNet)
RECOMMENDED FOR MAINNET
#### Step 1 – Installation required software & set the configuration
* Before you start, you might want to ensure your system is up to date.
```
sudo apt update && sudo apt upgrade -y
```
* Install Python, Git, and curl
```
sudo apt install python3 git curl
```
* Install Building env
```
sudo apt install clang build-essential make
```
* Install Rust & Cargo
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
Press 1 and press enter
![img](/images/rust.png)
* Source the environment
```
source $HOME/.cargo/env
```
* Clone the NEARCore Repo
```
git clone https://github.com/near/nearcore.git
```
* Set environment to the latest release tag. For the latest release tag, please check here: https://github.com/near/nearcore/releases. Note: RC tags are for Testnet only.
```
export NEAR_RELEASE_VERSION=1.26.1
```
```
cd nearcore
git checkout $NEAR_RELEASE_VERSION
make release
```
* Install Nodejs and NPM
```
curl -sL https://deb.nodesource.com/setup_17.x | sudo -E bash -
sudo apt install build-essential nodejs
PATH="$PATH"
```
* Install Near CLI
Once NodeJS and npm are installed, you can install NEAR-CLI.
Unless you are logged in as root (which is not recommended), you will need to use `sudo` to install NEAR-CLI so that the near binary is saved to `/usr/local/bin`:
```
sudo npm install -g near-cli
```
* Launch this command to set the Near Mainnet environment:
```
export NEAR_ENV=mainnet
```
* You can also run this command to make the Near mainnet environment persistent:
```
echo 'export NEAR_ENV=mainnet' >> ~/.bashrc
```
#### Step 2 – Create a wallet
MainNet: https://wallet.near.org/
#### Step 3 – Authorize Wallet Locally
A full access key needs to be installed locally to be able to send transactions via NEAR-CLI.
* You need to run this command:
```
near login
```
> Note: This command launches a web browser allowing for the authorization of a full access key to be copied locally.
1 – Copy the link in your browser
![img](/images/1.png)
2 – Grant Access to Near CLI
![img](/images/3.png)
3 – After Grant, you will see a page like this, go back to console
![img](/images/4.png)
4 – Enter your wallet and press Enter
![img](/images/5.png)
#### Step 4 – Initialize & Start the Node
* Download the latest genesis, config files:
```
mkdir ~/.near
cd ~/.near
wget -c https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/mainnet/genesis.json
wget -c https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/mainnet/config.json
```
* Download the latest snapshot from [the snapshot page](/intro/node-data-snapshots).
* Initialize NEAR:
```
target/release/neard init --chain-id="mainnet" --account-id=<staking pool id>
```
##### Create `validator_key.json`
* Generate the Key file:
```
near generate-key <pool_id>
```
* Copy the generated file to the NEAR home folder. Make sure to replace YOUR_WALLET by your accountId:
```
cp ~/.near-credentials/mainnet/YOUR_WALLET.json ~/.near/validator_key.json
```
* Edit “account_id” => xx.poolv1.near, where xx is your PoolName
* Change “private_key” to “secret_key”
> Note: The account_id must match the staking pool contract name or you will not be able to sign blocks.
> File content must be something like:
```
{
  "account_id": "xx.poolv1.near",
  "public_key": "ed25519:HeaBJ3xLgvZacQWmEctTeUqyfSU4SDEnEwckWxd92W2G",
  "secret_key": "ed25519:****"
}
```
Alternatively, generate a key file and edit it by hand:
```
near generate-key <your_accountId>
vi ~/.near/<your_accountId>.json
```
Rename "private_key" to "secret_key", move the file to `~/.near/validator_key.json`, and update "account_id" to be the staking pool id.
> Note: The account_id must match the staking pool contract name or you will not be able to sign blocks.
* Start the Node
```
target/release/neard run
```
* Setup Systemd
Command:
```
sudo vi /etc/systemd/system/neard.service
```
Paste:
```
[Unit]
Description=NEARd Daemon Service
[Service]
Type=simple
User=<USER>
#Group=near
WorkingDirectory=/home/<USER>/.near
ExecStart=/home/<USER>/nearcore/target/release/neard run
StandardOutput=file:/home/<USER>/.near/neard.log
StandardError=file:/home/<USER>/.near/nearderror.log
Restart=on-failure
RestartSec=30
KillSignal=SIGINT
TimeoutStopSec=45
KillMode=mixed
[Install]
WantedBy=multi-user.target
```
> Note: Replace `<USER>` with your username and adjust the paths
Command:
```
sudo systemctl enable neard
```
Command:
```
sudo systemctl start neard
```
If you need to make a change to the service because of an error in the unit file, the systemd configuration has to be reloaded:
```
sudo systemctl daemon-reload
```
##### Watch logs
Command:
```
journalctl -n 100 -f -u neard
```
Make the log output pretty-printed:
Command:
```
sudo apt install ccze
```
View Logs with color
Command:
```
journalctl -n 100 -f -u neard | ccze -A
```
### Becoming a Validator
In order to become a validator and enter the validator set, a minimum set of success criteria must be met:
* The node must be fully synced
* The `validator_key.json` must be in place
* The contract must be initialized with the public_key in `validator_key.json`
* The account_id must be set to the staking pool contract id
* There must be enough delegations to meet the minimum seat price. See the seat price [here](https://explorer.testnet.near.org/nodes/validators).
* A proposal must be submitted by pinging the contract
* Once a proposal is accepted, a validator must wait 2-3 epochs to enter the validator set
* Once in the validator set, the validator must produce greater than 90% of assigned blocks
Check the running status of your validator node. If “Validator” is showing up, your pool is selected in the current validators list.
### Submitting Pool Information (Mainnet Only)
Adding pool information helps delegators and also helps with outreach for upgrades and other important announcements: https://github.com/zavodil/near-pool-details.
The available fields to add are: https://github.com/zavodil/near-pool-details/blob/master/FIELDS.md.
The identifying information that we ask validators to provide is:
- Name
- Description
- URL
- Country and country code
- Email (for support)
- Telegram, Discord, or Twitter
Command:
```
near call name.near update_field '{"pool_id": "<pool_id>.poolv1.near", "name": "url", "value": "https://yoururl.com"}' --accountId=<accountId>.near --gas=200000000000000
```
```
near call name.near update_field '{"pool_id": "<pool_id>.poolv1.near", "name": "twitter", "value": "<twitter>"}' --accountId=<account id>.near --gas=200000000000000
```
```
near view name.near get_all_fields '{"from_index": 0, "limit": 3}'
```
```
near view name.near get_fields_by_pool '{"pool_id": "<pool_id>.poolv1.near"}'
```
---
## LESSON 4 - STAKING POOLS
NEAR uses a staking pool factory with a whitelisted staking contract to ensure delegators’ funds are safe. A staking pool is a smart contract deployed to a NEAR account. In order to run a validator on NEAR, a staking pool must be deployed and integrated into a NEAR validator node; delegators then use a UI or the command line to stake to the pool.
### Deploy a Staking Pool Contract
#### Deploy a Staking Pool
Calls the staking pool factory, creates a new staking pool with the specified name, and deploys it to the indicated accountId.
For Guildnet
```
near call stake.guildnet create_staking_pool '{"staking_pool_id": "<pool id>", "owner_id": "<accountId>", "stake_public_key": "<public key>", "reward_fee_fraction": {"numerator": 5, "denominator": 100}}' --accountId="<accountId>" --amount=30 --gas=300000000000000
```
For Testnet
```
near call pool.f863973.m0 create_staking_pool '{"staking_pool_id": "<pool id>", "owner_id": "<accountId>", "stake_public_key": "<public key>", "reward_fee_fraction": {"numerator": 5, "denominator": 100}}' --accountId="<accountId>" --amount=30 --gas=300000000000000
```
For Mainnet
```
near call poolv1.near create_staking_pool '{"staking_pool_id": "<pool id>", "owner_id": "<accountId>", "stake_public_key": "<public key>", "reward_fee_fraction": {"numerator": 5, "denominator": 100}}' --accountId="<accountId>" --amount=30 --gas=300000000000000
```
From the example above, you need to replace:
* **Pool ID**: The staking pool name; the factory automatically appends its own name to this parameter, creating `{pool_id}.{staking_pool_factory}`
Examples:
- `nearkat.stake.guildnet` for guildnet
- `nearkat.pool.f863973.m0` for testnet
- `nearkat.poolv1.near` for mainnet
* **Owner ID**: The NEAR account that will manage the staking pool. Usually your main NEAR account.
* **Public Key**: The public key in your **validator_key.json** file.
* **5**: The fee the pool will charge (in this case 5/100, i.e. a 5% commission on rewards).
* **Account Id**: The NEAR account deploying the staking pool.
> Be sure to have at least 30 NEAR available; it is the minimum required for storage.
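To double-check what account the factory will create before calling it, you can compose the name yourself. The values `nearkat` and `poolv1.near` below are examples:

```
# The factory creates {pool_id}.{staking_pool_factory}; these values are examples.
pool_id="nearkat"
factory="poolv1.near"
pool_account="${pool_id}.${factory}"
echo "$pool_account"   # nearkat.poolv1.near
```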
To change the pool parameters, such as changing the amount of commission charged to 1% in the example below, use this command:
```
near call <pool_name> update_reward_fee_fraction '{"reward_fee_fraction": {"numerator": 1, "denominator": 100}}' --accountId <account_id> --gas=300000000000000
```
You will see something like this:
![img](/images/pool.png)
If there is a `true` at the end, your pool was created.
**You have now configured your staking pool.**
For Guildnet & Testnet
You can go to the OSA Discord to ask for some guildnet or testnet tokens to become a validator:
https://discord.gg/GrBqK3ZJ2T
#### Configure your staking pool contract
Replace:
* Pool Name – pool_id.staking_pool_factory
* Owner Id
* Public Key
* Reward Fraction
* Account Id
```
near call <pool name> new '{"owner_id": "<owner id>", "stake_public_key": "<public key>", "reward_fee_fraction": {"numerator": <reward fraction>, "denominator": 100}}' --accountId <accountId>
```
#### Manage your staking pool contract
> HINT: Copy/Paste everything after this line into a text editor and use search and replace. Once your pool is deployed, you can issue the commands below:
##### Retrieve the owner ID of the staking pool
Command:
```
near view {pool_id}.{staking_pool_factory} get_owner_id '{}'
```
##### Issue this command to retrieve the public key the network has for your validator
Command:
```
near view {pool_id}.{staking_pool_factory} get_staking_key '{}'
```
##### If the public key does not match you can update the staking key like this (replace the pubkey below with the key in your validator.json file)
```
near call {pool_id}.{staking_pool_factory} update_staking_key '{"stake_public_key": "<public key>"}' --accountId <accountId>
```
### Working with Staking Pools
> NOTE: Your validator must be fully synced before issuing a proposal or depositing funds.
### Proposals
In order to get a validator seat you must first submit a proposal with an appropriate amount of stake. Proposals take effect at epoch +2: if you send a proposal during the current epoch and it is approved, you get the seat when epoch +2 begins. You should submit a proposal every epoch to ensure your seat. To send a proposal we use the ping command. A proposal is also sent whenever a stake or unstake command is sent to the staking pool contract.
To note, a ping also updates the staking balances for your delegators. A ping should be issued each epoch to keep reported rewards current on the pool contract. You could set up a ping using a cron job or use [Cron Cat](https://cron.cat/).
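If you prefer a plain cron job over Cron Cat, an entry along the following lines would ping the pool hourly. The binary path, schedule, and placeholders are illustrative; adjust the interval to your network's epoch length and check the path with `which near`:

```
# m h dom mon dow  command — ping the pool once per hour (illustrative schedule)
0 * * * * NEAR_ENV=mainnet /usr/local/bin/near call <staking_pool_id> ping '{}' --accountId <accountId> --gas=300000000000000
```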
Staking Pools Factories for each network:
* **GuildNet**: stake.guildnet
* **TestNet**: pool.f863973.m0
* **MainNet**: poolv1.near
### Transactions
#### Deposit and Stake NEAR
Command:
```
near call <staking_pool_id> deposit_and_stake --amount <amount> --accountId <accountId> --gas=300000000000000
```
#### Unstake NEAR
Amount in yoctoNEAR.
Run the following command to unstake:
```
near call <staking_pool_id> unstake '{"amount": "<amount yoctoNEAR>"}' --accountId <accountId> --gas=300000000000000
```
To unstake all you can run this one:
```
near call <staking_pool_id> unstake_all --accountId <accountId> --gas=300000000000000
```
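Since the unstake and withdraw amounts are given in yoctoNEAR (1 NEAR = 10^24 yoctoNEAR), and 64-bit shell arithmetic overflows long before 10^24, a string-based conversion helps avoid mistakes. This sketch handles whole-number NEAR amounts only:

```
# Convert a whole-number NEAR amount to yoctoNEAR by appending 24 zeros
# (1 NEAR = 10^24 yoctoNEAR; shell integer arithmetic cannot represent this).
near_to_yocto() {
  printf '%s000000000000000000000000\n' "$1"
}
near_to_yocto 10   # 10000000000000000000000000
```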
#### Withdraw
Unstaking takes 2-3 epochs to complete; after that period you can withdraw your yoctoNEAR from the pool.
Command:
```
near call <staking_pool_id> withdraw '{"amount": "<amount yoctoNEAR>"}' --accountId <accountId> --gas=300000000000000
```
Command to withdraw all:
```
near call <staking_pool_id> withdraw_all --accountId <accountId> --gas=300000000000000
```
#### Ping
A ping issues a new proposal and updates the staking balances for your delegators. A ping should be issued each epoch to keep reported rewards current.
Command:
```
near call <staking_pool_id> ping '{}' --accountId <accountId> --gas=300000000000000
```
### Balances
#### Total Balance
Command:
```
near view <staking_pool_id> get_account_total_balance '{"account_id": "<accountId>"}'
```
#### Staked Balance
Command:
```
near view <staking_pool_id> get_account_staked_balance '{"account_id": "<accountId>"}'
```
#### Unstaked Balance
Command:
```
near view <staking_pool_id> get_account_unstaked_balance '{"account_id": "<accountId>"}'
```
#### Available for Withdrawal
You can only withdraw funds from a contract if they are unlocked.
Command:
```
near view <staking_pool_id> is_account_unstaked_balance_available '{"account_id": "<accountId>"}'
```
#### Pause / Resume Staking
##### Pause
Command:
```
near call <staking_pool_id> pause_staking '{}' --accountId <accountId>
```
##### Resume
Command:
```
near call <staking_pool_id> resume_staking '{}' --accountId <accountId>
```
---
## LESSON 5 - MONITORING
### Log Files
The log file is stored either in the ~/.nearup/logs directory or in the systemd journal, depending on your setup.
NEARUp Command:
```
nearup logs --follow
```
Systemd Command:
```
journalctl -n 100 -f -u neard | ccze -A
```
**Log file sample:**
Example line from an active validator:
```
INFO stats: #85079829 H1GUabkB7TW2K2yhZqZ7G47gnpS7ESqicDMNyb9EE6tf Validator 73 validators 30 peers ⬇ 506.1kiB/s ⬆ 428.3kiB/s 1.20 bps 62.08 Tgas/s CPU: 23%, Mem: 7.4 GiB
```
* **Validator**: A “Validator” will indicate you are an active validator
* **73 validators**: Total 73 validators on the network
* **30 peers**: You currently have 30 peers. You need at least 3 peers to reach consensus and start validating
* **#85079829**: The current block height – watch to ensure blocks are moving
### RPC
Any node within the network offers RPC services on port 3030, as long as the port is open in the node's firewall. NEAR-CLI uses RPC calls behind the scenes. Common uses for RPC are checking validator stats, the node version, and delegator stake, although it can be used to interact with the blockchain, accounts, and contracts overall.
Find many commands and how to use them in more detail here:
- https://docs.near.org/api/rpc/introduction
Command:
```
sudo apt install curl jq
```
##### Common Commands:
###### Check your node version:
Command:
```
curl -s http://127.0.0.1:3030/status | jq .version
```
###### Check Delegators and Stake
Command:
```
near view <your pool>.stake.guildnet get_accounts '{"from_index": 0, "limit": 10}' --accountId <accountId>.guildnet
```
###### Check Reason Validator Kicked
Command:
```
curl -s -d '{"jsonrpc": "2.0", "method": "validators", "id": "dontcare", "params": [null]}' -H 'Content-Type: application/json' https://rpc.openshards.io | jq -c ".result.prev_epoch_kickout[] | select(.account_id | contains ("<POOL_ID>"))" | jq .reason
```
###### Check Blocks Produced / Expected
Command:
```
curl -s -d '{"jsonrpc": "2.0", "method": "validators", "id": "dontcare", "params": [null]}' -H 'Content-Type: application/json' http://localhost:3030/ | jq -c ".result.current_validators[] | select(.account_id | contains ("POOL_ID"))"
```
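Once you have `num_produced_blocks` and `num_expected_blocks` from that output, the uptime ratio — which must stay above 90% to keep your seat — can be computed with awk. The counts below are example values:

```
# Example counts; substitute the values reported for your pool.
produced=873
expected=900
ratio=$(awk -v p="$produced" -v e="$expected" 'BEGIN { printf "%.1f%%", 100 * p / e }')
echo "$ratio"   # 97.0%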
### Prometheus
Monitoring disk, CPU, memory, network I/O, missed blocks, and peers is critically important to a healthy node. Prometheus and Grafana combined provide monitoring and visual reporting tools. Please note that Prometheus is best set up on another machine due to the storage requirements for logs.
#### Installation
Command:
```
sudo apt-get update
```
Command:
```
sudo apt-get install prometheus prometheus-node-exporter prometheus-pushgateway prometheus-alertmanager
```
Check that Prometheus was installed and is in your path.
```
prometheus --version
```
#### Check Service Status
Command:
```
sudo systemctl status prometheus
```
Command:
```
sudo systemctl status prometheus-node-exporter
```
Command:
```
sudo vi /etc/prometheus/prometheus.yml
```
#### Update the targets and save
```
targets: ['localhost:9093', 'localhost:3030']
```
#### Reference the rules file
Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
```
rule_files:
- "rules.yml"
```
#### Setup Postfix email
Command:
```
sudo apt-get install mailutils
```
#### Setup Responses
* internet site
* enter a domain name (Used for the from email address)
Command
```
sudo vi /etc/postfix/main.cf
```
#### Update and save the config file
* inet_interfaces = all to inet_interfaces = localhost
* inet_protocols = all to inet_protocols = ipv4
#### Restart postfix
Command:
```
sudo service postfix restart
```
#### Add the hostname used with Postfix
Command:
```
sudo vi /etc/hostname
```
Command
```
sudo nano /etc/hosts
```
#### Send a test email
```
echo "This is the body of the email" | mail -s "This is the subject line" user@example.com
```
#### Update rules.yml
Command:
```
sudo vi /etc/prometheus/rules.yml
```
```
groups:
- name: near
  rules:
  - alert: InstanceDown
    expr: up == 0
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "Endpoint {{ $labels.instance }} down"
      description: "{{ $labels.instance }} of job {{ $labels.job }}"
  - alert: NearVersionBuildNotMatched
    expr: near_version_build{instance="yournode.io", job="near"} != near_dev_version_build{instance="yournode.io", job="near"}
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "NEAR node version needs to be updated."
      description: "Your version is out of date and you risk getting kicked."
  - alert: StakeBelowSeatPrice
    expr: abs((near_current_stake / near_seat_price) * 100) < 100
    for: 2m
    labels:
      severity: critical
    annotations:
      description: 'Pool is below the current seat price'
```
#### Restart services
Command:
```
sudo systemctl restart prometheus
```
Command:
```
sudo systemctl status prometheus
```
### Grafana
#### Download and add the package
Command:
```
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
```
Command:
```
sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
```
Command:
```
sudo apt-get install grafana
```
#### Add the Alertmanager Target
This scrape target belongs in the Prometheus config rather than the rules file:
Command:
```
sudo vi /etc/prometheus/prometheus.yml
```
```
  - job_name: validator
    static_configs:
      - targets: ['localhost:9093']
```
#### Update alert manager
Command:
```
sudo vi /etc/init.d/prometheus-alertmanager
```
```
NAME=prometheus-alertmanager
```
#### Set the command-line arguments
Command:
```
sudo vi /etc/default/prometheus-alertmanager
```
```
ARGS="--cluster.listen-address="
```
#### Start & Reload services
Command:
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable grafana-server.service
```
```
sudo systemctl start grafana-server
```
```
sudo service grafana-server status
```
#### Install additional plugins
```
sudo grafana-cli plugins install simpod-json-datasource
```
```
sudo grafana-cli plugins install ryantxu-ajax-panel
```
#### Restart service and update password
```
service grafana-server restart
```
```
sudo grafana-cli admin reset-admin-password admin
```
#### Open Ports to specific IP
Command:
```
sudo iptables -L
```
Command:
```
sudo iptables -A INPUT -p tcp --dport 3000 -s 66.73.0.194 -j ACCEPT
```
```
sudo netfilter-persistent save
```
```
sudo netfilter-persistent reload
```
#### Create Dashboard
* Add datasource Prometheus
* Add Notification channel email
* Config grafana email
Command
```
sudo vi /etc/grafana/grafana.ini
```
```
[smtp]
enabled = true
host = localhost:25
skip_verify = true
from_address = monitor@yournode.com
from_name = Validator
```
#### Restart service
```
service grafana-server restart
```
#### Watch logs
```
sudo tail -f /var/log/grafana/grafana.log
```
#### Install Prometheus Exporter
Command:
```
sudo apt install golang-go
git clone https://github.com/masknetgoal634/near-prometheus-exporter.git
cd near-prometheus-exporter/
go build -a -installsuffix cgo -ldflags="-w -s" -o main .
```
#### Start exporter
Command:
```
./main -accountId <contract account id>
netstat -an | grep 9333
```
#### Update Prometheus
Command:
```
sudo vi /etc/prometheus/prometheus.yml
```
```
  - job_name: near-node
    scrape_interval: 15s
    static_configs:
      - targets: ['<NODE_IP_ADDRESS>:3030']
```
#### Update AlertManager
```
sudo vi /etc/prometheus/alertmanager.yml
```
Then verify the mail queue, active alerts, and config files with the following commands:
```
postqueue -p
```
```
amtool alert
```
```
promtool check rules /etc/prometheus/rules.yml
```
```
promtool check config /etc/prometheus/prometheus.yml
```
```
amtool check-config /etc/prometheus/alertmanager.yml
```
Rename the exporter binary and create a systemd service for it:
```
cd near-prometheus-exporter/
```
```
mv main near-exporter
```
```
sudo vi /lib/systemd/system/near-exporter.service
```
```
[Unit]
Description=NEAR Prometheus Exporter
[Service]
Restart=always
User=prometheus
EnvironmentFile=/etc/default/near-exporter
ExecStart=/opt/near-prometheus-exporter/near-exporter $ARGS
ExecReload=/bin/kill -HUP $MAINPID
TimeoutStopSec=20s
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
```
#### Update exporter
Command:
```
sudo vi /etc/default/near-exporter
# Set the command-line arguments to pass to the server.
# Set your contract name
ARGS="-accountId yournode"
```
#### Copy config and start service
Command:
```
sudo cp /lib/systemd/system/near-exporter.service /etc/systemd/system/near-exporter.service
```
```
sudo chmod 644 /etc/systemd/system/near-exporter.service
```
```
sudo systemctl start near-exporter
```
```
sudo systemctl status near-exporter
```
```
ps -elf | grep near-exporter
```
```
netstat -an | grep 9333
```
```
sudo systemctl enable near-exporter
```
---
## LESSON 6 - TROUBLESHOOTING
In this lesson you will learn about:
* Keys
* Common Errors
### Keys
NEAR uses cryptographic keys to secure accounts and validators; each key has a public and matching private key pair.
Find more detailed information about NEAR and keys:
https://docs.near.org/concepts/basics/account#access-keys
#### Validator Keys:
To manage peering and sign transactions, each node has a node_key and a validator_key.
##### Node Key:
Used to communicate with other peers and is primarily responsible for syncing the blockchain.
```
~/.near/<network>/node_key.json
```
Common Issue:
In a failover situation, both the node_key and validator_key must be copied over to the new node.
##### Validator Key:
Used to sign and validate blocks.
```
~/.near/<network>/validator_key.json
```
Common Issues:
1. The validator key used to initialize the staking contract is not the one listed in validator_key.json
2. The account Id submitted when NEARUp was initialized is not the same as the one in validator_key.json
### Common Errors & Solutions
#### Submitting a proposal before the validator is synced
A validator must be fully synced before submitting a proposal to enter the validator set. If you are not fully synced and you entered the validator set your log will be filled with errors.
**Resolution**
Wait up to 4 epochs until you have been kicked from the validator set. Delete the data directory and resync. Be sure to be fully synced before any staking actions or pings go to your pool.
```
1. Stop the node
2. rm -Rf ~/.near/<NETWORK_ID>/data
3. Start the node
```
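The delete step is easy to get wrong: only the `data` subdirectory should be removed, so that node_key.json, validator_key.json, and config.json survive. A small helper sketch makes that explicit (the paths are placeholders for your actual NEAR home and network id):

```
# Remove only the chain data, preserving keys and config in the network directory.
resync_data_dir() {
  near_home="$1"     # e.g. ~/.near
  network_id="$2"    # e.g. mainnet
  rm -rf "${near_home}/${network_id}/data"
}
```

Call it as `resync_data_dir ~/.near mainnet` while the node is stopped, then start the node again.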
#### Starting the node when another instance is running
On occasion, a process will get disconnected from the shell or NEARUp. The common error seen when trying to start the node will be:
```
Err value: Os { code: 98, kind: AddrInUse, message: "Address already in use" }',
```
**Resolution**
Find the pid of the running process
```
ps -elf | grep neard
```
Kill the hung pid
```
kill -9 <PID>
```
#### Running out of Disk space
If you find “no space left on device” errors in the validator log files, then you likely ran out of disk space.
```
1. Stop the node
2. Failover to a backup node
3. Cleanup files or resize disk
4. Fail back over.
```
#### Not enough stake to obtain a validator seat
You can check that your validator proposal was accepted by checking
```
near proposals
```
#### Not producing blocks
If your validator is producing zero blocks in `near validators current`, it is usually because the staking pool contract and validator_key.json have different public keys, or because the account_id in validator_key.json is not the staking pool contract id.
**Get the staking pool key**
```
near view <POOL_ID> get_staking_key '{}'
```
**Get the key in validator_key.json**
```
cat ~/.near/<network>/validator_key.json | grep public_key
```
> Note: Both keys must match. If they do not match, update the staking pool key and wait 2 epochs before pinging.
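A quick way to compare the two keys side by side. The values below are stand-ins: `pool_key` stands in for the output of `get_staking_key`, and `validator_json` for the contents of your actual validator_key.json:

```
# Stand-in values -- replace with the real get_staking_key output and key file contents.
pool_key='ed25519:ExampleKey111111111111111111111111111111111'
validator_json='{"account_id":"pool.poolv1.near","public_key":"ed25519:ExampleKey111111111111111111111111111111111"}'

# Extract the public_key field without requiring jq.
local_key=$(echo "$validator_json" | grep -o '"public_key":"[^"]*"' | cut -d'"' -f4)

if [ "$pool_key" = "$local_key" ]; then result="keys match"; else result="keys differ"; fi
echo "$result"
```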
**Update staking pool key**
```
near call <POOL_ID> update_staking_key '{"stake_public_key": "<PUBLIC_KEY>"}' --accountId <ACCOUNT_ID>
```
**Check the account_id is set to the staking pool contract**
```
vi ~/.near/<network>/validator_key.json
```
```
"account_id": "<staking pool id>"
```
> Note: if it does not match the staking pool id, update it and restart your node.
#### Not producing enough blocks or chunks
Missing blocks or chunks is the primary reason a validator is kicked from the validator set. You can check the number of blocks produced/expected via NEAR-CLI and RPC.
**NEAR-CLI**
```
near validators current | grep <pool id>
```
**RPC**
```
curl -s -d '{"jsonrpc": "2.0", "method": "validators", "id": "dontcare", "params": [null]}' -H 'Content-Type: application/json' http://localhost:3030/ | jq -c ".result.current_validators[] | select(.account_id | contains ("<POOL_ID>"))"
```
**Check the reason kicked**
```
curl -s -d '{"jsonrpc": "2.0", "method": "validators", "id": "dontcare", "params": [null]}' -H 'Content-Type: application/json' https://rpc.openshards.io | jq -c ".result.prev_epoch_kickout[] | select(.account_id | contains ("<POOL_ID>"))" | jq .reason
```
#### Not exporting the correct environment
Be sure that you are using the correct environment each time you run NEAR-CLI:
```
export NEAR_ENV=<mainnet,testnet,guildnet>
```
#### Incompatible CPU without AVX support
One cause of missed blocks and nodes falling out of sync is running on a CPU without AVX support. AVX support is required, but not always used depending on the complexity of the transaction.
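On Linux you can check whether the CPU advertises AVX by inspecting the feature flags in `/proc/cpuinfo`; the status messages below are illustrative:

```
# /proc/cpuinfo lists CPU feature flags on Linux; the guard keeps this safe elsewhere.
if [ -r /proc/cpuinfo ] && grep -q avx /proc/cpuinfo; then
  avx_status="AVX supported"
else
  avx_status="AVX not detected"
fi
echo "$avx_status"
```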
#### Not running on SSD drives
NEAR requires a 1-second block time, and HDDs are just not fast enough. SSDs are required, and NVMe SSDs are recommended.
#### Inconsistent Internet connection
NEAR requires a 1 second block time, so any latency from your internet provider can cause you to miss blocks.
#### Inability to gain enough peers for consensus
If you are unable to gain any peers, a restart can help. In some rare cases the boot_nodes list in config.json can be empty.
The logs will look like this:
```
INFO stats: #42376888 Waiting for peers 0/0/40 peers ⬇ 0 B/s ⬆ 0 B/s 0.00 bps 0 gas/s CPU: 2%, Mem: 91.3 MiB
```
**Resolution**
Download the latest config.json file and restart:
- For Testnet: `https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/testnet/config.json`
- For Mainnet: `https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/mainnet/config.json`
## LESSON 7 - NODE FAILOVER
It is an unspoken requirement to maintain a secondary node in a different location in the event of a hardware or network failure on your primary node.
You will need to set up a standard node (without a validator key) and a different node_key. The node is set up using the normal process; see “Setup a Validator Node”.
Note: To be used as a failover node, the backup node must track the same shard in config.json as the primary node. Please confirm the backup node has the same `config.json` as the primary node.
```
"tracked_shards":[0],
```
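A simple sanity check before relying on the backup is to extract `tracked_shards` from both config files and compare. In this sketch the temp files stand in for the primary's and backup's real `~/.near/<network>/config.json` paths:

```
# Stand-in config files; point these at the real config.json on each node.
primary_cfg=$(mktemp); backup_cfg=$(mktemp)
echo '"tracked_shards":[0],' > "$primary_cfg"
echo '"tracked_shards":[0],' > "$backup_cfg"

# Pull out just the tracked_shards setting from each file.
a=$(grep -o '"tracked_shards":\[[^]]*\]' "$primary_cfg")
b=$(grep -o '"tracked_shards":\[[^]]*\]' "$backup_cfg")

if [ "$a" = "$b" ]; then shard_check="tracked_shards match"; else shard_check="MISMATCH: $a vs $b"; fi
echo "$shard_check"
```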
### Failing Over
To fail over, you must copy the node_key.json and the validator_key.json to the secondary node and restart.
```
1. Copy over node_key.json
2. Copy over validator_key.json
3. Stop the primary node
4. Stop the secondary node
5. Restart the secondary node
```
When failing back to the primary, simply move the secondary's node_key.json into place and restart the services in reverse order.
>Got a question?
<a href="https://stackoverflow.com/questions/tagged/nearprotocol">
<h8>Ask it on StackOverflow!</h8></a>
---
sidebar_position: 4
---
# Versioning
:::warning Not Yet Available
Versioning is currently being worked on by the BWE team and is not yet available for use. **Current behavior is that the latest version of a component is always used.**
:::
BWE Components will support versioning based on blockheight at which changes were published. This resembles commits in a git repository.
Default behavior will be to use the latest version of a component **at the time of publish of the parent component**. It will be possible to specify that the current latest version of an embedded component should be loaded instead.
<details>
<summary>Why change the default behavior from BOS components?</summary>
<p>Locking embedded components to their state at time of publish will lead to more predictable frontend behavior across the ecosystem. We have often seen broken UI as a result of component authors not locking their embeds to a specific version and the embedded component changing in functionality or presentation.</p>
</details>
NEAR Opens the Door to More Wallets
COMMUNITY
July 29, 2022
Part of the NEAR’s mission has been to foster an ecosystem that can build and maintain the core components of a rich and vibrant Web3 ecosystem.
One of those key functions is a wallet provider. So far, to help onboard users quickly and easily to the NEAR ecosystem, the Foundation built and maintained a wallet at wallet.near.org.
As NEAR continues to decentralize, it’s transitioning wallet.near.org to be a landing page for all compatible wallets operating in the ecosystem.
The move is designed to give users more choice as they move around the Web3 space. NEAR has always believed in a multichain world, and with wallets, it takes a similar approach.
Allowing a user from a different ecosystem to interact with the NEAR blockchain using a wallet they already know and love, is an essential part of making Web3 as easy to use as Web2.
What happens to Wallet.NEAR.org?
As part of the transition, wallet holders on Wallet.NEAR.org will be encouraged to migrate their wallet to a new provider. The Foundation will be publishing a range of tutorials and guides for how to do that over the coming weeks.
Developers on NEAR should start the process of implementing Wallet Selector. This new modal provides a list of supported wallets, and allows dapp users to select their preferred choice of wallet.
For developers who are currently hardcoding wallet URLs, consider implementing Wallet Selector, a link to the Github repository for implementation can be found here.
For wallet creators, this migration presents an opportunity to onboard new users. Wallet creators will have to implement a functionality to import an account using a private key.
Additionally, the core developer team is hosting twice-daily dev office hours if you run into any issues with Wallet Selector integration. A link can be found here.
As NEAR aims to have a thriving ecosystem with multiple wallets, providing a single point of integration for developers building on NEAR is vital. NEAR’s Wallet Selector fosters a rich ecosystem of wallets to thrive on NEAR Protocol and creates a simple integration experience to enable multiple wallets for DApp developers.
What other Wallet Options are Available for NEAR?
If you’re interested in exploring what other wallets are available in the ecosystem, there are plenty of options. In fact, there are more than 30 wallet providers compatible with NEAR.
Check out AwesomeNEAR for a list of wallets compatible with the NEAR ecosystem.
Stay tuned for more updates on the transition.
Web3’s Leading Ladies: NEAR Foundation Honors 2023’s Women Changemakers
NEAR FOUNDATION
June 29, 2023
As Collision 2023 rolls on, NEAR Foundation announced the ten winners of the Women in Web3 Changemakers award. These exceptional women, chosen from a pool of 200 nominees worldwide, underline the significant contributions women can make throughout the Web3 ecosystem.
“These Changemakers show us that together we can make a positive difference and shape the collective good of Web3,” says Marieke Flament, CEO of NEAR Foundation. “We must continue to help one another and nurture a strong, international community that includes both men and women.”
The Women in Web3 Changemakers competition illuminates the groundbreaking work of influential women in the Web3 space. As this platform appreciates the significant strides made by these remarkable women, it’s important to recognize the unique journeys of all ten winners, each of whom has contributed to the evolution of the blockchain industry.
Women behind the success of Web3 honored at Collision
Candidates for the Women in Web3 Changemakers award come from all corners of the globe, each making unique marks on the Web3 ecosystem in their own respective ways. Each of the following women made the final top ten because they’ve carved out a niche, advocated for transformative solutions, and been a part of breakthrough blockchain initiatives.
The full list of winners includes:
Emily Rose Dallara: Web3 Leadership Coach and Podcast Producer
Janine Grainger: Co-founder and CEO, Easy Crypto AI
Bridget Greenwood: Founder, The Bigger Pie
Cathy Hackl: Chief Futurist and Chief Metaverse Officer at Journey
Erica Kang: Founder and CEO, KryptoSeoul
Irina Karagyaur: Founder and Director, BQ9
Veronica Korzh: Co-founder and CEO, Geekpay
Zoe Leavitt: Founder and CEO, Glass
Alana Podrx: Founder and CEO, Eve Wealth
Yaliwe Soko: Chair, United Africa Blockchain Association
Handpicked by public vote from a pool of hundreds of applications, these Changemakers were assessed on criteria of inclusion, influence, and innovation. Each woman demonstrated the ability to drive societal good, make a substantial impact in the Web3 community, and contribute to critical projects, uplifting the global female presence in the Web3 landscape.
“The Changemakers show that we can forge our own path into the ecosystem and create a collective narrative that has the power to break down barriers to entry and make Web3 more inclusive for everyone,” Flament continues. “I’m extremely honored to be championing this initiative and to spotlight these exceptionally talented women.”
The diverse journeys of Women in Web3 Changemakers
All ten winners of the Women in Web3 Changemakers awards come from a variety of backgrounds, geographies, and skill sets. Yaliwe Soko, for instance, is championing opportunities in Web3 to alleviate poverty and provide opportunities across Africa. Others like Cathy Hackl are icons in spheres like the metaverse.
Erica Kang, meanwhile, spearheads KryptoSeoul and was responsible for BUIDL ASIA 2022 conference, attended by ETH co-founder Vitalik Buterin and NEAR Protocol co-founder Illia Polosukhin. And through her organization The Bigger Pie, Bridget Greenwood is building an entire support ecosystem to advance both women and minorities in blockchain and crypto.
Another changemaker, Emily Rose Dallara, is a Web3 leadership coach and podcast producer, who uses a Holistic Growth Coaching method to help overwhelmed leaders thrive — without the burnout. Helping to make digital assets more accessible, Kiwi-born Janine Grainger is the co-founder and CEO of Easy Crypto, a simple and secure way for anyone to get involved in crypto. Irina Karagyaur, the founder and director of BQ9, is working to turn great ideas into even better Web3 products through her crypto-fintech boutique advisory firm.
With Geekpay, co-founder and CEO Veronica Korzh is streamlining and securing digital payments without the stress and need for long wallet addresses. Zoe Leavitt, the founder and CEO of Glass, has set up shop in the loyalty space, allowing customers to level up their nights out by earning rewards from their favorite alcohol brands. And Alana Podrx, the founder and CEO of Eve Wealth, has created a community of women who share portfolios, co-invest, and support each other in self-guided wealth management through peer-to-peer learning.
Celebrating the individual successes and collective efforts of these Changemakers reflects the massive impact and influence of women in Web3. From Africa to Asia, from the metaverse and marketing to crypto media and education, each winner has blazed her unique trail and is impacting the Web3 ecosystem, inspiring both men and women to do the same.
“I’m grateful for the hundreds of nominations we received from peers and employers who took the time to highlight the many achievements that the global female workforce has made to their organizations,” Flament continues. “By casting their ballot, the international community has shown that the contributions of women are being noticed.”
By celebrating the contributions of these Changemakers, the NEAR Foundation underscores its commitment to diversity, inclusion, and innovation in the Web3 space. As females in crypto continue to make strides, it’s vital to recognize and appreciate the efforts of women who are paving the way for an equitable and inclusive future in Web3 and beyond.
NEAR Foundation Joins Forces with Mirae Asset for the Next Leap in Web3 Finance
NEAR FOUNDATION
June 8, 2023
In a pivotal partnership for the blockchain industry, NEAR Foundation is joining forces with Mirae Asset — a subsidiary of Asia’s largest financial group Mirae Asset Global — to help bridge the gap between trad-fi and Web3. This collaborative endeavor will thrust Web3 into the spotlight in the traditional finance market in South Korea and the larger Asian region, driving innovation and growth in the sector.
With the guidance of the NEAR Korea Hub, this partnership places NEAR Foundation and Mirae Asset at the helm of the Web3 business domain. Together, they will conduct in-depth research and foster collaboration on blockchain technology, laying a robust foundation for the use of Web3 by the traditional financial industry in the region.
Simultaneously, global joint events are in the pipeline to bolster brand visibility, coupled with a planned mutual support system. This system will fortify the Web2/Web3 business network, carving out a strong pathway for the integration of these technologies.
NEAR and Mirae Asset: propelling finance into the Web3 age
NEAR is a global Layer 1 protocol with a strong emphasis on usability. With its new Blockchain Operating System (BOS) feature FastAuth, NEAR offers a user-friendly gateway into the world of Web3, mirroring the simplicity of email login. By presenting accessible solutions through BOS, it eases the transition for traditional businesses and developers. This ease of entry into Web3 fosters an environment conducive to the growth of innovative financial services.
“Our agreement with Mirae Asset will provide a platform for us to showcase the capabilities of NEAR Protocol and its powerful Blockchain Operating System to help transform the finance industry,” said Marieke Flament, CEO of the NEAR Foundation. “We look forward to supporting this important partnership and helping to play a key role in transforming the future of Web3 finance.”
A major financial player partners with NEAR Foundation
Entering into an alliance with NEAR Foundation, Mirae Asset demonstrates its mission to transform into a top-tier global investment bank. Providing standout services in corporate finance, trading, wealth management, and private equity investments, it’s redefining the finance industry.
“We will continue to innovate and develop the foundational blockchain technology that underpins the Web3 industry,” said Ahn In-sung, Head of the Digital Division at Mirae Asset. “We will actively collaborate with exceptional blockchain communities like NEAR Protocol to incorporate the technology into our global business.”
Mirae Asset’s pursuit of innovation is embodied in the creation of the “Next Finance Initiative” (NFI). Teaming up with South Korea’s key players, such as SK Telecom and Hana Financial Group, the initiative signals an intriguing expansion for the NEAR ecosystem. As part of the partnership, NEAR Foundation will also join the initiative’s working group for practical discussions.
The NEAR Korea Hub, responsible for business expansion across Korea and other Asian regions, has been instrumental in forming this partnership. The Hub views this alliance as a significant leap showcasing NEAR Protocol’s potential in the financial industry.
“NEAR Foundation is expanding the horizons of the Web3 industry by onboarding leaders in the gaming industry (Kakao Games Bora, WeMade, and Netmarble) and Mirae Asset, a pioneer in the financial industry,” said Scott Lee, General Manager of the NEAR Korea Hub. “NEAR Foundation will continue to expand its reach through collaborations with industry leaders to actively drive the transformation of the existing financial paradigm.”
In essence, this alliance between NEAR Foundation and Mirae Asset marks a turning point in the realm of Web3 finance, firmly rooted in the innovative and scalable infrastructure of the NEAR Protocol, NEAR Foundation, and ecosystem.
---
sidebar_label: "Indexer Framework"
---
# NEAR Indexer Framework
:::note GitHub repo
https://github.com/near/nearcore/tree/master/chain/indexer
:::
:::caution You might be looking for NEAR Lake Framework
[NEAR Lake Framework](near-lake-framework.md) is a lightweight alternative to NEAR Indexer Framework that is recommended for use when centralization can be tolerated.
:::
## Description
NEAR Indexer Framework is a Rust package (crate) that embeds [nearcore](https://github.com/near/nearcore), and abstracts away all the complexities of collecting every bit of information related to each produced block in NEAR network. The crate name is [`near-indexer`](https://github.com/near/nearcore/tree/master/chain/indexer), and it is part of the [nearcore repository](https://github.com/near/nearcore).
`near-indexer` is a micro-framework that provides you with a stream of blocks recorded on the NEAR network. It is useful for handling real-time "events" on the chain.
## Rationale
As scaling dApps enter NEAR’s mainnet, an issue may arise: how do they quickly and efficiently access the state of their deployed smart contracts, and cut out the cruft? Contracts may grow to have complex data structures, and querying the network RPC may not be the optimal way to access state data. The NEAR Indexer Framework allows streams to be captured and indexed in a customized manner. The typical use case is for this data to make its way to a relational database. Since this is custom per project, there is engineering work involved in using this framework.
## Limitations
NEAR Indexer Framework embeds a full NEAR node, so it must sync with the peer-to-peer network and store all the network data locally. This makes it subject to the node's storage requirements: hundreds of GB on SSD if you only need to extract data no older than ~2.5 days, and thousands of GB on SSD if you want to be able to go over the whole history of the network. Also, the network sync process is known to be extremely slow: blocks are produced at 1 block per second, while block sync usually reaches 2 blocks per second, which means the node catches up with the live network at a net speed of 1 block per second. If your node was offline for one hour, it will take another hour to catch up to the tip of the network, which keeps producing fresh blocks in the meantime.
NEAR Indexer Framework only exposes blocks that have been finalized. In NEAR Protocol, it takes 3 consecutive blocks for a block to become final, which means there is at least a three-block delay between the time a transaction hits the network and the time it is finalized and streamed from NEAR Indexer Framework. If we measure the delay between the moment a transaction is submitted from the client device and the moment an Indexer Framework-based indexer receives it, we see the following timings:
* A serialized transaction being transferred over the Internet to NEAR node (most commonly, through [NEAR JSON RPC broadcast_tx_commit](https://docs.near.org/api/rpc/transactions#send-transaction-await)): around 50ms (it is not measured precisely as it is mostly network latency of TCP handshake + HTTPS handshake)
* The transaction is routed to the [validation node](https://near-nodes.io/intro/what-is-a-node): around 50ms (again, mostly network latency between the peer nodes)
* The transaction arrives in the mempool on the validation node and will be delayed at least until the next chunk/block is produced, so if the transaction was received right at the moment when transactions for the current block were selected, it would take 1.2 seconds on mainnet to get the next block produced
* Once the transaction is included in a block, it will produce a receipt which often will be executed in the next block (another 1.2-second delay) - learn more about the NEAR Protocol data flow [here](../data-flow/near-data-flow.md)
* Given that block finalization takes 3 blocks (1.2 seconds * 3), Indexer Framework will only get the opportunity to start collecting the information about the block where the transaction was included 3.6 seconds later, plus at least a 50ms delay introduced by network latency as produced blocks propagate from the validation nodes back to the regular nodes
* Indexer Framework then collects all the bits of information for the produced block and streams it: around 50-100ms
* Custom indexer implementation receives the block and there could be additional delays down the line, but that is outside of our scope here
Ultimately, it takes at least 3.8 seconds from the moment one submits a transaction to the network until an Indexer Framework-based indexer picks it up, with finalization contributing most of the delay. In a real-life scenario, dApps usually need to know the result of the execution, and it takes a couple of blocks after the transaction is included for all the receipts to be executed (read more about the data flow [here](../data-flow/near-data-flow.md)), so the delay between transaction submission and the result being observed by an indexer could be 5-7 seconds.
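The per-step estimates above can be tallied in a quick sketch (the numbers are the rough figures quoted in this list, not measurements):

```javascript
// Rough minimum end-to-end latency (milliseconds) for a transaction to be
// picked up by an Indexer Framework-based indexer, using the estimates above.
const delaysMs = {
  clientToRpcNode: 50,     // serialized transaction sent over the Internet
  routingToValidator: 50,  // peer-to-peer routing to the validation node
  finalization: 3 * 1200,  // 3 consecutive ~1.2s blocks to finalize
  propagationBack: 50,     // finalized blocks propagate to regular nodes
  indexerStreaming: 100,   // Indexer Framework collects and streams the block
};

const totalMs = Object.values(delaysMs).reduce((sum, d) => sum + d, 0);
console.log(`~${(totalMs / 1000).toFixed(2)}s minimum end-to-end delay`); // ~3.85s
```

Note this best case assumes the transaction lands in the very next block; waiting up to one block interval in the mempool, and extra blocks for receipt execution, push the observed delay toward the 5-7 second range.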
## Current Status
Indexer Framework is a tool that provides a straightforward way of getting a stream of finalized NEAR Protocol blocks as soon as possible, operating over the decentralized NEAR Protocol peer-to-peer network.
However, in our experiments with the Indexer ecosystem, we realized that we needed a lightweight foundation on which to build micro-indexers instead of maintaining a full [nearcore](https://github.com/near/nearcore) node. We considered various solutions to deliver events (Kafka, RabbitMQ, etc.), but ultimately we decided to dump all the blocks as-is to an AWS S3 bucket. This is where the NEAR Lake ecosystem was born; learn more about it [here](near-lake-framework.md).
These days, we use NEAR Indexer Framework to implement [NEAR Lake Indexer](https://github.com/near/near-lake-indexer), and from there we build micro-indexers based on [NEAR Lake Framework](near-lake-framework.md). That said, Indexer Framework plays a crucial role in the ecosystem even though most indexers these days are implemented without using it directly.
## Applications
See the [example](https://github.com/nearprotocol/nearcore/tree/master/tools/indexer/example) for further technical details.
- [`near-examples/indexer-tx-watcher-example`](https://github.com/near-examples/indexer-tx-watcher-example) NEAR Indexer example that watches for transactions involving specified accounts/contracts
:::info NEAR Indexer Framework usage
The most famous project built on top of NEAR Indexer Framework is [NEAR Indexer for Explorer](/tools/indexer-for-explorer)
:::
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs groupId="nft-contract-tabs" className="file-tabs">
<TabItem value="Paras" label="Paras">
```bash
near call marketplace.paras.near storage_deposit '{"receiver_id": "bob.near"}' --accountId bob.near --deposit 0.00939
near call nft.primitives.near nft_approve '{"token_id": "1e95238d266e5497d735eb30", "account_id": "marketplace.paras.near", "msg": {"price": "200000000000000000000000", "market_type": "sale", "ft_token_id": "near"}}' --accountId bob.near
```
The `nft_approve` method of an NFT contract also calls the `nft_on_approve` method in `marketplace.paras.near` as a callback.
</TabItem>
<TabItem value="Mintbase" label="Mintbase">
```bash
near call simple.market.mintbase1.near deposit_storage '{"autotransfer": "true"}' --accountId bob.near --deposit 0.00939
near call nft.primitives.near nft_approve '{"token_id": "3c46b76cbd48e65f2fc88473", "account_id": "simple.market.mintbase1.near", "msg": {"price": "200000000000000000000000"}}' --accountId bob.near
```
The `nft_approve` method of an NFT contract also calls the `nft_on_approve` method in `simple.market.mintbase1.near` as a callback.
</TabItem>
</Tabs>
---
id: lock
title: Locking Accounts
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Removing all [full access keys](../../../4.tools/cli.md#near-delete-key-near-delete-key) from an account will effectively **lock it**.
When an account is locked nobody can perform transactions in the account's name (e.g. update the code or transfer money).
#### How to Lock an Account
<Tabs className="language-tabs" groupId="code-tabs">
<TabItem value="near-cli">
```bash
near keys <dev-account>
# result: [access_key: {"nonce": ..., "public_key": '<key>'}]
near delete-key <dev-account> '<key>'
```
</TabItem>
<TabItem value="near-cli-rs">
```bash
near account list-keys <dev-account> network-config testnet now
# result:
+---+------------+-------+-------------+
| # | Public Key | Nonce | Permissions |
+---+------------+-------+-------------+
.. '<key>' ... ...
+---+------------+-------+-------------+
near account delete-key <dev-account> '<key>' network-config testnet sign-with-keychain send
```
</TabItem>
</Tabs>
#### Why Lock an Account
Locking an account brings more reassurance to end-users, since they know no external actor will be able to manipulate the account's
contract or balance.
:::tip Upgrading Locked Contracts
Note that, while no external actor can update the contract, the contract **can still upgrade itself**. See [this article](upgrade.md#programmatic-update) for details.
:::
---
id: exchange-integration
title: Exchange Integration
sidebar_label: Exchange Integration
---
## Integration Reference {#integration-reference}
- [Balance Changes](/integrations/balance-changes)
- [Accounts](/integrations/accounts)
- [Fungible Tokens](/integrations/fungible-tokens)
- [Implicit Accounts](/integrations/implicit-accounts)
### Transaction Reference Links {#transaction-reference-links}
- [Basics](/concepts/protocol/transactions)
- [Specifications](https://nomicon.io/RuntimeSpec/Transactions)
- [Constructing Transactions](/integrations/create-transactions)
## Blocks and Finality {#blocks-and-finality}
Some important pieces of information regarding blocks and finality include:
- Expected block time is around 1s and expected time to finality is around 2s. The last final block can be queried by
specifying `{"finality": "final"}` in the block query. For example, to get the latest final block on mainnet, one can run
```bash
http post https://rpc.mainnet.near.org method=block params:='{"finality":"final"}' id=123 jsonrpc=2.0
```
- Block heights are not necessarily continuous, and certain heights may be skipped if, for example, the block producer for that height is offline. After a block at height 100 is produced, the block at height 101 may be skipped; when the block at height 102 is produced, its previous block is the block at height 100.
- Some blocks may not include new chunks if, for example, the previous chunk producer is offline. Even though every block in the RPC response has a non-empty `chunks` field, this does not imply that a new chunk was included in the block. The way to tell whether a chunk is new in a given block is to check whether the chunk's `height_included` equals the height of the block.
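The `height_included` check can be sketched as a small helper. The object shape below mirrors only the relevant fields of the RPC block response and is illustrative, not the full schema:

```javascript
// A block's `chunks` field is always non-empty, so a chunk is only "new"
// in a given block when its `height_included` equals the block's height.
function newChunksInBlock(block) {
  return block.chunks.filter(
    (chunk) => chunk.height_included === block.header.height
  );
}

// Example: a block at height 102 whose previous block was at height 100
// (height 101 was skipped), carrying one fresh chunk and one stale chunk
// carried over from block 100.
const block = {
  header: { height: 102 },
  chunks: [
    { shard_id: 0, height_included: 102 }, // new chunk for shard 0
    { shard_id: 1, height_included: 100 }, // no new chunk for shard 1
  ],
};
console.log(newChunksInBlock(block).map((c) => c.shard_id)); // [ 0 ]
```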
## Running an Archival Node {#running-an-archival-node}
For the `config.json` changes required to run an archival node, refer to the documentation on [Run an Archival Node](https://near-nodes.io/archival/run-archival-node-with-nearup).
## Staking and Delegation {#staking-and-delegation}
- [https://github.com/nearprotocol/stakewars](https://github.com/nearprotocol/stakewars)
- [https://github.com/near/core-contracts](https://github.com/near/core-contracts)
:::tip Got a question?
<a href="https://stackoverflow.com/questions/tagged/nearprotocol"> Ask it on StackOverflow! </a>
:::
---
id: run-archival-node-with-nearup
title: Run an Archival Node (with nearup)
sidebar_label: Run a Node (with nearup)
sidebar_position: 3
description: How to run an Archival Node with nearup
---
*We encourage you to set up your node with `neard` instead of `nearup`, as `nearup` is not used on mainnet. Please head to [Run a Node](/archival/run-archival-node-without-nearup) for instructions on how to set up an archival node with `neard`.*
<blockquote class="info">
<strong>Heads up</strong><br /><br />
Running an archival node is very similar to running a [validator node](/validator/running-a-node) as both types of node use the same `nearcore` release. The main difference for running an archival node is a modification to the `config.json` by changing `archive` to `true`. See below for more details.
</blockquote>
## Prerequisites {#prerequisites}
- [Git](https://git-scm.com/)
- [Nearup](https://github.com/near-guildnet/nearup): make sure `nearup` is installed by following the instructions at https://github.com/near-guildnet/nearup.
---
### Steps to Run an Archival Node using `nearup` {#steps-to-run-an-archival-node-using-nearup}
First, retrieve a copy of the latest archival snapshot from S3:
```bash
$ aws s3 cp --no-sign-request s3://near-protocol-public/backups/testnet/archive/latest .
$ LATEST=$(cat latest)
$ aws s3 cp --no-sign-request --recursive s3://near-protocol-public/backups/testnet/archive/$LATEST ~/.near/data
```
### Configuration Update {#configuration-update}
Running an archival node is the same as a [validator node](/validator/running-a-node) as both types of node use the same `nearcore` release. The main difference for running an archival node is a modification to the `config.json` by changing `archive` to `true`.
The `config.json` should contain the following fields. Currently, NEAR testnet and mainnet have only 1 (indexed [0]) shard and that shard is tracked. In the future, there will be the possibility to track different or multiple shards.
```
{
...
"archive": true,
"tracked_shards": [0],
...
}
```
Please make sure that the node is stopped while changing the `config.json`.
Once the config has been changed, you can restart the node, and it will start syncing new archival data. If you want the full archival history, you can either delete the data directory and start the node from scratch to sync the full history, or use one of the latest backups containing a data directory snapshot, which can be copied under the NEAR home directory (default: `~/.near/data`).
Then run:
```bash
$ nearup run testnet
```
Wait until initialization finishes, use the following command to follow logs:
```bash
$ nearup logs --follow
```
Then run:
```bash
$ nearup stop
```
Finally, run the following command and the node should start syncing headers at ~97%:
```bash
$ nearup run testnet
```
> Got a question?
> <a href="https://stackoverflow.com/questions/tagged/nearprotocol">Ask it on StackOverflow!</a>
# Appendix
|NEP # | Title | Author | Status |
|---|---|---|---|
|[0001](https://github.com/near/NEPs/blob/master/neps/nep-0001.md) | NEP Purpose and Guidelines | @jlogelin | Active |
|[0021](https://github.com/near/NEPs/blob/master/neps/nep-0021.md) | Fungible Token Standard (Deprecated) | @aevgenykuzyakov | Final |
|[0141](https://github.com/near/NEPs/blob/master/neps/nep-0141.md) | Fungible Token Standard | @aevgenykuzyakov @oysterpack | Final |
|[0145](https://github.com/near/NEPs/blob/master/neps/nep-0145.md) | Storage Management | @aevgenykuzyakov | Final |
|[0148](https://github.com/near/NEPs/blob/master/neps/nep-0148.md) | Fungible Token Metadata | @robert-zaremba @aevgenykuzyakov @oysterpack | Final |
|[0171](https://github.com/near/NEPs/blob/master/neps/nep-0171.md) | Non Fungible Token Standard | @mikedotexe @aevgenykuzyakov @oysterpack | Final |
|[0177](https://github.com/near/NEPs/blob/master/neps/nep-0177.md) | Non Fungible Token Metadata | @chadoh @mikedotexe | Final |
|[0178](https://github.com/near/NEPs/blob/master/neps/nep-0178.md) | Non Fungible Token Approval Management | @chadoh @thor314 | Final |
|[0181](https://github.com/near/NEPs/blob/master/neps/nep-0181.md) | Non Fungible Token Enumeration | @chadoh @thor314 | Final |
|[0199](https://github.com/near/NEPs/blob/master/neps/nep-0199.md) | Non Fungible Token Royalties and Payouts | @thor314 @mattlockyer | Final |
|[0297](https://github.com/near/NEPs/blob/master/neps/nep-0297.md) | Events | @telezhnaya | Final |
---
title: 3.4 DeFi Lending
description: The basis of ‘lending’ and ‘credit’ and how they operate in Web3
---
# 3.4 DeFi Lending
Lending, or ‘credit’, is an especially important topic in the macro context of crypto, as so much of the existing financial system depends upon it. In this lecture we will explain the basis of ‘lending’ and ‘credit’ and then dive into the specifics of how smart-contract-enabled lending operates. We conclude by pointing out that lending in crypto is fundamentally changing the nature of credit in our monetary systems, and drastically altering how ‘money’ works.
## What Does Credit Unlock? The Credit Creation Theory of Money
Traditional lending in the legacy financial system is the basis from which most people are able to purchase and own assets: (1) most houses are bought using a loan (mortgage), (2) most car payments are made from an auto loan, and (3) most businesses purchase commercial properties and other business items using credit as well.
**Credit Agencies Set Rules of the Game:** Yet despite how widespread credit is in many societies, it is relatively centralized with the ‘rules of the game’ based upon ‘credit’ agencies that keep track of individual performance and assign scores deeming them fit or unfit for certain types of loans.
**Credit is Nation-State Dependent:** Zooming out a little more, we see that these silos and blockers are actually capped on the nation-state level, with differences in jurisdiction being responsible for differences in how credit is viewed and evaluated.
**Credit Creation is How New Money is Created:** And from the most elevated position, we come to realize that credit creation - the creation of a new loan - is the basis from which more monetary supply is created and injected into a national economy.
_"Since the 1980s, bank credit creation has decoupled from the real economy, expanding at a considerably faster rate than GDP. According to the Quantity Theory of Credit this is evidence that an increasing amount of bank credit creation has been channeled into financial transactions.” (Where does Money Come From, Page 51)._
![](@site/static/img/bootcamp/mod-em-3.4.1.png)
_“Each and every time a bank makes a loan, new bank credit is created - new deposits - brand new money.” - Graham Towers (1939), Former Governor of the Central Bank of Canada_
_“In the Eurosystem, money is primarily created through the extension of bank credit… The commercial banks can create money themselves, the so-called giro money.” (Bundesbank, 2009)_
_“In the United Kingdom, money is endogenous - the Bank supplies base money on demand at its prevailing interest rate, and broad money is created by the banking system.” (Bank of England, 1994)_
All of this precursor is to understand the importance of lending in traditional finance, such that we can understand the implications of moving lending on-chain. Lending in traditional finance is managed by credit agencies and gatekeepers, it varies by jurisdiction and nation-state, and it is ultimately the basis of increased money supply that you and I tend to interact with. Keeping these three things in mind, let’s now see how crypto-lending turns legacy credit on its head!
## Crypto Lending is For Global Money
For anyone interested in making sense of the question “Why do we need lending in crypto?”: as it currently stands, money is locked at a regional level, insofar as an individual or entity must go through a commercial bank in a specific jurisdiction for approval on a loan. This limits the capacity and reach of credit, to the extent that foreign investors in a country are usually limited in their ability to finance foreign purchases due to credit limits from their bank of origin.
On a macro scale, crypto lending disrupts this model by removing credit checks and national indicators of ‘trustworthiness’, outsourcing the problem to the rules programmed into the smart contract, with rates set algorithmically.
### The key principles of _Decentralized Financial Lending_ can be summarized as follows:
* Loans are secured with assets held in a smart contract.
* Loans are executed or liquidated based upon the time-duration of the loan, and whether sufficient value has been returned to the smart contract.
_"If the collateral value falls below a specified ratio of the loan value, the position is automatically liquidated to pay back the debt.”_
* The system absorbs the loss of a loan repayment, usually by taking a fee from the interest paid by the borrower.
_“A percentage of the interest paid by borrowers is typically allocated to a reserve pool to repay lenders when a liquidation fails to cover the value of the loan (failed liquidation).”_
* Rates for loans (in terms of interest) are algorithmically set based upon the _utilization rate_ of a lending pool. This weighs how many deposits remain in the pool, from how many loans have been taken out from it already.
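A minimal sketch of the liquidation rule from the first principles above, assuming a made-up 150% minimum collateral ratio (real protocols set this threshold per asset):

```javascript
// Liquidation check: the position is liquidated when the collateral value
// falls below a specified ratio of the loan value. The 150% minimum ratio
// here is an invented illustration, not any protocol's actual parameter.
function shouldLiquidate(collateralValue, loanValue, minRatio = 1.5) {
  return collateralValue < loanValue * minRatio;
}

console.log(shouldLiquidate(200, 100)); // false: 200 >= 150
console.log(shouldLiquidate(120, 100)); // true:  120 < 150
```

In practice this check runs continuously against live price feeds, which is why volatile collateral requires overcollateralization in the first place.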
## DeFi Lending and How It Works
_DeFi Lending typically works by using smart contracts on a blockchain to facilitate the loan agreement between a borrower and a lender. The borrower and lender agree on the terms of the loan, such as the loan amount, interest rate, and repayment schedule, and the smart contract is used to enforce these terms. When the loan is repaid, the smart contract automatically releases the funds to the lender._
When a borrower wants to take out a loan, they deposit collateral, such as a stablecoin, or accepted token, into the smart contract. The smart contract then releases the loan amount to the borrower's wallet. The borrower then repays the loan, along with interest, according to the terms of the contract.
If lending is designed to be thoroughly decentralized, it requires the loan capital to be supplied by liquidity providers (instead of commercial banks). In this sense, the limit to the scope or amount of loans possible is capped by the total amount of available liquidity in the pool at a given time. Getting granular on the dynamics of each pool, the interest rate required of the borrower increases based on both the utilization rate and the ‘token categorization’ of the loan (risky tokens trigger a lower Utilization Rate).
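The utilization-rate mechanics can be sketched with a toy linear rate model; the base rate and slope below are invented parameters, and real protocols use per-pool curves that are often kinked at an optimal utilization point:

```javascript
// Utilization rate: the fraction of a pool's deposits currently lent out.
function utilizationRate(totalBorrowed, totalDeposits) {
  return totalDeposits === 0 ? 0 : totalBorrowed / totalDeposits;
}

// Toy linear interest-rate model: the borrow rate rises with utilization,
// so scarcer remaining liquidity commands a higher rate. `base` and `slope`
// are made-up parameters for illustration.
function borrowRate(totalBorrowed, totalDeposits, base = 0.02, slope = 0.2) {
  return base + slope * utilizationRate(totalBorrowed, totalDeposits);
}

console.log(utilizationRate(800, 1000)); // 0.8
console.log(borrowRate(800, 1000));      // ≈ 0.18 (2% base + 20% * 0.8)
```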
![](@site/static/img/bootcamp/mod-em-3.4.2.png)
## Concrete Examples: Aave, MakerDAO, and Compound
The three most successful lending protocols to date all launched on Ethereum between 2017 and 2020. Since that time, they have accrued a total of $12 billion in TVL as of December 2022; at its peak the figure was much higher.
**[Aave](https://defillama.com/protocol/aave): $3.8 billion TVL, $2 billion borrowed.**
_“Aave is a decentralized non-custodial liquidity protocol where users can participate as depositors or borrowers. Depositors provide liquidity to the market to earn a passive income, while borrowers are able to borrow in an overcollateralized (perpetually) or undercollateralized (one-block liquidity) fashion.” [(Aave Docs)](https://docs.aave.com/hub/)_
Aave is one of the most well developed lending marketplaces in crypto. While it is maintained as a non-custodial liquidity protocol, its most recent version (Aave V3) has institutional KYC built into it, such that it can accommodate [clients like JP Morgan.](https://twitter.com/AaveAave/status/1587846905509433344) The other notable feature of Aave is its capacity to accommodate a vast diversity of different lenders:
![](@site/static/img/bootcamp/mod-em-3.4.3.png)
As the image demonstrates, lending pools on Aave are customizable insofar as loans can be stable rate (meaning fixed interest payments of the same amount), variable rate (meaning changing payment amounts), utilizing interest-bearing tokens, and uncollateralized flash loans (discussed below). Most interesting is the fact that, beyond the key users of Aave (lenders and borrowers), Aave offers lending services to wallets and dApps, meaning that in a smart-contract-dominated future we can imagine a world where self-executing code is programmed to lend or borrow from itself! _Is the composability thesis evident here?_
_Tokens:_
* _Aave: AAVE is the governance token of the AAVE ecosystem._ It is used for deciding on treasury allocation, and other mechanics of the protocol. Delegators can select users to represent their interests on the protocol by voting for or against specific proposals.
![](@site/static/img/bootcamp/mod-em-3.4.4.png)
_Mechanism Design:_
The algorithmically programmed interest rates for different pools and different loans are the core of the incentive structure bringing lenders (liquidity) into pools that are able to service borrowers (users, dApps or wallets). Interestingly, because of the advanced design of the Aave ecosystem, the programmable contracts that execute loans are themselves governable by the community of token holders - such that there is a clear incentive to own the token and participate in the governance process. Finally, Aave can [also be staked, ](https://www.stakingrewards.com/earn/aave/)to protect the protocol against a shortfall event and receive safeguard incentives in return.
**[MakerDAO](https://defillama.com/protocol/makerdao): $6.31 billion TVL, via DAI issuance.**
[MakerDAO](https://docs.makerdao.com/getting-started/maker-protocol-101) trades as both a lending protocol, and a stablecoin issuance protocol, for its USD pegged asset DAI.
_“MakerDAO is a lending platform that was launched in 2017 on the [Ethereum](https://www.coindesk.com/learn/how-does-ethereum-work/) blockchain. It powers a decentralized stablecoin called DAI. The protocol works by allowing anyone to take out loans in DAI using other cryptocurrencies as collateral.” ([Coindesk](https://www.coindesk.com/tech/2019/05/30/how-makerdao-works-a-video-explainer/))_
[DAI is collateralized by 127%](https://daistats.com/#/) using a basket of assets, including real-world assets.
![](@site/static/img/bootcamp/mod-em-3.4.5.png)
_Mechanism Design of MakerDAO:_
_Tokens_: Two Tokens.
1. MKR is used to govern the mechanics of the DAO. As a governance token, owners of MKR can decide on the DAI savings rate and other levers that handle incentives for the protocol.
2. DAI is issued based upon assets that users lock into the DAO. This is the stable form of value issued out to borrowers who lock up their collateral. Notably, DAI is backed by a basket of assets, including real-world fiat and crypto.
_Mechanism Design:_
Maker issues DAI out to users who collateralize their loan with other accepted token assets. Unlike other loans, there is no repayment time frame on the DAI, it is simply a stable form of value that can be used, in exchange for other value. Compared to Aave or Compound, Maker is a simpler design, offering only one type of lending pool, and issuing its own stable coin from that pool. Those who borrow DAI, can furthermore earn interest on their DAI by locking it up into the MakerDAO bank. Governance of the rates, as a way of increasing or decreasing the amount of DAI in circulation has led to Maker being referenced as the Central Bank of Crypto.
**[Compound](https://defillama.com/protocol/compound): $1.9 billion in TVL, $600 million borrowed to date.**
[Compound](https://thedefiant.io/what-is-compound-crypto) was one of the first lending protocols that stimulated DeFi Summer in 2020, and pioneered the concept of Liquidity Mining. Compound is the clearest example of algorithmically calculated interest rates based upon different pools structured by Liquidity Providers.
_“Compound is an algorithmic, autonomous interest rate protocol built for developers, to unlock a universe of open financial applications.” ([DeFi Llama](https://defillama.com/protocol/compound))_
_Tokens:_
* _COMP:_ The COMP governance token is awarded out to anyone holding a c(Token) representative of either a loan or sourced collateral. The token can then be used to alter protocol mechanics via governance processes.
* _c(token):_ Any lent-out token or collateralized asset, on which a user either earns or pays interest. The lent-out or collateralized asset is ‘delineated with a c’, similar to how liquid staking tokens work (stNEAR).
_Mechanism Design:_
The mechanism design of Compound is directly in line with the algorithmic structure of LP pools and _Utilization Ratios_ discussed above. Users are incentivized to take out a loan if they do not wish to sell their underlying assets, but still wish to convert those assets to stable value that can be utilized. Liquidity Providers - or lenders - earn interest and token incentives for locking up their liquidity in a specific lending pool. The smart contract, maintains repayment rates, interest rewards, and new ratios for emergent loans, based upon how active a specific liquidity pool is.
## Flash Loans
A final subset of DeFi lending worth diving into is known as _flash loans._ The basis of a flash loan is for a user to borrow and repay a loan within the same transaction, so extremely quickly, depending on the L1 network they are utilizing.
While flash loans may seem to be redundant, the underlying premise is that during the time in which the loan is received, and before the loan is paid back, a user can participate in some other smart contract operation such as arbitrage, liquidation, or another swap or loan.
![](@site/static/img/bootcamp/mod-em-3.4.6.png)
Specifically, flash loans are commonly used to:
* Swap collateral supporting a loan.
* Liquidate a liquidity pool on a decentralized exchange in order to refinance a loan.
* Conduct arbitrage by exploiting the price differences of an asset across various exchanges.
* Exploit protocols for market manipulation, by creating artificial leverage.
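The borrow-operate-repay atomicity behind all of these uses can be sketched as a toy model. This is not any protocol's actual API; the pool object, the 9-basis-point fee, and the borrower callback are all hypothetical, and amounts are integers (smallest token units) to keep the arithmetic exact.

```javascript
// Toy model of flash-loan atomicity: the principal plus a fee must come back
// within the same "transaction", or the whole operation reverts.
function flashLoan(pool, amount, feeBps, operation) {
  if (amount > pool.liquidity) throw new Error("insufficient liquidity");
  const balanceBefore = pool.liquidity;
  pool.liquidity -= amount;            // lend out
  pool.liquidity += operation(amount); // borrower arbitrages, swaps, ... then repays
  const owed = balanceBefore + Math.floor((amount * feeBps) / 10000);
  if (pool.liquidity < owed) {
    pool.liquidity = balanceBefore;    // revert: as if the loan never happened
    throw new Error("flash loan not repaid");
  }
  return pool.liquidity - balanceBefore; // the pool's profit (the fee)
}

const pool = { liquidity: 1_000_000 };
// Hypothetical borrower: keeps its arbitrage profit, repays principal + 9 bps fee.
const fee = flashLoan(pool, 100_000, 9, (b) => b + Math.floor((b * 9) / 10000));
console.log(fee); // 90
```

If the callback fails to return enough, the loan is undone entirely - which is why lenders can offer these loans without collateral.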
One of the clearest examples is a well-known exploit carried out in the early days of Aave, involving both Aave and MakerDAO:
![](@site/static/img/bootcamp/mod-em-3.4.7.png)
_Source: [Moonpay blog](https://www.moonpay.com/learn/defi/defi-flash-loans)._
The conceptual logic of flash loans is to react faster than the chain is able to update, such that you can profit from moving your loan quickly into a specific operation and then back to its originator. While many believe this is a potential vulnerability for lending protocols, others see it as a form of financial innovation.
## Why is Lending such an Essential and Revolutionary Primitive for DeFi?
_On a crypto level:_ Credit enables the expedited flow of value between assets without requiring a buyer and a seller to exchange them. Its three primary innovations are as follows:
* **Transparency and Security in the Lending Process:** Because the loan terms are encoded in a smart contract, the loan process is transparent and can be audited by anyone. This can help reduce the risk of fraud that sometimes arises in the legacy financial system. Furthermore, the smart contract automatically manages the loan, including tracking the borrower's repayment progress and enforcing penalties for late payments or non-payment. This eliminates the need for a traditional financial institution to manage the loan and allows for a more transparent and secure lending process.
* **Smart-Contract Based Loan Execution:** Because the loans are facilitated by smart contracts, they can be executed quickly and without the need for intermediaries, which can make the process more efficient and less costly. In the context of the nature of transaction costs, outsourcing loan execution to smart-contracts _significantly reduces transaction costs_ while also minimizing friction for the participating parties.
* **Permissionless:** There are no credit checks or borrower-specific evaluations, as done in the legacy financial system. Anyone with collateral is eligible to participate, and the smart contract operates as the manager and liquidator if the loan cannot be repaid. This makes credit participation a global, permissionless phenomenon, no longer limited by the nation state or specific credit agencies.
_On a Macro Financial Level:_ In the broader context of economic history, and specifically the role that credit has played in money supply, DeFi Lending fundamentally inverts the nature of credit - as something supplied by a user, rather than created by a Commercial Bank (granted, a commercial bank could supply the liquidity, which would inadvertently be created by them). Put more simply, lending in crypto is not based on credit creation; instead, crypto lending requires prior tokens and liquidity in order to function.
It inverts traditional lending in two fundamental ways: First, credit can no longer be created by a central intermediary with a mandate from the state to create money. Second, anyone can participate in either facet of the process, and there are no longer gatekeepers capable of barring certain individuals from participating. Taken together, DeFi Lending is the first step in decentralizing and dis-intermediating control over credit supply across the legacy financial system. How this disintermediation couples with composable value creation across dApps, is _the macro question_ for the future of lending in crypto.
The NEAR MainNet is now Unrestricted and Decentralized
COMMUNITY
October 13, 2020
NEAR was instantiated to help builders. This means helping them to create meaningful applications with the power to impact real lives by making those apps secure enough to hold substantial value but as easy to use as anything on today’s web. It also means helping them to get to market faster by making applications as easy to build, test and deploy as anything on today’s web.
Today, the validators who run the NEAR Protocol and the tokenholders who delegated to them voted to advance the network into its final stage of MainNet. This transition to “Phase II” is the most significant milestone in NEAR’s history and the most important one to drive its future.
In this post, you will learn what this means, what the path forward looks like and how you can get involved.
What this Means
The transition to Phase II occurred because NEAR Protocol’s validators indicated via an on-chain voting mechanism that they believe the network is sufficiently secure, reliable and decentralized to remove transfer restrictions and officially allow it to operate at full functionality. Accounts that are subject to lockup contracts will now begin their unlocking timeline.
The shift to Phase II means 3 important things:
Permissionlessness: With the removal of transfer restrictions, it is now possible for anyone to transfer NEAR tokens and participate in the network. Specifically, it is now possible for anyone to send or receive tokens, to create accounts, to participate in validation, to launch applications or to otherwise use the network… all without asking for anyone’s permission. This means individuals, exchanges, defi contracts or anyone else can utilize the NEAR token and the NEAR network in an unrestricted fashion.
Voting: The community-operated NEAR Protocol successfully performed its first on-chain vote, indicating that community governance is operational and effective. The exact mechanism will change going forward but this is a substantial endorsement of the enthusiasm of the community to participate.
Decentralization: The dozens of validators who participated in the vote were backed by over 120 million tokens of delegated stake from over a thousand individual tokenholders and this vote indicates that they believe the network is sufficiently secure and decentralized to operate freely.
In essence, NEAR is now fully ready to build on, ready to use, and ready to grow to its full potential. While Bitcoin brought us Open Money and Ethereum evolved that into the beginnings of Open Finance, we finally have access to a platform with the potential to bridge the gap to a truly Open Web.
Day Zero: What Comes Next
Amazon founder Jeff Bezos likes to tout his “day 1 philosophy”, in which the company is meant to operate as if it’s only the first day along the journey of serving customers. We can correct their off-by-1 error by acknowledging that NEAR is now in Day 0… that the network is finally and fully out in the world but there is a lot of room to grow from here.
In the short term, the next step for the network is for validators to implement a system upgrade which will enable protocol inflation and allow stakers to receive rewards for their activities to help secure the network. The code for this upgrade is expected to be released on Monday October 19 in order to give the community plenty of time to finish claiming tokens and set up their delegation to stakers. After that, it usually takes anywhere from 1-2 days for the normal upgrade process to be adopted among validators. Once this happens, the network’s supply will inflate at 5% annually, an increase which is offset (to a varying degree) by the burning of transaction fees.
In the medium term, there are a number of key research and development areas that the team is excited to explore, while acknowledging that the ultimate operation and growth of the network is no longer entirely in any one group’s hands.
Sharding: NEAR currently operates with a single shard because that provides more than enough capacity to serve a very high degree of load. Transitioning to multiple shards is unlikely to be required anytime soon, so there is an opportunity to improve the technology. The current sharding spec is implemented in a 4 shard testnet and a variation of it runs on an independent network which is run by a Guild. While this is a good start, the NEAR team continues to improve the spec so the next version of this is expected to be released by the end of the year and implemented in the first half of next year.
Ethereum Bridge: The Rainbow Bridge, which will likely be the first fully permissionless bridge between Ethereum and a performant Layer 1 protocol, is operating across TestNets while work continues to improve reliability and usability for MainNet operation. It is likely that this will be able to roll out on MainNet before the end of the year and that this will occur in several stages that follow progressive decentralization.
EVM: Developers can already run EVM code using a smart contract which has been deployed since February but it is currently gas intensive and clunky. One effort over the next few months is targeted at implementing precompiled EVM, which would allow Ethereum developers to easily drop their existing contracts onto NEAR and offer significant performance at the same time.
Tooling: The tooling which supports building on and using the platform still has a wide range of improvements to come. Wallets like the NEAR wallet will continue to improve their delegation, staking, voting and other capabilities which support full network functions for everyday people. For developers, improvements in everything from examples to indexers will help smooth out the experience of developing on this platform even more.
If you’re interested in actively following or participating in research initiatives, join the public research calls, held weekly. You can see this and other engineering-related calls posted on the Events Calendar.
NEAR’s next steps are not just about the network — this is a global decentralized project and the growth of that ecosystem is the real story from this point forward. This means:
Empowering Community Leaders: The Guild Program is one of the most visible steps to empower people who want to build communities around NEAR with resources and support but it’s not the only one. We’re all excited to see what the community comes up with over the coming months and thrilled to see how groups are beginning to support each other.
Funding: The NEAR Foundation has committed to providing funding for the community by supporting a $1.5M+ community fund (with additional early funding from Coinlist), a grants program for infrastructure and a series of investment grants in projects building atop decentralized technologies. Expect all of these to roll out through the end of the year.
Governance: No decentralized network has reached maturity without navigating a few storms. We’ve all been impressed with how the community has stepped forward to take charge during the last few weeks and how communication and coordination have organically evolved to serve the needs of the network. As things progress further, we’ll do everything we can to make sure all stakeholders are heard and that the future path of the network is directed by a balanced group of participants. The validator-led voting process was important for Phase II but future governance will allow the community a more direct voice and that’ll be important whenever the first (inevitable) community crisis emerges.
What You Should Do Now
NEAR is “open for business” but, like Ethereum before it, the platform is just the substrate onto which you can apply your creativity. It’s secure storage, rapid compute, smooth payment rails and composable components that add up to unstoppable applications… but what applications get created is entirely up to you.
NEAR has deliberately focused on building a platform that can support the entire range of decentralized use cases, whether that’s providing stablecoins to people across the globe, building decentralized financial tooling on chain, creating marketplaces to improve gaming experiences, tokenizing investment assets or more because each of these use cases benefits from the tooling, liquidity and components produced by the others.
This breadth of possibilities gives NEAR access to the largest possible opportunity set and there’s something for everyone in the ecosystem to do now:
Developers: This is all for you! The NEAR platform still needs polishing in many places (which you can help with…) but it’s production ready and you can build production ready apps. If you’re just getting started with decentralized apps, check out the New to NEAR? docs. And join https://near.chat to get in the conversation.
Tokenholders: You can help the network substantially in a number of different ways. If you like to test-drive new technology, look for new applications built on NEAR where you can play with them and maybe spot a breakout idea. This could, for example, be a game for fun or a decentralized finance app which helps solve real problems with liquidity or risk management. Or, if you aren’t sure, you can delegate to a validating pool and let your tokens work for you by participating in securing the network and earning rewards for doing so.
Entrepreneurs: Whether you’re at the first step of your journey or are looking back at a well-worn path, the Open Web Collective can help you make sure you’re leveraging not just the best of decentralized technology but also the best support for building and fundraising along the way.
Validators: NEAR allows a unique level of innovation for validators. Whereas you previously could only compete based on how much you charge delegators for your services, NEAR’s flexible delegation contracts allow a wide range of experimentation. If you want to stay on top of what’s happening with validators, I recommend checking out the NEAR Validator Advisory Board (NVAB) and following their meetings to stay in the loop. If you’re already a validator, starting a Guild or building your community presence in other ways (by giving back to the community!) will be crucial for attracting delegation. Check out the Guilds for a path and help doing so.
Designers, product people, marketers, academics, sales people, investors, accountants and, yes, lawyers: This ecosystem needs your help! Ethereum’s early years inspired a generation of people to try their hand at building successful businesses using decentralized tools but the technology was too early and the ecosystem never fully developed. For everyone who, like us, believes that the technology is finally at a place where we can cross the usability gap and build apps real people will use, please consider joining or starting a Guild so you can get in on the ground floor of the next wave forward.
Finally, what you see now is just the beginning. The pace at which UX is improving across developer tooling, platform components, apps and everything in between is astonishing. Blink and you’ll miss it — so I recommend you stay in touch via our newsletter and check back often.
Day 0 is just the beginning. Let’s build great things together 🚀
Special Thanks
It’s hard to make something complex. It’s even harder to make something simple that works. Everyone who has been involved in the NEAR project so far is an absolute superhero for putting in sleepless nights, stretching to design the impossible and sacrificing so much to bring this to reality.
Thank you to everyone who has built the core protocol from scratch, pushed 5am commits, answered community questions, driven creative new hackathons, introduced fabulous new teammates, spread the word, spoke on stage, hacked on dev tools, spoke to founders, security reviewed, made very #berry memes, fired up examples, wrangled documentation, shipped update emails, rubber ducked, double (triple) checked it all, drafted press releases, dove into governance, revised doc markups, wrote test coverage, backed this idea before it was fully formed, argued passionately for improvements or on behalf of the community, provided operational oversight, lived in spreadsheets, fleshed out documentation, hopped on late night calls, slacked infinite threads, parsed analytics, spun up (and down) nodes, reviewed pull requests, organized offsites, and ultimately believed in — and materially supported — what we’re working to accomplish with NEAR.
---
NEP: 330
Title: Source Metadata
Author: Ben Kurrek <ben.kurrek@near.org>, Osman Abdelnasir <osman@near.org>, Andrey Gruzdev <@canvi>, Alexey Zenin <@alexthebuildr>
DiscussionsTo: https://github.com/near/NEPs/discussions/329
Status: Approved
Type: Standards Track
Category: Contract
Version: 1.2.0
Created: 27-Feb-2022
Updated: 19-Feb-2023
---
## Summary
The contract source metadata represents a standardized interface designed to facilitate the auditing and inspection of source code associated with a deployed smart contract. Adoption of this standard remains discretionary; however, it is strongly advocated for developers who maintain an open-source approach to their contracts. This initiative promotes greater accountability and transparency within the ecosystem, encouraging best practices in contract development and deployment.
## Motivation
The incorporation of metadata facilitates the discovery and validation of deployed source code, thereby significantly reducing the requisite level of trust during code integration or interaction processes.
The absence of an accepted protocol for identifying the source code or author contact details of a deployed smart contract presents a challenge. Establishing a standardized framework for accessing the source code of any given smart contract would foster a culture of transparency and collaborative engagement.
Moreover, the current landscape does not offer a straightforward mechanism to verify the authenticity of a smart contract's deployed source code against its deployed version. To address this issue, it is imperative that metadata includes specific details that enable contract verification through reproducible builds.
Furthermore, it is desirable for users and dApps to possess the capability to interpret this metadata, thereby identifying executable methods and generating UIs that facilitate such functionalities. This also extends to acquiring comprehensive insights into potential future modifications by the contract or its developers, enhancing overall system transparency and user trust.
The initial discussion can be found [here](https://github.com/near/NEPs/discussions/329).
## Rationale and alternatives
There is a lot of information that can be held about a contract. Ultimately, we wanted to limit it to the fewest fields possible while still achieving our goal. This decision was made to avoid bloating contracts with unnecessary storage and to keep the standard simple and understandable.
## Specification
Successful implementations of this standard will introduce a new (`ContractSourceMetadata`) struct that will hold all the necessary information to be queried for. This struct will be kept on the contract level.
The metadata will include optional fields:
- `version`: a string that references the specific commit ID or a tag of the code currently deployed on-chain. Examples: `"v0.8.1"`, `"a80bc29"`.
- `link`: a URL to the currently deployed code. It must include the version or a tag if using a GitHub or GitLab link. Examples: "https://github.com/near/near-cli-rs/releases/tag/v0.8.1", "https://github.com/near/cargo-near-new-project-template/tree/9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420", or an IPFS CID.
- `standards`: a list of objects (see type definition below) that enumerates the NEPs supported by the contract. If this extension is supported, it is advised to also include NEP-330 version 1.1.0 in the list (`{standard: "nep330", version: "1.1.0"}`).
- `build_info`: a build details object (see type definition below) that contains all the necessary information about how the contract was built, making it possible for others to reproduce the same WASM of this contract.
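As a sketch of how tooling might consume the `standards` field, the helper below checks whether a metadata object declares support for a given standard at or above a minimum version. The function name and the simple semver comparison are illustrative, not part of the standard.

```javascript
// Check whether a ContractSourceMetadata object declares support for a
// given standard at or above a minimum semantic version.
function supportsStandard(metadata, name, minVersion) {
  const entry = (metadata.standards || []).find((s) => s.standard === name);
  if (!entry) return false;
  const toNums = (v) => v.split(".").map(Number);
  const a = toNums(entry.version);
  const b = toNums(minVersion);
  // Compare major, minor, patch in order.
  for (let i = 0; i < 3; i++) {
    if ((a[i] || 0) > (b[i] || 0)) return true;
    if ((a[i] || 0) < (b[i] || 0)) return false;
  }
  return true; // versions are equal
}

const metadata = {
  version: "v0.8.1",
  link: null,
  standards: [
    { standard: "nep330", version: "1.1.0" },
    { standard: "nep141", version: "1.0.0" },
  ],
  build_info: null,
};
console.log(supportsStandard(metadata, "nep141", "1.0.0")); // true
console.log(supportsStandard(metadata, "nep171", "1.0.0")); // false
```

This is the kind of programmatic compatibility check that motivates listing supported standards in the first place.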
```ts
type ContractSourceMetadata = {
version: string|null, // optional, commit hash being used for the currently deployed WASM. If the contract is not open-sourced, this could also be a numbering system for internal organization / tracking such as "1.0.0" and "2.1.0".
link: string|null, // optional, link to open source code such as a Github repository or a CID to somewhere on IPFS, e.g., "https://github.com/near/cargo-near-new-project-template/tree/9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420"
standards: Standard[]|null, // optional, standards and extensions implemented in the currently deployed WASM, e.g., [{standard: "nep330", version: "1.1.0"},{standard: "nep141", version: "1.0.0"}].
build_info: BuildInfo|null, // optional, details that are required for contract WASM reproducibility.
}
type Standard = {
standard: string, // standard name, e.g., "nep141"
version: string, // semantic version number of the Standard, e.g., "1.0.0"
}
type BuildInfo = {
build_environment: string, // reference to a reproducible build environment docker image, e.g., "docker.io/sourcescan/cargo-near@sha256:bf488476d9c4e49e36862bbdef2c595f88d34a295fd551cc65dc291553849471" or something else pointing to the build environment.
source_code_snapshot: string, // reference to the source code snapshot that was used to build the contract, e.g., "git+https://github.com/near/cargo-near-new-project-template.git#9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420" or "ipfs://<ipfs-hash>".
contract_path: string|null, // relative path to contract crate within the source code, e.g., "contracts/contract-one". Often, it is the root of the repository, so can be omitted.
build_command: string[], // the exact command that was used to build the contract, with all the flags, e.g., ["cargo", "near", "build", "--no-abi"].
}
```
In order to view this information, contracts must include a getter which will return the struct.
```ts
function contract_source_metadata(): ContractSourceMetadata {}
```
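Any client can invoke this getter as an ordinary view call. As an illustrative sketch, a tool might build a standard NEAR JSON-RPC `call_function` query like this (the helper name is hypothetical; the request shape follows NEAR's JSON-RPC `query` method):

```javascript
// Build a NEAR JSON-RPC request body that calls the contract_source_metadata
// view function on a given contract. Args are base64-encoded JSON ("{}" here).
function buildMetadataQuery(accountId) {
  return {
    jsonrpc: "2.0",
    id: "dontcare",
    method: "query",
    params: {
      request_type: "call_function",
      finality: "final",
      account_id: accountId,
      method_name: "contract_source_metadata",
      args_base64: Buffer.from("{}").toString("base64"),
    },
  };
}

// The body would then be POSTed to an RPC endpoint, e.g.:
// fetch("https://rpc.mainnet.near.org", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildMetadataQuery("example.near")),
// });
const body = buildMetadataQuery("example.near");
console.log(body.params.args_base64); // "e30="
```

The RPC response carries the function's return value as a byte array, which the client decodes back into the `ContractSourceMetadata` JSON.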
### Ensuring WASM Reproducibility
#### Build Environment Docker Image
When using a Docker image as a reference, it's important to specify the digest of the image to ensure reproducibility, since a tag could be reassigned to a different image.
### Paths Inside Docker Image
During the build, paths from the source of the build as well as the location of the cargo registry could be saved into WASM, which affects reproducibility. Therefore, we need to ensure that everyone uses the same paths inside the Docker image. We propose using the following paths:
- `/home/near/code` - Mounting volume from the host system containing the source code.
- `/home/near/.cargo` - Cargo registry.
#### Cargo.lock
It is important to have `Cargo.lock` inside the source code snapshot to ensure reproducibility. Example: https://github.com/near/core-contracts.
## Reference Implementation
As an example, consider a contract located at the root path of the repository, which was deployed using the `cargo near deploy --no-abi` and environment docker image `sourcescan/cargo-near@sha256:bf488476d9c4e49e36862bbdef2c595f88d34a295fd551cc65dc291553849471`. Its latest commit hash is `9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420`, and its open-source code can be found at `https://github.com/near/cargo-near-new-project-template`. This contract would then include a struct with the following fields:
```ts
const contractSourceMetadata: ContractSourceMetadata = {
version: "1.0.0",
link: "https://github.com/near/cargo-near-new-project-template/tree/9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420",
standards: [
{
standard: "nep330",
version: "1.1.0"
}
],
build_info: {
build_environment: "docker.io/sourcescan/cargo-near@sha256:bf488476d9c4e49e36862bbdef2c595f88d34a295fd551cc65dc291553849471",
source_code_snapshot: "git+https://github.com/near/cargo-near-new-project-template.git#9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420",
contract_path: ".",
build_command: ["cargo", "near", "deploy", "--no-abi"]
}
}
```
Calling the view function `contract_source_metadata`, the contract would return:
```json
{
  "version": "1.0.0",
  "link": "https://github.com/near/cargo-near-new-project-template/tree/9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420",
  "standards": [
    {
      "standard": "nep330",
      "version": "1.1.0"
    }
  ],
  "build_info": {
    "build_environment": "docker.io/sourcescan/cargo-near@sha256:bf488476d9c4e49e36862bbdef2c595f88d34a295fd551cc65dc291553849471",
    "source_code_snapshot": "git+https://github.com/near/cargo-near-new-project-template.git#9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420",
    "contract_path": ".",
    "build_command": ["cargo", "near", "deploy", "--no-abi"]
  }
}
```
This could be used by SourceScan to reproduce the same WASM using the build details and to verify the on-chain WASM code with the reproduced one.
An example implementation can be seen below.
```rust
/// Simple Implementation
#[near_bindgen]
pub struct Contract {
pub contract_metadata: ContractSourceMetadata
}
/// NEP supported by the contract.
pub struct Standard {
pub standard: String,
pub version: String
}
/// BuildInfo structure
pub struct BuildInfo {
pub build_environment: String,
pub source_code_snapshot: String,
pub contract_path: Option<String>,
pub build_command: Vec<String>,
}
/// Contract metadata structure
pub struct ContractSourceMetadata {
pub version: Option<String>,
pub link: Option<String>,
pub standards: Option<Vec<Standard>>,
pub build_info: Option<BuildInfo>,
}
/// Minimum Viable Interface
pub trait ContractSourceMetadataTrait {
fn contract_source_metadata(&self) -> ContractSourceMetadata;
}
/// Implementation of the view function
#[near_bindgen]
impl ContractSourceMetadataTrait for Contract {
fn contract_source_metadata(&self) -> ContractSourceMetadata {
        self.contract_metadata.clone()
}
}
```
## Future possibilities
- By having a standard outlining metadata for an arbitrary contract, any information that pertains to the contract level can be added based on the requests of the developer community.
## Decision Context
### 1.0.0 - Initial Version
The initial version of NEP-330 was approved by @jlogelin on Mar 29, 2022.
### 1.1.0 - Contract Metadata Extension
The extension NEP-351 that added Contract Metadata to this NEP-330 was approved by Contract Standards Working Group members on January 17, 2023 ([meeting recording](https://youtu.be/pBLN9UyE6AA)).
#### Benefits
- Unlocks NEP extensions that otherwise would be hard to integrate into the tooling as it would be guess-based (e.g. see "interface detection" concerns in the Non-transferrable NFT NEP)
- Standardization enables composability as it makes it easier to interact with contracts when you can programmatically check compatibility
- This NEP extension introduces an optional field, so there is no breaking change to the original NEP
#### Concerns
| # | Concern | Resolution | Status |
| - | - | - | - |
| 1 | Integer field as a standard reference is limiting as third-party projects may want to introduce their own standards without pushing it through the NEP process | Author accepted the proposed string-value standard reference (e.g. “nep123” instead of just 123, and allow “xyz001” as previously it was not possible to express it) | Resolved |
| 2 | NEP-330 and NEP-351 should be included in the list of the supported NEPs | There seems to be a general agreement that it is a good default, so NEP was updated | Resolved |
| 3 | JSON Event could be beneficial, so tooling can react to the changes in the supported standards | It is outside the scope of this NEP. Also, list of supported standards only changes with contract re-deployment, so tooling can track DEPLOY_CODE events and check the list of supported standards when new code is deployed | Won’t fix |
### 1.2.0 - Build Details Extension
The NEP extension adds build details to the contract metadata, containing necessary information about how the contract was built. This makes it possible for others to reproduce the same WASM of this contract. The idea first appeared in the [cargo-near SourceScan integration thread](https://github.com/near/cargo-near/issues/131).
#### Benefits
- This NEP extension gives developers the capability to save all the required build details, making it possible to reproduce the same WASM code in the future. This ensures greater consistency in contracts and the ability to verify source code. With the assistance of tools like SourceScan and cargo-near, the development process on NEAR becomes significantly easier
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
NEAR Protocol Partners with Mask Network
COMMUNITY
March 16, 2021
If you’re a developer who wants to see what building on NEAR is all about, check out the NEAR documentation to get started. If you want to be part of the NEAR community, join our Discord or Telegram channels or follow NEAR on Twitter.
NEAR Protocol is happy to announce a strategic partnership with Mask Network. NEAR Protocol and Mask Network will cooperate between their base layer technologies as well as on the application front. The Rainbow Bridge, the NEAR EVM, NEAR Drops, and other key components are expected to be integrated when launched, in order to accelerate widespread adoption and engagement with NEAR Protocol and the Mask Network.
The cooperation will start with a joint bounty, allowing developers from both communities to join forces and explore new opportunities at the intersection of these two protocols. The first bounty will be to facilitate the integration of the NEAR Wallet into the Mask Network, creating a solid foundation for future deployment of more NEAR based applications.
The NEAR Wallet is NEAR Protocol’s non-custodial web wallet, using local storage on the user's computer to store private keys in an open file format. This design is meant to provide more wallet options and makes it easier to expand the NEAR developer community, while also demonstrating the determination of NEAR Protocol and Mask Network to support one another in creating a vibrant decentralized web and finance ecosystem.
The product idea of the Mask Network is a natural fit with the mission of NEAR Protocol. Both want to create a more free and open web, allowing users to control their own funds, data, and identity. This bounty is just the starting point of cooperation between the two projects. We have already planned a series of deeper integration projects and joint ventures that will be released to our communities over time. If you are interested in the Mask Network and NEAR Protocol, come join us!
The details of the Bounty are outlined here.
About NEAR Protocol
NEAR exists to accelerate the world’s transition to open technologies by growing and enabling a community of developers and creators. NEAR is a decentralized application platform that secures high value assets like money and identity with the performance necessary to make them useful for everyday people, putting the power of Open Finance and the Open Web in their hands. NEAR’s unique account model allows developers to build secure apps that consumers can actually use similarly to today’s web apps, something which requires multiple second-layer add-ons on other blockchains.
About Mask Network
Mask Network is the core product of Dimension, which is positioned to become the bridge that connects internet users from Web 2.0 to Web 3.0. The foundational technology of Mask Network is a peer to peer encrypted messaging application, with new functions continuously being created around this foundation. We at Mask Network are strong believers in the ownership economy. People should own what they produce, people should own their data, their attention and the virtual space they choose to contribute to.
Mask Network integrates decentralized social messaging, borderless payment network, and decentralized file storage and sharing to provide a safe and convenient portal for users to jump right into the continent of decentralized finance and then the new world of Web 3.0.
If you’re a developer who wants to see what building on NEAR is all about, check out the NEAR documentation to get started.
NEAR’s Road to Decentralization: A Deep Dive into Aurora
NEAR FOUNDATION
March 7, 2022
One of NEAR Foundation’s core missions has been to create a network that empowers users to take control of their data, their finances, and the tools to govern.
In the web3 world, this idea can be neatly summed up by one word: decentralization. This is a process whereby control and the tools to create and reimagine everything from business to creativity are progressively handed over to the contributors.
The NEAR Foundation has been steadily progressing in this mission, from the growing number of validators to the flourishing DAO community. But NEAR isn’t alone on this journey. It relies on projects in and around the ecosystem to help accelerate this mission. Enter Aurora.
This project, started originally by a core team of NEAR developers, is now helping build bridges to other ecosystems, accelerating adoption and decentralization in equal measure.
A Bridge Between NEAR and Ethereum
Built by NEAR Inc’s core team (now Pagoda), Aurora is an Ethereum Virtual Machine (EVM) smart contract platform that creates a bridge between Ethereum and NEAR. An EVM, for those who don’t know, is best thought of as a decentralized computer that allows anyone to create a smart contract on the Ethereum network.
Although an EVM, Aurora works on top of the NEAR blockchain, giving it all of the benefits of the NEAR blockchain: super fast, incredibly secure, and infinitely scalable transactions. That’s possible thanks to Nightshade, NEAR’s unique sharded protocol design that allows the network to process thousands of transactions per second without skipping a beat.
With Aurora, the idea is that anyone building Ethereum projects can make use of NEAR’s speed and low fees. The Aurora EVM allows developers to make use of NEAR’s blockchain, while the platform uses NEAR’s Rainbow Bridge to transfer assets.
Let’s look at Rainbow Bridge, as this protocol is where things got started.
Aurora and Rainbow Bridge
NEAR’s core developer team wanted to create a smart contract that would perform the function of allowing tokens to flow freely between Ethereum, NEAR, and other projects. The team’s vision: create a tool that allowed assets to exist on both NEAR and Ethereum, creating a multi-chain Web3 user experience.
To do this, the team would need to create a bridge that would be completely decentralized: anyone could use it, anywhere, and at any time—without permission. The developers were able to build the bridge and now, if one wants to move an Ethereum-based token—say, a stablecoin like DAI—and use it on NEAR’s network, they can do so via the Rainbow Bridge.
Some of NEAR’s core team that had helped create Rainbow Bridge split off to continue their work. The result: Aurora—a project that now offers the bridge and a whole host of other features designed to create a global network of open-source Web3 projects.
Aurora partnerships
Other projects working to improve cross-chain accessibility have since taken notice of Aurora’s work and become partners. One example is Allbridge, an application that unites scattered blockchains by means of global interoperability across all networks. Since Allbridge partnered with Aurora, it has launched a bridge between Aurora and Terra, an open-source stablecoin network and one of the biggest cryptocurrencies by market cap.
Aurora is also playing a big part in decentralized finance (DeFi) growth in the NEAR ecosystem. DeFi refers to the peer-to-peer financial services built on public blockchains, and is one of the most active crypto sectors on NEAR. The NEAR community has been working tirelessly to make it easier for DeFi apps and tools to leverage the protocol’s developer-friendly advantages.
One way that Aurora has made this process much smoother and more decentralized is through its extremely low fees (so low, they’re negligible), which it achieves by using the NEAR network. A constant gripe from users about Ethereum-based DeFi products is that the fees are often too high, creating a barrier to entry for developers and end users, and creating a hurdle to Web3’s mainstream adoption.
Aurora adoption is growing so much that the team released Aurorascan, its very own version of Etherscan, the most popular Ethereum block explorer and analytics platform. Aurorascan has Etherscan’s full feature set and reliability, while giving developers the tools and data to see how Aurora’s EVM functions.
But Aurora goes beyond DeFi. The platform is also integral to the NEAR community’s NFT projects, which have exploded in popularity since 2020. When the Aurora team spoke at this year’s ETHDenver conference in February, they were joined by members of Endemic, Chronicle, and TENKBay—all NFT platforms that have launched on the Aurora EVM and NEAR Protocol.
Future Aurora developments
So what does Aurora have up its sleeve for the future? Well, Aurorascan is still in beta mode, so more features will be added to that platform. And the team is also working to build new bridges, which will be announced in the coming months.
There are also plans to allow NFTs to be able to hop between Ethereum and NEAR. You can keep up to date on all things Aurora on its Medium page.
All this and more will help those wanting to take advantage of NEAR’s super fast and extremely cost-effective network, and help the dream of a truly decentralized world become a reality.
---
id: use-cases
title: Use cases for Chain Signatures
sidebar_label: Use cases
---
Chain signatures enable you to implement multichain and cross-chain workflows in a simple way.
Take a look at a few possible use cases:
---
## Trade Blockchain assets without transactions
Trading assets across different blockchains usually requires a bridge that supports them, which brings longer settlement times: the trades are not atomic and require confirmation on both blockchains.
Using Chain signatures, you can trade assets across chains by simply swapping the ownership of NEAR accounts that control funds on different blockchains. For example, you could trade a NEAR account that controls a Bitcoin account with `X BTC` for another NEAR account that controls an Ethereum account with `Y ETH`.
This way, you can keep native tokens on their native blockchain (e.g., `BTC` on Bitcoin, `ETH` on Ethereum, `ARB` on Arbitrum) and trade them without bridges.
As an added bonus, trades are atomic across chains, settlement takes just two seconds, and any token on any chain is supported.
:::tip Keep in mind
There are still transactions happening on different blockchains.
The difference is that a [Multi-Party Computation service](../chain-signatures.md#multi-party-computation-service) (MPC) signs a transaction for you, and that transaction is then broadcast to another blockchain RPC node or API.
:::
For example, a basic trade flow could be:
1. Users create an account controlled by NEAR chain signatures
2. Users fund these accounts on the native blockchains (depositing)
3. A user places an order by funding a new account with the total amount of the order
4. Another user accepts the order
5. Users swap control of the keys to fulfill the order
![docs](/docs/native-cross-chain.png)
<details>
- User A has `ETH` on the Ethereum blockchain, and wants to buy native Bitcoin
- User B wants to sell Bitcoin for Ethereum
**Steps**
1. User B, using NEAR, creates and funds a new account on Bitcoin with 1 `BTC`
2. User B, using the spot marketplace smart contract, signs a transaction to create a limit order. This transfers control of the Bitcoin account to the smart contract
3. User A creates a batch transaction with two steps
- Creating and funding a new Ethereum account with 10 `ETH`
- Accepting the order and atomically swapping control of the accounts
4. User A takes ownership of the Bitcoin account with 1 `BTC`, and User B takes ownership of the Ethereum account with 10 `ETH`
5. User A and B can _"withdraw"_ their asset from the order by transferring the assets to their respective _"main"_ accounts
</details>
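The escrow-and-swap logic above can be sketched as a plain JavaScript model. This is an illustrative assumption, not the actual marketplace contract interface: no NEAR or Bitcoin APIs are involved, and the account names and `OrderBook` shape are invented for the sketch.

```javascript
// Plain-JS model of the order flow above. Names and shapes are illustrative
// assumptions, not a real marketplace contract's interface.
class OrderBook {
  constructor() {
    this.orders = new Map();
    this.nextId = 1;
  }

  // Step 2: the seller escrows control of a funded account.
  createOrder(sellAccount, wantAmount, wantAsset) {
    const id = this.nextId++;
    this.orders.set(id, { sellAccount, wantAmount, wantAsset, filled: false });
    return id;
  }

  // Steps 3-5: the buyer accepts, and control of the two accounts swaps.
  acceptOrder(id, buyAccount) {
    const order = this.orders.get(id);
    if (!order || order.filled) throw new Error("order unavailable");
    order.filled = true;
    // The seller receives the buyer's escrow account, and vice versa.
    return { toSeller: buyAccount, toBuyer: order.sellAccount };
  }
}
```

In the real flow, the accept step is a single NEAR transaction, which is what makes the swap atomic.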
---
## OAuth-controlled Blockchain accounts
On-boarding is a huge problem for decentralized applications. If you want widespread adoption you can't expect people to keep seed phrases safe in order to use an application.
An attractive way of managing Web3 accounts is to use existing Web2 accounts to on-board users. This can be done in the following way:
1. Deploy a NEAR contract that allows the bearer of a user's [JWT token](https://jwt.io/) to sign a blockchain transaction (Ethereum, Polygon, Avalanche, and others)
2. The user validates their identity with a third party, receiving a JWT token
3. The user holding that token can interact with blockchain applications on Ethereum, Polygon, and other chains via the NEAR contract for the duration of its validity
Any method of controlling a NEAR account can also be used to control a cross-chain account.
:::info About JWT tokens
JSON Web Tokens are a standard (RFC 7519) method for representing claims securely between two parties. They are used in this example to represent the claim that someone is the owner of an OAuth account.
:::
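As an illustration of the claim check such a contract might perform, here is a minimal Node.js sketch that decodes a JWT payload and gates a hypothetical signing permission on the `sub` and `exp` claims. The function names are assumptions, and the sketch deliberately skips the essential step of verifying the token's cryptographic signature against the issuer's public key, which a real contract must not skip.

```javascript
// Decode a JWT's payload (the middle, base64url-encoded segment).
// WARNING (simplification): this sketch does NOT verify the token's
// signature against the issuer's key, which a real contract must do.
function decodeJwtPayload(token) {
  const payloadB64 = token.split(".")[1];
  return JSON.parse(Buffer.from(payloadB64, "base64url").toString("utf8"));
}

// Hypothetical gate: only the owner named in the token may request a
// signature, and only while the token is unexpired.
function maySign(token, expectedSubject, nowSeconds) {
  const claims = decodeJwtPayload(token);
  return claims.sub === expectedSubject && claims.exp > nowSeconds;
}
```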
---
## Cross-chain Zero-friction onboarding
Using unique features of the NEAR account model, [Keypom](https://docs.keypom.xyz/) provides zero-friction onboarding and transactions on NEAR. They are generally used for NFT drops, FT drops, and ticketing.
A generic Keypom user-flow could be:
1. The developer creates a restricted NEAR account
2. The account is funded with `NEAR`
3. The user receives a key with limited control of the account
4. The user uses the funded account to call controlled endpoints on NEAR
5. The user returns the remaining funds to the developer and their account is unlocked
:::tip
This allows easy on-boarding to decentralized apps. The accounts are initially restricted to prevent the user from simply withdrawing the `NEAR` from the account.
:::
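The restricted-account flow above can be modeled as a small JavaScript sketch. The class, method names, and numbers are assumptions for illustration only (real Keypom keys are NEAR function-call access keys with an allowance, not a JS object):

```javascript
// Toy model of a restricted account: a key with a spending allowance and a
// whitelist of callable methods. Numbers and names are illustrative.
class RestrictedAccount {
  constructor(allowance, allowedMethods, developer) {
    this.allowance = allowance;               // remaining budget for the user
    this.allowedMethods = new Set(allowedMethods);
    this.developer = developer;
    this.unlocked = false;
  }

  // Step 4: the user may only call whitelisted methods, within the allowance.
  call(method, cost) {
    if (this.unlocked) throw new Error("key already returned");
    if (!this.allowedMethods.has(method)) throw new Error("method not allowed");
    if (cost > this.allowance) throw new Error("allowance exceeded");
    this.allowance -= cost;
    return true;
  }

  // Step 5: remaining funds return to the developer and the account unlocks.
  returnFunds() {
    const refund = this.allowance;
    this.allowance = 0;
    this.unlocked = true;
    return { to: this.developer, amount: refund };
  }
}
```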
## DeFi on Bitcoin (and other non-smart contract chains)
Using chain signatures, smart contracts on NEAR can control externally-owned accounts on non-smart contract chains like Bitcoin, Dogecoin, XRP Ledger, Bittensor, Cosmos Hub, etc. This enables developers to use NEAR as a smart contract “layer” for chains that do not support this functionality natively.
For example, a developer can build a decentralized exchange for Bitcoin Ordinals, using a smart contract on NEAR to manage deposits (into Bitcoin addresses controlled by the contract) and to verify and execute swaps when two users agree to trade BTC for an Ordinal or BRC20 token.
Example:
1. Seller generates a deposit address on Bitcoin that is controlled by the marketplace smart contract on NEAR via chain signatures
2. Seller deposits a Bitcoin Ordinal to the deposit address
3. The Ordinal is listed for sale with a price and a pre-commitment signature from the seller
4. Buyer accepts the order, deposits USDC
5. The control of the Bitcoin Ordinal address is given to the buyer, USDC on NEAR is transferred to the seller
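That sale flow can be sketched as a toy state machine in JavaScript. Names like `ordinalAddress` and the listing shape are assumptions; no real Bitcoin or NEAR calls are made, and the pre-commitment signature step is omitted.

```javascript
// Toy state machine for the Ordinal sale flow above. Purely illustrative:
// no real Bitcoin or NEAR interaction.
class OrdinalMarket {
  constructor() {
    this.listings = new Map();
    this.nextId = 1;
  }

  // Steps 1-3: the seller escrows the deposit address and lists a price.
  list(seller, ordinalAddress, priceUsdc) {
    const id = this.nextId++;
    this.listings.set(id, { seller, ordinalAddress, priceUsdc, sold: false });
    return id;
  }

  // Steps 4-5: the buyer deposits USDC; address control and payment swap.
  buy(id, buyer, depositUsdc) {
    const listing = this.listings.get(id);
    if (!listing || listing.sold) throw new Error("listing unavailable");
    if (depositUsdc < listing.priceUsdc) throw new Error("insufficient deposit");
    listing.sold = true;
    return { addressTo: buyer, usdcTo: listing.seller, amount: listing.priceUsdc };
  }
}
```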
#### Using Chain Signatures
With Chain Signatures you can apply the same zero-friction onboarding flow across many chains, for example on Polygon:
1. The developer creates a restricted NEAR account with a key
2. The account is funded with `NEAR` and `MATIC`
3. The user receives a key with limited control of the account
4. The user uses the funded account to sign payloads calling controlled endpoints on Polygon
5. The user returns the remaining funds to the developer and their account is unlocked
This allows developers to pay for users to use arbitrary contracts on arbitrary chains.
---
## Decentralized Clients
A big problem in decentralized applications is that while the smart contracts are tamper-proof, the clients that access them generally are not. Whoever serves the frontend assets has practically complete control over any user account that uses them. This has security, trust, and regulatory implications.
When smart contracts can sign payloads you can start using [signed exchanges](https://wicg.github.io/webpackage/draft-yasskin-http-origin-signed-responses.html#name-introduction) (or polyfills) to require HTTP exchanges to be signed by a certain key. If an exchange is not signed with this key, the SSL certificate is considered invalid. This means that individual users cannot be served invalid frontends without it being generally observable and non-repudiable.
---
## Communication with private NEAR Shards
Companies like [Calimero](https://www.calimero.network/) offer private NEAR shards. Currently, sending messages to and from these NEAR shards is troublesome. If each shard had the ability to sign their message queues, they could be securely sent from one shard to another. Thus you could communicate bidirectionally with any shard as easily as you can with a contract on your own shard.
:::tip
This could also simplify NEAR's sharding model, by treating each NEAR shard like one would a private shard.
:::
---
description: A step-by-step guide to creating your NEAR Wallet
title: Creating a NEAR Wallet
sidebar_position: 4
---
# Creating a NEAR Wallet
---
The first step in your NEAR journey is to create a NEAR Wallet. It is an empowering experience to remove the middleman from your financial transactions. You will be able to create your own account, send and receive tokens, and even create your own smart contracts.
[MyNEARWallet](https://app.mynearwallet.com/) is a non-custodial, web-based crypto wallet for the NEAR blockchain. This means that you are in control of your account and your private keys. Whoever has access to your private keys has access to your account. This is why it is important to keep your private keys safe.
To get started, go to [https://app.mynearwallet.com/](https://app.mynearwallet.com/) and click “Create Account”. The steps provided below are the same as the ones you will see in the NEAR Wallet.
### Creating an Account
First, choose your human-readable account ID. Each account created in the NEAR Wallet is appended by `.near`. For example, choosing `satoshi` will create the `satoshi.near` account name. This is similar to how Google accounts end with `@gmail.com`. If you have used other blockchains like Ethereum this may seem different from the alphanumeric account IDs you are used to, but named accounts are standard on NEAR.
:::note
If you want a deeper dive on this subject, check out the [Account Model](https://docs.near.org/concepts/basics/accounts/model) section of the NEAR docs.
:::
![Wallet](@site/static/img/wallet1.png)
### Choosing the Recovery Method
Next, you will choose your account recovery method. The NEAR Wallet is entirely non-custodial, which means account access is your responsibility. We currently offer three methods of account access. After creating your account, you can choose to enable a combination of methods.
![Wallet](@site/static/img/wallet2.png)
#### Ledger Hardware Wallet
If you have a hardware wallet like Ledger Nano S or X, we highly recommend using it. This ensures your private keys never leave your Ledger, which provides the highest level of security when using the NEAR Wallet. Check out [Ledger](https://www.ledger.com/) for more information.
![Wallet](@site/static/img/wallet3.png)
#### Recovery (Seed) Phrase
A seed phrase is the most common recovery method for blockchain accounts. It is a list of words in a particular order that is unique to your account. Write down the twelve-word recovery phrase when prompted, and keep it safe! If you do not have a Ledger, this is the next best option.
:::important
The seed phrase is only as secure as your storage of it! If you want to retain access to your account, you MUST write down your seed words in the correct order, and store the phrase securely. If you lose your seed phrase, you lose access to your account forever.
:::
![Wallet](@site/static/img/wallet4.png)
#### Email
This is the least secure option. We only recommend this option for smaller amounts, as the security depends on your email provider.
We will send you a one-time email with a verification code and recovery link. We do not store this information, and thus you MUST keep this message safe in order to recover your account.
:::important
We send the email recovery message only once, so make sure to have a copy of the recovery link in a safe place. If you delete the recovery email or you lose access to the email account, you will not be able to recover your wallet.
:::
### Funding Your Account
Next, you will need to fund your account with NEAR tokens.
To do anything with the NEAR Wallet, you will need at least 0.1 NEAR. There are several ways to obtain NEAR, including from popular exchanges like [Binance](https://www.binance.com/en) and [Huobi](https://huobi.com/en-us/).
NEAR Wallet provides a simplified account creation process for first-time users who provide a valid email address. In this case, you can directly top up your wallet using the account name, without the temporary-address step described below.
If you don’t want to disclose your email address, or you are creating a secondary account, transfer NEAR (minimum 0.1 NEAR) to the 64-character account ID (your temporary / implicit ID) shown when selecting 'Manual Deposit', then return to this page.
![Wallet](@site/static/img/wallet5.png)
Until the funds are received, you will see “Awaiting deposit” in the account status.
Once the funds are ready, click the button “Claim my Account”, and your account is ready to use! You will be redirected to your “Wallet” page and can view your account details.
![Wallet](@site/static/img/wallet6.png)
## Managing Your Account
On the “Account” page, you will see a breakdown of your account balance, available recovery options (not including your seed phrase from the first step), and the option to add a Ledger hardware device or set up Two Factor Authentication.
![Wallet](@site/static/img/wallet7.png)
### Two Factor Authentication
If you do not have a Ledger, we highly recommend enabling Two Factor Authentication.
Two Factor Authentication in the NEAR Wallet deploys a multi-signature contract to your account, and requires all transactions to be confirmed by a second device.
We currently offer Email based Two Factor Authentication.
:::note
If you set up Ledger, you will not see the 2FA screen. We will eventually offer support for Ledger to be used as one of the authentication methods, but that is currently not possible in the Wallet.
:::
NEAR Sharding Rollout and Aurora IDO Postmortems
DEVELOPERS
December 3, 2021
Between November 8th and November 19th, multiple incidents related to Near Inc RPC services caused severe degradation of user experience. The Near Inc team takes this kind of issue seriously and aims to continually improve its services.
The following postmortems on the NEAR sharding rollout and Aurora IDO summarize the incidents. They provide root cause analyses, mitigations, and the future improvements we are working on to prevent similar incidents from happening again.
Incident 1 – November 10th (12:00 UTC – 15:00 UTC) NEAR Inc RPC Service degraded during Boca Chica Aurora token sale
During the Boca Chica Aurora token sale, community members reported an inability to access the service to buy the Aurora lottery ticket. The cause: too much load on the RPC endpoint. Due to higher than expected demand during the sale, the RPC became overloaded in the EU region.
We’ve increased our RPC capacity by rolling out new RPC nodes across all regions. In particular, we’ve bumped up the number of nodes in the EU region. We have also made, and are continuing to work on, several other improvements to increase RPC throughput and reliability.
Incident 2 – November 16th (19:00 UTC – 19:30 UTC) Near Inc Services down due to global cloud provider outage
All Near Inc services, including RPC, Indexers, Explorer, and Wallet, stopped working to some extent for a short period of time. Issues were reported from across the community, and internally all alerts started to fire.
We rapidly identified the root cause: an outage affecting the load balancing stack hit our cloud provider, which impacted RPC and therefore made all other dependent services unavailable as well. Unfortunately, there was no quick mitigation we could do on our side. We had to wait for the cloud provider to resolve the issue to get our services operational again. The cloud provider was able to quickly detect and restore service in under 20 minutes.
Currently, we use one cloud provider for the services we host. We do this for simplicity of maintenance. Due to the decentralized nature of NEAR, anyone is able to run their own RPC services, providing alternative ways to access the network. Making it easy for users to switch RPC nodes is something that both wallets and developer tooling will be looking into.
To prevent issues like this from happening to Near Inc services, we are also looking into implementing a multi-cloud deployment with client-side fallbacks in case the primary service goes down.
Incident 3 – November 17th (03:00 UTC – 19:00 UTC) Near Inc services degraded, including Skyward during the Aurora IDO, Wallet, and Explorer
During the sharding rollout on mainnet we encountered two issues caused by state splitting as part of the protocol upgrade: 1) high disk usage affecting the RPC service, and 2) the inability of archival nodes to split state within one epoch.
Let’s discuss them separately.
The first issue was caused by high disk usage (IOPS) during the process of splitting state. Even though the RPC traffic was very low across the network, performance of our RPC nodes drastically deteriorated and we observed that RPC latency in some regions jumped from 1s to 60s.
This was not a capacity problem: each RPC request took longer to respond, so adding more nodes wouldn’t have helped much. Most services were almost unavailable, though client-side retries made them usable with high response times. Most affected users were in Europe and Asia.
The second issue was caused by archival nodes being unable to finish state splitting within one epoch as expected and, therefore, they got stuck when the new epoch arrived. This issue arose unexpectedly, as we ran simulations beforehand for regular RPC for the sharding upgrade using mainnet data. The edge case we missed was running the simulation on archival nodes, which took significantly longer. The issue had not been previously identified on testnet, as archival data is smaller there.
The failure of archival nodes affected all services depending on them: Indexer, Explorer, Wallet, Aurora, etc. The Infrastructure team was able to rapidly redirect traffic to non-archival nodes as a patch until archival nodes were restored. We waited until archival nodes finished splitting their states, and once the first node synced we generated a backup and started the rest of the archival nodes from there. This failure stemmed from not testing all cases, and we plan to invest more time and effort in making sure future releases go smoothly.
Some Final Thoughts
It’s important to note that, despite all the aforementioned incidents, the network was always functioning as expected and only RPC services were affected, causing dependent services to have issues.
We want to be transparent about all past and future issues we face. We believe the community will understand that “NEAR stands for iteration”, and we are doing our best to prevent such incidents from happening in the future.
---
title: Interviewing Guide
sidebar_label: Interviewing Guide
sidebar_position: 3
---
:::info on this page
* Understanding the MOC model to craft a great interview process
* Interview templates and examples
* Job description template and assessment tool
* Intro to Behavioral Interview Questions
* Interview Etiquette
:::
## Introduction
Finding the right people for your growing team can be a daunting task. What should go into your job description? What should you look for when interviewing? That’s why this guide was created. It will help you answer all your questions, and craft a solid framework for recruiting, interviewing, and hiring strong candidates.
## Hiring starts with an approved MOC*
Hiring right starts with a clear understanding of what success looks like for a position – and the skills and capability needed to be successful in this role. The Mission, Outcomes and Competencies (“MOC”) framework is intended to drive the crafting of a great interview process with clear cues for your interviewers, creating a positive experience for everyone involved and great hiring decisions as a result.
---
## TL;DR – MOC (Mission, Outcomes, and Competencies)
* MISSION: Essentially, why does this role exist and what’s the charter?
* OUTCOMES: What does success look like in the first 12 months?
* COMPETENCIES: What knowledge, skills, and abilities are needed to succeed?
---
### MOC: Template
This template is designed to help kick-start a Job Description (JD) doc by filling in the essence of what we're looking for. It gives internal people and the applicant clarity on why the job is important (Mission), what that person is actually accountable for (Outcomes), and what skillsets are critical for us to interview them for along the way (Competencies).
===
Reporting to: _**Manager Name**_
**Mission**
_What do we need this person to do over the next 12 months (essence of the job in 2-3 plain-English sentences)? This isn't marketing copy; it's just the explanation you'd give over Slack._
**Outcomes**
_What specific outcomes are we looking for this hire to achieve in their first 12-18 months? 5-8 specific and important goals that support the mission, ranked by order of importance; outcomes also influence who needs to report to them to be successful. Go deeper and also address the problem(s) this person will be solving._
1.
2.
**Competencies**
_Flowing out from the mission and outcomes, competencies articulate how we expect this person to operate in achieving the goals. Think about this with two views: 1. How we expect this person to operate in achieving the goals, and 2. What set of experiences/ accomplishments do they have that give them a unique advantage in achieving our outcomes list above? The interview process will be designed so at least 2 people provide signal on each of these._
1. Competency 1:
2. Competency 2:
**Interview Plan**
_Which 3-5 people should be part of the interview process for uncovering these competencies and how should that process work?_
===
---
## Job Description Template
Now that you have an idea of what you expect from the role, and the type of candidate you think would be ideal, you are ready to draft your Job Description. Feel free to use the template below:
:::tip Job Description Template
#### _Your Project or Company is hiring a_
_**Manager Roles ONLY, add:** Lead a team of [insert team details], as well as any freelancers or agency relationships; you will establish the team’s direction, alignment and commitment, grow and coach teammates, drive accountability and set tempo, foster cross-functional collaboration and be a role model to others._
#### _You will_
* _A_
* _B_
* _C_
_You should apply if:_
* _A_
* _B_
* _C_
_**Manager Roles ONLY, add below:**_
* _Experience in establishing alignment and clarity on objectives for the team and individual(s)._
* _Previous success in coaching a team and driving accountability._
* _Demonstrated track record of fostering cross-functional collaboration across a global and remote team._
_Blurb about your company’s vision, and where candidates can go to learn more about the project._
:::
### Here is an example of a Job Description using the above template:
:::tip Job Description Example
## _Ecosystem Analyst_
_NEAR Foundation is hiring an Analytics Manager to join our Ecosystem Success Department. This is a pivotal role in powering data-driven intelligence which will build NEAR's analytics function both internally and externally. You will build out a set of open source dashboards and tools and create actionable narratives from them which you will present across the ecosystem. As the primary builder and maintainer of the NF Analytics presence, you'll also be a leader of (and get support from) the broader NEAR analytics community. Basically, this role helps everyone understand what's happening within the NEAR ecosystem and what they can do to help the cause._
_You will:_
* _Communicate compelling actionable insights from data analysis and present them on a recurring basis both internally and externally to our ecosystem._
* _Develop strategies for effective data analysis and reporting._
* _Build and automate performance dashboards, including leading and maintaining the one at explorer.near.org/stats._
* _Develop SQL queries and data visualizations to fulfill ad-hoc analysis requests and ongoing reporting needs_
* _Execute advanced analytics projects to guide timely decision making and accelerate optimization cycles_
* _Source, coordinate and direct the work of developers as needed to fulfill these tasks_
* _Support and coordinate the NEAR analytics community._
##### _You’ll have:_
* _4+ years of background from consulting, Investment banking, fintech/payments company and/or other high growth organization with operational experience in a quantitative or analytical role._
* _Ability to effectively communicate actionable data-driven insights to all audiences. Ability to demystify measurement and translate data to consumable, relatable stories. Communication -- turning data into digestible, actionable insights about the ecosystem -- is the most important part of this role._
* _Strong SQL, Python or R, and Excel skills; Experience with Looker, or other data visualization tools_
* _Strong analytical skills set: ability to identify and isolate trends, and connect them to customer behavioral patterns, and business impacts._
* _Experience building data pipelines, running SQL queries and updating/building dashboards, etc. Expert SQL proficiency._
* _Self-starter and resourceful mindset with a passion to learn and succeed._
* _Bachelor's degree in Business, Finance, Economics, Engineering, Marketing, or related technical field or equivalent experience._
##### _We value:_
* _ECOSYSTEM-FIRST: always put the health and success of the ecosystem above any individual's interest._
* _OPENNESS: operate transparently and consistently share knowledge to build open communities._
* _PRAGMATISM OVER PERFECTION: find the right solution not the ideal solution and beat dogmatism by openly considering all ideas._
* _MAKE IT FEEL SIMPLE: strive to make the complex feel simple so the technology is accessible to all._
* _GROW CONSTANTLY: learn, improve and fail productively so the project and community are always becoming more effective._
##### _About the NEAR Foundation_
_The NEAR Foundation is a nonprofit, non-beneficiary foundation based in Switzerland which supports community-driven innovation and the Open Web with a specific focus on the NEAR platform. The Foundation distributes grant funding into the NEAR ecosystem, coordinates governance among participants, educates people about the protocol and ensures that relationships among the community members are as strong and sustainable as the apps they build. It seeks to combine the inclusive care of a community-driven NGO with the at-scale effectiveness of a Silicon Valley startup._
##### _This is a full-time, remote position._
:::
### Run JD through Gender Decoder: [https://gender-decoder.katmatfield.com/](https://gender-decoder.katmatfield.com/)
---
## Interview Plan Development:
This section provides a layout for screening and interviewing candidates.
**Pre-screen Stage:**
* Application Review
* Hiring Manager Application Review
* Recruiter Screen
* Hiring Manager Screens
**Interview Loop: 3-4 interviewers**
* Value Alignment Assessment – 45 mins (these are standardized questions)
* Skills Assessment – 60 mins
* Skills Assessment – 60 mins
**Homework Round (this can be before or after the loop)**
* There should be a session to discuss the assessment
**Final Round: (role dependent)**
* Deep dive into the homework & interview
---
## Behavioral - Competency Interviewing
#### **What is a Competency?**
Competencies are a set of demonstrated characteristics and skills that are required to perform critical work functions successfully.
#### **How to use them in interviews**
The [MOC](https://near-foundation.atlassian.net/wiki/spaces/PT/pages/62685185) framework can help outline a role and identify the top competencies needed to perform this role. Then in the interview process, you can ask questions related to the competencies to validate if the candidate's experience matches the position you are looking to fill.
#### **Can this type of interview be used for technical roles?**
Yes, this type of interviewing can be used for all kinds of roles. By pairing technical assessments and case studies with the behavior interviews you will be able to validate the candidate has all the competencies needed for the position.
##### **Example:**
_Software Engineer, Solutions & Enterprise Platforms_
_Experience Needed_
* _Bachelor's Degree in Computer Science or related fields is a must_
* _Experience or ability to work with systems-level languages, Rust is preferred but Go and C would also be eligible;_
* _Good understanding of blockchain protocol_
_Competencies Required_
* _Analytical_
* _Articulate_
* _Problem Solver_
* _Adaptable_
* _Team Player_
#### **Questions you can ask for each competency:**
* **Analytical:**
* Describe a time when you had to interpret technical data to form solid conclusions.
* **Articulate**
* Briefly describe the most complex role you’ve ever had.
* **Problem Solver**
* Tell me about a time when you found a solution to a challenging problem. Why was it a problem and what was your approach to solving it?
* **Adaptable**
* Describe a time when you worked in a rapidly changing environment. How did you handle this?
* **Team Player**
* Give me an example of a time when you took on additional responsibilities to support your team. How did you feel doing this?
#### Take a look at the [Behavioral Interview Questions](support/hr-resources/behavioral-interview-questions.md) article of this wiki to take a deeper dive into this topic.
---
## Interview Etiquette
## Reminders for a great candidate experience :)
A final word on the interview process. Remember, your candidates are evaluating you just as much as you are evaluating them. First impressions go both ways. These bullet points will help you communicate to your candidates that you value their time, put thought into their role, your project, and your vision. Following these simple etiquette tips will help you look professional, and help them feel welcome as they advance through the hiring process you create.
* Set up your interviews as soon as you can once responses to your job posting start arriving in your inbox.
* Be on time for your interview! It’s respectful and shows that you care about them.
* If you need to reschedule your interview, do it at least 24 hours in advance. Don’t reschedule more than once as it can make the candidate feel unimportant.
* Don’t leave candidates waiting alone in a Zoom call - although sometimes emergencies happen, it’s important to send them a message directly in these situations.
* Submit your scorecard to your team within 24 hours of conducting the interview.
---
|
Long Range Attacks, and a new Fork Choice Rule
DEVELOPERS
September 10, 2019
In this blog post we briefly discuss the concept of long range attacks, which is considered to be one of the biggest unsolved problems in Proof-of-Stake protocols. We then cover two solutions presently used in live or proposed protocols, namely weak subjectivity and forward-secure keys. Finally, we present a new fork choice rule that makes long range attacks significantly more expensive. The fork choice rule presented can be combined with either or both existing solutions, and can serve as an extra measure against the long range attacks.
This write-up is part of an ongoing effort to create high quality technical content about blockchain protocols and related topics. See our blog posts on Layer 2 approaches, Randomness, and our sharding paper that includes a great overview of the current state of the art.
Follow us on Twitter or join our discord channel to make sure you don’t miss the updates.
Long Range Attacks
Proof-of-Work with the heaviest chain fork choice rule has a very desirable property that no matter what malicious actors do, unless they control more than 50% of the hash power, they cannot revert a block that was finalized sufficiently long ago.
Proof-of-Stake systems do not have the same property. In particular, after the block producers that created blocks at some point in the past get their staked tokens back, the keys that they used to create blocks no longer have value for them. Thus, an adversary can attempt to buy such keys for a price significantly lower than the amount of tokens that was staked when the key was used to produce blocks. Since, unlike in Proof-of-Work, Proof-of-Stake has no mechanism to force a delay between produced blocks, the adversary can then in minutes create a chain that is longer than the canonical chain, and have such chain chosen by the fork choice rule.
There are two primary ways to get around this problem:
Weak subjectivity. Require that all the nodes in the network periodically check what is the latest produced block, and disallow reorgs that go too far into the past. If the nodes check the chain more frequently than the time it takes to unstake tokens, they will never choose a longer chain produced by an adversary who acquired keys for which the tokens were unstaked.
The weak side of the weak subjectivity approach is that while the existing nodes won’t be fooled by the attacker, all the new nodes that spin up for the first time will have no information to tell which chain was created first, and will choose the longer chain produced by the attacker. To avoid it, they need to somehow learn off-chain about the canonical chain, effectively forcing them to identify someone whom they trust in the network.
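As a rough illustration of this check (names here are hypothetical, not taken from any real client implementation), a node applying weak subjectivity can simply refuse any fork whose fork point is too deep below its current head; if nodes come online more often than the unstaking period, the maximum reorg depth can be chosen so that chains built with already-unstaked keys are always rejected:

```javascript
// Hypothetical weak-subjectivity check: accept a reorg only if the fork
// point is at most `maxReorgDepth` blocks below the current head.
function allowsReorg(headHeight, forkPointHeight, maxReorgDepth) {
  return headHeight - forkPointHeight <= maxReorgDepth;
}
```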
Forward-secure keys. Another approach is to make the block producers destroy the keys that they used to produce blocks immediately or shortly after the blocks were produced. This can be done by either creating a new key pair every time the participant creates blocks, or by using a construction called Forward-secure keys, which allows a secret key to change while the corresponding public key remains constant.
This approach relies on nodes being honest and following protocol strictly. There’s no incentive for them to destroy their keys, since they know in the future someone might attempt to buy them, thus the key has some non-zero value. While it is unlikely that a large percentage of block producers at a certain moment all decide to alter their binaries and remove the logic that wipes the keys, a protocol that relies on the majority of participants being honest has different security guarantees than a protocol that relies on the majority of participants being reasonable. Proof-of-work works for as long as more than half of the participants are reasonable and do not cooperate, and it is desirable to have the fork choice rule and a Sybil resistance mechanism that have the same property.
Proposed Fork-Choice Rule
Consider the figure below. There’s a block B sufficiently far in the past on the canonical chain that the majority of (or all) the block producers who were building the chain back when B was produced have already unstaked their tokens.
The adversary reaches out to all those block producers and acquires the private keys of ~⅔ of them. At that point the keys have no value to the block producers, so the adversary can purchase them for a price significantly lower than the actual amount that was staked.
The adversary then uses the keys to build a longer chain. Since no scarce resource is used in a Proof-of-Stake system, the adversary can do it very quickly.
We want a fork choice rule such that no matter how long the chain the adversary has built is, and how many of the corrupted validators signed on it, no client, including the clients that start for the first time, will see the chain built by the adversary as the canonical.
The proposed fork choice rule is the following: start from the genesis, and at each block greedily choose one of its children as the next block by doing the following:
Snapshot the state as of after the current block is applied.
Enumerate all the accounts that have tokens at that moment, as a list of pairs (public key, amount).
For each child block, identify all the public keys from the set that signed at least one transaction in the subtree of that child block.
Choose the child block for which the score, computed as the sum of the amounts on the accounts that signed at least one transaction in its subtree, is the largest.
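The greedy selection above can be sketched as follows. This is a hypothetical illustration, not production code: `snapshotBalances(block)` and `signersInSubtree(block)` are assumed helpers, the first returning a Map of account public key to token amount as of after `block` is applied, the second returning the public keys that signed at least one transaction anywhere in `block`'s subtree.

```javascript
// Hypothetical sketch of the proposed fork choice rule.
function forkChoice(genesis, snapshotBalances, signersInSubtree) {
  let current = genesis;
  while (current.children.length > 0) {
    // Step 1: snapshot the state as of after the current block is applied.
    const balances = snapshotBalances(current);
    let best = null;
    let bestScore = -1n;
    for (const child of current.children) {
      // Steps 2-3: score = sum of snapshot balances of accounts that
      // signed at least one transaction in this child's subtree.
      let score = 0n;
      for (const key of signersInSubtree(child)) {
        score += balances.get(key) ?? 0n;
      }
      // Step 4: greedily pick the child with the largest score.
      if (score > bestScore) {
        bestScore = score;
        best = child;
      }
    }
    current = best;
  }
  return current;
}
```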
In the example above if the current block we consider is B, its child blocks are C1 and C2. The subtree of C1 is all the blocks ever built on the canonical chain, while the subtree of C2 is all the blocks built by the adversary.
The score of the canonical chain will be the sum of all the amounts on the accounts in the snapshot that issued at least one transaction throughout the history of the canonical chain starting from B, including all the money transfers and staking transactions.
The score of the adversarial chain will be the sum of all the amounts on the accounts that made a transaction on the adversarial chain. With a proper replay protection it means that all such transactions will have to be issued by the adversary.
Say the total amount on all the accounts that exist in the snapshot that made at least one transaction on the canonical chain since B is p. Say the adversary managed to purchase the keys for a subset of such accounts such that the total amount on them adds up to q, q < p. Since for the adversary to have a heavier chain they need the total amount of accounts that issued transactions to be more than p, they need to also get a hold of some keys that correspond to accounts with more than p – q tokens for which no transaction was issued on the canonical chain. But since for those accounts no transaction was issued on the canonical chain, the keys for those accounts cannot be acquired for less than the value that was on such accounts in the snapshot, since those accounts still have all the funds that were in the snapshot as of the latest block on the canonical chain.
Thus, if the adversary can realistically acquire the keys as of block B that correspond to 80% of all the accounts that issued at least one transaction on the canonical chain at a discount, they still need to pay the amount of money that is at least 0.2p.
Note that this fork choice rule only works for long range fork decisions, since for the canonical chain to be resilient to forks, it needs to accumulate a large number of accounts issuing transactions. One way to combine it with more classic fork choice rules would be to first build a subset of the blockchain that only consists of blocks finalized by a BFT finality gadget, run the fork choice rule described above on such a sub-blockchain, and then starting from the last finalized block on the chain chosen by the fork choice rule use a more appropriate fork choice rule, such as LMD GHOST.
Outro
The fork choice rule described above provides an extra level of protection against long range attacks.
This particular fork choice rule, while it has interesting properties, needs to be researched further before becoming usable in practice. For instance, without further analysis it is unclear whether it provides economic finality.
We are actively researching approaches to defend against long range attacks, and will likely publish more materials soon, so stay tuned.
In our continuous effort to create high quality technical content, we run a video series in which we talk to the founders and core developers of other protocols, we have episodes with Ethereum Serenity, Cosmos, Polkadot, Ontology, QuarkChain and many others. All the episodes are conveniently assembled into a playlist here.
If you are interested in learning more about the technology behind NEAR Protocol, make sure to check out our sharding paper and our randomness beacon. NEAR is one of the very few protocols that addresses the state validity and data availability problems in sharding, and the sharding paper contains a great overview of sharding in general and its challenges, besides presenting our approach.
While scalability is a big concern for blockchains today, a bigger concern is usability. We invest a lot of effort into building the most usable blockchain. We published an overview of the challenges that the blockchain protocols face today with usability, and how they can be addressed here.
Follow us on Twitter and join our Discord chat where we discuss all the topics related to tech, economics, governance and other aspects of building a protocol.
|
---
id: coin-flip
title: Coin Flip
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import {CodeTabs, Language, Github} from "@site/src/components/codetabs"
Coin Flip is a game where the player tries to guess the outcome of a coin flip. It is one of the simplest contracts implementing random numbers.
![img](/docs/assets/examples/coin-flip.png)
---
## Starting the Game
You have two options to start the example:
1. **Recommended:** use the app through Gitpod (a web-based interactive environment)
2. Clone the project locally.
| Gitpod | Clone locally |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
| <a href="https://gitpod.io/#https://github.com/near-examples/coin-flip-examples.git"><img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" /></a> | `https://github.com/near-examples/coin-flip-examples.git` |
If you choose Gitpod, a new browser window will open automatically with the code. Give it a minute, and the front-end will pop up (ensure the pop-up window is not blocked).
If you are running the app locally, you should build and deploy a contract (JavaScript or Rust version) and a client manually.
---
## Interacting With the Game
Go ahead and log in with your NEAR account. If you don't have one, you can create one on the fly. Once logged in, use the `tails` and `heads` buttons to try to guess the next coin flip outcome.
![img](/docs/assets/examples/coin-flip.png)
*Frontend of the Game*
---
## Structure of a dApp
Now that you understand what the dApp does, let us take a closer look at its structure:
1. The frontend code lives in the `/frontend` folder.
2. The smart contract code in Rust is in the `/contract-rs` folder.
3. The smart contract code in JavaScript is in the `/contract-ts` folder.
:::note
Both Rust and JavaScript versions of the contract implement the same functionality.
:::
### Contract
The contract presents 2 methods: `flip_coin`, and `points_of`.
<CodeTabs>
<Language value="js" language="ts">
<Github fname="contract.ts"
url="https://github.com/near-examples/coin-flip-examples/blob/main/contract-ts/src/contract.ts"
start="23" end="56" />
</Language>
<Language value="rust" language="rust">
<Github fname="lib.rs"
url="https://github.com/near-examples/coin-flip-examples/blob/main/contract-rs/src/lib.rs"
start="46" end="70" />
</Language>
</CodeTabs>
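As a simplified, hypothetical sketch of what these two methods do (the actual code lives in the repository linked above and draws randomness from the NEAR runtime; here `randomByte` stands in for the contract's random seed, and a plain `Map` stands in for contract storage):

```javascript
// Even random byte -> "heads", odd -> "tails".
function simulateCoinFlip(randomByte) {
  return randomByte % 2 === 0 ? "heads" : "tails";
}

// A correct guess earns the player a point; a wrong one costs a point,
// floored at zero.
function flipCoin(points, player, playerGuess, randomByte) {
  const outcome = simulateCoinFlip(randomByte);
  const current = points.get(player) ?? 0;
  points.set(player, playerGuess === outcome ? current + 1 : Math.max(current - 1, 0));
  return outcome;
}

function pointsOf(points, player) {
  return points.get(player) ?? 0;
}
```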
### Frontend
The frontend is composed of a single HTML file (`/index.html`). This file defines the components displayed on the screen.
The website's logic lives in `/assets/js/index.js`, which communicates with the contract through a `wallet`. You will notice in `/assets/js/index.js` the following code:
<CodeTabs>
<Language value="js" language="ts">
<Github fname="index.js"
url="https://github.com/near-examples/coin-flip-workshop-js/blob/main/frontend/index.js"
start="10" end="19" />
</Language>
</CodeTabs>
It tells our app, when it starts, to check whether the user is already logged in and execute either `signedInFlow()` or `signedOutFlow()`.
---
## Testing
When writing smart contracts, it is very important to test all methods exhaustively. In this
project you have integration tests. Before digging into them, go ahead and run the tests present in the dApp through the command `yarn test` for the JavaScript version, or `./test.sh` for the Rust version.
### Integration test
Integration tests can be written in both Rust and JavaScript. They automatically deploy a new
contract and execute methods on it. In this way, integration tests simulate interactions
from users in a realistic scenario. You will find the integration tests for the `coin-flip`
in `contract-ts/sandbox-ts` (for the JavaScript contract) and `contract-rs/tests` (for the Rust contract).
<CodeTabs>
<Language value="js" language="ts">
<Github fname="main.test.js"
url="https://github.com/near-examples/coin-flip-examples/blob/main/contract-ts/sandbox-ts/main.ava.ts"
start="30" end="53" />
</Language>
<Language value="rust" language="rust">
<Github fname="lib.rs"
url="https://github.com/near-examples/coin-flip-examples/blob/main/contract-rs/tests/tests.rs"
start="25" end="82" />
</Language>
</CodeTabs>
---
## A Note On Randomness
Randomness in the blockchain is a complex subject. We recommend you read about and investigate it further.
You can start with our [security page on it](../../2.build/2.smart-contracts/security/random.md).
:::note Versioning for this article
At the time of this writing, this example works with the following versions:
- near-cli: `4.0.13`
- node: `18.19.1`
- rustc: `1.77.0`
:::
|
Community Update: April 24th, 2020
COMMUNITY
April 24, 2020
The past two weeks have been quite eventful. We have been busy preparing for Ready Layer One while hosting community events and sharing our engineering updates with the community. To stay up to date with our community events, check out the events calendar.
Open Blockchain Week: Ready Layer One
It has been amazing to see various speakers and projects, from across the ecosystem, come together to work on one shared experience.
Confirmed talks include:
Laura Shin hosting a live podcast with L1 Founders: Zaki Manian (Cosmos), Illia Polosukhin (NEAR), Dr. Gavin Wood (Polkadot), Arthur Breitman (Tezos)
Mike Masnick, author of Protocols, not Platforms: A Technological Approach to Free Speech will discuss a decentralized approach to content moderation
Chris Dixon from a16z Crypto in a fireside chat with Robert Hackett of Fortune Magazine
We still have some tickets available for the NEAR community. The event will take place from the 4th to the 6th of May. Please let us know if you would like to attend and we can send you a ticket.
Featured Speakers RL1 — Head over to the website to see the full list
The Road to MainNet and Beyond
This blog post details the roadmap to a community governed MainNet, divided into stages. The goal for the first stage is to distribute initial tokens to contributors and get the initial set of validators onboard. Developers who are ready to deploy their applications to MainNet will be able to apply to the NEAR Foundation and get an account to deploy their application.
Each stage is identified by the restrictions that it has and each stage has different goals. To learn more about our path to MainNet and the road ahead, please head over to our article.
Uniting the Creators of Tomorrow: The Open Web Collective
This is for the go-getters, the risk-takers, the crypto entrepreneurs moving past the hype of decentralization and building the projects that will bring the vision of the Open Web to reality.
OWC helps projects at all stages, from ideation to growth through access to exclusive curriculum, events, networks, and mentors.
Collaborate, discuss new products and partnerships, build your team, get feedback, or even find a co-founder with OWC.
Ready to accelerate your project and grow your network? Apply now and join the Collective at https://openwebcollective.com/
Thursday Jam — Join us in the next one!
Content
Introduction to NEAR Protocol’s Economics
This article explores the economic principles governing NEAR Protocol, and how they keep aligned the interests of its community.
Private Transactions
We take a deep dive into the internals of how private transactions will be implemented on NEAR so that no information about the participants or amounts is revealed, all without sacrificing the ability to validate correctness.
NEAR Hacker’s Diaries
For those of you who are interested in the technical challenges in NEAR Protocol, we wrote an article explaining the gas metering estimation.
Call for Validators
If you want to join phase one on the road to a community governed MainNet and provide validation services to our community of users and contributors, please apply through the link below. We will contact every applicant, one by one, with further instructions to test your infrastructure on NEAR’s BetaNet and plan accordingly together: https://forms.gle/rQ6YTfvGVkMJLuS97
How You Can Get Involved
Join the NEAR Contributor Program. If you would like to host events, create content or provide developer feedback, we would love to hear from you! To learn more about the program and ways to participate, please head over to the website.
If you’re just getting started, learn more in The Beginner’s Guide to NEAR Protocol or read the official White Paper. Stay up to date with what we’re building by following us on Twitter for updates, joining the conversation on Discord and subscribing to our newsletter to receive updates right to your inbox. |
Arbitrum Integrates NEAR DA for Developers Building Ethereum Rollups
NEAR FOUNDATION
December 21, 2023
The NEAR Data Availability layer (NEAR DA) was one of the most exciting announcements to come out of NEARCON ‘23. Unveiled by NEAR Protocol co-founder and new NEAR Foundation CEO, Illia Polosukhin, NEAR DA is a highly efficient and robust data availability layer, designed to help Ethereum rollup builders simplify their network and lower costs, while ensuring they can scale like the NEAR Protocol.
The latest technical integration for NEAR DA’s efficient and highly scalable data availability is now available for Arbitrum Orbit, the tech stack that allows developers to launch their own configurable rollups based on Arbitrum’s technology.
Arbitrum Orbit is a L2/L3 scaling solution for Ethereum that lets developers build their own dedicated chains with their own configurations. Arbitrum Orbit chains derive trustless security, while scaling Ethereum. With the latest NEAR DA integration, rollup builders could benefit from cheaper data availability costs to significantly reduce their overall rollup overheads.
Develop Ethereum rollups within the Arbitrum Orbit ecosystem enabled by NEAR DA
The NEAR-Arbitrum integration allows devs building their own rollups to be part of Arbitrum Orbit, an ecosystem of blockchains that settle onto Arbitrum or Ethereum Mainnet, while leveraging the cost effectiveness and scalability of the NEAR Protocol.
Arbitrum is a clear innovative leader in developing the Optimistic Rollup technology, while also operating as an L2 with the highest TVL. Arbitrum now offers its own stack to other rollups builders with Arbitrum Orbit, one step closer to decentralizing Ethereum.
Arbitrum Orbit chains leverage the Arbitrum Nitro tech stack, the technology that Arbitrum developed to scale Ethereum. It allows builders to create their own blockchains, which settle transactions on Arbitrum One, Arbitrum Nova, or Ethereum Mainnet if the Arbitrum DAO grants an L2 license.
These Orbit chains, which use Arbitrum’s Rollup and AnyTrust protocols, offer customization across throughput, privacy, gas token, and governance to cater to specific use cases and business requirements. For instance, rollup builders looking for cheaper DA alternatives can now utilize NEAR DA within the Arbitrum Orbit stack. With this, developers can build self-managed, configurable blockchains with enhanced control over its features and governance, while deriving the security guarantees of Ethereum.
NEAR DA paves the way for modular blockchain development
This integration empowers rollup builders on Arbitrum Orbit to use NEAR DA as a complete out-of-the-box modular DA solution.
As of December 2023, 231 kB of calldata on NEAR costs $0.0016, while the same calldata on Ethereum L1 costs users $140.54.
NEAR DA helps developers reduce costs and enhance their rollup’s reliability, while maintaining the security guarantees provided by Ethereum. Another upside to NEAR DA is that high quality projects launching an app-chain or L2 will be able to get out of the box NEAR DA compatibility and support.
“Offering a data availability layer to Ethereum rollups highlights the versatility of NEAR’s tech while also helping founders from across Web3 deliver great products that bring us closer to mainstream adoption of the Open Web,” said Polosukhin, when announcing NEAR DA at NEARCON ‘23.
“NEAR’s L1 has been live with 100% uptime for more than three years, so it can offer true reliability to projects looking for secure DA while also being cost-effective,” Polosukhin added. “NEAR provides great solutions to developers no matter which stack they’re building on and now that includes the Ethereum modular blockchain landscape.”
Interested teams who want to work with NEAR DA are invited to fill out this form, with information about your project and how you would like to integrate with NEAR DA.
|
---
id: resharding-troubleshooting
title: Troubleshooting Resharding
sidebar_label: Resharding
description: Advice to ensure that the node goes through resharding successfully
---
## Resharding Timeline {#timeline}
The [1.37.0 release](https://github.com/near/nearcore/releases/tag/1.37.0) contains a protocol upgrade that splits shard 3 into two shards.
When the network upgrades to protocol version 64, it will have 5 shards defined by these border accounts `vec!["aurora", "aurora-0", "kkuuue2akv_1630967379.near", "tge-lockup.sweat"]`.
Any code that has a hardcoded number of shards or mapping of an account to shard id may break.
If you are not sure whether your tool will work after mainnet updates to protocol version 64, test it on testnet, as testnet is already running with 5 shards.
Resharding will happen in the epoch preceding protocol upgrade.
So, if the voting happens in epoch X, resharding will happen in epoch X+1, and protocol upgrade will happen in epoch X + 2.
Voting for upgrading to protocol version 64 will start on **Monday 2024-03-11 18:00:00 UTC** .
By our estimations, resharding will start on **Tuesday 2024-03-12 07:00:00 UTC**, and first epoch with 5 shards will start on **Tuesday 2024-03-12 23:00:00 UTC**.
Resharding is done as a background process of a regular node run. It takes hours to finish, and it shouldn’t be interrupted.
Failure to reshard will result in the node not being able to sync with the network.
## 1.37.0 release resharding {#1.37.0}
### General recommendations {#general 1.37}
- **Do not restart your node during the resharding epoch.**
It may result in your node not being able to finish resharding.
- **Disable state sync** until your node successfully transitions to the epoch with protocol version 64.
You should disable it before the voting date (**Monday 2024-03-11 18:00:00 UTC**).
It should be safe to enable it on **Thursday 2024-03-14**.
To disable state sync, assign `false` to the `state_sync_enabled` field in config.
- **Make sure that state snapshot compaction is disabled.**
Your node will create a state snapshot for resharding.
State snapshot compaction may lead to stack overflow.
Make sure that fields `store.state_snapshot_config.compaction_enabled` and `store.state_snapshot_compaction_enabled` are set to `false`.
- Ensure that you have an additional 200 GB of free space on your `.near/data` disk.
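Putting the recommendations above together, the relevant fields in your node's `config.json` would look roughly like this (a sketch covering only the fields mentioned above; keep the rest of your config unchanged):

```json
{
  "state_sync_enabled": false,
  "store": {
    "state_snapshot_config": {
      "compaction_enabled": false
    },
    "state_snapshot_compaction_enabled": false
  }
}
```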
### Before resharding {#prepare for 1.37}
#### If your node is out of sync {#out of sync 1.37}
If your node is far behind the network, consider downloading the latest DB snapshot provided by Pagoda from s3
[Node Data Snapshots](/intro/node-data-snapshots).
Your node will likely fail resharding if it is not in sync with the network for the majority of the resharding epoch.
#### If you run legacy archival node {#legacy 1.37}
We don’t expect legacy archival nodes to be able to finish resharding and stay in sync on mainnet.
We highly recommend migrating to split storage archival nodes as soon as possible.
The easiest way is to download DB snapshots provided by Pagoda from s3.
Be aware that the cold db of a mainnet split storage is about 22Tb, and it may take a long time to download it.
You can find instructions on how to migrate to split storage on [Split Storage page](/archival/split-storage-archival).
### During resharding {#run 1.37}
#### Monitoring {#monitoring 1.37}
To monitor resharding you can use metrics `near_resharding_status`, `near_resharding_batch_size`, and `near_resharding_batch_prepare_time_bucket`.
You can read more [on github](https://github.com/near/nearcore/blob/master/docs/architecture/how/resharding.md#monitoring).
If you observe problems with block production or resharding performance, you can adjust resharding throttling configuration.
This does not require a node restart; you can send a signal to the neard process to load the new config.
Read more [on github](https://github.com/near/nearcore/blob/master/docs/architecture/how/resharding.md#monitoring).
### After resharding {#after 1.37}
If your node failed to reshard or is not able to sync with the network after the protocol upgrade, you will need to download the latest DB snapshot provided by Pagoda from s3
[Node Data Snapshots](/intro/node-data-snapshots).
We will try to ensure that these snapshots are uploaded as soon as possible, but you may need to wait several hours for them to be available.
Pagoda s3 DB snapshots have a timestamp of their creation in the file path.
Check that you are downloading a snapshot that was taken after the switch to protocol version 64.
|
---
id: minting-nfts
title: Minting NFTs
sidebar_label: Minting NFTs
---
In this tutorial you'll learn how to easily create your own NFTs without doing any software development by using a readily-available smart contract and a decentralized storage solution like [IPFS](https://ipfs.io/).
## Overview {#overview}
This article will guide you in setting up an [NFT smart contract](#non-fungible-token-contract), and show you [how to build](#building-the-contract), [test](#testing-the-contract) and [deploy](#deploying-the-contract) your NFT contract on NEAR.
Once the contract is deployed, you'll learn [how to mint](#minting-your-nfts) non-fungible tokens from media files [stored on IPFS](#uploading-the-image) and view them in your Wallet.
## Prerequisites {#prerequisites}
To complete this tutorial successfully, you'll need:
- [Rust toolchain](/build/smart-contracts/quickstart#prerequisites)
- [A NEAR account](#wallet)
- [NEAR command-line interface](/tools/near-cli#setup) (`near-cli`)
## Wallet {#wallet}
To store your non-fungible tokens you'll need a [NEAR Wallet](https://testnet.mynearwallet.com/).
If you don't have one yet, you can create one easily by following [these instructions](https://testnet.mynearwallet.com/create).
> **Tip:** for this tutorial we'll use a `testnet` wallet account. The `testnet` network is free and there's no need to deposit funds.
Once you have your Wallet account, you can click on the [Collectibles](https://testnet.mynearwallet.com/?tab=collectibles) tab where all your NFTs will be listed:
![Wallet](/docs/assets/nfts/nft-wallet.png)
<!--
Briefly talks about how the wallet listens for methods that start with `nft_` and then flags the contracts.
-->
## IPFS {#ipfs}
The [InterPlanetary File System](https://ipfs.io/) (IPFS) is a protocol and peer-to-peer network for storing and sharing data in a distributed file system. IPFS uses content-addressing to uniquely identify each file in a global namespace connecting all computing devices.
### Uploading the image {#uploading-the-image}
To upload the NFT image, you should use a [decentralized storage](/concepts/storage/storage-solutions) provider such as IPFS.
:::note
This example uses IPFS, but you could use a different solution like Filecoin, Arweave, or a regular centralized Web2 hosting.
:::
Once you have uploaded your file to IPFS, you'll get a unique `CID` for your content, and a URL like:
```
https://bafyreiabag3ztnhe5pg7js4bj6sxuvkz3sdf76cjvcuqjoidvnfjz7vwrq.ipfs.dweb.link/
```
## Non-fungible Token contract {#non-fungible-token-contract}
[This repository](https://github.com/near-examples/NFT) includes an example implementation of a [non-fungible token] contract which uses [near-contract-standards] and simulation tests.
[non-fungible token]: https://nomicon.io/Standards/NonFungibleToken
[near-contract-standards]: https://github.com/near/near-sdk-rs/tree/master/near-contract-standards
### Clone the NFT repository {#clone-the-nft-repository}
In your terminal run the following command to clone the NFT repo:
```
git clone https://github.com/near-examples/NFT
```
### Explore the smart contract {#explore-the-smart-contract}
The source code for this contract can be found in `nft/src/lib.rs`. This contract follows the [NEP-171 standard][non-fungible token] (NEAR Enhancement Proposal), whose reference implementation can be found [here](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/core/core_impl.rs).
At first, the code can be a bit overwhelming, but if we only consider the aspects involved with minting, we can break it down into 2 main categories - the contract struct and the minting process.
#### Contract Struct {#contract-struct}
The contract keeps track of two pieces of information - `tokens` and `metadata`. For the purpose of this tutorial we will only deal with the `tokens` field.
```rust
#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize, PanicOnDefault)]
pub struct Contract {
tokens: NonFungibleToken,
metadata: LazyOption<NFTContractMetadata>,
}
```
The tokens are of type `NonFungibleToken` which come from the [core standards](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/core/core_impl.rs). There are several fields that make up the struct but for the purpose of this tutorial, we'll only be concerned with the `owner_by_id` field. This keeps track of the owner for any given token.
```rust
pub struct NonFungibleToken {
// owner of contract
pub owner_id: AccountId,
// keeps track of the owner for any given token ID.
pub owner_by_id: TreeMap<TokenId, AccountId>,
...
}
```
Now that we've explored behind the scenes and where the data is being kept, let's move to the minting functionality.
#### Minting {#minting}
To mint a token you will need to call the `nft_mint` function, which takes three arguments:
- `token_id`
- `receiver_id`
- `token_metadata`
This function executes `self.tokens.mint` which calls the mint function in the [core standards](https://github.com/near/near-sdk-rs/blob/master/near-contract-standards/src/non_fungible_token/core/core_impl.rs) creating a record of the token with the owner being `receiver_id`.
```rust
#[payable]
pub fn nft_mint(
&mut self,
token_id: TokenId,
receiver_id: ValidAccountId,
token_metadata: TokenMetadata,
) -> Token {
self.tokens.mint(token_id, receiver_id, Some(token_metadata))
}
```
This creates that record by inserting the token into the `owner_by_id` data structure that we mentioned in the previous section.
```rust
self.owner_by_id.insert(&token_id, &owner_id);
```
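Stripped of SDK and storage concerns, the bookkeeping above can be sketched as a plain-Rust map from token IDs to owners. This is a hypothetical, simplified stand-in for the near-contract-standards implementation, which additionally handles metadata, enumeration, and storage deposits:

```rust
use std::collections::BTreeMap;

// Hypothetical, simplified stand-in for the `owner_by_id` bookkeeping done by
// `NonFungibleToken::mint` in near-contract-standards.
struct OwnerRegistry {
    owner_by_id: BTreeMap<String, String>, // TokenId -> AccountId
}

impl OwnerRegistry {
    fn new() -> Self {
        Self { owner_by_id: BTreeMap::new() }
    }

    // Record a newly minted token, refusing duplicate token IDs.
    fn mint(&mut self, token_id: &str, receiver_id: &str) {
        assert!(
            !self.owner_by_id.contains_key(token_id),
            "token_id must be unique"
        );
        self.owner_by_id
            .insert(token_id.to_string(), receiver_id.to_string());
    }

    fn owner_of(&self, token_id: &str) -> Option<&String> {
        self.owner_by_id.get(token_id)
    }
}

fn main() {
    let mut registry = OwnerRegistry::new();
    registry.mint("0", "alice.testnet");
    assert_eq!(registry.owner_of("0").unwrap(), "alice.testnet");
    println!("owner of token 0: {}", registry.owner_of("0").unwrap());
}
```

The key invariant is the duplicate-ID check: minting the same `token_id` twice would silently overwrite the previous owner, which the real standard prevents by panicking.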
### Building the contract {#building-the-contract}
To build your contract, run the following command in your terminal. This compiles your contract using Rust's `cargo`:
```bash
./scripts/build.sh
```
This will generate WASM binaries into your `res/` directory. This WASM file is the smart contract we'll be deploying onto the NEAR blockchain.
> **Tip:** If you run into errors make sure you have [Rust installed](/build/smart-contracts/quickstart#prerequisites) and are in the root directory of the NFT example.
### Testing the contract {#testing-the-contract}
The smart contract includes pre-written tests. Run the following command in your terminal to execute these simple tests and verify that your contract code is working:
```bash
cargo test -- --nocapture
```
> **Note:** the more complex simulation tests aren't performed with this command but you can find them in `tests/sim`.
## Using the NFT contract {#using-the-nft-contract}
Now that you have successfully built and tested the NFT smart contract, you're ready to [deploy it](#deploying-the-contract) and start using it to [mint your NFTs](#minting-your-nfts).
### Deploying the contract {#deploying-the-contract}
This smart contract will be deployed to your NEAR account. Because NEAR allows contracts on the same account to be upgraded, initialization functions must be cleared.
> **Note:** If you'd like to run this example on a NEAR account that has had prior contracts deployed, please use the `near-cli` command `near delete` and then recreate the account in Wallet. To create (or recreate) an account, please follow the directions in [Test Wallet](https://testnet.mynearwallet.com/) or [NEAR Wallet](https://wallet.near.org/) (if you're using `mainnet`).
Log in to your newly created account with `near-cli` by running the following command in your terminal.
```bash
near login
```
To make this tutorial easier to copy/paste, we're going to set an environment variable for your account ID. In the command below, replace `YOUR_ACCOUNT_NAME` with the account name you just logged in with including the `.testnet` (or `.near` for `mainnet`):
```bash
export ID=YOUR_ACCOUNT_NAME
```
Test that the environment variable is set correctly by running:
```bash
echo $ID
```
Verify that the correct account ID is printed in the terminal. If everything looks correct you can now deploy your contract.
In the root of your NFT project run the following command to deploy your smart contract.
```bash
near deploy --wasmFile res/non_fungible_token.wasm --accountId $ID
```
<details>
<summary>Example response: </summary>
<p>
```bash
Starting deployment. Account id: ex-1.testnet, node: https://rpc.testnet.near.org, file: res/non_fungible_token.wasm
Transaction Id E1AoeTjvuNbDDdNS9SqKfoWiZT95keFrRUmsB65fVZ52
To see the transaction in the transaction explorer, please open this url in your browser
https://testnet.nearblocks.io/txns/E1AoeTjvuNbDDdNS9SqKfoWiZT95keFrRUmsB65fVZ52
Done deploying to ex-1.testnet
```
</p>
</details>
> **Note:** For `mainnet` you will need to prepend your command with `NEAR_ENV=mainnet`. [See here](/tools/near-cli#network-selection) for more information.
### Minting your NFTs {#minting-your-nfts}
A smart contract can define an initialization method that can be used to set the contract's initial state.
In our case, we need to initialize the NFT contract before usage. For now, we'll initialize it with the default metadata.
> **Note:** each account has a data area called `storage`, which is persistent between function calls and transactions.
> For example, when you initialize a contract, the initial state is saved in the persistent storage.
```bash
near call $ID new_default_meta '{"owner_id": "'$ID'"}' --accountId $ID
```
> **Tip:** you can find more info about the NFT metadata at [nomicon.io](https://nomicon.io/Standards/Tokens/NonFungibleToken/Metadata).
You can then view the metadata by running the following `view` call:
```bash
near view $ID nft_metadata
```
<details>
<summary>Example response: </summary>
<p>
```json
{
"spec": "nft-1.0.0",
"name": "Example NEAR non-fungible token",
"symbol": "EXAMPLE",
"icon": "data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 288 288'%3E%3Cg id='l' data-name='l'%3E%3Cpath d='M187.58,79.81l-30.1,44.69a3.2,3.2,0,0,0,4.75,4.2L191.86,103a1.2,1.2,0,0,1,2,.91v80.46a1.2,1.2,0,0,1-2.12.77L102.18,77.93A15.35,15.35,0,0,0,90.47,72.5H87.34A15.34,15.34,0,0,0,72,87.84V201.16A15.34,15.34,0,0,0,87.34,216.5h0a15.35,15.35,0,0,0,13.08-7.31l30.1-44.69a3.2,3.2,0,0,0-4.75-4.2L96.14,186a1.2,1.2,0,0,1-2-.91V104.61a1.2,1.2,0,0,1,2.12-.77l89.55,107.23a15.35,15.35,0,0,0,11.71,5.43h3.13A15.34,15.34,0,0,0,216,201.16V87.84A15.34,15.34,0,0,0,200.66,72.5h0A15.35,15.35,0,0,0,187.58,79.81Z'/%3E%3C/g%3E%3C/svg%3E",
"base_uri": null,
"reference": null,
"reference_hash": null
}
```
</p>
</details>
Now let's mint our first token! The following command will mint one copy of your NFT. Replace the `media` url with the one you [uploaded to IPFS](#uploading-the-image) earlier:
```bash
near call $ID nft_mint '{"token_id": "0", "receiver_id": "'$ID'", "token_metadata": { "title": "Some Art", "description": "My NFT media", "media": "https://bafkreiabag3ztnhe5pg7js4bj6sxuvkz3sdf76cjvcuqjoidvnfjz7vwrq.ipfs.dweb.link/", "copies": 1}}' --accountId $ID --deposit 0.1
```
<details>
<summary>Example response: </summary>
<p>
```json
{
"token_id": "0",
"owner_id": "dev-xxxxxx-xxxxxxx",
"metadata": {
"title": "Some Art",
"description": "My NFT media",
"media": "https://bafkreiabag3ztnhe5pg7js4bj6sxuvkz3sdf76cjvcuqjoidvnfjz7vwrq.ipfs.dweb.link/",
"media_hash": null,
"copies": 1,
"issued_at": null,
"expires_at": null,
"starts_at": null,
"updated_at": null,
"extra": null,
"reference": null,
"reference_hash": null
},
"approved_account_ids": {}
}
```
</p>
</details>
To view tokens owned by an account you can call the NFT contract with the following `near-cli` command:
```bash
near view $ID nft_tokens_for_owner '{"account_id": "'$ID'"}'
```
<details>
<summary>Example response: </summary>
<p>
```json
[
{
"token_id": "0",
"owner_id": "dev-xxxxxx-xxxxxxx",
"metadata": {
"title": "Some Art",
"description": "My NFT media",
"media": "https://bafkreiabag3ztnhe5pg7js4bj6sxuvkz3sdf76cjvcuqjoidvnfjz7vwrq.ipfs.dweb.link/",
"media_hash": null,
"copies": 1,
"issued_at": null,
"expires_at": null,
"starts_at": null,
"updated_at": null,
"extra": null,
"reference": null,
"reference_hash": null
},
"approved_account_ids": {}
}
]
```
</p>
</details>
> <br/>
>
> **Tip:** after you mint your first non-fungible token, you can [view it in your Wallet](https://testnet.mynearwallet.com//?tab=collectibles):
>
> ![Wallet with token](/docs/assets/nfts/nft-wallet-token.png)
>
> <br/>
**_Congratulations! You just minted your first NFT on the NEAR blockchain!_** 🎉
## Final remarks {#final-remarks}
This basic example illustrates all the required steps to deploy an NFT smart contract, store media files on IPFS,
and start minting your own non-fungible tokens.
Now that you're familiar with the process, you can check out our [NFT Example](https://examples.near.org/NFT) and learn more about the smart contract code and how you can transfer minted tokens to other accounts.
Finally, if you are new to Rust and want to dive into smart contract development, our [Quick-start guide](../../2.build/2.smart-contracts/quickstart.md) is a great place to start.
**_Happy minting!_** 🪙
## Blockcraft - a Practical Extension
If you'd like to learn how to use Minecraft to mint NFTs and copy/paste builds across different worlds while storing all your data on-chain, be sure to check out our [Minecraft tutorial](/tutorials/nfts/minecraft-nfts).
## Versioning for this article {#versioning-for-this-article}
At the time of this writing, this example works with the following versions:
- cargo: `cargo 1.54.0 (5ae8d74b3 2021-06-22)`
- rustc: `rustc 1.54.0 (a178d0322 2021-07-26)`
- near-cli: `2.1.1`
---
id: account-id
title: Address (Account ID)
---
NEAR accounts are identified by a unique address, which takes one of two forms:
1. [**Implicit addresses**](#implicit-address), which are 64 characters long (e.g. `fb9243ce...`)
2. [**Named addresses**](#named-address), which are simpler to remember and act as domains (e.g. `alice.near`)
:::tip Looking to create an account?
You have multiple ways to create an account: you can [sign up using your email](https://near.org/), get a mobile wallet through [Telegram](https://web.telegram.org/k/#@herewalletbot), or create a [web wallet](https://app.mynearwallet.com).
:::
---
## Implicit Address
Implicit accounts are denoted by a 64-character address, which corresponds to a unique public/private key pair. Whoever controls the [private key](./access-keys.md) of an implicit account controls the account.
For example:
- The private key: `ed25519:4x1xiJ6u3sZF3NgrwPUCnHqup2o...`
- Corresponds to the public key: `ed25519:CQLP1o1F3Jbdttek3GoRJYhzfT...`
- And controls the account: `a96ad3cb539b653e4b869bd7cf26590690e8971...`
Implicit accounts always *exist*, and thus do not need to be created. However, in order to use the account you will still need to fund it with NEAR tokens (or get somebody to pay the gas for your transaction).
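Concretely, the 64-character address is just the lowercase hexadecimal encoding of the account's 32-byte ed25519 public key. A minimal sketch in plain Rust (the key bytes below are made up for illustration):

```rust
// Sketch: derive an implicit account ID from a 32-byte ed25519 public key.
// The implicit address is the lowercase hex encoding of the raw key bytes,
// so it is always exactly 64 characters long.
fn implicit_account_id(public_key: &[u8; 32]) -> String {
    public_key.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    let public_key = [0xa9u8; 32]; // placeholder key material
    let account_id = implicit_account_id(&public_key);
    assert_eq!(account_id.len(), 64);
    println!("{}", account_id);
}
```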
<details>
<summary> 🧑💻 Technical: How to obtain a key-pair </summary>
The simplest way to obtain a public / private key that represents an account is using the [NEAR CLI](../../4.tools/cli.md)
```bash
near generate-key
# Output
# Seed phrase: lumber habit sausage used zebra brain border exist meat muscle river hidden
# Key pair: {"publicKey":"ed25519:AQgnQSR1Mp3v7xrw7egJtu3ibNzoCGwUwnEehypip9od","secretKey":"ed25519:51qTiqybe8ycXwPznA8hz7GJJQ5hyZ45wh2rm5MBBjgZ5XqFjbjta1m41pq9zbRZfWGUGWYJqH4yVhSWoW6pYFkT"}
# Implicit account: 8bca86065be487de45e795b2c3154fe834d53ffa07e0a44f29e76a2a5f075df8
```
</details>
---
## Named Address
In NEAR, users can register **named accounts** (e.g. `bob.near`) which are simpler to share and remember.
Another advantage of named accounts is that they can create **sub-accounts** of themselves, effectively working as domains:
1. The [`registrar`](https://nearblocks.io/address/registrar) account can create top-level accounts (e.g. `near`, `sweat`, `kaiching`).
2. The `near` account can create sub-accounts such as `bob.near` or `alice.near`
3. `bob.near` can create sub-accounts of itself, such as `app.bob.near`
4. Accounts cannot create sub-accounts of other accounts
- `near` **cannot** create `app.bob.near`
- `account.near` **cannot** create `sub.another-account.near`
5. Accounts have **no control** over their sub-accounts; they are different entities
Anyone can create a `.near` or `.testnet` account; you just need to call the `create_account` method of the corresponding top-level account - `testnet` on testnet, and `near` on mainnet.
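Rules 2-4 above reduce to a single check: an account may only create `<prefix>.<its-own-id>`, where the prefix itself contains no dots. A sketch of that validation in plain Rust (a hypothetical helper, not protocol code):

```rust
// Sketch of the sub-account creation rule: `creator` may create `new_account`
// only if `new_account` is exactly `<prefix>.<creator>` with a non-empty,
// dot-free prefix (i.e. a direct child, never a grandchild).
fn can_create(creator: &str, new_account: &str) -> bool {
    match new_account.strip_suffix(&format!(".{}", creator)) {
        Some(prefix) => !prefix.is_empty() && !prefix.contains('.'),
        None => false,
    }
}

fn main() {
    assert!(can_create("near", "bob.near"));
    assert!(can_create("bob.near", "app.bob.near"));
    assert!(!can_create("near", "app.bob.near")); // grandchild: not allowed
    assert!(!can_create("account.near", "sub.another-account.near"));
    println!("all sub-account rules hold");
}
```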
<details>
<summary> 🧑💻 Technical: How to create a named account </summary>
Named accounts are created by calling the `create_account` method of the network's top-level account - `testnet` on testnet, and `near` on mainnet.
```bash
near call testnet create_account '{"new_account_id": "new-acc.testnet", "new_public_key": "ed25519:<data>"}' --deposit 0.00182 --accountId funding-account.testnet
```
We abstract this process in the [NEAR CLI](../../4.tools/cli.md) with the following command:
```bash
near create-account new-acc.testnet --useAccount funding-account.testnet --publicKey ed25519:<data>
```
You can use the same command to create sub-accounts of an existing named account:
```bash
near create-account sub-acc.new-acc.testnet --useAccount new-acc.testnet
```
</details>
:::tip
Accounts have **no control** over their sub-accounts, they are different entities. This means that `near` cannot control `bob.near`, and `bob.near` cannot control `sub.bob.near`.
:::
NEAR at Collision Highlights
NEAR FOUNDATION
June 30, 2023
Missed NEAR’s booth at Collision? We’ve got you covered with content from NEAR talks and panels. There were also a number of awesome announcements, including the big news out of Collision: Alibaba Cloud.
From the announcement of the Women in Web3 Changemakers award winners to exciting ecosystem developments, here’s a rundown of all things NEAR from Collision 2023!
ICYMI: Major Announcements @ Collision
NEAR kicked off Collision with a roar, with two major news items rolling hot off the presses yesterday. Here’s the scoop:
NEAR Foundation and Alibaba Cloud Join Forces
First off, the NEAR Foundation announced a major partnership with Alibaba Cloud, the digital technology and intelligence backbone of Alibaba Group. The two will collaborate to offer Remote Procedure Calls (RPC) as a service and multi-chain data indexing to NEAR developers, users, and the broader ecosystem.
The goal is to make building decentralized applications on the BOS a breeze, and working with one of the world’s biggest tech infrastructure companies is a huge step in the right direction. Alibaba Cloud will also enhance plug-and-play capabilities for BOS builders everywhere, from China and APAC to Europe and the Americas.
Winners of the Women of Web3 Changemakers Awards
In other news, the ten winners of the Women in Web3 Changemakers initiative were just announced. These incredible women are leading the charge in the Web3 space, and NEAR Foundation is proud to recognize their outstanding contributions.
Selected from over 200 nominations and voted for by the public, these were chosen based on their inclusive and innovative ideas, impact within the Web3 space, and their contributions to significant projects. Read the full announcement for NEAR’s Women in Web3 Changemakers winners.
Highlights from NEAR at Collision
Geekpay breaks barriers in crypto transactions on NEAR
GeekPay, a crypto payments platform for businesses and freelancers built on NEAR, surpassed $500,000 in total transactions during Collision. It also just hit 205 verified users and companies, and 120 transactions. GeekPay is transforming how companies handle digital currency payments to contractors, eliminating the need for lengthy wallet addresses and providing greater security for cross-border transactions.
“NEAR is a perfect ecosystem and technology for cross-border payments in digital currencies for remote workers and contractors,” said Veronica Korzh, CEO and co-founder of GeekPay.
GeekPay supports a range of digital currencies including NEAR and ETH, including functionality with tracking and analytics, secure payment processing, invoicing, and transaction categorization. GeekPay hurdling the $400,000 transaction mark shows that the future of crypto payments for work is being built on the BOS.
Tenamint launches Toronto Regional Community with a bang
Tenamint, a marketplace for fractionalized trading card collectibles built on the NEAR blockchain, christened the launch of the NEAR Toronto Regional Community during the NEAR Toronto Launch Party. The soiree was held on an iconic yacht, the Yankee Lady, during a sunset cruise for an epic post-event party to cap off Collision.
“We’re thrilled to announce the launch of the NEAR Toronto Regional Community, a vibrant community dedicated to fostering collaboration, innovation, and growth in the Toronto region’s blockchain ecosystem,” said Sal Chaudhry, Tenamint co-founder and Toronto local.
“At the NEAR Toronto Regional Community, we’re on a mission to foster collaboration, education, and entrepreneurial support within the blockchain community,” Sal added. “We aim to empower individuals and businesses in Toronto and beyond to explore, adopt, and thrive in the world of blockchain technology and beyond.”
NEAR after Collision: Onward to ETHCC
Collision 2023 is only beginning, but the BOS momentum won’t stop in Toronto. NEAR will also be at the Ethereum Community Conference (ETHCC) in Paris, from July 19th to 23rd.
From exciting cross-chain discussions to insightful panels, interactive workshops, and pitch events, ETHCC promises to be another remarkable platform for BOS building and the wider NEAR community.
So after immersing yourself in Collision 2023, remember to mark your calendars for NEAR’s presence at ETHCC as well. Here’s to continuing to make waves in Web3 and expanding into the Open Web – with you as the BOS!
May NEAR Be with You | October 18th, 2019
COMMUNITY
October 18, 2019
Stake, you must! We’ve kicked-off Stake Wars, the NEAR Protocol’s incentivized testnet program. Practice your powers and compete with other validators for points that will be translated into rewards. Test, Bend and Break to gain points and level up. May the block be with you!
Our team is travelling at warp speed towards mainnet. In the meantime, we stop at various events to meet everybody. The past weeks included panels and talks in Tokyo and at Devcon V in Osaka. Pictures attached. Furthermore, our fleet of Beta Program projects is getting stronger by the month. Make sure to reach out to us with your ideas and projects at [email protected].
Lastly, a big shout-out to our ambassadors, who have been supporting us on the ground, translating content and building local communities. If you want to contribute to content, educational posts, and community organisation apply today!
If this photo had a title, it would be “Our community made us do it!” — Sasha and Illia having a great time after the Validator Panel in Tokyo, photo credits to @awasunyin.
COMMUNITY AND EVENTS
Community
Illia, Sasha and Peter have been busy travelling across Asia, including lots of great conversations at Devcon V.
Following Devcon, we had a great time at our cross-App communication panel, featuring Justin from Ethereum, Christopher from Cosmos, and Alistair from Polkadot. A BIG thanks to James Prestwich for moderating.
Our Ambassador Program has reached over 70 members.
Shout-out to our Ambassador Huy, who has been busy building the Vietnam Telegram Group. You can join here!
Can validators be the hidden evils in the blockchain world? Illia at the Validator Panel in Tokyo. Photo credits to @Diane_0320
Upcoming events
We are busy bees preparing for San Francisco Blockchain Week! If you happen to be in the Bay Area drop by our office or attend one of our events — Details to be announced soon!
Next week, we’ll do a little UK tour, catching up with Uni students from Oxford, Cambridge and London at The Future of Blockchain Hackathon. If you are at one of the events or in the area reach out and say hi!
#spotted @DevconV
WRITING AND CONTENT
Become a validator you must! Read the announcement & sign-up here! Chinese Edition of the announcement. (Shout-out to our ambassador Fengyuan, who translated it.)
Alex’ thread on Vitalik’s sharding announcement and how it compares to NEAR.
Tweet thread on NEAR’s mission to enable community-driven innovation and development to benefit people around the world *takes in deep breath*.
Chinese Translation of Alex’ blog post “Long Range Attacks, and a new Fork Choice Rule” by one of our ambassadors.
ENGINEERING UPDATE
90 PRs across 19 repos by 24 authors. Featured repos: nearcore, nearlib, near-shell, near-wallet, near-bindgen, docs, NEARStudio, assemblyscript, near-evm, borsh, stakewars and near-explorer;
Show transactions on Block Details page in near-explorer;
Switch Network for Stake Wars in near-explorer;
Massive rewrite of networking to support better peer discovery and message routing in nearcore;
Fixing transaction propagation, large overhaul of nightshade preparing to implement challenges, proper logic to retry requesting shard chunks in nearcore;
Work on diving state into parts preparing for proper state sync in nearcore;
Add a check into Runtime to compare input and output balances in nearcore;
Native test runner to replace docker runner and clean up in nearcore;
Creating a new near compiler frontend in near-runtime-ts;
Updated dependencies in borsh;
Set initialBalance for create_account – 1 NEAR in near-shell;
HOW YOU CAN GET INVOLVED
Join us: we’re hiring across the board!
If you want to work with one of the most talented teams in the world right now to solve incredibly hard problems, check out our careers page for openings. And tell your friends!
Learn more about NEAR in The Beginner’s Guide to NEAR Protocol. Stay up to date with what we’re building by following us on Twitter for updates, joining the conversation on Discord and subscribing to our newsletter to receive updates right to your inbox.
https://upscri.be/633436/
Running Ethereum Applications On NEAR
CASE STUDIES
February 15, 2020
The major part of this work was done by James Prestwich and Barbara Liau from https://summa.one/.
TLDR: Today we are releasing a set of tools to deploy EVM contracts on the NEAR network, thus benefiting from the performance, user experience and developer tooling of NEAR. Underneath, it’s implemented as an execution environment which runs Ethereum as a smart contract on NEAR. Web3.js tooling works with NEAR via a custom provider.
EVM support and web3.js provider
Ethereum’s developer community is large, and many crypto developers are familiar with the Ethereum Virtual Machine (EVM). Solidity, an EVM-targeted language, has been developed since the beginning to serve as the primary language for smart contracts. While it has clear limitations when compared to general purpose languages like Rust and TypeScript, Solidity maintains broad adoption and extensive tooling for on-chain development.
NEAR, on the other hand, uses the WebAssembly Virtual Machine (WASM), an increasingly popular technology both in crypto and in the wider tech world. The majority of the crypto space is moving in this direction, with projects like ETH2, Polkadot, and more deciding to use WASM.
While we believe strongly in WebAssembly, we recognize the need to simplify this transition for developers, and are releasing a way for existing EVM contracts to run on NEAR. To do so, we’ve deployed the EVM as a smart contract. Conveniently, the Parity Ethereum client has an EVM implementation in Rust that is easily compilable to WebAssembly.
Running the EVM as a smart contract is essentially a simplified version of the ETH2 / Serenity execution environment concept, and it doesn’t require any custom transaction processing logic! You can find the EVM contract on Github.
Since the majority of Ethereum tooling relies on web3.js, we’ve implemented a custom web3 provider, NearProvider, that allows direct communication to Ethereum contracts via familiar interfaces in near-web3-provider library. NearProvider handles the connection to the Near network, and automatically translates objects and RPC calls for you.
Let’s dig in!
How it works
First, let’s get your Solidity application running on NEAR’s TestNet:
If you don’t have an existing Truffle project, set it up first. You can find the example here – https://github.com/kcole16/near-evm-demo.
Next, install NEAR shell:
npm install -g near-shell
Then, login with NEAR wallet:
near login
This will redirect you to the NEAR web wallet, and walk you through creating a new account. You can enter any accountID you’d like to use going forward. Next, you will authorize the CLI to use this account via a transaction, and then enter the newly created accountID to complete the login.
The next step is to configure NEAR as another network in truffle.js:
This configuration imports near-web3-provider, which provides a mapping from Ethereum RPCs to NEAR’s network.
Next, we’ll point it to the keyStore that contains your NEAR account, from which you will be deploying applications (and paying fees). Here, I use my account illia, but you should change this to your accountId.
And that’s it, you are ready to deploy applications to NEAR’s EVM!
truffle migrate --network near
You can check the success of your transaction in the block explorer: https://explorer.nearprotocol.com
The final step is to plug near-web3-provider into your frontend web3 code. This way you can use NEAR Wallet and enable people to onboard and use your application easily.
Once you have your provider set up, you can interact with near-evm using Truffle, Web3.js, and many other standard Solidity development tools. While the library is still in early stages, many web3-based apps will just work out of the box.
You can check out the full example here: https://github.com/kcole16/near-evm-demo
NEAR EVM support is ready for your project! Start developing today.
Resources
Here are the useful resources:
https://github.com/kcole16/near-evm-demo – repo with full demo.
https://github.com/nearprotocol/near-evm – EVM execution environment contract.
https://github.com/nearprotocol/near-web3-provider – NearProvider for Web3.js.
https://t.me/joinchat/F3YJ0lcCcZka_GN09MGwJw – Developer support channel on Telegram for real time questions.
https://commonwealth.im/near – forum for ideas and suggestions.
---
id: relayers
title: Relayers
---
A relayer is a simple web service that receives signed transactions from NEAR users and relays them to the network while attaching tokens to sponsor their gas expenses. This can be useful for creating applications in which users are not required to purchase NEAR in order to transact. In this document we present a high-level overview of how relayers work. Please check the [build a relayer](../../2.build/welcome.md) page if you want to learn how to build your own relayer.
---
## How it works
![relayer-overview](/docs/assets/welcome-pages/relayer-overview.png)
Relayers are a natural consequence of [Meta Transactions](meta-tx.md) ([NEP-366](https://github.com/near/NEPs/blob/master/neps/nep-0366.md)), a special type of transaction that can be best understood as an intent.
The user expresses: _"I want to do a specific action on chain"_ and signs this intent **off-chain**, but does not send it to the network. Instead, they send the intent to a `Relayer`, which wraps the message into an actual transaction, attaches the necessary funds, and sends it to the network.
<details>
<summary> Technical Details </summary>
Technically, the end user (client) creates a `DelegateAction` that contains the data necessary to construct a `Transaction`, signs it with their key to produce a `SignedDelegateAction`, and sends it to the relayer service.
When the request is received, the relayer uses its own key to sign a `Transaction` using the fields in the `SignedDelegateAction` as input to create a `SignedTransaction`.
The `SignedTransaction` is then sent to the network via RPC call, and the result is sent back to the client. The `Transaction` is executed in such a way that the relayer pays the GAS fees, but all actions are executed as if the user had sent the transaction.
</details>
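The flow above can be sketched with heavily simplified types. These structs are illustrative placeholders, not the real NEP-366 definitions, which carry typed action lists, nonces, block-height bounds, and cryptographic signatures:

```rust
// Highly simplified sketch of the relayer flow: the user signs an intent
// off-chain, and the relayer wraps it into a transaction it pays gas for.
struct SignedDelegateAction {
    sender_id: String,      // the user whose intent this is
    actions: Vec<String>,   // placeholder for the real Action list
    user_signature: String, // placeholder for the user's off-chain signature
}

struct SignedTransaction {
    signer_id: String,   // the relayer: it signs and pays the gas
    receiver_id: String, // the delegate action executes on the user's account
    actions: Vec<String>,
}

// Wrap the user's signed intent into a transaction funded by the relayer.
fn relay(relayer_id: &str, delegate: SignedDelegateAction) -> SignedTransaction {
    // A real relayer would first verify `delegate.user_signature` here.
    SignedTransaction {
        signer_id: relayer_id.to_string(),
        receiver_id: delegate.sender_id,
        actions: delegate.actions,
    }
}

fn main() {
    let intent = SignedDelegateAction {
        sender_id: "alice.near".into(),
        actions: vec!["transfer 1 NEAR".into()],
        user_signature: "ed25519:...".into(),
    };
    let tx = relay("relayer.near", intent);
    assert_eq!(tx.signer_id, "relayer.near"); // relayer pays the gas
    assert_eq!(tx.receiver_id, "alice.near"); // actions run for the user
    println!("relayed {} action(s)", tx.actions.len());
}
```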
---
## Why use a Relayer?
There are multiple reasons to use a relayer:
1. Your users are new to NEAR and don't have any gas to cover transactions
2. Your users have an account on NEAR, but only have a Fungible Token Balance. They can now use the FT to pay for gas
3. As an enterprise or a large startup you want to seamlessly onboard your existing users onto NEAR without needing them to worry about gas costs and seed phrases
4. As an enterprise or large startup you have a user base that can generate large spikes of user activity that would congest the network. In this case, the relayer acts as a queue for low urgency transactions
5. In exchange for covering the gas fee costs, relayer operators can limit where users spend their assets while allowing users to have custody and ownership of their assets
6. Capital Efficiency: Without a relayer, if your business has 1M users, each would have to be allocated 0.25 NEAR to cover their gas costs, totalling 250k NEAR. However, only ~10% of users would actually use their full allowance, leaving most of the 250k NEAR sitting unused. With a relayer, you can instead allocate 50k NEAR as a global pool of capital for your users and refill it on an as-needed basis.
---
id: keys
title: Key Management
sidebar_label: Key Management
sidebar_position: 4
description: NEAR Node Key Management
---
## What are Keys? {#what-are-keys}
In public key cryptography, there exists a key pair, one public and one private, to sign and send verifiable transactions across the network. NEAR takes the common approach of using public keys for identity and private keys for signatures. Internally the NEAR platform uses ed25519, one of several "elliptic curves" that produce secure cryptographic results quickly. Specifically, we use `tweetnacl` in JavaScript and `libsodium` in Rust.
## Are there Different Types of Keys? {#are-there-different-types-of-keys}
There are 3 types of key pairs on the NEAR platform:
- Signer Keys (e.g. account keys, access keys)
- Validator Keys
- Node Keys
**Signer Keys** are the ones we all know and love. They're used by accounts on the network to sign transactions like `sendMoney` and `stake` before sending these transactions to the network. Signer keys are not related to running a node in any way. End users who sign up through the [NEAR Wallet](https://wallet.near.org/) get their own signer keys, for example. These are the keys that humans think about and keep safe.
There are two flavors of signer keys currently available, `FullAccess` keys and `FunctionCall` keys. The first has unrestricted control to "act on behalf of an account" (as used by NEAR CLI and NEAR Wallet to get things done for you). The second is limited to calling a predefined set of methods on a single contract. Both flavors of keys can be revoked by the account holder. There is no limit to the flavors of keys that the NEAR platform can handle, so we can easily imagine keys for voting, shopping, conducting official business, etc., each with their own access controls on our data, programmable time limits, and so on. But keep in mind that keys added to your account occupy storage, which must be covered by the account's balance.
**Validator Keys** are used by validators (people and companies who are committed to maintaining the integrity of the system) to support their work of validating blocks and chunks on the network, nothing more. The human validators don't think about these keys beyond creating them and resetting them. Once added to a validator's node, validator keys are used by the node to do their thing of validating blocks and chunks. As a convenience to validators, validator keys are currently produced by a script at node startup if they don't already exist (in the case of NEAR Stake Wars, the `start_stakewars.py` script) but this may change.
**Node Keys** are something no humans on the network think about except core contributors to the platform. These keys are used internally by a node to sign low-level communications with other nodes in the network like sending block headers or making other verifiable requests. Node keys are currently provided to a node at startup by a script. In the case of NEAR Stake Wars it's the `start_stakewars.py` script that produces these keys for now, but this may change.
## Can Keys be Changed? {#can-keys-be-changed}
Yes, but only in that keys can be _reset_ (ie. regenerated as a new key pair). If a private key is lost or compromised somehow then a new key pair must be generated. This is just the nature of secure keys.
**Signers** can create new keys and revoke existing keys at will. NEAR Wallet also supports key recovery via SMS or seed phrase which makes it convenient to move signer keys from one computer to another, for example.
**Validators** have the option to reset their validator keys at any time but it makes sense to avoid resetting validator keys while staking. To reset their keys, a human validator stops their node, changes their validator key and restarts the node. All new validator output will be signed by these new keys.
**Nodes** should not need to reset their node keys.
<blockquote class="info">
<strong>Did you know?</strong><br /><br />
As a brief word on the NEAR runtime, the subsystem that manages state transitions on the blockchain (ie. keeping things moving from one block to the next), it's worth understanding that the movement of the system happens in stages, called epochs, during which the group of validators does not change. The [Nightshade whitepaper](https://near.org/papers/nightshade) introduces epochs this way: "the maintenance of the network is done in epochs, where an epoch is a period of time on the order of days." and there's much more detail in the paper.
At the beginning of each epoch, some computation produces a list of validators for the very next epoch (not the one that just started). The input to this computation includes all validators that have "raised their hand" to be a validator by staking some amount over the system's staking threshold. The output of this computation is a list of the validators for the very next epoch.
When a validator is elected during an epoch, they have the opportunity to stake (ie. put some skin in the game in the form of tokens) in support of their intent to "behave" while keeping their node running so others can rent storage and compute on it. Any foul play on the part of the validator that is detected by the system may result in a slashing event, where the validator is marked as out of integrity and forfeits their stake, which is redistributed among other validators.
</blockquote>
<blockquote class="warning">
<strong>Heads up</strong><br /><br />
If validator keys are changed _during an epoch in which the validator is staking_, the validator's output will be rejected since their signature will not match (new keys). This means the validator will, by the end of the epoch, not be able to meet the minimum validator output threshold and lose their position as a recognized validator. Their stake will be returned to them.
</blockquote>
For concrete examples of keys being used as identifiers, you can see a list of validators and active nodes on various NEAR networks here:
- NEAR testnet (staking currently disabled)
- `https://rpc.testnet.near.org/status`
- `https://rpc.testnet.near.org/network_info`
- NEAR betanet
- `https://rpc.betanet.near.org/status`
- `https://rpc.betanet.near.org/network_info`
> Got a question? [Ask it on StackOverflow!](https://stackoverflow.com/questions/tagged/nearprotocol)
---
sidebar_position: 1
---
# The NEAR Protocol Specification
NEAR Protocol is a scalable blockchain protocol.
For an overview of the NEAR Protocol, read the following documents in numerical order.
1. [Terminology](Terminology.md)
2. [Data structures](DataStructures/)
3. [Architecture](Architecture.md)
4. [Chain specification](ChainSpec/)
5. [Runtime specification](RuntimeSpec/)
6. [Network specification](NetworkSpec/NetworkSpec.md)
7. [Economics](Economics/Economic.md)
## Standards
Standards such as Fungible Token Standard can be found in [Standards](Standards/README.md) page.
---
description: Secure way to lock NEAR tokens for a period of time
title: Lockups
sidebar_position: 6
---
These docs include information about lockups in general, how they are implemented on NEAR, some challenges this causes, and how you can delegate your locked tokens.
## Lockup Basics
A "lockup" is when tokens are prevented from being transferred. The configuration of this lockup may vary significantly from case to case, but the same smart contract is used for each of them. Accounts that are subject to a lockup have a different setup than accounts that are created without a lockup. If you have a locked-up account, it may be supported slightly differently by various tools (from wallets to delegation interfaces) because of this difference in the architecture.\
If you want to be sure to see the correct balances, use [My NEAR Wallet](https://app.mynearwallet.com/).
The most common configuration of lockup is to linearly release the tokens for transfer during the entire term of the lockup. For example, a 24-month linear lockup would make a small amount of tokens eligible for transfer with each block that passes until the full amount is free to transfer at the end of 24 months.
Another factor in lockups is the "cliff", which means that no tokens are unlocked until that date (often 12 months after the lockup start).\
On that date, the tokens that would have unlocked gradually up to that point are released all at once, as if the cliff had never existed.\
Most early accounts are subject to a cliff. For example, a 4-year linear lockup with a 1-year cliff will have the following characteristics:
1. Months 0-12: all tokens are locked
2. Month 12+1 block: the first 25% of the tokens are immediately unlocked
3. Months 13-48: the remaining 75% of tokens are unlocked smoothly over each block of the remaining 36 months.
4. Months 48+: all tokens are unlocked
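The schedule above can be expressed as a small helper (a sketch that works in whole months rather than per-block):

```javascript
// Sketch of a 4-year linear lockup with a 1-year cliff.
// Returns the fraction of tokens unlocked after `month` months.
function unlockedFraction(month) {
  if (month < 12) return 0; // months 0-12: everything is locked
  if (month >= 48) return 1; // month 48+: fully unlocked
  // At the cliff the first 25% unlocks at once; the remaining 75%
  // unlocks linearly over the remaining 36 months.
  return 0.25 + 0.75 * ((month - 12) / 36);
}

console.log(unlockedFraction(6));  // 0
console.log(unlockedFraction(12)); // 0.25
console.log(unlockedFraction(30)); // 0.625
console.log(unlockedFraction(48)); // 1
```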
_See how NEAR tokens have been distributed and what lockups generally apply in_ [_this post_](https://near.org/blog/near-token-supply-and-distribution/)_._
_See the FAQ at the end for common questions._
A lockup is a special smart contract that ensures the full or partial amount is not transferable until it is supposed to be.
The lockups are implemented as a separate smart contract from your main account. Thus, if you have received tokens prior to [Phase II](https://near.org/blog/near-mainnet-phase-2-unrestricted-decentralized/), you will get two things:
1. A regular account (also called "Owner Account" in the context of lockups), let's say `user.near` or `3e52c197feb13fa457dddd102f6af299a5b63465e324784b22aaa7544a7d55fb`;
2. A lockup contract, with a name like `4336aba00d32a1b91d313c81e8544ea1fdc67284.lockup.near`.
Have a look at the [Lockup page](https://github.com/near/core-contracts/tree/master/lockup) in the NEAR repo for a deeper dive into Lockups.
### Termination of Vesting
Vesting can be terminated by the foundation, an account configured at the moment the contract is initialized. It's important to understand how termination works in combination with the lockup schedule.
![](@site/static/img/lockup_5-ccc671d917b28deda1ddc51c2ef2f1d1.png)
At the moment of termination, the vesting process stops, so the vested amount remains constant from then on; the lockup process keeps going and unlocks tokens on its schedule. The amount available for transfer is the minimum of the unlocked and vested amounts.
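The rule is small enough to sketch directly:

```javascript
// After a vesting termination, the vested amount freezes while the lockup
// keeps unlocking; what the owner can transfer is the minimum of the two.
function transferableAmount(unlocked, vested) {
  return Math.min(unlocked, vested);
}

// Example: vesting was terminated when 40 tokens had vested.
console.log(transferableAmount(25, 40)); // 25 (lockup still the limiting factor)
console.log(transferableAmount(60, 40)); // 40 (capped by the frozen vested amount)
```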
### An Example
You can see examples of account and lockup setups in the [NEAR Explorer](https://explorer.mainnet.near.org).\
For example, this randomly chosen account `gio3gio.near` was created in several steps:
First, the Owner Account `gio3gio.near` was created and configured using several transactions, which you can see in [the account history](https://explorer.mainnet.near.org/accounts/gio3gio.near). It was created with 40 NEAR tokens to pay for the storage requirements of the account, and the two-factor authentication that is deployed to it.
Next, the account [9b84742f269952cea2877425b5e9d2e15cae8829.lockup.near](https://explorer.mainnet.near.org/accounts/9b84742f269952cea2877425b5e9d2e15cae8829.lockup.near) was created to store the actual balance of locked tokens on the account in [a batch transaction](https://explorer.mainnet.near.org/transactions/Eer14Fih17TRjpiF8PwWfVKNTB57vXnNJsDW93iqc2Ui) which also transferred these tokens to it (in this case, 594.11765 tokens).\
You can see the arguments for the `new` method in the explorer, which show a 12-month release duration with an initial cliff of October 4th:
For the actual lockup contract code and README, [see it on Github](https://github.com/near/core-contracts/tree/master/lockup).
```json
{
"owner_account_id": "gio3gio.near", // the Owner account who is allowed to call methods on this one
"lockup_duration": "0", // not necessary if the lockup_timestamp is used
"lockup_timestamp": "1601769600000000000", // Unix timestamp for October 4th, 2020 at midnight UTC
"transfers_information": {
"TransfersDisabled": {
"transfer_poll_account_id": "transfer-vote.near"
}
},
"vesting_schedule": null,
"release_duration": "31536000000000000", // 365 days
"staking_pool_whitelist_account_id": "lockup-whitelist.near",
"foundation_account_id": null
}
```
## Delegating Locked Tokens
One of the unique features of the NEAR lockups is the ability to delegate tokens while they are still locked.
There are a few things you need to know:
1. You can only delegate to whitelisted pools, right now it's all the pools that end with `.poolv1.near`. 
2. One lockup contract can only delegate to a single pool. 
3. The account must keep a minimum balance of 3.5 $NEAR to cover storage for the lockup contract itself (transactions that will try to withdraw over that amount will just fail). 
4. Delegation rewards can be withdrawn back to the lockup contract but are unlocked, so they can be withdrawn from it right away. 
5. Delegating commands/tools which are not specifically configured to work with locked-up accounts won't work, as the "owner account" must call a lockup contract.
## Frequently Asked Questions
### I don't see my full balance in my wallet
Not all wallets support looking up the locked-up balance.
There are three ways to go:
* Use [My NEAR Wallet](https://app.mynearwallet.com/);
* [Import your account into NEAR Wallet](token-custody.md#importing-accounts-from-other-wallets);
* Use CLI to check your balance: `near view <LOCKUP_ACCOUNT_ID> get_balance ''` (note that it outputs the value in yoctoNEAR; divide by 10^24 to get the NEAR amount).
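For example, a small helper (hypothetical, but matching the 10^24 conversion above) turns the yoctoNEAR string returned by `get_balance` into a readable amount:

```javascript
// Convert a yoctoNEAR string (as returned by `get_balance`) into a
// human-readable NEAR amount. 1 NEAR = 10^24 yoctoNEAR.
function yoctoToNear(yocto) {
  const YOCTO_PER_NEAR = 10n ** 24n;
  const value = BigInt(yocto);
  const whole = value / YOCTO_PER_NEAR;
  const frac = value % YOCTO_PER_NEAR;
  // Keep 5 decimal places for display.
  const fracStr = frac.toString().padStart(24, "0").slice(0, 5);
  return `${whole}.${fracStr}`;
}

console.log(yoctoToNear("594117650000000000000000000")); // "594.11765"
```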
### If I have a lockup, what do I need to do to transfer my tokens once they are available from the Wallet?
If you use NEAR Wallet, you can just transfer them as normal. You will just have to confirm a couple of extra transactions ("check vote" and "transfer").\
Other wallets may implement this differently.
> Got a question? [Ask it on StackOverflow!](https://stackoverflow.com/questions/tagged/nearprotocol)
---
id: defining-a-token
title: Defining a Fungible Token
sidebar_label: Defining Your Token
---
import {Github} from "@site/src/components/codetabs"
This is the first of many tutorials in a series where you'll be creating a complete FT smart contract from scratch that conforms with all the NEAR [FT standards](https://nomicon.io/Standards/Tokens/FungibleToken/Core). Today you'll learn what a Fungible Token is and how you can define one on the NEAR blockchain. You will be modifying a bare-bones [skeleton smart contract](/tutorials/fts/skeleton) by filling in the necessary code snippets needed to add this functionality.
## Introduction
To get started, switch to the `1.skeleton` folder in our repo. If you haven't cloned the repository, refer to the [Contract Architecture](/tutorials/fts/skeleton) to get started.
If you wish to see the finished code for this portion of the tutorial, that can be found on the `2.defining-a-token` folder.
## Modifications to the skeleton contract {#modifications}
At its very core, a fungible token is an exchangeable asset that **is divisible** but is **not unique**. For example, if Benji had 1 Canadian dollar, it would be worth exactly the same as Matt's Canadian dollar. Both their dollars are fungible and exchangeable. In this case, the fungible token is the Canadian dollar. All fiat currencies are fungible and exchangeable.
Non-fungible tokens, on the other hand, are **unique** and **indivisible**, such as a house or a car. You **cannot** have another asset that is exactly the same. Even if you had a specific car model, such as a 1963 Corvette C2 Stingray, each car would have a separate serial number and a different number of kilometers driven.
Now that you understand what a fungible token is, let's look at how you can define one in the contract itself.
### Defining a fungible token {#defining-a-fungible-token}
Start by navigating to the `1.skeleton/src/metadata.rs` file. This is where you'll define the metadata for the fungible token itself. There are several ways NEAR allows you to customize your token, all of which are found in the [metadata](https://nomicon.io/Standards/Tokens/FungibleToken/Core#metadata) standard. Let's break them up into the optional and non-optional portions.
Required:
- **spec**: Indicates the version of the standard the contract is using. This should be set to `ft-1.0.0`.
- **name**: The human readable name of the token such as "Wrapped NEAR" or "TEAM Tokens".
- **symbol**: The abbreviation of the token such as `wNEAR` or `gtNEAR`.
- **decimals**: used in frontends to show the proper significant digits of a token. This concept is explained well in this [OpenZeppelin post](https://docs.openzeppelin.com/contracts/3.x/erc20#a-note-on-decimals).
Optional:
- **icon**: The image for the token (must be a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs)).
- **reference**: A link to any supplementary JSON details for the token stored off-chain.
- **reference_hash**: A hash of the referenced JSON.
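Put together, a complete metadata object following the standard might look like this (the values are illustrative, matching the token used later in this tutorial; the `icon` data URL is truncated):

```json
{
  "spec": "ft-1.0.0",
  "name": "Team Token FT Tutorial",
  "symbol": "gtNEAR",
  "decimals": 24,
  "icon": "data:image/jpeg;base64,...",
  "reference": null,
  "reference_hash": null
}
```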
With this finished, you can now add these fields to the metadata in the contract.
<Github language="rust" start="8" end="18" url="https://github.com/near-examples/ft-tutorial/blob/main/2.define-a-token/src/metadata.rs" />
Now that you've defined what the metadata will look like, you need some way to store it on the contract. Switch to the `1.skeleton/src/lib.rs` file and add the following to the `Contract` struct. You'll want to store the metadata on the contract under the `metadata` field.
<Github language="rust" start="18" end="23" url="https://github.com/near-examples/ft-tutorial/blob/main/2.define-a-token/src/lib.rs" />
You've now defined *where* the metadata will live, but you'll also need some way to pass in the metadata itself. This is where the initialization function comes into play.
#### Initialization Functions
You'll now create what's called an initialization function; you can name it `new`. This function needs to be invoked when you first deploy the contract. It will initialize all the contract's fields that you've defined with default values. It's important to note that you **cannot** call these methods more than once.
<Github language="rust" start="56" end="72" url="https://github.com/near-examples/ft-tutorial/blob/main/2.define-a-token/src/lib.rs" />
More often than not when doing development, you'll need to deploy contracts several times. You can imagine that it might get tedious to have to pass in metadata every single time you want to initialize the contract. For this reason, let's create a function that can initialize the contract with a set of default `metadata`. You can call it `new_default_meta`.
<Github language="rust" start="36" end="52" url="https://github.com/near-examples/ft-tutorial/blob/main/2.define-a-token/src/lib.rs" />
This function is simply calling the previous `new` function and passing in some default metadata behind the scenes.
At this point, you've defined the metadata for your fungible tokens and you've created a way to store this information on the contract. The last step is to introduce a getter that will query for and return the metadata. Switch to the `1.skeleton/src/metadata.rs` file and add the following code to the `ft_metadata` function.
<Github language="rust" start="20" end="30" url="https://github.com/near-examples/ft-tutorial/blob/main/2.define-a-token/src/metadata.rs" />
This function will get the `metadata` object from the contract which is of type `FungibleTokenMetadata` and will return it.
## Interacting with the contract on-chain
Now that the logic for defining a custom fungible token is complete and you've added a way to query for the metadata, it's time to build and deploy your contract to the blockchain.
### Deploying the contract {#deploy-the-contract}
We've included a very simple way to build the smart contracts throughout this tutorial using a bash script. The following command will build the contract and copy over the `.wasm` file to a folder `out/contract.wasm`. The build script can be found in the `1.skeleton/build.sh` file.
```bash
cd 1.skeleton && ./build.sh && cd ..
```
There will be a list of warnings on your console, but as the tutorial progresses, these warnings will go away. You should now see the folder `out/` with the file `contract.wasm` inside. This is what we will be deploying to the blockchain.
For deployment, you will need a NEAR account with the keys stored on your local machine. Navigate to the [NEAR wallet](https://testnet.mynearwallet.com/) site and create an account.
:::info
Please ensure that you deploy the contract to an account with no pre-existing contracts. It's easiest to simply create a new account or create a sub-account for this tutorial.
:::
Log in to your newly created account with `near-cli` by running the following command in your terminal.
```bash
near login
```
To make this tutorial easier to copy/paste, we're going to set an environment variable for your account ID. In the command below, replace `YOUR_ACCOUNT_NAME` with the account name you just logged in with including the `.testnet` portion:
```bash
export FT_CONTRACT_ID="YOUR_ACCOUNT_NAME"
```
Test that the environment variable is set correctly by running:
```bash
echo $FT_CONTRACT_ID
```
Verify that the correct account ID is printed in the terminal. If everything looks correct you can now deploy your contract.
In the root of your FT project run the following command to deploy your smart contract.
```bash
near deploy $FT_CONTRACT_ID out/contract.wasm
```
At this point, the contract should have been deployed to your account and you're ready to move onto creating your personalized fungible token.
### Creating the fungible token {#initialize-contract}
The very first thing you need to do once the contract has been deployed is to initialize it. For simplicity, let's call the default metadata initialization function you wrote earlier so that you don't have to type the metadata manually in the CLI.
```bash
near call $FT_CONTRACT_ID new_default_meta '{"owner_id": "'$FT_CONTRACT_ID'", "total_supply": "0"}' --accountId $FT_CONTRACT_ID
```
### Viewing the contract's metadata
Now that the contract has been initialized, you can query for the metadata by calling the function you wrote earlier.
```bash
near view $FT_CONTRACT_ID ft_metadata
```
This should return an output similar to the following:
```bash
{
spec: 'ft-1.0.0',
name: 'Team Token FT Tutorial',
symbol: 'gtNEAR',
icon: 'data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAASABIAAD/
/*
...lots of base64 data...
*/
j4Mvhy9H9NlnieJ4iwoo9ZlyLGx4pnrPWeB4CVGRZZcJ7Vohwhi0z5MJY4cVL4MdP/Z',
reference: null,
reference_hash: null,
decimals: 24
}
```
**Go team!** You've now verified that everything works correctly and you've defined your own fungible token!
In the next tutorial, you'll learn about how to create a total supply and view the tokens in the wallet.
## Conclusion
In this tutorial, you went through the basics of setting up and understanding the logic behind creating a fungible token on the blockchain using a skeleton contract.
You first looked at [what a fungible token is](#modifications) and how it differs from a non-fungible token. You then learned how to customize and create your own fungible tokens and how you could modify the skeleton contract to achieve this. Finally you built and deployed the contract and interacted with it using the NEAR CLI.
## Next Steps
In the [next tutorial](/tutorials/fts/circulating-supply), you'll find out how to create an initial supply of tokens and have them show up in the NEAR wallet.
Statement in full: NEAR Foundation to fund USN Protection Programme
COMMUNITY
October 24, 2022
The NEAR ecosystem has always been at the cutting edge of innovation – many projects building on the NEAR blockchain are pushing boundaries – creating new value and experiences for users, and radically redefining how traditional markets and processes operate. The road of innovation is one of constant learning, evolution, and sometimes challenges that need the community to rally together in support of our shared goals and values to find a clear path forward.
Decentral Bank and USN
USN, a NEAR-native stablecoin, was created and launched by Decentral Bank (DCB) on 25th April 2022 (v1). This is an independently operated, community-run project which had no direct financial assistance from the NEAR Foundation.
In June 2022, DCB upgraded the algorithmic USN (v1) to a new non-algorithmic version (v2) due to the potential risks, including undercollateralisation, inherent in algorithmic stablecoins. DCB has confirmed that USN v2 is a non-algorithmic and fully 1:1 USDT-backed version. This can be seen and verified within the smart contract code from DCB.
DCB recently contacted the NEAR Foundation to advise it that USN had become undercollateralised due to it originally being an algorithmic stablecoin (v1) susceptible to undercollateralisation during extreme market conditions. DCB has also confirmed that there was some double-minting of USN, associated with the v1 algorithm, which contributed to the undercollateralisation.
The known collateral gap is $40m USD and cannot expand further assuming DCB burns/destroys all the double-minted USN and promptly winds down the project in an orderly manner. This gap is not related to the value of the $NEAR token (and USN has never had any burn/mint relationship to $NEAR).
On this basis, the NEAR Foundation believes the collateral gap to be contained.
The USN Protection Programme
Stablecoins in general have faced many headwinds over the last few months with increased regulatory focus, and changes in market perception from recent high profile incidents. Although this situation is fundamentally different to other incidents (as USN never had a hardcoded burn/mint relationship to $NEAR), the headwinds still remain.
Given the issues described above, the NEAR Foundation is recommending that USN should wind down. The Foundation encourages DCB to do this at the earliest opportunity in a responsible and professional manner that protects all of its users.
The NEAR Foundation’s purpose is to support the NEAR ecosystem and its users. In order to safeguard users and to facilitate the orderly winding down of USN by DCB, the NEAR Foundation has elected to set aside $40m USD in fiat (equal to the known collateral gap as described above), to be made available via a grant for the creation of a USN Protection Programme. The NEAR Foundation has already provided this grant to a subsidiary of Aurora Labs, – one of the NEAR ecosystem’s most prominent contributors – to set up the USN Protection Programme which is now live on Aurora Labs’s website.
As conditions of the grant, Aurora Labs have agreed that the USN Protection Programme will operate with the sole purpose of making users whole and will be subject to various conditions including KYC/AML, sanctions checks and certain geographic restrictions. In order to prevent abuse of the Programme, only legitimately minted USN in existence immediately prior to the time of this announcement will be eligible (a snapshot). The USN Protection Programme is live and redemptions will begin once the USN smart contract has stopped minting and the double-minted USN has been burnt. The DCB team and their affiliates will not be eligible to participate in the Programme.
With the USN Protection Programme, NEAR Foundation understands that USN is now overcollateralised, as there is also approximately 5.7m $NEAR in the DCB treasury which the Foundation expects DCB to donate to the NEAR community.
Given that the undercollateralisation gap is contained, based on the information and assumptions set out above, the Foundation are confident that this action best safeguards users and the wider NEAR ecosystem.
A look ahead to the future
The NEAR Foundation is confident that as the ecosystem grows and matures, this type of intervention should not be required in the future.
Moving forward, the NEAR Foundation will be working with the NDC and wider community to set up a funded initiative with the remit of developing robust community standards & guardrails, in particular in relation to stablecoins, to help ensure that users are protected in situations of rapid innovation, and evolving markets and regulations.
Note: Nothing in this statement is intended to be, and is not, an offer or sale of any tokens or securities. Certain persons, including US persons, as defined in Regulation S under the US Securities Act will not be eligible to participate in the USN Protection Programme.
---
id: compile-and-run-a-node
title: Run a Validator Node
sidebar_label: Run a Node
sidebar_position: 3
description: Compile and Run a NEAR Node without Container in localnet, testnet, and mainnet
---
*If this is the first time for you to setup a validator node, head to our [Validator Bootcamp 🚀](/validator/validator-bootcamp).*
The following instructions are applicable across localnet, testnet, and mainnet.
If you are looking to learn how to compile and run a NEAR validator node natively (without containerization) for one of the following networks, this guide is for you.
- [`localnet`](/validator/compile-and-run-a-node#localnet)
- [`testnet`](/validator/compile-and-run-a-node#testnet)
- [`mainnet`](/validator/compile-and-run-a-node#mainnet)
## Prerequisites {#prerequisites}
- [Rust](https://www.rust-lang.org/). If not already installed, please [follow these instructions](https://docs.near.org/docs/tutorials/contracts/intro-to-rust#3-step-rust-installation).
- [Git](https://git-scm.com/)
- Installed developer tools:
- MacOS
```bash
$ brew install cmake protobuf llvm awscli
```
- Linux
```bash
$ apt update
$ apt install -y git binutils-dev libcurl4-openssl-dev zlib1g-dev libdw-dev libiberty-dev cmake gcc g++ python docker.io protobuf-compiler libssl-dev pkg-config clang llvm cargo awscli
```
## How to use this document {#how-to-use-this-document}
This document is separated into sections by network ID. Although all of the sections have almost the exact same steps/text, we found it more helpful to create individual sections so you can easily copy-paste commands to quickly get your node running.
### Choosing your `nearcore` version {#choosing-your-nearcore-version}
When building your NEAR node you will have two branch options to choose from depending on your desired use:
- `master` : _(**Experimental**)_
- Use this if you want to play around with the latest code and experiment. This branch is not guaranteed to be in a fully working state and there is absolutely no guarantee it will be compatible with the current state of *mainnet* or *testnet*.
- [`Latest stable release`](https://github.com/near/nearcore/tags) : _(**Stable**)_
- Use this if you want to run a NEAR node for *mainnet*. For *mainnet*, please use the latest stable release. This version is used by mainnet validators and other nodes and is fully compatible with the current state of *mainnet*.
- [`Latest release candidates`](https://github.com/near/nearcore/tags) : _(**Release Candidates**)_
- Use this if you want to run a NEAR node for *testnet*. For *testnet*, we first release an RC version and then later make that release stable. For testnet, please run the latest RC version.
#### (Optional) Enable debug logging {#optional-enable-debug-logging}
> **Note:** Feel free to skip this step unless you need more information to debug an issue.
To enable debug logging, run `neard` like this:
```bash
$ RUST_LOG=debug,actix_web=info ./target/release/neard --home ~/.near run
```
## `localnet` {#localnet}
### 1. Clone `nearcore` project from GitHub {#clone-nearcore-project-from-github}
First, clone the [`nearcore` repository](https://github.com/near/nearcore).
```bash
$ git clone https://github.com/near/nearcore
```
Next, checkout the release branch you need if you will not be using the default `master` branch. [ [More info](/validator/compile-and-run-a-node#choosing-your-nearcore-version) ]
```bash
$ git checkout master
```
### 2. Compile `nearcore` binary {#compile-nearcore-binary}
In the repository run the following commands:
```bash
$ make neard
```
This will start the compilation process. It will take some time
depending on your machine power (e.g. i9 8-core CPU, 32 GB RAM, SSD
takes approximately 25 minutes). Note that compilation will need over
1 GB of memory per virtual core the machine has. If the build fails
with processes being killed, you might want to try reducing number of
parallel jobs, for example: `CARGO_BUILD_JOBS=8 make neard`.
By the way, if you're familiar with Cargo, you might wonder why not
run `cargo build -p neard --release` instead. While this will produce
a binary, the result will be a less optimized version. On a technical
level, this is because building via `make neard` enables link-time
optimization, which is disabled by default. The binary path is `target/release/neard`.
For `localnet`, you also have the option to build in nightly mode (which is experimental and is used for cutting-edge testing). When you compile, use the following command:
```bash
$ cargo build --package neard --features nightly_protocol,nightly_protocol_features --release
```
### 3. Initialize working directory {#initialize-working-directory}
The NEAR node requires a working directory with a couple of configuration files. Generate the initial required working directory by running:
```bash
$ ./target/release/neard --home ~/.near init --chain-id localnet
```
> You can skip the `--home` argument if you are fine with the default working directory in `~/.near`. If not, pass your preferred location.
This command will create the required directory structure and will generate `config.json`, `node_key.json`, `validator_key.json`, and `genesis.json` files for `localnet` network.
- `config.json` - Neard node configuration parameters.
- `genesis.json` - A file with all the data the network started with at genesis. This contains initial accounts, contracts, access keys, and other records which represent the initial state of the blockchain.
- `node_key.json` - A file which contains a public and private key for the node. Also includes an optional `account_id` parameter.
- `data/` - A folder in which a NEAR node will write its state.
- `validator_key.json` - A file which contains a public and private key for local `test.near` account which belongs to the only local network validator.
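To sanity-check the generated files, you can pull individual fields out of the key files with `python3`. The sketch below writes a mock `validator_key.json` so the commands are self-contained; in practice, point them at `~/.near/validator_key.json`:

```shell
# Self-contained mock; the real file lives at ~/.near/validator_key.json.
mkdir -p /tmp/near-keys-demo
cat > /tmp/near-keys-demo/validator_key.json <<'EOF'
{"account_id": "test.near", "public_key": "ed25519:EXAMPLEKEY", "secret_key": "ed25519:EXAMPLESECRET"}
EOF

# Print the validator account and its public key.
python3 -c "
import json
k = json.load(open('/tmp/near-keys-demo/validator_key.json'))
print(k['account_id'], k['public_key'])
"
```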
### 4. Run the node {#run-the-node}
To run your node, simply run the following command:
```bash
$ ./target/release/neard --home ~/.near run
```
That's all. The node is running, and you can see log outputs in your console.
## `testnet` {#testnet}
### 1. Clone `nearcore` project from GitHub {#clone-nearcore-project-from-github-1}
First, clone the [`nearcore` repository](https://github.com/near/nearcore).
```bash
$ git clone https://github.com/near/nearcore
$ cd nearcore
$ git fetch origin --tags
```
Check out the branch you need if it is not `master` (the default). The latest release is recommended. Please check the [releases page on GitHub](https://github.com/near/nearcore/releases).
```bash
$ git checkout tags/1.35.0 -b mynode
```
### 2. Compile `nearcore` binary {#compile-nearcore-binary-1}
In the `nearcore` folder run the following commands:
```bash
$ make neard
```
This will start the compilation process. It will take some time
depending on your machine's power (e.g. an i9 8-core CPU with 32 GB RAM
and an SSD takes approximately 25 minutes). Note that compilation needs over
1 GB of memory per virtual core the machine has. If the build fails
with processes being killed, try reducing the number of
parallel jobs, for example: `CARGO_BUILD_JOBS=8 make neard`.
By the way, if you're familiar with Cargo, you might wonder why not
run `cargo build -p neard --release` instead. While this produces
a binary, the result is a less optimized version: building via `make neard`
enables link-time optimization, which is disabled by default.
The binary path is `target/release/neard`.
### 3. Initialize working directory {#initialize-working-directory-1}
The NEAR node requires a working directory with a couple of configuration files. Generate the initial required working directory by running:
```bash
$ ./target/release/neard --home ~/.near init --chain-id testnet --download-genesis --download-config
```
> You can skip the `--home` argument if you are fine with the default working directory in `~/.near`. If not, pass your preferred location.
This command will create the required directory structure and will generate `config.json`, `node_key.json`, and `genesis.json` files for `testnet` network.
- `config.json` - Neard node configuration parameters.
- `genesis.json` - A file with all the data the network started with at genesis. This contains initial accounts, contracts, access keys, and other records which represent the initial state of the blockchain.
- `node_key.json` - A file which contains a public and private key for the node. Also includes an optional `account_id` parameter.
- `data/` - A folder in which a NEAR node will write its state.
> **Heads up**
> The genesis file for `testnet` is big (6 GB+), so this command will run for a while with no progress shown.
### 4. Get data backup {#get-data-backup}
The node is ready to be started. When started as-is, it will establish
a connection to the network and start downloading the latest state. This
may take a while, so an alternative is to download [Node Data Snapshots](/intro/node-data-snapshots),
which will speed up syncing. The short of it is to install the AWS
CLI and run:
```bash
$ aws s3 --no-sign-request cp s3://near-protocol-public/backups/testnet/rpc/latest .
$ latest=$(cat latest)
$ aws s3 --no-sign-request cp --recursive s3://near-protocol-public/backups/testnet/rpc/$latest ~/.near/data
```
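Before starting the download, it may be worth confirming that the disk holding `~/.near` has room for the snapshot. A sketch using GNU coreutils' `df` (it checks the filesystem under `$HOME`; adjust the path if your data directory lives elsewhere):

```shell
# Print free space (in GB) on the filesystem that holds $HOME/.near.
avail_kb=$(df --output=avail -k "$HOME" | tail -n 1)
echo "Available: $((avail_kb / 1024 / 1024)) GB"
```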
> **Heads up**
> An RPC node stores around 500GB of data on disk. Furthermore, it
> requires SSD to be able to keep up with network. Make sure that you
> have enough free space on a fast-enough disk.
Note that you don’t have to perform this step if you prefer a fully
decentralized experience when the node downloads data from the NEAR
network.
### 5. Run the node {#run-the-node}
To start your node simply run the following command:
```bash
$ ./target/release/neard --home ~/.near run
```
That's all. The node is running, and you can see log outputs in your console. It will download a bit of data missing since the last backup, but it shouldn't take much time.
### 6. Prepare to become a validator {#prepare-validator-1}
To start validating, we need to prepare by installing Node.js. Check the [Nodesource repository](https://github.com/nodesource/distributions) for details on how to install Node.js on your distro. For Ubuntu, this is done as follows:
```bash
$ sudo apt-get update
$ sudo apt-get install -y ca-certificates curl gnupg
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
$ echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list
$ sudo apt-get update
$ sudo apt-get install nodejs -y
$ sudo apt-get install npm -y
$ sudo npm install -g near-cli
```
### 7. Install and check near-cli
Next we'll need to install near-cli with npm:
```bash
$ sudo npm install -g near-cli
$ export NEAR_ENV=testnet
$ near validators current
```
You should see a list of the current validators for the network.
To make `NEAR_ENV` persistent, add it to your bashrc:
```bash
$ echo 'export NEAR_ENV=testnet' >> ~/.bashrc
```
### 8. Create a wallet {#create-wallet}
- TestNet: https://wallet.testnet.near.org/
>Note: this wallet is deprecated in favor of other wallets (e.g. https://app.mynearwallet.com/), and near-cli will be updated soon to reflect this.
### 9. Authorize Wallet Locally
A full access key needs to be installed locally to be able to send transactions via NEAR-CLI.
* You need to run this command:
```bash
$ near login
```
> Note: This command launches a web browser allowing for the authorization of a full access key to be copied locally.
1 – Copy the link in your browser
![img](/images/1.png)
2 – Grant Access to Near CLI
![img](/images/3.png)
3 – After Grant, you will see a page like this, go back to console
![img](/images/4.png)
4 – Enter your wallet and press Enter
![img](/images/5.png)
>Note: wallet.testnet.near.org is deprecated in favor of other wallets (e.g. https://app.mynearwallet.com/), and near-cli will be updated soon to reflect this.
### 10. Prepare validator key
When steps #8 and #9 are completed, near-cli will have created a key in your `~/.near-credentials/testnet/` directory. We will use this key for our validator, so copy it to the `.near` directory, append the pool factory to the account id, and rename `private_key` to `secret_key`:
```bash
$ cp ~/.near-credentials/testnet/<accountId>.testnet.json ~/.near/validator_key.json
$ sed -i -e "s/<accountId>.testnet/<accountId>.pool.f863973.m0/g" ~/.near/validator_key.json
$ sed -i -e 's/private_key/secret_key/g' ~/.near/validator_key.json
```
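The two `sed` invocations just rewrite strings inside the JSON file. Here is the same transformation run against a mock credentials file, so you can check the effect before touching your real key (`alice` is a placeholder account name):

```shell
# Mock of what near-cli writes to ~/.near-credentials/testnet/.
cat > /tmp/validator_key_demo.json <<'EOF'
{"account_id":"alice.testnet","public_key":"ed25519:PUB","private_key":"ed25519:PRIV"}
EOF

# Point the key at the pool account and rename private_key -> secret_key.
sed -i -e "s/alice.testnet/alice.pool.f863973.m0/g" /tmp/validator_key_demo.json
sed -i -e 's/private_key/secret_key/g' /tmp/validator_key_demo.json
cat /tmp/validator_key_demo.json
```

The output should show `alice.pool.f863973.m0` as the `account_id` and a `secret_key` field instead of `private_key`.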
### 11. Deploy a staking pool
To create a staking pool on the network, we need to call `create_staking_pool` on the staking pool factory with the required parameters; the factory deploys the pool to the indicated accountId:
```bash
$ near call pool.f863973.m0 create_staking_pool '{"staking_pool_id": "<pool_name>", "owner_id": "<pool_owner_accountId>", "stake_public_key": "<public_key>", "reward_fee_fraction": {"numerator": <fee>, "denominator": 100}}' --accountId="<accountId>" --amount=30 --gas=300000000000000
```
From the command above, you need to replace:
* **Pool Name**: Staking pool name, the factory automatically adds its name to this parameter, creating {pool_name}.{staking_pool_factory}
Examples:
- `myamazingpool.pool.f863973.m0`
- `futureisnearyes.pool.f863973.m0`
* **Pool Owner ID**: The NEAR account that will manage the staking pool. Usually your main NEAR account.
* **Public Key**: The public key from your **validator_key.json** file.
* **Fee**: The fee the pool will charge, as a percentage in the 0-100 range.
* **Account Id**: The NEAR account deploying the staking pool. This needs to be a named account initialized within near-cli (present in the `~/.near-credentials/testnet/` directory and existing on the network). It can be the same account as the pool owner id.
> Be sure to have at least 30 NEAR available; it is the minimum required for storage.
You will see something like this:
![img](/images/pool.png)
If there is a “True” at the end, your pool was created.
To change the pool parameters, such as changing the amount of commission charged to 1% in the example below, use this command:
```bash
$ near call <pool_name> update_reward_fee_fraction '{"reward_fee_fraction": {"numerator": 1, "denominator": 100}}' --accountId <account_id> --gas=300000000000000
```
### 12. Propose to start validating
> NOTE: Validator must be fully synced before issuing a proposal or depositing funds. Check the neard logs to see if syncing is completed.
In order to get a validator seat, you must first submit a proposal with an appropriate amount of stake. Proposals are sent for epoch +2, meaning that if you send a proposal now and it is approved, you would get the seat in 3 epochs. You should submit a proposal every epoch to ensure your seat. To send a proposal, we use the ping command. A proposal is also sent if a stake or unstake command is sent to the staking pool contract.
To note, a ping also updates the staking balances for your delegators. A ping should be issued each epoch to keep reported rewards current on the pool contract.
#### Deposit and Stake NEAR
Deposit tokens to a pool (this can be done using any account, not necessarily the one created/used in the steps above):
```bash
$ near call <staking_pool_id> deposit_and_stake --amount <amount> --accountId <accountId> --gas=300000000000000
```
#### Ping
A ping issues a new proposal and updates the staking balances for your delegators. A ping should be issued each epoch to keep reported rewards current.
Command:
```bash
$ near call <staking_pool_id> ping '{}' --accountId <accountId> --gas=300000000000000
```
Once the above is completed, verify your validator proposal status:
```bash
$ near proposals
```
Your validator pool should have **"Proposal(Accepted)"** status
## `mainnet` {#mainnet}
### 1. Clone `nearcore` project from GitHub {#clone-nearcore-project-from-github-2}
First, clone the [`nearcore` repository](https://github.com/near/nearcore).
```bash
$ git clone https://github.com/near/nearcore
$ cd nearcore
$ git fetch origin --tags
```
Next, check out the release branch you need if you will not be using the
default `master` branch. Please check the [releases page on
GitHub](https://github.com/near/nearcore/releases) for the latest
release.
For more information on choosing between `master` and latest release branch [ [click here](/validator/compile-and-run-a-node#choosing-your-nearcore-version) ].
```bash
$ git checkout tags/1.26.1 -b mynode
```
### 2. Compile `nearcore` binary {#compile-nearcore-binary-2}
In the `nearcore` folder run the following commands:
```bash
$ make neard
```
This will start the compilation process. It will take some time
depending on your machine's power (e.g. an i9 8-core CPU with 32 GB RAM
and an SSD takes approximately 25 minutes). Note that compilation needs over
1 GB of memory per virtual core the machine has. If the build fails
with processes being killed, try reducing the number of
parallel jobs, for example: `CARGO_BUILD_JOBS=8 make neard`.
By the way, if you're familiar with Cargo, you might wonder why not
run `cargo build -p neard --release` instead. While this produces
a binary, the result is a less optimized version: building via `make neard`
enables link-time optimization, which is disabled by default.
The binary path is `target/release/neard`.
### 3. Initialize working directory {#initialize-working-directory-2}
The NEAR node requires a working directory with a couple of configuration files. Generate the initial required working directory by running:
```bash
$ ./target/release/neard --home ~/.near init --chain-id mainnet --download-config
```
> You can skip the `--home` argument if you are fine with the default working directory in `~/.near`. If not, pass your preferred location.
This command will create the required directory structure, generating `config.json` and `node_key.json` and downloading a `genesis.json` for `mainnet`.
- `config.json` - Neard node configuration parameters.
- `genesis.json` - A file with all the data the network started with at genesis. This contains initial accounts, contracts, access keys, and other records which represent the initial state of the blockchain.
- `node_key.json` - A file which contains a public and private key for the node. Also includes an optional `account_id` parameter which is required to run a validator node (not covered in this doc).
- `data/` - A folder in which a NEAR node will write its state.
### 4. Get data backup {#get-data-backup-1}
The node is ready to be started. When started as-is, it will establish
a connection to the network and start downloading the latest state. This
may take a while, so an alternative is to download [Node Data Snapshots](/intro/node-data-snapshots),
which will speed up syncing. The short of it is to install the AWS
CLI and run:
```bash
$ aws s3 --no-sign-request cp s3://near-protocol-public/backups/mainnet/rpc/latest .
$ latest=$(cat latest)
$ aws s3 --no-sign-request cp --recursive s3://near-protocol-public/backups/mainnet/rpc/$latest ~/.near/data
```
> **Heads up**
> An RPC node stores around 500GB of data on disk. Furthermore, it
> requires SSD to be able to keep up with network. Make sure that you
> have enough free space on a fast-enough disk.
Note that you don’t have to perform this step if you prefer a fully
decentralized experience when the node downloads data from the NEAR
network.
### 5. Run the node {#run-the-node-1}
To start your node simply run the following command:
```bash
$ ./target/release/neard --home ~/.near run
```
The node is running, and you can see log outputs in your console. It will download the data missing since the last snapshot, but it shouldn't take much time.
### 6. Prepare to become a validator {#prepare-validator-1}
To start validating, we need to prepare by installing Node.js. Check the [Nodesource repository](https://github.com/nodesource/distributions) for details on how to install Node.js on your distro. For Ubuntu, this is done as follows:
```bash
$ sudo apt-get update
$ sudo apt-get install -y ca-certificates curl gnupg
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
$ echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list
$ sudo apt-get update
$ sudo apt-get install nodejs -y
$ sudo apt-get install npm -y
$ sudo npm install -g near-cli
```
### 7. Install and check near-cli
Next we'll need to install near-cli with npm:
```bash
$ sudo npm install -g near-cli
$ export NEAR_ENV=mainnet
$ near validators current
```
You should see a list of the current validators for the network.
To make `NEAR_ENV` persistent, add it to your bashrc:
```bash
$ echo 'export NEAR_ENV=mainnet' >> ~/.bashrc
```
### 8. Create a wallet {#create-wallet}
- MainNet: https://wallet.near.org/
>Note: this wallet is deprecated in favor of other wallets (e.g. https://app.mynearwallet.com/), and near-cli will be updated soon to reflect this.
### 9. Authorize Wallet Locally
A full access key needs to be installed locally to be able to send transactions via NEAR-CLI.
* You need to run this command:
```bash
$ near login
```
> Note: This command launches a web browser allowing for the authorization of a full access key to be copied locally.
1 – Copy the link in your browser
![img](/images/1.png)
2 – Grant Access to Near CLI
![img](/images/3.png)
3 – After Grant, you will see a page like this, go back to console
![img](/images/4.png)
4 – Enter your wallet and press Enter
![img](/images/5.png)
>Note: this wallet is deprecated in favor of other wallets (e.g. https://app.mynearwallet.com/), and near-cli will be updated soon to reflect this.
### 10. Prepare validator key
When steps #8 and #9 are completed, near-cli will have created a key in your `~/.near-credentials/mainnet/` directory. We will use this key for our validator, so copy it to the `.near` directory, append the pool factory to the account id, and rename `private_key` to `secret_key`:
```bash
$ cp ~/.near-credentials/mainnet/<accountId>.mainnet.json ~/.near/validator_key.json
$ sed -i -e "s/<accountId>.mainnet/<accountId>.poolv1.near/g" ~/.near/validator_key.json
$ sed -i -e 's/private_key/secret_key/g' ~/.near/validator_key.json
```
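After the rename, it is worth checking that the file is still valid JSON and carries the fields a validator key needs (`account_id`, `public_key`, `secret_key`). A self-contained sketch against a mock file; in practice, substitute `~/.near/validator_key.json`:

```shell
# Mock result of step 10; use ~/.near/validator_key.json in practice.
cat > /tmp/mainnet_validator_key.json <<'EOF'
{"account_id":"example.poolv1.near","public_key":"ed25519:PUB","secret_key":"ed25519:PRIV"}
EOF

# Fail loudly if the file is not valid JSON or a field is missing.
python3 -c "
import json, sys
k = json.load(open('/tmp/mainnet_validator_key.json'))
missing = [f for f in ('account_id', 'public_key', 'secret_key') if f not in k]
sys.exit('missing fields: %r' % missing if missing else 0)
" && echo "validator key looks OK"
```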
### 11. Deploy a staking pool
To create a staking pool on the network, we need to call `create_staking_pool` on the staking pool factory with the required parameters; the factory deploys the pool to the indicated accountId:
```bash
$ near call poolv1.near create_staking_pool '{"staking_pool_id": "<pool_name>", "owner_id": "<pool_owner_accountId>", "stake_public_key": "<public_key>", "reward_fee_fraction": {"numerator": <fee>, "denominator": 100}}' --accountId="<accountId>" --amount=30 --gas=300000000000000
```
From the command above, you need to replace:
* **Pool Name**: Staking pool name, the factory automatically adds its name to this parameter, creating {pool_name}.{staking_pool_factory}
Examples:
- `myamazingpool.poolv1.near`
- `futureisnearyes.poolv1.near`
* **Pool Owner ID**: The NEAR account that will manage the staking pool. Usually your main NEAR account.
* **Public Key**: The public key from your **validator_key.json** file.
* **Fee**: The fee the pool will charge, as a percentage in the 0-100 range.
* **Account Id**: The NEAR account deploying the staking pool. This needs to be a named account initialized within near-cli (present in the `~/.near-credentials/mainnet/` directory and existing on the network). It can be the same account as the pool owner id.
> Be sure to have at least 30 NEAR available; it is the minimum required for storage.
You will see something like this:
![img](/images/pool.png)
If there is a “True” at the end, your pool was created.
To change the pool parameters, such as changing the amount of commission charged to 1% in the example below, use this command:
```bash
$ near call <pool_name> update_reward_fee_fraction '{"reward_fee_fraction": {"numerator": 1, "denominator": 100}}' --accountId <account_id> --gas=300000000000000
```
### 12. Propose to start validating
> NOTE: Validator must be fully synced before issuing a proposal or depositing funds. Check the neard logs to see if syncing is completed.
In order to get a validator seat, you must first submit a proposal with an appropriate amount of stake. Proposals are sent for epoch +2, meaning that if you send a proposal now and it is approved, you would get the seat in 3 epochs. You should submit a proposal every epoch to ensure your seat. To send a proposal, we use the ping command. A proposal is also sent if a stake or unstake command is sent to the staking pool contract.
To note, a ping also updates the staking balances for your delegators. A ping should be issued each epoch to keep reported rewards current on the pool contract.
#### Deposit and Stake NEAR
Deposit tokens to a pool (this can be done using any account, not necessarily the one created/used in the steps above):
```bash
$ near call <staking_pool_id> deposit_and_stake --amount <amount> --accountId <accountId> --gas=300000000000000
```
#### Ping
A ping issues a new proposal and updates the staking balances for your delegators. A ping should be issued each epoch to keep reported rewards current.
Command:
```bash
$ near call <staking_pool_id> ping '{}' --accountId <accountId> --gas=300000000000000
```
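Since a ping should go out every epoch, operators often automate it. A minimal cron sketch, assuming a roughly 12-hour mainnet epoch and the same placeholder names as above (adjust the schedule, the placeholders, and the path to the `near` binary for your setup):

```
# crontab entry: ping the pool twice a day (placeholders as in the commands above)
0 */12 * * * NEAR_ENV=mainnet near call <staking_pool_id> ping '{}' --accountId <accountId> --gas=300000000000000
```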
Once the above is completed, verify your validator proposal status:
```bash
$ near proposals
```
Your validator pool should have **"Proposal(Accepted)"** status
> Got a question? [Ask it on StackOverflow!](https://stackoverflow.com/questions/tagged/nearprotocol)
# NEAR Transparency Report: November 25
NEAR FOUNDATION
November 25, 2022
As part of the Foundation’s commitment to transparency, each week it will publish data to help the NEAR community understand the health of the ecosystem. This will be on top of the quarterly reports and the now-monthly funding reports.
You can find the quarterly reports here.
You can find monthly reports on funding here.
Last week’s transparency report can be found here.
This week, the Foundation is focusing on the state of token supply in the ecosystem.
## The importance of transparency
The NEAR Foundation has always held transparency as one of its core beliefs. Being open to the community, investors, builders and creators is one of the core tenets of being a Web3 project. But it’s become apparent the Foundation needs to do more.
The Foundation hears the frustration from the community, and it wants to be more proactive about when and how it communicates.
## Total Supply
At Genesis, the NEAR blockchain was created with one billion tokens. Since then, the total supply has increased to 1.1 billion. This number steadily rises due to inflation. For reference, 90% of the 5% annual inflation is sent to validators to be paid out as staking rewards, with the remaining 10% returned to the NEAR Foundation Treasury.
The total number of live accounts has been increasing rapidly, up to a total of 22 million. At present, new accounts are created on NEAR at an average rate of 35,000 to 38,000 per day. This is down from the week before, which averaged 37,000-39,000 new accounts per day.
## Circulating Supply Statistics
Below is a breakdown of the circulating supply. The blue section represents the total number of $NEAR, and the green section represents tokens not currently locked in lockup contracts.
The circulating supply has been steadily increasing at a faster rate than the total supply. In the past week, 3 million more $NEAR moved from lockup contracts into circulation, taking the number of circulating tokens from 827 million to 830 million. This is typically due to further unlocks within both the NEAR Foundation Treasury and contractual vesting schedules.
## Active Accounts
The most active accounts on NEAR over the past 14 days are displayed here. The most active account is relay.aurora, the main conduit by which Aurora moves transactions between Ethereum and NEAR. In the past two weeks, it recorded more than 900,000 transactions.
The second most active account was oracle.sweat, which allows SWEAT users to communicate with the NEAR blockchain. The oracle.sweat account made just under 800,000 transactions in the last two weeks. The remaining accounts collectively recorded more than a million transactions in the time period.
## Daily Number of Transactions
The daily number of transactions is a record of how many times the blockchain logged a transaction. The earliest data available is from the second week of November. The data shows that daily transactions have been trending upwards and sit just under one million transactions per day. This is significant growth from the previous average of 400,000 per day.
These reports will be generated each week and published on Friday. |