diff --git "a/FAB_deep-se.csv" "b/FAB_deep-se.csv" deleted file mode 100644--- "a/FAB_deep-se.csv" +++ /dev/null @@ -1,304 +0,0 @@ -"issuekey","created","title","description","storypoint" -"FAB-30","08/11/2016 19:32:40","Endorser simulation of transactions.","Story: Developer needs the ability to gain signatures from endorsers as defined by a policy by simulating a transaction without writing to the ledger create, so that they can submit a transaction. Endorsers simulate tx with state and reply with rw set of all keys that were modified. A submitter composes a transaction consisting of [header, payload, rw-set] where rw-set is the set containing the state variables that the transaction reads from (read-set) and the state variables that the transaction writes to (write-set). The rw-set is created via transaction execution simulation (not writing to database). This simulation is also done by the endorsers of the transaction to fulfill the endorsement policy of the transaction.",2 -"FAB-149","08/17/2016 15:45:27","BFT Orderers need to sign batches","BFT orderers have to provide signatures so that validator peers can use the set of signatures to accept validity of a batch without having to connect to f+1 orderers. All orderers already sign with their own key.",2 -"FAB-192","08/20/2016 11:56:13","Ledger advanced simulation: As a consumer of the ledger APIs (Endorser peer chaincode), I need the ability to manipulate keys in various ways, so that I have flexibility in chaincode to get and set the state I need","- Implement QueryExecutor.GetStateRangeScanIterator() - Implement QueryExecutor.GetStateMultipleKeys() - Implement TxSimulator.DeleteState() - Implement TxSimulator.SetStateMultipleKeys()",2 -"FAB-196","08/20/2016 12:11:34","Ledger APIs to query Blocks/Transactions, including rich query and history of key values","- Ledger APIs have been implemented and need to be exposed to SDK: -- GetBlockchainInfo -- GetBlocksByNumber -- GetBlockByNumber -- GetBlockByHash -- GetTransactionById -- ExecuteQuery ",8 -"FAB-200","08/20/2016 12:27:10","Ledger simple provenance: I want to see the full history of key values across time","Querying for full history of keys will require a new history database. See attached slides for details.",8 -"FAB-254","08/26/2016 20:00:01","node.js SDK - add API to send endorsed deployment transaction to orderer","This initial phase is to send to an orderer based on SOLO. The API should take a proper parameter assembled from the returned data of the sendDeploymentProposal() call, and send to the orderer for consensus, which will then deliver to committers for validating and committing to ledger. Note that the event notification on transaction ""complete"", ""error"" will be tracked separately.",8 -"FAB-277","08/31/2016 01:36:29","Enable power node in Jenkins CI","@bsmitha requested us to setup Power node in Jenkins CI.",2 -"FAB-318","09/07/2016 19:59:37","[node-SDK] Technical debt","This will help with creating a cleaner workspace for node.js developers working on the node.js SDK, and eliminate sub-optimal parts that are prone to breaks: - build scripts that does -g installs - reliance on vagrant - ""make"" based build scripts are not best suited for node.js development",8 -"FAB-325","09/08/2016 21:09:24","Create Rawledger API","Create a Rawledger API which can be used by any orderer. 
This should also cover moving the ramledger implementation of Solo out into its own package, and converting solo to depend on the new abstraction.",2 -"FAB-349","09/12/2016 11:33:23","Control RocksDB logging","The Fabric peer uses default configurations for RocksDB logging, which produces massive logs in the LOG files. To better manage RocksDB logs, RocksDB allows setting the following log-related properties: 1) max_log_file_size, 2) keep_log_file_num, and 3) info_log_level. Though a newer version (4.3) of RocksDB allows loading properties from an `ini` file (https://github.com/facebook/rocksdb/wiki/RocksDB-Options-File), the gorocksdb package has yet to catch up and expose this option in golang. The current possibility is to expose these configs in core.yaml and programmatically set them on the RocksDB options while opening the DB. ",1 -"FAB-355","09/12/2016 15:29:46","As an infrastructure developer and application developer I want to have available a software-based crypto provider implementing the interface of FAB-354 in golang.","Implement a software provider implementing the interface described in FAB-354 in Golang to be used by the peer core code.",8 -"FAB-356","09/12/2016 15:31:11","As an infrastructure developer I want to ensure that all crypto calls within the nodejs client go through the fabric crypto library (BCCSP)","This requires that the library defined in FAB-354, whose software provider was implemented in FAB-355/FAB-823, is used throughout the fabric client SDK in Node. ",8 -"FAB-359","09/12/2016 20:14:48","Bootstrapping a blockchain network with Orderers and Peers","Bootstrapping a Fabric network includes bootstrapping two components: Orderers and Peers. Orderers make up a network of nodes with mesh connections. The Peer network uses gossip to communicate internally but connects directly to the Orderer network. All connections are TLS. Each Orderer is configured by an administrative CLI command, which provides the necessary bootstrapping data, including a list of trusted roots, a list of Orderer certificates and IP addresses, a set of specific consensus algorithm properties, and access control policies. With this information, an Orderer may start up and connect with other Orderers. Each Peer requires at least the following configurations: # An enrollment certificate to participate in the network. The certificate may come from any CA as long as the CA is part of the trusted roots on the consensus service that the Peer will connect to # A list of the Orderer certificates and IP addresses, which the administrative CLI from the consensus service can generate # A list of trusted root certificates # An optional list of channels that the Peer should subscribe to. Unless explicitly configured, the Peer does not subscribe to any channels upon starting up ",8 -"FAB-383","09/13/2016 15:00:42","Verify manual state transfer works","As we discussed over the weekend, following Gari's suggestion. Needed in case things go terribly wrong.",1 -"FAB-386","09/13/2016 15:44:21","Add robust configuration mechanism for the orderer package","The orderer service uses piecemeal configuration via environment variables today. This should be done using Viper, but using the newer API which allows erroring on extraneous variables. In the event that the newer version of Viper is not available, the similar but older version of the call can be used temporarily. 
This should be re-usable across Solo, Kafka, and the RawLedger implementations.",2 -"FAB-418","09/21/2016 18:40:01","Enhance atomicbroadcast API to support channels","In order to support delivering batches to only a subset of peers, the atomicbroadcast API must be enhanced to allow channel creation, channel membership polling or notifications, and broadcast or delivery to one or more channels. This will require modifications to `atomicbroadcast/ab.proto`. This will cascade into modifications of solo to support the new API.",8 -"FAB-467","09/27/2016 01:23:13","Add support for orderer/config package","Switch over the Kafka orderer to the config package introduced in FAB-386: https://gerrit.hyperledger.org/r/1153",2 -"FAB-468","09/27/2016 01:31:08","Add support for MaxWindowSize and QueueSize","MaxWindowSize sets an upper bound on a client's requested window, and QueueSize sets an upper bound on the maximum number of messages a client may have pending. These are not strictly necessary, but they are good features and they align the Kafka orderer with the Solo one.",2 -"FAB-470","09/27/2016 05:05:08","Make logging level configurable for solo orderer and config package","I think a command-line flag (as we do in the Kafka orderer) will do to begin with, but we probably want to add this to the YAML file as well.",2 -"FAB-473","09/27/2016 12:56:55","sbft: create standalone consensus peer","Create an executable that will act as an atomic broadcast client and use sbft as the backend",8 -"FAB-474","09/27/2016 13:14:27","generic censorship prevention and duplicate request purge for BFT consensus","We want to prevent the primary of sBFT (but really any replica in any BFT atomic broadcast) from censoring requests (i.e., dropping individual requests). Proposal: Per discussion with [~vukolic], this could be addressed by a generic component (not part of the sbft core, nor specific to sbft), which keeps track of new requests (""fresh""), in-flight requests (""pending"", only at primary), and recently completed requests. Timestamped entries are serviced infrequently (on a several-second scale), and fresh requests will be brought to the attention of the remaining network, including the primary. When a second, longer, timeout expires, the component signals to the atomic broadcast implementation that the leader should be changed. Every time the atomic broadcast implementation observes a change in leader, this is communicated to the component and timeouts are adjusted to give the new leader time to act. The leader also uses the registry of fresh requests to assemble a new batch. This sounds deceptively simple and probably will turn out to be more complicated than expected.",8 -"FAB-475","09/27/2016 13:23:06","generic check for correct requests","The consensus network should not add garbage data to the batches it produces. Especially under byzantine assumptions, it is possible that byzantine nodes (client, primary) introduce invalid data into the batches. To protect against this, requests proposed by a leader need to be validated for a correct signature from an authorized client. This way a leader (or unauthorized client) cannot add invalid requests. This can be implemented as a generic component that can be shared by different consensus implementations. In the case of BFT, replicas check the batch preprepared by the primary and perform a view change when requests are invalid. 
This component would be used by FAB-474 to filter requests before they get added to the fresh pool.",8 -"FAB-492","09/27/2016 20:10:00","leverage Go obcsdk to use node.js API instead of REST","Our Go obcsdk is used by a set of tests written in Go for consensus acceptance and regression, ledger stress tests, concurrency, and long-run testing. To talk to the Peers, client threads use REST calls. To leverage all these tests in v1.0 and beyond using node.js (especially needed if REST is deprecated), we must: # Decouple the REST functions from the Go application layer, and create a well-defined API. # Implement the node.js SDK communication interface methods, to perform the same functions as the existing REST functions. # Provide a test environment variable for the Go application tests to choose either REST or node.js. ",8 -"FAB-496","09/27/2016 21:49:02","Generate initial configuration transaction for orderer genesis block","The orderer needs to be able to take its initial configuration and deterministically generate a configuration transaction which all orderers will agree on. This transaction should be well formed, and must be able to be interpreted by both the ordering service and the peer.",5 -"FAB-497","09/27/2016 21:51:24","Create system chaincode at the peer to handle configuration changes","When the peer receives a configuration transaction, as defined in FAB-496, it needs to call the Configuration System Chaincode (CSCC) which performs the following: # map the changes into the correct read-write-set for the ledger to store # process the configuration changes if applicable; e.g., removing a member CA from the chain, calling gossip # commit the transaction to the ledger CSCC will provide the following functions: # process ""join channel"" for the peer to start receiving transactions on the specified channel # process transaction on commit # process ""queries"" to return configuration data on channel (status, members, crypto) For now, we will not require any endorsement of the transaction, though this may be added in the future. ",3 -"FAB-507","09/28/2016 15:14:51","Each solo orderer client should have a separate flow control window","The QueueSize configuration option is being applied across all orderer clients. Change this to apply on a per-client basis.",2 -"FAB-526","09/29/2016 21:52:04","Add reconnection logic","If the Kafka brokers are not up yet, the orderer should attempt to reconnect to them every X seconds for a period of Y seconds, instead of panicking right away. 
This means there's no need for ""sleep 5s"" hacks when bringing up a network for BDD tests via Docker Compose.",1 -"FAB-539","09/30/2016 03:34:29","Implement crypto in python SDK","Follow the SDK spec and refer to the Node.js SDK implementation.",5 -"FAB-580","10/02/2016 13:38:58","As a fabric developer working on endorser logic I want to do all the required security checks on a client proposal","This is in accordance with FAB-351 and FAB-489.",8 -"FAB-600","10/05/2016 11:06:57","As an endorser performing simulation, I want the state database to always be in sync with the blockchain ledger on the file system","ledgernext was initially delivered in sprint 1 to quickly unblock dependent components. In this task, the simulation and state management code will be hardened, for example, to ensure the state index is always consistent with the blockchain ledger.",8 -"FAB-601","10/05/2016 11:22:45","Ledger versioning scheme: I want to use the block/transaction height as a variable's version, instead of an incrementing version number, so that I have traceability between state data and transaction data","Ledger versioning scheme: I want to use the block/transaction height as a variable's version, instead of an incrementing version number, so that I have traceability between state data and transaction data",2 -"FAB-617","10/06/2016 15:38:33","Add headless unit tests for Peer.js and Member.js","So that each function is tested for handling good and bad params, etc.",2 -"FAB-618","10/06/2016 15:40:48","Add error paths to endorser-tests.js","In the initial changeset there's only the happy path. Need to add more error paths, like what's done in ca-tests.js.",2 -"FAB-659","10/11/2016 15:50:10","Merge master branch's bug fixes and tests into fabric-sdk-node","The new repository ""fabric-sdk-node"" was based on a pre-v0.6 version of the master branch. The master branch has since been further enhanced with bug fixes. We want to bring those into the new repo where it makes sense. [~pmullaney] has already started working on migrating over the event hub feature into the new repo, so this effort doesn't need to take into account any code related to event hub.",8 -"FAB-666","10/11/2016 16:07:53","As an orderer I want to retrieve the genesis block and join the network","Implement a bootstrap startup procedure that accepts a genesis block file containing configuration for the orderer. Make sure the data is organized as specified in FAB-359.",2 -"FAB-701","10/13/2016 15:04:46","Connections between shims and Kafka cluster should be authenticated","As of 0.9, Kafka supports TLS for client/broker and inter-broker communication. We need to enable it, and also make sure that when a shim or broker is added/removed, the ACLs are updated accordingly. There are no APIs provided by Kafka for this, so we will have to resort to executing scripts. https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Authorization+Command+Line+Interface http://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/ Some more context here: https://hyperledgerproject.slack.com/archives/fabric-consensus-dev/p1477597906002876 Feel free to break this story into tasks, but let's use this as a point of reference for comments and discussion.",8 -"FAB-702","10/13/2016 15:34:27","Dynamic set of peers.","The dissemination layer has to support dynamic addition of new peer members; it has to be able to recover from network partitions and congestion. Moreover, it has to preserve node state integrity. 
Newly joining or lagging peers will use the capabilities of this module to synchronize their state from neighboring peers. ",8 -"FAB-704","10/13/2016 16:51:35","Create signature validation DSL for use by filtering framework and later more broad use","At a high level, the initial configuration defines things like ""Only allow clients who have a certificate which is signed by one of these 6 parties"", but there is no obvious way to express this in a data structure. Especially as more complicated signature schemes must be specified, such as ""It must be signed by the network owner, and it must also have 4 of the 7 stakeholders sign off as well"", the need for a domain-specific language arises to express these concepts. This should be defined in protobuf, as it is how we store and marshal things on the blockchain today.",2 -"FAB-705","10/13/2016 17:03:54","Create a policy manager","With the ability to specify policy in FAB-704, it is necessary for the orderer to track what policies are currently in effect and be able to evaluate whether a policy is satisfied by a given transaction. Therefore, a framework for updating and evaluating policies is necessary.",2 -"FAB-706","10/13/2016 18:12:59","Create orderer common components dir","The complexity of introducing pluggable pieces for filtering and validation into ordering means the 'throw everything into the orderer directory' policy no longer makes sense. The directory structure should be re-arranged.",1 -"FAB-742","10/18/2016 15:00:15","Create and define gossiping node meta object","Once a new node appears in the network, it has to complete its state based on the ledger height of other nodes. This information is disseminated by the gossip keep-alive mechanism, i.e., each node transfers its meta-state, which should include information about the ledger. We need to provide the ability to store relevant info within nodes, and to serialize and de-serialize this information.",1 -"FAB-761","10/19/2016 23:11:46","Orderer gRPC API","Define a gRPC API for a client to submit a transaction to the Orderer (i.e., broadcast and deliver). This is currently in ab.proto, where we need to beef up the description of error conditions.",1 -"FAB-780","10/20/2016 15:57:17","Update readme for the fabric-sdk-node repo to be more contributor friendly","Should prominently include steps to build, set up the test environment, and run tests.",1 -"FAB-782","10/20/2016 17:05:49","Remove sdk/node folder from ""fabric"" project in master","[~cdaughtr] has done the work to merge all subsequent changes to v0.6 and master in the original location (fabric/sdk/node) to the new project (fabric-sdk-node), tracked in FAB-659. So there's no need to keep the old code around in fabric. [~greg.haskins] [~rameshthoomu] want to make sure you guys are aware of this planned action. Hoping to do it asap. Would there be any concerns w.r.t. the build steps and CI jobs? We'll for sure clean up the make file as part of this.",2 -"FAB-788","10/20/2016 17:51:40","As a fabric developer, I want to replace RocksDB with another key/value datastore","RocksDB carries a patent clause in its license from Facebook. The legal team is therefore not comfortable with using it. For details read: https://github.com/facebook/rocksdb/blob/master/PATENTS The alternatives include 1. LevelDB (https://github.com/google/leveldb) with a Go wrapper (https://github.com/jmhodges/levigo), 2. *goleveldb* (https://github.com/syndtr/goleveldb) - a port of leveldb in golang 3. 
BoltDB (https://github.com/boltdb/bolt). BoltDB is suitable for read-heavy workloads (e.g., LDAP) but has relatively poor performance for read-write workloads. Of the other two options, *goleveldb* is chosen because it is implemented in golang and hence easy to integrate and maintain. In addition, as a precedent, the Ethereum Go implementation also uses this package (https://github.com/ethereum/go-ethereum/blob/master/ethdb/database.go)",8 -"FAB-789","10/20/2016 17:54:56","As a fabric developer, I want to understand CouchDB performance as a ledger state database","Initial assessment is complete. Senthil, please attach a community-friendly copy of the performance assessment to this work item.",8 -"FAB-821","10/22/2016 17:51:13","Size broadcaster's batchChan appropriately","See https://gerrit.hyperledger.org/r/#/c/1627/3/orderer/kafka/broadcast.go@50 and [~bcbrock]'s comment for details. batchChan is the buffered channel where the broadcaster part of the Kafka shim holds incoming messages until it's time to cut a new block. In the current solo-like implementation, setting the number of elements of this buffer equal to the number of messages a block is expected to have is short-sighted. We are now moving towards a model where the shim just relays each message to the broker as it receives it, so the original problem goes away, but we should still come up with an appropriate capacity for this. Do we expose this as a configuration parameter with a sensible default and call it a day? (And what makes for a sensible default?)",0 -"FAB-826","10/24/2016 13:38:41","As an infrastructure developer i want to implement ACLs on (channel) events","Implement the technique designed in FAB-637 for registration (access control enforcement) to a channel's events. Extend that implementation to have a per-channel access policy/check.",8 -"FAB-828","10/24/2016 13:41:06","Create couchdb database automatically for main system ledger","The assumption is that all chaincode will run in one CouchDB database for the system ledger, and there will be separate databases for each subledger.",2 -"FAB-829","10/24/2016 13:52:14","As an application/infrastructure developer I want to design a generic membership service interface","This item reflects the work needed to design an interface for membership services operations (issuing/managing certificates, and authentication mechanisms using those certificates) in a way that the interface does not depend on the exact implementation/cryptographic primitives that the membership service uses. This interface is to be integrated into core operations of the fabric, such that a fabric deployer who wishes to substitute only the membership service component of the system is able to do so without affecting the code of fabric core/transaction processing. We need to define one interface used by the infrastructure (a verification MSP interface) and one for the client to construct transactions (retrieving the corresponding certificates) and sign with its secret key. ",8 -"FAB-833","10/24/2016 14:00:44","As an application developer I want to implement an application library to offer confidentiality of transaction data.","This task relates to the implementation of the interface defined in the previous work item, 830.",8 -"FAB-835","10/24/2016 16:10:11","As a Fabric user, I want to know how the Kafka-based ordering service works","We've designed this out in the open, and all the logic is written down in the various JIRA issues that belong to FAB-32, and in #fabric-consensus-dev. 
However, it's necessary to wrap everything up in a single document that acts as a point of reference.",2 -"FAB-839","10/24/2016 17:39:27","Create per-session buffered channel for broadcast responses","The Recv path should be kept as free of blocking as possible. An appropriately-sized buffered response channel per connected client and a non-blocking write to it is the way to go. See https://gerrit.hyperledger.org/r/#/c/1627/5/orderer/kafka/broadcast.go@129 for more details.",2 -"FAB-885","10/26/2016 19:37:34","Enable successful execution of endorser.feature without bootstrap","This will not take into account bootstrapping, but will test the engine flow from the client perspective.",1 -"FAB-892","10/27/2016 21:01:10","As an orderer I have to authenticate the peer that connects to me","The orderer/shim needs to check that the peer's certificate links back to a CA that's referenced in the channel's genesis/reconfiguration block.",2 -"FAB-900","10/28/2016 18:04:31","Test Bootstrapping","An admin would bootstrap the network and would need the following: * users and certs for each peer * orderer certs and URL/IP address for the network * any possible subchannel information if the network will be configured with such The output is: * the genesis block that contains cert info for all peers in the network * a hash that is sent to each peer for determining if the genesis block that they receive from the network is genuine. ",3 -"FAB-924","11/01/2016 17:10:04","Develop an end-2-end test to drive all v1.0 APIs","This is a happy-path test that ensures all peer APIs have not regressed or changed, so that the SDK API and peer stay in sync",2 -"FAB-925","11/02/2016 01:53:43","Improve coding styles in headless-tests.js for chaining Promise-based APIs","The current headless-tests.js has a lot of nested Promise-based calls, like below: functionA() .then( function(result) { // do stuff functionB() .then( function(result) {...}); }); This defeats the purpose of the design of Promise-based APIs; it should have been chained like below (note the ""return"" statement on the call to functionB()): functionA() .then( function(result) { // do stuff return functionB(); }) .then( function(result) {...}); ",1 -"FAB-932","11/02/2016 20:44:07","Add a gulp task for running tests with coverage reports","""gulp test"" to run the whole test bucket; ""gulp test-headless"" to run just the headless tests. Both should print a report in the output and generate the HTMLs for the coverage report",1 -"FAB-950","11/03/2016 18:14:34","Have self-contained chaincode deployment setup so no manual prep is needed","Tests that need to deploy chaincode need the following setup: - GOPATH environment variable - chaincode available at the folder corresponding to $GOPATH/src/ Have this be part of the test fixture so contributors don't have to manually set it up locally. Set process.env.GOPATH in the code to the test fixture folder just for the test execution.",1 -"FAB-993","11/06/2016 15:50:04","Create ChainPartition type","This maps to a Kafka partition, and is related to the two partitions per chain idea we're exploring for FAB-621. ",1 -"FAB-994","11/06/2016 15:53:42","Create DataHolder type","Holds all the data that the Broadcaster needs to keep track of, in order to eventually send a new block on the chain. 
There is a 1-to-1 mapping between chains and DataHolders.",1 -"FAB-995","11/06/2016 15:56:16","Rewrite Broadcaster so as to support ChainPartitions and DataHolders","The Broadcaster creates a DataHolder reference and spawns a cutBlock goroutine for every chain that is created. It keeps track of which goroutines are running, and can terminate them if need be.",2 -"FAB-996","11/06/2016 16:03:31","Add a commons/util package","The point is to not have to rewrite the same common utility functions again and again. This is what I have so far: # Marshal(pb proto.Message) []byte # Hash(data []byte) []byte # ExtractEnvelope(block *ab.Block, index int) (envelope *ab.Envelope, err error) # ExtractPayload(envelope *ab.Envelope) *ab.Payload The Hash function exists in the core util package as ComputeCryptoHash, so I may have to remove that in the end. I'm doing it now because I want to switch to SHA-2 (FAB-887).",1 -"FAB-998","11/06/2016 16:04:38","Write test client that sends configuration envelope for new chain creation","This is part of the FAB-819 story. The goal is to use the ""broadcast_configuration"" client to send out a transaction that calls for the creation of a new channel. Then we will use the ""bd_counter"" client to make sure messages show up on the new channel.",1 -"FAB-1013","11/07/2016 17:16:26","Turn validated new-chain configuration envelope into proper genesis block","The ordering node should be able to turn a validated new-chain configuration envelope into a proper genesis block. The ordering logic will then be able to get that block directly and push it to the new chain, and also set up all the related logic (cutBlock goroutine, etc.) based on the parameters extracted from that genesis block (batchTimeout, batchSize), etc.",0 -"FAB-1019","11/07/2016 21:34:27","Add unit tests for ledger applications on Fabric 1.0","Add unit tests to the example.go application in the ledger examples.",2 -"FAB-1094","11/11/2016 21:57:00","Need utilities to break out a Block and get the ConfigurationEnvelope","We have multiple places in the code where we need to unmarshal multiple things to get from Block to ConfigurationEnvelope. Create a set of utility functions that can be called from anywhere.",1 -"FAB-1141","11/17/2016 19:49:17","Bootstrap BDD","Implement the bootstrap BDD feature. This feature will form the basis on which all future features will be built w.r.t. system composition.",3 -"FAB-1151","11/19/2016 00:47:40","Side DB - Channel Private Data - experimental feature","* As a Fabric deployer, I would like to maintain data such that only its evidence is exposed to the chain, ordering service, and channel peers while the data itself is disseminated to peers based on policy, so that we can achieve finer-grained data confidentiality for transactions while still maintaining ledger consistency and still being able to leverage Fabric for both data evidence and dissemination of the data (but in a more private fashion). *Design*  Slides attached.",40 -"FAB-1174","11/21/2016 19:34:26","Specify path to orderer.yaml via an environment variable","Allow a user to specify where on the file system the orderer configuration YAML file is located. This makes the orderer consistent with the peer (and its PEER_CONFIG_PATH) and allows for easier creation of the Docker image.",1 -"FAB-1257","12/03/2016 16:00:37","As a chaincode developer, I want to use JSON-based data structures instead of table-based data structures, so that I have more control over queries","Remove Table API from Hyperledger Fabric in v1. 
* The v0.5/v0.6 Pseudo-table API does not map well to current or next-generation Fabric capabilities * Project teams have been confused and frustrated with table API limitations Encourage all new chaincode to use JSON-based data structures * Additional query benefits when using CouchDB state database * Provide JSON-based samples to help community update table-based chaincode * Initial sample: https://github.com/denyeart/table_to_json/blob/master/chaincode/table_to_json_chaincode.go In the future Fabric may add support for relational state databases * At that time it will make sense to introduce a ‘real’ table API without the limitations of the current pseudo-table API ",8 -"FAB-1278","12/05/2016 21:18:54","Introduce generic notion of execution in the orderer","For the impending chain creation work, a second type of transaction will need to be 'executed' beyond a configuration transaction. This story is to generalize the old configuration-specific code paths into a re-usable path. In particular, the broadcast filters must be generalized to be a more generic filtering mechanism. Instead of replying with the matched rule type, and then having the invoker make decisions based on the match, the filters should return a Committer which can perform those actions with no specific knowledge from the caller.",3 -"FAB-1280","12/05/2016 22:23:23","Create fabric common components directory and move orderer shared components there","The orderer has some common code which is of broader use to the fabric at large, not just the orderer. In particular, the configuration block parsing and the policy parsing and evaluation are prime candidates for being shared in a modular way between the orderer and peer (and possibly other components). This is being created in part at the request of [~muralisr] to facilitate some VSCC/ESCC work.",1 -"FAB-1298","12/07/2016 20:54:57","Remove queueing concept from Broadcast","The broadcast API suffers from a deficiency today: it immediately returns success/failure before the request has actually entered consensus. The desired behavior would be to return success only after the request has entered consensus, but this poses a problem when the broadcast queue overflows. In the event that the broadcast queue overflows, incoming requests should be rejected to alert the client to slow down. This results in the situation of: 1. Wait until after the queue drains to return the failure (which will not throttle the client and does not provide immediate feedback; this is bad) 2. Return the success before the queue drains (which will not inform the client if for whatever reason the consensus system never actually accepts the request) Since these options both have drawbacks and are mutually exclusive, a different solution is required. This story is to add windowing to the broadcast API to mirror the deliver API. If the client knows how big the buffer is at the server, then the client can delay sending new messages until it receives the success/failure after processing the queue. 
If the client violates the protocol and overflows the queue, the client can be dismissed as malicious and hung up on.",2 -"FAB-1300","12/07/2016 21:55:25","Provide metrics for the common orderer service endpoints","In order to assess the health of the ordering service, we should expose basic metrics about the state of the ordering network, like the number of clients connected, transaction rates, chains and chain heights, etc.",5 -"FAB-1302","12/07/2016 22:00:45","Add config inspection validation on chain creation transaction","Today, chain creation requests are only validated based on the fact that they are well formed, and signed according to the policy for chain creation. There is no checking to make sure that the orderers are specified correctly, that the orderer MSPs are included, etc. What these checks are needs to be determined as part of this story, as well as their implementation. There is a plug point in the multichain systemChain.go code which is meant to handle this, but it currently is mostly a no-op.",3 -"FAB-1304","12/07/2016 22:10:21","Hook into fabric ledger for rawledger implementation","The orderer currently relies on its own rawledger implementations, a simple RAM-based ledger (which does not persist anything to disk) and a simple file-based ledger which uses a very naive JSON encoding with 1 block per file. Neither of these is likely to scale well for real systems and both have always been intended to be testing tools while the real ledger is developed. A shim needs to be written between the orderer rawledger API and the fabric ledger API to support deploying with the fabric backing ledger for production deployments.",13 -"FAB-1308","12/08/2016 15:10:58","I want to be able to read from CouchDB state database as an 'external' user (not using the peer's admin user)","Currently only two modes are available when using CouchDB - completely open (no user security) and completely locked down (user security enabled). Production environments utilize the locked down configuration, while development environments may decide to skip user security in order to allow direct access. If user security is enabled, only the peer's admin user can read/write to CouchDB. No other 'external' users can access CouchDB - access is locked down and all requests must go through the peer's CouchDB user. If it is desired to have 'external' users be able to read (only) CouchDB directly, two changes would be required: 1) Add the 'external' users to the CouchDB database security object as members. See http://docs.couchdb.org/en/2.2.0/api/database/security.html . Currently the peer writes the database security object with just the peer's admin user upon each startup (allowing for the ability to change the username). Note - it may be possible to use a server admin instead of setting the admins role on each database, and only populate the members role. 2) Any database 'member' would have both read and write access. To lock it down for read access only, a CouchDB 'validation function' (deployed as a design document to CouchDB) would be required; it could enforce that only the peer's admin user could perform writes.",8 -"FAB-1333","12/09/2016 17:44:01","Make orderer logging configurable (in a centralized way)","Today, the orderer initializes loggers in almost every package, and statically sets their logging levels. 
This made it easy to speed development along, but this really needs to be configurable for the whole ordering process in a sane way, ideally lifted from the peer flogging code.",3 -"FAB-1334","12/09/2016 17:51:31","Decide on block hashing specifics","Today, the orderer is using the functions defined in `fabric/protos/common/block.go` for hashing the block header and data. These functions are _definitely_ wrong, and were never intended to be long-term solutions. These functions need to be fixed to use a proper hashing algorithm and marshalling scheme (and possibly a wide Merkle tree for the BlockData). This was discussed somewhat extensively in https://gerrit.hyperledger.org/r/#/c/1361/ but no conclusion was reached.",3 -"FAB-1335","12/09/2016 18:13:55","BDD tests: Kafka orderer should be resilient to faults","  The following are the details for these tests. * +Component+:  orderer * +Description+:  Test scenarios with faults in the Kafka brokers, and the orderer shims. * +Artifact Locations+:  test/feature/orderer.feature * +Network Topology+:  3 orderers, 3 zookeepers, 4 kafka brokers, 4 peers * +Client Driver+: behave This test should show that as the partition leader changes, the network still functions as expected.",3 -"FAB-1349","12/10/2016 20:29:21","Enforce restrictions on acceptable chain IDs","This is motivated by the fact that Kafka imposes restrictions on the allowed topic names: https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/common/Topic.scala#L29 Regardless of that, we should adopt similar restrictions. Ideally, these will be a superset of the Kafka restriction set so that we don't have to add any extra code for the Kafka consenter case.",1 -"FAB-1351","12/11/2016 05:15:15","Update broadcast_config sample client so that it posts appropriate config for Kafka consenter","As things stand right now, the config it posts only works for solo. The client should have an extra flag that allows us to specify the consenter type, and then, by using the provisional bootstrapper, it should create the appropriate config.",1 -"FAB-1352","12/11/2016 08:16:14","Add time-based block cutting to Kafka consenter","In the version that was rebased on top of the common components, this option was kept out in order to minimize the complexity of the changeset. The deliverable of this story is a Kafka consenter that respects the BatchTimeout config option. ",3 -"FAB-1358","12/12/2016 04:55:23","Convert all batchSize refs to the uint32 type","Address the mismatch between the batchSize type as expressed in the localconfig package versus the sharedconfig package and the orderer configuration proto.",1 -"FAB-1359","12/12/2016 04:57:23","Drop custom flag support for Kafka orderer","As we are slowly moving to a setup where the behavior of the orderer is controlled by the config options captured in the genesis block and the orderer YAML file (and their overrides via ENV vars), it's time to drop the flag support that the Kafka orderer provided. * Remove all flags from the Kafka orderer. 
* Add a ""verbose"" option to the YAML file to control logging for the package that we use to interact with the Kafka cluster (sarama).",1 -"FAB-1360","12/12/2016 04:59:15","Introduce ChainPartition construct for Kafka","The ChainPartition construct will be used to identify the Kafka topic/partition that the ordering shims should interact with when dealing with a particular chain.",1 -"FAB-1362","12/12/2016 05:04:02","Add KafkaBrokers to shared config","The list of Kafka brokers used for ordering needs to be shared across the shims (ordering service nodes).",1 -"FAB-1363","12/12/2016 05:07:07","Move ChainID method to ConsenterSupport","What necessitates this move is that the Kafka multichain.Chain object returned/expected by HandleChain(), needs to be able to allocate the resources necessary to keep up with its respective chain when the Start() method is invoked by the multichain manager, as the contract of the Chain interface (defined in chainsupport.go) dictates. Keeping up with the respective chain means that the Kafka multichain.Chain object needs to bring up a producer/consumer that connects to the corresponding chain partition, so it needs to know the chain ID. The only object passed to us during the HandleChain call is an object of type multichain.ConsenterSupport. Therefore, this interface needs to be extended with the ChainID() method. I've noted the need for this during my review of this changeset: https://gerrit.hyperledger.org/r/#/c/2763/ (skip to the comment timestamped 11-27 18:52, and its follow-up comment) If you want to see how this will look in action for the Kafka consenter (coming in a follow-up changeset), please see: https://github.com/kchristidis/fabric/blob/47752ed61fcab1b26207a9e9075c1c793d723912/orderer/kafka/main.go#L128 https://github.com/kchristidis/fabric/blob/47752ed61fcab1b26207a9e9075c1c793d723912/orderer/kafka/main.go#L143",1 -"FAB-1364","12/12/2016 05:09:34","Replace static bootstrapper with provisional one","All consenters read several of their config settings (think sharedconfig) from the genesis block that is generated by a bootstrapper. The only bootstrapper available so far is the static one. However, when testing we need to be able to modify several of these config values on the fly. Therefore the bootstrapper should be able to read a config object (which is itself created by reading the orderer.yaml file and -if set- its associated ENV vars). An example of that would be the KafkaBrokers value. For unit tests the ""right"" value is ""127.0.0.1:9092"", whereas for the current Docker Compose-based BDD tests the right value is ""kafka0:9092"". Since this bootstrapper is no longer static, renaming the package seems appropriate. For production we will need to introduce file-based bootstrapper that reads the genesis block created by the genesis block tool.",2 -"FAB-1365","12/12/2016 05:11:28","Introduce Kafka-specific container message types","The revised Kafka consenter needs two special messages: # A time-to-cut message that is used to mark the end of a block, and # A no-op message that each shim posts when bootstrapped by the multichain manager to prevent the possibility of ""listening in"" (seeking and consuming) on a topic/partition that nobody has posted to yet [1]. 
This is an operation that panics in Kafka: ""[ERROR] Cannot retrieve required offset from Kafka cluster: kafka server: The request attempted to perform an operation on an invalid topic."" These messages are special because they don't carry transactions, and because the Kafka consenter will treat them in a special way: it will ignore every time-to-cut message (for a specific block number) besides the first one, and it will ignore all ""no-op"" messages when processing incoming messages from the chain partition. A preview of these in action can be found here: https://github.com/kchristidis/fabric/blob/47752ed61fcab1b26207a9e9075c1c793d723912/orderer/kafka/main.go#L142 https://github.com/kchristidis/fabric/blob/47752ed61fcab1b26207a9e9075c1c793d723912/orderer/kafka/main.go#L164 https://github.com/kchristidis/fabric/blob/47752ed61fcab1b26207a9e9075c1c793d723912/orderer/kafka/main.go#L204",1 -"FAB-1366","12/12/2016 05:13:31","Update Docker Compose files for Kafka consenter","We should either list only those ENV vars where we need to modify the default values, or list all possible ENV vars to allow for easy editing later on. The Docker Compose files in their present form adopt a middle-ground solution: list only some ENV vars, and have several of them set to the default values.",1 -"FAB-1374","12/12/2016 20:07:32","Remove bd_counter sample client","I wrote this while developing the Kafka orderer for testing. The existing sample clients (broadcast_timestamp and deliver_stdout) provide the same functionality and in a more modular, Unix-like way. Keeping the bd_counter around just increases the maintenance burden.",1 -"FAB-1382","12/13/2016 15:48:40","Remove windowing concept from the Deliver API","Per some performance testing feedback from [~bcbrock], it's much more efficient if we allow HTTP2/gRPC to handle the connection data windowing rather than attempting to implement it ourselves at a higher layer. In order to remove windowing from deliver, there must be some other mechanism supplied to allow a client to retrieve a specific number of blocks. This means that the Deliver API must also be enhanced to allow range specification.",3 -"FAB-1416","12/15/2016 17:37:56","Make policy manager pluggable with different policy providers","Per discussion with [~ellaki], [~adecaro], and [~ales], the MSP managers would like to provide their own policy evaluation options. This would be in addition to the NOutOf policies, but would exist within the current framework. This story is to make the existing policy manager pluggable with different policy providers, so that the MSP manager may provide a policy provider for MSP type policies.",3 -"FAB-1418","12/15/2016 19:53:47","Convert Policy from oneof to enum","The Policy type for configuration is currently oneof{ SignaturePolicyEnvelope }. This is somewhat convenient but is not extensible without modification of the common protos. Instead, this should be switched to the standard enumerated type plus bytes model.",1 -"FAB-1422","12/15/2016 20:12:14","Ledger setInfo(): As a chaincode author, I want to pass information to VSCC for transaction validation context","The proposal is to add a setInfo() on the simulator, for the chaincode to pass some transaction context that is to be used later for validation in VSCC, without anything getting written to permanent state (therefore we don't want to use the WriteSet). This information would get included in the simulation results in the transaction, and VSCC would be able to reference it when performing custom validation logic. 
(Murali, please fill in your immediate use case for it).",2 -"FAB-1430","12/16/2016 15:06:20","Implement epoch and timestamp message replay protection","The envelope header contains a timestamp and epoch, both of which are ignored by the ordering service today. But these should be leveraged for message replay prevention. This will require synchronizing with the SDK/peer team to make sure these parameters are correctly set.",5 -"FAB-1441","12/16/2016 17:07:53","Behave backend utilities","A test writer should be able to write feature tests using backend utilities to drive the tests such that any new test scenario can utilize the APIs while keeping modification of the backend to a minimum. *Output:* back-end utility functionality that drives the feature files. These utilities will include (but are not limited to): * docker_util * remote_util * grpc_util * shell_util ",5 -"FAB-1446","12/18/2016 14:35:48","Add safesql to CI for go-based components","Add https://github.com/stripe/safesql to our CI pipelines for all golang components that utilize a database. This will enhance our security by finding any potential SQL injection vulnerabilities before they are merged.",2 -"FAB-1503","01/03/2017 14:11:34","Ledger history: When looking at the ledger history for a key, I want to see history with full transactional context ","In addition to seeing the history of key values, we should be able to see the full transaction details for every update to the key. This will require defining a new transaction structure to be returned.",8 -"FAB-1521","01/05/2017 02:39:02","Fix orderer rawledger interface to support restart","The rawledger interface was originally implemented as a single chain initialized with a genesis block. This causes problems on restart because a newer genesis block may already exist and passing the genesis block becomes nonsensical.",2 -"FAB-1523","01/05/2017 05:47:12","Populate metadata last configuration field","Each block is supposed to contain a reference to the last configuration block, i.e., the configuration under which that block was generated. Although this is a separate requirement needed, for instance, for gossip, this is also useful for supporting restart of the ordering service, so it is being classified as a sub-task of FAB-1299.",1 -"FAB-1524","01/05/2017 07:03:58","Appropriately get configuration from block metadata on orderer restart","The orderer needs to initialize the configuration for every chain, identify the ordering system chain specially, and initialize the other chains.",1 -"FAB-1563","01/09/2017 18:22:04","Orderers need to filter incoming Broadcast messages by signature","The framework to filter incoming transactions exists, as does the framework to manage policies, but currently the filtering framework does no policy enforcement on incoming broadcast messages; this needs to be fixed.",2 -"FAB-1573","01/09/2017 22:05:55","Deliver messages need to be signable","The deliver message currently has no authentication associated with it. 
Rather than invent yet another signing scheme, we should adapt the Envelope message to be used to send SeekInfo requests.",2 -"FAB-1574","01/09/2017 23:05:59","Deliver API needs to check signatures against egress policy","The deliver API will support signatures after FAB-1573, so it then needs to actually respect these signatures and only allow readers that are authorized by policy for a given chain.",1 -"FAB-1603","01/11/2017 18:52:48","Incorrect marshalling of transaction into block during block event generation","The block delivered by events is not in the correct format.",1 -"FAB-1611","01/11/2017 22:31:39","Create configuration template for orderer user to consume","The orderer is currently hackily copying code from the orderer bootstrapper to generate the new chain request. This is not a real path, as this needs to be supplied by the orderer; this story should address that.",3 -"FAB-1678","01/16/2017 20:45:15","As an admin and developer, I need a way to inspect and create configuration transactions","This story is to start working on the tools necessary for admins to work with configuration transactions, not only creating them, but also inspecting and signing them. There are four principal components to this issue. 1. Bidirectional proto <=> deep JSON translation: This is at the heart of the issue, and provides both a human-readable and human-editable version of the configuration. Because the configuration embeds other message types, like {{Envelope}}, it is most natural to make this a generic proto <=> deeply unmarshaled JSON. This means that the output should, to the extent possible, contain no binary marshaled fields. For instance, a {{Block}} should show nested {{Envelope}} messages in its data, and these messages should show {{Payload}} messages, which should in turn show the unmarshaled version of the {{Data}} field, etc. Correspondingly, it is important that this mapping be able to be returned into the native proto form. Note that the bidirectional marshaling can preserve meaning only, not literal bytes, as proto marshaling is non-deterministic. 2. A Config + Config -> ConfigUpdate utility: This will allow the user to submit an original config and a modified config to produce a config update. This will combine naturally with (1), so that the user may view the configuration in a human-readable way, edit it, compute the update, and then see the update in a human-readable form. 3. A REST API which exposes (1) and (2). 4. A CLI which exposes (1) and (2). Finally, we may wish to add one additional API to (3)/(4), namely the ability to submit a Config, a Config Update, and have the server do a simulation of the result (including any errors produced because of insufficient signatures, or bad form). 
For those interested in getting a jump on interacting with the deep JSON representation, please see an (unusually verbose) config update message attached as output_pretty.txt.",8 -"FAB-1710","01/17/2017 21:23:23","Add orderer addresses configuration item","Add a configuration item which will let the peers know in `JoinChannel` which orderer addresses may be connected to.",1 -"FAB-1748","01/19/2017 03:38:12","Refactor provisional bootstrap generator","The old provisional bootstrapper was becoming increasingly convoluted for doing very little; this changeset deletes much/most of that code in favor of a much simpler approach.",1 -"FAB-1760","01/19/2017 17:01:14","Integrate configtx.Manager into peer","The peer currently uses the dangerous and deprecated method of manually inspecting configuration blocks for configuration. This has a multitude of security and correctness problems, but also produces a lot of redundant and unnecessary code. The peer should be converted to utilize the existing configtx.Manager code which does proper configuration transaction validation and provides a simple interface for users of that configuration.",3 -"FAB-1776","01/20/2017 04:35:20","Move policy manager creation to common components","The policy manager is currently only initialized within the orderer, but this is a common function and needs to be moved into the common components.",1 -"FAB-1777","01/20/2017 04:55:04","Refactor orderer multichain package to prep for chainconfig","The orderer multichain package has become a little unwieldy as more and more configuration-based handlers have been added to it. This changeset consolidates these many parameters into embedded structures to alleviate this problem in preparation for adding the chain config handler.",1 -"FAB-1856","01/25/2017 19:04:59","Add callback feature on configtx update","Per the gossip team, they need to be notified when the channel configuration changes. This is to implement a way to push notification events to them.",1 -"FAB-1875","01/26/2017 21:37:08","Introduce identity channel to orderer","There is an existing problem with syncing identity across channels. # Application orgs cannot read the ordering system channel, so they cannot reasonably update their identity there. # Application orgs cannot determine what channels they are a member of, so cannot easily script updates to their identity. Further, because the identity may be at different levels and will definitely have different headers, one signature must be generated per channel. # The orderer has no way to ensure that the MSP used in a channel creation request is up to date (except relative to the ordering system channel, which, as already pointed out, is problematic to update). # The orderer has no centralized place to get TLS certs from. # The peer has no authoritative source for TLS certs or for local MSP data. # If an application wishes to create a channel with an org they have no other channels with, it's not obvious how to retrieve their current MSP. There are probably other benefits as well.",8 -"FAB-1876","01/26/2017 21:39:05","Update anchor peers to be multiple configuration items","Because write access to config is scoped by configuration item, it does not make sense for all anchor peers to be writable by any organization. 
Instead, we should have one configuration item per anchor peer, with a modification policy corresponding to that org.",3 -"FAB-1880","01/27/2017 03:08:07","Enable MVCC+Postimage for configtx","The current configuration tx scheme requires that a global sequence number be specified. This is problematic as a DoS attack vector if single signers are allowed for modification of items (like anchor peers, or MSP definitions). It should be a relatively straightforward change to modify this to be MVCC+postimage to eliminate the DoS sequence number contention problem.",8 -"FAB-1946","01/30/2017 20:16:22","Remove ChainHeader from ConfigurationItem","In preparation for having one header across an entire update, this needs to be removed from the individual items.",1 -"FAB-1955","01/31/2017 16:01:23","Generate test genesis block in tests.","The unit tests require genesis materials for their chain, but this is currently a manual and hacky process. This story is to turn this into a more automated flow.",3 -"FAB-1960","01/31/2017 16:50:44","Create peer test template","Just like for the orderer and the MSPs, a peer test template needs to be created to facilitate tests which require a valid genesis block for a chain.",1 -"FAB-1962","01/31/2017 18:58:17","Switch peer tests to utilize test templates","The peer already depends on the orderer template, but is not currently using the peer or MSP templates. Where appropriate, these templates, and especially the MSP template, should be utilized.",1 -"FAB-1983","02/01/2017 15:02:32","Allow other chain hashing parameters","Per FAB-1700 and FAB-1699, the chain hashing parameters are now included in the chain configuration. However, for the v1 release it was not practical to actually utilize these values, so instead they are required at a fixed size until they can be implemented. The chainconfig.Descriptor carries these values, but no one is currently consuming them. This additionally includes implementing a Merkle tree for the block data structure.",8 -"FAB-1992","02/01/2017 19:37:50","Move configtx signatures to be across whole config, not just individual items","The current configtx scheme is a collection of signed configuration items. This has the benefit of being very flexible, allowing a submitter to collect signatures from one set of parties for one set of items, a second set of signatures from a second set of parties for a second set of items, and then glue them together, and submit a valid reconfiguration. However, this flexibility comes at the expense of requiring more signatures, and the added complexity does not seem worth the added flexibility. This CR moves the signatures to the envelope level, but maintains the per-item policy evaluation.",3 -"FAB-2073","02/06/2017 18:46:08","Implement hierarchical configtx structure","The current configuration structure is a flat list, which causes some problems, especially when attempting to differentiate between the roles of ordering organizations and peer organizations. A hierarchical model is slightly more complex from a data structure perspective, but is more natural and expressive. 
This work should be careful to ensure that the older model can be supported in the degenerate case.",13 -"FAB-2096","02/07/2017 18:23:09","Move xxxCryptoHelper to mocks","The real crypto helper was added, but the mock one was left in the real binary; this moves it to mocks.",1 -"FAB-2097","02/07/2017 18:29:03","Add ConfigNext proto","Rather than modify the config protos and the config producers, parsing, and consumers all at once, this CR adds only the new protos, without removing the old, and makes the minimal changes required only to use the new proto format while on the wire.",2 -"FAB-2102","02/07/2017 20:34:50","Move application shared config from peer to common","Just as with the orderer configuration, having the peer configuration in a separate package is problematic, so this CR moves it to common.",1 -"FAB-2104","02/07/2017 21:04:20","Move channel shared config to common","The orderer and application config were both recently moved to common. For the sake of consistency, the channel config should be moved from its current location in common under configtx like orderer and application.",1 -"FAB-2105","02/07/2017 21:10:27","Add simple configuration schema protos","Because the new configtx format is so much more flexible than the original, we need to make sure that it is locked down to keep people from going crazy with it. The restrictions can be relaxed as use cases arise, but initially the goal is to restrict the configuration to a minimal set and loosen from there. To do this, protos for a simple schema should be defined so that the configtx manager can enforce the restrictive configuration scheme.",1 -"FAB-2120","02/08/2017 04:54:03","Move configtx filter to orderer","When configtx was moved from orderer to common it brought along the configtx.Filter, which is inherently an orderer concept and violates the rule of not importing orderer concepts into common.",1 -"FAB-2144","02/09/2017 15:06:18","Migrate configtx.Manager to parse ConfigNext","While integrating the ConfigNext proto, adapter code was written to convert it back to the original Config type. This conversion code needs to go away, and as a first step, the configtx.Manager needs to begin using the updated format.",3 -"FAB-2155","02/09/2017 19:37:35","Split orderer config into local and genesis components","This split is necessary to prepare to migrate the orderer genesis creation into the common package to use as a starting point for generating genesis material.",1 -"FAB-2156","02/09/2017 20:09:47","Move orderer viper enhancements to common","Before moving the orderer genesis generation to common, it needs the enhanced viper support added by the orderer.",1 -"FAB-2176","02/10/2017 16:43:01","Add ConfigUpdate proto","In order to stabilize the protos, the ConfigUpdate protos need to go in first. The existing code can be adapted simply to depend on a ConfigUpdate write set which includes the entire config. Then, the actual partial config updates can be built on top of it.",1 -"FAB-2201","02/12/2017 16:54:30","Generate new config from updated config map","The configtx code currently shortcuts by assuming that the writeset contains exactly the entire config. In order to support partial updates, this writeset should be overlaid on top of the existing config, then transformed back into a new config.",1 -"FAB-2202","02/12/2017 18:34:20","Initialize configtx manager from Config, not Writeset","As a transitional mechanism, the config currently depends on the WriteSet containing the entire configuration. 
Instead, the config as written in the config envelope needs to be utilized.",1 -"FAB-2210","02/13/2017 14:40:00","Rename CONFIGURATION_ to CONFIG_","This brings the enums into line, using the word config everywhere instead of a mix of config and configuration.",1 -"FAB-2339","02/17/2017 16:57:19","Add simple tool to write out a genesis block","Please refer to the file {{configtxgen.md}} in the fabric/docs directory for usage of this tool.",1 -"FAB-2468","02/24/2017 17:02:47","Remove unnecessary header messages from configtx messages","After an audit with Elli, it's been concluded that the ChannelHeader embedded in the configtx.proto Config and ConfigUpdate messages is unnecessary, and that the only piece of information needed is the ChannelId. By removing these, the messages will be simpler to construct and understand.",1 -"FAB-2470","02/24/2017 18:09:25","Fix gossip proto style","The gossip protos were moved under fabric/protos after the fabric/protos were restyled to conform to proto style guidelines. The gossip protos need to be brought in line with this.",1 -"FAB-3125","04/12/2017 19:14:18","Remove sfhackfest from examples","The sfhackfest example is no longer relevant to the master branch / v1.0.0.",1 -"FAB-3136","04/13/2017 11:02:33","Include install script with each release package","As we currently distribute the various runtime components as Docker images, it is convenient to include a script which will download the correct version of the images for a given release and platform. With this task, we'll include a script which will pull all of the images for a given release.",1 -"FAB-3139","04/13/2017 12:54:33","Improve test coverage for github.com/hyperledger/fabric/core/comm package","The current coverage for the package is 62.8%, but the major gap is in tests for functions found in connection.go.",1 -"FAB-3264","04/19/2017 21:56:59","Add Config utility for Behave functional tests","This config utility will contain functions that will be used when configuring a fabric network for use in the behave functional test runs. ",3 -"FAB-3281","04/20/2017 20:31:49","Import protobuf files from bddtests directory","Instead of regenerating the python files based on the protobuf files, import the generated files for use in the utilities for these tests.",3 -"FAB-3283","04/20/2017 23:26:41","Add orderer test scaffolding","This scaffolding will be the basis for implementing the orderer feature files.",3 -"FAB-3338","04/22/2017 11:52:32","Provide a sample deployment and topology for Docker Swarm Mode","Docker 1.13+ has a built-in multi-host orchestration engine called ""Swarm Mode"". This task is to provide the relevant artifacts and documentation to help create a sandbox topology using Docker Swarm.",2 -"FAB-3506","04/28/2017 20:18:05","Behave peer scaffolding","This scaffolding will be the basis for implementing the peer feature files. ",3 -"FAB-3523","04/30/2017 12:27:25","Modify comm implementation to use callback function to get secure dial options","The current gossip implementation is initialized with a set of dial options which don't change during the lifecycle of the peer. 
In order to always get the most up to date secure dial options (which basically means dial options with the latest set of trusted roots for remote peers / organizations), this task will add a callback function to the comm impl.",1 -"FAB-3675","05/05/2017 13:21:05","add license check script to fabric repo","see FAB-3674 for description",1 -"FAB-3737","05/08/2017 23:21:14","Generate changelog for v1.0.0-alpha2","generate changelog for v1.0.0-alpha2",1 -"FAB-3738","05/09/2017 01:29:22","obtain CII badge","obtain a CII Badge from LF",1 -"FAB-3753","05/09/2017 17:11:06","Add a Readme for the system feature tests","A readme that contains at least the following should be added to the test/feature directory: * Getting Started * Prereqs * How to contribute  https://gerrit.hyperledger.org/r/#/c/9131/",2 -"FAB-3754","05/09/2017 17:38:00","Add endorser_util","This should contain the code needed to perform the key endorser functionality such as * install * instantiate * create channel * join channel * invoke chaincode * query chaincode",3 -"FAB-3758","05/09/2017 19:35:56","Add container start/stop functionality","The tests should have the capability to start and stop containers. [https://gerrit.hyperledger.org/r/#/c/9667]  ",2 -"FAB-3774","05/10/2017 14:58:42","A document that will outline considerations when starting a network","This is a document that will have a lot of crossover with best practices. It covers considerations and thoughts that need to be weighed when creating a new network. Starter list of items: -  Who will define membership in the network, and how? -  Where will my network be hosted? -  What are the features I'll need? -  What are the technical specifications I’ll need to make it work? -  What kinds of special application layers will I need? regarding -  Governance - Fab-2308 -  :ref:`Membership-Services` -  :ref:`Ordering-Service` -  :ref:`Ledger` for database configuration - Fab-2308 -  Security Options -  :ref:`SDK` s/APIs -  :ref:`Assets` -  :ref:`Channel` benefits",8 -"FAB-3779","05/10/2017 16:47:06","need security vulnerability reporting process implemented and documented","We need a process for reporting security issues relating to Fabric to be implemented and documented to satisfy the CII Badge requirements.",1 -"FAB-3830","05/10/2017 21:58:51","Create a description of why blockchain","Would like to build the story from the very high level down to the details of Hyperledger Fabric. This is the first phase, a description of the very high level components of blockchain. Ultimately would like to move to the overarching Hyperledger project umbrella, then each project can build off of it. But wanted to start the story here, so for now will host within fabric doc.",1 -"FAB-3846","05/11/2017 15:38:24","Placeholder for behave tests ","We have to create templates of the tests that will be executed using the behave framework. The Scenario name should look as follows:   {code:java} Scenario: FAB-XXXX -     Given some setup When some state changes Then some action is expected {code} Each scenario should be tagged with the @skip tag. This tag will be removed once the test is written and ready for use.  ",3 -"FAB-3891","05/12/2017 12:26:55","cut v1.0.0-alpha2 release ","prepare for the alpha2 release process. 
-This will involve creating the ""release"" branch as per the process proposed by Dave at the Hackfest.-  -Suggest that we leave master as master and actually create a ""develop"" branch after we tag the v1.0.0-alpha2 release on the master branch, and that we then shift the target of all CRs to the ""develop"" branch from that point forward. Master will only merge FF merges from a temporary release branch created from ""develop"" which will run through the gauntlet of tests before being merged to ""master"" and cutting a release.- -Note that this will also require that all in-flight CRs be resubmitted against the ""develop"" branch.- just tag and publish release without changing branch structure for now.",1 -"FAB-3981","05/17/2017 16:11:15","Provide various governance options for scenarios.","Provide governance options of Dictatorial/Democratic/Unanimous as scenario options for controlling the default policy settings.",1 -"FAB-4100","05/22/2017 22:41:15","Create proto translator component framework","Doing bidirectional proto translation to/from deep JSON requires a significant amount of dynamic programming. Having each message try to handle this work independently is a recipe for disaster, so the bulk of the reflection work needs to be centralized so that protos may be simply extended with methods which drive the proto translation framework. This task is to cover designing and implementing this pluggable core reflection framework.",3 -"FAB-4101","05/22/2017 22:42:24","Create proto translator nested field component","Proto messages embedded inside of other proto messages must be passed back through the common translator component. This component should act as a catch-all for otherwise unhandled proto messages.",1 -"FAB-4102","05/22/2017 22:43:10","Create proto translator for statically opaque field component","Some proto fields are simply statically marshaled fields, namely their type is known at compile time. This task is to create the component which handles these.",1 -"FAB-4103","05/22/2017 22:44:10","Create proto translator for variably opaque field component","Some proto fields are marshaled proto messages whose type varies based on the other contents of the message (possibly including other statically marshaled fields). This task is for creating the variably opaque field component.",1 -"FAB-4104","05/22/2017 22:46:11","Create proto translator for dynamic field components","Some proto fields have component messages whose behavior is determined by some other context, such as where they appear relative to other messages. These messages cannot be simply annotated like the other statically or variably opaque messages, but must instead be decorated at runtime with their descriptive attributes. 
This task is for creating the component which handles these dynamic fields.",1 -"FAB-4105","05/22/2017 22:47:20","Add proto translator method extensions to fabric proto messages","With the proto translator and component pieces in place, the actual fabric messages which it is to translate must be annotated with the necessary methods to be translated.",1 -"FAB-4106","05/22/2017 22:48:47","Create config update computation component","This task is for creating the code necessary to take in two config messages, and compute the corresponding config update message.",3 -"FAB-4107","05/22/2017 22:50:07","Add REST server component to expose proto translator","This task is for introducing the server which will listen for api requests as discussed in the parent issue.",3 -"FAB-4212","05/30/2017 14:47:58","Cut v1.0.0-alpha3 release","Prepare and publish v1.0.0-alpha3 release of Hyperledger Fabric    Some related discussions also took place [here|https://docs.google.com/document/d/1mPTMjXG_b-mgZd2_EUN9W-82ez0PrWoxIiPgcAAABwE/edit#heading=h.7ylvqvqpim4]. ",2 -"FAB-4215","05/30/2017 15:14:01","prepare Makefile for v1.0.0-alpha3 release","BASE_VERSION = 1.0.0-alpha3 PREV_VERSION = 1.0.0-alpha2 IS_RELEASE = true docs/source/releases.rst should be updated with prose and link to release page change log should be generated and linked from docs/source/releases.rst",1 -"FAB-4216","05/30/2017 15:15:33","prepare Makefile for v1.0.0-alpha3 release","BASE_VERSION = 1.0.0-alpha3 IS_RELEASE = true  ",1 -"FAB-4217","05/30/2017 15:17:48","prepare Makefile for v1.0.0-beta","PROJECT_NAME = fabric-ca BASE_VERSION = 1.0.0-beta IS_RELEASE = false",1 -"FAB-4219","05/30/2017 15:25:36","prepare Makefile for v1.0.0-beta","BASE_VERSION = 1.0.0-beta PREV_VERSION = 1.0.0-alpha3 IS_RELEASE = false",1 -"FAB-4221","05/30/2017 15:40:48","update Getting Started link to resolve to v1.0.0-alpha3 bootstrap.sh","create shortened URL that resolves to v1.0.0-beta tagged version of bootstrap.sh",1 -"FAB-4222","05/30/2017 15:42:09","tag v1.0.0-alpha3 release of fabric","tag release with a commit message that includes: release notes known vulnerabilities other known issues changelog link",1 -"FAB-4223","05/30/2017 15:44:12","tag v1.0.0-alpha3 release of fabric-ca","tag release with a commit message that includes: release notes known vulnerabilities other known issues changelog link create change log",1 -"FAB-4227","05/30/2017 16:14:42","tag v1.0.0-alpha3 for fabric-sdk-node","tag release with a commit message that includes: release notes known vulnerabilities other known issues changelog link create change log   ",1 -"FAB-4228","05/30/2017 16:15:39","tag v1.0.0-alpha3 release of fabric-sdk-java","tag release with a commit message that includes: release notes known vulnerabilities other known issues changelog link",1 -"FAB-4230","05/30/2017 16:30:02","create release notes text for fabric","create release notes for fabric text that generalize the nature of changes since v1.0.0-alpha2",1 -"FAB-4231","05/30/2017 16:32:14","create release notes for fabric-ca","create release notes that generalize the changes since alpha2 to be used in release tag",1 -"FAB-4232","05/30/2017 16:34:28","create v1.0.0-alpha3 release notes for fabric-sdk-node","create release notes for v1.0.0-beta that generalize changes since v1.0.0-alpha2",1 -"FAB-4233","05/30/2017 16:36:09","create v1.0.0-alpha3 release notes for fabric-sdk-java","create release notes that generalize the changes since v1.0.0-alpha2",1 -"FAB-4325","06/02/2017 14:07:47","create scheme to handle 
version-specific bootstrap.sh","we need an ability to have version-specific bootstrap.sh",1 -"FAB-4379","06/05/2017 20:14:08","Cut v1.0.0-beta release","Prepare and publish v1.0.0-beta release of Hyperledger Fabric    Some related discussions also took place [here|https://docs.google.com/document/d/1mPTMjXG_b-mgZd2_EUN9W-82ez0PrWoxIiPgcAAABwE/edit#heading=h.7ylvqvqpim4]. ",2 -"FAB-4384","06/05/2017 20:14:11","update Getting Started link to resolve to v1.0.0-beta bootstrap.sh","create shortened URL that resolves to v1.0.0-rc1 tagged version of bootstrap script scripts/bootstrap-v1.0.0-rc1.sh",1 -"FAB-4383","06/05/2017 20:14:11","prepare Makefile for v1.0.0-rc1","BASE_VERSION = 1.0.0-rc1 PREV_VERSION = 1.0.0-beta IS_RELEASE = false",1 -"FAB-4382","06/05/2017 20:14:11","prepare Makefile for v1.0.0-rc1","PROJECT_NAME = fabric-ca BASE_VERSION = 1.0.0-rc1 IS_RELEASE = false",1 -"FAB-4381","06/05/2017 20:14:11","prepare Makefile for v1.0.0-beta release","BASE_VERSION = 1.0.0-beta IS_RELEASE = true  generate changelog",1 -"FAB-4380","06/05/2017 20:14:11","prepare Makefile for v1.0.0-beta release","BASE_VERSION = 1.0.0-beta PREV_VERSION = 1.0.0-alpha2 IS_RELEASE = true docs/source/releases.rst should be updated with prose and link to release page change log should be generated and linked from docs/source/releases.rst",1 -"FAB-4387","06/05/2017 20:14:12","publish v1.0.0-beta release node sdk","See instructions in FAB-2802. these will trigger CI to publish to npm: fabric-client/package.json: version=1.0.0-beta fabric-ca-client/package.json: version=1.0.0-beta examples/balance-transfer update to target 1.0.0-beta artifacts for docker images, and fabric-client and fabric-ca-client npm",1 -"FAB-4386","06/05/2017 20:14:12","tag v1.0.0-beta release of fabric-ca","  NOTE: Tag release with the release_notes/v1.0.0-beta.md as the tag comment   % git tag \-a v1.0.0-beta \-F release_notes/v1.0.0-beta.md   tag release with a commit message that includes: release notes known vulnerabilities other known issues changelog link create change log",1 -"FAB-4385","06/05/2017 20:14:12","tag v1.0.0-beta release of fabric","NOTE: Tag release with the release_notes/v1.0.0-beta.md as the tag comment   % git tag \-a v1.0.0-beta \-F release_notes/v1.0.0-beta.md   tag release with a commit message that includes: release notes known vulnerabilities other known issues changelog link",1 -"FAB-4392","06/05/2017 20:14:13","create release notes for fabric-ca","create release notes that generalize the changes since alpha2 to be used in release tag",1 -"FAB-4391","06/05/2017 20:14:13","create release notes text for fabric","create release notes for fabric text that generalize the nature of changes since v1.0.0-alpha2",1 -"FAB-4390","06/05/2017 20:14:13","tag v1.0.0-beta release of fabric-sdk-java","NOTE: Tag release with the release_notes/v1.0.0-beta.md as the tag comment   % git tag \-a v1.0.0-beta \-F release_notes/v1.0.0-beta.md   tag release with a commit message that includes: release notes known vulnerabilities other known issues changelog link",1 -"FAB-4389","06/05/2017 20:14:13","tag v1.0.0-beta for fabric-sdk-node","NOTE: Tag release with the release_notes/v1.0.0-beta.md as the tag comment   % git tag \-a v1.0.0-beta \-F release_notes/v1.0.0-beta.md   tag release with a commit message that includes: release notes known vulnerabilities other known issues changelog link create change log   ",1 -"FAB-4394","06/05/2017 20:14:14","create v1.0.0-beta release notes for fabric-sdk-java","create release notes that generalize the 
changes since v1.0.0-alpha2",1 -"FAB-4393","06/05/2017 20:14:14","create v1.0.0-beta release notes for fabric-sdk-node","create release notes for v1.0.0-beta that generalize changes since v1.0.0-alpha2 generate changelog",1 -"FAB-4572","06/12/2017 18:31:16","address missing license headers fabric-ca","address missing license headers fabric-ca",1 -"FAB-4573","06/12/2017 18:31:38","address missing license headers fabric-sdk-node","address missing license headers fabric-sdk-node",1 -"FAB-4574","06/12/2017 18:31:55","address missing license headers fabric-sdk-java","address missing license headers fabric-sdk-java",1 -"FAB-4576","06/12/2017 18:32:26","address missing license headers fabric","address missing license headers fabric",1 -"FAB-4577","06/12/2017 19:38:05","add missing license headers to fabric-test-resources","add missing license headers to fabric-test-resources",1 -"FAB-4608","06/13/2017 16:52:35","Refactor the docker composition files for the test network topologies","Currently, each network topology is defined in its own single file. It would prove useful for maintenance and readability if there were a single base component file that each network setup can use when setting up Fabric networks for system tests.",2 -"FAB-4971","06/23/2017 20:34:31","behave test fmwk: could we check for existence of sub_elements in return data set","Like being able to verify an element from the returned JSON structure. For instance, a marble has a dataset like: \{""name"", ""color"", ""size"", ""owner""}:\{""marble1"",""red"",""35"",""tom""} when using marbles02 chaincode. If I would like to verify, say, the owner of marble1, then maybe I would have the following in my feature file {code:java} When a user invokes on the chaincode named ""mycc"" with args [""initMarble"",""marble1"",""red"",""35"",""tom""] And I wait ""10"" seconds When a user queries on the chaincode named ""mycc"" with args [""readMarble"",""marble1"", ""owner""] Then a user receives expected response of {""docType"":""marble"",""name"":""marble1"",""owner"":""tom""}{code}  ",2 -"FAB-5177","07/05/2017 03:44:41","peer/ccenv should not include the shim","In the past, we made the decision to include the shim both as a convenience to our users (to avoid explicit vendoring) and as a space optimization (why have every chaincode include the same shim lib over and over?). Fast forward to now: go dependency management leaves much to be desired.  Having the ccenv include the shim can be problematic w.r.t. fragility between the included shim and the chaincode application.  For example, try to write an application which uses the timestamp features of the shim and you will receive errors due to conflicts between the fabric vendored version of the protobuf library and the chaincode application. Now that we have auto-vendoring, the convenience factor for including the shim is gone.  It only leaves the space optimization consideration.  Considering how fragile go dependency management is, we may want to go the conservative route and simply remove all dependencies from the ccenv and make ""peer package"" / ""chaintool package"" provide them via the auto-vendor feature.   
If we are going to do this, we should do it before v1.0 is cut so that chaincode apps for 1.0 are properly formed in a non-breaking way going forward.",3 -"FAB-5236","07/10/2017 18:02:54","Add orderer performance tests","Today, all of our performance testing for the orderer is being done by spinning up an orderer process, bound to a port, then having external clients direct traffic to the ordering service. This has been fine to get an overall impression of performance with the ordering code, but it's very time-consuming, and it is difficult to gauge the effect of performance improvements and to catch performance regressions. Completion of this story should require the following: # Refactor {{main.go}} so that the server can be started without binding to a real address, and instead supply a mock structure to register the gRPC services with. # Create mock {{Broadcast}} clients which will submit properly signed messages via the {{Broadcast}} API. The number of clients should be arbitrary, and the message sizes should also be configurable, as should the desired channel. # Create mock {{Deliver}} clients which operate similarly to the {{Broadcast}} ones. # Create performance tests that leverage 1-3 to cover a variety of scenarios. Some which immediately come to mind: ** 1 Broadcast and 1 Deliver, each going as fast as possible on a single channel ** many Broadcast and 1 Deliver on a single channel ** many Broadcast and many Deliver on a single channel ** 1 Broadcast and 1 Deliver for each of many channels ** many Broadcast and many Deliver for each of many channels ** (a)-(e) each with small, medium, and large messages",8 -"FAB-5258","07/11/2017 18:49:21","Optimize orderer message processing flow to remove redundant checks","The current orderer works in a two-pass message filtering architecture. The initial pass is done when the client invokes {{Broadcast}} and verifies that the message might be valid once ordered, then sends it to the consenter for ordering. After ordering, the consenter re-runs the verification on the messages as a second pass. This second pass ensures that the message is still valid, even after the other in-flight messages have committed. The only cause of a message validating at {{Broadcast}} but not validating after ordering is if the channel configuration changes. For instance, the max message size might be reduced, a certificate might be revoked, etc. The problem with this architecture is that it implies that in the Kafka case, all messages are processed {{n+1}} times, where {{n}} is the number of OSNs. In the Solo case, this implies messages are processed {{2}} times (as {{n}} is fixed to {{1}}). However, in the green path, it should only be necessary to process the message {{1}} time. The message verification step is the primary CPU bottleneck (as it involves hashing and checking signatures) for the orderer, so improving from {{n+1}} to {{1}} has the potential to double or more the performance of the orderer. To achieve this change, the orderer common components must be noticeably restructured. The {{Broadcast}} path needs to include the config sequence at which the message was validated when pushing to consensus. Then, the consenter needs to track changes in the config sequence number, and trigger the second pass revalidation only when needed. The message validation is currently buried within filters and filters within blockcutter, so this will need to be factored out into its own message processing package. 
This is also a good opportunity to fix some of the directory structure which has accumulated considerable cruft during the v1 development. Will add architectural diagrams.",13 -"FAB-5362","07/18/2017 16:02:03","Behave test framework does not handle uppercase and mixchars in chaincode names ","When behave tests pass uppercase or mixed-character chaincode names during deployment, the tests fail. {code:java} When a user deploys chaincode at path ""github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02"" with [""init"",""a"",""1000"",""b"",""2000""] with name ""MYCC""{code} {code:java} When a user deploys chaincode at path ""github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02"" with [""init"",""a"",""1000"",""b"",""2000""] with name ""MYcc_Test""{code} commit level: Behave Fmwk: *d95525a64e7903565128d4c25ca522365ef6a926* Fabric *b5c74cb8838d07f2101652a1c0acbb3b033a7660*",3 -"FAB-5545","07/31/2017 15:24:29","Set up Makefile to allow fabric to be a git submodule","We would like to include the fabric repository as a git submodule in our repository as a dependency. Ideally, we would be able to build the necessary images and executables that are needed from this submodule. Currently, the Makefile makes some assumptions that the directory structure will be ""hyperledger/fabric/..."". The Makefile should be more flexible to allow for a different structure.",1 -"FAB-5645","08/07/2017 17:57:52","Split configtx processing and channelconfig dependency.","The current configtx code was written under the assumption that it was managing a channel configuration. Although there is some abstraction via the use of interfaces to make unit testing easier, it's not currently possible to use the configtx processing for something like the RSCC support. This story is to remove the channel config specifics from the configtx package, and to factor the common components out of the channel config package into a common config package.",8 -"FAB-5721","08/11/2017 01:25:18","As a fabric developer I want the SCCs to use the new ACLProvider to enforce access control","The new ACLProvider offers a centralised way to manage access control to resources on channels. The following SCCs will be updated to use it: QSCC, CSCC, LSCC, Endorsement (Proposal and CC2CC).",2 -"FAB-5732","08/11/2017 13:40:55","Clarify policy evaluation errors","The logs for policy evaluation can be quite unintuitive to the uninitiated. This CR is to evaluate this logging and attempt to improve it to be less opaque.",2 -"FAB-5864","08/21/2017 16:57:56","Reconfigure membership in examples.","How to conduct channel membership changes is a frequently asked question. The tools are all there, and there is some fragmented documentation around this, but the best way to document is through example. It should be relatively straightforward to enhance one of the examples to define three application organizations instead of two, then create a channel with two members, and add the third. This is an exercise that I've gone through by hand and would be happy to walk the implementer through.",3 -"FAB-5966","08/30/2017 05:49:19","Balance Transfer sample application using Fabric Java SDK","A *balance transfer* spring boot application using +fabric SDK java+ which will demonstrate all the basic _functionalities_ of *hyperledger 1.0*, such as enrolling and registering a user, creating a channel, installing chaincode, instantiating chaincode, invoking chaincode and querying chaincode. 
I have created this sample application for reference so that people who want to develop a hyperledger fabric solution using Java SDK can use the sample to accelerate their application development.",5 -"FAB-6091","09/10/2017 11:41:13","Build with Go 1.9 by default","The final step in moving to Go 1.9 is to actually build with Go 1.9. In order to do this, we just need to move to a baseimage which uses Go 1.9.",1 -"FAB-6635","10/16/2017 19:35:54","Add ability to execute on command line","The ability to execute openssl commands and use the output in the test will assist in the encryption testing.",2 -"FAB-10219","05/18/2018 21:11:34","Usability: idemixgen user versus admin difference","It's not clear how you can tell that you are using an admin setup or a user setup after using the tool. The output all goes to a directory ""user"". How do I know that the admin flag worked as expected?",1 -"FAB-10719","06/19/2018 21:09:18","As a chaincode developer, I need a new message type to serialize chaincode install packages.","The new lifecycle work will need a way to store chaincode to the filesystem.  Today, the peer stores chaincode either in a CDSPackage or SignedCDSPackage, both of which are inappropriate for the new lifecycle. These existing package formats include references to legacy datastructures that do not make sense in the new lifecycle world, such as instantiation policy, and conflate things which identify the chaincode (ie, its name, version, and hash) with properties of the chaincode itself (such as its type, and code). This should be a new protobuf message named {{ChaincodeInstallPackage}}, which includes: # {{Type}} as a string (ie, go, java, node) # {{Path}} as a string (as required by some of the current platform packages) # {{CodePackage}} the package itself as bytes (typically a tar file).",1 -"FAB-10720","06/19/2018 21:17:38","As a fabric developer, I need a way to persistently store ChaincodeInstallPackages.","As part of the new lifecycle work, there is a new chaincode package format as defined via FAB-10719. In order for the new lifecycle to use this new package format, it needs a way to persist these code packages. The existing persistence is done through the ccprovider package; however, because of the legacy and problematic nature of this code (package scoped state and functions), and the irrelevant details like instantiation policy, extending the existing mechanisms is likely to be more work than simply creating a new persistence system. This new persistence should take as a parameter a name, a version, and a set of bytes. This set of bytes is a marshaled form of the new message from FAB-10719. The persistence should use the existing configurable chaincode storage directory to store chaincodes, but instead of storing them as '.' as the current code does, it should store the marshaled package and the package name/version as {{.bin}} and {{.json}} respectively. The implementation should be careful not to expose these details to the user. The implementer should evaluate whether creating a new file in the ccprovider package, or simply creating a new package is more sensible for this work.",2 -"FAB-10723","06/19/2018 21:23:14","As a fabric developer, I need a way to retrieve persisted ChaincodeInstallPackages by hash.","FAB-10720 introduces a way to persist the chaincode install packages. In order for this to be useful, this persistence mechanism must also allow chaincodes to be retrieved. 
The implementation should accept a package hash (corresponding to the hash of the marshaled chaincode install package) and retrieve the package itself, as well as the stored name and version associated with this package.",1 -"FAB-10724","06/19/2018 21:27:57","As a fabric developer, I need a way to retrieve the hash of a persisted ChaincodeInstallPackages by name/version.","FAB-10720 introduces a way to persist chaincode packages onto the filesystem and FAB-10723 introduces a way to retrieve those packages by hash. However, users may not know the hash of their installed chaincode but may only know the friendlier name/version. In this case, we will need to be able to look up the hash by name/version. Once the correct hash is known, the existing API for retrieving the code bytes themselves may be used. The implementation should accept name/version as parameters, and return a byte slice which is the hash.",1 -"FAB-10725","06/19/2018 21:34:09","As a peer admin, I need a chaincode API to install ChaincodeInstallPackages.","The new lifecycle system chaincode needs to expose a function to allow the installation of ChaincodeInstallPackages. It should take as a parameter a message containing the name, version, and ChaincodeInstallPackage. It should return a message containing the hash of the code package. The chaincode function name should be {{InstallCodePackage}}. Both the input and output should be new protobuf messages so that we may use the protobuf versioning mechanisms to extend and deprecate fields. It should define a new interface which declares the dependency on the function provided by FAB-10720 to store the package.",2 -"FAB-10726","06/19/2018 21:49:31","As a peer org admin, I need a chaincode API to signal my approval to execute a chaincode with certain parameters on a channel","This corresponds to the 'InstallChaincodeMetadata' step described in the FAB-8787 design document. This chaincode API should be called {{AgreeToDefine}} and should accept as input a message containing a single field {{Definition}} which is itself a protobuf message with the fields: # Name as a string # Version as a string # Sequence as an unsigned 64 bit integer # Hash as bytes (corresponding to the hash returned by FAB-10725) # Endorsement plugin as string # Validation plugin as string # ValidationParameter as bytes The chaincode API should return an empty protobuf message. The function should first verify that the user is an administrator of the org per the channel configuration. It should then take the {{Definition}} and put it in the organization's private data collection at key """". Note, the ability to write to an org scoped collection does not exist at the time of writing this story, but should be implemented via FAB-8864.",2 -"FAB-10729","06/19/2018 21:58:08","As a peer org admin, I need a chaincode API to define a chaincode","This story corresponds to the 'Define' step in the design document from FAB-8787. The implementation should add a new chaincode function called {{Define}} which accepts a new message as its only parameter. This message should have a single field, corresponding to the {{Definition}} message as defined in FAB-10726. It should return a new empty message. It should read from the organization's private store the key at and verify that the messages are equal or fail. It should use protobuf equality checking rather than direct bytes comparison. Next, it should check to see whether the world state key is set. 
If so, it should read the bytes there, unmarshal it as a chaincode definition, and check that the sequence contained in the new definition is exactly one larger than the existing sequence number or fail. If the key does not exist, it should ensure that the sequence number is 0 or fail. Finally, it should marshal the definition and store it at world state key and return the empty message successfully.",2 -"FAB-11285","07/25/2018 08:13:32","As a token infrastructure developer, I want the committing peers to set up Token-specific validation and transaction processor components using a fixed set of parameters for a channel","Transaction processing at the committing peer side consists of validation (vscc) & commitment (performed by a transaction processor component). The token-related setup of the committing peer expands on both components. *Acceptance*: I can set up the peer using a fixed set of fabtoken parameters, and monitor the internal state of the peer to ensure that FabToken initialisation took place with the right parameters.",2 -"FAB-11286","07/25/2018 08:32:17","As a token application developer, I want to initialise a simplified Token client library using sample FabToken parameters without signing abilities","Acceptance: I can set up the token client library using a fixed set of token parameters, and monitor the internal state of the created client to ensure that its initialization took place with the right parameters. The client library would not provide any further functionality at this point beyond setup.",3 -"FAB-11287","07/25/2018 08:59:25","As a token infrastructure (committing peer) developer, I can fully process an ""issue tokens"" transaction ","*Acceptance*: A (simulated) client generates an ""issue tokens"" transaction that is submitted to the system. Upon receiving this transaction, a committing peer redirects its processing to its Token-specific validation & commit components. FabToken custom validation ensures that the tokens requested to be issued are of the right form (e.g., of acceptable type), and custom commit ensures that the transaction creator is authorised to issue tokens. Upon successful transaction validation and commit, the ledger state is updated to reflect this. The ledger APIs can be used to retrieve the committed data. ",5 -"FAB-11291","07/25/2018 09:38:55","As a token infrastructure (committing peer) developer, I can recognise and process a ""transfer tokens"" transaction without impacting the ledger state","*Acceptance*: A (simulated) client generates a ""transfer tokens"" transaction. Upon receiving this transaction, a committing peer redirects its processing to its FabToken-specific validation & commit components. The transaction is to be marked as invalid in the end and will have no impact on the ledger state. This one relates to the extension of proto messages to accommodate transfer transactions and to the corresponding test, which verifies that transfer token transactions go through the right path. The first part was already completed (proto messages definition). 
The second is blocked by FAB-11175.",2 -"FAB-11292","07/25/2018 09:42:58","As a token infrastructure (committing peer) developer, I can fully process a ""transfer tokens"" transaction ","Given FAB-11942 and FAB-11371, this item refers to testing that the transfer transactions can be correctly processed by the existing tx processor and TMS implementation.",2 -"FAB-11293","07/25/2018 09:52:38","As a token infrastructure (peer) developer, I want the peer to accommodate requests to compute an issue proof given specific parameters","*Acceptance*: A (simulated) client constructs a request for an issue proof by providing the type and quantity of assets to be introduced into the system, and invokes the peer API. The peer would need to construct the corresponding issue proof and return it to the (simulated) client. Acceptance would require that the constructed proof matches the input parameters of the request.",3 -"FAB-11294","07/25/2018 09:56:42","As a token infrastructure (peer) developer, I want to accommodate “list of my tokens” requests","*Acceptance*: A fabric client can be built to submit a “list tokens” query to a peer. The peer considers the current ledger state and returns the tokens owned by the creator of the query; if the creator of the query does not have permission to read the channel’s data, the request is rejected; if the creator of the query does not own any tokens, the peer would return an empty list. The client can observe the correctness of successive “list my tokens” queries in correlation with the content of the ledger.",2 -"FAB-11296","07/25/2018 10:33:20","As a token-based system client developer, I can request from a peer to compute an issue proof matching my issue needs ","Acceptance: A client function (cwLib) takes as arguments the type and quantity of assets to be introduced into the system via a token issue process. As a result the function would request from a peer to construct the corresponding issue proof. The issue proof is returned to the client and the function will return successfully only if the returned issue proof is correctly computed.",2 -"FAB-11297","07/25/2018 10:40:27","As a token-based application/client developer, I can query the ledger through a peer I trust to list my tokens into the system ","*Acceptance*: The client wallet library for token operations in Fabric is enhanced with a function that takes as arguments the long-term identity of an end-user and queries a peer on the set of tokens that this end-user owns. The peer receives the request and acts according to the specification of the previous story.",2 -"FAB-11298","07/25/2018 10:41:54","As a token system client developer, I can request from a peer to compute a transfer proof matching my transfer token needs","Acceptance: A client function (cwLib) takes as arguments the identifiers of the tokens to be transferred and the expected recipients. As a result the function would request from a peer to construct the corresponding transfer proof. 
The transfer proof is returned to the client and the function will return successfully only if the returned transfer proof is correctly computed.",2 -"FAB-11299","07/25/2018 10:44:06","As a token-based application/client developer, I can create and submit a transaction to request the transfer of my tokens","Here the client token library leverages the responses from the prover peer to create a token transaction for transfer that is submitted to the ordering service.",2 -"FAB-11300","07/25/2018 10:46:24","As a token-based application/client developer, I can submit a transaction for issuing new tokens into the system","Here the client token library leverages the responses from the prover peer to create a token transaction for issue that is submitted to the ordering service.",5 -"FAB-11355","07/27/2018 14:13:42","As a token infrastructure (peer) developer, I would like to implement the processing of TLS-authenticated client requests ","The prover peer grpc service is now ready to fully process TLS-authenticated client requests for proof computation. At the end of this item, the acceptance criteria of the parent item should be fulfilled.",5 -"FAB-11532","08/10/2018 06:40:46","As a fabric admin, I need a way to inspect the new chaincode package format.","Because in the new lifecycle scheme, we will require that the package hashes are consistent, users will need to share chaincode packages. However, there should be no centralized point of trust, so each user should be able to verify that the chaincode package contains the expected code. Therefore, we need to extend the ""peer chaincode package"" command to be able to extract a chaincode package and write it out to disk. This should be accomplished with a new chaincode sub-command 'inspectpackage'. It should take a path to the chaincode package, and optionally a location to extract it to (defaulting to the current directory). It should then extract the embedded code package.",3 -"FAB-11533","08/10/2018 06:44:03","As a fabric admin, I need a way to install a new chaincode package.","This is to exercise the new +lifecycle.InstallChaincode API. It should extend the existing 'peer chaincode install' command. It should require that a package be provided as input (to avoid non-deterministic multiple-packaging), and invoke the SCC function. It should require the 'name' and 'version' parameters, but the 'language' and 'path' parameters should not be allowed. Like the other new lifecycle commands, it should distinguish itself from the old APIs by requiring a ""-N"" parameter.",3 -"FAB-11534","08/10/2018 06:46:48","As a fabric admin, I need a way to query installed chaincode packages from the new lifecycle.","This story is to extend the existing 'peer chaincode list' command to support listing installed chaincodes via the new chaincode API. The operation should be the same as before, but it should add a new '-N' parameter to indicate that the new lifecycle should be utilized, instead of the old. It should invoke the '+lifecycle.QueryInstalledChaincodes' API, and print a list of name, version, hash. 
Note, for the time being it is necessary to only allow querying installed chaincodes with '-N'; the instantiated chaincodes will be covered in another story.",2 -"FAB-11535","08/10/2018 06:49:23","Add round trip usage of the new chaincode install and query installed chaincodes function to e2e","With the implementation of the peer chaincode package, peer chaincode install, and peer chaincode list for the new lifecycle, it should be possible to test these using the e2e framework. There is likely no need to create an entire new suite for this with a new network, but extending an existing suite should be straightforward. The test should package a chaincode, install that package, and verify that the package is returned as installed.",2 -"FAB-11671","08/22/2018 09:50:44","As a token-based system client developer, I can request from a peer to compute a token redemption proof matching my token redemption needs","*Acceptance:* A client function (cwLib) takes as arguments the identifiers of tokens to be redeemed via a token redemption process. As a result the function would request from a peer to construct the corresponding token redemption proof. The redemption proof is returned to the client and the function will return successfully only if the returned redemption proof is correctly computed.",5 -"FAB-13067","11/30/2018 03:53:24","Expose orderer blockcutter metrics","The only real tunables for the orderer as far as latency and throughput go are the batch parameters. Therefore, it's critical for network operators to be able to gain insight into how long filling a block takes to know whether they should relax or tighten their batch parameters.",1 -"FAB-13894","01/25/2019 15:55:18","As a developer, I want to use go 1.11.5 while developing on master","The current version of 1.11 is 1.11.5 and includes security related fixes for crypto/elliptic. We need to bump the version of go that we build with in fabric, fabric ca, our docker images, and all of the other miscellaneous places - including our build environments in CI.",2 -"FAB-13895","01/25/2019 15:56:08","As a developer, I want our 1.4 stream to build against the latest release of go 1.11","The current version of 1.11 is 1.11.5 and includes security related fixes for crypto/elliptic. We need to bump the version of go that we build with in fabric, fabric ca, our docker images, and all of the other miscellaneous places - including our build environments in CI.",2 -"FAB-14491","03/05/2019 14:17:58","As a fabric operator, I want the peer to execute a binary I configure to determine if a custom builder should be used","Acceptance: I can add keys to {{core.yaml}} that point to a path in the peer's local file system that contains a {{detect}} binary. If it exists, it is called prior to building chaincode. No further action is required. System chaincode never uses the external launcher.",3 -"FAB-14492","03/05/2019 14:23:00","As a fabric operator, I want a custom builder to be invoked when my detect script exits normally","Acceptance: I can drop a {{build}} script in the peer's local file system next to the {{detect}} script. When the {{detect}} script exits successfully, the {{build}} script will be called. The arguments to the build script should be the extracted contents of the chaincode package, a directory to place chaincode metadata for the peer (ie. CouchDB index files), and a directory to place assets that may be needed to launch chaincode. 
If the {{build}} script exits abnormally, the build process has failed.",3 -"FAB-15313","04/30/2019 15:58:52","Address remaining test comments from https://gerrit.hyperledger.org/r/c/31076/4","Comments regarding the assertions in {{TestSelectClusterBootBlock}} need to be addressed. Gerrit has the details but here are the comments: {quote} I think you either want to use assert.Same(t, bootstrapBlock, clusterBlock) so ensure you're getting back the exact same instance (ptr value) or just use assert.Equal(t, bootstrapBlock, clusterBoot) to ensure that the values pointed to are equal. The header number assertion is implied with the equality check. Unrelated but for awareness (you may already know) - if proto objects get marshaled/unmarshaled, assert.Equals won't necessarily work and you'll have to use proto.Equal to check the values. {quote}",1 -"FAB-15708","06/13/2019 16:37:44","Add 'None' genesis method for orderer configuration","Today, the orderer may be given one of two genesis methods 'Provisional' or 'File'. In the 'Provisional' case (useful only for testing) the orderer generates a genesis block based on a configtx.yaml file, while in the 'File' case, the orderer loads a file from the filesystem. In order to ultimately allow the orderer to start without chains defined, we need to allow a genesis method which does not attempt to do anything. This genesis method should be called 'None', and should not trigger loading of the genesis file, nor generation of a genesis block. ---- If you supply None as the genesis method for an orderer that has not been bootstrapped, the orderer should panic. If you supply None as the genesis method for an orderer that has already been bootstrapped and the genesis file is not available, it should start normally. ",2 -"FAB-15709","06/13/2019 16:39:16","Allow orderer startup with no channels defined","Today, if the orderer does not have any channels defined, it panics because it cannot find the system channel. This story is to remove those startup checks to allow the orderer to start successfully, even though it will not be in a position to accomplish anything. This should also block channel creation requests if no system channel is defined to route them to.",2 -"FAB-15732","06/17/2019 16:23:28","As a chaincode developer, I want {{chaincode package}} to honor *all* of my dependencies","The {{chaincode package}} command includes some logic to filter packages that the ccenv includes. This means that older or newer versions of these packages (used during development) will not be included in the package that's installed to the peer. As a developer, when the ""auto-vendoring"" process is performed, I would like my code to be included at all times.",2 -"FAB-16046","07/22/2019 15:57:53","As a Fabric user, I want to instantiate chaincode using the CLI with a policy requiring multiple orgs"," {code:java} peer chaincode instantiate {code} does not support the {{--peerAddresses}} flag which is needed when using instantiation policies which require more than one org.",2 -"FAB-16206","08/02/2019 19:29:52","External builder detect should not need to parse flags","The first pass of the detect, compile, launch infrastructure was implemented with a collection of command line switches. This will become unwieldy over time as implementers will have difficulties ignoring unknown flags and parameters - especially flags that accept values. 
Instead, we should remain consistent with the buildpack model that already exists - namely, we provide a list of directories that contain extracted files. If we need to provide more data, we can add an additional argument or populate additional files. This means that implementations should ignore extra arguments as a general rule until they know what they point to. detect should be provided a directory containing metadata.json (holding metadata as a basic map) and a reference to a directory containing the exploded source. Acceptance: external scripts do not need to process flags; existing parameters are reworked to fit this model.",3 -"FAB-16213","08/05/2019 14:54:12","Remove ext/entities from the go chaincode packages","The dependency on bccsp prevents proper structuring of the code and will result in a rewrite of the package. The proposal from [~mastersingh24] is to remove the code and, if we need to recreate it, we can do it when the package has been extracted from fabric. Acceptance: there are no fabric dependencies in shim extensions other than the protos.",2 -"FAB-16215","08/05/2019 16:33:31","External builder build should not need to parse flags","The first pass of the detect, compile, launch infrastructure was implemented with a collection of command line switches. This will become unwieldy over time as implementers will have difficulties ignoring unknown flags and parameters - especially flags that accept values. Instead, we should remain consistent with the buildpack model that already exists - namely, we provide a list of directories that contain extracted files. If we need to provide more data, we can add an additional argument or populate additional files. This means that implementations should ignore extra arguments as a general rule until they know what they point to. build should receive arguments for the metadata directory, a directory containing the extracted source, and a directory for it to store its output. Acceptance: external scripts do not need to process flags; existing parameters are reworked to fit this model.",3 -"FAB-16216","08/05/2019 16:34:04","External builder launch should not need to parse flags","The first pass of the detect, compile, launch infrastructure was implemented with a collection of command line switches. This will become unwieldy over time as implementers will have difficulties ignoring unknown flags and parameters - especially flags that accept values. Instead, we should remain consistent with the buildpack model that already exists - namely, we provide a list of directories that contain extracted files. If we need to provide more data, we can add an additional argument or populate additional files. This means that implementations should ignore extra arguments as a general rule until they know what they point to. launch receives a directory containing metadata, the source directory, the output directory from build, and an artifacts directory that contains files necessary for launch (key, cert, address, etc.). The artifacts must have well-known names. Acceptance: external scripts do not need to process flags; existing parameters are reworked to fit this model.",3 -"FAB-16239","08/06/2019 21:40:25","External builders should run in controlled environment","By default, when a process is started, it inherits the environment of its caller. In the context of external chaincode, the environment of the peer is propagated to the builder. 
Given that most users of the peer rely on environment variables to influence the configuration, it's quite likely that sensitive information is accessible in the environment. In order to reduce the likelihood of information leaks via the environment, the external build configuration element in the peer should be extended to support an environment variable name whitelist. Any environment keys in that list will be propagated to the external builder. The default list should contain PATH, LIBPATH, and TMPDIR. It's likely that environment variables like http_proxy/HTTP_PROXY will be added by users.",2 -"FAB-16661","09/19/2019 19:16:58","Consistently provide PEM encoded keys and certificates to chaincode platform implementations","[~jyellick] highlighted something about the launch contract that we should try to address moving forward. The client key and client cert that we provide to the chaincode are base64 encoded PEM blocks while the root certificate is simply the PEM block. The base64 encoding of the (already encoded) PEM certificate is really weird... Unfortunately, we can't just change it as existing chaincode will be decoding it, so we need to add mitigation/migration. [https://github.com/hyperledger/fabric-chaincode-go/blob/8f8a45e6039e7bc0e03a042591b010317481f21d/shim/internal/config.go#L48-L67] The thought is that we can create two new files that contain the pem blocks (instead of base 64 encoded pem blocks) and use new keys to reference them. The old config and format would be deprecated and ultimately removed. This will impact the run step of the launcher as well (in a different story)... The go shim should be updated to prefer the non-double-encoded certificates. Node and java should not. Acceptance: I can run chaincode build with the old and new go chaincode shim.",2 -"FAB-16721","09/27/2019 17:37:01","Ensure that go chaincode can be used with the release 1.4 ccenv","With the removal of the fabric/chaincode/shim and dependencies from the v2 ccenv, chaincode packaged for previous releases can't be built by a 2.0 peer. It's inevitable that previously built chaincode docker images will be removed and, when that happens, the previously installed 1.x chaincode will need to be built by a 2.x peer. This will fail for go chaincode when using the 2.x version of ccenv. To help get out of this mess, we need to verify that the peer can be successfully configured to use the hyperledger/fabric-ccenv-1.4 as the builder image. /cc [~mastersingh24], [~jyellick]",2 -"FAB-16722","09/27/2019 18:27:23","Remove the ""provisional"" system channel genesis block support from the orderer","The generation of a system channel genesis block is not a core function of the orderer. Even though it could be somewhat useful in test environments, from a technical perspective, it requires the orderer to include and use the ""sample"" {{configtx.yaml}} in addition to the orderer's own {{orderer.yaml}}. From a process and deployment perspective, it's better to have the system channel genesis block explicitly created by administrators with real names than to rely on fixtures that were originally intended for test. For these reasons, we should remove the provisional genesis method from the orderer, its associated artifacts, and documentation. The removal should be highlighted in release notes. 
Acceptance: code supporting ""provisional"" is gone, documentation for the feature is removed, and a release note highlighting the removal is created.",3 -"FAB-16858","10/17/2019 19:02:34","Generate user-consumable documentation for external builder/fabric buildpack/external launcher","Create a draft of the documentation for the external build and launch infrastructure for Fabric. This should include introductory material, rationale, and reference data for implementors. The documentation will need to be massaged by the technical writing team for Fabric and adapted for use in the correct locations.",2 -"FAB-17001","11/04/2019 16:43:51","New lifecycle should clean up from failed build during install","When using the new lifecycle to install chaincode, if the build fails due to timeout or some other reason such that the ""endorsement"" fails at the peer, the chaincode package should be removed from the local file system. This prevents an odd behavior where invoking a chaincode whose build failed during install still succeeds.",2 -"FAB-17046","11/11/2019 16:32:54","When chaincode defined with the new lifecycle is no longer defined in any channel, it should be stopped","There is a state listener in the new lifecycle that tracks when chaincode is invocable (or not). Today, when a chaincode becomes invocable, we call the custodian to start it... A similar call needs to be made when a chaincode is no longer invocable to request the custodian to stop it. The lifecycle cache already has a reference counted list of channel references to chaincode. Acceptance: 1. Install chaincode with a name and package-ID ""A"" 2. Define chaincode 3. Install chaincode with name and package-ID ""B"" 4. Upgrade chaincode definition to ""B"" 5. Observe that the chaincode container for package-ID ""A"" is stopped. This story applies to internal platform chaincode and externally built and launched chaincode.",2 -"FAB-17047","11/11/2019 16:36:54","RuntimeLauncher needs to be extended to support Stopping or disconnecting from chaincode","Chaincode execution is currently managed through RuntimeLauncher. The launcher needs to be extended to include a ""Stop"" verb to properly disconnect from or shut down running chaincode that is launched through the external/container/platform packages.",3 -"FAB-17093","11/15/2019 17:58:57","Add a basic test that confirms expected proto types get decorated.","Per https://gerrit.hyperledger.org/r/c/fabric/+/34401#message-eccb8901_793a8c92, the protolator decoration code (https://github.com/hyperledger/fabric/blob/33139a966bb75377f65c5d9dc990619bd80ca22d/common/tools/protolator/protoext/decorate.go) is currently missing any sort of tests. Some changes would be caught by integration tests, but a more basic level of unit testing, ensuring that we are not accidentally omitting decoration of a type we intend to decorate, would be nice.",3 -"FAB-17310","01/07/2020 19:37:48","Enable service discovery to query the _lifecycle endorsers on a channel","Similar to using service discovery to determine the endorsers needed to meet a user-defined chaincode's endorsement policy, we should also be able to use it to discover the endorsers required to commit a chaincode definition using _lifecycle. Acceptance: - Query _lifecycle on a specific channel and receive the valid list of endorsers required to meet the endorsement policy. (Version does not come into play.) The list of endorsers should match the active configuration.
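One plausible shape for the FAB-17047 extension, as a sketch only; the method set and signatures below are illustrative, not fabric's actual interface.
{code}
// Sketch of the extended launcher contract from FAB-17047.
package chaincode

// RuntimeLauncher manages the lifecycle of chaincode runtimes, whether
// platform-built containers or externally built and launched processes.
type RuntimeLauncher interface {
    // Launch starts, or connects to, the chaincode for the given package ID.
    Launch(packageID string) error
    // Stop disconnects from or shuts down the running chaincode so the
    // custodian can reclaim it once no channel references the definition.
    Stop(packageID string) error
}
{code}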
This will likely involve creating a test with a non-standard _lifecycle endorsement policy.",3 -"FAB-17466","02/03/2020 18:04:12","Implement function to generate create channel transaction from parameters","As a fabric administrator, I want to use a fit-for-purpose library to generate a channel creation transaction from input data structures. This library should not make assumptions about how any key material is stored and should not require network access. Acceptance: Calling a function similar to what we have below returns a proto-encoded genesis block that roughly matches that produced by {{configtxgen -channelID channel-id -profile profile-name ...}} using a prototypical {{configtx.yaml}} from our integration test suite. Generating a genesis block from the configtx.yaml and an MSP folder, and using code implemented in this story, I should get two blocks that compare equal under `proto.Equal`. {code}
type Profile struct {
    Consortium   string
    Application  *Application
    Orderer      *Orderer
    Consortiums  map[string]*Consortium
    Capabilities map[string]bool
    Policies     map[string]*Policy
}

...

// Options for extensibility
type Option func(*options)

type options struct{}

func CreateChannelTx(channelSpec ChannelConfig, options ...Option) (*cb.Block, error) {
    // Implement me
}
{code} The types that are listed above come from the {{genesisconfig}} package in fabric and appear to be managed through a combination of viper and yaml processing. We do not want to use these packages directly; we want to copy and adapt the necessary structures into our package.",5 -"FAB-17467","02/03/2020 18:20:03","Implement function to create a detached signature for a ChannelUpdate","As a fabric admin, I wish to sign a {{ChannelUpdate}} transaction and generate a detached signature. The key material used to sign the transaction should be one of the input parameters and should not be derived from configuration. Acceptance: Calling the new function with appropriate arguments generates a detached signature that I can use later as one of the _n_ required signatures when assembling the configuration update transaction.",2 -"FAB-17468","02/03/2020 18:22:44","Implement function to attach signatures to a configuration update transaction","As a fabric admin, I want to take a {{ConfigUpdate}} transaction and a collection of detached signatures and use them to create a complete, signed channel config update transaction that can be validated by the orderer and the peers. Acceptance: The generated configuration transaction should be equivalent to one generated by ""building up"" signatures in order using the existing `signtx` command on the peer.",2 -"FAB-17469","02/03/2020 18:24:21","Implement orderer configuration transaction that can update batch parameters","As a fabric administrator, I want to use a fit-for-purpose library to generate a channel update transaction that modifies the {{max_message_count}} batch size parameter. The channel update transaction that is generated should not be signed. Acceptance: Calling the new function with appropriate arguments should generate a config update transaction that matches one produced by manually transforming and generating a configuration update using the existing tools (ignoring any signature that may be applied by the existing tools).",2 -"FAB-17487","02/10/2020 14:12:36","Implement function to generate ConfigGroup for a new Organization","As a fabric administrator, I want to use a fit-for-purpose library function to generate a ConfigGroup representing an Organization.
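Because FAB-17467/FAB-17468 hinge on what a detached signature actually is, here is a self-contained sketch: sign the concatenation of a signature header and the marshaled ConfigUpdate bytes, and keep the result apart from the transaction until assembly. All type and function names below are illustrative stand-ins, not the library's API.
{code}
// Sketch for FAB-17467/FAB-17468: a detached signature is just the
// signature-header/signature pair, collected separately until enough exist.
package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/sha256"
    "fmt"
)

// detachedSignature mirrors the shape of a ConfigSignature.
type detachedSignature struct {
    SignatureHeader []byte
    Signature       []byte
}

// signConfigUpdate signs header||configUpdate, the same byte layout fabric
// uses when verifying config signatures.
func signConfigUpdate(key *ecdsa.PrivateKey, header, configUpdate []byte) (detachedSignature, error) {
    digest := sha256.Sum256(append(append([]byte{}, header...), configUpdate...))
    sig, err := ecdsa.SignASN1(rand.Reader, key, digest[:])
    if err != nil {
        return detachedSignature{}, err
    }
    return detachedSignature{SignatureHeader: header, Signature: sig}, nil
}

func main() {
    key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    ds, err := signConfigUpdate(key, []byte("sig-header"), []byte("config-update-bytes"))
    if err != nil {
        panic(err)
    }
    fmt.Println("detached signature bytes:", len(ds.Signature))
}
{code}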
The input to the function should not depend on existing MSP material but should directly accept the necessary cryptographic material. Acceptance: Calling the new function with appropriate arguments should generate a proto structure that is semantically equivalent to one generated by {{cryptogen}} and {{configtxgen -printOrg}}.",2 -"FAB-17488","02/10/2020 14:23:05","Implement function to compute a ConfigUpdate from a base and modified configuration transaction","As a fabric administrator, I want to use a fit-for-purpose library function to calculate a ConfigUpdate message that would update a {{base}} config to the {{modified}} config. This function should not need to be provided with anything beyond the two transactions. Acceptance: Calling the new function with appropriate arguments should generate a {{ConfigUpdate}} that is semantically equivalent to the output of {{configtxlator compute_update}}. See https://hyperledger-fabric.readthedocs.io/en/release-1.4/channel_update_tutorial.html for the current manual process.",3 -"FAB-17543","02/26/2020 13:14:09","Implement function that adds an anchor peer","As a fabric administrator, I want to use a fit-for-purpose library that can modify an existing channel configuration to add an anchor peer. Acceptance: Starting with an existing Config block/transaction, I can call a function that adds an anchor peer to the configuration. After performing that operation, I can compute a ConfigUpdate transaction that is ready for a signature workflow. https://hyperledger-fabric.readthedocs.io/en/release-1.4/channel_update_tutorial.html#updating-the-channel-config-to-include-an-org3-anchor-peer-optional",2 -"FAB-17544","02/26/2020 13:15:15","Implement function that removes an anchor peer","As a fabric administrator, I want to use a fit-for-purpose library that can modify an existing channel configuration to remove an anchor peer. Acceptance: Starting with an existing Config block/transaction, I can call a function that removes an anchor peer from the configuration. After performing that operation, I can compute a ConfigUpdate transaction that is ready for a signature workflow. https://hyperledger-fabric.readthedocs.io/en/release-1.4/channel_update_tutorial.html#updating-the-channel-config-to-include-an-org3-anchor-peer-optional",1 -"FAB-17545","02/26/2020 13:25:00","Implement function to revoke a certificate issued by an MSP","As a fabric administrator, I want to use a fit-for-purpose library to generate a configuration update that revokes a certificate issued by my organization's MSP. Acceptance: Provided with a config block/transaction, I can call a function to ""revoke"" a certificate issued by an MSP. This should result in a configuration update that appropriately modifies the {{revocation_list}} attribute of {{FabricMSPConfig}}.",3 -"FAB-17546","02/26/2020 13:30:32","Implement function to add a root CA to an MSP","As an administrator, I want to use a fit-for-purpose library to add a new root CA certificate to an MSP configuration. Acceptance: Starting with a configuration transaction/block, I can call a function that adds a CA certificate to the MSP configuration. After this operation, I should be able to create a config update transaction that is ready for the signature workflow. Minimal certificate validation should be performed by the function.
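For instance, the minimal validation might amount to a check like the following sketch; the package and function names are assumptions.
{code}
// Sketch for FAB-17546: reject a candidate root CA certificate that is not
// marked as a CA or that cannot sign certificates.
package mspconfig

import (
    "crypto/x509"
    "encoding/pem"
    "errors"
)

// ValidateRootCA parses a PEM certificate and applies the minimal checks
// described in FAB-17546 before it is added to an MSP configuration.
func ValidateRootCA(pemBytes []byte) (*x509.Certificate, error) {
    block, _ := pem.Decode(pemBytes)
    if block == nil || block.Type != "CERTIFICATE" {
        return nil, errors.New("input is not a PEM-encoded certificate")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return nil, err
    }
    if !cert.IsCA {
        return nil, errors.New("certificate is not a CA (BasicConstraints CA=false)")
    }
    if cert.KeyUsage&x509.KeyUsageCertSign == 0 {
        return nil, errors.New("certificate lacks the KeyUsageCertSign attribute")
    }
    return cert, nil
}
{code}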
In particular, reject certificates that are missing the KeyUsageCertSign/CA attributes.",2 -"FAB-17552","02/27/2020 14:40:55","Implement an Example test that demonstrates idiomatic usage of the library","As a user of the Fabric channel configuration API, I would like an example in the library documentation that shows me idiomatic usage of the package to create configuration updates and sign them. The update should involve multiple changes to illustrate how state is managed.   *Acceptance* I can find an example for updating channel transactions in the GoDoc for the package. It should also sign the config update envelope.",2 -"FAB-17565","03/02/2020 18:22:50","Implement an Example test that demonstrates creating a new create channel tx","As a user of the Fabric channel configuration API, I would like an example in the library documentation that shows me idiomatic usage of the package to create new channel create transactions and sign them.   *Acceptance* I can find an example for creating create channel transactions in the GoDoc for the package.",2 -"FAB-17567","03/02/2020 18:34:55","Implement helper function for retrieving anchor peers from an existing config","As a user of the Fabric channel configuration API, I would like to be able to retrieve existing anchor peers without manually parsing the config transaction.   *Acceptance* I should be able to call a function for retrieving anchor peers for a specific org. The example test should also be updated to demonstrate usage of this helper.",1 -"FAB-17568","03/02/2020 18:38:07","Implement helper function for retrieving orderer configuration from an existing config","As a user of the Fabric channel configuration API, I would like to be able to retrieve existing orderer configuration without manually parsing the config transaction so I can persist values I don't want to change when updating orderer configuration.   *Acceptance* I should be able to call a function for retrieving orderer configuration that returns a config.Orderer. I can then modify fields on the Orderer configuration and pass it to config.UpdateOrdererConfiguration to update the orderer's configuration. The example tests should also be updated to demonstrate usage of this helper.",1 -"FAB-17569","03/02/2020 18:44:20","Implement helper function for retrieving an org's configuration from an existing config","As a user of the Fabric channel configuration API, I would like to be able to retrieve existing application, orderer, or consortium org configuration without manually parsing the config transaction so I can persist values I don't want to change when updating an org's configuration.   *Acceptance* I should be able to call a function for retrieving an application, orderer, or consortium org's configuration that returns a config.Organization. I can then modify fields on the Organization and pass it to an appropriate update organization function to update the org's configuration. The example tests should also be updated to demonstrate usage of this helper.",1 -"FAB-17570","03/02/2020 18:48:10","Implement functions for adding and removing policies for application orgs and application groups","As a user of the Fabric channel configuration API, I would like to be able to update an application org's and an application group's configuration by adding or removing policies.   *Acceptance* I should be able to call this function to modify a config transaction and then process further configuration updates on the modified config transaction.
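A rough sketch of the add/remove policy surface these stories call for; the types and method names are placeholders, not the final API.
{code}
// Sketch for FAB-17570: adding and removing policies on an application
// org's (or the application group's) config.
package configtx

// Policy mirrors the {Type, Rule} pairs used in configtx.yaml.
type Policy struct {
    Type string // e.g. "Signature" or "ImplicitMeta"
    Rule string // e.g. "MAJORITY Admins"
}

// ApplicationOrg is a handle onto a single org's config group.
type ApplicationOrg struct {
    policies map[string]Policy
}

// SetPolicy adds or replaces a named policy on the org's config group.
func (a *ApplicationOrg) SetPolicy(name string, p Policy) {
    if a.policies == nil {
        a.policies = map[string]Policy{}
    }
    a.policies[name] = p
}

// RemovePolicy deletes a named policy. Whether removing a policy that is
// still referenced by a mod_policy should fail is a design choice left to
// the real implementation.
func (a *ApplicationOrg) RemovePolicy(name string) {
    delete(a.policies, name)
}
{code}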
The example tests should be updated to demonstrate usage of this feature.",1 -"FAB-17571","03/02/2020 18:54:08","Implement functions for adding and removing policies for orderer orgs and orderer group","As a user of the Fabric channel configuration API, I would like to be able to update an orderer org's and the orderer group's configuration by adding or removing policies.   *Acceptance* I should be able to call this function to modify a config transaction and then process further configuration updates on the modified config transaction. The example tests should be updated to demonstrate usage of this feature.",1 -"FAB-17572","03/02/2020 18:55:33","Implement function for adding and removing policies for consortium orgs and consortiums/consortium groups","As a user of the Fabric channel configuration API, I would like to be able to update a consortium org's, the consortiums group's, and an individual consortium group's configuration by adding or removing policies.   *Acceptance* I should be able to call this function to modify a config transaction and then process further configuration updates on the modified config transaction. The example tests should be updated to demonstrate usage of this feature.",1 -"FAB-17576","03/02/2020 20:07:37","Implement helper function for retrieving MSP configuration for a specific org from an existing config","As a user of the Fabric channel configuration API, I would like to be able to retrieve existing MSP configuration for an organization without manually parsing the config transaction so I can reuse CA certs or other relevant information.   *Acceptance* I should be able to call a function for retrieving MSP configuration for a specific org. The example test should also be updated to demonstrate usage of this helper.",1 -"FAB-17577","03/02/2020 21:07:55","Rename pkg/config to something more appropriate","config as a package name isn't very descriptive of what the package does. We should consider renaming the package to something more appropriate for what it actually does (maybe configtx?)",1 -"FAB-17587","03/05/2020 15:39:22","Verify that a cert is issued by an MSP before adding to revocation list","As a fabric administrator, I want to verify that a cert was issued by my organization's MSP before adding it to the MSP's revocation list. Acceptance: Provided with a config block/transaction, I can call a function to ""revoke"" a certificate issued by an MSP. This should result in a configuration update that appropriately modifies the {{revocation_list}} attribute of {{FabricMSPConfig}}. Providing a certificate that has not been issued by the MSP should result in an error.",2 -"FAB-17591","03/06/2020 19:42:28","Implement function that adds an orderer endpoint","As a fabric administrator, I want to use a fit-for-purpose library that can modify an existing channel configuration to add an orderer endpoint to an orderer org's config group. Acceptance: Starting with an existing Config block/transaction, I can call a function that adds an orderer endpoint to the configuration. After performing that operation, I can compute a ConfigUpdate transaction that is ready for a signature workflow. An appropriate example should also be added to the example test that demonstrates this behavior.",1 -"FAB-17592","03/06/2020 19:44:50","Implement function that removes an orderer endpoint","As a fabric administrator, I want to use a fit-for-purpose library that can modify an existing channel configuration to remove an orderer endpoint from an orderer org's config group.
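A shape such a pair of helpers (FAB-17591/FAB-17592) might take; the type and method names are illustrative, not the final API.
{code}
// Sketch for FAB-17591/FAB-17592: endpoint management on an orderer org's
// config group.
package configtx

// OrdererOrg is a handle onto one orderer org's config group.
type OrdererOrg struct {
    endpoints []string // host:port endpoints advertised for this org
}

// AddEndpoint appends an endpoint unless it is already present.
func (o *OrdererOrg) AddEndpoint(ep string) {
    for _, e := range o.endpoints {
        if e == ep {
            return
        }
    }
    o.endpoints = append(o.endpoints, ep)
}

// RemoveEndpoint drops an endpoint. Removing the last endpoint would leave
// the org unreachable, which callers likely want to guard against.
func (o *OrdererOrg) RemoveEndpoint(ep string) {
    for i, e := range o.endpoints {
        if e == ep {
            o.endpoints = append(o.endpoints[:i], o.endpoints[i+1:]...)
            return
        }
    }
}
{code}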
Acceptance: Starting with an existing Config block/transaction, I can call a function that removes an orderer endpoint from the configuration. After performing that operation, I can compute a ConfigUpdate transaction that is ready for a signature workflow. An appropriate example should also be added to the example test that demonstrates this behavior.",1 -"FAB-17600","03/09/2020 20:22:15","Implement helper function for retrieving policies for a specific org from an existing config","As a user of the Fabric channel configuration API, I would like to be able to retrieve existing policies for an organization without manually parsing the config transaction.   *Acceptance* I should be able to call a function for retrieving policies for a specific org (regardless of application/orderer/consortium org). The example test should also be updated to demonstrate usage of this helper.",1 -"FAB-17619","03/13/2020 14:29:09","Wrap config transaction proto and move API to wrapped type","Currently we pass a config object around to every update/API function; let's wrap the config proto so that this: {code} err := UpdateOrdererConfiguration(config, orderer) {code} becomes something like {code} err := config.UpdateOrdererConfiguration(orderer) {code}",2 -"FAB-17637","03/18/2020 16:14:57","Implement functions for adding and removing capabilities from a config","As a user of the Fabric channel configuration API, I would like to be able to add and remove application, orderer, and channel capabilities. Consider whether we should allow setting unsupported capabilities outside of the existing ones or if we should just have toggles (i.e. EnableV20ApplicationCapabilities() and Disable...)   *Acceptance* I should be able to call these functions to modify a config transaction and then process further configuration updates on the modified config transaction. The example tests should be updated to demonstrate usage of this feature.",1 -"FAB-17638","03/18/2020 16:16:19","Implement helper function for retrieving application configuration from an existing config","As a user of the Fabric channel configuration API, I would like to be able to retrieve existing application configuration without manually parsing the config transaction.   *Acceptance* I should be able to call a function for retrieving application configuration that returns a config.Application.",1 -"FAB-17639","03/18/2020 16:20:08","Implement functions for adding and removing ACLs from an application group","As a user of the Fabric channel configuration API, I would like to be able to update an application group's ACLs by adding or removing ACLs.   *Acceptance* I should be able to call this function to modify a config transaction and then process further configuration updates on the modified config transaction. The example tests should be updated to demonstrate usage of this feature.",1 -"FAB-17662","03/26/2020 13:58:14","Remove Get Prefix on Accessors for ConfigTx functions","[https://golang.org/doc/effective_go.html#Getters]   func (c *ConfigTx) GetAnchorPeers(...) -> func (c *ConfigTx) AnchorPeers(...)   [https://github.com/hyperledger/fabric/pull/880#discussion_r397418783]",1 -"FAB-17733","04/09/2020 22:40:24","Validate consenter certs when adding to application channel (and system channel)","When adding a consenter node to the application channel, the MSP contributing the node must be present in the Orderer section... Otherwise any invoke on the channel fails since the channel is in a bad state...
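Returning to FAB-17619 for a moment, one way the wrapped type could look; this is a sketch under assumed names, not necessarily what the package will settle on.
{code}
// Sketch for FAB-17619: hang the update API off a wrapper that tracks the
// base config and the in-progress modifications.
package configtx

import (
    "github.com/golang/protobuf/proto"

    cb "github.com/hyperledger/fabric-protos-go/common"
)

// ConfigTx carries the original config plus a working copy that update
// methods mutate, so helpers become methods instead of free functions.
type ConfigTx struct {
    base    *cb.Config // as fetched from the channel
    updated *cb.Config // accumulates modifications
}

// New seeds the working copy from the base config.
func New(base *cb.Config) ConfigTx {
    return ConfigTx{
        base:    base,
        updated: proto.Clone(base).(*cb.Config),
    }
}
{code}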
if they are going from one to two consenters, the channel is in an irrecoverable state.  We should check and see if the necessary certs are in the MSP before allowing a consenter addition",2 -"FAB-17965","06/08/2020 14:00:49","Remove the system channel from a network with channels","Allow a network running with a system channel to migrate and start operating without it. The flow for doing so is: * Enable the channel participation API on all OSNs (a staggered restart of all OSNs is required). * Put the system channel into maintenance mode in order to prevent channel creation while it is being removed, which would result in inconsistencies across orderers. * Remove the system channel with the channel participation API. * Prevent config transactions that change orderer channel membership on app channels, to avoid triggering a transition from Inactive <-> Raft  ** A simple option is to Halt all the channels, and ** Halt the InactiveChainRegistry * Removal should also remove all the channels the orderer is not a member of * Restart every chain (or restart the server), in order to recreate the chains and convert every chain and its haltCallback pointers from Raft+Inactive to Raft+Follower * Normal activity may resume",3 -"FAB-18187","08/25/2020 17:22:11","Back-porting Go Modules to 1.x","Is there any chance that the migration to go modules could be back-ported for the latest 1.x release (1.4.8 at the time of this writing)? I want to write a chaincode with the latest 1.x version, but really dislike the legacy package management of Go and the need to manage $GOPATH and other env vars (personal preference). If I understand it correctly, all it would take is a single commit that adds a go.mod file on the 1.x branch and then a release of a minor version that gets tagged so go can pick it up as a version. Also happy to send a PR with this if people support the idea. Please advise.",3
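To make FAB-18187's proposal concrete, the single commit would add a go.mod along these lines at the root of the 1.x branch; the module path is the real repository, while the go directive shown is an assumption.
{code}
// go.mod added at the root of the release-1.4 branch (sketch).
module github.com/hyperledger/fabric

go 1.13
{code}
Once a 1.4.x release is tagged with this file in place, a chaincode module could require github.com/hyperledger/fabric at that tag directly (version illustrative), with no $GOPATH setup needed.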