repo_id
stringlengths
18
103
file_path
stringlengths
30
136
content
stringlengths
2
3.36M
__index_level_0__
int64
0
0
coqui_public_repos/STT/native_client
coqui_public_repos/STT/native_client/kenlm/COPYING.LESSER.3
GNU LESSER GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. This version of the GNU Lesser General Public License incorporates the terms and conditions of version 3 of the GNU General Public License, supplemented by the additional permissions listed below. 0. Additional Definitions. As used herein, "this License" refers to version 3 of the GNU Lesser General Public License, and the "GNU GPL" refers to version 3 of the GNU General Public License. "The Library" refers to a covered work governed by this License, other than an Application or a Combined Work as defined below. An "Application" is any work that makes use of an interface provided by the Library, but which is not otherwise based on the Library. Defining a subclass of a class defined by the Library is deemed a mode of using an interface provided by the Library. A "Combined Work" is a work produced by combining or linking an Application with the Library. The particular version of the Library with which the Combined Work was made is also called the "Linked Version". The "Minimal Corresponding Source" for a Combined Work means the Corresponding Source for the Combined Work, excluding any source code for portions of the Combined Work that, considered in isolation, are based on the Application, and not on the Linked Version. The "Corresponding Application Code" for a Combined Work means the object code and/or source code for the Application, including any data and utility programs needed for reproducing the Combined Work from the Application, but excluding the System Libraries of the Combined Work. 1. Exception to Section 3 of the GNU GPL. You may convey a covered work under sections 3 and 4 of this License without being bound by section 3 of the GNU GPL. 2. Conveying Modified Versions. If you modify a copy of the Library, and, in your modifications, a facility refers to a function or data to be supplied by an Application that uses the facility (other than as an argument passed when the facility is invoked), then you may convey a copy of the modified version: a) under this License, provided that you make a good faith effort to ensure that, in the event an Application does not supply the function or data, the facility still operates, and performs whatever part of its purpose remains meaningful, or b) under the GNU GPL, with none of the additional permissions of this License applicable to that copy. 3. Object Code Incorporating Material from Library Header Files. The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following: a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the object code with a copy of the GNU GPL and this license document. 4. Combined Works. You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following: a) Give prominent notice with each copy of the Combined Work that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the Combined Work with a copy of the GNU GPL and this license document. c) For a Combined Work that displays copyright notices during execution, include the copyright notice for the Library among these notices, as well as a reference directing the user to the copies of the GNU GPL and this license document. d) Do one of the following: 0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source. 1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version. e) Provide Installation Information, but only if you would otherwise be required to provide such information under section 6 of the GNU GPL, and only to the extent that such information is necessary to install and execute a modified version of the Combined Work produced by recombining or relinking the Application with a modified version of the Linked Version. (If you use option 4d0, the Installation Information must accompany the Minimal Corresponding Source and Corresponding Application Code. If you use option 4d1, you must provide the Installation Information in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.) 5. Combined Libraries. You may place library facilities that are a work based on the Library side by side in a single library together with other library facilities that are not Applications and are not covered by this License, and convey such a combined library under terms of your choice, if you do both of the following: a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities, conveyed under the terms of this License. b) Give prominent notice with the combined library that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work. 6. Revised Versions of the GNU Lesser General Public License. The Free Software Foundation may publish revised and/or new versions of the GNU Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library as you received it specifies that a certain numbered version of the GNU Lesser General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that published version or of any later version published by the Free Software Foundation. If the Library as you received it does not specify a version number of the GNU Lesser General Public License, you may choose any version of the GNU Lesser General Public License ever published by the Free Software Foundation. If the Library as you received it specifies that a proxy can decide whether future versions of the GNU Lesser General Public License shall apply, that proxy's public statement of acceptance of any version is permanent authorization for you to choose that version for the Library.
0
coqui_public_repos
coqui_public_repos/TTS/.pylintrc
[MASTER] # A comma-separated list of package or module names from where C extensions may # be loaded. Extensions are loading into the active Python interpreter and may # run arbitrary code. extension-pkg-whitelist= # Add files or directories to the blacklist. They should be base names, not # paths. ignore=CVS # Add files or directories matching the regex patterns to the blacklist. The # regex matches against base names, not paths. ignore-patterns= # Python code to execute, usually for sys.path manipulation such as # pygtk.require(). #init-hook= # Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the # number of processors available to use. jobs=1 # Control the amount of potential inferred values when inferring a single # object. This can help the performance when dealing with large functions or # complex, nested conditions. limit-inference-results=100 # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. load-plugins= # Pickle collected data for later comparisons. persistent=yes # Specify a configuration file. #rcfile= # When enabled, pylint would attempt to guess common misconfiguration and emit # user-friendly hints instead of false-positive error messages. suggestion-mode=yes # Allow loading of arbitrary C extensions. Extensions are imported into the # active Python interpreter and may run arbitrary code. unsafe-load-any-extension=no [MESSAGES CONTROL] # Only show warnings with the listed confidence levels. Leave empty to show # all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED. confidence= # Disable the message, report, category or checker with the given id(s). You # can either give multiple identifiers separated by comma (,) or put this # option multiple times (only on the command line, not in the configuration # file where it should appear only once). You can also use "--disable=all" to # disable everything first and then reenable specific checks. For example, if # you want to run only the similarities checker, you can use "--disable=all # --enable=similarities". If you want to run only the classes checker, but have # no Warning level messages displayed, use "--disable=all --enable=classes # --disable=W". disable=missing-docstring, too-many-public-methods, too-many-lines, bare-except, ## for avoiding weird p3.6 CI linter error ## TODO: see later if we can remove this assigning-non-slot, unsupported-assignment-operation, ## end line-too-long, fixme, wrong-import-order, ungrouped-imports, wrong-import-position, import-error, invalid-name, too-many-instance-attributes, arguments-differ, arguments-renamed, no-name-in-module, no-member, unsubscriptable-object, print-statement, parameter-unpacking, unpacking-in-except, old-raise-syntax, backtick, long-suffix, old-ne-operator, old-octal-literal, import-star-module-level, non-ascii-bytes-literal, raw-checker-failed, bad-inline-option, locally-disabled, file-ignored, suppressed-message, useless-suppression, deprecated-pragma, use-symbolic-message-instead, useless-object-inheritance, too-few-public-methods, too-many-branches, too-many-arguments, too-many-locals, too-many-statements, apply-builtin, basestring-builtin, buffer-builtin, cmp-builtin, coerce-builtin, execfile-builtin, file-builtin, long-builtin, raw_input-builtin, reduce-builtin, standarderror-builtin, unicode-builtin, xrange-builtin, coerce-method, delslice-method, getslice-method, setslice-method, no-absolute-import, old-division, dict-iter-method, dict-view-method, next-method-called, metaclass-assignment, indexing-exception, raising-string, reload-builtin, oct-method, hex-method, nonzero-method, cmp-method, input-builtin, round-builtin, intern-builtin, unichr-builtin, map-builtin-not-iterating, zip-builtin-not-iterating, range-builtin-not-iterating, filter-builtin-not-iterating, using-cmp-argument, eq-without-hash, div-method, idiv-method, rdiv-method, exception-message-attribute, invalid-str-codec, sys-max-int, bad-python3-import, deprecated-string-function, deprecated-str-translate-call, deprecated-itertools-function, deprecated-types-field, next-method-defined, dict-items-not-iterating, dict-keys-not-iterating, dict-values-not-iterating, deprecated-operator-function, deprecated-urllib-function, xreadlines-attribute, deprecated-sys-function, exception-escape, comprehension-escape, duplicate-code, not-callable, import-outside-toplevel, logging-fstring-interpolation, logging-not-lazy # Enable the message, report, category or checker with the given id(s). You can # either give multiple identifier separated by comma (,) or put this option # multiple time (only on the command line, not in the configuration file where # it should appear only once). See also the "--disable" option for examples. enable=c-extension-no-member [REPORTS] # Python expression which should return a note less than 10 (10 is the highest # note). You have access to the variables errors warning, statement which # respectively contain the number of errors / warnings messages and the total # number of statements analyzed. This is used by the global evaluation report # (RP0004). evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) # Template used to display messages. This is a python new-style format string # used to format the message information. See doc for all details. #msg-template= # Set the output format. Available formats are text, parseable, colorized, json # and msvs (visual studio). You can also give a reporter class, e.g. # mypackage.mymodule.MyReporterClass. output-format=text # Tells whether to display a full report or only the messages. reports=no # Activate the evaluation score. score=yes [REFACTORING] # Maximum number of nested blocks for function / method body max-nested-blocks=5 # Complete name of functions that never returns. When checking for # inconsistent-return-statements if a never returning function is called then # it will be considered as an explicit return statement and no message will be # printed. never-returning-functions=sys.exit [LOGGING] # Format style used to check logging format string. `old` means using % # formatting, while `new` is for `{}` formatting. logging-format-style=old # Logging modules to check that the string format arguments are in logging # function parameter format. logging-modules=logging [SPELLING] # Limits count of emitted suggestions for spelling mistakes. max-spelling-suggestions=4 # Spelling dictionary name. Available dictionaries: none. To make it working # install python-enchant package.. spelling-dict= # List of comma separated words that should not be checked. spelling-ignore-words= # A path to a file that contains private dictionary; one word per line. spelling-private-dict-file= # Tells whether to store unknown words to indicated private dictionary in # --spelling-private-dict-file option instead of raising a message. spelling-store-unknown-words=no [MISCELLANEOUS] # List of note tags to take in consideration, separated by a comma. notes=FIXME, XXX, TODO [TYPECHECK] # List of decorators that produce context managers, such as # contextlib.contextmanager. Add to this list to register other decorators that # produce valid context managers. contextmanager-decorators=contextlib.contextmanager # List of members which are set dynamically and missed by pylint inference # system, and so shouldn't trigger E1101 when accessed. Python regular # expressions are accepted. generated-members=numpy.*,torch.* # Tells whether missing members accessed in mixin class should be ignored. A # mixin class is detected if its name ends with "mixin" (case insensitive). ignore-mixin-members=yes # Tells whether to warn about missing members when the owner of the attribute # is inferred to be None. ignore-none=yes # This flag controls whether pylint should warn about no-member and similar # checks whenever an opaque object is returned when inferring. The inference # can return multiple potential results while evaluating a Python object, but # some branches might not be evaluated, which results in partial inference. In # that case, it might be useful to still emit no-member and other checks for # the rest of the inferred objects. ignore-on-opaque-inference=yes # List of class names for which member attributes should not be checked (useful # for classes with dynamically set attributes). This supports the use of # qualified names. ignored-classes=optparse.Values,thread._local,_thread._local # List of module names for which member attributes should not be checked # (useful for modules/projects where namespaces are manipulated during runtime # and thus existing member attributes cannot be deduced by static analysis. It # supports qualified module names, as well as Unix pattern matching. ignored-modules= # Show a hint with possible names when a member name was not found. The aspect # of finding the hint is based on edit distance. missing-member-hint=yes # The minimum edit distance a name should have in order to be considered a # similar match for a missing member name. missing-member-hint-distance=1 # The total number of similar names that should be taken in consideration when # showing a hint for a missing member. missing-member-max-choices=1 [VARIABLES] # List of additional names supposed to be defined in builtins. Remember that # you should avoid defining new builtins when possible. additional-builtins= # Tells whether unused global variables should be treated as a violation. allow-global-unused-variables=yes # List of strings which can identify a callback function by name. A callback # name must start or end with one of those strings. callbacks=cb_, _cb # A regular expression matching the name of dummy variables (i.e. expected to # not be used). dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_ # Argument names that match this expression will be ignored. Default to name # with leading underscore. ignored-argument-names=_.*|^ignored_|^unused_ # Tells whether we should check for unused import in __init__ files. init-import=no # List of qualified module names which can have objects that can redefine # builtins. redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io [FORMAT] # Expected format of line ending, e.g. empty (any line ending), LF or CRLF. expected-line-ending-format= # Regexp for a line that is allowed to be longer than the limit. ignore-long-lines=^\s*(# )?<?https?://\S+>?$ # Number of spaces of indent required inside a hanging or continued line. indent-after-paren=4 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). indent-string=' ' # Maximum number of characters on a single line. max-line-length=120 # Maximum number of lines in a module. max-module-lines=1000 # List of optional constructs for which whitespace checking is disabled. `dict- # separator` is used to allow tabulation in dicts, etc.: {1 : 1,\n222: 2}. # `trailing-comma` allows a space between comma and closing bracket: (a, ). # `empty-line` allows space-only lines. no-space-check=trailing-comma, dict-separator # Allow the body of a class to be on the same line as the declaration if body # contains single statement. single-line-class-stmt=no # Allow the body of an if to be on the same line as the test if there is no # else. single-line-if-stmt=no [SIMILARITIES] # Ignore comments when computing similarities. ignore-comments=yes # Ignore docstrings when computing similarities. ignore-docstrings=yes # Ignore imports when computing similarities. ignore-imports=no # Minimum lines number of a similarity. min-similarity-lines=4 [BASIC] # Naming style matching correct argument names. argument-naming-style=snake_case # Regular expression matching correct argument names. Overrides argument- # naming-style. argument-rgx=[a-z_][a-z0-9_]{0,30}$ # Naming style matching correct attribute names. attr-naming-style=snake_case # Regular expression matching correct attribute names. Overrides attr-naming- # style. #attr-rgx= # Bad variable names which should always be refused, separated by a comma. bad-names= # Naming style matching correct class attribute names. class-attribute-naming-style=any # Regular expression matching correct class attribute names. Overrides class- # attribute-naming-style. #class-attribute-rgx= # Naming style matching correct class names. class-naming-style=PascalCase # Regular expression matching correct class names. Overrides class-naming- # style. #class-rgx= # Naming style matching correct constant names. const-naming-style=UPPER_CASE # Regular expression matching correct constant names. Overrides const-naming- # style. #const-rgx= # Minimum line length for functions/classes that require docstrings, shorter # ones are exempt. docstring-min-length=-1 # Naming style matching correct function names. function-naming-style=snake_case # Regular expression matching correct function names. Overrides function- # naming-style. #function-rgx= # Good variable names which should always be accepted, separated by a comma. good-names=i, j, k, x, ex, Run, _ # Include a hint for the correct naming format with invalid-name. include-naming-hint=no # Naming style matching correct inline iteration names. inlinevar-naming-style=any # Regular expression matching correct inline iteration names. Overrides # inlinevar-naming-style. #inlinevar-rgx= # Naming style matching correct method names. method-naming-style=snake_case # Regular expression matching correct method names. Overrides method-naming- # style. #method-rgx= # Naming style matching correct module names. module-naming-style=snake_case # Regular expression matching correct module names. Overrides module-naming- # style. #module-rgx= # Colon-delimited sets of names that determine each other's naming style when # the name regexes allow several styles. name-group= # Regular expression which should only match function or class names that do # not require a docstring. no-docstring-rgx=^_ # List of decorators that produce properties, such as abc.abstractproperty. Add # to this list to register other decorators that produce valid properties. # These decorators are taken in consideration only for invalid-name. property-classes=abc.abstractproperty # Naming style matching correct variable names. variable-naming-style=snake_case # Regular expression matching correct variable names. Overrides variable- # naming-style. variable-rgx=[a-z_][a-z0-9_]{0,30}$ [STRING] # This flag controls whether the implicit-str-concat-in-sequence should # generate a warning on implicit string concatenation in sequences defined over # several lines. check-str-concat-over-line-jumps=no [IMPORTS] # Allow wildcard imports from modules that define __all__. allow-wildcard-with-all=no # Analyse import fallback blocks. This can be used to support both Python 2 and # 3 compatible code, which means that the block might have code that exists # only in one or another interpreter, leading to false positives when analysed. analyse-fallback-blocks=no # Deprecated modules which should not be used, separated by a comma. deprecated-modules=optparse,tkinter.tix # Create a graph of external dependencies in the given file (report RP0402 must # not be disabled). ext-import-graph= # Create a graph of every (i.e. internal and external) dependencies in the # given file (report RP0402 must not be disabled). import-graph= # Create a graph of internal dependencies in the given file (report RP0402 must # not be disabled). int-import-graph= # Force import order to recognize a module as part of the standard # compatibility libraries. known-standard-library= # Force import order to recognize a module as part of a third party library. known-third-party=enchant [CLASSES] # List of method names used to declare (i.e. assign) instance attributes. defining-attr-methods=__init__, __new__, setUp # List of member names, which should be excluded from the protected access # warning. exclude-protected=_asdict, _fields, _replace, _source, _make # List of valid names for the first argument in a class method. valid-classmethod-first-arg=cls # List of valid names for the first argument in a metaclass class method. valid-metaclass-classmethod-first-arg=cls [DESIGN] # Maximum number of arguments for function / method. max-args=5 # Maximum number of attributes for a class (see R0902). max-attributes=7 # Maximum number of boolean expressions in an if statement. max-bool-expr=5 # Maximum number of branch for function / method body. max-branches=12 # Maximum number of locals for function / method body. max-locals=15 # Maximum number of parents for a class (see R0901). max-parents=15 # Maximum number of public methods for a class (see R0904). max-public-methods=20 # Maximum number of return / yield for function / method body. max-returns=6 # Maximum number of statements in function / method body. max-statements=50 # Minimum number of public methods for a class (see R0903). min-public-methods=2 [EXCEPTIONS] # Exceptions that will emit a warning when being caught. Defaults to # "BaseException, Exception". overgeneral-exceptions=BaseException, Exception
0
coqui_public_repos/STT/native_client/kenlm
coqui_public_repos/STT/native_client/kenlm/lm/search_hashed.cc
#include "search_hashed.hh" #include "binary_format.hh" #include "blank.hh" #include "lm_exception.hh" #include "model.hh" #include "read_arpa.hh" #include "value.hh" #include "vocab.hh" #include "../util/bit_packing.hh" #include "../util/file_piece.hh" #include <string> namespace lm { namespace ngram { class ProbingModel; namespace { /* These are passed to ReadNGrams so that n-grams with zero backoff that appear as context will still be used in state. */ template <class Middle> class ActivateLowerMiddle { public: explicit ActivateLowerMiddle(Middle &middle) : modify_(middle) {} void operator()(const WordIndex *vocab_ids, const unsigned int n) { uint64_t hash = static_cast<WordIndex>(vocab_ids[1]); for (const WordIndex *i = vocab_ids + 2; i < vocab_ids + n; ++i) { hash = detail::CombineWordHash(hash, *i); } typename Middle::MutableIterator i; // TODO: somehow get text of n-gram for this error message. if (!modify_.UnsafeMutableFind(hash, i)) UTIL_THROW(FormatLoadException, "The context of every " << n << "-gram should appear as a " << (n-1) << "-gram"); SetExtension(i->value.backoff); } private: Middle &modify_; }; template <class Weights> class ActivateUnigram { public: explicit ActivateUnigram(Weights *unigram) : modify_(unigram) {} void operator()(const WordIndex *vocab_ids, const unsigned int /*n*/) { // assert(n == 2); SetExtension(modify_[vocab_ids[1]].backoff); } private: Weights *modify_; }; // Find the lower order entry, inserting blanks along the way as necessary. template <class Value> void FindLower( const std::vector<uint64_t> &keys, typename Value::Weights &unigram, std::vector<util::ProbingHashTable<typename Value::ProbingEntry, util::IdentityHash> > &middle, std::vector<typename Value::Weights *> &between) { typename util::ProbingHashTable<typename Value::ProbingEntry, util::IdentityHash>::MutableIterator iter; typename Value::ProbingEntry entry; // Backoff will always be 0.0. We'll get the probability and rest in another pass. entry.value.backoff = kNoExtensionBackoff; // Go back and find the longest right-aligned entry, informing it that it extends left. Normally this will match immediately, but sometimes SRI is dumb. for (int lower = keys.size() - 2; ; --lower) { if (lower == -1) { between.push_back(&unigram); return; } entry.key = keys[lower]; bool found = middle[lower].FindOrInsert(entry, iter); between.push_back(&iter->value); if (found) return; } } // Between usually has single entry, the value to adjust. But sometimes SRI stupidly pruned entries so it has unitialized blank values to be set here. template <class Added, class Build> void AdjustLower( const Added &added, const Build &build, std::vector<typename Build::Value::Weights *> &between, const unsigned int n, const std::vector<WordIndex> &vocab_ids, typename Build::Value::Weights *unigrams, std::vector<util::ProbingHashTable<typename Build::Value::ProbingEntry, util::IdentityHash> > &middle) { typedef typename Build::Value Value; if (between.size() == 1) { build.MarkExtends(*between.front(), added); return; } typedef util::ProbingHashTable<typename Value::ProbingEntry, util::IdentityHash> Middle; float prob = -fabs(between.back()->prob); // Order of the n-gram on which probabilities are based. unsigned char basis = n - between.size(); assert(basis != 0); typename Build::Value::Weights **change = &between.back(); // Skip the basis. --change; if (basis == 1) { // Hallucinate a bigram based on a unigram's backoff and a unigram probability. float &backoff = unigrams[vocab_ids[1]].backoff; SetExtension(backoff); prob += backoff; (*change)->prob = prob; build.SetRest(&*vocab_ids.begin(), 2, **change); basis = 2; --change; } uint64_t backoff_hash = static_cast<uint64_t>(vocab_ids[1]); for (unsigned char i = 2; i <= basis; ++i) { backoff_hash = detail::CombineWordHash(backoff_hash, vocab_ids[i]); } for (; basis < n - 1; ++basis, --change) { typename Middle::MutableIterator gotit; if (middle[basis - 2].UnsafeMutableFind(backoff_hash, gotit)) { float &backoff = gotit->value.backoff; SetExtension(backoff); prob += backoff; } (*change)->prob = prob; build.SetRest(&*vocab_ids.begin(), basis + 1, **change); backoff_hash = detail::CombineWordHash(backoff_hash, vocab_ids[basis+1]); } typename std::vector<typename Value::Weights *>::const_iterator i(between.begin()); build.MarkExtends(**i, added); const typename Value::Weights *longer = *i; // Everything has probability but is not marked as extending. for (++i; i != between.end(); ++i) { build.MarkExtends(**i, *longer); longer = *i; } } // Continue marking lower entries even they know that they extend left. This is used for upper/lower bounds. template <class Build> void MarkLower( const std::vector<uint64_t> &keys, const Build &build, typename Build::Value::Weights &unigram, std::vector<util::ProbingHashTable<typename Build::Value::ProbingEntry, util::IdentityHash> > &middle, int start_order, const typename Build::Value::Weights &longer) { if (start_order == 0) return; // Hopefully the compiler will realize that if MarkExtends always returns false, it can simplify this code. for (int even_lower = start_order - 2 /* index in middle */; ; --even_lower) { if (even_lower == -1) { build.MarkExtends(unigram, longer); return; } if (!build.MarkExtends( middle[even_lower].UnsafeMutableMustFind(keys[even_lower])->value, longer)) return; } } template <class Build, class Activate, class Store> void ReadNGrams( util::FilePiece &f, const unsigned int n, const size_t count, const ProbingVocabulary &vocab, const Build &build, typename Build::Value::Weights *unigrams, std::vector<util::ProbingHashTable<typename Build::Value::ProbingEntry, util::IdentityHash> > &middle, Activate activate, Store &store, PositiveProbWarn &warn) { typedef typename Build::Value Value; assert(n >= 2); ReadNGramHeader(f, n); // Both vocab_ids and keys are non-empty because n >= 2. // vocab ids of words in reverse order. std::vector<WordIndex> vocab_ids(n); std::vector<uint64_t> keys(n-1); typename Store::Entry entry; std::vector<typename Value::Weights *> between; for (size_t i = 0; i < count; ++i) { ReadNGram(f, n, vocab, vocab_ids.rbegin(), entry.value, warn); build.SetRest(&*vocab_ids.begin(), n, entry.value); keys[0] = detail::CombineWordHash(static_cast<uint64_t>(vocab_ids.front()), vocab_ids[1]); for (unsigned int h = 1; h < n - 1; ++h) { keys[h] = detail::CombineWordHash(keys[h-1], vocab_ids[h+1]); } // Initially the sign bit is on, indicating it does not extend left. Most already have this but there might +0.0. util::SetSign(entry.value.prob); entry.key = keys[n-2]; store.Insert(entry); between.clear(); FindLower<Value>(keys, unigrams[vocab_ids.front()], middle, between); AdjustLower<typename Store::Entry::Value, Build>(entry.value, build, between, n, vocab_ids, unigrams, middle); if (Build::kMarkEvenLower) MarkLower<Build>(keys, build, unigrams[vocab_ids.front()], middle, n - between.size() - 1, *between.back()); activate(&*vocab_ids.begin(), n); } store.FinishedInserting(); } } // namespace namespace detail { template <class Value> uint8_t *HashedSearch<Value>::SetupMemory(uint8_t *start, const std::vector<uint64_t> &counts, const Config &config) { unigram_ = Unigram(start, counts[0]); start += Unigram::Size(counts[0]); std::size_t allocated; middle_.clear(); for (unsigned int n = 2; n < counts.size(); ++n) { allocated = Middle::Size(counts[n - 1], config.probing_multiplier); middle_.push_back(Middle(start, allocated)); start += allocated; } allocated = Longest::Size(counts.back(), config.probing_multiplier); longest_ = Longest(start, allocated); start += allocated; return start; } /*template <class Value> void HashedSearch<Value>::Relocate(uint8_t *start, const std::vector<uint64_t> &counts, const Config &config) { unigram_ = Unigram(start, counts[0]); start += Unigram::Size(counts[0]); for (unsigned int n = 2; n < counts.size(); ++n) { middle[n-2].Relocate(start); start += Middle::Size(counts[n - 1], config.probing_multiplier) } longest_.Relocate(start); }*/ template <class Value> void HashedSearch<Value>::InitializeFromARPA(const char * /*file*/, util::FilePiece &f, const std::vector<uint64_t> &counts, const Config &config, ProbingVocabulary &vocab, BinaryFormat &backing) { void *vocab_rebase; void *search_base = backing.GrowForSearch(Size(counts, config), vocab.UnkCountChangePadding(), vocab_rebase); vocab.Relocate(vocab_rebase); SetupMemory(reinterpret_cast<uint8_t*>(search_base), counts, config); PositiveProbWarn warn(config.positive_log_probability); Read1Grams(f, counts[0], vocab, unigram_.Raw(), warn); CheckSpecials(config, vocab); DispatchBuild(f, counts, config, vocab, warn); } template <> void HashedSearch<BackoffValue>::DispatchBuild(util::FilePiece &f, const std::vector<uint64_t> &counts, const Config &config, const ProbingVocabulary &vocab, PositiveProbWarn &warn) { NoRestBuild build; ApplyBuild(f, counts, vocab, warn, build); } template <> void HashedSearch<RestValue>::DispatchBuild(util::FilePiece &f, const std::vector<uint64_t> &counts, const Config &config, const ProbingVocabulary &vocab, PositiveProbWarn &warn) { switch (config.rest_function) { case Config::REST_MAX: { MaxRestBuild build; ApplyBuild(f, counts, vocab, warn, build); } break; case Config::REST_LOWER: { LowerRestBuild<ProbingModel> build(config, counts.size(), vocab); ApplyBuild(f, counts, vocab, warn, build); } break; } } template <class Value> template <class Build> void HashedSearch<Value>::ApplyBuild(util::FilePiece &f, const std::vector<uint64_t> &counts, const ProbingVocabulary &vocab, PositiveProbWarn &warn, const Build &build) { for (WordIndex i = 0; i < counts[0]; ++i) { build.SetRest(&i, (unsigned int)1, unigram_.Raw()[i]); } try { if (counts.size() > 2) { ReadNGrams<Build, ActivateUnigram<typename Value::Weights>, Middle>( f, 2, counts[1], vocab, build, unigram_.Raw(), middle_, ActivateUnigram<typename Value::Weights>(unigram_.Raw()), middle_[0], warn); } for (unsigned int n = 3; n < counts.size(); ++n) { ReadNGrams<Build, ActivateLowerMiddle<Middle>, Middle>( f, n, counts[n-1], vocab, build, unigram_.Raw(), middle_, ActivateLowerMiddle<Middle>(middle_[n-3]), middle_[n-2], warn); } if (counts.size() > 2) { ReadNGrams<Build, ActivateLowerMiddle<Middle>, Longest>( f, counts.size(), counts[counts.size() - 1], vocab, build, unigram_.Raw(), middle_, ActivateLowerMiddle<Middle>(middle_.back()), longest_, warn); } else { ReadNGrams<Build, ActivateUnigram<typename Value::Weights>, Longest>( f, counts.size(), counts[counts.size() - 1], vocab, build, unigram_.Raw(), middle_, ActivateUnigram<typename Value::Weights>(unigram_.Raw()), longest_, warn); } } catch (util::ProbingSizeException &e) { UTIL_THROW(util::ProbingSizeException, "Avoid pruning n-grams like \"bar baz quux\" when \"foo bar baz quux\" is still in the model. KenLM will work when this pruning happens, but the probing model assumes these events are rare enough that using blank space in the probing hash table will cover all of them. Increase probing_multiplier (-p to build_binary) to add more blank spaces.\n"); } ReadEnd(f); } template class HashedSearch<BackoffValue>; template class HashedSearch<RestValue>; } // namespace detail } // namespace ngram } // namespace lm
0
coqui_public_repos/stt-model-manager/coqui_stt_model_manager
coqui_public_repos/stt-model-manager/coqui_stt_model_manager/static/bootstrap-icons.css
@font-face { font-family: "bootstrap-icons"; src: url("./fonts/bootstrap-icons.woff2?856008caa5eb66df68595e734e59580d") format("woff2"), url("./fonts/bootstrap-icons.woff?856008caa5eb66df68595e734e59580d") format("woff"); } [class^="bi-"]::before, [class*=" bi-"]::before { display: inline-block; font-family: bootstrap-icons !important; font-style: normal; font-weight: normal !important; font-variant: normal; text-transform: none; line-height: 1; vertical-align: -.125em; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; } .bi-alarm-fill::before { content: "\f101"; } .bi-alarm::before { content: "\f102"; } .bi-align-bottom::before { content: "\f103"; } .bi-align-center::before { content: "\f104"; } .bi-align-end::before { content: "\f105"; } .bi-align-middle::before { content: "\f106"; } .bi-align-start::before { content: "\f107"; } .bi-align-top::before { content: "\f108"; } .bi-alt::before { content: "\f109"; } .bi-app-indicator::before { content: "\f10a"; } .bi-app::before { content: "\f10b"; } .bi-archive-fill::before { content: "\f10c"; } .bi-archive::before { content: "\f10d"; } .bi-arrow-90deg-down::before { content: "\f10e"; } .bi-arrow-90deg-left::before { content: "\f10f"; } .bi-arrow-90deg-right::before { content: "\f110"; } .bi-arrow-90deg-up::before { content: "\f111"; } .bi-arrow-bar-down::before { content: "\f112"; } .bi-arrow-bar-left::before { content: "\f113"; } .bi-arrow-bar-right::before { content: "\f114"; } .bi-arrow-bar-up::before { content: "\f115"; } .bi-arrow-clockwise::before { content: "\f116"; } .bi-arrow-counterclockwise::before { content: "\f117"; } .bi-arrow-down-circle-fill::before { content: "\f118"; } .bi-arrow-down-circle::before { content: "\f119"; } .bi-arrow-down-left-circle-fill::before { content: "\f11a"; } .bi-arrow-down-left-circle::before { content: "\f11b"; } .bi-arrow-down-left-square-fill::before { content: "\f11c"; } .bi-arrow-down-left-square::before { content: "\f11d"; } .bi-arrow-down-left::before { content: "\f11e"; } .bi-arrow-down-right-circle-fill::before { content: "\f11f"; } .bi-arrow-down-right-circle::before { content: "\f120"; } .bi-arrow-down-right-square-fill::before { content: "\f121"; } .bi-arrow-down-right-square::before { content: "\f122"; } .bi-arrow-down-right::before { content: "\f123"; } .bi-arrow-down-short::before { content: "\f124"; } .bi-arrow-down-square-fill::before { content: "\f125"; } .bi-arrow-down-square::before { content: "\f126"; } .bi-arrow-down-up::before { content: "\f127"; } .bi-arrow-down::before { content: "\f128"; } .bi-arrow-left-circle-fill::before { content: "\f129"; } .bi-arrow-left-circle::before { content: "\f12a"; } .bi-arrow-left-right::before { content: "\f12b"; } .bi-arrow-left-short::before { content: "\f12c"; } .bi-arrow-left-square-fill::before { content: "\f12d"; } .bi-arrow-left-square::before { content: "\f12e"; } .bi-arrow-left::before { content: "\f12f"; } .bi-arrow-repeat::before { content: "\f130"; } .bi-arrow-return-left::before { content: "\f131"; } .bi-arrow-return-right::before { content: "\f132"; } .bi-arrow-right-circle-fill::before { content: "\f133"; } .bi-arrow-right-circle::before { content: "\f134"; } .bi-arrow-right-short::before { content: "\f135"; } .bi-arrow-right-square-fill::before { content: "\f136"; } .bi-arrow-right-square::before { content: "\f137"; } .bi-arrow-right::before { content: "\f138"; } .bi-arrow-up-circle-fill::before { content: "\f139"; } .bi-arrow-up-circle::before { content: "\f13a"; } .bi-arrow-up-left-circle-fill::before { content: "\f13b"; } .bi-arrow-up-left-circle::before { content: "\f13c"; } .bi-arrow-up-left-square-fill::before { content: "\f13d"; } .bi-arrow-up-left-square::before { content: "\f13e"; } .bi-arrow-up-left::before { content: "\f13f"; } .bi-arrow-up-right-circle-fill::before { content: "\f140"; } .bi-arrow-up-right-circle::before { content: "\f141"; } .bi-arrow-up-right-square-fill::before { content: "\f142"; } .bi-arrow-up-right-square::before { content: "\f143"; } .bi-arrow-up-right::before { content: "\f144"; } .bi-arrow-up-short::before { content: "\f145"; } .bi-arrow-up-square-fill::before { content: "\f146"; } .bi-arrow-up-square::before { content: "\f147"; } .bi-arrow-up::before { content: "\f148"; } .bi-arrows-angle-contract::before { content: "\f149"; } .bi-arrows-angle-expand::before { content: "\f14a"; } .bi-arrows-collapse::before { content: "\f14b"; } .bi-arrows-expand::before { content: "\f14c"; } .bi-arrows-fullscreen::before { content: "\f14d"; } .bi-arrows-move::before { content: "\f14e"; } .bi-aspect-ratio-fill::before { content: "\f14f"; } .bi-aspect-ratio::before { content: "\f150"; } .bi-asterisk::before { content: "\f151"; } .bi-at::before { content: "\f152"; } .bi-award-fill::before { content: "\f153"; } .bi-award::before { content: "\f154"; } .bi-back::before { content: "\f155"; } .bi-backspace-fill::before { content: "\f156"; } .bi-backspace-reverse-fill::before { content: "\f157"; } .bi-backspace-reverse::before { content: "\f158"; } .bi-backspace::before { content: "\f159"; } .bi-badge-3d-fill::before { content: "\f15a"; } .bi-badge-3d::before { content: "\f15b"; } .bi-badge-4k-fill::before { content: "\f15c"; } .bi-badge-4k::before { content: "\f15d"; } .bi-badge-8k-fill::before { content: "\f15e"; } .bi-badge-8k::before { content: "\f15f"; } .bi-badge-ad-fill::before { content: "\f160"; } .bi-badge-ad::before { content: "\f161"; } .bi-badge-ar-fill::before { content: "\f162"; } .bi-badge-ar::before { content: "\f163"; } .bi-badge-cc-fill::before { content: "\f164"; } .bi-badge-cc::before { content: "\f165"; } .bi-badge-hd-fill::before { content: "\f166"; } .bi-badge-hd::before { content: "\f167"; } .bi-badge-tm-fill::before { content: "\f168"; } .bi-badge-tm::before { content: "\f169"; } .bi-badge-vo-fill::before { content: "\f16a"; } .bi-badge-vo::before { content: "\f16b"; } .bi-badge-vr-fill::before { content: "\f16c"; } .bi-badge-vr::before { content: "\f16d"; } .bi-badge-wc-fill::before { content: "\f16e"; } .bi-badge-wc::before { content: "\f16f"; } .bi-bag-check-fill::before { content: "\f170"; } .bi-bag-check::before { content: "\f171"; } .bi-bag-dash-fill::before { content: "\f172"; } .bi-bag-dash::before { content: "\f173"; } .bi-bag-fill::before { content: "\f174"; } .bi-bag-plus-fill::before { content: "\f175"; } .bi-bag-plus::before { content: "\f176"; } .bi-bag-x-fill::before { content: "\f177"; } .bi-bag-x::before { content: "\f178"; } .bi-bag::before { content: "\f179"; } .bi-bar-chart-fill::before { content: "\f17a"; } .bi-bar-chart-line-fill::before { content: "\f17b"; } .bi-bar-chart-line::before { content: "\f17c"; } .bi-bar-chart-steps::before { content: "\f17d"; } .bi-bar-chart::before { content: "\f17e"; } .bi-basket-fill::before { content: "\f17f"; } .bi-basket::before { content: "\f180"; } .bi-basket2-fill::before { content: "\f181"; } .bi-basket2::before { content: "\f182"; } .bi-basket3-fill::before { content: "\f183"; } .bi-basket3::before { content: "\f184"; } .bi-battery-charging::before { content: "\f185"; } .bi-battery-full::before { content: "\f186"; } .bi-battery-half::before { content: "\f187"; } .bi-battery::before { content: "\f188"; } .bi-bell-fill::before { content: "\f189"; } .bi-bell::before { content: "\f18a"; } .bi-bezier::before { content: "\f18b"; } .bi-bezier2::before { content: "\f18c"; } .bi-bicycle::before { content: "\f18d"; } .bi-binoculars-fill::before { content: "\f18e"; } .bi-binoculars::before { content: "\f18f"; } .bi-blockquote-left::before { content: "\f190"; } .bi-blockquote-right::before { content: "\f191"; } .bi-book-fill::before { content: "\f192"; } .bi-book-half::before { content: "\f193"; } .bi-book::before { content: "\f194"; } .bi-bookmark-check-fill::before { content: "\f195"; } .bi-bookmark-check::before { content: "\f196"; } .bi-bookmark-dash-fill::before { content: "\f197"; } .bi-bookmark-dash::before { content: "\f198"; } .bi-bookmark-fill::before { content: "\f199"; } .bi-bookmark-heart-fill::before { content: "\f19a"; } .bi-bookmark-heart::before { content: "\f19b"; } .bi-bookmark-plus-fill::before { content: "\f19c"; } .bi-bookmark-plus::before { content: "\f19d"; } .bi-bookmark-star-fill::before { content: "\f19e"; } .bi-bookmark-star::before { content: "\f19f"; } .bi-bookmark-x-fill::before { content: "\f1a0"; } .bi-bookmark-x::before { content: "\f1a1"; } .bi-bookmark::before { content: "\f1a2"; } .bi-bookmarks-fill::before { content: "\f1a3"; } .bi-bookmarks::before { content: "\f1a4"; } .bi-bookshelf::before { content: "\f1a5"; } .bi-bootstrap-fill::before { content: "\f1a6"; } .bi-bootstrap-reboot::before { content: "\f1a7"; } .bi-bootstrap::before { content: "\f1a8"; } .bi-border-all::before { content: "\f1a9"; } .bi-border-bottom::before { content: "\f1aa"; } .bi-border-center::before { content: "\f1ab"; } .bi-border-inner::before { content: "\f1ac"; } .bi-border-left::before { content: "\f1ad"; } .bi-border-middle::before { content: "\f1ae"; } .bi-border-outer::before { content: "\f1af"; } .bi-border-right::before { content: "\f1b0"; } .bi-border-style::before { content: "\f1b1"; } .bi-border-top::before { content: "\f1b2"; } .bi-border-width::before { content: "\f1b3"; } .bi-border::before { content: "\f1b4"; } .bi-bounding-box-circles::before { content: "\f1b5"; } .bi-bounding-box::before { content: "\f1b6"; } .bi-box-arrow-down-left::before { content: "\f1b7"; } .bi-box-arrow-down-right::before { content: "\f1b8"; } .bi-box-arrow-down::before { content: "\f1b9"; } .bi-box-arrow-in-down-left::before { content: "\f1ba"; } .bi-box-arrow-in-down-right::before { content: "\f1bb"; } .bi-box-arrow-in-down::before { content: "\f1bc"; } .bi-box-arrow-in-left::before { content: "\f1bd"; } .bi-box-arrow-in-right::before { content: "\f1be"; } .bi-box-arrow-in-up-left::before { content: "\f1bf"; } .bi-box-arrow-in-up-right::before { content: "\f1c0"; } .bi-box-arrow-in-up::before { content: "\f1c1"; } .bi-box-arrow-left::before { content: "\f1c2"; } .bi-box-arrow-right::before { content: "\f1c3"; } .bi-box-arrow-up-left::before { content: "\f1c4"; } .bi-box-arrow-up-right::before { content: "\f1c5"; } .bi-box-arrow-up::before { content: "\f1c6"; } .bi-box-seam::before { content: "\f1c7"; } .bi-box::before { content: "\f1c8"; } .bi-braces::before { content: "\f1c9"; } .bi-bricks::before { content: "\f1ca"; } .bi-briefcase-fill::before { content: "\f1cb"; } .bi-briefcase::before { content: "\f1cc"; } .bi-brightness-alt-high-fill::before { content: "\f1cd"; } .bi-brightness-alt-high::before { content: "\f1ce"; } .bi-brightness-alt-low-fill::before { content: "\f1cf"; } .bi-brightness-alt-low::before { content: "\f1d0"; } .bi-brightness-high-fill::before { content: "\f1d1"; } .bi-brightness-high::before { content: "\f1d2"; } .bi-brightness-low-fill::before { content: "\f1d3"; } .bi-brightness-low::before { content: "\f1d4"; } .bi-broadcast-pin::before { content: "\f1d5"; } .bi-broadcast::before { content: "\f1d6"; } .bi-brush-fill::before { content: "\f1d7"; } .bi-brush::before { content: "\f1d8"; } .bi-bucket-fill::before { content: "\f1d9"; } .bi-bucket::before { content: "\f1da"; } .bi-bug-fill::before { content: "\f1db"; } .bi-bug::before { content: "\f1dc"; } .bi-building::before { content: "\f1dd"; } .bi-bullseye::before { content: "\f1de"; } .bi-calculator-fill::before { content: "\f1df"; } .bi-calculator::before { content: "\f1e0"; } .bi-calendar-check-fill::before { content: "\f1e1"; } .bi-calendar-check::before { content: "\f1e2"; } .bi-calendar-date-fill::before { content: "\f1e3"; } .bi-calendar-date::before { content: "\f1e4"; } .bi-calendar-day-fill::before { content: "\f1e5"; } .bi-calendar-day::before { content: "\f1e6"; } .bi-calendar-event-fill::before { content: "\f1e7"; } .bi-calendar-event::before { content: "\f1e8"; } .bi-calendar-fill::before { content: "\f1e9"; } .bi-calendar-minus-fill::before { content: "\f1ea"; } .bi-calendar-minus::before { content: "\f1eb"; } .bi-calendar-month-fill::before { content: "\f1ec"; } .bi-calendar-month::before { content: "\f1ed"; } .bi-calendar-plus-fill::before { content: "\f1ee"; } .bi-calendar-plus::before { content: "\f1ef"; } .bi-calendar-range-fill::before { content: "\f1f0"; } .bi-calendar-range::before { content: "\f1f1"; } .bi-calendar-week-fill::before { content: "\f1f2"; } .bi-calendar-week::before { content: "\f1f3"; } .bi-calendar-x-fill::before { content: "\f1f4"; } .bi-calendar-x::before { content: "\f1f5"; } .bi-calendar::before { content: "\f1f6"; } .bi-calendar2-check-fill::before { content: "\f1f7"; } .bi-calendar2-check::before { content: "\f1f8"; } .bi-calendar2-date-fill::before { content: "\f1f9"; } .bi-calendar2-date::before { content: "\f1fa"; } .bi-calendar2-day-fill::before { content: "\f1fb"; } .bi-calendar2-day::before { content: "\f1fc"; } .bi-calendar2-event-fill::before { content: "\f1fd"; } .bi-calendar2-event::before { content: "\f1fe"; } .bi-calendar2-fill::before { content: "\f1ff"; } .bi-calendar2-minus-fill::before { content: "\f200"; } .bi-calendar2-minus::before { content: "\f201"; } .bi-calendar2-month-fill::before { content: "\f202"; } .bi-calendar2-month::before { content: "\f203"; } .bi-calendar2-plus-fill::before { content: "\f204"; } .bi-calendar2-plus::before { content: "\f205"; } .bi-calendar2-range-fill::before { content: "\f206"; } .bi-calendar2-range::before { content: "\f207"; } .bi-calendar2-week-fill::before { content: "\f208"; } .bi-calendar2-week::before { content: "\f209"; } .bi-calendar2-x-fill::before { content: "\f20a"; } .bi-calendar2-x::before { content: "\f20b"; } .bi-calendar2::before { content: "\f20c"; } .bi-calendar3-event-fill::before { content: "\f20d"; } .bi-calendar3-event::before { content: "\f20e"; } .bi-calendar3-fill::before { content: "\f20f"; } .bi-calendar3-range-fill::before { content: "\f210"; } .bi-calendar3-range::before { content: "\f211"; } .bi-calendar3-week-fill::before { content: "\f212"; } .bi-calendar3-week::before { content: "\f213"; } .bi-calendar3::before { content: "\f214"; } .bi-calendar4-event::before { content: "\f215"; } .bi-calendar4-range::before { content: "\f216"; } .bi-calendar4-week::before { content: "\f217"; } .bi-calendar4::before { content: "\f218"; } .bi-camera-fill::before { content: "\f219"; } .bi-camera-reels-fill::before { content: "\f21a"; } .bi-camera-reels::before { content: "\f21b"; } .bi-camera-video-fill::before { content: "\f21c"; } .bi-camera-video-off-fill::before { content: "\f21d"; } .bi-camera-video-off::before { content: "\f21e"; } .bi-camera-video::before { content: "\f21f"; } .bi-camera::before { content: "\f220"; } .bi-camera2::before { content: "\f221"; } .bi-capslock-fill::before { content: "\f222"; } .bi-capslock::before { content: "\f223"; } .bi-card-checklist::before { content: "\f224"; } .bi-card-heading::before { content: "\f225"; } .bi-card-image::before { content: "\f226"; } .bi-card-list::before { content: "\f227"; } .bi-card-text::before { content: "\f228"; } .bi-caret-down-fill::before { content: "\f229"; } .bi-caret-down-square-fill::before { content: "\f22a"; } .bi-caret-down-square::before { content: "\f22b"; } .bi-caret-down::before { content: "\f22c"; } .bi-caret-left-fill::before { content: "\f22d"; } .bi-caret-left-square-fill::before { content: "\f22e"; } .bi-caret-left-square::before { content: "\f22f"; } .bi-caret-left::before { content: "\f230"; } .bi-caret-right-fill::before { content: "\f231"; } .bi-caret-right-square-fill::before { content: "\f232"; } .bi-caret-right-square::before { content: "\f233"; } .bi-caret-right::before { content: "\f234"; } .bi-caret-up-fill::before { content: "\f235"; } .bi-caret-up-square-fill::before { content: "\f236"; } .bi-caret-up-square::before { content: "\f237"; } .bi-caret-up::before { content: "\f238"; } .bi-cart-check-fill::before { content: "\f239"; } .bi-cart-check::before { content: "\f23a"; } .bi-cart-dash-fill::before { content: "\f23b"; } .bi-cart-dash::before { content: "\f23c"; } .bi-cart-fill::before { content: "\f23d"; } .bi-cart-plus-fill::before { content: "\f23e"; } .bi-cart-plus::before { content: "\f23f"; } .bi-cart-x-fill::before { content: "\f240"; } .bi-cart-x::before { content: "\f241"; } .bi-cart::before { content: "\f242"; } .bi-cart2::before { content: "\f243"; } .bi-cart3::before { content: "\f244"; } .bi-cart4::before { content: "\f245"; } .bi-cash-stack::before { content: "\f246"; } .bi-cash::before { content: "\f247"; } .bi-cast::before { content: "\f248"; } .bi-chat-dots-fill::before { content: "\f249"; } .bi-chat-dots::before { content: "\f24a"; } .bi-chat-fill::before { content: "\f24b"; } .bi-chat-left-dots-fill::before { content: "\f24c"; } .bi-chat-left-dots::before { content: "\f24d"; } .bi-chat-left-fill::before { content: "\f24e"; } .bi-chat-left-quote-fill::before { content: "\f24f"; } .bi-chat-left-quote::before { content: "\f250"; } .bi-chat-left-text-fill::before { content: "\f251"; } .bi-chat-left-text::before { content: "\f252"; } .bi-chat-left::before { content: "\f253"; } .bi-chat-quote-fill::before { content: "\f254"; } .bi-chat-quote::before { content: "\f255"; } .bi-chat-right-dots-fill::before { content: "\f256"; } .bi-chat-right-dots::before { content: "\f257"; } .bi-chat-right-fill::before { content: "\f258"; } .bi-chat-right-quote-fill::before { content: "\f259"; } .bi-chat-right-quote::before { content: "\f25a"; } .bi-chat-right-text-fill::before { content: "\f25b"; } .bi-chat-right-text::before { content: "\f25c"; } .bi-chat-right::before { content: "\f25d"; } .bi-chat-square-dots-fill::before { content: "\f25e"; } .bi-chat-square-dots::before { content: "\f25f"; } .bi-chat-square-fill::before { content: "\f260"; } .bi-chat-square-quote-fill::before { content: "\f261"; } .bi-chat-square-quote::before { content: "\f262"; } .bi-chat-square-text-fill::before { content: "\f263"; } .bi-chat-square-text::before { content: "\f264"; } .bi-chat-square::before { content: "\f265"; } .bi-chat-text-fill::before { content: "\f266"; } .bi-chat-text::before { content: "\f267"; } .bi-chat::before { content: "\f268"; } .bi-check-all::before { content: "\f269"; } .bi-check-circle-fill::before { content: "\f26a"; } .bi-check-circle::before { content: "\f26b"; } .bi-check-square-fill::before { content: "\f26c"; } .bi-check-square::before { content: "\f26d"; } .bi-check::before { content: "\f26e"; } .bi-check2-all::before { content: "\f26f"; } .bi-check2-circle::before { content: "\f270"; } .bi-check2-square::before { content: "\f271"; } .bi-check2::before { content: "\f272"; } .bi-chevron-bar-contract::before { content: "\f273"; } .bi-chevron-bar-down::before { content: "\f274"; } .bi-chevron-bar-expand::before { content: "\f275"; } .bi-chevron-bar-left::before { content: "\f276"; } .bi-chevron-bar-right::before { content: "\f277"; } .bi-chevron-bar-up::before { content: "\f278"; } .bi-chevron-compact-down::before { content: "\f279"; } .bi-chevron-compact-left::before { content: "\f27a"; } .bi-chevron-compact-right::before { content: "\f27b"; } .bi-chevron-compact-up::before { content: "\f27c"; } .bi-chevron-contract::before { content: "\f27d"; } .bi-chevron-double-down::before { content: "\f27e"; } .bi-chevron-double-left::before { content: "\f27f"; } .bi-chevron-double-right::before { content: "\f280"; } .bi-chevron-double-up::before { content: "\f281"; } .bi-chevron-down::before { content: "\f282"; } .bi-chevron-expand::before { content: "\f283"; } .bi-chevron-left::before { content: "\f284"; } .bi-chevron-right::before { content: "\f285"; } .bi-chevron-up::before { content: "\f286"; } .bi-circle-fill::before { content: "\f287"; } .bi-circle-half::before { content: "\f288"; } .bi-circle-square::before { content: "\f289"; } .bi-circle::before { content: "\f28a"; } .bi-clipboard-check::before { content: "\f28b"; } .bi-clipboard-data::before { content: "\f28c"; } .bi-clipboard-minus::before { content: "\f28d"; } .bi-clipboard-plus::before { content: "\f28e"; } .bi-clipboard-x::before { content: "\f28f"; } .bi-clipboard::before { content: "\f290"; } .bi-clock-fill::before { content: "\f291"; } .bi-clock-history::before { content: "\f292"; } .bi-clock::before { content: "\f293"; } .bi-cloud-arrow-down-fill::before { content: "\f294"; } .bi-cloud-arrow-down::before { content: "\f295"; } .bi-cloud-arrow-up-fill::before { content: "\f296"; } .bi-cloud-arrow-up::before { content: "\f297"; } .bi-cloud-check-fill::before { content: "\f298"; } .bi-cloud-check::before { content: "\f299"; } .bi-cloud-download-fill::before { content: "\f29a"; } .bi-cloud-download::before { content: "\f29b"; } .bi-cloud-drizzle-fill::before { content: "\f29c"; } .bi-cloud-drizzle::before { content: "\f29d"; } .bi-cloud-fill::before { content: "\f29e"; } .bi-cloud-fog-fill::before { content: "\f29f"; } .bi-cloud-fog::before { content: "\f2a0"; } .bi-cloud-fog2-fill::before { content: "\f2a1"; } .bi-cloud-fog2::before { content: "\f2a2"; } .bi-cloud-hail-fill::before { content: "\f2a3"; } .bi-cloud-hail::before { content: "\f2a4"; } .bi-cloud-haze-1::before { content: "\f2a5"; } .bi-cloud-haze-fill::before { content: "\f2a6"; } .bi-cloud-haze::before { content: "\f2a7"; } .bi-cloud-haze2-fill::before { content: "\f2a8"; } .bi-cloud-lightning-fill::before { content: "\f2a9"; } .bi-cloud-lightning-rain-fill::before { content: "\f2aa"; } .bi-cloud-lightning-rain::before { content: "\f2ab"; } .bi-cloud-lightning::before { content: "\f2ac"; } .bi-cloud-minus-fill::before { content: "\f2ad"; } .bi-cloud-minus::before { content: "\f2ae"; } .bi-cloud-moon-fill::before { content: "\f2af"; } .bi-cloud-moon::before { content: "\f2b0"; } .bi-cloud-plus-fill::before { content: "\f2b1"; } .bi-cloud-plus::before { content: "\f2b2"; } .bi-cloud-rain-fill::before { content: "\f2b3"; } .bi-cloud-rain-heavy-fill::before { content: "\f2b4"; } .bi-cloud-rain-heavy::before { content: "\f2b5"; } .bi-cloud-rain::before { content: "\f2b6"; } .bi-cloud-slash-fill::before { content: "\f2b7"; } .bi-cloud-slash::before { content: "\f2b8"; } .bi-cloud-sleet-fill::before { content: "\f2b9"; } .bi-cloud-sleet::before { content: "\f2ba"; } .bi-cloud-snow-fill::before { content: "\f2bb"; } .bi-cloud-snow::before { content: "\f2bc"; } .bi-cloud-sun-fill::before { content: "\f2bd"; } .bi-cloud-sun::before { content: "\f2be"; } .bi-cloud-upload-fill::before { content: "\f2bf"; } .bi-cloud-upload::before { content: "\f2c0"; } .bi-cloud::before { content: "\f2c1"; } .bi-clouds-fill::before { content: "\f2c2"; } .bi-clouds::before { content: "\f2c3"; } .bi-cloudy-fill::before { content: "\f2c4"; } .bi-cloudy::before { content: "\f2c5"; } .bi-code-slash::before { content: "\f2c6"; } .bi-code-square::before { content: "\f2c7"; } .bi-code::before { content: "\f2c8"; } .bi-collection-fill::before { content: "\f2c9"; } .bi-collection-play-fill::before { content: "\f2ca"; } .bi-collection-play::before { content: "\f2cb"; } .bi-collection::before { content: "\f2cc"; } .bi-columns-gap::before { content: "\f2cd"; } .bi-columns::before { content: "\f2ce"; } .bi-command::before { content: "\f2cf"; } .bi-compass-fill::before { content: "\f2d0"; } .bi-compass::before { content: "\f2d1"; } .bi-cone-striped::before { content: "\f2d2"; } .bi-cone::before { content: "\f2d3"; } .bi-controller::before { content: "\f2d4"; } .bi-cpu-fill::before { content: "\f2d5"; } .bi-cpu::before { content: "\f2d6"; } .bi-credit-card-2-back-fill::before { content: "\f2d7"; } .bi-credit-card-2-back::before { content: "\f2d8"; } .bi-credit-card-2-front-fill::before { content: "\f2d9"; } .bi-credit-card-2-front::before { content: "\f2da"; } .bi-credit-card-fill::before { content: "\f2db"; } .bi-credit-card::before { content: "\f2dc"; } .bi-crop::before { content: "\f2dd"; } .bi-cup-fill::before { content: "\f2de"; } .bi-cup-straw::before { content: "\f2df"; } .bi-cup::before { content: "\f2e0"; } .bi-cursor-fill::before { content: "\f2e1"; } .bi-cursor-text::before { content: "\f2e2"; } .bi-cursor::before { content: "\f2e3"; } .bi-dash-circle-dotted::before { content: "\f2e4"; } .bi-dash-circle-fill::before { content: "\f2e5"; } .bi-dash-circle::before { content: "\f2e6"; } .bi-dash-square-dotted::before { content: "\f2e7"; } .bi-dash-square-fill::before { content: "\f2e8"; } .bi-dash-square::before { content: "\f2e9"; } .bi-dash::before { content: "\f2ea"; } .bi-diagram-2-fill::before { content: "\f2eb"; } .bi-diagram-2::before { content: "\f2ec"; } .bi-diagram-3-fill::before { content: "\f2ed"; } .bi-diagram-3::before { content: "\f2ee"; } .bi-diamond-fill::before { content: "\f2ef"; } .bi-diamond-half::before { content: "\f2f0"; } .bi-diamond::before { content: "\f2f1"; } .bi-dice-1-fill::before { content: "\f2f2"; } .bi-dice-1::before { content: "\f2f3"; } .bi-dice-2-fill::before { content: "\f2f4"; } .bi-dice-2::before { content: "\f2f5"; } .bi-dice-3-fill::before { content: "\f2f6"; } .bi-dice-3::before { content: "\f2f7"; } .bi-dice-4-fill::before { content: "\f2f8"; } .bi-dice-4::before { content: "\f2f9"; } .bi-dice-5-fill::before { content: "\f2fa"; } .bi-dice-5::before { content: "\f2fb"; } .bi-dice-6-fill::before { content: "\f2fc"; } .bi-dice-6::before { content: "\f2fd"; } .bi-disc-fill::before { content: "\f2fe"; } .bi-disc::before { content: "\f2ff"; } .bi-discord::before { content: "\f300"; } .bi-display-fill::before { content: "\f301"; } .bi-display::before { content: "\f302"; } .bi-distribute-horizontal::before { content: "\f303"; } .bi-distribute-vertical::before { content: "\f304"; } .bi-door-closed-fill::before { content: "\f305"; } .bi-door-closed::before { content: "\f306"; } .bi-door-open-fill::before { content: "\f307"; } .bi-door-open::before { content: "\f308"; } .bi-dot::before { content: "\f309"; } .bi-download::before { content: "\f30a"; } .bi-droplet-fill::before { content: "\f30b"; } .bi-droplet-half::before { content: "\f30c"; } .bi-droplet::before { content: "\f30d"; } .bi-earbuds::before { content: "\f30e"; } .bi-easel-fill::before { content: "\f30f"; } .bi-easel::before { content: "\f310"; } .bi-egg-fill::before { content: "\f311"; } .bi-egg-fried::before { content: "\f312"; } .bi-egg::before { content: "\f313"; } .bi-eject-fill::before { content: "\f314"; } .bi-eject::before { content: "\f315"; } .bi-emoji-angry-fill::before { content: "\f316"; } .bi-emoji-angry::before { content: "\f317"; } .bi-emoji-dizzy-fill::before { content: "\f318"; } .bi-emoji-dizzy::before { content: "\f319"; } .bi-emoji-expressionless-fill::before { content: "\f31a"; } .bi-emoji-expressionless::before { content: "\f31b"; } .bi-emoji-frown-fill::before { content: "\f31c"; } .bi-emoji-frown::before { content: "\f31d"; } .bi-emoji-heart-eyes-fill::before { content: "\f31e"; } .bi-emoji-heart-eyes::before { content: "\f31f"; } .bi-emoji-laughing-fill::before { content: "\f320"; } .bi-emoji-laughing::before { content: "\f321"; } .bi-emoji-neutral-fill::before { content: "\f322"; } .bi-emoji-neutral::before { content: "\f323"; } .bi-emoji-smile-fill::before { content: "\f324"; } .bi-emoji-smile-upside-down-fill::before { content: "\f325"; } .bi-emoji-smile-upside-down::before { content: "\f326"; } .bi-emoji-smile::before { content: "\f327"; } .bi-emoji-sunglasses-fill::before { content: "\f328"; } .bi-emoji-sunglasses::before { content: "\f329"; } .bi-emoji-wink-fill::before { content: "\f32a"; } .bi-emoji-wink::before { content: "\f32b"; } .bi-envelope-fill::before { content: "\f32c"; } .bi-envelope-open-fill::before { content: "\f32d"; } .bi-envelope-open::before { content: "\f32e"; } .bi-envelope::before { content: "\f32f"; } .bi-eraser-fill::before { content: "\f330"; } .bi-eraser::before { content: "\f331"; } .bi-exclamation-circle-fill::before { content: "\f332"; } .bi-exclamation-circle::before { content: "\f333"; } .bi-exclamation-diamond-fill::before { content: "\f334"; } .bi-exclamation-diamond::before { content: "\f335"; } .bi-exclamation-octagon-fill::before { content: "\f336"; } .bi-exclamation-octagon::before { content: "\f337"; } .bi-exclamation-square-fill::before { content: "\f338"; } .bi-exclamation-square::before { content: "\f339"; } .bi-exclamation-triangle-fill::before { content: "\f33a"; } .bi-exclamation-triangle::before { content: "\f33b"; } .bi-exclamation::before { content: "\f33c"; } .bi-exclude::before { content: "\f33d"; } .bi-eye-fill::before { content: "\f33e"; } .bi-eye-slash-fill::before { content: "\f33f"; } .bi-eye-slash::before { content: "\f340"; } .bi-eye::before { content: "\f341"; } .bi-eyedropper::before { content: "\f342"; } .bi-eyeglasses::before { content: "\f343"; } .bi-facebook::before { content: "\f344"; } .bi-file-arrow-down-fill::before { content: "\f345"; } .bi-file-arrow-down::before { content: "\f346"; } .bi-file-arrow-up-fill::before { content: "\f347"; } .bi-file-arrow-up::before { content: "\f348"; } .bi-file-bar-graph-fill::before { content: "\f349"; } .bi-file-bar-graph::before { content: "\f34a"; } .bi-file-binary-fill::before { content: "\f34b"; } .bi-file-binary::before { content: "\f34c"; } .bi-file-break-fill::before { content: "\f34d"; } .bi-file-break::before { content: "\f34e"; } .bi-file-check-fill::before { content: "\f34f"; } .bi-file-check::before { content: "\f350"; } .bi-file-code-fill::before { content: "\f351"; } .bi-file-code::before { content: "\f352"; } .bi-file-diff-fill::before { content: "\f353"; } .bi-file-diff::before { content: "\f354"; } .bi-file-earmark-arrow-down-fill::before { content: "\f355"; } .bi-file-earmark-arrow-down::before { content: "\f356"; } .bi-file-earmark-arrow-up-fill::before { content: "\f357"; } .bi-file-earmark-arrow-up::before { content: "\f358"; } .bi-file-earmark-bar-graph-fill::before { content: "\f359"; } .bi-file-earmark-bar-graph::before { content: "\f35a"; } .bi-file-earmark-binary-fill::before { content: "\f35b"; } .bi-file-earmark-binary::before { content: "\f35c"; } .bi-file-earmark-break-fill::before { content: "\f35d"; } .bi-file-earmark-break::before { content: "\f35e"; } .bi-file-earmark-check-fill::before { content: "\f35f"; } .bi-file-earmark-check::before { content: "\f360"; } .bi-file-earmark-code-fill::before { content: "\f361"; } .bi-file-earmark-code::before { content: "\f362"; } .bi-file-earmark-diff-fill::before { content: "\f363"; } .bi-file-earmark-diff::before { content: "\f364"; } .bi-file-earmark-easel-fill::before { content: "\f365"; } .bi-file-earmark-easel::before { content: "\f366"; } .bi-file-earmark-excel-fill::before { content: "\f367"; } .bi-file-earmark-excel::before { content: "\f368"; } .bi-file-earmark-fill::before { content: "\f369"; } .bi-file-earmark-font-fill::before { content: "\f36a"; } .bi-file-earmark-font::before { content: "\f36b"; } .bi-file-earmark-image-fill::before { content: "\f36c"; } .bi-file-earmark-image::before { content: "\f36d"; } .bi-file-earmark-lock-fill::before { content: "\f36e"; } .bi-file-earmark-lock::before { content: "\f36f"; } .bi-file-earmark-lock2-fill::before { content: "\f370"; } .bi-file-earmark-lock2::before { content: "\f371"; } .bi-file-earmark-medical-fill::before { content: "\f372"; } .bi-file-earmark-medical::before { content: "\f373"; } .bi-file-earmark-minus-fill::before { content: "\f374"; } .bi-file-earmark-minus::before { content: "\f375"; } .bi-file-earmark-music-fill::before { content: "\f376"; } .bi-file-earmark-music::before { content: "\f377"; } .bi-file-earmark-person-fill::before { content: "\f378"; } .bi-file-earmark-person::before { content: "\f379"; } .bi-file-earmark-play-fill::before { content: "\f37a"; } .bi-file-earmark-play::before { content: "\f37b"; } .bi-file-earmark-plus-fill::before { content: "\f37c"; } .bi-file-earmark-plus::before { content: "\f37d"; } .bi-file-earmark-post-fill::before { content: "\f37e"; } .bi-file-earmark-post::before { content: "\f37f"; } .bi-file-earmark-ppt-fill::before { content: "\f380"; } .bi-file-earmark-ppt::before { content: "\f381"; } .bi-file-earmark-richtext-fill::before { content: "\f382"; } .bi-file-earmark-richtext::before { content: "\f383"; } .bi-file-earmark-ruled-fill::before { content: "\f384"; } .bi-file-earmark-ruled::before { content: "\f385"; } .bi-file-earmark-slides-fill::before { content: "\f386"; } .bi-file-earmark-slides::before { content: "\f387"; } .bi-file-earmark-spreadsheet-fill::before { content: "\f388"; } .bi-file-earmark-spreadsheet::before { content: "\f389"; } .bi-file-earmark-text-fill::before { content: "\f38a"; } .bi-file-earmark-text::before { content: "\f38b"; } .bi-file-earmark-word-fill::before { content: "\f38c"; } .bi-file-earmark-word::before { content: "\f38d"; } .bi-file-earmark-x-fill::before { content: "\f38e"; } .bi-file-earmark-x::before { content: "\f38f"; } .bi-file-earmark-zip-fill::before { content: "\f390"; } .bi-file-earmark-zip::before { content: "\f391"; } .bi-file-earmark::before { content: "\f392"; } .bi-file-easel-fill::before { content: "\f393"; } .bi-file-easel::before { content: "\f394"; } .bi-file-excel-fill::before { content: "\f395"; } .bi-file-excel::before { content: "\f396"; } .bi-file-fill::before { content: "\f397"; } .bi-file-font-fill::before { content: "\f398"; } .bi-file-font::before { content: "\f399"; } .bi-file-image-fill::before { content: "\f39a"; } .bi-file-image::before { content: "\f39b"; } .bi-file-lock-fill::before { content: "\f39c"; } .bi-file-lock::before { content: "\f39d"; } .bi-file-lock2-fill::before { content: "\f39e"; } .bi-file-lock2::before { content: "\f39f"; } .bi-file-medical-fill::before { content: "\f3a0"; } .bi-file-medical::before { content: "\f3a1"; } .bi-file-minus-fill::before { content: "\f3a2"; } .bi-file-minus::before { content: "\f3a3"; } .bi-file-music-fill::before { content: "\f3a4"; } .bi-file-music::before { content: "\f3a5"; } .bi-file-person-fill::before { content: "\f3a6"; } .bi-file-person::before { content: "\f3a7"; } .bi-file-play-fill::before { content: "\f3a8"; } .bi-file-play::before { content: "\f3a9"; } .bi-file-plus-fill::before { content: "\f3aa"; } .bi-file-plus::before { content: "\f3ab"; } .bi-file-post-fill::before { content: "\f3ac"; } .bi-file-post::before { content: "\f3ad"; } .bi-file-ppt-fill::before { content: "\f3ae"; } .bi-file-ppt::before { content: "\f3af"; } .bi-file-richtext-fill::before { content: "\f3b0"; } .bi-file-richtext::before { content: "\f3b1"; } .bi-file-ruled-fill::before { content: "\f3b2"; } .bi-file-ruled::before { content: "\f3b3"; } .bi-file-slides-fill::before { content: "\f3b4"; } .bi-file-slides::before { content: "\f3b5"; } .bi-file-spreadsheet-fill::before { content: "\f3b6"; } .bi-file-spreadsheet::before { content: "\f3b7"; } .bi-file-text-fill::before { content: "\f3b8"; } .bi-file-text::before { content: "\f3b9"; } .bi-file-word-fill::before { content: "\f3ba"; } .bi-file-word::before { content: "\f3bb"; } .bi-file-x-fill::before { content: "\f3bc"; } .bi-file-x::before { content: "\f3bd"; } .bi-file-zip-fill::before { content: "\f3be"; } .bi-file-zip::before { content: "\f3bf"; } .bi-file::before { content: "\f3c0"; } .bi-files-alt::before { content: "\f3c1"; } .bi-files::before { content: "\f3c2"; } .bi-film::before { content: "\f3c3"; } .bi-filter-circle-fill::before { content: "\f3c4"; } .bi-filter-circle::before { content: "\f3c5"; } .bi-filter-left::before { content: "\f3c6"; } .bi-filter-right::before { content: "\f3c7"; } .bi-filter-square-fill::before { content: "\f3c8"; } .bi-filter-square::before { content: "\f3c9"; } .bi-filter::before { content: "\f3ca"; } .bi-flag-fill::before { content: "\f3cb"; } .bi-flag::before { content: "\f3cc"; } .bi-flower1::before { content: "\f3cd"; } .bi-flower2::before { content: "\f3ce"; } .bi-flower3::before { content: "\f3cf"; } .bi-folder-check::before { content: "\f3d0"; } .bi-folder-fill::before { content: "\f3d1"; } .bi-folder-minus::before { content: "\f3d2"; } .bi-folder-plus::before { content: "\f3d3"; } .bi-folder-symlink-fill::before { content: "\f3d4"; } .bi-folder-symlink::before { content: "\f3d5"; } .bi-folder-x::before { content: "\f3d6"; } .bi-folder::before { content: "\f3d7"; } .bi-folder2-open::before { content: "\f3d8"; } .bi-folder2::before { content: "\f3d9"; } .bi-fonts::before { content: "\f3da"; } .bi-forward-fill::before { content: "\f3db"; } .bi-forward::before { content: "\f3dc"; } .bi-front::before { content: "\f3dd"; } .bi-fullscreen-exit::before { content: "\f3de"; } .bi-fullscreen::before { content: "\f3df"; } .bi-funnel-fill::before { content: "\f3e0"; } .bi-funnel::before { content: "\f3e1"; } .bi-gear-fill::before { content: "\f3e2"; } .bi-gear-wide-connected::before { content: "\f3e3"; } .bi-gear-wide::before { content: "\f3e4"; } .bi-gear::before { content: "\f3e5"; } .bi-gem::before { content: "\f3e6"; } .bi-geo-alt-fill::before { content: "\f3e7"; } .bi-geo-alt::before { content: "\f3e8"; } .bi-geo-fill::before { content: "\f3e9"; } .bi-geo::before { content: "\f3ea"; } .bi-gift-fill::before { content: "\f3eb"; } .bi-gift::before { content: "\f3ec"; } .bi-github::before { content: "\f3ed"; } .bi-globe::before { content: "\f3ee"; } .bi-globe2::before { content: "\f3ef"; } .bi-google::before { content: "\f3f0"; } .bi-graph-down::before { content: "\f3f1"; } .bi-graph-up::before { content: "\f3f2"; } .bi-grid-1x2-fill::before { content: "\f3f3"; } .bi-grid-1x2::before { content: "\f3f4"; } .bi-grid-3x2-gap-fill::before { content: "\f3f5"; } .bi-grid-3x2-gap::before { content: "\f3f6"; } .bi-grid-3x2::before { content: "\f3f7"; } .bi-grid-3x3-gap-fill::before { content: "\f3f8"; } .bi-grid-3x3-gap::before { content: "\f3f9"; } .bi-grid-3x3::before { content: "\f3fa"; } .bi-grid-fill::before { content: "\f3fb"; } .bi-grid::before { content: "\f3fc"; } .bi-grip-horizontal::before { content: "\f3fd"; } .bi-grip-vertical::before { content: "\f3fe"; } .bi-hammer::before { content: "\f3ff"; } .bi-hand-index-fill::before { content: "\f400"; } .bi-hand-index-thumb-fill::before { content: "\f401"; } .bi-hand-index-thumb::before { content: "\f402"; } .bi-hand-index::before { content: "\f403"; } .bi-hand-thumbs-down-fill::before { content: "\f404"; } .bi-hand-thumbs-down::before { content: "\f405"; } .bi-hand-thumbs-up-fill::before { content: "\f406"; } .bi-hand-thumbs-up::before { content: "\f407"; } .bi-handbag-fill::before { content: "\f408"; } .bi-handbag::before { content: "\f409"; } .bi-hash::before { content: "\f40a"; } .bi-hdd-fill::before { content: "\f40b"; } .bi-hdd-network-fill::before { content: "\f40c"; } .bi-hdd-network::before { content: "\f40d"; } .bi-hdd-rack-fill::before { content: "\f40e"; } .bi-hdd-rack::before { content: "\f40f"; } .bi-hdd-stack-fill::before { content: "\f410"; } .bi-hdd-stack::before { content: "\f411"; } .bi-hdd::before { content: "\f412"; } .bi-headphones::before { content: "\f413"; } .bi-headset::before { content: "\f414"; } .bi-heart-fill::before { content: "\f415"; } .bi-heart-half::before { content: "\f416"; } .bi-heart::before { content: "\f417"; } .bi-heptagon-fill::before { content: "\f418"; } .bi-heptagon-half::before { content: "\f419"; } .bi-heptagon::before { content: "\f41a"; } .bi-hexagon-fill::before { content: "\f41b"; } .bi-hexagon-half::before { content: "\f41c"; } .bi-hexagon::before { content: "\f41d"; } .bi-hourglass-bottom::before { content: "\f41e"; } .bi-hourglass-split::before { content: "\f41f"; } .bi-hourglass-top::before { content: "\f420"; } .bi-hourglass::before { content: "\f421"; } .bi-house-door-fill::before { content: "\f422"; } .bi-house-door::before { content: "\f423"; } .bi-house-fill::before { content: "\f424"; } .bi-house::before { content: "\f425"; } .bi-hr::before { content: "\f426"; } .bi-hurricane::before { content: "\f427"; } .bi-image-alt::before { content: "\f428"; } .bi-image-fill::before { content: "\f429"; } .bi-image::before { content: "\f42a"; } .bi-images::before { content: "\f42b"; } .bi-inbox-fill::before { content: "\f42c"; } .bi-inbox::before { content: "\f42d"; } .bi-inboxes-fill::before { content: "\f42e"; } .bi-inboxes::before { content: "\f42f"; } .bi-info-circle-fill::before { content: "\f430"; } .bi-info-circle::before { content: "\f431"; } .bi-info-square-fill::before { content: "\f432"; } .bi-info-square::before { content: "\f433"; } .bi-info::before { content: "\f434"; } .bi-input-cursor-text::before { content: "\f435"; } .bi-input-cursor::before { content: "\f436"; } .bi-instagram::before { content: "\f437"; } .bi-intersect::before { content: "\f438"; } .bi-journal-album::before { content: "\f439"; } .bi-journal-arrow-down::before { content: "\f43a"; } .bi-journal-arrow-up::before { content: "\f43b"; } .bi-journal-bookmark-fill::before { content: "\f43c"; } .bi-journal-bookmark::before { content: "\f43d"; } .bi-journal-check::before { content: "\f43e"; } .bi-journal-code::before { content: "\f43f"; } .bi-journal-medical::before { content: "\f440"; } .bi-journal-minus::before { content: "\f441"; } .bi-journal-plus::before { content: "\f442"; } .bi-journal-richtext::before { content: "\f443"; } .bi-journal-text::before { content: "\f444"; } .bi-journal-x::before { content: "\f445"; } .bi-journal::before { content: "\f446"; } .bi-journals::before { content: "\f447"; } .bi-joystick::before { content: "\f448"; } .bi-justify-left::before { content: "\f449"; } .bi-justify-right::before { content: "\f44a"; } .bi-justify::before { content: "\f44b"; } .bi-kanban-fill::before { content: "\f44c"; } .bi-kanban::before { content: "\f44d"; } .bi-key-fill::before { content: "\f44e"; } .bi-key::before { content: "\f44f"; } .bi-keyboard-fill::before { content: "\f450"; } .bi-keyboard::before { content: "\f451"; } .bi-ladder::before { content: "\f452"; } .bi-lamp-fill::before { content: "\f453"; } .bi-lamp::before { content: "\f454"; } .bi-laptop-fill::before { content: "\f455"; } .bi-laptop::before { content: "\f456"; } .bi-layer-backward::before { content: "\f457"; } .bi-layer-forward::before { content: "\f458"; } .bi-layers-fill::before { content: "\f459"; } .bi-layers-half::before { content: "\f45a"; } .bi-layers::before { content: "\f45b"; } .bi-layout-sidebar-inset-reverse::before { content: "\f45c"; } .bi-layout-sidebar-inset::before { content: "\f45d"; } .bi-layout-sidebar-reverse::before { content: "\f45e"; } .bi-layout-sidebar::before { content: "\f45f"; } .bi-layout-split::before { content: "\f460"; } .bi-layout-text-sidebar-reverse::before { content: "\f461"; } .bi-layout-text-sidebar::before { content: "\f462"; } .bi-layout-text-window-reverse::before { content: "\f463"; } .bi-layout-text-window::before { content: "\f464"; } .bi-layout-three-columns::before { content: "\f465"; } .bi-layout-wtf::before { content: "\f466"; } .bi-life-preserver::before { content: "\f467"; } .bi-lightbulb-fill::before { content: "\f468"; } .bi-lightbulb-off-fill::before { content: "\f469"; } .bi-lightbulb-off::before { content: "\f46a"; } .bi-lightbulb::before { content: "\f46b"; } .bi-lightning-charge-fill::before { content: "\f46c"; } .bi-lightning-charge::before { content: "\f46d"; } .bi-lightning-fill::before { content: "\f46e"; } .bi-lightning::before { content: "\f46f"; } .bi-link-45deg::before { content: "\f470"; } .bi-link::before { content: "\f471"; } .bi-linkedin::before { content: "\f472"; } .bi-list-check::before { content: "\f473"; } .bi-list-nested::before { content: "\f474"; } .bi-list-ol::before { content: "\f475"; } .bi-list-stars::before { content: "\f476"; } .bi-list-task::before { content: "\f477"; } .bi-list-ul::before { content: "\f478"; } .bi-list::before { content: "\f479"; } .bi-lock-fill::before { content: "\f47a"; } .bi-lock::before { content: "\f47b"; } .bi-mailbox::before { content: "\f47c"; } .bi-mailbox2::before { content: "\f47d"; } .bi-map-fill::before { content: "\f47e"; } .bi-map::before { content: "\f47f"; } .bi-markdown-fill::before { content: "\f480"; } .bi-markdown::before { content: "\f481"; } .bi-mask::before { content: "\f482"; } .bi-megaphone-fill::before { content: "\f483"; } .bi-megaphone::before { content: "\f484"; } .bi-menu-app-fill::before { content: "\f485"; } .bi-menu-app::before { content: "\f486"; } .bi-menu-button-fill::before { content: "\f487"; } .bi-menu-button-wide-fill::before { content: "\f488"; } .bi-menu-button-wide::before { content: "\f489"; } .bi-menu-button::before { content: "\f48a"; } .bi-menu-down::before { content: "\f48b"; } .bi-menu-up::before { content: "\f48c"; } .bi-mic-fill::before { content: "\f48d"; } .bi-mic-mute-fill::before { content: "\f48e"; } .bi-mic-mute::before { content: "\f48f"; } .bi-mic::before { content: "\f490"; } .bi-minecart-loaded::before { content: "\f491"; } .bi-minecart::before { content: "\f492"; } .bi-moisture::before { content: "\f493"; } .bi-moon-fill::before { content: "\f494"; } .bi-moon-stars-fill::before { content: "\f495"; } .bi-moon-stars::before { content: "\f496"; } .bi-moon::before { content: "\f497"; } .bi-mouse-fill::before { content: "\f498"; } .bi-mouse::before { content: "\f499"; } .bi-mouse2-fill::before { content: "\f49a"; } .bi-mouse2::before { content: "\f49b"; } .bi-mouse3-fill::before { content: "\f49c"; } .bi-mouse3::before { content: "\f49d"; } .bi-music-note-beamed::before { content: "\f49e"; } .bi-music-note-list::before { content: "\f49f"; } .bi-music-note::before { content: "\f4a0"; } .bi-music-player-fill::before { content: "\f4a1"; } .bi-music-player::before { content: "\f4a2"; } .bi-newspaper::before { content: "\f4a3"; } .bi-node-minus-fill::before { content: "\f4a4"; } .bi-node-minus::before { content: "\f4a5"; } .bi-node-plus-fill::before { content: "\f4a6"; } .bi-node-plus::before { content: "\f4a7"; } .bi-nut-fill::before { content: "\f4a8"; } .bi-nut::before { content: "\f4a9"; } .bi-octagon-fill::before { content: "\f4aa"; } .bi-octagon-half::before { content: "\f4ab"; } .bi-octagon::before { content: "\f4ac"; } .bi-option::before { content: "\f4ad"; } .bi-outlet::before { content: "\f4ae"; } .bi-paint-bucket::before { content: "\f4af"; } .bi-palette-fill::before { content: "\f4b0"; } .bi-palette::before { content: "\f4b1"; } .bi-palette2::before { content: "\f4b2"; } .bi-paperclip::before { content: "\f4b3"; } .bi-paragraph::before { content: "\f4b4"; } .bi-patch-check-fill::before { content: "\f4b5"; } .bi-patch-check::before { content: "\f4b6"; } .bi-patch-exclamation-fill::before { content: "\f4b7"; } .bi-patch-exclamation::before { content: "\f4b8"; } .bi-patch-minus-fill::before { content: "\f4b9"; } .bi-patch-minus::before { content: "\f4ba"; } .bi-patch-plus-fill::before { content: "\f4bb"; } .bi-patch-plus::before { content: "\f4bc"; } .bi-patch-question-fill::before { content: "\f4bd"; } .bi-patch-question::before { content: "\f4be"; } .bi-pause-btn-fill::before { content: "\f4bf"; } .bi-pause-btn::before { content: "\f4c0"; } .bi-pause-circle-fill::before { content: "\f4c1"; } .bi-pause-circle::before { content: "\f4c2"; } .bi-pause-fill::before { content: "\f4c3"; } .bi-pause::before { content: "\f4c4"; } .bi-peace-fill::before { content: "\f4c5"; } .bi-peace::before { content: "\f4c6"; } .bi-pen-fill::before { content: "\f4c7"; } .bi-pen::before { content: "\f4c8"; } .bi-pencil-fill::before { content: "\f4c9"; } .bi-pencil-square::before { content: "\f4ca"; } .bi-pencil::before { content: "\f4cb"; } .bi-pentagon-fill::before { content: "\f4cc"; } .bi-pentagon-half::before { content: "\f4cd"; } .bi-pentagon::before { content: "\f4ce"; } .bi-people-fill::before { content: "\f4cf"; } .bi-people::before { content: "\f4d0"; } .bi-percent::before { content: "\f4d1"; } .bi-person-badge-fill::before { content: "\f4d2"; } .bi-person-badge::before { content: "\f4d3"; } .bi-person-bounding-box::before { content: "\f4d4"; } .bi-person-check-fill::before { content: "\f4d5"; } .bi-person-check::before { content: "\f4d6"; } .bi-person-circle::before { content: "\f4d7"; } .bi-person-dash-fill::before { content: "\f4d8"; } .bi-person-dash::before { content: "\f4d9"; } .bi-person-fill::before { content: "\f4da"; } .bi-person-lines-fill::before { content: "\f4db"; } .bi-person-plus-fill::before { content: "\f4dc"; } .bi-person-plus::before { content: "\f4dd"; } .bi-person-square::before { content: "\f4de"; } .bi-person-x-fill::before { content: "\f4df"; } .bi-person-x::before { content: "\f4e0"; } .bi-person::before { content: "\f4e1"; } .bi-phone-fill::before { content: "\f4e2"; } .bi-phone-landscape-fill::before { content: "\f4e3"; } .bi-phone-landscape::before { content: "\f4e4"; } .bi-phone-vibrate-fill::before { content: "\f4e5"; } .bi-phone-vibrate::before { content: "\f4e6"; } .bi-phone::before { content: "\f4e7"; } .bi-pie-chart-fill::before { content: "\f4e8"; } .bi-pie-chart::before { content: "\f4e9"; } .bi-pin-angle-fill::before { content: "\f4ea"; } .bi-pin-angle::before { content: "\f4eb"; } .bi-pin-fill::before { content: "\f4ec"; } .bi-pin::before { content: "\f4ed"; } .bi-pip-fill::before { content: "\f4ee"; } .bi-pip::before { content: "\f4ef"; } .bi-play-btn-fill::before { content: "\f4f0"; } .bi-play-btn::before { content: "\f4f1"; } .bi-play-circle-fill::before { content: "\f4f2"; } .bi-play-circle::before { content: "\f4f3"; } .bi-play-fill::before { content: "\f4f4"; } .bi-play::before { content: "\f4f5"; } .bi-plug-fill::before { content: "\f4f6"; } .bi-plug::before { content: "\f4f7"; } .bi-plus-circle-dotted::before { content: "\f4f8"; } .bi-plus-circle-fill::before { content: "\f4f9"; } .bi-plus-circle::before { content: "\f4fa"; } .bi-plus-square-dotted::before { content: "\f4fb"; } .bi-plus-square-fill::before { content: "\f4fc"; } .bi-plus-square::before { content: "\f4fd"; } .bi-plus::before { content: "\f4fe"; } .bi-power::before { content: "\f4ff"; } .bi-printer-fill::before { content: "\f500"; } .bi-printer::before { content: "\f501"; } .bi-puzzle-fill::before { content: "\f502"; } .bi-puzzle::before { content: "\f503"; } .bi-question-circle-fill::before { content: "\f504"; } .bi-question-circle::before { content: "\f505"; } .bi-question-diamond-fill::before { content: "\f506"; } .bi-question-diamond::before { content: "\f507"; } .bi-question-octagon-fill::before { content: "\f508"; } .bi-question-octagon::before { content: "\f509"; } .bi-question-square-fill::before { content: "\f50a"; } .bi-question-square::before { content: "\f50b"; } .bi-question::before { content: "\f50c"; } .bi-rainbow::before { content: "\f50d"; } .bi-receipt-cutoff::before { content: "\f50e"; } .bi-receipt::before { content: "\f50f"; } .bi-reception-0::before { content: "\f510"; } .bi-reception-1::before { content: "\f511"; } .bi-reception-2::before { content: "\f512"; } .bi-reception-3::before { content: "\f513"; } .bi-reception-4::before { content: "\f514"; } .bi-record-btn-fill::before { content: "\f515"; } .bi-record-btn::before { content: "\f516"; } .bi-record-circle-fill::before { content: "\f517"; } .bi-record-circle::before { content: "\f518"; } .bi-record-fill::before { content: "\f519"; } .bi-record::before { content: "\f51a"; } .bi-record2-fill::before { content: "\f51b"; } .bi-record2::before { content: "\f51c"; } .bi-reply-all-fill::before { content: "\f51d"; } .bi-reply-all::before { content: "\f51e"; } .bi-reply-fill::before { content: "\f51f"; } .bi-reply::before { content: "\f520"; } .bi-rss-fill::before { content: "\f521"; } .bi-rss::before { content: "\f522"; } .bi-rulers::before { content: "\f523"; } .bi-save-fill::before { content: "\f524"; } .bi-save::before { content: "\f525"; } .bi-save2-fill::before { content: "\f526"; } .bi-save2::before { content: "\f527"; } .bi-scissors::before { content: "\f528"; } .bi-screwdriver::before { content: "\f529"; } .bi-search::before { content: "\f52a"; } .bi-segmented-nav::before { content: "\f52b"; } .bi-server::before { content: "\f52c"; } .bi-share-fill::before { content: "\f52d"; } .bi-share::before { content: "\f52e"; } .bi-shield-check::before { content: "\f52f"; } .bi-shield-exclamation::before { content: "\f530"; } .bi-shield-fill-check::before { content: "\f531"; } .bi-shield-fill-exclamation::before { content: "\f532"; } .bi-shield-fill-minus::before { content: "\f533"; } .bi-shield-fill-plus::before { content: "\f534"; } .bi-shield-fill-x::before { content: "\f535"; } .bi-shield-fill::before { content: "\f536"; } .bi-shield-lock-fill::before { content: "\f537"; } .bi-shield-lock::before { content: "\f538"; } .bi-shield-minus::before { content: "\f539"; } .bi-shield-plus::before { content: "\f53a"; } .bi-shield-shaded::before { content: "\f53b"; } .bi-shield-slash-fill::before { content: "\f53c"; } .bi-shield-slash::before { content: "\f53d"; } .bi-shield-x::before { content: "\f53e"; } .bi-shield::before { content: "\f53f"; } .bi-shift-fill::before { content: "\f540"; } .bi-shift::before { content: "\f541"; } .bi-shop-window::before { content: "\f542"; } .bi-shop::before { content: "\f543"; } .bi-shuffle::before { content: "\f544"; } .bi-signpost-2-fill::before { content: "\f545"; } .bi-signpost-2::before { content: "\f546"; } .bi-signpost-fill::before { content: "\f547"; } .bi-signpost-split-fill::before { content: "\f548"; } .bi-signpost-split::before { content: "\f549"; } .bi-signpost::before { content: "\f54a"; } .bi-sim-fill::before { content: "\f54b"; } .bi-sim::before { content: "\f54c"; } .bi-skip-backward-btn-fill::before { content: "\f54d"; } .bi-skip-backward-btn::before { content: "\f54e"; } .bi-skip-backward-circle-fill::before { content: "\f54f"; } .bi-skip-backward-circle::before { content: "\f550"; } .bi-skip-backward-fill::before { content: "\f551"; } .bi-skip-backward::before { content: "\f552"; } .bi-skip-end-btn-fill::before { content: "\f553"; } .bi-skip-end-btn::before { content: "\f554"; } .bi-skip-end-circle-fill::before { content: "\f555"; } .bi-skip-end-circle::before { content: "\f556"; } .bi-skip-end-fill::before { content: "\f557"; } .bi-skip-end::before { content: "\f558"; } .bi-skip-forward-btn-fill::before { content: "\f559"; } .bi-skip-forward-btn::before { content: "\f55a"; } .bi-skip-forward-circle-fill::before { content: "\f55b"; } .bi-skip-forward-circle::before { content: "\f55c"; } .bi-skip-forward-fill::before { content: "\f55d"; } .bi-skip-forward::before { content: "\f55e"; } .bi-skip-start-btn-fill::before { content: "\f55f"; } .bi-skip-start-btn::before { content: "\f560"; } .bi-skip-start-circle-fill::before { content: "\f561"; } .bi-skip-start-circle::before { content: "\f562"; } .bi-skip-start-fill::before { content: "\f563"; } .bi-skip-start::before { content: "\f564"; } .bi-slack::before { content: "\f565"; } .bi-slash-circle-fill::before { content: "\f566"; } .bi-slash-circle::before { content: "\f567"; } .bi-slash-square-fill::before { content: "\f568"; } .bi-slash-square::before { content: "\f569"; } .bi-slash::before { content: "\f56a"; } .bi-sliders::before { content: "\f56b"; } .bi-smartwatch::before { content: "\f56c"; } .bi-snow::before { content: "\f56d"; } .bi-snow2::before { content: "\f56e"; } .bi-snow3::before { content: "\f56f"; } .bi-sort-alpha-down-alt::before { content: "\f570"; } .bi-sort-alpha-down::before { content: "\f571"; } .bi-sort-alpha-up-alt::before { content: "\f572"; } .bi-sort-alpha-up::before { content: "\f573"; } .bi-sort-down-alt::before { content: "\f574"; } .bi-sort-down::before { content: "\f575"; } .bi-sort-numeric-down-alt::before { content: "\f576"; } .bi-sort-numeric-down::before { content: "\f577"; } .bi-sort-numeric-up-alt::before { content: "\f578"; } .bi-sort-numeric-up::before { content: "\f579"; } .bi-sort-up-alt::before { content: "\f57a"; } .bi-sort-up::before { content: "\f57b"; } .bi-soundwave::before { content: "\f57c"; } .bi-speaker-fill::before { content: "\f57d"; } .bi-speaker::before { content: "\f57e"; } .bi-speedometer::before { content: "\f57f"; } .bi-speedometer2::before { content: "\f580"; } .bi-spellcheck::before { content: "\f581"; } .bi-square-fill::before { content: "\f582"; } .bi-square-half::before { content: "\f583"; } .bi-square::before { content: "\f584"; } .bi-stack::before { content: "\f585"; } .bi-star-fill::before { content: "\f586"; } .bi-star-half::before { content: "\f587"; } .bi-star::before { content: "\f588"; } .bi-stars::before { content: "\f589"; } .bi-stickies-fill::before { content: "\f58a"; } .bi-stickies::before { content: "\f58b"; } .bi-sticky-fill::before { content: "\f58c"; } .bi-sticky::before { content: "\f58d"; } .bi-stop-btn-fill::before { content: "\f58e"; } .bi-stop-btn::before { content: "\f58f"; } .bi-stop-circle-fill::before { content: "\f590"; } .bi-stop-circle::before { content: "\f591"; } .bi-stop-fill::before { content: "\f592"; } .bi-stop::before { content: "\f593"; } .bi-stoplights-fill::before { content: "\f594"; } .bi-stoplights::before { content: "\f595"; } .bi-stopwatch-fill::before { content: "\f596"; } .bi-stopwatch::before { content: "\f597"; } .bi-subtract::before { content: "\f598"; } .bi-suit-club-fill::before { content: "\f599"; } .bi-suit-club::before { content: "\f59a"; } .bi-suit-diamond-fill::before { content: "\f59b"; } .bi-suit-diamond::before { content: "\f59c"; } .bi-suit-heart-fill::before { content: "\f59d"; } .bi-suit-heart::before { content: "\f59e"; } .bi-suit-spade-fill::before { content: "\f59f"; } .bi-suit-spade::before { content: "\f5a0"; } .bi-sun-fill::before { content: "\f5a1"; } .bi-sun::before { content: "\f5a2"; } .bi-sunglasses::before { content: "\f5a3"; } .bi-sunrise-fill::before { content: "\f5a4"; } .bi-sunrise::before { content: "\f5a5"; } .bi-sunset-fill::before { content: "\f5a6"; } .bi-sunset::before { content: "\f5a7"; } .bi-symmetry-horizontal::before { content: "\f5a8"; } .bi-symmetry-vertical::before { content: "\f5a9"; } .bi-table::before { content: "\f5aa"; } .bi-tablet-fill::before { content: "\f5ab"; } .bi-tablet-landscape-fill::before { content: "\f5ac"; } .bi-tablet-landscape::before { content: "\f5ad"; } .bi-tablet::before { content: "\f5ae"; } .bi-tag-fill::before { content: "\f5af"; } .bi-tag::before { content: "\f5b0"; } .bi-tags-fill::before { content: "\f5b1"; } .bi-tags::before { content: "\f5b2"; } .bi-telegram::before { content: "\f5b3"; } .bi-telephone-fill::before { content: "\f5b4"; } .bi-telephone-forward-fill::before { content: "\f5b5"; } .bi-telephone-forward::before { content: "\f5b6"; } .bi-telephone-inbound-fill::before { content: "\f5b7"; } .bi-telephone-inbound::before { content: "\f5b8"; } .bi-telephone-minus-fill::before { content: "\f5b9"; } .bi-telephone-minus::before { content: "\f5ba"; } .bi-telephone-outbound-fill::before { content: "\f5bb"; } .bi-telephone-outbound::before { content: "\f5bc"; } .bi-telephone-plus-fill::before { content: "\f5bd"; } .bi-telephone-plus::before { content: "\f5be"; } .bi-telephone-x-fill::before { content: "\f5bf"; } .bi-telephone-x::before { content: "\f5c0"; } .bi-telephone::before { content: "\f5c1"; } .bi-terminal-fill::before { content: "\f5c2"; } .bi-terminal::before { content: "\f5c3"; } .bi-text-center::before { content: "\f5c4"; } .bi-text-indent-left::before { content: "\f5c5"; } .bi-text-indent-right::before { content: "\f5c6"; } .bi-text-left::before { content: "\f5c7"; } .bi-text-paragraph::before { content: "\f5c8"; } .bi-text-right::before { content: "\f5c9"; } .bi-textarea-resize::before { content: "\f5ca"; } .bi-textarea-t::before { content: "\f5cb"; } .bi-textarea::before { content: "\f5cc"; } .bi-thermometer-half::before { content: "\f5cd"; } .bi-thermometer-high::before { content: "\f5ce"; } .bi-thermometer-low::before { content: "\f5cf"; } .bi-thermometer-snow::before { content: "\f5d0"; } .bi-thermometer-sun::before { content: "\f5d1"; } .bi-thermometer::before { content: "\f5d2"; } .bi-three-dots-vertical::before { content: "\f5d3"; } .bi-three-dots::before { content: "\f5d4"; } .bi-toggle-off::before { content: "\f5d5"; } .bi-toggle-on::before { content: "\f5d6"; } .bi-toggle2-off::before { content: "\f5d7"; } .bi-toggle2-on::before { content: "\f5d8"; } .bi-toggles::before { content: "\f5d9"; } .bi-toggles2::before { content: "\f5da"; } .bi-tools::before { content: "\f5db"; } .bi-tornado::before { content: "\f5dc"; } .bi-trash-fill::before { content: "\f5dd"; } .bi-trash::before { content: "\f5de"; } .bi-trash2-fill::before { content: "\f5df"; } .bi-trash2::before { content: "\f5e0"; } .bi-tree-fill::before { content: "\f5e1"; } .bi-tree::before { content: "\f5e2"; } .bi-triangle-fill::before { content: "\f5e3"; } .bi-triangle-half::before { content: "\f5e4"; } .bi-triangle::before { content: "\f5e5"; } .bi-trophy-fill::before { content: "\f5e6"; } .bi-trophy::before { content: "\f5e7"; } .bi-tropical-storm::before { content: "\f5e8"; } .bi-truck-flatbed::before { content: "\f5e9"; } .bi-truck::before { content: "\f5ea"; } .bi-tsunami::before { content: "\f5eb"; } .bi-tv-fill::before { content: "\f5ec"; } .bi-tv::before { content: "\f5ed"; } .bi-twitch::before { content: "\f5ee"; } .bi-twitter::before { content: "\f5ef"; } .bi-type-bold::before { content: "\f5f0"; } .bi-type-h1::before { content: "\f5f1"; } .bi-type-h2::before { content: "\f5f2"; } .bi-type-h3::before { content: "\f5f3"; } .bi-type-italic::before { content: "\f5f4"; } .bi-type-strikethrough::before { content: "\f5f5"; } .bi-type-underline::before { content: "\f5f6"; } .bi-type::before { content: "\f5f7"; } .bi-ui-checks-grid::before { content: "\f5f8"; } .bi-ui-checks::before { content: "\f5f9"; } .bi-ui-radios-grid::before { content: "\f5fa"; } .bi-ui-radios::before { content: "\f5fb"; } .bi-umbrella-fill::before { content: "\f5fc"; } .bi-umbrella::before { content: "\f5fd"; } .bi-union::before { content: "\f5fe"; } .bi-unlock-fill::before { content: "\f5ff"; } .bi-unlock::before { content: "\f600"; } .bi-upc-scan::before { content: "\f601"; } .bi-upc::before { content: "\f602"; } .bi-upload::before { content: "\f603"; } .bi-vector-pen::before { content: "\f604"; } .bi-view-list::before { content: "\f605"; } .bi-view-stacked::before { content: "\f606"; } .bi-vinyl-fill::before { content: "\f607"; } .bi-vinyl::before { content: "\f608"; } .bi-voicemail::before { content: "\f609"; } .bi-volume-down-fill::before { content: "\f60a"; } .bi-volume-down::before { content: "\f60b"; } .bi-volume-mute-fill::before { content: "\f60c"; } .bi-volume-mute::before { content: "\f60d"; } .bi-volume-off-fill::before { content: "\f60e"; } .bi-volume-off::before { content: "\f60f"; } .bi-volume-up-fill::before { content: "\f610"; } .bi-volume-up::before { content: "\f611"; } .bi-vr::before { content: "\f612"; } .bi-wallet-fill::before { content: "\f613"; } .bi-wallet::before { content: "\f614"; } .bi-wallet2::before { content: "\f615"; } .bi-watch::before { content: "\f616"; } .bi-water::before { content: "\f617"; } .bi-whatsapp::before { content: "\f618"; } .bi-wifi-1::before { content: "\f619"; } .bi-wifi-2::before { content: "\f61a"; } .bi-wifi-off::before { content: "\f61b"; } .bi-wifi::before { content: "\f61c"; } .bi-wind::before { content: "\f61d"; } .bi-window-dock::before { content: "\f61e"; } .bi-window-sidebar::before { content: "\f61f"; } .bi-window::before { content: "\f620"; } .bi-wrench::before { content: "\f621"; } .bi-x-circle-fill::before { content: "\f622"; } .bi-x-circle::before { content: "\f623"; } .bi-x-diamond-fill::before { content: "\f624"; } .bi-x-diamond::before { content: "\f625"; } .bi-x-octagon-fill::before { content: "\f626"; } .bi-x-octagon::before { content: "\f627"; } .bi-x-square-fill::before { content: "\f628"; } .bi-x-square::before { content: "\f629"; } .bi-x::before { content: "\f62a"; } .bi-youtube::before { content: "\f62b"; } .bi-zoom-in::before { content: "\f62c"; } .bi-zoom-out::before { content: "\f62d"; } .bi-bank::before { content: "\f62e"; } .bi-bank2::before { content: "\f62f"; } .bi-bell-slash-fill::before { content: "\f630"; } .bi-bell-slash::before { content: "\f631"; } .bi-cash-coin::before { content: "\f632"; } .bi-check-lg::before { content: "\f633"; } .bi-coin::before { content: "\f634"; } .bi-currency-bitcoin::before { content: "\f635"; } .bi-currency-dollar::before { content: "\f636"; } .bi-currency-euro::before { content: "\f637"; } .bi-currency-exchange::before { content: "\f638"; } .bi-currency-pound::before { content: "\f639"; } .bi-currency-yen::before { content: "\f63a"; } .bi-dash-lg::before { content: "\f63b"; } .bi-exclamation-lg::before { content: "\f63c"; } .bi-file-earmark-pdf-fill::before { content: "\f63d"; } .bi-file-earmark-pdf::before { content: "\f63e"; } .bi-file-pdf-fill::before { content: "\f63f"; } .bi-file-pdf::before { content: "\f640"; } .bi-gender-ambiguous::before { content: "\f641"; } .bi-gender-female::before { content: "\f642"; } .bi-gender-male::before { content: "\f643"; } .bi-gender-trans::before { content: "\f644"; } .bi-headset-vr::before { content: "\f645"; } .bi-info-lg::before { content: "\f646"; } .bi-mastodon::before { content: "\f647"; } .bi-messenger::before { content: "\f648"; } .bi-piggy-bank-fill::before { content: "\f649"; } .bi-piggy-bank::before { content: "\f64a"; } .bi-pin-map-fill::before { content: "\f64b"; } .bi-pin-map::before { content: "\f64c"; } .bi-plus-lg::before { content: "\f64d"; } .bi-question-lg::before { content: "\f64e"; } .bi-recycle::before { content: "\f64f"; } .bi-reddit::before { content: "\f650"; } .bi-safe-fill::before { content: "\f651"; } .bi-safe2-fill::before { content: "\f652"; } .bi-safe2::before { content: "\f653"; } .bi-sd-card-fill::before { content: "\f654"; } .bi-sd-card::before { content: "\f655"; } .bi-skype::before { content: "\f656"; } .bi-slash-lg::before { content: "\f657"; } .bi-translate::before { content: "\f658"; } .bi-x-lg::before { content: "\f659"; } .bi-safe::before { content: "\f65a"; }
0
coqui_public_repos/STT/native_client
coqui_public_repos/STT/native_client/python/numpy.i
/* -*- C -*- (not really, but good for syntax highlighting) */ /* * Copyright (c) 2005-2015, NumPy Developers. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * * Neither the name of the NumPy Developers nor the names of any * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifdef SWIGPYTHON %{ #ifndef SWIG_FILE_WITH_INIT #define NO_IMPORT_ARRAY #endif #include "stdio.h" #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION #include <numpy/arrayobject.h> %} /**********************************************************************/ %fragment("NumPy_Backward_Compatibility", "header") { %#if NPY_API_VERSION < 0x00000007 %#define NPY_ARRAY_DEFAULT NPY_DEFAULT %#define NPY_ARRAY_FARRAY NPY_FARRAY %#define NPY_FORTRANORDER NPY_FORTRAN %#endif } /**********************************************************************/ /* The following code originally appeared in * enthought/kiva/agg/src/numeric.i written by Eric Jones. It was * translated from C++ to C by John Hunter. Bill Spotz has modified * it to fix some minor bugs, upgrade from Numeric to numpy (all * versions), add some comments and functionality, and convert from * direct code insertion to SWIG fragments. */ %fragment("NumPy_Macros", "header") { /* Macros to extract array attributes. */ %#if NPY_API_VERSION < 0x00000007 %#define is_array(a) ((a) && PyArray_Check((PyArrayObject*)a)) %#define array_type(a) (int)(PyArray_TYPE((PyArrayObject*)a)) %#define array_numdims(a) (((PyArrayObject*)a)->nd) %#define array_dimensions(a) (((PyArrayObject*)a)->dimensions) %#define array_size(a,i) (((PyArrayObject*)a)->dimensions[i]) %#define array_strides(a) (((PyArrayObject*)a)->strides) %#define array_stride(a,i) (((PyArrayObject*)a)->strides[i]) %#define array_data(a) (((PyArrayObject*)a)->data) %#define array_descr(a) (((PyArrayObject*)a)->descr) %#define array_flags(a) (((PyArrayObject*)a)->flags) %#define array_clearflags(a,f) (((PyArrayObject*)a)->flags) &= ~f %#define array_enableflags(a,f) (((PyArrayObject*)a)->flags) = f %#define array_is_fortran(a) (PyArray_ISFORTRAN((PyArrayObject*)a)) %#else %#define is_array(a) ((a) && PyArray_Check(a)) %#define array_type(a) PyArray_TYPE((PyArrayObject*)a) %#define array_numdims(a) PyArray_NDIM((PyArrayObject*)a) %#define array_dimensions(a) PyArray_DIMS((PyArrayObject*)a) %#define array_strides(a) PyArray_STRIDES((PyArrayObject*)a) %#define array_stride(a,i) PyArray_STRIDE((PyArrayObject*)a,i) %#define array_size(a,i) PyArray_DIM((PyArrayObject*)a,i) %#define array_data(a) PyArray_DATA((PyArrayObject*)a) %#define array_descr(a) PyArray_DESCR((PyArrayObject*)a) %#define array_flags(a) PyArray_FLAGS((PyArrayObject*)a) %#define array_enableflags(a,f) PyArray_ENABLEFLAGS((PyArrayObject*)a,f) %#define array_clearflags(a,f) PyArray_CLEARFLAGS((PyArrayObject*)a,f) %#define array_is_fortran(a) (PyArray_IS_F_CONTIGUOUS((PyArrayObject*)a)) %#endif %#define array_is_contiguous(a) (PyArray_ISCONTIGUOUS((PyArrayObject*)a)) %#define array_is_native(a) (PyArray_ISNOTSWAPPED((PyArrayObject*)a)) } /**********************************************************************/ %fragment("NumPy_Utilities", "header") { /* Given a PyObject, return a string describing its type. */ const char* pytype_string(PyObject* py_obj) { if (py_obj == NULL ) return "C NULL value"; if (py_obj == Py_None ) return "Python None" ; if (PyCallable_Check(py_obj)) return "callable" ; if (PyBytes_Check( py_obj)) return "string" ; if (PyLong_Check( py_obj)) return "int" ; if (PyFloat_Check( py_obj)) return "float" ; if (PyDict_Check( py_obj)) return "dict" ; if (PyList_Check( py_obj)) return "list" ; if (PyTuple_Check( py_obj)) return "tuple" ; return "unknown type"; } /* Given a NumPy typecode, return a string describing the type. */ const char* typecode_string(int typecode) { static const char* type_names[25] = {"bool", "byte", "unsigned byte", "short", "unsigned short", "int", "unsigned int", "long", "unsigned long", "long long", "unsigned long long", "float", "double", "long double", "complex float", "complex double", "complex long double", "object", "string", "unicode", "void", "ntypes", "notype", "char", "unknown"}; return typecode < 24 ? type_names[typecode] : type_names[24]; } /* Make sure input has correct numpy type. This now just calls PyArray_EquivTypenums(). */ int type_match(int actual_type, int desired_type) { return PyArray_EquivTypenums(actual_type, desired_type); } %#ifdef SWIGPY_USE_CAPSULE void free_cap(PyObject * cap) { void* array = (void*) PyCapsule_GetPointer(cap,SWIGPY_CAPSULE_NAME); if (array != NULL) free(array); } %#endif } /**********************************************************************/ %fragment("NumPy_Object_to_Array", "header", fragment="NumPy_Backward_Compatibility", fragment="NumPy_Macros", fragment="NumPy_Utilities") { /* Given a PyObject pointer, cast it to a PyArrayObject pointer if * legal. If not, set the python error string appropriately and * return NULL. */ PyArrayObject* obj_to_array_no_conversion(PyObject* input, int typecode) { PyArrayObject* ary = NULL; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input), typecode))) { ary = (PyArrayObject*) input; } else if is_array(input) { const char* desired_type = typecode_string(typecode); const char* actual_type = typecode_string(array_type(input)); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. Array of type '%s' given", desired_type, actual_type); ary = NULL; } else { const char* desired_type = typecode_string(typecode); const char* actual_type = pytype_string(input); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. A '%s' was given", desired_type, actual_type); ary = NULL; } return ary; } /* Convert the given PyObject to a NumPy array with the given * typecode. On success, return a valid PyArrayObject* with the * correct type. On failure, the python error string will be set and * the routine returns NULL. */ PyArrayObject* obj_to_array_allow_conversion(PyObject* input, int typecode, int* is_new_object) { PyArrayObject* ary = NULL; PyObject* py_obj; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input),typecode))) { ary = (PyArrayObject*) input; *is_new_object = 0; } else { py_obj = PyArray_FROMANY(input, typecode, 0, 0, NPY_ARRAY_DEFAULT); /* If NULL, PyArray_FromObject will have set python error value.*/ ary = (PyArrayObject*) py_obj; *is_new_object = 1; } return ary; } /* Given a PyArrayObject, check to see if it is contiguous. If so, * return the input pointer and flag it as not a new object. If it is * not contiguous, create a new PyArrayObject using the original data, * flag it as a new object and return the pointer. */ PyArrayObject* make_contiguous(PyArrayObject* ary, int* is_new_object, int min_dims, int max_dims) { PyArrayObject* result; if (array_is_contiguous(ary)) { result = ary; *is_new_object = 0; } else { result = (PyArrayObject*) PyArray_ContiguousFromObject((PyObject*)ary, array_type(ary), min_dims, max_dims); *is_new_object = 1; } return result; } /* Given a PyArrayObject, check to see if it is Fortran-contiguous. * If so, return the input pointer, but do not flag it as not a new * object. If it is not Fortran-contiguous, create a new * PyArrayObject using the original data, flag it as a new object * and return the pointer. */ PyArrayObject* make_fortran(PyArrayObject* ary, int* is_new_object) { PyArrayObject* result; if (array_is_fortran(ary)) { result = ary; *is_new_object = 0; } else { Py_INCREF(array_descr(ary)); result = (PyArrayObject*) PyArray_FromArray(ary, array_descr(ary), %#if NPY_API_VERSION < 0x00000007 NPY_FORTRANORDER); %#else NPY_ARRAY_F_CONTIGUOUS); %#endif *is_new_object = 1; } return result; } /* Convert a given PyObject to a contiguous PyArrayObject of the * specified type. If the input object is not a contiguous * PyArrayObject, a new one will be created and the new object flag * will be set. */ PyArrayObject* obj_to_array_contiguous_allow_conversion(PyObject* input, int typecode, int* is_new_object) { int is_new1 = 0; int is_new2 = 0; PyArrayObject* ary2; PyArrayObject* ary1 = obj_to_array_allow_conversion(input, typecode, &is_new1); if (ary1) { ary2 = make_contiguous(ary1, &is_new2, 0, 0); if ( is_new1 && is_new2) { Py_DECREF(ary1); } ary1 = ary2; } *is_new_object = is_new1 || is_new2; return ary1; } /* Convert a given PyObject to a Fortran-ordered PyArrayObject of the * specified type. If the input object is not a Fortran-ordered * PyArrayObject, a new one will be created and the new object flag * will be set. */ PyArrayObject* obj_to_array_fortran_allow_conversion(PyObject* input, int typecode, int* is_new_object) { int is_new1 = 0; int is_new2 = 0; PyArrayObject* ary2; PyArrayObject* ary1 = obj_to_array_allow_conversion(input, typecode, &is_new1); if (ary1) { ary2 = make_fortran(ary1, &is_new2); if (is_new1 && is_new2) { Py_DECREF(ary1); } ary1 = ary2; } *is_new_object = is_new1 || is_new2; return ary1; } } /* end fragment */ /**********************************************************************/ %fragment("NumPy_Array_Requirements", "header", fragment="NumPy_Backward_Compatibility", fragment="NumPy_Macros") { /* Test whether a python object is contiguous. If array is * contiguous, return 1. Otherwise, set the python error string and * return 0. */ int require_contiguous(PyArrayObject* ary) { int contiguous = 1; if (!array_is_contiguous(ary)) { PyErr_SetString(PyExc_TypeError, "Array must be contiguous. A non-contiguous array was given"); contiguous = 0; } return contiguous; } /* Test whether a python object is (C_ or F_) contiguous. If array is * contiguous, return 1. Otherwise, set the python error string and * return 0. */ int require_c_or_f_contiguous(PyArrayObject* ary) { int contiguous = 1; if (!(array_is_contiguous(ary) || array_is_fortran(ary))) { PyErr_SetString(PyExc_TypeError, "Array must be contiguous (C_ or F_). A non-contiguous array was given"); contiguous = 0; } return contiguous; } /* Require that a numpy array is not byte-swapped. If the array is * not byte-swapped, return 1. Otherwise, set the python error string * and return 0. */ int require_native(PyArrayObject* ary) { int native = 1; if (!array_is_native(ary)) { PyErr_SetString(PyExc_TypeError, "Array must have native byteorder. " "A byte-swapped array was given"); native = 0; } return native; } /* Require the given PyArrayObject to have a specified number of * dimensions. If the array has the specified number of dimensions, * return 1. Otherwise, set the python error string and return 0. */ int require_dimensions(PyArrayObject* ary, int exact_dimensions) { int success = 1; if (array_numdims(ary) != exact_dimensions) { PyErr_Format(PyExc_TypeError, "Array must have %d dimensions. Given array has %d dimensions", exact_dimensions, array_numdims(ary)); success = 0; } return success; } /* Require the given PyArrayObject to have one of a list of specified * number of dimensions. If the array has one of the specified number * of dimensions, return 1. Otherwise, set the python error string * and return 0. */ int require_dimensions_n(PyArrayObject* ary, int* exact_dimensions, int n) { int success = 0; int i; char dims_str[255] = ""; char s[255]; for (i = 0; i < n && !success; i++) { if (array_numdims(ary) == exact_dimensions[i]) { success = 1; } } if (!success) { for (i = 0; i < n-1; i++) { sprintf(s, "%d, ", exact_dimensions[i]); strcat(dims_str,s); } sprintf(s, " or %d", exact_dimensions[n-1]); strcat(dims_str,s); PyErr_Format(PyExc_TypeError, "Array must have %s dimensions. Given array has %d dimensions", dims_str, array_numdims(ary)); } return success; } /* Require the given PyArrayObject to have a specified shape. If the * array has the specified shape, return 1. Otherwise, set the python * error string and return 0. */ int require_size(PyArrayObject* ary, npy_intp* size, int n) { int i; int success = 1; size_t len; char desired_dims[255] = "["; char s[255]; char actual_dims[255] = "["; for(i=0; i < n;i++) { if (size[i] != -1 && size[i] != array_size(ary,i)) { success = 0; } } if (!success) { for (i = 0; i < n; i++) { if (size[i] == -1) { sprintf(s, "*,"); } else { sprintf(s, "%ld,", (long int)size[i]); } strcat(desired_dims,s); } len = strlen(desired_dims); desired_dims[len-1] = ']'; for (i = 0; i < n; i++) { sprintf(s, "%ld,", (long int)array_size(ary,i)); strcat(actual_dims,s); } len = strlen(actual_dims); actual_dims[len-1] = ']'; PyErr_Format(PyExc_TypeError, "Array must have shape of %s. Given array has shape of %s", desired_dims, actual_dims); } return success; } /* Require the given PyArrayObject to be Fortran ordered. If the * the PyArrayObject is already Fortran ordered, do nothing. Else, * set the Fortran ordering flag and recompute the strides. */ int require_fortran(PyArrayObject* ary) { int success = 1; int nd = array_numdims(ary); int i; npy_intp * strides = array_strides(ary); if (array_is_fortran(ary)) return success; int n_non_one = 0; /* Set the Fortran ordered flag */ const npy_intp *dims = array_dimensions(ary); for (i=0; i < nd; ++i) n_non_one += (dims[i] != 1) ? 1 : 0; if (n_non_one > 1) array_clearflags(ary,NPY_ARRAY_CARRAY); array_enableflags(ary,NPY_ARRAY_FARRAY); /* Recompute the strides */ strides[0] = strides[nd-1]; for (i=1; i < nd; ++i) strides[i] = strides[i-1] * array_size(ary,i-1); return success; } } /* Combine all NumPy fragments into one for convenience */ %fragment("NumPy_Fragments", "header", fragment="NumPy_Backward_Compatibility", fragment="NumPy_Macros", fragment="NumPy_Utilities", fragment="NumPy_Object_to_Array", fragment="NumPy_Array_Requirements") { } /* End John Hunter translation (with modifications by Bill Spotz) */ /* %numpy_typemaps() macro * * This macro defines a family of 75 typemaps that allow C arguments * of the form * * 1. (DATA_TYPE IN_ARRAY1[ANY]) * 2. (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) * 3. (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) * * 4. (DATA_TYPE IN_ARRAY2[ANY][ANY]) * 5. (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * 6. (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) * 7. (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * 8. (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) * * 9. (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) * 10. (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * 11. (DATA_TYPE** IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * 12. (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) * 13. (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * 14. (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) * * 15. (DATA_TYPE IN_ARRAY4[ANY][ANY][ANY][ANY]) * 16. (DATA_TYPE* IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) * 17. (DATA_TYPE** IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) * 18. (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, , DIM_TYPE DIM4, DATA_TYPE* IN_ARRAY4) * 19. (DATA_TYPE* IN_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) * 20. (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_FARRAY4) * * 21. (DATA_TYPE INPLACE_ARRAY1[ANY]) * 22. (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) * 23. (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) * * 24. (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) * 25. (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * 26. (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) * 27. (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * 28. (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) * * 29. (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) * 30. (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * 31. (DATA_TYPE** INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * 32. (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_ARRAY3) * 33. (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * 34. (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_FARRAY3) * * 35. (DATA_TYPE INPLACE_ARRAY4[ANY][ANY][ANY][ANY]) * 36. (DATA_TYPE* INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) * 37. (DATA_TYPE** INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) * 38. (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_ARRAY4) * 39. (DATA_TYPE* INPLACE_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) * 40. (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_FARRAY4) * * 41. (DATA_TYPE ARGOUT_ARRAY1[ANY]) * 42. (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) * 43. (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) * * 44. (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) * * 45. (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) * * 46. (DATA_TYPE ARGOUT_ARRAY4[ANY][ANY][ANY][ANY]) * * 47. (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1) * 48. (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1) * * 49. (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) * 50. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2) * 51. (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) * 52. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2) * * 53. (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) * 54. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) * 55. (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) * 56. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) * * 57. (DATA_TYPE** ARGOUTVIEW_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) * 58. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_ARRAY4) * 59. (DATA_TYPE** ARGOUTVIEW_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) * 60. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_FARRAY4) * * 61. (DATA_TYPE** ARGOUTVIEWM_ARRAY1, DIM_TYPE* DIM1) * 62. (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEWM_ARRAY1) * * 63. (DATA_TYPE** ARGOUTVIEWM_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) * 64. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_ARRAY2) * 65. (DATA_TYPE** ARGOUTVIEWM_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) * 66. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_FARRAY2) * * 67. (DATA_TYPE** ARGOUTVIEWM_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) * 68. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_ARRAY3) * 69. (DATA_TYPE** ARGOUTVIEWM_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) * 70. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_FARRAY3) * * 71. (DATA_TYPE** ARGOUTVIEWM_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) * 72. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_ARRAY4) * 73. (DATA_TYPE** ARGOUTVIEWM_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) * 74. (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_FARRAY4) * * 75. (DATA_TYPE* INPLACE_ARRAY_FLAT, DIM_TYPE DIM_FLAT) * * where "DATA_TYPE" is any type supported by the NumPy module, and * "DIM_TYPE" is any int-like type suitable for specifying dimensions. * The difference between "ARRAY" typemaps and "FARRAY" typemaps is * that the "FARRAY" typemaps expect Fortran ordering of * multidimensional arrays. In python, the dimensions will not need * to be specified (except for the "DATA_TYPE* ARGOUT_ARRAY1" * typemaps). The IN_ARRAYs can be a numpy array or any sequence that * can be converted to a numpy array of the specified type. The * INPLACE_ARRAYs must be numpy arrays of the appropriate type. The * ARGOUT_ARRAYs will be returned as new numpy arrays of the * appropriate type. * * These typemaps can be applied to existing functions using the * %apply directive. For example: * * %apply (double* IN_ARRAY1, int DIM1) {(double* series, int length)}; * double prod(double* series, int length); * * %apply (int DIM1, int DIM2, double* INPLACE_ARRAY2) * {(int rows, int cols, double* matrix )}; * void floor(int rows, int cols, double* matrix, double f); * * %apply (double IN_ARRAY3[ANY][ANY][ANY]) * {(double tensor[2][2][2] )}; * %apply (double ARGOUT_ARRAY3[ANY][ANY][ANY]) * {(double low[2][2][2] )}; * %apply (double ARGOUT_ARRAY3[ANY][ANY][ANY]) * {(double upp[2][2][2] )}; * void luSplit(double tensor[2][2][2], * double low[2][2][2], * double upp[2][2][2] ); * * or directly with * * double prod(double* IN_ARRAY1, int DIM1); * * void floor(int DIM1, int DIM2, double* INPLACE_ARRAY2, double f); * * void luSplit(double IN_ARRAY3[ANY][ANY][ANY], * double ARGOUT_ARRAY3[ANY][ANY][ANY], * double ARGOUT_ARRAY3[ANY][ANY][ANY]); */ %define %numpy_typemaps(DATA_TYPE, DATA_TYPECODE, DIM_TYPE) /************************/ /* Input Array Typemaps */ /************************/ /* Typemap suite for (DATA_TYPE IN_ARRAY1[ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY1[ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY1[ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[1] = { $1_dim0 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 1) || !require_size(array, size, 1)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY1[ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[1] = { -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 1) || !require_size(array, size, 1)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); } %typemap(freearg) (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[1] = {-1}; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 1) || !require_size(array, size, 1)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE IN_ARRAY2[ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY2[ANY][ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY2[ANY][ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { $1_dim0, $1_dim1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY2[ANY][ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } %typemap(freearg) (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_fortran_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } %typemap(freearg) (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_fortran_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { $1_dim0, $1_dim1, $1_dim2 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } %typemap(freearg) (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE** IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE** IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { /* for now, only concerned with lists */ $1 = PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE** IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (DATA_TYPE** array=NULL, PyArrayObject** object_array=NULL, int* is_new_object_array=NULL) { npy_intp size[2] = { -1, -1 }; PyArrayObject* temp_array; Py_ssize_t i; int is_new_object; /* length of the list */ $2 = PyList_Size($input); /* the arrays */ array = (DATA_TYPE **)malloc($2*sizeof(DATA_TYPE *)); object_array = (PyArrayObject **)calloc($2,sizeof(PyArrayObject *)); is_new_object_array = (int *)calloc($2,sizeof(int)); if (array == NULL || object_array == NULL || is_new_object_array == NULL) { SWIG_fail; } for (i=0; i<$2; i++) { temp_array = obj_to_array_contiguous_allow_conversion(PySequence_GetItem($input,i), DATA_TYPECODE, &is_new_object); /* the new array must be stored so that it can be destroyed in freearg */ object_array[i] = temp_array; is_new_object_array[i] = is_new_object; if (!temp_array || !require_dimensions(temp_array, 2)) SWIG_fail; /* store the size of the first array in the list, then use that for comparison. */ if (i == 0) { size[0] = array_size(temp_array,0); size[1] = array_size(temp_array,1); } if (!require_size(temp_array, size, 2)) SWIG_fail; array[i] = (DATA_TYPE*) array_data(temp_array); } $1 = (DATA_TYPE**) array; $3 = (DIM_TYPE) size[0]; $4 = (DIM_TYPE) size[1]; } %typemap(freearg) (DATA_TYPE** IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { Py_ssize_t i; if (array$argnum!=NULL) free(array$argnum); /*freeing the individual arrays if needed */ if (object_array$argnum!=NULL) { if (is_new_object_array$argnum!=NULL) { for (i=0; i<$2; i++) { if (object_array$argnum[i] != NULL && is_new_object_array$argnum[i]) { Py_DECREF(object_array$argnum[i]); } } free(is_new_object_array$argnum); } free(object_array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* IN_ARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_fortran_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3) | !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } %typemap(freearg) (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* IN_FARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_fortran_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE IN_ARRAY4[ANY][ANY][ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY4[ANY][ANY][ANY][ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY4[ANY][ANY][ANY][ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[4] = { $1_dim0, $1_dim1, $1_dim2 , $1_dim3}; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 4) || !require_size(array, size, 4)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY4[ANY][ANY][ANY][ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3, DIM_TYPE DIM4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[4] = { -1, -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 4) || !require_size(array, size, 4)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); $5 = (DIM_TYPE) array_size(array,3); } %typemap(freearg) (DATA_TYPE* IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE** IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3, DIM_TYPE DIM4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE** IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { /* for now, only concerned with lists */ $1 = PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE** IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) (DATA_TYPE** array=NULL, PyArrayObject** object_array=NULL, int* is_new_object_array=NULL) { npy_intp size[3] = { -1, -1, -1 }; PyArrayObject* temp_array; Py_ssize_t i; int is_new_object; /* length of the list */ $2 = PyList_Size($input); /* the arrays */ array = (DATA_TYPE **)malloc($2*sizeof(DATA_TYPE *)); object_array = (PyArrayObject **)calloc($2,sizeof(PyArrayObject *)); is_new_object_array = (int *)calloc($2,sizeof(int)); if (array == NULL || object_array == NULL || is_new_object_array == NULL) { SWIG_fail; } for (i=0; i<$2; i++) { temp_array = obj_to_array_contiguous_allow_conversion(PySequence_GetItem($input,i), DATA_TYPECODE, &is_new_object); /* the new array must be stored so that it can be destroyed in freearg */ object_array[i] = temp_array; is_new_object_array[i] = is_new_object; if (!temp_array || !require_dimensions(temp_array, 3)) SWIG_fail; /* store the size of the first array in the list, then use that for comparison. */ if (i == 0) { size[0] = array_size(temp_array,0); size[1] = array_size(temp_array,1); size[2] = array_size(temp_array,2); } if (!require_size(temp_array, size, 3)) SWIG_fail; array[i] = (DATA_TYPE*) array_data(temp_array); } $1 = (DATA_TYPE**) array; $3 = (DIM_TYPE) size[0]; $4 = (DIM_TYPE) size[1]; $5 = (DIM_TYPE) size[2]; } %typemap(freearg) (DATA_TYPE** IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { Py_ssize_t i; if (array$argnum!=NULL) free(array$argnum); /*freeing the individual arrays if needed */ if (object_array$argnum!=NULL) { if (is_new_object_array$argnum!=NULL) { for (i=0; i<$2; i++) { if (object_array$argnum[i] != NULL && is_new_object_array$argnum[i]) { Py_DECREF(object_array$argnum[i]); } } free(is_new_object_array$argnum); } free(object_array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, * DATA_TYPE* IN_ARRAY4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_ARRAY4) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_ARRAY4) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[4] = { -1, -1, -1 , -1}; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 4) || !require_size(array, size, 4)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DIM_TYPE) array_size(array,3); $5 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_ARRAY4) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3, DIM_TYPE DIM4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[4] = { -1, -1, -1, -1 }; array = obj_to_array_fortran_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 4) || !require_size(array, size, 4) | !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); $5 = (DIM_TYPE) array_size(array,3); } %typemap(freearg) (DATA_TYPE* IN_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, * DATA_TYPE* IN_FARRAY4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_FARRAY4) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_FARRAY4) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[4] = { -1, -1, -1 , -1 }; array = obj_to_array_fortran_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 4) || !require_size(array, size, 4) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DIM_TYPE) array_size(array,3); $5 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_FARRAY4) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /***************************/ /* In-Place Array Typemaps */ /***************************/ /* Typemap suite for (DATA_TYPE INPLACE_ARRAY1[ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY1[ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY1[ANY]) (PyArrayObject* array=NULL) { npy_intp size[1] = { $1_dim0 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_size(array, size, 1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) (PyArrayObject* array=NULL, int i=1) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = 1; for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); } /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) (PyArrayObject* array=NULL, int i=0) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = 1; for (i=0; i < array_numdims(array); ++i) $1 *= array_size(array,i); $2 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) (PyArrayObject* array=NULL) { npy_intp size[2] = { $1_dim0, $1_dim1 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_size(array, size, 2) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) (PyArrayObject* array=NULL) { npy_intp size[3] = { $1_dim0, $1_dim1, $1_dim2 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_size(array, size, 3) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } /* Typemap suite for (DATA_TYPE** INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE** INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE** INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (DATA_TYPE** array=NULL, PyArrayObject** object_array=NULL) { npy_intp size[2] = { -1, -1 }; PyArrayObject* temp_array; Py_ssize_t i; /* length of the list */ $2 = PyList_Size($input); /* the arrays */ array = (DATA_TYPE **)malloc($2*sizeof(DATA_TYPE *)); object_array = (PyArrayObject **)calloc($2,sizeof(PyArrayObject *)); if (array == NULL || object_array == NULL) { SWIG_fail; } for (i=0; i<$2; i++) { temp_array = obj_to_array_no_conversion(PySequence_GetItem($input,i), DATA_TYPECODE); /* the new array must be stored so that it can be destroyed in freearg */ object_array[i] = temp_array; if ( !temp_array || !require_dimensions(temp_array, 2) || !require_contiguous(temp_array) || !require_native(temp_array) || !PyArray_EquivTypenums(array_type(temp_array), DATA_TYPECODE) ) SWIG_fail; /* store the size of the first array in the list, then use that for comparison. */ if (i == 0) { size[0] = array_size(temp_array,0); size[1] = array_size(temp_array,1); } if (!require_size(temp_array, size, 2)) SWIG_fail; array[i] = (DATA_TYPE*) array_data(temp_array); } $1 = (DATA_TYPE**) array; $3 = (DIM_TYPE) size[0]; $4 = (DIM_TYPE) size[1]; } %typemap(freearg) (DATA_TYPE** INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { if (array$argnum!=NULL) free(array$argnum); if (object_array$argnum!=NULL) free(object_array$argnum); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* INPLACE_ARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_ARRAY3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_ARRAY3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* INPLACE_FARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_FARRAY3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_FARRAY3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE INPLACE_ARRAY4[ANY][ANY][ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY4[ANY][ANY][ANY][ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY4[ANY][ANY][ANY][ANY]) (PyArrayObject* array=NULL) { npy_intp size[4] = { $1_dim0, $1_dim1, $1_dim2 , $1_dim3 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,4) || !require_size(array, size, 4) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3, DIM_TYPE DIM4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,4) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); $5 = (DIM_TYPE) array_size(array,3); } /* Typemap suite for (DATA_TYPE** INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3, DIM_TYPE DIM4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE** INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { $1 = PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE** INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) (DATA_TYPE** array=NULL, PyArrayObject** object_array=NULL) { npy_intp size[3] = { -1, -1, -1 }; PyArrayObject* temp_array; Py_ssize_t i; /* length of the list */ $2 = PyList_Size($input); /* the arrays */ array = (DATA_TYPE **)malloc($2*sizeof(DATA_TYPE *)); object_array = (PyArrayObject **)calloc($2,sizeof(PyArrayObject *)); if (array == NULL || object_array == NULL) { SWIG_fail; } for (i=0; i<$2; i++) { temp_array = obj_to_array_no_conversion(PySequence_GetItem($input,i), DATA_TYPECODE); /* the new array must be stored so that it can be destroyed in freearg */ object_array[i] = temp_array; if ( !temp_array || !require_dimensions(temp_array, 3) || !require_contiguous(temp_array) || !require_native(temp_array) || !PyArray_EquivTypenums(array_type(temp_array), DATA_TYPECODE) ) SWIG_fail; /* store the size of the first array in the list, then use that for comparison. */ if (i == 0) { size[0] = array_size(temp_array,0); size[1] = array_size(temp_array,1); size[2] = array_size(temp_array,2); } if (!require_size(temp_array, size, 3)) SWIG_fail; array[i] = (DATA_TYPE*) array_data(temp_array); } $1 = (DATA_TYPE**) array; $3 = (DIM_TYPE) size[0]; $4 = (DIM_TYPE) size[1]; $5 = (DIM_TYPE) size[2]; } %typemap(freearg) (DATA_TYPE** INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { if (array$argnum!=NULL) free(array$argnum); if (object_array$argnum!=NULL) free(object_array$argnum); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, * DATA_TYPE* INPLACE_ARRAY4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_ARRAY4) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_ARRAY4) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,4) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DIM_TYPE) array_size(array,3); $5 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3, DIM_TYPE DIM4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,4) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); $5 = (DIM_TYPE) array_size(array,3); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* INPLACE_FARRAY4) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_FARRAY4) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_FARRAY4) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,4) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DIM_TYPE) array_size(array,3); $5 = (DATA_TYPE*) array_data(array); } /*************************/ /* Argout Array Typemaps */ /*************************/ /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY1[ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY1[ANY]) (PyObject* array = NULL) { npy_intp dims[1] = { $1_dim0 }; array = PyArray_SimpleNew(1, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY1[ANY]) { $result = SWIG_Python_AppendOutput($result,(PyObject*)array$argnum); } /* Typemap suite for (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) */ %typemap(in,numinputs=1, fragment="NumPy_Fragments") (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) (PyObject* array = NULL) { npy_intp dims[1]; if (!PyLong_Check($input)) { const char* typestring = pytype_string($input); PyErr_Format(PyExc_TypeError, "Int dimension expected. '%s' given.", typestring); SWIG_fail; } $2 = (DIM_TYPE) PyLong_AsSsize_t($input); if ($2 == -1 && PyErr_Occurred()) SWIG_fail; dims[0] = (npy_intp) $2; array = PyArray_SimpleNew(1, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); } %typemap(argout) (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) { $result = SWIG_Python_AppendOutput($result,(PyObject*)array$argnum); } /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) */ %typemap(in,numinputs=1, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) (PyObject* array = NULL) { npy_intp dims[1]; if (!PyLong_Check($input)) { const char* typestring = pytype_string($input); PyErr_Format(PyExc_TypeError, "Int dimension expected. '%s' given.", typestring); SWIG_fail; } $1 = (DIM_TYPE) PyLong_AsSsize_t($input); if ($1 == -1 && PyErr_Occurred()) SWIG_fail; dims[0] = (npy_intp) $1; array = PyArray_SimpleNew(1, dims, DATA_TYPECODE); if (!array) SWIG_fail; $2 = (DATA_TYPE*) array_data(array); } %typemap(argout) (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) { $result = SWIG_Python_AppendOutput($result,(PyObject*)array$argnum); } /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) (PyObject* array = NULL) { npy_intp dims[2] = { $1_dim0, $1_dim1 }; array = PyArray_SimpleNew(2, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) { $result = SWIG_Python_AppendOutput($result,(PyObject*)array$argnum); } /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) (PyObject* array = NULL) { npy_intp dims[3] = { $1_dim0, $1_dim1, $1_dim2 }; array = PyArray_SimpleNew(3, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) { $result = SWIG_Python_AppendOutput($result,(PyObject*)array$argnum); } /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY4[ANY][ANY][ANY][ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY4[ANY][ANY][ANY][ANY]) (PyObject* array = NULL) { npy_intp dims[4] = { $1_dim0, $1_dim1, $1_dim2, $1_dim3 }; array = PyArray_SimpleNew(4, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY4[ANY][ANY][ANY][ANY]) { $result = SWIG_Python_AppendOutput($result,(PyObject*)array$argnum); } /*****************************/ /* Argoutview Array Typemaps */ /*****************************/ /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim_temp) { $1 = &data_temp; $2 = &dim_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1) { npy_intp dims[1] = { *$2 }; PyObject* obj = PyArray_SimpleNewFromData(1, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DATA_TYPE** ARGOUTVIEW_ARRAY1) (DIM_TYPE dim_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim_temp; $2 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1) { npy_intp dims[1] = { *$1 }; PyObject* obj = PyArray_SimpleNewFromData(1, dims, DATA_TYPECODE, (void*)(*$2)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) { npy_intp dims[2] = { *$2, *$3 }; PyObject* obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DATA_TYPE** ARGOUTVIEW_ARRAY2) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2) { npy_intp dims[2] = { *$1, *$2 }; PyObject* obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$3)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) { npy_intp dims[2] = { *$2, *$3 }; PyObject* obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DATA_TYPE** ARGOUTVIEW_FARRAY2) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2) { npy_intp dims[2] = { *$1, *$2 }; PyObject* obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$3)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) { npy_intp dims[3] = { *$2, *$3, *$4 }; PyObject* obj = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DATA_TYPE* data_temp = NULL) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) { npy_intp dims[3] = { *$1, *$2, *$3 }; PyObject* obj = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$4)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) { npy_intp dims[3] = { *$2, *$3, *$4 }; PyObject* obj = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DATA_TYPE** ARGOUTVIEW_FARRAY3) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) { npy_intp dims[3] = { *$1, *$2, *$3 }; PyObject* obj = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$4)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY4, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DIM_TYPE* DIM4 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DIM_TYPE dim4_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; $5 = &dim4_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) { npy_intp dims[4] = { *$2, *$3, *$4 , *$5 }; PyObject* obj = PyArray_SimpleNewFromData(4, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_ARRAY4) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DIM_TYPE* DIM4 , DATA_TYPE** ARGOUTVIEW_ARRAY4) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DIM_TYPE dim4_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &dim4_temp; $5 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_ARRAY4) { npy_intp dims[4] = { *$1, *$2, *$3 , *$4 }; PyObject* obj = PyArray_SimpleNewFromData(4, dims, DATA_TYPECODE, (void*)(*$5)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_FARRAY4, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DIM_TYPE* DIM4 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DIM_TYPE dim4_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; $5 = &dim4_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DATA_TYPE** ARGOUTVIEW_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) { npy_intp dims[4] = { *$2, *$3, *$4 , *$5 }; PyObject* obj = PyArray_SimpleNewFromData(4, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_FARRAY4) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DIM_TYPE* DIM4 , DATA_TYPE** ARGOUTVIEW_FARRAY4) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DIM_TYPE dim4_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &dim4_temp; $5 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_FARRAY4) { npy_intp dims[4] = { *$1, *$2, *$3 , *$4 }; PyObject* obj = PyArray_SimpleNewFromData(4, dims, DATA_TYPECODE, (void*)(*$5)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /*************************************/ /* Managed Argoutview Array Typemaps */ /*************************************/ /* Typemap suite for (DATA_TYPE** ARGOUTVIEWM_ARRAY1, DIM_TYPE* DIM1) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEWM_ARRAY1, DIM_TYPE* DIM1 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim_temp) { $1 = &data_temp; $2 = &dim_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Utilities") (DATA_TYPE** ARGOUTVIEWM_ARRAY1, DIM_TYPE* DIM1) { npy_intp dims[1] = { *$2 }; PyObject* obj = PyArray_SimpleNewFromData(1, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$1), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$1), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEWM_ARRAY1) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DATA_TYPE** ARGOUTVIEWM_ARRAY1) (DIM_TYPE dim_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim_temp; $2 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Utilities") (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEWM_ARRAY1) { npy_intp dims[1] = { *$1 }; PyObject* obj = PyArray_SimpleNewFromData(1, dims, DATA_TYPECODE, (void*)(*$2)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$2), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$2), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEWM_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEWM_ARRAY2, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Utilities") (DATA_TYPE** ARGOUTVIEWM_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) { npy_intp dims[2] = { *$2, *$3 }; PyObject* obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$1), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$1), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_ARRAY2) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DATA_TYPE** ARGOUTVIEWM_ARRAY2) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Utilities") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_ARRAY2) { npy_intp dims[2] = { *$1, *$2 }; PyObject* obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$3)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$3), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$3), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEWM_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEWM_FARRAY2, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements,NumPy_Utilities") (DATA_TYPE** ARGOUTVIEWM_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) { npy_intp dims[2] = { *$2, *$3 }; PyObject* obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$1), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$1), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_FARRAY2) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DATA_TYPE** ARGOUTVIEWM_FARRAY2) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements,NumPy_Utilities") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_FARRAY2) { npy_intp dims[2] = { *$1, *$2 }; PyObject* obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$3)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$3), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$3), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEWM_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEWM_ARRAY3, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Utilities") (DATA_TYPE** ARGOUTVIEWM_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) { npy_intp dims[3] = { *$2, *$3, *$4 }; PyObject* obj = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$1), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$1), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_ARRAY3) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DATA_TYPE** ARGOUTVIEWM_ARRAY3) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Utilities") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_ARRAY3) { npy_intp dims[3] = { *$1, *$2, *$3 }; PyObject* obj= PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$4)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$4), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$4), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEWM_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEWM_FARRAY3, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements,NumPy_Utilities") (DATA_TYPE** ARGOUTVIEWM_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) { npy_intp dims[3] = { *$2, *$3, *$4 }; PyObject* obj = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$1), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$1), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_FARRAY3) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DATA_TYPE** ARGOUTVIEWM_FARRAY3) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements,NumPy_Utilities") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_FARRAY3) { npy_intp dims[3] = { *$1, *$2, *$3 }; PyObject* obj = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$4)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$4), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$4), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEWM_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEWM_ARRAY4, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DIM_TYPE* DIM4 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DIM_TYPE dim4_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; $5 = &dim4_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Utilities") (DATA_TYPE** ARGOUTVIEWM_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) { npy_intp dims[4] = { *$2, *$3, *$4 , *$5 }; PyObject* obj = PyArray_SimpleNewFromData(4, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$1), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$1), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_ARRAY4) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DIM_TYPE* DIM4 , DATA_TYPE** ARGOUTVIEWM_ARRAY4) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DIM_TYPE dim4_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &dim4_temp; $5 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Utilities") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_ARRAY4) { npy_intp dims[4] = { *$1, *$2, *$3 , *$4 }; PyObject* obj = PyArray_SimpleNewFromData(4, dims, DATA_TYPECODE, (void*)(*$5)); PyArrayObject* array = (PyArrayObject*) obj; if (!array) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$5), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$5), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEWM_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEWM_FARRAY4, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DIM_TYPE* DIM4 ) (DATA_TYPE* data_temp = NULL , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DIM_TYPE dim4_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; $5 = &dim4_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements,NumPy_Utilities") (DATA_TYPE** ARGOUTVIEWM_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4) { npy_intp dims[4] = { *$2, *$3, *$4 , *$5 }; PyObject* obj = PyArray_SimpleNewFromData(4, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$1), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$1), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_FARRAY4) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DIM_TYPE* DIM3 , DIM_TYPE* DIM4 , DATA_TYPE** ARGOUTVIEWM_FARRAY4) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DIM_TYPE dim4_temp, DATA_TYPE* data_temp = NULL ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &dim4_temp; $5 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements,NumPy_Utilities") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_FARRAY4) { npy_intp dims[4] = { *$1, *$2, *$3 , *$4 }; PyObject* obj = PyArray_SimpleNewFromData(4, dims, DATA_TYPECODE, (void*)(*$5)); PyArrayObject* array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; %#ifdef SWIGPY_USE_CAPSULE PyObject* cap = PyCapsule_New((void*)(*$5), SWIGPY_CAPSULE_NAME, free_cap); %#else PyObject* cap = PyCObject_FromVoidPtr((void*)(*$5), free); %#endif %#if NPY_API_VERSION < 0x00000007 PyArray_BASE(array) = cap; %#else PyArray_SetBaseObject(array,cap); %#endif $result = SWIG_Python_AppendOutput($result,obj); } /**************************************/ /* In-Place Array Typemap - flattened */ /**************************************/ /* Typemap suite for (DATA_TYPE* INPLACE_ARRAY_FLAT, DIM_TYPE DIM_FLAT) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY_FLAT, DIM_TYPE DIM_FLAT) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY_FLAT, DIM_TYPE DIM_FLAT) (PyArrayObject* array=NULL, int i=1) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_c_or_f_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = 1; for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); } %enddef /* %numpy_typemaps() macro */ /* *************************************************************** */ /* Concrete instances of the %numpy_typemaps() macro: Each invocation * below applies all of the typemaps above to the specified data type. */ %numpy_typemaps(signed char , NPY_BYTE , int) %numpy_typemaps(unsigned char , NPY_UBYTE , int) %numpy_typemaps(short , NPY_SHORT , int) %numpy_typemaps(unsigned short , NPY_USHORT , int) %numpy_typemaps(int , NPY_INT , int) %numpy_typemaps(unsigned int , NPY_UINT , int) %numpy_typemaps(long , NPY_LONG , int) %numpy_typemaps(unsigned long , NPY_ULONG , int) %numpy_typemaps(long long , NPY_LONGLONG , int) %numpy_typemaps(unsigned long long, NPY_ULONGLONG, int) %numpy_typemaps(float , NPY_FLOAT , int) %numpy_typemaps(double , NPY_DOUBLE , int) %numpy_typemaps(int8_t , NPY_INT8 , int) %numpy_typemaps(int16_t , NPY_INT16 , int) %numpy_typemaps(int32_t , NPY_INT32 , int) %numpy_typemaps(int64_t , NPY_INT64 , int) %numpy_typemaps(uint8_t , NPY_UINT8 , int) %numpy_typemaps(uint16_t , NPY_UINT16 , int) %numpy_typemaps(uint32_t , NPY_UINT32 , int) %numpy_typemaps(uint64_t , NPY_UINT64 , int) /* *************************************************************** * The follow macro expansion does not work, because C++ bool is 4 * bytes and NPY_BOOL is 1 byte * * %numpy_typemaps(bool, NPY_BOOL, int) */ /* *************************************************************** * On my Mac, I get the following warning for this macro expansion: * 'swig/python detected a memory leak of type 'long double *', no destructor found.' * * %numpy_typemaps(long double, NPY_LONGDOUBLE, int) */ #ifdef __cplusplus %include <std_complex.i> %numpy_typemaps(std::complex<float>, NPY_CFLOAT , int) %numpy_typemaps(std::complex<double>, NPY_CDOUBLE, int) #endif #endif /* SWIGPYTHON */
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/extensions
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/extensions/pdt/pdtscript.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Definitions of 'scriptable' versions of pdt operations, that is, // those that can be called with FstClass-type arguments. // // See comments in nlp/fst/script/script-impl.h for how the registration // mechanism allows these to work with various arc types. #include <string> #include <vector> #include <fst/extensions/pdt/compose.h> #include <fst/extensions/pdt/expand.h> #include <fst/extensions/pdt/pdtscript.h> #include <fst/extensions/pdt/replace.h> #include <fst/extensions/pdt/reverse.h> #include <fst/extensions/pdt/shortest-path.h> #include <fst/script/script-impl.h> namespace fst { namespace script { void PdtCompose(const FstClass &ifst1, const FstClass &ifst2, const std::vector<LabelPair> &parens, MutableFstClass *ofst, const PdtComposeOptions &copts, bool left_pdt) { if (!internal::ArcTypesMatch(ifst1, ifst2, "PdtCompose") || !internal::ArcTypesMatch(ifst1, *ofst, "PdtCompose")) return; PdtComposeArgs args(ifst1, ifst2, parens, ofst, copts, left_pdt); Apply<Operation<PdtComposeArgs>>("PdtCompose", ifst1.ArcType(), &args); } void PdtExpand(const FstClass &ifst, const std::vector<LabelPair> &parens, MutableFstClass *ofst, const PdtExpandOptions &opts) { PdtExpandArgs args(ifst, parens, ofst, opts); Apply<Operation<PdtExpandArgs>>("PdtExpand", ifst.ArcType(), &args); } void PdtExpand(const FstClass &ifst, const std::vector<std::pair<int64, int64>> &parens, MutableFstClass *ofst, bool connect, bool keep_parentheses, const WeightClass &weight_threshold) { PdtExpand(ifst, parens, ofst, PdtExpandOptions(connect, keep_parentheses, weight_threshold)); } void PdtReplace(const std::vector<LabelFstClassPair> &pairs, MutableFstClass *ofst, std::vector<LabelPair> *parens, int64 root, PdtParserType parser_type, int64 start_paren_labels, const string &left_paren_prefix, const string &right_paren_prefix) { for (size_t i = 1; i < pairs.size(); ++i) { if (!internal::ArcTypesMatch(*pairs[i - 1].second, *pairs[i].second, "PdtReplace")) return; } if (!internal::ArcTypesMatch(*pairs[0].second, *ofst, "PdtReplace")) return; PdtReplaceArgs args(pairs, ofst, parens, root, parser_type, start_paren_labels, left_paren_prefix, right_paren_prefix); Apply<Operation<PdtReplaceArgs>>("PdtReplace", ofst->ArcType(), &args); } void PdtReverse(const FstClass &ifst, const std::vector<LabelPair> &parens, MutableFstClass *ofst) { PdtReverseArgs args(ifst, parens, ofst); Apply<Operation<PdtReverseArgs>>("PdtReverse", ifst.ArcType(), &args); } void PdtShortestPath(const FstClass &ifst, const std::vector<LabelPair> &parens, MutableFstClass *ofst, const PdtShortestPathOptions &opts) { PdtShortestPathArgs args(ifst, parens, ofst, opts); Apply<Operation<PdtShortestPathArgs>>("PdtShortestPath", ifst.ArcType(), &args); } void PrintPdtInfo(const FstClass &ifst, const std::vector<LabelPair> &parens) { PrintPdtInfoArgs args(ifst, parens); Apply<Operation<PrintPdtInfoArgs>>("PrintPdtInfo", ifst.ArcType(), &args); } // Register operations for common arc types. REGISTER_FST_PDT_OPERATIONS(StdArc); REGISTER_FST_PDT_OPERATIONS(LogArc); REGISTER_FST_PDT_OPERATIONS(Log64Arc); } // namespace script } // namespace fst
0
coqui_public_repos/inference-engine/third_party/cereal/include/cereal/external
coqui_public_repos/inference-engine/third_party/cereal/include/cereal/external/rapidjson/allocators.h
// Tencent is pleased to support the open source community by making RapidJSON available. // // Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip. All rights reserved. // // Licensed under the MIT License (the "License"); you may not use this file except // in compliance with the License. You may obtain a copy of the License at // // http://opensource.org/licenses/MIT // // Unless required by applicable law or agreed to in writing, software distributed // under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR // CONDITIONS OF ANY KIND, either express or implied. See the License for the // specific language governing permissions and limitations under the License. #ifndef CEREAL_RAPIDJSON_ALLOCATORS_H_ #define CEREAL_RAPIDJSON_ALLOCATORS_H_ #include "rapidjson.h" CEREAL_RAPIDJSON_NAMESPACE_BEGIN /////////////////////////////////////////////////////////////////////////////// // Allocator /*! \class rapidjson::Allocator \brief Concept for allocating, resizing and freeing memory block. Note that Malloc() and Realloc() are non-static but Free() is static. So if an allocator need to support Free(), it needs to put its pointer in the header of memory block. \code concept Allocator { static const bool kNeedFree; //!< Whether this allocator needs to call Free(). // Allocate a memory block. // \param size of the memory block in bytes. // \returns pointer to the memory block. void* Malloc(size_t size); // Resize a memory block. // \param originalPtr The pointer to current memory block. Null pointer is permitted. // \param originalSize The current size in bytes. (Design issue: since some allocator may not book-keep this, explicitly pass to it can save memory.) // \param newSize the new size in bytes. void* Realloc(void* originalPtr, size_t originalSize, size_t newSize); // Free a memory block. // \param pointer to the memory block. Null pointer is permitted. static void Free(void *ptr); }; \endcode */ /*! \def CEREAL_RAPIDJSON_ALLOCATOR_DEFAULT_CHUNK_CAPACITY \ingroup CEREAL_RAPIDJSON_CONFIG \brief User-defined kDefaultChunkCapacity definition. User can define this as any \c size that is a power of 2. */ #ifndef CEREAL_RAPIDJSON_ALLOCATOR_DEFAULT_CHUNK_CAPACITY #define CEREAL_RAPIDJSON_ALLOCATOR_DEFAULT_CHUNK_CAPACITY (64 * 1024) #endif /////////////////////////////////////////////////////////////////////////////// // CrtAllocator //! C-runtime library allocator. /*! This class is just wrapper for standard C library memory routines. \note implements Allocator concept */ class CrtAllocator { public: static const bool kNeedFree = true; void* Malloc(size_t size) { if (size) // behavior of malloc(0) is implementation defined. return std::malloc(size); else return NULL; // standardize to returning NULL. } void* Realloc(void* originalPtr, size_t originalSize, size_t newSize) { (void)originalSize; if (newSize == 0) { std::free(originalPtr); return NULL; } return std::realloc(originalPtr, newSize); } static void Free(void *ptr) { std::free(ptr); } }; /////////////////////////////////////////////////////////////////////////////// // MemoryPoolAllocator //! Default memory allocator used by the parser and DOM. /*! This allocator allocate memory blocks from pre-allocated memory chunks. It does not free memory blocks. And Realloc() only allocate new memory. The memory chunks are allocated by BaseAllocator, which is CrtAllocator by default. User may also supply a buffer as the first chunk. If the user-buffer is full then additional chunks are allocated by BaseAllocator. The user-buffer is not deallocated by this allocator. \tparam BaseAllocator the allocator type for allocating memory chunks. Default is CrtAllocator. \note implements Allocator concept */ template <typename BaseAllocator = CrtAllocator> class MemoryPoolAllocator { public: static const bool kNeedFree = false; //!< Tell users that no need to call Free() with this allocator. (concept Allocator) //! Constructor with chunkSize. /*! \param chunkSize The size of memory chunk. The default is kDefaultChunkSize. \param baseAllocator The allocator for allocating memory chunks. */ MemoryPoolAllocator(size_t chunkSize = kDefaultChunkCapacity, BaseAllocator* baseAllocator = 0) : chunkHead_(0), chunk_capacity_(chunkSize), userBuffer_(0), baseAllocator_(baseAllocator), ownBaseAllocator_(0) { } //! Constructor with user-supplied buffer. /*! The user buffer will be used firstly. When it is full, memory pool allocates new chunk with chunk size. The user buffer will not be deallocated when this allocator is destructed. \param buffer User supplied buffer. \param size Size of the buffer in bytes. It must at least larger than sizeof(ChunkHeader). \param chunkSize The size of memory chunk. The default is kDefaultChunkSize. \param baseAllocator The allocator for allocating memory chunks. */ MemoryPoolAllocator(void *buffer, size_t size, size_t chunkSize = kDefaultChunkCapacity, BaseAllocator* baseAllocator = 0) : chunkHead_(0), chunk_capacity_(chunkSize), userBuffer_(buffer), baseAllocator_(baseAllocator), ownBaseAllocator_(0) { CEREAL_RAPIDJSON_ASSERT(buffer != 0); CEREAL_RAPIDJSON_ASSERT(size > sizeof(ChunkHeader)); chunkHead_ = reinterpret_cast<ChunkHeader*>(buffer); chunkHead_->capacity = size - sizeof(ChunkHeader); chunkHead_->size = 0; chunkHead_->next = 0; } //! Destructor. /*! This deallocates all memory chunks, excluding the user-supplied buffer. */ ~MemoryPoolAllocator() { Clear(); CEREAL_RAPIDJSON_DELETE(ownBaseAllocator_); } //! Deallocates all memory chunks, excluding the user-supplied buffer. void Clear() { while (chunkHead_ && chunkHead_ != userBuffer_) { ChunkHeader* next = chunkHead_->next; baseAllocator_->Free(chunkHead_); chunkHead_ = next; } if (chunkHead_ && chunkHead_ == userBuffer_) chunkHead_->size = 0; // Clear user buffer } //! Computes the total capacity of allocated memory chunks. /*! \return total capacity in bytes. */ size_t Capacity() const { size_t capacity = 0; for (ChunkHeader* c = chunkHead_; c != 0; c = c->next) capacity += c->capacity; return capacity; } //! Computes the memory blocks allocated. /*! \return total used bytes. */ size_t Size() const { size_t size = 0; for (ChunkHeader* c = chunkHead_; c != 0; c = c->next) size += c->size; return size; } //! Allocates a memory block. (concept Allocator) void* Malloc(size_t size) { if (!size) return NULL; size = CEREAL_RAPIDJSON_ALIGN(size); if (chunkHead_ == 0 || chunkHead_->size + size > chunkHead_->capacity) if (!AddChunk(chunk_capacity_ > size ? chunk_capacity_ : size)) return NULL; void *buffer = reinterpret_cast<char *>(chunkHead_) + CEREAL_RAPIDJSON_ALIGN(sizeof(ChunkHeader)) + chunkHead_->size; chunkHead_->size += size; return buffer; } //! Resizes a memory block (concept Allocator) void* Realloc(void* originalPtr, size_t originalSize, size_t newSize) { if (originalPtr == 0) return Malloc(newSize); if (newSize == 0) return NULL; originalSize = CEREAL_RAPIDJSON_ALIGN(originalSize); newSize = CEREAL_RAPIDJSON_ALIGN(newSize); // Do not shrink if new size is smaller than original if (originalSize >= newSize) return originalPtr; // Simply expand it if it is the last allocation and there is sufficient space if (originalPtr == reinterpret_cast<char *>(chunkHead_) + CEREAL_RAPIDJSON_ALIGN(sizeof(ChunkHeader)) + chunkHead_->size - originalSize) { size_t increment = static_cast<size_t>(newSize - originalSize); if (chunkHead_->size + increment <= chunkHead_->capacity) { chunkHead_->size += increment; return originalPtr; } } // Realloc process: allocate and copy memory, do not free original buffer. if (void* newBuffer = Malloc(newSize)) { if (originalSize) std::memcpy(newBuffer, originalPtr, originalSize); return newBuffer; } else return NULL; } //! Frees a memory block (concept Allocator) static void Free(void *ptr) { (void)ptr; } // Do nothing private: //! Copy constructor is not permitted. MemoryPoolAllocator(const MemoryPoolAllocator& rhs) /* = delete */; //! Copy assignment operator is not permitted. MemoryPoolAllocator& operator=(const MemoryPoolAllocator& rhs) /* = delete */; //! Creates a new chunk. /*! \param capacity Capacity of the chunk in bytes. \return true if success. */ bool AddChunk(size_t capacity) { if (!baseAllocator_) ownBaseAllocator_ = baseAllocator_ = CEREAL_RAPIDJSON_NEW(BaseAllocator)(); if (ChunkHeader* chunk = reinterpret_cast<ChunkHeader*>(baseAllocator_->Malloc(CEREAL_RAPIDJSON_ALIGN(sizeof(ChunkHeader)) + capacity))) { chunk->capacity = capacity; chunk->size = 0; chunk->next = chunkHead_; chunkHead_ = chunk; return true; } else return false; } static const int kDefaultChunkCapacity = CEREAL_RAPIDJSON_ALLOCATOR_DEFAULT_CHUNK_CAPACITY; //!< Default chunk capacity. //! Chunk header for perpending to each chunk. /*! Chunks are stored as a singly linked list. */ struct ChunkHeader { size_t capacity; //!< Capacity of the chunk in bytes (excluding the header itself). size_t size; //!< Current size of allocated memory in bytes. ChunkHeader *next; //!< Next chunk in the linked list. }; ChunkHeader *chunkHead_; //!< Head of the chunk linked-list. Only the head chunk serves allocation. size_t chunk_capacity_; //!< The minimum capacity of chunk when they are allocated. void *userBuffer_; //!< User supplied buffer. BaseAllocator* baseAllocator_; //!< base allocator for allocating memory chunks. BaseAllocator* ownBaseAllocator_; //!< base allocator created by this object. }; CEREAL_RAPIDJSON_NAMESPACE_END #endif // CEREAL_RAPIDJSON_ENCODINGS_H_
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/test/algo_test.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Regression test for various FST algorithms. #ifndef FST_TEST_ALGO_TEST_H_ #define FST_TEST_ALGO_TEST_H_ #include <fst/log.h> #include <fst/fstlib.h> #include "./rand-fst.h" DECLARE_int32(repeat); // defined in ./algo_test.cc namespace fst { // Mapper to change input and output label of every transition into // epsilons. template <class A> class EpsMapper { public: EpsMapper() {} A operator()(const A &arc) const { return A(0, 0, arc.weight, arc.nextstate); } uint64 Properties(uint64 props) const { props &= ~kNotAcceptor; props |= kAcceptor; props &= ~kNoIEpsilons & ~kNoOEpsilons & ~kNoEpsilons; props |= kIEpsilons | kOEpsilons | kEpsilons; props &= ~kNotILabelSorted & ~kNotOLabelSorted; props |= kILabelSorted | kOLabelSorted; return props; } MapFinalAction FinalAction() const { return MAP_NO_SUPERFINAL; } MapSymbolsAction InputSymbolsAction() const { return MAP_COPY_SYMBOLS; } MapSymbolsAction OutputSymbolsAction() const { return MAP_COPY_SYMBOLS; } }; // Generic - no lookahead. template <class Arc> void LookAheadCompose(const Fst<Arc> &ifst1, const Fst<Arc> &ifst2, MutableFst<Arc> *ofst) { Compose(ifst1, ifst2, ofst); } // Specialized and epsilon olabel acyclic - lookahead. void LookAheadCompose(const Fst<StdArc> &ifst1, const Fst<StdArc> &ifst2, MutableFst<StdArc> *ofst) { std::vector<StdArc::StateId> order; bool acyclic; TopOrderVisitor<StdArc> visitor(&order, &acyclic); DfsVisit(ifst1, &visitor, OutputEpsilonArcFilter<StdArc>()); if (acyclic) { // no ifst1 output epsilon cycles? StdOLabelLookAheadFst lfst1(ifst1); StdVectorFst lfst2(ifst2); LabelLookAheadRelabeler<StdArc>::Relabel(&lfst2, lfst1, true); Compose(lfst1, lfst2, ofst); } else { Compose(ifst1, ifst2, ofst); } } // This class tests a variety of identities and properties that must // hold for various algorithms on weighted FSTs. template <class Arc, class WeightGenerator> class WeightedTester { public: typedef typename Arc::Label Label; typedef typename Arc::StateId StateId; typedef typename Arc::Weight Weight; WeightedTester(time_t seed, const Fst<Arc> &zero_fst, const Fst<Arc> &one_fst, const Fst<Arc> &univ_fst, WeightGenerator *weight_generator) : seed_(seed), zero_fst_(zero_fst), one_fst_(one_fst), univ_fst_(univ_fst), weight_generator_(weight_generator) {} void Test(const Fst<Arc> &T1, const Fst<Arc> &T2, const Fst<Arc> &T3) { TestRational(T1, T2, T3); TestMap(T1); TestCompose(T1, T2, T3); TestSort(T1); TestOptimize(T1); TestSearch(T1); } private: // Tests rational operations with identities void TestRational(const Fst<Arc> &T1, const Fst<Arc> &T2, const Fst<Arc> &T3) { { VLOG(1) << "Check destructive and delayed union are equivalent."; VectorFst<Arc> U1(T1); Union(&U1, T2); UnionFst<Arc> U2(T1, T2); CHECK(Equiv(U1, U2)); } { VLOG(1) << "Check destructive and delayed concatenation are equivalent."; VectorFst<Arc> C1(T1); Concat(&C1, T2); ConcatFst<Arc> C2(T1, T2); CHECK(Equiv(C1, C2)); VectorFst<Arc> C3(T2); Concat(T1, &C3); CHECK(Equiv(C3, C2)); } { VLOG(1) << "Check destructive and delayed closure* are equivalent."; VectorFst<Arc> C1(T1); Closure(&C1, CLOSURE_STAR); ClosureFst<Arc> C2(T1, CLOSURE_STAR); CHECK(Equiv(C1, C2)); } { VLOG(1) << "Check destructive and delayed closure+ are equivalent."; VectorFst<Arc> C1(T1); Closure(&C1, CLOSURE_PLUS); ClosureFst<Arc> C2(T1, CLOSURE_PLUS); CHECK(Equiv(C1, C2)); } { VLOG(1) << "Check union is associative (destructive)."; VectorFst<Arc> U1(T1); Union(&U1, T2); Union(&U1, T3); VectorFst<Arc> U3(T2); Union(&U3, T3); VectorFst<Arc> U4(T1); Union(&U4, U3); CHECK(Equiv(U1, U4)); } { VLOG(1) << "Check union is associative (delayed)."; UnionFst<Arc> U1(T1, T2); UnionFst<Arc> U2(U1, T3); UnionFst<Arc> U3(T2, T3); UnionFst<Arc> U4(T1, U3); CHECK(Equiv(U2, U4)); } { VLOG(1) << "Check union is associative (destructive delayed)."; UnionFst<Arc> U1(T1, T2); Union(&U1, T3); UnionFst<Arc> U3(T2, T3); UnionFst<Arc> U4(T1, U3); CHECK(Equiv(U1, U4)); } { VLOG(1) << "Check concatenation is associative (destructive)."; VectorFst<Arc> C1(T1); Concat(&C1, T2); Concat(&C1, T3); VectorFst<Arc> C3(T2); Concat(&C3, T3); VectorFst<Arc> C4(T1); Concat(&C4, C3); CHECK(Equiv(C1, C4)); } { VLOG(1) << "Check concatenation is associative (delayed)."; ConcatFst<Arc> C1(T1, T2); ConcatFst<Arc> C2(C1, T3); ConcatFst<Arc> C3(T2, T3); ConcatFst<Arc> C4(T1, C3); CHECK(Equiv(C2, C4)); } { VLOG(1) << "Check concatenation is associative (destructive delayed)."; ConcatFst<Arc> C1(T1, T2); Concat(&C1, T3); ConcatFst<Arc> C3(T2, T3); ConcatFst<Arc> C4(T1, C3); CHECK(Equiv(C1, C4)); } if (Weight::Properties() & kLeftSemiring) { VLOG(1) << "Check concatenation left distributes" << " over union (destructive)."; VectorFst<Arc> U1(T1); Union(&U1, T2); VectorFst<Arc> C1(T3); Concat(&C1, U1); VectorFst<Arc> C2(T3); Concat(&C2, T1); VectorFst<Arc> C3(T3); Concat(&C3, T2); VectorFst<Arc> U2(C2); Union(&U2, C3); CHECK(Equiv(C1, U2)); } if (Weight::Properties() & kRightSemiring) { VLOG(1) << "Check concatenation right distributes" << " over union (destructive)."; VectorFst<Arc> U1(T1); Union(&U1, T2); VectorFst<Arc> C1(U1); Concat(&C1, T3); VectorFst<Arc> C2(T1); Concat(&C2, T3); VectorFst<Arc> C3(T2); Concat(&C3, T3); VectorFst<Arc> U2(C2); Union(&U2, C3); CHECK(Equiv(C1, U2)); } if (Weight::Properties() & kLeftSemiring) { VLOG(1) << "Check concatenation left distributes over union (delayed)."; UnionFst<Arc> U1(T1, T2); ConcatFst<Arc> C1(T3, U1); ConcatFst<Arc> C2(T3, T1); ConcatFst<Arc> C3(T3, T2); UnionFst<Arc> U2(C2, C3); CHECK(Equiv(C1, U2)); } if (Weight::Properties() & kRightSemiring) { VLOG(1) << "Check concatenation right distributes over union (delayed)."; UnionFst<Arc> U1(T1, T2); ConcatFst<Arc> C1(U1, T3); ConcatFst<Arc> C2(T1, T3); ConcatFst<Arc> C3(T2, T3); UnionFst<Arc> U2(C2, C3); CHECK(Equiv(C1, U2)); } if (Weight::Properties() & kLeftSemiring) { VLOG(1) << "Check T T* == T+ (destructive)."; VectorFst<Arc> S(T1); Closure(&S, CLOSURE_STAR); VectorFst<Arc> C(T1); Concat(&C, S); VectorFst<Arc> P(T1); Closure(&P, CLOSURE_PLUS); CHECK(Equiv(C, P)); } if (Weight::Properties() & kRightSemiring) { VLOG(1) << "Check T* T == T+ (destructive)."; VectorFst<Arc> S(T1); Closure(&S, CLOSURE_STAR); VectorFst<Arc> C(S); Concat(&C, T1); VectorFst<Arc> P(T1); Closure(&P, CLOSURE_PLUS); CHECK(Equiv(C, P)); } if (Weight::Properties() & kLeftSemiring) { VLOG(1) << "Check T T* == T+ (delayed)."; ClosureFst<Arc> S(T1, CLOSURE_STAR); ConcatFst<Arc> C(T1, S); ClosureFst<Arc> P(T1, CLOSURE_PLUS); CHECK(Equiv(C, P)); } if (Weight::Properties() & kRightSemiring) { VLOG(1) << "Check T* T == T+ (delayed)."; ClosureFst<Arc> S(T1, CLOSURE_STAR); ConcatFst<Arc> C(S, T1); ClosureFst<Arc> P(T1, CLOSURE_PLUS); CHECK(Equiv(C, P)); } } // Tests map-based operations. void TestMap(const Fst<Arc> &T) { { VLOG(1) << "Check destructive and delayed projection are equivalent."; VectorFst<Arc> P1(T); Project(&P1, PROJECT_INPUT); ProjectFst<Arc> P2(T, PROJECT_INPUT); CHECK(Equiv(P1, P2)); } { VLOG(1) << "Check destructive and delayed inversion are equivalent."; VectorFst<Arc> I1(T); Invert(&I1); InvertFst<Arc> I2(T); CHECK(Equiv(I1, I2)); } { VLOG(1) << "Check Pi_1(T) = Pi_2(T^-1) (destructive)."; VectorFst<Arc> P1(T); VectorFst<Arc> I1(T); Project(&P1, PROJECT_INPUT); Invert(&I1); Project(&I1, PROJECT_OUTPUT); CHECK(Equiv(P1, I1)); } { VLOG(1) << "Check Pi_2(T) = Pi_1(T^-1) (destructive)."; VectorFst<Arc> P1(T); VectorFst<Arc> I1(T); Project(&P1, PROJECT_OUTPUT); Invert(&I1); Project(&I1, PROJECT_INPUT); CHECK(Equiv(P1, I1)); } { VLOG(1) << "Check Pi_1(T) = Pi_2(T^-1) (delayed)."; ProjectFst<Arc> P1(T, PROJECT_INPUT); InvertFst<Arc> I1(T); ProjectFst<Arc> P2(I1, PROJECT_OUTPUT); CHECK(Equiv(P1, P2)); } { VLOG(1) << "Check Pi_2(T) = Pi_1(T^-1) (delayed)."; ProjectFst<Arc> P1(T, PROJECT_OUTPUT); InvertFst<Arc> I1(T); ProjectFst<Arc> P2(I1, PROJECT_INPUT); CHECK(Equiv(P1, P2)); } { VLOG(1) << "Check destructive relabeling"; static const int kNumLabels = 10; // set up relabeling pairs std::vector<Label> labelset(kNumLabels); for (size_t i = 0; i < kNumLabels; ++i) labelset[i] = i; for (size_t i = 0; i < kNumLabels; ++i) { using std::swap; swap(labelset[i], labelset[rand() % kNumLabels]); } std::vector<std::pair<Label, Label>> ipairs1(kNumLabels); std::vector<std::pair<Label, Label>> opairs1(kNumLabels); for (size_t i = 0; i < kNumLabels; ++i) { ipairs1[i] = std::make_pair(i, labelset[i]); opairs1[i] = std::make_pair(labelset[i], i); } VectorFst<Arc> R(T); Relabel(&R, ipairs1, opairs1); std::vector<std::pair<Label, Label>> ipairs2(kNumLabels); std::vector<std::pair<Label, Label>> opairs2(kNumLabels); for (size_t i = 0; i < kNumLabels; ++i) { ipairs2[i] = std::make_pair(labelset[i], i); opairs2[i] = std::make_pair(i, labelset[i]); } Relabel(&R, ipairs2, opairs2); CHECK(Equiv(R, T)); VLOG(1) << "Check on-the-fly relabeling"; RelabelFst<Arc> Rdelay(T, ipairs1, opairs1); RelabelFst<Arc> RRdelay(Rdelay, ipairs2, opairs2); CHECK(Equiv(RRdelay, T)); } { VLOG(1) << "Check encoding/decoding (destructive)."; VectorFst<Arc> D(T); uint32 encode_props = 0; if (rand() % 2) encode_props |= kEncodeLabels; if (rand() % 2) encode_props |= kEncodeWeights; EncodeMapper<Arc> encoder(encode_props, ENCODE); Encode(&D, &encoder); Decode(&D, encoder); CHECK(Equiv(D, T)); } { VLOG(1) << "Check encoding/decoding (delayed)."; uint32 encode_props = 0; if (rand() % 2) encode_props |= kEncodeLabels; if (rand() % 2) encode_props |= kEncodeWeights; EncodeMapper<Arc> encoder(encode_props, ENCODE); EncodeFst<Arc> E(T, &encoder); VectorFst<Arc> Encoded(E); DecodeFst<Arc> D(Encoded, encoder); CHECK(Equiv(D, T)); } { VLOG(1) << "Check gallic mappers (constructive)."; ToGallicMapper<Arc> to_mapper; FromGallicMapper<Arc> from_mapper; VectorFst<GallicArc<Arc>> G; VectorFst<Arc> F; ArcMap(T, &G, to_mapper); ArcMap(G, &F, from_mapper); CHECK(Equiv(T, F)); } { VLOG(1) << "Check gallic mappers (delayed)."; ToGallicMapper<Arc> to_mapper; FromGallicMapper<Arc> from_mapper; ArcMapFst<Arc, GallicArc<Arc>, ToGallicMapper<Arc>> G(T, to_mapper); ArcMapFst<GallicArc<Arc>, Arc, FromGallicMapper<Arc>> F(G, from_mapper); CHECK(Equiv(T, F)); } } // Tests compose-based operations. void TestCompose(const Fst<Arc> &T1, const Fst<Arc> &T2, const Fst<Arc> &T3) { if (!(Weight::Properties() & kCommutative)) return; VectorFst<Arc> S1(T1); VectorFst<Arc> S2(T2); VectorFst<Arc> S3(T3); ILabelCompare<Arc> icomp; OLabelCompare<Arc> ocomp; ArcSort(&S1, ocomp); ArcSort(&S2, ocomp); ArcSort(&S3, icomp); { VLOG(1) << "Check composition is associative."; ComposeFst<Arc> C1(S1, S2); ComposeFst<Arc> C2(C1, S3); ComposeFst<Arc> C3(S2, S3); ComposeFst<Arc> C4(S1, C3); CHECK(Equiv(C2, C4)); } { VLOG(1) << "Check composition left distributes over union."; UnionFst<Arc> U1(S2, S3); ComposeFst<Arc> C1(S1, U1); ComposeFst<Arc> C2(S1, S2); ComposeFst<Arc> C3(S1, S3); UnionFst<Arc> U2(C2, C3); CHECK(Equiv(C1, U2)); } { VLOG(1) << "Check composition right distributes over union."; UnionFst<Arc> U1(S1, S2); ComposeFst<Arc> C1(U1, S3); ComposeFst<Arc> C2(S1, S3); ComposeFst<Arc> C3(S2, S3); UnionFst<Arc> U2(C2, C3); CHECK(Equiv(C1, U2)); } VectorFst<Arc> A1(S1); VectorFst<Arc> A2(S2); VectorFst<Arc> A3(S3); Project(&A1, PROJECT_OUTPUT); Project(&A2, PROJECT_INPUT); Project(&A3, PROJECT_INPUT); { VLOG(1) << "Check intersection is commutative."; IntersectFst<Arc> I1(A1, A2); IntersectFst<Arc> I2(A2, A1); CHECK(Equiv(I1, I2)); } { VLOG(1) << "Check all epsilon filters leads to equivalent results."; typedef Matcher<Fst<Arc>> M; ComposeFst<Arc> C1(S1, S2); ComposeFst<Arc> C2( S1, S2, ComposeFstOptions<Arc, M, AltSequenceComposeFilter<M>>()); ComposeFst<Arc> C3(S1, S2, ComposeFstOptions<Arc, M, MatchComposeFilter<M>>()); CHECK(Equiv(C1, C2)); CHECK(Equiv(C1, C3)); if ((Weight::Properties() & kIdempotent) || S1.Properties(kNoOEpsilons, false) || S2.Properties(kNoIEpsilons, false)) { ComposeFst<Arc> C4( S1, S2, ComposeFstOptions<Arc, M, TrivialComposeFilter<M>>()); CHECK(Equiv(C1, C4)); } if (S1.Properties(kNoOEpsilons, false) && S2.Properties(kNoIEpsilons, false)) { ComposeFst<Arc> C5(S1, S2, ComposeFstOptions<Arc, M, NullComposeFilter<M>>()); CHECK(Equiv(C1, C5)); } } { VLOG(1) << "Check look-ahead filters lead to equivalent results."; VectorFst<Arc> C1, C2; Compose(S1, S2, &C1); LookAheadCompose(S1, S2, &C2); CHECK(Equiv(C1, C2)); } } // Tests sorting operations void TestSort(const Fst<Arc> &T) { ILabelCompare<Arc> icomp; OLabelCompare<Arc> ocomp; { VLOG(1) << "Check arc sorted Fst is equivalent to its input."; VectorFst<Arc> S1(T); ArcSort(&S1, icomp); CHECK(Equiv(T, S1)); } { VLOG(1) << "Check destructive and delayed arcsort are equivalent."; VectorFst<Arc> S1(T); ArcSort(&S1, icomp); ArcSortFst<Arc, ILabelCompare<Arc>> S2(T, icomp); CHECK(Equiv(S1, S2)); } { VLOG(1) << "Check ilabel sorting vs. olabel sorting with inversions."; VectorFst<Arc> S1(T); VectorFst<Arc> S2(T); ArcSort(&S1, icomp); Invert(&S2); ArcSort(&S2, ocomp); Invert(&S2); CHECK(Equiv(S1, S2)); } { VLOG(1) << "Check topologically sorted Fst is equivalent to its input."; VectorFst<Arc> S1(T); TopSort(&S1); CHECK(Equiv(T, S1)); } { VLOG(1) << "Check reverse(reverse(T)) = T"; for (int i = 0; i < 2; ++i) { VectorFst<ReverseArc<Arc>> R1; VectorFst<Arc> R2; bool require_superinitial = i == 1; Reverse(T, &R1, require_superinitial); Reverse(R1, &R2, require_superinitial); CHECK(Equiv(T, R2)); } } } // Tests optimization operations void TestOptimize(const Fst<Arc> &T) { uint64 tprops = T.Properties(kFstProperties, true); uint64 wprops = Weight::Properties(); VectorFst<Arc> A(T); Project(&A, PROJECT_INPUT); { VLOG(1) << "Check connected FST is equivalent to its input."; VectorFst<Arc> C1(T); Connect(&C1); CHECK(Equiv(T, C1)); } if ((wprops & kSemiring) == kSemiring && (tprops & kAcyclic || wprops & kIdempotent)) { VLOG(1) << "Check epsilon-removed FST is equivalent to its input."; VectorFst<Arc> R1(T); RmEpsilon(&R1); CHECK(Equiv(T, R1)); VLOG(1) << "Check destructive and delayed epsilon removal" << "are equivalent."; RmEpsilonFst<Arc> R2(T); CHECK(Equiv(R1, R2)); VLOG(1) << "Check an FST with a large proportion" << " of epsilon transitions:"; // Maps all transitions of T to epsilon-transitions and append // a non-epsilon transition. VectorFst<Arc> U; ArcMap(T, &U, EpsMapper<Arc>()); VectorFst<Arc> V; V.SetStart(V.AddState()); Arc arc(1, 1, Weight::One(), V.AddState()); V.AddArc(V.Start(), arc); V.SetFinal(arc.nextstate, Weight::One()); Concat(&U, V); // Check that epsilon-removal preserves the shortest-distance // from the initial state to the final states. std::vector<Weight> d; ShortestDistance(U, &d, true); Weight w = U.Start() < d.size() ? d[U.Start()] : Weight::Zero(); VectorFst<Arc> U1(U); RmEpsilon(&U1); ShortestDistance(U1, &d, true); Weight w1 = U1.Start() < d.size() ? d[U1.Start()] : Weight::Zero(); CHECK(ApproxEqual(w, w1, kTestDelta)); RmEpsilonFst<Arc> U2(U); ShortestDistance(U2, &d, true); Weight w2 = U2.Start() < d.size() ? d[U2.Start()] : Weight::Zero(); CHECK(ApproxEqual(w, w2, kTestDelta)); } if ((wprops & kSemiring) == kSemiring && tprops & kAcyclic) { VLOG(1) << "Check determinized FSA is equivalent to its input."; DeterminizeFst<Arc> D(A); CHECK(Equiv(A, D)); { VLOG(1) << "Check determinized FST is equivalent to its input."; DeterminizeFstOptions<Arc> opts; opts.type = DETERMINIZE_NONFUNCTIONAL; DeterminizeFst<Arc> DT(T, opts); CHECK(Equiv(T, DT)); } if ((wprops & (kPath | kCommutative)) == (kPath | kCommutative)) { VLOG(1) << "Check pruning in determinization"; VectorFst<Arc> P; Weight threshold = (*weight_generator_)(); DeterminizeOptions<Arc> opts; opts.weight_threshold = threshold; Determinize(A, &P, opts); CHECK(P.Properties(kIDeterministic, true)); CHECK(PruneEquiv(A, P, threshold)); } if ((wprops & kPath) == kPath) { VLOG(1) << "Check min-determinization"; // Ensures no input epsilons VectorFst<Arc> R(T); std::vector<std::pair<Label, Label>> ipairs, opairs; ipairs.push_back(std::pair<Label, Label>(0, 1)); Relabel(&R, ipairs, opairs); VectorFst<Arc> M; DeterminizeOptions<Arc> opts; opts.type = DETERMINIZE_DISAMBIGUATE; Determinize(R, &M, opts); CHECK(M.Properties(kIDeterministic, true)); CHECK(MinRelated(M, R)); } int n; { VLOG(1) << "Check size(min(det(A))) <= size(det(A))" << " and min(det(A)) equiv det(A)"; VectorFst<Arc> M(D); n = M.NumStates(); Minimize(&M, static_cast<MutableFst<Arc> *>(nullptr), kDelta); CHECK(Equiv(D, M)); CHECK(M.NumStates() <= n); n = M.NumStates(); } if (n && (wprops & kIdempotent) == kIdempotent && A.Properties(kNoEpsilons, true)) { VLOG(1) << "Check that Revuz's algorithm leads to the" << " same number of states as Brozozowski's algorithm"; // Skip test if A is the empty machine or contains epsilons or // if the semiring is not idempotent (to avoid floating point // errors) VectorFst<Arc> R; Reverse(A, &R); RmEpsilon(&R); DeterminizeFst<Arc> DR(R); VectorFst<Arc> RD; Reverse(DR, &RD); DeterminizeFst<Arc> DRD(RD); VectorFst<Arc> M(DRD); CHECK_EQ(n + 1, M.NumStates()); // Accounts for the epsilon transition // to the initial state } } if ((wprops & kSemiring) == kSemiring && tprops & kAcyclic) { VLOG(1) << "Check disambiguated FSA is equivalent to its input."; VectorFst<Arc> R(A), D; RmEpsilon(&R); Disambiguate(R, &D); CHECK(Equiv(R, D)); VLOG(1) << "Check disambiguated FSA is unambiguous"; CHECK(Unambiguous(D)); /* TODO(riley): find out why this fails if ((wprops & (kPath | kCommutative)) == (kPath | kCommutative)) { VLOG(1) << "Check pruning in disambiguation"; VectorFst<Arc> P; Weight threshold = (*weight_generator_)(); DisambiguateOptions<Arc> opts; opts.weight_threshold = threshold; Disambiguate(R, &P, opts); CHECK(Unambiguous(P)); CHECK(PruneEquiv(A, P, threshold)); } */ } if (Arc::Type() == LogArc::Type() || Arc::Type() == StdArc::Type()) { VLOG(1) << "Check reweight(T) equiv T"; std::vector<Weight> potential; VectorFst<Arc> RI(T); VectorFst<Arc> RF(T); while (potential.size() < RI.NumStates()) potential.push_back((*weight_generator_)()); Reweight(&RI, potential, REWEIGHT_TO_INITIAL); CHECK(Equiv(T, RI)); Reweight(&RF, potential, REWEIGHT_TO_FINAL); CHECK(Equiv(T, RF)); } if ((wprops & kIdempotent) || (tprops & kAcyclic)) { VLOG(1) << "Check pushed FST is equivalent to input FST."; // Pushing towards the final state. if (wprops & kRightSemiring) { VectorFst<Arc> P1; Push<Arc, REWEIGHT_TO_FINAL>(T, &P1, kPushLabels); CHECK(Equiv(T, P1)); VectorFst<Arc> P2; Push<Arc, REWEIGHT_TO_FINAL>(T, &P2, kPushWeights); CHECK(Equiv(T, P2)); VectorFst<Arc> P3; Push<Arc, REWEIGHT_TO_FINAL>(T, &P3, kPushLabels | kPushWeights); CHECK(Equiv(T, P3)); } // Pushing towards the initial state. if (wprops & kLeftSemiring) { VectorFst<Arc> P1; Push<Arc, REWEIGHT_TO_INITIAL>(T, &P1, kPushLabels); CHECK(Equiv(T, P1)); VectorFst<Arc> P2; Push<Arc, REWEIGHT_TO_INITIAL>(T, &P2, kPushWeights); CHECK(Equiv(T, P2)); VectorFst<Arc> P3; Push<Arc, REWEIGHT_TO_INITIAL>(T, &P3, kPushLabels | kPushWeights); CHECK(Equiv(T, P3)); } } if ((wprops & (kPath | kCommutative)) == (kPath | kCommutative)) { VLOG(1) << "Check pruning algorithm"; { VLOG(1) << "Check equiv. of constructive and destructive algorithms"; Weight thresold = (*weight_generator_)(); VectorFst<Arc> P1(T); Prune(&P1, thresold); VectorFst<Arc> P2; Prune(T, &P2, thresold); CHECK(Equiv(P1, P2)); } { VLOG(1) << "Check prune(reverse) equiv reverse(prune)"; Weight thresold = (*weight_generator_)(); VectorFst<ReverseArc<Arc>> R; VectorFst<Arc> P1(T); VectorFst<Arc> P2; Prune(&P1, thresold); Reverse(T, &R); Prune(&R, thresold.Reverse()); Reverse(R, &P2); CHECK(Equiv(P1, P2)); } { VLOG(1) << "Check: ShortestDistance(A - prune(A))" << " > ShortestDistance(A) times Threshold"; Weight threshold = (*weight_generator_)(); VectorFst<Arc> P; Prune(A, &P, threshold); CHECK(PruneEquiv(A, P, threshold)); } } if (tprops & kAcyclic) { VLOG(1) << "Check synchronize(T) equiv T"; SynchronizeFst<Arc> S(T); CHECK(Equiv(T, S)); } } // Tests search operations void TestSearch(const Fst<Arc> &T) { uint64 wprops = Weight::Properties(); VectorFst<Arc> A(T); Project(&A, PROJECT_INPUT); if ((wprops & (kPath | kRightSemiring)) == (kPath | kRightSemiring)) { VLOG(1) << "Check 1-best weight."; VectorFst<Arc> path; ShortestPath(T, &path); Weight tsum = ShortestDistance(T); Weight psum = ShortestDistance(path); CHECK(ApproxEqual(tsum, psum, kTestDelta)); } if ((wprops & (kPath | kSemiring)) == (kPath | kSemiring)) { VLOG(1) << "Check n-best weights"; VectorFst<Arc> R(A); RmEpsilon(&R, /*connect=*/ true, Arc::Weight::Zero(), kNoStateId, kDelta); int nshortest = rand() % kNumRandomShortestPaths + 2; VectorFst<Arc> paths; ShortestPath(R, &paths, nshortest, /*unique=*/ true, /*first_path=*/ false, Weight::Zero(), kNumShortestStates, kDelta); std::vector<Weight> distance; ShortestDistance(paths, &distance, true, kDelta); StateId pstart = paths.Start(); if (pstart != kNoStateId) { ArcIterator<Fst<Arc>> piter(paths, pstart); for (; !piter.Done(); piter.Next()) { StateId s = piter.Value().nextstate; Weight nsum = s < distance.size() ? Times(piter.Value().weight, distance[s]) : Weight::Zero(); VectorFst<Arc> path; ShortestPath(R, &path, 1, false, false, Weight::Zero(), kNoStateId, kDelta); Weight dsum = ShortestDistance(path, kDelta); CHECK(ApproxEqual(nsum, dsum, kTestDelta)); ArcMap(&path, RmWeightMapper<Arc>()); VectorFst<Arc> S; Difference(R, path, &S); R = S; } } } } // Tests if two FSTS are equivalent by checking if random // strings from one FST are transduced the same by both FSTs. template <class A> bool Equiv(const Fst<A> &fst1, const Fst<A> &fst2) { VLOG(1) << "Check FSTs for sanity (including property bits)."; CHECK(Verify(fst1)); CHECK(Verify(fst2)); // Ensures seed used once per instantiation. static UniformArcSelector<A> uniform_selector(seed_); RandGenOptions<UniformArcSelector<A>> opts(uniform_selector, kRandomPathLength); return RandEquivalent(fst1, fst2, kNumRandomPaths, kTestDelta, opts); } // Tests FSA is unambiguous bool Unambiguous(const Fst<Arc> &fst) { VectorFst<StdArc> sfst, dfst; VectorFst<LogArc> lfst1, lfst2; Map(fst, &sfst, RmWeightMapper<Arc, StdArc>()); Determinize(sfst, &dfst); Map(fst, &lfst1, RmWeightMapper<Arc, LogArc>()); Map(dfst, &lfst2, RmWeightMapper<StdArc, LogArc>()); return Equiv(lfst1, lfst2); } // Ensures input-epsilon free transducers fst1 and fst2 have the // same domain and that for each string pair '(is, os)' in fst1, // '(is, os)' is the minimum weight match to 'is' in fst2. template <class A> bool MinRelated(const Fst<A> &fst1, const Fst<A> &fst2) { // Same domain VectorFst<Arc> P1(fst1), P2(fst2); Project(&P1, PROJECT_INPUT); Project(&P2, PROJECT_INPUT); if (!Equiv(P1, P2)) { LOG(ERROR) << "Inputs not equivalent"; return false; } // Ensures seed used once per instantiation. static UniformArcSelector<A> uniform_selector(seed_); RandGenOptions<UniformArcSelector<A>> opts(uniform_selector, kRandomPathLength); VectorFst<Arc> path, paths1, paths2; for (ssize_t n = 0; n < kNumRandomPaths; ++n) { RandGen(fst1, &path, opts); Invert(&path); Map(&path, RmWeightMapper<Arc>()); Compose(path, fst2, &paths1); Weight sum1 = ShortestDistance(paths1); Compose(paths1, path, &paths2); Weight sum2 = ShortestDistance(paths2); if (!ApproxEqual(Plus(sum1, sum2), sum2, kTestDelta)) { LOG(ERROR) << "Sums not equivalent: " << sum1 << " " << sum2; return false; } } return true; } // Tests ShortestDistance(A - P) >= // ShortestDistance(A) times Threshold. template <class A> bool PruneEquiv(const Fst<A> &fst, const Fst<A> &pfst, Weight threshold) { VLOG(1) << "Check FSTs for sanity (including property bits)."; CHECK(Verify(fst)); CHECK(Verify(pfst)); DifferenceFst<Arc> D(fst, DeterminizeFst<Arc>(RmEpsilonFst<Arc>( ArcMapFst<Arc, Arc, RmWeightMapper<Arc>>( pfst, RmWeightMapper<Arc>())))); Weight sum1 = Times(ShortestDistance(fst), threshold); Weight sum2 = ShortestDistance(D); return ApproxEqual(Plus(sum1, sum2), sum1, kTestDelta); } // Random seed. int seed_; // FST with no states VectorFst<Arc> zero_fst_; // FST with one state that accepts epsilon. VectorFst<Arc> one_fst_; // FST with one state that accepts all strings. VectorFst<Arc> univ_fst_; // Generates weights used in testing. WeightGenerator *weight_generator_; // Maximum random path length. static const int kRandomPathLength; // Number of random paths to explore. static const int kNumRandomPaths; // Maximum number of nshortest paths. static const int kNumRandomShortestPaths; // Maximum number of nshortest states. static const int kNumShortestStates; // Delta for equivalence tests. static const float kTestDelta; WeightedTester(const WeightedTester &) = delete; WeightedTester &operator=(const WeightedTester &) = delete; }; template <class A, class WG> const int WeightedTester<A, WG>::kRandomPathLength = 25; template <class A, class WG> const int WeightedTester<A, WG>::kNumRandomPaths = 100; template <class A, class WG> const int WeightedTester<A, WG>::kNumRandomShortestPaths = 100; template <class A, class WG> const int WeightedTester<A, WG>::kNumShortestStates = 10000; template <class A, class WG> const float WeightedTester<A, WG>::kTestDelta = .05; // This class tests a variety of identities and properties that must // hold for various algorithms on unweighted FSAs and that are not tested // by WeightedTester. Only the specialization does anything interesting. template <class Arc> class UnweightedTester { public: UnweightedTester(const Fst<Arc> &zero_fsa, const Fst<Arc> &one_fsa, const Fst<Arc> &univ_fsa) {} void Test(const Fst<Arc> &A1, const Fst<Arc> &A2, const Fst<Arc> &A3) {} }; // Specialization for StdArc. This should work for any commutative, // idempotent semiring when restricted to the unweighted case // (being isomorphic to the boolean semiring). template <> class UnweightedTester<StdArc> { public: typedef StdArc Arc; typedef Arc::Label Label; typedef Arc::StateId StateId; typedef Arc::Weight Weight; UnweightedTester(const Fst<Arc> &zero_fsa, const Fst<Arc> &one_fsa, const Fst<Arc> &univ_fsa) : zero_fsa_(zero_fsa), one_fsa_(one_fsa), univ_fsa_(univ_fsa) {} void Test(const Fst<Arc> &A1, const Fst<Arc> &A2, const Fst<Arc> &A3) { TestRational(A1, A2, A3); TestIntersect(A1, A2, A3); TestOptimize(A1); } private: // Tests rational operations with identities void TestRational(const Fst<Arc> &A1, const Fst<Arc> &A2, const Fst<Arc> &A3) { { VLOG(1) << "Check the union contains its arguments (destructive)."; VectorFst<Arc> U(A1); Union(&U, A2); CHECK(Subset(A1, U)); CHECK(Subset(A2, U)); } { VLOG(1) << "Check the union contains its arguments (delayed)."; UnionFst<Arc> U(A1, A2); CHECK(Subset(A1, U)); CHECK(Subset(A2, U)); } { VLOG(1) << "Check if A^n c A* (destructive)."; VectorFst<Arc> C(one_fsa_); int n = rand() % 5; for (int i = 0; i < n; ++i) Concat(&C, A1); VectorFst<Arc> S(A1); Closure(&S, CLOSURE_STAR); CHECK(Subset(C, S)); } { VLOG(1) << "Check if A^n c A* (delayed)."; int n = rand() % 5; Fst<Arc> *C = new VectorFst<Arc>(one_fsa_); for (int i = 0; i < n; ++i) { ConcatFst<Arc> *F = new ConcatFst<Arc>(*C, A1); delete C; C = F; } ClosureFst<Arc> S(A1, CLOSURE_STAR); CHECK(Subset(*C, S)); delete C; } } // Tests intersect-based operations. void TestIntersect(const Fst<Arc> &A1, const Fst<Arc> &A2, const Fst<Arc> &A3) { VectorFst<Arc> S1(A1); VectorFst<Arc> S2(A2); VectorFst<Arc> S3(A3); ILabelCompare<Arc> comp; ArcSort(&S1, comp); ArcSort(&S2, comp); ArcSort(&S3, comp); { VLOG(1) << "Check the intersection is contained in its arguments."; IntersectFst<Arc> I1(S1, S2); CHECK(Subset(I1, S1)); CHECK(Subset(I1, S2)); } { VLOG(1) << "Check union distributes over intersection."; IntersectFst<Arc> I1(S1, S2); UnionFst<Arc> U1(I1, S3); UnionFst<Arc> U2(S1, S3); UnionFst<Arc> U3(S2, S3); ArcSortFst<Arc, ILabelCompare<Arc>> S4(U3, comp); IntersectFst<Arc> I2(U2, S4); CHECK(Equiv(U1, I2)); } VectorFst<Arc> C1; VectorFst<Arc> C2; Complement(S1, &C1); Complement(S2, &C2); ArcSort(&C1, comp); ArcSort(&C2, comp); { VLOG(1) << "Check S U S' = Sigma*"; UnionFst<Arc> U(S1, C1); CHECK(Equiv(U, univ_fsa_)); } { VLOG(1) << "Check S n S' = {}"; IntersectFst<Arc> I(S1, C1); CHECK(Equiv(I, zero_fsa_)); } { VLOG(1) << "Check (S1' U S2') == (S1 n S2)'"; UnionFst<Arc> U(C1, C2); IntersectFst<Arc> I(S1, S2); VectorFst<Arc> C3; Complement(I, &C3); CHECK(Equiv(U, C3)); } { VLOG(1) << "Check (S1' n S2') == (S1 U S2)'"; IntersectFst<Arc> I(C1, C2); UnionFst<Arc> U(S1, S2); VectorFst<Arc> C3; Complement(U, &C3); CHECK(Equiv(I, C3)); } } // Tests optimization operations void TestOptimize(const Fst<Arc> &A) { { VLOG(1) << "Check determinized FSA is equivalent to its input."; DeterminizeFst<Arc> D(A); CHECK(Equiv(A, D)); } { VLOG(1) << "Check disambiguated FSA is equivalent to its input."; VectorFst<Arc> R(A), D; RmEpsilon(&R); Disambiguate(R, &D); CHECK(Equiv(R, D)); } { VLOG(1) << "Check minimized FSA is equivalent to its input."; int n; { RmEpsilonFst<Arc> R(A); DeterminizeFst<Arc> D(R); VectorFst<Arc> M(D); Minimize(&M, static_cast<MutableFst<Arc> *>(nullptr), kDelta); CHECK(Equiv(A, M)); n = M.NumStates(); } if (n) { // Skip test if A is the empty machine VLOG(1) << "Check that Hopcroft's and Revuz's algorithms lead to the" << " same number of states as Brozozowski's algorithm"; VectorFst<Arc> R; Reverse(A, &R); RmEpsilon(&R); DeterminizeFst<Arc> DR(R); VectorFst<Arc> RD; Reverse(DR, &RD); DeterminizeFst<Arc> DRD(RD); VectorFst<Arc> M(DRD); CHECK_EQ(n + 1, M.NumStates()); // Accounts for the epsilon transition // to the initial state } } } // Tests if two FSAS are equivalent. bool Equiv(const Fst<Arc> &fsa1, const Fst<Arc> &fsa2) { VLOG(1) << "Check FSAs for sanity (including property bits)."; CHECK(Verify(fsa1)); CHECK(Verify(fsa2)); VectorFst<Arc> vfsa1(fsa1); VectorFst<Arc> vfsa2(fsa2); RmEpsilon(&vfsa1); RmEpsilon(&vfsa2); DeterminizeFst<Arc> dfa1(vfsa1); DeterminizeFst<Arc> dfa2(vfsa2); // Test equivalence using union-find algorithm bool equiv1 = Equivalent(dfa1, dfa2); // Test equivalence by checking if (S1 - S2) U (S2 - S1) is empty ILabelCompare<Arc> comp; VectorFst<Arc> sdfa1(dfa1); ArcSort(&sdfa1, comp); VectorFst<Arc> sdfa2(dfa2); ArcSort(&sdfa2, comp); DifferenceFst<Arc> dfsa1(sdfa1, sdfa2); DifferenceFst<Arc> dfsa2(sdfa2, sdfa1); VectorFst<Arc> ufsa(dfsa1); Union(&ufsa, dfsa2); Connect(&ufsa); bool equiv2 = ufsa.NumStates() == 0; // Check two equivalence tests match CHECK((equiv1 && equiv2) || (!equiv1 && !equiv2)); return equiv1; } // Tests if FSA1 is a subset of FSA2 (disregarding weights). bool Subset(const Fst<Arc> &fsa1, const Fst<Arc> &fsa2) { VLOG(1) << "Check FSAs (incl. property bits) for sanity"; CHECK(Verify(fsa1)); CHECK(Verify(fsa2)); VectorFst<StdArc> vfsa1; VectorFst<StdArc> vfsa2; RmEpsilon(&vfsa1); RmEpsilon(&vfsa2); ILabelCompare<StdArc> comp; ArcSort(&vfsa1, comp); ArcSort(&vfsa2, comp); IntersectFst<StdArc> ifsa(vfsa1, vfsa2); DeterminizeFst<StdArc> dfa1(vfsa1); DeterminizeFst<StdArc> dfa2(ifsa); return Equivalent(dfa1, dfa2); } // Returns complement Fsa void Complement(const Fst<Arc> &ifsa, MutableFst<Arc> *ofsa) { RmEpsilonFst<Arc> rfsa(ifsa); DeterminizeFst<Arc> dfa(rfsa); DifferenceFst<Arc> cfsa(univ_fsa_, dfa); *ofsa = cfsa; } // FSA with no states VectorFst<Arc> zero_fsa_; // FSA with one state that accepts epsilon. VectorFst<Arc> one_fsa_; // FSA with one state that accepts all strings. VectorFst<Arc> univ_fsa_; }; // This class tests a variety of identities and properties that must // hold for various FST algorithms. It randomly generates FSTs, using // function object 'weight_generator' to select weights. 'WeightTester' // and 'UnweightedTester' are then called. template <class Arc, class WeightGenerator> class AlgoTester { public: typedef typename Arc::Label Label; typedef typename Arc::StateId StateId; typedef typename Arc::Weight Weight; AlgoTester(WeightGenerator generator, int seed) : weight_generator_(generator) { one_fst_.AddState(); one_fst_.SetStart(0); one_fst_.SetFinal(0, Weight::One()); univ_fst_.AddState(); univ_fst_.SetStart(0); univ_fst_.SetFinal(0, Weight::One()); for (int i = 0; i < kNumRandomLabels; ++i) univ_fst_.AddArc(0, Arc(i, i, Weight::One(), 0)); weighted_tester_ = new WeightedTester<Arc, WeightGenerator>( seed, zero_fst_, one_fst_, univ_fst_, &weight_generator_); unweighted_tester_ = new UnweightedTester<Arc>(zero_fst_, one_fst_, univ_fst_); } ~AlgoTester() { delete weighted_tester_; delete unweighted_tester_; } void MakeRandFst(MutableFst<Arc> *fst) { RandFst<Arc, WeightGenerator>(kNumRandomStates, kNumRandomArcs, kNumRandomLabels, kAcyclicProb, &weight_generator_, fst); } void Test() { VLOG(1) << "weight type = " << Weight::Type(); for (int i = 0; i < FLAGS_repeat; ++i) { // Random transducers VectorFst<Arc> T1; VectorFst<Arc> T2; VectorFst<Arc> T3; MakeRandFst(&T1); MakeRandFst(&T2); MakeRandFst(&T3); weighted_tester_->Test(T1, T2, T3); VectorFst<Arc> A1(T1); VectorFst<Arc> A2(T2); VectorFst<Arc> A3(T3); Project(&A1, PROJECT_OUTPUT); Project(&A2, PROJECT_INPUT); Project(&A3, PROJECT_INPUT); ArcMap(&A1, rm_weight_mapper_); ArcMap(&A2, rm_weight_mapper_); ArcMap(&A3, rm_weight_mapper_); unweighted_tester_->Test(A1, A2, A3); } } private: // Generates weights used in testing. WeightGenerator weight_generator_; // FST with no states VectorFst<Arc> zero_fst_; // FST with one state that accepts epsilon. VectorFst<Arc> one_fst_; // FST with one state that accepts all strings. VectorFst<Arc> univ_fst_; // Tests weighted FSTs WeightedTester<Arc, WeightGenerator> *weighted_tester_; // Tests unweighted FSTs UnweightedTester<Arc> *unweighted_tester_; // Mapper to remove weights from an Fst RmWeightMapper<Arc> rm_weight_mapper_; // Maximum number of states in random test Fst. static const int kNumRandomStates; // Maximum number of arcs in random test Fst. static const int kNumRandomArcs; // Number of alternative random labels. static const int kNumRandomLabels; // Probability to force an acyclic Fst static const float kAcyclicProb; // Maximum random path length. static const int kRandomPathLength; // Number of random paths to explore. static const int kNumRandomPaths; AlgoTester(const AlgoTester &) = delete; AlgoTester &operator=(const AlgoTester &) = delete; }; template <class A, class G> const int AlgoTester<A, G>::kNumRandomStates = 10; template <class A, class G> const int AlgoTester<A, G>::kNumRandomArcs = 25; template <class A, class G> const int AlgoTester<A, G>::kNumRandomLabels = 5; template <class A, class G> const float AlgoTester<A, G>::kAcyclicProb = .25; template <class A, class G> const int AlgoTester<A, G>::kRandomPathLength = 25; template <class A, class G> const int AlgoTester<A, G>::kNumRandomPaths = 100; } // namespace fst #endif // FST_TEST_ALGO_TEST_H_
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/script/getters.cc
#include <fst/script/getters.h> namespace fst { namespace script { bool GetArcSortType(const string &str, ArcSortType *sort_type) { if (str == "ilabel") { *sort_type = ILABEL_SORT; } else if (str == "olabel") { *sort_type = OLABEL_SORT; } else { return false; } return true; } bool GetComposeFilter(const string &str, ComposeFilter *compose_filter) { if (str == "alt_sequence") { *compose_filter = ALT_SEQUENCE_FILTER; } else if (str == "auto") { *compose_filter = AUTO_FILTER; } else if (str == "match") { *compose_filter = MATCH_FILTER; } else if (str == "null") { *compose_filter = NULL_FILTER; } else if (str == "sequence") { *compose_filter = SEQUENCE_FILTER; } else if (str == "trivial") { *compose_filter = TRIVIAL_FILTER; } else { return false; } return true; } bool GetDeterminizeType(const string &str, DeterminizeType *det_type) { if (str == "functional") { *det_type = DETERMINIZE_FUNCTIONAL; } else if (str == "nonfunctional") { *det_type = DETERMINIZE_NONFUNCTIONAL; } else if (str == "disambiguate") { *det_type = DETERMINIZE_DISAMBIGUATE; } else { return false; } return true; } bool GetMapType(const string &str, MapType *map_type) { if (str == "arc_sum") { *map_type = ARC_SUM_MAPPER; } else if (str == "arc_unique") { *map_type = ARC_UNIQUE_MAPPER; } else if (str == "identity") { *map_type = IDENTITY_MAPPER; } else if (str == "input_epsilon") { *map_type = INPUT_EPSILON_MAPPER; } else if (str == "invert") { *map_type = INVERT_MAPPER; } else if (str == "output_epsilon") { *map_type = OUTPUT_EPSILON_MAPPER; } else if (str == "plus") { *map_type = PLUS_MAPPER; } else if (str == "power") { *map_type = POWER_MAPPER; } else if (str == "quantize") { *map_type = QUANTIZE_MAPPER; } else if (str == "rmweight") { *map_type = RMWEIGHT_MAPPER; } else if (str == "superfinal") { *map_type = SUPERFINAL_MAPPER; } else if (str == "times") { *map_type = TIMES_MAPPER; } else if (str == "to_log") { *map_type = TO_LOG_MAPPER; } else if (str == "to_log64") { *map_type = TO_LOG64_MAPPER; } else if (str == "to_std" || str == "to_standard") { *map_type = TO_STD_MAPPER; } else { return false; } return true; } bool GetRandArcSelection(const string &str, RandArcSelection *ras) { if (str == "uniform") { *ras = UNIFORM_ARC_SELECTOR; } else if (str == "log_prob") { *ras = LOG_PROB_ARC_SELECTOR; } else if (str == "fast_log_prob") { *ras = FAST_LOG_PROB_ARC_SELECTOR; } else { return false; } return true; } bool GetQueueType(const string &str, QueueType *queue_type) { if (str == "auto") { *queue_type = AUTO_QUEUE; } else if (str == "fifo") { *queue_type = FIFO_QUEUE; } else if (str == "lifo") { *queue_type = LIFO_QUEUE; } else if (str == "shortest") { *queue_type = SHORTEST_FIRST_QUEUE; } else if (str == "state") { *queue_type = STATE_ORDER_QUEUE; } else if (str == "top") { *queue_type = TOP_ORDER_QUEUE; } else { return false; } return true; } bool GetReplaceLabelType(const string &str, bool epsilon_on_replace, ReplaceLabelType *rlt) { if (epsilon_on_replace || str == "neither") { *rlt = REPLACE_LABEL_NEITHER; } else if (str == "input") { *rlt = REPLACE_LABEL_INPUT; } else if (str == "output") { *rlt = REPLACE_LABEL_OUTPUT; } else if (str == "both") { *rlt = REPLACE_LABEL_BOTH; } else { return false; } return true; } } // namespace script } // namespace fst
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/bin/fstinfo-main.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Prints out various information about an FST such as number of states // and arcs and property values (see properties.h). #include <cstring> #include <memory> #include <string> #include <fst/flags.h> #include <fst/script/info.h> DECLARE_string(arc_filter); DECLARE_string(info_type); DECLARE_bool(pipe); DECLARE_bool(test_properties); DECLARE_bool(fst_verify); int fstinfo_main(int argc, char **argv) { namespace s = fst::script; using fst::script::FstClass; string usage = "Prints out information about an FST.\n\n Usage: "; usage += argv[0]; usage += " [in.fst]\n"; std::set_new_handler(FailedNewHandler); SET_FLAGS(usage.c_str(), &argc, &argv, true); if (argc > 2) { ShowUsage(); return 1; } const string in_name = (argc > 1 && (strcmp(argv[1], "-") != 0)) ? argv[1] : ""; std::unique_ptr<FstClass> ifst(FstClass::Read(in_name)); if (!ifst) return 1; s::PrintFstInfo(*ifst, FLAGS_test_properties, FLAGS_arc_filter, FLAGS_info_type, FLAGS_fst_verify, FLAGS_pipe); return 0; }
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include/fst/union.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Functions and classes to compute the union of two FSTs. #ifndef FST_UNION_H_ #define FST_UNION_H_ #include <algorithm> #include <vector> #include <fst/mutable-fst.h> #include <fst/rational.h> namespace fst { // Computes the union (sum) of two FSTs. This version writes the union to an // output MutableFst. If A transduces string x to y with weight a and B // transduces string w to v with weight b, then their union transduces x to y // with weight a and w to v with weight b. // // Complexity: // // Time: (V_2 + E_2) // Space: O(V_2 + E_2) // // where Vi is the number of states, and Ei is the number of arcs, in the ith // FST. template <class Arc> void Union(MutableFst<Arc> *fst1, const Fst<Arc> &fst2) { using Label = typename Arc::Label; using StateId = typename Arc::StateId; using Weight = typename Arc::Weight; // Checks for symbol table compatibility. if (!CompatSymbols(fst1->InputSymbols(), fst2.InputSymbols()) || !CompatSymbols(fst1->OutputSymbols(), fst2.OutputSymbols())) { FSTERROR() << "Union: Input/output symbol tables of 1st argument " << "do not match input/output symbol tables of 2nd argument"; fst1->SetProperties(kError, kError); return; } const auto numstates1 = fst1->NumStates(); const bool initial_acyclic1 = fst1->Properties(kInitialAcyclic, true); const auto props1 = fst1->Properties(kFstProperties, false); const auto props2 = fst2.Properties(kFstProperties, false); const auto start2 = fst2.Start(); if (start2 == kNoStateId) { if (props2 & kError) fst1->SetProperties(kError, kError); return; } if (fst2.Properties(kExpanded, false)) { fst1->ReserveStates(numstates1 + CountStates(fst2) + (initial_acyclic1 ? 0 : 1)); } for (StateIterator<Fst<Arc>> siter(fst2); !siter.Done(); siter.Next()) { const auto s1 = fst1->AddState(); const auto s2 = siter.Value(); fst1->SetFinal(s1, fst2.Final(s2)); fst1->ReserveArcs(s1, fst2.NumArcs(s2)); for (ArcIterator<Fst<Arc>> aiter(fst2, s2); !aiter.Done(); aiter.Next()) { auto arc = aiter.Value(); // Copy intended. arc.nextstate += numstates1; fst1->AddArc(s1, arc); } } const auto start1 = fst1->Start(); if (start1 == kNoStateId) { fst1->SetStart(start2); fst1->SetProperties(props2, kCopyProperties); return; } if (initial_acyclic1) { fst1->AddArc(start1, Arc(0, 0, Weight::One(), start2 + numstates1)); } else { const auto nstart1 = fst1->AddState(); fst1->SetStart(nstart1); fst1->AddArc(nstart1, Arc(0, 0, Weight::One(), start1)); fst1->AddArc(nstart1, Arc(0, 0, Weight::One(), start2 + numstates1)); } fst1->SetProperties(UnionProperties(props1, props2), kFstProperties); } // Computes the union of two FSTs, modifying the RationalFst argument. template <class Arc> void Union(RationalFst<Arc> *fst1, const Fst<Arc> &fst2) { fst1->GetMutableImpl()->AddUnion(fst2); } using UnionFstOptions = RationalFstOptions; // Computes the union (sum) of two FSTs. This version is a delayed FST. If A // transduces string x to y with weight a and B transduces string w to v with // weight b, then their union transduces x to y with weight a and w to v with // weight b. // // Complexity: // // Time: O(v_1 + e_1 + v_2 + e_2) // Space: O(v_1 + v_2) // // where vi is the number of states visited, and ei is the number of arcs // visited, in the ith FST. Constant time and space to visit an input state or // arc is assumed and exclusive of caching. template <class A> class UnionFst : public RationalFst<A> { public: using Arc = A; using StateId = typename Arc::StateId; using Weight = typename Arc::Weight; UnionFst(const Fst<Arc> &fst1, const Fst<Arc> &fst2) { GetMutableImpl()->InitUnion(fst1, fst2); } UnionFst(const Fst<Arc> &fst1, const Fst<Arc> &fst2, const UnionFstOptions &opts) : RationalFst<Arc>(opts) { GetMutableImpl()->InitUnion(fst1, fst2); } // See Fst<>::Copy() for doc. UnionFst(const UnionFst<Arc> &fst, bool safe = false) : RationalFst<Arc>(fst, safe) {} // Gets a copy of this UnionFst. See Fst<>::Copy() for further doc. UnionFst<Arc> *Copy(bool safe = false) const override { return new UnionFst<Arc>(*this, safe); } private: using ImplToFst<internal::RationalFstImpl<Arc>>::GetImpl; using ImplToFst<internal::RationalFstImpl<Arc>>::GetMutableImpl; }; // Specialization for UnionFst. template <class Arc> class StateIterator<UnionFst<Arc>> : public StateIterator<RationalFst<Arc>> { public: explicit StateIterator(const UnionFst<Arc> &fst) : StateIterator<RationalFst<Arc>>(fst) {} }; // Specialization for UnionFst. template <class Arc> class ArcIterator<UnionFst<Arc>> : public ArcIterator<RationalFst<Arc>> { public: using StateId = typename Arc::StateId; ArcIterator(const UnionFst<Arc> &fst, StateId s) : ArcIterator<RationalFst<Arc>>(fst, s) {} }; using StdUnionFst = UnionFst<StdArc>; } // namespace fst #endif // FST_UNION_H_
0
coqui_public_repos/snakepit/scripts/worker
coqui_public_repos/snakepit/scripts/worker/forwarder/package.json
{ "name": "forwarder", "version": "0.0.1", "description": "Snakepit socket forwarding helper", "main": "forwarder.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "author": "Tilman Kamp", "license": "MPL-2.0", "dependencies": { "multiplex": "^6.7.0" } }
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst/script/register.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #ifndef FST_SCRIPT_REGISTER_H_ #define FST_SCRIPT_REGISTER_H_ #include <istream> #include <string> #include <fst/generic-register.h> #include <fst/script/fst-class.h> #include <fst/script/weight-class.h> // Holds methods and classes responsible for maintaining // the register for FstClass arc types. namespace fst { namespace script { // Registers for reading and converting various kinds of FST classes. // This class definition is to avoid a nested class definition inside the // IORegistration struct. template <class Reader, class Creator, class Converter> struct FstClassRegEntry { Reader reader; Creator creator; Converter converter; FstClassRegEntry(Reader r, Creator cr, Converter co) : reader(r), creator(cr), converter(co) {} FstClassRegEntry() : reader(NullReader), creator(NullCreator), converter(NullConverter) {} // Null-returning reader, creator, and converter, used when registry lookup // fails. template <class FstClassType> static FstClassType *NullReader(std::istream &strm, const FstReadOptions &opts) { return nullptr; } static FstClassImplBase *NullCreator() { return nullptr; } static FstClassImplBase *NullConverter(const FstClass &other) { return nullptr; } }; template <class Reader, class Creator, class Converter> class FstClassIORegister : public GenericRegister<string, FstClassRegEntry<Reader, Creator, Converter>, FstClassIORegister<Reader, Creator, Converter>> { public: Reader GetReader(const string &arc_type) const { return this->GetEntry(arc_type).reader; } Creator GetCreator(const string &arc_type) const { return this->GetEntry(arc_type).creator; } Converter GetConverter(const string &arc_type) const { return this->GetEntry(arc_type).converter; } protected: string ConvertKeyToSoFilename(const string &key) const final { string legal_type(key); ConvertToLegalCSymbol(&legal_type); return legal_type + "-arc.so"; } }; // Struct containing everything needed to register a particular type // of FST class (e.g., a plain FstClass, or a MutableFstClass, etc.). template <class FstClassType> struct IORegistration { using Reader = FstClassType *(*)(std::istream &stream, const FstReadOptions &opts); using Creator = FstClassImplBase *(*)(); using Converter = FstClassImplBase *(*)(const FstClass &other); using Entry = FstClassRegEntry<Reader, Creator, Converter>; // FST class Register. using Register = FstClassIORegister<Reader, Creator, Converter>; // FST class Register-er. using Registerer = GenericRegisterer<FstClassIORegister<Reader, Creator, Converter>>; }; #define REGISTER_FST_CLASS(Class, Arc) \ static IORegistration<Class>::Registerer Class##_##Arc##_registerer( \ Arc::Type(), \ IORegistration<Class>::Entry(Class::Read<Arc>, Class::Create<Arc>, \ Class::Convert<Arc>)) #define REGISTER_FST_CLASSES(Arc) \ REGISTER_FST_CLASS(FstClass, Arc); \ REGISTER_FST_CLASS(MutableFstClass, Arc); \ REGISTER_FST_CLASS(VectorFstClass, Arc); } // namespace script } // namespace fst #endif // FST_SCRIPT_REGISTER_H_
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst/script/fst-class.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #ifndef FST_SCRIPT_FST_CLASS_H_ #define FST_SCRIPT_FST_CLASS_H_ #include <algorithm> #include <limits> #include <string> #include <type_traits> #include <fst/expanded-fst.h> #include <fst/fst.h> #include <fst/mutable-fst.h> #include <fst/vector-fst.h> #include <fst/script/arc-class.h> #include <fst/script/weight-class.h> // Classes to support "boxing" all existing types of FST arcs in a single // FstClass which hides the arc types. This allows clients to load // and work with FSTs without knowing the arc type. These classes are only // recommended for use in high-level scripting applications. Most users should // use the lower-level templated versions corresponding to these classes. namespace fst { namespace script { // Abstract base class defining the set of functionalities implemented in all // impls and passed through by all bases. Below FstClassBase the class // hierarchy bifurcates; FstClassImplBase serves as the base class for all // implementations (of which FstClassImpl is currently the only one) and // FstClass serves as the base class for all interfaces. class FstClassBase { public: virtual const string &ArcType() const = 0; virtual WeightClass Final(int64_t) const = 0; virtual const string &FstType() const = 0; virtual const SymbolTable *InputSymbols() const = 0; virtual size_t NumArcs(int64_t) const = 0; virtual size_t NumInputEpsilons(int64_t) const = 0; virtual size_t NumOutputEpsilons(int64_t) const = 0; virtual const SymbolTable *OutputSymbols() const = 0; virtual uint64_t Properties(uint64_t, bool) const = 0; virtual int64_t Start() const = 0; virtual const string &WeightType() const = 0; virtual bool ValidStateId(int64_t) const = 0; virtual bool Write(const string &) const = 0; virtual bool Write(std::ostream &, const string &) const = 0; virtual ~FstClassBase() {} }; // Adds all the MutableFst methods. class FstClassImplBase : public FstClassBase { public: virtual bool AddArc(int64_t, const ArcClass &) = 0; virtual int64_t AddState() = 0; virtual FstClassImplBase *Copy() = 0; virtual bool DeleteArcs(int64_t, size_t) = 0; virtual bool DeleteArcs(int64_t) = 0; virtual bool DeleteStates(const std::vector<int64_t> &) = 0; virtual void DeleteStates() = 0; virtual SymbolTable *MutableInputSymbols() = 0; virtual SymbolTable *MutableOutputSymbols() = 0; virtual int64_t NumStates() const = 0; virtual bool ReserveArcs(int64_t, size_t) = 0; virtual void ReserveStates(int64_t) = 0; virtual void SetInputSymbols(SymbolTable *) = 0; virtual bool SetFinal(int64_t, const WeightClass &) = 0; virtual void SetOutputSymbols(SymbolTable *) = 0; virtual void SetProperties(uint64_t, uint64_t) = 0; virtual bool SetStart(int64_t) = 0; ~FstClassImplBase() override {} }; // Containiner class wrapping an Fst<Arc>, hiding its arc type. Whether this // Fst<Arc> pointer refers to a special kind of FST (e.g. a MutableFst) is // known by the type of interface class that owns the pointer to this // container. template <class Arc> class FstClassImpl : public FstClassImplBase { public: explicit FstClassImpl(Fst<Arc> *impl, bool should_own = false) : impl_(should_own ? impl : impl->Copy()) {} explicit FstClassImpl(const Fst<Arc> &impl) : impl_(impl.Copy()) {} // Warning: calling this method casts the FST to a mutable FST. bool AddArc(int64_t s, const ArcClass &ac) final { if (!ValidStateId(s)) return false; // Note that we do not check that the destination state is valid, so users // can add arcs before they add the corresponding states. Verify can be // used to determine whether any arc has a nonexisting destination. Arc arc(ac.ilabel, ac.olabel, *ac.weight.GetWeight<typename Arc::Weight>(), ac.nextstate); static_cast<MutableFst<Arc> *>(impl_.get())->AddArc(s, arc); return true; } // Warning: calling this method casts the FST to a mutable FST. int64_t AddState() final { return static_cast<MutableFst<Arc> *>(impl_.get())->AddState(); } const string &ArcType() const final { return Arc::Type(); } FstClassImpl *Copy() final { return new FstClassImpl<Arc>(impl_.get()); } // Warning: calling this method casts the FST to a mutable FST. bool DeleteArcs(int64_t s, size_t n) final { if (!ValidStateId(s)) return false; static_cast<MutableFst<Arc> *>(impl_.get())->DeleteArcs(s, n); return true; } // Warning: calling this method casts the FST to a mutable FST. bool DeleteArcs(int64_t s) final { if (!ValidStateId(s)) return false; static_cast<MutableFst<Arc> *>(impl_.get())->DeleteArcs(s); return true; } // Warning: calling this method casts the FST to a mutable FST. bool DeleteStates(const std::vector<int64_t> &dstates) final { for (const auto &state : dstates) if (!ValidStateId(state)) return false; // Warning: calling this method with any integers beyond the precision of // the underlying FST will result in truncation. std::vector<typename Arc::StateId> typed_dstates(dstates.size()); std::copy(dstates.begin(), dstates.end(), typed_dstates.begin()); static_cast<MutableFst<Arc> *>(impl_.get())->DeleteStates(typed_dstates); return true; } // Warning: calling this method casts the FST to a mutable FST. void DeleteStates() final { static_cast<MutableFst<Arc> *>(impl_.get())->DeleteStates(); } WeightClass Final(int64_t s) const final { if (!ValidStateId(s)) return WeightClass::NoWeight(WeightType()); WeightClass w(impl_->Final(s)); return w; } const string &FstType() const final { return impl_->Type(); } const SymbolTable *InputSymbols() const final { return impl_->InputSymbols(); } // Warning: calling this method casts the FST to a mutable FST. SymbolTable *MutableInputSymbols() final { return static_cast<MutableFst<Arc> *>(impl_.get())->MutableInputSymbols(); } // Warning: calling this method casts the FST to a mutable FST. SymbolTable *MutableOutputSymbols() final { return static_cast<MutableFst<Arc> *>(impl_.get())->MutableOutputSymbols(); } // Signals failure by returning size_t max. size_t NumArcs(int64_t s) const final { return ValidStateId(s) ? impl_->NumArcs(s) : std::numeric_limits<size_t>::max(); } // Signals failure by returning size_t max. size_t NumInputEpsilons(int64_t s) const final { return ValidStateId(s) ? impl_->NumInputEpsilons(s) : std::numeric_limits<size_t>::max(); } // Signals failure by returning size_t max. size_t NumOutputEpsilons(int64_t s) const final { return ValidStateId(s) ? impl_->NumOutputEpsilons(s) : std::numeric_limits<size_t>::max(); } // Warning: calling this method casts the FST to a mutable FST. int64_t NumStates() const final { return static_cast<MutableFst<Arc> *>(impl_.get())->NumStates(); } uint64_t Properties(uint64_t mask, bool test) const final { return impl_->Properties(mask, test); } // Warning: calling this method casts the FST to a mutable FST. bool ReserveArcs(int64_t s, size_t n) final { if (!ValidStateId(s)) return false; static_cast<MutableFst<Arc> *>(impl_.get())->ReserveArcs(s, n); return true; } // Warning: calling this method casts the FST to a mutable FST. void ReserveStates(int64_t s) final { static_cast<MutableFst<Arc> *>(impl_.get())->ReserveStates(s); } const SymbolTable *OutputSymbols() const final { return impl_->OutputSymbols(); } // Warning: calling this method casts the FST to a mutable FST. void SetInputSymbols(SymbolTable *isyms) final { static_cast<MutableFst<Arc> *>(impl_.get())->SetInputSymbols(isyms); } // Warning: calling this method casts the FST to a mutable FST. bool SetFinal(int64_t s, const WeightClass &weight) final { if (!ValidStateId(s)) return false; static_cast<MutableFst<Arc> *>(impl_.get()) ->SetFinal(s, *weight.GetWeight<typename Arc::Weight>()); return true; } // Warning: calling this method casts the FST to a mutable FST. void SetOutputSymbols(SymbolTable *osyms) final { static_cast<MutableFst<Arc> *>(impl_.get())->SetOutputSymbols(osyms); } // Warning: calling this method casts the FST to a mutable FST. void SetProperties(uint64_t props, uint64_t mask) final { static_cast<MutableFst<Arc> *>(impl_.get())->SetProperties(props, mask); } // Warning: calling this method casts the FST to a mutable FST. bool SetStart(int64_t s) final { if (!ValidStateId(s)) return false; static_cast<MutableFst<Arc> *>(impl_.get())->SetStart(s); return true; } int64_t Start() const final { return impl_->Start(); } bool ValidStateId(int64_t s) const final { // This cowardly refuses to count states if the FST is not yet expanded. if (!Properties(kExpanded, true)) { FSTERROR() << "Cannot get number of states for unexpanded FST"; return false; } // If the FST is already expanded, CountStates calls NumStates. if (s < 0 || s >= CountStates(*impl_)) { FSTERROR() << "State ID " << s << " not valid"; return false; } return true; } const string &WeightType() const final { return Arc::Weight::Type(); } bool Write(const string &fname) const final { return impl_->Write(fname); } bool Write(std::ostream &ostr, const string &fname) const final { const FstWriteOptions opts(fname); return impl_->Write(ostr, opts); } ~FstClassImpl() override {} Fst<Arc> *GetImpl() const { return impl_.get(); } private: std::unique_ptr<Fst<Arc>> impl_; }; // BASE CLASS DEFINITIONS class MutableFstClass; class FstClass : public FstClassBase { public: FstClass() : impl_(nullptr) {} template <class Arc> explicit FstClass(const Fst<Arc> &fst) : impl_(new FstClassImpl<Arc>(fst)) {} FstClass(const FstClass &other) : impl_(other.impl_ == nullptr ? nullptr : other.impl_->Copy()) {} FstClass &operator=(const FstClass &other) { impl_.reset(other.impl_ == nullptr ? nullptr : other.impl_->Copy()); return *this; } WeightClass Final(int64_t s) const final { return impl_->Final(s); } const string &ArcType() const final { return impl_->ArcType(); } const string &FstType() const final { return impl_->FstType(); } const SymbolTable *InputSymbols() const final { return impl_->InputSymbols(); } size_t NumArcs(int64_t s) const final { return impl_->NumArcs(s); } size_t NumInputEpsilons(int64_t s) const final { return impl_->NumInputEpsilons(s); } size_t NumOutputEpsilons(int64_t s) const final { return impl_->NumOutputEpsilons(s); } const SymbolTable *OutputSymbols() const final { return impl_->OutputSymbols(); } uint64_t Properties(uint64_t mask, bool test) const final { // Special handling for FSTs with a null impl. if (!impl_) return kError & mask; return impl_->Properties(mask, test); } static FstClass *Read(const string &fname); static FstClass *Read(std::istream &istrm, const string &source); int64_t Start() const final { return impl_->Start(); } bool ValidStateId(int64_t s) const final { return impl_->ValidStateId(s); } const string &WeightType() const final { return impl_->WeightType(); } // Helper that logs an ERROR if the weight type of an FST and a WeightClass // don't match. bool WeightTypesMatch(const WeightClass &weight, const string &op_name) const; bool Write(const string &fname) const final { return impl_->Write(fname); } bool Write(std::ostream &ostr, const string &fname) const final { return impl_->Write(ostr, fname); } ~FstClass() override {} // These methods are required by IO registration. template <class Arc> static FstClassImplBase *Convert(const FstClass &other) { FSTERROR() << "Doesn't make sense to convert any class to type FstClass"; return nullptr; } template <class Arc> static FstClassImplBase *Create() { FSTERROR() << "Doesn't make sense to create an FstClass with a " << "particular arc type"; return nullptr; } template <class Arc> const Fst<Arc> *GetFst() const { if (Arc::Type() != ArcType()) { return nullptr; } else { FstClassImpl<Arc> *typed_impl = static_cast<FstClassImpl<Arc> *>(impl_.get()); return typed_impl->GetImpl(); } } template <class Arc> static FstClass *Read(std::istream &stream, const FstReadOptions &opts) { if (!opts.header) { LOG(ERROR) << "FstClass::Read: Options header not specified"; return nullptr; } const FstHeader &hdr = *opts.header; if (hdr.Properties() & kMutable) { return ReadTypedFst<MutableFstClass, MutableFst<Arc>>(stream, opts); } else { return ReadTypedFst<FstClass, Fst<Arc>>(stream, opts); } } protected: explicit FstClass(FstClassImplBase *impl) : impl_(impl) {} const FstClassImplBase *GetImpl() const { return impl_.get(); } FstClassImplBase *GetImpl() { return impl_.get(); } // Generic template method for reading an arc-templated FST of type // UnderlyingT, and returning it wrapped as FstClassT, with appropriat // error checking. Called from arc-templated Read() static methods. template <class FstClassT, class UnderlyingT> static FstClassT *ReadTypedFst(std::istream &stream, const FstReadOptions &opts) { std::unique_ptr<UnderlyingT> u(UnderlyingT::Read(stream, opts)); return u ? new FstClassT(*u) : nullptr; } private: std::unique_ptr<FstClassImplBase> impl_; }; // Specific types of FstClass with special properties class MutableFstClass : public FstClass { public: bool AddArc(int64_t s, const ArcClass &ac) { if (!WeightTypesMatch(ac.weight, "AddArc")) return false; return GetImpl()->AddArc(s, ac); } int64_t AddState() { return GetImpl()->AddState(); } bool DeleteArcs(int64_t s, size_t n) { return GetImpl()->DeleteArcs(s, n); } bool DeleteArcs(int64_t s) { return GetImpl()->DeleteArcs(s); } bool DeleteStates(const std::vector<int64_t> &dstates) { return GetImpl()->DeleteStates(dstates); } void DeleteStates() { GetImpl()->DeleteStates(); } SymbolTable *MutableInputSymbols() { return GetImpl()->MutableInputSymbols(); } SymbolTable *MutableOutputSymbols() { return GetImpl()->MutableOutputSymbols(); } int64_t NumStates() const { return GetImpl()->NumStates(); } bool ReserveArcs(int64_t s, size_t n) { return GetImpl()->ReserveArcs(s, n); } void ReserveStates(int64_t s) { GetImpl()->ReserveStates(s); } static MutableFstClass *Read(const string &fname, bool convert = false); void SetInputSymbols(SymbolTable *isyms) { GetImpl()->SetInputSymbols(isyms); } bool SetFinal(int64_t s, const WeightClass &weight) { if (!WeightTypesMatch(weight, "SetFinal")) return false; return GetImpl()->SetFinal(s, weight); } void SetOutputSymbols(SymbolTable *osyms) { GetImpl()->SetOutputSymbols(osyms); } void SetProperties(uint64_t props, uint64_t mask) { GetImpl()->SetProperties(props, mask); } bool SetStart(int64_t s) { return GetImpl()->SetStart(s); } template <class Arc> explicit MutableFstClass(const MutableFst<Arc> &fst) : FstClass(fst) {} // These methods are required by IO registration. template <class Arc> static FstClassImplBase *Convert(const FstClass &other) { FSTERROR() << "Doesn't make sense to convert any class to type " << "MutableFstClass"; return nullptr; } template <class Arc> static FstClassImplBase *Create() { FSTERROR() << "Doesn't make sense to create a MutableFstClass with a " << "particular arc type"; return nullptr; } template <class Arc> MutableFst<Arc> *GetMutableFst() { Fst<Arc> *fst = const_cast<Fst<Arc> *>(this->GetFst<Arc>()); MutableFst<Arc> *mfst = static_cast<MutableFst<Arc> *>(fst); return mfst; } template <class Arc> static MutableFstClass *Read(std::istream &stream, const FstReadOptions &opts) { std::unique_ptr<MutableFst<Arc>> mfst(MutableFst<Arc>::Read(stream, opts)); return mfst ? new MutableFstClass(*mfst) : nullptr; } protected: explicit MutableFstClass(FstClassImplBase *impl) : FstClass(impl) {} }; class VectorFstClass : public MutableFstClass { public: explicit VectorFstClass(FstClassImplBase *impl) : MutableFstClass(impl) {} explicit VectorFstClass(const FstClass &other); explicit VectorFstClass(const string &arc_type); static VectorFstClass *Read(const string &fname); template <class Arc> static VectorFstClass *Read(std::istream &stream, const FstReadOptions &opts) { std::unique_ptr<VectorFst<Arc>> mfst(VectorFst<Arc>::Read(stream, opts)); return mfst ? new VectorFstClass(*mfst) : nullptr; } template <class Arc> explicit VectorFstClass(const VectorFst<Arc> &fst) : MutableFstClass(fst) {} template <class Arc> static FstClassImplBase *Convert(const FstClass &other) { return new FstClassImpl<Arc>(new VectorFst<Arc>(*other.GetFst<Arc>()), true); } template <class Arc> static FstClassImplBase *Create() { return new FstClassImpl<Arc>(new VectorFst<Arc>(), true); } }; } // namespace script } // namespace fst #endif // FST_SCRIPT_FST_CLASS_H_
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-training_8k-linux-amd64-py36m-opt.yml
build: template_file: test-linux-opt-base.tyml dependencies: - "linux-amd64-ctc-opt" system_setup: > apt-get -qq update && apt-get -qq -y install ${training.packages_xenial.apt} args: tests_cmdline: "${system.homedir.linux}/DeepSpeech/ds/taskcluster/tc-train-tests.sh 3.6.10:m 8k" workerType: "${docker.dsTests}" metadata: name: "DeepSpeech Linux AMD64 CPU 8kHz basic training Py3.6" description: "Training a DeepSpeech LDC93S1 model for Linux/AMD64 8kHz Python 3.6, CPU only, optimized version"
0
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core/framework/data_types_internal.h
// Copyright (c) Microsoft Corporation. All rights reserved. // Licensed under the MIT License. #pragma once #include <array> #include <cassert> #include <cstdint> #include <string> #include <type_traits> #include <vector> #include "boost/mp11.hpp" #include "core/common/common.h" #ifndef SHARED_PROVIDER #include "core/common/type_list.h" #include "core/framework/data_types.h" #if !defined(ORT_MINIMAL_BUILD) #include "onnx/defs/schema.h" #else #include "onnx/defs/data_type_utils.h" #endif #include "onnx/onnx_pb.h" #include "onnx/onnx-operators_pb.h" #endif namespace onnxruntime { namespace utils { template <typename T> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType() { return ONNX_NAMESPACE::TensorProto_DataType_UNDEFINED; } template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<float>() { return ONNX_NAMESPACE::TensorProto_DataType_FLOAT; } template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<uint8_t>() { return ONNX_NAMESPACE::TensorProto_DataType_UINT8; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<int8_t>() { return ONNX_NAMESPACE::TensorProto_DataType_INT8; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<uint16_t>() { return ONNX_NAMESPACE::TensorProto_DataType_UINT16; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<int16_t>() { return ONNX_NAMESPACE::TensorProto_DataType_INT16; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<int32_t>() { return ONNX_NAMESPACE::TensorProto_DataType_INT32; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<int64_t>() { return ONNX_NAMESPACE::TensorProto_DataType_INT64; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<std::string>() { return ONNX_NAMESPACE::TensorProto_DataType_STRING; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<bool>() { return ONNX_NAMESPACE::TensorProto_DataType_BOOL; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<MLFloat16>() { return ONNX_NAMESPACE::TensorProto_DataType_FLOAT16; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<double>() { return ONNX_NAMESPACE::TensorProto_DataType_DOUBLE; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<uint32_t>() { return ONNX_NAMESPACE::TensorProto_DataType_UINT32; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<uint64_t>() { return ONNX_NAMESPACE::TensorProto_DataType_UINT64; }; template <> constexpr ONNX_NAMESPACE::TensorProto_DataType ToTensorProtoElementType<BFloat16>() { return ONNX_NAMESPACE::TensorProto_DataType_BFLOAT16; }; // The following primitives are strongly recommended for switching on tensor input datatypes for // kernel implementations. // // 1) If you need to handle all of the primitive tensor contained datatypes, the best choice would be macros // DispatchOnTensorType or DispatchOnTensorTypeWithReturn. Use inline wrappers so your function can be invoked as function<T>(). // 2) if you have a few types, use Tensor.IsDataType<T>()/IsDataTypeString() or use utils::IsPrimitiveDataType<T>() // if you have a standalone MLDatatType with a sequence of if/else statements. // 3) For something in between, we suggest to use CallDispatcher pattern. // // Invoking DataTypeImpl::GetType<T>() for switching on input types is discouraged and should be avoided. // Every primitive type carries with it an integer constant that can be used for quick switching on types. #define DispatchOnTensorType(tensor_type, function, ...) \ switch (tensor_type->AsPrimitiveDataType()->GetDataType()) { \ case ONNX_NAMESPACE::TensorProto_DataType_FLOAT: \ function<float>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_BOOL: \ function<bool>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_DOUBLE: \ function<double>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_STRING: \ function<std::string>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_INT8: \ function<int8_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_UINT8: \ function<uint8_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_INT16: \ function<int16_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_UINT16: \ function<uint16_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_INT32: \ function<int32_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_UINT32: \ function<uint32_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_INT64: \ function<int64_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_UINT64: \ function<uint64_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_FLOAT16: \ function<MLFloat16>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_BFLOAT16: \ function<BFloat16>(__VA_ARGS__); \ break; \ default: \ ORT_ENFORCE(false, "Unknown tensor type of ", tensor_type); \ } #define DispatchOnTensorTypeWithReturn(tensor_type, retval, function, ...) \ switch (tensor_type->AsPrimitiveDataType()->GetDataType()) { \ case ONNX_NAMESPACE::TensorProto_DataType_FLOAT: \ retval = function<float>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_BOOL: \ retval = function<bool>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_DOUBLE: \ retval = function<double>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_STRING: \ retval = function<std::string>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_INT8: \ retval = function<int8_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_UINT8: \ retval = function<uint8_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_UINT16: \ retval = function<uint16_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_INT16: \ retval = function<int16_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_INT32: \ retval = function<int32_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_UINT32: \ retval = function<uint32_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_INT64: \ retval = function<int64_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_UINT64: \ retval = function<uint64_t>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_FLOAT16: \ retval = function<MLFloat16>(__VA_ARGS__); \ break; \ case ONNX_NAMESPACE::TensorProto_DataType_BFLOAT16: \ retval = function<BFloat16>(__VA_ARGS__); \ break; \ default: \ ORT_ENFORCE(false, "Unknown tensor type of ", tensor_type); \ } //////////////////////////////////////////////////////////////////////////////// /// Use the following primitives if you have a few types to switch on so you // can write a short sequence of if/else statements. // This is a frequently used check so we make a separate utility function. inline bool IsDataTypeString(MLDataType dt_type) { auto prim_type = dt_type->AsPrimitiveDataType(); return (prim_type != nullptr && prim_type->GetDataType() == ONNX_NAMESPACE::TensorProto_DataType_STRING); } // Test if MLDataType is a concrete type of PrimitiveDataTypeBase // and it is T template <class T> inline bool IsPrimitiveDataType(MLDataType dt_type) { auto prim_type = dt_type->AsPrimitiveDataType(); return (prim_type != nullptr && prim_type->GetDataType() == ToTensorProtoElementType<T>()); } // Use after AsPrimitiveDataType() is successful // Check if PrimitiveDataTypeBase is of type T template <class T> inline bool IsPrimitiveDataType(const PrimitiveDataTypeBase* prim_type) { assert(prim_type != nullptr); return prim_type->GetDataType() == ToTensorProtoElementType<T>(); } // This implementation contains a workaround for GCC bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=47226 // GCC until very recently does not support template parameter pack expansion within lambda context. namespace mltype_dispatcher_internal { // T - type handled by this helper class CallableDispatchableHelper { int32_t dt_type_; // Type currently dispatched size_t called_; public: explicit CallableDispatchableHelper(int32_t dt_type) noexcept : dt_type_(dt_type), called_(0) {} // Must return integer to be in a expandable context template <class T, class Fn, class... Args> int Invoke(Fn&& fn, Args&&... args) { if (utils::ToTensorProtoElementType<T>() == dt_type_) { std::forward<Fn>(fn)(std::forward<Args>(args)...); ++called_; } return 0; } void CheckCalledOnce() { ORT_ENFORCE(called_ == 1, "Unsupported data type: ", dt_type_); } }; // Default policy is to throw an exception. // Other policies may set the second result argument accordingly. template <class Ret> struct UnsupportedTypeDefaultPolicy { void operator()(int32_t dt_type, Ret& /*result*/) const { ORT_THROW("Unsupported data type: ", dt_type); } }; // Helper with the result type template <class Ret, class UnsupportedPolicy> class CallableDispatchableRetHelper { int32_t dt_type_; // Type currently dispatched size_t called_; Ret result_; public: explicit CallableDispatchableRetHelper(int32_t dt_type) noexcept : dt_type_(dt_type), called_(0), result_() {} Ret Get() { // No type was invoked if (called_ == 0) { UnsupportedPolicy()(dt_type_, result_); } return result_; } // Must return integer to be in a expandable context template <class T, class Fn, class... Args> int Invoke(Fn&& fn, Args&&... args) { if (utils::ToTensorProtoElementType<T>() == dt_type_) { result_ = std::forward<Fn>(fn)(std::forward<Args>(args)...); ++called_; } return 0; } }; template <typename T> using TensorProtoElementTypeConstant = std::integral_constant<ONNX_NAMESPACE::TensorProto_DataType, ToTensorProtoElementType<T>()>; using UndefinedTensorProtoElementTypeConstant = std::integral_constant<ONNX_NAMESPACE::TensorProto_DataType, ONNX_NAMESPACE::TensorProto_DataType_UNDEFINED>; } // namespace mltype_dispatcher_internal /** * This class helps to efficiently dispatch calls to implementation function * objects with a tensor element type template argument. * * The constructor accepts a value corresponding to a tensor element type. * For example, it can be obtained from: * input_tensor->GetElementType() * * The Invoke member functions will instantiate and invoke the provided * function object template, Fn. Fn must be default constructible. Fn must also * have a tensor element type template argument. This type template argument * will be the type that corresponds to the value given in the constructor. * These functions accept and forward arbitrary function arguments. They ensure * that Fn is called once with the type specified in the constructor. * * @tparam Types The types supported by the implementation. This should be a * set of ONNX tensor element types that are supported by ORT. */ template <typename... Types> class MLTypeCallDispatcher { using SupportedTypeList = TypeList<Types...>; using SupportedTensorProtoElementTypeList = boost::mp11::mp_transform< mltype_dispatcher_internal::TensorProtoElementTypeConstant, SupportedTypeList>; static_assert( boost::mp11::mp_and< boost::mp11::mp_is_set<SupportedTensorProtoElementTypeList>, boost::mp11::mp_not< boost::mp11::mp_set_contains< SupportedTensorProtoElementTypeList, mltype_dispatcher_internal::UndefinedTensorProtoElementTypeConstant>>>::value, "Types must map to a unique set of ONNX tensor element data types supported by ORT."); int32_t dt_type_; public: /** * Constructor. * @param dt_type The value corresponding to the tensor element type to be * dispatched to. This can be obtained from * input_tensor->GetElementType() or * utils::ToTensorProtoElementType<T>(). */ explicit MLTypeCallDispatcher(int32_t dt_type) noexcept : dt_type_(dt_type) {} /** * Invokes Fn<T> with the specified arguments. * * @tparam Fn The function object template. * @tparam Args The argument types. */ template <template <typename...> class Fn, typename... Args> void Invoke(Args&&... args) const { InvokeWithLeadingTemplateArgs<Fn, TypeList<>>(std::forward<Args>(args)...); } /** * Invokes Fn<..., T> with leading template arguments and the specified * arguments. * * @tparam Fn The function object template. * @tparam LeadingTemplateArgTypeList A type list of the leading template * arguments. * @tparam Args The argument types. */ template <template <typename...> class Fn, typename LeadingTemplateArgTypeList, typename... Args> void InvokeWithLeadingTemplateArgs(Args&&... args) const { static_assert( boost::mp11::mp_is_list<LeadingTemplateArgTypeList>::value, "LeadingTemplateArgTypeList must be a type list (e.g., onnxruntime::TypeList<T1, T2, ...>)."); mltype_dispatcher_internal::CallableDispatchableHelper helper(dt_type_); // given LeadingTemplateArgTypeList is a type list L<U1, U2, ...>, // call helper.Invoke() with Fn<U1, U2, ..., T> for each T in Types static_cast<void>(std::array<int, sizeof...(Types)>{ helper.template Invoke<Types>( boost::mp11::mp_apply<Fn, boost::mp11::mp_push_back<LeadingTemplateArgTypeList, Types>>(), std::forward<Args>(args)...)...}); // avoid "unused parameter" warning for the case where Types is empty static_cast<void>(std::array<int, sizeof...(Args)>{(ORT_UNUSED_PARAMETER(args), 0)...}); helper.CheckCalledOnce(); } /** * Invokes Fn<T> with the specified arguments and returns the result. * * @tparam Ret The return type. Fn should return a type convertible to Ret. * @tparam Fn The function object template. * @tparam Args The argument types. */ template <class Ret, template <typename...> class Fn, typename... Args> Ret InvokeRet(Args&&... args) const { return InvokeRetWithUnsupportedPolicy< Ret, Fn, mltype_dispatcher_internal::UnsupportedTypeDefaultPolicy<Ret>>( std::forward<Args>(args)...); } /** * Invokes Fn<T> with the specified arguments and returns the result. * * @tparam Ret The return type. Fn should return a type convertible to Ret. * @tparam Fn The function object template. * @tparam UnsupportedPolicy The policy used to handle unsupported types. * See mltype_dispatcher_internal::UnsupportedTypeDefaultPolicy * for an example. * @tparam Args The argument types. */ template <class Ret, template <typename...> class Fn, class UnsupportedPolicy, typename... Args> Ret InvokeRetWithUnsupportedPolicy(Args&&... args) const { return InvokeRetWithUnsupportedPolicyAndLeadingTemplateArgs< Ret, Fn, UnsupportedPolicy, TypeList<>>( std::forward<Args>(args)...); } /** * Invokes Fn<..., T> with leading template arguments and the specified * arguments and returns the result. * * @tparam Ret The return type. Fn should return a type convertible to Ret. * @tparam Fn The function object template. * @tparam LeadingTemplateArgTypeList A type list of the leading template * arguments. * @tparam Args The argument types. */ template <class Ret, template <typename...> class Fn, typename LeadingTemplateArgTypeList, typename... Args> Ret InvokeRetWithLeadingTemplateArgs(Args&&... args) const { return InvokeRetWithUnsupportedPolicyAndLeadingTemplateArgs< Ret, Fn, mltype_dispatcher_internal::UnsupportedTypeDefaultPolicy<Ret>, LeadingTemplateArgTypeList>( std::forward<Args>(args)...); } /** * Invokes Fn<..., T> with leading template arguments and the specified * arguments and returns the result. * * @tparam Ret The return type. Fn should return a type convertible to Ret. * @tparam Fn The function object template. * @tparam UnsupportedPolicy The policy used to handle unsupported types. * See mltype_dispatcher_internal::UnsupportedTypeDefaultPolicy * for an example. * @tparam LeadingTemplateArgTypeList A type list of the leading template * arguments. * @tparam Args The argument types. */ template <class Ret, template <typename...> class Fn, class UnsupportedPolicy, typename LeadingTemplateArgTypeList, typename... Args> Ret InvokeRetWithUnsupportedPolicyAndLeadingTemplateArgs(Args&&... args) const { mltype_dispatcher_internal::CallableDispatchableRetHelper<Ret, UnsupportedPolicy> helper(dt_type_); // given LeadingTemplateArgTypeList is a type list L<U1, U2, ...>, // call helper.Invoke() with Fn<U1, U2, ..., T> for each T in Types static_cast<void>(std::array<int, sizeof...(Types)>{ helper.template Invoke<Types>( boost::mp11::mp_apply<Fn, boost::mp11::mp_push_back<LeadingTemplateArgTypeList, Types>>(), std::forward<Args>(args)...)...}); // avoid "unused parameter" warning for the case where Types is empty static_cast<void>(std::array<int, sizeof...(Args)>{(ORT_UNUSED_PARAMETER(args), 0)...}); return helper.Get(); } }; // the type MLTypeCallDispatcher<T...> given a type list L<T...> template <typename L> using MLTypeCallDispatcherFromTypeList = boost::mp11::mp_apply<MLTypeCallDispatcher, L>; namespace data_types_internal { enum class ContainerType : uint16_t { kUndefined = 0, kTensor = 1, kMap = 2, kSequence = 3, kOpaque = 4 }; class TypeNode { // type_ is a TypeProto value case enum // that may be a kTypeTensor, kTypeMap, kTypeSequence // prim_type_ is a TypeProto_DataType enum that has meaning // - for Tensor then prim_type_ is the contained type // - for Map prim_type is the key type. Next entry describes map value // - For sequence prim_type_ is not used and has no meaning. Next entry // describes the value for the sequence // Tensor is always the last entry as it describes a contained primitive type. ContainerType type_; uint16_t prim_type_; public: TypeNode(ContainerType type, int32_t prim_type) noexcept { type_ = type; prim_type_ = static_cast<uint16_t>(prim_type); } bool IsType(ContainerType type) const noexcept { return type_ == type; } bool IsPrimType(int32_t prim_type) const noexcept { return prim_type_ == static_cast<uint16_t>(prim_type); } }; } // namespace data_types_internal //////////////////////////////////////////////////////////////////// /// Provides generic interface to test whether MLDataType is a Sequence, /// Map or an Opaque type including arbitrary recursive definitions /// without querying DataTypeImpl::GetType<T> for all known complex types // T is a sequence contained element type // If returns true then we know that the runtime // representation is std::vector<T> // T itself can be a runtime representation of another // sequence, map, opaque type or a tensor // // That is it can be std::vector or a std::map // If T is a primitive type sequence is tested whether it contains // tensors of that type // // If T is an opaque type, then it is only tested to be opaque but not exactly // a specific opaque type. To Test for a specific Opaque type use IsOpaqueType() below // // This class examines the supplied MLDataType and records // its information in a vector so any subsequent checks for Sequences and Maps // are quick. class ContainerChecker { using Cont = std::vector<data_types_internal::TypeNode>; Cont types_; // Default IsContainerOfType is for Opaque type template <class T> struct IsContainerOfType { static bool check(const Cont& c, size_t index) { if (index >= c.size()) { return false; } return c[index].IsType(data_types_internal::ContainerType::kOpaque); } }; // Handles the case where sequence element is also a sequence template <class T> struct IsContainerOfType<std::vector<T>> { static bool check(const Cont& c, size_t index) { if (index >= c.size()) { return false; } if (c[index].IsType(data_types_internal::ContainerType::kSequence)) { ORT_ENFORCE(++index < c.size(), "Sequence is missing type entry for its element"); constexpr int32_t prim_type = ToTensorProtoElementType<T>(); // Check if this is a primitive type and it matches ORT_IF_CONSTEXPR (prim_type != ONNX_NAMESPACE::TensorProto_DataType_UNDEFINED) { return c[index].IsType(data_types_internal::ContainerType::kTensor) && c[index].IsPrimType(prim_type); } else { // T is not primitive, check next entry for non-primitive proto return IsContainerOfType<T>::check(c, index); } } return false; } }; template <class K, class V> struct IsContainerOfType<std::map<K, V>> { static bool check(const Cont& c, size_t index) { static_assert(ToTensorProtoElementType<K>() != ONNX_NAMESPACE::TensorProto_DataType_UNDEFINED, "Map Key can not be a non-primitive type"); if (index >= c.size()) { return false; } if (!c[index].IsType(data_types_internal::ContainerType::kMap)) { return false; } constexpr int32_t key_type = ToTensorProtoElementType<K>(); if (!c[index].IsPrimType(key_type)) { return false; } ORT_ENFORCE(++index < c.size(), "Map is missing type entry for its value"); constexpr int32_t val_type = ToTensorProtoElementType<V>(); ORT_IF_CONSTEXPR (val_type != ONNX_NAMESPACE::TensorProto_DataType_UNDEFINED) { return c[index].IsType(data_types_internal::ContainerType::kTensor) && c[index].IsPrimType(val_type); } else return IsContainerOfType<V>::check(c, index); } }; public: explicit ContainerChecker(MLDataType); ~ContainerChecker() = default; bool IsMap() const noexcept { assert(!types_.empty()); return types_[0].IsType(data_types_internal::ContainerType::kMap); } bool IsSequence() const noexcept { assert(!types_.empty()); return types_[0].IsType(data_types_internal::ContainerType::kSequence); } template <class T> bool IsSequenceOf() const { assert(!types_.empty()); return IsContainerOfType<std::vector<T>>::check(types_, 0); } template <class K, class V> bool IsMapOf() const { assert(!types_.empty()); return IsContainerOfType<std::map<K, V>>::check(types_, 0); } }; bool IsOpaqueType(MLDataType ml_type, const char* domain, const char* name); } // namespace utils } // namespace onnxruntime
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/script/libfstscript.vcxproj
<?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="15.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <!-- Need ConfigurationType set before importing openfst.props! --> <PropertyGroup Label="Globals"> <ProjectGuid>{111F46ED-DA1F-469B-B912-BA2ACC2FF8E6}</ProjectGuid> <ConfigurationType>StaticLibrary</ConfigurationType> </PropertyGroup> <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - --> <Import Project="../openfst.props" /> <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - --> <ItemGroup> <ClCompile Include="*.cc" /> <ClInclude Include="..\include\fst\script\*.h" /> </ItemGroup> <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - --> <Import Project="../openfst.targets" /> <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - --> </Project>
0
coqui_public_repos/STT/native_client/java/app/src/main/res
coqui_public_repos/STT/native_client/java/app/src/main/res/layout/activity_stt.xml
<?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".STTActivity"> <!-- <TextView android:id="@+id/audioFormat" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Hello World!" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="parent" /> <TextView android:id="@+id/numChannels" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Hello World!" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="@+id/audioFormat" /> <TextView android:id="@+id/sampleRate" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Hello World!" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="@+id/numChannels" /> <TextView android:id="@+id/bitsPerSample" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Hello World!" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="@+id/sampleRate" /> <TextView android:id="@+id/bufferSize" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Hello World!" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="@+id/bitsPerSample" /> --> <android.support.constraint.Guideline android:id="@+id/guideline" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginStart="32dp" android:layout_marginTop="32dp" android:layout_marginEnd="32dp" android:layout_marginBottom="32dp" android:orientation="horizontal" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintGuide_end="491dp" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> <LinearLayout android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical"> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal"> <TextView android:id="@+id/lblTfliteModel" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:text="Model file" /> <EditText android:id="@+id/tfliteModel" android:layout_width="wrap_content" android:layout_height="wrap_content" android:inputType="text" /> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal"> <TextView android:id="@+id/lblAudioFile" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:text="Audio file" /> <EditText android:id="@+id/audioFile" android:layout_width="wrap_content" android:layout_height="wrap_content" android:inputType="text" /> </LinearLayout> <Space android:layout_width="match_parent" android:layout_height="@android:dimen/app_icon_size" /> <TextView android:id="@+id/tfliteStatus" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Hello World!" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="parent" /> <Space android:layout_width="match_parent" android:layout_height="@android:dimen/app_icon_size" /> <TextView android:id="@+id/decodedString" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Hello World!" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="parent" /> <Space android:layout_width="match_parent" android:layout_height="@android:dimen/app_icon_size" /> <Button android:id="@+id/btnStartInference" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Run inference!" android:onClick="onClick_inference_handler" /> <!-- <Space android:layout_width="match_parent" android:layout_height="@android:dimen/app_icon_size" /> <Button android:id="@+id/btnPlayAudioFile" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Listen to audio" android:onClick="onClick_audio_handler" /> --> </LinearLayout> </android.support.constraint.ConstraintLayout>
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/tc-python-tests.sh
#!/bin/bash set -xe source $(dirname "$0")/tc-tests-utils.sh extract_python_versions "$1" "pyver" "pyver_pkg" "py_unicode_type" "pyconf" "pyalias" bitrate=$2 set_ldc_sample_filename "${bitrate}" download_data virtualenv_activate "${pyalias}" "deepspeech" if [ "$3" = "cuda" ]; then deepspeech_pkg_url=$(get_python_pkg_url "${pyver_pkg}" "${py_unicode_type}" "deepspeech_gpu") else deepspeech_pkg_url=$(get_python_pkg_url "${pyver_pkg}" "${py_unicode_type}") fi; LD_LIBRARY_PATH=${PY37_LDPATH}:$LD_LIBRARY_PATH pip install --verbose --only-binary :all: --upgrade ${deepspeech_pkg_url} | cat which deepspeech deepspeech --version ensure_cuda_usage "$3" run_all_inference_tests run_hotword_tests virtualenv_deactivate "${pyalias}" "deepspeech"
0
coqui_public_repos/TTS/TTS/tts/layers
coqui_public_repos/TTS/TTS/tts/layers/delightful_tts/acoustic_model.py
### credit: https://github.com/dunky11/voicesmith from typing import Callable, Dict, Tuple import torch import torch.nn.functional as F from coqpit import Coqpit from torch import nn from TTS.tts.layers.delightful_tts.conformer import Conformer from TTS.tts.layers.delightful_tts.encoders import ( PhonemeLevelProsodyEncoder, UtteranceLevelProsodyEncoder, get_mask_from_lengths, ) from TTS.tts.layers.delightful_tts.energy_adaptor import EnergyAdaptor from TTS.tts.layers.delightful_tts.networks import EmbeddingPadded, positional_encoding from TTS.tts.layers.delightful_tts.phoneme_prosody_predictor import PhonemeProsodyPredictor from TTS.tts.layers.delightful_tts.pitch_adaptor import PitchAdaptor from TTS.tts.layers.delightful_tts.variance_predictor import VariancePredictor from TTS.tts.layers.generic.aligner import AlignmentNetwork from TTS.tts.utils.helpers import generate_path, maximum_path, sequence_mask class AcousticModel(torch.nn.Module): def __init__( self, args: "ModelArgs", tokenizer: "TTSTokenizer" = None, speaker_manager: "SpeakerManager" = None, ): super().__init__() self.args = args self.tokenizer = tokenizer self.speaker_manager = speaker_manager self.init_multispeaker(args) # self.set_embedding_dims() self.length_scale = ( float(self.args.length_scale) if isinstance(self.args.length_scale, int) else self.args.length_scale ) self.emb_dim = args.n_hidden_conformer_encoder self.encoder = Conformer( dim=self.args.n_hidden_conformer_encoder, n_layers=self.args.n_layers_conformer_encoder, n_heads=self.args.n_heads_conformer_encoder, speaker_embedding_dim=self.embedded_speaker_dim, p_dropout=self.args.dropout_conformer_encoder, kernel_size_conv_mod=self.args.kernel_size_conv_mod_conformer_encoder, lrelu_slope=self.args.lrelu_slope, ) self.pitch_adaptor = PitchAdaptor( n_input=self.args.n_hidden_conformer_encoder, n_hidden=self.args.n_hidden_variance_adaptor, n_out=1, kernel_size=self.args.kernel_size_variance_adaptor, emb_kernel_size=self.args.emb_kernel_size_variance_adaptor, p_dropout=self.args.dropout_variance_adaptor, lrelu_slope=self.args.lrelu_slope, ) self.energy_adaptor = EnergyAdaptor( channels_in=self.args.n_hidden_conformer_encoder, channels_hidden=self.args.n_hidden_variance_adaptor, channels_out=1, kernel_size=self.args.kernel_size_variance_adaptor, emb_kernel_size=self.args.emb_kernel_size_variance_adaptor, dropout=self.args.dropout_variance_adaptor, lrelu_slope=self.args.lrelu_slope, ) self.aligner = AlignmentNetwork( in_query_channels=self.args.out_channels, in_key_channels=self.args.n_hidden_conformer_encoder, ) self.duration_predictor = VariancePredictor( channels_in=self.args.n_hidden_conformer_encoder, channels=self.args.n_hidden_variance_adaptor, channels_out=1, kernel_size=self.args.kernel_size_variance_adaptor, p_dropout=self.args.dropout_variance_adaptor, lrelu_slope=self.args.lrelu_slope, ) self.utterance_prosody_encoder = UtteranceLevelProsodyEncoder( num_mels=self.args.num_mels, ref_enc_filters=self.args.ref_enc_filters_reference_encoder, ref_enc_size=self.args.ref_enc_size_reference_encoder, ref_enc_gru_size=self.args.ref_enc_gru_size_reference_encoder, ref_enc_strides=self.args.ref_enc_strides_reference_encoder, n_hidden=self.args.n_hidden_conformer_encoder, dropout=self.args.dropout_conformer_encoder, bottleneck_size_u=self.args.bottleneck_size_u_reference_encoder, token_num=self.args.token_num_reference_encoder, ) self.utterance_prosody_predictor = PhonemeProsodyPredictor( hidden_size=self.args.n_hidden_conformer_encoder, kernel_size=self.args.predictor_kernel_size_reference_encoder, dropout=self.args.dropout_conformer_encoder, bottleneck_size=self.args.bottleneck_size_u_reference_encoder, lrelu_slope=self.args.lrelu_slope, ) self.phoneme_prosody_encoder = PhonemeLevelProsodyEncoder( num_mels=self.args.num_mels, ref_enc_filters=self.args.ref_enc_filters_reference_encoder, ref_enc_size=self.args.ref_enc_size_reference_encoder, ref_enc_gru_size=self.args.ref_enc_gru_size_reference_encoder, ref_enc_strides=self.args.ref_enc_strides_reference_encoder, n_hidden=self.args.n_hidden_conformer_encoder, dropout=self.args.dropout_conformer_encoder, bottleneck_size_p=self.args.bottleneck_size_p_reference_encoder, n_heads=self.args.n_heads_conformer_encoder, ) self.phoneme_prosody_predictor = PhonemeProsodyPredictor( hidden_size=self.args.n_hidden_conformer_encoder, kernel_size=self.args.predictor_kernel_size_reference_encoder, dropout=self.args.dropout_conformer_encoder, bottleneck_size=self.args.bottleneck_size_p_reference_encoder, lrelu_slope=self.args.lrelu_slope, ) self.u_bottle_out = nn.Linear( self.args.bottleneck_size_u_reference_encoder, self.args.n_hidden_conformer_encoder, ) self.u_norm = nn.InstanceNorm1d(self.args.bottleneck_size_u_reference_encoder) self.p_bottle_out = nn.Linear( self.args.bottleneck_size_p_reference_encoder, self.args.n_hidden_conformer_encoder, ) self.p_norm = nn.InstanceNorm1d( self.args.bottleneck_size_p_reference_encoder, ) self.decoder = Conformer( dim=self.args.n_hidden_conformer_decoder, n_layers=self.args.n_layers_conformer_decoder, n_heads=self.args.n_heads_conformer_decoder, speaker_embedding_dim=self.embedded_speaker_dim, p_dropout=self.args.dropout_conformer_decoder, kernel_size_conv_mod=self.args.kernel_size_conv_mod_conformer_decoder, lrelu_slope=self.args.lrelu_slope, ) padding_idx = self.tokenizer.characters.pad_id self.src_word_emb = EmbeddingPadded( self.args.num_chars, self.args.n_hidden_conformer_encoder, padding_idx=padding_idx ) self.to_mel = nn.Linear( self.args.n_hidden_conformer_decoder, self.args.num_mels, ) self.energy_scaler = torch.nn.BatchNorm1d(1, affine=False, track_running_stats=True, momentum=None) self.energy_scaler.requires_grad_(False) def init_multispeaker(self, args: Coqpit): # pylint: disable=unused-argument """Init for multi-speaker training.""" self.embedded_speaker_dim = 0 self.num_speakers = self.args.num_speakers self.audio_transform = None if self.speaker_manager: self.num_speakers = self.speaker_manager.num_speakers if self.args.use_speaker_embedding: self._init_speaker_embedding() if self.args.use_d_vector_file: self._init_d_vector() @staticmethod def _set_cond_input(aux_input: Dict): """Set the speaker conditioning input based on the multi-speaker mode.""" sid, g, lid, durations = None, None, None, None if "speaker_ids" in aux_input and aux_input["speaker_ids"] is not None: sid = aux_input["speaker_ids"] if sid.ndim == 0: sid = sid.unsqueeze_(0) if "d_vectors" in aux_input and aux_input["d_vectors"] is not None: g = F.normalize(aux_input["d_vectors"]) # .unsqueeze_(-1) if g.ndim == 2: g = g # .unsqueeze_(0) # pylint: disable=self-assigning-variable if "durations" in aux_input and aux_input["durations"] is not None: durations = aux_input["durations"] return sid, g, lid, durations def get_aux_input(self, aux_input: Dict): sid, g, lid, _ = self._set_cond_input(aux_input) return {"speaker_ids": sid, "style_wav": None, "d_vectors": g, "language_ids": lid} def _set_speaker_input(self, aux_input: Dict): d_vectors = aux_input.get("d_vectors", None) speaker_ids = aux_input.get("speaker_ids", None) if d_vectors is not None and speaker_ids is not None: raise ValueError("[!] Cannot use d-vectors and speaker-ids together.") if speaker_ids is not None and not hasattr(self, "emb_g"): raise ValueError("[!] Cannot use speaker-ids without enabling speaker embedding.") g = speaker_ids if speaker_ids is not None else d_vectors return g # def set_embedding_dims(self): # if self.embedded_speaker_dim > 0: # self.embedding_dims = self.embedded_speaker_dim # else: # self.embedding_dims = 0 def _init_speaker_embedding(self): # pylint: disable=attribute-defined-outside-init if self.num_speakers > 0: print(" > initialization of speaker-embedding layers.") self.embedded_speaker_dim = self.args.speaker_embedding_channels self.emb_g = nn.Embedding(self.num_speakers, self.embedded_speaker_dim) def _init_d_vector(self): # pylint: disable=attribute-defined-outside-init if hasattr(self, "emb_g"): raise ValueError("[!] Speaker embedding layer already initialized before d_vector settings.") self.embedded_speaker_dim = self.args.d_vector_dim @staticmethod def generate_attn(dr, x_mask, y_mask=None): """Generate an attention mask from the linear scale durations. Args: dr (Tensor): Linear scale durations. x_mask (Tensor): Mask for the input (character) sequence. y_mask (Tensor): Mask for the output (spectrogram) sequence. Compute it from the predicted durations if None. Defaults to None. Shapes - dr: :math:`(B, T_{en})` - x_mask: :math:`(B, T_{en})` - y_mask: :math:`(B, T_{de})` """ # compute decode mask from the durations if y_mask is None: y_lengths = dr.sum(1).long() y_lengths[y_lengths < 1] = 1 y_mask = torch.unsqueeze(sequence_mask(y_lengths, None), 1).to(dr.dtype) attn_mask = torch.unsqueeze(x_mask, -1) * torch.unsqueeze(y_mask, 2) attn = generate_path(dr, attn_mask.squeeze(1)).to(dr.dtype) return attn def _expand_encoder_with_durations( self, o_en: torch.FloatTensor, dr: torch.IntTensor, x_mask: torch.IntTensor, y_lengths: torch.IntTensor, ): y_mask = torch.unsqueeze(sequence_mask(y_lengths, None), 1).to(o_en.dtype) attn = self.generate_attn(dr, x_mask, y_mask) o_en_ex = torch.einsum("kmn, kjm -> kjn", [attn.float(), o_en]) return y_mask, o_en_ex, attn.transpose(1, 2) def _forward_aligner( self, x: torch.FloatTensor, y: torch.FloatTensor, x_mask: torch.IntTensor, y_mask: torch.IntTensor, attn_priors: torch.FloatTensor, ) -> Tuple[torch.IntTensor, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor]: """Aligner forward pass. 1. Compute a mask to apply to the attention map. 2. Run the alignment network. 3. Apply MAS to compute the hard alignment map. 4. Compute the durations from the hard alignment map. Args: x (torch.FloatTensor): Input sequence. y (torch.FloatTensor): Output sequence. x_mask (torch.IntTensor): Input sequence mask. y_mask (torch.IntTensor): Output sequence mask. attn_priors (torch.FloatTensor): Prior for the aligner network map. Returns: Tuple[torch.IntTensor, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor]: Durations from the hard alignment map, soft alignment potentials, log scale alignment potentials, hard alignment map. Shapes: - x: :math:`[B, T_en, C_en]` - y: :math:`[B, T_de, C_de]` - x_mask: :math:`[B, 1, T_en]` - y_mask: :math:`[B, 1, T_de]` - attn_priors: :math:`[B, T_de, T_en]` - aligner_durations: :math:`[B, T_en]` - aligner_soft: :math:`[B, T_de, T_en]` - aligner_logprob: :math:`[B, 1, T_de, T_en]` - aligner_mas: :math:`[B, T_de, T_en]` """ attn_mask = torch.unsqueeze(x_mask, -1) * torch.unsqueeze(y_mask, 2) # [B, 1, T_en, T_de] aligner_soft, aligner_logprob = self.aligner(y.transpose(1, 2), x.transpose(1, 2), x_mask, attn_priors) aligner_mas = maximum_path( aligner_soft.squeeze(1).transpose(1, 2).contiguous(), attn_mask.squeeze(1).contiguous() ) aligner_durations = torch.sum(aligner_mas, -1).int() aligner_soft = aligner_soft.squeeze(1) # [B, T_max2, T_max] aligner_mas = aligner_mas.transpose(1, 2) # [B, T_max, T_max2] -> [B, T_max2, T_max] return aligner_durations, aligner_soft, aligner_logprob, aligner_mas def average_utterance_prosody( # pylint: disable=no-self-use self, u_prosody_pred: torch.Tensor, src_mask: torch.Tensor ) -> torch.Tensor: lengths = ((~src_mask) * 1.0).sum(1) u_prosody_pred = u_prosody_pred.sum(1, keepdim=True) / lengths.view(-1, 1, 1) return u_prosody_pred def forward( self, tokens: torch.Tensor, src_lens: torch.Tensor, mels: torch.Tensor, mel_lens: torch.Tensor, pitches: torch.Tensor, energies: torch.Tensor, attn_priors: torch.Tensor, use_ground_truth: bool = True, d_vectors: torch.Tensor = None, speaker_idx: torch.Tensor = None, ) -> Dict[str, torch.Tensor]: sid, g, lid, _ = self._set_cond_input( # pylint: disable=unused-variable {"d_vectors": d_vectors, "speaker_ids": speaker_idx} ) # pylint: disable=unused-variable src_mask = get_mask_from_lengths(src_lens) # [B, T_src] mel_mask = get_mask_from_lengths(mel_lens) # [B, T_mel] # Token embeddings token_embeddings = self.src_word_emb(tokens) # [B, T_src, C_hidden] token_embeddings = token_embeddings.masked_fill(src_mask.unsqueeze(-1), 0.0) # Alignment network and durations aligner_durations, aligner_soft, aligner_logprob, aligner_mas = self._forward_aligner( x=token_embeddings, y=mels.transpose(1, 2), x_mask=~src_mask[:, None], y_mask=~mel_mask[:, None], attn_priors=attn_priors, ) dr = aligner_durations # [B, T_en] # Embeddings speaker_embedding = None if d_vectors is not None: speaker_embedding = g elif speaker_idx is not None: speaker_embedding = F.normalize(self.emb_g(sid)) pos_encoding = positional_encoding( self.emb_dim, max(token_embeddings.shape[1], max(mel_lens)), device=token_embeddings.device, ) encoder_outputs = self.encoder( token_embeddings, src_mask, speaker_embedding=speaker_embedding, encoding=pos_encoding, ) u_prosody_ref = self.u_norm(self.utterance_prosody_encoder(mels=mels, mel_lens=mel_lens)) u_prosody_pred = self.u_norm( self.average_utterance_prosody( u_prosody_pred=self.utterance_prosody_predictor(x=encoder_outputs, mask=src_mask), src_mask=src_mask, ) ) if use_ground_truth: encoder_outputs = encoder_outputs + self.u_bottle_out(u_prosody_ref) else: encoder_outputs = encoder_outputs + self.u_bottle_out(u_prosody_pred) p_prosody_ref = self.p_norm( self.phoneme_prosody_encoder( x=encoder_outputs, src_mask=src_mask, mels=mels, mel_lens=mel_lens, encoding=pos_encoding ) ) p_prosody_pred = self.p_norm(self.phoneme_prosody_predictor(x=encoder_outputs, mask=src_mask)) if use_ground_truth: encoder_outputs = encoder_outputs + self.p_bottle_out(p_prosody_ref) else: encoder_outputs = encoder_outputs + self.p_bottle_out(p_prosody_pred) encoder_outputs_res = encoder_outputs pitch_pred, avg_pitch_target, pitch_emb = self.pitch_adaptor.get_pitch_embedding_train( x=encoder_outputs, target=pitches, dr=dr, mask=src_mask, ) energy_pred, avg_energy_target, energy_emb = self.energy_adaptor.get_energy_embedding_train( x=encoder_outputs, target=energies, dr=dr, mask=src_mask, ) encoder_outputs = encoder_outputs.transpose(1, 2) + pitch_emb + energy_emb log_duration_prediction = self.duration_predictor(x=encoder_outputs_res.detach(), mask=src_mask) mel_pred_mask, encoder_outputs_ex, alignments = self._expand_encoder_with_durations( o_en=encoder_outputs, y_lengths=mel_lens, dr=dr, x_mask=~src_mask[:, None] ) x = self.decoder( encoder_outputs_ex.transpose(1, 2), mel_mask, speaker_embedding=speaker_embedding, encoding=pos_encoding, ) x = self.to_mel(x) dr = torch.log(dr + 1) dr_pred = torch.exp(log_duration_prediction) - 1 alignments_dp = self.generate_attn(dr_pred, src_mask.unsqueeze(1), mel_pred_mask) # [B, T_max, T_max2'] return { "model_outputs": x, "pitch_pred": pitch_pred, "pitch_target": avg_pitch_target, "energy_pred": energy_pred, "energy_target": avg_energy_target, "u_prosody_pred": u_prosody_pred, "u_prosody_ref": u_prosody_ref, "p_prosody_pred": p_prosody_pred, "p_prosody_ref": p_prosody_ref, "alignments_dp": alignments_dp, "alignments": alignments, # [B, T_de, T_en] "aligner_soft": aligner_soft, "aligner_mas": aligner_mas, "aligner_durations": aligner_durations, "aligner_logprob": aligner_logprob, "dr_log_pred": log_duration_prediction.squeeze(1), # [B, T] "dr_log_target": dr.squeeze(1), # [B, T] "spk_emb": speaker_embedding, } @torch.no_grad() def inference( self, tokens: torch.Tensor, speaker_idx: torch.Tensor, p_control: float = None, # TODO # pylint: disable=unused-argument d_control: float = None, # TODO # pylint: disable=unused-argument d_vectors: torch.Tensor = None, pitch_transform: Callable = None, energy_transform: Callable = None, ) -> torch.Tensor: src_mask = get_mask_from_lengths(torch.tensor([tokens.shape[1]], dtype=torch.int64, device=tokens.device)) src_lens = torch.tensor(tokens.shape[1:2]).to(tokens.device) # pylint: disable=unused-variable sid, g, lid, _ = self._set_cond_input( # pylint: disable=unused-variable {"d_vectors": d_vectors, "speaker_ids": speaker_idx} ) # pylint: disable=unused-variable token_embeddings = self.src_word_emb(tokens) token_embeddings = token_embeddings.masked_fill(src_mask.unsqueeze(-1), 0.0) # Embeddings speaker_embedding = None if d_vectors is not None: speaker_embedding = g elif speaker_idx is not None: speaker_embedding = F.normalize(self.emb_g(sid)) pos_encoding = positional_encoding( self.emb_dim, token_embeddings.shape[1], device=token_embeddings.device, ) encoder_outputs = self.encoder( token_embeddings, src_mask, speaker_embedding=speaker_embedding, encoding=pos_encoding, ) u_prosody_pred = self.u_norm( self.average_utterance_prosody( u_prosody_pred=self.utterance_prosody_predictor(x=encoder_outputs, mask=src_mask), src_mask=src_mask, ) ) encoder_outputs = encoder_outputs + self.u_bottle_out(u_prosody_pred).expand_as(encoder_outputs) p_prosody_pred = self.p_norm( self.phoneme_prosody_predictor( x=encoder_outputs, mask=src_mask, ) ) encoder_outputs = encoder_outputs + self.p_bottle_out(p_prosody_pred).expand_as(encoder_outputs) encoder_outputs_res = encoder_outputs pitch_emb_pred, pitch_pred = self.pitch_adaptor.get_pitch_embedding( x=encoder_outputs, mask=src_mask, pitch_transform=pitch_transform, pitch_mean=self.pitch_mean if hasattr(self, "pitch_mean") else None, pitch_std=self.pitch_std if hasattr(self, "pitch_std") else None, ) energy_emb_pred, energy_pred = self.energy_adaptor.get_energy_embedding( x=encoder_outputs, mask=src_mask, energy_transform=energy_transform ) encoder_outputs = encoder_outputs.transpose(1, 2) + pitch_emb_pred + energy_emb_pred log_duration_pred = self.duration_predictor( x=encoder_outputs_res.detach(), mask=src_mask ) # [B, C_hidden, T_src] -> [B, T_src] duration_pred = (torch.exp(log_duration_pred) - 1) * (~src_mask) * self.length_scale # -> [B, T_src] duration_pred[duration_pred < 1] = 1.0 # -> [B, T_src] duration_pred = torch.round(duration_pred) # -> [B, T_src] mel_lens = duration_pred.sum(1) # -> [B,] _, encoder_outputs_ex, alignments = self._expand_encoder_with_durations( o_en=encoder_outputs, y_lengths=mel_lens, dr=duration_pred.squeeze(1), x_mask=~src_mask[:, None] ) mel_mask = get_mask_from_lengths( torch.tensor([encoder_outputs_ex.shape[2]], dtype=torch.int64, device=encoder_outputs_ex.device) ) if encoder_outputs_ex.shape[1] > pos_encoding.shape[1]: encoding = positional_encoding(self.emb_dim, encoder_outputs_ex.shape[2], device=tokens.device) # [B, C_hidden, T_src], [B, 1, T_src], [B, C_emb], [B, T_src, C_hidden] -> [B, C_hidden, T_src] x = self.decoder( encoder_outputs_ex.transpose(1, 2), mel_mask, speaker_embedding=speaker_embedding, encoding=encoding, ) x = self.to_mel(x) outputs = { "model_outputs": x, "alignments": alignments, # "pitch": pitch_emb_pred, "durations": duration_pred, "pitch": pitch_pred, "energy": energy_pred, "spk_emb": speaker_embedding, } return outputs
0
coqui_public_repos
coqui_public_repos/STT/.isort.cfg
[settings] profile=black
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/partition.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Functions and classes to create a partition of states. #ifndef FST_PARTITION_H_ #define FST_PARTITION_H_ #include <algorithm> #include <vector> #include <fst/queue.h> namespace fst { namespace internal { template <typename T> class PartitionIterator; // Defines a partitioning of elements, used to represent equivalence classes // for FST operations like minimization. T must be a signed integer type. // // The elements are numbered from 0 to num_elements - 1. // Initialize(num_elements) sets up the class for a given number of elements. // We maintain a partition of these elements into classes. The classes are also // numbered from zero; you can add a class with AddClass(), or add them in bulk // with AllocateClasses(num_classes). Initially the elements are not assigned // to any class; you set up the initial mapping from elements to classes by // calling Add(element_id, class_id). You can also move an element to a // different class by calling Move(element_id, class_id). // // We also support a rather specialized interface that allows you to efficiently // split classes in the Hopcroft minimization algorithm. This maintains a // binary partition of each class. Let's call these, rather arbitrarily, the // 'yes' subset and the 'no' subset of each class, and assume that by default, // each element of a class is in its 'no' subset. When one calls // SplitOn(element_id), element_id is moved to the 'yes' subset of its class. // (If it was already in the 'yes' set, it just stays there). The aim is to // enable (later) splitting the class in two in time no greater than the time // already spent calling SplitOn() for that class. We keep a list of the classes // which have nonempty 'yes' sets, as visited_classes_. When one calls // FinalizeSplit(Queue *l), for each class in visited_classes_ whose 'yes' // and 'no' sets are both nonempty, it will create a new class consisting of // the smaller of the two subsets (and this class will be added to the queue), // and the old class will now be the larger of the two subsets. This call also // resets all the yes/no partitions so that everything is in the 'no' subsets. // // One cannot use the Move() function if SplitOn() has been called without // a subsequent call to FinalizeSplit() template <typename T> class Partition { public: Partition() {} explicit Partition(T num_elements) { Initialize(num_elements); } // Creates an empty partition for num_elements. This means that the elements // are not assigned to a class (i.e class_index = -1); you should set up the // number of classes using AllocateClasses() or AddClass(), and allocate each // element to a class by calling Add(element, class_id). void Initialize(size_t num_elements) { elements_.resize(num_elements); classes_.reserve(num_elements); classes_.clear(); yes_counter_ = 1; } // Adds a class; returns new number of classes. T AddClass() { auto num_classes = classes_.size(); classes_.resize(num_classes + 1); return num_classes; } // Adds 'num_classes' new (empty) classes. void AllocateClasses(T num_classes) { classes_.resize(classes_.size() + num_classes); } // Adds element_id to class_id. element_id should already have been allocated // by calling Initialize(num_elements)---or the constructor taking // num_elements---with num_elements > element_id. element_id must not // currently be a member of any class; once elements have been added to a // class, use the Move() method to move them from one class to another. void Add(T element_id, T class_id) { auto &this_element = elements_[element_id]; auto &this_class = classes_[class_id]; ++this_class.size; // Adds the element to the 'no' subset of the class. auto no_head = this_class.no_head; if (no_head >= 0) elements_[no_head].prev_element = element_id; this_class.no_head = element_id; this_element.class_id = class_id; // Adds to the 'no' subset of the class. this_element.yes = 0; this_element.next_element = no_head; this_element.prev_element = -1; } // Moves element_id from 'no' subset of its current class to 'no' subset of // class class_id. This may not work correctly if you have called SplitOn() // [for any element] and haven't subsequently called FinalizeSplit(). void Move(T element_id, T class_id) { auto elements = &(elements_[0]); auto &element = elements[element_id]; auto &old_class = classes_[element.class_id]; --old_class.size; // Excises the element from the 'no' list of its old class, where it is // assumed to be. if (element.prev_element >= 0) { elements[element.prev_element].next_element = element.next_element; } else { old_class.no_head = element.next_element; } if (element.next_element >= 0) { elements[element.next_element].prev_element = element.prev_element; } // Adds to new class. Add(element_id, class_id); } // Moves element_id to the 'yes' subset of its class if it was in the 'no' // subset, and marks the class as having been visited. void SplitOn(T element_id) { auto elements = &(elements_[0]); auto &element = elements[element_id]; if (element.yes == yes_counter_) { return; // Already in the 'yes' set; nothing to do. } auto class_id = element.class_id; auto &this_class = classes_[class_id]; // Excises the element from the 'no' list of its class. if (element.prev_element >= 0) { elements[element.prev_element].next_element = element.next_element; } else { this_class.no_head = element.next_element; } if (element.next_element >= 0) { elements[element.next_element].prev_element = element.prev_element; } // Adds the element to the 'yes' list. if (this_class.yes_head >= 0) { elements[this_class.yes_head].prev_element = element_id; } else { visited_classes_.push_back(class_id); } element.yes = yes_counter_; element.next_element = this_class.yes_head; element.prev_element = -1; this_class.yes_head = element_id; this_class.yes_size++; } // This should be called after one has possibly called SplitOn for one or more // elements, thus moving those elements to the 'yes' subset for their class. // For each class that has a nontrivial split (i.e., it's not the case that // all members are in the 'yes' or 'no' subset), this function creates a new // class containing the smaller of the two subsets of elements, leaving the // larger group of elements in the old class. The identifier of the new class // will be added to the queue provided as the pointer L. This method then // moves all elements to the 'no' subset of their class. template <class Queue> void FinalizeSplit(Queue *queue) { for (const auto &visited_class : visited_classes_) { const auto new_class = SplitRefine(visited_class); if (new_class != -1 && queue) queue->Enqueue(new_class); } visited_classes_.clear(); // Incrementation sets all the 'yes' members of the elements to false. ++yes_counter_; } const T ClassId(T element_id) const { return elements_[element_id].class_id; } const size_t ClassSize(T class_id) const { return classes_[class_id].size; } const T NumClasses() const { return classes_.size(); } private: friend class PartitionIterator<T>; // Information about a given element. struct Element { T class_id; // Class ID of this element. T yes; // This is to be interpreted as a bool, true if it's in the // 'yes' set of this class. The interpretation as bool is // (yes == yes_counter_ ? true : false). T next_element; // Next element in the 'no' list or 'yes' list of this // class, whichever of the two we belong to (think of // this as the 'next' in a doubly-linked list, although // it is an index into the elements array). Negative // values corresponds to null. T prev_element; // Previous element in the 'no' or 'yes' doubly linked // list. Negative values corresponds to null. }; // Information about a given class. struct Class { Class() : size(0), yes_size(0), no_head(-1), yes_head(-1) {} T size; // Total number of elements in this class ('no' plus 'yes' // subsets). T yes_size; // Total number of elements of 'yes' subset of this class. T no_head; // Index of head element of doubly-linked list in 'no' subset. // Everything is in the 'no' subset until you call SplitOn(). // -1 means no element. T yes_head; // Index of head element of doubly-linked list in 'yes' subset. // -1 means no element. }; // This method, called from FinalizeSplit(), checks whether a class has to // be split (a class will be split only if its 'yes' and 'no' subsets are // both nonempty, but one can assume that since this function was called, the // 'yes' subset is nonempty). It splits by taking the smaller subset and // making it a new class, and leaving the larger subset of elements in the // 'no' subset of the old class. It returns the new class if created, or -1 // if none was created. T SplitRefine(T class_id) { auto yes_size = classes_[class_id].yes_size; auto size = classes_[class_id].size; auto no_size = size - yes_size; if (no_size == 0) { // All members are in the 'yes' subset, so we don't have to create a new // class, just move them all to the 'no' subset. classes_[class_id].no_head = classes_[class_id].yes_head; classes_[class_id].yes_head = -1; classes_[class_id].yes_size = 0; return -1; } else { auto new_class_id = classes_.size(); classes_.resize(classes_.size() + 1); auto &old_class = classes_[class_id]; auto &new_class = classes_[new_class_id]; // The new_class will have the values from the constructor. if (no_size < yes_size) { // Moves the 'no' subset to new class ('no' subset). new_class.no_head = old_class.no_head; new_class.size = no_size; // And makes the 'yes' subset of the old class ('no' subset). old_class.no_head = old_class.yes_head; old_class.yes_head = -1; old_class.size = yes_size; old_class.yes_size = 0; } else { // Moves the 'yes' subset to the new class (to the 'no' subset) new_class.size = yes_size; new_class.no_head = old_class.yes_head; // Retains only the 'no' subset in the old class. old_class.size = no_size; old_class.yes_size = 0; old_class.yes_head = -1; } auto elements = &(elements_[0]); // Updates the 'class_id' of all the elements we moved. for (auto e = new_class.no_head; e >= 0; e = elements[e].next_element) { elements[e].class_id = new_class_id; } return new_class_id; } } // elements_[i] contains all info about the i'th element. std::vector<Element> elements_; // classes_[i] contains all info about the i'th class. std::vector<Class> classes_; // Set of visited classes to be used in split refine. std::vector<T> visited_classes_; // yes_counter_ is used in interpreting the 'yes' members of class Element. // If element.yes == yes_counter_, we interpret that element as being in the // 'yes' subset of its class. This allows us to, in effect, set all those // bools to false at a stroke by incrementing yes_counter_. T yes_counter_; }; // Iterates over members of the 'no' subset of a class in a partition. (When // this is used, everything is in the 'no' subset). template <typename T> class PartitionIterator { public: using Element = typename Partition<T>::Element; PartitionIterator(const Partition<T> &partition, T class_id) : partition_(partition), element_id_(partition_.classes_[class_id].no_head), class_id_(class_id) {} bool Done() { return element_id_ < 0; } const T Value() { return element_id_; } void Next() { element_id_ = partition_.elements_[element_id_].next_element; } void Reset() { element_id_ = partition_.classes_[class_id_].no_head; } private: const Partition<T> &partition_; T element_id_; T class_id_; }; } // namespace internal } // namespace fst #endif // FST_PARTITION_H_
0
coqui_public_repos
coqui_public_repos/TTS/README.md
## 🐸Coqui.ai News - 📣 ⓍTTSv2 is here with 16 languages and better performance across the board. - 📣 ⓍTTS fine-tuning code is out. Check the [example recipes](https://github.com/coqui-ai/TTS/tree/dev/recipes/ljspeech). - 📣 ⓍTTS can now stream with <200ms latency. - 📣 ⓍTTS, our production TTS model that can speak 13 languages, is released [Blog Post](https://coqui.ai/blog/tts/open_xtts), [Demo](https://huggingface.co/spaces/coqui/xtts), [Docs](https://tts.readthedocs.io/en/dev/models/xtts.html) - 📣 [🐶Bark](https://github.com/suno-ai/bark) is now available for inference with unconstrained voice cloning. [Docs](https://tts.readthedocs.io/en/dev/models/bark.html) - 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS. - 📣 🐸TTS now supports 🐢Tortoise with faster inference. [Docs](https://tts.readthedocs.io/en/dev/models/tortoise.html) <div align="center"> <img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" /> ## <img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/coqui-log-green-TTS.png" height="56"/> **🐸TTS is a library for advanced Text-to-Speech generation.** 🚀 Pretrained models in +1100 languages. 🛠️ Tools for training new models and fine-tuning existing models in any language. 📚 Utilities for dataset analysis and curation. ______________________________________________________________________ [![Discord](https://img.shields.io/discord/1037326658807533628?color=%239B59B6&label=chat%20on%20discord)](https://discord.gg/5eXr5seRrv) [![License](<https://img.shields.io/badge/License-MPL%202.0-brightgreen.svg>)](https://opensource.org/licenses/MPL-2.0) [![PyPI version](https://badge.fury.io/py/TTS.svg)](https://badge.fury.io/py/TTS) [![Covenant](https://camo.githubusercontent.com/7d620efaa3eac1c5b060ece5d6aacfcc8b81a74a04d05cd0398689c01c4463bb/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6e7472696275746f72253230436f76656e616e742d76322e3025323061646f707465642d6666363962342e737667)](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md) [![Downloads](https://pepy.tech/badge/tts)](https://pepy.tech/project/tts) [![DOI](https://zenodo.org/badge/265612440.svg)](https://zenodo.org/badge/latestdoi/265612440) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/aux_tests.yml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/data_tests.yml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/docker.yaml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/inference_tests.yml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/style_check.yml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/text_tests.yml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/tts_tests.yml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/vocoder_tests.yml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/zoo_tests0.yml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/zoo_tests1.yml/badge.svg) ![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/zoo_tests2.yml/badge.svg) [![Docs](<https://readthedocs.org/projects/tts/badge/?version=latest&style=plastic>)](https://tts.readthedocs.io/en/latest/) </div> ______________________________________________________________________ ## 💬 Where to ask questions Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it. | Type | Platforms | | ------------------------------- | --------------------------------------- | | 🚨 **Bug Reports** | [GitHub Issue Tracker] | | 🎁 **Feature Requests & Ideas** | [GitHub Issue Tracker] | | 👩‍💻 **Usage Questions** | [GitHub Discussions] | | 🗯 **General Discussion** | [GitHub Discussions] or [Discord] | [github issue tracker]: https://github.com/coqui-ai/tts/issues [github discussions]: https://github.com/coqui-ai/TTS/discussions [discord]: https://discord.gg/5eXr5seRrv [Tutorials and Examples]: https://github.com/coqui-ai/TTS/wiki/TTS-Notebooks-and-Tutorials ## 🔗 Links and Resources | Type | Links | | ------------------------------- | --------------------------------------- | | 💼 **Documentation** | [ReadTheDocs](https://tts.readthedocs.io/en/latest/) | 💾 **Installation** | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#installation)| | 👩‍💻 **Contributing** | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md)| | 📌 **Road Map** | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378) | 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models)| | 📰 **Papers** | [TTS Papers](https://github.com/erogol/TTS-papers)| ## 🥇 TTS Performance <p align="center"><img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/TTS-performance.png" width="800" /></p> Underlined "TTS*" and "Judy*" are **internal** 🐸TTS models that are not released open-source. They are here to show the potential. Models prefixed with a dot (.Jofish .Abe and .Janice) are real human voices. ## Features - High-performance Deep Learning models for Text2Speech tasks. - Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech). - Speaker Encoder to compute speaker embeddings efficiently. - Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN) - Fast and efficient model training. - Detailed training logs on the terminal and Tensorboard. - Support for Multi-speaker TTS. - Efficient, flexible, lightweight but feature complete `Trainer API`. - Released and ready-to-use models. - Tools to curate Text2Speech datasets under```dataset_analysis```. - Utilities to use and test your models. - Modular (but not too much) code base enabling easy implementation of new ideas. ## Model Implementations ### Spectrogram models - Tacotron: [paper](https://arxiv.org/abs/1703.10135) - Tacotron2: [paper](https://arxiv.org/abs/1712.05884) - Glow-TTS: [paper](https://arxiv.org/abs/2005.11129) - Speedy-Speech: [paper](https://arxiv.org/abs/2008.03802) - Align-TTS: [paper](https://arxiv.org/abs/2003.01950) - FastPitch: [paper](https://arxiv.org/pdf/2006.06873.pdf) - FastSpeech: [paper](https://arxiv.org/abs/1905.09263) - FastSpeech2: [paper](https://arxiv.org/abs/2006.04558) - SC-GlowTTS: [paper](https://arxiv.org/abs/2104.05557) - Capacitron: [paper](https://arxiv.org/abs/1906.03402) - OverFlow: [paper](https://arxiv.org/abs/2211.06892) - Neural HMM TTS: [paper](https://arxiv.org/abs/2108.13320) - Delightful TTS: [paper](https://arxiv.org/abs/2110.12612) ### End-to-End Models - ⓍTTS: [blog](https://coqui.ai/blog/tts/open_xtts) - VITS: [paper](https://arxiv.org/pdf/2106.06103) - 🐸 YourTTS: [paper](https://arxiv.org/abs/2112.02418) - 🐢 Tortoise: [orig. repo](https://github.com/neonbjb/tortoise-tts) - 🐶 Bark: [orig. repo](https://github.com/suno-ai/bark) ### Attention Methods - Guided Attention: [paper](https://arxiv.org/abs/1710.08969) - Forward Backward Decoding: [paper](https://arxiv.org/abs/1907.09006) - Graves Attention: [paper](https://arxiv.org/abs/1910.10288) - Double Decoder Consistency: [blog](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/) - Dynamic Convolutional Attention: [paper](https://arxiv.org/pdf/1910.10288.pdf) - Alignment Network: [paper](https://arxiv.org/abs/2108.10447) ### Speaker Encoder - GE2E: [paper](https://arxiv.org/abs/1710.10467) - Angular Loss: [paper](https://arxiv.org/pdf/2003.11982.pdf) ### Vocoders - MelGAN: [paper](https://arxiv.org/abs/1910.06711) - MultiBandMelGAN: [paper](https://arxiv.org/abs/2005.05106) - ParallelWaveGAN: [paper](https://arxiv.org/abs/1910.11480) - GAN-TTS discriminators: [paper](https://arxiv.org/abs/1909.11646) - WaveRNN: [origin](https://github.com/fatchord/WaveRNN/) - WaveGrad: [paper](https://arxiv.org/abs/2009.00713) - HiFiGAN: [paper](https://arxiv.org/abs/2010.05646) - UnivNet: [paper](https://arxiv.org/abs/2106.07889) ### Voice Conversion - FreeVC: [paper](https://arxiv.org/abs/2210.15418) You can also help us implement more models. ## Installation 🐸TTS is tested on Ubuntu 18.04 with **python >= 3.9, < 3.12.**. If you are only interested in [synthesizing speech](https://tts.readthedocs.io/en/latest/inference.html) with the released 🐸TTS models, installing from PyPI is the easiest option. ```bash pip install TTS ``` If you plan to code or train models, clone 🐸TTS and install it locally. ```bash git clone https://github.com/coqui-ai/TTS pip install -e .[all,dev,notebooks] # Select the relevant extras ``` If you are on Ubuntu (Debian), you can also run following commands for installation. ```bash $ make system-deps # intended to be used on Ubuntu (Debian). Let us know if you have a different OS. $ make install ``` If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system). ## Docker Image You can also try TTS without install with the docker image. Simply run the following command and you will be able to run TTS without installing it. ```bash docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu python3 TTS/server/server.py --list_models #To get the list of available models python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server ``` You can then enjoy the TTS server [here](http://[::1]:5002/) More details about the docker images (like GPU support) can be found [here](https://tts.readthedocs.io/en/latest/docker_images.html) ## Synthesizing speech by 🐸TTS ### 🐍 Python API #### Running a multi-speaker and multi-lingual model ```python import torch from TTS.api import TTS # Get device device = "cuda" if torch.cuda.is_available() else "cpu" # List available 🐸TTS models print(TTS().list_models()) # Init TTS tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device) # Run TTS # ❗ Since this model is multi-lingual voice cloning model, we must set the target speaker_wav and language # Text to speech list of amplitude values as output wav = tts.tts(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en") # Text to speech to a file tts.tts_to_file(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav") ``` #### Running a single speaker model ```python # Init TTS with the target model name tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False).to(device) # Run TTS tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH) # Example voice cloning with YourTTS in English, French and Portuguese tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False).to(device) tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav") tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav") tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav") ``` #### Example voice conversion Converting the voice in `source_wav` to the voice of `target_wav` ```python tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda") tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav") ``` #### Example voice cloning together with the voice conversion model. This way, you can clone voices by using any model in 🐸TTS. ```python tts = TTS("tts_models/de/thorsten/tacotron2-DDC") tts.tts_with_vc_to_file( "Wie sage ich auf Italienisch, dass ich dich liebe?", speaker_wav="target/speaker.wav", file_path="output.wav" ) ``` #### Example text to speech using **Fairseq models in ~1100 languages** 🤯. For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`. You can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html) and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms). ```python # TTS with on the fly voice conversion api = TTS("tts_models/deu/fairseq/vits") api.tts_with_vc_to_file( "Wie sage ich auf Italienisch, dass ich dich liebe?", speaker_wav="target/speaker.wav", file_path="output.wav" ) ``` ### Command-line `tts` <!-- begin-tts-readme --> Synthesize speech on command line. You can either use your trained model or choose a model from the provided list. If you don't specify any models, then it uses LJSpeech based English model. #### Single Speaker Models - List provided models: ``` $ tts --list_models ``` - Get model info (for both tts_models and vocoder_models): - Query by type/name: The model_info_by_name uses the name as it from the --list_models. ``` $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>" ``` For example: ``` $ tts --model_info_by_name tts_models/tr/common-voice/glow-tts $ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2 ``` - Query by type/idx: The model_query_idx uses the corresponding idx from --list_models. ``` $ tts --model_info_by_idx "<model_type>/<model_query_idx>" ``` For example: ``` $ tts --model_info_by_idx tts_models/3 ``` - Query info for model info by full name: ``` $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>" ``` - Run TTS with default models: ``` $ tts --text "Text for TTS" --out_path output/path/speech.wav ``` - Run TTS and pipe out the generated TTS wav file data: ``` $ tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay ``` - Run a TTS model with its default vocoder model: ``` $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav ``` For example: ``` $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav ``` - Run with specific TTS and vocoder models from the list: ``` $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav ``` For example: ``` $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav ``` - Run your own TTS model (Using Griffin-Lim Vocoder): ``` $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav ``` - Run your own TTS and Vocoder models: ``` $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json ``` #### Multi-speaker Models - List the available speakers and choose a <speaker_id> among them: ``` $ tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs ``` - Run the multi-speaker TTS model with the target speaker ID: ``` $ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id> ``` - Run your own multi-speaker TTS model: ``` $ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id> ``` ### Voice Conversion Models ``` $ tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav> ``` <!-- end-tts-readme --> ## Directory Structure ``` |- notebooks/ (Jupyter Notebooks for model evaluation, parameter selection and data analysis.) |- utils/ (common utilities.) |- TTS |- bin/ (folder for all the executables.) |- train*.py (train your target model.) |- ... |- tts/ (text to speech models) |- layers/ (model layer definitions) |- models/ (model definitions) |- utils/ (model specific utilities.) |- speaker_encoder/ (Speaker Encoder models.) |- (same) |- vocoder/ (Vocoder models.) |- (same) ```
0
coqui_public_repos/TTS/TTS
coqui_public_repos/TTS/TTS/bin/eval_encoder.py
import argparse from argparse import RawTextHelpFormatter import torch from tqdm import tqdm from TTS.config import load_config from TTS.tts.datasets import load_tts_samples from TTS.tts.utils.speakers import SpeakerManager def compute_encoder_accuracy(dataset_items, encoder_manager): class_name_key = encoder_manager.encoder_config.class_name_key map_classid_to_classname = getattr(encoder_manager.encoder_config, "map_classid_to_classname", None) class_acc_dict = {} # compute embeddings for all wav_files for item in tqdm(dataset_items): class_name = item[class_name_key] wav_file = item["audio_file"] # extract the embedding embedd = encoder_manager.compute_embedding_from_clip(wav_file) if encoder_manager.encoder_criterion is not None and map_classid_to_classname is not None: embedding = torch.FloatTensor(embedd).unsqueeze(0) if encoder_manager.use_cuda: embedding = embedding.cuda() class_id = encoder_manager.encoder_criterion.softmax.inference(embedding).item() predicted_label = map_classid_to_classname[str(class_id)] else: predicted_label = None if class_name is not None and predicted_label is not None: is_equal = int(class_name == predicted_label) if class_name not in class_acc_dict: class_acc_dict[class_name] = [is_equal] else: class_acc_dict[class_name].append(is_equal) else: raise RuntimeError("Error: class_name or/and predicted_label are None") acc_avg = 0 for key, values in class_acc_dict.items(): acc = sum(values) / len(values) print("Class", key, "Accuracy:", acc) acc_avg += acc print("Average Accuracy:", acc_avg / len(class_acc_dict)) if __name__ == "__main__": parser = argparse.ArgumentParser( description="""Compute the accuracy of the encoder.\n\n""" """ Example runs: python TTS/bin/eval_encoder.py emotion_encoder_model.pth emotion_encoder_config.json dataset_config.json """, formatter_class=RawTextHelpFormatter, ) parser.add_argument("model_path", type=str, help="Path to model checkpoint file.") parser.add_argument( "config_path", type=str, help="Path to model config file.", ) parser.add_argument( "config_dataset_path", type=str, help="Path to dataset config file.", ) parser.add_argument("--use_cuda", type=bool, help="flag to set cuda.", default=True) parser.add_argument("--eval", type=bool, help="compute eval.", default=True) args = parser.parse_args() c_dataset = load_config(args.config_dataset_path) meta_data_train, meta_data_eval = load_tts_samples(c_dataset.datasets, eval_split=args.eval) items = meta_data_train + meta_data_eval enc_manager = SpeakerManager( encoder_model_path=args.model_path, encoder_config_path=args.config_path, use_cuda=args.use_cuda ) compute_encoder_accuracy(items, enc_manager)
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions/const/const64-fst.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #include <fst/fst.h> #include <fst/const-fst.h> namespace fst { static FstRegisterer<ConstFst<StdArc, uint64>> ConstFst_StdArc_uint64_registerer; static FstRegisterer<ConstFst<LogArc, uint64>> ConstFst_LogArc_uint64_registerer; static FstRegisterer<ConstFst<Log64Arc, uint64>> ConstFst_Log64Arc_uint64_registerer; } // namespace fst
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include/fst
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include/fst/script/closure.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #ifndef FST_SCRIPT_CLOSURE_H_ #define FST_SCRIPT_CLOSURE_H_ #include <utility> #include <fst/closure.h> #include <fst/script/fst-class.h> namespace fst { namespace script { using ClosureArgs = std::pair<MutableFstClass *, const ClosureType>; template <class Arc> void Closure(ClosureArgs *args) { MutableFst<Arc> *fst = std::get<0>(*args)->GetMutableFst<Arc>(); Closure(fst, std::get<1>(*args)); } void Closure(MutableFstClass *ofst, ClosureType closure_type); } // namespace script } // namespace fst #endif // FST_SCRIPT_CLOSURE_H_
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-electronjs_v10.1_multiarchpkg-win-tflite-opt.yml
build: template_file: test-win-opt-base.tyml dependencies: - "node-package-tflite" - "test-training_16k-linux-amd64-py36m-opt" test_model_task: "test-training_16k-linux-amd64-py36m-opt" system_setup: > ${system.sox_win} && ${nodejs.win.prep_12} args: tests_cmdline: "${system.homedir.win}/DeepSpeech/ds/taskcluster/tc-electron_tflite-tests.sh 12.x 10.1.0 16k" metadata: name: "DeepSpeech Windows AMD64 TFLite ElectronJS MultiArch Package v10.1 tests" description: "Testing DeepSpeech for Windows/AMD64 on ElectronJS MultiArch Package v10.1, TFLite only, optimized version"
0
coqui_public_repos/STT/native_client/kenlm
coqui_public_repos/STT/native_client/kenlm/util/file_piece.cc
#include "file_piece.hh" #include "double-conversion/double-conversion.h" #include "exception.hh" #include "file.hh" #include "mmap.hh" #if defined(_WIN32) || defined(_WIN64) #include <io.h> #else #include <unistd.h> #endif #include <algorithm> #include <cassert> #include <cerrno> #include <cmath> #include <cstdlib> #include <iostream> #include <limits> #include <string> #include <fcntl.h> #include <sys/types.h> #include <sys/stat.h> #if defined(_WIN32) || defined(_WIN64) #include <math.h> #endif namespace util { namespace { const uint64_t kPageSize = SizePage(); } ParseNumberException::ParseNumberException(StringPiece value) throw() { *this << "Could not parse \"" << value << "\" into a "; } LineIterator &LineIterator::operator++() { if (!backing_->ReadLineOrEOF(line_, delim_)) backing_ = NULL; return *this; } FilePiece::FilePiece(const char *name, std::ostream *show_progress, std::size_t min_buffer) : file_(OpenReadOrThrow(name)), total_size_(SizeFile(file_.get())), progress_(total_size_, total_size_ == kBadSize ? NULL : show_progress, std::string("Reading ") + name) { Initialize(name, show_progress, min_buffer); } namespace { std::string NamePossiblyFind(int fd, const char *name) { if (name) return name; return NameFromFD(fd); } } // namespace FilePiece::FilePiece(int fd, const char *name, std::ostream *show_progress, std::size_t min_buffer) : file_(fd), total_size_(SizeFile(file_.get())), progress_(total_size_, total_size_ == kBadSize ? NULL : show_progress, std::string("Reading ") + NamePossiblyFind(fd, name)) { Initialize(NamePossiblyFind(fd, name).c_str(), show_progress, min_buffer); } FilePiece::FilePiece(std::istream &stream, const char * /*name*/, std::size_t min_buffer) : total_size_(kBadSize) { InitializeNoRead("istream", min_buffer); fallback_to_read_ = true; HugeMalloc(default_map_size_, false, data_); position_ = data_.begin(); position_end_ = position_; fell_back_.Reset(stream); } StringPiece FilePiece::ReadLine(char delim, bool strip_cr) { std::size_t skip = 0; while (true) { const char *i = std::find(position_ + skip, position_end_, delim); if (UTIL_LIKELY(i != position_end_)) { // End of line. // Take 1 byte off the end if it's an unwanted carriage return. const std::size_t subtract_cr = ( (strip_cr && i > position_ && *(i - 1) == '\r') ? 1 : 0); StringPiece ret(position_, i - position_ - subtract_cr); position_ = i + 1; return ret; } if (at_end_) { if (position_ == position_end_) { Shift(); } return Consume(position_end_); } skip = position_end_ - position_; Shift(); } } bool FilePiece::ReadLineOrEOF(StringPiece &to, char delim, bool strip_cr) { try { to = ReadLine(delim, strip_cr); } catch (const util::EndOfFileException &e) { return false; } return true; } float FilePiece::ReadFloat() { return ReadNumber<float>(); } double FilePiece::ReadDouble() { return ReadNumber<double>(); } long int FilePiece::ReadLong() { return ReadNumber<long int>(); } unsigned long int FilePiece::ReadULong() { return ReadNumber<unsigned long int>(); } // Factored out so that istream can call this. void FilePiece::InitializeNoRead(const char *name, std::size_t min_buffer) { file_name_ = name; default_map_size_ = kPageSize * std::max<std::size_t>((min_buffer / kPageSize + 1), 2); position_ = NULL; position_end_ = NULL; mapped_offset_ = 0; at_end_ = false; } void FilePiece::Initialize(const char *name, std::ostream *show_progress, std::size_t min_buffer) { InitializeNoRead(name, min_buffer); uint64_t current_offset; bool valid_current_offset; try { current_offset = AdvanceOrThrow(file_.get(), 0); valid_current_offset = true; } catch (const FDException &) { current_offset = 0; valid_current_offset = false; } // So the assertion in TransitionToRead passes fallback_to_read_ = false; if (total_size_ == kBadSize || !valid_current_offset) { if (show_progress) *show_progress << "File " << name << " isn't normal. Using slower read() instead of mmap(). No progress bar." << std::endl; TransitionToRead(); } else { mapped_offset_ = current_offset; } Shift(); // gzip detect. if ((position_end_ >= position_ + ReadCompressed::kMagicSize) && ReadCompressed::DetectCompressedMagic(position_)) { if (!fallback_to_read_) { at_end_ = false; TransitionToRead(); } } } namespace { static const kenlm_double_conversion::StringToDoubleConverter kConverter( kenlm_double_conversion::StringToDoubleConverter::ALLOW_TRAILING_JUNK | kenlm_double_conversion::StringToDoubleConverter::ALLOW_LEADING_SPACES, std::numeric_limits<double>::quiet_NaN(), std::numeric_limits<double>::quiet_NaN(), "inf", "NaN"); StringPiece FirstToken(StringPiece str) { const char *i; for (i = str.data(); i != str.data() + str.size(); ++i) { if (kSpaces[(unsigned char)*i]) break; } return StringPiece(str.data(), i - str.data()); } // std::isnan is technically C++11 not C++98. But in practice this is a problem for visual studio. template <class T> inline int CrossPlatformIsNaN(T value) { #if defined(_WIN32) || defined(_WIN64) return isnan(value); #else return std::isnan(value); #endif } const char *ParseNumber(StringPiece str, float &out) { int count; out = kConverter.StringToFloat(str.data(), str.size(), &count); UTIL_THROW_IF_ARG(CrossPlatformIsNaN(out) && str != "NaN" && str != "nan", ParseNumberException, (FirstToken(str)), "float"); return str.data() + count; } const char *ParseNumber(StringPiece str, double &out) { int count; out = kConverter.StringToDouble(str.data(), str.size(), &count); UTIL_THROW_IF_ARG(CrossPlatformIsNaN(out) && str != "NaN" && str != "nan", ParseNumberException, (FirstToken(str)), "double"); return str.data() + count; } const char *ParseNumber(StringPiece str, long int &out) { char *end; errno = 0; out = strtol(str.data(), &end, 10); UTIL_THROW_IF_ARG(errno || (end == str.data()), ParseNumberException, (FirstToken(str)), "long int"); return end; } const char *ParseNumber(StringPiece str, unsigned long int &out) { char *end; errno = 0; out = strtoul(str.data(), &end, 10); UTIL_THROW_IF_ARG(errno || (end == str.data()), ParseNumberException, (FirstToken(str)), "unsigned long int"); return end; } } // namespace template <class T> T FilePiece::ReadNumber() { SkipSpaces(); while (last_space_ < position_) { if (UTIL_UNLIKELY(at_end_)) { // Hallucinate a null off the end of the file. std::string buffer(position_, position_end_); T ret; // Has to be null-terminated. const char *begin = buffer.c_str(); const char *end = ParseNumber(StringPiece(begin, buffer.size()), ret); position_ += end - begin; return ret; } Shift(); } T ret; position_ = ParseNumber(StringPiece(position_, last_space_ - position_), ret); return ret; } const char *FilePiece::FindDelimiterOrEOF(const bool *delim) { std::size_t skip = 0; while (true) { for (const char *i = position_ + skip; i < position_end_; ++i) { if (delim[static_cast<unsigned char>(*i)]) return i; } if (at_end_) { if (position_ == position_end_) Shift(); return position_end_; } skip = position_end_ - position_; Shift(); } } void FilePiece::Shift() { if (at_end_) { progress_.Finished(); throw EndOfFileException(); } uint64_t desired_begin = position_ - data_.begin() + mapped_offset_; if (!fallback_to_read_) MMapShift(desired_begin); // Notice an mmap failure might set the fallback. if (fallback_to_read_) ReadShift(); for (last_space_ = position_end_ - 1; last_space_ >= position_; --last_space_) { if (kSpaces[static_cast<unsigned char>(*last_space_)]) break; } } void FilePiece::UpdateProgress() { if (!fallback_to_read_) progress_.Set(position_ - data_.begin() + mapped_offset_); } void FilePiece::MMapShift(uint64_t desired_begin) { // Use mmap. uint64_t ignore = desired_begin % kPageSize; // Duplicate request for Shift means give more data. if (position_ == data_.begin() + ignore && position_) { default_map_size_ *= 2; } // Local version so that in case of failure it doesn't overwrite the class variable. uint64_t mapped_offset = desired_begin - ignore; uint64_t mapped_size; if (default_map_size_ >= static_cast<std::size_t>(total_size_ - mapped_offset)) { at_end_ = true; mapped_size = total_size_ - mapped_offset; } else { mapped_size = default_map_size_; } // Forcibly clear the existing mmap first. data_.reset(); try { MapRead(POPULATE_OR_LAZY, *file_, mapped_offset, mapped_size, data_); } catch (const util::ErrnoException &) { if (desired_begin) { SeekOrThrow(*file_, desired_begin); } // The mmap was scheduled to end the file, but now we're going to read it. at_end_ = false; TransitionToRead(); return; } mapped_offset_ = mapped_offset; position_ = data_.begin() + ignore; position_end_ = data_.begin() + mapped_size; progress_.Set(desired_begin); } void FilePiece::TransitionToRead() { assert(!fallback_to_read_); fallback_to_read_ = true; data_.reset(); HugeMalloc(default_map_size_, false, data_); position_ = data_.begin(); position_end_ = position_; try { fell_back_.Reset(file_.release()); } catch (util::Exception &e) { e << " in file " << file_name_; throw; } } void FilePiece::ReadShift() { assert(fallback_to_read_); // Bytes [data_.begin(), position_) have been consumed. // Bytes [position_, position_end_) have been read into the buffer. // Start at the beginning of the buffer if there's nothing useful in it. if (position_ == position_end_) { mapped_offset_ += (position_end_ - data_.begin()); position_ = data_.begin(); position_end_ = position_; } std::size_t already_read = position_end_ - data_.begin(); if (already_read == default_map_size_) { if (position_ == data_.begin()) { // Buffer too small. std::size_t valid_length = position_end_ - position_; default_map_size_ *= 2; HugeRealloc(default_map_size_, false, data_); position_ = data_.begin(); position_end_ = position_ + valid_length; } else { std::size_t moving = position_end_ - position_; memmove(data_.get(), position_, moving); position_ = data_.begin(); position_end_ = position_ + moving; already_read = moving; } } std::size_t read_return = fell_back_.Read(static_cast<uint8_t*>(data_.get()) + already_read, default_map_size_ - already_read); progress_.Set(fell_back_.RawAmount()); if (read_return == 0) { at_end_ = true; } position_end_ += read_return; } } // namespace util
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst/script/weight-class.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Represents a generic weight in an FST; that is, represents a specific type // of weight underneath while hiding that type from a client. #ifndef FST_SCRIPT_WEIGHT_CLASS_H_ #define FST_SCRIPT_WEIGHT_CLASS_H_ #include <memory> #include <ostream> #include <string> #include <fst/arc.h> #include <fst/generic-register.h> #include <fst/util.h> #include <fst/weight.h> namespace fst { namespace script { class WeightImplBase { public: virtual WeightImplBase *Copy() const = 0; virtual void Print(std::ostream *o) const = 0; virtual const string &Type() const = 0; virtual string ToString() const = 0; virtual bool operator==(const WeightImplBase &other) const = 0; virtual bool operator!=(const WeightImplBase &other) const = 0; virtual WeightImplBase &PlusEq(const WeightImplBase &other) = 0; virtual WeightImplBase &TimesEq(const WeightImplBase &other) = 0; virtual WeightImplBase &DivideEq(const WeightImplBase &other) = 0; virtual WeightImplBase &PowerEq(size_t n) = 0; virtual ~WeightImplBase() {} }; template <class W> class WeightClassImpl : public WeightImplBase { public: explicit WeightClassImpl(const W &weight) : weight_(weight) {} WeightClassImpl<W> *Copy() const final { return new WeightClassImpl<W>(weight_); } const string &Type() const final { return W::Type(); } void Print(std::ostream *ostrm) const final { *ostrm << weight_; } string ToString() const final { string str; WeightToStr(weight_, &str); return str; } bool operator==(const WeightImplBase &other) const final { const auto *typed_other = static_cast<const WeightClassImpl<W> *>(&other); return weight_ == typed_other->weight_; } bool operator!=(const WeightImplBase &other) const final { return !(*this == other); } WeightClassImpl<W> &PlusEq(const WeightImplBase &other) final { const auto *typed_other = static_cast<const WeightClassImpl<W> *>(&other); weight_ = Plus(weight_, typed_other->weight_); return *this; } WeightClassImpl<W> &TimesEq(const WeightImplBase &other) final { const auto *typed_other = static_cast<const WeightClassImpl<W> *>(&other); weight_ = Times(weight_, typed_other->weight_); return *this; } WeightClassImpl<W> &DivideEq(const WeightImplBase &other) final { const auto *typed_other = static_cast<const WeightClassImpl<W> *>(&other); weight_ = Divide(weight_, typed_other->weight_); return *this; } WeightClassImpl<W> &PowerEq(size_t n) final { weight_ = Power(weight_, n); return *this; } W *GetImpl() { return &weight_; } private: W weight_; }; class WeightClass { public: WeightClass() = default; template <class W> explicit WeightClass(const W &weight) : impl_(new WeightClassImpl<W>(weight)) {} template <class W> explicit WeightClass(const WeightClassImpl<W> &impl) : impl_(new WeightClassImpl<W>(impl)) {} WeightClass(const string &weight_type, const string &weight_str); WeightClass(const WeightClass &other) : impl_(other.impl_ ? other.impl_->Copy() : nullptr) {} WeightClass &operator=(const WeightClass &other) { impl_.reset(other.impl_ ? other.impl_->Copy() : nullptr); return *this; } static constexpr const char *__ZERO__ = "__ZERO__"; // NOLINT static WeightClass Zero(const string &weight_type); static constexpr const char *__ONE__ = "__ONE__"; // NOLINT static WeightClass One(const string &weight_type); static constexpr const char *__NOWEIGHT__ = "__NOWEIGHT__"; // NOLINT static WeightClass NoWeight(const string &weight_type); template <class W> const W *GetWeight() const { if (W::Type() != impl_->Type()) { return nullptr; } else { auto *typed_impl = static_cast<WeightClassImpl<W> *>(impl_.get()); return typed_impl->GetImpl(); } } string ToString() const { return (impl_) ? impl_->ToString() : "none"; } const string &Type() const { if (impl_) return impl_->Type(); static const string *const no_type = new string("none"); return *no_type; } bool WeightTypesMatch(const WeightClass &other, const string &op_name) const; friend bool operator==(const WeightClass &lhs, const WeightClass &rhs); friend WeightClass Plus(const WeightClass &lhs, const WeightClass &rhs); friend WeightClass Times(const WeightClass &lhs, const WeightClass &rhs); friend WeightClass Divide(const WeightClass &lhs, const WeightClass &rhs); friend WeightClass Power(const WeightClass &w, size_t n); private: const WeightImplBase *GetImpl() const { return impl_.get(); } WeightImplBase *GetImpl() { return impl_.get(); } std::unique_ptr<WeightImplBase> impl_; friend std::ostream &operator<<(std::ostream &o, const WeightClass &c); }; bool operator==(const WeightClass &lhs, const WeightClass &rhs); bool operator!=(const WeightClass &lhs, const WeightClass &rhs); WeightClass Plus(const WeightClass &lhs, const WeightClass &rhs); WeightClass Times(const WeightClass &lhs, const WeightClass &rhs); WeightClass Divide(const WeightClass &lhs, const WeightClass &rhs); WeightClass Power(const WeightClass &w, size_t n); std::ostream &operator<<(std::ostream &o, const WeightClass &c); // Registration for generic weight types. using StrToWeightImplBaseT = WeightImplBase *(*)(const string &str, const string &src, size_t nline); template <class W> WeightImplBase *StrToWeightImplBase(const string &str, const string &src, size_t nline) { if (str == WeightClass::__ZERO__) return new WeightClassImpl<W>(W::Zero()); else if (str == WeightClass::__ONE__) return new WeightClassImpl<W>(W::One()); else if (str == WeightClass::__NOWEIGHT__) return new WeightClassImpl<W>(W::NoWeight()); return new WeightClassImpl<W>(StrToWeight<W>(str, src, nline)); } class WeightClassRegister : public GenericRegister<string, StrToWeightImplBaseT, WeightClassRegister> { protected: string ConvertKeyToSoFilename(const string &key) const final { string legal_type(key); ConvertToLegalCSymbol(&legal_type); return legal_type + ".so"; } }; using WeightClassRegisterer = GenericRegisterer<WeightClassRegister>; // Internal version; needs to be called by wrapper in order for macro args to // expand. #define REGISTER_FST_WEIGHT__(Weight, line) \ static WeightClassRegisterer weight_registerer##_##line( \ Weight::Type(), StrToWeightImplBase<Weight>) // This layer is where __FILE__ and __LINE__ are expanded. #define REGISTER_FST_WEIGHT_EXPANDER(Weight, line) \ REGISTER_FST_WEIGHT__(Weight, line) // Macro for registering new weight types; clients call this. #define REGISTER_FST_WEIGHT(Weight) \ REGISTER_FST_WEIGHT_EXPANDER(Weight, __LINE__) } // namespace script } // namespace fst #endif // FST_SCRIPT_WEIGHT_CLASS_H_
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-nodejs_13x_8k_tflite-linux-amd64-prod-opt.yml
build: template_file: test-linux-opt-base.tyml docker_image: "ubuntu:16.04" dependencies: - "linux-amd64-tflite-opt" system_setup: > ${nodejs.packages_xenial.prep_13} && ${nodejs.packages_xenial.apt_pinning} && apt-get -qq update && apt-get -qq -y install ${nodejs.packages_xenial.apt} args: tests_cmdline: "${system.homedir.linux}/DeepSpeech/ds/taskcluster/tc-node_tflite-tests-prod.sh 13.x 8k" workerType: "${docker.dsTests}" metadata: name: "DeepSpeech Linux AMD64 TFLite NodeJS 13.x prod tests (8kHz)" description: "Testing DeepSpeech for Linux/AMD64 on NodeJS v13.x on prod model, TFLite, optimized version (8kHz)"
0
coqui_public_repos/TTS/docs/source
coqui_public_repos/TTS/docs/source/models/tortoise.md
# 🐢 Tortoise Tortoise is a very expressive TTS system with impressive voice cloning capabilities. It is based on an GPT like autogressive acoustic model that converts input text to discritized acoustic tokens, a diffusion model that converts these tokens to melspectrogram frames and a Univnet vocoder to convert the spectrograms to the final audio signal. The important downside is that Tortoise is very slow compared to the parallel TTS models like VITS. Big thanks to 👑[@manmay-nakhashi](https://github.com/manmay-nakhashi) who helped us implement Tortoise in 🐸TTS. Example use: ```python from TTS.tts.configs.tortoise_config import TortoiseConfig from TTS.tts.models.tortoise import Tortoise config = TortoiseConfig() model = Tortoise.init_from_config(config) model.load_checkpoint(config, checkpoint_dir="paths/to/models_dir/", eval=True) # with random speaker output_dict = model.synthesize(text, config, speaker_id="random", extra_voice_dirs=None, **kwargs) # cloning a speaker output_dict = model.synthesize(text, config, speaker_id="speaker_n", extra_voice_dirs="path/to/speaker_n/", **kwargs) ``` Using 🐸TTS API: ```python from TTS.api import TTS tts = TTS("tts_models/en/multi-dataset/tortoise-v2") # cloning `lj` voice from `TTS/tts/utils/assets/tortoise/voices/lj` # with custom inference settings overriding defaults. tts.tts_to_file(text="Hello, my name is Manmay , how are you?", file_path="output.wav", voice_dir="path/to/tortoise/voices/dir/", speaker="lj", num_autoregressive_samples=1, diffusion_iterations=10) # Using presets with the same voice tts.tts_to_file(text="Hello, my name is Manmay , how are you?", file_path="output.wav", voice_dir="path/to/tortoise/voices/dir/", speaker="lj", preset="ultra_fast") # Random voice generation tts.tts_to_file(text="Hello, my name is Manmay , how are you?", file_path="output.wav") ``` Using 🐸TTS Command line: ```console # cloning the `lj` voice tts --model_name tts_models/en/multi-dataset/tortoise-v2 \ --text "This is an example." \ --out_path "output.wav" \ --voice_dir path/to/tortoise/voices/dir/ \ --speaker_idx "lj" \ --progress_bar True # Random voice generation tts --model_name tts_models/en/multi-dataset/tortoise-v2 \ --text "This is an example." \ --out_path "output.wav" \ --progress_bar True ``` ## Important resources & papers - Original Repo: https://github.com/neonbjb/tortoise-tts - Faster implementation: https://github.com/152334H/tortoise-tts-fast - Univnet: https://arxiv.org/abs/2106.07889 - Latent Diffusion:https://arxiv.org/abs/2112.10752 - DALL-E: https://arxiv.org/abs/2102.12092 ## TortoiseConfig ```{eval-rst} .. autoclass:: TTS.tts.configs.tortoise_config.TortoiseConfig :members: ``` ## TortoiseArgs ```{eval-rst} .. autoclass:: TTS.tts.models.tortoise.TortoiseArgs :members: ``` ## Tortoise Model ```{eval-rst} .. autoclass:: TTS.tts.models.tortoise.Tortoise :members: ```
0
coqui_public_repos/STT
coqui_public_repos/STT/doc/SUPPORT.rst
.. _support: Contact/Getting Help ==================== There are several ways to contact us or to get help: #. `GitHub Discussions <https://github.com/coqui-ai/STT/discussions>`_ - `GitHub Discussions <https://github.com/coqui-ai/STT/discussions>`_ is the first place to look. Search for keywords related to your question or problem to see if someone else has run into it already. If you can't find anything relevant there, search on our `issue tracker <https://github.com/coqui-ai/STT/issues>`_ to see if there is an existing issue about your problem. #. `Matrix chat <https://matrix.to/#/+coqui:matrix.org>`_ - If your question is not addressed on `GitHub Discussions <https://github.com/coqui-ai/STT/discussions>`_\ , you can contact us on the ``#stt:matrix.org`` `channel on Matrix <https://matrix.to/#/#stt:matrix.org?via=matrix.org>`_. #. `Create a new issue <https://github.com/coqui-ai/STT/issues>`_ - Finally, if you have a bug report or a feature request that isn't already covered by an existing issue, please open an issue in our repo and fill the appropriate information on your hardware and software setup.
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/script/getters.cc
#include <fst/script/getters.h> namespace fst { namespace script { bool GetArcSortType(const string &str, ArcSortType *sort_type) { if (str == "ilabel") { *sort_type = ILABEL_SORT; } else if (str == "olabel") { *sort_type = OLABEL_SORT; } else { return false; } return true; } bool GetComposeFilter(const string &str, ComposeFilter *compose_filter) { if (str == "alt_sequence") { *compose_filter = ALT_SEQUENCE_FILTER; } else if (str == "auto") { *compose_filter = AUTO_FILTER; } else if (str == "match") { *compose_filter = MATCH_FILTER; } else if (str == "null") { *compose_filter = NULL_FILTER; } else if (str == "sequence") { *compose_filter = SEQUENCE_FILTER; } else if (str == "trivial") { *compose_filter = TRIVIAL_FILTER; } else { return false; } return true; } bool GetDeterminizeType(const string &str, DeterminizeType *det_type) { if (str == "functional") { *det_type = DETERMINIZE_FUNCTIONAL; } else if (str == "nonfunctional") { *det_type = DETERMINIZE_NONFUNCTIONAL; } else if (str == "disambiguate") { *det_type = DETERMINIZE_DISAMBIGUATE; } else { return false; } return true; } bool GetMapType(const string &str, MapType *map_type) { if (str == "arc_sum") { *map_type = ARC_SUM_MAPPER; } else if (str == "arc_unique") { *map_type = ARC_UNIQUE_MAPPER; } else if (str == "identity") { *map_type = IDENTITY_MAPPER; } else if (str == "input_epsilon") { *map_type = INPUT_EPSILON_MAPPER; } else if (str == "invert") { *map_type = INVERT_MAPPER; } else if (str == "output_epsilon") { *map_type = OUTPUT_EPSILON_MAPPER; } else if (str == "plus") { *map_type = PLUS_MAPPER; } else if (str == "power") { *map_type = POWER_MAPPER; } else if (str == "quantize") { *map_type = QUANTIZE_MAPPER; } else if (str == "rmweight") { *map_type = RMWEIGHT_MAPPER; } else if (str == "superfinal") { *map_type = SUPERFINAL_MAPPER; } else if (str == "times") { *map_type = TIMES_MAPPER; } else if (str == "to_log") { *map_type = TO_LOG_MAPPER; } else if (str == "to_log64") { *map_type = TO_LOG64_MAPPER; } else if (str == "to_std" || str == "to_standard") { *map_type = TO_STD_MAPPER; } else { return false; } return true; } bool GetRandArcSelection(const string &str, RandArcSelection *ras) { if (str == "uniform") { *ras = UNIFORM_ARC_SELECTOR; } else if (str == "log_prob") { *ras = LOG_PROB_ARC_SELECTOR; } else if (str == "fast_log_prob") { *ras = FAST_LOG_PROB_ARC_SELECTOR; } else { return false; } return true; } bool GetQueueType(const string &str, QueueType *queue_type) { if (str == "auto") { *queue_type = AUTO_QUEUE; } else if (str == "fifo") { *queue_type = FIFO_QUEUE; } else if (str == "lifo") { *queue_type = LIFO_QUEUE; } else if (str == "shortest") { *queue_type = SHORTEST_FIRST_QUEUE; } else if (str == "state") { *queue_type = STATE_ORDER_QUEUE; } else if (str == "top") { *queue_type = TOP_ORDER_QUEUE; } else { return false; } return true; } bool GetReplaceLabelType(const string &str, bool epsilon_on_replace, ReplaceLabelType *rlt) { if (epsilon_on_replace || str == "neither") { *rlt = REPLACE_LABEL_NEITHER; } else if (str == "input") { *rlt = REPLACE_LABEL_INPUT; } else if (str == "output") { *rlt = REPLACE_LABEL_OUTPUT; } else if (str == "both") { *rlt = REPLACE_LABEL_BOTH; } else { return false; } return true; } } // namespace script } // namespace fst
0
coqui_public_repos/STT
coqui_public_repos/STT/ci_scripts/android-armv7-build.sh
#!/bin/bash set -xe source $(dirname "$0")/all-vars.sh source $(dirname "$0")/all-utils.sh source $(dirname "$0")/build-utils.sh source $(dirname "$0")/tf-vars.sh BAZEL_TARGETS=" //native_client:libstt.so //native_client:libkenlm.so //native_client:generate_scorer_package " BAZEL_BUILD_FLAGS="--config=android_arm ${BAZEL_EXTRA_FLAGS}" SYSTEM_TARGET= SYSTEM_RASPBIAN= do_bazel_build do_stt_ndk_build "armeabi-v7a"
0
coqui_public_repos/STT
coqui_public_repos/STT/ci_scripts/host-build.sh
#!/bin/bash set -xe macos_target_arch=$1 SYSTEM_TARGET=host if [ "$(uname)-$(uname -m)" = "Darwin-x86_64" -a "${macos_target_arch}" = "arm64" ]; then SYSTEM_TARGET="darwin-arm64" fi source $(dirname "$0")/all-vars.sh source $(dirname "$0")/all-utils.sh source $(dirname "$0")/build-utils.sh source $(dirname "$0")/tf-vars.sh BAZEL_TARGETS=" //native_client:libstt.so //native_client:libkenlm.so //native_client:generate_scorer_package " BAZEL_BUILD_FLAGS="${BAZEL_OPT_FLAGS} ${BAZEL_EXTRA_FLAGS}" do_bazel_build do_stt_binary_build
0
coqui_public_repos/xtts-streaming-server
coqui_public_repos/xtts-streaming-server/test/requirements.txt
requests==2.31.0 gradio==3.50.2
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions/ngram/ngram-fst.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #include <fst/extensions/ngram/ngram-fst.h> #include <sys/types.h> #include <fst/arc.h> #include <fst/register.h> using fst::NGramFst; using fst::StdArc; using fst::LogArc; REGISTER_FST(NGramFst, StdArc); REGISTER_FST(NGramFst, LogArc);
0
coqui_public_repos/inference-engine/third_party/kenlm/lm
coqui_public_repos/inference-engine/third_party/kenlm/lm/interpolate/bounded_sequence_encoding_test.cc
#include "lm/interpolate/bounded_sequence_encoding.hh" #include "util/scoped.hh" #define BOOST_TEST_MODULE BoundedSequenceEncodingTest #include <boost/test/unit_test.hpp> namespace lm { namespace interpolate { namespace { void ExhaustiveTest(unsigned char *bound_begin, unsigned char *bound_end) { BoundedSequenceEncoding enc(bound_begin, bound_end); util::scoped_malloc backing(util::MallocOrThrow(enc.EncodedLength())); std::vector<unsigned char> values(bound_end - bound_begin), out(bound_end - bound_begin); while (true) { enc.Encode(&values[0], backing.get()); enc.Decode(backing.get(), &out[0]); for (std::size_t i = 0; i != values.size(); ++i) { BOOST_CHECK_EQUAL(values[i], out[i]); } for (std::size_t i = 0;; ++i) { if (i == values.size()) return; ++values[i]; if (values[i] < bound_begin[i]) break; values[i] = 0; } } } void CheckEncodeDecode(unsigned char *bounds, unsigned char *input, unsigned char *output, std::size_t len) { BoundedSequenceEncoding encoder(bounds, bounds + len); util::scoped_malloc backing(util::MallocOrThrow(encoder.EncodedLength())); encoder.Encode(input, backing.get()); encoder.Decode(backing.get(), output); for (std::size_t i = 0; i < len; ++i) { BOOST_CHECK_EQUAL(input[i], output[i]); } } BOOST_AUTO_TEST_CASE(Exhaustive) { unsigned char bounds[] = {5, 2, 3, 9, 7, 20, 8}; ExhaustiveTest(bounds, bounds + sizeof(bounds) / sizeof(unsigned char)); } BOOST_AUTO_TEST_CASE(LessThan64) { unsigned char bounds[] = {255, 255, 255, 255, 255, 255, 255, 3}; unsigned char input[] = {172, 183, 254, 187, 96, 87, 65, 2}; unsigned char output[] = {0, 0, 0, 0, 0, 0, 0, 0}; std::size_t len = sizeof(bounds) / sizeof(unsigned char); assert(sizeof(input) / sizeof(unsigned char) == len); assert(sizeof(output) / sizeof(unsigned char) == len); CheckEncodeDecode(bounds, input, output, len); } BOOST_AUTO_TEST_CASE(Exactly64) { unsigned char bounds[] = {255, 255, 255, 255, 255, 255, 255, 255}; unsigned char input[] = {172, 183, 254, 187, 96, 87, 65, 16}; unsigned char output[] = {0, 0, 0, 0, 0, 0, 0, 0}; std::size_t len = sizeof(bounds) / sizeof(unsigned char); assert(sizeof(input) / sizeof(unsigned char) == len); assert(sizeof(output) / sizeof(unsigned char) == len); CheckEncodeDecode(bounds, input, output, len); } BOOST_AUTO_TEST_CASE(MoreThan64) { unsigned char bounds[] = {255, 255, 255, 255, 255, 255, 255, 255, 255}; unsigned char input[] = {172, 183, 254, 187, 96, 87, 65, 16, 137}; unsigned char output[] = {0, 0, 0, 0, 0, 0, 0, 0, 0}; std::size_t len = sizeof(bounds) / sizeof(unsigned char); assert(sizeof(input) / sizeof(unsigned char) == len); assert(sizeof(output) / sizeof(unsigned char) == len); CheckEncodeDecode(bounds, input, output, len); } }}} // namespaces
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/extensions
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/extensions/pdt/reverse.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Expands a PDT to an FST. #ifndef FST_EXTENSIONS_PDT_REVERSE_H_ #define FST_EXTENSIONS_PDT_REVERSE_H_ #include <vector> #include <fst/mutable-fst.h> #include <fst/relabel.h> #include <fst/reverse.h> namespace fst { // Reverses a pushdown transducer (PDT) encoded as an FST. template <class Arc, class RevArc> void Reverse(const Fst<Arc> &ifst, const std::vector< std::pair<typename Arc::Label, typename Arc::Label>> &parens, MutableFst<RevArc> *ofst) { using Label = typename Arc::Label; // Reverses FST component. Reverse(ifst, ofst); // Exchanges open and close parenthesis pairs. std::vector<std::pair<Label, Label>> relabel_pairs; relabel_pairs.reserve(2 * parens.size()); for (const auto &pair : parens) { relabel_pairs.emplace_back(pair.first, pair.second); relabel_pairs.emplace_back(pair.second, pair.first); } Relabel(ofst, relabel_pairs, relabel_pairs); } } // namespace fst #endif // FST_EXTENSIONS_PDT_REVERSE_H_
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/tc-python-tests-prod.sh
#!/bin/bash set -xe source $(dirname "$0")/tc-tests-utils.sh extract_python_versions "$1" "pyver" "pyver_pkg" "py_unicode_type" "pyconf" "pyalias" bitrate=$2 set_ldc_sample_filename "${bitrate}" model_source=${DEEPSPEECH_PROD_MODEL} model_name=$(basename "${model_source}") model_source_mmap=${DEEPSPEECH_PROD_MODEL_MMAP} model_name_mmap=$(basename "${model_source_mmap}") download_data virtualenv_activate "${pyalias}" "deepspeech" deepspeech_pkg_url=$(get_python_pkg_url ${pyver_pkg} ${py_unicode_type}) LD_LIBRARY_PATH=${PY37_LDPATH}:$LD_LIBRARY_PATH pip install --verbose --only-binary :all: --upgrade ${deepspeech_pkg_url} | cat run_prod_inference_tests "${bitrate}" run_prod_concurrent_stream_tests "${bitrate}" virtualenv_deactivate "${pyalias}" "deepspeech"
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/bin/fstclosure-main.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Creates the Kleene closure of an FST. #include <cstring> #include <memory> #include <string> #include <fst/flags.h> #include <fst/script/closure.h> #include <fst/script/getters.h> DECLARE_bool(closure_plus); int fstclosure_main(int argc, char **argv) { namespace s = fst::script; using fst::script::MutableFstClass; string usage = "Creates the Kleene closure of an FST.\n\n Usage: "; usage += argv[0]; usage += " [in.fst [out.fst]]\n"; std::set_new_handler(FailedNewHandler); SET_FLAGS(usage.c_str(), &argc, &argv, true); if (argc > 3) { ShowUsage(); return 1; } const string in_name = (argc > 1 && strcmp(argv[1], "-") != 0) ? argv[1] : ""; const string out_name = argc > 2 ? argv[2] : ""; std::unique_ptr<MutableFstClass> fst(MutableFstClass::Read(in_name, true)); if (!fst) return 1; s::Closure(fst.get(), s::GetClosureType(FLAGS_closure_plus)); return !fst->Write(out_name); }
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-linux-opt-tag-base.tyml
$if: '(event.event in build.allowed) && ((event.event != "tag") || (build.ref_match in event.head.ref))' then: taskId: ${taskcluster.taskId} provisionerId: ${taskcluster.docker.provisionerId} workerType: ${build.workerType} taskGroupId: ${taskcluster.taskGroupId} schedulerId: ${taskcluster.schedulerId} dependencies: $map: { $eval: build.dependencies } each(b): $eval: as_slugid(b) created: { $fromNow: '0 sec' } deadline: { $fromNow: '1 day' } expires: { $fromNow: '7 days' } payload: maxRunTime: { $eval: to_int(build.maxRunTime) } image: ${build.docker_image} env: $let: training: { $eval: as_slugid(build.test_model_task) } linux_amd64_build: { $eval: as_slugid("linux-amd64-cpu-opt") } linux_amd64_tflite: { $eval: as_slugid("linux-amd64-tflite-opt") } linux_amd64_ctc: { $eval: as_slugid("linux-amd64-ctc-opt") } in: DEEPSPEECH_TEST_MODEL: https://community-tc.services.mozilla.com/api/queue/v1/task/${training}/artifacts/public/output_graph.pb DEEPSPEECH_PROD_MODEL: https://github.com/reuben/DeepSpeech/releases/download/v0.7.0-alpha.3/output_graph.pb DEEPSPEECH_PROD_MODEL_MMAP: https://github.com/reuben/DeepSpeech/releases/download/v0.7.0-alpha.3/output_graph.pbmm DECODER_ARTIFACTS_ROOT: https://community-tc.services.mozilla.com/api/queue/v1/task/${linux_amd64_ctc}/artifacts/public PIP_DEFAULT_TIMEOUT: "60" EXPECTED_TENSORFLOW_VERSION: "${build.tensorflow_git_desc}" command: - "/bin/bash" - "--login" - "-cxe" - $let: extraSystemSetup: { $eval: strip(str(build.system_setup)) } in: > apt-get -qq update && apt-get -qq -y install curl python-simplejson git pixz sox sudo wget && ${extraSystemSetup} && adduser --system --home ${system.homedir.linux} ${system.username} && cd ${system.homedir.linux} && echo -e "#!/bin/bash\nset -xe\n env && id && mkdir ~/DeepSpeech/ && git clone --quiet ${event.head.repo.url} ~/DeepSpeech/ds/ && cd ~/DeepSpeech/ds && git checkout --quiet ${event.head.sha}&& mkdir -p ${system.homedir.linux}/pyenv-root/ && wget -O - ${system.pyenv.linux.url} | tar -C ${system.homedir.linux}/pyenv-root/ -xzf -" > /tmp/clone.sh && chmod +x /tmp/clone.sh && sudo -H -u ${system.username} /bin/bash /tmp/clone.sh && sudo -H -u ${system.username} --preserve-env /bin/bash ${build.args.tests_cmdline} artifacts: "public": type: "directory" path: "/tmp/artifacts/" expires: { $fromNow: '7 days' } metadata: name: ${build.metadata.name} description: ${build.metadata.description} owner: ${event.head.user.email} source: ${event.head.repo.url}
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/relabel.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Functions and classes to relabel an FST (either on input or output). #ifndef FST_RELABEL_H_ #define FST_RELABEL_H_ #include <string> #include <unordered_map> #include <utility> #include <vector> #include <fst/log.h> #include <fst/cache.h> #include <fst/test-properties.h> #include <unordered_map> namespace fst { // Relabels either the input labels or output labels. The old to // new labels are specified using a vector of std::pair<Label, Label>. // Any label associations not specified are assumed to be identity // mapping. The destination labels must be valid labels (e.g., not kNoLabel). template <class Arc> void Relabel( MutableFst<Arc> *fst, const std::vector<std::pair<typename Arc::Label, typename Arc::Label>> &ipairs, const std::vector<std::pair<typename Arc::Label, typename Arc::Label>> &opairs) { using Label = typename Arc::Label; const auto props = fst->Properties(kFstProperties, false); // Constructs label-to-label maps. std::unordered_map<Label, Label> input_map; for (auto &ipair : ipairs) input_map[ipair.first] = ipair.second; std::unordered_map<Label, Label> output_map; for (auto &opair : opairs) output_map[opair.first] = opair.second; for (StateIterator<MutableFst<Arc>> siter(*fst); !siter.Done(); siter.Next()) { for (MutableArcIterator<MutableFst<Arc>> aiter(fst, siter.Value()); !aiter.Done(); aiter.Next()) { auto arc = aiter.Value(); // Relabels input. auto it = input_map.find(arc.ilabel); if (it != input_map.end()) { if (it->second == kNoLabel) { FSTERROR() << "Input symbol ID " << arc.ilabel << " missing from target vocabulary"; fst->SetProperties(kError, kError); return; } arc.ilabel = it->second; } // Relabels output. it = output_map.find(arc.olabel); if (it != output_map.end()) { if (it->second == kNoLabel) { FSTERROR() << "Output symbol id " << arc.olabel << " missing from target vocabulary"; fst->SetProperties(kError, kError); return; } arc.olabel = it->second; } aiter.SetValue(arc); } } fst->SetProperties(RelabelProperties(props), kFstProperties); } // Relabels either the input labels or output labels. The old to // new labels are specified using pairs of old and new symbol tables. // The tables must contain (at least) all labels on the appropriate side of the // FST. If the 'unknown_i(o)symbol' is non-empty, it is used to label any // missing symbol in new_i(o)symbols table. template <class Arc> void Relabel(MutableFst<Arc> *fst, const SymbolTable *old_isymbols, const SymbolTable *new_isymbols, const string &unknown_isymbol, bool attach_new_isymbols, const SymbolTable *old_osymbols, const SymbolTable *new_osymbols, const string &unknown_osymbol, bool attach_new_osymbols) { using Label = typename Arc::Label; // Constructs vectors of input-side label pairs. std::vector<std::pair<Label, Label>> ipairs; if (old_isymbols && new_isymbols) { size_t num_missing_syms = 0; Label unknown_ilabel = kNoLabel; if (!unknown_isymbol.empty()) { unknown_ilabel = new_isymbols->Find(unknown_isymbol); if (unknown_ilabel == kNoLabel) { VLOG(1) << "Input symbol '" << unknown_isymbol << "' missing from target symbol table"; ++num_missing_syms; } } for (SymbolTableIterator siter(*old_isymbols); !siter.Done(); siter.Next()) { const auto old_index = siter.Value(); const auto symbol = siter.Symbol(); auto new_index = new_isymbols->Find(siter.Symbol()); if (new_index == kNoLabel) { if (unknown_ilabel != kNoLabel) { new_index = unknown_ilabel; } else { VLOG(1) << "Input symbol ID " << old_index << " symbol '" << symbol << "' missing from target symbol table"; ++num_missing_syms; } } ipairs.push_back(std::make_pair(old_index, new_index)); } if (num_missing_syms > 0) { LOG(WARNING) << "Target symbol table missing: " << num_missing_syms << " input symbols"; } if (attach_new_isymbols) fst->SetInputSymbols(new_isymbols); } // Constructs vectors of output-side label pairs. std::vector<std::pair<Label, Label>> opairs; if (old_osymbols && new_osymbols) { size_t num_missing_syms = 0; Label unknown_olabel = kNoLabel; if (!unknown_osymbol.empty()) { unknown_olabel = new_osymbols->Find(unknown_osymbol); if (unknown_olabel == kNoLabel) { VLOG(1) << "Output symbol '" << unknown_osymbol << "' missing from target symbol table"; ++num_missing_syms; } } for (SymbolTableIterator siter(*old_osymbols); !siter.Done(); siter.Next()) { const auto old_index = siter.Value(); const auto symbol = siter.Symbol(); auto new_index = new_osymbols->Find(siter.Symbol()); if (new_index == kNoLabel) { if (unknown_olabel != kNoLabel) { new_index = unknown_olabel; } else { VLOG(1) << "Output symbol ID " << old_index << " symbol '" << symbol << "' missing from target symbol table"; ++num_missing_syms; } } opairs.push_back(std::make_pair(old_index, new_index)); } if (num_missing_syms > 0) { LOG(WARNING) << "Target symbol table missing: " << num_missing_syms << " output symbols"; } if (attach_new_osymbols) fst->SetOutputSymbols(new_osymbols); } // Calls relabel using vector of relabel pairs. Relabel(fst, ipairs, opairs); } // Same as previous but no special allowance for unknown symbols. Kept // for backward compat. template <class Arc> void Relabel(MutableFst<Arc> *fst, const SymbolTable *old_isymbols, const SymbolTable *new_isymbols, bool attach_new_isymbols, const SymbolTable *old_osymbols, const SymbolTable *new_osymbols, bool attach_new_osymbols) { Relabel(fst, old_isymbols, new_isymbols, "" /* no unknown isymbol */, attach_new_isymbols, old_osymbols, new_osymbols, "" /* no unknown ioymbol */, attach_new_osymbols); } // Relabels either the input labels or output labels. The old to // new labels are specified using symbol tables. Any label associations not // specified are assumed to be identity mapping. template <class Arc> void Relabel(MutableFst<Arc> *fst, const SymbolTable *new_isymbols, const SymbolTable *new_osymbols) { Relabel(fst, fst->InputSymbols(), new_isymbols, true, fst->OutputSymbols(), new_osymbols, true); } using RelabelFstOptions = CacheOptions; template <class Arc> class RelabelFst; namespace internal { // Relabels an FST from one symbol set to another. Relabeling can either be on // input or output space. RelabelFst implements a delayed version of the // relabel. Arcs are relabeled on the fly and not cached; i.e., each request is // recomputed. template <class Arc> class RelabelFstImpl : public CacheImpl<Arc> { public: using Label = typename Arc::Label; using StateId = typename Arc::StateId; using Weight = typename Arc::Weight; using Store = DefaultCacheStore<Arc>; using State = typename Store::State; using FstImpl<Arc>::SetType; using FstImpl<Arc>::SetProperties; using FstImpl<Arc>::WriteHeader; using FstImpl<Arc>::SetInputSymbols; using FstImpl<Arc>::SetOutputSymbols; using CacheImpl<Arc>::PushArc; using CacheImpl<Arc>::HasArcs; using CacheImpl<Arc>::HasFinal; using CacheImpl<Arc>::HasStart; using CacheImpl<Arc>::SetArcs; using CacheImpl<Arc>::SetFinal; using CacheImpl<Arc>::SetStart; friend class StateIterator<RelabelFst<Arc>>; RelabelFstImpl(const Fst<Arc> &fst, const std::vector<std::pair<Label, Label>> &ipairs, const std::vector<std::pair<Label, Label>> &opairs, const RelabelFstOptions &opts) : CacheImpl<Arc>(opts), fst_(fst.Copy()), relabel_input_(false), relabel_output_(false) { SetProperties(RelabelProperties(fst.Properties(kCopyProperties, false))); SetType("relabel"); // Creates input label map. if (!ipairs.empty()) { for (auto &ipair : ipairs) input_map_[ipair.first] = ipair.second; relabel_input_ = true; } // Creates output label map. if (!opairs.empty()) { for (auto &opair : opairs) output_map_[opair.first] = opair.second; relabel_output_ = true; } } RelabelFstImpl(const Fst<Arc> &fst, const SymbolTable *old_isymbols, const SymbolTable *new_isymbols, const SymbolTable *old_osymbols, const SymbolTable *new_osymbols, const RelabelFstOptions &opts) : CacheImpl<Arc>(opts), fst_(fst.Copy()), relabel_input_(false), relabel_output_(false) { SetType("relabel"); SetProperties(RelabelProperties(fst.Properties(kCopyProperties, false))); SetInputSymbols(old_isymbols); SetOutputSymbols(old_osymbols); if (old_isymbols && new_isymbols && old_isymbols->LabeledCheckSum() != new_isymbols->LabeledCheckSum()) { for (SymbolTableIterator siter(*old_isymbols); !siter.Done(); siter.Next()) { input_map_[siter.Value()] = new_isymbols->Find(siter.Symbol()); } SetInputSymbols(new_isymbols); relabel_input_ = true; } if (old_osymbols && new_osymbols && old_osymbols->LabeledCheckSum() != new_osymbols->LabeledCheckSum()) { for (SymbolTableIterator siter(*old_osymbols); !siter.Done(); siter.Next()) { output_map_[siter.Value()] = new_osymbols->Find(siter.Symbol()); } SetOutputSymbols(new_osymbols); relabel_output_ = true; } } RelabelFstImpl(const RelabelFstImpl<Arc> &impl) : CacheImpl<Arc>(impl), fst_(impl.fst_->Copy(true)), input_map_(impl.input_map_), output_map_(impl.output_map_), relabel_input_(impl.relabel_input_), relabel_output_(impl.relabel_output_) { SetType("relabel"); SetProperties(impl.Properties(), kCopyProperties); SetInputSymbols(impl.InputSymbols()); SetOutputSymbols(impl.OutputSymbols()); } StateId Start() { if (!HasStart()) SetStart(fst_->Start()); return CacheImpl<Arc>::Start(); } Weight Final(StateId s) { if (!HasFinal(s)) SetFinal(s, fst_->Final(s)); return CacheImpl<Arc>::Final(s); } size_t NumArcs(StateId s) { if (!HasArcs(s)) Expand(s); return CacheImpl<Arc>::NumArcs(s); } size_t NumInputEpsilons(StateId s) { if (!HasArcs(s)) Expand(s); return CacheImpl<Arc>::NumInputEpsilons(s); } size_t NumOutputEpsilons(StateId s) { if (!HasArcs(s)) Expand(s); return CacheImpl<Arc>::NumOutputEpsilons(s); } uint64 Properties() const override { return Properties(kFstProperties); } // Sets error if found, and returns other FST impl properties. uint64 Properties(uint64 mask) const override { if ((mask & kError) && fst_->Properties(kError, false)) { SetProperties(kError, kError); } return FstImpl<Arc>::Properties(mask); } void InitArcIterator(StateId s, ArcIteratorData<Arc> *data) { if (!HasArcs(s)) Expand(s); CacheImpl<Arc>::InitArcIterator(s, data); } void Expand(StateId s) { for (ArcIterator<Fst<Arc>> aiter(*fst_, s); !aiter.Done(); aiter.Next()) { auto arc = aiter.Value(); if (relabel_input_) { auto it = input_map_.find(arc.ilabel); if (it != input_map_.end()) arc.ilabel = it->second; } if (relabel_output_) { auto it = output_map_.find(arc.olabel); if (it != output_map_.end()) { arc.olabel = it->second; } } PushArc(s, arc); } SetArcs(s); } private: std::unique_ptr<const Fst<Arc>> fst_; std::unordered_map<Label, Label> input_map_; std::unordered_map<Label, Label> output_map_; bool relabel_input_; bool relabel_output_; }; } // namespace internal // This class attaches interface to implementation and handles // reference counting, delegating most methods to ImplToFst. template <class A> class RelabelFst : public ImplToFst<internal::RelabelFstImpl<A>> { public: using Arc = A; using Label = typename Arc::Label; using StateId = typename Arc::StateId; using Weight = typename Arc::Weight; using Store = DefaultCacheStore<Arc>; using State = typename Store::State; using Impl = internal::RelabelFstImpl<Arc>; friend class ArcIterator<RelabelFst<A>>; friend class StateIterator<RelabelFst<A>>; RelabelFst(const Fst<Arc> &fst, const std::vector<std::pair<Label, Label>> &ipairs, const std::vector<std::pair<Label, Label>> &opairs, const RelabelFstOptions &opts = RelabelFstOptions()) : ImplToFst<Impl>(std::make_shared<Impl>(fst, ipairs, opairs, opts)) {} RelabelFst(const Fst<Arc> &fst, const SymbolTable *new_isymbols, const SymbolTable *new_osymbols, const RelabelFstOptions &opts = RelabelFstOptions()) : ImplToFst<Impl>( std::make_shared<Impl>(fst, fst.InputSymbols(), new_isymbols, fst.OutputSymbols(), new_osymbols, opts)) {} RelabelFst(const Fst<Arc> &fst, const SymbolTable *old_isymbols, const SymbolTable *new_isymbols, const SymbolTable *old_osymbols, const SymbolTable *new_osymbols, const RelabelFstOptions &opts = RelabelFstOptions()) : ImplToFst<Impl>(std::make_shared<Impl>(fst, old_isymbols, new_isymbols, old_osymbols, new_osymbols, opts)) {} // See Fst<>::Copy() for doc. RelabelFst(const RelabelFst<Arc> &fst, bool safe = false) : ImplToFst<Impl>(fst, safe) {} // Gets a copy of this RelabelFst. See Fst<>::Copy() for further doc. RelabelFst<Arc> *Copy(bool safe = false) const override { return new RelabelFst<Arc>(*this, safe); } void InitStateIterator(StateIteratorData<Arc> *data) const override; void InitArcIterator(StateId s, ArcIteratorData<Arc> *data) const override { return GetMutableImpl()->InitArcIterator(s, data); } private: using ImplToFst<Impl>::GetImpl; using ImplToFst<Impl>::GetMutableImpl; RelabelFst &operator=(const RelabelFst &) = delete; }; // Specialization for RelabelFst. template <class Arc> class StateIterator<RelabelFst<Arc>> : public StateIteratorBase<Arc> { public: using StateId = typename Arc::StateId; explicit StateIterator(const RelabelFst<Arc> &fst) : impl_(fst.GetImpl()), siter_(*impl_->fst_), s_(0) {} bool Done() const final { return siter_.Done(); } StateId Value() const final { return s_; } void Next() final { if (!siter_.Done()) { ++s_; siter_.Next(); } } void Reset() final { s_ = 0; siter_.Reset(); } private: const internal::RelabelFstImpl<Arc>* impl_; StateIterator<Fst<Arc>> siter_; StateId s_; StateIterator(const StateIterator &) = delete; StateIterator &operator=(const StateIterator &) = delete; }; // Specialization for RelabelFst. template <class Arc> class ArcIterator<RelabelFst<Arc>> : public CacheArcIterator<RelabelFst<Arc>> { public: using StateId = typename Arc::StateId; ArcIterator(const RelabelFst<Arc> &fst, StateId s) : CacheArcIterator<RelabelFst<Arc>>(fst.GetMutableImpl(), s) { if (!fst.GetImpl()->HasArcs(s)) fst.GetMutableImpl()->Expand(s); } }; template <class Arc> inline void RelabelFst<Arc>::InitStateIterator( StateIteratorData<Arc> *data) const { data->base = new StateIterator<RelabelFst<Arc>>(*this); } // Useful alias when using StdArc. using StdRelabelFst = RelabelFst<StdArc>; } // namespace fst #endif // FST_RELABEL_H_
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-nodejs_13x_multiarchpkg-darwin-tflite-opt.yml
build: template_file: test-darwin-opt-base.tyml dependencies: - "node-package-tflite" - "test-training_16k-linux-amd64-py36m-opt" - "homebrew_tests-darwin-amd64" test_model_task: "test-training_16k-linux-amd64-py36m-opt" system_setup: > ${nodejs.brew.prep_13} args: tests_cmdline: "$TASKCLUSTER_TASK_DIR/DeepSpeech/ds/taskcluster/tc-node_tflite-tests.sh 13.x 16k" metadata: name: "DeepSpeech OSX AMD64 TFLite NodeJS MultiArch Package 13.x tests" description: "Testing DeepSpeech for OSX/AMD64 on NodeJS MultiArch Package v13.x, TFLite only, optimized version"
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-cpp_basic_tflite_valgrind-linux-amd64-dbg.yml
build: template_file: test-linux-opt-base.tyml dependencies: - "linux-amd64-tflite-dbg" - "test-training_16k-linux-amd64-py36m-opt" test_model_task: "test-training_16k-linux-amd64-py36m-opt" docker_image: "ubuntu:20.04" system_setup: > ${valgrind.packages_bionic.apt} args: tests_cmdline: "${system.homedir.linux}/DeepSpeech/ds/taskcluster/tc-valgrind-cpp_tflite.sh --basic" workerType: "${docker.dsHighMemTests}" metadata: name: "DeepSpeech Linux AMD64 valgrind C++ TFLite basic tests" description: "Testing basic DeepSpeech valgrind C++ TFLite for Linux/AMD64"
0
coqui_public_repos/STT-models/latvian/itml
coqui_public_repos/STT-models/latvian/itml/v0.1.0/alphabet.txt
a b c d e f g h i j k l m n o p r s t u v x z ā č ē ģ ī ķ ļ ņ š ū ž
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/script/compose.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #include <fst/script/fst-class.h> #include <fst/script/compose.h> #include <fst/script/script-impl.h> namespace fst { namespace script { void Compose(const FstClass &ifst1, const FstClass &ifst2, MutableFstClass *ofst, const ComposeOptions &opts) { if (!internal::ArcTypesMatch(ifst1, ifst2, "Compose") || !internal::ArcTypesMatch(*ofst, ifst1, "Compose")) { ofst->SetProperties(kError, kError); return; } ComposeArgs args(ifst1, ifst2, ofst, opts); Apply<Operation<ComposeArgs>>("Compose", ifst1.ArcType(), &args); } REGISTER_FST_OPERATION(Compose, StdArc, ComposeArgs); REGISTER_FST_OPERATION(Compose, LogArc, ComposeArgs); REGISTER_FST_OPERATION(Compose, Log64Arc, ComposeArgs); } // namespace script } // namespace fst
0
coqui_public_repos/TTS/tests
coqui_public_repos/TTS/tests/text_tests/test_korean_phonemizer.py
import unittest from TTS.tts.utils.text.korean.phonemizer import korean_text_to_phonemes _TEST_CASES = """ 포상은 열심히 한 아이에게만 주어지기 때문에 포상인 것입니다./포상으 녈심히 하 나이에게만 주어지기 때무네 포상인 거심니다. 오늘은 8월 31일 입니다./오느른 파뤌 삼시비리 림니다. 친구 100명 만들기가 목표입니다./친구 뱅명 만들기가 목표임니다. A부터 Z까지 입니다./에이부터 제트까지 임니다. 이게 제 마음이에요./이게 제 마으미에요. """ _TEST_CASES_EN = """ 이제야 이쪽을 보는구나./IJeYa IJjoGeul BoNeunGuNa. 크고 맛있는 cake를 부탁해요./KeuGo MaSinNeun KeIKeuLeul BuTaKaeYo. 전부 거짓말이야./JeonBu GeoJinMaLiYa. 좋은 노래를 찾았어요./JoEun NoLaeLeul ChaJaSseoYo. """ class TestText(unittest.TestCase): def test_korean_text_to_phonemes(self): for line in _TEST_CASES.strip().split("\n"): text, phone = line.split("/") self.assertEqual(korean_text_to_phonemes(text), phone) for line in _TEST_CASES_EN.strip().split("\n"): text, phone = line.split("/") self.assertEqual(korean_text_to_phonemes(text, character="english"), phone) if __name__ == "__main__": unittest.main()
0
coqui_public_repos/STT/native_client/kenlm
coqui_public_repos/STT/native_client/kenlm/util/tokenize_piece_test.cc
#include "tokenize_piece.hh" #include "string_piece.hh" #define BOOST_TEST_MODULE TokenIteratorTest #include <boost/test/unit_test.hpp> #include <iostream> namespace util { namespace { BOOST_AUTO_TEST_CASE(pipe_pipe_none) { const char str[] = "nodelimit at all"; TokenIter<MultiCharacter> it(str, MultiCharacter("|||")); BOOST_REQUIRE(it); BOOST_CHECK_EQUAL(StringPiece(str), *it); ++it; BOOST_CHECK(!it); } BOOST_AUTO_TEST_CASE(pipe_pipe_two) { const char str[] = "|||"; TokenIter<MultiCharacter> it(str, MultiCharacter("|||")); BOOST_REQUIRE(it); BOOST_CHECK_EQUAL(StringPiece(), *it); ++it; BOOST_REQUIRE(it); BOOST_CHECK_EQUAL(StringPiece(), *it); ++it; BOOST_CHECK(!it); } BOOST_AUTO_TEST_CASE(remove_empty) { const char str[] = "|||"; TokenIter<MultiCharacter, true> it(str, MultiCharacter("|||")); BOOST_CHECK(!it); } BOOST_AUTO_TEST_CASE(remove_empty_keep) { const char str[] = " |||"; TokenIter<MultiCharacter, true> it(str, MultiCharacter("|||")); BOOST_REQUIRE(it); BOOST_CHECK_EQUAL(StringPiece(" "), *it); ++it; BOOST_CHECK(!it); } } // namespace } // namespace util
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-cpp_16k_tflite-win-amd64-opt.yml
build: template_file: test-win-opt-base.tyml dependencies: - "win-amd64-tflite-opt" - "test-training_16k-linux-amd64-py36m-opt" test_model_task: "test-training_16k-linux-amd64-py36m-opt" args: tests_cmdline: "$TASKCLUSTER_TASK_DIR/DeepSpeech/ds/taskcluster/tc-cpp_tflite_basic-ds-tests.sh 16k" metadata: name: "DeepSpeech Windows AMD64 TFLite C++ tests (16kHz)" description: "Testing DeepSpeech C++ for Windows/AMD64, TFLite, optimized version (16kHz)"
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include/fst/bi-table.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Classes for representing a bijective mapping between an arbitrary entry // of type T and a signed integral ID. #ifndef FST_BI_TABLE_H_ #define FST_BI_TABLE_H_ #include <deque> #include <memory> #include <functional> #include <unordered_map> #include <unordered_set> #include <vector> #include <fst/log.h> #include <fst/memory.h> namespace fst { // Bitables model bijective mappings between entries of an arbitrary type T and // an signed integral ID of type I. The IDs are allocated starting from 0 in // order. // // template <class I, class T> // class BiTable { // public: // // // Required constructors. // BiTable(); // // // Looks up integer ID from entry. If it doesn't exist and insert // / is true, adds it; otherwise, returns -1. // I FindId(const T &entry, bool insert = true); // // // Looks up entry from integer ID. // const T &FindEntry(I) const; // // // Returns number of stored entries. // I Size() const; // }; // An implementation using a hash map for the entry to ID mapping. H is the // hash function and E is the equality function. If passed to the constructor, // ownership is given to this class. template <class I, class T, class H, class E = std::equal_to<T>> class HashBiTable { public: // Reserves space for table_size elements. If passing H and E to the // constructor, this class owns them. explicit HashBiTable(size_t table_size = 0, H *h = nullptr, E *e = nullptr) : hash_func_(h ? h : new H()), hash_equal_(e ? e : new E()), entry2id_(table_size, *hash_func_, *hash_equal_) { if (table_size) id2entry_.reserve(table_size); } HashBiTable(const HashBiTable<I, T, H, E> &table) : hash_func_(new H(*table.hash_func_)), hash_equal_(new E(*table.hash_equal_)), entry2id_(table.entry2id_.begin(), table.entry2id_.end(), table.entry2id_.size(), *hash_func_, *hash_equal_), id2entry_(table.id2entry_) {} I FindId(const T &entry, bool insert = true) { if (!insert) { const auto it = entry2id_.find(entry); return it == entry2id_.end() ? -1 : it->second - 1; } I &id_ref = entry2id_[entry]; if (id_ref == 0) { // T not found; stores and assigns a new ID. id2entry_.push_back(entry); id_ref = id2entry_.size(); } return id_ref - 1; // NB: id_ref = ID + 1. } const T &FindEntry(I s) const { return id2entry_[s]; } I Size() const { return id2entry_.size(); } // TODO(riley): Add fancy clear-to-size, as in CompactHashBiTable. void Clear() { entry2id_.clear(); id2entry_.clear(); } private: std::unique_ptr<H> hash_func_; std::unique_ptr<E> hash_equal_; std::unordered_map<T, I, H, E> entry2id_; std::vector<T> id2entry_; }; // Enables alternative hash set representations below. enum HSType { HS_STL = 0, HS_DENSE = 1, HS_SPARSE = 2, HS_FLAT = 3 }; // Default hash set is STL hash_set. template <class K, class H, class E, HSType HS> struct HashSet : public std::unordered_set<K, H, E, PoolAllocator<K>> { explicit HashSet(size_t n = 0, const H &h = H(), const E &e = E()) : std::unordered_set<K, H, E, PoolAllocator<K>>(n, h, e) {} void rehash(size_t n) {} }; // An implementation using a hash set for the entry to ID mapping. The hash set // holds keys which are either the ID or kCurrentKey. These keys can be mapped // to entries either by looking up in the entry vector or, if kCurrentKey, in // current_entry_. The hash and key equality functions map to entries first. H // is the hash function and E is the equality function. If passed to the // constructor, ownership is given to this class. template <class I, class T, class H, class E = std::equal_to<T>, HSType HS = HS_FLAT> class CompactHashBiTable { public: friend class HashFunc; friend class HashEqual; // Reserves space for table_size elements. If passing H and E to the // constructor, this class owns them. explicit CompactHashBiTable(size_t table_size = 0, H *h = nullptr, E *e = nullptr) : hash_func_(h ? h : new H()), hash_equal_(e ? e : new E()), compact_hash_func_(*this), compact_hash_equal_(*this), keys_(table_size, compact_hash_func_, compact_hash_equal_) { if (table_size) id2entry_.reserve(table_size); } CompactHashBiTable(const CompactHashBiTable<I, T, H, E, HS> &table) : hash_func_(new H(*table.hash_func_)), hash_equal_(new E(*table.hash_equal_)), compact_hash_func_(*this), compact_hash_equal_(*this), keys_(table.keys_.size(), compact_hash_func_, compact_hash_equal_), id2entry_(table.id2entry_) { keys_.insert(table.keys_.begin(), table.keys_.end()); } I FindId(const T &entry, bool insert = true) { current_entry_ = &entry; if (insert) { auto result = keys_.insert(kCurrentKey); if (!result.second) return *result.first; // Already exists. // Overwrites kCurrentKey with a new key value; this is safe because it // doesn't affect hashing or equality testing. I key = id2entry_.size(); const_cast<I &>(*result.first) = key; id2entry_.push_back(entry); return key; } const auto it = keys_.find(kCurrentKey); return it == keys_.end() ? -1 : *it; } const T &FindEntry(I s) const { return id2entry_[s]; } I Size() const { return id2entry_.size(); } // Clears content; with argument, erases last n IDs. void Clear(std::ptrdiff_t n = -1) { if (n < 0 || n >= id2entry_.size()) { // Clears completely. keys_.clear(); id2entry_.clear(); } else if (n == id2entry_.size() - 1) { // Leaves only key 0. const T entry = FindEntry(0); keys_.clear(); id2entry_.clear(); FindId(entry, true); } else { while (n-- > 0) { I key = id2entry_.size() - 1; keys_.erase(key); id2entry_.pop_back(); } keys_.rehash(0); } } private: static constexpr I kCurrentKey = -1; static constexpr I kEmptyKey = -2; static constexpr I kDeletedKey = -3; class HashFunc { public: explicit HashFunc(const CompactHashBiTable &ht) : ht_(&ht) {} size_t operator()(I k) const { if (k >= kCurrentKey) { return (*ht_->hash_func_)(ht_->Key2Entry(k)); } else { return 0; } } private: const CompactHashBiTable *ht_; }; class HashEqual { public: explicit HashEqual(const CompactHashBiTable &ht) : ht_(&ht) {} bool operator()(I k1, I k2) const { if (k1 == k2) { return true; } else if (k1 >= kCurrentKey && k2 >= kCurrentKey) { return (*ht_->hash_equal_)(ht_->Key2Entry(k1), ht_->Key2Entry(k2)); } else { return false; } } private: const CompactHashBiTable *ht_; }; using KeyHashSet = HashSet<I, HashFunc, HashEqual, HS>; const T &Key2Entry(I k) const { if (k == kCurrentKey) { return *current_entry_; } else { return id2entry_[k]; } } std::unique_ptr<H> hash_func_; std::unique_ptr<E> hash_equal_; HashFunc compact_hash_func_; HashEqual compact_hash_equal_; KeyHashSet keys_; std::vector<T> id2entry_; const T *current_entry_; }; template <class I, class T, class H, class E, HSType HS> constexpr I CompactHashBiTable<I, T, H, E, HS>::kCurrentKey; template <class I, class T, class H, class E, HSType HS> constexpr I CompactHashBiTable<I, T, H, E, HS>::kEmptyKey; template <class I, class T, class H, class E, HSType HS> constexpr I CompactHashBiTable<I, T, H, E, HS>::kDeletedKey; // An implementation using a vector for the entry to ID mapping. It is passed a // function object FP that should fingerprint entries uniquely to an integer // that can used as a vector index. Normally, VectorBiTable constructs the FP // object. The user can instead pass in this object; in that case, VectorBiTable // takes its ownership. template <class I, class T, class FP> class VectorBiTable { public: // Reserves table_size cells of space. If passing FP argument to the // constructor, this class owns it. explicit VectorBiTable(FP *fp = nullptr, size_t table_size = 0) : fp_(fp ? fp : new FP()) { if (table_size) id2entry_.reserve(table_size); } VectorBiTable(const VectorBiTable<I, T, FP> &table) : fp_(new FP(*table.fp_)), fp2id_(table.fp2id_), id2entry_(table.id2entry_) {} I FindId(const T &entry, bool insert = true) { std::ptrdiff_t fp = (*fp_)(entry); if (fp >= fp2id_.size()) fp2id_.resize(fp + 1); I &id_ref = fp2id_[fp]; if (id_ref == 0) { // T not found. if (insert) { // Stores and assigns a new ID. id2entry_.push_back(entry); id_ref = id2entry_.size(); } else { return -1; } } return id_ref - 1; // NB: id_ref = ID + 1. } const T &FindEntry(I s) const { return id2entry_[s]; } I Size() const { return id2entry_.size(); } const FP &Fingerprint() const { return *fp_; } private: std::unique_ptr<FP> fp_; std::vector<I> fp2id_; std::vector<T> id2entry_; }; // An implementation using a vector and a compact hash table. The selecting // functor S returns true for entries to be hashed in the vector. The // fingerprinting functor FP returns a unique fingerprint for each entry to be // hashed in the vector (these need to be suitable for indexing in a vector). // The hash functor H is used when hashing entry into the compact hash table. // If passed to the constructor, ownership is given to this class. template <class I, class T, class S, class FP, class H, HSType HS = HS_DENSE> class VectorHashBiTable { public: friend class HashFunc; friend class HashEqual; explicit VectorHashBiTable(S *s, FP *fp, H *h, size_t vector_size = 0, size_t entry_size = 0) : selector_(s), fp_(fp), h_(h), hash_func_(*this), hash_equal_(*this), keys_(0, hash_func_, hash_equal_) { if (vector_size) fp2id_.reserve(vector_size); if (entry_size) id2entry_.reserve(entry_size); } VectorHashBiTable(const VectorHashBiTable<I, T, S, FP, H, HS> &table) : selector_(new S(table.s_)), fp_(new FP(*table.fp_)), h_(new H(*table.h_)), id2entry_(table.id2entry_), fp2id_(table.fp2id_), hash_func_(*this), hash_equal_(*this), keys_(table.keys_.size(), hash_func_, hash_equal_) { keys_.insert(table.keys_.begin(), table.keys_.end()); } I FindId(const T &entry, bool insert = true) { if ((*selector_)(entry)) { // Uses the vector if selector_(entry) == true. uint64_t fp = (*fp_)(entry); if (fp2id_.size() <= fp) fp2id_.resize(fp + 1, 0); if (fp2id_[fp] == 0) { // T not found. if (insert) { // Stores and assigns a new ID. id2entry_.push_back(entry); fp2id_[fp] = id2entry_.size(); } else { return -1; } } return fp2id_[fp] - 1; // NB: assoc_value = ID + 1. } else { // Uses the hash table otherwise. current_entry_ = &entry; const auto it = keys_.find(kCurrentKey); if (it == keys_.end()) { if (insert) { I key = id2entry_.size(); id2entry_.push_back(entry); keys_.insert(key); return key; } else { return -1; } } else { return *it; } } } const T &FindEntry(I s) const { return id2entry_[s]; } I Size() const { return id2entry_.size(); } const S &Selector() const { return *selector_; } const FP &Fingerprint() const { return *fp_; } const H &Hash() const { return *h_; } private: static constexpr I kCurrentKey = -1; static constexpr I kEmptyKey = -2; class HashFunc { public: explicit HashFunc(const VectorHashBiTable &ht) : ht_(&ht) {} size_t operator()(I k) const { if (k >= kCurrentKey) { return (*(ht_->h_))(ht_->Key2Entry(k)); } else { return 0; } } private: const VectorHashBiTable *ht_; }; class HashEqual { public: explicit HashEqual(const VectorHashBiTable &ht) : ht_(&ht) {} bool operator()(I k1, I k2) const { if (k1 >= kCurrentKey && k2 >= kCurrentKey) { return ht_->Key2Entry(k1) == ht_->Key2Entry(k2); } else { return k1 == k2; } } private: const VectorHashBiTable *ht_; }; using KeyHashSet = HashSet<I, HashFunc, HashEqual, HS>; const T &Key2Entry(I k) const { if (k == kCurrentKey) { return *current_entry_; } else { return id2entry_[k]; } } std::unique_ptr<S> selector_; // True if entry hashed into vector. std::unique_ptr<FP> fp_; // Fingerprint used for hashing into vector. std::unique_ptr<H> h_; // Hash funcion used for hashing into hash_set. std::vector<T> id2entry_; // Maps state IDs to entry. std::vector<I> fp2id_; // Maps entry fingerprints to IDs. // Compact implementation of the hash table mapping entries to state IDs // using the hash function h_. HashFunc hash_func_; HashEqual hash_equal_; KeyHashSet keys_; const T *current_entry_; }; template <class I, class T, class S, class FP, class H, HSType HS> constexpr I VectorHashBiTable<I, T, S, FP, H, HS>::kCurrentKey; template <class I, class T, class S, class FP, class H, HSType HS> constexpr I VectorHashBiTable<I, T, S, FP, H, HS>::kEmptyKey; // An implementation using a hash map for the entry to ID mapping. This version // permits erasing of arbitrary states. The entry T must have == defined and // its default constructor must produce a entry that will never be seen. F is // the hash function. template <class I, class T, class F> class ErasableBiTable { public: ErasableBiTable() : first_(0) {} I FindId(const T &entry, bool insert = true) { I &id_ref = entry2id_[entry]; if (id_ref == 0) { // T not found. if (insert) { // Stores and assigns a new ID. id2entry_.push_back(entry); id_ref = id2entry_.size() + first_; } else { return -1; } } return id_ref - 1; // NB: id_ref = ID + 1. } const T &FindEntry(I s) const { return id2entry_[s - first_]; } I Size() const { return id2entry_.size(); } void Erase(I s) { auto &ref = id2entry_[s - first_]; entry2id_.erase(ref); ref = empty_entry_; while (!id2entry_.empty() && id2entry_.front() == empty_entry_) { id2entry_.pop_front(); ++first_; } } private: std::unordered_map<T, I, F> entry2id_; std::deque<T> id2entry_; const T empty_entry_; I first_; // I of first element in the deque. }; } // namespace fst #endif // FST_BI_TABLE_H_
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions/compact/compact64_acceptor-fst.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #include <fst/fst.h> #include <fst/compact-fst.h> namespace fst { static FstRegisterer<CompactAcceptorFst<StdArc, uint64>> CompactAcceptorFst_StdArc_uint64_registerer; static FstRegisterer<CompactAcceptorFst<LogArc, uint64>> CompactAcceptorFst_LogArc_uint64_registerer; } // namespace fst
0
coqui_public_repos/TTS/TTS/vocoder
coqui_public_repos/TTS/TTS/vocoder/models/fullband_melgan_generator.py
import torch from TTS.vocoder.models.melgan_generator import MelganGenerator class FullbandMelganGenerator(MelganGenerator): def __init__( self, in_channels=80, out_channels=1, proj_kernel=7, base_channels=512, upsample_factors=(2, 8, 2, 2), res_kernel=3, num_res_blocks=4, ): super().__init__( in_channels=in_channels, out_channels=out_channels, proj_kernel=proj_kernel, base_channels=base_channels, upsample_factors=upsample_factors, res_kernel=res_kernel, num_res_blocks=num_res_blocks, ) @torch.no_grad() def inference(self, cond_features): cond_features = cond_features.to(self.layers[1].weight.device) cond_features = torch.nn.functional.pad( cond_features, (self.inference_padding, self.inference_padding), "replicate" ) return self.layers(cond_features)
0
coqui_public_repos
coqui_public_repos/data-checker/README.md
# 🫠 data-checker Code for checking goodness of data for STT and TTS. ### Install with Docker ``` $ git clone https://github.com/coqui-ai/data-checker.git $ cd data-checker $ docker build . -t data-checker ``` ### Check your install ``` $ docker run data-checker python data_checks.py "/code/data/smoke_test/russian_sample_data/ru.csv" 2 . . . 👀 ─ Found 1 <transcript,clip> pairs in /code/data/smoke_test/russian_sample_data/ru.csv · First audio file found: ru.wav of type audio/wav · Checking if audio is readable... 😊 Found no unreadable audiofiles · Reading audio duration... 👀 ─ Found a total of 0.00 hours of readable data · Get transcript length... · Get num feature vectors... 😊 Found no audio clips over 30 seconds in length 😊 Found no transcripts under 10 characters in length · Get ratio (num_feats / transcript_len)... 😊 Found no offending <transcript,clip> pairs · Calculating ratio (num_feats : transcript_len)... 😊 Found no <transcript,clip> pairs more than 2.0 standard deviations from the mean 🎉 ┬ Saved a total of 0.00 hours of data to BEST dataset ├ Removed a total of 0.00 hours (0.00% of original data) ├ Removed a total of 0 samples (0.00% of original data) └ Wrote best data to /code/data/smoke_test/russian_sample_data/ru.BEST ``` ### Run on your data `data-checker` assumes your CSV has two columns: `wav_filename` and `transcript`. Note that you don't actually need to use WAV files, but the header still should be `wav_filename`. ``` $ docker run data-checker --mount "type=bind,src=/path/to/my/local/data,dst=/mnt" python data_checks.py "/mnt/my-data.csv" 2 ```
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst/partition.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Functions and classes to create a partition of states. #ifndef FST_PARTITION_H_ #define FST_PARTITION_H_ #include <algorithm> #include <vector> #include <fst/queue.h> namespace fst { namespace internal { template <typename T> class PartitionIterator; // Defines a partitioning of elements, used to represent equivalence classes // for FST operations like minimization. T must be a signed integer type. // // The elements are numbered from 0 to num_elements - 1. // Initialize(num_elements) sets up the class for a given number of elements. // We maintain a partition of these elements into classes. The classes are also // numbered from zero; you can add a class with AddClass(), or add them in bulk // with AllocateClasses(num_classes). Initially the elements are not assigned // to any class; you set up the initial mapping from elements to classes by // calling Add(element_id, class_id). You can also move an element to a // different class by calling Move(element_id, class_id). // // We also support a rather specialized interface that allows you to efficiently // split classes in the Hopcroft minimization algorithm. This maintains a // binary partition of each class. Let's call these, rather arbitrarily, the // 'yes' subset and the 'no' subset of each class, and assume that by default, // each element of a class is in its 'no' subset. When one calls // SplitOn(element_id), element_id is moved to the 'yes' subset of its class. // (If it was already in the 'yes' set, it just stays there). The aim is to // enable (later) splitting the class in two in time no greater than the time // already spent calling SplitOn() for that class. We keep a list of the classes // which have nonempty 'yes' sets, as visited_classes_. When one calls // FinalizeSplit(Queue *l), for each class in visited_classes_ whose 'yes' // and 'no' sets are both nonempty, it will create a new class consisting of // the smaller of the two subsets (and this class will be added to the queue), // and the old class will now be the larger of the two subsets. This call also // resets all the yes/no partitions so that everything is in the 'no' subsets. // // One cannot use the Move() function if SplitOn() has been called without // a subsequent call to FinalizeSplit() template <typename T> class Partition { public: Partition() {} explicit Partition(T num_elements) { Initialize(num_elements); } // Creates an empty partition for num_elements. This means that the elements // are not assigned to a class (i.e class_index = -1); you should set up the // number of classes using AllocateClasses() or AddClass(), and allocate each // element to a class by calling Add(element, class_id). void Initialize(size_t num_elements) { elements_.resize(num_elements); classes_.reserve(num_elements); classes_.clear(); yes_counter_ = 1; } // Adds a class; returns new number of classes. T AddClass() { auto num_classes = classes_.size(); classes_.resize(num_classes + 1); return num_classes; } // Adds 'num_classes' new (empty) classes. void AllocateClasses(T num_classes) { classes_.resize(classes_.size() + num_classes); } // Adds element_id to class_id. element_id should already have been allocated // by calling Initialize(num_elements)---or the constructor taking // num_elements---with num_elements > element_id. element_id must not // currently be a member of any class; once elements have been added to a // class, use the Move() method to move them from one class to another. void Add(T element_id, T class_id) { auto &this_element = elements_[element_id]; auto &this_class = classes_[class_id]; ++this_class.size; // Adds the element to the 'no' subset of the class. auto no_head = this_class.no_head; if (no_head >= 0) elements_[no_head].prev_element = element_id; this_class.no_head = element_id; this_element.class_id = class_id; // Adds to the 'no' subset of the class. this_element.yes = 0; this_element.next_element = no_head; this_element.prev_element = -1; } // Moves element_id from 'no' subset of its current class to 'no' subset of // class class_id. This may not work correctly if you have called SplitOn() // [for any element] and haven't subsequently called FinalizeSplit(). void Move(T element_id, T class_id) { auto elements = &(elements_[0]); auto &element = elements[element_id]; auto &old_class = classes_[element.class_id]; --old_class.size; // Excises the element from the 'no' list of its old class, where it is // assumed to be. if (element.prev_element >= 0) { elements[element.prev_element].next_element = element.next_element; } else { old_class.no_head = element.next_element; } if (element.next_element >= 0) { elements[element.next_element].prev_element = element.prev_element; } // Adds to new class. Add(element_id, class_id); } // Moves element_id to the 'yes' subset of its class if it was in the 'no' // subset, and marks the class as having been visited. void SplitOn(T element_id) { auto elements = &(elements_[0]); auto &element = elements[element_id]; if (element.yes == yes_counter_) { return; // Already in the 'yes' set; nothing to do. } auto class_id = element.class_id; auto &this_class = classes_[class_id]; // Excises the element from the 'no' list of its class. if (element.prev_element >= 0) { elements[element.prev_element].next_element = element.next_element; } else { this_class.no_head = element.next_element; } if (element.next_element >= 0) { elements[element.next_element].prev_element = element.prev_element; } // Adds the element to the 'yes' list. if (this_class.yes_head >= 0) { elements[this_class.yes_head].prev_element = element_id; } else { visited_classes_.push_back(class_id); } element.yes = yes_counter_; element.next_element = this_class.yes_head; element.prev_element = -1; this_class.yes_head = element_id; this_class.yes_size++; } // This should be called after one has possibly called SplitOn for one or more // elements, thus moving those elements to the 'yes' subset for their class. // For each class that has a nontrivial split (i.e., it's not the case that // all members are in the 'yes' or 'no' subset), this function creates a new // class containing the smaller of the two subsets of elements, leaving the // larger group of elements in the old class. The identifier of the new class // will be added to the queue provided as the pointer L. This method then // moves all elements to the 'no' subset of their class. template <class Queue> void FinalizeSplit(Queue *queue) { for (const auto &visited_class : visited_classes_) { const auto new_class = SplitRefine(visited_class); if (new_class != -1 && queue) queue->Enqueue(new_class); } visited_classes_.clear(); // Incrementation sets all the 'yes' members of the elements to false. ++yes_counter_; } const T ClassId(T element_id) const { return elements_[element_id].class_id; } const size_t ClassSize(T class_id) const { return classes_[class_id].size; } const T NumClasses() const { return classes_.size(); } private: friend class PartitionIterator<T>; // Information about a given element. struct Element { T class_id; // Class ID of this element. T yes; // This is to be interpreted as a bool, true if it's in the // 'yes' set of this class. The interpretation as bool is // (yes == yes_counter_ ? true : false). T next_element; // Next element in the 'no' list or 'yes' list of this // class, whichever of the two we belong to (think of // this as the 'next' in a doubly-linked list, although // it is an index into the elements array). Negative // values corresponds to null. T prev_element; // Previous element in the 'no' or 'yes' doubly linked // list. Negative values corresponds to null. }; // Information about a given class. struct Class { Class() : size(0), yes_size(0), no_head(-1), yes_head(-1) {} T size; // Total number of elements in this class ('no' plus 'yes' // subsets). T yes_size; // Total number of elements of 'yes' subset of this class. T no_head; // Index of head element of doubly-linked list in 'no' subset. // Everything is in the 'no' subset until you call SplitOn(). // -1 means no element. T yes_head; // Index of head element of doubly-linked list in 'yes' subset. // -1 means no element. }; // This method, called from FinalizeSplit(), checks whether a class has to // be split (a class will be split only if its 'yes' and 'no' subsets are // both nonempty, but one can assume that since this function was called, the // 'yes' subset is nonempty). It splits by taking the smaller subset and // making it a new class, and leaving the larger subset of elements in the // 'no' subset of the old class. It returns the new class if created, or -1 // if none was created. T SplitRefine(T class_id) { auto yes_size = classes_[class_id].yes_size; auto size = classes_[class_id].size; auto no_size = size - yes_size; if (no_size == 0) { // All members are in the 'yes' subset, so we don't have to create a new // class, just move them all to the 'no' subset. classes_[class_id].no_head = classes_[class_id].yes_head; classes_[class_id].yes_head = -1; classes_[class_id].yes_size = 0; return -1; } else { auto new_class_id = classes_.size(); classes_.resize(classes_.size() + 1); auto &old_class = classes_[class_id]; auto &new_class = classes_[new_class_id]; // The new_class will have the values from the constructor. if (no_size < yes_size) { // Moves the 'no' subset to new class ('no' subset). new_class.no_head = old_class.no_head; new_class.size = no_size; // And makes the 'yes' subset of the old class ('no' subset). old_class.no_head = old_class.yes_head; old_class.yes_head = -1; old_class.size = yes_size; old_class.yes_size = 0; } else { // Moves the 'yes' subset to the new class (to the 'no' subset) new_class.size = yes_size; new_class.no_head = old_class.yes_head; // Retains only the 'no' subset in the old class. old_class.size = no_size; old_class.yes_size = 0; old_class.yes_head = -1; } auto elements = &(elements_[0]); // Updates the 'class_id' of all the elements we moved. for (auto e = new_class.no_head; e >= 0; e = elements[e].next_element) { elements[e].class_id = new_class_id; } return new_class_id; } } // elements_[i] contains all info about the i'th element. std::vector<Element> elements_; // classes_[i] contains all info about the i'th class. std::vector<Class> classes_; // Set of visited classes to be used in split refine. std::vector<T> visited_classes_; // yes_counter_ is used in interpreting the 'yes' members of class Element. // If element.yes == yes_counter_, we interpret that element as being in the // 'yes' subset of its class. This allows us to, in effect, set all those // bools to false at a stroke by incrementing yes_counter_. T yes_counter_; }; // Iterates over members of the 'no' subset of a class in a partition. (When // this is used, everything is in the 'no' subset). template <typename T> class PartitionIterator { public: using Element = typename Partition<T>::Element; PartitionIterator(const Partition<T> &partition, T class_id) : partition_(partition), element_id_(partition_.classes_[class_id].no_head), class_id_(class_id) {} bool Done() { return element_id_ < 0; } const T Value() { return element_id_; } void Next() { element_id_ = partition_.elements_[element_id_].next_element; } void Reset() { element_id_ = partition_.classes_[class_id_].no_head; } private: const Partition<T> &partition_; T element_id_; T class_id_; }; } // namespace internal } // namespace fst #endif // FST_PARTITION_H_
0
coqui_public_repos/STT/native_client/kenlm
coqui_public_repos/STT/native_client/kenlm/lm/trie_sort.hh
// Step of trie builder: create sorted files. #ifndef LM_TRIE_SORT_H #define LM_TRIE_SORT_H #include "max_order.hh" #include "word_index.hh" #include "../util/file.hh" #include "../util/scoped.hh" #include <cstddef> #include <functional> #include <string> #include <vector> #include <stdint.h> namespace util { class FilePiece; } // namespace util namespace lm { class PositiveProbWarn; namespace ngram { class SortedVocabulary; struct Config; namespace trie { class EntryCompare : public std::binary_function<const void*, const void*, bool> { public: explicit EntryCompare(unsigned char order) : order_(order) {} bool operator()(const void *first_void, const void *second_void) const { const WordIndex *first = static_cast<const WordIndex*>(first_void); const WordIndex *second = static_cast<const WordIndex*>(second_void); const WordIndex *end = first + order_; for (; first != end; ++first, ++second) { if (*first < *second) return true; if (*first > *second) return false; } return false; } private: unsigned char order_; }; class RecordReader { public: RecordReader() : remains_(true) {} void Init(FILE *file, std::size_t entry_size); void *Data() { return data_.get(); } const void *Data() const { return data_.get(); } RecordReader &operator++() { std::size_t ret = fread(data_.get(), entry_size_, 1, file_); if (!ret) { UTIL_THROW_IF(!feof(file_), util::ErrnoException, "Error reading temporary file"); remains_ = false; } return *this; } operator bool() const { return remains_; } void Rewind(); std::size_t EntrySize() const { return entry_size_; } void Overwrite(const void *start, std::size_t amount); private: FILE *file_; util::scoped_malloc data_; bool remains_; std::size_t entry_size_; }; class SortedFiles { public: // Build from ARPA SortedFiles(const Config &config, util::FilePiece &f, std::vector<uint64_t> &counts, std::size_t buffer, const std::string &file_prefix, SortedVocabulary &vocab); int StealUnigram() { return unigram_.release(); } FILE *Full(unsigned char order) { return full_[order - 2].get(); } FILE *Context(unsigned char of_order) { return context_[of_order - 2].get(); } private: void ConvertToSorted(util::FilePiece &f, const SortedVocabulary &vocab, const std::vector<uint64_t> &counts, const std::string &prefix, unsigned char order, PositiveProbWarn &warn, void *mem, std::size_t mem_size); util::scoped_fd unigram_; util::scoped_FILE full_[KENLM_MAX_ORDER - 1], context_[KENLM_MAX_ORDER - 1]; }; } // namespace trie } // namespace ngram } // namespace lm #endif // LM_TRIE_SORT_H
0
coqui_public_repos/STT/native_client/kenlm
coqui_public_repos/STT/native_client/kenlm/lm/vocab.cc
#include "vocab.hh" #include "binary_format.hh" #include "enumerate_vocab.hh" #include "lm_exception.hh" #include "config.hh" #include "weights.hh" #include "../util/exception.hh" #include "../util/file_stream.hh" #include "../util/file.hh" #include "../util/joint_sort.hh" #include "../util/murmur_hash.hh" #include "../util/probing_hash_table.hh" #include <cstring> #include <string> #include <sstream> namespace lm { namespace ngram { namespace detail { uint64_t HashForVocab(const char *str, std::size_t len) { // This proved faster than Boost's hash in speed trials: total load time Murmur 67090000, Boost 72210000 // Chose to use 64A instead of native so binary format will be portable across 64 and 32 bit. return util::MurmurHash64A(str, len, 0); } } // namespace detail namespace { // Normally static initialization is a bad idea but MurmurHash is pure arithmetic, so this is ok. const uint64_t kUnknownHash = detail::HashForVocab("<unk>", 5); // Sadly some LMs have <UNK>. const uint64_t kUnknownCapHash = detail::HashForVocab("<UNK>", 5); void ReadWords(int fd, EnumerateVocab *enumerate, WordIndex expected_count, uint64_t offset) { util::SeekOrThrow(fd, offset); // Check that we're at the right place by reading <unk> which is always first. char check_unk[6]; util::ReadOrThrow(fd, check_unk, 6); UTIL_THROW_IF( memcmp(check_unk, "<unk>", 6), FormatLoadException, "Vocabulary words are in the wrong place. This could be because the binary file was built with stale gcc and old kenlm. Stale gcc, including the gcc distributed with RedHat and OS X, has a bug that ignores pragma pack for template-dependent types. New kenlm works around this, so you'll save memory but have to rebuild any binary files using the probing data structure."); if (!enumerate) return; enumerate->Add(0, "<unk>"); WordIndex index = 1; // Read <unk> already. util::FilePiece in(util::DupOrThrow(fd)); for (util::LineIterator w(in, '\0'); w; ++w, ++index) { enumerate->Add(index, *w); } UTIL_THROW_IF(expected_count != index, FormatLoadException, "The binary file has the wrong number of words at the end. This could be caused by a truncated binary file."); } void ReadWords(const char* file_data, EnumerateVocab *enumerate, WordIndex expected_count, uint64_t offset) { file_data += offset; // Check that we're at the right place by reading <unk> which is always first. char check_unk[6]; std::memcpy(check_unk, file_data, 6); file_data += 6; UTIL_THROW_IF( memcmp(check_unk, "<unk>", 6), FormatLoadException, "Vocabulary words are in the wrong place. This could be because the binary file was built with stale gcc and old kenlm. Stale gcc, including the gcc distributed with RedHat and OS X, has a bug that ignores pragma pack for template-dependent types. New kenlm works around this, so you'll save memory but have to rebuild any binary files using the probing data structure."); if (!enumerate) { return; } enumerate->Add(0, "<unk>"); WordIndex index = 1; // Read <unk> already. std::istringstream in(file_data); for (std::string line; std::getline(in, line); ) { // std::cerr << "LINHA -> " << line << std::endl; enumerate->Add(index, line); } UTIL_THROW_IF(expected_count != index, FormatLoadException, "The binary file has the wrong number of words at the end. This could be caused by a truncated binary file."); } // Constructor ordering madness. int SeekAndReturn(int fd, uint64_t start) { util::SeekOrThrow(fd, start); return fd; } } // namespace ImmediateWriteWordsWrapper::ImmediateWriteWordsWrapper(EnumerateVocab *inner, int fd, uint64_t start) : inner_(inner), stream_(SeekAndReturn(fd, start)) {} WriteWordsWrapper::WriteWordsWrapper(EnumerateVocab *inner) : inner_(inner) {} void WriteWordsWrapper::Add(WordIndex index, const StringPiece &str) { if (inner_) inner_->Add(index, str); buffer_.append(str.data(), str.size()); buffer_.push_back(0); } void WriteWordsWrapper::Write(int fd, uint64_t start) { util::SeekOrThrow(fd, start); util::WriteOrThrow(fd, buffer_.data(), buffer_.size()); // Free memory from the string. std::string for_swap; std::swap(buffer_, for_swap); } SortedVocabulary::SortedVocabulary() : begin_(NULL), end_(NULL), enumerate_(NULL) {} uint64_t SortedVocabulary::Size(uint64_t entries, const Config &/*config*/) { // Lead with the number of entries. return sizeof(uint64_t) + sizeof(uint64_t) * entries; } void SortedVocabulary::SetupMemory(void *start, std::size_t allocated, std::size_t entries, const Config &config) { assert(allocated >= Size(entries, config)); // Leave space for number of entries. begin_ = reinterpret_cast<uint64_t*>(start) + 1; end_ = begin_; saw_unk_ = false; } void SortedVocabulary::Relocate(void *new_start) { std::size_t delta = end_ - begin_; begin_ = reinterpret_cast<uint64_t*>(new_start) + 1; end_ = begin_ + delta; } void SortedVocabulary::ConfigureEnumerate(EnumerateVocab *to, std::size_t max_entries) { enumerate_ = to; if (enumerate_) { enumerate_->Add(0, "<unk>"); strings_to_enumerate_.resize(max_entries); } } WordIndex SortedVocabulary::Insert(const StringPiece &str) { uint64_t hashed = detail::HashForVocab(str); if (hashed == kUnknownHash || hashed == kUnknownCapHash) { saw_unk_ = true; return 0; } *end_ = hashed; if (enumerate_) { void *copied = string_backing_.Allocate(str.size()); memcpy(copied, str.data(), str.size()); strings_to_enumerate_[end_ - begin_] = StringPiece(static_cast<const char*>(copied), str.size()); } ++end_; // This is 1 + the offset where it was inserted to make room for unk. return end_ - begin_; } void SortedVocabulary::FinishedLoading(ProbBackoff *reorder) { GenericFinished(reorder); } namespace { #pragma pack(push) #pragma pack(4) struct RenumberEntry { uint64_t hash; const char *str; WordIndex old; bool operator<(const RenumberEntry &other) const { return hash < other.hash; } }; #pragma pack(pop) } // namespace void SortedVocabulary::ComputeRenumbering(WordIndex types, int from_words, int to_words, std::vector<WordIndex> &mapping) { mapping.clear(); uint64_t file_size = util::SizeOrThrow(from_words); util::scoped_memory strings; util::MapRead(util::POPULATE_OR_READ, from_words, 0, file_size, strings); const char *const start = static_cast<const char*>(strings.get()); UTIL_THROW_IF(memcmp(start, "<unk>", 6), FormatLoadException, "Vocab file does not begin with <unk> followed by null"); std::vector<RenumberEntry> entries; entries.reserve(types - 1); RenumberEntry entry; entry.old = 1; for (entry.str = start + 6 /* skip <unk>\0 */; entry.str < start + file_size; ++entry.old) { StringPiece str(entry.str, strlen(entry.str)); entry.hash = detail::HashForVocab(str); entries.push_back(entry); entry.str += str.size() + 1; } UTIL_THROW_IF2(entries.size() != types - 1, "Wrong number of vocab ids. Got " << (entries.size() + 1) << " expected " << types); std::sort(entries.begin(), entries.end()); // Write out new vocab file. { util::FileStream out(to_words); out << "<unk>" << '\0'; for (std::vector<RenumberEntry>::const_iterator i = entries.begin(); i != entries.end(); ++i) { out << i->str << '\0'; } } strings.reset(); mapping.resize(types); mapping[0] = 0; // <unk> for (std::vector<RenumberEntry>::const_iterator i = entries.begin(); i != entries.end(); ++i) { mapping[i->old] = i + 1 - entries.begin(); } } void SortedVocabulary::Populated() { saw_unk_ = true; SetSpecial(Index("<s>"), Index("</s>"), 0); bound_ = end_ - begin_ + 1; *(reinterpret_cast<uint64_t*>(begin_) - 1) = end_ - begin_; } void SortedVocabulary::LoadedBinary(bool have_words, int fd, EnumerateVocab *to, uint64_t offset) { end_ = begin_ + *(reinterpret_cast<const uint64_t*>(begin_) - 1); SetSpecial(Index("<s>"), Index("</s>"), 0); bound_ = end_ - begin_ + 1; if (have_words) ReadWords(fd, to, bound_, offset); } void SortedVocabulary::LoadedBinary(bool have_words, const char* file_data, EnumerateVocab *to, uint64_t offset, bool load_from_memory) { end_ = begin_ + *(reinterpret_cast<const uint64_t*>(begin_) - 1); SetSpecial(Index("<s>"), Index("</s>"), 0); bound_ = end_ - begin_ + 1; if (have_words) { ReadWords(file_data, to, bound_, offset); } } template <class T> void SortedVocabulary::GenericFinished(T *reorder) { if (enumerate_) { if (!strings_to_enumerate_.empty()) { util::PairedIterator<T*, StringPiece*> values(reorder + 1, &*strings_to_enumerate_.begin()); util::JointSort(begin_, end_, values); } for (WordIndex i = 0; i < static_cast<WordIndex>(end_ - begin_); ++i) { // <unk> strikes again: +1 here. enumerate_->Add(i + 1, strings_to_enumerate_[i]); } strings_to_enumerate_.clear(); string_backing_.FreeAll(); } else { util::JointSort(begin_, end_, reorder + 1); } SetSpecial(Index("<s>"), Index("</s>"), 0); // Save size. Excludes UNK. *(reinterpret_cast<uint64_t*>(begin_) - 1) = end_ - begin_; // Includes UNK. bound_ = end_ - begin_ + 1; } namespace { const unsigned int kProbingVocabularyVersion = 0; } // namespace namespace detail { struct ProbingVocabularyHeader { // Lowest unused vocab id. This is also the number of words, including <unk>. unsigned int version; WordIndex bound; }; } // namespace detail ProbingVocabulary::ProbingVocabulary() : enumerate_(NULL) {} uint64_t ProbingVocabulary::Size(uint64_t entries, float probing_multiplier) { return ALIGN8(sizeof(detail::ProbingVocabularyHeader)) + Lookup::Size(entries, probing_multiplier); } uint64_t ProbingVocabulary::Size(uint64_t entries, const Config &config) { return Size(entries, config.probing_multiplier); } void ProbingVocabulary::SetupMemory(void *start, std::size_t allocated) { header_ = static_cast<detail::ProbingVocabularyHeader*>(start); lookup_ = Lookup(static_cast<uint8_t*>(start) + ALIGN8(sizeof(detail::ProbingVocabularyHeader)), allocated); bound_ = 1; saw_unk_ = false; } void ProbingVocabulary::Relocate(void *new_start) { header_ = static_cast<detail::ProbingVocabularyHeader*>(new_start); lookup_.Relocate(static_cast<uint8_t*>(new_start) + ALIGN8(sizeof(detail::ProbingVocabularyHeader))); } void ProbingVocabulary::ConfigureEnumerate(EnumerateVocab *to, std::size_t /*max_entries*/) { enumerate_ = to; if (enumerate_) { enumerate_->Add(0, "<unk>"); } } WordIndex ProbingVocabulary::Insert(const StringPiece &str) { uint64_t hashed = detail::HashForVocab(str); // Prevent unknown from going into the table. if (hashed == kUnknownHash || hashed == kUnknownCapHash) { saw_unk_ = true; return 0; } else { if (enumerate_) enumerate_->Add(bound_, str); lookup_.Insert(ProbingVocabularyEntry::Make(hashed, bound_)); return bound_++; } } void ProbingVocabulary::InternalFinishedLoading() { lookup_.FinishedInserting(); header_->bound = bound_; header_->version = kProbingVocabularyVersion; SetSpecial(Index("<s>"), Index("</s>"), 0); } void ProbingVocabulary::LoadedBinary(bool have_words, int fd, EnumerateVocab *to, uint64_t offset) { UTIL_THROW_IF(header_->version != kProbingVocabularyVersion, FormatLoadException, "The binary file has probing version " << header_->version << " but the code expects version " << kProbingVocabularyVersion << ". Please rerun build_binary using the same version of the code."); bound_ = header_->bound; SetSpecial(Index("<s>"), Index("</s>"), 0); if (have_words) ReadWords(fd, to, bound_, offset); } void ProbingVocabulary::LoadedBinary(bool have_words, const char* file_data, EnumerateVocab *to, uint64_t offset, bool load_from_memory) { UTIL_THROW_IF(header_->version != kProbingVocabularyVersion, FormatLoadException, "The binary file has probing version " << header_->version << " but the code expects version " << kProbingVocabularyVersion << ". Please rerun build_binary using the same version of the code."); bound_ = header_->bound; SetSpecial(Index("<s>"), Index("</s>"), 0); if (have_words) { ReadWords(file_data, to, bound_, offset); } } void MissingUnknown(const Config &config) { switch(config.unknown_missing) { case SILENT: return; case COMPLAIN: if (config.messages) *config.messages << "The ARPA file is missing <unk>. Substituting log10 probability " << config.unknown_missing_logprob << "." << std::endl; break; case THROW_UP: UTIL_THROW(SpecialWordMissingException, "The ARPA file is missing <unk> and the model is configured to throw an exception."); } } void MissingSentenceMarker(const Config &config, const char *str) { switch (config.sentence_marker_missing) { case SILENT: return; case COMPLAIN: if (config.messages) *config.messages << "Missing special word " << str << "; will treat it as <unk>."; break; case THROW_UP: UTIL_THROW(SpecialWordMissingException, "The ARPA file is missing " << str << " and the model is configured to reject these models. Run build_binary -s to disable this check."); } } } // namespace ngram } // namespace lm
0
coqui_public_repos/snakepit
coqui_public_repos/snakepit/bin/prepare-lxd.sh
#!/usr/bin/env bash set -e if [ $(lxc remote list | grep ubuntu-minimal | wc -l) -gt "0" ]; then echo "Remote ubuntu-minimal already configured - skipping..." else echo "Adding remote ubuntu-minimal..." lxc remote add --protocol simplestreams ubuntu-minimal https://cloud-images.ubuntu.com/minimal/releases/ fi
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/extensions
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/extensions/mpdt/mpdtreverse.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Reverses an MPDT. #include <cstring> #include <memory> #include <string> #include <vector> #include <fst/flags.h> #include <fst/log.h> #include <fst/extensions/mpdt/mpdtscript.h> #include <fst/extensions/mpdt/read_write_utils.h> #include <fst/util.h> DEFINE_string(mpdt_parentheses, "", "MPDT parenthesis label pairs with assignments."); DEFINE_string(mpdt_new_parentheses, "", "Output for reassigned parentheses and stacks"); int main(int argc, char **argv) { namespace s = fst::script; using fst::ReadLabelTriples; using fst::WriteLabelTriples; using fst::script::FstClass; using fst::script::VectorFstClass; string usage = "Reverse an MPDT.\n\n Usage: "; usage += argv[0]; usage += " in.pdt [out.fst]\n"; std::set_new_handler(FailedNewHandler); SET_FLAGS(usage.c_str(), &argc, &argv, true); if (argc > 3) { ShowUsage(); return 1; } const string in_name = (argc > 1 && (strcmp(argv[1], "-") != 0)) ? argv[1] : ""; const string out_name = argc > 2 ? argv[2] : ""; std::unique_ptr<FstClass> ifst(FstClass::Read(in_name)); if (!ifst) return 1; if (FLAGS_mpdt_parentheses.empty()) { LOG(ERROR) << argv[0] << ": No MPDT parenthesis label pairs provided"; return 1; } if (FLAGS_mpdt_new_parentheses.empty()) { LOG(ERROR) << argv[0] << ": No MPDT output parenthesis label file provided"; return 1; } std::vector<s::LabelPair> parens; std::vector<int64> assignments; if (!ReadLabelTriples(FLAGS_mpdt_parentheses, &parens, &assignments, false)) return 1; VectorFstClass ofst(ifst->ArcType()); s::MPdtReverse(*ifst, parens, &assignments, &ofst); ofst.Write(out_name); if (!WriteLabelTriples(FLAGS_mpdt_new_parentheses, parens, assignments)) return 1; return 0; }
0
coqui_public_repos/STT/native_client
coqui_public_repos/STT/native_client/javascript/abi_crosswalk_priv.json
{ "0.1.14": { "node_abi": null, "v8": "1.3" }, "0.1.15": { "node_abi": null, "v8": "1.3" }, "0.1.16": { "node_abi": null, "v8": "1.3" }, "0.1.17": { "node_abi": null, "v8": "1.3" }, "0.1.18": { "node_abi": null, "v8": "1.3" }, "0.1.19": { "node_abi": null, "v8": "2.0" }, "0.1.20": { "node_abi": null, "v8": "2.0" }, "0.1.21": { "node_abi": null, "v8": "2.0" }, "0.1.22": { "node_abi": null, "v8": "2.0" }, "0.1.23": { "node_abi": null, "v8": "2.0" }, "0.1.24": { "node_abi": null, "v8": "2.0" }, "0.1.25": { "node_abi": null, "v8": "2.0" }, "0.1.26": { "node_abi": null, "v8": "2.0" }, "0.1.27": { "node_abi": null, "v8": "2.1" }, "0.1.28": { "node_abi": null, "v8": "2.1" }, "0.1.29": { "node_abi": null, "v8": "2.1" }, "0.1.30": { "node_abi": null, "v8": "2.1" }, "0.1.31": { "node_abi": null, "v8": "2.1" }, "0.1.32": { "node_abi": null, "v8": "2.1" }, "0.1.33": { "node_abi": null, "v8": "2.1" }, "0.1.90": { "node_abi": null, "v8": "2.2" }, "0.1.91": { "node_abi": null, "v8": "2.2" }, "0.1.92": { "node_abi": null, "v8": "2.2" }, "0.1.93": { "node_abi": null, "v8": "2.2" }, "0.1.94": { "node_abi": null, "v8": "2.2" }, "0.1.95": { "node_abi": null, "v8": "2.2" }, "0.1.96": { "node_abi": null, "v8": "2.2" }, "0.1.97": { "node_abi": null, "v8": "2.2" }, "0.1.98": { "node_abi": null, "v8": "2.2" }, "0.1.99": { "node_abi": null, "v8": "2.2" }, "0.1.100": { "node_abi": null, "v8": "2.2" }, "0.1.101": { "node_abi": null, "v8": "2.3" }, "0.1.102": { "node_abi": null, "v8": "2.3" }, "0.1.103": { "node_abi": null, "v8": "2.3" }, "0.1.104": { "node_abi": null, "v8": "2.3" }, "0.2.0": { "node_abi": 1, "v8": "2.3" }, "0.2.1": { "node_abi": 1, "v8": "2.3" }, "0.2.2": { "node_abi": 1, "v8": "2.3" }, "0.2.3": { "node_abi": 1, "v8": "2.3" }, "0.2.4": { "node_abi": 1, "v8": "2.3" }, "0.2.5": { "node_abi": 1, "v8": "2.3" }, "0.2.6": { "node_abi": 1, "v8": "2.3" }, "0.3.0": { "node_abi": 1, "v8": "2.5" }, "0.3.1": { "node_abi": 1, "v8": "2.5" }, "0.3.2": { "node_abi": 1, "v8": "3.0" }, "0.3.3": { "node_abi": 1, "v8": "3.0" }, "0.3.4": { "node_abi": 1, "v8": "3.0" }, "0.3.5": { "node_abi": 1, "v8": "3.0" }, "0.3.6": { "node_abi": 1, "v8": "3.0" }, "0.3.7": { "node_abi": 1, "v8": "3.0" }, "0.3.8": { "node_abi": 1, "v8": "3.1" }, "0.4.0": { "node_abi": 1, "v8": "3.1" }, "0.4.1": { "node_abi": 1, "v8": "3.1" }, "0.4.2": { "node_abi": 1, "v8": "3.1" }, "0.4.3": { "node_abi": 1, "v8": "3.1" }, "0.4.4": { "node_abi": 1, "v8": "3.1" }, "0.4.5": { "node_abi": 1, "v8": "3.1" }, "0.4.6": { "node_abi": 1, "v8": "3.1" }, "0.4.7": { "node_abi": 1, "v8": "3.1" }, "0.4.8": { "node_abi": 1, "v8": "3.1" }, "0.4.9": { "node_abi": 1, "v8": "3.1" }, "0.4.10": { "node_abi": 1, "v8": "3.1" }, "0.4.11": { "node_abi": 1, "v8": "3.1" }, "0.4.12": { "node_abi": 1, "v8": "3.1" }, "0.5.0": { "node_abi": 1, "v8": "3.1" }, "0.5.1": { "node_abi": 1, "v8": "3.4" }, "0.5.2": { "node_abi": 1, "v8": "3.4" }, "0.5.3": { "node_abi": 1, "v8": "3.4" }, "0.5.4": { "node_abi": 1, "v8": "3.5" }, "0.5.5": { "node_abi": 1, "v8": "3.5" }, "0.5.6": { "node_abi": 1, "v8": "3.6" }, "0.5.7": { "node_abi": 1, "v8": "3.6" }, "0.5.8": { "node_abi": 1, "v8": "3.6" }, "0.5.9": { "node_abi": 1, "v8": "3.6" }, "0.5.10": { "node_abi": 1, "v8": "3.7" }, "0.6.0": { "node_abi": 1, "v8": "3.6" }, "0.6.1": { "node_abi": 1, "v8": "3.6" }, "0.6.2": { "node_abi": 1, "v8": "3.6" }, "0.6.3": { "node_abi": 1, "v8": "3.6" }, "0.6.4": { "node_abi": 1, "v8": "3.6" }, "0.6.5": { "node_abi": 1, "v8": "3.6" }, "0.6.6": { "node_abi": 1, "v8": "3.6" }, "0.6.7": { "node_abi": 1, "v8": "3.6" }, "0.6.8": { "node_abi": 1, "v8": "3.6" }, "0.6.9": { "node_abi": 1, "v8": "3.6" }, "0.6.10": { "node_abi": 1, "v8": "3.6" }, "0.6.11": { "node_abi": 1, "v8": "3.6" }, "0.6.12": { "node_abi": 1, "v8": "3.6" }, "0.6.13": { "node_abi": 1, "v8": "3.6" }, "0.6.14": { "node_abi": 1, "v8": "3.6" }, "0.6.15": { "node_abi": 1, "v8": "3.6" }, "0.6.16": { "node_abi": 1, "v8": "3.6" }, "0.6.17": { "node_abi": 1, "v8": "3.6" }, "0.6.18": { "node_abi": 1, "v8": "3.6" }, "0.6.19": { "node_abi": 1, "v8": "3.6" }, "0.6.20": { "node_abi": 1, "v8": "3.6" }, "0.6.21": { "node_abi": 1, "v8": "3.6" }, "0.7.0": { "node_abi": 1, "v8": "3.8" }, "0.7.1": { "node_abi": 1, "v8": "3.8" }, "0.7.2": { "node_abi": 1, "v8": "3.8" }, "0.7.3": { "node_abi": 1, "v8": "3.9" }, "0.7.4": { "node_abi": 1, "v8": "3.9" }, "0.7.5": { "node_abi": 1, "v8": "3.9" }, "0.7.6": { "node_abi": 1, "v8": "3.9" }, "0.7.7": { "node_abi": 1, "v8": "3.9" }, "0.7.8": { "node_abi": 1, "v8": "3.9" }, "0.7.9": { "node_abi": 1, "v8": "3.11" }, "0.7.10": { "node_abi": 1, "v8": "3.9" }, "0.7.11": { "node_abi": 1, "v8": "3.11" }, "0.7.12": { "node_abi": 1, "v8": "3.11" }, "0.8.0": { "node_abi": 1, "v8": "3.11" }, "0.8.1": { "node_abi": 1, "v8": "3.11" }, "0.8.2": { "node_abi": 1, "v8": "3.11" }, "0.8.3": { "node_abi": 1, "v8": "3.11" }, "0.8.4": { "node_abi": 1, "v8": "3.11" }, "0.8.5": { "node_abi": 1, "v8": "3.11" }, "0.8.6": { "node_abi": 1, "v8": "3.11" }, "0.8.7": { "node_abi": 1, "v8": "3.11" }, "0.8.8": { "node_abi": 1, "v8": "3.11" }, "0.8.9": { "node_abi": 1, "v8": "3.11" }, "0.8.10": { "node_abi": 1, "v8": "3.11" }, "0.8.11": { "node_abi": 1, "v8": "3.11" }, "0.8.12": { "node_abi": 1, "v8": "3.11" }, "0.8.13": { "node_abi": 1, "v8": "3.11" }, "0.8.14": { "node_abi": 1, "v8": "3.11" }, "0.8.15": { "node_abi": 1, "v8": "3.11" }, "0.8.16": { "node_abi": 1, "v8": "3.11" }, "0.8.17": { "node_abi": 1, "v8": "3.11" }, "0.8.18": { "node_abi": 1, "v8": "3.11" }, "0.8.19": { "node_abi": 1, "v8": "3.11" }, "0.8.20": { "node_abi": 1, "v8": "3.11" }, "0.8.21": { "node_abi": 1, "v8": "3.11" }, "0.8.22": { "node_abi": 1, "v8": "3.11" }, "0.8.23": { "node_abi": 1, "v8": "3.11" }, "0.8.24": { "node_abi": 1, "v8": "3.11" }, "0.8.25": { "node_abi": 1, "v8": "3.11" }, "0.8.26": { "node_abi": 1, "v8": "3.11" }, "0.8.27": { "node_abi": 1, "v8": "3.11" }, "0.8.28": { "node_abi": 1, "v8": "3.11" }, "0.9.0": { "node_abi": 1, "v8": "3.11" }, "0.9.1": { "node_abi": 10, "v8": "3.11" }, "0.9.2": { "node_abi": 10, "v8": "3.11" }, "0.9.3": { "node_abi": 10, "v8": "3.13" }, "0.9.4": { "node_abi": 10, "v8": "3.13" }, "0.9.5": { "node_abi": 10, "v8": "3.13" }, "0.9.6": { "node_abi": 10, "v8": "3.15" }, "0.9.7": { "node_abi": 10, "v8": "3.15" }, "0.9.8": { "node_abi": 10, "v8": "3.15" }, "0.9.9": { "node_abi": 11, "v8": "3.15" }, "0.9.10": { "node_abi": 11, "v8": "3.15" }, "0.9.11": { "node_abi": 11, "v8": "3.14" }, "0.9.12": { "node_abi": 11, "v8": "3.14" }, "0.10.0": { "node_abi": 11, "v8": "3.14" }, "0.10.1": { "node_abi": 11, "v8": "3.14" }, "0.10.2": { "node_abi": 11, "v8": "3.14" }, "0.10.3": { "node_abi": 11, "v8": "3.14" }, "0.10.4": { "node_abi": 11, "v8": "3.14" }, "0.10.5": { "node_abi": 11, "v8": "3.14" }, "0.10.6": { "node_abi": 11, "v8": "3.14" }, "0.10.7": { "node_abi": 11, "v8": "3.14" }, "0.10.8": { "node_abi": 11, "v8": "3.14" }, "0.10.9": { "node_abi": 11, "v8": "3.14" }, "0.10.10": { "node_abi": 11, "v8": "3.14" }, "0.10.11": { "node_abi": 11, "v8": "3.14" }, "0.10.12": { "node_abi": 11, "v8": "3.14" }, "0.10.13": { "node_abi": 11, "v8": "3.14" }, "0.10.14": { "node_abi": 11, "v8": "3.14" }, "0.10.15": { "node_abi": 11, "v8": "3.14" }, "0.10.16": { "node_abi": 11, "v8": "3.14" }, "0.10.17": { "node_abi": 11, "v8": "3.14" }, "0.10.18": { "node_abi": 11, "v8": "3.14" }, "0.10.19": { "node_abi": 11, "v8": "3.14" }, "0.10.20": { "node_abi": 11, "v8": "3.14" }, "0.10.21": { "node_abi": 11, "v8": "3.14" }, "0.10.22": { "node_abi": 11, "v8": "3.14" }, "0.10.23": { "node_abi": 11, "v8": "3.14" }, "0.10.24": { "node_abi": 11, "v8": "3.14" }, "0.10.25": { "node_abi": 11, "v8": "3.14" }, "0.10.26": { "node_abi": 11, "v8": "3.14" }, "0.10.27": { "node_abi": 11, "v8": "3.14" }, "0.10.28": { "node_abi": 11, "v8": "3.14" }, "0.10.29": { "node_abi": 11, "v8": "3.14" }, "0.10.30": { "node_abi": 11, "v8": "3.14" }, "0.10.31": { "node_abi": 11, "v8": "3.14" }, "0.10.32": { "node_abi": 11, "v8": "3.14" }, "0.10.33": { "node_abi": 11, "v8": "3.14" }, "0.10.34": { "node_abi": 11, "v8": "3.14" }, "0.10.35": { "node_abi": 11, "v8": "3.14" }, "0.10.36": { "node_abi": 11, "v8": "3.14" }, "0.10.37": { "node_abi": 11, "v8": "3.14" }, "0.10.38": { "node_abi": 11, "v8": "3.14" }, "0.10.39": { "node_abi": 11, "v8": "3.14" }, "0.10.40": { "node_abi": 11, "v8": "3.14" }, "0.10.41": { "node_abi": 11, "v8": "3.14" }, "0.10.42": { "node_abi": 11, "v8": "3.14" }, "0.10.43": { "node_abi": 11, "v8": "3.14" }, "0.10.44": { "node_abi": 11, "v8": "3.14" }, "0.10.45": { "node_abi": 11, "v8": "3.14" }, "0.10.46": { "node_abi": 11, "v8": "3.14" }, "0.10.47": { "node_abi": 11, "v8": "3.14" }, "0.10.48": { "node_abi": 11, "v8": "3.14" }, "0.11.0": { "node_abi": 12, "v8": "3.17" }, "0.11.1": { "node_abi": 12, "v8": "3.18" }, "0.11.2": { "node_abi": 12, "v8": "3.19" }, "0.11.3": { "node_abi": 12, "v8": "3.19" }, "0.11.4": { "node_abi": 12, "v8": "3.20" }, "0.11.5": { "node_abi": 12, "v8": "3.20" }, "0.11.6": { "node_abi": 12, "v8": "3.20" }, "0.11.7": { "node_abi": 12, "v8": "3.20" }, "0.11.8": { "node_abi": 13, "v8": "3.21" }, "0.11.9": { "node_abi": 13, "v8": "3.22" }, "0.11.10": { "node_abi": 13, "v8": "3.22" }, "0.11.11": { "node_abi": 14, "v8": "3.22" }, "0.11.12": { "node_abi": 14, "v8": "3.22" }, "0.11.13": { "node_abi": 14, "v8": "3.25" }, "0.11.14": { "node_abi": 14, "v8": "3.26" }, "0.11.15": { "node_abi": 14, "v8": "3.28" }, "0.11.16": { "node_abi": 14, "v8": "3.28" }, "0.12.0": { "node_abi": 14, "v8": "3.28" }, "0.12.1": { "node_abi": 14, "v8": "3.28" }, "0.12.2": { "node_abi": 14, "v8": "3.28" }, "0.12.3": { "node_abi": 14, "v8": "3.28" }, "0.12.4": { "node_abi": 14, "v8": "3.28" }, "0.12.5": { "node_abi": 14, "v8": "3.28" }, "0.12.6": { "node_abi": 14, "v8": "3.28" }, "0.12.7": { "node_abi": 14, "v8": "3.28" }, "0.12.8": { "node_abi": 14, "v8": "3.28" }, "0.12.9": { "node_abi": 14, "v8": "3.28" }, "0.12.10": { "node_abi": 14, "v8": "3.28" }, "0.12.11": { "node_abi": 14, "v8": "3.28" }, "0.12.12": { "node_abi": 14, "v8": "3.28" }, "0.12.13": { "node_abi": 14, "v8": "3.28" }, "0.12.14": { "node_abi": 14, "v8": "3.28" }, "0.12.15": { "node_abi": 14, "v8": "3.28" }, "0.12.16": { "node_abi": 14, "v8": "3.28" }, "0.12.17": { "node_abi": 14, "v8": "3.28" }, "0.12.18": { "node_abi": 14, "v8": "3.28" }, "1.0.0": { "node_abi": 42, "v8": "3.31" }, "1.0.1": { "node_abi": 42, "v8": "3.31" }, "1.0.2": { "node_abi": 42, "v8": "3.31" }, "1.0.3": { "node_abi": 42, "v8": "4.1" }, "1.0.4": { "node_abi": 42, "v8": "4.1" }, "1.1.0": { "node_abi": 43, "v8": "4.1" }, "1.2.0": { "node_abi": 43, "v8": "4.1" }, "1.3.0": { "node_abi": 43, "v8": "4.1" }, "1.4.1": { "node_abi": 43, "v8": "4.1" }, "1.4.2": { "node_abi": 43, "v8": "4.1" }, "1.4.3": { "node_abi": 43, "v8": "4.1" }, "1.5.0": { "node_abi": 43, "v8": "4.1" }, "1.5.1": { "node_abi": 43, "v8": "4.1" }, "1.6.0": { "node_abi": 43, "v8": "4.1" }, "1.6.1": { "node_abi": 43, "v8": "4.1" }, "1.6.2": { "node_abi": 43, "v8": "4.1" }, "1.6.3": { "node_abi": 43, "v8": "4.1" }, "1.6.4": { "node_abi": 43, "v8": "4.1" }, "1.7.1": { "node_abi": 43, "v8": "4.1" }, "1.8.1": { "node_abi": 43, "v8": "4.1" }, "1.8.2": { "node_abi": 43, "v8": "4.1" }, "1.8.3": { "node_abi": 43, "v8": "4.1" }, "1.8.4": { "node_abi": 43, "v8": "4.1" }, "2.0.0": { "node_abi": 44, "v8": "4.2" }, "2.0.1": { "node_abi": 44, "v8": "4.2" }, "2.0.2": { "node_abi": 44, "v8": "4.2" }, "2.1.0": { "node_abi": 44, "v8": "4.2" }, "2.2.0": { "node_abi": 44, "v8": "4.2" }, "2.2.1": { "node_abi": 44, "v8": "4.2" }, "2.3.0": { "node_abi": 44, "v8": "4.2" }, "2.3.1": { "node_abi": 44, "v8": "4.2" }, "2.3.2": { "node_abi": 44, "v8": "4.2" }, "2.3.3": { "node_abi": 44, "v8": "4.2" }, "2.3.4": { "node_abi": 44, "v8": "4.2" }, "2.4.0": { "node_abi": 44, "v8": "4.2" }, "2.5.0": { "node_abi": 44, "v8": "4.2" }, "3.0.0": { "node_abi": 45, "v8": "4.4" }, "3.1.0": { "node_abi": 45, "v8": "4.4" }, "3.2.0": { "node_abi": 45, "v8": "4.4" }, "3.3.0": { "node_abi": 45, "v8": "4.4" }, "3.3.1": { "node_abi": 45, "v8": "4.4" }, "4.0.0": { "node_abi": 46, "v8": "4.5" }, "4.1.0": { "node_abi": 46, "v8": "4.5" }, "4.1.1": { "node_abi": 46, "v8": "4.5" }, "4.1.2": { "node_abi": 46, "v8": "4.5" }, "4.2.0": { "node_abi": 46, "v8": "4.5" }, "4.2.1": { "node_abi": 46, "v8": "4.5" }, "4.2.2": { "node_abi": 46, "v8": "4.5" }, "4.2.3": { "node_abi": 46, "v8": "4.5" }, "4.2.4": { "node_abi": 46, "v8": "4.5" }, "4.2.5": { "node_abi": 46, "v8": "4.5" }, "4.2.6": { "node_abi": 46, "v8": "4.5" }, "4.3.0": { "node_abi": 46, "v8": "4.5" }, "4.3.1": { "node_abi": 46, "v8": "4.5" }, "4.3.2": { "node_abi": 46, "v8": "4.5" }, "4.4.0": { "node_abi": 46, "v8": "4.5" }, "4.4.1": { "node_abi": 46, "v8": "4.5" }, "4.4.2": { "node_abi": 46, "v8": "4.5" }, "4.4.3": { "node_abi": 46, "v8": "4.5" }, "4.4.4": { "node_abi": 46, "v8": "4.5" }, "4.4.5": { "node_abi": 46, "v8": "4.5" }, "4.4.6": { "node_abi": 46, "v8": "4.5" }, "4.4.7": { "node_abi": 46, "v8": "4.5" }, "4.5.0": { "node_abi": 46, "v8": "4.5" }, "4.6.0": { "node_abi": 46, "v8": "4.5" }, "4.6.1": { "node_abi": 46, "v8": "4.5" }, "4.6.2": { "node_abi": 46, "v8": "4.5" }, "4.7.0": { "node_abi": 46, "v8": "4.5" }, "4.7.1": { "node_abi": 46, "v8": "4.5" }, "4.7.2": { "node_abi": 46, "v8": "4.5" }, "4.7.3": { "node_abi": 46, "v8": "4.5" }, "4.8.0": { "node_abi": 46, "v8": "4.5" }, "4.8.1": { "node_abi": 46, "v8": "4.5" }, "4.8.2": { "node_abi": 46, "v8": "4.5" }, "4.8.3": { "node_abi": 46, "v8": "4.5" }, "4.8.4": { "node_abi": 46, "v8": "4.5" }, "4.8.5": { "node_abi": 46, "v8": "4.5" }, "4.8.6": { "node_abi": 46, "v8": "4.5" }, "4.8.7": { "node_abi": 46, "v8": "4.5" }, "4.9.0": { "node_abi": 46, "v8": "4.5" }, "4.9.1": { "node_abi": 46, "v8": "4.5" }, "5.0.0": { "node_abi": 47, "v8": "4.6" }, "5.1.0": { "node_abi": 47, "v8": "4.6" }, "5.1.1": { "node_abi": 47, "v8": "4.6" }, "5.2.0": { "node_abi": 47, "v8": "4.6" }, "5.3.0": { "node_abi": 47, "v8": "4.6" }, "5.4.0": { "node_abi": 47, "v8": "4.6" }, "5.4.1": { "node_abi": 47, "v8": "4.6" }, "5.5.0": { "node_abi": 47, "v8": "4.6" }, "5.6.0": { "node_abi": 47, "v8": "4.6" }, "5.7.0": { "node_abi": 47, "v8": "4.6" }, "5.7.1": { "node_abi": 47, "v8": "4.6" }, "5.8.0": { "node_abi": 47, "v8": "4.6" }, "5.9.0": { "node_abi": 47, "v8": "4.6" }, "5.9.1": { "node_abi": 47, "v8": "4.6" }, "5.10.0": { "node_abi": 47, "v8": "4.6" }, "5.10.1": { "node_abi": 47, "v8": "4.6" }, "5.11.0": { "node_abi": 47, "v8": "4.6" }, "5.11.1": { "node_abi": 47, "v8": "4.6" }, "5.12.0": { "node_abi": 47, "v8": "4.6" }, "6.0.0": { "node_abi": 48, "v8": "5.0" }, "6.1.0": { "node_abi": 48, "v8": "5.0" }, "6.2.0": { "node_abi": 48, "v8": "5.0" }, "6.2.1": { "node_abi": 48, "v8": "5.0" }, "6.2.2": { "node_abi": 48, "v8": "5.0" }, "6.3.0": { "node_abi": 48, "v8": "5.0" }, "6.3.1": { "node_abi": 48, "v8": "5.0" }, "6.4.0": { "node_abi": 48, "v8": "5.0" }, "6.5.0": { "node_abi": 48, "v8": "5.1" }, "6.6.0": { "node_abi": 48, "v8": "5.1" }, "6.7.0": { "node_abi": 48, "v8": "5.1" }, "6.8.0": { "node_abi": 48, "v8": "5.1" }, "6.8.1": { "node_abi": 48, "v8": "5.1" }, "6.9.0": { "node_abi": 48, "v8": "5.1" }, "6.9.1": { "node_abi": 48, "v8": "5.1" }, "6.9.2": { "node_abi": 48, "v8": "5.1" }, "6.9.3": { "node_abi": 48, "v8": "5.1" }, "6.9.4": { "node_abi": 48, "v8": "5.1" }, "6.9.5": { "node_abi": 48, "v8": "5.1" }, "6.10.0": { "node_abi": 48, "v8": "5.1" }, "6.10.1": { "node_abi": 48, "v8": "5.1" }, "6.10.2": { "node_abi": 48, "v8": "5.1" }, "6.10.3": { "node_abi": 48, "v8": "5.1" }, "6.11.0": { "node_abi": 48, "v8": "5.1" }, "6.11.1": { "node_abi": 48, "v8": "5.1" }, "6.11.2": { "node_abi": 48, "v8": "5.1" }, "6.11.3": { "node_abi": 48, "v8": "5.1" }, "6.11.4": { "node_abi": 48, "v8": "5.1" }, "6.11.5": { "node_abi": 48, "v8": "5.1" }, "6.12.0": { "node_abi": 48, "v8": "5.1" }, "6.12.1": { "node_abi": 48, "v8": "5.1" }, "6.12.2": { "node_abi": 48, "v8": "5.1" }, "6.12.3": { "node_abi": 48, "v8": "5.1" }, "6.13.0": { "node_abi": 48, "v8": "5.1" }, "6.13.1": { "node_abi": 48, "v8": "5.1" }, "6.14.0": { "node_abi": 48, "v8": "5.1" }, "6.14.1": { "node_abi": 48, "v8": "5.1" }, "6.14.2": { "node_abi": 48, "v8": "5.1" }, "6.14.3": { "node_abi": 48, "v8": "5.1" }, "6.14.4": { "node_abi": 48, "v8": "5.1" }, "6.15.0": { "node_abi": 48, "v8": "5.1" }, "6.15.1": { "node_abi": 48, "v8": "5.1" }, "6.16.0": { "node_abi": 48, "v8": "5.1" }, "6.17.0": { "node_abi": 48, "v8": "5.1" }, "6.17.1": { "node_abi": 48, "v8": "5.1" }, "7.0.0": { "node_abi": 51, "v8": "5.4" }, "7.1.0": { "node_abi": 51, "v8": "5.4" }, "7.2.0": { "node_abi": 51, "v8": "5.4" }, "7.2.1": { "node_abi": 51, "v8": "5.4" }, "7.3.0": { "node_abi": 51, "v8": "5.4" }, "7.4.0": { "node_abi": 51, "v8": "5.4" }, "7.5.0": { "node_abi": 51, "v8": "5.4" }, "7.6.0": { "node_abi": 51, "v8": "5.5" }, "7.7.0": { "node_abi": 51, "v8": "5.5" }, "7.7.1": { "node_abi": 51, "v8": "5.5" }, "7.7.2": { "node_abi": 51, "v8": "5.5" }, "7.7.3": { "node_abi": 51, "v8": "5.5" }, "7.7.4": { "node_abi": 51, "v8": "5.5" }, "7.8.0": { "node_abi": 51, "v8": "5.5" }, "7.9.0": { "node_abi": 51, "v8": "5.5" }, "7.10.0": { "node_abi": 51, "v8": "5.5" }, "7.10.1": { "node_abi": 51, "v8": "5.5" }, "8.0.0": { "node_abi": 57, "v8": "5.8" }, "8.1.0": { "node_abi": 57, "v8": "5.8" }, "8.1.1": { "node_abi": 57, "v8": "5.8" }, "8.1.2": { "node_abi": 57, "v8": "5.8" }, "8.1.3": { "node_abi": 57, "v8": "5.8" }, "8.1.4": { "node_abi": 57, "v8": "5.8" }, "8.2.0": { "node_abi": 57, "v8": "5.8" }, "8.2.1": { "node_abi": 57, "v8": "5.8" }, "8.3.0": { "node_abi": 57, "v8": "6.0" }, "8.4.0": { "node_abi": 57, "v8": "6.0" }, "8.5.0": { "node_abi": 57, "v8": "6.0" }, "8.6.0": { "node_abi": 57, "v8": "6.0" }, "8.7.0": { "node_abi": 57, "v8": "6.1" }, "8.8.0": { "node_abi": 57, "v8": "6.1" }, "8.8.1": { "node_abi": 57, "v8": "6.1" }, "8.9.0": { "node_abi": 57, "v8": "6.1" }, "8.9.1": { "node_abi": 57, "v8": "6.1" }, "8.9.2": { "node_abi": 57, "v8": "6.1" }, "8.9.3": { "node_abi": 57, "v8": "6.1" }, "8.9.4": { "node_abi": 57, "v8": "6.1" }, "8.10.0": { "node_abi": 57, "v8": "6.2" }, "8.11.0": { "node_abi": 57, "v8": "6.2" }, "8.11.1": { "node_abi": 57, "v8": "6.2" }, "8.11.2": { "node_abi": 57, "v8": "6.2" }, "8.11.3": { "node_abi": 57, "v8": "6.2" }, "8.11.4": { "node_abi": 57, "v8": "6.2" }, "8.12.0": { "node_abi": 57, "v8": "6.2" }, "8.13.0": { "node_abi": 57, "v8": "6.2" }, "8.14.0": { "node_abi": 57, "v8": "6.2" }, "8.14.1": { "node_abi": 57, "v8": "6.2" }, "8.15.0": { "node_abi": 57, "v8": "6.2" }, "8.15.1": { "node_abi": 57, "v8": "6.2" }, "8.16.0": { "node_abi": 57, "v8": "6.2" }, "8.16.1": { "node_abi": 57, "v8": "6.2" }, "8.16.2": { "node_abi": 57, "v8": "6.2" }, "8.17.0": { "node_abi": 57, "v8": "6.2" }, "9.0.0": { "node_abi": 59, "v8": "6.2" }, "9.1.0": { "node_abi": 59, "v8": "6.2" }, "9.2.0": { "node_abi": 59, "v8": "6.2" }, "9.2.1": { "node_abi": 59, "v8": "6.2" }, "9.3.0": { "node_abi": 59, "v8": "6.2" }, "9.4.0": { "node_abi": 59, "v8": "6.2" }, "9.5.0": { "node_abi": 59, "v8": "6.2" }, "9.6.0": { "node_abi": 59, "v8": "6.2" }, "9.6.1": { "node_abi": 59, "v8": "6.2" }, "9.7.0": { "node_abi": 59, "v8": "6.2" }, "9.7.1": { "node_abi": 59, "v8": "6.2" }, "9.8.0": { "node_abi": 59, "v8": "6.2" }, "9.9.0": { "node_abi": 59, "v8": "6.2" }, "9.10.0": { "node_abi": 59, "v8": "6.2" }, "9.10.1": { "node_abi": 59, "v8": "6.2" }, "9.11.0": { "node_abi": 59, "v8": "6.2" }, "9.11.1": { "node_abi": 59, "v8": "6.2" }, "9.11.2": { "node_abi": 59, "v8": "6.2" }, "10.0.0": { "node_abi": 64, "v8": "6.6" }, "10.1.0": { "node_abi": 64, "v8": "6.6" }, "10.2.0": { "node_abi": 64, "v8": "6.6" }, "10.2.1": { "node_abi": 64, "v8": "6.6" }, "10.3.0": { "node_abi": 64, "v8": "6.6" }, "10.4.0": { "node_abi": 64, "v8": "6.7" }, "10.4.1": { "node_abi": 64, "v8": "6.7" }, "10.5.0": { "node_abi": 64, "v8": "6.7" }, "10.6.0": { "node_abi": 64, "v8": "6.7" }, "10.7.0": { "node_abi": 64, "v8": "6.7" }, "10.8.0": { "node_abi": 64, "v8": "6.7" }, "10.9.0": { "node_abi": 64, "v8": "6.8" }, "10.10.0": { "node_abi": 64, "v8": "6.8" }, "10.11.0": { "node_abi": 64, "v8": "6.8" }, "10.12.0": { "node_abi": 64, "v8": "6.8" }, "10.13.0": { "node_abi": 64, "v8": "6.8" }, "10.14.0": { "node_abi": 64, "v8": "6.8" }, "10.14.1": { "node_abi": 64, "v8": "6.8" }, "10.14.2": { "node_abi": 64, "v8": "6.8" }, "10.15.0": { "node_abi": 64, "v8": "6.8" }, "10.15.1": { "node_abi": 64, "v8": "6.8" }, "10.15.2": { "node_abi": 64, "v8": "6.8" }, "10.15.3": { "node_abi": 64, "v8": "6.8" }, "10.16.0": { "node_abi": 64, "v8": "6.8" }, "10.16.1": { "node_abi": 64, "v8": "6.8" }, "10.16.2": { "node_abi": 64, "v8": "6.8" }, "10.16.3": { "node_abi": 64, "v8": "6.8" }, "10.17.0": { "node_abi": 64, "v8": "6.8" }, "10.18.0": { "node_abi": 64, "v8": "6.8" }, "10.18.1": { "node_abi": 64, "v8": "6.8" }, "10.19.0": { "node_abi": 64, "v8": "6.8" }, "10.20.0": { "node_abi": 64, "v8": "6.8" }, "10.20.1": { "node_abi": 64, "v8": "6.8" }, "10.21.0": { "node_abi": 64, "v8": "6.8" }, "10.22.0": { "node_abi": 64, "v8": "6.8" }, "10.22.1": { "node_abi": 64, "v8": "6.8" }, "10.23.0": { "node_abi": 64, "v8": "6.8" }, "10.23.1": { "node_abi": 64, "v8": "6.8" }, "10.23.2": { "node_abi": 64, "v8": "6.8" }, "10.23.3": { "node_abi": 64, "v8": "6.8" }, "10.24.0": { "node_abi": 64, "v8": "6.8" }, "10.24.1": { "node_abi": 64, "v8": "6.8" }, "11.0.0": { "node_abi": 67, "v8": "7.0" }, "11.1.0": { "node_abi": 67, "v8": "7.0" }, "11.2.0": { "node_abi": 67, "v8": "7.0" }, "11.3.0": { "node_abi": 67, "v8": "7.0" }, "11.4.0": { "node_abi": 67, "v8": "7.0" }, "11.5.0": { "node_abi": 67, "v8": "7.0" }, "11.6.0": { "node_abi": 67, "v8": "7.0" }, "11.7.0": { "node_abi": 67, "v8": "7.0" }, "11.8.0": { "node_abi": 67, "v8": "7.0" }, "11.9.0": { "node_abi": 67, "v8": "7.0" }, "11.10.0": { "node_abi": 67, "v8": "7.0" }, "11.10.1": { "node_abi": 67, "v8": "7.0" }, "11.11.0": { "node_abi": 67, "v8": "7.0" }, "11.12.0": { "node_abi": 67, "v8": "7.0" }, "11.13.0": { "node_abi": 67, "v8": "7.0" }, "11.14.0": { "node_abi": 67, "v8": "7.0" }, "11.15.0": { "node_abi": 67, "v8": "7.0" }, "12.0.0": { "node_abi": 72, "v8": "7.4" }, "12.1.0": { "node_abi": 72, "v8": "7.4" }, "12.2.0": { "node_abi": 72, "v8": "7.4" }, "12.3.0": { "node_abi": 72, "v8": "7.4" }, "12.3.1": { "node_abi": 72, "v8": "7.4" }, "12.4.0": { "node_abi": 72, "v8": "7.4" }, "12.5.0": { "node_abi": 72, "v8": "7.5" }, "12.6.0": { "node_abi": 72, "v8": "7.5" }, "12.7.0": { "node_abi": 72, "v8": "7.5" }, "12.8.0": { "node_abi": 72, "v8": "7.5" }, "12.8.1": { "node_abi": 72, "v8": "7.5" }, "12.9.0": { "node_abi": 72, "v8": "7.6" }, "12.9.1": { "node_abi": 72, "v8": "7.6" }, "12.10.0": { "node_abi": 72, "v8": "7.6" }, "12.11.0": { "node_abi": 72, "v8": "7.7" }, "12.11.1": { "node_abi": 72, "v8": "7.7" }, "12.12.0": { "node_abi": 72, "v8": "7.7" }, "12.13.0": { "node_abi": 72, "v8": "7.7" }, "12.13.1": { "node_abi": 72, "v8": "7.7" }, "12.14.0": { "node_abi": 72, "v8": "7.7" }, "12.14.1": { "node_abi": 72, "v8": "7.7" }, "12.15.0": { "node_abi": 72, "v8": "7.7" }, "12.16.0": { "node_abi": 72, "v8": "7.8" }, "12.16.1": { "node_abi": 72, "v8": "7.8" }, "12.16.2": { "node_abi": 72, "v8": "7.8" }, "12.16.3": { "node_abi": 72, "v8": "7.8" }, "12.17.0": { "node_abi": 72, "v8": "7.8" }, "12.18.0": { "node_abi": 72, "v8": "7.8" }, "12.18.1": { "node_abi": 72, "v8": "7.8" }, "12.18.2": { "node_abi": 72, "v8": "7.8" }, "12.18.3": { "node_abi": 72, "v8": "7.8" }, "12.18.4": { "node_abi": 72, "v8": "7.8" }, "12.19.0": { "node_abi": 72, "v8": "7.8" }, "12.19.1": { "node_abi": 72, "v8": "7.8" }, "12.20.0": { "node_abi": 72, "v8": "7.8" }, "12.20.1": { "node_abi": 72, "v8": "7.8" }, "12.20.2": { "node_abi": 72, "v8": "7.8" }, "12.21.0": { "node_abi": 72, "v8": "7.8" }, "12.22.0": { "node_abi": 72, "v8": "7.8" }, "12.22.1": { "node_abi": 72, "v8": "7.8" }, "12.22.2": { "node_abi": 72, "v8": "7.8" }, "12.22.3": { "node_abi": 72, "v8": "7.8" }, "12.22.4": { "node_abi": 72, "v8": "7.8" }, "12.22.5": { "node_abi": 72, "v8": "7.8" }, "12.22.6": { "node_abi": 72, "v8": "7.8" }, "12.22.7": { "node_abi": 72, "v8": "7.8" }, "13.0.0": { "node_abi": 79, "v8": "7.8" }, "13.0.1": { "node_abi": 79, "v8": "7.8" }, "13.1.0": { "node_abi": 79, "v8": "7.8" }, "13.2.0": { "node_abi": 79, "v8": "7.9" }, "13.3.0": { "node_abi": 79, "v8": "7.9" }, "13.4.0": { "node_abi": 79, "v8": "7.9" }, "13.5.0": { "node_abi": 79, "v8": "7.9" }, "13.6.0": { "node_abi": 79, "v8": "7.9" }, "13.7.0": { "node_abi": 79, "v8": "7.9" }, "13.8.0": { "node_abi": 79, "v8": "7.9" }, "13.9.0": { "node_abi": 79, "v8": "7.9" }, "13.10.0": { "node_abi": 79, "v8": "7.9" }, "13.10.1": { "node_abi": 79, "v8": "7.9" }, "13.11.0": { "node_abi": 79, "v8": "7.9" }, "13.12.0": { "node_abi": 79, "v8": "7.9" }, "13.13.0": { "node_abi": 79, "v8": "7.9" }, "13.14.0": { "node_abi": 79, "v8": "7.9" }, "14.0.0": { "node_abi": 83, "v8": "8.1" }, "14.1.0": { "node_abi": 83, "v8": "8.1" }, "14.2.0": { "node_abi": 83, "v8": "8.1" }, "14.3.0": { "node_abi": 83, "v8": "8.1" }, "14.4.0": { "node_abi": 83, "v8": "8.1" }, "14.5.0": { "node_abi": 83, "v8": "8.3" }, "14.6.0": { "node_abi": 83, "v8": "8.4" }, "14.7.0": { "node_abi": 83, "v8": "8.4" }, "14.8.0": { "node_abi": 83, "v8": "8.4" }, "14.9.0": { "node_abi": 83, "v8": "8.4" }, "14.10.0": { "node_abi": 83, "v8": "8.4" }, "14.10.1": { "node_abi": 83, "v8": "8.4" }, "14.11.0": { "node_abi": 83, "v8": "8.4" }, "14.12.0": { "node_abi": 83, "v8": "8.4" }, "14.13.0": { "node_abi": 83, "v8": "8.4" }, "14.13.1": { "node_abi": 83, "v8": "8.4" }, "14.14.0": { "node_abi": 83, "v8": "8.4" }, "14.15.0": { "node_abi": 83, "v8": "8.4" }, "14.15.1": { "node_abi": 83, "v8": "8.4" }, "14.15.2": { "node_abi": 83, "v8": "8.4" }, "14.15.3": { "node_abi": 83, "v8": "8.4" }, "14.15.4": { "node_abi": 83, "v8": "8.4" }, "14.15.5": { "node_abi": 83, "v8": "8.4" }, "14.16.0": { "node_abi": 83, "v8": "8.4" }, "14.16.1": { "node_abi": 83, "v8": "8.4" }, "14.17.0": { "node_abi": 83, "v8": "8.4" }, "14.17.1": { "node_abi": 83, "v8": "8.4" }, "14.17.2": { "node_abi": 83, "v8": "8.4" }, "14.17.3": { "node_abi": 83, "v8": "8.4" }, "14.17.4": { "node_abi": 83, "v8": "8.4" }, "14.17.5": { "node_abi": 83, "v8": "8.4" }, "14.17.6": { "node_abi": 83, "v8": "8.4" }, "14.18.0": { "node_abi": 83, "v8": "8.4" }, "14.18.1": { "node_abi": 83, "v8": "8.4" }, "15.0.0": { "node_abi": 88, "v8": "8.6" }, "15.0.1": { "node_abi": 88, "v8": "8.6" }, "15.1.0": { "node_abi": 88, "v8": "8.6" }, "15.2.0": { "node_abi": 88, "v8": "8.6" }, "15.2.1": { "node_abi": 88, "v8": "8.6" }, "15.3.0": { "node_abi": 88, "v8": "8.6" }, "15.4.0": { "node_abi": 88, "v8": "8.6" }, "15.5.0": { "node_abi": 88, "v8": "8.6" }, "15.5.1": { "node_abi": 88, "v8": "8.6" }, "15.6.0": { "node_abi": 88, "v8": "8.6" }, "15.7.0": { "node_abi": 88, "v8": "8.6" }, "15.8.0": { "node_abi": 88, "v8": "8.6" }, "15.9.0": { "node_abi": 88, "v8": "8.6" }, "15.10.0": { "node_abi": 88, "v8": "8.6" }, "15.11.0": { "node_abi": 88, "v8": "8.6" }, "15.12.0": { "node_abi": 88, "v8": "8.6" }, "15.13.0": { "node_abi": 88, "v8": "8.6" }, "15.14.0": { "node_abi": 88, "v8": "8.6" }, "16.0.0": { "node_abi": 93, "v8": "9.0" }, "16.1.0": { "node_abi": 93, "v8": "9.0" }, "16.2.0": { "node_abi": 93, "v8": "9.0" }, "16.3.0": { "node_abi": 93, "v8": "9.0" }, "16.4.0": { "node_abi": 93, "v8": "9.1" }, "16.4.1": { "node_abi": 93, "v8": "9.1" }, "16.4.2": { "node_abi": 93, "v8": "9.1" }, "16.5.0": { "node_abi": 93, "v8": "9.1" }, "16.6.0": { "node_abi": 93, "v8": "9.2" }, "16.6.1": { "node_abi": 93, "v8": "9.2" }, "16.6.2": { "node_abi": 93, "v8": "9.2" }, "16.7.0": { "node_abi": 93, "v8": "9.2" }, "16.8.0": { "node_abi": 93, "v8": "9.2" }, "16.9.0": { "node_abi": 93, "v8": "9.3" }, "16.9.1": { "node_abi": 93, "v8": "9.3" }, "16.10.0": { "node_abi": 93, "v8": "9.3" }, "16.11.0": { "node_abi": 93, "v8": "9.4" }, "16.11.1": { "node_abi": 93, "v8": "9.4" }, "17.0.0": { "node_abi": 102, "v8": "9.5" }, "17.0.1": { "node_abi": 102, "v8": "9.5" } }
0
coqui_public_repos/inference-engine/third_party
coqui_public_repos/inference-engine/third_party/ThreadPool/README.md
ThreadPool ========== A simple C++11 Thread Pool implementation. Basic usage: ```c++ // create thread pool with 4 worker threads ThreadPool pool(4); // enqueue and store future auto result = pool.enqueue([](int answer) { return answer; }, 42); // get result from future std::cout << result.get() << std::endl; ```
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include/fst
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include/fst/script/connect.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #ifndef FST_SCRIPT_CONNECT_H_ #define FST_SCRIPT_CONNECT_H_ #include <fst/connect.h> #include <fst/script/fst-class.h> namespace fst { namespace script { template <class Arc> void Connect(MutableFstClass *fst) { Connect(fst->GetMutableFst<Arc>()); } void Connect(MutableFstClass *fst); } // namespace script } // namespace fst #endif // FST_SCRIPT_CONNECT_H_
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-nodejs_15x-raspbian-rpi3-opt.yml
build: template_file: test-raspbian-opt-base.tyml dependencies: - "linux-rpi3-cpu-opt" - "test-training_16k-linux-amd64-py36m-opt" test_model_task: "test-training_16k-linux-amd64-py36m-opt" system_setup: > ${nodejs.packages_buster.prep_15} && ${nodejs.packages_buster.apt_pinning} && apt-get -qq update && apt-get -qq -y install ${nodejs.packages_buster.apt} args: tests_cmdline: "${system.homedir.linux}/DeepSpeech/ds/taskcluster/tc-node_tflite-tests.sh 15.x 16k" metadata: name: "DeepSpeech Raspbian RPi3/ARMv7 CPU NodeJS 15.x tests" description: "Testing DeepSpeech for Raspbian RPi3/ARMv7 on NodeJS v15.x, CPU only, optimized version"
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/script/draw.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #include <ostream> #include <string> #include <fst/script/draw.h> #include <fst/script/fst-class.h> #include <fst/script/script-impl.h> namespace fst { namespace script { void DrawFst(const FstClass &fst, const SymbolTable *isyms, const SymbolTable *osyms, const SymbolTable *ssyms, bool accep, const string &title, float width, float height, bool portrait, bool vertical, float ranksep, float nodesep, int fontsize, int precision, const string &float_format, bool show_weight_one, std::ostream *ostrm, const string &dest) { FstDrawerArgs args(fst, isyms, osyms, ssyms, accep, title, width, height, portrait, vertical, ranksep, nodesep, fontsize, precision, float_format, show_weight_one, ostrm, dest); Apply<Operation<FstDrawerArgs>>("DrawFst", fst.ArcType(), &args); } REGISTER_FST_OPERATION(DrawFst, StdArc, FstDrawerArgs); REGISTER_FST_OPERATION(DrawFst, LogArc, FstDrawerArgs); REGISTER_FST_OPERATION(DrawFst, Log64Arc, FstDrawerArgs); } // namespace script } // namespace fst
0
coqui_public_repos/inference-engine/third_party/kenlm
coqui_public_repos/inference-engine/third_party/kenlm/util/read_compressed.cc
#include "util/read_compressed.hh" #include "util/file.hh" #include "util/have.hh" #include "util/scoped.hh" #include <algorithm> #include <iostream> #include <cassert> #include <climits> #include <cstdlib> #include <cstring> #ifdef HAVE_ZLIB #include <zlib.h> #endif #ifdef HAVE_BZLIB #include <bzlib.h> #endif #ifdef HAVE_XZLIB #include <lzma.h> #endif namespace util { CompressedException::CompressedException() throw() {} CompressedException::~CompressedException() throw() {} GZException::GZException() throw() {} GZException::~GZException() throw() {} BZException::BZException() throw() {} BZException::~BZException() throw() {} XZException::XZException() throw() {} XZException::~XZException() throw() {} void ReadBase::ReplaceThis(ReadBase *with, ReadCompressed &thunk) { thunk.internal_.reset(with); } ReadBase *ReadBase::Current(ReadCompressed &thunk) { return thunk.internal_.get(); } uint64_t &ReadBase::ReadCount(ReadCompressed &thunk) { return thunk.raw_amount_; } namespace { ReadBase *ReadFactory(int fd, uint64_t &raw_amount, const void *already_data, std::size_t already_size, bool require_compressed); // Completed file that other classes can thunk to. class Complete : public ReadBase { public: std::size_t Read(void *, std::size_t, ReadCompressed &) { return 0; } }; class Uncompressed : public ReadBase { public: explicit Uncompressed(int fd) : fd_(fd) {} std::size_t Read(void *to, std::size_t amount, ReadCompressed &thunk) { std::size_t got = PartialRead(fd_.get(), to, amount); ReadCount(thunk) += got; return got; } private: scoped_fd fd_; }; class UncompressedWithHeader : public ReadBase { public: UncompressedWithHeader(int fd, const void *already_data, std::size_t already_size) : fd_(fd) { assert(already_size); buf_.reset(malloc(already_size)); if (!buf_.get()) throw std::bad_alloc(); memcpy(buf_.get(), already_data, already_size); remain_ = static_cast<uint8_t*>(buf_.get()); end_ = remain_ + already_size; } std::size_t Read(void *to, std::size_t amount, ReadCompressed &thunk) { assert(buf_.get()); assert(remain_ != end_); std::size_t sending = std::min<std::size_t>(amount, end_ - remain_); memcpy(to, remain_, sending); remain_ += sending; if (remain_ == end_) { ReplaceThis(new Uncompressed(fd_.release()), thunk); } return sending; } private: scoped_malloc buf_; uint8_t *remain_; uint8_t *end_; scoped_fd fd_; }; static const std::size_t kInputBuffer = 16384; template <class Compression> class StreamCompressed : public ReadBase { public: StreamCompressed(int fd, const void *already_data, std::size_t already_size) : file_(fd), in_buffer_(MallocOrThrow(kInputBuffer)), back_(memcpy(in_buffer_.get(), already_data, already_size), already_size) {} std::size_t Read(void *to, std::size_t amount, ReadCompressed &thunk) { if (amount == 0) return 0; back_.SetOutput(to, amount); do { if (!back_.Stream().avail_in) ReadInput(thunk); if (!back_.Process()) { // reached end, at least for the compressed portion. std::size_t ret = static_cast<const uint8_t *>(static_cast<void*>(back_.Stream().next_out)) - static_cast<const uint8_t*>(to); ReplaceThis(ReadFactory(file_.release(), ReadCount(thunk), back_.Stream().next_in, back_.Stream().avail_in, true), thunk); if (ret) return ret; // We did not read anything this round, so clients might think EOF. Transfer responsibility to the next reader. return Current(thunk)->Read(to, amount, thunk); } } while (back_.Stream().next_out == to); return static_cast<const uint8_t*>(static_cast<void*>(back_.Stream().next_out)) - static_cast<const uint8_t*>(to); } private: void ReadInput(ReadCompressed &thunk) { assert(!back_.Stream().avail_in); std::size_t got = ReadOrEOF(file_.get(), in_buffer_.get(), kInputBuffer); back_.SetInput(in_buffer_.get(), got); ReadCount(thunk) += got; } scoped_fd file_; scoped_malloc in_buffer_; Compression back_; }; #ifdef HAVE_ZLIB class GZip { public: GZip(const void *base, std::size_t amount) { SetInput(base, amount); stream_.zalloc = Z_NULL; stream_.zfree = Z_NULL; stream_.opaque = Z_NULL; stream_.msg = NULL; // 32 for zlib and gzip decoding with automatic header detection. // 15 for maximum window size. UTIL_THROW_IF(Z_OK != inflateInit2(&stream_, 32 + 15), GZException, "Failed to initialize zlib."); } ~GZip() { if (Z_OK != inflateEnd(&stream_)) { std::cerr << "zlib could not close properly." << std::endl; abort(); } } void SetOutput(void *to, std::size_t amount) { stream_.next_out = static_cast<Bytef*>(to); stream_.avail_out = std::min<std::size_t>(std::numeric_limits<uInt>::max(), amount); } void SetInput(const void *base, std::size_t amount) { assert(amount < static_cast<std::size_t>(std::numeric_limits<uInt>::max())); stream_.next_in = const_cast<Bytef*>(static_cast<const Bytef*>(base)); stream_.avail_in = amount; } const z_stream &Stream() const { return stream_; } bool Process() { int result = inflate(&stream_, 0); switch (result) { case Z_OK: return true; case Z_STREAM_END: return false; case Z_ERRNO: UTIL_THROW(ErrnoException, "zlib error"); default: UTIL_THROW(GZException, "zlib encountered " << (stream_.msg ? stream_.msg : "an error ") << " code " << result); } } private: z_stream stream_; }; #endif // HAVE_ZLIB #ifdef HAVE_BZLIB class BZip { public: BZip(const void *base, std::size_t amount) { memset(&stream_, 0, sizeof(stream_)); SetInput(base, amount); HandleError(BZ2_bzDecompressInit(&stream_, 0, 0)); } ~BZip() { try { HandleError(BZ2_bzDecompressEnd(&stream_)); } catch (const std::exception &e) { std::cerr << e.what() << std::endl; abort(); } } bool Process() { int ret = BZ2_bzDecompress(&stream_); if (ret == BZ_STREAM_END) return false; HandleError(ret); return true; } void SetOutput(void *base, std::size_t amount) { stream_.next_out = static_cast<char*>(base); stream_.avail_out = std::min<std::size_t>(std::numeric_limits<unsigned int>::max(), amount); } void SetInput(const void *base, std::size_t amount) { stream_.next_in = const_cast<char*>(static_cast<const char*>(base)); stream_.avail_in = amount; } const bz_stream &Stream() const { return stream_; } private: void HandleError(int value) { switch(value) { case BZ_OK: return; case BZ_CONFIG_ERROR: UTIL_THROW(BZException, "bzip2 seems to be miscompiled."); case BZ_PARAM_ERROR: UTIL_THROW(BZException, "bzip2 Parameter error"); case BZ_DATA_ERROR: UTIL_THROW(BZException, "bzip2 detected a corrupt file"); case BZ_DATA_ERROR_MAGIC: UTIL_THROW(BZException, "bzip2 detected bad magic bytes. Perhaps this was not a bzip2 file after all?"); case BZ_MEM_ERROR: throw std::bad_alloc(); default: UTIL_THROW(BZException, "Unknown bzip2 error code " << value); } } bz_stream stream_; }; #endif // HAVE_BZLIB #ifdef HAVE_XZLIB class XZip { public: XZip(const void *base, std::size_t amount) : stream_(), action_(LZMA_RUN) { memset(&stream_, 0, sizeof(stream_)); SetInput(base, amount); HandleError(lzma_stream_decoder(&stream_, UINT64_MAX, 0)); } ~XZip() { lzma_end(&stream_); } void SetOutput(void *base, std::size_t amount) { stream_.next_out = static_cast<uint8_t*>(base); stream_.avail_out = amount; } void SetInput(const void *base, std::size_t amount) { stream_.next_in = static_cast<const uint8_t*>(base); stream_.avail_in = amount; if (!amount) action_ = LZMA_FINISH; } const lzma_stream &Stream() const { return stream_; } bool Process() { lzma_ret status = lzma_code(&stream_, action_); if (status == LZMA_STREAM_END) return false; HandleError(status); return true; } private: void HandleError(lzma_ret value) { switch (value) { case LZMA_OK: return; case LZMA_MEM_ERROR: throw std::bad_alloc(); case LZMA_FORMAT_ERROR: UTIL_THROW(XZException, "xzlib says file format not recognized"); case LZMA_OPTIONS_ERROR: UTIL_THROW(XZException, "xzlib says unsupported compression options"); case LZMA_DATA_ERROR: UTIL_THROW(XZException, "xzlib says this file is corrupt"); case LZMA_BUF_ERROR: UTIL_THROW(XZException, "xzlib says unexpected end of input"); default: UTIL_THROW(XZException, "unrecognized xzlib error " << value); } } lzma_stream stream_; lzma_action action_; }; #endif // HAVE_XZLIB class IStreamReader : public ReadBase { public: explicit IStreamReader(std::istream &stream) : stream_(stream) {} std::size_t Read(void *to, std::size_t amount, ReadCompressed &thunk) { if (!stream_.read(static_cast<char*>(to), amount)) { UTIL_THROW_IF(!stream_.eof(), ErrnoException, "istream error"); amount = stream_.gcount(); } ReadCount(thunk) += amount; return amount; } private: std::istream &stream_; }; enum MagicResult { UTIL_UNKNOWN, UTIL_GZIP, UTIL_BZIP, UTIL_XZIP }; MagicResult DetectMagic(const void *from_void, std::size_t length) { const uint8_t *header = static_cast<const uint8_t*>(from_void); if (length >= 2 && header[0] == 0x1f && header[1] == 0x8b) { return UTIL_GZIP; } const uint8_t kBZMagic[3] = {'B', 'Z', 'h'}; if (length >= sizeof(kBZMagic) && !memcmp(header, kBZMagic, sizeof(kBZMagic))) { return UTIL_BZIP; } const uint8_t kXZMagic[6] = { 0xFD, '7', 'z', 'X', 'Z', 0x00 }; if (length >= sizeof(kXZMagic) && !memcmp(header, kXZMagic, sizeof(kXZMagic))) { return UTIL_XZIP; } return UTIL_UNKNOWN; } ReadBase *ReadFactory(int fd, uint64_t &raw_amount, const void *already_data, const std::size_t already_size, bool require_compressed) { scoped_fd hold(fd); std::string header(reinterpret_cast<const char*>(already_data), already_size); if (header.size() < ReadCompressed::kMagicSize) { std::size_t original = header.size(); header.resize(ReadCompressed::kMagicSize); std::size_t got = ReadOrEOF(fd, &header[original], ReadCompressed::kMagicSize - original); raw_amount += got; header.resize(original + got); } if (header.empty()) { return new Complete(); } switch (DetectMagic(&header[0], header.size())) { case UTIL_GZIP: #ifdef HAVE_ZLIB return new StreamCompressed<GZip>(hold.release(), header.data(), header.size()); #else UTIL_THROW(CompressedException, "This looks like a gzip file but gzip support was not compiled in."); #endif case UTIL_BZIP: #ifdef HAVE_BZLIB return new StreamCompressed<BZip>(hold.release(), &header[0], header.size()); #else UTIL_THROW(CompressedException, "This looks like a bzip file (it begins with BZh), but bzip support was not compiled in."); #endif case UTIL_XZIP: #ifdef HAVE_XZLIB return new StreamCompressed<XZip>(hold.release(), header.data(), header.size()); #else UTIL_THROW(CompressedException, "This looks like an xz file, but xz support was not compiled in."); #endif default: UTIL_THROW_IF(require_compressed, CompressedException, "Uncompressed data detected after a compresssed file. This could be supported but usually indicates an error."); return new UncompressedWithHeader(hold.release(), header.data(), header.size()); } } } // namespace bool ReadCompressed::DetectCompressedMagic(const void *from_void) { return DetectMagic(from_void, kMagicSize) != UTIL_UNKNOWN; } ReadCompressed::ReadCompressed(int fd) { Reset(fd); } ReadCompressed::ReadCompressed(std::istream &in) { Reset(in); } ReadCompressed::ReadCompressed() {} void ReadCompressed::Reset(int fd) { raw_amount_ = 0; internal_.reset(); internal_.reset(ReadFactory(fd, raw_amount_, NULL, 0, false)); } void ReadCompressed::Reset(std::istream &in) { internal_.reset(); internal_.reset(new IStreamReader(in)); } std::size_t ReadCompressed::Read(void *to, std::size_t amount) { return internal_->Read(to, amount, *this); } std::size_t ReadCompressed::ReadOrEOF(void *const to_in, std::size_t amount) { uint8_t *to = reinterpret_cast<uint8_t*>(to_in); while (amount) { std::size_t got = Read(to, amount); if (!got) break; to += got; amount -= got; } return to - reinterpret_cast<uint8_t*>(to_in); } } // namespace util
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/bin/fstreplace-main.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Performs the dynamic replacement of arcs in one FST with another FST, // allowing for the definition of FSTs analogous to RTNs. #include <cstring> #include <string> #include <vector> #include <fst/flags.h> #include <fst/script/getters.h> #include <fst/script/replace.h> DECLARE_string(call_arc_labeling); DECLARE_string(return_arc_labeling); DECLARE_int64(return_label); DECLARE_bool(epsilon_on_replace); void Cleanup(std::vector<fst::script::LabelFstClassPair> *pairs) { for (const auto &pair : *pairs) { delete pair.second; } pairs->clear(); } int fstreplace_main(int argc, char **argv) { namespace s = fst::script; using fst::script::FstClass; using fst::script::VectorFstClass; using fst::ReplaceLabelType; string usage = "Recursively replaces FST arcs with other FST(s).\n\n" " Usage: "; usage += argv[0]; usage += " root.fst rootlabel [rule1.fst label1 ...] [out.fst]\n"; std::set_new_handler(FailedNewHandler); SET_FLAGS(usage.c_str(), &argc, &argv, true); if (argc < 4) { ShowUsage(); return 1; } const string in_name = argv[1]; const string out_name = argc % 2 == 0 ? argv[argc - 1] : ""; auto *ifst = FstClass::Read(in_name); if (!ifst) return 1; std::vector<s::LabelFstClassPair> pairs; // Note that if the root label is beyond the range of the underlying FST's // labels, truncation will occur. const auto root = atoll(argv[2]); pairs.emplace_back(root, ifst); for (auto i = 3; i < argc - 1; i += 2) { ifst = FstClass::Read(argv[i]); if (!ifst) { Cleanup(&pairs); return 1; } // Note that if the root label is beyond the range of the underlying FST's // labels, truncation will occur. const auto label = atoll(argv[i + 1]); pairs.emplace_back(label, ifst); } ReplaceLabelType call_label_type; if (!s::GetReplaceLabelType(FLAGS_call_arc_labeling, FLAGS_epsilon_on_replace, &call_label_type)) { LOG(ERROR) << argv[0] << ": Unknown or unsupported call arc replace " << "label type: " << FLAGS_call_arc_labeling; } ReplaceLabelType return_label_type; if (!s::GetReplaceLabelType(FLAGS_return_arc_labeling, FLAGS_epsilon_on_replace, &return_label_type)) { LOG(ERROR) << argv[0] << ": Unknown or unsupported return arc replace " << "label type: " << FLAGS_return_arc_labeling; } s::ReplaceOptions opts(root, call_label_type, return_label_type, FLAGS_return_label); VectorFstClass ofst(ifst->ArcType()); s::Replace(pairs, &ofst, opts); Cleanup(&pairs); return !ofst.Write(out_name); }
0
coqui_public_repos/STT-models/sakha/itml
coqui_public_repos/STT-models/sakha/itml/v0.1.1/LICENSE
GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software. A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public. The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version. An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU Affero General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Remote Network Interaction; Use with the GNU General Public License. Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements. You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <https://www.gnu.org/licenses/>.
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/extensions
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/extensions/linear/loglinear-apply.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #ifndef FST_EXTENSIONS_LINEAR_LOGLINEAR_APPLY_H_ #define FST_EXTENSIONS_LINEAR_LOGLINEAR_APPLY_H_ #include <fst/compat.h> #include <fst/arc.h> #include <fst/arc-map.h> #include <fst/compose.h> #include <fst/determinize.h> #include <fst/float-weight.h> #include <fst/fst.h> #include <fst/minimize.h> #include <fst/mutable-fst.h> #include <fst/project.h> #include <fst/rmepsilon.h> #include <fst/vector-fst.h> namespace fst { // Applies a FST model as a discriminative model to weighted input // `ifst`. `A` is an arc type with tropical weight of all the // input/output FSTs. // // In general, consider `ifst` an unnormalized probability // distribution between its input X and output Y, P(X, Y); and `lfst` // a group of unnormalized probability distributions of all its output // Z for every input Y, Q(Z|Y). `normalize` controls whether Q is // normalized for every Y before chaining with P(X, Y). I.e., for a // path (X, Y, Z) in `ofst` (where Y is hidden), // // - When `normalize` is true, its weight is P(X, Y) Q(Z|Y) / sum_z Q(z|Y); // - When `normalize` is false, its weight is P(X, Y) Q(Z|Y). template <class A> void LogLinearApply(const Fst<A> &ifst, const Fst<A> &lfst, MutableFst<A> *ofst, bool normalize = true) { LogLinearApply<A, LogArc>(ifst, lfst, ofst, normalize); } // This version gives finer control over the arc type (`B`) to be used // in normalization. `B` is an arc type with log weight (e.g. `LogArc` // or `Log64Arc`). template <class A, class B> void LogLinearApply(const Fst<A> &ifst, const Fst<A> &lfst, MutableFst<A> *ofst, bool normalize = true) { if (normalize) { VectorFst<A> unnormalized_ofst, rescored_ifsa; Compose(ifst, lfst, &unnormalized_ofst); { VectorFst<A> tropical_ifsa(unnormalized_ofst); Project(&tropical_ifsa, PROJECT_INPUT); { VectorFst<B> minimal_log_ifsa; { VectorFst<B> log_ifsa; ArcMap(tropical_ifsa, &log_ifsa, WeightConvertMapper<A, B>()); RmEpsilon(&log_ifsa); Determinize(log_ifsa, &minimal_log_ifsa); } Minimize(&minimal_log_ifsa); ArcMap(&minimal_log_ifsa, InvertWeightMapper<B>()); ArcMap(minimal_log_ifsa, &tropical_ifsa, WeightConvertMapper<B, A>()); } ArcSort(&tropical_ifsa, OLabelCompare<A>()); Compose(tropical_ifsa, ifst, &rescored_ifsa); } ArcSort(&rescored_ifsa, OLabelCompare<A>()); Compose(rescored_ifsa, unnormalized_ofst, ofst); } else { Compose(ifst, lfst, ofst); } } } // namespace fst #endif // FST_EXTENSIONS_LINEAR_LOGLINEAR_APPLY_H_
0
coqui_public_repos/STT/native_client
coqui_public_repos/STT/native_client/kenlm/COPYING
GNU LESSER GENERAL PUBLIC LICENSE Version 2.1, February 1999 Copyright (C) 1991, 1999 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. [This is the first released version of the Lesser GPL. It also counts as the successor of the GNU Library Public License, version 2, hence the version number 2.1.] Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This license, the Lesser General Public License, applies to some specially designated software packages--typically libraries--of the Free Software Foundation and other authors who decide to use it. You can use it too, but we suggest you first think carefully about whether this license or the ordinary General Public License is the better strategy to use in any particular case, based on the explanations below. When we speak of free software, we are referring to freedom of use, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish); that you receive source code or can get it if you want it; that you can change the software and use pieces of it in new free programs; and that you are informed that you can do these things. To protect your rights, we need to make restrictions that forbid distributors to deny you these rights or to ask you to surrender these rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library or if you modify it. For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link other code with the library, you must provide complete object files to the recipients, so that they can relink them with the library after making changes to the library and recompiling it. And you must show them these terms so they know their rights. We protect your rights with a two-step method: (1) we copyright the library, and (2) we offer you this license, which gives you legal permission to copy, distribute and/or modify the library. To protect each distributor, we want to make it very clear that there is no warranty for the free library. Also, if the library is modified by someone else and passed on, the recipients should know that what they have is not the original version, so that the original author's reputation will not be affected by problems that might be introduced by others. Finally, software patents pose a constant threat to the existence of any free program. We wish to make sure that a company cannot effectively restrict the users of a free program by obtaining a restrictive license from a patent holder. Therefore, we insist that any patent license obtained for a version of the library must be consistent with the full freedom of use specified in this license. Most GNU software, including some libraries, is covered by the ordinary GNU General Public License. This license, the GNU Lesser General Public License, applies to certain designated libraries, and is quite different from the ordinary General Public License. We use this license for certain libraries in order to permit linking those libraries into non-free programs. When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library. We call this license the "Lesser" General Public License because it does Less to protect the user's freedom than the ordinary General Public License. It also provides other free software developers Less of an advantage over competing non-free programs. These disadvantages are the reason we use the ordinary General Public License for many libraries. However, the Lesser license provides advantages in certain special circumstances. For example, on rare occasions, there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de-facto standard. To achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as widely used non-free libraries. In this case, there is little to gain by limiting the free library to free software only, so we use the Lesser General Public License. In other cases, permission to use a particular library in non-free programs enables a greater number of people to use a large body of free software. For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system. Although the Lesser General Public License is Less protective of the users' freedom, it does ensure that the user of a program that is linked with the Library has the freedom and the wherewithal to run that program using a modified version of the Library. The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a "work based on the library" and a "work that uses the library". The former contains code derived from the library, whereas the latter must be combined with the library in order to run. GNU LESSER GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License Agreement applies to any software library or other program which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Lesser General Public License (also called "this License"). Each licensee is addressed as "you". A "library" means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables. The "Library", below, refers to any such software library or work which has been distributed under these terms. A "work based on the Library" means either the Library or any derivative work under copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".) "Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library. Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does. 1. You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the Library. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) The modified work must itself be a software library. b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change. c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License. d) If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful. (For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Library. In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices. Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy. This option is useful when you wish to copy part of the code of the Library into a program that is not a library. 4. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange. If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to distribute the source code, even though third parties are not compelled to copy the source along with the object code. 5. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License. However, linking a "work that uses the Library" with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a "work that uses the library". The executable is therefore covered by this License. Section 6 states terms for distribution of such executables. When a "work that uses the Library" uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law. If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.) Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6, whether or not they are linked directly with the Library itself. 6. As an exception to the Sections above, you may also combine or link a "work that uses the Library" with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications. You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. Also, you must do one of these things: a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable "work that uses the Library", as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.) b) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (1) uses at run time a copy of the library already present on the user's computer system, rather than copying library functions into the executable, and (2) will operate properly with a modified version of the library, if the user installs one, as long as the modified version is interface-compatible with the version that the work was made with. c) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution. d) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place. e) Verify that the user has already received a copy of these materials or that you have already sent this user a copy. For an executable, the required form of the "work that uses the Library" must include any data and utility programs needed for reproducing the executable from it. However, as a special exception, the materials to be distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot use both them and the Library together in an executable that you distribute. 7. You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things: a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above. b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work. 8. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 9. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it. 10. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties with this License. 11. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 12. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 13. The Free Software Foundation may publish revised and/or new versions of the Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation. 14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Libraries If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License). To apply these terms, attach the following notices to the library. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the library's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the library, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the library `Frob' (a library for tweaking knobs) written by James Random Hacker. <signature of Ty Coon>, 1 April 1990 Ty Coon, President of Vice That's all there is to it!
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-python_37_tflite_16k-darwin-amd64-opt.yml
build: template_file: test-darwin-opt-base.tyml dependencies: - "darwin-amd64-tflite-opt" - "test-training_16k-linux-amd64-py36m-opt" - "homebrew_tests-darwin-amd64" test_model_task: "test-training_16k-linux-amd64-py36m-opt" args: tests_cmdline: "$TASKCLUSTER_TASK_DIR/DeepSpeech/ds/taskcluster/tc-python_tflite-tests.sh 3.7.6:m 16k" metadata: name: "DeepSpeech OSX AMD64 TFLite Python v3.7 tests (16kHz)" description: "Testing DeepSpeech for OSX/AMD64 on Python v3.7 TFLite, optimized version (16kHz)"
0
coqui_public_repos/STT
coqui_public_repos/STT/bin/import_aidatatang.py
#!/usr/bin/env python import glob import os import tarfile import pandas from coqui_stt_training.util.importers import get_importers_parser COLUMN_NAMES = ["wav_filename", "wav_filesize", "transcript"] def extract(archive_path, target_dir): print("Extracting {} into {}...".format(archive_path, target_dir)) with tarfile.open(archive_path) as tar: tar.extractall(target_dir) def preprocess_data(tgz_file, target_dir): # First extract main archive and sub-archives extract(tgz_file, target_dir) main_folder = os.path.join(target_dir, "aidatatang_200zh") for targz in glob.glob(os.path.join(main_folder, "corpus", "*", "*.tar.gz")): extract(targz, os.path.dirname(targz)) # Folder structure is now: # - aidatatang_200zh/ # - transcript/aidatatang_200_zh_transcript.txt # - corpus/train/*.tar.gz # - corpus/train/*/*.{wav,txt,trn,metadata} # - corpus/dev/*.tar.gz # - corpus/dev/*/*.{wav,txt,trn,metadata} # - corpus/test/*.tar.gz # - corpus/test/*/*.{wav,txt,trn,metadata} # Transcripts file has one line per WAV file, where each line consists of # the WAV file name without extension followed by a single space followed # by the transcript. # Since the transcripts themselves can contain spaces, we split on space but # only once, then build a mapping from file name to transcript transcripts_path = os.path.join( main_folder, "transcript", "aidatatang_200_zh_transcript.txt" ) with open(transcripts_path) as fin: transcripts = dict((line.split(" ", maxsplit=1) for line in fin)) def load_set(glob_path): set_files = [] for wav in glob.glob(glob_path): try: wav_filename = wav wav_filesize = os.path.getsize(wav) transcript_key = os.path.splitext(os.path.basename(wav))[0] transcript = transcripts[transcript_key].strip("\n") set_files.append((wav_filename, wav_filesize, transcript)) except KeyError: print("Warning: Missing transcript for WAV file {}.".format(wav)) return set_files for subset in ("train", "dev", "test"): print("Loading {} set samples...".format(subset)) subset_files = load_set( os.path.join(main_folder, "corpus", subset, "*", "*.wav") ) df = pandas.DataFrame(data=subset_files, columns=COLUMN_NAMES) # Trim train set to under 10s by removing the last couple hundred samples if subset == "train": durations = (df["wav_filesize"] - 44) / 16000 / 2 df = df[durations <= 10.0] print("Trimming {} samples > 10 seconds".format((durations > 10.0).sum())) dest_csv = os.path.join(target_dir, "aidatatang_{}.csv".format(subset)) print("Saving {} set into {}...".format(subset, dest_csv)) df.to_csv(dest_csv, index=False) def main(): # https://www.openslr.org/62/ parser = get_importers_parser(description="Import aidatatang_200zh corpus") parser.add_argument("tgz_file", help="Path to aidatatang_200zh.tgz") parser.add_argument( "--target_dir", default="", help="Target folder to extract files into and put the resulting CSVs. Defaults to same folder as the main archive.", ) params = parser.parse_args() if not params.target_dir: params.target_dir = os.path.dirname(params.tgz_file) preprocess_data(params.tgz_file, params.target_dir) if __name__ == "__main__": main()
0
coqui_public_repos/snakepit
coqui_public_repos/snakepit/bin/db-dump.sh
#!/bin/bash if [ ! -f /code/bin/db-init.sh ]; then echo "This command should be run inside the snakepit container." exit 1 fi if [ "$#" -ne 1 ]; then echo "Usage: db-dump.sh some-dump-file.sql" exit 1 fi echo "Writing snakepit DB to dump file $1..." pg_dump -U postgres snakepit > $1 echo "Done."
0
coqui_public_repos/inference-engine/third_party/kenlm
coqui_public_repos/inference-engine/third_party/kenlm/lm/model_type.hh
#ifndef LM_MODEL_TYPE_H #define LM_MODEL_TYPE_H namespace lm { namespace ngram { /* Not the best numbering system, but it grew this way for historical reasons * and I want to preserve existing binary files. */ typedef enum {PROBING=0, REST_PROBING=1, TRIE=2, QUANT_TRIE=3, ARRAY_TRIE=4, QUANT_ARRAY_TRIE=5} ModelType; // Historical names. const ModelType HASH_PROBING = PROBING; const ModelType TRIE_SORTED = TRIE; const ModelType QUANT_TRIE_SORTED = QUANT_TRIE; const ModelType ARRAY_TRIE_SORTED = ARRAY_TRIE; const ModelType QUANT_ARRAY_TRIE_SORTED = QUANT_ARRAY_TRIE; const static ModelType kQuantAdd = static_cast<ModelType>(QUANT_TRIE - TRIE); const static ModelType kArrayAdd = static_cast<ModelType>(ARRAY_TRIE - TRIE); } // namespace ngram } // namespace lm #endif // LM_MODEL_TYPE_H
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/extensions
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/extensions/far/sttable.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #include <fstream> #include <fst/extensions/far/sttable.h> namespace fst { bool IsSTTable(const string &filename) { std::ifstream strm(filename); if (!strm.good()) return false; int32 magic_number = 0; ReadType(strm, &magic_number); return magic_number == kSTTableMagicNumber; } } // namespace fst
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/test-nodejs_12x_8k-linux-amd64-prod_pbmodel-opt.yml
build: template_file: test-linux-opt-base.tyml docker_image: "ubuntu:16.04" dependencies: - "linux-amd64-cpu-opt" system_setup: > ${nodejs.packages_xenial.prep_12} && ${nodejs.packages_xenial.apt_pinning} && apt-get -qq update && apt-get -qq -y install ${nodejs.packages_xenial.apt} args: tests_cmdline: "${system.homedir.linux}/DeepSpeech/ds/taskcluster/tc-node-tests-prod.sh 12.x 8k" workerType: "${docker.dsTests}" metadata: name: "DeepSpeech Linux AMD64 CPU NodeJS 12.x prod tests (8kHz)" description: "Testing DeepSpeech for Linux/AMD64 on NodeJS v12.x on prod model, CPU only, optimized version (8kHz)"
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/script/replace.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #include <fst/script/fst-class.h> #include <fst/script/replace.h> #include <fst/script/script-impl.h> namespace fst { namespace script { void Replace(const std::vector<LabelFstClassPair> &pairs, MutableFstClass *ofst, const ReplaceOptions &opts) { if (!pairs.empty()) { for (auto it = pairs.begin(); it != pairs.end() - 1; ++it) { if (!internal::ArcTypesMatch(*it->second, *(it + 1)->second, "Replace")) { ofst->SetProperties(kError, kError); return; } } if (!internal::ArcTypesMatch(*pairs[0].second, *ofst, "Replace")) { ofst->SetProperties(kError, kError); return; } } ReplaceArgs args(pairs, ofst, opts); Apply<Operation<ReplaceArgs>>("Replace", ofst->ArcType(), &args); } REGISTER_FST_OPERATION(Replace, StdArc, ReplaceArgs); REGISTER_FST_OPERATION(Replace, LogArc, ReplaceArgs); REGISTER_FST_OPERATION(Replace, Log64Arc, ReplaceArgs); } // namespace script } // namespace fst
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/CMakeLists.txt
#-DHAVE_CONFIG_H -I./../include -fno-exceptions -funsigned-char -std=c++11 -MT symbol-table.lo -MD -MP -MF .deps/symbol-table.Tpo -c symbol-table.cc -fno-common -DPIC -o .libs/symbol-table.o include_directories(./include/) install(DIRECTORY include/ DESTINATION include/ FILES_MATCHING PATTERN "*.h") add_subdirectory(lib) add_subdirectory(script) if(HAVE_BIN) add_subdirectory(bin) endif(HAVE_BIN) add_subdirectory(extensions) enable_testing() add_subdirectory(test)
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/extensions
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/extensions/python/pywrapfst.pyx
#cython: nonecheck=True, c_string_type=unicode, c_string_encoding=utf8 # See www.openfst.org for extensive documentation on this weighted # finite-state transducer library. """Python interface to the FST scripting API. Operations which construct new FSTs are implemented as traditional functions, as are two-argument boolean functions like `equal` and `equivalent`. Destructive operations---those that mutate an FST, in place---are instance methods, as is `write`. Operator overloading is not used. The following example, based on Mohri et al. 2002, shows the construction of an ASR system given a pronunciation lexicon L, grammar G, a transducer from context-dependent phones to context-independent phones C, and an HMM set H: L = fst.Fst.read("L.fst") G = fst.Fst.read("G.fst") C = fst.Fst.read("C.fst") H = fst.Fst.read("H.fst") LG = fst.determinize(fst.compose(L, G)) CLG = fst.determinize(fst.compose(C, LG)) HCLG = fst.determinize(fst.compose(H, CLG)) HCLG.minimize() # NB: works in-place. Python variables here use snake_case and constants are in all caps, minus the normal `k` prefix. """ # Overview of the file: # # * Imports # * Custom exceptions # * General helpers # * Weight and helpers # * _SymbolTable, _EncodeMapperSymbolTable, _FstSymbolTable, # _MutableFstSymbolTable, SymbolTable, and helpers # * SymbolTableIterator # * EncodeMapper # * _Fst, _MutableFst, Fst, and helpers # * FST properties # * Arc, ArcIterator, and MutableArcIterator # * StateIterator # * FST operations # * Compiler # * FarReader and FarWriter # * Cleanup operations for module entrance and exit. # # TODO(kbg): Try breaking this apart into smaller pieces. # # A few of the more idiosyncratic choices made here are due to "impedance # mismatches" between C++ and Python, as follows. # # Due to differences in C++ and Python scope rules, most C++ class instances # have to be heap-allocated. Since all are packed into Python class instances, # Python destructors are used to semi-automatically free C++ instances. # # Cython's type annotations (e.g., `string`) are used when the variables will # be sent as arguments to C++ functions, but are not used for variables used # within the module. ## Imports. # C imports. from libc.stdint cimport INT32_MAX from libc.stdint cimport SIZE_MAX from posix.unistd cimport getpid # C++ imports. from libcpp cimport bool from libcpp.cast cimport const_cast from libcpp.cast cimport static_cast # Our C++ imports. from ios cimport ofstream from memory cimport static_pointer_cast # Cython operator workarounds. from cython.operator cimport address as addr # &foo from cython.operator cimport dereference as deref # *foo from cython.operator cimport preincrement as inc # ++foo # Python imports. import atexit import numbers import subprocess import logging # TODO(kbg): Figure out how to access static class variables so I don't have # to do it this way. kNoSymbol = -1 ## Custom exceptions. class FstError(Exception): pass class FstArgError(FstError, ValueError): pass class FstBadWeightError(FstError, ValueError): pass class FstDeletedConstructorError(FstError, RuntimeError): pass class FstIndexError(FstError, IndexError): pass class FstIOError(FstError, IOError): pass class FstOpError(FstError, RuntimeError): pass ## General helpers. cdef string tostring(data, encoding="utf8") except *: """Converts strings to bytestrings. This function converts Python bytestrings and Unicode strings to bytestrings encoded in UTF-8. It is used to process most Python string arguments before passing them to the lower-level library. Args: data: A Unicode string or bytestring. encoding: The desired encoding, defaulting to UTF-8. Returns: A bytestring. Raises: FstArgError: Cannot encode string. UnicodeEncodeError. This function is not visible to Python users. """ # A Python bytestring can be implicitly cast to a C++ string. if isinstance(data, bytes): return data elif isinstance(data, unicode): return data.encode(encoding) raise FstArgError("Cannot encode as string: {!r}".format(data)) cdef string weight_tostring(data, encoding="utf8") except *: """Converts strings or numerics to bytestrings. This function converts Python bytestrings, Unicode strings, and numerics which can be cast to floats to bytestrings encoded in UTF-8. It is used to process Python string arguments so they can be used to construct Weight objects. In most cases, weights are underlyingly floating-point, but since not all weights are, they can only be constructed using a string. Args: data: A Unicode string, bytestring, or type which can be converted to a Python float. Returns: A bytestring. Raise: FstArgError: Cannot encode string. ValueError: Invalid literal for float. UnicodeEncodeError. This function is not visible to Python users. """ # A Python bytestring can be implicitly cast to a C++ string. if isinstance(data, bytes): return data elif isinstance(data, unicode): return data.encode(encoding) elif isinstance(data, numbers.Number): return str(data).encode(encoding) raise FstArgError("Cannot encode as string: {!r}".format(data)) cdef fst.ComposeFilter _get_compose_filter( const string &compose_filter) except *: """Matches string with the appropriate ComposeFilter enum value. This function takes a string argument and returns the matching ComposeFilter enum value used to initialize ComposeOptions instances. ComposeOptions is used by difference and intersection in addition to composition. Args: compose_filter: A string matching a known composition filter; one of: "alt_sequence", "auto", "match", "null", "sequence", "trivial". Returns: A ComposeFilter enum value. Raises: FstArgError: Unknown compose filter type. This function is not visible to Python users. """ cdef fst.ComposeFilter compose_filter_enum if not fst.GetComposeFilter(compose_filter, addr(compose_filter_enum)): raise FstArgError("Unknown compose filter type: {!r}".format( compose_filter)) return compose_filter_enum cdef fst.DeterminizeType _get_determinize_type(const string &det_type) except *: """Matches string with the appropriate DeterminizeType enum value. Args: det_type: A string matching a known determinization type; one of: "functional", "nonfunctional", "disambiguate". Returns: A DeterminizeType enum value. Raises: FstArgError: Unknown determinization type. This function is not visible to Python users. """ cdef fst.DeterminizeType det_type_enum if not fst.GetDeterminizeType(det_type, addr(det_type_enum)): raise FstArgError("Unknown determinization type: {!r}".format(det_type)) return det_type_enum cdef fst.QueueType _get_queue_type(const string &queue_type) except *: """Matches string with the appropriate QueueType enum value. This function takes a string argument and returns the matching QueueType enum value passed to the RmEpsilonOptions constructor. Args: queue_type: A string matching a known queue type; one of: "auto", "fifo", "lifo", "shortest", "state", "top". Returns: A QueueType enum value. Raises: FstArgError: Unknown queue type. This function is not visible to Python users. """ cdef fst.QueueType queue_type_enum if not fst.GetQueueType(queue_type, addr(queue_type_enum)): raise FstArgError("Unknown queue type: {!r}".format(queue_type)) return queue_type_enum cdef fst.RandArcSelection _get_rand_arc_selection( const string &select) except *: """Matches string with the appropriate RandArcSelection enum value. This function takes a string argument and returns the matching RandArcSelection enum value passed to the RandGenOptions constructor. Args: select: A string matching a known random arc selection type; one of: "uniform", "log_prob", "fast_log_prob". Returns: A RandArcSelection enum value. Raises: FstArgError: Unknown random arc selection type. This function is not visible to Python users. """ cdef fst.RandArcSelection select_enum if not fst.GetRandArcSelection(select, addr(select_enum)): raise FstArgError("Unknown random arc selection type: {!r}".format(select)) return select_enum cdef fst.ReplaceLabelType _get_replace_label_type( const string &replace_label_type, bool epsilon_on_replace) except *: """Matches string with the appropriate ReplaceLabelType enum value. This function takes a string argument and returns the matching ReplaceLabelType enum value passed to the ReplaceOptions constructor. Args: replace_label_type: A string matching a known replace label type; one of: "neither", "input", "output", "both". epsilon_on_replace: Should call/return arcs be epsilon arcs? Returns: A ReplaceLabelType enum value. Raises: FstArgError: Unknown replace label type. This function is not visible to Python users. """ cdef fst.ReplaceLabelType replace_label_type_enum if not fst.GetReplaceLabelType(replace_label_type, epsilon_on_replace, addr(replace_label_type_enum)): raise FstArgError("Unknown replace label type: {!r}".format( replace_label_type)) return replace_label_type_enum ## Weight and helpers. cdef class Weight(object): """ Weight(weight_type, weight_string) FST weight class. This class represents an FST weight. When passed as an argument to an FST operation, it should have the weight type of the input FST(s) to said operation. Args: weight_type: A string indicating the weight type. weight_string: A string indicating the underlying weight. Raises: FstArgError: Weight type not found. FstBadWeightError: Invalid weight. """ def __repr__(self): return "<{} Weight {} at 0x{:x}>".format(self.type(), self.to_string(), id(self)) def __str__(self): return self.to_string() # This attempts to convert the string form into a float, raising # ValueError when that is not appropriate. def __float__(self): return float(self.to_string()) def __init__(self, weight_type, weight): self._weight.reset(new fst.WeightClass(tostring(weight_type), weight_tostring(weight))) self._check_weight() cdef void _check_weight(self) except *: if self.type() == b"none": raise FstArgError("Weight type not found") if self.to_string() == b"BadNumber": raise FstBadWeightError("Invalid weight") cpdef Weight copy(self): """ copy(self) Returns a copy of the Weight. """ cdef Weight result = Weight.__new__(Weight) result._weight.reset(new fst.WeightClass(deref(self._weight))) return result # To get around the inability to declare cdef class methods, we define the # C++ part out-of-class and then call it from within. @classmethod def Zero(cls, weight_type): """ Weight.Zero(weight_type) Constructs semiring zero. """ return _Zero(weight_type) @classmethod def One(cls, weight_type): """ Weight.One(weight_type) Constructs semiring One. """ return _One(weight_type) @classmethod def NoWeight(cls, weight_type): """ Weight.NoWeight(weight_type) Constructs a non-member weight in the semiring. """ return _NoWeight(weight_type) def __eq__(Weight w1, Weight w2): return fst.Eq(deref(w1._weight), deref(w2._weight)) def __ne__(Weight w1, Weight w2): return not w1 == w2 cpdef string to_string(self): return self._weight.get().ToString() cpdef string type(self): """type(self) Returns a string indicating the weight type. """ return self._weight.get().Type() cdef Weight _plus(Weight lhs, Weight rhs): cdef Weight result = Weight.__new__(Weight) result._weight.reset(new fst.WeightClass(fst.Plus(deref(lhs._weight), deref(rhs._weight)))) return result def plus(Weight lhs, Weight rhs): """ plus(lhs, rhs) Computes the sum of two Weights in the same semiring. This function computes lhs \oplus rhs, raising an exception if lhs and rhs are not in the same semiring. Args: lhs: Left-hand side Weight. rhs: Right-hand side Weight. Returns: A Weight object. Raises: FstArgError: Weight type not found (or not in same semiring). FstBadWeightError: invalid weight. """ cdef Weight result = _plus(lhs, rhs) result._check_weight() return result cdef Weight _times(Weight lhs, Weight rhs): cdef Weight result = Weight.__new__(Weight) result._weight.reset(new fst.WeightClass(fst.Times(deref(lhs._weight), deref(rhs._weight)))) return result def times(Weight lhs, Weight rhs): """ times(lhs, rhs) Computes the product of two Weights in the same semiring. This function computes lhs \otimes rhs, raising an exception if lhs and rhs are not in the same semiring. Args: lhs: Left-hand side Weight. rhs: Right-hand side Weight. Returns: A Weight object. Raises: FstArgError: Weight type not found (or not in same semiring). FstBadWeightError: Invalid weight. """ cdef Weight result = _times(lhs, rhs) result._check_weight() return result cdef Weight _divide(Weight lhs, Weight rhs): cdef Weight result = Weight.__new__(Weight) result._weight.reset(new fst.WeightClass(fst.Divide(deref(lhs._weight), deref(rhs._weight)))) return result def divide(Weight lhs, Weight rhs): """ divide(lhs, rhs) Computes the quotient of two Weights in the same semiring. This function computes lhs \oslash rhs, raising an exception if lhs and rhs are not in the same semiring. As there is no way to specify whether to use left vs. right division, this assumes a commutative semiring in which these are equivalent operations. Args: lhs: Left-hand side Weight. rhs: Right-hand side Weight. Returns: A Weight object. Raises: FstArgError: Weight type not found (or not in same semiring). FstBadWeightError: Invalid weight. """ cdef Weight result = _divide(lhs, rhs) result._check_weight() return result cdef Weight _power(Weight w, size_t n): cdef Weight result = Weight.__new__(Weight) result._weight.reset(new fst.WeightClass(fst.Power(deref(w._weight), n))) return result def power(Weight w, size_t n): """ power(lhs, rhs) Computes the iterated product of a weight. Args: w: The weight. n: The power. Returns: A Weight object. Raises: FstArgError: Weight type not found (or not in same semiring). FstBadWeightError: Invalid weight. """ cdef Weight result = _power(w, n) result._check_weight() return result cdef fst.WeightClass _get_WeightClass_or_Zero(const string &weight_type, weight) except *: """Converts weight string to a WeightClass. This function constructs a WeightClass instance of the desired weight type. If the first argument is null, the weight is set to semiring Zero. Args: weight_type: A string denoting the desired weight type. weight: A object indicating the desired weight; if omitted, the weight is set to semiring Zero. Returns: A WeightClass object. This function is not visible to Python users. """ cdef fst.WeightClass result if weight is None: result = fst.WeightClass.Zero(weight_type) elif isinstance(weight, Weight): result = deref(<fst.WeightClass *> (<Weight> weight)._weight.get()) else: result = fst.WeightClass(weight_type, weight_tostring(weight)) if result.ToString() == b"BadNumber": raise FstBadWeightError(weight_tostring(weight)) return result cdef fst.WeightClass _get_WeightClass_or_One(const string &weight_type, weight) except *: """Converts weight string to a WeightClass. This function constructs a WeightClass instance of the desired weight type. If the first argument is null, the weight is set to semiring One. Args: weight_type: A string denoting the desired weight type. weight: A object indicating the desired weight; if omitted, the weight is set to semiring One. Returns: A WeightClass object. This function is not visible to Python users. """ cdef fst.WeightClass result if weight is None: result = fst.WeightClass.One(weight_type) elif isinstance(weight, Weight): result = deref(<fst.WeightClass *> (<Weight> weight)._weight.get()) else: result = fst.WeightClass(weight_type, weight_tostring(weight)) if result.ToString() == b"BadNumber": raise FstBadWeightError(weight_tostring(weight)) return result cdef Weight _Zero(weight_type): cdef Weight result = Weight.__new__(Weight) result._weight.reset(new fst.WeightClass(fst.WeightClass.Zero( tostring(weight_type)))) if result._weight.get().Type() == b"none": raise FstArgError("Weight type not found") return result cdef Weight _One(weight_type): cdef Weight result = Weight.__new__(Weight) result._weight.reset(new fst.WeightClass( fst.WeightClass.One(tostring(weight_type)))) if result._weight.get().Type() == b"none": raise FstArgError("Weight type not found") return result cdef Weight _NoWeight(weight_type): cdef Weight result = Weight.__new__(Weight) result._weight.reset(new fst.WeightClass( fst.WeightClass.NoWeight(tostring(weight_type)))) return result ## _SymbolTable, _MutableSymbolTable, _EncodeMapperSymbolTable, _FstSymbolTable, ## _MutableFstSymbolTable, SymbolTable, and helpers. # # SymbolTable hierarchy: # # _SymbolTable: abstract base class; has-a SymbolTable* # _EncodeMapperSymbolTable(_SymbolTable): constant symbol table returned by # EncodeMapper.input_symbols/output_symbols # _FstSymbolTable(_SymbolTable): constant symbol table returned by # _Fst.input_symbols/output_symbols # # _MutableSymbolTable(_SymbolTable): abstract base class adding mutation methods # _MutableFstSymbolTable(_MutableSymbolTable): mutable symbol table returned by # _MutableFst.mutable_input_symbols/mutable_output_symbols # SymbolTable(_MutableSymbolTable): adds constructor cdef class _SymbolTable(object): """ (No constructor.) Base class for the symbol table hierarchy. This class is the base class for SymbolTable. It has a "deleted" constructor and implementations for the const methods of the wrapped SymbolTable. """ # NB: Do not expose any non-const methods of the wrapped SymbolTable here. # Doing so will allow undefined behavior. def __init__(self): raise FstDeletedConstructorError( "Cannot construct {}".format(self.__class__.__name__)) def __iter__(self): return SymbolTableIterator(self) cpdef int64 available_key(self): """ available_key(self) Returns an integer indicating the next available key index in the table. """ return self._table.AvailableKey() cpdef bytes checksum(self): """ checksum(self) Returns a bytestring indicating the label-independent MD5 checksum. """ return self._table.CheckSum() cpdef SymbolTable copy(self): """ copy(self) Returns a mutable copy of the SymbolTable. """ return _init_SymbolTable(self._table.Copy()) def find(self, key): """ find(self, key) Given a symbol or index, finds the other one. This method returns the index associated with a symbol key, or the symbol associated with a index key. Args: key: Either a string or an index. Returns: If the key is a string, the associated index or NO_LABEL if not found; if the key is an integer, the associated symbol or an empty string if not found. """ try: return self._table.FindIndex(tostring(key)) except FstArgError: return self._table.FindSymbol(key) cpdef int64 get_nth_key(self, ssize_t pos) except *: """ get_nth_key(self, pos) Retrieves the integer index of the n-th key in the table. Args: pos: The n-th key to retrieve. Returns: The integer index of the n-th key, or NO_LABEL if not found. """ return self._table.GetNthKey(pos) cpdef bytes labeled_checksum(self): """ labeled_checksum(self) Returns a bytestring indicating the label-dependent MD5 checksum. """ return self._table.LabeledCheckSum() cpdef bool member(self, key): """ member(self, key) Given a symbol or index, returns whether it is found in the table. This method returns a boolean indicating whether the given symbol or index is present in the table. If one intends to perform subsequent lookup, it is better to simply call the find method, catching the KeyError. Args: key: Either a string or an index. Returns: Whether or not the key is present (as a string or a index) in the table. """ try: return self._table.MemberSymbol(tostring(key)) except FstArgError: return self._table.MemberIndex(key) def __contains__(self, key): return self.member(key) cpdef string name(self): """ name(self) Returns the symbol table's name. """ return self._table.Name() cpdef size_t num_symbols(self): """ num_symbols(self) Returns the number of symbols in the symbol table. """ return self._table.NumSymbols() cpdef void write(self, filename) except *: """ write(self, filename) Serializes symbol table to a file. This methods writes the SymbolTable to a file in binary format. Args: filename: The string location of the output file. Raises: FstIOError: Write failed. """ if not self._table.Write(tostring(filename)): raise FstIOError("Write failed: {!r}".format(filename)) cpdef void write_text(self, filename) except *: """ write_text(self, filename) Writes symbol table to text file. This method writes the SymbolTable to a file in human-readable format. Args: filename: The string location of the output file. Raises: FstIOError: Write failed. """ if not self._table.WriteText(tostring(filename)): raise FstIOError("Write failed: {!r}".format(filename)) cdef class _EncodeMapperSymbolTable(_SymbolTable): """ (No constructor.) Immutable SymbolTable class for tables stored in an EncodeMapper. This class wraps a library const SymbolTable and exposes const methods of the wrapped object. It is only to be returned by method, never constructed directly. """ # NB: Do not expose any non-const methods of the wrapped SymbolTable here. # Doing so will allow undefined behavior. def __repr__(self): return "<const EncodeMapper SymbolTable {!r} at 0x{:x}>".format(self.name(), id(self)) cdef class _FstSymbolTable(_SymbolTable): """ (No constructor.) Mutable SymbolTable class for tables stored in a mutable FST. This class wraps a library SymbolTable and exposes methods of the wrapped object. It is only to be returned by method, never constructed directly. """ # NB: Do not expose any non-const methods of the wrapped SymbolTable here. # Doing so will allow undefined behavior. def __repr__(self): return "<const Fst SymbolTable {!r} at 0x{:x}>".format(self.name(), id(self)) cdef class _MutableSymbolTable(_SymbolTable): """ (No constructor.) Base class for mutable symbol tables. This class is the base class for a mutable SymbolTable. It has a "deleted" constructor and implementations of all methods of the wrapped SymbolTable. """ cpdef int64 add_symbol(self, symbol, int64 key=kNoSymbol): """ add_symbol(self, symbol, key=NO_SYMBOL) Adds a symbol to the table and returns the index. This method adds a symbol to the table. The caller can optionally specify a non-negative integer index for the key. Args: symbol: A symbol string. key: An index for the symbol; if not specified, the next index will be used. Returns: The integer key of the new symbol. """ cdef string symbol_string = tostring(symbol) if key != kNoSymbol: return self._table.AddSymbol(symbol_string, key) else: return self._table.AddSymbol(symbol_string) cpdef void add_table(self, _SymbolTable syms): """ add_table(self, syms) Adds another SymbolTable to this table. This method merges another symbol table into the current table. All key values will be offset by the current available key. Args: syms: A SymbolTable to be merged with the current table. """ self._table.AddTable(deref(syms._table)) cpdef void set_name(self, new_name) except *: self._table.SetName(tostring(new_name)) cdef class _MutableFstSymbolTable(_MutableSymbolTable): """ (No constructor.) Mutable SymbolTable assigned to an FST. """ def __repr__(self): return "<Fst SymbolTable {!r} at 0x{:x}>".format(self.name(), id(self)) cdef class SymbolTable(_MutableSymbolTable): """ SymbolTable(name="<unspecified>") Mutable SymbolTable class. This class wraps the library SymbolTable and exposes both const (i.e., access) and non-const (i.e., mutation) methods of wrapped object. Unlike other classes in the hierarchy, it has a working constructor and can be used to programmatically construct a SymbolTable in memory. Args: name: An optional string indicating the table's name. """ def __repr__(self): return "<SymbolTable {!r} at 0x{:x}>".format(self.name(), id(self)) def __init__(self, name="<unspecified>"): self._table = new fst.SymbolTable(tostring(name)) self._smart_table.reset(self._table) @classmethod def read(cls, filename): """ SymbolTable.read(filename) Reads symbol table from binary file. This class method creates a new SymbolTable from a symbol table binary file. Args: filename: The string location of the input binary file. Returns: A new SymbolTable instance. See also: `SymbolTable.read_fst`, `SymbolTable.read_text`. """ cdef fst.SymbolTable *tsyms = fst.SymbolTable.Read(tostring(filename)) if tsyms == NULL: raise FstIOError("Read failed: {!r}".format(filename)) return _init_SymbolTable(tsyms) @classmethod def read_text(cls, filename, bool allow_negative_labels=False): """ SymbolTable.read_text(filename) Reads symbol table from text file. This class method creates a new SymbolTable from a symbol table text file. Args: filename: The string location of the input text file. allow_negative_labels: Should negative labels be allowed? (Not recommended; may cause conflicts). Returns: A new SymbolTable instance. See also: `SymbolTable.read`, `SymbolTable.read_fst`. """ cdef unique_ptr[fst.SymbolTableTextOptions] opts opts.reset(new fst.SymbolTableTextOptions(allow_negative_labels)) cdef fst.SymbolTable *tsyms = fst.SymbolTable.ReadText(tostring(filename), deref(opts)) if tsyms == NULL: raise FstIOError("Read failed: {!r}".format(filename)) return _init_SymbolTable(tsyms) @classmethod def read_fst(cls, filename, bool input_table): """ SymbolTable.read_fst(filename, input_table) Reads symbol table from an FST file without loading the corresponding FST. This class method creates a new SymbolTable by reading either the input or output symbol table from an FST file, without loading the corresponding FST. Args: filename: The string location of the input FST file. input_table: Should the input table be read (True) or the output table (False)? Returns: A new SymbolTable instance, or None if none can be read. Raises: FstIOError: Read failed. See also: `SymbolTable.read`, `SymbolTable.read_text`. """ cdef fst.SymbolTable *tsyms = fst.FstReadSymbols(tostring(filename), input_table) if tsyms == NULL: raise FstIOError("Read failed: {!r}".format(filename)) return _init_SymbolTable(tsyms) cdef _EncodeMapperSymbolTable _init_EncodeMapperSymbolTable( fst.SymbolTable *table, shared_ptr[fst.EncodeMapperClass] encoder): cdef _EncodeMapperSymbolTable result = ( _EncodeMapperSymbolTable.__new__(_EncodeMapperSymbolTable)) result._table = table result._encoder = encoder return result cdef _FstSymbolTable _init_FstSymbolTable(fst.SymbolTable *table, shared_ptr[fst.FstClass] ifst): cdef _FstSymbolTable result = _FstSymbolTable.__new__(_FstSymbolTable) result._table = table result._fst = ifst return result cdef _MutableFstSymbolTable _init_MutableFstSymbolTable(fst.SymbolTable *table, shared_ptr[fst.MutableFstClass] ifst): cdef _MutableFstSymbolTable result = ( _MutableFstSymbolTable.__new__(_MutableFstSymbolTable)) result._table = table result._mfst = ifst return result cdef SymbolTable _init_SymbolTable(fst.SymbolTable *table): cdef SymbolTable result = SymbolTable.__new__(SymbolTable) result._table = table return result # Constructive SymbolTable operations. cpdef SymbolTable compact_symbol_table(_SymbolTable syms): """ compact_symbol_table(syms) Constructively relabels a SymbolTable to make it a contiguous mapping. Args: syms: Input SymbolTable. Returns: A new compacted SymbolTable. """ return _init_SymbolTable(fst.CompactSymbolTable(deref(syms._table))) cpdef SymbolTable merge_symbol_table(_SymbolTable lhs, _SymbolTable rhs): """ merge_symbol_table(lhs, rhs) Merges all symbols from the left table into the right. This function creates a new SymbolTable which is the merger of the two input symbol Tables. Symbols in the right-hand table that conflict with those in the left-hand table will be assigned values from the left-hand table. Thus the returned table will never modify symbol assignments from the left-hand side, but may do so on the right. If the left-hand table is associated with an FST, it may be necessary to relabel it using the output table. Args: lhs: Left-hand side SymbolTable. rhs: Left-hand side SymbolTable. Returns: A new merged SymbolTable. See also: `relabel_symbols`. """ return _init_SymbolTable(fst.MergeSymbolTable(deref(lhs._table), deref(rhs._table), NULL)) ## SymbolTableIterator. cdef class SymbolTableIterator(object): """ SymbolTableIterator(syms) This class is used for iterating over a symbol table. """ def __repr__(self): return "<SymbolTableIterator at 0x{:x}>".format(id(self)) def __init__(self, _SymbolTable syms): self._siter.reset(new fst.SymbolTableIterator(deref(syms._table))) # This just registers this class as a possible iterator. def __iter__(self): return self # Magic method used to get a Pythonic API out of the C++ API. def __next__(self): if self.done(): raise StopIteration cdef int64 value = self.value() cdef string symbol = self.symbol() self.next() return (value, symbol) cpdef bool done(self): """ done(self) Indicates whether the iterator is exhausted or not. Returns: True if the iterator is exhausted, False otherwise. """ return self._siter.get().Done() cpdef void next(self): """ next(self) Advances the iterator. """ self._siter.get().Next() cpdef void reset(self): """ reset(self) Resets the iterator to the initial position. """ self._siter.get().Reset() cpdef string symbol(self): """ symbol(self) Returns the current symbol string. This method returns the current symbol string at this point in the table. Returns: A symbol string. """ return self._siter.get().Symbol() cpdef int64 value(self): """ value(self) Returns the current integer index of the symbol. Returns: An integer index. """ return self._siter.get().Value() ## EncodeMapper. cdef class EncodeMapper(object): """ EncodeMapper(arc_type="standard", encode_labels=False, encode_weights=False) Arc encoder class, wrapping EncodeMapperClass. This class provides an object which can be used to encode or decode FST arcs. This is most useful to convert an FST to an unweighted acceptor, on which some FST operations are more efficient, and then decoding the FST afterwards. To use an instance of this class to encode or decode a mutable FST, pass it as the first argument to the FST instance methods `encode` and `decode`. For implementational reasons, it is not currently possible to use an encoder on disk to construct this class. Args: arc_type: A string indicating the arc type. encode_labels: Should labels be encoded? encode_weights: Should weights be encoded? """ def __repr__(self): return "<EncodeMapper at 0x{:x}>".format(id(self)) def __init__(self, arc_type=b"standard", bool encode_labels=False, bool encode_weights=False): cdef uint32 flags = fst.GetEncodeFlags(encode_labels, encode_weights) self._encoder.reset(new fst.EncodeMapperClass(tostring(arc_type), flags, fst.ENCODE)) if not self._encoder: raise FstOpError("Unknown arc type: {!r}".format(arc_type)) cpdef string arc_type(self): """ arc_type(self) Returns a string indicating the arc type. """ return self._encoder.get().ArcType() # Python's equivalent to operator(). def __call__(self, Arc arc): """ self(state, ilabel, olabel, weight, nextstate) Uses the encoder to encode an arc. Args: ilabel: The integer index of the input label. olabel: The integer index of the output label. weight: A Weight or weight string indicating the desired final weight; if null, it is set to semiring One. nextstate: The integer index of the destination state. Raises: FstOpError: Incompatible or invalid weight. """ return _init_Arc(self._encoder.get().__call__(deref(arc._arc))) cpdef uint32 flags(self): """ flags(self) Returns the encoder's flags. """ return self._encoder.get().Flags() cpdef _EncodeMapperSymbolTable input_symbols(self): """ input_symbols(self) Returns the encoder's input symbol table, or None if none is present. """ cdef fst.SymbolTable *syms = const_cast[SymbolTable_ptr]( self._encoder.get().InputSymbols()) if syms == NULL: return return _init_EncodeMapperSymbolTable(syms, self._encoder) cpdef _EncodeMapperSymbolTable output_symbols(self): """ output_symbols(self) Returns the encoder's output symbol table, or None if none is present. """ cdef fst.SymbolTable *syms = const_cast[SymbolTable_ptr]( self._encoder.get().OutputSymbols()) if syms == NULL: return return _init_EncodeMapperSymbolTable(syms, self._encoder) cpdef uint64 properties(self, uint64 mask): """ properties(self, mask) Provides property bits. This method provides user access to the properties of the encoder. Args: mask: The property mask to be compared to the encoder's properties. Returns: A 64-bit bitmask representing the requested properties. """ return self._encoder.get().Properties(mask) cpdef void set_input_symbols(self, _SymbolTable syms) except *: """ set_input_symbols(self, syms) Sets the encoder's input symbol table. Args: syms: A SymbolTable. See also: `set_output_symbols`. """ self._encoder.get().SetInputSymbols(syms._table) cpdef void set_output_symbols(self, _SymbolTable syms) except *: """ set_output_symbols(self, syms) Sets the encoder's output symbol table. Args: syms: A SymbolTable. See also: `set_input_symbols`. """ self._encoder.get().SetOutputSymbols(syms._table) cpdef string weight_type(self): """ weight_type(self) Returns a string indicating the weight type. """ return self._encoder.get().WeightType() ## _Fst, _MutableFst, Fst, and helpers. # # Fst hierarchy: # # _Fst: base class; has-a FstClass*. # _MutableFst(_Fst): adds mutable methods. # Fst(filename): pseudo-constructor. cdef class _Fst(object): """ (No constructor.) Immutable FST class, wrapping FstClass. This class is the basic user-facing FST object. It does not itself support any mutation operations. """ # IPython notebook magic to produce an SVG of the FST. def _repr_svg_(self): """IPython notebook magic to produce an SVG of the FST using GraphViz. This method produces an SVG of the internal graph. Users wishing to create publication-quality graphs should instead use the method `draw`, which exposes additional parameters. Raises: OSError: Cannot locate the `dot` executable. subprocess.CalledProcessError: `dot` returned non-zero exit code. See also: `draw`, `text`. """ # Quickly throws OSError if the dot executable i s not found. proc = subprocess.Popen(["dot", "-Tsvg"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) cdef stringstream sstrm fst.DrawFst(deref(self._fst), self._fst.get().InputSymbols(), self._fst.get().OutputSymbols(), NULL, self._fst.get().Properties(fst.kAcceptor, True) == fst.kAcceptor, b"", 8.5, 11, True, False, 0.4, 0.25, 14, 5, b"g", False, addr(sstrm), b"_repr_svg") # The stream gets decoded automatically so we have to re-encode it to pass # it to the process. (sout, serr) = proc.communicate(sstrm.str().encode("utf8")) if proc.returncode != 0: # Just to be explicit. raise subprocess.CalledProcessError(proc.returncode, self._DOT_TSVG) return sout.decode("utf8") def __repr__(self): return "<{} Fst at 0x{:x}>".format(self.fst_type(), id(self)) def __init__(self): raise FstDeletedConstructorError( "Cannot construct {}".format(self.__class__.__name__)) def __str__(self): return self.text() # Registers the class for pickling; must be repeated in any subclass which # can't be derived by _init_XFst. def __reduce__(self): return (_read_from_string, (self.write_to_string(),)) cpdef string arc_type(self): """ arc_type(self) Returns a string indicating the arc type. """ return self._fst.get().ArcType() cpdef ArcIterator arcs(self, int64 state): """ arcs(self, state) Returns an iterator over arcs leaving the specified state. Args: state: The source state ID. Returns: An ArcIterator. See also: `mutable_arcs`, `states`. """ return ArcIterator(self, state) cpdef _Fst copy(self): """ copy(self) Makes a copy of the FST. """ return _init_XFst(new fst.FstClass(deref(self._fst))) cpdef void draw(self, filename, _SymbolTable isymbols=None, _SymbolTable osymbols=None, SymbolTable ssymbols=None, bool acceptor=False, title=b"", double width=8.5, double height=11, bool portrait=False, bool vertical=False, double ranksep=0.4, double nodesep=0.25, int32 fontsize=14, int32 precision=5, float_format=b"g", bool show_weight_one=False): """ draw(self, filename, isymbols=None, osymbols=None, ssymbols=None, acceptor=False, title="", width=8.5, height=11, portrait=False, vertical=False, ranksep=0.4, nodesep=0.25, fontsize=14, precision=5, float_format="g", show_weight_one=False): Writes out the FST in Graphviz text format. This method writes out the FST in the dot graph description language. The graph can be rendered using the `dot` executable provided by Graphviz. Args: filename: The string location of the output dot/Graphviz file. isymbols: An optional symbol table used to label input symbols. osymbols: An optional symbol table used to label output symbols. ssymbols: An optional symbol table used to label states. acceptor: Should the figure be rendered in acceptor format if possible? title: An optional string indicating the figure title. width: The figure width, in inches. height: The figure height, in inches. portrait: Should the figure be rendered in portrait rather than landscape? vertical: Should the figure be rendered bottom-to-top rather than left-to-right? ranksep: The minimum separation separation between ranks, in inches. nodesep: The minimum separation between nodes, in inches. fontsize: Font size, in points. precision: Numeric precision for floats, in number of chars. float_format: One of: 'e', 'f' or 'g'. show_weight_one: Should weights equivalent to semiring One be printed? See also: `text`. """ cdef string filename_string = tostring(filename) cdef unique_ptr[ofstream] ostrm ostrm.reset(new ofstream(filename_string)) cdef fst.SymbolTable *ssymbols_ptr = NULL if ssymbols is not None: ssymbols_ptr = ssymbols._table fst.DrawFst(deref(self._fst), self._fst.get().InputSymbols() if isymbols is None else isymbols._table, self._fst.get().OutputSymbols() if osymbols is None else osymbols._table, ssymbols_ptr, acceptor, tostring(title), width, height, portrait, vertical, ranksep, nodesep, fontsize, precision, tostring(float_format), show_weight_one, ostrm.get(), filename_string) cpdef Weight final(self, int64 state): """ final(self, state) Returns the final weight of a state. Args: state: The integer index of a state. Returns: The final Weight of that state. Raises: FstIndexError: State index out of range. """ cdef Weight weight = Weight.__new__(Weight) weight._weight.reset(new fst.WeightClass(self._fst.get().Final(state))) return weight cpdef string fst_type(self): """ fst_type(self) Returns a string indicating the FST type. """ return self._fst.get().FstType() cpdef _FstSymbolTable input_symbols(self): """ input_symbols(self) Returns the FST's input symbol table, or None if none is present. See also: `input_symbols`. """ cdef fst.SymbolTable *syms = const_cast[SymbolTable_ptr]( self._fst.get().InputSymbols()) if syms == NULL: return return _init_FstSymbolTable(syms, self._fst) cpdef size_t num_arcs(self, int64 state) except *: """ num_arcs(self, state) Returns the number of arcs leaving a state. Args: state: The integer index of a state. Returns: The number of arcs leaving that state. Raises: FstIndexError: State index out of range. See also: `num_states`. """ cdef size_t result = self._fst.get().NumArcs(state) if result == SIZE_MAX: raise FstIndexError("State index out of range") return result cpdef size_t num_input_epsilons(self, int64 state) except *: """ num_input_epsilons(self, state) Returns the number of arcs with epsilon input labels leaving a state. Args: state: The integer index of a state. Returns: The number of epsilon-input-labeled arcs leaving that state. Raises: FstIndexError: State index out of range. See also: `num_output_epsilons`. """ cdef size_t result = self._fst.get().NumInputEpsilons(state) if result == SIZE_MAX: raise FstIndexError("State index out of range") return result cpdef size_t num_output_epsilons(self, int64 state) except *: """ num_output_epsilons(self, state) Returns the number of arcs with epsilon output labels leaving a state. Args: state: The integer index of a state. Returns: The number of epsilon-output-labeled arcs leaving that state. Raises: FstIndexError: State index out of range. See also: `num_input_epsilons`. """ cdef size_t result = self._fst.get().NumOutputEpsilons(state) if result == SIZE_MAX: raise FstIndexError("State index out of range") return result cpdef _FstSymbolTable output_symbols(self): """ output_symbols(self) Returns the FST's output symbol table, or None if none is present. See also: `input_symbols`. """ cdef fst.SymbolTable *syms = const_cast[SymbolTable_ptr]( self._fst.get().OutputSymbols()) if syms == NULL: return return _init_FstSymbolTable(syms, self._fst) cpdef uint64 properties(self, uint64 mask, bool test): """ properties(self, mask, test) Provides property bits. This method provides user access to the properties attributes for the FST. The resulting value is a long integer, but when it is cast to a boolean, it represents whether or not the FST has the `mask` property. Args: mask: The property mask to be compared to the FST's properties. test: Should any unknown values be computed before comparing against the mask? Returns: A 64-bit bitmask representing the requested properties. """ return self._fst.get().Properties(mask, test) cpdef int64 start(self): """ start(self) Returns the start state. """ return self._fst.get().Start() cpdef StateIterator states(self): """ states(self) Returns an iterator over all states in the FST. Returns: A StateIterator object for the FST. See also: `arcs`, `mutable_arcs`. """ return StateIterator(self) cpdef string text(self, _SymbolTable isymbols=None, _SymbolTable osymbols=None, _SymbolTable ssymbols=None, bool acceptor=False, bool show_weight_one=False, missing_sym=b""): """ text(self, isymbols=None, osymbols=None, ssymbols=None, acceptor=False, show_weight_one=False, missing_sym="") Produces a human-readable string representation of the FST. This method generates a human-readable string representation of the FST. The caller may optionally specify SymbolTables used to label input labels, output labels, or state labels, respectively. Args: isymbols: An optional symbol table used to label input symbols. osymbols: An optional symbol table used to label output symbols. ssymbols: An optional symbol table used to label states. acceptor: Should the FST be rendered in acceptor format if possible? show_weight_one: Should weights equivalent to semiring One be printed? missing_symbol: The string to be printed when symbol table lookup fails. Returns: A formatted string representing the machine. """ # Prints FST to stringstream, then returns resulting string. cdef fst.SymbolTable *ssymbols_ptr = NULL if ssymbols is not None: ssymbols_ptr = ssymbols._table cdef stringstream sstrm fst.PrintFst(deref(self._fst), sstrm, "<pywrapfst>", self._fst.get().InputSymbols() if isymbols is None else isymbols._table, self._fst.get().OutputSymbols() if osymbols is None else osymbols._table, ssymbols_ptr, acceptor, show_weight_one, tostring(missing_sym)) return sstrm.str() cpdef bool verify(self): """ verify(self) Verifies that an FST's contents are sane. Returns: True if the contents are sane, False otherwise. """ return fst.Verify(deref(self._fst)) cpdef string weight_type(self): """ weight_type(self) Provides the FST's weight type. Returns: A string representing the weight type. """ return self._fst.get().WeightType() cpdef void write(self, filename) except *: """ write(self, filename) Serializes FST to a file. This method writes the FST to a file in a binary format. Args: filename: The string location of the output file. Raises: FstIOError: Write failed. """ if not self._fst.get().Write(tostring(filename)): raise FstIOError("Write failed: {!r}".format(filename)) cpdef bytes write_to_string(self): """ write_to_string(self) Serializes FST to a string. Returns: A string. Raises: FstIOError: Write to string failed. See also: `read_from_string`. """ cdef stringstream sstrm if not self._fst.get().Write(sstrm, "write_to_string"): raise FstIOError("Write to string failed") return sstrm.str() cdef class _MutableFst(_Fst): """ (No constructor.) Mutable FST class, wrapping MutableFstClass. This class extends _Fst by adding mutation operations. """ cdef void _check_mutating_imethod(self) except *: """Checks whether an operation mutating the FST has produced an error. This function is not visible to Python users. """ if self._fst.get().Properties(fst.kError, True) == fst.kError: raise FstOpError("Operation failed") cdef void _add_arc(self, int64 state, Arc arc) except *: if not self._fst.get().ValidStateId(state): raise FstIndexError("State index out of range") if not self._mfst.get().AddArc(state, deref(arc._arc)): raise FstOpError("Incompatible or invalid weight type") self._check_mutating_imethod() def add_arc(self, int64 state, Arc arc): """ add_arc(self, state, arc) Adds a new arc to the FST and return self. Args: state: The integer index of the source state. arc: The arc to add. Returns: self. Raises: FstIndexError: State index out of range. FstOpdexError: Incompatible or invalid weight type. See also: `add_state`. """ self._add_arc(state, arc) return self cpdef int64 add_state(self) except *: """ add_state(self) Adds a new state to the FST and returns the state ID. Returns: The integer index of the new state. See also: `add_arc`, `set_start`, `set_final`. """ cdef int64 result = self._mfst.get().AddState() self._check_mutating_imethod() return result cdef void _arcsort(self, sort_type=b"ilabel") except *: cdef fst.ArcSortType sort_type_enum if not fst.GetArcSortType(tostring(sort_type), addr(sort_type_enum)): raise FstArgError("Unknown sort type {!r}".format(sort_type)) fst.ArcSort(self._mfst.get(), sort_type_enum) self._check_mutating_imethod() def arcsort(self, sort_type=b"ilabel"): """ arcsort(self, sort_type="ilabel") Sorts arcs leaving each state of the FST. This operation destructively sorts arcs leaving each state using either input or output labels. Args: sort_type: Either "ilabel" (sort arcs according to input labels) or "olabel" (sort arcs according to output labels). Returns: self. Raises: FstArgError: Unknown sort type. See also: `topsort`. """ self._arcsort(sort_type) return self cdef void _closure(self, bool closure_plus=False) except *: fst.Closure(self._mfst.get(), fst.GetClosureType(closure_plus)) self._check_mutating_imethod() def closure(self, bool closure_plus=False): """ closure(self, closure_plus=False) Computes concatenative closure. This operation destructively converts the FST to its concatenative closure. If A transduces string x to y with weight a, then the closure transduces x to y with weight a, xx to yy with weight a \otimes a, xxx to yyy with weight a \otimes a \otimes a, and so on. The empty string is also transduced to itself with semiring One if `closure_plus` is False. Args: closure_plus: If False, do not accept the empty string. Returns: self. """ self._closure(closure_plus) return self cdef void _concat(self, _Fst ifst) except *: fst.Concat(self._mfst.get(), deref(ifst._fst)) self._check_mutating_imethod() def concat(self, _Fst ifst): """ concat(self, ifst) Computes the concatenation (product) of two FSTs. This operation destructively concatenates the FST with a second FST. If A transduces string x to y with weight a and B transduces string w to v with weight b, then their concatenation transduces string xw to yv with weight a \otimes b. Args: ifst: The second input FST. Returns: self. """ self._concat(ifst) return self cdef void _connect(self) except *: fst.Connect(self._mfst.get()) self._check_mutating_imethod() def connect(self): """ connect(self) Removes unsuccessful paths. This operation destructively trims the FST, removing states and arcs that are not part of any successful path. Returns: self. """ self._connect() return self cdef void _decode(self, EncodeMapper encoder) except *: fst.Decode(self._mfst.get(), deref(encoder._encoder)) self._check_mutating_imethod() def decode(self, EncodeMapper encoder): """ decode(self, encoder) Decodes encoded labels and/or weights. This operation reverses the encoding performed by `encode`. Args: encoder: An EncodeMapper object used to encode the FST. Returns: self. See also: `encode`. """ self._decode(encoder) return self cdef void _delete_arcs(self, int64 state, size_t n=0) except *: if not (self._mfst.get().DeleteArcs(state, n) if n else self._mfst.get().DeleteArcs(state)): raise FstIndexError("State index out of range") self._check_mutating_imethod() def delete_arcs(self, int64 state, size_t n=0): """ delete_arcs(self, state, n=0) Deletes arcs leaving a particular state. Args: state: The integer index of a state. n: An optional argument indicating how many arcs to be deleted. If this argument is omitted or passed as zero, all arcs from this state are deleted. Returns: self. Raises: FstIndexError: State index out of range. See also: `delete_states`. """ self._delete_arcs(state, n) return self cdef void _delete_states(self, states=None) except *: # Only the former signature has a possible indexing failure. if states: if not self._mfst.get().DeleteStates(<const vector[int64]> states): raise FstIndexError("State index out of range") else: self._mfst.get().DeleteStates() self._check_mutating_imethod() def delete_states(self, states=None): """ delete_states(self, states=None) Deletes states. Args: states: An optional iterable of integer indices of the states to be deleted. If this argument is omitted, all states are deleted. Returns: self. Raises: FstIndexError: State index out of range. See also: `delete_arcs`. """ self._delete_states(states) return self cdef void _encode(self, EncodeMapper encoder) except *: fst.Encode(self._mfst.get(), encoder._encoder.get()) self._check_mutating_imethod() def encode(self, EncodeMapper encoder): """ encode(self, encoder) Encodes labels and/or weights. This operation allows for the representation of a weighted transducer as a weighted acceptor, an unweighted transducer, or an unweighted acceptor by considering the pair (input label, output label), the pair (input label, weight), or the triple (input label, output label, weight) as a single label. Applying this operation mutates the EncodeMapper argument, which can then be used to decode. Args: encoder: An EncodeMapper object to be used as the encoder. Returns: self. See also: `decode`. """ self._encode(encoder) return self cdef void _invert(self) except *: fst.Invert(self._mfst.get()) self._check_mutating_imethod() def invert(self): """ invert(self) Inverts the FST's transduction. This operation destructively inverts the FST's transduction by exchanging input and output labels. Returns: self. """ self._invert() return self cdef void _minimize(self, float delta=fst.kShortestDelta, bool allow_nondet=False) except *: # This runs in-place when the second argument is null. fst.Minimize(self._mfst.get(), NULL, delta, allow_nondet) self._check_mutating_imethod() def minimize(self, float delta=fst.kShortestDelta, bool allow_nondet=False): """ minimize(self, delta=1e-6, allow_nondet=False) Minimizes the FST. This operation destructively performs the minimization of deterministic weighted automata and transducers. If the input FST A is an acceptor, this operation produces the minimal acceptor B equivalent to A, i.e. the acceptor with a minimal number of states that is equivalent to A. If the input FST A is a transducer, this operation internally builds an equivalent transducer with a minimal number of states. However, this minimality is obtained by allowing transition having strings of symbols as output labels, this known in the litterature as a real-time transducer. Such transducers are not directly supported by the library. This function will convert such transducer by expanding each string-labeled transition into a sequence of transitions. This will results in the creation of new states, hence losing the minimality property. Args: delta: Comparison/quantization delta. allow_nondet: Attempt minimization of non-deterministic FST? Returns: self. """ self._minimize(delta, allow_nondet) return self cpdef MutableArcIterator mutable_arcs(self, int64 state): """ mutable_arcs(self, state) Returns a mutable iterator over arcs leaving the specified state. Args: state: The source state ID. Returns: A MutableArcIterator. See also: `arcs`, `states`. """ return MutableArcIterator(self, state) def mutable_input_symbols(self): """ mutable_input_symbols(self) Returns the FST's (mutable) input symbol table, or None if none is present. """ cdef fst.SymbolTable *tst = self._mfst.get().MutableInputSymbols() if tst == NULL: return return _init_MutableFstSymbolTable(tst, self._mfst) def mutable_output_symbols(self): """ mutable_output_symbols(self) Returns the FST's (mutable) output symbol table, or None if none is present. """ cdef fst.SymbolTable *tst = self._mfst.get().MutableOutputSymbols() if tst == NULL: return return _init_MutableFstSymbolTable(tst, self._mfst) cpdef int64 num_states(self): """ num_states(self) Returns the number of states. """ return self._mfst.get().NumStates() cdef void _project(self, bool project_output=False) except *: fst.Project(self._mfst.get(), fst.GetProjectType(project_output)) self._check_mutating_imethod() def project(self, bool project_output=False): """ project(self, project_output=False) Converts the FST to an acceptor using input or output labels. This operation destructively projects an FST onto its domain or range by either copying each arc's input label to its output label (the default) or vice versa. Args: project_output: Should the output labels be projected? Returns: self. See also: `decode`, `encode`, `relabel_pairs`, `relabel_symbols`. """ self._project(project_output) return self cdef void _prune(self, float delta=fst.kDelta, int64 nstate=fst.kNoStateId, weight=None) except *: # Threshold is set to semiring Zero (no pruning) if no weight is specified. cdef fst.WeightClass wc = _get_WeightClass_or_Zero(self.weight_type(), weight) fst.Prune(self._mfst.get(), wc, nstate, delta) self._check_mutating_imethod() def prune(self, float delta=fst.kDelta, int64 nstate=fst.kNoStateId, weight=None): """ prune(self, delta=0.0009765625, nstate=NO_STATE_ID, weight=None) Removes paths with weights below a certain threshold. This operation deletes states and arcs in the input FST that do not belong to a successful path whose weight is no more (w.r.t the natural semiring order) than the threshold t \otimes-times the weight of the shortest path in the input FST. Weights must be commutative and have the path property. Args: delta: Comparison/quantization delta. nstate: State number threshold. weight: A Weight or weight string indicating the desired weight threshold below which paths are pruned; if omitted, no paths are pruned. Returns: self. See also: The constructive variant. """ self._prune(delta, nstate, weight) return self cdef void _push(self, float delta=fst.kDelta, bool remove_total_weight=False, bool to_final=False) except *: fst.Push(self._mfst.get(), fst.GetReweightType(to_final), delta, remove_total_weight) self._check_mutating_imethod() def push(self, float delta=fst.kDelta, bool remove_total_weight=False, bool to_final=False): """ push(self, delta=0.0009765625, remove_total_weight=False, to_final=False) Pushes weights towards the initial or final states. This operation destructively produces an equivalent transducer by pushing the weights towards the initial state or toward the final states. When pushing weights towards the initial state, the sum of the weight of the outgoing transitions and final weight at any non-initial state is equal to one in the resulting machine. When pushing weights towards the final states, the sum of the weight of the incoming transitions at any state is equal to one. Weights need to be left distributive when pushing towards the initial state and right distributive when pushing towards the final states. Args: delta: Comparison/quantization delta. remove_total_weight: If pushing weights, should the total weight be removed? to_final: Push towards final states? Returns: self. See also: The constructive variant, which also supports label pushing. """ self._push(delta, remove_total_weight, to_final) return self cdef void _relabel_pairs(self, ipairs=None, opairs=None) except *: cdef unique_ptr[vector[fst.LabelPair]] _ipairs _ipairs.reset(new vector[fst.LabelPair]()) cdef unique_ptr[vector[fst.LabelPair]] _opairs _opairs.reset(new vector[fst.LabelPair]()) cdef int64 before cdef int64 after if ipairs: for (before, after) in ipairs: _ipairs.get().push_back(fst.LabelPair(before, after)) if opairs: for (before, after) in opairs: _opairs.get().push_back(fst.LabelPair(before, after)) if _ipairs.get().empty() and _opairs.get().empty(): raise FstArgError("No relabeling pairs specified.") fst.Relabel(self._mfst.get(), deref(_ipairs), deref(_opairs)) self._check_mutating_imethod() def relabel_pairs(self, ipairs=None, opairs=None): """ relabel_pairs(self, ipairs=None, opairs=None) Replaces input and/or output labels using pairs of labels. This operation destructively relabels the input and/or output labels of the FST using pairs of the form (old_ID, new_ID); omitted indices are identity-mapped. Args: ipairs: An iterable containing (older index, newer index) integer pairs. opairs: An iterable containing (older index, newer index) integer pairs. Returns: self. Raises: FstArgError: No relabeling pairs specified. See also: `decode`, `encode`, `project`, `relabel_tables`. """ self._relabel_pairs(ipairs, opairs) return self cdef void _relabel_tables(self, _SymbolTable old_isymbols=None, _SymbolTable new_isymbols=None, unknown_isymbol=b"", bool attach_new_isymbols=True, _SymbolTable old_osymbols=None, _SymbolTable new_osymbols=None, unknown_osymbol=b"", bool attach_new_osymbols=True) except *: if new_isymbols is None and new_osymbols is None: raise FstArgError("No new SymbolTables specified") cdef fst.SymbolTable *new_isymbols_ptr = NULL if new_isymbols is not None: new_isymbols_ptr = new_isymbols._table cdef fst.SymbolTable *new_osymbols_ptr = NULL if new_osymbols is not None: new_osymbols_ptr = new_osymbols._table fst.Relabel(self._mfst.get(), self._fst.get().InputSymbols() if old_isymbols is None else old_isymbols._table, new_isymbols_ptr, tostring(unknown_isymbol), attach_new_isymbols, self._fst.get().OutputSymbols() if old_osymbols is None else old_osymbols._table, new_osymbols_ptr, tostring(unknown_osymbol), attach_new_osymbols) self._check_mutating_imethod() def relabel_tables(self, _SymbolTable old_isymbols=None, _SymbolTable new_isymbols=None, unknown_isymbol=b"", bool attach_new_isymbols=True, _SymbolTable old_osymbols=None, _SymbolTable new_osymbols=None, unknown_osymbol=b"", bool attach_new_osymbols=True): """ relabel_tables(self, old_isymbols=None, new_isymbols=None, unknown_isymbol="", attach_new_isymbols=True, old_osymbols=None, new_osymbols=None, unknown_osymbol="", attach_new_osymbols=True) Replaces input and/or output labels using SymbolTables. This operation destructively relabels the input and/or output labels of the FST using user-specified symbol tables; omitted symbols are identity-mapped. Args: old_isymbols: The old SymbolTable for input labels, defaulting to the FST's input symbol table. new_isymbols: A SymbolTable used to relabel the input labels unknown_isymbol: Input symbol to use to relabel OOVs (if empty, OOVs raise an exception) attach_new_isymbols: Should new_isymbols be made the FST's input symbol table? old_osymbols: The old SymbolTable for output labels, defaulting to the FST's output symbol table. new_osymbols: A SymbolTable used to relabel the output labels. unknown_osymbol: Outnput symbol to use to relabel OOVs (if empty, OOVs raise an exception) attach_new_isymbols: Should new_osymbols be made the FST's output symbol table? Returns: self. Raises: FstArgError: No SymbolTable specified. See also: `decode`, `encode`, `project`, `relabel_pairs`. """ self._relabel_tables(old_isymbols, new_isymbols, unknown_isymbol, attach_new_isymbols, old_osymbols, new_osymbols, unknown_osymbol, attach_new_osymbols) return self cdef void _reserve_arcs(self, int64 state, size_t n) except *: if not self._mfst.get().ReserveArcs(state, n): raise FstIndexError("State index out of range") self._check_mutating_imethod() def reserve_arcs(self, int64 state, size_t n): """ reserve_arcs(self, state, n) Reserve n arcs at a particular state (best effort). Args: state: The integer index of a state. n: The number of arcs to reserve. Returns: self. Raises: FstIndexError: State index out of range. See also: `reserve_states`. """ self._reserve_arcs(state, n) return self cdef void _reserve_states(self, int64 n) except *: self._mfst.get().ReserveStates(n) self._check_mutating_imethod() def reserve_states(self, int64 n): """ reserve_states(self, n) Reserve n states (best effort). Args: n: The number of states to reserve. Returns: self. See also: `reserve_arcs`. """ self._reserve_states(n) return self cdef void _reweight(self, potentials, bool to_final=False) except *: cdef unique_ptr[vector[fst.WeightClass]] _potentials _potentials.reset(new vector[fst.WeightClass]()) cdef string weight_type = self.weight_type() for weight in potentials: _potentials.get().push_back(_get_WeightClass_or_One(self.weight_type(), weight)) fst.Reweight(self._mfst.get(), deref(_potentials), fst.GetReweightType(to_final)) self._check_mutating_imethod() def reweight(self, potentials, bool to_final=False): """ reweight(self, potentials, to_final=False) Reweights an FST using an iterable of potentials. This operation destructively reweights an FST according to the potentials and in the direction specified by the user. An arc of weight w, with an origin state of potential p and destination state of potential q, is reweighted by p^{-1} \otimes (w \otimes q) when reweighting towards the initial state, and by (p \otimes w) \otimes q^{-1} when reweighting towards the final states. The weights must be left distributive when reweighting towards the initial state and right distributive when reweighting towards the final states (e.g., TropicalWeight and LogWeight). Args: potentials: An iterable of Weight or weight strings. to_final: Push towards final states? Returns: self. """ self._reweight(potentials, to_final) return self cdef void _rmepsilon(self, queue_type=b"auto", bool connect=True, weight=None, int64 nstate=fst.kNoStateId, float delta=fst.kShortestDelta) except *: cdef fst.WeightClass wc = _get_WeightClass_or_Zero(self.weight_type(), weight) cdef unique_ptr[fst.RmEpsilonOptions] opts opts.reset(new fst.RmEpsilonOptions(_get_queue_type(tostring(queue_type)), connect, wc, nstate, delta)) fst.RmEpsilon(self._mfst.get(), deref(opts)) self._check_mutating_imethod() def rmepsilon(self, queue_type=b"auto", bool connect=True, weight=None, int64 nstate=fst.kNoStateId, float delta=fst.kShortestDelta): """ rmepsilon(self, queue_type="auto", connect=True, weight=None, nstate=NO_STATE_ID, delta=1e-6): Removes epsilon transitions. This operation destructively removes epsilon transitions, i.e., those where both input and output labels are epsilon) from an FST. Args: queue_type: A string matching a known queue type; one of: "auto", "fifo", "lifo", "shortest", "state", "top". connect: Should output be trimmed? weight: A Weight or weight string indicating the desired weight threshold below which paths are pruned; if omitted, no paths are pruned. nstate: State number threshold. delta: Comparison/quantization delta. Returns: self. """ self._rmepsilon(queue_type, connect, weight, nstate, delta) return self cdef void _set_final(self, int64 state, weight=None) except *: if not self._mfst.get().ValidStateId(state): raise FstIndexError("State index out of range") cdef fst.WeightClass wc = _get_WeightClass_or_One(self.weight_type(), weight) if not self._mfst.get().SetFinal(state, wc): raise FstOpError("Incompatible or invalid weight") self._check_mutating_imethod() def set_final(self, int64 state, weight=None): """ set_final(self, state, weight) Sets the final weight for a state. Args: state: The integer index of a state. weight: A Weight or weight string indicating the desired final weight; if omitted, it is set to semiring One. Returns: self. Raises: FstIndexError: State index out of range. FstOpError: Incompatible or invalid weight. See also: `set_start`. """ self._set_final(state, weight) return self cdef void _set_input_symbols(self, _SymbolTable syms) except *: if syms is None: self._mfst.get().SetInputSymbols(NULL) return self._mfst.get().SetInputSymbols(syms._table) self._check_mutating_imethod() def set_input_symbols(self, _SymbolTable syms): """ set_input_symbols(self, syms) Sets the input symbol table. Passing None as a value will delete the input symbol table. Args: syms: A SymbolTable. Returns: self. See also: `set_output_symbols`. """ self._set_input_symbols(syms) return self cdef void _set_output_symbols(self, _SymbolTable syms) except *: if syms is None: self._mfst.get().SetOutputSymbols(NULL) return self._mfst.get().SetOutputSymbols(syms._table) self._check_mutating_imethod() def set_output_symbols(self, _SymbolTable syms): """ set_output_symbols(self, syms) Sets the output symbol table. Passing None as a value will delete the output symbol table. Args: syms: A SymbolTable. Returns: self. See also: `set_input_symbols`. """ self._set_output_symbols(syms) return self cdef void _set_properties(self, uint64 props, uint64 mask): self._mfst.get().SetProperties(props, mask) def set_properties(self, uint64 props, uint64 mask): """ set_properties(self, props, mask) Sets the properties bits. Args: props: The properties to be set. mask: A mask to be applied to the `props` argument before setting the FST's properties. Returns: self. """ self._set_properties(props, mask) return self cdef void _set_start(self, int64 state) except *: if not self._mfst.get().SetStart(state): raise FstIndexError("State index out of range") self._check_mutating_imethod() def set_start(self, int64 state): """ set_start(self, state) Sets a state to be the initial state state. Args: state: The integer index of a state. Returns: self. Raises: FstIndexError: State index out of range. See also: `set_final`. """ self._set_start(state) return self cdef void _topsort(self) except *: # TopSort returns False if the FST is cyclic, and thus can't be TopSorted. if not fst.TopSort(self._mfst.get()): logging.warning("Cannot topsort cyclic FST.") self._check_mutating_imethod() def topsort(self): """ topsort(self) Sorts transitions by state IDs. This operation destructively topologically sorts the FST, if it is acyclic; otherwise it remains unchanged. Once sorted, all transitions are from lower state IDs to higher state IDs Returns: self. See also: `arcsort`. """ self._topsort() return self cdef void _union(self, _Fst ifst) except *: fst.Union(self._mfst.get(), deref(ifst._fst)) self._check_mutating_imethod() def union(self, _Fst ifst): """ union(self, ifst) Computes the union (sum) of two FSTs. This operation computes the union (sum) of two FSTs. If A transduces string x to y with weight a and B transduces string w to v with weight b, then their union transduces x to y with weight a and w to v with weight b. Args: ifst: The second input FST. Returns: self. """ self._union(ifst) return self # Pseudo-constructors for _Fst and _MutableFst. # # _init_Fst and _init_MutableFst use an FstClass pointer to instantiate _Fst # and _MutableFst objects, respectively. The latter function is only safe to # call when the FST being wrapped is known to be kMutable. The caller can # safely use it when they have either checked this bit (e.g., by using # _init_XFst) or have themselves constructed a mutable container for the # FstClass pointer they're passing (e.g., most of the constructive operations, # storing their results in a VectorFstClass, a derivative of MutableFstClass). # # _create_Fst constructs an empty VectorFstClass of a user-specified arc type, # and passes this pointer to _init_MutableFst. # # _read_Fst reads an FST from disk, performing FST conversion if requested, and # then passes this pointer to _init_XFst. # # The Python class Fst provides a wrapper for these two operations. The former # can be accessed by calling Fst(...), which acts like a class method, and the # latter via Fst.read(...), which acts like a static method. This is a bit # nasty, but totally hidden from the Python user. cdef _Fst _init_Fst(FstClass_ptr tfst): if tfst.Properties(fst.kError, True): raise FstOpError("Operation failed") cdef _Fst ofst = _Fst.__new__(_Fst) ofst._fst.reset(tfst) return ofst cdef _MutableFst _init_MutableFst(MutableFstClass_ptr tfst): if tfst.Properties(fst.kError, True): raise FstOpError("Operation failed") cdef _MutableFst ofst = _MutableFst.__new__(_MutableFst) ofst._fst.reset(tfst) # Makes a copy of it as the derived type! Cool. ofst._mfst = static_pointer_cast[fst.MutableFstClass, fst.FstClass](ofst._fst) return ofst cdef _Fst _init_XFst(FstClass_ptr tfst): if tfst.Properties(fst.kMutable, True): return _init_MutableFst(static_cast[MutableFstClass_ptr](tfst)) else: return _init_Fst(tfst) cdef _MutableFst _create_Fst(arc_type=b"standard"): cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(tostring(arc_type))) if tfst.get() == NULL: raise FstOpError("Unknown arc type: {!r}".format(arc_type)) return _init_MutableFst(tfst.release()) cpdef _Fst _read(filename): cdef unique_ptr[fst.FstClass] tfst tfst.reset(fst.FstClass.Read(tostring(filename))) if tfst.get() == NULL: raise FstIOError("Read failed: {!r}".format(filename)) return _init_XFst(tfst.release()) cpdef _Fst _read_from_string(state): cdef stringstream sstrm sstrm << tostring(state) cdef unique_ptr[fst.FstClass] tfst tfst.reset(fst.FstClass.ReadFromStream(sstrm, "<pywrapfst>")) if tfst.get() == NULL: raise FstIOError("Read failed: <string>") return _init_XFst(tfst.release()) class Fst(object): """ Fst(arc_type="standard") Constructs an empty FST. Args: arc_type: A string indicating the arc type. Raises: FstError: Unknown arc type. Raises: FstOpError: operation failed. """ def __new__(cls, arc_type=b"standard"): return _create_Fst(arc_type) @staticmethod def read(filename): """ read(filename): Reads an FST from a file. Args: filename: The string location of the input file. Returns: An FST object. Raises: FstIOError: Read failed. """ return _read(filename) @staticmethod def read_from_string(state): """ read_from_string(string, fst_type=None) Reads an FST from a serialized string. Args: state: A string containing the serialized FST. Returns: An FST object. Raises: FstIOError: Read failed. FstOpError: Read-time conversion failed. See also: `write_to_string`. """ return _read_from_string(state) ## FST constants. NO_LABEL = fst.kNoLabel NO_STATE_ID = fst.kNoStateId # TODO(kbg): Figure out how to access static class variables so I don't have # to do it this way. NO_SYMBOL = kNoSymbol ## FST properties. EXPANDED = fst.kExpanded MUTABLE = fst.kMutable ERROR = fst.kError ACCEPTOR = fst.kAcceptor NOT_ACCEPTOR = fst.kNotAcceptor I_DETERMINISTIC = fst.kIDeterministic NON_I_DETERMINISTIC = fst.kNonIDeterministic O_DETERMINISTIC = fst.kODeterministic NON_O_DETERMINISTIC = fst.kNonODeterministic EPSILONS = fst.kEpsilons NO_EPSILONS = fst.kNoEpsilons I_EPSILONS = fst.kIEpsilons NO_I_EPSILONS = fst.kNoIEpsilons O_EPSILONS = fst.kOEpsilons NO_O_EPSILONS = fst.kNoOEpsilons I_LABEL_SORTED = fst.kILabelSorted NOT_I_LABEL_SORTED = fst.kNotILabelSorted O_LABEL_SORTED = fst.kOLabelSorted NOT_O_LABEL_SORTED = fst.kNotOLabelSorted WEIGHTED = fst.kWeighted UNWEIGHTED = fst.kUnweighted CYCLIC = fst.kCyclic ACYCLIC = fst.kAcyclic INITIAL_CYCLIC = fst.kInitialCyclic INITIAL_ACYCLIC = fst.kInitialAcyclic TOP_SORTED = fst.kTopSorted NOT_TOP_SORTED = fst.kNotTopSorted ACCESSIBLE = fst.kAccessible NOT_ACCESSIBLE = fst.kNotAccessible COACCESSIBLE = fst.kCoAccessible NOT_COACCESSIBLE = fst.kNotCoAccessible STRING = fst.kString NOT_STRING = fst.kNotString WEIGHTED_CYCLES = fst.kWeightedCycles UNWEIGHTED_CYCLES = fst.kUnweightedCycles NULL_PROPERTIES = fst.kNullProperties COPY_PROPERTIES = fst.kCopyProperties INTRINSIC_PROPERTIES = fst.kIntrinsicProperties EXTRINSIC_PROPERTIES = fst.kExtrinsicProperties SET_START_PROPERTIES = fst.kSetStartProperties SET_FINAL_PROPERTIES = fst.kSetFinalProperties ADD_STATE_PROPERTIES = fst.kAddStateProperties ADD_ARC_PROPERTIES = fst.kAddArcProperties SET_ARC_PROPERTIES = fst.kSetArcProperties DELETE_STATE_PROPERTIES = fst.kDeleteStatesProperties DELETE_ARC_PROPERTIES = fst.kDeleteArcsProperties STATE_SORT_PROPERTIES = fst.kStateSortProperties ARC_SORT_PROPERTIES = fst.kArcSortProperties I_LABEL_INVARIANT_PROPERTIES = fst.kILabelInvariantProperties O_LABEL_INVARIANT_PROPERTIES = fst.kOLabelInvariantProperties WEIGHT_INVARIANT_PROPERTIES = fst.kWeightInvariantProperties ADD_SUPERFINAL_PROPERTIES = fst.kAddSuperFinalProperties RM_SUPERFINAL_PROPERTIES = fst.kRmSuperFinalProperties BINARY_PROPERTIES = fst.kBinaryProperties TRINARY_PROPERTIES = fst.kTrinaryProperties POS_TRINARY_PROPERTIES = fst.kPosTrinaryProperties NEG_TRINARY_PROPERTIES = fst.kNegTrinaryProperties FST_PROPERTIES = fst.kFstProperties ## Arc iterator properties. ARC_I_LABEL_VALUE = fst.kArcILabelValue ARC_O_LABEL_VALUE = fst.kArcOLabelValue ARC_WEIGHT_VALUE = fst.kArcWeightValue ARC_NEXT_STATE_VALUE = fst.kArcNextStateValue ARC_NO_CACHE = fst.kArcNoCache ARC_VALUE_FLAGS = fst.kArcValueFlags ARC_FLAGS = fst.kArcFlags ## EncodeMapper properties. ENCODE_LABELS = fst.kEncodeLabels ENCODE_WEIGHTS = fst.kEncodeWeights ENCODE_FLAGS = fst.kEncodeFlags ## Arc, ArcIterator, and MutableArcIterator. cdef class Arc(object): """ Arc(ilabel, olabel, weight, nextstate) This class represents an arc while remaining agnostic about the underlying arc type. Attributes of the arc can be accessed or mutated, and the arc can be copied. Attributes: ilabel: The input label. olabel: The output label. weight: The arc weight. nextstate: The destination state for the arc. """ def __repr__(self): return "<Arc at 0x{:x}>".format(id(self)) def __init__(self, int64 ilabel, int64 olabel, weight, int64 nextstate): cdef fst.WeightClass wc = _get_WeightClass_or_One(b"tropical", weight) self._arc.reset(new fst.ArcClass(ilabel, olabel, wc, nextstate)) cpdef Arc copy(self): return Arc(self.ilabel, self.olabel, self.weight, self.nextstate) property ilabel: def __get__(self): return deref(self._arc).ilabel def __set__(self, int64 value): deref(self._arc).ilabel = value property olabel: def __get__(self): return deref(self._arc).olabel def __set__(self, int64 value): deref(self._arc).olabel = value property weight: def __get__(self): cdef Weight weight = Weight.__new__(Weight) weight._weight.reset(new fst.WeightClass(deref(self._arc).weight)) return weight def __set__(self, weight): deref(self._arc).weight = _get_WeightClass_or_One(b"tropical", weight) property nextstate: def __get__(self): return deref(self._arc).nextstate def __set__(self, int64 value): deref(self._arc).nextstate = value cdef Arc _init_Arc(const fst.ArcClass &arc): cdef Weight weight = Weight.__new__(Weight) weight._weight.reset(new fst.WeightClass(arc.weight)) return Arc(arc.ilabel, arc.olabel, weight, arc.nextstate) cdef class ArcIterator(object): """ ArcIterator(ifst, state) This class is used for iterating over the arcs leaving some state of an FST. """ def __repr__(self): return "<ArcIterator at 0x{:x}>".format(id(self)) def __init__(self, _Fst ifst, int64 state): if not ifst._fst.get().ValidStateId(state): raise FstIndexError("State index out of range") # Makes copy of the shared_ptr, potentially extending the FST's lifetime. self._fst = ifst._fst self._aiter.reset(new fst.ArcIteratorClass(deref(self._fst), state)) # This just registers this class as a possible iterator. def __iter__(self): return self # Magic method used to get a Pythonic API out of the C++ API. def __next__(self): if self.done(): raise StopIteration result = self.value() self.next() return result cpdef bool done(self): """ done(self) Indicates whether the iterator is exhausted or not. Returns: True if the iterator is exhausted, False otherwise. """ return self._aiter.get().Done() cpdef uint32 flags(self): """ flags(self) Returns the current iterator behavioral flags. Returns: The current iterator behavioral flags as an integer. """ return self._aiter.get().Flags() cpdef void next(self): """ next(self) Advances the iterator. """ self._aiter.get().Next() cpdef size_t position(self): """ position(self) Returns the position of the iterator. Returns: The iterator's position, expressed as an integer. """ return self._aiter.get().Position() cpdef void reset(self): """ reset(self) Resets the iterator to the initial position. """ self._aiter.get().Reset() cpdef void seek(self, size_t a): """ seek(self, a) Advance the iterator to a new position. Args: a: The position to seek to. """ self._aiter.get().Seek(a) cpdef void set_flags(self, uint32 flags, uint32 mask): """ set_flags(self, flags, mask) Sets the current iterator behavioral flags. Args: flags: The properties to be set. mask: A mask to be applied to the `flags` argument before setting them. """ self._aiter.get().SetFlags(flags, mask) cpdef object value(self): """ value(self) Returns the current arc. """ return _init_Arc(self._aiter.get().Value()) cdef class MutableArcIterator(object): """ MutableArcIterator(ifst, state) This class is used for iterating over the arcs leaving some state of an FST, also permitting mutation of the current arc. """ def __repr__(self): return "<MutableArcIterator at 0x{:x}>".format(id(self)) def __init__(self, _MutableFst ifst, int64 state): if not ifst._fst.get().ValidStateId(state): raise FstIndexError("State index out of range") # Makes copy of the shared_ptr, potentially extending the FST's lifetime. self._mfst = ifst._mfst self._aiter.reset(new fst.MutableArcIteratorClass(ifst._mfst.get(), state)) cpdef bool done(self): """ done(self) Indicates whether the iterator is exhausted or not. Returns: True if the iterator is exhausted, False otherwise. """ return self._aiter.get().Done() cpdef uint32 flags(self): """ flags(self) Returns the current iterator behavioral flags. Returns: The current iterator behavioral flags as an integer. """ return self._aiter.get().Flags() cpdef void next(self): """ next(self) Advances the iterator. """ self._aiter.get().Next() cpdef size_t position(self): """ position(self) Returns the position of the iterator. Returns: The iterator's position, expressed as an integer. """ return self._aiter.get().Position() cpdef void reset(self): """ reset(self) Resets the iterator to the initial position. """ self._aiter.get().Reset() cpdef void seek(self, size_t a): """ seek(self, a) Advance the iterator to a new position. Args: a: The position to seek to. """ self._aiter.get().Seek(a) cpdef void set_flags(self, uint32 flags, uint32 mask): """ set_flags(self, flags, mask) Sets the current iterator behavioral flags. Args: flags: The properties to be set. mask: A mask to be applied to the `flags` argument before setting them. """ self._aiter.get().SetFlags(flags, mask) cpdef void set_value(self, Arc arc): """ set_value(self, arc) Replace the current arc with a new arc. Args: arc: The arc to replace the current arc with. """ self._aiter.get().SetValue(deref(arc._arc)) cpdef object value(self): """ value(self) Returns the current arc. """ return _init_Arc(self._aiter.get().Value()) ## StateIterator. cdef class StateIterator(object): """ StateIterator(ifst) This class is used for iterating over the states in an FST. """ def __repr__(self): return "<StateIterator at 0x{:x}>".format(id(self)) def __init__(self, _Fst ifst): # Makes copy of the shared_ptr, potentially extending the FST's lifetime. self._fst = ifst._fst self._siter.reset(new fst.StateIteratorClass(deref(self._fst))) # This just registers this class as a possible iterator. def __iter__(self): return self # Magic method used to get a Pythonic API out of the C++ API. def __next__(self): if self.done(): raise StopIteration cdef int64 result = self.value() self.next() return result cpdef bool done(self): """ done(self) Indicates whether the iterator is exhausted or not. Returns: True if the iterator is exhausted, False otherwise. """ return self._siter.get().Done() cpdef void next(self): """ next(self) Advances the iterator. """ self._siter.get().Next() cpdef void reset(self): """ reset(self) Resets the iterator to the initial position. """ self._siter.get().Reset() cpdef int64 value(self): """ value(self) Returns the current state index. """ return self._siter.get().Value() ## FST operations. cdef _Fst _map(_Fst ifst, float delta=fst.kDelta, map_type=b"identity", double power=1., weight=None): cdef fst.MapType map_type_enum if not fst.GetMapType(tostring(map_type), addr(map_type_enum)): raise FstArgError("Unknown map type: {!r}".format(map_type)) cdef fst.WeightClass wc = (_get_WeightClass_or_One(ifst.weight_type(), weight) if map_type_enum == fst.TIMES_MAPPER else _get_WeightClass_or_Zero(ifst.weight_type(), weight)) return _init_XFst(fst.Map(deref(ifst._fst), map_type_enum, delta, power, wc)) cpdef _Fst arcmap(_Fst ifst, float delta=fst.kDelta, map_type=b"identity", double power=1., weight=None): """ arcmap(ifst, delta=0.0009765625, map_type="identity", weight=None) Constructively applies a transform to all arcs and final states. This operation transforms each arc and final state in the input FST using one of the following: * identity: maps to self. * input_epsilon: replaces all input labels with epsilon. * invert: reciprocates all non-Zero weights. * float_power: raises all weights to a floating-point power. * output_epsilon: replaces all output labels with epsilon. * quantize: quantizes weights. * plus: adds a constant to all weights. * power: raises all weights to an integral power. * rmweight: replaces all non-Zero weights with 1. * superfinal: redirects final states to a new superfinal state. * times: right-multiplies a constant to all weights. * to_log: converts weights to the log semiring. * to_log64: converts weights to the log64 semiring. * to_standard: converts weights to the tropical ("standard") semiring. Args: ifst: The input FST. delta: Comparison/quantization delta (ignored unless `map_type` is `quantize`). map_type: A string matching a known mapping operation (see above). power: A positive scalar or integer power; ignored unless `map_type` is `float_power` or `power` (in which case it defaults to 1). weight: A Weight or weight string passed to the arc-mapper; ignored unless `map_type` is `plus` (in which case it defaults to semiring Zero) or `times` (in which case it defaults to semiring One). Returns: An FST with arcs and final states remapped. Raises: FstArgError: Unknown map type. See also: `statemap`. """ return _map(ifst, delta, map_type, power, weight) cpdef _MutableFst compose(_Fst ifst1, _Fst ifst2, compose_filter=b"auto", bool connect=True): """ compose(ifst1, ifst2, compose_filter="auto", connect=True) Constructively composes two FSTs. This operation computes the composition of two FSTs. If A transduces string x to y with weight a and B transduces y to z with weight b, then their composition transduces string x to z with weight a \otimes b. The output labels of the first transducer or the input labels of the second transducer must be sorted (or otherwise support appropriate matchers). Args: ifst1: The first input FST. ifst2: The second input FST. compose_filter: A string matching a known composition filter; one of: "alt_sequence", "auto", "match", "null", "sequence", "trivial". connect: Should output be trimmed? Returns: An FST. See also: `arcsort`. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst1.arc_type())) cdef unique_ptr[fst.ComposeOptions] opts opts.reset(new fst.ComposeOptions(connect, _get_compose_filter(tostring(compose_filter)))) fst.Compose(deref(ifst1._fst), deref(ifst2._fst), tfst.get(), deref(opts)) return _init_MutableFst(tfst.release()) cpdef _Fst convert(_Fst ifst, fst_type=b""): """ convert(ifst, fst_type="") Constructively converts an FST to a new internal representation. Args: ifst: The input FST. fst_type: A string indicating the FST type to convert to, or an empty string if no conversion is desired. Returns: The input FST converted to the desired FST type. Raises: FstOpError: Conversion failed. """ cdef string fst_type_string = tostring(fst_type) cdef unique_ptr[fst.FstClass] tfst tfst.reset(fst.Convert(deref(ifst._fst), fst_type_string)) # Script-land Convert returns a null pointer to signal failure. if tfst.get() == NULL: raise FstOpError("Conversion to {!r} failed".format(fst_type)) return _init_XFst(tfst.release()) cpdef _MutableFst determinize(_Fst ifst, float delta=fst.kShortestDelta, det_type=b"functional", int64 nstate=fst.kNoStateId, int64 subsequential_label=0, weight=None, bool increment_subsequential_label=False): """ determinize(ifst, delta=1e-6, det_type="functional", nstate=NO_STATE_ID, subsequential_label=0, weight=None, incremental_subsequential_label=False) Constructively determinizes a weighted FST. This operations creates an equivalent FST that has the property that no state has two transitions with the same input label. For this algorithm, epsilon transitions are treated as regular symbols (cf. `rmepsilon`). Args: ifst: The input FST. delta: Comparison/quantization delta. det_type: Type of determinization; one of: "functional" (input transducer is functional), "nonfunctional" (input transducer is not functional) and disambiguate" (input transducer is not functional but only keep the min of ambiguous outputs). nstate: State number threshold. subsequential_label: Input label of arc corresponding to residual final output when producing a subsequential transducer. weight: A Weight or weight string indicating the desired weight threshold below which paths are pruned; if omitted, no paths are pruned. increment_subsequential_label: Increment subsequential when creating several arcs for the residual final output at a given state. Returns: An equivalent deterministic FST. Raises: FstArgError: Unknown determinization type. See also: `disambiguate`, `rmepsilon`. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) # Threshold is set to semiring Zero (no pruning) if weight unspecified. cdef fst.WeightClass wc = _get_WeightClass_or_Zero(ifst.weight_type(), weight) cdef fst.DeterminizeType determinize_type_enum if not fst.GetDeterminizeType(tostring(det_type), addr(determinize_type_enum)): raise FstArgError("Unknown determinization type: {!r}".format(det_type)) cdef unique_ptr[fst.DeterminizeOptions] opts opts.reset(new fst.DeterminizeOptions(delta, wc, nstate, subsequential_label, determinize_type_enum, increment_subsequential_label)) fst.Determinize(deref(ifst._fst), tfst.get(), deref(opts)) return _init_MutableFst(tfst.release()) cpdef _MutableFst difference(_Fst ifst1, _Fst ifst2, compose_filter=b"auto", bool connect=True): """ difference(ifst1, ifst2, compose_filter="auto", connect=True) Constructively computes the difference of two FSTs. This operation computes the difference between two FSAs. Only strings that are in the first automaton but not in second are retained in the result. The first argument must be an acceptor; the second argument must be an unweighted, epsilon-free, deterministic acceptor. The output labels of the first transducer or the input labels of the second transducer must be sorted (or otherwise support appropriate matchers). Args: ifst1: The first input FST. ifst2: The second input FST. compose_filter: A string matching a known composition filter; one of: "alt_sequence", "auto", "match", "null", "sequence", "trivial". connect: Should the output FST be trimmed? Returns: An FST representing the difference of the FSTs. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst1.arc_type())) cdef unique_ptr[fst.ComposeOptions] opts opts.reset(new fst.ComposeOptions(connect, _get_compose_filter( tostring(compose_filter)))) fst.Difference(deref(ifst1._fst), deref(ifst2._fst), tfst.get(), deref(opts)) return _init_MutableFst(tfst.release()) cpdef _MutableFst disambiguate(_Fst ifst, float delta=fst.kDelta, int64 nstate=fst.kNoStateId, int64 subsequential_label=0, weight=None): """ disambiguate(ifst, delta=0.0009765625, nstate=NO_STATE_ID, subsequential_label=0, weight=None): Constructively disambiguates a weighted transducer. This operation disambiguates a weighted transducer. The result will be an equivalent FST that has the property that no two successful paths have the same input labeling. For this algorithm, epsilon transitions are treated as regular symbols (cf. `rmepsilon`). Args: ifst: The input FST. delta: Comparison/quantization delta. nstate: State number threshold. subsequential_label: Input label of arc corresponding to residual final output when producing a subsequential transducer. weight: A Weight or weight string indicating the desired weight threshold below which paths are pruned; if omitted, no paths are pruned. Returns: An equivalent disambiguated FST. See also: `determinize`, `rmepsilon`. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) # Threshold is set to semiring Zero (no pruning) if no weight is specified. cdef fst.WeightClass wc = _get_WeightClass_or_Zero(ifst.weight_type(), weight) cdef unique_ptr[fst.DisambiguateOptions] opts opts.reset(new fst.DisambiguateOptions(delta, wc, nstate, subsequential_label)) fst.Disambiguate(deref(ifst._fst), tfst.get(), deref(opts)) return _init_MutableFst(tfst.release()) cpdef _MutableFst epsnormalize(_Fst ifst, bool eps_norm_output=False): """ epsnormalize(ifst, eps_norm_output=False) Constructively epsilon-normalizes an FST. This operation creates an equivalent FST that is epsilon-normalized. An acceptor is epsilon-normalized if it it is epsilon-removed (cf. `rmepsilon`). A transducer is input epsilon-normalized if, in addition, along any path, all arcs with epsilon input labels follow all arcs with non-epsilon input labels. Output epsilon-normalized is defined similarly. The input FST must be functional. Args: ifst: The input FST. eps_norm_output: Should the FST be output epsilon-normalized? Returns: An equivalent epsilon-normalized FST. See also: `rmepsilon`. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) fst.EpsNormalize(deref(ifst._fst), tfst.get(), fst.EPS_NORM_OUTPUT if eps_norm_output else fst.EPS_NORM_INPUT) return _init_MutableFst(tfst.release()) cpdef bool equal(_Fst ifst1, _Fst ifst2, float delta=fst.kDelta): """ equal(ifst1, ifst2, delta=0.0009765625) Are two FSTs equal? This function tests whether two FSTs have the same states with the same numbering and the same transitions with the same labels and weights in the same order. Args: ifst1: The first input FST. ifst2: The second input FST. delta: Comparison/quantization delta. Returns: True if the FSTs satisfy the above condition, else False. See also: `equivalent`, `isomorphic`, `randequivalent`. """ return fst.Equal(deref(ifst1._fst), deref(ifst2._fst), delta) cpdef bool equivalent(_Fst ifst1, _Fst ifst2, float delta=fst.kDelta) except *: """ equivalent(ifst1, ifst2, delta=0.0009765625) Are the two acceptors equivalent? This operation tests whether two epsilon-free deterministic weighted acceptors are equivalent, that is if they accept the same strings with the same weights. Args: ifst1: The first input FST. ifst2: The second input FST. delta: Comparison/quantization delta. Returns: True if the FSTs satisfy the above condition, else False. See also: `equal`, `isomorphic`, `randequivalent`. """ return fst.Equivalent(deref(ifst1._fst), deref(ifst2._fst), delta) cpdef _MutableFst intersect(_Fst ifst1, _Fst ifst2, compose_filter=b"auto", bool connect=True): """ intersect(ifst1, ifst2, compose_filter="auto", connect=True) Constructively intersects two FSTs. This operation computes the intersection (Hadamard product) of two FSTs. Only strings that are in both automata are retained in the result. The two arguments must be acceptors. One of the arguments must be label-sorted (or otherwise support appropriate matchers). Args: ifst1: The first input FST. ifst2: The second input FST. compose_filter: A string matching a known composition filter; one of: "alt_sequence", "auto", "match", "null", "sequence", "trivial". connect: Should output be trimmed? Returns: An intersected FST. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst1.arc_type())) cdef unique_ptr[fst.ComposeOptions] opts opts.reset(new fst.ComposeOptions(connect, _get_compose_filter(tostring(compose_filter)))) fst.Intersect(deref(ifst1._fst), deref(ifst2._fst), tfst.get(), deref(opts)) return _init_MutableFst(tfst.release()) cpdef bool isomorphic(_Fst ifst1, _Fst ifst2, float delta=fst.kDelta): """ isomorphic(ifst1, ifst2, delta=0.0009765625) Are the two acceptors isomorphic? This operation determines if two transducers with a certain required determinism have the same states, irrespective of numbering, and the same transitions with the same labels and weights, irrespective of ordering. In other words, FSTs A, B are isomorphic if and only if the states of A can be renumbered and the transitions leaving each state reordered so the two are equal (according to the definition given in `equal`). Args: ifst1: The first input FST. ifst2: The second input FST. delta: Comparison/quantization delta. Returns: True if the two transducers satisfy the above condition, else False. See also: `equal`, `equivalent`, `randequivalent`. """ return fst.Isomorphic(deref(ifst1._fst), deref(ifst2._fst), delta) cpdef _MutableFst prune(_Fst ifst, float delta=fst.kDelta, int64 nstate=fst.kNoStateId, weight=None): """ prune(ifst, delta=0.0009765625, nstate=NO_STATE_ID, weight=None) Constructively removes paths with weights below a certain threshold. This operation deletes states and arcs in the input FST that do not belong to a successful path whose weight is no more (w.r.t the natural semiring order) than the threshold t \otimes-times the weight of the shortest path in the input FST. Weights must be commutative and have the path property. Args: ifst: The input FST. delta: Comparison/quantization delta. nstate: State number threshold. weight: A Weight or weight string indicating the desired weight threshold below which paths are pruned; if omitted, no paths are pruned. Returns: A pruned FST. See also: The destructive variant. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) cdef fst.WeightClass wc = _get_WeightClass_or_Zero(ifst.weight_type(), weight) fst.Prune(deref(ifst._fst), tfst.get(), wc, nstate, delta) return _init_MutableFst(tfst.release()) cpdef _MutableFst push(_Fst ifst, float delta=fst.kDelta, bool push_weights=False, bool push_labels=False, bool remove_common_affix=False, bool remove_total_weight=False, bool to_final=False): """ push(ifst, delta=0.0009765625, push_weights=False, push_labels=False, remove_common_affix=False, remove_total_weight=False, to_final=False) Constructively pushes weights/labels towards initial or final states. This operation produces an equivalent transducer by pushing the weights and/or the labels towards the initial state or toward the final states. When pushing weights towards the initial state, the sum of the weight of the outgoing transitions and final weight at any non-initial state is equal to 1 in the resulting machine. When pushing weights towards the final states, the sum of the weight of the incoming transitions at any state is equal to 1. Weights need to be left distributive when pushing towards the initial state and right distributive when pushing towards the final states. Pushing labels towards the initial state consists in minimizing at every state the length of the longest common prefix of the output labels of the outgoing paths. Pushing labels towards the final states consists in minimizing at every state the length of the longest common suffix of the output labels of the incoming paths. Args: ifst: The input FST. delta: Comparison/quantization delta. push_weights: Should weights be pushed? push_labels: Should labels be pushed? remove_common_affix: If pushing labels, should common prefix/suffix be removed? remove_total_weight: If pushing weights, should total weight be removed? to_final: Push towards final states? Returns: An equivalent pushed FST. See also: The destructive variant. """ # This is copied, almost verbatim, from ./fstpush.cc. cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) cdef uint32 flags = fst.GetPushFlags(push_weights, push_labels, remove_common_affix, remove_total_weight) fst.Push(deref(ifst._fst), tfst.get(), flags, fst.GetReweightType(to_final), delta) return _init_MutableFst(tfst.release()) cpdef bool randequivalent(_Fst ifst1, _Fst ifst2, int32 npath=1, float delta=fst.kDelta, time_t seed=0, select=b"uniform", int32 max_length=INT32_MAX) except *: """ randequivalent(ifst1, ifst2, npath=1, delta=0.0009765625, seed=0, select="uniform", max_length=2147483647) Are two acceptors stochastically equivalent? This operation tests whether two FSTs are equivalent by randomly generating paths alternatively in each of the two FSTs. For each randomly generated path, the algorithm computes for each of the two FSTs the sum of the weights of all the successful paths sharing the same input and output labels as the randomly generated path and checks that these two values are within `delta`. Args: ifst1: The first input FST. ifst2: The second input FST. npath: The number of random paths to generate. delta: Comparison/quantization delta. seed: An optional seed value for random path generation; if zero, the current time and process ID is used. select: A string matching a known random arc selection type; one of: "uniform", "log_prob", "fast_log_prob". max_length: The maximum length of each random path. Returns: True if the two transducers satisfy the above condition, else False. See also: `equal`, `equivalent`, `isomorphic`, `randgen`. """ cdef fst.RandArcSelection ras = _get_rand_arc_selection(tostring(select)) cdef unique_ptr[fst.RandGenOptions[fst.RandArcSelection]] opts # The three trailing options will be ignored by RandEquivalent. opts.reset(new fst.RandGenOptions[fst.RandArcSelection](ras, max_length, 1, False, False)) if seed == 0: seed = time(NULL) + getpid() return fst.RandEquivalent(deref(ifst1._fst), deref(ifst2._fst), npath, delta, seed, deref(opts)) cpdef _MutableFst randgen(_Fst ifst, int32 npath=1, time_t seed=0, select=b"uniform", int32 max_length=INT32_MAX, bool weighted=False, bool remove_total_weight=False): """ randgen(ifst, npath=1, seed=0, select="uniform", max_length=2147483647, weight=False, remove_total_weight=False) Randomly generate successful paths in an FST. This operation randomly generates a set of successful paths in the input FST. This relies on a mechanism for selecting arcs, specified using the `select` argument. The default selector, "uniform", randomly selects a transition using a uniform distribution. The "log_prob" selector randomly selects a transition w.r.t. the weights treated as negative log probabilities after normalizing for the total weight leaving the state. In all cases, finality is treated as a transition to a super-final state. Args: ifst: The input FST. npath: The number of random paths to generate. seed: An optional seed value for random path generation; if zero, the current time and process ID is used. select: A string matching a known random arc selection type; one of: "uniform", "log_prob", "fast_log_prob". max_length: The maximum length of each random path. weighted: Should the output be weighted by path count? remove_total_weight: Should the total weight be removed (ignored when `weighted` is False)? Returns: An FST containing one or more random paths. See also: `randequivalent`. """ cdef fst.RandArcSelection ras = _get_rand_arc_selection(tostring(select)) cdef unique_ptr[fst.RandGenOptions[fst.RandArcSelection]] opts opts.reset(new fst.RandGenOptions[fst.RandArcSelection](ras, max_length, npath, weighted, remove_total_weight)) cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) if seed == 0: seed = time(NULL) + getpid() fst.RandGen(deref(ifst._fst), tfst.get(), seed, deref(opts)) return _init_MutableFst(tfst.release()) cpdef _MutableFst replace(pairs, call_arc_labeling=b"input", return_arc_labeling=b"neither", bool epsilon_on_replace=False, int64 return_label=0): """ replace(pairs, call_arc_labeling="input", return_arc_labeling="neither", epsilon_on_replace=False, return_label=0) Recursively replaces arcs in the FST with other FST(s). This operation performs the dynamic replacement of arcs in one FST with another FST, allowing the definition of FSTs analogous to RTNs. It takes as input a set of pairs of a set of pairs formed by a non-terminal label and its corresponding FST, and a label identifying the root FST in that set. The resulting FST is obtained by taking the root FST and recursively replacing each arc having a nonterminal as output label by its corresponding FST. More precisely, an arc from state s to state d with (nonterminal) output label n in this FST is replaced by redirecting this "call" arc to the initial state of a copy F of the FST for n, and adding "return" arcs from each final state of F to d. Optional arguments control how the call and return arcs are labeled; by default, the only non-epsilon label is placed on the call arc. Args: pairs: An iterable of (nonterminal label, FST) pairs, where the former is an unsigned integer and the latter is an Fst instance. call_arc_labeling: A string indicating which call arc labels should be non-epsilon. One of: "input" (default), "output", "both", "neither". This value is set to "neither" if epsilon_on_replace is True. return_arc_labeling: A string indicating which return arc labels should be non-epsilon. One of: "input", "output", "both", "neither" (default). This value is set to "neither" if epsilon_on_replace is True. epsilon_on_replace: Should call and return arcs be epsilon arcs? If True, this effectively overrides call_arc_labeling and return_arc_labeling, setting both to "neither". return_label: The integer label for return arcs. Returns: An FST resulting from expanding the input RTN. """ cdef vector[fst.LabelFstClassPair] _pairs cdef int64 root_label cdef int64 label cdef _Fst ifst it = iter(pairs) (root_label, ifst) = next(it) _pairs.push_back(fst.LabelFstClassPair(root_label, ifst._fst.get())) cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) for (label, ifst) in it: _pairs.push_back(fst.LabelFstClassPair(label, ifst._fst.get())) cdef fst.ReplaceLabelType cal = _get_replace_label_type( tostring(call_arc_labeling), epsilon_on_replace) cdef fst.ReplaceLabelType ral = _get_replace_label_type( tostring(return_arc_labeling), epsilon_on_replace) cdef unique_ptr[fst.ReplaceOptions] opts opts.reset(new fst.ReplaceOptions(root_label, cal, ral, return_label)) fst.Replace(_pairs, tfst.get(), deref(opts)) return _init_MutableFst(tfst.release()) cpdef _MutableFst reverse(_Fst ifst, bool require_superinitial=True): """ reverse(ifst, require_superinitial=True) Constructively reverses an FST's transduction. This operation reverses an FST. If A transduces string x to y with weight a, then the reverse of A transduces the reverse of x to the reverse of y with weight a.Reverse(). (Typically, a = a.Reverse() and Arc = RevArc, e.g., TropicalWeight and LogWeight.) In general, e.g., when the weights only form a left or right semiring, the output arc type must match the input arc type. Args: ifst: The input FST. require_superinitial: Should a superinitial state be created? Returns: A reversed FST. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) fst.Reverse(deref(ifst._fst), tfst.get(), require_superinitial) return _init_MutableFst(tfst.release()) # Pure C++ helper for shortestdistance. cdef vector[fst.WeightClass] *_shortestdistance(_Fst ifst, float delta=fst.kShortestDelta, int64 nstate=fst.kNoStateId, queue_type=b"auto", bool reverse=False) except *: cdef unique_ptr[vector[fst.WeightClass]] distance distance.reset(new vector[fst.WeightClass]()) # For scoping reasons, these have to be declared here even though they may # not be used in all cases. cdef unique_ptr[fst.ShortestDistanceOptions] opts if reverse: # Only the simpler signature supports shortest distance to final states; # `nstate` and `queue_type` arguments are ignored. fst.ShortestDistance(deref(ifst._fst), distance.get(), True, delta) else: opts.reset(new fst.ShortestDistanceOptions( _get_queue_type(tostring(queue_type)), fst.ANY_ARC_FILTER, nstate, delta)) fst.ShortestDistance(deref(ifst._fst), distance.get(), deref(opts)) return distance.release() def shortestdistance(_Fst ifst, float delta=fst.kShortestDelta, int64 nstate=fst.kNoStateId, queue_type=b"auto", bool reverse=False): """ shortestdistance(ifst, delta=1e-6, nstate=NO_STATE_ID, queue_type="auto", reverse=False) Compute the shortest distance from the initial or final state. This operation computes the shortest distance from the initial state (when `reverse` is False) or from every state to the final state (when `reverse` is True). The shortest distance from p to q is the \otimes-sum of the weights of all the paths between p and q. The weights must be right (if `reverse` is False) or left (if `reverse` is True) distributive, and k-closed (i.e., 1 \otimes x \otimes x^2 \otimes ... \otimes x^{k + 1} = 1 \otimes x \otimes x^2 \otimes ... \otimes x^k; e.g., TropicalWeight). Args: ifst: The input FST. delta: Comparison/quantization delta. nstate: State number threshold (ignored if `reverse` is True). queue_type: A string matching a known queue type; one of: "auto", "fifo", "lifo", "shortest", "state", "top" (ignored if `reverse` is True). reverse: Should the reverse distance (from each state to the final state) be computed? Returns: A list of Weight objects representing the shortest distance for each state. """ cdef unique_ptr[vector[fst.WeightClass]] distance distance.reset(_shortestdistance(ifst, delta, nstate, queue_type, reverse)) cdef string weight_type = ifst.weight_type() return [Weight(weight_type, weight.ToString()) for weight in deref(distance)] cpdef _MutableFst shortestpath(_Fst ifst, float delta=fst.kShortestDelta, int32 nshortest=1, int64 nstate=fst.kNoStateId, queue_type=b"auto", bool unique=False, weight=None): """ shortestpath(ifst, delta=1e-6, nshortest=1, nstate=NO_STATE_ID, queue_type="auto", unique=False, weight=None) Construct an FST containing the shortest path(s) in the input FST. This operation produces an FST containing the n-shortest paths in the input FST. The n-shortest paths are the n-lowest weight paths w.r.t. the natural semiring order. The single path that can be read from the ith of at most n transitions leaving the initial state of the resulting FST is the ith shortest path. The weights need to be right distributive and have the path property. They also need to be left distributive as well for n-shortest with n > 1 (e.g., TropicalWeight). Args: ifst: The input FST. delta: Comparison/quantization delta. nshortest: The number of paths to return. nstate: State number threshold. queue_type: A string matching a known queue type; one of: "auto", "fifo", "lifo", "shortest", "state", "top". unique: Should the resulting FST only contain distinct paths? (Requires the input FST to be an acceptor; epsilons are treated as if they are regular symbols.) weight: A Weight or weight string indicating the desired weight threshold below which paths are pruned; if omitted, no paths are pruned. Returns: An FST containing the n-shortest paths. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) # Threshold is set to semiring Zero (no pruning) if no weight is specified. cdef fst.WeightClass wc = _get_WeightClass_or_Zero(ifst.weight_type(), weight) cdef unique_ptr[fst.ShortestPathOptions] opts opts.reset(new fst.ShortestPathOptions(_get_queue_type(tostring(queue_type)), nshortest, unique, delta, wc, nstate)) fst.ShortestPath(deref(ifst._fst), tfst.get(), deref(opts)) return _init_MutableFst(tfst.release()) cpdef _Fst statemap(_Fst ifst, map_type): """ state_map(ifst, map_type) Constructively applies a transform to all states. This operation transforms each state according to the requested map type. Note that currently, only one state-mapping operation is supported. Args: ifst: The input FST. map_type: A string matching a known mapping operation; one of: "arc_sum" (sum weights of identically-labeled multi-arcs), "arc_unique" (deletes non-unique identically-labeled multi-arcs). Returns: An FST with states remapped. Raises: FstArgError: Unknown map type. See also: `arcmap`. """ return _map(ifst, fst.kDelta, map_type, 1., None) cpdef _MutableFst synchronize(_Fst ifst): """ synchronize(ifst) Constructively synchronizes an FST. This operation synchronizes a transducer. The result will be an equivalent FST that has the property that during the traversal of a path, the delay is either zero or strictly increasing, where the delay is the difference between the number of non-epsilon output labels and input labels along the path. For the algorithm to terminate, the input transducer must have bounded delay, i.e., the delay of every cycle must be zero. Args: ifst: The input FST. Returns: An equivalent synchronized FST. """ cdef unique_ptr[fst.VectorFstClass] tfst tfst.reset(new fst.VectorFstClass(ifst.arc_type())) fst.Synchronize(deref(ifst._fst), tfst.get()) return _init_MutableFst(tfst.release()) ## Compiler. cdef class Compiler(object): """ Compiler(fst_type="vector", arc_type="standard", isymbols=None, osymbols=None, ssymbols=None, acceptor=False, keep_isymbols=False, keep_osymbols=False, keep_state_numbering=False, allow_negative_labels=False) Class used to compile FSTs from strings. This class is used to compile FSTs specified using the AT&T FSM library format described here: http://web.eecs.umich.edu/~radev/NLP-fall2015/resources/fsm_archive/fsm.5.html This is the same format used by the `fstcompile` executable. Compiler options (symbol tables, etc.) are set at construction time. compiler = fst.Compiler(isymbols=ascii_syms, osymbols=ascii_syms) Once constructed, Compiler instances behave like a file handle opened for writing: # /ba+/ compiler.write("0 1 50 50") compiler.write("1 2 49 49") compiler.write("2 2 49 49") compiler.write("2") The `compile` method returns an actual FST instance: sheep_machine = compiler.compile() Compilation flushes the internal buffer, so the compiler instance can be reused to compile new machines with the same symbol tables (etc.) Args: fst_type: A string indicating the container type for the compiled FST. arc_type: A string indicating the arc type for the compiled FST. isymbols: An optional SymbolTable used to label input symbols. osymbols: An optional SymbolTable used to label output symbols. ssymbols: An optional SymbolTable used to label states. acceptor: Should the FST be rendered in acceptor format if possible? keep_isymbols: Should the input symbol table be stored in the FST? keep_osymbols: Should the output symbol table be stored in the FST? keep_state_numbering: Should the state numbering be preserved? allow_negative_labels: Should negative labels be allowed? (Not recommended; may cause conflicts). """ def __cinit__(self, string fst_type=b"vector", string arc_type=b"standard", SymbolTable isymbols=None, SymbolTable osymbols=None, SymbolTable ssymbols=None, bool acceptor=False, bool keep_isymbols=False, bool keep_osymbols=False, bool keep_state_numbering=False, bool allow_negative_labels=False): self._sstrm.reset(new stringstream()) self._fst_type = tostring(fst_type) self._arc_type = tostring(arc_type) self._isymbols = NULL if isymbols is not None: self._isymbols = isymbols._table self._osymbols = NULL if osymbols is not None: self._osymbols = osymbols._table self._ssymbols = NULL if ssymbols is not None: self._ssymbols = ssymbols._table self._acceptor = acceptor self._keep_isymbols = keep_isymbols self._keep_osymbols = keep_osymbols self._keep_state_numbering = keep_state_numbering self._allow_negative_labels = allow_negative_labels cpdef _Fst compile(self): """ compile() Compiles the FST in the compiler string buffer. This method compiles the FST and returns the resulting machine. Returns: The FST described by the compiler string buffer. Raises: FstOpError: Compilation failed. """ cdef unique_ptr[fst.FstClass] tfst tfst.reset(fst.CompileFstInternal(deref(self._sstrm), "<pywrapfst>", self._fst_type, self._arc_type, self._isymbols, self._osymbols, self._ssymbols, self._acceptor, self._keep_isymbols, self._keep_osymbols, self._keep_state_numbering, self._allow_negative_labels)) self._sstrm.reset(new stringstream()) if tfst.get() == NULL: raise FstOpError("Compilation failed") return _init_XFst(tfst.release()) cpdef void write(self, expression): """ write(expression) Writes a string into the compiler string buffer. This method adds a line to the compiler string buffer. It is normally invoked using the right shift operator, like so: compiler = fst.Compiler() compiler.write("0 0 49 49") compiler.write("0") Args: expression: A string expression to add to compiler string buffer. """ deref(self._sstrm) << tostring(expression) ## FarReader and FarWriter. cdef class FarReader(object): """ (No constructor.) FAR ("Fst ARchive") reader object. This class is used to read a FAR from disk. FARs contain one or more FSTs (of the same arc type) indexed by a unique string key. To construct a FarReader object, use the `open` class method. Attributes: arc_type: A string indicating the arc type. far_type: A string indicating the FAR type. """ def __init__(self): raise FstDeletedConstructorError( "Cannot construct {}".format(self.__class__.__name__)) def __repr__(self): return "<{} FarReader at 0x{:x}>".format(self.far_type(), id(self)) @classmethod def open(cls, *filenames): """ FarReader.open(*filenames) Creates a FarReader object. This class method creates a FarReader given the string location of one or more FAR files on disk. Args: *filenames: The string location of one or more input FAR files. Returns: A new FarReader instance. Raises: FstIOError: Read failed. """ cdef vector[string] filename_strings for filename in filenames: filename_strings.push_back(tostring(filename)) cdef unique_ptr[fst.FarReaderClass] tfar tfar.reset(fst.FarReaderClass.Open(filename_strings)) if tfar.get() == NULL: raise FstIOError("Read failed: {!r}".format(filenames)) cdef FarReader result = FarReader.__new__(FarReader) result._reader.reset(tfar.release()) return result cpdef string arc_type(self): """ arc_type(self) Returns a string indicating the arc type. """ return self._reader.get().ArcType() cpdef bool done(self): """ done(self) Indicates whether the iterator is exhausted or not. Returns: True if the iterator is exhausted, False otherwise. """ return self._reader.get().Done() cpdef bool error(self): """ error(self) Indicates whether the FarReader has encountered an error. Returns: True if the FarReader is in an errorful state, False otherwise. """ return self._reader.get().Error() cpdef string far_type(self): return fst.GetFarTypeString(self._reader.get().Type()) cpdef bool find(self, key) except *: """ find(self, key) Sets the current position to the first entry greater than or equal to the key (a string) and indicates whether or not a match was found. Args: key: A string key. Returns: True if the key was found, False otherwise. """ return self._reader.get().Find(tostring(key)) cpdef _Fst get_fst(self): """ get_fst(self) Returns the FST at the current position. Returns: A copy of the FST at the current position. """ return _init_XFst(new fst.FstClass( deref(self._reader.get().GetFstClass()))) cpdef string get_key(self): """ get_key(self) Returns the string key at the current position. Returns: The string key at the current position. """ return self._reader.get().GetKey() cpdef void next(self): """ next(self) Advances the iterator. """ self._reader.get().Next() cpdef void reset(self): """ reset(self) Resets the iterator to the initial position. """ self._reader.get().Reset() def __getitem__(self, key): if self._reader.get().Find(tostring(key)): return self.get_fst() else: raise KeyError(key) cdef class FarWriter(object): """ (No constructor.) FAR ("Fst ARchive") writer object. This class is used to write FSTs (of the same arc type) to a FAR on disk. To construct a FarWriter, use the `create` class method. Note that the data is not guaranteed to flush to disk until the FarWriter is garbage-collected. If a FarWriter has been assigned to only one variable, then calling `del` on that variable should decrement the object's reference count from 1 to 0, triggering a flush to disk on the next GC cycle. Attributes: arc_type: A string indicating the arc type. far_type: A string indicating the FAR type. """ def __init__(self): raise FstDeletedConstructorError( "Cannot construct {}".format(self.__class__.__name__)) def __repr__(self): return "<{} FarWriter at 0x{:x}>".format(self.far_type(), id(self)) @classmethod def create(cls, filename, arc_type=b"standard", far_type=b"default"): """ FarWriter. Creates a FarWriter object. This class method creates a FarWriter given the desired output location, arc type, and FAR type. Args: filename: The string location for the output FAR files. arc_type: A string indicating the arc type. far_type: A string indicating the FAR type; one of: "fst", "stlist", "sttable", "sstable", "default". Returns: A new FarWriter instance. Raises: FstIOError: Read failed. """ cdef fst.FarType ft = fst.GetFarType(tostring(far_type)) cdef fst.FarWriterClass *tfar = fst.FarWriterClass.Create( tostring(filename), tostring(arc_type), ft) if tfar == NULL: raise FstIOError("Open failed: {!r}".format(filename)) cdef FarWriter result = FarWriter.__new__(FarWriter) result._writer.reset(tfar) return result # NB: Invoking this method may be dangerous: calling any other method on the # instance after this is invoked may result in a null dereference. cdef void close(self): self._writer.reset() cpdef void add(self, key, _Fst ifst) except *: """ add(self, key, ifst) Adds an FST to the FAR. This method adds an FST to the FAR which can be retrieved with the specified string key. Args: key: The string used to key the input FST. ifst: The FST to write to the FAR. Raises: FstArgError: Key out of order. FstOpError: Incompatible or invalid arc type. """ # Failure here results from passing an FST with a different arc type than # used by the FAR was initialized to use. if not self._writer.get().Add(tostring(key), deref(ifst._fst)): raise FstOpError("Incompatible or invalid arc type") # An error here usually indicates a key out of order. if self._writer.get().Error(): raise FstArgError("Key out of order") cpdef string arc_type(self): """ arc_type(self) Returns a string indicating the arc type. """ return self._writer.get().ArcType() cpdef bool error(self): """ error(self) Indicates whether the FarWriter has encountered an error. Returns: True if the FarWriter is in an errorful state, False otherwise. """ return self._writer.get().Error() cpdef string far_type(self): """ far_type(self) Returns a string indicating the FAR type. """ return fst.GetFarTypeString(self._writer.get().Type()) # Dictionary-like assignment. def __setitem__(self, key, _Fst fst): self.add(key, fst) ## Cleanup operations for module entrance and exit. # Masks fst_error_fatal flags while this module is running, returning to the # previous state upon module exit. cdef bool _fst_error_fatal_old = fst.FLAGS_fst_error_fatal fst.FLAGS_fst_error_fatal = False @atexit.register def _reset_fst_error_fatal(): fst.FLAGS_fst_error_fatal = _fst_error_fatal_old
0
coqui_public_repos/TTS/tests
coqui_public_repos/TTS/tests/vocoder_tests/test_parallel_wavegan_train.py
import glob import os import shutil from tests import get_device_id, get_tests_output_path, run_cli from TTS.vocoder.configs import ParallelWaveganConfig config_path = os.path.join(get_tests_output_path(), "test_vocoder_config.json") output_path = os.path.join(get_tests_output_path(), "train_outputs") config = ParallelWaveganConfig( batch_size=4, eval_batch_size=4, num_loader_workers=0, num_eval_loader_workers=0, run_eval=True, test_delay_epochs=-1, epochs=1, seq_len=2048, eval_split_size=1, print_step=1, print_eval=True, data_path="tests/data/ljspeech", output_path=output_path, ) config.audio.do_trim_silence = True config.audio.trim_db = 60 config.save_json(config_path) # train the model for one epoch command_train = f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_vocoder.py --config_path {config_path} " run_cli(command_train) # Find latest folder continue_path = max(glob.glob(os.path.join(output_path, "*/")), key=os.path.getmtime) # restore the model and continue training for one more epoch command_train = ( f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_vocoder.py --continue_path {continue_path} " ) run_cli(command_train) shutil.rmtree(continue_path)
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/extensions
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/extensions/linear/linear-tagger-fst.cc
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #include <fst/extensions/linear/linear-fst.h> #include <fst/register.h> using fst::LinearTaggerFst; using fst::StdArc; using fst::LogArc; REGISTER_FST(LinearTaggerFst, StdArc); REGISTER_FST(LinearTaggerFst, LogArc);
0
coqui_public_repos/STT-models/yoruba/itml
coqui_public_repos/STT-models/yoruba/itml/v0.1.0/alphabet.txt
' a b d e f g h i j k l m n o p r s t u w y ̀ ́ ṣ ẹ ọ
0
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core/providers/providers.h
// Copyright (c) Microsoft Corporation. All rights reserved. // Licensed under the MIT License. #pragma once namespace onnxruntime { class IExecutionProvider; struct IExecutionProviderFactory { virtual ~IExecutionProviderFactory() = default; virtual std::unique_ptr<IExecutionProvider> CreateProvider() = 0; }; } // namespace onnxruntime
0
coqui_public_repos/STT-models/kinyarwanda/digital-umuganda
coqui_public_repos/STT-models/kinyarwanda/digital-umuganda/v0.0.1/MODEL_CARD.md
# Model card for Kinyarwanda STT Jump to section: - [Model details](#model-details) - [Intended use](#intended-use) - [Performance Factors](#performance-factors) - [Metrics](#metrics) - [Training data](#training-data) - [Evaluation data](#evaluation-data) - [Ethical considerations](#ethical-considerations) - [Caveats and recommendations](#caveats-and-recommendations) ## Model details - Person or organization developing model: Originally released by [Digital Umuganda](https://digitalumuganda.com/). - Model date: Accessed from [Github](https://github.com/Digital-Umuganda/Deepspeech-Kinyarwanda/tree/master/jan-8-2021-best-kinya-deepspeech) on March 31, 2021 - Model type: `Speech-to-Text` - Model version: `v0.0.1` - Compatible with 🐸 STT version: `v0.9.3` - Code: [deepspeech-kinyarwanda](https://github.com/Digital-Umuganda/Deepspeech-Kinyarwanda) - License: MPL 2.0 - Citation details: `@misc{deepspeech-kinyarwanda, author = {Digital Umuganda}, title = {Kinyarwanda STT}, publisher = {Digital Umuganda}, journal = {Github}, howpublished = {\url{https://github.com/Digital-Umuganda/Deepspeech-Kinyarwanda}}, commit = {7dbf6705ee38d87138f3558a21f045c40b93f083} }` - Where to send questions or comments about the model: You can leave an issue on [`STT-model` issues](https://github.com/coqui-ai/STT-models/issues), open a new discussion on [`STT-model` discussions](https://github.com/coqui-ai/STT-models/discussions), or chat with us on [Gitter](https://gitter.im/coqui-ai/). ## Intended use Speech-to-Text for the [Kinyarwanda Language](https://en.wikipedia.org/wiki/Kinyarwanda_language) on 16kHz, mono-channel audio. ## Performance Factors Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data). ## Metrics STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk. #### Transcription Accuracy |Test Corpus|WER|CER| |-----------|---|---| |Common Voice|60.1\%|23.5\%| #### Real-Time Factor Real-Time Factor (RTF) is defined as `processing-time / length-of-audio`. The exact real-time factor of an STT model will depend on the hardware setup, so you may experience a different RTF. Recorded average RTF on laptop CPU: `.69` #### Model Size `model.pbmm`: 181M `model.tflite`: 46M ### Approaches to uncertainty and variability Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio. ## Training data Train on approximately 1,200 hours from the Common Voice corpus. ## Evaluation data Evaluated on the test set from the Common Voice corpus. ## Ethical considerations Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use. ### Demographic Bias You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue. ### Surveillance Speech-to-Text may be mis-used to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in may countries. You should not assume consent to record and analyze private speech. ## Caveats and recommendations Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data). In most applications, it is recommended that you [train your own language model](https://stt.readthedocs.io/en/latest/LANGUAGE_MODEL.html) to improve transcription accuracy on your speech data.
0
coqui_public_repos/TTS/TTS/vocoder
coqui_public_repos/TTS/TTS/vocoder/models/gan.py
from inspect import signature from typing import Dict, List, Tuple import numpy as np import torch from coqpit import Coqpit from torch import nn from torch.utils.data import DataLoader from torch.utils.data.distributed import DistributedSampler from trainer.trainer_utils import get_optimizer, get_scheduler from TTS.utils.audio import AudioProcessor from TTS.utils.io import load_fsspec from TTS.vocoder.datasets.gan_dataset import GANDataset from TTS.vocoder.layers.losses import DiscriminatorLoss, GeneratorLoss from TTS.vocoder.models import setup_discriminator, setup_generator from TTS.vocoder.models.base_vocoder import BaseVocoder from TTS.vocoder.utils.generic_utils import plot_results class GAN(BaseVocoder): def __init__(self, config: Coqpit, ap: AudioProcessor = None): """Wrap a generator and a discriminator network. It provides a compatible interface for the trainer. It also helps mixing and matching different generator and disciminator networks easily. To implement a new GAN models, you just need to define the generator and the discriminator networks, the rest is handled by the `GAN` class. Args: config (Coqpit): Model configuration. ap (AudioProcessor): 🐸TTS AudioProcessor instance. Defaults to None. Examples: Initializing the GAN model with HifiGAN generator and discriminator. >>> from TTS.vocoder.configs import HifiganConfig >>> config = HifiganConfig() >>> model = GAN(config) """ super().__init__(config) self.config = config self.model_g = setup_generator(config) self.model_d = setup_discriminator(config) self.train_disc = False # if False, train only the generator. self.y_hat_g = None # the last generator prediction to be passed onto the discriminator self.ap = ap def forward(self, x: torch.Tensor) -> torch.Tensor: """Run the generator's forward pass. Args: x (torch.Tensor): Input tensor. Returns: torch.Tensor: output of the GAN generator network. """ return self.model_g.forward(x) def inference(self, x: torch.Tensor) -> torch.Tensor: """Run the generator's inference pass. Args: x (torch.Tensor): Input tensor. Returns: torch.Tensor: output of the GAN generator network. """ return self.model_g.inference(x) def train_step(self, batch: Dict, criterion: Dict, optimizer_idx: int) -> Tuple[Dict, Dict]: """Compute model outputs and the loss values. `optimizer_idx` selects the generator or the discriminator for network on the current pass. Args: batch (Dict): Batch of samples returned by the dataloader. criterion (Dict): Criterion used to compute the losses. optimizer_idx (int): ID of the optimizer in use on the current pass. Raises: ValueError: `optimizer_idx` is an unexpected value. Returns: Tuple[Dict, Dict]: model outputs and the computed loss values. """ outputs = {} loss_dict = {} x = batch["input"] y = batch["waveform"] if optimizer_idx not in [0, 1]: raise ValueError(" [!] Unexpected `optimizer_idx`.") if optimizer_idx == 0: # DISCRIMINATOR optimization # generator pass y_hat = self.model_g(x)[:, :, : y.size(2)] # cache for generator loss # pylint: disable=W0201 self.y_hat_g = y_hat self.y_hat_sub = None self.y_sub_g = None # PQMF formatting if y_hat.shape[1] > 1: self.y_hat_sub = y_hat y_hat = self.model_g.pqmf_synthesis(y_hat) self.y_hat_g = y_hat # save for generator loss self.y_sub_g = self.model_g.pqmf_analysis(y) scores_fake, feats_fake, feats_real = None, None, None if self.train_disc: # use different samples for G and D trainings if self.config.diff_samples_for_G_and_D: x_d = batch["input_disc"] y_d = batch["waveform_disc"] # use a different sample than generator with torch.no_grad(): y_hat = self.model_g(x_d) # PQMF formatting if y_hat.shape[1] > 1: y_hat = self.model_g.pqmf_synthesis(y_hat) else: # use the same samples as generator x_d = x.clone() y_d = y.clone() y_hat = self.y_hat_g # run D with or without cond. features if len(signature(self.model_d.forward).parameters) == 2: D_out_fake = self.model_d(y_hat.detach().clone(), x_d) D_out_real = self.model_d(y_d, x_d) else: D_out_fake = self.model_d(y_hat.detach()) D_out_real = self.model_d(y_d) # format D outputs if isinstance(D_out_fake, tuple): # self.model_d returns scores and features scores_fake, feats_fake = D_out_fake if D_out_real is None: scores_real, feats_real = None, None else: scores_real, feats_real = D_out_real else: # model D returns only scores scores_fake = D_out_fake scores_real = D_out_real # compute losses loss_dict = criterion[optimizer_idx](scores_fake, scores_real) outputs = {"model_outputs": y_hat} if optimizer_idx == 1: # GENERATOR loss scores_fake, feats_fake, feats_real = None, None, None if self.train_disc: if len(signature(self.model_d.forward).parameters) == 2: D_out_fake = self.model_d(self.y_hat_g, x) else: D_out_fake = self.model_d(self.y_hat_g) D_out_real = None if self.config.use_feat_match_loss: with torch.no_grad(): D_out_real = self.model_d(y) # format D outputs if isinstance(D_out_fake, tuple): scores_fake, feats_fake = D_out_fake if D_out_real is None: feats_real = None else: _, feats_real = D_out_real else: scores_fake = D_out_fake feats_fake, feats_real = None, None # compute losses loss_dict = criterion[optimizer_idx]( self.y_hat_g, y, scores_fake, feats_fake, feats_real, self.y_hat_sub, self.y_sub_g ) outputs = {"model_outputs": self.y_hat_g} return outputs, loss_dict def _log(self, name: str, ap: AudioProcessor, batch: Dict, outputs: Dict) -> Tuple[Dict, Dict]: """Logging shared by the training and evaluation. Args: name (str): Name of the run. `train` or `eval`, ap (AudioProcessor): Audio processor used in training. batch (Dict): Batch used in the last train/eval step. outputs (Dict): Model outputs from the last train/eval step. Returns: Tuple[Dict, Dict]: log figures and audio samples. """ y_hat = outputs[0]["model_outputs"] if self.train_disc else outputs[1]["model_outputs"] y = batch["waveform"] figures = plot_results(y_hat, y, ap, name) sample_voice = y_hat[0].squeeze(0).detach().cpu().numpy() audios = {f"{name}/audio": sample_voice} return figures, audios def train_log( self, batch: Dict, outputs: Dict, logger: "Logger", assets: Dict, steps: int # pylint: disable=unused-argument ) -> Tuple[Dict, np.ndarray]: """Call `_log()` for training.""" figures, audios = self._log("eval", self.ap, batch, outputs) logger.eval_figures(steps, figures) logger.eval_audios(steps, audios, self.ap.sample_rate) @torch.no_grad() def eval_step(self, batch: Dict, criterion: nn.Module, optimizer_idx: int) -> Tuple[Dict, Dict]: """Call `train_step()` with `no_grad()`""" self.train_disc = True # Avoid a bug in the Training with the missing discriminator loss return self.train_step(batch, criterion, optimizer_idx) def eval_log( self, batch: Dict, outputs: Dict, logger: "Logger", assets: Dict, steps: int # pylint: disable=unused-argument ) -> Tuple[Dict, np.ndarray]: """Call `_log()` for evaluation.""" figures, audios = self._log("eval", self.ap, batch, outputs) logger.eval_figures(steps, figures) logger.eval_audios(steps, audios, self.ap.sample_rate) def load_checkpoint( self, config: Coqpit, checkpoint_path: str, eval: bool = False, # pylint: disable=unused-argument, redefined-builtin cache: bool = False, ) -> None: """Load a GAN checkpoint and initialize model parameters. Args: config (Coqpit): Model config. checkpoint_path (str): Checkpoint file path. eval (bool, optional): If true, load the model for inference. If falseDefaults to False. """ state = load_fsspec(checkpoint_path, map_location=torch.device("cpu"), cache=cache) # band-aid for older than v0.0.15 GAN models if "model_disc" in state: self.model_g.load_checkpoint(config, checkpoint_path, eval) else: self.load_state_dict(state["model"]) if eval: self.model_d = None if hasattr(self.model_g, "remove_weight_norm"): self.model_g.remove_weight_norm() def on_train_step_start(self, trainer) -> None: """Enable the discriminator training based on `steps_to_start_discriminator` Args: trainer (Trainer): Trainer object. """ self.train_disc = trainer.total_steps_done >= self.config.steps_to_start_discriminator def get_optimizer(self) -> List: """Initiate and return the GAN optimizers based on the config parameters. It returnes 2 optimizers in a list. First one is for the generator and the second one is for the discriminator. Returns: List: optimizers. """ optimizer1 = get_optimizer( self.config.optimizer, self.config.optimizer_params, self.config.lr_gen, self.model_g ) optimizer2 = get_optimizer( self.config.optimizer, self.config.optimizer_params, self.config.lr_disc, self.model_d ) return [optimizer2, optimizer1] def get_lr(self) -> List: """Set the initial learning rates for each optimizer. Returns: List: learning rates for each optimizer. """ return [self.config.lr_disc, self.config.lr_gen] def get_scheduler(self, optimizer) -> List: """Set the schedulers for each optimizer. Args: optimizer (List[`torch.optim.Optimizer`]): List of optimizers. Returns: List: Schedulers, one for each optimizer. """ scheduler1 = get_scheduler(self.config.lr_scheduler_gen, self.config.lr_scheduler_gen_params, optimizer[0]) scheduler2 = get_scheduler(self.config.lr_scheduler_disc, self.config.lr_scheduler_disc_params, optimizer[1]) return [scheduler2, scheduler1] @staticmethod def format_batch(batch: List) -> Dict: """Format the batch for training. Args: batch (List): Batch out of the dataloader. Returns: Dict: formatted model inputs. """ if isinstance(batch[0], list): x_G, y_G = batch[0] x_D, y_D = batch[1] return {"input": x_G, "waveform": y_G, "input_disc": x_D, "waveform_disc": y_D} x, y = batch return {"input": x, "waveform": y} def get_data_loader( # pylint: disable=no-self-use, unused-argument self, config: Coqpit, assets: Dict, is_eval: True, samples: List, verbose: bool, num_gpus: int, rank: int = None, # pylint: disable=unused-argument ): """Initiate and return the GAN dataloader. Args: config (Coqpit): Model config. ap (AudioProcessor): Audio processor. is_eval (True): Set the dataloader for evaluation if true. samples (List): Data samples. verbose (bool): Log information if true. num_gpus (int): Number of GPUs in use. rank (int): Rank of the current GPU. Defaults to None. Returns: DataLoader: Torch dataloader. """ dataset = GANDataset( ap=self.ap, items=samples, seq_len=config.seq_len, hop_len=self.ap.hop_length, pad_short=config.pad_short, conv_pad=config.conv_pad, return_pairs=config.diff_samples_for_G_and_D if "diff_samples_for_G_and_D" in config else False, is_training=not is_eval, return_segments=not is_eval, use_noise_augment=config.use_noise_augment, use_cache=config.use_cache, verbose=verbose, ) dataset.shuffle_mapping() sampler = DistributedSampler(dataset, shuffle=True) if num_gpus > 1 else None loader = DataLoader( dataset, batch_size=1 if is_eval else config.batch_size, shuffle=num_gpus == 0, drop_last=False, sampler=sampler, num_workers=config.num_eval_loader_workers if is_eval else config.num_loader_workers, pin_memory=False, ) return loader def get_criterion(self): """Return criterions for the optimizers""" return [DiscriminatorLoss(self.config), GeneratorLoss(self.config)] @staticmethod def init_from_config(config: Coqpit, verbose=True) -> "GAN": ap = AudioProcessor.init_from_config(config, verbose=verbose) return GAN(config, ap=ap)
0
coqui_public_repos/inference-engine/third_party/kenlm
coqui_public_repos/inference-engine/third_party/kenlm/lm/sizes.cc
#include "lm/sizes.hh" #include "lm/model.hh" #include "util/file_piece.hh" #include <vector> #include <iomanip> namespace lm { namespace ngram { void ShowSizes(const std::vector<uint64_t> &counts, const lm::ngram::Config &config) { uint64_t sizes[6]; sizes[0] = ProbingModel::Size(counts, config); sizes[1] = RestProbingModel::Size(counts, config); sizes[2] = TrieModel::Size(counts, config); sizes[3] = QuantTrieModel::Size(counts, config); sizes[4] = ArrayTrieModel::Size(counts, config); sizes[5] = QuantArrayTrieModel::Size(counts, config); uint64_t max_length = *std::max_element(sizes, sizes + sizeof(sizes) / sizeof(uint64_t)); uint64_t min_length = *std::min_element(sizes, sizes + sizeof(sizes) / sizeof(uint64_t)); uint64_t divide; char prefix; if (min_length < (1 << 10) * 10) { prefix = ' '; divide = 1; } else if (min_length < (1 << 20) * 10) { prefix = 'k'; divide = 1 << 10; } else if (min_length < (1ULL << 30) * 10) { prefix = 'M'; divide = 1 << 20; } else { prefix = 'G'; divide = 1 << 30; } long int length = std::max<long int>(2, static_cast<long int>(ceil(log10((double) max_length / divide)))); std::cerr << "Memory estimate for binary LM:\ntype "; // right align bytes. for (long int i = 0; i < length - 2; ++i) std::cerr << ' '; std::cerr << prefix << "B\n" "probing " << std::setw(length) << (sizes[0] / divide) << " assuming -p " << config.probing_multiplier << "\n" "probing " << std::setw(length) << (sizes[1] / divide) << " assuming -r models -p " << config.probing_multiplier << "\n" "trie " << std::setw(length) << (sizes[2] / divide) << " without quantization\n" "trie " << std::setw(length) << (sizes[3] / divide) << " assuming -q " << (unsigned)config.prob_bits << " -b " << (unsigned)config.backoff_bits << " quantization \n" "trie " << std::setw(length) << (sizes[4] / divide) << " assuming -a " << (unsigned)config.pointer_bhiksha_bits << " array pointer compression\n" "trie " << std::setw(length) << (sizes[5] / divide) << " assuming -a " << (unsigned)config.pointer_bhiksha_bits << " -q " << (unsigned)config.prob_bits << " -b " << (unsigned)config.backoff_bits<< " array pointer compression and quantization\n"; } void ShowSizes(const std::vector<uint64_t> &counts) { lm::ngram::Config config; ShowSizes(counts, config); } void ShowSizes(const char *file, const lm::ngram::Config &config) { std::vector<uint64_t> counts; util::FilePiece f(file); lm::ReadARPACounts(f, counts); ShowSizes(counts, config); } }} //namespaces
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/kenlm_win-amd64-cpu-opt.yml.DISABLED
build: template_file: generic_tc_caching-win-opt-base.tyml cache: artifact_url: ${system.kenlm.win_amd64_cpu.url} artifact_namespace: ${system.kenlm.win_amd64_cpu.namespace} scripts: setup: "taskcluster/kenlm_tc-setup.sh --windows-amd64" build: "taskcluster/kenlm_tc-build.sh --windows-amd64" package: "taskcluster/kenlm_tc-package.sh" metadata: name: "KenLM Windows AMD64 CPU" description: "Building KenLM for Windows/AMD64, CPU only, optimized version"
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst/extensions
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst/extensions/linear/loglinear-apply.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #ifndef FST_EXTENSIONS_LINEAR_LOGLINEAR_APPLY_H_ #define FST_EXTENSIONS_LINEAR_LOGLINEAR_APPLY_H_ #include <fst/compat.h> #include <fst/arc.h> #include <fst/arc-map.h> #include <fst/compose.h> #include <fst/determinize.h> #include <fst/float-weight.h> #include <fst/fst.h> #include <fst/minimize.h> #include <fst/mutable-fst.h> #include <fst/project.h> #include <fst/rmepsilon.h> #include <fst/vector-fst.h> namespace fst { // Applies a FST model as a discriminative model to weighted input // `ifst`. `A` is an arc type with tropical weight of all the // input/output FSTs. // // In general, consider `ifst` an unnormalized probability // distribution between its input X and output Y, P(X, Y); and `lfst` // a group of unnormalized probability distributions of all its output // Z for every input Y, Q(Z|Y). `normalize` controls whether Q is // normalized for every Y before chaining with P(X, Y). I.e., for a // path (X, Y, Z) in `ofst` (where Y is hidden), // // - When `normalize` is true, its weight is P(X, Y) Q(Z|Y) / sum_z Q(z|Y); // - When `normalize` is false, its weight is P(X, Y) Q(Z|Y). template <class A> void LogLinearApply(const Fst<A> &ifst, const Fst<A> &lfst, MutableFst<A> *ofst, bool normalize = true) { LogLinearApply<A, LogArc>(ifst, lfst, ofst, normalize); } // This version gives finer control over the arc type (`B`) to be used // in normalization. `B` is an arc type with log weight (e.g. `LogArc` // or `Log64Arc`). template <class A, class B> void LogLinearApply(const Fst<A> &ifst, const Fst<A> &lfst, MutableFst<A> *ofst, bool normalize = true) { if (normalize) { VectorFst<A> unnormalized_ofst, rescored_ifsa; Compose(ifst, lfst, &unnormalized_ofst); { VectorFst<A> tropical_ifsa(unnormalized_ofst); Project(&tropical_ifsa, PROJECT_INPUT); { VectorFst<B> minimal_log_ifsa; { VectorFst<B> log_ifsa; ArcMap(tropical_ifsa, &log_ifsa, WeightConvertMapper<A, B>()); RmEpsilon(&log_ifsa); Determinize(log_ifsa, &minimal_log_ifsa); } Minimize(&minimal_log_ifsa); ArcMap(&minimal_log_ifsa, InvertWeightMapper<B>()); ArcMap(minimal_log_ifsa, &tropical_ifsa, WeightConvertMapper<B, A>()); } ArcSort(&tropical_ifsa, OLabelCompare<A>()); Compose(tropical_ifsa, ifst, &rescored_ifsa); } ArcSort(&rescored_ifsa, OLabelCompare<A>()); Compose(rescored_ifsa, unnormalized_ofst, ofst); } else { Compose(ifst, lfst, ofst); } } } // namespace fst #endif // FST_EXTENSIONS_LINEAR_LOGLINEAR_APPLY_H_
0
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/lib/Makefile.in
# Makefile.in generated by automake 1.15.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2017 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = { \ if test -z '$(MAKELEVEL)'; then \ false; \ elif test -n '$(MAKE_HOST)'; then \ true; \ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \ true; \ else \ false; \ fi; \ } am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ subdir = src/lib ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/ac_python_devel.m4 \ $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(am__DIST_COMMON) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h \ $(top_builddir)/src/include/fst/config.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(libdir)" LTLIBRARIES = $(lib_LTLIBRARIES) am__DEPENDENCIES_1 = libfst_la_DEPENDENCIES = $(am__DEPENDENCIES_1) am_libfst_la_OBJECTS = compat.lo flags.lo fst.lo fst-types.lo \ mapped-file.lo properties.lo symbol-table.lo \ symbol-table-ops.lo weight.lo util.lo libfst_la_OBJECTS = $(am_libfst_la_OBJECTS) AM_V_lt = $(am__v_lt_@AM_V@) am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@) am__v_lt_0 = --silent am__v_lt_1 = libfst_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CXX $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CXXLD) $(AM_CXXFLAGS) \ $(CXXFLAGS) $(libfst_la_LDFLAGS) $(LDFLAGS) -o $@ AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = DEFAULT_INCLUDES = depcomp = $(SHELL) $(top_srcdir)/depcomp am__depfiles_maybe = depfiles am__mv = mv -f CXXCOMPILE = $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \ $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) LTCXXCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CXX $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=compile $(CXX) $(DEFS) \ $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ $(AM_CXXFLAGS) $(CXXFLAGS) AM_V_CXX = $(am__v_CXX_@AM_V@) am__v_CXX_ = $(am__v_CXX_@AM_DEFAULT_V@) am__v_CXX_0 = @echo " CXX " $@; am__v_CXX_1 = CXXLD = $(CXX) CXXLINK = $(LIBTOOL) $(AM_V_lt) --tag=CXX $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CXXLD) $(AM_CXXFLAGS) \ $(CXXFLAGS) $(AM_LDFLAGS) $(LDFLAGS) -o $@ AM_V_CXXLD = $(am__v_CXXLD_@AM_V@) am__v_CXXLD_ = $(am__v_CXXLD_@AM_DEFAULT_V@) am__v_CXXLD_0 = @echo " CXXLD " $@; am__v_CXXLD_1 = SOURCES = $(libfst_la_SOURCES) DIST_SOURCES = $(libfst_la_SOURCES) am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags am__DIST_COMMON = $(srcdir)/Makefile.in $(top_srcdir)/depcomp DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ GREP = @GREP@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PYTHON = @PYTHON@ PYTHON_CPPFLAGS = @PYTHON_CPPFLAGS@ PYTHON_EXEC_PREFIX = @PYTHON_EXEC_PREFIX@ PYTHON_EXTRA_LDFLAGS = @PYTHON_EXTRA_LDFLAGS@ PYTHON_EXTRA_LIBS = @PYTHON_EXTRA_LIBS@ PYTHON_LDFLAGS = @PYTHON_LDFLAGS@ PYTHON_PLATFORM = @PYTHON_PLATFORM@ PYTHON_PREFIX = @PYTHON_PREFIX@ PYTHON_SITE_PKG = @PYTHON_SITE_PKG@ PYTHON_VERSION = @PYTHON_VERSION@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ libfstdir = @libfstdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ pkgpyexecdir = @pkgpyexecdir@ pkgpythondir = @pkgpythondir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ pyexecdir = @pyexecdir@ pythondir = @pythondir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AM_CPPFLAGS = -I$(srcdir)/../include $(ICU_CPPFLAGS) lib_LTLIBRARIES = libfst.la libfst_la_SOURCES = compat.cc flags.cc fst.cc fst-types.cc mapped-file.cc \ properties.cc symbol-table.cc symbol-table-ops.cc \ weight.cc util.cc libfst_la_LDFLAGS = -version-info 13:0:0 libfst_la_LIBADD = $(DL_LIBS) all: all-am .SUFFIXES: .SUFFIXES: .cc .lo .o .obj $(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign src/lib/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign src/lib/Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): install-libLTLIBRARIES: $(lib_LTLIBRARIES) @$(NORMAL_INSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ list2=; for p in $$list; do \ if test -f $$p; then \ list2="$$list2 $$p"; \ else :; fi; \ done; \ test -z "$$list2" || { \ echo " $(MKDIR_P) '$(DESTDIR)$(libdir)'"; \ $(MKDIR_P) "$(DESTDIR)$(libdir)" || exit 1; \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(libdir)'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(libdir)"; \ } uninstall-libLTLIBRARIES: @$(NORMAL_UNINSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ for p in $$list; do \ $(am__strip_dir) \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(libdir)/$$f'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(libdir)/$$f"; \ done clean-libLTLIBRARIES: -test -z "$(lib_LTLIBRARIES)" || rm -f $(lib_LTLIBRARIES) @list='$(lib_LTLIBRARIES)'; \ locs=`for p in $$list; do echo $$p; done | \ sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \ sort -u`; \ test -z "$$locs" || { \ echo rm -f $${locs}; \ rm -f $${locs}; \ } libfst.la: $(libfst_la_OBJECTS) $(libfst_la_DEPENDENCIES) $(EXTRA_libfst_la_DEPENDENCIES) $(AM_V_CXXLD)$(libfst_la_LINK) -rpath $(libdir) $(libfst_la_OBJECTS) $(libfst_la_LIBADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/compat.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/flags.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/fst-types.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/fst.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/mapped-file.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/properties.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/symbol-table-ops.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/symbol-table.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/util.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/weight.Plo@am__quote@ .cc.o: @am__fastdepCXX_TRUE@ $(AM_V_CXX)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.o$$||'`;\ @am__fastdepCXX_TRUE@ $(CXXCOMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\ @am__fastdepCXX_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ $(AM_V_CXX)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(AM_V_CXX@am__nodep@)$(CXXCOMPILE) -c -o $@ $< .cc.obj: @am__fastdepCXX_TRUE@ $(AM_V_CXX)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.obj$$||'`;\ @am__fastdepCXX_TRUE@ $(CXXCOMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ `$(CYGPATH_W) '$<'` &&\ @am__fastdepCXX_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ $(AM_V_CXX)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(AM_V_CXX@am__nodep@)$(CXXCOMPILE) -c -o $@ `$(CYGPATH_W) '$<'` .cc.lo: @am__fastdepCXX_TRUE@ $(AM_V_CXX)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.lo$$||'`;\ @am__fastdepCXX_TRUE@ $(LTCXXCOMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\ @am__fastdepCXX_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Plo @AMDEP_TRUE@@am__fastdepCXX_FALSE@ $(AM_V_CXX)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(AM_V_CXX@am__nodep@)$(LTCXXCOMPILE) -c -o $@ $< mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-am TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-am CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-am cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(LTLIBRARIES) installdirs: for dir in "$(DESTDIR)$(libdir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-am clean-am: clean-generic clean-libLTLIBRARIES clean-libtool \ mostlyclean-am distclean: distclean-am -rm -rf ./$(DEPDIR) -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-libLTLIBRARIES install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -rf ./$(DEPDIR) -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-libLTLIBRARIES .MAKE: install-am install-strip .PHONY: CTAGS GTAGS TAGS all all-am check check-am clean clean-generic \ clean-libLTLIBRARIES clean-libtool cscopelist-am ctags \ ctags-am distclean distclean-compile distclean-generic \ distclean-libtool distclean-tags distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-libLTLIBRARIES install-man install-pdf \ install-pdf-am install-ps install-ps-am install-strip \ installcheck installcheck-am installdirs maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-compile \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags tags-am uninstall uninstall-am uninstall-libLTLIBRARIES .PRECIOUS: Makefile # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT:
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst/topsort.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. // // Topological sort of FSTs. #ifndef FST_TOPSORT_H_ #define FST_TOPSORT_H_ #include <memory> #include <vector> #include <fst/dfs-visit.h> #include <fst/fst.h> #include <fst/statesort.h> namespace fst { // DFS visitor class to return topological ordering. template <class Arc> class TopOrderVisitor { public: using StateId = typename Arc::StateId; // If acyclic, order[i] gives the topological position of StateId i; // otherwise it is unchanged. acyclic_ will be true iff the FST has no // cycles. The caller retains ownership of the state order vector. TopOrderVisitor(std::vector<StateId> *order, bool *acyclic) : order_(order), acyclic_(acyclic) {} void InitVisit(const Fst<Arc> &fst) { finish_.reset(new std::vector<StateId>()); *acyclic_ = true; } constexpr bool InitState(StateId, StateId) const { return true; } constexpr bool TreeArc(StateId, const Arc &) const { return true; } bool BackArc(StateId, const Arc &) { return (*acyclic_ = false); } constexpr bool ForwardOrCrossArc(StateId, const Arc &) const { return true; } void FinishState(StateId s, StateId, const Arc *) { finish_->push_back(s); } void FinishVisit() { if (*acyclic_) { order_->clear(); for (StateId s = 0; s < finish_->size(); ++s) { order_->push_back(kNoStateId); } for (StateId s = 0; s < finish_->size(); ++s) { (*order_)[(*finish_)[finish_->size() - s - 1]] = s; } } finish_.reset(); } private: std::vector<StateId> *order_; bool *acyclic_; // States in finish-time order. std::unique_ptr<std::vector<StateId>> finish_; }; // Topologically sorts its input if acyclic, modifying it. Otherwise, the input // is unchanged. When sorted, all transitions are from lower to higher state // IDs. // // Complexity: // // Time: O(V + E) // Space: O(V + E) // // where V is the number of states and E is the number of arcs. template <class Arc> bool TopSort(MutableFst<Arc> *fst) { std::vector<typename Arc::StateId> order; bool acyclic; TopOrderVisitor<Arc> top_order_visitor(&order, &acyclic); DfsVisit(*fst, &top_order_visitor); if (acyclic) { StateSort(fst, order); fst->SetProperties(kAcyclic | kInitialAcyclic | kTopSorted, kAcyclic | kInitialAcyclic | kTopSorted); } else { fst->SetProperties(kCyclic | kNotTopSorted, kCyclic | kNotTopSorted); } return acyclic; } } // namespace fst #endif // FST_TOPSORT_H_
0
coqui_public_repos/STT
coqui_public_repos/STT/native_client/generate_scorer_package.cpp
#include <string> #include <vector> #include <fstream> #include <unordered_set> #include <iostream> using namespace std; #include "absl/types/optional.h" #include "boost/program_options.hpp" #include "ctcdecode/decoder_utils.h" #include "ctcdecode/scorer.h" #include "alphabet.h" #include "coqui-stt.h" namespace po = boost::program_options; int create_package(absl::optional<string> checkpoint_path, string lm_path, string vocab_path, string package_path, absl::optional<bool> force_bytes_output_mode, float default_alpha, float default_beta) { // Read vocabulary unordered_set<string> words; bool vocab_looks_char_based = true; ifstream fin(vocab_path); if (!fin) { cerr << "Invalid vocabulary file " << vocab_path << "\n"; return 1; } string word; while (fin >> word) { words.insert(word); if (get_utf8_str_len(word) > 1) { vocab_looks_char_based = false; } } cerr << words.size() << " unique words read from vocabulary file.\n" << (vocab_looks_char_based ? "Looks" : "Doesn't look") << " like a character based (Bytes Are All You Need) model.\n"; if (!force_bytes_output_mode.has_value()) { force_bytes_output_mode = vocab_looks_char_based; cerr << "--force_bytes_output_mode was not specified, using value " << "infered from vocabulary contents: " << (vocab_looks_char_based ? "true" : "false") << "\n"; } if (!force_bytes_output_mode.value() && !checkpoint_path.has_value()) { cerr << "No --checkpoint path specified, not using bytes output mode, can't continue." << "\nCheckpoint path must contain an alphabet." << "\nStart by creating an alphabet for your models using coqui_stt_training.util.check_characters if needed.\n" << "\n python -m coqui_stt_training.util.check_characters \\" << "\n --csv-files ... \\" << "\n --alphabet-format | grep -v '^#' | sort -n > models/alphabet.txt\n" << "\nThis will create an alphabet models/alphabet.txt.\n" << "Now rerun this script by giving models/ as the checkpoint path.\n" << "\n generate_scorer_package \\" << "\n --checkpoint models/ \\" << "\n ...\n"; return 1; } Scorer scorer; if (force_bytes_output_mode.value()) { scorer.set_alphabet(UTF8Alphabet()); } else { std::string alphabet_path = checkpoint_path.value() + "/alphabet.txt"; Alphabet alphabet; if (alphabet.init(alphabet_path.c_str()) != 0) { cerr << "Alphabet initialization failed (missing file?) - loading from path: " << alphabet_path << "\n"; return 1; } scorer.set_alphabet(alphabet); } scorer.set_utf8_mode(force_bytes_output_mode.value()); scorer.reset_params(default_alpha, default_beta); int err = scorer.load_lm_filepath(lm_path); if (err != STT_ERR_SCORER_NO_TRIE) { cerr << "Error loading language model file: " << (err == STT_ERR_SCORER_UNREADABLE ? "Can't open binary LM file." : STT_ErrorCodeToErrorMessage(err)) << "\n"; return 1; } scorer.fill_dictionary(words); // Copy LM file to final package file destination { ifstream lm_src(lm_path, std::ios::binary); ofstream package_dest(package_path, std::ios::binary); package_dest << lm_src.rdbuf(); } // Save dictionary to package file, appending instead of overwriting if (!scorer.save_dictionary(package_path, true)) { cerr << "Error when saving package in " << package_path << ".\n"; return 1; } cerr << "Package created in " << package_path << ".\n"; return 0; } int main(int argc, char** argv) { po::options_description desc("Options"); desc.add_options() ("help", "show help message") ("checkpoint", po::value<string>(), "Path to a checkpoint directory " "corresponding to the model this scorer will be used with. The " "alphabet will be loaded from an alphabet.txt file in the " "checkpoint directory. Words with characters not in the alphabet " "will not be included in the vocabulary. Optional if using bytes " "output mode.") ("lm", po::value<string>(), "Path of KenLM binary LM file. Must be " "built without including the vocabulary (use the -v flag). See " "generate_lm.py for how to create a binary LM.") ("vocab", po::value<string>(), "Path of vocabulary file. Must contain " "words separated by whitespace.") ("package", po::value<string>(), "Path to save scorer package.") ("default_alpha", po::value<float>(), "Default value of alpha " "hyperparameter (float).") ("default_beta", po::value<float>(), "Default value of beta " "hyperparameter (float).") ("force_bytes_output_mode", po::value<bool>(), "Boolean flag, force " "set or unset bytes output mode in the scorer package. If not set, " "infers from the vocabulary. See <https://stt.readthedocs.io/en/latest/Decoder.html#bytes-output-mode> " "for further explanation.") ; po::variables_map vm; po::store(po::parse_command_line(argc, argv, desc), vm); po::notify(vm); if (vm.count("help")) { cout << desc << "\n"; return 1; } // Check required flags. for (const string& flag : {"lm", "vocab", "package", "default_alpha", "default_beta"}) { if (!vm.count(flag)) { cerr << "--" << flag << " is a required flag. Pass --help for help.\n"; return 1; } } // Parse optional --force_bytes_output_mode absl::optional<bool> force_bytes_output_mode = absl::nullopt; if (vm.count("force_bytes_output_mode")) { force_bytes_output_mode = vm["force_bytes_output_mode"].as<bool>(); } // Parse optional --checkpoint absl::optional<string> checkpoint = absl::nullopt; if (vm.count("checkpoint")) { checkpoint = vm["checkpoint"].as<string>(); } create_package(checkpoint, vm["lm"].as<string>(), vm["vocab"].as<string>(), vm["package"].as<string>(), force_bytes_output_mode, vm["default_alpha"].as<float>(), vm["default_beta"].as<float>()); return 0; }
0
coqui_public_repos/STT-models/frisian/itml
coqui_public_repos/STT-models/frisian/itml/v0.1.0/LICENSE
GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software. A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public. The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version. An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU Affero General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Remote Network Interaction; Use with the GNU General Public License. Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements. You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <https://www.gnu.org/licenses/>.
0
coqui_public_repos/inference-engine/third_party/kenlm
coqui_public_repos/inference-engine/third_party/kenlm/util/have.hh
/* Optional packages. You might want to integrate this with your build system e.g. config.h from ./configure. */ #ifndef UTIL_HAVE_H #define UTIL_HAVE_H #ifdef HAVE_CONFIG_H #include "config.h" #endif #ifndef HAVE_ICU //#define HAVE_ICU #endif #endif // UTIL_HAVE_H
0
coqui_public_repos/STT/native_client/swift
coqui_public_repos/STT/native_client/swift/stt_ios_test/ContentView.swift
// // ContentView.swift // stt_ios_test // // Created by Reuben Morais on 15.06.20. // Copyright © 2020 Mozilla // Copyright © 2021 Coqui GmbH import SwiftUI struct ContentView: View { @State var isRecognizingMicrophone = false @State var partialResult = "" @State var result = "" @State var stt: SpeechRecognitionImpl? = nil func setup() { stt = SpeechRecognitionImpl( onPartialResult: { nextPartialResult in partialResult = nextPartialResult }, onResult: { nextResult in result = nextResult } ) } var body: some View { VStack { Text("Coqui STT iOS Demo") .font(.system(size: 30)) if (stt != nil) { Button("Recognize files", action: recognizeFiles) .padding(30) Button( isRecognizingMicrophone ? "Stop Microphone Recognition" : "Start Microphone Recognition", action: isRecognizingMicrophone ? stopMicRecognition : startMicRecognition) .padding(30) Text("Partial result") Text(partialResult) Text("Result") Text(result) } else { Button("Setup", action: setup) .padding(30) } } } func recognizeFiles() { stt?.recognizeFiles() } func startMicRecognition() { isRecognizingMicrophone = true stt?.startMicrophoneRecognition() } func stopMicRecognition() { isRecognizingMicrophone = false stt?.stopMicrophoneRecognition() } } struct ContentView_Previews: PreviewProvider { static var previews: some View { ContentView() } }
0
coqui_public_repos/STT
coqui_public_repos/STT/taskcluster/decoder-package.sh
#!/bin/bash set -xe source $(dirname "$0")/tc-tests-utils.sh mkdir -p ${TASKCLUSTER_ARTIFACTS} || true if [ -d ${DS_ROOT_TASK}/DeepSpeech/ds/wheels ]; then cp ${DS_ROOT_TASK}/DeepSpeech/ds/wheels/* ${TASKCLUSTER_ARTIFACTS}/ fi;
0
coqui_public_repos/TTS/tests
coqui_public_repos/TTS/tests/tts_tests/test_tacotron2_speaker_emb_train.py
import glob import json import os import shutil from trainer import get_last_checkpoint from tests import get_device_id, get_tests_output_path, run_cli from TTS.tts.configs.tacotron2_config import Tacotron2Config config_path = os.path.join(get_tests_output_path(), "test_model_config.json") output_path = os.path.join(get_tests_output_path(), "train_outputs") config = Tacotron2Config( r=5, batch_size=8, eval_batch_size=8, num_loader_workers=0, num_eval_loader_workers=0, text_cleaner="english_cleaners", use_phonemes=False, phoneme_language="en-us", phoneme_cache_path=os.path.join(get_tests_output_path(), "train_outputs/phoneme_cache/"), run_eval=True, test_delay_epochs=-1, epochs=1, print_step=1, print_eval=True, test_sentences=[ "Be a voice, not an echo.", ], use_speaker_embedding=True, num_speakers=4, max_decoder_steps=50, ) config.audio.do_trim_silence = True config.audio.trim_db = 60 config.save_json(config_path) # train the model for one epoch command_train = ( f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_tts.py --config_path {config_path} " f"--coqpit.output_path {output_path} " "--coqpit.datasets.0.formatter ljspeech_test " "--coqpit.datasets.0.meta_file_train metadata.csv " "--coqpit.datasets.0.meta_file_val metadata.csv " "--coqpit.datasets.0.path tests/data/ljspeech " "--coqpit.test_delay_epochs 0 " ) run_cli(command_train) # Find latest folder continue_path = max(glob.glob(os.path.join(output_path, "*/")), key=os.path.getmtime) # Inference using TTS API continue_config_path = os.path.join(continue_path, "config.json") continue_restore_path, _ = get_last_checkpoint(continue_path) out_wav_path = os.path.join(get_tests_output_path(), "output.wav") speaker_id = "ljspeech-1" continue_speakers_path = os.path.join(continue_path, "speakers.json") # Check integrity of the config with open(continue_config_path, "r", encoding="utf-8") as f: config_loaded = json.load(f) assert config_loaded["characters"] is not None assert config_loaded["output_path"] in continue_path assert config_loaded["test_delay_epochs"] == 0 # Load the model and run inference inference_command = f"CUDA_VISIBLE_DEVICES='{get_device_id()}' tts --text 'This is an example.' --speaker_idx {speaker_id} --speakers_file_path {continue_speakers_path} --config_path {continue_config_path} --model_path {continue_restore_path} --out_path {out_wav_path}" run_cli(inference_command) # restore the model and continue training for one more epoch command_train = f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_tts.py --continue_path {continue_path} " run_cli(command_train) shutil.rmtree(continue_path)
0
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core/framework/framework_common.h
// Copyright (c) Microsoft Corporation. All rights reserved. // Licensed under the MIT License. #pragma once #include <string> #include <unordered_map> #include <vector> #include "run_options.h" namespace onnxruntime { // forward declarations class Model; class GraphTransformer; class NodeArg; } // namespace onnxruntime namespace onnxruntime { using InputDefList = std::vector<const onnxruntime::NodeArg*>; using OutputDefList = std::vector<const onnxruntime::NodeArg*>; using NameMLValMap = std::unordered_map<std::string, OrtValue>; } // namespace onnxruntime
0
coqui_public_repos/STT-models/french/commonvoice-fr
coqui_public_repos/STT-models/french/commonvoice-fr/v0.8/alphabet.txt
' a b c d e f g h i j k l m n o p q r s t u v w x y z ~ ® à á â ã ç è é ê ë í î ï ñ ò ó ô ö ù û ü ď ĩ ĺ ń ō œ ţ ũ ū ŭ ů ű ŵ ǎ ǔ ɑ ɨ ʋ θ φ о п р ц ч э і ј џ ӌ գ զ ḥ ẓ ẵ ế ề ố ớ ờ ụ ủ ứ ‐ ― ₽ ∆ − ∨ ⊨ ⋅ ⱅ ⱎ 三 保 厳 宇 津 ꝑ ÿ
0
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst/script/equivalent.h
// See www.openfst.org for extensive documentation on this weighted // finite-state transducer library. #ifndef FST_SCRIPT_EQUIVALENT_H_ #define FST_SCRIPT_EQUIVALENT_H_ #include <tuple> #include <fst/equivalent.h> #include <fst/script/arg-packs.h> #include <fst/script/fst-class.h> namespace fst { namespace script { using EquivalentInnerArgs = std::tuple<const FstClass &, const FstClass &, float>; using EquivalentArgs = WithReturnValue<bool, EquivalentInnerArgs>; template <class Arc> void Equivalent(EquivalentArgs *args) { const Fst<Arc> &fst1 = *(std::get<0>(args->args).GetFst<Arc>()); const Fst<Arc> &fst2 = *(std::get<1>(args->args).GetFst<Arc>()); args->retval = Equivalent(fst1, fst2, std::get<2>(args->args)); } bool Equivalent(const FstClass &fst1, const FstClass &fst2, float delta = kDelta); } // namespace script } // namespace fst #endif // FST_SCRIPT_EQUIVALENT_H_
0
coqui_public_repos/inference-engine/third_party/kenlm/util
coqui_public_repos/inference-engine/third_party/kenlm/util/double-conversion/bignum.h
// Copyright 2010 the V8 project authors. All rights reserved. // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following // disclaimer in the documentation and/or other materials provided // with the distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived // from this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #ifndef DOUBLE_CONVERSION_BIGNUM_H_ #define DOUBLE_CONVERSION_BIGNUM_H_ #include "utils.h" namespace kenlm_double_conversion { class Bignum { public: // 3584 = 128 * 28. We can represent 2^3584 > 10^1000 accurately. // This bignum can encode much bigger numbers, since it contains an // exponent. static const int kMaxSignificantBits = 3584; Bignum(); void AssignUInt16(uint16_t value); void AssignUInt64(uint64_t value); void AssignBignum(const Bignum& other); void AssignDecimalString(Vector<const char> value); void AssignHexString(Vector<const char> value); void AssignPowerUInt16(uint16_t base, int exponent); void AddUInt64(uint64_t operand); void AddBignum(const Bignum& other); // Precondition: this >= other. void SubtractBignum(const Bignum& other); void Square(); void ShiftLeft(int shift_amount); void MultiplyByUInt32(uint32_t factor); void MultiplyByUInt64(uint64_t factor); void MultiplyByPowerOfTen(int exponent); void Times10() { return MultiplyByUInt32(10); } // Pseudocode: // int result = this / other; // this = this % other; // In the worst case this function is in O(this/other). uint16_t DivideModuloIntBignum(const Bignum& other); bool ToHexString(char* buffer, int buffer_size) const; // Returns // -1 if a < b, // 0 if a == b, and // +1 if a > b. static int Compare(const Bignum& a, const Bignum& b); static bool Equal(const Bignum& a, const Bignum& b) { return Compare(a, b) == 0; } static bool LessEqual(const Bignum& a, const Bignum& b) { return Compare(a, b) <= 0; } static bool Less(const Bignum& a, const Bignum& b) { return Compare(a, b) < 0; } // Returns Compare(a + b, c); static int PlusCompare(const Bignum& a, const Bignum& b, const Bignum& c); // Returns a + b == c static bool PlusEqual(const Bignum& a, const Bignum& b, const Bignum& c) { return PlusCompare(a, b, c) == 0; } // Returns a + b <= c static bool PlusLessEqual(const Bignum& a, const Bignum& b, const Bignum& c) { return PlusCompare(a, b, c) <= 0; } // Returns a + b < c static bool PlusLess(const Bignum& a, const Bignum& b, const Bignum& c) { return PlusCompare(a, b, c) < 0; } private: typedef uint32_t Chunk; typedef uint64_t DoubleChunk; static const int kChunkSize = sizeof(Chunk) * 8; static const int kDoubleChunkSize = sizeof(DoubleChunk) * 8; // With bigit size of 28 we loose some bits, but a double still fits easily // into two chunks, and more importantly we can use the Comba multiplication. static const int kBigitSize = 28; static const Chunk kBigitMask = (1 << kBigitSize) - 1; // Every instance allocates kBigitLength chunks on the stack. Bignums cannot // grow. There are no checks if the stack-allocated space is sufficient. static const int kBigitCapacity = kMaxSignificantBits / kBigitSize; void EnsureCapacity(int size) { if (size > kBigitCapacity) { UNREACHABLE(); } } void Align(const Bignum& other); void Clamp(); bool IsClamped() const; void Zero(); // Requires this to have enough capacity (no tests done). // Updates used_digits_ if necessary. // shift_amount must be < kBigitSize. void BigitsShiftLeft(int shift_amount); // BigitLength includes the "hidden" digits encoded in the exponent. int BigitLength() const { return used_digits_ + exponent_; } Chunk BigitAt(int index) const; void SubtractTimes(const Bignum& other, int factor); Chunk bigits_buffer_[kBigitCapacity]; // A vector backed by bigits_buffer_. This way accesses to the array are // checked for out-of-bounds errors. Vector<Chunk> bigits_; int used_digits_; // The Bignum's value equals value(bigits_) * 2^(exponent_ * kBigitSize). int exponent_; DISALLOW_COPY_AND_ASSIGN(Bignum); }; } // namespace kenlm_double_conversion #endif // DOUBLE_CONVERSION_BIGNUM_H_
0
coqui_public_repos/inference-engine/third_party
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/Makefile.am
SUBDIRS = src ACLOCAL_AMFLAGS = -I m4
0