coqui_public_repos/STT/taskcluster/test-nodejs_14x_8k-linux-amd64-prod_pbmodel-opt.yml

build:
  template_file: test-linux-opt-base.tyml
  docker_image: "ubuntu:16.04"
  dependencies:
    - "linux-amd64-cpu-opt"
  system_setup:
    >
      ${nodejs.packages_xenial.prep_14} && ${nodejs.packages_xenial.apt_pinning} && apt-get -qq update && apt-get -qq -y install ${nodejs.packages_xenial.apt}
  args:
    tests_cmdline: "${system.homedir.linux}/DeepSpeech/ds/taskcluster/tc-node-tests-prod.sh 14.x 8k"
  workerType: "${docker.dsTests}"
  metadata:
    name: "DeepSpeech Linux AMD64 CPU NodeJS 14.x prod tests (8kHz)"
    description: "Testing DeepSpeech for Linux/AMD64 on NodeJS v14.x on prod model, CPU only, optimized version (8kHz)"
coqui_public_repos/STT-models/welsh/techiaith/v21.03/MODEL_CARD.md

# Model card for Welsh STT
Jump to section:
- [Model details](#model-details)
- [Intended use](#intended-use)
- [Performance Factors](#performance-factors)
- [Metrics](#metrics)
- [Training data](#training-data)
- [Evaluation data](#evaluation-data)
- [Ethical considerations](#ethical-considerations)
- [Caveats and recommendations](#caveats-and-recommendations)
## Model details
- Person or organization developing model: Originally trained by [Dewi Bryn Jones](https://github.com/DewiBrynJones) and released by the [Techiaith Language Technologies Unit](https://github.com/techiaith)
- Model language: Welsh / Cymraeg / `cy`
- Model date: Accessed from [GitHub](https://github.com/techiaith/docker-deepspeech-cy/releases/tag/21.03) on March 31, 2021
- Model type: `Speech-to-Text`
- Model version: `v21.03`
- Compatible with 🐸 STT version: `v0.9.3`
- Code: [docker-deepspeech-cy](https://github.com/techiaith/docker-deepspeech-cy)
- License: MIT
- Citation details: `@misc{welsh-stt-dewibrynjones,
author = {Dewi Bryn Jones},
title = {Docker DeepSpeech Cymraeg},
publisher = {Techiaith},
journal = {docker-deepspeech-cy},
howpublished = {\url{https://github.com/techiaith/docker-deepspeech-cy/releases/tag/21.03}}
}`
- Where to send questions or comments about the model: You can leave an issue on [`STT-model` issues](https://github.com/coqui-ai/STT-models/issues), open a new discussion on [`STT-model` discussions](https://github.com/coqui-ai/STT-models/discussions), or chat with us on [Gitter](https://gitter.im/coqui-ai/).
## Intended use
Speech-to-Text for the [Welsh Language](https://en.wikipedia.org/wiki/Welsh_language) on 16kHz, mono-channel audio.
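A minimal inference sketch, assuming the Coqui `stt` Python package (`pip install stt`) and the released `model.tflite`; the file names here are placeholders:

```python
import wave

import numpy as np
from stt import Model  # Coqui STT Python bindings

# Path is illustrative; use the acoustic model released with v21.03.
model = Model("model.tflite")

# The model expects 16kHz, mono, 16-bit PCM audio.
with wave.open("welsh_sample.wav", "rb") as wav:
    assert wav.getframerate() == 16000 and wav.getnchannels() == 1
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))
```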
## Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data).
## Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
#### Transcription Accuracy
Word Error Rates and Character Error Rates were not reported for this model.
#### Real-Time Factor
Real-Time Factor (RTF) is defined as `processing-time / length-of-audio`. The exact real-time factor of an STT model will depend on the hardware setup, so you may experience a different RTF.
Recorded average RTF on laptop CPU: `0.76`
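RTF can also be measured directly for your own setup; the sketch below assumes any `transcribe` callable and reads the audio duration from the WAV header:

```python
import time
import wave

def real_time_factor(wav_path, transcribe):
    """RTF = processing-time / length-of-audio for a single file."""
    with wave.open(wav_path, "rb") as wav:
        audio_seconds = wav.getnframes() / wav.getframerate()
    start = time.perf_counter()
    transcribe(wav_path)  # any function that transcribes the file
    return (time.perf_counter() - start) / audio_seconds
```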
#### Model Size
`model.pbmm`: 181M
`model.tflite`: 46M
### Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
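One way to surface these signals with the Python bindings is `sttWithMetadata`, which returns the top beam candidates with per-candidate confidence (a sketch; `model` and `audio` as in the inference example above):

```python
# Request the top 5 candidate transcripts from the decoding beam.
metadata = model.sttWithMetadata(audio, 5)
for transcript in metadata.transcripts:
    text = "".join(token.text for token in transcript.tokens)
    print(f"{transcript.confidence:.3f}\t{text}")
```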
## Training data
These models were trained with the Welsh dataset from the [Common Voice Corpus 6.1](https://commonvoice.mozilla.org/datasets) in addition to a small dataset of validated recordings donated by the first users of Bangor University's Language Technology Unit's online automatic transcription website service: [Trawsgrifiwr Ar-lein](https://trawsgrifiwr.techiaith.cymru). [Detailed release notes here](https://github.com/techiaith/docker-deepspeech-cy/releases/tag/21.03).
## Evaluation data
With a language model, the Welsh STT model had a Word Error Rate of 11%. [Detailed release notes here](https://github.com/techiaith/docker-deepspeech-cy/releases/tag/21.03).
## Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
### Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
### Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
## Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data).
In most applications, it is recommended that you [train your own language model](https://stt.readthedocs.io/en/latest/LANGUAGE_MODEL.html) to improve transcription accuracy on your speech data.
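Attaching a scorer with the Python bindings is a one-liner; in this sketch, `kenlm.scorer` is a placeholder for a scorer built per the guide above, and the alpha/beta weights are illustrative values to be tuned on held-out data:

```python
from stt import Model

model = Model("model.tflite")
model.enableExternalScorer("kenlm.scorer")  # placeholder path
model.setScorerAlphaBeta(0.93, 1.18)  # illustrative weights; tune on a dev set
```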
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/lib/fst-types.cc

// Registration of common Fst and arc types.
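// Each REGISTER_FST(FstType, ArcType) call below adds FstType<ArcType> to the
// FST registry, which is what allows generic readers such as
// Fst<StdArc>::Read() to construct the right concrete type from a file header
// at runtime.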
#include <fst/arc.h>
#include <fst/compact-fst.h>
#include <fst/const-fst.h>
#include <fst/edit-fst.h>
#include <fst/register.h>
#include <fst/vector-fst.h>
namespace fst {
// Registers VectorFst, ConstFst and EditFst for common arc types.
REGISTER_FST(VectorFst, StdArc);
REGISTER_FST(VectorFst, LogArc);
REGISTER_FST(VectorFst, Log64Arc);
REGISTER_FST(ConstFst, StdArc);
REGISTER_FST(ConstFst, LogArc);
REGISTER_FST(ConstFst, Log64Arc);
REGISTER_FST(EditFst, StdArc);
REGISTER_FST(EditFst, LogArc);
REGISTER_FST(EditFst, Log64Arc);
// Registers CompactFst for common arcs with the default (uint32) size type.
REGISTER_FST(CompactStringFst, StdArc);
REGISTER_FST(CompactStringFst, LogArc);
REGISTER_FST(CompactWeightedStringFst, StdArc);
REGISTER_FST(CompactWeightedStringFst, LogArc);
REGISTER_FST(CompactAcceptorFst, StdArc);
REGISTER_FST(CompactAcceptorFst, LogArc);
REGISTER_FST(CompactUnweightedFst, StdArc);
REGISTER_FST(CompactUnweightedFst, LogArc);
REGISTER_FST(CompactUnweightedAcceptorFst, StdArc);
REGISTER_FST(CompactUnweightedAcceptorFst, LogArc);
} // namespace fst
coqui_public_repos/inference-engine/third_party/kenlm/util/double-conversion/fixed-dtoa.cc

// Copyright 2010 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following
// disclaimer in the documentation and/or other materials provided
// with the distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived
// from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include <math.h>
#include "fixed-dtoa.h"
#include "ieee.h"
namespace kenlm_double_conversion {
// Represents a 128bit type. This class should be replaced by a native type on
// platforms that support 128bit integers.
class UInt128 {
public:
UInt128() : high_bits_(0), low_bits_(0) { }
UInt128(uint64_t high, uint64_t low) : high_bits_(high), low_bits_(low) { }
void Multiply(uint32_t multiplicand) {
uint64_t accumulator;
accumulator = (low_bits_ & kMask32) * multiplicand;
uint32_t part = static_cast<uint32_t>(accumulator & kMask32);
accumulator >>= 32;
accumulator = accumulator + (low_bits_ >> 32) * multiplicand;
low_bits_ = (accumulator << 32) + part;
accumulator >>= 32;
accumulator = accumulator + (high_bits_ & kMask32) * multiplicand;
part = static_cast<uint32_t>(accumulator & kMask32);
accumulator >>= 32;
accumulator = accumulator + (high_bits_ >> 32) * multiplicand;
high_bits_ = (accumulator << 32) + part;
ASSERT((accumulator >> 32) == 0);
}
void Shift(int shift_amount) {
ASSERT(-64 <= shift_amount && shift_amount <= 64);
if (shift_amount == 0) {
return;
} else if (shift_amount == -64) {
high_bits_ = low_bits_;
low_bits_ = 0;
} else if (shift_amount == 64) {
low_bits_ = high_bits_;
high_bits_ = 0;
} else if (shift_amount <= 0) {
high_bits_ <<= -shift_amount;
high_bits_ += low_bits_ >> (64 + shift_amount);
low_bits_ <<= -shift_amount;
} else {
low_bits_ >>= shift_amount;
low_bits_ += high_bits_ << (64 - shift_amount);
high_bits_ >>= shift_amount;
}
}
// Modifies *this to *this MOD (2^power).
// Returns *this DIV (2^power).
int DivModPowerOf2(int power) {
if (power >= 64) {
int result = static_cast<int>(high_bits_ >> (power - 64));
high_bits_ -= static_cast<uint64_t>(result) << (power - 64);
return result;
} else {
uint64_t part_low = low_bits_ >> power;
uint64_t part_high = high_bits_ << (64 - power);
int result = static_cast<int>(part_low + part_high);
high_bits_ = 0;
low_bits_ -= part_low << power;
return result;
}
}
bool IsZero() const {
return high_bits_ == 0 && low_bits_ == 0;
}
int BitAt(int position) const {
if (position >= 64) {
return static_cast<int>(high_bits_ >> (position - 64)) & 1;
} else {
return static_cast<int>(low_bits_ >> position) & 1;
}
}
private:
static const uint64_t kMask32 = 0xFFFFFFFF;
// Value == (high_bits_ << 64) + low_bits_
uint64_t high_bits_;
uint64_t low_bits_;
};
static const int kDoubleSignificandSize = 53; // Includes the hidden bit.
static void FillDigits32FixedLength(uint32_t number, int requested_length,
Vector<char> buffer, int* length) {
for (int i = requested_length - 1; i >= 0; --i) {
buffer[(*length) + i] = '0' + number % 10;
number /= 10;
}
*length += requested_length;
}
static void FillDigits32(uint32_t number, Vector<char> buffer, int* length) {
int number_length = 0;
// We fill the digits in reverse order and exchange them afterwards.
while (number != 0) {
int digit = number % 10;
number /= 10;
buffer[(*length) + number_length] = static_cast<char>('0' + digit);
number_length++;
}
// Exchange the digits.
int i = *length;
int j = *length + number_length - 1;
while (i < j) {
char tmp = buffer[i];
buffer[i] = buffer[j];
buffer[j] = tmp;
i++;
j--;
}
*length += number_length;
}
static void FillDigits64FixedLength(uint64_t number,
Vector<char> buffer, int* length) {
const uint32_t kTen7 = 10000000;
// For efficiency cut the number into 3 uint32_t parts, and print those.
uint32_t part2 = static_cast<uint32_t>(number % kTen7);
number /= kTen7;
uint32_t part1 = static_cast<uint32_t>(number % kTen7);
uint32_t part0 = static_cast<uint32_t>(number / kTen7);
FillDigits32FixedLength(part0, 3, buffer, length);
FillDigits32FixedLength(part1, 7, buffer, length);
FillDigits32FixedLength(part2, 7, buffer, length);
}
static void FillDigits64(uint64_t number, Vector<char> buffer, int* length) {
const uint32_t kTen7 = 10000000;
// For efficiency cut the number into 3 uint32_t parts, and print those.
uint32_t part2 = static_cast<uint32_t>(number % kTen7);
number /= kTen7;
uint32_t part1 = static_cast<uint32_t>(number % kTen7);
uint32_t part0 = static_cast<uint32_t>(number / kTen7);
if (part0 != 0) {
FillDigits32(part0, buffer, length);
FillDigits32FixedLength(part1, 7, buffer, length);
FillDigits32FixedLength(part2, 7, buffer, length);
} else if (part1 != 0) {
FillDigits32(part1, buffer, length);
FillDigits32FixedLength(part2, 7, buffer, length);
} else {
FillDigits32(part2, buffer, length);
}
}
static void RoundUp(Vector<char> buffer, int* length, int* decimal_point) {
// An empty buffer represents 0.
if (*length == 0) {
buffer[0] = '1';
*decimal_point = 1;
*length = 1;
return;
}
// Round the last digit until we either have a digit that was not '9' or until
// we reached the first digit.
buffer[(*length) - 1]++;
for (int i = (*length) - 1; i > 0; --i) {
if (buffer[i] != '0' + 10) {
return;
}
buffer[i] = '0';
buffer[i - 1]++;
}
// If the first digit is now '0' + 10, we would need to set it to '0' and add
// a '1' in front. However we reach the first digit only if all following
// digits had been '9' before rounding up. Now all trailing digits are '0' and
// we simply switch the first digit to '1' and update the decimal-point
// (indicating that the point is now one digit to the right).
if (buffer[0] == '0' + 10) {
buffer[0] = '1';
(*decimal_point)++;
}
}
// The given fractionals number represents a fixed-point number with binary
// point at bit (-exponent).
// Preconditions:
// -128 <= exponent <= 0.
// 0 <= fractionals * 2^exponent < 1
// The buffer holds the result.
// The function will round its result. During the rounding-process digits not
// generated by this function might be updated, and the decimal-point variable
// might be updated. If this function generates the digits 99 and the buffer
// already contained "199" (thus yielding a buffer of "19999") then a
// rounding-up will change the contents of the buffer to "20000".
static void FillFractionals(uint64_t fractionals, int exponent,
int fractional_count, Vector<char> buffer,
int* length, int* decimal_point) {
ASSERT(-128 <= exponent && exponent <= 0);
// 'fractionals' is a fixed-point number, with binary point at bit
// (-exponent). Inside the function the non-converted remainder of fractionals
// is a fixed-point number, with binary point at bit 'point'.
if (-exponent <= 64) {
// One 64 bit number is sufficient.
ASSERT(fractionals >> 56 == 0);
int point = -exponent;
for (int i = 0; i < fractional_count; ++i) {
if (fractionals == 0) break;
// Instead of multiplying by 10 we multiply by 5 and adjust the point
// location. This way the fractionals variable will not overflow.
// Invariant at the beginning of the loop: fractionals < 2^point.
// Initially we have: point <= 64 and fractionals < 2^56
// After each iteration the point is decremented by one.
// Note that 5^3 = 125 < 128 = 2^7.
// Therefore three iterations of this loop will not overflow fractionals
// (even without the subtraction at the end of the loop body). At this
// time point will satisfy point <= 61 and therefore fractionals < 2^point
// and any further multiplication of fractionals by 5 will not overflow.
fractionals *= 5;
point--;
int digit = static_cast<int>(fractionals >> point);
ASSERT(digit <= 9);
buffer[*length] = static_cast<char>('0' + digit);
(*length)++;
fractionals -= static_cast<uint64_t>(digit) << point;
}
// If the first bit after the point is set we have to round up.
ASSERT(fractionals == 0 || point - 1 >= 0);
if ((fractionals != 0) && ((fractionals >> (point - 1)) & 1) == 1) {
RoundUp(buffer, length, decimal_point);
}
} else { // We need 128 bits.
ASSERT(64 < -exponent && -exponent <= 128);
UInt128 fractionals128 = UInt128(fractionals, 0);
fractionals128.Shift(-exponent - 64);
int point = 128;
for (int i = 0; i < fractional_count; ++i) {
if (fractionals128.IsZero()) break;
// As before: instead of multiplying by 10 we multiply by 5 and adjust the
// point location.
// This multiplication will not overflow for the same reasons as before.
fractionals128.Multiply(5);
point--;
int digit = fractionals128.DivModPowerOf2(point);
ASSERT(digit <= 9);
buffer[*length] = static_cast<char>('0' + digit);
(*length)++;
}
if (fractionals128.BitAt(point - 1) == 1) {
RoundUp(buffer, length, decimal_point);
}
}
}
// Removes leading and trailing zeros.
// If leading zeros are removed then the decimal point position is adjusted.
static void TrimZeros(Vector<char> buffer, int* length, int* decimal_point) {
while (*length > 0 && buffer[(*length) - 1] == '0') {
(*length)--;
}
int first_non_zero = 0;
while (first_non_zero < *length && buffer[first_non_zero] == '0') {
first_non_zero++;
}
if (first_non_zero != 0) {
for (int i = first_non_zero; i < *length; ++i) {
buffer[i - first_non_zero] = buffer[i];
}
*length -= first_non_zero;
*decimal_point -= first_non_zero;
}
}
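// Produces the decimal digits of v with at most fractional_count digits after
// the decimal point, returning false when v (or fractional_count) is too
// large for this method. Worked example: FastFixedDtoa(3.125, 4, buffer,
// &length, &decimal_point) fills buffer with "3125" and sets length to 4 and
// decimal_point to 1, which together denote "3.125".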
bool FastFixedDtoa(double v,
int fractional_count,
Vector<char> buffer,
int* length,
int* decimal_point) {
const uint32_t kMaxUInt32 = 0xFFFFFFFF;
uint64_t significand = Double(v).Significand();
int exponent = Double(v).Exponent();
// v = significand * 2^exponent (with significand a 53bit integer).
// If the exponent is larger than 20 (i.e. we may have a 73bit number) then we
// don't know how to compute the representation. 2^73 ~= 9.5*10^21.
// If necessary this limit could probably be increased, but we don't need
// more.
if (exponent > 20) return false;
if (fractional_count > 20) return false;
*length = 0;
// At most kDoubleSignificandSize bits of the significand are non-zero.
// Given a 64 bit integer we have 11 0s followed by 53 potentially non-zero
// bits: 0..11*..0xxx..53*..xx
if (exponent + kDoubleSignificandSize > 64) {
// The exponent must be > 11.
//
// We know that v = significand * 2^exponent.
// And the exponent > 11.
// We simplify the task by dividing v by 10^17.
// The quotient delivers the first digits, and the remainder fits into a 64
// bit number.
// Dividing by 10^17 is equivalent to dividing by 5^17*2^17.
const uint64_t kFive17 = UINT64_2PART_C(0xB1, A2BC2EC5); // 5^17
uint64_t divisor = kFive17;
int divisor_power = 17;
uint64_t dividend = significand;
uint32_t quotient;
uint64_t remainder;
// Let v = f * 2^e with f == significand and e == exponent.
// Then need q (quotient) and r (remainder) as follows:
// v = q * 10^17 + r
// f * 2^e = q * 10^17 + r
// f * 2^e = q * 5^17 * 2^17 + r
// If e > 17 then
// f * 2^(e-17) = q * 5^17 + r/2^17
// else
// f = q * 5^17 * 2^(17-e) + r/2^e
if (exponent > divisor_power) {
// We only allow exponents of up to 20 and therefore (e - 17) <= 3
dividend <<= exponent - divisor_power;
quotient = static_cast<uint32_t>(dividend / divisor);
remainder = (dividend % divisor) << divisor_power;
} else {
divisor <<= divisor_power - exponent;
quotient = static_cast<uint32_t>(dividend / divisor);
remainder = (dividend % divisor) << exponent;
}
FillDigits32(quotient, buffer, length);
FillDigits64FixedLength(remainder, buffer, length);
*decimal_point = *length;
} else if (exponent >= 0) {
// 0 <= exponent <= 11
significand <<= exponent;
FillDigits64(significand, buffer, length);
*decimal_point = *length;
} else if (exponent > -kDoubleSignificandSize) {
// We have to cut the number.
uint64_t integrals = significand >> -exponent;
uint64_t fractionals = significand - (integrals << -exponent);
if (integrals > kMaxUInt32) {
FillDigits64(integrals, buffer, length);
} else {
FillDigits32(static_cast<uint32_t>(integrals), buffer, length);
}
*decimal_point = *length;
FillFractionals(fractionals, exponent, fractional_count,
buffer, length, decimal_point);
} else if (exponent < -128) {
// This configuration (with at most 20 digits) means that all digits must be
// 0.
ASSERT(fractional_count <= 20);
buffer[0] = '\0';
*length = 0;
*decimal_point = -fractional_count;
} else {
*decimal_point = 0;
FillFractionals(significand, exponent, fractional_count,
buffer, length, decimal_point);
}
TrimZeros(buffer, length, decimal_point);
buffer[*length] = '\0';
if ((*length) == 0) {
// The string is empty and the decimal_point thus has no importance. Mimic
// Gay's dtoa and set it to -fractional_count.
*decimal_point = -fractional_count;
}
return true;
}
} // namespace kenlm_double_conversion
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/extensions/far/sttable.cc

// See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fstream>
#include <fst/extensions/far/sttable.h>
namespace fst {
bool IsSTTable(const string &filename) {
std::ifstream strm(filename);
if (!strm.good()) return false;
int32 magic_number = 0;
ReadType(strm, &magic_number);
return magic_number == kSTTableMagicNumber;
}
} // namespace fst
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst/matcher.h

// See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Classes to allow matching labels leaving FST states.
#ifndef FST_MATCHER_H_
#define FST_MATCHER_H_
#include <algorithm>
#include <unordered_map>
#include <utility>
#include <fst/log.h>
#include <fst/mutable-fst.h> // for all internal FST accessors.
namespace fst {
// Matchers find and iterate through requested labels at FST states. In the
// simplest form, these are just some associative map or search keyed on labels.
// More generally, they may implement matching special labels that represent
// sets of labels such as sigma (all), rho (rest), or phi (fail). The Matcher
// interface is:
//
// template <class F>
// class Matcher {
// public:
// using FST = F;
// using Arc = typename FST::Arc;
// using Label = typename Arc::Label;
// using StateId = typename Arc::StateId;
// using Weight = typename Arc::Weight;
//
// // Required constructors. Note:
// // -- the constructors that copy the FST arg are useful for
// // letting the matcher manage the FST through copies
// // (esp with 'safe' copies); e.g. ComposeFst depends on this.
// // -- the constructor that does not copy is useful when the
// // FST is mutated during the lifetime of the matcher
// // (o.w. the matcher would have its own unmutated deep copy).
//
// // This makes a copy of the FST.
// Matcher(const FST &fst, MatchType type);
// // This doesn't copy the FST.
// Matcher(const FST *fst, MatchType type);
// // This makes a copy of the FST.
// // See Copy() below.
// Matcher(const Matcher &matcher, bool safe = false);
//
// // If safe = true, the copy is thread-safe. See Fst<>::Copy() for
// // further doc.
// Matcher<FST> *Copy(bool safe = false) const override;
//
// // Returns the match type that can be provided (depending on compatibility
// // of the input FST). It is either the requested match type, MATCH_NONE, or
// // MATCH_UNKNOWN. If test is false, a costly testing is avoided, but
// // MATCH_UNKNOWN may be returned. If test is true, a definite answer is
// // returned, but may involve more costly computation (e.g., visiting the FST).
// MatchType Type(bool test) const override;
//
// // Specifies the current state.
// void SetState(StateId s) final;
//
// // Finds matches to a label at the current state, returning true if a match
// // is found. kNoLabel matches any non-consuming transitions, e.g., epsilon
// // transitions, which do not require a matching symbol.
// bool Find(Label label) final;
//
// // Iterator methods. Note that initially and after SetState() these have
// // undefined behavior until Find() is called.
//
// bool Done() const final;
//
// const Arc &Value() const final;
//
// void Next() final;
//
// // Returns final weight of a state.
// Weight Final(StateId) const final;
//
// // Indicates preference for being the side used for matching in
// // composition. If the value is kRequirePriority, then it is
// // mandatory that it be used. Calling this method without passing the
// // current state of the matcher invalidates the state of the matcher.
// ssize_t Priority(StateId s) final;
//
// // This specifies the known FST properties as viewed from this matcher. It
// // takes as argument the input FST's known properties.
// uint64 Properties(uint64 props) const override;
//
// // Returns matcher flags.
// uint32 Flags() const override;
//
// // Returns matcher FST.
// const FST &GetFst() const override;
// };
// Basic matcher flags.
// Matcher needs to be used as the matching side in composition for
// at least one state (has kRequirePriority).
constexpr uint32 kRequireMatch = 0x00000001;
// Flags used for basic matchers (see also lookahead.h).
constexpr uint32 kMatcherFlags = kRequireMatch;
// Matcher priority that is mandatory.
constexpr ssize_t kRequirePriority = -1;
// Matcher interface, templated on the Arc definition; used for matcher
// specializations that are returned by the InitMatcher FST method.
template <class A>
class MatcherBase {
public:
using Arc = A;
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
virtual ~MatcherBase() {}
// Virtual interface.
virtual MatcherBase<Arc> *Copy(bool safe = false) const = 0;
virtual MatchType Type(bool) const = 0;
virtual void SetState(StateId) = 0;
virtual bool Find(Label) = 0;
virtual bool Done() const = 0;
virtual const Arc &Value() const = 0;
virtual void Next() = 0;
virtual const Fst<Arc> &GetFst() const = 0;
virtual uint64 Properties(uint64) const = 0;
// Trivial implementations that can be used by derived classes. Full
// devirtualization is expected for any derived class marked final.
virtual uint32 Flags() const { return 0; }
virtual Weight Final(StateId s) const { return internal::Final(GetFst(), s); }
virtual ssize_t Priority(StateId s) { return internal::NumArcs(GetFst(), s); }
};
// A matcher that expects sorted labels on the side to be matched.
// If match_type == MATCH_INPUT, epsilons match the implicit self-loop
// Arc(kNoLabel, 0, Weight::One(), current_state) as well as any
// actual epsilon transitions. If match_type == MATCH_OUTPUT, then
// Arc(0, kNoLabel, Weight::One(), current_state) is instead matched.
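// Example usage (sketch):
//   SortedMatcher<StdFst> matcher(fst, MATCH_INPUT);
//   matcher.SetState(state);
//   if (matcher.Find(label)) {
//     for (; !matcher.Done(); matcher.Next()) {
//       const StdArc &arc = matcher.Value();
//       // ... use arc ...
//     }
//   }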
template <class F>
class SortedMatcher : public MatcherBase<typename F::Arc> {
public:
using FST = F;
using Arc = typename FST::Arc;
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
using MatcherBase<Arc>::Flags;
using MatcherBase<Arc>::Properties;
// Labels >= binary_label will be searched for by binary search;
// o.w. linear search is used.
// This makes a copy of the FST.
SortedMatcher(const FST &fst, MatchType match_type, Label binary_label = 1)
: SortedMatcher(fst.Copy(), match_type, binary_label) {
owned_fst_.reset(&fst_);
}
// Labels >= binary_label will be searched for by binary search;
// o.w. linear search is used.
// This doesn't copy the FST.
SortedMatcher(const FST *fst, MatchType match_type, Label binary_label = 1)
: fst_(*fst),
state_(kNoStateId),
aiter_(nullptr),
match_type_(match_type),
binary_label_(binary_label),
match_label_(kNoLabel),
narcs_(0),
loop_(kNoLabel, 0, Weight::One(), kNoStateId),
error_(false),
aiter_pool_(1) {
switch (match_type_) {
case MATCH_INPUT:
case MATCH_NONE:
break;
case MATCH_OUTPUT:
std::swap(loop_.ilabel, loop_.olabel);
break;
default:
FSTERROR() << "SortedMatcher: Bad match type";
match_type_ = MATCH_NONE;
error_ = true;
}
}
// This makes a copy of the FST.
SortedMatcher(const SortedMatcher<FST> &matcher, bool safe = false)
: owned_fst_(matcher.fst_.Copy(safe)),
fst_(*owned_fst_),
state_(kNoStateId),
aiter_(nullptr),
match_type_(matcher.match_type_),
binary_label_(matcher.binary_label_),
match_label_(kNoLabel),
narcs_(0),
loop_(matcher.loop_),
error_(matcher.error_),
aiter_pool_(1) {}
~SortedMatcher() override { Destroy(aiter_, &aiter_pool_); }
SortedMatcher<FST> *Copy(bool safe = false) const override {
return new SortedMatcher<FST>(*this, safe);
}
MatchType Type(bool test) const override {
if (match_type_ == MATCH_NONE) return match_type_;
const auto true_prop =
match_type_ == MATCH_INPUT ? kILabelSorted : kOLabelSorted;
const auto false_prop =
match_type_ == MATCH_INPUT ? kNotILabelSorted : kNotOLabelSorted;
const auto props = fst_.Properties(true_prop | false_prop, test);
if (props & true_prop) {
return match_type_;
} else if (props & false_prop) {
return MATCH_NONE;
} else {
return MATCH_UNKNOWN;
}
}
void SetState(StateId s) final {
if (state_ == s) return;
state_ = s;
if (match_type_ == MATCH_NONE) {
FSTERROR() << "SortedMatcher: Bad match type";
error_ = true;
}
Destroy(aiter_, &aiter_pool_);
aiter_ = new (&aiter_pool_) ArcIterator<FST>(fst_, s);
aiter_->SetFlags(kArcNoCache, kArcNoCache);
narcs_ = internal::NumArcs(fst_, s);
loop_.nextstate = s;
}
bool Find(Label match_label) final {
exact_match_ = true;
if (error_) {
current_loop_ = false;
match_label_ = kNoLabel;
return false;
}
current_loop_ = match_label == 0;
match_label_ = match_label == kNoLabel ? 0 : match_label;
if (Search()) {
return true;
} else {
return current_loop_;
}
}
// Positions matcher to the first position where inserting match_label would
// maintain the sort order.
void LowerBound(Label label) {
exact_match_ = false;
current_loop_ = false;
if (error_) {
match_label_ = kNoLabel;
return;
}
match_label_ = label;
Search();
}
// After Find(), returns false if no more exact matches.
// After LowerBound(), returns false if no more arcs.
bool Done() const final {
if (current_loop_) return false;
if (aiter_->Done()) return true;
if (!exact_match_) return false;
aiter_->SetFlags(match_type_ == MATCH_INPUT ?
kArcILabelValue : kArcOLabelValue,
kArcValueFlags);
return GetLabel() != match_label_;
}
const Arc &Value() const final {
if (current_loop_) return loop_;
aiter_->SetFlags(kArcValueFlags, kArcValueFlags);
return aiter_->Value();
}
void Next() final {
if (current_loop_) {
current_loop_ = false;
} else {
aiter_->Next();
}
}
Weight Final(StateId s) const final {
return MatcherBase<Arc>::Final(s);
}
ssize_t Priority(StateId s) final {
return MatcherBase<Arc>::Priority(s);
}
const FST &GetFst() const override { return fst_; }
uint64 Properties(uint64 inprops) const override {
return inprops | (error_ ? kError : 0);
}
size_t Position() const { return aiter_ ? aiter_->Position() : 0; }
private:
Label GetLabel() const {
const auto &arc = aiter_->Value();
return match_type_ == MATCH_INPUT ? arc.ilabel : arc.olabel;
}
bool BinarySearch();
bool LinearSearch();
bool Search();
std::unique_ptr<const FST> owned_fst_; // FST ptr if owned.
const FST &fst_; // FST for matching.
StateId state_; // Matcher state.
ArcIterator<FST> *aiter_; // Iterator for current state.
MatchType match_type_; // Type of match to perform.
Label binary_label_; // Least label for binary search.
Label match_label_; // Current label to be matched.
size_t narcs_; // Current state arc count.
Arc loop_; // For non-consuming symbols.
bool current_loop_; // Current arc is the implicit loop.
bool exact_match_; // Exact match or lower bound?
bool error_; // Error encountered?
MemoryPool<ArcIterator<FST>> aiter_pool_; // Pool of arc iterators.
};
// Returns true iff match to match_label_. The arc iterator is positioned at the
// lower bound, that is, the first element greater than or equal to
// match_label_, or the end if all elements are less than match_label_.
template <class FST>
inline bool SortedMatcher<FST>::BinarySearch() {
size_t low = 0;
size_t high = narcs_;
while (low < high) {
const size_t mid = low + (high - low) / 2;
aiter_->Seek(mid);
if (GetLabel() < match_label_) {
low = mid + 1;
} else {
high = mid;
}
}
aiter_->Seek(low);
return low < narcs_ && GetLabel() == match_label_;
}
// Returns true iff match to match_label_, positioning arc iterator at lower
// bound.
template <class FST>
inline bool SortedMatcher<FST>::LinearSearch() {
for (aiter_->Reset(); !aiter_->Done(); aiter_->Next()) {
const auto label = GetLabel();
if (label == match_label_) return true;
if (label > match_label_) break;
}
return false;
}
// Returns true iff match to match_label_, positioning arc iterator at lower
// bound.
template <class FST>
inline bool SortedMatcher<FST>::Search() {
aiter_->SetFlags(match_type_ == MATCH_INPUT ?
kArcILabelValue : kArcOLabelValue,
kArcValueFlags);
if (match_label_ >= binary_label_) {
return BinarySearch();
} else {
return LinearSearch();
}
}
// A matcher that stores labels in a per-state hash table populated upon the
// first visit to that state. Sorting is not required. Treatment of
// epsilons is the same as with SortedMatcher.
template <class F>
class HashMatcher : public MatcherBase<typename F::Arc> {
public:
using FST = F;
using Arc = typename FST::Arc;
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
using MatcherBase<Arc>::Flags;
using MatcherBase<Arc>::Final;
using MatcherBase<Arc>::Priority;
// This makes a copy of the FST.
HashMatcher(const FST &fst, MatchType match_type)
: HashMatcher(fst.Copy(), match_type) {
owned_fst_.reset(&fst_);
}
// This doesn't copy the FST.
HashMatcher(const FST *fst, MatchType match_type)
: fst_(*fst),
state_(kNoStateId),
match_type_(match_type),
loop_(kNoLabel, 0, Weight::One(), kNoStateId),
error_(false) {
switch (match_type_) {
case MATCH_INPUT:
case MATCH_NONE:
break;
case MATCH_OUTPUT:
std::swap(loop_.ilabel, loop_.olabel);
break;
default:
FSTERROR() << "HashMatcher: Bad match type";
match_type_ = MATCH_NONE;
error_ = true;
}
}
// This makes a copy of the FST.
HashMatcher(const HashMatcher<FST> &matcher, bool safe = false)
: owned_fst_(matcher.fst_.Copy(safe)),
fst_(*owned_fst_),
state_(kNoStateId),
match_type_(matcher.match_type_),
loop_(matcher.loop_),
error_(matcher.error_) {}
HashMatcher<FST> *Copy(bool safe = false) const override {
return new HashMatcher<FST>(*this, safe);
}
// The argument is ignored as there are no relevant properties to test.
MatchType Type(bool test) const override { return match_type_; }
void SetState(StateId s) final;
bool Find(Label label) final {
current_loop_ = label == 0;
if (label == 0) {
Search(label);
return true;
}
if (label == kNoLabel) label = 0;
return Search(label);
}
bool Done() const final {
if (current_loop_) return false;
return label_it_ == label_end_;
}
const Arc &Value() const final {
if (current_loop_) return loop_;
aiter_->Seek(label_it_->second);
return aiter_->Value();
}
void Next() final {
if (current_loop_) {
current_loop_ = false;
} else {
++label_it_;
}
}
const FST &GetFst() const override { return fst_; }
uint64 Properties(uint64 inprops) const override {
return inprops | (error_ ? kError : 0);
}
private:
Label GetLabel() const {
const auto &arc = aiter_->Value();
return match_type_ == MATCH_INPUT ? arc.ilabel : arc.olabel;
}
bool Search(Label match_label);
using LabelTable = std::unordered_multimap<Label, size_t>;
using StateTable = std::unordered_map<StateId, LabelTable>;
std::unique_ptr<const FST> owned_fst_; // ptr to FST if owned.
const FST &fst_; // FST for matching.
StateId state_; // Matcher state.
MatchType match_type_;
Arc loop_; // The implicit loop itself.
bool current_loop_; // Is the current arc the implicit loop?
bool error_; // Error encountered?
std::unique_ptr<ArcIterator<FST>> aiter_;
StateTable state_table_; // Table from states to label table.
LabelTable *label_table_; // Pointer to current state's label table.
typename LabelTable::iterator label_it_; // Position for label.
typename LabelTable::iterator label_end_; // Position for last label + 1.
};
template <class FST>
void HashMatcher<FST>::SetState(typename FST::Arc::StateId s) {
if (state_ == s) return;
// Resets everything for the state.
state_ = s;
loop_.nextstate = state_;
aiter_.reset(new ArcIterator<FST>(fst_, state_));
if (match_type_ == MATCH_NONE) {
FSTERROR() << "HashMatcher: Bad match type";
error_ = true;
}
// Attempts to insert a new label table; if it already exists,
// no additional work is done and we simply return.
auto it_and_success = state_table_.emplace(state_, LabelTable());
if (!it_and_success.second) return;
// Otherwise, populate this new table.
// Sets instance's pointer to the label table for this state.
label_table_ = &(it_and_success.first->second);
// Populates the label table.
label_table_->reserve(internal::NumArcs(fst_, state_));
const auto aiter_flags =
(match_type_ == MATCH_INPUT ? kArcILabelValue : kArcOLabelValue) |
kArcNoCache;
aiter_->SetFlags(aiter_flags, kArcFlags);
for (; !aiter_->Done(); aiter_->Next()) {
label_table_->emplace(GetLabel(), aiter_->Position());
}
aiter_->SetFlags(kArcValueFlags, kArcValueFlags);
}
template <class FST>
inline bool HashMatcher<FST>::Search(typename FST::Arc::Label match_label) {
auto range = label_table_->equal_range(match_label);
if (range.first == range.second) return false;
label_it_ = range.first;
label_end_ = range.second;
aiter_->Seek(label_it_->second);
return true;
}
// Specifies whether we rewrite both the input and output sides during matching.
enum MatcherRewriteMode {
MATCHER_REWRITE_AUTO = 0, // Rewrites both sides iff acceptor.
MATCHER_REWRITE_ALWAYS,
MATCHER_REWRITE_NEVER
};
// For any requested label that doesn't match at a state, this matcher
// considers the *unique* transition that matches the label 'phi_label'
// (phi = 'fail'), and recursively looks for a match at its
// destination. When 'phi_loop' is true, if no match is found but a
// phi self-loop is found, then the phi transition found is returned
// with the phi_label rewritten as the requested label (both sides if
// an acceptor, or if 'rewrite_both' is true and both input and output
// labels of the found transition are 'phi_label'). If 'phi_label' is
// kNoLabel, this special matching is not done. PhiMatcher is
// templated itself on a matcher, which is used to perform the
// underlying matching. By default, the underlying matcher is
// constructed by PhiMatcher. The user can instead pass in this
// object; in that case, PhiMatcher takes its ownership.
// Phi non-determinism not supported. No non-consuming symbols other
// than epsilon supported with the underlying template argument matcher.
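// Example usage (sketch; kPhiLabel names an application-chosen failure label):
//   PhiMatcher<SortedMatcher<StdFst>> matcher(fst, MATCH_INPUT, kPhiLabel);
//   matcher.SetState(state);
//   if (matcher.Find(label)) { /* matching arc via matcher.Value() */ }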
template <class M>
class PhiMatcher : public MatcherBase<typename M::Arc> {
public:
using FST = typename M::FST;
using Arc = typename FST::Arc;
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
// This makes a copy of the FST (w/o 'matcher' arg).
PhiMatcher(const FST &fst, MatchType match_type, Label phi_label = kNoLabel,
bool phi_loop = true,
MatcherRewriteMode rewrite_mode = MATCHER_REWRITE_AUTO,
M *matcher = nullptr)
: matcher_(matcher ? matcher : new M(fst, match_type)),
match_type_(match_type),
phi_label_(phi_label),
state_(kNoStateId),
phi_loop_(phi_loop),
error_(false) {
if (match_type == MATCH_BOTH) {
FSTERROR() << "PhiMatcher: Bad match type";
match_type_ = MATCH_NONE;
error_ = true;
}
if (rewrite_mode == MATCHER_REWRITE_AUTO) {
rewrite_both_ = fst.Properties(kAcceptor, true);
} else if (rewrite_mode == MATCHER_REWRITE_ALWAYS) {
rewrite_both_ = true;
} else {
rewrite_both_ = false;
}
}
// This doesn't copy the FST.
PhiMatcher(const FST *fst, MatchType match_type, Label phi_label = kNoLabel,
bool phi_loop = true,
MatcherRewriteMode rewrite_mode = MATCHER_REWRITE_AUTO,
M *matcher = nullptr)
: PhiMatcher(*fst, match_type, phi_label, phi_loop, rewrite_mode,
matcher ? matcher : new M(fst, match_type)) { }
// This makes a copy of the FST.
PhiMatcher(const PhiMatcher<M> &matcher, bool safe = false)
: matcher_(new M(*matcher.matcher_, safe)),
match_type_(matcher.match_type_),
phi_label_(matcher.phi_label_),
rewrite_both_(matcher.rewrite_both_),
state_(kNoStateId),
phi_loop_(matcher.phi_loop_),
error_(matcher.error_) {}
PhiMatcher<M> *Copy(bool safe = false) const override {
return new PhiMatcher<M>(*this, safe);
}
MatchType Type(bool test) const override { return matcher_->Type(test); }
void SetState(StateId s) final {
if (state_ == s) return;
matcher_->SetState(s);
state_ = s;
has_phi_ = phi_label_ != kNoLabel;
}
bool Find(Label match_label) final;
bool Done() const final { return matcher_->Done(); }
const Arc &Value() const final {
if ((phi_match_ == kNoLabel) && (phi_weight_ == Weight::One())) {
return matcher_->Value();
} else if (phi_match_ == 0) { // Virtual epsilon loop.
phi_arc_ = Arc(kNoLabel, 0, Weight::One(), state_);
if (match_type_ == MATCH_OUTPUT) {
std::swap(phi_arc_.ilabel, phi_arc_.olabel);
}
return phi_arc_;
} else {
phi_arc_ = matcher_->Value();
phi_arc_.weight = Times(phi_weight_, phi_arc_.weight);
if (phi_match_ != kNoLabel) { // Phi loop match.
if (rewrite_both_) {
if (phi_arc_.ilabel == phi_label_) phi_arc_.ilabel = phi_match_;
if (phi_arc_.olabel == phi_label_) phi_arc_.olabel = phi_match_;
} else if (match_type_ == MATCH_INPUT) {
phi_arc_.ilabel = phi_match_;
} else {
phi_arc_.olabel = phi_match_;
}
}
return phi_arc_;
}
}
void Next() final { matcher_->Next(); }
Weight Final(StateId s) const final {
auto weight = matcher_->Final(s);
if (phi_label_ == kNoLabel || weight != Weight::Zero()) {
return weight;
}
weight = Weight::One();
matcher_->SetState(s);
while (matcher_->Final(s) == Weight::Zero()) {
if (!matcher_->Find(phi_label_ == 0 ? -1 : phi_label_)) break;
weight = Times(weight, matcher_->Value().weight);
if (s == matcher_->Value().nextstate) {
return Weight::Zero(); // Does not follow phi self-loops.
}
s = matcher_->Value().nextstate;
matcher_->SetState(s);
}
weight = Times(weight, matcher_->Final(s));
return weight;
}
ssize_t Priority(StateId s) final {
if (phi_label_ != kNoLabel) {
matcher_->SetState(s);
const bool has_phi = matcher_->Find(phi_label_ == 0 ? -1 : phi_label_);
return has_phi ? kRequirePriority : matcher_->Priority(s);
} else {
return matcher_->Priority(s);
}
}
const FST &GetFst() const override { return matcher_->GetFst(); }
uint64 Properties(uint64 props) const override;
uint32 Flags() const override {
if (phi_label_ == kNoLabel || match_type_ == MATCH_NONE) {
return matcher_->Flags();
}
return matcher_->Flags() | kRequireMatch;
}
Label PhiLabel() const { return phi_label_; }
private:
mutable std::unique_ptr<M> matcher_;
MatchType match_type_; // Type of match requested.
Label phi_label_; // Label that represents the phi transition.
bool rewrite_both_; // Rewrite both sides when both are phi_label_?
bool has_phi_; // Are there possibly phis at the current state?
Label phi_match_; // Current label that matches phi loop.
mutable Arc phi_arc_; // Arc to return.
StateId state_; // Matcher state.
Weight phi_weight_; // Product of the weights of phi transitions taken.
bool phi_loop_; // When true, phi self-loops are allowed and treated
// as rho (required for Aho-Corasick).
bool error_; // Error encountered?
PhiMatcher &operator=(const PhiMatcher &) = delete;
};
template <class M>
inline bool PhiMatcher<M>::Find(Label label) {
if (label == phi_label_ && phi_label_ != kNoLabel && phi_label_ != 0) {
FSTERROR() << "PhiMatcher::Find: bad label (phi): " << phi_label_;
error_ = true;
return false;
}
matcher_->SetState(state_);
phi_match_ = kNoLabel;
phi_weight_ = Weight::One();
// If phi_label_ == 0, there are no more true epsilon arcs.
if (phi_label_ == 0) {
if (label == kNoLabel) {
return false;
}
if (label == 0) { // but a virtual epsilon loop needs to be returned.
if (!matcher_->Find(kNoLabel)) {
return matcher_->Find(0);
} else {
phi_match_ = 0;
return true;
}
}
}
if (!has_phi_ || label == 0 || label == kNoLabel) {
return matcher_->Find(label);
}
auto s = state_;
while (!matcher_->Find(label)) {
// Look for phi transition (if phi_label_ == 0, we need to look
// for -1 to avoid getting the virtual self-loop)
if (!matcher_->Find(phi_label_ == 0 ? -1 : phi_label_)) return false;
if (phi_loop_ && matcher_->Value().nextstate == s) {
phi_match_ = label;
return true;
}
phi_weight_ = Times(phi_weight_, matcher_->Value().weight);
s = matcher_->Value().nextstate;
matcher_->Next();
if (!matcher_->Done()) {
FSTERROR() << "PhiMatcher: Phi non-determinism not supported";
error_ = true;
}
matcher_->SetState(s);
}
return true;
}
template <class M>
inline uint64 PhiMatcher<M>::Properties(uint64 inprops) const {
auto outprops = matcher_->Properties(inprops);
if (error_) outprops |= kError;
if (match_type_ == MATCH_NONE) {
return outprops;
} else if (match_type_ == MATCH_INPUT) {
if (phi_label_ == 0) {
outprops &= ~(kEpsilons | kIEpsilons);
outprops |= kNoEpsilons | kNoIEpsilons;
}
if (rewrite_both_) {
return outprops &
~(kODeterministic | kNonODeterministic | kString | kILabelSorted |
kNotILabelSorted | kOLabelSorted | kNotOLabelSorted);
} else {
return outprops &
~(kODeterministic | kAcceptor | kString | kILabelSorted |
kNotILabelSorted | kOLabelSorted | kNotOLabelSorted);
}
} else if (match_type_ == MATCH_OUTPUT) {
if (phi_label_ == 0) {
outprops &= ~(kEpsilons | kOEpsilons);
outprops |= kNoEpsilons | kNoOEpsilons;
}
if (rewrite_both_) {
return outprops &
~(kIDeterministic | kNonIDeterministic | kString | kILabelSorted |
kNotILabelSorted | kOLabelSorted | kNotOLabelSorted);
} else {
return outprops &
~(kIDeterministic | kAcceptor | kString | kILabelSorted |
kNotILabelSorted | kOLabelSorted | kNotOLabelSorted);
}
} else {
// Shouldn't ever get here.
FSTERROR() << "PhiMatcher: Bad match type: " << match_type_;
return 0;
}
}
// For any requested label that doesn't match at a state, this matcher
// considers all transitions that match the label 'rho_label' (rho =
// 'rest'). Each such rho transition found is returned with the
// rho_label rewritten as the requested label (both sides if an
// acceptor, or if 'rewrite_both' is true and both input and output
// labels of the found transition are 'rho_label'). If 'rho_label' is
// kNoLabel, this special matching is not done. RhoMatcher is
// templated itself on a matcher, which is used to perform the
// underlying matching. By default, the underlying matcher is
// constructed by RhoMatcher. The user can instead pass in this
// object; in that case, RhoMatcher takes its ownership.
// No non-consuming symbols other than epsilon supported with
// the underlying template argument matcher.
template <class M>
class RhoMatcher : public MatcherBase<typename M::Arc> {
public:
using FST = typename M::FST;
using Arc = typename FST::Arc;
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
// This makes a copy of the FST (w/o 'matcher' arg).
RhoMatcher(const FST &fst, MatchType match_type, Label rho_label = kNoLabel,
MatcherRewriteMode rewrite_mode = MATCHER_REWRITE_AUTO,
M *matcher = nullptr)
: matcher_(matcher ? matcher : new M(fst, match_type)),
match_type_(match_type),
rho_label_(rho_label),
error_(false),
state_(kNoStateId),
has_rho_(false) {
if (match_type == MATCH_BOTH) {
FSTERROR() << "RhoMatcher: Bad match type";
match_type_ = MATCH_NONE;
error_ = true;
}
if (rho_label == 0) {
FSTERROR() << "RhoMatcher: 0 cannot be used as rho_label";
rho_label_ = kNoLabel;
error_ = true;
}
if (rewrite_mode == MATCHER_REWRITE_AUTO) {
rewrite_both_ = fst.Properties(kAcceptor, true);
} else if (rewrite_mode == MATCHER_REWRITE_ALWAYS) {
rewrite_both_ = true;
} else {
rewrite_both_ = false;
}
}
// This doesn't copy the FST.
RhoMatcher(const FST *fst, MatchType match_type, Label rho_label = kNoLabel,
MatcherRewriteMode rewrite_mode = MATCHER_REWRITE_AUTO,
M *matcher = nullptr)
: RhoMatcher(*fst, match_type, rho_label, rewrite_mode,
matcher ? matcher : new M(fst, match_type)) { }
// This makes a copy of the FST.
RhoMatcher(const RhoMatcher<M> &matcher, bool safe = false)
: matcher_(new M(*matcher.matcher_, safe)),
match_type_(matcher.match_type_),
rho_label_(matcher.rho_label_),
rewrite_both_(matcher.rewrite_both_),
error_(matcher.error_),
state_(kNoStateId),
has_rho_(false) {}
RhoMatcher<M> *Copy(bool safe = false) const override {
return new RhoMatcher<M>(*this, safe);
}
MatchType Type(bool test) const override { return matcher_->Type(test); }
void SetState(StateId s) final {
if (state_ == s) return;
state_ = s;
matcher_->SetState(s);
has_rho_ = rho_label_ != kNoLabel;
}
bool Find(Label label) final {
if (label == rho_label_ && rho_label_ != kNoLabel) {
FSTERROR() << "RhoMatcher::Find: bad label (rho)";
error_ = true;
return false;
}
if (matcher_->Find(label)) {
rho_match_ = kNoLabel;
return true;
} else if (has_rho_ && label != 0 && label != kNoLabel &&
(has_rho_ = matcher_->Find(rho_label_))) {
rho_match_ = label;
return true;
} else {
return false;
}
}
bool Done() const final { return matcher_->Done(); }
const Arc &Value() const final {
if (rho_match_ == kNoLabel) {
return matcher_->Value();
} else {
rho_arc_ = matcher_->Value();
if (rewrite_both_) {
if (rho_arc_.ilabel == rho_label_) rho_arc_.ilabel = rho_match_;
if (rho_arc_.olabel == rho_label_) rho_arc_.olabel = rho_match_;
} else if (match_type_ == MATCH_INPUT) {
rho_arc_.ilabel = rho_match_;
} else {
rho_arc_.olabel = rho_match_;
}
return rho_arc_;
}
}
void Next() final { matcher_->Next(); }
Weight Final(StateId s) const final { return matcher_->Final(s); }
ssize_t Priority(StateId s) final {
state_ = s;
matcher_->SetState(s);
has_rho_ = matcher_->Find(rho_label_);
if (has_rho_) {
return kRequirePriority;
} else {
return matcher_->Priority(s);
}
}
const FST &GetFst() const override { return matcher_->GetFst(); }
uint64 Properties(uint64 props) const override;
uint32 Flags() const override {
if (rho_label_ == kNoLabel || match_type_ == MATCH_NONE) {
return matcher_->Flags();
}
return matcher_->Flags() | kRequireMatch;
}
Label RhoLabel() const { return rho_label_; }
private:
std::unique_ptr<M> matcher_;
MatchType match_type_; // Type of match requested.
Label rho_label_; // Label that represents the rho transition
bool rewrite_both_; // Rewrite both sides when both are rho_label_?
Label rho_match_; // Current label that matches rho transition.
mutable Arc rho_arc_; // Arc to return when rho match.
bool error_; // Error encountered?
StateId state_; // Matcher state.
bool has_rho_; // Are there possibly rhos at the current state?
};
template <class M>
inline uint64 RhoMatcher<M>::Properties(uint64 inprops) const {
auto outprops = matcher_->Properties(inprops);
if (error_) outprops |= kError;
if (match_type_ == MATCH_NONE) {
return outprops;
} else if (match_type_ == MATCH_INPUT) {
if (rewrite_both_) {
return outprops &
~(kODeterministic | kNonODeterministic | kString | kILabelSorted |
kNotILabelSorted | kOLabelSorted | kNotOLabelSorted);
} else {
return outprops &
~(kODeterministic | kAcceptor | kString | kILabelSorted |
kNotILabelSorted);
}
} else if (match_type_ == MATCH_OUTPUT) {
if (rewrite_both_) {
return outprops &
~(kIDeterministic | kNonIDeterministic | kString | kILabelSorted |
kNotILabelSorted | kOLabelSorted | kNotOLabelSorted);
} else {
return outprops &
~(kIDeterministic | kAcceptor | kString | kOLabelSorted |
kNotOLabelSorted);
}
} else {
// Shouldn't ever get here.
FSTERROR() << "RhoMatcher: Bad match type: " << match_type_;
return 0;
}
}
// For any requested label, this matcher considers all transitions
// that match the label 'sigma_label' (sigma = "any"), in addition
// to transitions with the requested label. Each such sigma
// transition found is returned with the sigma_label rewritten as the
// requested label (both sides if an acceptor, or if 'rewrite_both' is
// true and both input and output labels of the found transition are
// 'sigma_label'). If 'sigma_label' is kNoLabel, this special
// matching is not done. SigmaMatcher is templated itself on a
// matcher, which is used to perform the underlying matching. By
// default, the underlying matcher is constructed by SigmaMatcher.
// The user can instead pass in this object; in that case,
// SigmaMatcher takes its ownership. No non-consuming symbols other
// than epsilon supported with the underlying template argument matcher.
template <class M>
class SigmaMatcher : public MatcherBase<typename M::Arc> {
public:
using FST = typename M::FST;
using Arc = typename FST::Arc;
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
// This makes a copy of the FST (w/o 'matcher' arg).
SigmaMatcher(const FST &fst, MatchType match_type,
Label sigma_label = kNoLabel,
MatcherRewriteMode rewrite_mode = MATCHER_REWRITE_AUTO,
M *matcher = nullptr)
: matcher_(matcher ? matcher : new M(fst, match_type)),
match_type_(match_type),
sigma_label_(sigma_label),
error_(false),
state_(kNoStateId) {
if (match_type == MATCH_BOTH) {
FSTERROR() << "SigmaMatcher: Bad match type";
match_type_ = MATCH_NONE;
error_ = true;
}
if (sigma_label == 0) {
FSTERROR() << "SigmaMatcher: 0 cannot be used as sigma_label";
sigma_label_ = kNoLabel;
error_ = true;
}
if (rewrite_mode == MATCHER_REWRITE_AUTO) {
rewrite_both_ = fst.Properties(kAcceptor, true);
} else if (rewrite_mode == MATCHER_REWRITE_ALWAYS) {
rewrite_both_ = true;
} else {
rewrite_both_ = false;
}
}
// This doesn't copy the FST.
SigmaMatcher(const FST *fst, MatchType match_type,
Label sigma_label = kNoLabel,
MatcherRewriteMode rewrite_mode = MATCHER_REWRITE_AUTO,
M *matcher = nullptr)
: SigmaMatcher(*fst, match_type, sigma_label, rewrite_mode,
matcher ? matcher : new M(fst, match_type)) { }
// This makes a copy of the FST.
SigmaMatcher(const SigmaMatcher<M> &matcher, bool safe = false)
: matcher_(new M(*matcher.matcher_, safe)),
match_type_(matcher.match_type_),
sigma_label_(matcher.sigma_label_),
rewrite_both_(matcher.rewrite_both_),
error_(matcher.error_),
state_(kNoStateId) {}
SigmaMatcher<M> *Copy(bool safe = false) const override {
return new SigmaMatcher<M>(*this, safe);
}
MatchType Type(bool test) const override { return matcher_->Type(test); }
void SetState(StateId s) final {
if (state_ == s) return;
state_ = s;
matcher_->SetState(s);
has_sigma_ =
(sigma_label_ != kNoLabel) ? matcher_->Find(sigma_label_) : false;
}
bool Find(Label match_label) final {
match_label_ = match_label;
if (match_label == sigma_label_ && sigma_label_ != kNoLabel) {
FSTERROR() << "SigmaMatcher::Find: bad label (sigma)";
error_ = true;
return false;
}
if (matcher_->Find(match_label)) {
sigma_match_ = kNoLabel;
return true;
} else if (has_sigma_ && match_label != 0 && match_label != kNoLabel &&
matcher_->Find(sigma_label_)) {
sigma_match_ = match_label;
return true;
} else {
return false;
}
}
bool Done() const final { return matcher_->Done(); }
const Arc &Value() const final {
if (sigma_match_ == kNoLabel) {
return matcher_->Value();
} else {
sigma_arc_ = matcher_->Value();
if (rewrite_both_) {
if (sigma_arc_.ilabel == sigma_label_) sigma_arc_.ilabel = sigma_match_;
if (sigma_arc_.olabel == sigma_label_) sigma_arc_.olabel = sigma_match_;
} else if (match_type_ == MATCH_INPUT) {
sigma_arc_.ilabel = sigma_match_;
} else {
sigma_arc_.olabel = sigma_match_;
}
return sigma_arc_;
}
}
void Next() final {
matcher_->Next();
if (matcher_->Done() && has_sigma_ && (sigma_match_ == kNoLabel) &&
(match_label_ > 0)) {
matcher_->Find(sigma_label_);
sigma_match_ = match_label_;
}
}
Weight Final(StateId s) const final { return matcher_->Final(s); }
ssize_t Priority(StateId s) final {
if (sigma_label_ != kNoLabel) {
SetState(s);
return has_sigma_ ? kRequirePriority : matcher_->Priority(s);
} else {
return matcher_->Priority(s);
}
}
const FST &GetFst() const override { return matcher_->GetFst(); }
uint64 Properties(uint64 props) const override;
uint32 Flags() const override {
if (sigma_label_ == kNoLabel || match_type_ == MATCH_NONE) {
return matcher_->Flags();
}
return matcher_->Flags() | kRequireMatch;
}
Label SigmaLabel() const { return sigma_label_; }
private:
std::unique_ptr<M> matcher_;
MatchType match_type_; // Type of match requested.
Label sigma_label_; // Label that represents the sigma transition.
bool rewrite_both_; // Rewrite both sides when both are sigma_label_?
bool has_sigma_; // Are there sigmas at the current state?
Label sigma_match_; // Current label that matches sigma transition.
mutable Arc sigma_arc_; // Arc to return when sigma match.
Label match_label_; // Label being matched.
bool error_; // Error encountered?
StateId state_; // Matcher state.
};
template <class M>
inline uint64 SigmaMatcher<M>::Properties(uint64 inprops) const {
auto outprops = matcher_->Properties(inprops);
if (error_) outprops |= kError;
if (match_type_ == MATCH_NONE) {
return outprops;
} else if (rewrite_both_) {
return outprops &
~(kIDeterministic | kNonIDeterministic | kODeterministic |
kNonODeterministic | kILabelSorted | kNotILabelSorted |
kOLabelSorted | kNotOLabelSorted | kString);
} else if (match_type_ == MATCH_INPUT) {
return outprops &
~(kIDeterministic | kNonIDeterministic | kODeterministic |
kNonODeterministic | kILabelSorted | kNotILabelSorted | kString |
kAcceptor);
} else if (match_type_ == MATCH_OUTPUT) {
return outprops &
~(kIDeterministic | kNonIDeterministic | kODeterministic |
kNonODeterministic | kOLabelSorted | kNotOLabelSorted | kString |
kAcceptor);
} else {
// Shouldn't ever get here.
FSTERROR() << "SigmaMatcher: Bad match type: " << match_type_;
return 0;
}
}
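// Illustrative use (a sketch, not part of the library): composition with an
// FST whose arcs carry a sigma ("match anything") label. The label value and
// FST names below are hypothetical.
//
//   using SM = SigmaMatcher<SortedMatcher<StdFst>>;
//   const StdArc::Label kSigma = 42;  // Reserved sigma label in fst2.
//   ComposeFstOptions<StdArc, SM> opts;
//   opts.matcher1 = new SM(fst1, MATCH_NONE);
//   opts.matcher2 = new SM(fst2, MATCH_INPUT, kSigma);
//   StdComposeFst cfst(fst1, fst2, opts);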
// Flags for MultiEpsMatcher.
// Return multi-epsilon arcs for Find(kNoLabel).
const uint32 kMultiEpsList = 0x00000001;
// Return a kNolabel loop for Find(multi_eps).
const uint32 kMultiEpsLoop = 0x00000002;
// MultiEpsMatcher: allows treating multiple non-0 labels as non-consuming
// labels, in addition to 0, which is always non-consuming. The precise
// behavior is controlled by the 'flags' argument. By default, the
// underlying matcher is constructed by MultiEpsMatcher. The user can
// instead pass one in; in that case, MultiEpsMatcher takes ownership of it
// iff 'own_matcher' is true.
template <class M>
class MultiEpsMatcher {
public:
using FST = typename M::FST;
using Arc = typename FST::Arc;
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
// This makes a copy of the FST (w/o 'matcher' arg).
MultiEpsMatcher(const FST &fst, MatchType match_type,
uint32 flags = (kMultiEpsLoop | kMultiEpsList),
M *matcher = nullptr, bool own_matcher = true)
: matcher_(matcher ? matcher : new M(fst, match_type)),
flags_(flags),
own_matcher_(matcher ? own_matcher : true) {
Init(match_type);
}
// This doesn't copy the FST.
MultiEpsMatcher(const FST *fst, MatchType match_type,
uint32 flags = (kMultiEpsLoop | kMultiEpsList),
M *matcher = nullptr, bool own_matcher = true)
: matcher_(matcher ? matcher : new M(fst, match_type)),
flags_(flags),
own_matcher_(matcher ? own_matcher : true) {
Init(match_type);
}
// This makes a copy of the FST.
MultiEpsMatcher(const MultiEpsMatcher<M> &matcher, bool safe = false)
: matcher_(new M(*matcher.matcher_, safe)),
flags_(matcher.flags_),
own_matcher_(true),
multi_eps_labels_(matcher.multi_eps_labels_),
loop_(matcher.loop_) {
loop_.nextstate = kNoStateId;
}
~MultiEpsMatcher() {
if (own_matcher_) delete matcher_;
}
MultiEpsMatcher<M> *Copy(bool safe = false) const {
return new MultiEpsMatcher<M>(*this, safe);
}
MatchType Type(bool test) const { return matcher_->Type(test); }
void SetState(StateId state) {
matcher_->SetState(state);
loop_.nextstate = state;
}
bool Find(Label label);
bool Done() const { return done_; }
const Arc &Value() const { return current_loop_ ? loop_ : matcher_->Value(); }
void Next() {
if (!current_loop_) {
matcher_->Next();
done_ = matcher_->Done();
if (done_ && multi_eps_iter_ != multi_eps_labels_.End()) {
++multi_eps_iter_;
while ((multi_eps_iter_ != multi_eps_labels_.End()) &&
!matcher_->Find(*multi_eps_iter_)) {
++multi_eps_iter_;
}
if (multi_eps_iter_ != multi_eps_labels_.End()) {
done_ = false;
} else {
done_ = !matcher_->Find(kNoLabel);
}
}
} else {
done_ = true;
}
}
const FST &GetFst() const { return matcher_->GetFst(); }
uint64 Properties(uint64 props) const { return matcher_->Properties(props); }
const M *GetMatcher() const { return matcher_; }
Weight Final(StateId s) const { return matcher_->Final(s); }
uint32 Flags() const { return matcher_->Flags(); }
ssize_t Priority(StateId s) { return matcher_->Priority(s); }
void AddMultiEpsLabel(Label label) {
if (label == 0) {
FSTERROR() << "MultiEpsMatcher: Bad multi-eps label: 0";
} else {
multi_eps_labels_.Insert(label);
}
}
void RemoveMultiEpsLabel(Label label) {
if (label == 0) {
FSTERROR() << "MultiEpsMatcher: Bad multi-eps label: 0";
} else {
multi_eps_labels_.Erase(label);
}
}
void ClearMultiEpsLabels() { multi_eps_labels_.Clear(); }
private:
void Init(MatchType match_type) {
if (match_type == MATCH_INPUT) {
loop_.ilabel = kNoLabel;
loop_.olabel = 0;
} else {
loop_.ilabel = 0;
loop_.olabel = kNoLabel;
}
loop_.weight = Weight::One();
loop_.nextstate = kNoStateId;
}
M *matcher_;
uint32 flags_;
bool own_matcher_; // Does this class delete the matcher?
// Multi-eps label set.
CompactSet<Label, kNoLabel> multi_eps_labels_;
typename CompactSet<Label, kNoLabel>::const_iterator multi_eps_iter_;
bool current_loop_; // Current arc is the implicit loop?
mutable Arc loop_; // For non-consuming symbols.
bool done_; // Matching done?
MultiEpsMatcher &operator=(const MultiEpsMatcher &) = delete;
};
template <class M>
inline bool MultiEpsMatcher<M>::Find(Label label) {
multi_eps_iter_ = multi_eps_labels_.End();
current_loop_ = false;
bool ret;
if (label == 0) {
ret = matcher_->Find(0);
} else if (label == kNoLabel) {
if (flags_ & kMultiEpsList) {
// Returns all non-consuming arcs (including epsilon).
multi_eps_iter_ = multi_eps_labels_.Begin();
while ((multi_eps_iter_ != multi_eps_labels_.End()) &&
!matcher_->Find(*multi_eps_iter_)) {
++multi_eps_iter_;
}
if (multi_eps_iter_ != multi_eps_labels_.End()) {
ret = true;
} else {
ret = matcher_->Find(kNoLabel);
}
} else {
// Returns all epsilon arcs.
ret = matcher_->Find(kNoLabel);
}
} else if ((flags_ & kMultiEpsLoop) &&
multi_eps_labels_.Find(label) != multi_eps_labels_.End()) {
// Returns implicit loop.
current_loop_ = true;
ret = true;
} else {
ret = matcher_->Find(label);
}
done_ = !ret;
return ret;
}
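// Illustrative use (a sketch; the label value is hypothetical):
//
//   MultiEpsMatcher<SortedMatcher<StdFst>> matcher(fst, MATCH_INPUT);
//   matcher.AddMultiEpsLabel(5);  // Treat label 5 as non-consuming.
//   matcher.SetState(state);
//   // With kMultiEpsList set, Find(kNoLabel) enumerates all non-consuming
//   // (epsilon and multi-epsilon) arcs at the state.
//   if (matcher.Find(kNoLabel)) {
//     for (; !matcher.Done(); matcher.Next()) {
//       const auto &arc = matcher.Value();
//     }
//   }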
// This class discards any implicit matches (e.g., the implicit epsilon
// self-loops in the SortedMatcher). Matchers are most often used in
// composition/intersection, where the implicit matches are needed, e.g.,
// for epsilon processing. However, if a matcher is simply being used to
// look up explicit label matches, this class saves the user from having
// to check for and discard the unwanted implicit matches themselves.
template <class M>
class ExplicitMatcher : public MatcherBase<typename M::Arc> {
public:
using FST = typename M::FST;
using Arc = typename FST::Arc;
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
// This makes a copy of the FST.
ExplicitMatcher(const FST &fst, MatchType match_type, M *matcher = nullptr)
: matcher_(matcher ? matcher : new M(fst, match_type)),
match_type_(match_type),
error_(false) {}
// This doesn't copy the FST.
ExplicitMatcher(const FST *fst, MatchType match_type, M *matcher = nullptr)
: matcher_(matcher ? matcher : new M(fst, match_type)),
match_type_(match_type),
error_(false) {}
// This makes a copy of the FST.
ExplicitMatcher(const ExplicitMatcher<M> &matcher, bool safe = false)
: matcher_(new M(*matcher.matcher_, safe)),
match_type_(matcher.match_type_),
error_(matcher.error_) {}
ExplicitMatcher<M> *Copy(bool safe = false) const override {
return new ExplicitMatcher<M>(*this, safe);
}
MatchType Type(bool test) const override { return matcher_->Type(test); }
void SetState(StateId s) final { matcher_->SetState(s); }
bool Find(Label label) final {
matcher_->Find(label);
CheckArc();
return !Done();
}
bool Done() const final { return matcher_->Done(); }
const Arc &Value() const final { return matcher_->Value(); }
void Next() final {
matcher_->Next();
CheckArc();
}
Weight Final(StateId s) const final { return matcher_->Final(s); }
ssize_t Priority(StateId s) final { return matcher_->Priority(s); }
const FST &GetFst() const final { return matcher_->GetFst(); }
uint64 Properties(uint64 inprops) const override {
return matcher_->Properties(inprops);
}
const M *GetMatcher() const { return matcher_.get(); }
uint32 Flags() const override { return matcher_->Flags(); }
private:
// Checks current arc if available and explicit. If not available, stops. If
// not explicit, checks next ones.
void CheckArc() {
for (; !matcher_->Done(); matcher_->Next()) {
const auto label = match_type_ == MATCH_INPUT ? matcher_->Value().ilabel
: matcher_->Value().olabel;
if (label != kNoLabel) return;
}
}
std::unique_ptr<M> matcher_;
MatchType match_type_; // Type of match requested.
bool error_; // Error encountered?
};
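// Illustrative use (a sketch): enumerating only explicit epsilon arcs,
// without the implicit self-loops the underlying matcher would report.
//
//   ExplicitMatcher<SortedMatcher<StdFst>> matcher(fst, MATCH_INPUT);
//   matcher.SetState(state);
//   if (matcher.Find(0)) {
//     for (; !matcher.Done(); matcher.Next()) {
//       const auto &arc = matcher.Value();  // Explicit matches only.
//     }
//   }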
// Generic matcher, templated on the FST definition.
//
// Here is a typical use:
//
// Matcher<StdFst> matcher(fst, MATCH_INPUT);
// matcher.SetState(state);
// if (matcher.Find(label))
// for (; !matcher.Done(); matcher.Next()) {
// auto &arc = matcher.Value();
// ...
// }
template <class F>
class Matcher {
public:
using FST = F;
using Arc = typename F::Arc;
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
// This makes a copy of the FST.
Matcher(const FST &fst, MatchType match_type)
: owned_fst_(fst.Copy()),
base_(owned_fst_->InitMatcher(match_type)) {
if (!base_) base_.reset(new SortedMatcher<FST>(owned_fst_.get(),
match_type));
}
// This doesn't copy the FST.
Matcher(const FST *fst, MatchType match_type)
: base_(fst->InitMatcher(match_type)) {
if (!base_) base_.reset(new SortedMatcher<FST>(fst, match_type));
}
// This makes a copy of the FST.
Matcher(const Matcher<FST> &matcher, bool safe = false)
: base_(matcher.base_->Copy(safe)) { }
// Takes ownership of the provided matcher.
explicit Matcher(MatcherBase<Arc> *base_matcher)
: base_(base_matcher) { }
Matcher<FST> *Copy(bool safe = false) const {
return new Matcher<FST>(*this, safe);
}
MatchType Type(bool test) const { return base_->Type(test); }
void SetState(StateId s) { base_->SetState(s); }
bool Find(Label label) { return base_->Find(label); }
bool Done() const { return base_->Done(); }
const Arc &Value() const { return base_->Value(); }
void Next() { base_->Next(); }
const FST &GetFst() const {
return static_cast<const FST &>(base_->GetFst());
}
uint64 Properties(uint64 props) const { return base_->Properties(props); }
Weight Final(StateId s) const { return base_->Final(s); }
uint32 Flags() const { return base_->Flags() & kMatcherFlags; }
ssize_t Priority(StateId s) { return base_->Priority(s); }
private:
std::unique_ptr<const FST> owned_fst_;
std::unique_ptr<MatcherBase<Arc>> base_;
};
} // namespace fst
#endif // FST_MATCHER_H_
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions/const/const8-fst.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/fst.h>
#include <fst/const-fst.h>
namespace fst {
static FstRegisterer<ConstFst<StdArc, uint8>> ConstFst_StdArc_uint8_registerer;
static FstRegisterer<ConstFst<LogArc, uint8>> ConstFst_LogArc_uint8_registerer;
static FstRegisterer<ConstFst<Log64Arc, uint8>>
ConstFst_Log64Arc_uint8_registerer;
} // namespace fst
| 0 |
coqui_public_repos/STT-models/amharic/itml | coqui_public_repos/STT-models/amharic/itml/v0.1.0/alphabet.txt |
ሀ
ሁ
ሂ
ሃ
ሄ
ህ
ሆ
ለ
ሉ
ሊ
ላ
ሌ
ል
ሎ
ሏ
ሐ
ሑ
ሒ
ሓ
ሔ
ሕ
ሖ
ሗ
መ
ሙ
ሚ
ማ
ሜ
ም
ሞ
ሟ
ሠ
ሡ
ሢ
ሣ
ሤ
ሥ
ሦ
ሧ
ረ
ሩ
ሪ
ራ
ሬ
ር
ሮ
ሯ
ሰ
ሱ
ሲ
ሳ
ሴ
ስ
ሶ
ሷ
ሸ
ሹ
ሺ
ሻ
ሼ
ሽ
ሾ
ሿ
ቀ
ቁ
ቂ
ቃ
ቄ
ቅ
ቆ
ቈ
ቊ
ቋ
ቌ
ቍ
በ
ቡ
ቢ
ባ
ቤ
ብ
ቦ
ቧ
ቨ
ቩ
ቪ
ቫ
ቬ
ቭ
ቮ
ቯ
ተ
ቱ
ቲ
ታ
ቴ
ት
ቶ
ቷ
ቸ
ቹ
ቺ
ቻ
ቼ
ች
ቾ
ቿ
ኀ
ኁ
ኂ
ኃ
ኄ
ኅ
ኆ
ኈ
ኊ
ኋ
ኌ
ኍ
ነ
ኑ
ኒ
ና
ኔ
ን
ኖ
ኗ
ኘ
ኙ
ኚ
ኛ
ኜ
ኝ
ኞ
ኟ
አ
ኡ
ኢ
ኣ
ኤ
እ
ኦ
ኧ
ከ
ኩ
ኪ
ካ
ኬ
ክ
ኮ
ኰ
ኲ
ኳ
ኴ
ኵ
ኸ
ኹ
ኺ
ኻ
ኼ
ኽ
ኾ
ዀ
ዂ
ዃ
ዄ
ዅ
ወ
ዉ
ዊ
ዋ
ዌ
ው
ዎ
ዐ
ዑ
ዒ
ዓ
ዔ
ዕ
ዖ
ዘ
ዙ
ዚ
ዛ
ዜ
ዝ
ዞ
ዟ
ዠ
ዡ
ዢ
ዣ
ዤ
ዥ
ዦ
ዧ
የ
ዩ
ዪ
ያ
ዬ
ይ
ዮ
ደ
ዱ
ዲ
ዳ
ዴ
ድ
ዶ
ዷ
ጀ
ጁ
ጂ
ጃ
ጄ
ጅ
ጆ
ጇ
ገ
ጉ
ጊ
ጋ
ጌ
ግ
ጎ
ጐ
ጒ
ጓ
ጔ
ጕ
ጠ
ጡ
ጢ
ጣ
ጤ
ጥ
ጦ
ጧ
ጨ
ጩ
ጪ
ጫ
ጬ
ጭ
ጮ
ጯ
ጰ
ጱ
ጲ
ጳ
ጴ
ጵ
ጶ
ጷ
ጸ
ጹ
ጺ
ጻ
ጼ
ጽ
ጾ
ጿ
ፀ
ፁ
ፂ
ፃ
ፄ
ፅ
ፆ
ፈ
ፉ
ፊ
ፋ
ፌ
ፍ
ፎ
ፏ
ፐ
ፑ
ፒ
ፓ
ፔ
ፕ
ፖ
ፗ
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/weight.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// General weight set and associated semiring operation definitions.
#ifndef FST_WEIGHT_H_
#define FST_WEIGHT_H_
#include <cctype>
#include <cmath>
#include <iostream>
#include <sstream>
#include <type_traits>
#include <utility>
#include <fst/compat.h>
#include <fst/log.h>
#include <fst/util.h>
DECLARE_string(fst_weight_parentheses);
DECLARE_string(fst_weight_separator);
namespace fst {
// A semiring is specified by two binary operations Plus and Times and two
// designated elements Zero and One with the following properties:
//
// Plus: associative, commutative, and has Zero as its identity.
//
// Times: associative and has identity One, distributes w.r.t. Plus, and
// has Zero as an annihilator:
// Times(Zero(), a) == Times(a, Zero()) = Zero().
//
// A left semiring distributes on the left; a right semiring is similarly
// defined.
//
// A Weight class must have binary functions Plus and Times and static member
// functions Zero() and One() and these must form (at least) a left or right
// semiring.
//
// In addition, the following should be defined for a Weight:
//
// Member: predicate on set membership.
//
// NoWeight: static member function that returns an element that is
// not a set member; used to signal an error.
//
// >>: reads textual representation of a weight.
//
// <<: prints textual representation of a weight.
//
// Read(istream &istrm): reads binary representation of a weight.
//
// Write(ostream &ostrm): writes binary representation of a weight.
//
// Hash: maps weight to size_t.
//
// ApproxEqual: approximate equality (for inexact weights)
//
// Quantize: quantizes w.r.t delta (for inexact weights)
//
// Divide: for all a, b, c s.t. Times(a, b) == c
//
// --> b' = Divide(c, a, DIVIDE_LEFT) if a left semiring, b'.Member()
// and Times(a, b') == c
// --> a' = Divide(c, b, DIVIDE_RIGHT) if a right semiring, a'.Member()
// and Times(a', b) == c
// --> b' = Divide(c, a) = Divide(c, a, DIVIDE_ANY) =
// Divide(c, a, DIVIDE_LEFT) = Divide(c, a, DIVIDE_RIGHT) if a
// commutative semiring, b'.Member() and Times(a, b') = Times(b', a) = c
//
// ReverseWeight: the type of the corresponding reverse weight.
//
// Typically the same type as Weight for a (both left and right) semiring.
// For the left string semiring, it is the right string semiring.
//
// Reverse: a mapping from Weight to ReverseWeight s.t.
//
// --> Reverse(Reverse(a)) = a
// --> Reverse(Plus(a, b)) = Plus(Reverse(a), Reverse(b))
// --> Reverse(Times(a, b)) = Times(Reverse(b), Reverse(a))
// Typically the identity mapping in a (both left and right) semiring.
// In the left string semiring, it maps to the reverse string in the right
// string semiring.
//
// Properties: specifies additional properties that hold:
// LeftSemiring: indicates weights form a left semiring.
// RightSemiring: indicates weights form a right semiring.
// Commutative: for all a,b: Times(a,b) == Times(b,a)
// Idempotent: for all a: Plus(a, a) == a.
// Path: for all a, b: Plus(a, b) == a or Plus(a, b) == b.
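// As a concrete illustration (a sketch; TropicalWeight is defined in
// float-weight.h, not in this header):
//
//   TropicalWeight a(1.5), b(2.5);
//   Plus(a, b);                          // min(1.5, 2.5) = 1.5
//   Times(a, b);                         // 1.5 + 2.5 = 4.0
//   Divide(Times(a, b), a, DIVIDE_ANY);  // Recovers b; this semiring is
//                                        // commutative.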
// CONSTANT DEFINITIONS
// A representable float near .001.
constexpr float kDelta = 1.0F / 1024.0F;
// For all a, b, c: Times(c, Plus(a, b)) = Plus(Times(c, a), Times(c, b)).
constexpr uint64 kLeftSemiring = 0x0000000000000001ULL;
// For all a, b, c: Times(Plus(a, b), c) = Plus(Times(a, c), Times(b, c)).
constexpr uint64 kRightSemiring = 0x0000000000000002ULL;
constexpr uint64 kSemiring = kLeftSemiring | kRightSemiring;
// For all a, b: Times(a, b) = Times(b, a).
constexpr uint64 kCommutative = 0x0000000000000004ULL;
// For all a: Plus(a, a) = a.
constexpr uint64 kIdempotent = 0x0000000000000008ULL;
// For all a, b: Plus(a, b) = a or Plus(a, b) = b.
constexpr uint64 kPath = 0x0000000000000010ULL;
// For random weight generation: default number of distinct weights.
// This is also used for a few other weight generation defaults.
constexpr size_t kNumRandomWeights = 5;
// Weight property boolean constants needed for SFINAE.
template <class W>
using IsIdempotent = std::integral_constant<bool,
(W::Properties() & kIdempotent) != 0>;
template <class W>
using IsPath = std::integral_constant<bool, (W::Properties() & kPath) != 0>;
// Determines direction of division.
enum DivideType {
DIVIDE_LEFT, // left division
DIVIDE_RIGHT, // right division
DIVIDE_ANY
}; // division in a commutative semiring
// NATURAL ORDER
//
// By definition:
//
// a <= b iff a + b = a
//
// The natural order is a negative partial order iff the semiring is
// idempotent. It is trivially monotonic for plus. It is left
// (resp. right) monotonic for times iff the semiring is left
// (resp. right) distributive. It is a total order iff the semiring
// has the path property.
//
// For more information, see:
//
// Mohri, M. 2002. Semiring framework and algorithms for shortest-distance
// problems, Journal of Automata, Languages and
// Combinatorics 7(3): 321-350, 2002.
//
// We define the strict version of this order below.
// Declares the template with a second parameter determining whether or not it
// can actually be constructed.
template <class W, class IdempotentType = void>
class NaturalLess;
// Variant for idempotent weights.
template <class W>
class NaturalLess<W, typename std::enable_if<IsIdempotent<W>::value>::type> {
public:
using Weight = W;
NaturalLess() {}
bool operator()(const Weight &w1, const Weight &w2) const {
return w1 != w2 && Plus(w1, w2) == w1;
}
};
// Non-constructible variant for non-idempotent weights.
template <class W>
class NaturalLess<W, typename std::enable_if<!IsIdempotent<W>::value>::type> {
public:
using Weight = W;
// TODO(kbg): Trace down anywhere this is being instantiated, then add a
// static_assert to prevent this from being instantiated.
NaturalLess() {
FSTERROR() << "NaturalLess: Weight type is not idempotent: " << W::Type();
}
bool operator()(const Weight &, const Weight &) const { return false; }
};
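// For example (a sketch): in the tropical semiring, where Plus is min, the
// natural order coincides with the usual numeric less-than:
//
//   NaturalLess<TropicalWeight> less;
//   less(TropicalWeight(1.0), TropicalWeight(2.0));  // true: Plus(1, 2) = 1.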
// Power is the iterated product for arbitrary semirings such that Power(w, 0)
// is One() for the semiring, and Power(w, n) = Times(Power(w, n - 1), w).
template <class Weight>
Weight Power(const Weight &weight, size_t n) {
auto result = Weight::One();
for (size_t i = 0; i < n; ++i) result = Times(result, weight);
return result;
}
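// For example (illustrative): in the tropical semiring, where Times is
// arithmetic addition, Power(TropicalWeight(2.0), 3) is TropicalWeight(6.0),
// and Power(w, 0) is One(), i.e., TropicalWeight(0.0), for any w.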
// Simple default adder class. Specializations might be more complex.
template <class Weight>
class Adder {
public:
explicit Adder(Weight w = Weight::Zero()) : sum_(w) { }
Weight Add(const Weight &w) {
sum_ = Plus(sum_, w);
return sum_;
}
Weight Sum() { return sum_; }
void Reset(Weight w = Weight::Zero()) { sum_ = w; }
private:
Weight sum_;
};
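// Typical use (a sketch; `weights` is a hypothetical container):
//
//   Adder<TropicalWeight> adder;  // Initialized to Zero().
//   for (const auto &w : weights) adder.Add(w);
//   const auto total = adder.Sum();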
// General weight converter: raises error.
template <class W1, class W2>
struct WeightConvert {
W2 operator()(W1 w1) const {
FSTERROR() << "WeightConvert: Can't convert weight from \"" << W1::Type()
<< "\" to \"" << W2::Type();
return W2::NoWeight();
}
};
// Specialized weight converter to self.
template <class W>
struct WeightConvert<W, W> {
W operator()(W weight) const { return weight; }
};
// General random weight generator: raises error.
template <class W>
struct WeightGenerate {
W operator()() const {
FSTERROR() << "WeightGenerate: No random generator for " << W::Type();
return W::NoWeight();
}
};
namespace internal {
class CompositeWeightIO {
public:
CompositeWeightIO();
CompositeWeightIO(char separator, std::pair<char, char> parentheses);
std::pair<char, char> parentheses() const {
return {open_paren_, close_paren_};
}
char separator() const { return separator_; }
bool error() const { return error_; }
protected:
const char separator_;
const char open_paren_;
const char close_paren_;
private:
bool error_;
};
} // namespace internal
// Helper class for writing textual composite weights.
class CompositeWeightWriter : public internal::CompositeWeightIO {
public:
// Uses configuration from flags (FLAGS_fst_weight_separator,
// FLAGS_fst_weight_parentheses).
explicit CompositeWeightWriter(std::ostream &ostrm);
// parentheses defines the opening and closing parenthesis characters.
// Set parentheses = {0, 0} to disable writing parenthesis.
CompositeWeightWriter(std::ostream &ostrm, char separator,
std::pair<char, char> parentheses);
CompositeWeightWriter(const CompositeWeightWriter &) = delete;
CompositeWeightWriter &operator=(const CompositeWeightWriter &) = delete;
// Writes open parenthesis to a stream if option selected.
void WriteBegin();
// Writes element to a stream.
template <class T>
void WriteElement(const T &comp) {
if (i_++ > 0) ostrm_ << separator_;
ostrm_ << comp;
}
// Writes close parenthesis to a stream if option selected.
void WriteEnd();
private:
std::ostream &ostrm_;
int i_ = 0; // Element position.
};
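// Illustrative use (a sketch; the element types are arbitrary, as long as
// they can be written to a stream):
//
//   CompositeWeightWriter writer(std::cout, ',', {'(', ')'});
//   writer.WriteBegin();
//   writer.WriteElement(TropicalWeight(1.5));
//   writer.WriteElement(TropicalWeight(2.5));
//   writer.WriteEnd();  // Emits "(1.5,2.5)".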
// Helper class for reading textual composite weights. Elements are separated by
// a separator character. There must be at least one element per textual
// representation. Parentheses characters should be set if the composite
// weights themselves contain composite weights to ensure proper parsing.
class CompositeWeightReader : public internal::CompositeWeightIO {
public:
// Uses configuration from flags (FLAGS_fst_weight_separator,
// FLAGS_fst_weight_parentheses).
explicit CompositeWeightReader(std::istream &istrm);
// parentheses defines the opening and closing parenthesis characters.
// Set parentheses = {0, 0} to disable reading parenthesis.
CompositeWeightReader(std::istream &istrm, char separator,
std::pair<char, char> parentheses);
CompositeWeightReader(const CompositeWeightReader &) = delete;
CompositeWeightReader &operator=(const CompositeWeightReader &) = delete;
// Reads open parenthesis from a stream if option selected.
void ReadBegin();
// Reads element from a stream. The second argument, when true, indicates that
// this will be the last element (allowing more forgiving formatting of the
// last element). Returns false when last element is read.
template <class T>
bool ReadElement(T *comp, bool last = false);
// Finalizes reading.
void ReadEnd();
private:
std::istream &istrm_; // Input stream.
int c_ = 0; // Last character read, or EOF.
int depth_ = 0; // Weight parentheses depth.
};
template <class T>
inline bool CompositeWeightReader::ReadElement(T *comp, bool last) {
string s;
const bool has_parens = open_paren_ != 0;
while ((c_ != std::istream::traits_type::eof()) && !std::isspace(c_) &&
(c_ != separator_ || depth_ > 1 || last) &&
(c_ != close_paren_ || depth_ != 1)) {
s += c_;
// If parentheses encountered before separator, they must be matched.
if (has_parens && c_ == open_paren_) {
++depth_;
} else if (has_parens && c_ == close_paren_) {
// Failure on unmatched parentheses.
if (depth_ == 0) {
FSTERROR() << "CompositeWeightReader: Unmatched close paren: "
<< "Is the fst_weight_parentheses flag set correctly?";
istrm_.clear(std::ios::badbit);
return false;
}
--depth_;
}
c_ = istrm_.get();
}
if (s.empty()) {
FSTERROR() << "CompositeWeightReader: Empty element: "
<< "Is the fst_weight_parentheses flag set correctly?";
istrm_.clear(std::ios::badbit);
return false;
}
std::istringstream istrm(s);
istrm >> *comp;
// Skips separator/close parenthesis.
if (c_ != std::istream::traits_type::eof() && !std::isspace(c_)) {
c_ = istrm_.get();
}
const bool is_eof = c_ == std::istream::traits_type::eof();
// Clears fail bit if just EOF.
if (is_eof && !istrm_.bad()) istrm_.clear(std::ios::eofbit);
return !is_eof && !std::isspace(c_);
}
} // namespace fst
#endif // FST_WEIGHT_H_
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include | coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst/test-properties.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Functions to manipulate and test property bits.
#ifndef FST_TEST_PROPERTIES_H_
#define FST_TEST_PROPERTIES_H_
#include <unordered_set>
#include <fst/flags.h>
#include <fst/log.h>
#include <fst/connect.h>
#include <fst/dfs-visit.h>
DECLARE_bool(fst_verify_properties);
namespace fst {
// namespace internal {
// For a binary property, the bit is always returned set. For a trinary (i.e.,
// two-bit) property, both bits are returned set iff either corresponding input
// bit is set.
inline uint64 KnownProperties(uint64 props) {
return kBinaryProperties | (props & kTrinaryProperties) |
((props & kPosTrinaryProperties) << 1) |
((props & kNegTrinaryProperties) >> 1);
}
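// For example, if props has the positive trinary bit kAcceptor set, the
// result has both kAcceptor and kNotAcceptor set, signaling that the
// acceptor property is known (and, per the input, true).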
// Tests compatibility between two sets of properties.
inline bool CompatProperties(uint64 props1, uint64 props2) {
const auto known_props1 = KnownProperties(props1);
const auto known_props2 = KnownProperties(props2);
const auto known_props = known_props1 & known_props2;
const auto incompat_props = (props1 & known_props) ^ (props2 & known_props);
if (incompat_props) {
uint64 prop = 1;
for (int i = 0; i < 64; ++i, prop <<= 1) {
if (prop & incompat_props) {
LOG(ERROR) << "CompatProperties: Mismatch: " << PropertyNames[i]
<< ": props1 = " << (props1 & prop ? "true" : "false")
<< ", props2 = " << (props2 & prop ? "true" : "false");
}
}
return false;
} else {
return true;
}
}
// Computes FST property values defined in properties.h. The value of each
// property indicated in the mask will be determined and returned (these will
// never be unknown here). In the course of determining the properties
// specifically requested in the mask, certain other properties may be
// determined (those with little additional expense) and their values will be
// returned as well. The complete set of known properties (whether true or
// false) determined by this operation will be assigned to the value pointed
// to by KNOWN. If 'use_stored' is true, pre-computed FST properties may be used
// when possible. 'mask & required_mask' is used to determine whether the stored
// properties can be used. This routine is seldom called directly; instead it is
// used to implement fst.Properties(mask, true).
template <class Arc>
uint64 ComputeProperties(const Fst<Arc> &fst, uint64 mask, uint64 *known,
bool use_stored) {
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
const auto fst_props = fst.Properties(kFstProperties, false); // FST-stored.
// Check stored FST properties first if allowed.
if (use_stored) {
const auto known_props = KnownProperties(fst_props);
// If FST contains required info, return it.
if ((known_props & mask) == mask) {
if (known) *known = known_props;
return fst_props;
}
}
// Computes (trinary) properties explicitly.
// Initialize with binary properties (already known).
uint64 comp_props = fst_props & kBinaryProperties;
  // Computes these trinary properties with a DFS. We compute only those that
  // need a DFS here, and otherwise avoid the DFS, since its stack could grow
  // large.
uint64 dfs_props = kCyclic | kAcyclic | kInitialCyclic | kInitialAcyclic |
kAccessible | kNotAccessible | kCoAccessible |
kNotCoAccessible;
std::vector<StateId> scc;
if (mask & (dfs_props | kWeightedCycles | kUnweightedCycles)) {
SccVisitor<Arc> scc_visitor(&scc, nullptr, nullptr, &comp_props);
DfsVisit(fst, &scc_visitor);
}
  // Computes any remaining trinary properties via state and arc iterations.
if (mask & ~(kBinaryProperties | dfs_props)) {
comp_props |= kAcceptor | kNoEpsilons | kNoIEpsilons | kNoOEpsilons |
kILabelSorted | kOLabelSorted | kUnweighted | kTopSorted |
kString;
if (mask & (kIDeterministic | kNonIDeterministic)) {
comp_props |= kIDeterministic;
}
if (mask & (kODeterministic | kNonODeterministic)) {
comp_props |= kODeterministic;
}
if (mask & (dfs_props | kWeightedCycles | kUnweightedCycles)) {
comp_props |= kUnweightedCycles;
}
std::unique_ptr<std::unordered_set<Label>> ilabels;
std::unique_ptr<std::unordered_set<Label>> olabels;
StateId nfinal = 0;
for (StateIterator<Fst<Arc>> siter(fst); !siter.Done(); siter.Next()) {
StateId s = siter.Value();
Arc prev_arc;
// Creates these only if we need to.
if (mask & (kIDeterministic | kNonIDeterministic)) {
ilabels.reset(new std::unordered_set<Label>());
}
if (mask & (kODeterministic | kNonODeterministic)) {
olabels.reset(new std::unordered_set<Label>());
}
bool first_arc = true;
for (ArcIterator<Fst<Arc>> aiter(fst, s); !aiter.Done(); aiter.Next()) {
const auto &arc = aiter.Value();
if (ilabels && ilabels->find(arc.ilabel) != ilabels->end()) {
comp_props |= kNonIDeterministic;
comp_props &= ~kIDeterministic;
}
if (olabels && olabels->find(arc.olabel) != olabels->end()) {
comp_props |= kNonODeterministic;
comp_props &= ~kODeterministic;
}
if (arc.ilabel != arc.olabel) {
comp_props |= kNotAcceptor;
comp_props &= ~kAcceptor;
}
if (arc.ilabel == 0 && arc.olabel == 0) {
comp_props |= kEpsilons;
comp_props &= ~kNoEpsilons;
}
if (arc.ilabel == 0) {
comp_props |= kIEpsilons;
comp_props &= ~kNoIEpsilons;
}
if (arc.olabel == 0) {
comp_props |= kOEpsilons;
comp_props &= ~kNoOEpsilons;
}
if (!first_arc) {
if (arc.ilabel < prev_arc.ilabel) {
comp_props |= kNotILabelSorted;
comp_props &= ~kILabelSorted;
}
if (arc.olabel < prev_arc.olabel) {
comp_props |= kNotOLabelSorted;
comp_props &= ~kOLabelSorted;
}
}
if (arc.weight != Weight::One() && arc.weight != Weight::Zero()) {
comp_props |= kWeighted;
comp_props &= ~kUnweighted;
if ((comp_props & kUnweightedCycles) &&
scc[s] == scc[arc.nextstate]) {
comp_props |= kWeightedCycles;
comp_props &= ~kUnweightedCycles;
}
}
if (arc.nextstate <= s) {
comp_props |= kNotTopSorted;
comp_props &= ~kTopSorted;
}
if (arc.nextstate != s + 1) {
comp_props |= kNotString;
comp_props &= ~kString;
}
prev_arc = arc;
first_arc = false;
if (ilabels) ilabels->insert(arc.ilabel);
if (olabels) olabels->insert(arc.olabel);
}
if (nfinal > 0) { // Final state not last.
comp_props |= kNotString;
comp_props &= ~kString;
}
const auto final_weight = fst.Final(s);
if (final_weight != Weight::Zero()) { // Final state.
if (final_weight != Weight::One()) {
comp_props |= kWeighted;
comp_props &= ~kUnweighted;
}
++nfinal;
} else { // Non-final state.
if (fst.NumArcs(s) != 1) {
comp_props |= kNotString;
comp_props &= ~kString;
}
}
}
if (fst.Start() != kNoStateId && fst.Start() != 0) {
comp_props |= kNotString;
comp_props &= ~kString;
}
}
if (known) *known = KnownProperties(comp_props);
return comp_props;
}
// This is a wrapper around ComputeProperties that will cause a fatal error if
// the stored properties and the computed properties are incompatible when
// FLAGS_fst_verify_properties is true. This routine is seldom called directly;
// instead it is used to implement fst.Properties(mask, true).
template <class Arc>
uint64 TestProperties(const Fst<Arc> &fst, uint64 mask, uint64 *known) {
if (FLAGS_fst_verify_properties) {
const auto stored_props = fst.Properties(kFstProperties, false);
const auto computed_props = ComputeProperties(fst, mask, known, false);
if (!CompatProperties(stored_props, computed_props)) {
FSTERROR() << "TestProperties: stored FST properties incorrect"
<< " (stored: props1, computed: props2)";
}
return computed_props;
} else {
return ComputeProperties(fst, mask, known, true);
}
}
// If all the properties of 'fst' corresponding to 'check_mask' are known,
// returns the stored properties. Otherwise, the properties corresponding to
// both 'check_mask' and 'test_mask' are computed. This is used to check for
// newly-added properties that might not be set in old binary files.
template <class Arc>
uint64 CheckProperties(const Fst<Arc> &fst, uint64 check_mask,
uint64 test_mask) {
auto props = fst.Properties(kFstProperties, false);
if (FLAGS_fst_verify_properties) {
props = TestProperties(fst, check_mask | test_mask, nullptr);
} else if ((KnownProperties(props) & check_mask) != check_mask) {
props = ComputeProperties(fst, check_mask | test_mask, nullptr, false);
}
return props & (check_mask | test_mask);
}
//} // namespace internal
} // namespace fst
#endif // FST_TEST_PROPERTIES_H_
| 0 |
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core | coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core/graph/function.h | // Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#pragma once
#include "core/common/common.h"
#include "core/graph/indexed_sub_graph.h"
namespace onnxruntime {
class Graph;
class Node;
} // namespace onnxruntime
namespace onnxruntime {
/**
@class Function
Class representing a Function.
*/
class Function {
public:
virtual ~Function() = default;
#if !defined(ORT_MINIMAL_BUILD)
/** Gets the OpSchema for the Function. */
virtual const ONNX_NAMESPACE::OpSchema& OpSchema() const = 0;
#endif
/** Gets the Graph instance for the Function body subgraph. */
virtual const onnxruntime::Graph& Body() const = 0;
};
/**
Create a new Function instance.
@param graph The graph containing the Function.
@param nodes_to_fuse the IndexedSubGraph to use for the Function.
*/
std::unique_ptr<Function> MakeFunction(const onnxruntime::Graph& graph,
const IndexedSubGraph& nodes_to_fuse,
const logging::Logger& logger);
} // namespace onnxruntime
| 0 |
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core | coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core/common/optional.h | // Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#pragma once
#include <nonstd/optional.hpp>
namespace onnxruntime {
using nonstd::optional;
#ifndef ORT_NO_EXCEPTIONS
using nonstd::bad_optional_access;
#endif
using nonstd::nullopt;
using nonstd::nullopt_t;
using nonstd::in_place;
using nonstd::in_place_t;
using nonstd::make_optional;
} // namespace onnxruntime
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/extensions | coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/extensions/mpdt/mpdtcompose.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Composes an MPDT and an FST.
#include <cstring>
#include <memory>
#include <string>
#include <vector>
#include <fst/flags.h>
#include <fst/log.h>
#include <fst/extensions/mpdt/mpdtscript.h>
#include <fst/extensions/mpdt/read_write_utils.h>
#include <fst/extensions/pdt/getters.h>
#include <fst/util.h>
DEFINE_string(mpdt_parentheses, "",
"MPDT parenthesis label pairs with assignments");
DEFINE_bool(left_mpdt, true, "Is the first argument the MPDT?");
DEFINE_bool(connect, true, "Trim output?");
DEFINE_string(compose_filter, "paren",
"Composition filter, one of: \"expand\", \"expand_paren\", "
"\"paren\"");
int main(int argc, char **argv) {
namespace s = fst::script;
using fst::MPdtComposeOptions;
using fst::PdtComposeFilter;
using fst::ReadLabelTriples;
using fst::script::FstClass;
using fst::script::VectorFstClass;
string usage = "Compose an MPDT and an FST.\n\n Usage: ";
usage += argv[0];
usage += " in.pdt in.fst [out.mpdt]\n";
usage += " in.fst in.pdt [out.mpdt]\n";
std::set_new_handler(FailedNewHandler);
SET_FLAGS(usage.c_str(), &argc, &argv, true);
if (argc < 3 || argc > 4) {
ShowUsage();
return 1;
}
const string in1_name = strcmp(argv[1], "-") == 0 ? "" : argv[1];
const string in2_name = strcmp(argv[2], "-") == 0 ? "" : argv[2];
const string out_name = argc > 3 ? argv[3] : "";
if (in1_name.empty() && in2_name.empty()) {
LOG(ERROR) << argv[0] << ": Can't take both inputs from standard input.";
return 1;
}
std::unique_ptr<FstClass> ifst1(FstClass::Read(in1_name));
if (!ifst1) return 1;
std::unique_ptr<FstClass> ifst2(FstClass::Read(in2_name));
if (!ifst2) return 1;
if (FLAGS_mpdt_parentheses.empty()) {
LOG(ERROR) << argv[0] << ": No MPDT parenthesis label pairs provided";
return 1;
}
std::vector<s::LabelPair> parens;
std::vector<int64> assignments;
if (!ReadLabelTriples(FLAGS_mpdt_parentheses, &parens, &assignments, false))
return 1;
VectorFstClass ofst(ifst1->ArcType());
PdtComposeFilter compose_filter;
if (!s::GetPdtComposeFilter(FLAGS_compose_filter, &compose_filter)) {
LOG(ERROR) << argv[0] << ": Unknown or unsupported compose filter type: "
<< FLAGS_compose_filter;
return 1;
}
const MPdtComposeOptions opts(FLAGS_connect, compose_filter);
s::MPdtCompose(*ifst1, *ifst2, parens, assignments, &ofst, opts,
FLAGS_left_mpdt);
ofst.Write(out_name);
return 0;
}
| 0 |
coqui_public_repos/snakepit/scripts | coqui_public_repos/snakepit/scripts/worker/run.sh | #!/usr/bin/env bash
if [ "$HOSTNAME" = snakepit-worker ]; then
exit 0
fi
while [[ ! -f "/env.sh" ]]; do
sleep 0.1
done
export DEBIAN_FRONTEND=noninteractive
source "/etc/profile"
source "/env.sh"
mkdir /data
worker_dir="/data/rw/pit/workers/${WORKER_INDEX}"
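# Mount the daemon's data directory over sshfs; reboot if mounting keeps failing.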
i=0
while ! sshfs worker@${DAEMON}: /data -o IdentityFile=/root/.ssh/id_rsa,cache=yes,kernel_cache,big_writes,sshfs_sync,Ciphers=aes128-ctr,reconnect,ServerAliveInterval=15,ServerAliveCountMax=100,StrictHostKeyChecking=no ; do
if [[ ${i} -gt 5 ]]; then
reboot
fi
let i=i+1
sleep 1
done
i=0
while [[ ! -d "${worker_dir}" ]]; do
if [[ ${i} -gt 5 ]]; then
reboot
fi
let i=i+1
sleep 1
done
cd "${WORK_DIR}"
export RESULT_FILE="${worker_dir}/result"
log_file="${worker_dir}/worker.log"
pipe_log () {
stdbuf -oL awk '{print "[worker '${WORKER_INDEX}'] " $0}' >>"${log_file}"
}
print_log () {
echo "$1" | pipe_log
}
print_log "Worker ${WORKER_INDEX} started"
print_log "Preparing script execution..."
apt-get update 2>&1 | pipe_log
systemctl stop apt-daily.service
systemctl kill --kill-who=all apt-daily.service
while ! (systemctl list-units --all apt-daily.service | grep -qE 'dead|failed') ; do sleep 1; done
sleep 10
print_log "Starting script..."
stdbuf -oL bash "/data/rw/pit/script.sh" 2>&1 | pipe_log
exit_code=${PIPESTATUS[0]}
echo "$exit_code" >"${worker_dir}/status"
print_log "Worker ${WORKER_INDEX} ended with exit code ${exit_code}"
touch "${worker_dir}/stop"
poweroff
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/bin/fstdifference-main.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Subtracts an unweighted DFA from an FSA.
#include <cstring>
#include <memory>
#include <string>
#include <fst/flags.h>
#include <fst/log.h>
#include <fst/script/getters.h>
#include <fst/script/difference.h>
DECLARE_string(compose_filter);
DECLARE_bool(connect);
int fstdifference_main(int argc, char **argv) {
namespace s = fst::script;
using fst::ComposeFilter;
using fst::DifferenceOptions;
using fst::script::FstClass;
using fst::script::VectorFstClass;
string usage = "Subtracts an unweighted DFA from an FSA.\n\n Usage: ";
usage += argv[0];
usage += " in1.fst in2.fst [out.fst]\n";
std::set_new_handler(FailedNewHandler);
SET_FLAGS(usage.c_str(), &argc, &argv, true);
if (argc < 3 || argc > 4) {
ShowUsage();
return 1;
}
const string in1_name = strcmp(argv[1], "-") == 0 ? "" : argv[1];
const string in2_name = strcmp(argv[2], "-") == 0 ? "" : argv[2];
const string out_name = argc > 3 ? argv[3] : "";
if (in1_name.empty() && in2_name.empty()) {
LOG(ERROR) << argv[0] << ": Can't take both inputs from standard input";
return 1;
}
std::unique_ptr<FstClass> ifst1(FstClass::Read(in1_name));
if (!ifst1) return 1;
std::unique_ptr<FstClass> ifst2(FstClass::Read(in2_name));
if (!ifst2) return 1;
VectorFstClass ofst(ifst1->ArcType());
ComposeFilter compose_filter;
if (!s::GetComposeFilter(FLAGS_compose_filter, &compose_filter)) {
LOG(ERROR) << argv[0] << ": Unknown or unsupported compose filter type: "
<< FLAGS_compose_filter;
return 1;
}
const DifferenceOptions opts(FLAGS_connect, compose_filter);
s::Difference(*ifst1, *ifst2, &ofst, opts);
return !ofst.Write(out_name);
}
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/script/push.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/script/fst-class.h>
#include <fst/script/push.h>
#include <fst/script/script-impl.h>
namespace fst {
namespace script {
void Push(MutableFstClass *fst, ReweightType rew_type, float delta,
bool remove_total_weight) {
PushArgs1 args(fst, rew_type, delta, remove_total_weight);
Apply<Operation<PushArgs1>>("Push", fst->ArcType(), &args);
}
void Push(const FstClass &ifst, MutableFstClass *ofst, uint32 flags,
ReweightType rew_type, float delta) {
if (!internal::ArcTypesMatch(ifst, *ofst, "Push")) {
ofst->SetProperties(kError, kError);
return;
}
PushArgs2 args(ifst, ofst, flags, rew_type, delta);
Apply<Operation<PushArgs2>>("Push", ifst.ArcType(), &args);
}
REGISTER_FST_OPERATION(Push, StdArc, PushArgs1);
REGISTER_FST_OPERATION(Push, LogArc, PushArgs1);
REGISTER_FST_OPERATION(Push, Log64Arc, PushArgs1);
REGISTER_FST_OPERATION(Push, StdArc, PushArgs2);
REGISTER_FST_OPERATION(Push, LogArc, PushArgs2);
REGISTER_FST_OPERATION(Push, Log64Arc, PushArgs2);
} // namespace script
} // namespace fst
| 0 |
coqui_public_repos/STT-models/kinyarwanda/digital-umuganda | coqui_public_repos/STT-models/kinyarwanda/digital-umuganda/v0.0.1/alphabet.txt | # Each line in this file represents the Unicode codepoint (UTF-8 encoded)
# associated with a numeric label.
# A line that starts with # is a comment. You can escape it with \# if you wish
# to use '#' as a label.
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
'
# The last (non-comment) line needs to end with a newline.
| 0 |
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core | coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core/graph/indexed_sub_graph.h | // Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#pragma once
#include <memory>
#include <string>
#include <vector>
#include "core/graph/basic_types.h"
#if !defined(ORT_MINIMAL_BUILD)
#include "onnx/defs/schema.h"
#else
#include "onnx/defs/data_type_utils.h"
#endif
#include "onnx/onnx_pb.h"
#include "onnx/onnx-operators_pb.h"
namespace onnxruntime {
class OpKernel;
class OpKernelInfo;
/**
@class IndexedSubGraph
Class containing information about a subgraph of Nodes from a Graph.
It contains a NodeIndex array of the Nodes covered by the subgraph,
and the meta definition needed for representing this subgraph as a FunctionProto,
which could be serialized/saved to a model file.
*/
struct IndexedSubGraph {
struct MetaDef {
std::string name; ///< Name of customized SubGraph/FunctionProto
std::string domain; ///< Domain of customized SubGraph/FunctionProto
int since_version; ///< Since version of customized SubGraph/FunctionProto.
ONNX_NAMESPACE::OperatorStatus status; ///< Status of customized SubGraph/FunctionProto.
std::vector<std::string> inputs; ///< Inputs of customized SubGraph/FunctionProto.
std::vector<std::string> outputs; ///< Outputs of customized SubGraph/FunctionProto.
NodeAttributes attributes; ///< Attributes of customized SubGraph/FunctionProto.
std::string doc_string; ///< Doc string of customized SubGraph/FunctionProto.
#if !defined(ORT_MINIMAL_BUILD)
/** Type and shape inference function that can optionally be defined for the fused node */
std::function<void (ONNX_NAMESPACE::InferenceContext&)> type_and_shape_inference_function;
#endif
};
/** Nodes covered by this subgraph. The NodeIndex values are from the parent Graph.*/
std::vector<onnxruntime::NodeIndex> nodes;
/** Set the meta definition needed to represent this subgraph as a FunctionProto
It's needed IF AND ONLY IF there are multiple indexes contained in #nodes. */
void SetMetaDef(std::unique_ptr<MetaDef>&& meta_def_) {
meta_def = std::move(meta_def_);
}
/** Gets the meta definition needed to represent this subgraph as a FunctionProto.
@returns MetaDef instance if it has been set. nullptr if not. */
const MetaDef* GetMetaDef() const {
return meta_def.get();
}
private:
// subgraph meta definition.
std::unique_ptr<MetaDef> meta_def;
};
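// Illustrative use (a sketch; the node indexes and name are hypothetical):
//
//   IndexedSubGraph sub;
//   sub.nodes = {3, 4, 7};  // NodeIndex values from the parent Graph.
//   auto meta = std::make_unique<IndexedSubGraph::MetaDef>();
//   meta->name = "FusedSubgraph";
//   sub.SetMetaDef(std::move(meta));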
} // namespace onnxruntime
| 0 |
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core | coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core/graph/node_arg.h | // Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#pragma once
#include "onnx/onnx_pb.h"
#include "core/graph/basic_types.h"
#include "core/common/status.h"
#include "core/common/logging/logging.h"
namespace onnxruntime {
// Node argument definition, for both input and output,
// including arg name, arg type (contains both type and shape).
//
// Design Question: in my opinion, shape should not be part of type.
// We may align the protobuf design with our operator registry interface,
// which has type specified for each operator, but no shape. Well, shape
// should be inferred with a separate shape inference function given
// input shapes, or input tensor data sometimes.
// With shape as part of type (current protobuf design),
// 1) we'll have to split the "TypeProto" into type and shape in this internal
// representation interface so that it could be easily used when doing type
// inference and matching with operator registry.
// 2) SetType should be always called before SetShape, otherwise, SetShape()
// will fail. Because shape is located in a TypeProto.
// Thoughts?
//
/**
@class NodeArg
Class representing a data type that is input or output for a Node, including the shape if it is a Tensor.
*/
class NodeArg {
public:
/**
Construct a new NodeArg.
@param name The name to use.
@param p_arg_type Optional TypeProto specifying type and shape information.
*/
NodeArg(const std::string& name,
const ONNX_NAMESPACE::TypeProto* p_arg_type);
NodeArg(NodeArg&&) = default;
NodeArg& operator=(NodeArg&& other) = default;
/** Gets the name. */
const std::string& Name() const noexcept;
/** Gets the data type. */
const std::string* Type() const noexcept;
/** Gets the TypeProto
@returns TypeProto if type is set. nullptr otherwise. */
const ONNX_NAMESPACE::TypeProto* TypeAsProto() const noexcept;
/** Gets the shape if NodeArg is for a Tensor.
@returns TensorShapeProto if shape is set. nullptr if there's no shape specified. */
const ONNX_NAMESPACE::TensorShapeProto* Shape() const;
/** Return an indicator.
@returns true if NodeArg is a normal tensor with a non-empty shape or a scalar with an empty shape. Otherwise, returns false. */
bool HasTensorOrScalarShape() const;
#if !defined(ORT_MINIMAL_BUILD)
/** Sets the shape.
@remarks Shape can only be set if the TypeProto was provided to the ctor, or #SetType has been called,
as the shape information is stored as part of TypeProto. */
void SetShape(const ONNX_NAMESPACE::TensorShapeProto& shape);
/** Clears shape info.
@remarks If there is a mismatch during shape inferencing that can't be resolved the shape info may be removed. */
void ClearShape();
/** Validate and merge type [and shape] info from input_type.
@param strict If true, the shape update will fail if there are incompatible values.
If false, will be lenient and merge only shape info that can be validly processed.
@param override_types If true, resolve the two inputs or two outputs type when different
@returns Success unless there is existing type or shape info that can't be successfully updated. */
common::Status UpdateTypeAndShape(const ONNX_NAMESPACE::TypeProto& input_type, bool strict, bool override_types, const logging::Logger& logger);
/** Validate and merge type [and shape] info from node_arg.
@param strict If true, the shape update will fail if there are incompatible values.
If false, will be lenient and merge only shape info that can be validly processed.
@param override_types If true, resolve the two inputs or two outputs type when different
@returns Success unless there is existing type or shape info that can't be successfully updated. */
common::Status UpdateTypeAndShape(const NodeArg& node_arg, bool strict, bool override_types, const logging::Logger& logger);
#endif // !defined(ORT_MINIMAL_BUILD)
/** Gets this NodeArg as a ValueInfoProto. */
const NodeArgInfo& ToProto() const noexcept { return node_arg_info_; }
/** Gets a flag indicating whether this NodeArg exists or not.
Optional inputs are allowed in ONNX and an empty #Name represents a non-existent input argument. */
bool Exists() const noexcept;
private:
ORT_DISALLOW_COPY_AND_ASSIGNMENT(NodeArg);
friend class Graph;
NodeArg(NodeArgInfo&& node_arg_info);
#if !defined(ORT_MINIMAL_BUILD)
void SetType(const std::string* p_type);
void SetType(const ONNX_NAMESPACE::TypeProto& type_proto);
#endif
// Node arg PType.
const std::string* type_;
// Node arg name, type and shape.
NodeArgInfo node_arg_info_;
// Flag indicates whether <*this> node arg exists or not.
bool exists_;
};
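// Illustrative construction (a sketch; "X" is a hypothetical argument name):
//
//   ONNX_NAMESPACE::TypeProto type;
//   type.mutable_tensor_type()->set_elem_type(
//       ONNX_NAMESPACE::TensorProto_DataType_FLOAT);
//   NodeArg arg("X", &type);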
} // namespace onnxruntime
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/bin/import_fisher.py | #!/usr/bin/env python
import codecs
import fnmatch
import os
import random
import subprocess
import sys
import unicodedata
import librosa
import pandas
import soundfile # <= Has an external dependency on libsndfile
from coqui_stt_training.util.importers import validate_label_eng as validate_label
# Prerequisite: Having the sph2pipe tool in your PATH:
# https://www.ldc.upenn.edu/language-resources/tools/sphere-conversion-tools
def _download_and_preprocess_data(data_dir):
# Assume data_dir contains extracted LDC2004S13, LDC2004T19, LDC2005S13, LDC2005T19
# Conditionally convert Fisher sph data to wav
_maybe_convert_wav(data_dir, "LDC2004S13", "fisher-2004-wav")
_maybe_convert_wav(data_dir, "LDC2005S13", "fisher-2005-wav")
# Conditionally split Fisher wav data
all_2004 = _split_wav_and_sentences(
data_dir,
original_data="fisher-2004-wav",
converted_data="fisher-2004-split-wav",
trans_data=os.path.join("LDC2004T19", "fe_03_p1_tran", "data", "trans"),
)
all_2005 = _split_wav_and_sentences(
data_dir,
original_data="fisher-2005-wav",
converted_data="fisher-2005-split-wav",
trans_data=os.path.join("LDC2005T19", "fe_03_p2_tran", "data", "trans"),
)
# The following files have incorrect transcripts that are much longer than
# their audio source. The result is that we end up with more labels than time
# slices, which breaks CTC.
all_2004.loc[
all_2004["wav_filename"].str.endswith("fe_03_00265-33.53-33.81.wav"),
"transcript",
] = "correct"
all_2004.loc[
all_2004["wav_filename"].str.endswith("fe_03_00991-527.39-528.3.wav"),
"transcript",
] = "that's one of those"
all_2005.loc[
all_2005["wav_filename"].str.endswith("fe_03_10282-344.42-344.84.wav"),
"transcript",
] = "they don't want"
all_2005.loc[
all_2005["wav_filename"].str.endswith("fe_03_10677-101.04-106.41.wav"),
"transcript",
] = "uh my mine yeah the german shepherd pitbull mix he snores almost as loud as i do"
# The following file is just a short sound and not at all transcribed like provided.
# So we just exclude it.
all_2004 = all_2004[
~all_2004["wav_filename"].str.endswith("fe_03_00027-393.8-394.05.wav")
]
# The following file is far too long and would ruin our training batch size.
# So we just exclude it.
all_2005 = all_2005[
~all_2005["wav_filename"].str.endswith("fe_03_11487-31.09-234.06.wav")
]
# The following file is too large for its transcript, so we just exclude it.
all_2004 = all_2004[
~all_2004["wav_filename"].str.endswith("fe_03_01326-307.42-307.93.wav")
]
# Conditionally split Fisher data into train/validation/test sets
train_2004, dev_2004, test_2004 = _split_sets(all_2004)
train_2005, dev_2005, test_2005 = _split_sets(all_2005)
# Join 2004 and 2005 data
train_files = train_2004.append(train_2005)
dev_files = dev_2004.append(dev_2005)
test_files = test_2004.append(test_2005)
# Write sets to disk as CSV files
train_files.to_csv(os.path.join(data_dir, "fisher-train.csv"), index=False)
dev_files.to_csv(os.path.join(data_dir, "fisher-dev.csv"), index=False)
test_files.to_csv(os.path.join(data_dir, "fisher-test.csv"), index=False)
def _maybe_convert_wav(data_dir, original_data, converted_data):
source_dir = os.path.join(data_dir, original_data)
target_dir = os.path.join(data_dir, converted_data)
# Conditionally convert sph files to wav files
if os.path.exists(target_dir):
print("skipping maybe_convert_wav")
return
# Create target_dir
os.makedirs(target_dir)
# Loop over sph files in source_dir and convert each to 16-bit PCM wav
for root, dirnames, filenames in os.walk(source_dir):
for filename in fnmatch.filter(filenames, "*.sph"):
sph_file = os.path.join(root, filename)
for channel in ["1", "2"]:
wav_filename = (
os.path.splitext(os.path.basename(sph_file))[0]
+ "_c"
+ channel
+ ".wav"
)
wav_file = os.path.join(target_dir, wav_filename)
print("converting {} to {}".format(sph_file, wav_file))
subprocess.check_call(
["sph2pipe", "-c", channel, "-p", "-f", "rif", sph_file, wav_file]
)
def _parse_transcriptions(trans_file):
segments = []
with codecs.open(trans_file, "r", "utf-8") as fin:
for line in fin:
if line.startswith("#") or len(line) <= 1:
continue
tokens = line.split()
start_time = float(tokens[0])
stop_time = float(tokens[1])
speaker = tokens[2]
transcript = " ".join(tokens[3:])
# We need to do the encode-decode dance here because encode
# returns a bytes() object on Python 3, and text_to_char_array
# expects a string.
transcript = (
unicodedata.normalize("NFKD", transcript)
.encode("ascii", "ignore")
.decode("ascii", "ignore")
)
segments.append(
{
"start_time": start_time,
"stop_time": stop_time,
"speaker": speaker,
"transcript": transcript,
}
)
return segments
def _split_wav_and_sentences(data_dir, trans_data, original_data, converted_data):
trans_dir = os.path.join(data_dir, trans_data)
source_dir = os.path.join(data_dir, original_data)
target_dir = os.path.join(data_dir, converted_data)
if not os.path.exists(target_dir):
os.makedirs(target_dir)
files = []
# Loop over transcription files and split corresponding wav
for root, dirnames, filenames in os.walk(trans_dir):
for filename in fnmatch.filter(filenames, "*.txt"):
trans_file = os.path.join(root, filename)
segments = _parse_transcriptions(trans_file)
# Open wav corresponding to transcription file
wav_filenames = [
os.path.splitext(os.path.basename(trans_file))[0]
+ "_c"
+ channel
+ ".wav"
for channel in ["1", "2"]
]
wav_files = [
os.path.join(source_dir, wav_filename) for wav_filename in wav_filenames
]
print("splitting {} according to {}".format(wav_files, trans_file))
origAudios = [
librosa.load(wav_file, sr=16000, mono=False) for wav_file in wav_files
]
# Loop over segments and split wav_file for each segment
for segment in segments:
# Create wav segment filename
start_time = segment["start_time"]
stop_time = segment["stop_time"]
new_wav_filename = (
os.path.splitext(os.path.basename(trans_file))[0]
+ "-"
+ str(start_time)
+ "-"
+ str(stop_time)
+ ".wav"
)
new_wav_file = os.path.join(target_dir, new_wav_filename)
channel = 0 if segment["speaker"] == "A:" else 1
_split_and_resample_wav(
origAudios[channel], start_time, stop_time, new_wav_file
)
new_wav_filesize = os.path.getsize(new_wav_file)
transcript = validate_label(segment["transcript"])
                if transcript is not None:
files.append(
(os.path.abspath(new_wav_file), new_wav_filesize, transcript)
)
return pandas.DataFrame(
data=files, columns=["wav_filename", "wav_filesize", "transcript"]
)
def _split_audio(origAudio, start_time, stop_time):
audioData, frameRate = origAudio
nChannels = len(audioData.shape)
startIndex = int(start_time * frameRate)
stopIndex = int(stop_time * frameRate)
return (
audioData[startIndex:stopIndex]
if 1 == nChannels
else audioData[:, startIndex:stopIndex]
)
def _split_and_resample_wav(origAudio, start_time, stop_time, new_wav_file):
frameRate = origAudio[1]
chunkData = _split_audio(origAudio, start_time, stop_time)
soundfile.write(new_wav_file, chunkData, frameRate, "PCM_16")
def _split_sets(filelist):
"""
    randomly split the dataset into train, validation, and test sets, where the sizes
    of the validation and test sets are determined by the `get_sample_size` function.
    """
    # random.shuffle() does not work on a pandas DataFrame (integer item
    # assignment selects columns), so shuffle the rows with pandas instead.
    filelist = filelist.sample(frac=1).reset_index(drop=True)
sample_size = get_sample_size(len(filelist))
train_beg = 0
train_end = len(filelist) - 2 * sample_size
dev_beg = train_end
dev_end = train_end + sample_size
test_beg = dev_end
test_end = len(filelist)
return (
filelist[train_beg:train_end],
filelist[dev_beg:dev_end],
filelist[test_beg:test_end],
)
def get_sample_size(population_size):
"""calculates the sample size for a 99% confidence and 1% margin of error"""
margin_of_error = 0.01
fraction_picking = 0.50
z_score = 2.58 # Corresponds to confidence level 99%
numerator = (z_score**2 * fraction_picking * (1 - fraction_picking)) / (
margin_of_error**2
)
sample_size = 0
for train_size in range(population_size, 0, -1):
denominator = 1 + (z_score**2 * fraction_picking * (1 - fraction_picking)) / (
margin_of_error**2 * train_size
)
sample_size = int(numerator / denominator)
if 2 * sample_size + train_size <= population_size:
break
return sample_size
if __name__ == "__main__":
_download_and_preprocess_data(sys.argv[1])
| 0 |
coqui_public_repos/STT/data/smoke_test | coqui_public_repos/STT/data/smoke_test/webdataset/LDC93S1_wav.txt | she had your dark suit in greasy wash water all year | 0 |
coqui_public_repos/STT-examples/python_websocket_server/helm/stt_server | coqui_public_repos/STT-examples/python_websocket_server/helm/stt_server/templates/deployment.yaml | apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "stt-server.fullname" . }}
labels:
{{- include "stt-server.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "stt-server.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "stt-server.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src | coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/bin/fstdisambiguate-main.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Disambiguates an FST.
#include <cstring>
#include <memory>
#include <string>
#include <fst/flags.h>
#include <fst/script/disambiguate.h>
DECLARE_double(delta);
DECLARE_int64(nstate);
DECLARE_string(weight);
DECLARE_int64(subsequential_label);
int fstdisambiguate_main(int argc, char **argv) {
namespace s = fst::script;
using fst::script::FstClass;
using fst::script::VectorFstClass;
using fst::script::WeightClass;
string usage = "Disambiguates an FST.\n\n Usage: ";
usage += argv[0];
usage += " [in.fst [out.fst]]\n";
std::set_new_handler(FailedNewHandler);
SET_FLAGS(usage.c_str(), &argc, &argv, true);
if (argc > 3) {
ShowUsage();
return 1;
}
const string in_name = (argc > 1 && strcmp(argv[1], "-") != 0) ? argv[1] : "";
const string out_name = argc > 2 ? argv[2] : "";
std::unique_ptr<FstClass> ifst(FstClass::Read(in_name));
if (!ifst) return 1;
VectorFstClass ofst(ifst->ArcType());
const auto weight_threshold =
FLAGS_weight.empty() ? WeightClass::Zero(ifst->WeightType())
: WeightClass(ifst->WeightType(), FLAGS_weight);
const s::DisambiguateOptions opts(FLAGS_delta, weight_threshold, FLAGS_nstate,
FLAGS_subsequential_label);
s::Disambiguate(*ifst, &ofst, opts);
return !ofst.Write(out_name);
}
| 0 |
coqui_public_repos | coqui_public_repos/TTS/requirements.dev.txt | black
coverage
isort
nose2
pylint==2.10.2
| 0 |
coqui_public_repos | coqui_public_repos/TTS-papers/README.md | (Feel free to suggest changes)
# Papers
- Merging Phoneme and Char representations: https://arxiv.org/pdf/1811.07240.pdf
- Tacotron Transfer Learning : https://arxiv.org/pdf/1904.06508.pdf
- Phoneme Timing From Attention: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8683827
- Semi-Supervised Training to Improve Data Efficiency in End-to-End Speech Synthesis - https://arxiv.org/pdf/1808.10128.pdf
- Listening while Speaking: Speech Chain by Deep Learning - https://arxiv.org/pdf/1707.04879.pdf
- Generalized End-to-End Loss for Speaker Verification: https://arxiv.org/pdf/1710.10467.pdf
- Es-Tacotron2: Multi-Task Tacotron 2 with Pre-Trained Estimated Network for Reducing the Over-Smoothness Problem: https://www.mdpi.com/2078-2489/10/4/131/pdf
- Against Over-Smoothness
- FastSpeech: https://arxiv.org/pdf/1905.09263.pdf
- Learning Singing From Speech: https://arxiv.org/pdf/1912.10128.pdf
- TTS-GAN: https://arxiv.org/pdf/1909.11646.pdf
- they use duration and linguistic features for end-to-end TTS.
- Close to WaveNet performance.
- DurIAN: https://arxiv.org/pdf/1909.01700.pdf
- Duration aware Tacotron
- MelNet: https://arxiv.org/abs/1906.01083
- AlignTTS: https://arxiv.org/pdf/2003.01950.pdf
- Unsupervised Speech Decomposition via Triple Information Bottleneck
- https://arxiv.org/pdf/2004.11284.pdf
- https://anonymous0818.github.io/
- FlowTron: https://arxiv.org/pdf/2005.05957.pdf
- Inverse Autoregressive Flow on a Tacotron-like architecture
- WaveGlow as vocoder.
- Speech style embedding with Mixture of Gaussian model.
- The model is large and heavier than vanilla Tacotron.
- MOS values are slightly better than the public Tacotron implementation.
- Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention : https://arxiv.org/pdf/1710.08969.pdf
### Expansive Summaries
<details>
<summary> End-to-End Adversarial Text-to-Speech: http://arxiv.org/abs/2006.03575 (Click to Expand)</summary>
- end2end feed-forward TTS learning.
- Character alignment has been done with a separate aligner module.
- The aligner predicts length of each character.
- The center location of a char is found wrt the total length of the previous characters.
- Char positions are interpolated with a Gaussian window wrt the real audio length.
- audio output is computed in mu-law domain. (I don't have a reasoning for this)
- use only 2-sec audio windows for training.
- GAN-TTS generator is used to produce audio signal.
- RWD is used as an audio-level discriminator.
- MelD: They use the BigGAN-deep architecture as a spectrogram-level discriminator, regarding the problem as image reconstruction.
- Spectrogram loss
- Using only adversarial feedback is not enough to learn the char alignments. They use a spectrogram loss b/w predicted spectrograms and ground-truth specs.
- Note that model predicts audio signals. Spectrograms above are computed from the generated audio.
- Dynamic Time Warping is used to compute a minimal-cost alignment b/w generated spectrograms and ground-truth.
- It involves a dynamic programming approach to find a minimal-cost alignment.
- An aligner length loss is used to penalize the aligner for predicting a length different from the real audio length.
- They train the model with multi speaker dataset but report results on the best performing speaker.
- Ablation Study importance of each component: (LengthLoss and SpectrogramLoss) > RWD > MelD > Phonemes > MultiSpeakerDataset.
- My 2 cents: It is a feed-forward model which provides end-2-end speech synthesis with no need to train a separate vocoder model. However, it is a very complicated model with a lot of hyperparameters and implementation details. Also the final result is not close to the state of the art. I think we need to find specific algorithms for learning character alignments which would reduce the need for tuning a combination of different algorithms.
<img src="https://user-images.githubusercontent.com/1402048/84696449-d25e6b80-af4c-11ea-8b3a-66ede19124b0.png" width="50%">
</details>
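
A minimal sketch of the Gaussian-window interpolation of character positions described above (function and parameter names, and the fixed `sigma`, are assumptions, not the paper's code):

```python
import torch

def gaussian_upsample(h, durations, sigma=10.0):
    """h: [T_text, D] aligner outputs; durations: [T_text] predicted lengths in frames."""
    centers = torch.cumsum(durations, dim=0) - 0.5 * durations   # character centers
    t = torch.arange(int(durations.sum().item())).unsqueeze(1)   # [T_out, 1]
    # Gaussian weight of every output frame for every character position.
    logits = -((t - centers.unsqueeze(0)) ** 2) / (2.0 * sigma**2)
    return torch.softmax(logits, dim=1) @ h                      # [T_out, D]
```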
<details>
<summary> Fast Speech2: http://arxiv.org/abs/2006.04558 (Click to Expand)</summary>
- Use phoneme durations generated by [MFA](https://montreal-forced-aligner.readthedocs.io/en/latest/introduction.html) as labels to train a length regulator.
- They use frame-level F0 and L2 spectrogram norms (Variance Information) as additional features.
- Variance predictor module predicts the variance information at inference time.
- Ablation study result improvements: model < model + L2_norm < model + L2_norm + F0
![image](https://user-images.githubusercontent.com/1402048/84696094-3c2a4580-af4c-11ea-8de3-4e918d651cd4.png)
</details>
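
For reference, the duration handling above boils down to a length regulator; a minimal sketch (an illustration, not the paper's code):

```python
import torch

def length_regulator(h, durations):
    """h: [T_text, D] encoder outputs; durations: [T_text] integer frame counts."""
    # Repeat each phoneme representation for the number of frames it spans.
    return torch.repeat_interleave(h, durations, dim=0)  # -> [sum(durations), D]
```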
<details>
<summary> Glow-TTS: https://arxiv.org/pdf/2005.11129.pdf (Click to Expand)</summary>
- Use Monotonic Alignment Search to learn the alignment b/w text and spectrogram
- This alignment is used to train a Duration Predictor to be used at inference.
- Encoder maps each character to a Gaussian Distribution.
- Decoder maps each spectrogram frame to a latent vector using Normalizing Flow (Glow Layers)
- Encoder and Decoder outputs are aligned with MAS.
- At each iteration, first the most probable alignment is found by MAS, and this alignment is used to update the model parameters.
- A duration predictor is trained to predict the number of spectrogram frames for each character.
- At inference only the duration predictor is used instead of MAS
- Encoder has the architecture of the TTS transformer with 2 updates
- Instead of absolute positional encoding, they use relative positional encoding.
- They also use a residual connection for the Encoder Prenet.
- Decoder has the same architecture as the Glow model.
- They train both single and multi-speaker model.
- It is shown experimentally that Glow-TTS is more robust against long sentences compared to the original Tacotron2.
- 15x faster than Tacotron2 at inference
- My 2 cents: Their samples do not sound as natural as Tacotron's. I believe normal attention models still generate more natural speech since the attention learns to map characters to model outputs directly. However, using Glow-TTS might be a good alternative for hard datasets.
- Samples: https://github.com/jaywalnut310/glow-tts
- Repository: https://github.com/jaywalnut310/glow-tts
![image](https://user-images.githubusercontent.com/1402048/85527284-06035a80-b60b-11ea-8165-b2f3e841f77f.png)
</details>
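
The MAS step above is a Viterbi-style dynamic program; a compact NumPy sketch of the idea (illustrative, not the authors' optimized implementation):

```python
import numpy as np

def monotonic_alignment_search(log_p):
    """log_p: [T_text, T_mel] log-likelihood of each (token, frame) pair.
    Returns a hard 0/1 alignment of the same shape."""
    T_text, T_mel = log_p.shape
    Q = np.full((T_text, T_mel), -np.inf)
    Q[0, 0] = log_p[0, 0]
    for j in range(1, T_mel):
        for i in range(min(j + 1, T_text)):
            stay = Q[i, j - 1]
            move = Q[i - 1, j - 1] if i > 0 else -np.inf
            Q[i, j] = log_p[i, j] + max(stay, move)
    # Backtrack the most likely monotonic path from the last token/frame.
    A = np.zeros_like(log_p)
    i = T_text - 1
    for j in range(T_mel - 1, -1, -1):
        A[i, j] = 1.0
        if j > 0 and i > 0 and Q[i - 1, j - 1] >= Q[i, j - 1]:
            i -= 1
    return A
```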
<details>
<summary> Non-Autoregressive Neural Text-to-Speech: http://arxiv.org/abs/1905.08459 (Click to Expand)</summary>
- A derivation of Deep Voice 3 model using non-causal convolutional layers.
- Teacher-Student paradigm to train a non-autoregressive student with multiple attention blocks from an autoregressive teacher model.
- The teacher is used to generate text-to-spectrogram alignments to be used by the student model.
- The model is trained with two loss functions for attention alignment and spectrogram generation.
- Multi attention blocks refine the attention alignment layer by layer.
- The student uses dot-product attention with query, key and value vectors. The query consists only of positional encoding vectors. The key and the value are the encoder outputs.
- The proposed model is heavily tied to the positional encoding, which also relies on different constant values.
![image](https://user-images.githubusercontent.com/1402048/87929772-3e218000-ca87-11ea-9f13-9869bee96b57.png)
</details>
<details>
<summary> Double Decoder Consistency: https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency (Click to Expand)</summary>
- The model uses a Tacotron like architecture but with 2 decoders and a postnet.
- DDC uses two synchronous decoders using different reduction rates.
- The decoders use different reduction rates, thus they compute outputs at different granularities and learn different aspects of the input data.
- The model uses the consistency between these two decoders to increase robustness of learned text-to-spectrogram alignment.
- The model also refines the final decoder output by applying the postnet iteratively multiple times.
- DDC uses Batch Normalization in the prenet module and drops Dropout layers.
- DDC uses gradual training to reduce the total training time.
- We use a Multi-Band Melgan Generator as a vocoder trained with Multiple Random Window Discriminators differently than the original work.
- We are able to train a DDC model in only 2 days with a single GPU, and the final model is able to generate speech faster than real time on a CPU.
Demo page: https://erogol.github.io/ddc-samples/
Code: https://github.com/mozilla/TTS
![image](https://erogol.com/wp-content/uploads/2020/06/DDC_overview-1536x1220.png)
</details>
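
A hedged sketch of the consistency term between the two decoders (the exact loss used in the post may differ; this only shows the idea):

```python
import torch.nn.functional as F

def ddc_consistency_loss(decoder_out, coarse_decoder_out):
    """Both tensors: [B, T_mel, n_mels] spectrogram frames predicted by the
    fine (low reduction rate) and coarse (high reduction rate) decoders."""
    # Penalize disagreement between the two synchronous decoders.
    return F.l1_loss(decoder_out, coarse_decoder_out)
```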
<details>
<summary> Parallel Tacotron2: http://arxiv.org/abs/2103.14574 (Click to Expand)</summary>
- Does not require external duration information.
- Solves alignment issues between the predicted and ground-truth spectrograms with a Soft-DTW loss.
- Predicted durations are converted to alignment by a learned conversion function, rather than a Length Regulator, to solve rounding issues.
- Learns an attention map over "Token Boundary Grids" which are computed from predicted durations.
- Decoder is built on 6 "light-weight Convolutions" blocks.
- A VAE is used to project input spectrograms to latent features, which are merged with the character embeddings as an input to the network.
- Soft-DTW is computationally intensive since it computes pairwise differences for all the spectrogram frames. They constrain it with a certain diagonal window to reduce the overhead.
- The final objective is the sum of the Duration Loss, the VAE loss and the Spectrogram Loss.
- They only use proprietary datasets for the experiments 😦.
- Achieves the same MOS as the Tacotron2 model and outperforms Parallel Tacotron.
- **Demo page**: https://google.github.io/tacotron/publications/parallel_tacotron_2/index.html
- **Code**: No code so far
<img src="https://user-images.githubusercontent.com/1402048/113508025-017eb180-954e-11eb-8cc5-c7dc87945bac.png" data-canonical-src="https://gyazo.com/eb5c5741b6a9a16c692170a41a49c858.png" height="800"/>
</details>
<details>
<summary> WaveGrad2: https://arxiv.org/pdf/2106.09660.pdf (Click to Expand)</summary>
- It computes the raw waveform directly from a phoneme sequence.
- A Tacotron2 like encoder model is used to compute a hidden representation from phonemes.
- A Non-Attentive-Tacotron-like soft duration predictor aligns the hidden representation with the output.
- They expand the hidden representation with the predicted durations and sample a certain window to convert to a waveform.
- They explored different window sizes between 64 and 256 frames, corresponding to 0.8 and 3.2 secs of speech. They found that the larger, the better.
- **Demo page**: Nothing so far
- **Code**: No code so far
<img src="https://user-images.githubusercontent.com/1402048/122778044-ea2da580-d2ac-11eb-8446-e8903fc75291.png" height="450"/>
<img src="https://user-images.githubusercontent.com/1402048/122779556-447b3600-d2ae-11eb-8544-187ea5668966.png" height="450"/>
</details>
______________________________________________________________________
## Multi-Speaker Papers
- Training Multi-Speaker Neural Text-to-Speech Systems using Speaker-Imbalanced Speech Corpora - https://arxiv.org/abs/1904.00771
- Deep Voice 2 - https://papers.nips.cc/paper/6889-deep-voice-2-multi-speaker-neural-text-to-speech.pdf
- Sample Efficient Adaptive TTS - https://openreview.net/pdf?id=rkzjUoAcFX
- WaveNet + Speaker Embedding approach
- Voice Loop - https://arxiv.org/abs/1707.06588
- MODELING MULTI-SPEAKER LATENT SPACE TO IMPROVE NEURAL TTS QUICK ENROLLING NEW SPEAKER AND ENHANCING PREMIUM VOICE - https://arxiv.org/pdf/1812.05253.pdf
- Transfer learning from speaker verification to multispeaker text-to-speech synthesis - https://arxiv.org/pdf/1806.04558.pdf
- Fitting new speakers based on a short untranscribed sample - https://arxiv.org/pdf/1802.06984.pdf
- Generalized end-to-end loss for speaker verification - https://arxiv.org/abs/1710.10467
### Expansive Summaries
<details>
<summary> Semi-supervised Learning for Multi-speaker Text-to-speech Synthesis Using Discrete Speech Representation: http://arxiv.org/abs/2005.08024 </summary>
- Train a multi-speaker TTS model with only an hour of paired data (text-to-voice alignment) and more unpaired (voice only) data.
- It learns a code book in which each code word corresponds to a single phoneme.
- The code-book is aligned to phonemes using the paired data and CTC algorithm.
- This code book functions like a proxy to implicitly estimate the phoneme sequence of the unpaired data.
- They stack a Tacotron2 model on top to perform TTS using the code word embeddings generated by the initial part of the model.
- They beat the benchmark methods in the 1-hour paired-data setting.
- They don't report full paired data results.
- They don't have a good ablation study which could be interesting to see how different parts of the model contribute to the performance.
- They use Griffin-Lim as a vocoder thus there is space for improvement.
Demo page: https://ttaoretw.github.io/multispkr-semi-tts/demo.html <br>
Code: https://github.com/ttaoREtw/semi-tts
![image](https://user-images.githubusercontent.com/1402048/93603135-de325180-f9c3-11ea-8081-7c1c3390b9f0.png)
</details>
<details>
<summary> Attentron: Few-shot Text-to-Speech Exploiting Attention-based Variable Length Embedding: https://arxiv.org/abs/2005.08484 </summary>
- Use two encoders to learn speaker depended features.
- Coarse encoder learns a global speaker embedding vector based on provided reference spectrograms.
- Fine encoder learns a variable-length embedding that keeps the temporal dimension, in cooperation with an attention module.
- The attention selects important reference spectrogram frames to synthesize target speech.
- Pre-train the model with a single speaker dataset first (LJSpeech for 30k iters.)
- Fine-tune the model with a multi-speaker dataset. (VCTK for 70k iters.)
- It achieves slightly better metrics in comparison to using x-vectors from speaker classification model and VAE based reference audio encoder.
Demo page: https://hyperconnect.github.io/Attentron/ <br>
![image](https://user-images.githubusercontent.com/1402048/105180385-cc57eb00-5b2a-11eb-9b9b-201153ee2029.png)
![image](https://user-images.githubusercontent.com/1402048/105180441-e1347e80-5b2a-11eb-8968-3731a0119ff4.png)
</details>
<details>
<summary> Towards Universal Text-to-Speech: http://www.interspeech2020.org/uploadfile/pdf/Wed-3-4-3.pdf </summary>
- A framework for a sequence to sequence multi-lingual TTS
- The model is trained with a very large, highly unbalanced dataset.
- The model is able to learn a new language with 6 minutes and a new speaker with 20 seconds of data after the initial training.
- The model architecture is a Transformer-based Encoder-Decoder network with a Speaker Network and a Language Network for the speaker and language conditioning. The outputs of these networks are concatenated to the Encoder output.
- The conditioning networks take a one-hot vector representing the speaker or language ID and projects it to a conditioning representation.
- They use a WaveNet vocoder for converting predicted Mel-Spectrograms to the waveform output.
- They use language depended phonemes inputs that are not shared among languages.
- They sample each batch based on the inverse frequency of each language in the dataset. Thus each training batch has a uniform distribution over languages, alleviating the language imbalance in the training dataset.
- For learning new speakers/languages, they fine-tune the Encoder-Decoder model with the conditioning networks. They don’t train the WaveNet model.
- They use 1250 hours professional recordings from 50 languages for the training.
- They use 16khz sampling rate for all the audio samples and trim silences at the beginning and the end of each clip.
- They use 4 V100 GPUs for training but they don’t mention how long they trained the model.
- The results show that single speaker models are better than the proposed approach in MOS metric.
- Also using conditioning networks is important for the long-tail languages in the dataset as they improve the MOS metric for them but impair the performance for the high-resource languages.
- When they add a new speaker, they observe that using more than 5 minutes of data degrades the model performance. They claim that since these recordings are not as clean as the original recordings, using more of them affects the model’s general performance.
- The multi-lingual model is able to train with only 6 minutes of data for new speakers and languages whereas a single speaker model requires 3 hours to train and cannot even attain similar MOS values as the 6 minutes multi-lingual model.
![image](https://user-images.githubusercontent.com/1402048/135748505-e7fd258a-39b6-437c-a542-14456d33344f.png)
![image](https://user-images.githubusercontent.com/1402048/135748507-059f9f91-c838-4286-a2d6-0a29ded9738a.png)
</details>
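
The inverse-frequency batch balancing described above can be sketched with a weighted sampler (an approximation; the paper does not release code):

```python
from collections import Counter

import torch
from torch.utils.data import WeightedRandomSampler

def make_language_balanced_sampler(languages):
    """languages: one language ID per training utterance."""
    counts = Counter(languages)
    # Weight each utterance by 1 / count(language) so that every language
    # is equally likely to appear in a batch despite dataset imbalance.
    weights = torch.tensor([1.0 / counts[lang] for lang in languages],
                           dtype=torch.double)
    return WeightedRandomSampler(weights, num_samples=len(languages))
```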
<details>
<summary> AdaSpeech: Adaptive Text to Speech for Custom Voice: https://openreview.net/pdf?id=Drynvt7gg4L </summary>
- They proposed a system that can adapt to different acoustic properties of users' input while using a minimal number of parameters to do so.
- The main architecture is based on FastSpeech2 model that uses Pitch and Variance predictors to learn the finer granularities of the input speech.
- They use 3 additional conditioning networks.
- Utterance level. It takes mel-spectrogram of the reference speech as input.
- Phoneme level. It takes phoneme level mel-spectrograms as input and computes phoneme-level conditioning vectors. Phoneme-level mel-spectrograms are computed by taking the average spectrogram frame in the duration of each phoneme.
- Phoneme level 2. It takes phoneme encoder outputs as inputs. This differs from the network above by just using the phoneme information without seeing the spectrograms.
- All these conditioning networks and the back-bone FastSpeech2 uses Layer Normalisation layers.
- Conditional layer normalisation. They propose fine-tuning only the scale and bias parameters of each layer normalisation layer when the model is fine-tuned for a new speaker. They train a speaker conditioning module for each Layer Norm layer that outputs scale and bias values. (They use one speaker conditioning module per Transformer block.)
- It means that you only store the Speaker Conditioning module for each new speaker and predict the scale and bias values at inference as you keep the rest of the model the same.
- In the experiments, they pre-train the model on the LibriTTS dataset and fine-tune it with VCTK and LJSpeech.
- The results show that using Conditional Layer Normalisation achieves better than their 2 baselines which use only speaker embedding and decoder network fine-tunning.
- Their ablation study shows that the most significant part of the model is the “Phoneme level” network, followed by Conditional Layer Normalisation and the “Utterance level” network, in that order.
- One important downside of the paper is that there is almost no comparison with the literature, which makes the results harder to assess objectively.
Demo page: https://speechresearch.github.io/adaspeech/ <br>
![image](https://user-images.githubusercontent.com/1402048/135750827-24f74e2e-ec2f-4af7-ac6f-618309d7178d.png)
![image](https://user-images.githubusercontent.com/1402048/135750837-3958d283-040e-4959-891b-10633455c7b4.png)
![image](https://user-images.githubusercontent.com/1402048/135750845-c606e191-9f6e-4dc0-b50b-b3a1b3bc79c9.png)
![image](https://user-images.githubusercontent.com/1402048/135750852-9e3291ae-8d37-41fc-9c88-7daccd2a2104.png)
![image](https://user-images.githubusercontent.com/1402048/135750857-21565d3d-ef8a-42db-b3a6-ca225457fa1c.png)
![image](https://user-images.githubusercontent.com/1402048/135750860-2c8e610b-c9b3-4bd7-a94f-5479ac6c87b6.png)
</details>
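
A minimal sketch of conditional layer normalization as summarized above (module and dimension names are assumptions):

```python
import torch
from torch import nn

class ConditionalLayerNorm(nn.Module):
    """LayerNorm whose scale and bias are predicted from a speaker embedding."""

    def __init__(self, hidden_dim, spk_dim):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)
        self.scale = nn.Linear(spk_dim, hidden_dim)
        self.bias = nn.Linear(spk_dim, hidden_dim)

    def forward(self, x, spk_emb):
        # x: [B, T, H]; spk_emb: [B, spk_dim]
        return (self.norm(x) * self.scale(spk_emb).unsqueeze(1)
                + self.bias(spk_emb).unsqueeze(1))
```

At adaptation time, only `scale` and `bias` predictors would be fine-tuned per speaker, which matches the small per-speaker footprint described above.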
______________________________________________________________________
## Attention
- Location-Relative Attention Mechanisms for Robust Long-Form Speech Synthesis - https://arxiv.org/pdf/1910.10288.pdf
______________________________________________________________________
## Vocoders
- MelGAN: https://arxiv.org/pdf/1910.06711.pdf
- ParallelWaveGAN: https://arxiv.org/pdf/1910.11480.pdf
- Multi scale STFT loss
- ~1M model parameters (very small)
- Slightly worse than WaveRNN
- Improving FFTNet
  - https://www.okamotocamera.com/slt_2018.pdf
- FFTnet
- https://gfx.cs.princeton.edu/pubs/Jin_2018_FAR/clips/clips.php
- https://gfx.cs.princeton.edu/pubs/Jin_2018_FAR/fftnet-jin2018.pdf
- SPEECH WAVEFORM RECONSTRUCTION USING CONVOLUTIONAL NEURAL NETWORKS WITH NOISE AND PERIODIC INPUTS
- 150.162.46.34:8080/icassp2019/ICASSP2019/pdfs/0007045.pdf
- Towards Achieving Robust Universal Vocoding
- https://arxiv.org/pdf/1811.06292.pdf
- LPCNet
- https://arxiv.org/pdf/1810.11846.pdf
- https://arxiv.org/pdf/2001.11686.pdf
- ExciteNet
- https://arxiv.org/pdf/1811.04769v3.pdf
- GELP: GAN-Excited Linear Prediction for Speech Synthesis from Mel-spectrogram
- https://arxiv.org/pdf/1904.03976v3.pdf
- High Fidelity Speech Synthesis with Adversarial Networks: https://arxiv.org/abs/1909.11646
- GAN-TTS, end-to-end speech synthesis
- Uses duration and linguistic features
- Duration and acoustic features are predicted by additional models.
- Random Window Discriminator: ingests random windows rather than the whole voice sample.
- Multiple RWDs. Some conditional and some unconditional. (conditioned on
input features)
- Punchline: Use randomly sampled windows with different window sizes for D.
- The shared samples sound mechanical, which shows the limits of non-neural acoustic features.
- Multi-Band MelGAN: https://arxiv.org/abs/2005.05106
- Use PWGAN losses instead of feature-matching loss.
- Using a larger receptive field boosts model performance significantly.
- Generator pretraining for 200k iters.
- Multi-band voice signal prediction. The output is the summation of 4 different band predictions combined with PQMF synthesis filters.
- The multi-band model has 1.9M parameters (quite small).
- Claimed to be 7x faster than MelGAN
- On a Chinese dataset: MOS 4.22
- WaveGlow: https://arxiv.org/abs/1811.00002
- Very large model (268M parameters)
- Hard to train: on a 12GB GPU it can only take a batch size of 1.
- Real-time inference due to the use of convolutions.
- Based on Invertible Normalizing Flow. (Great tutorial https://blog.evjang.com/2018/01/nf1.html
)
- The model learns an invertible mapping of audio samples to mel-spectrograms with a maximum likelihood loss.
- At inference, the network runs in the reverse direction and converts given mel-spectrograms to audio samples.
- Training has been done using 8 Nvidia V100 with 32GB ram, batch size 24. (Expensive)
- SqueezeWave: https://arxiv.org/pdf/2001.05685.pdf , code: https://github.com/tianrengao/SqueezeWave
- ~5-13x faster than real-time
- WaveGlow redundancies: long audio samples, upsampled mel-specs, large channel dimensions in the WN function.
- Fixes: more but shorter audio samples as input (L=2000, C=8 vs. L=64, C=256)
- L=64 matches the mel-spec resolution so no upsampling is necessary.
- Use depth-wise separable convolutions in WN modules (see the sketch after this list).
- Use regular convolution instead of dilated since audio samples are shorter.
- Do not split module outputs into residual and network output, assuming these vectors are almost identical.
- Training has been done using Titan RTX 24GB batch size 96 for 600k iterations.
- MOS on LJSpeech: WaveGLow - 4.57, SqueezeWave (L=128 C=256) - 4.07 and SqueezeWave (L=64 C=256) - 3.77
- The smallest model generates 21K samples per second on a Raspberry Pi 3.
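
A generic depthwise-separable 1D convolution, as referenced in the SqueezeWave notes above (a sketch for illustration, not the paper's code):

```python
from torch import nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv (per-channel) followed by a 1x1 pointwise conv."""

    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, 1)

    def forward(self, x):  # x: [B, C, T]
        return self.pointwise(self.depthwise(x))
```

This factorization cuts the parameter count roughly by a factor of the kernel size, which is where much of SqueezeWave's size reduction comes from.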
<details>
<summary>WaveGrad: https://arxiv.org/pdf/2009.00713.pdf </summary>
- It is based on Probability Diffusion and Lagenvin Dynamics
- The base idea is to learn a function that maps a known distribution to target data distribution iteratively.
- They report 0.2 real-time factor on a GPU but CPU performance is not shared.
- In the example code below, the author reports that the model converges after 2 days of training on a single GPU.
- MOS scores in the paper are not comprehensive enough but show comparable performance to known models like WaveRNN and WaveNet.
Code: https://github.com/ivanvovk/WaveGrad
![image](https://user-images.githubusercontent.com/1402048/93461311-e071ae80-f8e4-11ea-82c6-e631301bbd27.png)
</details>
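
An illustrative DDPM-style refinement loop in the spirit of the paper; `model`, the `betas` noise schedule, and the hop size are assumptions rather than the authors' code:

```python
import torch

@torch.no_grad()
def refine(model, mel, betas, hop=256):
    """Iteratively refine noise into audio conditioned on a mel-spectrogram."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    y = torch.randn(mel.size(0), mel.size(-1) * hop)       # start from pure noise
    for t in reversed(range(len(betas))):
        eps = model(y, mel, torch.tensor([t]))             # predicted noise
        y = (y - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:                                          # add noise except at the last step
            y = y + torch.sqrt(betas[t]) * torch.randn_like(y)
    return y
```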
# From the Internet (Blogs, Videos etc)
## Videos
### Paper Discussion
- Tacotron 2 : https://www.youtube.com/watch?v=2iarxxm-v9w
### Talks
- Talk on Pushing the Frontier of Neural Text-to-Speech, by Xu Tan, 2021, https://youtu.be/MA8PCvmr8B0
- Talk on Generative Model-Based Text-to-Speech Synthesis, by Heiga Zen, 2017
- Video: https://youtu.be/nsrSrYtKkT8
- Slide: https://research.google.com/pubs/pub45882.html
- Tutorials on Neural Parametric Text-to-Speech Synthesis at ISCA Odyssey 2020, by Xin Wang, 2020
- Video: https://youtu.be/WCe7SYcDzAI
- Slide: http://tonywangx.github.io/slide.html#dec-2020
- ISCA Speech Processing Course on Neural vocoders, 2022
- Basic components of neural vocoders: https://youtu.be/M833q5I-ZYs
- Deep generative models for speech compression (LPCNet): https://youtu.be/7KsnFx3pLgw
- Neural auto-regressive, source-filter and glottal vocoders: https://youtu.be/gPrmxdberX0
- Slide: http://tonywangx.github.io/slide.html#jul-2020
- Speech synthesis from neural decoding of spoken sentences | AISC : https://www.youtube.com/watch?v=MNDtMDPmnMo
- Generative Text-to-Speech Synthesis : https://www.youtube.com/watch?v=j4mVEAnKiNg
- SPEECH SYNTHESIS FOR THE GAMING INDUSTRY : https://www.youtube.com/watch?v=aOHAYe4A-2Q
### General
- Modern Text-to-Speech Systems Review : https://www.youtube.com/watch?v=8rXLSc-ZcRY
## Jupyter notebooks
- Tutorials on Selected Neural Vocoders: https://github.com/nii-yamagishilab/project-NN-Pytorch-scripts/tree/master/tutorials/b1_neural_vocoder
## Blogs
- Text to Speech Deep Learning Architectures : http://www.erogol.com/text-speech-deep-learning-architectures/
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/taskcluster/generic_tc_caching-win-opt-base.tyml | taskId: ${taskcluster.taskId}
provisionerId: ${taskcluster.docker.provisionerId}
workerType: ${taskcluster.docker.workerTypeWin}
taskGroupId: ${taskcluster.taskGroupId}
schedulerId: ${taskcluster.schedulerId}
created: { $fromNow: '0 sec' }
deadline: { $fromNow: '1 day' }
expires: { $fromNow: '6 months' }
scopes:
- "index:insert-task:project.deepspeech.*"
payload:
maxRunTime: { $eval: to_int(build.maxRunTime) }
features:
taskclusterProxy: true
mounts:
- file: msys2-base-x86_64.tar.xz
content:
sha256: ${system.msys2.sha}
url: ${system.msys2.url}
env:
TC_MSYS_VERSION: 'MSYS_NT-6.3-9600'
MSYS: 'winsymlinks:nativestrict'
GIT_LFS_SKIP_SMUDGE: '1'
command:
- >-
"C:\Program Files\7-zip\7z.exe" x -txz -so msys2-base-x86_64.tar.xz |
"C:\Program Files\7-zip\7z.exe" x -o%USERPROFILE% -ttar -aoa -si
- .\msys64\usr\bin\bash.exe --login -cx "export THIS_BASH_PID=$$; ps -ef | grep '[?]' | awk '{print $2}' | grep -v $THIS_BASH_PID | xargs -r kill; exit 0"
- .\msys64\usr\bin\bash.exe --login -cx "pacman -Syu --noconfirm"
- $let:
extraSystemSetup: { $eval: strip(str(build.system_setup)) }
extraSystemConfig: { $eval: strip(str(build.system_config)) }
taskIndexExpire: { $fromNow: '6 months' }
in: >
echo .\msys64\usr\bin\bash.exe --login -cxe "export LC_ALL=C &&
export PATH=\"$USERPROFILE/msys64/usr/bin:/c/Python36:/c/Program Files/Git/bin:/c/Program Files/7-Zip/:$PATH\" &&
export TASKCLUSTER_ARTIFACTS=\"$(cygpath -u $USERPROFILE/public)\" &&
export TASKCLUSTER_TASK_DIR=\"/c/builds/tc-workdir/\" &&
(rm -fr $TASKCLUSTER_TASK_DIR/ ; mkdir $TASKCLUSTER_TASK_DIR) &&
echo \"export TASKCLUSTER_TASK_EXIT_CODE=0\" > $USERPROFILE/tc-exit.sh &&
env && pacman --noconfirm -S tar && mkdir -p $TASKCLUSTER_ARTIFACTS/ && if [ \"`curl -sSIL -o /dev/null -w %%{http_code} ${build.cache.artifact_url}`\" != \"200\" ]; then git clone --quiet ${build.build_or_cache.repo} $TASKCLUSTER_TASK_DIR/${build.build_or_cache.dir}/ && cd $TASKCLUSTER_TASK_DIR/${build.build_or_cache.dir} && git checkout --quiet ${build.build_or_cache.sha} && ${extraSystemConfig} && $TASKCLUSTER_TASK_DIR/${build.build_or_cache.dir}/${build.scripts.setup} && $TASKCLUSTER_TASK_DIR/${build.build_or_cache.dir}/${build.scripts.build} && $TASKCLUSTER_TASK_DIR/${build.build_or_cache.dir}/${build.scripts.package} && $TASKCLUSTER_TASK_DIR/${build.build_or_cache.dir}/taskcluster/tc-update-index.sh ${taskIndexExpire} taskcluster ${build.cache.artifact_namespace}; fi; echo \"export TASKCLUSTER_TASK_EXIT_CODE=$?\" > $USERPROFILE/tc-exit.sh" | cmd /k
- .\msys64\usr\bin\bash.exe --login -cxe "source $USERPROFILE/tc-exit.sh && exit $TASKCLUSTER_TASK_EXIT_CODE"
artifacts:
- type: "directory"
path: "public/"
expires: { $fromNow: '6 months' }
metadata:
name: ${build.metadata.name}
description: ${build.metadata.description}
owner: ${event.head.user.email}
source: ${event.head.repo.url}
| 0 |
coqui_public_repos/stt-model-manager | coqui_public_repos/stt-model-manager/config/pnpTs.js | 'use strict';
const { resolveModuleName } = require('ts-pnp');
exports.resolveModuleName = (
typescript,
moduleName,
containingFile,
compilerOptions,
resolutionHost
) => {
return resolveModuleName(
moduleName,
containingFile,
compilerOptions,
resolutionHost,
typescript.resolveModuleName
);
};
exports.resolveTypeReferenceDirective = (
typescript,
moduleName,
containingFile,
compilerOptions,
resolutionHost
) => {
return resolveModuleName(
moduleName,
containingFile,
compilerOptions,
resolutionHost,
typescript.resolveTypeReferenceDirective
);
};
| 0 |
coqui_public_repos/TTS/recipes/ljspeech | coqui_public_repos/TTS/recipes/ljspeech/tacotron2-Capacitron/train_capacitron_t2.py | import os
from trainer import Trainer, TrainerArgs
from TTS.config.shared_configs import BaseAudioConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig, CapacitronVAEConfig
from TTS.tts.configs.tacotron2_config import Tacotron2Config
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.tacotron2 import Tacotron2
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor
output_path = os.path.dirname(os.path.abspath(__file__))
data_path = "/srv/data/"
# Using LJSpeech-like dataset processing for the Blizzard dataset
dataset_config = BaseDatasetConfig(
formatter="ljspeech",
meta_file_train="metadata.csv",
path=data_path,
)
audio_config = BaseAudioConfig(
sample_rate=22050,
do_trim_silence=True,
trim_db=60.0,
signal_norm=False,
mel_fmin=0.0,
mel_fmax=11025,
spec_gain=1.0,
log_func="np.log",
ref_level_db=20,
preemphasis=0.0,
)
# Using the standard Capacitron config
capacitron_config = CapacitronVAEConfig(capacitron_VAE_loss_alpha=1.0, capacitron_capacity=50)
config = Tacotron2Config(
run_name="Capacitron-Tacotron2",
audio=audio_config,
capacitron_vae=capacitron_config,
use_capacitron_vae=True,
batch_size=128, # Tune this to your gpu
max_audio_len=8 * 22050, # Tune this to your gpu
min_audio_len=1 * 22050,
eval_batch_size=16,
num_loader_workers=8,
num_eval_loader_workers=8,
precompute_num_workers=24,
run_eval=True,
test_delay_epochs=25,
ga_alpha=0.0,
r=2,
optimizer="CapacitronOptimizer",
optimizer_params={"RAdam": {"betas": [0.9, 0.998], "weight_decay": 1e-6}, "SGD": {"lr": 1e-5, "momentum": 0.9}},
attention_type="dynamic_convolution",
grad_clip=0.0, # Important! We overwrite the standard grad_clip with capacitron_grad_clip
double_decoder_consistency=False,
epochs=1000,
text_cleaner="phoneme_cleaners",
use_phonemes=True,
phoneme_language="en-us",
phonemizer="espeak",
phoneme_cache_path=os.path.join(data_path, "phoneme_cache"),
stopnet_pos_weight=15,
print_step=25,
print_eval=True,
mixed_precision=False,
seq_len_norm=True,
output_path=output_path,
datasets=[dataset_config],
lr=1e-3,
lr_scheduler="StepwiseGradualLR",
lr_scheduler_params={
"gradual_learning_rates": [
[0, 1e-3],
[2e4, 5e-4],
            [4e4, 3e-4],
[6e4, 1e-4],
[8e4, 5e-5],
]
},
scheduler_after_epoch=False, # scheduler doesn't work without this flag
# Need to experiment with these below for capacitron
loss_masking=False,
decoder_loss_alpha=1.0,
postnet_loss_alpha=1.0,
postnet_diff_spec_alpha=0.0,
decoder_diff_spec_alpha=0.0,
decoder_ssim_alpha=0.0,
postnet_ssim_alpha=0.0,
)
ap = AudioProcessor(**config.audio.to_dict())
tokenizer, config = TTSTokenizer.init_from_config(config)
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)
model = Tacotron2(config, ap, tokenizer, speaker_manager=None)
trainer = Trainer(
TrainerArgs(),
config,
output_path,
model=model,
train_samples=train_samples,
eval_samples=eval_samples,
training_assets={"audio_processor": ap},
)
trainer.fit()
| 0 |
coqui_public_repos | coqui_public_repos/STT/README.rst | .. note::
   **This project is no longer actively maintained**, and we have stopped hosting the online Model Zoo. We've seen focus shift towards newer STT models such as `Whisper <https://github.com/openai/whisper>`_, and have ourselves focused on `Coqui TTS <https://github.com/coqui-ai/TTS>`_ and `Coqui Studio <https://coqui.ai/>`_.

   The models will remain available in `the releases of the coqui-ai/STT-models repo <https://github.com/coqui-ai/STT-models/releases>`_.
.. image:: images/coqui-STT-logo-green.png
:alt: Coqui STT logo
.. |doc-img| image:: https://readthedocs.org/projects/stt/badge/?version=latest
:target: https://stt.readthedocs.io/?badge=latest
:alt: Documentation
.. |covenant-img| image:: https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg
:target: CODE_OF_CONDUCT.md
:alt: Contributor Covenant
.. |gitter-img| image:: https://badges.gitter.im/coqui-ai/STT.svg
:target: https://gitter.im/coqui-ai/STT?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge
:alt: Gitter Room
.. |doi| image:: https://zenodo.org/badge/344354127.svg
:target: https://zenodo.org/badge/latestdoi/344354127
|doc-img| |covenant-img| |gitter-img| |doi|
`👉 Subscribe to 🐸Coqui's Newsletter <https://coqui.ai/?subscription=true>`_
**Coqui STT** (🐸STT) is a fast, open-source, multi-platform, deep-learning toolkit for training and deploying speech-to-text models. 🐸STT is battle tested in both production and research 🚀
🐸STT features
---------------
* High-quality pre-trained STT model.
* Efficient training pipeline with Multi-GPU support.
* Streaming inference.
* Multiple possible transcripts, each with an associated confidence score.
* Real-time inference.
* Small-footprint acoustic model.
* Bindings for various programming languages.
`Quickstart <https://stt.readthedocs.io/en/latest/#quickstart>`_
================================================================
Where to Ask Questions
----------------------
.. list-table::
:widths: 25 25
:header-rows: 1
* - Type
- Link
* - 🚨 **Bug Reports**
- `Github Issue Tracker <https://github.com/coqui-ai/STT/issues/>`_
* - 🎁 **Feature Requests & Ideas**
- `Github Issue Tracker <https://github.com/coqui-ai/STT/issues/>`_
* - ❔ **Questions**
- `Github Discussions <https://github.com/coqui-ai/stt/discussions/>`_
* - 💬 **General Discussion**
- `Github Discussions <https://github.com/coqui-ai/stt/discussions/>`_ or `Gitter Room <https://gitter.im/coqui-ai/STT?utm_source=share-link&utm_medium=link&utm_campaign=share-link>`_
Links & Resources
-----------------
.. list-table::
:widths: 25 25
:header-rows: 1
* - Type
- Link
* - 📰 **Documentation**
- `stt.readthedocs.io <https://stt.readthedocs.io/>`_
* - 🚀 **Latest release with pre-trained models**
- `see the latest release on GitHub <https://github.com/coqui-ai/STT/releases/latest>`_
* - 🤝 **Contribution Guidelines**
- `CONTRIBUTING.rst <CONTRIBUTING.rst>`_
| 0 |
coqui_public_repos/TTS/TTS/tts | coqui_public_repos/TTS/TTS/tts/configs/bark_config.py | import os
from dataclasses import dataclass, field
from typing import Dict
from TTS.tts.configs.shared_configs import BaseTTSConfig
from TTS.tts.layers.bark.model import GPTConfig
from TTS.tts.layers.bark.model_fine import FineGPTConfig
from TTS.tts.models.bark import BarkAudioConfig
from TTS.utils.generic_utils import get_user_data_dir
@dataclass
class BarkConfig(BaseTTSConfig):
"""Bark TTS configuration
Args:
model (str): model name that registers the model.
audio (BarkAudioConfig): audio configuration. Defaults to BarkAudioConfig().
num_chars (int): number of characters in the alphabet. Defaults to 0.
semantic_config (GPTConfig): semantic configuration. Defaults to GPTConfig().
fine_config (FineGPTConfig): fine configuration. Defaults to FineGPTConfig().
coarse_config (GPTConfig): coarse configuration. Defaults to GPTConfig().
CONTEXT_WINDOW_SIZE (int): GPT context window size. Defaults to 1024.
SEMANTIC_RATE_HZ (float): semantic tokens rate in Hz. Defaults to 49.9.
SEMANTIC_VOCAB_SIZE (int): semantic vocabulary size. Defaults to 10_000.
CODEBOOK_SIZE (int): encodec codebook size. Defaults to 1024.
N_COARSE_CODEBOOKS (int): number of coarse codebooks. Defaults to 2.
N_FINE_CODEBOOKS (int): number of fine codebooks. Defaults to 8.
COARSE_RATE_HZ (int): coarse tokens rate in Hz. Defaults to 75.
SAMPLE_RATE (int): sample rate. Defaults to 24_000.
USE_SMALLER_MODELS (bool): use smaller models. Defaults to False.
TEXT_ENCODING_OFFSET (int): text encoding offset. Defaults to 10_048.
SEMANTIC_PAD_TOKEN (int): semantic pad token. Defaults to 10_000.
        TEXT_PAD_TOKEN ([type]): text pad token. Defaults to 129_595.
TEXT_EOS_TOKEN ([type]): text end of sentence token. Defaults to 10_049.
TEXT_SOS_TOKEN ([type]): text start of sentence token. Defaults to 10_050.
        SEMANTIC_INFER_TOKEN (int): semantic infer token. Defaults to 129_599.
COARSE_SEMANTIC_PAD_TOKEN (int): coarse semantic pad token. Defaults to 12_048.
COARSE_INFER_TOKEN (int): coarse infer token. Defaults to 12_050.
REMOTE_BASE_URL ([type]): remote base url. Defaults to "https://huggingface.co/erogol/bark/tree".
REMOTE_MODEL_PATHS (Dict): remote model paths. Defaults to None.
LOCAL_MODEL_PATHS (Dict): local model paths. Defaults to None.
SMALL_REMOTE_MODEL_PATHS (Dict): small remote model paths. Defaults to None.
CACHE_DIR (str): local cache directory. Defaults to get_user_data_dir().
        DEF_SPEAKER_DIR (str): default speaker directory to store speaker values for voice cloning. Defaults to get_user_data_dir().
"""
model: str = "bark"
audio: BarkAudioConfig = field(default_factory=BarkAudioConfig)
num_chars: int = 0
semantic_config: GPTConfig = field(default_factory=GPTConfig)
fine_config: FineGPTConfig = field(default_factory=FineGPTConfig)
coarse_config: GPTConfig = field(default_factory=GPTConfig)
CONTEXT_WINDOW_SIZE: int = 1024
SEMANTIC_RATE_HZ: float = 49.9
SEMANTIC_VOCAB_SIZE: int = 10_000
CODEBOOK_SIZE: int = 1024
N_COARSE_CODEBOOKS: int = 2
N_FINE_CODEBOOKS: int = 8
COARSE_RATE_HZ: int = 75
SAMPLE_RATE: int = 24_000
USE_SMALLER_MODELS: bool = False
TEXT_ENCODING_OFFSET: int = 10_048
SEMANTIC_PAD_TOKEN: int = 10_000
TEXT_PAD_TOKEN: int = 129_595
SEMANTIC_INFER_TOKEN: int = 129_599
COARSE_SEMANTIC_PAD_TOKEN: int = 12_048
COARSE_INFER_TOKEN: int = 12_050
REMOTE_BASE_URL = "https://huggingface.co/erogol/bark/tree/main/"
REMOTE_MODEL_PATHS: Dict = None
LOCAL_MODEL_PATHS: Dict = None
SMALL_REMOTE_MODEL_PATHS: Dict = None
CACHE_DIR: str = str(get_user_data_dir("tts/suno/bark_v0"))
DEF_SPEAKER_DIR: str = str(get_user_data_dir("tts/bark_v0/speakers"))
def __post_init__(self):
self.REMOTE_MODEL_PATHS = {
"text": {
"path": os.path.join(self.REMOTE_BASE_URL, "text_2.pt"),
"checksum": "54afa89d65e318d4f5f80e8e8799026a",
},
"coarse": {
"path": os.path.join(self.REMOTE_BASE_URL, "coarse_2.pt"),
"checksum": "8a98094e5e3a255a5c9c0ab7efe8fd28",
},
"fine": {
"path": os.path.join(self.REMOTE_BASE_URL, "fine_2.pt"),
"checksum": "59d184ed44e3650774a2f0503a48a97b",
},
}
self.LOCAL_MODEL_PATHS = {
"text": os.path.join(self.CACHE_DIR, "text_2.pt"),
"coarse": os.path.join(self.CACHE_DIR, "coarse_2.pt"),
"fine": os.path.join(self.CACHE_DIR, "fine_2.pt"),
"hubert_tokenizer": os.path.join(self.CACHE_DIR, "tokenizer.pth"),
"hubert": os.path.join(self.CACHE_DIR, "hubert.pt"),
}
self.SMALL_REMOTE_MODEL_PATHS = {
"text": {"path": os.path.join(self.REMOTE_BASE_URL, "text.pt")},
"coarse": {"path": os.path.join(self.REMOTE_BASE_URL, "coarse.pt")},
"fine": {"path": os.path.join(self.REMOTE_BASE_URL, "fine.pt")},
}
self.sample_rate = self.SAMPLE_RATE # pylint: disable=attribute-defined-outside-init
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/script/libfstscript.vcxproj.filters | <?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="Source Files">
<UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
<Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>
</Filter>
<Filter Include="Header Files">
<UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>
<Extensions>h;hh;hpp;hxx;hm;inl;inc;xsd</Extensions>
</Filter>
</ItemGroup>
</Project> | 0 |
coqui_public_repos/STT | coqui_public_repos/STT/taskcluster/test-python_39_tflite_16k-linux-amd64-opt.yml | build:
template_file: test-linux-opt-base.tyml
dependencies:
- "linux-amd64-tflite-opt"
- "test-training_16k-linux-amd64-py36m-opt"
test_model_task: "test-training_16k-linux-amd64-py36m-opt"
args:
tests_cmdline: "${system.homedir.linux}/DeepSpeech/ds/taskcluster/tc-python_tflite-tests.sh 3.9.0: 16k"
workerType: "${docker.dsTests}"
metadata:
name: "DeepSpeech Linux AMD64 TFLite Python v3.9 tests (16kHz)"
description: "Testing DeepSpeech for Linux/AMD64 on Python v3.9 TFLite, optimized version (16kHz)"
| 0 |
coqui_public_repos/STT-models/finnish/itml | coqui_public_repos/STT-models/finnish/itml/v0.1.0/alphabet.txt |
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
x
y
z
ä
ö
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/taskcluster/linux-rpi3-cpu-dbg.yml | build:
template_file: linux-opt-base.tyml
dependencies:
- "swig-linux-amd64"
- "node-gyp-cache"
- "pyenv-linux-amd64"
- "tf_linux-rpi3-cpu-dbg"
routes:
- "index.project.deepspeech.deepspeech.native_client.${event.head.branchortag}.arm-dbg"
- "index.project.deepspeech.deepspeech.native_client.${event.head.branchortag}.${event.head.sha}.arm-dbg"
- "index.project.deepspeech.deepspeech.native_client.arm-dbg.${event.head.sha}"
## multistrap 2.2.0-ubuntu1 is broken in 14.04: https://bugs.launchpad.net/ubuntu/+source/multistrap/+bug/1313787
system_setup:
>
apt-get -qq -y install gdebi git pixz &&
wget http://mirrors.kernel.org/ubuntu/pool/universe/m/multistrap/multistrap_2.2.0ubuntu2_all.deb -O /tmp/multistrap_2.2.0ubuntu2_all.deb &&
echo "y" | gdebi /tmp/multistrap_2.2.0ubuntu2_all.deb
system_config:
>
multistrap -d /tmp/multistrap-raspbian-buster/ -f ${system.homedir.linux}/DeepSpeech/ds/native_client/multistrap_raspbian_buster.conf
tensorflow: ${system.tensorflow.linux_armv7.url}
scripts:
setup: "taskcluster/tc-true.sh"
build: "taskcluster/rpi3-build-dbg.sh"
package: "taskcluster/package.sh"
workerType: "${docker.dsBuild}"
nc_asset_name: "native_client.rpi3.cpu.linux_dbg.tar.xz"
metadata:
name: "DeepSpeech Linux RPi3/ARMv7 CPU debug"
description: "Building DeepSpeech for Linux RPi3 ARMv7, CPU only, debug version"
| 0 |
coqui_public_repos/TTS/TTS/vc | coqui_public_repos/TTS/TTS/vc/models/freevc.py | from typing import Dict, List, Optional, Tuple, Union
import librosa
import numpy as np
import torch
from coqpit import Coqpit
from torch import nn
from torch.nn import Conv1d, Conv2d, ConvTranspose1d
from torch.nn import functional as F
from torch.nn.utils import spectral_norm
from torch.nn.utils.parametrizations import weight_norm
from torch.nn.utils.parametrize import remove_parametrizations
import TTS.vc.modules.freevc.commons as commons
import TTS.vc.modules.freevc.modules as modules
from TTS.tts.utils.speakers import SpeakerManager
from TTS.utils.io import load_fsspec
from TTS.vc.configs.freevc_config import FreeVCConfig
from TTS.vc.models.base_vc import BaseVC
from TTS.vc.modules.freevc.commons import get_padding, init_weights
from TTS.vc.modules.freevc.mel_processing import mel_spectrogram_torch
from TTS.vc.modules.freevc.speaker_encoder.speaker_encoder import SpeakerEncoder as SpeakerEncoderEx
from TTS.vc.modules.freevc.wavlm import get_wavlm
class ResidualCouplingBlock(nn.Module):
def __init__(self, channels, hidden_channels, kernel_size, dilation_rate, n_layers, n_flows=4, gin_channels=0):
super().__init__()
self.channels = channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.n_flows = n_flows
self.gin_channels = gin_channels
self.flows = nn.ModuleList()
for i in range(n_flows):
self.flows.append(
modules.ResidualCouplingLayer(
channels,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
gin_channels=gin_channels,
mean_only=True,
)
)
self.flows.append(modules.Flip())
def forward(self, x, x_mask, g=None, reverse=False):
if not reverse:
for flow in self.flows:
x, _ = flow(x, x_mask, g=g, reverse=reverse)
else:
for flow in reversed(self.flows):
x = flow(x, x_mask, g=g, reverse=reverse)
return x
class Encoder(nn.Module):
def __init__(
self, in_channels, out_channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0
):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.gin_channels = gin_channels
self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
def forward(self, x, x_lengths, g=None):
x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
x = self.pre(x) * x_mask
x = self.enc(x, x_mask, g=g)
stats = self.proj(x) * x_mask
m, logs = torch.split(stats, self.out_channels, dim=1)
z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
return z, m, logs, x_mask
class Generator(torch.nn.Module):
def __init__(
self,
initial_channel,
resblock,
resblock_kernel_sizes,
resblock_dilation_sizes,
upsample_rates,
upsample_initial_channel,
upsample_kernel_sizes,
gin_channels=0,
):
super(Generator, self).__init__()
self.num_kernels = len(resblock_kernel_sizes)
self.num_upsamples = len(upsample_rates)
self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
self.ups = nn.ModuleList()
for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
self.ups.append(
weight_norm(
ConvTranspose1d(
upsample_initial_channel // (2**i),
upsample_initial_channel // (2 ** (i + 1)),
k,
u,
padding=(k - u) // 2,
)
)
)
self.resblocks = nn.ModuleList()
for i in range(len(self.ups)):
ch = upsample_initial_channel // (2 ** (i + 1))
for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
self.resblocks.append(resblock(ch, k, d))
self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
self.ups.apply(init_weights)
if gin_channels != 0:
self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
def forward(self, x, g=None):
x = self.conv_pre(x)
if g is not None:
x = x + self.cond(g)
for i in range(self.num_upsamples):
x = F.leaky_relu(x, modules.LRELU_SLOPE)
x = self.ups[i](x)
xs = None
for j in range(self.num_kernels):
if xs is None:
xs = self.resblocks[i * self.num_kernels + j](x)
else:
xs += self.resblocks[i * self.num_kernels + j](x)
x = xs / self.num_kernels
x = F.leaky_relu(x)
x = self.conv_post(x)
x = torch.tanh(x)
return x
def remove_weight_norm(self):
print("Removing weight norm...")
for l in self.ups:
remove_parametrizations(l, "weight")
for l in self.resblocks:
remove_parametrizations(l, "weight")
class DiscriminatorP(torch.nn.Module):
def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
super(DiscriminatorP, self).__init__()
self.period = period
self.use_spectral_norm = use_spectral_norm
        norm_f = spectral_norm if use_spectral_norm else weight_norm
self.convs = nn.ModuleList(
[
norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
]
)
self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
def forward(self, x):
fmap = []
# 1d to 2d
b, c, t = x.shape
if t % self.period != 0: # pad first
n_pad = self.period - (t % self.period)
x = F.pad(x, (0, n_pad), "reflect")
t = t + n_pad
x = x.view(b, c, t // self.period, self.period)
for l in self.convs:
x = l(x)
x = F.leaky_relu(x, modules.LRELU_SLOPE)
fmap.append(x)
x = self.conv_post(x)
fmap.append(x)
x = torch.flatten(x, 1, -1)
return x, fmap
class DiscriminatorS(torch.nn.Module):
def __init__(self, use_spectral_norm=False):
super(DiscriminatorS, self).__init__()
        norm_f = spectral_norm if use_spectral_norm else weight_norm
self.convs = nn.ModuleList(
[
norm_f(Conv1d(1, 16, 15, 1, padding=7)),
norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
]
)
self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
def forward(self, x):
fmap = []
for l in self.convs:
x = l(x)
x = F.leaky_relu(x, modules.LRELU_SLOPE)
fmap.append(x)
x = self.conv_post(x)
fmap.append(x)
x = torch.flatten(x, 1, -1)
return x, fmap
class MultiPeriodDiscriminator(torch.nn.Module):
def __init__(self, use_spectral_norm=False):
super(MultiPeriodDiscriminator, self).__init__()
periods = [2, 3, 5, 7, 11]
discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
self.discriminators = nn.ModuleList(discs)
def forward(self, y, y_hat):
y_d_rs = []
y_d_gs = []
fmap_rs = []
fmap_gs = []
for i, d in enumerate(self.discriminators):
y_d_r, fmap_r = d(y)
y_d_g, fmap_g = d(y_hat)
y_d_rs.append(y_d_r)
y_d_gs.append(y_d_g)
fmap_rs.append(fmap_r)
fmap_gs.append(fmap_g)
return y_d_rs, y_d_gs, fmap_rs, fmap_gs
class SpeakerEncoder(torch.nn.Module):
def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256):
super(SpeakerEncoder, self).__init__()
self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True)
self.linear = nn.Linear(model_hidden_size, model_embedding_size)
self.relu = nn.ReLU()
def forward(self, mels):
self.lstm.flatten_parameters()
_, (hidden, _) = self.lstm(mels)
embeds_raw = self.relu(self.linear(hidden[-1]))
return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True)
def compute_partial_slices(self, total_frames, partial_frames, partial_hop):
mel_slices = []
for i in range(0, total_frames - partial_frames, partial_hop):
mel_range = torch.arange(i, i + partial_frames)
mel_slices.append(mel_range)
return mel_slices
def embed_utterance(self, mel, partial_frames=128, partial_hop=64):
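        # Embed long utterances by averaging the embeddings of overlapping
        # windows of `partial_frames` frames, hopped by `partial_hop`.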
mel_len = mel.size(1)
last_mel = mel[:, -partial_frames:]
if mel_len > partial_frames:
mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop)
mels = list(mel[:, s] for s in mel_slices)
mels.append(last_mel)
mels = torch.stack(tuple(mels), 0).squeeze(1)
with torch.no_grad():
partial_embeds = self(mels)
embed = torch.mean(partial_embeds, axis=0).unsqueeze(0)
# embed = embed / torch.linalg.norm(embed, 2)
else:
with torch.no_grad():
embed = self(last_mel)
return embed
class FreeVC(BaseVC):
"""
    Paper::
https://arxiv.org/abs/2210.15418#
Paper Abstract::
Voice conversion (VC) can be achieved by first extracting source content information and target speaker
information, and then reconstructing waveform with these information. However, current approaches normally
either extract dirty content information with speaker information leaked in, or demand a large amount of
annotated data for training. Besides, the quality of reconstructed waveform can be degraded by the
mismatch between conversion model and vocoder. In this paper, we adopt the end-to-end framework of VITS for
high-quality waveform reconstruction, and propose strategies for clean content information extraction without
text annotation. We disentangle content information by imposing an information bottleneck to WavLM features,
and propose the spectrogram-resize based data augmentation to improve the purity of extracted content
information. Experimental results show that the proposed method outperforms the latest VC models trained with
annotated data and has greater robustness.
Original Code::
https://github.com/OlaWod/FreeVC
Examples:
>>> from TTS.vc.configs.freevc_config import FreeVCConfig
>>> from TTS.vc.models.freevc import FreeVC
>>> config = FreeVCConfig()
>>> model = FreeVC(config)
"""
def __init__(self, config: Coqpit, speaker_manager: SpeakerManager = None):
super().__init__(config, None, speaker_manager, None)
self.init_multispeaker(config)
self.spec_channels = self.args.spec_channels
self.inter_channels = self.args.inter_channels
self.hidden_channels = self.args.hidden_channels
self.filter_channels = self.args.filter_channels
self.n_heads = self.args.n_heads
self.n_layers = self.args.n_layers
self.kernel_size = self.args.kernel_size
self.p_dropout = self.args.p_dropout
self.resblock = self.args.resblock
self.resblock_kernel_sizes = self.args.resblock_kernel_sizes
self.resblock_dilation_sizes = self.args.resblock_dilation_sizes
self.upsample_rates = self.args.upsample_rates
self.upsample_initial_channel = self.args.upsample_initial_channel
self.upsample_kernel_sizes = self.args.upsample_kernel_sizes
self.segment_size = self.args.segment_size
self.gin_channels = self.args.gin_channels
self.ssl_dim = self.args.ssl_dim
self.use_spk = self.args.use_spk
self.enc_p = Encoder(self.args.ssl_dim, self.inter_channels, self.hidden_channels, 5, 1, 16)
self.dec = Generator(
self.inter_channels,
self.resblock,
self.resblock_kernel_sizes,
self.resblock_dilation_sizes,
self.upsample_rates,
self.upsample_initial_channel,
self.upsample_kernel_sizes,
gin_channels=self.gin_channels,
)
self.enc_q = Encoder(
self.spec_channels, self.inter_channels, self.hidden_channels, 5, 1, 16, gin_channels=self.gin_channels
)
self.flow = ResidualCouplingBlock(
self.inter_channels, self.hidden_channels, 5, 1, 4, gin_channels=self.gin_channels
)
if not self.use_spk:
self.enc_spk = SpeakerEncoder(model_hidden_size=self.gin_channels, model_embedding_size=self.gin_channels)
else:
self.load_pretrained_speaker_encoder()
self.wavlm = get_wavlm()
@property
def device(self):
return next(self.parameters()).device
def load_pretrained_speaker_encoder(self):
"""Load pretrained speaker encoder model as mentioned in the paper."""
print(" > Loading pretrained speaker encoder model ...")
self.enc_spk_ex = SpeakerEncoderEx(
"https://github.com/coqui-ai/TTS/releases/download/v0.13.0_models/speaker_encoder.pt"
)
def init_multispeaker(self, config: Coqpit):
"""Initialize multi-speaker modules of a model. A model can be trained either with a speaker embedding layer
or with external `d_vectors` computed from a speaker encoder model.
You must provide a `speaker_manager` at initialization to set up the multi-speaker modules.
Args:
config (Coqpit): Model configuration.
"""
self.num_spks = self.args.num_spks
if self.speaker_manager:
self.num_spks = self.speaker_manager.num_spks
def forward(
self,
c: torch.Tensor,
spec: torch.Tensor,
g: Optional[torch.Tensor] = None,
mel: Optional[torch.Tensor] = None,
c_lengths: Optional[torch.Tensor] = None,
spec_lengths: Optional[torch.Tensor] = None,
) -> Tuple[
torch.Tensor,
torch.Tensor,
torch.Tensor,
Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor],
]:
"""
Forward pass of the model.
Args:
c: WavLM features. Shape: (batch_size, c_seq_len).
spec: The input spectrogram. Shape: (batch_size, spec_seq_len, spec_dim).
g: The speaker embedding. Shape: (batch_size, spk_emb_dim).
mel: The input mel-spectrogram for the speaker encoder. Shape: (batch_size, mel_seq_len, mel_dim).
c_lengths: The lengths of the WavLM features. Shape: (batch_size,).
spec_lengths: The lengths of the spectrogram. Shape: (batch_size,).
Returns:
o: The output spectrogram. Shape: (batch_size, spec_seq_len, spec_dim).
ids_slice: The slice indices. Shape: (batch_size, num_slices).
spec_mask: The spectrogram mask. Shape: (batch_size, spec_seq_len).
(z, z_p, m_p, logs_p, m_q, logs_q): A tuple of latent variables.
"""
# If c_lengths is None, set it to the length of the last dimension of c
if c_lengths is None:
c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
# If spec_lengths is None, set it to the length of the last dimension of spec
if spec_lengths is None:
spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device)
# If use_spk is False, compute g from mel using enc_spk
g = None
if not self.use_spk:
g = self.enc_spk(mel).unsqueeze(-1)
# Compute m_p, logs_p, z, m_q, logs_q, and spec_mask using enc_p and enc_q
_, m_p, logs_p, _ = self.enc_p(c, c_lengths)
z, m_q, logs_q, spec_mask = self.enc_q(spec.transpose(1, 2), spec_lengths, g=g)
# Compute z_p using flow
z_p = self.flow(z, spec_mask, g=g)
# Randomly slice z and compute o using dec
z_slice, ids_slice = commons.rand_slice_segments(z, spec_lengths, self.segment_size)
o = self.dec(z_slice, g=g)
return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
@torch.no_grad()
def inference(self, c, g=None, mel=None, c_lengths=None):
"""
Inference pass of the model
Args:
c (torch.Tensor): Input tensor. Shape: (batch_size, c_seq_len).
g (torch.Tensor): Speaker embedding tensor. Shape: (batch_size, spk_emb_dim).
mel (torch.Tensor): Mel-spectrogram tensor. Shape: (batch_size, mel_seq_len, mel_dim).
c_lengths (torch.Tensor): Lengths of the input tensor. Shape: (batch_size,).
Returns:
torch.Tensor: Output tensor.
"""
        if c_lengths is None:
c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
if not self.use_spk:
g = self.enc_spk.embed_utterance(mel)
g = g.unsqueeze(-1)
z_p, m_p, logs_p, c_mask = self.enc_p(c, c_lengths)
z = self.flow(z_p, c_mask, g=g, reverse=True)
o = self.dec(z * c_mask, g=g)
return o
def extract_wavlm_features(self, y):
"""Extract WavLM features from an audio tensor.
Args:
y (torch.Tensor): Audio tensor. Shape: (batch_size, audio_seq_len).
"""
with torch.no_grad():
c = self.wavlm.extract_features(y)[0]
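            # WavLM returns (batch, time, channels); transpose to (batch, channels, time).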
c = c.transpose(1, 2)
return c
def load_audio(self, wav):
"""Read and format the input audio."""
if isinstance(wav, str):
wav, _ = librosa.load(wav, sr=self.config.audio.input_sample_rate)
if isinstance(wav, np.ndarray):
wav = torch.from_numpy(wav).to(self.device)
if isinstance(wav, torch.Tensor):
wav = wav.to(self.device)
if isinstance(wav, list):
wav = torch.from_numpy(np.array(wav)).to(self.device)
return wav.float()
@torch.inference_mode()
def voice_conversion(self, src, tgt):
"""
Voice conversion pass of the model.
Args:
src (str or torch.Tensor): Source utterance.
tgt (str or torch.Tensor): Target utterance.
Returns:
torch.Tensor: Output tensor.
"""
wav_tgt = self.load_audio(tgt).cpu().numpy()
wav_tgt, _ = librosa.effects.trim(wav_tgt, top_db=20)
if self.config.model_args.use_spk:
g_tgt = self.enc_spk_ex.embed_utterance(wav_tgt)
g_tgt = torch.from_numpy(g_tgt)[None, :, None].to(self.device)
else:
wav_tgt = torch.from_numpy(wav_tgt).unsqueeze(0).to(self.device)
mel_tgt = mel_spectrogram_torch(
wav_tgt,
self.config.audio.filter_length,
self.config.audio.n_mel_channels,
self.config.audio.input_sample_rate,
self.config.audio.hop_length,
self.config.audio.win_length,
self.config.audio.mel_fmin,
self.config.audio.mel_fmax,
)
# src
wav_src = self.load_audio(src)
c = self.extract_wavlm_features(wav_src[None, :])
if self.config.model_args.use_spk:
audio = self.inference(c, g=g_tgt)
else:
audio = self.inference(c, mel=mel_tgt.transpose(1, 2))
audio = audio[0][0].data.cpu().float().numpy()
return audio
    def eval_step(self):
        ...
@staticmethod
def init_from_config(config: FreeVCConfig, samples: Union[List[List], List[Dict]] = None, verbose=True):
model = FreeVC(config)
return model
def load_checkpoint(self, config, checkpoint_path, eval=False, strict=True, cache=False):
state = load_fsspec(checkpoint_path, map_location=torch.device("cpu"), cache=cache)
self.load_state_dict(state["model"], strict=strict)
if eval:
self.eval()
    def train_step(self):
        ...
| 0 |
coqui_public_repos/STT-models/lithuanian/itml | coqui_public_repos/STT-models/lithuanian/itml/v0.1.1/alphabet.txt |
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
ą
č
ė
ę
į
š
ū
ų
ž
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/doc/COMMON_VOICE_DATA.rst | .. _common-voice-data:
Common Voice training data
==========================
This document gives some information about using Common Voice data with STT. If you're in need of training data, the Common Voice corpus is a good place to start.
Common Voice consists of voice data that was donated through Mozilla's `Common Voice <https://commonvoice.mozilla.org/>`_ initiative. You can download the data sets for various languages `here <https://commonvoice.mozilla.org/data>`_.
After you download and extract a data set for one language, you'll find the following contents:
* ``.tsv`` files, containing metadata such as text transcripts
* ``.mp3`` audio files, located in the ``clips`` directory
🐸STT cannot directly work with Common Voice data, so you should run our importer script ``bin/import_cv2.py`` to format the data correctly:
.. code-block:: bash
bin/import_cv2.py --filter_alphabet path/to/some/alphabet.txt /path/to/extracted/common-voice/archive
Providing a filter alphabet is optional. This alphabet is used to exclude all audio files whose transcripts contain characters not in the specified alphabet. Running the importer with ``-h`` will show you additional options.
The importer will create a new ``WAV`` file for every ``MP3`` file in the ``clips`` directory. The importer will also create the following ``CSV`` files:
* ``clips/train.csv``
* ``clips/dev.csv``
* ``clips/test.csv``
The CSV files contain the following fields:
* ``wav_filename`` - path to the audio file, may be absolute or relative. Our importer produces relative paths
* ``wav_filesize`` - sample size in bytes, used for sorting the data before training. Expects an integer
* ``transcript`` - transcription target for the sample
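
For illustration, a row of an imported ``train.csv`` could look like the following (the file name, size, and transcript below are made up):

.. code-block:: text

   wav_filename,wav_filesize,transcript
   clips/common_voice_fr_17299384.wav,179882,bonjour tout le monde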
To use Common Voice data for training, validation and testing, you should pass the ``CSV`` filenames via ``--train_files``, ``--dev_files``, ``--test_files``.
For example, if you downloaded, extracted, and imported the French language data from Common Voice, you will have a new local directory named ``fr``. You can train STT on this new French data as follows:
.. code-block:: bash
$ python -m coqui_stt_training.train \
--train_files fr/clips/train.csv \
--dev_files fr/clips/dev.csv \
--test_files fr/clips/test.csv
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/register.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Classes for registering derived FST for generic reading.
#ifndef FST_REGISTER_H_
#define FST_REGISTER_H_
#include <string>
#include <type_traits>
#include <fst/compat.h>
#include <fst/generic-register.h>
#include <fst/util.h>
#include <fst/types.h>
#include <fst/log.h>
namespace fst {
template <class Arc>
class Fst;
struct FstReadOptions;
// This class represents a single entry in a FstRegister
template <class Arc>
struct FstRegisterEntry {
using Reader = Fst<Arc> *(*)(std::istream &istrm, const FstReadOptions &opts);
using Converter = Fst<Arc> *(*)(const Fst<Arc> &fst);
Reader reader;
Converter converter;
explicit FstRegisterEntry(Reader reader = nullptr,
Converter converter = nullptr)
: reader(reader), converter(converter) {}
};
// This class maintains the correspondence between a string describing
// an FST type, and its reader and converter.
template <class Arc>
class FstRegister
: public GenericRegister<string, FstRegisterEntry<Arc>, FstRegister<Arc>> {
public:
using Reader = typename FstRegisterEntry<Arc>::Reader;
using Converter = typename FstRegisterEntry<Arc>::Converter;
const Reader GetReader(const string &type) const {
return this->GetEntry(type).reader;
}
const Converter GetConverter(const string &type) const {
return this->GetEntry(type).converter;
}
protected:
string ConvertKeyToSoFilename(const string &key) const override {
string legal_type(key);
ConvertToLegalCSymbol(&legal_type);
return legal_type + "-fst.so";
}
};
// This class registers an FST type for generic reading and creating.
// The type must have a default constructor and a copy constructor from
// Fst<Arc>.
template <class FST>
class FstRegisterer : public GenericRegisterer<FstRegister<typename FST::Arc>> {
public:
using Arc = typename FST::Arc;
using Entry = typename FstRegister<Arc>::Entry;
using Reader = typename FstRegister<Arc>::Reader;
FstRegisterer()
: GenericRegisterer<FstRegister<typename FST::Arc>>(FST().Type(),
BuildEntry()) {}
private:
static Fst<Arc> *ReadGeneric(
std::istream &strm, const FstReadOptions &opts) {
static_assert(std::is_base_of<Fst<Arc>, FST>::value,
"FST class does not inherit from Fst<Arc>");
return FST::Read(strm, opts);
}
static Entry BuildEntry() {
return Entry(&ReadGeneric, &FstRegisterer<FST>::Convert);
}
static Fst<Arc> *Convert(const Fst<Arc> &fst) { return new FST(fst); }
};
// Convenience macro to generate static FstRegisterer instance.
#define REGISTER_FST(FST, Arc) \
static fst::FstRegisterer<FST<Arc>> FST##_##Arc##_registerer
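
// Typical use, in the implementation file of an FST class, e.g.:
//
//   REGISTER_FST(VectorFst, StdArc);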
// Converts an FST to the specified type.
template <class Arc>
Fst<Arc> *Convert(const Fst<Arc> &fst, const string &fst_type) {
auto *reg = FstRegister<Arc>::GetRegister();
const auto converter = reg->GetConverter(fst_type);
if (!converter) {
FSTERROR() << "Fst::Convert: Unknown FST type " << fst_type << " (arc type "
<< Arc::Type() << ")";
return nullptr;
}
return converter(fst);
}
} // namespace fst
#endif // FST_REGISTER_H_
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src | coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/script/info-impl.cc | #include <fst/script/info-impl.h>
namespace fst {
void PrintFstInfoImpl(const FstInfo &fstinfo, bool pipe) {
std::ostream &ostrm = pipe ? std::cerr : std::cout;
const auto old = ostrm.setf(std::ios::left);
ostrm.width(50);
ostrm << "fst type" << fstinfo.FstType() << std::endl;
ostrm.width(50);
ostrm << "arc type" << fstinfo.ArcType() << std::endl;
ostrm.width(50);
ostrm << "input symbol table" << fstinfo.InputSymbols() << std::endl;
ostrm.width(50);
ostrm << "output symbol table" << fstinfo.OutputSymbols() << std::endl;
if (!fstinfo.LongInfo()) {
ostrm.setf(old);
return;
}
ostrm.width(50);
ostrm << "# of states" << fstinfo.NumStates() << std::endl;
ostrm.width(50);
ostrm << "# of arcs" << fstinfo.NumArcs() << std::endl;
ostrm.width(50);
ostrm << "initial state" << fstinfo.Start() << std::endl;
ostrm.width(50);
ostrm << "# of final states" << fstinfo.NumFinal() << std::endl;
ostrm.width(50);
ostrm << "# of input/output epsilons" << fstinfo.NumEpsilons() << std::endl;
ostrm.width(50);
ostrm << "# of input epsilons" << fstinfo.NumInputEpsilons() << std::endl;
ostrm.width(50);
ostrm << "# of output epsilons" << fstinfo.NumOutputEpsilons() << std::endl;
ostrm.width(50);
ostrm << "input label multiplicity" << fstinfo.InputLabelMultiplicity()
<< std::endl;
ostrm.width(50);
ostrm << "output label multiplicity" << fstinfo.OutputLabelMultiplicity()
<< std::endl;
ostrm.width(50);
string arc_type = "";
if (fstinfo.ArcFilterType() == "epsilon")
arc_type = "epsilon ";
else if (fstinfo.ArcFilterType() == "iepsilon")
arc_type = "input-epsilon ";
else if (fstinfo.ArcFilterType() == "oepsilon")
arc_type = "output-epsilon ";
const auto accessible_label = "# of " + arc_type + "accessible states";
ostrm.width(50);
ostrm << accessible_label << fstinfo.NumAccessible() << std::endl;
const auto coaccessible_label = "# of " + arc_type + "coaccessible states";
ostrm.width(50);
ostrm << coaccessible_label << fstinfo.NumCoAccessible() << std::endl;
const auto connected_label = "# of " + arc_type + "connected states";
ostrm.width(50);
ostrm << connected_label << fstinfo.NumConnected() << std::endl;
const auto numcc_label = "# of " + arc_type + "connected components";
ostrm.width(50);
ostrm << numcc_label << fstinfo.NumCc() << std::endl;
const auto numscc_label = "# of " + arc_type + "strongly conn components";
ostrm.width(50);
ostrm << numscc_label << fstinfo.NumScc() << std::endl;
ostrm.width(50);
ostrm << "input matcher"
<< (fstinfo.InputMatchType() == MATCH_INPUT
? 'y'
: fstinfo.InputMatchType() == MATCH_NONE ? 'n' : '?')
<< std::endl;
ostrm.width(50);
ostrm << "output matcher"
<< (fstinfo.OutputMatchType() == MATCH_OUTPUT
? 'y'
: fstinfo.OutputMatchType() == MATCH_NONE ? 'n' : '?')
<< std::endl;
ostrm.width(50);
ostrm << "input lookahead" << (fstinfo.InputLookAhead() ? 'y' : 'n')
<< std::endl;
ostrm.width(50);
ostrm << "output lookahead" << (fstinfo.OutputLookAhead() ? 'y' : 'n')
<< std::endl;
uint64 prop = 1;
for (auto i = 0; i < 64; ++i, prop <<= 1) {
if (prop & kBinaryProperties) {
char value = 'n';
if (fstinfo.Properties() & prop) value = 'y';
ostrm.width(50);
ostrm << PropertyNames[i] << value << std::endl;
} else if (prop & kPosTrinaryProperties) {
char value = '?';
if (fstinfo.Properties() & prop)
value = 'y';
else if (fstinfo.Properties() & prop << 1)
value = 'n';
ostrm.width(50);
ostrm << PropertyNames[i] << value << std::endl;
}
}
ostrm.setf(old);
}
} // namespace fst
| 0 |
coqui_public_repos | coqui_public_repos/snakepit/LICENSE | Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
| 0 |
coqui_public_repos/TTS/TTS/tts | coqui_public_repos/TTS/TTS/tts/models/bark.py | import os
from dataclasses import dataclass
from typing import Optional
import numpy as np
from coqpit import Coqpit
from encodec import EncodecModel
from transformers import BertTokenizer
from TTS.tts.layers.bark.inference_funcs import (
codec_decode,
generate_coarse,
generate_fine,
generate_text_semantic,
generate_voice,
load_voice,
)
from TTS.tts.layers.bark.load_model import load_model
from TTS.tts.layers.bark.model import GPT
from TTS.tts.layers.bark.model_fine import FineGPT
from TTS.tts.models.base_tts import BaseTTS
@dataclass
class BarkAudioConfig(Coqpit):
sample_rate: int = 24000
output_sample_rate: int = 24000
class Bark(BaseTTS):
def __init__(
self,
config: Coqpit,
tokenizer: BertTokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased"),
) -> None:
super().__init__(config=config, ap=None, tokenizer=None, speaker_manager=None, language_manager=None)
self.config.num_chars = len(tokenizer)
self.tokenizer = tokenizer
self.semantic_model = GPT(config.semantic_config)
self.coarse_model = GPT(config.coarse_config)
self.fine_model = FineGPT(config.fine_config)
self.encodec = EncodecModel.encodec_model_24khz()
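        # Target 6 kbps EnCodec bandwidth at 24 kHz (8 codebooks), matching Bark's coarse/fine token layout.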
self.encodec.set_target_bandwidth(6.0)
@property
def device(self):
return next(self.parameters()).device
def load_bark_models(self):
self.semantic_model, self.config = load_model(
ckpt_path=self.config.LOCAL_MODEL_PATHS["text"], device=self.device, config=self.config, model_type="text"
)
self.coarse_model, self.config = load_model(
ckpt_path=self.config.LOCAL_MODEL_PATHS["coarse"],
device=self.device,
config=self.config,
model_type="coarse",
)
self.fine_model, self.config = load_model(
ckpt_path=self.config.LOCAL_MODEL_PATHS["fine"], device=self.device, config=self.config, model_type="fine"
)
def train_step(
self,
):
pass
def text_to_semantic(
self,
text: str,
history_prompt: Optional[str] = None,
temp: float = 0.7,
base=None,
allow_early_stop=True,
**kwargs,
):
"""Generate semantic array from text.
Args:
text: text to be turned into audio
history_prompt: history choice for audio cloning
temp: generation temperature (1.0 more diverse, 0.0 more conservative)
Returns:
numpy semantic array to be fed into `semantic_to_waveform`
"""
x_semantic = generate_text_semantic(
text,
self,
history_prompt=history_prompt,
temp=temp,
base=base,
allow_early_stop=allow_early_stop,
**kwargs,
)
return x_semantic
def semantic_to_waveform(
self,
semantic_tokens: np.ndarray,
history_prompt: Optional[str] = None,
temp: float = 0.7,
base=None,
):
"""Generate audio array from semantic input.
Args:
semantic_tokens: semantic token output from `text_to_semantic`
history_prompt: history choice for audio cloning
temp: generation temperature (1.0 more diverse, 0.0 more conservative)
Returns:
numpy audio array at sample frequency 24khz
"""
x_coarse_gen = generate_coarse(
semantic_tokens,
self,
history_prompt=history_prompt,
temp=temp,
base=base,
)
x_fine_gen = generate_fine(
x_coarse_gen,
self,
history_prompt=history_prompt,
temp=0.5,
base=base,
)
audio_arr = codec_decode(x_fine_gen, self)
return audio_arr, x_coarse_gen, x_fine_gen
def generate_audio(
self,
text: str,
history_prompt: Optional[str] = None,
text_temp: float = 0.7,
waveform_temp: float = 0.7,
base=None,
allow_early_stop=True,
**kwargs,
):
"""Generate audio array from input text.
Args:
text: text to be turned into audio
history_prompt: history choice for audio cloning
text_temp: generation temperature (1.0 more diverse, 0.0 more conservative)
waveform_temp: generation temperature (1.0 more diverse, 0.0 more conservative)
Returns:
numpy audio array at sample frequency 24khz
"""
x_semantic = self.text_to_semantic(
text,
history_prompt=history_prompt,
temp=text_temp,
base=base,
allow_early_stop=allow_early_stop,
**kwargs,
)
audio_arr, c, f = self.semantic_to_waveform(
x_semantic, history_prompt=history_prompt, temp=waveform_temp, base=base
)
return audio_arr, [x_semantic, c, f]
def generate_voice(self, audio, speaker_id, voice_dir):
"""Generate a voice from the given audio and text.
Args:
audio (str): Path to the audio file.
speaker_id (str): Speaker name.
voice_dir (str): Path to the directory to save the generate voice.
"""
if voice_dir is not None:
voice_dirs = [voice_dir]
try:
                _ = load_voice(self, speaker_id, voice_dirs)
except (KeyError, FileNotFoundError):
output_path = os.path.join(voice_dir, speaker_id + ".npz")
os.makedirs(voice_dir, exist_ok=True)
generate_voice(audio, self, output_path)
def _set_voice_dirs(self, voice_dirs):
def_voice_dir = None
if isinstance(self.config.DEF_SPEAKER_DIR, str):
os.makedirs(self.config.DEF_SPEAKER_DIR, exist_ok=True)
if os.path.isdir(self.config.DEF_SPEAKER_DIR):
def_voice_dir = self.config.DEF_SPEAKER_DIR
_voice_dirs = [def_voice_dir] if def_voice_dir is not None else []
if voice_dirs is not None:
if isinstance(voice_dirs, str):
voice_dirs = [voice_dirs]
_voice_dirs = voice_dirs + _voice_dirs
return _voice_dirs
# TODO: remove config from synthesize
def synthesize(
self, text, config, speaker_id="random", voice_dirs=None, **kwargs
): # pylint: disable=unused-argument
"""Synthesize speech with the given input text.
Args:
text (str): Input text.
config (BarkConfig): Config with inference parameters.
speaker_id (str): One of the available speaker names. If `random`, it generates a random speaker.
speaker_wav (str): Path to the speaker audio file for cloning a new voice. It is cloned and saved in
`voice_dirs` with the name `speaker_id`. Defaults to None.
voice_dirs (List[str]): List of paths that host reference audio files for speakers. Defaults to None.
            **kwargs: Model specific inference settings used by `generate_audio()` and `TTS.tts.layers.bark.inference_funcs.generate_text_semantic()`.
Returns:
A dictionary of the output values with `wav` as output waveform, `deterministic_seed` as seed used at inference,
`text_input` as text token IDs after tokenizer, `voice_samples` as samples used for cloning, `conditioning_latents`
as latents used at inference.
"""
speaker_id = "random" if speaker_id is None else speaker_id
voice_dirs = self._set_voice_dirs(voice_dirs)
history_prompt = load_voice(self, speaker_id, voice_dirs)
outputs = self.generate_audio(text, history_prompt=history_prompt, **kwargs)
return_dict = {
"wav": outputs[0],
"text_inputs": text,
}
return return_dict
def eval_step(self):
...
def forward(self):
...
def inference(self):
...
@staticmethod
def init_from_config(config: "BarkConfig", **kwargs): # pylint: disable=unused-argument
return Bark(config)
# pylint: disable=unused-argument, redefined-builtin
def load_checkpoint(
self,
config,
checkpoint_dir,
text_model_path=None,
coarse_model_path=None,
fine_model_path=None,
hubert_model_path=None,
hubert_tokenizer_path=None,
eval=False,
strict=True,
**kwargs,
):
"""Load a model checkpoints from a directory. This model is with multiple checkpoint files and it
expects to have all the files to be under the given `checkpoint_dir` with the rigth names.
If eval is True, set the model to eval mode.
Args:
            config (BarkConfig): The model config.
            checkpoint_dir (str): The directory where the checkpoints are stored.
            text_model_path (str, optional): The path to the text (semantic) model checkpoint. Defaults to None.
            coarse_model_path (str, optional): The path to the coarse model checkpoint. Defaults to None.
            fine_model_path (str, optional): The path to the fine model checkpoint. Defaults to None.
            hubert_model_path (str, optional): The path to the HuBERT model checkpoint. Defaults to None.
            hubert_tokenizer_path (str, optional): The path to the HuBERT tokenizer checkpoint. Defaults to None.
            eval (bool, optional): Whether to set the model to eval mode. Defaults to False.
            strict (bool, optional): Whether to load the model strictly. Defaults to True.
"""
text_model_path = text_model_path or os.path.join(checkpoint_dir, "text_2.pt")
coarse_model_path = coarse_model_path or os.path.join(checkpoint_dir, "coarse_2.pt")
fine_model_path = fine_model_path or os.path.join(checkpoint_dir, "fine_2.pt")
hubert_model_path = hubert_model_path or os.path.join(checkpoint_dir, "hubert.pt")
hubert_tokenizer_path = hubert_tokenizer_path or os.path.join(checkpoint_dir, "tokenizer.pth")
self.config.LOCAL_MODEL_PATHS["text"] = text_model_path
self.config.LOCAL_MODEL_PATHS["coarse"] = coarse_model_path
self.config.LOCAL_MODEL_PATHS["fine"] = fine_model_path
self.config.LOCAL_MODEL_PATHS["hubert"] = hubert_model_path
self.config.LOCAL_MODEL_PATHS["hubert_tokenizer"] = hubert_tokenizer_path
self.load_bark_models()
if eval:
self.eval()
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/script/epsnormalize.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/script/fst-class.h>
#include <fst/script/epsnormalize.h>
#include <fst/script/script-impl.h>
namespace fst {
namespace script {
void EpsNormalize(const FstClass &ifst, MutableFstClass *ofst,
EpsNormalizeType norm_type) {
if (!internal::ArcTypesMatch(ifst, *ofst, "EpsNormalize")) {
ofst->SetProperties(kError, kError);
return;
}
EpsNormalizeArgs args(ifst, ofst, norm_type);
Apply<Operation<EpsNormalizeArgs>>("EpsNormalize", ifst.ArcType(), &args);
}
REGISTER_FST_OPERATION(EpsNormalize, StdArc, EpsNormalizeArgs);
REGISTER_FST_OPERATION(EpsNormalize, LogArc, EpsNormalizeArgs);
REGISTER_FST_OPERATION(EpsNormalize, Log64Arc, EpsNormalizeArgs);
} // namespace script
} // namespace fst
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/extensions | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/extensions/compact/compact8_unweighted_acceptor-fst.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/fst.h>
#include <fst/compact-fst.h>
namespace fst {
static FstRegisterer<
CompactUnweightedAcceptorFst<StdArc, uint8>>
CompactUnweightedAcceptorFst_StdArc_uint8_registerer;
static FstRegisterer<
CompactUnweightedAcceptorFst<LogArc, uint8>>
CompactUnweightedAcceptorFst_LogArc_uint8_registerer;
} // namespace fst
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/extensions | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/extensions/far/script-impl.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Classes and functions for registering and invoking Far main
// functions that support multiple and extensible arc types.
#ifndef FST_EXTENSIONS_FAR_SCRIPT_IMPL_H_
#define FST_EXTENSIONS_FAR_SCRIPT_IMPL_H_
#include <string>
#include <fst/compat.h>
namespace fst {
namespace script {
string LoadArcTypeFromFar(const string &far_fname);
string LoadArcTypeFromFst(const string &fst_fname);
} // namespace script
} // namespace fst
#endif // FST_EXTENSIONS_FAR_SCRIPT_IMPL_H_
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include | coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/include/fst/map.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Compatibility file for old-style Map() functions and MapFst class that have
// been renamed to ArcMap (cf. StateMap).
#ifndef FST_MAP_H_
#define FST_MAP_H_
#include <fst/arc-map.h>
namespace fst {
template <class A, class C>
void Map(MutableFst<A> *fst, C *mapper) {
ArcMap(fst, mapper);
}
template <class A, class C>
void Map(MutableFst<A> *fst, C mapper) {
ArcMap(fst, mapper);
}
template <class A, class B, class C>
void Map(const Fst<A> &ifst, MutableFst<B> *ofst, C *mapper) {
ArcMap(ifst, ofst, mapper);
}
template <class A, class B, class C>
void Map(const Fst<A> &ifst, MutableFst<B> *ofst, C mapper) {
ArcMap(ifst, ofst, mapper);
}
using MapFstOptions = ArcMapFstOptions;
template <class A, class B, class C>
class MapFst : public ArcMapFst<A, B, C> {
public:
using FromArc = A;
using ToArc = B;
using StateId = typename ToArc::StateId;
using Weight = typename ToArc::Weight;
using State = CacheState<B>;
MapFst(const Fst<A> &fst, const C &mapper, const MapFstOptions &opts)
: ArcMapFst<A, B, C>(fst, mapper, opts) {}
MapFst(const Fst<A> &fst, C *mapper, const MapFstOptions &opts)
: ArcMapFst<A, B, C>(fst, mapper, opts) {}
MapFst(const Fst<A> &fst, const C &mapper)
: ArcMapFst<A, B, C>(fst, mapper) {}
MapFst(const Fst<A> &fst, C *mapper) : ArcMapFst<A, B, C>(fst, mapper) {}
// See Fst<>::Copy() for doc.
MapFst(const MapFst<A, B, C> &fst, bool safe = false)
: ArcMapFst<A, B, C>(fst, safe) {}
// Get a copy of this MapFst. See Fst<>::Copy() for further doc.
MapFst<A, B, C> *Copy(bool safe = false) const override {
return new MapFst(*this, safe);
}
};
// Specialization for MapFst.
template <class A, class B, class C>
class StateIterator<MapFst<A, B, C>>
: public StateIterator<ArcMapFst<A, B, C>> {
public:
explicit StateIterator(const ArcMapFst<A, B, C> &fst)
: StateIterator<ArcMapFst<A, B, C>>(fst) {}
};
// Specialization for MapFst.
template <class A, class B, class C>
class ArcIterator<MapFst<A, B, C>> : public ArcIterator<ArcMapFst<A, B, C>> {
public:
ArcIterator(const ArcMapFst<A, B, C> &fst, typename A::StateId s)
: ArcIterator<ArcMapFst<A, B, C>>(fst, s) {}
};
// For backwards compatibility only; use IdentityArcMapper otherwise.
template <class A>
struct IdentityMapper {
using FromArc = A;
using ToArc = A;
ToArc operator()(const FromArc &arc) const { return arc; }
constexpr MapFinalAction FinalAction() const { return MAP_NO_SUPERFINAL; }
constexpr MapSymbolsAction InputSymbolsAction() const {
return MAP_COPY_SYMBOLS;
}
constexpr MapSymbolsAction OutputSymbolsAction() const {
return MAP_COPY_SYMBOLS;
}
uint64 Properties(uint64 props) const { return props; }
};
} // namespace fst
#endif // FST_MAP_H_
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/taskcluster/test-nodejs_10x-raspbian-rpi3-opt.yml | build:
template_file: test-raspbian-opt-base.tyml
dependencies:
- "linux-rpi3-cpu-opt"
- "test-training_16k-linux-amd64-py36m-opt"
test_model_task: "test-training_16k-linux-amd64-py36m-opt"
system_setup:
>
${nodejs.packages_buster.prep_10} && ${nodejs.packages_buster.apt_pinning} && apt-get -qq update && apt-get -qq -y install ${nodejs.packages_buster.apt}
args:
tests_cmdline: "${system.homedir.linux}/DeepSpeech/ds/taskcluster/tc-node_tflite-tests.sh 10.x 16k"
metadata:
name: "DeepSpeech Raspbian RPi3/ARMv7 CPU NodeJS 10.x tests"
description: "Testing DeepSpeech for Raspbian RPi3/ARMv7 on NodeJS v10.x, CPU only, optimized version"
| 0 |
coqui_public_repos/inference-engine/third_party/kenlm | coqui_public_repos/inference-engine/third_party/kenlm/util/exception.hh | #ifndef UTIL_EXCEPTION_H
#define UTIL_EXCEPTION_H
#include "util/string_stream.hh"
#include <exception>
#include <limits>
#include <string>
#include <stdint.h>
namespace util {
template <class Except, class Data> typename Except::template ExceptionTag<Except&>::Identity operator<<(Except &e, const Data &data);
class Exception : public std::exception {
public:
Exception() throw();
virtual ~Exception() throw();
const char *what() const throw() { return what_.str().c_str(); }
// For use by the UTIL_THROW macros.
void SetLocation(
const char *file,
unsigned int line,
const char *func,
const char *child_name,
const char *condition);
private:
template <class Except, class Data> friend typename Except::template ExceptionTag<Except&>::Identity operator<<(Except &e, const Data &data);
// This helps restrict operator<< defined below.
template <class T> struct ExceptionTag {
typedef T Identity;
};
StringStream what_;
};
/* This implements the normal operator<< for Exception and all its children.
* SFINAE means it only applies to Exception. Think of this as an ersatz
* boost::enable_if.
*/
template <class Except, class Data> typename Except::template ExceptionTag<Except&>::Identity operator<<(Except &e, const Data &data) {
e.what_ << data;
return e;
}
#ifdef __GNUC__
#define UTIL_FUNC_NAME __PRETTY_FUNCTION__
#else
#ifdef _WIN32
#define UTIL_FUNC_NAME __FUNCTION__
#else
#define UTIL_FUNC_NAME NULL
#endif
#endif
/* Create an instance of Exception, add the message Modify, and throw it.
* Modify is appended to the what() message and can contain << for ostream
* operations.
*
* do .. while kludge to swallow trailing ; character
* http://gcc.gnu.org/onlinedocs/cpp/Swallowing-the-Semicolon.html .
* Arg can be a constructor argument to the exception.
*/
#define UTIL_THROW_BACKEND(Condition, Exception, Arg, Modify) do { \
Exception UTIL_e Arg; \
UTIL_e.SetLocation(__FILE__, __LINE__, UTIL_FUNC_NAME, #Exception, Condition); \
UTIL_e << Modify; \
throw UTIL_e; \
} while (0)
#define UTIL_THROW_ARG(Exception, Arg, Modify) \
UTIL_THROW_BACKEND(NULL, Exception, Arg, Modify)
#define UTIL_THROW(Exception, Modify) \
UTIL_THROW_BACKEND(NULL, Exception, , Modify);
#define UTIL_THROW2(Modify) \
UTIL_THROW_BACKEND(NULL, util::Exception, , Modify);
#if __GNUC__ >= 3
#define UTIL_UNLIKELY(x) __builtin_expect (!!(x), 0)
#else
#define UTIL_UNLIKELY(x) (x)
#endif
#if __GNUC__ >= 3
#define UTIL_LIKELY(x) __builtin_expect (!!(x), 1)
#else
#define UTIL_LIKELY(x) (x)
#endif
#define UTIL_THROW_IF_ARG(Condition, Exception, Arg, Modify) do { \
if (UTIL_UNLIKELY(Condition)) { \
UTIL_THROW_BACKEND(#Condition, Exception, Arg, Modify); \
} \
} while (0)
#define UTIL_THROW_IF(Condition, Exception, Modify) \
UTIL_THROW_IF_ARG(Condition, Exception, , Modify)
#define UTIL_THROW_IF2(Condition, Modify) \
UTIL_THROW_IF_ARG(Condition, util::Exception, , Modify)
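
// Illustrative usage (the names below are hypothetical):
//
//   UTIL_THROW_IF2(fd < 0, "Could not open " << path);
//   UTIL_THROW2("Unexpected value " << value);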
// Exception that records errno and adds it to the message.
class ErrnoException : public Exception {
public:
ErrnoException() throw();
virtual ~ErrnoException() throw();
int Error() const throw() { return errno_; }
private:
int errno_;
};
// file wasn't there, or couldn't be open for some reason
class FileOpenException : public Exception {
public:
FileOpenException() throw() {}
~FileOpenException() throw() {}
};
// Utilities for overflow checking.
class OverflowException : public Exception {
public:
OverflowException() throw();
~OverflowException() throw();
};
template <unsigned len> inline std::size_t CheckOverflowInternal(uint64_t value) {
UTIL_THROW_IF(value > static_cast<uint64_t>(std::numeric_limits<std::size_t>::max()), OverflowException, "Integer overflow detected. This model is too big for 32-bit code.");
return static_cast<std::size_t>(value);
}
template <> inline std::size_t CheckOverflowInternal<8>(uint64_t value) {
return value;
}
inline std::size_t CheckOverflow(uint64_t value) {
return CheckOverflowInternal<sizeof(std::size_t)>(value);
}
#if defined(_WIN32) || defined(_WIN64)
/* Thrown for Windows specific operations. */
class WindowsException : public Exception {
public:
WindowsException() throw();
~WindowsException() throw();
};
#endif
} // namespace util
#endif // UTIL_EXCEPTION_H
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/bin/run-ci-ldc93s1_new_bytes.sh | #!/bin/sh
set -xe
ldc93s1_dir="./data/smoke_test"
ldc93s1_csv="${ldc93s1_dir}/ldc93s1.csv"
epoch_count=$1
audio_sample_rate=$2
if [ ! -f "${ldc93s1_dir}/ldc93s1.csv" ]; then
echo "Downloading and preprocessing LDC93S1 example data, saving in ${ldc93s1_dir}."
python -u bin/import_ldc93s1.py ${ldc93s1_dir}
fi;
# Force only one visible device because we have a single-sample dataset
# and when trying to run on multiple devices (like GPUs), this will break
export CUDA_VISIBLE_DEVICES=0
python -u train.py --show_progressbar false --early_stop false \
--train_files ${ldc93s1_csv} --train_batch_size 1 \
--feature_cache '/tmp/ldc93s1_cache' \
--dev_files ${ldc93s1_csv} --dev_batch_size 1 \
--test_files ${ldc93s1_csv} --test_batch_size 1 \
--n_hidden 100 --epochs $epoch_count \
--max_to_keep 1 --checkpoint_dir '/tmp/ckpt_bytes' \
--learning_rate 0.001 --dropout_rate 0.05 --export_dir '/tmp/train_bytes' \
--scorer_path 'data/smoke_test/pruned_lm.bytes.scorer' \
--audio_sample_rate ${audio_sample_rate} \
--bytes_output_mode true \
--export_tflite false
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst/extensions | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst/extensions/far/equal.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#ifndef FST_EXTENSIONS_FAR_EQUAL_H_
#define FST_EXTENSIONS_FAR_EQUAL_H_
#include <memory>
#include <string>
#include <fst/extensions/far/far.h>
#include <fst/equal.h>
namespace fst {
template <class Arc>
bool FarEqual(const string &filename1, const string &filename2,
float delta = kDelta, const string &begin_key = string(),
const string &end_key = string()) {
std::unique_ptr<FarReader<Arc>> reader1(FarReader<Arc>::Open(filename1));
if (!reader1) {
LOG(ERROR) << "FarEqual: Could not open FAR file " << filename1;
return false;
}
std::unique_ptr<FarReader<Arc>> reader2(FarReader<Arc>::Open(filename2));
if (!reader2) {
LOG(ERROR) << "FarEqual: Could not open FAR file " << filename2;
return false;
}
if (!begin_key.empty()) {
bool find_begin1 = reader1->Find(begin_key);
bool find_begin2 = reader2->Find(begin_key);
if (!find_begin1 || !find_begin2) {
bool ret = !find_begin1 && !find_begin2;
if (!ret) {
LOG(ERROR) << "FarEqual: Key " << begin_key << " missing from "
<< (find_begin1 ? "second" : "first") << " archive";
}
return ret;
}
}
for (; !reader1->Done() && !reader2->Done();
reader1->Next(), reader2->Next()) {
const auto &key1 = reader1->GetKey();
const auto &key2 = reader2->GetKey();
if (!end_key.empty() && end_key < key1 && end_key < key2) {
return true;
}
if (key1 != key2) {
LOG(ERROR) << "FarEqual: Mismatched keys " << key1 << " and " << key2;
return false;
}
if (!Equal(*(reader1->GetFst()), *(reader2->GetFst()), delta)) {
LOG(ERROR) << "FarEqual: FSTs for key " << key1 << " are not equal";
return false;
}
}
if (!reader1->Done() || !reader2->Done()) {
LOG(ERROR) << "FarEqual: Key "
<< (reader1->Done() ? reader2->GetKey() : reader1->GetKey())
<< " missing from " << (reader2->Done() ? "first" : "second")
<< " archive";
return false;
}
return true;
}
} // namespace fst
#endif // FST_EXTENSIONS_FAR_EQUAL_H_
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/script/intersect.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#ifndef FST_SCRIPT_INTERSECT_H_
#define FST_SCRIPT_INTERSECT_H_
#include <tuple>
#include <fst/intersect.h>
#include <fst/script/compose.h>
#include <fst/script/fst-class.h>
namespace fst {
namespace script {
using IntersectArgs = std::tuple<const FstClass &, const FstClass &,
MutableFstClass *, const ComposeOptions &>;
template <class Arc>
void Intersect(IntersectArgs *args) {
const Fst<Arc> &ifst1 = *(std::get<0>(*args).GetFst<Arc>());
const Fst<Arc> &ifst2 = *(std::get<1>(*args).GetFst<Arc>());
MutableFst<Arc> *ofst = std::get<2>(*args)->GetMutableFst<Arc>();
const auto &opts = std::get<3>(*args);
Intersect(ifst1, ifst2, ofst, opts);
}
void Intersect(const FstClass &ifst, const FstClass &ifst2,
MutableFstClass *ofst,
const ComposeOptions &opts = ComposeOptions());
} // namespace script
} // namespace fst
#endif // FST_SCRIPT_INTERSECT_H_
| 0 |
coqui_public_repos/STT-examples | coqui_public_repos/STT-examples/vad_transcriber/wavSplit.py | import collections
import contextlib
import wave
def read_wave(path):
"""Reads a .wav file.
Takes the path, and returns (PCM audio data, sample rate).
"""
with contextlib.closing(wave.open(path, 'rb')) as wf:
num_channels = wf.getnchannels()
assert num_channels == 1
sample_width = wf.getsampwidth()
assert sample_width == 2
sample_rate = wf.getframerate()
assert sample_rate in (8000, 16000, 32000)
frames = wf.getnframes()
pcm_data = wf.readframes(frames)
duration = frames / sample_rate
return pcm_data, sample_rate, duration
def write_wave(path, audio, sample_rate):
"""Writes a .wav file.
Takes path, PCM audio data, and sample rate.
"""
with contextlib.closing(wave.open(path, 'wb')) as wf:
wf.setnchannels(1)
wf.setsampwidth(2)
wf.setframerate(sample_rate)
wf.writeframes(audio)
class Frame(object):
"""Represents a "frame" of audio data."""
def __init__(self, bytes, timestamp, duration):
self.bytes = bytes
self.timestamp = timestamp
self.duration = duration
def frame_generator(frame_duration_ms, audio, sample_rate):
"""Generates audio frames from PCM audio data.
Takes the desired frame duration in milliseconds, the PCM data, and
the sample rate.
Yields Frames of the requested duration.
"""
n = int(sample_rate * (frame_duration_ms / 1000.0) * 2)
offset = 0
timestamp = 0.0
duration = (float(n) / sample_rate) / 2.0
while offset + n < len(audio):
yield Frame(audio[offset:offset + n], timestamp, duration)
timestamp += duration
offset += n
def vad_collector(sample_rate, frame_duration_ms,
padding_duration_ms, vad, frames):
"""Filters out non-voiced audio frames.
Given a webrtcvad.Vad and a source of audio frames, yields only
the voiced audio.
Uses a padded, sliding window algorithm over the audio frames.
When more than 90% of the frames in the window are voiced (as
reported by the VAD), the collector triggers and begins yielding
audio frames. Then the collector waits until 90% of the frames in
the window are unvoiced to detrigger.
The window is padded at the front and back to provide a small
amount of silence or the beginnings/endings of speech around the
voiced frames.
Arguments:
sample_rate - The audio sample rate, in Hz.
frame_duration_ms - The frame duration in milliseconds.
padding_duration_ms - The amount to pad the window, in milliseconds.
vad - An instance of webrtcvad.Vad.
frames - a source of audio frames (sequence or generator).
Returns: A generator that yields PCM audio data.
"""
num_padding_frames = int(padding_duration_ms / frame_duration_ms)
# We use a deque for our sliding window/ring buffer.
ring_buffer = collections.deque(maxlen=num_padding_frames)
# We have two states: TRIGGERED and NOTTRIGGERED. We start in the
# NOTTRIGGERED state.
triggered = False
voiced_frames = []
for frame in frames:
is_speech = vad.is_speech(frame.bytes, sample_rate)
if not triggered:
ring_buffer.append((frame, is_speech))
num_voiced = len([f for f, speech in ring_buffer if speech])
# If we're NOTTRIGGERED and more than 90% of the frames in
# the ring buffer are voiced frames, then enter the
# TRIGGERED state.
if num_voiced > 0.9 * ring_buffer.maxlen:
triggered = True
# We want to yield all the audio we see from now until
# we are NOTTRIGGERED, but we have to start with the
# audio that's already in the ring buffer.
for f, s in ring_buffer:
voiced_frames.append(f)
ring_buffer.clear()
else:
# We're in the TRIGGERED state, so collect the audio data
# and add it to the ring buffer.
voiced_frames.append(frame)
ring_buffer.append((frame, is_speech))
num_unvoiced = len([f for f, speech in ring_buffer if not speech])
# If more than 90% of the frames in the ring buffer are
# unvoiced, then enter NOTTRIGGERED and yield whatever
# audio we've collected.
if num_unvoiced > 0.9 * ring_buffer.maxlen:
triggered = False
yield b''.join([f.bytes for f in voiced_frames])
ring_buffer.clear()
voiced_frames = []
# If we have any leftover voiced audio when we run out of input,
# yield it.
if voiced_frames:
yield b''.join([f.bytes for f in voiced_frames])
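# A minimal end-to-end sketch of how these helpers fit together. It is a hedged
# example, not part of the module's API: it assumes the `webrtcvad` package is
# installed and that "input.wav" is a hypothetical mono, 16-bit, 16 kHz file;
# the 30 ms frame and 300 ms padding durations are illustrative.
if __name__ == "__main__":
    import webrtcvad
    audio, sample_rate, duration = read_wave("input.wav")
    vad = webrtcvad.Vad(3) # aggressiveness mode, 0 (least) to 3 (most)
    frames = frame_generator(30, audio, sample_rate) # 30 ms frames
    segments = vad_collector(sample_rate, 30, 300, vad, frames)
    for i, segment in enumerate(segments):
        write_wave("chunk-%02d.wav" % i, segment, sample_rate)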
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src | coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/bin/fstconnect-main.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Removes useless (inaccessible or non-coaccessible) states and arcs from an
// FST.
#include <cstring>
#include <memory>
#include <string>
#include <fst/flags.h>
#include <fst/script/connect.h>
int fstconnect_main(int argc, char **argv) {
namespace s = fst::script;
using fst::script::FstClass;
using fst::script::MutableFstClass;
string usage = "Removes useless states and arcs from an FST.\n\n Usage: ";
usage += argv[0];
usage += " [in.fst [out.fst]]\n";
std::set_new_handler(FailedNewHandler);
SET_FLAGS(usage.c_str(), &argc, &argv, true);
if (argc > 3) {
ShowUsage();
return 1;
}
const string in_name = (argc > 1 && strcmp(argv[1], "-") != 0) ? argv[1] : "";
const string out_name = argc > 2 ? argv[2] : "";
std::unique_ptr<MutableFstClass> fst(MutableFstClass::Read(in_name, true));
if (!fst) return 1;
s::Connect(fst.get());
return !fst->Write(out_name);
}
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/taskcluster/tc-electron_tflite-tests.sh | #!/bin/bash
set -xe
source $(dirname "$0")/tc-tests-utils.sh
nodever=$1
electronver=$2
if [ -z "${nodever}" ]; then
echo "No node version given, aborting."
exit 1
fi;
if [ -z "${electronver}" ]; then
echo "No electron version given, aborting."
exit 1
fi;
bitrate=$3
set_ldc_sample_filename "${bitrate}"
model_source=${DEEPSPEECH_TEST_MODEL//.pb/.tflite}
model_name=$(basename "${model_source}")
model_name_mmap=$(basename "${model_source}")
download_data
node --version
npm --version
NODE_ROOT="${DS_ROOT_TASK}/ds-test/"
NODE_CACHE="${DS_ROOT_TASK}/ds-test.cache/"
export NODE_PATH="${NODE_ROOT}/node_modules/"
export PATH="${NODE_ROOT}:${NODE_PATH}/.bin/:${NODE_PATH}/electron/dist/:$PATH"
# make sure that NODE_ROOT really exists
mkdir -p ${NODE_ROOT}
npm install --prefix ${NODE_ROOT} --cache ${NODE_CACHE} electron@${electronver}
deepspeech_npm_url=$(get_dep_npm_pkg_url)
npm install --prefix ${NODE_ROOT} --cache ${NODE_CACHE} ${deepspeech_npm_url}
if [ "${OS}" = "Darwin" ]; then
ln -s Electron.app/Contents/MacOS/Electron "${NODE_ROOT}/node_modules/electron/dist/node"
else
ln -s electron "${NODE_ROOT}/node_modules/electron/dist/node"
if [ -f "${NODE_ROOT}/node_modules/electron/dist//chrome-sandbox" ]; then
export ELECTRON_DISABLE_SANDBOX=1
fi;
fi
find ${NODE_ROOT}/node_modules/electron/dist/
which electron
which node
if [ "${OS}" = "Linux" ]; then
export DISPLAY=':99.0'
Xvfb :99 -screen 0 1024x768x24 > /dev/null 2>&1 &
xvfb_process=$!
fi
node --version
check_runtime_electronjs
run_electronjs_inference_tests
if [ "${OS}" = "Linux" ]; then
sleep 1
kill -9 ${xvfb_process} || true
fi
| 0 |
coqui_public_repos/TTS/TTS/utils | coqui_public_repos/TTS/TTS/utils/audio/numpy_transforms.py | from io import BytesIO
from typing import Tuple
import librosa
import numpy as np
import scipy
import soundfile as sf
from librosa import magphase, pyin
# For using kwargs
# pylint: disable=unused-argument
def build_mel_basis(
*,
sample_rate: int = None,
fft_size: int = None,
num_mels: int = None,
mel_fmax: int = None,
mel_fmin: int = None,
**kwargs,
) -> np.ndarray:
"""Build melspectrogram basis.
Returns:
np.ndarray: melspectrogram basis.
"""
if mel_fmax is not None:
assert mel_fmax <= sample_rate // 2
assert mel_fmax - mel_fmin > 0
return librosa.filters.mel(sr=sample_rate, n_fft=fft_size, n_mels=num_mels, fmin=mel_fmin, fmax=mel_fmax)
def millisec_to_length(
*, frame_length_ms: int = None, frame_shift_ms: int = None, sample_rate: int = None, **kwargs
) -> Tuple[int, int]:
"""Compute hop and window length from milliseconds.
Returns:
Tuple[int, int]: hop length and window length for STFT.
"""
factor = frame_length_ms / frame_shift_ms
assert (factor).is_integer(), " [!] frame_shift_ms should divide frame_length_ms"
win_length = int(frame_length_ms / 1000.0 * sample_rate)
hop_length = int(win_length / float(factor))
return win_length, hop_length
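# Example (hedged; the values are illustrative): 50 ms windows with a 10 ms
# shift at a 22050 Hz sample rate give
#
#   millisec_to_length(frame_length_ms=50, frame_shift_ms=10, sample_rate=22050)
#   # -> (1102, 220)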
def _log(x, base):
if base == 10:
return np.log10(x)
return np.log(x)
def _exp(x, base):
if base == 10:
return np.power(10, x)
return np.exp(x)
def amp_to_db(*, x: np.ndarray = None, gain: float = 1, base: int = 10, **kwargs) -> np.ndarray:
"""Convert amplitude values to decibels.
Args:
x (np.ndarray): Amplitude spectrogram.
gain (float): Gain factor. Defaults to 1.
base (int): Logarithm base. Defaults to 10.
Returns:
np.ndarray: Decibels spectrogram.
"""
assert (x < 0).sum() == 0, " [!] Input values must be non-negative."
return gain * _log(np.maximum(1e-8, x), base)
# pylint: disable=no-self-use
def db_to_amp(*, x: np.ndarray = None, gain: float = 1, base: int = 10, **kwargs) -> np.ndarray:
"""Convert decibels spectrogram to amplitude spectrogram.
Args:
x (np.ndarray): Decibels spectrogram.
gain (float): Gain factor. Defaults to 1.
base (int): Logarithm base. Defaults to 10.
Returns:
np.ndarray: Amplitude spectrogram.
"""
return _exp(x / gain, base)
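# Sanity sketch: with the defaults (gain=1, base=10) the two functions above are
# inverses down to the 1e-8 floor, e.g.
#
#   s = np.array([0.1, 1.0, 10.0])
#   np.allclose(db_to_amp(x=amp_to_db(x=s)), s)  # -> True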
def preemphasis(*, x: np.ndarray, coef: float = 0.97, **kwargs) -> np.ndarray:
"""Apply pre-emphasis to the audio signal. Useful to reduce the correlation between neighbouring signal values.
Args:
x (np.ndarray): Audio signal.
Raises:
RuntimeError: Preemphasis coeff is set to 0.
Returns:
np.ndarray: Decorrelated audio signal.
"""
if coef == 0:
raise RuntimeError(" [!] Preemphasis is set 0.0.")
return scipy.signal.lfilter([1, -coef], [1], x)
def deemphasis(*, x: np.ndarray = None, coef: float = 0.97, **kwargs) -> np.ndarray:
"""Reverse pre-emphasis."""
if coef == 0:
raise RuntimeError(" [!] Preemphasis is set 0.0.")
return scipy.signal.lfilter([1], [1, -coef], x)
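# Roundtrip sketch: `deemphasis` inverts `preemphasis` for the same `coef`
# (both filters start from zero initial conditions), e.g.
#
#   x = np.random.randn(100)
#   np.allclose(deemphasis(x=preemphasis(x=x, coef=0.97), coef=0.97), x)  # -> True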
def spec_to_mel(*, spec: np.ndarray, mel_basis: np.ndarray = None, **kwargs) -> np.ndarray:
"""Convert a full scale linear spectrogram output of a network to a melspectrogram.
Args:
spec (np.ndarray): Normalized full scale linear spectrogram.
Shapes:
- spec: :math:`[C, T]`
Returns:
np.ndarray: Normalized melspectrogram.
"""
return np.dot(mel_basis, spec)
def mel_to_spec(*, mel: np.ndarray = None, mel_basis: np.ndarray = None, **kwargs) -> np.ndarray:
"""Convert a melspectrogram to full scale spectrogram."""
assert (mel < 0).sum() == 0, " [!] Input values must be non-negative."
inv_mel_basis = np.linalg.pinv(mel_basis)
return np.maximum(1e-10, np.dot(inv_mel_basis, mel))
def wav_to_spec(*, wav: np.ndarray = None, **kwargs) -> np.ndarray:
"""Compute a spectrogram from a waveform.
Args:
wav (np.ndarray): Waveform. Shape :math:`[T_wav,]`
Returns:
np.ndarray: Spectrogram. Shape :math:`[C, T_spec]`. :math:`T_spec == T_wav / hop_length`
"""
D = stft(y=wav, **kwargs)
S = np.abs(D)
return S.astype(np.float32)
def wav_to_mel(*, wav: np.ndarray = None, mel_basis=None, **kwargs) -> np.ndarray:
"""Compute a melspectrogram from a waveform."""
D = stft(y=wav, **kwargs)
S = spec_to_mel(spec=np.abs(D), mel_basis=mel_basis, **kwargs)
return S.astype(np.float32)
def spec_to_wav(*, spec: np.ndarray, power: float = 1.5, **kwargs) -> np.ndarray:
"""Convert a spectrogram to a waveform using Griffi-Lim vocoder."""
S = spec.copy()
return griffin_lim(spec=S**power, **kwargs)
def mel_to_wav(*, mel: np.ndarray = None, power: float = 1.5, **kwargs) -> np.ndarray:
"""Convert a melspectrogram to a waveform using Griffi-Lim vocoder."""
S = mel.copy()
S = mel_to_spec(mel=S, mel_basis=kwargs["mel_basis"]) # Convert back to linear
return griffin_lim(spec=S**power, **kwargs)
### STFT and ISTFT ###
def stft(
*,
y: np.ndarray = None,
fft_size: int = None,
hop_length: int = None,
win_length: int = None,
pad_mode: str = "reflect",
window: str = "hann",
center: bool = True,
**kwargs,
) -> np.ndarray:
"""Librosa STFT wrapper.
Check http://librosa.org/doc/main/generated/librosa.stft.html argument details.
Returns:
np.ndarray: Complex number array.
"""
return librosa.stft(
y=y,
n_fft=fft_size,
hop_length=hop_length,
win_length=win_length,
pad_mode=pad_mode,
window=window,
center=center,
)
def istft(
*,
y: np.ndarray = None,
hop_length: int = None,
win_length: int = None,
window: str = "hann",
center: bool = True,
**kwargs,
) -> np.ndarray:
"""Librosa iSTFT wrapper.
Check http://librosa.org/doc/main/generated/librosa.istft.html argument details.
Returns:
        np.ndarray: Time-domain signal reconstructed from the complex STFT matrix.
"""
return librosa.istft(y, hop_length=hop_length, win_length=win_length, center=center, window=window)
def griffin_lim(*, spec: np.ndarray = None, num_iter=60, **kwargs) -> np.ndarray:
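    """Reconstruct a waveform from a magnitude spectrogram via the Griffin-Lim
    algorithm: the phase is re-estimated by alternating ISTFT and STFT for
    `num_iter` iterations while keeping the given magnitudes. STFT parameters
    are forwarded through `kwargs`."""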
angles = np.exp(2j * np.pi * np.random.rand(*spec.shape))
S_complex = np.abs(spec).astype(complex)
y = istft(y=S_complex * angles, **kwargs)
if not np.isfinite(y).all():
print(" [!] Waveform is not finite everywhere. Skipping the GL.")
return np.array([0.0])
for _ in range(num_iter):
angles = np.exp(1j * np.angle(stft(y=y, **kwargs)))
y = istft(y=S_complex * angles, **kwargs)
return y
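# Usage sketch (hedged; the STFT parameters are illustrative and must match
# between analysis and reconstruction):
#
#   S = wav_to_spec(wav=wav, fft_size=1024, hop_length=256, win_length=1024)
#   wav_hat = griffin_lim(spec=S, num_iter=60, fft_size=1024,
#                         hop_length=256, win_length=1024)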
def compute_stft_paddings(
*, x: np.ndarray = None, hop_length: int = None, pad_two_sides: bool = False, **kwargs
) -> Tuple[int, int]:
"""Compute paddings used by Librosa's STFT. Compute right padding (final frame) or both sides padding
(first and final frames)"""
pad = (x.shape[0] // hop_length + 1) * hop_length - x.shape[0]
if not pad_two_sides:
return 0, pad
return pad // 2, pad // 2 + pad % 2
def compute_f0(
*,
x: np.ndarray = None,
pitch_fmax: float = None,
pitch_fmin: float = None,
hop_length: int = None,
win_length: int = None,
sample_rate: int = None,
stft_pad_mode: str = "reflect",
center: bool = True,
**kwargs,
) -> np.ndarray:
"""Compute pitch (f0) of a waveform using the same parameters used for computing melspectrogram.
Args:
x (np.ndarray): Waveform. Shape :math:`[T_wav,]`
pitch_fmax (float): Pitch max value.
pitch_fmin (float): Pitch min value.
hop_length (int): Number of frames between STFT columns.
win_length (int): STFT window length.
sample_rate (int): Audio sampling rate.
stft_pad_mode (str): Padding mode for STFT.
center (bool): Centered padding.
Returns:
np.ndarray: Pitch. Shape :math:`[T_pitch,]`. :math:`T_pitch == T_wav / hop_length`
Examples:
>>> WAV_FILE = filename = librosa.example('vibeace')
>>> from TTS.config import BaseAudioConfig
>>> from TTS.utils.audio import AudioProcessor
>>> conf = BaseAudioConfig(pitch_fmax=640, pitch_fmin=1)
>>> ap = AudioProcessor(**conf)
>>> wav = ap.load_wav(WAV_FILE, sr=ap.sample_rate)[:5 * ap.sample_rate]
>>> pitch = ap.compute_f0(wav)
"""
    assert pitch_fmax is not None, " [!] Set `pitch_fmax` before calling `compute_f0`."
    assert pitch_fmin is not None, " [!] Set `pitch_fmin` before calling `compute_f0`."
f0, voiced_mask, _ = pyin(
y=x.astype(np.double),
fmin=pitch_fmin,
fmax=pitch_fmax,
sr=sample_rate,
frame_length=win_length,
win_length=win_length // 2,
hop_length=hop_length,
pad_mode=stft_pad_mode,
center=center,
n_thresholds=100,
beta_parameters=(2, 18),
boltzmann_parameter=2,
resolution=0.1,
max_transition_rate=35.92,
switch_prob=0.01,
no_trough_prob=0.01,
)
f0[~voiced_mask] = 0.0
return f0
def compute_energy(y: np.ndarray, **kwargs) -> np.ndarray:
"""Compute energy of a waveform using the same parameters used for computing melspectrogram.
Args:
        y (np.ndarray): Waveform. Shape :math:`[T_wav,]`
Returns:
np.ndarray: energy. Shape :math:`[T_energy,]`. :math:`T_energy == T_wav / hop_length`
Examples:
>>> WAV_FILE = filename = librosa.example('vibeace')
>>> from TTS.config import BaseAudioConfig
>>> from TTS.utils.audio import AudioProcessor
>>> conf = BaseAudioConfig()
>>> ap = AudioProcessor(**conf)
>>> wav = ap.load_wav(WAV_FILE, sr=ap.sample_rate)[:5 * ap.sample_rate]
>>> energy = ap.compute_energy(wav)
"""
x = stft(y=y, **kwargs)
mag, _ = magphase(x)
energy = np.sqrt(np.sum(mag**2, axis=0))
return energy
### Audio Processing ###
def find_endpoint(
*,
wav: np.ndarray = None,
trim_db: float = -40,
sample_rate: int = None,
min_silence_sec=0.8,
gain: float = None,
base: int = None,
**kwargs,
) -> int:
"""Find the last point without silence at the end of a audio signal.
Args:
wav (np.ndarray): Audio signal.
threshold_db (int, optional): Silence threshold in decibels. Defaults to -40.
min_silence_sec (float, optional): Ignore silences that are shorter then this in secs. Defaults to 0.8.
gian (float, optional): Gain to be used to convert trim_db to trim_amp. Defaults to None.
base (int, optional): Base of the logarithm used to convert trim_db to trim_amp. Defaults to 10.
Returns:
int: Last point without silence.
"""
window_length = int(sample_rate * min_silence_sec)
hop_length = int(window_length / 4)
threshold = db_to_amp(x=-trim_db, gain=gain, base=base)
for x in range(hop_length, len(wav) - window_length, hop_length):
if np.max(wav[x : x + window_length]) < threshold:
return x + hop_length
return len(wav)
def trim_silence(
*,
wav: np.ndarray = None,
sample_rate: int = None,
trim_db: float = None,
win_length: int = None,
hop_length: int = None,
**kwargs,
) -> np.ndarray:
"""Trim silent parts with a threshold and 0.01 sec margin"""
margin = int(sample_rate * 0.01)
wav = wav[margin:-margin]
return librosa.effects.trim(wav, top_db=trim_db, frame_length=win_length, hop_length=hop_length)[0]
def volume_norm(*, x: np.ndarray = None, coef: float = 0.95, **kwargs) -> np.ndarray:
"""Normalize the volume of an audio signal.
Args:
x (np.ndarray): Raw waveform.
coef (float): Coefficient to rescale the maximum value. Defaults to 0.95.
Returns:
np.ndarray: Volume normalized waveform.
"""
return x / abs(x).max() * coef
def rms_norm(*, wav: np.ndarray = None, db_level: float = -27.0, **kwargs) -> np.ndarray:
r = 10 ** (db_level / 20)
a = np.sqrt((len(wav) * (r**2)) / np.sum(wav**2))
return wav * a
def rms_volume_norm(*, x: np.ndarray, db_level: float = -27.0, **kwargs) -> np.ndarray:
"""Normalize the volume based on RMS of the signal.
Args:
x (np.ndarray): Raw waveform.
db_level (float): Target dB level in RMS. Defaults to -27.0.
Returns:
np.ndarray: RMS normalized waveform.
"""
assert -99 <= db_level <= 0, " [!] db_level should be between -99 and 0"
wav = rms_norm(wav=x, db_level=db_level)
return wav
def load_wav(*, filename: str, sample_rate: int = None, resample: bool = False, **kwargs) -> np.ndarray:
"""Read a wav file using Librosa and optionally resample, silence trim, volume normalize.
    Resampling slows down loading the file significantly. Therefore it is recommended to resample the file beforehand.
Args:
filename (str): Path to the wav file.
        sample_rate (int, optional): Sampling rate used for resampling. Defaults to None.
resample (bool, optional): Resample the audio file when loading. Slows down the I/O time. Defaults to False.
Returns:
np.ndarray: Loaded waveform.
"""
if resample:
# loading with resampling. It is significantly slower.
x, _ = librosa.load(filename, sr=sample_rate)
else:
# SF is faster than librosa for loading files
x, _ = sf.read(filename)
return x
def save_wav(*, wav: np.ndarray, path: str, sample_rate: int = None, pipe_out=None, **kwargs) -> None:
"""Save float waveform to a file using Scipy.
Args:
wav (np.ndarray): Waveform with float values in range [-1, 1] to save.
        path (str): Path to the output file.
        sample_rate (int, optional): Sampling rate used for saving to the file. Defaults to None.
        pipe_out (BytesIO, optional): Stream (e.g. sys.stdout) to which the generated wav is also written, to support shell pipes.
"""
wav_norm = wav * (32767 / max(0.01, np.max(np.abs(wav))))
wav_norm = wav_norm.astype(np.int16)
if pipe_out:
wav_buffer = BytesIO()
scipy.io.wavfile.write(wav_buffer, sample_rate, wav_norm)
wav_buffer.seek(0)
pipe_out.buffer.write(wav_buffer.read())
scipy.io.wavfile.write(path, sample_rate, wav_norm)
def mulaw_encode(*, wav: np.ndarray, mulaw_qc: int, **kwargs) -> np.ndarray:
mu = 2**mulaw_qc - 1
signal = np.sign(wav) * np.log(1 + mu * np.abs(wav)) / np.log(1.0 + mu)
signal = (signal + 1) / 2 * mu + 0.5
    return np.floor(signal)
def mulaw_decode(*, wav, mulaw_qc: int, **kwargs) -> np.ndarray:
"""Recovers waveform from quantized values."""
mu = 2**mulaw_qc - 1
x = np.sign(wav) / mu * ((1 + mu) ** np.abs(wav) - 1)
return x
def encode_16bits(*, x: np.ndarray, **kwargs) -> np.ndarray:
return np.clip(x * 2**15, -(2**15), 2**15 - 1).astype(np.int16)
def quantize(*, x: np.ndarray, quantize_bits: int, **kwargs) -> np.ndarray:
"""Quantize a waveform to a given number of bits.
Args:
x (np.ndarray): Waveform to quantize. Must be normalized into the range `[-1, 1]`.
quantize_bits (int): Number of quantization bits.
Returns:
np.ndarray: Quantized waveform.
"""
return (x + 1.0) * (2**quantize_bits - 1) / 2
def dequantize(*, x, quantize_bits, **kwargs) -> np.ndarray:
"""Dequantize a waveform from the given number of bits."""
return 2 * x / (2**quantize_bits - 1) - 1
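# Roundtrip sketch for the quantizers above (a hedged example, not part of the
# public API): `quantize`/`dequantize` are exact inverses for inputs in [-1, 1]:
#
#   x = np.linspace(-1.0, 1.0, 5)
#   q = quantize(x=x, quantize_bits=10)
#   np.allclose(dequantize(x=q, quantize_bits=10), x)  # -> True
#
# `mulaw_decode` expects values back in [-1, 1], so the mu-law pair only
# roundtrips (approximately, due to the floor) after rescaling the codes:
#
#   code = mulaw_encode(wav=x, mulaw_qc=8)
#   x_hat = mulaw_decode(wav=2 * code / (2**8 - 1) - 1, mulaw_qc=8)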
| 0 |
coqui_public_repos/Trainer | coqui_public_repos/Trainer/examples/train_mnist.py | """
This example shows training of a simple fully-connected model with the MNIST dataset using Auto Training mode of 👟.
"""
import os
from dataclasses import dataclass
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
from trainer import TrainerConfig, TrainerModel, Trainer, TrainerArgs
@dataclass
class MnistModelConfig(TrainerConfig):
optimizer: str = "Adam"
lr: float = 0.001
epochs: int = 1
print_step: int = 1
save_step: int = 5
plot_step: int = 5
dashboard_logger: str = "tensorboard"
class MnistModel(TrainerModel):
def __init__(self):
super().__init__()
# mnist images are (1, 28, 28) (channels, height, width)
self.layer_1 = nn.Linear(28 * 28, 128)
self.layer_2 = nn.Linear(128, 256)
self.layer_3 = nn.Linear(256, 10)
def forward(self, x):
batch_size, _, _, _ = x.size()
# (b, 1, 28, 28) -> (b, 1*28*28)
x = x.view(batch_size, -1)
x = self.layer_1(x)
x = F.relu(x)
x = self.layer_2(x)
x = F.relu(x)
x = self.layer_3(x)
x = F.log_softmax(x, dim=1)
return x
def train_step(self, batch, criterion):
x, y = batch
logits = self(x)
loss = criterion(logits, y)
return {"model_outputs": logits}, {"loss": loss}
def eval_step(self, batch, criterion):
x, y = batch
logits = self(x)
loss = criterion(logits, y)
return {"model_outputs": logits}, {"loss": loss}
@staticmethod
def get_criterion():
return torch.nn.NLLLoss()
def get_data_loader(
self, config, assets, is_eval, samples, verbose, num_gpus, rank=0
): # pylint: disable=unused-argument
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
dataset = MNIST(os.getcwd(), train=not is_eval, download=True, transform=transform)
dataset.data = dataset.data[:256]
dataset.targets = dataset.targets[:256]
dataloader = DataLoader(dataset, batch_size=config.batch_size)
return dataloader
def main():
"""Run `MNIST` model training from scratch or from previous checkpoint."""
# init args and config
train_args = TrainerArgs()
config = MnistModelConfig()
# init the model from config
model = MnistModel()
# init the trainer and 🚀
trainer = Trainer(
train_args,
config,
config.output_path,
model=model,
train_samples=model.get_data_loader(config, None, False, None, None, None),
eval_samples=model.get_data_loader(config, None, True, None, None, None),
parse_command_line_args=True,
)
trainer.fit()
if __name__ == "__main__":
main()
| 0 |
coqui_public_repos/TTS/TTS/server | coqui_public_repos/TTS/TTS/server/templates/index.html | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="🐸Coqui AI TTS demo server.">
<meta name="author" content="🐸Coqui AI TTS">
<title>TTS engine</title>
<!-- Bootstrap core CSS -->
<link href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css"
integrity="sha384-WskhaSGFgHYWDcbwN70/dfYBj47jz9qbsMId/iRN3ewGhXQFZCSftd1LZCfmhktB" crossorigin="anonymous"
rel="stylesheet">
<!-- Custom styles for this template -->
<style>
body {
padding-top: 54px;
}
@media (min-width: 992px) {
body {
padding-top: 56px;
}
}
</style>
</head>
<body>
<a href="https://github.com/coqui-ai/TTS"><img style="position: absolute; z-index:1000; top: 0; left: 0; border: 0;"
src="https://s3.amazonaws.com/github/ribbons/forkme_left_darkblue_121621.png" alt="Fork me on GitHub"></a>
<!-- Navigation -->
<!--
<nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
<div class="container">
<a class="navbar-brand" href="#">Coqui TTS</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarResponsive" aria-controls="navbarResponsive" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarResponsive">
<ul class="navbar-nav ml-auto">
<li class="nav-item active">
<a class="nav-link" href="#">Home
<span class="sr-only">(current)</span>
</a>
</li>
</ul>
</div>
</div>
</nav>
-->
<!-- Page Content -->
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<img class="mt-5" src="{{url_for('static', filename='coqui-log-green-TTS.png')}}" align="middle"
width="512" />
<ul class="list-unstyled">
</ul>
{%if use_gst%}
<input value='{"0": 0.1}' id="style_wav" placeholder="style wav (dict or path to wav).." size=45
type="text" name="style_wav">
{%endif%}
<input id="text" placeholder="Type here..." size=45 type="text" name="text">
<button id="speak-button" name="speak">Speak</button><br /><br />
{%if use_multi_speaker%}
Choose a speaker:
<select id="speaker_id" name=speaker_id method="GET" action="/">
{% for speaker_id in speaker_ids %}
<option value="{{speaker_id}}" SELECTED>{{speaker_id}}</option>"
{% endfor %}
</select><br /><br />
{%endif%}
{%if use_multi_language%}
Choose a language:
<select id="language_id" name=language_id method="GET" action="/">
{% for language_id in language_ids %}
<option value="{{language_id}}" SELECTED>{{language_id}}</option>"
{% endfor %}
</select><br /><br />
{%endif%}
{%if show_details%}
<button id="details-button" onclick="location.href = 'details'" name="model-details">Model
Details</button><br /><br />
{%endif%}
<audio id="audio" controls autoplay hidden></audio>
<p id="message"></p>
</div>
</div>
</div>
<!-- Bootstrap core JavaScript -->
<script>
function getTextValue(textId) {
const container = q(textId)
if (container) {
return container.value
}
return ""
}
function q(selector) { return document.querySelector(selector) }
q('#text').focus()
function do_tts(e) {
const text = q('#text').value
const speaker_id = getTextValue('#speaker_id')
const style_wav = getTextValue('#style_wav')
const language_id = getTextValue('#language_id')
if (text) {
q('#message').textContent = 'Synthesizing...'
q('#speak-button').disabled = true
q('#audio').hidden = true
synthesize(text, speaker_id, style_wav, language_id)
}
e.preventDefault()
return false
}
q('#speak-button').addEventListener('click', do_tts)
q('#text').addEventListener('keyup', function (e) {
if (e.keyCode == 13) { // enter
do_tts(e)
}
})
function synthesize(text, speaker_id = "", style_wav = "", language_id = "") {
fetch(`/api/tts?text=${encodeURIComponent(text)}&speaker_id=${encodeURIComponent(speaker_id)}&style_wav=${encodeURIComponent(style_wav)}&language_id=${encodeURIComponent(language_id)}`, { cache: 'no-cache' })
.then(function (res) {
if (!res.ok) throw Error(res.statusText)
return res.blob()
}).then(function (blob) {
q('#message').textContent = ''
q('#speak-button').disabled = false
q('#audio').src = URL.createObjectURL(blob)
q('#audio').hidden = false
}).catch(function (err) {
q('#message').textContent = 'Error: ' + err.message
q('#speak-button').disabled = false
})
}
</script>
</body>
</html> | 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/script/minimize.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/script/fst-class.h>
#include <fst/script/minimize.h>
#include <fst/script/script-impl.h>
namespace fst {
namespace script {
void Minimize(MutableFstClass *ofst1, MutableFstClass *ofst2, float delta,
bool allow_nondet) {
if (ofst2 && !internal::ArcTypesMatch(*ofst1, *ofst2, "Minimize")) {
ofst1->SetProperties(kError, kError);
ofst2->SetProperties(kError, kError);
return;
}
MinimizeArgs args(ofst1, ofst2, delta, allow_nondet);
Apply<Operation<MinimizeArgs>>("Minimize", ofst1->ArcType(), &args);
}
REGISTER_FST_OPERATION(Minimize, StdArc, MinimizeArgs);
REGISTER_FST_OPERATION(Minimize, LogArc, MinimizeArgs);
REGISTER_FST_OPERATION(Minimize, Log64Arc, MinimizeArgs);
} // namespace script
} // namespace fst
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst/topsort.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Topological sort of FSTs.
#ifndef FST_TOPSORT_H_
#define FST_TOPSORT_H_
#include <memory>
#include <vector>
#include <fst/dfs-visit.h>
#include <fst/fst.h>
#include <fst/statesort.h>
namespace fst {
// DFS visitor class to return topological ordering.
template <class Arc>
class TopOrderVisitor {
public:
using StateId = typename Arc::StateId;
// If acyclic, order[i] gives the topological position of StateId i;
// otherwise it is unchanged. acyclic_ will be true iff the FST has no
// cycles. The caller retains ownership of the state order vector.
TopOrderVisitor(std::vector<StateId> *order, bool *acyclic)
: order_(order), acyclic_(acyclic) {}
void InitVisit(const Fst<Arc> &fst) {
finish_.reset(new std::vector<StateId>());
*acyclic_ = true;
}
constexpr bool InitState(StateId, StateId) const { return true; }
constexpr bool TreeArc(StateId, const Arc &) const { return true; }
bool BackArc(StateId, const Arc &) { return (*acyclic_ = false); }
constexpr bool ForwardOrCrossArc(StateId, const Arc &) const { return true; }
void FinishState(StateId s, StateId, const Arc *) { finish_->push_back(s); }
void FinishVisit() {
if (*acyclic_) {
order_->clear();
for (StateId s = 0; s < finish_->size(); ++s) {
order_->push_back(kNoStateId);
}
for (StateId s = 0; s < finish_->size(); ++s) {
(*order_)[(*finish_)[finish_->size() - s - 1]] = s;
}
}
finish_.reset();
}
private:
std::vector<StateId> *order_;
bool *acyclic_;
// States in finish-time order.
std::unique_ptr<std::vector<StateId>> finish_;
};
// Topologically sorts its input if acyclic, modifying it. Otherwise, the input
// is unchanged. When sorted, all transitions are from lower to higher state
// IDs.
//
// Complexity:
//
// Time: O(V + E)
// Space: O(V + E)
//
// where V is the number of states and E is the number of arcs.
template <class Arc>
bool TopSort(MutableFst<Arc> *fst) {
std::vector<typename Arc::StateId> order;
bool acyclic;
TopOrderVisitor<Arc> top_order_visitor(&order, &acyclic);
DfsVisit(*fst, &top_order_visitor);
if (acyclic) {
StateSort(fst, order);
fst->SetProperties(kAcyclic | kInitialAcyclic | kTopSorted,
kAcyclic | kInitialAcyclic | kTopSorted);
} else {
fst->SetProperties(kCyclic | kNotTopSorted, kCyclic | kNotTopSorted);
}
return acyclic;
}
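// A usage sketch (hedged; assumes an StdVectorFst built elsewhere):
//
//   StdVectorFst fst;
//   // ... add states and arcs ...
//   if (TopSort(&fst)) {
//     // All arcs now go from lower to higher state IDs.
//   } else {
//     // The FST is cyclic and was left unchanged.
//   }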
} // namespace fst
#endif // FST_TOPSORT_H_
| 0 |
coqui_public_repos | coqui_public_repos/STT-examples/tests.sh | #!/bin/bash
set -xe
THIS=$(dirname "$0")
source $HOME/STT/ds/taskcluster/tc-tests-utils.sh
DEP_TASK_ID=$(curl -s https://community-tc.services.mozilla.com/api/queue/v1/task/${TASK_ID} | python -c 'import json; import sys; print(" ".join(json.loads(sys.stdin.read())["dependencies"]));')
get_python_wheel_url()
{
local this_python_version=$1
extract_python_versions "${this_python_version}" "pyver" "pyver_pkg" "py_unicode_type" "pyconf" "pyalias"
echo "$(get_python_pkg_url "${pyver_pkg}" "${py_unicode_type}" "STT" https://community-tc.services.mozilla.com/api/queue/v1/task/${DEP_TASK_ID}/artifacts/public)"
}
get_npm_package_url()
{
echo "https://community-tc.services.mozilla.com/api/queue/v1/task/${DEP_TASK_ID}/artifacts/public/stt-${DS_VERSION}.tgz"
}
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/extensions | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/extensions/far/farextract.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Extracts component FSTs from an finite-state archive.
#include <string>
#include <vector>
#include <fst/flags.h>
#include <fst/extensions/far/farscript.h>
#include <fst/extensions/far/getters.h>
DEFINE_string(filename_prefix, "", "Prefix to append to filenames");
DEFINE_string(filename_suffix, "", "Suffix to append to filenames");
DEFINE_int32(generate_filenames, 0,
"Generate N digit numeric filenames (def: use keys)");
DEFINE_string(keys, "",
"Extract set of keys separated by comma (default) "
"including ranges delimited by dash (default)");
DEFINE_string(key_separator, ",", "Separator for individual keys");
DEFINE_string(range_delimiter, "-", "Delimiter for ranges of keys");
int main(int argc, char **argv) {
namespace s = fst::script;
string usage = "Extracts FSTs from a finite-state archive.\n\n Usage:";
usage += argv[0];
usage += " [in1.far in2.far...]\n";
std::set_new_handler(FailedNewHandler);
SET_FLAGS(usage.c_str(), &argc, &argv, true);
s::ExpandArgs(argc, argv, &argc, &argv);
std::vector<string> in_fnames;
for (int i = 1; i < argc; ++i) in_fnames.push_back(argv[i]);
if (in_fnames.empty()) in_fnames.push_back("");
const auto arc_type = s::LoadArcTypeFromFar(in_fnames[0]);
if (arc_type.empty()) return 1;
s::FarExtract(in_fnames, arc_type, FLAGS_generate_filenames, FLAGS_keys,
FLAGS_key_separator, FLAGS_range_delimiter,
FLAGS_filename_prefix, FLAGS_filename_suffix);
return 0;
}
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/script/rmepsilon.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/script/fst-class.h>
#include <fst/script/rmepsilon.h>
#include <fst/script/script-impl.h>
namespace fst {
namespace script {
void RmEpsilon(MutableFstClass *fst, const RmEpsilonOptions &opts) {
if (!fst->WeightTypesMatch(opts.weight_threshold, "RmEpsilon")) {
fst->SetProperties(kError, kError);
return;
}
RmEpsilonArgs args(fst, opts);
Apply<Operation<RmEpsilonArgs>>("RmEpsilon", fst->ArcType(), &args);
}
REGISTER_FST_OPERATION(RmEpsilon, StdArc, RmEpsilonArgs);
REGISTER_FST_OPERATION(RmEpsilon, LogArc, RmEpsilonArgs);
REGISTER_FST_OPERATION(RmEpsilon, Log64Arc, RmEpsilonArgs);
} // namespace script
} // namespace fst
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/bin/Makefile.am | AM_CPPFLAGS = -I$(srcdir)/../include -I$(srcdir)/../script $(ICU_FLAGS)
LDADD = ../script/libfstscript.la ../lib/libfst.la -lm $(DL_LIBS)
if HAVE_BIN
bin_PROGRAMS = fstarcsort fstclosure fstcompile fstcompose fstconcat \
fstconnect fstconvert fstdeterminize fstdifference fstdisambiguate fstdraw \
fstencode fstepsnormalize fstequal fstequivalent fstinfo fstintersect \
fstinvert fstisomorphic fstmap fstminimize fstprint fstproject fstprune \
fstpush fstrandgen fstrelabel fstreplace fstreverse fstreweight fstrmepsilon \
fstshortestdistance fstshortestpath fstsymbols fstsynchronize fsttopsort \
fstunion
fstarcsort_SOURCES = fstarcsort.cc fstarcsort-main.cc
fstclosure_SOURCES = fstclosure.cc fstclosure-main.cc
fstcompile_SOURCES = fstcompile.cc fstcompile-main.cc
fstcompose_SOURCES = fstcompose.cc fstcompose-main.cc
fstconcat_SOURCES = fstconcat.cc fstconcat-main.cc
fstconnect_SOURCES = fstconnect.cc fstconnect-main.cc
fstconvert_SOURCES = fstconvert.cc fstconvert-main.cc
fstdeterminize_SOURCES = fstdeterminize.cc fstdeterminize-main.cc
fstdifference_SOURCES = fstdifference.cc fstdifference-main.cc
fstdisambiguate_SOURCES = fstdisambiguate.cc fstdisambiguate-main.cc
fstdraw_SOURCES = fstdraw.cc fstdraw-main.cc
fstencode_SOURCES = fstencode.cc fstencode-main.cc
fstepsnormalize_SOURCES = fstepsnormalize.cc fstepsnormalize-main.cc
fstequal_SOURCES = fstequal.cc fstequal-main.cc
fstequivalent_SOURCES = fstequivalent.cc fstequivalent-main.cc
fstinfo_SOURCES = fstinfo.cc fstinfo-main.cc
fstintersect_SOURCES = fstintersect.cc fstintersect-main.cc
fstinvert_SOURCES = fstinvert.cc fstinvert-main.cc
fstisomorphic_SOURCES = fstisomorphic.cc fstisomorphic-main.cc
fstmap_SOURCES = fstmap.cc fstmap-main.cc
fstminimize_SOURCES = fstminimize.cc fstminimize-main.cc
fstprint_SOURCES = fstprint.cc fstprint-main.cc
fstproject_SOURCES = fstproject.cc fstproject-main.cc
fstprune_SOURCES = fstprune.cc fstprune-main.cc
fstpush_SOURCES = fstpush.cc fstpush-main.cc
fstrandgen_SOURCES = fstrandgen.cc fstrandgen-main.cc
fstrelabel_SOURCES = fstrelabel.cc fstrelabel-main.cc
fstreplace_SOURCES = fstreplace.cc fstreplace-main.cc
fstreverse_SOURCES = fstreverse.cc fstreverse-main.cc
fstreweight_SOURCES = fstreweight.cc fstreweight-main.cc
fstrmepsilon_SOURCES = fstrmepsilon.cc fstrmepsilon-main.cc
fstshortestdistance_SOURCES = fstshortestdistance.cc fstshortestdistance-main.cc
fstshortestpath_SOURCES = fstshortestpath.cc fstshortestpath-main.cc
fstsymbols_SOURCES = fstsymbols.cc fstsymbols-main.cc
fstsynchronize_SOURCES = fstsynchronize.cc fstsynchronize-main.cc
fsttopsort_SOURCES = fsttopsort.cc fsttopsort-main.cc
fstunion_SOURCES = fstunion.cc fstunion-main.cc
endif
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/extensions | coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/extensions/pdt/pdtexpand.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Expands a (bounded-stack) PDT as an FST.
#include <cstring>
#include <memory>
#include <string>
#include <vector>
#include <fst/flags.h>
#include <fst/log.h>
#include <fst/extensions/pdt/pdtscript.h>
#include <fst/util.h>
DEFINE_string(pdt_parentheses, "", "PDT parenthesis label pairs");
DEFINE_bool(connect, true, "Trim output?");
DEFINE_bool(keep_parentheses, false, "Keep PDT parentheses in result?");
DEFINE_string(weight, "", "Weight threshold");
int main(int argc, char **argv) {
namespace s = fst::script;
using fst::script::FstClass;
using fst::script::VectorFstClass;
using fst::script::WeightClass;
using fst::ReadLabelPairs;
string usage = "Expand a (bounded-stack) PDT as an FST.\n\n Usage: ";
usage += argv[0];
usage += " in.pdt [out.fst]\n";
std::set_new_handler(FailedNewHandler);
SET_FLAGS(usage.c_str(), &argc, &argv, true);
if (argc > 3) {
ShowUsage();
return 1;
}
const string in_name =
(argc > 1 && (strcmp(argv[1], "-") != 0)) ? argv[1] : "";
const string out_name = argc > 2 ? argv[2] : "";
std::unique_ptr<FstClass> ifst(FstClass::Read(in_name));
if (!ifst) return 1;
if (FLAGS_pdt_parentheses.empty()) {
LOG(ERROR) << argv[0] << ": No PDT parenthesis label pairs provided";
return 1;
}
std::vector<s::LabelPair> parens;
if (!ReadLabelPairs(FLAGS_pdt_parentheses, &parens, false)) return 1;
const auto weight_threshold =
FLAGS_weight.empty() ? WeightClass::Zero(ifst->WeightType())
: WeightClass(ifst->WeightType(), FLAGS_weight);
VectorFstClass ofst(ifst->ArcType());
s::PdtExpand(*ifst, parens, &ofst,
s::PdtExpandOptions(FLAGS_connect, FLAGS_keep_parentheses,
weight_threshold));
ofst.Write(out_name);
return 0;
}
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/script/print.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <ostream>
#include <string>
#include <fst/script/fst-class.h>
#include <fst/script/print.h>
#include <fst/script/script-impl.h>
namespace fst {
namespace script {
void PrintFst(const FstClass &fst, std::ostream &ostrm, const string &dest,
const SymbolTable *isyms, const SymbolTable *osyms,
const SymbolTable *ssyms, bool accept, bool show_weight_one,
const string &missing_sym) {
const auto sep = FLAGS_fst_field_separator.substr(0, 1);
FstPrinterArgs args(fst, isyms, osyms, ssyms, accept, show_weight_one, &ostrm,
dest, sep, missing_sym);
Apply<Operation<FstPrinterArgs>>("PrintFst", fst.ArcType(), &args);
}
REGISTER_FST_OPERATION(PrintFst, StdArc, FstPrinterArgs);
REGISTER_FST_OPERATION(PrintFst, LogArc, FstPrinterArgs);
REGISTER_FST_OPERATION(PrintFst, Log64Arc, FstPrinterArgs);
} // namespace script
} // namespace fst
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src | coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/bin/fstconnect.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
int fstconnect_main(int argc, char **argv);
int main(int argc, char **argv) { return fstconnect_main(argc, argv); }
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include | coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include/fst/shortest-path.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Functions to find shortest paths in an FST.
#ifndef FST_SHORTEST_PATH_H_
#define FST_SHORTEST_PATH_H_
#include <functional>
#include <type_traits>
#include <utility>
#include <vector>
#include <fst/log.h>
#include <fst/cache.h>
#include <fst/determinize.h>
#include <fst/queue.h>
#include <fst/shortest-distance.h>
#include <fst/test-properties.h>
namespace fst {
template <class Arc, class Queue, class ArcFilter>
struct ShortestPathOptions
: public ShortestDistanceOptions<Arc, Queue, ArcFilter> {
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
int32_t nshortest; // Returns n-shortest paths.
bool unique; // Only returns paths with distinct input strings.
bool has_distance; // Distance vector already contains the
// shortest distance from the initial state.
bool first_path; // Single shortest path stops after finding the first
// path to a final state; that path is the shortest path
// only when:
// (1) using the ShortestFirstQueue with all the weights
// in the FST being between One() and Zero() according to
// NaturalLess or when
// (2) using the NaturalAStarQueue with an admissible
// and consistent estimate.
Weight weight_threshold; // Pruning weight threshold.
StateId state_threshold; // Pruning state threshold.
ShortestPathOptions(Queue *queue, ArcFilter filter, int32_t nshortest = 1,
bool unique = false, bool has_distance = false,
float delta = kShortestDelta, bool first_path = false,
Weight weight_threshold = Weight::Zero(),
StateId state_threshold = kNoStateId)
: ShortestDistanceOptions<Arc, Queue, ArcFilter>(queue, filter,
kNoStateId, delta),
nshortest(nshortest),
unique(unique),
has_distance(has_distance),
first_path(first_path),
weight_threshold(std::move(weight_threshold)),
state_threshold(state_threshold) {}
};
namespace internal {
constexpr size_t kNoArc = -1;
// Helper function for SingleShortestPath building the shortest path as a left-
// to-right machine backwards from the best final state. It takes the input
// FST passed to SingleShortestPath and the parent vector and f_parent returned
// by that function, and builds the result into the provided output mutable FST.
// This is not normally called by users; see ShortestPath instead.
template <class Arc>
void SingleShortestPathBacktrace(
const Fst<Arc> &ifst, MutableFst<Arc> *ofst,
const std::vector<std::pair<typename Arc::StateId, size_t>> &parent,
typename Arc::StateId f_parent) {
using StateId = typename Arc::StateId;
ofst->DeleteStates();
ofst->SetInputSymbols(ifst.InputSymbols());
ofst->SetOutputSymbols(ifst.OutputSymbols());
StateId s_p = kNoStateId;
StateId d_p = kNoStateId;
for (StateId state = f_parent, d = kNoStateId; state != kNoStateId;
d = state, state = parent[state].first) {
d_p = s_p;
s_p = ofst->AddState();
if (d == kNoStateId) {
ofst->SetFinal(s_p, ifst.Final(f_parent));
} else {
ArcIterator<Fst<Arc>> aiter(ifst, state);
aiter.Seek(parent[d].second);
auto arc = aiter.Value();
arc.nextstate = d_p;
ofst->AddArc(s_p, arc);
}
}
ofst->SetStart(s_p);
if (ifst.Properties(kError, false)) ofst->SetProperties(kError, kError);
ofst->SetProperties(
ShortestPathProperties(ofst->Properties(kFstProperties, false), true),
kFstProperties);
}
// Helper function for SingleShortestPath building a tree of shortest paths to
// every final state in the input FST. It takes the input FST and parent values
// computed by SingleShortestPath and builds into the output mutable FST the
// subtree of ifst that consists only of the best paths to all final states.
// This is not normally called by users; see ShortestPath instead.
template <class Arc>
void SingleShortestTree(
const Fst<Arc> &ifst, MutableFst<Arc> *ofst,
const std::vector<std::pair<typename Arc::StateId, size_t>> &parent) {
ofst->DeleteStates();
ofst->SetInputSymbols(ifst.InputSymbols());
ofst->SetOutputSymbols(ifst.OutputSymbols());
ofst->SetStart(ifst.Start());
for (StateIterator<Fst<Arc>> siter(ifst); !siter.Done(); siter.Next()) {
ofst->AddState();
ofst->SetFinal(siter.Value(), ifst.Final(siter.Value()));
}
for (const auto &pair : parent) {
if (pair.first != kNoStateId && pair.second != kNoArc) {
ArcIterator<Fst<Arc>> aiter(ifst, pair.first);
aiter.Seek(pair.second);
ofst->AddArc(pair.first, aiter.Value());
}
}
if (ifst.Properties(kError, false)) ofst->SetProperties(kError, kError);
ofst->SetProperties(
ShortestPathProperties(ofst->Properties(kFstProperties, false), true),
kFstProperties);
}
// Implements the stopping criterion when ShortestPathOptions::first_path
// is set to true:
// operator()(s, d, f) == true
// iff every successful path through state 's' has a cost greater or equal
// to 'f' under the assumption that 'd' is the shortest distance to state 's'.
// Correct when using the ShortestFirstQueue with all the weights in the FST
// being between One() and Zero() according to NaturalLess.
template <typename S, typename W, typename Queue>
struct FirstPathSelect {
FirstPathSelect(const Queue &) {}
bool operator()(S s, W d, W f) const { return f == Plus(d, f); }
};
// Specialisation for A*.
// Correct when the estimate is admissible and consistent.
template <typename S, typename W, typename Estimate>
class FirstPathSelect<S, W, NaturalAStarQueue<S, W, Estimate>> {
public:
using Queue = NaturalAStarQueue<S, W, Estimate>;
FirstPathSelect(const Queue &state_queue)
: estimate_(state_queue.GetCompare().GetEstimate()) {}
bool operator()(S s, W d, W f) const {
return f == Plus(Times(d, estimate_(s)), f);
}
private:
const Estimate &estimate_;
};
// Shortest-path algorithm. It builds the output mutable FST so that it
// contains the shortest path in the input FST; distance returns the shortest
// distances from the source state to each state in the input FST, and the
// options struct is used to specify options such as the queue discipline, the
// arc filter, and the convergence delta. The f_parent argument is an output
// parameter holding the best final state, and the parent argument stores the
// backtrace for each state (i.e., the best previous state and the position of
// the arc that transitions into it). The shortest path is the lowest-weight
// path w.r.t. the natural semiring order. The weights need to be right
// distributive and have the path (kPath) property. False is returned if an
// error is encountered.
//
// This is not normally called by users; see ShortestPath instead (with n = 1).
template <class Arc, class Queue, class ArcFilter>
bool SingleShortestPath(
const Fst<Arc> &ifst, std::vector<typename Arc::Weight> *distance,
const ShortestPathOptions<Arc, Queue, ArcFilter> &opts,
typename Arc::StateId *f_parent,
std::vector<std::pair<typename Arc::StateId, size_t>> *parent) {
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
static_assert(IsPath<Weight>::value, "Weight must have path property.");
static_assert((Weight::Properties() & kRightSemiring) == kRightSemiring,
"Weight must be right distributive.");
parent->clear();
*f_parent = kNoStateId;
if (ifst.Start() == kNoStateId) return true;
std::vector<bool> enqueued;
auto state_queue = opts.state_queue;
const auto source = (opts.source == kNoStateId) ? ifst.Start() : opts.source;
bool final_seen = false;
auto f_distance = Weight::Zero();
distance->clear();
state_queue->Clear();
while (distance->size() < source) {
distance->push_back(Weight::Zero());
enqueued.push_back(false);
parent->push_back(std::make_pair(kNoStateId, kNoArc));
}
distance->push_back(Weight::One());
parent->push_back(std::make_pair(kNoStateId, kNoArc));
state_queue->Enqueue(source);
enqueued.push_back(true);
while (!state_queue->Empty()) {
const auto s = state_queue->Head();
state_queue->Dequeue();
enqueued[s] = false;
const auto sd = (*distance)[s];
// If we are using a shortest queue, no other path is going to be shorter
// than f_distance at this point.
using FirstPath = FirstPathSelect<StateId, Weight, Queue>;
if (opts.first_path && final_seen &&
FirstPath(*state_queue)(s, sd, f_distance)) {
break;
}
if (ifst.Final(s) != Weight::Zero()) {
const auto plus = Plus(f_distance, Times(sd, ifst.Final(s)));
if (f_distance != plus) {
f_distance = plus;
*f_parent = s;
}
if (!f_distance.Member()) return false;
final_seen = true;
}
for (ArcIterator<Fst<Arc>> aiter(ifst, s); !aiter.Done(); aiter.Next()) {
const auto &arc = aiter.Value();
while (distance->size() <= arc.nextstate) {
distance->push_back(Weight::Zero());
enqueued.push_back(false);
parent->push_back(std::make_pair(kNoStateId, kNoArc));
}
auto &nd = (*distance)[arc.nextstate];
const auto weight = Times(sd, arc.weight);
if (nd != Plus(nd, weight)) {
nd = Plus(nd, weight);
if (!nd.Member()) return false;
(*parent)[arc.nextstate] = std::make_pair(s, aiter.Position());
if (!enqueued[arc.nextstate]) {
state_queue->Enqueue(arc.nextstate);
enqueued[arc.nextstate] = true;
} else {
state_queue->Update(arc.nextstate);
}
}
}
}
return true;
}
template <class StateId, class Weight>
class ShortestPathCompare {
public:
ShortestPathCompare(const std::vector<std::pair<StateId, Weight>> &pairs,
const std::vector<Weight> &distance, StateId superfinal,
float delta)
: pairs_(pairs),
distance_(distance),
superfinal_(superfinal),
delta_(delta) {}
bool operator()(const StateId x, const StateId y) const {
const auto &px = pairs_[x];
const auto &py = pairs_[y];
const auto wx = Times(PWeight(px.first), px.second);
const auto wy = Times(PWeight(py.first), py.second);
// Penalize complete paths to ensure correct results with inexact weights.
// This forms a strict weak order so long as ApproxEqual(a, b) =>
// ApproxEqual(a, c) for all c s.t. less_(a, c) && less_(c, b).
if (px.first == superfinal_ && py.first != superfinal_) {
return less_(wy, wx) || ApproxEqual(wx, wy, delta_);
} else if (py.first == superfinal_ && px.first != superfinal_) {
return less_(wy, wx) && !ApproxEqual(wx, wy, delta_);
} else {
return less_(wy, wx);
}
}
private:
Weight PWeight(StateId state) const {
return (state == superfinal_)
? Weight::One()
: (state < distance_.size()) ? distance_[state] : Weight::Zero();
}
const std::vector<std::pair<StateId, Weight>> &pairs_;
const std::vector<Weight> &distance_;
const StateId superfinal_;
const float delta_;
NaturalLess<Weight> less_;
};
// N-Shortest-path algorithm: implements the core n-shortest path algorithm.
// The output is built reversed. See below for versions with more options and
// *not reversed*.
//
// The output mutable FST contains the REVERSE of the n-shortest paths in the input
// FST; distance must contain the shortest distance from each state to a final
// state in the input FST; delta is the convergence delta.
//
// The n-shortest paths are the n-lowest weight paths w.r.t. the natural
// semiring order. The single path that can be read from the ith of at most n
// transitions leaving the initial state of the input FST is the ith shortest
// path. Disregarding the initial state and initial transitions, the
// n-shortest paths, in fact, form a tree rooted at the single final state.
//
// The weights need to be left and right distributive (kSemiring) and have the
// path (kPath) property.
//
// Arc weights must satisfy the property that the sum of the weights of one or
// more paths from some state S to T is never Zero(). In particular, arc weights
// are never Zero().
//
// For more information, see:
//
// Mohri, M, and Riley, M. 2002. An efficient algorithm for the n-best-strings
// problem. In Proc. ICSLP.
//
// The algorithm relies on the shortest-distance algorithm. There are some
// issues with the pseudo-code as written in the paper (viz., line 11).
//
// IMPLEMENTATION NOTE: The input FST can be a delayed FST and at any state in
// its expansion the values of distance vector need only be defined at that time
// for the states that are known to exist.
template <class Arc, class RevArc>
void NShortestPath(const Fst<RevArc> &ifst, MutableFst<Arc> *ofst,
const std::vector<typename Arc::Weight> &distance,
int32_t nshortest, float delta = kShortestDelta,
typename Arc::Weight weight_threshold = Arc::Weight::Zero(),
typename Arc::StateId state_threshold = kNoStateId) {
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
using Pair = std::pair<StateId, Weight>;
static_assert((Weight::Properties() & kPath) == kPath,
"Weight must have path property.");
static_assert((Weight::Properties() & kSemiring) == kSemiring,
"Weight must be distributive.");
if (nshortest <= 0) return;
ofst->DeleteStates();
ofst->SetInputSymbols(ifst.InputSymbols());
ofst->SetOutputSymbols(ifst.OutputSymbols());
// Each state in ofst corresponds to a path with weight w from the initial
// state of ifst to a state s in ifst, that can be characterized by a pair
// (s, w). The vector pairs maps each state in ofst to its corresponding
// pair (s, w).
std::vector<Pair> pairs;
// The superfinal state is denoted by kNoStateId. The distance from the
// superfinal state to the final state is semiring One, so
// `distance[kNoStateId]` is not needed.
const ShortestPathCompare<StateId, Weight> compare(pairs, distance,
kNoStateId, delta);
const NaturalLess<Weight> less;
if (ifst.Start() == kNoStateId || distance.size() <= ifst.Start() ||
distance[ifst.Start()] == Weight::Zero() ||
less(weight_threshold, Weight::One()) || state_threshold == 0) {
if (ifst.Properties(kError, false)) ofst->SetProperties(kError, kError);
return;
}
ofst->SetStart(ofst->AddState());
const auto final_state = ofst->AddState();
ofst->SetFinal(final_state, Weight::One());
while (pairs.size() <= final_state) {
pairs.push_back(std::make_pair(kNoStateId, Weight::Zero()));
}
pairs[final_state] = std::make_pair(ifst.Start(), Weight::One());
std::vector<StateId> heap;
heap.push_back(final_state);
const auto limit = Times(distance[ifst.Start()], weight_threshold);
// r[s + 1], for s a state in fst, is the number of states in ofst whose
// corresponding pair contains s, i.e., the number of paths computed so far
// to s. Valid for s == kNoStateId (the superfinal state).
std::vector<int> r;
while (!heap.empty()) {
std::pop_heap(heap.begin(), heap.end(), compare);
const auto state = heap.back();
const auto p = pairs[state];
heap.pop_back();
const auto d =
(p.first == kNoStateId)
? Weight::One()
: (p.first < distance.size()) ? distance[p.first] : Weight::Zero();
if (less(limit, Times(d, p.second)) ||
(state_threshold != kNoStateId &&
ofst->NumStates() >= state_threshold)) {
continue;
}
while (r.size() <= p.first + 1) r.push_back(0);
++r[p.first + 1];
if (p.first == kNoStateId) {
ofst->AddArc(ofst->Start(), Arc(0, 0, Weight::One(), state));
}
if ((p.first == kNoStateId) && (r[p.first + 1] == nshortest)) break;
if (r[p.first + 1] > nshortest) continue;
if (p.first == kNoStateId) continue;
for (ArcIterator<Fst<RevArc>> aiter(ifst, p.first); !aiter.Done();
aiter.Next()) {
const auto &rarc = aiter.Value();
Arc arc(rarc.ilabel, rarc.olabel, rarc.weight.Reverse(), rarc.nextstate);
const auto weight = Times(p.second, arc.weight);
const auto next = ofst->AddState();
pairs.push_back(std::make_pair(arc.nextstate, weight));
arc.nextstate = state;
ofst->AddArc(next, arc);
heap.push_back(next);
std::push_heap(heap.begin(), heap.end(), compare);
}
const auto final_weight = ifst.Final(p.first).Reverse();
if (final_weight != Weight::Zero()) {
const auto weight = Times(p.second, final_weight);
const auto next = ofst->AddState();
pairs.push_back(std::make_pair(kNoStateId, weight));
ofst->AddArc(next, Arc(0, 0, final_weight, state));
heap.push_back(next);
std::push_heap(heap.begin(), heap.end(), compare);
}
}
Connect(ofst);
if (ifst.Properties(kError, false)) ofst->SetProperties(kError, kError);
ofst->SetProperties(
ShortestPathProperties(ofst->Properties(kFstProperties, false)),
kFstProperties);
}
} // namespace internal
// N-Shortest-path algorithm: this version allows finer control via the options
// argument. See below for a simpler interface. The output mutable FST contains
// the n-shortest paths in the input FST; the distance argument is used to
// return the shortest distances from the source state to each state in the
// input FST, and the options struct is used to specify the number of paths to
// return, whether they need to have distinct input strings, the queue
// discipline, the arc filter and the convergence delta.
//
// The n-shortest paths are the n-lowest weight paths w.r.t. the natural
// semiring order. The single path that can be read from the ith of at most n
// transitions leaving the initial state of the output FST is the ith shortest
// path.
// Disregarding the initial state and initial transitions, the n-shortest paths,
// in fact, form a tree rooted at the single final state.
//
// The weights need to be right distributive and have the path (kPath) property.
// They need to be left distributive as well for nshortest > 1.
//
// For more information, see:
//
// Mohri, M, and Riley, M. 2002. An efficient algorithm for the n-best-strings
// problem. In Proc. ICSLP.
//
// The algorithm relies on the shortest-distance algorithm. There are some
// issues with the pseudo-code as written in the paper (viz., line 11).
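//
// Example usage (a minimal sketch; assumes an input StdVectorFst "fst" over
// the tropical semiring):
//
//   StdVectorFst paths;
//   std::vector<TropicalWeight> distance;
//   AnyArcFilter<StdArc> arc_filter;
//   AutoQueue<StdArc::StateId> queue(fst, &distance, arc_filter);
//   ShortestPathOptions<StdArc, AutoQueue<StdArc::StateId>,
//                       AnyArcFilter<StdArc>>
//       opts(&queue, arc_filter, /*nshortest=*/3);
//   ShortestPath(fst, &paths, &distance, opts);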
template <class Arc, class Queue, class ArcFilter,
typename std::enable_if<IsPath<typename Arc::Weight>::value>::type * =
nullptr>
void ShortestPath(const Fst<Arc> &ifst, MutableFst<Arc> *ofst,
std::vector<typename Arc::Weight> *distance,
const ShortestPathOptions<Arc, Queue, ArcFilter> &opts) {
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
using RevArc = ReverseArc<Arc>;
if (opts.nshortest == 1) {
std::vector<std::pair<StateId, size_t>> parent;
StateId f_parent;
if (internal::SingleShortestPath(ifst, distance, opts, &f_parent,
&parent)) {
internal::SingleShortestPathBacktrace(ifst, ofst, parent, f_parent);
} else {
ofst->SetProperties(kError, kError);
}
return;
}
if (opts.nshortest <= 0) return;
if (!opts.has_distance) {
ShortestDistance(ifst, distance, opts);
if (distance->size() == 1 && !(*distance)[0].Member()) {
ofst->SetProperties(kError, kError);
return;
}
}
  // The algorithm works on the reverse of 'fst'; 'distance' is the distance to
  // the final state in 'rfst', and 'ofst' is built as the reverse of the tree
  // of n-shortest paths in 'rfst'.
VectorFst<RevArc> rfst;
Reverse(ifst, &rfst);
auto d = Weight::Zero();
for (ArcIterator<VectorFst<RevArc>> aiter(rfst, 0); !aiter.Done();
aiter.Next()) {
const auto &arc = aiter.Value();
const auto state = arc.nextstate - 1;
if (state < distance->size()) {
d = Plus(d, Times(arc.weight.Reverse(), (*distance)[state]));
}
}
// TODO(kbg): Avoid this expensive vector operation.
distance->insert(distance->begin(), d);
if (!opts.unique) {
internal::NShortestPath(rfst, ofst, *distance, opts.nshortest, opts.delta,
opts.weight_threshold, opts.state_threshold);
} else {
std::vector<Weight> ddistance;
DeterminizeFstOptions<RevArc> dopts(opts.delta);
DeterminizeFst<RevArc> dfst(rfst, distance, &ddistance, dopts);
internal::NShortestPath(dfst, ofst, ddistance, opts.nshortest, opts.delta,
opts.weight_threshold, opts.state_threshold);
}
// TODO(kbg): Avoid this expensive vector operation.
distance->erase(distance->begin());
}
template <class Arc, class Queue, class ArcFilter,
typename std::enable_if<!IsPath<typename Arc::Weight>::value>::type
* = nullptr>
void ShortestPath(const Fst<Arc> &, MutableFst<Arc> *ofst,
std::vector<typename Arc::Weight> *,
const ShortestPathOptions<Arc, Queue, ArcFilter> &) {
FSTERROR() << "ShortestPath: Weight needs to have the "
<< "path property and be distributive: " << Arc::Weight::Type();
ofst->SetProperties(kError, kError);
}
// Shortest-path algorithm: simplified interface. See above for a version that
// allows finer control. The output mutable FST contains the n-shortest paths
// in the input FST. The queue discipline is automatically selected. When unique
// is true, only paths with distinct input label sequences are returned.
//
// The n-shortest paths are the n-lowest weight paths w.r.t. the natural
// semiring order. The single path that can be read from the ith of at most n
// transitions leaving the initial state of the output FST is the ith best path.
// The weights need to be right distributive and have the path (kPath) property.
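//
// Example usage (a minimal sketch; assumes an input StdVectorFst "fst"):
//
//   StdVectorFst paths;
//   ShortestPath(fst, &paths, /*nshortest=*/5, /*unique=*/true);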
template <class Arc>
void ShortestPath(const Fst<Arc> &ifst, MutableFst<Arc> *ofst,
int32_t nshortest = 1, bool unique = false,
bool first_path = false,
typename Arc::Weight weight_threshold = Arc::Weight::Zero(),
typename Arc::StateId state_threshold = kNoStateId,
float delta = kShortestDelta) {
using StateId = typename Arc::StateId;
std::vector<typename Arc::Weight> distance;
AnyArcFilter<Arc> arc_filter;
AutoQueue<StateId> state_queue(ifst, &distance, arc_filter);
const ShortestPathOptions<Arc, AutoQueue<StateId>, AnyArcFilter<Arc>> opts(
&state_queue, arc_filter, nshortest, unique, false, delta, first_path,
weight_threshold, state_threshold);
ShortestPath(ifst, ofst, &distance, opts);
}
} // namespace fst
#endif // FST_SHORTEST_PATH_H_
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/taskcluster/test-python_38_tflite_16k-linux-amd64-prod-opt.yml | build:
template_file: test-linux-opt-base.tyml
dependencies:
- "linux-amd64-tflite-opt"
args:
tests_cmdline: "${system.homedir.linux}/DeepSpeech/ds/taskcluster/tc-python_tflite-tests-prod.sh 3.8.1: 16k"
workerType: "${docker.dsTests}"
metadata:
name: "DeepSpeech Linux AMD64 TFLite Python v3.8 prod tests (16kHz)"
description: "Testing DeepSpeech for Linux/AMD64 on Python v3.8 on prod model, TFLite, optimized version (16kHz)"
| 0 |
coqui_public_repos/STT-examples | coqui_public_repos/STT-examples/nodejs_wav/index.js | const STT = require('stt');
const Fs = require('fs');
const Sox = require('sox-stream');
const MemoryStream = require('memory-stream');
const Duplex = require('stream').Duplex;
const Wav = require('node-wav');
let modelPath = './models/model.tflite';
let model = new STT.Model(modelPath);
let desiredSampleRate = model.sampleRate();
let scorerPath = './models/huge-vocab.scorer';
model.enableExternalScorer(scorerPath);
let audioFile = process.argv[2] || './audio/2830-3980-0043.wav';
if (!Fs.existsSync(audioFile)) {
console.log('file missing:', audioFile);
process.exit();
}
const buffer = Fs.readFileSync(audioFile);
const result = Wav.decode(buffer);
if (result.sampleRate < desiredSampleRate) {
console.error('Warning: original sample rate (' + result.sampleRate + ') is lower than ' + desiredSampleRate + 'Hz. Up-sampling might produce erratic speech recognition.');
}
function bufferToStream(buffer) {
let stream = new Duplex();
stream.push(buffer);
stream.push(null);
return stream;
}
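// Re-encode the decoded WAV as 16-bit, mono, little-endian raw PCM at the
// model's expected sample rate before passing it to the STT engine.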
let audioStream = new MemoryStream();
bufferToStream(buffer).
pipe(Sox({
global: {
'no-dither': true,
},
output: {
bits: 16,
rate: desiredSampleRate,
channels: 1,
encoding: 'signed-integer',
endian: 'little',
compression: 0.0,
type: 'raw'
}
})).
pipe(audioStream);
audioStream.on('finish', () => {
let audioBuffer = audioStream.toBuffer();
const audioLength = (audioBuffer.length / 2) * (1 / desiredSampleRate);
console.log('audio length', audioLength);
let result = model.stt(audioBuffer);
console.log('result:', result);
});
| 0 |
coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core | coqui_public_repos/inference-engine/third_party/onnxruntime/include/onnxruntime/core/framework/tensor_shape.h | // Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#pragma once
#include <iosfwd>
#include <vector>
#include <algorithm>
#include <string>
#include <cstring>
#include "onnxruntime_config.h"
namespace onnxruntime {
#ifdef __GNUC__
#pragma GCC diagnostic push
#ifdef HAS_NULL_DEREFERENCE
#pragma GCC diagnostic ignored "-Wnull-dereference"
#endif
#endif
class TensorShape : private std::vector<int64_t> {
// TODO - Use a custom STL allocator to avoid heap allocations in the common case.
// We use negative numbers for unknown symbolic dimension. Each negative
// number represents a unique symbolic dimension.
// Private inheritance is used to prevent ambiguity of element versus dimension size
public:
TensorShape() = default;
TensorShape(const TensorShape& /*other*/) = default;
TensorShape& operator=(const TensorShape& /*other*/) = default;
TensorShape(TensorShape&& /*other*/) = default;
TensorShape& operator=(TensorShape&& /*other*/) = default;
TensorShape(const std::vector<int64_t>& dims) : std::vector<int64_t>(dims) {}
TensorShape(std::vector<int64_t>&& dims) : std::vector<int64_t>(std::move(dims)) {}
TensorShape(const std::initializer_list<int64_t>& dims) : std::vector<int64_t>(dims) {}
TensorShape(const int64_t* dimension_sizes, size_t dimension_count);
TensorShape(const std::vector<int64_t>& dims, size_t start, size_t end);
/**
Return the dimension specified by <idx>.
*/
const int64_t& operator[](size_t idx) const {
return std::vector<int64_t>::operator[](static_cast<int>(idx));
}
int64_t& operator[](size_t idx) {
return std::vector<int64_t>::operator[](static_cast<int>(idx));
}
bool operator==(const TensorShape& other) const noexcept {
auto thisVector = static_cast<const std::vector<int64_t>*>(this);
auto otherVector = static_cast<const std::vector<int64_t>*>(&other);
return *thisVector == *otherVector;
}
bool operator!=(const TensorShape& other) const noexcept {
return !(*this == other);
}
size_t NumDimensions() const noexcept {
return size();
}
/**
Copy dims into an array with given size
*/
void CopyDims(int64_t* dims, size_t num_dims) const {
memcpy(dims, data(), sizeof(value_type) * std::min(num_dims, NumDimensions()));
}
/**
Return underlying vector representation.
*/
const std::vector<int64_t>& GetDims() const { return *this; }
/**
* Return the total number of elements. Returns 1 for an empty (rank 0) TensorShape.
*
* May return -1
*/
int64_t Size() const;
/**
Return the total number of elements up to the specified dimension.
If the dimension interval is empty (dimension == 0), return 1.
@param dimension Return size up to this dimension. Value must be between 0 and this->NumDimensions(), inclusive.
*/
int64_t SizeToDimension(size_t dimension) const;
/**
Return the total number of elements from the specified dimension to the end of the tensor shape.
If the dimension interval is empty (dimension == this->NumDimensions()), return 1.
@param dimension Return size from this dimension to the end. Value must be between 0 and this->NumDimensions(),
inclusive.
*/
int64_t SizeFromDimension(size_t dimension) const;
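  /**
  Example: for a TensorShape of {2, 3, 4}, SizeToDimension(2) returns
  2 * 3 = 6 and SizeFromDimension(1) returns 3 * 4 = 12.
  */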
/**
Return a new TensorShape of the dimensions from dimstart to dimend.
*/
TensorShape Slice(size_t dimstart, size_t dimend) const;
/**
Return a new TensorShape of the dimensions from dimstart to end.
*/
TensorShape Slice(size_t dimstart) const;
/**
output dimensions nicely formatted
*/
std::string ToString() const;
/**
Calculate size between start and end.
Assumes start and end are between 0 and this->NumDimensions(), inclusive, and that
start < end.
*/
int64_t SizeHelper(size_t start, size_t end) const;
/**
empty shape or 1D shape (1) is regarded as scalar tensor
*/
bool IsScalar() const {
size_t len = size();
return len == 0 || (len == 1 && operator[](0) == 1);
}
static const TensorShape& ReinterpretBaseType(const std::vector<int64_t>& dimensions) {
static_assert(sizeof(TensorShape) == sizeof(std::vector<int64_t>), "Size of TensorShape prevents safe casting from vector");
return *static_cast<const TensorShape*>(&dimensions);
}
};
#ifdef __GNUC__
#pragma GCC diagnostic pop
#endif
// operator<< to nicely output to a stream
std::ostream& operator<<(std::ostream& out, const TensorShape& shape);
} // namespace onnxruntime
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src | coqui_public_repos/inference-engine/third_party/openfst-1.6.7/src/extensions/Makefile.am | if HAVE_COMPACT
compactdir = compact
endif
if HAVE_COMPRESS
compressdir = compress
endif
if HAVE_CONST
constdir = const
endif
if HAVE_FAR
fardir = far
endif
if HAVE_GRM
fardir = far
pdtdir = pdt
mpdtdir = mpdt
endif
if HAVE_LINEAR
lineardir = linear
endif
if HAVE_LOOKAHEAD
lookaheaddir = lookahead
endif
if HAVE_MPDT
pdtdir = pdt
mpdtdir = mpdt
endif
if HAVE_NGRAM
ngramdir = ngram
endif
if HAVE_PYTHON
fardir = far
pywrapfstdir = python
endif
if HAVE_PDT
pdtdir = pdt
endif
if HAVE_SPECIAL
specialdir = special
endif
SUBDIRS = $(compactdir) $(compressdir) $(constdir) $(fardir) $(lineardir) \
$(lookaheaddir) $(pdtdir) $(mpdtdir) $(ngramdir) $(pywrapfstdir) \
$(specialdir)
| 0 |
coqui_public_repos/STT/native_client/dotnet/STTWPF | coqui_public_repos/STT/native_client/dotnet/STTWPF/Properties/AssemblyInfo.cs | using System.Reflection;
using System.Resources;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Windows;
// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("STT.WPF")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("Coqui GmbH")]
[assembly: AssemblyProduct("STT.WPF.SingleFiles")]
[assembly: AssemblyCopyright("Copyright © 2018-2020 Mozilla, © 2021 Coqui GmbH")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]
// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]
//In order to begin building localizable applications, set
//<UICulture>CultureYouAreCodingWith</UICulture> in your .csproj file
//inside a <PropertyGroup>. For example, if you are using US English
//in your source files, set the <UICulture> to en-US. Then uncomment
//the NeutralResourceLanguage attribute below. Update the "en-US" in
//the line below to match the UICulture setting in the project file.
//[assembly: NeutralResourcesLanguage("en-US", UltimateResourceFallbackLocation.Satellite)]
[assembly: ThemeInfo(
ResourceDictionaryLocation.None, //where theme specific resource dictionaries are located
//(used if a resource is not found in the page,
// or application resource dictionaries)
ResourceDictionaryLocation.SourceAssembly //where the generic resource dictionary is located
//(used if a resource is not found in the page,
// app, or any theme specific resource dictionaries)
)]
// Version information for an assembly consists of the following four values:
//
// Major Version
// Minor Version
// Build Number
// Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst/symbol-table-ops.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#ifndef FST_SYMBOL_TABLE_OPS_H_
#define FST_SYMBOL_TABLE_OPS_H_
#include <string>
#include <unordered_set>
#include <vector>
#include <fst/fst.h>
#include <fst/symbol-table.h>
namespace fst {
// Returns a minimal symbol table containing only symbols referenced by the
// passed fst. Symbols preserve their original numbering, so fst does not
// require relabeling.
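//
// Example usage (a minimal sketch; assumes fst carries input symbols):
//
//   std::unique_ptr<SymbolTable> pruned(
//       PruneSymbolTable(fst, *fst.InputSymbols(), /*input=*/true));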
template <class Arc>
SymbolTable *PruneSymbolTable(const Fst<Arc> &fst, const SymbolTable &syms,
bool input) {
std::unordered_set<typename Arc::Label> seen;
seen.insert(0); // Always keep epsilon.
for (StateIterator<Fst<Arc>> siter(fst); !siter.Done(); siter.Next()) {
for (ArcIterator<Fst<Arc>> aiter(fst, siter.Value()); !aiter.Done();
aiter.Next()) {
const auto sym = (input) ? aiter.Value().ilabel : aiter.Value().olabel;
seen.insert(sym);
}
}
auto *pruned = new SymbolTable(syms.Name() + "_pruned");
for (SymbolTableIterator stiter(syms); !stiter.Done(); stiter.Next()) {
const auto label = stiter.Value();
if (seen.count(label)) pruned->AddSymbol(stiter.Symbol(), label);
}
return pruned;
}
// Relabels a symbol table to make it a contiguous mapping.
SymbolTable *CompactSymbolTable(const SymbolTable &syms);
// Merges two SymbolTables: all symbols from left are merged into right and
// keep their original ids. Symbols in right whose ids conflict with ids used
// by the left SymbolTable are assigned new ids.
// The returned symbol table will never modify symbol assignments from the left
// side, but may do so on the right. If right_relabel_output is non-NULL, it
// will be assigned true if the symbols from the right table needed to be
// reassigned.
// A potential use case is to Compose two Fst's that have different symbol
// tables. You can reconcile them in the following way:
// Fst<Arc> a, b;
// bool relabel;
// std::unique_ptr<SymbolTable> bnew(MergeSymbolTable(a.OutputSymbols(),
// b.InputSymbols(), &relabel);
// if (relabel) {
// Relabel(b, bnew.get(), nullptr);
// }
// b.SetInputSymbols(bnew);
SymbolTable *MergeSymbolTable(const SymbolTable &left, const SymbolTable &right,
bool *right_relabel_output = nullptr);
// Read the symbol table from any Fst::Read()able file, without loading the
// corresponding Fst. Returns nullptr if the Fst does not contain a symbol
// table or the symbol table cannot be read.
SymbolTable *FstReadSymbols(const string &filename, bool input);
// Adds a contiguous range of symbols to a symbol table using a simple prefix
// for the string, returning false if the inserted symbol string clashes with
// any currently present.
bool AddAuxiliarySymbols(const string &prefix, int64_t start_label,
int64_t nlabels, SymbolTable *syms);
} // namespace fst
#endif // FST_SYMBOL_TABLE_OPS_H_
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions/compact/compact16_weighted_string-fst.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/fst.h>
#include <fst/compact-fst.h>
namespace fst {
static FstRegisterer<
CompactWeightedStringFst<StdArc, uint16>>
CompactWeightedStringFst_StdArc_uint16_registerer;
static FstRegisterer<
CompactWeightedStringFst<LogArc, uint16>>
CompactWeightedStringFst_LogArc_uint16_registerer;
} // namespace fst
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst/script/shortest-path.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#ifndef FST_SCRIPT_SHORTEST_PATH_H_
#define FST_SCRIPT_SHORTEST_PATH_H_
#include <memory>
#include <vector>
#include <fst/shortest-path.h>
#include <fst/script/fst-class.h>
#include <fst/script/shortest-distance.h>
#include <fst/script/weight-class.h>
namespace fst {
namespace script {
// Slightly simplified interface: `has_distance` and `first_path` are disabled.
struct ShortestPathOptions : public ShortestDistanceOptions {
const int32_t nshortest;
const bool unique;
const WeightClass &weight_threshold;
const int64_t state_threshold;
ShortestPathOptions(QueueType queue_type, int32_t nshortest, bool unique,
float delta, const WeightClass &weight_threshold,
int64_t state_threshold = kNoStateId)
: ShortestDistanceOptions(queue_type, ANY_ARC_FILTER, kNoStateId, delta),
nshortest(nshortest),
unique(unique),
weight_threshold(weight_threshold),
state_threshold(state_threshold) {}
};
namespace internal {
// Code to implement switching on queue types.
template <class Arc, class Queue>
void ShortestPath(const Fst<Arc> &ifst, MutableFst<Arc> *ofst,
std::vector<typename Arc::Weight> *distance,
const ShortestPathOptions &opts) {
using ArcFilter = AnyArcFilter<Arc>;
using Weight = typename Arc::Weight;
const std::unique_ptr<Queue> queue(
QueueConstructor<Arc, Queue, ArcFilter>::Construct(ifst, distance));
const fst::ShortestPathOptions<Arc, Queue, ArcFilter> sopts(
queue.get(), ArcFilter(), opts.nshortest, opts.unique,
/* has_distance=*/false, opts.delta, /* first_path=*/false,
*opts.weight_threshold.GetWeight<Weight>(), opts.state_threshold);
ShortestPath(ifst, ofst, distance, sopts);
}
template <class Arc>
void ShortestPath(const Fst<Arc> &ifst, MutableFst<Arc> *ofst,
const ShortestPathOptions &opts) {
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
std::vector<Weight> distance;
switch (opts.queue_type) {
case AUTO_QUEUE: {
ShortestPath<Arc, AutoQueue<StateId>>(ifst, ofst, &distance, opts);
return;
}
case FIFO_QUEUE: {
ShortestPath<Arc, FifoQueue<StateId>>(ifst, ofst, &distance, opts);
return;
}
case LIFO_QUEUE: {
ShortestPath<Arc, LifoQueue<StateId>>(ifst, ofst, &distance, opts);
return;
}
case SHORTEST_FIRST_QUEUE: {
ShortestPath<Arc, NaturalShortestFirstQueue<StateId, Weight>>(ifst, ofst,
&distance,
opts);
return;
}
case STATE_ORDER_QUEUE: {
ShortestPath<Arc, StateOrderQueue<StateId>>(ifst, ofst, &distance, opts);
return;
}
case TOP_ORDER_QUEUE: {
ShortestPath<Arc, TopOrderQueue<StateId>>(ifst, ofst, &distance, opts);
return;
}
default: {
FSTERROR() << "ShortestPath: Unknown queue type: "
<< opts.queue_type;
ofst->SetProperties(kError, kError);
return;
}
}
}
} // namespace internal
using ShortestPathArgs = std::tuple<const FstClass &, MutableFstClass *,
const ShortestPathOptions &>;
template <class Arc>
void ShortestPath(ShortestPathArgs *args) {
const Fst<Arc> &ifst = *(std::get<0>(*args).GetFst<Arc>());
MutableFst<Arc> *ofst = std::get<1>(*args)->GetMutableFst<Arc>();
const ShortestPathOptions &opts = std::get<2>(*args);
internal::ShortestPath(ifst, ofst, opts);
}
void ShortestPath(const FstClass &ifst, MutableFstClass *ofst,
const ShortestPathOptions &opts);
} // namespace script
} // namespace fst
#endif // FST_SCRIPT_SHORTEST_PATH_H_
| 0 |
coqui_public_repos/open-bible-scripts | coqui_public_repos/open-bible-scripts/extra-preprocess/chichewa.sh | #!/bin/bash
echo "I: extra pre-processing for Chichewa to fix filenames."
mv 01_Genesis GEN; rename 's/01_Genesis/GEN/' GEN/*.wav;
mv 02_Exodus EXO; rename 's/02_Exodus/EXO/' EXO/*.wav;
mv 03_Leviticus LEV; rename 's/03_Leviticus/LEV/' LEV/*.wav;
mv 04_Numbers NUM; rename 's/04_Numbers/NUM/' NUM/*.wav;
mv 05_Deuteronomy DEU; rename 's/05_Deuteronomy/DEU/' DEU/*.wav;
mv 06_Joshua JOS; rename 's/06_Joshua/JOS/' JOS/*.wav;
mv 07_Judges JDG; rename 's/07_Judges/JDG/' JDG/*.wav;
mv 08_Ruth RUT; rename 's/08_Ruth/RUT/' RUT/*.wav;
mv 09_1\ Samuel 1SA; rename 's/09_1\ Samuel/1SA/' 1SA/*.wav;
mv 10_2\ Samuel 2SA; rename 's/10_2\ Samuel/2SA/' 2SA/*.wav;
mv 11_1\ Kings 1KI; rename 's/11_1\ Kings/1KI/' 1KI/*.wav;
mv 12_2\ Kings 2KI; rename 's/12_2\ Kings/2KI/' 2KI/*.wav;
mv 13_1\ Chronicles 1CH; rename 's/13_1\ Chronicles/1CH/' 1CH/*.wav;
mv 14_2\ Chronicles 2CH; rename 's/14_2\ Chronicles/2CH/' 2CH/*.wav;
mv 15_Ezra EZR; rename 's/15_Ezra/EZR/' EZR/*.wav;
mv 16_Nehemiah NEH; rename 's/16_Nehemiah/NEH/' NEH/*.wav;
mv 17_Esther EST; rename 's/17_Esther/EST/' EST/*.wav;
mv 18_Job JOB; rename 's/18_Job/JOB/' JOB/*.wav;
mv 19_Psalms PSA; rename 's/19_Psalms/PSA/' PSA/*.wav;
mv 20_Proverbs PRO; rename 's/20_Proverbs/PRO/' PRO/*.wav;
mv 21_Ecclesiastes ECC; rename 's/21_Ecclesiastes/ECC/' ECC/*.wav;
mv 22_Song\ Of\ Songs SOS; rename 's/22_Song\ Of\ Songs/SOS/' SOS/*.wav;
mv 23_Isaiah ISA; rename 's/23_Isaiah/ISA/' ISA/*.wav;
mv 24_Jeremiah JER; rename 's/24_Jeremiah/JER/' JER/*.wav;
mv 25_Lamentations LAM; rename 's/25_Lamentations/LAM/' LAM/*.wav;
mv 26_Ezekiel EZK; rename 's/26_Ezekiel/EZK/' EZK/*.wav;
mv 27_Daniel DAN; rename 's/27_Daniel/DAN/' DAN/*.wav;
mv 28_Hosea HOS; rename 's/28_Hosea/HOS/' HOS/*.wav;
mv 29_Joel JOL; rename 's/29_Joel/JOL/' JOL/*.wav;
mv 30_Amos AMO; rename 's/30_Amos/AMO/' AMO/*.wav;
mv 31_Obadiah OBA; rename 's/31_Obadiah/OBA/' OBA/*.wav;
mv 32_Jonah JON; rename 's/32_Jonah/JON/' JON/*.wav;
mv 33_Micah MIC; rename 's/33_Micah/MIC/' MIC/*.wav;
mv 34_Nahum NAH; rename 's/34_Nahum/NAH/' NAH/*.wav;
mv 35_Habakkuk HAB; rename 's/35_Habakkuk/HAB/' HAB/*.wav;
mv 36_Zephaniah ZEP; rename 's/36_Zephaniah/ZEP/' ZEP/*.wav;
mv 37_Haggai HAG; rename 's/37_Haggai/HAG/' HAG/*.wav;
mv 38_Zechariah ZEC; rename 's/38_Zechariah/ZEC/' ZEC/*.wav;
mv 39_Malachi MAL; rename 's/39_Malachi/MAL/' MAL/*.wav;
mv 40_Matthew MAT; rename 's/40_Matthew/MAT/' MAT/*.wav;
mv 41_Mark MRK; rename 's/41_Mark/MRK/' MRK/*.wav;
mv 42_Luke LUK; rename 's/42_Luke/LUK/' LUK/*.wav;
mv 43_John JHN; rename 's/43_John/JHN/' JHN/*.wav;
mv 44_Acts ACT; rename 's/44_Acts/ACT/' ACT/*.wav;
mv 45_Romans ROM; rename 's/45_Romans/ROM/' ROM/*.wav;
mv 46_1\ Corinthians 1CO; rename 's/46_1\ Corinthians/1CO/' 1CO/*.wav;
mv 47_2\ Corinthians 2CO; rename 's/47_2\ Corinthians/2CO/' 2CO/*.wav;
mv 48_Galatians GAL; rename 's/48_Galatians/GAL/' GAL/*.wav;
mv 49_Ephesians EPH; rename 's/49_Ephesians/EPH/' EPH/*.wav;
mv 50_Philippians PHP; rename 's/50_Philippians/PHP/' PHP/*.wav;
mv 51_Colossians COL; rename 's/51_Colossians/COL/' COL/*.wav;
mv 52_1\ Thessalonians 1TH; rename 's/52_1\ Thessalonians/1TH/' 1TH/*.wav;
mv 53_2\ Thessalonians 2TH; rename 's/53_2\ Thessalonians/2TH/' 2TH/*.wav;
mv 54_1\ Timothy 1TI; rename 's/54_1\ Timothy/1TI/' 1TI/*.wav;
mv 55_2\ Timothy 2TI; rename 's/55_2\ Timothy/2TI/' 2TI/*.wav;
mv 56_Titus TIT; rename 's/56_Titus/TIT/' TIT/*.wav;
mv 57_Philemon PHM; rename 's/57_Philemon/PHM/' PHM/*.wav;
mv 58_Hebrews HEB; rename 's/58_Hebrews/HEB/' HEB/*.wav;
mv 59_James JAS; rename 's/59_James/JAS/' JAS/*.wav;
mv 60_1\ Peter 1PE; rename 's/60_1\ Peter/1PE/' 1PE/*.wav;
mv 61_2\ Peter 2PE; rename 's/61_2\ Peter/2PE/' 2PE/*.wav;
mv 62_1\ John 1JN; rename 's/62_1\ John/1JN/' 1JN/*.wav;
mv 63_2\ John 2JN; rename 's/63_2\ John/2JN/' 2JN/*.wav;
mv 64_3\ John 3JN; rename 's/64_3\ John/3JN/' 3JN/*.wav;
mv 65_Jude JUD; rename 's/65_Jude/JUD/' JUD/*.wav;
mv 66_Revelation REV; rename 's/66_Revelation/REV/' REV/*.wav;
rename 's/ /_/' */*.wav;
rename 's/_V[0-9]//' */*.wav;
rename 's/ //' PSA/PSA_122\ .wav
rename 's/_0+/_/' */*.wav
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/taskcluster/tc-true.sh | #!/bin/sh
true
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/taskcluster/docs.tyml | $if: 'event.event in build.allowed'
then:
taskId: ${taskcluster.taskId}
provisionerId: ${taskcluster.docker.provisionerId}
workerType: ${build.workerType}
taskGroupId: ${taskcluster.taskGroupId}
schedulerId: ${taskcluster.schedulerId}
dependencies:
$map: { $eval: build.dependencies }
each(b):
$eval: as_slugid(b)
created: { $fromNow: '0 sec' }
deadline: { $fromNow: '1 day' }
expires:
$if: '(event.event == "push") || (event.event == "tag")'
then: { $fromNow: '6 months' }
else: { $fromNow: '7 days' }
extra:
nc_asset_name: { $eval: build.nc_asset_name }
routes:
$if: '(event.event == "push") || (event.event == "tag")'
then:
{ $eval: build.routes }
payload:
maxRunTime: { $eval: to_int(build.maxRunTime) }
image: "ubuntu:18.04"
command:
- "/bin/bash"
- "--login"
- "-cxe"
- $let:
extraSystemSetup: { $eval: strip(str(build.system_setup)) }
extraSystemConfig: { $eval: strip(str(build.system_config)) }
in: >
apt-get -qq update && apt-get -qq -y install git wget gnupg sudo && ${extraSystemSetup} &&
adduser --system --home ${system.homedir.linux} ${system.username} &&
cd ${system.homedir.linux}/ &&
echo -e "#!/bin/bash\nset -xe\n env && id && git clone --quiet ${event.head.repo.url} ~/DeepSpeech/ds/ && cd ~/DeepSpeech/ds && git checkout --quiet ${event.head.sha}" > /tmp/clone.sh && chmod +x /tmp/clone.sh &&
sudo -H -u ${system.username} /bin/bash /tmp/clone.sh && ${extraSystemConfig} &&
sudo -H -u ${system.username} --preserve-env /bin/bash ${system.homedir.linux}/DeepSpeech/ds/${build.scripts.build} &&
sudo -H -u ${system.username} /bin/bash ${system.homedir.linux}/DeepSpeech/ds/${build.scripts.package}
artifacts:
"public":
type: "directory"
path: "/tmp/artifacts/"
expires:
$if: '(event.event == "push") || (event.event == "tag")'
then: { $fromNow: '6 months' }
else: { $fromNow: '7 days' }
metadata:
name: ${build.metadata.name}
description: ${build.metadata.description}
owner: ${event.head.user.email}
source: ${event.head.repo.url}
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/script/map.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/script/fst-class.h>
#include <fst/script/map.h>
#include <fst/script/script-impl.h>
namespace fst {
namespace script {
FstClass *Map(const FstClass &ifst, MapType map_type, float delta, double power,
const WeightClass &weight) {
if (!ifst.WeightTypesMatch(weight, "Map")) return nullptr;
MapInnerArgs iargs(ifst, map_type, delta, power, weight);
MapArgs args(iargs);
Apply<Operation<MapArgs>>("Map", ifst.ArcType(), &args);
return args.retval;
}
REGISTER_FST_OPERATION(Map, StdArc, MapArgs);
REGISTER_FST_OPERATION(Map, LogArc, MapArgs);
REGISTER_FST_OPERATION(Map, Log64Arc, MapArgs);
} // namespace script
} // namespace fst
| 0 |
coqui_public_repos/STT/native_client/kenlm/util | coqui_public_repos/STT/native_client/kenlm/util/double-conversion/fixed-dtoa.cc | // Copyright 2010 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following
// disclaimer in the documentation and/or other materials provided
// with the distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived
// from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include <cmath>
#include "fixed-dtoa.h"
#include "ieee.h"
namespace kenlm_double_conversion {
// Represents a 128bit type. This class should be replaced by a native type on
// platforms that support 128bit integers.
class UInt128 {
public:
UInt128() : high_bits_(0), low_bits_(0) { }
UInt128(uint64_t high, uint64_t low) : high_bits_(high), low_bits_(low) { }
void Multiply(uint32_t multiplicand) {
uint64_t accumulator;
accumulator = (low_bits_ & kMask32) * multiplicand;
uint32_t part = static_cast<uint32_t>(accumulator & kMask32);
accumulator >>= 32;
accumulator = accumulator + (low_bits_ >> 32) * multiplicand;
low_bits_ = (accumulator << 32) + part;
accumulator >>= 32;
accumulator = accumulator + (high_bits_ & kMask32) * multiplicand;
part = static_cast<uint32_t>(accumulator & kMask32);
accumulator >>= 32;
accumulator = accumulator + (high_bits_ >> 32) * multiplicand;
high_bits_ = (accumulator << 32) + part;
DOUBLE_CONVERSION_ASSERT((accumulator >> 32) == 0);
}
void Shift(int shift_amount) {
DOUBLE_CONVERSION_ASSERT(-64 <= shift_amount && shift_amount <= 64);
if (shift_amount == 0) {
return;
} else if (shift_amount == -64) {
high_bits_ = low_bits_;
low_bits_ = 0;
} else if (shift_amount == 64) {
low_bits_ = high_bits_;
high_bits_ = 0;
} else if (shift_amount <= 0) {
high_bits_ <<= -shift_amount;
high_bits_ += low_bits_ >> (64 + shift_amount);
low_bits_ <<= -shift_amount;
} else {
low_bits_ >>= shift_amount;
low_bits_ += high_bits_ << (64 - shift_amount);
high_bits_ >>= shift_amount;
}
}
// Modifies *this to *this MOD (2^power).
// Returns *this DIV (2^power).
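  // For example, with *this == 13 and power == 2, this returns 3 (13 DIV 4)
  // and leaves *this == 1 (13 MOD 4).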
int DivModPowerOf2(int power) {
if (power >= 64) {
int result = static_cast<int>(high_bits_ >> (power - 64));
high_bits_ -= static_cast<uint64_t>(result) << (power - 64);
return result;
} else {
uint64_t part_low = low_bits_ >> power;
uint64_t part_high = high_bits_ << (64 - power);
int result = static_cast<int>(part_low + part_high);
high_bits_ = 0;
low_bits_ -= part_low << power;
return result;
}
}
bool IsZero() const {
return high_bits_ == 0 && low_bits_ == 0;
}
int BitAt(int position) const {
if (position >= 64) {
return static_cast<int>(high_bits_ >> (position - 64)) & 1;
} else {
return static_cast<int>(low_bits_ >> position) & 1;
}
}
private:
static const uint64_t kMask32 = 0xFFFFFFFF;
// Value == (high_bits_ << 64) + low_bits_
uint64_t high_bits_;
uint64_t low_bits_;
};
static const int kDoubleSignificandSize = 53; // Includes the hidden bit.
static void FillDigits32FixedLength(uint32_t number, int requested_length,
Vector<char> buffer, int* length) {
for (int i = requested_length - 1; i >= 0; --i) {
buffer[(*length) + i] = '0' + number % 10;
number /= 10;
}
*length += requested_length;
}
static void FillDigits32(uint32_t number, Vector<char> buffer, int* length) {
int number_length = 0;
// We fill the digits in reverse order and exchange them afterwards.
while (number != 0) {
int digit = number % 10;
number /= 10;
buffer[(*length) + number_length] = static_cast<char>('0' + digit);
number_length++;
}
// Exchange the digits.
int i = *length;
int j = *length + number_length - 1;
while (i < j) {
char tmp = buffer[i];
buffer[i] = buffer[j];
buffer[j] = tmp;
i++;
j--;
}
*length += number_length;
}
static void FillDigits64FixedLength(uint64_t number,
Vector<char> buffer, int* length) {
const uint32_t kTen7 = 10000000;
// For efficiency cut the number into 3 uint32_t parts, and print those.
uint32_t part2 = static_cast<uint32_t>(number % kTen7);
number /= kTen7;
uint32_t part1 = static_cast<uint32_t>(number % kTen7);
uint32_t part0 = static_cast<uint32_t>(number / kTen7);
FillDigits32FixedLength(part0, 3, buffer, length);
FillDigits32FixedLength(part1, 7, buffer, length);
FillDigits32FixedLength(part2, 7, buffer, length);
}
static void FillDigits64(uint64_t number, Vector<char> buffer, int* length) {
const uint32_t kTen7 = 10000000;
// For efficiency cut the number into 3 uint32_t parts, and print those.
uint32_t part2 = static_cast<uint32_t>(number % kTen7);
number /= kTen7;
uint32_t part1 = static_cast<uint32_t>(number % kTen7);
uint32_t part0 = static_cast<uint32_t>(number / kTen7);
if (part0 != 0) {
FillDigits32(part0, buffer, length);
FillDigits32FixedLength(part1, 7, buffer, length);
FillDigits32FixedLength(part2, 7, buffer, length);
} else if (part1 != 0) {
FillDigits32(part1, buffer, length);
FillDigits32FixedLength(part2, 7, buffer, length);
} else {
FillDigits32(part2, buffer, length);
}
}
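// Rounds the buffer upwards. For example, a buffer holding "999" becomes
// "100" and the decimal point is moved one position to the right.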
static void RoundUp(Vector<char> buffer, int* length, int* decimal_point) {
// An empty buffer represents 0.
if (*length == 0) {
buffer[0] = '1';
*decimal_point = 1;
*length = 1;
return;
}
  // Increment the last digit and propagate the carry until we either find a
  // digit that is not '9' or reach the first digit.
buffer[(*length) - 1]++;
for (int i = (*length) - 1; i > 0; --i) {
if (buffer[i] != '0' + 10) {
return;
}
buffer[i] = '0';
buffer[i - 1]++;
}
// If the first digit is now '0' + 10, we would need to set it to '0' and add
// a '1' in front. However we reach the first digit only if all following
// digits had been '9' before rounding up. Now all trailing digits are '0' and
// we simply switch the first digit to '1' and update the decimal-point
// (indicating that the point is now one digit to the right).
if (buffer[0] == '0' + 10) {
buffer[0] = '1';
(*decimal_point)++;
}
}
// The given fractionals number represents a fixed-point number with binary
// point at bit (-exponent).
// Preconditions:
// -128 <= exponent <= 0.
// 0 <= fractionals * 2^exponent < 1
// The buffer holds the result.
// The function will round its result. During the rounding-process digits not
// generated by this function might be updated, and the decimal-point variable
// might be updated. If this function generates the digits 99 and the buffer
// already contained "199" (thus yielding a buffer of "19999") then a
// rounding-up will change the contents of the buffer to "20000".
static void FillFractionals(uint64_t fractionals, int exponent,
int fractional_count, Vector<char> buffer,
int* length, int* decimal_point) {
DOUBLE_CONVERSION_ASSERT(-128 <= exponent && exponent <= 0);
// 'fractionals' is a fixed-point number, with binary point at bit
// (-exponent). Inside the function the non-converted remainder of fractionals
// is a fixed-point number, with binary point at bit 'point'.
if (-exponent <= 64) {
// One 64 bit number is sufficient.
DOUBLE_CONVERSION_ASSERT(fractionals >> 56 == 0);
int point = -exponent;
for (int i = 0; i < fractional_count; ++i) {
if (fractionals == 0) break;
// Instead of multiplying by 10 we multiply by 5 and adjust the point
// location. This way the fractionals variable will not overflow.
// Invariant at the beginning of the loop: fractionals < 2^point.
// Initially we have: point <= 64 and fractionals < 2^56
// After each iteration the point is decremented by one.
// Note that 5^3 = 125 < 128 = 2^7.
// Therefore three iterations of this loop will not overflow fractionals
// (even without the subtraction at the end of the loop body). At this
// time point will satisfy point <= 61 and therefore fractionals < 2^point
// and any further multiplication of fractionals by 5 will not overflow.
fractionals *= 5;
point--;
int digit = static_cast<int>(fractionals >> point);
DOUBLE_CONVERSION_ASSERT(digit <= 9);
buffer[*length] = static_cast<char>('0' + digit);
(*length)++;
fractionals -= static_cast<uint64_t>(digit) << point;
}
// If the first bit after the point is set we have to round up.
DOUBLE_CONVERSION_ASSERT(fractionals == 0 || point - 1 >= 0);
if ((fractionals != 0) && ((fractionals >> (point - 1)) & 1) == 1) {
RoundUp(buffer, length, decimal_point);
}
} else { // We need 128 bits.
DOUBLE_CONVERSION_ASSERT(64 < -exponent && -exponent <= 128);
UInt128 fractionals128 = UInt128(fractionals, 0);
fractionals128.Shift(-exponent - 64);
int point = 128;
for (int i = 0; i < fractional_count; ++i) {
if (fractionals128.IsZero()) break;
// As before: instead of multiplying by 10 we multiply by 5 and adjust the
// point location.
// This multiplication will not overflow for the same reasons as before.
fractionals128.Multiply(5);
point--;
int digit = fractionals128.DivModPowerOf2(point);
DOUBLE_CONVERSION_ASSERT(digit <= 9);
buffer[*length] = static_cast<char>('0' + digit);
(*length)++;
}
if (fractionals128.BitAt(point - 1) == 1) {
RoundUp(buffer, length, decimal_point);
}
}
}
// Removes leading and trailing zeros.
// If leading zeros are removed then the decimal point position is adjusted.
static void TrimZeros(Vector<char> buffer, int* length, int* decimal_point) {
while (*length > 0 && buffer[(*length) - 1] == '0') {
(*length)--;
}
int first_non_zero = 0;
while (first_non_zero < *length && buffer[first_non_zero] == '0') {
first_non_zero++;
}
if (first_non_zero != 0) {
for (int i = first_non_zero; i < *length; ++i) {
buffer[i - first_non_zero] = buffer[i];
}
*length -= first_non_zero;
*decimal_point -= first_non_zero;
}
}
bool FastFixedDtoa(double v,
int fractional_count,
Vector<char> buffer,
int* length,
int* decimal_point) {
const uint32_t kMaxUInt32 = 0xFFFFFFFF;
uint64_t significand = Double(v).Significand();
int exponent = Double(v).Exponent();
// v = significand * 2^exponent (with significand a 53bit integer).
// If the exponent is larger than 20 (i.e. we may have a 73bit number) then we
// don't know how to compute the representation. 2^73 ~= 9.5*10^21.
// If necessary this limit could probably be increased, but we don't need
// more.
if (exponent > 20) return false;
if (fractional_count > 20) return false;
*length = 0;
// At most kDoubleSignificandSize bits of the significand are non-zero.
// Given a 64 bit integer we have 11 0s followed by 53 potentially non-zero
// bits: 0..11*..0xxx..53*..xx
if (exponent + kDoubleSignificandSize > 64) {
// The exponent must be > 11.
//
// We know that v = significand * 2^exponent.
// And the exponent > 11.
// We simplify the task by dividing v by 10^17.
// The quotient delivers the first digits, and the remainder fits into a 64
// bit number.
// Dividing by 10^17 is equivalent to dividing by 5^17*2^17.
const uint64_t kFive17 = DOUBLE_CONVERSION_UINT64_2PART_C(0xB1, A2BC2EC5); // 5^17
uint64_t divisor = kFive17;
int divisor_power = 17;
uint64_t dividend = significand;
uint32_t quotient;
uint64_t remainder;
// Let v = f * 2^e with f == significand and e == exponent.
// Then need q (quotient) and r (remainder) as follows:
// v = q * 10^17 + r
// f * 2^e = q * 10^17 + r
// f * 2^e = q * 5^17 * 2^17 + r
// If e > 17 then
// f * 2^(e-17) = q * 5^17 + r/2^17
// else
// f = q * 5^17 * 2^(17-e) + r/2^e
if (exponent > divisor_power) {
      // We only allow exponents of up to 20 and therefore (e - 17) <= 3.
dividend <<= exponent - divisor_power;
quotient = static_cast<uint32_t>(dividend / divisor);
remainder = (dividend % divisor) << divisor_power;
} else {
divisor <<= divisor_power - exponent;
quotient = static_cast<uint32_t>(dividend / divisor);
remainder = (dividend % divisor) << exponent;
}
FillDigits32(quotient, buffer, length);
FillDigits64FixedLength(remainder, buffer, length);
*decimal_point = *length;
} else if (exponent >= 0) {
// 0 <= exponent <= 11
significand <<= exponent;
FillDigits64(significand, buffer, length);
*decimal_point = *length;
} else if (exponent > -kDoubleSignificandSize) {
// We have to cut the number.
uint64_t integrals = significand >> -exponent;
uint64_t fractionals = significand - (integrals << -exponent);
if (integrals > kMaxUInt32) {
FillDigits64(integrals, buffer, length);
} else {
FillDigits32(static_cast<uint32_t>(integrals), buffer, length);
}
*decimal_point = *length;
FillFractionals(fractionals, exponent, fractional_count,
buffer, length, decimal_point);
} else if (exponent < -128) {
// This configuration (with at most 20 digits) means that all digits must be
// 0.
DOUBLE_CONVERSION_ASSERT(fractional_count <= 20);
buffer[0] = '\0';
*length = 0;
*decimal_point = -fractional_count;
} else {
*decimal_point = 0;
FillFractionals(significand, exponent, fractional_count,
buffer, length, decimal_point);
}
TrimZeros(buffer, length, decimal_point);
buffer[*length] = '\0';
if ((*length) == 0) {
// The string is empty and the decimal_point thus has no importance. Mimic
// Gay's dtoa and set it to -fractional_count.
*decimal_point = -fractional_count;
}
return true;
}
} // namespace kenlm_double_conversion
| 0 |
coqui_public_repos/STT-examples/web_microphone_websocket | coqui_public_repos/STT-examples/web_microphone_websocket/src/App.js | import React, {Component} from 'react';
import io from 'socket.io-client';
const DOWNSAMPLING_WORKER = './downsampling_worker.js';
class App extends Component {
constructor(props) {
super(props);
this.state = {
connected: false,
recording: false,
recordingStart: 0,
recordingTime: 0,
recognitionOutput: []
};
}
componentDidMount() {
let recognitionCount = 0;
this.socket = io.connect('http://localhost:4000', {});
this.socket.on('connect', () => {
console.log('socket connected');
this.setState({connected: true});
});
this.socket.on('disconnect', () => {
console.log('socket disconnected');
this.setState({connected: false});
this.stopRecording();
});
this.socket.on('recognize', (results) => {
console.log('recognized:', results);
const {recognitionOutput} = this.state;
results.id = recognitionCount++;
recognitionOutput.unshift(results);
this.setState({recognitionOutput});
});
}
render() {
return (<div className="App">
<div>
<button disabled={!this.state.connected || this.state.recording} onClick={this.startRecording}>
Start Recording
</button>
<button disabled={!this.state.recording} onClick={this.stopRecording}>
Stop Recording
</button>
{this.renderTime()}
</div>
{this.renderRecognitionOutput()}
</div>);
}
renderTime() {
return (<span>
{(Math.round(this.state.recordingTime / 100) / 10).toFixed(1)}s
</span>);
}
renderRecognitionOutput() {
return (<ul>
{this.state.recognitionOutput.map((r) => {
return (<li key={r.id}>{r.text}</li>);
})}
</ul>)
}
createAudioProcessor(audioContext, audioSource) {
let processor = audioContext.createScriptProcessor(4096, 1, 1);
const sampleRate = audioSource.context.sampleRate;
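		// The worker resamples microphone frames from the AudioContext's native
		// rate down to the rate the server-side model expects; each resampled
		// buffer is streamed to the server over the socket.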
let downsampler = new Worker(DOWNSAMPLING_WORKER);
downsampler.postMessage({command: "init", inputSampleRate: sampleRate});
downsampler.onmessage = (e) => {
if (this.socket.connected) {
this.socket.emit('stream-data', e.data.buffer);
}
};
processor.onaudioprocess = (event) => {
			const data = event.inputBuffer.getChannelData(0);
downsampler.postMessage({command: "process", inputFrame: data});
};
processor.shutdown = () => {
processor.disconnect();
this.onaudioprocess = null;
};
processor.connect(audioContext.destination);
return processor;
}
startRecording = e => {
if (!this.state.recording) {
this.recordingInterval = setInterval(() => {
let recordingTime = new Date().getTime() - this.state.recordingStart;
this.setState({recordingTime});
}, 100);
this.setState({
recording: true,
recordingStart: new Date().getTime(),
recordingTime: 0
}, () => {
this.startMicrophone();
});
}
};
startMicrophone() {
this.audioContext = new AudioContext();
const success = (stream) => {
console.log('started recording');
this.mediaStream = stream;
this.mediaStreamSource = this.audioContext.createMediaStreamSource(stream);
this.processor = this.createAudioProcessor(this.audioContext, this.mediaStreamSource);
this.mediaStreamSource.connect(this.processor);
};
const fail = (e) => {
console.error('recording failure', e);
};
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
navigator.mediaDevices.getUserMedia({
video: false,
audio: true
})
.then(success)
.catch(fail);
}
else {
navigator.getUserMedia({
video: false,
audio: true
}, success, fail);
}
}
stopRecording = e => {
if (this.state.recording) {
if (this.socket.connected) {
this.socket.emit('stream-reset');
}
clearInterval(this.recordingInterval);
this.setState({
recording: false
}, () => {
this.stopMicrophone();
});
}
};
stopMicrophone() {
if (this.mediaStream) {
this.mediaStream.getTracks()[0].stop();
}
if (this.mediaStreamSource) {
this.mediaStreamSource.disconnect();
}
if (this.processor) {
this.processor.shutdown();
}
if (this.audioContext) {
this.audioContext.close();
}
}
}
export default App;
| 0 |
coqui_public_repos/coqpit | coqui_public_repos/coqpit/tests/test_merge_configs.py | from dataclasses import dataclass
from coqpit.coqpit import Coqpit
@dataclass
class CoqpitA(Coqpit):
val_a: int = 10
    val_b: Optional[int] = None
val_c: str = "Coqpit is great!"
val_same: float = 10.21
@dataclass
class CoqpitB(Coqpit):
val_e: int = 257
val_f: float = -10.21
val_g: str = "Coqpit is really great!"
    val_same: int = 25  # duplicate fields are overridden by the merged-in Coqpit.
@dataclass
class Reference(Coqpit):
val_a: int = 10
    val_b: Optional[int] = None
val_c: str = "Coqpit is great!"
val_e: int = 257
val_f: float = -10.21
val_g: str = "Coqpit is really great!"
    val_same: float = 10.21  # duplicate fields are overridden by the merged-in Coqpit.
def test_config_merge():
coqpit_ref = Reference()
coqpita = CoqpitA()
coqpitb = CoqpitB()
coqpitb.merge(coqpita)
    print(coqpitb.val_a)
    coqpitb.pprint()
assert coqpit_ref.val_a == coqpitb.val_a
assert coqpit_ref.val_b == coqpitb.val_b
assert coqpit_ref.val_c == coqpitb.val_c
assert coqpit_ref.val_e == coqpitb.val_e
assert coqpit_ref.val_f == coqpitb.val_f
assert coqpit_ref.val_g == coqpitb.val_g
assert coqpit_ref.val_same == coqpitb.val_same
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include | coqui_public_repos/inference-engine/third_party/openfst-1.6.9-win/src/include/fst/union-find.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Union-find algorithm for dense sets of non-negative integers, implemented
// using disjoint tree forests with rank heuristics and path compression.
#ifndef FST_UNION_FIND_H_
#define FST_UNION_FIND_H_
#include <stack>
#include <vector>
namespace fst {
// Union-Find algorithm for dense sets of non-negative integers.
template <class T>
class UnionFind {
public:
// Creates a disjoint set forest for the range [0; max); 'fail' is a value
// indicating that an element hasn't been initialized using MakeSet(...).
// The upper bound of the range can be reset (increased) using MakeSet(...).
UnionFind(T max, T fail) : parent_(max, fail), rank_(max), fail_(fail) {}
// Finds the representative of the set 'item' belongs to, performing path
// compression if necessary.
T FindSet(T item) {
if (item >= parent_.size() || item == fail_ || parent_[item] == fail_) {
return fail_;
}
auto *p = &parent_[item];
for (; *p != item; item = *p, p = &parent_[item]) exec_stack_.push(p);
for (; !exec_stack_.empty(); exec_stack_.pop()) *exec_stack_.top() = *p;
return *p;
}
// Creates the (destructive) union of the sets x and y belong to.
void Union(T x, T y) { Link(FindSet(x), FindSet(y)); }
// Initialization of an element: creates a singleton set containing 'item'.
// The range [0; max) is reset if item >= max.
T MakeSet(T item) {
if (item >= parent_.size()) {
// New value in parent_ should be initialized to fail_.
const auto nitem = item > 0 ? 2 * item : 2;
parent_.resize(nitem, fail_);
rank_.resize(nitem);
}
parent_[item] = item;
return item;
}
// Initialization of all elements starting from 0 to max - 1 to distinct sets.
void MakeAllSet(T max) {
parent_.resize(max);
for (T item = 0; item < max; ++item) parent_[item] = item;
}
private:
// Links trees rooted in 'x' and 'y'.
void Link(T x, T y) {
if (x == y) return;
if (rank_[x] > rank_[y]) {
parent_[y] = x;
} else {
parent_[x] = y;
if (rank_[x] == rank_[y]) {
++rank_[y];
}
}
}
UnionFind(const UnionFind &) = delete;
UnionFind &operator=(const UnionFind &) = delete;
std::vector<T> parent_; // Parent nodes.
std::vector<int> rank_; // Rank of an element = min. depth in tree.
T fail_; // Value indicating lookup failure.
std::stack<T *> exec_stack_; // Used for path compression.
};
} // namespace fst
#endif // FST_UNION_FIND_H_
| 0 |
coqui_public_repos/STT-models/czech/comodoro | coqui_public_repos/STT-models/czech/comodoro/v0.1.0/MODEL_CARD.md | # Model card for Czech STT
Jump to section:
- [Model details](#model-details)
- [Intended use](#intended-use)
- [Performance Factors](#performance-factors)
- [Metrics](#metrics)
- [Training data](#training-data)
- [Evaluation data](#evaluation-data)
- [Ethical considerations](#ethical-considerations)
- [Caveats and recommendations](#caveats-and-recommendations)
## Model details
- Person or organization developing model: Originally trained by [Vojtěch Drábek](https://github.com/comodoro).
- Model language: Czech / čeština / `cs`
- Model date: April 9, 2021
- Model type: `Speech-to-Text`
- Model version: `v0.1.0`
- Compatible with 🐸 STT version: `v0.9.3`
- License: CC-BY-NC
- Citation details: `@techreport{czech-stt, author = {Drábek, Vojtěch}, title = {Czech STT 0.1}, institution = {Coqui}, address = {\url{https://github.com/coqui-ai/STT-models}}, year = {2021}, month = {April}, number = {STT-CS-0.1} }`
- Where to send questions or comments about the model: You can leave an issue on [`STT-model` issues](https://github.com/coqui-ai/STT-models/issues), open a new discussion on [`STT-model` discussions](https://github.com/coqui-ai/STT-models/discussions), or chat with us on [Gitter](https://gitter.im/coqui-ai/).
## Intended use
Speech-to-Text for the [Czech Language](https://en.wikipedia.org/wiki/Czech_language) on 16kHz, mono-channel audio.
## Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data).
## Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
#### Transcription Accuracy
More information is reported on [Github](https://github.com/comodoro/deepspeech-cs/).
|Test Corpus|WER|CER|
|-----------|---|---|
|Common Voice|44.6\%|11.2\%|
#### Real-Time Factor
Real-Time Factor (RTF) is defined as `processing-time / length-of-audio`. The exact real-time factor of an STT model will depend on the hardware setup, so you may experience a different RTF.
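For example, if it takes 5 seconds to process a 10 second audio clip, the RTF is `5 / 10 = 0.5`.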
Recorded average RTF on laptop CPU: ``
#### Model Size
`model.pbmm`: 181M
`model.tflite`: 46M
### Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
## Training data
This model was trained on the following corpora:
1. Vystadial 2016 – Czech data
2. OVM – Otázky Václava Moravce
3. Czech Parliament Meetings
4. Large Corpus of Czech Parliament Plenary Hearings
5. Common Voice Czech
6. Some private recordings and parts of audiobooks
## Evaluation data
The model was evaluated on Common Voice Czech.
## Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
### Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
### Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries.
## Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data).
In most applications, it is recommended that you [train your own language model](https://stt.readthedocs.io/en/latest/LANGUAGE_MODEL.html) to improve transcription accuracy on your speech data.
| 0 |
coqui_public_repos | coqui_public_repos/STT/.readthedocs.yml | # .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
# Build documentation in the docs/ directory with Sphinx
sphinx:
builder: html
configuration: doc/conf.py
# Optionally set the version of Python and requirements required to build your docs
python:
version: 3.7
install:
- requirements: doc/requirements.txt
| 0 |
coqui_public_repos | coqui_public_repos/Trainer/requirements.test.txt | torchvision | 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/bin/fstproject.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/flags.h>
DEFINE_bool(project_output, false, "Project on output (vs. input)");
int fstproject_main(int argc, char **argv);
int main(int argc, char **argv) { return fstproject_main(argc, argv); }
| 0 |
coqui_public_repos/TTS/TTS/tts/utils/assets | coqui_public_repos/TTS/TTS/tts/utils/assets/tortoise/tokenizer.json | {"version":"1.0","truncation":null,"padding":null,"added_tokens":[{"id":0,"special":true,"content":"[STOP]","single_word":false,"lstrip":false,"rstrip":false,"normalized":false},{"id":1,"special":true,"content":"[UNK]","single_word":false,"lstrip":false,"rstrip":false,"normalized":false},{"id":2,"special":true,"content":"[SPACE]","single_word":false,"lstrip":false,"rstrip":false,"normalized":false}],"normalizer":null,"pre_tokenizer":{"type":"Whitespace"},"post_processor":null,"decoder":null,"model":{"type":"BPE","dropout":null,"unk_token":"[UNK]","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"vocab":{"[STOP]":0,"[UNK]":1,"[SPACE]":2,"!":3,"'":4,"(":5,")":6,",":7,"-":8,".":9,"/":10,":":11,";":12,"?":13,"a":14,"b":15,"c":16,"d":17,"e":18,"f":19,"g":20,"h":21,"i":22,"j":23,"k":24,"l":25,"m":26,"n":27,"o":28,"p":29,"q":30,"r":31,"s":32,"t":33,"u":34,"v":35,"w":36,"x":37,"y":38,"z":39,"th":40,"in":41,"the":42,"an":43,"er":44,"ou":45,"re":46,"on":47,"at":48,"ed":49,"en":50,"to":51,"ing":52,"and":53,"is":54,"as":55,"al":56,"or":57,"of":58,"ar":59,"it":60,"es":61,"he":62,"st":63,"le":64,"om":65,"se":66,"be":67,"ad":68,"ow":69,"ly":70,"ch":71,"wh":72,"that":73,"you":74,"li":75,"ve":76,"ac":77,"ti":78,"ld":79,"me":80,"was":81,"gh":82,"id":83,"ll":84,"wi":85,"ent":86,"for":87,"ay":88,"ro":89,"ver":90,"ic":91,"her":92,"ke":93,"his":94,"no":95,"ut":96,"un":97,"ir":98,"lo":99,"we":100,"ri":101,"ha":102,"with":103,"ght":104,"out":105,"im":106,"ion":107,"all":108,"ab":109,"one":110,"ne":111,"ge":112,"ould":113,"ter":114,"mo":115,"had":116,"ce":117,"she":118,"go":119,"sh":120,"ur":121,"am":122,"so":123,"pe":124,"my":125,"de":126,"are":127,"but":128,"ome":129,"fr":130,"ther":131,"fe":132,"su":133,"do":134,"con":135,"te":136,"ain":137,"ere":138,"po":139,"if":140,"they":141,"us":142,"ag":143,"tr":144,"now":145,"oun":146,"this":147,"have":148,"not":149,"sa":150,"il":151,"up":152,"thing":153,"from":154,"ap":155,"him":156,"ack":157,"ation":158,"ant":159,"our":160,"op":161,"like":162,"ust":163,"ess":164,"bo":165,"ok":166,"ul":167,"ind":168,"ex":169,"com":170,"some":171,"there":172,"ers":173,"co":174,"res":175,"man":176,"ard":177,"pl":178,"wor":179,"way":180,"tion":181,"fo":182,"ca":183,"were":184,"by":185,"ate":186,"pro":187,"ted":188,"ound":189,"own":190,"would":191,"ts":192,"what":193,"qu":194,"ally":195,"ight":196,"ck":197,"gr":198,"when":199,"ven":200,"can":201,"ough":202,"ine":203,"end":204,"per":205,"ous":206,"od":207,"ide":208,"know":209,"ty":210,"very":211,"si":212,"ak":213,"who":214,"about":215,"ill":216,"them":217,"est":218,"red":219,"ye":220,"could":221,"ong":222,"your":223,"their":224,"em":225,"just":226,"other":227,"into":228,"any":229,"whi":230,"um":231,"tw":232,"ast":233,"der":234,"did":235,"ie":236,"been":237,"ace":238,"ink":239,"ity":240,"back":241,"ting":242,"br":243,"more":244,"ake":245,"pp":246,"then":247,"sp":248,"el":249,"use":250,"bl":251,"said":252,"over":253,"get":254},"merges":["t h","i n","th e","a n","e r","o u","r e","o n","a t","e d","e n","t o","in g","an d","i s","a s","a l","o r","o f","a r","i t","e s","h e","s t","l e","o m","s e","b e","a d","o w","l y","c h","w h","th at","y ou","l i","v e","a c","t i","l d","m e","w as","g h","i d","l l","w i","en t","f or","a y","r o","v er","i c","h er","k e","h is","n o","u t","u n","i r","l o","w e","r i","h a","wi th","gh t","ou t","i m","i on","al l","a b","on e","n e","g e","ou ld","t er","m 
o","h ad","c e","s he","g o","s h","u r","a m","s o","p e","m y","d e","a re","b ut","om e","f r","the r","f e","s u","d o","c on","t e","a in","er e","p o","i f","the y","u s","a g","t r","n ow","ou n","th is","ha ve","no t","s a","i l","u p","th ing","fr om","a p","h im","ac k","at ion","an t","ou r","o p","li ke","u st","es s","b o","o k","u l","in d","e x","c om","s ome","the re","er s","c o","re s","m an","ar d","p l","w or","w ay","ti on","f o","c a","w ere","b y","at e","p ro","t ed","oun d","ow n","w ould","t s","wh at","q u","al ly","i ght","c k","g r","wh en","v en","c an","ou gh","in e","en d","p er","ou s","o d","id e","k now","t y","ver y","s i","a k","wh o","ab out","i ll","the m","es t","re d","y e","c ould","on g","you r","the ir","e m","j ust","o ther","in to","an y","wh i","u m","t w","as t","d er","d id","i e","be en","ac e","in k","it y","b ack","t ing","b r","mo re","a ke","p p","the n","s p","e l","u se","b l","sa id","o ver","ge t"]}} | 0 |
coqui_public_repos/inference-engine/third_party/kenlm/util | coqui_public_repos/inference-engine/third_party/kenlm/util/double-conversion/ieee.h | // Copyright 2012 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following
// disclaimer in the documentation and/or other materials provided
// with the distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived
// from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef DOUBLE_CONVERSION_DOUBLE_H_
#define DOUBLE_CONVERSION_DOUBLE_H_
#include "diy-fp.h"
namespace kenlm_double_conversion {
// We assume that doubles and uint64_t have the same endianness.
static uint64_t double_to_uint64(double d) { return BitCast<uint64_t>(d); }
static double uint64_to_double(uint64_t d64) { return BitCast<double>(d64); }
static uint32_t float_to_uint32(float f) { return BitCast<uint32_t>(f); }
static float uint32_to_float(uint32_t d32) { return BitCast<float>(d32); }
// Helper functions for doubles.
class Double {
public:
static const uint64_t kSignMask = UINT64_2PART_C(0x80000000, 00000000);
static const uint64_t kExponentMask = UINT64_2PART_C(0x7FF00000, 00000000);
static const uint64_t kSignificandMask = UINT64_2PART_C(0x000FFFFF, FFFFFFFF);
static const uint64_t kHiddenBit = UINT64_2PART_C(0x00100000, 00000000);
static const int kPhysicalSignificandSize = 52; // Excludes the hidden bit.
static const int kSignificandSize = 53;
Double() : d64_(0) {}
explicit Double(double d) : d64_(double_to_uint64(d)) {}
explicit Double(uint64_t d64) : d64_(d64) {}
explicit Double(DiyFp diy_fp)
: d64_(DiyFpToUint64(diy_fp)) {}
  // The value encoded by this Double must be greater than or equal to +0.0.
  // It must not be special (infinity, or NaN).
DiyFp AsDiyFp() const {
ASSERT(Sign() > 0);
ASSERT(!IsSpecial());
return DiyFp(Significand(), Exponent());
}
// The value encoded by this Double must be strictly greater than 0.
DiyFp AsNormalizedDiyFp() const {
ASSERT(value() > 0.0);
uint64_t f = Significand();
int e = Exponent();
// The current double could be a denormal.
while ((f & kHiddenBit) == 0) {
f <<= 1;
e--;
}
// Do the final shifts in one go.
f <<= DiyFp::kSignificandSize - kSignificandSize;
e -= DiyFp::kSignificandSize - kSignificandSize;
return DiyFp(f, e);
}
  // Returns the double's bits as a uint64.
uint64_t AsUint64() const {
return d64_;
}
// Returns the next greater double. Returns +infinity on input +infinity.
double NextDouble() const {
if (d64_ == kInfinity) return Double(kInfinity).value();
if (Sign() < 0 && Significand() == 0) {
// -0.0
return 0.0;
}
if (Sign() < 0) {
return Double(d64_ - 1).value();
} else {
return Double(d64_ + 1).value();
}
}
double PreviousDouble() const {
if (d64_ == (kInfinity | kSignMask)) return -Infinity();
if (Sign() < 0) {
return Double(d64_ + 1).value();
} else {
if (Significand() == 0) return -0.0;
return Double(d64_ - 1).value();
}
}
int Exponent() const {
if (IsDenormal()) return kDenormalExponent;
uint64_t d64 = AsUint64();
int biased_e =
static_cast<int>((d64 & kExponentMask) >> kPhysicalSignificandSize);
return biased_e - kExponentBias;
}
uint64_t Significand() const {
uint64_t d64 = AsUint64();
uint64_t significand = d64 & kSignificandMask;
if (!IsDenormal()) {
return significand + kHiddenBit;
} else {
return significand;
}
}
// Returns true if the double is a denormal.
bool IsDenormal() const {
uint64_t d64 = AsUint64();
return (d64 & kExponentMask) == 0;
}
// We consider denormals not to be special.
// Hence only Infinity and NaN are special.
bool IsSpecial() const {
uint64_t d64 = AsUint64();
return (d64 & kExponentMask) == kExponentMask;
}
bool IsNan() const {
uint64_t d64 = AsUint64();
return ((d64 & kExponentMask) == kExponentMask) &&
((d64 & kSignificandMask) != 0);
}
bool IsInfinite() const {
uint64_t d64 = AsUint64();
return ((d64 & kExponentMask) == kExponentMask) &&
((d64 & kSignificandMask) == 0);
}
int Sign() const {
uint64_t d64 = AsUint64();
return (d64 & kSignMask) == 0? 1: -1;
}
  // Precondition: the value encoded by this Double must be greater than or
  // equal to +0.0.
DiyFp UpperBoundary() const {
ASSERT(Sign() > 0);
return DiyFp(Significand() * 2 + 1, Exponent() - 1);
}
// Computes the two boundaries of this.
// The bigger boundary (m_plus) is normalized. The lower boundary has the same
// exponent as m_plus.
// Precondition: the value encoded by this Double must be greater than 0.
void NormalizedBoundaries(DiyFp* out_m_minus, DiyFp* out_m_plus) const {
ASSERT(value() > 0.0);
DiyFp v = this->AsDiyFp();
DiyFp m_plus = DiyFp::Normalize(DiyFp((v.f() << 1) + 1, v.e() - 1));
DiyFp m_minus;
if (LowerBoundaryIsCloser()) {
m_minus = DiyFp((v.f() << 2) - 1, v.e() - 2);
} else {
m_minus = DiyFp((v.f() << 1) - 1, v.e() - 1);
}
m_minus.set_f(m_minus.f() << (m_minus.e() - m_plus.e()));
m_minus.set_e(m_plus.e());
*out_m_plus = m_plus;
*out_m_minus = m_minus;
}
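  // Worked example (illustrative, not part of the original header): for
  // v == 1.0 the predecessor is 1.0 - 2^-53 and the successor is 1.0 + 2^-52,
  // so the lower boundary sits at 1.0 - 2^-54 and the upper boundary at
  // 1.0 + 2^-53 -- the lower one is closer because 1.0 has the smallest
  // possible significand.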
bool LowerBoundaryIsCloser() const {
    // If the significand is of the form f == 2^p-1, then the lower boundary
    // is closer.
// Think of v = 1000e10 and v- = 9999e9.
// Then the boundary (== (v - v-)/2) is not just at a distance of 1e9 but
// at a distance of 1e8.
// The only exception is for the smallest normal: the largest denormal is
// at the same distance as its successor.
// Note: denormals have the same exponent as the smallest normals.
bool physical_significand_is_zero = ((AsUint64() & kSignificandMask) == 0);
return physical_significand_is_zero && (Exponent() != kDenormalExponent);
}
double value() const { return uint64_to_double(d64_); }
// Returns the significand size for a given order of magnitude.
// If v = f*2^e with 2^p-1 <= f <= 2^p then p+e is v's order of magnitude.
// This function returns the number of significant binary digits v will have
// once it's encoded into a double. In almost all cases this is equal to
// kSignificandSize. The only exceptions are denormals. They start with
// leading zeroes and their effective significand-size is hence smaller.
static int SignificandSizeForOrderOfMagnitude(int order) {
if (order >= (kDenormalExponent + kSignificandSize)) {
return kSignificandSize;
}
if (order <= kDenormalExponent) return 0;
return order - kDenormalExponent;
}
static double Infinity() {
return Double(kInfinity).value();
}
static double NaN() {
return Double(kNaN).value();
}
private:
static const int kExponentBias = 0x3FF + kPhysicalSignificandSize;
static const int kDenormalExponent = -kExponentBias + 1;
static const int kMaxExponent = 0x7FF - kExponentBias;
static const uint64_t kInfinity = UINT64_2PART_C(0x7FF00000, 00000000);
static const uint64_t kNaN = UINT64_2PART_C(0x7FF80000, 00000000);
const uint64_t d64_;
static uint64_t DiyFpToUint64(DiyFp diy_fp) {
uint64_t significand = diy_fp.f();
int exponent = diy_fp.e();
while (significand > kHiddenBit + kSignificandMask) {
significand >>= 1;
exponent++;
}
if (exponent >= kMaxExponent) {
return kInfinity;
}
if (exponent < kDenormalExponent) {
return 0;
}
while (exponent > kDenormalExponent && (significand & kHiddenBit) == 0) {
significand <<= 1;
exponent--;
}
uint64_t biased_exponent;
if (exponent == kDenormalExponent && (significand & kHiddenBit) == 0) {
biased_exponent = 0;
} else {
biased_exponent = static_cast<uint64_t>(exponent + kExponentBias);
}
return (significand & kSignificandMask) |
(biased_exponent << kPhysicalSignificandSize);
}
DISALLOW_COPY_AND_ASSIGN(Double);
};
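// A minimal usage sketch (illustrative only; not part of the original
// header). The double 1.5 has bit pattern 0x3FF8000000000000, so:
//
//   Double d(1.5);
//   uint64_t bits = d.AsUint64();  // 0x3FF8000000000000
//   DiyFp fp = d.AsDiyFp();        // f == 0x18000000000000, e == -52,
//                                  // and 0x18000000000000 * 2^-52 == 1.5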
class Single {
public:
static const uint32_t kSignMask = 0x80000000;
static const uint32_t kExponentMask = 0x7F800000;
static const uint32_t kSignificandMask = 0x007FFFFF;
static const uint32_t kHiddenBit = 0x00800000;
static const int kPhysicalSignificandSize = 23; // Excludes the hidden bit.
static const int kSignificandSize = 24;
Single() : d32_(0) {}
explicit Single(float f) : d32_(float_to_uint32(f)) {}
explicit Single(uint32_t d32) : d32_(d32) {}
  // The value encoded by this Single must be greater than or equal to +0.0.
  // It must not be special (infinity, or NaN).
DiyFp AsDiyFp() const {
ASSERT(Sign() > 0);
ASSERT(!IsSpecial());
return DiyFp(Significand(), Exponent());
}
  // Returns the single's bits as a uint32.
uint32_t AsUint32() const {
return d32_;
}
int Exponent() const {
if (IsDenormal()) return kDenormalExponent;
uint32_t d32 = AsUint32();
int biased_e =
static_cast<int>((d32 & kExponentMask) >> kPhysicalSignificandSize);
return biased_e - kExponentBias;
}
uint32_t Significand() const {
uint32_t d32 = AsUint32();
uint32_t significand = d32 & kSignificandMask;
if (!IsDenormal()) {
return significand + kHiddenBit;
} else {
return significand;
}
}
// Returns true if the single is a denormal.
bool IsDenormal() const {
uint32_t d32 = AsUint32();
return (d32 & kExponentMask) == 0;
}
// We consider denormals not to be special.
// Hence only Infinity and NaN are special.
bool IsSpecial() const {
uint32_t d32 = AsUint32();
return (d32 & kExponentMask) == kExponentMask;
}
bool IsNan() const {
uint32_t d32 = AsUint32();
return ((d32 & kExponentMask) == kExponentMask) &&
((d32 & kSignificandMask) != 0);
}
bool IsInfinite() const {
uint32_t d32 = AsUint32();
return ((d32 & kExponentMask) == kExponentMask) &&
((d32 & kSignificandMask) == 0);
}
int Sign() const {
uint32_t d32 = AsUint32();
return (d32 & kSignMask) == 0? 1: -1;
}
// Computes the two boundaries of this.
// The bigger boundary (m_plus) is normalized. The lower boundary has the same
// exponent as m_plus.
// Precondition: the value encoded by this Single must be greater than 0.
void NormalizedBoundaries(DiyFp* out_m_minus, DiyFp* out_m_plus) const {
ASSERT(value() > 0.0);
DiyFp v = this->AsDiyFp();
DiyFp m_plus = DiyFp::Normalize(DiyFp((v.f() << 1) + 1, v.e() - 1));
DiyFp m_minus;
if (LowerBoundaryIsCloser()) {
m_minus = DiyFp((v.f() << 2) - 1, v.e() - 2);
} else {
m_minus = DiyFp((v.f() << 1) - 1, v.e() - 1);
}
m_minus.set_f(m_minus.f() << (m_minus.e() - m_plus.e()));
m_minus.set_e(m_plus.e());
*out_m_plus = m_plus;
*out_m_minus = m_minus;
}
  // Precondition: the value encoded by this Single must be greater than or
  // equal to +0.0.
DiyFp UpperBoundary() const {
ASSERT(Sign() > 0);
return DiyFp(Significand() * 2 + 1, Exponent() - 1);
}
bool LowerBoundaryIsCloser() const {
    // If the significand is of the form f == 2^p-1, then the lower boundary
    // is closer.
// Think of v = 1000e10 and v- = 9999e9.
// Then the boundary (== (v - v-)/2) is not just at a distance of 1e9 but
// at a distance of 1e8.
// The only exception is for the smallest normal: the largest denormal is
// at the same distance as its successor.
// Note: denormals have the same exponent as the smallest normals.
bool physical_significand_is_zero = ((AsUint32() & kSignificandMask) == 0);
return physical_significand_is_zero && (Exponent() != kDenormalExponent);
}
float value() const { return uint32_to_float(d32_); }
static float Infinity() {
return Single(kInfinity).value();
}
static float NaN() {
return Single(kNaN).value();
}
private:
static const int kExponentBias = 0x7F + kPhysicalSignificandSize;
static const int kDenormalExponent = -kExponentBias + 1;
static const int kMaxExponent = 0xFF - kExponentBias;
static const uint32_t kInfinity = 0x7F800000;
static const uint32_t kNaN = 0x7FC00000;
const uint32_t d32_;
DISALLOW_COPY_AND_ASSIGN(Single);
};
} // namespace kenlm_double_conversion
#endif // DOUBLE_CONVERSION_DOUBLE_H_
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/bin/fstepsnormalize.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/flags.h>
DEFINE_bool(eps_norm_output, false, "Normalize output epsilons");
int fstepsnormalize_main(int argc, char **argv);
int main(int argc, char **argv) { return fstepsnormalize_main(argc, argv); }
| 0 |
coqui_public_repos | coqui_public_repos/coqpit/VERSION | 0.0.17
| 0 |
coqui_public_repos/inference-engine/third_party/openfst-1.6.7 | coqui_public_repos/inference-engine/third_party/openfst-1.6.7/m4/ltsugar.m4 | # ltsugar.m4 -- libtool m4 base layer. -*-Autoconf-*-
#
# Copyright (C) 2004, 2005, 2007, 2008 Free Software Foundation, Inc.
# Written by Gary V. Vaughan, 2004
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.
# serial 6 ltsugar.m4
# This is to help aclocal find these macros, as it can't see m4_define.
AC_DEFUN([LTSUGAR_VERSION], [m4_if([0.1])])
# lt_join(SEP, ARG1, [ARG2...])
# -----------------------------
# Produce ARG1SEPARG2...SEPARGn, omitting [] arguments and their
# associated separator.
# Needed until we can rely on m4_join from Autoconf 2.62, since all earlier
# versions in m4sugar had bugs.
m4_define([lt_join],
[m4_if([$#], [1], [],
[$#], [2], [[$2]],
[m4_if([$2], [], [], [[$2]_])$0([$1], m4_shift(m4_shift($@)))])])
m4_define([_lt_join],
[m4_if([$#$2], [2], [],
[m4_if([$2], [], [], [[$1$2]])$0([$1], m4_shift(m4_shift($@)))])])
# lt_car(LIST)
# lt_cdr(LIST)
# ------------
# Manipulate m4 lists.
# These macros are necessary as long as we still need to support
# Autoconf 2.59, which quotes differently.
m4_define([lt_car], [[$1]])
m4_define([lt_cdr],
[m4_if([$#], 0, [m4_fatal([$0: cannot be called without arguments])],
[$#], 1, [],
[m4_dquote(m4_shift($@))])])
m4_define([lt_unquote], $1)
# lt_append(MACRO-NAME, STRING, [SEPARATOR])
# ------------------------------------------
# Redefine MACRO-NAME to hold its former content plus `SEPARATOR'`STRING'.
# Note that neither SEPARATOR nor STRING are expanded; they are appended
# to MACRO-NAME as is (leaving the expansion for when MACRO-NAME is invoked).
# No SEPARATOR is output if MACRO-NAME was previously undefined (different
# than defined and empty).
#
# This macro is needed until we can rely on Autoconf 2.62, since earlier
# versions of m4sugar mistakenly expanded SEPARATOR but not STRING.
m4_define([lt_append],
[m4_define([$1],
m4_ifdef([$1], [m4_defn([$1])[$3]])[$2])])
# lt_combine(SEP, PREFIX-LIST, INFIX, SUFFIX1, [SUFFIX2...])
# ----------------------------------------------------------
# Produce a SEP delimited list of all paired combinations of elements of
# PREFIX-LIST with SUFFIX1 through SUFFIXn. Each element of the list
# has the form PREFIXmINFIXSUFFIXn.
# Needed until we can rely on m4_combine added in Autoconf 2.62.
m4_define([lt_combine],
[m4_if(m4_eval([$# > 3]), [1],
[m4_pushdef([_Lt_sep], [m4_define([_Lt_sep], m4_defn([lt_car]))])]]dnl
[[m4_foreach([_Lt_prefix], [$2],
[m4_foreach([_Lt_suffix],
]m4_dquote(m4_dquote(m4_shift(m4_shift(m4_shift($@)))))[,
[_Lt_sep([$1])[]m4_defn([_Lt_prefix])[$3]m4_defn([_Lt_suffix])])])])])
# lt_if_append_uniq(MACRO-NAME, VARNAME, [SEPARATOR], [UNIQ], [NOT-UNIQ])
# -----------------------------------------------------------------------
# Iff MACRO-NAME does not yet contain VARNAME, then append it (delimited
# by SEPARATOR if supplied) and expand UNIQ, else NOT-UNIQ.
m4_define([lt_if_append_uniq],
[m4_ifdef([$1],
[m4_if(m4_index([$3]m4_defn([$1])[$3], [$3$2$3]), [-1],
[lt_append([$1], [$2], [$3])$4],
[$5])],
[lt_append([$1], [$2], [$3])$4])])
# lt_dict_add(DICT, KEY, VALUE)
# -----------------------------
m4_define([lt_dict_add],
[m4_define([$1($2)], [$3])])
# lt_dict_add_subkey(DICT, KEY, SUBKEY, VALUE)
# --------------------------------------------
m4_define([lt_dict_add_subkey],
[m4_define([$1($2:$3)], [$4])])
# lt_dict_fetch(DICT, KEY, [SUBKEY])
# ----------------------------------
m4_define([lt_dict_fetch],
[m4_ifval([$3],
m4_ifdef([$1($2:$3)], [m4_defn([$1($2:$3)])]),
m4_ifdef([$1($2)], [m4_defn([$1($2)])]))])
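# Illustrative example (not part of the original file): after
#   lt_dict_add([my_dict], [color], [blue])
# the call lt_dict_fetch([my_dict], [color]) expands to `blue'; the
# lt_dict_add_subkey/lt_dict_fetch pair behaves the same way when a SUBKEY
# is supplied.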
# lt_if_dict_fetch(DICT, KEY, [SUBKEY], VALUE, IF-TRUE, [IF-FALSE])
# -----------------------------------------------------------------
m4_define([lt_if_dict_fetch],
[m4_if(lt_dict_fetch([$1], [$2], [$3]), [$4],
[$5],
[$6])])
# lt_dict_filter(DICT, [SUBKEY], VALUE, [SEPARATOR], KEY, [...])
# --------------------------------------------------------------
m4_define([lt_dict_filter],
[m4_if([$5], [], [],
[lt_join(m4_quote(m4_default([$4], [[, ]])),
lt_unquote(m4_split(m4_normalize(m4_foreach(_Lt_key, lt_car([m4_shiftn(4, $@)]),
[lt_if_dict_fetch([$1], _Lt_key, [$2], [$3], [_Lt_key ])])))))])[]dnl
])
| 0 |
coqui_public_repos/STT | coqui_public_repos/STT/taskcluster/test-electronjs_v6.0-win-amd64-opt.yml | build:
template_file: test-win-opt-base.tyml
dependencies:
- "win-amd64-cpu-opt"
- "test-training_16k-linux-amd64-py36m-opt"
test_model_task: "test-training_16k-linux-amd64-py36m-opt"
system_setup:
>
${system.sox_win} && ${nodejs.win.prep_12}
args:
tests_cmdline: "${system.homedir.win}/DeepSpeech/ds/taskcluster/tc-electron-tests.sh 12.x 6.0.12 16k"
metadata:
name: "DeepSpeech Windows AMD64 CPU ElectronJS v6.0 tests"
description: "Testing DeepSpeech for Windows/AMD64 on ElectronJS v6.0, CPU only, optimized version"
| 0 |
coqui_public_repos/TTS/TTS/tts/layers | coqui_public_repos/TTS/TTS/tts/layers/xtts/gpt_inference.py | import math
import torch
from torch import nn
from transformers import GPT2PreTrainedModel
from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions
class GPT2InferenceModel(GPT2PreTrainedModel):
"""Override GPT2LMHeadModel to allow for prefix conditioning."""
def __init__(self, config, gpt, pos_emb, embeddings, norm, linear, kv_cache):
super().__init__(config)
self.transformer = gpt
self.pos_embedding = pos_emb
self.embeddings = embeddings
self.final_norm = norm
self.lm_head = nn.Sequential(norm, linear)
self.kv_cache = kv_cache
def store_prefix_emb(self, prefix_emb):
self.cached_prefix_emb = prefix_emb
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
token_type_ids = kwargs.get("token_type_ids", None) # usually None
if not self.kv_cache:
past_key_values = None
# only last token for inputs_ids if past is defined in kwargs
if past_key_values is not None:
input_ids = input_ids[:, -1].unsqueeze(-1)
if token_type_ids is not None:
token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
attention_mask = kwargs.get("attention_mask", None)
position_ids = kwargs.get("position_ids", None)
if attention_mask is not None and position_ids is None:
# create position_ids on the fly for batch generation
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
if past_key_values is not None:
position_ids = position_ids[:, -1].unsqueeze(-1)
else:
position_ids = None
return {
"input_ids": input_ids,
"past_key_values": past_key_values,
"use_cache": kwargs.get("use_cache"),
"position_ids": position_ids,
"attention_mask": attention_mask,
"token_type_ids": token_type_ids,
}
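    # (Illustrative note, not in the original file.) With kv_cache enabled,
    # generation calls the method above once per step: after the first step
    # only the newest token id survives the slicing, and past_key_values
    # carries the attention state for everything already decoded, so the
    # transformer re-encodes just a single token per step.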
def forward(
self,
input_ids=None,
past_key_values=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
labels=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
assert self.cached_prefix_emb is not None
assert inputs_embeds is None # Not supported by this inference model.
assert labels is None # Training not supported by this inference model.
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
# assert len(past_key_values) + len(input_ids) == attention_mask.shape[1]
# Create embedding
prefix_len = self.cached_prefix_emb.shape[1]
if input_ids.shape[1] != 1:
gen_inputs = input_ids[:, prefix_len:]
gen_emb = self.embeddings(gen_inputs)
gen_emb = gen_emb + self.pos_embedding(gen_emb)
if self.cached_prefix_emb.shape[0] != gen_emb.shape[0]:
prefix_emb = self.cached_prefix_emb.repeat_interleave(
gen_emb.shape[0] // self.cached_prefix_emb.shape[0], 0
)
else:
prefix_emb = self.cached_prefix_emb.to(gen_emb.dtype)
emb = torch.cat([prefix_emb, gen_emb], dim=1)
else:
emb = self.embeddings(input_ids)
emb = emb + self.pos_embedding.get_fixed_embedding(
attention_mask.shape[1] - (prefix_len + 1), attention_mask.device
)
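            # (Illustrative note.) attention_mask covers prefix + generated
            # tokens, so attention_mask.shape[1] - (prefix_len + 1) is the
            # 0-based position of the newest token within the generated part,
            # which selects its positional embedding here.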
transformer_outputs = self.transformer(
inputs_embeds=emb,
past_key_values=past_key_values,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states)
if not return_dict:
return (lm_logits,) + transformer_outputs[1:]
return CausalLMOutputWithCrossAttentions(
loss=None,
logits=lm_logits,
past_key_values=transformer_outputs.past_key_values,
hidden_states=transformer_outputs.hidden_states,
attentions=transformer_outputs.attentions,
cross_attentions=transformer_outputs.cross_attentions,
)
@staticmethod
def _reorder_cache(past, beam_idx):
"""
This function is used to re-order the :obj:`past_key_values` cache if
:meth:`~transformers.PreTrainedModel.beam_search` or :meth:`~transformers.PreTrainedModel.beam_sample` is
called. This is required to match :obj:`past_key_values` with the correct beam_idx at every generation step.
"""
return tuple(
tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past)
for layer_past in past
)
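# A minimal usage sketch (illustrative; the variable names and shapes below are
# assumptions, not part of the original file):
#
#   model = GPT2InferenceModel(config, gpt, pos_emb, embeddings, norm, linear,
#                              kv_cache=True)
#   model.store_prefix_emb(prefix_emb)               # [B, prefix_len, hidden]
#   out = model.generate(input_ids, max_length=...)  # HF generate() drives
#                                                    # forward() and the cache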
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/extensions/compact/compact16_acceptor-fst.cc | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#include <fst/fst.h>
#include <fst/compact-fst.h>
namespace fst {
static FstRegisterer<CompactAcceptorFst<StdArc, uint16>>
CompactAcceptorFst_StdArc_uint16_registerer;
static FstRegisterer<CompactAcceptorFst<LogArc, uint16>>
CompactAcceptorFst_LogArc_uint16_registerer;
} // namespace fst
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/script/synchronize.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
#ifndef FST_SCRIPT_SYNCHRONIZE_H_
#define FST_SCRIPT_SYNCHRONIZE_H_
#include <utility>
#include <fst/synchronize.h>
#include <fst/script/fst-class.h>
namespace fst {
namespace script {
using SynchronizeArgs = std::pair<const FstClass &, MutableFstClass *>;
template <class Arc>
void Synchronize(SynchronizeArgs *args) {
const Fst<Arc> &ifst = *(std::get<0>(*args).GetFst<Arc>());
MutableFst<Arc> *ofst = std::get<1>(*args)->GetMutableFst<Arc>();
Synchronize(ifst, ofst);
}
void Synchronize(const FstClass &ifst, MutableFstClass *ofst);
} // namespace script
} // namespace fst
#endif // FST_SCRIPT_SYNCHRONIZE_H_
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/extensions | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.7/src/include/fst/extensions/pdt/pdtlib.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// This is an experimental push-down transducer (PDT) library. A PDT is
// encoded as an FST, where some transitions are labeled with open or close
// parentheses. To be interpreted as a PDT, the parentheses must balance on a
// path.
#ifndef FST_EXTENSIONS_PDT_PDTLIB_H_
#define FST_EXTENSIONS_PDT_PDTLIB_H_
#include <fst/extensions/pdt/compose.h>
#include <fst/extensions/pdt/expand.h>
#include <fst/extensions/pdt/pdt.h>
#include <fst/extensions/pdt/replace.h>
#include <fst/extensions/pdt/reverse.h>
#include <fst/extensions/pdt/shortest-path.h>
#endif // FST_EXTENSIONS_PDT_PDTLIB_H_
| 0 |
coqui_public_repos/snakepit/scripts | coqui_public_repos/snakepit/scripts/daemon/data-ro.mount | [Unit]
Description=Read-only data directory
[Mount]
What=/ro
Where=/data/ro
Type=fuse.bindfs
Options=ro
[Install]
WantedBy=multi-user.target
| 0 |
coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include | coqui_public_repos/STT/native_client/ctcdecode/third_party/openfst-1.6.9-win/src/include/fst/verify.h | // See www.openfst.org for extensive documentation on this weighted
// finite-state transducer library.
//
// Function to verify an FST's contents.
#ifndef FST_VERIFY_H_
#define FST_VERIFY_H_
#include <fst/log.h>
#include <fst/fst.h>
#include <fst/test-properties.h>
namespace fst {
// Verifies that an Fst's contents are sane.
template <class Arc>
bool Verify(const Fst<Arc> &fst, bool allow_negative_labels = false) {
using Label = typename Arc::Label;
using StateId = typename Arc::StateId;
using Weight = typename Arc::Weight;
const auto start = fst.Start();
const auto *isyms = fst.InputSymbols();
const auto *osyms = fst.OutputSymbols();
// Count states
StateId ns = 0;
for (StateIterator<Fst<Arc>> siter(fst); !siter.Done(); siter.Next()) ++ns;
if (start == kNoStateId && ns > 0) {
LOG(ERROR) << "Verify: FST start state ID not set";
return false;
} else if (start >= ns) {
LOG(ERROR) << "Verify: FST start state ID exceeds number of states";
return false;
}
for (StateIterator<Fst<Arc>> siter(fst); !siter.Done(); siter.Next()) {
auto state = siter.Value();
size_t na = 0;
for (ArcIterator<Fst<Arc>> aiter(fst, state); !aiter.Done(); aiter.Next()) {
const auto &arc = aiter.Value();
if (!allow_negative_labels && arc.ilabel < 0) {
LOG(ERROR) << "Verify: FST input label ID of arc at position " << na
<< " of state " << state << " is negative";
return false;
} else if (isyms && isyms->Find(arc.ilabel) == "") {
LOG(ERROR) << "Verify: FST input label ID " << arc.ilabel
<< " of arc at position " << na << " of state " << state
<< " is missing from input symbol table \"" << isyms->Name()
<< "\"";
return false;
} else if (!allow_negative_labels && arc.olabel < 0) {
LOG(ERROR) << "Verify: FST output label ID of arc at position " << na
<< " of state " << state << " is negative";
return false;
} else if (osyms && osyms->Find(arc.olabel) == "") {
LOG(ERROR) << "Verify: FST output label ID " << arc.olabel
<< " of arc at position " << na << " of state " << state
<< " is missing from output symbol table \"" << osyms->Name()
<< "\"";
return false;
} else if (!arc.weight.Member()) {
LOG(ERROR) << "Verify: FST weight of arc at position " << na
<< " of state " << state << " is invalid";
return false;
} else if (arc.nextstate < 0) {
LOG(ERROR) << "Verify: FST destination state ID of arc at position "
<< na << " of state " << state << " is negative";
return false;
} else if (arc.nextstate >= ns) {
LOG(ERROR) << "Verify: FST destination state ID of arc at position "
<< na << " of state " << state
<< " exceeds number of states";
return false;
}
++na;
}
if (!fst.Final(state).Member()) {
LOG(ERROR) << "Verify: FST final weight of state " << state
<< " is invalid";
return false;
}
}
const auto fst_props = fst.Properties(kFstProperties, false);
if (fst_props & kError) {
LOG(ERROR) << "Verify: FST error property is set";
return false;
}
uint64_t known_props;
uint64_t test_props =
ComputeProperties(fst, kFstProperties, &known_props, false);
if (!CompatProperties(fst_props, test_props)) {
LOG(ERROR) << "Verify: Stored FST properties incorrect "
<< "(props1 = stored props, props2 = tested)";
return false;
} else {
return true;
}
}
} // namespace fst
#endif // FST_VERIFY_H_
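// Example (illustrative; assumes the usual OpenFST headers such as
// <fst/vector-fst.h> are also included):
//
//   fst::StdVectorFst f;
//   f.AddState();
//   f.SetStart(0);
//   f.SetFinal(0, fst::TropicalWeight::One());
//   bool ok = fst::Verify(f);  // true: a valid single-state acceptor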
| 0 |
coqui_public_repos/TTS/TTS/tts/layers | coqui_public_repos/TTS/TTS/tts/layers/vits/networks.py | import math
import torch
from torch import nn
from TTS.tts.layers.glow_tts.glow import WN
from TTS.tts.layers.glow_tts.transformer import RelativePositionTransformer
from TTS.tts.utils.helpers import sequence_mask
LRELU_SLOPE = 0.1
def convert_pad_shape(pad_shape):
    reversed_shape = pad_shape[::-1]
    pad_shape = [item for sublist in reversed_shape for item in sublist]
return pad_shape
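# e.g. convert_pad_shape([[1, 2], [3, 4]]) == [3, 4, 1, 2]: the pads for the
# last dimension come first, matching torch.nn.functional.pad's ordering.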
def init_weights(m, mean=0.0, std=0.01):
classname = m.__class__.__name__
if classname.find("Conv") != -1:
m.weight.data.normal_(mean, std)
def get_padding(kernel_size, dilation=1):
return int((kernel_size * dilation - dilation) / 2)
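# e.g. get_padding(5, dilation=2) == 4: the dilated kernel spans 9 steps, so a
# pad of 4 keeps the output length equal to the input length at stride 1.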
class TextEncoder(nn.Module):
def __init__(
self,
n_vocab: int,
out_channels: int,
hidden_channels: int,
hidden_channels_ffn: int,
num_heads: int,
num_layers: int,
kernel_size: int,
dropout_p: float,
language_emb_dim: int = None,
):
"""Text Encoder for VITS model.
Args:
n_vocab (int): Number of characters for the embedding layer.
out_channels (int): Number of channels for the output.
hidden_channels (int): Number of channels for the hidden layers.
hidden_channels_ffn (int): Number of channels for the convolutional layers.
num_heads (int): Number of attention heads for the Transformer layers.
num_layers (int): Number of Transformer layers.
kernel_size (int): Kernel size for the FFN layers in Transformer network.
            dropout_p (float): Dropout rate for the Transformer layers.
            language_emb_dim (int, optional): Dimension of the language embedding concatenated to the
                character embeddings for multilingual training. Defaults to None.
"""
super().__init__()
self.out_channels = out_channels
self.hidden_channels = hidden_channels
self.emb = nn.Embedding(n_vocab, hidden_channels)
nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
if language_emb_dim:
hidden_channels += language_emb_dim
self.encoder = RelativePositionTransformer(
in_channels=hidden_channels,
out_channels=hidden_channels,
hidden_channels=hidden_channels,
hidden_channels_ffn=hidden_channels_ffn,
num_heads=num_heads,
num_layers=num_layers,
kernel_size=kernel_size,
dropout_p=dropout_p,
layer_norm_type="2",
rel_attn_window_size=4,
)
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
def forward(self, x, x_lengths, lang_emb=None):
"""
Shapes:
- x: :math:`[B, T]`
- x_length: :math:`[B]`
"""
assert x.shape[0] == x_lengths.shape[0]
x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
# concat the lang emb in embedding chars
if lang_emb is not None:
x = torch.cat((x, lang_emb.transpose(2, 1).expand(x.size(0), x.size(1), -1)), dim=-1)
x = torch.transpose(x, 1, -1) # [b, h, t]
x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) # [b, 1, t]
x = self.encoder(x * x_mask, x_mask)
stats = self.proj(x) * x_mask
m, logs = torch.split(stats, self.out_channels, dim=1)
return x, m, logs, x_mask
class ResidualCouplingBlock(nn.Module):
def __init__(
self,
channels,
hidden_channels,
kernel_size,
dilation_rate,
num_layers,
dropout_p=0,
cond_channels=0,
mean_only=False,
):
assert channels % 2 == 0, "channels should be divisible by 2"
super().__init__()
self.half_channels = channels // 2
self.mean_only = mean_only
# input layer
self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
# coupling layers
self.enc = WN(
hidden_channels,
hidden_channels,
kernel_size,
dilation_rate,
num_layers,
dropout_p=dropout_p,
c_in_channels=cond_channels,
)
# output layer
# Initializing last layer to 0 makes the affine coupling layers
# do nothing at first. This helps with training stability
self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
self.post.weight.data.zero_()
self.post.bias.data.zero_()
def forward(self, x, x_mask, g=None, reverse=False):
"""
Note:
Set `reverse` to True for inference.
Shapes:
- x: :math:`[B, C, T]`
- x_mask: :math:`[B, 1, T]`
- g: :math:`[B, C, 1]`
"""
x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
h = self.pre(x0) * x_mask
h = self.enc(h, x_mask, g=g)
stats = self.post(h) * x_mask
if not self.mean_only:
m, log_scale = torch.split(stats, [self.half_channels] * 2, 1)
else:
m = stats
log_scale = torch.zeros_like(m)
if not reverse:
x1 = m + x1 * torch.exp(log_scale) * x_mask
x = torch.cat([x0, x1], 1)
logdet = torch.sum(log_scale, [1, 2])
return x, logdet
else:
x1 = (x1 - m) * torch.exp(-log_scale) * x_mask
x = torch.cat([x0, x1], 1)
return x
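    # (Illustrative note.) The two branches are exact inverses: the forward
    # pass maps x1 -> m + x1 * exp(log_scale), the reverse pass maps
    # x1 -> (x1 - m) * exp(-log_scale), and x0 passes through unchanged, which
    # is what makes the coupling layer invertible with a cheap log-determinant.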
class ResidualCouplingBlocks(nn.Module):
def __init__(
self,
channels: int,
hidden_channels: int,
kernel_size: int,
dilation_rate: int,
num_layers: int,
num_flows=4,
cond_channels=0,
):
"""Redisual Coupling blocks for VITS flow layers.
Args:
channels (int): Number of input and output tensor channels.
hidden_channels (int): Number of hidden network channels.
kernel_size (int): Kernel size of the WaveNet layers.
dilation_rate (int): Dilation rate of the WaveNet layers.
num_layers (int): Number of the WaveNet layers.
num_flows (int, optional): Number of Residual Coupling blocks. Defaults to 4.
cond_channels (int, optional): Number of channels of the conditioning tensor. Defaults to 0.
"""
super().__init__()
self.channels = channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.num_layers = num_layers
self.num_flows = num_flows
self.cond_channels = cond_channels
self.flows = nn.ModuleList()
for _ in range(num_flows):
self.flows.append(
ResidualCouplingBlock(
channels,
hidden_channels,
kernel_size,
dilation_rate,
num_layers,
cond_channels=cond_channels,
mean_only=True,
)
)
def forward(self, x, x_mask, g=None, reverse=False):
"""
Note:
Set `reverse` to True for inference.
Shapes:
- x: :math:`[B, C, T]`
- x_mask: :math:`[B, 1, T]`
- g: :math:`[B, C, 1]`
"""
if not reverse:
for flow in self.flows:
x, _ = flow(x, x_mask, g=g, reverse=reverse)
x = torch.flip(x, [1])
else:
for flow in reversed(self.flows):
x = torch.flip(x, [1])
x = flow(x, x_mask, g=g, reverse=reverse)
return x
class PosteriorEncoder(nn.Module):
def __init__(
self,
in_channels: int,
out_channels: int,
hidden_channels: int,
kernel_size: int,
dilation_rate: int,
num_layers: int,
cond_channels=0,
):
"""Posterior Encoder of VITS model.
::
x -> conv1x1() -> WaveNet() (non-causal) -> conv1x1() -> split() -> [m, s] -> sample(m, s) -> z
Args:
in_channels (int): Number of input tensor channels.
out_channels (int): Number of output tensor channels.
hidden_channels (int): Number of hidden channels.
kernel_size (int): Kernel size of the WaveNet convolution layers.
dilation_rate (int): Dilation rate of the WaveNet layers.
num_layers (int): Number of the WaveNet layers.
cond_channels (int, optional): Number of conditioning tensor channels. Defaults to 0.
"""
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.num_layers = num_layers
self.cond_channels = cond_channels
self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
self.enc = WN(
hidden_channels, hidden_channels, kernel_size, dilation_rate, num_layers, c_in_channels=cond_channels
)
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
def forward(self, x, x_lengths, g=None):
"""
Shapes:
- x: :math:`[B, C, T]`
- x_lengths: :math:`[B, 1]`
- g: :math:`[B, C, 1]`
"""
x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
x = self.pre(x) * x_mask
x = self.enc(x, x_mask, g=g)
stats = self.proj(x) * x_mask
mean, log_scale = torch.split(stats, self.out_channels, dim=1)
z = (mean + torch.randn_like(mean) * torch.exp(log_scale)) * x_mask
return z, mean, log_scale, x_mask
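# A minimal shape-check sketch (illustrative; the channel sizes below are
# assumptions, not values taken from this file):
#
#   enc = PosteriorEncoder(in_channels=513, out_channels=192, hidden_channels=192,
#                          kernel_size=5, dilation_rate=1, num_layers=16)
#   spec = torch.randn(2, 513, 100)
#   z, mean, log_scale, mask = enc(spec, torch.tensor([100, 80]))
#   # z, mean, log_scale: [2, 192, 100]; mask zeroes frames past each length.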
| 0 |
coqui_public_repos/STT/native_client | coqui_public_repos/STT/native_client/ctcdecode/scorer.cpp | #ifdef _MSC_VER
#include <io.h>
#include <stdlib.h>
#define NOMINMAX
#include <windows.h>
#define R_OK 4 /* Read permission. */
#define W_OK 2 /* Write permission. */
#define F_OK 0 /* Existence. */
#define access _access
#else /* _MSC_VER */
#include <unistd.h>
#endif
#include "scorer.h"
#include <fstream>
#include <iostream>
#include "kenlm/lm/config.hh"
#include "kenlm/lm/model.hh"
#include "kenlm/lm/state.hh"
#include "kenlm/lm/word_index.hh"
#include "kenlm/util/string_piece.hh"
#include "decoder_utils.h"
using namespace fl::lib::text;
using namespace std;
static const int32_t MAGIC = 'TRIE';
static const int32_t FILE_VERSION = 6;
Scorer::Scorer() {}
Scorer::~Scorer() {}
int
Scorer::init_from_filepath(const string& lm_path, const Alphabet& alphabet)
{
set_alphabet(alphabet);
return load_lm_filepath(lm_path);
}
int
Scorer::init_from_filepath(const string& lm_path,
const string& alphabet_config_path)
{
Alphabet a;
int err = a.init(alphabet_config_path.c_str());
if (err != 0) {
return err;
}
set_alphabet(a);
return load_lm_filepath(lm_path);
}
int
Scorer::init_from_buffer(const string& buffer, const Alphabet& alphabet)
{
set_alphabet(alphabet);
return load_lm_buffer(buffer);
}
int
Scorer::init_from_buffer(const string& buffer,
const string& alphabet_config_path)
{
Alphabet a;
int err = a.init(alphabet_config_path.c_str());
if (err != 0) {
return err;
}
set_alphabet(a);
return load_lm_buffer(buffer);
}
void
Scorer::set_alphabet(const Alphabet& alphabet)
{
alphabet_ = alphabet;
setup_char_map();
}
const Alphabet&
Scorer::get_alphabet() const
{
return alphabet_;
}
void
Scorer::setup_char_map()
{
// (Re-)Initialize character map
char_map_.clear();
SPACE_ID_ = alphabet_.GetSpaceLabel();
for (int i = 0; i < alphabet_.GetSize(); i++) {
    // The FST's initial state is state 0, so character indices start at 1;
    // reusing index 0 would collide with the initial state and produce
    // wrong decoding results.
char_map_[alphabet_.DecodeSingle(i)] = i + 1;
}
}
int
Scorer::load_lm_filepath(const string& path)
{
// Check if file is readable to avoid KenLM throwing an exception
const char* filename = path.c_str();
if (access(filename, R_OK) != 0) {
return STT_ERR_SCORER_UNREADABLE;
}
// Check if the file format is valid to avoid KenLM throwing an exception
lm::ngram::ModelType model_type;
if (!lm::ngram::RecognizeBinary(filename, model_type)) {
return STT_ERR_SCORER_INVALID_LM;
}
// Load the LM
lm::ngram::Config config;
config.load_method = util::LoadMethod::LAZY;
language_model_.reset(lm::ngram::LoadVirtual(filename, config));
max_order_ = language_model_->Order();
uint64_t trie_offset = language_model_->GetEndOfSearchOffset();
uint64_t package_size;
{
util::scoped_fd fd(util::OpenReadOrThrow(filename));
package_size = util::SizeFile(fd.get());
}
if (package_size <= trie_offset) {
// File ends without a trie structure
return STT_ERR_SCORER_NO_TRIE;
}
// Read metadata and trie from file
ifstream fin(filename, ios::binary);
fin.seekg(trie_offset);
return load_trie_mmap(fin, path);
}
int
Scorer::load_lm_buffer(const string& buffer)
{
// Load the LM
lm::ngram::Config config;
config.load_method = util::LoadMethod::LAZY;
language_model_.reset(
lm::ngram::LoadVirtual(buffer.c_str(), buffer.size(), config));
max_order_ = language_model_->Order();
uint64_t trie_offset = language_model_->GetEndOfSearchOffset();
stringstream stst(buffer);
stst.seekg(trie_offset);
return load_trie_buffer(stst);
}
int
Scorer::load_trie_buffer(stringstream& stream)
{
return load_trie_impl(stream, "", true);
}
int
Scorer::load_trie_mmap(ifstream& stream, const string& file_path)
{
return load_trie_impl(stream, file_path, false);
}
int
Scorer::load_trie_impl(basic_istream<char>& stream,
const string& file_path,
bool load_from_bytes)
{
int magic;
stream.read(reinterpret_cast<char*>(&magic), sizeof(magic));
if (magic != MAGIC) {
cerr << "Error: Can't parse scorer file, invalid header. Try updating "
"your scorer file."
<< endl;
return STT_ERR_SCORER_INVALID_TRIE;
}
int version;
stream.read(reinterpret_cast<char*>(&version), sizeof(version));
if (version != FILE_VERSION) {
cerr << "Error: Scorer file version mismatch (" << version
<< " instead of expected " << FILE_VERSION << "). ";
if (version < FILE_VERSION) {
cerr << "Update your scorer file.";
} else {
cerr << "Downgrade your scorer file or update your version of Coqui STT.";
}
cerr << endl;
return STT_ERR_SCORER_VERSION_MISMATCH;
}
stream.read(reinterpret_cast<char*>(&is_utf8_mode_), sizeof(is_utf8_mode_));
// Read hyperparameters from header
double alpha, beta;
stream.read(reinterpret_cast<char*>(&alpha), sizeof(alpha));
stream.read(reinterpret_cast<char*>(&beta), sizeof(beta));
reset_params(alpha, beta);
fst::FstReadOptions opt;
if (load_from_bytes) {
dictionary.reset(fst::ConstFst<fst::StdArc>::Read(stream, opt));
} else {
opt.mode = fst::FstReadOptions::MAP;
opt.source = file_path;
dictionary.reset(FstType::Read(stream, opt));
}
return STT_ERR_OK;
}
bool
Scorer::save_dictionary(const string& path, bool append_instead_of_overwrite)
{
ios::openmode om;
if (append_instead_of_overwrite) {
om = ios::in | ios::out | ios::binary | ios::ate;
} else {
om = ios::out | ios::binary;
}
fstream fout(path, om);
if (!fout || fout.bad()) {
cerr << "Error opening '" << path << "'" << endl;
return false;
}
fout.write(reinterpret_cast<const char*>(&MAGIC), sizeof(MAGIC));
if (fout.bad()) {
cerr << "Error writing MAGIC '" << path << "'" << endl;
return false;
}
fout.write(reinterpret_cast<const char*>(&FILE_VERSION),
sizeof(FILE_VERSION));
if (fout.bad()) {
cerr << "Error writing FILE_VERSION '" << path << "'" << endl;
return false;
}
fout.write(reinterpret_cast<const char*>(&is_utf8_mode_),
sizeof(is_utf8_mode_));
if (fout.bad()) {
cerr << "Error writing is_utf8_mode '" << path << "'" << endl;
return false;
}
fout.write(reinterpret_cast<const char*>(&alpha), sizeof(alpha));
if (fout.bad()) {
cerr << "Error writing alpha '" << path << "'" << endl;
return false;
}
fout.write(reinterpret_cast<const char*>(&beta), sizeof(beta));
if (fout.bad()) {
cerr << "Error writing beta '" << path << "'" << endl;
return false;
}
fst::FstWriteOptions opt;
opt.align = true;
opt.source = path;
return dictionary->Write(fout, opt);
}
bool
Scorer::is_scoring_boundary(PathTrie* prefix, size_t new_label)
{
if (is_utf8_mode()) {
if (prefix->character == -1) {
return false;
}
unsigned char first_byte;
int distance_to_boundary =
prefix->distance_to_codepoint_boundary(&first_byte, alphabet_);
int needed_bytes;
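    // (Illustrative note.) The shifts below classify the UTF-8 leading byte:
    //   11110xxx >> 3 == 0x1E  -> 4-byte sequence
    //   1110xxxx >> 4 == 0x0E  -> 3-byte sequence
    //   110xxxxx >> 5 == 0x06  -> 2-byte sequence
    //   0xxxxxxx >> 7 == 0x00  -> 1-byte (ASCII)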
if ((first_byte >> 3) == 0x1E) {
needed_bytes = 4;
} else if ((first_byte >> 4) == 0x0E) {
needed_bytes = 3;
} else if ((first_byte >> 5) == 0x06) {
needed_bytes = 2;
} else if ((first_byte >> 7) == 0x00) {
needed_bytes = 1;
} else {
      assert(false); // Invalid byte sequence; should be unreachable because
                     // the vocabulary/trie disallows it.
return false;
}
return distance_to_boundary == needed_bytes;
} else {
return new_label == SPACE_ID_;
}
}
double
Scorer::get_log_cond_prob(const vector<string>& words, bool bos, bool eos)
{
return get_log_cond_prob(words.begin(), words.end(), bos, eos);
}
double
Scorer::get_log_cond_prob(const vector<string>::const_iterator& begin,
const vector<string>::const_iterator& end,
bool bos,
bool eos)
{
const auto& vocab = language_model_->BaseVocabulary();
lm::ngram::State state_vec[2];
lm::ngram::State* in_state = &state_vec[0];
lm::ngram::State* out_state = &state_vec[1];
if (bos) {
language_model_->BeginSentenceWrite(in_state);
} else {
language_model_->NullContextWrite(in_state);
}
double cond_prob = 0.0;
for (auto it = begin; it != end; ++it) {
lm::WordIndex word_index = vocab.Index(*it);
// encounter OOV
if (word_index == lm::kUNK) {
return OOV_SCORE;
}
cond_prob = language_model_->BaseScore(in_state, word_index, out_state);
swap(in_state, out_state);
}
if (eos) {
cond_prob =
language_model_->BaseScore(in_state, vocab.EndSentence(), out_state);
}
  // KenLM reports log10 probabilities; dividing by NUM_FLT_LOGE (log10 of e)
  // converts the score to a natural-log probability.
return cond_prob / NUM_FLT_LOGE;
}
void
Scorer::reset_params(float alpha, float beta)
{
this->alpha = alpha;
this->beta = beta;
}
vector<string>
Scorer::split_labels_into_scored_units(const vector<unsigned int>& labels)
{
if (labels.empty())
return {};
string s = alphabet_.Decode(labels);
vector<string> words;
if (is_utf8_mode_) {
words = split_into_codepoints(s);
} else {
words = split_str(s, " ");
}
return words;
}
vector<string>
Scorer::make_ngram(PathTrie* prefix)
{
vector<string> ngram;
PathTrie* current_node = prefix;
PathTrie* new_node = nullptr;
for (int order = 0; order < max_order_; order++) {
if (!current_node || current_node->character == -1) {
break;
}
vector<unsigned int> prefix_vec;
if (is_utf8_mode_) {
new_node = current_node->get_prev_grapheme(prefix_vec, alphabet_);
} else {
new_node = current_node->get_prev_word(prefix_vec, alphabet_);
}
current_node = new_node->parent;
// reconstruct word
string word = alphabet_.Decode(prefix_vec);
ngram.push_back(word);
}
reverse(ngram.begin(), ngram.end());
return ngram;
}
void
Scorer::fill_dictionary(const unordered_set<string>& vocabulary)
{
// ConstFst is immutable, so we need to use a MutableFst to create the trie,
// and then we convert to a ConstFst for the decoder and for storing on disk.
fst::StdVectorFst dictionary;
// For each unigram convert to ints and put in trie
for (const auto& word : vocabulary) {
if (word != START_TOKEN && word != UNK_TOKEN && word != END_TOKEN) {
add_word_to_dictionary(
word, char_map_, is_utf8_mode_, SPACE_ID_ + 1, &dictionary);
}
}
/* Simplify FST
* This gets rid of "epsilon" transitions in the FST.
* These are transitions that don't require a string input to be taken.
* Getting rid of them is necessary to make the FST deterministic, but
* can greatly increase the size of the FST
*/
fst::RmEpsilon(&dictionary);
unique_ptr<fst::StdVectorFst> new_dict(new fst::StdVectorFst);
/* This makes the FST deterministic, meaning for any string input there's
* only one possible state the FST could be in. It is assumed our
* dictionary is deterministic when using it.
* (lest we'd have to check for multiple transitions at each state)
*/
fst::Determinize(dictionary, new_dict.get());
/* Finds the simplest equivalent fst. This is unnecessary but decreases
* memory usage of the dictionary
*/
fst::Minimize(new_dict.get());
// Now we convert the MutableFst to a ConstFst (Scorer::FstType) via its ctor
unique_ptr<FstType> converted(new FstType(*new_dict));
this->dictionary = move(converted);
}
LMStatePtr
Scorer::start(bool startWithNothing)
{
auto outState = make_shared<KenLMState>();
if (startWithNothing) {
language_model_->NullContextWrite(outState->ken());
} else {
language_model_->BeginSentenceWrite(outState->ken());
}
return outState;
}
pair<LMStatePtr, float>
Scorer::score(const LMStatePtr& state, const int usrTokenIdx)
{
if (usrTokenIdx < 0 || usrTokenIdx >= usrToLmIdxMap_.size()) {
throw runtime_error("[Scorer] Invalid user token index: " +
to_string(usrTokenIdx));
}
auto inState = static_pointer_cast<KenLMState>(state);
auto outState = inState->child<KenLMState>(usrTokenIdx);
float score = language_model_->BaseScore(
inState->ken(), usrToLmIdxMap_[usrTokenIdx], outState->ken());
return make_pair(move(outState), score);
}
pair<LMStatePtr, float>
Scorer::finish(const LMStatePtr& state)
{
auto inState = static_pointer_cast<KenLMState>(state);
auto outState = inState->child<KenLMState>(-1);
float score =
language_model_->BaseScore(inState->ken(),
language_model_->BaseVocabulary().EndSentence(),
outState->ken());
return make_pair(move(outState), score);
}
void
Scorer::load_words(const Dictionary& word_dict)
{
const auto& vocab = language_model_->BaseVocabulary();
usrToLmIdxMap_.resize(word_dict.indexSize());
for (int i = 0; i < word_dict.indexSize(); ++i) {
usrToLmIdxMap_[i] = vocab.Index(word_dict.getEntry(i));
}
}
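// A minimal usage sketch (illustrative; the file name and alpha/beta values
// are assumptions, not part of this file):
//
//   Scorer scorer;
//   if (scorer.init_from_filepath("kenlm.scorer", alphabet) == STT_ERR_OK) {
//     scorer.reset_params(0.93f, 1.18f);  // LM weight and word-insertion bonus
//   }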
| 0 |