{
  "16-mixed is recommended for 10+ series GPU": "10+ 系列 GPU 建议使用 16-mixed",
  "5 to 10 seconds of reference audio, useful for specifying speaker.": "5 到 10 秒的参考音频,适用于指定音色。",
  "A text-to-speech model based on VQ-GAN and Llama developed by [Fish Audio](https://fish.audio).": "由 [Fish Audio](https://fish.audio) 研发的基于 VQ-GAN 和 Llama 的多语种语音合成.",
  "Accumulate Gradient Batches": "梯度累积批次",
  "Add to Processing Area": "加入处理区",
  "Added path successfully!": "添加路径成功!",
  "Advanced Config": "高级参数",
  "Base LLAMA Model": "基础 LLAMA 模型",
  "Batch Inference": "批量推理",
  "Batch Size": "批次大小",
  "Changing with the Model Path": "随模型路径变化",
  "Chinese": "中文",
  "Compile Model": "编译模型",
  "Compile the model can significantly reduce the inference time, but will increase cold start time": "编译模型可以显著减少推理时间,但会增加冷启动时间",
  "Copy": "复制",
  "Data Preprocessing": "数据预处理",
  "Data Preprocessing Path": "数据预处理路径",
  "Data Source": "数据源",
  "Decoder Model Config": "解码器模型配置",
  "Decoder Model Path": "解码器模型路径",
  "Disabled": "禁用",
  "Enable Reference Audio": "启用参考音频",
  "English": "英文",
  "Error Message": "错误信息",
  "File Preprocessing": "文件预处理",
  "Generate": "生成",
  "Generated Audio": "音频",
  "If there is no corresponding text for the audio, apply ASR for assistance, support .txt or .lab format": "如果音频没有对应的文本,可以应用 ASR 辅助,支持 .txt 或 .lab 格式",
  "Infer interface is closed": "推理界面已关闭",
  "Inference Configuration": "推理配置",
  "Inference Server Configuration": "推理服务器配置",
  "Inference Server Error": "推理服务器错误",
  "Inferring interface is launched at {}": "推理界面已在 {} 上启动",
  "Initial Learning Rate": "初始学习率",
  "Input Audio & Source Path for Transcription": "输入音频和转录源路径",
  "Input Text": "输入文本",
  "Invalid path: {}": "无效路径: {}",
  "It is recommended to use CUDA, if you have low configuration, use CPU": "建议使用 CUDA,如果配置较低,使用 CPU",
  "Iterative Prompt Length, 0 means off": "迭代提示长度,0 表示关闭",
  "Japanese": "日文",
  "LLAMA Configuration": "LLAMA 配置",
  "LLAMA Model Config": "LLAMA 模型配置",
  "LLAMA Model Path": "LLAMA 模型路径",
  "Labeling Device": "标注加速设备",
  "LoRA Model to be merged": "要合并的 LoRA 模型",
  "Maximum Audio Duration": "最大音频时长",
  "Maximum Length per Sample": "每个样本的最大长度",
  "Maximum Training Steps": "最大训练步数",
  "Maximum tokens per batch, 0 means no limit": "每批最大令牌数,0 表示无限制",
  "Merge": "合并",
  "Merge LoRA": "合并 LoRA",
  "Merge successfully": "合并成功",
  "Minimum Audio Duration": "最小音频时长",
  "Model Output Path": "模型输出路径",
  "Model Size": "模型规模",
  "Move": "移动",
  "Move files successfully": "移动文件成功",
  "No audio generated, please check the input text.": "没有生成音频,请检查输入文本.",
  "No selected options": "没有选择的选项",
  "Number of Workers": "数据加载进程数",
  "Open Inference Server": "打开推理服务器",
  "Open Labeler WebUI": "打开标注工具",
  "Open Tensorboard": "打开 Tensorboard",
  "Opened labeler in browser": "在浏览器中打开标注工具",
  "Optional Label Language": "[可选] 标注语言",
  "Optional online ver": "[可选] 使用在线版",
  "Output Path": "输出路径",
  "Path error, please check the model file exists in the corresponding path": "路径错误,请检查模型文件是否存在于相应路径",
  "Precision": "精度",
  "Probability of applying Speaker Condition": "应用说话人条件的概率",
  "Put your text here.": "在此处输入文本.",
  "Reference Audio": "参考音频",
  "Reference Text": "参考文本",
  "Related code and weights are released under CC BY-NC-SA 4.0 License.": "相关代码和权重使用 CC BY-NC-SA 4.0 许可证发布.",
  "Remove Selected Data": "移除选中数据",
  "Removed path successfully!": "移除路径成功!",
  "Repetition Penalty": "重复惩罚",
  "Save model every n steps": "每 n 步保存模型",
  "Select LLAMA ckpt": "选择 LLAMA 检查点",
  "Select VITS ckpt": "选择 VITS 检查点",
  "Select VQGAN ckpt": "选择 VQGAN 检查点",
  "Select source file processing method": "选择源文件处理方法",
  "Select the model to be trained (Depending on the Tab page you are on)": "根据您所在的选项卡页面选择要训练的模型",
  "Selected: {}": "已选择: {}",
  "Speaker": "说话人",
  "Speaker is identified by the folder name": "自动根据父目录名称识别说话人",
  "Start Training": "开始训练",
  "Streaming Audio": "流式音频",
  "Streaming Generate": "流式合成",
  "Tensorboard Host": "Tensorboard 监听地址",
  "Tensorboard Log Path": "Tensorboard 日志路径",
  "Tensorboard Port": "Tensorboard 端口",
  "Tensorboard interface is closed": "Tensorboard 界面已关闭",
  "Tensorboard interface is launched at {}": "Tensorboard 界面已在 {} 上启动",
  "Text is too long, please keep it under {} characters.": "文本太长,请保持在 {} 个字符以内.",
  "The path of the input folder on the left or the filelist. Whether checked or not, it will be used for subsequent training in this list.": "左侧输入文件夹的路径或文件列表。无论是否选中,都将在此列表中用于后续训练.",
  "Training Configuration": "训练配置",
  "Training Error": "训练错误",
  "Training stopped": "训练已停止",
  "Type name of the speaker": "输入说话人的名称",
  "Type the path or select from the dropdown": "输入路径或从下拉菜单中选择",
  "Use LoRA": "使用 LoRA",
  "Use LoRA can save GPU memory, but may reduce the quality of the model": "使用 LoRA 可以节省 GPU 内存,但可能会降低模型质量",
  "Use filelist": "使用文件列表",
  "Use large for 10G+ GPU, medium for 5G, small for 2G": "10G+ GPU 使用 large, 5G 使用 medium, 2G 使用 small",
  "VITS Configuration": "VITS 配置",
  "VQGAN Configuration": "VQGAN 配置",
  "Validation Batch Size": "验证批次大小",
  "View the status of the preprocessing folder (use the slider to control the depth of the tree)": "查看预处理文件夹的状态 (使用滑块控制树的深度)",
  "We are not responsible for any misuse of the model, please consider your local laws and regulations before using it.": "我们不对模型的任何滥用负责,请在使用之前考虑您当地的法律法规.",
  "WebUI Host": "WebUI 监听地址",
  "WebUI Port": "WebUI 端口",
  "Whisper Model": "Whisper 模型",
  "You can find the source code [here](https://github.com/fishaudio/fish-speech) and models [here](https://huggingface.co/fishaudio/fish-speech-1).": "你可以在 [这里](https://github.com/fishaudio/fish-speech) 找到源代码和 [这里](https://huggingface.co/fishaudio/fish-speech-1) 找到模型.",
  "bf16-true is recommended for 30+ series GPU, 16-mixed is recommended for 10+ series GPU": "30+ 系列 GPU 建议使用 bf16-true, 10+ 系列 GPU 建议使用 16-mixed",
  "latest": "最近的检查点",
  "new": "创建新的检查点",
  "Realtime Transform Text": "实时规范化文本",
  "Normalization Result Preview (Currently Only Chinese)": "规范化结果预览",
  "Text Normalization": "文本规范化",
  "Select Example Audio": "选择参考音频"
}