---
license: llama2
---
### Description
This is a translation model that leverages the high Japanese proficiency of [tokyotech-llm/Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf). It is focused primarily on English-to-Japanese translation, and partially supports translation from other languages into Japanese.
The model has been fine-tuned with a 4K context and is mainly aimed at translating relatively long texts, ranging from about 10 to several thousand tokens.
(Multilingual translation and long-context translation become unstable when the model is quantized.)
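
If you do quantize the model, keep the caveat above in mind. For reference only, a minimal sketch of 4-bit loading with bitsandbytes (the `BitsAndBytesConfig` settings here are assumptions, not settings published for this model):
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical 4-bit configuration; expect degraded multilingual and
# long-context translation quality compared to bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "aixsatoshi/Honyaku-13b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("aixsatoshi/Honyaku-13b")
```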
### Prompt
An XML-like tag-based instruction format has been adopted.
- Advantages
  - The instruction scaffold consumes few tokens (see the sketch below)
  - The model follows the instructions well
- Disadvantages
  - Weaker at handling text that itself contains tags
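
To illustrate the low token overhead, a quick sketch that measures how many tokens the instruction scaffold itself consumes (the exact count depends on the tokenizer):
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aixsatoshi/Honyaku-13b")

# The fixed parts of the prompt template, with no payload text.
scaffold = "<english>:  <NL>\n<japanese>: "
overhead = len(tokenizer.encode(scaffold, add_special_tokens=False))
print(f"template overhead: {overhead} tokens")
```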
## Usage
### Prompt format: English to Japanese
```
<english>: {} <NL>
<japanese>: {}
```
### Prompt format: Other language to Japanese
The same `<english>` tag is used even when the source text is not English:
```
<english>: {} <NL>
<japanese>: {}
```
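
A minimal sketch of building such a prompt (the French input here is a hypothetical example; per the format above, the `<english>` tag is reused for non-English sources):
```
# Hypothetical non-English source text.
src = "Bonjour, comment allez-vous ?"

# Same scaffold as for English-to-Japanese translation.
prompt = f"<english>: {src} <NL>\n<japanese>:"
```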
### Prompt format: Japanese to English
```
not supported
```
For long inputs, using `TextStreamer` is recommended:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name = "aixsatoshi/Honyaku-13b"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Define the streamer
streamer = TextStreamer(tokenizer)
# Define the English prompt
english_prompt = """
In an era marked by rapid globalization, the intricate interplay between international law, economic policies, and political dynamics has become increasingly complex.
Legal frameworks, once confined within national borders, now stretch across continents, necessitating a nuanced understanding of transnational legislation and treaties.
As multinational corporations navigate the labyrinthine maze of global markets, economic theories that underpin currency fluctuations, trade imbalances, and fiscal policies are more pertinent than ever.
Central to these economic considerations is the concept of market equilibrium, a delicate balance affected by myriad factors including consumer behavior, governmental regulations, and global crises.
Politically, the landscape is equally labyrinthine. Ideological shifts and the resurgence of nationalism have reshaped diplomatic relations, with international agreements and alliances being tested under the strain of geopolitical tensions.
The role of supranational entities like the United Nations and the European Union in mediating these conflicts is of paramount importance, as is the need for diplomatic finesse in an increasingly multipolar world.
Furthermore, the intersection of politics and economics is evident in the debate over economic sanctions and their efficacy in swaying political decisions.
In this context, understanding the subtleties of rhetoric used in political discourse, and how it interweaves with legal jargon and economic terminology, is crucial.
For instance, the rhetoric surrounding fiscal austerity measures often intertwines with legal discourse on budgetary legislation and economic debates on inflation control.
Similarly, discussions on constitutional amendments are frequently laden with political undertones, reflecting broader societal issues and ideological divides.
This convergence of legal, economic, and political vernacular presents a unique challenge for machine translation systems, demanding not only linguistic accuracy but also a deep comprehension of the nuanced interplay of these disciplines.
"""
# Prepare the prompt for English to Japanese translation
prompt = f"<english>: {english_prompt} <NL>\n\n<japanese>:"
# Tokenize the input text and move it to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generate the output using the model and streamer
output = model.generate(**inputs, max_new_tokens=4096, do_sample=True, top_k=20, top_p=0.95, streamer=streamer)
```
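
If you need to consume the translation incrementally in your own code (for example, in a UI) rather than print it to stdout, `transformers` also provides `TextIteratorStreamer`. A sketch reusing the model, tokenizer, and prompt from the example above:
```
from threading import Thread
from transformers import TextIteratorStreamer

# Iterator-style streamer; skip_prompt avoids re-emitting the input prompt.
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Run generation on a background thread so the main thread can consume tokens.
thread = Thread(
    target=model.generate,
    kwargs=dict(**inputs, max_new_tokens=4096, do_sample=True, top_k=20, top_p=0.95, streamer=streamer),
)
thread.start()

for chunk in streamer:
    # Each chunk is a decoded piece of the Japanese translation.
    print(chunk, end="", flush=True)
thread.join()
```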