Mar2Ding committed (verified)
Commit eeda3fe · Parent(s): e09ff81

Update README.md

Files changed (1):
  1. README.md +6 -7
README.md CHANGED
@@ -25,15 +25,15 @@ pipeline_tag: text-generation
 
 </div>
 
-**SongComposer** is a language large model (VLLM) based on [InternLM2](https://github.com/InternLM/InternLM) for lyric and melody composition in song generation.
+**SongComposer** is a language large model (LLM) based on [InternLM2](https://github.com/InternLM/InternLM) for lyric and melody composition in song generation.
 
 We release SongComposer series in two versions:
 
-- SongComposer_pretrain: The pretrained SongComposer with InternLM2 as the initialization of the LLM, gain basic knowledge on lyric and melody.
+- SongComposer_pretrain: The pretrained SongComposer with InternLM2 as the initialization of the LLM, gains basic knowledge of lyric and melody.
 - SongComposer_sft: The finetuned SongComposer for *instruction-following song generation* including lyric to melody, melody to lyric, song continuation, text to song.
 
 ### Import from Transformers
-To load the SongComposer_sft model using Transformers, use the following code:
+To load the SongComposer_pretrain model using Transformers, use the following code:
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 ckpt_path = "Mar2Ding/songcomposer_pretrain"
@@ -41,12 +41,11 @@ tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
 model = AutoModel.from_pretrained(ckpt_path, trust_remote_code=True).cuda().half()
 from modeling_internlm2 import inference_pretrain
 prompt = '<bop> Total 7 lines. The first line:可,<D4>,<137>,<79>|惜,<D#4>,<137>,<79>|这,<F4>,<137>,<88>|是,<F4>,<121>,<79>|属,<F4>,<121>,<79>|于,<D#4>,<214>,<88>|你,<D#4>,<141>,<79>|的,<D4>,<130>,<79>|风,<C4>,<151>,<79>|景,<A#3> <F3>,<181><137>,<79>\n'
-inference_pretrain(prompt, tokenizer, model)
+model.inference_pretrain(prompt, tokenizer, model)
 ```
 
 ### 通过 Transformers 加载
-通过以下的代码加载 SongComposer_sft 模型
-
+通过以下的代码加载 SongComposer_pretrain 模型
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 ckpt_path = "Mar2Ding/songcomposer_pretrain"
@@ -54,7 +53,7 @@ tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
 model = AutoModel.from_pretrained(ckpt_path, trust_remote_code=True).cuda().half()
 from modeling_internlm2 import inference_pretrain
 prompt = '<bop> Total 7 lines. The first line:可,<D4>,<137>,<79>|惜,<D#4>,<137>,<79>|这,<F4>,<137>,<88>|是,<F4>,<121>,<79>|属,<F4>,<121>,<79>|于,<D#4>,<214>,<88>|你,<D#4>,<141>,<79>|的,<D4>,<130>,<79>|风,<C4>,<151>,<79>|景,<A#3> <F3>,<181><137>,<79>\n'
-inference_pretrain(prompt, tokenizer, model)
+model.inference_pretrain(prompt, tokenizer, model)
 ```
 
 ### Open Source License
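The `prompt` string in both snippets packs one syllable per `|`-separated unit, with each unit holding the character followed by three `<...>` token fields (a pitch token such as `<D4>`, then two numeric tokens; a unit may carry several tokens per field, e.g. `<A#3> <F3>`). A minimal parser sketch of that layout — the function name and the field semantics are my assumptions, only the token layout is taken from the README example:

```python
import re

def parse_song_prompt(prompt: str):
    """Split a SongComposer-style prompt line into per-syllable tuples.

    Returns a list of (syllable, fields) pairs, where fields is a list of
    token lists, one per comma-separated <...> field. Field meanings
    (pitch / duration / etc.) are assumed for illustration only.
    """
    # Drop the leading control text ("<bop> Total 7 lines. The first line:").
    body = prompt.split("line:", 1)[1].strip()
    units = []
    for unit in body.split("|"):
        syllable, *token_fields = unit.split(",")
        # A field may hold several <...> tokens, e.g. "<A#3> <F3>".
        fields = [re.findall(r"<([^>]+)>", field) for field in token_fields]
        units.append((syllable, fields))
    return units

example = ('<bop> Total 7 lines. The first line:可,<D4>,<137>,<79>|'
           '惜,<D#4>,<137>,<79>|景,<A#3> <F3>,<181><137>,<79>\n')
for syllable, fields in parse_song_prompt(example):
    print(syllable, fields)
```

The last unit shows why each field is a list: the syllable 景 spans two pitch tokens and two duration tokens.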