---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# How to use?

- We use Unsloth for faster inference and load the adapter:

```python
from unsloth import FastLanguageModel

max_seq_length = 8192
dtype = None         # None = auto-detect the best dtype for your GPU
load_in_4bit = True  # 4-bit quantization to reduce memory usage
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "patched-codes/Llama-3.2-1B-FastApply",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference
```

- The model takes the original code and an update snippet as input and generates the final updated code:
  
```python
original_code = """import React from 'react';
import { Loader } from 'lucide-react';

interface ButtonProps {
  text: string;
  onClick?: () => void;
  loading?: boolean;
  disabled?: boolean;
  icon?: React.ReactNode;
}

const Button: React.FC<ButtonProps> = ({
  text,
  onClick,
  loading = false,
  disabled = false,
  icon
}) => (
  <button
    className="bg-blue-500 text-white p-2 rounded flex items-center gap-2"
    onClick={onClick}
    disabled={disabled || loading}
  >
    {loading ? <Loader className="animate-spin" /> : icon}
    {text}
  </button>
);

export default Button;
"""

update_snippet = """interface ButtonProps {
  variant?: 'primary' | 'secondary' | 'danger';
  size?: 'small' | 'medium' | 'large';
  // ... other props
}

const Button: React.FC<ButtonProps> = ({
  variant = 'primary',
  size = 'medium',
  // ... other props
}) => (
  <button
    className={`flex items-center gap-2 rounded ${
      size === 'small' ? 'p-1 text-sm' :
      size === 'large' ? 'p-3 text-lg' :
      'p-2 text-md'
    } ${
      variant === 'primary' ? 'bg-blue-500 text-white' :
      variant === 'secondary' ? 'bg-gray-500 text-white' :
      'bg-red-500 text-white'
    }`}
    // ... other attributes
  >
    // ... existing code ...
  </button>
);
"""
```

- Prepare your input following the prompt structure:
  
```python
input_text = f"""
Merge all changes from the <update> snippet into the <code> below.
- Preserve the code's structure, order, comments, and indentation exactly.
- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.
- Do not include any additional text, explanations, placeholders, ellipses, or code fences.

<code>{original_code}</code>

<update>{update_snippet}</update>

Provide the complete updated code.
"""

messages = [
    {"role": "system", "content": "You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated."},
    {"role": "user", "content": input_text.strip()},
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer, skip_prompt = True)
output = model.generate(
    input_ids = inputs,
    streamer = text_streamer,
    max_new_tokens = 8192,
    use_cache = True,
    temperature = 1.5,
    min_p = 0.1,
)

response = tokenizer.decode(output[0][len(inputs[0]):])

updated_code = response.split("<updated-code>")[1].split("</updated-code>")[0]
```
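Note that the `split`-based extraction above raises an `IndexError` if the model stops early and never emits the closing tag. A more defensive variant (the helper name `extract_updated_code` is our own, not part of the model's API) might look like this:

```python
def extract_updated_code(response: str) -> str:
    """Extract the code between <updated-code> tags, tolerating a missing closing tag."""
    open_tag, close_tag = "<updated-code>", "</updated-code>"
    start = response.find(open_tag)
    if start == -1:
        # No opening tag at all: fall back to the raw response
        return response.strip()
    start += len(open_tag)
    end = response.find(close_tag, start)
    if end == -1:
        # Generation was cut off before the closing tag
        return response[start:].strip()
    return response[start:end].strip()
```

Then `updated_code = extract_updated_code(response)` works whether or not the closing tag made it into the output.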

# Uploaded model

- **Developed by:** patched-codes
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)