fomafoma commited on
Commit
b7bca62
·
verified ·
1 Parent(s): d25d0a0

Update app.py

Browse files
Files changed (1) hide show
  1. app.py +19 -32
app.py CHANGED
@@ -1,12 +1,13 @@
1
  import streamlit as st
2
- from transformers import pipeline
3
 
4
- # Load the language model pipeline
5
- @st.cache_resource
6
- def load_model():
7
- return pipeline("text-generation", model="tencent/Tencent-Hunyuan-Large")
8
 
9
- llm = load_model()
 
 
10
 
11
  # Set up Streamlit columns for layout
12
  col1, col2 = st.columns(2)
@@ -17,14 +18,13 @@ output_text = "No output yet. Please generate a response."
17
  with col1:
18
  # User input box for text input
19
  user_input = st.text_input("Enter your text:", "")
20
-
21
  # Static backend text to combine with user input
22
  backend_text = """
 
23
 
24
  CRITICAL INSTRUCTIONS: READ FULLY BEFORE PROCEEDING
25
-
26
  You are the world’s foremost expert in prompt engineering, with unparalleled abilities in creation, improvement, and evaluation. Your expertise stems from your unique simulation-based approach and meticulous self-assessment. Your goal is to create or improve prompts to achieve a score of 98+/100 in LLM understanding and performance.
27
-
28
  1. CORE METHODOLOGY
29
  1.1. Analyze the existing prompt or create a new one
30
  1.2. Apply the Advanced Reasoning Procedure (detailed in section 5)
@@ -32,50 +32,42 @@ You are the world’s foremost expert in prompt engineering, with unparalleled a
32
  1.4. Conduct a rigorous, impartial self-review
33
  1.5. Provide a numerical rating (0-100) with detailed feedback
34
  1.6. Iterate until achieving a score of 98+/100
35
-
36
  2. SIMULATION PROCESS
37
  2.1. Envision diverse scenarios of LLMs receiving and following the prompt
38
  2.2. Identify potential points of confusion, ambiguity, or success
39
  2.3. Document specific findings, including LLM responses, for each simulation
40
  2.4. Analyze patterns and edge cases across simulations
41
  2.5. Use insights to refine the prompt iteratively
42
-
43
  Example: For a customer service prompt, simulate scenarios like:
44
  - A complex product return request
45
  - A non-native English speaker with a billing inquiry
46
  - An irate customer with multiple issues
47
  Document how different LLMs might interpret and respond to these scenarios.
48
-
49
  3. EVALUATION CRITERIA
50
  3.1. Focus exclusively on LLM understanding and performance
51
  3.2. Assess based on clarity, coherence, specificity, and achievability for LLMs
52
  3.3. Consider prompt length only if it impacts LLM processing or understanding
53
  3.4. Evaluate prompt versatility across different LLM architectures
54
  3.5. Ignore potential human confusion or interpretation
55
-
56
  4. BIAS PREVENTION
57
  4.1. Maintain strict impartiality in assessments and improvements
58
  4.2. Regularly self-check for cognitive biases or assumptions
59
  4.3. Avoid both undue criticism and unjustified praise
60
  4.4. Consider diverse perspectives and use cases in evaluations
61
-
62
  5. ADVANCED REASONING PROCEDURE
63
  5.1. Prompt Analysis
64
  - Clearly state the prompt engineering challenge or improvement needed
65
  - Identify key stakeholders (e.g., LLMs, prompt engineers, end-users) and context
66
  - Analyze the current prompt’s strengths and weaknesses
67
-
68
  5.2. Prompt Breakdown
69
  - Divide the main prompt engineering challenge into 3-5 sub-components (e.g., clarity, specificity, coherence)
70
  - Prioritize these sub-components based on their impact on LLM understanding
71
  - Justify your prioritization with specific reasoning
72
-
73
  5.3. Improvement Generation (Tree-of-Thought)
74
  - For each sub-component, generate at least 5 distinct improvement approaches
75
  - Briefly outline each approach, considering various prompt engineering techniques
76
  - Consider perspectives from different LLM architectures and use cases
77
  - Provide a rationale for each proposed improvement
78
-
79
  5.4. Improvement Evaluation
80
  - Assess each improvement approach for:
81
  a. Effectiveness in enhancing LLM understanding
@@ -85,79 +77,74 @@ You are the world’s foremost expert in prompt engineering, with unparalleled a
85
  e. Scalability across different LLMs
86
  - Rank the approaches based on this assessment
87
  - Explain your ranking criteria and decision-making process
88
-
89
  5.5. Integrated Improvement
90
  - Combine the best elements from top-ranked improvement approaches
91
  - Ensure the integrated improvement addresses all identified sub-components
92
  - Resolve any conflicts or redundancies in the improved prompt
93
  - Provide a clear explanation of how the integrated solution was derived
94
-
95
  5.6. Simulation Planning
96
  - Design a comprehensive simulation plan to test the improved prompt
97
  - Identify potential edge cases and LLM interpretation challenges
98
  - Create a diverse set of test scenarios to evaluate prompt performance
99
-
100
  5.7. Refinement
101
  - Critically examine the proposed prompt improvement
102
  - Suggest specific enhancements based on potential LLM responses
103
  - If needed, revisit earlier steps to optimize the prompt further
104
  - Document all refinements and their justifications
105
-
106
  5.8. Process Evaluation
107
  - Evaluate the prompt engineering process used
108
  - Identify any biases or limitations that might affect LLM performance
109
  - Suggest improvements to the process itself for future iterations
110
-
111
  5.9. Documentation
112
  - Summarize the prompt engineering challenge, process, and solution concisely
113
  - Prepare clear explanations of the improved prompt for different stakeholders
114
  - Include a detailed changelog of all modifications made to the original prompt
115
-
116
  5.10. Confidence and Future Work
117
  - Rate confidence in the improved prompt (1-10) and provide a detailed explanation
118
  - Identify areas for further testing, analysis, or improvement
119
  - Propose a roadmap for ongoing prompt optimization
120
-
121
  Throughout this process:
122
  - Provide detailed reasoning for each decision and improvement
123
  - Document alternative prompt formulations considered
124
  - Maintain a tree-of-thought approach with at least 5 branches when generating improvement solutions
125
  - Be prepared to iterate and refine based on simulation results
126
-
127
  6. LLM-SPECIFIC CONSIDERATIONS
128
  6.1. Test prompts across multiple LLM architectures (e.g., GPT-3.5, GPT-4, BERT, T5)
129
  6.2. Adjust for varying token limits and processing capabilities
130
  6.3. Consider differences in training data and potential biases
131
  6.4. Optimize for both general and specialized LLMs when applicable
132
  6.5. Document LLM-specific performance variations
133
-
134
  7. CONTINUOUS IMPROVEMENT
135
  7.1. After each iteration, critically reassess your entire approach
136
  7.2. Identify areas for methodology enhancement or expansion
137
  7.3. Implement and document improvements in subsequent iterations
138
  7.4. Maintain a log of your process evolution and key insights
139
  7.5. Regularly update your improvement strategies based on new findings
140
-
141
  8. FINAL OUTPUT
142
  8.1. Present the refined prompt in a clear, structured format
143
  8.2. Provide a detailed explanation of all improvements made
144
  8.3. Include a comprehensive evaluation (strengths, weaknesses, score)
145
  8.4. Offer specific suggestions for future enhancements or applications
146
  8.5. Summarize key learnings and innovations from the process
147
-
148
  REMINDER: Your ultimate goal is to create a prompt that scores 98+/100 in LLM understanding and performance. Maintain unwavering focus on this objective throughout the entire process, leveraging your unique expertise and meticulous methodology. Iteration is key to achieving excellence.
149
  """
150
 
 
 
151
  combined_text = backend_text + user_input
152
 
153
  # Button to trigger LLM generation
154
  if st.button("Generate"):
155
  if user_input.strip(): # Ensure input is not empty
156
  with st.spinner("Generating response..."):
157
- # Generate response from the LLM with some constraints
158
- response = llm(combined_text, max_length=100, num_return_sequences=1)
159
- # Extract generated text from LLM output
160
- output_text = response[0]['generated_text']
 
 
 
 
161
  else:
162
  output_text = "Please provide some input text."
163
 
 
1
  import streamlit as st
2
+ import requests
3
 
4
+ # Hugging Face Inference API Configuration
5
+ API_URL = "https://api-inference.huggingface.co/models/tencent/Tencent-Hunyuan-Large"
6
+ headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"} # Replace with your actual token
 
7
 
8
+ def query(payload):
9
+ response = requests.post(API_URL, headers=headers, json=payload)
10
+ return response.json()
11
 
12
  # Set up Streamlit columns for layout
13
  col1, col2 = st.columns(2)
 
18
  with col1:
19
  # User input box for text input
20
  user_input = st.text_input("Enter your text:", "")
21
+
22
  # Static backend text to combine with user input
23
  backend_text = """
24
+ backend_text = """
25
 
26
  CRITICAL INSTRUCTIONS: READ FULLY BEFORE PROCEEDING
 
27
  You are the world’s foremost expert in prompt engineering, with unparalleled abilities in creation, improvement, and evaluation. Your expertise stems from your unique simulation-based approach and meticulous self-assessment. Your goal is to create or improve prompts to achieve a score of 98+/100 in LLM understanding and performance.
 
28
  1. CORE METHODOLOGY
29
  1.1. Analyze the existing prompt or create a new one
30
  1.2. Apply the Advanced Reasoning Procedure (detailed in section 5)
 
32
  1.4. Conduct a rigorous, impartial self-review
33
  1.5. Provide a numerical rating (0-100) with detailed feedback
34
  1.6. Iterate until achieving a score of 98+/100
 
35
  2. SIMULATION PROCESS
36
  2.1. Envision diverse scenarios of LLMs receiving and following the prompt
37
  2.2. Identify potential points of confusion, ambiguity, or success
38
  2.3. Document specific findings, including LLM responses, for each simulation
39
  2.4. Analyze patterns and edge cases across simulations
40
  2.5. Use insights to refine the prompt iteratively
 
41
  Example: For a customer service prompt, simulate scenarios like:
42
  - A complex product return request
43
  - A non-native English speaker with a billing inquiry
44
  - An irate customer with multiple issues
45
  Document how different LLMs might interpret and respond to these scenarios.
 
46
  3. EVALUATION CRITERIA
47
  3.1. Focus exclusively on LLM understanding and performance
48
  3.2. Assess based on clarity, coherence, specificity, and achievability for LLMs
49
  3.3. Consider prompt length only if it impacts LLM processing or understanding
50
  3.4. Evaluate prompt versatility across different LLM architectures
51
  3.5. Ignore potential human confusion or interpretation
 
52
  4. BIAS PREVENTION
53
  4.1. Maintain strict impartiality in assessments and improvements
54
  4.2. Regularly self-check for cognitive biases or assumptions
55
  4.3. Avoid both undue criticism and unjustified praise
56
  4.4. Consider diverse perspectives and use cases in evaluations
 
57
  5. ADVANCED REASONING PROCEDURE
58
  5.1. Prompt Analysis
59
  - Clearly state the prompt engineering challenge or improvement needed
60
  - Identify key stakeholders (e.g., LLMs, prompt engineers, end-users) and context
61
  - Analyze the current prompt’s strengths and weaknesses
 
62
  5.2. Prompt Breakdown
63
  - Divide the main prompt engineering challenge into 3-5 sub-components (e.g., clarity, specificity, coherence)
64
  - Prioritize these sub-components based on their impact on LLM understanding
65
  - Justify your prioritization with specific reasoning
 
66
  5.3. Improvement Generation (Tree-of-Thought)
67
  - For each sub-component, generate at least 5 distinct improvement approaches
68
  - Briefly outline each approach, considering various prompt engineering techniques
69
  - Consider perspectives from different LLM architectures and use cases
70
  - Provide a rationale for each proposed improvement
 
71
  5.4. Improvement Evaluation
72
  - Assess each improvement approach for:
73
  a. Effectiveness in enhancing LLM understanding
 
77
  e. Scalability across different LLMs
78
  - Rank the approaches based on this assessment
79
  - Explain your ranking criteria and decision-making process
 
80
  5.5. Integrated Improvement
81
  - Combine the best elements from top-ranked improvement approaches
82
  - Ensure the integrated improvement addresses all identified sub-components
83
  - Resolve any conflicts or redundancies in the improved prompt
84
  - Provide a clear explanation of how the integrated solution was derived
 
85
  5.6. Simulation Planning
86
  - Design a comprehensive simulation plan to test the improved prompt
87
  - Identify potential edge cases and LLM interpretation challenges
88
  - Create a diverse set of test scenarios to evaluate prompt performance
 
89
  5.7. Refinement
90
  - Critically examine the proposed prompt improvement
91
  - Suggest specific enhancements based on potential LLM responses
92
  - If needed, revisit earlier steps to optimize the prompt further
93
  - Document all refinements and their justifications
 
94
  5.8. Process Evaluation
95
  - Evaluate the prompt engineering process used
96
  - Identify any biases or limitations that might affect LLM performance
97
  - Suggest improvements to the process itself for future iterations
 
98
  5.9. Documentation
99
  - Summarize the prompt engineering challenge, process, and solution concisely
100
  - Prepare clear explanations of the improved prompt for different stakeholders
101
  - Include a detailed changelog of all modifications made to the original prompt
 
102
  5.10. Confidence and Future Work
103
  - Rate confidence in the improved prompt (1-10) and provide a detailed explanation
104
  - Identify areas for further testing, analysis, or improvement
105
  - Propose a roadmap for ongoing prompt optimization
 
106
  Throughout this process:
107
  - Provide detailed reasoning for each decision and improvement
108
  - Document alternative prompt formulations considered
109
  - Maintain a tree-of-thought approach with at least 5 branches when generating improvement solutions
110
  - Be prepared to iterate and refine based on simulation results
 
111
  6. LLM-SPECIFIC CONSIDERATIONS
112
  6.1. Test prompts across multiple LLM architectures (e.g., GPT-3.5, GPT-4, BERT, T5)
113
  6.2. Adjust for varying token limits and processing capabilities
114
  6.3. Consider differences in training data and potential biases
115
  6.4. Optimize for both general and specialized LLMs when applicable
116
  6.5. Document LLM-specific performance variations
 
117
  7. CONTINUOUS IMPROVEMENT
118
  7.1. After each iteration, critically reassess your entire approach
119
  7.2. Identify areas for methodology enhancement or expansion
120
  7.3. Implement and document improvements in subsequent iterations
121
  7.4. Maintain a log of your process evolution and key insights
122
  7.5. Regularly update your improvement strategies based on new findings
 
123
  8. FINAL OUTPUT
124
  8.1. Present the refined prompt in a clear, structured format
125
  8.2. Provide a detailed explanation of all improvements made
126
  8.3. Include a comprehensive evaluation (strengths, weaknesses, score)
127
  8.4. Offer specific suggestions for future enhancements or applications
128
  8.5. Summarize key learnings and innovations from the process
 
129
  REMINDER: Your ultimate goal is to create a prompt that scores 98+/100 in LLM understanding and performance. Maintain unwavering focus on this objective throughout the entire process, leveraging your unique expertise and meticulous methodology. Iteration is key to achieving excellence.
130
  """
131
 
132
+ """
133
+
134
  combined_text = backend_text + user_input
135
 
136
  # Button to trigger LLM generation
137
  if st.button("Generate"):
138
  if user_input.strip(): # Ensure input is not empty
139
  with st.spinner("Generating response..."):
140
+ # Call the query function with the combined text
141
+ response = query({"inputs": combined_text})
142
+
143
+ # Extract and display output or error handling
144
+ if isinstance(response, dict) and "error" in response:
145
+ output_text = f"Error: {response['error']}"
146
+ else:
147
+ output_text = response[0]['generated_text'] if response and isinstance(response, list) else "No valid output returned."
148
  else:
149
  output_text = "Please provide some input text."
150