---
title: 🧠🌱SynapTree🌳
emoji: 🌳🧠🌱
colorFrom: indigo
colorTo: blue
sdk: streamlit
sdk_version: 1.42.2
app_file: app.py
pinned: false
license: mit
short_description: AI Knowledge Tree Builder
---

# AI Knowledge Tree Builder with Agents 🌳✨

An enhanced Streamlit app that integrates Transformers Agents into a Mixture of Experts (MoE) system for smarter knowledge tree building! 🤖

## Features 🌟

- 9 agents assisting with ML tasks:
  - **CodeCrafter 🖥️**: Writes code (CodeAgent).
  - **StepSage 🧠**: Step-by-step reasoning (ReactCodeAgent).
  - **JsonJugger 🤡**: JSON-based actions (ReactJsonAgent).
  - **OutlineOracle 📝**: Builds outlines.
  - **ToolTitan 🔧**: Lists tools.
  - **SpecSpinner 📋**: Crafts specs.
  - **ImageImp 🎨**: Generates image prompts.
  - **VisualVortex 🖼️**: Creates visuals.
  - **GlossGuru 📖**: Defines terms.
- The MoE system maps prompts to agents for dynamic task handling.
- Witty, emoji-rich UI! 🎉

## Functions 📖

- `run_agent(task, agent_name)`: Executes the specified agent's `run` method (a sketch follows the design tenets below).
- `main()`: Orchestrates the UI and agent interactions.

## Setup 🛠️

1. `pip install -r requirements.txt`
2. `streamlit run app.py`
3. Pick an MoE prompt and query away! 🚀

## Notes 📝

- The default LLM uses the HF Inference API. Add a token for private models via `HF_TOKEN`.
- Extends the original SynapTree with an agent-based MoE.

AgentsKnowledgeTreeBuilder is designed with the following tenets:

1. **Portability** - Universal access via any device & link sharing
2. **Speed of Build** - Rapid deployments (< 2 min to production)
3. **Linkiness** - Programmatic access to major AI knowledge sources
4. **Abstractive** - The core stays lean by isolating high-maintenance components
5. **Memory** - Shareable flows with deep-linked research paths
6. **Personalized** - Rapidly adapts the knowledge base to user needs
7. **Living Brevity** - Easily cloneable; self-modifies with data and publicly shares results.
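Below is a minimal sketch of how `run_agent` might route the nine personas to the three documented agent classes. The `AGENTS` registry, the persona-to-class mapping, and the default engines are illustrative assumptions, not the app's actual code:

```python
# Hypothetical sketch of the MoE routing described above; the real app.py may differ.
from transformers.agents import CodeAgent, ReactCodeAgent, ReactJsonAgent

# Persona -> agent mapping (three of the nine shown; the rest are analogous).
AGENTS = {
    "CodeCrafter 🖥️": CodeAgent(tools=[]),      # one-shot code generation
    "StepSage 🧠": ReactCodeAgent(tools=[]),    # step-by-step ReAct with code actions
    "JsonJugger 🤡": ReactJsonAgent(tools=[]),  # step-by-step ReAct with JSON actions
}

def run_agent(task: str, agent_name: str) -> str:
    """Execute the named agent's `run` method on the task."""
    return AGENTS[agent_name].run(task)
```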
🔧 **Systems, Infrastructure & Low-Level Engineering** 🔧

🔧 1. Low-level system integrations compilers Cplusplus
🔧 2. Linux or embedded systems experience
🔧 3. Hardware acceleration
🔧 4. Accelerating ML training inference across AI hardware
🔧 5. CUDA kernels
🔧 6. Optimum integration for specialized AI hardware
🔧 7. Cross-layer performance tuning hardware plus software
🔧 8. Data-center scale HPC or ML deployment
🔧 9. GPU accelerator architecture and CUDA kernel optimization
🔧 10. GPU kernel design and HPC concurrency
🔧 11. GPU cluster configuration and job scheduling
🔧 12. HPC provisioning and GPU cluster orchestration
🔧 13. HPC training pipeline and multi-GPU scheduling
🔧 14. HPC scheduling and multi-node debugging
🔧 15. HPC or large-batch evaluations
🔧 16. Hybrid on-premise and cloud HPC setups
🔧 17. Large-scale distributed computing and HPC performance
🔧 18. Low-level HPC code Cplusplus Triton and parallel programming
🔧 19. Low-level driver optimizations CUDA RDMA etc
🔧 20. Multi-GPU training and HPC acceleration
🔧 21. Overseeing HPC infrastructure for RL reasoning tasks
🔧 22. Performance modeling for large GPU fleets
🔧 23. Python and low-level matrix operations custom CUDA kernels
🔧 24. Python Cplusplus tooling for robust model tests
🔧 25. Stress-testing frontier LLMs and misuse detection
🔧 26. Building and optimizing distributed backend systems
🔧 27. Distributed system debugging and optimization
🔧 28. Distributed system design and MLOps best practices
🔧 29. High-performance optimization for ML training and inference
🔧 30. Implementing quantitative models of system throughput
🔧 31. Load balancing and high-availability design
🔧 32. Optimizing system performance under heavy ML loads
🔧 33. Performance optimization for LLM inference
🔧 34. Python-driven distributed training pipelines
🔧 35. Throughput and performance optimization
🔧 36. Cross-team platform innovation and proactive ML based resolution
🔧 37. Distributed systems design and scalable architectures
🔧 38. Observability anomaly detection and automated triage AIOps Python Go
🔧 39. ServiceNow expansions AIOps and AI automation
🔧 40. User-centric IT workflows and design integration
💻 **Software, Cloud, MLOps & Infrastructure** 💻

💻 1. Python APIs and framework optimizations tokenizers datasets
💻 2. Python programming
💻 3. Rust programming
💻 4. PyTorch and Keras development
💻 5. TypeScript development
💻 6. MongoDB integration
💻 7. Kubernetes orchestration
💻 8. Building secure and robust developer experiences and APIs
💻 9. Full-stack development Nodejs Svelte AWS
💻 10. Javascript TypeScript machine learning libraries transformersjs huggingfacejs
💻 11. In-browser inference using WebGPU WASM ONNX
💻 12. Integrating with major cloud platforms AWS GCP Azure
💻 13. Containerization with Docker and MLOps pipelines
💻 14. Distributed data processing
💻 15. Building essential tooling for ML hubs
💻 16. Cloud infrastructure provisioning Terraform Helm
💻 17. Coordination of concurrency frameworks Kubernetes etc
💻 18. Data pipeline tooling Spark Airflow
💻 19. Deep learning systems performance profiling and tuning
💻 20. End-to-end MLOps and DevOps practices
💻 21. GPU-based microservices and DevOps
💻 22. Infrastructure as Code Terraform Kubernetes
💻 23. Managing GPU infrastructure at scale K8s orchestration
💻 24. Model and pipeline parallel strategies
💻 25. Python and Golang for infrastructure automation
💻 26. Python-based distributed frameworks Ray Horovod
💻 27. Reliability and performance scaling of infrastructure
💻 28. System reliability and SRE best practices
💻 29. Building observability and debugging tools for crawlers
💻 30. Building scalable data pipelines for language model training
💻 31. Cloud infrastructure optimization and integration AWS GCP
💻 32. Data quality assurance and validation systems
💻 33. Designing cloud-native architectures for AI services
💻 34. Ensuring system resilience and scalability
💻 35. High-availability and scalable system design
💻 36. Infrastructure design for large-scale ML systems
💻 37. Integration with ML frameworks
💻 38. Python and distributed computing frameworks Spark
💻 39. Python automation and container orchestration Kubernetes
💻 40. Python for automation and infrastructure monitoring
💻 41. Python scripting for deployment automation
💻 42. Scalable system architecture
💻 43. Enhancing reliability quality and time-to-market through performance optimization
💻 44. Managing production environments using Azure VSCode Datadog Qualtrics ServiceNow
💻 45. Building MLOps pipelines for containerizing models with Docker TypeScript Rust MongoDB Svelte TailwindCSS Kubernetes
🤖 **Machine Learning, AI & Model Development** 🤖

🤖 1. Performance tuning for Transformers in NLP CV and Speech
🤖 2. Industrial-level ML for text generation inference
🤖 3. Optimizing and scaling real-world ML services
🤖 4. Reliability and performance monitoring for ML systems
🤖 5. Ablation and training small models for data-quality analysis
🤖 6. Reducing model size and complexity via quantization
🤖 7. Neural sparse models and semantic dense retrieval SPLADE BM25
🤖 8. LLM usage and fine-tuning with chain-of-thought prompting
🤖 9. Energy efficiency and carbon footprint analysis in ML
🤖 10. Post-training methods for LLMs RLHF PPO DPO instruction tuning
🤖 11. Building LLM agents with external tool usage
🤖 12. Creating LLM agents that control GUIs via screen recordings
🤖 13. Building web-scale high-quality LLM training datasets
🤖 14. LLM-based code suggestions in Gradio Playground
🤖 15. Speech-to-text text-to-speech and speaker diarization
🤖 16. Abuse detection and ML-based risk scoring
🤖 17. AI safety and alignment methodologies RLHF reward models
🤖 18. Building ML-driven products using Python and PyTorch
🤖 19. Building massive training sets for LLMs
🤖 20. Developing next-generation AI capabilities
🤖 21. Collaborative research on AI risk and safety
🤖 22. Distributed training frameworks for large models
🤖 23. Experimental large-model prototypes
🤖 24. Exploratory ML research with LLMs and RL
🤖 25. Large-scale retrieval optimization RAG etc
🤖 26. Managing large ML architectures using Transformers
🤖 27. NLP pipelines using PyTorch and Transformers
🤖 28. Python-based data pipelines for query handling
🤖 29. Python-based LLM experimentation
🤖 30. Transformer-based LLM development and fine-tuning
🤖 31. Transformer modeling and novel architecture prototyping GPTlike
🤖 32. Vector databases and semantic search FAISS etc
🤖 33. Advanced distributed training techniques
🤖 34. Coordinating experimental design using Python
🤖 35. Designing experiments to probe LLM inner workings
🤖 36. Empirical AI research and reinforcement learning experiments
🤖 37. Leveraging Python for ML experiment pipelines
🤖 38. Reverse-engineering neural network mechanisms
🤖 39. Strategic roadmap for safe LLM development
🤖 40. Transformer-based LLM interpretability and fine-tuning
🤖 41. AI DL model productization using established frameworks
🤖 42. Utilizing AI frameworks PyTorch JAX TensorFlow TorchDynamo
🤖 43. Building AI inference APIs and MLOps solutions with Python
🤖 44. Developing agentic AI RAG and generative AI solutions LangChain AutoGen
🤖 45. End-to-end AI lifecycle management and distributed team leadership
🤖 46. Full-stack AI shipping with parallel and distributed training
🤖 47. GPU kernel integration with CUDA TensorRT and roadmap alignment
🤖 48. Large-language model inference and microservices design
🤖 49. LLM-based enterprise analytics systems
🤖 50. LLM diffusion-based product development
🤖 51. LLM alignment and RLHF pipelines for model safety
🤖 52. Mixed-precision and HPC algorithm development
🤖 53. Optimizing open-source DL frameworks PyTorch TensorFlow
🤖 54. Parallel and distributed training architectures and reinforcement learning methods PPO SAC QLearning
🤖 55. Python development for large-scale MLOps deployment
🤖 56. Scaling AI inference on hundreds of GPUs
🤖 57. System design for multi-agent AI workflows
🤖 58. Developing generative AI solutions with Python Streamlit Gradio and Torch
🤖 59. Developing Web AI solutions with Javascript TypeScript and HuggingFacejs
🤖 60. Creating WebML applications for on-device model inference
🤖 61. Building JSTS libraries for in-browser inference using ONNX and quantization with WebGPU WebNN and WASM
🤖 62. Driving forward quantization in the open-source ecosystem Accelerate PEFT Diffusers Bitsandbytes AWQ AutoGPTQ
🤖 63. Designing modern search solutions combining semantic and lexical search dense bi-encoder models SPLADE BM25
🤖 64. Training neural sparse models with Sentence Transformers integration
🤖 65. Leveraging chain-of-thought techniques in small models to outperform larger models
🤖 66. Addressing hardware acceleration and numerical precision challenges for scalable software
📊 **Data Engineering, Analytics & Data Governance** 📊

📊 1. Advanced analytics and forecasting using Python R
📊 2. Alerting systems and dashboards Grafana etc
📊 3. Collaboration with data science teams
📊 4. Data modeling and warehousing
📊 5. Data storytelling and stakeholder communications
📊 6. Data warehousing and BI tools Looker etc
📊 7. Distributed compute frameworks Spark Flink
📊 8. ETL pipelines using Airflow and Spark
📊 9. Experiment design and user behavior modeling
📊 10. Handling large event data Kafka S3
📊 11. Managing data lakes and warehousing
📊 12. Python and SQL based data pipelines for finance
📊 13. Real-time anomaly detection using Python and streaming
📊 14. Root-cause analysis and incident response
📊 15. SQL and Python workflows for data visualization
📊 16. Product analytics and funnel insights
📊 17. Complex data pipelines and HPC optimization techniques
📊 18. Large-scale data ingestion transformation and curation
📊 19. Multi-modal data processing for diverse inputs

🔐 **Security, Compliance & Reliability** 🔐

🔐 1. Attack simulations and detection pipelines
🔐 2. Automation with Python and Bash
🔐 3. Cross-team incident response orchestration
🔐 4. IAM solutions AzureAD Okta
🔐 5. MacOS and iOS endpoint security frameworks
🔐 6. ML system vulnerability management
🔐 7. Risk assessment and vulnerability management
🔐 8. Security audits and penetration testing
🔐 9. Security best practices for AI products appsec devsecops
🔐 10. Secure architecture for HPC and ML pipelines
🔐 11. Security privacy and compliance in data management
🔐 12. Coordinating with security and compliance teams
🔐 13. Designing fault-tolerant high-availability LLM serving systems
🔐 14. Designing resilient and scalable architectures
🔐 15. Ensuring compliance and secure transactions
🔐 16. Familiarity with technical operations tools for security
🔐 17. Managing security processes for AI systems
🔐 18. Performance tuning for LLM serving systems
🔐 19. Process optimization and rapid troubleshooting for security
🔐 20. Python for reliability monitoring and automation
🔐 21. Python-based monitoring and fault-tolerance solutions
🔐 22. Risk management and compliance strategies
🔐 23. Cost optimization and reliability in cloud environments
🔐 24. Data quality standards and compliance Informatica Collibra Alation
🔐 25. Enterprise-wide data governance and policies for security
🔐 26. Hybrid cloud integration for secure operations
🔐 27. Identity management MFA ActiveDirectory AzureAD SSO ZeroTrust
🔐 28. Scalable database security MySQL PostgreSQL MongoDB Oracle
🔐 29. Security and operational excellence in IT and cloud
👥 **Leadership, Management & Collaboration** 👥

👥 1. Coordinating engineering design and research teams
👥 2. Cross-functional leadership for platform roadmaps
👥 3. Cross-functional leadership across finance and engineering
👥 4. Cross-team collaboration and project leadership
👥 5. Data-driven product management AB testing and analytics
👥 6. Deep knowledge of AI frameworks and constraints
👥 7. Driving cross-team alignment on HPC resources
👥 8. People and team management for data teams
👥 9. Stakeholder management and vendor oversight
👥 10. Team-building and product strategy
👥 11. Team leadership and project delivery
👥 12. Balancing innovative research with product delivery
👥 13. Balancing rapid product delivery with AI safety standards
👥 14. Bridging customer requirements with technical development
👥 15. Collaboration across diverse technology teams
👥 16. Coordinating reinforcement learning experiments
👥 17. Coordinating with security and compliance teams
👥 18. Cross-functional agile collaboration for ML scalability
👥 19. Cross-functional team coaching and agile processes
👥 20. Cross-functional stakeholder management
👥 21. Cross-regional team alignment
👥 22. Cross-team collaboration for ML deployment
👥 23. Data-driven growth strategies for AI products
👥 24. Data-driven strategy implementation
👥 25. Detailed project planning and stakeholder coordination
👥 26. Driving execution of global market entry strategies
👥 27. Leading high-impact zero-to-one ML development teams
👥 28. Leading interdisciplinary ML research initiatives
👥 29. Leading teams building reinforcement learning systems
👥 30. Leading teams in ML interpretability research
👥 31. Overseeing Python-driven ML infrastructure
👥 32. Vendor and cross-team coordination
👥 33. Facilitating cross-disciplinary innovation

📱 **Full-Stack, UI, Mobile & Product Development** 📱

📱 1. Building internal AI automation tools
📱 2. CI CD automation and testing frameworks
📱 3. Cloud-based microservices and REST GraphQL APIs
📱 4. GraphQL or REST based data fetching
📱 5. Integrating AI chat features in mobile applications
📱 6. LLM integration for user support flows
📱 7. MacOS iOS fleet management and security
📱 8. MDM solutions and iOS provisioning
📱 9. Native Android development Kotlin Java
📱 10. Observability and robust logging tracing
📱 11. Performance tuning and enhancing user experience for mobile
📱 12. Python Node backend development for AI features
📱 13. Rapid prototyping of AI based internal apps
📱 14. React Nextjs with Python for web services
📱 15. React TypeScript front-end development
📱 16. Integrating with GPT and other LLM endpoints
📱 17. TypeScript React and Python backend development
📱 18. Zero-touch deployment and patching
📱 19. Active engagement with open-source communities
📱 20. API design for scalable LLM interactions
📱 21. Bridging native mobile frontends with Python backends
📱 22. Bridging Python based ML models with frontend tooling
📱 23. Building internal tools to boost productivity in ML teams
📱 24. Building intuitive UIs integrated with Python backed ML
📱 25. Building robust developer infrastructure for ML products
📱 26. Crafting user-centric designs for AI interfaces
📱 27. Developer tools for prompt engineering and model testing
📱 28. End-to-end product delivery in software development
📱 29. Enhancing secure workflows and enterprise integrations
📱 30. Experimentation and iterative product development
📱 31. Full-stack development for ML driven products
📱 32. Integrating robust UIs with backend ML models
📱 33. Iterative design based on user feedback
📱 34. Mobile app development incorporating AI features
📱 35. Optimizing TypeScript Node build systems
📱 36. Python based API and data pipeline creation
📱 37. Senior engineering for practical AI and ML solutions
📱 38. Creating Python and Javascript HTML libraries for ML use cases
📱 39. Developing specialized software for healthcare ML use cases
📱 40. Utilizing library frameworks for scalable healthcare solutions
📱 41. Writing apps using Python Rust CUDA Transformers Keras
📱 42. Building AI solutions for healthcare with open-source libraries and Azure SaaS
📱 43. Designing and developing secure robust apps and APIs using Streamlit and Gradio
📱 44. Expertise with tools like Transformers Diffusers Accelerate PEFT Datasets
📱 45. Leveraging deep learning frameworks PyTorch XLA and cloud platforms
🎯 **Specialized Domains & Emerging Technologies** 🎯

🎯 1. 3D computer vision and neural rendering radiance fields
🎯 2. Advanced 3D reconstruction techniques Gaussian splatting NERF
🎯 3. Graphics engines and deep learning for graphics Unreal Unity
🎯 4. Low-level rendering pipelines DirectX Vulkan DX12
🎯 5. Performance optimized computer vision algorithms real-time tracking relighting
🎯 6. Semantic video search and 3D reconstruction services
🎯 7. Agent frameworks and LLM pipelines LangChain AutoGen
🎯 8. Concurrency in Cplusplus Python and vector database integration
🎯 9. Cross-layer performance analysis and debugging techniques
🎯 10. EDA and transistor-level performance modeling SPICE BSIM STA
🎯 11. GPU and SoC modeling and SoC architecture SystemC TLM
🎯 12. Next-generation hardware bringup and system simulation
🎯 13. Parallel computing fundamentals and performance simulation
🎯 14. Advanced development for programmable networks SDN SONiC P4
🎯 15. System design for multi-agent AI workflows
🎯 16. Advanced AI for self-driving software
🎯 17. Autonomous vehicle data pipelines and debugging
🎯 18. Car fleet software updates OTA and telemetry management
🎯 19. Large-scale multi-sensor data operations and calibration
🎯 20. Path planning and decision-making in robotics
🎯 21. Real-time embedded systems for robotics Cplusplus Python
🎯 22. Sensor fusion and HPC integration for perception systems
🎯 23. Domain randomization and sim-to-real transfer for reinforcement learning
🎯 24. GPU accelerated physics simulation Isaac Sim
🎯 25. Large-scale reinforcement learning methods PPO SAC QLearning
🎯 26. Policy optimization for robotics at scale
🎯 27. Reinforcement learning orchestration and simulation based training
🎯 28. Communication libraries NCCL NVSHMEM UCX
🎯 29. HPC networking InfiniBand RoCE and distributed GPU programming
🎯 30. GPU verification architecture techniques TLM SystemC modeling
🎯 31. Hardware prototyping and verification SDN SONiC P4 programmable hardware
🎯 32. GPU communications libraries management and performance tuning
🎯 33. Senior software architecture for data centers EthernetIP design switch OS
🎯 34. Developing Web AI solutions using Python Streamlit Gradio and Torch
🎯 35. Developing Web AI solutions with Javascript TypeScript and HuggingFacejs
🎯 36. Creating WebML applications for on-device model inference
🎯 37. Building JSTS libraries for in-browser inference using ONNX and quantization with WebGPU WebNN and WASM
🎯 38. Driving forward quantization in the open-source ecosystem Accelerate PEFT Diffusers Bitsandbytes AWQ AutoGPTQ
🎯 39. Designing modern search solutions combining semantic and lexical search dense bi-encoder models SPLADE BM25
🎯 40. Training neural sparse models with Sentence Transformers integration
🎯 41. Leveraging chain-of-thought techniques in small models to outperform larger models
🎯 42. Addressing hardware acceleration and numerical precision challenges for scalable software

📢 **Community, Open-Source & Communication** 📢

📢 1. Educating the ML community on accelerating training and inference workloads
📢 2. Working through strategic collaborations
📢 3. Contributing documentation and code examples for technical and business audiences
📢 4. Building and evangelizing demos and strategic partner conversations
📢 5. Sharing fast Python AI development code samples and demos
📢 6. Communicating effectively in public speaking and technical education
📢 7. Engaging on social platforms GitHub LinkedIn Twitter Reddit
📢 8. Bringing fresh informed ideas while collaborating in a decentralized manner
📢 9. Writing technical documentation examples and notebooks to demonstrate new features
📢 10. Writing clear documentation across the product lifecycle
📢 11. Contributing to open-source libraries Transformers Datasets Accelerate
📢 12. Communicating via GitHub forums or Slack
📢 13. Demonstrating creativity to make complex technology accessible

-----

Let's create a Gradio demo app that spins up 9 ML agents to help with the aspects of ML development. First, my agent code should follow and demo all the agent features in transformers, yet keep the UI witty and emoji-filled with humor; use either Gradio or Streamlit, and include app.py plus requirements.txt. Any documentation, say a markdown outline of the functions and help or docs, would be in a README.md file, so always three files with those names. Second, I have a knowledge tree program which already has a MoE. Can you please add the Transformers Agents code to it?

Transformers Agents Docs:

## Agents

We provide two types of agents, based on the main Agent class:

- **CodeAgent** acts in one shot: it generates code to solve the task, then executes it all at once.
- **ReactAgent** acts step by step, each step consisting of one thought, then one tool call and execution. It has two classes:
  - **ReactJsonAgent** writes its tool calls in JSON.
  - **ReactCodeAgent** writes its tool calls in Python code.

### Agent

`class transformers.Agent(tools: Union[List[Tool], Toolbox], llm_engine: Callable = None, system_prompt: Optional[str] = None, tool_description_template: Optional[str] = None, additional_args: Dict = {}, max_iterations: int = 6, tool_parser: Optional[Callable] = None, add_base_tools: bool = False, verbose: int = 0, grammar: Optional[Dict[str, str]] = None, managed_agents: Optional[List] = None, step_callbacks: Optional[List[Callable]] = None, monitor_metrics: bool = True)`

**execute_tool_call(tool_name: str, arguments: Dict[str, str])**

- `tool_name` (`str`) – Name of the Tool to execute (should be one from `self.toolbox`).
- `arguments` (`Dict[str, str]`) – Arguments passed to the Tool.

Executes the tool with the provided input and returns the result. This method replaces arguments with the actual values from the state if they refer to state variables.

**extract_action(llm_output: str, split_token: str)**

- `llm_output` (`str`) – Output of the LLM.
- `split_token` (`str`) – Separator for the action. Should match the example in the system prompt.

Parses the action from the LLM output.

**run(**kwargs)** – To be implemented in the child class.

**write_inner_memory_from_logs(summary_mode: Optional[bool] = False)** – Reads past llm_outputs, actions, and observations or errors from the logs into a series of messages that can be used as input to the LLM.
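A short sketch of wiring the constructor options above into a concrete agent. It assumes the subclasses' `**kwargs` reach this base class, and since the payload handed to a step callback is not specified here, the callback only logs generically:

```python
from transformers.agents import ReactCodeAgent

def log_step(step_log):
    # Assumption: callbacks receive some per-step log object; we only print its type.
    print("step finished:", type(step_log).__name__)

agent = ReactCodeAgent(
    tools=[],              # start from an empty toolbox...
    add_base_tools=True,   # ...then pull in the tools bundled with transformers
    max_iterations=4,      # cap the think/act loop
    step_callbacks=[log_step],
)
print(agent.run("What is 2 to the power 3.7384?"))
```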
### CodeAgent

`class transformers.CodeAgent(tools: List[Tool], llm_engine: Optional[Callable] = None, system_prompt: Optional[str] = None, tool_description_template: Optional[str] = None, grammar: Optional[Dict[str, str]] = None, additional_authorized_imports: Optional[List[str]] = None, **kwargs)`

A class for an agent that solves the given task using a single block of code. It plans all its actions, then executes them in one shot.

**parse_code_blob(result: str)** – Override this method if you want to change the way the code is cleaned in the `run` method.

**run(task: str, return_generated_code: bool = False, **kwargs)**

- `task` (`str`) – The task to perform.
- `return_generated_code` (`bool`, optional, defaults to `False`) – Whether to return the generated code instead of running it.
- `kwargs` (additional keyword arguments, optional) – Any keyword argument to send to the agent when evaluating the code.

Runs the agent for the given task. Example:

```python
from transformers.agents import CodeAgent

agent = CodeAgent(tools=[])
agent.run("What is the result of 2 power 3.7384?")
```

### React agents

`class transformers.ReactAgent(tools: List[Tool], llm_engine: Optional[Callable] = None, system_prompt: Optional[str] = None, tool_description_template: Optional[str] = None, grammar: Optional[Dict[str, str]] = None, plan_type: Optional[str] = None, planning_interval: Optional[int] = None, **kwargs)`

This agent solves the given task step by step, using the ReAct framework: while the objective is not reached, the agent performs a cycle of thinking and acting. The action is parsed from the LLM output: it consists of calls to tools from the toolbox, with arguments chosen by the LLM engine.

**direct_run(task: str)** – Runs the agent in direct mode, returning outputs only at the end; should be launched only in the `run` method.

**planning_step(task, is_first_step: bool = False, iteration: int = None)**

- `task` (`str`) – The task to perform.
- `is_first_step` (`bool`) – If this step is not the first one, the plan should be an update over a previous plan.
- `iteration` (`int`) – The number of the current step, used as an indication for the LLM.

Used periodically by the agent to plan the next steps to reach the objective.

**provide_final_answer(task)** – Provides a final answer to the task, based on the logs of the agent's interactions.

**run(task: str, stream: bool = False, reset: bool = True, **kwargs)**

- `task` (`str`) – The task to perform.

Runs the agent for the given task. Example:

```python
from transformers.agents import ReactCodeAgent

agent = ReactCodeAgent(tools=[])
agent.run("What is the result of 2 power 3.7384?")
```

**stream_run(task: str)** – Runs the agent in streaming mode, yielding steps as they are executed; should be launched only in the `run` method.

### ReactJsonAgent

`class transformers.ReactJsonAgent(tools: List[Tool], llm_engine: Optional[Callable] = None, system_prompt: Optional[str] = None, tool_description_template: Optional[str] = None, grammar: Optional[Dict[str, str]] = None, planning_interval: Optional[int] = None, **kwargs)`

This agent solves the given task step by step, using the ReAct framework: while the objective is not reached, the agent performs a cycle of thinking and acting. The tool calls are formulated by the LLM in JSON format, then parsed and executed.

**step(log_entry: Dict[str, Any])** – Performs one step in the ReAct framework: the agent thinks, acts, and observes the result. Errors are raised here; they are caught and logged in the `run()` method.

### ReactCodeAgent

`class transformers.ReactCodeAgent(tools: List[Tool], llm_engine: Optional[Callable] = None, system_prompt: Optional[str] = None, tool_description_template: Optional[str] = None, grammar: Optional[Dict[str, str]] = None, additional_authorized_imports: Optional[List[str]] = None, planning_interval: Optional[int] = None, **kwargs)`

This agent solves the given task step by step, using the ReAct framework: while the objective is not reached, the agent performs a cycle of thinking and acting. The tool calls are formulated by the LLM in code format, then parsed and executed.

**step(log_entry: Dict[str, Any])** – Performs one step in the ReAct framework: the agent thinks, acts, and observes the result. Errors are raised here; they are caught and logged in the `run()` method.
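The streaming mode described above can be consumed as an iterator. A sketch follows; the structure of each yielded step is not documented here, so it is simply printed:

```python
from transformers.agents import ReactCodeAgent

agent = ReactCodeAgent(tools=[])
# run(stream=True) yields steps as they execute instead of only a final answer.
for step in agent.run("What is the result of 2 power 3.7384?", stream=True):
    print(step)  # one thought / tool call / observation cycle at a time
```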
### ManagedAgent

`class transformers.ManagedAgent(agent, name, description, additional_prompting=None, provide_run_summary=False)`

## Tools

### load_tool

`transformers.load_tool(task_or_repo_id, model_repo_id=None, token=None, **kwargs)`

- `task_or_repo_id` (`str`) – The task for which to load the tool, or a repo ID of a tool on the Hub. Tasks implemented in Transformers are: `"document_question_answering"`, `"image_question_answering"`, `"speech_to_text"`, `"text_to_speech"`, `"translation"`.
- `model_repo_id` (`str`, optional) – Use this argument to use a different model than the default one for the tool you selected.
- `token` (`str`, optional) – The token to identify you on hf.co. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
- `kwargs` (additional keyword arguments, optional) – Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your tool, and the others will be passed along to its init.

Main function to quickly load a tool, be it on the Hub or in the Transformers library. Loading a tool means that you'll download the tool and execute it locally. ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt.

### tool

`transformers.tool(tool_function: Callable)`

- `tool_function` – Your function. It should have type hints for each input and a type hint for the output, and a docstring description including an 'Args:' part where each argument is described.

Converts a function into an instance of a Tool subclass.
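A sketch of the decorator contract just described: type hints on every input and on the output, plus a docstring with an 'Args:' section:

```python
from transformers import tool

@tool
def character_count(text: str) -> int:
    """Counts the number of characters in a piece of text.

    Args:
        text: The text whose characters should be counted.
    """
    return len(text)
```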
### Tool

`class transformers.Tool(*args, **kwargs)`

A base class for the functions used by the agent. Subclass this and implement the `__call__` method as well as the following class attributes:

- `description` (`str`) – A short description of what your tool does, the inputs it expects and the output(s) it will return. For instance 'This is a tool that downloads a file from a url. It takes the url as input, and returns the text contained in the file'.
- `name` (`str`) – A performative name that will be used for your tool in the prompt to the agent. For instance `"text-classifier"` or `"image_generator"`.
- `inputs` (`Dict[str, Dict[str, Union[str, type]]]`) – The dict of modalities expected for the inputs. It has one `type` key and one `description` key. This is used by `launch_gradio_demo` or to make a nice space from your tool, and can also be used in the generated description for your tool.
- `output_type` (`type`) – The type of the tool output. This is used by `launch_gradio_demo` or to make a nice space from your tool, and can also be used in the generated description for your tool.

You can also override the method `setup()` if your tool has an expensive operation to perform before being usable (such as loading a model). `setup()` will be called the first time you use your tool, but not at instantiation.

**from_gradio(gradio_tool)** – Creates a Tool from a Gradio tool.

**from_hub(repo_id: str, token: Optional[str] = None, **kwargs)**

- `repo_id` (`str`) – The name of the repo on the Hub where your tool is defined.
- `token` (`str`, optional) – The token to identify you on hf.co. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
- `kwargs` (additional keyword arguments, optional) – Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your tool, and the others will be passed along to its init.

Loads a tool defined on the Hub. Loading a tool from the Hub means that you'll download the tool and execute it locally. ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt.

**from_langchain(langchain_tool)** – Creates a Tool from a LangChain tool.

**from_space(space_id: str, name: str, description: str, api_name: Optional[str] = None, token: Optional[str] = None) → Tool**

- `space_id` (`str`) – The id of the Space on the Hub.
- `name` (`str`) – The name of the tool.
- `description` (`str`) – The description of the tool.
- `api_name` (`str`, optional) – The specific `api_name` to use, if the Space has several tabs. If not specified, will default to the first available api.
- `token` (`str`, optional) – Add your token to access private spaces or increase your GPU quotas.

Returns `Tool`: the Space, as a tool.

Creates a Tool from a Space given its id on the Hub. Examples:

```python
image_generator = Tool.from_space(
    space_id="black-forest-labs/FLUX.1-schnell",
    name="image-generator",
    description="Generate an image from a prompt"
)
image = image_generator("Generate an image of a cool surfer in Tahiti")
```

```python
face_swapper = Tool.from_space(
    "tuan2308/face-swap",
    "face_swapper",
    "Tool that puts the face shown on the first image on the second image. You can give it paths to images.",
)
image = face_swapper('./aymeric.jpeg', './ruth.jpg')
```
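A minimal subclass sketch following the attribute contract above; the exact schema values accepted for `inputs` and `output_type` vary by transformers version, so treat the string values here as assumptions:

```python
from transformers import Tool

class Shouter(Tool):
    name = "shouter"
    description = ("This is a tool that upper-cases text. It takes text as input "
                   "and returns the upper-cased text.")
    inputs = {"text": {"type": "string", "description": "The text to upper-case."}}
    output_type = "string"

    def __call__(self, text: str) -> str:
        return text.upper()
```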
**push_to_hub(repo_id: str, commit_message: str = 'Upload tool', private: Optional[bool] = None, token: Union[bool, str, None] = None, create_pr: bool = False)**

- `repo_id` (`str`) – The name of the repository you want to push your tool to. It should contain your organization name when pushing to a given organization.
- `commit_message` (`str`, optional, defaults to `"Upload tool"`) – Message to commit while pushing.
- `private` (`bool`, optional) – Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
- `token` (`bool` or `str`, optional) – The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
- `create_pr` (`bool`, optional, defaults to `False`) – Whether or not to create a PR with the uploaded files or directly commit.

Uploads the tool to the Hub. For this method to work properly, your tool must have been defined in a separate module (not `__main__`). For instance:

```python
from my_tool_module import MyTool

my_tool = MyTool()
my_tool.push_to_hub("my-username/my-space")
```

**save(output_dir)**

- `output_dir` (`str`) – The folder in which you want to save your tool.

Saves the relevant code files for your tool so it can be pushed to the Hub. This will copy the code of your tool in `output_dir` as well as autogenerate:

- a config file named `tool_config.json`
- an `app.py` file so that your tool can be converted to a space
- a `requirements.txt` containing the names of the modules used by your tool (as detected when inspecting its code)

You should only use this method to save tools that are defined in a separate module (not `__main__`).

**setup()** – Overwrite this method for any operation that is expensive and needs to be executed before you start using your tool, such as loading a big model.

### Toolbox

`class transformers.Toolbox(tools: List[Tool], add_base_tools: bool = False)`

- `tools` (`List[Tool]`) – The list of tools to instantiate the toolbox with.
- `add_base_tools` (`bool`, optional, defaults to `False`) – Whether to add the tools available within transformers to the toolbox.

The toolbox contains all tools that the agent can perform operations with, as well as a few methods to manage them:

- **add_tool(tool: Tool)** – Adds a tool to the toolbox.
- **clear_toolbox()** – Clears the toolbox.
- **remove_tool(tool_name: str)** – Removes a tool from the toolbox.
- **show_tool_descriptions(tool_description_template: str = None)** – Returns the description of all tools in the toolbox; if no template is provided, the default template is used.
- **update_tool(tool: Tool)** – Updates a tool in the toolbox according to its name.
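A sketch of managing the toolbox at runtime. It assumes the agent exposes its toolbox via an `agent.toolbox` attribute (the `execute_tool_call` docs above refer to `self.toolbox`), and that the `@tool`-decorated `character_count` sketch from earlier registers under its function name:

```python
from transformers.agents import ReactJsonAgent

agent = ReactJsonAgent(tools=[], add_base_tools=True)
print(agent.toolbox.show_tool_descriptions())  # what the agent can currently call
agent.toolbox.add_tool(character_count)        # add the @tool-decorated sketch from earlier
agent.toolbox.remove_tool("character_count")   # remove by name (assumed name)
```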
### PipelineTool

`class transformers.PipelineTool(model=None, pre_processor=None, post_processor=None, device=None, device_map=None, model_kwargs=None, token=None, **hub_kwargs)`

- `model` (`str` or `PreTrainedModel`, optional) – The name of the checkpoint to use for the model, or the instantiated model. If unset, will default to the value of the class attribute `default_checkpoint`.
- `pre_processor` (`str` or `Any`, optional) – The name of the checkpoint to use for the pre-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the value of `model` if unset.
- `post_processor` (`str` or `Any`, optional) – The name of the checkpoint to use for the post-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the `pre_processor` if unset.
- `device` (`int`, `str` or `torch.device`, optional) – The device on which to execute the model. Will default to any accelerator available (GPU, MPS etc.), the CPU otherwise.
- `device_map` (`str` or `dict`, optional) – If passed along, will be used to instantiate the model.
- `model_kwargs` (`dict`, optional) – Any keyword argument to send to the model instantiation.
- `token` (`str`, optional) – The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
- `hub_kwargs` (additional keyword arguments, optional) – Any additional keyword argument to send to the methods that will load the data from the Hub.

A Tool tailored towards Transformer models. On top of the class attributes of the base class Tool, you will need to specify:

- `model_class` (`type`) – The class to use to load the model in this tool.
- `default_checkpoint` (`str`) – The default checkpoint that should be used when the user doesn't specify one.
- `pre_processor_class` (`type`, optional, defaults to `AutoProcessor`) – The class to use to load the pre-processor.
- `post_processor_class` (`type`, optional, defaults to `AutoProcessor`) – The class to use to load the post-processor (when different from the pre-processor).

Methods: **decode(outputs)** uses the `post_processor` to decode the model output; **encode(raw_inputs)** uses the `pre_processor` to prepare the inputs for the model; **forward(inputs)** sends the inputs through the model; **setup()** instantiates the `pre_processor`, `model` and `post_processor` if necessary.

### launch_gradio_demo

`transformers.launch_gradio_demo(tool_class: Tool)`

- `tool_class` (`type`) – The class of the tool for which to launch the demo.

Launches a Gradio demo for a tool. The corresponding tool class needs to properly implement the class attributes `inputs` and `output_type`.

### stream_to_gradio

`transformers.stream_to_gradio(agent, task: str, test_mode: bool = False, **kwargs)`

Runs an agent with the given task and streams the messages from the agent as Gradio ChatMessages.

### ToolCollection

`class transformers.ToolCollection(collection_slug: str, token: Optional[str] = None)`

- `collection_slug` (`str`) – The collection slug referencing the collection.
- `token` (`str`, optional) – The authentication token if the collection is private.

Tool collections enable loading all Spaces from a collection in order to be added to the agent's toolbox.

> [!NOTE]
> Only Spaces will be fetched, so you can feel free to add models and datasets to your collection if you'd like for this collection to showcase them.

Example:

```python
from transformers import ToolCollection, ReactCodeAgent

image_tool_collection = ToolCollection(collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f")
agent = ReactCodeAgent(tools=[*image_tool_collection.tools], add_base_tools=True)
agent.run("Please draw me a picture of rivers and lakes.")
```
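Tying this back to the request for a Gradio demo: a sketch of a tiny chat UI over `stream_to_gradio`. The import path and the chatbot wiring are assumptions, not confirmed API:

```python
import gradio as gr
from transformers import stream_to_gradio  # import path assumed from the doc naming
from transformers.agents import ReactCodeAgent

agent = ReactCodeAgent(tools=[], add_base_tools=True)

def chat(prompt, history):
    messages = list(history)
    for msg in stream_to_gradio(agent, task=prompt):
        messages.append(msg)  # agent steps arrive as gradio ChatMessages
        yield messages

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(type="messages", label="🤖 Agent")
    box = gr.Textbox(placeholder="Ask the agents anything... 🌳✨")
    box.submit(chat, [box, chatbot], [chatbot])

demo.launch()
```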
## Engines

You're free to create and use your own engines to be usable by the Agents framework. These engines have the following specification:

1. Follow the messages format for the input (`List[Dict[str, str]]`) and return a string.
2. Stop generating outputs before the sequences passed in the argument `stop_sequences`.

### TransformersEngine

For convenience, we have added a `TransformersEngine` that implements the points above, taking a pre-initialized Pipeline as input.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TransformersEngine

model_name = "HuggingFaceTB/SmolLM-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
engine = TransformersEngine(pipe)
engine([{"role": "user", "content": "Ok!"}], stop_sequences=["great"])
# "What a "
```

`class transformers.TransformersEngine(pipeline: Pipeline, model_id: Optional[str] = None)`

This engine uses a pre-initialized local text-generation pipeline.

### HfApiEngine

The `HfApiEngine` is an engine that wraps an HF Inference API client for the execution of the LLM.

```python
from transformers import HfApiEngine

messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "No need to help, take it easy."},
]
HfApiEngine()(messages, stop_sequences=["conversation"])
# "That's very kind of you to say! It's always nice to have a relaxed "
```

`class transformers.HfApiEngine(model: str = 'meta-llama/Meta-Llama-3.1-8B-Instruct', token: Optional[str] = None, max_tokens: Optional[int] = 1500, timeout: Optional[int] = 120)`

- `model` (`str`, optional, defaults to `"meta-llama/Meta-Llama-3.1-8B-Instruct"`) – The Hugging Face model ID to be used for inference. This can be a path or model identifier from the Hugging Face model hub.
- `token` (`str`, optional) – Token used by the Hugging Face API for authentication. If not provided, the class will use the token stored in the Hugging Face CLI configuration.
- `max_tokens` (`int`, optional, defaults to 1500) – The maximum number of tokens allowed in the output.
- `timeout` (`int`, optional, defaults to 120) – Timeout for the API request, in seconds.

Raises `ValueError` if the model name is not provided.

A class to interact with Hugging Face's Inference API for language model interaction. This engine allows you to communicate with Hugging Face's models using the Inference API. It can be used in serverless mode or with a dedicated endpoint, supporting features like stop sequences and grammar customization.
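Putting the two engines in context: a sketch that runs an agent fully locally by handing a `TransformersEngine` to a ReAct agent instead of relying on the default `HfApiEngine`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TransformersEngine
from transformers.agents import ReactJsonAgent

model_name = "HuggingFaceTB/SmolLM-135M-Instruct"  # small model, CPU-friendly
pipe = pipeline(
    "text-generation",
    model=AutoModelForCausalLM.from_pretrained(model_name),
    tokenizer=AutoTokenizer.from_pretrained(model_name),
)
agent = ReactJsonAgent(tools=[], llm_engine=TransformersEngine(pipe))
agent.run("What is the result of 2 power 3.7384?")
```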
## Agent Types

Agents can handle any type of object in between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render these returns in ipython (Jupyter, Colab, ipython notebooks, ...), we implement wrapper classes around these types. The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a `PIL.Image`. These types have three specific purposes:

- Calling `to_raw` on the type should return the underlying object.
- Calling `to_string` on the type should return the object as a string: that can be the string in the case of an `AgentText`, but will be the path of the serialized version of the object in other instances.
- Displaying it in an ipython kernel should display the object correctly.

### AgentText

`class transformers.agents.agent_types.AgentText(value)`

Text type returned by the agent. Behaves as a string.

### AgentImage

`class transformers.agents.agent_types.AgentImage(value)`

Image type returned by the agent. Behaves as a `PIL.Image`.

**save(output_bytes, format, **params)**

- `output_bytes` (`bytes`) – The output bytes to save the image to.
- `format` (`str`) – The format to use for the output image. The format is the same as in `PIL.Image.save`.
- `**params` – Additional parameters to pass to `PIL.Image.save`.

Saves the image to a file.

**to_raw()** – Returns the "raw" version of that object. In the case of an `AgentImage`, it is a `PIL.Image`.

**to_string()** – Returns the stringified version of that object. In the case of an `AgentImage`, it is a path to the serialized version of the image.

### AgentAudio

`class transformers.agents.agent_types.AgentAudio(value, samplerate=16000)`

Audio type returned by the agent.

**to_raw()** – Returns the "raw" version of that object. It is a `torch.Tensor` object.

**to_string()** – Returns the stringified version of that object. In the case of an `AgentAudio`, it is a path to the serialized version of the audio.
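A sketch of consuming these wrappers on the caller side, using only the `to_raw`/`to_string` contract stated above:

```python
from transformers.agents.agent_types import AgentImage, AgentText

def handle_output(result):
    if isinstance(result, AgentImage):
        image = result.to_raw()    # the underlying PIL.Image
        print(result.to_string())  # path to the serialized image
        image.show()
    elif isinstance(result, AgentText):
        print(result.to_string())  # behaves as a plain string
    else:
        print(result)
```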
Code for SynapTree, my Knowledge Tree Builder, to demo MoE and Agents:

```python
import streamlit as st
import os
import glob
import re
import base64
import pytz
import time
import streamlit.components.v1 as components
from urllib.parse import quote
from gradio_client import Client
from datetime import datetime

# Page configuration
Site_Name = 'AI Knowledge Tree Builder 🌱🌿 Grow Smarter with Every Click'
title = "🌳✨AI Knowledge Tree Builder🛠️🤖"
helpURL = 'https://huggingface.co/spaces/awacke1/AIKnowledgeTreeBuilder/'
bugURL = 'https://huggingface.co/spaces/awacke1/AIKnowledgeTreeBuilder/'
icons = '🌳✨🛠️🤖'

SidebarOutline = """🌳🤖 Designed with the following tenets:
1. 🌱 **Portability** - Universal access via any device & link sharing
2. ⚡ **Speed of Build** - Rapid deployments < 2min to production
3. 🔗 **Linkiness** - Programmatic access to AI knowledge sources
4. 🎯 **Abstractive** - Core stays lean isolating high-maintenance components
5. 🧠 **Memory** - Shareable flows deep-linked research paths
6. 🤖 **Personalized** - Rapidly adapts knowledge base to user needs
7. 📦 **Living Brevity** - Easily cloneable, self modify data public share results.
"""

st.set_page_config(
    page_title=title,
    page_icon=icons,
    layout="wide",
    initial_sidebar_state="auto",
    menu_items={
        'Get Help': helpURL,
        'Report a bug': bugURL,
        'About': title
    }
)

st.sidebar.markdown(SidebarOutline)

# Initialize session state variables
if 'selected_file' not in st.session_state:
    st.session_state.selected_file = None
if 'view_mode' not in st.session_state:
    st.session_state.view_mode = 'view'
if 'files' not in st.session_state:
    st.session_state.files = []

# --- MoE System Prompts Setup ---
moe_prompts_data = """1. Create a python streamlit app.py demonstrating the topic and show top 3 arxiv papers discussing this as reference.
2. Create a python gradio app.py demonstrating the topic and show top 3 arxiv papers discussing this as reference.
3. Create a mermaid model of the knowledge tree around concepts and parts of this topic. Use appropriate emojis.
4. Create a top three list of tools and techniques for this topic with markdown and emojis.
5. Create a specification in markdown outline with emojis for this topic.
6. Create an image generation prompt for this with Bosch and Turner oil painting influences.
7. Generate an image which describes this as a concept and area of study.
8. List top ten glossary terms with emojis related to this topic as markdown outline."""
# Split the data by lines and remove the numbering/period (assume each line starts with "number. ")
moe_prompts_list = [line.split('. ', 1)[1].strip()
                    for line in moe_prompts_data.splitlines()
                    if '. ' in line]
moe_options = [""] + moe_prompts_list  # blank is default

# Place the selectbox at the top of the app; store selection in session_state key "selected_moe"
selected_moe = st.selectbox("Choose a MoE system prompt", options=moe_options, index=0, key="selected_moe")

# --- Utility Functions ---
def get_display_name(filename):
    """Extract text from parentheses or return filename as is."""
    match = re.search(r'\((.*?)\)', filename)
    if match:
        return match.group(1)
    return filename

def get_time_display(filename):
    """Extract just the time portion from the filename."""
    time_match = re.match(r'(\d{2}\d{2}[AP]M)', filename)
    if time_match:
        return time_match.group(1)
    return filename

def sanitize_filename(text):
    """Create a safe filename from text while preserving spaces."""
    safe_text = re.sub(r'[^\w\s-]', ' ', text)
    safe_text = re.sub(r'\s+', ' ', safe_text)
    safe_text = safe_text.strip()
    return safe_text[:50]

def generate_timestamp_filename(query):
    """Generate filename with format: 1103AM 11032024 (Query).md"""
    central = pytz.timezone('US/Central')
    current_time = datetime.now(central)
    time_str = current_time.strftime("%I%M%p")
    date_str = current_time.strftime("%m%d%Y")
    safe_query = sanitize_filename(query)
    filename = f"{time_str} {date_str} ({safe_query}).md"
    return filename

def delete_file(file_path):
    """Delete a file and return success status."""
    try:
        os.remove(file_path)
        return True
    except Exception as e:
        st.error(f"Error deleting file: {e}")
        return False

def save_ai_interaction(query, ai_result, is_rerun=False):
    """Save AI interaction to a markdown file with the new filename format."""
    filename = generate_timestamp_filename(query)
    if is_rerun:
        content = f"""# Rerun Query
Original file content used for rerun:

{query}

# AI Response (Fun Version)

{ai_result}
"""
    else:
        content = f"""# Query: {query}

## AI Response

{ai_result}
"""
    try:
        with open(filename, 'w', encoding='utf-8') as f:
            f.write(content)
        return filename
    except Exception as e:
        st.error(f"Error saving file: {e}")
        return None

def get_file_download_link(file_path):
    """Generate a base64 download link for a file."""
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()
        b64 = base64.b64encode(content.encode()).decode()
        filename = os.path.basename(file_path)
        # Build an HTML anchor that downloads the file from a base64 data URI.
        return f'<a href="data:text/markdown;base64,{b64}" download="{filename}">{get_display_name(filename)}</a>'
    except Exception as e:
        st.error(f"Error creating download link: {e}")
        return None

# --- New Functions for Markdown File Parsing and Link Tree ---
def clean_item_text(line):
    """
    Remove emoji and numbered prefix from a line.
    E.g., "🔧 1. Low-level system integrations compilers Cplusplus"
    becomes "Low-level system integrations compilers Cplusplus".
    Also remove any bold markdown markers.
    """
    # Remove leading emoji and number+period
    cleaned = re.sub(r'^[^\w]*(\d+\.\s*)', '', line)
    # Remove any remaining emoji (simple unicode range) and ** markers
    cleaned = re.sub(r'[\U0001F300-\U0001FAFF]', '', cleaned)
    cleaned = cleaned.replace("**", "")
    return cleaned.strip()

def clean_header_text(header_line):
    """
    Extract header text from a markdown header line.
    E.g., "🔧 **Systems, Infrastructure & Low-Level Engineering**" becomes
    "Systems, Infrastructure & Low-Level Engineering".
    """
    match = re.search(r'\*\*(.*?)\*\*', header_line)
    if match:
        return match.group(1).strip()
    return header_line.strip()

def parse_markdown_sections(md_text):
    """
    Parse markdown text into sections. Each section starts with a header line
    containing bold text. Returns a list of dicts with keys: 'header' and
    'items' (list of lines). Skips any content before the first header.
    """
    sections = []
    current_section = None
    lines = md_text.splitlines()
    for line in lines:
        if line.strip() == "":
            continue
        # Check if line is a header (contains bold markdown and an emoji)
        if '**' in line:
            header = clean_header_text(line)
            current_section = {'header': header, 'raw': line, 'items': []}
            sections.append(current_section)
        elif current_section is not None:
            # Only add lines that appear to be list items (start with an emoji and number)
            if re.match(r'^[^\w]*\d+\.\s+', line):
                current_section['items'].append(line)
            else:
                if current_section['items']:
                    current_section['items'][-1] += " " + line.strip()
                else:
                    current_section['items'].append(line)
    return sections

def display_section_items(items):
    """
    Display a list of items as links. For each item, clean the text and generate
    search links using the original link set. If a MoE system prompt is selected
    (non-blank), prepend it before the cleaned text.
    """
    # Retrieve the currently selected MoE prompt (if any)
    moe_prefix = st.session_state.get("selected_moe", "")
    search_urls = {
        "📚🔍ArXiv": lambda k: f"/?q={quote(k)}",
        "🔮Google": lambda k: f"https://www.google.com/search?q={quote(k)}",
        "📺Youtube": lambda k: f"https://www.youtube.com/results?search_query={quote(k)}",
        "🎭Bing": lambda k: f"https://www.bing.com/search?q={quote(k)}",
        "💡Claude": lambda k: f"https://claude.ai/new?q={quote(k)}",
        "📱X": lambda k: f"https://twitter.com/search?q={quote(k)}",
        "🤖GPT": lambda k: f"https://chatgpt.com/?model=o3-mini-high&q={quote(k)}",
    }
    for item in items:
        cleaned_text = clean_item_text(item)
        # If a MoE prompt is selected (non-blank), prepend it to the cleaned text.
        final_query = (moe_prefix + " " if moe_prefix else "") + cleaned_text
        links_md = ' '.join([f"[{emoji}]({url(final_query)})" for emoji, url in search_urls.items()])
        st.markdown(f"- **{cleaned_text}** {links_md}", unsafe_allow_html=True)

def display_markdown_tree():
    """
    Allow the user to upload a .md file or load README.md. Parse the markdown
    into sections and display each section in a collapsed expander with the
    original markdown and a link tree of items.
    """
    st.markdown("## Markdown Tree Parser")
    uploaded_file = st.file_uploader("Upload a Markdown file", type=["md"])
    if uploaded_file is not None:
        md_content = uploaded_file.read().decode("utf-8")
    else:
        if os.path.exists("README.md"):
            with open("README.md", "r", encoding="utf-8") as f:
                md_content = f.read()
        else:
            st.info("No Markdown file uploaded and README.md not found.")
            return
    sections = parse_markdown_sections(md_content)
    if not sections:
        st.info("No sections found in the markdown file.")
        return
    for sec in sections:
        with st.expander(sec['header'], expanded=False):
            st.markdown(f"**Original Markdown:**\n\n{sec['raw']}\n")
            if sec['items']:
                st.markdown("**Link Tree:**")
                display_section_items(sec['items'])
            else:
                st.write("No items found in this section.")

# --- Existing AI and File Management Functions ---
def search_arxiv(query):
    st.write("Performing AI Lookup...")
    client = Client("awacke1/Arxiv-Paper-Search-And-QA-RAG-Pattern")
    result1 = client.predict(
        prompt=query,
        llm_model_picked="mistralai/Mixtral-8x7B-Instruct-v0.1",
        stream_outputs=True,
        api_name="/ask_llm"
    )
    st.markdown("### Mixtral-8x7B-Instruct-v0.1 Result")
    st.markdown(result1)
    result2 = client.predict(
        prompt=query,
        llm_model_picked="mistralai/Mistral-7B-Instruct-v0.2",
        stream_outputs=True,
        api_name="/ask_llm"
    )
    st.markdown("### Mistral-7B-Instruct-v0.2 Result")
    st.markdown(result2)
    combined_result = f"{result1}\n\n{result2}"
    return combined_result

@st.cache_resource
def SpeechSynthesis(result):
    documentHTML5 = '''
```