AI & ML interests

Application Security, AI and Software Engineering

Recent Activity

lambdasec's activity

codelion
posted an update 4 months ago
We recently worked with OpenAI to fine-tune gpt-4o and built the SOTA model for the patched-codes/static-analysis-eval benchmark. All the code and data (patched-codes/synth-vuln-fixes) on how we did it are available on their GitHub - https://github.com/openai/build-hours/tree/main/5-4o_fine_tuning.

Here are some tips based on our experience:

→ Establish baseline with "conditioning" / prompting

→ Task-specific datasets are ideal for PEFT; hard to beat gpt-4o on "broad" tasks

→ Add your best system prompt to each example

→ Ensure training data distribution is similar to inference data

→ Shorten instructions with concise prompts; may require more examples.

→ Define clear evaluation metrics (seriously, please eval!)
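
To make the data-formatting tips concrete, here is a minimal sketch of how a training example could be assembled in the standard OpenAI chat fine-tuning JSONL format. The system prompt, instruction wording, and file names below are placeholders for illustration, not the exact ones we used (those live in patched-codes/synth-vuln-fixes):

```python
import json

# Placeholder system prompt - the real one ships with the dataset in
# patched-codes/synth-vuln-fixes; reuse your best prompt in every example.
SYSTEM_PROMPT = "You are a security engineer. Fix the vulnerability in the given code."

def to_finetune_record(vulnerable_code: str, fixed_code: str) -> dict:
    # One record per example: the same system prompt each time, a concise
    # instruction, and the fixed code as the assistant answer.
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Fix the vulnerability:\n{vulnerable_code}"},
            {"role": "assistant", "content": fixed_code},
        ]
    }

with open("train.jsonl", "w") as f:
    for vuln, fix in [("eval(user_input)", "ast.literal_eval(user_input)")]:
        f.write(json.dumps(to_finetune_record(vuln, fix)) + "\n")
```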

You can see more details on the benchmark and process here - https://www.patched.codes/blog/the-static-analysis-evaluation-benchmark-measuring-llm-performance-in-fixing-software-vulnerabilities
codelion
posted an update 6 months ago
A new paper titled "STALL+: Boosting LLM-based Repository-level Code Completion with Static Analysis" shows the benefits of integrating static analysis with LLMs. (https://arxiv.org/abs/2406.10018)

The authors evaluate 4 key questions:

- How does each static analysis integration strategy perform in LLM-based repository-level code completion?
> They found that integrating static analysis in the prompting phase (especially with file-level dependencies) achieves substantially larger improvements than integration in other phases.

- How do different combinations of integration strategies affect LLM-based repository-level code completion?
> Languages that are easier to analyze, like Java, show larger improvements than dynamic languages like Python.

- How do static analysis integration strategies perform when compared or combined with RAG in LLM-based repository-level code completion?
> Static analysis and RAG are complementary and boost the overall accuracy.

- What are the online costs of different integration strategies in LLM-based repository-level code completion?
> Combining prompting-phase static analysis and RAG is the best option for cost-effectiveness.

In my @owasp App Sec keynote last year, I described how one can do static analysis augmented generation (SaAG) to boost the accuracy of LLM-based patches for vulnerability remediation. (You can see the talk here - https://www.youtube.com/watch?v=Cw4-ZnUNVLs)
codelion
posted an update 6 months ago
LLM-Assisted Patching of Polyfill Supply Chain Attack

A recent supply chain attack on polyfill.io affected over 100,000 websites (see https://www.patched.codes/blog/patching-the-polyfill-supply-chain-attack). To address this issue, we show how developers can leverage Large Language Models (LLMs) for efficient vulnerability patching:

1. Automated Detection: Using Semgrep rules (see https://semgrep.dev/playground/r/KxUvD7w/asankhaya_personal_org.polyfill-compromise-copy) to identify vulnerable code.

2. LLM-Powered Patching: Utilizing Patchwork (https://github.com/patched-codes/patchwork), an open-source solution that employs LLMs to automatically fix vulnerabilities.

3. Custom Workflows: The "Fixpolyfill" patchflow (https://github.com/patched-codes/patchwork-configs/tree/main/patchflows/Fixpolyfill), tailored for this specific attack, can be easily run across multiple repositories.

4. Scalable Solutions: Options to scan and patch entire GitHub/GitLab organizations, with automated pull request generation.

5. Rapid Response: LLM-assisted patching enables swift action to minimize damage from supply chain attacks.

This approach demonstrates how LLMs can be effectively used to quickly respond to and remediate widespread security vulnerabilities in code.
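
As a rough illustration of the detection step (step 1), here is a small Python sketch that flags script tags loading from polyfill.io in local HTML files. This is only an approximation of what the linked Semgrep rule does, not a replacement for it:

```python
import re
import sys
from pathlib import Path

# Illustrative detector only - the actual detection in the patchflow uses
# the Semgrep rule linked in step 1 above.
POLYFILL_PATTERN = re.compile(
    r"""<script[^>]+src=["'][^"']*polyfill\.io[^"']*["']""",
    re.IGNORECASE,
)

def scan(root: str) -> list[tuple[Path, int]]:
    findings = []
    for path in Path(root).rglob("*.html"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if POLYFILL_PATTERN.search(line):
                findings.append((path, lineno))
    return findings

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, lineno in scan(root):
        print(f"{path}:{lineno}: loads a script from polyfill.io - remove or replace it")
```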
codelion
posted an update 6 months ago
The new Claude 3.5 Sonnet model from Anthropic has been getting good reviews since last night. It is quite good at coding-related tasks. We tried it on the Static Analysis Eval benchmark (patched-codes/static-analysis-eval), which measures the ability of an LLM to fix vulnerabilities. The model scores 59.21%, which is good but not better than other frontier models (like GPT-4, Gemini-1.5 and Llama-3).
codelion
posted an update 7 months ago
WorkerSafetyQAEval: A new benchmark to evaluate question answering in the worker safety domain

Happy to share a new benchmark on question answering for the worker safety domain. The benchmark and leaderboard are available at
codelion/worker-safety-qa-eval

We evaluate popular generic chatbots like ChatGPT and HuggingChat on WorkerSafetyQAEval and compare them with a domain-specific RAG bot called Securade.ai Safety Copilot (codelion/safety-copilot). This highlights the importance of having domain-specific knowledge for critical domains like worker safety that require high accuracy. Securade.ai Safety Copilot achieves ~97% on the benchmark, setting a new SOTA.

You can read more about the Safety Copilot on https://securade.ai/blog/how-securade-ai-safety-copilot-transforms-worker-safety.html
codelion
posted an update 7 months ago
After the announcements yesterday, I got a chance to try the new gemini-1.5-flash model from @goog1e. It is almost as good as gpt-4o on the StaticAnalysisEval (patched-codes/static-analysis-eval). It is also a bit faster than gpt-4o and much cheaper.

I did run into a recitation flag with an example in the dataset where the API refused to fix the vulnerability and flagged the input as using copyrighted content. This is something you cannot unset even with the safety filters, and it seems to be an existing bug - https://issuetracker.google.com/issues/331677495

But overall, you get gpt-4o-level performance for 7% of the price, so we are thinking of making it the default in patchwork (https://github.com/patched-codes/patchwork). You can use the google_api_key and model options to choose gemini-1.5-flash-latest to run it with patchwork, as sketched below.
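
For reference, here is a rough sketch of what that switch could look like when invoking patchwork from a script. The key=value option names come from the post above, but the patchflow name ("AutoFix") and the exact CLI shape are assumptions here - check the patchwork README for the authoritative usage:

```python
import os
import subprocess

# Sketch only: assumes the patchwork CLI accepts key=value options and that
# an "AutoFix" patchflow is available; verify against the patchwork README.
subprocess.run(
    [
        "patchwork",
        "AutoFix",
        f"google_api_key={os.environ['GOOGLE_API_KEY']}",
        "model=gemini-1.5-flash-latest",
    ],
    check=True,
)
```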
codelion
posted an update 7 months ago
The new gpt-4o model seems to be a very good coder. OpenAI reported a 90+ score on https://huggingface.co/datasets/openai_humaneval

We tried the new model on our patched-codes/static-analysis-eval which evaluates the model on vulnerability remediation. gpt-4o has reclaimed the top spot on our leaderboard (from meta-llama/Meta-Llama-3-70B-Instruct).

You can now use the new model with our open-source framework PatchWork - https://github.com/patched-codes/patchwork by passing model=gpt-4o on the CLI.
codelion
posted an update 8 months ago
Happy to announce patchwork, our open-source framework to turbocharge DevOps - https://github.com/patched-codes/patchwork

You can use it to build patchflows - workflows that use LLMs for software development tasks like bug fixing, pull request review, library migration and documentation.

Supports any LLM of your choice including our own MoE model - patched-codes/patched-mix-4x7B

Give it a try!
codelion
posted an update 8 months ago
We just released a new MoE model (meraGPT/mera-mix-4x7B) that is half as large as Mixtral-8x7B while still being competitive with it across different benchmarks. mera-mix-4x7B achieves 76.37 on the open LLM eval.

You can check mera-mix-4x7B out on HF here - meraGPT/mera-mix-4x7B
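
If you want to try it locally, a standard transformers loading snippet like the one below should work (untested here; it assumes a recent transformers install and enough GPU memory for a 4x7B MoE):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard Hugging Face loading - precision and device placement are up to you.
model_id = "meraGPT/mera-mix-4x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```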
codelion
updated a Space over 1 year ago