codelion posted an update May 15
After the announcements yesterday, I got a chance to try the new gemini-1.5-flash model from @goog1e . It is almost as good as gpt-4o on StaticAnalysisEval (patched-codes/static-analysis-eval), and it is also a bit faster than gpt-4o and much cheaper.

I did run into a recitation flag with one example in the dataset, where the API refused to fix the vulnerability and flagged the input as using copyrighted content. This is something you cannot disable even with the safety filters, and it seems to be an existing bug: https://issuetracker.google.com/issues/331677495

But overall you get gpt-4o-level performance at 7% of the price, so we are thinking of making it the default in patchwork - https://github.com/patched-codes/patchwork You can use the google_api_key and model options to select gemini-1.5-flash-latest when running patchwork.
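For example, assuming you have a Google API key, running the AutoFix patchflow with Gemini should look something like this (exact option names may vary between patchwork versions):

patchwork AutoFix google_api_key=<your-google-api-key> model=gemini-1.5-flash-latest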

cool, let us know once it's in patchwork! also interested to hear what projects you've already used patchwork for


You can use it in patchwork already by passing model=gemini-1.5-flash-latest on the CLI.

We have benchmarked the AutoFix patchflow in patchwork on a number of projects. For example, here are a few PRs generated with different models that fix vulnerabilities:

https://github.com/patched-codes/dvpwa/pulls

https://github.com/patched-codes/tarpit/pulls

https://github.com/patched-codes/shiftleft-java-demo/pulls

https://github.com/patched-codes/AltoroJ/pulls

https://github.com/patched-codes/pygoat/pulls