This is a GGUF q4_k_m quantized version of @mlabonne's model mlabonne/NeuralBeagle14-7B, produced with his AutoGGUF notebook purely for learning purposes.
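
To try the quant locally, a minimal sketch with llama-cpp-python might look like the following (the GGUF file name below is an assumption; replace it with the file actually shipped in this repository):

```python
# Minimal sketch: run the q4_k_m quant locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="neuralbeagle14-7b.Q4_K_M.gguf",  # assumed file name; use the GGUF from this repo
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set to 0 for CPU-only
)

output = llm(
    "Explain what GGUF quantization is in one paragraph.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```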