
Re: Aw: Re: Community renewal and project obsolescence



On 12/30/23 21:40, Mo Zhou wrote:

>> I am not able to develop DebGPT and confess I am not investing my
>> time in learning to do it.  But can we attract the people who want
>> to tinker in this direction?
>
> Debian funds should be able to cover the hardware requirement and
> training expenses even if they are slightly expensive. The more
> expensive thing is the time of domain experts. I can train such a
> model but clearly I do not have bandwidth for that.

No. I changed my mind.

I can actually quickly wrap some Debian-specific prompts around an existing chat LLM. This is easy: it needs no training procedure and no expensive hardware (although inference may still require 1~2 GPUs with 24GB of memory).
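Roughly, the wrapping could look like the sketch below. This is only an illustration, assuming the Hugging Face transformers library and an arbitrary example 7B chat model; the actual DebGPT code may well do it differently.

    # Sketch: prepend a Debian-specific instruction to a user question and
    # feed it to an off-the-shelf chat LLM.  Model name and prompt wording
    # are placeholders, not DebGPT's actual choices.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "HuggingFaceH4/zephyr-7b-beta"   # example 7B chat model

    DEBIAN_PROMPT = (
        "You are an assistant for Debian development.  Answer questions "
        "about Debian Policy, debhelper and packaging, and propose "
        "concrete patches when asked.\n\n"
    )

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    # device_map="auto" needs the accelerate package and spreads the model
    # over whatever GPUs are available.
    model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

    def ask(question: str) -> str:
        # Wrap the question with the Debian-specific prompt and reuse the
        # model's own chat template, so no training is involved at all.
        messages = [{"role": "user", "content": DEBIAN_PROMPT + question}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        out = model.generate(inputs, max_new_tokens=512)
        return tokenizer.decode(out[0][inputs.shape[-1]:],
                                skip_special_tokens=True)

    print(ask("Which debhelper compat level should a new package use?"))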

The project repo has been created here: https://salsa.debian.org/deeplearning-team/debgpt

I have enabled issues, so people interested in this can move the detailed discussion to the repo's issue tracker.

I'm sure it is already possible to let an LLM read the long Policy document or the debhelper man pages for us, and provide some suggestions or patches. The things I'm uncertain about are (1) how well a smaller LLM, such as a 7B or 13B one, can do compared to proprietary LLMs in this case; and (2) how well a smaller LLM holds up when it is quantized to int8 or even int4 so it can run on laptops.
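As a rough illustration of both concerns, something like the sketch below should already run on a single small GPU: it loads an example 7B chat model quantized to 4-bit via bitsandbytes and lets it read a debhelper man page. Again, the model name and prompt wording are placeholders, and I have not measured how the answers compare with proprietary LLMs.

    # Sketch: load a small chat LLM in 4-bit and let it read the dh(1) man
    # page.  Everything here (model name, prompt) is illustrative only, and
    # it assumes debhelper is installed so the man page exists.
    import subprocess
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig)

    MODEL = "HuggingFaceH4/zephyr-7b-beta"   # example 7B chat model

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL,
        device_map="auto",
        # int4 quantization so the model fits on a small GPU or a laptop
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    )

    # Dump the dh(1) man page to plain text and put it into the prompt.
    manpage = subprocess.run(
        ["man", "-P", "cat", "dh"], capture_output=True, text=True
    ).stdout
    # Crude truncation in case the man page exceeds the context window.
    manpage = manpage[:16000]

    question = (
        "Here is the dh(1) man page:\n\n" + manpage +
        "\n\nBased on it, how do I override dh_auto_configure in "
        "debian/rules?"
    )
    messages = [{"role": "user", "content": question}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(out[0][inputs.shape[-1]:],
                           skip_special_tokens=True))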

Oh, BTW, not all of the dependencies needed by the project are available in the Debian archive yet.

