Many ML researchers are unhappy with their development process. Coding from scratch is laborious, since the workflow for developing and testing new models is largely the same each time, yet no-code and low-code platforms do not offer enough granularity to tweak models, loss functions, and training processes. Most ML researchers therefore experiment in Jupyter notebooks, which are quick, composable, and easy to present. However, even with the help of LLMs:

- Copy-pasting code between web interfaces and notebooks is slow
- Errors in generated code are difficult to detect and fix
- Writing the right prompt to generate correct boilerplate code is still repetitive

Our solution takes existing data and a natural-language prompt and builds a model compatible with the shape and types of that data. It also fixes errors in the generated code through recursive API calls: any error is passed back to the LLM along with the failing code (see the sketch below). In the future, this product could be extended to generate code for the full build, train, test, and measure cycle, so that researchers can ask for a set of models to be tested, tweak the generated code as needed, and rapidly evaluate the best model for their needs.
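A minimal sketch of that generate-and-repair loop, assuming an OpenAI-style chat API; the function names (`generate_model_code`, `describe_data`, `call_llm`), the model choice, and the retry budget are illustrative, not part of the product:

```python
import traceback

from openai import OpenAI  # assumption: an OpenAI-style chat API is used

client = OpenAI()


def call_llm(prompt: str) -> str:
    """Send a prompt to the LLM and return its text response."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def describe_data(X, y) -> str:
    """Summarize shapes and dtypes (assumes NumPy-like arrays) so the
    generated model matches the data."""
    return (f"features: shape={X.shape}, dtype={X.dtype}; "
            f"labels: shape={y.shape}, dtype={y.dtype}")


def generate_model_code(task: str, X, y, max_retries: int = 3) -> str:
    """Generate model-building code from a natural-language task
    description, feeding any runtime errors back to the LLM to fix."""
    code = call_llm(
        f"Write Python code that builds a model for this task: {task}\n"
        f"The data is available as X and y, with {describe_data(X, y)}."
    )
    for _ in range(max_retries):
        try:
            exec(code, {"X": X, "y": y})  # smoke-test the generated code
            return code
        except Exception:
            # Recursive repair step: hand the traceback back to the LLM
            code = call_llm(
                f"This code failed:\n{code}\n\n"
                f"Traceback:\n{traceback.format_exc()}\n\n"
                "Return only the corrected Python code."
            )
    raise RuntimeError("generated code still failing after retries")
```

The key design choice is that the repair step receives both the failing code and the full traceback, so the LLM can localize the error instead of regenerating from scratch.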
Category tags: Developer Tools, IDE Extension, Productivity