Ask HN: Is synthetic data generation practical outside academia?

I keep seeing synthetic data pipelines powering the latest LLM “breakthroughs”:

• TinyZero’s $30 fine-tuning workflow
• Sky-T1’s $450 reasoning-model build
• Meta AI’s Llama 3 herd (2024 paper detailing their synthetic-data training)
• Berkeley OpenThoughts (“Data Recipes for Reasoning Models”), published yesterday

There are also open-source toolkits you can experiment with:

https://github.com/meta-llama/synthetic-data-kit
https://github.com/bespokelabsai/curator

But it still feels very research-oriented. I haven’t found many examples of these pipelines running in real-world products.

I’m curious:

1. Who is using synthetic-data pipelines in production today?

2. What tasks does it actually improve? For example, fine-tuning smaller models for specific tasks?

Any real-world stories, pointers, or further reading would be hugely appreciated. Thanks!

4 points | by cpard 16 hours ago

3 comments

  • publicdaniel 4 hours ago
    I’m currently working on a document parsing engine for a specific type of document. The inputs are usually PDFs. I’m able to get great structured output from both the latest Gemini Flash models and the latest Llama Scout models. The best latency I get with Gemini is about 5 seconds end to end. With Llama hosted on Groq it’s about 3 seconds.

    My use case is latency-constrained, so I’m exploring fine-tuning / distillation to see if I can get latency under a second. I imagine these are the kinds of scenarios where it’s still worth it to fine-tune and distill.

    My plan is to generate a lot of synthetic training data using more capable, slower foundation models and use that to train the smaller model.
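
    Roughly what I have in mind, as a sketch rather than a working implementation (call_teacher_model is a placeholder for whatever large-model call you end up using, and chat-style JSONL is just one common fine-tuning format):

      import json
      from pathlib import Path

      def call_teacher_model(document_text: str) -> dict:
          """Placeholder: call the large teacher model (Gemini / Llama) and
          return the parsed structured output for one document."""
          raise NotImplementedError

      def build_distillation_set(pdf_texts: list[str], out_path: str) -> None:
          """Write teacher outputs as chat-style JSONL, a format most
          fine-tuning tooling accepts."""
          with Path(out_path).open("w") as f:
              for text in pdf_texts:
                  structured = call_teacher_model(text)
                  record = {
                      "messages": [
                          {"role": "system", "content": "Extract the document fields as JSON."},
                          {"role": "user", "content": text},
                          {"role": "assistant", "content": json.dumps(structured)},
                      ]
                  }
                  f.write(json.dumps(record) + "\n")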

  • publicdaniel 4 hours ago
    It’s really useful for generating synthetic data for search and recommendations that you can use to train a smaller / faster model. This is especially useful if you don’t have lots of click-through data, or in cold-start scenarios. There are some good articles that cover this; if you’re interested, I’ll try to find them and share.
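
    A rough sketch of what that can look like (generate_queries is a placeholder for an LLM call, and the negative sampling here is deliberately naive):

      import json
      import random

      def generate_queries(document: str, n: int = 3) -> list[str]:
          """Placeholder: ask a capable LLM for n plausible user queries
          that this document should satisfy."""
          raise NotImplementedError

      def build_retrieval_pairs(documents: list[str], out_path: str) -> None:
          """Emit (query, positive, negative) triples for training a small
          retriever or ranker."""
          with open(out_path, "w") as f:
              for doc in documents:
                  for query in generate_queries(doc):
                      # Naive negative: any other document; real pipelines mine harder negatives.
                      negative = random.choice([d for d in documents if d != doc])
                      f.write(json.dumps({"query": query,
                                          "positive": doc,
                                          "negative": negative}) + "\n")
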
  • sargstuff 15 hours ago
    Non-AI-specific 'synthetic data generation':

    1) Historically used for processes that rely on time series / simulation & modeling / forecasting, e.g. weather forecasting (toy sketch below); related points in [0].

    2) a) Testing with actual 'sensitive' data may not be possible for security reasons (e.g. payroll information, stock market price influences) [1]. b) Insufficient/incomplete information, i.e. figuring out how well what's known matches 'reality', which may also suggest where to look for 'missing' pieces of the model.
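
    Toy sketch of 1): a simulated daily series with trend, weekly seasonality, and noise for exercising a forecasting pipeline without real data (numbers are arbitrary):

      import numpy as np

      def synthetic_series(days: int = 365, seed: int = 0) -> np.ndarray:
          """Daily series = baseline + slow trend + weekly cycle + noise."""
          rng = np.random.default_rng(seed)
          t = np.arange(days)
          trend = 0.05 * t
          seasonality = 3.0 * np.sin(2 * np.pi * t / 7)
          noise = rng.normal(0.0, 1.0, size=days)
          return 100.0 + trend + seasonality + noise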

    -----

    [0] : https://www.oreilly.com/library/view/practical-time-series/9...

    [1] : https://www.k2view.com/what-is-synthetic-data-generation/

    • cpard 14 hours ago
      This is great. Synthetic data has been around for a long time; I think the difference with the LLM-related cases is that in the past it was primarily structured data, which was a bit easier to approximate with some distribution or grammar.

      With synthetic data for large language models it’s more about QA pairs and reasoning traces for solving complicated problems.
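
      A minimal sketch of that newer flavor, turning source passages into QA pairs with a strong model (generate_qa and the prompt are placeholders, not any particular library’s API):

        import json

        QA_PROMPT = ("Write one question this passage answers and the answer.\n"
                     "Passage: {passage}\n"
                     "Return JSON with keys 'question' and 'answer'.")

        def generate_qa(passage: str) -> dict:
            """Placeholder: send QA_PROMPT.format(passage=passage) to a strong
            model and parse the JSON it returns."""
            raise NotImplementedError

        def build_qa_dataset(passages: list[str], out_path: str) -> None:
            """Write (question, answer, source) records as JSONL for
            fine-tuning or evaluating a smaller model."""
            with open(out_path, "w") as f:
                for p in passages:
                    qa = generate_qa(p)
                    f.write(json.dumps({"question": qa["question"],
                                        "answer": qa["answer"],
                                        "source": p}) + "\n")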