• zero_spelled_with_an_ecks@programming.dev
    2 days ago

    When I’ve used LLMs for coding help, they’ve flat-out made up functions that don’t exist. Not useful, didn’t point me in a good direction, and wasted my time.
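    One cheap guard against that failure mode (a sketch, not something from the comment itself — the module and attribute names below are just illustrations): before building on a function an LLM suggests, check that the symbol actually exists in the library.

```python
import importlib


def symbol_exists(module_name: str, attr: str) -> bool:
    """Return True if `module_name.attr` really exists.

    A quick sanity check for LLM-suggested functions: import the
    module and look the attribute up, instead of trusting the model.
    """
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)


print(symbol_exists("os.path", "join"))        # real function -> True
print(symbol_exists("os.path", "frobnicate"))  # made-up name -> False
```

    It won’t catch a real function being used wrongly, but it filters out the pure hallucinations in one line.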

    • vivendi@programming.dev
      2 days ago

      You need to actively have the relevant code in context.

      I use it to describe code from shitty undocumented libraries, and my local models can explain the code well enough in lieu of actual documentation.
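      A minimal sketch of that workflow, with the hedges stated up front: it assumes an Ollama-style local server on `localhost:11434` exposing the `/api/generate` endpoint; swap in whatever local runner you actually use. The point is the first function — pasting the relevant library source into the prompt so the model describes real code instead of guessing.

```python
import json
import urllib.request


def build_prompt(library_source: str, question: str) -> str:
    """Put the relevant code in context so the model isn't guessing."""
    return (
        "Here is the source of an undocumented library:\n\n"
        f"{library_source}\n\n"
        f"Explain: {question}"
    )


def ask_local_model(prompt: str, model: str = "llama3") -> str:
    # Assumes an Ollama-style local API; endpoint and payload shape
    # follow that server's /api/generate contract.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

      Usage: `ask_local_model(build_prompt(open("weird_lib.py").read(), "what does Frob.run do?"))` — the answer is only as good as the source you put in context.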

    • sugar_in_your_tea@sh.itjust.works
      2 days ago

      Sure, they certainly can hallucinate things. But some models are way better than others at a given task, so it’s important to find a good fit and to learn to use the tool effectively.

      We have three different models at work, and they behave quite differently and are good at different things.