from 10b0t0mized: I miss the days when I had to go through a humiliation ritual before getting my questions answered.

Nowadays you can just ask your questions to an infinitely patient entity. AI is really terrible.

  • resipsaloquitur@lemm.ee · 10 hours ago (edited)

    It’s not LLMs — see the peak at 2013. They aggressively started closing any “duplicate” questions around then. The whole premise was that experts were supposed to answer questions for clout that would bolster their resumes, but after getting silenced a few times, why would they come back? And anyone with the temerity to ask a question that had already been asked ten years ago (with or without a good answer) would also never come back after getting shut down.

    They couldn’t decide if they were a forum or Wikipedia and became neither.

  • JackbyDev@programming.dev · 16 hours ago

    Note that the decline began well before “AI” stuff became a thing. Stack Overflow has had a major culture problem as well as not treating their users with respect for ages.

    For the part about respecting users, they have a history of ignoring Meta (their site specifically for discussing Stack Overflow itself) while acting like they use it.

  • zarathustra0@lemmy.world · 18 hours ago (edited)

    In the future we will be dependent on LLMs for everything because the only people with enough money to maintain libraries of data which are untainted by LLMs will be the people who own the LLMs.

    Step 1: Steal all of the data (including copyrighted stuff)

    Step 2: Poison the well

    Step 3: Profit

  • zr0@lemmy.dbzer0.com · 17 hours ago

    Not surprised. Even without the LLM boom, StackOverflow was doomed for the same reason Reddit is doomed: power-tripping bastards, gatekeeping everything that is not part of their narrow-minded world.

  • letsgo@lemm.ee · 18 hours ago

    I’m not surprised. StackOverflow has moderated itself out of relevance. Ask a question and get flamed. DDG a question plus “stackoverflow” and get something that may well have been correct and useful in 2012 but tech moves on and it’s now archaic trivia, somewhat akin to facts about punched cards. “Help me StackOverflow, you’re my only hope” hasn’t been true for quite some time now.

    • JackbyDev@programming.dev · 16 hours ago

      They moderated themselves out of relevance because when you ask new questions that aren’t duplicates they still close them as duplicates.

  • Elgenzay@lemmy.ml · 1 day ago

    Do you think LLMs will perform worse on modern problems in the future, due to the lack of recent StackOverflow training data?

    • Rexios@lemm.ee · 15 hours ago

      Maybe, but a lot of StackOverflow answers come straight from documentation anyway, so it might not matter.

    • HelloRoot@lemy.lol · 23 hours ago (edited)

      StackOverflow training data

      Q: detailed problem description with research and links explaining how problem is different from existing posts and that the mentioned solutions did not work for this case.

      A: duplicate. (links to same url Q explicitly mentioned and explained)

    • atzanteol@sh.itjust.works · 22 hours ago

      I suspect it may be a self-balancing problem. For topics that LLMs don’t handle well, there will be discussions in forums. Then the AI will have training data and catch up.

    • ikt@aussie.zone (OP) · 1 day ago

      At the current rate, yeah, it simply isn’t good enough. My go-to question is to print Hello World in Brainfuck, and once it passes that, to have it print Hello <random other place>.

      In this case I just asked it ‘I have a question about brainfuck’ and it gave an example of Hello World! Great!

      Unfortunately it just outputs “HhT”

      So I know that they are trying hard with synthetic data:

      https://www.youtube.com/watch?v=m1CH-mgpdYg

      but I think fundamentally they just need to be straight-up better at absorbing the data they’ve already got.
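
      For anyone who wants to run the same check, the “did it actually print Hello World” test can be automated with a tiny interpreter instead of eyeballing the model’s output. A minimal sketch in Python, assuming standard Brainfuck semantics (8-bit wrapping cells, zero-initialized tape); `run_bf` and the program string are illustrative, not any particular tool:

```python
# Minimal Brainfuck interpreter (a sketch, assuming standard semantics:
# 8-bit wrapping cells, zero-initialized tape, no input needed here).
def run_bf(code, tape_len=30000):
    # Precompute matching bracket positions for fast jumps.
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, out, ptr, pc = [0] * tape_len, [], 0, 0
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]      # jump past matching ']'
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]      # jump back to matching '['
        pc += 1
    return ''.join(out)

# The canonical Hello World program (the well-known Wikipedia version):
hello = ("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
         ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.")
print(run_bf(hello))  # should print: Hello World!
```

      Paste whatever program the LLM produces into `run_bf` and compare the output string directly; an “HhT” result fails immediately instead of looking plausible at a glance.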

      • cevn@lemmy.world · 18 hours ago

        I think the disconnect we are experiencing is that the AI will write some code and never execute it. A really smart AI should absolutely be trying to compile it in some sandbox, by installing it on some box. Maybe someone has already come up with this.

    • markovs_gun@lemmy.world · 1 day ago

      I think so. I am legitimately worried about what happens in 10 years, with everyone relying on LLMs to code, when nobody seems to be planning for how things will work once LLM coding is nearly universal.

      • vrighter@discuss.tchncs.de · 21 hours ago

        there’s nothing to plan for. Shit will be broken, shit is already expected to be broken nowadays, business as usual. I hate what programming has become.

      • ikt@aussie.zone (OP) · 1 day ago

        I do wonder if a new programming language will be invented that is ‘AI friendly’ and far better integrated.

        • markovs_gun@lemmy.world · 23 hours ago

          The main concern for me is how that would even work. LLMs struggle to come up with anything truly novel, and are mostly copying from their training set. What happens when 99% of the training corpus for a programming language is AI code or at least partially AI code? Without human data to start with how do LLMs continue to get better? This is kind of an issue with everything LLMs do but especially programming.

          • ikt@aussie.zone (OP) · 21 hours ago (edited)

            I’m thinking more along the lines of a new programming language unlike any ever made, designed simply for an LLM to produce, like machine generation of machine code (but who knows — LLMs are frankly magic to me, and the last thing I want is to be like someone in the early 1900s predicting that in the year 2000 we’ll all use advanced hot air balloons to move about).

    • Kühlschrank@lemmy.world · 22 hours ago

      Do LLMs get the bulk of their training data from Stack? Legitimately curious, as I am sure they do get at least some training from non-Q&A-style sources.

    • Psaldorn@lemmy.world · 1 day ago

      That and they cover up half the fucking page when you try to view it. Google login, giant cookie popup etc

    • JackbyDev@programming.dev · 16 hours ago

      Nah, that drop comes WELL before AI answers. Look at the dates. They’ve had a culture of people overly aggressively closing new questions for pointless/irrelevant reasons as well as being generally nasty to new users for ages. Sure, it started dropping way faster post 2020 because of AI, but the problem was already there.

    • Pennomi@lemmy.world · 1 day ago

      The fast drop yes, but really it’s been in decline for around a decade before that.

      • MrZee@lemm.ee · 1 day ago

        Interesting! When I first read your comment, I looked at the chart and thought “it looks to me like the drop starts at the end of 2022. Isn’t that before LLMs started being used broadly?”

        Nope. Looks like ChatGPT was released in November 2022. It doesn’t feel like it’s been around that long, but I guess it has.

      • Vince@lemmy.world · 1 day ago

        That sucks. Is there an alternative people are using? Seems like it would still be a useful knowledge base to have.

        • HellieSkellie@lemmy.dbzer0.com · 1 day ago

          The common alternative is to just ask ChatGPT your software questions, get false information from the AI, and then try and push that horrible code to production anyway if my past two jobs are any indicator.

          Stack Overflow is still useful to find old answers, but fucking sucks to ask new questions on. If you aren’t getting an AI answer to your question, then you’re getting your question deleted for some made up reason.

          The real answer that everyone hates is: If you have a question about something, read the documentation and experiment with it to figure that something out. If the documentation seems wrong, submit an issue report to the devs (usually on GitHub) and see what they say.

          The secondary answer is that almost everything FOSS has a Slack channel, or sometimes even a Discord channel. Go to those channels and ask the people who use/make whatever tool you need help with.

          • atzanteol@sh.itjust.works · 22 hours ago

            The common alternative is to just ask ChatGPT your software questions, get false information from the AI, and then try and push that horrible code to production anyway if my past two jobs are any indicator.

            If you have developers pushing bad and broken code to production your problem isn’t AI.

    • magic_lobster_party@fedia.io · 1 day ago

      I believe it’s more of a generational shift.

      The age groups who used to rely on SO are now skilled enough not to rely on it as much (or they more often have the types of questions SO can’t answer).

      Younger age groups probably prefer other means of learning (like ChatGPT, Discord and YouTube videos).

      • shaserlark@sh.itjust.works · 1 day ago (edited)

        Yeah, I’m working in some niche, and newbies get referred to a Stack Overflow because of “no developer support on their Discord”. But if you ask a question there, no one will ever answer; on the other hand, if you know where and how to ask, you’ll actually get help on Discord. I feel like SO is pretty much dead for anything where change happens quickly.

        • errer@lemmy.world · 1 day ago

          There’s also only so many ways to ask how to sort a list or whatever and SO removes duplicate questions. So at some point the number of unique questions asked begins to plateau. I think that explains the slow drop before LLMs came on the scene.

      • Korhaka@sopuli.xyz · 1 day ago

        I assumed it was because StackOverflow already had all the answers I needed, except for the things too obscure to search for, which end in me crying and trying to piece an answer together from scraps of info across 50 different tabs.

    • calcopiritus@lemmy.world · 1 day ago

      Yes. But not just in the “obvious” way.

      I first started to contribute back when LLMs first appeared. Then SO became LLM training grounds, which made me stop contributing instantly.

      I guess a not-insignificant number of people stopped answering questions, which means fewer search results, which ends in less traffic.

      I’m sure the fall wouldn’t be as big as it is if they hadn’t allowed LLMs to train on their data.

    • juli@lemmy.world · 1 day ago

      It probably started off when reddit/discord became a friendly place for troubleshooting (code among other things), then the AI dropped it off the cliff.
