• vane@lemmy.world · edited · 7 hours ago


    As if they know what they’re doing now. They combine 1–5 TB of tokens with random numbers.

  • Daniel Quinn@lemmy.ca · 1 day ago

    This is the sort of propaganda these companies leak as if it’s a problem when what they really want is to perpetuate this idea that their autocomplete machines are magical and mysterious.

  • snooggums@lemmy.world · 2 days ago

    So, this is where the researchers are acknowledging that they are choking on the smell of their own farts.

    However, there are still a lot of questions about how these advanced models are actually working. Some research has suggested that reasoning models may even be misleading users through their chain-of-thought processes.

    Bull fucking shit. Making something complex enough to be unpredictable doesn’t mean it is intentional. Reasoning models are not reasoning, they are outputting something that looks like reasoning. There is no intent behind it to mislead or do anything on purpose. It just looks that way because the output is in a format that looks like a person writing down their thoughts.

    • tarknassus@lemmy.world · 1 day ago

      Some research has suggested that reasoning models may even be misleading users through their chain-of-thought processes.

      Sure, let’s put this loose into the public’s hands. It’ll be fine.

  • technocrit@lemmy.dbzer0.com · edited · 2 days ago

    Meaningless grifter nonsense.

    On the one hand, it’s relatively simple to understand how the data is processed.

    On the other hand, it’s impossible for these grifters to “understand” a function developed by processing endless stolen data.

  • ZDL@lazysoci.al · 1 day ago

    They actually believe their own press and think their “reasoning” models say anything meaningful when they “explain” their “reasoning”?

    Wow! I could be a top AI researcher too and I don’t even know how to begin programming a computer!