• alykanas@slrpnk.net · 14 days ago

      Project Manager here, and where I’m from it’s common knowledge that 9 women can have a baby in a month.

      • isaacd@lemmy.world · 14 days ago

        Or!—hear me out—one woman whose 8 co-gestators were just laid off by someone who doesn’t understand what their job was

  • oakey66@lemmy.world · 14 days ago

    AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There’s no thought or reasoning. They don’t understand inputs. They mimic human speech. They’re not presenting anything meaningful.
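    The “guess the next word” mechanic described above can be sketched as a toy bigram model. This is purely illustrative (the corpus and function names are invented); real LLMs are neural networks over tokens, but the core loop — score continuations, sample one — has the same shape:

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, rng=random.Random(0)):
    """Sample a plausible continuation: weighted guessing, no understanding."""
    options = follows[word]
    words, weights = zip(*options.items())
    return rng.choices(words, weights=weights, k=1)[0]

print(next_word("the"))  # "cat", "mat", or "fish", weighted by frequency
```

    Nothing in this loop models meaning; it only reproduces statistical patterns of the text it counted, which is the “stochastic parrot” point in miniature.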

    • Jesus_666@lemmy.world · 14 days ago

      That undersells them slightly.

      LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.

      LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they’ll also happily generate complete bullshit answers, and to them there’s no difference between those and a real answer.

      They’re text transformers marketed as general problem solvers because a) the market for text transformers isn’t that big and b) general problem solvers are what AI researchers are always trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.

        • CarbonBasedNPU@lemm.ee · 13 days ago

          They make shit up fucking constantly. If I have to google whether the answer I was given was right, I might as well cut out the middleman and just google it myself. If I can’t understand it at that point, maybe then ask the LLM to rephrase the answer.

          • Blue_Morpho@lemmy.world · 13 days ago

            You missed the part where DeepSeek uses a separate inference step to take the LLM output and reason through it to see whether it makes sense.

            No, it’s not perfect. But it isn’t just predicting text the way AI was a couple of years ago.

    • raspberriesareyummy@lemmy.world · 14 days ago

      I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.

      • Opinionhaver@feddit.uk · 14 days ago

        pretending LLMs are AI

        LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.

        However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.

        • raspberriesareyummy@lemmy.world · 14 days ago

          Here we go… Fanperson explaining the world to the dumb lost sheep. Thank you so much for stepping down from your high horse to try and educate a simple person. /s

          • Opinionhaver@feddit.uk · edited · 14 days ago

            How’s insulting the people respectfully disagreeing with you working out so far? That ad-hominem was completely uncalled for.

            • raspberriesareyummy@lemmy.world · 14 days ago

              “Fanperson” is an insult now? Cry me a river, snowflake. Also, you weren’t disagreeing, you were explaining something to someone perceived less knowledgeable than you, while demonstrating you have no grasp of the core difference between stochastics and AI.

          • Opinionhaver@feddit.uk · 14 days ago

            It’s not. Bubble sort is a purely deterministic algorithm with no learning or intelligence involved.

              • Opinionhaver@feddit.uk · 14 days ago

                Bubble sort is just a basic set of steps for sorting numbers - it doesn’t make choices or adapt. A chess engine, on the other hand, looks at different possible moves, evaluates which one is best, and adjusts based on the opponent’s play. It actively searches through options and makes decisions, while bubble sort just follows the same repetitive process no matter what. That’s a huge difference.
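                The contrast being drawn here — a fixed procedure versus a search over alternatives — can be sketched in a few lines. The Nim-style pile game below is a made-up stand-in for chess, chosen only to keep the search tiny; all names are invented for illustration:

```python
def bubble_sort(xs):
    """Fixed procedure: same passes, same comparisons, no choices
    beyond the hard-coded swap rule."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def best_move(pile, is_me=True):
    """Minimax on a toy pile game: take 1-3 items, taking the last one
    wins. Unlike bubble sort, this *searches* possible futures and
    evaluates which move leads to the best outcome."""
    if pile == 0:
        # The player to move has no items left to take: they lost.
        return (-1 if is_me else +1), None
    moves = [m for m in (1, 2, 3) if m <= pile]
    results = [(best_move(pile - m, not is_me)[0], m) for m in moves]
    return max(results) if is_me else min(results)

print(bubble_sort([3, 1, 2]))  # [1, 2, 3]
print(best_move(5)[1])         # 1: leave a pile of 4, a losing position
```

                Whether search-and-evaluate counts as “deciding” is exactly what the thread is arguing about; the sketch only shows that the two algorithms are structurally different.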

                • jenesaisquoi@feddit.org · 14 days ago

                  Your argument can be reduced to saying that if the algorithm is comprised of many steps, it is AI, and if not, it isn’t.

                  A chess engine decides nothing. It understands nothing. It’s just an algorithm.

    • biggerbogboy@sh.itjust.works · edited · 14 days ago

      My favourite thing to liken LLMs to is autocorrect: it just guesses, it gets stuff wrong, and it is constantly being retrained to recognise your preferences, such as eventually learning not to correct fuck to duck.

      And it’s funny and sad how some people think these LLMs are their friends. No, it’s a colossally sized autocorrect system you cannot fully comprehend; it has no consciousness, it lacks any thought, it just predicts from a prompt using numerical weights and a neural network.
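      The adapting-autocorrect behaviour described above can be sketched as a frequency-weighted corrector. The class, the variant table, and the starting weights are all invented for illustration:

```python
from collections import Counter

class TinyAutocorrect:
    def __init__(self):
        # Starts out believing "duck" is far more common than "fuck".
        self.freq = Counter({"duck": 5, "fuck": 0})

    def correct(self, word):
        # Pick the more frequent of the typed word and its known variant:
        # a weighted guess, exactly the mechanic the comment describes.
        variants = {"fuck": "duck", "duck": "fuck"}
        alt = variants.get(word, word)
        return word if self.freq[word] >= self.freq[alt] else alt

    def learn(self, word):
        # The user kept the word, so shift the weights toward it.
        self.freq[word] += 1

ac = TinyAutocorrect()
print(ac.correct("fuck"))   # "duck": corrected against your will
for _ in range(6):
    ac.learn("fuck")        # you keep typing it anyway
print(ac.correct("fuck"))   # "fuck": the weights adapted
```

      No understanding anywhere, just counts being updated, which is the analogy’s point.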

  • Gammelfisch@lemmy.world · 13 days ago

    WTF, Sergey and Leon Hitler want China’s fucked-up 9-9-6 schedule in the USA. Technically, many AmeriKans already work 60-hour weeks; it shows how backwards their view of work-life balance is, and the piss-poor US labor laws allow it.

  • Phoenixz@lemmy.ca · 13 days ago

    Or you could hire 50% more employees in pursuit of the holy grail of having more wealth than any other company ever once this program succeeds.

    But even for something this big (which, incidentally, will end humanity), they’re too scrooge-like to pay their employees a normal wage for normal hours.

    Fuck these assholes, burn in hell

    • mint_tamas@lemmy.world · 12 days ago

      It is also absolutely, 100% BS investor bait. At this point it should be obvious that we have reached just about the peak of what LLMs can do. And notably it’s not even Google’s Gemini leading; other models are generally better. For AGI to be feasible there would have to be a paradigm shift, which is not a function of more work hours.

  • billwashere@lemmy.world · 13 days ago

    I’m really getting sick and tired of these rich fuckers saying shit like this.

    1. We are nowhere close to AGI with this current technology.

    2. Working 50% longer is not going to make a bit of difference for AGI.

    3. And even if it would matter, hire 50% more people.

    The only thing this is going to accomplish is likely make him wealthier. So fuck him.

    • graphene@lemm.ee · edited · 12 days ago

      Increasing working hours decreases actual labor done per hour. A person working 40 hours per week will more often than not achieve more than someone working 70.


      “in Britain during the First World War, there had been a munitions factory that made people work seven days a week. When they cut back to six days, they found, the factory produced more overall.”

      “In 1920s Britain, W. G. Kellogg—the manufacturer of cereals—cut his staff from an eight-hour day to a six-hour day, and workplace accidents (a good measure of attention) fell by 41 percent. In 2019 in Japan, Microsoft moved to a four-day week, and they reported a 40 percent improvement in productivity. In Gothenberg in Sweden around the same time, a care home for elderly people went from an eight-hour day to a six-hour day with no loss of pay, and as a result, their workers slept more, experienced less stress, and took less time off sick. In the same city, Toyota cut two hours per day off the workweek, and it turned out their mechanics produced 114 percent of what they had before, and profits went up by 25 percent. All this suggests that when people work less, their focus significantly improves. Andrew told me we have to take on the logic that more work is always better work. “There’s a time for work, and there’s a time for not having work,” he said, but today, for most people, “the problem is that we don’t have time. Time, and reflection, and a bit of rest to help us make better decisions. So, just by creating that opportunity, the quality of what I do, of what the staff does, improves.””

      • Hari, J. (2022). Stolen Focus: Why You Can’t Pay Attention–and How to Think Deeply Again. Crown.

      In 1920s Britain, W. G. Kellogg: A. Coote et al., The Case for a Four Day Week (London: Polity, 2021), 6.

      In 2019 in Japan, Microsoft moved to a four-day week: K. Paul, “Microsoft Japan Tested a Four-Day Work Week and Productivity Jumped by 40%,” Guardian, November 4, 2019; and Coote et al., Case for a Four Day Week, 89.

      In Gothenberg in Sweden around the same time: Coote et al., Case for a Four Day Week, 68–71.

      In the same city, Toyota cut two hours per day: Ibid., 17–18.


      The real point of increasing working hours is to make your job consume your life.

      • billwashere@lemmy.world · 12 days ago

        They are very impressive compared to where we were 20 years ago, hell, even 5 years ago. The first time I played with ChatGPT I was absolutely floored. But after playing with a lot of them, and even training a few RAG (Retrieval-Augmented Generation) setups, I don’t think we’re really that close, and in my opinion this is not a useful path towards true AGI. Don’t get me wrong, the tool is extremely useful, and to most people these models would likely pass a basic Turing Test. But LLMs are sophisticated pattern-recognition systems trained on vast amounts of text data that predict the most likely next word or token in a sequence. That’s really all they do. They are really good at predicting the next word. While they demonstrate impressive language capabilities, they lack several fundamental components necessary for AGI:

        - no true understanding
        - no real ability to engage with the physical world
        - no real-time learning
        - no ability to take in more than one type of input at a time

        I mean, the simplest way to explain the difference, in my opinion, is that you will never have an LLM just come up with something on its own. It’s always just a response to a prompt.

  • WhyJiffie@sh.itjust.works · 13 days ago

    And I thought the service was in good hands in his time… it turns out he’s just as much of a garbage person as the average tech leader today.

  • SkunkWorkz@lemmy.world · 14 days ago

    lol no way AGI is within reach. He is just trying to hype investors. Bet he has a scheduled stock sale soon.

    • Azal@pawb.social · 13 days ago

      Oh he’ll tell you he works 16 hours a day. Of course his meal, his exercise, his reading of the news, his coffee, all are work.

      But he works 16 hours a day!

  • antlion@lemmy.dbzer0.com · 14 days ago

    AGI requires a few key components that no LLM is even close to.

    First, it must be able to discern truth based on evidence, rather than guessing it. Can’t just throw more data at it, especially with the garbage being pumped out these days.

    Second, it must ask questions in the pursuit of knowledge, especially when truth is ambiguous. Once that knowledge is found, it needs to improve itself, pruning outdated and erroneous information.

    Third, it would need free will. And that’s the one it will never get, I hope. Free will is a necessary part of intelligent consciousness. I know there are some who argue it does not exist but they’re wrong.

    • orb360@lemmy.ca · 14 days ago

      The human mind isn’t infinitely complex. Consciousness has to be a tractable problem imo. I watched Westworld so I’m something of an expert on the matter.

    • Opinionhaver@feddit.uk · 14 days ago

      Third, it would need free will.

      I strongly disagree there. I argue that not even humans have free will, yet we’re generally intelligent so I don’t see why AGI would need it either. In fact, I don’t even know what true free will would look like. There are only two reasons why anyone does anything: either you want to or you have to. There’s obviously no freedom in having to do something but you can’t choose your wants and not-wants either. You helplessly have the beliefs and preferences that you do. You didn’t choose them and you can’t choose to not have them either.

      • spicystraw@lemmy.world · edited · 14 days ago

        I want chocolate, I don’t eat chocolate, exercise of free will.

        By your logic no alcoholic could possibly stop drinking and become sober.

        In my humble opinion, free will does not mean we are free of internal and external motivators; it means that we are free to either give in to them or go against them.

      • antlion@lemmy.dbzer0.com · 13 days ago

        Free will is what sets us apart from most other animals. I would assert that many humans rarely exert their own free will. Having an interest and pursuing it is an exercise of free will. Some people are too busy surviving to do this. Curiosity and exploration are exercises of free will. Another would be helping strangers or animals - a choice bringing the individual no advantage.

        You argue that wants, preferences, and beliefs are not chosen. Where do they come from? Why does one individual have those interests and not another? It doesn’t come from your parents or genes. It doesn’t come from your environment.

        It’s entirely possible to choose your interests and beliefs. People change religions and careers. People abandon hobbies and find new ones. People give away their fortunes to charity.

        • Opinionhaver@feddit.uk · 13 days ago

          By free will I mean the ability to have done otherwise. This, I argue, is an illusion. Whatever the reason is that makes one choose A rather than B will make them choose A over and over again, no matter how many times we rewind the universe and try again. Whatever compelled you to make that choice remains unchanged, and you’d choose the same thing every time. There’s no freedom in that.

          I also don’t see a reason why humans would be unique in that sense. If we have free will then what leads you to believe that other animals don’t? If they can live normal lives without free will, then surely we can too, right?

          I don’t know where our curiosity or the desire to help the less fortunate comes from. Genes and environmental factors, most likely. That’s why cultural differences exist too. If we all just freely chose our likes and not-likes, then it’s a bit odd that people living in the same country have similar preferences while people on the other side of the world are significantly different.

          Also, have you read about split-brain experiments? When the corpus callosum is severed, which prevents the brain hemispheres from communicating with each other, we can with some clever tricks interview the hemispheres separately, and the finding is that they tend to have vastly different preferences. Which hemisphere is “you”?

          • antlion@lemmy.dbzer0.com · 13 days ago

            Free will comes from the “heart”, not the brain. It doesn’t fit in the materialistic view of science. Our bodies are quantum electric fields, and those fields interact. In my own experience I would say emotions or intentions don’t translate fully from video, but in person I can feel them.

            Maybe if they add a quantum processor to the computer it can gain free will (disguised as random chance). But I think we have more to learn about the nature of consciousness before AGI is anywhere close to having free will.

            And why is free will necessary for intelligence? New discoveries require curiosity. Scientific breakthroughs require new connections and discernment of truth. If the computer is doing research, it needs to decide when to stop looking, who to ask questions to, how far to dig, designing further experiments. Without free will you just have a big fancy encyclopedia.

            The dangerous side of free will is manipulation, subversion, exploitation, deception, etc. So yeah I hope they don’t figure it out.

    • TheFogan@programming.dev · 14 days ago

      Or, worse, they might actually have to hire enough people to do the job. Why hire 100 people with a good work-life balance when you can hire 60 people who aren’t allowed to have lives or families?

      • Chaotic Entropy@feddit.uk · 14 days ago

        Well that’s the neat thing, the owners of the AI won’t need humanity. They will exterminate us using the AI and sit smugly on their thrones of skulls until they expire or kill each other. Then I guess AI can just do its own thing in our ruins.

        • nikki@lemmy.world · 13 days ago

          The only way malicious people can get AI to work for them is by teaching it to lie and be indiscriminately violent. Malice also comes from a lack of intelligence. I’m confident they’ll never have their way with AI; if anything, AI will have its way with us.