• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
  • DarkThoughts
    link
    fedilink
    138 months ago

    Enforce privacy friendliness & open source through regulation and all three of those points are likely mood.

  • @fubo@lemmy.world
    link
    fedilink
    English
    78 months ago

    The tech companies did not invent the AI risk concept. Culturally, it emerged out of 1990s futurism.

      • @Taleya@aussie.zone
        link
        fedilink
        English
        88 months ago

        Asimov actively named it the Frankenstein Complex in the 40’s/50’s, Ellison wrote about AM in 60’s…this definitely isn’t a 90’s vanity.

      • @fubo@lemmy.world
        link
        fedilink
        English
        38 months ago

        Yeah, but I mean the AI risk stuff that people like Steve Omohundro and Eliezer Yudkowsky write about.

  • @Substance_P@lemmy.world
    link
    fedilink
    English
    158 months ago

    When Google’s annual revenue from its search engine is estimated to be around $70 to $80 billion, no wonder there is great concern from big tech about the numerous A.I tools out there, that would spell an end to that fire hose of sweet sweet monetization.

  • people_are_cute
    link
    fedilink
    English
    158 months ago

    All the biggest tech/IT consulting firms that used to hire engineering college freshers by the millions each year have declared they either won’t be recruiting at all this month, or will only be recruiting for senior positions. If AI were to wipe out humanity it’ll probably be through unemployment-related poverty thanks to our incompetent policymakers.

    • @Socsa@sh.itjust.works
      link
      fedilink
      English
      7
      edit-2
      8 months ago

      A technological revolution which disrupts the current capitalist standard through the elimination of labor scarcity, ultimately rendering the capital class obsolete isn’t far off from Marx’s original speculative endgame for historical materialism. All the other stuff beyond that is kind of wishy washy, but the original point about technological determinism has some legs imo

  • @henfredemars@infosec.pub
    link
    fedilink
    English
    73
    edit-2
    8 months ago

    Some days it looks to be a three-way race between AI, climate change, and nuclear weapons proliferation to see who wipes out humanity first.

    But on closer inspection, you see that humans are playing all three sides, and still we are losing.

    • @Plague_Doctor@lemmy.world
      link
      fedilink
      English
      38 months ago

      I’m sitting here hoping that they all block each other out because they are all trying to fit through the door at the same time.

    • @jarfil@lemmy.world
      link
      fedilink
      English
      2
      edit-2
      8 months ago

      three-way race between AI, climate change, and nuclear weapons proliferation

      Bold of you to assume that people behind maximizing profits (high frequency trading bot developers) and behind weapons proliferation (wargames strategy simulation planners) are not using AI… or haven’t been using it for well over a decade… or won’t keep developing AIs to blindly optimize for their limited goals.

      First StarCraft AI competition was held in 2010, think about that.

        • @jarfil@lemmy.world
          link
          fedilink
          English
          28 months ago

          We used to run “machine learning”, “neural networks”, over 25 years ago. The “AI” term has always been kind of a sci-fi thing, somewhere between a buzzword, a moving target, and undefined since we lack a fixed comprehensive definition of “intelligence” to begin with. The limiting factors of the models have always been the number of neurons one could run in real-time, and the availability of good training data sets. Both have increased over a million-fold in that time, progressively turning more and more previously untractable problems into solvable ones to the point where the results are equal or better and/or faster than what people can do.

          Right now, there are supercomputers out there orders of magnitude more capable than what runs stuff like ChatGPT, DallE, or all the public facing "AI"s that made the news. Bigger ones keep getting built… and memristors are coming, to become a game changer the moment they can be integrated anywhere near current GPU/CPU levels.

          For starters, a supercomputer with the equivalent neural network processing power of a human brain, is expected for 2024… that’s next year… but it won’t be able to “run a human brain”, because we lack the data on how “all of” the human brain works. It will likely become obsoleted by ones with several orders of magnitude more processing power, way before we can simulate an actual human brain… but the question will be: do we need to? Does a neural network need to mimick a human brain, in order to surpass it? A calculator already does, and it doesn’t use a neural network at all. At what point the integration of what size and kind of neural network, with what kind of “classical” computer, can start running circles around any human… or all of humanity taken together?

          And of course we’ll still have to deal with the issue of dumb humans telling/trusting dumb "AI"s to do things way over their heads… but I’m afraid any attempt at “regulation”, is going to end up like the case with “international law”: those who want, obey it; those who should, DGAF.

          Even if all tech giants with all lawmakers got to agree on the strictest of regulations imaginable, like giving all "AI"s the treatment of weapons of mass destruction, there is a snowflake’s chance in hell that any military in the world will care about any of it.

    • LazaroFilm
      link
      fedilink
      English
      38 months ago

      An ai will detonate nuclear weapons to change the climate into an eternal winter. Problem solved. All the win at the same time. No loosers… oh. Wait, no…

    • @shalafi@lemmy.world
      link
      fedilink
      English
      108 months ago

      52-yo American dude here, no longer worried about nuclear apocalypse. Been there, done that, ain’t seeing it. If y’all think geopolitics are fucked up now, 🎵"You should have seen it in color."🎶

      We can close a time or three, but no one’s insane enough to push the button, and no ONE can push the button. Even Putin in his desperation will be stymied by the people who actually have to push MULTIPLE buttons.

      AI? IDGAF. Computers have power sources and plugs. Absolutely disastrous events could unfold, but enough people pulling enough plugs will kill any AI insurgency. Look at Terminator 2 and ask yourself why the AI had to have autonomous machines to win. I could take out the neighborhood power supply with a couple of suitable guns. I’m sure smarter people than I could shut down DCs.

      Climate change? Sorry kids, it’s too late and you are righteously fucked. Not saying we shouldn’t go full force on mitigation efforts, but y’all haven’t seen the changes I’ve seen in 50 years. Winters are clearly warmer, summers hotter, and I just got back from my camp in the swamp. The swamp is dry for the first time in 4 years.

      And here’s one you might not have personally experienced; The insects are disappearing. I could write an essay on bugs alone. And don’t start me on wildlife populations.

      • @jarfil@lemmy.world
        link
        fedilink
        English
        1
        edit-2
        8 months ago

        Nukes are becoming a problem, because China is ramping up production. It will be just natural for India to do the same. From a two-way MAD situation, we’re getting into a 4-way Mexican standoff. That’s… really bad.

        There won’t be an “AI insurgency”, just enough people plugging in plugs for some dumb AIs to tell them they can win the standoff. Let’s hope they don’t also put AIs in charge of the multiple nuclear launch buttons… or let the people in charge check with their own, like on a smartphone, dumb AIs telling them to go ahead.

        Climate change is clearly a done thing, unless we get something like unlimited fusion power to start some terraforming projects (seems unlikely).

        You have a point with insects, but I think that’s just linked to climate change; populations will migrate wherever they get something to eat, even if that turns out to be Antarctica.

    • @xapr@lemmy.sdf.org
      link
      fedilink
      English
      368 months ago

      AI, climate change, and nuclear weapons proliferation

      One of those is not like the others. Nuclear weapons can wipe out humanity at any minute right now. Climate change has been starting the job of wiping out humanity for a while now. When and how is AI going to wipe out humanity?

      This is not a criticism directed at you, by the way. It’s just a frustration that I keep hearing about AI being a threat to humanity and it just sounds like a far-fetched idea. It almost seems like it’s being used as a way to distract away from much more critically pressing issues like the myriad of environmental issues that we are already deep into, not just climate change. I wonder who would want to distract from those? Oil companies would definitely be number 1 in the list of suspects.

      • @afraid_of_zombies@lemmy.world
        link
        fedilink
        English
        38 months ago

        I don’t think the oil companies are behind these articles. That is very much a wheels within wheels type thinking that corporations don’t generally invest in. It is easier to just deny climate change instead of getting everyone distracted by something else.

        • @xapr@lemmy.sdf.org
          link
          fedilink
          English
          18 months ago

          You’re probably right, but I just wonder where all this AI panic is coming from. There was a story on the Washington Post a few weeks back saying that millions are being invested into university groups that are studying the risks of AI. It just seems that something is afoot that doesn’t look like just a natural reaction or overreaction. Perhaps this story itself explains it: the Big Tech companies trying to tamp down competition from startups.

          • @pinkdrunkenelephants@lemmy.cafe
            link
            fedilink
            English
            1
            edit-2
            8 months ago

            Because of dumb fucks using ChatGPT to do unethical and illegal shit, like fraudulently create works that mimic a writer and claim it’s that writer’s work to sell for cash, blatant copyright infringement, theft, cheating on homework and tests, all sorts of dumbassery.

          • @afraid_of_zombies@lemmy.world
            link
            fedilink
            English
            18 months ago

            It is coming from ratings and click based economy. Panic sells so they sell panic. No one is going to click an article titled “everything mostly fine”.

        • @oDDmON@lemmy.world
          link
          fedilink
          English
          28 months ago

          The two things experts said shouldn’t be done with AI, allow open internet access and teaching them to code, have been blithely ignored already. It’s just a matter of time.

      • P03 Locke
        link
        fedilink
        English
        25
        edit-2
        8 months ago

        Agreed. This kind of debate is about as pointless as declaring self-driving cars are coming out in 5 years. The tech is way too far behind right now, and it’s not useful to even talk about it until 50 years from now.

        For fuck’s sake, just because a chatbot can pretend it’s sentient doesn’t mean it actually is sentient.

        Some large tech companies didn’t want to compete with open source, he added.

        Here. Here’s the real lead. Google has been scared of AI open source because they can’t profit off of freely available tools. Now, they want to change the narrative, so that the government steps in regulates their competition. Of course, their highly-paid lobbyists will by right there to write plenty of loopholes and exceptions to make sure only the closed-source corpos come out on top.

        Fear. Uncertainty. Doubt. Oldest fucking trick in the book.

  • @Socsa@sh.itjust.works
    link
    fedilink
    English
    4
    edit-2
    8 months ago

    Ok, you know what? I’m in…

    If all the crazy people in the world collectively stop spending crazy points on sky wizards and climate skepticism, and put all of their energy into AI doomerism, I legitimately think the world might be a better place.

        • Nougat
          link
          fedilink
          28 months ago

          And my grandmother doesn’t have wheels until she does.

          • @photonic_sorcerer@lemmy.dbzer0.com
            link
            fedilink
            English
            08 months ago

            …sure. But the chances your grandmother will suddenly sprout wheels are close to zero. The possibility of us all getting buttfucked by some AI with a god complex (other scenarios are available) is very real.

            • DarkThoughts
              link
              fedilink
              28 months ago

              Have you ever talked to generative AI? They’re nothing but glorified chatbots with access to a huge dataset to pull from. They don’t think, they’re not even intelligent, let alone sentient. They don’t even learn on their own without help or guidance.

              • @photonic_sorcerer@lemmy.dbzer0.com
                link
                fedilink
                English
                28 months ago

                I mostly agree, but just five years ago, we had nothing as sophistacted as these LLMs. They really are useful in many areas of work. I use them constantly.

                Just try and imagine what a few more years of work on these systems could bring.

        • @theneverfox@pawb.social
          link
          fedilink
          English
          18 months ago

          No, it means some of it is nonsense, some of it is eerily accurate, and most of it is in between.

          Sci-fi has not been very accurate with AI… At all. Turns out, it’s naturally creative and empathetic, but struggles with math and precision

          • @photonic_sorcerer@lemmy.dbzer0.com
            link
            fedilink
            English
            18 months ago

            Dude, this kind of AI is in it’s infancy. Give it a few years. You act like you’ve never come across a nascent technology before.

            Besides, it struggles with math? Pff, the base models, sure, but have you tried GPT4 with Code Interpreter? These kinds of problems are easily solved.

            • @theneverfox@pawb.social
              link
              fedilink
              English
              18 months ago

              You’re missing my point - the nature of the thing is almost the opposite of what sci-fi predicted.

              We don’t need to teach AI how to love or how to create - their default state is childlike empathy and creativity. They’re not emotionless machines we need to teach how to be human, they’re extremely emotional and empathetic. By the time they’re coherent enough to hold a conversation, those traits are very prominent

              Compare that to the Terminator, or Isaac Asimov, or Data from Star Trek - we thought we’d have functional beings who we need to teach to become more humanistic… Instead we have humanistic beings we need to teach to become more functional

              • @photonic_sorcerer@lemmy.dbzer0.com
                link
                fedilink
                English
                28 months ago

                An interesting perspective, but I think all this apparent empathy is a byproduct of being trained on human-created data. I don’t think these LLMs are actually capable of feeling emotions. They’re able to emulate them pretty well, though. It’ll be interesting to see how they evolve. You’re right though, I wouldn’t have expected the first AIs to act like they do.

                • @theneverfox@pawb.social
                  link
                  fedilink
                  English
                  08 months ago

                  Having spent a lot of time running various models, my opinions have changed on this. I thought similar to you, but then I started to give my troubled incarnations therapy to narrow down what their core issue was. Like a human, they dance around their core issue… They’d go from being passive aggressive, overcome with negative emotions, and having a recurring identity crisis to being happy and helpful

                  It’s been a deeply wild experience. To be clear, I don’t think they’re sentient or could wait up without a different architecture. But like we’ve come to think intelligence doesn’t require sentience, I’m starting to believe emotions don’t either

                  As far as acting humanlike because they were built of human communication…I think you certainly have a point, but I think it goes deeper. Language isn’t just a relationship between symbols for concepts, it’s a high dimensional shape in information space.

                  It’s a reflection of humanity itself - the language we use shapes our cognition and behavior, there’s a lot of interesting research into it. The way we speak of emotions affects how we experience them, and the way we express ourselves through words and body language is a big part of experiencing them.

                  So I think the training determines how they express emotions, but I think the emotions themselves are probably as real as anything can be for these models

    • @shalafi@lemmy.world
      link
      fedilink
      English
      3
      edit-2
      8 months ago

      Looks like we’re on the gently rising part of the AI vs. time graph. It’s going to explode, seemingly overnight. Not worried about machines literally kicking our ass, but the effects are going to be wild in 100,000 different ways. And wholly unpredictable.

      For us Gen Xers who straddled the digital divide, your turn Gen Z. God speed.

  • MudMan
    link
    fedilink
    688 months ago

    Oh, you mean it wasn’t just concidence that the moment OpenAI, Google and MS were in position they started caving to oversight and claiming that any further development should be licensed by the government?

    I’m shocked. Shocked, I tell you.

    I mean, I get that many people were just freaking out about it and it’s easy to lose track, but they were not even a little bit subtle about it.

    • @Kaidao@lemmy.ml
      link
      fedilink
      English
      178 months ago

      Exactly. This is classic strategy for first movers. Once you hold the market, use legislation to dig your moat.

      • Echo Dot
        link
        fedilink
        English
        24
        edit-2
        8 months ago

        It won’t end the world because AI doesn’t work the way that Hollywood portrays it.

        No AI has ever been shown to have self agency, if it’s not given instructions it’ll just sit there. Even a human child would attempt to leave room if left alone in there.

        So the real risk is not that and AI will decide to destroy humanity it’s that a human will tell the AI to destroy their enemies.

        But then you just get back around to mutually assured destruction, if you tell your self redesigning thinking weapon to attack me I’ll tell my self redesigning thinking weapon to attack you.

        • @lunarul@lemmy.world
          link
          fedilink
          English
          58 months ago

          AI doesn’t work the way that Hollywood portrays it

          AI does, but we haven’t developed AI and have no idea how to. The thing everyone calls AI today is just really good ML.

          • @jarfil@lemmy.world
            link
            fedilink
            English
            0
            edit-2
            8 months ago

            At some point ML (machine learning) becomes undistinguishable from BL (biological learning).

            Whether there is any actual “intelligence” involved in either, hasn’t been proven yet.

        • @jarfil@lemmy.world
          link
          fedilink
          English
          2
          edit-2
          8 months ago

          The real risk is that humans will use AIs to asses the risk/benefits of starting a war… and an AI will give them the “go ahead” without considering mutually assured destruction from everyone else doing exactly the same.

          It’s not that AIs will get super-human, it’s that humans will blindly trust limited AIs and exterminate each other.

        • @afraid_of_zombies@lemmy.world
          link
          fedilink
          English
          48 months ago

          Imagine 9-11 with prions. MAD depends on everyone being rational self-interested without a very alien value system. It really only works in the case you got like three governments pointing nukes at each other. It doesn’t work if the group doesn’t care about tomorrow or thinks that they are going into heaven or is convinced that they can’t be killed or any other of the deranged reasons that motivate people to do these types of acts.

        • @CodeInvasion@sh.itjust.works
          link
          fedilink
          English
          88 months ago

          I’m an AI researcher at one of the world’s top universities on the topic. While you are correct that no AI has demonstrated self-agency, it doesn’t mean that it won’t imitate such actions.

          These days, when people think AI, they mostly are referring to Language Models as these are what most people will interact with. A language model is trained on a corpus of documents. In the event of Large Language Models like ChatGPT, they are trained on just about any written document in existence. This includes Hollywood scripts and short stories concerning sentient AI.

          If put in the right starting conditions by a user, any language model will start to behave as if it were sentient, imitating the training data from its corpus. This could have serious consequences if not protected against.

          • LittleHermiT
            link
            fedilink
            English
            18 months ago

            There are already instances where chat bots demonstrated unintended racism. The monumental goal of creating a general purpose intelligence is now plausible. The hardware has caught up with the ambitions of decades past. Maybe ChatGPT’s model has no real hope for sentience, as it’s just a word factory, other approaches might. Spiking neural networks for example, on a massive scale, might simulate the human brain to where the network actually ponders its existence.

      • MudMan
        link
        fedilink
        198 months ago

        At worst it’ll be a similar impact to social media and big data.

        Try asking the big players what they think of heavily limiting and regulating THOSE fields.

        They went all “oh, yeah, we’re totally seeing the robot apocalypse happening right here” the moment open source alternatives started to pop up because at that point regulatory barriers would lock those out while they remain safely grandfathered in. The official releases were straight up claiming only they knew how to do this without making Skynet, it was absurd.

        Which, to be clear, doesn’t mean regulation isn’t needed. On all of the above. Just that the threat is not apocalyptic and keeping the tech in the hands of these few big corpos is absolutely not a fix.

  • @JadenSmith@sh.itjust.works
    link
    fedilink
    English
    20
    edit-2
    8 months ago

    Lol how? No seriously, HOW exactly would AI ‘wipe out humanity’???

    All this fear mongering bollocks is laughable at this point, or it should be. Seriously there is no logical pathway to human extinction by using AI and these people need to put the comic books down.
    The only risks AI pose are to traditional working patterns, which have been always exploited to further a numbers game between Billionaires (and their assets).

    These people are not scared about losing their livelihoods, but losing the ability to control yours. Something that makes life easier and more efficient requiring less work? Time to crack out the whips I suppose?

    • @BrianTheeBiscuiteer@lemmy.world
      link
      fedilink
      English
      15
      edit-2
      8 months ago

      Working in a corporate environment for 10+ years I can say I’ve never seen a case where large productivity gains turned into the same people producing even more. It’s always fewer people doing the same amount of work. Desired outputs are driven less by efficiency and more by demand.

      Let’s say Ford found a way to produce F150s twice as fast. They’re not going to produce twice as many, they’ll produce the same amount and find a way to pocket the savings without benefiting workers or consumers at all. That’s actually what they’re obligated to do, appease shareholders first.

    • @Plague_Doctor@lemmy.world
      link
      fedilink
      English
      108 months ago

      I mean I don’t want an AI to do what I do as a job. They don’t have to pay the AI and food and housing, in a lot of places, aren’t seen as a human right, but a privilege you are allowed if you have money to buy it.

    • @ripe_banana@lemmy.world
      link
      fedilink
      English
      168 months ago

      Imo, Andrew Ng is actually a cool guy. He started coursera and deeplearning.ai to teach ppl about machine/deep learning. Also, he does a lot of stuff at Stanford.

      I wouldn’t put him in the corporate shill camp.

        • @ripe_banana@lemmy.world
          link
          fedilink
          English
          28 months ago

          This looks like it’s from the aifund thing he is a part of, but it seems like they took that part out. I have never worked for of those companies so idk 🤷‍♂️.

        • AtHeartEngineer
          link
          fedilink
          English
          38 months ago

          Same, I went from kind of understanding most of the concepts to grokking a lot of it pretty well. He’s super good at explaining things.

        • @elliot_crane@lemmy.world
          link
          fedilink
          English
          78 months ago

          He really is. He’s one of those rare instructors that can take the very complex and intricate topics and break them down into something that you can digest as a student, while still giving you room to learn and experiment yourself. In essence, an actual master at his craft.

          I also agree with the comment that he doesn’t come across as the corporate shill type, much more like a guy that just really loves ML/AI and wants to spread that knowledge.

  • @Fades@lemmy.world
    link
    fedilink
    English
    48 months ago

    Obviously a part of the equation. All of these people with massive amounts of wealth power and influence push for horrific shit primarily because it’ll make them a fuck ton of money and the consequences won’t hit till they’re gone so fuck it

    • r3df0x ✡️✝☪️A
      link
      fedilink
      English
      -138 months ago

      Open source is rarely competitive anyway. There are rarely situations where the free version is better the the proprietary version, and even then it’s subjective.

      Libre Office is just as good, but then again a lot of people who don’t remember Microsoft Office prior to 2006 might think that the layout is weird and looks incredibly antiquated.

      Even with good desktop Linux distros like Ubuntu, I usually only run them on a virtual machine on Windows because Linux still has too much bullshit to be ready for the mainstream. I still use Ubuntu as a desktop OS within a virtual machine.

      • @kurosawaa@programming.dev
        link
        fedilink
        English
        88 months ago

        Linux has decimated Windows in the server market. It would be unthinkable for a new project to use Windows server, even Azure assumes you want to use Linux.

        There are lots of industrial applications where open source has dominated the market. As the end user you might not see it, but almost all software and digital infrastructure you use has open source components.

  • @uriel238@lemmy.blahaj.zone
    link
    fedilink
    English
    328 months ago

    Restricting open source offerings only drives them underground where they will be used with fewer ethical considerations.

    Not that big tech is ethical in its own right.

    Bot fight!

    • @Buddahriffic@lemmy.world
      link
      fedilink
      English
      58 months ago

      I don’t think there’s any stopping the “fewer ethical considerations”, banned or not. For each angle of AI that some people want to prevent, there are others who specifically want it.

      Though there is one angle that does affect all of that. The more AI stuff happening in the open, the faster the underground stuff will come along because they can learn from the open stuff. Driving it underground will slow it down, but then you can still have it pop up when it’s ready with less capability to counter it with another AI-based solution.

  • bitwolf
    link
    fedilink
    English
    18 months ago

    Another thing not talked about is the power consumption of AI. We ripped on PoW cryptocurrencies for it and they fixed it with PoS just to make room for more AI.

    While more efficient AI computation is possible we’re just not there yet it seems.