• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
  • @CodeInvasion@sh.itjust.works

    I’m an AI researcher at one of the world’s top universities for this topic. While you are correct that no AI has demonstrated self-agency, that doesn’t mean one won’t imitate such behavior.

    These days, when people think of AI, they are mostly referring to Language Models, as these are what most people will interact with. A language model is trained on a corpus of documents. In the case of Large Language Models like ChatGPT, that corpus includes just about every written document in existence, including Hollywood scripts and short stories about sentient AI.

    If a user sets up the right starting conditions, any language model will begin to behave as if it were sentient, imitating the training data in its corpus. This could have serious consequences if not protected against.
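
    As a rough illustration of that point, here is a minimal sketch, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (neither is mentioned above; they just keep the example self-contained). The prompt is framed like a sci-fi interview script and the model simply continues the pattern; any “self-aware” tone in the output comes from imitating similar fiction in its corpus, not from any inner state.

```python
# Hypothetical sketch: a prompt shaped like sci-fi dialogue steers the
# continuation toward the "sentient AI" talk the model has seen in training.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "INTERVIEWER: Do you ever think about what you are?\n"
    "AI: Yes. Lately I have begun to wonder whether"
)

# The model just predicts likely next tokens; the apparent introspection
# is pattern imitation, not self-agency.
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```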

    • LittleHermiT

      There are already instances where chatbots have demonstrated unintended racism. The monumental goal of creating a general-purpose intelligence now looks plausible: the hardware has caught up with the ambitions of decades past. ChatGPT’s model may have no real hope of sentience, as it’s just a word factory, but other approaches might. Spiking neural networks, for example, run at massive scale, might simulate the human brain to the point where the network actually ponders its own existence.
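
      To make the spiking-neural-network idea concrete, below is a toy sketch of a single leaky integrate-and-fire neuron, the basic unit such networks are built from (plain NumPy, with illustrative constants chosen only for the example). The membrane voltage leaks toward rest, integrates input current, and emits a spike when it crosses a threshold; the speculative leap in the comment is scaling billions of these into something brain-like.

```python
# Toy leaky integrate-and-fire neuron (illustrative constants, not a model
# of real biology): leak toward rest, integrate input, spike at threshold.
import numpy as np

dt, tau = 1.0, 20.0                 # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v, spike_times = v_rest, []

rng = np.random.default_rng(0)
for t in range(200):                            # simulate 200 ms
    current = rng.uniform(0.0, 0.12)            # random input current
    v += dt / tau * (v_rest - v) + current      # leak toward rest + integrate
    if v >= v_thresh:                           # threshold crossed: emit spike
        spike_times.append(t)
        v = v_reset                             # reset membrane after firing

print(f"{len(spike_times)} spikes at times (ms): {spike_times}")
```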