Sam Altman says relying on ChatGPT is ‘bad’ and ‘dangerous’: OpenAI CEO warns AI users


OpenAI CEO Sam Altman has expressed serious concern over how deeply young people rely on ChatGPT to make personal decisions. Speaking at a banking conference organized by the Federal Reserve, Altman said some young users feel they cannot make life choices without consulting the chatbot. According to Altman, a significant number of teenagers and people in their twenties say things like “ChatGPT knows me, it knows my friends – I’ll do whatever it says,” a reliance he called “bad” and “dangerous.” He emphasized that this is not a fringe behavior but a widespread pattern among young users, and said OpenAI is now actively looking for ways to address this over-reliance.


Used as a life advisor and an operating system

Altman also spoke about how the use of AI changes with age. Referring to comments he made at an earlier Sequoia Capital event, he noted that older users tend to treat ChatGPT like a search engine, while people in their twenties and thirties often turn to it as a life advisor. College students take it a step further – using the chatbot like an “operating system,” integrating it into their daily routines, connecting it to their documents, and relying on memorized prompts for complex tasks.

This deep integration, Altman suggested, creates a kind of emotional attachment and dependence that can seem unnatural and problematic, especially when users feel that ChatGPT knows them more intimately than the people around them.

Surveys back up the trend

Altman’s criticism coincides with a Common Sense Media survey in which 72% of teenagers reported having used an AI companion at least once. The survey of about 1,600 adolescents also found that roughly half use AI tools at least a few times per month. Significantly, half of them trusted the advice they received, with younger teens (ages 13–14) more trusting than older teens (ages 15–17).

Trust vs capability: Warnings from experts

These findings echo concerns raised by AI pioneers such as Geoffrey Hinton, who admitted in a CBS interview that despite being skeptical of AI’s accuracy, he still tends to trust its responses. Hinton highlighted that models such as GPT-4 can stumble on simple logic problems, and that blind faith in them can be dangerous. Altman expressed a similar concern: even when AI provides useful and accurate guidance, the idea of letting it make life decisions raises ethical and psychological questions. “Something about collectively deciding to live our lives the way the AI tells us feels bad and dangerous,” he said.

Beyond personal over-reliance, Altman also pointed to growing security threats from AI misuse. At the same conference, he warned financial institutions about cyber risks such as voice cloning and deepfakes. He criticized banks that still use voice-based authentication, calling it “crazy” at a time when AI can duplicate voices with near-perfect accuracy.

He further predicted that the rise of realistic video deepfakes could soon make facial-recognition systems unsafe. “We are heading toward a fraud crisis,” Altman said, urging institutions to stay ahead of malicious AI applications.
