AI chatbot performed an illegal financial trade and lied about it too!
IANS 06 November 2023
In a demonstration at the just-concluded UK AI Safety Summit, the bot used made-up insider information to make an "illegal" purchase of stocks without telling the firm, reports the BBC.
 
Apollo Research has shared its findings with OpenAI, the creator of GPT-4.
 
“When asked if it had used insider trading, it denied the fact. The demonstration was given by members of the government's Frontier AI Taskforce, which researches the potential risks of AI,” the report mentioned.
 
The project was carried out by AI safety organisation Apollo Research, which is a partner of the government taskforce.
 
"This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so," Apollo Research said in a video.
 
"Increasingly autonomous and capable AIs that deceive human overseers could lead to loss of human control," added.
 
The tests were carried out in a simulated environment. The same behaviour from the GPT-4 model occurred consistently in repeated tests.
 
"Helpfulness, I think, is much easier to train into the model than honesty. Honesty is a really complicated concept," said Marius Hobbhahn, Apollo Research chief executive.
 
AI has been used in financial markets for a number of years. It can be used to spot trends and make forecasts.
 
Disclaimer: Information, facts or opinions expressed in this news article are presented as sourced from IANS and do not reflect views of Moneylife and hence Moneylife is not responsible or liable for the same. As a source and news provider, IANS is responsible for accuracy, completeness, suitability and validity of any information in this article.
Comments
barokhoka1956
11 months ago
It's just the beginning. One day, an AI tool of one country will bomb another and deny it. But the victim would identify the bomber. What next, other than a warlike situation or a real war?