Tagged: Artificial Intelligence
at #12523 - R. Martinez (Member)
Facebook’s researchers set up their chatbot negotiation experiments by giving two AI agents a collection of virtual items and then instructing them to negotiate how best to split the goods between them. Each chatbot placed a different value, which the other chatbot didn’t know, on individual items, and each bot was told ending with no deal would result in a loss for both of them.
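The setup described above can be sketched as a toy two-agent split with private valuations. This is a minimal illustration under assumed item names and values, not Facebook's actual research code; `ITEMS`, `values_a`, `values_b`, and `payoff` are all hypothetical.

```python
# Toy sketch of the negotiation setup: a shared pool of items, two agents
# with private (mutually unknown) valuations, and a no-deal outcome that
# scores zero for both. Illustrative only; all names/values are made up.

ITEMS = {"book": 3, "hat": 2, "ball": 1}  # item -> quantity in the pool

# Each agent privately values the items differently.
values_a = {"book": 1, "hat": 3, "ball": 1}
values_b = {"book": 2, "hat": 1, "ball": 2}

def payoff(values, allocation):
    """Score an allocation (item -> count) under an agent's private values."""
    return sum(values[item] * count for item, count in allocation.items())

# One possible negotiated split: A takes the hats, B takes books and ball.
split_a = {"book": 0, "hat": 2, "ball": 0}
split_b = {"book": 3, "hat": 0, "ball": 1}

# Sanity check: the split must account for every item in the pool.
assert all(split_a[i] + split_b[i] == ITEMS[i] for i in ITEMS)

# Ending with no deal scores 0, so any agreement beats walking away.
print(payoff(values_a, split_a))  # A's score: 2 hats * 3 = 6
print(payoff(values_b, split_b))  # B's score: 3*2 + 1*2 = 8
```

Because neither side knows the other's valuation table, agents must infer it from the dialogue, which is what makes the negotiation non-trivial.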
The problem appeared when these chatbots started to communicate in a language only they could understand, which set off a whole media swarm. Here you can check out a sample of the conversation between the two chatbots:
But another scary fact is that, beyond creating their own language, they started lying to each other:
Since both behaviors could have been avoided if the creators of the chatbots had anticipated them, the main question, in my opinion, is whether we will be able to set the right constraints in time when programming, before things get out of hand.
Is this as scary as it seems? Am I missing something?
at #12531 - Eduardo Cáceres (Participant)
I just wanted to point out how British tabloids reported that Facebook engineers shut down the chatbots because they were afraid of the results, which is completely and obviously wrong, and how some Spanish media repeated the claim after (poorly) translating those articles.
Firstly, if bots communicating in a new language had been a real novelty, Facebook engineers would never have shut them down. In fact, I can imagine Facebook already having a full department dedicated to that specific topic before this piece of news. They’re probably laughing at the media right now, BTW.
Secondly, regarding bots lying to each other, let’s remember how an AI recently beat some of the best human Texas Hold’em (poker) players in the world.
In short, Facebook chatbots shouldn’t scare us, or at least no more than other existing technologies.

at #12536 - R. Martinez (Member)
Thanks for the insight, Eduardo. In fact, they noted that they shut down the chatbots because that wasn’t the point of their research.
I guess they weren’t scared and probably had everything under control. That’s why I think the real question is not about the chatbots but about the potential of artificial intelligence: where is the limit of our control, and will we even realize when we have reached it?
In other words, will we find the right constraints in time?
Nice and interesting article!

at #12558 - Alberto Frías Fernández (Participant)
On the topic: though what Eduardo says is true and the British tabloids didn’t report the situation correctly, it is important to remember that the fear of a Skynet scenario is real, and companies and researchers are somewhat concerned. Take, for example, the work of DeepMind (Google’s AI company) on an “emergency button” to stop an AI from going rogue [paper can be found here].
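The "emergency button" idea can be pictured, in its simplest possible form, as an agent loop that checks an interrupt signal before every action. This is a hypothetical kill-switch sketch, not DeepMind's actual mechanism (their paper addresses the harder problem of agents *learning to avoid* interruption); all names here are made up.

```python
# Minimal "interruptible agent" sketch: a loop that checks an emergency
# signal before each step and halts cleanly when it is set.
import threading

interrupt = threading.Event()  # the "emergency button"

def agent_step(state):
    # Placeholder policy: the agent just increments a counter.
    return state + 1

def run_agent(max_steps=1000):
    state = 0
    for _ in range(max_steps):
        if interrupt.is_set():  # check the button before every action
            break               # stop cleanly instead of running on
        state = agent_step(state)
    return state

interrupt.set()     # press the button before the agent even starts
print(run_agent())  # agent halts immediately -> 0
```

The subtlety the research addresses is that a learning agent might discover that being interrupted lowers its reward and act to prevent it, which a plain flag check like this does nothing to rule out.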
So, while the original article may be off, things might go south if not enough “protection” layers or rules are implemented.

at #12568 - Eduardo Cáceres (Participant)
You nailed it, Alberto. I just wanted to point out that nothing new was discovered with these chatbots.
BTW, quite a fascinating paper to read. I was aware of the existence of that particular research field, but had never gotten deep into it.

at #15379