we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer needs them to have—for a roadmap to how technology could and should be created in the real world
For those situations, it is important that we get to see the relevant sources and the provenance of information.
But the systems do not have any understanding of what they are producing, any communicative intent, any model of the world, or any ability to be accountable for the truth of what they are saying. This is why, in 2021, one of us (Bender) and her co-authors referred to them as stochastic parrots.
Note: But the same is true of a search engine.
we might think we have a question and we are looking for the answer, but more often than not, we benefit more from engaging in sense-making: refining our question, looking at possible answers, understanding the sources
Note: But only as long as we have access to citations, as in perplexity.ai…
As we encounter synthetic language output, it is very difficult not to extend trust in the same way as we would with a human. We argue that systems need to be very carefully designed so as not to abuse this trust.