Chatbots are amazing these days! About a month ago, LaMDA made the news when it apparently convinced an engineer at Google that it was sentient. GPT-3 from OpenAI is similarly sophisticated, and my collaborators and I have trained it to auto-generate Splintered Mind blog posts. (This is not one of them, in case you were worried.)
Earlier this year, with Daniel Dennett’s permission and cooperation, Anna Strasser, Matthew Crosby, and I “fine-tuned” GPT-3 on most of Dennett’s corpus, aiming to see whether the resulting program could answer philosophical questions as Dennett himself would. We asked Dennett ten philosophical questions, then posed the same questions to our fine-tuned version of GPT-3. Could blog readers, online research participants, and philosophical experts on Dennett’s work distinguish Dennett’s real answer from alternative answers generated by GPT-3?