You are probably prompting AI wrong: Anthropic philosopher explains how to learn the language of AI


With AI chatbots spreading into ever more spheres of life, writing the perfect prompt has become a skill in itself. While each AI company usually gives a glimpse at how best to ask questions of its models, these methods vary widely across different chatbots.

Anthropic's resident philosopher Amanda Askell has shared detailed insights on how users can get the best results from most chatbots.

In a Q&A video released by the company, Askell says there is no single textbook for prompting and describes it as an "empirical domain," meaning users must learn the techniques of prompting by observing and testing.

"Prompting is very experimental," Askell explains. "You find a new model, and I'll be like, 'I have a whole different approach to how I prompt for that model that I find by interacting with it a lot.'"

Askell says users need to scrap their assumptions about the models and study "output after output" to understand the specific disposition of the model they are using.

“It is really hard to distill what is going on, because one thing is just like a willingness to interact with the models a lot and to really look at output after output,” she says.

Askell says her training as a philosopher comes in handy in this area. “This is where I actually do think philosophy can actually be useful for prompting in a way because, like, a lot of my job is just being like I try and explain some issue or concern or thought that I'm having to the model as clearly as possible,” she explains.

What has Anthropic previously said on prompting Claude?

In a 'Prompt Engineering Overview' published in July, Anthropic offered an analogy to help users master its chatbot. The company advised them to think of Claude not just as software, but as “a brilliant, but very new employee (with amnesia) who needs explicit instructions.”

The guide highlights that unlike a long-term human colleague, the AI has no context on "your norms, styles, guidelines, or preferred ways of working." Because the model starts from scratch with every interaction, Anthropic notes that "the more precisely you explain what you want, the better Claude's response will be."

“Just like you might be able to better perform on a task if you knew more context, Claude will perform better if it has more contextual information,” the company noted.
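Anthropic's "explicit instructions" advice can be sketched as a prompt template that restates the context the model cannot remember between interactions. This is only an illustration of the principle, assuming a hypothetical summarization task; the role, audience, and formatting details below are not Anthropic's examples.

```python
# Sketch of the "new employee with amnesia" analogy: spell out role, task,
# audience, and format in every prompt, since the model starts from scratch.
# All specifics here (analyst role, bullet limits) are hypothetical.

vague_prompt = "Summarize this report."

explicit_template = """You are an analyst preparing a briefing for a non-technical executive.

Task: Summarize the report below in 3 bullet points.
Audience: Executives with no engineering background; avoid jargon.
Style: Neutral tone, each bullet under 25 words.
Format: Markdown bullet list only, no preamble.

Report:
{report_text}
"""

def build_prompt(report_text: str) -> str:
    """Fill in the per-request context the model cannot carry over on its own."""
    return explicit_template.format(report_text=report_text)

print(build_prompt("Revenue grew 12% quarter over quarter."))
```

The point of the template is not length for its own sake: each line answers a question (who am I writing for? in what format?) that a long-term colleague would already know but a fresh model instance does not.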
