Alignment Research Center: Theory Project
Support score: 212 · Evaluator credits: 40 · Tags: Organization, Eliciting latent knowledge (ELK), AI safety
There have not been any donations from visible users.
The Theory team is led by Paul Christiano, furthering the research initially laid out in our report on Eliciting Latent Knowledge. At a high level, we’re trying to figure out how to train ML systems to answer questions by straightforwardly “translating” their beliefs into natural language, rather than by reasoning about what a human wants to hear. We expect to use funds donated via this fundraiser to support the Theory project.
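To make the setup concrete, here is a minimal, hypothetical sketch of the kind of "reporter" training problem the ELK report describes: a small head is trained to answer questions from a predictor's latent state using human labels. All names, shapes, and the training objective below are illustrative assumptions, not ARC's actual code; the point is only that the same objective can be satisfied by a "direct translator" of the predictor's beliefs or by a "human simulator" that predicts what a human would say, which is the core difficulty.

```python
# Hypothetical sketch of an ELK-style "reporter": answer questions from a
# predictor's latent state. Illustrative only; not ARC's implementation.
import torch
import torch.nn as nn

class Reporter(nn.Module):
    """Maps (predictor latent state, question embedding) -> yes/no answer logit."""
    def __init__(self, latent_dim: int, question_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + question_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, latent: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([latent, question], dim=-1)).squeeze(-1)

def train_reporter(reporter: Reporter, batches, epochs: int = 1, lr: float = 1e-3):
    # Fit the reporter on human-labelled (typically easy) questions.
    # Nothing in this loss distinguishes a direct translator from a human
    # simulator -- both can achieve low loss on questions humans can check.
    opt = torch.optim.Adam(reporter.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for latent, question, human_label in batches:
            opt.zero_grad()
            loss = loss_fn(reporter(latent, question), human_label.float())
            loss.backward()
            opt.step()
    return reporter
```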
[This project was created by the GiveWiki team. Please visit the website of the organization for more information on their work.]