ACL'24 conference notes
Paper list:
- Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts
- Improving alignment of dialogue agents via targeted human judgements
- Rule Based Rewards for Language Model Safety
- Fast Inference from Transformers via Speculative Decoding (general idea sketched after this list)
- Reducing Privacy Risks in Online Self-Disclosures with Language Models
- Leveraging LLM Reasoning Enhances Personalized Recommender Systems
- ChartCheck: Explainable Fact-Checking over Real-World Chart Images
- LLMCRIT: Teaching Large Language Models to Use Criteria
- Word Embeddings Are Steers for Language Models
- On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning
- Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
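Quick reminder of the idea behind the speculative-decoding entry above. This is a minimal toy sketch of the general draft-then-verify loop, not the paper's own code: `draft_probs`, `target_probs`, `speculative_step`, and the tiny `VOCAB` are all hypothetical stand-ins for a small draft model and the large target model.

```python
# Toy sketch of the general speculative-decoding loop (draft, verify, accept/reject).
# draft_probs / target_probs are hypothetical stand-ins for real models.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8  # toy vocabulary size


def draft_probs(prefix):
    """Stand-in draft model: cheap, slightly 'wrong' next-token distribution."""
    logits = np.cos(np.arange(VOCAB) + len(prefix))
    e = np.exp(logits - logits.max())
    return e / e.sum()


def target_probs(prefix):
    """Stand-in target model: the distribution we actually want to sample from."""
    logits = np.sin(np.arange(VOCAB) + len(prefix))
    e = np.exp(logits - logits.max())
    return e / e.sum()


def speculative_step(prefix, gamma=4):
    """Propose `gamma` draft tokens, then accept/reject them against the target."""
    # 1) Draft model proposes gamma tokens autoregressively.
    proposed, q = [], []
    ctx = list(prefix)
    for _ in range(gamma):
        dist = draft_probs(ctx)
        tok = rng.choice(VOCAB, p=dist)
        proposed.append(tok)
        q.append(dist)
        ctx.append(tok)

    # 2) Target model scores each proposed position (one batched pass in practice).
    accepted = []
    for i, tok in enumerate(proposed):
        p = target_probs(list(prefix) + accepted)
        # Accept with probability min(1, p(tok) / q(tok)).
        if rng.random() < min(1.0, p[tok] / q[i][tok]):
            accepted.append(tok)
        else:
            # On rejection, resample from the residual max(0, p - q), renormalized,
            # and stop at the first rejected position.
            residual = np.maximum(p - q[i], 0.0)
            residual /= residual.sum()
            accepted.append(rng.choice(VOCAB, p=residual))
            return accepted

    # 3) If all drafts were accepted, take one bonus token from the target model.
    p = target_probs(list(prefix) + accepted)
    accepted.append(rng.choice(VOCAB, p=p))
    return accepted


print(speculative_step(prefix=[1, 2, 3]))
```

The accept/reject step with residual resampling is what keeps the output distribution identical to sampling from the target model alone; the speedup comes from verifying several draft tokens per expensive target-model pass.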
Cool repo: