Checked 4 months ago
Added two years ago
Content provided by Zeta Alpha. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Zeta Alpha or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://th.player.fm/legal

The Promise of Language Models for Search: Generative Information Retrieval

1:07:31
 
Manage episode 361628091 series 3446693

In this episode of Neural Search Talks, Andrew Yates (Assistant Professor at the University of Amsterdam), Sergi Castella (Analyst at Zeta Alpha), and Gabriel Bénédict (PhD student at the University of Amsterdam) discuss the prospect of using GPT-like models as a replacement for conventional search engines.

Generative Information Retrieval (Gen IR) SIGIR Workshop: https://coda.io/@sigir/gen-ir

References:
Rethinking Search: https://arxiv.org/abs/2105.02274
Survey on Augmented Language Models: https://arxiv.org/abs/2302.07842
Differentiable Search Index: https://arxiv.org/abs/2202.06991
Recommender Systems with Generative Retrieval: https://shashankrajput.github.io/Generative.pdf

Timestamps:
00:00 Introduction, ChatGPT Plugins
02:01 ChatGPT plugins, LangChain
04:37 What is even Information Retrieval?
06:14 Index-centric vs. model-centric Retrieval
12:22 Generative Information Retrieval (Gen IR)
21:34 Gen IR emerging applications
24:19 How Retrieval Augmented LMs incorporate external knowledge
29:19 What is hallucination?
35:04 Factuality and Faithfulness
41:04 Evaluating generation of Language Models
47:44 Do we even need to "measure" performance?
54:07 How would you evaluate Bing's Sydney?
57:22 Will language models take over commercial search?
1:01:44 NLP academic research in the times of GPT-4
1:06:59 Outro


21 episodes

Neural Search Talks — Zeta Alpha



All episodes

In this episode of Neural Search Talks, we have invited Louis Rosenberg, CEO of Unanimous.AI, to discuss the future of AI in decision-making, contrasting the development of artificial superintelligence (ASI) with collective human intelligence systems, such as swarm intelligence. In particular, Louis argues that the advancement of AI should focus on amplifying human intelligence rather than replacing it, drawing from the biological inspiration found in nature, where species evolve by connecting individuals into systems that function as a singular intelligent entity, exemplified by schools of fish and swarms of bees. Tune into our conversation to learn more about how AI can assist humans in disseminating knowledge and making better decisions! Check out the Zeta Alpha Neural Discovery platform: https://zeta-alpha.com Subscribe to the Zeta Alpha calendar to not miss out on any of our events: https://lu.ma/zeta-alpha Timestamps: 0:00 Intro by Jakub Zavrel 2:08 Using AI to amplify human intelligence 18:19 How AI and humans learn from each other 26:41 Scaling human collaboration with AI 40:13 Satisfying information needs with AI 45:57 How Unanimous AI connects experts to make better decisions 51:37 Predictions for AI progress in one year 53:21 Outro…
 
In this episode of Neural Search Talks, we welcome Hyeongu Yun from LG AI Research to discuss the newest addition to the EXAONE Universe: EXAONE 3.0. The model demonstrates strong capabilities in both English and Korean, excelling not only in real-world instruction-following scenarios but also achieving impressive results in math and coding benchmarks. Hyeongu shares the team's approach to the development of this model, revealing key training factors that contributed to its success while also highlighting the challenges they faced along the way. We close this episode off with a look at EXAONE's future, as well as Hyeongu's perspective on the evolving role of AI systems. Check out the Zeta Alpha Neural Discovery platform . Subscribe to the Zeta Alpha calendar to not miss out on any of our events! Sources: - https://lgresearch.ai/blog/view?seq=460 - https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct - https://arxiv.org/abs/2408.03541 Timestamps: 0:00 Intro by Jakub Zavrel 1:37 The journey of the EXAONE project 4:34 The main challenges in the development of EXAONE 3.0 6:37 The secret to achieving great bilingual performance in English & Korean 7:51 How EXAONE 3.0 stacks against other open-source models 9:20 The trade-off between instruction-following and reasoning skills 12:32 How will retrieval and generative models evolve in the future 16:36 Open sourcing and user feedback on EXAONE 19:20 The role of synthetic data in model training 20:57 The role of LLMs as evaluators 23:16 Outro…
 
In the 30th episode of Neural Search Talks, we have our very own Arthur Câmara, Senior Research Engineer at Zeta Alpha, presenting a 20-minute guide on how we fine-tune Large Language Models for effective text retrieval. Arthur discusses the common issues with embedding models in a general-purpose RAG pipeline, how to tackle the lack of retrieval-oriented data for fine-tuning with InPars, and how we adapted E5-Mistral to rank in the top 10 on the BEIR benchmark. ## Sources InPars https://github.com/zetaalphavector/InPars https://dl.acm.org/doi/10.1145/3477495.3531863 https://arxiv.org/abs/2301.01820 https://arxiv.org/abs/2307.04601 Zeta-Alpha-E5-Mistral https://zeta-alpha.com/post/fine-tuning-an-llm-for-state-of-the-art-retrieval-zeta-alpha-s-top-10-submission-to-the-the-mteb-be https://huggingface.co/zeta-alpha-ai/Zeta-Alpha-E5-Mistral NanoBEIR https://huggingface.co/collections/zeta-alpha-ai/nanobeir-66e1a0af21dfd93e620cd9f6…
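The InPars recipe Arthur describes — prompting an LLM to write a plausible query for each document, then keeping only the (query, document) pairs a scoring model is most confident about — can be sketched as follows. This is a minimal illustration, not the paper's implementation; `generate_query` and `score_pair` are hypothetical stand-ins for the actual LLM calls.

```python
def build_synthetic_training_set(documents, generate_query, score_pair, keep_top=10_000):
    """InPars-style data augmentation sketch: an LLM invents a query for
    each document, then only the highest-scoring pairs are kept as
    positives for fine-tuning a retriever."""
    # Step 1: generate one synthetic query per document.
    pairs = [(generate_query(doc), doc) for doc in documents]
    # Step 2: consistency filtering — rank pairs by the scoring model's
    # confidence and keep only the top ones as training positives.
    pairs.sort(key=lambda qd: score_pair(*qd), reverse=True)
    return pairs[:keep_top]
```

In the paper this filtering step is what separates usable synthetic positives from LLM noise: discarding low-confidence pairs matters as much as generation itself.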
 
In this episode of Neural Search Talks, we're chatting with Manuel Faysse, a 2nd year PhD student from CentraleSupélec & Illuin Technology, who is the first author of the paper "ColPali: Efficient Document Retrieval with Vision Language Models". ColPali is making waves in the IR community as a simple but effective new take on embedding documents using their image patches and the late-interaction paradigm popularized by ColBERT. Tune in to learn how Manu conceptualized ColPali, his methodology for tackling new research ideas, and why this new approach outperforms all classic multimodal embedding models. A must-watch episode! Timestamps: 0:00 Introduction with Jakub & Manu 4:09 The "Aha!" moment that led to ColPali 7:06 Challenges that had to be solved 9:16 The main idea behind ColPali 13:20 How ColPali simplifies the IR pipeline 15:54 The ViDoRe benchmark 18:23 Why ColPali is superior to CLIP-based retrievers 20:41 The training setup used for ColPali 24:00 Optimizations to make ColPali more efficient 29:00 How ColPali could work with text-only datasets 31:21 Outro: The next steps for this line of research…
 
In this episode of Neural Search Talks, we're chatting with Ronak Pradeep, a PhD student from the University of Waterloo, about his experience using LLMs in Information Retrieval, both as a backbone of ranking systems and for their end-to-end evaluation. Ronak analyzes the impact of the advancements in language models on the way we think about IR systems and shares his insights on efficiently integrating them in production pipelines, with techniques such as knowledge distillation. Timestamps: 0:00 Introduction & the impact of the LLM day in SIGIR 2024 2:11 The perspective of the IR community on LLMs 6:10 Language models as backbones for Information Retrieval 13:49 The feasibility & tricks for using LLMs in production IR pipelines 20:11 Ronak's hidden gems from the SIGIR 2024 programme 21:36 Outro…
 
In this episode of Neural Search Talks, we're chatting with Omar Khattab, the author behind popular IR & LLM frameworks like ColBERT and DSPy. Omar describes the current state of using AI models in production systems, highlighting how thinking at the right level of abstraction with the right tools for optimization can deliver reliable solutions that extract the most out of the current generation of models. He also lays out his vision for a future of Artificial Programmable Intelligence (API), rather than jumping on the hype of Artificial General Intelligence (AGI), where the goal would be to build systems that effectively integrate AI, with self-improving mechanisms that allow the developers to focus on the design and the problem, rather than the optimization of the lower-level hyperparameters. Timestamps: 0:00 Introduction with Omar Khattab 1:14 How to reliably integrate LLMs in production-grade software 12:19 DSPy's philosophy differences from agentic approaches 14:55 Omar's background in IR that helped him pivot to DSPy 25:47 The strengths of DSPy's optimization framework 39:22 How DSPy has reimagined modularity in AI systems 45:45 The future of using AI models for self-improvement 49:41 How open-sourcing a project like DSPy influences its development 52:32 Omar's vision for the future of AI and his research agenda 59:12 Outro…
 
In this episode of Neural Search Talks, we're chatting with Florin Cuconasu, the first author of the paper "The Power of Noise", presented at SIGIR 2024. We discuss the current state of the field of Retrieval-Augmented Generation (RAG), and how LLMs interact with retrievers to power modern Generative AI applications, with Florin delivering practical advice for those developing RAG systems, and laying out his research agenda for the near future. Timestamps: 0:00 Introduction & how RAG has taken over the IR literature 1:40 How retrievers and LLMs interact in Retrieval-Augmented Generation 2:55 What practitioners should pay attention to when developing RAG systems 5:04 What is the power of noise in the context of RAG? 7:31 Florin's long-term research agenda on RAG interactions 9:25 How advances in LLMs can impact IR research 11:26 Outro…
 
In this episode of Neural Search Talks, we're chatting with Nandan Thakur about the state of model evaluations in Information Retrieval. Nandan is the first author of the paper that introduced the BEIR benchmark, and since its publication in 2021, we've seen models try to hill-climb on the leaderboard, but also fail to outperform the BM25 baseline in subsets like Touché 2020. Plus some insights into what the future of benchmarking IR systems might look like, such as the newly announced TREC RAG track this year. Timestamps: 0:00 Introduction & the vibe at SIGIR'24 1:19 Nandan's two papers at the conference 2:09 The backstory of the BEIR benchmark 5:55 The shortcomings of BEIR in 2024 8:04 What's up with the Touché 2020 subset of BEIR 11:24 The problem with overfitting on benchmarks 13:09 TREC-RAG: the future of IR benchmarking 17:34 MIRACL & the importance of multilinguality in IR 21:38 Outro…
 
In this episode of Neural Search Talks, we're chatting with Aamir Shakir from Mixed Bread AI, who shares his insights on starting a company that aims to make search smarter with AI. He details their approach to overcoming challenges in embedding models, touching on the significance of data diversity, novel loss functions, and the future of multilingual and multimodal capabilities. We also get insights on their journey, the ups and downs, and what they're excited about for the future. Timestamps: 0:00 Introduction 0:25 How did mixedbread.ai start? 2:16 The story behind the company name and its "bakers" 4:25 What makes Berlin a great pool for AI talent 6:12 Building as a GPU-poor team 7:05 The recipe behind mxbai-embed-large-v1 9:56 The Angle objective for embedding models 15:00 Going beyond Matryoshka with mxbai-embed-2d-large-v1 17:45 Supporting binary embeddings & quantization 19:07 Collecting large-scale data is key for robust embedding models 21:50 The importance of multilingual and multimodal models for IR 24:07 Where will mixedbread.ai be in 12 months? 26:46 Outro…
 
Ash shares his journey from software development to pioneering in the AI infrastructure space with Unum. He discusses Unum's focus on unleashing the full potential of modern computers for AI, search, and database applications through efficient data processing and infrastructure. Highlighting Unum's technical achievements, including SIMD instructions and just-in-time compilation, Ash also touches on the future of computing and his vision for Unum to contribute to advances in personalized medicine and extending human productivity. Timestamps: 0:00 Introduction 0:44 How did Unum start and what is it about? 6:12 Differentiating from the competition in vector search 17:45 Supporting modern features like large dimensions & binary embeddings 27:49 Upcoming model releases from Unum 30:00 The future of hardware for AI 34:56 The impact of AI in society 37:35 Outro…
 
In this episode of Neural Search Talks, Andrew Yates (Assistant Prof at the University of Amsterdam), Sergi Castella (Analyst at Zeta Alpha), and Gabriel Bénédict (PhD student at the University of Amsterdam) discuss the prospect of using GPT-like models as a replacement for conventional search engines. Generative Information Retrieval (Gen IR) SIGIR Workshop Workshop organized by Gabriel Bénédict, Ruqing Zhang, and Donald Metzler https://coda.io/@sigir/gen-ir Resources on Gen IR: https://github.com/gabriben/awesome-generative-information-retrieval References Rethinking Search: https://arxiv.org/abs/2105.02274 Survey on Augmented Language Models: https://arxiv.org/abs/2302.07842 Differentiable Search Index: https://arxiv.org/abs/2202.06991 Recommender Systems with Generative Retrieval: https://shashankrajput.github.io/Generative.pdf Timestamps: 00:00 Introduction, ChatGPT Plugins 02:01 ChatGPT plugins, LangChain 04:37 What is even Information Retrieval? 06:14 Index-centric vs. model-centric Retrieval 12:22 Generative Information Retrieval (Gen IR) 21:34 Gen IR emerging applications 24:19 How Retrieval Augmented LMs incorporate external knowledge 29:19 What is hallucination? 35:04 Factuality and Faithfulness 41:04 Evaluating generation of Language Models 47:44 Do we even need to "measure" performance? 54:07 How would you evaluate Bing's Sydney? 57:22 Will language models take over commercial search? 1:01:44 NLP academic research in the times of GPT-4 1:06:59 Outro…
 
Andrew Yates (Assistant Prof at University of Amsterdam) and Sergi Castella (Analyst at Zeta Alpha) discuss the paper "Task-aware Retrieval with Instructions" by Akari Asai et al. This paper proposes to augment a conglomerate of existing retrieval and NLP datasets with natural language instructions (BERRI, Bank of Explicit RetRieval Instructions) and use it to train TART (Multi-task Instructed Retriever). 📄 Paper: https://arxiv.org/abs/2211.09260 🍻 BEIR benchmark: https://arxiv.org/abs/2104.08663 📈 LOTTE (Long-Tail Topic-stratified Evaluation, introduced in ColBERT v2): https://arxiv.org/abs/2112.01488 Timestamps: 00:00 Intro: "Task-aware Retrieval with Instructions" 02:20 BERRI, TART, X^2 evaluation 04:00 Background: recent works in domain adaptation 06:50 Instruction Tuning 08:50 Retrieval with descriptions 11:30 Retrieval with instructions 17:28 BERRI, Bank of Explicit RetRieval Instructions 21:48 Repurposing NLP tasks as retrieval tasks 23:53 Negative document selection 27:47 TART, Multi-task Instructed Retriever 31:50 Evaluation: Zero-shot and X^2 evaluation 39:20 Results on Table 3 (BEIR, LOTTE) 50:30 Results on Table 4 (X^2-Retrieval) 55:50 Ablations 57:17 Discussion: user modeling, future work, scale…
 
Marzieh Fadaee — NLP Research Lead at Zeta Alpha — joins Andrew Yates and Sergi Castella to chat about her work in using large Language Models like GPT-3 to generate domain-specific training data for retrieval models with little-to-no human input. The two papers discussed are "InPars: Data Augmentation for Information Retrieval using Large Language Models" and "Promptagator: Few-shot Dense Retrieval From 8 Examples". InPars: https://arxiv.org/abs/2202.05144 Promptagator: https://arxiv.org/abs/2209.11755 Timestamps: 00:00 Introduction 02:00 Background and journey of Marzieh Fadaee 03:10 Challenges of leveraging Large LMs in Information Retrieval 05:20 InPars, motivation and method 14:30 Vanilla vs GBQ prompting 24:40 Evaluation and Benchmark 26:30 Baselines 27:40 Main results and takeaways (Table 1, InPars) 35:40 Ablations: prompting, in-domain vs. MSMARCO input documents 40:40 Promptagator overview and main differences with InPars 48:40 Retriever training and filtering in Promptagator 54:37 Main Results (Table 2, Promptagator) 1:02:30 Ablations on consistency filtering (Figure 2, Promptagator) 1:07:39 Is this the magic black-box pipeline for neural retrieval on any documents 1:11:14 Limitations of using LMs for synthetic data 1:13:00 Future directions for this line of research…
 
Andrew Yates (Assistant Professor at the University of Amsterdam) and Sergi Castella (Analyst at Zeta Alpha) discuss the two influential papers introducing ColBERT (from 2020) and ColBERT v2 (from 2022), which mainly propose a fast late interaction operation to achieve a performance close to full cross-encoders but at a more manageable computational cost at inference; along with many other optimizations. 📄 ColBERT: "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" by Omar Khattab and Matei Zaharia. https://arxiv.org/abs/2004.12832 📄 ColBERTv2: "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction" by Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. https://arxiv.org/abs/2112.01488 📄 PLAID: "An Efficient Engine for Late Interaction Retrieval" by Keshav Santhanam, Omar Khattab, Christopher Potts, and Matei Zaharia. https://arxiv.org/abs/2205.09707 📄 CEDR: "CEDR: Contextualized Embeddings for Document Ranking" by Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. https://arxiv.org/abs/1904.07094 🪃 Feedback form: https://scastella.typeform.com/to/rg7a5GfJ Timestamps: 00:00 Introduction 00:42 Why ColBERT? 03:34 Retrieval paradigms recap 08:04 ColBERT query formulation and architecture 09:04 Using ColBERT as a reranker or as an end-to-end retriever 11:28 Space Footprint vs. MRR on MS MARCO 12:24 Methodology: datasets and negative sampling 14:37 Terminology for cross encoders, interaction-based models, etc. 16:12 Results (ColBERT v1) on MS MARCO 18:41 Ablations on model components 20:34 Max pooling vs. mean pooling 22:54 Why did ColBERT have a big impact?
26:31 ColBERTv2: knowledge distillation 29:34 ColBERTv2: indexing improvements 33:59 Effects of clustering compression in performance 35:19 Results (ColBERT v2): MS MARCO 38:54 Results (ColBERT v2): BEIR 41:27 Takeaway: strong specially in out-of-domain evaluation 43:59 Qualitatively how do ColBERT scores look like? 46:21 What's the most promising of all current neural IR paradigms 49:34 How come there's still so much interest in Dense retrieval? 51:09 Many to many similarity at different granularities 53:44 What would ColBERT v3 include? 56:39 PLAID: An Efficient Engine for Late Interaction Retrieval Contact: castella@zeta-alpha.com…
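The late interaction operation discussed in this episode (ColBERT's MaxSim) is simple enough to sketch in a few lines: each query token embedding is matched against every document token embedding, and the per-query-token maxima are summed into one relevance score. A minimal NumPy sketch, not the authors' implementation:

```python
import numpy as np

def maxsim_score(query_vecs, doc_vecs):
    """ColBERT-style late interaction: for each query token embedding,
    take its maximum cosine similarity over all document token
    embeddings, then sum these maxima into a single relevance score."""
    # Normalize rows so dot products become cosine similarities.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T  # shape: (num_query_tokens, num_doc_tokens)
    return sim.max(axis=1).sum()
```

Because documents can be encoded offline and only this cheap max-then-sum runs at query time, the approach keeps much of a cross-encoder's token-level matching quality at a fraction of the inference cost — the trade-off the episode explores.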
 
How much of the training and test sets in TREC or MS Marco overlap? Can we evaluate on different splits of the data to isolate the extrapolation performance? In this episode of Neural Information Retrieval Talks, Andrew Yates and Sergi Castella i Sapé discuss the paper "Evaluating Extrapolation Performance of Dense Retrieval" by Jingtao Zhan, Xiaohui Xie, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma. 📄 Paper: https://arxiv.org/abs/2204.11447 ❓ About MS Marco: https://microsoft.github.io/msmarco/ ❓ About TREC: https://trec.nist.gov/ 🪃 Feedback form: https://scastella.typeform.com/to/rg7a5GfJ Timestamps: 00:00 Introduction 01:08 Evaluation in Information Retrieval, why is it exciting 07:40 Extrapolation Performance in Dense Retrieval 10:30 Learning in High Dimension Always Amounts to Extrapolation 11:40 3 Research questions 16:18 Defining Train-Test label overlap: entity and query intent overlap 21:00 Train-test Overlap in existing benchmarks TREC 23:29 Resampling evaluation methods: constructing distinct train-test sets 25:37 Baselines and results: ColBERT, SPLADE 29:36 Table 6: interpolation vs. extrapolation performance in TREC 33:06 Table 7: interpolation vs. extrapolation in MS Marco 35:55 Table 8: Comparing different DR training approaches 40:00 Research Question 1 resolved: cross encoders are more robust than dense retrieval in extrapolation 42:00 Extrapolation and Domain Transfer: BEIR benchmark. 44:46 Figure 2: correlation between extrapolation performance and domain transfer performance 48:35 Broad strokes takeaways from this work 52:30 Is there any intuition behind the results where Dense Retrieval generalizes worse than Cross Encoders? 56:14 Will this have an impact on the IR benchmarking culture? 57:40 Outro Contact: castella@zeta-alpha.com…
 