🇺🇸 United States Episodes

14811 episodes from United States

#2078 - Duncan Trussell

From Joe Rogan Experience

Duncan Trussell is a stand-up comic, writer, actor, host of the "Duncan Trussell Family Hour" podcast, creator of "The Midnight Gospel" on Netflix, and the voice of "Hippocampus" on the television series "Krapopolis." www.duncantrussell.com Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Future of Longevity with Tony Robbins

From a16z Podcast

Marc and Ben are joined by special guest Tony Robbins to discuss new breakthroughs in regenerative medicine, AI, biohacking, gene editing, mindset and why this might be the best time to be alive.

Best of IdeaCast: Behaviors of Successful CEOs

From HBR IdeaCast

When it comes to the qualities that top-performing CEOs have in common, the research shows some surprising results. It turns out that charisma, confidence, and pedigree all have little bearing on CEO success. Elena Botelho, partner at leadership advisory firm ghSMART and coleader of its CEO Genome Project, studied high performers in the corner office. The analysis found that they demonstrated four business behaviors: quick decision making, engaging for impact, adapting proactively, and delivering reliably. Botelho cowrote the HBR article “What Sets Successful CEOs Apart.”

What great founders do at night, w/Arianna Huffington

From Masters of Scale

To survive your entrepreneurial journey, you have to learn to recharge. Knowing when to turn the lights out may be the only way to keep the lights on. Few know this better than Arianna Huffington, who dramatically scaled the Huffington Post – and then experienced profound physical burnout. Her venture, Thrive Global, scales the idea of balance across an organization. With cameo appearances from Chris Yeh (co-author, Blitzscaling) and Dr. Matt Walker (author of Why We Sleep). Read a transcript of this episode: https://mastersofscale.com Subscribe to the Masters of Scale weekly newsletter: https://mastersofscale.com/subscribe See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Doug Leone - Lessons from a Titan

From Invest Like the Best

This week we are replaying our episode with Doug Leone. Doug led one of the world’s most successful venture firms, Sequoia, for over 25 years after he was given responsibility for the firm by its founder, Don Valentine, in 1996. Alongside Mike Moritz, the pair managed its expansion from a single $150m early-stage fund into an $85 billion global powerhouse. It was a privilege to sit down with Doug and learn from him. We talk about his tough start at Sequoia, get into the technicalities of great go-to-market motions, and survey his advice for other investors in the industry. A key theme that will stick with me from this conversation is Doug’s insistence on keeping things simple and clear. Please enjoy my great conversation with Doug Leone. Listen to Founders Podcast For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Tegus. Tegus is the modern research platform for leading investors. Tired of running your own expert calls to get up to speed on a company? Tegus lets you ramp faster and find answers to critical questions more efficiently than any alternative method. The gold standard for research, the Tegus platform delivers unmatched access to timely, qualitative insights through the largest and most differentiated expert call transcript database. With over 60,000 transcripts spanning 22,000 public and private companies, investors can accelerate their fundamental research process by discovering highly-differentiated and reliable insights that can’t be found anywhere else in the market. As a listener, drive your next investment thesis forward with Tegus for free at tegus.co/patrick. ----- Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes.  Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. 
Follow us on Twitter: @patrick_oshag | @JoinColossus Show Notes [00:03:21] - [First question] - What Don Valentine’s heart was like [00:06:30] - The most productive and unproductive parts of Don’s toughness [00:10:55] - Why it’s so important to understand someone’s core motivations [00:14:18] - Questions or topics he returns to when getting to know people [00:16:44] - The most formative experiences he had prior to becoming an investor that impacted his investing the most [00:20:37] - What venture looks like to him today relative to his prior career [00:23:51] - His style of approaching emerging technology markets like AI as an investor [00:26:37] - Whether or not he’d go into venture today if he was in his late 20s [00:28:30] - Commonalities between the very best at going to market effectively [00:31:11] - The key components of great product positioning [00:37:15] - How interacting with companies early on has changed over the years [00:41:12] - Whether or not new entrants into venture should build firms with enterprise value [00:46:14] - Sussing out the killer gene in somebody [00:49:04] - How successful people can instill the lessons learned from hardship into their children [00:53:52] - Whether or not his view on competitive advantage has changed [00:55:21] - The early 2000s clawback at Sequoia and what navigating that period was like [01:00:40] - The most interesting question an LP has ever asked him [01:05:00] - Which dinner companions he’d pick to educate a newly successful founder [01:07:59] - The kindest thing anyone has ever done for him

ChatGPT did not title this podcast | ReThinking with Adam Grant

From TED Talks Daily

ChatGPT, the artificial intelligence chatbot capable of generating human-like text, seems to be everywhere. But how trustworthy are these tools — and what do they mean for the future of writing and work? Adam brings AI entrepreneur Allie Miller and innovation and entrepreneurship professor Ethan Mollick to discuss the capabilities of ChatGPT, debate its merits and downfalls and ponder what we should — and shouldn't — leave to AI. This is an episode of ReThinking with Adam Grant, another podcast from the TED Audio Collective. For more, check out ReThinking wherever you get your podcasts. Learn more about our flagship conference happening this April at attend.ted.com/podcast. Hosted on Acast. See acast.com/privacy for more information.

Protocols to Access Creative Energy and Process | Rick Rubin

From Huberman Lab

In this episode, my guest is Rick Rubin, world-renowned music producer of numerous award-winning artists, including Johnny Cash, Red Hot Chili Peppers, Beastie Boys, Adele, Eminem, Slayer, and many more. Rick is also the host of the podcast Tetragrammaton and the author of the best-selling book about the creative process entitled “The Creative Act: A Way of Being.” In this Q&A episode, Rick explains the practical aspects of the creative process, such as specific morning and daily routines, the role of movement, and how to source and capture ideas, interpret dreams, and generate work-life balance. He also offers advice for those struggling with creative or motivation blocks. He explains how cultivating relationships with the unknown, uncertainty and life circumstances heightens the creative process. Rick’s insights into accessing your artistic spirit and direction apply to everyone and all realms of art, work, and life. For show notes, including referenced articles and additional resources, please visit hubermanlab.com. Use Ask Huberman Lab, our new AI-powered platform, for a summary, clips, and insights from this episode. Thank you to our sponsors AG1: https://drinkag1.com/huberman LMNT: https://drinklmnt.com/hubermanlab Waking Up: https://wakingup.com/huberman Momentous: https://livemomentous.com/huberman Timestamps (00:00:00) Rick Rubin (00:02:00) Sponsors: LMNT & Waking Up (00:06:27) Tool: Coherence Breathing, Heart Rate Variability (00:09:32) Treading Water, Podcasts (00:11:45) Tool: Meditation Practices (00:15:43) Sunlight, Skin, Circadian Rhythm (00:20:00) Headphones, Natural Living, Diet (00:24:31) Artificial Intelligence (AI); Childhood; Magic & Mentalists (00:28:34) Tool: Writer’s Block, Creativity, Diary Entries; Deadlines (00:33:33) Sponsor: AG1 (00:35:54) Uncertainty; Creativity & Challenges; Sensitivity & Environment (00:40:43) Wrestling, Storytelling; Johnny Cash (00:48:51) Creative Endeavors & Outcome; Surprise in Oneself; Experimentation (00:56:36) Resistance; Business & Art (01:01:39) Source of Ideas; Internet & Information (01:08:31) Dreams & Interpretation; Unconscious Mind; Motivations, Art & Outcome (01:14:07) Career Advice, Book Writing, Diary Entries, Expressive Writing (01:19:25) Music Industry; Capturing Ideas; Money & Ingenuity (01:25:21) Audience; Innovative Ideas (01:29:35) Alcohol, Confidence, Psychedelics (01:35:10) Creativity, Chaos & Organization; Shocking Experiences (01:42:13) News & False Stories; Playing, Wonder & Childhood (01:46:58) Ramones; Henry Rollins (01:49:55) Daily Routine; Red Light, Circadian Rhythm & “Cheap Photons” (01:57:46) Creativity, Experience vs. Institutions; Work, Stress & Relationships (02:04:29) Book Recommendations; Ancestry & Creativity (02:07:41) Experiencing Music; Developing Albums (02:12:28) Music Videos; Book Interpretation; Current Projects & Documentaries (02:16:40) Podcasting & Conversation (02:25:41) Zero-Cost Support, Spotify & Apple Reviews, YouTube Feedback, Sponsors,  Momentous, Social Media, Neural Network Newsletter Disclaimer Learn more about your ad choices. Visit megaphone.fm/adchoices

What it's like to find your birth parent | Am I Normal? with Mona Chalabi

From TED Talks Daily

In Britain, one-fourth of people who were adopted make contact with their birth parents before they turn 18. In this episode of Am I Normal? with Mona Chalabi, another podcast from the TED Audio Collective, guest host Saleem Reshamwala meets Amanda, a Dominican woman who was adopted by a white couple in Connecticut. Amanda always knew she was adopted, and was curious about her birth parents. After a few years of dead ends, she finally finds her biological mother ... in the last place she expected. You can listen to more Am I Normal? with Mona Chalabi wherever you get your podcasts. Learn more about our flagship conference happening this April at attend.ted.com/podcast. Hosted on Acast. See acast.com/privacy for more information.

Kerry Washington

From Oprah's Super Soul

Oprah invites Emmy-winning actress and political activist Kerry Washington to her home to talk about her new memoir, Thicker Than Water. In this revealing conversation, the notably private Washington opens up about enduring difficult childhood trauma, her struggles with body image and mental health, and the revelation of a shocking family secret that changed the trajectory of her life. Kerry explains how acting saved her life in many ways, including the iconic role of Olivia Pope, which she says pulled her from "the darkest corners" of herself. Want more podcasts from OWN? Visit https://bit.ly/OWNPods You can also watch Oprah’s Super Soul, The Oprah Winfrey Show and more of your favorite OWN shows on your TV! Visit https://bit.ly/find_OWN

#406 – Teddy Atlas: Mike Tyson, Cus D’Amato, Boxing, Loyalty, Fear & Greatness

From Lex Fridman Podcast

Teddy Atlas is a boxing trainer to 18 world champions, an ESPN boxing commentator, and host of the podcast THE FIGHT with Teddy Atlas. Please support this podcast by checking out our sponsors: – Notion: https://notion.com/lex – Babbel: https://babbel.com/lexpod and use code Lexpod to get 55% off – ExpressVPN: https://expressvpn.com/lexpod to get 3 months free – InsideTracker: https://insidetracker.com/lex to get 20% off Transcript: https://lexfridman.com/teddy-atlas-transcript EPISODE LINKS: Teddy’s Twitter: https://twitter.com/TeddyAtlasReal Teddy’s Instagram: https://instagram.com/teddy_atlas Teddy’s Website: https://teddyatlas.com/ Atlas: From the Streets to the Ring (book): https://amzn.to/48uIQBj Teddy’s Podcast: https://youtube.com/THEFIGHTwithTeddyAtlas Dr. Theodore Atlas Foundation: http://dratlasfoundation.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: – Check out the sponsors above, it’s the best way to support this podcast – Support on Patreon: https://www.patreon.com/lexfridman – Twitter: https://twitter.com/lexfridman – Instagram: https://www.instagram.com/lexfridman – LinkedIn: https://www.linkedin.com/in/lexfridman – Facebook: https://www.facebook.com/lexfridman – Medium: https://medium.com/@lexfridman OUTLINE: Here’s the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) – Introduction (09:47) – Lessons from father (19:53) – Scar story (40:31) – Cus D’Amato (50:43) – Mike Tyson (2:08:39) – Forgiveness

"Santa"

From SmartLess

Unwrap yourself with this Clause for celebration, a SmartLess Christmas Bonus: SANTA!

Bonus Episode: Jamie Kern Lima in Conversation with Oprah Winfrey on The Color Purple

From Oprah's Super Soul

In a sneak peek episode of the upcoming ‘Jamie Kern Lima Show,’ Jamie sits down with Oprah Winfrey to discuss the new, reimagined film version of The Color Purple. Oprah shares why Alice Walker’s 1982 book The Color Purple resonated with her, how she was cast in the iconic role of Sofia in Steven Spielberg’s 1985 classic movie and why she leads her life with intention. Jamie Kern Lima created IT Cosmetics in her living room with her husband, Paulo, eventually selling the company to L’Oréal for 1.2 billion dollars. The Color Purple film premieres in theaters on Christmas Day. Buy your tickets on Fandango now! https://www.fandango.com/canvas/thecolorpurple

NeurIPS 2023 Recap — Best Papers

From Latent Space: The AI Engineer Podcast

We are running an end of year listener survey! Please let us know any feedback you have, what episodes resonated with you, and guest requests for 2024! Survey link here.

NeurIPS 2023 took place from Dec 10–16 in New Orleans. The Latent Space crew was onsite for as many of the talks and workshops as we could attend (and more importantly, hosted cocktails and parties after hours)! Picking from the 3586 papers accepted to the conference (available online, full schedule here) is an impossible task, but we did our best to present an audio guide with brief commentary on each. We also recommend MLContests.com's NeurIPS recap, Seb Ruder’s NeurIPS primer, and Jerry Liu’s paper picks. We also found the VizHub guide useful for a t-SNE clustering of papers. Lots also happened in the arxiv publishing world outside NeurIPS, as highlighted by Karpathy, especially DeepMind’s Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models.

Jan 2024 update: we also strongly recommend Sebastian Raschka's pick of the year’s 10 best papers, including Pythia.

We’ll start with the NeurIPS Best Paper Awards, then go to a selection of non-awarded but highly influential papers, and then arbitrary personal picks to round out the selection. Where we were able to do a poster session interview, please scroll to the relevant show notes for images of their poster for discussion. We give Chris Ré the last word, since the Mamba and StripedHyena state space models are drawing particular excitement but are still too early to assess for impact.

Timestamps

* [0:01:19] Word2Vec (Jeff Dean, Greg Corrado)
* [0:15:28] Emergence Mirage (Rylan Schaeffer)
* [0:28:48] DPO (Rafael Rafailov)
* [0:41:36] DPO Poster Session (Archit Sharma)
* [0:52:03] Datablations (Niklas Muennighoff)
* [1:00:50] QLoRA (Tim Dettmers)
* [1:12:23] DataComp (Samir Gadre)
* [1:25:38] DataComp Poster Session (Samir Gadre, Alex Dimakis)
* [1:35:25] LLaVA (Haotian Liu)
* [1:47:21] LLaVA Poster Session (Haotian Liu)
* [1:59:19] Tree of Thought (Shunyu Yao)
* [2:11:27] Tree of Thought Poster Session (Shunyu Yao)
* [2:20:09] Toolformer (Jane Dwivedi-Yu)
* [2:32:26] Voyager (Guanzhi Wang)
* [2:45:14] CogEval (Ida Momennejad)
* [2:59:41] State Space Models (Chris Ré)

Papers covered

* Distributed Representations of Words and Phrases and their Compositionality (Word2Vec). Tomas Mikolov · Ilya Sutskever · Kai Chen · Greg Corrado · Jeff Dean. The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several improvements that make the Skip-gram model more expressive and enable it to learn higher quality vectors more rapidly. We show that by subsampling frequent words we obtain significant speedup, and also learn higher quality representations as measured by our tasks. We also introduce Negative Sampling, a simplified variant of Noise Contrastive Estimation (NCE) that learns more accurate vectors for frequent words compared to the hierarchical softmax. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada".
Motivated by this example, we present a simple and efficient method for finding phrases, and show that their vector representations can be accurately learned by the Skip-gram model. (A minimal sketch of the negative-sampling objective appears at the end of these notes.)
* Some notable reflections from Tomas Mikolov, and debate over the Seq2Seq paper credit with Quoc Le.

* Are Emergent Abilities of Large Language Models a Mirage? (Schaeffer et al.). Emergent abilities are abilities that are present in large-scale models but not in smaller models and are hard to predict. Rather than being a product of models’ scaling behavior, this paper argues that emergent abilities are mainly an artifact of the choice of metric used to evaluate them. Specifically, nonlinear and discontinuous metrics can lead to sharp and unpredictable changes in model performance. Indeed, the authors find that when accuracy is changed to a continuous metric for arithmetic tasks where emergent behavior was previously observed, performance improves smoothly instead. So while emergent abilities may still exist, they should be properly controlled, and researchers should consider how the chosen metric interacts with the model.

* Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al.)
* While large-scale unsupervised language models (LMs) learn broad world knowledge and some reasoning skills, achieving precise control of their behavior is difficult due to the completely unsupervised nature of their training. Existing methods for gaining such steerability collect human labels of the relative quality of model generations and fine-tune the unsupervised LM to align with these preferences, often with reinforcement learning from human feedback (RLHF). However, RLHF is a complex and often unstable procedure, first fitting a reward model that reflects the human preferences, and then fine-tuning the large unsupervised LM using reinforcement learning to maximize this estimated reward without drifting too far from the original model.
* In this paper, we leverage a mapping between reward functions and optimal policies to show that this constrained reward maximization problem can be optimized exactly with a single stage of policy training, essentially solving a classification problem on the human preference data. The resulting algorithm, which we call Direct Preference Optimization (DPO), is stable, performant, and computationally lightweight, eliminating the need for fitting a reward model, sampling from the LM during fine-tuning, or performing significant hyperparameter tuning.
* Our experiments show that DPO can fine-tune LMs to align with human preferences as well as or better than existing methods. Notably, fine-tuning with DPO exceeds RLHF's ability to control sentiment of generations and improves response quality in summarization and single-turn dialogue while being substantially simpler to implement and train. (A minimal sketch of the DPO loss appears at the end of these notes.)
See also Interconnects on DPO, and recent Twitter discussions.

* Scaling Data-Constrained Language Models (Muennighoff et al.)
* The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models.
We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations.
* 2 minute poster session presentation video

* QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al.)
* This paper proposes QLoRA, a more memory-efficient (but slower) version of LoRA that uses several optimization tricks to save memory. They train a new model, Guanaco, that is fine-tuned only on a single GPU for 24h and outperforms previous models on the Vicuna benchmark. Overall, QLoRA enables fine-tuning LLMs with much less GPU memory. Concurrently, other methods such as 4-bit LoRA quantization have been developed that achieve similar results. (A minimal sketch of the low-rank-adapter idea appears at the end of these notes.)

* DataComp: In search of the next generation of multimodal datasets (Gadre et al.)
* Multimodal datasets are a critical component in recent breakthroughs such as CLIP, Stable Diffusion and GPT-4, yet their design does not receive the same research attention as model architectures or training algorithms. To address this shortcoming in the machine learning ecosystem, we introduce DataComp, a testbed for dataset experiments centered around a new candidate pool of 12.8 billion image-text pairs from Common Crawl. Participants in our benchmark design new filtering techniques or curate new data sources and then evaluate their new dataset by running our standardized CLIP training code and testing the resulting model on 38 downstream test sets.
* Our benchmark consists of multiple compute scales spanning four orders of magnitude, which enables the study of scaling trends and makes the benchmark accessible to researchers with varying resources. Our baseline experiments show that the DataComp workflow leads to better training sets. Our best baseline, DataComp-1B, enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet, outperforming OpenAI's CLIP ViT-L/14 by 3.7 percentage points while using the same training procedure and compute. We release DataComp and all accompanying code at www.datacomp.ai.

* Visual Instruction Tuning (Liu et al)
* Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data.
* By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.
* Our early experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset.
When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.

* Tree of Thoughts: Deliberate Problem Solving with Large Language Models (Yao et al)
* Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role.
* To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving.
* ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices.
* Our experiments show that ToT significantly enhances language models’ problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. (A minimal sketch of the ToT search loop appears at the end of these notes.)
* Code repo with all prompts: https://github.com/princeton-nlp/tree-of-thought-llm.

* Toolformer: Language Models Can Teach Themselves to Use Tools (Schick et al)
* LMs exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller specialized models excel.
* In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds.
* We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction.
* This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q&A system, a search engine, a translation system, and a calendar.
* Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.

* Voyager: An Open-Ended Embodied Agent with Large Language Models (Wang et al)
* We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention. Voyager consists of three key components:
* 1) an automatic curriculum that maximizes exploration,
* 2) an ever-growing skill library of executable code for storing and retrieving complex behaviors, and
* 3) a new iterative prompting mechanism that incorporates environment feedback, execution errors, and self-verification for program improvement.
* Voyager interacts with GPT-4 via blackbox queries, which bypasses the need for model parameter fine-tuning.
The skills developed by Voyager are temporally extended, interpretable, and compositional, which compounds the agent's abilities rapidly and alleviates catastrophic forgetting. Empirically, Voyager shows strong in-context lifelong learning capability and exhibits exceptional proficiency in playing Minecraft. It obtains 3.3x more unique items, travels 2.3x longer distances, and unlocks key tech tree milestones up to 15.3x faster than prior SOTA. Voyager is able to utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch, while other techniques struggle to generalize. Voyager discovers new Minecraft items and skills continually by self-driven exploration, significantly outperforming the baselines.

* Evaluating Cognitive Maps and Planning in Large Language Models with CogEval (Momennejad et al)
* Recently an influx of studies claims emergent cognitive abilities in large language models (LLMs). Yet, most rely on anecdotes, overlook contamination of training sets, or lack systematic evaluation involving multiple tasks, control conditions, multiple iterations, and statistical robustness tests. Here we make two major contributions.
* First, we propose CogEval, a cognitive science-inspired protocol for the systematic evaluation of cognitive capacities in LLMs. The CogEval protocol can be followed for the evaluation of various abilities.
* Second, here we follow CogEval to systematically evaluate cognitive maps and planning ability across eight LLMs (OpenAI GPT-4, GPT-3.5-turbo-175B, davinci-003-175B, Google Bard, Cohere-xlarge-52.4B, Anthropic Claude-1-52B, LLaMA-13B, and Alpaca-7B). We base our task prompts on human experiments, which offer both established construct validity for evaluating planning, and are absent from LLM training sets.
* We find that, while LLMs show apparent competence in a few planning tasks with simpler structures, systematic evaluation reveals striking failure modes in planning tasks, including hallucinations of invalid trajectories and falling into loops. These findings do not support the idea of emergent out-of-the-box planning ability in LLMs. This could be because LLMs do not understand the latent relational structures underlying planning problems, known as cognitive maps, and fail at unrolling goal-directed trajectories based on the underlying structure. Implications for application and future directions are discussed.

* Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Albert Gu, Tri Dao)
* Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements.
* First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token.
* Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode.
We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba).
* Mamba enjoys fast inference (5x higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-1.4B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation. (A minimal sketch of the selective state-space recurrence appears at the end of these notes.)

* Get full access to Latent.Space at www.latent.space/subscribe
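
To make the Negative Sampling idea in the Word2Vec summary above concrete, here is a minimal sketch of the skip-gram negative-sampling objective in PyTorch. The vocabulary size, embedding dimension, and the sampled word indices are illustrative assumptions, not values from the paper.

```python
# Skip-gram with negative sampling (SGNS): maximize log sigma(u_context . v_center)
# plus log sigma(-u_negative . v_center) for a few randomly drawn negative words.
import torch
import torch.nn.functional as F

vocab_size, dim = 10_000, 100                    # illustrative sizes, not from the paper
in_embed = torch.nn.Embedding(vocab_size, dim)   # "input" vectors (center words)
out_embed = torch.nn.Embedding(vocab_size, dim)  # "output" vectors (context words)

def sgns_loss(center, context, negatives):
    """center: (B,), context: (B,), negatives: (B, K) word indices."""
    v = in_embed(center)                  # (B, D)
    u_pos = out_embed(context)            # (B, D)
    u_neg = out_embed(negatives)          # (B, K, D)
    pos = F.logsigmoid((v * u_pos).sum(-1))                      # observed pair
    neg = F.logsigmoid(-(u_neg @ v.unsqueeze(-1)).squeeze(-1))   # sampled non-pairs
    return -(pos + neg.sum(-1)).mean()

# Toy usage: one center word, one context word, and K=5 random negatives.
loss = sgns_loss(torch.tensor([42]), torch.tensor([7]),
                 torch.randint(0, vocab_size, (1, 5)))
loss.backward()
```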
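
The DPO summary above boils down to a single loss over preference pairs. Below is a minimal sketch of that loss, assuming the per-sequence log-probabilities under the policy and the frozen reference model have already been computed; the beta value and the toy numbers are placeholders, not the paper's settings.

```python
# DPO: a logistic loss on the margin between implicit rewards of the chosen and
# rejected completions, where reward = beta * (policy logprob - reference logprob).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """All inputs are summed sequence log-probs, shape (B,)."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Push the policy to prefer the chosen answer over the rejected one.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy usage with made-up log-probabilities for a batch of 2 preference pairs.
loss = dpo_loss(torch.tensor([-12.0, -30.0]), torch.tensor([-15.0, -28.0]),
                torch.tensor([-13.0, -29.0]), torch.tensor([-14.0, -29.5]))
```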
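
For the QLoRA summary above, the core memory saving comes from freezing the pretrained weights and training only a small low-rank update; the 4-bit quantization of the frozen base is omitted in this sketch, and the dimensions and rank are illustrative assumptions.

```python
# LoRA-style adapter on a frozen linear layer: output = frozen_base(x) + scale * B(A(x)).
import torch

class LoRALinear(torch.nn.Module):
    def __init__(self, base: torch.nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():     # frozen (and 4-bit quantized in real QLoRA)
            p.requires_grad_(False)
        d_out, d_in = base.weight.shape
        self.A = torch.nn.Parameter(torch.randn(rank, d_in) * 0.01)  # trainable down-projection
        self.B = torch.nn.Parameter(torch.zeros(d_out, rank))        # trainable up-projection, starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base output plus the low-rank correction applied to x.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(torch.nn.Linear(768, 768))
out = layer(torch.randn(2, 768))   # only A and B accumulate gradients during fine-tuning
```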
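
The Tree of Thoughts summary above describes a pruned search over intermediate "thoughts". Here is a minimal, model-agnostic sketch of that loop; the propose and score callables stand in for LLM calls and are assumptions for illustration, not the paper's actual prompts or heuristics.

```python
# Breadth-first Tree of Thoughts with pruning: expand candidate thoughts per state,
# score them, and keep only the best few partial solutions at each depth.
import heapq

def tree_of_thoughts(problem, propose, score, depth=3, breadth=5, keep=2):
    frontier = [""]                               # partial solutions ("thoughts so far")
    for _ in range(depth):
        candidates = []
        for state in frontier:
            for thought in propose(problem, state, n=breadth):   # LLM proposes next steps
                candidates.append((score(problem, state + thought), state + thought))
        # Keep the `keep` highest-scoring partial solutions.
        frontier = [s for _, s in heapq.nlargest(keep, candidates, key=lambda c: c[0])]
    return max(frontier, key=lambda s: score(problem, s))

# Toy usage with stub functions in place of model calls.
best = tree_of_thoughts(
    "toy problem",
    propose=lambda p, s, n: [f" step{i}" for i in range(n)],
    score=lambda p, s: len(s),
)
```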
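
Finally, the Mamba summary above hinges on making the state-space parameters functions of the input. Below is a minimal, deliberately slow sketch of that selective recurrence as a plain Python loop; the paper's hardware-aware parallel scan is not shown, and the shapes and simplified discretization are assumptions for illustration.

```python
# Selective SSM recurrence: h_t = exp(dt_t * A) * h_{t-1} + (dt_t * B_t) * x_t, y_t = h_t @ C_t,
# where dt, B, and C are computed from the current input (the "selection" mechanism).
import torch

def selective_ssm(x, w_dt, w_B, w_C, A):
    """x: (T, D) inputs; A: (D, N) negative decay rates; returns y: (T, D)."""
    T, D = x.shape
    N = A.shape[-1]
    h = torch.zeros(D, N)
    ys = []
    for t in range(T):
        dt = torch.nn.functional.softplus(x[t] @ w_dt)   # (D,) input-dependent step size
        B = x[t] @ w_B                                    # (N,) input-dependent input matrix
        C = x[t] @ w_C                                    # (N,) input-dependent output matrix
        A_bar = torch.exp(dt.unsqueeze(-1) * A)           # (D, N) selective decay
        h = A_bar * h + (dt.unsqueeze(-1) * B) * x[t].unsqueeze(-1)  # state update
        ys.append(h @ C)                                  # (D,) readout
    return torch.stack(ys)

T, D, N = 16, 8, 4
y = selective_ssm(torch.randn(T, D), torch.randn(D, D), torch.randn(D, N),
                  torch.randn(D, N), -torch.rand(D, N))
```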

#723 - Modern Wisdom Christmas Special - Reflecting On The Wildest Year

From Modern Wisdom

2023 has been crazy. I'm back on my old couch in Newcastle with Jonny, Yusef & George to catch up on their favourite lessons from the past 12 months plus their best new hacks and plans for 2024. Expect to learn why you need two duvets to improve your sleep when sharing a bed, the new productivity system that everyone now uses except me, what you can learn about using a fitness tracker without actually buying one, which app Yusef uses over 400 times a day, why all 5 of my top songs this year were from the same artist and much more... Sponsors: Get $150 discount on Plunge’s amazing sauna or cold plunge at https://plunge.com (use code MW150) Get 20% discount & free shipping on your Lawnmower 5.0 at https://manscaped.com/modernwisdom (use code MODERNWISDOM) Get an exclusive discount from Surfshark VPN at https://surfshark.deals/MODERNWISDOM (use code MODERNWISDOM) Extra Stuff: Access Propane's Free Training: https://propanefitness.com/modernwisdom Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/ Buy my productivity energy drink Neutonic: https://neutonic.com/modernwisdom - Get in touch. Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact/ Learn more about your ad choices. Visit megaphone.fm/adchoices

E158: Global trade disrupted, Adobe/Figma canceled, realtors sued, Trump blocked

From All-In with Chamath, Jason, Sacks & Friedberg

(0:00) Bestie intros: Jason comes in hot, All-In's new Chairman Dictator, Holiday party recap, and more (7:54) Understanding the trade disruption in the Red Sea: Houthis, global impact on trade, the dicey geopolitical situation, and how this compares to COVID freight prices with Flexport's Ryan Petersen (35:55) Major M&A deals called off, downstream impacts of a hawkish regulatory environment (54:15) The new era of startup building: less capital raised, less overhead costs, more profitable, smaller exits with higher founder/employee ownership percentages (1:17:01) Bombshell class action lawsuits against the NAR and other real estate brokerages, how this could change residential real estate in the US (1:31:58) Colorado bans Trump from primary ballots Follow the besties: https://twitter.com/chamath https://twitter.com/Jason https://twitter.com/DavidSacks https://twitter.com/friedberg Follow Ryan: https://twitter.com/typesfast Follow the pod: https://twitter.com/theallinpod https://linktr.ee/allinpodcast Intro Music Credit: https://rb.gy/tppkzl https://twitter.com/yung_spielburg Intro Video Credit: https://twitter.com/TheZachEffect Referenced in the show: https://www.flexport.com https://www.wionews.com/world/wont-stop-fighting-netanyahu-rules-out-ceasefire-until-hamas-elimination-671867 https://www.figma.com/blog/figma-adobe-abandon-proposed-merger https://www.reuters.com/markets/deals/illumina-divest-cancer-test-maker-grail-2023-12-17 https://www.google.com/finance/quote/ADBE:NASDAQ https://www.saasgrid.com https://www.housingwire.com/articles/missouri-jury-finds-nar-brokerages-guilty-of-conspiring-to-inflate-commissions https://www.wsj.com/personal-finance/real-estate-buying-home-charts-6dc40caa https://www.msn.com/en-us/money/realestate/real-estate-commissions-could-be-the-next-fee-on-the-chopping-block/ar-AA1irqIV https://www.redfin.com/guides/how-to-sell-your-home-without-an-agent-fsbo https://apnews.com/article/trump-insurrection-14th-amendment-2024-colorado-d16dd8f354eeaf450558378c65fd79a2 https://www.nytimes.com/2022/04/02/us/politics/merrick-garland-biden-trump.html https://cafe.com/cafe-insider-feed https://www.lawfaremedia.org/podcasts-multimedia/podcast/the-lawfare-podcast

We buy a lot of Christmas trees (Update)

From Planet Money

*Note: This episode originally ran in 2020* 'Tis the season for Americans to head out in droves and bring home a freshly-cut Christmas tree. But decorative evergreens don't just magically show up on corner lots, waiting to find a home in your living room. There are a bunch of fascinating steps that determine exactly how many Christmas trees get sold, and how expensive they are. Today on the show, we visit the world's largest auction of Christmas trees — and then see how much green New Yorkers are willing to throw down for some greenery. It's a story where snow-dusted Yuletide dreams meet the hard reality of supply and demand. We've got market theory, a thousand dollars in cash, and a "decent sized truck"... anything could happen. This episode was produced by James Sneed. It was edited by Bryant Urstadt. It was engineered by Gilly Moon. Alex Goldmark is Planet Money's executive producer. Help support Planet Money and get bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney. Learn more about sponsor message choices: podcastchoices.com/adchoices NPR Privacy Policy

How We Got Our First 100 Customers (No Bulls**t, Specific Details)

From My First Million

Episode 534: Shaan Puri (https://twitter.com/ShaanVP) and Sam Parr (https://twitter.com/theSamParr) answer the question, “How did you get your first 100 customers?” In this episode, they'll share 11 methods with you. No more small boy spreadsheets, build your business on the free HubSpot CRM: https://mfmpod.link/hrd — Show Notes: (0:00) Intro (2:00) Throw an event method (6:00) The floor-to-floor method (12:30) Magnet method (14:00) "Made to Stick" method (19:00) The no risk offer (25:00) Viral content (30:00) Use the personal story (35:00) Pissing in the pond (37:00) Booth babes method (44:30) Launchpad method (48:30) The David Blaine method — Links: • The Anti-MBA - https://www.theantimba.com/ • Neville Medhora’s blog on Hustle Con - https://www.nevblog.com/hustlecon-2014/ • Ryan Hoover in Fast Company - http://tinyurl.com/muva968r — Check Out Shaan's Stuff: • Try Shepherd Out - https://www.supportshepherd.com/ • Shaan's Personal Assistant System - http://shaanpuri.com/remoteassistant • Power Writing Course - https://maven.com/generalist/writing • Small Boy Newsletter - https://smallboy.co/ • Daily Newsletter - https://www.shaanpuri.com/ Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth Past guests on My First Million include Rob Dyrdek, Hasan Minhaj, Balaji Srinivasan, Jake Paul, Dr. Andrew Huberman, Gary Vee, Lance Armstrong, Sophia Amoruso, Ariel Helwani, Ramit Sethi, Stanley Druckenmiller, Peter Diamandis, Dharmesh Shah, Brian Halligan, Marc Lore, Jason Calacanis, Andrew Wilkinson, Julian Shapiro, Kat Cole, Codie Sanchez, Nader Al-Naji, Steph Smith, Trung Phan, Nick Huber, Anthony Pompliano, Ben Askren, Ramon Van Meer, Brianne Kimmel, Andrew Gazdecki, Scott Belsky, Moiz Ali, Dan Held, Elaine Zelby, Michael Saylor, Ryan Begelman, Jack Butcher, Reed Duchscher, Tai Lopez, Harley Finkelstein, Alexa von Tobel, Noah Kagan, Nick Bare, Greg Isenberg, James Altucher, Randy Hetrick and more. — Other episodes you might enjoy: • #224 Rob Dyrdek - How Tracking Every Second of His Life Took Rob Dyrdek from 0 to $405M in Exits • #209 Gary Vaynerchuk - Why NFTS Are the Future • #178 Balaji Srinivasan - Balaji on How to Fix the Media, Cloud Cities & Crypto • #169 - How One Man Started 5, Billion Dollar Companies, Dan Gilbert's Empire, & Talking With Warren Buffett • #218 - Why You Should Take a Think Week Like Bill Gates • Dave Portnoy vs The World, Extreme Body Monitoring, The Future of Apparel Retail, "How Much is Anthony Pompliano Worth?", and More • How Mr Beast Got 100M Views in Less Than 4 Days, The $25M Chrome Extension, and More

Page 245 of 741 (14811 episodes from United States)

🇺🇸 About United States Episodes

Explore the diverse voices and perspectives from podcast creators in United States. Each episode offers unique insights into the culture, language, and stories from this region.