144. Communicating Through Conflict: How to Get Along with Anyone
Transform conflicts into something productive.
14,439 episodes from the United States
How will the return of meme stocks like GameStop and day trading influencers like Roaring Kitty alter the fintech landscape? Robinhood’s co-founder and CEO Vlad Tenev, who was a lightning rod for controversy when meme stocks first surged in 2021, joins Rapid Response to explain what’s the same and what’s different this time around. Tenev also shares how his family’s struggles in 1990s Bulgaria shape his view of the U.S. financial markets today, and whether trading on Robinhood is like using a sports gambling app. Watch this episode on YouTube: https://www.youtube.com/watch?v=8pChXgKIemw For more info, visit: www.rapidresponseshow.com See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
My guest today is Howie Liu. Howie is the co-founder and CEO of Airtable, a no-code app platform that allows teams to build on top of their shared data and create productive workflows. The business began in 2013 and now has use cases built out for over 300,000 organizations. As Airtable begins to integrate AI and the latest LLMs into its product, Howie has maintained a focus on an intuitive building experience, allowing anyone to build out their workflow within minutes or hours. We discuss the future of the platform in the era of AI, his perspective on horizontal versus vertical software solutions, and his crucial moments as a leader in building a critical component of the advancement of productivity. Please enjoy this discussion with Howie Liu. Listen to Founders Podcast. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Tegus, where we're changing the game in investment research. Step away from outdated, inefficient methods and into the future with our platform, proudly hosting over 100,000 transcripts – with over 25,000 transcripts added just this year alone. Our platform grows eight times faster and adds twice as much monthly content as our competitors, putting us at the forefront of the industry. Plus, with 75% of private market transcripts available exclusively on Tegus, we offer insights you simply can't find elsewhere. See the difference a vast, quality-driven transcript library makes. Unlock your free trial at tegus.com/patrick. ----- Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. Follow us on Twitter: @patrick_oshag | @JoinColossus Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Show Notes: (00:00:00) Welcome to Invest Like the Best (00:06:49) Exploring Horizontal vs. Vertical Software in the AI Era (00:11:00) The Future of Customized Applications (00:15:28) Perspectives on AI's Future and Enterprise Adoption (00:18:13) The Evolution of LLMs and Their Impact on Software Development (00:23:33) Harnessing AI for Business Transformation and Innovation (00:27:28) Reflecting on Airtable's Founding and Evolution (00:33:23) Airtable's Approach to Customer Engagement and Innovation (00:39:59) The Impact of AI on Platform Versatility and Market Penetration (00:46:00) Achieving Product-Market Fit and Initial Monetization (00:50:23) Scaling Up and Securing the First Unicorn Round (00:51:52) Rapid Growth and Organizational Scaling Challenges (00:55:00) Reflecting on Tough Decisions in the Business (01:02:55) The Role of Capital Allocation in Expanding Airtable (01:06:55) The Kindest Thing Anyone Has Ever Done For Howie
Episode 589: Shaan Puri ( https://twitter.com/ShaanVP ) sits down with David Perell to reveal every framework he knows to become a better storyteller, a better writer, and a better creator of binge-worthy content. This episode was originally recorded for the podcast “How I Write,” hosted by David Perell. —> https://www.youtube.com/watch?v=Z2BnqYArwaw Want to see Sam and Shaan’s smiling faces? Head to the MFM YouTube Channel and subscribe - http://tinyurl.com/5n7ftsy5 — Show Notes: (0:00) Intro (3:16) Binge bank (6:03) Storytelling (8:05) Intention & Obstacle (14:22) Hasan Minhaj (15:53) Writing vs Speaking (18:06) Pacing (19:01) Hooks vs Frames (22:30) Viral tweets (26:49) MrBeast (27:26) Storyworthy (29:10) 5-second moment of change (32:02) Origin Stories (42:07) Tony Robbins (43:25) Transformations (44:12) Steven Bartlett (46:17) Viral videos (49:09) Miss Excel (56:04) Change your state & focus (58:31) Paul Graham (1:03:43) Advice to writers (1:06:53) Writer's voice (1:11:48) Dave Chappelle vs Netflix (1:18:18) Distribution (1:21:40) Twitter / X (1:32:34) Writing with humor (1:45:02) Newsletters — Links: • Write of Passage - https://writeofpassage.com/ • David on Twitter - https://twitter.com/david_perell • David’s Website - https://perell.com/ • David Perell on YouTube - https://www.youtube.com/channel/UC0a_pO439rhcyHBZq3AKdrw • David’s Podcast - https://writeofpassage.com/how-i-write • Get HubSpot's Free AI-Powered Sales Hub: enhance support, retention, and revenue all in one place https://clickhubspot.com/sym — Check Out Shaan's Stuff: Need to hire? You should use the same service Shaan uses to hire developers, designers, & Virtual Assistants → it’s called Shepherd (tell ‘em Shaan sent you): https://bit.ly/SupportShepherd — Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth • Pitch your startup for a shot at a $1M investment with Sam Parr as the MC https://clickhubspot.com/pitch My First Million is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano
Speakers for AI Engineer World’s Fair have been announced! See our Microsoft episode for more info and buy now with code LATENTSPACE — we’ve been studying the best ML research conferences so we can make the best AI industry conf! Note that this year there are 4 main tracks per day and dozens of workshops/expo sessions; the free livestream will air much less than half of the content this time. Apply for free/discounted Diversity Program and Scholarship tickets here. We hope to make this the definitive technical conference for ALL AI engineers.

UPDATE: This is a 2-part episode - see Part 2 here.

ICLR 2024 took place from May 6-11 in Vienna, Austria. Just like we did for our extremely popular NeurIPS 2023 coverage, we decided to pay the $900 ticket (thanks to all of you paying supporters!) and brave the 18-hour flight and 5-day grind to go on behalf of all of you. We now present the results of that work!

This ICLR was the biggest one by far, with a marked change in the excitement trajectory for the conference. Of the 2260 accepted papers (31% acceptance rate), the subset relevant to our shortlist of AI Engineering Topics contained many, many LLM reasoning and agent related papers, which we will cover in the next episode. We will spend this episode on 14 papers covering other relevant ICLR topics, as below.

As we did last year, we’ll start with the Best Paper Awards. Unlike last year, we now group our paper selections by subjective topic area, and mix in both Outstanding Paper talks as well as editorially selected poster sessions. Where we were able to do a poster session interview, please scroll to the relevant show notes for images of their poster for discussion. To cap things off, Chris Ré’s spot from last year now goes to Sasha Rush for the obligatory last word on the development and applications of State Space Models.

We had a blast at ICLR 2024 and you can bet that we’ll be back in 2025 🇸🇬.

Timestamps and Overview of Papers

[00:02:49] Section A: ImageGen, Compression, Adversarial Attacks
* [00:02:49] VAEs
* [00:32:36] Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models
* [00:37:25] The Hidden Language Of Diffusion Models
* [00:48:40] Ilya on Compression
* [01:01:45] Christian Szegedy on Compression
* [01:07:34] Intriguing properties of neural networks

[01:26:07] Section B: Vision Learning and Weak Supervision
* [01:26:45] Vision Transformers Need Registers
* [01:38:27] Think before you speak: Training Language Models With Pause Tokens
* [01:47:06] Towards a statistical theory of data selection under weak supervision
* [02:00:32] Is ImageNet worth 1 video?

[02:06:32] Section C: Extending Transformers and Attention
* [02:06:49] LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
* [02:15:12] YaRN: Efficient Context Window Extension of Large Language Models
* [02:32:02] Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs
* [02:44:57] ZeRO++: Extremely Efficient Collective Communication for Giant Model Training

[02:54:26] Section D: State Space Models vs Transformers
* [03:31:15] Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors
* [03:37:08] End of Part 1

A: ImageGen, Compression, Adversarial Attacks

* Durk Kingma (OpenAI/Google DeepMind) & Max Welling: Auto-Encoding Variational Bayes (Full ICLR talk)
* Preliminary resources: Understanding VAEs, CodeEmporium, Arxiv Insights
* Inaugural ICLR Test of Time Award!
“Probabilistic modeling is one of the most fundamental ways in which we reason about the world. This paper spearheaded the integration of deep learning with scalable probabilistic inference (amortized mean-field variational inference via a so-called reparameterization trick), giving rise to the Variational Autoencoder (VAE).” (A toy sketch of the reparameterization trick appears after these show notes.)
* Pablo Pernías (Stability) et al: Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models (ICLR oral, poster)
* Hila Chefer et al (Google Research): Hidden Language Of Diffusion Models (poster)
* See also: Google Lumiere, Attend and Excite
* Christian Szegedy (X.ai): Intriguing properties of neural networks (Full ICLR talk)
* Ilya Sutskever: An Observation on Generalization
* on Language Modeling is Compression
* “Stating The Obvious” criticism
* Really good compression amounts to intelligence
* Lexinvariant Language models
* Inaugural Test of Time Award runner-up: “With the rising popularity of deep neural networks in real applications, it is important to understand when and how neural networks might behave in undesirable ways. This paper highlighted the issue that neural networks can be vulnerable to small, almost imperceptible variations to the input. This idea helped spawn the area of adversarial attacks (trying to fool a neural network) as well as adversarial defense (training a neural network to not be fooled).”
* with Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus

B: Vision Learning and Weak Supervision

* Timothée Darcet (Meta) et al: Vision Transformers Need Registers (ICLR oral, Paper)
* ICLR Outstanding Paper Award: “This paper identifies artifacts in feature maps of vision transformer networks, characterized by high-norm tokens in low-informative background areas. The authors provide key hypotheses for why this is happening and provide a simple yet elegant solution to address these artifacts using additional register tokens, enhancing model performance on various tasks. The insights gained from this work can also impact other application areas. The paper is very well-written and provides a great example of conducting research – identifying an issue, understanding why it is happening, and then providing a solution.”
* HN discussion: “According to the paper, the "registers" are additional learnable tokens that are appended to the input sequence of a Vision Transformer model during training. They are added after the patch embedding layer, with a learnable value, similar to the [CLS] token, and then at the end of the Vision Transformer the register tokens are discarded, and only the [CLS] token and patch tokens are used as image representations. The register tokens provide a place for the model to store, process and retrieve global information during the forward pass, without repurposing patch tokens for this role. Adding register tokens removes the artifacts and high-norm "outlier" tokens that otherwise appear in the feature maps of trained Vision Transformer models. Using register tokens leads to smoother feature maps, improved performance on dense prediction tasks, and enables better unsupervised object discovery compared to the same models trained without the additional register tokens. This is a neat result. For just a 2% increase in inference cost, you can significantly improve ViT model performance.
Close to a free lunch.” (A shape-level sketch of register tokens appears after these show notes.)
* Sachin Goyal (Google) et al: Think before you speak: Training Language Models With Pause Tokens (OpenReview)
* We operationalize this idea by performing training and inference on language models with a (learnable) pause token, a sequence of which is appended to the input prefix. We then delay extracting the model's outputs until the last pause token is seen, thereby allowing the model to process extra computation before committing to an answer. We empirically evaluate pause-training on decoder-only models of 1B and 130M parameters with causal pretraining on C4, and on downstream tasks covering reasoning, question-answering, general understanding and fact recall.
* Our main finding is that inference-time delays show gains when the model is both pre-trained and finetuned with delays. For the 1B model, we witness gains on 8 of 9 tasks, most prominently a gain of 18% EM score on the QA task of SQuAD, 8% on CommonSenseQA and 1% accuracy on the reasoning task of GSM8k. Our work raises a range of conceptual and practical future research questions on making delayed next-token prediction a widely applicable new paradigm. (A toy decoding sketch appears after these show notes.)
* Pulkit Tandon (Granica) et al: Towards a statistical theory of data selection under weak supervision (ICLR Oral, Poster, Paper)
* Honorable Mention: “The paper establishes statistical foundations for data subset selection and identifies the shortcomings of popular data selection methods.”
* Shashank Venkataramanan (Inria) et al: Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video (ICLR Oral, paper)
* First, we investigate first-person videos and introduce a "Walking Tours" dataset. These videos are high-resolution, hours-long, captured in a single uninterrupted take, depicting a large number of objects and actions with natural scene transitions. They are unlabeled and uncurated, thus realistic for self-supervision and comparable with human learning.
* Second, we introduce a novel self-supervised image pretraining method tailored for learning from continuous videos. Existing methods typically adapt image-based pretraining approaches to incorporate more frames. Instead, we advocate a "tracking to learn to recognize" approach. Our method, called DoRA, leads to attention maps that DiscOver and tRAck objects over time in an end-to-end manner, using transformer cross-attention. We derive multiple views from the tracks and use them in a classical self-supervised distillation loss. Using our novel approach, a single Walking Tours video remarkably becomes a strong competitor to ImageNet for several image and video downstream tasks.
* Honorable Mention: “The paper proposes a novel path to self-supervised image pre-training, by learning from continuous videos. The paper contributes both new types of data and a method to learn from novel data.”

C: Extending Transformers and Attention

* Yukang Chen (CUHK) et al: LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models (ICLR Oral, Poster)
* We present LongLoRA, an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs), with limited computation cost. LongLoRA extends Llama2 7B from 4k context to 100k, or Llama2 70B to 32k on a single 8x A100 machine.
LongLoRA extends models' context while retaining their original architectures, and is compatible with most existing techniques, like Flash-Attention2.
* Bowen Peng (Nous Research) et al: YaRN: Efficient Context Window Extension of Large Language Models (Poster, Paper)
* Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length.
* Mentioned papers: Kaikoendev on TILs While Training SuperHOT, LongRoPE, Ring Attention, InfiniAttention, Textbooks are all you need and the Synthetic Data problem
* Suyu Ge et al: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs (aka FastGen. ICLR Oral, Poster, Paper)
* “We introduce adaptive KV cache compression, a plug-and-play method that reduces the memory footprint of generative inference for Large Language Models (LLMs). Different from the conventional KV cache that retains key and value vectors for all context tokens, we conduct targeted profiling to discern the intrinsic structure of attention modules. Based on the recognized structure, we then construct the KV cache in an adaptive manner: evicting long-range contexts on attention heads emphasizing local contexts, discarding non-special tokens on attention heads centered on special tokens, and only employing the standard KV cache for attention heads that broadly attend to all tokens. In our experiments across various tasks, FastGen demonstrates substantial reduction on GPU memory consumption with negligible generation quality loss.” (A toy eviction sketch appears after these show notes.)
* 40% memory reduction for Llama 67b
* Honorable Mention: “The paper targets the critical KV cache compression problem with great impact on transformer based LLMs, reducing the memory with a simple idea that can be deployed without resource intensive fine-tuning or re-training. The approach is quite simple and yet is shown to be quite effective.”
* Guanhua Wang (DeepSpeed) et al: ZeRO++: Extremely Efficient Collective Communication for Giant Model Training (paper, poster, blogpost)
* Zero Redundancy Optimizer (ZeRO) has been used to train a wide range of large language models on massive GPU clusters due to its ease of use, efficiency, and good scalability. However, when training on low-bandwidth clusters, or at scale which forces batch size per GPU to be small, ZeRO's effective throughput is limited because of high communication volume from gathering weights in forward pass, backward pass, and averaging gradients. This paper introduces three communication volume reduction techniques, which we collectively refer to as ZeRO++, targeting each of the communication collectives in ZeRO.
* Collectively, ZeRO++ reduces communication volume of ZeRO by 4x, enabling up to 2.16x better throughput at 384 GPU scale.
* Mentioned: FSDP + QLoRA

Poster Session Picks

We ran out of airtime to include these in the podcast, but we recorded interviews with some of these authors and could share audio on request.
* Summarization
* BooookScore: A systematic exploration of book-length summarization in the era of LLMs (ICLR Oral)
* Uncertainty
* Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs
* Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models
* MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs
* Language Model Cascades: Token-Level Uncertainty And Beyond
* Tabular Data
* CABINET: Content Relevance-based Noise Reduction for Table Question Answering
* Squeezing Lemons with Hammers: An Evaluation of AutoML and Tabular Deep Learning for Data-Scarce Classification Applications
* Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent Space
* Making Pre-trained Language Models Great on Tabular Prediction
* How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data
* Watermarking (there were >24 papers on watermarking, both for and against!!)
* Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense
* Provable Robust Watermarking for AI-Generated Text
* Attacking LLM Watermarks by Exploiting Their Strengths
* Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models
* Is Watermarking LLM-Generated Code Robust?
* On the Reliability of Watermarks for Large Language Models
* Watermark Stealing in Large Language Models
* Misc
* Massively Scalable Inverse Reinforcement Learning in Google Maps
* Zipformer: A faster and better encoder for automatic speech recognition
* Conformal Risk Control

D: State Space Models vs Transformers

* Sasha Rush’s State Space Models ICLR invited talk on workshop day
* Ido Amos (IBM) et al: Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors (ICLR Oral)
* Modeling long-range dependencies across sequences is a longstanding goal in machine learning and has led to architectures, such as state space models, that dramatically outperform Transformers on long sequences.
* However, these impressive empirical gains have been by and large demonstrated on benchmarks (e.g. Long Range Arena), where models are randomly initialized and trained to predict a target label from an input sequence. In this work, we show that random initialization leads to gross overestimation of the differences between architectures.
* In stark contrast to prior works, we find vanilla Transformers to match the performance of S4 on Long Range Arena when properly pretrained, and we improve the best reported results of SSMs on the PathX-256 task by 20 absolute points.
* Subsequently, we analyze the utility of previously-proposed structured parameterizations for SSMs and show they become mostly redundant in the presence of data-driven initialization obtained through pretraining.
Our work shows that, when evaluating different architectures on supervised tasks, incorporation of data-driven priors via pretraining is essential for reliable performance estimation, and can be done efficiently.
* Outstanding Paper Award: “This paper dives deep into understanding the ability of recently proposed state-space models and transformer architectures to model long-term sequential dependencies. Surprisingly, the authors find that training transformer models from scratch leads to an under-estimation of their performance and demonstrates dramatic gains can be achieved with a pre-training and fine-tuning setup. The paper is exceptionally well executed and exemplary in its focus on simplicity and systematic insights.”

Get full access to Latent.Space at www.latent.space/subscribe
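A few toy sketches of techniques mentioned in the Latent Space notes above, for readers who want the mechanics as well as the paper titles. First, the reparameterization trick from the VAE Test of Time citation: the latent is sampled as a deterministic function of the encoder outputs plus external noise, so gradients can flow through mu and log_var. This is a minimal NumPy sketch with illustrative shapes; a real VAE would implement it inside an autodiff framework such as PyTorch or JAX.

```python
import numpy as np

def reparameterize(mu, log_var, rng=np.random.default_rng(0)):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps, with eps ~ N(0, I).

    Writing the sample as a deterministic function of (mu, log_var) plus
    external noise is what lets gradients flow back into the encoder.
    """
    sigma = np.exp(0.5 * log_var)         # log-variance -> standard deviation
    eps = rng.standard_normal(mu.shape)   # noise drawn outside the "computation graph"
    return mu + sigma * eps

# Toy usage: a batch of 4 samples with a 2-dimensional latent.
mu = np.zeros((4, 2))
log_var = np.full((4, 2), -1.0)
print(reparameterize(mu, log_var).shape)  # (4, 2)
```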
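Next, the register tokens described in the Vision Transformers Need Registers discussion: learnable extra tokens are concatenated after the patch embedding and discarded at the output. The sketch below is shape-level only; the encoder stand-in, the dimensions, and the random parameters are placeholders rather than the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim, num_patches, num_registers = 64, 196, 4

# Learnable parameters in a real model; random stand-ins here.
cls_token = rng.standard_normal((1, embed_dim))
register_tokens = rng.standard_normal((num_registers, embed_dim))

def encoder(tokens):
    """Stand-in for the transformer blocks: a shape-preserving no-op."""
    return tokens

def forward(patch_embeddings):
    # Concatenate [CLS], patch tokens, and register tokens into one sequence.
    tokens = np.concatenate([cls_token, patch_embeddings, register_tokens], axis=0)
    tokens = encoder(tokens)
    # Discard the register tokens at the output; keep [CLS] + patch tokens.
    cls_out = tokens[0]
    patch_out = tokens[1:1 + num_patches]
    return cls_out, patch_out

patches = rng.standard_normal((num_patches, embed_dim))
cls_out, patch_out = forward(patches)
print(cls_out.shape, patch_out.shape)  # (64,) (196, 64)
```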
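The pause-token idea can be mocked up at the level of a decoding loop: append a run of pause tokens to the prompt and only start keeping outputs afterwards. The PAUSE_ID value and the next_token callable are invented for illustration, and the paper's gains additionally require pretraining and finetuning with pause tokens, which this sketch does not capture.

```python
PAUSE_ID = 50257  # hypothetical id reserved for the learnable <pause> token

def generate_with_pauses(next_token, prompt_ids, num_pauses=10, max_new_tokens=20):
    """Inference-time pause decoding: append <pause> tokens to the prefix and
    only begin extracting outputs once the last pause token has been seen.

    next_token(ids) stands in for a forward pass of a pause-trained model
    that returns the next token id given the sequence so far.
    """
    ids = list(prompt_ids) + [PAUSE_ID] * num_pauses  # extra computation budget
    outputs = []
    for _ in range(max_new_tokens):
        tok = next_token(ids)   # the model attends over prompt + pause tokens
        ids.append(tok)
        outputs.append(tok)
    return outputs

# Dummy model for the sketch: always predicts token 42.
print(generate_with_pauses(lambda ids: 42, prompt_ids=[1, 2, 3], max_new_tokens=3))
```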
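Finally, FastGen-style adaptive KV cache compression, read off the quoted abstract: each attention head gets a profiled policy (keep a local window, keep only special tokens, or keep the full cache) and cached entries are evicted accordingly. The policy names, the toy cache layout, and the window parameter are assumptions for illustration, not the paper's implementation.

```python
def compress_kv(cache, policies, special, window=4):
    """Drop cached positions per head according to its assigned policy.

    cache:    {head: [position, ...]} positions currently held in the KV cache
    policies: {head: "local" | "special" | "full"} profiled per attention head
    special:  set of positions of special tokens (e.g. BOS, separators)
    """
    compressed = {}
    for head, positions in cache.items():
        policy = policies[head]
        if policy == "local":        # head attends mostly locally: keep a recent window
            keep = positions[-window:]
        elif policy == "special":    # head attends mostly to special tokens: keep only those
            keep = [p for p in positions if p in special]
        else:                        # broadly-attending head: keep the full cache
            keep = positions
        compressed[head] = keep
    return compressed

cache = {0: list(range(12)), 1: list(range(12)), 2: list(range(12))}
policies = {0: "local", 1: "special", 2: "full"}
print(compress_kv(cache, policies, special={0, 5}))
# head 0 keeps [8, 9, 10, 11]; head 1 keeps [0, 5]; head 2 keeps all 12 positions
```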
Consumer AI is moving fast, so who's leading the charge? a16z Consumer Partners Olivia Moore and Bryan Kim discuss our GenAI 100 list and what it takes for an AI model to stand out and dominate the market. They discuss how these cutting-edge apps are connecting with their users and debate whether traditional strategies like paid acquisition and network effects are still effective. We're going beyond rankings to explore pivotal benchmarks like D7 retention and introduce metrics that define today's AI market. Note: This episode was recorded prior to OpenAI's Spring update. Catch our latest insights in the previous episode to stay ahead!
In this episode, my guest is Dr. Diego Bohórquez, PhD, professor of medicine and neurobiology at Duke University and a pioneering researcher into how we use our ‘gut sense.’ He describes how your gut communicates to your brain and the rest of your body through hormones and neural connections to shape your thoughts, emotions, and behaviors. He explains how your gut senses a range of features such as temperature, pH, the macro- and micronutrients in our foods, and much more and signals that information to the brain to affect our food preferences, aversions, and cravings. Dr. Bohórquez describes his early life in the Amazon jungle and how exposure to traditional agriculture inspired his unique expertise combining nutrition, gastrointestinal physiology, and neuroscience. We discuss how the gut and brain integrate sensory cues, leading to our intuitive “gut sense” about food, people, and situations. This episode provides a scientific perspective into your gut sense to help you make better food choices and, indeed, to support better decision-making in all of life. For show notes, including referenced articles and additional resources, please visit hubermanlab.com. Thank you to our sponsors AG1: https://drinkag1.com/huberman Joovv: https://joovv.com/huberman LMNT: https://drinklmnt.com/huberman Helix Sleep: https://helixsleep.com/huberman InsideTracker: https://insidetracker.com/huberman Timestamps 00:00:00 Dr. Diego Bohórquez 00:02:37 Sponsors: Joovv, LMNT & Helix Sleep; YouTube, Spotify & Apple Subscribe 00:06:49 Gut-Brain Axis 00:11:35 Gut Sensing, Hormones 00:15:26 Green Fluorescent Protein; Neuropod Cells & Environment Sensing 00:26:57 Brain & Gut Connection, Experimental Tools & Rabies Virus 00:35:28 Sponsor: AG1 00:37:00 Neuropod Cells & Nutrient Sensing 00:43:55 Gastric Bypass Surgery, Cravings & Food Choice 00:51:14 Optogenetics; Sugar Preference & Neuropod Cells 01:00:29 Gut-Brain Disorders, Irritable Bowel Syndrome 01:03:03 Sponsor: InsideTracker 01:04:04 Gut & Behavior; Gastric Bypass, Cravings & Alcohol 01:07:38 GLP-1, Ozempic, Neuropod Cells 01:11:46 Food Preference & Gut-Brain Axis, Protein 01:21:35 Protein & Sugar, Agriculture & ‘Three Sisters’ 01:25:16 Childhood, Military School; Academics, Nutrition & Nervous System 01:36:15 Plant Wisdom, Agriculture, Indigenous People 01:41:48 Evolution of Food Choices; Learning from Plants 01:48:15 Plant-Based Medicines; Amazonia, Guayusa Ritual & Chonta Palm 01:56:58 Yerba Mate, Chocolate, Guayusa 02:00:22 Brain, Gut & Sensory Integration; Variability 02:06:01 Electrical Patterns in Gut & Brain, “Hangry” 02:12:43 Gut Intuition, Food & Bonding; Subconscious & Superstition 02:22:00 Vagus Nerve & Learning, Humming 02:26:46 Digestive System & Memory; Body Sensing 02:32:51 Listening to the Body, Meditation 02:40:12 Zero-Cost Support, Spotify & Apple Reviews, YouTube Feedback, Sponsors, Social Media, Neural Network Newsletter Disclaimer Learn more about your ad choices. Visit megaphone.fm/adchoices
Hop on your underwater moped, because Trevor Noah joins us this week to talk about thread count, a pocket of nothing, and the trappings of American fame and popularity. So come along and learn how knowing people works… on an all-new SmartLess.
What can't Candace Parker do? A two-time NCAA champion, two-time Olympic gold medalist and two-time WNBA champion, Parker knows what it takes to fight for your dreams. In this inspiring talk, she shares what she's learned during a career spent not accepting limits -- and how her daughter taught her the best lesson of all. "Barrier breaking is about not staying in your lane and not being something that the world expects you to be," she says. "It's about not accepting limitations." Hosted on Acast. See acast.com/privacy for more information.
Neil Strauss is a journalist, writer, and an author. Neil was the world's most famous pickup artist who kickstarted much of the modern dating discourse. So looking back 20 years later, what has he come to realise about what really matters in life and how to find love and connection? Expect to learn the trajectory of Neil’s views on relationships over the years, how Neil reflects on his book The Game, why Neil is having a baby with his ex-wife, what went wrong with the world of pickup, why faking status is not such a great idea, how to measure success in a relationship, how to rid yourself of other people’s expectations and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get 5 Free Travel Packs, Free Liquid Vitamin D and more from AG1 at https://drinkag1.com/wisdom (automatically applied at checkout) Sign up for a one-dollar-per-month trial period from Shopify at https://www.shopify.com/modernwisdom (automatically applied at checkout) Get up to 32% discount on the best supplements from Momentous at https://livemomentous.com/modernwisdom (automatically applied at checkout) Get 20% discount on Nomatic’s amazing luggage at https://nomatic.com/modernwisdom (use code MW20) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
Danica Patrick is a former professional race car driver, entrepreneur, and host of the Pretty Intense Podcast. (00:00) The NASCAR Years (12:17) Political Views (16:10) Conspiracy Theories (36:09) Why Do the Media Hate Donald Trump? (40:40) What is Ayahuasca Like? Learn more about your ad choices. Visit megaphone.fm/adchoices
Trillion Dollar Shot, our new series about drugs like Ozempic, will be back next week. Until then, we think you’d enjoy a show from our friends over at Science Vs, a podcast that takes on fads, trends and the opinionated mob to find out what’s fact, what’s not and what’s somewhere in between. This episode examines the fears around the new class of blockbuster weight-loss drugs. This episode does deal with depression and suicidal thoughts. If you are in the U.S. and need help, dial 988. Full list of international hotlines here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Each Sunday, TED shares an episode of another podcast we think you'll love, handpicked for you… by us. Today we're sharing an episode from Good Sport, a show that dives into worlds like F1 racing, table tennis, NBA shooting, and beyond to shed light on the ups and downs of being human. If a sport isn't thinking about how to entertain its fans, it usually doesn't last long. And with so much competing for our attention, what makes someone follow a specific team, or show up to a game? In this episode we look to two exploding fanbases: Formula One Racing and … Banana Ball? Jody speaks with Jessica Smetana and Spencer Hall, the co-hosts of the Formula One podcast “DNF”, about what Netflix has to do with F1’s success. Then Jody talks to Jesse Cole, the owner of The Savannah Bananas, a baseball team that’s selling out games and gaining millions of followers on TikTok – at the same time Major League Baseball continues to bleed fans. Jesse’s approach to cultivating a “fans first, entertainment always” mentality is literally reinventing how we play and think about sports. Transcripts for Good Sport are available at go.ted.com/GStranscripts Hosted on Acast. See acast.com/privacy for more information.
Why do we tell kids that a fairy will give them cash in exchange for their teeth? How should we talk to them about scary things in the world? And is Mike one of the greatest operatic tenors of all time?
Charan Ranganath is a psychologist and neuroscientist at UC Davis, specializing in human memory. He is the author of a new book titled Why We Remember. Please support this podcast by checking out our sponsors: – Riverside: https://creators.riverside.fm/LEX and use code LEX to get 30% off – ZipRecruiter: https://ziprecruiter.com/lex – Notion: https://notion.com/lex – MasterClass: https://masterclass.com/lexpod to get 15% off – Shopify: https://shopify.com/lex to get $1 per month trial – LMNT: https://drinkLMNT.com/lex to get free sample pack Transcript: https://lexfridman.com/charan-ranganath-transcript EPISODE LINKS: Charan’s X: https://x.com/CharanRanganath Charan’s Instagram: https://instagram.com/thememorydoc Charan’s Website: https://charanranganath.com Why We Remember (book): https://amzn.to/3WzUF6x Charan’s Google Scholar: https://scholar.google.com/citations?user=ptWkt1wAAAAJ Dynamic Memory Lab: https://dml.ucdavis.edu/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: – Check out the sponsors above, it’s the best way to support this podcast – Support on Patreon: https://www.patreon.com/lexfridman – Twitter: https://twitter.com/lexfridman – Instagram: https://www.instagram.com/lexfridman – LinkedIn: https://www.linkedin.com/in/lexfridman – Facebook: https://www.facebook.com/lexfridman – Medium: https://medium.com/@lexfridman OUTLINE: Here’s the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) – Introduction (10:18) – Experiencing self vs remembering self (23:59) – Creating memories (33:31) – Why we forget (41:08) – Training memory (51:37) – Memory hacks (1:03:26) – Imagination vs memory (1:12:44) – Memory competitions (1:22:33) – Science of memory (1:37:48) – Discoveries (1:48:52) – Deja vu (1:54:09) – False memories (2:14:14) – False confessions (2:18:00) – Heartbreak (2:25:34) – Nature of time (2:33:15) – Brain–computer interface (BCI) (2:47:19) – AI and memory (2:57:33) – ADHD (3:04:30) – Music (3:14:15) – Human mind
Jeremie Harris is the CEO and Edouard Harris the CTO of Gladstone AI, an organization dedicated to promoting the responsible development and adoption of AI. www.gladstone.ai Learn more about your ad choices. Visit podcastchoices.com/adchoices
What does it take to build a legacy? Hip-hop artist Cordae tells how he went from mixtape-dropping high school kid to Grammy-nominated music star whose "Hi Level" mindset helps him achieve his dreams. Hosted on Acast. See acast.com/privacy for more information.
Mo Gawdat is an entrepreneur, former Chief Business Officer at Google, and an author. We often experience stress without knowing where it's coming from. Although we feel overwhelmed, we struggle to pinpoint the source. So how should we go about assessing our lives and reducing our stress? Expect to learn why the modern world is so stressful for everyone, what Mo means when he says young people are comfortably numb, how to assess stress and where it is probably coming from, the things you’re not aware of which cause your emotional discomfort, the most important habits you should implement if you want to become peaceful and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get a Free Sample Pack of all LMNT Flavours with your first box at https://www.drinklmnt.com/modernwisdom (automatically applied at checkout) Get the Whoop 4.0 for free and get your first month for free at https://join.whoop.com/modernwisdom (discount automatically applied) Get 5 Free Travel Packs, Free Liquid Vitamin D and more from AG1 at https://drinkag1.com/modernwisdom (discount automatically applied) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
On today's episode, we ride through the streets of San Francisco with a long-time junkman, Jon Rolston. Jon has spent the last two decades clearing out houses and offices of their junk. He's found all sorts of items: a lifetime supply of toilet paper, gold rings, $20,000 in cash. Over the years, he's developed a keen eye for what has value and what might sell. He's become a kind of trash savant. As we ride with Jon, he shows us the whole ecosystem of how our reusable trash gets dealt with — from metals (ferrous and non-ferrous) to tires to cardboard. And we see how our junk can sometimes get a second chance at life. If you can understand the junk market like Jon, you can understand dozens of trends in our economy. This episode was hosted by Erika Beras and James Sneed, and produced by James Sneed with help from Emma Peaslee. It was edited by Jess Jiang. Engineering by Josh Newell. It was fact-checked by Sierra Juarez. Alex Goldmark is Planet Money's executive producer. Help support Planet Money and hear our bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney. Learn more about sponsor message choices: podcastchoices.com/adchoices NPR Privacy Policy
(0:00) Bestie intros: Recapping "General AI Hospital" (2:46) Scarlett Johansson vs. OpenAI (14:37) OpenAI's novel off-boarding agreements, ex-employee equity problem, and safety team resignations (25:35) Nvidia crushes earnings again, but it faces a trillion-dollar problem (40:05) Understanding why economic sentiment is so negative among US citizens despite positive data (1:02:36) New study shows plastics in testicles Follow the besties: https://twitter.com/chamath https://twitter.com/Jason https://twitter.com/DavidSacks https://twitter.com/friedberg Follow on X: https://twitter.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@all_in_tok Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://twitter.com/yung_spielburg Intro Video Credit: https://twitter.com/TheZachEffect Referenced in the show: https://x.com/BobbyAllyn/status/1792679435701014908 https://x.com/sama/status/1790075827666796666 https://openai.com/index/how-the-voices-for-chatgpt-were-chosen https://www.washingtonpost.com/technology/2024/05/22/openai-scarlett-johansson-chatgpt-ai-voice https://x.com/SydSteyerhart/status/1792981291266138531 https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees https://x.com/sama/status/1791936857594581428 https://x.com/ilyasut/status/1790517455628198322 https://x.com/janleike/status/1790603862132596961 https://x.com/janleike/status/1791498184671605209 https://openai.com/index/openai-announces-leadership-transition https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-first-quarter-fiscal-2025 https://www.google.com/finance/quote/INTC:NASDAQ https://www.morningstar.com/stocks/nvidia-2023-vs-cisco-1999-will-history-repeat https://www.fool.com/investing/2024/03/06/is-nvidia-doomed-to-be-the-next-cisco-what-investo https://www.elitetrader.com/et/threads/nvidia-and-the-cautionary-tale-of-cisco-systems.379022 https://chamath.substack.com/p/2023-annual-letter https://www.forbes.com/sites/theapothecary/2024/03/23/summers-inflation-reached-18-in-2022-using-the-governments-previous-formula https://www.theguardian.com/us-news/article/2024/may/22/poll-economy-recession-biden https://fred.stlouisfed.org/series/CCLACBW027SBOG https://x.com/KariLake/status/1792986501820850333 https://www.stlouisfed.org/on-the-economy/2024/apr/how-big-mac-index-relates-overall-consumer-inflation https://www.google.com/finance/quote/MCD:NYSE https://www.wsj.com/economy/gdp-and-the-dow-are-up-but-what-about-american-well-being-87f90e6d https://www.consumerreports.org/health/food-contaminants/the-plastic-chemicals-hiding-in-your-food-a7358224781 https://onlinelibrary.wiley.com/doi/10.1111/j.1365-2605.2007.00837.x https://pubmed.ncbi.nlm.nih.gov/12708228 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7559247 https://pubmed.ncbi.nlm.nih.gov/21524797 https://academic.oup.com/toxsci/advance-article/doi/10.1093/toxsci/kfae060/7673133 https://www.youtube.com/watch?v=o6yuYkfNh-k https://www.youtube.com/watch?v=EYQjShJxCtM https://www.youtube.com/watch?v=r_4jrMwvZ2A
Explore the diverse voices and perspectives from podcast creators in the United States. Each episode offers unique insights into the culture, language, and stories from this region.