Showing episodes and shows of Jingwen Liang
Shows
Daily Paper Cast
Voila: Voice-Language Foundation Models for Real-Time Autonomous Interaction and Voice Role-Play
🤗 Upvotes: 56 | cs.AI, cs.CL, cs.SD Authors: Yemin Shi, Yu Shu, Siwei Dong, Guangyi Liu, Jaward Sesay, Jingwen Li, Zhiting Hu Title: Voila: Voice-Language Foundation Models for Real-Time Autonomous Interaction and Voice Role-Play Arxiv: http://arxiv.org/abs/2505.02707v1 Abstract: A voice AI agent that blends seamlessly into daily life would interact with humans in an autonomous, real-time, and emotionally expressive manner. Rather than merely reacting to commands, it would continuously listen, reason, and respond proactively, fostering fluid, dynamic, and emotionally resonant interactions. We introduce Voila, a f...
2025-05-07
23 min
Daily Paper Cast
DeepCritic: Deliberate Critique with Large Language Models
🤗 Upvotes: 27 | cs.CL, cs.AI, cs.LG Authors: Wenkai Yang, Jingwen Chen, Yankai Lin, Ji-Rong Wen Title: DeepCritic: Deliberate Critique with Large Language Models Arxiv: http://arxiv.org/abs/2505.00662v1 Abstract: As Large Language Models (LLMs) are rapidly evolving, providing accurate feedback and scalable oversight on their outputs becomes an urgent and critical problem. Leveraging LLMs as critique models to achieve automated supervision is a promising solution. In this work, we focus on studying and enhancing the math critique ability of LLMs. Current LLM critics provide critiques tha...
2025-05-03
22 min
Daily Paper Cast
SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation
🤗 Upvotes: 27 | cs.RO, cs.AI, cs.CV Authors: Zekun Qi, Wenyao Zhang, Yufei Ding, Runpei Dong, Xinqiang Yu, Jingwen Li, Lingyun Xu, Baoyu Li, Xialin He, Guofan Fan, Jiazhao Zhang, Jiawei He, Jiayuan Gu, Xin Jin, Kaisheng Ma, Zhizheng Zhang, He Wang, Li Yi Title: SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation Arxiv: http://arxiv.org/abs/2502.13143v1 Abstract: Spatial intelligence is a critical component of embodied AI, promoting robots to understand and interact with their environments. While recent advances have enhanced the ability of VLM...
2025-02-20
21 min
Daily Paper Cast
Edify Image: High-Quality Image Generation with Pixel Space Laplacian Diffusion Models
🤗 Paper Upvotes: 21 | cs.CV, cs.LG Authors: NVIDIA: Yuval Atzmon, Maciej Bala, Yogesh Balaji, Tiffany Cai, Yin Cui, Jiaojiao Fan, Yunhao Ge, Siddharth Gururani, Jacob Huffman, Ronald Isaac, Pooya Jannaty, Tero Karras, Grace Lam, J. P. Lewis, Aaron Licata, Yen-Chen Lin, Ming-Yu Liu, Qianli Ma, Arun Mallya, Ashlee Martino-Tarr, Doug Mendez, Seungjun Nah, Chris Pruett, Fitsum Reda, Jiaming Song, Ting-Chun Wang, Fangyin Wei, Xiaohui Zeng, Yu Zeng, Qinsheng Zhang Title: Edify Image: High-Quality Image Generation with Pixel Space Laplacian Diffusion Models Arxiv: http://arxiv.org/abs/2411.07126v1 Abstract:...
2024-11-13
24 min
Daily Paper Cast
GitChameleon: Unmasking the Version-Switching Capabilities of Code Generation Models
🤗 Paper Upvotes: 18 | cs.SE, cs.LG Authors: Nizar Islah, Justine Gehring, Diganta Misra, Eilif Muller, Irina Rish, Terry Yue Zhuo, Massimo Caccia Title: GitChameleon: Unmasking the Version-Switching Capabilities of Code Generation Models Arxiv: http://arxiv.org/abs/2411.05830v1 Abstract: The rapid evolution of software libraries presents a significant challenge for code generation models, which must adapt to frequent version updates while maintaining compatibility with previous versions. Existing code completion benchmarks often overlook this dynamic aspect, and the one that does consider it relies on static code prediction tas...
2024-11-13
24 min
Daily Paper Cast
LLM2CLIP: Powerful Language Model Unlock Richer Visual Representation
🤗 Paper Upvotes: 15 | cs.CV, cs.CL Authors: Weiquan Huang, Aoqi Wu, Yifan Yang, Xufang Luo, Yuqing Yang, Liang Hu, Qi Dai, Xiyang Dai, Dongdong Chen, Chong Luo, Lili Qiu Title: LLM2CLIP: Powerful Language Model Unlock Richer Visual Representation Arxiv: http://arxiv.org/abs/2411.04997v1 Abstract: CLIP is one of the most important multimodal foundational models today. What powers CLIP's capabilities? The rich supervision signals provided by natural language, the carrier of human knowledge, shape a powerful cross-modal representation space. However, with the rapid advancements in large lan...
2024-11-12
25 min
Daily Paper Cast
Balancing Pipeline Parallelism with Vocabulary Parallelism
🤗 Paper Upvotes: 10 | cs.DC Authors: Man Tsung Yeung, Penghui Qi, Min Lin, Xinyi Wan Title: Balancing Pipeline Parallelism with Vocabulary Parallelism Arxiv: http://arxiv.org/abs/2411.05288v1 Abstract: Pipeline parallelism is widely used to scale the training of transformer-based large language models, and various works have been done to improve its throughput and memory footprint. In this paper, we address a frequently overlooked issue: the vocabulary layers can cause imbalanced computation and memory usage across pipeline stages, worsening pipeline bubbles and the memory bottleneck. To tackle this, we par...
2024-11-12
23 min
Daily Paper Cast
StdGEN: Semantic-Decomposed 3D Character Generation from Single Images
🤗 Paper Upvotes: 10 | cs.CV Authors: Yuze He, Yanning Zhou, Wang Zhao, Zhongkai Wu, Kaiwen Xiao, Wei Yang, Yong-Jin Liu, Xiao Han Title: StdGEN: Semantic-Decomposed 3D Character Generation from Single Images Arxiv: http://arxiv.org/abs/2411.05738v1 Abstract: We present StdGEN, an innovative pipeline for generating semantically decomposed high-quality 3D characters from single images, enabling broad applications in virtual reality, gaming, and filmmaking, etc. Unlike previous methods which struggle with limited decomposability, unsatisfactory quality, and long optimization times, StdGEN features decomposability, effectiveness and efficiency; i.e., it generates int...
2024-11-12
21 min
Daily Paper Cast
DELIFT: Data Efficient Language model Instruction Fine Tuning
🤗 Paper Upvotes: 5 | cs.CL Authors: Ishika Agarwal, Krishnateja Killamsetty, Lucian Popa, Marina Danilevsky Title: DELIFT: Data Efficient Language model Instruction Fine Tuning Arxiv: http://arxiv.org/abs/2411.04425v2 Abstract: Fine-tuning large language models (LLMs) is essential for enhancing their performance on specific tasks but is often resource-intensive due to redundant or uninformative data. To address this inefficiency, we introduce DELIFT (Data Efficient Language model Instruction Fine-Tuning), a novel algorithm that systematically optimizes data selection across the three key stages of fine-tuning: (1) instruction tuning, (2) task-specific fine-tuning (e.g., rea...
2024-11-12
21 min
Daily Paper Cast
Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation: An Empirical Study
🤗 Paper Upvotes: 4 | cs.SE, cs.AI, cs.LG Authors: André Storhaug, Jingyue Li Title: Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation: An Empirical Study Arxiv: http://arxiv.org/abs/2411.02462v1 Abstract: The advent of large language models (LLMs) like GitHub Copilot has significantly enhanced programmers' productivity, particularly in code generation. However, these models often struggle with real-world tasks without fine-tuning. As LLMs grow larger and more performant, fine-tuning for specialized tasks becomes increasingly expensive. Parameter-efficient fine-tuning (PEFT) methods, which fine-tune only a subset of mode...
2024-11-12
25 min
Daily Paper Cast
RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models
🤗 Paper Upvotes: 3 | cs.CV, cs.AI Authors: Maya Varma, Jean-Benoit Delbrouck, Zhihong Chen, Akshay Chaudhari, Curtis Langlotz Title: RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models Arxiv: http://arxiv.org/abs/2411.04097v1 Abstract: Fine-tuned vision-language models (VLMs) often capture spurious correlations between image features and textual attributes, resulting in degraded zero-shot performance at test time. Existing approaches for addressing spurious correlations (i) primarily operate at the global image-level rather than intervening directly on fine-grained image features and (ii) are predominantly designed for unimodal settings. In thi...
2024-11-12
22 min
Daily Paper Cast
The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities
🤗 Paper Upvotes: 3 | cs.CL Authors: Zhaofeng Wu, Xinyan Velocity Yu, Dani Yogatama, Jiasen Lu, Yoon Kim Title: The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities Arxiv: http://arxiv.org/abs/2411.04986v1 Abstract: Modern language models can process inputs across diverse languages and modalities. We hypothesize that models acquire this capability through learning a shared representation space across heterogeneous data types (e.g., different languages and modalities), which places semantically similar inputs near one another, even if they are from different modalities/languages. We...
2024-11-12
24 min
Daily Paper Cast
Improving the detection of technical debt in Java source code with an enriched dataset
🤗 Paper Upvotes: 2 | cs.SE Authors: Nam Le Hai, Anh M. T. Bui, Phuong T. Nguyen, Davide Di Ruscio, Rick Kazman Title: Improving the detection of technical debt in Java source code with an enriched dataset Arxiv: http://arxiv.org/abs/2411.05457v1 Abstract: Technical debt (TD) is a term used to describe the additional work and costs that emerge when developers have opted for a quick and easy solution to a problem, rather than a more effective and well-designed, but time-consuming approach. Self-Admitted Technical Debts (SATDs) are a spe...
2024-11-12
26 min
Daily Paper Cast
OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models
🤗 Paper Upvotes: 69 | cs.CL, cs.PL Authors: Siming Huang, Tianhao Cheng, Jason Klein Liu, Jiaran Hao, Liuyihan Song, Yang Xu, J. Yang, J. H. Liu, Chenchen Zhang, Linzheng Chai, Ruifeng Yuan, Zhaoxiang Zhang, Jie Fu, Qian Liu, Ge Zhang, Zili Wang, Yuan Qi, Yinghui Xu, Wei Chu Title: OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models Arxiv: http://arxiv.org/abs/2411.04905v1 Abstract: Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks and agent systems. While open-access cod...
2024-11-09
22 min
Daily Paper Cast
ReCapture: Generative Video Camera Controls for User-Provided Videos using Masked Video Fine-Tuning
🤗 Paper Upvotes: 50 | cs.CV, cs.AI, cs.GR, cs.LG Authors: David Junhao Zhang, Roni Paiss, Shiran Zada, Nikhil Karnad, David E. Jacobs, Yael Pritch, Inbar Mosseri, Mike Zheng Shou, Neal Wadhwa, Nataniel Ruiz Title: ReCapture: Generative Video Camera Controls for User-Provided Videos using Masked Video Fine-Tuning Arxiv: http://arxiv.org/abs/2411.05003v1 Abstract: Recently, breakthroughs in video modeling have allowed for controllable camera trajectories in generated videos. However, these methods cannot be directly applied to user-provided videos that are not generated by a video model. In thi...
2024-11-09
19 min
Daily Paper Cast
BitNet a4.8: 4-bit Activations for 1-bit LLMs
🤗 Paper Upvotes: 41 | cs.CL, cs.LG Authors: Hongyu Wang, Shuming Ma, Furu Wei Title: BitNet a4.8: 4-bit Activations for 1-bit LLMs Arxiv: http://arxiv.org/abs/2411.04965v1 Abstract: Recent research on the 1-bit Large Language Models (LLMs), such as BitNet b1.58, presents a promising direction for reducing the inference cost of LLMs while maintaining their performance. In this work, we introduce BitNet a4.8, enabling 4-bit activations for 1-bit LLMs. BitNet a4.8 employs a hybrid quantization and sparsification strategy to mitigate the quantization errors introduced by the outlier cha...
2024-11-09
25 min
Daily Paper Cast
DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion
🤗 Paper Upvotes: 27 | cs.CV, cs.AI, cs.GR Authors: Wenqiang Sun, Shuo Chen, Fangfu Liu, Zilong Chen, Yueqi Duan, Jun Zhang, Yikai Wang Title: DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion Arxiv: http://arxiv.org/abs/2411.04928v1 Abstract: In this paper, we introduce \textbf{DimensionX}, a framework designed to generate photorealistic 3D and 4D scenes from just a single image with video diffusion. Our approach begins with the insight that both the spatial structure of a 3D scene and the...
2024-11-09
23 min
Daily Paper Cast
Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models
🤗 Paper Upvotes: 25 | cs.CL Authors: Weixin Liang, Lili Yu, Liang Luo, Srinivasan Iyer, Ning Dong, Chunting Zhou, Gargi Ghosh, Mike Lewis, Wen-tau Yih, Luke Zettlemoyer, Xi Victoria Lin Title: Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models Arxiv: http://arxiv.org/abs/2411.04996v1 Abstract: The development of large language models (LLMs) has expanded to multi-modal systems capable of processing text, images, and speech within a unified framework. Training these models demands significantly larger datasets and computational resources compared to text-only LLMs. To address the scaling cha...
2024-11-09
24 min
Daily Paper Cast
TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation
🤗 Paper Upvotes: 20 | cs.CV Authors: Wenhao Wang, Yi Yang Title: TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation Arxiv: http://arxiv.org/abs/2411.04709v1 Abstract: Video generation models are revolutionizing content creation, with image-to-video models drawing increasing attention due to their enhanced controllability, visual consistency, and practical applications. However, despite their popularity, these models rely on user-provided text and image prompts, and there is currently no dedicated dataset for studying these prompts. In this paper, we introduce TIP-I2V, the first large-scale dat...
2024-11-09
24 min
Daily Paper Cast
Thanos: Enhancing Conversational Agents with Skill-of-Mind-Infused Large Language Model
🤗 Paper Upvotes: 15 | cs.CL Authors: Young-Jun Lee, Dokyong Lee, Junyoung Youn, Kyeongjin Oh, Ho-Jin Choi Title: Thanos: Enhancing Conversational Agents with Skill-of-Mind-Infused Large Language Model Arxiv: http://arxiv.org/abs/2411.04496v1 Abstract: To increase social bonding with interlocutors, humans naturally acquire the ability to respond appropriately in a given situation by considering which conversational skill is most suitable for the response - a process we call skill-of-mind. For large language model (LLM)-based conversational agents, planning appropriate conversational skills, as humans do, is challenging due to the com...
2024-11-09
22 min
Daily Paper Cast
Needle Threading: Can LLMs Follow Threads through Near-Million-Scale Haystacks?
🤗 Paper Upvotes: 14 | cs.CL Authors: Jonathan Roberts, Kai Han, Samuel Albanie Title: Needle Threading: Can LLMs Follow Threads through Near-Million-Scale Haystacks? Arxiv: http://arxiv.org/abs/2411.05000v1 Abstract: As the context limits of Large Language Models (LLMs) increase, the range of possible applications and downstream functions broadens. In many real-world tasks, decisions depend on details scattered across collections of often disparate documents containing mostly irrelevant information. Long-context LLMs appear well-suited to this form of complex information retrieval and reasoning, which has traditionally proven costly and time-consuming. However, alt...
2024-11-09
22 min
Daily Paper Cast
DynaMem: Online Dynamic Spatio-Semantic Memory for Open World Mobile Manipulation
🤗 Paper Upvotes: 12 | cs.RO, cs.LG Authors: Peiqi Liu, Zhanqiu Guo, Mohit Warke, Soumith Chintala, Chris Paxton, Nur Muhammad Mahi Shafiullah, Lerrel Pinto Title: DynaMem: Online Dynamic Spatio-Semantic Memory for Open World Mobile Manipulation Arxiv: http://arxiv.org/abs/2411.04999v1 Abstract: Significant progress has been made in open-vocabulary mobile manipulation, where the goal is for a robot to perform tasks in any environment given a natural language description. However, most current systems assume a static environment, which limits the system's applicability in real-world scenarios where environments frequently cha...
2024-11-09
21 min
Daily Paper Cast
VideoGLaMM: A Large Multimodal Model for Pixel-Level Visual Grounding in Videos
🤗 Paper Upvotes: 12 | cs.CV Authors: Shehan Munasinghe, Hanan Gani, Wenqi Zhu, Jiale Cao, Eric Xing, Fahad Shahbaz Khan, Salman Khan Title: VideoGLaMM: A Large Multimodal Model for Pixel-Level Visual Grounding in Videos Arxiv: http://arxiv.org/abs/2411.04923v1 Abstract: Fine-grained alignment between videos and text is challenging due to complex spatial and temporal dynamics in videos. Existing video-based Large Multimodal Models (LMMs) handle basic conversations but struggle with precise pixel-level grounding in videos. To address this, we introduce VideoGLaMM, an LMM designed for fine-grained pixel-level grounding in vid...
2024-11-09
27 min
Daily Paper Cast
Both Text and Images Leaked! A Systematic Analysis of Multimodal LLM Data Contamination
🤗 Paper Upvotes: 33 | cs.CV, cs.AI, cs.CL, cs.MM Authors: Dingjie Song, Sicheng Lai, Shunian Chen, Lichao Sun, Benyou Wang Title: Both Text and Images Leaked! A Systematic Analysis of Multimodal LLM Data Contamination Arxiv: http://arxiv.org/abs/2411.03823v1 Abstract: The rapid progression of multimodal large language models (MLLMs) has demonstrated superior performance on various multimodal benchmarks. However, the issue of data contamination during training creates challenges in performance evaluation and comparison. While numerous methods exist for detecting dataset contamination in large language models (LLMs), the...
2024-11-08
23 min
Daily Paper Cast
Large Language Models Orchestrating Structured Reasoning Achieve Kaggle Grandmaster Level
🤗 Paper Upvotes: 26 | cs.LG, cs.AI Authors: Antoine Grosnit, Alexandre Maraval, James Doran, Giuseppe Paolo, Albert Thomas, Refinath Shahul Hameed Nabeezath Beevi, Jonas Gonzalez, Khyati Khandelwal, Ignacio Iacobacci, Abdelhakim Benechehab, Hamza Cherkaoui, Youssef Attia El-Hili, Kun Shao, Jianye Hao, Jun Yao, Balazs Kegl, Haitham Bou-Ammar, Jun Wang Title: Large Language Models Orchestrating Structured Reasoning Achieve Kaggle Grandmaster Level Arxiv: http://arxiv.org/abs/2411.03562v1 Abstract: We introduce Agent K v1.0, an end-to-end autonomous data science agent designed to automate, optimise, and generalise across diverse data science tasks. Ful...
2024-11-08
20 min
Daily Paper Cast
Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models
🤗 Paper Upvotes: 10 | cs.CL, cs.AI, cs.LG Authors: Zhijian Zhuo, Ya Wang, Yutao Zeng, Xiaoqing Li, Xun Zhou, Jinwen Ma Title: Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models Arxiv: http://arxiv.org/abs/2411.03884v1 Abstract: Transformers have found extensive applications across various domains due to their powerful fitting capabilities. This success can be partially attributed to their inherent nonlinearity. Thus, in addition to the ReLU function employed in the original transformer architecture, researchers have explored alternative modules such as GeLU and SwishGLU to enh...
2024-11-08
23 min
Daily Paper Cast
Self-Consistency Preference Optimization
🤗 Paper Upvotes: 5 | cs.CL, cs.AI, cs.LG Authors: Archiki Prasad, Weizhe Yuan, Richard Yuanzhe Pang, Jing Xu, Maryam Fazel-Zarandi, Mohit Bansal, Sainbayar Sukhbaatar, Jason Weston, Jane Yu Title: Self-Consistency Preference Optimization Arxiv: http://arxiv.org/abs/2411.04109v1 Abstract: Self-alignment, whereby models learn to improve themselves without human annotation, is a rapidly growing research area. However, existing techniques often fail to improve complex reasoning tasks due to the difficulty of assigning correct rewards. An orthogonal approach that is known to improve correctness is self-consistency, a method applied at...
2024-11-08
20 min
Daily Paper Cast
From Medprompt to o1: Exploration of Run-Time Strategies for Medical Challenge Problems and Beyond
🤗 Paper Upvotes: 3 | cs.CL Authors: Harsha Nori, Naoto Usuyama, Nicholas King, Scott Mayer McKinney, Xavier Fernandes, Sheng Zhang, Eric Horvitz Title: From Medprompt to o1: Exploration of Run-Time Strategies for Medical Challenge Problems and Beyond Arxiv: http://arxiv.org/abs/2411.03590v1 Abstract: Run-time steering strategies like Medprompt are valuable for guiding large language models (LLMs) to top performance on challenging tasks. Medprompt demonstrates that a general LLM can be focused to deliver state-of-the-art performance on specialized domains like medicine by using a prompt to elicit a run-time str...
2024-11-08
17 min
Daily Paper Cast
HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems
🤗 Paper Upvotes: 34 | cs.IR Authors: Jiejun Tan, Zhicheng Dou, Wen Wang, Mang Wang, Weipeng Chen, Ji-Rong Wen Title: HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems Arxiv: http://arxiv.org/abs/2411.02959v1 Abstract: Retrieval-Augmented Generation (RAG) has been shown to improve knowledge capabilities and alleviate the hallucination problem of LLMs. The Web is a major source of external knowledge used in RAG systems, and many commercial systems such as ChatGPT and Perplexity have used Web search engines as their major retrieval sys...
2024-11-07
21 min
Daily Paper Cast
LLaMo: Large Language Model-based Molecular Graph Assistant
🤗 Paper Upvotes: 13 | cs.LG, cs.AI, q-bio.MN Authors: Jinyoung Park, Minseong Bae, Dohwan Ko, Hyunwoo J. Kim Title: LLaMo: Large Language Model-based Molecular Graph Assistant Arxiv: http://arxiv.org/abs/2411.00871v1 Abstract: Large Language Models (LLMs) have demonstrated remarkable generalization and instruction-following capabilities with instruction tuning. The advancements in LLMs and instruction tuning have led to the development of Large Vision-Language Models (LVLMs). However, the competency of the LLMs and instruction tuning have been less explored in the molecular domain. Thus, we propose LLaMo: Large Language Mod...
2024-11-07
24 min
Daily Paper Cast
DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution
🤗 Paper Upvotes: 10 | cs.RO, cs.AI, cs.LG Authors: Yang Yue, Yulin Wang, Bingyi Kang, Yizeng Han, Shenzhi Wang, Shiji Song, Jiashi Feng, Gao Huang Title: DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution Arxiv: http://arxiv.org/abs/2411.02359v1 Abstract: MLLMs have demonstrated remarkable comprehension and reasoning capabilities with complex language and visual data. These advances have spurred the vision of establishing a generalist robotic MLLM proficient in understanding complex human instructions and accomplishing various embodied tasks. However, developing MLLMs for real-world rob...
2024-11-07
19 min
Daily Paper Cast
Controlling Language and Diffusion Models by Transporting Activations
🤗 Paper Upvotes: 8 | cs.LG, cs.AI, cs.CL, cs.CV, 68T07, 49Q22, I.2.6; I.2.7; I.4.8 Authors: Pau Rodriguez, Arno Blaas, Michal Klein, Luca Zappella, Nicholas Apostoloff, Marco Cuturi, Xavier Suau Title: Controlling Language and Diffusion Models by Transporting Activations Arxiv: http://arxiv.org/abs/2410.23054v1 Abstract: The increasing capabilities of large generative models and their ever more widespread deployment have raised concerns about their reliability, safety, and potential misuse. To address these issues, recent works have proposed to control model generation by steering model activations in order to...
2024-11-07
22 min
Daily Paper Cast
Sample-Efficient Alignment for LLMs
🤗 Paper Upvotes: 8 | cs.LG, cs.AI, cs.CL Authors: Zichen Liu, Changyu Chen, Chao Du, Wee Sun Lee, Min Lin Title: Sample-Efficient Alignment for LLMs Arxiv: http://arxiv.org/abs/2411.01493v1 Abstract: We study methods for efficiently aligning large language models (LLMs) with human preferences given budgeted online feedback. We first formulate the LLM alignment problem in the frame of contextual dueling bandits. This formulation, subsuming recent paradigms such as online RLHF and online DPO, inherently quests for sample-efficient algorithms that incorporate online active exploration. Leveraging insights fro...
2024-11-07
21 min
Daily Paper Cast
DreamPolish: Domain Score Distillation With Progressive Geometry Generation
🤗 Paper Upvotes: 6 | cs.CV, cs.AI Authors: Yean Cheng, Ziqi Cai, Ming Ding, Wendi Zheng, Shiyu Huang, Yuxiao Dong, Jie Tang, Boxin Shi Title: DreamPolish: Domain Score Distillation With Progressive Geometry Generation Arxiv: http://arxiv.org/abs/2411.01602v1 Abstract: We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures. In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process. Instead of relying solely on a view-conditioned diffusion prior in the novel sam...
2024-11-07
18 min
Daily Paper Cast
Adaptive Length Image Tokenization via Recurrent Allocation
🤗 Paper Upvotes: 4 | cs.CV, cs.AI, cs.LG, cs.RO Authors: Shivam Duggal, Phillip Isola, Antonio Torralba, William T. Freeman Title: Adaptive Length Image Tokenization via Recurrent Allocation Arxiv: http://arxiv.org/abs/2411.02393v1 Abstract: Current vision systems typically assign fixed-length representations to images, regardless of the information content. This contrasts with human intelligence - and even large language models - which allocate varying representational capacities based on entropy, context and familiarity. Inspired by this, we propose an approach to learn variable-length token representations for 2D images. Our...
2024-11-07
21 min
Daily Paper Cast
GarVerseLOD: High-Fidelity 3D Garment Reconstruction from a Single In-the-Wild Image using a Dataset with Levels of Details
🤗 Paper Upvotes: 3 | cs.CV, cs.GR Authors: Zhongjin Luo, Haolin Liu, Chenghong Li, Wanghao Du, Zirong Jin, Wanhu Sun, Yinyu Nie, Weikai Chen, Xiaoguang Han Title: GarVerseLOD: High-Fidelity 3D Garment Reconstruction from a Single In-the-Wild Image using a Dataset with Levels of Details Arxiv: http://arxiv.org/abs/2411.03047v1 Abstract: Neural implicit functions have brought impressive advances to the state-of-the-art of clothed human digitization from multiple or even single images. However, despite the progress, current arts still have difficulty generalizing to unseen images with complex cloth deformation and...
2024-11-07
19 min
Daily Paper Cast
Zebra-Llama: A Context-Aware Large Language Model for Democratizing Rare Disease Knowledge
🤗 Paper Upvotes: 3 | cs.CL Authors: Karthik Soman, Andrew Langdon, Catalina Villouta, Chinmay Agrawal, Lashaw Salta, Braian Peetoom, Gianmarco Bellucci, Orion J Buske Title: Zebra-Llama: A Context-Aware Large Language Model for Democratizing Rare Disease Knowledge Arxiv: http://arxiv.org/abs/2411.02657v1 Abstract: Rare diseases present unique challenges in healthcare, often suffering from delayed diagnosis and fragmented information landscapes. The scarcity of reliable knowledge in these conditions poses a distinct challenge for Large Language Models (LLMs) in supporting clinical management and delivering precise patient information underscoring the need for foc...
2024-11-07
25 min
Daily Paper Cast
Inference Optimal VLMs Need Only One Visual Token but Larger Models
🤗 Paper Upvotes: 2 | cs.CV, cs.AI, cs.LG Authors: Kevin Y. Li, Sachin Goyal, Joao D. Semedo, J. Zico Kolter Title: Inference Optimal VLMs Need Only One Visual Token but Larger Models Arxiv: http://arxiv.org/abs/2411.03312v1 Abstract: Vision Language Models (VLMs) have demonstrated strong capabilities across various visual understanding and reasoning tasks. However, their real-world deployment is often constrained by high latency during inference due to substantial compute required to process the large number of input tokens (predominantly from the image) by the LLM. To red...
2024-11-07
22 min
Daily Paper Cast
AndroidLab: Training and Systematic Benchmarking of Android Autonomous Agents
🤗 Paper Upvotes: 40 | cs.AI Authors: Yifan Xu, Xiao Liu, Xueqiao Sun, Siyi Cheng, Hao Yu, Hanyu Lai, Shudan Zhang, Dan Zhang, Jie Tang, Yuxiao Dong Title: AndroidLab: Training and Systematic Benchmarking of Android Autonomous Agents Arxiv: http://arxiv.org/abs/2410.24024v2 Abstract: Autonomous agents have become increasingly important for interacting with the real world. Android agents, in particular, have recently become a frequently mentioned interaction method. However, existing studies for training and evaluating Android agents lack systematic research on both open-source and closed-source models. In this work, we pro...
2024-11-06
22 min
Daily Paper Cast
"Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization
🤗 Paper Upvotes: 28 | cs.LG, cs.AI Authors: Eldar Kurtic, Alexandre Marques, Shubhra Pandit, Mark Kurtz, Dan Alistarh Title: "Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization Arxiv: http://arxiv.org/abs/2411.02355v1 Abstract: Despite the popularity of large language model (LLM) quantization for inference acceleration, significant uncertainty remains regarding the accuracy-performance trade-offs associated with various quantization formats. We present a comprehensive empirical study of quantized accuracy, evaluating popular quantization formats (FP8, INT8, INT4) across academic benchmarks and real-world tasks, on the entire Llama-3.1 mod...
2024-11-06
24 min
Daily Paper Cast
WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning
🤗 Paper Upvotes: 25 | cs.CL Authors: Zehan Qi, Xiao Liu, Iat Long Iong, Hanyu Lai, Xueqiao Sun, Xinyue Yang, Jiadai Sun, Yu Yang, Shuntian Yao, Tianjie Zhang, Wei Xu, Jie Tang, Yuxiao Dong Title: WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning Arxiv: http://arxiv.org/abs/2411.02337v1 Abstract: Large language models (LLMs) have shown remarkable potential as autonomous agents, particularly in web-based tasks. However, existing LLM web agents heavily rely on expensive proprietary LLM APIs, while open LLMs lack the necessary decision-making capabilities. This pap...
2024-11-06
22 min
Daily Paper Cast
MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D
🤗 Paper Upvotes: 20 | cs.CV Authors: Wei Cheng, Juncheng Mu, Xianfang Zeng, Xin Chen, Anqi Pang, Chi Zhang, Zhibin Wang, Bin Fu, Gang Yu, Ziwei Liu, Liang Pan Title: MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D Arxiv: http://arxiv.org/abs/2411.02336v1 Abstract: Texturing is a crucial step in the 3D asset production workflow, which enhances the visual appeal and diversity of 3D assets. Despite recent advancements in Text-to-Texture (T2T) generation, existing methods often yield subpar results, primarily due to local discontinuities, inconsistencies across multiple views, and...
2024-11-06
21 min
Daily Paper Cast
Training-free Regional Prompting for Diffusion Transformers
🤗 Paper Upvotes: 19 | cs.CV Authors: Anthony Chen, Jianjin Xu, Wenzhao Zheng, Gaole Dai, Yida Wang, Renrui Zhang, Haofan Wang, Shanghang Zhang Title: Training-free Regional Prompting for Diffusion Transformers Arxiv: http://arxiv.org/abs/2411.02395v1 Abstract: Diffusion models have demonstrated excellent capabilities in text-to-image generation. Their semantic understanding (i.e., prompt following) ability has also been greatly improved with large language models (e.g., T5, Llama). However, existing models cannot perfectly handle long and complex text prompts, especially when the text prompts contain various objects with numerous attributes and...
2024-11-06
17 min
Daily Paper Cast
How Far is Video Generation from World Model: A Physical Law Perspective
🤗 Paper Upvotes: 19 | cs.CV, cs.AI Authors: Bingyi Kang, Yang Yue, Rui Lu, Zhijie Lin, Yang Zhao, Kaixin Wang, Gao Huang, Jiashi Feng Title: How Far is Video Generation from World Model: A Physical Law Perspective Arxiv: http://arxiv.org/abs/2411.02385v1 Abstract: OpenAI's Sora highlights the potential of video generation for developing world models that adhere to fundamental physical laws. However, the ability of video generation models to discover such laws purely from visual data without human priors can be questioned. A world model learning the tru...
2024-11-06
23 min
Daily Paper Cast
Survey of Cultural Awareness in Language Models: Text and Beyond
🤗 Paper Upvotes: 19 | cs.CL, cs.CV Authors: Siddhesh Pawar, Junyeong Park, Jiho Jin, Arnav Arora, Junho Myung, Srishti Yadav, Faiz Ghifari Haznitrama, Inhwa Song, Alice Oh, Isabelle Augenstein Title: Survey of Cultural Awareness in Language Models: Text and Beyond Arxiv: http://arxiv.org/abs/2411.00860v1 Abstract: Large-scale deployment of large language models (LLMs) in various applications, such as chatbots and virtual assistants, requires LLMs to be culturally sensitive to the user to ensure inclusivity. Culture has been widely studied in psychology and anthropology, and there has been a r...
2024-11-06
23 min
Daily Paper Cast
Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent
🤗 Paper Upvotes: 16 | cs.CL, cs.AI Authors: Xingwu Sun, Yanfeng Chen, Yiqing Huang, Ruobing Xie, Jiaqi Zhu, Kai Zhang, Shuaipeng Li, Zhen Yang, Jonny Han, Xiaobo Shu, Jiahao Bu, Zhongzhi Chen, Xuemeng Huang, Fengzong Lian, Saiyong Yang, Jianfeng Yan, Yuyuan Zeng, Xiaoqin Ren, Chao Yu, Lulu Wu, Yue Mao, Jun Xia, Tao Yang, Suncong Zheng, Kan Wu, Dian Jiao, Jinbao Xue, Xipeng Zhang, Decheng Wu, Kai Liu, Dengpeng Wu, Guanghui Xu, Shaohua Chen, Shuang Chen, Xiao Feng, Yigeng Hong, Junqiang Zheng, Chengcheng Xu, Zongwei Li, Xiong Kuang, Jianglu Hu, Yiqi Chen, Yuchi Deng, Guiyang Li, Ao Liu...
2024-11-06
18 min
Daily Paper Cast
GenXD: Generating Any 3D and 4D Scenes
🤗 Paper Upvotes: 13 | cs.CV, cs.AI Authors: Yuyang Zhao, Chung-Ching Lin, Kevin Lin, Zhiwen Yan, Linjie Li, Zhengyuan Yang, Jianfeng Wang, Gim Hee Lee, Lijuan Wang Title: GenXD: Generating Any 3D and 4D Scenes Arxiv: http://arxiv.org/abs/2411.02319v2 Abstract: Recent developments in 2D visual generation have been remarkably successful. However, 3D and 4D generation remain challenging in real-world applications due to the lack of large-scale 4D data and effective model design. In this paper, we propose to jointly investigate general 3D and 4D generation by lev...
2024-11-06
22 min
Daily Paper Cast
DynaMath: A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models
🤗 Paper Upvotes: 13 | cs.CV, cs.AI, cs.CL Authors: Chengke Zou, Xingang Guo, Rui Yang, Junyu Zhang, Bin Hu, Huan Zhang Title: DynaMath: A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models Arxiv: http://arxiv.org/abs/2411.00836v1 Abstract: The rapid advancements in Vision-Language Models (VLMs) have shown great potential in tackling mathematical reasoning tasks that involve visual context. Unlike humans who can reliably apply solution steps to similar problems with minor modifications, we found that SOTA VLMs like GPT-4o can consistently fai...
2024-11-06
19 min
Daily Paper Cast
OS-ATLAS: A Foundation Action Model for Generalist GUI Agents
🤗 Paper Upvotes: 32 | cs.CL, cs.CV, cs.HC Authors: Zhiyong Wu, Zhenyu Wu, Fangzhi Xu, Yian Wang, Qiushi Sun, Chengyou Jia, Kanzhi Cheng, Zichen Ding, Liheng Chen, Paul Pu Liang, Yu Qiao Title: OS-ATLAS: A Foundation Action Model for Generalist GUI Agents Arxiv: http://arxiv.org/abs/2410.23218v1 Abstract: Existing efforts in building GUI agents heavily rely on the availability of robust commercial Vision-Language Models (VLMs) such as GPT-4o and GeminiProVision. Practitioners are often reluctant to use open-source VLMs due to their significant performance lag compared to...
2024-11-05
20 min
Daily Paper Cast
Personalization of Large Language Models: A Survey
🤗 Paper Upvotes: 14 | cs.CL Authors: Zhehao Zhang, Ryan A. Rossi, Branislav Kveton, Yijia Shao, Diyi Yang, Hamed Zamani, Franck Dernoncourt, Joe Barrow, Tong Yu, Sungchul Kim, Ruiyi Zhang, Jiuxiang Gu, Tyler Derr, Hongjie Chen, Junda Wu, Xiang Chen, Zichao Wang, Subrata Mitra, Nedim Lipka, Nesreen Ahmed, Yu Wang Title: Personalization of Large Language Models: A Survey Arxiv: http://arxiv.org/abs/2411.00027v1 Abstract: Personalization of Large Language Models (LLMs) has recently become increasingly important with a wide range of applications. Despite the importance and recent progress, most exi...
2024-11-05
25 min
Daily Paper Cast
Constant Acceleration Flow
🤗 Paper Upvotes: 14 | cs.LG, cs.AI, cs.CV Authors: Dogyun Park, Sojin Lee, Sihyeon Kim, Taehoon Lee, Youngjoon Hong, Hyunwoo J. Kim Title: Constant Acceleration Flow Arxiv: http://arxiv.org/abs/2411.00322v1 Abstract: Rectified flow and reflow procedures have significantly advanced fast generation by progressively straightening ordinary differential equation (ODE) flows. They operate under the assumption that image and noise pairs, known as couplings, can be approximated by straight trajectories with constant velocity. However, we observe that modeling with constant velocity and using reflow procedures have limitations in...
2024-11-05
21 min
Daily Paper Cast
TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models
🤗 Paper Upvotes: 13 | cs.CV, cs.AI, cs.CL Authors: Ziyao Shangguan, Chuhan Li, Yuxuan Ding, Yanan Zheng, Yilun Zhao, Tesca Fitzgerald, Arman Cohan Title: TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models Arxiv: http://arxiv.org/abs/2410.23266v1 Abstract: Existing benchmarks often highlight the remarkable performance achieved by state-of-the-art Multimodal Foundation Models (MFMs) in leveraging temporal context for video understanding. However, how well do the models truly perform visual temporal reasoning? Our study of existing benchmarks shows that this capability of MFMs is likely overestimated as...
2024-11-05
24 min
Daily Paper Cast
Randomized Autoregressive Visual Generation
🤗 Paper Upvotes: 10 | cs.CV Authors: Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, Liang-Chieh Chen Title: Randomized Autoregressive Visual Generation Arxiv: http://arxiv.org/abs/2411.00776v1 Abstract: This paper presents Randomized AutoRegressive modeling (RAR) for visual generation, which sets a new state-of-the-art performance on the image generation task while maintaining full compatibility with language modeling frameworks. The proposed RAR is simple: during a standard autoregressive training process with a next-token prediction objective, the input sequence-typically ordered in raster form-is randomly permuted into different factorization orders with a pro...
2024-11-05
20 min
Daily Paper Cast
Survey of User Interface Design and Interaction Techniques in Generative AI Applications
🤗 Paper Upvotes: 8 | cs.HC, cs.AI, cs.CL, cs.LG Authors: Reuben Luera, Ryan A. Rossi, Alexa Siu, Franck Dernoncourt, Tong Yu, Sungchul Kim, Ruiyi Zhang, Xiang Chen, Hanieh Salehy, Jian Zhao, Samyadeep Basu, Puneet Mathur, Nedim Lipka Title: Survey of User Interface Design and Interaction Techniques in Generative AI Applications Arxiv: http://arxiv.org/abs/2410.22370v1 Abstract: The applications of generative AI have become extremely impressive, and the interplay between users and AI is even more so. Current human-AI interaction literature has taken a broad look at...
2024-11-05
23 min
Daily Paper Cast
Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation
🤗 Paper Upvotes: 8 | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 Authors: Bohan Lyu, Yadi Cao, Duncan Watson-Parris, Leon Bergen, Taylor Berg-Kirkpatrick, Rose Yu Title: Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation Arxiv: http://arxiv.org/abs/2411.00412v1 Abstract: Large Language Models (LLMs) demonstrate promising capabilities in solving simple scientific problems but often produce hallucinations for complex ones. While integrating LLMs with tools can increase reliability, this approach typically results in over-reliance on tools, diminishing the model's ability to solve simple problems thr...
2024-11-05
20 min
Daily Paper Cast
In-Context LoRA for Diffusion Transformers
🤗 Paper Upvotes: 7 | cs.CV, cs.GR Authors: Lianghua Huang, Wei Wang, Zhi-Fan Wu, Yupeng Shi, Huanzhang Dou, Chen Liang, Yutong Feng, Yu Liu, Jingren Zhou Title: In-Context LoRA for Diffusion Transformers Arxiv: http://arxiv.org/abs/2410.23775v2 Abstract: Recent research arXiv:2410.15027 has explored the use of diffusion transformers (DiTs) for task-agnostic image generation by simply concatenating attention tokens across images. However, despite substantial computational resources, the fidelity of the generated images remains suboptimal. In this study, we reevaluate and streamline this framework by hypothesizing that text-to-image DiTs inh...
2024-11-05
20 min
Daily Paper Cast
Physics in Next-token Prediction
🤗 Paper Upvotes: 7 | cs.LG, cs.AI Authors: Hongjun An, Yiliang Song, Xuelong Li Title: Physics in Next-token Prediction Arxiv: http://arxiv.org/abs/2411.00660v1 Abstract: We discovered the underlying physics in Next-token Prediction (NTP). We identified the law of information conservation within NTP and proposed the First Law of Information Capacity (IC-1), demonstrating that the essence of intelligence emergence in auto-regressive models is fundamentally a process of information transfer. We also introduced Landauer's Principle into NTP, formulating the Second Law of Information Capacity (IC-2), which establishes the rel...
2024-11-05
18 min
Daily Paper Cast
CityGaussianV2: Efficient and Geometrically Accurate Reconstruction for Large-Scale Scenes
🤗 Paper Upvotes: 5 | cs.CV Authors: Yang Liu, Chuanchen Luo, Zhongkai Mao, Junran Peng, Zhaoxiang Zhang Title: CityGaussianV2: Efficient and Geometrically Accurate Reconstruction for Large-Scale Scenes Arxiv: http://arxiv.org/abs/2411.00771v1 Abstract: Recently, 3D Gaussian Splatting (3DGS) has revolutionized radiance field reconstruction, manifesting efficient and high-fidelity novel view synthesis. However, accurately representing surfaces, especially in large and complex scenarios, remains a significant challenge due to the unstructured nature of 3DGS. In this paper, we present CityGaussianV2, a novel approach for large-scale scene reconstruction that addresses critical challenges rel...
2024-11-05
20 min
Daily Paper Cast (Test)
CLEAR: Character Unlearning in Textual and Visual Modalities
🤗 Paper Upvotes: 192 | cs.CV, cs.CL Authors: Alexey Dontsov, Dmitrii Korzh, Alexey Zhavoronkin, Boris Mikheev, Denis Bobkov, Aibek Alanov, Oleg Y. Rogov, Ivan Oseledets, Elena Tutubalina Title: CLEAR: Character Unlearning in Textual and Visual Modalities Arxiv: http://arxiv.org/abs/2410.18057v1 Abstract: Machine Unlearning (MU) is critical for enhancing privacy and security in deep learning models, particularly in large multimodal language models (MLLMs), by removing specific private or hazardous information. While MU has made significant progress in textual and visual modalities, multimodal unlearning (MMU) remains significantly underexplored, par...
2024-11-04
03 min
Daily Paper Cast (Test)
AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions
🤗 Paper Upvotes: 31 | cs.AI, cs.CL Authors: Ziming Li, Qianbo Zang, David Ma, Jiawei Guo, Tuney Zheng, Minghao Liu, Xinyao Niu, Yue Wang, Jian Yang, Jiaheng Liu, Wanjun Zhong, Wangchunshu Zhou, Wenhao Huang, Ge Zhang Title: AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions Arxiv: http://arxiv.org/abs/2410.20424v2 Abstract: Data science tasks involving tabular data present complex challenges that require sophisticated problem-solving approaches. We propose AutoKaggle, a powerful and user-centric framework that assists data scientists in completing daily data pipelines through a collaborative multi-agent sys...
2024-11-04
04 min
Daily Paper Cast (Test)
CORAL: Benchmarking Multi-turn Conversational Retrieval-Augmented Generation
🤗 Paper Upvotes: 50 | Categories: cs.IR, cs.CL Title: CORAL: Benchmarking Multi-turn Conversational Retrieval-Augmented Generation Authors: Yiruo Cheng, Kelong Mao, Ziliang Zhao, Guanting Dong, Hongjin Qian, Yongkang Wu, Tetsuya Sakai, Ji-Rong Wen, Zhicheng Dou Arxiv: http://arxiv.org/abs/2410.23090v1 Abstract: Retrieval-Augmented Generation (RAG) has become a powerful paradigm for enhancing large language models (LLMs) through external knowledge retrieval. Despite its widespread attention, existing academic research predominantly focuses on single-turn RAG, leaving a significant gap in addressing the complexities of multi-turn conversations found in real-world applications. To bridge thi...
2024-11-04
03 min
Daily Paper Cast
Unpacking SDXL Turbo: Interpreting Text-to-Image Models with Sparse Autoencoders
🤗 Daily Paper Upvotes: 57 Authors: Viacheslav Surkov, Chris Wendler, Mikhail Terekhov, Justin Deschenaux, Robert West, Caglar Gulcehre Categories: cs.LG, cs.AI, cs.CV Arxiv: http://arxiv.org/abs/2410.22366v1 Title: Unpacking SDXL Turbo: Interpreting Text-to-Image Models with Sparse Autoencoders Abstract: Sparse autoencoders (SAEs) have become a core ingredient in the reverse engineering of large-language models (LLMs). For LLMs, they have been shown to decompose intermediate representations that often are not interpretable directly into sparse sums of interpretable features, facilitating better control and subsequent analysis. However, similar analyses and approaches have been lac...
2024-11-03
23 min
Daily Paper Cast
What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective
🤗 Daily Paper Upvotes: 45 Authors: Ming Li, Yanhong Li, Tianyi Zhou Categories: cs.CL, cs.AI, cs.LG Arxiv: http://arxiv.org/abs/2410.23743v1 Title: What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective Abstract: What makes a difference in the post-training of LLMs? We investigate the training patterns of different layers in large language models (LLMs), through the lens of gradient, when training with different responses and initial models. We are specifically interested in how fast vs. slow thinking affects the layer-wise gradients, given the recent popularity of training LLMs on reasoning paths such as...
2024-11-03
20 min
Daily Paper Cast
A Pointer Network-based Approach for Joint Extraction and Detection of Multi-Label Multi-Class Intents
🤗 Daily Paper Upvotes: 20 Authors: Ankan Mullick, Sombit Bose, Abhilash Nandy, Gajula Sai Chaitanya, Pawan Goyal Categories: cs.CL, cs.IR Arxiv: http://arxiv.org/abs/2410.22476v1 Title: A Pointer Network-based Approach for Joint Extraction and Detection of Multi-Label Multi-Class Intents Abstract: In task-oriented dialogue systems, intent detection is crucial for interpreting user queries and providing appropriate responses. Existing research primarily addresses simple queries with a single intent, lacking effective systems for handling complex queries with multiple intents and extracting different intent spans. Additionally, there is a notable absence of multilingual, multi-intent datasets. This study addresses three critical tasks: extracting multiple int...
2024-11-03
22 min
Daily Paper Cast
Language Models can Self-Lengthen to Generate Long Texts
🤗 Daily Paper Upvotes: 14 Authors: Shanghaoran Quan, Tianyi Tang, Bowen Yu, An Yang, Dayiheng Liu, Bofei Gao, Jianhong Tu, Yichang Zhang, Jingren Zhou, Junyang Lin Categories: cs.CL Arxiv: http://arxiv.org/abs/2410.23933v1 Title: Language Models can Self-Lengthen to Generate Long Texts Abstract: Recent advancements in Large Language Models (LLMs) have significantly enhanced their ability to process long contexts, yet a notable gap remains in generating long, aligned outputs. This limitation stems from a training gap where pre-training lacks effective instructions for long-text generation, and post-training data primarily consists of short query-response pairs. Current approaches, such as instruction backtranslation and beh...
2024-11-03
20 min
Daily Paper Cast
Constraint Back-translation Improves Complex Instruction Following of Large Language Models
🤗 Daily Paper Upvotes: 12 Authors: Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li Categories: cs.CL, cs.AI Arxiv: http://arxiv.org/abs/2410.24175v1 Title: Constraint Back-translation Improves Complex Instruction Following of Large Language Models Abstract: Large language models (LLMs) struggle to follow instructions with complex constraints in format, length, etc. Following the conventional instruction-tuning practice, previous works conduct post-training on complex instruction-response pairs generated by feeding complex instructions to advanced LLMs. However, even advanced LLMs cannot follow complex instructions well, thus limiting the quality of generated data. In this work, we find that existing datasets inherently con...
2024-11-03
19 min
Daily Paper Cast
BitStack: Fine-Grained Size Control for Compressed Large Language Models in Variable Memory Environments
🤗 Daily Paper Upvotes: 11 Authors: Xinghao Wang, Pengyu Wang, Bo Wang, Dong Zhang, Yunhua Zhou, Xipeng Qiu Categories: cs.CL, cs.AI, cs.CV, cs.LG Arxiv: http://arxiv.org/abs/2410.23918v1 Title: BitStack: Fine-Grained Size Control for Compressed Large Language Models in Variable Memory Environments Abstract: Large language models (LLMs) have revolutionized numerous applications, yet their deployment remains challenged by memory constraints on local devices. While scaling laws have enhanced LLM capabilities, the primary bottleneck has shifted from \textit{capability} to \textit{availability}, emphasizing the need for efficient memory management. Traditional compression methods, such as quantization, often require predefined compression rat...
2024-11-03
17 min
Daily Paper Cast
SelfCodeAlign: Self-Alignment for Code Generation
🤗 Daily Paper Upvotes: 11 Authors: Yuxiang Wei, Federico Cassano, Jiawei Liu, Yifeng Ding, Naman Jain, Zachary Mueller, Harm de Vries, Leandro von Werra, Arjun Guha, Lingming Zhang Categories: cs.CL, cs.LG, cs.SE Arxiv: http://arxiv.org/abs/2410.24198v1 Title: SelfCodeAlign: Self-Alignment for Code Generation Abstract: Instruction tuning is a supervised fine-tuning approach that significantly improves the ability of large language models (LLMs) to follow human instructions. We propose SelfCodeAlign, the first fully transparent and permissive pipeline for self-aligning code LLMs without extensive human annotations or distillation. SelfCodeAlign employs the same base model for inference throughout the data generation process. It...
2024-11-03
18 min
Daily Paper Cast
Learning Video Representations without Natural Videos
🤗 Daily Paper Upvotes: 10 Authors: Xueyang Yu, Xinlei Chen, Yossi Gandelsman Categories: cs.CV Arxiv: http://arxiv.org/abs/2410.24213v1 Title: Learning Video Representations without Natural Videos Abstract: In this paper, we show that useful video representations can be learned from synthetic videos and natural images, without incorporating natural videos in the training. We propose a progression of video datasets synthesized by simple generative processes, that model a growing set of natural video properties (e.g. motion, acceleration, and shape transformations). The downstream performance of video models pre-trained on these generated datasets gradually increases with the dataset progression. A VideoMAE model pre...
2024-11-03
22 min
Daily Paper Cast (Test)
BitStack: Fine-Grained Size Control for Compressed Large Language Models in Variable Memory Environments
🤗 Daily Paper Upvotes: 11 Authors: Xinghao Wang, Pengyu Wang, Bo Wang, Dong Zhang, Yunhua Zhou, Xipeng Qiu Categories: cs.CL, cs.AI, cs.CV, cs.LG Arxiv: http://arxiv.org/abs/2410.23918v1 Title: BitStack: Fine-Grained Size Control for Compressed Large Language Models in Variable Memory Environments Abstract: Large language models (LLMs) have revolutionized numerous applications, yet their deployment remains challenged by memory constraints on local devices. While scaling laws have enhanced LLM capabilities, the primary bottleneck has shifted from \textit{capability} to \textit{availability}, emphasizing the need for efficient memory management. Traditional compression methods, such as quantization, often require predefined compression rat...
2024-11-03
17 min
Daily Paper Cast (Test)
SelfCodeAlign: Self-Alignment for Code Generation
🤗 Daily Paper Upvotes: 11 Authors: Yuxiang Wei, Federico Cassano, Jiawei Liu, Yifeng Ding, Naman Jain, Zachary Mueller, Harm de Vries, Leandro von Werra, Arjun Guha, Lingming Zhang Categories: cs.CL, cs.LG, cs.SE Arxiv: http://arxiv.org/abs/2410.24198v1 Title: SelfCodeAlign: Self-Alignment for Code Generation Abstract: Instruction tuning is a supervised fine-tuning approach that significantly improves the ability of large language models (LLMs) to follow human instructions. We propose SelfCodeAlign, the first fully transparent and permissive pipeline for self-aligning code LLMs without extensive human annotations or distillation. SelfCodeAlign employs the same base model for inference throughout the data generation process. It...
2024-11-03
18 min
Daily Paper Cast (Test)
AAAR-1.0: Assessing AI's Potential to Assist Research
🤗 Daily Paper Upvotes: 10 Authors: Renze Lou, Hanzi Xu, Sijia Wang, Jiangshu Du, Ryo Kamoi, Xiaoxin Lu, Jian Xie, Yuxuan Sun, Yusen Zhang, Jihyun Janice Ahn, Hongchao Fang, Zhuoyang Zou, Wenchao Ma, Xi Li, Kai Zhang, Congying Xia, Lifu Huang, Wenpeng Yin Categories: cs.CL Arxiv: http://arxiv.org/abs/2410.22394v1 Title: AAAR-1.0: Assessing AI's Potential to Assist Research Abstract: Numerous studies have assessed the proficiency of AI systems, particularly large language models (LLMs), in facilitating everyday tasks such as email writing, question answering, and creative content generation. However, researchers face unique challenges and opportunities in leveraging LLMs for their own wor...
2024-11-03
22 min
Daily Paper Cast (Test)
Learning Video Representations without Natural Videos
🤗 Daily Paper Upvotes: 10 Authors: Xueyang Yu, Xinlei Chen, Yossi Gandelsman Categories: cs.CV Arxiv: http://arxiv.org/abs/2410.24213v1 Title: Learning Video Representations without Natural Videos Abstract: In this paper, we show that useful video representations can be learned from synthetic videos and natural images, without incorporating natural videos in the training. We propose a progression of video datasets synthesized by simple generative processes, that model a growing set of natural video properties (e.g. motion, acceleration, and shape transformations). The downstream performance of video models pre-trained on these generated datasets gradually increases with the dataset progression. A VideoMAE model pre...
2024-11-03
22 min
Daily Paper Cast (Test)
BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays
🤗 Daily Paper Upvotes: 7 Authors: Yang Zhou, Tan Li Hui Faith, Yanyu Xu, Sicong Leng, Xinxing Xu, Yong Liu, Rick Siow Mong Goh Categories: cs.CV Arxiv: http://arxiv.org/abs/2410.21969v1 Title: BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays Abstract: Medical Vision-Language Pretraining (MedVLP) shows promise in learning generalizable and transferable visual representations from paired and unpaired medical images and reports. MedVLP can provide useful features to downstream tasks and facilitate adapting task-specific models to new setups using fewer examples. However, existing MedVLP methods often differ in terms of datasets, preprocessing, and finetuning implementations. This pos...
2024-11-03
21 min
Daily Paper Cast
AAAR-1.0: Assessing AI's Potential to Assist Research
🤗 Daily Paper Upvotes: 10 Authors: Renze Lou, Hanzi Xu, Sijia Wang, Jiangshu Du, Ryo Kamoi, Xiaoxin Lu, Jian Xie, Yuxuan Sun, Yusen Zhang, Jihyun Janice Ahn, Hongchao Fang, Zhuoyang Zou, Wenchao Ma, Xi Li, Kai Zhang, Congying Xia, Lifu Huang, Wenpeng Yin Categories: cs.CL Arxiv: http://arxiv.org/abs/2410.22394v1 Title: AAAR-1.0: Assessing AI's Potential to Assist Research Abstract: Numerous studies have assessed the proficiency of AI systems, particularly large language models (LLMs), in facilitating everyday tasks such as email writing, question answering, and creative content generation. However, researchers face unique challenges and opportunities in leveraging LLMs for their own wor...
2024-11-03
22 min
Daily Paper Cast
BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays
🤗 Daily Paper Upvotes: 7 Authors: Yang Zhou, Tan Li Hui Faith, Yanyu Xu, Sicong Leng, Xinxing Xu, Yong Liu, Rick Siow Mong Goh Categories: cs.CV Arxiv: http://arxiv.org/abs/2410.21969v1 Title: BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays Abstract: Medical Vision-Language Pretraining (MedVLP) shows promise in learning generalizable and transferable visual representations from paired and unpaired medical images and reports. MedVLP can provide useful features to downstream tasks and facilitate adapting task-specific models to new setups using fewer examples. However, existing MedVLP methods often differ in terms of datasets, preprocessing, and finetuning implementations. This pos...
2024-11-03
21 min
Daily Paper Cast (Test)
What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective
🤗 Daily Paper Upvotes: 42 Authors: Ming Li, Yanhong Li, Tianyi Zhou Categories: cs.CL, cs.AI, cs.LG Arxiv: http://arxiv.org/abs/2410.23743v1 Title: What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective Abstract: What makes a difference in the post-training of LLMs? We investigate the training patterns of different layers in large language models (LLMs), through the lens of gradient, when training with different responses and initial models. We are specifically interested in how fast vs. slow thinking affects the layer-wise gradients, given the recent popularity of training LLMs on reasoning paths such as...
2024-11-03
03 min
Daily Paper Cast (Test)
Unpacking SDXL Turbo: Interpreting Text-to-Image Models with Sparse Autoencoders
🤗 Daily Paper Upvotes: 56 Authors: Viacheslav Surkov, Chris Wendler, Mikhail Terekhov, Justin Deschenaux, Robert West, Caglar Gulcehre Categories: cs.LG, cs.AI, cs.CV Arxiv: http://arxiv.org/abs/2410.22366v1 Title: Unpacking SDXL Turbo: Interpreting Text-to-Image Models with Sparse Autoencoders Abstract: Sparse autoencoders (SAEs) have become a core ingredient in the reverse engineering of large-language models (LLMs). For LLMs, they have been shown to decompose intermediate representations that often are not interpretable directly into sparse sums of interpretable features, facilitating better control and subsequent analysis. However, similar analyses and approaches have been lacking for text-to-image models. We investigated the possibility of usi...
2024-11-03
04 min
Daily Paper Cast (Test)
ROCKET-1: Master Open-World Interaction with Visual-Temporal Context Prompting
Published on Oct 21 2024
2024-10-30
22 min
Daily Paper Cast (Test)
CompassJudger-1: All-in-one Judge Model Helps Model Evaluation and Evolution
Published on Oct 21
2024-10-28
24 min
Daily Paper Cast (Test)
Knowing When to Ask - Bridging Large Language Models and Data
A description of my awesome episode
2024-10-28
20 min
FuturePrint Podcast
#148 - The Challenges and Opportunities of Digital Printing Inks in China, With Jingwen Su, Siegwerk
Jingwen has worked for many significant inkjet companies in the Chinese market and has seen the changes that have occurred over the last decade. He is well positioned to discuss the challenges and opportunities in the current Chinese market, and to look forward to a return to growth for digital printing inks. He describes how Siegwerk's strong customer network in traditional print offers an excellent route to market for Siegwerk's digital inks, especially in packaging and labels. Listen on: Apple Podcasts, Google Podcasts
2023-11-14
25 min
āáǎà
[āáǎà 008] Listening to the Sounds of an Art Residency in Yunnan
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. In this episode, yoyo and Xiao Yang, members of the La Mondo art group, are interviewed by Mengyuan of the Linden Centre (喜林苑) about their art residency there at the end of last year, and in particular about the process and results of the third edition of the FoooART project, "A Yunnan Food Journal" (滇游食记). La Mondo will also bring the new Yunnan book to the abC Art Book Fair in Shanghai, February 10-13. See you there! We have made a small listening-companion booklet for this episode, available on La Mondo's official account.
[Hosts] yoyo, resident artist at the Linden Centre (La Mondo member); Xiao Yang, resident artist at the Linden Centre (La Mondo member); Mengyuan, head of branding at the Linden Centre
[Highlights]
0:46 Welcome to āáǎà; talking about residency experiences
1:25 Two new guests: Mengyuan of the Linden Centre and fellow resident Xiao Yang
3:58 Living and creating in Xizhou, and how it differs from visiting as a tourist
7:45 Getting to know locals: starting with the morning market, easing in through food
16:18 The origins of the FoooART project
17:09 FoooART III: stepping outside the art circle, how to connect with more people?
20:49 Same intentions, changing plans: how residency life reshaped our work
24:00 The memorable dessert "Sparks", a collective creation
25:56 Conversations with participants; the relationship between audience and artwork
30:31 The creative thinking behind FoooART III
36:24 On the final work, "Do You Want a Piece of Land?"
39:31 How the group members divided the project work
43:56 An extension of the third edition: the publication will meet you at the abC Art Book Fair in Shanghai
[Concepts and artists mentioned in this episode]
[Concepts]
FoooART: an art project initiated by the La Mondo art group; the name combines "Food" and "Art". In FoooART, La Mondo turns artists' works and concepts into mouth-watering dishes, desserts, and drinks, and invites people to taste these edible works. The first two editions took place in different locations, making it a nomadic project. La Mondo hopes the project brings art closer to everyday life and opens a new way for audiences to experience artworks.
FoooART III, "A Yunnan Food Journal": with a wish to return to nature, the third edition takes land art as its core inspiration. It opened on the first day of 2023 at the Yang Pinxiang House of the Linden Centre. The courtyard was laid out with fruit and vegetables from the morning market: citron, haicai, pumpkin, mango... We invited the thirteen participants on site to temporarily join La Mondo for a performance, "Breathing with Food in the Fields". The edition presented four distinctive afternoon-tea pieces: Mendietartare, Sparks, White Cube in the Pond, and Golden Waves. Near the end of the event came a final work, inedible but offered as a free gift: a small mound of earth, Xizhou's brown soil topped with red soil from the mountains of Fengyu, a gift from the land that will continue FoooART III.
Land art (also "environmental art"): art that takes nature and the land as its core material, using stone, soil, trees, and particular landforms to create large-scale installations. It emerged in the 1960s, at the height of conceptual art, when many art forms converged to challenge tradition. Beyond the rebellious spirit of the era, land artists sought to re-examine their relationship with nature.
Art residency: an artist lives somewhere for a short period, from one week to a year, to make work. In exchange for public-education activities, help running the institution, or works left for the collection, the artist usually receives free room and board.
[Artists / works]
La Mondo art group: Mendietartare, 2023; Sparks, 2023; White Cube in the Pond, 2023; Golden Waves, 2023
Ana Mendieta; Cai Guo-Qiang; Robert Smithson, Spiral Jetty, 1970; Yiyao Hu, Cut a Piece of Earth, 2022-23
[Travel tips]
Xizhou: Linden Centre Yang Pinxiang House / Yan Baocheng Mansion, Shendu, Jiaseji, Pessoa Coffee, Weishan pa-rou ersi noodles, Qingzhen Laiyiguan, Rainbow Hunter Farm. Shaxi: Shadengjing, Shilong Village, Chichaqu (Shaxi Old Town)
[Music credit] Surface by Robert John
[About āáǎà] For comments, suggestions, collaborations, or topics you would like to share, email us at aaaapodcast@outlook.com. You can listen on Xiaoyuzhou, Spotify, Apple Podcasts, and other platforms. See you next episode :)
2023-02-09
50 min
āáǎà
[āáǎà 007] Starting from Allergies: On Art and the Body
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. This episode, hosted by jingwen and yoyo, takes "allergies" as its starting point: personal experiences and anecdotes, thoughts about allergies and the body, and from there body art and performance art. (hanqi is running events in Yunnan; we await her return!)
[Hosts] jingwen, recently back in China and between jobs; yoyo, a moderate allergy sufferer
[Highlights]
01:16 Welcome to āáǎà
02:03 Where the topic came from; yoyo's experience as a moderate allergy sufferer
05:30 Not an allergic constitution, but intolerances all the same
07:56 The body's interior goes through its own "climate change"
10:26 Allergy symptoms and allergens
19:24 Wolfgang Laib's Pollen from Hazelnut installation: even the pictures make you want to sneeze
21:08 Branching out to artists and works
24:14 Male and female: bodies disciplined by society
28:39 Georges Bataille and "inner experience"
33:20 Felix Gonzalez-Torres's work dedicated to his late partner Ross
40:05 A question: is art for the perfect representation of the body?
[Texts, books, artists, works, and concepts mentioned]
[Texts / books]
Maria Lind, curator, "What Is Wrong with My Nose: From Gogol and Freud to Goldin+Senneby (via Haraway)", e-flux
Georges Bataille, French critic, thinker, and novelist, Inner Experience (L'expérience intérieure)
Nicholas Mirzoeff, author and visual-culture theorist, Bodyscape: Art, Modernity and the Ideal Figure
[Artists / works]
Ariana Page Russell
Wolfgang Laib, Pollen from Hazelnut, 2013
Roman Signer
Francis Alÿs
Yoko Ono, Cut Piece
Marina Abramović, Rhythm 0, 1974
Tehching Hsieh, One Year Performance 1980-1981
Mona Hatoum, Corps Étranger, 1994
Felix Gonzalez-Torres, Untitled (Portrait of Ross in L.A.), 1991
He Xiangyu, "Lemon" series paintings
2022-07-30
41 min
āáǎà
[āáǎà 006-2] "Say Your Love Out Loud!" The Artists We Love, Part 2
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. This episode brings you the artists the three of us love: their stories, the works that moved us, and the scenes that keep circling in our heads. As a continuation of the previous episode, this one mainly covers several male artists we love. If you enjoy it, do listen to part one as well. We have compiled the artworks and artist information mentioned; please refer to the booklet while listening.
[Hosts] jingwen, a daydreamer fretting over which artist would make the best boyfriend; yoyo, still a fair and impartial artist "appraiser"; hanqi, when life cannot be changed, let's have some light-hearted art
[Highlights]
01:12 Welcome to āáǎà
01:46 Francis Alÿs, loved by all of us
16:10 Food and relational art: Rirkrit Tiravanija
19:42 The light and joyful Jeppe Hein
22:49 The films worth endless rewatching: Jacques Tati
24:30 East, West: Huang Yong Ping as pioneer
[Artists, works, and books mentioned]
Francis Alÿs: born in Belgium in 1959; lives and works in Mexico City, where he moved in 1986 after giving up his career as an architect. His practice spans painting, drawings, documentary film, and performance in public space, using a poetic, allegorical language to address national borders, nativism, globalism, regional conflicts, and other political and social realities. His deep research into cities grounds the work, probing the questions of race and the aesthetic dilemmas that contemporary art-making cannot avoid.
[Book] Border Barriers Typology, Peter Kilchmann, Zürich, Switzerland, 2021
[Works] The Loop, worldwide, 1997; Turista, Mexico City, Mexico, 1994; Color Matching, Mosul, Iraq, 2016; Paradox of Praxis 1 (Sometimes making something leads to nothing), Mexico City, Mexico, 1997; Colector (The Collector), Mexico City, Mexico, 1990-1992; The Last Clown, 1995-2000; El gringo, Hidalgo, Mexico, 2003; Children's Games, worldwide, 1999-present
[Exhibitions] "Francis Alÿs: La dépense", November 9, 2018 - February 24, 2019, Rockbund Art Museum, Shanghai (20 Huqiu Road, Huangpu District); "Francis Alÿs: Children's Games", December 19, 2019 - March 8, 2020, Eye Filmmuseum, Amsterdam
Rirkrit Tiravanija: born 1961 in Buenos Aires, Argentina. Tiravanija has long aligned his practice with an ethics of social engagement, inviting viewers to activate the work. Combining performance, sculpture, and installation, he turns spaces for showing art into places of social interaction, meeting points where people encounter and exchange, often bringing audiences, in a dematerialized way, into a world of reciprocity, conviviality, and hospitality.
[Works] Untitled 1992 (Free), September 12 - October 10, 303 Gallery, New York, 1992; untitled 2019 (the form of the flower is unknown to the seed), a new, permanently sited work by Rirkrit Tiravanija, open to the public since June 7, 2019
Jeppe Hein: born 1974 in Copenhagen, Denmark; studied at the Royal Danish Academy of Fine Arts and the Städelschule in Frankfurt. His work is strongly experimental, with a distinctly Nordic feeling for nature; he once worked as an assistant to Olafur Eliasson
2022-06-18
30 min
āáǎà
[āáǎà 006-1] "Say Your Love Out Loud!" The Artists We Love, Part 1
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. This episode brings you the artists the three of us love: their stories, the works that moved us, and the scenes that keep circling in our heads. To keep each episode from running too long, we split our favorite artists into two parts, women and men. (Sanyu slipped into this one; fair enough, his life was full of female muses and his paintings are forever of female bodies.) We have compiled the artworks and artist information mentioned; please refer to the booklet while listening.
[Hosts] jingwen, who hopes to become an artist people love; yoyo, the fair and impartial artist "appraiser"; hanqi, lately obsessed with artists who face life and pain head-on
[Highlights]
01:05 Welcome to āáǎà
01:17 Where the topic came from; defining "love"
02:56 Sanyu, jingwen's obsession since childhood: stories and bits of gossip from his Paris years
04:47 There is no traditional male gaze in Sanyu's paintings
07:22 yoyo fell for Marina Abramović, the "mother of contemporary performance art", after buying her memoir
09:24 The aesthetics of violence we don't much care for
13:41 Artists' self-marketing; practices that change with the times
17:45 Tracey Emin, with whom hanqi can empathize
21:52 Emin's battle with cancer and her return to her hometown, Margate
26:19 hanqi grows ever more fascinated by Marlene Dumas
29:33 The feeling of returning to painting
[Artists, works, and books mentioned]
Sanyu (常玉), 1900-1966, born in Shunqing, Sichuan. He entered the Shanghai Art School in 1917 and went to Paris in 1919 on a work-study program. He returned briefly to China in 1938, then moved on to New York, where he lived for two years and showed at the Museum of Modern Art in 1948. He returned to France in 1948 and died in Paris in 1966. The National Museum of History in Taipei holds some forty of his major oil paintings and since 1978 has regularly organized retrospectives and scholarship promoting his achievements in twentieth-century Chinese art history. https://www.sanyu.org
Marina Abramović, born 1946 in Belgrade, then the capital of Yugoslavia. Practicing performance art since the 1970s, she is considered one of the greatest performance artists of the twentieth century. Her work explores three main areas: the relationship between performer and audience, the limits of the body, and the possibilities of the mind.
[Book] Walk Through Walls: A Memoir, 2016
[Works] Marina Abramović and Ulay: Rest Energy, 1980; Night Sea Crossing, 1981-87; The Lovers - The Great Wall Walk, 1988. Marina Abramović: Lips of Thomas, 1975; Seven Easy Pieces, 2005, Guggenheim; The Artist Is Present, 2010, MoMA
Tracey Emin, born 1963 in London, raised in the seaside town of Margate on England's southeast coast. Working across media including painting, video, performance, and installation, Emin explores love, death, pain, and loneliness.
[Works] My Bed, 1998; neon sculptures; Why I Never Became a Dancer, 1995
[Exhibition] A Journey to Death, April 24 - June 19, 2022, Carl Freedman Gallery, Margate, UK
Marlene Dumas, born 1953 in Cape Town, South Africa; lives and works in Amsterdam. Dumas's paintings often draw on her own Polaroids and on photographs collected from magazines and newspapers; her portraits keep probing psychology, gender, race, and death.
[Works] The Painter, 1994; Self-portrait at Noon, 2008
[Short documentary] The Image as Burden, Stedelijk Museum
[Music credit] Surface by Robert John
[About āáǎà] For comments, suggestions, collaborations, or topics you would like to share, email us at aaaapodcast@outlook.com. You can listen on Xiaoyuzhou, Spotify, Apple Podcasts, and other platforms. See you next episode :)
2022-06-04
31 min
āáǎà
[āáǎà 005] avec Rong Xi: Existentialism, Art Perception, and Some "If You Know, You Know"
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. The "avec" series invites guests to come chat on āáǎà. With our first guest we talked some art, some philosophy, some literature, and a bit of everything else, which really does live up to "general-arts chat podcast".
[Host] hanqi, body in Shanghai, mind at sea
[Guest] Rong Xi, Buddhist convert and psychology practitioner; BA from Colorado College, soon to study family therapy at Northwestern University
[Highlights]
01:30 Welcome to āáǎà
03:10 A macro view of how we're doing: reality and the future are hazy, so better to look at ourselves and the present
06:10 Art reflects present problems; perhaps it can help people "wake up"
07:00 "Waking up" through an existentialist lens: does everyone really need to wake up?
09:20 Two former STEM guys' art projects: moving from understanding the world with the head to feeling it through living
13:00 Chen Danqing's A Feast of Lines (《线条的盛宴》), visiting Northern Dynasties (correction*) tomb murals, hoping to serve as a bridge between the public and art
16:53 The Fountainhead: "whoever preaches sacrifice to you wants to be your master"
18:06 At the macro level, every state system hopes to "domesticate"; perhaps resisting by not resisting is the way out
19:15 How small humans are before death and boundless nature
21:40 Why wait until the brink of death to weigh "success" against freely chasing ideals; why not figure it out sooner?
24:53 A Buddhist view: "awakening" to life, which is not just about grasping at external things
26:40 Perhaps the "success" society chases is a sense of connection
27:28 Everyone's life has its own source of joy and its own way of building comfort; no need to debate which way of living is better
29:26 Shi Yong's early work shows how adaptable people are in any environment
30:45 Trying to argue, like Kant, which ways of life are better or worse, proving nothing in the end; so better to attend to what we can control now
35:20 Why is "just be happy" so simple to say and so hard to do? In the end, those who get it get it; if you don't, that's fine too
[Music credit] Surface by Robert John
[About āáǎà] For comments, suggestions, collaborations, or topics you would like to share, email us at aaaapodcast@outlook.com. You can listen on Xiaoyuzhou, Spotify, Apple Podcasts, and other platforms. See you next episode :)
2022-05-21
37 min
āáǎà
[āáǎà 004] The Venice Biennale: Art's Revival, Feminism, and World Peace
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. This episode is a free-ranging chat around the ongoing 59th Venice Art Biennale, "The Milk of Dreams". This belated art event has energized the entire art world and put every topical issue on the agenda: from female narratives to future humans, from the politics of skin color to the Russia-Ukraine war, from ecology to cosmopolitanism. What can art solve? What can it change? And what new questions are being raised? Starting from our own interests and impressions, we hope to share views and spark ideas with you.
[Hosts] jingwen, who wants to go to Venice; yoyo, who wants to dream; hanqi, who wants to go out freely
[About the 59th Venice Art Biennale]
The Venice Biennale is an art biennale in odd years (e.g. 2013, 2015) and an architecture biennale in even years (e.g. 2014, 2016); the exhibition is generally divided into national pavilions and a themed exhibition, with the main venues at the Arsenale and the Giardini. Artistic director Cecilia Alemani and Biennale president Roberto Cicutto announced that the 2022 exhibition would be titled "The Milk of Dreams", after a children's book written by the Surrealist artist Leonora Carrington in the 1950s. Alemani set out three themes for the main exhibition: "the representation of bodies and their metamorphoses; the relationship between individuals and technologies; the connection between bodies and the Earth." Alemani acknowledged that the planned exhibition sits in "a world torn apart", but promised that "it will be an optimistic exhibition, celebrating art and its capacity to create alternative cosmologies and new conditions of existence." (Introduction adapted from the web.)
[Highlights]
00:55 Welcome to āáǎà
01:18 A brief introduction to the Venice Biennale
05:12 What stands out, and what is different, at the 59th edition
05:48 Against the backdrop of the Russia-Ukraine war, artists' and curators' responses to it: Dana Kosmina, Belkis Ayón, Cecilia Alemani, and the Russian artist and curator team
08:26 Reflections prompted by the German Pavilion: looking back at a history of aggression, taking a century to digest the wounds of war
09:40 Talking about Venice
11:46 Correction: the Swiss (not French*) artist Ugo Rondinone
16:00 The biennale radiates into many more exhibitions across the art world
17:18 Inspiration from Cecilia Alemani's working methods under the pandemic
21:07 Gender issues in Venice
26:58 The Italian Pavilion, the "total installation" and immersive experience
30:18 Art spectacles make us want to misbehave: sneaking in and messing around
31:10 yoyo's work: how to have your own pavilion at the Biennale?
32:35 Gossip: the Anish Kapoor Foundation splashes out on a palazzo
34:32 Being there in person cannot be replaced
35:45 The China Pavilion
36:20 Looking at the domestic art world under the pandemic
36:52 What can art change? What has art changed?
[Artists' works, books, concepts, and events mentioned] (in order of appearance)
Artists' works:
Dana Kosmina, Piazza Ucraina, 2022, special outdoor Ukraine space, Venice
Belkis Ayón, La consagración, 1991
Maria Eichhorn, Relocating a Structure, 2022, German Pavilion, Venice
Ugo Rondinone, Burn Shine Fly, 2022, Venice
Gian Maria Tosatti, History of Night and Destiny of Comets, 2022, Italian Pavilion, Venice
Zineb Sedira, Les rêves n'ont pas de titre (Dreams Have No Titles)
Yoyo Hu, Venice Biennale Visitor Kit
Subodh Gupta, Cooking The World, 2017, Belmond Hotel Cipriani, Venice
"Meta-Scape", 2022, China Pavilion, Venice
Pavlo Makov, Fountain of Exhaustion, Ukrainian Pavilion, Venice
Books and podcasts:
Leonora Carrington (6 April 1917 - 25 May 2011), The Milk of Dreams
The Merchant of Venice, an important early Shakespeare work, a comedy with a strongly satirical edge
Talk Art podcast (Apr 19): Pavlo Makov (Ukrainian Pavilion at Venice Biennale 2022)
Concepts and events:
At the end of February, the artists representing the Russian Pavilion, Alexandra Sukhareva and Kirill Savchenkov, and curator Raimundas Malašauskas announced on social media that they would no longer take part in the 59th Venice Art Biennale.
The Venetian School: one of the principal schools of Italian Renaissance painting. In the 14th century the Republic of Venice was a hub of trade between Europe and the East, with concentrated merchant capital and a powerful state; its painting style was opulent, with Giorgione and Titian among its representative painters.
Venice collateral exhibition: "Angela Su (徐世琪): Arise, Hong Kong in Venice", 2022, Venice
The 58th Venice Biennale, "May You Live in Interesting Times"
The Kabakovs and the "total installation": the Kabakovs are known for the "total installation", an immersive art form they pioneered that plunges viewers fully into a theatrical environment; they transform the gallery spaces in which they show, creating a new reality for the audience to enter and experience. (Text from
2022-05-07
40 min
āáǎà
[āáǎà 003] On Working in the Art Industry
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. In this episode we decided to stick strictly to the show's founding concept, a general-arts chat podcast, and talk about our experiences of and thoughts on working in the art industry. Everything said here is based purely on our personal experience; it is a casual conversation between friends. There may be some noise, loose logic, and weak structure. Bear with us!
[Hosts] yoyo, jingwen, hanqi
[Highlights]
00:30 Welcome to āáǎà
01:10 (Disclaimer)
02:03 jingwen's internship in the public education department of A4 Art Museum, Chengdu [main projects and responsibilities]
5:55 Some organizational problems of museums in China
11:20 hanqi's gallery internship [the ticket seller who questioned her life choices]
15:00 hanqi's internship at a non-profit art organization [a wonderful summer in New York]
17:30 hanqi's internship at Today Art Museum, Beijing [happy even doing odd jobs for free]
19:15 hanqi's internship at an auction house [only auction day is exciting]
20:38 hanqi and yoyo meet as interns at Gallery Weekend Beijing [a taste of galleries, hope rekindled]
28:28 yoyo's internships at two museums [taking part in creative work]
29:40 Salaries!!! The pay problem for art workers
33:42 BEST and WORST moments at work (funny bits included)
43:35 A call for Fair Pay, Equal Pay
47:56 What does an internship at an art institution involve?
52:10 Bonus: fun memories from work
55:30 What kind of person suits a job in the art world?
[Music credit] Surface by Robert John
[About āáǎà] Thanks for listening. If you have more questions and thoughts about working in the arts, come chat and leave us a message. You can listen on Xiaoyuzhou, Spotify, Apple Podcasts, and other platforms. See you next episode :)
2022-04-16
58 min
āáǎà
[āáǎà 002-3] Spring Nourishment for the Soul: Red
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. This episode is a special project we planned for you, called Spring Nourishment for the Soul. Starting from a personal notion of "seasonal food", and weaving in current events and our own lives, each of the three hosts built an imaginary curatorial space. The project is released in three parts. Please enjoy the episode together with the carefully made booklet in the show notes.
[Host] yoyo, amateur performer
[Highlights]
00:40 Welcome to āáǎà
01:10 Introducing the Spring Nourishment project
04:47 yoyo's "red" space no. 1: the Kabakovs and borscht
09:10 yoyo's "red" space no. 2: Sun Yuan & Peng Yu and amaranth
12:33 A red poem
14:06 An eerie song
[Music credit] Surface by Robert John; Ghost Town by The Specials
[About āáǎà] Thanks for listening. If you have a similar spatial fantasy, or seasonal food to recommend, come chat and leave us a message. You can listen on Xiaoyuzhou, Spotify, Apple Podcasts, and other platforms. See you next episode :)
2022-04-06
17 min
āáǎà
[āáǎà 002-2] Spring Nourishment for the Soul: Cheers! A Toast with Friends
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. This episode is a special project we planned for you, called Spring Nourishment for the Soul. Starting from a personal notion of "seasonal food", and weaving in current events and our own lives, each of the three hosts built an imaginary curatorial space. The project is released in three parts. Please enjoy the episode together with the carefully made booklet in the show notes.
[Host] jingwen, pretend drunkard
[Highlights]
00:48 Welcome to āáǎà
01:01 Introducing the Spring Nourishment project
04:52 jingwen's drinks list starts here
06:38 First drink: three o'clock spring sunshine with Sanyu and Yu Xiuhua, paired with small Sichuan cherries and pink champagne
11:25 Second drink: Naipaul's Our Universal Civilization with Francis Alÿs's work Tornado
15:50 Third drink: Mark Rothko's red color-field paintings with a Bloody Mary
19:54 Tipsy, gloomy closing remarks
[Music credit] Surface by Robert John; My Foolish Heart by the Bill Evans Trio
[About āáǎà] Thanks for listening. If you have a similar spatial fantasy, or seasonal food to recommend, come chat and leave us a message. You can listen on Xiaoyuzhou, Spotify, Apple Podcasts, and other platforms. See you next episode :)
2022-04-04
23 min
āáǎà
[āáǎà 002-1] Spring Nourishment for the Soul: Shanghai's Spring in Lockdown
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. This episode is a special project we planned for you, called Spring Nourishment for the Soul. Starting from a personal notion of "seasonal food", and weaving in current events and our own lives, each of the three hosts built an imaginary curatorial space. The project is released in three parts. Please enjoy the episode together with the carefully made booklet in the show notes.
[Host] hanqi, an art worker who wants to go out and sunbathe
Spring Nourishment for the Soul: Shanghai's spring spent in lockdown
[Highlights]
01:02 Welcome to āáǎà
01:30 Introducing the Spring Nourishment project
05:03 hanqi's space starts here
06:03 On food recommendations: hanqi's plat du jour
07:13 From my own quarantine to the work of artist Shi Yong
09:08 From the Russia-Ukraine war to Aljoscha's performative intervention in Kyiv
12:05 And on from the war to the Soviet invasion of Czechoslovakia in 1968
15:32 From quarantine again to the Tracey Emin / Edvard Munch exhibition "The Loneliness of the Soul"
16:40 A passage from Milan Kundera's The Unbearable Lightness of Being
17:56 Ending on a positive, healing note
18:20 Let's listen together to the second movement of Rachmaninoff's Piano Concerto No. 2
[Endnote] As this episode goes out I have been in home quarantine for 15 days, and the compound has announced another 14 to come. (Collapses.) I hope everyone in Shanghai stays in good spirits: listen to music, listen to our podcast, and let's sunbathe together once the lockdown lifts!
[Music credit] Surface by Robert John; Sergei Rachmaninov, Piano Concerto No. 2: II. Adagio sostenuto, performed by Lang Lang
[About āáǎà] Thanks for listening. If you have a similar spatial fantasy, or seasonal food to recommend, come chat and leave us a message. You can listen on Xiaoyuzhou, Spotify, Apple Podcasts, and other platforms. See you next episode :)
2022-04-02
30 min
āáǎà
[āáǎà 001] "This Is So Contemporary": On BY ART MATTERS, Hangzhou
Hello everyone, welcome to āáǎà, a general-arts chat podcast hosted by three young art practitioners. For our first episode, we start with a relaxed chat about BY ART MATTERS (天目里美术馆) in Hangzhou and its inaugural exhibition, "A Show About Nothing" (从无到有). Hosts yoyo and hanqi visited the museum one weekend in February and, together with jingwen in London, talked through the experience: the works and settings that left the deepest impression, the concept of "A Show About Nothing", the museum's space and its relation to place, and the special "Sneak In" segment. A friendly reminder: this episode discusses many works in "A Show About Nothing"; if you don't want spoilers, listen with care.
[Hosts] jingwen, self-employed artist; yoyo, part-timer in the broader art industry; hanqi, art-industry worker
[Highlights]
01:35 Welcome to āáǎà
02:11 Introducing BY ART MATTERS
03:19 The most memorable works in the opening exhibition
16:00 The concepts of "nothing", "something", and "emptiness" in "A Show About Nothing"
22:32 The museum's spaces
28:50 Entering Hangzhou from a Western contemporary-art perspective: does the exhibition, or the museum, have any "site-specificity"?
38:20 If we could sneak one work into the opening exhibition, which would it be?
[Artists' works, books, and concepts mentioned] (in order of appearance)
Artists' works (all from "A Show About Nothing" unless otherwise noted):
Li Binyuan, Stand up when the bell rings, 2017, 4'15''
Martin Creed, Work No. 160: The lights going on and off, 2000
Tino Sehgal, This is so contemporary, 2004
Ghislaine Leung, Bread, 2000
Olafur Eliasson, The weather project, 2003 (on view: Tate Modern, London)
______, Interlude, 2021
Roman Ondák, Clockwork, 2014
Richard Long, Boulder Line (commissioned by BY ART MATTERS)
"Sneak In" picks: Liu Zongzhou, The Peach Blossom Spring, running-cursive calligraphy; Philippe Parreno, Synchronicity; Tomo Savic-Gecan
The New York critic Tom McDonough on Parreno's exhibitions: what at first seems an exhibition with hardly anything in it is closer to "the exhibition itself as a work"; the works gain life and meaning only in particular combinations, within a specific span of time.
Books:
Derek Jarman, British film director, poet, and artist, Chroma: A Book of Colour
Further material:
Walter Benjamin, German philosopher, on the body looking, perceiving, and feeling its way through museum space as the very existence of that space
Michael Fried, art historian and critic, "Art and Objecthood", Artforum: https://www.artforum.com/print/196706/art-and-objecthood-36708
[Music credit] Surface by Robert John
[About āáǎà]
2022-03-19
46 min