• NeurIPS 2022 interesting papers


    The NeurIPS 2022 decisions came out recently, and I spent a few hours compiling the papers I'm interested in. Collecting these took some effort, so please give the post a like!
    Schedule page: https://nips.cc/Conferences/2022/Schedule?type=Poster
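
    If you want to build a similar list yourself, below is a minimal Python sketch of one way to pull the poster titles from that schedule page and keep only those matching a few keywords. This is a sketch under assumptions: the `maincardBody` class name is a guess about the page's markup (it matched older nips.cc schedule pages) and may need adjusting, and the keyword list here is just an illustration.

```python
# Minimal sketch: fetch the NeurIPS 2022 poster schedule and filter titles
# by keyword. Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

URL = "https://nips.cc/Conferences/2022/Schedule?type=Poster"
# Illustrative keywords, not an official taxonomy.
KEYWORDS = ["distillation", "pruning", "shapley", "representation"]

html = requests.get(URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# ASSUMPTION: poster titles live in <div class="maincardBody"> elements;
# inspect the live page and adjust the selector if the markup differs.
titles = [div.get_text(strip=True)
          for div in soup.find_all("div", class_="maincardBody")]

# Print titles that mention any keyword (case-insensitive).
for title in titles:
    if any(k in title.lower() for k in KEYWORDS):
        print(title)
```

    From the resulting dump it is then quick to hand-pick the titles you actually care about, which is roughly what this post's list is.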

    • Differentially Private Model Compression
    • Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity
    • REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering
    • Dataset Inference for Self-Supervised Models
    • Neuron with Steady Response Leads to Better Generalization
    • Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning
    • Structural Pruning via Latency-Saliency Knapsack
    • INRAS: Implicit Neural Representation for Audio Scenes
    • What Makes a “Good” Data Augmentation in Knowledge Distillation - A Statistical Perspective
    • On Measuring Excess Capacity in Neural Networks
    • SIREN: Shaping Representations for OOD Detection
    • Distributionally robust weighted k-nearest neighbors
    • Effects of Data Geometry in Early Deep Learning
    • Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
    • Retaining Knowledge for Learning with Dynamic Definition
    • If Influence Functions are the Answer, Then What is the Question?
    • Learning sparse features can lead to overfitting in neural networks
    • VisFIS: Improved Visual Feature Importance Supervision with Right-for-Right-Reason Objectives
    • CS-Shapley: Class-wise Shapley Values for Data Valuation in Classification
    • “Why Not Other Classes?”: Towards Class-Contrastive Back-Propagation Explanations
    • Self-Supervised Fair Representation Learning without Demographics
    • Fairness without Demographics through Knowledge Distillation
    • Towards Understanding the Condensation of Neural Networks at Initial Training
    • Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models
    • Neural Temporal Walks: Motif-Aware Representation Learning on Continuous-Time Dynamic Graphs
    • On Robust Multiclass Learnability
    • Neural Matching Fields: Implicit Representation of Matching Cost for Semantic Correspondence
    • Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination
    • Implicit Neural Representations with Levels-of-Experts
    • Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification
    • A Data-Augmentation Is Worth A Thousand Samples
    • Exploring Example Influence in Continual Learning
    • Bridge the Gap Between Architecture Spaces via A Cross-Domain Predictor
    • Deconfounded Representation Similarity for Comparison of Neural Networks
    • Training with More Confidence: Mitigating Injected and Natural Backdoors During Training
    • Understanding Neural Architecture Search: Convergence and Generalization
    • Understanding Robust Learning through the Lens of Representation Similarities
    • Efficient Dataset Distillation using Random Feature Approximation
    • Distilling Representations from GAN Generator via Squeeze and Span
    • AttCAT: Explaining Transformers via Attentive Class Activation Tokens
    • On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
    • Task Discovery: Finding the Tasks that Neural Networks Generalize on
    • Federated Learning from Pre-Trained Models: A Contrastive Learning Approach
    • Improving Self-Supervised Learning by Characterizing Idealized Representations
    • Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation
    • Domain Generalization by Learning and Removing Domain-specific Features
    • Do We Really Need a Learnable Classifier at the End of Deep Neural Network?
    • Pruning has a disparate impact on model accuracy
    • Neural network architecture beyond width and depth
    • Explaining Graph Neural Networks with Structure-Aware Cooperative Games
    • TA-GATES: An Encoding Scheme for Neural Network Architectures
    • Is this the Right Neighborhood? Accurate and Query Efficient Model Agnostic Explanations
    • Respecting Transfer Gap in Knowledge Distillation
    • Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching
    • Transfer Learning on Heterogeneous Feature Spaces for Treatment Effects Estimation
    • Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
    • 3DB: A Framework for Debugging Computer Vision Models
    • Redundant representations help generalization in wide neural networks
    • What You See is What You Classify: Black Box Attributions
    • Efficient identification of informative features in simulation-based inference
    • Best of Both Worlds Model Selection
    • Understanding Self-Supervised Graph Representation Learning from a Data-Centric Perspective
    • Lost in Latent Space: Examining failures of disentangled models at combinatorial generalisation
    • Does GNN Pretraining Help Molecular Representation?
    • Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP
    • GlanceNets: Interpretabile, Leak-proof Concept-based Models
    • Evolution of Neural Tangent Kernels under Benign and Adversarial Training
    • Learning to Scaffold: Optimizing Model Explanations for Teaching
    • Dataset Distillation using Neural Feature Regression
    • Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
    • Palm up: Playing in the Latent Manifold for Unsupervised Pretraining
    • Spherization Layer: Representation Using Only Angles
    • Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations
    • Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning
    • Task-Agnostic Graph Explanations
    • On Neural Network Pruning’s Effect on Generalization
    • In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?
    • On the Symmetries of Deep Learning Models and their Internal Representations
    • Training Subset Selection for Weak Supervision
    • Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting
    • Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations
    • Meta-learning for Feature Selection with Hilbert-Schmidt Independence Criterion
    • GULP: a prediction-based metric between representations
    • Interpreting Operation Selection in Differentiable Architecture Search: A Perspective from Influence-Directed Explanations
    • Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers
    • Insights into Pre-training via Simpler Synthetic Tasks
    • Orient: Submodular Mutual Information Measures for Data Subset Selection under Distribution Shift
    • Knowledge Distillation: Bad Models Can Be Good Role Models
    • Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF
    • Weakly Supervised Representation Learning with Sparse Perturbations
    • Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
    • FedAvg with Fine Tuning: Local Updates Lead to Representation Learning
    • Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions
    • Efficient Architecture Search for Diverse Tasks
    • Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures
    • Reconstructing Training Data From Trained Neural Networks
    • Procedural Image Programs for Representation Learning
    • Where to Pay Attention in Sparse Training for Feature Selection?
    • Could Giant Pre-trained Image Models Extract Universal Representations?
    • On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity
    • Vision GNN: An Image is Worth Graph of Nodes
    • CLEAR: Generative Counterfactual Explanations on Graphs
    • Neural Basis Models for Interpretability
    • Exploring Linear Feature Scalability of Vision Transformer for Parameter-efficient Fine-tuning
    • Private Estimation with Public Data
    • Robust Testing in High-Dimensional Sparse Models
    • Dataset Factorization for Condensation
    • One Layer is All You Need
    • Improved Fine-Tuning by Better Leveraging Pre-Training Data
    • Don’t Throw Your Model Checkpoints Away
    • Finding Differences Between Transformers and ConvNets Using Counterfactual Simulation Testing
  • Original post: https://blog.csdn.net/weixin_46782905/article/details/126943221