[{"categories":["Classes"],"contents":"Machine learning studies algorithms that build models from data for subsequent use in prediction, inference, and decision making tasks. Although an active field for the last 60 years, the current demand as well as trust in machine learning exploded as increasingly more data become available and the problems needed to be addressed become literally impossible to program directly. In this advanced course we will cover essential algorithms, concepts, and principles of machine learning. Along with the traditional exposition we will learn how these principles are currently being revisited thanks to the recent discoveries in the field.\n1. Introduction Lecture Slides Youtube Lectures Introductions 7:51 Why Machine Learning 12:00 What is Machine Learning 18:57 History of Machine Learning 17:33 Reinforcement Learning 10:32 Course Overview 19:26 The Project 20:03 2. Foundations of learning Lecture Slides Youtube Lectures Formalizing the Problem of Learning 24:19 Inductive Bias 12:03 Can We Bound the Probability of Error? 25:56 3. PAC learnability Lecture Slides Youtube Lectures Main Definitions from Lecture 2 13:52 Agnostic PAC Learning 53:35 Learning via Uniform Convergence 10:15 4. Linear algebra and Optimization (recap) 3Blue1Brown Playlist 5. Linear learning models Lecture Slides Youtube Lectures Linear Decision Boundary 34:10 Perceptron 37:10 Perceptron Extensions 14:09 Linear Classifier for Linearly non Separable Classes 8:59 6. Principal Component Analysis Lecture Slides Youtube lectures Linear Regression 39:24 Linear Algebra Micro Refresher 2:04 Spectral Theorem 25:54 Principal Component Analysis 22:29 Demonstration 17:38 7. Curse of Dimensionality Lecture Slides Youtube lectures Curse of Dimensionality 1:16:27 8. Bayesian Decision Theory Lecture Slides Youtube lectures Bayesian Decision Theory 56:47 9. Parameter estimation: MLE Lecture Slides Youtube Lectures Independence 12:07 Maximum Likelihood Estimation 50:35 MLE as KL-divergence minimization 21:41 10. Parameter estimation: MAP \u0026amp; Naïve Bayes Lecture Slides Youtube Lectures MAP Estimation 56:00 The Naïve Bayes Classifier 37:09 11. Logistic Regression Lecture Slides Youtube Lectures NB to LR 19:49 Defining Logistic Regression 27:42 Solving Logistic Regression 23:35 12. Kernel Density Estimation Lecture Slides Youtube Lectures Non-parametric Density Estimation 1:13:33 13. Support Vector Machines Lecture Slides Youtube Lectures Max Margin Classifier 35:53 Lagrange Multipliers 32:45 Dual Formulation of Linear SVM 10:34 Kernel Trick and Soft Margin 27:28 14. Matrix Factorization Lecture Slides Youtube Lectures Matrix Factorization 1:24:22 15. Stochastic Gradient Descent Lecture Slides Youtube Lectures Stochastic Gradient Descent 1:06:57 16. k-means Clustering Lecture Slides Youtube Lectures Clustering 6:05 Gaussian Mixture Models 16:34 MLE recap 4:20 Hard k-means Clustering 30:27 Soft k-means Clustering 7:18 17. Expectation Maximization Lecture Slides Youtube Lectures Do we even need EM for GMM? 14:39 A \u0026ldquo;hacky\u0026rdquo; GMM estimation 15:17 MLE via EM 38:28 18. Automatic Differentiation Lecture Slides Youtube Lectures Introduction 25:10 Forward Mode AD 26:46 A minute of Backprop 2:26 Reverse mode AD 17:26 19. Nonlinear Embedding Approaches Lecture Slides Youtube Lectures Manifold Learning 20:13 20. 
Model Comparison I Lecture Slides Youtube Lectures Bias Variance Trade-Off 36:52 No Free Lunch Theorem 7:29 Problems with using accuracy as a performance indicator 12:39 Confusion Matrix 25:15 21. Model Comparison II Lecture Slides Youtube Lectures Cross validation and hyperopt 29:08 Expected Value Framework 22:48 Visualizing Model Performance 1 31:02 Receiver Operating Characteristics 22:34 22. Model Calibration Lecture Slides Youtube Lectures On Model Calibration 36:53 23. Convolutional Neural Networks Lecture Slides Youtube Lectures Building Blocks 39:22 Skip Connection 38:46 Fully Convolutional Networks 8:07 Semantic Segmentation with Twists 23:40 Special Convolutions 20:15 24. Word Embedding Lecture Slides Youtube Lectures Introduction 10:35 Semantic Matrix 30:26 word2vec 54:22 ","permalink":"https://neuroneural.github.io/posts/advancedml/","tags":null,"title":"Advanced Machine Learning"},{"categories":["Publications"],"contents":"Authors : Kseniya Solovyeva, David Danks, Mohammadsajad Abavisani, Sergey Plis\nPublication date : 2023/2/20\nJournal : Proceedings of Machine Learning Research\nVolume : 13\nPublisher : Cambridge MA: JMLR\nDescription\nDomain scientists interested in causal mechanisms are usually limited by the frequency at which they can collect the measurements of social, physical, or biological systems. A common and plausible assumption is that higher measurement frequencies are the only way to gain more informative data about the underlying dynamical causal structure. This assumption is a strong driver for designing new, faster instruments, but such instruments might not be feasible or even possible. In this paper, we show that this assumption is incorrect: there are situations in which we can gain additional information about the causal structure by measuring more slowly than our current instruments. We present an algorithm that uses graphs at multiple measurement timescales to infer the underlying causal structure, and show that inclusion of structures at slower timescales can nonetheless reduce the size of the equivalence class of possible causal structures. We provide simulation data about the probability of cases in which deliberate undersampling yields a gain, as well as the size of this gain.\nView article\n","permalink":"https://neuroneural.github.io/posts/causal-learning-through-deliberate-undersampling/","tags":["Paper","Publications"],"title":"Causal Learning through Deliberate Undersampling"},{"categories":["Publications"],"contents":"Authors : Alex Fedorov, Eloy Geenjaar, Lei Wu, Tristan Sylvain, Thomas P DeRamus, Margaux Luck, Maria Misiura, R Devon Hjelm, Sergey M Plis, Vince D Calhoun\nPublication date : 2022/9/7\nJournal : arXiv preprint arXiv:2209.02876\nDescription\nRecent neuroimaging studies that focus on predicting brain disorders via modern machine learning approaches commonly include a single modality and rely on supervised over-parameterized models. However, a single modality provides only a limited view of the highly complex brain. Critically, supervised models in clinical settings lack accurate diagnostic labels for training. Coarse labels do not capture the long-tailed spectrum of brain disorder phenotypes, which leads to a loss of generalizability that makes such models less useful in diagnostic settings. This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data. 
We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion. The taxonomy forms a family of decoder-free models with reduced computational complexity and a propensity to capture multi-scale relationships between local and global representations of the multimodal inputs. We conduct a comprehensive evaluation of the taxonomy using functional and structural magnetic resonance imaging (MRI) data across a spectrum of Alzheimer’s disease phenotypes and show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training. The proposed multimodal self-supervised learning yields representations with improved classification performance for both modalities. The concomitant rich and flexible unsupervised deep learning framework captures complex multimodal relationships and provides predictive …\nView article\n","permalink":"https://neuroneural.github.io/posts/self-supervised-multimodal-neuroimaging-yields-predictive-representations-for-a-spectrum-of-alzheimers-phenotypes/","tags":["Paper","Publications"],"title":"Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes"},{"categories":["Publications"],"contents":"Authors : Xinhui Li, Alex Fedorov, Mrinal Mathur, Anees Abrol, Gregory Kiar, Sergey Plis, Vince Calhoun\nPublication date : 2022/8/27\nJournal : arXiv preprint arXiv:2208.12909\nDescription\nDeep learning has been widely applied in neuroimaging, including predicting brain-phenotype relationships from magnetic resonance imaging (MRI) volumes. MRI data usually requires extensive preprocessing before it is ready for modeling, even via deep learning, in part due to its high dimensionality and heterogeneity. A growing array of MRI preprocessing pipelines has been developed, each with its own strengths and limitations. Recent studies have shown that pipeline-related variation may lead to different scientific findings, even when using identical data. Meanwhile, the machine learning community has emphasized the importance of shifting from model-centric to data-centric approaches, given that data quality plays an essential role in deep learning applications. Motivated by this idea, we first evaluate how preprocessing pipeline selection can impact the downstream performance of a supervised learning model. We next propose two pipeline-invariant representation learning methodologies, MPSL and PXL, to improve consistency in classification performance and to capture similar neural network representations between pipeline pairs. Using 2000 human subjects from the UK Biobank dataset, we demonstrate that both models present unique advantages, in particular that MPSL can be used to improve out-of-sample generalization to new pipelines, while PXL can be used to improve predictive performance consistency and representational similarity within a closed pipeline set. 
These results suggest that our proposed models can be applied to overcome pipeline-related biases and to improve reproducibility in neuroimaging …\nView article\n","permalink":"https://neuroneural.github.io/posts/pipeline-invariant-representation-learning-for-neuroimaging/","tags":["Paper","Publications"],"title":"Pipeline-Invariant Representation Learning for Neuroimaging"},{"categories":["Publications"],"contents":"Authors : Md Rahman, Usman Mahmood, Noah Lewis, Harshvardhan Gazula, Alex Fedorov, Zening Fu, Vince D Calhoun, Sergey M Plis\nPublication date : 2022/7/21\nJournal : Scientific Reports\nVolume : 12\nIssue : 1\nPages : 1-15\nPublisher : Nature Publishing Group\nDescription\nBrain dynamics are highly complex and yet hold the key to understanding brain function and dysfunction. The dynamics captured by resting-state functional magnetic resonance imaging data are noisy, high-dimensional, and not readily interpretable. The typical approach of reducing these data to low-dimensional features and focusing on the most predictive features comes with strong assumptions and can miss essential aspects of the underlying dynamics. In contrast, introspection of discriminatively trained deep learning models may uncover disorder-relevant elements of the signal at the level of individual time points and spatial locations. Yet, the difficulty of reliable training on high-dimensional, low-sample-size datasets and the unclear relevance of the resulting predictive markers prevent the widespread use of deep learning in functional neuroimaging. In this work, we introduce a deep learning framework to learn …\nView article\n","permalink":"https://neuroneural.github.io/posts/interpreting-models-interpreting-brain-dynamics/","tags":["Paper","Publications"],"title":"Interpreting models interpreting brain dynamics"},{"categories":["Publications"],"contents":"Authors : Xinhui Li, Eloy Geenjaar, Zening Fu, Sergey Plis, Vince Calhoun\nPublication date : 2022/7/11\nConference : 2022 44th Annual International Conference of the IEEE Engineering in Medicine \u0026 Biology Society (EMBC)\nPages : 1477-1480\nPublisher : IEEE\nDescription\nMental disorders such as schizophrenia have been challenging to characterize due in part to their heterogeneous presentation in individuals. Most studies have focused on identifying group differences and have typically ignored the heterogeneous patterns within groups. Here we propose a novel approach based on a variational autoencoder (VAE) to interpolate static functional network connectivity (sFNC) across individuals, with group-specific patterns between schizophrenia patients and controls captured simultaneously. We then visualize the original sFNC in a 2D grid according to the samples in the VAE latent space. We observe a high correspondence between the generated and the original sFNC. 
The proposed framework facilitates data visualization and can potentially be applied to predict the stage at which a subject falls within a disorder continuum as well as to characterize individual heterogeneity within and …\nView article\n","permalink":"https://neuroneural.github.io/posts/mind-the-gap-functional-network-connectivity-interpolation-between-schizophrenia-patients-and-controls-using-a-variational-autoencoder/","tags":["Paper","Publications"],"title":"Mind the gap: functional network connectivity interpolation between schizophrenia patients and controls using a variational autoencoder"},{"categories":["Publications"],"contents":"Authors : Md Abdur Rahaman, Eswar Damaraju, Debbrata K Saha, Sergey M Plis, Vince D Calhoun\nPublication date : 2022/6/1\nJournal : Human Brain Mapping\nVolume : 43\nIssue : 8\nPages : 2503-2518\nPublisher : John Wiley \u0026 Sons, Inc.\nDescription\nDynamic functional network connectivity (dFNC) analysis is a widely used approach for capturing brain activation patterns, connectivity states, and network organization. However, a typical sliding window plus clustering (SWC) approach for analyzing dFNC models the system through a fixed sequence of connectivity states. SWC assumes connectivity patterns span throughout the brain, but they are relatively spatially constrained and temporally short‐lived in practice. Thus, SWC is designed to capture neither transient dynamic changes nor heterogeneity across subjects/time. We propose a state‐space time series summarization framework called “statelets” to address these shortcomings. It models functional connectivity dynamics at fine‐grained timescales, adapting time series motifs to changes in connectivity strength, and constructs a concise yet informative representation of the original data that conveys easily …\nView article\n","permalink":"https://neuroneural.github.io/posts/statelets-capturing-recurrent-transient-variations-in-dynamic-functional-network-connectivity/","tags":["Paper","Publications"],"title":"Statelets: Capturing recurrent transient variations in dynamic functional network connectivity"},{"categories":["Publications"],"contents":"Authors : Mohammadsajad Abavisani, David Danks, Sergey Plis\nPublication date : 2022/5/18\nJournal : arXiv preprint arXiv:2205.09235\nDescription\nGraphical structures estimated by causal learning algorithms from time series data can provide highly misleading causal information if the causal timescale of the generating process fails to match the measurement timescale of the data. Although this problem has been recently recognized, practitioners have limited resources to respond to it, and so must continue using models that they know are likely misleading. Existing methods either (a) require that the difference between causal and measurement timescales is known; or (b) can handle only a very small number of random variables when the timescale difference is unknown; or (c) apply to only pairs of variables, though with fewer assumptions about prior knowledge; or (d) return impractically many solutions. This paper addresses all four challenges. We combine constraint programming with both theoretical insights into the problem structure and prior information about admissible causal interactions. 
The resulting system provides a practical approach that scales to significantly larger sets (\u003e100) of random variables, does not require precise knowledge of the timescale difference, supports edge misidentification and parametric connection strengths, and can provide the optimum choice among many possible solutions. The cumulative impact of these improvements is a gain of multiple orders of magnitude in speed and informativeness.\nView article\n","permalink":"https://neuroneural.github.io/posts/constraint-based-causal-structure-learning-from-undersampled-graphs/","tags":["Paper","Publications"],"title":"Constraint-Based Causal Structure Learning from Undersampled Graphs"},{"categories":["Publications"],"contents":"Authors : Debbrata K Saha, Vince D Calhoun, Yuhui Du, Zening Fu, Soo Min Kwon, Anand D Sarwate, Sandeep R Panta, Sergey M Plis\nPublication date : 2022/5/1\nJournal : Human Brain Mapping\nVolume : 43\nIssue : 7\nPages : 2289-2310\nPublisher : John Wiley \u0026 Sons, Inc.\nDescription\nPrivacy concerns for rare disease data, institutional or IRB policies, and access to local computational or storage resources or download capabilities are among the reasons that may preclude analyses that pool data at a single site. A growing number of multisite projects and consortia have been formed to function in the federated environment to conduct productive research under constraints of this kind. In this scenario, a quality control tool that visualizes decentralized data in its entirety via global aggregation of local computations is especially important, as it would allow the screening of samples that cannot be jointly evaluated otherwise. To solve this issue, we present two algorithms: decentralized data stochastic neighbor embedding, dSNE, and its differentially private counterpart, DP‐dSNE. We leverage publicly available datasets to simultaneously map data samples located at different sites according to their similarities …\nView article\n","permalink":"https://neuroneural.github.io/posts/privacypreserving-quality-control-of-neuroimaging-datasets-in-federated-environments/","tags":["Paper","Publications"],"title":"Privacy‐preserving quality control of neuroimaging datasets in federated environments"},{"categories":["Publications"],"contents":"Authors : Sunitha Basodi, Rajikha Raja, Bhaskar Ray, Harshvardhan Gazula, Anand D Sarwate, Sergey Plis, Jingyu Liu, Eric Verner, Vince D Calhoun\nPublication date : 2022/4/5\nJournal : Neuroinformatics\nPages : 1-10\nPublisher : Springer US\nDescription\nRecent studies have demonstrated that neuroimaging data can be used to estimate biological brain age, as it captures information about the neuroanatomical and functional changes the brain undergoes during development and the aging process. However, researchers often have limited access to neuroimaging data because of its challenging and expensive acquisition process, thereby limiting the effectiveness of the predictive model. Decentralized models provide a way to build more accurate and generalizable prediction models, bypassing the traditional data-sharing methodology. In this work, we propose a decentralized method for biological brain age estimation using support vector regression models and evaluate it on three different feature sets, including both volumetric and voxelwise structural MRI data as well as resting functional MRI data. 
The results demonstrate that our decentralized brain age …\nView article\n","permalink":"https://neuroneural.github.io/posts/decentralized-brain-age-estimation-using-mri-data/","tags":["Paper","Publications"],"title":"Decentralized Brain Age Estimation using MRI Data"},{"categories":["Publications"],"contents":"Authors : Md Mahfuzur Rahman, Noah Lewis, Sergey Plis\nPublication date : 2022\nConference : ICLR 2022 Workshop on PAIR^2Struct\nDescription\nInterpretability methods for deep neural networks mainly focus on modifying the rules of automatic differentiation or perturbing the input and observing the score drop to determine the most relevant features. Among them, gradient-based attribution methods, such as saliency maps, are arguably the most popular. Still, the produced saliency maps may often lack intelligibility. We address this problem based on recent discoveries in geometric properties of deep neural networks’ loss landscape that reveal the existence of a multiplicity of local minima in the vicinity of a trained model’s loss surface. We introduce two methods that leverage the geometry of the loss landscape to improve interpretability: 1) “Geometrically Guided Integrated Gradients”, applying gradient ascent to each interpolation point of the linear path as a guide; 2) “Geometric Ensemble Gradients”, generating ensemble saliency maps by sampling proximal iso-loss models. Compared to vanilla and integrated gradients, these methods significantly improve saliency maps in quantitative and visual terms. We verify our findings on the MNIST and ImageNet datasets across convolutional, ResNet, and Inception V3 architectures.\nView article\n","permalink":"https://neuroneural.github.io/posts/geometrically-guided-saliency-maps/","tags":["Paper","Publications"],"title":"Geometrically Guided Saliency Maps"},{"categories":["Publications"],"contents":"Authors : Shile Qi, Rogers F Silva, Daoqiang Zhang, Sergey M Plis, Robyn Miller, Victor M Vergara, Rongtao Jiang, Dongmei Zhi, Jing Sui, Vince D Calhoun\nPublication date : 2022/3/1\nJournal : Human Brain Mapping\nVolume : 43\nIssue : 4\nPages : 1280-1294\nPublisher : John Wiley \u0026 Sons, Inc.\nDescription\nAdvances in imaging acquisition techniques allow multiple imaging modalities to be collected from the same subject. Each individual modality offers limited yet unique views of the functional, structural, or dynamic temporal features of the brain. Multimodal fusion provides effective ways to leverage these complementary perspectives from multiple modalities. However, the majority of current multimodal fusion approaches involving functional magnetic resonance imaging (fMRI) are limited to 3D feature summaries that do not incorporate its rich temporal information. Thus, we propose a novel three‐way parallel group independent component analysis (pGICA) fusion method that incorporates the first‐level 4D fMRI data (temporal information included) by parallelizing group ICA into parallel ICA via a unified optimization framework. 
A new variability matrix was defined to capture subject‐wise functional variability and then …\nView article\n","permalink":"https://neuroneural.github.io/posts/three-way-parallel-group-independent-component-analysis-fusion-of-spatial-and-spatiotemporal-magnetic-resonance-imaging-data/","tags":["Paper","Publications"],"title":"Three‐way parallel group independent component analysis: Fusion of spatial and spatiotemporal magnetic resonance imaging data"},{"categories":["Publications"],"contents":"Authors : Weizheng Yan, Gang Qu, Wenxing Hu, Anees Abrol, Biao Cai, Chen Qiao, Sergey M Plis, Yu-Ping Wang, Jing Sui, Vince D Calhoun\nPublication date : 2022/2/24\nJournal : IEEE Signal Processing Magazine\nVolume : 39\nIssue : 2\nPages : 87-98\nPublisher : IEEE\nDescription\nDeep learning (DL) has been extremely successful when applied to the analysis of natural images. By contrast, analyzing neuroimaging data presents some unique challenges, including higher dimensionality, smaller sample sizes, multiple heterogeneous modalities, and a limited ground truth. In this article, we discuss DL methods in the context of four diverse and important categories in the neuroimaging field: classification/prediction, dynamic activity/connectivity, multimodal fusion, and interpretation/visualization. We highlight recent progress in each of these categories, discuss the benefits of combining data characteristics and model architectures, and derive guidelines for the use of DL in neuroimaging data. For each category, we also assess promising applications and major challenges to overcome. Finally, we discuss future directions of neuroimaging DL for clinical applications, a topic of great interest …\nView article\n","permalink":"https://neuroneural.github.io/posts/deep-learning-in-neuroimaging-promises-and-challenges/","tags":["Paper","Publications"],"title":"Deep learning in neuroimaging: Promises and challenges"},{"categories":["Classes"],"contents":"Introduction to Deep Learning Welcome to the Introduction to Deep Learning course! This course is designed to provide you with a solid foundation in the fundamentals of deep learning. Throughout this course, you will learn about the basic building blocks of deep learning, including the basics of machine learning, convolutional neural networks, and natural language processing. You will also gain an understanding of how deep learning algorithms are used to solve a variety of real-world problems, such as image classification and natural language processing, along with a few advanced approaches such as GANs. \nBy the end of the course, you will have a solid understanding of the core concepts and techniques used in deep learning, as well as hands-on experience building and training your own deep learning models using popular frameworks such as PyTorch and Catalyst. \nDr. Sergey Plis is the instructor for this course, bringing his expertise as an active researcher in the fields of neuroscience and computer science. He has extensive experience applying machine learning algorithms to the analysis of brain imaging data. He is also an experienced educator, having taught numerous courses in data science, machine learning, and deep learning at the graduate and undergraduate levels. \nThe hands-on part of the course has been developed by Mrinal Mathur, a seasoned machine learning engineer with experience building and deploying machine learning models for a variety of industries. 
Mrinal has a deep understanding of the underlying mathematical and statistical concepts that power deep learning algorithms, and he has a passion for teaching others about the exciting possibilities of this field. \nTogether, we have designed a comprehensive and engaging course that will provide you with the knowledge and skills you need to succeed in the exciting field of deep learning. \nIntroduction to Deep Learning 1. Introduction Lecture Slides Introduction to Colab Pandas (optional) Lecture Slides NumPy Machine Learning 2. Foundations of Machine Learning Calculus and Optimization\nLecture Slides Linear Regression/Classification\nLecture Slides Perceptron\nLecture Slides 3. Automatic Differentiation Lecture Slides Colab Notebooks Automatic Differentiation programming 4. Practice for Automatic Differentiation Lecture Slides 5. PyTorch Colab Notebooks\nIntroduction to PyTorch and Catalyst 6. Model Comparison Lecture Slides Colab Notebook Model Comparison Computer Vision 7. Computer Vision and Image Processing Lecture Slides Colab Notebook\nIntroduction to Computer Vision 8. Convolutional Neural Networks Lecture Slides Colab Notebook:\nIntroduction to CNNs 9. Image Classification Lecture Slides Colab Notebook\nClassification in Computer Vision 10. Skip Connections and ResNets Colab Notebook\nIntro to Skip Connections and ResNets 11. Segmentation Lecture Slides Colab Notebook\nImage Segmentation 12. Auto-Encoders Lecture Slides Colab Notebooks:\nAuto-Encoders Variational Autoencoders 13. Generative Adversarial Nets Lecture Slides Colab Notebook:\nGenerative Adversarial Nets 14. Regularization Lecture Slides Natural Language Processing 15. Introduction to NLP Lecture Slides Colab Notebooks:\nIntroduction to NLP 16. Recurrent Neural Networks Colab Notebooks:\nRNNs 17. LSTM and GRU Lecture Slides Colab Notebook:\nLSTM and GRU in PyTorch 18. Seq2Seq Lecture Slides Colab Notebooks:\nseq2seq models 19. Attention is all you need! Lecture Slides Colab Notebooks:\nAttention Mechanisms 20. 
Transformers Lecture Slides Colab Notebooks:\nTransformers Advanced Topics (To be added) [Graph Neural Networks] [Reinforcement Learning] [Meta Learning] [Adversarial Learning] [Transfer Learning] [Self Supervised Learning] [Few Shot Learning] [Active Learning] [Multi Task Learning] [Multi Modal Learning] [Domain Adaptation] [Continual Learning] [Causal Learning] ","permalink":"https://neuroneural.github.io/posts/introduction_to_dl/","tags":null,"title":"Introduction to Deep Learning"},{"categories":["Publications"],"contents":"Authors : Harshvardhan Gazula, Kelly Rootes-Murdy, Bharath Holla, Sunitha Basodi, Zuo Zhang, Eric Verner, Ross Kelly, Pratima Murthy, Amit Chakrabarti, Debasish Basu, Subodh Bhagyalakshmi Nanjayya, Rajkumar Lenin Singh, Roshan Lourembam Singh, Kartik Kalyanram, Kamakshi Kartik, Kumaran Kalyanaraman, Krishnaveni Ghattu, Rebecca Kuriyan, Sunita Simon Kurpad, Gareth J Barker, Rose Dawn Bharath, Sylvane Desrivieres, Meera Purushottam, Dimitri Papadopoulos Orfanos, Eesha Sharma, Matthew Hickman, Mireille Toledano, Nilakshi Vaidya, Tobias Banaschewski, Arun LW Bokde, Herta Flor, Antoine Grigis, Hugh Garavan, Penny Gowland, Andreas Heinz, Rüdiger Brühl, Jean-Luc Martinot, Marie-Laure Paillère Martinot, Eric Artiges, Frauke Nees, Tomáš Paus, Luise Poustka, Juliane H Fröhner, Lauren Robinson, Michael N Smolka, Henrik Walter, Jeanne Winterer, Robert Whelan, Jessica A Turner, Anand D Sarwate, Sergey M Plis, Vivek Benegal, Gunter Schumann, Vince D Calhoun, IMAGEN Consortium\nPublication date : 2022/1/1\nJournal : bioRxiv\nPublisher : Cold Spring Harbor Laboratory\nDescription\nWith the growth of decentralized/federated analysis approaches in neuroimaging, the opportunities to study brain disorders using data from multiple sites have grown multi-fold. One such initiative is Neuromark, a fully automated spatially constrained independent component analysis (ICA) pipeline used to link brain network abnormalities among different datasets, studies, and disorders while leveraging subject-specific networks. In this study, we implement the Neuromark pipeline in COINSTAC, an open-source neuroimaging framework for collaborative/decentralized analysis. Decentralized analysis of nearly 2000 resting-state functional magnetic resonance imaging datasets collected at different sites across two cohorts and co-located in different countries was performed to study the resting brain functional network connectivity changes in adolescents who smoke and consume alcohol. Results showed hypoconnectivity across the majority of networks, including sensory, default mode, and subcortical domains (more for alcohol than smoking), as well as decreased low-frequency power. These findings suggest that globally reduced synchronization is associated with both tobacco and alcohol use. 
This work demonstrates the utility and incentives associated with large-scale decentralized collaborations spanning multiple sites.\nView article\n","permalink":"https://neuroneural.github.io/posts/federated-analysis-in-coinstac-reveals-functional-network-connectivity-and-spectral-links-to-smoking-and-alcohol-consumption-in-nearly-2000-adolescent-brains/","tags":["Paper","Publications"],"title":"Federated analysis in COINSTAC reveals functional network connectivity and spectral links to smoking and alcohol consumption in nearly 2,000 adolescent brains"},{"categories":["Publications"],"contents":"Authors : Haleh Falakshahi, Hooman Rokham, Zening Fu, Armin Iraji, Daniel H Mathalon, Judith M Ford, Bryon A Mueller, Adrian Preda, Theo GM van Erp, Jessica A Turner, Sergey Plis, Vince D Calhoun\nPublication date : 2022/1\nJournal : Network Neuroscience\nPages : 1-45\nDescription\nGraph-theoretical methods have been widely used to study human brain networks in psychiatric disorders. However, the focus has primarily been on global graph metrics, with little attention to the information contained in paths connecting brain regions. Details of the disruption of these paths may be highly informative for understanding disease mechanisms. To detect the absence or addition of multistep paths in the patient group, we provide an algorithm estimating edges that contribute to these paths with reference to the control group. We next examine where pairs of nodes were connected through paths in both groups by using a covariance decomposition method. We apply our method to study resting-state fMRI data in schizophrenia versus controls. Results show several disconnectors in schizophrenia within and between functional domains, particularly within the default mode and cognitive control networks …\nView article\n","permalink":"https://neuroneural.github.io/posts/path-analysis-a-method-to-estimate-altered-pathways-in-time-varying-graphs-of-neuroimaging-data/","tags":["Paper","Publications"],"title":"Path analysis: A method to estimate altered pathways in time-varying graphs of neuroimaging data"},{"categories":["Publications"],"contents":"Authors : Samin Yeasar Arnob, Riyasat Ohib, Sergey Plis, Doina Precup\nPublication date : 2021/12/31\nJournal : arXiv preprint arXiv:2112.15579\nDescription\nDeep Reinforcement Learning (RL) is a powerful framework for solving complex real-world problems. Large neural networks employed in the framework are traditionally associated with better generalization capabilities, but their increased size entails the drawbacks of extensive training duration, substantial hardware resources, and longer inference times. One way to tackle this problem is to prune neural networks, leaving only the necessary parameters. State-of-the-art concurrent pruning techniques for imposing sparsity perform demonstrably well in applications where data distributions are fixed. However, they have not yet been substantially explored in the context of RL. We close the gap between RL and single-shot pruning techniques and present a general pruning approach to Offline RL. We leverage a fixed dataset to prune neural networks before the start of RL training. We then run experiments varying the network sparsity level and evaluating the validity of pruning-at-initialization techniques in continuous control tasks. Our results show that with 95% of the network weights pruned, Offline-RL algorithms can still retain performance in the majority of our experiments. 
To the best of our knowledge, no prior work utilizing pruning in RL retained performance at such high levels of sparsity. Moreover, pruning-at-initialization techniques can be easily integrated into any existing Offline-RL algorithm without changing the learning objective.\nView article\n","permalink":"https://neuroneural.github.io/posts/single-shot-pruning-for-offline-reinforcement-learning/","tags":["Paper","Publications"],"title":"Single-shot pruning for offline reinforcement learning"},{"categories":["Publications"],"contents":"Authors : Elena A Allen, Eswar Damaraju, Sergey M Plis, Erik B Erhardt, Tom Eichele, Vince D Calhoun\nPublication date : 2021/11/22\nJournal : Neuroinformatics\nPages : 1-14\nPublisher : Springer US\nDescription\nThe field of neuroimaging has embraced sharing data to collaboratively advance our understanding of the brain. However, data sharing, especially across sites with large amounts of protected health information (PHI), can be cumbersome and time intensive. Recently, there has been a greater push towards collaborative frameworks that enable large-scale federated analysis of neuroimaging data without the data having to leave its original location. However, there still remains a need for a standardized federated approach that not only allows for data sharing adhering to the FAIR (Findability, Accessibility, Interoperability, Reusability) data principles, but also streamlines analyses and communication while maintaining subject privacy. In this paper, we review a non-exhaustive list of neuroimaging analytic tools and frameworks currently in use. We then provide an update on our federated neuroimaging analysis …\nView article\n","permalink":"https://neuroneural.github.io/posts/federated-analysis-of-neuroimaging-data-a-review-of-the-field/","tags":["Paper","Publications"],"title":"Federated analysis of neuroimaging data: A review of the field"},{"categories":["Publications"],"contents":"Authors : Elena A Allen, Eswar Damaraju, Sergey M Plis, Erik B Erhardt, Tom Eichele, Vince D Calhoun\nPublication date : 2014/3/1\nJournal : Cerebral Cortex\nVolume : 24\nIssue : 3\nPages : 663-676\nPublisher : Oxford University Press\nDescription\nSpontaneous fluctuations are a hallmark of recordings of neural signals, emergent over time scales spanning milliseconds and tens of minutes. However, investigations of intrinsic brain organization based on resting-state functional magnetic resonance imaging have largely not taken into account the presence and potential of temporal variability, as most current approaches to examine functional connectivity (FC) implicitly assume that relationships are constant throughout the length of the recording. In this work, we describe an approach to assess whole-brain FC dynamics based on spatial independent component analysis, sliding time window correlation, and k-means clustering of windowed correlation matrices. The method is applied to resting-state data from a large sample (n = 405) of young adults. Our analysis of FC variability highlights particularly flexible connections between regions in lateral parietal …\nView article\n","permalink":"https://neuroneural.github.io/posts/tracking-whole-brain-connectivity-dynamics-in-the-resting-state/","tags":["Paper","Publications"],"title":"Tracking whole-brain connectivity dynamics in the resting state"}]