
Week 8. Feb. 28: Auto-encoders, Network & Table Learning - Orienting #18

Description

avioberoi (Collaborator)

Post your question here about the orienting readings:

“Graph Neural Networks” and “Autoencoders”, Deep Learning: Foundations and Concepts, chapters 13 and 19.

Activity

psymichaelzhu commented on Feb 28, 2025

Most GNNs rely on fixed aggregation functions such as sum, mean, or max-pooling to combine information from neighboring nodes. However, different local structures may place different demands on how neighbor information is aggregated. Inspired by the philosophy of active learning, could we design an adaptive aggregation mechanism that dynamically adjusts how information is combined based on local properties such as node importance?
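For concreteness, here is a minimal sketch of what "adaptive" aggregation can look like, in the spirit of graph attention: each neighbor's contribution is weighted by a learned score instead of a fixed sum or mean. The class name `AttentionAggregator` and the toy graph are illustrative assumptions, not something from the reading.

```python
# Hedged sketch: attention-weighted neighbor aggregation in plain PyTorch,
# in the spirit of GAT. Instead of a fixed sum/mean, each neighbor's
# contribution is weighted by a learned score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionAggregator(nn.Module):                 # hypothetical name
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim, bias=False)
        self.att = nn.Linear(2 * dim, 1, bias=False)  # scores a (node, neighbor) pair

    def forward(self, h, adj):
        # h: (N, dim) node features; adj: (N, N) 0/1 adjacency with self-loops
        z = self.lin(h)
        N = z.size(0)
        zi = z.unsqueeze(1).expand(N, N, -1)
        zj = z.unsqueeze(0).expand(N, N, -1)
        e = self.att(torch.cat([zi, zj], dim=-1)).squeeze(-1)  # pairwise logits
        e = e.masked_fill(adj == 0, float('-inf'))             # only real neighbors
        alpha = F.softmax(e, dim=1)                            # adaptive weights per node
        return F.relu(alpha @ z)                               # weighted aggregation

# toy usage
adj = torch.eye(4) + torch.tensor([[0,1,0,0],[1,0,1,1],[0,1,0,0],[0,1,0,0]], dtype=torch.float)
h = torch.randn(4, 8)
print(AttentionAggregator(8)(h, adj).shape)  # torch.Size([4, 8])
```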

zhian21 commented on Feb 28, 2025

How can recent innovations in self-supervised learning and contrastive objectives improve the robustness of GNNs against over-smoothing and enhance VAEs' ability to generate high-fidelity samples without posterior collapse?
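One common self-supervised ingredient here is a contrastive objective over two augmented views of the same graph (e.g., after random edge dropout): pulling each node's two embeddings together while pushing apart different nodes helps keep representations distinct, which is one way people attack over-smoothing. Below is a hedged sketch of an NT-Xent-style loss; the augmentation step itself is assumed to happen elsewhere.

```python
# Hedged sketch of an NT-Xent-style contrastive objective over two augmented
# "views" of the same nodes (e.g., embeddings produced after random edge
# dropout). The positive pair for each node is its own embedding in the other view.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # z1, z2: (N, d) embeddings of the same N nodes under two augmentations
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature    # (N, N) similarity matrix
    labels = torch.arange(z1.size(0))     # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(16, 32), torch.randn(16, 32)
print(nt_xent(z1, z2).item())
```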

yangyuwang commented on Feb 28, 2025

I am working on several projects involving social network analysis, so I wonder how GNNs can capture the underlying structure of social networks, and what advantages they offer over traditional SNA techniques in representing node relationships. For example, could a GNN be used for node prediction, and would it perform better than an ERGM?
Another question I just had is about neural network architecture itself. It might be a weird question, but since neural networks are themselves graphs, could we use a GNN to predict which neural network architectures would be more efficient?
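For the node-prediction part of the question, the basic machinery is a GCN-style propagation step, H' = σ(D^(-1/2)(A + I)D^(-1/2) H W), stacked a couple of times and read off per node. A minimal sketch in plain PyTorch is below; whether this actually outperforms an ERGM on a given social network is an empirical question.

```python
# Minimal sketch of two GCN propagation steps for node-level prediction.
# Illustrative only; weights are random rather than trained here.
import torch

def gcn_propagate(A, H, W):
    A_hat = A + torch.eye(A.size(0))          # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W

A = torch.tensor([[0.,1.,1.,0.],[1.,0.,0.,1.],[1.,0.,0.,1.],[0.,1.,1.,0.]])
H = torch.randn(4, 8)                          # node features
W1, W2 = torch.randn(8, 16), torch.randn(16, 3)  # 3 node classes
H1 = torch.relu(gcn_propagate(A, H, W1))
logits = gcn_propagate(A, H1, W2)              # 2-hop receptive field
print(logits.argmax(dim=1))                    # predicted class per node
```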

DotIN13 commented on Feb 28, 2025

When using Graph Neural Networks (GNNs) and Autoencoders, what kind of relationships does each model primarily capture? Do GNNs focus more on local node-to-node relationships or global graph structures? Similarly, do autoencoders capture only broader global representations? How do these differences affect their applications in real-world problems, and how do we design such systems to capture different types of relationships in the data?
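One way to see the local-versus-global distinction is through the receptive field: a single round of message passing only mixes 1-hop neighbors, and stacking k layers lets information travel k hops. The tiny NumPy check below (purely illustrative, not from the reading) makes this visible through powers of the self-looped adjacency matrix.

```python
# Why one GNN layer is "local": after k rounds of message passing, information
# can only have travelled k hops, which powers of the adjacency matrix show.
import numpy as np

# path graph 0-1-2-3
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
A_hat = A + np.eye(4, dtype=int)               # self-loops, as in GCN

print((A_hat @ A_hat > 0).astype(int))                       # nonzero (i, j): j reaches i within 2 hops
print((np.linalg.matrix_power(A_hat, 3) > 0).astype(int))    # within 3 hops the whole path is connected
```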

haewonh99 commented on Feb 28, 2025

I'm interested in graph classification for the purpose of comparing graphs, but the explanation of graph classification in the reading seemed a bit short, and I could use some more details. Could you elaborate on what characteristics of graphs are used in graph classification, and how graphs as a whole can be compared?
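The usual recipe is a "readout": after message passing, node embeddings are pooled into one fixed-size vector per graph, so graphs of different sizes can be compared or fed to a classifier. A hedged sketch (with random embeddings standing in for learned ones):

```python
# Hedged sketch of a readout for graph classification: pool node embeddings
# into a single graph-level vector, then compare or classify in that space.
import torch

def readout(node_embeddings, how="mean"):
    # node_embeddings: (num_nodes, d) for a single graph
    if how == "mean":
        return node_embeddings.mean(dim=0)
    if how == "sum":
        return node_embeddings.sum(dim=0)
    return node_embeddings.max(dim=0).values   # "max"

g1 = torch.randn(5, 16)   # a 5-node graph's node embeddings
g2 = torch.randn(9, 16)   # a 9-node graph's node embeddings
v1, v2 = readout(g1), readout(g2)
similarity = torch.cosine_similarity(v1, v2, dim=0)   # one way to compare graphs
print(similarity.item())
```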

chychoy commented on Feb 28, 2025

I would love additional clarification on how different types of relationships can be effectively represented and encoded. For instance, in a social network, an individual (A) may have multiple types of connections, such as being in a romantic relationship with one person (B) while maintaining a friendship with another (C). How does incorporating such relational information affect the dimensionality of the data, assuming the number of characters/nodes remains constant? Finally, what are the best practices for selecting the most relevant relational features to encode?
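One common answer (in the spirit of relational GCNs, not specifically the reading's formulation) is to give each relation type its own transformation and aggregate per relation: the number of nodes and their feature dimensionality stay fixed, while the parameter count grows with the number of relation types. The class name and toy relations below are illustrative assumptions.

```python
# Hedged, R-GCN-flavoured sketch: each relation type ("romantic", "friend", ...)
# gets its own weight matrix; messages are mean-aggregated per relation, then summed.
import torch
import torch.nn as nn

class RelationalLayer(nn.Module):              # hypothetical name
    def __init__(self, num_relations, dim):
        super().__init__()
        self.weights = nn.ModuleList([nn.Linear(dim, dim, bias=False)
                                      for _ in range(num_relations)])
        self.self_loop = nn.Linear(dim, dim, bias=False)

    def forward(self, h, adjs):
        # h: (N, dim); adjs: list of (N, N) adjacency matrices, one per relation
        out = self.self_loop(h)
        for A, lin in zip(adjs, self.weights):
            deg = A.sum(dim=1, keepdim=True).clamp(min=1)   # avoid divide-by-zero
            out = out + (A @ lin(h)) / deg                  # mean over that relation
        return torch.relu(out)

N, dim = 4, 8
romantic = torch.zeros(N, N); romantic[0, 1] = romantic[1, 0] = 1.0
friend   = torch.zeros(N, N); friend[0, 2] = friend[2, 0] = 1.0
h = torch.randn(N, dim)
print(RelationalLayer(2, dim)(h, [romantic, friend]).shape)  # torch.Size([4, 8])
```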

shiyunc commented on Feb 28, 2025

Just like the black-box problem in other NN applications, GNN models often struggle to explain why the representation of a particular node is updated to a specific value when making predictions. Is it possible to enhance the interpretability of GNNs, helping us understand how they use information from neighboring nodes to make predictions, particularly for complex behavioral predictions within social networks?
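One simple starting point, sketched below under the assumption of the same random-weight GCN as earlier, is gradient-based attribution: take the gradient of a target node's predicted logit with respect to every node's input features, and read large gradient norms as "this neighbor mattered." Dedicated explainers (e.g., GNNExplainer) go further, but this illustrates the idea.

```python
# Hedged sketch of gradient-based attribution for a GNN node prediction:
# gradient of node 0's logit w.r.t. all node features, summarized per node.
import torch

def gcn_propagate(A, H, W):
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W

A = torch.tensor([[0.,1.,1.,0.],[1.,0.,0.,1.],[1.,0.,0.,1.],[0.,1.,1.,0.]])
H = torch.randn(4, 8, requires_grad=True)
W1, W2 = torch.randn(8, 16), torch.randn(16, 2)

logits = gcn_propagate(A, torch.relu(gcn_propagate(A, H, W1)), W2)
logits[0, 1].backward()            # target: node 0's logit for class 1
influence = H.grad.norm(dim=1)     # per-node influence on that prediction
print(influence)                   # nodes inside node 0's receptive field get nonzero scores
```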

youjiazhou commented on Feb 28, 2025

A question about the data input: does a GNN need to absorb multiple different networks at once, or just one graph? How much data is enough for it to get stable results? I am confused about what exactly a GNN is learning, in a more generalized sense.

Also, is there a concept of time in GNNs? Can GNNs incorporate temporal information, be used to model dynamic networks, and predict how edges form over time? If so, how do they learn such things?
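One common recipe for dynamic graphs (a hedged sketch of standard practice, not from the reading) is to aggregate neighbors within each time snapshot and carry node state across snapshots with a recurrent cell; the evolved states can then score whether an edge forms at the next step. The toy snapshots and dot-product edge score below are assumptions for illustration.

```python
# Hedged sketch: snapshot-wise neighbor aggregation + a GRU over time steps.
import torch
import torch.nn as nn

N, dim = 5, 8
snapshots = [torch.rand(N, N).round() for _ in range(3)]   # toy 0/1 adjacency per time step
features = torch.randn(N, dim)

gru = nn.GRUCell(dim, dim)
state = torch.zeros(N, dim)
for A in snapshots:
    deg = A.sum(dim=1, keepdim=True).clamp(min=1)
    msg = (A @ features) / deg           # mean-aggregate neighbors in this snapshot
    state = gru(msg, state)              # evolve each node's state over time

edge_score = state[0] @ state[1]         # higher -> more likely future edge 0-1 (illustrative)
print(edge_score.item())
```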

Sam-SangJoonPark commented on Mar 6, 2025

What is the key difference between network learning and table learning in auto-encoders? How does each approach impact data representation and learning methods?
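Assuming "table learning" means rows treated as independent examples, a hedged sketch of that side of the contrast is below: a plain autoencoder compresses each feature row and reconstructs the row itself, with no relational information involved. (The network-learning side, where the decoder reconstructs edges instead, is sketched further down in this thread.)

```python
# Hedged sketch: a plain autoencoder on tabular rows. Each row is an independent
# example; the objective is to reconstruct the row from a low-dimensional code.
import torch
import torch.nn as nn

table = torch.randn(100, 20)             # 100 rows x 20 columns of tabular data

autoencoder = nn.Sequential(
    nn.Linear(20, 4),                    # encoder: compress each row to 4 dims
    nn.ReLU(),
    nn.Linear(4, 20),                    # decoder: reconstruct the original row
)
recon = autoencoder(table)
loss = nn.functional.mse_loss(recon, table)   # reconstruction objective on rows
print(loss.item())
```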

Daniela-miaut commented on Mar 9, 2025

Processing graph data seems to become very memory-intensive as the size of the data increases, since we store not only the information of each data point but also their pairwise relationships. What are some memory-efficient methods for pre-processing large-scale graph data?
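Two standard memory savers (hedged as common practice rather than anything specific to the reading): store the graph as a sparse CSR matrix, so memory scales with the number of edges rather than N², and train on small sampled neighborhoods in the style of GraphSAGE rather than the full graph.

```python
# Hedged sketch: sparse CSR storage plus uniform neighbor sampling.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
N, num_edges = 10_000, 50_000
rows, cols = rng.integers(0, N, num_edges), rng.integers(0, N, num_edges)
A = csr_matrix((np.ones(num_edges), (rows, cols)), shape=(N, N))  # stores ~edges, not N^2 cells

def sample_neighbors(A, node, k=5):
    neighbors = A.indices[A.indptr[node]:A.indptr[node + 1]]      # CSR row lookup
    if len(neighbors) == 0:
        return np.array([node])
    return rng.choice(neighbors, size=min(k, len(neighbors)), replace=False)

print(sample_neighbors(A, 42))
```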

xpan4869 commented on Mar 9, 2025

Since Graph Neural Networks require carefully designed architectures to maintain invariance to node ordering, how might this translate to applications where the ordering of nodes actually contains meaningful information? For example, in a temporal social network where interaction timing matters?
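A tiny check of what node-order invariance means: relabeling the nodes (permuting the adjacency matrix and the feature matrix consistently) leaves a sum readout unchanged, which is exactly why the ordering itself carries no signal and why meaningful ordering, such as interaction time, is usually injected explicitly as node or edge features (e.g., time encodings).

```python
# Permutation-invariance check for a sum readout over one aggregation step.
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 4
A = rng.integers(0, 2, (N, N)); A = ((A + A.T) > 0).astype(float)   # symmetric 0/1 graph
H = rng.normal(size=(N, d))

P = np.eye(N)[rng.permutation(N)]                     # random permutation matrix
readout = lambda A, H: ((A + np.eye(N)) @ H).sum(axis=0)

print(np.allclose(readout(A, H), readout(P @ A @ P.T, P @ H)))   # True
```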

siyangwu1 commented on Mar 9, 2025

How can the integration of Graph Neural Networks and Autoencoders enhance the learning of representations in data with inherent graph structures, such as social networks or molecular structures?
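One standard integration (a hedged sketch in the spirit of graph autoencoders, not necessarily the chapters' exact formulation): a GCN-style encoder maps nodes to embeddings Z, and an inner-product decoder sigmoid(ZZᵀ) reconstructs the adjacency matrix, giving a reconstruction/link-prediction objective for social or molecular graphs.

```python
# Hedged sketch of a graph autoencoder: GCN-style encoder + inner-product decoder.
import torch

def gcn_propagate(A, H, W):
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W

A = torch.tensor([[0.,1.,1.,0.],[1.,0.,0.,1.],[1.,0.,0.,1.],[0.,1.,1.,0.]])
H = torch.randn(4, 8)
W1 = torch.randn(8, 16, requires_grad=True)
W2 = torch.randn(16, 4, requires_grad=True)

Z = gcn_propagate(A, torch.relu(gcn_propagate(A, H, W1)), W2)   # encoder: node embeddings
A_recon = torch.sigmoid(Z @ Z.t())                              # decoder: predicted edges
loss = torch.nn.functional.binary_cross_entropy(A_recon, A)     # reconstruct the adjacency
loss.backward()
print(loss.item())
```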

CallinDai commented on Mar 14, 2025

We learned that Graph Neural Networks (GNNs) leverage message passing to learn hierarchical representations in structured data rather than relying on predefined statistical measures like traditional network analysis. This makes me think—can GNNs effectively model linguistic hierarchies by dynamically learning dependency structures rather than relying on predefined parse trees? Specifically, do different GNN architectures (e.g., GCN, GAT) capture linguistic relationships such as syntactic dependencies or semantic role labeling in a way that generalizes better than rule-based or tree-based methods? Could this improve robustness in parsing ambiguous sentences where traditional tree-based methods struggle?
