This post is part three of a short series on social psychology, made up of thought papers I wrote for the Affective Science (PSY5430) course at the University of Toronto, Department of Psychology.
This week’s post focuses on the physiological aspect of emotions, namely the investigation of the neurological configuration of the brain that deals with affect, a field appropriately called Affective Neuroscience.
Our original understanding of a living organism’s body, especially a primate’s, is in terms of the physical configuration of its various parts and the obvious functions they perform. The heart is clearly made for pumping blood throughout our system. The legs are meant to keep us upright and give us the ability to move about. The eye’s most obvious function is to convert light into signals our brain can interpret. This organ-function view of the body has been applied to the brain as well, which is the locationist view [Lindquist et al., 2012]. With the advancement of tools, we discovered that the brain is not simply a set of modules, but a network of neurones. The obvious question, then, is whether the brain can be understood both as individual “parts”, analogous to a leg or an eye, and as a network of connections. Whether the brain is modular, a network, or both is the question addressed by the constructionist view [Lindquist et al., 2012].
If we consider the constructionist view of the brain, an internal modularity can still exist in both physiological and functional forms. Functional in the sense that a group of neurones, not necessarily spatially close, works together to perform a task. The formation of modules and hubs has been observed in artificial neural networks when various real-life constraints are placed on the communication between nodes [Pan and Sinha, 2007]. This research found that modules form naturally when a cost is placed on the spatial distance (based on edge counting) between nodes. This is a realistic view of a physical organ which must balance energy consumption and effectiveness.
Consider the amygdala, a physical hub connecting various external stimuli to parts of the brain. The amygdala is closely related to fear, meaning fear should be correlated with those senses [Janig, 2003]. Its function has to do with quick responses and interpretations, so it makes sense that the amygdala takes the architectural shape of a tightly connected and efficient hub. In contrast to physical hubs, network hubs in the human brain serve a different type of function [van den Heuvel and Sporns, 2013]. They allow for high levels of functional diversity and functional synchronization between cortical regions. Interpreting emotions is a complicated task, and its processing has been observed to be divided between a number of regions with a multitude of complex connections. For example, the recognition of emotion from facial expressions and from prosody is split between the right and left temporal lobes [Berridge, 2003]. Such hubs form early in brain development. For example, the connections between the medial posterior cingulate, frontal, and insular regions are already present in the postnatal infant and child brain [van den Heuvel and Sporns, 2013]. Due to the sheer number of neurones in the brain and their complex interconnectedness, it may be impractical, if possible at all, to try to identify the functions of individual neurones. Instead, it is more feasible to identify sub-networks of neurones which function together to achieve a task. It would also be more practical to identify which hubs of neurones function largely as incoming or outgoing points, the so-called sink or source hubs [van den Heuvel and Sporns, 2013]. It may make sense to call a group of neurones a module based on their spatial proximity, but that should not be viewed as the only architecture the brain utilizes.
[Berridge, 2003] Berridge, K. C. (2003). Comparing the emotional brains of humans and other animals. In Handbook of affective sciences, chapter 3. Oxford University Press, New York, New York, USA.
[Janig, 2003] Janig, W. (2003). The Autonomic Nervous System and Its Coordination by the Brain.
[Lindquist et al., 2012] Lindquist, K. A., Wager, T. D., Kober, H., Bliss-Moreau, E., and Barrett, L. F. (2012). The brain basis of emotion: a meta-analytic review. Behavioral and Brain Sciences, 35(3):121–143.
[Pan and Sinha, 2007] Pan, R. and Sinha, S. (2007). Modular networks emerge from multiconstraint optimization. Physical Review E, 76(4):045103.
[van den Heuvel and Sporns, 2013] van den Heuvel, M. P. and Sporns, O. (2013). Network hubs in the human brain. Trends in cognitive sciences, 17(12):683–96.
This post is part two of a short series on social psychology, made up of thought papers I wrote for the Affective Science (PSY5430) course at the University of Toronto, Department of Psychology.
Our physical body reacts very specifically to many different types of physical stimuli, especially when it comes to pain. Whether the pain is discrete or continuous, our body has evolved to quickly identify it, alert our minds, bring attention to its source, and remove it. Unlike this physical pain-attention system, our minds must interact with an enormous variety of continuous stimuli from the environment, be it visual, auditory, or haptic. To process this complexity of information, our brains are constructed as a network of neurones, which is much better suited to this kind of continuous and variable stimulation than a brain separated into discrete modules. In terms of emotions, Barrett identifies this view of the brain as Psychological Constructivism [Barrett, 2009]. This approach states that psychological states are products that emerge from the interplay of more basic, all-purpose components, and it makes a discrete separation between a mechanism (the brain) and what it produces (emotions). Once this view was defined, it allowed researchers to separate the physical brain from the objects we perceive, which led to the re-examination of the brain at a lower level with a more empirical view. The interaction between cognition, reactions, and what Russell calls core affect is much more complex than could be processed by individual modules [Russell, 2003]. Evidence to support the idea that emotions are natural kinds is hard to identify, presumably because it simply does not exist [Barrett, 2006].
If the body is a self-regulating system, it would make sense for the brain to also self-regulate in order to stay in a state of equilibrium, especially in response to stimuli which disrupt that equilibrium. In a similar way that the brain is notified of a sharp pain in a particular part of the body, the brain can “sense” a change in its equilibrium and react accordingly. Whether the required action is immediate (run from a bear) or prolonged (cope with sad news), there is constant activity to bring the mental state to equilibrium, and its progress is constantly re-evaluated. The cyclic nature of this process underlies what Cunningham et al. call the Iterative Reprocessing (IR) Model [Cunningham et al., 2013]. Again, in the same way that the intensity of pain is felt in varying degrees indicating urgency, the level of entropy registered by the IR model indicates the urgency with which resources are allocated to achieve equilibrium. The sources of non-equilibrium are the differences between expected and actual results. By adopting the constructivist view of non-modular brain functions, and relying on the heterogeneity of the network structure, the IR model is free to adopt an empirically observed interpretation of entropy stabilization.
[Barrett, 2006] Barrett, L. F. (2006). Are Emotions Natural Kinds? Perspectives on Psychological Science, 1(1):28–58.
[Barrett, 2009] Barrett, L. F. (2009). The Future of Psychology: Connecting Mind to Brain. Perspectives on Psychological Science, 4(4):326–339.
[Cunningham et al., 2013] Cunningham, W. A., Dunfield, K. A., and Stillman, P. E. (2013). Emotional States from Affective Dynamics. Emotion Review, 5(4):344–355.
[Russell, 2003] Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological Review, 110(1):145–172.
This post is part one of a short series on social psychology, made up of thought papers I wrote for the Affective Science (PSY5430) course at the University of Toronto, Department of Psychology.
"PicassoGuernica". Via Wikipedia - http://en.wikipedia.org/wiki/File:PicassoGuernica.jpg
The body is a self-regulating system, from the way organs function in keeping us alive, to the way the brain consciously and unconsciously regulates communication between those organs. Emotions are a vital part of this system, helping us regulate internal states, including needs for external resources, both physical and psychological. The precise source of emotions within the brain is difficult to identify for three key reasons. Firstly, elicitors of emotions come from various parts of the brain [Ekman, 1977]. In recent years, much progress has been made in identifying regions which elicit neurobiological and neurochemical activity associated with the sensation and recognition of emotions. Secondly, so-called basic emotions are not directly sensed by human beings [Izard, 1992]. What we sense is a combination of “basic emotions”, with varying degrees, or levels, of valence [Panksepp, 2000]. The elicitors of those emotions do not trigger them with consistent degrees, if at all. Any elicitor works in combination with other elicitors. Finally, the stimuli eliciting emotions are not consistent. There are many internal and external factors which affect how we interpret stimuli. These factors make it difficult to correctly define the system of emotions, and scientists are looking for strategies to tackle this complexity. The categorization of these factors and their analysis within the context of particular categories is a major step in tackling the complexity of our bodies [Panksepp, 2000].
Over time, our needs became more complex, whether through the need to be more innovative in satisfying them (harder-to-acquire resources, etc.) or through the diminishing effect resources have on satisfying them (a growing family, building up tolerance, etc.). Adaptation to changing environmental conditions is a key evolutionary trait of any organism. For humans, and arguably other mammals, emotions play a key role in setting and successfully acquiring goals through various motivating factors. The need to adapt to a changing environment is met not only through evolutionary changes in our physical bodies, but also through our ability to learn new skills. Choosing which skills produce the maximum outcome does not occur through evolution, although one might argue that the ability to choose does, by developing larger cognitive capacity. There are also social factors which affect our survival rate, such as the ability to care for others and be cared for. This explains why many emotions have an external side that communicates to others our internal states in relation to external goals. Our ability to feel empathy for others, to evoke those feelings in other people towards us, and to work together towards a common goal demonstrates the interplay between emotion, motivation, and the execution of a plan to achieve an objective. More importantly, it demonstrates the interconnectedness of physical needs, emotional interpretation, and cognitive reaction.
Much confusion in categorizing emotions is caused by the apparent overlap of similar emotions which have different elicitors [Ekman, 1977]. For example, some emotions have a defensive function, protecting us from physical harm. We may have the need for self-preservation from both physical and psychological harm; the triggers are very different, but we interpret them in similar ways. Hunger is a physical reaction to our body not having enough resources to function. An uncomfortable feeling of weakness sets in, along with physical reactions such as trembling limbs. This type of physical distress must be perceived emotionally, otherwise we would not have the cognitive reaction to acquire food. The gathering of food is a long-term goal with many sub-goals. We must have the cognitive ability to identify that goal, break it up into sub-goals, assess whether we can satisfy those sub-goals, and finally organize them into a comprehensive and realistic plan [Hobbs and Gordon, 2010] [Chulef et al., 2001] [Bargh and Morsella, 2008] [Schank and Abelson, 1977]. This example demonstrates the interconnectedness between physical needs, the emotional perception of those needs, and the role cognition plays. Continuing the example of hunger, humans, and possibly other primates, developed the ability to persuade others to collect food for us when we ourselves are not able, through our ability to elicit empathy and altruism in others [Panksepp, 2000].
[Bargh and Morsella, 2008] Bargh, J. A. and Morsella, E. (2008). The Unconscious Mind. Perspectives on Psychological Science, 3(1):73–79.
[Chulef et al., 2001] Chulef, A., Read, S., and Walsh, D. (2001). A hierarchical taxonomy of human goals. Motivation and Emotion, 25(3):191–232.
[Ekman, 1977] Ekman, P. (1977). Biological and Cultural Contributions to Body and Facial Movement in the Expression of Emotions.
[Hobbs and Gordon, 2010] Hobbs, J. and Gordon, A. (2010). Goals in a Formal Theory of Commonsense Psychology. In FOIS, pages 59–72.
[Izard, 1992] Izard, C. E. (1992). Basic emotions, relations among emotions, and emotion-cognition relations. Psychological review, 99(3):561–5.
[Panksepp, 2000] Panksepp, J. (2000). Emotions as Natural Kinds within the Mammalian Brain. In Biological and Neurophysiological Approaches, chapter 9, pages 137–156.
[Schank and Abelson, 1977] Schank, R. C. and Abelson, R. P. (1977). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Psychology Press.
- Gajderowicz, B., Fox, M. S., Grüninger, M. (in press). Requirements for an Ontological Foundation for Modelling Social Service Chains. In Y. Guan, J. Liao (Eds.), Proceedings of the 2014 Industrial and Systems Engineering Research Conference. Montréal, QC.
- Gajderowicz, B.; Sadeghian, A; Soutchanski, M.: Ontology Enhancement Through Inductive Decision Trees. In: Uncertainty and Reasoning for the Semantic Web II, da Costa, P.C.G., d’Amato, C., Fanizzi, N., Laskey, K.B., Laskey, K.J., Lukasiewicz, T., Nickles, M., Pool, M. (eds.), ISWC International Workshop, URSW 2008-2010 Revised Selected and Invited Papers. LNCS (LNAI). Springer, Heidelberg (2012) (in print)
- Gajderowicz, B.; Sadeghian, A; Soutchanski, M.: Using Decision Trees for Inductively Driven Semantic Integration and Ontology Matching, Master’s thesis, Ryerson University, 250 Victoria Street, Toronto, Ontario, Canada, 2011.
- Gajderowicz, B., Sadeghian, A.: Ontology Granulation Through Inductive Decision Trees. In: Proceedings of the 4th International Semantic Web Conference Workshop on Uncertainty Reasoning for the Semantic Web, Washington D.C, USA, pp. 39-50 (2009)
- Gajderowicz, B., Sadeghian, A., and dos Santos, M. 2009. Expectation Maximization Enhancement with Evolution Strategy for Stochastic Ontology Mapping. In Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (Montreal, Québec, Canada, July 08 - 12, 2009). GECCO 2009. ACM, New York, NY, 1847-1848.
- Rahnama, H., Madni, A.M., Sadeghian, A., Mawson, C., Gajderowicz, B. Adaptive Context for Generic Pattern Matching in Ad Hoc Social Networks. In the Proceedings of the 2008 IEEE 3rd International Symposium on Communications, Control and Signal Processing , ISCCSP 2008, art. no. 4537195, pp. 73-78.
Thanks for the great write up.
I’ve often used the term folksonomy when introducing people to semantics. Tags are simple concepts to understand and, more importantly, to use. My hope here is that once people start using folksonomies to categorise content, the leap to using a vocabulary to “tag” more granular ideas in that content will be easier.
It’s true that for data integration, mashups don’t require semantics. This is because integration is performed manually, on concepts that are easily mapped. The most difficult part is the volume of mappings and conversions to perform, not their complexity or their verification. The small number of more complex definitions are simply handled manually as well, albeit in greater detail.
The pervasiveness of semantics is completely overlooked, so much so that semantic technologies are treated as a black box, without much thought about the underlying libraries. A simple API call is as far as a developer’s or a project manager’s understanding needs to stretch, and no further.
Controlled vocabularies are being used for specific reasons, and again API consumers use only the bare minimum of what they require to get the job done. Frédérick Giasson’s recent post on ontologies explains the different types of ontologies and how they tie into the generally accepted understanding. Unfortunately, this understanding usually stops at a folksonomy, a closed-world vocabulary, or Linked Data. Anything above arbitrary semantics is not required for any of these structures.
Your point about the real contribution of Linked Data is spot on. RDF is sufficient for the majority of what people want to use it for. And really, when used without any semantics, RDF simply becomes a microformat (RDFa); I’ve discussed the limitations of this elsewhere. RDF can easily represent the relations between relational tables and columns. But it would not suffice to represent the business logic. More expressive languages like OWL are required.
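A toy sketch of that boundary, with all table, column, and URI names being hypothetical: mapping a relational row to RDF-style triples is mechanical, but a business rule such as “orders over $500 need manager approval” has no declarative home in plain RDF and is exactly where OWL or a rule language earns its keep.

```python
# Hypothetical relational row: orders(id=7, customer_id=42, total=650.0)
row = {"id": 7, "customer_id": 42, "total": 650.0}

# Translating the row into triples is a direct, mechanical mapping:
subject = "http://example.org/order/7"
triples = [
    (subject, "http://example.org/schema#customer",
     "http://example.org/customer/42"),
    (subject, "http://example.org/schema#total", 650.0),
]

# The business rule, however, lives outside the triples. In plain RDF we
# can only encode it procedurally, in application code, not declaratively:
needs_approval = row["total"] > 500.0
```

The triples capture structure; the rule captures meaning, and that is the part RDF alone cannot carry.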
Your point about closed-world vs. open-world is important as well. Because most systems are developed with closed-world assumptions (read: defaults), using open-world semantics is not required. This is one of the key places where we are in fact asking the wrong questions and looking in the wrong places. It is when systems step outside their application’s domain to perform mashups that they need better semantic technologies. Companies don’t often need to do this except in incremental steps. The steps are so incremental, in fact, that there is no need to invest the resources to make the process smoother. Manual integration and verification is sufficient.
The Ontology Driven Apps post was a great introduction to this methodology, advanced by Nicola Guarino and Michael Uschold. Thanks for that as well!
The fields of Semantics and Ontologies have consumed my life for the past several years. Why?
They are enormous topics with applicability in every field, not just formal logic or pure academics. They are fundamental to the way we interpret and reason about the world around us, and the amount of information about the world we have to absorb is staggering. Our mobile devices, our activity on the internet, and the increased digitization of our records have all contributed to the exponential growth in data. Sorting out all this data is an increasingly difficult task. And it’s not just the volume of data that is growing, but also the way it evolves from its initial capture state to the ways we use it.
Fundamentally, when analyzing data you must determine three things:
1) What is the nature of the data? Is it:
- user records
- patient records
- news articles
- individual tweets
- sensory data from weather stations?
All must be treated differently.
2) Are there any patterns in the data?
3) How does the data relate to itself and to other information sources?
- What does the data tell us and what can we learn from it?
- Can the correlation of patient symptoms tell us something about the causation of their condition?
- How accurate are weather predictions based on past records?
Question (1) deals with questions such as (in the weather domain) “What is snow?”, “What is temperature?”, and maybe even “What is low temperature?”. Humans have no problem answering these questions, but machines run into several issues, mainly because “Snow” is just a four-character label. This label alone is not enough to encapsulate all that is “Snow”. The technology behind Linked Data begins to reveal this information. At the very least, it provides a point of reference, a URI, for an intended meaning of “Snow”. This particular DBpedia link provides metadata associated with the concept “Snow”. Now anyone, in any language, can reference Snow and mean the same thing.
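A minimal sketch of the label-versus-reference distinction, in Python. The DBpedia URI and the rdfs:label predicate are real; the triples themselves are my own illustration, not pulled from DBpedia:

```python
# A bare label is just characters; nothing pins down what it denotes.
label = "Snow"  # a four-character string

# A URI is a shared, unambiguous point of reference for the concept.
uri = "http://dbpedia.org/resource/Snow"

# An RDF-style (subject, predicate, object) triple attaching the label
# to the concept, using the standard rdfs:label predicate:
triple = (uri, "http://www.w3.org/2000/01/rdf-schema#label", "Snow")

# A system in another language can attach its own label to the SAME
# concept, because both agree on the URI, not on the characters:
polish = (uri, "http://www.w3.org/2000/01/rdf-schema#label", "Śnieg")
```

The two triples disagree on the label but agree on the subject URI, which is the whole point: the URI, not the string, carries the intended meaning.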
This series of posts is all about how semantics can help us deal with information.
The fields of Semantics and Ontologies have consumed my life for the past several years. Why? They are enormous topics with applicability in every field, not just formal logic or pure academics. They are fundamental to the way we interpret and reason about the world around us, and the amount of information we are absorbing about the world is staggering. Our mobile devices, our activity on the internet, and the increased digitization of our records are all contributing to the exponential growth in data. Understanding this data is becoming an increasingly difficult task. More on data later; for now I present what semantics are in a general sense.
The most general and abstract definition of semantics is "the study of meaning". In this series I will focus on semantics as applied to Logic, and in particular on its application in Computer Science.
In this context, semantics deals with the relation between two entities. Take the plus sign “+” in these two examples.
1) 4 + 2 = 6
2) hard work + luck = success
Superficially, the use of “+” in (1) and (2) looks very similar. In (1) we add two numbers to get a number; in (2) we combine two nouns to get a noun. But how we arrive at the result is different, and depends on the two elements being added and on what “being added” actually means.
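Programming languages make this concrete through operator overloading: one symbol, “+”, whose meaning is chosen by the types of its operands. A small Python sketch (the `Ingredients` class is my own illustration):

```python
# One symbol, different semantics, selected by operand type.
print(4 + 2)                       # arithmetic addition: prints 6
print("hard work" + " and luck")   # string concatenation

# A user-defined type can give "+" yet another meaning:
class Ingredients:
    def __init__(self, parts):
        self.parts = parts

    def __add__(self, other):
        # "adding" here means combining lists of contributing factors
        return Ingredients(self.parts + other.parts)

success = Ingredients(["hard work"]) + Ingredients(["luck"])
print(success.parts)               # prints ['hard work', 'luck']
```

In each case the syntax is identical; the semantics of “+” are supplied by the things being added, which is exactly the point of examples (1) and (2).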
Welcome to bartg.org, the personal blog of Bart Gajderowicz. Here you’ll find the things that interest me with the occasional rant on everything else. I’d love to hear from you so do contact me via email, by posting a comment or via social links on the right. My previous blog has been archived (see below).
Who I am …
I am a PhD student in the Industrial Engineering Department at the University of Toronto. My supervisors are Mark S. Fox and Michael Grüninger.
I hold a master’s degree from the Department of Computer Science at Ryerson University, focusing on semantic integration and machine learning. Until recently I was the Senior Research Assistant at the Ubiquitous and Pervasive Computing Lab in my department.
What I do …
My interests include an array of topics, but my time is usually consumed by the technical aspects of information/data/text processing, programming, logic, philosophy, and social engagement and interaction with technology.
My current PhD research focuses on modeling the behavior of Social Service clients and practitioners, the processes involved in service delivery, and the social context and constraints. You can see my first publication, outlining the requirements for this work, here.
As part of my master’s and undergraduate research, I have studied machine learning, and the various ways it is applied to the areas of ontology matching, search, classification, pattern recognition, data mining, pattern analysis, context awareness and new media.
I enjoy philosophy and have spent the last several years consumed by ontologies, semantics, logic, reasoning, inference, and the technical issues that surround their applications. I am not a logician by any means, but my research interests include semantic technologies, reasoning, and machine learning. Recently I have focused on Description Logics, OWL 2, the now-many definitions of the Semantic Web, and incorporating these areas into machine learning.
I enjoy programming, and although I currently use Python in my research, Ruby remains my language of choice. Its simple and natural syntax makes it an absolute pleasure to work with, not to mention that it makes prototyping concepts dead simple and quick. Professionally, I have been a Ruby on Rails developer since 2006. I also program in C/C++ and Java, especially when incorporating existing libraries, which are often written in Java. I need to contribute to Open Source projects more often. Me on github.
How I do it …