1. Introduction
Cognitive Task Analysis (CTA) methods should be integrated into systems engineering, since complex systems typically support highly cognitive activities. New system efforts aim to augment users' macrocognitive functions such as problem detection, team coordination, and planning. Systems fail when they cannot support these cognitive functions, so it is important that CTA practitioners identify what the functions are and relay them to the systems engineers. Identifying and accommodating cognitive functions not only prevents systems from failing; it also enhances usability by improving the interactions between users and the system.
The field of Cognitive Systems Engineering (CSE) emerged as a solution to the problem of how to effectively integrate CTA methods into systems engineering. CSE combines the capabilities of technologists (e.g., software engineers and computer scientists) and CTA researchers (e.g., cognitive scientists and human factors psychologists). Almost all CSE strategies depend on CTA, since it is essential to understand the cognitive work the system is meant to support. In this paper, I will look at some of the more popular CSE strategies in use today.
2. Situation Awareness (SA)-Oriented Design
Situation Awareness (SA) has been defined as: "the perception of the elements in the environment within a volume of space and time, the comprehension of their meaning, the projection of their status into the near future, and the prediction of how various actions will affect the fulfillment of one's goals" [1]. The SA-Oriented Design strategy is based on the idea that SA is fundamental in driving human decision-making in complex, dynamic environments [2]. Decision-making and performance can be dramatically improved by designs that enhance users' SA. SA-Oriented Design has been used to develop and evaluate system design concepts with the aim of improving human decision-making and performance. It comprises three main components: SA Requirements Analysis, SA-Oriented Design Principles, and SA Measurement and Validation.
For the SA Requirements Analysis, a form of cognitive task analysis called Goal Directed Task Analysis (GDTA) is used. GDTA focuses on uncovering the situation awareness requirements associated with a given job function. The first step in GDTA is to identify the major goals of a job and the sub-goals needed to reach them. Next, the major decisions that need to be made for each sub-goal are identified. Finally, the SA requirements needed for making these decisions and carrying out each sub-goal are identified. Since decision-making is shaped by goals, this requirements analysis is organized around goals and objectives rather than tasks (as in traditional task analysis).
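To make the resulting hierarchy concrete, here is a minimal Python sketch of how GDTA output might be captured as a data structure. The class names and the air-traffic-control example are my own illustration of the goal / sub-goal / decision / SA-requirement layering, not part of the GDTA method itself.

from dataclasses import dataclass, field

@dataclass
class Decision:
    """A decision tied to a sub-goal, with the SA needed to make it."""
    question: str
    sa_requirements: list[str] = field(default_factory=list)

@dataclass
class SubGoal:
    name: str
    decisions: list[Decision] = field(default_factory=list)

@dataclass
class Goal:
    """Top of the GDTA hierarchy: a major goal of the job function."""
    name: str
    subgoals: list[SubGoal] = field(default_factory=list)

# Hypothetical example for an air-traffic-control job function.
separation = Goal(
    name="Maintain aircraft separation",
    subgoals=[
        SubGoal(
            name="Detect potential conflicts",
            decisions=[
                Decision(
                    question="Will these two aircraft violate separation minima?",
                    sa_requirements=[
                        "Level 1: position, altitude, speed of each aircraft",
                        "Level 2: closure rate relative to separation minima",
                        "Level 3: projected positions over the next few minutes",
                    ],
                )
            ],
        )
    ],
)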
A set of fifty SA-Oriented Design Principles was developed to guide the creation of system designs that meet the SA requirements. The principles are based on the mechanisms and processes involved in acquiring and maintaining SA, and they provide design guidelines for displaying goal-oriented information, dealing with extraneous information, managing system complexity, conveying confidence levels in information, and so on [2].
It is important to measure proposed design concepts in order to determine whether a design actually helps SA or hinders it. Prototyping is an important measurement tool because it rapidly exposes the design to subject matter experts. Simulations of new technologies within the operating environment can also prove invaluable when measuring a design's effect on SA.
3. Decision-Centered Design (DCD)
The DCD strategy was created to leverage CTA in the development of new technologies, including complex software systems [3]. This strategy uses CTA methods to specify the primary cognitive requirements for the design process. Importance is placed on the key decisions users have to make when performing their jobs (hence the name Decision-Centered Design). The idea behind DCD is that CTA can help the design team develop solutions that support users in making those key decisions.
DCD's primary focus is on the difficult and challenging decisions the system must support (the "tough cases"). Focusing on these tough cases makes the system robust: the system and its users will be able to recover from extraordinary circumstances. If the system succeeds in supporting the tough cases, chances are it will also succeed in supporting the more routine cases. Focusing on tough cases also keeps the development process efficient: there are rarely enough resources to cover all of the cognitive decisions in a complex system, and covering the difficult decisions first yields the best coverage for the effort.
DCD consists of five stages: preparation, knowledge elicitation, analysis and representation, application design, and evaluation. During preparation, the practitioner comes to understand the domain and the nature of the users' work, and identifies the appropriate CTA methods to use. In knowledge elicitation, those CTA methods are used to examine the difficult decisions and complex cognitive tasks. The practitioner then identifies the decision requirements by analyzing the recorded data. The analysis is shared with the systems engineers working on the design, and the design is developed iteratively via an open feedback loop with the users. Finally, the performance of the system is measured and, if needed, improvements are made to the prototype.
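As a rough illustration, the decision requirements that come out of the analysis stage can be thought of as records linking a tough case to the design ideas that support it. The field names and the weather example below are hypothetical; they simply mirror the flow from knowledge elicitation to application design described above.

from dataclasses import dataclass

@dataclass
class DecisionRequirement:
    # One elicited "tough case" and its design implication; field names
    # are illustrative, not a prescribed DCD artifact format.
    decision: str             # the key decision identified during elicitation
    why_difficult: str        # what makes this a tough case
    cues_and_strategies: str  # what experts attend to and how they decide
    design_implication: str   # how the system could support the decision

req = DecisionRequirement(
    decision="Decide whether to reroute around developing weather",
    why_difficult="Forecast data is uncertain and arrives late",
    cues_and_strategies="Experts compare radar trends against pilot reports",
    design_implication="Overlay radar trends and pilot reports on one display",
)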
4. Applied Cognitive Work Analysis (ACWA)
The ACWA methodology was developed by the Cognitive Systems Engineering Center in Pittsburgh, PA as the Cognitive Task Design (CTD) portion of a high-quality, affordable systems engineering process [4]. It is touted as being at the same maturity level as the best software engineering methodologies. ACWA was specifically developed to meet two critical challenges in making CTD part of systems engineering efforts. The first challenge is that CTD must become a high-quality engineering methodology (on par with the efforts for software engineering processes made by Carnegie Mellon's Software Engineering Institute). The second is that the results of CTD must integrate seamlessly with the software and systems engineering processes used to construct powerful decision support systems [4].
ACWA contains a step-by-step approach to CTD. The design steps include domain analysis, decision requirements analysis, supporting information analysis, design requirements analysis, and finally design. Each step produces an artifact and the collection of artifacts provides a link from the cognitive task analysis to the system design. The artifacts serve two purposes: they progressively record the results of the design thinking for subsequent steps in the process, and they provide an opportunity to evaluate the quality and completeness of the design effort.
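The chain of steps and artifacts can be pictured as a simple pipeline. The artifact descriptions below are my own placeholders; Elm et al. [4] define the actual intermediate artifacts, but the traceability idea is the same: each design element can be traced back through the artifacts to the cognitive analysis.

# ACWA's step-by-step chain, with placeholder artifact descriptions.
ACWA_STEPS = [
    ("domain analysis",                 "model of the work domain"),
    ("decision requirements analysis",  "catalog of key decisions"),
    ("supporting information analysis", "information needs per decision"),
    ("design requirements analysis",    "requirements on displays/aiding"),
    ("design",                          "decision support design concepts"),
]

# Each artifact records the design thinking for the next step and gives
# a checkpoint for evaluating quality and completeness along the way.
for step, artifact in ACWA_STEPS:
    print(f"{step:<34} -> {artifact}")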
References
[1] Endsley, M. R., Toward a Theory of Situation Awareness in Dynamic Systems, Human Factors, 37(1), 32-64, 1995.
[2] Endsley, M. R., Bolstad, C. A., Jones, D. G., and Riley, J. M., Situation Awareness Oriented Design: From User's Cognitive Requirements to Creating Effective Supporting Technologies, October 2003.
[3] Hutton, R. J. B., Miller, T. E., and Thordsen, M. L., Decision-Centered Design: Leveraging Cognitive Task Analysis in Design, Handbook of Cognitive Task Design, 383-416, 2003.
[4] Elm, W. C., et al., Applied Cognitive Work Analysis: A Pragmatic Methodology for Designing Revolutionary Cognitive Affordances, 2002.
List of Acronyms
ACWA - Applied Cognitive Work Analysis
CSE - Cognitive Systems Engineering
CTA - Cognitive Task Analysis
CTD - Cognitive Task Design
DCD - Decision-Centered Design
GDTA - Goal Directed Task Analysis
SA - Situation Awareness
Saturday, August 6, 2011
The "PersonaBadge" Social Facilitator and Simulations of its Affects on Social Capital Acquisition
This paper presents a social facilitator and its effects on social capital acquisition in large emergent social networks. Firstly, the software and hardware specifications for the social facilitator are explicated. Next, social network simulations and the cognitive architecture used to construct the simulations are described. Finally, the facilitator is tested within the social simulation and results of the simulation are analyzed.
Direct Link to PDF
http://romalley2000.home.comcast.net/documents/persona_doc/documents/omallr2_design_report.pdf
PersonaBadge Prototype Videos
http://www.youtube.com/user/HCICrossroads
Sunday, July 24, 2011
A Summary of "CLARION" from "A Tutorial on CLARION 5.0"
Citation
Sun, Ron. “A Tutorial on CLARION 5.0.” Department of Cognitive Science. Rensselaer Polytechnic Institute. 6 Oct. 2009. http://www.sts.rpi.edu/~rsun/sun.tutorial.pdf
Summary / Assessment
CLARION (short for Connectionist Learning with Adaptive Rule Induction ON-line) is a cognitive architecture used to simulate human cognition and behavior. Dr. Sun has led the development of CLARION at RPI. In the tutorial cited above, Dr. Sun provides an introduction to CLARION and its structure.
One important aspect of CLARION, and something that sets it apart from other cognitive architectures, is the way it models human knowledge. In CLARION, knowledge is split into implicit knowledge and explicit knowledge. The two types of knowledge are handled differently in the architecture, just as in real life. For example, implicit knowledge is not directly accessible, but it can be used during computation (just as tacit knowledge is not easily passed on in real life, but is used by people when solving problems). More specifically, in CLARION, implicit knowledge is modeled as a backpropagation neural network, which captures the distributed, subsymbolic nature of implicit knowledge. Explicit knowledge, on the other hand, is represented in a symbolic (localist) way: it is given a direct meaning, making it more accessible and interpretable. Explicit knowledge is further divided into "rules" and "chunks". Rules govern how an agent interacts with its environment (for example: if the stove is hot, don't touch the stove). Chunks are combinations of implicit dimension/value pairs that are tied together to form a mental concept, for example table-1: (size, large) (color, white) (number-of-legs, four), where "table-1" is the mental concept and the dimension/value pairs are in parentheses.
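A rough Python sketch of this chunk-and-rule scheme follows, with plain dictionaries standing in for CLARION's actual representations (which are networks and symbolic structures); none of this is the CLARION API.

# A chunk: dimension/value pairs tied to a named concept.
table_1 = {
    "name": "table-1",
    "features": {"size": "large", "color": "white", "number-of-legs": "four"},
}

# A rule: a condition mapped to an action.
stove_rule = {
    "condition": {"object": "stove", "temperature": "hot"},
    "action": "do-not-touch",
}

def rule_applies(rule: dict, observed: dict) -> bool:
    # A rule fires when every condition dimension matches the observation.
    return all(observed.get(dim) == val
               for dim, val in rule["condition"].items())

print(rule_applies(stove_rule, {"object": "stove", "temperature": "hot"}))  # True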
CLARION's overall structure is layered: explicit knowledge forms the top layer, while implicit knowledge forms the bottom layer. The architecture is further divided into subsystems, each handling a specific aspect of human cognition. Each subsystem has a module in the explicit (top) layer and a module in the implicit (bottom) layer.
CLARION contains four subsystems: the Action-Centered Subsystem (ACS), the Non-Action-Centered Subsystem (NACS), the Motivational Subsystem (MS), and the Meta-Cognitive Subsystem (MCS). The Action-Centered Subsystem represents the part of the mind that controls an agent's physical and mental actions (for example, manipulating objects in the real world and adjusting goals). The ACS uses input from the environment, along with other internal information, to determine the best action to take. The Non-Action-Centered Subsystem contains what can be considered general knowledge or "semantic memory": ideas, objects, and facts. In the NACS, the upper layer contains connections (associative rules) that link declarative knowledge chunks, and the bottom layer holds implicit declarative knowledge in the form of dimension/value pair networks. The Motivational Subsystem represents the part of cognition that supplies the reasons behind an agent's actions: the MS describes the agent's "drives", which in turn determine the agent's goals (for example, a drive may be to quench thirst, so the current goal structure would center on obtaining a source of water). The Meta-Cognitive Subsystem is the main controller of cognition. It regulates the communication among the other subsystems: for example, it adds goals to the current goal structure in response to the drives formulated by the Motivational Subsystem, and it transfers information learned in the ACS to the NACS.
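The pairing of an explicit top module with an implicit bottom module in each subsystem can be summarized in a small sketch; the module descriptions below are my loose paraphrases of the tutorial's layering idea, not CLARION's actual module names.

from dataclasses import dataclass

@dataclass
class Subsystem:
    # Each subsystem pairs an explicit (top) module with an implicit
    # (bottom) module; descriptions here are placeholders.
    name: str
    explicit_top: str     # symbolic, localist representation
    implicit_bottom: str  # distributed, subsymbolic network

SUBSYSTEMS = [
    Subsystem("ACS",  "explicit action rules",        "action decision network"),
    Subsystem("NACS", "associative rules over chunks", "dimension/value pair networks"),
    Subsystem("MS",   "explicit goals",               "drive representations"),
    Subsystem("MCS",  "regulation and goal setting",  "implicit monitoring processes"),
]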
As far as implications for design and HCI: I think that we can learn a great deal about human cognition from utilizing cognitive architectures, and we could leverage them to effectively simulate how our HCI designs will function in the real-world.
Wednesday, May 11, 2011
Electrode Positions for the Emotiv EPOC Research Neuroheadset
The channel names, based on the International 10-20 electrode locations, are: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4.
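For anyone processing EPOC recordings, the channel list maps directly onto the columns of the recorded data. Here is a small Python sketch; the left/right pairing is a common preprocessing convention I am adding for illustration, not something defined by the headset itself.

# The 14 EPOC data channels in 10-20 notation.
EPOC_CHANNELS = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
                 "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]

# Example: left/right homologous pairs, e.g. for symmetry-based features.
LEFT_RIGHT_PAIRS = [("AF3", "AF4"), ("F7", "F8"), ("F3", "F4"),
                    ("FC5", "FC6"), ("T7", "T8"), ("P7", "P8"), ("O1", "O2")]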
Labels: 10-20, EEG, Emotiv, EPOC, Neuroheadset
Saturday, April 23, 2011
A Summary of "Simulating Organizational Decision-Making Using a Cognitively Realistic Agent Model"
Citation
R. Sun and I. Naveh, Simulating organizational decision-making using a cognitively realistic agent model. Journal of Artificial Societies and Social Simulation, Vol.7, No.3. 2004.
Summary / Assessment
In their paper, Sun and Naveh argue that social simulation needs concepts from cognitive science. Social interaction is ultimately the result of individual cognition; therefore, the more accurately one can model individual agents and their cognitive processes, the more accurate and dependable the social model becomes. The authors also argue that using a realistic cognitive model helps us better understand individual agent cognition, specifically the socio-cultural aspects of cognition and how individual agents learn from one another. The paper describes how the CLARION cognitive architecture is well suited to modeling cognitively accurate agents within a social simulation. The authors describe CLARION's dual representation of human learning and go into detail on how the lower (implicit) layer interacts with the upper (explicit) layer. They cite this dual representation as fundamental to constructing realistic cognitive agents.
The paper then explains the organizational task to be studied: determining whether a blip on a radar screen is a hostile aircraft, a flock of geese, or a civilian aircraft (i.e., whether the blip is hostile, neutral, or friendly). Each blip has nine attributes that carry weighted values. If the sum of the attributes is less than 17, the blip is friendly; if greater than 19, it is hostile; otherwise, it is neutral. The organization is presented with 100 problems (100 blips), and its performance is measured by how accurately it classifies them.
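The classification rule, as summarized above, reduces to a small function; the attribute values in the example call are made up for illustration.

# A minimal sketch of the task's scoring rule: nine weighted attributes,
# summed and compared against the 17/19 cutoffs described above.
def classify_blip(attributes: list[int]) -> str:
    """Classify a radar blip from its nine weighted attribute values."""
    assert len(attributes) == 9
    total = sum(attributes)
    if total < 17:
        return "friendly"
    if total > 19:
        return "hostile"
    return "neutral"

print(classify_blip([1, 2, 3, 1, 2, 3, 1, 2, 1]))  # sum 16 -> "friendly"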
Organizations are set up in one of two ways: as a team or as a hierarchy. In a team structure, every agent's assessment is equally weighted and the final result is determined democratically. In a hierarchy, agents' decisions are passed up to a supervisor agent, who makes the final decision. Additionally, information is distributed within the organization in one of two ways: in a distributed fashion, where no single agent has access to all required pieces of information, or in a blocked fashion, where each agent has access to all pieces of required information. (In both cases, all pieces of information are accessible to the agents as a whole.)
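A minimal sketch of the two decision structures, with callables standing in for agents; this is my own simplification of the paper's setup, not code from the study.

from collections import Counter

def team_decision(votes: list[str]) -> str:
    # Team structure: equally weighted votes, majority wins.
    return Counter(votes).most_common(1)[0][0]

def hierarchy_decision(votes: list[str], supervisor) -> str:
    # Hierarchy: subordinate outputs go to a supervisor agent, which
    # makes the final call (any callable stands in for that agent here).
    return supervisor(votes)

votes = ["hostile", "neutral", "hostile"]
print(team_decision(votes))                       # "hostile"
print(hierarchy_decision(votes, lambda v: v[0]))  # supervisor takes first input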
Earlier studies, performed by Carley et al. in 1998, revealed that humans typically perform better in a team structure, especially when information is provided in a distributed manner (no single agent has access to all pieces of required information). The worst performance occurred in hierarchical structures that used blocked data access (every agent has access to all pieces of required information). In the same study, Carley used four cognitive architectures other than CLARION to simulate the same task. Results from these simulations were compared with the human data, and the comparison showed that the simulations did not produce results in line with human capabilities. It is argued that this is due to the architectures' shortcomings in accurately modeling human intelligence and learning.
Sun and Naveh simulated the same task with the CLARION cognitive architecture. Results from this simulation were compared with the actual human data, and it was found that CLARION produced results in line with the human results. It is argued that the similarity is due to the fact that CLARION accurately models human learning through its dual-representation structure. The authors then augmented the simulation by extending its runtime, varying cognitive parameters (such as learning rate and temperature of randomness), and introducing differences among the cognitive agents (such as weak learners). The overall result is that, using CLARION, one can create cognitively realistic social simulations that yield results in accord with the psychological literature.
Friday, April 8, 2011
A Summary of "Collective Intelligence: Mankind’s Emerging World in Cyberspace"
Citation
Levy, Pierre. "Introduction." Collective Intelligence: Mankind’s Emerging World in Cyberspace. Cambridge, Massachusetts: Perseus Books, 1999. 1-19.
Summary / Assessment
In the introduction to his book, Levy first discusses collective intelligence as it relates to the economy. He writes: "the prosperity of a nation, geographical region, business, or individual depends on their ability to navigate the knowledge space." The more we can form intelligent, highly capable communities, the more we can ensure our success in a highly competitive environment. As an example, businesses transform themselves to promote information exchange between departments, resulting in what Levy calls "innovation networks". Departments can easily interact with one another, transferring knowledge, personnel, and skills. This allows companies to be more responsive to the ever-changing demand for skills (scientific, technical, social, and aesthetic). An organization that is inflexible to changing skill demands will eventually collapse.
Levy defines an anthropological space as "a system of proximity (space) unique to the world of humanity (anthropological), and thus dependent on human technologies, significations, language, culture conventions, representations, and emotions". Four spaces are defined: the earth space, the territorial space, the commodity space, and the knowledge space. The Earth Space defines our identity in terms of our "bond with the cosmos", as well as our affiliation or alliance with other humans (e.g., our name is a symbol representing our place in an ancestral line). In the Territorial Space, the meaning of identity shifts: identity is linked with the ownership of property and group affiliations (e.g., our home address identifies our geographic location as well as our affiliation with a certain group of individuals, our neighborhood). In the Commodity Space, identity is defined by one's participation in the process of moving commodities (goods), including involvement in the production and exchange of goods. In the fourth space, the Knowledge Space, one's identity is defined by knowledge and the capacity to rapidly acquire it. Levy identifies three aspects of the Knowledge Space: speed (information and knowledge can be acquired rapidly), mass (it is impossible to restrict the movement of knowledge, so a larger mass of individuals now has access to it), and tools (tools now exist that enable individuals to acquire, manage, and filter information as needed).
Levy defines collective intelligence as "a form of universally distributed intelligence, constantly enhanced, coordinated in real-time, and resulting in the effective mobilization of skills." Knowledge is enhanced at the level of the individual as we try to better ourselves through the acquisition of new skills and abilities. Intelligence is coordinated in real time through digital media and emerging technologies. Skills are mobilized first by acknowledging an individual's skills, and then by recognizing the individual's contributions to the collective.
Levy also adds to his definition of collective intelligence by writing, "the basis and goal of collective intelligence is the mutual recognition and enrichment of individuals, rather than the cult of fetishized or hypostatized communities". In other words, individuals should be compensated or acknowledged based on the quantity and, more importantly, the quality of their contributions to the collective. Furthermore, acknowledgement should be made at the level of the individual rather than at the level of the collective.
Labels: Anthropological Space, Collective, Intelligence, Knowledge, Produsage
Tuesday, March 29, 2011
A Summary of "The Motivational and Meta-Cognitive Subsystems" from "A Tutorial on CLARION 5.0"
Citation
Sun, Ron. "Chapter 4: The Motivational and Meta-Cognitive Subsystems." "A Tutorial on CLARION 5.0." Department of Cognitive Science. Rensselaer Polytechnic Institute. 6 Oct. 2009. http://www.sts.rpi.edu/~rsun/sun.tutorial.pdf
Summary / Assessment
In this chapter, the Motivational Subsystem (MS) and the Meta-Cognitive Subsystem (MCS) of the CLARION architecture are described. The MS is concerned with an agent's drives and their interactions (i.e., why an agent does what it does and why it chooses one action over another). The MCS controls and regulates cognitive processes, for example by setting goals for the agent and by managing the ongoing processes of learning and interaction with the surrounding environment.
Dr. Sun notes that motivational and meta-level processes are required for an agent to meet four criteria when performing actions: sustainability, purposefulness, focus, and adaptivity. Sustainability refers to an agent attending to basic survival needs (e.g., hunger, thirst, and avoiding danger). Purposefulness refers to an agent selecting activities that accomplish goals, as opposed to selecting activities at random. Focus refers to an agent's need to concentrate its activities on fulfilling a specific purpose. Adaptivity refers to an agent's need to adapt (i.e., to learn) in order to improve its sustainability, purposefulness, and focus.
When modeling a cognitive agent, it is important to include the following considerations concerning drives. Proportional activation: the activation of a drive should be proportional to offsets or deficits within the agent (such as the degree of the lack of nourishment). Opportunism: opportunities must be factored in when choosing between alternative actions (e.g., the availability of water may lead an agent to choose drinking over gathering food, provided the food deficit is not too high). Contiguity of actions: a tendency to continue the current action sequence, avoiding the overhead of switching to a different one (i.e., avoiding "thrashing"). Persistence: actions to satisfy a drive should persist beyond minimum satisfaction. Interruption when necessary: actions serving one drive should be interrupted when a more urgent drive arises. Combination of preferences: preferences resulting from different drives can be combined to generate a higher-order preference; a "compromise candidate" action may not be the best for any single drive, but best in terms of the combined preference.
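Two of these considerations, proportional activation and combination of preferences, can be sketched in a few lines of toy Python; all names and numbers below are invented for illustration and have nothing to do with CLARION's actual equations.

# Hypothetical deficits on a 0..1 scale; drive strength tracks the deficit.
DEFICITS = {"food": 0.3, "water": 0.8}

# How much each action is expected to reduce each deficit (made-up numbers).
ACTION_EFFECTS = {
    "gather-food":  {"food": 0.9, "water": 0.0},
    "drink-water":  {"food": 0.0, "water": 0.9},
    "visit-stream": {"food": 0.4, "water": 0.6},  # a "compromise candidate"
}

def preferred_action(deficits: dict) -> str:
    # Combine preferences: score each action across all drives, weighting
    # each benefit by the corresponding deficit (proportional activation).
    def score(action: str) -> float:
        return sum(deficits[d] * effect
                   for d, effect in ACTION_EFFECTS[action].items())
    return max(ACTION_EFFECTS, key=score)

print(preferred_action(DEFICITS))  # water deficit dominates: "drink-water"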
Specific drives are then discussed, segmented into three categories: low-level drives, high-level drives, and derived secondary drives. Low-level drives include physiological needs such as get-food, get-water, avoid-danger, get-sleep, and reproduce, as well as "saturation drives" such as avoid-water-saturation and avoid-food-saturation. High-level drives include needs such as belongingness, esteem, and self-actualization (among others from Maslow's hierarchy of needs). Derived secondary drives include drives gradually acquired through conditioning (i.e., associating a secondary goal with a primary drive) and externally set drives (e.g., drives resulting from the desire to please superiors in a work environment).
Meta-cognition refers to one's knowledge of one's own cognitive processes, and to the monitoring and orchestration of those processes in the service of some concrete goal or objective. These concepts are operationalized within CLARION's MCS through the following processes: 1) behavioral aims, which set goals and their reinforcements; 2) information filtering, which determines the selection of input values from the environment; 3) information acquisition, which selects learning methods; 4) information utilization, which refers to reasoning; 5) outcome selection, which determines the appropriate outputs; 6) cognitive modes, the selection of explicit processing, implicit processing, or a combination thereof; and 7) parameter settings, such as parameters for learning capability (i.e., intelligence level).
Sun, Ron. "Chapter 4: The Motivational and Meta-Cognitive Subsystems." "A Tutorial on CLARION 5.0." Department of Cognitive Science. Rensselaer Polytechnic Institute. 6 Oct. 2009. http://www.sts.rpi.edu/~rsun/sun.tutorial.pdf
Summary / Assessment
In this chapter, the Motivational Subsystem (MS) and the Meta-Cognitive Subsystem (MCS) of the CLARION architecture are described. The MS is concerned with an agent’s drives and their interactions (i.e. – why an agent does what it does and why it chooses any particular action over another). The MCS controls and regulates cognitive processes. The MCS accomplishes this, for example, by setting goals for the agent and by managing ongoing processes of learning and interactions with the surrounding environment.
Dr. Sun mentions that motivational and meta-level processes are required for an agent to meet the following criteria when performing actions: sustainability, purposefulness, focus, and adaptivity. Sustainability refers to an agent attending to basic needs for survival (i.e. – hunger, thirst, and avoiding danger). Purposefulness refers to an agent selecting activities that will accomplish goals, as opposed to selecting activities completely randomly. Focus refers to an agent’s need to focus its activities on fulfilling a specific purpose. Adaptivity refers to the need of an agent to adapt (i.e. – to learn) to improve its sustainability, purposefulness, and focus.
When modeling a cognitive agent, it is important to include the following considerations concerning drives. Proportional Activation: Activation of drives should be proportional to offsets or deficits within the agent (such as the degree of the lack of nourishment). Opportunism: Opportunities must be factored in when choosing between alternative actions (ex: availability of water may lead an agent to choose drinking water over gathering food, provided that the food deficit is not too high). Contiguity of Actions: A tendency to continue the current action sequence to avoid the overhead of switching to a different action sequence (i.e. – avoid “thrashing”). Persistence: Actions to satisfy a drive should persist beyond minimum satisfaction. Interruption When Necessary: Actions for a higher priority drive should be interrupted when a more urgent drive arises. Combination of Preferences: Preferences resulting from different drives could be combined to generate a higher-order preference. Performing a “compromise candidate” action may not be the best for any single drive, but is best in terms of the combined preference.
Specific drives are then discussed. Drives are segmented into three categories: Low-Level Drives, High-Level Drives, and Derived Secondary Drives. Low-Level drives include physiological needs such as: get-food, get-water, avoid-danger, get-sleep, and reproduce. Low-Level drives also include “saturation drives” such as: avoid-water-saturation, and avoid-food-saturation. High-Level drives include “needs” such as: belongingness, esteem, and self-actualization (and others from Maslow’s needs hierarchy). Derived Secondary Drives include: gradually acquired drives through conditioning (i.e. – associating a secondary goal to a primary drive), and externally set drives (i.e. – drives resulting from the desire to please superiors in a work environment).
Meta-cognition refers to one’s knowledge of one’s own cognitive process. It also refers to the monitoring and orchestration of cognitive processes in the service of some concrete goal or objective. These concepts are operationalized within CLARION’s MCS through the following processes: 1) Behavioral Aims: which set goals and their reinforcements, 2) Information Filtering: which determines the selection of input values from the environment, 3) Information Acquisition: which selects learning methods, 4) Information Utilization: which refers to reasoning, 5) Outcome Selection: or determining the appropriate outputs, 6) Cognitive Modes: or the selection of explicit processing, implicit processing, or combination thereof, and 7)Parameter Settings: such as parameters for learning capability (i.e. – intelligence level).