Saturday, August 6, 2011
The "PersonaBadge" Social Facilitator and Simulations of its Effects on Social Capital Acquisition
This paper presents a social facilitator and examines its effects on social capital acquisition in large emergent social networks. First, the software and hardware specifications for the social facilitator are described. Next, the social network simulations and the cognitive architecture used to construct them are explained. Finally, the facilitator is tested within the social simulation and the results are analyzed.
Direct Link to PDF
http://romalley2000.home.comcast.net/documents/persona_doc/documents/omallr2_design_report.pdf
PersonaBadge Prototype Videos
http://www.youtube.com/user/HCICrossroads
Saturday, April 23, 2011
A Summary of "Simulating Organizational Decision-Making Using a Cognitively Realistic Agent Model"
Citation
R. Sun and I. Naveh, Simulating organizational decision-making using a cognitively realistic agent model. Journal of Artificial Societies and Social Simulation, Vol.7, No.3. 2004.
Summary / Assessment
In their paper, Sun and Naveh argue that social simulation needs concepts from cognitive science. Social interaction is ultimately the product of individual cognition; therefore, the more accurately one models individual agents and their cognitive processes, the more accurate and dependable the social model becomes. The authors also argue that using a realistic cognitive model helps us better understand individual agent cognition, specifically the socio-cultural aspects of cognition and how individual agents learn from one another. The paper argues that the CLARION cognitive architecture is well suited to modeling cognitively accurate agents within a social simulation. The authors describe CLARION's dual representation of human learning and go into detail on how the lower layer (the implicit layer) interacts with the upper layer (the explicit layer). They cite this dual representation as fundamental to constructing realistic cognitive agents.
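The interaction between the two layers is easiest to see in code. The sketch below is not CLARION's actual API; it is only a minimal illustration of the idea, assuming a simple delta-rule update for the implicit layer and condition-action rules for the explicit layer, with a weighted blend of the two at decision time.

```python
# Minimal sketch (NOT CLARION's actual API) of the dual-representation idea:
# an implicit layer that slowly learns action values from feedback, an explicit
# layer of crisp rules, and a weighted blend of the two when a decision is made.

class ImplicitLayer:
    """Bottom layer: gradually learns action values from reinforcement."""
    def __init__(self, actions, learning_rate=0.1):
        self.q = {a: 0.0 for a in actions}
        self.learning_rate = learning_rate

    def values(self):
        return dict(self.q)

    def learn(self, action, reward):
        # Simple delta-rule update toward the received reward.
        self.q[action] += self.learning_rate * (reward - self.q[action])


class ExplicitLayer:
    """Top layer: symbolic rules of the form (condition, action)."""
    def __init__(self, rules):
        self.rules = rules

    def values(self, state, actions):
        v = {a: 0.0 for a in actions}
        for condition, action in self.rules:
            if condition(state):
                v[action] = 1.0
        return v


def decide(state, actions, implicit, explicit, implicit_weight=0.5):
    """Blend the two layers' recommendations and pick the best action."""
    imp = implicit.values()
    exp = explicit.values(state, actions)
    combined = {a: implicit_weight * imp[a] + (1 - implicit_weight) * exp[a]
                for a in actions}
    return max(combined, key=combined.get)
```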
The paper then explains the organizational task to be studied. The task is to determine whether a blip on a radar screen is a hostile aircraft, a flock of geese, or a civilian aircraft (i.e., whether the blip is hostile, neutral, or friendly). Each blip has nine attributes, each carrying a weighted value. If the sum of the attribute values is less than 17, the blip is friendly; if it is greater than 19, it is hostile; otherwise it is neutral. The organization is presented with 100 problems (100 blips), and an organization's performance is measured by how accurately it classifies them.
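A minimal sketch of the task's ground-truth rule and performance measure follows. The summary does not say how attribute values are generated, so the assumption here, that each of the nine attributes takes an integer value from 1 to 3, is made purely for illustration.

```python
import random

def classify(attribute_values):
    """Ground-truth rule stated above: sum the nine attribute values."""
    total = sum(attribute_values)
    if total < 17:
        return "friendly"
    if total > 19:
        return "hostile"
    return "neutral"

# Illustration only: assume each of the nine attributes takes a value from 1 to 3.
problems = [[random.randint(1, 3) for _ in range(9)] for _ in range(100)]
true_labels = [classify(p) for p in problems]

def performance(decisions, labels):
    """Organizational performance: fraction of the 100 blips classified correctly."""
    return sum(d == t for d, t in zip(decisions, labels)) / len(labels)
```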
Organizations are set up in one of two ways: as a team or as a hierarchy. In a team structure, every agent's decision is weighted equally and the final result is determined democratically. In a hierarchy, agents' decisions are passed to a supervisor agent, and the supervisor makes the final decision. Additionally, information is dispersed within the organization in one of two ways: in a distributed fashion, where no single agent has access to all of the required pieces of information, or in a blocked fashion, where each agent has access to all of the required pieces of information. (In both cases, all pieces of information are accessible to the agents as a whole.)
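The two decision-aggregation schemes can be sketched as follows; the supervisor policy shown is hypothetical, since the paper only states that the supervisor makes the final call, not how.

```python
from collections import Counter

def team_decision(agent_decisions):
    """Team structure: every agent's vote counts equally; plurality wins."""
    return Counter(agent_decisions).most_common(1)[0][0]

def hierarchy_decision(agent_decisions, supervisor_policy):
    """Hierarchy: subordinates report up; the supervisor makes the final call."""
    return supervisor_policy(agent_decisions)

# Hypothetical supervisor policy, for illustration only: follow the plurality,
# but break ties in favor of the more cautious "hostile" classification.
def cautious_supervisor(decisions):
    counts = Counter(decisions).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1] and "hostile" in dict(counts):
        return "hostile"
    return counts[0][0]

# Example usage with a few agent votes:
votes = ["hostile", "neutral", "hostile", "friendly"]
print(team_decision(votes))                            # "hostile"
print(hierarchy_decision(votes, cautious_supervisor))  # "hostile"
```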
Earlier studies by Carley et al. (1998) revealed that humans typically perform better in a team structure, especially when information is provided in a distributed manner (no single agent has access to all of the required information). The worst performance occurred in hierarchical structures that used blocked data access (every agent has access to all of the required information). In the same study, Carley used four cognitive architectures other than CLARION to simulate the same task, and the results were compared with human data. The comparison showed that the simulations did not produce results in line with human performance. It is argued that this is due to those architectures' shortcomings in accurately modeling human intelligence and learning.
Sun and Naveh simulated the same task with the CLARION cognitive architecture. Results from this simulation were compared with the actual human data, and CLARION was found to produce results in line with the human results. It is argued that the similarity stems from CLARION accurately modeling human learning through its dual-representation structure. The authors then augmented the simulation by extending its runtime, varying cognitive parameters (such as the learning rate and the temperature governing decision randomness), and introducing differences among the cognitive agents (such as weak learners). The overall result is that, using CLARION, one can create cognitively realistic social simulations that yield results in accord with the psychological literature.
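The "temperature" parameter mentioned above typically refers to a Boltzmann (softmax) action-selection rule, in which higher temperatures produce more random choices. Below is a minimal sketch of that rule, written independently of CLARION's own implementation.

```python
import math
import random

def boltzmann_choice(action_values, temperature):
    """Softmax (Boltzmann) selection: low temperatures make choices nearly
    greedy; high temperatures make them nearly uniform."""
    actions = list(action_values)
    weights = [math.exp(action_values[a] / temperature) for a in actions]
    total = sum(weights)
    return random.choices(actions, weights=[w / total for w in weights], k=1)[0]

# Sweeping parameters like this between runs is how the simulation was augmented.
values = {"friendly": 0.2, "neutral": 0.5, "hostile": 0.9}
print(boltzmann_choice(values, temperature=0.05))  # almost always "hostile"
print(boltzmann_choice(values, temperature=5.0))   # close to uniform over the three
```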
Wednesday, January 12, 2011
Leveraging Cognitive Architectures in Usability Engineering
In Chapter 5 of The Human-Computer Interaction Handbook (Byrne, 2007)
Testing Economy
Leveraging cognitive architectures in usability engineering has clear, positive implications. The most apparent is an economy in testing.
Software testing can be costly, especially if the software is intended for a large population of users. For example, if the software is to be used by a global audience, languages and other aspects of the target cultural ecosystems need to be considered, and testing must be duplicated to cover that variance in users. Testing can also be costly for large systems containing many functional points: to build a large and functionally accurate system, multiple usability tests are performed that iteratively shape the software being developed. Usability tests must be altered, and additional tests created, depending on how much the software changes between iterations. Therefore, in highly iterative development, the total cost of usability testing grows with the number of iterations the software goes through. Another consideration is the administrative cost of usability testing, which includes items such as engineering test cases, writing test plans, distributing test plans, setting up security access for testers, and coordinating and tabulating test results.
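To make the scaling argument concrete, here is a toy cost model; the function and all figures in it are invented placeholders rather than data from the chapter.

```python
# Toy cost model (illustrative only): per-cycle cost grows with functional
# points and locales, and the total is multiplied by the number of iterations.
def usability_testing_cost(cost_per_test, functional_points, locales,
                           iterations, admin_overhead_per_cycle):
    per_cycle = cost_per_test * functional_points * locales + admin_overhead_per_cycle
    return per_cycle * iterations

# One iteration versus a highly iterative process (all figures hypothetical):
print(usability_testing_cost(50, 200, 3, 1, 5000))   # 35000
print(usability_testing_cost(50, 200, 3, 8, 5000))   # 280000
```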
Leveraging a cognitive architecture can help mitigate these costs if we can create realistic cognitive agents that model the user base. The cost of having people perform testing is reduced because cognitive agents can test in their place. For global applications with a disparate user base, user differences can be simulated; for example, varying cultural dimensions can be modeled within the agents. Costs from iterative development can also be avoided, since agents constructed for initial tests can simply be reused for subsequent testing. Finally, the overhead of administering tests is lessened because there is no need to coordinate and distribute testing among large groups of users.
Social Networking System Application and Usability
In his paper, Dr. Ron Sun explains how CLARION, a cognitive architecture, can be applied to modeling social networks (Sun, 2006).
With the ability to effectively model human social behavior, we can use cognitive architectures to perform usability analysis on systems that function within a large social setting (for example, a big city's population). Traditional usability analysis of the effects of such systems might not be possible, because physically deploying a system to a large community of people faces several obstacles. First and foremost is the cost involved in usability testing. As mentioned in the section above, there are overhead costs such as coordinating testing and distributing test plans; additionally, a system interface would have to be set up for each person in the community to simulate its effects precisely. Thus, we would incur major expenses without first understanding the potential benefits. Second, actually coordinating usability testing in a large community would not be feasible, because recruiting the required number of individuals is impractical. Finally, there are temporal issues, since a social network matures slowly in real time. Using a cognitive architecture, we can construct a model of the social network that lets us avoid these pitfalls.
Along with mitigating these difficulties in usability analysis, there are other benefits to using cognitive architectures. Parameters of the simulated social network can quickly be changed to model real-life scenarios (parameters include community size, agent type distribution, and epoch length), and the beliefs, goals, and knowledge of the simulated people (cognitive agents) can also be modified. Finally, since the system is not deployed to actual users, changes never need to be coordinated with or rolled out to real users. These benefits allow the social model to be adjusted rapidly and at little cost when managing shifting user requirements, and being able to manage change effectively ultimately leads to a more usable software system.
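As a rough illustration of this flexibility, the sketch below shows a hypothetical configuration object for such a simulation. The parameter names follow those listed above, while the specific agent types and default values are made up for the example.

```python
from dataclasses import dataclass, field

# Hypothetical configuration object (not tied to CLARION or any particular
# framework) showing how community size, agent type distribution, and epoch
# length can be varied between runs without deploying anything to real users.
@dataclass
class SocialSimulationConfig:
    community_size: int = 10_000          # number of cognitive agents
    epoch_length: int = 100               # interactions per agent per epoch
    epochs: int = 50                      # how long the simulated network matures
    agent_type_distribution: dict = field(
        default_factory=lambda: {"connector": 0.1, "average": 0.8, "isolate": 0.1}
    )

# Re-running a scenario at city scale is a one-line change:
baseline = SocialSimulationConfig()
big_city = SocialSimulationConfig(community_size=1_000_000)
```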
References
Byrne, M. D. (2007). "Cognitive architecture." In Sears, A. & Jacko, J. (Eds.). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, Second Edition (Human Factors and Ergonomics).
Sun, Ron (2006). “The CLARION cognitive architecture: Extending cognitive modeling to social simulation.” In: Ron Sun (ed.), Cognition and Multi-Agent Interaction.
Tomasello, Michael (1999). The Cultural Origins of Human Cognition.