Friday, January 28, 2011

Making jqGrid work with a .NET Web Service pt. 2

This is a follow-up to my post: Making jqGrid work with a .NET Web Service.  I was asked to attach a full working example of the code I wrote in that post.

The attached Visual Studio project demonstrates how to use the GetJqgridJson method with a simple jqGrid and webservice.  (The GetJqgridJson method converts a DataTable to a JSON string suitable for jqGrid consumption.)

Download the Code
Note: To get the example to run, you will need to change the url parameter in the AJAX call on the default.aspx page.  Make sure the url parameter points to the correct location and port on your machine.
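For readers who want the gist without downloading the project: jqGrid consumes JSON with total/page/records/rows fields, where each row carries an id and a cell array. The sketch below is in Python purely for illustration (the actual project is C#/.NET, and the function name, parameters, and sample columns here are invented); it shows the shape of output a GetJqgridJson-style conversion produces.

```python
import json

def to_jqgrid_json(rows, page, total_pages, id_field):
    """Serialize a list of row dicts into the JSON shape jqGrid expects."""
    payload = {
        "total": total_pages,      # total number of pages
        "page": page,              # current page number
        "records": len(rows),      # number of rows returned
        "rows": [
            # each row: its id plus an ordered list of cell values
            {"id": row[id_field], "cell": list(row.values())}
            for row in rows
        ],
    }
    return json.dumps(payload)
```

In the real project, the DataTable's columns play the role of the dict keys above, and the resulting string is returned from the web service to the grid's AJAX call.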

Tuesday, January 25, 2011

A Summary of "Beyond the Interface: Users’ Perceptions of Interaction and Audience on Websites"

Citation
Light, A.; Wakeman, I. “Beyond the Interface: Users’ Perceptions of Interaction and Audience on Websites.” Interacting with Computers 13 (2001): 325-351.

Summary / Assessment
In “Beyond the Interface”, the authors present results of a study to measure user behavior and thought processes during website interaction. The authors’ intent is to describe how the process of interacting with websites brings about two levels of awareness in the user: (1) awareness of the interface, and (2) awareness of the social context beyond the interface.

The study is grounded in several HCI theories that place focus on using the Web as a medium for communication, rather than using it as a tool for problem solving. Perhaps one of the most important theories suggests user interactions follow similar rules to the “turn taking of spoken engagement”. When these rules are violated, users have a hard time understanding the behavior of the software. Another theory leveraged in the study deals with the idea of “ritual constraints”. “Ritual constraints” are social rules inherent in the nature of human interaction. Here, the user is cognizant of the implication of his or her acts, specifically, how the acts are interpreted by others in the social setting.

Data was gathered from twenty participants on various computer-related activities such as downloading software, using search engines, and subscribing to magazines. Accounts were collected using a retrospective interviewing technique (Vermersch’s Explicitation Interviewing Technique). The users’ behavior and thoughts were recorded as audio for analysis. The goal of the analysis was to discover patterns in the users’ accounts and differences between the accounts.

The study’s central finding concerns user awareness. Users are aware of two levels of interaction: one with the interface, and one with the recipients behind the interface. The second awareness seemed to present itself when users were actually entering data into or interacting with the website, as opposed to simply navigating it. The corollary is that developers should adopt a communication metaphor when creating interactive components on a website. The study revealed another interesting point regarding the user’s perception of the recipients of inputted data. Recipient identity is constructed from a combination of the user’s expectations of a commercial brand, the user’s experience with the interactive portions of the site, and the user’s purpose in visiting the site.

The paper concludes with recommendations on how to apply the findings of the study. The findings suggest that interactive websites would benefit from: proper identification of the recipient of the user’s data (especially if the data is personal in nature), confirmation pages after the entry of data, statements about how the recipients will use the submitted data, and an explanation of why the recipient is collecting data that does not appear to assist in the completion of the user’s task.

I think the ideas expounded upon in this paper are of great importance to the HCI professional. They help us to understand not only what users think, but how users think when engaged in discourse with interactive web applications. We can use this information when designing web applications, or any other artifact, to help us predict how users will internalize the artifact (i.e. - what mental models the users will build).

Tuesday, January 18, 2011

Leveraging Social Psychology Theories in Software Requirements Elicitation

I wrote this paper for my Theory and Research in HCI master's course at RPI.  The paper discusses exemplary social psychology theories that can be leveraged in the elicitation of software system requirements. Firstly, social psychology theories such as the covariation model and the halo effect are presented; each theory is then explained in terms of social context; and finally, links between the theories and requirements engineering are explicated.

Direct Link to PDF
http://romalley2000.home.comcast.net/documents/OMalley_Social_Psych_Requirements.pdf

Wednesday, January 12, 2011

Leveraging Cognitive Architectures in Usability Engineering



Introduction
In chapter 5 of "The Human-Computer Interaction Handbook", Byrne discusses how cognitive architectures can be applied in usability engineering (Byrne 95).  He mentions that traditional engineering disciplines are grounded in quantitative theory.  Engineers in these disciplines can augment their designs based on predictions derived from such theory.  In contrast, usability engineers do not have these quantitative tools available and therefore every design must be subjected to its own usability test (Byrne 95).  Computational models, based on cognitive architectures, have the potential to give usability engineers quantitative tools similar to those available in traditional disciplines.  With these tools, usability engineers could potentially quantify the effects of changing attributes of a system (e.g. – changing aspects of a user interface).  In this paper, I discuss additional implications and applications of cognitive architectures in usability engineering.

Testing Economy
There are clear, positive implications in leveraging cognitive architectures in usability engineering.  Most apparent is an economy in testing.

Software testing can be costly, especially if the software is intended for a large population of users.  For example, if the software is to be used by a global audience, languages and other aspects of the target cultural ecosystems need to be considered.  Testing would need to be duplicated to support the variance in users.  Additionally, testing can be costly for large systems containing many functional points.  To meet the goal of building a large and functionally accurate system, multiple usability tests are performed that iteratively shape the software being developed.  Usability tests must be altered, and additional usability tests created, depending on how much the software changes between iterations.  Therefore, in highly iterative development, the total cost of usability testing scales with the number of iterations the software has gone through.  Another consideration is the administrative cost of performing usability testing, which includes engineering test cases, writing test plans, distributing test plans, setting up security access for testers, and coordinating and tabulating test results.

Leveraging a cognitive architecture can help mitigate these costs if we can create realistic cognitive agents that model the user base.  The cost of utilizing people to perform testing would be reduced since cognitive agents could test in their place.  For global applications with a disparate user base, user differences could be simulated; for example, varying cultural dimensions could be modeled within the agents.  Costs from iterative development could also be avoided, as agents constructed for initial tests could simply be reused for subsequent testing.  Overhead involved with administering tests would be lessened since there would be no need to coordinate and distribute testing among large groups of users.

Social Networking System Application and Usability
In his paper, Dr. Ron Sun explains how CLARION, a cognitive architecture, can be applied to modeling social networks (Sun, 2006).  CLARION is particularly well suited to model social networks since aspects of its various subsystems allow the creation of realistic social cognitive agents. CLARION includes a motivational subsystem that models needs, desires, and motivations within agents. More specifically, it can be used to model the physical and social motivations of the agent as the agent interacts with its environment (Sun, 2006).  Additionally, the agent can understand other agents’ motivational structures, thereby promoting cooperation in a social setting. CLARION also includes a meta-cognitive subsystem that orchestrates the interaction of other subsystems within the architecture. This allows an agent to reflect on and modify its own behaviors, an ability that makes social interaction possible (Tomasello 1999). This “self-monitoring” allows agents to more effectively interact with each other by providing a means for the agent to alter behaviors that may prevent social interaction.

With the ability to effectively model human social aspects, we can use cognitive architectures to perform usability analysis on systems that function within a large social setting (for example, a big city's population).  Traditional usability analysis of the effects of such systems might not be possible.  The physical deployment of a system to a large community of people is met with several obstacles.  First and foremost is the cost involved in usability testing.  As mentioned in the section above, there are overhead costs such as coordinating testing and distributing test plans.  Additionally, a system interface would have to be set up for each person in the community to simulate its effects precisely.  Thus, we would incur major expenses without first understanding the potential benefits.  Secondly, the actual coordination of usability testing in a large community would not be feasible, because recruiting the number of individuals required isn't practical.  Finally, there are temporal issues, since a social network matures slowly in real time.  Using a cognitive architecture, we can construct a model of the social network that enables us to avoid these pitfalls.

Along with mitigating the difficulties in usability analysis, there are other benefits to using cognitive architectures. Parameters of the simulated social network can quickly be changed to model real-life scenarios.  (Parameters include community size, agent type distribution, and epoch length.)  The beliefs, goals, and knowledge of the simulated people (cognitive agents) can also be modified.  Finally, since the system is not deployed to actual users, changes do not need to be coordinated and deployed to users.  These benefits allow the social model to be adjusted rapidly and without repercussions when managing shifting user requirements.  Ultimately, being able to effectively manage change leads to a more usable software system.


References
Byrne, M. D. (2007). "Cognitive architecture." In Sears, A. & Jacko, J. (Eds.). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, Second Edition (Human Factors and Ergonomics). (pp. 93-114). Lawrence Erlbaum.

Sun, Ron (2006). “The CLARION cognitive architecture: Extending cognitive modeling to social simulation.” In: Ron Sun (ed.), Cognition and Multi-Agent Interaction. (pp. 1-26) Cambridge University Press, New York.

Tomasello, Michael (1999). The Cultural Origins of Human Cognition. Harvard University Press.

Tuesday, January 11, 2011

Tools for Aligning Mental Models with Conceptual Models

Introduction
As HCI professionals, a major goal is for our users to construct mental models that are closely aligned with our conceptual design models.  If the two models match up, the users will understand the system and how to use it.  We typically don't speak directly with the users, so our only means of communicating the system is through what Norman refers to as the “System Image”.  The system image is composed of the user interface as well as any other artifacts related to the system (e.g. - user manuals and training materials).  Therefore, in order to meet our goal, we must construct an appropriate and functionally accurate system image for our users.

To construct the system image, it is important to (1) understand our users and (2) understand the tasks our users need to perform.  The reading material from this week describes methods for understanding users.  We can leverage ideas from Light and Wakeman to understand how users perceive certain aspects of the system interface with regard to awareness.  We can also leverage the differential psychology theories mentioned in Dillon and Watson to understand our users' cognitive abilities.  In understanding users' tasks, task analysis and requirements elicitation methods can be utilized.  Furthermore, a lot of work has been done on applying cognitive task analysis to draw out tacit knowledge that users may hold about their business processes.

Beyond the Interface
In “Beyond the Interface” (Interacting with Computers, 2001), Light and Wakeman present results of a study to measure user behavior and thought processes during website interaction.  The authors’ intent is to describe how the process of interacting with websites brings about two levels of awareness in the user: (1) awareness of the interface, and (2) awareness of the social context beyond the interface.  The study is grounded in HCI theories that focus on using the Web as a medium for communication, rather than using it as a tool for problem solving.  Perhaps one of the most important theories suggests user interactions follow similar rules to the “turn taking of spoken engagement”.  When these rules are violated, users have a hard time understanding the behavior of the software.  Another theory leveraged in the study deals with the idea of “ritual constraints”: social rules inherent in the nature of human interaction.  Here, the user is cognizant of the implication of his or her acts, specifically, how the acts are interpreted by others in the social setting.

The study’s central finding concerns user awareness.  Users are aware of two levels of interaction: one with the interface, and one with the recipients behind the interface.  The second awareness seemed to present itself when users were actually entering data into or interacting with the website, as opposed to simply navigating it.  The corollary is that developers should adopt a communication metaphor when creating interactive components on a website.  The study revealed another interesting point regarding the user’s perception of the recipients of inputted data.  Recipient identity is constructed from a combination of the user’s expectations of a commercial brand, the user’s experience with the interactive portions of the site, and the user’s purpose in visiting the site.

The paper concludes with recommendations on how to apply the findings of the study.   The findings suggest that interactive websites would benefit from: proper identification of the recipient of the user’s data (especially if the data is personal in nature), confirmation pages after the entry of data, statements about how the recipients will use the submitted data, and an explanation of why the recipient is collecting data that does not appear to assist in the completion of the user’s task.

User Analysis in HCI
In their paper from the Human-Computer Interaction Handbook, Dillon and Watson assert that user analysis in HCI can benefit from the application of differential psychology research.  User analysis is normally relegated to understanding users in general terms such as educational background and technical expertise.  (Nielsen's three-dimensional analysis of users is cited.)  The authors argue that such general user analysis is highly context-sensitive and does not offer solutions that can satisfy unique users or user groups.  The aim of differential psychology is to understand user behavior and specific aptitudes of users such as memory, perceptual speed, and deductive reasoning.

Dillon and Watson go on to describe studies of how work done in differential and experimental psychology was leveraged directly in the design of user interfaces.  One study focused on logical reasoning, the other on visual ability.  They concluded that appropriate design of the interfaces/training material can reduce discrepancies in cognitive abilities amongst users.

Application
I think it's apparent how we can apply the ideas presented in both papers mentioned above. They demonstrate not only how users think, but what users think when engaged in discourse with interactive web applications.  Also demonstrated is how this knowledge can be applied directly to user interface design.  I think the idea of the “conversation” with the system is a huge factor when measuring how closely the user's mental model matches up to the conceptual design model.  I believe it's the best way we can describe software systems.  After all, “conversations” define software use cases, and software use cases define a system.

Monday, January 10, 2011

A Regular Expression for Multiple Forms of Scientific Notation

I needed to create a regular expression to match numbers formatted in scientific notation.  It turns out there are many forms used around the world, so coming up with a regex to match all possible variations was a bit of a challenge.  Well, here goes...

I found the following “acceptable” forms of scientific notation:
123,000,000,000 can be written as:
1.23E+11
1.23e+11
1.23X10^11
1.23x10^11
1.23X10*11
1.23x10*11

0.000001 can be written as:
1.0E-6
1.0e-6
1.0X10^-6
1.0x10^-6
1.0X10*-6
1.0x10*-6

Additionally…
  • A positive or a negative sign may be prefixed to represent a positive or a negative number, respectively. For example:
    • +1.0E-6
    • -1.0E-6
  • The decimal point and the standalone 0 following the decimal point may be omitted. For example:
    • 1E-6
  • The positive sign following the E, e, ^, or * may be omitted. For example:
    • 1.23E11

The regular expression I derived to match all forms mentioned above is:
^[+-]?\d\.?\d*[EeXx](10)?[\^*]?[+-]?\d+$

(Note that inside a character class, every character is an alternative by default, so writing the classes with pipes, as in [E|e|X|x] or [+|-], would also incorrectly match a literal “|”.)
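Here is a quick Python harness covering every variant listed above. (The pattern is written with pipe-free character classes, since inside [] a pipe is matched literally rather than acting as alternation.)

```python
import re

# Matches the scientific-notation variants listed above:
# optional sign, mantissa, E/e/X/x separator, optional "10",
# optional ^ or *, optional exponent sign, exponent digits.
SCI_NOTATION = re.compile(r'^[+-]?\d\.?\d*[EeXx](10)?[\^*]?[+-]?\d+$')

samples = [
    "1.23E+11", "1.23e+11", "1.23X10^11", "1.23x10^11",
    "1.23X10*11", "1.23x10*11", "1.0E-6", "1.0e-6",
    "1.0X10^-6", "1.0x10^-6", "1.0X10*-6", "1.0x10*-6",
    "+1.0E-6", "-1.0E-6", "1E-6", "1.23E11",
]
assert all(SCI_NOTATION.match(s) for s in samples)
assert not SCI_NOTATION.match("1.23")   # no exponent part
```

Note that the pattern is deliberately lenient: it will also accept mixed forms such as "1.23E10^11", which keeps the expression simple at the cost of some strictness.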
 

Saturday, January 8, 2011

Values Engendered by Revision Control


Introduction
This paper presents a Value Sensitive Design (VSD) conceptual investigation of revision control. It focuses on revision control as it is employed when constructing software applications. Firstly, the sociotechnical problem space is explicated by 1) defining revision control and 2) explaining how organizations can implement revision control through the use of specialized tools. Next, implicated values are identified and defined in terms of interactions with version control tools. Finally, stakeholders are identified and the effect of the implicated values on stakeholders is analyzed.

Sociotechnical Problem Space
Revision control (also known as version control) is used to manage changes in documents, source code, or any other type of file stored on a computer. Revision control is typically used in software engineering when many developers are making contributions/changes to source code files. As changes to a source code file are committed, a new version of the file is created. Versions of a file are identified either by date or by a sequence number (i.e. – “version 1”, “version 2”, etc.). Each version of the file is stored for accountability, and stored revisions can be restored, compared, and merged.

The fundamental issue that version control prevents is the race condition that arises when multiple developers read and write the same files. For instance, Developer A and Developer B each download a copy of a source code file from a server to their local PCs. Both developers begin editing their copies of the file. Developer A completes his/her edits and publishes his/her copy of the file to the server. Developer B then completes his/her edits and publishes his/her copy of the file, thereby overwriting Developer A’s file and ultimately erasing all of Developer A’s work.

There are two paradigms that can be used to solve the race condition issue: file locking and copy-modify-merge (Collins-Sussman).

File locking is a simple concept that permits only one developer to modify a file at any given time. To work on a file, Developer A must “check out” the file from the repository, storing a copy of the file on his/her local PC. While the file is checked out, Developer B (or any other developer, for that matter) cannot make edits to the file. Developer B may begin to make edits only after Developer A has “checked in” the file back into the file repository. File locking works but has its drawbacks. It can cause administrative problems; for example, a developer may forget to check in a file, effectively locking it and preventing any other developer from doing work. File locking also causes unnecessary serialization. For example, two developers may want to make edits to different parts of a file that don’t overlap. No problems would arise if both developers could modify the file and then merge the changes together. File locking prevents concurrent updates by multiple developers, so work has to be done in turn.
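The check-out/check-in protocol can be sketched in a few lines of Python. This is a toy model for illustration only (real tools track locks inside the repository itself), and the class and method names are invented.

```python
class LockingRepo:
    """Toy sketch of the file-locking paradigm: one editor at a time."""

    def __init__(self):
        self.files = {}   # file name -> current contents
        self.locks = {}   # file name -> developer holding the lock

    def check_out(self, name, developer):
        # Refuse the checkout if someone else already holds the lock.
        if self.locks.get(name) not in (None, developer):
            raise RuntimeError(f"{name} is locked by {self.locks[name]}")
        self.locks[name] = developer
        return self.files.get(name, "")

    def check_in(self, name, developer, contents):
        # Only the lock holder may publish a new version.
        if self.locks.get(name) != developer:
            raise RuntimeError(f"{developer} does not hold the lock on {name}")
        self.files[name] = contents
        del self.locks[name]
```

In this model, Developer B's check_out call simply fails while Developer A holds the lock, which is exactly the serialization drawback described above.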

In the copy-modify-merge paradigm, each developer makes a local “mirror copy” of the entire project repository. Developers can work simultaneously and independently from one another on their local copies. Once updates are complete, the developers can push their local copies to the project repository where all changes are merged together into a final version. For example: Developer A and Developer B make changes to the same file within their own copies of the project repository. Developer A saves his/her changes to the global repository first. When Developer B attempts to save his/her changes, the developer is informed that their copy is out of date (i.e. – other changes were committed while he/she was working on the file). Developer B can then request that Developer A’s changes be merged into his/her copy. Once the changes are merged, and if there are no conflicts (i.e. – no changes overlap), Developer B’s copy is then saved into the repository. If there are conflicts, Developer B must resolve them before saving the final copy to the project repository.
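The copy-modify-merge workflow can be sketched the same way: each commit records the revision it was based on, and a commit against a stale revision is rejected until the developer updates and merges. This is again a toy Python model with invented names, not how any particular tool implements it.

```python
class MergingRepo:
    """Toy sketch of copy-modify-merge: commits carry their base revision."""

    def __init__(self, contents=""):
        self.revision = 0
        self.contents = contents

    def check_out(self):
        # A working copy is the current contents plus the revision it came from.
        return self.revision, self.contents

    def commit(self, base_revision, contents):
        if base_revision != self.revision:
            # The working copy is out of date: someone committed in the
            # meantime, so the developer must merge the newer revision
            # (resolving any conflicts) before committing.
            raise RuntimeError("out of date: update and merge before committing")
        self.revision += 1
        self.contents = contents
        return self.revision
```

Running the Developer A / Developer B scenario against this model shows B's first commit rejected, after which B merges and commits successfully.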

A development organization may implement revision control through the use of specialized tools dedicated to source code management. There are several open-source and commercial tools available, each with its advantages and drawbacks. Subversion, an open-source software package, is a well-known and widely used tool (Tigris). Subversion (“SVN”) uses a client-server model. Source code files are stored on the SVN server (aka the “repository”) and can be accessed by any PC running the SVN client. This allows many developers to work on source code files from different locations/PCs. Some key features of SVN are: utilization of copy-modify-merge (and file locking if needed), full directory versioning, atomic commits to the repository, and versioned metadata for each file/directory.

Values Defined
The use of a good revision control methodology engenders several values within a development organization. This section identifies and defines some of these values.

By leveraging revision control, an organization fosters collaboration between its developers. Gray defines collaboration as “a process of joint decision making among key stakeholders of a problem domain about the future of that domain” (Gray, p.11). Source control permits developers to work in teams where each individual can contribute to the overall goal of delivering a quality software product. Each individual makes decisions on which piece of code will work best to reach that goal. The future of the domain, or software release, is defined by the collaborative effort of developers within the workspace.

Revision control usage also engenders accountability. In their book, Friedman et al. write: “accountability refers to the properties that ensure the actions of a person, people, or institution may be traced uniquely to the person, people, or institution” (Friedman). Upon committing a change (i.e. - submitting a change to the repository), revision control tools record the responsible developer and timestamp the new version of the file. Moreover, the developer can enter comments to describe the changes that he/she has made. For these reasons, revision control tools provide a good mechanism for accountability, as a complete audit trail of change is recorded.

Another value brought about by revision control is work efficiency. This is especially true when the copy-modify-merge paradigm is utilized. The major advantage of this paradigm is that it allows developers to work individually and concurrently, thereby maximizing available development time. Compare this to the file-locking paradigm, where developers can be locked out of a file at any given time. Additionally, copy-modify-merge minimizes the coordination effort and expense between developers.

Along with the values stated above, revision control also: enhances communication between developers, prevents loss of work through backups, enables better coordination of efforts, manages code merges, and provides code stability by allowing organizations to rollback to previous versions of the code (O'Sullivan).

Stakeholders
The most apparent direct stakeholders are the software developers. Revision control benefits developers by providing them with a more stable work environment. Without revision control, it is very easy to experience loss of work. Race conditions can occur if multiple developers are sharing the same copy of files. The danger of overwriting updates is real, and it grows as the project and organization size increase. Moreover, a complete loss of data can be avoided, as copies of code files are constantly being generated and backed up.

Another benefit for developers is comprehensibility of the code's lifecycle. Developers can review the ancestry of files, and by reading other developers’ comments they can glean the reasoning behind code changes. This information helps ensure that they stay the course of the current branch of development.

In a hierarchical organization, the indirect stakeholders are members of management (e.g. - IT team leaders). IT team leaders are rated on how well their teams meet project timeline and budgetary expectations. Development teams have a better chance of hitting targets with a revision control strategy, as pitfalls that cause delays and unexpected costs can be avoided. Consequently, the benefits of meeting targets cascade up to higher levels of management within the organization.

End users of the constructed software product are also indirect stakeholders. All of the benefits garnered from revision control are ultimately parlayed into building a more usable and functionally accurate software product that is intended for end user consumption.


References
Collins-Sussman, Ben. "Version Control with Subversion". Tigris.org. 10/20/2009 http://svnbook.red-bean.com/en/1.4/index.html.

Friedman, B., Kahn, P. H., Jr., & Borning, A. (2006). Value Sensitive Design and information systems. In P. Zhang & D. Galletta (eds.), Human-Computer Interaction in Management Information Systems: Foundations, (pp. 348-372). Armonk, New York: M.E. Sharpe.

Gray, Barbara. Collaborating: Finding Common Ground for Multiparty Problems. San Francisco: Jossey-Bass, 1989.

O'Sullivan, Bryan. "Making Sense of Revision-control Systems". ACM. 10/20/2009 http://queue.acm.org/detail.cfm?id=1595636.

Tigris. "Subversion Home Page". Tigris.org. 10/19/2009 http://subversion.tigris.org/.

Terry Winograd's Shift from AI to HCI

Introduction
In a more recent paper [1], Terry Winograd discusses the gulf between Artificial Intelligence and Human-Computer Interaction. He mentions that AI is primarily concerned with replicating the human mind, whereas HCI is primarily concerned with augmenting human capabilities. One question is whether or not we should use AI as a metaphor when constructing human interfaces to computers. Using the AI metaphor, the goal is for the user to attribute human characteristics to the interface and communicate with it just as if it were a human. There is also a divide in how researchers attempt to understand people. The first approach, what Winograd refers to as the “rationalistic” approach, attempts to model humans as cognitive machines within the workings of the computer. In contrast, the second approach, the “design” approach, focuses on modeling the interactions between a person and the enveloping environment.

During his career, Winograd shifted interests and crossed the gulf between AI and HCI. In his paper, he mentions that he started his career in the AI field, then rejected the AI approach, and subsequently ended up moving to the field of HCI. He writes “I have seen this as a battle between two competing philosophies of what is most effective to do with computers”. This paper looks at some of the work Winograd has done, and illustrates his shift between the two areas.

Winograd's Ph.D. Thesis and SHRDLU
In his Ph.D. thesis, entitled “Procedures as a Representation for Data in a Computer Program for Understanding Natural Language” [2], Winograd describes a software system called SHRDLU that is capable of carrying on an English conversation with its user. The system contains a simulation of a robotic arm that can rearrange colored blocks within its environment. The user enters into discourse with the system and can instruct the arm to pick up, drop, and move objects. A “heuristic understander” is used by the software to infer what each command sentence means. Linguistic information about the sentence, information from other parts of the discourse, and general information are used to interpret the commands. Furthermore, the software asks for clarification from the user if it cannot understand what the inputted sentence means.

The thesis examines the issue of talking with computers. Winograd underscores the idea that it is hard for computers and humans to communicate since computers communicate in their own terms; the means of communication is not natural for the human user. More importantly, computers aren't able to use reasoning in an attempt to understand ambiguity in natural language. Computers are typically only supplied with syntactic rules and do not use semantic knowledge to understand meaning. To solve this problem, Winograd suggests giving computers the ability to use more knowledge. Computers need to have knowledge of the subject they are discussing, and they must be able to assemble facts in such a way that they can understand a sentence and respond to it. SHRDLU represents knowledge in a structured manner and uses a language that facilitates teaching the system about new subject domains.

SHRDLU is a rationalistic attempt to model how the human mind works. It seeks to replicate human understanding of natural language. Although this work is grounded in AI, there are clear implications for work in HCI. Interfaces that communicate naturally with their users are very familiar and have little to no learning curve. Donald Norman provides several examples of natural interfaces in his book “The Design of Future Things” [3]. One example that stands out is the tea kettle whistle, which offers natural communication that the water is boiling. The user does not need to translate the sound of the whistle from system terms into something he/she understands; it already naturally offers the affordance that the water is ready.

Thinking Machines: Can There Be? Are We?
In “Thinking Machines” [4], Winograd aligns his prospects for artificial intelligence with those of AI critics. The critics argue that a thinking machine is a contradiction in terms: “Computers with their cold logic, can never be creative or insightful or possess real judgement”. He asserts that the philosophy that has guided artificial intelligence research lacks depth and is a “patchwork” of rationalism and logical empiricism. The technology used in conducting artificial intelligence research is not to blame; it is the under-netting and basic tenets that require scrutiny.

Winograd supports his argument by identifying some fundamental problems inherent in AI. He discusses gaps of anticipation: in any realistically sized domain, it is nearly impossible to think of all situations, and all combinations of events arising from those situations. The hope is that the body of knowledge built into the cognitive agent will be broad enough to contain the relevant general knowledge needed for success. In most cases, the body of knowledge contributed by the human element is required, since it cannot be modeled exhaustively within the system. He also writes on the blindness of representation, with regard to language and its interpretation. As expounded upon in his Ph.D. thesis, natural language processing goes far beyond grammatical and syntactic rules. The ambiguity of natural language requires a deep understanding of the subject matter as well as the context. When we de-contextualize symbols (representations), they become ambiguous and can be interpreted in varying ways. Finally, he discusses the idea of domain restriction. Since there is a chance of ambiguity in representations, AI programs must be relegated to very restricted domains. Most domains, or at least the domains AI hopes to model (e.g., medicine, engineering, law), are not restricted. The corollary is that AI systems can give expected results only in simplified domains.

Thinking Machines offers some interesting and compelling arguments against the “rationalist approach”. It supports the idea that augmenting human capabilities is far more feasible than attempting to model human intelligence. This is in line with the “design approach” (i.e., placing focus on modeling interactions between the person and his/her surrounding environment).

Stanford Human-Computer Interaction Group
Winograd currently heads up Stanford's Human-Computer Interaction Group [5]. They are working on some interesting projects grounded in design. One such project, d.tools, is a hardware and software toolkit that allows designers to rapidly prototype physical interaction design. Designers can use the physical components (controllers, output devices) and the accompanying software (called the d.tools editor) to form prototypes and study their behavior. Another project, named Blueprint, integrates program examples into the development environment. Program examples are brought into the IDE through a built-in web search. The main idea behind Blueprint is that it helps to facilitate the prototyping and ideation process by allowing programmers to quickly build and compare competing designs. (More information on these projects can be found on the group's website; the link is below.)


References
[1] Terry Winograd, Shifting viewpoints: Artificial intelligence and human–computer interaction, Artificial Intelligence 170 (2006) 1256–1258.
http://hci.stanford.edu/winograd/papers/ai-hci.pdf

[2] Winograd, Terry (1971), "Procedures as a Representation for Data in a Computer Program for Understanding Natural Language," MAC-TR-84, MIT Project MAC, 1971.
http://hci.stanford.edu/~winograd/shrdlu/

[3] Norman, Donald (2007), The Design of Future Things, New York: Basic Books, 2007.

[4] Winograd, Terry (1991), "Thinking Machines: Can There Be? Are We?," in James Sheehan and Morton Sosna, eds., The Boundaries of Humanity: Humans, Animals, Machines, Berkeley: University of California Press, 1991, pp. 198-223. Reprinted in D. Partridge and Y. Wilks, The Foundations of Artificial Intelligence, Cambridge: Cambridge Univ. Press, 1990, pp. 167-189.
http://hci.stanford.edu/winograd/papers/thinking-machines.html

[5] The Stanford Human-Computer Interaction Group
http://hci.stanford.edu/

Making jqGrid work with a .NET Web Service

(Also see Part 2 of this blog entry for a full working example of the code.)

I tore my hair out trying to get jqGrid to display results from a .NET web service. I ended up creating a method (in c#) to convert a datatable to a JSON string appropriate for consumption by jqGrid. Here is what I did...

First, I wrote the following method to convert a datatable to a jqGrid-appropriate JSON string:

/// <summary>
///     This method converts a DataTable to a JSON string suitable for jqGrid consumption.
/// </summary>
/// <param name="dt">
///     The DataTable to be processed.
/// </param>
/// <returns>
///     A jqGrid-JSON string if the dt parameter has rows; null if the dt
///     parameter is null or has no rows.
/// </returns>
public static string GetJqgridJson(DataTable dt) {
    StringBuilder jsonString = new StringBuilder();
    if(dt != null && dt.Rows.Count > 0) {
        jsonString.Append("{");
        jsonString.Append("\"total\": \"1\",");  //set as 1 for example
        jsonString.Append("\"page\": \"1\",");   //set as 1 for example
        jsonString.Append("\"records\": \"" + dt.Rows.Count.ToString() + "\",");
        jsonString.Append("\"rows\":[");

        for(int i = 0; i < dt.Rows.Count; i++) {
            jsonString.Append("{\"cell\" :[");
            for(int j = 0; j < dt.Columns.Count; j++) {
                //Escape backslashes before quotes so the output stays valid JSON.
                string cell = dt.Rows[i][j].ToString()
                    .Replace("\\", "\\\\")
                    .Replace("\"", "\\\"");
                jsonString.Append("\"" + cell + "\"");
                if(j < dt.Columns.Count - 1) {
                    jsonString.Append(",");
                }
            }

            if(i == dt.Rows.Count - 1) {
                jsonString.Append("]} ");
            } else {
                jsonString.Append("]}, ");
            }
        }

        jsonString.Append("]}");
        return jsonString.ToString();
    } else {
        //Return null if the DataTable is null or has no rows.
        return null;
    }
}
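To make the output shape concrete, here is a small JavaScript sketch of the string GetJqgridJson would produce for a hypothetical two-row, two-column DataTable (the Id and Name columns and their values are made up for illustration); the field names are exactly the ones the grid's jsonReader settings map onto:

```javascript
//Hypothetical GetJqgridJson output for a 2-row DataTable with columns
//Id and Name. "total", "page", "records", "rows", and "cell" are the
//names the jqGrid jsonReader configuration expects.
var jqgridJson = '{"total": "1","page": "1","records": "2",' +
    '"rows":[{"cell" :["1","Alice"]}, {"cell" :["2","Bob"]} ]}';

//Parsing it shows the structure jqGrid consumes: each row object holds
//a "cell" array with one value per DataTable column.
var parsed = JSON.parse(jqgridJson);
console.log(parsed.records);       //record count, as a string
console.log(parsed.rows[0].cell);  //first row's column values
```

With repeatitems set to true, jqGrid reads each "cell" array positionally, so the order of values must match the order of entries in colModel.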

Next, I created a .NET WebMethod that uses the GetJqgridJson method. My WebMethod fetches data as a DataTable, uses GetJqgridJson to convert the DataTable to a JSON string, then returns the JSON string to the browser. (I won't include my WebMethod code here for the sake of brevity.)

When you're creating your WebMethod, just be sure to include the following attribute above your method to ensure you send JSON rather than XML back to the browser:
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]

Finally, I wrote my JavaScript code. Here is some sample code that can be used to set up a jqGrid and fetch the JSON string via AJAX. (Note: The following JavaScript code will not work out of the box. You'll need to update the portions surrounded with "**" to make it work within your project.)
//Set up the grid.
$("**YOUR JQUERY SELECTOR HERE**").jqGrid({
    datatype: 'jsonstring',
    jsonReader: {
        root: "rows",
        page: "page",
        total: "total",
        records: "records",
        repeatitems: true,
        cell: "cell"
    },
    gridview: true,
    colNames: [**YOUR COLUMN NAMES HERE**],
    colModel: [**YOUR COLMODEL HERE**],
    viewrecords: true,
    caption: '',
    loadonce: false,
    loadui: "enabled",
    beforeSelectRow: function(rowid, e) { return false; },
    hoverrows: false,
    height: 200,
    scroll: 1
});
$("**YOUR JQUERY SELECTOR HERE**").clearGridData();

//Fetch the JSON string from the WebMethod. (webservice_object holds
//whatever parameters, if any, your WebMethod expects.)
$.ajax({
    type: "POST",
    url: "**YOUR WEBMETHOD URL HERE**",
    data: JSON.stringify(webservice_object),
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    success: function (msg) {
        var data = msg.d;
        $("**YOUR JQUERY SELECTOR HERE**").jqGrid('setGridParam',
            { datatype: 'jsonstring', datastr: data });
        $("**YOUR JQUERY SELECTOR HERE**").trigger('reloadGrid');
    }
});
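One detail that trips people up: ASP.NET wraps a ScriptMethod's JSON return value in a "d" property, which is why the success handler reads msg.d before handing the string to the grid. Here is a minimal sketch of that unwrapping, using a made-up one-row response body:

```javascript
//ASP.NET AJAX wraps a ScriptMethod's return value in a "d" property.
//This simulates the response body the success handler receives when the
//WebMethod returns the string produced by GetJqgridJson (sample data).
var responseBody = '{"d":"{\\"total\\": \\"1\\",\\"page\\": \\"1\\",' +
    '\\"records\\": \\"1\\",\\"rows\\":[{\\"cell\\" :[\\"1\\",\\"Alice\\"]} ]}"}';

var msg = JSON.parse(responseBody);
var data = msg.d;             //the jqGrid-JSON string itself
var grid = JSON.parse(data);  //what jqGrid sees after setGridParam
console.log(grid.records);    //record count, as a string
```

Note that msg.d is still a string at this point; that is fine here, because the grid is configured with datatype 'jsonstring' and jqGrid parses it itself.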

Thursday, January 6, 2011

C18 Brain Model from 3B Scientific

I recently purchased 3B Scientific's C18 Brain Model for use in academic research. It is a 5-part midsagittally sectioned brain taken from the cast of an actual human brain.

Pluses
  • Taken from the cast of an actual human brain so it's very accurate.
  • Nicely detailed and very clearly labeled. (A user manual with structure names is included.)
  • Constructed out of rubber material, making it virtually unbreakable.
  • The sections are held together by metal connecting pins that snap securely into place. I have seen other, cheaper models that use screws for connectors, making disassembly/assembly tough.
  • Comes with a plastic display stand.
  • Construction is very nice and has a professional feel. Other budget brain models don't have the same visceral build quality.
Minuses
Really none, but...
  • One nitpick I have is that the manual only contains names of the structures; it does not contain pictures or additional descriptive information. This isn't a big deal since I could just look it up on the web, but it would be really nice to have this information right in the manual. (I plan on putting a PDF together that contains pictures and descriptions. I'll post it on this blog shortly.)
  • Most important structures are visible; however, one structure that I'm currently researching, the amygdala, is not. As you can see from the 2nd photo below, the amygdala falls in about the same location as the connecting pin for the temporal lobe and encephalic trunk pieces.

Assembled, with manual.
5 pieces (left lobe stays intact).

I would highly recommend this model based on its realism, build quality, and value. It's definitely something that could be used in the academic and professional arenas.