A quick overview of the proposed architecture for facilitating S3 functionality. This is the sum of our internal tech discussions at the ENCORE lab (Rokham, Ali, Matt) and further development from discussions with potential collaborators, users, and researchers (groups in Toronto, Oslo, Gent, Chicago, etc.)


  • Modularity, Reusability
    • We (Toronto) need to be able to quickly build smartroom "scenarios" to fulfill diverse research goals. (i.e. we work with many different researchers, each having a variety of ideas they want to test in the smart space/classroom setting).
      • Currently building these scenarios is a laborious process; just about everything has to be built from the ground-up for each new scenario (see below for an explanation of what we mean by "scenario").
      • We need something modular that allows us to reuse common functionality between scenarios, only writing new code when necessary to fulfill the parts of the scenario that are qualitatively different from others.
  • Real-time, Asynchronous
    • Because S3 scenarios are played out in real-time, with many students all interacting with the system asynchronously, we need an architecture that is not just capable of asynchrony, but allows us (developers) to easily write code with asynchronous, event-driven behaviour in mind (i.e. standard, imperative programming practices won't do well here; we need something that can respond to an active, unpredictable smartspace environment).
  • Limited Resources
    • Our resources are limited, placing major constraints on what we are able to do. This means that we cannot embark on a large-scale development process; we do not have the luxury to go away for a year to do nothing but foundational programming/engineering work. We need to build things in small steps, with a working, usable system at each step (i.e. a lightweight many-small-iterations development process).
  • Community Adoption
    • Although our primary concern is to build something specifically suited to our needs, within the constraints of our in-house capabilities and competencies, we hope that the architecture we develop might be adopted by other groups. We see a modular, implementation-agnostic approach as highly compatible with wider community collaboration. Two key guiding principles that should make wider adoption possible:
      • Our software should be easy to deploy, with minimal configuration and minimal sensitivity to the running environment (operating system, server architecture, etc.)
      • Communication between the various services (agents, etc.) in the S3 architecture should be loosely coupled, and should be based on an open, community-developed standard (i.e. a common, extensible vocabulary for data interchange).

Proposed Architecture

By way of illustration:

[Diagram: proposed S3 architecture]

A couple of things to note in the above diagram:

Real-Time Space vs. Meta Space
  • We propose a clear separation between the real-time, asynchronous aspects of the system (the "real-time space") and the long-lived meta-services that live outside of the real-time context (the "meta space"):
    • A "real-time space" is instantiated whenever a scenario is run (i.e. when students are all gathered in the room, and the scenario is in progress). Agents and user interfaces (workstations, smartphones, multi-touch tables, visualizations, etc.) all live and communicate within the real-time space.
      • Communication between the components in the real-time space is facilitated via XMPP.
    • Meta-services (shown on the left side of the above diagram) reside outside of the real-time context and provide management and control over resources that live outside of the context of a single "run" (e.g. user accounts and user groups, scenario settings, scenario content and materials, etc.)
      • Meta-services are accessible directly by users (admins, teachers) so they will each likely have their own web-based user interfaces.
      • Meta-services are accessible programmatically (for example by agents running in the real-time space) via RESTful APIs. REST is preferred over XMPP in this case since REST APIs are likely to be easier to implement, while XMPP's asynchronous, stateful properties generally will not be needed outside of the real-time context.
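To make the REST idea concrete, here is a minimal sketch of how an agent might talk to a meta-service. The base URL, endpoint paths, and JSON shape are all hypothetical (nothing in this document fixes them); a canned response stands in for a live HTTP call.

```python
import json
from urllib.parse import urljoin

# Hypothetical base URL for a user/group meta-service (e.g. Rollcall);
# the real endpoint layout would be defined by the service itself.
META_BASE = "http://meta.example.org/"

def group_members_url(run_id, group_id):
    """Build the (hypothetical) REST URL for a group's member list."""
    return urljoin(META_BASE, f"runs/{run_id}/groups/{group_id}/members")

def parse_members(payload):
    """Parse a JSON member-list response into a list of usernames."""
    return [m["username"] for m in json.loads(payload)["members"]]

# Canned JSON standing in for what such a service might return:
canned = '{"members": [{"username": "alice"}, {"username": "bob"}]}'
```

An agent in the real-time space would fetch such a URL with any plain HTTP client, which is precisely why REST is attractive here: no persistent, stateful connection is needed.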

Meta Space
  • Long-lived services; manage resources spanning multiple runs and scenarios
  • REST for programmatic access, web UI for human access
  • Examples: web-based applications such as Rollcall, Drupal

Real-time Space
  • Short-lived agents/processes (start up at the beginning of a run, shut down at the end)
  • XMPP for inter-component communication
  • Examples: agents, student/group workstations, smartphones, multi-touch tables, visualizations

"Groupchat" in the Real-Time Space
  • Components in the real-time space will all, by default, communicate via XMPP's "groupchat" protocol. Each component sends its messages out to a general group channel that all other components listen to, and each component picks out the messages relevant to its own needs.
    • For example, a user (via a smartphone) might type in an observation or an answer to a question. The observation/answer is broadcast out to the groupchat channel. A logging agent might pick up the message and log it to a database for later analysis/datamining. Another agent, responsible for visualization, might pick up the same message, process it, and send the digested data on to a visualization component projecting graphics onto a large screen.
    • This use of XMPP's groupchat facility allows for decoupling of components. In many cases the different components need not be dependent on (or even aware of) each other. A "Logging Agent", for example, can be plugged into the system without any modification to the other components.
      • The idea is that this should provide good modularity, allowing us to add or modify individual components (agents, user input devices, data display devices, etc.) as needed.
      • There will likely be cases where components will want to talk to each other directly, via direct XMPP messages/IQs. Some facility for service broadcast/discovery may need to be implemented here (XMPP's built-in service discovery, XEP-0030, may already cover this).
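The decoupling described above can be illustrated without any real XMPP machinery. The sketch below uses a simple in-process channel as a stand-in for the groupchat room; the agent classes and message fields are illustrative only, but the pattern — every component sees every message and filters for itself — is the one proposed here.

```python
# Not real XMPP: a minimal in-process stand-in for the "groupchat" channel,
# just to illustrate how decoupled components share one broadcast bus.

class GroupChannel:
    def __init__(self):
        self.listeners = []

    def join(self, listener):
        self.listeners.append(listener)

    def broadcast(self, message):
        # Every component receives every message and filters for itself.
        for listener in self.listeners:
            listener(message)

class LoggingAgent:
    """Logs everything it sees; no other component knows it exists."""
    def __init__(self, channel):
        self.log = []  # stands in for a database
        channel.join(self.on_message)

    def on_message(self, message):
        self.log.append(message)

class VisualizationAgent:
    """Only interested in student observations."""
    def __init__(self, channel):
        self.displayed = []
        channel.join(self.on_message)

    def on_message(self, message):
        if message.get("type") == "observation":
            self.displayed.append(message["text"])

channel = GroupChannel()
logger = LoggingAgent(channel)
viz = VisualizationAgent(channel)
channel.broadcast({"type": "observation", "text": "leaf turned brown"})
channel.broadcast({"type": "heartbeat"})
```

Note that plugging in the LoggingAgent required no change to any other component, which is exactly the modularity argument made above.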



Curnit

A "curriculum unit". This might consist of some custom agents, content/materials such as images, text, videos, a manifest of affordances, a script for orchestrating what ought to happen during the run and in what order, and any other resources invoked in the run of this curnit. For example, a curnit designed to test the benefits of using a wiki in teaching students about trigonometry inside a smart-classroom environment might include a set of trig questions and answers (the "content"), a web-based user interface for receiving those questions and submitting individual answers (via XMPP), a web-based wiki "whiteboard" where students can collaborate (perhaps not directly wired in to XMPP), agents for "grading" students and re-grouping them in real time based on performance, etc. Note that there are currently no plans to make curnits into self-contained artifacts, so the notion of a curnit will likely remain conceptual.


Scenario

A particular configuration of a curnit. One scenario of the trig-trial curnit described above might be configured to have students working in groups, while another scenario might be configured to have them work individually. A single scenario may be "run" multiple times (with each run being an instance of the scenario).


Run

An instance of a scenario. A prototypical run will last a few hours at most (although a facility for stopping and resuming a run later will likely be implemented). A run may be scripted, with an explicit list of events, and a start and end (for example, start when students enter a room, stop when they leave or when they're done performing a set of activities). Data generated in each run will be tagged as having originated in that particular run.


Agents

Agents are the brains (the business logic) in a run.

  • Many agents (as individual pieces of code) will be shared between curnits/scenarios, although each curnit will likely include a few agents specifically developed for that curnit alone.
  • The notion of an agent is intentionally loose, but a basic agent will likely consist of a self-contained process or thread connected to the XMPP server in a real-time space.
  • Each agent should be responsible for a specific, well-defined task (e.g. logging of data, re-grouping students, alerting teachers about some pattern of activity in the classroom, etc.)
  • It ought to be possible to implement agents in any language or framework, as long as the agent is able to communicate with other agents and components via XMPP. The goal is to allow teams to develop/program agents by whatever means they are most comfortable with. For example, in Toronto we have team members with competency in Ruby and Java, so we may end up developing agents in one or both of these languages, while another team choosing to adopt our architecture may choose to use Python.
    • Because agents are likely to be event-driven programs, frameworks specifically designed for event-driven programming such as Twisted (for Python), EventMachine (for Ruby), Node.js (for JavaScript), etc. will likely be employed to simplify agent development.
    • A common ontology for the contents of the XMPP messages sent out to groupchat should be developed to ensure compatibility between agents developed in disparate environments (this is actually unlikely to work out, since domain-specific dialects will almost certainly emerge, but at least some effort should be made to maintain cross-compatibility).
  • We envision agents as simple Python or Ruby scripts started individually from the command line. Later on we may develop a framework to manage launching and monitoring of agents (or may borrow SCY's agent framework for this).
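The "common ontology" point above can be made concrete with a message envelope. The field names below are purely illustrative (the actual shared vocabulary would have to be agreed upon by the participating groups), but they show the idea: agents written in different languages agree on a few routing fields, while the payload remains free for domain-specific dialects.

```python
import json
import time

# Hypothetical envelope format; field names are assumptions, not a standard.
def make_envelope(origin, run_id, msg_type, payload):
    """Wrap a payload in a common envelope so agents written in different
    languages can at least agree on the routing fields."""
    return json.dumps({
        "origin": origin,    # which agent/device sent this
        "run": run_id,       # tag data with the run it originated in
        "type": msg_type,    # e.g. "observation", "answer", "orchestration"
        "sent": time.time(),
        "payload": payload,  # type-specific, domain-dialect content
    })

REQUIRED = {"origin", "run", "type", "sent", "payload"}

def is_valid_envelope(raw):
    """Check that an incoming message carries the agreed routing fields."""
    try:
        return REQUIRED <= set(json.loads(raw))
    except (ValueError, TypeError):
        return False
```

Tagging every message with its run id also gives the logging agent the per-run provenance described in the "Run" definition above for free.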

Question Marks

  • How are S3 curnits/scenarios to be scripted? We currently have no facility for this in the above architecture.
    • One possible solution might be to develop an "orchestration agent". This agent would load up the script for the scenario and then enforce it by sending out orchestration messages (e.g. "stop everything while the teacher says some stuff", "everyone submit answers now", etc.)
  • Content development and presentation is currently a laborious process (basically Rokham/Ali code up a custom PHP application for each scenario that presents students with questions and takes their answers). We don't really yet have good plans for dealing with this.
    • One possible solution may be to use a CMS like Drupal or some survey-building software to allow researchers/teachers/admins to build the kinds of question-answer pages that Rokham & Ali are currently building by hand (this is the intention for the "Content Management Service" element in the meta space in the diagram above).
    • Alternatively, we may want to develop our own CMS-like service to handle this (Ali, is this what you meant by "VLE" in your original architecture diagram?)
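The "orchestration agent" idea raised above could be sketched roughly as follows. Everything here is hypothetical — the script format, the event names, and the `send` callback (which stands in for an XMPP groupchat broadcast) are all assumptions, meant only to show that such an agent is little more than a stepper over a declarative script.

```python
# Illustrative scenario script; the real format is an open question.
SCRIPT = [
    {"event": "pause",   "note": "teacher introduces the activity"},
    {"event": "answer",  "note": "everyone submit answers now"},
    {"event": "regroup", "note": "re-group students by performance"},
]

class OrchestrationAgent:
    """Loads a scenario script and enforces it by broadcasting
    orchestration messages, one step at a time."""

    def __init__(self, script, send):
        self.script = list(script)
        self.send = send  # stands in for the XMPP groupchat broadcast
        self.step = 0

    def advance(self):
        """Broadcast the next orchestration message; None when done."""
        if self.step >= len(self.script):
            return None
        msg = {"type": "orchestration", **self.script[self.step]}
        self.step += 1
        self.send(msg)
        return msg

# Drive the whole script, collecting what would have been broadcast:
sent = []
agent = OrchestrationAgent(SCRIPT, sent.append)
while agent.advance():
    pass
```

In a real deployment, `advance()` would be triggered by timers or by teacher input rather than by a tight loop, but the agent itself stays this simple: the complexity lives in the script, not the code.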