Talk:Power TAC Architecture

Main Idea
Extend the message-driven nature of the overall TAC Energy design (JMS messaging between server and broker clients) to also cover the inner core of the TAC server itself. Spring Integration seems to be the natural choice as the implementation base for the necessary refactoring, as it extends Spring Core with an appropriate messaging infrastructure (introducing concepts such as Message, Channel, Endpoint, and Transformer) and comes with out-of-the-box JMS, REST, and JMX adapters / connectors.
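To make the named concepts concrete, here is a minimal plain-Java analogue of a Message, a Channel, and a service activator. This is a conceptual sketch only, not the actual Spring Integration API; all class and method names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Minimal analogue of the Spring Integration concepts named above:
// a Message with payload and headers, a Channel, and a service
// activator that hands channel messages to a plain Java handler.
public class MessagingSketch {

    // Message: payload plus headers (cf. Spring Integration's Message<T>).
    record Message<T>(T payload, Map<String, Object> headers) {}

    // Channel: here just a queue; Spring Integration provides
    // QueueChannel, DirectChannel, and others.
    static class Channel {
        private final BlockingQueue<Message<?>> queue = new LinkedBlockingQueue<>();
        void send(Message<?> m) { queue.add(m); }
        Message<?> receive() throws InterruptedException { return queue.take(); }
    }

    // Service activator: takes a message from the channel and invokes the
    // handler, so the handler itself never touches messaging infrastructure.
    static void activate(Channel channel, Consumer<Message<?>> handler)
            throws InterruptedException {
        handler.accept(channel.receive());
    }

    public static void main(String[] args) throws InterruptedException {
        Channel orderEntry = new Channel();
        orderEntry.send(new Message<>("BUY 10 @ 42.0", Map.of("broker", "b1")));
        activate(orderEntry, m -> System.out.println("handled: " + m.payload()));
    }
}
```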

Example
Refactored order (shout) handling during the execution phase, using Spring Integration on top of the current server architecture.



Incoming messages from brokers are first routed into the appropriate message-processing channel and then handled. In the picture above, the order entry channel is shown in detail. The auction service (depicted in yellow) can be injected via a plugin and automatically wired into the channel via name-based dependency injection. In this way, a concrete AuctionService implementation can be replaced by another simply by uninstalling the current AuctionService (Grails) plugin and installing another plugin. No configuration needs to be changed, and no additional coding is necessary.

The auction service itself may be implemented as a single class or as a more complex construct; it may even be a complete Repast environment. For the overall server environment this is unimportant. The only requirement is that the auction service provides a service method that can be called from the service activator. The method name is configurable, but the method must be able to process a message object.
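The contract described above can be sketched as a single interface: any implementation, whether a trivial class or a wrapper around a whole simulation, satisfies it, so implementations are interchangeable behind the activator. All names here are illustrative, not the actual server API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the single requirement stated above: the auction service,
// whatever its internals, only has to expose a method that accepts a
// message object. Names are invented for illustration.
public class AuctionServiceSketch {

    record Message<T>(T payload) {}

    interface AuctionService {
        void handleMessage(Message<?> message);
    }

    // A trivial implementation...
    static class SimpleAuctionService implements AuctionService {
        final List<Object> orderBook = new ArrayList<>();
        public void handleMessage(Message<?> message) {
            orderBook.add(message.payload());   // record the incoming order
        }
    }

    // ...and a stand-in for a wrapper around a more complex construct,
    // e.g. an embedded Repast model.
    static class RepastBackedAuctionService implements AuctionService {
        public void handleMessage(Message<?> message) {
            // forward the payload into the embedded simulation here
        }
    }

    public static void main(String[] args) {
        // The service activator only sees the interface, so swapping the
        // plugin swaps the implementation without any other change.
        AuctionService service = new SimpleAuctionService();
        service.handleMessage(new Message<>("SELL 5 @ 40.0"));
        System.out.println(((SimpleAuctionService) service).orderBook.size());
    }
}
```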

The main objective is to decouple the server's different business-logic elements. Server queues, message routing and polling, and even the message endpoints can be changed via configuration only.

Carstenblock 18:55, October 3, 2010 (UTC)

OK, as long as we can make it work with Repast models.
I like this idea in the abstract, but I'm uncertain how JMS will interact with Repast models. This is a requirement, I believe, on the server as well as on the agent. This is what Nora was talking about last week.

Grampajohn 21:43, September 29, 2010 (UTC)

Repast (headless) runtime requirements?
Therefore, the (library / jar) dependencies required to execute Repast-generated agent code (the Groovy code Repast generates) need to be determined. I assume that most of the code is pure Groovy, right? As far as I have seen, most of the libraries are only required for graphical execution. The server uses Ivy dependency management to resolve all required libraries. As soon as the required Repast libraries are determined, we can simply add them to the Ivy config in order to execute the code.

Carstenblock 22:43, September 29, 2010 (UTC)

Synchronous vs. Asynchronous interaction
The interaction between broker agents and the server is asynchronous - the server sends and receives messages, and brokers send and receive messages. That makes sense, because you don't want to have the server tied up waiting for an agent to acknowledge a message, and because brokers especially may not be interested in all the incoming messages. This is the JMS model.

Asynchronous interaction works well for coarse-grained components that run more or less independently, like the server and the broker agents. However, most Repast models are made up of a number of fine-grained components (Repast agents) that need to interact with each other directly, in a synchronous manner - they call each other's methods, for example, and expect to see a return value. The set of Repast agents that a given agent can interact with is at least partially determined by the Projection in which they are embedded. In a network projection, you can "see" the agents that are connected by edges, and you can observe their public attributes and call their public methods. An example of what such a network might look like is on the Server Architecture page.
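The synchronous, fine-grained interaction described here can be sketched as follows: an agent reads a neighbor's public attribute through a direct method call and uses the return value immediately, which a fire-and-forget message cannot provide. Class and method names are invented for illustration and are not Repast API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of synchronous agent interaction over a network projection:
// the caller blocks on a plain method call and gets a value back.
public class SyncInteractionSketch {

    static class Agent {
        private final String name;
        private final double demand;
        private final Map<String, Agent> neighbors = new HashMap<>();

        Agent(String name, double demand) { this.name = name; this.demand = demand; }
        void connect(Agent other) { neighbors.put(other.name, other); }

        // A public attribute observable by connected agents.
        double getDemand() { return demand; }

        // Direct, synchronous calls to every neighbor's method.
        double totalNeighborDemand() {
            return neighbors.values().stream().mapToDouble(Agent::getDemand).sum();
        }
    }

    public static void main(String[] args) {
        Agent a = new Agent("a", 1.5);
        Agent b = new Agent("b", 2.5);
        a.connect(b);
        System.out.println(a.totalNeighborDemand());
    }
}
```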

So I don't see that JMS makes sense as the interaction model between model elements within either the server or the brokers. I do think it makes sense between the server and the broker agents.

Grampajohn 18:28, September 30, 2010 (UTC)

Some comments:


 * JMS != messaging architecture: all the calls inside the channels are pure Java VM method calls. The endpoints handle incoming and outgoing messages from / to brokers.
 * JMS is an asynchronous messaging model, while RMI is a synchronous RPC model.


 * There is absolutely no problem including a Repast network of agents calling each other; just think of them as a single node. The only thing the above architecture requires is that one of the agents implements a void handleMessage(Message<?> message) method.


 * At the moment we don't even know whether the Repast agents run inside or outside the server VM. The same goes for the MIS. With the above architecture we don't need to decide that now; instead we can simply configure the invocation method (e.g. direct method invocation via a standard service activator, or cross-VM service activation via REST, JMS, or something else). It's only a matter of configuration.
 * It is hard to imagine why Repast agents or the MIS would NOT run inside the server VM. What is the server other than a set of models (all of which need to be able to interact with each other) on top of a communication and logging infrastructure? If we assume they are not in the same VM, then we will have to create proxies so they can still seem to communicate with each other synchronously. That's the RMI model, not the JMS model. (JEC)
 * One of the architectural requirements is simplicity. A researcher needs to be able to download and start the server without special privileges and with little or no configuration/deployment effort. That argues strongly for a single-process model. (JEC)
 ** Fine, I prefer one single server package over several distinct ones too. But on the other hand you also argue that e.g. visualization should be externalized into a separate server for performance reasons... It is hard to integrate all these requirements into one single platform... ;-) (CBL)


 * Having two different types of server (one single-threaded version, so participants can easily set up locally running instances, and one distributed, performance-optimized version for the central TAC server) should come at the lowest possible development overhead. With the proposed architecture we could add message interceptors to the channels (a pure configuration change), refactor the current visualization functionality into a plugin, write two different visualization endpoints (one for local invocation, one for remote invocation), and maintain two different channel configurations. (CBL)
 * Here's a slightly different approach, the one I prefer: The simulation server is a simple Spring JMS app, without an HTTP server. It is started by a separate Grails frontend, logs in some agents, runs a single simulation session, and exits. The frontend provides access for server configuration, broker login, visualizer startup and data stream, tournament setup/admin, and post-game data access. In a simple research setup, they would run on the same host, but in different processes. In a multi-server tournament or research setup, the frontend and server would most likely run on separate machines. I believe it's important to separate the web access from the simulation server for a number of reasons. The SCM server is not split in this way, and it has caused numerous problems. I could write a whole page on this. (JEC)

Carstenblock 19:04, October 3, 2010 (UTC)
 * Marshalling: currently all JMS messages are XML-encoded strings, in order to remain platform-neutral and to let us quickly change transmission protocols (e.g. to XMPP).
 * That's OK as long as the marshalling overhead is only invoked when communication is across process boundaries. (JEC)
 * Yes, that's the idea (CBL). Internally we work with POJOs, HashMaps, etc. Agreed (JEC)
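The agreed split can be sketched as follows: inside the VM the payload stays a POJO (here a HashMap), and only the outbound endpoint marshals it to an XML string before it crosses the process boundary. This sketch uses the JDK's XMLEncoder purely for illustration; the server may well use a different XML binding.

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.HashMap;

// Marshalling only at the process boundary: POJOs inside the VM,
// XML strings on the wire. Illustrative, not the actual server code.
public class MarshallingSketch {

    static String marshal(HashMap<String, Object> payload) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (XMLEncoder enc = new XMLEncoder(out)) {
            enc.writeObject(payload);           // serialize to XML
        }
        return out.toString();
    }

    @SuppressWarnings("unchecked")
    static HashMap<String, Object> unmarshal(String xml) {
        try (XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(xml.getBytes()))) {
            return (HashMap<String, Object>) dec.readObject();
        }
    }

    public static void main(String[] args) {
        HashMap<String, Object> order = new HashMap<>();
        order.put("quantity", 10);
        order.put("limitPrice", 42.0);

        String wireFormat = marshal(order);        // crosses the process boundary
        HashMap<String, Object> received = unmarshal(wireFormat);
        System.out.println(received.equals(order)); // round-trip preserves the POJO
    }
}
```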