spec-realtime.txt
I-Scream Specification Outline (Realtime side only)
===================================================
ajm4, 30/10/2000
tdb1, 30/10/2000
System Component Startup
************************
CORE
----
The core of the system provides little direct functionality
itself, but instead oversees the running of the system. At
startup this should be the first component to be instantiated.
It essentially acts as a central logging and configuration
distribution site, the "central" in centralised monitoring
system. It may also run the ORB or some components related
to it.
On startup the first thing it should do is read in any
configuration files and start the logging interface, then
prepare to bring the system online. This is done by
starting the various components. If, however, the system
configuration states that particular components are
operating in "distributed" mode, then it blocks until
the various key components have registered that they are
online.
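The blocking behaviour described above could be sketched as
follows. This is a minimal sketch only; the class and method
names are illustrative assumptions, not part of the
specification. It assumes one registration per "distributed"
component, tracked with a CountDownLatch:

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch of the CORE's startup gate: it blocks until
// every component running in "distributed" mode has registered.
public class CoreStartup {
    private final CountDownLatch pending;

    public CoreStartup(int distributedComponents) {
        this.pending = new CountDownLatch(distributedComponents);
    }

    // Called (e.g. over the ORB) by a component once it is online.
    public void registerComponent(String name) {
        pending.countDown();
    }

    // True once every distributed component has registered.
    public boolean isOnline() {
        return pending.getCount() == 0;
    }

    // Blocks until the system can be brought online.
    public void awaitSystemOnline() throws InterruptedException {
        pending.await();
    }
}
```

Components started directly by the core would not need to go
through this gate; only the "distributed" ones do.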
Client Interface
----------------
The Client Interface is essentially just one component with
a series of lists within it. When run it should create an
instance of the Client Interface, bind this to the ORB, and
register with the naming service.
It can then read its configuration in from the CORE and get
a hook on the logging service that the CORE provides.
It then needs to construct the "local clients". These
clients communicate with the system using the same interface
as the external clients, but they are tailored to specific
purposes, such as E-Mail alerts, and SMS alerts. The Client
Interface then listens on a "well known" address for clients
to request a connection.
Filter
------
The filter is broken down into three main subcomponents.
- Filter Manager
The Filter Manager is responsible for managing which
filters are used by the hosts. The Filter Manager is
available at a "well known" location which is pre-
programmed into the hosts. The Filter Manager is
responsible for creating and managing the other
components of the filter system.
- Main Filter
The Main Filter is the single point that links back
into the CORE of the system. It will connect to the
DBI and the CLI to deliver data.
- Filters
There can be multiple Filters, and these are the
"front line" to the hosts. They all link back to the
Main Filter to send data into the system. It is
possible to run these Filters on any machine, allowing
management of data flow.
At startup a Filter Manager object is activated at the "well
known" location (probably a given machine name at a
predefined port). The Filter Manager will create an instance
of the Main Filter, and any Filters under its control. It
should also bind itself to the ORB and register with the
naming service.
It can then read its configuration in from the CORE and get
a hook on the logging service that the CORE provides.
Through some mechanism the other Filters, elsewhere on the
network, will register with the Filter Manager. The Filter
Manager will need to tell each Filter the location of the
Main Filter upon registering. The Filter Manager will then
be in a position to receive connections from hosts and pass
them off to Filters.
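In outline, the registration handshake might look something
like this. All names here are illustrative assumptions, and
locations are represented as plain strings rather than real
ORB references:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Filter registration: a remote Filter announces itself
// to the Filter Manager, which replies with the Main Filter's
// location so the Filter knows where to deliver data.
public class FilterManager {
    private final String mainFilterLocation;
    private final List<String> filters = new ArrayList<>();

    public FilterManager(String mainFilterLocation) {
        this.mainFilterLocation = mainFilterLocation;
    }

    // Called by a Filter elsewhere on the network when it registers;
    // the return value tells it where the Main Filter is.
    public synchronized String registerFilter(String filterLocation) {
        filters.add(filterLocation);
        return mainFilterLocation;
    }

    // The list the Filter Manager draws on when assigning hosts.
    public synchronized List<String> knownFilters() {
        return new ArrayList<>(filters);
    }
}
```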
System Running State
********************
CORE
----
Once the various components are running, the core is
essentially idle, logging information and handling
configuration changes.
Client Interface
----------------
In the running state the Client Interface is always
listening for clients on the "well known" address. When a
connection is received, it is passed to the main Client
Interface and the client is queried about which hosts it
wishes to receive information about. This is then stored in
an internal "routing table" so the Client Interface knows
which hosts to send the information on to. The routing
table is constructed in the following form:
host1: client1 client2 client5
host2: client2
host3: client3 client4
host4: client1 client3
This design is such that when a piece of information is
received from a host, the Client Interface can immediately
see which clients wish to receive this data, without too
much searching.
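As a rough sketch of this routing table (the class and method
names are assumptions, not from an actual implementation), a
map from host to the set of interested clients gives the
immediate lookup described above:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the routing table: host -> set of interested clients,
// so data arriving from a host can be fanned out without searching.
public class RoutingTable {
    private final Map<String, Set<String>> table = new HashMap<>();

    // Record that a client wants data about a host.
    public void subscribe(String host, String client) {
        table.computeIfAbsent(host, h -> new LinkedHashSet<>())
             .add(client);
    }

    // Immediate lookup: which clients get this host's data?
    public Set<String> clientsFor(String host) {
        return table.getOrDefault(host, Collections.emptySet());
    }
}
```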
The "local clients" function just like any other client, in
that they will wish to receive information about hosts they
are interested in; the difference is that they run locally.
However, they will contain a lot more logic, being required
to work out who wants to be alerted about what, and when.
They will also be responsible for sending the alert.
Filter
------
When a host first loads up it knows where to locate the
Filter Manager, because it is at a "well known" location.
The host will open a TCP connection to the Filter Manager to
announce itself. The Filter Manager will then allocate a
Filter to the host. It should base this decision on various
factors, such as the load on the available Filters, and
possibly their location in relation to the host. The host
will then be directed to this Filter for all further
communications.
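The allocation decision might be sketched as follows. The
spec leaves the exact factors open, so this uses a single
integer load metric as a stand-in assumption, and all names
are illustrative:

```java
import java.util.Comparator;
import java.util.List;

// Sketch: the Filter Manager allocates the least-loaded Filter
// to a newly announced host. "Load" here is a placeholder metric,
// e.g. the number of hosts currently assigned to that Filter.
public class FilterAllocator {
    public static class Filter {
        public final String location;
        public final int load;

        public Filter(String location, int load) {
            this.location = location;
            this.load = load;
        }
    }

    // Pick the least-loaded Filter for a newly announced host.
    public static Filter allocate(List<Filter> filters) {
        return filters.stream()
                .min(Comparator.comparingInt(f -> f.load))
                .orElseThrow(() ->
                    new IllegalStateException("no Filters registered"));
    }
}
```

A real allocator could weight in network proximity to the
host as well, as the text suggests.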
As the system runs, the host will send data (maybe over UDP)
to the Filter that it has been allocated to. This choice has
been made because it puts less onus on the host to make a
connection; the data is simply sent out. However, to ensure
that the data isn't just disappearing into the depths of the
network, a periodic heartbeat will occur (at a predefined
interval) over TCP to the Filter. This heartbeat can be used
as a form of two-way communication, ensuring that everything
is ok and, if required, sending any information back to the
host. This heartbeat must occur, otherwise the server may
infer that the host has died.
This could link in to alerting. An amber alert could be
initiated for a host if the server stops receiving UDP
packets, but a red alert raised if the heartbeat doesn't
occur.
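The amber/red inference described above could be sketched
like this; the timeout values, timestamps, and names are all
assumptions for illustration:

```java
// Sketch of alert inference: a missed heartbeat outranks missing
// UDP data, matching the amber/red scheme described in the text.
public class AlertMonitor {
    public enum Level { OK, AMBER, RED }

    // All times in the same unit (e.g. seconds since startup).
    public static Level levelFor(long now,
                                 long lastUdpPacket,
                                 long lastHeartbeat,
                                 long udpTimeout,
                                 long heartbeatInterval) {
        if (now - lastHeartbeat > heartbeatInterval) {
            return Level.RED;   // heartbeat missed: host may have died
        }
        if (now - lastUdpPacket > udpTimeout) {
            return Level.AMBER; // data stopped, but host still alive
        }
        return Level.OK;
    }
}
```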
If, for some reason, the Filter were to disappear, the host
should fall back on its initial discovery mechanism - i.e.
contacting the Filter Manager at its "well known" location.
The host should report that it has lost its Filter (so the
Filter Manager can investigate, and remove it from its list
of Filters), and then the Filter Manager will reassign a new
Filter to the host. Communication can then continue.
The idea of plugins to the Filters has been introduced.
These plugins will implement a predefined plugin interface,
and can be chained together at the Filter. Using the
interface we can easily add future plugins that can do
anything from parsing new data formats, to implementing
encryption algorithms. The Filter will pass incoming data
through each plugin it has available, in turn, and then
finally pass the data on to the Main Filter. The Filter need
not have any real knowledge about the content of the data.
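A minimal sketch of such a plugin interface and chain (the
interface name, method signature, and string payload are
assumptions; the real interface might carry a richer data
type):

```java
import java.util.List;

// Sketch of the Filter's plugin chain: each plugin implements a
// common interface and transforms the data in turn, before the
// result is forwarded to the Main Filter.
public class PluginChain {
    public interface FilterPlugin {
        String process(String data);
    }

    private final List<FilterPlugin> plugins;

    public PluginChain(List<FilterPlugin> plugins) {
        this.plugins = plugins;
    }

    // Pass incoming data through every plugin in turn; the Filter
    // itself never inspects the content.
    public String run(String data) {
        for (FilterPlugin p : plugins) {
            data = p.process(data);
        }
        return data; // would then go on to the Main Filter
    }
}
```

Because the chain only depends on the interface, future
plugins (new parsers, decryption, etc.) slot in without
changing the Filter itself.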