Client API

Information contained in this document is subject to change without notice. Complying with all applicable copyright laws is the responsibility of the user. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without express written permission from Green Energy Corp.


Document Control #: 02.004.000.000.000

Release Date: 20111220


1. Overview
2. Connecting
2.1. Settings
2.2. Establishing a connection
2.3. Client Log-in
3. Service Interfaces
3.1. Get service
3.2. Service request
3.3. Service Naming Conventions
4. Service Subscriptions
4.1. Subscription event acceptor
4.2. Service request
4.3. Canceling subscriptions
5. Error Handling
6. Protobuf Format

Chapter 1. Overview

Reef uses AMQP for transport and Google Protocol Buffers to serialize messages. The combination of these two technologies comprises the interface to the Reef service layer.

Figure 1.1. Client, broker, services interaction


The Java client API handles details of maintaining a connection to an AMQP broker, routing service requests to their named AMQP exchanges, providing the asynchronous delivery of service message events, and building the proper request headers.

For the user, the client API involves the management of three central objects: ConnectionFactory, Connection, and Client.

  • ConnectionFactory: Provides a means for creating connections to the Reef services.

  • Connection: Maintains a connection to the server, handles the threading of asynchronous communication, and acts as a Client factory.

  • Client: Represents a logged-in service session. It is used to get service interfaces in order to make service requests.

Client objects must be used by a single thread at a time; Connection objects are thread-safe.
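
A minimal sketch of how these three objects fit together is shown below; the individual steps are covered in detail in the following chapters, and the configuration file paths are placeholders as in the later examples.

// Load settings (see Chapter 2); "file path" is a placeholder
AmqpSettings amqp = new AmqpSettings("file path");
UserSettings user = new UserSettings("file path");

// ConnectionFactory -> Connection -> Client
ConnectionFactory connectionFactory = new ReefConnectionFactory(amqp, new ReefServices());
Connection connection = connectionFactory.connect();
Client client = connection.login(user);

// ... acquire service interfaces with client.getService(...) and make requests ...

// Clean up in the reverse order of creation
client.logout();
connection.disconnect();
connectionFactory.terminate();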

Chapter 2. Connecting

2.1. Settings

The first step to opening a connection is providing the necessary configuration information. The AmqpSettings object contains connection settings for the broker, and the UserSettings object contains username/password information for client login.

Both can be loaded from configuration files:

Example 2.1. Loading Settings

// Load broker settings from config file
AmqpSettings amqp = new AmqpSettings("file path");

// Load user settings (login credentials) from config file
UserSettings user = new UserSettings("file path");
                


2.2. Establishing a connection

To establish a connection, a ConnectionFactory must first be instantiated (the ReefConnectionFactory class is the default implementation). It is configured with an AmqpSettings configuration and an instance of ServicesList, which provides bindings for the actual services.

The connect method on the ConnectionFactory interface is then used to acquire a connection to the server.

Example 2.2. Establishing a Connection

// Create a ConnectionFactory by passing the broker settings. The ConnectionFactory is
// used to create a Connection to the Reef server
ConnectionFactory connectionFactory = new ReefConnectionFactory(amqp, new ReefServices());

// Connect to the Reef server; this may fail if the broker cannot be reached
Connection connection = connectionFactory.connect();
                


2.3. Client Log-in

Once the connection has been established, the next step is to log in using the user authorization credentials contained in the UserSettings class. The login method returns a Client object, which can be used to acquire service interfaces.

The logout method is used to drop the authorization when the Client is no longer being used.

Example 2.3. Acquiring a Client

// Login with the user credentials
Client client = connection.login(user);

// Work with client...

// Log out, removing auth token from client and expiring it on the server
client.logout();
                


Chapter 3. Service Interfaces

Interfaces are defined and grouped by function (MeasurementService, EventService, EntityService, ...) but there is also an aggregate interface which provides all functions in all of the service interfaces: AllScadaService.

This aggregate interface is what is actually instantiated, but it is recommended that user objects and functions be passed the minimal interface that provides the functions they need. Passing the AllScadaService interface to all consumers of services can make testing and reasoning about the code more difficult.

Caution

Some IDEs do not handle a single interface with as many functions as AllScadaService very well. Casting to the service interface that has only the functions needed usually eliminates these errors.
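
For example, a minimal sketch of handing a consumer only the interface it needs (this assumes AllScadaService extends the individual service interfaces, so the second assignment compiles without a cast):

// Acquire the narrow interface directly when that is all the code needs
MeasurementService measurements = client.getService(MeasurementService.class);

// If an aggregate reference is already in hand, it can be assigned to the narrower
// interface (assumes AllScadaService extends MeasurementService, so no cast is needed)
AllScadaService services = client.getService(AllScadaService.class);
MeasurementService measurementsView = services;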

One of the most important conventions is that, unless explicitly stated in the documentation, null will not be returned from API functions. null should never be passed as an argument to any service API function. Lastly, the only exceptions that can come out of any API function are ReefServiceExceptions. There is a high-level try/catch block on every function call that catches RuntimeExceptions and wraps them in an InternalClientError exception.
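
In practice this means a caller only needs to handle ReefServiceException around service calls. A minimal sketch, using the PointService request introduced in the next two sections:

PointService service = client.getService(PointService.class);

try {
    // "get" calls never return null; failures surface as ReefServiceExceptions
    List<Point> points = service.getPoints();
} catch (ReefServiceException rse) {
    // Covers service-side errors as well as local RuntimeExceptions,
    // which the client wraps in an InternalClientError
    System.out.println("Service request failed: " + rse.getMessage());
}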

3.1. Get service

Service interfaces are acquired by calling the getService method on the Client interface and providing the class for the desired service. A full listing of service interfaces can be found in the service client API Javadoc.

Example 3.1. Service Interface

// Get service interface for points
PointService service = client.getService(PointService.class);
                


3.2. Service request

Each service interface contains methods that represent the available service requests. A listing of available methods for each service can be found in the Javadoc for the service client API.

Example 3.2. Service method

// Retrieve list of all points
List<Point> pointList = service.getPoints();
                


3.3. Service Naming Conventions

The Service APIs have been written with names and types that are as clear as possible, but there are some conventions used across the APIs that are difficult to express in the Java type system. Most of the function names in the APIs follow a similar pattern:

{result} {verb}{ServiceType}{s}[ByX]({arguments})

API function names also include the type of resource they operate on, referred to here as ServiceType. Normally this is the name of the Protobuf type that will be returned to the application. In cases where the Protobuf class name does not exactly match the behavior of the function, a more descriptive name is used. For example, when searching Alarms we are usually only concerned with "active" alarms, so the function name is getActiveAlarms(int limit).

Generally the return type of the function is indicated by the plurality of the ServiceType name. Functions that return a list of objects have ServiceTypes ending with "s", such as getActiveAlarms or getCommands. Functions returning a single object use the singular form, such as getAlarmById(id) or getCommandByUuid(uuid).

When there are a number of ways to search for, or specify, an object, we differentiate between them by adding a qualifier to the function name. Example qualifiers are "ByName", "UsedByEntity", and "OnCommand". Functions that take an argument but have no qualifier generally key off the id of the object.

Most requests are prefixed with a verb indicating the type of operation being performed. Common verbs and their semantics and conventions:

  • get: Retrieve the matching objects from the service. If the return type is a single Protobuf object and a matching entry is not found, a ReefServiceException will be thrown; null is not returned. If the return type is a list, it may have zero or more entries.

  • find: Search for matching objects in the service. If no match is found, null is returned to the client. If the return type is a list, it may have zero or more entries. (The contrast with "get" is shown in the sketch after Example 3.3.)

  • subscribeTo: Subscribe operations have the same semantics as a "get" operation with the same arguments, but they return a SubscriptionResult object that includes the "get" result and a Subscription object for the updates to those resources.

  • create: Attempt to create the matching object, returning the created object. "create" calls are not idempotent; calling a "create" function again will make a new instance, or fail if multiple versions of a similar object are not allowed.

  • delete: Delete the matching object(s) from the server, returning the deleted object(s). If the object is not found, a ReefServiceException will be thrown. "delete" calls are not idempotent; a second call with the same arguments will generally fail.

  • clear: Delete the matching object(s) from the server, returning the deleted object(s). Does not fail if the object cannot be located.

  • set: Update an object to have new properties, returning the updated object. Also used for idempotent create operations; "set" calls are idempotent.

Example 3.3. Naming examples

// service API naming pattern
// {result} {verb}{ServiceType}{s}[ByX]({arguments})

// a "get" operation with pluralized type returning a list
List<Command> getCommands();

// a "get" operation with a "ByName" qualifier to select by name
Command getCommandByName("CommandName");

// a "create" operation that will create a new lock or fail
CommandLock createCommandExecutionLock(command);

// a "delete" operation
CommandLock deleteCommandLock(access);
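
The practical difference between "get" and "find" is how a missing object is reported. A minimal sketch (service is assumed to be a command-capable service interface acquired via getService, and findCommandByName is a hypothetical "find" counterpart used only to illustrate the convention):

// "get": throws a ReefServiceException if no Command with that name exists
try {
    Command command = service.getCommandByName("CommandName");
} catch (ReefServiceException rse) {
    // not found, or another service error
}

// "find" (hypothetical counterpart): returns null instead of throwing when nothing matches
Command maybeCommand = service.findCommandByName("CommandName");
if (maybeCommand == null) {
    // no match; handle the absence explicitly
}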

Chapter 4. Service Subscriptions

All modifications to resources in Reef produce service events, which are notifications that an object was added, modified or removed. This allows a large class of applications to be written in an entirely event-driven manner, which generally leads to simpler design and better performance.

4.1. Subscription event acceptor

Receiving subscription events involves first creating a class to accept call-backs for the events. This is done using the SubscriptionEventAcceptor interface, which contains the method onEvent. The onEvent method is called with a SubscriptionEvent, which contains two properties: the event type (whether the resource was added, modified or removed) and the resource value itself.

Example 4.1. Subscription Event Acceptor

public static class MeasurementSubscriber implements SubscriptionEventAcceptor<Measurement> {

    @Override
    public void onEvent(SubscriptionEvent<Measurement> measurementSubscriptionEvent) {

        // Type of the Event (ADDED, MODIFIED, REMOVED)
        Envelope.SubscriptionEventType eventType = measurementSubscriptionEvent.getEventType();

        // Measurement associated with the event
        Measurement measurement = measurementSubscriptionEvent.getValue();
    }
}
                


4.2. Service request

Making the subscription request itself is as simple as calling the right method on the service interface. Instead of returning service objects themselves, subscription methods return SubscriptionResult objects. The result objects contain both the immediate results for the request--the state of the system at the time the request was filled--and the means to begin asynchronously receiving subscription events.

Example 4.2. Subscription request

List<Point> pointList = services.getPoints();

SubscriptionResult<List<Measurement>, Measurement> result = services.subscribeToMeasurementsByPoints(pointList);
                


The immediate results of the request can be obtained using the getResult method:

Example 4.3. Subscription result

List<Measurement> currentMeasurements = result.getResult();
                


Receiving subscription events involves first instantiating the callback object (SubscriptionEventAcceptor). The getSubscription method on SubscriptionResult provides a Subscription object, which manages the lifecycle of the subscription. Passing the callback object into the start method will cause it to begin receiving events.

Example 4.4. Starting the subscription

// Build a MeasurementSubscriber callback to accept subscription events
MeasurementSubscriber subscriber = new MeasurementSubscriber();

// Get the Subscription object from the SubscriptionResult
Subscription<Measurement> subscription = result.getSubscription();

// Start the subscription, providing the MeasurementSubscriber as a callback
subscription.start(subscriber);
                


Since the Subscription will only be notified of changes, it is important that the initial results returned by a subscription request be processed before starting the subscription. All subscription events that are received are guaranteed to have occurred after the immediate results were returned. Events begin queueing on the broker immediately; the call to start() simply signals the client code's willingness to begin accepting them.
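
Putting the previous examples together, the recommended ordering is roughly as follows (a minimal sketch; processMeasurement stands in for whatever the application does with each measurement):

// 1. Process the immediate results before accepting any events
List<Measurement> currentMeasurements = result.getResult();
for (Measurement measurement : currentMeasurements) {
    processMeasurement(measurement); // hypothetical application-specific handler
}

// 2. Only then start the subscription; every event delivered from this point
//    on is guaranteed to have occurred after the immediate results
result.getSubscription().start(new MeasurementSubscriber());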

4.3. Canceling subscriptions

Each Subscription object is tied to a queue on the message broker. The queue is a "limited" resource, and clients should take care to manage subscriptions as they would a file handle or network connection. When a SubscriptionResult is returned from a service function call, the Subscription has been configured and is consuming resources on the broker regardless of whether start() is ever called on it. If we do not want to start the Subscription (if, for example, the initial state wasn't valid), we need to call cancel() on the subscription object before returning a failure. It is also very important to cancel() the Subscription when it is no longer needed. Subscriptions will automatically be cleaned up when the client disconnects. However, in a long-running application, subscriptions that are not canceled will accumulate and slow down the message broker as it enqueues messages that will never be delivered.

Example 4.5. Canceling the subscription

// Cancel the Subscription, cleaning up the resource
subscription.cancel();
                


Manually managing Subscriptions and making sure they get canceled can be difficult, especially when handling exceptions. Client applications can implement SubscriptionCreationListener, which is notified as each subscription is created, making subscriptions easier to track and cancel on shutdown.
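
Short of implementing SubscriptionCreationListener, a simple pattern is to collect each Subscription as it is created and cancel them all on shutdown. A minimal sketch of that bookkeeping:

// Track subscriptions as they are created so they can all be cleaned up later
List<Subscription<?>> openSubscriptions = new ArrayList<Subscription<?>>();

SubscriptionResult<List<Measurement>, Measurement> result = services.subscribeToMeasurementsByPoints(pointList);
openSubscriptions.add(result.getSubscription());

// ... on shutdown (or when abandoning a subscription after an error) ...
for (Subscription<?> subscription : openSubscriptions) {
    subscription.cancel();
}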

Chapter 5. Error Handling

Service requests may experience errors, which are thrown as Java exceptions. These can occur for local reasons (e.g. RESPONSE_TIMEOUT) or can represent an error raised by the service and tunneled back through the response (e.g. BAD_REQUEST, NOT_ALLOWED). All service errors are sub-classed from ReefServiceException, but more specific types of exceptions are thrown to make exception handling easier.

In the event of an error, it's important to clean up the Connection and ConnectionFactory objects.

Example 5.1. Cleaning up after errors

ConnectionFactory connectionFactory = new ReefConnectionFactory(amqp, new ReefServices());

// Prepare a Connection reference so it can be cleaned up in case of an error
Connection connection = null;

try {

    // Connect to the Reef server; this may fail if the broker cannot be reached
    connection = connectionFactory.connect();

} catch(ReefServiceException rse) {

    // Handle ReefServiceException, potentially caused by connection, login, or service request errors
    System.out.println("Reef service error: " + rse.getMessage() + ". check that Reef server is running.");

} finally {

    if(connection != null) {

        // Disconnect the Connection object, removes clients and subscriptions
        connection.disconnect();
    }

    // Terminate the ConnectionFactory to close threading objects
    connectionFactory.terminate();
}
                


Chapter 6. Protobuf Format

All data exchanged between applications and Reef is encoded and transferred using the Protobuf library developed and released by Google. Protobuf provides a strongly typed but language-neutral format for data declaration and transport that can be used across languages and platforms. The definitions of the resource types exposed in Reef are available in the schema/proto directory of the source distribution. Each resource definition includes comments explaining each type and field in detail and when it will be populated, but not necessarily how to use it.

The standard ways each resource can be used are exposed via the Java Service APIs. Because Protobuf lacks a good equivalent to JavaDoc, we have decided that the best way to document the system is with the JavaDoc of the Service API. As a convenience, we have embedded the protobuf files themselves in the javadoc package, in the org.totalgrid.reef.client.service.protodoc.* packages.

Caution

Many of the Protobuf fields are "optional" to support querying the services using the same objects; this may change in future versions to use "required" in more locations and a different querying method. When objects are returned from the services they are generally "fully populated", but we recommend using assertions in application code to verify that all of the expected fields are populated.
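
Protobuf-generated Java classes provide has-methods for optional fields, which can back such assertions. A minimal sketch (hasName()/getName() are illustrative; consult the definitions in schema/proto for the actual field names):

for (Point point : service.getPoints()) {
    // Verify that an optional field this application depends on was actually populated
    if (!point.hasName()) {
        throw new IllegalStateException("Expected Point.name to be populated");
    }
    System.out.println("Point: " + point.getName());
}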