The Recursive InterNetworking Architecture (RINA) is a computer network architecture that unifies distributed computing and telecommunications. RINA’s fundamental principle is that computer networking is just Inter-Process Communication (IPC). RINA reconstructs the overall structure of the Internet, forming a model that comprises a single repeating layer, the Distributed IPC Facility (DIF), which is the minimal set of components required to allow distributed IPC between application processes. RINA inherently supports mobility, multihoming and Quality of Service without the need for extra mechanisms, provides a secure and programmable environment, fosters a more competitive marketplace, and allows for seamless adoption.

1. Illustration of the RINA structure: DIFs and internal organisation of IPC Processes (IPCPs)

A.1 Overview

RINA is the result of an effort to work out the general principles of computer networking that apply in every context. RINA is the specific architecture, implementation, testing platform and, ultimately, deployment of this theory. The theory is informally known as the “IPC model”, although it also deals with concepts and results that are generic to any distributed application and not just to networking. RINA is structured around a single type of layer – the DIF – that repeats as many times as needed by the network designer (Figure 1). In RINA all layers are distributed applications that provide the same service (communication flows between distributed applications) and have the same internal structure. The instantiation of a layer in a computing system is an application process called an IPC Process (IPCP). All IPCPs have the same functions, divided into data transfer (delimiting, addressing, sequencing, relaying, multiplexing, lifetime termination, error check, encryption), data transfer control (flow and retransmission control) and layer management (enrollment, routing, flow allocation, namespace management, resource allocation, security management). The functions of an IPCP are programmable via policies, so that each DIF can adapt to its operational environment and to different application requirements.

A.1.1 The DIF service definition

The DIF service definition provides the abstract description of an Application Programming (or Programmer’s) Interface (API) as seen by an Application Process using a DIF (specific APIs are system-dependent and may take into account local constraints; in some cases there may not be an API at all, but an equivalent means of achieving the same interactions). The Application Process might be an IPC Process, reflecting the recursive nature of RINA (a DIF can be used by any distributed application, including other DIFs). All DIFs provide the same service: flows. A flow is the instantiation of a communication service between two or more application process instances. The DIF API allows applications to operate upon flows using the following four operations (a minimal illustrative sketch of this API follows the list):

  • Allocate: Allows an application to request a flow to a destination application, providing the destination application name and the desired characteristics of the flow (statistical bounds on loss and delay, in-order delivery, minimum capacity, etc.). If the flow allocation is successful, the DIF returns a port-id, which is a local handle to the flow.
  • Write: Write a Service Data Unit (SDU) to the flow identified by port-id. The application writes a full SDU of bytes in a single transaction. The integrity of the SDU is maintained by the DIF, which will try to deliver the whole SDU to the receiving application instance(s). Applications may accept delivery of incomplete or partial SDUs (this can be specified via the flow allocation request). The DIF may block the application or return an error on write if the DIF’s flow control or congestion management functions indicate it should do so.
  • Read: Read an SDU from a flow identified by port-id.
  • Deallocate: Causes the DIF to terminate the flow and free all the resources associated with it.
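
To make the four operations tangible, here is a minimal, hypothetical sketch of the DIF service API in Python. The names (`Dif`, `FlowSpec`) and the in-memory loopback behaviour are illustrative assumptions only; a real DIF would move SDUs between application instances and enforce the requested flow characteristics.

```python
from dataclasses import dataclass

@dataclass
class FlowSpec:
    """Illustrative flow characteristics an application may request (hypothetical fields)."""
    max_loss_rate: float = 1.0         # statistical bound on loss (1.0 = no bound)
    max_delay_ms: float = 0.0          # statistical bound on delay (0.0 = no bound)
    in_order_delivery: bool = False
    min_capacity_bps: int = 0
    partial_delivery_ok: bool = False  # accept delivery of incomplete SDUs

class Dif:
    """Hypothetical local view of a DIF, exposing the four service primitives."""
    def __init__(self) -> None:
        self._flows: dict[int, list[bytes]] = {}  # port-id -> queued SDUs (toy stand-in)
        self._next_port = 1

    def allocate(self, dest_app_name: str, spec: FlowSpec) -> int:
        """Request a flow to dest_app_name; returns a port-id (local handle) on success."""
        port_id = self._next_port
        self._next_port += 1
        self._flows[port_id] = []
        return port_id

    def write(self, port_id: int, sdu: bytes) -> None:
        """Write one whole SDU to the flow; the DIF preserves its integrity."""
        self._flows[port_id].append(sdu)

    def read(self, port_id: int) -> bytes:
        """Read one SDU from the flow (blocking and error semantics omitted)."""
        return self._flows[port_id].pop(0)

    def deallocate(self, port_id: int) -> None:
        """Terminate the flow and free the resources associated with it."""
        del self._flows[port_id]

# Toy usage: a real DIF would move SDUs to the remote application instance.
dif = Dif()
port = dif.allocate("video-server", FlowSpec(in_order_delivery=True))
dif.write(port, b"hello")
print(dif.read(port))   # b'hello' (loopback only in this toy sketch)
dif.deallocate(port)
```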

A.1.2 The nature of layers (DIFs)

In contrast with traditional network architectures, in which layers have been defined as units of modularity, in RINA layers (DIFs) are distributed resource allocators [96]. It is not that layers perform different functions; they all perform the same functions at different scopes, over the different ranges of environments the network is targeted at (a single link, a backbone network, an access network, an internet, a Virtual Private Network (VPN), etc.). The scope of each layer is configured to handle a given range of bandwidth, Quality of Service (QoS), and scale: a classic case of divide and conquer. Layers manage resources over a given range, and the policies of each layer are selected to optimise that range, bringing programmability to every relevant function within the layer. How many layers are needed? It depends on the range of bandwidth, QoS, and scale: simple networks have two layers; simple internetworks, three; more complex networks may have more. This is a network design question, not an architecture question.

A.1.3 Internals of a DIF: only two protocols required

One of the key RINA design principles has been to maximise invariance and minimise discontinuities. In other words, extract as much commonality as possible without creating special cases. Applying the operating-systems concept of separating mechanism and policy, first to the data transfer protocols and then to the layer management machinery (usually referred to as the control plane), it turns out that only two protocols are required within a layer:

  • A single data transport protocol framework that supports multiple policies and allows for different concrete syntaxes (lengths of the fields in the Protocol Data Units (PDUs)). This protocol is called EFCP – the Error and Flow Control Protocol.
  • A common application protocol that operates on remote objects used by all the layer management functions. This protocol is called CDAP – the Common Distributed Application Protocol.

Separation of mechanism and policy also provided new insights about the structure of those functions within the layer, depicted in Figure 1. The primary components of an IPC Process can be divided into three categories: a) Data Transfer, decoupled through a state vector from b) Data Transfer Control, decoupled through a Resource Information Base from c) Layer Management. These three loci of processing are characterised by decreasing frequency of execution and increasing computational complexity (simpler functions execute more often than complex ones).

  • SDU Delimiting. The integrity of the SDU written to the flow is preserved by the DIF via a delimiting function. Delimiting also adapts the SDU to the maximum PDU size; to do so, it comprises the mechanisms of fragmentation, reassembly, concatenation and separation (a toy fragmentation/reassembly sketch follows this list).
  • Error and Flow Control Protocol (EFCP). This protocol is based on Richard Watson’s work (1978) and separates mechanism and policy. There is one instance of the protocol state for each flow originating or terminating at this IPC Process. The protocol naturally cleaves into Data Transfer (sequencing, detection of lost and duplicate PDUs, identification of parallel connections), which updates a state vector; and Data Transfer Control, consisting of retransmission control (acknowledgements) and flow control.
  • RMT, Relaying and Multiplexing Task. It makes forwarding decisions on incoming PDUs and multiplexes multiple flows of outgoing PDUs onto one or more (N-1) flows. There is one Relaying and Multiplexing Task (RMT) per IPC Process.
  • SDU Protection. It provides integrity/error detection (e.g. a Cyclic Redundancy Check (CRC)), encryption, compression, etc. Potentially there can be a different SDU Protection policy for each (N-1) flow.
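
As a toy illustration of the fragmentation and reassembly mechanisms of delimiting (referenced from the SDU Delimiting item above), the sketch below splits an SDU into user-data fields of bounded size and rebuilds it; concatenation, separation and the actual EFCP encoding are omitted.

```python
def fragment(sdu: bytes, max_user_data: int) -> list[bytes]:
    """Split one SDU into user-data fields of at most max_user_data bytes."""
    return [sdu[i:i + max_user_data] for i in range(0, len(sdu), max_user_data)] or [b""]

def reassemble(fragments: list[bytes]) -> bytes:
    """Rebuild the original SDU; ordering is assumed to be ensured by EFCP sequencing."""
    return b"".join(fragments)

sdu = b"x" * 3000
frags = fragment(sdu, max_user_data=1400)   # -> 3 fragments of 1400, 1400 and 200 bytes
assert reassemble(frags) == sdu
```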

The state of the IPC Process is modelled as a set of objects stored in the Resource Information Base (RIB) and accessed via the RIB Daemon. The RIB imposes a schema over the objects modelling the IPCP state, defining which Common Distributed Application Protocol (CDAP) operations are available on each object and what their effects will be. The RIB Daemon provides all the layer management functions (enrollment, namespace management, flow allocation, resource allocation, security coordination, etc.) with the means to interact with the RIBs of peer IPCPs. Coordination within the layer uses CDAP.


2. Names and addresses in RINA

A.1.4 Naming and addressing

Figure 2 illustrates the main entities that are named in RINA. Applications are assigned location-independent names that identify the whole distributed application (a Distributed Application Facility (DAF) or a DIF name, since a DIF is also a distributed application), a subset of the distributed application members or individual members (specific application process instances). Application names are unique within the application namespace (several, non-overlapping application namespaces may exist). When applications request a flow allocation to a DIF, they provide the destination application name as one of the arguments. If the flow allocation succeeds, the application is given back a port-id, which is a local identifier of the flow.

IPC Processes are also application processes, therefore they have application names. However, the scope of the application namespace may be much larger than the number of IPCPs within a layer; and application names are not designed to facilitate routing within a specific layer. Therefore it is useful to assign the IPCP a synonym, called an address, which is a location-dependent but route-independent name that facilitates locating the IPCP within the DIF. IPCPs can be assigned multiple addresses. Addresses are unique within a DIF; each DIF maintains its own address namespace. IPCPs exchange traffic with lower-level DIFs via port-ids, the same way general-purpose applications do.

Each flow provided by a DIF is internally implemented by means of one EFCP connection at a time. Each EFCP connection is identified by a pair of source and destination connection-endpoint ids (cep-ids), which identify the source and destination instances of the EFCP protocol machines processing the PDUs for that connection. Port-ids and cep-ids are tied together via a local binding that can change during the flow’s lifetime. Decoupling port-ids from cep-ids has important security implications, as explained below. QoS-ids identify the QoS cube to which the PDUs of the EFCP connection belong. All PDUs belonging to the same QoS cube will receive the same treatment within the DIF.

A.1.5 Consistent QoS model across layers

As shown in Figure 1, in RINA all layers provide the same service API to their users. This API allows users of a layer to request a flow to a destination application with certain characteristics, such as bounds on loss and delay, minimum capacity or in-order delivery of data. Therefore layers can pass performance requirements to each other in a technology-agnostic way, without the intervention of an external entity such as the Management System. DIFs are designed to cover certain ranges of the performance space. A QoS cube is an abstraction of a set of policies that allow the DIF to deliver an IPC service within a certain range of the performance space (e.g. data loss, delay, jitter). Each DIF supports one or more QoS cubes, whose policies (data transfer, resource allocation, scheduling) are designed to ensure the promised performance in the operational environment of the DIF. When an application requests a flow from a DIF, the IPC Process that receives the request checks the performance requirements for that flow and tries to map them to one of the QoS cubes supported by the DIF. If there is a match, the IPCP creates a new EFCP instance for the flow, configuring it with the policies specified by the QoS cube. Each QoS cube has a unique id within the DIF. All EFCP packets of a flow belonging to a QoS cube are marked with the qos-id of that QoS cube, so that all intermediate IPCPs between the source and destination can identify the flows belonging to the different QoS classes and schedule them accordingly.
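
As a hedged illustration of the mapping step just described, the sketch below matches a requested flow specification against the QoS cubes a DIF supports; the cube attributes and the selection heuristic are assumptions for illustration, not part of any RINA specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QoSCube:
    qos_id: int
    max_loss_rate: float   # fraction of PDUs the cube may lose
    max_delay_ms: float
    in_order: bool

@dataclass
class FlowRequirements:
    max_loss_rate: float
    max_delay_ms: float
    in_order: bool

def match_qos_cube(req: FlowRequirements, cubes: list[QoSCube]) -> Optional[QoSCube]:
    """Return a cube that satisfies the request, or None if no cube fits."""
    candidates = [c for c in cubes
                  if c.max_loss_rate <= req.max_loss_rate
                  and c.max_delay_ms <= req.max_delay_ms
                  and (c.in_order or not req.in_order)]
    # Prefer the loosest cube that still meets the requirements (illustrative heuristic).
    return max(candidates, key=lambda c: (c.max_delay_ms, c.max_loss_rate), default=None)

dif_cubes = [QoSCube(1, 0.0, 50.0, True), QoSCube(2, 0.01, 200.0, False)]
req = FlowRequirements(max_loss_rate=0.005, max_delay_ms=100.0, in_order=False)
cube = match_qos_cube(req, dif_cubes)
print(cube.qos_id if cube else "no matching QoS cube")   # -> 1
```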

A.1.6 Consistent security model across layers

The distribution of security functions within the DIF and across DIFs is shown in Figure 3. In RINA the granularity of protection is a layer, not its individual protocols, which allows for a simpler and more comprehensive security model. Users of a DIF need to place only minimal trust in the DIF they are using: only that the DIF will attempt to deliver SDUs to some process. Applications using a DIF are ultimately responsible for ensuring the confidentiality and integrity of the SDUs they pass to the DIF; therefore, proper SDU protection mechanisms (such as encryption) have to be put in place. When a new IPCP wants to join a DIF, it first needs to allocate a flow to another IPCP that is already a DIF member via an N-1 DIF that both processes have in common. Here access control is used to determine whether the requesting application is allowed to talk to the requested application. If the flow to the existing member is accepted, the next step is to go through an authentication phase, the strength of which can range from no authentication to cryptographic schemes. In case of a successful authentication, the DIF member will decide whether the new IPCP is admitted to the DIF, executing a specific access control policy.

3. Distribution of security functions within a DIF and across DIFs

A.1.7 Network Management

The Network Management distributed application (Network Management DAF or Network Management – Distributed Management System (NM-DMS)) is in charge of managing a collection of systems and the IPC Processes belonging to a network. NM-DMSs are distributed applications, like DIFs: a collection of application processes co-operating to manage a network. NM-DMSs therefore leverage the common distributed application machinery (the RIB to model state, CDAP as a common application protocol) and the IPC services provided by DIFs to perform their task. The DAF model can be applied to network management to represent the whole range from distributed (autonomic) to centralised (traditional).

In the traditional centralised network management architecture, depicted in Figure 4, an NM-DMS would be a heterogeneous DAF consisting of one or more application processes providing management functions (fault management, configuration management, performance management, etc.), with other Distributed Application Processes (DAPs) providing telemetry and local management of systems (the Management Agents (MAs)). MAs have direct access to the IPCPs in the system they manage, via local procedures. It is possible for there to be multiple MAs responsible for different DIFs in the same processing system. For example, one might create DIFs as VPNs and allow them to be managed by their “owners”; or one could imagine different DIFs belonging to different providers at the border between two providers, etc.

4. NMS-DAF in a traditional centralised management configuration

A.2 Data Transfer: protocols, functions and procedures

EFCP, the Error and Flow Control Protocol, is the single data transfer protocol of a DIF. In order to allow for its adaptation to different operating environments, EFCP supports multiple policies and multiple concrete syntaxes. To do so, the EFCP specification defines the hooks where the different policies can be plugged in – also describing the behaviour of default policies – as well as the abstract EFCP syntax (the PDU types and their fields, without describing their encoding). EFCP leverages the results published by Richard Watson (and later implemented in the delta-t protocol). Watson proved that bounding three timers is a necessary and sufficient condition for reliable transport connection management; in other words, SYNs and FINs are unnecessary. This not only simplifies the protocol implementation, but also makes it more robust against harsh network environments or transport-level attacks.
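
As a hedged illustration of what Watson's bound buys in practice, the toy sketch below keeps per-connection EFCP state only for a bounded period after the last activity and then discards it, with no explicit setup or teardown exchange. The timer names follow the usual delta-t presentation (MPL, A, R), but the concrete values and the exact lifetime formula used here are assumptions made purely for illustration.

```python
import time

# Illustrative timer bounds (seconds); real values are deployment-specific.
MPL = 30.0   # Maximum Packet Lifetime in the network
A   = 5.0    # maximum time a receiver may wait before acknowledging
R   = 10.0   # maximum time a sender will keep retransmitting a PDU

# One reading of Watson's result: connection state only needs to be kept while PDUs
# (or their retransmissions/acks) can still legitimately arrive, i.e. for a period on
# the order of MPL + R + A after the last activity.  This sketch uses a conservative
# multiple of that bound to garbage-collect idle connection records.
STATE_LIFETIME = 2 * MPL + R + A   # assumed bound for this sketch

class ConnectionRecord:
    """Toy per-connection EFCP state, discarded purely by timer expiry."""
    def __init__(self, cep_id: int):
        self.cep_id = cep_id
        self.last_activity = time.monotonic()

    def touch(self) -> None:
        self.last_activity = time.monotonic()

    def expired(self, now: float) -> bool:
        return now - self.last_activity > STATE_LIFETIME

def purge_idle(connections: dict[int, ConnectionRecord]) -> None:
    """Discard state for connections with no activity within the timer bound."""
    now = time.monotonic()
    for cep_id in [c for c, rec in connections.items() if rec.expired(now)]:
        del connections[cep_id]
```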

EFCP has two parts: Data Transfer Protocol (DTP), which deals with the mechanisms tightly coupled to data transfer PDUs (such as addressing or sequencing) and Data Transfer Control Protocol (DTCP), which deals with the loosely bound mechanisms such as flow control or retransmission control. DTP and DTCP are fairly independent and operate with their own PDUs, being just loosely coupled via a state vector.

A.2.1 DTP PDU Format

Figure 5 illustrates the abstract syntax of EFCP DTP PDUs. Note that the lengths of the address, qos-id, cep-id, length and sequence-number fields depend on the DIF environment. For example, no source or destination address fields are required for DIFs operating over point-to-point links. A hedged encoding sketch follows the field list below.

5. Abstract syntax of DTP PDUs

  • Version: EFCP version.
  • Src/destination address: Addresses of the IPC Processes that host the endpoints of this EFCP connection.
  • QoS-id: Id of the QoS cube to which this EFCP connection belongs.
  • Src/destination cep-ids: The identifiers of the EFCP instances that are the endpoints of this EFCP connection.
  • PDU type: Code indicating the type of PDU (in this case a DTP PDU).
  • Flags: Indicate conditions that can affect the processing of the PDU and can change from one PDU to another.
  • Length: The total length of the PDU in bytes.
  • Sequence number: Sequence number of the PDU.
  • User data: Contains one or more SDU fragments and/or one or more complete SDUs.
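
To make the idea of a single abstract syntax with DIF-specific concrete syntaxes more concrete, the sketch below encodes a DTP PDU with configurable field widths. The field order follows the list above, but the byte layout, the `DtpSyntax` defaults and the big-endian encoding are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class DtpSyntax:
    """Concrete syntax for a particular DIF: field widths in bytes (illustrative)."""
    addr_len: int = 2
    qos_id_len: int = 1
    cep_id_len: int = 2
    seq_num_len: int = 4

def encode_dtp_pdu(syn: DtpSyntax, *, version: int, src_addr: int, dst_addr: int,
                   qos_id: int, src_cep: int, dst_cep: int, pdu_type: int,
                   flags: int, seq_num: int, user_data: bytes) -> bytes:
    """Encode a DTP PDU using the DIF's concrete syntax (hypothetical layout)."""
    header = (
        version.to_bytes(1, "big")
        + src_addr.to_bytes(syn.addr_len, "big")
        + dst_addr.to_bytes(syn.addr_len, "big")
        + qos_id.to_bytes(syn.qos_id_len, "big")
        + src_cep.to_bytes(syn.cep_id_len, "big")
        + dst_cep.to_bytes(syn.cep_id_len, "big")
        + pdu_type.to_bytes(1, "big")
        + flags.to_bytes(1, "big")
    )
    # Length field holds the total PDU length in bytes (2-byte field in this sketch).
    length = len(header) + 2 + syn.seq_num_len + len(user_data)
    return (header + length.to_bytes(2, "big")
            + seq_num.to_bytes(syn.seq_num_len, "big") + user_data)

pdu = encode_dtp_pdu(DtpSyntax(), version=1, src_addr=16, dst_addr=80, qos_id=1,
                     src_cep=23, dst_cep=87, pdu_type=0x80, flags=0,
                     seq_num=42, user_data=b"payload")
print(len(pdu))   # the header size depends entirely on the chosen concrete syntax
```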

A.2.2 DTCP PDU Formats

Depending on the policies associated with a particular EFCP connection, the DTCP instance may be configured to perform flow and/or retransmission control functions. While the EFCP specification defines 10 operation codes for DTCP, in reality there are only three PDU types: i) Ack/Nack/Flow, ii) Selective Ack/Nack/Flow, and iii) Control Ack. Each of these control PDUs carries addresses, a connection-id, a sequence number, and retransmission control and/or flow control information, etc. The opcodes indicate which fields in the PDU are valid. The fields required for these PDUs can be extended by defining policies.

6. Example of data transfer procedures

A.2.3 Overview of data transfer procedures

A high-level overview of the data-transfer procedures is provided by Figure 6. Note that this is an example scenario showing logical functions; in no case does it suggest a particular implementation strategy. In this example scenario, DIF N provides a flow identified by port-id 1 between applications A and B. When application A writes an SDU to the port (invoking the DIF API), the SDU is processed by the delimiting function of IPCP I1, which will create one or more EFCP user-data fields from the SDU, according to the delimiting policy. EFCP user-data fields are delivered to EFCP instance 23 – which is currently bound to port-id 1 – which creates one or more EFCP data transfer PDUs and hands them over to the RMT. The RMT checks the forwarding function (another policy), which returns the port-ids of one or more N-1 flows through which the PDU needs to be forwarded to reach the next hop (in this case the IPCP with address 80). In general there will be one or more queues in front of each N-1 port, and a scheduling policy will select outgoing PDUs for transmission according to different criteria. Once the N-1 port through which the PDU will be forwarded is known, the associated SDU protection policy can be applied to the PDU (or it can be applied when EFCP creates the PDU, if there is a common SDU protection policy for all N-1 ports).

Eventually IPCP I2 reads the PDU from N-1 port 4. It removes the SDU protection needed to process the PDU header, and the RMT decides whether it is the final destination of the PDU (how this is decided depends on the DIF environment; in this example, by checking the destination address field). In this case the IPCP is not the final destination, so the RMT checks the forwarding function, which returns one or more N-1 ports through which the PDU will be forwarded. The RMT reapplies protection if needed (the SDU protection policy may be different), and hands the PDU to the scheduling policy for transmission, which eventually writes the PDU to the N-1 port.

Finally the PDU reaches IPCP I3 through N-1 port 3. SDU protection is removed, the RMT checks whether it is the final destination of the PDU and, since in this case it is, it delivers the PDU to the destination EFCP instance (EFCP instance 87 in the example) for further processing. The EFCP instance updates its internal state and may generate zero or more control PDUs. EFCP recovers the PDU’s user-data field and works with the delimiting function, according to the configured policies, to recover full SDUs. Finally the SDUs are read from port 2 by application B.
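
The sketch below condenses the relaying decision performed by IPCP I2 above into toy Python: remove SDU protection, check whether this IPCP is the PDU's final destination, and either deliver it to EFCP or forward it through the N-1 ports returned by the forwarding function. The table contents, stub functions and callback interfaces are assumptions for illustration only.

```python
MY_ADDRESS = 80   # address of this IPCP within the DIF (example value from Figure 6)

def forwarding_function(dest_addr: int) -> list[int]:
    """Forwarding policy: map a destination address to outgoing N-1 port-ids (toy table)."""
    table = {90: [2], 100: [2, 5]}
    return table.get(dest_addr, [])

def remove_sdu_protection(raw: bytes) -> bytes:
    return raw    # a CRC check / decryption would happen here

def apply_sdu_protection(pdu: bytes, n1_port: int) -> bytes:
    return pdu    # potentially a different policy per N-1 flow

def handle_incoming(raw: bytes, parse_dest, deliver_to_efcp, write_to_n1_port) -> None:
    """Deliver or relay one PDU read from an N-1 port."""
    pdu = remove_sdu_protection(raw)
    dest = parse_dest(pdu)
    if dest == MY_ADDRESS:
        deliver_to_efcp(pdu)                       # final destination: hand over to EFCP
    else:
        for n1_port in forwarding_function(dest):  # relay towards the next hop(s)
            write_to_n1_port(n1_port, apply_sdu_protection(pdu, n1_port))

# Toy usage: a PDU addressed to 90 is relayed out through N-1 port 2.
handle_incoming(b"...pdu bytes...", parse_dest=lambda pdu: 90,
                deliver_to_efcp=lambda pdu: print("deliver", pdu),
                write_to_n1_port=lambda port, pdu: print("forward via N-1 port", port))
```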

A.3 Layer management: protocol, functions and procedures

A.3.1 Common layer management machinery

The different layer management functions of an IPC Process leverage a common machinery to exchange information with their peers. All of the IPC Process’s externally visible state is modelled as objects that follow a logical schema called the RIB, the Resource Information Base. The RIB specification defines the object naming, the relationships between objects (inheritance, containment, etc.), the object attributes and the CDAP operations that can be applied to them. Access to the RIB is mediated by the RIB Daemon. The RIB Daemon of an IPCP exchanges CDAP PDUs with the RIB Daemons of neighbor IPCPs; these PDUs communicate remote operations on objects. When a layer management task wants to communicate an action to a peer (e.g. a routing update), it requests the RIB Daemon to perform an action on one or more objects of one or more neighbor IPCPs. The RIB Daemon generates the required CDAP PDUs and sends them over the required N-1 flows to communicate the action to its neighbors. When the RIB Daemon receives a CDAP PDU, it decodes it, analyzes which objects are involved and notifies the relevant layer management functions (which have previously subscribed to the objects of their interest).
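
A minimal sketch, under assumed class and method names, of the subscribe/notify pattern just described: layer management tasks register interest in RIB object-name prefixes and the RIB Daemon dispatches decoded remote operations to them, so the tasks never handle CDAP themselves.

```python
from collections import defaultdict
from typing import Callable

class RIBDaemon:
    """Toy RIB Daemon: routes decoded CDAP operations to subscribed management tasks."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, object_prefix: str,
                  handler: Callable[[str, str, dict], None]) -> None:
        """A layer management task registers interest in a subtree of the RIB."""
        self._subscribers[object_prefix].append(handler)

    def on_cdap_message(self, operation: str, object_name: str, value: dict) -> None:
        """Called after decoding an incoming CDAP PDU from a peer's RIB Daemon."""
        for prefix, handlers in self._subscribers.items():
            if object_name.startswith(prefix):
                for handler in handlers:
                    handler(operation, object_name, value)

# Example: a routing task subscribes to route objects and receives a peer's update.
rib = RIBDaemon()
rib.subscribe("/routing/", lambda op, name, val: print(op, name, val))
rib.on_cdap_message("WRITE", "/routing/fsodb/entry-42", {"cost": 3})
```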

7. Common layer management machinery: RIB, RIB Daemon and CDAP

The whole process is illustrated in Figure 7. This design allows the layer management tasks to focus on the functions they provide and delegate the routine tasks of generating and parsing protocol PDUs to the RIB Daemon (in fact, layer management tasks are not even aware of CDAP). If required, new layer management functions can be added without the need to define new protocols. Moreover, the RIB Daemon can coordinate and optimise the generation of protocol PDUs from different layer management tasks, thus minimising the layer management traffic between peer IPCPs. The CDAP specification defines an abstract syntax that describes the different types of CDAP PDUs and their fields. Multiple concrete encodings can be supported (this is just a DIF policy), such as the various Abstract Syntax Notation One (ASN.1) encodings, Google Protocol Buffers, etc.

Before being able to exchange any information, two peer IPCPs must establish an association between them, called an application connection in RINA terms. During the application connection establishment phase, the IPCPs exchange naming information, optionally authenticate each other, and agree on the abstract and concrete syntaxes of CDAP/RIB to be used in the connection, as well as on the version of the RIB. This version information is important, as RIB model upgrades may not be uniformly applied to the entire network at once; it must therefore be possible to allow multiple versions of the RIB to coexist, to allow for incremental upgrades.

A.3.2 Layer Management functions: enrollment

Enrollment is the procedure by which an IPCP joins an existing DIF and is initialised with enough information to become a fully operational DIF member. Enrollment occurs after an IPC Process establishes an application connection with another IPCP that is a member of a DIF. Once the application connection is established, the enrollment procedure may proceed. The specific enrollment procedure is a policy of each DIF, but in general it involves operations similar to the ones described in the next paragraph.

The Member IPCP reads the New Member IPCP’s address. If it is null or has expired, it assigns a new address; otherwise, it assumes the New Member was very recently a member. The New Member then reads the information it does not have, taking into account how “new” it is. These parameters characterise the operation of this DIF and might include parameters such as the maximum PDU size, various time-out ranges, ranges of policies, etc. Once this is complete, the New Member is a member, and this triggers a normal RIB update (to get the latest up-to-date information on routing, directory, resource allocation, etc.).

A.3.3 Layer management functions: namespace management

Managing a name space in a distributed environment requires coordination to ensure that the names remain unambiguous and can be resolved efficiently. The Name Space Manager (NSM) embedded in the DIF is responsible for mapping application names to IPC Process addresses – the latter being the name space managed by the DIF NSM. Specific ways of achieving this mapping are policy and will vary from DIF to DIF. For small, distributed environments, this management may be fairly decentralised and name resolution may be achieved by exhaustive search. Once found, the location of the information that resolved the name may be cached locally in order to shorten future searches. It is easy to see how, as the distributed environment grows, these caches would be further organised, often using hints in the name itself (such as hierarchical assignment) to shorten search times. For larger environments, distributed databases may be organised with full or partial replication and naming conventions, i.e. topological structure, and search rules to shorten the search, requiring more management of the name space.

The two main functions of the DIF NSM are to assign valid addresses to IPC Processes for their operation within the DIF, and to resolve at which IPC Process a specific application is registered. In other words, the NSM maintains a mapping between external application names and the IPC Process addresses where there is the potential for a binding within the same processing system. Therefore enrollment, application registration and flow allocation all require the services of the NSM.
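
A toy sketch of the resolution behaviour described in this subsection: the NSM answers from local registrations or its cache when it can, and otherwise falls back to an exhaustive query of peer IPCPs, caching whatever answer it finds. The peer-query interface and class names are assumptions made for illustration.

```python
from typing import Callable, Optional

class NameSpaceManager:
    """Toy DIF NSM: maps registered application names to IPCP addresses."""
    def __init__(self, query_peer: Callable[[int, str], Optional[int]], peers: list[int]):
        self._cache: dict[str, int] = {}   # application name -> IPCP address
        self._local: dict[str, int] = {}   # applications registered locally
        self._query_peer = query_peer      # asks a peer IPCP (by address) about a name
        self._peers = peers

    def register(self, app_name: str, my_address: int) -> None:
        self._local[app_name] = my_address

    def resolve(self, app_name: str) -> Optional[int]:
        if app_name in self._local:
            return self._local[app_name]
        if app_name in self._cache:
            return self._cache[app_name]
        for peer in self._peers:               # exhaustive search (small-DIF policy)
            addr = self._query_peer(peer, app_name)
            if addr is not None:
                self._cache[app_name] = addr   # cache to shorten future searches
                return addr
        return None

# Toy usage: one peer (address 80) knows where "printer-service" is registered.
nsm = NameSpaceManager(query_peer=lambda peer, name: 80 if name == "printer-service" else None,
                       peers=[80, 81])
print(nsm.resolve("printer-service"))   # -> 80 (and the answer is now cached)
```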

A.3.4 Layer management functions: flow allocation

The Flow Allocator is responsible for creating and managing an instance of IPC, i.e. a flow. The IPC API communicates requests from the application to the DIF. An Allocate-Request causes an instance of the Flow Allocator to be created. The Flow Allocator Instance (FAI) determines which policies will be utilised to provide the characteristics requested in the Allocate. It is important that how these characteristics are communicated by the application is decoupled from the selection of policies: this gives the DIF important flexibility in using different policies, and also allows new policies to be incorporated. The FAI creates the EFCP instance for the requested flow before sending the CDAP Create Flow Request to find the destination application and determine whether the requestor has access to it.

A create request is sent with the source and destination application names, quality of service information, and policy choices, as well as the necessary access control information. Using the NSM component, the FAI must find the IPCP in the DIF that resides on the processing system that has access to the requested application. This exchange accomplishes three functions:

  • Following the search rules using the Name Space Management function to find the address of an IPC-Process with access to the destination application
  • Determining whether the requesting application process has access to the requested application process and whether or not the destination IPC-Process can support the requested communication;
  • Instantiating the requested application process, if necessary, and allocating an FAI and port-id in the destination IPCP.

The create response will return an indication of success or failure. If successful, destination address and connection-id information will also be returned along with suggested policy choices. This gives the IPC-Processes sufficient information to then bind the port-ids to an EFCP-instance, i.e. a connection, so that data transfer may proceed.
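
The sketch below gives one possible shape of the destination-side handling of a create request, covering the three functions listed above (locating the application, access control, allocating an FAI/cep-id). The message fields, the fixed cep-id and the access-control rule are illustrative assumptions, not the actual CDAP object definitions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreateFlowRequest:
    src_app: str
    dst_app: str
    qos_id: int
    src_address: int
    src_cep_id: int
    access_control_info: bytes = b""

@dataclass
class CreateFlowResponse:
    success: bool
    dst_address: int = 0
    dst_cep_id: int = 0
    suggested_policies: Optional[dict] = None

def handle_create_flow(req: CreateFlowRequest, local_registrations: dict[str, object],
                       allowed: set[tuple[str, str]], my_address: int) -> CreateFlowResponse:
    """Destination-side checks: is the app registered here, is access allowed, allocate a cep-id."""
    if req.dst_app not in local_registrations:
        return CreateFlowResponse(success=False)
    if (req.src_app, req.dst_app) not in allowed:    # access-control policy (toy rule)
        return CreateFlowResponse(success=False)
    dst_cep_id = 87                                  # would be allocated dynamically
    return CreateFlowResponse(True, my_address, dst_cep_id, {"rtx_control": "default"})

# Toy usage: application B is registered locally and A is allowed to talk to it.
resp = handle_create_flow(
    CreateFlowRequest(src_app="A", dst_app="B", qos_id=1, src_address=16, src_cep_id=23),
    local_registrations={"B": object()}, allowed={("A", "B")}, my_address=90)
print(resp.success, resp.dst_address, resp.dst_cep_id)   # True 90 87
```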

A.3.5 Layer management functions: resource allocation

The Resource Allocator (RA) gathers the core intelligence of the IPC Process. It monitors the operation of the IPC Process and makes adjustments to its operation to keep it within the specified operational range. The degree to which the operation of the RA is distributed and performed in collaboration with the other RAs in the members of the DIF, versus the degree to which the RA merely collects and communicates information to a network management system (NM-DMS) that determines the response, is a matter of DIF design and research. The former case can be termed autonomic, while the latter is closer to the traditional network management approach. Both approaches have their use cases and application areas. The RA has a series of meters and dials that it can use to perform its job. There are basically three sets of information available to the IPC Process to make its decisions:

  • The traffic characteristics of traffic arriving from the user of the DIF, i.e. the application or (N+1)-DIF.
  • The traffic characteristics of the traffic arriving and being sent on the (N-1)-flows.
  • Information from other members of the DIF on what they are observing (this latter category could be restricted to just nearest neighbors or some other subset – e.g. all two- or three-hop neighbors – or it could span all members of the DIF).

The first two categories would generally be measures that are easily derived from observing traffic: bandwidth, delay, jitter, damaged PDUs, etc. The shared data might include the internal status of other IPC Processes, such as queue lengths, buffer utilisation, and others. The Resource Allocator has several “levers” and “dials” that it can change to affect how traffic is handled:

  • Creation/Deletion of QoS Classes. Requests for flow allocations specify the QoS-cube the traffic requires, which is mapped to a QoS-class. The RA may create or delete QoS-classes in response to changing conditions.
  • Data Transfer QoS Sets. When an Allocate requests certain QoS parameters, these are translated into a QoS-class that in turn is translated into a set of data transfer policies. The RA may modify the set of data transfer policies for particular QoS classes. For example, one could imagine a different set of policies for the same QoS-class under different load conditions.
  • Modifying Data Transfer Policy Parameters. It is assumed that some data transfer policies may allow certain parameters to be modified without actually changing the policy in force. A trivial example might be changing the retransmission control policy from acking every second PDU to acking every third PDU.
  • Creation/Deletion of RMT Queues. Data Transfer flows are mapped to Relaying and Multiplexing queues for sending to the (N-1)-DIF. The RA can control these queues as well as which QoS classes are mapped to which queues. (The decision does not have to be based exclusively on QoS-class, but may also depend on the addresses, the current load, etc.)
  • Modify RMT Queue Servicing. The RA can change the discipline used for servicing the RMT queues.
  • Creation/Deletion of (N-1)-flows. The RA is responsible for managing distinct flows of different QoS-classes with the (N-1)-DIF. Since multiplexing occurs within a DIF one would not expect the (N)-QoS classes to be precisely the same as the (N-1)-QoS classes. The RA can request the creation and deletion of N-1 flows with nearest neighbors, depending on the traffic load offered to the IPC Process and other conditions in the DIF.
  • Forwarding Table Generator Output. The RA takes input from other aspects of layer management to generate the forwarding table. This is commonly thought of as the output of “routing.” It may well be here, but we want to leave open approaches to generating the forwarding table not based on graph theory.

A.3.6 Layer management functions: routing

A major input to the Resource Allocator is Routing. Routing performs the analysis of the information maintained by the RIB to provide connectivity input to the creation of a forwarding table. Supporting flows with different QoS will, in current terminology, require using different metrics to optimise the routing. However, this must be done while balancing the conflicting requirements for resources. Current approaches can be used, but new approaches to routing will be required to take full advantage of this environment. The choice of routing algorithms in a particular DIF is a matter of policy.

A.3.7 Layer management functions: security coordination

Security coordination is the IPC Process component responsible for implementing a consistent security profile for the IPC Process, coordinating all the security-related functions (authentication, access control, confidentiality, integrity) and also executing some of them (auditing, credential management). The sophistication of this layer management function is a matter of policy.

A.4 Application Discovery: the DIF allocator

Applications can register with multiple DIFs in RINA networks; there is no need to designate a single “top DIF” through which all applications in a specific context should be available. Hence, when a source application issues a flow allocation request to a destination application, which DIF should the RINA machinery choose if several are available? Moreover, it may be the case that the destination application is available through a DIF that does not yet have enough scope to reach the source application, but that could be enlarged (or a DIF with a bigger scope created) to solve the reachability problem.

Hence, it is convenient to add a distributed application (DAF) called the DIF Allocator, which manages the mappings of application names to DIFs in a certain distributed context (which may be a single provider network, a group of DIFs from different providers, etc.). The chain of databases maintained by the DIF Allocator is what defines the scope of the application namespace: if two applications are handled by different DIF Allocators, they cannot discover each other (and hence cannot allocate a flow to each other and communicate). The DIF Allocator locates the destination application and decides which is the best DIF to reach it. It may also collaborate with Network Management Systems in different domains to create a new DIF or enlarge an existing one if this is required to establish communication between the distributed applications requesting the flow. The DIF Allocator can use multiple policies internally to replicate the information it manages and to locate the destination application: from hierarchical partially replicated databases, to distributed hash tables, to exhaustive search, etc.