Getting Started

The goal of this section is to get Maestro Core running on your system.

Install

Prerequisites

A C compiler supporting C11

POSIX threads

autoreconf

libtool

Get the repository

git clone <repo>
cd maestro-core

The build and installation commands are as follows (steps in brackets are optional)

autoreconf -ifv
./configure [--prefix=$(MAESTRO_PATH)]
make
[make check]
[make install]

First Program

Include the maestro header in your code

#include "maestro.h"

A minimal first program can look like this

int main(void) {
  /* join workflow "my_workflow" as component "my_component_a" */
  mstro_init("my_workflow","my_component_a",0);

  /* declare a CDO handle named "my_first_cdo" with default attributes */
  mstro_cdo cdo = NULL;
  mstro_cdo_declare("my_first_cdo", MSTRO_ATTR_DEFAULT, &cdo);

  /* offer the CDO to the pool, then take the offer back */
  mstro_cdo_offer(cdo);
  mstro_cdo_withdraw(cdo);

  /* free the handle and shut down Maestro */
  mstro_cdo_dispose(cdo);
  mstro_finalize();
  return 0;
}

which performs a single-application, local Maestro Core execution. This code simply offers a CDO to the Maestro pool and then withdraws it; no data or metadata transfer takes place yet. For that we would need either another thread (in the local execution case) or another application demanding the offered CDO, plus a running Pool Manager (in the multi-application case).

Add Maestro's include path and library path to the compilation/linking command

-I$(MAESTRO_PATH)/include/maestro -L$(MAESTRO_PATH)/lib -lmaestro

where $(MAESTRO_PATH) is the Maestro installation path specified at configuration time with ./configure --prefix=$(MAESTRO_PATH). Then export the path to the Maestro library

export LD_LIBRARY_PATH=$(MAESTRO_PATH)/lib:$LD_LIBRARY_PATH

and the first program is ready to run.
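For instance, assuming the program above is saved as my_first_program.c (an illustrative file name), compiling and running it could look like

cc -std=c11 -I$(MAESTRO_PATH)/include/maestro my_first_program.c -L$(MAESTRO_PATH)/lib -lmaestro -o my_first_program
./my_first_program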

Tip

Which operations Maestro actually runs can be inspected by setting the environment variable MSTRO_LOG_LEVEL=info.
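For example, using the illustrative binary name from the build sketch above

MSTRO_LOG_LEVEL=info ./my_first_program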

As additional examples, similar programs such as $(MAESTRO_PATH)/tests/check_pool_local_stress are part of the Maestro test suite. They are run with make check, and their logs can be inspected in, e.g., $(MAESTRO_PATH)/tests/check_pool_local_stress.log

First Workflow

Building on the first program example, which is a producer application (i.e. the CDO offer side), let us now write a consumer application (i.e. the CDO demand side) so that we can see a transfer happen

int main(void) {
  /* join workflow "my_workflow" as component "my_component_b" */
  mstro_init("my_workflow","my_component_b",0);

  /* declare a handle for the same CDO name as the producer */
  mstro_cdo cdo = NULL;
  mstro_cdo_declare("my_first_cdo", MSTRO_ATTR_DEFAULT, &cdo);

  /* register interest in the CDO, then wait until it is available */
  mstro_cdo_require(cdo);
  mstro_cdo_demand(cdo);

  /* free the handle and shut down Maestro */
  mstro_cdo_dispose(cdo);
  mstro_finalize();
  return 0;
}

Tip

For convenience, users may provide the workflow name (MSTRO_WORKFLOW_NAME) and the component name (MSTRO_COMPONENT_NAME), both mandatory string values, via environment variables and leave the corresponding parameters NULL at init time, e.g. mstro_init(NULL, NULL, 0);
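For example, using the names from the programs above

export MSTRO_WORKFLOW_NAME=my_workflow
export MSTRO_COMPONENT_NAME=my_component_b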

To function properly in a multi-application setup, Maestro requires the provided Pool Manager application to be running beforehand. It is found in $(MAESTRO_PATH)/tests/simple_pool_manager after building it, which can be done via

make check TESTS=

This application prints the same string to stdout at regular intervals. The string contains serialised connection information and needs to be passed, via the MSTRO_POOL_MANAGER_INFO environment variable, to each program wanting to join the workflow. It is parsed at mstro_init() time and allows transparent connection to the Maestro workflow operated by the Pool Manager.

Note

mstro_init() should be called only once at a time per process. Typically, in a multi-threaded setup, only one thread is responsible for calling mstro_init() and mstro_finalize().
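As an illustration of this note and of the single-application, two-thread variant mentioned in the first program example, the following sketch has the main thread perform mstro_init()/mstro_finalize() and act as the producer, while a second thread demands the CDO. This is a hypothetical sketch: it assumes that the offer and demand sides may live in two threads of the same process, each with its own handle to the same CDO name, as described above.

#include <pthread.h>
#include "maestro.h"

/* Consumer thread: demands the CDO offered by the main thread. */
static void* consumer_thread(void* arg) {
  (void)arg;
  mstro_cdo cdo = NULL;
  mstro_cdo_declare("my_first_cdo", MSTRO_ATTR_DEFAULT, &cdo);
  mstro_cdo_require(cdo);
  mstro_cdo_demand(cdo);        /* completes once the offered CDO is available */
  mstro_cdo_dispose(cdo);
  return NULL;
}

int main(void) {
  /* only the main thread calls mstro_init()/mstro_finalize() */
  mstro_init("my_workflow", "my_component_a", 0);

  mstro_cdo cdo = NULL;
  mstro_cdo_declare("my_first_cdo", MSTRO_ATTR_DEFAULT, &cdo);
  mstro_cdo_offer(cdo);                         /* make the CDO visible in the pool */

  pthread_t consumer;
  pthread_create(&consumer, NULL, consumer_thread, NULL);
  pthread_join(consumer, NULL);                 /* wait until the consumer has demanded it */

  mstro_cdo_withdraw(cdo);                      /* safe: the demand has already completed */
  mstro_cdo_dispose(cdo);
  mstro_finalize();
  return 0;
}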

After starting the Pool Manager, the consumer, and the producer as defined above, Maestro transfers only a size-0 CDO with some basic metadata.

Note

For this example, we need to start the consumer ahead of the producer, to make sure Maestro knows there is a client interested in my_first_cdo; this prevents the producer from withdrawing its offer too quickly for the consumer to effectively place an option on it. More elaborate synchronisation mechanisms exist in Maestro, as we will see later.
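Putting this together, a hypothetical launch sequence could look like the following (the producer and consumer binary names are illustrative, and the connection-information string has to be copied from the Pool Manager's output)

# start the provided Pool Manager; it keeps printing its connection information
$(MAESTRO_PATH)/tests/simple_pool_manager &

# copy the string it prints and make it available to every program joining the workflow
export MSTRO_POOL_MANAGER_INFO="<string printed by the Pool Manager>"

# start the consumer first (see the note above), then the producer
./my_consumer &
./my_producer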

To transfer actual data, the producer needs to specify, at a minimum, a pointer and a size before offering the CDO, assuming a byte-addressable, DRAM-like location.

void* src_data;   /* pointer to the data to be offered */
int64_t size;     /* its size */
...
/* attach the data pointer and size to the CDO before offering it */
mstro_cdo_attribute_set(src_handle,
                        MSTRO_ATTR_CORE_CDO_RAW_PTR,
                        src_data, ...);
mstro_cdo_attribute_set(src_handle,
                        MSTRO_ATTR_CORE_CDO_SCOPE_LOCAL_SIZE,
                        &size, ...);

With this addition, the consumer will now receive an actual buffer, which is available when mstro_cdo_demand() returns. Transport is RDMA by default.

Note

If, on the other hand, the consumer does not specify a size and a buffer, as is the case here, then maestro-core itself will allocate the necessary memory, fill in the size attribute, and make both available.

To also transfer additional metadata, see general metadata and user-defined metadata.

Finally, to access the CDO data and length on the consumer side

char* data; size_t len;
mstro_cdo_access_ptr(dst_handle, (void**)&data, &len);
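In context, the consumer program from above, extended to read the received data, could look like the following (a sketch; it assumes mstro_cdo_access_ptr() may be called on the demanded handle once mstro_cdo_demand() has returned, as described above)

#include <stdio.h>
#include "maestro.h"

int main(void) {
  mstro_init("my_workflow", "my_component_b", 0);

  mstro_cdo cdo = NULL;
  mstro_cdo_declare("my_first_cdo", MSTRO_ATTR_DEFAULT, &cdo);
  mstro_cdo_require(cdo);
  mstro_cdo_demand(cdo);                            /* returns once the data has been received */

  char* data = NULL;
  size_t len = 0;
  mstro_cdo_access_ptr(cdo, (void**)&data, &len);   /* buffer allocated by maestro-core */
  printf("received %zu bytes\n", len);

  mstro_cdo_dispose(cdo);
  mstro_finalize();
  return 0;
}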

As additional examples, basic workflows are part of the Maestro Core test suite; in particular, $(MAESTRO_PATH)/tests/check_pm_interlock.sh starts a workflow comprising a Pool Manager and two clients exchanging a CDO.

Troubleshooting

TODO (please refer to the relevant section in README.md for the time being)