Posts by chrissmith

    Ok, I now understand the scenario.


    I don't think that would be possible with the native bridging support (due to the single parent (or hub) restriction). However, it would be possible if you set up a process with normal client connections to each fabric, acting as a bridge between them. Let me know if you would like me to put something together to demonstrate this.
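    To make the idea concrete, here is a minimal sketch in plain Python (no DXL libraries involved; the `Fabric` class, topic name, and payload are all made up for illustration). The bridge is just an ordinary subscriber on one fabric that republishes what it receives on the other:

```python
class Fabric:
    """Minimal stand-in for a broker fabric's publish/subscribe behavior."""
    def __init__(self, name):
        self.name = name
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers.get(topic, []):
            callback(payload)

fabric1 = Fabric("fabric1")
fabric2 = Fabric("fabric2")
received = []

# The "bridge": a normal subscriber on fabric1 that republishes on fabric2.
# (A second subscription in the other direction would make it bidirectional.)
fabric1.subscribe("/my/topic", lambda payload: fabric2.publish("/my/topic", payload))

# A client connected only to fabric2 now sees events published on fabric1.
fabric2.subscribe("/my/topic", received.append)
fabric1.publish("/my/topic", "hello from fabric1")
print(received)  # ['hello from fabric1']
```

    In a real deployment, each `Fabric` would be a separate DXL client connection, but the forwarding pattern is the same.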


    Thanks,

    Chris

    Great to hear you got it working!


    As far as documentation, I agree it could definitely be clearer.


    Here is the page that covers provisioning the samples for the file transfer service:


    https://opendxl-community.gith…n/pydoc/sampleconfig.html


    Each of the samples also mentions provisioning as a prerequisite:


    https://opendxl-community.gith…oc/basicstoreexample.html

    https://opendxl-community.gith…/basicserviceexample.html


    How do you think this could be improved to be clearer? Any input would be greatly appreciated!


    Thanks,

    Chris

    Yes, this can be done. I will put together a similar scenario in the morning and post the configuration.


    Also, you don't necessarily have to use hubs.


    Based on your description you want something like this, correct?


    Code: Broker Fabric
        C ----> B ----> A
       / \     / \     / \
      C2  C3  B2  B3  A2  A3

    With hubs, it would look like the following:


    Code: Broker Fabric w/ Hubs
            [C hub] ----------> [B hub] ----------> [A hub]
             /   \               /   \               /   \
      Other C Brokers     Other B Brokers     Other A Brokers


    Each separate fabric has a hub. This allows the spokes (other brokers) to stay connected if one of the hub brokers fails. The "C hub" connects to the "B hub", which connects to the "A hub".


    Please let me know if this is an equivalent scenario, and I will put together the configuration and post it.


    Thanks a lot,

    Chris

    Hi Andrew-


    So, I would suggest following my steps exactly. I am wondering if the files are being stored inside the Docker container itself. My steps do not use Docker at all, which eliminates that possibility.


    Thanks,

    Chris

    Hi Andrew-


    The reason it is appearing multiple times in the services list is that the service registers each time it is started (with a unique service identifier). That occurs if the previous instance was not shut down correctly. The previous instances remain until they are either invoked (by sending a request) or their TTL expires.


    If you happen to send a request to a service that no longer exists, you will receive a "ServiceNotFound" exception. The broker will round-robin service invocations. So, if you just send requests, you should see all of the stale instances disappear.
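    A tiny sketch of the behavior described above (plain Python; the registry class, service IDs, and return values are all illustrative, not part of any DXL API). The broker rotates through registered instances, and a stale entry is dropped as soon as a request reaches it:

```python
from collections import deque

class ServiceRegistry:
    """Toy model of round-robin service invocation with stale-entry cleanup."""
    def __init__(self):
        self.instances = deque()  # (service_id, alive) pairs, in rotation order

    def register(self, service_id, alive):
        self.instances.append((service_id, alive))

    def invoke(self):
        """Round-robin: take the next instance; drop it if it no longer exists."""
        service_id, alive = self.instances.popleft()
        if not alive:
            return (service_id, "ServiceNotFound")  # stale entry is now gone
        self.instances.append((service_id, alive))  # live instance stays in rotation
        return (service_id, "ok")

registry = ServiceRegistry()
registry.register("stale-1", alive=False)  # left over from an unclean shutdown
registry.register("stale-2", alive=False)
registry.register("live-3", alive=True)    # the currently running instance

# Sending a few requests flushes the stale instances out of the rotation.
results = [registry.invoke() for _ in range(4)]
print(results)
```

    After the first two requests hit the stale entries (and return "ServiceNotFound"), only the live instance remains in the rotation.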


    So, what is your current status? Are you still experiencing issues?


    Thanks,

    Chris

    It is really based on your preference; the brokers will work with either distribution model. I think it would make a lot of sense for us to put together a basic guide that walks through both scenarios. Would that be useful?


    Thanks,

    Chris

    Hi-


    Unfortunately, it is a bit difficult to debug what is occurring. However, I went through the process to install and test the service and it seems to be working as expected. The steps are listed below. Please let me know if you have any issues when performing these steps.


    1.) Download the latest service release


    wget https://github.com/opendxl-community/opendxl-file-transfer-service-python/releases/download/0.1.1/dxlfiletransferservice-python-dist-0.1.1.zip


    2.) Unzip the service release


    unzip dxlfiletransferservice-python-dist-0.1.1.zip


    3.) Change into service directory


    cd dxlfiletransferservice-python-dist-0.1.1


    4.) Provision the service


    dxlclient provisionconfig config <provision-service-host> service-cn


    5.) Provision the samples


    dxlclient provisionconfig sample <provision-service-host> sample-cn


    6.) Edit the service config and set a destination directory for transferred files


    vi config/dxlfiletransferservice.config


    Code: File Transfer Service Configuration
    ###############################################################################
    ## File Transfer DXL Python service settings
    ###############################################################################
    [General]
    # Directory under which to store files (required, no default)
    storageDir=/tmp
    ...


    7.) Start the service (force into background)


    python -m dxlfiletransferservice config &


    8.) Run the sample (send a file, "README.html")


    python sample/basic/basic_store_example.py README.html


    Code: Output
    2019-06-03 16:20:42,820 dxlfiletransferclient.store INFO Assigning file id '70b7b036-79ec-4fff-98a7-17b4f8c135f6' for '/tmp/.workdir/70b7b036-79ec-4fff-98a7-17b4f8c135f6'
    2019-06-03 16:20:42,821 dxlfiletransferclient.store INFO Stored file '/tmp/README.html' for id '70b7b036-79ec-4fff-98a7-17b4f8c135f6'
    Percent complete: 100%
    Response to the request for the last segment:
    {
        "file_id": "70b7b036-79ec-4fff-98a7-17b4f8c135f6",
        "result": "store",
        "segments_received": 1
    }
    Elapsed time (ms): 5.20992279053


    9.) Confirm the file was transferred


    ls -l /tmp


    Code: Output
    total 304
    -rw-r--r-- 1 root root 1209 Jun  3 16:20 README.html
    ...


    I transferred multiple files without experiencing any issues. Please let me know if you experience any problems when attempting to perform the above steps.


    Thanks a lot,

    Chris

    Great question. We don't have any specific guidelines posted currently.


    However, a fairly basic approach would be a hub-and-spoke topology. The central hub would consist of two brokers, and spoke brokers could be added as necessary. Clients would connect to the brokers through a load balancer (perhaps restricting client connections to the spoke brokers only). Also, with the latest version of the Open Broker and related clients, you could limit connections to WebSockets.
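    As a rough sketch of the client side of that topology, the [Brokers] section of each client's dxlclient.config could list only the load balancer's address rather than individual brokers. The GUID, host name, IP, and port below are placeholders, not values from a real deployment:

    Code: Example Client Configuration (Load Balancer)
    [Brokers]
    # Clients connect to a single address; the load balancer then
    # distributes the connections across the spoke brokers behind it.
    {broker-guid}={broker-guid};8883;dxl-lb.example.com;10.0.0.10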


    Thanks,

    Chris

    Hi-


    Yes, there have been conversations, and it is currently in our backlog of items to address in a future release. Our goal is to provide higher QoS levels without introducing a significant drop in performance. With that said, we may consider an intermediate branch release (for early testing) that provides the capability, with the caveat of a documented performance drop on non-QoS-0 messages.


    Thanks,

    Chris

    Here is a quick example using Node-RED.


    The example shows how the different nodes are configured to connect to two separate fabrics (fabric1 and fabric2). Each block in the flow shows how to forward events and services between the two fabrics.



    The code for the Node-RED flow is as follows:



    If you would like, I could put together a Python example as well.


    Thanks,

    Chris