Posts by chrissmith


    Unfortunately, that particular use case is not currently supported via the DXL interface that is exposed by TIE.

    However, you might (I am not a TIE expert) be able to take advantage of TIE’s ability to have a “reporting” DB node, which is a read-only replica of the primary DB. At that point, you could potentially access the DB directly and extract the information you are looking for.

    Hope this helps,


    chrissmith added a new solution:


    McAfee Active Response delivers continuous detection of and response to advanced security threats to help security practitioners monitor security posture, improve threat detection, and expand incident response capabilities through forward-looking discovery, detailed analysis, forensic investigation, comprehensive reporting, and prioritized alerts and actions.

    McAfee Active Response is proof of the effectiveness of the integrated McAfee security architecture, which is designed to resolve more threats faster and with fewer resources in a more complex world. McAfee Active Response gives you continuous visibility and powerful insights into your endpoints so you can identify breaches faster. And it provides you with the tools you need to correct issues faster and in the way that makes the most sense for your business. All of this power is managed via McAfee® ePolicy Orchestrator® (McAfee ePO™) software leveraging McAfee Data Exchange Layer—this provides unified scalability and extensibility without the need for incremental staff to administer the product.

    Thanks, that's very helpful. If I understood correctly, it sounds like a hub consisting of 2 brokers would act very similarly to just bridging 2 brokers with a parentId.

    Unfortunately, you can't bridge two brokers and have them both bridge to the same parent identifier. A broker can only have one outgoing bridge (it can have an unlimited number of incoming bridges, i.e., brokers bridging to it). That is the reason the hub exists: the brokers in a hub are able to dynamically change who they bridge to based on the presence (or absence) of the other broker in the hub.

    For a simple deployment of just 2 or 3 brokers, would there be any reason to introduce a hub? Is that more of an advanced configuration, once you start to scale out to many brokers across different sites and you want the resilience between hubs?

    For two brokers, no.

    For three brokers, yes.

    With a hub composed of two brokers and a single spoke, if any single broker goes down, you are still guaranteed to have the remaining two brokers connected. If you had three brokers connected to each other without a hub and the middle broker went down, the remaining two would be unable to communicate with each other.
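    The reasoning above can be sketched as a quick connectivity check, treating the fabric as an undirected graph of bridge links. This is a toy model only, not actual broker logic, and the `connected_after_failure` helper is hypothetical:

```python
from collections import deque

def connected_after_failure(links, failed):
    """Return True if the brokers surviving 'failed' still form one connected fabric."""
    nodes = {n for link in links for n in link} - {failed}
    live = [link for link in links if failed not in link]
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        current = queue.popleft()
        for a, b in live:
            if current in (a, b):
                neighbor = b if current == a else a
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
    return seen == nodes

# Three brokers chained without a hub: A - B - C
chain = [("A", "B"), ("B", "C")]

# A hub of two brokers (H1, H2) plus one spoke S; the S-H2 link is the
# standby bridge the spoke fails over to if H1 disappears
hub = [("H1", "H2"), ("S", "H1"), ("S", "H2")]

print(connected_after_failure(chain, "B"))  # the middle broker splits the chain
print(connected_after_failure(hub, "H1"))   # the hub survives a single failure
```

    Losing the middle broker of the chain returns False (a split fabric), while losing one hub broker still returns True.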

    Hope this helps,


    Hi Brian-

    That is a great question.

    Typically, fabrics are scaled out in a hub-and-spoke pattern. To add further scalability (support for additional client connections, etc.), you can add more spokes (brokers) to a hub.

    The image above is an example of a hub-and-spoke deployment of a DXL fabric. Each of the hubs might reside in a different geographic location. The top hub might be the "global" hub that connects the various regions.

    Hubs are critical pieces of the fabric because if a hub fails, the fabric will split. This isn't the case with the lower-level brokers, which exist to provide scalability: if any particular broker fails, the fabric as a whole remains connected.

    So, to reduce the possibility of a fabric split, DXL introduced the concept of a hub that contains two brokers. A hub is not something that is physically deployed; it is simply a set of rules that the brokers comprising the hub follow. If one of the brokers goes down, the other will bridge to the parent hub or broker, keeping the overall fabric connected. It is also important to note that both brokers are active: they bridge to each other, and only one of them is connected to their parent at any given time. The children of the hub will select one of the two brokers to connect to.
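    One way to picture that rule set is a tiny state model. The `Hub` class below is purely illustrative (not broker code): both hub brokers are active and bridged to each other, but only one holds the outgoing bridge to the parent, and the survivor takes over if its peer fails.

```python
class Hub:
    """Toy model of a two-broker hub and its single outgoing parent bridge."""

    def __init__(self, broker_a, broker_b, parent):
        # Both brokers are active and bridged to each other at all times
        self.brokers = [broker_a, broker_b]
        self.parent = parent

    def fail(self, broker):
        """Simulate one of the hub brokers going down."""
        self.brokers = [b for b in self.brokers if b != broker]

    def parent_bridge(self):
        """Only one surviving broker bridges to the parent at any given time."""
        return (self.brokers[0], self.parent) if self.brokers else None

hub = Hub("H1", "H2", "global-hub")
print(hub.parent_bridge())  # ('H1', 'global-hub')
hub.fail("H1")              # H1 goes down...
print(hub.parent_bridge())  # ('H2', 'global-hub') - H2 takes over the bridge
```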

    Hopefully that helps. Please let me know if you have any additional questions.

    Thanks a lot,



    The integration tests are part of the Java client, which is currently being updated for release as an open source project (it should be released fairly soon). The Python tests do perform a subset of the integration tests. Until the Java client is released, the Python tests probably serve as the best integration tests available.




    That is a great question. Unfortunately, at this point there is not a specification for developing a custom broker, but that is definitely something that we should work to address in the future.

    As far as what can be done today, it really depends on what you want to achieve. An OpenDXL client can connect to an MQTT broker today (and send events), but none of the added functionality will be available.

    At a very high level, the main additions to a standard MQTT broker are as follows:

    • RESTful-like service-based model with load-balancing and failover
    • Broker-based filtering of messages to specific clients and brokers
    • Multi-broker hubs to support failover
    • Client and CA-based certificate topic authorization
    • Topic-based routing
    • Multi-tenancy

    See this page for a more detailed comparison between an OpenDXL broker and a standards-based MQTT broker.

    So, one interesting place to start would be to add RESTful-like services support to a standard broker. In the future, other features such as multi-broker fabrics, filtering, routing, etc. could be added.
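    As a thought experiment, here is roughly what a request/response ("service") layer on top of plain pub/sub topics looks like. Everything below, including the class and topic names, is an illustrative sketch in pure Python, not OpenDXL or Mosquitto code:

```python
import itertools

class MiniServiceBroker:
    """Toy broker adding a request/response layer to plain pub/sub topics."""

    def __init__(self):
        self.subscriptions = {}      # topic -> list of callbacks
        self._ids = itertools.count(1)

    def subscribe(self, topic, callback):
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscriptions.get(topic, []):
            callback(message)

    def request(self, topic, payload):
        """Publish with a unique reply-to topic, then collect the response."""
        reply_topic = "/reply/%d" % next(self._ids)
        responses = []
        self.subscribe(reply_topic, responses.append)
        self.publish(topic, {"payload": payload, "reply_to": reply_topic})
        return responses[0] if responses else None

broker = MiniServiceBroker()

def file_rep_service(message):
    # Service handler: respond on the requester's reply-to topic
    broker.publish(message["reply_to"],
                   {"reputation": 99, "hash": message["payload"]})

broker.subscribe("/service/filerep", file_rep_service)
print(broker.request("/service/filerep", "abc123"))
```

    A real implementation would also need the load-balancing and failover behavior listed above; this only shows the reply-to routing idea.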

    The implementation of the service registry in the OpenDXL broker can be found here. The service registry is used to track the services that are available throughout the fabric (as well as those that are on the local broker).

    One of the key aspects of an integration with a broker is that you need to be able to hook into the various points of the message processing flow.

    For Mosquitto, the flow stages for a message are as follows:

    Publish => Store => Insert (per client) => Finalize

    The OpenDXL broker is able to hook into all of these points in the message flow. The majority of the work in the OpenDXL broker occurs in the "Store" stage. At this point, the broker can respond to particular topics. For example, it is able to handle service registration messages and add them to the registry. It is also able to handle service lookups when a client is attempting to invoke a service.

    All of the handlers can be found in the following location: …brokerlib/message/handler

    Some of the critical classes to look at related to services are as follows:

    ServiceRegistryRegisterRequestHandler: Handles an incoming service registration request

    ServiceLookupHandler: Handles an incoming request from a client to invoke a service
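    To make the division of labor concrete, here is a toy registry with those two handler roles sketched in pure Python. The topic and field names are illustrative guesses, not the broker's actual wire format:

```python
class ServiceRegistry:
    """Toy version of a fabric-wide service registry."""

    def __init__(self):
        self.services = {}  # service GUID -> set of request topics it handles

    def register(self, guid, topics):
        """Role of ServiceRegistryRegisterRequestHandler: record a service."""
        self.services[guid] = set(topics)

    def lookup(self, topic):
        """Role of ServiceLookupHandler: find services for a request topic."""
        return sorted(g for g, t in self.services.items() if topic in t)

REGISTRY = ServiceRegistry()

def store_stage(topic, payload):
    """'Store' stage hook: the broker reacts to well-known registry topics here."""
    if topic == "/service/registry/register":    # illustrative topic name
        REGISTRY.register(payload["guid"], payload["topics"])
    elif topic == "/service/registry/lookup":    # illustrative topic name
        return REGISTRY.lookup(payload["topic"])

store_stage("/service/registry/register",
            {"guid": "svc-1", "topics": ["/file/reputation"]})
print(store_stage("/service/registry/lookup", {"topic": "/file/reputation"}))
```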

    This is obviously just a starting point, and I totally agree that we need to put together a specification for both brokers and clients. We have been working on putting together specifications that can be used by OpenDXL solutions to document the functionality that they expose (more on this in the near future). After that, we can look at adding specifications for the client and brokers themselves.

    Thanks again for the great question,


    Over the past couple of months, several projects have been developed that support using OpenDXL within the Node-RED flow-based programming tool.

    Node-RED consists of a Node.js-based runtime that you point a web browser at to access the flow editor. Within the browser, you create your application by dragging nodes from the palette into a workspace and wiring them together. With a single click, the application is deployed back to the runtime, where it is run.

    The palette of nodes can be easily extended by installing new nodes created by the community and the flows you create can be easily shared as JSON files.

    The purpose of this post is to summarize the different projects and efforts that have been made to support the integration of OpenDXL into Node-RED. This post will be updated as additional projects are developed.

    Video: Using OpenDXL with Node-RED

    • This video demonstrates using OpenDXL within Node-RED

    OpenDXL Node-RED Docker Image

    • A Docker image that consists of the Node-RED installation along with the core OpenDXL Node-RED extensions

    OpenDXL Node-RED Extensions

    Node-RED Solution Categories on

    • General Category
      • General Node-RED OpenDXL solutions (Docker images, etc.)
    • Modules Category
      • Contains OpenDXL modules for Node-RED (ePO, TIE, MAR, pxGrid, etc.)
    • Flows Category
      • Contains pre-built Node-RED flows that can be imported into Node-RED

    The issue is that you are attempting to use your client (not the DXL client) with the Python "with" keyword.

    Your code should resemble the following example (from the TIE client):

    Basic Get Reputation Example

    Your client (IRFlowApiClient) should be created in the same manner as the TieClient shown above.
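    In other words, only the DXL client participates in the "with" statement; the wrapper client is constructed as a plain object around it. A minimal sketch of the pattern, using stub classes rather than the real dxlclient or IRFlowApiClient APIs:

```python
class DxlClientStub:
    """Stand-in for dxlclient's DxlClient, which does support the 'with' protocol."""

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self.disconnect()
        return False

    def connect(self):
        self.connected = True

    def disconnect(self):
        self.connected = False

class WrapperClient:
    """Stand-in for a custom client (like TieClient or IRFlowApiClient): a
    plain object, not a context manager, built around a connected DXL client."""

    def __init__(self, dxl_client):
        self.dxl_client = dxl_client

# Correct pattern: 'with' applies to the DXL client only
with DxlClientStub() as dxl_client:
    dxl_client.connect()
    client = WrapperClient(dxl_client)  # created like TieClient(dxl_client)
    assert client.dxl_client.connected
```

    Wrapping your own client in "with" fails unless the class itself implements the context-manager protocol, which TieClient-style wrappers do not.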

    Hope this helps.



    Bootstrap will generate the initial documentation. You will want to update it with information that is specific to the solution you are developing.

    The following portion of the bootstrap tutorial walks through creating a distribution, which includes building the documentation via Sphinx (generating HTML from the documentation source files).

    Tutorial Part 5: Package Distributions



    Actually, that is not the process I use.

    Basically, what I do is the following (there are multiple ways, but I will describe this via the GitHub UI).

    1.) Navigate to your repository on GitHub

    2.) Click the "branch" pulldown (master by default)

    3.) Type in "gh-pages" and click "create branch: gh-pages"

    4.) Clone the "gh-pages" branch and delete the existing content. Add the content you want to host (pydocs, etc.).

    5.) Click on "settings" in your repository on GitHub

    6.) Scroll to the "GitHub Pages" section. Ensure "source" is set to "gh-pages branch"

    Again, there are many ways to do this, but this is essentially what I do.



    chrissmith added a new solution:


    When a MISP event is published, the flow examines the event to determine if it contains hash-based attributes. If it does, a MAR search is performed to determine if any active endpoints contain the hashes. For each endpoint containing a hash, a sighting is added to the MISP event in addition to a comment that includes the associated endpoint information.
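    The first step of that flow (finding hash-based attributes in a published event) can be sketched in a few lines of Python. The JSON shape below assumes MISP's typical event export format; adjust to your deployment:

```python
HASH_TYPES = {"md5", "sha1", "sha256"}

def extract_hashes(misp_event):
    """Pull hash-based attribute values out of a MISP event dict."""
    attributes = misp_event.get("Event", {}).get("Attribute", [])
    return [a["value"] for a in attributes if a.get("type") in HASH_TYPES]

event = {
    "Event": {
        "info": "Suspicious file",
        "Attribute": [
            {"type": "sha256", "value": "aa" * 32},
            {"type": "domain", "value": "evil.example"},
        ],
    }
}
print(extract_hashes(event))
```

    Only if this list is non-empty would the flow go on to run the MAR search and add sightings back to the event.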


    The Node-RED flow content for this solution:

    chrissmith added a new solution:

    Yeah, that is definitely worth a shot. I updated the basic service sample (see code below) included with the client to allow for setting the number of services to register. With the latest version of the client I am able to register all of the services consistently (without any deadlocks).



    Odd, not sure what is happening. Can you please try the following:

    For the "external" entry, replace the address with the actual IP address of that host; the address currently in place refers to the container itself.

    Also, remove the "local" line altogether.

    So, the brokers section should appear as follows (with the host IP address being the actual IP address of the Docker host system):

    [Brokers]
    external=external;8883;<host-ip-address>;<host-ip-address>
    docker=docker;8883;;
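    For reference, entries like these can be sanity-checked with a few lines of Python. The `id;port;host;ip` field layout is an assumption based on the snippet above, and 192.0.2.10 is a documentation placeholder standing in for the real host IP:

```python
from configparser import ConfigParser

# Placeholder config; 192.0.2.10 stands in for the Docker host's real IP
CONFIG_TEXT = """
[Brokers]
external=external;8883;192.0.2.10;192.0.2.10
docker=docker;8883;;
"""

def parse_brokers(text):
    """Parse [Brokers] entries assumed to look like id;port;host;ip."""
    parser = ConfigParser()
    parser.read_string(text)
    brokers = {}
    for broker_id, value in parser.items("Brokers"):
        fields = value.split(";")
        brokers[broker_id] = {
            "port": int(fields[1]),
            "host": fields[2] if len(fields) > 2 else "",
            "ip": fields[3] if len(fields) > 3 else "",
        }
    return brokers

print(parse_brokers(CONFIG_TEXT)["external"]["host"])  # 192.0.2.10
```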