Posts by camlow325

    The broker has some logic to clear out registered services when the client which performed the registrations disconnects gracefully. If the client does not disconnect gracefully, however (for example, it never sends a TCP FIN or RST packet to the broker), the broker may not know that the client is no longer connected and therefore keeps the services in its registry.


    The services are likely remaining registered for up to one hour because of the default TTL configured during service registration. You could reduce the time needed to detect the disconnect by using a smaller value for the service TTL field. See the documentation for the ServiceRegistrationInfo.ttl field in the OpenDXL Python Client SDK for more information. Note, however, that a smaller TTL would result in the client using more network bandwidth to perform service re-registrations.
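    To illustrate why a smaller TTL helps, here is a toy model (plain Python, not the actual broker code) of a TTL-based registry: each registration records an expiration time, and stale entries are pruned lazily. A smaller TTL bounds how long a silently disconnected client's service can linger, at the cost of more frequent re-registrations.

```python
import time

class ServiceRegistry:
    """Toy TTL-based service registry (illustrative only)."""

    def __init__(self):
        self._services = {}  # service_id -> expiration timestamp

    def register(self, service_id, ttl_seconds, now=None):
        # A re-registration simply pushes the expiration time forward
        now = time.time() if now is None else now
        self._services[service_id] = now + ttl_seconds

    def active_services(self, now=None):
        # Drop entries whose TTL elapsed without a re-registration
        now = time.time() if now is None else now
        self._services = {s: exp for s, exp in self._services.items()
                          if exp > now}
        return sorted(self._services)

registry = ServiceRegistry()
registry.register("/mycompany/service", ttl_seconds=60, now=0)
print(registry.active_services(now=30))   # still within TTL
print(registry.active_services(now=120))  # TTL expired, entry pruned
```

    With a one-hour TTL, a service registered by a silently disconnected client can survive for up to an hour; with a 60-second TTL it would be pruned within a minute.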


    Once the client performing the service registrations has disconnected from the broker, the next request that the broker attempts to route to the disconnected client should fail with an "unable to locate service for request" error. At that point, the service registration should be removed from the broker - even if the service TTL has not yet expired.


    In the future, it would be nice to have some "retry" logic built into the broker to handle this case. For example, if the same service were registered by two different clients, one disconnected and one still connected, and the broker failed when attempting to route an incoming request to the disconnected client, it would be nice to have logic which automatically "retries" the request against the other client (which could succeed in providing the response because it is still connected). This may be considered for the future, but some more work would need to be done to think through the right design approach.
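    A hypothetical sketch of that retry idea (this is not current broker behavior; the provider callables below stand in for routing a request to a client registered for the service):

```python
class ServiceUnavailableError(Exception):
    """Raised when a request cannot be routed to a service provider."""

def route_with_retry(request, providers):
    """Try each registered provider in turn; return the first response."""
    last_error = None
    for provider in providers:
        try:
            return provider(request)
        except ServiceUnavailableError as error:
            last_error = error  # provider unreachable; try the next one
    raise last_error or ServiceUnavailableError("no providers registered")

def dead_client(request):
    # Stands in for a client that silently disconnected
    raise ServiceUnavailableError("unable to locate service for request")

def live_client(request):
    # Stands in for a still-connected client that can answer
    return "response to " + request

print(route_with_retry("lookup", [dead_client, live_client]))
```

    The request fails against the first (disconnected) provider and succeeds against the second, rather than surfacing the error to the requester.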


    Another thing you may want to look into, if you have not already, is whether your applications are written so that when they are shut down gracefully they first attempt to unregister their services and disconnect from the fabric. When you use the OpenDXL Bootstrap Python project to generate the basic structure for your application code, the generated __main__.py file contains logic to register a signal handler which allows the application to be stopped before the main thread exits and the process dies. When the application is stopped, an attempt should be made to unregister any services which have been registered and to disconnect the client from the DXL fabric. Assuming a system reboot first signals the application to exit, this cleanup work should allow the broker to unregister the client's services immediately.
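    The shutdown pattern described above can be sketched as follows. The cleanup callables are placeholders for the real OpenDXL calls (unregistering services and disconnecting the client); this is an illustration, not the generated __main__.py code.

```python
import signal

def make_shutdown_handler(cleanup_actions):
    """Build a signal handler that runs cleanup work before exit."""
    def handler(signum, frame):
        # Unregister services and disconnect before the process dies
        for action in cleanup_actions:
            action()
    return handler

performed = []
handler = make_shutdown_handler([
    lambda: performed.append("unregister services"),  # placeholder
    lambda: performed.append("disconnect client"),    # placeholder
])

signal.signal(signal.SIGTERM, handler)  # e.g. sent during a graceful reboot
handler(signal.SIGTERM, None)           # simulate receipt of the signal
print(performed)
```

    When the cleanup runs before exit, the broker sees a graceful unregister/disconnect and does not have to wait for the service TTL to expire.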

    It seems possible that the problem could be due to the PAN having supplied empty values for the MD5 and SHA256 values. I can recreate that error message manually by doing the following:

    Code
    currentMD5 = ""
    currentSHA256 = ""
    reputations_dict = \
        tie_client.get_file_reputation({
            HashType.MD5: currentMD5,
            HashType.SHA256: currentSHA256
        })

    The above produces the same error message that you mentioned:

    Quote

    Exception: Error: Error during request handling. (0)

    Have you tried modifying the "wf.py" source code to print out the value of the currentMD5 and currentSHA256 variables so you can see what they are when the error occurs?


    Jesse - if this is something you expect could happen in the Wildfire feed, do you think the DXL integration should guard against it to avoid making calls to TIE for invalid hash values? Something like...

    Code
    if currentMD5 and currentSHA256:
        reputations_dict = \
            tie_client.get_file_reputation({
                HashType.MD5: currentMD5,
                HashType.SHA256: currentSHA256
            })
    ...

    I would expect the scheme of using the same client certificate and private key for each of the client machines to work. The OpenDXL broker should still allow all of the clients to connect concurrently to the DXL fabric even if they all use the same client certificate.


    If you are using an ePO-managed broker and the ability to manage topic authorization per client in ePO, however, the approach of using a single certificate for all clients could limit your ability to provide granular authorization for subsets of clients. Also, if the client certificate ever needed to be rotated out, it could be more difficult to roll the new certificate and private key out to a large number of clients than to just the smaller subset that needed rotation.


    Other than setting up your own separate application, apart from the OpenDXL broker, which generates certificates on behalf of the clients (perhaps something like what is documented in the Certificate Files Creation (PKI) section of the OpenDXL Python Client SDK), though, I'm not sure there are any better options at present for what you want to do.

    Hi Christophe,


    In the 'atd_subscriber.txt' file that you attached, it appears that the ATD file report does not include a non-empty value for the "Dst IP" (see https://github.com/mohlcyber/O…1.0/atd_subscriber.py#L33) or "Ips" (see https://github.com/mohlcyber/OpenDXL-ATD-Fortinet/blob/v1.0/atd_subscriber.py#L44).


    For example:


    Quote

    INFO:root:{
        "Summary": {
            ...
            "Dst IP": "",


    Since no IPs are present in the report, it appears that the "atd_subscriber" script would not try to run the "forti_push.py" script.
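    For illustration, a guard along these lines could skip the push step when a report contains no IPs. The field names mirror the report excerpt above, but the helper is hypothetical, not the actual atd_subscriber code.

```python
def extract_ips(report):
    """Collect non-empty IP values from an ATD report dict (sketch)."""
    summary = report.get("Summary", {})
    # "Dst IP" and "Ips" are the fields checked in the excerpt above
    candidates = [summary.get("Dst IP", ""), summary.get("Ips", "")]
    return [ip for ip in candidates if ip]

report = {"Summary": {"Dst IP": "", "Ips": ""}}
if extract_ips(report):
    pass  # would invoke the Fortinet push step here
else:
    print("no IPs in report; skipping push")
```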


    I don't immediately have an explanation as to why the ATD report would not include any IP address information. Maybe Martin or Nolan would have an idea about this? Is there some other action you might be able to trigger with ATD which would include IPs in the report, so you could confirm that the basic Python script integration with Fortinet is functioning properly?