Monday, April 11, 2022

Solution for long-running tasks in Camunda: External tasks

One of my recent jobs was to design a BPMN process with Camunda for one of my clients.

The requirements were clear and not too complicated. I had to struggle with some technical challenges regarding Camunda though. One of these challenges was that a task within the process can take quite some time. Standard synchronous Camunda tasks are not designed for long-lasting work.

At my client the Camunda engine runs within a Wildfly application server and therefore uses the transaction handling of Wildfly. The default transaction timeout for a Wildfly-managed database transaction is 10 minutes. This means that your task's transaction will also time out after 10 minutes, which might not be enough in particular cases. Of course you could increase the transaction timeout, but this would affect all applications deployed within Wildfly. And what if your transaction has to run for several days or weeks? Consulting the Camunda forum (Camunda has a great community) and the documentation, I found a solution in using "external tasks".

External tasks enable you to hand a unit of work to an external worker. This way the task's transaction can finish right away and some other service deals with the complexity of the long-running job. This service can use its own transaction handling, completely independent of the Camunda process.

The following steps are necessary to implement an external task:

  • Activate the Camunda REST API (it is probably also possible to use external tasks with the Camunda Java API, but I have only found examples using the REST API). In a Java EE application the REST API is activated by extending the Application class and overriding the getClasses method:

import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// CamundaRestResources comes from the Camunda engine REST dependency
@ApplicationPath("/camunda-rest")
public class CamundaRestApi extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        // setting up Camunda REST service
        Set<Class<?>> classes = new HashSet<Class<?>>();
        // add camunda engine rest resources
        classes.addAll(CamundaRestResources.getResourceClasses());
        // add mandatory configuration classes
        classes.addAll(CamundaRestResources.getConfigurationClasses());
        return classes;
    }
}

In addition, a file named org.camunda.bpm.engine.rest.spi.ProcessEngineProvider containing the fully qualified class name of the ProcessEngineProvider implementation has to be placed under the application's META-INF/services path: my-application\src\main\webapp\META-INF\services
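
For reference, the content of that file is the fully qualified class name of the ProcessEngineProvider implementation to use. With a container-managed engine this is typically the following line (please verify against the Camunda documentation for your version):

org.camunda.bpm.engine.rest.impl.application.ContainerManagedProcessEngineProvider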

  • Define your service task in your bpmn-file by setting the camunda:type attribute to external and providing a topic name. You can choose any topic name you want:
<bpmn:serviceTask id="lotsOfWorkId" name="do a long running task" camunda:type="external" camunda:topic="lotsOfWorkTopic">
  • Create an external task client and subscribe to the topic:
public static ExternalTaskClient getExternalTaskClient() {
    return externalTaskClientBuilder()
        .baseUrl("https://myHost/MyWebapp/camunda-rest")
        .workerId("lotsOfWorkWorkerId")
        .build();
}

// lotsOfWorkHandler is an instance of the LotsOfWorkHandler shown below
getExternalTaskClient()
    .subscribe("lotsOfWorkTopic")
    .handler(lotsOfWorkHandler)
    .open();
  • The external task handler is the place where the actual business logic is defined that will be executed when a task is written to the topic. The handler is not bound to any Camunda transaction and can decide on its own how the work should be executed. It implements the ExternalTaskHandler interface.
public class LotsOfWorkHandler implements ExternalTaskHandler {

    @Override
    public void execute(final ExternalTask externalTask, final ExternalTaskService externalTaskService) {
        //do some long lasting work...
        //...
        //complete the task
        externalTaskService.complete(externalTask);
    }
}

By using external tasks I was able to let a task run for a very long time while leaving the default transaction timeout at 10 minutes. One disadvantage of this approach is that all variables that are passed to and from the external task handler have to be serializable, because they are transferred over HTTP through the REST API. I was not able to use JSON directly (Camunda uses the Spin API for this) but had to pass a List<String> and convert it to JSON within the task itself.
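
To illustrate the variable handling, here is a minimal sketch of a handler variant that reads such a List<String> process variable and passes a result variable back when completing the task. The class and variable names are only assumptions for this example and should be adapted to your project:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.camunda.bpm.client.task.ExternalTask;
import org.camunda.bpm.client.task.ExternalTaskHandler;
import org.camunda.bpm.client.task.ExternalTaskService;

public class LotsOfWorkWithVariablesHandler implements ExternalTaskHandler {

    @Override
    public void execute(final ExternalTask externalTask, final ExternalTaskService externalTaskService) {
        // read a serializable process variable (hypothetical name "inputLines")
        List<String> inputLines = externalTask.getVariable("inputLines");

        // ... do the long running work ...

        // pass a serializable result back to the process when completing the task
        Map<String, Object> variables = new HashMap<>();
        variables.put("processedCount", inputLines == null ? 0 : inputLines.size());
        externalTaskService.complete(externalTask, variables);
    }
}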


Friday, March 19, 2021

CRaSH Console - start your long-running process in an extra thread!

The CRaSH console is a nice tool to do imports or batch updates that should not become part of your main application.

Why should you use the CRaSH console?

  • your code runs within your application context and you have access to your runtime environment (business services like Spring beans, database access through your DAOs, etc.)
  • your code is deployed through all stages and it is impossible to accidentally run test code on your production environment
  • you can use lightweight scripting languages like Groovy to implement your requirements, which can be changed without redeployment
There is one important thing to consider when you are doing long-running batch updates or imports: for some reason database connection timeouts occur when the code runs synchronously and the execution takes more than a few hours.
In order to fix this, our team came up with the solution of starting the actual long-running code in an extra thread:

    public void performBatchJob() {
        new Thread(() -> {
            //do some long running updates or imports
        }).start();
    }

Executing the code this way protects you from DB timeouts and the code runs safely for a long time. We have run batch updates that took several weeks without problems in the CRaSH console this way.
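
Since the CRaSH command returns immediately, it also helps to give the thread a name and to log uncaught exceptions, so that a failure of the detached job does not go unnoticed. A minimal sketch (the logger and the actual job are placeholders):

import java.util.logging.Level;
import java.util.logging.Logger;

public class BatchJobStarter {

    private static final Logger LOG = Logger.getLogger(BatchJobStarter.class.getName());

    public void performBatchJob() {
        Thread worker = new Thread(() -> {
            //do some long running updates or imports
        }, "crash-batch-job");
        // make sure failures of the detached job end up in the log
        worker.setUncaughtExceptionHandler((thread, exception) ->
                LOG.log(Level.SEVERE, "Batch job failed in thread " + thread.getName(), exception));
        worker.start();
    }
}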

To learn more about the CRaSH console visit https://www.crashub.org/



Sunday, October 18, 2020

JSF: Keep data in Flash Scope on browser refresh and browser back

JSF supports different data scopes like the session scope to store data in the user session and the request scope to keep data for the lifespan of one request. Of course the goal should always be to keep the data in the narrowest scope possible, because this saves server resources.

An interesting JSF scope is the flash scope. The flash scope extends the request scope to survive redirects. Redirects are an important part of the PRG pattern which is commonly used in JSF applications. (For details on the PRG pattern please see this post-redirect-get-and-jsf-20 blog post.)

One problem I encountered recently when using the flash scope is that data is lost on a browser refresh and on browser back. Some developers approach this problem by telling the users not to use this browser functionality (e.g. through JavaScript checks), but in my opinion an application should support this basic functionality. Fortunately there is a surprisingly easy solution to this problem. By invoking the code

FacesContext.getCurrentInstance().getExternalContext().getFlash().keep("context");

JSF is instructed to keep the flash data (in this example the variable "context") even when the user hits F5 (refresh) or navigates back to a previous page with the browser's navigation buttons.
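
For context, here is a minimal sketch of how the flash is typically filled before the redirect and kept on the target page. The bean, outcome and variable names are only illustrative:

import javax.faces.context.FacesContext;
import javax.faces.context.Flash;

public class ContextBean {

    private Object myContextData;

    // action method: store data in the flash and redirect (PRG pattern)
    public String save() {
        FacesContext.getCurrentInstance().getExternalContext()
                .getFlash().put("context", myContextData);
        return "result?faces-redirect=true";
    }

    // called on the target page (e.g. via a preRenderView listener):
    // read the data and keep it so it survives F5 and browser back
    public void init() {
        Flash flash = FacesContext.getCurrentInstance().getExternalContext().getFlash();
        Object context = flash.get("context"); // restored data, ready to use
        flash.keep("context");
    }
}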

Wednesday, July 8, 2020

Understanding Java Keystores for private key authentication

When your application needs to communicate over https (SSL) a KeyStore and a TrustStore may be involved.

The TrustStore usually holds the public keys of the servers that the client wants to establish a connection to. This store is located in the [jdk_home]\lib\security\cacerts file.

Usually it is sufficient to import the server's certificate into this file with the help of the keytool command in order to trust that server.
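
For example (alias and certificate file are placeholders):

keytool -importcert -alias myServer -file server.cer -keystore [jdk_home]\lib\security\cacerts -storepass changeit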

A KeyStore on the other hand usually holds private keys that can be used for authentication. The file is often in the PKCS12 format and has the file ending "pfx".

There are two ways to configure a TrustStore and a KeyStore.

 a) by configuration

The easiest way is to use these JVM parameters:

-Djavax.net.ssl.keyStore=/var/data/myKeyStore.pfx
-Djavax.net.ssl.keyStorePassword=myPassword
-Djavax.net.ssl.trustStore=/java/jdk11/lib/security/cacerts
-Djavax.net.ssl.trustStorePassword=changeit

Obviously these system properties need to be evaluated by the code that connects to the server. The Apache HttpClient (https://hc.apache.org/httpcomponents-client-5.0.x/index.html) and the JAX-RS Jersey client (https://mvnrepository.com/artifact/com.sun.jersey/jersey-client) do not read these properties by default. Therefore a different approach is necessary:

 b) programmatically load the TrustStore and KeyStore

This example uses the popular Apache HttpClient.
The following code shows how an SSLContext is created with a truststore and a keystore, which is needed when creating the client:

private SSLContext getSslContext() throws Exception {
    //load truststore (cacerts file)
    KeyStore serverKeystore = KeyStore.getInstance(KeyStore.getDefaultType());
    try (FileInputStream trustStoreStream = new FileInputStream(trustStorePath)) {
        serverKeystore.load(trustStoreStream, trustStorePassword.toCharArray());
    }
    TrustManagerFactory serverTrustManager =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    serverTrustManager.init(serverKeystore);
    //load keystore (pfx file, PKCS12 format)
    KeyStore userKeystore = KeyStore.getInstance("PKCS12");
    try (FileInputStream keyStoreStream = new FileInputStream(keystorePath)) {
        userKeystore.load(keyStoreStream, keyStorePassword.toCharArray());
    }
    KeyManagerFactory userKeyFactory =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
    userKeyFactory.init(userKeystore, keyStorePassword.toCharArray());
    SSLContext sslContext = SSLContext.getInstance("TLS");
    //init the SSL context with truststore and keystore
    sslContext.init(userKeyFactory.getKeyManagers(),
            serverTrustManager.getTrustManagers(), null);
    return sslContext;
}

The HttpClient is then created on the basis of the SSLContext:

private CloseableHttpClient secureConnection() throws Exception {
    SSLContext sslContext = getSslContext();
    SSLConnectionSocketFactory sslConSocFactory = new SSLConnectionSocketFactory(sslContext);
    return HttpClients.custom().setSSLSocketFactory(sslConSocFactory).build();
}

Finally the client can be used to perform an HTTPS request with private key authentication:

private void performRequest(String url) throws Exception {
    try (CloseableHttpClient closeableHttpClient = secureConnection();
         CloseableHttpResponse response = closeableHttpClient.execute(new HttpHead(url))) {
        // evaluate the response, e.g. check the status code
        System.out.println(response.getStatusLine());
    }
}

Sunday, March 29, 2020

Obfuscating a jar file with yGuard and maven

yGuard is a nice tool that obfuscates Java sources. Unfortunately there is no Maven support.
It is possible to combine yGuard with Maven, but you have to make sure obfuscation is called in the correct phase of the Maven build process.

In order to run yGuard in a Maven build, the following antrun plugin configuration can be used in the Maven pom.xml file of a jar artifact:

<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.7</version>
<executions>
 <execution>
  <phase>integration-test</phase>
  <configuration>
   <tasks>
    <taskdef name="yguard"
       classname="com.yworks.yguard.YGuardTask"
       classpath="build-resources/obfuscator/yguard.jar"/>
    <copy todir="target">
     <fileset dir="target" />
     <globmapper from="*.jar" to="backup/*_backup.jar" />
    </copy>
    <yguard>
     <inoutpairs>
      <fileset dir="target" includes="myApp*.jar"/>
     </inoutpairs>
     <property name="naming-scheme" value="best"/>
     <rename logfile="renamelog.xml">
      <adjust replaceContent="true">
       <include
         name="web.xml"/>
      </adjust>
      <keep>
       <class methods="private" fields="private">
        <patternset>
         <include name="de/myProject/unchanged/**/*"/>
        </patternset>
       </class>
       <class classes="none" methods="none" fields="none">
        <patternset>
         <include name="de/myProject/obfuscate/**/*"/>
        </patternset>
       </class>
      </keep>
     </rename>
    </yguard>
    <copy todir="target">
     <fileset dir="target" />
     <globmapper from="*_obf.jar" to="*.jar" />
    </copy>
    <delete>
     <fileset dir="target">
      <include name="*_obf.jar"/>
     </fileset>
    </delete>
   </tasks>
  </configuration>
  <goals>
   <goal>run</goal>
  </goals>
 </execution>
</executions>
</plugin>
Note the following aspects:
  1. The antrun plugin is executed in the "integration-test" phase. During this phase the original jar file that needs to be obfuscated has already been built, so there is a chance to obfuscate it before it is installed into the local Maven repository.
  2. Before obfuscation a backup of the original jar file is created.
  3. After obfuscation (running the yguard task) the obfuscated jar is renamed to its original name (with the help of copy and delete).
  4. Afterwards the obfuscated jar is installed into the local Maven repository and can be referenced from other Maven modules.

Thursday, December 5, 2019

Tagging an AWS CloudWatch alarm


Recently I tried to tag an alarm in AWS CloudWatch. First I took a look at the REST API documentation.

There are several things I found remarkable when trying to add a tag via the TagResource "action".

First of all, there is no example included in the documentation. Not too bad, because there is Google with lots of examples out there? Wrong: I couldn't find a single sample of someone doing such a REST call. After a lot of trial and error I found a working solution:

The tag has to be included as query parameters and a GET request has to be issued in order to create a tag. (Please do not ask me why they did not implement this as a POST or PUT request like everyone else does.) A key-value pair of tags has to be provided in this format as query parameters:

...&Tags.member.1.Key=myKey&Tags.member.1.Value=myValue

I am wondering how anyone is able to come up with this solution after reading the API documentation.

Since I found several other issues when implementing plain REST calls, I continued with the Java SDK instead.


This API is well documented with a lot of examples and it is also easy to use. 
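
For illustration, here is a minimal sketch of tagging an alarm with the AWS SDK for Java (v1). The alarm ARN, tag key and tag value are placeholders:

import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.Tag;
import com.amazonaws.services.cloudwatch.model.TagResourceRequest;

public class AlarmTagger {

    public static void main(String[] args) {
        AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();
        // tag the alarm identified by its ARN with a single key-value pair
        cloudWatch.tagResource(new TagResourceRequest()
                .withResourceARN("arn:aws:cloudwatch:eu-central-1:123456789012:alarm:myAlarm")
                .withTags(new Tag().withKey("myKey").withValue("myValue")));
    }
}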

Conclusion: I think Amazon doesn't really want you to use their REST API directly. It is complicated, not well documented and has a strange architecture (a GET request to create data). Unfortunately the documentation gives no hint that you are way better off using the SDKs that are available in multiple languages (C#, Go, JavaScript, Python, PHP, etc.).

Friday, May 17, 2019

Positioning of a PrimeFaces dialog (p:dialog)

When using the PrimeFaces dialog on a large page that has a vertical scrollbar, the dialog might not be visible because it is displayed at the top of the page while your current scroll position is too far down.

In order to center the dialog nicely on the visible part of the page, I use a small JavaScript function that positions the dialog after it is rendered by PrimeFaces:

function positionDialog(dialogId, anchorId) {
    var anchor = $(anchorId);
    PF(dialogId).getJQ().position({
        "my": "center",
        "at": "center",
        "of": anchor
    });
}

In your xhtml page all you have to do is define the dialog and the anchor to which the dialog should be moved. The anchor should be placed at the position on the page where the dialog should appear:


<div id="myAnchorId"></div>
<p:dialog id="myId"
          header="My dialog"
          widgetVar="myId"
          onShow="positionDialog('myId', '#myAnchorId')"
          modal="true">
          This is the content of my dialog
</p:dialog>

This way the dialog is always nicely centered no matter where your vertical scrollbar is positioned at the moment.