Native Apps and Nano-flows in Action

Here’s why you should use nano-flows and how to understand them beyond the programming boundary

A.I Hub
Image by Author using ideogram

In addition to the micro-flows we discussed in previous articles, Mendix also offers nano-flows. Although the name suggests something to do with size, this is not the case. You can think of micro-flows as pieces of functionality executed on the Mendix Runtime, whereas nano-flows run on the Mendix Client, also known as your browser. Nanoflows are translated into JavaScript that can be executed directly in the browser. In this article, we will explore the concepts of native apps and nano-flows in Mendix in depth.

Offerings

  • Mobile application
  • Offline first and synchronization
  • Nano-flows
  • Javascript actions
  • Javascript action example
  • Building a native app
  • Testing your app locally
  • Best practice for offline development
  • Splitting the domain model
  • Deleted flag pattern
  • Delta synchronization pattern
  • Batch synchronization pattern
  • Compound object pattern
  • Request object pattern

Mobile Application

With the Mendix platform, we have essentially three options for creating apps that can be used on mobile devices like tablets and phones.

The first is the standard web app we have been looking at so far. We can create an online app that is responsive and reacts to the screen size used to display it. We may need to design some specific user interfaces for mobile use due to screen size limitations. The advantage of this approach is that we already know how to build these apps using the Mendix platform. The disadvantage is that styling the app becomes complex very quickly, and specific designs for mobile and other usage are needed.

The second option is to create a PWA (Progressive Web App). PWAs are an evolution of traditional web apps. They tend to behave more like native mobile apps, and their popularity is increasing. One possible advantage of PWAs compared to native mobile apps is that PWAs do not need to be distributed via an app store but can be accessed directly via the browser.

PWAs have three main characteristics:

  • Installable — PWAs can be added to a user’s home screen and started as a full-screen app. This makes PWAs feel more like fully capable native apps.
  • Reliable — Using service workers, PWAs can work fully or partially offline. Mendix PWAs support both modes.
  • Capable — PWAs can leverage several device capabilities, like the camera and location, and offer support for web push notifications.

As PWAs are web apps with additional features, Mendix offers these features via the web navigation profiles. Depending on your needs, you can create either a fully offline-capable PWA or a web app that requires an internet connection but still uses PWA features. Within the navigation profiles, these PWA features can be configured:

  1. Publish as PWA — When using this option, the app registers a service worker when deployed to the cloud. In offline navigation profiles, this option is always enabled. In online navigation profiles, this option also gives the end-user a custom page when the device has no connection. Where desired, this page can be customized by adding an offline.html page to the theme folder (for example, theme/offline.html). Note that this page should not load any other resources over the network.
  2. Allow add to home screen prompt — With this option, the end-user will be actively asked to add the app to their device’s home screen or desktop.
  3. Preload static resources — This option ensures the app pre-loads static resources like pages, images and widgets in the background, which can benefit the overall performance of the app. The pre-loaded resources make the app feel faster when navigating between pages. This comes at the cost of higher bandwidth consumption and device storage when opening the app. In offline profiles, this feature is automatically enabled. Note that all pages and images reachable in the navigation profiles are loaded by the browser. This can be undesirable from a security perspective, depending on your use case and requirements.

The third option is creating a fully native mobile app. Mendix makes it possible to build fully native mobile apps. Native mobile apps do not render inside a web view but use native UI elements instead. This results in fast performance, smooth animations, natural interaction patterns like swiping and pinching, and improved access to all native device capabilities. To build such native mobile apps, Mendix leverages the popular open-source framework React Native.

You can create native mobile apps in the same way you build web apps. You can use the now-familiar elements, such as pages, widgets and microflows, plus some elements we will explore in this article, like nanoflows and JavaScript actions, to implement the required features. There are some differences between building native apps and web apps. For example, the set of available widgets is slightly different, how we deal with data is different and, in addition, styling for native apps is based on JavaScript instead of SASS/CSS.

Choosing between the different approaches is driven by several considerations:

  • Does the app need to run on the Windows platform? — Use the PWA approach.
  • Does the app require push notifications on iOS? — Use the native approach.
  • Is mobile device management a requirement? — Use the native approach.
  • Should the app work fully offline? — Use the native approach.
  • Are specific security features needed? — Use the native approach.
  • Is app store presence needed? — Use the native approach.
  • Is there a large overlap between the web and mobile version? — Use the PWA approach.

Offline First Synchronization

A native app built with Mendix uses the offline-first approach. This means that when we use an app on a mobile device, the app has its own offline, local database rather than a connection to the server database. Consequently, you will not be able to access data that lives on the server without synchronizing the data, and other users will not be able to access any changes you made offline until those changes are synchronized. Keeping the local database on a device in sync with the server is therefore very important. Mendix comes with automatic startup synchronization to ensure that the local database is aligned with the server.

Synchronization is automatically triggered in these scenarios:

  1. The initial startup of your native app.
  2. The first startup of your native app after the app is re-deployed, if the user is connected and one of these conditions changed:
    • The synchronization configuration
    • The Mendix version
    • A persistent entity in a domain model used in the native app
    • The access rules of those persistent entities
  3. After the user logs in or out. Be aware that any data that was committed but not synchronized on logout will be removed from the local database by default, because the data is tied to the user session.

Keep in mind that the synchronization process is a synchronization of the database: if an object is not committed to the local database, it will not be synchronized, and uncommitted changes will be ignored.

While this automatic synchronization is useful, it may not cover all the instances in which you want to synchronize your data — for example, when you want to synchronize new remark entries that were created as part of functions the user accessed directly in the native app. In these scenarios, Mendix has specific synchronization activities you can use to customize what is synchronized. You can trigger custom synchronization from a button, as an activity in a nanoflow or by pulling down a list.

Custom synchronization takes place in two phases. In the first phase, the objects that are committed on the user’s device are uploaded to the server. Then, a check on deleted objects is performed, where deletions in the local database are processed. Next, the event handlers defined on the domain model entities are triggered. In the second phase, the user’s device downloads objects from the server based on the synchronization configuration and entity access rules you have configured in the domain model, and new and changed objects are downloaded to the device.

The two phases are always executed for the different possible synchronization modes. There are three modes for custom synchronization:

  • Synchronize everything — Synchronizes the entire local database. This mode uses the synchronization configuration in the Navigation of your app.
  • Synchronize unsynchronized objects — Synchronizes all objects with changes committed to the local database. Objects that have been deleted on the device are also deleted on the server.
  • Selective synchronization — Synchronizes the selected object or list to the server. The objects must be in the context of a nano-flow and are used as an input parameter to this action.

When using custom synchronization, we need to consider that a connection to the internet is required for these actions. Because of this, we need to implement specific measures for handling errors that might occur, much like we would add error handling when calling a REST API. The steps we should implement when using custom synchronization are:

  • Check the internet connectivity — There is a JavaScript action available for this in the Native Mobile Resources Marketplace module. Use the IsConnectedToServer activity to check for a connection to the internet and the server.
  • Check for running synchronizations — Running simultaneous synchronizations from the same device can easily cause issues due to the nature of the two-phased approach. In a nano-flow, we can check for this by using a decision with the expression not(isSyncing()).
  • Ask the user to start the synchronization — As with micro-flows that run for a long time, we ask for a confirmation of the action. Use the ShowConfirmation activity for this purpose.
  • Add a custom error handler on the synchronization activity — We can use the $latestError variable to provide details on the synchronization error in the log and in user messages. The error handling should be implemented as custom without rollback.
  • Add a progress bar — Although not strictly required, when a process takes longer than a second or two, it is good practice to give the user feedback in the form of a progress bar. Use the showProgress activity for this purpose and ensure that the progress bar is removed when the process is finished with the hideProgress activity.
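The guard sequence above can be sketched in plain JavaScript. This is an illustrative sketch only: isConnectedToServer, isSyncing, confirm, showProgress, hideProgress and startSynchronization are hypothetical stand-ins for the Mendix activities, injected as parameters so that only the control flow is shown.

```javascript
// Illustrative sketch of the recommended custom-synchronization guard flow.
// All six callbacks are hypothetical stand-ins for Mendix nanoflow activities.
async function guardedSync({ isConnectedToServer, isSyncing, confirm,
                             showProgress, hideProgress, startSynchronization }) {
  if (!(await isConnectedToServer())) {
    return "no-connection";        // no point starting without a connection
  }
  if (isSyncing()) {
    return "already-syncing";      // mirrors the not(isSyncing()) decision
  }
  if (!(await confirm("Start synchronization?"))) {
    return "cancelled";            // the user declined the confirmation dialog
  }
  const progressId = showProgress();
  try {
    await startSynchronization();
    return "done";
  } catch (err) {
    // custom-without-rollback style handling: log and report, do not rethrow
    console.error("Synchronization failed:", err);
    return "error";
  } finally {
    hideProgress(progressId);      // always remove the progress bar
  }
}
```

The order matters: the connectivity and isSyncing checks come before the confirmation so the user is never asked to start a synchronization that cannot run.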

To help troubleshoot synchronization issues, you have the option in the navigation profile of your native app to send the logs to the server. When this option is enabled, the logging from the device is stored in the same log file as the other log activities and errors we have already encountered. In addition, we have the System.Synchronization domain model entity that we can inspect when running into synchronization errors.

Let us explore the synchronization configurations we mentioned earlier. When we set up a native app, we start by adding a navigation profile to be used in the native app. In this profile, we define the navigation of our app and which objects are to be synchronized to the device running the app. The latter configuration is accessed via the Configure synchronization button in the native navigation profile.

Synchronization configuration

There are five options for the default synchronization behavior, which can be configured per entity:

  • All objects — All objects that the user has access to are downloaded to the
    local database on the device.
  • By XPath — Only the objects that match the XPath constraint and the
    permissions of the user are downloaded to the device. Objects that were
    previously downloaded but no longer comply with the XPath statement
    will be removed from the device database.
  • Nothing (clear data) — This option does not download any records and
    removes the objects from the device database.
  • Nothing (preserve data) — This option does not download any records and the downloaded records will remain on the device.
  • Never — This option ensures the records are never synchronized to the
    device.

When synchronizing, we need to be aware of the amount of data involved. The more data we synchronize, the longer the user has to wait when opening the app, and the more storage is consumed on the user’s device. The best practice is to synchronize only what the user needs to complete their tasks in the app. When using the XPath option, ensure that the paths used are limited in length: when an XPath statement uses many steps, the load on the server can be negatively impacted.

Nanoflows

We have already encountered nanoflows in the previous sections. Let us explore these document types a bit further. Mendix native apps are offline-first by nature. Offline-first apps work regardless of the connection to provide a continuous experience. Pages and logic interact with an offline database on the device, and the client synchronizes the data with the server. Working against a local database results in a snappier UI, increased reliability and improved device battery life. In this offline-first approach, we cannot rely on micro-flows, as they run on the server, and the server should not be considered available in a native app.

Therefore, Mendix introduced nanoflows. Nanoflows are similar to microflows in that they allow you to express the logic of your app. However, they have some specific benefits: they run directly on the browser/device and can be used in an offline app. Furthermore, most of the actions run directly on the device, so there is also a speed benefit for logic that does not need access to the server. Nanoflows also offer benefits for online apps (for example, for UI logic, validations, calculations and navigation). However, keep in mind that each of the following database-related actions creates a separate network request to the Mendix Runtime:

  1. Create
  2. Commit
  3. Retrieve
  4. Rollback

Therefore, the best practice is to use nano-flows in online apps when they do not contain the above actions. Although nano-flows perform best in online apps when no database-related actions are used, nano-flows that contain at most one database-related action can still perform well. Since such nano-flows only require one network call, they perform as well as a micro-flow. An example of such a use case is performing validation logic on an object and committing the object in the same nano-flow. The one-network-call limit is important, though: when multiple actions that interact with the database are used, nanoflows cause more network traffic than their optimized counterparts, the microflows.
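A toy model makes this cost difference concrete. The fake client below simply counts round-trips: a nanoflow issues one request per database-related action, while a microflow bundles all of its actions into a single request to the Runtime. The function names are illustrative, not Mendix APIs.

```javascript
// Toy model: count network round-trips for the same set of database actions.
// These names are invented for illustration; they are not Mendix APIs.
function runAsNanoflow(dbActions) {
  // Each database-related action in a nanoflow is a separate request.
  return dbActions.length;
}

function runAsMicroflow(dbActions) {
  // A microflow runs on the Runtime: the client makes one call, the server
  // executes all actions, and a single response comes back.
  return dbActions.length > 0 ? 1 : 0;
}

const actions = ["create", "commit", "retrieve"];
console.log(runAsNanoflow(actions));  // 3 round-trips
console.log(runAsMicroflow(actions)); // 1 round-trip
```

With a single database action the counts are equal, which is exactly why one-database-action nanoflows perform as well as microflows.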

In addition to the difference in network activity between nanoflows and micro-flows, there are several other differences to consider.

  • When a nanoflow steps through its activities, client actions are executed immediately. For example, an open page action opens a page right away instead of at the end of the flow, as is the case with a micro-flow. The client actions of a micro-flow can only be executed when the action returns from the server to the client.
  • The objects and variables $latestSoapFault, $latestHttpResponse, $currentSession, $currentUser and $currentDeviceType are not supported in nano-flows.
  • Nano-flows are not run inside a transaction, so if an error occurs in a nano-flow, it will not roll back any previous changes. You can view this as every activity in a nano-flow running in a transaction of its own, whereas micro-flow activities all run in one transaction.
  • Nano-flows and micro-flows do not provide the same actions. Some actions available in micro-flows are not available in nano-flows and vice versa.
  • Since nano-flows use JavaScript libraries and micro-flows use Java libraries, there can be slight differences in the way expressions are evaluated.
  • Changes performed on lists in a sub-nanoflow are not reflected in the original nanoflow. In other words, lists are passed by value rather than by reference.
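The last point can be mimicked in plain JavaScript. The sketch below imitates the pass-by-value behavior of lists in sub-nanoflows by handing the "sub-flow" a copy of the list; the function names are invented for illustration only.

```javascript
// Illustrative only: mimics how a sub-nanoflow receives a copy of a list,
// so changes made inside it do not propagate back to the caller.
function callSubNanoflow(list, subFlow) {
  const copy = [...list];   // lists cross the sub-nanoflow boundary by value
  subFlow(copy);
  return list;              // the caller's list is unchanged
}

const tasks = ["Write report"];
const result = callSubNanoflow(tasks, (l) => l.push("Added in sub-flow"));
console.log(result.length); // still 1: the sub-flow's push is not visible
```

If you do need a sub-nanoflow's list changes, return the modified list from the sub-nanoflow and use that return value in the caller.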

The Mendix IDE allows conversion between micro-flows and nano-flows. You can access this function by opening the context menu in the nanoflow and microflow editors.

JavaScript Actions

Nano-flows allow us to call JavaScript actions. We can view these JavaScript actions as the analog of Java actions in micro-flows. They allow us to expand the functionality of nanoflows with custom code. When we add a new JavaScript action, we need to provide a name for the action. The name is used as the file name for the JavaScript code, which is stored in the application directory under the subdirectory javascriptsource/{module name}/actions.

A JavaScript action is similar to a Java action in that it can use input parameters and provide a return parameter. The following types are applicable for the input and output parameters; the use of parameters is optional.

  • Object — This allows you to pass a Mendix object to a JavaScript action. You need to configure the entity type you will be passing when calling the action. Just like with Java actions, type parameters can be used to pass any type of object. In the generated JavaScript action code, this type is represented as an MxObject.
  • List — The list parameter type allows you to pass a list of Mendix objects, of a specific entity or a type parameter. This type is represented as an array of MxObjects.
  • Entity — The entity parameter will be replaced with an entity’s name when called in a nano-flow. It can be used to fill in a type parameter. This type is represented as a string.
  • Nanoflow — The nano-flow parameter type allows you to pass a nano-flow that you can call from your JavaScript action. The value of the parameter is an async function; calling it triggers the configured nano-flow. You can specify parameters as a JavaScript object and capture the return value of the nano-flow once execution finishes.

    const user = await nanoflowParameter({ Name: "John Doe" });

  • Boolean — The Boolean parameter type allows you to pass a Boolean value to a JavaScript action.
  • Date and time — This parameter type allows you to pass a date and time value, which will be represented as a JavaScript Date.
  • Decimal — This parameter type allows you to pass a decimal value, which will be represented as a Big object in the generated code.
  • Enumeration — The enumeration parameter will be represented as a string.
  • Integer/Long — The integer/long parameter type allows you to pass an integer or long value, which will be represented as a Big object.
  • String — The string parameter type allows you to pass a string value to a JavaScript action.

The generated code is depicted as follows, where we left out the comments for
conciseness.

import "mx-global";
import { Big } from "big.js";

// BEGIN EXTRA CODE
// END EXTRA CODE

/**
 * @param {string} stringParameter
 * @param {MxObject} objectParameter
 * @param {MxObject[]} listParameter
 * @param {boolean} booleanParameter
 * @param {Date} dateTimeParameter
 * @param {Big} decimalParameter
 * @param {"Module.Enum_OrderStatus._New"} enumerationParameter
 * @param {Big} integerLongParameter
 * @param {Nanoflow} nanoflowParameter
 * @returns {Promise.<void>}
 */
export async function JavaScript_action(stringParameter, objectParameter,
    listParameter, booleanParameter, dateTimeParameter, decimalParameter,
    enumerationParameter, integerLongParameter, nanoflowParameter) {
    // BEGIN USER CODE
    throw new Error("JavaScript action was not implemented");
    // END USER CODE
}
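As a sketch of what a completed action might look like, here is a hypothetical single-string-parameter action that trims and title-cases a name. The action name FormatName is invented for illustration; Mendix would generate it with the export keyword and the surrounding imports, which are omitted here so the sketch stays self-contained.

```javascript
// Hypothetical JavaScript action: trims a string and capitalizes each word.
// In Mendix this would be generated as `export async function FormatName(...)`.
/**
 * @param {string} nameParameter
 * @returns {Promise.<string>}
 */
async function FormatName(nameParameter) {
    // BEGIN USER CODE
    return nameParameter
        .trim()
        .split(/\s+/)
        .map((w) => w.charAt(0).toUpperCase() + w.slice(1).toLowerCase())
        .join(" ");
    // END USER CODE
}

FormatName("  jane DOE ").then((s) => console.log(s)); // "Jane Doe"
```

Note that the body sits between the BEGIN USER CODE and END USER CODE markers, so it survives re-generation of the template.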

In the Parameters section, we can set the description as documentation for the action, and we can group input parameters by using the category section, in the same way we already encountered for Java actions.

The return parameter type determines the type of data a JavaScript action returns. Since many APIs are asynchronous, you can also return a Promise object that resolves to this type. The return value of the JavaScript action can be given a name and stored so it can be used in the nanoflow from which the action is called.

JavaScript actions can be restricted to a platform:

  • All — The default setting.
  • Web — The JavaScript action can only be used in a browser or PWA.
  • Native — The JavaScript action can only be used in a native mobile app.

When a JavaScript action is defined for a specific platform and used in a nanoflow, it restricts the platform of that nanoflow. For example, only native pages can be opened in a nano-flow that contains a JavaScript action whose platform is set to Native.

JavaScript actions can be exposed as nanoflow actions in the same way we
encountered for the Java actions that can be exposed as microflow actions.

Now that we have our JavaScript action defined with input and return parameters, it is time to write the actual JavaScript code. In contrast to Java actions, where we needed Eclipse, for example, the code for a JavaScript action can be written without additional tooling: a code editor is directly available from the Code tab. The editor is based on the Monaco editor and offers features such as syntax highlighting and code completion. The code can be written in modern JavaScript (ES8/ES2017) and can use features like async/await and Promise.

The code has three sections: an import list, an extra code block and a user code block. All code that is added should go in one of these blocks. Code outside the blocks will be lost when the template code is re-generated on deployment or when the JavaScript action settings are updated. Additional imports should start with import and be placed above // BEGIN EXTRA CODE. Extra implementation code should be placed between // BEGIN EXTRA CODE and // END EXTRA CODE. User code should be placed between // BEGIN USER CODE and // END USER CODE.

Now that we know how to add a new JavaScript action to our app, let us examine this a bit further with an example in which we implement a text-to-speech function.

First, we add a new JavaScript action named JS_TextToSpeech with a single input parameter of type String named inputText. Set the return type to Boolean and rename the return variable to ResultTextToSpeech. The action is depicted in the figure.

JavaScript action text-to-speech

When we look at the Code tab, we see that between the user code comments, a line that throws an error is automatically generated, just like we saw with Java actions. The first step is to replace this line with the code in the snippet below, which checks whether the input parameter is empty; if so, we return false and stop execution.

export async function TextToSpeech(inputText) {
    // BEGIN USER CODE
    if (!inputText) {
        return false;
    }
    throw new Error("JavaScript action was not implemented");
    // END USER CODE
}

For spoken text, we need the Web SpeechSynthesis API. Be aware that not all browsers support this experimental API, so we add a check that verifies the API is available and throws an error if it is not. Our code is now depicted in this snippet.

export async function TextToSpeech(inputText) {
    // BEGIN USER CODE
    if (!inputText) {
        return false;
    }
    if ("speechSynthesis" in window === false) {
        throw new Error("Browser does not support text to speech");
    }
    throw new Error("JavaScript action was not implemented");
    // END USER CODE
}

To implement the text-to-speech functionality, replace the remaining throw statement at the end of the previous snippet with the code below, which uses the SpeechSynthesisUtterance and speak functions. To ensure the code finishes speaking before returning to the nano-flow that called the JavaScript action, we attach the onend and onerror handlers, as depicted in the snippet.

return new Promise(function (resolve, reject) {
    const utterance = new SpeechSynthesisUtterance(inputText);
    utterance.onend = function () {
        resolve(true);
    };
    utterance.onerror = function (event) {
        reject("An error occurred during playback: " + event.error);
    };
    window.speechSynthesis.speak(utterance);
});
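Putting the three snippets together, the complete user-code section reads as follows. So that it can also run outside a browser, this sketch omits the export keyword Mendix generates and reads window from the global scope, which lets a fake window with a stubbed speechSynthesis be supplied for testing.

```javascript
// Complete text-to-speech action assembled from the snippets above.
// `export` is omitted so the function can run outside the generated module.
async function TextToSpeech(inputText) {
    // BEGIN USER CODE
    if (!inputText) {
        return false; // nothing to speak
    }
    if ("speechSynthesis" in window === false) {
        throw new Error("Browser does not support text to speech");
    }
    return new Promise(function (resolve, reject) {
        const utterance = new SpeechSynthesisUtterance(inputText);
        utterance.onend = function () {
            resolve(true); // speech finished: hand control back to the nanoflow
        };
        utterance.onerror = function (event) {
            reject("An error occurred during playback: " + event.error);
        };
        window.speechSynthesis.speak(utterance);
    });
    // END USER CODE
}
```

In a browser, `TextToSpeech("Hello")` resolves to true once playback ends, and an empty input short-circuits to false without touching the API.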

Now that we have our JavaScript action, we can add it to a new nanoflow and call that nanoflow from the user interface to test our brand-new text-to-speech function.

Building a Native App

We have learned about offline-first behavior, native apps, nanoflows and JavaScript actions. Now, we can start building our first native app. The easiest way to start is by creating a new app using the Blank Native Mobile App starter template. This provides you with a native and a responsive web navigation profile.

The starter app has the Native styling available in the Atlas_NativeMobile module and provides the NanoflowCommons module with pre-built JavaScript actions that leverage the mobile device’s capabilities. These functions include calling a phone number, sending text messages, starting navigation, providing location information (GPS) and many more. All of them can be used in your nanoflows to easily provide any feature you need in your native app. The second module included in the starter template is NativeMobileResources. This module provides JavaScript actions to access and use the device’s camera, clipboard and network status, and gives access to services like notifications and authentication methods like fingerprint and facial recognition, to name a few.

The app is built in the same way we have already learned. We need to decide what information and which functions are needed for the native app and ensure we create pages and functionality specifically for that purpose. Only pages with native styling can be used in the native profile! When creating our domain model, we need to take into account that many-to-many associations are not available for native mobile apps. When we need this type of construct, we will need to implement it ourselves by adding an intermediary entity that holds the associated objects. In a responsive web app, we might model something like what is depicted in the figure.

Many-to-many association

For a native mobile app, we need to implement this as depicted in the figure, where we implement functionality to store the relation between Product and Category in the CatProdRelation entity with the help of the two associations. In this way, products can have multiple categories and categories can be associated with multiple products.

Many-to-many construct for native mobile use
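The join-entity pattern can be illustrated with plain data structures. The sketch below stores CatProdRelation-style records as an array of id pairs; the names and helper function are invented for illustration.

```javascript
// Illustrative join-entity ("CatProdRelation") pattern with plain objects.
const products = [{ id: 1, name: "Keyboard" }, { id: 2, name: "Mouse" }];
const categories = [{ id: 10, name: "Peripherals" }, { id: 20, name: "Sale" }];

// Each record stands in for one CatProdRelation object with two associations.
const catProdRelations = [
    { productId: 1, categoryId: 10 },
    { productId: 1, categoryId: 20 }, // a product can have multiple categories
    { productId: 2, categoryId: 10 }, // a category can hold multiple products
];

function categoriesForProduct(productId) {
    const ids = catProdRelations
        .filter((r) => r.productId === productId)
        .map((r) => r.categoryId);
    return categories.filter((c) => ids.includes(c.id)).map((c) => c.name);
}

console.log(categoriesForProduct(1)); // ["Peripherals", "Sale"]
```

In the Mendix domain model, the two `filter` steps correspond to traversing the two one-to-many associations that meet in the intermediary entity.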

We need to set up the synchronization configuration for our entities in the native navigation profile and implement custom synchronization strategies with the help of nanoflows and possibly microflows to ensure the data is available on the mobile device and can be stored in the server database.

Testing your App Locally

When you have created your first native app, you need to test its functionality. Mendix offers this option without publishing your app to an app store. You can test your app on your mobile device or, for the Android version of your app, by using an emulator. These emulators work by installing them on your Windows development machine and ensuring that Google Play services are supported. Two common emulators are:

  • BlueStacks
  • Genymotion

To test your app, you need to download and install the Make It Native app on either your mobile device or in the emulator. To view your app on an Android device or emulator, download and install the Make It Native 10 app from the Google Play store. To view your app on an iOS device, download and install the Make It Native 10 app from the Apple App Store. Viewing your app on a mobile device allows you to test native features and other aspects of your app.

  1. Start your app locally.
  2. Open the dropdown with the title View App in Mendix Studio Pro and select the option View on your device.
  3. Select the Native mobile tab. Here, you will see your test app’s QR code.
  4. Start the Make It Native app by tapping its icon on your device.
  5. Tap the Scan a QR Code button.
  6. If prompted, grant the app permission to access your device’s camera.
  7. Point your mobile device’s camera at the QR code, then press the Launch App button to open your app on the device.

Your mobile device must be on the same network as your development machine for the Make It Native app to work. If this is the case and the connection still fails, ensure communication between devices is allowed on the Wi-Fi access point. Also, Mendix recommends keeping the Runtime port in App Settings | Edit on 8080. If you change it, do not change it to 8083, because that port is designated for app packaging.

Now, you can see your app on your device. Although this is just a template app, whenever you make changes, you can view them live in your Make It Native app. Enabling the Developer Mode toggle provides more detailed warning messages on error screens, as well as additional functionality in the developer app menu.

To see how changes made in Mendix Studio Pro are displayed live on your testing device, make a small change to your app. Click Run Locally to automatically update the running app on your device and see your new changes. When you click Run Locally, your app automatically reloads while keeping its state. If you get an error screen while testing your app, there are easy ways to restart it:

  1. Tap your test app with three fingers to restart it.
  2. With the Enable dev mode toggle turned on, hold a three-fingered tap to bring up the developer app menu; here, you can access advanced settings and enable remote JS debugging.

Splitting the Domain Model

When creating very simple apps with a limited number of records, sharing the data for the web app’s functionality and the native functionality from the same domain model works quite well. When dealing with more complex native apps and more data records, this quickly becomes unworkable. The best practice is to split the domain model for the web and native apps. We still create both apps in one project but provide the data for the native app from a separate data model and, therefore, a separate module.

Imagine a domain model containing tasks. As the number of records can quickly increase, we only want to synchronize the task records that actually interest the mobile device user. To implement this, we create a second module named after the module holding the tasks with the postfix _offline, giving us TaskModule_offline. In this domain model, we add a duplicate of the Task entity, associate it with the Account entity and name the new duplicate entity Task_Native.

When a new Task is created, we need to create a new Task_Native object. We can implement this with a microflow as an event handler on the Task entity or via a micro-flow action that we use when saving the Task, depending on your specific situation. Now, we need to make sure that the Task_Native records are synchronized to our native app. We do this by replacing the Task entity in the offline synchronization configuration with the Task_Native entity. When creating the Task_Native object, make sure to associate the record with the account that needs access to this task. Now, we can set the download option for the Task_Native objects to By XPath and restrict the records by using the association to the Account. The mobile pages need to use this new entity as well and need to be adjusted or created using the Task_Native objects.

When we synchronize the native counterpart of the Task, we need to be able to
update the original Task. This can be achieved by having a unique identifier on the
native counterpart, such as a task ID, or by associating the native object with the original
object.

The synchronization of the Task_Native object from the device is implemented
with the help of a nanoflow. In the nanoflow connected to the UI, we first check if
the app is connected to the server with the help of the IsConnected JavaScript
action from the Native Mobile Resources module. The next step is to add a
Synchronize to device activity that we provide with the Task_Native object. Do not
forget to add an error handler.

In the domain model, add an after-commit event to
the Task_Native object to update the original Task object with the changes made
in the Task_Native object. Now, by setting the association to the accounts, we can
control which objects are synchronized to the device, and additionally, we can
implement the function that creates the native counterparts in such a fashion that
they only exist when needed.
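Since nanoflows compile to JavaScript, the logic of this synchronization nanoflow can be sketched in plain JavaScript. This is an illustrative sketch only: `client.isConnected` and `client.synchronizeToDevice` are hypothetical stand-ins for the IsConnected JavaScript action and the Synchronize to device activity, not actual Mendix Client APIs.

```javascript
// Hypothetical sketch of the synchronization nanoflow's logic.
async function syncTask(taskNative, client) {
  // Step 1: check connectivity before attempting to reach the server
  // (mirrors the IsConnected JavaScript action).
  const online = await client.isConnected();
  if (!online) {
    return { synced: false, reason: "offline" };
  }
  try {
    // Step 2: push only this Task_Native object
    // (mirrors the Synchronize to device activity).
    await client.synchronizeToDevice(taskNative);
    return { synced: true };
  } catch (err) {
    // Step 3: the error handler keeps the app usable when sync fails.
    return { synced: false, reason: err.message };
  }
}
```

The early return on the offline check is the key point: the app degrades gracefully instead of failing when no connection is available.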

Deleted Flag Pattern

Using Synchronize to device from a microflow allows for fine-grained control
over which objects are synchronized. However, we have no control over the
synchronization of deleted objects in the offline device database. This means that
if an object is deleted from the server, it remains available to the client unless a
full synchronization is run.

The deleted flag pattern overcomes this limitation by introducing a Boolean
attribute (the deleted flag) in the entity. This attribute is used to flag deleted
objects rather than deleting them. By implementing the access rules for the
entities so that the user has no access to objects where the deleted
flag is true, the objects are no longer available on the device after synchronization. On the server side, we can delete the objects by utilizing the
deleted flag attribute value.

To implement this pattern

  • Set the synchronization mode of the target entity to Nothing (preserve data).
  • Add a Boolean attribute to the target entity to flag objects that have been
    deleted, for example isDeleted, and set its default value to false.
  • Replace any logic that deletes objects of the target entity with a microflow
    that sets this attribute to true.
  • Add the following XPath constraint to all access rules of the target entity to
    limit access to objects that are not flagged as deleted: [not(isDeleted)].
  • Add an index for the new attribute to optimize database performance.
  • Deny read and write access to the new attribute for all roles; as we
    control the value of the attribute with a microflow, no permissions are
    needed.

Be aware that objects using this pattern are no longer deleted from the server
database, which can lead to performance and storage problems. Cleaning up
flagged objects after a set time can overcome this, but you must ensure that all
clients are synchronized before cleaning up. Not doing so can lead to
synchronization errors.
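To make the mechanics concrete, here is a minimal plain-JavaScript sketch of the pattern. The objects and helpers are illustrative stand-ins for Mendix entities and microflows, and `visibleTasks` mirrors the effect of the `[not(isDeleted)]` access rule, not an actual XPath evaluation.

```javascript
// Instead of removing the record, flag it as deleted
// (stands in for the microflow that sets isDeleted to true).
function softDelete(task) {
  return { ...task, isDeleted: true };
}

// Mirrors the [not(isDeleted)] constraint: clients only ever
// see records that are not flagged.
function visibleTasks(tasks) {
  return tasks.filter(t => !t.isDeleted);
}
```

After the next synchronization, flagged records simply fall outside the client's access rules and disappear from the device.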

Delta Synchronization Pattern

When using the all objects synchronization option, all the records available to the
user performing a synchronization action are sent to the device. Objects that are
already on the device and objects that have not been changed are
synchronized along with new objects. This can cause long wait times for larger
data sets due to the amount of data that needs to be transmitted every time a full
synchronization action is triggered.

To improve this synchronization, we can implement a pattern by which we only
synchronize new and changed objects. To implement this pattern, we need to keep
track of the last synchronization and the last change date of the objects to be
synchronized.

  • On the entity we want to synchronize, enable the system attribute
    changedDate in the domain model.
  • Create a new entity with an attribute to store the last synchronization date
    (for example, SyncHelper/LastSyncDate).
  • Set the default value for the synchronization date attribute to 1970-01-01.
  • Set the synchronization mode of the SyncHelper entity to never.

Create a microflow to trigger the synchronization.

  1. First, add an input parameter of type date and time.
  2. Then, add an activity to retrieve all objects that need to be
    synchronized from the database and have a changedDate greater than
    the parameter.
  3. Add a Synchronize to device activity for these objects.

Create a nanoflow to initialize the helper entity.

  1. Add a retrieve action to retrieve the first helper from the database.
  2. Add a decision to check if the helper exists and return the object.
  3. Add a create object activity for the situation where the helper does not
    exist and return this new object.

Create a nanoflow to trigger the synchronization from the mobile device.

  1. The first step is to call the initialization nanoflow above to retrieve the
    helper.
  2. Add a call microflow activity to call the synchronization microflow
    created above with the parameter SyncHelper/LastSyncDate to trigger
    the synchronization.
  3. Add a retrieve object activity to retrieve the synchronized object
    with the latest changedDate from the database.
  4. As a final step, add a change activity to set the LastSyncDate for the
    helper and make sure to commit the helper object.

Be aware that synchronization by using deltas does not speed up an app’s initial
synchronization. When you use this best practice pattern for multiple entities, be
sure to track the individual synchronization dates as separate attributes of the
SyncHelper object.
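The bookkeeping behind delta synchronization can be sketched in a few lines of plain JavaScript. This is not Mendix API code: `changedDate` and `lastSyncDate` are plain numbers standing in for the system attribute and the SyncHelper/LastSyncDate value.

```javascript
// Only records changed after the previous synchronization are sent
// (mirrors the retrieve with "changedDate greater than the parameter").
function objectsToSync(allObjects, lastSyncDate) {
  return allObjects.filter(o => o.changedDate > lastSyncDate);
}

// The helper is advanced to the latest changedDate that was synced
// (mirrors the final change activity that commits LastSyncDate).
function nextSyncDate(syncedObjects, lastSyncDate) {
  return syncedObjects.reduce(
    (latest, o) => (o.changedDate > latest ? o.changedDate : latest),
    lastSyncDate
  );
}
```

Storing the latest synced changedDate rather than the current clock time avoids missing records when device and server clocks disagree.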

Batch Synchronization Pattern

With an offline app, we need to synchronize the data from the server to the local
database on the device. Depending on the amount of data, the synchronization can
take more time, and by default the synchronization process does not provide any
feedback to the user. This means that the user has no idea of the progress of the
synchronization. To implement this feedback, we can split the data
synchronization into smaller actions that we can track and use to provide feedback
to the user. This increases the user’s sense of understanding and control while
using your app.

This best practice is implemented as follows.

  1. Implement the delta synchronization pattern discussed in the previous
    section for the target entity.
  2. Add a non-persistent entity to store the progress of your synchronization,
    for example, SyncProgress with an attribute Progress of type integer.
  3. Add an integer parameter named Offset to the microflow that retrieves and
    synchronizes the changed objects.

Synchronization microflow with offset

  • Use the Offset in the retrieve and set the amount to a fixed value, for
    example 100.
  • Make sure to retrieve the products where the changedDate is greater than
    the SyncDate so that only those objects that have been changed after the
    last synchronization are retrieved.
  • Sort the list by the changedDate attribute.
  • Create a new microflow that retrieves all the products that have been
    changed since the previous synchronization, count the retrieved objects,
    and return the count as depicted in the figure.

Count products to be synced microflow

The nanoflow that triggers the synchronization process accepts the SyncProgress
object as input and consists of these activities.

  1. First, call a nanoflow that retrieves or creates the SyncHelper object, as we
    have seen in the delta synchronization pattern.
  2. The second step is to call the microflow from the previous step to
    determine the number of products we need to synchronize.
  3. This is followed by creating an Offset variable initialized to 0.
  4. Add a loop with a while statement:
$Offset <= $Count_ProductsToBeSynced

5. In the loop, we first call the synchronization microflow that we added the
Offset to.

6. The second step in the loop is to change the Offset variable to the current
value plus the amount we used in the previous microflow call, 100, for
example.

7. We add a change activity in which we change the SyncProgress object to
the percentage of progress using the following expression.

round(($Offset div $Count_ProductsToBeSynced)*100)

8. After the loop, we add a retrieve activity that retrieves 1 product record
with the latest changedDate.

9. The product from the previous step is used to update the SyncHelper’s
LastSyncDate attribute.

Batch synchronization with progress nanoflow

10. To show the progress, add a data source nanoflow in which we create a
SyncProgress object.

11. We surround the button that triggers the synchronization nanoflow with a
data view that uses the data source nanoflow created in the previous step.

12. In the data view, add a widget to show the Progress attribute of the object.
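The loop in steps 4 to 7 can be sketched in plain JavaScript. `batchProgress` is a hypothetical helper that only simulates the offset arithmetic and the progress expression; the actual Synchronize to device call for each slice is represented by a comment.

```javascript
// Sync `batchSize` records at a time and report progress after each
// batch using the same formula as in the change activity.
function batchProgress(total, batchSize) {
  const steps = [];
  let offset = 0;
  while (offset <= total) {
    // One "Synchronize to device" call would happen here for the
    // slice [offset, offset + batchSize).
    offset += batchSize;
    // round(($Offset div $Count_ProductsToBeSynced) * 100), capped at 100%.
    steps.push(Math.min(100, Math.round((offset / total) * 100)));
  }
  return steps;
}
```

For example, with 250 changed products and a batch size of 100, the reported progress steps are 40, 80, and 100 percent.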

For more complex synchronization scenarios that use multiple entities, create a
separate synchronization page to show the progress of all synchronized entities.

The batch size used should be between 100 and 10,000 objects. Larger batches
tend to synchronize faster, but smaller batches give more responsive feedback to
users.

Be aware that data might change during the batch synchronization. When this
happens, the Offset variable might run out of sync, resulting in incomplete
synchronizations. Use this pattern only for data that changes infrequently.
Alternatively, you can implement a locking mechanism that prevents
synchronizations when data changes, or vice versa.

Compound Object Pattern

This pattern lets you combine multiple objects to improve synchronization
performance. Synchronizing data spread across multiple entities requires each
entity to be synchronized individually. This can lead to performance problems
because of the amount of data that is transmitted and the complexity of the queries
on the offline database. These performance problems can be mitigated by
combining multiple objects into compound objects. It is then sufficient to
synchronize only the compound objects and ignore much of the complexity of the
server’s database on the client.

To implement this pattern

  • Create a new entity to store the compound object.
  • Add all attributes needed in the offline client, including those from related
    entities, to the compound object.
  • Create a 1-1 association between the compound object and the source
    object. Configure the association to delete the compound object on deleting
    the source object.
  • Create a microflow that retrieves and returns the compound object
    associated with the source object. If it does not exist, a new compound
    object is returned instead. For the new object, the association to the source
    object must be set.
  • Create a microflow and configure it as an after commit event handler of the
    source entity. This microflow ensures that the compound object is created
    and updated when the source object is created or changed. It consists of:
    • A call microflow activity that retrieves or creates the compound
      object.
    • Retrieve activities for all the associated entities of the source object
      that are needed in the compound object.
    • A change activity to update the attributes with the retrieved values.
  • Create additional microflows as after commit event handlers for all other
    entities with attributes included in the compound object. These microflows
    ensure that the compound object is updated when the related object
    changes. In each of these microflows:
    • Create a list of compound objects to commit all changes in a single
      action.
    • Traverse the associations with retrieve by association to get to the
      source entity. Use loops as needed.
    • Retrieve the compound object for the source object and add it to the
      initially created list.
    • Change the compound object and update the attributes relating to the
      object from the microflow’s parameter.
    • Commit the list of compound objects.

The compound objects can now be synchronized instead of the original entities
and be used in the pages of your native app.

  • The after commit event handlers used in this best practice can lead to
    performance issues if the source object or related objects change
    frequently. In these cases, use a designated update microflow instead of
    after commit event handlers.
  • If associations to related objects can be empty, handle this in the update
    microflow.
  • It is often useful for compound objects to store aggregate values, such as
    the number of related objects. These can be computed using the
    appropriate List Aggregation action in the update microflow.
  • It is assumed that compound objects are not changed by the offline client.
    If this is needed, combine the compound object with a request object,
    which we will discuss in the next section.
  • Combining the compound object with the delta synchronization pattern,
    which we discussed earlier, can further increase synchronization performance.
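As an illustration of what a compound object can contain, here is a hypothetical plain-JavaScript builder for an order compound object. The entity and attribute names (order, customer, order lines) are assumptions for the sketch, and the aggregate fields mirror the List Aggregation advice above.

```javascript
// Flatten a source object and its related entities into one record
// that can be synchronized on its own, including aggregate values.
function buildCompound(order, customer, lines) {
  return {
    orderId: order.id,
    orderDate: order.date,
    customerName: customer.name,      // copied from a related entity
    lineCount: lines.length,          // aggregate: count of order lines
    total: lines.reduce((sum, l) => sum + l.amount, 0), // aggregate: sum
  };
}
```

Because everything the client needs lives in one flat record, the offline database no longer has to join across entities at query time.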

Request Object Pattern

This pattern lets you capture changes as objects and apply them after
synchronization, making these changes more secure. Creating and manipulating
objects in a complex data structure from an offline client can lead to performance
issues, security concerns, and data inconsistency. This is because the entire
domain model is replicated in the offline database, and synchronizing changes can
lead to conflicts with other parties editing the data. Part of the solution is using the
compound object pattern; with the help of the request object pattern, we can
ensure that we can update the source objects when using the compound pattern.

Request objects capture the requested changes in a separate object in the offline
database and then apply the changes in a single transaction after they have been
synchronized. This reduces the amount of data that needs to be synchronized and
allows the transaction to be rolled back in the case of inconsistent data.

Follow these steps to implement the request object pattern

  1. Create one or more entities to store the changes that the offline client can
    make. In the domain model example below, we have added the
    Request_Order and Request_OrderPosition entities to allow the native user
    to create orders with order positions while offline.

Request entities

2. Make sure the user has permissions on these objects and restrict the records
to those created by the user.

3. Ensure the native app will only create and edit the request objects.

4. Create a microflow that applies the changes from the request objects to the
target objects. In the microflow, retrieve the request objects and apply the
stored changes to your domain model. In this example, we create
Order and OrderLine objects based on the request objects.

Create target objects microflow

5. Create a nanoflow that triggers the microflow listed above. Ensure that all
request objects are synchronized before calling the microflow and, if they
are changed in the process, after calling it as well.

Synchronization nanoflow creates Order

6. If the native client uses multiple request objects in parallel, add a unique
identifier to the main request object by using the Get guid nanoflow activity,
as a reference that can be passed to the microflow.

7. It can be useful for data integrity to store the processed request objects in
the server database. In that case, use the deleted flag pattern to remove
them from the offline client.

8. Combine request objects with compound objects to allow reading from and
writing to complex data structures.
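The server-side microflow of step 4 can be sketched as a plain-JavaScript function. `applyRequests`, the in-memory `db`, and the `processed` flag are all hypothetical stand-ins; `processed` hints at the deleted flag pattern mentioned in step 7.

```javascript
// Apply synchronized request objects to the real domain model.
// On the server this would run as one transaction, so it can be
// rolled back as a whole if any request is inconsistent.
function applyRequests(requests, db) {
  for (const req of requests) {
    // Create the real Order from the captured request.
    db.orders.push({ id: db.orders.length + 1, customer: req.customer });
    // Mark the request as handled so the deleted flag pattern
    // can later remove it from the offline client.
    req.processed = true;
  }
  return db;
}
```

The offline client never touches `db.orders` directly; it only ever creates request objects, which keeps permissions on the real entities tight.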

Thanks For Reading 😊
