From Infinity 8.8 onwards, we have separated Constellation UI content and delivery into two categories, each with its own delivery mechanism:
This is the js that takes the UI metadata and case values, builds the presentation on the client, and then manages interactions between the browser and the case engine. It is exactly the same for all customers, and is tightly bound to the Infinity V2DxAPIs. We publish this to a Pega CDN, and all Cosmos React portals automatically load the Pega-generated js from there.
All of the client-side js is versioned as a complete package and matched with Infinity. All versions are managed within the CDN, with each js request including the required version.
A new version of the client side is released in step with Infinity releases. For customers that require client-side-only features or fixes faster than the Infinity release cycle, Constellation updates are available as soon as they are developed; see 'Constellation update adoption'.
As customers author their applications, they add customer- and application-specific UI assets to the presentation. These assets are stored in the Constellation Application Static Content Service. This approach is unchanged from 8.7:
Example deployment architecture and data flows:
The Pega CDN has regional caches to give fast content delivery worldwide. HTTP/1 and HTTP/2 are supported.
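As a quick sanity check, the protocol the CDN negotiates can be inspected from any client that supports HTTP/2. Below is a minimal sketch using the Python httpx package (installed with its optional HTTP/2 extra); the root path may return an error status, but the negotiated protocol version is still reported:

```python
# Sketch: check which HTTP version the Pega CDN negotiates.
# Assumes httpx is installed with HTTP/2 support:
#   pip install "httpx[http2]"
import httpx

with httpx.Client(http2=True) as client:
    resp = client.get("https://release.constellation.pega.io/")
    # http_version is "HTTP/2" when the CDN negotiates HTTP/2,
    # otherwise "HTTP/1.1".
    print(resp.http_version, resp.status_code)
```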
The Application Static service is designed to handle many Infinity deployments. Typically, for an on-prem or Customer-Cloud install, a single install should be capable of handling requests from all of the organisation's Infinity deployments: production, stage, test, and dev.
Pod autoscaling can be applied, and multiple pods can be run for HA.
Operational statistics (Prometheus) are exposed through the /metrics endpoint.
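A minimal sketch of scraping that endpoint, assuming an App-Static service reachable at a hypothetical URL. The Prometheus exposition format is plain text, one sample per line, with '#' lines carrying HELP/TYPE comments:

```python
# Sketch: scrape the service's Prometheus metrics.
# The base URL is a placeholder -- substitute your service's URL.
import urllib.request

base = "https://appstatic.example.com"  # hypothetical service URL
with urllib.request.urlopen(f"{base}/metrics") as resp:
    text = resp.read().decode("utf-8")

# Print just the metric samples, skipping HELP/TYPE comment lines.
for line in text.splitlines():
    if line and not line.startswith("#"):
        print(line)
```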
We run a single K8s pod on a lightly loaded t3.medium to support all of Pega's own use.
For releases matching the Infinity release cycle, the Infinity DSS ConstellationPegaStaticURL should be set to https://release.constellation.pega.io
With this set, Constellation applications can be authored and run. No other setup or installs are necessary.
If elevated-security CSP rules are in use, the CDN URL should be added to the connect-src and script-src directives. Only the domain is required.
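As an illustration, here is a small check that a CSP header lists the CDN in both directives. The header string is only an example of what such a policy might look like:

```python
# Sketch: verify a CSP header allows the Pega CDN in both
# connect-src and script-src. The csp string is an example only.
csp = (
    "default-src 'self'; "
    "script-src 'self' https://release.constellation.pega.io; "
    "connect-src 'self' https://release.constellation.pega.io"
)

# Parse the header into {directive: [sources]}.
directives = {}
for part in csp.split(";"):
    tokens = part.split()
    if tokens:
        directives[tokens[0]] = tokens[1:]

cdn = "https://release.constellation.pega.io"
for name in ("connect-src", "script-src"):
    ok = cdn in directives.get(name, [])
    print(f"{name}: {'ok' if ok else 'MISSING ' + cdn}")
```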
This minimal setup has the restriction that custom images, custom components, and localisations cannot be added to the application. To use them with the UI, install the App-Static service, which handles the customer's application-specific content.
For customers that have severe restrictions on internet access, or security concerns, a service that mimics the CDN can be installed locally. This is not recommended, as it brings additional operational costs, maintenance operations, and update restrictions; see 'on-prem CDN service install'.
CS and SA have additional libraries for their UI. These are exposed through an alternate URL:
For CS customers that have severe restrictions on internet access, or security concerns, the same CDN-mimicking service can be installed locally. As above, this is not recommended, as it brings additional operational costs, maintenance operations, and update restrictions; see 'on-prem CDN service install'.
The service can be installed from a Docker image, from a K8s YAML file, or from a Helm chart.
For anything other than a simple single-user desktop trial, the service should be exposed to browsers and publishers through a load balancer.
Integration with Infinity is completed by populating the Infinity DSS ConstellationSvcURL with the App-Static service's public URL.
For the DSS value, use a fully resolved URL; do not use a relative path.
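A quick way to catch a relative path before it reaches the DSS is to check that the value parses as an absolute URL. A minimal sketch, using a hypothetical service URL:

```python
# Sketch: sanity-check a value intended for the ConstellationSvcURL DSS.
# A fully resolved URL must carry a scheme and a host; a relative
# path fails this check.
from urllib.parse import urlparse

def is_fully_resolved(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(is_fully_resolved("https://appstatic.example.com/"))  # True
print(is_fully_resolved("/appstatic/"))                     # False: relative
```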
If elevated-security CSP rules are in use, the App-Static URL should be added to the connect-src and script-src directives, as for the CDN URL above. Only the domain is required.
Customer-generated UI static assets (images, custom components, localisation files) are synchronised from Infinity to the Application Static service on a number of triggers within Infinity. In the simplest deployment, these assets are stored within the Docker image.
Storing assets only inside the container means they are lost when the container is replaced, and they are not shared between replicas. Both of these problems are solved by using a shared disk across the containers. The shared disk can be a simple local drive, a network file system, or a K8s PVC. An NFS is simple and also shares across data centers, which is very effective for HA. We use EFS for cross-region availability. For K8s deployments, a PV and PVC can be used.
The Constellation service provides static content to browsers. Browsers cache the content, leaving relatively low network, CPU, and memory needs. The service is targeted at K8s deployment, as that is the easiest, but it is not the only option: a simple Docker container on a plain VM also works.
An HTTPS certificate must exist on the URL endpoint used by the browser for content fetches and by Infinity for content writes. The certificate must match the domain the service is on.
One approach is to terminate HTTPS, with the certificate, on the exposed load balancer. HTTP can be used behind the load balancer, back to the service. Alternatively, a service mesh can provide HTTPS from the load balancer to the service.
The service can also expose HTTPS directly. The key and certificate files are placed on a disk, and the disk is mounted to the container. The Docker install instructions describe this in more detail.
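Certificate problems (wrong domain, broken chain, expiry) are a common install issue, so it can be worth verifying the endpoint's certificate from a script before pointing Infinity at it. A minimal sketch, assuming a hypothetical service hostname; Python's default SSL context performs the same chain and hostname checks a browser would:

```python
# Sketch: confirm the certificate on the service endpoint is valid
# and matches the domain. The hostname below is hypothetical.
import socket
import ssl

host = "appstatic.example.com"  # replace with your service's domain
context = ssl.create_default_context()  # verifies chain and hostname

with socket.create_connection((host, 443), timeout=10) as sock:
    # wrap_socket raises ssl.SSLCertVerificationError on a bad chain
    # or a hostname mismatch -- the same failures Infinity will hit.
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("subject:", cert["subject"])
        print("notAfter:", cert["notAfter"])
```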
The service API is exposed through the /swagger.html endpoint.
This service is very reliable, and little service maintenance or monitoring is required. Here is our monitoring dashboard:
All requests to the service, and unexpected operations within it, are logged. The log has Unix-epoch timestamps on every request. This is always the first place to start: did the request reach the service? What happened during processing? What was the response?
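When tracing a request through the log, converting the epoch timestamps to readable times helps line entries up with the Infinity log. A minimal sketch, assuming (hypothetically) that each line starts with an epoch value in milliseconds and that the log file is named appstatic.log:

```python
# Sketch: render epoch-timestamped log lines as readable UTC times.
# Adjust the divisor if your log uses seconds rather than milliseconds.
from datetime import datetime, timezone

def readable(line: str) -> str:
    ts, _, rest = line.partition(" ")
    if not ts.isdigit():
        return line  # not a timestamped line; leave as-is
    when = datetime.fromtimestamp(int(ts) / 1000, tz=timezone.utc)
    return f"{when.isoformat()} {rest}"

with open("appstatic.log") as log:  # hypothetical log file name
    for line in log:
        print(readable(line.rstrip()))
```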
Issues that show up for application developers using App Studio, or for run-time portal end users, should always be checked initially using the browser DevTools network trace. From this trace it can be determined whether there is an issue with a call to Infinity or a call to the service. Do not rely on the DevTools console.
The DxV2API on the case engine is HTTPS-only. As browsers will not allow mixed HTTPS and HTTP content, HTTPS must be used for the App-Static service. The certificate must match the domain and be valid.
There is a basic 'are you connected' HTTP endpoint at /v860/ping. This should result in an HTTP 200 response. If there is no response, check the log to see whether the request reached the service, then work back through the network to find the problem. While installing the service, checking the /ping endpoint over plain HTTP can be very helpful in isolating HTTPS issues.
A more detailed check of js and disk operation is available at the /v860/healthcheck endpoint.
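A minimal smoke test exercising both endpoints; the base URL is hypothetical, and both checks expect an HTTP 200:

```python
# Sketch: smoke-test the ping and healthcheck endpoints named above.
import urllib.request

base = "https://appstatic.example.com"  # hypothetical service URL

for path in ("/v860/ping", "/v860/healthcheck"):
    try:
        with urllib.request.urlopen(base + path, timeout=10) as resp:
            print(path, resp.status)  # expect 200
    except Exception as exc:
        # No response: check the service log, then work back through
        # the network (load balancer, DNS, certificates).
        print(path, "FAILED:", exc)
```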
The custom component, UI image, and localisation asset sync is also HTTPS, with authorisation from Infinity to the service. A failure in this operation will result in missing assets in the UI. Java HTTP libraries are very brittle with HTTPS certificates: pushing assets to an endpoint with a bad certificate generates obscure error messages in the Infinity log (trigger a push by clearing, saving, resetting, and saving the c11n svc DSS). The domain must match and the certificate must be valid.
One easy path to solving both of these issues is to put a good certificate on the load balancer.
Custom Components are uploaded from the desktop 'DX Component Builder' to Infinity, and then automatically pushed to the service for use from the UI. Check the timestamps on the specific Rule-UI-Component in Infinity to ensure that the desktop publish is working correctly. Then check the Infinity log for issues around that timestamp. Then check the service log for the inbound asset sync at that time. Then check the service log for the browser request for the asset.
All requests and unexpected operations are logged. The log should be inspected regularly for issues. We set automated monitoring on the logs using 'Metric Filters':
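Where a managed metric-filter facility is not available, the same idea can be approximated with a periodic log scan. A minimal sketch; the patterns and the log file name are assumptions, not the service's actual log format:

```python
# Sketch: a crude stand-in for a metric filter -- count error
# patterns in the service log so the counts can feed an alerting
# system. Patterns and file name are illustrative assumptions.
patterns = ("ERROR", "EACCES", "ENOSPC", "certificate")

counts = {p: 0 for p in patterns}
with open("appstatic.log") as log:  # hypothetical log file name
    for line in log:
        for p in patterns:
            if p in line:
                counts[p] += 1

for pattern, count in counts.items():
    print(f"{pattern}: {count}")
```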
The Docker image provided by Pega is built from node:xx-alpine, a standard Node.js-on-Alpine image. It is small and excellent for the purpose. Some customers prefer to build on their own images. This can be done by taking the Pega-provided image and transferring its Pega content to the customer's own image.
Custom Docker image build instructions: