Equity Analytics - Test Custom Analytics
This page outlines how to use the .fsi.eqea.generateOrderAnalytics
API to test the custom analytics that have been loaded to the system.
Prerequisites
- Install the kdb Visual Studio Code extension
- Add custom analytics
Tip
To achieve the best performance for your custom analytics, use the Order Analytics Utility Functions. These utility functions cover a wide range of scenarios, simplify the development process, and ensure optimal performance for your custom analytics.
Test the .fsi.eqea.generateOrderAnalytics API
Note
You can configure a pipeline to use the .fsi.eqea.generateOrderAnalytics API for nightly runs to persist the results to disk in the OrderAnalytics table.
Once you have added custom analytics, validate them using the .fsi.eqea.generateOrderAnalytics API. For more information, refer to the Generate Order Analytics API documentation.
Using the kdb Visual Studio Code extension connected to your Insights environment, run the below snippet, updating the values as required:
q
// Open handle to GW
gw:hopen`$":insights-sg-gateway:5050";
// Define the variables needed to run the query:
st:INSERT_START_TIMESTAMP;
et:INSERT_END_TIMESTAMP;
apiScope:`$"INSERT_PACKAGE_NAME";
// Set up our argument dictionary
args:(!) . flip (
(`table ; `Order);
(`startTS ; st);
(`endTS ; et);
(`scope ; (enlist `assembly)!(enlist apiScope))
);
// Run the API on the GW and set the results to a variable named `t`
t:gw(`.fsi.eqea.generateOrderAnalytics;args;`;()!());
last t
The resulting dataset contains the results for all the deployed analytics, provided that:
- The required data is available to run the analytics.
- The custom analytics have been correctly added.
- The code used to run the custom analytics has been written correctly and runs without error.
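As a quick check on the results, you can inspect the returned payload in the same q session. The following is a minimal sketch, assuming t was set by the snippet above and that one of your custom analytics is named myCustomAnalytic (substitute your own analytic name):
q
// The payload table is the last element of the response
res:last t;
// Confirm the analytic column is present in the results
`myCustomAnalytic in cols res
// Count how many rows received a non-null value for the analytic
sum not null res`myCustomAnalytic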
If the custom analytics run without error, a single line referencing the name of the analytic appears in the DAP logs, similar to the below:
q
{"time":"2025-03-31T16:21:01.892z","corr":"0e53291e-c386-42a5-9bce-9dd4a357e757","component":"eqea","level":"INFO","message":"Running Order Execution Analytic Function: [ .my.custom.function ] ","service":"dap","mount":"hdb"}
If an error has occurred, two lines referencing the name of the analytic appear in the DAP logs, similar to the below:
q
{"time":"2025-03-31T16:21:01.892z","corr":"0e53291e-c386-42a5-9bce-9dd4a357e757","component":"eqea","level":"INFO","message":"Running Order Execution Analytic Function: [ .my.custom.function ] ","service":"dap","mount":"hdb"}
{"time":"2025-03-31T16:21:01.892z","corr":"0e53291e-c386-42a5-9bce-9dd4a357e757","component":"eqea","level":"INFO","message":"Error Encountered Running Order Execution Analytic Function: [ .my.custom.function ] - Error Message: [ rank ] "}
The second line follows the format:
Error Encountered Running Order Execution Analytic Function: [ <FUNCTION_NAME> ] - Error Message: [ <ERROR_MESSAGE> ]
This is designed to provide detailed information to help simplify the debugging process.
Debug custom analytics
This section walks through debugging custom analytics, with examples of the errors you might encounter and how to resolve them.
In this example, we intentionally add a custom analytic incorrectly, with bugs.
First, add the code provided below to the file ${PKG_NAME}/src/example.eqeaCustomAnalytics.q, which was created in the first example, when adding custom analytics.
q
// Add function for debugging example
.eqea.config.custom.analytics,:flip `analytic`analyticType`funcName`aggClause`marketDataTabName`joinTimeOffset! flip enlist (
(`myCustomAnalytic ; `myCustomAnalyticType ; `.my.custom.function ; (max;`vol) ; `Trade ; 00:00:30)
);
// Write our custom analytic function for debugging example
.my.custom.function:{[OrderAnalyticsRes]
// We can use a util function `.eqea.util.runSimpleAnalytic` to run analytics that are dependent on columns that already exist in the OrderAnalytics table
cfg:select from .eqea.analytics.cfg where analyticType=`myCustomAnalyticType;
.eqea.util.tickData.getDataAndAggFromCfg[OrderAnalyticsRes;cfg;();`strikeTime;`orderCompletedTime;0b]
}
Next, tear down the existing package and deploy the updated package.
Shell
# Teardown existing package
kxi pm teardown ${PKG_NAME}
# Redeploy updated package
kxi pm push ./${PKG_NAME} --force --deploy
Error when running .fsi.eqea.generateOrderAnalytics
When the package is fully deployed, you can test that your custom analytic runs correctly using .fsi.eqea.generateOrderAnalytics as outlined here.
For the purpose of this example, when running .fsi.eqea.generateOrderAnalytics, an error occurs as the analytic is defined in .eqea.analytics.cfg but not in the OrderAnalytics table.
When you run the API:
q
t:gw(`.fsi.eqea.generateOrderAnalytics;args;`;()!());
Notice the () in the last item, where you would expect to see our data on a successful API call.
Inspect the first element of t:
q
rcvTS | 2025.04.02D11:25:23.466000000
corr | 4fc34106-c6a8-4227-b286-0a9e807add6a
logCorr | "4fc34106-c6a8-4227-b286-0a9e807add6a"
api | `.fsi.eqea.generateOrderAnalytics
agg | `:10.37.151.32:5070
refVintage| 17592186197530
rc | 10h
ac | 10h
ai | "Unexpected error (Error running .fsi.eqea.generateOrderAnalytics Analytic Defined in .eqea.analytics.cfg but not in OrderAnalytics table: myCustomAnalytic) encountered executing .fsi.eqea.generateOrderAnalytics"
The value of the ai key in first t contains the following error message:
"Unexpected error (Error running .fsi.eqea.generateOrderAnalytics Analytic Defined in .eqea.analytics.cfg but not in OrderAnalytics table: myCustomAnalytic) encountered executing .fsi.eqea.generateOrderAnalytics"
This is expected as a custom analytic has been added to the config table, but there is no column in the OrderAnalytics table to store the corresponding result.
To remedy this, add a myCustomAnalytic column to the file ${PKG_NAME}/patches/CustomAnalytics.yaml, which was originally created in the first example.
The updated ${PKG_NAME}/patches/CustomAnalytics.yaml file should now look as below:
YAML
kind: Package
apiVersion: pakx/v1
metadata:
  name: target
spec:
  manifest: {}
  tables:
    schemas:
      - name: OrderAnalytics
        columns:
          - name: myCustomAnalytic
            type: long
          - name: reversionAskPrice_30
            type: float
          - name: reversionBidPrice_30
            type: float
          - name: strikeToCompletionBidMidPrice
            type: float
          - name: strikeToCompletionAskMidPrice
            type: float
          - name: countPriceUnderLimitPrice
            type: int
          - name: sumVolumeUnderLimitPrice
            type: long
          - name: myArrivalTradePrice
            type: float
          - name: myArrivalTradePrice_5
            type: float
          - name: myArrivalTradePrice_10
            type: float
Apply the updated patch using the CLI:
Shell
# Apply the overlay
kxi package overlay ${PKG_NAME} ${PKG_NAME}/patches/CustomAnalytics.yaml
Tear down the existing package and redeploy the updated package.
Shell
# Teardown existing package
kxi pm teardown ${PKG_NAME}
# Redeploy updated package
kxi pm push ./${PKG_NAME} --force --deploy
Error when running Order Execution Analytic Function: [ .my.custom.function ]
This section provides examples of two errors and how to fix them.
Error due to bug in custom function
When the package is fully deployed, you can test that your custom analytic runs correctly using .fsi.eqea.generateOrderAnalytics as outlined here.
When you run the API:
q
last t:gw(`.fsi.eqea.generateOrderAnalytics;args;`;()!());
The API returns data but your new column myCustomAnalytic is empty.
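A quick client-side check confirms that the column came back entirely null. The following is a minimal sketch, assuming t is the result of the call above:
q
// Returns 1b if every value in the new column is null
all null (last t)`myCustomAnalytic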
Looking at the DAP logs, your function .my.custom.function encountered the following error:
{"time":"2025-04-02T13:31:35.664z","corr":"5890c50b-c2d7-4814-b794-b876466f1ead","component":"eqea","level":"ERROR","message":"Error Encountered Running Order Execution Analytic Function: [ .my.custom.function ] - Error Message: [ type ] ","service":"dap","mount":"hdb"}
That means your function has encountered a type error.
Upon inspecting the deployed code, notice that a semicolon is missing.
Therefore, update your definition of .my.custom.function in the file ${PKG_NAME}/src/example.eqeaCustomAnalytics.q to:
q
.my.custom.function:{[OrderAnalyticsRes]
// We can use a util function `.eqea.util.runSimpleAnalytic` to run analytics that are dependent on columns that already exist in the OrderAnalytics table
cfg:select from .eqea.analytics.cfg where analyticType=`myCustomAnalyticType;
.eqea.util.tickData.getDataAndAggFromCfg[OrderAnalyticsRes;cfg;();`strikeTime;`orderCompletedTime;0b]
};
Next, save the file, tear down the existing package and redeploy the updated package.
Shell
# Teardown existing package
kxi pm teardown ${PKG_NAME}
# Redeploy updated package
kxi pm push ./${PKG_NAME} --force --deploy
Error due to bug in custom configurations
When the package is fully deployed, you can test that your custom analytic runs correctly using .fsi.eqea.generateOrderAnalytics as outlined here.
When you run the API:
q
last t:gw(`.fsi.eqea.generateOrderAnalytics;args;`;()!());
You notice that the API returns data but your new column myCustomAnalytic is empty.
Looking at the DAP logs, your function .my.custom.function encountered the following error:
{"time":"2025-04-02T13:48:48.444z","corr":"5d3b958c-08b0-44a4-ad53-964b7deafd5b","component":"eqea","level":"ERROR","message":"Error Encountered Running Order Execution Analytic Function: [ .my.custom.function ] - Error Message: [ vol ] ","service":"dap","mount":"hdb"}
That means your function has encountered a vol error.
This is not a standard q error, so it may be more difficult to spot.
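In q, referencing a name that is not defined, such as a column that does not exist in the table being queried, signals an error with that name, which is why the message is simply vol. The following is a minimal, self-contained illustration using a toy table rather than the real Trade schema:
q
// Toy table with a volume column but no vol column
tab:([] sym:`A`B`C; volume:100 200 300);
// Selecting the missing column signals an error named after it
@[{select max vol from x}; tab; {"caught: ",x}]   / returns "caught: vol"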
Look at the aggClause for myCustomAnalytic in the values you recently added to .eqea.config.custom.analytics in the file ${PKG_NAME}/src/example.eqeaCustomAnalytics.q, in the section on debugging the custom analytic.
Notice that the aggClause value is (max;`vol) and the marketDataTabName value is Trade, which means the analytic is attempting to find the maximum value of the vol column from the Trade table.
If you look at the Trade schema, there is no column named vol; however, there is a column named volume.
To resolve this, update your definition of myCustomAnalytic in the file ${PKG_NAME}/src/example.eqeaCustomAnalytics.q to:
q
.eqea.config.custom.analytics,:flip `analytic`analyticType`funcName`aggClause`marketDataTabName`joinTimeOffset! flip enlist (
(`myCustomAnalytic ; `myCustomAnalyticType ; `.my.custom.function ; (max;`volume) ; `Trade ; 00:00:30)
);
Save the file, tear down the existing package, and redeploy the updated package.
Shell
# Teardown existing package
kxi pm teardown ${PKG_NAME}
# Redeploy updated package
kxi pm push ./${PKG_NAME} --force --deploy
Advanced debugging: Connecting to the HDB
If you encounter an issue with your custom analytic that cannot be resolved by simply inspecting your custom config or function, it may be useful to connect to the HDB and debug manually.
This section includes:
- Configuring HDB to allow IPC connections
- Connecting to the HDB
Configuring HDB to allow IPC connections
By default, all kdb Insights database services restrict ad-hoc IPC requests.
If you try to connect to an HDB with IPC security enabled, you encounter the error IPC execution restricted. Rejecting function.
It may be useful to disable these restrictions in development environments.
WARNING
Disabling IPC security
It is recommended that IPC security remains enabled for all production deployments. Disabling this level of security can allow users to modify the internal state or access data they are not privileged to see.
To disable the IPC security restrictions, you must edit the file ${PKG_NAME}/databases/fsi-core-db/shards/fsi-core-db-shard.yaml.
Add the environment variable KXI_SECURE_ENABLED=false to daps.instances.db.env. This results in:
YAML
daps:
  instances:
    db:
      env:
        - name: KXI_SECURE_ENABLED
          value: "false"
Save the updated file, tear down the existing package, and redeploy the updated package.
Shell
# Teardown existing package
kxi pm teardown ${PKG_NAME}
# Redeploy updated package
kxi pm push ./${PKG_NAME} --force --deploy
Connecting to the HDB
After you have added the environment variable KXI_SECURE_ENABLED=false on your DAP, you can connect to your HDB and run ad-hoc queries and code.
The easiest way to connect to an HDB process is to port-forward to the relevant port using a kubectl command with the following format:
Shell
kubectl port-forward $PROC_NAME $PORT_FWD:$PORT_POD
Before you can port-forward, you need to find the relevant port. To do this, use:
Shell
kubectl describe pod ${POD_NAME} | grep containerPort
Here's a working example:
- Find the relevant port:
Shell
$ kubectl describe pod fsi-app-bbg-eqea-dap-db-0 | grep containerPort
{"name":"http-metrics","containerPort":8080,"protocol":"TCP"}
,{"name":"db","containerPort":5080,"protocol":"TCP"}
,{"name":"hdb","containerPort":5081,"protocol":"TCP"}
,{"name":"idb","containerPort":5082,"protocol":"TCP"}
,{"name":"rdb","containerPort":5083,"protocol":"TCP"}
- The output shows that the HDB containerPort is 5081. Therefore, to debug your analytic, forward one of your local ports to port 5081 on your DAP pod, as below:
Shell
kubectl port-forward fsi-app-bbg-eqea-dap-db-0 5081:5081
- Now you can connect to that port using your IDE of choice, for example the KX VSCode extension. Refer to managing connections in the kdb VSCode extension for more information.
This allows you to debug your function interactively, helping you identify the issue more effectively.
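For example, once the port-forward above is running, you can open a handle from a local q session and inspect the state on the HDB. The following is a minimal sketch; the table and config names are those used earlier in this guide, and the port assumes the forward to 5081 shown above:
q
// Open a handle to the port-forwarded HDB
h:hopen 5081;
// Inspect the custom analytics configuration loaded on the DAP
h"select from .eqea.analytics.cfg where analyticType=`myCustomAnalyticType"
// Check that the OrderAnalytics schema contains your new column
h"`myCustomAnalytic in cols `OrderAnalytics"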
Further reading