Configure the ICE Fixed Income Screener
This page explains how to configure the ICE Fixed Income Screener.
The ICE Fixed Income Screener ships with a feed-values YAML file that instructs the ICE feed handler which tokens and data to subscribe to. Should you need to ingest different or new data, modify the ice-fi-feed-values.yaml file accordingly.
To demonstrate how the orderbook ingest is configured, we modify ice-fi-feed-values.yaml to ensure that the correct feeds are being drawn in.
A typical feed-values.yaml includes:
```yaml
# Default values for rt-ice-pub.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
autoscaling:
  enabled: false

# TODO: Add your image pull secret here.
imagePullSecrets:
  - name: docker-pull

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
  # Specifies whether to auto-mount a service account
  autoMount: true

# @param resourceAnnotations - Annotations applied to the top level resource StatefulSet/Deployment
resourceAnnotations: {}

## @section persistence Configure PVC
## @param enabled Enable or disable persistent storage, claimed through PVCs
## @param useLocalValues Override Global accessMode and storageClass
## @param storageClass Storage Class to apply to PVC
## Unset uses the cluster default storage class.
## @param storageSize Volume requested size
## @param accessModes List of desired access modes for PVC
persistence:
  enabled: true
  useLocalValues: false
  storageClass: ""
  storageSize: "10G"
  accessModes:
    - ReadWriteOnce

# TODO: Ensure stream.name matches your desired RT-North
stream:
  sinkName: rt-fsi-app-ice-fi-icerealtimefi-north

rt:
  logLevel: INFO
  logPath: /tmprt
  volCapacity: 10G

ice:
  loggingFile: "/var/log/kxfeed_ice.log"
  apiLoggingFile: "/var/log/ice_api.log"
  multithreaded: false
  # TODO: Insert Connection details as provided by ICE
  primaryConnection: "<IP_Address_Provided_By_ICE_1>:<Port_Num_Provided_By_ICE_1>"
  backupConnection: "<IP_Address_Provided_By_ICE_2>:<Port_Num_Provided_By_ICE_2>"
  # Example Subscription List: Wildcard List
  # TODO: Update with your subscription list
  subscriptionList: 'SUBSCRIBEWILDCARD,ENUM_SRC_ID:1327,SYMBOL_TICKER:{^[a-zA-Z].*};SUBSCRIBEWILDCARD,ENUM_SRC_ID:1330,SYMBOL_TICKER:{^[a-zA-Z].*};SUBSCRIBEWILDCARD,ENUM_SRC_ID:1331,SYMBOL_TICKER:{^[a-zA-Z].*}'
  logMsg: false
  token2typeLookup: "/tmp/kxfeed_ice/token2Type.json"
  # No need to subscribe to any Trade table tokens
  # Quote Tokens used by the FSI schemas / ICE overlay
  # `sym`time`srcID`permissions are always pulled in
  quoteTableTokens: "4,5,10,11,12,13,16,20,55,211,214,225,230,404,911,923,1154,1490,1491,1617,1709,2010,2023,2024,2026,2044,3315,3763"
  # Token which decides if a payload is the trade message
  tradeMessageIdentifiers: "8"
  enableFiltertokens: true
  refreshTable: false
  restServer: false
  sourceTimeZone: "628:EST5EDT,M3.2.0,M11.1.0;270:EST5EDT,M3.2.0,M11.1.0;886:EST5EDT,M3.2.0,M11.1.0;558:EST5EDT,M3.2.0,M11.1.0;564:CST6CDT,M3.2.0,M11.1.0;1330:EST5EDT,M3.2.0,M11.1.0;1331:EST5EDT,M3.2.0,M11.1.0;1327:EST5EDT,M3.2.0,M11.1.0;1328:EST5EDT,M3.2.0,M11.1.0"
  dbSchemaFile: "/tmp/kxfeed_ice/schema.xml"
  dbConfigFile: "/tmp/kxfeed_ice/kxfeed_config.json"

secrets:
  name: my-ice-secrets

image:
  repository: portal.dl.kx.com
  component: kxfeed_ice
  tag: 1.0.0
  pullPolicy: IfNotPresent

podSecurityContext:
  # 'nobody' user
  fsGroup: 65534
  runAsUser: 65534
  runAsNonRoot: true

securityContext:
  readOnlyRootFilesystem: false
  # runAsNonRoot: true
  # 'nobody' user
  # runAsUser: 65534
  allowPrivilegeEscalation: false

arguments: ["/usr/local/bin/updRTConfig.sh"]

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

encryption:
  # NOTE: Set enabled false if running on an Insights environment that does NOT have encryption enabled.
  enabled: true
```
This file determines which tokens the feed ingests through the pipelines into the databases. It also sets the persistence, resource, and logging parameters.
As shown above, token number 16 is included in the quoteTableTokens field. This token represents the activityTime as recorded by ICE.
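If you were adding this token yourself, the change would be a single edit to the ice section of ice-fi-feed-values.yaml, appending 16 to the existing token list:

```yaml
ice:
  # 16 = activityTime, appended to the default quote token list
  quoteTableTokens: "4,5,10,11,12,13,16,20,55,211,214,225,230,404,911,923,1154,1490,1491,1617,1709,2010,2023,2024,2026,2044,3315,3763"
```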
Now that we have instructed the feed handler to ingest the additional token, we modify our schema to ensure that there is a column to accept the new data within the table.
To do this we use an overlay to add a field to the existing schema. To use an overlay, refer to the Overlays & Patches section.
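In this case the overlay only needs to append a single entry to the Quote schema's columns list. ICE's activityTime is a timestamp, so the new column is declared as:

```yaml
# Appended to the Quote schema's columns list
- name: activityTime
  type: timestamp
```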
The IceRealTime.yaml contains data similar to the following:
```yaml
kind: Package
apiVersion: pakx/v1
metadata:
  name: target
spec:
  pipelines:
    - name: icerealtimefi
      spec: src/icerealtimefi-pipeline-spec.q
      source: icerealtimefi-north
    - name: icehistoricreplayfi
      spec: src/icehistoricreplayfi-pipeline-spec.q
      destination: fsi-data
      controller:
        image: {}
        k8sPolicy:
          resources:
            tmpDirSize: 5Mi
          serviceAccountConfigure:
            create: true
        persistence: {}
      env:
        - name: KXI_SP_BETA_FEATURES
          value: 'true'
      replicaAffinityTopologyKey: zone
      replicas: 1
      type: spec
      worker:
        image: {}
        k8sPolicy:
          resources:
            tmpDirSize: 500Mi
          serviceAccountConfigure:
            create: true
        persistence: {}
    - name: iceratings
      spec: src/iceratings-pipeline-spec.q
      destination: fsi-data
  tables:
    schemas:
      - name: Quote
        columns:
          - name: eventTimestamp
            type: timestamp
          - name: instrumentID
            type: symbol
            foreign: Instrument.instrumentID
            attrDisk: parted
            attrOrd: parted
          - name: assetClass
            type: symbol
          - name: bidPrice
            type: float
          - name: bidSize
            type: long
          - name: bidCount
            type: long
          - name: askPrice
            type: float
          - name: askSize
            type: long
          - name: askCount
            type: long
          - name: bidYield
            type: float
          - name: askYield
            type: float
          - name: bidSpread
            type: float
          - name: swapSpread
            type: float
          - name: assetSwapSpread
            type: float
          - name: midAssetSwapSpread
            type: float
          - name: ZSpread
            type: float
          - name: midZSpread
            type: float
          - name: marketPhase
            type: int
          - name: modifiedDuration
            type: float
          - name: convexity
            type: float
          - name: accruedInterest
            type: float
          - name: duration
            type: float
          - name: parity
            type: float
          - name: BMKYield
            type: float
          - name: baseIndex
            type: float
          - name: srcSys
            type: symbol
          - name: exchTime
            type: timestamp
        description: ICE fixed income Quote Schema
        prtnCol: eventTimestamp
        sortColsDisk:
          - instrumentID
          - eventTimestamp
        sortColsMem:
          - instrumentID
          - eventTimestamp
        sortColsOrd:
          - instrumentID
          - eventTimestamp
        type: partitioned
  databases:
    - name: fsi-core-db
      shards:
        - name: fsi-core-db-shard
          sequencers:
            icerealtimefi-north:
              external: false
              k8sPolicy:
                resources:
                  limits:
                    cpu: 500m
                    memory: 512Mi
                  requests:
                    cpu: 100m
                    memory: 256Mi
                  tmpDirSize: 5Mi
                serviceAccountConfigure:
                  create: true
              maxDiskUsagePercent: 90
              size: 3
              topicConfig:
                topicPrefix: rt-
                topicConfigDir: /config/topics/
              volume:
                mountPath: /s/
                size: 20Gi
                subPaths:
                  cp: state
                  in: in
                  out: out
```
In this file we add an additional column, activityTime, to the YAML definition of the Quote table. This allows us to ingest the new data and store it within the table.
The updated YAML is now:
```yaml
kind: Package
apiVersion: pakx/v1
metadata:
  name: target
spec:
  pipelines:
    - name: icerealtimefi
      spec: src/icerealtimefi-pipeline-spec.q
      source: icerealtimefi-north
    - name: icehistoricreplayfi
      spec: src/icehistoricreplayfi-pipeline-spec.q
      destination: fsi-data
      controller:
        image: {}
        k8sPolicy:
          resources:
            tmpDirSize: 5Mi
          serviceAccountConfigure:
            create: true
        persistence: {}
      env:
        - name: KXI_SP_BETA_FEATURES
          value: 'true'
      replicaAffinityTopologyKey: zone
      replicas: 1
      type: spec
      worker:
        image: {}
        k8sPolicy:
          resources:
            tmpDirSize: 500Mi
          serviceAccountConfigure:
            create: true
        persistence: {}
    - name: iceratings
      spec: src/iceratings-pipeline-spec.q
      destination: fsi-data
  tables:
    schemas:
      - name: Quote
        columns:
          - name: eventTimestamp
            type: timestamp
          - name: instrumentID
            type: symbol
            foreign: Instrument.instrumentID
            attrDisk: parted
            attrOrd: parted
          - name: assetClass
            type: symbol
          - name: bidPrice
            type: float
          - name: bidSize
            type: long
          - name: bidCount
            type: long
          - name: askPrice
            type: float
          - name: askSize
            type: long
          - name: askCount
            type: long
          - name: bidYield
            type: float
          - name: askYield
            type: float
          - name: bidSpread
            type: float
          - name: swapSpread
            type: float
          - name: assetSwapSpread
            type: float
          - name: midAssetSwapSpread
            type: float
          - name: ZSpread
            type: float
          - name: midZSpread
            type: float
          - name: marketPhase
            type: int
          - name: modifiedDuration
            type: float
          - name: convexity
            type: float
          - name: accruedInterest
            type: float
          - name: duration
            type: float
          - name: parity
            type: float
          - name: BMKYield
            type: float
          - name: baseIndex
            type: float
          - name: srcSys
            type: symbol
          - name: exchTime
            type: timestamp
          - name: activityTime
            type: timestamp
        description: ICE fixed income Quote Schema
        prtnCol: eventTimestamp
        sortColsDisk:
          - instrumentID
          - eventTimestamp
        sortColsMem:
          - instrumentID
          - eventTimestamp
        sortColsOrd:
          - instrumentID
          - eventTimestamp
        type: partitioned
  databases:
    - name: fsi-core-db
      shards:
        - name: fsi-core-db-shard
          sequencers:
            icerealtimefi-north:
              external: false
              k8sPolicy:
                resources:
                  limits:
                    cpu: 500m
                    memory: 512Mi
                  requests:
                    cpu: 100m
                    memory: 256Mi
                  tmpDirSize: 5Mi
                serviceAccountConfigure:
                  create: true
              maxDiskUsagePercent: 90
              size: 3
              topicConfig:
                topicPrefix: rt-
                topicConfigDir: /config/topics/
              volume:
                mountPath: /s/
                size: 20Gi
                subPaths:
                  cp: state
                  in: in
                  out: out
```
Finally, we update the pipeline specification so that the pipeline correctly maps our newly ingested activityTime data to the exchTime timestamp.
Before our change, the pipeline specification is similar to the following:
```q
// assembly name
.fsi.assemblyName:`$.spenv.assembly[];

// Note: Set .ice.debug:1b if you wish to enable debug vars
.ice.debug:0b;

// get Quote schema from assembly
QuoteSch:.qsp.getSchema[`Quote];
typeList[where 10=typeList:exec datatype from QuoteSch]:0h;
.ice.fsiSchema.Quote:flip (exec name from QuoteSch)!typeList$\:();

// Map ICE cols to the FSI schema
// FI Quote
.ice.iceToFsiColMap.Quote:(!) . flip (
  (`eventTimestamp     ; `time);
  (`instrumentID       ; `sym);
  (`srcSys             ; ($;enlist `;(string;`srcID)));
  (`assetClass         ; `assetClass);
  (`bidPrice           ; `bidPrice);
  (`bidSize            ; `bidSize);
  (`bidCount           ; `bidCount);
  (`askPrice           ; `askPrice);
  (`askSize            ; `askSize);
  (`askCount           ; `askCount);
  (`bidYield           ; `bidYield);
  (`askYield           ; `askYield);
  (`bidSpread          ; `bidSpread);
  (`swapSpread         ; `swapSpread);
  (`assetSwapSpread    ; `assetSwapSpread);
  (`midAssetSwapSpread ; `midAssetSwapSpread);
  (`ZSpread            ; `ZSpread);
  (`midZSpread         ; `midZSpread);
  (`marketPhase        ; `marketPhase);
  (`modifiedDuration   ; `modifiedDuration);
  (`convexity          ; `convexity);
  (`accruedInterest    ; `accruedInterest);
  (`duration           ; `duration);
  (`parity             ; `parity);
  (`BMKYield           ; `BMKYield);
  (`baseIndex          ; `baseIndex);
  (`exchTime           ; `exchTime);
  (`activityTime       ; `activityTime)
  );

// All columns as they come from ICE
.ice.cols.Quote:`msgType`sym`time`srcID`permissions`srcSys`instrumentID`askPrice`askSize`bidPrice`bidSize`eventTimestamp`exchTime`activityTime`askCount`askYield`bidCount`bidYield`parity`swapSpread`bidSpread`accruedInterest`ZSpread`BMKYield`convexity`marketPhase`duration`midZSpread`assetSwapSpread`midAssetSwapSpread`modifiedDuration`assetClass`baseIndex

.ice.upd:{[t;d]
  d:.ice.fsiSchema[t] upsert ?[enlist .ice.cols[t]!d;();0b;.ice.iceToFsiColMap[t]];
  d
  };

.ice.updQuote:.ice.upd[`Quote];

source: .qsp.read.fromStream[]

quoteStream: source
  .qsp.filter[{[md;data]data;`quote~(md`table)};.qsp.use``params!(::;`metadata`data)]
  .qsp.map[{if[.ice.debug;.debug.Quote:x];x}]
  .qsp.map[.ice.updQuote]
  .qsp.write.toDatabase[`Quote; .fsi.assemblyName]

.qsp.run(quoteStream)
```
Following our change, the file looks like this. Note how we have changed the .ice.iceToFsiColMap.Quote dictionary so that the `exchTime column is overwritten with the incoming activityTime value.
```q
// assembly name
.fsi.assemblyName:`$.spenv.assembly[];

// Note: Set .ice.debug:1b if you wish to enable debug vars
.ice.debug:0b;

// get Quote schema from assembly
QuoteSch:.qsp.getSchema[`Quote];
typeList[where 10=typeList:exec datatype from QuoteSch]:0h;
.ice.fsiSchema.Quote:flip (exec name from QuoteSch)!typeList$\:();

// Map ICE cols to the FSI schema
// FI Quote
.ice.iceToFsiColMap.Quote:(!) . flip (
  (`eventTimestamp     ; `time);
  (`instrumentID       ; `sym);
  (`srcSys             ; ($;enlist `;(string;`srcID)));
  (`assetClass         ; `assetClass);
  (`bidPrice           ; `bidPrice);
  (`bidSize            ; `bidSize);
  (`bidCount           ; `bidCount);
  (`askPrice           ; `askPrice);
  (`askSize            ; `askSize);
  (`askCount           ; `askCount);
  (`bidYield           ; `bidYield);
  (`askYield           ; `askYield);
  (`bidSpread          ; `bidSpread);
  (`swapSpread         ; `swapSpread);
  (`assetSwapSpread    ; `assetSwapSpread);
  (`midAssetSwapSpread ; `midAssetSwapSpread);
  (`ZSpread            ; `ZSpread);
  (`midZSpread         ; `midZSpread);
  (`marketPhase        ; `marketPhase);
  (`modifiedDuration   ; `modifiedDuration);
  (`convexity          ; `convexity);
  (`accruedInterest    ; `accruedInterest);
  (`duration           ; `duration);
  (`parity             ; `parity);
  (`BMKYield           ; `BMKYield);
  (`baseIndex          ; `baseIndex);
  (`exchTime           ; `activityTime);   // changed: exchTime now sourced from activityTime
  (`activityTime       ; `activityTime)
  );

// All columns as they come from ICE
.ice.cols.Quote:`msgType`sym`time`srcID`permissions`srcSys`instrumentID`askPrice`askSize`bidPrice`bidSize`eventTimestamp`exchTime`activityTime`askCount`askYield`bidCount`bidYield`parity`swapSpread`bidSpread`accruedInterest`ZSpread`BMKYield`convexity`marketPhase`duration`midZSpread`assetSwapSpread`midAssetSwapSpread`modifiedDuration`assetClass`baseIndex

.ice.upd:{[t;d]
  d:.ice.fsiSchema[t] upsert ?[enlist .ice.cols[t]!d;();0b;.ice.iceToFsiColMap[t]];
  d
  };

.ice.updQuote:.ice.upd[`Quote];

source: .qsp.read.fromStream[]

quoteStream: source
  .qsp.filter[{[md;data]data;`quote~(md`table)};.qsp.use``params!(::;`metadata`data)]
  .qsp.map[{if[.ice.debug;.debug.Quote:x];x}]
  .qsp.map[.ice.updQuote]
  .qsp.write.toDatabase[`Quote; .fsi.assemblyName]

.qsp.run(quoteStream)
```
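The effect of the remapping can be sketched outside q. The following Python snippet is illustrative only (the field values are invented for the example); it applies the same kind of target-to-source column map to one incoming record, mirroring what the functional select inside .ice.upd does:

```python
# Illustrative sketch of the column-remapping step performed by .ice.upd.
# The map is keyed by FSI schema column name; each value names the incoming
# ICE column to read, so pointing exchTime at activityTime overwrites the
# exchTime output with the activityTime value.
ice_to_fsi = {
    "eventTimestamp": "time",
    "instrumentID": "sym",
    "exchTime": "activityTime",    # remapped: sourced from activityTime
    "activityTime": "activityTime",
}

def remap(record):
    """Build an FSI-schema record from a raw ICE record using the column map."""
    return {fsi_col: record[ice_col] for fsi_col, ice_col in ice_to_fsi.items()}

raw = {"time": "t0", "sym": "XS123", "exchTime": "t1", "activityTime": "t2"}
fsi = remap(raw)
print(fsi["exchTime"])  # prints "t2": the activityTime value, not the raw exchTime
```

Because the dictionary is keyed by the output column, changing only the value of the exchTime entry redirects its source without disturbing any other column.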
The steps are the same for the other packages in the FSI Accelerator.
Commands to unpack, re-package, and push a package can be found in the packaging documentation.
Changing configuration through a custom package
Alternatively, in many cases configuration can be adjusted by including variables in a custom package. Where this is possible, it is highlighted in the applicable section of the documentation. Instructions to create and load a custom package can be found in the package overlays section.