As a benchmark technology partner in the Data field of the Corporate & Investment Banking area of one of the leading Spanish financial institutions, which operates in the principal markets worldwide, we are developing a collaborative technology approach to rationalise the use of information acquired from external suppliers (Bloomberg, CMA, Markit, S&P, Fitch, Moody's, etc.) and to charge its cost back to final consumers according to their demand for and use of that information. The information is obtained from the original sources so as to provide the highest-quality data at the lowest possible cost.
Data from the various sources is ingested, normalised and stored in the Data Lake, where the common data layer is modelled and populated. The information is then made available through the feed of the (relational) application data layer, with microservices developed to expose this data via several API styles: mainly REST APIs, bulk endpoints and asynchronous event APIs.
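The normalisation step can be illustrated with a minimal sketch: vendor-specific records are mapped onto a single common-layer shape before being persisted. All class, field and feed names below are hypothetical, not the project's actual schema.

```java
import java.util.List;
import java.util.Map;

// Sketch of the normalisation into the common data layer: each vendor feed
// uses its own field names, and the normaliser hides that behind one record
// shape. Field names (ID_ISIN, PX_LAST, etc.) are illustrative only.
public class CommonLayerNormalizer {

    // Common-layer record: one shape regardless of the source feed.
    public record CommonRecord(String isin, String source, double price) {}

    // Bloomberg-style raw record (hypothetical field names).
    public static CommonRecord fromBloomberg(Map<String, String> raw) {
        return new CommonRecord(raw.get("ID_ISIN"), "bloomberg",
                Double.parseDouble(raw.get("PX_LAST")));
    }

    // Markit-style raw record (hypothetical field names).
    public static CommonRecord fromMarkit(Map<String, String> raw) {
        return new CommonRecord(raw.get("isin"), "markit",
                Double.parseDouble(raw.get("close")));
    }

    public static void main(String[] args) {
        List<CommonRecord> common = List.of(
                fromBloomberg(Map.of("ID_ISIN", "ES0000000001", "PX_LAST", "9.41")),
                fromMarkit(Map.of("isin", "ES0000000002", "close", "4.12")));
        common.forEach(System.out::println);
    }
}
```

Downstream consumers then query only the common shape, which is what makes the single relational application data layer feed possible.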
Our team designs and implements the access control and pay-per-use charging for the information, as well as the counterparty identifier catalogue system.
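The combination of access control and payment for use can be sketched as an entitlement check that also meters every granted read per consumer, so cost can later be charged back on usage. Class and method names here are illustrative, not the project's real API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of pay-per-use access control: a consumer is only
// served if entitled, and each granted access is counted for chargeback.
public class UsageMeter {

    private final Map<String, LongAdder> hitsByConsumer = new ConcurrentHashMap<>();

    // Returns true only for entitled consumers; every granted access
    // increments that consumer's usage counter.
    public boolean authorise(String consumerId, boolean entitled) {
        if (!entitled) return false;
        hitsByConsumer.computeIfAbsent(consumerId, k -> new LongAdder()).increment();
        return true;
    }

    // Reads accumulated usage for billing purposes.
    public long usage(String consumerId) {
        LongAdder a = hitsByConsumer.get(consumerId);
        return a == null ? 0 : a.sum();
    }

    public static void main(String[] args) {
        UsageMeter meter = new UsageMeter();
        meter.authorise("desk-42", true);
        meter.authorise("desk-42", true);
        meter.authorise("desk-99", false); // not entitled: denied, not billed
        System.out.println("desk-42 reads: " + meter.usage("desk-42"));
    }
}
```

In practice entitlement would come from a permissions store rather than a boolean flag, but the shape — authorise, then meter — is the point of the sketch.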
The main technologies used are Spark/Scala for ingesting the information and storing it in the main lake, hosted on Scality in the Global Banking Transactions area. The Java and Spring Boot-based microservices are exposed on a PaaS that in turn uses Kubernetes and OpenShift, with API management handled through IBM API Connect.
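The Spring Boot microservices themselves cannot be reproduced here, so the REST-exposure idea is shown with a minimal JDK-only stand-in: one endpoint that serves a record from the application data layer as JSON. The path, the ISIN and the payload are illustrative, not the real API contract behind IBM API Connect.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal REST endpoint sketch using the JDK's built-in HTTP server as a
// stand-in for a Spring Boot microservice. Port 0 asks the OS for any
// free ephemeral port.
public class QuoteEndpoint {

    public static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // Hypothetical resource path; a real service would route by ISIN.
        server.createContext("/api/v1/quotes/ES0000000001", exchange -> {
            byte[] body = "{\"isin\":\"ES0000000001\",\"price\":9.41}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start();
        System.out.println("listening on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

In the real deployment this responsibility sits in Spring Boot controllers running on Kubernetes/OpenShift, fronted by IBM API Connect for security and lifecycle management.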