
BANKING & FINANCIAL SERVICES
Gen AI platform offers future-ready capabilities

Client
A leading North American bank
Goal
Provide a wide range of AI capabilities for various risk and business teams and avoid building fragmented, outdated systems
Tools and Technologies
Amazon Bedrock, Amazon Titan V2, pgvector, Faiss, OpenSearch, Llama 7B, Claude 3.5 Sonnet, Claude 3.7 Sonnet
Business Challenge
The Enterprise Risk function at a leading North American bank initiated a Generative AI (Gen AI) solution to offer a wide range of AI capabilities, including document intelligence, summarization, generation, translation, and more.
As the project evolved through proofs of concept and pilots, a key challenge emerged: the risk of creating a fragmented ecosystem of unmanageable bespoke solutions and model integrations, reliant on potentially outdated models and libraries.

Solution
Drawing on prior engagements across clients, our team provided thought leadership on developing and delivering capabilities through a platform approach. We also set up a Minimum Viable Product (MVP) team to iterate on new problem areas and solution approaches. Platform development includes generalized capabilities for:
- Document ingestion pipelines, with a choice of parsing approaches, embedding models, and vector index stores
- A factory model, with configurations for integrating new parsers, embedding models, LLM interfaces, etc., to quickly bring new capabilities to the platform (see the sketch after this list)
- User management, SSO integration, entitlements management
- API integration to bring in information/ data from internal and external sources
- Platform support for pgvector, Faiss, OpenSearch, Amazon Titan V2, Llama 7B, Claude 3.5 Sonnet, and Claude 3.7 Sonnet
- An intuitive chat interface for end users and for AI Masters: designated business users trained in prompt engineering and other techniques, who assemble new AI/Gen AI capabilities for users through configuration
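
To make the factory model concrete, here is a minimal sketch of a configuration-driven embedder registry. The `register_embedder` decorator, `ingest` helper, and vector-store interface are hypothetical; the Bedrock `invoke_model` call and the `amazon.titan-embed-text-v2:0` model ID follow the publicly documented boto3 API. This is an illustration, not the client's implementation.

```python
import json

import boto3

# Registry mapping configuration keys to embedding callables, so a new
# model is added by registering one function (factory pattern).
EMBEDDERS = {}

def register_embedder(name):
    def wrap(fn):
        EMBEDDERS[name] = fn
        return fn
    return wrap

@register_embedder("titan-v2")
def embed_titan_v2(text: str) -> list[float]:
    """Embed text with Amazon Titan Text Embeddings V2 via Bedrock."""
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

def ingest(chunks, embedder_name, store):
    """Embed each document chunk with the configured model and hand the
    vector to a store (pgvector, Faiss, or OpenSearch behind one API)."""
    embed = EMBEDDERS[embedder_name]
    for chunk in chunks:
        store.add(chunk, embed(chunk))
```

Under this pattern, bringing a new embedding model onto the platform means registering one function; the ingestion code is untouched.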

Outcomes
- A future-ready Gen AI platform that can easily incorporate new capabilities and updates
- Multiple specific capabilities, called skills, for use by various risk teams and business users
- A forward-looking roadmap, including the ability to compose more complex capabilities from atomic ones

Data migration to cloud expedites credit risk functions



Client
A leading North American bank
Goal
Migrate credit risk data and SAS-based analytics models from on-premises data warehouse to AWS to enhance functionality
Tools and Technologies
AWS Glue, Redshift, DataSync, Athena, CloudWatch, SageMaker; Apache Airflow; Delta Lake; Power BI
Business Challenge
The credit risk unit of a major bank aimed to migrate SAS-based analytics models containing data for financial forecasting and sensitivity analysis to Amazon SageMaker.
The migration aimed to leverage benefits such as enhanced scalability, easier maintenance for MLOps engineers, and a better developer experience. The client also sought to migrate credit risk data from a Netezza-based on-premises data warehouse to AWS, using a data lake on Amazon S3 and a data warehouse on Redshift to support the model migration.

Solution
- Decoupled data workload processing from relational systems using a phased approach focused on historical migration, transformation complexity, data volumes, and the ingestion frequencies of incremental loads
- Developed a flexible ETL framework, using DataSync to extract data from Netezza to AWS as flat files
- Transformed data in S3 layers using Glue ETL and moved it to the Redshift data warehouse (see the sketch after this list)
- Enabled Glue integration with Delta Lake for incremental data workloads
- Built ETL workflows using AWS Step Functions and orchestrated concurrent workflow runs with Apache Airflow
- Architected the overall data migration from Netezza to AWS on top of this flexible ETL framework
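
As an illustration of the Glue transformation step above, here is a minimal sketch of a Glue PySpark job. The S3 paths, the "redshift-conn" catalog connection, and the table names are hypothetical; the Glue calls themselves (`create_dynamic_frame.from_options`, `DynamicFrame.fromDF`, `write_dynamic_frame.from_jdbc_conf`) are standard API.

```python
# AWS Glue job (PySpark): move Netezza extracts landed in S3 into Redshift.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])  # standard Glue boilerplate
glue_context = GlueContext(SparkContext.getOrCreate())

# Raw layer: flat files extracted from Netezza and landed by DataSync.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://credit-risk-raw/exposures/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Minimal transformation; real jobs conform types, apply business rules,
# and persist curated S3 layers before warehouse loading.
curated = DynamicFrame.fromDF(raw.toDF().dropDuplicates(), glue_context, "curated")

# Load into the Redshift warehouse through a pre-defined catalog connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=curated,
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "credit_risk.exposures", "database": "dw"},
    redshift_tmp_dir="s3://credit-risk-tmp/redshift/",
)
```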

Outcomes
- Enhanced financial forecasting and sensitivity analysis operations with analytical models and data migrated to the AWS public cloud
- Expedited time-to-market, catering to the client's downstream consumption needs through Power BI and Amazon SageMaker

Custom analytics enable faster business decisions



Client
A U.S.-based asset management company
Goal
Streamline and improve data and analytics capabilities for enhanced user experiences
Tools and Technologies
Java, React JS, MS SQL Server, Spring Boot, GitHub, Jenkins
Business Challenge
The client captures voluminous data from multiple internal and external sources. Without quick, on-demand capabilities, business users could not efficiently generate customized portfolio analytics on attributes such as average quality, yield to maturity, and average coupon.
The client teams were spending enormous amounts of manual effort and elapsed time (approximately 12-15 hours) to respond to requests for proposals from their respective clients.

Solution
Iris implemented a data acquisition and analytics system with pre-processing capabilities for grouping, classifying, and handling historical data.
A data dictionary was established for key concepts, such as asset classes and industry classifications, enabling end users to access data for analytical computation. The analytics engine was refactored, optimized, and integrated into the streamlined investment performance data infrastructure.
The team developed an interactive self-service capability, allowing business users to track data availability, perform advanced searches, generate custom analytics, visualize information, and utilize the insights for decision-making.
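
As a flavor of the custom analytics involved, here is a minimal sketch of market-value-weighted portfolio attributes, with entirely hypothetical holdings data and column names; the client's analytics engine computes these and many other attributes on demand.

```python
import pandas as pd

# Hypothetical holdings extract; columns and values are illustrative.
holdings = pd.DataFrame({
    "market_value": [4_000_000, 2_500_000, 3_500_000],
    "coupon": [4.25, 3.10, 5.00],   # %
    "ytm": [4.60, 3.45, 5.20],      # %
    "quality": [7, 5, 8],           # numeric credit-quality scale
})

def weighted_avg(df: pd.DataFrame, col: str, weight: str = "market_value") -> float:
    """Market-value-weighted average of a holdings attribute."""
    return float((df[col] * df[weight]).sum() / df[weight].sum())

analytics = {
    "average_coupon": weighted_avg(holdings, "coupon"),
    "yield_to_maturity": weighted_avg(holdings, "ytm"),
    "average_quality": weighted_avg(holdings, "quality"),
}
print(analytics)
```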

Outcomes
The solution brought several benefits to the client, including:
- Simplified data access to generate custom analytics for end users
- Eliminated manual processing and the need for complex queries
- Enhanced the stakeholder experience
- Reduced response time to client RFPs by over 50%

Big Data platform improves global AML compliance



Client
A leading global bank with operations in over 100 countries
Goal
Address data quality and cost challenges of legacy AML application infrastructure
Tools and Technologies
Hadoop, Hive, Talend, Kafka, Spark, ETL
Business Challenge
The client's legacy AML application infrastructure was creating challenges in data acquisition, quality assurance, data processing, AML rules management, and reporting.
High data volume and rules-based algorithms were generating high numbers of false positives. Multiple instances of legacy vendor platforms were also adding to cost and complexity.

Solution
Iris developed and implemented multiple AML trade surveillance applications and Big Data capabilities. The team designed a centralized data hub on Cloudera Hadoop for AML business processes and migrated application data to the big data analytical platform in the client's private cloud. Switching from a rules-based approach to algorithmic analytical models, we incorporated a data lake with logical layers and developed a metadata-driven data quality monitoring solution (sketched below).
We enabled support for AML model development, execution, testing/validation, and integration with case management. Our data experts also deployed a custom metadata management tool and UI to manage data quality. Data visualization and dashboards were implemented for alerts, monitoring performance, and tracking money laundering activities.
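
A minimal sketch of the metadata-driven data quality idea, in PySpark for consistency with the Spark-based platform. The rule schema, column names, and HDFS path are illustrative assumptions, not the client's metadata model.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("aml-dq").getOrCreate()

# Rules the monitoring job runs; in the delivered solution these come
# from the custom metadata management tool, not a hard-coded list.
DQ_RULES = [
    {"column": "txn_id", "check": "not_null"},
    {"column": "amount", "check": "non_negative"},
]

def run_checks(df, rules):
    """Evaluate each metadata-driven rule and report failing row counts."""
    results = []
    for rule in rules:
        col = F.col(rule["column"])
        if rule["check"] == "not_null":
            failed = df.filter(col.isNull()).count()
        elif rule["check"] == "non_negative":
            failed = df.filter(col < 0).count()
        results.append({**rule, "failed_rows": failed})
    return results

transactions = spark.read.parquet("hdfs:///aml/transactions/")  # illustrative path
for outcome in run_checks(transactions, DQ_RULES):
    print(outcome)
```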

Outcomes
The implemented solution delivered tangible outcomes, including:
- Centralized data hub handling 100+ PB of data and serving ~5,000 users across 18 regional hubs covering several countries
- Ingestion of 30+ million transactions per day from different sources
- Greater insights, with 1.5+ billion transactions scanned every month
- False positives reduced by over 30%
- AML data storage cost reduced to <10 cents per GB per year
- Extended support to multiple countries and business lines across six global regions; legacy instances reduced from 30+ to <10

Investment warehouse enhances communications



Client
A U.S.-based investment bank
Goal
Improve data collation and information quality for enhanced marketing and client reporting functions
Tools and Technologies
Composite C1, Oracle DB, PostgreSQL, Vermilion Reporting Suite, Python, MS SQL Server, React.js
Business Challenge
The client's existing investment data structure lacked a single source of truth for investment and performance data. The account management and marketing teams were expending significant manual effort to track portfolio performance, identify opportunities, and ensure accurate client reporting. The time-consuming, manual processes for generating marketing exhibits and client reports were highly error-prone.

Solution
Iris implemented a comprehensive investment data infrastructure for a single source of truth and improved reporting capabilities for marketing content and client report generation.
An automated Quality Assurance process was instituted to validate the information in critical marketing materials, such as fact sheets, snapshots, sales kits, and flyers, against the respective data source systems.
Retail and institutional portals were developed to provide a consolidated view of portfolios, with the ability to drill down to underlying assets, AUM (Assets Under Management) trends, incentives, commissions, and active opportunities.
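
The automated QA process above can be illustrated with a minimal sketch that compares exhibit figures against source-of-truth values within per-field tolerances; the field names, values, and tolerances here are hypothetical.

```python
import math

# Figures as printed on a generated fact sheet (hypothetical) ...
exhibit = {"aum_musd": 1250.4, "ytd_return_pct": 6.32, "holdings": 148}
# ... and the same measures pulled from the source-of-truth systems.
source = {"aum_musd": 1250.4, "ytd_return_pct": 6.31, "holdings": 148}

# Per-field tolerance: 0 means the figure must match exactly.
TOLERANCE = {"aum_musd": 0.05, "ytd_return_pct": 0.005, "holdings": 0}

def validate(exhibit, source, tolerance):
    """Flag every exhibit figure that drifts from its source system value."""
    return [
        (field, shown, source[field])
        for field, shown in exhibit.items()
        if not math.isclose(shown, source[field], abs_tol=tolerance.get(field, 0))
    ]

for field, shown, expected in validate(exhibit, source, TOLERANCE):
    print(f"Mismatch in {field}: exhibit={shown}, source={expected}")
```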

Outcomes
The new data infrastructure delivered a holistic, on-demand view of investment details, including performance characteristics, breakdowns, attributions, and holdings, to the client's marketing team and account managers with:
- ~95% reduction in performance data and exhibit information discrepancies
- ~60% improvement in operational efficiency in core marketing and client reporting functions

Cloud data lakehouse for single source of truth



Client
A medical devices and fertility solutions company
Goal
Establish a cloud data warehouse for a single source of truth and timely month-end activities
Tools and Technologies
Azure Data Factory, Azure Data Lake, Power BI, Synapse Analytics
Business Challenge
The client had multiple instances of ERPs, sales systems, and warehouses built on obsolete technology and frameworks. These systems siloed the data, resulting in inconsistent versions of the truth. The client's finance and sales teams were struggling to reconcile data offline and feed it back into the ERPs, causing significant delays in month-end activities.
As the systems of record were not synced and legacy reports were built on interim warehouses, line managers and executive teams could not extract actionable, comprehensive insights. A solution to onboard and integrate new datasets on an ongoing basis was also required to support mergers and acquisitions.

Solution
A strategy to transform data and BI applications to MS Azure was finalized. The transformation was executed in phases that included discovery, report rationalization, and the foundational build of a global system of reporting.
The solution included data ingestion with Azure Data Factory; data storage and processing with Azure Data Lake and Synapse Analytics; and reports and dashboards with Power BI.
A utility was conceptualized and delivered to accelerate the onboarding and integration of new data entities in support of mergers and acquisitions (see the sketch below).
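
A minimal sketch of what such an onboarding utility can look like on Synapse Spark, assuming a hypothetical entity configuration and illustrative ADLS storage paths; the delivered utility's configuration and pipeline details differ.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("entity-onboarding").getOrCreate()

# Per-entity configuration: onboarding a new dataset means adding one
# entry (in practice, one row in a control table), not writing new code.
ENTITIES = [
    {
        "name": "sales_orders",
        "source": "abfss://raw@contosolake.dfs.core.windows.net/erp_a/orders/",
        "format": "csv",
        "target": "abfss://curated@contosolake.dfs.core.windows.net/sales_orders/",
        "partition_by": "order_date",
    },
]

def onboard(entity: dict) -> None:
    """Land one entity into the curated zone through a single shared path."""
    df = (
        spark.read.format(entity["format"])
        .option("header", True)
        .load(entity["source"])
    )
    df.write.mode("overwrite").partitionBy(entity["partition_by"]).parquet(entity["target"])

for entity in ENTITIES:
    onboard(entity)
```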

Outcomes
Iris data practitioners helped the client overcome key challenges and advance data warehousing capabilities by:
- Establishing a "single version of the truth" that enabled data-driven decisions and timely completion of month-end and other critical activities
- Delivering analytics for “Order to Cash” processes, including subject areas of sales, inventory, shipping, finance, etc.
- Facilitating the generation of actionable, insightful reports and dashboards, allowing "self-service" consumption for the business leadership

Software transformation gets FDIC compliance for bank

Client
A global investment bank
Goal
Establish a unified functional validation system for FDIC compliance
Tools and Technologies
SQL Server, Sybase, Data Lake, UTM, .NET, DTA, Control-M, ALM, JIRA, Git, RLM, Nexus, Unix, WinSCP, PuTTY, Python, PyCharm, Confluence, Rabacus, SNS, and Datawatch
Business Challenge
The client was mandated to comply with new QFC (Qualified Financial Contracts) regulations. It also needed to perform in-depth functional validation across a revamped data platform to ensure it could process, review, and submit to the FDIC (Federal Deposit Insurance Corp.), on time, the required daily reports on the open QFC positions of all its counterparties.
The project entailed immediate availability and processing of accurate QFC information at the close of each business day, so data could be swiftly assessed and exceptions and exclusions noted for early corrective action. It also aimed to help the client meet stringent deadlines across varied report formats. Any breach or delay in compliance could expose the bank to hefty fines and reputational damage.

Solution
Iris revamped the entire system and performed end-to-end quality assurance and testing across the new regulatory reporting platform. This meant validating the transformed multi-layer database, user interface (UI), business process rules, and downstream applications.
We identified and solved workflow design gaps affecting data reporting on all open positions, agreements, margins, collateral, and corporate entities, thus enhancing the capability to address irregularities. Our experts established an integrated, collaborative system that consolidates transaction and reference data within a single platform, incorporating 166 distinct controls for data completeness, accuracy, consistency, and timeliness within a strategic framework (see the sketch below).
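
To illustrate the control framework, here is a minimal sketch of how completeness and timeliness controls over a daily QFC batch might be expressed; the control IDs, cutoff, and batch fields are hypothetical, and only two of the 166 controls are shown.

```python
from dataclasses import dataclass
from datetime import datetime, time
from typing import Callable

@dataclass
class Control:
    control_id: str
    dimension: str  # completeness | accuracy | consistency | timeliness
    check: Callable[[dict], bool]

# Two of the 166 controls, expressed declaratively (hypothetical rules).
CONTROLS = [
    Control("QFC-001", "completeness",
            lambda b: b["positions_received"] == b["positions_expected"]),
    Control("QFC-002", "timeliness",
            lambda b: b["received_at"].time() <= time(18, 0)),  # 6 pm cutoff
]

def run_controls(batch: dict) -> list[str]:
    """Return the IDs of controls that fail for a daily QFC batch."""
    return [c.control_id for c in CONTROLS if not c.check(batch)]

batch = {
    "positions_received": 41_250,
    "positions_expected": 41_300,
    "received_at": datetime(2024, 1, 5, 17, 42),
}
print(run_controls(batch))  # -> ['QFC-001']: 50 positions missing
```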

Outcomes
Our quality assurance and testing solution delivered the following impacts:
- Faster and more efficient internal analysis with highly accurate QFC open positions
- 100% compliance with timing and format of required daily QFC report submissions to the FDIC
- Significant decrease in exceptions before the platform went live, and drastically fewer critical defects post-implementation
- An intuitive UI dashboard reflecting the real-time status of critical underlying data volumes, leakages, job runs, and other statistics

Platform re-engineering for operational efficiency



Client
One of the top 20 brokerage banks in North America
Goal
Modernize an existing, licensed data platform to meet the increasing volume of transactions and product offerings
Tools and Technologies
Python, Core Java, Oracle, ETL Framework, Apache Zookeeper, Anaconda, Maven, Bamboo, Sonar, Bitbucket
Business Challenge
The client had a licensed data platform for enterprise-wide risk and compliance operations. Spiking volumes across various financial product offerings and trades were constraining processes and limiting the analytical capabilities of the existing platform.
A system upgrade was required to support related, complex credit risk calculations. These calculations serve as the basis for several thousand bankers and traders to make loan and investment decisions for customers. System modernization would also cater to internal transaction and regulatory reporting requirements.

Solution
Iris system re-engineering experts designed and implemented a scalable, highly configurable data extraction platform with a global data architecture. This ETL framework-based platform enables faster, more efficient onboarding, consolidation, and processing of the numerous variable product and trading data input sources (see the sketch below).
The re-engineered platform was enabled with value-adds and tools to automate, tabulate, compare, reserve, validate, and test data. We integrated the data extraction platform seamlessly with downstream risk applications and built in the adaptability to accommodate operational and business needs.
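
A minimal sketch of configuration-driven, parallel feed loading, the pattern behind the throughput gains noted below; the feed definitions, `load_feed` stub, and worker count are illustrative assumptions, not the platform's actual framework.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Feeds are defined by configuration, not code, so a new product or
# trading source is onboarded by adding an entry (fields are illustrative).
FEED_JOBS = [
    {"feed": "equities_trades", "path": "/feeds/eq/", "validate": True},
    {"feed": "fx_trades", "path": "/feeds/fx/", "validate": True},
    {"feed": "rates_reference", "path": "/feeds/rates/", "validate": False},
]

def load_feed(job: dict) -> str:
    """Extract, validate, and load one configured feed (stubbed here)."""
    # Real pipeline: extract -> tabulate -> compare -> validate -> load.
    return f"{job['feed']}: loaded"

# Running independent feeds in parallel is what drives faster throughput.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(load_feed, job) for job in FEED_JOBS]
    for done in as_completed(futures):
        print(done.result())
```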

Outcomes
Our data platform re-engineering solution delivered enormous benefits in user experience, data quality, and risk management capabilities. Key outcomes include:
- Quicker, real-time configuration and execution of 500+ trade feed loading jobs
- Downtime reduced to a minimum even during the trade reference data changes
- 15% faster onboarding of the new feed or data source
- Nearly 20% faster throughput for various critical feeds with parallel processing feature
- Reduced anomalies and duplication with improved consistency
- 35-40% savings in annual third-party platform/module license fees
- Standardized and streamlined onboarding processes and turnaround times, scaling operational efficiency

SFTR solution strengthened market leadership

Client
A leading provider of market data and trading services
Goal
Support complex regulatory reporting with an automated solution
Tools and Technologies
Java, Spring Boot, Apache Camel, CXF, Drools BRE, Oracle, JBoss Fuse, Elasticsearch, Git, Bitbucket, Sonar, Maven
Business Challenge
The client offers an automated, integrated solution to its clients in the European Union (EU) for complying with the Securities Financing Transactions Regulation (SFTR).
Effective in recent years, SFTR requires timely, detailed reporting based on multitudes of data, systems, collateral, and lifecycle events. The voluminous data is captured from hundreds of millions of daily transactions and reported to multiple trade repositories registered with the European Securities and Markets Authority (ESMA).
Non-compliance at any stage is risky and potentially very costly for all trade counterparties: broker-dealers, banks, asset managers, and institutional investors.

Solution
Experienced in diverse technologies, big data, and capital markets, the Iris team developed a streamlined, end-to-end data reporting platform with complex trade matching and monitoring systems. Improving speed, accuracy, and flexibility, the new architecture supports high trade concurrency and acceptance rates with parallel processing of millions of transactions.
The delivered solution also enabled optimal load balancing and match reconciliation at the trade repository. Built on microservices, the system implemented functional enhancements while accommodating future scalability, standardization, data quality, and security requirements. A Unique Transaction Identifier (UTI) subsystem was also developed for sharing and matching counterparty transactions (see the sketch below), enabling plug-and-play setup for new repositories and supporting any changes in outbound or inbound data report formats required by ESMA or clients. Improved dashboards and search pages helped end users better configure and track their transactions.
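
A minimal sketch of UTI-based pairing and field reconciliation; the report fields and matching rules are illustrative, not the ESMA schema or the client's matching logic.

```python
from collections import defaultdict

# Reports submitted by the two counterparties to each SFT; field names
# are illustrative, not the ESMA schema.
reports = [
    {"uti": "UTI-001", "side": "A", "notional": 5_000_000, "collateral": "GOVT"},
    {"uti": "UTI-001", "side": "B", "notional": 5_000_000, "collateral": "GOVT"},
    {"uti": "UTI-002", "side": "A", "notional": 1_200_000, "collateral": "CORP"},
]

def pair_by_uti(reports):
    """Group counterparty reports that share a UTI."""
    pairs = defaultdict(list)
    for report in reports:
        pairs[report["uti"]].append(report)
    return pairs

def reconcile(pair, fields=("notional", "collateral")):
    """Return the fields on which the two sides of a pair disagree."""
    side_a, side_b = pair
    return [f for f in fields if side_a[f] != side_b[f]]

for uti, sides in pair_by_uti(reports).items():
    if len(sides) != 2:
        print(f"{uti}: unpaired, awaiting counterparty report")
    else:
        breaks = reconcile(sides)
        print(f"{uti}: {'matched' if not breaks else 'breaks in ' + ', '.join(breaks)}")
```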

Outcomes
The nimble delivery and successful roll-out of the new SFTR platform delivered the desired strategic competitive advantage to the client for maintaining its EU market leader position. The consolidated solution also helped in:
- Generating additional revenue from extending the new reporting services to 17 firms
- Beating the industry benchmark transaction acceptance rate (~91%) with a higher rate (~97%), along with match reconciliation at the trade repository
- Supporting a high throughput of 6 million transactions per hour, scalable up to 10 million

Reporting transformation with data science and AI



Client
One of the world's leading banks
Goal
Improve efficiency in disclosure and reporting
Tools and Technologies
Python – SciPy, Pytesseract, NumPy, Statistics
Business Challenge
The client relies upon a centralized operations team to produce monthly net asset value (NAV) and other financial reports for its international hedge funds — from data contained in 2,300 separate monthly investment fund performance reports. With batch receipts of rarely consistent file formats — PDF, Excel, emails, and images — the process to read each report, capture key info, and create and distribute new metrics using the bank’s traditional tools and systems was highly manual, time-consuming, error-prone, and costly.

Solution
Iris developed a Data Science solution that rapidly and accurately extracts tabular data from thousands of variably formatted documents. Using a statistical, AI-based algorithm featuring unsupervised learning, it auto-detects, construes, and resolves issues for every data point, configuration, and value. Complex inputs are calculated, consolidated, and mapped per predefined templates and downstream business needs, efficiently generating the numerous distinct period-end financial disclosures required.
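
Pytesseract appears in the tools list; here is a minimal sketch of an OCR front end that buckets word tokens into rows by vertical position, with an illustrative file name and thresholds. The production solution's unsupervised, statistical resolution of layouts is substantially richer than this row bucketing.

```python
import pytesseract
from PIL import Image

# OCR a scanned fund-report page into word-level tokens with positions.
page = Image.open("fund_report_page1.png")  # illustrative file name
ocr = pytesseract.image_to_data(page, output_type=pytesseract.Output.DICT)

# Bucket confident tokens into rows by vertical position; reconstructing
# columns and their meaning from these buckets is where the statistical,
# unsupervised logic comes in.
rows = {}
for text, top, conf in zip(ocr["text"], ocr["top"], ocr["conf"]):
    if text.strip() and float(conf) > 60:
        rows.setdefault(int(top) // 12, []).append(text)

for _, cells in sorted(rows.items()):
    print(" | ".join(cells))
```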

Outcomes
The high solution accuracy helped the client’s global NAV reporting team significantly improve precision, efficiency, quality, turnaround time, and flexibility. The delivered solution contributed to:
- 90-95% reduction in operational efforts
- 99% accuracy in processing variable inputs
- Zero rework effort and cost
Our highly customizable and scalable solution can be seamlessly integrated with existing reporting applications and MS Outlook while accommodating additional volumes, report types, and business units.
