Introducing Marti – CardoAI’s chatbot!

The chatbot trend has taken the commercial world by storm: bots and digital assistants are becoming not only the norm but a primary channel for providing services to prospective clients.

With growing interest and research in Artificial Intelligence (AI), Natural Language Processing (NLP) and machine learning, bots are becoming progressively more efficient. Nowadays, chatbots are ubiquitous, from booking a holiday to ordering in a restaurant.

In Fintech, however, chatbots serve a more tailored purpose, supporting users along their journey through an application: from setting up accounts in digital banking solutions and assisting in the lending process via online forms, to lending robots that carry out investment strategies for retail or institutional investors.

Chatbots have become an intelligent solution for the financial and banking industry. They have eliminated long queues at branches, saving time and energy and giving customers the freedom to get work done from anywhere without compromising safety. [1]

For this reason, CardoAI’s next project is developing and improving Marti – the chatbot integrated inside the digital lending product to guide our users and help them navigate the platform with a personal assistant.

Say hello to Marti!

Upon logging in to the digital lending product, Marti pops up ready to accommodate the user’s needs through an interactive side chat that is available at all times.

Marti greeting users upon sign in

Thanks to Marti, all our users can now check their portfolio’s balance, the status of trades across countries and currencies, and several other statistics that help them grasp the portfolio’s position and performance through a simple chat request.

A sample interaction with Marti

Marti is also clever enough to suggest intents – possible actions the user might request and information they may want to retrieve – in a couple of seconds.

Not only can the bot display real-time financial information, but it also processes this data to provide insights and graphs based on it.

Marti displaying in-chat graphs

The chatbot can render numerous types of charts, such as line and doughnut charts, lets the user select and download files within the chat, and can even change the theme of the whole platform!

Our data science team’s goal is, through Marti, to automate and facilitate all the operations already available within the platform, enabling the end user to retrieve data and navigate the platform in the easiest way possible.

Disclaimer: The statistics shown in this post are obfuscated to preserve our clients’ data.


[1] How AI Chatbots Play A Role In The Fintech Industry

This was a collaborative post between myself and Elitjon Metaliaj.

Data leverage and securitizations

From leveraging data to DATA LEVERAGE

What is data leverage and how can you use this concept to accelerate your business growth? Read this article to find out more about the process of leveraging data.

Imagine being a bank, an asset manager, or an investment fund with a well-established franchise in the loan industry that wants to expand by tapping into new markets, products or geographies. You would have quite a large database covering your existing activity in terms of both clients and loans.

But what about a new market you want to target? Do you need to start blind and collect information from scratch, putting your capital at stake and waiting months or years before having a solid base of data that can help you succeed?

Is there an alternative way?

So far financial institutions have been very vigilant in optimizing the use of their capital: from regulatory capital for banks to equity capital for less regulated entities, the key target is to extract the highest return. The most common way to increase the return on available capital is leverage: gaining access to extra resources amplifies ROE.

What is data leverage

Over time the concept of leverage has been extended to other fields, data among them. The expression “leveraging data” indicates the process of turning raw information into valuable, actionable insights.

Leveraging available information is good practice, and companies should take advantage of it to make data-driven decisions at both a strategic and an operational level.

Is this all?

What if instead of just maximizing the use of available data, one could actually increase the amount of data available?

I am not talking about expanding the information available via data enrichment or the use of so-called alternative data (which is quite hyped at the moment), as these would only add more dimensions to look at, not increase the actual data available.

Neither am I talking about buying external datasets.

Data leverage and securitizations

I guess it’s about time I get to the point…

Have you ever thought of applying the concept of leverage to data? Not in terms of leveraging available data, but actually increasing the data available through an approach that literally mimics financial leverage: applying a multiplier to the data a financial institution has available.

The concept is very simple: every time a financial institution extends a loan to a counterparty (or buys a bond), it gets data about a single counterparty (sector, geography, rating/FICO score, financial ratios, income, etc.), plus it can track the performance of the transaction over time (delays, renegotiations, defaults, recoveries, etc.). To increase the datapoints available, the financial institution should increase the number of loans it extends, or…

It can invest in a SECURITIZATION!

By buying a note (or just part of one) in a securitization, investors gain exposure to the whole underlying portfolio of loans. And portfolio positions range from THOUSANDS for the smaller SME loan pools to HUNDREDS OF THOUSANDS for pools made of consumer loans.

How long would it take, and what would it cost, to originate the same volumes in terms of time, capital, organization, and all the rest? A rhetorical question really, as it is not only time-consuming but resource-intensive.

All clear then? But can I just invest €1 in a AAA senior note and get loads of data?

Unfortunately, it’s not so easy…

Although the Securitisation Regulation 2017/2402 gives investors the right to receive loan-level information on each transaction, in most cases there is no button (or magic wand) that allows an investor to retrieve data in a standardized format or in a timely manner. Data are typically made available quarterly via PDF reports (or Excel in the best scenarios), making it very hard for investors to translate them into meaningful, actionable information.

So is data leverage just a (nice) theory that cannot be realized in the real world?

Actually, there are solutions now!

Nowadays there are technologies, like CARDO AI, that facilitate data retrieval from multiple sources, run a data health check, and standardize the results.

This process requires just a couple of clicks and takes just a few seconds (don’t even try to compare it with your Excel files!).

Now that I have shown you how to leverage data (which might have become BIG DATA with the appropriate multiplier), here are some examples of how to use it:

– Data can be used by servicers to adjust investment decisions in dynamic transactions (e.g. those with a ramp-up or reinvestment period or with a higher turnover such as trade receivables) to optimize the risk-return profile of the pool.

– They can be used by risk management departments to assess the rating of a transaction (e.g. applying scenarios derived from data of comparable pools) or to assess the possible impact of events affecting a sector (e.g. tourism during the pandemic) or a geography (e.g. after an earthquake, a flood or a particularly cold winter).

– They can be used to drive decision-making on new investments: comparing scenarios provided by the arrangers, assessing the impact of a new transaction on the diversification of the overall portfolio, and negotiating a price more in line with the risk profile of a particular pool of loans.

– In the future, data will likely be used to assess the ESG profile of a securitization’s collateral pool and compare it with that of other transactions.


What are the challenges related to ESG incorporation in securitised products? The PRI (Principles for Responsible Investment – the world’s leading advocate for sustainable investing, founded on a United Nations initiative) has recently published an interesting report on the incorporation of ESG in securitization products.

On one side, regulators are increasing transparency requirements on sustainability-related information for investment products. On the other, client expectations and risk management are driving demand for considering the long-term impact and sustainability of investment choices. As a result of these forces, investors and asset managers are widening and improving ESG policies, but few of these are tailored to securitization products.

ESG information wanted by investors

Source: PRI

For ESG incorporation in securitized products to be effective, a holistic, multi-pronged approach needs to be developed. Compared to other asset classes, though, the securitization market shows some additional complexity, including:

– Transaction structure: this implies a multi-level assessment of practices and policies covering the Sponsor and/or Issuer, Originator, Servicer, deal structure, loans, and collateral or guarantees. This is further complicated by the fact that parties can occupy multiple roles (e.g. servicer and originator) or involve private entities, which tend to be less transparent.

– Adequate data: practitioners consider the ESG information in current deal documentation, marketing materials, and underlying portfolio disclosures insufficient to comprehensively analyze most securitized products.

– No ESG reporting standards for servicers/originators: Relevant ESG information on collateral often lacks uniformity and is not comprehensive.

– A diverse pool of underlying assets: the complexity and diversity of underlying collateral (and the sectors covered) make it difficult to build proprietary ESG frameworks that can be used for assessment.

– A lack of coverage by third-party ESG information providers: ESG information providers have limited coverage of securitized products. This is not surprising given that responsible investment originally developed in equities and only recently expanded to debt capital markets. Moreover, the leveraged finance market includes a high proportion of privately owned and small-cap companies that tend to disclose less information.

– Lack of a clear ESG premium: unlike the so-called greenium that typically applies to green bonds, securitization transactions do not show meaningful price differentiation when incorporating ESG criteria. [1]

As a result, of the 2,000 signatories that reported on their investment activities to the PRI in 2020, only 215 indicated how they incorporate ESG factors into their securitized product investments.

ESG incorporation in securitized products is at a very early stage

Source: PRI

To address the complexity above and sustain more ESG-driven securitizations, the PRI has identified data quality, availability, and consistency as the main solution: a combination of robust in-house and third-party data sources is likely to drive investor confidence in ESG incorporation across securitized credit markets.

For further information please refer to the following link:

[1] Based on European ESG CLOs issued between March 2018 and August 2020 versus traditional CLOs.

CARDO AI supports a new securitization

CARDO AI, with Banca Valsabbina and Azimut, supports a new securitization of € 200 million for Italian SMEs

Details on the transaction

This transaction follows the previous securitization of € 100 million launched in September 2020, as part of the “Slancio Italia” project.

The new resources of € 200 million will be disbursed through loans to SMEs with a maximum duration of 6 years, including 1 year of pre-amortization, and amounts ranging from € 50,000 to € 1,500,000, with a guarantee of up to 90% from the Italian Government’s Central Guarantee Fund (in Italian: Fondo Centrale di Garanzia) for SMEs.

This new securitization will help SMEs cope with and overcome the crisis linked to the spread of the pandemic. The operation – which sees the collaboration of the Italian fintech that supports SMEs in accessing credit, Banca Valsabbina (a Brescian bank present with 70 branches in Lombardy, Veneto, Emilia-Romagna, Piedmont, and Trentino-Alto Adige), Azimut (one of the biggest asset managers in Italy), and CARDO AI (providing institutional investors with advanced technology for private markets to make better investment decisions) – is aimed at supporting the real economy.

Compared to the operation of September 2020, the amount available to SMEs has increased from € 100 million to € 200 million, guaranteeing Italian SMEs firepower for their growth strategies and better crisis navigation.

The loans will have a maximum duration of 6 years, including one year of pre-amortization, an amount ranging from € 50,000 to € 1,500,000, and a guarantee of up to 90% from the Central Guarantee Fund (in Italian: Fondo Centrale di Garanzia) for SMEs. Companies applying for a loan will be evaluated within 24 hours based on a credit assessment conducted through proprietary artificial intelligence algorithms. The automatic process is then followed by verification by a credit analyst and, subsequently, the underwriting and disbursement of the loan within a few working days.

The Slancio Italia Project

The “Slancio Italia” project was launched at the beginning of the pandemic, in March 2020, and is managed by and financed by credit funds such as Azimut as part of the strategic agreement between the two companies established in May 2020 with the creation of Azimut Capital Tech. Azimut also plays the fundamental role of underwriter of the junior tranche through its private debt funds. Banca Valsabbina supported the two companies as arranger of the transaction and Account Bank, as well as underwriter of the senior and mezzanine tranches, for a maximum commitment of € 180,000,000. CARDO AI is a fintech supporting institutional investors in modernizing their portfolio management with advanced technology and data science. In this particular operation, Cardo AI has acted as data agent, facilitating end-to-end data management and ad-hoc reporting. Every institutional investor that subscribed to the notes of the securitization vehicle has received dedicated access to the Cardo AI platform, enabling complete transparency at the beginning of and throughout the lifetime of the operation.

Hogan Lovells Studio Legale provided legal assistance as transaction legal counsel, with a team led by Partner Corrado Fiscale. The Master Servicer will be Centotrenta Servicing S.p.A., while Banca Finanziaria Internazionale S.p.A. (in short Banca Finint) operates in the roles of Paying Agent, Issuing Agent, and Representative of the Noteholders (in short RoN).

Since the outbreak of the pandemic, businesses have not stopped needing liquidity. According to the most recent Istat data, in 2020 there was a decline in the turnover of service companies by 12.1%, the largest since 2001. “The loss of turnover affected almost all the sectors surveyed, particularly in the activities most affected by the restrictions related to the health emergency”, writes the statistical institute.

In this context, in 2020 Fintech came to the rescue of companies with 1.65 billion euros (ItaliaFintech data), an increase of 450% compared to the 372 million disbursed in 2019. The number of new Italian companies getting support from Fintechs also rose, from 1,092 in 2019 to 5,464 in 2020.

Gabriele Blei, CEO of the Azimut Group, comments: “This initiative is in line with our project of Banca Sintetica, which is an alternative to the traditional banking model to support the needs of small and medium-sized Italian enterprises through the use of fintech platforms with which to finance businesses effectively and quickly. Our goal is to deliver to Italian SMEs, loans of 1.2 billion euros over the next 5 years and to do so we can count on the support from our diverse range of alternative customer lending strategies both private and institutional. A project that allows us to create new performance opportunities for the capital of our customers, fueling the virtuous circle between private savings and businesses”.

“After the securitization carried out last year, we are happy to be a partner in this new operation, which doubles the resources made available to SMEs – said Marco Bonetti, Joint General Manager of Banca Valsabbina – Our institute will continue to look favorably on initiatives like this, which on one hand are an important element of support for SMEs, which especially in times of crisis such as the current one should be supported, particularly in terms of liquidity, and on the other hand confirm the importance and value of the cooperation between the traditional banking system and fintech, a sector in which is positioned as one of the most attractive”, Bonetti concluded.

“We believe that the path of collaboration is the main way to truly innovate the finance world and make it more efficient and more functional to the needs, including emerging ones, of the real economy – comments Ivan Pellegrini, CEO of – In a delicate period like the one Italy is facing, we felt the need to make our skills available to provide resources to all those healthy SMEs that are having moments of difficulty but want to restart and look to the future. This is why we are pleased to have partners such as Banca Valsabbina and Azimut, with whom we have created a solid and structural alliance: for us, it is an evolution, we are no longer just providers of loans to the real economy, but technological enablers for traditional finance”.

“Data is now the world’s most valuable asset. We see every day how it empowers smarter and safer investment decisions and how it can bring unparalleled transparency and intelligence by replacing antiquated manual processes and streamlining the reporting workflow – comments Altin Kadareja, CEO of Cardo AI – As data agents, we are thrilled to participate in and support this securitization transaction, granting investors easy access to normalized loan-level data along with fully integrated analytics and reporting tools, where users can review composition, analyse performance, and project collateral and tranche cashflows”.

“… with Azimut and Banca Valsabbina in a new € 200 million securitization.” Finance Community.

EU Taxonomy: A tool for ESG transition or a nightmare?

What is the EU Taxonomy and what are its implications for sustainable finance? In this article you will find a practical guide to the main steps, and case study results, on how to comply with the EU Taxonomy.

What is the EU Taxonomy

The EU Taxonomy is one of the most significant developments in sustainable finance and will have wide-ranging implications for investors and issuers working in the EU and beyond. This tool helps investors navigate the transition to a low carbon, resilient and resource-efficient economy by assessing to what degree investment portfolios (both equity and fixed income) are aligned with the European environmental objectives:

  1. Climate change mitigation
  2. Climate change adaptation
  3. Sustainable use and protection of water and marine resources
  4. Transition to a circular economy
  5. Pollution prevention and control
  6. Protection and restoration of biodiversity and ecosystems

As with all other regulations, nothing comes easy. Applying the Taxonomy requires a five-step approach that in reality becomes seven or eight steps, depending on data availability, internal ESG readiness and the disclosure level of the invested companies.

One of the key disclosures that investors need to make is the definition of the proportion of underlying investments that are Taxonomy-aligned, expressed as a percentage of the investment, fund, or portfolio – including details on the respective proportions of enabling and transition activities.

Source: Cardo AI analysis

To do that, investors need to go through the following steps:

Step 0 – Translate every sector/industry classification system to NACE economic activity code

To determine eligible economic activities in the Taxonomy, investors need to map the classification system currently in use – e.g. NAICS or BICS (already mapped by the Taxonomy), GICS, ICB, SIC, TRBC, etc. – to the European industry classification system (NACE).

Step 1 – Break down the invested company’s sectors by turnover or capex (and, if relevant, opex) to determine if these activities are listed in the Taxonomy

Starting from turnover, investors need to be able to break down the sectors of activity in which the company’s funds are allocated (both equity and fixed income). Then, map these sectors to the Taxonomy’s list of activities and flag those that are present.

Step 2 – Validate if the companies meet the substantial contribution criteria

Every company in the portfolio is required to validate whether each economic activity meets the substantial contribution criteria for the climate mitigation and/or adaptation objectives. Substantial contribution is assessed through different screening tests carried out against a collection of thresholds by sector.

To do that, investors need the data reported by the companies. Where data is not available – due to lack of reporting – estimating the data point or approximating the threshold could be a way to determine substantial contribution.

Step 3 – Validate if the companies meet the “do no significant harm” criteria

The third step requires investors to conduct a due-diligence-type process to verify that the company’s activities do no significant harm to the other environmental objectives, using a set of qualitative and quantitative tests. These are typically applied at the company level, looking at the production process or at the use phase and end-of-life treatment of the products produced.

Step 4 – Control if there are any violations of the social minimum safeguards

Investors need to conduct due diligence to control for any negative impacts on the minimum safeguards related to the UNGP (United Nations Guiding Principles on Business and Human Rights), OECD (Organisation for Economic Co-operation and Development), and ILO (International Labour Organization) conventions. The OECD Guidelines for MNEs (Multinational Enterprises), which ensure compliance with the qualitative DNSH criteria and minimum safeguards, are recommended for the due diligence process.

Step 5 – Calculate the alignment of investment with the Taxonomy and prepare disclosure at the investment product level

Once the previous steps have been completed and the aligned portions of the companies in the portfolio have been identified, investors can calculate the alignment of their funds with the Taxonomy (for example, if 10% of a fund is invested in a company that makes 10% of its revenue from Taxonomy-aligned activities, the fund is 1% Taxonomy-aligned for that investment, and so on).
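The weighted-sum logic in the example above can be sketched in a few lines of Python (an illustrative sketch, not CARDO AI code; the function name and inputs are hypothetical):

```python
# Hypothetical sketch: a fund's Taxonomy alignment is the sum over holdings of
# (portfolio weight) x (that company's share of Taxonomy-aligned revenue).
def portfolio_alignment(holdings):
    """holdings: iterable of (weight, aligned_share) pairs, both as fractions."""
    return sum(weight * aligned_share for weight, aligned_share in holdings)

# 10% of the fund in a company with 10% aligned revenue contributes 1%,
# 20% in a company with 50% aligned revenue contributes 10%, and so on.
fund = [(0.10, 0.10), (0.20, 0.50), (0.70, 0.00)]
print(round(portfolio_alignment(fund), 4))  # 0.11, i.e. 11% Taxonomy-aligned
```

The same weighting applies whether alignment is measured on turnover, capex or opex; only the `aligned_share` input changes.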

Assessing a portfolio for Taxonomy alignment

Source: Taxonomy: Final report of the Technical Expert Group on Sustainable Finance (March 2020)

A great initiative has been the case studies on how to use the EU Taxonomy shared by the PRI (Principles for Responsible Investment: an investor initiative in partnership with UNEP Finance Initiative and UN Global Compact). Starting in late 2019, over 40 investment managers and asset owners worked to implement the Taxonomy on a voluntary basis in anticipation of upcoming European regulation.

Here is a summary of some of the case studies:

For more details on these case studies and practical EU taxonomy implementation recommendations contact us!


Sleep Tight with Automated Testing

In the initial steps of my career, when I had just started writing software, only the senior developers in my company felt responsible for writing tests for their code. Over the years, testing has become mainstream. But “mainstream” does not mean “universal”. Plenty of developers are still not comfortable with, or even exposed to, the practice of unit testing, let alone automated testing. And yet, a form of peer pressure causes them to play that close to the vest.

So I reach out to these folks to say: “Hey, no worries. You can learn, and you don’t even have to climb too steep a hill.” I’d like to revisit that approach here, today, in the form of a blog post.

Let’s get started with unit testing in Python using pytest, assuming you know absolutely nothing about it. Although the examples are beginner-friendly, this article is not a tutorial on how to write tests with pytest – the docs already cover that. I will focus more on the idea of testing itself.

Intro to automated testing

Automated testing is an extremely useful bug-killing tool for the modern Web developer. (Django docs)

As a definition, automated testing (or test automation) is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. In other words, it is a piece of software, completely separate from your source code, responsible for making sure the source code works as expected.

What kind of problems do you solve or avoid when you write tests?

  • When you are writing new code, you can use tests to validate that your code works as expected.
  • When you are refactoring or modifying old code, you can use tests to ensure your changes haven’t affected your application’s behavior unexpectedly.

Why do we write code to test other code? Why not just test manually?

Let’s say you are done with the development of a new feature and ready to deliver it to your Quality Assurance team. Your QA team will spend some time to make sure that all the edge cases are covered and the feature is ready for production. After deploying to production, you are asked to modify or add something new on top of that feature. Then you make all the required changes to fulfill the new requirements and deliver them to your QA team.

Can you spot the problem above?

Now the QA team is going to run the same manual tests as before, to make sure that the modifications didn’t break anything and all edge cases are still covered. That means time – time lost on the same tests, again and again.

What is the solution?

The solution is automating those tests. When you have automated tests, you can run them anytime you want (even in the middle of the night), in any environment you want (on your development machine or a CI server), and be sure you haven’t broken anything in your code – or, if you did break something, have a clear idea of what it is. In addition, automated tests are fast – faster than any QA team.

So yes, we should automate our tests – but how? Unit tests are a good start. They are easy to write and easy to maintain. So what is a unit test?

Automated testing: Unit tests

Unit testing is a software testing method by which individual units of source code are tested to determine whether they are fit for use. Unit tests take small pieces of source code and check whether those pieces operate as expected under given conditions. They also find problems early in the development cycle.

Finding bugs in the source code before release is critical. The cost of finding a bug when the code is written is far lower than finding it later. And finding a bug is just the beginning: after you find it, you have to work out what is causing it and how to solve it. Doing all of that on a module that is already in production is very expensive in terms of time and very stressful for the developer. You need to be quick – the code is in production, so the bug is too, and your users are facing it. And you need to be sure that fixing the problem hasn’t created new ones.

Unit tests are a perfect way to find these problems early, and they tell you exactly where a problem occurs, so you can skip the detection and identification of the bug and jump directly to solving it. Unit tests also have a clear result: they either pass (in which case there is nothing to show but a success message) or fail (in which case the failure message contains the exact check that failed). The output of a failing unit test contains a lot of information that helps you understand what is going wrong.

Real world example with a demo pizza app

Now let’s jump into a real-world example and show how unit testing works. We are going to work on a demo pizza application. Below is a simplified diagram that describes the application.

Demo Pizza App

So the diagram says that:

– Each topping and each size has its price
– Each pizza has some default toppings
– For each pizza order, we have to select a pizza, a size, and extra toppings optionally
– Each pizza order is linked with a single order
– Finally, each order holds some general information about the order

We are going to focus on writing some unit tests for the price calculation.

How is the price calculated?

1. Each pizza has a base price, which is the sum of the prices of the default toppings
2. Each pizza order has a price, which is the sum of the following:
2.1. price of pizza
2.2. price of size
2.3. sum of the extra toppings
3. Finally, each order has a price, which is the sum of the pizza order prices

Let’s have a look at the Pizza model class which we are going to test.

Pizza model
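The original gist is not reproduced here, so as a stand-in, here is a minimal sketch of the idea. The names Topping, Pizza and total_price follow the article; everything else is an assumption, and plain dataclasses are used instead of Django models so the sketch runs on its own:

```python
from dataclasses import dataclass, field
from decimal import Decimal

# In the real app these would be Django models linked by a ManyToManyField;
# plain dataclasses keep this sketch self-contained.
@dataclass
class Topping:
    name: str
    price: Decimal

@dataclass
class Pizza:
    name: str
    toppings: list = field(default_factory=list)

    @property
    def total_price(self) -> Decimal:
        # The pizza's base price is the sum of its toppings' prices
        return sum((t.price for t in self.toppings), Decimal("0"))
```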

So we have some toppings, each with its own price, and the sum of those prices gives us the total price of a Pizza instance. Let’s write some unit tests for this.
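The test gist is likewise not shown in this post; below is a sketch of what those two tests might look like (hypothetical code; minimal Topping/Pizza stand-ins are included so the block runs on its own, and the pytest-django DB mark is shown commented out because the stand-ins never touch a database):

```python
from dataclasses import dataclass, field
from decimal import Decimal

# Minimal stand-ins for the real Django models, so this sketch is runnable.
@dataclass
class Topping:
    name: str
    price: Decimal

@dataclass
class Pizza:
    name: str
    toppings: list = field(default_factory=list)

    @property
    def total_price(self):
        return sum((t.price for t in self.toppings), Decimal("0"))


# @pytest.mark.django_db  # required on each test when real models hit the DB
def test_pizza_price_with_toppings():
    # Preparation: create toppings and link them to a pizza
    tomato = Topping(name="tomato", price=Decimal("1.50"))
    cheese = Topping(name="mozzarella", price=Decimal("2.00"))
    pizza = Pizza(name="Margherita", toppings=[tomato, cheese])

    # Calculation: expected value computed by hand from the toppings
    expected_price = tomato.price + cheese.price

    # Assertion: the property must match the expected price
    assert pizza.total_price == expected_price


def test_pizza_price_without_toppings():
    assert Pizza(name="Plain").total_price == Decimal("0")
```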

In the gist above we have two tests for the price calculation of a Pizza instance: they check the price of a pizza with and without toppings. Let’s analyze them.

  1. Django DB mark

Unit tests in pytest are not allowed to access the DB by default. To enable DB access, we use this mark from the pytest-django package.

2. Preparation phase

Here we create and link all the required instances – pizzas and toppings – with each other. We assign a value to every required attribute, especially price.

3. Calculating results

We get the actual price by calling the total_price property of the Pizza instance. This property performs its calculation and returns a value.

We get the expected price by summing the prices of the pizza instance’s two toppings.

4. Assertion

It’s time to compare the values: if the price we get from the total_price property is the same as expected_price, the test will pass; otherwise, it will fail.

We can run the tests using the pytest command in a terminal window; here are the results:

Pizza price test results

So pytest searched for all tests inside the project, collected 2 tests, and ran them. pytest is configured using an ini file called pytest.ini, located at the root level of the project. Let’s have a look at the configuration file.
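The file itself is not shown in the post; based on the three options discussed, a minimal pytest.ini along these lines would match the description (the settings-module path is the article’s; treat the rest as an assumed reconstruction):

```ini
[pytest]
DJANGO_SETTINGS_MODULE = pizza_app.test_settings
python_files = test_*.py *_tests.py tests.py
addopts = --reuse-db --nomigrations
```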

1. DJANGO_SETTINGS_MODULE = pizza_app.test_settings

We have defined a separate settings file that pytest will use when setting Django up. This is a common practice to speed up the testing process. For example, if your model fields are supported by SQLite, you can use it to make the DB a lot faster while testing. You can use a different password hasher and remove password validators to speed up the creation of users. You can also change the DEBUG setting, which is False by default when running tests, to execute the tests in an environment as close as possible to production. You can view my test_settings file.

2. python_files = test_*.py *_tests.py tests.py

Here we define the naming convention for test files: every file whose name starts with test_, ends with _tests, or is exactly tests will be considered a test file during test discovery.

3. addopts = --reuse-db --nomigrations

Here we are saying that after test execution finishes, the test DB that was created should not be destroyed, so that it can be reused. Furthermore, when creating the DB, pytest should not inspect the Django migrations but instead check the models directly: this option can be faster when there are several migrations to apply for each app. Running pytest in the terminal will use the configurations provided in the ini file. These configurations can be overridden when running the command, in the form

pytest --create-db

The reason you may want to use this command is that, when you have any changes to your database schema, you can run the tests once with this command and a new database will be created. After that, you can continue running your tests only with the pytest command so all ini configs will be used again.

Write clean tests

Some developers think it is OK for testing code to be dirty and not compliant with the conventions you hold your source code to. I do not agree.

Let me describe the cost of writing dirty tests. At first, everything is fine: you wrote some tests super fast, the code is dirty, but that doesn’t matter because they are just tests. After some time, the requirements change, so you have to change your tests too. Now you have two options. The first is to spend the required time modifying the dirty testing code. The second is to place a skip mark on those tests, because you are almost sure that your code is working, and plan to deal with them later. Even if you want to spend time now to fix broken tests, there will always be a tight deadline that makes you choose the second option. And trust me, if you skip those tests, there will never be free time to fix them.

So what is the cost? Time. Lost time.

Remember the reason for writing tests? By writing dirty tests, we wasted our time writing them, and we will waste time again fixing or skipping them and facing bugs in production.

So, my suggestion is to always consider your testing code the same way you consider your source code. Write it as clean as you can.

Now let’s review our testing code. Is it clean? No.

Preparation phase

We are not interested at all in what values any of the attributes will have, not even price. We just want to create a pizza with some toppings. Imagine having to create an instance of a class with dozens of required fields. There must be an easier way.

Enter model-bakery. This is a must-have package if you want to write hundreds of tests in Django: it is capable of creating instances of a class given only the class name. It goes as follows:
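A sketch of the usage (model labels are assumptions):

```python
from model_bakery import baker

# model-bakery fills every required field with a sensible generated value
pizza = baker.make("pizza_app.Pizza")
toppings = baker.make("pizza_app.Topping", _quantity=2)
pizza.toppings.add(*toppings)
```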

It is as simple as that, but can we go further? The second test does almost the same thing, just without toppings. So we can move the creation of a pizza into a separate helper module.
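The module name was lost from the original post; assuming a module called tests/utils.py, the helper might look like:

```python
# tests/utils.py -- hypothetical module name
from model_bakery import baker

def create_pizza_with_toppings(number_of_toppings=2):
    """Create a Pizza with freshly created toppings attached."""
    pizza = baker.make("pizza_app.Pizza")
    toppings = baker.make("pizza_app.Topping", _quantity=number_of_toppings)
    pizza.toppings.add(*toppings)
    return pizza
```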

And the testing code:
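A sketch of the refactored tests (the helper module path is an assumption):

```python
import pytest
from tests.utils import create_pizza_with_toppings  # hypothetical path

@pytest.mark.django_db
def test_pizza_price_with_toppings():
    pizza = create_pizza_with_toppings(number_of_toppings=2)
    expected_price = sum(t.price for t in pizza.toppings.all())
    assert pizza.total_price == expected_price

@pytest.mark.django_db
def test_pizza_price_without_toppings():
    pizza = create_pizza_with_toppings(number_of_toppings=0)
    assert pizza.total_price == 0
```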


The testing code is a little clearer now, but as you may have noticed, we have two tests that test the same thing: the price of a pizza. The only thing that changes is the number of toppings. To make it simpler, we can use one of the best features of pytest: parametrization. What we want to parametrize is the number of toppings; it will be given to the testing function as a parameter, and pytest will run the test more than once, with the next number of toppings every time.

@pytest.mark.parametrize('number_of_toppings', list(range(0, 10)))

This line of code makes the test run 10 times, each time with a different number of toppings. Since number_of_toppings equal to 0 means a pizza without toppings, we do not need the second test anymore. This is the output of the pytest -v command (-v: verbose, display more information):
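Putting it together, the parametrized test might read (helper import path is an assumption):

```python
import pytest
from tests.utils import create_pizza_with_toppings  # hypothetical path

@pytest.mark.django_db
@pytest.mark.parametrize("number_of_toppings", list(range(0, 10)))
def test_pizza_price(number_of_toppings):
    pizza = create_pizza_with_toppings(number_of_toppings=number_of_toppings)
    expected_price = sum(t.price for t in pizza.toppings.all())
    assert pizza.total_price == expected_price
```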

Running parametrized tests

Mocking certain operations

Now we have a general understanding of how to approach tests and how to make them simple and clean. Another problem we will be facing while testing our web application is costly operations. By costly operations, I mean some side effects of source code that are out of the testing scope but will be executed while running the tests. For example, let’s suppose that for every order created we need to print out a receipt. Furthermore, let’s suppose that the actual printing is done by a microservice, which we contact via an API, providing the order details. Naturally, we don’t want to print out anything while we are running our tests. This means that, while testing order creation, we need to find a way to skip this API call. The mechanism that provides what we want is called mocking.

There is a pytest way to achieve this, called monkeypatch: this fixture provides helper methods for safely patching and mocking functionality in tests.

Let’s assume we have the following code to save and print out the receipt.
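A sketch of such code, assuming the receipt service is reached via requests.post (the URL, fields and related name are hypothetical):

```python
import requests
from django.db import models

RECEIPT_SERVICE_URL = "http://receipt-service.local/print"  # hypothetical endpoint

class Order(models.Model):
    price = models.DecimalField(max_digits=8, decimal_places=2, default=0)

    def save(self, *args, **kwargs):
        if self.pk:
            # rule 3: an order's price is the sum of its pizza order prices
            self.price = sum(po.price for po in self.pizza_orders.all())
        super().save(*args, **kwargs)
        # the side effect we do NOT want in tests: calling the printing microservice
        requests.post(RECEIPT_SERVICE_URL, json={"order_id": self.pk})
```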

What we want is to mock the post method of the requests module to do nothing instead of posting to the receipt microservice. We just want to test if the price is set while saving an order instance. The testing code:
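A sketch of that test (create_order is a hypothetical helper that creates an order from scratch):

```python
import pytest
import requests

from tests.utils import create_order  # hypothetical helper

def mock_post(*args, **kwargs):
    # do nothing instead of posting to the receipt microservice
    return None

@pytest.mark.django_db
def test_order_create(monkeypatch):
    monkeypatch.setattr(requests, "post", mock_post)
    order = create_order()
    assert order.price is not None
```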

Monkeypatch is a pytest fixture, so we can directly include it as a parameter in our testing function. After including it, this line does the work:

monkeypatch.setattr(requests, 'post', mock_post)

This command replaces the post method of the requests module with our mock_post function, in the scope of the test. Now, whenever requests.post is called inside the test, our mock function will be executed instead. If, for some reason, I need the response of the post method, I can write a mock class which behaves like the original response, i.e. implements the methods of it that we need.

Mock response class:
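A sketch of such a class (the payload shape is a made-up example):

```python
class MockResponse:
    """Behaves like the small part of requests.Response our code relies on."""

    status_code = 200

    def json(self):
        return {"receipt_id": 1}  # hypothetical payload

def mock_post(*args, **kwargs):
    # drop-in replacement for requests.post returning a fake response
    return MockResponse()
```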


We can play around with the MockResponse class as we like to fulfill the requirements of the code that we are mocking.


Debugging tests

Our automated tests will not always pass. Sometimes they will fail, which is precisely why we write them: to fail when they need to and tell us what is going wrong. After a test fails, it’s time to verify whether we are doing something wrong in the testing code or something is actually broken in our source code. To do that, we need more details about what is going on inside the application while the test is running. The source code we are testing can be hard to understand, and even tests can sometimes be hard to understand (although they shouldn’t be). To go deep and understand all the circumstances of a failing test, we need to debug it. Here are two ways to debug a test.

Using PyCharm

The first one is using a modern Integrated Development Environment, like PyCharm. First of all, to use pytest support in PyCharm, you need to set the Default test runner setting to pytest. You can find this setting under Settings > Tools > Python Integrated Tools.

After indexing your project, PyCharm will show a green run icon beside each test:

When you click on this icon you see the following:

By clicking Debug you can run this single test in a debug session and the execution of the program will stop in the breakpoints you have placed around the code.

Dropping to Pdb

If you are already on a terminal window and want to see some variables and other things very quickly, you can also drop into Pdb by using the built-in function breakpoint. In your code:

In your terminal:

If you press TAB two times you can view all available options:

Some of the options are as follows:

next: will execute the next line
cont: will continue execution until the next breakpoint
list: will display the code that will be executed


Test isolation

Test isolation is a topic worth an entire article of its own. I will treat it only briefly here, because if we are going to write tests, we definitely need to keep this concept in mind.

Basically, test isolation means that all tests must be independent of each other and of the other modules of our project. A test function must not make any assumptions about the application’s state, and must not depend on that state to be able to run. It shouldn’t matter whether a test runs first or last, alone or in a group of tests.

What is the point of all of that?

If a test depends on other tests, you are not guaranteed that it will always pass. If you are not sure about that, then you cannot trust the value this test produces: in other words, the test becomes unusable. For example, for our order_create test, we have created an order using the following util function:
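A sketch of that util function (model labels are assumptions):

```python
from model_bakery import baker

def create_order(number_of_pizzas=1):
    """Create an Order and its PizzaOrder rows from scratch,
    assuming nothing about the current state of the DB."""
    order = baker.make("pizza_app.Order")
    baker.make("pizza_app.PizzaOrder", order=order, _quantity=number_of_pizzas)
    order.save()  # recomputes the order price
    return order
```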

Here we have not assumed that a pizza order was already created in the DB by some other test; instead, we created it directly. Every single thing that a test needs in order to run must be created inside its body or through a database fixture. What the test needs to run must be obvious. If it is not, the test will be much harder to debug when it fails, because we don’t even know what it is receiving as input. And when you don’t know the exact input, you can’t check the output.

Final words on isolation: always make obvious what the tests need to run; tests should not depend on running order; and pay attention to assertion timing.

Flaky tests

A flaky test is a test that both passes and fails periodically without any code changes. Flaky tests are annoying, but they can also be quite costly, since they often require engineers to retrigger entire builds on CI and waste a lot of time waiting for new builds to complete successfully. A flaky test is usually the result of a poorly isolated test: it depends on some global state that changes over time. This kind of test gives you false negatives. It is better to have no tests at all than tests that occasionally fail without any code changes; such tests will make you and your organization lose confidence in tests and turn back to manual verification. Here is an article about them; check out how the Spotify engineers deal with flaky tests.

In conclusion

We have discussed some ways of writing clean and maintainable tests. Key topics treated:

  • Pytest, as an advanced testing framework that has countless features compared to the standard built-in unittest.
  • Ways to debug tests.
  • Isolation of tests and their importance.
  • Flaky tests, as the result of poorly isolated tests.

If you are going to work on complex applications, bugs will always be present. Testing, or even better automated testing, is a bug-killing tool that can save you a lot of time and headaches.

Thanks for reading!


References

  1. Demo pizza app
  2. Demo pizza app DB diagram
  3. Pytest docs
  4. pytest_django package
  5. model-bakery package
  6. Test Flakiness at Spotify

Being a developer in a Fintech startup

Upon finishing my Bachelor’s, I never imagined I would find myself at the crossroads of the digital lending market and software engineering, the former being just a part of the broad Fintech domain. I settled into frontend development with no prior knowledge of the tools I am using today, and although I have been developing for CardoAI for barely a year, I would like to share my thoughts and experience on what it means to be a software developer in a Fintech startup.

You will not be able to understand everything at once

There are a lot of business and financial terms that will take time to get accustomed to, and as a developer you should learn to distinguish between different data visualizations and how best to display this ‘odd’ financial information that you are just starting to learn. However, this is part of the process of immersing yourself in the world of Fintech, so give it some time: you will not be able to understand everything overnight, even if you have some previous background in finance and economics.

Ask questions A LOT

Always ask questions. Everyone has a distinct learning path, so I do not think there are any wrong questions. Especially when developing for a rapidly changing market with the latest technologies, you will be judged for not asking questions. I have found pair programming and productive brainstorming sessions for discussing a new requirement particularly helpful in sharing knowledge between team members.

Put your customers first

Being part of a startup is a formative experience that gives the team a different mindset on how to approach the product and, most importantly, your customers. A customer-centered approach is highly emphasized and is the cornerstone of creating positive and meaningful relationships with your customers. The entire planning process and development cycle will be adjusted to serve your customers’ needs: what they prioritize and what would give them a competitive advantage in the market. Once you start thinking like the customer and embrace the product as your own, you will start to identify problems and even hidden opportunities, proactively improving the software without waiting for a customer request.

Challenges will help you grow

A software developer in Fintech should always expect the unexpected. There will be challenges waiting at every corner, but this should not demotivate you; quite the contrary, accept the new challenges coming your way and use them to your advantage as an opportunity to grow. When dealing with the look and feel of the application, a new challenge may also help you unravel your inner creative spirit and force you to think outside the box.

Being part of a startup in a Fintech environment means that nothing is certain. The market is extremely volatile and everything changes rapidly. Although a software developer would not necessarily be interested in the business side of things, in this case I think that involving the tech team in understanding the business requirements is one of the greatest strengths a startup can have. As we recognize this constant change in the Fintech domain, we as developers stay one step ahead in delivering highly demanded features and perceiving their importance by putting ourselves in our customers’ shoes. Flying Airplanes While We Build Them – the catchphrase of CardoAI – best summarizes the challenging environment we face.

Nevertheless, as in any industry, there will be opportunities to capitalize on and challenges to overcome. As a developer, most things will be learned through a hands-on approach; however, there are gaps that could have been filled if universities prepared their students better for what to expect upon graduating. In my opinion, it is important that curricula are updated and adapted around the latest technologies most companies work with, and that general financial and accounting courses become compulsory for engineering degrees.

Having said that, newcomers should not be frightened to join, quite the contrary, there are plenty of chances to learn and even with limited knowledge and skills there will always be people willing to help.

No industry for late data

The need for real time transaction data amid regulators’ requirements and investors’ need for higher transparency

Let’s picture this for a second: a world where Spotify had no real-time data and used no AI and ML algorithms. You would need to send an email to the support center, list your past songs and the genres you like, wait a couple of hours or a few days depending on how busy client support is, and only then receive a recommendation for a new song. Crazy, isn’t it?

This is exactly what is happening in financial institutions: old processes, old systems, late and old data. Today, many financial institutions continue to take decisions involving millions of Euros (if not billions) on the basis of outdated (and often inconsistent) data derived from manual processes, usually handled in Excel.

Regulators are well aware of the risk of using aged data (e.g. year-old financial reports) and non-homogeneous, out-of-date data (coming from different sources and based on different definitions) when assessing new investment opportunities.

An example is the new definition of defaults set by the CRR (Capital Requirement Regulation). In September 2016, EBA published final guidelines on the application of Art. 178 related to the definition of default and Regulatory Technical Standards on the materiality threshold of past due credit obligation.

Paragraph 106 – Timeliness of the identification of default states that “Institutions should have effective processes that allow them to obtain the relevant information in order to identify defaults in a timely manner, and to channel the relevant information in the shortest possible time”.

This RTS does not only require a fast process; it also indicates that the identification of default should be performed on a daily basis. This becomes paramount for the industry, as it requires moving processes and procedures to the next level in order to comply with the requirement.

On 1 January 2021, all of this will become real: credit institutions and investment firms using either the IRB or the Standardized approach will be required to comply with the above.

Taking the securitization industry as an example, credit originators and vehicle servicers report data on a monthly (if not quarterly) basis using Excel or, in some cases, PDF files. This requires a relevant amount of time to manipulate data (cleaning fields, merging files, linking items, standardizing output) and extract relevant information, leaving investors constantly running behind the data.

Regulators are clearly pushing the financial industry to adopt advanced technological solutions to improve the way it manages data. Another example is the Draft Regulatory Technical Standards on the prudential treatment of software assets published in October 2015 [1], which directly supports investments by financial institutions in these solutions.

Another need for new and improved data-management technologies comes from the increased volatility of financial markets (which became even more evident with the Covid-19 pandemic), requiring prompt reactions even in private markets. But how can you react fast on your portfolio if your data are one month old?

The buy-side industry (including Asset Managers, Pension Funds, Investment Funds, etc.) requires an additional level of transparency when it comes to financial data. To establish trust among investors, managers of securitization vehicles are asked to provide detailed information that goes far beyond publishing a monthly report, encompassing asset-level information provided on a daily basis. This requires a rethinking of the reporting processes of securitisations, leaving aside Excel and PDF files and starting to embed technologies that allow all stakeholders involved to access real-time data 24/7.

Technology is now available off-the-shelf to small players too, not only the top ones. Thanks to fintech developments and the use of cloud computing, any actor (small or big) can take advantage of advanced, ready-to-use technology propositions. This, in turn, avoids large and risky project-specific capital expenditures.

The Tortoise and the Hare

What makes the difference is the time to market in adopting such new technology propositions, not the deep pockets to invest in developing a proprietary tool, as was still the case a few years ago.

[1] Draft Regulatory Technical Standards on the prudential treatment of software assets under Article 36 of Regulation (EU) No 575/2013 (Capital Requirements Regulation – CRR) amending Delegated Regulation (EU) 241/2014 supplementing Regulation (EU) No 575/2013 of the European Parliament and of the Council with regard to regulatory technical standards for own funds requirements for institutions.

Cardo AI top startup at MILAN FINTECH SUMMIT

Among the over 70 candidates from 18 countries, 10 Italian and 10 international companies were selected based on their market potential.

Milan, 23 November 2020 – The Fintech companies deemed to have the highest market potential will be the protagonists of the second day of the Milan Fintech Summit, the international event dedicated to the world of finance technology, scheduled as a live stream on 10 and 11 December 2020. It is promoted and organised by Fintech District and Fiera Milano Media – Business International, supported by the City of Milan through Milano&Partners, and sponsored by AIFI, Assolombarda, Febaf, ItaliaFintech and VC Hub.

Following a call launched at the international level and a careful selection by sector experts, including Conference Chair Alessandro Hatami and representatives of the organizing committee, the 20 companies were announced today that will be given the opportunity to take the digital stage and present their ideas and solutions for the future of financial services.
Among the over 70 candidates from 18 countries, 10 Italian and 10 International companies were selected.

The Italian companies are: insurtech Neosurance, See Your Box and Lokky; WizKey, Soisy, Cardo AI, Stonize and Faire Labs, operating in the lending and credit sector; Trakti offering cybersecurity solutions; dealing with artificial intelligence.

The international companies selected are: insurtechs Descartes Underwriting and Zelros (France); Keyless Technologies (UK), CYDEF – Cyber Defence Corporation and Tehama (Canada), dealing with DaaS and cybersecurity, and Privasee (UK), operating in data protection; Pocketnest (USA), a SaaS company; wealth manager Wondeur (France); DarwinAI (USA), operating in the artificial intelligence sector; and Oper Credits (Belgium), operating in the lending and credit field.

These companies, which will be introduced to a parterre of selected Italian and international investors and fintech experts, were chosen based on the following criteria: innovativeness of the proposal, potential size of the target market, scalability, capital-raising potential, and type of technological solution employed.
The Milan Fintech Summit will thus help introduce the potential of Italian fintech companies abroad, reinforcing the role of Milan as a European capital of innovation and an ideal starting point for international companies that want to enter the Italian market.

The program of the event is available on the official site, and a physical edition of the summit is already scheduled for next year, on 4 and 5 October 2021. The December sessions are open to anyone interested in understanding in depth the potential of fintech. You can register now for free using this link, or purchase a premium ticket to attend the pitch session (the only closed-door part of the program) and be entitled to other benefits offered by the Summit partners.

Fintech District
Fintech District is the reference international community for the fintech ecosystem in Italy. It aims to create the best conditions for all stakeholders (start-ups, financial institutions, corporations, professionals, institutions, investors) to operate in synergy and find opportunities for local and international growth. The companies that decide to join share a tendency to innovate and the will to develop collaborations based on openness and sharing. The community now consists of 160 start-ups and 14 corporate members that have chosen to participate in the creation of open innovation projects by collaborating with fintechs. Fintech District also maintains relationships with equivalent innovation hubs abroad to multiply opportunities to invest and cooperate, establishing its role as an access door to and reference point for the Italian market. Created in 2017, Fintech District has its seat in Milan in Palazzo COPERNICO ISOLA FOR S32, Via Sassetti 32. Fintech District is part of Fabrick.


Copyright Cardo AI 2021. All rights reserved. P.IVA: 28385 437473 3745