
Data extraction from Financial Statements with Machine Learning

Data is the foundation of the whole decision-making process in the finance ecosystem. With the growth of fintech services, this data has become much easier to collect, and it becomes necessary for a data scientist to develop a set of information extraction tools that automatically fetch and store the relevant pieces. Doing so facilitates information retrieval, a task that was previously done manually and was both tedious and time-consuming.

One of the main sources of this “key knowledge” in finance is the financial statement, which offers important insight into a company through its performance, operations, cash flows and balance sheets. While this information is usually provided in text-based formats or in data structures such as spreadsheets or data frames (which can easily be consumed using parsers or converters), it sometimes arrives as other document formats or images in a semi-structured fashion that varies between sources. In this post we will go through the different approaches we used to automate information extraction from financial statements provided by different external sources.

Problem introduction

In our particular case, our data consisted of semi-structured financial statements provided as PDFs and images, each following a particular template layout. These financial statements contain relevant company information (company name, industry sector, address), different financial metrics and balance sheets. For us, the extraction task consists of retrieving the relevant information for every entity class in each document and storing it as a key-value pair (e.g. company_name -> CARDO AI). Since the composition of information differs between documents, we end up with different clusters of them. Going even further, we observe that the information inside a document itself is represented by different types of data (text, numbers, tables, etc.).

Sample of a financial statement containing relevant entity classes

Two main problems emerge here: first, we have to find an approach that solves the task for one type of document; second, by inductive reasoning, we have to form a broader approach that applies to the whole data set. We should note that we are trying to find a single solution that works in the same way for all of this diverse data; treating every document with a separate approach would miss the whole point of the task at hand.

“An automated system won’t solve the problem. You have to solve a problem, then automate the solution”

Methodology and solution

Text extraction tools

Firstly, we started with text extraction tools: Tabula for tabular data, and PDFMiner and Tesseract for text data. Tabula scrapes tabular data from PDF files, while PDFMiner and Tesseract extract text from PDFs and images respectively. These tools work by recognizing pieces of text in visual representations (PDFs and images) and converting them into textual data (document text). The issue with Tabula was that it only works on tabular data, whereas the most relevant information in our financial documents is not always represented in tabular format.

Meanwhile, when we applied the other tools, PDFMiner and Tesseract, the output raw text was completely unstructured and barely human-readable (with unnecessary white-space and garbled words containing special characters). This text was hard to break down into the meaningful entity classes we wanted to extract. This was clearly not enough, so we had to explore other approaches.
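As an illustration, a minimal cleanup pass of the kind one might apply to such raw extractor output could look like the sketch below (the exact filtering rules are hypothetical, not the ones we used in production):

```python
import re

def clean_extracted_text(raw: str) -> str:
    """Normalize raw OCR/PDF-extractor output before any downstream parsing."""
    # Collapse runs of spaces, tabs and newlines into single spaces.
    text = re.sub(r"\s+", " ", raw)
    # Drop tokens that are mostly non-alphanumeric OCR noise (e.g. "%$#@!").
    tokens = [
        tok for tok in text.split(" ")
        if tok and sum(ch.isalnum() for ch in tok) / len(tok) >= 0.5
    ]
    return " ".join(tokens).strip()

raw = "Company   Name:\n\n  CARDO AI   %$#@!  Sector : Fin-tech"
print(clean_extracted_text(raw))  # -> Company Name: CARDO AI Sector Fin-tech
```

Heuristics like these recover readability, but they cannot restore structure, which is why cleanup alone was not sufficient for our task.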

GPT-2

Before moving on, we made an effort to post-process the text output by the above-mentioned extraction tools, and for that we tried GPT-2 [1]. GPT-2 is a large transformer-based language model with 1.5 billion parameters developed by OpenAI, and it was considered a great breakthrough in the field of NLP. This model, and also its successor GPT-3, have achieved strong performance on many NLP tasks, including text generation and translation, as well as several tasks that require on-the-fly reasoning or domain adaptation. In our case, we tried to exploit one of its capabilities: text summarization. After getting a considerable amount of text from the extraction tools, we tried to summarize it with the GPT-2 model and discard the non-relevant information, taking advantage of the attention mechanism of the transformer. But this approach did not work well, since unstructured text is very hard to summarize. Apart from that, there would always be the risk of the model removing important information from the text, and we cannot give it the benefit of the doubt in this regard.

Bounding boxes relationship – OpenCV

The unpromising results of the above approaches made us entertain the idea of treating this as an object detection task using computer vision. Object detection outputs a bounding box around each object of interest along with a class label; a relationship graph can then be constructed between these “boxed” entities [2] (see image below). Following this method, we tried to do the same with our documents: draw boxes around text that represents an identifiable entity and label each box with the entity name it contains. The next step would have been an algorithm that computes a metric representing the relationship between these boxes based on their spatial positions. We could then train a machine learning model to learn from these relationship values and sequentially decide the position of the next entity given the document locations of the previous ones.

The model creates a relationship graph between entities

However, that was not an easy task: it is very hard to determine the right box that represents a distinct meaningful component of the report, and, as mentioned above, different documents follow different layouts, so the position of the information we want to extract is arbitrary within the document. Hence, the algorithm might be inaccurate in determining the position of every box. We moved on to seek a better plan.

Named Entity Recognition

An example of NER annotation

Named-entity recognition (NER) is a sub-task of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories.

During our research into new approaches for this task, we came upon the expression “named entity”, which generally refers to entities for which one or many strings, such as words or phrases, stand consistently for some referent. We then discovered Named Entity Recognition: the task of locating words in an unannotated block of text and classifying them into predefined categories. A common approach to this task is to use deep learning NLP models, possibly combined with linguistic grammar-based techniques. Conceptually, the task is divided into two distinct problems: detecting the names, and classifying them into the categories they fall into. Hence we started to look for different implementations to design our language model.
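To make the detection/classification split concrete, here is a toy rule-based baseline (all rules, suffix lists and names are purely illustrative; our actual solution uses a learned model):

```python
import re

# Step 1 (detection): capitalized runs of tokens are entity candidates.
CANDIDATE = re.compile(r"[A-Z][\w&.-]*(?:\s+[A-Z][\w&.-]*)*")

# Hypothetical gazetteers used for step 2 (classification).
KNOWN_LOCATIONS = {"Italy", "Milan"}
COMPANY_SUFFIXES = ("SRL", "SPA", "AI", "Ltd")

def detect(text):
    """Find spans that look like named entities."""
    return [(m.start(), m.end(), m.group()) for m in CANDIDATE.finditer(text)]

def classify(span_text):
    """Assign a category to a detected span, or None if unknown."""
    if span_text in KNOWN_LOCATIONS:
        return "LOC"
    if span_text.split()[-1] in COMPANY_SUFFIXES:
        return "Company"
    return None

def extract_entities(text):
    """Combine both steps into key-value pairs."""
    return {classify(s): s for _, _, s in detect(text) if classify(s)}

print(extract_entities("Cardo AI SRL is based in Italy"))
# -> {'Company': 'Cardo AI SRL', 'LOC': 'Italy'}
```

A baseline like this breaks down as soon as entities are lowercased, abbreviated or context-dependent, which is exactly why a statistical NER model is the preferred route.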

NLP model – spaCy

At this point, our path was pretty straightforward to follow thanks to the ease of use that NLP libraries offer. For this job we decided to go with spaCy [3], which offers a very simple and flexible API for many NLP tasks, one of them being Named Entity Recognition. The design pipeline can be conceptualized with the diagram below:

Solution pipeline

Before we start designing our model, we first have to construct the training data set. It essentially consists of annotated blocks of text (“sentences”) that contain both the value of each entity and the entity name itself. For that, we first extract the text from the paragraphs where the desired information is present, using the previously mentioned extraction tools. Then we annotate this text with the found categories by providing the start and end character offsets of each word in the text. Doing so, we also provide some context to the model by keeping the nearby words around the annotated word. All the information retrieved from the financial statements can then easily be stored in a CSV file. In spaCy this is represented with the structure below:

TRAIN_DATA = [
    ("Cardo AI SRL is a fintech company", {"entities": [(0, 12, "Company")]}),
    ("Company is based in Italy", {"entities": [(20, 25, "LOC")]}),
]
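Because misaligned offsets silently corrupt training, a quick validation pass over the annotations is worthwhile; a minimal sketch (our own addition, not part of spaCy):

```python
# Sanity-check that each annotated span's character offsets actually
# cover the intended entity text, with no stray whitespace.
TRAIN_DATA = [
    ("Cardo AI SRL is a fintech company", {"entities": [(0, 12, "Company")]}),
    ("Company is based in Italy", {"entities": [(20, 25, "LOC")]}),
]

for text, annotations in TRAIN_DATA:
    for start, end, label in annotations["entities"]:
        span = text[start:end]
        print(f"{label} -> {span!r}")
        assert span.strip() == span, "offsets include stray whitespace"
```

Running this over the CSV-backed data set before training catches most annotation slips early.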

After we prepared our dataset, we designed the NLP model by choosing between the alternatives spaCy provides. We started from a blank, untrained model and outlined its input and output. We split the data into train and test sets, and then trained the model following this pipeline: the text from the training data is first tokenized using the Doc module, which essentially means breaking it down into individual linguistic units, and the annotated text in the supported format is parsed with the GoldParse module, to then be fed into the model’s training loop.

Training pipeline
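For readers on current spaCy versions: GoldParse belongs to spaCy v2, and in spaCy v3 the same tokenize-and-align step is expressed with Example.from_dict. A minimal training loop in the v3 API might look like this (a sketch, not our production configuration):

```python
import random
import spacy
from spacy.training import Example

TRAIN_DATA = [
    ("Cardo AI SRL is a fintech company", {"entities": [(0, 12, "Company")]}),
    ("Company is based in Italy", {"entities": [(20, 25, "LOC")]}),
]

nlp = spacy.blank("en")              # blank, untrained pipeline
ner = nlp.add_pipe("ner")
for _, ann in TRAIN_DATA:
    for _start, _end, label in ann["entities"]:
        ner.add_label(label)

optimizer = nlp.initialize()
for epoch in range(30):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, ann in TRAIN_DATA:
        # Doc (tokenization) + gold annotations combined in one Example
        example = Example.from_dict(nlp.make_doc(text), ann)
        nlp.update([example], sgd=optimizer, losses=losses)

print(sorted(losses))  # the NER component reports its training loss
```

In practice the loop would run over mini-batches of the full annotated corpus, with a held-out test set for evaluation.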

Results and constraints

After training the model on about 800 input rows and testing on 200, we got these evaluations:

The evaluation results seemed promising, but that may also have come from the model over-fitting, or from a lack of variability in our data. After the model was trained, all we had to do was feed it text taken from the input reports, after they had been divided into boxed paragraphs, and expect output represented as key-value pairs.
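For reference, NER models are usually evaluated with entity-level precision, recall and F1. A minimal sketch of how those numbers are computed (spaCy’s own Scorer does this internally):

```python
# Entity-level precision/recall/F1 over exact (start, end, label) matches.
def ner_scores(gold_spans, pred_spans):
    gold, pred = set(gold_spans), set(pred_spans)
    true_pos = len(gold & pred)
    precision = true_pos / len(pred) if pred else 0.0
    recall = true_pos / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [(0, 12, "Company"), (20, 25, "LOC")]
pred = [(0, 12, "Company"), (20, 25, "Company")]  # one label confused
print(ner_scores(gold, pred))  # -> (0.5, 0.5, 0.5)
```

Note that a span with the right boundaries but the wrong label counts as both a false positive and a false negative, which is why label confusion hurts both precision and recall.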

Constraints

Lack of data
– in order to avoid bias or over-fitting, the model should be trained on a relatively large amount of data
– acquiring all this data is not an easy process, not to mention the data pre-processing step
Ambiguous output
– the model may output more than one value per entity, which leads to inconsistency when interpreting the results
Unrecognizable text
– some statements contain poorly written text that the extraction tools cannot correctly identify
Numerical values
– with lots of numerical values in the reports, it is hard to distinguish which labels they really represent

Potential future steps

In recent years, convolutional neural networks have shown great success in various computer vision tasks such as classification and object detection. Seeing the problem from a computer vision perspective as document segmentation (creating bounding boxes around the text contained in the document and classifying them into categories) is a promising way to proceed, and for that the magic formula might be called “R-CNN” [4]. By following this path, we might be able to resolve the above-mentioned issues we ran into with our solution. Integrating several different approaches together may also improve the overall accuracy of the labeling process.

Once the solution is stable and the model’s accuracy is satisfactory, we need to streamline the whole workflow. It is important for a machine learning model to be fed an abundant amount of new data, which improves overall performance and makes predictions on future observations more reliable. To achieve that, it becomes necessary to build an automated retraining pipeline for the model, with a workflow as displayed in the following diagram:

Conclusion

We went through and reviewed a couple of different approaches we attempted for solving the Named Entity Recognition task on financial statements. From this trial-and-error journey, the best method turned out to be Natural Language Processing models trained with our own data and labels. Despite the seemingly satisfactory results in our case study, there is still room for improvement. What we know for sure is that the machine learning approach above provided the best results, and following the same path on this task is the way to go. Machine learning keeps advancing rapidly, and with the right approach to the problems of each domain, it is becoming a powerful tool to make use of.

References

[1] Better Language Models and Their Implications

[2] Object Detection with Deep Learning: A Review

[3] spaCy Linguistic Features, Named Entity

[4] Ren et al., Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

From leveraging data to DATA LEVERAGE

Imagine being a bank, an asset manager, or an investment fund with a well-established franchise in the loan industry that wants to expand its activity by tapping into new markets, products or geographies. You would have quite a large database covering your existing activity in terms of both clients and loans.

But what about a new market you want to target? Do you need to start blind, collecting information from scratch, putting your capital at stake and waiting months or years before having a solid base of data that can help you succeed?

Is there an alternative way?

So far, financial institutions have been very vigilant in optimizing the use of their capital: from regulatory capital for banks to equity capital for less regulated entities, the key target is to extract the highest return. The most common way to increase the return on available capital is leverage: gaining access to extra resources allows one to amplify ROE.

Over time, the concept of leverage has been extended to other fields, among them data. The expression “leveraging data” indicates the process of turning raw information into valuable, actionable insights.

Leveraging available information is good practice, and companies should take advantage of it to make data-driven decisions at both a strategic and an operational level.

Is this all?

What if instead of just maximizing the use of available data, one could actually increase the amount of data available?

I am not talking about expanding the information available via data enrichment or the use of so-called alternative data (which are quite hyped at the moment), as they would only give more dimensions to look at, not increase the actual data available.

Neither am I talking about buying external datasets.

I guess it’s about time I get to the point…

Have you ever thought of applying the concept of leverage to data? Not in terms of leveraging available data, but actually increasing the data available through an approach that literally mimics financial leverage: applying a multiplier to the data a financial institution has available.

The concept is very simple: every time a financial institution extends a loan to a counterparty (or buys a bond), it gets data about a single counterparty (sector, geography, rating/FICO score, financial ratios, income, etc.), plus it can track the performance of the transaction over time (delays, renegotiations, defaults, recoveries, etc.). To increase the datapoints available, the financial institution would have to increase the number of loans extended, or…

It can invest into a SECURITIZATION!

By buying a note (or just part of one) in a securitization, investors gain exposure to the whole underlying portfolio of loans. And portfolio positions range from THOUSANDS for the smaller SME loan pools to HUNDREDS OF THOUSANDS for pools made of consumer loans.

How long and what would it take to originate the same volumes in terms of time, costs, capital and organization? A rhetorical question, really, as it is not only time-consuming but resource-intensive.

All clear then? But can I just invest €1 into a AAA senior note and get loads of data?

Unfortunately, it’s not so easy…

Although the Securitisation Regulation (EU) 2017/2402 gives investors the right to receive loan-level information on each transaction, in most cases there is no button (or magic wand) that allows an investor to retrieve the data in a standardized format, let alone in a timely manner. Data are indeed typically made available quarterly via PDF reports (or Excel files in the best scenarios), making it very hard for investors to translate them into meaningful and actionable information.

So data leverage is just a (nice) theory but cannot be realized in the real world?

Actually, there are solutions now!

Nowadays there are technologies, like CARDO AI, that facilitate data retrieval from multiple sources, run a data health check, and standardize the results.

This process requires just a couple of clicks and takes just a few seconds (don’t even try to compare it with your excels!).

Now that I have shown you how to leverage data (which might have become BIG DATA with the appropriate multiplier), here are some examples of how to use it:

– Data can be used by servicers to adjust investment decisions in dynamic transactions (e.g. those with a ramp-up or reinvestment period or with a higher turnover such as trade receivables) to optimize the risk-return profile of the pool.

– They can be used by risk management departments to assess the rating of a transaction (e.g. applying scenarios deriving from data of comparable pools) or assessing the possible impact connected to particular events that affect a sector (e.g. tourism during the pandemic) or geography (e.g. after an earthquake, a flood or a particularly cold winter).

– They can be used to drive decision-making on new investments, including comparing scenarios provided by the arrangers, assessing the impact of a new transaction on the diversification of the overall portfolio, and negotiating a price more in line with the risk profile of a particular pool of loans.

– In the future, data will likely be used to assess the ESG profile of a securitization’s collateral pool and compare it with that of other transactions.

ESG INCORPORATION IN SECURITISED PRODUCTS: THE CHALLENGES AHEAD

PRI (Principles for Responsible Investment – the world’s leading advocate for sustainable investing, founded on a United Nations initiative) has recently published an interesting report on the incorporation of ESG in securitization products.

On one side, regulators are increasing transparency requirements on sustainability-related information for investment products. On the other, client demand and risk management are driving the need to consider the long-term impact and sustainability of investment choices. As a result of these forces, investors and asset managers are widening and improving their ESG policies, but few of these are tailored to securitization products.

ESG information wanted by investors

Source: PRI

For ESG incorporation in securitized products to be effective, a holistic, multi-pronged approach needs to be developed. Compared to other asset classes, though, the securitization market shows some additional complexity, including:

Transaction structure: this implies a multi-level assessment of practices and policies covering the sponsor and/or issuer, originator, servicer, deal structure, loans, collateral or guarantees. This is further complicated by the fact that parties can occupy multiple roles (e.g. servicer and originator) or involve private entities, which tend to be less transparent.

Adequate data: Practitioners consider the ESG information in current deal documentation, marketing materials, and underlying portfolio disclosures insufficient to comprehensively analyze most securitized products.

– No ESG reporting standards for servicers/originators: Relevant ESG information on collateral often lacks uniformity and is not comprehensive.

– A diverse pool of underlying assets: the complexity and diversity of the underlying collateral (and the sectors covered) make it difficult to build proprietary ESG frameworks that can be used for assessment.

– A lack of coverage by third-party ESG information providers: ESG information providers have limited coverage of securitized products. This is not surprising given that responsible investment originally developed in equities and only recently expanded to debt capital markets. Moreover, the leveraged finance market includes a high proportion of privately owned and small-cap companies that tend to disclose less information.

– Lack of a clear ESG premium: unlike the so-called “greenium” that typically applies to green bonds, securitization transactions do not show meaningful price differentiation when incorporating ESG criteria [1].

As a result, of the 2,000 signatories that reported on their investment activities to the PRI in 2020, only 215 indicated how they incorporate ESG factors into their securitized product investments.

ESG incorporation in securitized products is at a very early stage

Source: PRI

To address the complexity above and sustain more ESG-driven securitizations, the PRI has identified data quality, availability and consistency as the main solution: a combination of robust in-house and third-party data sources is likely to drive investor confidence in ESG incorporation across securitized credit markets.

For further information please refer to the following link: https://www.unpri.org/fixed-income/esg-incorporation-in-securitised-products-the-challenges-ahead/7462.article

[1] Based on European ESG CLOs issued between March 2018 and August 2020 versus traditional CLOs.

CARDO AI supports BorsadelCredito.it with Banca Valsabbina and Azimut in a new securitization of € 200 million to support Italian SMEs

This transaction follows the previous securitization of € 100 million launched in September 2020, as part of the “Slancio Italia” project.

The new resources of € 200 million will be disbursed through BorsadelCredito.it via loans to SMEs with a maximum duration of 6 years, with 1 year of pre-amortization, an amount ranging from € 50,000 to € 1,500,000, and a guarantee of up to 90% from the Italian Governmental Central Guarantee Fund (in Italian: Fondo Centrale di Garanzia) for SMEs.

This new securitization will help SMEs to cope with and overcome the crisis linked to the spread of the pandemic. The operation, which sees the collaboration of BorsadelCredito.it (the Italian fintech that supports SMEs in accessing credit) with Banca Valsabbina (a Brescia-based bank with 70 branches across Lombardy, Veneto, Emilia-Romagna, Piedmont and Trentino-Alto Adige), Azimut (one of the biggest asset managers in Italy) and CARDO AI (which provides institutional investors with advanced technology for private markets to make better investment decisions), is aimed at supporting the real economy.

Compared to the operation of September 2020, the amount available to SMEs has increased from € 100 to € 200 million, thus guaranteeing Italian SMEs firepower for their growth strategies and better crisis navigation.

The loans will have a maximum duration of 6 years, including one year of pre-amortization, an amount ranging from € 50,000 to € 1,500,000, and the guarantee of up to 90% of the Central Guarantee Fund (in Italian: Fondo Centrale di Garanzia) for SMEs. The companies applying for the loan will be evaluated within 24 hours based on the credit assessment conducted by BorsadelCredito.it through the use of proprietary artificial intelligence algorithms. The automatic process is then followed by verification of a credit analyst and subsequently the underwriting and disbursement of the loan within a few working days.

The “Slancio Italia” project was launched at the beginning of the pandemic, in March 2020; it is managed by BorsadelCredito.it and financed by credit funds such as Azimut, as part of the strategic agreement between the two companies established in May 2020 with the creation of Azimut Capital Tech. Azimut also plays the fundamental role of underwriter of the junior tranche through its private debt funds. Banca Valsabbina supported the two companies as arranger of the transaction and Account Bank, as well as underwriter of the senior and mezzanine tranches, for a maximum commitment of € 180,000,000. CARDO AI is a fintech supporting institutional investors in modernizing their portfolio management with advanced technology and data science; in this particular operation, Cardo AI acted as data agent, facilitating end-to-end data management and ad-hoc reporting. Every institutional investor that subscribed the notes of the securitization vehicle received dedicated access to the Cardo AI platform, enabling complete transparency at the beginning of and throughout the lifetime of the operation.

Hogan Lovells Studio Legale provided legal assistance as transaction legal counsel, with a team led by Partner Corrado Fiscale. The Master Servicer will be Centotrenta Servicing S.p.A., while Banca Finanziaria Internazionale S.p.A. (in short Banca Finint) operates in the roles of Paying Agent, Issuing Agent, and Representative of the Noteholders (in short RoN).

Since the outbreak of the pandemic, businesses have not stopped needing liquidity. According to the most recent Istat data, in 2020 there was a decline in the turnover of service companies by 12.1%, the largest since 2001. “The loss of turnover affected almost all the sectors surveyed, particularly in the activities most affected by the restrictions related to the health emergency”, writes the statistical institute.

In this context, in 2020 fintech came to the rescue of companies with 1.65 billion euros (ItaliaFintech data), an increase of 450% compared to the 372 million euros disbursed in 2019. The number of new Italian companies getting support from fintechs also rose, from 1,092 in 2019 to 5,464 in 2020.

Gabriele Blei, CEO of the Azimut Group, comments: “This initiative is in line with our project of Banca Sintetica, which is an alternative to the traditional banking model to support the needs of small and medium-sized Italian enterprises through the use of fintech platforms with which to finance businesses effectively and quickly. Our goal is to deliver to Italian SMEs loans of 1.2 billion euros over the next 5 years, and to do so we can count on the support of our diverse range of alternative customer lending strategies, both private and institutional. A project that allows us to create new performance opportunities for the capital of our customers, fueling the virtuous circle between private savings and businesses”.

“After the securitization carried out last year, we are happy to be a partner of this new operation, which doubles the resources made available to SMEs – said Marco Bonetti, Joint General Manager of Banca Valsabbina – Our institute will continue to look favorably on initiatives like this, which on one hand are an important element to support SMEs, which in particular in times of crisis such as the current one should be supported especially in terms of liquidity, and on the other hand confirm the importance and value of the cooperation between the traditional banking system and fintech, a sector in which Borsadelcredito.it is positioned as one of the most attractive” Bonetti concluded.

“We believe that the path of collaboration is the main way to truly innovate the finance world and make it more efficient and more functional to needs, including emerging ones, of the real economy – comments Ivan Pellegrini, CEO of BorsadelCredito.it – In a delicate period like what Italy is facing, we felt the need to make our skills available to provide resources to all those healthy SMEs that are having moments of difficulty but want to restart and look to the future. This is why we are pleased to have partners such as Banca Valsabbina and Azimut, with whom we have created a solid and structural alliance: for us, it is an evolution, we are no longer just providers of loans to the real economy, but technological enablers for traditional finance”.

“Data is now the world’s most valuable asset. We see every day how it empowers smarter and safer investment decisions and how it can bring unparalleled transparency and intelligence by replacing antiquated manual processes and streamlining the reporting workflow” comments Altin Kadareja, CEO of Cardo AI. “As data agents, we are thrilled to participate in and support this securitization transaction, granting investors easy access to normalized loan-level data along with fully integrated analytics and reporting tools, where users can review composition, analyse performance, and project collateral and tranche cashflows.”

“BorsadelCredito.it con Azimut e Banca Valsabbina in una nuova cartolarizzazione da 200 milioni.” Finance Community, financecommunity.it/borsadelcredito-it-con-azimut-e-banca-valsabbina-in-una-nuova-cartolarizzazione-da-200-milioni.

MILAN FINTECH SUMMIT – Cardo AI, one of the top Startups having the highest market potential

MILAN FINTECH SUMMIT: A SELECTION OF THE BEST OF ITALIAN AND INTERNATIONAL INNOVATION
Among the over 70 candidates from 18 countries, 10 Italian and 10 international companies were selected based on their market potential

Milan, 23 November 2020 – The fintech companies deemed to have the highest market potential will be the protagonists of the second day of the Milan Fintech Summit, the international event dedicated to the world of finance technology, streamed live on 10 and 11 December 2020. It is promoted and organised by Fintech District and Fiera Milano Media – Business International, supported by the City of Milan through Milano&Partners, and sponsored by AIFI, Assolombarda, Febaf, ItaliaFintech and VC Hub.

Following the call launched at an international level and a careful selection by sector experts, such as Conference Chair Alessandro Hatami and representatives of the organizing committee, the 20 companies that will take the digital stage to present their ideas and solutions for the future of financial services were announced today.
Among the over 70 candidates from 18 countries, 10 Italian and 10 international companies were selected.

The Italian companies are: insurtechs Neosurance, See Your Box and Lokky; WizKey, Soisy, Cardo AI, Stonize and Faire Labs, operating in the lending and credit sector; Trakti, offering cybersecurity solutions; and Indigo.ai, dealing with artificial intelligence.

The international companies selected are: insurtechs Descartes Underwriting and Zelros (France); Keyless Technologies (UK), CYDEF – Cyber Defence Corporation and Tehama (Canada), dealing with DaaS and cybersecurity; Privasee (UK), operating in data market protection; Pocketnest (USA), a SaaS company; wealth manager Wondeur (France); DarwinAI (USA), operating in the artificial intelligence sector; and Oper Credits (Belgium), operating in the lending and credit field.

These companies, which will be introduced to a parterre of selected Italian and international investors and fintech experts, were chosen based on the following criteria: innovativeness of the proposal, potential size of the target market, scalability of the proposal, capital-raising potential, and type of technological solution employed.
The Milan Fintech Summit will thus help introduce the potential of our fintech companies abroad, reinforcing the role of Milan as a European capital of innovation and an ideal starting point for international companies that want to enter the Italian market.

FINTECH ACCESSIBLE TO EVERYONE, AN OPEN DOOR EVENT
The program of the event is available on the official site, and a physical edition of the summit is already scheduled for next year, on 4 and 5 October 2021. The December appointments are open to all those interested in knowing and understanding in depth the potential of fintech. You can register now for free using this link, or purchase a premium ticket to participate as a listener in the pitch session (the only closed-door part of the program) and be entitled to other benefits offered by the Summit partners.

Fintech District
Fintech District is the reference international community for the fintech ecosystem in Italy. It acts with the aim of creating the best conditions to help all stakeholders (start-ups, financial institutions, corporations, professionals, institutions, investors) operate in synergy and find opportunities for local and international growth. The companies that decide to join have in common the tendency to innovate and the will to develop collaborations based on openness and sharing. The community now consists of 160 start-ups and 14 corporate members that choose to participate in the creation of open innovation projects by collaborating with fintechs. Fintech District also has relationships with equivalent innovation hubs abroad to multiply the opportunities to invest and cooperate, establishing its role as an access door to and reference point for the Italian market. Created in 2017, Fintech District has its seat in Milan in Palazzo COPERNICO ISOLA FOR S32, in Via Sassetti 32. Fintech District is part of Fabrick.
