Author: Klement Omeri


Visualizing BigO Notation in 12 Minutes

In this article, I want to make a tiny contribution to the community by explaining an important concept, BigO notation, using the power of visualization and my background in math to help you better understand the topic.

I truly believe that there is nothing you can’t learn if you start from the fundamentals.

If you combine the right resources with the will to learn, nothing can stand between you and your dreams.

Let’s begin with the definition:

BigO notation is a mathematical notation that describes the limiting behavior of a function when the argument tends toward a particular value or infinity.

How do we understand that?

In computer science, there are almost always several ways to solve a problem. Our job as software engineers is to find the most effective solution and implement it. But what does effective mean, anyway? Is it the fastest solution? Is it the one that takes the least space? Actually, it depends: it is purely related to your particular problem.

If you are working on embedded systems with limited memory, for example, and the problem is to calculate the required power in watts to defrost 200 g of meat in a microwave, you can trade an algorithm that makes the calculation in milliseconds but takes much more memory for a more memory-efficient one that needs a full second. After all, even if the calculation is instant, the defrost process itself will require 10 to 15 minutes.

If we talk about the algorithm that locks missiles onto a target airplane, it is clear that we are dealing with milliseconds, and memory consumption can be sacrificed. The plane is big enough to have free space for some more memory slots.

In general, software engineering is about trade-offs. A good software engineer has to be aware of the requirements and come up with solutions that fulfill them.

With all that being said, it is clear by now that we need to somehow quantify and measure the performance and memory implications of any algorithm. One way to do that is to look at how many seconds an algorithm needs to complete. That can provide some value, but the problem is that if my search algorithm takes 2 to 3 seconds on my laptop with an array of 1000 items, it can take less time on a more powerful laptop, right? And even if we agree to take my laptop’s performance as a baseline, we still can’t tell what happens when the size of the array doubles, or when it goes to infinity.

To answer these questions we will need a measurement that is independent of the machine and can tell us what will happen with our algorithm when the size of the input gets larger and larger.

Here comes the BigO Notation.

BigO aims to find how many operations you need to perform in the worst-case scenario, given an input of any size. It aims to find the limit for the number of operations of your algorithm as the size gets larger and larger. BigO is divided into Time and Space Complexity analysis. For every algorithm, you calculate its Time Complexity by simply counting the number of operations you perform on the given data structure.

If you copy an array of 10 items into another array, you need to loop over the whole array, which means 10 operations. In BigO notation this is expressed as O(N), where N is the size of the input array. The Space Complexity for this example is also O(N), because you are going to allocate additional memory for the copied array.
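
As a quick illustration, that copy loop in Python:

source = list(range(10))
copy = []
for item in source:  # one append per element: 10 operations for 10 items
    copy.append(item)
# time: O(N), space: O(N) for the new list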

What BigO does is give you a math function that is purely focused on finding the limit of the number of operations you need to perform as the size of the input gets larger. For example, suppose you are searching for the number 5 inside a given list with a linear search. In the worst case, this number will be at the end of the list, but since you start the iteration from the beginning, you will need to perform as many lookup operations as there are items in the input.

[1, 2, 3, 4, 5]  # you will perform 5 operations here to find it

Here I want to stop for a moment on the term “worst case”. If you think about it, there is a chance that the required number will be at the beginning of the list. In this case, you will perform only 1 operation.

[5, 1, 2, 3, 4]  # you will perform only one operation in this case

The problem is that we cannot rely on the best case and hope that it will happen most of the time, because then we would not be able to compare different algorithms to each other. In the context of BigO notation, we are always interested in the worst case (with some exceptions like hash maps, more on that later).

I said before that BigO gives you a math function that is focused on finding the limit of the number of operations. When we talk about limits in math, visualization helps a lot in understanding the trend of the function as the size of the input goes to infinity. Let’s analyze some very common BigO notations one by one, each together with an example.


BigO Notation examples

O(1) Constant Time

This is understandably the best BigO notation an algorithm can have: no matter the size of the input, you can perform the action in a single operation. Let’s take a look at an example using Python:

country_phone_code_map = {
    'Albania': '+355',
    'Algeria': '+213',
    'American Samoa': '+684',
}
country = 'Albania'
print(country_phone_code_map[country])  # 1 operation
>>> '+355'

In Python, looking up an item in a dict is an O(1) Time Complexity operation. Dict in Python is similar to HashMap in other languages.

To be exact, the worst-case scenario is O(N), and this is related to how well the data structure is implemented; the hashing function plays the key role here. But in general it is agreed that the BigO for dict lookups is O(1), and if you are in a coding interview you can assume that it’s O(1).


An important topic when calculating BigO notations: the constants.

You may or may not be familiar with the rule that constants are ignored when calculating BigO. I don’t want you to just accept this as a rule without thinking about the reasons behind it.

This is exactly why I am visualizing BigO notation. Let’s assume that for the above example we also need to get the two- and three-letter country codes from the country name. This means we have two more mappings and just need to perform two more operations inside the same function.

country_phone_code_map = {
    'Albania': '+355',
    'Algeria': '+213',
}
country_2_letter_code_map = {
    'Albania': 'AL',
    'Algeria': 'DZ',
}
country_3_letter_code_map = {
    'Albania': 'ALB',
    'Algeria': 'DZA',
}
country = 'Albania'

phone_code = country_phone_code_map[country]  # 1 operation
two_letter_code = country_2_letter_code_map[country]  # 1 operation
three_letter_code = country_3_letter_code_map[country]  # 1 operation

If we continue to count the number of operations as we agreed before, we now have 3 operations, which makes the BigO O(3), right?

Let’s visualize this:

BigO Notation

As you can see, the number of operations moved up by 3, which means we are actually performing more than one operation to complete this task. But BigO says that if there are constants, just ignore them. So O(3), O(2n), and O(2n + 1) become O(1), O(n), and O(n) respectively.

This is because we are interested in the limit of the function as N goes to infinity, not in exactly how many operations it performs. We are not counting operations; we want to see how the number of operations grows as N goes to infinity. You may be thinking: yes, but an algorithm with O(1000n) is slower than one with O(n), so we cannot just ignore that 1000. That’s true, but 1000 is a constant and does not grow with N. When N is 10 it remains 1000, and when N is 1 billion it still remains 1000, so it does not provide us with any valuable information about the limits of the function. The only important part is O(n), which tells us that the more N increases, the more operations we have to perform to complete the task.
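
To make this concrete, here is a small function that loops over the input twice, roughly 2n operations, which still simplifies to O(n):

def smallest_and_largest(data):
    smallest = min(data)  # first pass: n operations
    largest = max(data)   # second pass: n more operations
    return smallest, largest
# about 2n operations in total, but the growth rate is still linear: O(2n) = O(n)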


O(logn) Logarithmic Time

This notation usually comes with searching algorithms that take a divide and conquer approach. If we are searching for a number in a sorted array, we can use the most basic such algorithm, binary search. This algorithm halves the array on every step, so it takes log(n) operations to find the number. There are nice online tools to visualize this algorithm.
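
As a sketch, an iterative binary search could look like this:

def binary_search(sorted_array, target):
    low, high = 0, len(sorted_array) - 1
    while low <= high:
        mid = (low + high) // 2  # each pass halves the remaining range
        if sorted_array[mid] == target:
            return mid
        if sorted_array[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target is not in the array

binary_search([1, 3, 5, 7, 9, 11], 7)  # returns 3 after O(logn) steps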

Something important here: when we talk about a logarithm in computer science without specifying the base, we always mean the logarithm with base 2. In math the default base is usually 10, but it is different in computer science. Just keep that in mind when dealing with complexity analysis.

As you can see from the image above, this is actually a very good time complexity. Having a complexity of O(logn) means, in plain English, that every time the size of the input doubles, we only need one more iteration to complete the task. When N is about a million we need to perform only around 20 operations, and when it gets to around 1 billion we need only around 30. That is the power of an algorithm with O(logn) time complexity: for such a huge increase in N, we only need 10 more operations.


O(N) Linear Time

In this case, as N goes to infinity, the number of operations goes to infinity as well, at the same rate as N. An example is linear search, as we discussed before.

array = [1, 2, 3, 4, 5]
number = 5
for index, item in enumerate(array):  # loop n times
    if item == number:  # check for equality
        print(f'Found item at {index=}')
        break

>>> Found item at index=4

O(NlogN) Log-Linear Time

This notation usually comes with sorting algorithms. Take a look at a visualization of merge sort: it repeatedly divides the array into two halves, which gives O(logn) levels, and takes O(n) linear time to merge the divided arrays at each level.
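
A minimal merge sort sketch, to connect the two parts:

def merge_sort(data):
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])   # halving gives O(logn) levels
    right = merge_sort(data[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # O(n) merge work per level
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]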


O(N²) Quadratic Time

These are usually algorithms with nested loops, for example a brute-force sorting algorithm that loops over the array in two nested for loops. Bubble sort is an example:

def bubble_sort(data):
    for _ in range(len(data)):  # O(n)
        for i in range(len(data) - 1):  # nested O(n)
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
    return data

Since the second loop is nested, we multiply its complexity by the complexity of the first loop: O(n) * O(n) = O(n²).

If the second loop were outside the first one, we would sum them instead of multiplying, because in that case the second loop would not be repeated once for every iteration of the first.


O(N³) Cubic Time

The simplest example can be an algorithm with 3 nested for loops.

for i in range(len(array)):  # O(n)
    for j in range(len(array)):  # nested O(n)
        for p in range(len(array)):  # nested O(n)
            print(i, j, p)

If you directly apply the mathematical definition of matrix multiplication, you end up with an algorithm with cubic time. There are improved algorithms for this task, such as Strassen’s algorithm.
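
A direct translation of the definition into code, as a sketch for square matrices:

def matrix_multiply(a, b):
    n = len(a)  # assuming n x n matrices
    result = [[0] * n for _ in range(n)]
    for i in range(n):          # O(n)
        for j in range(n):      # nested O(n)
            for k in range(n):  # nested O(n), so O(n³) overall
                result[i][j] += a[i][k] * b[k][j]
    return result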


O(2^N) Exponential Time

The best-known example of this notation is finding the nth Fibonacci number with a recursive solution.

def nth_fibonacci(n: int) -> int:
    if n in [1, 2]:
        return 1
    return nth_fibonacci(n - 1) + nth_fibonacci(n - 2)

O(N!) Factorial Time

An example of this would be to generate all the permutations of a list. Take a look at the Traveling Salesman Problem.
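
For instance, counting the routes a brute-force Traveling Salesman solution would have to compare (a sketch):

from itertools import permutations

cities = ['Tirana', 'Milan', 'London', 'Paris']
routes = list(permutations(cities))
print(len(routes))  # 4! = 24; one more city would make it 5! = 120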


Take the most important factor.

We’ve already talked about dropping the constants when calculating the complexity of an algorithm, because they don’t provide us with any value. There is something more to that rule. When performing complexity analysis, we can end up with an algorithm that performs more than one type of operation on the given input. For example, we may need a function to first sort an array and then search in it. Let’s assume there is one operation to sort with complexity O(NlogN), plus another to search with complexity O(logN).

The time complexity for such a function will be O(NlogN) + O(logN). Let’s visualize this:

O(NlogN) + O(logN)

If you take a look at this graph, you will notice that the impact of O(NlogN) is bigger than the impact of O(logN), since the combined graph looks much more like the one of O(NlogN) than the one of O(logN). We can even show that mathematically:

O(NlogN) + O(logN) = O((N+1)logN)  # factorize
O((N+1)logN) = O(NlogN) # drop constant 1

In this case the two are relatively close to each other and the difference is not obvious, but if we take another example like O(N! + N³ + N² + N), we will notice that the impact of the terms other than N! becomes vanishingly small as N gets large!

We can easily compute 1 000 000³, but try the same for 1 000 000 factorial.

O(N! + N³ + N² + N)

The factorial of 10 is 3 628 800, whereas 10³ is only 1000. As you can see, the impact of N³ is so small compared to N! that we can actually ignore it. That is why, when we have multiple terms summed up, we take the most important factor.

Something very important when taking the most important factor: we group factors by input. If we have an algorithm that operates on 2 different arrays, one of size N and one of size M, and the complexity of the algorithm is O(N² + N + M³ + M²), we cannot just say that the highest factor is M³ and conclude that the complexity is O(M³). That is not true, because N and M are completely separate variables in our function, and our algorithm depends on both of them. What is correct is to take the highest factor for each variable: we eliminate N since N² is higher, we eliminate M² since M³ is higher, and the result is O(N² + M³).
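
A sketch of such an algorithm with two independent inputs:

def process(items_n, items_m):
    for a in items_n:  # O(N²) part
        for b in items_n:
            pass
    for x in items_m:  # O(M³) part
        for y in items_m:
            for z in items_m:
                pass
# total: O(N² + M³); N and M grow independently, so neither term can be dropped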

Conclusion

If you want to learn algorithms and data structures deeply, you need to question everything. Do not take any of the rules for granted: question them and try to find answers. Visualization makes a huge difference in understanding complex algorithms. And do not forget, you should not be learning these topics just to pass some coding interviews, but to make yourself a better engineer.

Visit BetterProgramming.pub for the original article!


Sleep Tight with Automated Testing

In the initial steps of my career, when I had just started writing software, only the senior developers of my company felt responsible for writing tests for their code. Over the years, this has become mainstream. But “mainstream” does not mean “universal”. Plenty of developers still have no comfort with, or even exposure to, the practice of unit testing, let alone automated testing. And yet, a form of peer pressure causes them to play that close to the vest.

So I reach out to these folks to say “Hey, no worries. You can learn and you don’t even have to climb too steep of a hill.” I’d like to revisit that approach here, today, in the form of a blog post.

Let’s get started with unit testing in Python using pytest, assuming that you know absolutely nothing about it. Although the examples are beginner-friendly, this article is not going to be a tutorial on how to write tests with pytest; its docs already cover that. I will focus more on the idea of testing itself.

Intro to automated testing

Automated testing is an extremely useful bug-killing tool for the modern Web developer. (Django docs)

As a definition, automated testing or test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. In other words, it is a piece of software, completely separate from your source code, responsible for making sure that the source code works as expected.

What kind of problems do you solve or avoid when you write tests?

  • When you are writing new code, you can use tests to validate that your code works as expected.
  • When you are refactoring or modifying old code, you can use tests to ensure your changes haven’t affected your application’s behavior unexpectedly.

Why do we write code to test other code? Why aren’t we testing manually?

Let’s say you are done with the development of a new feature and ready to deliver it to your Quality Assurance team. Your QA team will spend some time making sure that all the edge cases are covered and the feature is ready for production. After deploying to production, you are asked to modify or add something new on top of that feature. You make all the required changes to fulfill the new requirements and deliver them to your QA team.

Can you spot the problem above?

Now the QA team is going to run the same manual tests as before, to make sure that the modifications didn’t break anything and all edge cases are still covered. That means time: time lost on the same tests, again and again.

What is the solution?

The solution is automating those tests. When you have automated tests, you can run them anytime you want (even in the middle of the night), in any environment you want (on your development machine or a CI server), and be sure that you haven’t broken anything in your code; and if you did break something, you have a clear idea of what it is. In addition, automated tests are fast, faster than any QA team.

So, yes, we should automate our tests, but how? Unit tests are a good start: they are easy to write and easy to maintain. So what is a unit test?

Automated testing: Unit tests

Unit testing is a software testing method by which individual units of source code are tested to determine whether they are fit for use. In other words, unit tests take small pieces of source code and check whether those pieces operate as expected under given conditions. Unit tests also find problems early in the development cycle.

Finding bugs in the source code before the release is critical. The cost of finding a bug when the code is written is much lower than finding it later. And finding a bug is just the beginning: after you find it, you have to work out what is causing it and how to solve it. Doing all of that on a module that is already in production is very expensive in terms of time and very stressful for the developer. You need to be really quick: the code is in production, so the bug is too, and your users are facing it. And you need to be sure that after fixing the problem you haven’t created new ones.

Unit tests are a perfect way to find those problems and to tell you exactly where a problem occurs, so you can skip the detection and identification of the bug and jump directly to solving it. Unit tests have a clear result: they either pass (in which case there is nothing to show but a success message) or fail (in which case the failure message contains the exact check performed by the test that failed). The result of a failing unit test contains a lot of information that helps you get a better idea of what is going wrong.


Real world example with a demo pizza app

Now let’s jump into a real-world example and show how unit testing works. We are going to work on a demo pizza application. Below is a simplified diagram that describes the application.

Demo Pizza App

So the diagram says that:

– Each topping and each size has its price
– Each pizza has some default toppings
– For each pizza order, we have to select a pizza, a size, and extra toppings optionally
– Each pizza order is linked with a single order
– Finally, each order holds some general information about the order

We are going to focus on writing some unit tests for the price calculation.

How is the price calculated?

1. Each pizza has a base price, which is the sum of the prices of the default toppings
2. Each pizza order has a price, which is the sum of the following:
2.1. price of pizza
2.2. price of size
2.3. sum of the extra toppings
3. Finally, each order has a price, which is the sum of the pizza order prices

Let’s have a look at the Pizza model class which we are going to test.

Pizza model
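
The gist is not embedded here, so as a stand-in, here is a minimal sketch of such a model (total_price and toppings follow the article; the other field names are assumptions):

from django.db import models


class Topping(models.Model):
    name = models.CharField(max_length=100)
    price = models.DecimalField(max_digits=6, decimal_places=2)


class Pizza(models.Model):
    name = models.CharField(max_length=100)
    toppings = models.ManyToManyField(Topping)

    @property
    def total_price(self):
        # the base price of a pizza is the sum of its default toppings' prices
        return sum(topping.price for topping in self.toppings.all())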

So we have some toppings, each of them with its price, and the sum of those prices gives us the total price of a Pizza instance. Let’s write some unit tests for this.
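
The original test gist is likewise not embedded; two such tests might look like this (module paths are assumptions):

import pytest

from pizza_app.models import Pizza, Topping


@pytest.mark.django_db
def test_pizza_price_with_toppings():
    # preparation: create and link all required instances
    pizza = Pizza.objects.create(name='Margherita')
    cheese = Topping.objects.create(name='cheese', price=2)
    tomato = Topping.objects.create(name='tomato', price=1)
    pizza.toppings.add(cheese, tomato)

    expected_price = cheese.price + tomato.price
    assert pizza.total_price == expected_price


@pytest.mark.django_db
def test_pizza_price_without_toppings():
    pizza = Pizza.objects.create(name='Plain')
    assert pizza.total_price == 0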

Above we have two tests for the price calculation of a Pizza instance: they check the price of a pizza with and without toppings. Let’s analyze them.

1. Django DB mark

Unit tests in pytest are not allowed to access the DB by default. To be able to access the DB, we use this mark from the pytest_django package.

2. Preparation phase

Here we create and link all the required instances with each other: pizzas and toppings. We assign a value to every required attribute, especially to price.

3. Calculating results

We get the actual price by calling the total_price property of the Pizza instance. This property performs its calculation and returns the result.

We get the expected price by summing the prices of the two toppings of the pizza instance.

4. Assertion

It’s time to compare the values: if the price we get from the total_price property is the same as expected_price, the test passes; otherwise, it fails.

We can run the tests by using the pytest command in a terminal window, and here are the results:

Pizza price test results

So pytest searched for all tests inside the project, collected 2 tests, and ran them. The configuration for pytest is done through an ini file called pytest.ini, located at the root level of the project. Let’s have a look at the configuration file.
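
Reconstructed from the options discussed below, the file looks roughly like this:

[pytest]
DJANGO_SETTINGS_MODULE = pizza_app.test_settings
python_files = tests.py test_*.py *_tests.py
addopts = --reuse-db --nomigrations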

1. DJANGO_SETTINGS_MODULE = pizza_app.test_settings

We have defined a different settings file for Django, which pytest will use when setting Django up. This is a common practice to speed up testing. For example, if your model fields are supported by SQLite, you can use it to make the DB a lot faster during tests. You can use a different password hasher and remove password validators to speed up creating users. You can also change the DEBUG setting, which is False by default when running tests, to execute the tests in an environment as close as possible to production. You can view my test_settings file in the demo app.

2. python_files = tests.py test_*.py *_tests.py

Here we have defined the naming convention for test files: every file whose name starts with test_, ends with _tests, or is exactly tests will be considered a test file during test discovery.

3. addopts = --reuse-db --nomigrations

Here we are saying that after the test run finishes, the test DB that was created should not be destroyed, so that it can be reused. Furthermore, when creating the DB, pytest should not inspect the Django migrations but check the models directly; this option can be faster when there are several migrations to apply for each app. Running pytest in the terminal will use the configuration provided in the ini file. These configurations can be overridden when running the command, in the form:

pytest --create-db

The reason you may want to use this option is that when you have changes to your database schema, you can run the tests once with this command and a new database will be created. After that, you can go back to running your tests with the plain pytest command, so all the ini configs are used again.


Write clean tests

Some developers think it is OK for testing code to be dirty and not compliant with the conventions of your language, unlike your source code. I do not agree with that.

Let me describe the cost of writing dirty tests. At first it will be all fine: you wrote some tests super fast, the code is dirty, but that doesn’t matter because they are just tests. After some time the requirements change, so you have to change your tests too. Now you have two options. The first is to spend the required time modifying the dirty testing code. The second is to place a skip mark on those tests, because you are almost sure that your code is working, and plan to deal with them later. Even if you want to spend the time now to fix broken tests, there will always be a tight deadline that makes you choose the second option. And trust me, if you skip those tests, there will never be free time to fix them.

So what is the cost? Time. Lost time.

Remember the reason for writing tests? By writing dirty tests, we wasted our time writing them, and we will waste time again fixing or skipping them and facing bugs in production.

So, my suggestion is to always consider your testing code the same way you consider your source code. Write it as clean as you can.

Now let’s review our testing code. Is it clean? No.

Preparation phase

We are not interested at all in what values the attributes have, not even price. We just want to create a pizza with some toppings. Imagine having to create an instance of a class with dozens of required fields. There must be an easier way.

Enter model-bakery. This is a must-have package if you want to write hundreds of tests in Django: it is capable of creating instances of a class by providing only the class name. A sketch of how it goes (the original gist is not embedded here):
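
from model_bakery import baker

from pizza_app.models import Pizza  # import path is an assumption

# every required field is filled in with a sensible value automatically
pizza = baker.make(Pizza)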

It is as simple as that, but can we go further? In the second test we are doing almost the same thing, just without toppings. So we can move the creation of a pizza into another module; let’s name it test_utils.py (again as a sketch):
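
# test_utils.py: a sketch of the shared helper (names and paths are assumptions)
from model_bakery import baker

from pizza_app.models import Pizza, Topping


def create_pizza(number_of_toppings=0):
    pizza = baker.make(Pizza)
    toppings = baker.make(Topping, _quantity=number_of_toppings) if number_of_toppings else []
    pizza.toppings.set(toppings)
    return pizza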

And the testing code:
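
# a sketch of the refactored test
import pytest

from .test_utils import create_pizza


@pytest.mark.django_db
def test_pizza_price_with_toppings():
    pizza = create_pizza(number_of_toppings=2)
    expected_price = sum(t.price for t in pizza.toppings.all())
    assert pizza.total_price == expected_price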


Parametrization

The testing code is a little cleaner now, but as you may have noticed, we have two tests that test the same thing: the price of a pizza. The only thing that changes is the number of toppings. To simplify this, we can use one of the best features of pytest: parametrization. What we want to parametrize is the number of toppings: it will be given to the test function as a parameter, and pytest will run the test more than once, passing the next number of toppings each time.

@pytest.mark.parametrize('number_of_toppings', list(range(0, 10)))
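
Applied to our test, the whole thing might look like this (a sketch):

@pytest.mark.django_db
@pytest.mark.parametrize('number_of_toppings', list(range(0, 10)))
def test_pizza_price(number_of_toppings):
    pizza = create_pizza(number_of_toppings)
    expected_price = sum(t.price for t in pizza.toppings.all())
    assert pizza.total_price == expected_price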

This line of code makes the test run 10 times, each time with a different number of toppings. And because number_of_toppings equal to 0 means a pizza without toppings, we no longer need the second test. This is the output of the pytest -v command (-v: verbose, display more information):

running parametrized tests

Mocking certain operations

Now we have a general understanding of how to approach tests and how to make them simple and clean. Another problem we will face while testing our web application is costly operations. By costly operations, I mean side effects of the source code that are out of the testing scope but would be executed while running the tests. For example, let’s suppose that for every order created we need to print out a receipt, and that the actual printing is done by a microservice, which we contact via an API, providing the order details. Naturally, we don’t want to print anything while running our tests. This means that, while testing order creation, we need a way to skip this API call. The mechanism that provides what we want is called mocking.

There is a pytest way to achieve this: the monkeypatch fixture, which provides helper methods for safely patching and mocking functionality in tests.

Let’s assume we have the following code to save an order and print out the receipt (a simplified sketch, since the original gist is not embedded here):
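
import requests
from django.db import models

RECEIPT_SERVICE_URL = 'https://receipts.example.com/print'  # hypothetical endpoint


class Order(models.Model):
    price = models.DecimalField(max_digits=8, decimal_places=2, default=0)

    def save(self, *args, **kwargs):
        self.price = self.calculate_price()  # the behavior we want to test
        super().save(*args, **kwargs)
        # the costly side effect we want to skip in tests
        requests.post(RECEIPT_SERVICE_URL, json={'order_id': self.pk})

    def calculate_price(self):
        # simplified; pizza_orders is an assumed related_name
        return sum(po.price for po in self.pizza_orders.all()) if self.pk else 0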

What we want is to mock the post method of the requests module to do nothing instead of posting to the receipt microservice; we just want to test that the price is set when saving an order instance. The testing code, again as a sketch:
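
import pytest
import requests

from pizza_app.models import Order


@pytest.mark.django_db
def test_order_price_is_set_on_save(monkeypatch):
    def mock_post(*args, **kwargs):
        pass  # do nothing instead of contacting the receipt microservice

    monkeypatch.setattr(requests, 'post', mock_post)

    order = Order()
    order.save()
    assert order.price == 0  # no pizza orders linked yet (an assumed assertion)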

Monkeypatch is a pytest fixture, so we can directly include it as a parameter in our testing function. After including it, this line does the work:

monkeypatch.setattr(requests, 'post', mock_post)

This command replaces the post method of the requests module with our mock_post function, in the scope of the test. Now, whenever requests.post is called inside the test, our mock function is executed instead. If for some reason I need the response of the post method, I can write a mock class that behaves like the original response, i.e. implements the methods of it that we need.

Mock response class (a sketch):
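
# a sketch: implement only what the code under test actually uses
class MockResponse:
    status_code = 200

    def json(self):
        return {'printed': True}  # hypothetical payload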

Test:
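
# a sketch of a test that uses the mocked response
@pytest.mark.django_db
def test_order_create(monkeypatch):
    monkeypatch.setattr(requests, 'post', lambda *args, **kwargs: MockResponse())
    order = Order()
    order.save()  # save() can now safely consume the response object
    assert order.price == 0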

We can play around with the MockResponse class as we like to fulfill the requirements of the code that we are mocking.


Debugging

Our automated tests will not always pass. Sometimes they will fail, and that is precisely why we write them: to fail when they need to and tell us what is going wrong. After a test fails, it’s time to verify whether we are doing something wrong in the testing code or something is actually broken in our source code. To do that, we need more details about what is going on inside the application while the test is running. The source code we are testing can be hard to understand, and even tests can sometimes be hard to understand (although they shouldn’t be). To dig in and understand all the circumstances of a failing test, we need to debug it. Here are two ways to debug a test.

Using PyCharm

The first one is using a modern Integrated Development Environment, like PyCharm. First of all, to be able to use pytest support in PyCharm, you need to set the Default test runner setting to pytest. You can find this setting under Settings > Tools > Python Integrated Tools.

After indexing your project, PyCharm will show a green run icon beside each test:

When you click on this icon you see the following:

By clicking Debug you can run this single test in a debug session, and execution will stop at the breakpoints you have placed in the code.

Dropping to Pdb

If you are already in a terminal window and want to inspect some variables quickly, you can also drop into Pdb by using the built-in function breakpoint. In your code (a sketch):
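
# inside the code under test, for example the Order model (a sketch)
def calculate_price(self):
    breakpoint()  # the test run will pause here and drop into Pdb
    return sum(po.price for po in self.pizza_orders.all()) if self.pk else 0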

In your terminal, the test run will pause at a (Pdb) prompt. If you press TAB twice you can view all the available commands:

Some of the commands are as follows:

next: executes the next line
cont: continues execution until the next breakpoint
list: displays the source code around the line being executed


Isolation

Test isolation is a topic worth an entire article of its own. I will treat it only briefly here, because if we are going to write tests, we definitely need to keep this concept in mind.

Basically, test isolation says that all tests must be independent of each other and of the other modules of our project. A test function must not make any assumptions about the application’s state, and must not depend on a particular state to be able to run. It should not matter whether a test runs first or last, alone or in a group of tests.

What is the point of all of that?

If a test depends on other tests, you have no guarantee that it will always pass. And if you are not sure about that, you cannot trust the value the test produces: in other words, the test becomes unusable. For example, for our order_create test, we created an order using the following util function (a sketch):
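
# a sketch of the util function; names and paths are assumptions
from model_bakery import baker

from pizza_app.models import Order, PizzaOrder


def create_order(number_of_pizza_orders=1):
    order = baker.make(Order)
    # create the pizza orders directly instead of assuming any exist in the DB
    baker.make(PizzaOrder, order=order, _quantity=number_of_pizza_orders)
    return order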

Here we have not assumed that a pizza order was already created in the DB by some other test. That’s why we haven’t retrieved any PizzaOrder instance but created one directly. Every single thing that a test needs in order to run must be created inside its body or through a database fixture. What the test needs to run must be obvious; if it is not, the test will be much harder to debug when it fails, because we don’t even know what it receives as input. And when you don’t know the exact input, you can’t check the output.

Final words on isolation: always make obvious what the tests need to run; tests should not depend on running order; and pay attention to assertion timing.


Flaky tests

A flaky test is a test that both passes and fails periodically without any code changes. Flaky tests are annoying, but they can also be quite costly, since they often require engineers to retrigger entire builds on CI and waste a lot of time waiting for new builds to complete. A flaky test is usually the result of a poorly isolated test: it depends on some global state that changes over time. This kind of test gives you false negatives. It is better to have no tests at all than tests that occasionally fail without any code changes, because such tests make you and your organization lose confidence in tests and fall back to manual verification. Check out how the Spotify engineers deal with them in the article linked in the references.


In conclusion

We have discussed some ways of writing clean and maintainable tests. Key topics treated:

  • Pytest, as an advanced testing framework that has countless features compared to the standard built-in unittest.
  • Ways to debug tests.
  • Isolation of tests and their importance.
  • Flaky tests, as the result of poorly isolated tests.

If you are going to work on complex applications, bugs will always be present. Testing, or even better automated testing, is a bug-killing tool that can save you a lot of time and headache.

Thanks for reading!

References

  1. Demo pizza app
  2. Demo pizza app DB diagram
  3. Pytest docs
  4. pytest_django package
  5. model-bakery package
  6. Test Flakiness at Spotify