Category: Programming

  • SOLID Design Principles Examples in Python

    SOLID Design Principles Examples in Python

    I already explained what the SOLID principles are in a previous article; here are Python examples for each principle to make them clearer:

    Single Responsibility Principle (SRP)

    The SRP states that a class should have only one reason to change. In other words, a class should have only one responsibility. Here’s an example of a class that violates the SRP:

    class Employee:
        def calculate_pay(self):
            # Calculate the employee's pay
            pass
    
        def save_to_database(self):
            # Save the employee's data to the database
            pass
    

    In this example, the Employee class has two responsibilities: calculating the employee’s pay and saving the employee’s data to the database. To follow the SRP, we could split this class into two separate classes:

    class Employee:
        def calculate_pay(self):
            # Calculate the employee's pay
            pass
    
    class EmployeeRepository:
        def save_to_database(self, employee):
            # Save the employee's data to the database
            pass
    

    Now we have one class responsible for calculating the employee’s pay and another class responsible for saving the employee’s data to the database.
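    Putting the two classes together might look like the sketch below. The pay formula and the in-memory "database" dict are assumptions for illustration; a real repository would talk to an actual database:

```python
class Employee:
    def __init__(self, name, hourly_rate, hours_worked):
        self.name = name
        self.hourly_rate = hourly_rate
        self.hours_worked = hours_worked

    def calculate_pay(self):
        # Pay calculation is this class's single responsibility
        return self.hourly_rate * self.hours_worked

class EmployeeRepository:
    def __init__(self):
        self._rows = {}  # stands in for a real database table

    def save_to_database(self, employee):
        self._rows[employee.name] = employee

employee = Employee("Alice", 40, 160)
repository = EmployeeRepository()
repository.save_to_database(employee)
print(employee.calculate_pay())  # 6400
```

    A change to the pay formula now touches only Employee, and a change of database only EmployeeRepository.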

    Open/Closed Principle (OCP)

    The OCP states that classes should be open for extension but closed for modification. In other words, you should be able to add new functionality to a class without changing its existing code. Here’s an example of a class that violates the OCP:

    class Employee:
        def __init__(self, name, salary):
            self.name = name
            self.salary = salary
    
        def calculate_bonus(self):
            return self.salary * 0.1
    

    In this example, the Employee class calculates a bonus based on the employee’s salary. If we wanted to calculate a bonus based on other factors, such as the employee’s performance or the company’s revenue, we would have to modify the Employee class. To follow the OCP, we can extract bonus calculation into its own class hierarchy, so new bonus policies are added as subclasses instead of edits to existing code:

    class BonusCalculator:
        def calculate_bonus(self, employee):
            pass
    
    class SalaryBonusCalculator(BonusCalculator):
        def calculate_bonus(self, employee):
            return employee.salary * 0.1
    
    class PerformanceBonusCalculator(BonusCalculator):
        def calculate_bonus(self, employee):
            # Calculate bonus based on performance
            pass
    
    class RevenueBonusCalculator(BonusCalculator):
        def calculate_bonus(self, employee):
            # Calculate bonus based on revenue
            pass
    

    Now we have one class responsible for calculating bonuses and multiple subclasses that can calculate bonuses based on different criteria.
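    A caller can then pick a calculator without the Employee class ever changing. The SeniorityBonusCalculator below is a hypothetical new policy added purely by extension:

```python
class Employee:
    def __init__(self, name, salary):
        self.name = name
        self.salary = salary

class BonusCalculator:
    def calculate_bonus(self, employee):
        raise NotImplementedError

class SalaryBonusCalculator(BonusCalculator):
    def calculate_bonus(self, employee):
        return employee.salary * 0.1

# A new policy, added by extension rather than by modifying existing classes
class SeniorityBonusCalculator(BonusCalculator):
    def __init__(self, years):
        self.years = years

    def calculate_bonus(self, employee):
        return employee.salary * 0.02 * self.years

def pay_bonus(employee, calculator):
    # Depends only on the BonusCalculator abstraction
    return calculator.calculate_bonus(employee)

alice = Employee("Alice", 50000)
print(pay_bonus(alice, SalaryBonusCalculator()))      # 5000.0
print(pay_bonus(alice, SeniorityBonusCalculator(5)))  # 5000.0
```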

    Liskov Substitution Principle (LSP)

    The LSP states that objects of a superclass should be replaceable with objects of a subclass without changing the correctness of the program. In other words, subclasses should be able to be used in place of their parent classes without causing unexpected behavior. Here’s an example of a class hierarchy that violates the LSP:

    class Bird:
        def fly(self):
            pass
    
    class Ostrich(Bird):
        def fly(self):
            raise NotImplementedError()
    

    In this example, the Ostrich class is a subclass of Bird, but it cannot fly. If we try to use an Ostrich object in place of a Bird object, we will get an unexpected NotImplementedError. To follow the LSP, we can restructure the hierarchy so that only birds that actually fly expose a fly method:

    class Bird:
        pass
    
    class FlyingBird(Bird):
        def fly(self):
            pass
    
    class Ostrich(Bird):
        pass
    

    Now flying behavior lives only in FlyingBird, so an Ostrich can be used anywhere a Bird is expected without surprising its callers.
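    Code that actually needs flight can then depend on FlyingBird, making it impossible to pass an Ostrich by mistake. A small sketch (Sparrow and migrate are illustrative names, not from the original):

```python
class Bird:
    pass

class FlyingBird(Bird):
    def fly(self):
        return "flying"

class Ostrich(Bird):
    pass

class Sparrow(FlyingBird):
    pass

def migrate(bird):
    # Expects a FlyingBird; an Ostrich simply doesn't fit this parameter
    return bird.fly()

print(migrate(Sparrow()))  # flying
```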

    Interface Segregation Principle (ISP)

    The ISP states that clients should not be forced to depend on methods they do not use. In other words, interfaces should be tailored to the needs of their clients. Here’s an example of a class that violates the ISP:

    class Printer:
        def print(self, document):
            pass
    
        def scan(self, document):
            pass
    

    In this example, the Printer class provides both printing and scanning functionality. If a client only needs printing functionality, they will be forced to depend on the scan method. To follow the ISP, we could split the Printer class into two separate interfaces:

    class Printer:
        def print(self, document):
            pass
    
    class Scanner:
        def scan(self, document):
            pass
    

    Now we have two separate interfaces, one for printing and one for scanning, that clients can use independently.
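    In Python, such role interfaces can be made explicit with the standard abc module. A sketch (the method is named print_document here to avoid shadowing the built-in print):

```python
from abc import ABC, abstractmethod

class Printer(ABC):
    @abstractmethod
    def print_document(self, document):
        ...

class Scanner(ABC):
    @abstractmethod
    def scan(self, document):
        ...

class SimplePrinter(Printer):
    def print_document(self, document):
        return f"printed: {document}"

# A client that only prints depends solely on the Printer interface
def print_report(printer, report):
    return printer.print_document(report)

print(print_report(SimplePrinter(), "Q1 report"))  # printed: Q1 report
```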

    Dependency Inversion Principle (DIP)

    The DIP states that high-level modules should not depend on low-level modules. Instead, both should depend on abstractions. In addition, abstractions should not depend on details. Instead, details should depend on abstractions. Here’s an example of a class that violates the DIP:

    class EmployeeService:
        def __init__(self):
            # The high-level service constructs a concrete, low-level class itself
            self.employee_repository = SqlEmployeeRepository()
    
        def get_employee_by_id(self, employee_id):
            return self.employee_repository.get_by_id(employee_id)
    

    In this example, the EmployeeService class creates and depends directly on the concrete SqlEmployeeRepository class, which is a low-level module. To follow the DIP, we can introduce an abstraction that both the service and the repository implementations depend on, and inject the chosen implementation from outside:

    class EmployeeRepository:
        def get_by_id(self, employee_id):
            pass
    
    class EmployeeService:
        def __init__(self, employee_repository):
            self.employee_repository = employee_repository
    
        def get_employee_by_id(self, employee_id):
            return self.employee_repository.get_by_id(employee_id)
    
    class SqlEmployeeRepository(EmployeeRepository):
        def get_by_id(self, employee_id):
            # Retrieve employee from SQL database
            pass
    
    class MongoEmployeeRepository(EmployeeRepository):
        def get_by_id(self, employee_id):
            # Retrieve employee from MongoDB
            pass
    

    Now we have an abstraction, the EmployeeRepository class, on which the EmployeeService, SqlEmployeeRepository, and MongoEmployeeRepository classes all depend. This allows us to switch between database implementations without changing the EmployeeService class.
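    Injection also makes the service trivially testable: we can pass in a fake repository. The in-memory implementation below is a hypothetical stand-in for illustration:

```python
class EmployeeRepository:
    def get_by_id(self, employee_id):
        raise NotImplementedError

# A hypothetical in-memory implementation, handy for tests
class InMemoryEmployeeRepository(EmployeeRepository):
    def __init__(self, employees):
        self._employees = employees

    def get_by_id(self, employee_id):
        return self._employees[employee_id]

class EmployeeService:
    def __init__(self, employee_repository):
        # The repository is injected; the service never names a concrete class
        self.employee_repository = employee_repository

    def get_employee_by_id(self, employee_id):
        return self.employee_repository.get_by_id(employee_id)

repo = InMemoryEmployeeRepository({1: {"id": 1, "name": "Alice"}})
service = EmployeeService(repo)
print(service.get_employee_by_id(1)["name"])  # Alice
```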

    These examples show how each of the SOLID principles can be applied in Python code to create more maintainable, flexible, and reusable software.

    Conclusion

    The SOLID principles provide a set of guidelines for writing clean and maintainable code. By following these principles, we can write code that is more modular, flexible, and reusable, making it easier to maintain and extend over time.

    In Python, we can apply the SOLID principles in a variety of ways, such as using inheritance and polymorphism to adhere to the LSP, creating small and focused classes that adhere to the SRP, using dependency injection to adhere to the DIP, and splitting interfaces to adhere to the ISP. By applying these principles, we can write code that is easier to understand, modify, and extend, ultimately leading to more robust and maintainable software.

    It’s worth noting that while the SOLID principles are generally considered to be good coding practices, they are not always applicable in every situation. As with any programming guideline or best practice, it’s important to use good judgement and apply the principles in a way that makes sense for the specific requirements and constraints of your project.

    In conclusion, the SOLID principles are an important set of guidelines for writing clean and maintainable code. By following these principles in our Python code, we can create software that is more flexible, reusable, and easier to maintain and extend over time.

  • SOLID Design Principles

    SOLID Design Principles

    Software developers aim to create applications that are easy to maintain, extend and test, and that can adapt to changing requirements. However, software design can be complex and challenging, and it is important to follow established principles and best practices to create effective, high-quality software. One set of principles that has gained widespread recognition and adoption in the software development community is known as SOLID principles. In this article, we will explain what SOLID principles are, why they are important, and how to apply them in your software development projects.

    What are SOLID principles?

    SOLID is an acronym that stands for five principles of object-oriented programming (OOP). The principles were introduced by Robert C. Martin in the early 2000s (the SOLID acronym itself was coined later by Michael Feathers) as a set of guidelines for writing effective, maintainable, and scalable software. The SOLID principles are:

    • Single Responsibility Principle (SRP)
    • Open/Closed Principle (OCP)
    • Liskov Substitution Principle (LSP)
    • Interface Segregation Principle (ISP)
    • Dependency Inversion Principle (DIP)

    Each principle is designed to address a specific aspect of software design, and they work together to create a foundation for building high-quality, maintainable software applications.

    Single Responsibility Principle (SRP)

    The Single Responsibility Principle states that a class should have only one reason to change. This means that each class should have a single responsibility or job, and that responsibility should be encapsulated within that class. This makes the code more modular, easier to understand, and easier to maintain. When a class has more than one responsibility, it becomes more difficult to modify and test, and changes to one responsibility may unintentionally affect other parts of the code.

    Open/Closed Principle (OCP)

    The Open/Closed Principle states that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification. This means that you should be able to add new functionality to a system without modifying the existing code. This principle is essential for building scalable and maintainable software, as it allows you to add new features without disrupting existing functionality. This can be achieved through the use of abstractions, such as interfaces or abstract classes, which provide a contract for how the system should behave.

    Liskov Substitution Principle (LSP)

    The Liskov Substitution Principle states that a derived class must be substitutable for its base class. This means that any object of the base class should be able to be replaced by an object of the derived class without affecting the correctness of the program. This principle is important for ensuring that software is robust and maintainable, as it allows developers to make changes to the implementation of a class without affecting the behavior of the rest of the system.

    Interface Segregation Principle (ISP)

    The Interface Segregation Principle states that clients should not be forced to depend on interfaces they do not use. This means that interfaces should be small and focused, and should only contain methods that are relevant to the clients that use them. This principle is important for creating maintainable and scalable software, as it reduces the impact of changes to the system by limiting the dependencies between different parts of the code.

    Dependency Inversion Principle (DIP)

    The Dependency Inversion Principle states that high-level modules should not depend on low-level modules. Instead, both should depend on abstractions. This means that you should depend on abstractions, such as interfaces, rather than on concrete implementations. This principle is important for creating maintainable and flexible software, as it allows you to change the implementation of a class without affecting the rest of the system.

    Why are SOLID principles important?

    The SOLID principles provide a set of guidelines for creating software that is easy to maintain, extend, and test. By following these principles, developers can create software that is more robust, adaptable, and scalable, with a reduced risk of introducing bugs or unexpected behavior. In addition, adhering to SOLID principles can make the code easier to understand and modify, which can be especially important for teams working on large or complex projects.

    Applying SOLID principles in practice

    Now that we have a better understanding of what SOLID principles are and why they are important, let’s explore how to apply them in practice. Here are some tips for applying each of the SOLID principles in your software development projects:

    Single Responsibility Principle (SRP)

    To apply the Single Responsibility Principle, you should start by identifying the different responsibilities of each class in your system. If a class has more than one responsibility, consider breaking it up into smaller, more focused classes. You can use the following questions to help identify the responsibilities of a class:

    • What does the class do?
    • What does it depend on?
    • What depends on it?
    • Can its responsibilities be separated into smaller, more focused classes?

    Open/Closed Principle (OCP)

    To apply the Open/Closed Principle, you should use abstractions such as interfaces or abstract classes to define the behavior of your system. By depending on abstractions rather than concrete implementations, you can make your system more flexible and easier to extend. Here are some tips for applying the Open/Closed Principle:

    • Define interfaces or abstract classes that define the behavior of your system.
    • Implement concrete classes that conform to the interface or abstract class.
    • Use dependency injection to allow different implementations to be substituted at runtime.
    • Avoid making changes to existing code when adding new functionality.

    Liskov Substitution Principle (LSP)

    To apply the Liskov Substitution Principle, you should ensure that derived classes can be substituted for their base classes without affecting the behavior of the system. Here are some tips for applying the Liskov Substitution Principle:

    • Ensure that the derived class implements all the methods of the base class.
    • Ensure that the derived class does not introduce new behaviors that are not present in the base class.
    • Ensure that the derived class does not violate any invariants of the base class.
    • Use unit tests to ensure that derived classes can be substituted for their base classes without affecting the behavior of the system.
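    The last tip can be sketched as a single contract check run against every subclass; the Shape hierarchy below is a hypothetical example, not from the article:

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

def check_substitutable(shape):
    # Every Shape must honour the base-class contract:
    # area() returns a non-negative number and raises nothing.
    area = shape.area()
    assert isinstance(area, (int, float)) and area >= 0

# The same check runs unchanged over every subclass
for shape in (Square(2), Circle(1)):
    check_substitutable(shape)
```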

    Interface Segregation Principle (ISP)

    To apply the Interface Segregation Principle, you should ensure that interfaces are small and focused, and that clients only depend on the methods they use. Here are some tips for applying the Interface Segregation Principle:

    • Create interfaces that are small and focused.
    • Ensure that each interface only contains methods that are relevant to the clients that use it.
    • Avoid creating “fat” interfaces that contain methods that are not relevant to all clients.
    • Use composition rather than inheritance to avoid creating interfaces with unnecessary methods.
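    The composition tip can be sketched like this: a device that needs both capabilities delegates to the two focused classes instead of inheriting one fat interface (the class names are illustrative):

```python
class Printer:
    def print_document(self, document):
        return f"printing {document}"

class Scanner:
    def scan(self, document):
        return f"scanning {document}"

# Composition: the multifunction device holds a Printer and a Scanner
# and delegates to them, so each interface stays small and focused.
class MultiFunctionDevice:
    def __init__(self):
        self._printer = Printer()
        self._scanner = Scanner()

    def print_document(self, document):
        return self._printer.print_document(document)

    def scan(self, document):
        return self._scanner.scan(document)
```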

    Dependency Inversion Principle (DIP)

    To apply the Dependency Inversion Principle, you should depend on abstractions rather than concrete implementations. Here are some tips for applying the Dependency Inversion Principle:

    • Define interfaces or abstract classes to represent the dependencies of your classes.
    • Use dependency injection to inject the dependencies into your classes at runtime.
    • Ensure that your high-level modules depend on abstractions rather than concrete implementations.
    • Use inversion of control containers to manage the dependencies in your system.
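    A minimal, hand-rolled container illustrates the last tip; real projects would typically use a library, and the names below are assumptions for illustration:

```python
# A tiny inversion-of-control container: maps names to factories,
# so wiring happens in one place instead of inside the classes.
class Container:
    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        self._factories[name] = factory

    def resolve(self, name):
        return self._factories[name]()

class InMemoryEmployeeRepository:
    def get_by_id(self, employee_id):
        return {"id": employee_id, "name": "Alice"}

class EmployeeService:
    def __init__(self, repository):
        self.repository = repository

    def get_employee_by_id(self, employee_id):
        return self.repository.get_by_id(employee_id)

container = Container()
container.register("employee_repository", InMemoryEmployeeRepository)
container.register(
    "employee_service",
    lambda: EmployeeService(container.resolve("employee_repository")),
)

# Client code asks the container; it never constructs dependencies itself
service = container.resolve("employee_service")
print(service.get_employee_by_id(7)["name"])  # Alice
```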

    Conclusion

    The SOLID principles provide a set of guidelines for creating effective, maintainable, and scalable software applications. By following these principles, you can create software that is easier to understand, modify, and test, with a reduced risk of introducing bugs or unexpected behavior. Although it may take some time and effort to apply these principles in practice, the benefits are well worth it, especially for larger or more complex software projects. If you’re new to SOLID principles, start by focusing on one principle at a time and gradually incorporating them into your development process. Remember that SOLID principles are not a set of hard and fast rules, but rather a set of guidelines to help you create better software. As you gain experience and confidence, you can adapt and adjust these principles to suit your specific needs and requirements.

    In addition to the SOLID principles, there are other design principles and best practices that can help you create effective, maintainable, and scalable software. These include principles such as Don’t Repeat Yourself (DRY), Keep It Simple, Stupid (KISS), and You Aren’t Gonna Need It (YAGNI), as well as practices such as code reviews, automated testing, and continuous integration and deployment. By incorporating these principles and practices into your development process, you can create software that is more efficient, effective, and reliable.

    In conclusion, SOLID principles provide a framework for creating software that is well-designed, maintainable, and scalable. By following these principles, developers can create software that is easier to understand, modify, and test, with a reduced risk of introducing bugs or unexpected behavior. While it may take some effort to apply these principles in practice, the benefits are well worth it in terms of creating software that is more efficient, effective, and reliable. By incorporating SOLID principles and other best practices into your development process, you can create software that is not only functional but also well-designed and maintainable over the long term.

  • From Monolith to Microservices

    From Monolith to Microservices

    In recent years, the shift from monolithic architectures to microservices has become a popular trend in software development. This shift has been driven by the need to build more scalable, agile, and resilient systems, particularly in the context of modern cloud environments. In this article, we will explore the differences between monolithic and microservices architectures, the benefits and challenges of adopting microservices, and some best practices for making the transition successfully.

    Monolithic Architecture

    Monolithic architecture is a traditional way of building software applications where all the components of the system are tightly coupled together into a single codebase. The codebase usually consists of a single executable that includes all the functionality of the application. Monolithic applications typically have a layered architecture with a presentation layer, business logic layer, and data access layer. All the functionality of the application is implemented in these layers, and they are tightly coupled together.

    Monolithic architecture has been the dominant approach for many years. It is relatively easy to develop and deploy, and it is straightforward to test and maintain. However, monolithic applications can become complex and unwieldy as they grow in size and complexity. Changes to one part of the application can have unintended consequences in other parts of the system, and this can make it challenging to introduce new features or make changes to the existing functionality.

    Microservices Architecture

    Microservices architecture is an alternative approach to building software applications. In a microservices architecture, the application is decomposed into a set of small, independent services that communicate with each other using APIs or message queues. Each service is responsible for a specific business capability, and it can be developed, deployed, and scaled independently of the other services in the system.

    Microservices architecture provides several benefits over monolithic architecture. It allows for greater agility, as each service can be developed and deployed independently. This means that new features can be introduced more quickly, and changes to the existing functionality can be made without affecting the other parts of the system. Microservices also allow for greater scalability, as each service can be scaled independently based on its specific requirements. Additionally, microservices architecture can improve resilience, as failures in one service do not necessarily affect the other services in the system.

    Benefits of Adopting Microservices

    There are several benefits to adopting microservices architecture:

    1. Greater agility: Microservices architecture allows for greater agility, as each service can be developed, deployed, and scaled independently. This means that new features can be introduced more quickly, and changes to the existing functionality can be made without affecting the other parts of the system.
    2. Improved scalability: Microservices architecture allows for greater scalability, as each service can be scaled independently based on its specific requirements. This means that you can scale the parts of the system that need it most, without having to scale the entire system.
    3. Better resilience and fault isolation: Microservices architecture can improve resilience, as failures in one service do not necessarily affect the other services in the system. This means that you can isolate failures and minimize their impact on the rest of the system.
    4. Improved development velocity: Microservices architecture can improve development velocity, as each service can be developed independently. This means that you can introduce new features more quickly, and make changes to the existing functionality without affecting the other parts of the system.

    Challenges of Adopting Microservices

    Adopting microservices architecture can be challenging, and there are several key challenges that need to be addressed:

    1. Complexity: Microservices architecture is more complex than monolithic architecture, as it involves multiple services communicating with each other. This can make the system more difficult to understand and manage.
    2. Distributed systems: Microservices architecture involves building distributed systems, which can be more difficult to design, implement, and test than monolithic systems.
    3. Operational overhead: Microservices architecture can increase operational overhead, as there are more services to deploy, manage, and monitor.
    4. Inter-service communication: In a microservices architecture, services communicate with each other using APIs or message queues. This can introduce latency and increase the complexity of the system.
    5. Data management: Microservices architecture can make data management more challenging, as data may be spread across multiple services.

    Best Practices for Adopting Microservices

    To successfully adopt microservices architecture, there are several best practices that should be followed:

    1. Start small: Start with a small, well-defined service that can be developed, deployed, and tested quickly. This will allow you to get a feel for microservices architecture and identify any challenges early on.
    2. Design for failure: In a microservices architecture, failures will happen. Design your system to be resilient to failures, and ensure that failures in one service do not affect the other services in the system.
    3. Use API gateways: Use API gateways to manage the communication between services. This will make it easier to manage the system, and allow you to introduce new services more easily.
    4. Automate everything: Automation is key to managing a microservices architecture at scale. Use automation tools to deploy, manage, and monitor your services.
    5. Embrace DevOps: DevOps practices are essential for managing a microservices architecture. Embrace DevOps principles such as continuous integration, continuous delivery, and infrastructure as code.
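    The "design for failure" practice can be sketched in a few lines of Python: a call to a downstream service gets a short timeout and a safe fallback, so an outage degrades the response instead of breaking it. The service URL and response shape below are assumptions for illustration:

```python
import json
import urllib.request

# Hypothetical helper calling a downstream "recommendations" service.
def get_recommendations(user_id, base_url="http://recommendations:8080"):
    url = f"{base_url}/users/{user_id}/recommendations"
    try:
        # Short timeout: never let a slow dependency stall this service
        with urllib.request.urlopen(url, timeout=2) as resp:
            return json.load(resp)
    except OSError:
        # Design for failure: degrade gracefully to an empty list
        # instead of propagating the downstream error to our caller.
        return []
```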

    Conclusion

    The shift from monolithic architecture to microservices architecture is a trend that is likely to continue in the coming years. Microservices architecture offers several benefits over monolithic architecture, including greater agility, improved scalability, and better resilience. However, adopting microservices architecture can be challenging, and there are several key challenges that need to be addressed. By following best practices such as starting small, designing for failure, using API gateways, automating everything, and embracing DevOps, organizations can successfully make the transition from monolithic to microservices architecture and realize the benefits of this modern approach to building software applications.

  • Elastic Search for dummies

    Elastic Search for dummies

    Elasticsearch is a powerful search engine and data analytics tool that is designed to be easy to use and highly scalable. It is built on top of the Apache Lucene search engine library and provides a distributed, RESTful search and analytics engine that is widely used in a variety of industries. In this article, we will introduce Elasticsearch for beginners and explain some of its key features and benefits.

    What is Elasticsearch?

    Elasticsearch is a search engine that is used to search and analyze large volumes of data in real-time. It is an open-source search engine that is designed to be scalable, fault-tolerant, and distributed. Elasticsearch can be used for a wide range of use cases, including log analysis, full-text search, e-commerce search, and business analytics.

    Elasticsearch Architecture

    Elasticsearch has a distributed architecture, which means that data is stored across multiple nodes in a cluster. Each node in the cluster can store and search data, and nodes communicate with each other to ensure that data is distributed and replicated across the cluster. Elasticsearch can be deployed on-premises, in the cloud, or in a hybrid environment.

    Elasticsearch Features

    1. Full-Text Search: Elasticsearch is designed for full-text search, which means that it can search for keywords and phrases in the content of documents. Elasticsearch uses an inverted index to store and search documents, which makes it fast and efficient at searching large volumes of data.
    2. Scalability: Elasticsearch is designed to be highly scalable, which means that it can handle large volumes of data and traffic. Elasticsearch can be scaled horizontally by adding more nodes to the cluster, which makes it easy to handle increasing amounts of data.
    3. Fault-Tolerance: Elasticsearch is designed to be fault-tolerant, which means that it can handle node failures without losing data. Elasticsearch uses replication to ensure that data is replicated across multiple nodes in the cluster, which makes it resilient to node failures.
    4. Analytics: Elasticsearch can be used for data analytics, which means that it can be used to search and analyze data in real-time. Elasticsearch provides a powerful query language that can be used to search and filter data, and it also provides aggregation functions that can be used to summarize and group data.
    5. RESTful API: Elasticsearch provides a RESTful API that can be used to interact with the search engine. The API can be used to perform searches, index documents, and manage the cluster. The RESTful API makes it easy to integrate Elasticsearch with other systems and applications.
    6. Plugins: Elasticsearch provides a plugin architecture that allows developers to extend Elasticsearch with additional functionality. There are many plugins available for Elasticsearch that provide features such as security, monitoring, and visualization.

    How to Use Elasticsearch

    1. Install Elasticsearch: The first step in using Elasticsearch is to install it. Elasticsearch can be downloaded from the Elasticsearch website, and it can be installed on a variety of platforms, including Windows, Mac, and Linux.
    2. Index Data: The next step is to index data in Elasticsearch. Data can be indexed using the Elasticsearch API, which can be used to add documents to the search index. Data can also be indexed using Logstash, which is a data processing pipeline that can be used to ingest and process data.
    3. Search Data: Once data has been indexed in Elasticsearch, it can be searched using the Elasticsearch API. Searches can be performed using the query language, which can be used to search for keywords and phrases in the content of documents. Elasticsearch provides a wide range of search capabilities, including fuzzy searches, phrase searches, and wildcard searches.
    4. Analyze Data: Elasticsearch provides powerful analytics capabilities that can be used to analyze data in real-time. Analytics can be performed using the aggregation framework, which can be used to summarize and group data. Aggregations can be used to perform calculations, such as counting the number of documents that match a query, or finding the minimum or maximum value of a field in the search index.
    5. Visualize Data: Elasticsearch provides a variety of visualization tools that can be used to create charts and graphs based on search results. Visualization tools can be used to create dashboards that display data in real-time, and they can be used to create reports that provide insights into data trends.
    6. Monitor Elasticsearch: Elasticsearch provides a variety of tools for monitoring the search engine. Monitoring tools can be used to monitor the health of the cluster, track resource usage, and identify performance bottlenecks. Monitoring tools can also be used to monitor the status of indexing and search operations.

    Example

    Here’s an example of how to use Elasticsearch with Python:

    1- Install Elasticsearch and Python Elasticsearch client:

    First, make sure you have Elasticsearch installed on your machine or server. Then, install the Python Elasticsearch client using pip:

    pip install elasticsearch
    

    2- Connect to Elasticsearch:

    Next, create a connection to your Elasticsearch cluster using the Elasticsearch Python client:

    from elasticsearch import Elasticsearch
    
    es = Elasticsearch("http://localhost:9200")
    

    This will connect to Elasticsearch running on localhost on the default port 9200; change the URL if your cluster is hosted elsewhere. Note that recent versions of the Python client require the URL to be passed explicitly.

    3- Create an index:

    Before you can store data in Elasticsearch, you need to create an index. An index is like a database in a traditional SQL database system. To create an index, use the indices.create() method:

    index_name = 'my_index'
    body = {
        'settings': {
            'number_of_shards': 1,
            'number_of_replicas': 0
        },
        'mappings': {
            'properties': {
                'title': {'type': 'text'},
                'description': {'type': 'text'}
            }
        }
    }
    
    es.indices.create(index=index_name, body=body)
    

    This will create an index called my_index with one shard and no replicas. It will also define two fields: title and description, both of which are of type text.

    4- Add data to the index:

    Once you have created an index, you can add data to it. To add data, use the index() method:

    doc = {
        'title': 'First document',
        'description': 'This is the first document'
    }
    
    res = es.index(index=index_name, body=doc)
    

    This will add a new document to the my_index index with the title and description fields.

    5- Search for data:

    To search for data in Elasticsearch, use the search() method:

    search_body = {
        'query': {
            'match': {
                'title': 'first'
            }
        }
    }
    
    res = es.search(index=index_name, body=search_body)
    
    for hit in res['hits']['hits']:
        print(hit['_source'])
    

    This will search for documents in the my_index index that have the word “first” in the title field. It will then print out the _source field of each document that matches the query.
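    For reference, here is a hand-made, simplified sample of the response shape that es.search() returns, trimmed to the fields the loop above uses (the document values are made up, not real cluster output):

```python
# A hand-made sample of what es.search() returns, trimmed to the
# fields used above; real responses carry extra metadata such as
# "took", "timed_out", and per-hit "_id" fields.
sample_res = {
    "hits": {
        "hits": [
            {
                "_index": "my_index",
                "_score": 0.9,
                "_source": {
                    "title": "First document",
                    "description": "This is the first document",
                },
            }
        ]
    }
}

# Same extraction as in the loop above: pull out _source per hit.
sources = [hit["_source"] for hit in sample_res["hits"]["hits"]]
print(sources[0]["title"])  # First document
```

Each entry in hits.hits is one matching document, and _source holds the original document body you indexed.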

    6- Delete the index:

    Finally, when you’re done with an index, you can delete it using the indices.delete() method:

    es.indices.delete(index=index_name)
    

    This will delete the my_index index.

    Conclusion

    Elasticsearch is a powerful search engine and data analytics tool that is widely used in a variety of industries. It provides a distributed, RESTful search and analytics engine that is designed to be scalable, fault-tolerant, and easy to use. Elasticsearch can be used for a wide range of use cases, including log analysis, full-text search, e-commerce search, and business analytics. If you’re new to Elasticsearch, there are many resources available online to help you get started, including documentation, tutorials, and community forums. With its powerful features and flexible architecture, Elasticsearch is a great choice for anyone looking to build scalable, real-time search and analytics applications.

  • Why you should not use SQLite in production?

    Why you should not use SQLite in production?

    SQLite is a widely used open-source relational database management system (RDBMS) that has gained popularity among developers for its ease of use, small footprint, and flexibility. It is an embedded SQL database engine that can be used in various programming languages and platforms. However, despite its popularity, using SQLite in production environments is generally not recommended. In this article, we will discuss some of the reasons why you should not use SQLite in production.

    Concurrency

    SQLite uses a file-based approach to store data, which means that a single file is used to store the entire database. This file can be accessed by multiple threads or processes simultaneously. However, SQLite’s concurrency control mechanism is not as robust as that of other relational databases, such as MySQL or PostgreSQL. For example, SQLite locks the entire database file when a write operation is being performed, which can lead to performance issues when multiple users or processes are trying to write to the database simultaneously.
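    This locking behaviour can be probed from Python with the standard-library sqlite3 module. The sketch below (file path is a throwaway temp file) switches the database to write-ahead logging (WAL) mode, which relaxes the lock so readers no longer block a writer; even then, SQLite still permits only one writer at a time:

```python
import os
import sqlite3
import tempfile

# WAL mode needs a real file; in-memory databases don't support it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# The default rollback journal locks the whole file during writes.
# Switching to write-ahead logging lets readers proceed during a
# write, though there is still only ever one writer at a time.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
conn.close()
```

WAL helps read-heavy workloads, but it does not turn SQLite into a multi-writer database the way MySQL or PostgreSQL are.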

    Scalability

    SQLite is designed to be a lightweight database engine that can be embedded in various applications. However, it is not designed for high scalability or high availability. As SQLite is a file-based database, it can become slow when the database file size grows beyond a certain limit. Also, SQLite does not support clustering or replication out-of-the-box, which means that scaling the database horizontally can be challenging.

    Limited Feature Set

    SQLite is a great choice for small-scale applications or prototyping because of its small footprint and ease of use. However, it has a limited feature set compared to other relational databases. For example, it does not support stored procedures or user management, and its ALTER TABLE support is only partial (it does support triggers and views, though with some restrictions). While these features may not be essential for small-scale applications, they are important for large-scale, complex applications.
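    The stored-procedure gap is easy to verify from Python, since sqlite3 ships with the standard library (the table and procedure names below are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary REAL)")

# SQLite's SQL dialect has no CREATE PROCEDURE statement at all;
# attempting one fails with a syntax error.
try:
    conn.execute("CREATE PROCEDURE raise_salaries() BEGIN END")
    has_procedures = True
except sqlite3.OperationalError:
    has_procedures = False

print(has_procedures)  # False
conn.close()
```

Logic that would live in a stored procedure on MySQL or PostgreSQL therefore has to live in the application code instead.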

    Lack of Professional Support

    SQLite is an open-source database engine that is maintained by a group of volunteers. While the SQLite community is active and provides support through mailing lists and forums, there is no formal professional support available for SQLite. This can be a concern for organizations that rely on their databases for mission-critical applications.

    No Client-Server Architecture

    SQLite does not have a client-server architecture, which means that it does not support network access out-of-the-box. This can be a disadvantage for applications that require multiple users to access the database simultaneously from different locations. While there are workarounds to enable network access to SQLite, they can be complicated to implement and may not be as secure as a client-server architecture.

    Lack of Security Features

    SQLite does not have the same level of security features as other relational databases. For example, it does not support role-based access control, which means that it is difficult to implement fine-grained access control for database objects. Also, SQLite does not have native support for encryption, which means that sensitive data stored in SQLite databases can be vulnerable to unauthorized access.

    Conclusion

    While SQLite is a great choice for small-scale applications or prototyping, it is not recommended for production environments. SQLite’s limited scalability, concurrency control, feature set, and security features make it unsuitable for large-scale, mission-critical applications. If you are looking for a relational database that is suitable for production environments, consider using MySQL, PostgreSQL, or Oracle instead. These databases offer better scalability, concurrency control, feature sets, and security features that are essential for production environments.

  • Why you should learn Golang in 2025

    Why you should learn Golang in 2025

    Golang (or Go) is an open-source, statically typed, compiled programming language introduced by Google in 2007. It was built to fill gaps in C++ and Java that Google encountered while working on its servers and distributed systems.

    It is easy to learn, concise, expressive, and readable. It offers high performance and fast execution. It flags incorrect type usage at compile time. It can be used for high- and low-level programming, supports multiple programming paradigms, and has a built-in garbage collector.

    Since its release, Go has gained popularity among developers due to its simplicity, efficiency, and concurrency capabilities. In this article, we will provide you with practical details on why you should learn Golang in 2025.

    Growing Popularity

    Golang is gaining popularity rapidly among developers, making it one of the top programming languages in demand. According to the TIOBE Index, Go has been steadily rising in popularity, currently ranking around 12th position. With its growing popularity, learning Golang in 2025 can help you stay ahead of the curve in the competitive tech industry.

    High Performance

    Golang is a compiled language that provides fast and efficient performance. The language is designed to optimize the use of system resources and is suitable for building high-performance applications. Go is especially useful in developing microservices, network programming, and concurrent programming.

    Concurrency

    Concurrency is a critical aspect of modern software development, and Golang is designed to handle it well. The language has built-in features such as goroutines and channels, making it easy to write concurrent programs. Goroutines are lightweight threads that allow developers to perform multiple tasks simultaneously, while channels are used for communication and synchronization between goroutines.

    Scalability

    Go is designed to support scalability in software development. With its efficient memory management and garbage collection, Go can handle large-scale applications with ease. Golang’s built-in features also make it easy to write modular, reusable, and maintainable code, making it easier to scale applications as they grow.

    Job Opportunities

    The demand for Golang developers is increasing, and it is expected to continue to rise in the coming years. Many companies, including Google, Uber, and Dropbox, are using Go for their software development. Learning Golang in 2025 can provide you with job opportunities in various industries and fields, including finance, healthcare, e-commerce, and more.

    How to Learn Golang in 2025

    Now that you know why you should learn Golang in 2025, here are some practical steps you can take to get started:

    • Get familiar with Golang basics – Start by understanding the basics of Golang, such as variables, functions, and data types.
    • Practice writing Golang code – Practice writing Golang code and implementing different programming concepts. You can use online coding platforms or Golang-specific coding platforms like Go Playground to get started.
    • Learn Golang libraries and frameworks – Golang has several libraries and frameworks that can help you build efficient applications. Get familiar with popular libraries like Gin, Echo, and Beego, and frameworks like Revel and Buffalo.
    • Join Golang communities – Join Golang communities, attend meetups and conferences, and network with other Golang developers. You can find Golang communities on platforms like Reddit, Slack, and Discord.

    Conclusion

    Learning Golang in 2025 can provide you with several benefits, including high performance, concurrency, scalability, and job opportunities. Golang’s growing popularity and demand make it a valuable skill to have in the tech industry. To get started with learning Golang, get familiar with the basics, practice writing Golang code, learn popular libraries and frameworks, and join Golang communities. Good luck!

  • Python public holidays command step by step

    Python public holidays command step by step

    This is a step-by-step tutorial for beginners on how to write a Python command-line tool that fetches data from a remote API. It will teach you how to write clean, maintainable, testable, and easy-to-use code in Python.

    The point of this tutorial is mainly to learn how to structure the code in a way that it’s easy to extend in the future and also easy to understand and test.

    The requirements

    Implement a CLI that fetches public holidays from the existing API, https://date.nager.at, and shows the next n (5 by default) occurring holidays.

    • In order to avoid fetching the data too frequently, the endpoint shouldn’t be called more than once a day. To achieve that, we need to implement caching. There are many different ways to do that. I chose to use Redis because it’s really powerful, fast, reliable and widely used in many applications.
    • The country code should be passed as a cli argument.
    • The output should contain the following information: Date, name, counties, types.

    Building CLI

    argparse is Python’s standard-library module for creating command-line tools. It provides all the features you need to build a simple CLI and would work for our example, but I find it a little complicated and not very Pythonic, which is why I chose another package called Click. It’s a Python package for creating beautiful command-line interfaces in a composable way with as little code as necessary, and it uses decorators to define commands.

    You can install it using pip:

    $ python -m pip install -U click
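    For comparison, the same two options written with plain argparse would look roughly like this (a sketch; parse_args is given an explicit argument list so the snippet can run without a real command line):

```python
import argparse

parser = argparse.ArgumentParser(
    description="Public holidays CLI (argparse version, for comparison)"
)
parser.add_argument(
    "--country-code", required=True,
    help="Country code. Complete list: https://date.nager.at/Country",
)
parser.add_argument(
    "--max-num", type=int, default=5,
    help="Max number of holidays returned.",
)

# parse a sample argv instead of the real sys.argv
args = parser.parse_args(["--country-code", "DE"])
print(args.country_code, args.max_num)  # DE 5
```

It works, but prompting, help formatting, and composing subcommands take noticeably more code than the Click version below.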

    To define a command using Click, the command needs to be a function wrapped with the @click.command() decorator:

    import click
    
    
    @click.command()
    def main():
        click.echo("It works.")
    
    
    if __name__ == "__main__":
        main()

    initial implementation of public holidays CLI

    We need to add two options to this command: the country code, which is a string, and the number of upcoming holidays to display, which is an integer with a default value of five. To do this, we use the @click.option decorator as follows:

    import click
    
    
    @click.command()
    @click.option(
        "--country-code",
        prompt="Country Code",
        help="Country code. complete list is here https://date.nager.at/Country."
    )
    @click.option(
        "--max-num",
        prompt="Max num of holidays returned",
        help="Max num of holidays returned.",
        default=5,
    )
    def main(country_code, max_num=5):
        """
        Simple CLI for getting the Public holidays of a country by country code.
        """
        click.echo(f"country_code: {country_code}, max_num: {max_num}")
        click.echo("It works.")
    
    
    if __name__ == "__main__":
        main()

    Now our command is ready. Let’s move on to the second step: using Docker.

    Using docker

    Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into containers that have everything the software needs to run. Using Docker helps deploy and scale applications into any environment and be sure that your code will run without any installation problem.

    To dockerize our command we need to create a Dockerfile. It’s a text document containing the instructions that Docker follows to build the Docker image (using docker build), which is then used to run the Docker container.

    Here is our Dockerfile:

    # use the python3.9 base image
    FROM python:3.9
    
    # define the working directory
    WORKDIR /code
    
    # copy requirements.txt from the local directory into the image
    COPY ./requirements.txt /code/requirements.txt
    
    # upgrade the packaging tools
    RUN pip install --upgrade pip setuptools wheel
    
    # install all dependencies from the requirements.txt file
    RUN pip install -r requirements.txt

    Dockerfile

    Using docker-compose

    Docker Compose is a tool that uses YAML files to define multi-container applications (like this one). Docker Compose can be used in all environments, but it’s not recommended for production; there it’s better to use Docker Swarm, which is very similar to Docker Compose, or, even better, Kubernetes.

    We need to run multiple containers because we need a caching layer in our command-line tool to cache the holidays for one day. We will use Redis for that, but let’s first set up the Redis container and make it available to our command-line tool.

    Here is the docker-compose.yaml file for Redis. It defines one service called redis and uses the redis:alpine image, which is available on Docker Hub:

    version: '3.7'
    
    services:
      redis:
        image: "redis:alpine"

    We also need to add our command-line tool to the docker-compose file so we can run the command and start the Redis container with a single command. Here is how we can do it:

    version: '3.7'
    
    services:
      command:
        # build the local Dockerfile image and use it here
        build: .
        
        # command to start the container
        command: python main.py --country-code DE --max-num 5
        
        # volume to mount the code inside the docker-container
        volumes:
          - .:/code
        
        # this container should not start until redis container is up
        depends_on:
          - redis
      redis:
        image: "redis:alpine"

    Here we added another service called command, which builds the image from the Dockerfile we implemented previously; its startup command runs python main.py with the CLI options. It also declares depends_on: redis, which means the command container will not start until the redis container is up.

    Now you should be able to run the command and start the Redis server by just running:

    docker-compose up

    Configuring the cache

    The next step is to configure the caching layer and start using it in our command-line tool. First, install the redis package using pip. Also make sure to add it to requirements.txt.

    $ python -m pip install redis

    Then create a new file called cache.py. It should contain the following code:

    from redis import StrictRedis
    
    
    # settings from url redis://{host}:{port}
    # the host is the name of redis service we defined in the docker-compose file
    REDIS_URL = 'redis://redis:6379'
    
    
    redis_conn = StrictRedis.from_url(REDIS_URL)
    

    cache.py

    This creates a single, module-level connection to the Redis server (a singleton-style pattern): we want exactly one connection that is re-used everywhere, instead of creating a new connection each time we talk to the Redis server.
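    The same idea in miniature, with a hypothetical stand-in connection class instead of StrictRedis, showing that every caller shares the one module-level instance:

```python
class FakeConnection:
    """Hypothetical stand-in for StrictRedis: counts instances created."""
    instances = 0

    def __init__(self):
        FakeConnection.instances += 1


# created once at module import time, then re-used everywhere
conn = FakeConnection()


def get(key):
    # every caller goes through the same shared connection object
    return conn


print(get("a") is get("b"))          # True: same object every time
print(FakeConnection.instances)      # 1: only one connection was made
```

No matter how many helper functions use the connection, only one is ever opened, which is exactly what we want for Redis.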

    The next step is to create some helper functions to handle the common use cases of the cache like caching some data, retrieving some data from cache and also invalidating the cache if needed.

    import json
    from datetime import datetime
    from typing import Optional
    
    from redis import StrictRedis
    
    
    # settings
    REDIS_URL = 'redis://redis:6379'
    LAST_UPDATED_AT_KEY_POSTFIX = '_last_updated_at'
    DEFAULT_CACHE_IN_SECONDS = 86400  # 1 day
    
    
    redis_conn = StrictRedis.from_url(REDIS_URL)
    
    
    def get_data_from_cache(key: str) -> Optional[dict]:
        """
        retrieve data from cache for the given key
        """
        data = redis_conn.get(key)
        if data:
            return json.loads(data)
    
    
    def save_data_to_cache(key: str, data: dict, expire_in_sec: int = DEFAULT_CACHE_IN_SECONDS) -> None:
        """
        Save data to cache
        """
        redis_conn.set(key, json.dumps(data), ex=expire_in_sec)
        redis_conn.set(key + LAST_UPDATED_AT_KEY_POSTFIX, datetime.now().strftime('%Y-%m-%d %H:%M:%S'), ex=expire_in_sec)
    
    
    def invalidate_cache_for_key(key: str) -> None:
        """
        invalidate cache for the given key
        """
        redis_conn.delete(key)
        redis_conn.delete(key + LAST_UPDATED_AT_KEY_POSTFIX)
    

    cache.py

    Now the caching layer is ready for us to use in the command line. As you can see, all we need to do to save some data to the cache is to import and use the save_data_to_cache function, without worrying much about which caching solution we use or how to connect to it. Also, if we decide for some reason to change the caching backend, to use Memcached for example, all we need to do is change the cache.py file and make sure our helper functions still work. No changes are needed anywhere else in the application.
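    To illustrate that point, here is a hypothetical drop-in replacement for cache.py that keeps the same three helper functions but stores everything in a process-local dict instead of Redis (a sketch for illustration, not production code):

```python
import json
import time
from typing import Optional

DEFAULT_CACHE_IN_SECONDS = 86400  # 1 day

# key -> (expiry timestamp, serialized data)
_store: dict = {}


def get_data_from_cache(key: str) -> Optional[dict]:
    """Retrieve data from cache for the given key, honouring expiry."""
    entry = _store.get(key)
    if entry and entry[0] > time.time():
        return json.loads(entry[1])
    return None


def save_data_to_cache(key: str, data: dict,
                       expire_in_sec: int = DEFAULT_CACHE_IN_SECONDS) -> None:
    """Save data to cache with an expiry time."""
    _store[key] = (time.time() + expire_in_sec, json.dumps(data))


def invalidate_cache_for_key(key: str) -> None:
    """Invalidate cache for the given key."""
    _store.pop(key, None)


save_data_to_cache("2022;DE", {"name": "German Unity Day"})
print(get_data_from_cache("2022;DE"))  # {'name': 'German Unity Day'}
```

Because the function names and signatures match, the rest of the application would keep working unchanged (though of course a dict is per-process and not shared between containers the way Redis is).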

    Holidays Client

    The next step is to implement the Holidays Client that will allow us to fetch holidays from an external API.

    First we need to install requests. It’s a very common HTTP library. Requests allows you to send HTTP/1.1 requests extremely easily. There’s no need to manually add query strings to your URLs, or to form-encode your PUT & POST data.

    To install it,

    $ python -m pip install requests
    

    Then create a new file called client.py. We will first implement the base class HolidaysClient, which defines all the attributes and methods a client needs. No matter which backend we use to fetch the holidays, it will always use this class as a parent and implement the get_holidays method.

    import abc
    
    
    class HolidaysClient(abc.ABC):
        """
        Abstract class to be used as a base class to any endpoint for getting the public holidays.
        """
        # Base url for the external endpoint
        BASE_URL = None
    
        @abc.abstractmethod
        def get_holidays(self, country_code: str, year: int) -> dict:
            """
            getting the holidays from external API by country code and year
            """
            raise NotImplementedError
    

    Then we need to implement the NagerHolidaysClient, which inherits from HolidaysClient and implements the get_holidays method using the date.nager.at PublicHolidays API.

    import abc
    
    import requests
    
    
    class HolidaysClient(abc.ABC):
        """
        Abstract class to be used as a base class to any endpoint for getting the public holidays.
        """
        # Base url for the external endpoint
        BASE_URL = None
    
        @abc.abstractmethod
        def get_holidays(self, country_code: str, year: int) -> dict:
            """
            getting the holidays from external API by country code and year
            """
            raise NotImplementedError
    
    
    class NagerHolidaysClient(HolidaysClient):
        """
        Nager client to get holidays from date.nager.at
        """
        # base url of nager client
        BASE_URL = 'https://date.nager.at'
    
        def get_holidays(self, country_code: str, year: int) -> dict:
            """
            fetch holidays from date.nager.at using PublicHolidays API
            """
            url = f'{self.BASE_URL}/api/v3/PublicHolidays/{year}/{country_code}'
            response = requests.get(url)
            response.raise_for_status()
            response_data = response.json()
            return response_data
    

    Now the client should be ready. The next step is adding some use cases and utils to help us connect everything together.

    Implementing the core logic of Holidays CLI

    First, we need to implement a function that displays the results in the console in a human-readable way. It’s better to implement this in a separate function because we expect the output format to change often, and in the future we might add other ways to show the results, for example writing them to a file or emailing customers the upcoming holidays.

    For now, we will keep it very simple. A plain print to the console will be enough. The code should look something like this:

    from typing import List
    
    import click
    
    
    def show_results(results: List[dict]) -> None:
        """
        Given a list of objects, it will print these data to the console in a human-readable way. 
        """
        click.echo('result:')
        click.echo('----------------')
        for idx, item in enumerate(results):
            click.echo(f'{idx + 1}- {item}')
        click.echo('----------------')
    

    The output should be something like this:

    result:
    ----------------
    1- {'date': '2022-08-15', 'name': 'Assumption Day', 'counties': ['DE-SL'], 'types': ['Public']}
    2- {'date': '2022-09-20', 'name': "World Children's Day", 'counties': ['DE-TH'], 'types': ['Public']}
    3- {'date': '2022-10-03', 'name': 'German Unity Day', 'counties': None, 'types': ['Public']}
    4- {'date': '2022-10-31', 'name': 'Reformation Day', 'counties': ['DE-BB', 'DE-MV', 'DE-SN', 'DE-ST', 'DE-TH', 'DE-HB', 'DE-HH', 'DE-NI', 'DE-SH'], 'types': ['Public']}
    5- {'date': '2022-11-01', 'name': "All Saints' Day", 'counties': ['DE-BW', 'DE-BY', 'DE-NW', 'DE-RP', 'DE-SL'], 'types': ['Public']}

    Then we need to implement the caching logic: if there is no data available in the cache, we call the HolidaysClient to fetch the data and save it to the cache. Here is the code to do this:

    from typing import List, Optional
    
    import requests
    from datetime import datetime
    
    from cache import get_data_from_cache, save_data_to_cache
    from client import NagerHolidaysClient
    
    
    def get_next_occurring_holidays(data: List[dict], max_num: int = 5) -> List[dict]:
        """
        parse Holidays API response and get next n holidays
        :param data: Holidays API response
        :param max_num: number of holidays in the response
        :return: list of holidays
        """
        # get today's date
        today_date = datetime.now().date()
        
        # init the results
        result = []
        
        # for each holiday in the holiday api response
        for holiday in data:
            
            # break if we already reached the required number of holidays in the result
            if len(result) >= max_num:
                break
            
            # get the date of the current holiday
            holiday_date = datetime.strptime(holiday['date'], '%Y-%m-%d').date()
            
            # skip if the holiday date is in the past
            if today_date > holiday_date:
                continue
            
            # save the result
            result.append({
                'date': holiday['date'],
                'name': holiday['name'],
                'counties': holiday['counties'],
                'types': holiday['types'],
            })
    
        return result
    
    
    def get_next_holidays_by_country_code(country_code, max_num=5, year=None) -> "tuple[Optional[str], Optional[List[dict]]]":
        """
        given a country code and a year, it gets holidays from external API (or cache).
        :param country_code: 2 letters country code. case-insensitive
        :param max_num: number of holidays we want to get
        :param year: the year we want to get holidays for
        :return: error string if any error happens and list of results if there is no error
        """
        # caching key should be something like this `2022;DE`
        cache_key = f'{year};{country_code}'
        
        # check if the data is already cached
        data_from_cache = get_data_from_cache(cache_key)
        if data_from_cache:
            # if the data is in the cache then we don't need to call the external API
            print(f'Getting data from cache for country: {country_code} and year: {year}')
            result = get_next_occurring_holidays(data_from_cache, max_num)
            return None, result
    
        try:
            # getting the holidays from Nager Holidays API
            response_data = NagerHolidaysClient().get_holidays(country_code, year)
        except requests.exceptions.HTTPError:
            return 'HTTPError', None
        except requests.exceptions.JSONDecodeError:
            return 'JSONDecodeError', None
    
        print(f'saving data to cache for country: {country_code} and year: {year}')
        save_data_to_cache(cache_key, response_data)
        result = get_next_occurring_holidays(response_data, max_num)
        return None, result
    

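    To see the filtering step in isolation, here is a minimal, self-contained version of the get_next_occurring_holidays logic run against hand-made data (the dates are deliberately far in the past and future so the result is stable):

```python
from datetime import datetime
from typing import List


def next_occurring(data: List[dict], max_num: int = 5) -> List[dict]:
    # minimal version of get_next_occurring_holidays: keep only
    # upcoming dates and stop once max_num results are collected
    today = datetime.now().date()
    result = []
    for holiday in data:
        if len(result) >= max_num:
            break
        if datetime.strptime(holiday["date"], "%Y-%m-%d").date() < today:
            continue
        result.append(holiday)
    return result


# hand-made sample: one past holiday, two future ones
sample = [
    {"date": "2000-01-01", "name": "Past Holiday"},
    {"date": "2999-12-24", "name": "Future Holiday A"},
    {"date": "2999-12-25", "name": "Future Holiday B"},
]
print(next_occurring(sample, max_num=1))
```

Running it drops the past holiday and returns only the first upcoming one, which is exactly how the real function trims the API response.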
    Finally, we need to use all of this in the main.py command file, so its final version should look like this:

    from datetime import datetime
    
    import click
    
    from usecases import get_next_holidays_by_country_code
    
    
    __author__ = "Ramadan Khalifa"
    
    from utils import show_results
    
    
    @click.command()
    @click.option(
        "--country-code",
        prompt="Country Code",
        help="Country code. complete list is here https://date.nager.at/Country."
    )
    @click.option(
        "--max-num",
        prompt="Max num of holidays returned",
        help="Max num of holidays returned.",
        default=5,
    )
    def main(country_code, max_num=5):
        """
        Simple CLI for getting the Public holidays of a country by country code.
        """
        # initialize the target year with the current year
        target_year = datetime.now().year
        results = []
        
        # loop until we reach our target number of holidays
        while len(results) < max_num:
            error, next_result = get_next_holidays_by_country_code(
                country_code, max_num=max_num - len(results), year=target_year
            )
            
            # show the error if there is any
            if error:
                click.echo(error)
                return
            
            # next
            results += next_result
            target_year += 1
    
        # print results to the user
        show_results(results)
    
    
    if __name__ == "__main__":
        main()
    

    The whole project is available on GitHub.

    Conclusion

    This was a simple and straightforward project about holidays. We learned how to have independent modules for our project to help us extend our project in the future if needed. We also learned about docker, docker-compose and redis. We will dive deeper into those tools and will explore more in the upcoming tutorials. Please hit subscribe to get updates and new tutorials once they are available.