FOSS IT services: Ansible, Security, DevOps, GNU/Linux, CI/CD, education and more

Rocket science precision for less complex tasks.

GET A QUOTE

Description

    Grow your potential

    with the help of our expertise

    Education

    Learn about security, system administration, DevOps and more with us.

    High availability

    Scale services or migrate servers and cloud providers with limited downtime.

    IT operations

    Get help with GNU/Linux, cloud, DevOps, system administration, CI/CD, servers & automation.

    Cybersecurity

    Improve security with data and traffic encryption, restricted access & firewalls.

    Our know-how can improve your skills and business

    And you choose to what extent

    One-time consultation When a second opinion is needed to confirm that a solution is the best fit for the problem, or when you are looking for a possible solution to one.

    Multiple discussions For more complex issues or integrations that might require in-depth discussions about the available options, their benefits and downsides, research and recommendations.

    Education When you need dedicated training with practical experiments and examples, extended beyond the detailed explanation delivered with every task or consultation that we provide.

    One-time task For simpler or straightforward tasks where you might need external know-how.

    Complex integrations In cases where more advanced setup is involved, with multiple components that require fine-tuning, in-depth knowledge of software behaviour and configuration.

    Battle-tested technology

    Proven approach, concepts and tools for infrastructure and automation.

    Ansible automation

    Ansible is software for automating configuration management, application deployment, cloud resource provisioning, entire operating systems, and infrastructure orchestration.

    All configuration can be managed under version control (e.g. Git with GitHub, GitLab or Bitbucket), which fits nicely into the Infrastructure as Code (IaC) paradigm.

    Ansible is free and open source software, developed by Red Hat with the help of a dedicated community.

    Red Hat also offers an end-to-end solution on top of Ansible, called Ansible Automation Platform. It is aimed at enterprise clients and is provided under a subscription.

    Even without the enterprise package, Ansible is robust, reliable and a great option for managing automation. Because Ansible is open source software, it is improved by a vast community that refines its security and flexibility over time, making it suitable even at the enterprise level.

    The very basics of Ansible

    Ansible has a scalable, agentless architecture. Remote machines are managed by a control node over SSH. Managed nodes are listed in an inventory file, and the control node runs commands against them. Almost any computer can be a control node; you just have to install Ansible on it.
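
    As an illustration, a minimal inventory in YAML format could look like the sketch below; the group and host names are assumptions for the example.

        # Sketch of a minimal Ansible inventory file (inventory.yml).
        # Group and host names are illustrative assumptions.
        all:
          children:
            webservers:
              hosts:
                web1.example.com:
                web2.example.com:
            databases:
              hosts:
                db1.example.com:
                  ansible_user: admin   # per-host variable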

    There are different ways to run Ansible: for multiple tasks via Ansible playbooks, or for a single task directly on the command line through ad hoc commands. Ansible playbooks are YAML files with groupings of tasks that are executed against the managed nodes over SSH, as sketched below.
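
    A minimal playbook sketch, reusing the inventory above; the package name and group are assumptions for the example.

        # site.yml: ensure a web server is installed and running.
        - name: Configure web servers
          hosts: webservers        # group from the inventory file
          become: true             # escalate privileges on the managed nodes
          tasks:
            - name: Install nginx
              ansible.builtin.package:
                name: nginx
                state: present

            - name: Start nginx and enable it at boot
              ansible.builtin.service:
                name: nginx
                state: started
                enabled: true

        # Run from the control node:
        #   ansible-playbook -i inventory.yml site.yml
        # The ad hoc equivalent of a single task:
        #   ansible webservers -i inventory.yml -m ansible.builtin.ping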

    In most cases, all that is needed is Python on the managed nodes and Ansible on the control node, which also requires Python.

    In general, Ansible is a great tool for scaling automation and improving security across your infrastructure.

    CI/CD automation

    Continuous integration, continuous delivery and continuous deployment, taken together, are an optimization effort in the software development cycle that aims to reduce errors, increase productivity and produce more stable software that is released faster and reaches users sooner.

    CI/CD is part of DevOps and relies heavily on cybernation, which is just good old automation, blended with the practices of continuous integration, continuous delivery and continuous deployment.

    CI/CD fundamentals

    The development team adds new code more often and in smaller fragments, and sets up pipelines (automated execution environments) in the software source code repository.

    CI/CD tools run those pipelines (stored in YAML files) and, without human intervention, perform unit testing, integration testing and other forms of automated testing.

    When the source code is ready for the production environment, it is tested and built entirely automatically. Both pipelines and source code are kept under version control for easy management.
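
    As a hedged sketch, a simple pipeline in GitLab CI (one of the implementations listed below) might look like this; the container image, job names and commands are assumptions for the example.

        # .gitlab-ci.yml: test, then build, with no human intervention.
        stages:
          - test
          - build

        unit-tests:
          stage: test
          image: python:3.12            # container the job runs in (assumption)
          script:
            - pip install -r requirements.txt
            - python -m pytest          # automated unit/integration tests

        build-package:
          stage: build
          image: python:3.12
          script:
            - pip install build
            - python -m build           # build the release artifact
          artifacts:
            paths:
              - dist/                   # keep the built artifact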

    There is no standard for CI/CD tools and pipelines, and as a result there is no straightforward portability between implementations (e.g. GitLab CI, GitHub Actions, Jenkins, CircleCI, Bitbucket Pipelines, Buildbot, Azure DevOps).

    It is important to choose the implementation wisely early on, to avoid the duplicated effort of re-implementing pipelines in another variant.

    Implementations vary in the type of installation as well: self-hosted, cloud-based, cloud-native, or bundled with the version control software or provider.

    CI/CD is a great way to improve the efficiency of the development process and to guarantee the robustness of the final product, both of which greatly affect the expenses and revenue of a company.

    FOSS: Free and open source software

    Software is one of the greatest tools ever invented. As such, it can and must be examined, tinkered with, studied and improved. And the best way to do this is to have it done by as many people as possible, as with other tools, knowledge and science.

    That is exactly what free and open source software does through licensing. Different software licenses grant different rights and obligations. Some licenses allow users to do practically anything, while others require that users do not prevent other users from benefiting fully from the granted rights.

    Open source is more technically oriented, while free software leans more toward the ethical side of the matter and the rights of software users.

    In essence, free software with its licensing guarantees that software users will always be able to run it, change/edit it and share it without limitations.

    Open source software advocates believe that, because of the access to the source code and the distributed model of development, the software produced this way is more stable, secure and generally better. And in most cases it really is.

    In general, free and open source software (FOSS) can greatly improve security, performance, scalability and high availability, and can speed up software development. Because of its licensing and granted rights, free software (as well as open source) can also reduce costs and increase the profitability of a business.

    When using FOSS, it is very important to comply with its licensing. For example, some licenses might not be suitable for certain corporate or enterprise solutions, because they might require releasing any modifications and improvements made in-house. It is a good idea to consult an expert on licensing when developing new software on top of free and open source software.

    In most cases it is safe to use free and open source software, and your business will benefit greatly if you build your infrastructure, and your business in general, around it.

      GNU/Linux servers

      GNU/Linux is a computer operating system that is free and open source software. It is Unix-like, POSIX-compliant, and available for desktop, server and embedded systems and architectures, and in some cases even for mobile devices.

      The development of the core components began about 40 years ago, and GNU/Linux as a bundle has been developed for more than 30 years.

      The GNU/Linux operating system is stable, reliable, secure and flexible. It is mature enough even for enterprise solutions. In fact, it powers more than 90% of the Internet and cloud computing.

      GNU/Linux comes in many variations, called distributions, which organise the system's internal configuration, setup, available software packages, and software installation and updates. In most cases software is managed by a package manager, which is used to install updates, new software and security patches.
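
      For illustration, and tying back to the Ansible section above, applying all pending updates on Debian-based nodes could be automated with a task like the following sketch; the group name and package manager are assumptions for the example.

          # Sketch: apply updates and security patches via the apt package manager.
          - name: Apply all pending updates
            hosts: all
            become: true
            tasks:
              - name: Refresh the package cache and upgrade everything
                ansible.builtin.apt:
                  update_cache: true    # equivalent of 'apt update'
                  upgrade: dist         # equivalent of 'apt dist-upgrade'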

      Some of the most popular distributions are Debian, Ubuntu, Fedora, CentOS, AlmaLinux, Gentoo, Arch and Slackware. On the fully-free front (GNU/Linux-libre) major examples are Trisquel, Parabola and Guix.

      GNU/Linux is often called just Linux, which is not completely accurate, but it really depends on whom you ask. Knowing the operating system simply as Linux, one would probably wonder what exactly GNU and Linux mean.

      GNU and Linux short history

      It all started in 1983 with the GNU Project, whose goal was to create a fully free operating system. By 1991 most of the components were ready, except the kernel of the OS, which was not yet mature enough. In 1991 a kernel project named Linux was publicly announced; however, the first versions were not released under a free and open source software license. That changed in 1992, and GNU and Linux were bundled together into different distributions.

      DevOps

      DevOps is an effort to improve the software development process by increasing efficiency, reducing delivery or release times, enhancing security and creating a better software product in the long run.

      It is a continuous process with built-in feedback, in which automation plays a key role.

      The other important thing in this methodology of development is an organizational culture where the development (Dev) and operations (Ops) teams work and communicate closely and effectively as a single unit. In the past those roles were separated into dedicated teams; advocates of the DevOps movement claim this is ineffective and that it is better to have a single team whose engineers know and work on both development and operations.

      This, however, might not always be as effective as claimed, because practical experience shows that not all developers are good at operations topics, and not all operations engineers are good at programming.

      History has shown that deep specialization is often more effective than broader knowledge of many topics. In spite of that, DevOps as a methodology has its benefits, and organizations should adopt it, in part or in full, to improve their processes.

      A third thing that is very important in DevOps is the tools. Usually the goal is to move operations into the Dev realm, so they can benefit from the flexibility and tools available there. For that, configuration, setup, deployments and everything related to infrastructure are treated as software or data and kept under version control: the so-called Infrastructure as Code (IaC).

      All this is achieved through automation and software suitable for the task at hand: Ansible, CI/CD pipelines, Pulumi, Buildbot, Terraform, Jenkins, GitLab CI, GitHub Actions, Bitbucket Pipelines and many more. Even good old scripting, used before the DevOps era, like Bash, Python and Perl, can help and is often found in existing implementations.

      When security testing is added early, in the design phase and throughout the development process, an extended term is used: DevSecOps. In this approach, software is statically and dynamically tested for security problems through automation and CI/CD pipelines. DevSecOps drastically increases the security of software by implementing security testing right from the start, early in the development cycle.

      DevOps by itself increases security through automation, by limiting inefficiency and reducing manual tasks and human error. DevSecOps goes a step further by adding static application security testing (SAST) and dynamic application security testing (DAST).

      The CI/CD pipelines in DevSecOps scan, run and test the code for known vulnerabilities through CVE databases and projects like OWASP.
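
      As a hedged example, in GitLab CI static scanning can be enabled by including GitLab's bundled SAST template in the pipeline; the sketch below assumes a GitLab setup.

          # .gitlab-ci.yml: add static application security testing (SAST) jobs.
          include:
            - template: Security/SAST.gitlab-ci.yml   # GitLab's bundled SAST jobs

          stages:
            - test    # the template attaches its scanning jobs to this stage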

      The key components to effective DevOps and DevSecOps development lifecycle are:

      • communication and collaboration between team members
      • automation
      • proper toolset for the task at hand
      • using the feedback from tools
      • following the procedures
      • using the feedback of software end users

      Having experienced operations engineers and system administrators who understand the fundamentals of infrastructure is crucial for DevOps teams and for implementing CI/CD pipelines.

      DevOps is generally considered a single organizational unit where all members of the team mix the roles of developers, operation engineers and/or system administrators.

      However, keeping highly specialized roles, like developers and operations engineers, working closely together, with the rest of the DevOps methodology intact, produces much better results.

        ITOps

        Information technology operations (ITOps) is the process of keeping IT infrastructure working with close to zero downtime.

        This might include setup and deployment of infrastructure and services, maintenance, monitoring, hardware replacement, software installation, updates, upgrades, backups and support. Depending on whom you ask, it might include other processes and topics or exclude some of those listed.

        In general, the operations team monitors dedicated hardware servers, virtual private servers (VPS), cloud servers and other appliances for effective resource usage.

        Additionally, for hardware components, an attempt is made to predict and prevent uncontrolled hardware failures. By monitoring key thresholds in hard drives, memory banks, network cards and other components, ITOps teams are able to plan a replacement before a critical event.

        Across virtual, hardware and cloud infrastructure, the resources available to servers, such as RAM, CPU, free disk space and network bandwidth, are monitored in an attempt to predict critical overuse of those resources and act before the event.
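
        As a sketch of such a threshold, a disk-space alert expressed as a Prometheus alerting rule could look like the following; the metric names assume the standard node_exporter is deployed, and the threshold is an illustrative assumption.

            # Fire a warning when a filesystem drops below 10% free space.
            groups:
              - name: capacity
                rules:
                  - alert: DiskSpaceLow
                    expr: node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.10
                    for: 15m                    # condition must hold for 15 minutes
                    labels:
                      severity: warning
                    annotations:
                      summary: "Low disk space on {{ $labels.instance }}"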

        ITOps teams try to guarantee close to 100% uptime of the infrastructure, so that it does not affect the operations of the business or the organization using it.

        Effective usage of resources is also part of the tasks of IT operations teams. This helps businesses keep infrastructure costs within an acceptable budget.

        Additionally, servers and infrastructure that are operated in a cost-effective manner greatly help in protecting against outage attacks (e.g. DoS, DDoS) and are a way to assure maximum uptime.

        IT operations engineers and system administrators are also responsible for software updates, encryption (TLS/SSL), firewalls and other critical security setup at the operating system or cloud level.
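
        As a minimal sketch, a restrictive firewall baseline could be automated with Ansible's ufw module; the example assumes Ubuntu nodes and the community.general collection being installed.

            # Sketch: deny inbound traffic by default, allowing only SSH.
            - name: Basic firewall hardening
              hosts: all
              become: true
              tasks:
                - name: Allow SSH before enabling the firewall
                  community.general.ufw:
                    rule: allow
                    port: "22"
                    proto: tcp

                - name: Enable ufw and deny all other inbound traffic
                  community.general.ufw:
                    state: enabled
                    policy: deny
                    direction: incoming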

        ITOps knowledge and experience are crucial for DevOps and DevSecOps teams, where infrastructure is treated as code (IaC). Together with automation tools like CI/CD pipelines, Ansible, Pulumi and others, the effectiveness of operations and security is increased by limiting human intervention and errors.

        Operations engineers are also responsible for planning and implementing highly available and scalable infrastructure through hardware, software and cloud techniques. Knowing well the behaviour of the software, the OS and the underlying technology helps a lot in planning and implementing load balancers and backend replicas effectively.

          Upgrade to efficiency