Welcome to the dynamic cosmos of networking and connectivity, where the quest for resilient yet budget-friendly solutions takes centre stage. Today, we’re not merely delving into the technical aspects; we’re embarking on a journey into the avant-garde of connectivity solutions – the fusion of the sleek MikroTik LtAP mini-LTE Kit and the powerful OpenVPN.
In this post, brace yourself to unveil the covert potential hidden within the MikroTik LtAP mini-LTE Kit. We’re not merely scratching the surface of its features; we’re unleashing its power and flexibility to provide solutions beyond its initial design. Get ready for a journey where the possibilities of this kit extend far beyond the expected.
What is Out-of-Band (OOB) Management? Now, picture this scenario: You’re overseeing a network device situated miles away from your headquarters. Through the wonders of remote protocols like SSH, you’ve been managing it seamlessly. Suddenly, the device throws a fit, and you are cut off. Oh no! Cue the need for a physical sprint to the device, employing console access to assess the damage – time wasted, money drained, and downtime soaring.
But hold on! What if I told you there’s a smoother way to confirm the device’s status without breaking a sweat? Yes, you guessed it right – Out-of-Band Management! It’s not just a term; it’s the networking world’s century-defining invention, making the need for physical access during every failure a thing of the past.
While In-Band Management operates through the LAN using conventional remote-access methods like SSH, the web, or Telnet, Out-of-Band Management offers a secure alternative for administering connected devices and IT assets without relying on the corporate LAN – through a separate connection.
The Internet of Things (IoT) has revolutionized the way we collect, transmit, and process data from a wide range of applications. LoRa (Long-Range) technology has emerged as a popular choice for building low-power, long-range wireless IoT networks.
ChirpStack is a powerful open-source LoRaWAN Network Server that provides the infrastructure for managing LoRa IoT devices, collecting their data from gateways, and forwarding it to whatever visualization software the client prefers.
RENU is currently one of the providers of this service in Uganda, with more than six LoRaWAN gateways and more than 12 sensors connected in different regions.
Deploying a Django application is a crucial step in taking your web project from development to production. While Django’s built-in development server is great for testing and debugging, it’s not suitable for handling the demands of a live production environment. To ensure your Django application can handle real-world traffic and serve as a reliable, performant, and secure web application, you’ll need to set up a production-ready server stack.
In this guide, we’ll walk you through the process of deploying a Django application on an Ubuntu server using Gunicorn as the application server, Apache as the reverse proxy server, and MySQL as the database management system. This stack is a popular choice for deploying Django applications due to its stability, scalability, and security features.
Setting up the software
1. Update Ubuntu software repository
# sudo apt update
2. Install
i. apache2 – Serve our website
# sudo apt install apache2
ii. mysql-server and libmysqlclient-dev – For database
# sudo apt install mysql-server libmysqlclient-dev
v. virtualenv – Virtual environment for our django application
# sudo apt install virtualenv
Clone your django repo
$ git clone <link to the repo>
$ cd <project folder>
$ git checkout <active branch>
Installing Python libraries
1. Create virtual env (e.g env)
$ python3 -m venv env
2. Activate virtualenv
$ source env/bin/activate
3. Install project dependencies and packages
$ pip install -r requirements.txt
4. Install mysql client for python
$ pip install mysqlclient
5. Install gunicorn, the WSGI server that will run our Python code
$ pip install gunicorn
6. Install WhiteNoise to serve our static files
$ pip install whitenoise
Setting up the firewall
We’ll disable access to the server on all ports except 8000 and OpenSSH for now. Later on, we’ll remove this and give access to all the ports that Apache needs.
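Assuming UFW (the firewall front end that ships with Ubuntu), the commands would look something like this:
# sudo ufw default deny incoming
# sudo ufw allow OpenSSH
# sudo ufw allow 8000
# sudo ufw enable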
Setting up the database
2. Log in to the MySQL shell (e.g., with sudo mysql -u root -p) and create a database and a user for the application.
mysql> CREATE DATABASE `<app database name>` CHARACTER SET 'utf8';
mysql> CREATE USER '<app user name>'@'localhost' IDENTIFIED BY '<app user password>';
mysql> GRANT ALL PRIVILEGES ON `<app database name>`.* TO '<app user name>'@'localhost';
mysql> quit;
Setting up the Django project
In your project folder, modify the settings.py file
1. Include the newly created database access configurations.
2. Include the IP address of your server and domain name (if any) in the allowed hosts.
3. Include CORS_ALLOWED_ORIGINS, CSRF_TRUSTED_ORIGINS, and CORS_ORIGIN_WHITELIST if you have a domain name set already.
4. In your installed apps, include ‘whitenoise.runserver_nostatic’ just above ‘django.contrib.staticfiles’.
5. In your MIDDLEWARE, include 'whitenoise.middleware.WhiteNoiseMiddleware' just below 'django.middleware.security.SecurityMiddleware'.
DEBUG = False # We have set DEBUG to False since we're in production
ALLOWED_HOSTS = ['127.0.0.1', 'domain name', 'ip-address'] # ip address and domain name
INSTALLED_APPS = [
    'whitenoise.runserver_nostatic',
    'django.contrib.staticfiles',
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',
]

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'OPTIONS': {
            'sql_mode': 'traditional',
        },
        'NAME': '<app database name>',
        'USER': '<app username>',
        'PASSWORD': '<app user password>',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}

CORS_ALLOWED_ORIGINS = [
    'http://localhost:8000',
    'domain name',
]

CSRF_TRUSTED_ORIGINS = [
    'http://localhost:8000',
    'domain name',
]

CORS_ORIGIN_WHITELIST = [
    'http://localhost:8000',
    'domain name',
]

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media/')
6. Run the database migrations to create all the required tables in the new database, and collect all static files into a static folder under the Django project directory (see the commands below).
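The corresponding commands, run inside the project folder with the virtual environment still active, are the standard Django management commands; the last line is a quick foreground Gunicorn test run so the app is listening on port 8000 for the next step:
$ python manage.py migrate
$ python manage.py collectstatic
$ gunicorn <Project name>.wsgi:application --bind <server ip>:8000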
2. Test the app in your browser on the server IP and port, <server ip>:8000, e.g. http://137.63.128.239:8000. (Notice that you can access static files because we set up WhiteNoise.)
3. Kill gunicorn and exit the virtual environment.
4. Let’s daemonize Gunicorn
i. Open a new gunicorn.service file using any text editor you’re comfortable with.
# vim /etc/systemd/system/gunicorn.service
[Unit]
Description=Gunicorn instance to serve the django app
After=network.target
[Service]
# Replace with your system user
User=root
# Replace with your system group
Group=root
WorkingDirectory=/home/<user>/<project folder>/
# ExecStart points to the start-up script created in step 5 below
ExecStart=/home/<user>/<project folder>/<virtual env>/bin/gunicorn_start.sh
[Install]
WantedBy=multi-user.target
# This is the systemd unit file, saved as /etc/systemd/system/gunicorn.service
5. Create a script to start the server
i. Create gunicorn_start.sh in your virtualenv bin directory
ii. Edit the file as follows:
vim /home/<user>/<project folder>/<virtualenv>/bin/gunicorn_start.sh
#!/bin/bash
#This is the Gunicorn script used to automatically launch the application through Gunicorn
NAME="django-application"
#path to the folder containing the manage.py file
DIR=/home/<user>/<project folder>
# Replace with your system user
USER=root
# Replace with your system group
GROUP=root
WORKERS=3
#bind to port 8000
BIND=<server ip>:8000
# Put your project name
DJANGO_SETTINGS_MODULE=<Project name>.settings
DJANGO_WSGI_MODULE=<Project name>.wsgi
LOG_LEVEL=error
cd $DIR
#activating the virtual environment
source /home/<user>/<project folder>/<virtualenv>/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DIR:$PYTHONPATH
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $WORKERS \
--user=$USER \
--group=$GROUP \
--bind=$BIND \
--log-level=$LOG_LEVEL \
--log-file=-
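These housekeeping steps are not shown above, but the script must be executable and systemd must reload its unit files before the service can be started:
# chmod +x /home/<user>/<project folder>/<virtualenv>/bin/gunicorn_start.sh
# sudo systemctl daemon-reload
# sudo systemctl enable --now gunicorn
# sudo systemctl status gunicorn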
If everything works fine, you should be able to access your Django application on <server ip> or <domain name (if you registered one)>
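The Apache reverse-proxy configuration itself is not included in this excerpt; assuming mod_proxy, a minimal virtual host that forwards requests to Gunicorn on port 8000 could look like the sketch below (the site file name, domain, and addresses are placeholders to adapt):
# sudo a2enmod proxy proxy_http
# sudo vim /etc/apache2/sites-available/django-app.conf
<VirtualHost *:80>
    ServerName <domain name>
    ProxyPreserveHost On
    ProxyPass / http://<server ip>:8000/
    ProxyPassReverse / http://<server ip>:8000/
</VirtualHost>
# sudo a2ensite django-app.conf
# sudo systemctl reload apache2
# sudo ufw allow 'Apache Full'
# sudo ufw delete allow 8000
The last two UFW commands follow up on the earlier promise to open the ports Apache needs and close direct access to port 8000.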
Setting up SSLs for our domain
1. To set up SSL/TLS certificates for a domain (e.g., uc23.devops.renu.ac.ug), we’re going to use Certbot (a tool for obtaining and renewing Let’s Encrypt SSL certificates).
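The commands would be along these lines, assuming Certbot’s Apache plugin:
# sudo apt install certbot python3-certbot-apache
# sudo certbot --apache -d uc23.devops.renu.ac.ug
# sudo certbot renew --dry-run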
Creating a Pipeline (CI/CD) on GitLab, using Docker on an Ubuntu Server.
Introduction:
Continuous Integration (CI):
Continuous Integration is the practice of frequently and automatically integrating code changes from multiple contributors into a shared repository. Developers regularly submit their code changes, which are then automatically built, tested, and integrated into the main codebase. This helps identify and address integration issues early in the development process.
Continuous Delivery (CD):
Continuous Delivery is an extension of CI and focuses on automating the delivery of software to various environments, such as testing, staging, and production. The goal is to ensure that the software is always in a deployable state, allowing for reliable and efficient releases. CD includes automation of testing, deployment, and monitoring, enabling rapid and consistent software delivery.
Docker:
Docker is a technology that packages applications and all their necessary components into self-contained containers, making it easy to develop, test, and run applications consistently across different environments. It simplifies application management and promotes consistency and portability.
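Putting these pieces together, a minimal .gitlab-ci.yml illustrating the build, test, and deploy flow might look like this (job names, image tags, and commands are illustrative, not taken from the original pipeline):
# .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - docker build -t my-app:latest .

test-job:
  stage: test
  script:
    - echo "Run the test suite here"

deploy-job:
  stage: deploy
  script:
    - docker compose up -d
  only:
    - main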
Containerizing a React application with Docker Compose is a widely adopted approach for streamlining both the development and deployment environments of your application. Docker Compose empowers you to define and execute multi-container Docker applications seamlessly.
Leveraging Docker offers a multitude of benefits. Presently, Docker stands as the prevailing choice for containerizing applications, owing to its simplicity in building, packaging, sharing, and distributing applications. The portability of Docker images further simplifies application deployment across various modern cloud providers.
This comprehensive guide outlines a step-by-step process for Dockerizing a React application using Docker Compose.
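As a rough preview of where the guide is heading, a development-oriented setup could pair a Dockerfile with a docker-compose.yml along these lines (the Node version, port, and service name are illustrative rather than taken from the guide):
# Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

# docker-compose.yml
services:
  react-app:
    build: .
    ports:
      - "3000:3000"
With both files in place, docker compose up -d --build builds the image and starts the container.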
It is important to have a failover for every system, as it improves availability and reduces data loss. In this article, we describe how you can have near-live synchronization between two Zimbra servers so that one of them is live and the other is kept in a warm or very warm standby state. The sync can work in reverse when the mirror (redundant) server becomes the active server. This allows easy fall-back to the original server once the failover condition is resolved.
Zimbra employs several different databases to store messages, message indexes, metadata, account information and configuration. Although it is possible to synchronise two Zimbra servers at the disk level using DRBD or vSphere, the amount of disk operations from all these databases that need to be replicated would probably take up a lot of bandwidth, which may be debilitating and/or expensive to implement if the two servers are in remote locations.
For this guide, we shall be looking at how to make use of the FreeIPA replica feature to set up high availability in FreeIPA. We shall then configure Keepalived to facilitate failover between any number of FreeIPA instances. In this guide, only two FreeIPA instances will be used, one being the master server and the other the replica.
This setup procedure is intended for system administrators who run a single FreeIPA server and are worried about a single point of failure.
Therefore, you should have a FreeIPA server already installed and fully functioning – with test accounts.
This guide starts with the setup of a FreeIPA server, followed by the setup of one replica node.
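As a rough sketch of the pieces involved: the replica itself is created with ipa-replica-install on the second host, and the Keepalived VRRP configuration on the master could look something like this (the interface name, router ID, priority, and virtual IP are placeholders to adapt):
# /etc/keepalived/keepalived.conf on the master
vrrp_instance FREEIPA_VIP {
    state MASTER              # use BACKUP with a lower priority on the replica
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.10.50/24
    }
}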
Some pieces of documentation are very small, so a full post might not be required for them. In light of this, I created a small quick guide to host some of them.
Moodle and BigBlueButton were designed independently, but the two can be integrated with ease to improve online learning and collaboration between educators and learners.
About Moodle:
Moodle is an open-source learning management system designed to create a learning environment for educators, learners and administrators. Moodle can be installed on your own web server, and it is proven and trusted worldwide.
It is completely free with no licensing fees, highly flexible and fully customizable.
About BigBlueButton:
BigBlueButton is an open-source global web conferencing system intended for online learning. It includes features such as real-time sharing of audio, screen and video, private/public chat, an interactive whiteboard, and document upload.
BigBlueButton is integrated into Moodle as a plugin, and below is the step-by-step procedure for how this is done:
Sign in to Moodle as an administrator > Dashboard > Site administration.
In Zimbra, there are two different images to re-brand: one is the image that appears in the Login window and the other is the image in the top-left corner when you are logged in. Each image has a different size, and the logos for the open source email platform (FOSS) should be set to the following max sizes:
Application Banner, 200px x 35px in Zimbra Collaboration 8.x;
Application Banner, 170px x 42px in Zimbra Collaboration 8.0.x;
Login Banner, 440px x 60px in Zimbra Collaboration 8.x;
Login Banner, 450px x 36px in Zimbra Collaboration 8.0.x;
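On the FOSS edition, these images are typically swapped in with zmprov, run as the zimbra user; assuming the replacement banners are hosted at a reachable URL (the domain and paths below are placeholders), the commands look roughly like this:
$ zmprov modifyDomain <your domain> zimbraSkinLogoAppBanner "https://<your server>/logos/AppBanner.png"
$ zmprov modifyDomain <your domain> zimbraSkinLogoLoginBanner "https://<your server>/logos/LoginBanner.png"
$ zmprov modifyDomain <your domain> zimbraSkinLogoURL "https://<your domain>"
$ zmmailboxdctl restart    # restart mailboxd so the new banners are picked up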