Archive for April, 2025

Debugging Jitsi Meet Server Problems: A Practical Guide

Saturday, April 26th, 2025

Jitsi Meet is a powerful open-source video conferencing platform. But like any real-time communication system, it can run into issues—from video/audio glitches to full-blown connection failures. Debugging Jitsi Meet can be tricky due to its multi-component architecture. This guide walks you through a systematic approach to identify and resolve common server-side issues.

1. Understand the Architecture

Before diving into logs, it's important to understand Jitsi Meet's core components:
 

  • Jitsi Meet (Web UI) – The front-end interface.
  • Jicofo (Focus component) – Manages conference sessions.
  • Prosody (XMPP Server) – Handles user authentication and signaling.
  • JVB (Jitsi Videobridge) – Routes video/audio streams.
  • Nginx or Apache – Web server proxy (often with HTTPS and WebSocket forwarding).


Knowing how these interact helps pinpoint the failing layer.

2. Check Logs in the Right Places

Each component has its own logs. Check them in the following order:

Prosody Logs

Location: /var/log/prosody/prosody.log and prosody.err
Look for: Authentication issues, connection denials, or component registration problems.
 

Jicofo Logs

Location: /var/log/jitsi/jicofo.log
Look for: Room creation errors, XMPP connection failures, conference creation attempts.
 

JVB Logs

Location: /var/log/jitsi/jvb.log
Look for: ICE failures, STUN/TURN issues, packet loss, and bridge reachability.
 

Web Server Logs (Nginx/Apache)

Location (Nginx): /var/log/nginx/error.log and access.log
Look for: HTTP errors (404, 502), WebSocket connection problems.
 

Browser Console Logs

Tools: Press F12 in browser → Console/Network tabs.
Look for: WebSocket failures, CORS issues, or media permission problems.
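
To correlate what the browser reports with what the server sees, it often helps to tail all the server-side logs together while reproducing the problem (paths as listed above; adjust them if your packages log elsewhere):

# tail -f /var/log/prosody/prosody.log /var/log/prosody/prosody.err \
    /var/log/jitsi/jicofo.log /var/log/jitsi/jvb.log \
    /var/log/nginx/error.log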
 

3. Common Problems & Fixes

"Failed to join conference"

  • Cause: Prosody may not be running or not configured correctly.

Fix: Restart Prosody and check the domain configuration in /etc/prosody/conf.avail/.
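
A minimal check sequence for this case could look like the following (a sketch assuming the stock Debian package layout; prosodyctl check is available in recent Prosody versions):

# systemctl restart prosody
# prosodyctl check config
# ls -l /etc/prosody/conf.avail/ /etc/prosody/conf.d/

If prosodyctl check config complains about the virtual host or missing components, fix the domain config and restart Prosody again.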

 

 

No Audio or Video
 

Usual Cause: Media is not reaching the bridge or is being blocked by a firewall.

Fix:

  • Verify JVB is listening on the correct port (UDP 10000).
  • Check firewall/NAT settings (especially on cloud VMs).
  • Use tcpdump or ss to check traffic flow (see the example commands below).
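
For example, the following quick checks (port and interface values assume the default JVB setup) show whether the bridge is listening and whether media packets actually arrive:

# ss -lun | grep ':10000'
# tcpdump -ni any udp port 10000 -c 20

If tcpdump stays silent while a call is running, the traffic is most likely being dropped by a firewall or NAT in front of the server.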
     

WebSocket Connection Fails

 

Usual Cause: Web server (proxy) misconfiguration.

Fix:

Ensure Nginx is forwarding WebSocket requests to /xmpp-websocket.
Add the proper proxy settings to your Nginx site configuration.
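
For orientation, a minimal location block for the XMPP WebSocket looks roughly like the one below (a sketch based on the typical Jitsi Nginx site template; 5280 is Prosody's default HTTP port, and exact directives may differ between versions):

location = /xmpp-websocket {
    proxy_pass http://127.0.0.1:5280/xmpp-websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    tcp_nodelay on;
}

After changing the config, run nginx -t and reload Nginx.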
 

Authentication Not Working


Cause: Misconfigured JWT or internal authentication.

Fix:

  • Check Prosody's config for authentication method.
  • If using JWT, verify the token structure and shared secret (see the illustration below).
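
As an illustration, a token-protected virtual host in Prosody usually looks something like this (values are placeholders; the token authentication module normally comes from the jitsi-meet-tokens package):

VirtualHost "meet.example.com"
    authentication = "token"
    -- app_id and app_secret are examples; they must match the issuer and the key used to sign your JWTs
    app_id = "example_app_id"
    app_secret = "example_shared_secret"

If the secret or the token claims (issuer, audience, room, expiry) do not match, Prosody will reject the login and you will see it in prosody.log.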
     

4. Use Debugging Tools

  • Jitsi Meet in debug mode:

    Add #config.debug=true to your meeting URL.

  • ICE Debugging:

    Check about:webrtc (Firefox) or WebRTC Internals (Chrome).
    Look at ICE candidate gathering and connectivity checks.

  • Test TURN/STUN:

    Use tools like trickle-ice to validate your server's ICE configuration.

5. Networking and Firewall Checks

Make sure these ports are open:
 

  • TCP 443 – HTTPS
  • UDP 10000 – Media (JVB)
  • TCP 4443 – (Optional, fallback media)
  • TCP 5222 – XMPP (if not using BOSH/WebSocket)
     

# ss -tuln
# ufw status


6. Component Health Checks

Run systemctl status for each of the main Jitsi component services:

# systemctl status prosody
# systemctl status jicofo
# systemctl status jitsi-videobridge2

Check uptime, recent errors, and whether the services keep restarting after failures.

7. Enable More Verbose Logs

Increase logging levels for deeper debugging:
 

  • Prosody: Edit /etc/prosody/prosody.cfg.lua → add a debug target to the log = { … } table.
  • Jicofo/JVB: Edit /etc/jitsi/jicofo/logging.properties and /etc/jitsi/videobridge/logging.properties
    → change the log level to FINE or ALL (see the sketch below).
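
For example, the Prosody part could look like this (a sketch assuming the default Debian log paths), and for Jicofo/JVB you raise the root logger level in the respective logging.properties file; restart the services afterwards for the changes to take effect:

-- /etc/prosody/prosody.cfg.lua
log = {
    debug = "/var/log/prosody/prosody.log";
    error = "/var/log/prosody/prosody.err";
}

And in /etc/jitsi/videobridge/logging.properties and /etc/jitsi/jicofo/logging.properties:

.level=FINE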

 

8. Update & Restart Services

Sometimes updates or configs don’t apply until services are restarted:
 

# apt update && apt upgrade
# systemctl restart prosody jicofo jitsi-videobridge2 nginx

 

Final Thoughts

Debugging Jitsi Meet requires a structured approach: start from the user-facing symptoms, trace through each service, and verify the network and authentication configuration.
Check the status of prosody, jicofo and jitsi-videobridge2, and make sure the firewall openings towards the Jitsi server are in place.
With some log analysis, a bit of patience and experimentation, and help from the forums or an AI tool such as ChatGPT, most Jitsi server errors can be solved.

How to Install Jitsi Meet on Debian Linux to have your own free software video conferencing secure server

Thursday, April 24th, 2025

 

[Screenshot: Jitsi Meet – create a new room for video meetings on Linux]

 

Jitsi Meet is a free, open-source video conferencing platform that allows you to host secure and scalable video calls from a mobile phone / tablet / PC or any other device for which a Jitsi client is available. Jitsi Meet is the best free alternative one can get to Rakuten Viber / Facebook (Meta) / Zoom / Apple's FaceTime etc.
What makes Jitsi really worthwhile is the flexibility it gives you to keep your video communication a little more private and harder to capture than on the big general-purpose video streaming platforms.
Jitsi is also very simple to use: it works either with a desktop client on Windows / Linux / Mac OS, with a smartphone running Android (Samsung / Huawei etc.) or iOS (iPhone) configured to use your Jitsi server, or directly via an SSL-encrypted web URL. The only thing I really don't like about Jitsi is that it uses Java and its inner workings are cryptic; it is pretty hard to debug or understand exactly how the software works, and when errors come, the usual crazy Java exceptions fill the Jitsi logs.

In the short guide below, I'll try to provide simple step-by-step instructions for installing Jitsi Meet on Debian-based systems, hoping that anyone can benefit from Jitsi by building their own server.

 

[Image: Jitsi Meet conference – free open-source video streaming, a Viber and Facebook alternative]


What you should have before you start building your new Jitsi Meet server

Before you begin, ensure that your system meets the following requirements:

  • A fresh installation of Debian 10 (Buster) or newer.

  • A non-root user with sudo privileges.

  • A fully updated system.

  • A domain name pointing to your server's IP address.

  • OpenJDK 11 installed.

To get a better understanding of how Jitsi Meet works, it is worth taking a quick look at the Jitsi architectural diagram:

[Diagram: Jitsi Meet video conferencing software (Linux / Windows / Mac) architecture]
 

1. Update Your System

Start by updating your system's package list and upgrading existing packages:

# apt update && apt upgrade -y

2. Install Required Dependencies

Install the necessary packages for adding repositories and managing keys:

# apt install apt-transport-https curl gnupg2 -y

3. Add Jitsi Repository

Add the Jitsi repository key to your system:

# curl https://download.jitsi.org/jitsi-key.gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/jitsi-keyring.gpg

Then, add the Jitsi repository:​

# echo "deb [signed-by=/usr/share/keyrings/jitsi-keyring.gpg] https://download.jitsi.org stable/" | sudo tee /etc/apt/sources.list.d/jitsi-stable.list > /dev/null

Update your package list to include the Jitsi repository in the apt database:

# apt update

4. Install Jitsi Meet

Install the Jitsi Meet package:

# apt install jitsi-meet -y

During installation, you'll be prompted to:​

  • Enter the hostname: Provide your domain name (e.g., meet.example.com).

  • Choose SSL certificate: Select "Generate a new self-signed certificate" or "Obtain a Let's Encrypt certificate" if you have a valid domain.

If you opt for Let's Encrypt, ensure that ports 80 and 443 are open on your firewall.
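
If you initially chose a self-signed certificate and later want a proper one, recent jitsi-meet packages ship a Let's Encrypt helper script (the path may vary slightly between versions):

# /usr/share/jitsi-meet/scripts/install-letsencrypt-cert.sh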

5. Configure Firewall openings

If you already have a firewall configured to filter traffic, open the necessary ports to allow traffic to your Jitsi Meet server, both on your router / perimeter firewall device and on the Linux machine itself:

Allow access to SSH server

# ufw allow 22/tcp


Allow HTTP and HTTPS access to the Jitsi Meet web server

# ufw allow 80/tcp
# ufw allow 443/tcp


Allow the ports necessary for proper operation of the Jitsi Videobridge (UDP port range 10000 to 20000)
 

# ufw allow 10000:20000/udp
# ufw enable

 

Verify the firewall status is okay:

# ufw status

6. Access Jitsi Meet in a browser

Open a web browser and navigate to your server's domain or IP address:​

https://meet.your-custom-domain-or-IP.com

Hopefully all is okay and you should see the Jitsi Meet interface, where you can start or join a meeting.

7. Secure Conference Creation (Optional extra)

By default, anyone can create a conference. To restrict this:​

  1. Install and configure Prosody for authentication.
    For those who don't know, Prosody is a modern XMPP communication server.

  2. Set up secure domains and configure authentication settings.

For detailed instructions, refer to the Jitsi DevOps Guide; a rough sketch of the usual approach is shown below.
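
As a rough sketch of what the usual secure-domain setup involves (domain names and the user are placeholders, and exact file names differ between Jitsi versions, so treat this as orientation rather than a recipe):

In /etc/prosody/conf.avail/meet.example.com.cfg.lua, require authentication on the main domain and add an anonymous guest domain:

VirtualHost "meet.example.com"
    authentication = "internal_hashed"

VirtualHost "guest.meet.example.com"
    authentication = "anonymous"
    c2s_require_encryption = false

In /etc/jitsi/meet/meet.example.com-config.js point guests at the anonymous domain:

    anonymousdomain: 'guest.meet.example.com',

Then create a user that is allowed to open conferences and restart the services:

# prosodyctl register moderator meet.example.com 'Some-Strong-Password'
# systemctl restart prosody jicofo jitsi-videobridge2

Jicofo also has to be told to authenticate conference creators against the main domain (org.jitsi.jicofo.auth.URL=XMPP:meet.example.com in older sip-communicator.properties based setups, or the authentication section of /etc/jitsi/jicofo/jicofo.conf in newer ones).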
 

Conclusion

You should now have successfully installed Jitsi Meet on your Debian server.
Installing on Ubuntu or on RPM-based distros such as Fedora or Red Hat should not be much different from this guide, except that you have to use
the correct RPM repositories.

You can now host secure video conferences on your own infrastructure, with increased privacy and perhaps a bit more peace of mind that the CIA, Mossad, MI6 or FSB might not be spying on your video conference talks (unless they already do it at the OS level, which is most likely the case, but that doesn't matter :).

For advanced configurations and features, consult the Jitsi Handbook and the Jitsi DevOps Guide.

That's all folks, enjoy!

How to Install and Set Up an NFS Server with network shares on Linux to ease data transfer across multiple hosts

Monday, April 7th, 2025

How to configure an NFS server on Red Hat, CentOS, RHEL, Debian, Ubuntu and Oracle Linux

Network File System (NFS) is a protocol that allows one system to share directories and files with others over a network. It's commonly used in Linux environments for file sharing between systems. In this guide, we'll walk you through the steps to install and set up an NFS server on a Linux system.

Prerequisites

Before you start, make sure you have:

  • A Linux system (any distro, e.g. Ubuntu, CentOS, Debian, etc.).
  • Root or sudo privileges on the system.
  • A network connection between the server (NFS server) and clients (machines that will access the shared directories).
     

1. Install the NFS Server Package on Ubuntu / Debian based Linux systems

a. First, update the package list:

# apt update

b. Install the NFS server package
 

# apt install nfs-kernel-server

2. Install the NFS Server Package on CentOS / RHEL-based systems

# yum install nfs-utils

Once the package is installed, ensure that the necessary services are enabled.

3. Create a Shared Directory for file sharing

Decide which directory you want to share over NFS. If the directory doesn't exist, you can create one. For example:

# mkdir -p /nfs_srv_dir/nfs_share

Make sure the directory has the appropriate permissions so that the NFS clients can access it.

# chown nobody:nogroup /nfs_srv_dir/nfs_share 
# chmod 755 /nfs_srv_dir/nfs_share

4. Configure NFS Exports ( /etc/exports file)

The NFS exports file (/etc/exports) is perhaps the most important file you will have to create and deal with regularly to define the exported shares; it contains the configuration settings for the directories you want to share with other systems.

a. Open the /etc/exports file for editing:

# vi /etc/exports

Add an entry for the directory you want to share. For example, if you're sharing /nfs_srv_dir/nfs_share and allowing access to all systems on the network (192.168.1.0/24), add the following line:
 

/nfs_srv_dir/nfs_share 192.168.1.0/24(rw,sync,no_subtree_check)


Here’s what each option means:

  • rw: Read and write access.
  • sync: Ensures that changes are written to disk before responding to the client.
  • no_subtree_check: Disables subtree checking, which improves reliability and performance at a slight cost in security.

 

Here are a few example lines from the working /etc/exports on my home NFS server:

/var/www 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/jordan 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/mnt/sda1/icons-frescoes/ 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/mobfiles 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/mnt/sda1/icons-frescoes/ 192.168.0.200/32(rw,no_root_squash,async,subtree_check)
/home/hipo/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/alex/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/necroleak/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/bashscripts 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/backups/Family-Videos 192.168.0.200/32(ro,no_root_squash,async,subtree_check)

 

5. Export the NFS Shares with exportfs command

Once the export file is configured, you need to inform the NFS server to start sharing the directory:
 

# exportfs -a


The -a flag exports all the shares defined in /etc/exports.
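
Whenever you later change /etc/exports, you don't need to restart the whole NFS service; re-exporting and listing the active exports is usually enough:

# exportfs -ra
# exportfs -v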

6. Start and Enable NFS Services

You need to start and enable the NFS server so it will run on system boot.

On Ubuntu / Debian Linux run the following commands:
 

# systemctl start nfs-kernel-server 
# systemctl enable nfs-kernel-server


On CentOS / RHEL Linux:
 

# systemctl start nfs-server
# systemctl enable nfs-server


7. Allow NFS Traffic Through the Firewall

If your server has a firewall configured / enabled, you will need to allow the NFS-related ports through it.
These include TCP port 2049 (NFS), TCP and UDP port 111 (rpcbind), and a few additional ports.

On Ubuntu/Debian (assuming you are using ufw [UNCOMPLICATED FIREWALL]):

# ufw allow from 192.168.1.0/24 to any port nfs
# ufw reload

On CentOS / RHEL Linux:

# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-service=mountd
# firewall-cmd --permanent --add-service=rpc-bind
# firewall-cmd --reload

8. Verify NFS Server is Running

To ensure the NFS server is running properly, use the following command:
 

# systemctl status nfs-kernel-server

or

# systemctl status nfs-server

You should see output indicating that the service is active and running.

 

9. Test the NFS Share (Client-Side)

To test the NFS share, you will need to mount it on a client machine. Here's how to mount it:

On the client machine, install the NFS client utilities:

Ubuntu / Debian Linux

# apt install nfs-common

For CentOS / RHEL Linux

# yum install nfs-utils


Create a mount point (no matter the distro):
 

# mkdir -p /mnt/nfs_share


Mount the NFS share:

# mount -t nfs <nfs_server_ip>:/nfs_srv_dir/nfs_share /mnt/nfs_share

Replace <nfs_server_ip> with the IP address of the NFS server or DNS host alias if you have one defined in /etc/hosts file.

Verify that the share is mounted:

# df -h

You should see the NFS share listed under the mounted file systems.
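
You can also ask the server which exports it offers to your client, even before mounting (showmount is part of the nfs-common / nfs-utils packages):

# showmount -e <nfs_server_ip>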

10. Configure Auto-Mount at Boot (Optional)

To have the NFS share automatically mounted at boot, you can add an entry to the /etc/fstab file on the client machine.

Open /etc/fstab for editing:

# vi /etc/fstab

Add the following line: 

<server-ip>:/nfs_srv_dir/nfs_share /mnt/nfs_share nfs defaults 0 0

Save and close the file.

The NFS share will now be automatically mounted whenever the system reboots.
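
To verify the new /etc/fstab entry without rebooting, mount everything listed in fstab and check the result:

# mount -a
# findmnt /mnt/nfs_share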

Debug NFS configuration issues (basics)

 

You can continue to modify the /etc/exports file to share more directories or set specific access restrictions depending on your needs.

If you encounter any issues, checking the server logs or reviewing the active exports with exportfs -v can help a lot in troubleshooting the NFS configuration:
 

# exportfs -v
/var/www          192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/var_data      192.168.0.205/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/mnt/sda1/
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/mnt/sda2/info
        192.168.0.200/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/mobfiles    192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/var_data/public_html
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/var/public
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/neon/data
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/scripts      192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/backups/data-limited
        192.168.0.200/32(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)
/disk/filetransfer
        192.168.0.200/23(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)
/public_shared/data
        192.168.0.200/23(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)


Of course there is much more to be said on the subject. You can, for example, check /var/log/messages, /var/log/syslog and other logs that can give you hints about issues, as well as manually try to mount / unmount a stuck NFS share to learn more about what is going on, but for a start this should be enough.

Summing up: what did we learn?

We learned how to set up a basic NFS server and mount its shared directory on a client machine.
This is a great solution for centralized file sharing and collaboration on Linux systems. Even though many companies try to avoid it due to its lack of connection encryption, for historical reasons NFS has been widely used over the years and has helped dramatically in making the Internet the World Wide Web we know today. Thus, for a well secured network and a not-too-critical file infrastructure, NFS is still a key player in file sharing among heterogeneous networks, for the multitudes of gigabytes, terabytes or petabytes of data you might like to share among your personal computers / servers / phones / tablets and all kinds of other digital devices.

Protect Application servers against SQL injection, redirection handling and clickjacking with the HAProxy load balancer

Tuesday, April 1st, 2025

 

Let's say you are a system administrator who has to manage HAProxy load balancers for High Availability that are throwing traffic to a set of 4 application servers, and you only do round-robin load balancing, seamlessly, without modifying the forwarded traffic. The haproxies are used only to send the frontend traffic towards the application machines, and the traffic is then returned back via another set of haproxies.


As it is crucial for incoming requests to the application frontend to be secure, in this article I'll give a few options that can be turned on in HAProxy to strengthen the security of the backend application (against "hackers" / script kiddies).

Here is a sample chunk of HAProxy frontend / backend configuration you can use in the haproxy.cfg config file for this purpose.


  frontend Incoming_Frontend
           bind 10.10.150.8:80 ssl crt /etc/haproxy/certs/your-domain-cert.net_haproxy.pem ca-file /etc/haproxy/certs/CustomCompanyCA.crt verify optional
           mode http
                http-request del-header max-forwards
                http-response set-header X-Frame-Options sameorigin
                http-response replace-header Location http[s]*://[^/:]*[:]*[0-9]*(/.*) \1
              option httplog
              timeout client 600s
              log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r

             default_backend bk_Incoming_Frontend

    backend bk_Incoming_Frontend
           mode http
           balance roundrobin
           timeout server 330s
           timeout connect 4s
           server bk_AppServer_01 10.10.250.40:8088 weight 1 check port 8088 on-marked-down shutdown-sessions
           server bk_AppServer02 10.40.251.30:8088 weight 1 check port 8088 on-marked-down shutdown-sessions
           server bk_AppServer03 10.50.252.40:8088 weight 1 check port 8088 on-marked-down shutdown-sessions
           server bk_AppServer04 10.80.253.50:8088 weight 1 check port 8088 on-marked-down shutdown-sessions

 

The configuration directives that improve backend security are the following:

  mode http
                http-request del-header max-forwards
                http-response set-header X-Frame-Options sameorigin
                http-response replace-header Location http[s]*://[^/:]*[:]*[0-9]*(/.*) \1
              option httplog

The meaning of the above HAProxy config, explained:

This HAProxy configuration is set up for handling HTTP traffic with some specific request and response modifications.

Let's go through each directive:

Breakdown of the Configuration:

  1. mode http

    • This tells HAProxy to operate in HTTP mode, meaning it understands and processes HTTP-specific directives (e.g., modifying headers, logging, etc.).

  2. http-request del-header max-forwards

    • This removes the Max-Forwards header from incoming HTTP requests.

    • The Max-Forwards header is used in TRACE or OPTIONS requests to limit the number of hops a request can take.

    • Removing it may help prevent some types of request-loop abuse or simplify routing.

  3. http-response set-header X-Frame-Options sameorigin

    • This sets the X-Frame-Options header in HTTP responses to sameorigin .

    • Purpose: Prevents clickjacking attacks by ensuring that the page can only be embedded in a frame if it’s from the same origin (not by third-party sites).

      For those who don't know, clickjacking is the malicious practice of manipulating a website user's activity by concealing hyperlinks beneath legitimate clickable content, thereby causing the user to perform actions of which they are unaware. For example, you click a payment button on what is actually a decoy website, but instead of paying the real target site, your money is sent to a malicious user's bank account.

  4. http-response replace-header Location http[s]*://[^/:]*[:]*[0-9]*(/.*) \1

    • This modifies the Location header in HTTP responses.

    • It strips out the scheme ( http:// or https:// ), domain, and port, leaving only the path.

    • Example:

      • Before: Location: https://example.com:8080/path/to/resource

      • After: Location: /path/to/resource

    • This ensures that redirects remain relative instead of absolute, which can help in reverse proxy setups.

  5. option httplog

    • Enables detailed logging for HTTP traffic.

    • Logs will include request method, URL, response status, and other useful details for debugging and monitoring.


Purpose of This Configuration:

  • Security:

    • Removing Max-Forwards helps mitigate abuse.

    • X-Frame-Options: sameorigin prevents clickjacking.

  • Redirection Handling:

    • Ensures the backend does not expose internal hostnames or ports in redirects.

  • Logging:

    • Enables HTTP-specific logging for better monitoring and debugging.

This setup is typical for a reverse proxy scenario where HAProxy is fronting backend services while enforcing security measures and keeping responses clean.
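
A quick way to verify that the headers really reach the client is to probe the frontend with curl (replace the address with your own frontend URL; use -k only when testing against an internal CA, as in the example config above):

# curl -skI https://10.10.150.8/ | grep -iE 'x-frame-options|location'

You should see X-Frame-Options: sameorigin in the response, and any Location header should contain only a relative path.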

What did we learn?

In this short article, we've learned how to improve application security with a simple HAProxy load balancer: by removing Max-Forwards (a limit on the number of hops traffic can take before reaching its destination), by setting X-Frame-Options to prevent clickjacking, and by rewriting the Location header so that the backend does not expose internal hostnames or ports in redirects.

Any other meaningful protection options and hints for proxying traffic with HAProxy are most welcome in the comments section. If you know of any, help others learn by sharing.