Monthly Archives: November 2023


How to Create a WiFi Hotspot in Ubuntu 22.04

Creating a WiFi hotspot on Ubuntu 22.04 is a straightforward way to share your internet connection with other devices. Turning your Ubuntu machine into a WiFi access point is handy at home or anywhere a traditional WiFi network isn’t available. Here’s a detailed guide to configuring a WiFi access point using the network-manager snap.

Prerequisites

Before we begin, ensure that you have the following:

  • A computer running Ubuntu 22.04.
  • A wireless network interface on your Ubuntu device.
  • The network-manager snap installed on your system.

Step-by-Step Guide to Create a WiFi Hotspot

Open the Terminal: First, open your terminal. You can do this by pressing Ctrl + Alt + T or searching for ‘Terminal’ in your applications menu.

Identify Your WiFi Interface: You need to know the name of your WiFi network interface. You can find this by running the command nmcli device status. Look for the device under the “DEVICE” column that has “wifi” listed in the “TYPE” column.
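
For illustration only (the device and connection names on your machine will differ), the output is laid out like this:

$ nmcli device status
DEVICE   TYPE      STATE      CONNECTION
wlan0    wifi      connected  HomeNetwork
enp3s0   ethernet  connected  Wired connection 1

Here, wlan0 is the WiFi interface we will use in the next step.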

Configure the WiFi Hotspot: Use the following command to set up your WiFi hotspot:

nmcli d wifi hotspot ifname <wifi_iface> ssid <ssid> password <password>

Replace <wifi_iface> with your WiFi interface name, <ssid> with your desired network name (SSID), and <password> with your chosen password. Remember, the password must be 8 to 63 characters long, or exactly 64 hexadecimal characters.

For example, if your WiFi interface is wlan0, your desired SSID is MyHotspot, and your password is MyStrongPassword123, the command will look like this:
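
nmcli d wifi hotspot ifname wlan0 ssid MyHotspot password MyStrongPassword123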

Connection Verification: If the command is successful, network-manager will create a connection named ‘Hotspot <N>’, where <N> is a number. This indicates your hotspot is active.

Shared Internet Connection: The created hotspot offers a shared connection by default. This means any device connected to your hotspot should be able to access the internet if your Ubuntu device has internet access.
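
Under the hood, the generated profile uses NetworkManager’s shared IPv4 method, which hands out addresses to connected clients and routes (NATs) their traffic through your uplink. If you want to confirm this yourself, you can inspect the created profile (assuming it was named ‘Hotspot’):

nmcli connection show Hotspot | grep ipv4.method

The method should be reported as shared.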

Connecting Devices: Search for available WiFi networks on your other devices (like smartphones or laptops). You should see the SSID you set (MyHotspot in our example). Connect to it using the password you configured.
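
When you are finished sharing, you can check on and stop the hotspot from the terminal. A minimal sketch, assuming the profile was created with the default name ‘Hotspot’ (adjust if yours is ‘Hotspot <N>’):

nmcli connection show --active
nmcli connection down Hotspot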

Tips and Considerations

  • Ensure your device has a stable internet connection if you intend to share it via the hotspot.
  • Keep your hotspot secure by using a strong, unique password.
  • Remember that running your computer as a hotspot will drain its battery faster if it isn’t plugged in.

Conclusion

Creating a WiFi hotspot on Ubuntu 22.04 is a useful feature, especially when you need to share your internet connection quickly and efficiently. Following these simple steps, you can turn your Ubuntu machine into a reliable WiFi access point for various devices.

For reference, here is the relevant excerpt from the nmcli manual page:

nmcli device wifi hotspot [ifname ifname] [con-name name] [ssid SSID] [band {a | bg}] [channel channel] [password password]
   Create a Wi-Fi hotspot. The command creates a hotspot connection profile according to Wi-Fi device capabilities and activates it on the device. The hotspot is secured with WPA if device/driver supports that, otherwise WEP is used. Use connection down or device down to stop the hotspot.

   Parameters of the hotspot can be influenced by the optional parameters:

   ifname
       what Wi-Fi device is used.

   con-name
       name of the created hotspot connection profile.

   ssid
       SSID of the hotspot.

   band
       Wi-Fi band to use.

   channel
       Wi-Fi channel to use.

   password
       password to use for the created hotspot. If not provided, nmcli will generate a password. The password is either WPA pre-shared key or WEP key.

       Note that --show-secrets global option can be used to print the hotspot password.
       It is useful especially when the password was generated.
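
To illustrate the optional parameters above (all values here are arbitrary examples), a fully specified invocation could look like this:

nmcli device wifi hotspot ifname wlan0 con-name MyHotspotProfile ssid MyHotspot band bg channel 6 password MyStrongPassword123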

How to Run Three Instances of Signal on Ubuntu

Signal is a popular, privacy-focused messaging app. For various reasons, you might want to run multiple instances of Signal on your Ubuntu system. Here, we’ll guide you through the process of installing three different versions of Signal: the Snap package, the standard Debian-based installation, and the Signal Beta for Linux.

Prerequisites

  • Ubuntu (we recommend a recent release, such as 20.04 or later)
  • Basic understanding of Linux terminal commands

1. Installing Signal from Snap

Snap is a package management system that makes it easy to install applications in Linux. Follow these steps to install Signal using Snap:

  1. Open Terminal: Use Ctrl+Alt+T to open the terminal.
  2. Install Signal: Enter the command: sudo snap install signal-desktop.
  3. Launch Signal: You can find Signal in your applications menu or launch it from the terminal with signal-desktop.

2. Installing Signal Using the Official Debian-Based Instructions

For the second instance, we will use the Debian-based installation method (https://signal.org/download/):

  1. Add Signal’s Official Repository:
    • Open Terminal.
    • Enter:
      wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg;
      cat signal-desktop-keyring.gpg | sudo tee /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null;
    • Add the repository:
      echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] https://updates.signal.org/desktop/apt xenial main' | sudo tee /etc/apt/sources.list.d/signal-xenial.list;
  2. Update and Install Signal:
    • Update package database: sudo apt update.
    • Install Signal: sudo apt install signal-desktop.
  3. Launch the Application: Find Signal in your application menu or type signal-desktop in the terminal.

3. Installing Signal Beta for Linux (Debian-based)

Finally, let’s install the Beta version (https://support.signal.org/hc/en-us/articles/360007318471-Signal-Beta):

  1. Add the Signal Repository (shared with the stable release):
    • The Beta is distributed from the same repository and signed with the same key as the stable build, so if you completed section 2 above, no repository changes are needed.
    • If you skipped section 2, run the key and repository commands from that section first. Avoid appending with tee -a: duplicating the key or the repository line only produces apt warnings about entries being configured multiple times.
  2. Update and Install Signal Beta:
    • Update the system: sudo apt update.
    • Install Signal Beta: sudo apt install signal-desktop-beta.
  3. Launch Signal Beta: It should appear in your applications menu or can be started from the terminal with signal-desktop-beta.
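
A note on launch commands: the Snap and the Debian package both install a binary named signal-desktop, so which instance a bare signal-desktop command starts depends on your PATH order. To be explicit, you can launch each instance by full path, and you can confirm what is installed. The paths below are the usual defaults on Ubuntu, so verify them on your system:

/snap/bin/signal-desktop      # Snap instance
/usr/bin/signal-desktop       # Debian-package instance
signal-desktop-beta           # Beta instance

snap list signal-desktop
dpkg -l | grep signal-desktop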

Tips for Managing Multiple Instances

  • Different Profiles: Each instance of Signal will require a different phone number for registration.
  • System Resources: Running multiple instances can consume more system resources. Monitor your system’s performance.
  • Updates: Regularly check for updates to each version to ensure security and functionality.

Conclusion

With these steps, you should now have three different versions of Signal running on your Ubuntu system. This setup is ideal for separating personal, work, and testing environments within the same machine. Enjoy your enhanced and versatile messaging experience!


Deep Dive into Apple AirTags vs. Samsung SmartTag 2: New Insights and Comparison

Welcome back to our ongoing discussion comparing Apple AirTags and Samsung SmartTag 2. Our previous post provided an initial comparison, but today, we delve deeper with new findings that highlight crucial differences and why they matter.

1. Satellite View on Apple “Find My”

  • Importance: The satellite view feature in Apple’s Find My app offers a more detailed and realistic geographical context. This can be crucial in urban areas with complex layouts, helping users pinpoint their device’s location more accurately.

2. Samsung “SmartThings” Tracks Location History

  • Importance: Samsung’s SmartThings’ ability to show location history adds a layer of tracking detail that can be invaluable in retracing steps or understanding the movement pattern of a lost item.

3. Limitation of Notification in Samsung “SmartThings”

  • Importance: Samsung’s SmartThings allows only two devices to send notifications for left-behind items, a significant limitation, especially for users with multiple valuable assets to keep track of.

4. Active Connection Indicator in Samsung “SmartThings”

  • Importance: Showing when a device is actively connected, rather than just sharing GPS coordinates like Apple’s Find My, provides a more dynamic and immediate understanding of your item’s status, offering potentially faster recovery actions.

5. Notification Issue with Apple “Find My”

  • Importance: Apple’s Find My can notify users of a stale location even when the device hasn’t been found recently, which leads to confusion and wasted effort and diminishes the reliability of the tracking system.

6. Benefits of Using Both Systems

  • Importance: Employing both Apple AirTags and Samsung SmartTag 2 can enhance asset tracking capabilities. The increased spectrum of compatible devices means broader coverage and better chances of locating lost items, especially in diverse geographic areas like London, Cyprus, Dubai, and Kuala Lumpur.

7. Real-Life Performance: Samsung SmartTag 2 Outperforms

  • Importance: Our real-life tests suggest that the Samsung SmartTag 2 outperforms Apple AirTags in varied locations. This can be critical for users who travel frequently or live in areas with different technological ecosystems.

In conclusion, understanding these nuanced differences is essential for making an informed choice between these leading tracking technologies. While both have their strengths, your specific needs and usage scenarios will ultimately determine the best fit for your asset-tracking requirements.


Technical Deep Dive: Multi-Method Face and Person Detection in Python

In this technical post, we’ll dissect a Python script integrating several libraries and techniques for detecting faces and people in video footage. This script is an excellent example of how diverse computer vision tools can be merged to produce a robust solution for image analysis.

# import the necessary packages
import numpy as np
import cv2
import sys
import os
from datetime import datetime
import face_recognition
import dlib

inputVideo = sys.argv[1]
basenameVideo = os.path.basename(inputVideo)
outputDirectory = sys.argv[2]
datetimeNow = datetime.now().strftime("%m-%d-%Y %H:%M:%S")

# Create a unique, timestamped folder to save the output of this run
videoOutputDirectory = outputDirectory + '/' + datetimeNow + '/' + basenameVideo + '/'
os.makedirs(videoOutputDirectory)

##METHOD 1 -- START
# initialize the HOG descriptor/person detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
##METHOD 1 -- STOP

##METHOD 2 -- START
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
##METHOD 2 -- STOP

##METHOD 5 -- START
# Initialize face detector, facial landmarks detector and face recognizer
faceDetector = dlib.get_frontal_face_detector()
##METHOD 5 -- STOP

cv2.startWindowThread()

## open webcam video stream
#cap = cv2.VideoCapture(0)
# create a VideoCapture object
cap = cv2.VideoCapture(inputVideo)

frameIndex = 0

while True:
	# Capture frame-by-frame; stop once the video is exhausted
	ret, frame = cap.read()
	if not ret:
		break

	# Convert to greyscale for faster detection; note OpenCV frames are BGR, not RGB
	gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

##METHOD 1 -- START
	if True:
		# detect people in the image with the HOG + SVM person detector
		persons, weights = hog.detectMultiScale(frame, winStride=(8, 8))

		# convert (x, y, w, h) boxes to (left, top, right, bottom)
		persons = np.array([[x, y, x + w, y + h] for (x, y, w, h) in persons])
		print("[INFO][1][{0}] Found {1} Persons.".format(frameIndex, len(persons)))

		for (left, top, right, bottom) in persons:
			print("A person is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
			match_image = frame[top:bottom, left:right]
			cv2.imwrite(videoOutputDirectory + str(frameIndex) + '_(' + str(top) + ',' + str(right) + ')(' + str(bottom) + ',' + str(left) + ')_persons_M1.jpg', match_image)
##METHOD 1 -- STOP

##METHOD 2 -- START
	if True:
		# Haar cascade face detection on the greyscale frame
		faces = faceCascade.detectMultiScale(
			gray,
			scaleFactor=1.05,
			minNeighbors=7,
			minSize=(50, 50)
		)

		# convert (x, y, w, h) boxes to (left, top, right, bottom)
		faces = np.array([[x, y, x + w, y + h] for (x, y, w, h) in faces])
		print("[INFO][2][{0}] Found {1} Faces.".format(frameIndex, len(faces)))

		for (left, top, right, bottom) in faces:
			print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
			match_image = frame[top:bottom, left:right]
			cv2.imwrite(videoOutputDirectory + str(frameIndex) + '_(' + str(top) + ',' + str(right) + ')(' + str(bottom) + ',' + str(left) + ')_faces_M2.jpg', match_image)
##METHOD 2 -- STOP

##METHOD 3 -- START
	if True:
		# face_recognition expects RGB input, so convert from OpenCV's BGR order
		faces = face_recognition.face_locations(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
		print("[INFO][3][{0}] Found {1} Faces.".format(frameIndex, len(faces)))

		for (top, right, bottom, left) in faces:
			print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
			match_image = frame[top:bottom, left:right]
			cv2.imwrite(videoOutputDirectory + str(frameIndex) + '_(' + str(top) + ',' + str(right) + ')(' + str(bottom) + ',' + str(left) + ')_faces_M3.jpg', match_image)
##METHOD 3 -- STOP

##METHOD 4 -- START
	if True:
		# CNN-based detection; again convert BGR to RGB for face_recognition
		faces = face_recognition.face_locations(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), model="cnn")
		print("[INFO][4][{0}] Found {1} Faces.".format(frameIndex, len(faces)))

		for (top, right, bottom, left) in faces:
			print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
			match_image = frame[top:bottom, left:right]
			cv2.imwrite(videoOutputDirectory + str(frameIndex) + '_(' + str(top) + ',' + str(right) + ')(' + str(bottom) + ',' + str(left) + ')_faces_M4.jpg', match_image)
##METHOD 4 -- STOP

##METHOD 5 -- START
	if True:
		# dlib's frontal face detector also expects RGB input
		faces = faceDetector(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

		print("[INFO][5][{0}] Found {1} Faces.".format(frameIndex, len(faces)))
		# Now process each face we found
		for k, face in enumerate(faces):
			top = face.top()
			bottom = face.bottom()
			left = face.left()
			right = face.right()
			print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
			match_image = frame[top:bottom, left:right]
			cv2.imwrite(videoOutputDirectory + str(frameIndex) + '_(' + str(top) + ',' + str(right) + ')(' + str(bottom) + ',' + str(left) + ')_faces_M5.jpg', match_image)
##METHOD 5 -- STOP
	
	frameIndex += 1

# When everything done, release the capture
cap.release()

Core Libraries and Initial Setup

The script begins by importing several critical libraries:

  • numpy: Essential for numerical computations in Python.
  • cv2 (OpenCV): A cornerstone in computer vision projects.
  • sys and os: For system-level operations and file management.
  • datetime: To handle date and time operations, crucial for timestamping.
  • face_recognition: A high-level facial recognition library.
  • dlib: A toolkit renowned for its machine learning and image processing capabilities.

Video File Handling

The script processes a video file whose path is passed as a command-line argument. It extracts the file name and prepares a unique output directory using the current date and time. This approach ensures that outputs from different runs are stored separately, avoiding overwrites and confusion.
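
For example, assuming the script is saved as detect_faces.py (a name chosen here for illustration), a run could look like this:

python3 detect_faces.py holiday.mp4 ./detections

With these arguments, the cropped detections from holiday.mp4 are written under ./detections/<timestamp>/holiday.mp4/.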

Methodological Overview

The script showcases five distinct methodologies for detecting faces and people:

  1. HOG Person Detector with OpenCV: Uses the Histogram of Oriented Gradients (HOG) descriptor combined with a Support Vector Machine (SVM) for detecting people.
  2. Haar Cascade for Face Detection: Employs OpenCV’s Haar Cascade classifier, a widely-used method for face detection.
  3. Face Detection Using face_recognition (Method 1): Implements the face_recognition library’s default face detection technique.
  4. CNN-Based Face Detection Using face_recognition (Method 2): Utilizes a Convolutional Neural Network (CNN) model within the face_recognition library for face detection.
  5. Dlib’s Frontal Face Detector: Applies Dlib’s frontal face detector, effective for detecting faces oriented towards the camera.

Processing Workflow

The script processes the video on a frame-by-frame basis. For each frame, it:

  • Converts the frame to grayscale when necessary. This conversion can speed up detection in methods that don’t require color information.
  • Sequentially applies each of the five detection methods.
  • For each detected face or person, it outputs the coordinates and saves a cropped image of the detection to the output directory.

Iterative Frame Analysis

The script employs a loop to process each frame of the video. It includes a frame index to keep track of the number of frames processed, which is particularly useful for debugging and analysis purposes.

Resource Management

After processing the entire video, the script releases the video capture object, ensuring that system resources are appropriately freed.

Key Takeaways

This script is a rich demonstration of integrating various face and person detection techniques in a single Python application. It highlights the versatility and power of Python in handling complex tasks like video processing and computer vision. This analysis serves as a guide for developers and enthusiasts looking to understand or venture into the realm of image processing with Python.