Monday, 30 June 2025

nmap

Run as root and replace fqdn with the target hostname; a step-by-step breakdown of the first one-liner follows the list.

  • target=fqdn; nmap -T4 -p$(nmap -Pn -T4 $target | grep '^[0-9]' | cut -d '/' -f 1 | tr '\n' ',' | sed s/,$//) -Pn -sVC $target
 
  • Quick scan and save results:
    • target=fqdn;nmap -sV -p 21,22,23,25,80,110,143,465,443,993,995,1891,3000,3306,4000 -oN ${target}_$(date +%F)_nmap.txt --version-intensity 0 ${target}
 
  • Scan the network for hosts that are up (ping scan only).
    • sudo nmap -oN output.txt -sn -n -PE 192.168.1.0/24
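  • Step-by-step sketch of the first one-liner above (the hostname is illustrative):
    • target=server.example.com
    • ports=$(nmap -Pn -T4 "$target" | grep '^[0-9]' | cut -d '/' -f 1 | tr '\n' ',' | sed 's/,$//')   # first pass: collect open ports
    • nmap -Pn -T4 -sVC -p"$ports" "$target"   # second pass: version detection and default scripts on just those ports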

Sunday, 29 June 2025

du

 # To sort directories/files by size:
du -sk *| sort -rn

# To show cumulative human-readable size:
du -sh

# To show cumulative human-readable size and dereference symlinks:
du -shL

# Show apparent size instead of disk usage (so sparse files will show greater
# than zero):
du -h --apparent-size

# To sort directories/files by size (human-readable):
du -sh * | sort -rh 

# To list the 20 largest files and folders under the current working directory:
du -ma | sort -nr | head -n 20

# To include hidden files and directories:
du -sh .[!.]* *
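
# To show the size of each first-level directory, sorted (a further example; assumes GNU du):
du -h --max-depth=1 | sort -rh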

Friday, 27 June 2025

dvd-slideshow

  • cd ~/tmp; dir2slideshow Photos_dvd   (creates the input file ~/tmp/Photos_dvd.txt from the images in Photos_dvd/)
  • dvd-slideshow -n "Summer Holidays" -o ~/tmp/Photos_vob -f ~/tmp/Photos_dvd.txt

cat  ~/tmp/Photos_dvd.txt

background:0::black
background:1
fadein:1
title:5:Summer Holidays
title:5:2025-04-21 to 2025-05-07
background:0::black
fadeout:1
background:2
"/mnt/data1/Music/A Thousand Years - Christina Perri.mp3":1:fadein:3:fadeout:2
"/mnt/data1/Music/U2 - (1987) The Joshua Tree/09-One Tree Hill.mp3":1:fadein:3:fadeout:2
fadein:1
title:5:Village
fadeout:1
/home/username/tmp/Photos_dvd/IMG_20250425_161448550.jpg:5::rotate:90
fadein:1
title:5:Town
fadeout:1
/home/username/tmp/Photos_dvd/IMG_20250430_091952576_HDR.jpg:5
fadeout:1
background:2


Sunday, 15 June 2025

bash

 Arithmetic operations

  •  eg: count=2; echo $(( count++ + 3 )); echo $count   # prints 5, then 3 (post-increment)
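  • Pre-increment uses the new value immediately; a quick check (illustrative):
    • count=2; echo $(( ++count + 3 )); echo $count   # prints 6, then 3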


Output and errors to null: 

  • ls cheap >/dev/null 2>&1

Redirect both the standard output (stdout) and standard error (stderr) of command3 to the file /tmp/mylog.txt (the order of the redirections matters):

  • command3 >/tmp/mylog.txt 2>&1
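  • A quick illustration of why the order matters (the missing path is just an example):
    • ls /no/such/file >/tmp/mylog.txt 2>&1    # the error message lands in the file
    • ls /no/such/file 2>&1 >/tmp/mylog.txt    # wrong order for this purpose: the error still reaches the terminal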


mutt

 # Create a new mailbox in IMAP
        + From the mailbox list (opened with 'c'):
                shift + C

# Move multiple messages to a folder (bulk operations)

  1. Select/tag them with alt+'t'
  2. Press ';s' in the index to save (move) all tagged messages to a folder


# Deleting / Undeleting all messages in mutt

  1. In mutt's index, press 'D' (uppercase D)
  2. It will prompt you with "Delete messages matching: "
    • Enter this pattern:
    • ~A
    • This marks all messages for deletion.
    • Conversely, uppercase 'U' with the same pattern undeletes multiple messages.

=================Set up Mutt with Gmail using OAuth2=============
First obtain an OAuth 2.0 Client ID and Secret from the Google Cloud Console.
Publish the app in the Google Cloud Console, or leave it in Testing and add Test users.
Download mutt_oauth2.py (shipped in mutt's contrib directory).
Edit the script and add the client_id and client_secret in the "google" registration section.
/home/user/.mutt/accounts/mutt_oauth2.py --authorize /home/user/.mutt/accounts/user\@gmail.com.tokens --verbose
 OAuth2 registration: google
 Preferred OAuth2 flow ("authcode" or "localhostauthcode" or "devicecode"): authcode
 Account e-mail address: user\@gmail.com
Visit the displayed URL to retrieve the authorization code, then enter the code at the prompt.
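To confirm the setup before wiring it into mutt, run the script with just the token file; it should print a valid access token (this is exactly what the refresh commands below do):
/home/user/.mutt/accounts/mutt_oauth2.py /home/user/.mutt/accounts/user\@gmail.com.tokens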

Edit ~/.muttrc (or a per-account file such as ~/.mutt/accounts/account.com.gmail.user) along these lines:

unset imap_pass
unset smtp_pass
unset imap_authenticators
unset imap_oauth_refresh_command
unset smtp_authenticators
unset smtp_oauth_refresh_command
set imap_user = "user@gmail.com"
set smtp_url = "smtp://user@gmail.com@smtp.gmail.com:587/"
set from = "user@gmail.com"
set realname = "user"
set folder = "imaps://imap.gmail.com:993"
set imap_authenticators="oauthbearer:xoauth2"
set imap_oauth_refresh_command="/home/user/.mutt/accounts/mutt_oauth2.py /home/user/.mutt/accounts/${imap_user}.tokens"
set smtp_authenticators=${imap_authenticators}
set smtp_oauth_refresh_command=${imap_oauth_refresh_command}
set spoolfile = "+INBOX"
set postponed = "+[Gmail]/Drafts"
set header_cache = ~/.mutt/com.gmail.user/cache/headers
set message_cachedir = ~/.mutt/com.gmail.user/cache/bodies
set certificate_file = ~/.mutt/com.gmail.user/certificates
set trash = +'Deleted'
set signature = ~/.mutt/signature_gmail
set ssl_starttls = yes
set ssl_force_tls = yes

================= 

Tuesday, 10 June 2025

tar

# To extract an uncompressed archive:
tar -xvf /path/to/foo.tar

# To extract a .tar in specified directory:
tar -xvf /path/to/foo.tar -C /path/to/destination/

# To create an uncompressed archive:
tar -cvf /path/to/foo.tar /path/to/foo/

# To extract a .tgz or .tar.gz archive:
tar -xzvf /path/to/foo.tgz
tar -xzvf /path/to/foo.tar.gz

# To create a .tgz or .tar.gz archive:
tar -czvf /path/to/foo.tgz /path/to/foo/
tar -czvf /path/to/foo.tar.gz /path/to/foo/

# To list the contents of a .tgz or .tar.gz archive:
tar -tzvf /path/to/foo.tgz
tar -tzvf /path/to/foo.tar.gz

# To extract a .tar.bz2 archive:
tar -xjvf /path/to/foo.tar.bz2

# To create a .tar.bz2 archive:
tar -cjvf /path/to/foo.tar.bz2 /path/to/foo/

# To list the contents of a .tar.bz2 archive:
tar -tjvf /path/to/foo.tar.bz2

# To create a .tgz archive and exclude all jpg,gif,... from the tgz:
tar -czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/

# To use parallel (multi-threaded) implementation of compression algorithms:
tar -z ... -> tar -Ipigz ...
tar -j ... -> tar -Ipbzip2 ...
tar -J ... -> tar -Ipixz ...
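
# For example, to create a .tgz using pigz for parallel compression (assuming pigz is installed):
tar -I pigz -cvf /path/to/foo.tgz /path/to/foo/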

# To append a new file to an old tar archive:
tar -rf <archive.tar> <new-file-to-append>

# Exclude directory
tar --exclude='Documents/wc' --exclude='Documents/Myapps' -czvf /tmp/foo.tgz Documents/


Sunday, 8 June 2025

freechess

freechess.org

https://www.freechess.org/Help/QuickGuide/index.html

Using eboard on Linux: set the game time to 10 minutes with a 12-second increment, then create a game.

  • set time 10
  • set inc 12
  • getgame

gpg

 # Create a key
 gpg --gen-key


# Show keys
  To list a summary of all keys

    gpg --list-keys

  To show your public key

    gpg --armor --export

  To show the fingerprint for a key

    gpg --fingerprint KEY_ID

# Search for keys
  gpg --search-keys 'user@emailaddress.com'


# To Encrypt a File
  gpg --encrypt --recipient 'user@emailaddress.com' example.txt


# To Decrypt a File
  gpg --output example.txt --decrypt example.txt.gpg


# Export keys
  gpg --output ~/public_key.txt --armor --export KEY_ID
  gpg --output ~/private_key.txt --armor --export-secret-key KEY_ID

  Where KEY_ID is the 8 character GPG key ID.

  Store these files to a safe location, such as a USB drive, then
  remove the private key file.

    shred -zu ~/private_key.txt

# Import keys
  Retrieve the key files which you previously exported.

    gpg --import ~/public_key.txt
    gpg --allow-secret-key-import --import ~/private_key.txt

  Then delete the private key file.

    shred -zu ~/private_key.txt

# Revoke a key
  Create a revocation certificate.

    gpg --output ~/revoke.asc --gen-revoke KEY_ID

  Where KEY_ID is the 8 character GPG key ID.

  After creating the certificate import it.

    gpg --import ~/revoke.asc

  Then ensure that key servers know about the revocation.

    gpg --send-keys KEY_ID

# Signing and Verifying files
  If you are uploading files to launchpad you may also want to include
  a GPG signature file.

    gpg -ba filename

  or if you need to specify a particular key:

    gpg --default-key <key ID> -ba filename

  This then produces a file with a .asc extension which can be uploaded.
  If you need to set the default key more permanently then edit the
  file ~/.gnupg/gpg.conf and set the default-key parameter.

  To verify a downloaded file using its signature file.

  gpg --verify filename.asc

# Signing Public Keys
  Import the public key or retrieve it from a server.

    gpg --keyserver <keyserver> --recv-keys <Key_ID>

  Check its fingerprint against any previously stated value.

    gpg --fingerprint <Key_ID>

  Sign the key.

    gpg --sign-key <Key_ID>

  Upload the signed key to a server.

    gpg --keyserver <keyserver> --send-key <Key_ID>

# Change the email address associated with a GPG key
  gpg --edit-key <key ID>
  adduid

  Enter the new name and email address. You can then list the addresses with:

    list

  If you want to delete a previous email address first select it:

    uid <list number>

  Then delete it with:

    deluid

  To finish type:

    save

  Publish the key to a server:

    gpg --send-keys <key ID>

# Creating Subkeys
  Subkeys can be useful if you don't wish to have your main GPG key
  installed on multiple machines. In this way you can keep your
  master key safe and have subkeys with expiry periods or which may be
  separately revoked installed on various machines. This avoids
  generating entirely separate keys and so breaking any web of trust
  which has been established.

    gpg --edit-key <key ID>

  At the prompt type:

    addkey

  Choose RSA (sign only), 4096 bits and select an expiry period.
  Entropy will be gathered.

  At the prompt type:

    save

  You can also repeat the procedure, but selecting RSA (encrypt only).
  To remove the master key, leaving only the subkey/s in place:

    gpg --export-secret-subkeys <subkey ID> > subkeys
    gpg --export <key ID> > pubkeys
    gpg --delete-secret-key <key ID>

  Import the keys back.

    gpg --import pubkeys subkeys

  Verify the import.

    gpg -K

  Should show sec# instead of just sec.
 
# High-quality options for gpg for symmetric (secret key) encryption
  This is what knowledgeable people consider a good set of options for
  symmetric encryption with gpg, giving a high-quality result.
 
  gpg \
    --symmetric \
    --cipher-algo aes256 \
    --digest-algo sha512 \
    --cert-digest-algo sha512 \
    --compress-algo none -z 0 \
    --s2k-mode 3 \
    --s2k-digest-algo sha512 \
    --s2k-count 65011712 \
    --force-mdc \
    --pinentry-mode loopback \
    --armor \
    --no-symkey-cache \
    --output somefile.gpg \
    somefile # to encrypt
    
  gpg \
    --decrypt \
    --pinentry-mode loopback \
    --output somefile \
    somefile.gpg # to decrypt

 

You can also change your passphrase using the GNU Privacy Guard (GPG):
    Enter gpg --edit-key key-id
    At the gpg prompt, enter passwd
    Enter your current passphrase
    Enter your new passphrase twice
    Enter save
 

pax

# List the contents of an archive:
pax -f archive.tar

# List the contents of a gzipped archive:
pax -zf archive.tar.gz

# Create an archive from files:
pax -wf target.tar path/to/file1 path/to/file2 path/to/file3

# Create an archive from files, using output redirection:
pax -w path/to/file1 path/to/file2 path/to/file3 > target.tar

# Extract an archive into the current directory:
pax -rf source.tar

# Copy to a directory, while keeping the original metadata; `target/` must exist:
pax -rw path/to/file1 path/to/directory1 path/to/directory2 target/
 

# Replace special characters while copying (the -s flag takes a substitution expression, like sed's s///).

  • pax -rw -s '/[?<>\\:*|\"]/_/g' /some/source/path /some/destination/path
  • eg: pax -rw -s '/[?<>\\:*|\"]/_/g' w\*ater .  (will change w*ater to w_ater)

Saturday, 7 June 2025

netcat

# To open a TCP connection from <src-port> to <dest-port> of <dest-host>, with a timeout of <seconds>
nc -p <src-port> -w <seconds> <dest-host> <dest-port>

# To open a UDP connection to <dest-port> of <dest-host>:
nc -u <dest-host> <dest-port>

# To open a TCP connection to <port> of <dest-host>, using <source-host> as the IP for the local end of the connection:
nc -s <source-host> <dest-host> <port>

# To create and listen on a UNIX-domain stream socket:
nc -lU /var/tmp/dsocket

# To connect to <dest-port> of <dest-host> via an HTTP proxy at <proxy-host>,
# <proxy-port>. This example could also be used by ssh(1); see the ProxyCommand
# directive in ssh_config(5) for more information.
nc -x<proxy-host>:<proxy-port> -Xconnect <dest-host> <dest-port>

# The same example again, this time enabling proxy authentication with username "ruser" if the proxy requires it:
nc -x<proxy-host>:<proxy-port> -Xconnect -Pruser <host> <port>

# To choose the source IP for a port test, use the -s option
nc -zv -s source_IP target_IP port


# Start nc as a server for testing

  • while true; do printf 'HTTP/1.1 200 OK\n\n%s' "$(cat index_qa.html)" | nc -l 7070; done
  • Copy data. This requires netcat on both servers.
    • Destination box: nc -l -p 2342 | tar -C /target/dir -xzf -
    • Source box: tar -cz /source/dir | nc Target_Box 2342
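  • Transfer a single file without tar (a further example; port and filenames are illustrative):
    • Destination box: nc -l -p 2342 > received_file
    • Source box: nc Target_Box 2342 < file_to_send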


7z

Create a password-protected archive (-p with no value prompts for the password):
7za a -p /tmp/photos_2015-11-11_181303.7z photos/*

Create a split
7za a -v500m -tzip mail 2.pst -----> create a split of 2.pst
7za t mail.zip.001 -----> test the split


update: 7z u archive.zip *.doc
extract: 7z x archive.zip
compression: -mx0 copy, -mx1 fastest, -mx3 fast, -mx5 normal, -mx7 maximum, -mx9 ultra; -mmt enable/disable multithreading; -ms=on enable solid archive mode
type switch: -t7z, -tgzip, -tzip, -tbzip2, -ttar, -tiso, -tudf
volume switch: -v to split the archive into volumes

eg: 7z a -tzip archive.zip *.jpg -mx0
7za a pw.7z *.txt -pSECRET -----> create a password-protected archive
7za.exe a archive.7z Z*.* -ssc  ----> on Windows, -ssc enables case-sensitive file matching
Z*.*:       select only files whose first letter is a capital Z

7z x mail.zip -aoa
7z:       use the 7-zip executable
x:        use the extract command
mail.zip: extract files from this archive
-aoa:     overwrite all existing files. risky!
Switch: -aoa
Overwrite all destination files.

Switch: -aos
Skip over existing files without overwriting.
Use this for files where the earliest version is most important.

Switch: -aou
Avoid name collisions.
New files extracted will have a number appended to their names.
(You will have to deal with them later.)

Switch: -aot
Rename existing files.
This will not rename the new files, just the old ones already there.


-----------------------------
You can do incremental backups by reversing the direction in time: always keep the latest backup as the full copy and keep differential ("decrement") files going back into the past.
# create the difference step into the past
    7z u {base archive.7z} {folder to archive} -mx=9 -u- -up1q1r3x1y1z0w1!{decrement.7z}

# update the Archive to the latest files
    7z u {base archive.7z} {folder to archive} -mx=9 -up0q0x2
The base archive always contains the latest version; by applying the decrements step by step you can recreate older versions. With a little scripting you can apply the right numbering to the decrement files, as sketched below.
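
# A minimal sketch of such a script (archive names and paths are illustrative, not from the original notes):
    base=/backups/docs.7z
    src=/home/user/Documents
    dec=/backups/docs_$(date +%F).decrement.7z
    # write the dated step into the past, then bring the base archive up to date
    7z u "$base" "$src" -mx=9 -u- -up1q1r3x1y1z0w1\!"$dec"
    7z u "$base" "$src" -mx=9 -up0q0x2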
-----------------------------
Exclude .svn from 7zip backup
To exclude all .svn and .git
Linux
    7z a Documents.7z Documents/ -xr\!?svn -xr\!Documents/Books -xr\!Documents/hydrawordlist -xr\!Documents/vimwiki* -xr\!?git
Windows
    7z a -r -t7z -y -xr!?svn\* test_data.7z test_data

-----------------------------
Delete source (-sdel) after compression
    7z a -sdel archive.aes.7z archive.aes


robocopy

 # Copy from g: to c: but exclude files *.iso *.log and directories g:\dir1 etc.
robocopy G: C:\backup /MIR /Z /LOG:C:\todaysdate-backup.log /XF *.iso *.log *.au /XD G:\dir1 G:\dir3
# Another way to exclude
Take this example list of exclusions stored in the following file:

exclude.rcj
/XF
    *.pyc
    *.pyo
    *.pyd

/XD
    __pycache__
    .pytest_cache

Note that multiple switches can live inside the file, with the names separated by newlines for better readability.

Our batch script can then be something along the lines of:

backup.bat

SET _backupDir=backup_path\
SET _excl="%_backupDir%exclude.rcj"

robocopy src_path dest_path /JOB:%_excl%

bc

 # bc examples
pi=$(echo "scale=10; 4*a(1)" | bc -l)
echo $pi
3.1415926532

echo "scale=10; 4*a(1)" | bc -l
3.1415926532

echo "scale=10; 35*4763" | bc -l
166705
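
# scale controls the number of decimal places for division (a further example):
echo "scale=4; 10/3" | bc -l
3.3333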

atop

  •  Store system and process activity in compressed binary form to a file, sampling every ten minutes for an hour:
    • atop -w /tmp/atop.raw 600 6
  • View the contents of this file interactively:
    • atop -r /tmp/atop.raw
    • Jump one interval forward by pressing “t”, to go backward press “T”. To jump to a specified time, press “b”.
  • The Ubuntu package ships atop.service, which logs samples to /var/log/atop/atop_YYYYMMDD files at a 600-second interval.

feh

 feh Passport*.jpg --zoom 20%

exim

# exim
# Manage queue of Exim message transfer agent (MTA) service

# Print amount of enqueued messages
exim -bpc

# List messages in the queue (time queued, size, message-id, sender, recipient)
exim -bp

# Print summary of enqueued messages
exim -bp | exiqsumm

# Check what is Exim doing right now
exiwhat

# Display all of Exim's configuration settings
exim -bP

# Search the queue for messages from a specific sender
exiqgrep -f [user]@domain

# Search the queue for messages from a specific recipient
exiqgrep -r [user]@domain

# Print messages older than the specified number of seconds
# (1 day = 86400 seconds)
exiqgrep -o 86400

# Print messages younger than the specified number of seconds
exiqgrep -y 86400

# Start a queue run
exim -q -v

# Remove message from the queue
exim -Mrm <message-id> [ <message-id> ... ]

# Freeze message
exim -Mf <message-id> [ <message-id> ... ]

# Thaw (unfreeze) a message
exim -Mt <message-id> [ <message-id> ... ]

# Deliver message, whether it's frozen or not,
# whether the retry time has been reached or not
exim -M <message-id> [ <message-id> ... ]

# Force message to fail and bounce as "canceled by administrator"
exim -Mg <message-id> [ <message-id> ... ]

# View message's headers
exim -Mvh <message-id>

# View message's body
exim -Mvb <message-id>

# View message's logs
exim -Mvl <message-id>

# Add a recipient to a message
exim -Mar <message-id> <address> [ <address> ... ]

# Edit the sender of a message
exim -Mes <message-id> <address>

# test how Exim will route a given address
exim -bt email@domain.tld
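
# A common combination (sketch): remove all frozen messages from the queue
# (-z selects frozen messages, -i prints only their message-ids)
exiqgrep -z -i | xargs exim -Mrm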

nnn

# Find and list
find -maxdepth 1 -size +1M -print0 | nnn
#  or  
# redirect a list from a file:
nnn < files.txt

# To show video files in the current directory (adjust -maxdepth if needed), run: list video
list ()
{
    find . -maxdepth 1 | file -if- | grep "$1" | awk -F: '{printf "%s%c", $1, 0}' | nnn
}

rsync

 # To copy files (first example: remote to local; second: local to remote), preserving file properties and symlinks (-a), compressing for faster transfer (-z), verbose (-v).
rsync -avz host:file1 :file1 /dest/
rsync -avz /source host:/dest

# Copy files using checksum (-c) rather than time to detect if the file has changed. (Useful for validating backups).
rsync -avc /source/ /dest/

# Copy contents of /src/foo to destination:

# This command will create /dest/foo if it does not already exist
rsync -auv /src/foo /dest

# Explicitly copy /src/foo to /dest/foo
rsync -auv /src/foo/ /dest/foo

# rsync exclude for directory exclusion
rsync -avv /src/foo/ /dest/foo/ --exclude=/tools/
The leading slash in the exclude is important: it anchors the match at the base of the source tree. Assume the following layout under the source:
tools/
src/
 - program.c
 - tools/

So there is a top-level tools/ and src/tools/. Excluding tools/ excludes both directories named tools, whereas excluding /tools/ excludes only the former (which is usually what is intended).

# Preserve hard links (-H) while copying:
sudo rsync -az -H --delete --numeric-ids --progress /home/backups/vostroRsnapshot/ /mnt/tmp/backups/vostroRsnapshot/

# with ssh port number
rsync -av -e "ssh -p PORT_NUMBER" <SOURCE> <DESTINATION>:<PATH>
rsync -av -e 'ssh -p 3022' . user@127.0.0.1:~/make-this_folder

# with ssh port number and key
rsync -av -e "ssh -i private.pem -p PORT_NUMBER" <SOURCE> <DESTINATION>:<PATH>

Friday, 6 June 2025

firewalld


  • get list of zones
  1. firewall-cmd --list-all-zones
  2. firewall-cmd --get-active-zones
  • add port rule
  1. firewall-cmd --zone=public --add-port=443/tcp --permanent
  2. firewall-cmd --reload
  • add service rule
  1. firewall-cmd --zone=public --add-service=ssh --permanent
  2. firewall-cmd --reload
  • remove rules for firewalld
  1. firewall-cmd --zone=public --remove-port=5000/tcp --permanent
  2. firewall-cmd --zone=public --remove-port=5001/tcp --permanent
  3. firewall-cmd --zone=public --remove-service=http --permanent
  4. firewall-cmd --reload
  • list the rules
  1. firewall-cmd --info-zone=public


public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: ssh dhcpv6-client
ports: 1891/tcp 80/tcp 443/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

  • enable logging
  1. firewall-cmd --get-log-denied ( to check status)
  2. firewall-cmd --set-log-denied=all ( to enable all /unicast/broadcast/multicast/off )
  3. firewall-cmd --get-log-denied (to verify)
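  4. dmesg | grep -i reject (denied packets are written to the kernel log; the exact log prefix varies, so this grep is only a sketch)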


  • create a new rule to allow a specific IP to connect to port 4567 on the server.
    • without rich-rules (recommended)
  1. firewall-cmd --new-zone=special --permanent
  2. firewall-cmd --reload
  3. firewall-cmd --zone=special --add-source=192.0.2.4/32 --permanent
  4. firewall-cmd --zone=special --add-port=4567/tcp --permanent
  5. firewall-cmd --reload
    • with rich-rules
  1. firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="1.2.3.4/32" port protocol="tcp" port="4567" accept'
  2. firewall-cmd --reload 


  • drop traffic from a source IP
  1. firewall-cmd --zone=public --permanent --add-rich-rule='rule family="ipv4" source address="104.149.159.14" drop'
  2. firewall-cmd --reload

 

  • remove rule of source ip
  1. firewall-cmd --zone=public --permanent --remove-rich-rule='rule family="ipv4" source address="104.149.159.14" drop'
  2. firewall-cmd --reload 

Thursday, 5 June 2025

scalc

 

Libre Office

If/Else statement:

  • Nested IF on the value of D6: =IF(D6>=0.2,1, IF(D6<=-0.2,-1, IF(D6>-0.2,0, IF(D6<0.2,0))))
  • Calculate days since the date in A10 (returns "na" if L10 is set): =IF(L10,"na",DAYS(TODAY(),A10))

IFERROR to handle division by zero: =IFERROR(<your formula>,0)

  • eg: =IFERROR(SUM((K22+L22)/C22)*100,0)

 

find

# Find newer or older than:
From man find:
    -newerXY reference
    Compares the timestamp of the current file with reference. The reference argument is normally the name of a file (and one of its timestamps is used for the comparison) but it may also be a string describing an absolute time. X and Y are placeholders for other letters, and these letters select which time belonging to reference is used for the comparison.
              a   The access time of the file reference
              B   The birth time of the file reference
              c   The inode status change time of reference
              m   The modification time of the file reference
              t   reference is interpreted directly as a time
Example:

  • find -newermt "mar 03, 2010" -ls
  • find -newermt yesterday -ls
  • find -newermt "mar 03, 2010 09:00" -not -newermt "mar 11, 2010" -ls


# exclude tmp and scripts directories from find

  • find . -type f -name "*_peaks.bed" ! -path "./tmp/*" ! -path "./scripts/*"

Wednesday, 4 June 2025

flexbackup

https://manpages.ubuntu.com/manpages/jammy/man1/flexbackup.1.html
Perl-based backup tool; configure /etc/flexbackup.conf.

  • flexbackup [...] -level <0-9 | full | differential | incremental> Change  backup level to a number "0-9", or one of the symbolic names: "full" (level 0); "differential" (level 1); "incremental" (previous backup level + 1).
    • eg backup: flexbackup -set set1 full
    • eg backup: flexbackup -set set1 differential
    • eg backup: flexbackup -set set1 incremental
  • list: flexbackup -list home-stansoft.0.202402071723.tar.gz
  • extract: mkdir restore;cd restore; flexbackup -extract  ../home-stansoft.0.202402071723.tar.gz
  • extract one file: flexbackup -extract -onefile <filename>
    • Extract  (restore)  the  single  file  named  "filename"  into your current working directory.
      • eg: flexbackup -extract ../home-username.2.afio-gz -onefile *rdp.sh (use an asterisk wildcard; don't prefix the -onefile path with ./)
      • eg: flexbackup -extract ../home-username.9.afio-gz -onefile *dger/*plot/nifty_wsj* (use an asterisk wildcard; don't prefix the -onefile path with ./)
      • eg: flexbackup -extract /home/backups/flexbackup/home-username-Documents.0.202402071828.tar.gz -onefile "SystemPc.pdf"
encryption (https://github.com/2sh/aes-pipe)
  • Create key:
    • head -c 3705 /dev/random | uuencode -m - | head -n 66 | tail -n 65 | gpg --symmetric -a > /home/backups/flexbackup/encryptionkey.gpg
  • Encrypt backup:
    • cat /home/backups/flexbackup/etc.4.afio-gz | aespipe  -K /home/backups/flexbackup/encryptionkey.gpg > /home/backups/flexbackup/etc.4.afio-gz.aes
  • Decrypt backup:
    • aespipe -d -K /home/backups/flexbackup/encryptionkey.gpg < /home/backups/flexbackup/etc.4.afio-gz.aes | afio -t -
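  • Decrypt to a file and restore from it (a sketch following the commands above; the temporary filename is illustrative):
    • aespipe -d -K /home/backups/flexbackup/encryptionkey.gpg < /home/backups/flexbackup/etc.4.afio-gz.aes > /tmp/etc.4.afio-gz
    • mkdir restore; cd restore; flexbackup -extract /tmp/etc.4.afio-gz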

Sunday, 1 June 2025

split

 # To split a large text file into smaller files of 1000 lines each:
split <file> -l 1000

# To split a large binary file into smaller files of 10M each:
split <file> -b 10M

# To consolidate split files into a single file:
cat x* > <file>
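
# To split with numeric suffixes and a custom prefix, then reassemble (a further example):
split -b 10M -d <file> part_
cat part_* > <file>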

TermRecord

TermRecord is a simple terminal session recorder with easy-to-share, self-contained HTML output.

TermRecord -o /tmp/session.html