Sunday, June 23, 2024

check open port with nc

nc -z hostname port

e.g.: check if the Samba port is reachable

# nc -z nas.local 445


or to get only up or down:

 # nc -z nas.local 445 && echo up || echo down
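The same check can be wrapped with a timeout so a filtered port doesn't hang forever; as a sketch, this falls back to bash's /dev/tcp when nc is not installed (host 127.0.0.1 and port 1 are just example values):

```shell
#!/usr/bin/env bash
# Port check with a 3-second timeout: -z only probes, -w sets the timeout.
check_port() {
  local host=$1 port=$2
  if command -v nc >/dev/null 2>&1; then
    nc -z -w 3 "$host" "$port"
  else
    # bash-only fallback: open a TCP connection via the /dev/tcp pseudo-device
    timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
  fi
}

check_port 127.0.0.1 1 && echo up || echo down
```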

Friday, November 26, 2021

Mount QNAP Disk on Linux Server

choose the commands you need ;-)

I was able to recover files from a single RAID 1 disk out of a QNAP TS-253A, using RHEL 8.5.

 

Look for disks

sudo lshw -class disk -short

->

H/W path         Device     Class          Description
======================================================
/0/100/1f.2/1    /dev/sdb   disk           4TB WDC WD40EFRX-68N


Partitions...

grep sdb /proc/partitions

->

major minor  #blocks  name
   8       16 3907018584 sdb
   8       17     530125 sdb1
   8       18     530142 sdb2
   8       19 3897063763 sdb3
   8       20     530144 sdb4
   8       21    8353796 sdb5


the biggest one is the one ;-)
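To pick it programmatically, sort on the #blocks column. A sketch, demonstrated on a saved copy of the table above so no real disk is needed:

```shell
# Print the partition of DISK with the most blocks (/proc/partitions format).
largest_part() {  # usage: largest_part FILE DISK
  awk -v d="$2" '$4 ~ ("^" d "[0-9]+$") { print $3, $4 }' "$1" \
    | sort -n | tail -1 | cut -d' ' -f2
}

# demo on a saved copy of the output above; on a live system use /proc/partitions
cat > /tmp/partitions.sample <<'EOF'
major minor  #blocks  name
   8       16 3907018584 sdb
   8       17     530125 sdb1
   8       18     530142 sdb2
   8       19 3897063763 sdb3
   8       20     530144 sdb4
   8       21    8353796 sdb5
EOF
largest_part /tmp/partitions.sample sdb   # sdb3
```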

sudo mdadm --examine /dev/sdb3

->

/dev/sdb3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : db8544d5:f3913fc6:b6f8d0e0:7265ecf1
           Name : 2
  Creation Time : Sat Dec 30 02:12:17 2017
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 7794127240 sectors (3.63 TiB 3.99 TB)
     Array Size : 3897063616 KiB (3.63 TiB 3.99 TB)
  Used Dev Size : 7794127232 sectors (3.63 TiB 3.99 TB)
   Super Offset : 7794127504 sectors
   Unused Space : before=0 sectors, after=264 sectors
          State : clean
    Device UUID : c3fc65bb:e3086e6d:3beed8ef:a8b8f437

    Update Time : Fri Nov 26 09:44:05 2021
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 95b36245 - correct
         Events : 60271


   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
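When more than one candidate partition turns up, the "Array UUID" line is what groups members of the same array. A small field extractor, demonstrated on a saved snippet of the output above (the sample file is only for illustration; on a live system pipe `mdadm --examine` into it):

```shell
# Print the value of FIELD from saved `mdadm --examine` output.
examine_field() {  # usage: examine_field FILE FIELD
  sed -n "s/^ *$2 : //p" "$1"
}

cat > /tmp/examine.sample <<'EOF'
     Array UUID : db8544d5:f3913fc6:b6f8d0e0:7265ecf1
     Raid Level : raid1
EOF
examine_field /tmp/examine.sample "Array UUID"   # db8544d5:f3913fc6:b6f8d0e0:7265ecf1
```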
 


List RAID configs with mdadm

sudo mdadm --detail /dev/md*

-> lists the RAID configs. Only the one I wanted is shown here:

/dev/md125:
           Version : 1.0
     Creation Time : Sat Dec 30 02:12:17 2017
        Raid Level : raid1
        Array Size : 3897063616 (3.63 TiB 3.99 TB)
     Used Dev Size : 3897063616 (3.63 TiB 3.99 TB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Fri Nov 26 09:44:05 2021
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : 2
              UUID : db8544d5:f3913fc6:b6f8d0e0:7265ecf1
            Events : 60271

    Number   Major   Minor   RaidDevice State
       3       8       19        0      active sync   /dev/sdb3
       -       0        0        1      removed


Nothing listed?

sudo mdadm --assemble --scan

Well, my second disk is missing, so:

sudo mdadm --assemble /dev/md125 /dev/sdb3 --run
->

mdadm: /dev/md125 has been started with 1 drive (out of 2).
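/proc/mdstat is a quick way to confirm the degraded-but-running state. The sample below mimics roughly what it should look like after the assemble above ("[2/1] [U_]" = a two-disk mirror running on one member); on a live system just `cat /proc/mdstat`:

```shell
# Count degraded mirrors in an mdstat snapshot ("[U_]" marks a missing member).
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md125 : active raid1 sdb3[3]
      3897063616 blocks super 1.0 [2/1] [U_]
EOF
grep -c '\[U_\]' /tmp/mdstat.sample   # 1
```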


Another scan

sudo lvmdiskscan

->

  /dev/sda1  [       2,00 GiB]
  /dev/sda2  [    <215,01 GiB] LVM physical volume
  /dev/md125 [      <3,63 TiB] LVM physical volume
  0 disks
  1 partition
  0 LVM physical volume whole disks
  2 LVM physical volumes


Check for volume groups

sudo vgscan

-> 

  WARNING: PV /dev/md125 in VG vg289 is using an old PV header, modify the VG to update.
  Found volume group "vg289" using metadata type lvm2
  Found volume group "rhel" using metadata type lvm2
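If you want to script the next steps, the VG names can be pulled out of that output with sed (demonstrated on a saved copy of the lines above):

```shell
# Extract volume group names from saved vgscan output.
cat > /tmp/vgscan.sample <<'EOF'
  Found volume group "vg289" using metadata type lvm2
  Found volume group "rhel" using metadata type lvm2
EOF
sed -n 's/.*Found volume group "\([^"]*\)".*/\1/p' /tmp/vgscan.sample
```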


Activate volume group

vgchange -a y vg289


Which logical volume?

sudo lvscan

->

  WARNING: PV /dev/md125 in VG vg289 is using an old PV header, modify the VG to update.
  ACTIVE            '/dev/vg289/lv545' [37,16 GiB] inherit
  ACTIVE            '/dev/vg289/lv2' [3,59 TiB] inherit
...

the second looks interesting ...

Activate logical volume

sudo lvchange -a y vg289/lv2

->

WARNING: PV /dev/md125 in VG vg289 is using an old PV header, modify the VG to update.


Mount it

mkdir /mnt/rescuedisk

sudo mount -t ext4 /dev/vg289/lv2 /mnt/rescuedisk

(for pure recovery, add -o ro to mount read-only and avoid accidental writes)


Look for files

cd /mnt/rescuedisk

ls -l


Wednesday, January 2, 2019

ESD to WIM conversion

Convert the install file from your Windows 10 install media (ISO, DVD) from ESD to WIM format.

1) Get the right version:
DISM /Get-WimInfo /wimfile:d:\sources\install.esd


2) Convert the image from source to destination, using the index of your version:
DISM /Export-Image /SourceImageFile:d:\sources\install.esd /SourceIndex:5 /DestinationImageFile:c:\ESD\install.wim /Compress:Max /CheckIntegrity

When using the WIM with DISM /RestoreHealth, append the right index number to the source path:
DISM /Online /Cleanup-Image /RestoreHealth /Source:Wim:c:\ESD\install.wim:5 /limitaccess

Tuesday, June 19, 2018

Wednesday, April 18, 2018

find out your installed version of Debian

cat /etc/issue
or
cat /etc/debian_version
or if installed
lsb_release -da
or when using systemd
hostnamectl
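On any modern (systemd-era) distribution, /etc/os-release is the standard place to look, not just on Debian. A sketch with a tiny parser, demonstrated on a sample file so it runs anywhere (the sample values are examples; on a real system point it at /etc/os-release):

```shell
# Read KEY from an os-release style file (KEY=value, value may be quoted).
os_release_field() {  # usage: os_release_field FILE KEY
  sed -n "s/^$2=//p" "$1" | tr -d '"'
}

cat > /tmp/os-release.sample <<'EOF'
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
EOF
os_release_field /tmp/os-release.sample VERSION_ID   # 12
```

e.g. `os_release_field /etc/os-release PRETTY_NAME` on the live system.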