Fix Amazon EC2 SSH login

| November 20th, 2016

Locked out of an Amazon EC2 instance and SSH is not working

I blocked myself with the CSF firewall on a CentOS 7 server. The “Run Command” feature was not available, and there is no web terminal or VNC console like the one DigitalOcean provides. After reading around, I found out that the only solution is to stop the running instance, detach the volume, create another Linux instance (it shouldn’t be the same as the locked one), then attach the old volume to the newly created instance. After mounting the drive, it becomes accessible so you can change the SSH or firewall configuration.
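If you prefer the AWS CLI over the console, the same stop/detach/attach flow can be sketched as below. The instance and volume IDs are hypothetical placeholders; replace them with your own (this obviously needs configured AWS credentials):

```shell
# Hypothetical IDs -- substitute your own
INSTANCE_ID="i-0123456789abcdef0"   # the locked-out instance
VOLUME_ID="vol-0123456789abcdef0"   # its root volume
RESCUE_ID="i-0fedcba9876543210"     # the new rescue instance

# Stop the broken instance so its root volume can be detached
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

aws ec2 detach-volume --volume-id "$VOLUME_ID"
aws ec2 wait volume-available --volume-ids "$VOLUME_ID"

# Attach it to the rescue instance as /dev/sdf
# (newer kernels will expose it as /dev/xvdf)
aws ec2 attach-volume --volume-id "$VOLUME_ID" \
    --instance-id "$RESCUE_ID" --device /dev/sdf
```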


    1. Create a new Linux instance by clicking the Launch Instance button. Select the same zone or subnet as the old server; this is important in order to be able to attach the volume you want to edit
    1. After the new instance has been created, stop it (the original, locked instance must also be stopped before its root volume can be detached)
    1. Go to: “ELASTIC BLOCK STORE” -> “Volumes”
    1. Detach the volume you want to edit, then wait for its state to become “available”
    1. Right-click the volume again and click Attach. Choose the new instance, and give the device a name anywhere from sdf to sdp. This will be used to mount the disk later. On newer Linux kernels the device name is changed to xvdf through xvdp (see the AWS documentation on attaching EBS volumes)
    1. When it’s attached and says “in use”, start the new instance
    1. Log in to your newly created instance with SSH
    1. Type: lsblk. lsblk lists all the information needed about the attached drives.
      Mine showed two drives:

          -> xvda1
          -> xvdp1
    1. Now we have to mount the partition xvdp1. First create a mount directory: mkdir -m 000 /hdfix. This creates a directory accessible only by root. Then mount /dev/xvdp1 /hdfix -t xfs. In my case the filesystem type was xfs, found out by typing file -s /dev/xvdp1.
    1. In my case the volume didn’t mount. I received the following error:
      “mount: wrong fs type, bad option, bad superblock on /dev/xvdp1, missing codepage or helper program, or other error”…
      Viewing the log /var/log/messages, I found this:
      Nov 19 20:36:36 ip-x-x-x-x kernel: XFS (xvdp1): Filesystem has duplicate UUID ef6ba050-9890-416a-9020-90284d0d206 – can’t mount.

      Solved by mounting the partition with the nouuid option: mount -o nouuid /dev/xvdp1 /hdfix -t xfs
    1. Now that the disk is mounted, it’s time to make the changes. In my case I modified the file /hdfix/etc/ssh/sshd_config and fixed the Port value
    1. After finishing the needed modifications, I powered off the new instance, then detached the volume and reattached it to the original instance
    1. In my case SSH still didn’t work and I had to redo the steps above. I later found out that the sshd_config file permissions had been changed and the original server was not able to read it. I changed the permissions to 777, and later, on the original server, changed them back to 644
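On the rescue instance, the mount-and-fix steps above boil down to something like the following (run as root; the device name and mount point follow my case, so adjust them to what lsblk actually shows):

```shell
# Identify the attached volume and its filesystem type
lsblk
file -s /dev/xvdp1              # showed XFS in my case

# Mount point with permissions only for root
mkdir -m 000 /hdfix

# nouuid works around the duplicate-UUID error when mounting an XFS clone
mount -o nouuid /dev/xvdp1 /hdfix -t xfs

# Fix the broken config on the mounted volume
vi /hdfix/etc/ssh/sshd_config   # e.g. restore the Port value

# Unmount cleanly before detaching the volume
umount /hdfix
```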
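The permission problem in the last step is easy to check and restore with chmod and stat. Here is a small runnable illustration on a scratch file; the same commands apply to the real /hdfix/etc/ssh/sshd_config on the mounted volume:

```shell
# Simulate restoring sshd_config permissions on a scratch copy
# (the real file lives at /hdfix/etc/ssh/sshd_config on the mounted volume)
tmpconf="$(mktemp)"
echo "Port 22" > "$tmpconf"

chmod 644 "$tmpconf"                 # owner rw, everyone else read-only
perms="$(stat -c %a "$tmpconf")"
echo "permissions: $perms"           # prints "permissions: 644"

rm -f "$tmpconf"
```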

After fixing the problem, I terminated the instance and deleted its volume

If this didn’t solve the problem, try this solution here: