  
====== Useful Algosec commands, troubleshooting and information ======
//
by Patrik Hermansson
//
===== Useful commands, summary =====
  * [[usefullcommands#Regarding passwords|Regarding passwords]]
  
===== Troubleshooting, summary =====
  * [[usefullcommands#Backup in GUI won’t work (backup is already running error with manual backup)|Backup in GUI won’t work (backup is already running error with manual backup)]]
  * [[usefullcommands#To view speed and duplex on an interface|To view speed and duplex on an interface]]
  * [[usefullcommands#Sync the database with the reports directory (problem with searching in objects)|Sync the database with the reports directory (problem with searching in objects)]]
  * [[usefullcommands#To clear typed history in Bash|To clear typed history in Bash]]
  * [[usefullcommands#To enable debug mode in FireFlow (CLI)|To enable debug mode in FireFlow (CLI)]]
  * [[usefullcommands#Verify that the garbage cleanup script has been running|Verify that the garbage cleanup script has been running]]
  * [[usefullcommands#AFA traffic simulation in CLI|AFA traffic simulation in CLI]]
  * [[usefullcommands#Kill (shut down) stuck or big application queries (CLI)|Kill (shut down) stuck or big application queries (CLI)]]
  * [[usefullcommands#Low free disk space on / or /data partitions|Low free disk space on / or /data partitions]]
  * [[usefullcommands#Check the status of the vacuum DB function (start/stop)|Check the status of the vacuum DB function (start/stop)]]
  * [[usefullcommands#Metro service does not start ("unresponsive") after breaking cluster|Metro service does not start ("unresponsive") after breaking cluster]]
  * [[usefullcommands#A short SQL query to get all interfaces with associated IPs from all the firewalls in the map|A short SQL query to get all interfaces with associated IPs from all the firewalls in the map]]
  * [[usefullcommands#License will not install in CLI|License will not install in CLI]]
  * [[usefullcommands#ABF application flows will not save|ABF application flows will not save]]
  * [[usefullcommands#Local account admin could not log in|Local account admin could not log in]]
  * [[usefullcommands#Cancel connectivity check in ABF|Cancel connectivity check in ABF]]
  * [[usefullcommands#ART / Elasticsearch / Kibana backup problems|ART / Elasticsearch / Kibana backup problems]]
  * [[usefullcommands#Fetchmail troubleshooting, will not get (fetch) mails|Fetchmail troubleshooting, will not get (fetch) mails]]
  * [[usefullcommands#Menu bar not showing in AppViz after upgrade to A30.10|Menu bar not showing in AppViz after upgrade to A30.10]]
  * [[usefullcommands#Verify disk speed is up to standard|Verify disk speed is up to standard]]
  * [[usefullcommands#Username no longer case sensitive?|Username no longer case sensitive?]]
  
===== Other useful information, summary =====
  * [[usefullcommands#Needed ports for cluster and functions|Needed ports for cluster and functions]]
  * [[usefullcommands#Where the risk profiles are located|Where the risk profiles are located]]
  * [[usefullcommands#Regex for search in Notepad++|Regex for search in Notepad++]]
  * [[usefullcommands#How to use screens in Linux|How to use screens in Linux]]
  * [[usefullcommands#BZIP2 and GZIP archiving|BZIP2 and GZIP archiving]]
  * [[usefullcommands#TAR commands|TAR commands]]
  * [[usefullcommands#How to Encrypt and Decrypt Files and Directories Using Tar and OpenSSL|How to Encrypt and Decrypt Files and Directories Using Tar and OpenSSL]]
  * [[usefullcommands#How to activate debug mode in ABF|How to activate debug mode in ABF]]
  * [[usefullcommands#Boostmode on and off|Boostmode on and off]]
  * [[usefullcommands#To get destination NAT from firewalls in the ASMS database|To get destination NAT from firewalls in the ASMS database]]
  * [[usefullcommands#Cluster node suddenly removed from cluster|Cluster node suddenly removed from cluster]]
  * [[usefullcommands#How the user field in ABF flows works|How the user field in ABF flows works]]
  * [[usefullcommands#How to get a session ID|How to get a session ID]]
  * [[usefullcommands#How to look into .tar, .zip, .bz2 files without unpacking them|How to look into .tar, .zip, .bz2 files without unpacking them]]
  * [[usefullcommands#How to clean up the session database table in postgres|How to clean up the session database table in postgres]]
  * [[usefullcommands#Guide for LVM on new setup virtual appliance|Guide for LVM on new setup virtual appliance]]
  
  
  ### Exit the postgres configuration mode
        \q        \q


----
====== Troubleshooting ======
=== Backup in GUI won’t work (backup is already running error with manual backup) ===
Try restarting Apache Tomcat:
  /etc/init.d/apache-tomcat restart
or
  service apache-tomcat restart
or
  systemctl restart apache-tomcat
Which variant to use depends on the version of ASMS.

=== To view speed and duplex on an interface ===
Use the ethtool command. To see the speed and duplex of the interface eth0, run:
  ethtool eth0
Example output:
  Settings for eth0:
          Supported ports: [ TP ]
          Supported link modes:   10baseT/Half 10baseT/Full
                                  100baseT/Half 100baseT/Full
                                  1000baseT/Half 1000baseT/Full
          Supported pause frame use: No
          Supports auto-negotiation: Yes
          Advertised link modes:  10baseT/Half 10baseT/Full
                                  100baseT/Half 100baseT/Full
                                  1000baseT/Half 1000baseT/Full
          Advertised pause frame use: Symmetric
          Advertised auto-negotiation: Yes
          Link partner advertised link modes:  100baseT/Full
                                               1000baseT/Full
          Link partner advertised pause frame use: No
          Link partner advertised auto-negotiation: Yes
          Speed: 1000Mb/s
          Duplex: Full
          Port: Twisted Pair
          PHYAD: 1
          Transceiver: internal
          Auto-negotiation: on
          MDI-X: on
          Supports Wake-on: g
          Wake-on: g
          Current message level: 0x000000ff (255)
                                 drv probe link timer ifdown ifup rx_err tx_err
          Link detected: yes

=== Sync the database with the reports directory (problem with searching in objects) ===
Log in as the afa user, or switch to it (su afa), and run the script below. Then log out and log in again.
  /usr/share/fa/bin/syncDbWithReportsDir.sh

=== To clear typed history in Bash ===
In Bash (the terminal), commands are saved in the .bash_history file. If you type usernames and passwords in the terminal, those are saved as commands in that file, in clear text!
Passwords entered without echo (where you do not see the text) are not saved; only the characters you see are.
To clear the file, do the following. (This needs to be done in every open window (CLI login) if more than one SSH session is active/logged in.)
  history -c
  history -w

  The flag -c clears the history list by deleting all entries.
  The flag -w overwrites the .bash_history file with the (now empty) history list, thereby clearing it.

The commands you type in the active (open) window are kept in memory and written to the file on exit/logout. You can see all commands in memory by running the history command without any flags.

=== To enable debug mode in FireFlow (CLI) ===
Log in to AFA as the root user using SSH. \\
Run the following command (as a single line) to back up the existing FireFlow_SiteConfig.pm file: \\
  cp -p /usr/share/fireflow/local/etc/site/FireFlow_SiteConfig.pm /usr/share/fireflow/local/etc/site/FireFlow_SiteConfig.pm.b4debug

Add the following lines at the end of the FireFlow_SiteConfig.pm file, before the line ending with 1;

  vim /usr/share/fireflow/local/etc/site/FireFlow_SiteConfig.pm

  Set($LogToFile, 'debug');
  Set($LogMaxMsgLen, 0);
  Set($LogPermissions, 2);

Restart FireFlow:
  restart_fireflow

Recreate the problematic scenario that you want to troubleshoot. \\
Download the fireflow.zip file and attach it to the support case. \\
In the FireFlow_SiteConfig.pm file, remove (or comment out with #) the lines you added above to enable debugging. \\

Restart FireFlow:
  restart_fireflow

=== Verify that the garbage cleanup script has been running ===
This cleans up the /home/afa/public_html/algosec/session-* folders. If that is not done, the system will run out of inodes (check inode usage with df -ih):

  df -ih

Run this to see if the garbage cleanup script has been running:
  grep "clean_up_garbag.*session-" ~afa/.fa-history*[^0] | head

  Example output:
  [root@ASMS_2017-2 ~]# grep "clean_up_garbag.*session-" ~afa/.fa-history*[^0] | head
  /home/afa/.fa-history.1:[14689] [    ]       [2018-05-05 02:00:20,203] [INFO ] [auto_remove              ::clean_up_garbag:705 ] Remove old files from directory /home/afa/public_html/algosec with prefix session-
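
As an illustration only, the cleanup logic amounts to something like the sketch below. The seven-day cutoff and the use of plain files are assumptions made for this example (the real entries are session directories, and the real script may use other criteria):

```python
import os
import tempfile
import time
from pathlib import Path

# Sketch of the cleanup idea: delete "session-" entries older than a cutoff so
# they stop consuming inodes. The 7-day cutoff and the use of plain files are
# assumptions for this example; the real entries are directories under
# /home/afa/public_html/algosec and the real script may behave differently.
def clean_up_sessions(directory: Path, prefix: str = "session-",
                      max_age_s: float = 7 * 86400) -> int:
    removed, now = 0, time.time()
    for entry in directory.iterdir():
        if entry.name.startswith(prefix) and now - entry.stat().st_mtime > max_age_s:
            entry.unlink()          # a directory entry would need shutil.rmtree
            removed += 1
    return removed

tmp = Path(tempfile.mkdtemp())
stale = tmp / "session-stale"
stale.touch()
os.utime(stale, (time.time() - 30 * 86400,) * 2)   # backdate 30 days
(tmp / "session-fresh").touch()

removed = clean_up_sessions(tmp)
print(removed)   # 1: only the backdated entry is removed
```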

=== AFA traffic simulation in CLI ===
The AFA traffic simulation feature has a CLI version. The benefit compared to the GUI version is that you get much more troubleshooting info when you have path discovery (map) problems.

Switch to the afa user if you are not that already:
  su afa

Run the test_fip command with the following flags:
  test_fip -s [from-ip] -d [destination-ip] -o
  flag -s: source
  flag -d: destination
  flag -o: prints fip output if given

Example:
  test_fip -s 172.18.113.5 -d 172.19.150.97 -o

=== Kill (shut down) stuck or big application queries (CLI) ===
For instance, when the application is stuck in initial plan. \\
In the CLI, enter the commands: \\
  ps -ef | grep 18619
  ps -ef | grep run_query

Where 18619 is the ticket ID. \\

Then kill those PIDs:
  kill -9 [pid]

=== Low free disk space on / or /data partitions ===
If disk utilization is high on the root ( / ) partition, the system might stop working. \\
Some functions, like backup and report gathering, might also stop working if the /data partition is full. To fix at least some of the problems, see the KB article below.

Login is needed:
[[https://knowledge.algosec.com/skn/tu/e15153]]

=== Check the status of the vacuum DB function (start/stop) ===
The following command checks the vacuum (fill in the weekday of the log file and the date to grep for):

  grep -i vacuum /var/lib/pgsql/data/pg_log/postgresql-[day].log | grep [date] > /tmp/results.txt
i.e.
  grep -i vacuum /var/lib/pgsql/data/pg_log/postgresql-Tue.log | grep 2019-05-07 > /tmp/results.txt

=== Metro service does not start ("unresponsive") after breaking cluster ===
The SSL configuration is not removed from one or all nodes. \\
Edit the machine_config file (/home/afa/.fa/machine_config): \\

  vim /home/afa/.fa/machine_config

  Change secure_conection=true => secure_conection=false

Restart the apache-tomcat service:

  service apache-tomcat restart  (systemctl restart apache-tomcat)

=== A short SQL query to get all interfaces with associated IPs from all the firewalls in the map ===
In the CLI on the AlgoSec server, run:

  sqlite3 /home/afa/.fa/map.sqlite "SELECT DeviceName, HwName, IP FROM Interface INNER JOIN Device ON Device.DeviceID = Interface.DeviceID WHERE DeviceName IS NOT NULL AND DeviceName != \"\" AND IP != \"None\";" ".quit"

Edit: this command will also show the subnet ID with CIDR mask.

  sqlite3 /home/afa/.fa/map.sqlite "SELECT DeviceName, HwName, IP, CIDR FROM Interface INNER JOIN Device ON Device.DeviceID = Interface.DeviceID INNER JOIN Subnet ON Subnet.SubnetID = Interface.SubnetID WHERE DeviceName IS NOT NULL AND DeviceName != \"\" AND IP != \"None\";" ".quit"
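
The join logic can be tried out against a throwaway SQLite database. The miniature Device/Interface schema below only mirrors the columns the query touches; it is not the real map.sqlite layout:

```python
import sqlite3

# Toy stand-in for map.sqlite: the Device/Interface tables below only mirror
# the columns the query above touches, not the real schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Device (DeviceID INTEGER PRIMARY KEY, DeviceName TEXT);
CREATE TABLE Interface (DeviceID INTEGER, HwName TEXT, IP TEXT);
INSERT INTO Device VALUES (1, 'fw-edge-01'), (2, '');
INSERT INTO Interface VALUES
  (1, 'eth0', '10.0.0.1'),    -- kept
  (1, 'eth1', 'None'),        -- dropped: no IP
  (2, 'eth0', '10.0.1.1');    -- dropped: empty device name
""")

rows = con.execute("""
  SELECT DeviceName, HwName, IP
  FROM Interface
  INNER JOIN Device ON Device.DeviceID = Interface.DeviceID
  WHERE DeviceName IS NOT NULL AND DeviceName != '' AND IP != 'None';
""").fetchall()
print(rows)   # [('fw-edge-01', 'eth0', '10.0.0.1')]
```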

=== License will not install in CLI ===
For version 2018.1.x-x \\
If this occurs on the second device in an HA/DR cluster, check if the metro service is running. \\
Usually it is not, and to start it you need to start the apache-tomcat service:\\

  service apache-tomcat start   or   systemctl start apache-tomcat.service

Remember to shut it down after the installation of the license:
  service apache-tomcat stop   or   systemctl stop apache-tomcat.service

=== ABF application flows will not save ===
This could be because of disallowed text (text that can be interpreted as code) in some of the comment fields.
Affected fields:
  Custom fields
  Flow names
  Comments

Algosec KB for this: [[https://knowledge.algosec.com/skn/tu/e16448]]

Examples of what is considered code:
  <script>
  </script>
  src="*"
  eval(*)
  expression(*)
  javascript:
  vbscript:
  onload*=

  Also avoid writing <*> HTML tags.

  "*" means anything in between.

=== Local account admin could not log in ===
Check the following KB:
[[https://knowledge.algosec.com/skn/c6/AlgoPedia/e4998/Login_Failed_incorrect_user_name_or_password]]

If that looks OK, check if two or more accounts have the same password. This is possible if new users are added via the users_info.xml file. \\
If that is the case, remove the other account, or change the email addresses so every account has its own specific email. \\

=== Cancel connectivity check in ABF ===
There MAY be a workaround. It isn't verified, but we have used it on other occasions when an ABF application seems to be stuck updating: \\
\\
First, connect to postgres:

  root@ITSEELM-BB4261:~#psql -U postgres
  Password for user postgres:
  psql (9.2.5)
  Type "help" for help.

Enable pretty-printing and connect to the bflow database:
  postgres=# \x on

Expanded display is on.
  postgres=# \c bflow

You are now connected to database "bflow" as user "postgres".

Select the application by its ID. The ID can be found in the URL. Example:
  "https://fo.ikea.com/BusinessFlow/#/application/ --> 2797 <-- /dashboard"

  bflow=# select * from applications where id=2797;

  -[ RECORD 1 ]------------------+---------------------------
  id                             | 2797
  app_id                         | 2433
  creation_ts                    | 2019-04-01 11:45:02.110999
  lcname                         | mfc-le-sto-371
  name                           | MFC-LE-STO-371
  update_ts                      | 2019-04-01 11:48:33.302
  connectivity_id                | 85479
  metadata_id                    | 1100
  revision_id                    | 2827
  connectivity_scan_in_progress  | f
  vulnerability_scan_in_progress | f
  last_risk_check                |
  risk_scan_in_progress          | f
  risk_score                     |
  risks_information_up_to_date   | f
  discovery_update_in_progress   | f

This gives us some data to look at. What we need for the next step is the app_id field. Use it in the next query:
  bflow=# select * from application_metadata where appId=2433;
  -[ RECORD 1 ]------+---------------------------
  id                 | 1100
  appid              | 2433
  applicationlock    | t
  creation_ts        | 2019-02-20 12:53:10.764664
  update_ts          | 2019-02-20 12:53:10.817
  lifecyclephase_id  | 1
  name_sequence      | 1
  expiration_date    |
  rename_in_progress | f

This shows that the applicationlock field is indeed TRUE. Set it to FALSE using the application_metadata id, not the application id:
  bflow=# update application_metadata set applicationlock=false where id=1100;
  UPDATE 1

Verify that the flag is now correct (false):
  bflow=# select * from application_metadata where appId=2433;
  -[ RECORD 1 ]------+---------------------------
  id                 | 1100
  appid              | 2433
  applicationlock    | f
  creation_ts        | 2019-02-20 12:53:10.764664
  update_ts          | 2019-02-20 12:53:10.817
  lifecyclephase_id  | 1
  name_sequence      | 1
  expiration_date    |
  rename_in_progress | f

Quit when done.
  bflow=# \q
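
For illustration, the whole unlock procedure can be condensed into a few statements. The sketch below runs against a throwaway SQLite stand-in with a reduced column set, not the real bflow postgres schema:

```python
import sqlite3

# Condensed version of the unlock procedure above, against a throwaway SQLite
# stand-in with a reduced column set (not the real bflow postgres schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE application_metadata"
           " (id INTEGER PRIMARY KEY, appid INTEGER, applicationlock INTEGER)")
db.execute("INSERT INTO application_metadata VALUES (1100, 2433, 1)")  # stuck: locked

# Step 1: look up the metadata row via the application's app_id
meta_id, locked = db.execute(
    "SELECT id, applicationlock FROM application_metadata WHERE appid = ?",
    (2433,)).fetchone()
assert locked == 1   # applicationlock is TRUE

# Step 2: clear the lock using the application_metadata id, not the app id
db.execute("UPDATE application_metadata SET applicationlock = 0 WHERE id = ?",
           (meta_id,))

# Step 3: verify the flag is now false
locked = db.execute(
    "SELECT applicationlock FROM application_metadata WHERE appid = ?",
    (2433,)).fetchone()[0]
print(locked)   # 0
```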

=== ART / Elasticsearch / Kibana backup problems ===
The new backup needs the elasticsearch service to be running; it will fail otherwise. (Version 2018.2 and later.) \\
In at least 2018.2.870 - 2018.2.900 there was a version mismatch between elasticsearch and kibana4. This could cause problems with the services and the backups. \\
The problem is resolved in version 2018.2.900-xyz (according to Algosec).\\

Elasticsearch: \\
Check the service via:
  service elasticsearch status
or
  systemctl status elasticsearch

Check that the service starts with the system:
  chkconfig | grep -i elasticsearch

Kibana: \\
Check the service via:
  service kibana4 status
or
  systemctl status kibana4

Check that the service starts with the system:
  chkconfig | grep -i kibana4

There is a script to start (or stop) the services and enable starting them with the system. \\
Script: toggle_art.sh \\
To run it:
  /usr/share/fa/bin/toggle_art.sh
Use on/off to turn it on or off:
  /usr/share/fa/bin/toggle_art.sh on
or
  /usr/share/fa/bin/toggle_art.sh off

=== Fetchmail troubleshooting, will not get (fetch) mails ===
Log: /var/log/fetchmail.log \\
Symptom: the system does not fetch emails from the email server. \\
Test the function with the following command:
  /usr/bin/fetchmail -c -v -p POP3 -P 995 --ssl -u [username] -L /var/log/fetchmail.log [server FQDN/ip]
Exchange POP3, 995 and --ssl if needed. Enter the username and the server FQDN or IP.
\\
If that works, follow the checklist below. If not, check the log file to see what went wrong.
Check: \\
That the ownership of the .fetchmailrc file is correct (fireflow should own the file). As the root user, run:

  chown fireflow:fireflow .fetchmailrc

That the permissions on the .fetchmailrc file are correct (chmod 0700). \\
As the root user, run:
  chmod 0700 .fetchmailrc

Log in as the fireflow user (su - fireflow) and test with
  /usr/bin/fetchmail
without any further arguments. The command gets the rest from the .fetchmailrc file in the /home/fireflow directory. Remember that dot files are hidden files. \\
\\
If it gives an error, make sure that the two checks above are done and that the .fetchmailrc file is correctly filled in. In one instance the file needed to be recreated to get it working.

=== Menu bar not showing in AppViz after upgrade to A30.10 ===
Problem: \\
After upgrading to version A30.10, the AppViz (formerly BusinessFlow) menu (blue top row) does not show. \\
\\
Troubleshooting: \\
Checking in the web browser, the page gave a wrong redirect URL. For us, the domain was missing. \\
\\
Solution: \\
Check the AppViz config file (/home/bflow/config/user.properties). \\
The following parameters need to be populated with the full URL: \\
  afa.hostname=****************** (removed for the document)
  fireflow.hostname=****************** (removed for the document)

=== Verify disk speed is up to standard ===
The disk speed (read/write) in MB/s. \\
Below is the built-in check used when upgrading the system: \\
\\
Is blocked      User approval needed                  Is allowed \\
0------80------ 80---------100-----------------------------300--------------- ==> \\
\\
So for a good system there should be an R/W speed of at least 300 MB/s. \\
\\
How to check this? \\
With the Linux tools hdparm (for read) and dd (for write). \\
\\
Disclaimer!\\
Not sure how much this will affect the system, so on production do this outside of working hours. \\
!Disclaimer \\

  hdparm -Ttv [partition, like /dev/sdb1]

  dd if=/dev/zero of=/var/wrtest/test oflag=direct bs=128k count=32k

if = input file, of = output file. Point of at somewhere the system can write ~4 GB (with these settings). Remember to remove the output file. \\
\\
On my home system with an older SATA disk I got: \\

  root@system#dd if=/dev/zero of=/var/wrtest/test oflag=direct bs=128k count=32k
  32768+0 records in
  32768+0 records out
  4294967296 bytes (4.3 GB, 4.0 GiB) copied, 53.6856 s, 80.0 MB/s

So 80.0 MB/s in write speed.
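
The MB/s figure dd prints is just bytes copied divided by elapsed time; the arithmetic from the run above can be checked like this:

```python
# The MB/s dd reports is bytes copied divided by elapsed seconds, with
# 1 MB = 10**6 bytes (the unit dd itself uses). Figures from the run above:
bytes_copied = 4_294_967_296   # 4.3 GB
elapsed_s = 53.6856
mb_per_s = bytes_copied / elapsed_s / 1_000_000
print(f"{mb_per_s:.1f} MB/s")  # 80.0 MB/s, matching dd's own summary line
```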

=== Username no longer case sensitive? ===
System version: A32.0.x-y \\
\\
In one incident a user could not log in. There was a duplicate user: one all lower case and one with the first letter in upper case. The system tried to match against the first username in the list, which was the one with upper case, not the actual account, so the user could not log in. \\
\\
Solution?: \\
We ended up deleting the account with the upper-case first letter, and the user could log in again after that.

====== Other useful information ======
=== Needed ports for cluster and functions ===
If possible, for uptime, allow all ports between the nodes in HA. \\
Twice I have had problems at customers because Algosec did not update their documentation for the needed ports. \\
Not in the documentation nor in a KB. (The customer has PVLAN with ACL.) \\

^Type ^Port ^CM ↔ Slave ^CM ↔ RA ^Slave ↔ Slave ^HA/DR (2018.2) ^HA/DR (2018.1) ^HA/DR (2017.3) ^
|icmp       |-         | V | V | - | V | V | V |
|ssh        |tcp/22    | V | V | - | V | V | V |
|https      |tcp/443   | V | V | - | V | - | - |
|syslog     |udp/514   | - | - | - | V | - | - |
|hazelcast  |tcp/5701  | V | - | V | V | V | - |
|activemq   |tcp/61616 | V | - | - | V | - | - |
|postgresql |tcp/5432  | V | - | - | V | V | - |
|pgpool     |tcp/5433  | V | - | - | V | - | - |
|HA/DR      |tcp/9595  | - | - | - | V | V | - |
|heartbeat  |udp/694   | - | - | - | - | - | V |
(heartbeat not in use since 2018.1)
\\
\\
\\
Ports required for communications (central manager, remote agents) in geo-distributed architecture.\\
1. For configuration procedures (adding a remote agent, adding/editing/deleting devices) that must be synchronous:\\
^Port ^Protocol ^Description ^Purpose ^
|22 |TCP |SSH |Required for running commands upon the remote agent from the central manager |

2. For log collection, monitoring and data collection procedures that may be asynchronous:\\
^Port ^Protocol ^Description ^Purpose ^
|443 |TCP |SOAP over HTTPS |Required for running commands and obtaining the status of the remote agent and current actions performed on it |
|22  |TCP |SCP |Required for copying files to and from the remote agent |

3. For communications between master-slave in load-distributed architecture:\\
^Port ^Protocol ^Description ^Purpose ^
|443  |TCP |SOAP over HTTPS |From master to slave |
|22   |TCP |SCP-SSH |From master to slave |
|5432 |TCP |PostgreSQL | |
|5433 |TCP |PGPool | |
=== Where the risk profiles are located ===
/home/afa/.fa/risk_profiles/ \\
  The files are saved as *.xml

=== Regex for search in Notepad++ ===
To search for IP addresses in Notepad++, use one of the following regexes:
  [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
  or
  \b(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(?1)){3}\b
The first is loose (it also matches impossible addresses like 999.999.999.999); the second limits each octet to 0-255.
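
The second pattern relies on PCRE-style (?1) recursion, which not every regex engine supports (Python's re module, for one, does not). A quick check in Python with the recursion spelled out:

```python
import re

# The octet alternation from the strict pattern above, repeated explicitly
# instead of using the PCRE-only (?1) recursion (note the escaped dots).
octet = r"(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)"
ipv4 = re.compile(rf"\b{octet}(?:\.{octet}){{3}}\b")

print(bool(ipv4.search("mgmt if on 172.18.113.5/24")))   # True
print(bool(ipv4.fullmatch("255.255.255.255")))           # True
print(bool(ipv4.fullmatch("999.999.999.999")))           # False: octets capped at 255
```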

=== How to use screens in Linux ===
Screen(s) is a good way to make sure that updates or other long-running things keep going, and that the information in the shell stays available, even if the connection between the client and the server is severed. \\
You can reconnect to the session and the information is still there. \\
\\
For the Algosec session I will use the session name algosec. \\
\\
#Start a screen session
  screen
OR
  screen -S [name of session]

#List active screens on the system
  screen -ls
OR
  screen -list

  --- *** output *** ---
  #named session algosec
  [root@server ~]# screen -ls
  There is a screen on:
          27466.algosec   (Detached)
  1 Socket in /var/run/screen/S-root.

#Connect to a detached screen on the system
  screen -r [screenname/session]

  screen -r algosec

=== BZIP2 and GZIP archiving ===
This can be done with single file(s). \\

* GZIP * \\
Compress a single file \\
  gzip [filename] ## This will create a compressed file and remove the original file.

Compress multiple files at once \\
  gzip [filename1] [filename2] [filename3]

Compress a single file and keep the original \\
  gzip -c [filename] > [filename].gz

Decompress a gzip-compressed file \\
  gzip -d [filename]
  or
  gunzip [filename]

Decompress a gzip file but keep the compressed file \\
  gunzip -c [filename].gz > [filename]

* BZIP2 * \\
Create an archive \\
  bzip2 [filename] ## This will create a compressed file and remove the original file.

To keep the original file, use -k \\
  bzip2 -k [filename] ## The original file is NOT deleted

Decompress archives \\
  bzip2 -d [filename]
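
The "compress but keep the original" variants can also be reproduced with Python's standard library, which writes the same gzip/bzip2 formats as the command-line tools:

```python
import bz2
import gzip
import tempfile
from pathlib import Path

# The "compress but keep the original" variants, done with Python's stdlib
# instead of the gzip/bzip2 binaries (the on-disk formats are the same).
tmp = Path(tempfile.mkdtemp())
src = tmp / "example.txt"
src.write_text("hello algosec\n" * 100)

# gzip -c example.txt > example.txt.gz   (original kept)
(tmp / "example.txt.gz").write_bytes(gzip.compress(src.read_bytes()))

# bzip2 -k example.txt                   (original kept)
(tmp / "example.txt.bz2").write_bytes(bz2.compress(src.read_bytes()))

# gunzip -c example.txt.gz > restored.txt
restored = gzip.decompress((tmp / "example.txt.gz").read_bytes())
print(restored == src.read_bytes())      # True: the round trip is lossless
```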

=== TAR commands ===
To build tar archives \\
  tar -vcf [filename.tar] [file/folder1] [file/folder2] [...]

To build tar archives with gzip compression \\
This can be done with folder(s) \\
  tar -zvcf [filename.tar.gz] [file/folder1] [file/folder2] [...]

To build tar archives with bzip2 compression \\
This can be done with folder(s) \\
bzip2 compresses harder but is more CPU-demanding \\
  tar -jvcf [filename.tar.bz2] [file/folder1] [file/folder2] [...]

To decompress a tar archive \\
  tar -xvf [filename]

To decompress a tar archive with gzip \\
  tar -zxvf [filename].tar.gz

To decompress a tar archive with bzip2 \\
  tar -jxvf [filename].tar.bz2
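
The same create/extract cycle can be sketched with Python's tarfile module ("w:gz" corresponds to tar -zcf, "w:bz2" to tar -jcf); note that gzip/bzip2 provide compression, not encryption:

```python
import tarfile
import tempfile
from pathlib import Path

# tarfile equivalents of the commands above: "w:gz" ~ tar -zcf, "w:bz2" ~ tar -jcf.
# gzip/bzip2 give compression, not encryption; see the OpenSSL section for that.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.txt").write_text("alpha")
(tmp / "b.txt").write_text("bravo")

archive = tmp / "backup.tar.gz"
with tarfile.open(archive, "w:gz") as tar:   # tar -zvcf backup.tar.gz a.txt b.txt
    tar.add(tmp / "a.txt", arcname="a.txt")
    tar.add(tmp / "b.txt", arcname="b.txt")

out = tmp / "restore"
out.mkdir()
with tarfile.open(archive, "r:gz") as tar:   # tar -zxvf backup.tar.gz
    tar.extractall(out)

print(sorted(p.name for p in out.iterdir()))   # ['a.txt', 'b.txt']
```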

=== How to Encrypt and Decrypt Files and Directories Using Tar and OpenSSL ===
Remember that the conventional form of using OpenSSL is: \\
openssl command command-options arguments \\
To encrypt the contents of the current working directory: \\

  tar -czf - * | openssl enc -e -aes256 -out secured.tar.gz

  Explanation of the above command:
  enc – the openssl command to encode with ciphers
   -e – an enc option to encrypt the input, which in this case is the output of the tar command
   -aes256 – the encryption cipher
   -out – enc option used to specify the name of the output file, secured.tar.gz
\\
\\
Decrypt files in Linux \\
To decrypt the tar archive's contents, use the following command: \\

  openssl enc -d -aes256 -in secured.tar.gz | tar xz -C test

  Explanation of the above command:
   -d – used to decrypt the file
   -C – extract into the subdirectory named test

=== How to activate debug mode in ABF ===
You need to enable debug mode to troubleshoot BusinessFlow. \\
Solution\\
To enable debug mode in BusinessFlow: \\
  1. Log in to AFA as the root user using SSH.
  2. Edit the file /home/bflow/config/log4j2.xml:
    a. Change the following line:
    <property name="algosec-log-level">INFO</property>
    to
    <property name="algosec-log-level">DEBUG</property>
  (( 3. Restart the apache-tomcat service. )) ### usually not needed

=== Boostmode on and off ===
Script download: https://algosec.sharefile.com/d-s62142b58f5b4210b \\
\\
\\
To install boostmode: unzip the script and move it to the system under /tmp/ (this can be applied on GEOs, slaves, HA secondaries, all necessary boxes). \\
\\
As root:
  cp /tmp/boostmode /etc/init.d/boostmode
  chmod 755 /etc/init.d/boostmode
  chkconfig boostmode on
  service boostmode start # this may take a few minutes

After the service has started, you must restart all the relevant services:
  service activemq restart
  service apache-tomcat restart
  service algosec-ms restart
  service postgresql reload
  restart_fireflow

It’s important to communicate to the customer that they will also need to perform the following steps after applying any hotfixes or patches in the future, as these can overwrite some of the boostmode settings. \\
After successfully installing hotfixes, the following should be run as root:\\

  service boostmode start
  service activemq restart
  service apache-tomcat restart
  service algosec-ms restart
  service postgresql reload
  restart_fireflow

Boost mode can be disabled just by running:
  service boostmode stop

Stopping the service will roll back all the changes. \\

Before:
  [root@algosec-RA ~]# swapon -s
  Filename                                Type            Size    Used    Priority
  /dev/dm-1                               partition       7688188 0       -1
  [root@algosec-RA ~]#

After:
  [root@algosec ~]# swapon -s
  Filename                                Type            Size    Used    Priority
  /dev/zram0                              partition       3087552 0       100
  /dev/zram1                              partition       3087552 0       100
  /dev/zram2                              partition       3087552 0       100
  /dev/zram3                              partition       3087552 0       100
  [root@algosec ~]#

=== Cluster node suddenly removed from cluster ===
Problem: \\
One node in the cluster was removed from the cluster. \\
\\
Logs: \\
HA logs (/var/log/algosec_hadr/, several logs in this location) \\
Messages log (/var/log/messages) \\
\\
Low disk space: \\
In the HA logs on one node (collect them from the HA menu, algosec_conf 13), there will be an entry about low disk space and that the cluster will be broken because of it. The entry is on the node that was removed from the cluster. \\
Too low disk space means less than 10% free space (on any partition? Maybe, but surely on the /data partition).\\

=== How the user field in ABF flows works ===
The field is populated from either the ABF database OR a supported firewall (like Palo Alto/Panorama). \\
You cannot combine the two sources of users; only one or the other is used at any given time. \\
Also, the users available to populate the user field need to be present in the firewall. More testing should be done to verify this. \\
The setting to change this is found under:\\
ABF => [name in upper right corner] => Administration => Configuration => User Awareness Support => User validation via LDAP is currently [on/off] \\
\\
If on: get from the firewall \\
If off: get from the ABF user database \\
 +
 +=== How to get a session id ===
 +GUI: 
 +  1. Go to the AFA home page (the portion displaying graphs).
 +  2. In the Web browser address box, type ?"!session!" .
 +  3. Press Enter.
 +  A popup displays a unique session ID. 
 +
 +CLI: 
 +  1. Go to the CLI and type the following command: ls -ltr /home/afa/public_html/algosec | tail
 +  A list of session IDs displays.
 +  2. Make a note of the latest session ID.
 +
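 +The CLI step above can be wrapped in a small helper (a sketch; the directory path is the one given in the instructions, and the function name is made up):

```shell
# Print the most recently modified entry in the AFA sessions directory.
latest_session_id() {
  # ls -1tr sorts oldest first, so the last line is the newest entry
  ls -1tr "$1" 2>/dev/null | tail -n 1
}

latest_session_id /home/afa/public_html/algosec
```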
 +=== How to look into .tar, .zip, .bz2 files without unpacking them ===
 +.tar
 +  tar -tvf [file].tar
 +
 +.zip
 +  zcat [cat]
 +  zmore [more]
 +  zless [less]
 +
 +Or if the .zip contains multiple files
 +  vim [file].zip
 +
 +Example
 +  zcat testfile.zip
 +  vim testfile.zip
 +
 +.bz2
 +  bzcat [cat]
 +  bzless [less]
 +  vim
 +
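 +For completeness, a self-contained worked example of listing an archive without unpacking it (standard tools; nothing here is AlgoSec-specific):

```shell
# Create a throwaway tar archive and list its contents without extracting.
tmp=$(mktemp -d)
echo "hello" > "$tmp/demo.txt"
tar -cf "$tmp/demo.tar" -C "$tmp" demo.txt

tar -tf "$tmp/demo.tar"     # short listing of member names: demo.txt
tar -tvf "$tmp/demo.tar"    # long listing with permissions and sizes
rm -rf "$tmp"
```

For .zip archives, ''unzip -l [file].zip'' gives the equivalent listing.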
 +=== How to clean up the session database table in postgres ===
 +In some versions of ASMS the session table just grows and grows. This is a bug! \\
 +To manually empty the database session table, follow the procedure below. \\
 +
 +  #########################
 +  # Important before sync #
 +  #########################
 +
 +  ++++++++++++++++++++++++++++++++++++++++
 +  + Check the postgres sessions db table +
 +  ++++++++++++++++++++++++++++++++++++++++
 +
 +This is the procedure \\
 +\\
 +On the active node (where all services are running: AFA, AFF, DB) \\
 +Stop services as follows: \\
 +
 +  /usr/share/fireflow/local/sbin/stop_fireflow.sh
 +  service crond stop
 +  service apache-tomcat stop
 +  service algosec-ms stop
 +  service postgresql stop
 +  service activemq stop
 +  service httpd stop
 +  service logstash stop
 +  service elasticsearch stop
 +  service kibana stop
 +  service mongod stop
 +  service aff-boot stop
 +
 +Once all services are stopped, bring the postgresql service back up with 'service postgresql restart'. \\
 +Once postgres is running, run the following commands from the CLI: \\
 +
 +  psql -U postgres -d rt3 -c 'truncate sessions;'
 +  psql -U postgres -d rt3 -c 'vacuum full verbose sessions;'
 +
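 +To see how large the table was beforehand, or how much the truncate reclaimed, the table can be inspected with a helper like this (a sketch; database ''rt3'' and table ''sessions'' are the names used in the commands above, and the function name is made up):

```shell
# Report row count and on-disk size of the sessions table in the rt3
# database. Read-only; safe to run whenever postgres is up.
sessions_table_size() {
  psql -U postgres -d rt3 -t -c \
    "SELECT count(*), pg_size_pretty(pg_total_relation_size('sessions')) FROM sessions;"
}
```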
 +Once the commands finish, bring the rest of the services back online. \\
 +
 +  service crond start
 +  service httpd start
 +  service postgresql start
 +  service activemq start
 +  service apache-tomcat start
 +  service algosec-ms start
 +  service aff-boot start
 +  /usr/share/fireflow/local/sbin/start_fireflow.sh
 +  service logstash start
 +  service elasticsearch start
 +  service kibana start
 +  service mongod start
 +
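 +As an optional sanity check after the restart, each service's status can be queried in a loop (a sketch; the service names are those from the list above, and the helper name is made up):

```shell
# Report which of the given services respond as running to `service status`.
check_services() {
  for svc in "$@"; do
    if service "$svc" status >/dev/null 2>&1; then
      echo "$svc: running"
    else
      echo "$svc: NOT running"
    fi
  done
}

check_services crond httpd postgresql activemq apache-tomcat algosec-ms aff-boot
```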
 +=== Guide for LVM on new setup virtual appliance ===
 +  ################
 +  # Up to A30.20 #
 +  ################
 +
 +Fix the LVM on the devices \\
 +
 +  # Lists all disks in the system (like fdisk -l)
 +  lsblk
 +
 +  # Open the new disk in parted
 +  parted /dev/sdb
 +
 +  # Moves from msdos to a GPT partition table (needed for disks over 2TB)
 +  mktable GPT
 +
 +  # Creates a partition of 50GB
 +  mkpart 0 1 50000 
 +
 +  # Creates a partition of the rest of the disk
 +  mkpart 0 50001 100%
 +
 +  # Lists all disks in the system (like fdisk -l)
 +  lsblk
 +
 +  # Creates the physical volumes on the new partitions
 +  pvcreate /dev/sdb1
 +  pvcreate /dev/sdb2
 +
 +  # Extends the volume group /dev/vg_algosec with the new physical volumes
 +  vgextend /dev/vg_algosec /dev/sdb1
 +  vgextend /dev/vg_algosec /dev/sdb2
 +
 +  # Extends the logical volumes with the new partitions (-r resizes the filesystem automatically)
 +  lvextend -r /dev/vg_algosec/vg_system /dev/sdb1
 +  lvextend -r /dev/vg_algosec/vg_data /dev/sdb2
 +
 +  # If -r did not extend the filesystems automatically, do the following
 +  # For an ext4 filesystem
 +  resize2fs /dev/vg_algosec/vg_system
 +  resize2fs /dev/vg_algosec/vg_data
 +
 +  # For an xfs filesystem
 +  xfs_growfs /dev/vg_algosec/vg_system
 +  xfs_growfs /dev/vg_algosec/vg_data
 +
 +  # To check that the filesystem expands OK, watch it once per second
 +  screen
 +  watch -n 1 -d "df -hT"
 +
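 +After the lvextend/resize steps, a quick read-only check that the filesystems actually grew (mount points are those assumed in this guide):

```shell
# Show size, used and available space per mounted filesystem; -T adds the
# filesystem type (ext4/xfs), which decides between resize2fs and xfs_growfs.
df -hT
```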
 +  ###################
 +  # For ASMS V32 => #
 +  ###################
 +The difference is that the volume group and logical volumes have new names / different locations. \\
 +\\
 +Fix the LVM on the devices\\
 +
 +  # Lists all disks in the system (like fdisk -l)
 +  lsblk
 +
 +  # Open the new disk in parted
 +  parted /dev/sdb
 +
 +  # Moves from msdos to a GPT partition table (needed for disks over 2TB)
 +  mktable GPT
 +
 +  # Creates a partition of 50GB
 +  mkpart 0 1 50000 
 +
 +  # Creates a partition of the rest of the disk
 +  mkpart 0 50001 100%
 +
 +  # Lists all disks in the system (like fdisk -l)
 +  lsblk
 +
 +  # Creates the physical volumes on the new partitions
 +  pvcreate /dev/sdb1
 +  pvcreate /dev/sdb2
 +
 +  # Extends the volume group /dev/centos with the new physical volumes
 +  vgextend /dev/centos /dev/sdb1
 +  vgextend /dev/centos /dev/sdb2
 +
 +  # Extends the logical volumes with the new partitions (-r resizes the filesystem automatically)
 +  lvextend -r /dev/centos/root /dev/sdb1
 +  lvextend -r /dev/centos/data /dev/sdb2
 +
 +  # If -r did not extend the filesystems automatically, do the following
 +  # For an ext4 filesystem
 +  resize2fs /dev/centos/root
 +  resize2fs /dev/centos/data
 +
 +  # For an xfs filesystem
 +  xfs_growfs /dev/centos/root
 +  xfs_growfs /dev/centos/data
 +
 +  # To check that the filesystem expands OK, watch it once per second
 +  screen
 +  watch -n 1 -d "df -hT"
 +
section/algosec/documentation/usefullcommands.1636989462.txt.gz · Last modified: 2023/09/29 07:01 (external edit)