by Patrik Hermansson
For passwords, do not use characters such as >, < or &. They give an error in the system (algosec_conf exits with error=1), as the log below shows.
[dfbf1cbc] [main ] [2021-10-25 11:23:03,066] [INFO ] [essExecutorImpl::132 ] executing external command (UUID=aiWfbTn8): sudo -H -u afa /usr/share/fa/bin/add_del_htpasswd del 'admin'
[dfbf1cbc] [main ] [2021-10-25 11:23:03,545] [INFO ] [essExecutorImpl::132 ] executing external command (UUID=ukJ2DuLV): export PASSWORD=$'Kaffe&Kakor' && sudo -H -u afa /usr/share/fa/bin/add_del_htpasswd add 'admin' $PASSWORD
[dfbf1cbc] [stderr-ukJ2DuLV] [2021-10-25 11:23:04,031] [WARN ] [pl$StreamLogger::238 ] --> sh: Kakor: command not found
[dfbf1cbc] [stderr-ukJ2DuLV] [2021-10-25 11:23:04,034] [WARN ] [pl$StreamLogger::238 ] --> Adding password for user admin
[dfbf1cbc] [main ] [2021-10-25 11:23:04,064] [ERROR] [ErrorMenuItem ::33 ] An error occurred during algosec_conf menu
[dfbf1cbc] [main ] [2021-10-25 11:23:04,144] [INFO ] [Main ::97 ] Force exit from algosec_conf (exit with error 1)
[dfbf1cbc] [Thread-36 ] [2021-10-25 11:23:04,148] [INFO ] [Main ::90 ] algosec_conf shutdown
[dfbf1cbc] [Thread-36 ] [2021-10-25 11:23:04,149] [INFO ] [Main ::91 ] ------------------------------------------
Login as AFA user
su afa
If you are logged in as root, use the su (switch user) afa command to change to the afa user. In newer versions (at least A30.00 ⇒) this also works for the root user.
falogs
This is an alias that does a tail with several important logs.
Tip: use this to troubleshoot login errors in the logs in real time.
falogs | grep {username}
The FireFlow httpd error log gives a lot of useful information when you encounter problems in FireFlow.
Commands:
Less
/ = search
/[search pattern]
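For example, to page through the messages log and search for "error" (a minimal sketch; any of the logs listed further down works the same way):
less /var/log/messages
/error
Press 'n' for the next match and 'q' to quit.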
Tail
tail -n 100 [log file (full path)]
-n 100 == shows the last 100 lines from the specific file
tail -f [filename]
-f is "follow" i.e. appends output when the file gets bigger
–AFA logs
/home/afa/.fa-history (contains lots of INFO messages; use grep -v -i 'info')
/var/log/
/var/log/messages
–Backup logs
/home/afa/backupLogs.history (up till 2018.1)
/var/log/algosec-ms/ms-backuprestore.log (2018.2 ⇒ 30.10.x)
–AFF logs
/usr/share/fireflow/var/log/fireflow.log
/var/log/aff-boot.log
–ABF logs
/var/log/bflow/bflow.log
–Other
/data/afa_catalina_base/logs/catalina.out
/data/log/algosec_hadr/ (hadr, install log and others)
/var/lib/pgsql/data/pg_log/postgresql-???.log
/var/log/httpd/error_log
/home/afa/public_html/algosec/.ht-fa-history
/etc/httpd/logs/error_log
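For example, to see the most recent AFA history entries with the INFO noise filtered out (a sketch built from the grep tip above):
grep -v -i 'info' /home/afa/.fa-history | tail -n 50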
Enter the command:
algosec_conf
Command:
top
Shows CPU and RAM use. Useful keys:
'1' == display all CPU cores
'd' == set the update interval
'n' == set the maximum number of tasks displayed
Shift+'P' == sort the output by CPU utilization
Shift+'M' == sort the output by memory utilization
'q' == exit
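top can also run non-interactively, which is handy for capturing a snapshot to a file or a ticket (standard top flags, nothing AlgoSec-specific):
top -b -n 1 | head -n 20
-b == batch mode, -n 1 == run one iteration and exit.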
iostat -d -x 5 3
shows the disk use
5 3 == Three reports at 5 seconds intervals.
iotop -o shows the disk activity as top does.
Command:
ps -A | grep [process name]
ps -A == shows the list of running processes; -A shows all processes
| == pipe, sends the output from the left side of the pipe to the input of the right side
grep == filters the ps -A output; add the process name and grep shows only those processes
PID TTY TIME CMD
27033 ? 00:00:03 collect_gen
27083 ? 00:00:03 collect_gen
27097 ? 00:00:03 collect_gen
For a more extensive output look at the man pages for ps and grep
ps -aef | grep collect | grep -v grep
Columns: user, PID, PPID, and the path to the running application:
afa 31218 31121 8 15:09 ? 00:00:02 /usr/bin/perl /usr/share/fa/bin/collect_ios -d /home/afa/algosec/monitor/CHNx_Nanj481_A_01/new_config -n CHNx_Nanj481_A_01 -m
afa 31530 31484 14 15:09 ? 00:00:03 /usr/bin/perl /usr/share/fa/bin/collect_ios -d /home/afa/algosec/monitor/USAx_hous379_01_02/new_config -n USAx_hous379_01_02 -m
afa 31824 31791 10 15:09 ? 00:00:02 /usr/bin/perl /usr/share/fa/bin/collect_ios -d /home/afa/algosec/monitor/USAx_hous379_01_01/new_config -n USAx_hous379_01_01 -m
Command:
kill [process PID]
This will terminate (kill) the process with the specified Process ID. Use ps -A for the process PID.
This will terminate (killall) a process by name instead of PID. Command:
killall -v -u afa -e collect_ios
-v == verbose, shows more output from the command
-u == specify a user, in this case user afa
-e == match the exact name, in this case "collect_ios"
pkill == as kill but use process name instead of PID.
pkill [process name]
pkill -9 fa_master
-9 == send signal 9 (SIGKILL), which force-kills the process and cannot be ignored
The process is import_devices
ps -A | grep import_devices
[output with pid]
kill -9 [pid]
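An alternative sketch using pgrep/pkill with -f, which matches against the full command line instead of just the process name:
pgrep -f import_devices
pkill -9 -f import_devices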
When you want to kill an analysis according to the report number, use the command below:
pkill -9 <report_no>
The below commands will display which version of each product is currently installed.
Firewall Analyzer rpm -q fa
FireFlow rpm -q algosec-ticketing
BusinessFlow rpm -q BusinessFlow
AlgoSec Appliance rpm -q algosec-appliance
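All four can be checked in one go, since rpm -q accepts multiple package names:
rpm -q fa algosec-ticketing BusinessFlow algosec-appliance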
Handover HA/DR from primary to secondary: choose "13. Configure HA/DR" in the startup (content) menu, then "4". Continue according to Article Number 303 in the AlgoSec Knowledge Center (https://knowledgebase.algosec.com/article.php?id=303).
Restart FireFlow
restart_fireflow
Restart web server
/etc/init.d/httpd restart or service httpd restart or systemctl restart httpd
Restart Tomcat
/etc/init.d/apache-tomcat restart or service apache-tomcat restart or systemctl restart apache-tomcat
How to restart AppViz (formerly ABF): when you restart apache-tomcat, AppViz (and the services connected to it) is restarted as well.
service apache-tomcat restart
Notification has changed to Watchdog. Config file: /data/algosec-ms/config/watchdog_configuration.json
Note: for instance, the backup_schedule notification to syslog is set to false (off) by default!
See more in the AFA admin documentation page 341
## Commands
### Login to the database:
psql -U postgres -d rt3
### Look up the user. Replace '[userid]' with the user's login name in ASMS, for instance 'debugger' if the login username is debugger:
select * from users where name = '[userid]';
Verify that the email is missing or is wrong!
### Update the user's email address. Replace the [email protected] with the correct email address:
update users set emailaddress='[email protected]' where name='[userid]';
### Do “step 2” again to verify that the user is updated correctly:
select * from users where name = '[userid]';
### Exit the postgres configuration mode
\q
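The same fix can also be run non-interactively with psql -c (the same pattern as used elsewhere in these notes); replace the placeholders first:
psql -U postgres -d rt3 -c "select * from users where name = '[userid]';"
psql -U postgres -d rt3 -c "update users set emailaddress='[email protected]' where name='[userid]';"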
Try to restart Apache
/etc/init.d/apache-tomcat restart or service apache-tomcat restart or systemctl restart apache-tomcat
Which of these works depends on the version of ASMS.
Use the ethtool command to see speed and duplex. For interface eth0, use the following command:
ethtool eth0
Output(example):
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Link partner advertised link modes: 100baseT/Full
1000baseT/Full
Link partner advertised pause frame use: No
Link partner advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: on
Supports Wake-on: g
Wake-on: g
Current message level: 0x000000ff (255)
drv probe link timer ifdown ifup rx_err tx_err
Link detected: yes
Log in as the afa user, or switch to it (su afa), and run the script below. Then log out and log in again.
/usr/share/fa/bin/syncDbWithReportsDir.sh
In Bash (the terminal), commands are saved in the .bash_history file. If you type usernames and passwords in the terminal, they are saved as commands in that file, in clear text! Passwords entered without echo (when you do not see the text) are not saved; only what you see on screen is. To clear the file of entries, do the following. (This needs to be done in every open window (CLI login) if more than one SSH session is active.)
history -c
history -w
The flag -c == clears the history list by deleting all entries. The flag -w == overwrites the .bash_history file with the (now empty) list, thereby clearing it.
The commands you type in the active (open) window are kept in memory and written to the file on exit/logout. You can see all commands in memory by running the command history without any flags.
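For example:
history | tail -n 5
history -c && history -w
The first line shows the last five commands held in memory; the second clears the list and overwrites .bash_history with it.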
Log in to AFA as root user using SSH.
Run the following command (as a single line; if a line break appears after FireFlow_SiteConfig.pm, remove it) to back up the existing FireFlow_SiteConfig.pm file:
cp -p /usr/share/fireflow/local/etc/site/FireFlow_SiteConfig.pm /usr/share/fireflow/local/etc/site/FireFlow_SiteConfig.pm.b4debug
Add the following lines at the end of the FireFlow_SiteConfig.pm file before the line ending with 1;
vim /usr/share/fireflow/local/etc/site/FireFlow_SiteConfig.pm
Set($LogToFile, 'debug');
Set($LogMaxMsgLen, 0);
Set($LogPermissions, 2);
Restart FireFlow:
restart_fireflow
Recreate the problematic scenario that you want to troubleshoot.
Download the fireflow.zip file and attach it to the support case.
In the FireFlow_SiteConfig.pm file, remove (or comment out with #) the lines you added above to enable debugging.
Restart FireFlow:
restart_fireflow
This is to clean up the /home/afa/public_html/algosec/sessions-* folders. If that is not done the system will "run out" of inodes (check inode usage with df -ih).
df -ih
Run this to see if the garbage cleanup script have been running:
grep "clean_up_garbag.*session-" ~afa/.fa-history*[^0] | head
Example output:
[root@ASMS_2017-2 ~]# grep "clean_up_garbag.*session-" ~afa/.fa-history*[^0] | head
/home/afa/.fa-history.1:[14689] [ ] [2018-05-05 02:00:20,203] [INFO ] [auto_remove ::clean_up_garbag:705 ] Remove old files from directory /home/afa/public_html/algosec with prefix session-
The AFA traffic simulation feature has a CLI version. The benefit compared to the GUI version is that you get much more troubleshooting info when you have path discovery (map) problems.
Log in as the AFA user if you are not already
su afa
Run the test_fip command with the following flags:
test_fip -s [from-ip] -d [destination-ip] -o
-s: source
-d: destination
-o: prints fip output if given
Example:
test_fip -s 172.18.113.5 -d 172.19.150.97 -o
For instance when the application is stuck in initial plan.
In the CLI enter the commands:
ps -ef | grep 18619
ps -ef | grep run_query
Where 18619 is the ticket ID.
Then kill those PIDs
kill -9 [pid]
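A one-liner sketch that combines the lookup and the kill (standard tools; check the grep pattern first, since it kills every match):
ps -ef | grep 18619 | grep -v grep | awk '{print $2}' | xargs kill -9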
If disk utilization is high on the root ( / ) partition the system might stop working.
Also, some functions like backup and report gathering might stop working if the /data partition is full. To fix at least some of the problems, see the KB article below.
Login is needed: https://knowledge.algosec.com/skn/tu/e15153
The following command checks the vacuum activity:
grep -i vacuum /var/lib/pgsql/data/pg_log/postgresql-???.log | grep [date] > /tmp/results.txt
e.g.
grep -i vacuum /var/lib/pgsql/data/pg_log/postgresql-Tue.log | grep 2019-05-07 > /tmp/results.txt
The SSL configuration is not removed from one or all nodes.
Configure the machine_config file (/home/afa/.fa/machine_config)
vim /home/afa/.fa/machine_config
Change secure_conection=true => secure_conection=false
Restart apache-tomcat service
service apache-tomcat restart (systemctl restart apache-tomcat)
In CLI on the AlgoSec server run:
sqlite3 /home/afa/.fa/map.sqlite "SELECT DeviceName, HwName, IP FROM Interface INNER JOIN Device ON Device.DeviceID = Interface.DeviceID WHERE DeviceName IS NOT NULL AND DeviceName != \"\" AND IP != \"None\";" ".quit"
Edit: this command will also show subnet-ID with CIDR-mask.
sqlite3 /home/afa/.fa/map.sqlite "SELECT DeviceName, HwName, IP, CIDR FROM Interface INNER JOIN Device ON Device.DeviceID = Interface.DeviceID INNER JOIN Subnet ON Subnet.SubnetID = Interface.SubnetID WHERE DeviceName IS NOT NULL AND DeviceName != \"\" AND IP != \"None\";" ".quit"
For version 2018.1.x-x
If this occurs on the second device in an HA/DR cluster, check if the metro service is running.
Usually it is not, and to start it you need to start the apache-tomcat service.
service apache-tomcat start or systemctl start apache-tomcat.service
Remember to shut it down after the installation of the license.
service apache-tomcat stop or systemctl stop apache-tomcat.service
This could be caused by disallowed text (text that can be interpreted as code) in some of the comment fields. Affected fields:
Custom fields
Flow Names
Comments
Algosec KB for this: https://knowledge.algosec.com/skn/tu/e16448
Examples of what is considered code:
<script> </script>
src="*"
eval(*)
expression(*)
javascript:
vbscript:
onload*=
Also avoid writing <*> HTML tags. "*" means anything in between.
Check the following KB: https://knowledge.algosec.com/skn/c6/AlgoPedia/e4998/Login_Failed_incorrect_user_name_or_password
If that looks OK, check if two or more accounts have the same email address. This is possible if new users are added via the users_info.xml file.
If that is the case, remove the other account, or give each account its own specific email address.
There MAY be a workaround. It isn't verified, but we have used it on other occasions when an ABF application seems to be stuck updating:
First, connect to postgres
root@ITSEELM-BB4261:~# psql -U postgres
Password for user postgres:
psql (9.2.5)
Type "help" for help.
Enable pretty-print and connect to the bflow database
postgres=# \x on
Expanded display is on.
postgres=# \c bflow
You are now connected to database “bflow” as user “postgres”.
Select the application with the ID. This can be found in the URL. Example:
"https://fo.ikea.com/BusinessFlow/#/application/ --> 2797 <-- /dashboard"
bflow=# select * from applications where id=2797;
-[ RECORD 1 ]------------------+---------------------------
id                             | 2797
app_id                         | 2433
creation_ts                    | 2019-04-01 11:45:02.110999
lcname                         | mfc-le-sto-371
name                           | MFC-LE-STO-371
update_ts                      | 2019-04-01 11:48:33.302
connectivity_id                | 85479
metadata_id                    | 1100
revision_id                    | 2827
connectivity_scan_in_progress  | f
vulnerability_scan_in_progress | f
last_risk_check                |
risk_scan_in_progress          | f
risk_score                     |
risks_information_up_to_date   | f
discovery_update_in_progress   | f
This gives us some data to look at. What we need for the next step is the app_id field. Use that field in the next query:
bflow=# select * from application_metadata where appId=2433;
-[ RECORD 1 ]------+---------------------------
id                 | 1100
appid              | 2433
applicationlock    | t
creation_ts        | 2019-02-20 12:53:10.764664
update_ts          | 2019-02-20 12:53:10.817
lifecyclephase_id  | 1
name_sequence      | 1
expiration_date    |
rename_in_progress | f
This shows that the applicationlock field is indeed TRUE. Set it to FALSE using the application_metadata id, not the application id:
bflow=# update application_metadata set applicationlock=false where id=1100;
UPDATE 1
Verify that the flag is correct (False):
bflow=# select * from application_metadata where appId=2433;
-[ RECORD 1 ]------+---------------------------
id                 | 1100
appid              | 2433
applicationlock    | f
creation_ts        | 2019-02-20 12:53:10.764664
update_ts          | 2019-02-20 12:53:10.817
lifecyclephase_id  | 1
name_sequence      | 1
expiration_date    |
rename_in_progress | f
Quit when done.
bflow=# \q
The new backup needs the elasticsearch service to be running; it will fail otherwise. (Version 2018.2 ⇒)
In at least 2018.2.870 - 2018.2.900 there was a version mismatch between elasticsearch and kibana4. This could cause problems with the services and the backups.
The problem is resolved in version 2018.2.900-xyz (according to AlgoSec).
Elasticsearch:
Check the service via:
service elasticsearch status
or
systemctl status elasticsearch
Check that the service starts with the system
chkconfig | grep -i elasticsearch
Kibana:
Check the service via:
service kibana4 status
or
systemctl status kibana4
Check that the service starts with the system
chkconfig | grep -i kibana4
There is a script to start the services (or stop them) and enable the start with the system.
Script: toggle_art.sh
To run it:
/usr/share/fa/bin/toggle_art.sh
Use on/off to turn it on or off:
/usr/share/fa/bin/toggle_art.sh on
or
/usr/share/fa/bin/toggle_art.sh off
Log: /var/log/fetchmail.log
Symptom: the system does not fetch emails from the email server.
Test the function with the following command:
/usr/bin/fetchmail -c -v -p POP3 -P 995 --ssl -u [username] -L /var/log/fetchmail.log [server FQDN/ip]
Replace POP3 and 995 --ssl if needed. Enter the username and the server FQDN or IP.
If that works follow the checklist below. If not check the logfile to see what went wrong.
Check:
That the ownership is correct on the .fetchmailrc file (fireflow should be the owner of the file). As user root do:
chown fireflow:fireflow .fetchmailrc
That the permissions are correct on the .fetchmailrc file (chmod 0700).
as user root do:
chmod 0700 .fetchmailrc
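With full paths (the file lives in the fireflow home directory, as described below), the two fixes above are:
chown fireflow:fireflow /home/fireflow/.fetchmailrc
chmod 0700 /home/fireflow/.fetchmailrc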
Log in as the fireflow user (su - fireflow), and test with
/usr/bin/fetchmail
without any further information. The command gets the rest from the .fetchmailrc file in the /home/fireflow directory. Remember that dot-files are hidden files.
If it gives an error, make sure that points 1 and 2 are done and that the .fetchmailrc file is correctly filled in. In one instance the file had to be recreated to get it working.
Problem:
After upgrading to version A30.10, the AppViz (formerly BusinessFlow) menu (blue top row) does not show.
Troubleshooting:
When checking in the web browser, the redirect URL was wrong. In our case the domain was missing.
Solution:
Check the AppViz config file (/home/bflow/config/user.properties).
The following parameters need to be populated with the full URL:
afa.hostname=****************** (removed for the document)
fireflow.hostname=****************** (removed for the document)
The disk speed (read/write) in MB/s
Below is the built-in check used when upgrading the system.
0——80 MB/s: the upgrade is blocked
80——100 MB/s: user approval is needed
100——300+ MB/s: the upgrade is allowed
So for a good system the R/W speed should be at least 300 MB/s.
How to check this?
Linux tools hdparm (for read) and dd (for write).
Disclaimer!
Not sure how much this will affect the system, so do this outside of working hours for production systems.
!Disclaimer
hdparm -Ttv [partition, like /dev/sdb1]
dd if=/dev/zero of=/var/wrtest/test oflag=direct bs=128k count=32k
if = input file, of = output file. Put the output file somewhere the system can write ~4GB (with these settings). Remember to remove the output file afterwards.
On my home system with an older SATA disk I got:
root@system#dd if=/dev/zero of=/var/wrtest/test oflag=direct bs=128k count=32k
32768+0 records in
32768+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 53.6856 s, 80.0 MB/s
So 80.0 MB/s in write speed.
System Version: A32.0.x-y
In one incident a user could not log in: there was a duplicate user, one all lower case and one with the first letter in upper case. The system tried to match against the first username in the list, which was the upper-case one, not the actual account, so the user could not log in.
Solution?:
We ended up deleting the account with the upper-case first letter, and the user could log in again after that.
If possible (for uptime), allow all ports between the nodes in HA.
Twice I have had problems at customers because AlgoSec did not update their documentation with the needed ports. Not in the documentation nor in a KB. (The customer has PVLANs with ACLs.)
| Type | Port | CM ↔ Slave | CM ↔ RA | Slave ↔ Slave | HA/DR(2018.2) | HA/DR (2018.1) | HA/DR (2017.3) |
|---|---|---|---|---|---|---|---|
| icmp | - | V | V | - | V | V | V |
| ssh | tcp/22 | V | V | - | V | V | V |
| https | tcp/443 | V | V | - | V | - | - |
| syslog | udp/514 | - | - | - | V | - | - |
| hazelcast | tcp/5701 | V | - | V | V | V | - |
| activemq | tcp/61616 | V | - | - | V | - | - |
| postgresql | tcp/5432 | V | - | - | V | V | - |
| pgpool | tcp/5433 | V | - | - | V | - | - |
| HA/DR | tcp/9595 | - | - | - | V | V | - |
| heartbeat | udp/694 | - | - | - | - | - | V |
(not in use since 2018.1)
Ports required for communications (central manager, remote agents) in geo-distributed architecture.
1. For configuration procedures (adding a remote agent, adding/editing/deleting devices) that must be synchronous:
| Port | Protocol | Description | Purpose |
|---|---|---|---|
| 22 | TCP | SSH | Required for running commands upon the remote agent from the central manager |
2. For log collection, monitoring and data collection procedures that may be asynchronous:
| Port | Protocol | Description | Purpose |
|---|---|---|---|
| 443 | TCP | SOAP over HTTPS | Required for running commands and obtaining the status of the remote agent and current actions performed on it |
| 22 | TCP | SCP | Required for copying files to and from the remote agent |
3. For communications between master-slave in load-distributed architecture:
| Port | Protocol | Description | Purpose |
|---|---|---|---|
| 443 | TCP | SOAP over HTTPS | From master to slave |
| 22 | TCP | SCP-SSH | From master to slave |
| 5432 | TCP | Postgresql | |
| 5433 | TCP | PGPool |
/home/afa/.fa/risk_profiles/
Files are saved as *.xml
To search for IP addresses in Notepad++ use the following regex:
[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
or
\b(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(?1)){3}\b
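On the Linux side, a rough grep equivalent (a sketch: the first form also matches invalid octets like 999, and the second needs grep built with PCRE support):
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' [file]
grep -Po '\b(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(?1)){3}\b' [file]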
Screen(s) is a good way to make sure that updates or other long-running tasks keep going, and that information in the shell stays available even if the connection between the client and the server is severed.
You can reconnect to the session and the information is still there.
For the AlgoSec session I will use the session name algosec.
#Start a screen session
screen
OR
screen -S [name of session]
#List active screens on the system
screen -ls
OR
screen -list
#named session algosec
[root@server ~]# screen -ls
There is a screen on:
27466.algosec (Detached)
1 Socket in /var/run/screen/S-root.
#Connect to a detached screen on the system
screen -r [screenname/session]
screen -r algosec
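#Detach from the session you are in (leaves it running)
Press Ctrl-a, then d. The session keeps running and shows up as (Detached) in screen -ls.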
This can be done with single file(s)
* GZIP *
Compress a single file
gzip [filename] ## This will create a compressed file and remove the original file.
Compress multiple files at once
gzip [filename1] [filename2] [filename3]
Compress a single file and keep the original
gzip -c [filename] > [filename].gz
Decompress a gzip compressed file
gzip -d [filename] or gunzip [filename]
Decompress a gzip file but keep the original compressed file
gunzip -c [filename].gz > [filename]
* BZIP2 *
Create archive
bzip2 [filename] ## This will create a compressed file and remove the original file.
To keep the original file, use -k
bzip2 -k [filename] ## The original file is NOT deleted
Decompress archives
bzip2 -d [filename]
To build tar archives
tar -vcf [filename.tar] [file/folder1] [file/folder2] [...]
To build tar archives with gzip compression
This can be done with folder(s)
tar -zvcf [filename.tar.gz] [file/folder1] [file/folder2] [...]
To build tar archives with bzip2 compression
This can be done with folder(s)
bzip2 has stronger compression but is more CPU demanding
tar -jvcf [filename.tar.bz2] [file/folder1] [file/folder2] [...]
To Decompress tar archive
tar -xvf [filename]
To Decompress tar archive with gzip
tar -zxvf [filename].tar.gz
To Decompress tar archive with bzip2
tar -jxvf [filename].tar.bz2
Remember that the conventional form of using OpenSSL is:
openssl command command-options arguments
To encrypt the contents of the current working directory
tar -czf - * | openssl enc -e -aes256 -out secured.tar.gz
Explanation of the above command:
enc – openssl command to encode with ciphers
-e – an enc option to encrypt the input file, which in this case is the output of the tar command
-aes256 – the encryption cipher
-out – enc option used to specify the name of the output file, secured.tar.gz
Decrypt Files in Linux
To decrypt a tar archive contents, use the following command.
openssl enc -d -aes256 -in secured.tar.gz | tar xz -C test
Explanation of the above command:
-d – used to decrypt the files
-C – extract into the subdirectory named test
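Note: if the installed OpenSSL is 1.1.1 or newer, -pbkdf2 can be added for stronger key derivation; encryption and decryption must then use the same flags:
tar -czf - * | openssl enc -e -aes256 -pbkdf2 -out secured.tar.gz
openssl enc -d -aes256 -pbkdf2 -in secured.tar.gz | tar xz -C test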
You need to enable debug mode to troubleshoot BusinessFlow.
Solution
To enable debug mode in BusinessFlow:
1. Log in to AFA as root user using SSH.
2. Edit the following file: /home/bflow/config/log4j2.xml. Change the line
<property name="algosec-log-level">INFO</property>
to
<property name="algosec-log-level">DEBUG</property>
3. Restart the apache-tomcat service. (Usually not needed.)
Script download: https://algosec.sharefile.com/d-s62142b58f5b4210b
To install boostmode, unzip the script and move it to the system under /tmp/ (this can be applied on GEOs, slaves, HA secondaries, all necessary boxes).
As root:
cp /tmp/boostmode /etc/init.d/boostmode
chmod 755 /etc/init.d/boostmode
chkconfig boostmode on
service boostmode start # this may take a few minutes
After the service has started, you must restart all the relevant services:
service activemq restart
service apache-tomcat restart
service algosec-ms restart
service postgresql reload
restart_fireflow
It's important to communicate to the customer that they will also need to perform the following steps after applying any hotfixes or patches in the future, as these can overwrite some of the boostmode settings.
After successfully installing hotfixes the following should be run as root:
service boostmode start
service activemq restart
service apache-tomcat restart
service algosec-ms restart
service postgresql reload
restart_fireflow
Boost mode can be disabled just by running:
service boostmode stop
Stopping the service will roll back all the changes.
Before
[root@algosec-RA ~]# swapon -s
Filename   Type       Size     Used  Priority
/dev/dm-1  partition  7688188  0     -1
[root@algosec-RA ~]#
After
[root@algosec ~]# swapon -s
Filename    Type       Size     Used  Priority
/dev/zram0  partition  3087552  0     100
/dev/zram1  partition  3087552  0     100
/dev/zram2  partition  3087552  0     100
/dev/zram3  partition  3087552  0     100
[root@algosec ~]#
Problem:
One node in the cluster was removed from the cluster.
Logs:
HA logs (/var/log/algosec_hadr/, several logs in this location)
Messages log (/var/log/messages)
Low disk space:
On one node, in the HA logs (collect from the HA menu, algosec_conf 13), there will be an entry about low disk space and that the cluster will be broken because of it. This is logged on the node that was removed from the cluster.
Too low disk space means less than 10% free space (on any partition? Maybe, but certainly on the /data partition).
The field is populated from either the ABF database OR the supported firewall. (like Palo Alto/Panorama).
You cannot combine the two sources of users, unlike the rest of the product. Only one or the other is used at any given time.
Also, the users available to populate the user field need to be present in the firewall. More testing should be done to verify this.
The setting to change this is found under:
ABF ⇒ [name in upper right corner] ⇒ Administration ⇒ Configuration ⇒ User Awareness Support ⇒ User validation via LDAP is currently [on/off]
If on = get from firewall
If off = get from ABF user database
GUI:
1. Go to the AFA home page (the portion displaying graphs).
2. In the web browser box, type ?"!session!"
3. Press Enter. A popup displays a unique session ID.
CLI:
1. Go to the CLI and type the following command: ls -ltr /home/afa/public_html/algosec | tail
A list of session IDs displays.
2. Make a note of the latest session ID.
.tar
.zip
zcat [cat]
zmore [more]
zless [less]
Or if the .zip contains multiple files
vim [file].zip
Example
zcat testfile.zip
vim testfile.zip
*** .bz2
bzcat [cat]
bzless [less]
vim
In some versions of ASMS the session table just grows and grows. This is a bug!
To manually empty the database session table do the procedure below.
#########################
# Important before sync #
#########################

++++++++++++++++++++++++++++++++++++++++
+ Check the postgres /session db table +
++++++++++++++++++++++++++++++++++++++++
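A quick way to check how big the sessions table has grown (same psql -c pattern as the commands further down):
psql -U postgres -d rt3 -c 'select count(*) from sessions;'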
This is the procedure
On the active node (where all services are running: AFA, AFF, DB)
Stop services as follows:
/usr/share/fireflow/local/sbin/stop_fireflow.sh
service crond stop
service apache-tomcat stop
service algosec-ms stop
service postgresql stop
service activemq stop
service httpd stop
service logstash stop
service elasticsearch stop
service kibana stop
service mongod stop
service aff-boot stop
Once all services are stopped bring the postgresql service back up with 'service postgresql restart'
Once postgres is up, run the following commands from the CLI.
psql -U postgres -d rt3 -c 'truncate sessions;'
psql -U postgres -d rt3 -c 'vacuum full verbose sessions;'
Once the commands finish bring the rest of the services back online.
service crond start
service httpd start
service postgresql start
service activemq start
service apache-tomcat start
service algosec-ms start
service aff-boot start
/usr/share/fireflow/local/sbin/start_fireflow.sh
service logstash start
service elasticsearch start
service kibana start
service mongod start
################
# Up to A30.20 #
################
Fix the LVM on the devices
Lists all disks in the system (as fdisk -l)
lsblk
Start parted on the disk; moving from msdos to a GUID partition table (GPT) is needed for disks over 2TB:
parted /dev/sdb
mktable GPT
Create a partition of 50GB:
mkpart 0 1 50000
Create a partition of the rest of the disk:
mkpart 0 50001 100%
List all disks in the system (like fdisk -l):
lsblk
Create the physical volumes on the new partitions:
pvcreate /dev/sdb1
pvcreate /dev/sdb2
Extend the volume group /dev/vg_algosec with the new physical volumes:
vgextend /dev/vg_algosec /dev/sdb1
vgextend /dev/vg_algosec /dev/sdb2
Extend the logical volumes with the new partitions (-r resizes the filesystem automatically):
lvextend -r /dev/vg_algosec/vg_system /dev/sdb1
lvextend -r /dev/vg_algosec/vg_data /dev/sdb2
If -r did not extend the filesystems automatically, do the following for an ext4 filesystem:
resize2fs /dev/vg_algosec/vg_system
resize2fs /dev/vg_algosec/vg_data
For an xfs filesystem:
xfs_growfs /dev/vg_algosec/vg_system
xfs_growfs /dev/vg_algosec/vg_data
To check that the filesystem expands OK, watch it once per second:
watch -n 1 -d "df -hT"
###################
# For ASMS V32 => #
###################
The difference is that the volume group and the logical volumes have new names / different locations.
Fix the LVM on the devices
Lists all disks in the system (as fdisk -l)
lsblk
Start parted on the disk; moving from msdos to a GUID partition table (GPT) is needed for disks over 2TB:
parted /dev/sdb
mktable GPT
Create a partition of 50GB:
mkpart 0 1 50000
Create a partition of the rest of the disk:
mkpart 0 50001 100%
List all disks in the system (like fdisk -l):
lsblk
Create the physical volumes on the new partitions:
pvcreate /dev/sdb1
pvcreate /dev/sdb2
Extend the volume group /dev/centos with the new physical volumes:
vgextend /dev/centos /dev/sdb1
vgextend /dev/centos /dev/sdb2
Extend the logical volumes with the new partitions (-r resizes the filesystem automatically):
lvextend -r /dev/centos/root /dev/sdb1
lvextend -r /dev/centos/data /dev/sdb2
If -r did not extend the filesystems automatically, do the following for an ext4 filesystem:
resize2fs /dev/centos/root
resize2fs /dev/centos/data
For an xfs filesystem:
xfs_growfs /dev/centos/root
xfs_growfs /dev/centos/data
To check that the filesystem expands OK, watch it once per second:
watch -n 1 -d "df -hT"