Welcome to the user guide for the HPC-T-Annotator web interface. This guide explains how to use the interface to configure and generate a customized software package for parallelizing annotation software. Follow the steps below to use the interface effectively.
The web interface consists of two panels:
Below are two example configurations for the upper panel (scheduler SLURM and scheduler None). Depending on your selection, different configuration options are displayed.
Configure the settings for the Slurm workload manager:
Below is an example screenshot of the SLURM settings.
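For orientation, Slurm settings of this kind typically end up as `#SBATCH` directives in the generated job scripts. The sketch below is illustrative only; the directive names and values are assumptions, not the exact fields of the form.

```shell
#!/bin/bash
#SBATCH --partition=normal        # queue/partition to submit to (example value)
#SBATCH --nodes=1                 # number of nodes per job (example value)
#SBATCH --ntasks-per-node=8       # tasks (cores) per node (example value)
#SBATCH --time=24:00:00           # wall-time limit (example value)
```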
If 'None' is selected in the upper panel, the following parameters must be set.
Configure basic settings when no workload manager is selected:
Below is an example screenshot of the settings when 'None' is selected.
In the bottom panel, the first step is to select the aligner to use (BLAST or DIAMOND). Once the software is selected, the following parameters must be filled in on the form:
Note that the Additional options field accepts only options related to the computation itself; options controlling thread usage or input/output file names are not accepted. Therefore the options -p and -o for DIAMOND, and -out and -num_threads for BLAST, are rejected. Use this field to provide any additional command-line options for the annotation tool, entered exactly as you would on the command line and separated by spaces. For example, you can specify "-evalue 1e-5 -max_target_seqs 10" for a BLAST execution.
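A hypothetical sketch of where the field's contents end up: the generated scripts manage the thread count and input/output paths themselves, which is presumably why flags such as -num_threads/-out (BLAST) or -p/-o (DIAMOND) are rejected. File and database names below are placeholders, and the command is only echoed, not executed.

```shell
# Contents of the "Additional options" field (computation flags only)
EXTRA_OPTS="-evalue 1e-5 -max_target_seqs 10"

# Hypothetical assembled command: query/db/output/threads are filled in by
# the generated scripts, the extra options are appended at the end.
CMD="blastp -query chunk_001.fasta -db example_db -out tmp/chunk_001.tsv -num_threads 8 $EXTRA_OPTS"
echo "$CMD"
```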
After configuring the settings for both workload manager and annotation software, you can:
Once generated, you can download the software package in TAR format.
The following steps must be performed on the HPC cluster.
After downloading the TAR archive (and, if needed, uploading it to the HPC cluster), extract the archive and run the start script.
Then, if you are on an HPC cluster with the Slurm workload manager:
Otherwise, if you are on an architecture without a workload manager:
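The extraction and launch steps above can be sketched as follows. The archive and script names (HPC-T-Annotator.tar, start.sh) are assumptions; use the names from your actual download. A stand-in archive is created first so the sequence can be tried anywhere.

```shell
# --- stand-in archive (replace with your real download) -------------------
mkdir -p HPC-T-Annotator
printf '%s\n' '#!/bin/sh' 'echo pipeline started' > HPC-T-Annotator/start.sh
tar -cf HPC-T-Annotator.tar HPC-T-Annotator && rm -r HPC-T-Annotator

# --- the actual steps on the cluster --------------------------------------
tar -xf HPC-T-Annotator.tar              # extract the downloaded archive
cd HPC-T-Annotator
if command -v sbatch >/dev/null 2>&1; then
    sbatch start.sh                      # Slurm: submit the start script
else
    nohup sh start.sh > start.log 2>&1 &  # no scheduler: run detached
    wait                                 # (demo only: wait for completion)
fi
```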
Once the entire computation has finished (check the general.log file for the status), the final result will be in the tmp/final_blast.tsv file.
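A quick way to inspect the final table, using the tmp/final_blast.tsv path named above. A stand-in file is created first so the commands can be tried outside the cluster.

```shell
# Stand-in result file (on the cluster this is produced by the pipeline)
mkdir -p tmp
printf 'query1\tsubject1\t98.5\n' > tmp/final_blast.tsv

head tmp/final_blast.tsv       # peek at the merged annotation table
wc -l < tmp/final_blast.tsv    # count result rows
```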
If all jobs have finished, you can check the logs.
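A hedged example of inspecting the logs: general.log is the file named in this guide, while its contents and location are assumptions here. A stand-in log is written first so the commands run anywhere.

```shell
# Stand-in log (on the cluster this is written by the pipeline)
printf '%s\n' 'job 1: DONE' 'job 2: DONE' > general.log

tail general.log          # show the latest status lines of the run
```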
You can also check the logs for job errors.
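A hedged sketch of scanning the logs for error messages; the log file name and the error wording are assumptions, so adapt the pattern to the logs produced by your run.

```shell
# Stand-in log containing one failure (on the cluster, use the real logs)
printf '%s\n' 'job 1: DONE' 'job 2: ERROR: db not found' > general.log

# Case-insensitive search for error lines, with line numbers
grep -in "error" general.log || echo "no errors found"
# On Slurm clusters, sacct can additionally show the exit state of
# finished jobs (e.g. sacct -j <jobid>).
```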