Commit 43248f03 authored by Peter Kruczkiewicz

Merge branch 'development' into fix-639/shovill-asm

# Conflicts:
#	src/main/java/ca/corefacility/bioinformatics/irida/ria/web/analysis/AnalysisController.java
#	src/test/java/ca/corefacility/bioinformatics/irida/ria/unit/web/analysis/AnalysisControllerTest.java
parents 0e84b7dd 2559ab2f
Pipeline #7377 passed in 108 minutes and 40 seconds
Changes
=======
0.22.0 to 0.23.0
----------------
* [UI]: Added the sample coverage to the table exported from the project samples page.
* [UI/Workflow]: Added option to disable workflows/analysis types from display in IRIDA using `irida.workflow.types.disabled`. (0.22.1)
* [Developer]: Added a wait before retrying when the NCBI Uploader fails. (0.22.1)
* [UI]: Added configurable warning for analysis results and metadata pages. Set the text for this warning with `irida.analysis.warning`. This can be used to communicate that results of analyses may be preliminary.
0.21.0 to 0.22.0
----------------
* [UI]: Fixed bug where `.xls` file could not be uploaded through the file picker on the metadata upload page. (0.21.1)
* [Workflow]: Added version 0.1.8 of the [MentaLiST](https://github.com/WGS-TB/MentaLiST) pipeline, which includes a fix for downloading cgMLST schemes and a distance matrix output.
* [Workflow]: Added version 0.1.9 of the [MentaLiST](https://github.com/WGS-TB/MentaLiST) pipeline, which includes a fix for downloading cgMLST schemes and a distance matrix output.
* [UI]: Fixed bug where concatenate files was POSTing to incorrect URL. (0.21.2)
* [UI]: Fixed bug where SVG files could not be exported through the advanced visualization page. (0.21.2)
* [UI]: Fixed bug where users could not share more than nine samples. (0.21.2)
* [UI]: Moved the position of the notification system to top center.
* [Workflow]: Added version 2.0.0 of a pipeline for running [bio_hansel](https://github.com/phac-nml/bio_hansel) (version 2.0.0)
* [Workflow]: Added version 0.3 of a pipeline for running [SISTR](https://github.com/peterk87/sistr_cmd/) which now makes use of [Shovill](https://github.com/tseemann/shovill) for genome assembly.
* [Workflow]: Updated SISTR pipeline to store the following additional fields in the metadata table: serogroup, O antigen, H1, H2, and alleles matching genome.
* [UI]: Users can save analysis results back to samples from the "Share Results" tab after a pipeline completes.
* [UI]: Fixed bug where edit groups page would throw a server exception. (0.21.3)
* [UI]: Hiding user page project list for non-admins.
* [Workflow]: Fixed bug where auto updating metadata from analysis submission failed for non-admin user. (0.21.4)
* [UI]: Fixed bug where admin dropdown menu was hidden behind sequencing run sub navigation.
* [Developer]: Moved file processing chain outside of SequencingObjectService. It now runs as a scheduled task. This will help balance the processing load in multi-server deployments.
* [UI]: Ensuring `ROLE_SEQUENCER` users get "Access Denied" for any attempted UI interactions.
* [Developer]: Updated `yarn` to the current version.
* [UI/Workflow]: Pipeline analysis output files are rendered in the same order as they appear in the pipeline `irida_workflow.xml` in the `<outputs>` XML element.
* [Developer]: Can now specify which `chromedriver` to use in UI testing with `-Dwebdriver.chrome.driver=/PATH/TO/chromedriver`.
* [UI]: Fixes slow Sample cart. Quicker saving of large selections of samples to cart (`POST /cart/add/samples`) and loading of existing cart Samples (`GET /cart`).
0.20.0 to 0.21.0
----------------
......@@ -4,10 +4,39 @@ Upgrading
This document summarizes the environmental changes that need to be made when
upgrading IRIDA that cannot be automated.
0.22.0 to 0.23.0
----------------
* A new configuration value is available to display a warning on analysis result and metadata pages, communicating that an analysis result should be considered preliminary. Set the warning message with `irida.analysis.warning` in `/etc/irida/web.conf` to display it on all analysis result and metadata pages.
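For example, a minimal entry in `/etc/irida/web.conf`, using the wording from the sample configuration shipped with this release (adjust the text to suit your deployment):

```
irida.analysis.warning=Note: These results may be preliminary
```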
0.21.0 to 0.22.0
----------------
* This upgrade makes schema changes to the database and cannot be parallel deployed. The servlet container must be stopped before deploying the new `war` file.
* This upgrade changes the way the file processors handle uploaded files. File processing now takes place as a scheduled task rather than immediately after files are uploaded. For deployments with multiple IRIDA servers running against the same database, processing may not be performed by the IRIDA server the files were uploaded to and will instead be balanced among all the available servers. If you want to disable file processing on an IRIDA server, set the following property in `/etc/irida/irida.conf`: `file.processing.process=false`.
* A new pipeline, [bio_hansel](https://irida.corefacility.ca/documentation/administrator/galaxy/pipelines/bio_hansel/), has been included. You will have to make sure to install the necessary Galaxy tools listed in the documentation.
* The [MentaLiST](https://irida.corefacility.ca/documentation/administrator/galaxy/pipelines/mentalist/) pipeline has been upgraded. Please make sure to install the necessary tools in Galaxy.
* The [SISTR](https://irida.corefacility.ca/documentation/administrator/galaxy/pipelines/sistr/) pipeline has been upgraded to make use of [shovill](https://github.com/tseemann/shovill) for assembly. Please make sure to install the `shovill` Galaxy tool. Also, please make sure to follow the additional instructions in <https://irida.corefacility.ca/documentation/administrator/galaxy/pipelines/sistr/#address-shovill-related-issues>, which involve some modifications to the conda environment for `shovill`. In particular, you must:
1. Install the proper `ncurses` and `bzip2` packages from the **conda-forge** channel.
```bash
# activate the Galaxy shovill conda env
source galaxy/deps/_conda/bin/activate galaxy/deps/_conda/envs/__shovill@0.9.0
# install ncurses and bzip2 from conda-forge channel
conda install -c conda-forge ncurses bzip2
```
2. Set the `SHOVILL_RAM` environment variable in the conda environment:
```bash
cd galaxy/deps/_conda/envs/__shovill@0.9.0
mkdir -p etc/conda/activate.d
mkdir -p etc/conda/deactivate.d
echo -e "export _OLD_SHOVILL_RAM=\$SHOVILL_RAM\nexport SHOVILL_RAM=8" >> etc/conda/activate.d/shovill-ram.sh
echo -e "export SHOVILL_RAM=\$_OLD_SHOVILL_RAM" >> etc/conda/deactivate.d/shovill-ram.sh
```
Please change `8` (GB) to a value that works for `shovill` in your environment (or set it based on the `$GALAXY_MEMORY_MB` environment variable; see the linked instructions for more details, and the sketch below).
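For instance, a sketch of the `activate.d` line deriving `SHOVILL_RAM` from `GALAXY_MEMORY_MB` rather than a fixed value (this assumes Galaxy exports `GALAXY_MEMORY_MB`, in MB, into the job environment; it falls back to 4 GB otherwise):

```bash
# derive SHOVILL_RAM (GB) from GALAXY_MEMORY_MB (MB); default to 4 GB if unset
echo 'export SHOVILL_RAM=$(( ${GALAXY_MEMORY_MB:-4096} / 1024 ))' >> etc/conda/activate.d/shovill-ram.sh
```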
0.20.0 to 0.21.0
----------------
......@@ -11,7 +11,7 @@ This workflow uses the software [MentaLiST][] for typing of microbial samples di
| Tool Name | Owner | Tool Revision | Toolshed Installable Revision | Toolshed |
|:------------------------------:|:--------:|:-------------:|:-----------------------------:|:--------------------:|
| **mentalist** | dfornika | 243be7a79d9c | 7 (2018-05-31) | [Galaxy Main Shed][] |
| **mentalist** | dfornika | a6cd59f35832 | 9 (2018-06-26) | [Galaxy Main Shed][] |
| **combine_tabular_collection** | nml | b815081988b5 | 0 (2017-02-06) | [Galaxy Main Shed][] |
## Step 1: Galaxy Conda Setup
......@@ -38,7 +38,18 @@ conda install -c conda-forge ncurses bzip2
[PILON] is a Java application and may require the JVM heap size to be set (e.g. `_JAVA_OPTIONS=-Xmx4g`).
If [shovill] under Galaxy submits jobs to a [SLURM] workload manager, it may be necessary to allot about 4G more through SLURM than through [shovill] `--ram` (the default is `${SHOVILL_RAM:-4}`, i.e. 4G, as of tool revision [57d5928f456e]), so if you give [shovill] 4G, give the SLURM job 8G.
One way to adjust the `$SHOVILL_RAM` environment variable is via the [conda environment][]. That is, once you find the conda environment containing `shovill`, you can set up files in `etc/conda/activate.d` and `etc/conda/deactivate.d` that set and restore environment variables on activation and deactivation:
```bash
cd galaxy/deps/_conda/envs/__shovill@0.9.0
mkdir -p etc/conda/activate.d
mkdir -p etc/conda/deactivate.d
echo -e "export _OLD_SHOVILL_RAM=\$SHOVILL_RAM\nexport SHOVILL_RAM=8" >> etc/conda/activate.d/shovill-ram.sh
echo -e "export SHOVILL_RAM=\$_OLD_SHOVILL_RAM" >> etc/conda/deactivate.d/shovill-ram.sh
```
You could also set `SHOVILL_RAM` based on [GALAXY_MEMORY_MB][], which is assigned by Galaxy based on your job configuration and resource requirements, for example with `SHOVILL_RAM=$(( GALAXY_MEMORY_MB / 1024 ))` in the `activate.d` script.
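As a sketch (assuming `GALAXY_MEMORY_MB` is set, in MB, in the job environment and falling back to 4 GB when it is not), the `activate.d` script could contain:

```bash
# etc/conda/activate.d/shovill-ram.sh
export _OLD_SHOVILL_RAM=$SHOVILL_RAM
export SHOVILL_RAM=$(( ${GALAXY_MEMORY_MB:-4096} / 1024 ))
```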
## Step 2: Install Galaxy Tools
......@@ -92,3 +103,5 @@ If everything was successfull then all dependencies for this pipeline have been
[bioconda]: https://bioconda.github.io/
[sistr_cmd]: https://github.com/peterk87/sistr_cmd
[FAQ/Conda dependencies]: ../../../faq#installing-conda-dependencies-in-galaxy-versions--v1601
[conda environment]: https://conda.io/docs/user-guide/tasks/manage-environments.html#saving-environment-variables
[GALAXY_MEMORY_MB]: https://planemo.readthedocs.io/en/latest/writing_advanced.html#developing-for-clusters-galaxy-slots-galaxy-memory-mb-and-galaxy-memory-mb-per-slot
......@@ -2,7 +2,7 @@
"a_galaxy_workflow": "true",
"annotation": "",
"format-version": "0.1",
"name": "MentaLiST MLST v0.1.8",
"name": "MentaLiST MLST v0.1.9",
"steps": {
"0": {
"annotation": "",
......@@ -38,7 +38,7 @@
},
"1": {
"annotation": "",
"content_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_call/0.1.8",
"content_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_call/0.1.9",
"id": 1,
"input_connections": {
"input_type|fastq_collection": {
......@@ -65,27 +65,35 @@
}
],
"position": {
"left": 613.5,
"top": 354.5
"left": 608,
"top": 421
},
"post_job_actions": {
"ChangeDatatypeActionoutput_file": {
"action_arguments": {
"newtype": "tabular"
},
"action_type": "ChangeDatatypeAction",
"output_name": "output_file"
}
},
"post_job_actions": {},
"tool_errors": null,
"tool_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_call/0.1.8",
"tool_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_call/0.1.9",
"tool_shed_repository": {
"changeset_revision": "243be7a79d9c",
"changeset_revision": "f29e7738bb64",
"name": "mentalist",
"owner": "dfornika",
"tool_shed": "toolshed.g2.bx.psu.edu"
},
"tool_state": "{\"input_type\": \"{\\\"fastq_collection\\\": {\\\"__class__\\\": \\\"RuntimeValue\\\"}, \\\"sPaired\\\": \\\"collections\\\", \\\"__current_case__\\\": 1}\", \"__rerun_remap_job_id__\": null, \"kmer_db\": \"{\\\"__class__\\\": \\\"RuntimeValue\\\"}\", \"__page__\": 0}",
"tool_version": "0.1.8",
"tool_version": "0.1.9",
"type": "tool",
"uuid": "6fc0eeb6-37ea-42be-89b6-2285108a9c0c",
"uuid": "107c79f4-95d9-46a3-ba45-3cca6a0309e7",
"workflow_outputs": [
{
"label": null,
"output_name": "output_file",
"uuid": "c3d9b54a-c322-49fa-89ab-aea8727acaaa"
"uuid": "048a4ce4-c211-4d7d-9074-d6b913c269f5"
}
]
},
......@@ -114,8 +122,8 @@
}
],
"position": {
"left": 895.5,
"top": 462.5
"left": 941.5,
"top": 540
},
"post_job_actions": {
"RenameDatasetActionoutput": {
......@@ -137,18 +145,18 @@
"tool_state": "{\"texts\": \"{\\\"__class__\\\": \\\"RuntimeValue\\\"}\", \"__rerun_remap_job_id__\": null, \"__page__\": 0}",
"tool_version": "0.1",
"type": "tool",
"uuid": "ce18006f-79ba-43ca-8d95-95a2187498fe",
"uuid": "877d3d0b-58bc-48a5-87aa-7cc1f0a69a84",
"workflow_outputs": [
{
"label": null,
"output_name": "output",
"uuid": "49bf1d13-92b7-43ed-b087-6bd6569b7b9d"
"uuid": "e30ef8d2-0acd-4dc8-841a-ec87c48ec2f3"
}
]
},
"3": {
"annotation": "",
"content_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_distance/0.1.8",
"content_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_distance/0.1.9",
"id": 3,
"input_connections": {
"input": {
......@@ -171,8 +179,8 @@
}
],
"position": {
"left": 1178,
"top": 407.5
"left": 1243,
"top": 435.5
},
"post_job_actions": {
"RenameDatasetActionoutput": {
......@@ -184,15 +192,15 @@
}
},
"tool_errors": null,
"tool_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_distance/0.1.8",
"tool_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_distance/0.1.9",
"tool_shed_repository": {
"changeset_revision": "243be7a79d9c",
"changeset_revision": "f29e7738bb64",
"name": "mentalist",
"owner": "dfornika",
"tool_shed": "toolshed.g2.bx.psu.edu"
},
"tool_state": "{\"input\": \"{\\\"__class__\\\": \\\"RuntimeValue\\\"}\", \"__rerun_remap_job_id__\": null, \"__page__\": 0}",
"tool_version": "0.1.8",
"tool_version": "0.1.9",
"type": "tool",
"uuid": "26f4ee0b-377f-43f9-9e57-fae677cb1b9c",
"workflow_outputs": [
......@@ -202,7 +210,64 @@
"uuid": "5364b385-5273-4642-b583-64a70ad86003"
}
]
},
"4": {
"annotation": "",
"content_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_tree/0.1.9",
"id": 4,
"input_connections": {
"input": {
"id": 3,
"output_name": "output"
}
},
"inputs": [
{
"description": "runtime parameter for tool MentaLiST Tree",
"name": "input"
}
],
"label": null,
"name": "MentaLiST Tree",
"outputs": [
{
"name": "output",
"type": "txt"
}
],
"position": {
"left": 1528.5,
"top": 646
},
"post_job_actions": {
"RenameDatasetActionoutput": {
"action_arguments": {
"newname": "mentalist_nj_tree.newick"
},
"action_type": "RenameDatasetAction",
"output_name": "output"
}
},
"tool_errors": null,
"tool_id": "toolshed.g2.bx.psu.edu/repos/dfornika/mentalist/mentalist_tree/0.1.9",
"tool_shed_repository": {
"changeset_revision": "f29e7738bb64",
"name": "mentalist",
"owner": "dfornika",
"tool_shed": "toolshed.g2.bx.psu.edu"
},
"tool_state": "{\"input\": \"{\\\"__class__\\\": \\\"RuntimeValue\\\"}\", \"__rerun_remap_job_id__\": null, \"__page__\": 0}",
"tool_version": "0.1.9",
"type": "tool",
"uuid": "81837fd0-7458-4e19-afbd-3c03c97e3151",
"workflow_outputs": [
{
"label": null,
"output_name": "output",
"uuid": "b0e0f6d7-5c05-405b-9a02-4478480d6093"
}
]
}
},
"uuid": "7221366e-de18-4833-9f92-5b8901e5a9bb"
"uuid": "124565f8-a1e4-4314-8996-54e9432ae7fb"
}
\ No newline at end of file
......@@ -135,3 +135,7 @@ ncbi.upload.baseDirectory=tmp
ncbi.upload.port=21
#Default namespace to preface file identifiers
ncbi.upload.namespace=IRIDA
# A list of workflow types to disable from display in the web interface
# For example `irida.workflow.types.disabled=bio_hansel,refseq_masher`
#irida.workflow.types.disabled=
......@@ -21,4 +21,8 @@ mail.server.username=IRIDA Platform
# The e-mail address for contacting an administrator for help. Uncomment
# this and modify to have your own e-mail address rendered in the 'Help' menu.
# If this is left commented out, no contact e-mail appears in the 'Help' menu.
# help.contact.email=you@example.org
\ No newline at end of file
# help.contact.email=you@example.org
# This value may be uncommented and edited to display a dismissible warning
# above all analysis results and metadata pages.
#irida.analysis.warning=Note: These results may be preliminary
\ No newline at end of file
......@@ -35,6 +35,7 @@ title: "Developer"
<li><a href="data-model">Data Model</a></li>
<li>User Interface
<ul>
<li><a href="interface/testing">Selenium user interface testing</a></li>
<li>
<a href="interface/datatables">Datatables</a>
</li>
---
layout: default
---
User interface testing using [Selenium] and [chromedriver]
==========================================================
In order to run the UI tests with [Selenium], you will need the appropriate version of [chromedriver] for your version of [Chrome] or [Chromium].
Initialize DB with Liquibase and run UI test server
---------------------------------------------------
### Create integration test DB
MariaDB/MySQL commands for dropping and re-creating the `irida_integration_test` database and granting the `test` user all privileges on it:
```bash
sudo mysql << EOF
DROP DATABASE IF EXISTS irida_integration_test;
CREATE DATABASE irida_integration_test;
GRANT ALL PRIVILEGES ON irida_integration_test.* TO 'test'@'localhost';
EOF
```
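To confirm the database and grant are in place (this assumes the `test` user already exists and you know its password), you can run something like:

```bash
# should list irida_integration_test if the grant worked
mysql -u test -p -e "SHOW DATABASES LIKE 'irida_integration_test';"
```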
### DB initialization with Liquibase
You'll need the DB in the proper state to run the UI tests. To get it there, run Liquibase to apply all of the necessary DB migration scripts, which can be done with the following command:
```bash
# run IRIDA with the `it` Spring profile active
mvn clean jetty:run -B -Pui_testing \
-Dspring.profiles.active=it \
-Djdbc.url=jdbc:mysql://localhost:3306/irida_integration_test \
-Dirida.it.rootdirectory=/tmp/irida/ \
-Dhibernate.hbm2ddl.import_files="" \
-Dhibernate.hbm2ddl.auto="" \
-Dliquibase.update.database.schema=true
```
**NOTE:**
- Running Liquibase is required to get the DB into the right state! `-Dhibernate.hbm2ddl.import_files=""`, `-Dhibernate.hbm2ddl.auto=""` and `-Dliquibase.update.database.schema=true` are required to run Liquibase. You might need to wipe the integration test DB and create it again.
- **BEWARE:** `-Djdbc.url=jdbc:mysql://localhost:3306/irida_integration_test` is VERY IMPORTANT. Specify explicitly so you don't point at your development DB and have it accidentally wiped...
- You don't need to specify `-Dsequence.file.base.directory`, `*reference*` and `*output*` if they all share the same root directory and use the directory names `sequence`, `reference` and `output`; those values are implied automatically.
- Set `-Dirida.it.headless=false` so you can see the UI tests in action!
### [chromedriver] and [Chrome]/[Chromium]
You need to match the version of [chromedriver] to your version of [Chrome]. For example, [Chrome] v66 needs [chromedriver] v2.39 or v2.40 (see https://sites.google.com/a/chromium.org/chromedriver/downloads).
You can specify which [chromedriver] to use with `-Dwebdriver.chrome.driver=/PATH/TO/chromedriver`; otherwise, the `node_modules` version of [chromedriver] is used for running UI tests.
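A quick way to check which versions you have installed (command names may differ by distribution):

```bash
google-chrome --version    # or: chromium-browser --version
chromedriver --version
```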
Running specific UI tests through [IntelliJ] IDEA
-----------------------------------------------
*Recommended [IntelliJ] Default/Template JUnit VM Options Configuration:*
```
-ea
-Dspring.profiles.active=it
-Djdbc.url=jdbc:mysql://localhost:3306/irida_integration_test
-Dirida.it.rootdirectory=/tmp/irida/
-Dirida.it.nosandbox=true
-Dirida.it.headless=false
-Dwebdriver.chrome.driver=/PATH/TO/chromedriver
```
![](images/intellij-ui-tests-default-junit.png)
After starting the IRIDA UI test server (i.e. `mvn clean jetty:run -B -Pui_testing ...`), you can run the specific integration tests you're interested in through your IDE by going to the `*IT.java` file of interest and clicking the **Run Test** icon in the left gutter beside the first line of the class or function of interest.
![](images/intellij-ui-tests-run-in-ide.png)
**TIP:** Re-build (Ctrl+Shift+F9) to register changes to tests rather than re-building the whole project!
[chromedriver]: http://chromedriver.chromium.org/
[Chrome]: https://www.google.com/chrome/
[Chromium]: https://www.chromium.org/
[Selenium]: http://www.seleniumhq.org/
[IntelliJ]: https://www.jetbrains.com/idea/
\ No newline at end of file
......@@ -20,7 +20,8 @@ RUN install-repository "-r 4287dd541327 --url https://irida.corefacility.ca/gala
install-repository "-r 5c8ff92e38a9 --url https://toolshed.g2.bx.psu.edu -o nml --name sistr_cmd" && \
install-repository "-r 26df66c32861 --url https://toolshed.g2.bx.psu.edu -o nml --name refseq_masher" && \
install-repository "-r b815081988b5 --url https://toolshed.g2.bx.psu.edu -o nml --name combine_tabular_collection" && \
install-repository "-r 1d9e3950ce61 --url https://toolshed.g2.bx.psu.edu -o dfornika --name mentalist" && \
install-repository "-r a6cd59f35832 --url https://toolshed.g2.bx.psu.edu -o dfornika --name mentalist" && \
install-repository "-r 4654c51dae72 --url https://toolshed.g2.bx.psu.edu -o nml --name bio_hansel" && \
find /galaxy-central/tool_deps/ -iname '.git' | xargs -I {} rm -rf {}
RUN apt update && apt install --yes gnuplot && apt-get autoremove -y && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
......@@ -36,10 +37,14 @@ RUN /tool_deps/_conda/bin/conda config --add channels conda-forge && \
/tool_deps/_conda/bin/conda config --add channels defaults && \
/tool_deps/_conda/bin/conda config --add channels r && \
/tool_deps/_conda/bin/conda config --add channels bioconda && \
/tool_deps/_conda/bin/conda update conda && \
/tool_deps/_conda/bin/conda create -y --name __shovill@0.9.0 shovill=0.9.0 && \
/tool_deps/_conda/bin/conda install -y -c conda-forge --name __shovill@0.9.0 ncurses bzip2 && \
/tool_deps/_conda/bin/conda create -y --name __sistr_cmd@1.0.2 sistr_cmd=1.0.2 && \
/tool_deps/_conda/bin/conda create -y --name __mentalist@0.1.3 mentalist=0.1.3 && \
/tool_deps/_conda/bin/conda create -y --name __mentalist@0.1.9 mentalist=0.1.9 && \
/tool_deps/_conda/bin/conda create -y --name __biopython@_uv_ biopython && \
/tool_deps/_conda/bin/conda create -y --name __r-base@_uv_ r-base && \
/tool_deps/_conda/bin/conda create -y --name __bio_hansel@2.0.0 bio_hansel=2.0.0 && \
/tool_deps/_conda/bin/conda create -y --name __refseq_masher@0.1.1 refseq_masher=0.1.1 && \
/tool_deps/_conda/bin/conda create -y --name __samtools@_uv_ samtools && \
/tool_deps/_conda/bin/conda clean -y -itps && \
......@@ -51,7 +56,7 @@ ADD data/tool-data.tar.gz /galaxy-central/
# Fix up permissions and prevent mentalist erroring out on first execution by running mentalist during the build
RUN echo -e "@\nATCG\n+\nIIII" > /tmp/file_1.fastq && \
echo -e "@\nATCG\n+\nIIII" > /tmp/file_2.fastq && \
bash -c "source /tool_deps/_conda/envs/__mentalist\@0.1.3/bin/activate /tool_deps/_conda/envs/__mentalist\@0.1.3/ && mentalist call -o /tmp/mentalist-test -s x --db /galaxy-central/tool-data/mentalist_databases/salmonella_enterica_pubmlst_k31_2018-04-04/salmonella_enterica_pubmlst_k31_2018-04-04.jld /tmp/file_1.fastq /tmp/file_2.fastq" && \
bash -c "source /tool_deps/_conda/bin/activate /tool_deps/_conda/envs/__mentalist\@0.1.9/ && mentalist call -o /tmp/mentalist-test -s x --db /galaxy-central/tool-data/mentalist_databases/salmonella_enterica_pubmlst_k31_2018-04-04/salmonella_enterica_pubmlst_k31_2018-04-04.jld /tmp/file_1.fastq /tmp/file_2.fastq" && \
chown -R galaxy:galaxy /galaxy-central/tool-data/ && \
chown -R galaxy:galaxy /tool_deps/_conda/ && \
rm -rf /tmp/file_*.fastq
......@@ -19,14 +19,16 @@ EOF
# install docker so that we can pull down the IRIDA Galaxy Docker container:
yum -y install docker-engine
mkdir -p /home/vagrant/docker
# switch to `devicemapper` for docker as I found that the IRIDA Galaxy docker image failed to run with the default storage driver (overlay)
sed -i -e 's@ExecStart=/usr/bin/dockerd@ExecStart=/usr/bin/dockerd --storage-driver=devicemapper@' /usr/lib/systemd/system/docker.service
sed -i -e 's@ExecStart=/usr/bin/dockerd@ExecStart=/usr/bin/dockerd --storage-driver=devicemapper --storage-opt dm.basesize=40G -g /home/vagrant/docker@' /usr/lib/systemd/system/docker.service
systemctl enable docker
systemctl start docker
# run the galaxy container, --restart=always makes sure it starts up on boot
mkdir -p /home/irida/data/galaxy-export
docker run --name galaxy -d -p 9090:80 -v /home/irida/data/galaxy-export/:/export/ -v /home/irida/data/sequencing:/home/irida/data/sequencing phacnml/galaxy-irida-17.01:0.21.0
docker pull phacnml/galaxy-irida-17.01
docker run --name galaxy -d -p 9090:80 -v /home/irida/data/galaxy-export/:/export/ -v /home/irida/data/sequencing:/home/irida/data/sequencing phacnml/galaxy-irida-17.01
# wait for galaxy to succeed starting up for the first time, so we don't have to wait for postgres to start up next time
wait_for_galaxy
......@@ -41,7 +43,7 @@ After=docker.service
[Service]
ExecStartPre=-/usr/bin/docker rm --force galaxy
ExecStart=/usr/bin/docker run --name galaxy -d -p 9090:80 -v /home/irida/data/galaxy-export/:/export/ -v /home/irida/data/sequencing:/home/irida/data/sequencing phacnml/galaxy-irida-17.01:0.21.0
ExecStart=/usr/bin/docker run --name galaxy -d -p 9090:80 -v /home/irida/data/galaxy-export/:/export/ -v /home/irida/data/sequencing:/home/irida/data/sequencing phacnml/galaxy-irida-17.01
[Install]
WantedBy=multi-user.target
......@@ -17,13 +17,13 @@ mkdir -p /etc/irida/analytics
chown -R tomcat:tomcat /home/irida/
cd /home/irida
curl -O https://irida.corefacility.ca/downloads/webapp/irida-latest.war
curl --insecure -O https://irida.corefacility.ca/downloads/webapp/irida-latest.war
ln -s /home/irida/irida-latest.war /var/lib/tomcat/webapps/irida.war
curl -O https://irida.corefacility.ca/documentation/administrator/web/config/irida.conf
curl --insecure -O https://irida.corefacility.ca/documentation/administrator/web/config/irida.conf
ln -s /home/irida/irida.conf /etc/irida/irida.conf
curl -O https://irida.corefacility.ca/documentation/administrator/web/config/web.conf
curl --insecure -O https://irida.corefacility.ca/documentation/administrator/web/config/web.conf
ln -s /home/irida/web.conf /etc/irida/web.conf
sed -i 's_server.base.url=.*_server.base.url=http://localhost:48888/irida/_' /etc/irida/web.conf
......@@ -31,9 +31,9 @@
"disk_size": 1000000,
"guest_os_type": "RedHat_64",
"http_directory": "http",
"iso_checksum": "74391081d998963b21483113143eb172e7a4e5f4",
"iso_checksum": "4eead850afed0fc7d170c23bfabfed379419db79",
"iso_checksum_type": "sha1",
"iso_url": "http://muug.mb.ca/mirror/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1611.iso",
"iso_url": "http://muug.ca/mirror/centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1804.iso",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
......@@ -4,7 +4,7 @@
<groupId>ca.corefacility.bioinformatics</groupId>
<artifactId>irida</artifactId>
<packaging>war</packaging>
<version>0.22.0-SNAPSHOT</version>
<version>0.23.0-SNAPSHOT</version>
<name>irida</name>
<url>http://www.irida.ca</url>
......@@ -1082,7 +1082,7 @@
<errorprone.version>2.5</errorprone.version>
<frontend-maven-plugin.version>1.6</frontend-maven-plugin.version>
<node.version>v8.10.0</node.version>
<yarn.version>v1.5.1</yarn.version>
<yarn.version>v1.7.0</yarn.version>
<jgravatar.version>1.0</jgravatar.version>
<!-- project configuration -->
......@@ -8,6 +8,7 @@ import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Set;
import java.util.UUID;
import java.util.stream.Collectors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
......@@ -24,6 +25,7 @@ import com.google.common.collect.Sets;
import ca.corefacility.bioinformatics.irida.exceptions.IridaWorkflowException;
import ca.corefacility.bioinformatics.irida.exceptions.IridaWorkflowLoadException;
import ca.corefacility.bioinformatics.irida.model.enums.AnalysisType;
import ca.corefacility.bioinformatics.irida.model.enums.config.AnalysisTypeSet;
import ca.corefacility.bioinformatics.irida.model.workflow.IridaWorkflow;
import ca.corefacility.bioinformatics.irida.model.workflow.config.IridaWorkflowIdSet;
import ca.corefacility.bioinformatics.irida.model.workflow.config.IridaWorkflowSet;
......@@ -42,11 +44,11 @@ public class IridaWorkflowsConfig {
private static final Logger logger = LoggerFactory.getLogger(IridaWorkflowsConfig.class);
private static final String IRIDA_DEFAULT_WORKFLOW_PREFIX = "irida.workflow.default";
private static final String IRIDA_DISABLED_TYPES = "irida.workflow.types.disabled";
@Autowired
private Environment environment;
/**
* Gets the {@link Path} for all IRIDA workflow types.
*
......@@ -143,20 +145,32 @@ public class IridaWorkflowsConfig {
return new IridaWorkflowLoaderService(workflowDescriptionUnmarshaller());
}
/**
* Builds a {@link AnalysisTypeSet} of {@link AnalysisType}s which are to be disabled from
* the UI.
*
* @return A {@link AnalysisTypeSet} of {@link AnalysisType}s which are to be disabled from the UI.
*/
@Bean
public AnalysisTypeSet disabledAnalysisTypes() {
String[] disabledWorkflowTypes = environment.getProperty(IRIDA_DISABLED_TYPES, String[].class);
return new AnalysisTypeSet(Sets.newHashSet(disabledWorkflowTypes).stream().map(t -> AnalysisType.fromString(t))
.collect(Collectors.toSet()));
}
/**
* Builds a new {@link IridaWorkflowsService}.
*
* @param iridaWorkflows
* The set of IridaWorkflows to use.
* @param defaultIridaWorkflows
* The set of ids for default workflows to use.
* @param iridaWorkflows The set of IridaWorkflows to use.
* @param defaultIridaWorkflows The set of ids for default workflows to use.
* @param disabledAnalysisTypes The set of disabled {@link AnalysisType}s.
* @return A new {@link IridaWorkflowsService}.
* @throws IridaWorkflowException
* If there was an error loading a workflow.
* @throws IridaWorkflowException If there was an error loading a workflow.
*/
@Bean
public IridaWorkflowsService iridaWorkflowsService(IridaWorkflowSet iridaWorkflows,
IridaWorkflowIdSet defaultIridaWorkflows) throws IridaWorkflowException {
return new IridaWorkflowsService(iridaWorkflows, defaultIridaWorkflows);
IridaWorkflowIdSet defaultIridaWorkflows, AnalysisTypeSet disabledAnalysisTypes)
throws IridaWorkflowException {
return new IridaWorkflowsService(iridaWorkflows, defaultIridaWorkflows, disabledAnalysisTypes);
}
}
package ca.corefacility.bioinformatics.irida.exceptions;
import java.util.UUID;
import ca.corefacility.bioinformatics.irida.model.enums.AnalysisType;
/**
* Exception that gets thrown if a workflow is not displayable.
*
*
*/
public class IridaWorkflowNotDisplayableException extends IridaWorkflowLoadException {
private static final long serialVersionUID = 4583888416199829376L;
/**
* Constructs a new {@link IridaWorkflowNotDisplayableException} with the given
* workflow identifier.
*
* @param workflowId
* The identifier of the workflow.
*/
public IridaWorkflowNotDisplayableException(UUID workflowId) {
super("The workflow " + workflowId + " has been disabled");
}
/**
* Constructs a new {@link IridaWorkflowNotDisplayableException} with the given
* analysis type.
*
* @param analysisType
* The analysis type of the workflow.
*/
public IridaWorkflowNotDisplayableException(AnalysisType analysisType) {
super("Workflows for type " + analysisType + " have been disabled");
}
/**
* Constructs a new {@link IridaWorkflowNotDisplayableException} with the given
* workflow name.
*
* @param workflowName
* The name of the workflow.
*/
public IridaWorkflowNotDisplayableException(String workflowName) {
super("Workflows with name " + workflowName + " have been disabled");
}
}
package ca.corefacility.bioinformatics.irida.model.enums.config;
import java.util.Set;
import com.google.common.collect.Sets;
import ca.corefacility.bioinformatics.irida.model.enums.AnalysisType;
/**
* A class wrapping around {@link AnalysisType} to contain them in a set for
* Spring configuration.
*/
public class AnalysisTypeSet {
private Set<AnalysisType> analysisTypes;
/**
* Builds a new {@link AnalysisTypeSet} of {@link AnalysisType}s.
*
* @param analysisTypes The set of {@link AnalysisType}s to build.
*/
public AnalysisTypeSet(Set<AnalysisType> analysisTypes) {
this.analysisTypes = analysisTypes;
}
/**
* Builds an empty {@link AnalysisTypeSet}.
<