VM Implementation of OMERO
Posted: Thu Aug 07, 2014 2:47 pm
Hello All,
I am currently working to install OMERO at my home institution. I have been working with my IT department to develop an implementation of OMERO that works within our current infrastructure. I would like to share my plan in hopes of receiving feedback from the OMERO community concerning any potential issues or faults that I may have overlooked. Although I am not a sysadmin, I have been making a conscious effort to learn the skills needed for this project. With that in mind, please forgive any ambiguities in the details below.
Goal
Create an image storage server for data generated by 15-35 users. The system should support images generated by the following equipment: Zeiss LSM 780, Zeiss Observer.Z1 (Zen Blue), Zeiss Evo SEM, and JEOL TEM. It should be geared towards two primary user groups: one generating time-lapse z-stacks and the other generating spectral images.
Proposed Solution
Run OMERO as a virtual machine (not the available virtual appliance) and use NAS for data storage and backup. One NAS will be used as the binary repository and an identical NAS will be used for data backup.
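On the backup side, my rough plan (very much open to correction) is a nightly mirror from the primary NAS to the backup NAS, plus a separate dump of the PostgreSQL database. Something along these lines, where the hostnames, paths, and database name are only placeholders, not our actual configuration:

# Mirror the binary repository from the primary NAS mount to the backup NAS
$ rsync -a --delete /mnt/really_big_disk/OMERO/ backup-nas:/volume1/OMERO-backup/
# Dump the OMERO PostgreSQL database separately, since it is not part of the repository
# ("omero" is a placeholder database name)
$ pg_dump -Fc -f omero-db-$(date +%F).dump omero

Does that division of labour (rsync for the repository, pg_dump for the database) sound sensible?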
Hardware/Software
VM: VMware vSphere
OS: CentOS 6
NAS (Primary Storage): Synology RS2414RP+ with 12x 4TB WD Red HDDs in RAID 6 (48TB raw)
NAS (Backup): Synology RS2414RP+ with 12x 4TB WD Red HDDs in RAID 6 (48TB raw)
The VM can be provisioned with up to 132GB of RAM and 12 CPU cores at 2.9GHz; however, we will only be given a small subset of these resources.
Questions
1. What resources should I allocate to the VM? The OMERO.server system requirements page gives several contradictory recommendations. For example, the headline recommendation is 8GB of RAM, but the page later offers these suggestions:
You are probably going to hit a hard ceiling between 4 and 6GB for JVM size … I would surely doubt a large deployment using more than a few GBs of RAM…
… 16, 24 or 32GB of RAM would be ideal for your OMERO server. If you have a separate database server more than 16GB of RAM may not be of much benefit to you at all.
The same applies to the CPU recommendation: the 25-50 user recommendation is for a quad-core CPU, but the page later states:
Summary: Depending on hardware layout 2 x 4, 2 x 6 system core count should be more than enough.
I may be misinterpreting this information; however, I suspect other users will encounter similar confusion.
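In the meantime, my tentative plan is to request something in the middle of those ranges and cap the Java heap explicitly. If I am reading the OMERO 5 configuration documentation correctly, that would look roughly like this (the 6G figure is just a guess on my part, not a recommendation):

# Assumes the omero.jvmcfg properties from OMERO 5; 6G is a placeholder value
$ bin/omero config set omero.jvmcfg.heap_size.blitz 6G
$ bin/omero config get omero.jvmcfg.heap_size.blitz
$ bin/omero admin restart

Please correct me if that is not how the JVM settings are meant to be used.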
2. When creating the volume for the binary repository, should I create one large 48TB volume, or should I separate it into smaller volumes? The example given on the Server Binary Repository page points the data directory at a single volume:
$ bin/omero config set omero.data.dir /mnt/really_big_disk/OMERO
The name “really_big_disk” makes me think that I should present the storage as a single 48TB volume.
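To make the question concrete, what I had pictured (purely a sketch, with placeholder hostnames and export paths) is the Synology volume exported over NFS, mounted on the CentOS VM, and omero.data.dir pointed at it:

# Placeholder NFS export and mount point, assuming the NAS volume is shared via NFS
$ sudo mkdir -p /mnt/really_big_disk
$ sudo mount -t nfs primary-nas:/volume1/omero /mnt/really_big_disk
$ bin/omero config set omero.data.dir /mnt/really_big_disk/OMERO
# /etc/fstab entry so the mount persists across reboots (again, placeholder paths)
primary-nas:/volume1/omero  /mnt/really_big_disk  nfs  defaults,_netdev  0 0

Is a single NFS mount like that the expected layout, or do people split it up?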
3. (Continuing Question 2) Is it possible to split the binary repository across several volumes? To be honest, I am still confused about what the binary repository holds and how OMERO.server accesses it. There is a section stating:
Your repository is not:
• the “database”
• the directory where your OMERO.server binaries are
• the directory where your OMERO.client (OMERO.insight, OMERO.editor or OMERO.importer) binaries are
• your PostgreSQL data directory
Unfortunately, there is no section that describes in layman’s terms what the repository IS.
Thank you all in advance for your help as I work through this installation.
Cheers,
Blair