Systems Group - New pages [en]
https://systems.cs.odu.edu/index.php?title=Special:NewPages&feed=atom&hideredirs=1&limit=50&offset=&namespace=0&username=&tagfilter=
2019-08-24T21:58:47Z
From Systems Group MediaWiki 1.22.4


Previous Staff Ryan Knauer
https://systems.cs.odu.edu/Previous_Staff_Ryan_Knauer
2019-08-21T17:13:38Z, Davidros: Created page with "poopy butts"
----
poopy butts


HPC Services FAQ
https://systems.cs.odu.edu/HPC_Services_FAQ
2019-07-16T02:23:40Z, Aaronolah
----
Back to [[ HPC Services ]]

This page contains questions you may have while using the HPC services. Please note that this page assumes your default shell is tcsh.

Commands for you to type or copy/paste into the shell are shown in preformatted blocks like the ones below.

= Frequently Asked Questions =

'''Q: I'm prompted for my password every time I try to connect to a compute node. How do I prevent this?'''
----
A: There are a few steps you will need to complete to enable passwordless logins:

1. Start the key generator:

 ssh-keygen -t rsa

This creates your RSA key pair and asks for the file name and path where the keys should be saved. Press Enter to accept the defaults, which place the keys under /home/username/.ssh. Please enter a passphrase when prompted. DO NOT leave it blank! The passphrase protects your private key and makes it more difficult for someone to compromise your account.

2. Add your key to your list of authorized keys:

 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys2

This appends your newly generated public key to the list of keys allowed for public-key authentication.

3. Set up ssh-agent:

For tcsh:

 cat >> ~/.tcshrc << "EOF"
 if ($TERM != "dumb") then
     if (! $?SSH_AUTH_SOCK) then
         eval `ssh-agent -c`
         ssh-add
     endif
 endif
 EOF

For bash:

 cat >> ~/.bashrc << "EOF"
 if [ "$TERM" != "dumb" ]; then
     if [ -z "$SSH_AUTH_SOCK" ]; then
         eval `ssh-agent -s`
         ssh-add
     fi
 fi
 EOF

This starts ssh-agent every time you log in. It will prompt you once for the passphrase you used when you generated your key pair; you won't need to type it again for the remainder of your session.

For tcsh:

 cat >> ~/.logout << "EOF"
 if ($?SSH_AGENT_PID) then
     kill $SSH_AGENT_PID
 endif
 EOF
 chmod 700 ~/.logout

For bash:

 cat >> ~/.bash_logout << "EOF"
 if [ -n "$SSH_AGENT_PID" ]; then
     kill $SSH_AGENT_PID
 fi
 EOF
 chmod 700 ~/.bash_logout

This configures your account to close any of your active ssh-agent sessions on logout and sets the proper permissions on the logout file.
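Once the key and agent are in place, you can sanity-check the setup from the head node. This is only a quick sketch; compute-0-0 stands in for whichever compute node you want to test against:

 ssh-add -l                   # should list your key's fingerprint once the agent is running
 ssh compute-0-0 hostname     # compute-0-0 is a placeholder; should print the node's name without a password prompt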
'''Q: When I try to ssh into a compute node I get an error about the REMOTE HOST IDENTIFICATION and I'm returned to a prompt on the head node. How do I fix this?'''
----
A: To fix this, you can copy/paste the following into your shell:

 cat >> ~/.ssh/config << "EOF"
 Host compute-0-*
     UserKnownHostsFile=/etc/ssh/ssh_known_hosts
 Host *
     UserKnownHostsFile=~/.ssh/known_hosts
     ForwardAgent yes
     ForwardX11 no
 EOF

'''Q: Do I need to use a specific compiler for MPI programs?'''
----
A: Yes. The compiler must match your chosen MPI implementation; i.e., you cannot compile a program with the MPICH2 version of the compiler and use the OpenMPI version of 'mpiexec' to run it.

Below is a table showing which MPI implementation corresponds to which compiler and execution program for someone trying to run a parallel program written in C.

{| class="wikitable"
! Implementation !! Compiler !! Execution Program
|-
| MPICH || /export/software/mpich/bin/mpicc || /export/software/mpich/bin/mpirun
|-
| MPICH2 || /export/software/mpich2/bin/mpicc || /export/software/mpich2/bin/mpiexec
|-
| OpenMPI || /export/software/openmpi/bin/mpicc || /export/software/openmpi/bin/mpirun
|}
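As a quick end-to-end illustration of keeping the toolchain consistent (hello.c is a placeholder for your own source file), a program built with the OpenMPI compiler from the table above must also be launched with the OpenMPI launcher:

 /export/software/openmpi/bin/mpicc -o hello hello.c    # hello.c is a placeholder source file
 /export/software/openmpi/bin/mpirun -np 4 ./hello      # brief interactive test only

For anything beyond a brief test, submit the run through SGE as described in the questions below.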
'''Q: Should I submit jobs to SGE or run programs using mpiexec, mpirun, etc.?'''
----
A: To ensure proper resource utilization, we recommend that you run your parallel programs through SGE.

'''Q: How do I submit a job using SGE?'''
----
A: You should use the qsub command:

 qsub /path/to/yourjobscript

'''Q: What should an SGE parallel job script look like?'''
----
A: The following is an example script using the OpenMPI implementation of MPI.

 # The shell to be used for job execution
 #$ -S /bin/bash
 # Pass on your environment variables
 #$ -V
 # Set the name of the job
 #$ -N YourJob
 # Set the working directory
 #$ -wd /path/to/some/directory
 # Merge stdout and stderr
 #$ -j y
 # Send email to your CS account
 #$ -M youremail@cs.odu.edu
 # Send email when the job begins and when it has finished
 #$ -m be
 # Set the parallel environment for OpenMPI and the number of slots
 #$ -pe orte 256
 
 /export/software/openmpi/bin/mpirun -np 256 /path/to/your/program

Please see the Job Script Generator page for help creating SGE job scripts.
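To submit a script like the one above and keep track of it afterwards, the standard SGE client commands apply. This is a sketch; the script path and the job number are placeholders:

 qsub /path/to/yourjobscript    # prints the job number on submission
 qstat -u $USER                 # list your queued and running jobs
 qdel 12345                     # cancel a job by its job number (12345 is a placeholder)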
'''Q: Do I need to specify a machine file if I'm submitting the job to SGE?'''
----
A: No. In fact, doing so will cause the job to fail to run properly, as the SGE parallel environment "mpich" generates one for you.

'''Q: How do I view the output of jobs submitted through SGE?'''
----
A: The output written to stdout will be contained in a file such as "jobname.o#", where # is the job number. If you did not specify the option to merge stderr and stdout into one file, any output written to stderr will be in a file such as "jobname.e#".

'''Q: Where can I find the SGE output files?'''
----
A: The default location for the output files is "/home/$username". If your job script specifies a working directory, the output files will be found in that directory.

'''Q: How do I control the number of processes per node when running a job with SGE?'''
----
A: Edit your job script so that the line that calls mpirun or mpiexec looks similar to the following, which runs the job using only 2 slots per host. The total number of slots you request and the number of slots per host you choose together determine how many hosts your job runs on; for example, a 16-slot job limited to 2 slots per host would be spread across 8 hosts.

 /opt/openmpi/bin/mpirun -nperhost 2 /path/to/your/program

'''Q: How do I use $NAME_OF_PROGRAM?'''
----
A: Navigate to the "Installed Software" page of the machine you are using. There should be a tutorial if you click on the program name.


HPC Services
https://systems.cs.odu.edu/HPC_Services
2019-07-13T20:01:48Z, Aaronolah: /* HPC-Phi */
----
[[file:Hpcd_diagram.png | right | 300px | thumb | HPCD ]]

The ODU Computer Science Department provides access to a number of HPC clusters, GPU servers, and high-memory servers for research and other resource-intensive workloads.

If you have any questions, please read the [[ HPC_Services_FAQ | FAQ]].
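The clusters and servers below are reached over SSH to the listed hostname using your CS account. A minimal sketch (yourusername is a placeholder; any hostname listed below works):

 ssh yourusername@hpcd.cs.odu.edu    # yourusername is a placeholder for your CS account name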
__TOC__

= HPC Clusters =

=== HPCR ===

'''Hostname:''' hpcr.cs.odu.edu

'''Operating system:''' CentOS 6.2

{| class="wikitable"
! Node Type !! Model !! Nodes !! CPUs per Node !! Cores per CPU !! Slots per Node !! Processor !! RAM
|-
| Head || Dell PowerEdge R410 || 1 || 2 || 4 || N/A || Intel Xeon E5506 @ 2.13GHz || 16GB
|-
| Compute || Dell PowerEdge R410 || 3 || 2 || 4 || 8 || Intel Xeon E5506 @ 2.13GHz || 16GB
|}

=== HPCD ===

'''Hostname:''' hpcd.cs.odu.edu

'''Operating system:''' CentOS 6.2

{| class="wikitable"
! Node Type !! Model !! Nodes !! CPUs per Node !! Cores per CPU !! Slots per Node !! Processor !! RAM
|-
| Head || Dell PowerEdge R410 || 1 || 2 || 4 || N/A || Intel Xeon E5520 @ 2.26GHz || 8GB
|-
| PVFS2 I/O || Dell PowerEdge R510 || 1 || 2 || 4 || N/A || Intel Xeon E5530 @ 2.4GHz || 16GB
|-
| PVFS2 Metadata || Dell PowerEdge R410 || 1 || 2 || 4 || N/A || Intel Xeon E5520 @ 2.26GHz || 8GB
|-
| Compute || Dell PowerEdge R410 || 32 || 2 || 4 || 8 || Intel Xeon E5504 @ 2.0GHz || 16GB
|}

=== HPCX ===

'''Hostname:''' hpcx.cs.odu.edu

'''Operating system:''' Ubuntu 16.04

{| class="wikitable"
! Node Type !! Model !! Nodes !! CPUs per Node !! Cores per CPU !! Slots per Node !! Processor !! RAM
|-
| Head || Dell PowerEdge R630 || 1 || 2 || 16 || N/A || Intel Xeon E5-2683 @ 2.1GHz || 128GB
|-
| Compute || Dell PowerEdge R630 || 14 || 2 || 16 || 8 || Intel Xeon E5-2683 @ 2.1GHz || 128GB
|}

=== HPC-Phi ===

'''Hostnames:'''
* hpc-phi-0.cs.odu.edu
* hpc-phi-1.cs.odu.edu
* hpc-phi-2.cs.odu.edu
* hpc-phi-3.cs.odu.edu

'''Operating system:''' Ubuntu 16.04

{| class="wikitable"
! Node Type !! Nodes !! CPUs per Node !! Cores per CPU !! Slots per Node !! Processor !! RAM
|-
| Xeon Phi || 1 || 4 || 12 || N/A || Xeon Phi 7210 @ 1.3GHz || 64GB
|}

= GPU Servers =

=== Tesla ===

'''Hostname:''' tesla.cs.odu.edu

'''Operating system:''' Ubuntu 14.04

{| class="wikitable"
! GPUs !! Model !! Memory !! CUDA Cores !! Processor !! RAM
|-
| 2 || Tesla K40 || 12GB GDDR5 || 2880 || Intel Xeon E5-2640 v2 @ 2.0GHz || 64GB
|}

=== Pascal ===

'''Hostname:''' pascal.cs.odu.edu

'''Operating system:''' Ubuntu 16.04

{| class="wikitable"
! GPUs !! Model !! Memory !! CUDA Cores !! Processor !! RAM
|-
| 4 || Tesla P40 || 24GB GDDR5 || 3840 || Intel Xeon E5-2620 @ 2.0GHz || 64GB
|}

=== Aquila ===

'''Hostname:''' aquila.cs.odu.edu

'''Operating system:''' Ubuntu 16.04

{| class="wikitable"
! GPUs !! Model !! Memory !! CUDA Cores !! Processor !! RAM
|-
| 4 || Tesla C2070 || 6GB GDDR5 || 448 || Intel Xeon E5620 @ 2.4GHz || 74GB
|}

= High-Memory Servers =

=== HPC-Highmem ===

'''Hostnames:'''
* hpc-highmem-1.cs.odu.edu
* hpc-highmem-2.cs.odu.edu
* hpc-highmem-3.cs.odu.edu

'''Operating system:''' Ubuntu 14.04

{| class="wikitable"
! Node Type !! Nodes !! Model !! CPUs per Node !! Cores per CPU !! Processor !! RAM
|-
| High-Memory || 3 || Dell PowerEdge R930 || 4 || 12 || Intel Xeon E7-4830 v3 @ 2.10GHz || 320GB
|}


Staff Carson Gagliano
https://systems.cs.odu.edu/Staff_Carson_Gagliano
2019-05-20T16:23:54Z, Carson: /* Contact */
----
== Joined ==

May 1st, 2019

=== Contact ===
;University Email
: cgagl001@odu.edu
;Phone
: +1 (804) 972-8687


VDPortal
https://systems.cs.odu.edu/VDPortal
2019-05-16T15:16:40Z, Tylermarshall: moved page VDPortal to VCPortal (Rename to VCPortal)
----
== ODU Computer Science | Virtual Computer Lab User's Guide ==

The Computer Science Department provides a Virtual Computer Lab ("VCLab") as a remote computing solution. VCLab gives remote ODU faculty and students access to computer lab machine images over the network. Our users can interact with the remote operating system and its applications as if they were running locally.

The Virtual Computer Lab features:
* An up-to-date operating system
* Software development applications, including IDEs and database tools
* A modern look and feel
* An HTML5 web browser as a VDI client

This documentation is divided into the following sections:
* Accessing the Virtual Computer Lab from a web browser
* Accessing the Virtual Computer Lab with a native RDP client

== Accessing VCLab from Web Browser ==

''Before proceeding, ad blockers must be disabled for the portal to work correctly.''

# Navigate to https://vcportal.cs.odu.edu
# At the login screen, provide your CS credentials
#: [[File:01-login.png|400px]]
# Connect to the remote computer lab by clicking '''VCLab'''
#: [[File:02-homepage.png|400px]]
# Your remote desktop connection will start in your web browser
#: [[File:03-webclient.png|400px]]

== Remote Desktop Client ==

# Navigate to https://vcportal.cs.odu.edu
# At the login screen, provide your CS credentials
# Locate the "Settings" icon at the top right-hand corner
#: [[File:04-settings.png|400px]]
# Select the "Settings" icon to open the drop-down window
#: [[File:05-rdpfile.png|400px]]
# Select the "Download the rdp file" radio button
#: [[File:06-rdpfile.png|400px]]
# Download the pre-configured RDP file by ''clicking'' '''VCLab'''
#: [[File:02-homepage.png|400px]]
# You will be prompted to save a pre-configured RDP file
#: [[File:07-save.png|400px]]
# This file can be renamed and saved for future sessions. In the example below, the file is renamed and saved to the Desktop.
#: [[File:08-desktop.png|400px]]
# Double-clicking the icon will launch your RDP client
#: [[File:09-client.png|400px]]
# For Windows users, opening the RDP file will launch the native RDP client
# macOS users will first have to install the free Microsoft Remote Desktop 10 app from the Mac App Store
# Linux users can use Remmina as their RDP client (see the command-line sketch after this list)
# When prompted for your CS credentials, you will need to prepend your username with 'CS'
#: [[File:10-login.png|400px]]
#: [[File:11-domain.png|400px]]
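If you prefer a command-line RDP client on Linux instead of Remmina's GUI, FreeRDP can open the downloaded file as well. A minimal sketch, assuming the file was saved as VCLab.rdp (a hypothetical name):

 xfreerdp VCLab.rdp /u:'CS\yourusername'    # VCLab.rdp and yourusername are placeholders; prompts for your CS password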
Staff Manuel Resurreccion
https://systems.cs.odu.edu/Staff_Manuel_Resurreccion
2019-04-04T14:40:39Z, Manuelres: Starting on my page
----
== Joined ==

October 2018


Previous Staff Josh Hohensee
https://systems.cs.odu.edu/Previous_Staff_Josh_Hohensee
2019-02-28T19:45:31Z, Wikiuser
----
DNS F

DHCP F

WDS F

"I passed the retest" - Josh Hohensee