System installation notes on the first Hall D PowerEdge 710 Dell server
General Notes
- We ordered this unit to learn more about Dell servers and their performance: how to install them in the rack room and how to configure the software on them. It cost us about $4.5K.
- The rails that came with it only worked with the big square or big round holes in the vertical rails of the racks. Since our racks have small circular threaded holes, we had to order adapters (~$45) to be able to install the server in the racks. Randy gave us a set of rails for circular holes, but they did not match the holes on the sides of the Dell server. Randy told us, however, that we should be able to order rails made for threaded holes.
- I connected the PowerEdge to the network using its Ethernet interface #1 and requested an IP address from the computer center based on the MAC address of Ethernet port #1. But whenever I tried to connect to the network, the light on the switch would stop blinking, indicating that the switch port had been disabled. After checking with Brent Morris and Paul Letta, we suspected that the IPMI services were interfering with that Ethernet port. So I disabled IPMI by rebooting and pressing <CTRL>R just as the IPMI screen appeared. After disabling it, the port on the switch no longer got blocked. Note that this server does not have a dedicated IPMI interface. We will need to figure out how to enable the IPMI services so that Vardan's scheme will work with this server.
- I had to install 64-bit RHEL 6 from JLab on it, since 32-bit RHEL 6.2 would only show 16 GB of memory instead of the 32 GB installed in the system. Besides, everything at the lab was moving to 64-bit. We will need to compile the 32-bit codes for the controllers either on the controllers themselves, or on 64-bit machines using 32-bit compat libraries.
- Eventually it turned out that the best way to deal with the requirement of having both 32-bit and 64-bit software for Hall D online computing is to install the 64-bit version of Linux, but always also install the 32-bit versions of the RPMs that get installed (note that the yum commands below do not always show that the 32-bit version was installed as well). That way we will not get compilation and runtime errors due to missing libraries when building 32-bit codes (like EtherIP EPICS IOCs). A quick way to install and check both architectures is sketched at the end of these notes.
- During installation I had an issue with the partitioning program on the JLab installation disk. The program was probably based on the fdisk utility, which can only create an MBR partition table, limiting the maximum partition size to 2 TB. I had to partition the RAID with the gparted utility from my Ubuntu 10.04 LTS flash drive. After partitioning, I ran the JLab installation disk with the "no-default" partitioning scheme and installed 64-bit RHEL 6.2 on this server using the pre-existing partition.
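- As a quick way to handle the 32-bit/64-bit point above (a sketch only; readline is used purely as an example): yum can install both architectures in one command, and rpm can confirm which architectures are present.
sudo yum install readline-devel.x86_64 readline-devel.i686   # example package; repeat for any -devel package needed by 32-bit builds
rpm -q --qf '%{NAME}-%{VERSION}.%{ARCH}\n' readline-devel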
Extra Linux Packages Installed
- In the /etc/yum.conf file add "proxy=http://jprox:8082" as the last line.
- Install yum-priorities
sudo yum install yum-priorities
- Download EPEL RPM and install
sudo rpm -Uvh epel-release-6-5.noarch.rpm
- Change the EPEL repositories in epel.repo and epel-testing.repo by disabling the mirrors and changing the baseurl to a value that the computer center proxy allows us to access (a sketch of the resulting [epel] stanza follows this list):
baseurl=http://download-i2.fedoraproject.org/pub/epel/....
- We have a suspicion that the repository files are being overwritten by the computer center.
- It turns out that system-config-netboot has been deprecated in RHEL 6. We will probably have to configure PXE booting manually.
-
sudo yum install tftp-server xinetd syslinux dracut-network dhcp
- Install readline packages needed by EPICS:
sudo yum install readline-devel.x86_64 compat-readline5.x86_64 readline-static.x86_64 compat-readline5.i686 compat-readline5-devel.x86_64 compat-readline5-devel.i686 compat-readline5-static.x86_64 readline-devel.i686
- Install glibc for EPICS:
sudo yum install glibc-devel glibc-devel.i686
- Install ncurses for EPICS:
sudo yum install ncurses-devel ncurses-devel.i686
- Install libstdc++ headers:
sudo yum install libstdc++-devel libstdc++-devel.i686
- Install openmotif
sudo yum install openmotif openmotif-devel openmotif.i686 openmotif-devel.i686
-
sudo yum install libXmu.i686 libXmu-devel.i686
-
sudo yum install libXpm libXpm-devel
-
sudo yum install finger.x86_64 re2c
- Install Eclipse
sudo yum install eclipse-rcp.x86_64 eclipse-cdt.x86_64 eclipse-callgraph.x86_64 eclipse-eclox.noarch eclipse-cdt-sdk.x86_64
- Install the kickstart configuration tool per computer center recommendation:
sudo yum install system-config-kickstart
- Install beesu package for sudo-ing:
sudo yum install beesu
- Install net-snmp related packages for slow controls:
sudo yum install net-snmp net-snmp-devel net-snmp-libs net-snmp-utils
- Install procServ for slow controls:
sudo yum install procServ
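- For reference, a sketch of what the edited [epel] stanza in /etc/yum.repos.d/epel.repo might look like after the change described above (illustration only; the exact baseurl path and GPG key name depend on the RHEL release and architecture, so complete them per the computer center's instructions):
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
# baseurl path below is illustrative -- adjust to the path allowed through the proxy
baseurl=http://download-i2.fedoraproject.org/pub/epel/6/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6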
Installing backintime
- There was no RPM available for RHEL 6, so I had to compile backintime from the tarball downloaded from http://backintime.le-web.org/download_page/.
tar -xzf backintime-1.0.8_src.tar.gz
cd backintime-1.0.8
cd common/
./configure
make
sudo make install
cd ../gnome/
make
sudo make install
- I configured it to back up important directories to the RAID at /local/backups/. From there we can back up data to the tapes every few months or so.
Setting up TFTP server
- Edit /etc/xinetd.d/tftp and change "disable = yes" to "disable = no"
- Restart TFTP server
sudo service xinetd restart
- Copy pxelinux.0 to the TFTP root directory:
sudo cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot
- Create a pxelinux.cfg directory inside the tftp root directory:
sudo mkdir -p /var/lib/tftpboot/pxelinux.cfg/
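- A default PXE configuration file will eventually have to be placed in that directory. Below is a minimal sketch of /var/lib/tftpboot/pxelinux.cfg/default for an NFS-rooted diskless client; the kernel and initramfs names and the NFS root path are assumptions, not files that exist yet.
# kernel/initrd names and the NFS root below are placeholders (assumption)
default rhel6-diskless
label rhel6-diskless
  kernel vmlinuz-rhel6
  append initrd=initramfs-rhel6.img root=nfs:gluon01:/exported/diskless/root rw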
Setting up NFS exporting
- Create a directory for NFS exporting and PXE booting:
sudo mkdir /exported/diskless/
- Create a directory for NFS exporting for Online software purposes:
sudo mkdir /exported/online
- Edit /etc/exports and add the following lines to the file:
/exported/online 129.57.36.0/255.255.252.0(rw,sync,no_root_squash)
/exported/diskless/ 129.57.36.0/255.255.252.0(rw,sync,no_root_squash)
/home 129.57.36.0/255.255.252.0(rw,sync,no_root_squash)
- Reload NFS:
service nfs restart
- Make sure NFS starts on booting
/sbin/chkconfig --level 345 nfs on
- On the NFS client side:
- Add the following line to the /etc/auto.master file:
/gluon /etc/auto.gluon --timeout 60
- Create /etc/auto.gluon with the following content:
home -rw,bg gluon01:/home
online -rw,bg gluon01:/exported/online
diskless -rw,bg gluon01:/exported/diskless
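- To verify the export and automount setup from a client (a sketch, assuming autofs is installed and running on the client and gluon01 is the NFS server set up above):
showmount -e gluon01
sudo service autofs reload   # assumes the autofs service is used on the client
ls /gluon/online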
EPICS Installation notes on the first Hall D PowerEdge 710 Dell server
EPICS Base
- These comments were made during installation of the 64-bit version of EPICS on 64-bit Linux. It is possible, and I in fact checked this, to build and install the 32-bit version of EPICS and its applications on a 64-bit machine by simply changing the EPICS_HOST_ARCH shell variable to linux-x86 and recompiling the libraries and executables; the products get installed in the linux-x86 subdirectories. I also checked with some XPS motor applications that this works properly. It still needs to be checked whether 32-bit EPICS applications running on 64-bit machines work properly with the EtherIP EPICS support, which did not seem to work on 64-bit machines with 64-bit code.
- Create the top directory for EPICS R3-14-12-2:
sudo mkdir /exported/online/controls
mkdir -p /exported/online/controls/epics/R3-14-12-2
cd /exported/online/controls/epics/R3-14-12-2
- Build EPICS base 3-14-12-2 after downloading and unzipping it:
- Create a soft link /usr/local/epics/base pointing to the directory with the newly installed version of the EPICS distribution.
- Edit the base/startup/Site.cshrc file and comment out the line with echo $ADTHOME. Also, it is better to change the definition of the EPICS_BASE variable to an environment variable: setenv EPICS_BASE /usr/local/epics/base (a sketch of the resulting environment appears at the end of this section).
- In order to be able to work with the NCSL SNMP EPICS driver, I had to modify base/src/dbStatic/dbStaticLib.c to increase the length of the links from 80 to 256 characters. This was done following the recommendation of John Priller, who provided me with the driver from NCSL.
-
source startup/Site.cshrc
- Compile EPICS base by typing:
make
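- For reference, a sketch (in tcsh syntax) of the environment these steps imply; the paths are the ones used on this page, and the PATH addition is an assumption about how the command-line tools will be used, not a copy of the actual Site.cshrc:
setenv EPICS /gluon/online/controls/epics/R3-14-12-2
setenv EPICS_BASE /usr/local/epics/base
setenv EPICS_HOST_ARCH linux-x86_64   # linux-x86 for a 32-bit build, as noted above
set path = ( $path $EPICS_BASE/bin/$EPICS_HOST_ARCH )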
EPICS Extensions
- Got extensionsTop_20070703.tar.gz for the extensions top directory and unzipped it in the $EPICS directory.
- Get the following extensions and unzip them in the $EPICS_EXTENSIONS/src directory:
alh1_2_26.tar.gz medm3_1_5.tar.gz msi1-5.tar.gz StripTool2_5_13_0.tar.gz
- Create soft links to them:
ln -s alh1_2_26 alh
ln -s medm3_1_5 medm
ln -s msi1-5 msi
ln -s StripTool2_5_13_0 StripTool
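- Each extension is then built in its own source directory (a sketch, assuming the extensions top configure/RELEASE already points at this EPICS base; build details may differ per extension):
cd $EPICS/extensions/src/alh && make
cd $EPICS/extensions/src/medm && make
cd $EPICS/extensions/src/msi && make
cd $EPICS/extensions/src/StripTool && make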
EPICS Support
- Get the following packages
asyn4-18.tar.gz busy_R1-4.tar.gz ether_ip-2.23.tar.gz ipac-2.11.tar.gz motorR6-7-1.tar.gz seq-2.1.4.tar.gz
- Build the packages in the following order, modifying the RELEASE file of each package as described below.
IPAC
-
tar -xzf ipac-2.11.tar.gz
- In the ipac-2.11/configure/RELEASE file, keep only the following lines:
EPICS_BASE=$(EPICS)/base
EPICS_EXTENSIONS=$(EPICS)/extensions
EPICS_SUPPORT=$(EPICS)/support
- Type make
SEQ
-
tar -xzf seq-2.1.4.tar.gz
- In the seq-2.1.4/configure/RELEASE file, keep only the following lines:
EPICS_BASE=$(EPICS)/base
EPICS_EXTENSIONS=$(EPICS)/extensions
EPICS_SUPPORT=$(EPICS)/support
- Type make
EtherIP
-
tar -xzf ether_ip-2.23.tar.gz
- In the ether_ip-2.23/configure/RELEASE file, keep only the following lines:
EPICS_BASE=$(EPICS)/base
EPICS_EXTENSIONS=$(EPICS)/extensions
EPICS_SUPPORT=$(EPICS)/support
ETHER_IP=$(EPICS_SUPPORT)/ether_ip-2.23
- Type make
Asyn
-
tar -xzf asyn4-18.tar.gz
- In the asyn4-18/configure/RELEASE file, keep only the following lines:
LINUX_GPIB=NO
EPICS_BASE=$(EPICS)/base
EPICS_EXTENSIONS=$(EPICS)/extensions
EPICS_SUPPORT=$(EPICS)/support
IPAC=$(EPICS_SUPPORT)/ipac-2.11
SNCSEQ=$(EPICS_SUPPORT)/seq-2.1.4
- Type make
Busy
-
tar -xzf busy_R1-4.tar.gz
- In the busy-1-4/configure/RELEASE file, keep only the following lines:
TEMPLATE_TOP=$(EPICS_BASE)/templates/makeBaseApp/top
EPICS_BASE=$(EPICS)/base
EPICS_EXTENSIONS=$(EPICS)/extensions
EPICS_SUPPORT=$(EPICS)/support
ASYN=$(EPICS_SUPPORT)/asyn4-18
- Type make
Motor
-
tar -xzf motorR6-7-1.tar.gz
- In the motorR6-7-1/configure/RELEASE file, keep only the following lines:
TEMPLATE_TOP=$(EPICS_BASE)/templates/makeBaseApp/top
EPICS_BASE=$(EPICS)/base
EPICS_EXTENSIONS=$(EPICS)/extensions
EPICS_SUPPORT=$(EPICS)/support
ASYN=$(EPICS_SUPPORT)/asyn4-18
SNCSEQ=$(EPICS_SUPPORT)/seq-2.1.4
BUSY=$(EPICS_SUPPORT)/busy-1-4
IPAC=$(EPICS_SUPPORT)/ipac-2.11
- Type make
SNS CSS
- Create directory /gluon/online/controls/css/linux-x86_64
- Create a configuration file /gluon/online/controls/css/.cssrc; this file should be sourced from the .cshrc script (a sketch of possible contents is given below).
- Download zip-file from http://ics-web.sns.ornl.gov/css/products.html into that directory and unzip it there.
- Similarly, I installed the 32-bit RHEL version of the SNS CSS, since the disk is going to be shared.
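- A sketch of what the .cssrc file might contain (tcsh syntax; the location of the css executable inside the unzipped product is an assumption and may differ between CSS releases):
setenv CSS /gluon/online/controls/css
set path = ( $path $CSS/linux-x86_64 )   # assumes the css executable unzips into this directory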
Hall D EPICS tree
- Check out the Hall D SVN tree for EPICS applications in the $EPICS directory:
svn co https://halldsvn.jlab.org/repos/trunk/controls/epics/app
This should match your $APP environment variable.
- In the controls/epics directory create a .epicsrc file and edit it to set up the proper EPICS location:
setenv EPICS /gluon/online/controls/epics/R${EPICS_VERSION}
- Source the $EPICS/.epicsrc file to set up the EPICS environment.
- Go to $APP and type make
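- A sketch of the lines a user's ~/.cshrc might contain to pick this up (the EPICS_VERSION value here is an assumption based on the version installed above; adjust the path if .epicsrc lives elsewhere):
setenv EPICS_VERSION 3-14-12-2   # assumed value; set to the installed EPICS release
source /gluon/online/controls/epics/.epicsrc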
Hall D CSS OPIs
- Go to the /gluon/online/controls/css directory and check out the Hall D CSS workspace:
svn co https://halldsvn.jlab.org/repos/trunk/controls/css/Workspaces
- Running
css -share_link $CSS/Workspaces/
will start CSS and create a shared link, with the Hall D screens accessible under CSS->Share->Default. After closing and reopening CSS the link will still be there. This is an alternative to manually linking a project directory from within CSS.