Below is an overview of the process that a new TeraGrid user will go
through, along with common operations.

0) Obtain TeraGrid allocation:
Development allocation (DAC) - Up to 30,000 SUs,
                               up to 1 TB disk
Medium allocation (MRAC)     - 30,001 to 500,000 SUs,
                               greater than 1 TB and up to 20 TB disk
Large allocation (LRAC)      - greater than 500,000 SUs,
                               greater than 20 TB disk

1) Login to tg-login.frost.ncar.teragrid.org
     If you are an existing NCAR user with a UCAS login and cryptocard,
     use those credentials to access the Frost TeraGrid resource.

     If you are new to both the TeraGrid and the Frost system, you will
     need to log in to another resource, set up your TeraGrid Single
     Sign-On credentials, and then use the TG SSO to log into Frost.
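
     For example, an existing NCAR user could connect directly with ssh
     (the username is a placeholder, as elsewhere in this document):

     $ ssh [username]@tg-login.frost.ncar.teragrid.org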

2) 'softenv' environment:

Verify that you do not have a '.nosoft' file in your home directory,
which disables softenv. Query some of the common softenv packages.
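
A minimal check might look like the following; the '+globus' key is just
an example, and actual key names vary by resource:

$ test -f ~/.nosoft && echo ".nosoft present -- softenv is disabled"
$ softenv | less     # list the softenv keys available on this resource
$ soft add +globus   # add a key to the current shell session
$ resoft             # re-read ~/.soft after editing it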

3) Set up the TeraGrid Single Sign-On (SSO) between RPs (resource
providers) using a TeraGrid certificate:

  a) Log in to one of the TG resources and set up the TG SSO using the
     automatically created NCSA certificate and your TeraGrid portal
     login and password:

     $ myproxy-logon -l [username]
     Enter MyProxy pass phrase:
     A credential has been received for user [username]
     in /tmp/x509up_uNNNN.

  b) Verify that the Globus Certificate Proxy has been
     successfully created:

     $ grid-proxy-info

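     For a quick scriptable check, grid-proxy-info can also confirm that
     a proxy exists with a minimum lifetime remaining (the 12-hour
     threshold below is arbitrary):

     $ grid-proxy-info -exists -valid 12:00 && echo "proxy OK"
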
  c) Use the SSO to log into the other resources:

$ gsissh [TG resource login node]

Here is a list of the current login nodes:
tg-login.frost.ncar.teragrid.org
login.bigred.iu.teragrid.org
tg-login64.purdue.teragrid.org
tg-login.lonestar.tacc.teragrid.org
tg-viz-login.tacc.teragrid.org
tg-login.uc.teragrid.org
tg-viz-login.uc.teragrid.org
tg-login.ornl.teragrid.org
login-w.ncsa.teragrid.org
login-cu.ncsa.teragrid.org
login-co.ncsa.teragrid.org
login-hg.ncsa.teragrid.org
tg-login.bigben.psc.teragrid.org
tg-login.rachel.psc.teragrid.org
tg-login.sdsc.teragrid.org
bglogin.sdsc.edu
dslogin.sdsc.edu
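
For example, to reach the SDSC login node from Frost, and to copy a file
there over the same GSI credentials (gsiscp ships with GSI-OpenSSH on
most TG resources; the file name is a placeholder):

$ gsissh tg-login.sdsc.teragrid.org
$ gsiscp file.10MB tg-login.sdsc.teragrid.org: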

4) Check the resource query commands and your allocation:

$ tgusage            # show usage against your allocation(s)
$ tgwhatami          # report what kind of TG resource this is
$ tgwhereami         # report which TG site/resource you are logged into
$ tg-policy -data    # show the site's data/storage policy
$ tg-policy -sched   # show the site's scheduling policy
$ tg-policy -fs      # show the site's filesystem policy

5) Copy some files around (target hostnames from
http://teragrid.org/userinfo/data/transfer_location.php)

$ tgcp -v file:///home/oberg/file.10MB \
gsiftp://gridftp-hg.ncsa.teragrid.org/~/file.10MB.incoming

$ tgcp -big -v file:///home/oberg/file.100MB \
gsiftp://gridftp-hg.ncsa.teragrid.org/~/file.100MB.incoming

$ globus-url-copy -v -vb -tcp-bs 33554432 -stripe -len 4194304000 -p 24 file:///home/oberg/file.10MB \
gsiftp://gridftp-hg.ncsa.teragrid.org/~/file.10MB.incoming
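
Roughly, the globus-url-copy options used above mean the following (a
summary, not a tuning recommendation):

#  -vb                display bytes transferred and the transfer rate
#  -tcp-bs 33554432   TCP buffer size in bytes (32 MB here)
#  -p 24              use 24 parallel data streams
#  -stripe            use striped transfers if the servers support it
#  -len 4194304000    transfer at most this many bytes (useful with
#                     endless sources such as /dev/zero)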

11:18:42 (oberg@fr0103en)~(0)$ uberftp
uberftp> parallel 4
uberftp> tcpbuf 201326592
TCP buffer set to 201326592 bytes
uberftp> open gridftp-hg.ncsa.teragrid.org
220 tg-s038.ncsa.teragrid.org GridFTP Server 2.1 (gcc64dbg,
1122653280-63) ready.
230 User oberg logged in.
uberftp> get file.100MB.incoming
Transfer of 104857600 bytes completed in 2.52 seconds. 41566.18 KB/sec


If you want to test the best-case performance between any two sites:
11:44:21 (oberg@fr0103en)~(0)$ globus-url-copy -vb -tcp-bs 33554432 -stripe -len 4194304000 -p 24 gsiftp://tg-gridftp.sdsc.teragrid.org/dev/zero gsiftp://gridftp.frost.ncar.teragrid.org/dev/null
Source: gsiftp://tg-gridftp.sdsc.teragrid.org/dev/
Dest:   gsiftp://gridftp.frost.ncar.teragrid.org/dev/
  zero  ->  null
   2216689664 bytes       422.80 MB/sec avg       428.68 MB/sec inst


6) Submit to the TeraGrid queue, explicitly setting your project ID:

$ cqsub -q teragrid -p ######## ...
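
A fuller Cobalt submission might look like the sketch below; the node
count, walltime, and executable are placeholders, and ######## stands in
for your own project ID as above:

$ cqsub -q teragrid -p ######## -n 32 -t 60 ./my_executable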

7) Use GRAM to submit a job:

$ globus-job-run -verify \
gatekeeper.frost.ncar.teragrid.org/jobmanager-cobalt [executable]
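
As a quick sanity check you could run a trivial command through GRAM.
The example below assumes the gatekeeper also offers a fork jobmanager
for simple, non-queued test jobs:

$ globus-job-run gatekeeper.frost.ncar.teragrid.org/jobmanager-fork /bin/hostname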