I'm Natalia Erazo, currently working on a project aimed at examining biogeochemical processes in mangrove forests. In this tutorial, we'll learn the basics of QGIS (which is free), how to import vector data, and how to make a map using data from our recent field trip to the Cayapas Mataje Ecological Reserve in Ecuador! We'll also cover standard map elements and use the QGIS Print Composer to generate a finished map.
Objectives:
I. Install QGIS.
II. Learn how to load raster base maps using the OpenLayers and QuickMapServices plugins.
III. Learn how to import vector data: latitude and longitude points plus additional variables.
IV. Learn how to select attributes from the data (e.g., salinity values) and plot them.
V. Make a map using Print Composer in QGIS.

I. QGIS Installation
QGIS is a powerful, user-friendly, open-source geographic information system that runs on Linux, Unix, Mac, and Windows. QGIS can be downloaded from the project's website.
You should follow the installer's instructions and install GDAL Complete.pkg, NumPy.pkg, matplotlib.pkg, and qgis.pkg.

II. Install a QGIS Plugin and Add a Base Map
To install a QGIS plugin, go to Plugins and select Manage and Install Plugins. This opens the plugins dialog; type OpenLayers Plugin in the search box and click Install plugin.
This plugin gives you access to Google Maps, OpenStreetMap layers, and others, and it is very useful for making quick maps from Google satellite, physical, and street layers. However, the OpenLayers plugin can generate zoom errors in your maps. There is another plugin, QuickMapServices, which uses tile servers rather than the direct API for fetching Google layers and others. It is a very useful plugin that offers more base-map options and fewer zoom errors.
To install it, follow the same steps as for the OpenLayers plugin, except this time type QuickMapServices and install the plugin. If you want to experiment further with QuickMapServices, you can expand the plugin: go to Web ‣ QuickMapServices ‣ Settings ‣ More services and click Get contributed pack.
This generates more options for mapping. Add the base layer: I recommend playing with the various options, either the OpenLayers Google satellite, physical, and other map layers, or QuickMapServices. For this map, we will use the ESRI library from QuickMapServices. Go to Web ‣ QuickMapServices ‣ ESRI ‣ ESRI Satellite, and you should see your satellite map. You can click the zoom-in icon to adjust the zoom, as shown in the map below where I zoomed in on the Galapagos Islands. You'll also notice the Layers Panel on the left side; this panel shows all the layers you add to your map. Layers can be raster or vector data; in this case we see one layer, ESRI Satellite.
At the far left you'll see a list of icons used to import layers. It is important to know what kind of data you are importing into QGIS so you can use the correct function. Adding our vector data: we will now add our data file, which contains the latitude and longitude of all the sites where we collected samples, along with values for salinity, temperature, and turbidity. You can do this with your own data by creating a spreadsheet in Excel with columns for longitude and latitude plus columns for your other variables, and saving it as a CSV file.
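If you want to try the steps without your own measurements, here is a minimal sketch of what such a CSV file might look like; the column names and values below are invented for illustration, so match them to your own spreadsheet's headers:

```shell
# Create a small sample CSV with coordinates and environmental variables.
# The column names and values here are hypothetical -- use your own field data.
cat > sampling_sites.csv <<'EOF'
site,longitude,latitude,salinity_psu,temperature_c,turbidity_ntu
1,-78.93,1.25,28.4,26.1,4.2
2,-78.90,1.22,30.1,26.3,3.8
3,-78.88,1.20,33.5,26.0,2.9
EOF

# Quick sanity check: show the header row and count the lines.
head -n 1 sampling_sites.csv
wc -l < sampling_sites.csv
```

A file in this shape (one row per site, longitude and latitude in their own columns) is exactly what the Add Delimited Text Layer dialog expects.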
To input the data, go to the icons on the far left and click "Add Delimited Text Layer", or click Layer ‣ Add Layer ‣ Add Delimited Text Layer. Browse to the file with your data and make sure CSV is selected as the file format.
Additionally, make sure that the X field is the column with your longitude values and the Y field is latitude. QGIS is usually smart enough to recognize longitude and latitude columns, but double-check! You can also see an overview of the data, with columns for latitude, longitude, barometric pressure (mmHg), conductivity, salinity (psu), and other variables. You can leave everything else at its default and click OK. You'll then be prompted with the coordinate reference system selector. This step is very important: if you do not select the right system, your points will land in the wrong location. For GPS coordinates, such as the data we are using here, select WGS 84 (EPSG:4326). Now we can see all the points where we collected data!
As we saw earlier, the data contains environmental measurements such as salinity, turbidity, and temperature. We can style the layer of sampling points based on any variable in the data. In this example we will create a layer representing salinity values.
Right-click on the layer with our data in the Layers Panel (in this case, our layer 2017ecuadorysidat) and select Properties.
There are many styles you can choose for the layer; the styling options are located in the Style tab of the Properties dialog. Clicking the drop-down button in the Style dialog, you'll see five options: Single Symbol, Categorized, Graduated, Rule-based, and Point displacement. We'll use Graduated, which lets you break the data into unique classes. Here we will take the salinity values and classify them into three classes: low, medium, and high salinity. There are five modes available in the Graduated style for doing this: Equal Interval, Quantile, Natural Breaks, Standard Deviation, and Pretty Breaks. You can read more about these options in the QGIS documentation. In this tutorial, for simplicity, we'll use the Quantile option.
This method chooses the classes so that the number of values in each class is the same; for example, if there are 100 values and we want 4 classes, the Quantile method picks the class boundaries so that each class holds 25 values. In the Style section, select Graduated; for Column, choose salinity psu; and for the color ramp, we'll use colors ranging from yellow to red. In the Classes box enter 3 and select the Quantile mode.
Click Classify, and QGIS will divide your values into the different ranges. Now all the data points are colored by the three ranges: low, medium, and high salinity.
However, we have a lot of points, and it is hard to distinguish them. We can edit the points by right-clicking on the marker symbol and selecting Edit symbol. I am going to remove the black outline to make the points easier to see: select the point by clicking Simple Marker, and under Outline style choose No Pen. Do the same for the remaining two symbols. Nice, now we can better see the variation in our points based on salinity!

Print Composer: Making a Final Map
We can now start to assemble the final version of our map.
QGIS provides the Print Composer, where you can lay out and edit your map. Go to Project ‣ New Print Composer. You will be prompted to enter a title for the composer; enter a name and hit OK, and you will be taken to the Composer window. In the Print Composer window, we want to bring in the map view that we see in the main QGIS canvas.
Go to Layout ‣ Add Map. Once the Add Map button is active, hold the left mouse button and drag a rectangle where you want to insert the map. The rectangle will be rendered with the map from the main QGIS canvas. At the far right you can see the Items box, which lists the map you just added. If you want to make changes, select the map and edit it under Item Properties. It is often useful to tweak the scale until you are happy with the result. We can also add a second map showing the location of Cayapas Mataje within South America as a geographic reference.
Go to the main QGIS canvas and zoom out until you can see where in South America the reserve is located, then go back to Print Composer to add the map of the wider region.
You'll do the same as for the first map: go to Layout ‣ Add Map and drag a rectangle where you want to insert it; the rectangle will be rendered with the map from the main QGIS canvas. In the Items box you now have Map 0 and Map 1. Select Map 1 and add a frame under Item Properties: click Frame to activate it and set the thickness to 0.40 mm.
We can add a north arrow to the map. The Print Composer comes with a collection of map-related images, including many north arrows. Click Layout ‣ Add Image, hold the left mouse button, and draw a rectangle in the top-right corner of the map canvas. In the right-hand panel, click the Item Properties tab, expand Search directories, and select the north arrow image you like best.
Once you've selected your image, you can still adjust the arrow under SVG Parameters. Now we'll add a scale bar: click Layout ‣ Add Scalebar, then click on the layout where you want the scale bar to appear, and choose the style and units that fit your needs.
In the Segments panel, you can adjust the number of segments and their size. Make sure Map 0 is selected under Main Properties. Next we'll add a legend: go to Layout ‣ Add Legend, hold the left mouse button, and draw a rectangle over the area where you want the legend to appear.
You can make changes in Item Properties, such as adding a title, changing fonts, and renaming your legend entries by clicking on them and typing the text you want. It's time to label our map: click Layout ‣ Add Label.
Click on the map and draw a box where the label should go. In the Item Properties tab, expand the Label section and enter your text. You can also adjust the font and size under Appearance. Once you have your final version, you can export it as an image, PDF, or SVG. For this tutorial, let's export it as an image.
Click Composer ‣ Export as Image. Here is our final map! Now you can try the tutorial with your own data.
Making maps is always a bit challenging, but put your imagination to work! Here is a list of resources that can help with QGIS:
- The QGIS blog, with various tutorials and news about new functions.
- The QGIS documentation, for more information on how QGIS handles symbol and vector data styling.
- If you need data, a good place to start is a source of free vector and raster basemap data usable for almost any cartographic endeavor.
If you have specific questions, please don't hesitate to ask.
I'm a fan of an ebook by Joe Kissell. It's a $10 ebook that is specifically designed for beginning command-line users on Mac OS X. I think it's exactly what you asked for. I find many O'Reilly UNIX books to be frustratingly opaque for the beginner.
Book blurb from the above link: If you've ever thought you should learn how to use the Unix command line that underlies Mac OS X, or felt at sea when typing commands into Terminal, Joe Kissell is here to help! This 111-page ebook will help you become comfortable working on the Mac's command line, starting with the fundamentals and walking you through more advanced topics as your knowledge increases. And if you're uncertain how to put your new-found skills to use, Joe includes numerous real-life 'recipes' for tasks that are best done from the command line.
Disclaimer: though I wrote about this topic seven years ago, I have no financial interest in Joe's book.
Command Line Primer

Historically, the command line interface provided a way to manipulate a computer over simple, text-based connections. In the modern era, in spite of the ability to transmit graphical user interfaces over the Internet, the command line remains a powerful tool for performing certain types of tasks. As described previously, most users interact with a command-line environment using the Terminal application, though you may also use a remote connection method such as secure shell (SSH). Each Terminal window or SSH connection provides access to the input and output of a shell process. A shell is a special command-line tool that is designed specifically to provide text-based interactive control over other command-line tools.
In addition to running individual tools, most shells provide some means of combining multiple tools into structured programs, called shell scripts (the subject of this book). Different shells feature slightly different capabilities and scripting syntax.
Although you can use any shell of your choice, the examples in this book assume that you are using the standard OS X shell. The standard shell is bash if you are running OS X v10.3 or later and tcsh if you are running an earlier version of the operating system. The following sections provide some basic information and tips about using the command-line interface more effectively; they are not intended as an exhaustive reference for using the shell environments. Note: this appendix was originally part of another document.

Basic Shell Concepts

Before you start working in any shell environment, there are some basic features of shell scripting that you should understand. Some of these features are specific to OS X, but most are common to all platforms that support shell scripting.

Running Your First Command-Line Tool

In general, you run command-line tools that OS X provides by typing the name of the tool.
(The syntax for running tools that you've added is described later in this appendix.) For example, if you run the ls command, by default it lists the files in your home directory. To run this command, type ls and press Return. Most tools can also take a number of flags (sometimes called switches).
For example, you can get a “long” file listing (with additional information about every file) by typing ls -l and pressing Return. The -l flag tells the ls command to change its default behavior. Similarly, most tools take arguments. For example, to show a long listing of the files on your OS X desktop, type ls -l Desktop and press Return.
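A quick sketch of the difference a flag and an argument make, using a scratch directory so the listing is predictable:

```shell
# Make a scratch directory with one file so the output is predictable.
mkdir -p demo_dir
echo "hello" > demo_dir/example.txt

# Plain listing: just the file names.
ls demo_dir

# Long listing: permissions, owner, size, and modification date as well.
# Here -l is a flag and demo_dir is an argument.
ls -l demo_dir
```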
In that command, the word Desktop is an argument that is the name of the folder that contains the contents of your OS X desktop. In addition, some tools have flags that take flag-specific arguments in addition to the main arguments to the tool as a whole.

Specifying Files and Directories

Most commands in the shell operate on files and directories, the locations of which are identified by paths. The directory names that make up a path are separated by forward-slash characters. For example, the Terminal program is in the Utilities folder within the Applications folder at the top level of your hard drive. Its path is /Applications/Utilities/Terminal.app. The shell (along with, for that matter, all other UNIX applications and tools) also has a notion of a current working directory.
When you specify a filename or path that does not start with a slash, that path is assumed to be relative to this directory. For example, if you type cat foo, the cat command prints the contents of the file foo in the current directory. You can change the current directory using the cd command. Finally, the shell supports a number of directory names that have a special meaning. Table A-1 lists some of the standard shortcuts used to represent specific directories in the system. Because they are based on context, these shortcuts eliminate the need to type full paths in many situations.

Table A-1 Special path characters and their meaning

. (single period): A special directory that, when accessed, points to the current working directory. This value is often used as a shortcut to eliminate the need to type a full path when running a command. For example, if you type ./mytool and press Return, you are running the mytool command in the current directory (if such a tool exists).

.. (two periods): A special directory that, when accessed, points to the directory that contains the current directory (called its parent directory). This directory is used for navigating up one level toward the top of the directory hierarchy. For example, the path ../Test is a file or directory (named Test) that is a sibling of the current directory. Note: depending on the shell, if you follow a symbolic link into a subdirectory, typing cd .. will either take you back to the directory you came from or take you to the parent of the current directory.

~ or $HOME: At the beginning of a path, the tilde character represents the home directory of the specified user, or of the currently logged-in user if no user is specified. (Like . and .., this is not an actual directory, but a substitution performed by the shell.) For example, you can refer to the current user's Documents folder as ~/Documents. Similarly, if you have another user whose short name is frankiej, you could access that user's Documents folder as ~frankiej/Documents (if that user has set permissions on his or her Documents directory to allow you to see its contents). The $HOME environment variable can also be used to represent the current user's home directory.
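These shortcuts can be seen in action from the shell; here is a small sketch using a throwaway directory under /tmp:

```shell
# Demonstrate the special names from Table A-1 from a known starting point.
mkdir -p /tmp/demo/parent/child
cd /tmp/demo/parent/child
pwd            # the path ends in demo/parent/child

cd ..          # .. moves up to the parent directory
pwd            # now ends in demo/parent

cd .           # . points at the current directory, so this is a no-op
pwd            # still ends in demo/parent

echo "$HOME"   # the current user's home directory, which ~ expands to
```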
In OS X, the user's home directory usually resides in the /Users directory or on a network server. File and directory names traditionally include only letters, numbers, hyphens, the underscore character (_), and often a period (.) followed by a file extension that indicates the type of file (.txt, for example). Most other characters, including space characters, should be avoided because they have special meaning to the shell.
Although some OS X file systems permit the use of these other characters, including spaces, you must do one of the following:

- "Escape" the character: put a backslash character (\) immediately before the character in the path.
- Add single or double quotation marks around the path or the portion that contains the offending characters.

For example, the path name My Disk can be written as My\ Disk, "My Disk", or 'My Disk'. Single quotes are safer than double quotes because the shell does not do any interpretation of the contents of a single-quoted string. However, double quotes are less likely to appear in a filename, making them slightly easier to use.
When in doubt, use a backslash before the character in question, or two backslashes to represent a literal backslash. For more detailed information, see the shell's documentation on quoting.
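A short sketch of the three quoting forms, using a throwaway file named My Disk:

```shell
# Create a file whose name contains a space.
touch "My Disk"

# All three of these forms name the same file:
ls My\ Disk     # backslash-escaped space
ls "My Disk"    # double quotes
ls 'My Disk'    # single quotes (no interpretation of the contents at all)

# Unquoted, the shell would split My Disk into two separate arguments.
```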
Accessing Files on Additional Volumes

On a typical UNIX system, the storage provided by local disk drives is presented as a single tree of files descending from a single root directory. This differs from the way the Finder presents local disk drives, which is as one or more volumes, with each volume acting as the root of its own directory hierarchy. To satisfy both worlds, OS X includes a hidden directory, /Volumes, at the root of the local file system. This directory contains all of the volumes attached to the local computer. To access the contents of other local (and many network) volumes, you prefix the volume-relative path with /Volumes/ followed by the volume name. For example, to access the Applications directory on a volume named MacOSX, you would use the path /Volumes/MacOSX/Applications.
Note: to access files on the boot volume, you are not required to add volume information, since the root directory of the boot volume is /. Including the volume information still works, though, so if you are interacting with the shell from an application that is volume-aware, you may want to add it, if only to be consistent with the way you access other volumes. You must include the volume information for all volumes other than the boot volume.

Input and Output

Most tools take text input from the user and print text out to the user's screen.
They do so using three standard file descriptors, which are created by the shell and are inherited by the program automatically. These standard file descriptors are listed in Table A-2.

Table A-2 Input and output sources for programs

stdin: The standard input file descriptor is the means through which a program obtains input from the user or other tools. By default, this descriptor provides the user's keystrokes. You can also redirect the output from files or other commands to stdin, allowing you to control one tool with another tool.

stdout: The standard output file descriptor is where most tools send their output data. By default, standard output sends data back to the user. You can also redirect this output to the input of other tools.

stderr: The standard error file descriptor is where the program sends error messages, debug messages, and any other information that should not be considered part of the program's actual output data. By default, errors are displayed on the command line like standard output. The purpose of having a separate error descriptor is so that the user can redirect the actual output data from the tool to another tool without that data getting corrupted by non-fatal errors and warnings. To learn more about working with these descriptors, including redirecting the output of one tool to the input of another, read your shell's documentation on redirection.

Terminating Programs

To terminate the currently running program from the command line, press Control-C.
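A brief sketch of redirecting these descriptors (the file names out.txt and err.txt are arbitrary):

```shell
# Redirect stdout to a file; stderr still goes to the terminal.
echo "normal output" > out.txt

# Redirect stderr (descriptor 2) to its own file.
ls /nonexistent-path 2> err.txt || true

# Connect stdout of one tool to stdin of another with a pipe.
echo "one two three" | wc -w

cat out.txt     # the redirected stdout
cat err.txt     # the captured error message
```

Because stderr was kept separate, out.txt contains only the real output, with the error message captured on its own in err.txt.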
This keyboard shortcut sends an interrupt (SIGINT) signal to the currently running process. In most cases this causes the process to terminate, although some tools may install signal handlers to trap this signal and respond differently.
(See the signal documentation for details.) In addition, you can terminate most scripts and command-line tools by closing a Terminal window or SSH connection. This sends a hangup (HUP) signal to the shell, which it then passes on to the currently running program. If you want a program to continue running after you log out, you should run it using the nohup command, which catches that signal and does not pass it on to whatever command it invokes.

Frequently Used Commands

Shell scripting involves a mixture of built-in shell commands and standard programs that run in all shells. Although most shells offer the same basic set of commands, there are often variations in the syntax and behavior of those commands. In addition to the shell commands, OS X also provides a set of standard programs that run in all shells. Table A-3 lists some commands that are commonly used interactively in the shell.
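As a sketch of how a program can trap a signal and respond differently, the shell itself can install a handler; here the interrupt signal is delivered to the current shell so the handler's effect is visible:

```shell
# Install a handler for the interrupt signal (the one Control-C sends),
# then deliver that signal to this shell to show the handler firing.
trap 'echo "caught the interrupt signal"' INT

kill -INT $$    # send SIGINT to the current process

echo "still running: the trap handled the signal instead of terminating"

trap - INT      # restore the default behavior
```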
Most of the items in this table are not specific to any given shell. For syntax and usage information for each command, see the corresponding man page. For a more in-depth list of commands and their accompanying documentation, see OS X Man Pages.
Table A-3 Frequently used commands and programs

cat (concatenate): Prints the contents of the specified files to stdout.

cd (change directory): Changes the current working directory to the specified path.

cp (copy): Copies files (and directories, when using the -r option) from one location to another.

date: Displays the current date and time using the standard format. You can display this information in other formats by invoking the command with specific flags.

echo (echo to output): Writes its arguments to stdout. This command is most often used in shell scripts to print status information to the user.

less and more (pager commands): Used to scroll through the contents of a file or the results of another shell command, allowing forward and backward navigation through the text. The more command got its name from the prompt "Press a key to show more" commonly used at the end of a screenful of information; the less command gets its name from the idiom "less is more".

ls (list): Displays the contents of the specified directory (or the current directory if no path is specified). Pass the -a flag to list all directory contents (including hidden files and directories), the -l flag to display detailed information for each entry, and -@ with -l to show extended attributes.

mkdir (make directory): Creates a new directory.

mv (move): Moves files and directories from one place to another. You also use this command to rename files and directories.

open: Opens an application or file. You can use this command to launch applications from Terminal and optionally open files in that application.

pwd (print working directory): Displays the full path of the current directory.

rm (remove): Deletes the specified file or files. You can use pattern-matching characters (such as the asterisk) to match more than one file. You can also remove directories with this command, although use of rmdir is preferred.

rmdir (remove directory): Deletes a directory. The directory must be empty before you delete it.

Ctrl-C (abort): Sends an interrupt signal to the current command. In most cases this causes the command to terminate, although commands may install signal handlers to trap this signal and respond differently.

Ctrl-Z (suspend): Sends the SIGTSTP signal to the current command. In most cases this causes the command to be suspended, although commands may install signal handlers to trap this signal and respond differently. Once suspended, you can use the fg builtin to bring the process back to the foreground or the bg builtin to continue running it in the background.

Ctrl-\ (quit): Sends the SIGQUIT signal to the current command. In most cases this causes the command to terminate, although commands may install signal handlers to trap this signal and respond differently.

Environment Variables

Some programs require the use of environment variables for their execution. Environment variables are variables inherited by all programs executed in the shell's context.
The shell itself uses environment variables to store information such as the name of the current user, the name of the host computer, and the paths to any executable programs. You can also create environment variables and use them to control the behavior of your program without modifying the program itself. For example, you might use an environment variable to tell your program to print debug information to the console. To set the value of an environment variable, you use the appropriate shell command to associate a variable name with a value. For example, to set the environment variable MYFUNCTION to the value MyGetData in the global shell environment you would type the following command in a Terminal window.
# In Bourne shell variants
export MYFUNCTION='MyGetData'

# In C shell variants
setenv MYFUNCTION 'MyGetData'

When you launch an application from a shell, the application inherits much of its parent shell's environment, including any exported environment variables. This form of inheritance can be a useful way to configure the application dynamically. For example, your application can check for the presence (or value) of an environment variable and change its behavior accordingly.
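A small sketch showing this inheritance, with a child shell reading (or failing to read) variables from its parent:

```shell
# Exported variables are inherited by child processes of this shell.
export MYFUNCTION="MyGetData"
sh -c 'echo "child sees: $MYFUNCTION"'

# Plain (unexported) shell variables are not inherited.
LOCALVAR="private"
sh -c 'echo "child sees: [$LOCALVAR]"'   # the brackets come back empty
```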
Different shells support different semantics for exporting environment variables, so see the man page for your preferred shell for further information. Child processes of a shell inherit a copy of the environment of that shell. Shells do not share their environments with one another.
Thus, variables you set in one Terminal window are not set in other Terminal windows. Once you close a Terminal window, any variables you set in that window are gone.
If you want the value of a variable to persist between sessions and in all Terminal windows, you must either add it to a login script or add it to your environment property list; see the relevant documentation for details. Similarly, environment variables set by tools or subshells are lost when those tools or subshells exit.

Running User-Added Commands

As mentioned previously, you can run most tools by typing their name.
This is because those tools are located in specific directories that the shell searches when you type the name of a command. The shell uses the PATH environment variable to control where it searches for these tools. It contains a colon-delimited list of paths to search— /usr/bin:/bin:/usr/sbin:/sbin, for example. If a tool is in any other directory, you must provide a path for the program to tell it where to find that tool.
(For security reasons, when writing scripts, you should always specify a complete, absolute path.) For security reasons, the current working directory is not part of the default search path (PATH), and should not be added to it. If it were, then another user on a multi-user system could trick you into running a command by adding a malicious tool with the same name as one you would typically run (such as the ls command) or a common misspelling thereof. For this reason, if you need to run a tool in the current working directory, you must explicitly specify its path, either as an absolute path (starting from /) or as a relative path starting with a directory name (which can be the . directory). For example, to run the MyCommandLineProgram tool in the current directory, you could type ./MyCommandLineProgram and press Return. With the aforementioned security caveats in mind, you can add new parts (temporarily) to the value of the PATH environment variable by doing the following.
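One way such a temporary addition might look (the tool name hello-tool and the bin-demo directory are invented for this demonstration):

```shell
# Create a tool in a directory that is not on the search path.
# hello-tool and bin-demo are made-up names for this demo.
mkdir -p "$HOME/bin-demo"
cat > "$HOME/bin-demo/hello-tool" <<'EOF'
#!/bin/sh
echo "hello from hello-tool"
EOF
chmod +x "$HOME/bin-demo/hello-tool"

# Prepend the directory to PATH for this shell session only.
export PATH="$HOME/bin-demo:$PATH"

# The shell can now find the tool by bare name.
hello-tool
```

The change lasts only for the current shell session; making it permanent requires a login script, as described earlier for environment variables.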
Note: as a general rule, if you launch a GUI application from a script, you should run that script only within Terminal or another GUI application. You cannot necessarily launch a GUI application when logged in remotely (using SSH, for example). In general, doing so is possible only if you are also logged in using the OS X GUI, and in some versions of OS X, it is disallowed entirely.

Learning About Other Commands

At the command-line level, most documentation comes in the form of man pages (short for manual).
Man pages provide reference information for many shell commands, programs, and POSIX-level concepts. The manpages manual page describes the organization of the manual and the format and syntax of individual man pages. To access a man page, type the man command followed by the name of the thing you want to look up. For example, to look up information about the bash shell, you would type man bash. The man pages are also included in the OS X Developer Library (OS X Man Pages). You can also search the manual pages by keyword using the apropos command.
Compiling Your Code in OS X

Now that you have the basic pieces in place, it is time to build your application. This section covers some of the more common issues that you may encounter in bringing your UNIX application to OS X. These issues apply largely without regard to what type of development you are doing.

Using GNU Autoconf, Automake, and Autoheader

If you are bringing a preexisting command-line utility to OS X that uses GNU autoconf, automake, or autoheader, you will probably find that it configures itself without modification (though the resulting configuration may be insufficient).
Just run configure and make as you would on any other UNIX-based system. If running the configure script fails because it doesn’t understand the architecture, try replacing the project’s config.sub and config.guess files with those available in /usr/share/automake-1.6. If you are distributing applications that use autoconf, you should include an up-to-date version of config.sub and config.guess so that OS X users don’t have to do anything extra to build your project. If that still fails, you may need to run /usr/bin/autoconf on your project to rebuild the configure script before it works. OS X includes autoconf in the BSD tools package. Beyond these basics, if the project does not build, you may need to modify your makefile using some of the tips provided in the following sections. After you do that, more extensive refactoring may be required.
Some programs may use autoconf macros that are not supported by the version of autoconf that shipped with OS X. Because autoconf changes periodically, you may actually need to get a new version of autoconf if you need to build the very latest sources for some projects. In general, most projects include a prebuilt configure script with releases, so this is usually not necessary unless you are building an open source project using sources obtained from CVS or from a daily source snapshot. However, if you find it necessary to upgrade autoconf, you can get a current version from the GNU website. Note that autoconf, by default, installs in /usr/local/, so you may need to modify your PATH environment variable to use the newly updated version. Do not attempt to replace the version installed in /usr/. For additional information about using the GNU autotoolset, see the manual pages for autoconf, automake, and autoheader.
Compiling for Multiple CPU Architectures

Because the Macintosh platform includes more than one processor family, it is often important to compile software for multiple processor architectures. For example, libraries should generally be compiled as universal binaries even if you are exclusively targeting an Intel-based Macintosh computer, as your library may be used by a PowerPC binary running under Rosetta. For executables, if you plan to distribute compiled versions, you should generally create universal binaries for convenience. When compiling programs for architectures other than your default host architecture, such as compiling for a ppc64 or Intel-based Macintosh target on a PowerPC-based build host, there are a few common problems that you may run into. Most of these problems result from one of the following mistakes:

- Assuming that the build host is architecturally similar to the target architecture and will thus be capable of executing intermediate build products.
Trying to determine target-processor-specific information at configuration time (by compiling and executing small code snippets) rather than at compile time (using macro tests) or execution time (for example, by using conditional byte swap functions) Whenever cross-compiling occurs, extra care must be taken to ensure that the target architecture is detected correctly. This is particularly an issue when generating a binary containing object code for more than one architecture. In many cases, binaries containing object code for more than one architecture can be generated simply by running the normal configuration script, then overriding the architecture flags at compile time. For example, you might run. Note: If you are using an older version of gcc and your makefile passes LDFLAGS to gcc instead of passing them directly to ld, you may need to specify the linker flags as -Wl,-syslibroot,/Developer/SDKs/MacOSX10.4u.sdk. This tells the compiler to pass the unknown flags to the linker without interpreting them.
Do not pass LDFLAGS in this form to ld, however; ld does not currently support the -Wl syntax. If you need to support an older version of gcc and your makefile passes LDFLAGS to both gcc and ld, you may need to modify it to pass this argument in different forms depending on which tool is being used. Fortunately, these cases are rare; most makefiles pass LDFLAGS to either gcc or ld, but not both. Newer versions of gcc support -syslibroot directly.
If your makefile does not explicitly pass the contents of LDFLAGS to gcc or ld, they may still be passed to one or the other by a make rule. If you are using the standard built-in make rules, the contents of LDFLAGS are passed directly to ld. If in doubt, assume that it is passed to ld. If you get an invalid flag error, you guessed incorrectly. If your makefile uses gcc to run the linker instead of invoking it directly, you must specify a list of target architectures to link when working with universal binary object (.o) files even if you are not using all of the architectures of the object file. If you don't, you will not create a universal binary, and you may also get a linker error. For more information about 64-bit executables, see.
However, applications that make configuration-time decisions about the size of data structures will generally fail to build correctly in such an environment (since those sizes may need to be different depending on whether the compiler is executing a ppc pass, a ppc64 pass, or an i386 pass). When this happens, the tool must be configured and compiled for each architecture as separate executables, then glued together manually using lipo. In rare cases, software not written with cross-compilation in mind will make configure-time decisions by executing code on the build host.
In these cases, you will have to manually alter either the configuration scripts or the resulting headers to be appropriate for the actual target architecture (rather than the build architecture). In some cases, this can be solved by telling the configure script that you are cross-compiling using the --host, --build, and --target flags. However, this may simply result in defaults for the target platform being inserted, which doesn't really solve the problem. The best fix is to replace configure-time detection of endianness, data type sizes, and so on with compile-time or run-time detection. For example, instead of testing the architecture for endianness to obtain a consistent byte order in a file, you should do one of the following: use C preprocessor macros like __BIG_ENDIAN__ and __LITTLE_ENDIAN__ to test endianness at compile time; use functions like htonl, htons, ntohl, and ntohs to guarantee a big-endian representation on any architecture; or extract individual bytes by bitwise masking and shifting (for example, low_byte = word & 0xff; next_byte = (word >> 8) & 0xff; and so on). Similarly, instead of performing elaborate tests to determine whether to use int or long for a 4-byte piece of data, you should simply use a standard sized type such as uint32_t. Note: Not all script execution is incompatible with cross-compiling. A number of open source tools (GTK, for example) use script execution to determine the presence or absence of libraries, determine their versions and locations, and so on. In those cases, you must be certain that the config script associated with the universal binary installation (or the target platform installation if you are strictly cross-compiling) is the one that executes during the configuration process, rather than the config script associated with an installation specific to your host architecture. There are a few other caveats when working with universal binaries:
The library archive utility, ar, cannot work with libraries containing code for more than one architecture (or with single-architecture libraries generated with lipo) after ranlib has added a table of contents to them. Thus, if you need to add additional object files to a library, you must keep a separate copy without a TOC. The -M switch to gcc (to output dependency information) is not supported when multiple architectures are specified on the command line. Depending on your makefile, this may require substantial changes to your makefile rules. For autoconf-based configure scripts, the flag --disable-dependency-tracking should solve this problem. For projects using automake, it may be necessary to run automake with the -i flag to disable dependency checks, or to put no-dependencies in the AUTOMAKE_OPTIONS variable in each Makefile.am file.
If you run into problems building a universal binary for an open source tool, the first thing you should do is get the latest version of the source code. This does two things: it ensures that the version of autoconf and automake used to generate the configuration scripts is reasonably current, reducing the likelihood of build failures, execution failures, compatibility problems, and other idiosyncratic or downright broken behavior; and it reduces the likelihood of building a version of an open source tool that contains known security holes or other serious bugs.
Older versions of autoconf do not gracefully handle the case where --target, --host, and --build are not all specified. Different versions also behave differently when you specify only one or two of these flags. Thus, you should always specify all three of these options when running an autoconf-generated configure script with the intent to cross-compile.
Some earlier versions of autoconf handle cross-compiling poorly. If your tool contains a configure script generated by an early autoconf, you may be able to significantly improve things by replacing some of the config.* files (config.guess in particular) with updated copies from the version of autoconf that comes with OS X. This will not always work, however, in which case it may be necessary to actually regenerate the configure script by running autoconf.
To do this, simply change into the root directory of the project and run /usr/bin/autoconf. It will automatically detect the configure.in file and use it to generate a new configure script. If you get warnings, you should first try a web search for the error message, as someone else may have already run into the problem (possibly on a different tool) and found a solution. If you get errors about missing AC macros, you may need to download copies of the libraries on which your tool depends and copy their .m4 autoconf configuration files into /usr/share/autoconf. Alternatively, you can add the macros to the file acinclude.m4 in your project's main directory, and autoconf should automatically pick up those macros.
You may, in some cases, need to rerun automake and/or autoheader if your tool uses them. Be prepared to run into missing AM and AH macros if you do, however. Because of the added risk of missing macros, this should generally be done only if running autoconf by itself does not correct a build problem. Running make CFLAGS='-isysroot /Developer/SDKs/MacOSX10.4u.sdk -arch i386 -arch ppc' should generally result in those flags being added to CFLAGS during compilation. However, this behavior is not completely consistent across makefiles from different projects. For additional information about autoconf, automake, and autoheader, see the autoconf documentation.
For additional information on compiler flags for Intel-based Macintosh computers, modifying code to support little-endian CPUs, and other porting concerns, you should read, available from the ADC Reference Library. Cross-Compiling a Self-Bootstrapping Tool Probably the most difficult situation you may experience is that of a self-bootstrapping tool—a tool that uses a (possibly stripped-down) copy of itself to either compile the final version of itself or to construct support files or libraries. Some examples include TeX, Perl, and gcc. Ideally, you should be able to build the executable as a universal binary in a single build pass. If that is possible, everything “just works”, since the universal binary can execute on the host.
However, this is not always possible. If you have to cross-compile and glue the pieces together with lipo, this obviously will not work. If the build system is written well, the tool will bootstrap itself by building a version compiled for the host, then use that to build the pieces for the target, and finally compile a version of the binary for the target. In that case, you should not have to do anything special for the build to succeed. In some cases, however, it is not possible to simultaneously compile for multiple architectures and the build system wasn’t designed for cross-compiling. In those cases, the recommended solution is to pre-install a version of the tool for the host architecture, then modify the build scripts to rename the target’s intermediate copy of the tool and copy the host’s copy in place of that intermediate build product (for example, mv miniperl miniperl-target; cp /usr/bin/perl miniperl). By doing this, later parts of the build script will execute the version of the tool built for the host architecture.
Assuming there are no architecture dependencies in the dependent tools or support files, they should build correctly using the host’s copy of the tool. Once the dependent build is complete, you should swap back in the original target copy in the final build phase. The trick is in figuring out when to have each copy in place.
Conditional Compilation on OS X You will sometimes find it necessary to use conditional compilation to make your code behave differently depending on whether certain functionality is available. Older code sometimes used conditional statements like #ifdef __MACH__ or #ifdef __APPLE__ to try to determine whether it was being compiled on OS X. While this seems appealing as a quick way of getting ported, it ultimately causes more work in the long run. For example, if you assume that a particular function does not exist in OS X and conditionally replace it with your own version that implements the same functionality as a wrapper around a different API, your application may no longer compile, or may be less efficient, if Apple adds that function in a later version. Apart from displaying or using the name of the OS for some reason (which you can more portably obtain from an API), code should never behave differently on OS X merely because it is running on OS X. Code should behave differently because OS X behaves differently in some way—offering an additional feature, not offering functionality specific to another operating system, and so on.
Thus, for maximum portability and maintainability, you should focus on that difference and make the conditional compilation dependent upon detecting the difference rather than dependent upon the OS itself. This not only makes it easier to maintain your code as OS X evolves, but also makes it easier to port your code to other platforms that may support different but overlapping feature sets.
The most common reasons you might want to use such conditional statements are attempts to detect differences in:

- processor architecture
- byte order
- file system case sensitivity
- other file system properties
- compiler, linker, or toolchain differences
- availability of application frameworks
- availability of header files
- support for a function or feature

Instead, it is better to figure out why your code needs to behave differently in OS X, then use conditional compilation techniques that are appropriate for the actual root cause. The misuse of these conditionals often causes problems. For example, if you assume that certain frameworks are present whenever those macros are defined, you might get compile failures when building a 64-bit executable. If you instead test for the availability of the framework, you might be able to fall back on an alternative mechanism such as X11, or you might skip building the graphical portions of the application entirely. For example, OS X provides preprocessor macros to determine the CPU architecture and byte order. These include:
- __i386__—Intel (32-bit)
- __x86_64__—Intel (64-bit)
- __ppc__—PowerPC (32-bit)
- __ppc64__—PowerPC (64-bit)
- __BIG_ENDIAN__—big-endian CPU
- __LITTLE_ENDIAN__—little-endian CPU
- __LP64__—the LP64 (64-bit) data model

In addition, using tools like autoconf, you can create arbitrary conditional compilation on nearly any practical feature of the installation, from testing whether a file exists to seeing whether you can successfully compile a piece of code. For example, if a portion of your project requires a particular application framework, you can compile a small test program whose main function calls a function in that framework. If the test program compiles and links successfully, the application framework is present for the specified CPU architecture. You can even use this technique to determine whether to include workarounds for known bugs in Apple or third-party libraries and frameworks, either by testing the versions of those frameworks or by providing a test case that reproduces the bug and checking the results. For example, in OS X, poll does not support device files such as /dev/tty.
If you just avoid poll when your code is running on OS X, you are making two assumptions that you should not make. First, you are assuming that what you are doing will always be unsupported; OS X is an evolving operating system that adds new features on a regular basis, so this is not necessarily a valid assumption. Second, you are assuming that OS X is the only platform that does not support using poll on device files; while this is probably true for most device files, not all device files support poll in all operating systems, so this is also not necessarily a valid assumption. A better solution is to use a configuration-time test that tries to use poll on a device file and, if the call fails, disables the use of poll. If using poll provides some significant advantage, it may be better to perform a runtime test early in your application's execution, then use poll only if that test succeeds.
By testing for support at runtime, your application can use the poll API if it is supported by a particular version of any operating system, falling back gracefully if it is not. A good rule is to always test for the most specific thing that is guaranteed to meet your requirements. If you need a framework, test for the framework. If you need a library, test for the library. If you need a particular compiler version, test the compiler version.
By doing this, you increase your chances that your application will continue to work correctly without modification in the future. Choosing a Compiler OS X ships two compilers and their corresponding toolchains. The default compiler is based on GCC 4.2.
In addition, a compiler based on GCC 4.0 is provided. Older versions of Xcode also provide prior versions. Compiling for 64-bit PowerPC and Intel-based Macintosh computers is only supported in version 4.0 and later. Compiling 64-bit kernel extensions is only supported in version 4.2 and later. Always try to compile your software using GCC 4 because future toolchains will be based on GCC version 4 or later. However, because GCC 4 is a relatively new toolchain, you may find bugs that prevent compiling certain programs. Use of the legacy GCC 2.95.2-based toolchain is strongly discouraged unless you have to maintain compatibility with OS X version 10.1.
If you run into a problem that looks like a compiler bug, try using a different version of GCC. You can run a different version by setting the CC environment variable in your Makefile. For example, CC=gcc-4.0 chooses GCC 4.0. In Xcode, you can change the compiler setting on a per-project basis or a per-file basis by selecting a different compiler version in the appropriate build settings inspector.
Setting Compiler Flags When building your projects in OS X, simply supplying or modifying the compiler flags of a few key options is all you need to do to port most programs. These are usually specified by either the CFLAGS or LDFLAGS variable in your makefile, depending on which part of the compiler chain interprets the flags.
Unless otherwise specified, you should add these flags to CFLAGS if needed. Note: The 64-bit toolchain in OS X v10.4 and later has additional compiler flags (and a few deprecated flags). These are described in more detail in. Some common flags include: -flat_namespace (in LDFLAGS) Changes from a two-level to a single-level (flat) namespace. By default, OS X builds libraries and applications with a two-level namespace in which references to dynamic libraries are resolved to a definition in a specific dynamic library when the image is built.
Use of this flag is generally discouraged, but in some cases it is unavoidable. For more information, see. -bundle (in LDFLAGS) Produces a Mach-O bundle format file, which is used for creating loadable plug-ins. See the ld man page for more discussion of this flag. -bundle_loader executable (in LDFLAGS) Specifies which executable will load a plug-in. Undefined symbols in that bundle are checked against the specified executable as if it were another dynamic library, thus ensuring that the bundle will actually be loadable without missing symbols. -framework framework (in LDFLAGS) Links the executable being built against the listed framework.
For example, you might add -framework vecLib to include support for vector math. -mmacosx-version-min=version Specifies the version of OS X you are targeting. You must target your compile for the oldest version of OS X on which you want to run the executable. In addition, you should install and use the cross-development SDK for that version of OS X.
For more information, see. Note: OS X uses a single-pass linker.
Make sure that you put your framework and library options after the object (.o) files. To get more information about the Apple linker, read the manual page for ld. More extensive discussion of the compiler in general can be found at. Understanding Two-Level Namespaces By default, OS X builds libraries and applications with a two-level namespace.
In a two-level namespace environment, when you compile a new dynamic library, any references that the library might make to other dynamic libraries are resolved to a definition in those specific dynamic libraries. The two-level namespace design has many advantages for Carbon applications. However, it can cause problems for many traditional UNIX applications if they were designed to work in a flat namespace environment. For example, suppose one library, call it libfoo, uses another library, libbar, for its implementation of the function barIt. Now suppose an application wants to override the use of libbar with a compressed version, called libzbar.
Since libfoo was linked against libbar at compile time, this is not possible without recompiling libfoo. To allow the application to override references made by libfoo to libbar, you would use the flag -flat_namespace. The ld man page has a more detailed discussion of this flag.
If you are writing libraries from scratch, it may be worth considering the two-level namespace issue in your design. If you expect that someone may want to override your library’s use of another library, you might have an initializer routine that takes pointers to the second library as its arguments, and then use those pointers for the calls instead of calling the second library directly. Alternately, you might use a plug-in architecture in which the calls to the outside library are made from a plug-in that could be easily replaced with a different plug-in for a different outside library. See for more information. For the most part, however, unless you are designing a library from scratch, it is not practical to avoid using -flatnamespace if you need to override a library’s references to another library.
If you are compiling an executable (as opposed to a library), you can also use -force_flat_namespace to tell dyld to use a flat namespace when loading any dynamic libraries and bundles loaded by the binary. This is usually not necessary, however. Executable Format The only executable format that the OS X kernel understands is the Mach-O format.
Some bridging tools are provided for classic Macintosh executable formats, but Mach-O is the native format. It is very different from the commonly used Executable and Linking Format (ELF). For more information on Mach-O, see OS X ABI Mach-O File Format Reference. Dynamic Libraries and Plug-ins Dynamic libraries and plug-ins behave differently in OS X than in other operating systems. This section explains some of those differences. Using Dynamic Libraries at Link Time When linking an executable, OS X treats dynamic libraries just like libraries in any other UNIX-based or UNIX-like operating system. If you have a library called libmytool.a, libmytool.dylib, or libmytool.so, for example, all you have to do is this.
ld a.o b.o c.o -L/path/to/lib -lmytool

As a general rule, you should avoid creating static libraries (.a) except as a temporary side product of building an application. You must run ranlib on any archive file before you attempt to link against it. Using Dynamic Libraries Programmatically OS X makes heavy use of dynamically linked code. Unlike other binary formats such as ELF and XCOFF, Mach-O treats plug-ins differently than it treats shared libraries.
The preferred mechanism for dynamic loading of shared code, beginning in OS X v10.4, is the dlopen family of functions. These are described in the man page for dlopen. The ld and dyld man pages give more specific details of the dynamic linker's implementation. Note: By default, the names of dynamic libraries in OS X end in .dylib instead of .so. You should be aware of this when writing code to load shared code in OS X. Libraries that you are familiar with from other UNIX-based systems might not be in the same location in OS X.
This is because OS X has a single dynamically loadable framework, libSystem, that contains much of the core system functionality. This single module provides the standard C runtime environment, input/output routines, math libraries, and most of the normal functionality required by command-line applications and network services.
The libSystem library also includes functions that you would normally expect to find in libc and libm, RPC services, and a name resolver. Because libSystem is automatically linked into your application, you do not need to explicitly add it to the compiler’s link line. For your convenience, many of these libraries exist as symbolic links to libSystem, so while explicitly linking against -lm (for example) is not needed, it will not cause an error. To learn more about how to use dynamic libraries, see. Compiling Dynamic Libraries and Plugins For the most part, you can treat dynamic libraries and plugins the same way as on any other platform if you use GNU libtool. This tool is installed in OS X as glibtool to avoid a name conflict with NeXT libtool. For more information, see the manual page for glibtool.
You can also create dynamic libraries and plugins manually if desired. As mentioned in, dynamic libraries and plugins are not the same thing in OS X. Thus, you must pass different flags when you create them. To create dynamic libraries in OS X, pass the -dynamiclib flag. To create plugins, pass the -bundle flag. Because plugins can be tailored to a particular application, the OS X compiler provides the ability to check these plugins for loadability at compile time.
To take advantage of this feature, use the -bundle_loader flag. Important: OS X does not support the concept of weak linking as it is found in systems like Linux. If you override one symbol, you must override all of the symbols in that object file. To learn more about how to create and use dynamic libraries, see. Bundles In the OS X file system, some directories store executable code and the software resources related to that code in one discrete package.
These packages, known as bundles, come in two varieties that you should be familiar with during the basic porting process: application bundles and frameworks. In particular, you should be aware of how to use frameworks, since you may need to link against the contents of a framework when porting your application. Application Bundles Application bundles are special directories that appear in the Finder as a single entity. Presenting one item allows a user to double-click it to launch the application with all of its supporting resources. If you are building Mac apps, you should create application bundles.
Xcode builds them by default if you select one of the application project types. More information on application bundles is available in and in. Frameworks A framework is a type of bundle that packages a shared library with the resources that the library requires. Depending on the library, this bundle could include header files, images, and reference documentation. If you are trying to maintain cross-platform compatibility, you may not want to create your own frameworks, but you should be aware of them because you might need to link against them. For example, you might want to link against the Core Foundation framework. Since a framework is just one form of a bundle, you can do this by linking against /System/Library/Frameworks/CoreFoundation.framework with the -framework flag.
A more thorough discussion of frameworks is in. For More Information You can find additional information about bundles in. Handling Multiply Defined Symbols A multiply defined symbol error occurs if there are multiple definitions for any symbol in the object files that you are linking together. You can specify the following flags to modify the handling of multiply defined symbols under certain circumstances: -multiply_defined treatment Specifies how multiply defined symbols in dynamic libraries should be treated when -twolevel_namespace is in effect. The value for treatment must be one of: error (treat multiply defined symbols as an error), warning (treat multiply defined symbols as a warning), or suppress (ignore multiply defined symbols). The default behavior is to treat multiply defined symbols in dynamic libraries as warnings when -twolevel_namespace is in effect. -multiply_defined_unused treatment Specifies how unused multiply defined symbols in dynamic libraries should be treated when -twolevel_namespace is in effect. An unused multiply defined symbol is a symbol defined in the output that is also defined in one of the dynamic libraries, but where the symbol in the dynamic library is not used by any reference in the output.
The values for treatment must be error, warning, or suppress. The default for unused multiply defined symbols is to suppress these messages. Predefined Macros The following macros are predefined in OS X: __OBJC__ This macro is defined when your code is being compiled by the Objective-C compiler. By default, this occurs when compiling a .m file or any header included by a .m file. You can force the Objective-C compiler to be used for a .c or .h file by passing the -ObjC or -ObjC++ flags.
__cplusplus This macro is defined when your code is being compiled by the C++ compiler (either explicitly or by passing the -ObjC++ flag). __ASSEMBLER__ This macro is defined when compiling .s files. __NATURAL_ALIGNMENT__ This macro is defined on systems that use natural alignment.
When using natural alignment, an int is aligned on a sizeof(int) boundary, a short int is aligned on a sizeof(short) boundary, and so on. It is defined by default when you are compiling code for PowerPC architectures. It is not defined when you use the -malign-mac68k compiler switch, nor is it defined on Intel architectures.
__MACH__ This macro is defined if Mach system calls are supported. __APPLE__ This macro is defined on any Apple platform. __APPLE_CC__ This macro is set to an integer that represents the version number of the compiler.
This lets you distinguish, for example, between compilers based on the same version of GCC, but with different bug fixes or features. Larger values denote later compilers.
__BIG_ENDIAN__ and __LITTLE_ENDIAN__ These macros tell whether the current architecture uses little-endian or big-endian byte ordering. For more information, see.
Note: To define a section of code to be compiled only on an OS X system, you should guard it using the __APPLE__ and __MACH__ macros together. The macro __UNIX__ is not defined in OS X. Other Porting Tips This section describes alternatives to certain commonly used APIs. Headers The following headers commonly found in UNIX, BSD, or Linux operating systems are either unavailable or unsupported in OS X: alloca.h This file does not exist in OS X, but the functionality does exist.
You should include stdlib.h instead. Alternatively, you can define the prototypes yourself as follows:

#ifndef _ALLOCA_H
#undef alloca
/* Now define the internal interfaces. */
extern void *alloca (size_t size);
#ifdef __GNUC__
# define alloca(size) __builtin_alloca (size)
#endif /* GCC. */
#endif

ftw.h The ftw function traverses the directory hierarchy and calls a function to get information about each file. However, there isn't a function similar to ftw in fts.h. One alternative is to use fts_open, fts_children, and fts_close to implement such a file traversal. To do this, use the fts_open function to get a handle to the file hierarchy, use fts_read to get information on each file, and use fts_children to get a link to a list of structures containing information about files in a directory. Alternatively, you can use opendir, readdir, and closedir with recursion to achieve the same result.
For example, to get a description of each file located in /usr/include using fts.h, you would follow this pattern. To test whether a year is a leap year, you can use an expression such as ((year) % 4 == 0 && ((year) % 100 != 0 || (year) % 400 == 0)). You can either use this code to implement the functionality yourself, or you can use any of the existing APIs in time.h to do something similar. ecvt, fcvt Discouraged in OS X. Use, and similar functions instead. fcloseall This function is an extension to fclose.
Although OS X supports fclose, fcloseall is not supported. You can use fclose to implement fcloseall by storing the file pointers in an array and iterating through the array. getmntent, setmntent, addmntent, endmntent, hasmntopt In general, volumes in OS X are not in /etc/fstab. However, to the extent that they are, you can get similar functionality from and related functions. poll This API is partially supported in OS X; it does not support polling devices. sbrk, brk The brk and sbrk functions are historical curiosities left over from the days before the advent of virtual memory management.
Although they are present on the system, they are not recommended. shmget This API is supported but is not recommended, because shmget can allocate only a limited number of shared memory blocks. When several applications use shmget, this limit may be reached and cause problems for the other applications.
In general, you should either use for mapping files into memory or use the POSIX function and related functions for creating non-file-backed shared memory. swapon, swapoff These functions are not supported in OS X. Utilities The chapter in describes a number of cross-platform compatibility issues that you may run into with command-line utilities that are commonly scripted. This section lists several commands that are primarily of interest to people porting compiled applications and drivers, rather than general-purpose scripting.
ldd The ldd command is not available in OS X. However, you can use otool -L to get the same functionality that ldd provides: the otool command displays specified parts of object files or libraries, and the -L option displays the names and version numbers of the shared libraries that an object file uses. To see all the available options, see the manual page for otool. lsmod lsmod is not available in OS X, but other commands offer similar functionality.
kextutil Loads, diagnoses problems with, and generates symbols for kernel extensions. kextstat Prints statistics about currently loaded drivers and other kernel extensions. kextload Loads the kernel module for a device driver or other kernel extension. This is a basic command intended for use in scripts; for development purposes, use kextutil instead. kmodunload Unloads the kernel module for a device driver or other kernel extension. This is a basic command intended for use in scripts.
For developer purposes, use kextutil instead. For more information about loading kernel modules, see.
The main goal of this document is to provide sufficient information on basic Linux and shell scripting usage to get started on introductory exercises. By the end of you should be able to launch commands from the command-line, compose commands using pipes, and navigate in the filesystem.
By the end of you should be able to write basic shell scripts to perform repetitive operations. The context of this guide is undergraduate and postgraduate courses at Heriot-Watt University, Edinburgh, that rely on basic knowledge of Linux usage. In particular, this guide should provide sufficient context for assignments in the courses F21CN Computer Network Security and F21SC Industrial Programming. However, it is designed to be of broader use as well.
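As a small taste of the command composition mentioned above, pipes let several small commands cooperate on one task (the file /etc/passwd is just a convenient example input):

```shell
# List the login shells configured in /etc/passwd, most common first:
# cut extracts field 7 (the shell), sort groups duplicates, uniq -c
# counts them, sort -rn orders by count, and head keeps the top five.
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn | head -5
```

Each stage reads the previous stage's output, so no temporary files are needed.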
Linux Quick Reference Before you start the tutorial proper, it is strongly recommended that you get this (O'Reilly) quick reference. First of all, make sure that you have your username and password ready.
You will get these on induction day. If you missed that opportunity, (room EMB 1.33).
Now, work through, which takes you through the first steps of logging in and basic Linux command-line usage. By the end of this section, you should be able to launch commands from the command line and get help on the most common activities. Then, work through, which teaches you basic shell usage in bash and takes you through a couple of exercises. It starts with simple straight-line scripts, i.e. sequences of commands that are executed as if typed on the command line, and moves on to repetitive scripts, e.g. using loops and function calls. Finally, to deepen your understanding, look at the examples in, save the examples in separate files, and execute the files as discussed.
Make small changes to the scripts to modify their behaviour. By the end of this section, you should be able to write basic shell scripts to automate repetitive processes.
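A minimal sketch of such a repetitive-task script (the function name rename_txt and the .txt/.bak extensions are just for illustration):

```shell
# Rename every .txt file in the current directory to .bak.
rename_txt() {
    for f in *.txt; do
        [ -e "$f" ] || continue      # the glob matched nothing; skip
        mv "$f" "${f%.txt}.bak"
    done
}
```

Save it in a file with a #!/bin/sh first line, make the file executable with chmod +x, and it becomes a command you can rerun whenever needed; changing the loop body adapts it to any other per-file chore.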
For further practicals beyond the scope of this tutorial, check the section below; in particular, look up the detailed 3-part online course. If you prefer a comprehensive textbook, covering the range of basic to advanced Linux usage, with many examples and a fairly complete command reference, check out. All exercises in this tutorial are shown in a format like this:

$ pwd
/home/hwloidl

Lines starting with $ are executable commands; all other lines are the output of running the command.
The $ symbol stands for the prompt you see in your terminal window. To do an exercise, cut and paste the text after the $ symbol into your terminal window. For example, when you cut and paste the command pwd from the example above, you will get the current directory as a reply. Lines starting with # are comments, explaining what the commands are doing, and can be ignored. Try variants of the commands shown to understand what's happening in each step. Notes at various points will refer you to a more detailed treatment of individual topics. There is a lot of good introductory material on Linux around.
These are guides tailored to the Linux setup at MACS:
- by Rob Stewart (a Linux practical for those completely new to Linux)
These are general Linux tutorials or cheat sheets (a concise collection of the most important commands):
- of the Advanced Bash Scripting Guide
The following resources are more detailed Linux introductions and tutorials:
- a detailed 3-part course on Linux and Shell Scripting, by Information Services at Edinburgh University (copyright The University of Edinburgh 2012)
- (a software carpentry course)
- by Vivek G. Gite (an introduction starting from the very basics; see also the )
- by Mendel Cooper (the best source of information for bash scripting; example-driven)
- (for looking up details of bash commands etc.)
- (especially useful for sysadmin tasks)
Books about Linux and UNIX in general:
- by Mark G. Sobell, Prentice Hall, 2012 (an excellent, comprehensive guide to Linux usage, starting from scratch and reaching advanced usage)
- by Grace Todino-Gonguet, John Strang, and Jerry Peek, 5th edition, October 2001, ISBN 0-596-00261-0 (an introduction to UNIX for newbies)
- by Arnold Robbins (standard reference book)
- learning UNIX by examples (for beginners)
- (the ultimate guide for efficient usage of UNIX tools)