The details of setting up disks in OpenBSD vary between platforms, so you should read the installation instructions in the INSTALL.<arch> file for your platform to determine the specifics for your system.
The two types of "partitions" are:
fdisk(8) partitions (sometimes called MBR or BIOS partitions), which are what the boot ROM and other operating systems see, and
disklabel(8) partitions, which OpenBSD uses to lay out its own file systems.
All OpenBSD platforms use disklabel(8) as the primary way to manage OpenBSD filesystem partitions, but only some platforms also require using fdisk(8) to manage Partition Table partitions. On the platforms that use fdisk partitions, one fdisk partition is used to hold all of the OpenBSD file systems; this partition is then sliced up into disklabel partitions. These disklabel partitions are labeled "a" through "p". A few of these are "special":
a: the root partition, on the boot disk only.
b: the swap partition, on the boot disk only.
c: covers the entire disk and is always present.
Some utilities will let you use the "shortcut" name of a partition (i.e., "sd0d") or a drive (i.e., "wd1") instead of the actual device name ("/dev/sd0d" or "/dev/wd1c", respectively).
Note again that if you put data on wd2d, then later remove wd1 from the system and reboot, your data is now on wd1d, as your old wd2 is now wd1. However, a drive's identification won't change after boot, so if a USB drive is unplugged or fails, it won't change the identification of other drives until reboot.
These UIDs can be used to identify the disks almost anywhere a partition or device would be specified, for example in /etc/fstab or in command lines. Of course, disks and partitions may also be identified in the traditional way, by device, unit number and partition (i.e., /dev/sd1f), and this can be done interchangeably.
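For example, using the DUID that appears in the disklabel examples later in this document, a hypothetical /etc/fstab might contain entries like the following (the DUID value is purely illustrative; use the one disklabel(8) reports for your disk):

d920a43a5a56ad5f.a /    ffs rw 1 1
d920a43a5a56ad5f.d /usr ffs rw,nodev 1 2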
It is worth noting that the DUID is a property of the disklabel, though as OpenBSD only supports one disklabel per disk, this is mostly academic.
fdisk(8) is used on some platforms (i386, amd64, macppc, zaurus and armish) to create a partition recognized by the system boot ROM, into which the OpenBSD disklabel partitions can be placed. Other platforms do not need or use fdisk(8). fdisk(8) can also be used for manipulations of the Master Boot Record (MBR), which can impact all operating systems on a computer. Unlike the fdisk-like programs on some other operating systems, OpenBSD's fdisk assumes you know what you want to do, and for the most part, it will let you do what you need to do, making it a powerful tool to have on hand. It will also let you do things you shouldn't or didn't intend to do, so it must be used with care.
Normally, only one OpenBSD fdisk partition will be placed on a disk. That partition will be subdivided by disklabel into OpenBSD filesystem partitions.
To just view your partition table using fdisk, use:
# fdisk sd0
Which will give an output similar to this:
Disk: sd0       geometry: 553/255/63 [8883945 Sectors]
Offset: 0       Signature: 0xAA55
            Starting         Ending         LBA Info:
 #: id      C   H   S -      C   H   S [       start:        size ]
------------------------------------------------------------------------
*0: A6      3   0   1 -    552 254  63 [       48195:     8835750 ] OpenBSD
 1: 12      0   1   1 -      2 254  63 [          63:       48132 ] Compaq Diag.
 2: 00      0   0   0 -      0   0   0 [           0:           0 ] unused
 3: 00      0   0   0 -      0   0   0 [           0:           0 ] unused
In this example we are viewing the fdisk output of the first SCSI-like drive. We can see the OpenBSD partition (id A6) and its size. The * tells us that the OpenBSD partition is the bootable partition.
In the previous example we just viewed our information. What if we want to edit our partition table? Well, to do so we must use the -e flag. This will bring up a command line prompt to interact with fdisk.
# fdisk -e wd0
Enter 'help' for information
fdisk: 1> help
        help            Command help list
        manual          Show entire OpenBSD man page for fdisk
        reinit          Re-initialize loaded MBR (to defaults)
        setpid          Set the identifier of a given table entry
        disk            Edit current drive stats
        edit            Edit given table entry
        flag            Flag given table entry as bootable
        update          Update machine code in loaded MBR
        select          Select extended partition table entry MBR
        swap            Swap two partition entries
        print           Print loaded MBR partition table
        write           Write loaded MBR to disk
        exit            Exit edit of current MBR, without saving changes
        quit            Quit edit of current MBR, saving current changes
        abort           Abort program without saving current changes
fdisk: 1>
The help output above gives a brief overview of the commands available in -e mode; the fdisk(8) man page describes each of them in detail.
First, be sure to read the disklabel(8) man page.
The details of setting up disks in OpenBSD vary somewhat between platforms. For i386, amd64, macppc, zaurus, and armish, disk setup is done in two stages: first, the OpenBSD slice of the hard disk is defined using fdisk(8), then that slice is subdivided into OpenBSD partitions using disklabel(8).
All OpenBSD platforms, however, use disklabel(8) as the primary way to manage OpenBSD partitions. Platforms that also use fdisk(8) place all the disklabel(8) partitions in a single fdisk partition.
Labels hold certain information about your disk, like your drive geometry and information about the filesystems on the disk. The disklabel is then used by the bootstrap program to access the drive and to know where filesystems are contained on the drive. You can read more in-depth information about disklabel in the disklabel(5) man page.
On some platforms, disklabel helps overcome architecture limitations on disk partitioning. For example, on i386, you can have 4 primary partitions, but with disklabel(8), you use one of these 'primary' partitions to store all of your OpenBSD partitions (for example, 'swap', '/', '/usr', '/var', etc.), and you still have 3 more partitions available for other OSs.
One of the major parts of OpenBSD's install is your initial creation of labels. During the install you use disklabel(8) to create your separate partitions. As part of the install process, you can define your mount points from within disklabel(8), but you can change these later in the install or post-install, as well.
There is not one "right" way to label a disk, but there are many wrong ways. Before attempting to label your disk, see this discussion on partitioning and partition sizing.
For an example of using disklabel(8) during install, see the Custom disklabel layout part of the Installation Guide.
After install, one of the most common reasons to use disklabel(8) is to look at how your disk is laid out. The following command will show you the current disklabel, without modifying it:
# disklabel wd0     <-- Or whatever disk device you'd like to view
type: ESDI
disk: ESDI/IDE disk
label: SAMSUNG HD154UI
duid: d920a43a5a56ad5f
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 2907021
total sectors: 2930277168
boundstart: 64
boundend: 2930272065
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize  cpg]
  a:          1024064               64  4.2BSD   2048 16384    1 # /
  b:          4195296          1024128    swap
  c:       2930277168                0  unused
  d:          4195296          5219424  4.2BSD   2048 16384    1 # /usr
  e:          4195296          9414720  4.2BSD   2048 16384    1 # /tmp
  f:         20972448         13610016  4.2BSD   2048 16384    1 # /var
  h:          2097632         34582464  4.2BSD   2048 16384    1 # /home
Note how this disk has only part of its space allocated at this time. Disklabel offers two different modes for editing the disklabel: a built-in command-driven editor (this is how you installed OpenBSD originally) and a full-screen editor, such as vi(1). You may find the command-driven editor "easier", as it guides you through all the steps and provides help upon request, but the full-screen editor has definite uses, too.
Let's add a partition to the above system.
Warning: Any time you are fiddling with your disklabel, you are putting all the data on your disk at risk. Make sure your data is backed up before editing an existing disklabel!
We will use the built-in command-driven editor, which is invoked using the "-E" option to disklabel(8).
# disklabel -E wd0
...
> a k
offset: [36680096]
size: [2893591969] 1T
Rounding to cylinder: 2147483536
FS type: [4.2BSD]
> p m
OpenBSD area: 64-2930272065; size: 1430796.9M; free: 364310.8M
#                size           offset  fstype [fsize bsize  cpg]
  a:           500.0M               64  4.2BSD   2048 16384    1 # /
  b:          2048.5M          1024128    swap
  c:        1430799.4M               0  unused
  d:          2048.5M          5219424  4.2BSD   2048 16384    1 # /usr
  e:          2048.5M          9414720  4.2BSD   2048 16384    1 # /tmp
  f:         10240.5M         13610016  4.2BSD   2048 16384    1 # /var
  h:          1024.2M         34582464  4.2BSD   2048 16384    1 # /home
  k:       1048575.9M         36680192  4.2BSD   8192 65536    1
> q
Write new label?: [y]

In this case, disklabel(8) was kind enough to calculate a good starting offset for the partition. In many cases, it will be able to do this, but if you have "holes" in the disklabel (i.e., you deleted a partition, or you just like making your life miserable) you may need to sit down with a pencil and paper to calculate the proper offset. Note that while disklabel(8) does some sanity checking, it is very possible to do things very wrong here. Be careful, and understand the meaning of the numbers you are entering.
On most OpenBSD platforms, there are sixteen disklabel partitions available, labeled "a" through "p" (some "specialty" systems may have only eight). Every disklabel should have a 'c' partition, with an "fstype" of "unused", that covers the entire physical drive. If your disklabel is not like this, it must be fixed; the "D" option of the disklabel editor can help. Never try to use the "c" partition for anything other than accessing the raw sectors of the disk, and do not attempt to create a file system on "c". On the boot device, "a" is reserved for the root partition and "b" is the swap partition, but only the boot device makes these distinctions. Other devices may use all fifteen partitions other than "c" for file systems.
Once you have the new disk installed properly, you need to use fdisk(8) (on platforms that use it) and disklabel(8) to set it up for OpenBSD.
On fdisk platforms, start with fdisk; other architectures can skip this step. In the example below we're adding a third SCSI-like drive to the system.
# fdisk -i sd2

This will initialize the disk's "real" partition table for exclusive use by OpenBSD. Next you need to create a disklabel for it. This will seem confusing.
# disklabel -e sd2
(screen goes blank, your $EDITOR comes up)
type: SCSI
...bla...
sectors/track: 63
total sectors: 6185088
...bla...
16 partitions:
#        size   offset    fstype   [fsize bsize   cpg]
  c:  6185088        0    unused        0     0         # (Cyl.    0 - 6135)
  d:  1405080       63    4.2BSD     1024  8192    16   # (Cyl.    0*- 1393*)
  e:  4779945  1405143    4.2BSD     1024  8192    16   # (Cyl. 1393*- 6135)

First, ignore the 'c' partition; it's always there and exists so that programs like disklabel can function. The fstype for OpenBSD is 4.2BSD. "total sectors" is the total size of the disk. Say this is a 3 gigabyte disk; three gigabytes in disk manufacturer terms is 3000 megabytes. So divide 6185088/3000 (use bc(1)) and you get 2061. So, to make up partition sizes for a, d, e, f, g, ..., just multiply X*2061 to get X megabytes of space on that partition. The offset for your first new partition should be the same as the "sectors/track" reported earlier in disklabel's output; for us it is 63. The offset for each partition after that should be the sum of the previous partition's offset and size (the 'c' partition does not figure into this calculation).
Or, if you just want one partition on the disk, say you will use the whole thing for web storage or a home directory or something, just take the total size of the disk and subtract the sectors per track from it: 6185088 - 63 = 6185025. Your partition is:

  d:  6185025       63    4.2BSD     1024  8192    16

If all this seems needlessly complex, you can just use disklabel -E to get the same partitioning mode that you got on your install disk! There, you can just use "96M" to specify 96 megabytes, or "96G" for 96 gigs.
That was a lot. But you are not finished. Finally, you need to create the filesystem on that disk using newfs(8).
# newfs sd2d
Or whatever your disk was named as per OpenBSD's disk numbering scheme. (Look at the output from dmesg(8) to see what your disk was named by OpenBSD.)
Now figure out where you are going to mount this new partition you just created. Say you want to put it on /u. First, make the directory /u. Then, mount it.
# mount /dev/sd2d /u
Finally, add it to /etc/fstab(5).
/dev/sd2d /u ffs rw 1 1
What if you need to migrate an existing directory like /usr/local? You should mount the new drive in /mnt and copy /usr/local to the /mnt directory. Example:
# cd /usr/local && pax -rw -p e . /mnt

Edit the /etc/fstab(5) file to show that the /usr/local partition is now /dev/sd2d (your freshly formatted partition). Example:
/dev/sd2d /usr/local ffs rw 1 1
Reboot into single user mode with boot -s, move the existing /usr/local to /usr/local-backup (or delete it if you feel lucky) and create an empty directory /usr/local. Then reboot the system, and voila, the files are there!
One non-obvious use for swap is to be a place the kernel can dump a copy of what is in core in the event of a system panic for later analysis. For this to work, you must have a swap partition (not a swap file) at least as large as your RAM. By default, the system will save a copy of this dump to /var/crash on reboot, so if you wish to be able to do this automatically, you will need sufficient free space on /var. However, you can also bring the system up single-user, and use savecore(8) to dump it elsewhere.
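For example, if the dump won't fit in /var, you could boot single-user, mount a partition with enough free space, and run savecore(8) by hand; the directory shown here is only an illustration:

# savecore /altvar/crash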
Many types of systems may be appropriately configured with no swap at all. For example, firewalls should not swap in normal operation. Machines with flash storage generally should not swap. If your firewall is flash based, you may benefit (slightly) by not allocating a swap partition, though in most other cases, a swap partition won't actually hurt anything; most disks have more than enough space to allocate a little to swap.
There are all kinds of tips about optimizing swap (where on the disk, separate disks, etc.), but if you find yourself in a situation where optimizing swap is an issue, you probably need more RAM. In general, the best optimization for swap is to not need it.
In OpenBSD, swap is managed with the swapctl(8) program, which adds, removes, lists and prioritizes swap devices and files.
On OpenBSD, the 'b' partition of the boot drive is used by default and automatically for swap. No configuration is needed for this to take place. If you do not wish to use swap on the boot disk, do not define a "b" partition. If you wish to use swap on other partitions or on other disks, you need to define these partitions in /etc/fstab with lines something like:
/dev/sd3b none swap sw 0 0
/dev/sd3d none swap sw 0 0
Sometimes, your initial guess about how much swap you need proves to be wrong, and you have to add additional swap space, occasionally in a hurry (as in, "Geez, at the rate it is burning swap, we'll be wedged in five minutes"). If you find yourself in this position, adding swap space as a file on an existing file system can be a quick fix.
The file must not reside on a filesystem which has SoftUpdates enabled (they are disabled by default). To start out, you can see how much swap you currently have and how much you are using with the swapctl(8) utility. You can do this by using the command:
$ swapctl -l
Device      512-blocks     Used    Avail Capacity  Priority
swap_device      65520        8    65512     0%    0
This shows the devices currently being used for swapping and their current statistics. In the above example there is only one device named "swap_device". This is the predefined area on disk that is used for swapping. (Shows up as partition b when viewing disklabels) As you can also see in the above example, that device isn't getting much use at the moment, but for the purposes of this document, we will act as if an extra 32M is needed.
The first step to setting up a file as a swap device is to create the file. It's best to do this with the dd(1) utility. Here is an example of creating the file /var/swap that is 32M in size.
$ sudo dd if=/dev/zero of=/var/swap bs=1k count=32768
32768+0 records in
32768+0 records out
33554432 bytes transferred in 20 secs (1677721 bytes/sec)
Once this has been done, we can turn on swapping to that file. Use the following commands to set the proper permissions on the file and enable swapping to it:
$ sudo chmod 600 /var/swap
$ sudo swapctl -a /var/swap
Now we need to check to see if it has been correctly added to the list of our swap devices.
$ swapctl -l
Device      512-blocks     Used    Avail Capacity  Priority
swap_device      65520        8    65512     0%    0
/var/swap        65536        0    65536     0%    0
Total           131056        8   131048     0%
Now that the file is set up and swapping to it is active, you need to add a line to your /etc/fstab file so that this file is configured at the next boot as well. If this line is not added, you won't have this swap device configured after a reboot.
$ cat /etc/fstab
/dev/wd0a / ffs rw 1 1
/var/swap /var/swap swap sw 0 0
Soft Updates is based on an idea proposed by Greg Ganger and Yale Patt and developed for FreeBSD by Kirk McKusick. SoftUpdates imposes a partial ordering on the buffer cache operations which permits the requirement for synchronous writing of directory entries to be removed from the FFS code. Thus, a large performance increase is seen in disk writing performance.
Enabling soft updates must be done with a mount-time option. When mounting a partition with the mount(8) utility, you can specify that you wish to have soft updates enabled on that partition. Below is a sample /etc/fstab(5) entry that has one partition sd0a that we wish to have mounted with soft updates.
/dev/sd0a / ffs rw,softdep 1 1
Note to sparc users: Do not enable soft updates on sun4 or sun4c machines. These architectures support only a very limited amount of kernel memory and cannot use this feature. However, sun4m machines are fine.
While OpenBSD includes its own MBR code, you are not obliged to use it, as virtually any MBR code can boot OpenBSD. The MBR is manipulated by the fdisk(8) program, which is used both to edit the partition table, and also to install the MBR code on the disk.
OpenBSD's MBR announces itself with the message:
Using drive 0, partition 3.

showing the disk and partition it is about to load the PBR from. In addition to the obvious, it also shows a trailing period ("."), which indicates this machine is capable of using LBA translation to boot. If the machine were incapable of using LBA translation, the above period would have been replaced with a semicolon (";"), indicating CHS translation:
Using drive 0, partition 3;

Note that the trailing period or semicolon can be used as an indicator of the "new" OpenBSD MBR, introduced with OpenBSD 3.5.
The PBR is installed by installboot(8), which is further described later in this document. The PBR announces itself with the message:
Loading...

printing a dot for every file system block it attempts to load. Again, the PBR shows whether it is using LBA or CHS to load; if it has to use CHS translation, it displays the message with a semicolon:
Loading;...
boot(8) is an interactive program. After it loads, it attempts to locate and read /etc/boot.conf, if it exists (which it does not on a default install), and processes any commands in it. Unless instructed otherwise by /etc/boot.conf, it then gives the user a prompt:
probing: pc0 com0 com1 apm mem[636k 190M a20=on]
disk: fd0 hd0+
>> OpenBSD/i386 BOOT 3.21
boot>

It gives the user (by default) five seconds to start giving it other tasks, but if none are given before the timeout, it starts its default behavior: loading the kernel, bsd, from the root partition of the first hard drive. The second-stage boot loader probes (examines) your system hardware through the BIOS (as the OpenBSD kernel is not loaded). Above, you can see a few of the things it looked for and found: the console (pc0), the serial ports (com0, com1), APM, memory, and the available disks (fd0, hd0).
Putting it all together, a typical boot looks something like this:

Using drive 0, partition 3.                         <- MBR
Loading....                                         <- PBR
probing: pc0 com0 com1 apm mem[636k 190M a20=on]    <- /boot
disk: fd0 hd0+
>> OpenBSD/i386 BOOT 3.21
boot>
booting hd0a:/bsd 4464500+838332 [58+204240+181750]=0x56cfd0
entry point at 0x100120

[ using 386464 bytes of bsd ELF symbol table ]
Copyright (c) 1982, 1986, 1989, 1991, 1993          <- Kernel
        The Regents of the University of California.  All rights reserved.
Copyright (c) 1995-2013 OpenBSD. All rights reserved.  http://www.OpenBSD.org

OpenBSD 5.4 (GENERIC) #37: Tue Jul 30 12:05:01 MDT 2013
    deraadt@i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
...
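If you want boot(8) to do something other than the default, /etc/boot.conf simply contains boot(8) commands, one per line. As a sketch (the values here are only examples), a boot.conf that redirects the console to the first serial port and shortens the timeout might look like:

stty com0 19200
set tty com0
set timeout 2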
You may install the OpenBSD MBR on your hard disk using the fdisk program. Boot from your install media, choose "Shell" to get a command prompt:
# fdisk -u wd0

You may also install a specific MBR to disk using fdisk:
# fdisk -u -f /usr/mdec/mbr wd0

which will install the file /usr/mdec/mbr as your system's MBR. This particular file on a standard OpenBSD install happens to be the standard MBR that is also built into fdisk, but any other MBR could be specified here.
For more information on the i386 boot process, see the biosboot(8), boot(8) and installboot(8) man pages.
OpenBSD supports both FFS and FFS2 (also known as UFS and UFS2) file systems. FFS is the historic OpenBSD file system; FFS2 is new as of 4.3. Before looking at the limits of each, we need to look at some more general system limits.
Of course, the abilities of the file system and the abilities of particular hardware are two different things. A newer 250GB IDE hard disk may have issues on older interfaces that predate the >137GB standards (though for the most part, they work just fine), some very old SCSI adapters have been seen to have problems with more modern drives, and some older BIOSes will hang when they encounter a modern-sized hard disk. You must respect the abilities of your hardware and boot code, of course.
For this reason, the entire /bsd file (the kernel) must be located on the disk within the boot ROM addressable area. This means that on some older i386 systems, the root partition must be completely within the first 504M, but newer computers may have limits of 2G, 8G, 32G, 128G or more. It is worth noting that many relatively new computers which support larger than 128G drives actually have BIOS limitations of booting only from within the first 128G. You can use these systems with large drives, but your root partition must be within the space supported by the boot ROM.
Note that it is possible to install a 40G drive on an old 486 and load OpenBSD on it as one huge partition, and think you have successfully violated the above rule. However, it might come back to haunt you in a most unpleasant way: for example, the first time you copy a new kernel into place and reboot, the system may simply hang at the boot loader.
Why? Because when you copied "over" the new /bsd file, it didn't overwrite the old one, it got relocated to a new location on the disk, probably outside the 504M range the BIOS supported. The boot loader was unable to fetch the file /bsd, and the system hung.
To get OpenBSD to boot, the boot loaders (biosboot(8) and /boot in the case of i386/amd64) and the kernel (/bsd) must be within the boot ROM's supported range, and within their own abilities. To play it safe, the rule is simple:
The entire root partition must be within the computer's BIOS (or boot ROM) addressable space.
Some non-i386 users think they are immune to this, however most platforms have some kind of boot ROM limitation on disk size. Finding out for sure what the limit is, however, can be difficult.
This is another good reason to partition your hard disk, rather than using one large partition.
The time required to fsck the drive may become a problem as the file system size expands, but you only have to fsck the disk space that is actually allocated to mounted filesystems. This is another reason NOT to allocate all your disk space Just Because It Is There. Keeping file systems mounted RO or not mounted helps keep them from needing to be fsck(8)ed after tripping over the power cord. Reducing the number of inodes (using the -i option of newfs) can also improve fsck time -- assuming you really don't need them.
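For example, on a large partition intended to hold a small number of big files, you might create the file system with fewer inodes using something like the following; the partition name and the bytes-per-inode value are only illustrative, so check newfs(8) before picking your own:

# newfs -i 65536 /dev/rsd0k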
Don't forget that if you have multiple disks on the system, they could all end up being fsck(8)ed after a crash at the same time, so they could require more RAM than a single disk.
The boot/installation kernels only support FFS, not FFS2, so key system partitions (/, /usr, /var, /tmp) should not be FFS2, or severe maintenance problems can arise (there should be no reason for those partitions to be that large, anyway). For this reason, very large partitions should only be used for "non-system" partitions, for example, /home, /var/www/, /bigarray, etc.
Note that not all controllers and drivers support large disks. For example, ami(4) has a limit of 2TB per logical volume. Always be aware of what was available when a controller or interface was manufactured, and don't just rely on "the connectors fit".
To use a larger than 2TB disk, create an OpenBSD partition on the disk using fdisk, whatever size fdisk will let you. When you label the disk with disklabel(8), use the "b" option to set the OpenBSD boundaries (which defaulted to the size of the OpenBSD fdisk partition) to cover the entire disk. Now you can create your partitions as you wish. You must still respect the abilities of your BIOS, which will have the limitation of only understanding fdisk partitions, so your 'a' partition should be entirely within the fdisk-managed part of the disk, in addition to any BIOS limitations.
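As a sketch, the boundary change looks something like this in the disklabel editor (the exact prompts vary a little between releases):

# disklabel -E sd1
Label editor (enter '?' for help at any prompt)
> b
Starting sector: [64] ENTER
Size ('*' for entire disk): [...] *
(now create your partitions with 'a' as usual, then 'q' to save)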
OpenBSD has a very robust boot loader that is quite indifferent to drive geometries, however, it is sensitive to where the file /boot resides on the disk. If you do something that causes boot(8) to be moved to a new place on the disk (actually, a new inode), you will "break" your system, preventing it from booting properly. To fix your boot block so that you can boot normally, just put a boot CDROM in your drive (or use a boot floppy) and at the boot prompt, type "boot hd0a:/bsd" to force it to boot from the first hard disk (and not the CD or floppy). Your machine should come up normally. You now need to reinstall the first-stage boot loader (biosboot(8)) based on the position of the /boot file, using the installboot(8) program.
Our example will assume your boot disk is sd0 (but for IDE it would be wd0, etc.):
Note that "/boot" is the physical location of the file "boot" you wish to use when the system boots normally as the system is currently mounted. If your situation were a little different and you had booted from the CD and mounted your 'a' partition on /mnt, this would probably be "/mnt/boot" instead. installboot(8) does two things here -- it installs the file "biosboot" to where it needs to be in the Partition Boot Record, and modifies it with the physical location of the "/boot" file.# cd /usr/mdec; ./installboot /boot biosboot sd0
If you plan on running what might be called a production server, it is advisable to have some form of backup in the event one of your fixed disk drives fails, or the data is otherwise lost.
This information will assist you in using the standard dump(8)/restore(8) utilities provided with OpenBSD. More advanced backup utilities, such as "Amanda" and "Bacula" are available through packages for backing up multiple servers to disk and tape.
Backing up to tape requires knowledge of where your file systems are mounted. You can determine how your filesystems are mounted using the mount(8) command at your shell prompt. You should get output similar to this:
# mount
/dev/sd0a on / type ffs (local)
/dev/sd0h on /usr type ffs (local)
In this example, the root (/) filesystem resides physically on sd0a which indicates a SCSI-like fixed disk 0, partition a. The /usr filesystem resides on sd0h, which indicates SCSI-like fixed disk 0, partition h.
Another example of a more advanced mount table might be:
# mount
/dev/sd0a on / type ffs (local)
/dev/sd0d on /var type ffs (local)
/dev/sd0e on /home type ffs (local)
/dev/sd0h on /usr type ffs (local)
In this more advanced example, the root (/) filesystem resides physically on sd0a. The /var filesystem resides on sd0d, the /home filesystem on sd0e and finally /usr on sd0h.
To backup your machine you will need to feed dump the name of each fixed disk partition. Here is an example of the commands needed to backup the simpler mount table listed above:
# /sbin/dump -0au -f /dev/nrst0 /dev/rsd0a
# /sbin/dump -0au -f /dev/nrst0 /dev/rsd0h
# mt -f /dev/rst0 rewind
For the more advanced mount table example, you would use something similar to:
# /sbin/dump -0au -f /dev/nrst0 /dev/rsd0a
# /sbin/dump -0au -f /dev/nrst0 /dev/rsd0d
# /sbin/dump -0au -f /dev/nrst0 /dev/rsd0e
# /sbin/dump -0au -f /dev/nrst0 /dev/rsd0h
# mt -f /dev/rst0 rewind
You can review the dump(8) man page to learn exactly what each command line switch does. Here is a brief description of the parameters used above:
-0              Do a full (level 0) backup
-a              "auto-size"; write until the end of media is reached rather than calculating tape length
-u              Update /etc/dumpdates after a successful dump
-f /dev/nrst0   Write the backup to the non-rewinding tape device
/dev/rsd0a ...  Finally, which partition to back up
The mt(1) command is used at the end to rewind the drive. Review the mt man page for more options (such as eject).
If you are unsure of your tape device name, use dmesg to locate it. An example tape drive entry in dmesg might appear similar to:
st0 at scsibus0 targ 5 lun 0: <ARCHIVE, Python 28388-XXX, 5.28>
You may have noticed that when backing up, the tape drive is accessed as device name "nrst0" instead of the "st0" name that is seen in dmesg. When you access st0 as nrst0 you are accessing the same physical tape drive, but telling the drive not to rewind at the end of the job and to access the device in raw mode. To back up multiple file systems to a single tape, be sure you use the non-rewind device; if you use a rewind device (rst0) to back up multiple file systems, you'll end up overwriting the prior filesystem with the next one dump tries to write to tape. You can find a more elaborate description of the various tape drive devices in the dump man page.
If you wanted to write a small script called "backup", it might look something like this:
echo " Starting Full Backup..." /sbin/dump -0au -f /dev/nrst0 /dev/rsd0a /sbin/dump -0au -f /dev/nrst0 /dev/rsd0d /sbin/dump -0au -f /dev/nrst0 /dev/rsd0e /sbin/dump -0au -f /dev/nrst0 /dev/rsd0h echo echo -n " Rewinding Drive, Please wait..." mt -f /dev/rst0 rewind echo "Done." echo
If scheduled nightly backups are desired, cron(8) could be used to launch your backup script automatically.
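For example, if the script above were saved as /root/backup (a path chosen here purely for illustration), a line like the following in root's crontab (edit it with "crontab -e" as root) would run it every night at 1am:

0 1 * * * /root/backup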
It will also be helpful to document (on a scrap of paper) how large each file system needs to be. You can use "df -h" to determine how much space each partition is currently using. This will be handy when the drive fails and you need to recreate your partition table on the new drive.
Restoring your data will also help reduce fragmentation. To ensure you get all files, the best way of backing up is rebooting your system in single user mode. File systems do not need to be mounted to be backed up. Don't forget to mount root (/) r/w after rebooting in single user mode or your dump will fail when trying to write out dumpdates. Enter "bsd -s" at the boot> prompt for single user mode.
After you've backed up your file systems for the first time, it would be a good idea to briefly test your tape and be sure the data on it is as you expect it should be.
You can use the following example to review a catalog of files on a dump tape:
# /sbin/restore -tvs 1 -f /dev/rst0
This will list the files that exist on the 1st partition of the dump tape. Following along from the above examples, 1 would be your root (/) file system.
To see what resides on the 2nd tape partition and send the output to a file, you would use a command similar to:
# /sbin/restore -tvs 2 -f /dev/rst0 > /home/me/list.txt
If you have a mount table like the simple one, 2 would be /usr, if yours is a more advanced mount table 2 might be /var or another fs. The sequence number matches the order in which the file systems are written to tape.
The example scenario listed below would be useful if your fixed drive has failed completely. In the event you want to restore a single file from tape, review the restore man page and pay attention to the interactive mode instructions.
If you have prepared properly, replacing a disk and restoring your data from tape can be a very quick process. The standard OpenBSD install/boot floppy already contains the required restore utility as well as the binaries required to partition and make your new drive bootable. In most cases, this floppy and your most recent dump tape is all you'll need to get back up and running.
After physically replacing the failed disk drive, the basic steps to restore your data are as follows:
Boot from the OpenBSD install/boot floppy. At the menu selection, choose Shell. Write protect and insert your most recent back up tape into the drive.
Using the fdisk(8) command, create a primary OpenBSD partition on this newly installed drive. Example:
# fdisk -e sd0
See fdisk FAQ for more info.
Using the disklabel command, recreate your OpenBSD partition table inside that primary OpenBSD partition you just created with fdisk. Example:
# disklabel -E sd0
(Don't forget swap, see disklabel FAQ for more info)
Use the newfs command to build a clean file system on each partition you created in the above step. Example:
# newfs /dev/rsd0a
# newfs /dev/rsd0h
Mount your newly prepared root (/) file system on /mnt. Example:
# mount /dev/sd0a /mnt
Change into that mounted root file system and start the restore process. Example:
# cd /mnt
# restore -rs 1 -f /dev/rst0
You'll want this new disk to be bootable, use the following to write a new MBR to your drive. Example:
# fdisk -i sd0
In addition to writing a new MBR to the drive, you will need to install boot blocks to boot from it. The following is a brief example:
# cp /usr/mdec/boot /mnt/boot
# /usr/mdec/installboot -v /mnt/boot /usr/mdec/biosboot sd0
Your new root file system on the fixed disk should be ready enough so you can boot it and continue restoring the rest of your file systems. Since your operating system is not complete yet, be sure you boot back up with single user mode. At the shell prompt, issue the following commands to unmount and halt the system:
# umount /mnt
# halt
Remove the install/boot floppy from the drive and reboot your system. At the OpenBSD boot> prompt, issue the following command:
boot> bsd -s
"bsd -s" causes the kernel to start in single user mode, which only requires a root (/) file system.
Assuming you performed the above steps correctly and nothing has gone wrong, you should end up at a prompt asking you for a shell path (or to press return for the default). Press return to use sh. Next, you'll want to remount root in read-write mode as opposed to read only. Issue the following command:
# mount -u -w /
Once you have re-mounted in r/w mode you can continue restoring your other file systems. Example:
(simple mount table)
# mount /dev/sd0h /usr; cd /usr; restore -rs 2 -f /dev/rst0

(more advanced mount table)
# mount /dev/sd0d /var;  cd /var;  restore -rs 2 -f /dev/rst0
# mount /dev/sd0e /home; cd /home; restore -rs 3 -f /dev/rst0
# mount /dev/sd0h /usr;  cd /usr;  restore -rs 4 -f /dev/rst0
You could use "restore rvsf" instead of just rsf to view names of objects as they are extracted from the dump set.
Finally after you finish restoring all your other file systems to disk, reboot into multiuser mode. If everything went as planned your system will be back to the state it was in as of your most recent back up tape and ready to use again.
To mount a disk image (ISO images, disk images created with dd, etc.) in OpenBSD you must configure a vnd(4) device. For example, if you have an ISO image located at /tmp/ISO.image, you would take the following steps to mount the image.
# vnconfig vnd0 /tmp/ISO.image
# mount -t cd9660 /dev/vnd0c /mnt
Notice that since this is an ISO-9660 image, as used by CDs and DVDs, you must specify a type of cd9660 when mounting it. The same applies to other image types; for example, you must use type ext2fs when mounting a Linux ext2 disk image.
To unmount the image use the following commands.
# umount /mnt
# vnconfig -u vnd0
For more information, refer to the vnconfig(8) man page.
DMA IDE transfers, supported by pciide(4), are unreliable with many combinations of older hardware.
OpenBSD is aggressive and attempts to use the highest DMA Mode it can configure. This will cause corruption of data transfers in some configurations because of buggy motherboard chipsets, buggy drives, and/or noise on the cables. Luckily, Ultra-DMA modes protect data transfers with a CRC to detect corruption. When the Ultra-DMA CRC fails, OpenBSD will print an error message and try the operation again.
wd2a: aborted command, interface CRC error reading fsbn 64 of 64-79 (wd2 bn 127; cn 0 tn 2 sn 1), retrying
After failing a couple times, OpenBSD will downgrade to a slower (hopefully more reliable) Ultra-DMA mode. If Ultra-DMA mode 0 is hit, then the drive downgrades to PIO mode.
UDMA errors are often caused by low quality or damaged cables. Cable problems should usually be the first suspect if you get many DMA errors or unexpectedly low DMA performance. It is also a bad idea to put the CD-ROM on the same channel with a hard disk.
If replacing cables does not resolve the problem and OpenBSD does not successfully downgrade, or the process causes your machine to lock hard, or causes excessive messages on the console and in the logs, you may wish to force the system to use a lower level of DMA or UDMA by default. This can be done by using UKC or config(8) to change the flags on the wd(4) device.
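As a rough sketch, one way to do this is to edit a copy of the kernel with config(8) and boot that copy; the actual flags value depends on which DMA/UDMA mode you want to force, so look it up in wd(4) rather than trusting the placeholder below. The same change can be made temporarily at the boot> prompt with "boot -c" (UKC).

# config -e -o /bsd.new /bsd
ukc> change wd*
  (answer the prompts, entering the flags value chosen from wd(4))
ukc> quit

Then arrange to boot /bsd.new (or replace /bsd with it) and reboot.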
When a filesystem is created with newfs(8), some of the available space is held in reserve from normal users. This provides a margin of error when you accidentally fill the disk, and helps keep disk fragmentation to a minimum. The default is 5% of the filesystem's capacity, so if the root user has been carelessly filling the disk, you may see up to 105% of the available capacity in use.
If the 5% value is not appropriate for you, you can change it with the tunefs(8) command.
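For example, to reduce the reserved space on a filesystem to 2%, something like the following should work; run tunefs(8) on a filesystem that is unmounted or mounted read-only:

# tunefs -m 2 /dev/rsd0h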
If you have a damaged partition table, there are various things you can attempt to do to recover it.
Firstly, panic. You usually do so anyways, so you might as well get it over with. Just don't do anything stupid. Panic away from your machine. Then relax, and see if the steps below won't help you out.
A copy of the disklabel for each disk is saved in /var/backups as part of the daily system maintenance. Assuming you still have the var partition, you can simply read the output, and put it back into disklabel.
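For example, the daily backup of sd0's label typically ends up in /var/backups/disklabel.sd0.current and can be written back with disklabel's restore option; verify the exact filename on your system first:

# disklabel -R sd0 /var/backups/disklabel.sd0.current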
In the event that you can no longer see that partition, there are two options: fix enough of the disk so you can see the label again, or fix enough of the disk so that you can get your data off. Depending on what happened, one or the other may be preferable (with dying disks you want the data first; with sloppy fingers you just need the label back).
The first tool you need is scan_ffs(8) (note the underscore; it isn't called "scanffs"). scan_ffs(8) will look through a disk, try to find partitions, and tell you what information it finds about them. You can use this information to recreate the disklabel. If you just want /var back, you can recreate the partition for /var, then recover the backed-up label from it and add the rest of the partitions from that.
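A minimal invocation, assuming the damaged disk is wd0, would look something like this; the -l flag asks scan_ffs(8) for output in a disklabel-like format that can be fed back into disklabel(8):

# scan_ffs -l /dev/rwd0c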
disklabel(8) updates the kernel's in-memory copy of the disklabel and then attempts to write the label to disk. Therefore, even if the area of the disk containing the disklabel is unreadable, you will be able to mount(8) the partitions until the next reboot.
We will give a general overview on how to use one of these filesystems under OpenBSD. To be able to use a filesystem, it must be mounted. For details and mount options, please consult the mount(8) manual page, and that of the mount command for the filesystem you will be mounting, e.g. mount_msdos, mount_ext2fs, ...
First, you must know on which device your filesystem is located. This can be simply your first hard disk, wd0 or sd0, but it may be less obvious. All recognized and configured devices on your system are mentioned in the output of the dmesg(1) command: a device name, followed by a one-line description of the device. For example, my first CD-ROM drive is recognized as follows:
cd0 at scsibus0 targ 0 lun 0: <COMPAQ, DVD-ROM LTD163, GQH3> SCSI0 5/cdrom removable
For a much shorter list of available disks, you can use sysctl(8). The command
# sysctl hw.disknames

will show all disks currently known to your system, for example:
hw.disknames=cd0:,cd1:,wd0:,fd0:,cd2:
At this point, it is time to find out which partitions are on the device, and in which partition the desired filesystem resides. Therefore, we examine the device using disklabel(8). The disklabel contains a list of partitions, with a maximum number of 16. Partition c always indicates the entire device. Partitions a-b and d-p are used by OpenBSD. Partitions i-p may be automatically allocated to file systems of other operating systems. In this case, I'll be viewing the disklabel of my hard disk, which contains a number of different filesystems.
NOTE: OpenBSD was installed after the other operating systems on this system, and during the install a disklabel containing partitions for the native as well as the foreign filesystems was installed on the disk. However, if you install foreign filesystems after the OpenBSD disklabel was already installed on the disk, you need to add or modify them manually afterwards. This will be explained in this subsection.
# disklabel wd0
# using MBR partition 2: type A6 off 20338290 (0x1365672) size 29318625 (0x1bf5de1)
# /dev/rwd0c:
type: ESDI
disk: ESDI/IDE disk
label: ST340016A
duid: d920a43a5a56ad5f
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 16383
total sectors: 78165360
boundstart: 20338290
boundend: 49656915
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize  cpg]
  a:           408366         20338290  4.2BSD   2048 16384   16 # /
  b:          1638000         20746656    swap
  c:         78165360                0  unused
  d:          4194288         22384656  4.2BSD   2048 16384   16 # /usr
  e:           409248         26578944  4.2BSD   2048 16384   16 # /tmp
  f:         10486224         26988192  4.2BSD   2048 16384   16 # /var
  g:         12182499         37474416  4.2BSD   2048 16384   16 # /home
  i:            64197               63 unknown
  j:         20274030            64260 unknown
  k:          1975932         49656978   MSDOS
  l:          3919797         51632973 unknown
  m:          2939832         55552833  ext2fs
  n:          5879727         58492728  ext2fs
  o:         13783707         64372518  ext2fs
As can be seen in the above output, the OpenBSD partitions are listed first. Next to them are a number of ext2 partitions and one MSDOS partition, as well as a few 'unknown' partitions. On i386 and amd64 systems, you can usually find out more about those using the fdisk(8) utility. For the curious reader: partition i is a maintenance partition created by the vendor, partition j is an NTFS partition and partition l is a Linux swap partition.
Once you have determined which partition it is you want to use, you can move to the final step: mounting the filesystem contained in it. Most filesystems are supported in the GENERIC kernel: just have a look at the kernel configuration file, located in the /usr/src/sys/arch/<arch>/conf directory. If you want to use one of the filesystems not supported in GENERIC, you will need to build a custom kernel.
When you have gathered the information needed as mentioned above, it is time to mount the filesystem. Let's assume a directory /mnt/otherfs exists, which we will use as a mount point where we will mount the desired filesystem. In this example, we will mount the ext2 filesystem in partition m:
# mount -t ext2fs /dev/wd0m /mnt/otherfs
If you plan to use this filesystem regularly, you may save yourself some time by inserting a line for it in /etc/fstab, for example something like:
/dev/wd0m /mnt/otherfs ext2fs rw,noauto,nodev,nosuid 0 0

Notice the 0 values in the fifth and sixth field. This means we do not require the filesystem to be dumped, and checked using fsck. Generally, those are things you want to have handled by the native operating system associated with the filesystem.
As an example, I have modified one of my existing ext2 partitions: using Linux's fdisk program, I've reduced the size of the 'o' partition (see disklabel output above) to 1G. We will be able to recognize it easily by its starting position (offset: 64372518) and size (13783707). Note that these values are sector numbers, and that using sector numbers (not megabytes or any other measure) is the most exact and safest way of reading this information.
Before the change, the partition looked like this using OpenBSD's fdisk(8) utility (leaving only relevant output):
# fdisk wd0
. . .
Offset: 64372455        Signature: 0xAA55
         Starting         Ending         LBA Info:
 #: id    C   H  S -      C   H  S [       start:        size ]
------------------------------------------------------------------------
 0: 83 4007   1  1 -   4864 254 63 [    64372518:    13783707 ] Linux files*
. . .

As you can see, the starting position and size are exactly those reported by disklabel(8) earlier. (Don't be confused by the value indicated by "Offset": it is referring to the starting position of the extended partition in which the ext2 partition is contained.)
After changing the partition's size from Linux, it looks like this:
# fdisk wd0
. . .
Offset: 64372455        Signature: 0xAA55
         Starting         Ending         LBA Info:
 #: id    C   H  S -      C   H  S [       start:        size ]
------------------------------------------------------------------------
 0: 83 4007   1  1 -   4137 254 63 [    64372518:     2104452 ] Linux files*
. . .

Now this needs to be changed using disklabel(8). For instance, you can issue disklabel -e wd0, which will invoke an editor specified by the EDITOR environment variable (default is vi). Within the editor, change the last line of the disklabel to match the new size:
  o:          2104452         64372518  ext2fs

Save the disklabel to disk when finished. Now that the disklabel is up to date again, you should be able to mount your partitions as described above.
You can follow a very similar procedure to add new partitions.
umass0 at uhub1 port 1 configuration 1 interface 0
umass0: LEXR PLUG DRIVE LEXR PLUG DRIVE, rev 1.10/0.01, addr 2
umass0: using SCSI over Bulk-Only
scsibus2 at umass0: 2 targets
sd0 at scsibus2 targ 1 lun 0: <LEXAR, DIGITAL FILM, /W1.> SCSI2 0/direct removable
sd0: 123MB, 512 bytes/sec, 251904 sec total

These lines indicate that the umass(4) (USB mass storage) driver has been attached to the memory device, and that it is using the SCSI system. The last two lines are the most important ones: they are saying to which device node the memory device has been attached, and what the total amount of storage space is. If you somehow missed these lines, you can still see them afterwards with the dmesg(1) command. The reported CHS geometry is a rather fictitious one, as the flash memory is being treated like any regular SCSI disk.
We will discuss two scenarios below.
In this example I created just one partition a in which I will place a FFS filesystem:
# newfs sd0a
Warning: inode blocks/cyl group (125) >= data blocks (62) in last cylinder group.
This implies 1984 sector(s) cannot be allocated.
/dev/rsd0a:     249856 sectors in 122 cylinders of 64 tracks, 32 sectors
        122.0MB in 1 cyl groups (122 c/g, 122.00MB/g, 15488 i/g)
super-block backups (for fsck -b #) at:
 32,

Let's mount the filesystem we created in the a partition on /mnt/flashmem. Create the mount point first if it does not exist.
# mkdir /mnt/flashmem
# mount /dev/sd0a /mnt/flashmem
There is a considerable chance the other person is not using OpenBSD, so there may be a foreign filesystem on the memory device. Therefore, we will first need to find out which partitions are on the device, as described in FAQ 14 - Foreign Filesystems.
# disklabel sd0
# /dev/rsd0c:
type: SCSI
disk: SCSI disk
label: DIGITAL FILM
flags:
bytes/sector: 512
sectors/track: 32
tracks/cylinder: 64
sectors/cylinder: 2048
cylinders: 123
total sectors: 251904
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0

16 partitions:
#        size   offset    fstype [fsize bsize  cpg]
  c:   251904        0    unused      0     0        # Cyl     0 -   122
  i:   250592       32     MSDOS                     # Cyl     0*-   122*

As can be seen in the disklabel output above, there is only one partition i, containing a FAT filesystem created on a Windows machine. As usual, the c partition indicates the entire device.
Let's now mount the filesystem in the i partition on /mnt/flashmem.
# mount -t msdos /dev/sd0i /mnt/flashmem

Now you can start using it just like any other disk.
WARNING: You should always unmount the filesystem before unplugging the memory device. If you don't, the filesystem may be left in an inconsistent state, which may result in data corruption.
Upon detaching the memory device from your machine, you will again see the kernel write messages about this to the console:
umass0: at uhub1 port 1 (addr 2) disconnected
sd0 detached
scsibus2 detached
umass0 detached
A flash device attached to a USB port will show up as a sd(4) SCSI-like device. When attached to an IDE adapter, it will show up as a wd(4) device.
In the case of flash media in an IDE adapter, it can be booted from any system that could boot from an IDE hard disk on the same adapter. In every sense, the system sees the flash media as an IDE disk. Simply configure the hardware appropriately, then install OpenBSD to the flash disk as normal.
In the case of booting from a USB device, your system must be able to boot from the USB device without being distracted by other devices on the system. Note that if your intention is to make a portable boot environment on a USB device, you really want to use DUIDs, rather than the traditional "/dev/sd0X" notation. The USB device will show up as a SCSI disk, sometimes sd0. Without DUIDs, if you plug this device into a system which already has a few SCSI-like disks (i.e., devices attached to an ahci(4) interface) on it, it will probably end up with a different identifier, which will complicate carrying the flash device from system to system, as you would have to update /etc/fstab. Using DUIDs completely resolves this issue.
Some notes:
Some reasons you may want to do this:
There are some things you may want to do after the install to improve your results:
Disk performance is a significant factor in the overall speed of your computer. It becomes increasingly important when your computer is hosting a multi-user environment (users of all kinds, from those who log-in interactively to those who see you as a file-server or a web-server). Data storage constantly needs attention, especially when your partitions run out of space or when your disks fail. OpenBSD has a few options to increase the speed of your disk operations.
Question: "I simply do "mount -u -o async /" which makes one package I use (which insists on touching a few hundred things from time to time) usable. Why is async mounting frowned upon and not on by default (as it is in some other unixen)? Isn't it a much simpler, and therefore, a safer way of improving performance in some applications?"
Answer: "Async mounts are indeed faster than sync mounts, but they are also less safe. What happens in case of a power failure? Or a hardware problem? The quest for speed should not sacrifice the reliability and the stability of the system. Check the man page for mount(8)."
     async   All I/O to the file system should be done asynchronously.
             This is a dangerous flag to set since it does not
             guarantee to keep a consistent file system structure on
             the disk.  You should not use this flag unless you are
             prepared to recreate the file system should your system
             crash.  The most common use of this flag is to speed up
             restore(8) where it can give a factor of two speed
             increase.
On the other hand, when you are dealing with temp data that you can recreate from scratch after a crash, you can gain speed by using a separate partition for that data only, mounted async. Again, do this only if you don't mind the loss of all the data in the partition when something goes wrong. For this reason, mfs(8) partitions are mounted asynchronously, as they will get wiped and recreated on a reboot anyway.
The daily(8) maintenance script can automatically duplicate your root partition to a standby "altroot" partition. To enable this, define the altroot partition in /etc/fstab with the "xx" mount option:

/dev/wd1a /altroot ffs xx 0 0

and set the appropriate environment variable in /etc/daily.local:
# echo ROOTBACKUP=1 >>/etc/daily.local

As the altroot process will capture your /etc directory, this will make sure any configuration changes there are updated daily.
This is a "disk image" copy done with dd(8), not a file-by-file copy, so your /altroot partition should be exactly the same size as your root partition or larger. Also, excessively large root partitions should be avoided so the process does not take too long.
For full redundancy, the rest of the partitions should be duplicated as well, using a softraid(4) mirror, dump(8)/restore(8), rsync, etc. This can be done manually, or as part of a regular schedule, such as from the weekly.local, daily.local or monthly.local scripts; a sketch of a dump/restore copy follows.
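As a hedged sketch, copying a file system such as /usr onto an already-newfs'ed duplicate mounted on the hypothetical /altusr could be done by piping dump into restore:

# cd /altusr && dump -0f - /usr | restore -rf -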
Generally, you will want your "altroot" partition to be on a different disk that has been configured to be fully bootable should the primary disk fail. It is possible to have an "altroot" on the same disk as your boot drive, but the benefit of this is limited.
Note that we did not specify the altroot device by DUID, but by device name. We probably want to be pushing from the boot device to the secondary device, which can end up changing if the drive order is changed. For this reason, you may want to specify the root and altroot in /etc/fstab as a device name, not a DUID.
This virtual disk is treated as any other disk -- first partitioned with fdisk(8) (on fdisk platforms) and then with disklabel(8), partitions have file systems made, mounted, then used.
The tools to assemble your softraid system are in the basic OpenBSD install (for adding softraid devices after install), but they are also available on the CD-ROM and bsd.rd installation kernels. They do not exist on the floppies due to space issues; one simple work-around is to do a very minimal OpenBSD install from floppy, then boot from bsd.rd on your installed system and re-build as desired.
The installation process will be a little different than the standard OpenBSD install, as you will want to drop to the shell and create your softraid(4) drive before doing the install. Once the softraid(4) disk is created, you will perform the install relatively normally, placing the partitions you wish to be RAIDed on the newly configured drive.
You can pre-create just the RAID partitions and assemble them into a softraid(4) volume and let the installer do the rest, but it is probably easier to also manually create your root and swap partitions before invoking the installer.
This does mean you will have to carefully set up the disk before invoking the installer, making sure you manually do a few steps that the installer normally takes you through.
The install kernel only has the /dev entries for one wd(4) device and one sd(4) device on boot, so you will need to create more disk devices to set up your softraid device. This process is normally done automatically by the installer, but you haven't yet run the installer, and you will be adding a disk that didn't exist at boot. For example, if we needed to support a second and third wd(4) device and a second sd(4) device (remember, the softraid devices will be sd(4) devices), you could do the following from the shell prompt:
# cd /dev
# sh MAKEDEV wd1 wd2 sd1

You now have full support for sd0, sd1, wd0, wd1 and wd2.
You will need to properly fdisk(8) the physical drives (if appropriate for your platform -- make sure you set up the second disk so it is bootable!) and then use disklabel(8) to set up the partitions.
The fdisk(8) steps below will put an MBR on the disk and an OpenBSD partition on the disk. If you wish to use the entire disk for OpenBSD (i.e., have NOTHING else on the disk), you can do this with a simple one-liner for each drive:
# fdisk -iy wd0
# fdisk -iy wd1

(Do be sure you understand what those lines do to any data that was on your disk before using it blindly!) Otherwise, you will need to create an OpenBSD partition within the new disks.
# disklabel -E wd0
Label editor (enter '?' for help at any prompt)
> a a
offset: [64] ENTER
size: [30282461] 500m
Rounding to cylinder (16065 sectors): 1028096
FS type: [4.2BSD] ENTER
> a b
offset: [1028160] ENTER
size: [29254365] 500m
Rounding to cylinder (16065 sectors): 1028160
FS type: [swap] ENTER
> a m
offset: [3148740] ENTER
size: [28226205] 10g
Rounding to cylinder (16065 sectors): 20980890
FS type: [4.2BSD] RAID
> q
Write new label?: [y] ENTER
Now we need to prep the second disk to match key parts of the first disk's layout. Since we are using the /altroot system, we will want an 'a' partition on the secondary disk the same size as the primary's 'a'. We want the system to run off the second drive as it would the first, so we will also want a similarly sized swap partition (though a little bigger or smaller will not hurt). We will also want a RAID partition the same size as the primary's. If the RAID partitions are not the same size, the smaller of the two will dictate the final RAID volume size.
In short...you really want to just repeat the above allocation process on the second drive, wd1.
Note that since softraid(4) has to look around a bit to find evidence of arrays it needs to assemble, if your disk has been used for softraid previously, you may find it very helpful to use dd(1) to clear the first megabyte or so from each partition before going any further:
# dd if=/dev/zero of=/dev/rwd0m bs=1m count=1
...
# dd if=/dev/zero of=/dev/rwd1m bs=1m count=1
...
We now create our new softraid(4) disk using bioctl(8):
# bioctl -c 1 -l /dev/wd0m,/dev/wd1m softraid0

This creates a RAID1 volume ("-c 1"), using the listed partitions ("-l /dev/wd0m,/dev/wd1m"), using the softraid0 driver. If there are no other sd(4) devices on this system, this will become sd0. Note that if you are creating multiple RAID devices, either on one disk or on multiple devices, you are always going to be using the softraid0 virtual disk interface driver; you won't be using "softraid1" or others. Remember, "softraid0" is a virtual RAID controller, and you can hang many virtual disks off this one controller.
This will create a new disk, "sd0" (assuming there are no other sd(4) devices on your system). This device will now show on the system console and dmesg as a newly installed device:
scsibus1 at softraid0: 1 targets
sd0 at scsibus2 targ 0 lun 0: <OPENBSD, SR RAID 1, 005> SCSI2 0/direct fixed
sd0: 10244MB, 512 bytes/sec, 20980362 sec total

showing that we now have a new SCSI bus, and a new disk. This volume will be automatically detected and assembled from this point onwards when the system boots.
Because the new device probably has a lot of garbage where you expect a MBR and disklabel, zeroing the first chunk of the new disk is highly recommended, if you didn't zero the component parts above:
# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1

You are now ready to install OpenBSD on your system. Perform the install as normal by invoking "install" at the boot media command prompt. Be careful to select "custom" layout for disklabel when prompted, otherwise your RAID partition will be overwritten! Use the 'n' option of disklabel to define the mount point for your root partition, and create all the partitions on your new softraid disk (sd0 in our example here) that should be there, rather than on your non-RAID disks.
Now you can reboot your system, and if you have done everything properly, it will automatically assemble your RAID set and mount the appropriate partitions.
You may not want to specify the root device by DUID.
Keep in mind, failures are often not simple. The author of this article had a drive in a hardware RAID solution develop a short across the power feed, which, in addition to the drive itself, also required replacing the power supply, the RAID enclosure, and a power supply on a second computer he used to verify the drive was actually dead; he also had to restore the data from backup, as he didn't properly configure the replacement enclosure.
The steps needed for system recovery can be performed in single user mode, or from the install kernel (bsd.rd).
If you plan on practicing softraid recovery (and we HIGHLY suggest you do so!), you may find it helpful to zero a drive you remove from the array before you attempt to return it to the array. Not only does this more accurately simulate replacing the drive with a new one, it will avoid the confusion that can result when the system detects the remains of a softraid array.
Recovery from a failure will often be a two-stage event -- the first stage is bringing the system back up to a running state, the second stage is to rebuild the failed array. The two stages may be separated by some time if you don't have a replacement drive handy.
When you are ready to repair the system, you will replace the failed drive, create the RAID and other disklabel partitions, then rebuild the mirror. Assuming your RAID volume is sd0, and you are replacing the failed device with wd1m, the following process should work:
# bioctl -R /dev/wd1m sd0
In general, if your primary drive fails, you will have to remove it, and in many cases "promote" your secondary drive to primary configuration before the system will boot. This may involve re-jumpering the disk, plugging the disk into another port or some other variation. Of course, what is on the secondary disk has to not only include your RAID partition, but also has to be functionally bootable.
Once you have the system back up on the secondary disk and a new disk in place, you rebuild as above.
Fortunately, softraid handles this very well; it considers the disks "roaming" and will successfully rebuild your arrays. However, the boot disk in the machine has to be bootable, and if you just made changes in the root partition before doing this, you probably want to be sure you didn't boot from your altroot partition by mistake.
softraid(4) can also provide encrypted volumes using the CRYPTO discipline. To create an encrypted volume on a RAID-type partition (here /dev/sd1m), use bioctl(8) and supply a passphrase:

# bioctl -c C -l /dev/sd1m softraid0
Passphrase: My Crypto Pass Phrase
softraid0: CRYPTO volume attached as sd1

You can then mount the encrypted volume's partitions using mount as usual.
To disconnect a crypto volume (rendering it unusable again), dismount any file systems and use the following (where the encrypted volume is sd1):
# bioctl -d sd1

The man page for this looks a little scary, as the -d command is described as "deleting" the volume, but in the case of crypto, it just deactivates the encrypted volume so it can't be accessed until it is activated again with the passphrase.
Many other options are available with softraid, and new features are being added and improvements made, so do consult the man pages for bioctl(8) and softraid(4) on your system.