

3 - Special Porting Topics


Table of Contents

  3.1 - Shared Libraries
  3.2 - GNU autoconf
  3.3 - Configuration Files
  3.4 - Audio Applications
  3.5 - Manual pages
  3.6 - rc.d(8) scripts

3.1 - Shared Libraries

Understanding shared library numbering rules

Shared libraries are a bit tricky for a variety of reasons. You must understand the library naming scheme: libfoo.so.major.minor.

When you link a program, the linker ld embeds that information in the created binary. You can see it with ldd. Later, when you run that program, the dynamic linker ld.so uses that information to find the right dynamic library.
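
For example, assuming a hypothetical binary /usr/local/bin/foo, the recorded library information can be inspected like this:

$ ldd /usr/local/bin/foo
$ objdump -x /usr/local/bin/foo | grep NEEDED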

This means that any library with the same major number and an equal or higher minor number must satisfy the binary API the program expects. If it does not, then your port is broken. Specifically, it will break when users try to update their system.

The rules for shared libraries are quite simple: if a new release only adds interfaces, bump the minor number; if it removes or changes existing interfaces, bump the major number and reset the minor to zero.

Sometimes a library is written as several files, and internal functions are visible so that those files can communicate. Such function names traditionally begin with an underscore and are not part of the API proper.

Tweaking port builds to achieve the right names

Quite a few ports need tweaks to build shared libraries correctly anyway. Remember that building shared libraries should be done with
$ gcc -shared -fpic|-fPIC -o libfoo.so.4.5 obj1 obj2

Trying to rename the library after the fact to adjust the version number does not work: ELF libraries use some extra magic to set the library internal name, so you must link it with the correct version the first time.

On the other hand, remember that you can override Makefile variables from the command line, by using MAKE_FLAGS in the port's Makefile. In some cases, the program you're porting will have a simple variable which you can override by setting the library version in MAKE_FLAGS, for example MAKE_FLAGS= SO_VERSION=${LIBfoo_VERSION}. In others, the port will need to be patched to make use of such a variable.
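
A minimal sketch of what this might look like in a port Makefile, assuming the upstream build honors a hypothetical SO_VERSION variable (SHARED_LIBS is what defines LIBfoo_VERSION):

# register the library and its version with the ports infrastructure
SHARED_LIBS +=	foo	4.5		# defines LIBfoo_VERSION=4.5
# pass the version down to the upstream Makefile
MAKE_FLAGS =	SO_VERSION=${LIBfoo_VERSION}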

The ports infrastructure already handles these details in libtool-based and CMake-based ports. For libtool, by default the version from the base OS is used, but in some cases this is insufficient and USE_LIBTOOL=gnu can be set. CMake is handled by using the cmake.port.mk module: MODULES += devel/cmake. In these cases, most details are handled automatically.

Avoid DT_SONAME hardcoding

Some ports use ld(1)'s -soname flag to override the library specification in the DT_SONAME field. Setting DT_SONAME is not a bug in itself, but it is usually not desirable on OpenBSD, where ld.so(1) is smart and the ports tree takes care of library versioning. Moreover, a wrong soname can result in unusable binaries that depend on this library, either always or after some updates to the port containing the library. To check if the DT_SONAME field is set, run the following command:

$ objdump -x /path/to/libfoo.so.0.0 | fgrep SONAME
  SONAME      libfoo.so.0.0
As a general rule, explicit soname settings should be patched out. The only exception is when the right soname is recorded, the soname-related code is hard to patch out, and upstream will not accept such a patch. In that case the soname must fully match the file name (see the example above).

Try putting all user-visible libraries into /usr/local/lib

As a rule, requesting the user to add directories to their ldconfig path is a very bad idea: all shared libraries that are linked directly to programs should appear in /usr/local/lib. However, it is quite possible to use a symbolic link to the actual library, provided you understand the library lookup rules. For example, assume you have two ports that provide two major versions of a given library, say qt.1.45 and qt.2.31. Since both ports can be installed simultaneously, to make sure a given program will link against qt.1, that library is provided as /usr/local/lib/qt/libqt.so.1.45, and programs will be linked using
$ ld -o program program.o -L/usr/local/lib/qt -lqt
Similarly, a program that links with qt.2 will use the /usr/local/lib/qt2/libqt.so.2.31 file with
$ ld -o program program.o -L/usr/local/lib/qt2 -lqt

To resolve those libraries at run time, a link called /usr/local/lib/libqt.so.1.45 and a link called /usr/local/lib/libqt.so.2.31 are provided. This is enough to satisfy ld.so.
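
Following the qt example above, those links amount to:

$ ln -s /usr/local/lib/qt/libqt.so.1.45 /usr/local/lib/libqt.so.1.45
$ ln -s /usr/local/lib/qt2/libqt.so.2.31 /usr/local/lib/libqt.so.2.31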

It is an error to link a program using qt1 with

$ ld -o program program.o -L/usr/local/lib -lqt
This command assumes that qt.2.31 is not installed, which is not a safe assumption.

Such tricks are only necessary in the rare cases of very pervasive libraries where a transition period between major versions must be provided. In general, it is enough to make sure the library appears in /usr/local/lib.

Writing library dependencies correctly

The dependency code needs complete library dependencies. Use make lib-depends-check or make port-lib-depends-check to verify that a port mentions all the libraries it requires. Write them in LIB_DEPENDS/WANTLIB like this:
LIB_DEPENDS += x11/gtk+
WANTLIB += gtk>=1.2 gdk>=1.2

It is not an error to specify static libraries on a WANTLIB line as well. WANTLIB lines are fully evaluated at package build time: the resulting package will have library dependency information embedded as lines for ld.so that record the actual major.minor number used for building, and nothing for static libraries.

You must provide RUN_DEPENDS as well if a port requires anything beyond a library proper. This will allow the port to build correctly on architectures that do not support shared libraries.

In fact, providing LIB_DEPENDS lines even for static libraries is a good idea: this will simplify port update if a given dependency goes from a static library to a shared library.

WANTLIB lines must specify the same paths that are used for ld. With the same example as above, a standard qt2 depends fragment would say WANTLIB += lib/qt2/qt.=2. This allows the dependency checking code to do the right thing when multiple versions of the same library are encountered.

Updating ports correctly

When you update or add a port that involves shared libraries, a few details must be handled correctly.

3.2 - GNU autoconf

autoconf is a GNU tool that is supposed to help in writing portable programs. It is often used together with automake (portable Makefiles) and libtool (portable shared libraries).

Those tools do not work all that well, and often create specific challenges in porting software to OpenBSD.

Detecting the use of autoconf in a piece of software

Quite a few software projects have configure scripts, and in most cases, those scripts were generated by autoconf. Such scripts have a line near the top that says:
# Generated automatically using autoconf version 2.13
or something similar. The generation procedure is covered in a following section. Most often, autoconf ports come with the generated scripts, and with the source files they were generated from. The next section covers the simple case where you simply want to run the generated script, and not modify it. Make sure you read the section about Trojan horses as well.

Running an autoconf configure script

This script is normally run during the configure stage of the port build. To invoke the configure script, one only has to set CONFIGURE_STYLE=gnu, which will automatically invoke ${WRKSRC}/configure.

If your configure script lies elsewhere, just set CONFIGURE_SCRIPT to the right value.

configure scripts often take a lot of arguments. The default processing of the ports tree will only pass --prefix and --sysconfdir to these. Very old configure scripts don't understand --sysconfdir; you can set CONFIGURE_STYLE=gnu old in such cases.
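
Additional options can be passed through CONFIGURE_ARGS. A sketch of a typical port Makefile fragment, where the specific options are only illustrations:

CONFIGURE_STYLE =	gnu
CONFIGURE_ARGS =	--disable-silent-rules \
			--without-x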

Similarly, some ports are not aware of DESTDIR. Those ports will often accept setting prefix=${DESTDIR}/usr/local without any issue, which can be done with CONFIGURE_STYLE=gnu dest.

Ports using autoconf and automake will have Makefiles in a specific format that begins with a few standard locations, such as prefix, exec_prefix, bindir, libdir, mandir and sysconfdir.

If the configure script does not allow you to override these, you may still be able to do it later on during the build or fake stage. This does assume, of course, that the only reference to such a directory is within the generated Makefile.

For instance, a neat trick involves switching sysconfdir to ${PREFIX}/share/example/pkgname during the fake stage, to get default config files to package (since packages don't normally store files under /etc).

Ports fully using autoconf and automake may support building under a different directory: try setting SEPARATE_BUILD=flavored and see if that works. This would allow you to wipe the build tree without wiping the source tree, by giving you separate ${WRKSRC} and ${WRKBUILD} locations. In a few cases, separate builds may need to use gmake, where the rest of the port is happy with bsd-make, in which case this is not worth it.

automake will generate a few rules to rebuild all the generated scripts if anything changes. These often get in the way of OpenBSD specific patches. For that reason, as soon as CONFIGURE_STYLE corresponds to autoconf use, post-patch will touch various files in a specific order, so that no automake dependencies get triggered later. The list of dependencies is given in tsort(1) order in a file mentioned in REORDER_DEPENDENCIES (the default is ${PORTSDIR}/infrastructure/mk/automake.dep).

The mechanics of configure checks

The configure script first runs a fixed script called config.guess, which determines the system configure is running on. config.guess does not vary from port to port, so the OpenBSD ports tree replaces it with its own version that knows about some specific OpenBSD architectures. Since most software packages come with a bundled config.guess, and since some of them are quite old, this is a necessary step. If a software package contains more than one config.guess, you can overwrite them all by setting MODGNU_CONFIG_GUESS_DIRS to the full list of directories to process.
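
For instance, if a distribution ships copies of config.guess both at the top level and in a bundled subdirectory (the subdirectory name here is hypothetical):

MODGNU_CONFIG_GUESS_DIRS =	${WRKSRC} ${WRKSRC}/libltdl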

The configure script generated by autoconf then simply checks all functionality on the existing system, by looking for a compiler and running simple test programs through it. Since some of these tests are quite lengthy, the ports tree primes configure with a CONFIG_SITE=config.site file: configure will look at the contents of that file before running the tests. A few configure scripts have bugs that prevent them from running correctly in the presence of config.site. Setting CONFIG_SITE to empty will weed out this kind of problem.

Most configure scripts will auto-detect quite a few conditions. It is very important to look at configure's options, at configure's output, and at the generated config.log file: these will tell you which options were found, and which were not. This will allow you to find out when configure did not find a package that was installed.

This will also tell you which optional packages configure would find. In the ports tree, those are called hidden dependencies. This is a bad thing: a hidden dependency is some extra package configure will pick up if it is installed. It will then proceed to build a mutant package. In some cases, the build will fail because of OpenBSD peculiarities. In some cases, the package creation will fail, as some files will have different names. In some cases, the resulting package will be incorrect, as it will fail to record any dependency on the optional package. So looking at configure's output is one of the most important duties of port maintainers. Watch out for cascading tests: detecting a given feature may lead a configure script to try out and find some dependent feature, so you will not see the second feature in the configure output unless the first feature is triggered.

In case some hidden dependencies are found, some action should be taken. The simplest action is to install the optional package and see what configure does. If it detects the package, one can either disable the detection (by using configure options, environment variables, or by patching the configure script), or verify that the build goes well and add the dependency to the list of dependent packages. A better choice is to figure out a reasonable set of default dependencies, and then add some flavors to cover other common features.

Re-generating configure scripts

configure scripts are normally generated from a configure.in file (recent versions of autoconf use a configure.ac file instead). A standard library of definitions is often available in an aclocal.m4.

In most cases, patching configure directly is a bad idea. It is better to patch the configure.in file and get the ports tree to call autoconf. Good porters will endeavor to write configure.in changes that they can feed to software authors.

Different versions of autoconf will produce distinct configure scripts. autoconf-2.13 is special: it was used over a fairly long period, and there have been mutant versions of autoconf-2.13 (actually, betas of a newer autoconf) in wide use. Hence, using autoconf-2.13 will often not produce the exact same configure script.

Since having several autoconf versions around at the same time is useful, the autoconf script actually available in the ports tree is part of a port called metaauto. Which autoconf script actually gets called is controlled through the environment variable AUTOCONF_VERSION. Calling autoconf happens if you set CONFIGURE_STYLE=autoconf, together with setting AUTOCONF_VERSION. In most cases, identify the version of autoconf that was used to generate the distributed configure script (usually obvious when reading the script) and use this same version yourself.
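
A sketch of the corresponding Makefile fragment, with 2.13 as an example version:

CONFIGURE_STYLE =	autoconf
AUTOCONF_VERSION =	2.13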

autoconf relies on the standard Unix preprocessor m4(1). Normally, autoconf relies on some features of the GNU version of m4, gm4. Fortunately, OpenBSD's m4 has enough features to run autoconf as well; it just needs to be invoked with -g to handle autoconf. Very seldom, autoconf run with OpenBSD's m4 will produce bogus configure scripts; the OpenBSD developers will fix such an issue.

Trojan horses

Configure scripts are big generated files. They are an ideal hiding place for Trojan horses, and this has indeed already happened in the past. This is the main reason for having most versions of autoconf in the tree: a good porter is expected to check that a generated configure script matches what the ports tree autoconf builds.

Interaction with other programs

autoheader is another program related to autoconf that is normally run to create a config.h.in file. Setting CONFIGURE_STYLE=autoconf will also run autoheader. A few ports don't use autoheader. Setting CONFIGURE_STYLE=autoconf no-autoheader will fix that issue.

libtool has a few specific hooks in configure.in. There is often a libtool.m4 script that goes with it. Getting libtool to do the right thing goes beyond the scope of this documentation.

KDE uses an extra layer on top of autoconf. This extra layer assembles a configure.in file from a set of configure.in.in files, and is also able to tweak both configure.in for snappier results, and Makefile.in to allow for some supplementary options in building, and to automatically insert DESTDIR in the right places. The AUTOCONF variable can be used to tweak the actual autoconf script that gets run, and KDE expects /bin/sh ${WRKDIST}/admin/cvs.sh to work correctly.

3.3 - Configuration Files

Packages should only install files under ${PREFIX}, which is /usr/local by default. On the other hand, the OpenBSD policy is to install most configuration files under ${SYSCONFDIR}, which is /etc by default.

Note that it is perfectly acceptable for a binary package to have both ${PREFIX} and ${SYSCONFDIR} hardcoded: PREFIX and SYSCONFDIR are mostly user settings that influence the build of the package.

@sample explained

Packing-lists contain a specific @sample mechanism to deal with configuration files: the package installs the example file under ${PREFIX}/share/examples, and the @sample annotation that follows it tells the package tools to copy that file to its real configuration location at install time if no file exists there yet, and to remove it at deinstall time if it was not modified.
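
A typical packing-list fragment looks like this (the foo names are hypothetical); the first line is the example file shipped in the package, and the @sample line is where the package tools copy it on the live system:

share/examples/foo/foo.conf
@sample ${SYSCONFDIR}/foo.conf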

More @sample specifics

Contrary to other files in a packing-list, @sample entries can have an absolute path name.

Some big packages will also need their own configuration directory, @sample ${SYSCONFDIR}/directory/ will deal with that.

Using @sample directory/ to create port-specific directories that do not hold any configuration files is perfectly good style. @sample correctly honors @mode, @owner and @group annotations. This can be a bit cumbersome, because you will often need to switch back and forth between the default mode and a configuration-file specific mode.
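
As an illustration (the _foo user and the path are hypothetical), a directory owned by a dedicated user could be created like this; the bare @mode and @owner lines switch back to the defaults:

@owner _foo
@mode 750
@sample /var/db/foo/
@mode
@owner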

Special tricks

make update-plist knows how to copy @sample annotations over, but it does not know how to create them, so they have to be written in the first place.

Note the distinction between configuration files and example configuration files: the port must be configured to find its files under ${SYSCONFDIR}, it is only the fake installation stage that must put stuff under ${PREFIX}/share/examples. One simple way to handle that is to copy the files over in a post-install.
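
A sketch of such a post-install target, with hypothetical file names (during fake, ${PREFIX} points inside the fake installation area):

post-install:
	${INSTALL_DATA_DIR} ${PREFIX}/share/examples/foo
	${INSTALL_DATA} ${WRKSRC}/foo.conf ${PREFIX}/share/examples/foo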

A neat trick which often works is to look at a program's Makefile, and override the configuration directory in the fake installation stage by using specific FAKE_FLAGS, for instance:

FAKE_FLAGS=	DESTDIR=${WRKINST} \
		sysconfdir=${WRKINST}${TRUEPREFIX}/share/examples/PKGNAME
You just need to watch out for programs that write the configuration directory down in specific files during their install stage.

Examples

3.4 - Audio Applications

This document currently deals with sampled sounds issues only. Contributions dealing with synthesizers and waveform tables are welcome.

Audio applications tend to be hard to port, as this is a domain where interfaces are not standardized at all, though approaches don't vary much between operating systems.

Using ossaudio

The ossaudio emulation is possibly the simplest way, but it won't always work, and it is usually not such a great idea.

Using existing NetBSD or FreeBSD code

Since we share part of the audio interface with NetBSD and FreeBSD, starting from a NetBSD port is reasonable. Be aware that some files changed places, and that some entries in sys/audioio.h are obsolete. Also, many ports tend to be incorrectly coded and to work on only one type of machine. Some changes are bound to be necessary, though. Read through the next part.

Writing OpenBSD code

libsndio

OpenBSD has its own audio layer provided by the sndio library, documented in sio_open(3). Until it is merged into this page, you can find further information about programming for this API in the guide on writing and porting audio code. sndio allows user processes to access audio(4) hardware and the aucat(1) audio server in a uniform way. It supports full-duplex operation, and when used with the aucat(1) server it supports resampling and format conversions on the fly.
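
A hedged sketch (not a complete program) of simple playback with sndio; the parameter values are arbitrary and error handling is minimal:

#include <sndio.h>
#include <err.h>

/* minimal playback sketch: open the default device, negotiate
 * 16-bit stereo at 44100 Hz, then write one buffer of frames */
static void
play_buffer(const void *buf, size_t nbytes)
{
	struct sio_hdl *hdl;
	struct sio_par par;

	hdl = sio_open(SIO_DEVANY, SIO_PLAY, 0);
	if (hdl == NULL)
		errx(1, "sio_open failed");
	sio_initpar(&par);
	par.bits = 16;
	par.pchan = 2;
	par.rate = 44100;
	if (!sio_setpar(hdl, &par) || !sio_getpar(hdl, &par))
		errx(1, "could not negotiate parameters");
	/* as with AUDIO_SETINFO below, check what you actually got */
	if (!sio_start(hdl))
		errx(1, "sio_start failed");
	sio_write(hdl, buf, nbytes);
	sio_close(hdl);
}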

Hardware independence

YOU SHOULDN'T ASSUME ANYTHING ABOUT THE AUDIO HARDWARE USED.
Wrong code is code that only checks the a_info.play.precision field against 8 or 16 bits, and assumes unsigned or signed samples based on soundblaster behavior. You should check the sample type explicitly, and code according to that. Simple example:

AUDIO_INIT_INFO(&a_info);
a_info.play.encoding = AUDIO_ENCODING_SLINEAR;
a_info.play.precision = 16;
a_info.play.sample_rate = 22050;
error = ioctl(audio, AUDIO_SETINFO, &a_info);
if (error)
    /* deal with it */;
error = ioctl(audio, AUDIO_GETINFO, &a_info);
switch (a_info.play.encoding) {
case AUDIO_ENCODING_ULINEAR_LE:
case AUDIO_ENCODING_ULINEAR_BE:
    if (a_info.play.precision == 8)
        /* ... */;
    else
        /* ... */;
    break;
case ...

default:
    /* don't forget to deal with what you don't know !!! For instance: */
    fprintf(stderr,
        "Unsupported audio format (%d), ask ports@ about that\n",
        a_info.play.encoding);
}
/* now don't forget to check what sampling frequency you actually got */

This is about the smallest code fragment that will deal with most issues.

16 bit formats and endianness

In normal usage, you just ask for an encoding type (e.g., AUDIO_ENCODING_SLINEAR), and you retrieve an encoding with endianness (e.g., AUDIO_ENCODING_SLINEAR_LE). Considering that a soundcard does not have to use the same endianness as your platform, you should be prepared to deal with that. The easiest way is probably to prepare a full audio buffer, and to use swab(3) if an endianness change is required. Dealing with external samples usually amounts to:
  1. Parsing the sample format,
  2. Getting the sample in,
  3. Swapping endianness if it is not your native format,
  4. Computing what you want to output into a buffer,
  5. Swapping endianness if the sound card is not in your native format,
  6. Playing the buffer.
Obviously, you may be able to remove steps 3 and 5 if you are simply playing a sound sample which happens to be in your sound card native format.
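
For steps 3 and 5, swab(3) can do the job on a whole buffer; a naive, self-contained alternative looks like this sketch:

#include <stddef.h>

/* swap adjacent bytes of 16-bit samples in place, for when the
 * device endianness differs from the native byte order */
void
swap16(unsigned char *buf, size_t nbytes)
{
	size_t i;
	unsigned char t;

	for (i = 0; i + 1 < nbytes; i += 2) {
		t = buf[i];
		buf[i] = buf[i + 1];
		buf[i + 1] = t;
	}
}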

Audio quality

Hardware may have some weird limitations, such as being unable to get over 22050 Hz in stereo, but up to 44100 in mono. In such cases, you should give the user a chance to state his preferences, then try your best to give the best performance possible. For instance, it is stupid to limit the frequency to 22050 Hz because you are outputting stereo. What if the user does not have a stereo sound system connected to his audio card output?

It is also stupid to hardcode soundblaster-like limitations into your program. You should be aware of these, but do try to get over the 22050 Hz/stereo barrier and check the results.

Sampling frequency

You should definitely check the sampling frequency your card gives you back. A 5% discrepancy already amounts to a half-tone, and some people have much more accurate hearing than that, though most of us won't notice a thing. Your application should be able to perform resampling on the fly, possibly naively, or through devious applications of Shannon's resampling formula if you can.

Dynamic range

Samples don't always use the full range of values they could. First, samples recorded with a low gain will not sound very loud on the machine, forcing the user to turn the volume up. Second, on machines with badly isolated audio, low sound output means you mostly hear your machine heart-beat, and not the sound you expected. Finally, dumb conversion from 16 bits to 8 bits may leave you with only 4 bits of usable audio, which makes for an awfully bad quality.

If possible, the best solution is probably to scan the whole stream you are going to play ahead of time, and to scale it so that it fits the full dynamic range. If you can't afford that, but you can manage to get a bit of look-ahead on what you're going to play, you can adjust the volume boost on the fly; you just have to make sure that the boost factor stays at a low frequency compared to the sound you want to play, and that you get absolutely no overflows -- those will always sound much worse than the improvement you're trying to achieve.
As sound volume perception is logarithmic, using arithmetic shifts is usually enough. If your data is signed, you should explicitly code the shift as a division, as the C >> operator is not portable on signed data.
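
A tiny illustration of that last point (the attenuation factor is arbitrary):

/* attenuate a signed sample: write the scaling as a division rather
 * than "sample >> 2", since right-shifting negative values is
 * implementation-defined in C */
short
attenuate(short sample)
{
	return sample / 4;
}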

If all else fails, you should at least try to provide the user with a volume scaling option.

Audio performance

Low-end applications usually don't have much to worry about. Keep in mind that some of us do use OpenBSD on low-end 68030, and that if a sound application can run on that, it should.

Don't forget to run benchmarks. Theoretical optimizations are just that: theoretical. Some hard figures should be collected to check what is a sizeable improvement, and what is not.

For high-performance audio applications, such as MPEG-1 layer 3 decoders, some points should be taken into account:

A model you may have to follow to get optimal results is to first compile a small test program that enquires about the specific audio hardware available, then proceed to configure your program so that it deals optimally with this hardware. You may reasonably expect people who want good audio performance to recompile your port when they change hardware, provided it makes a difference.

Real time or synchronized

Considering that OpenBSD is not real time, you may still wish to write audio applications that are mostly real time, for instance games. In such a case, you will have to lower the blocksize so that the sound effects don't get out of synch with the current game. The problem with this is that the audio device may get starved, which yields horrible results.

In case you simply want audio to be synchronized with some graphics output, but the behavior of your program is predictable, synchronization is easier to achieve. You just play your audio samples, and ask the audio device what you are currently playing with AUDIO_GETOOFFS, then use that information to post-synchronize graphics. Provided you ask sufficiently often (say, every tenth of a second), and as long as you have enough horse-power to run your application, you can get very good synchronization that way. You might have to tweak the figures by a constant offset, as there is some lag between what the audio reports, what's currently playing, and the time it takes for XWindow to display something.
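
A hedged sketch of such a query, assuming the audio_offset structure from sys/audioio.h:

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/audioio.h>

/* return how many samples the device has played so far,
 * to post-synchronize graphics against the audio stream */
unsigned int
played_samples(int audio_fd)
{
	struct audio_offset ao;

	if (ioctl(audio_fd, AUDIO_GETOOFFS, &ao) == -1)
		return 0;	/* handle the error properly in real code */
	return ao.samples;
}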

Contributing code back

In the case of audio applications, working with the original program's author is very important. If their code only works with soundblaster cards, for instance, there is a good chance they will have to cope with other technology soon.

If you don't send your comments to the author, your work will have been useless.

It may also be that the author has already noticed whatever problems you are currently dealing with, and is addressing them in his current development tree. If the patches you are writing amount to more than a handful of lines, cooperation is almost certainly a very good idea.

3.5 - Manual pages

This section provides guidelines on how to deal with groff versus mandoc(1) issues in ports.

Should I check anything?

When creating a new port or updating an existing port, you are welcome to check whether the port can use mandoc to format its manuals. This may make the manuals more usable for the port's users, and it will reduce the port's build time. Note that these checks are recommended, but not strictly required to create a new port. It is also acceptable to just put USE_GROFF=Yes into the Makefile and be done with it. Marc has done that more than 3,000 times, and we didn't kill him.

Ports matching one or more of the following criteria are good candidates for doing the checks described below, so in these cases, a bit of your time is likely well spent:

Of course, ports not matching these criteria might work as well, so there is nothing wrong with checking if you like to. However,

When you choose to do any checks, note that both automatic and manual checks are required. In case you are not willing to do a manual check at the end, looking over the manuals as formatted with mandoc, do not bother with the automatic part of the checks, just put USE_GROFF=Yes.

Only remove USE_GROFF if you plan to actively maintain the port. Removing USE_GROFF from a port nobody is going to maintain will force the poor soul doing the next update to redo the check. If that person fails to check and the upstream author has added any features to the manuals mandoc does not support, we will end up with unusable manuals.

Which tools do I need?

Before you start checking anything, make sure you have -current mandoc installed. For now, it should not be older than one or two months: being told about issues that have been fixed long ago will not help mandoc developers. In case of doubt, just update your mandoc utility; it is really quick and easy, and does not require your whole system to be up to date:

$ cd /usr/src/usr.bin/mandoc/
$ cvs up -dP
$ make cleandir
$ make obj
$ make depend
$ make
$ sudo make install
Optionally, you may also get a copy of the gmdiff utility script that helps to compare groff and mandoc output. The gmdiff script is not strictly required; doing the necessary checks by hand is perfectly acceptable.

How do I report the results?

The following paragraphs ask for sending in reports to the mandoc maintainers in some particular situations. Before sending such reports, please always tick off the following checklist:

  1. For the time being, do not report problems related to DocBook. We know source code generated by DocBook is extremely crappy even when DocBook works as intended, and on top of that, DocBook tends to be more buggy than average software. We also know that mandoc usually parses and renders DocBook output badly, and we know what to do to improve mandoc in this respect. This may change in the future, but right now, more than half of all reports that fail to reveal new, useful information are related to DocBook, and the signal-to-noise ratio in DocBook reports is too low to be worthwhile.
  2. Make sure your mandoc binary is up to date - see above.
  3. Attach the mdoc(7) or man(7) source file in question to the mail. This may either be a file contained in the distribution tarball or a file generated during the build process. In case several files exhibit the problems, choose one that shows all problems. In case different files exhibit different problems you wish to report, attach as many files as necessary. The point is to save the mandoc maintainers the work of downloading distribution tarballs, searching them for source files, sometimes even installing software before being able to start a build, while you have that information readily at hand, anyway.
  4. Briefly describe all the problems you want to report, and where they can be seen in which file. I have spent time wondering what exactly the reporter's point was more than once in the past.
  5. In case your report is related to errors or warnings printed by the mandoc utility, copy the output of mandoc -Tlint (or mandoc -Tlint -Werror when warnings are irrelevant) into the body of your mail. Usually, this is easy to reproduce, but it did happen that it was not, causing unnecessary confusion.
  6. In case the version of the port you are talking about is not yet committed, please attach what is needed to build the uncommitted port: A diff against -current when it is an update, or a tarball of the port directory when it is a completely new port. Very often, the source files will be sufficient to identify the problem; however, in those cases where they are not, mailing back and forth or searching mailing list archives just to get the needed additional information is a waste of time.
  7. As a rule, please mail to both schwarze@ and kristaps@ to minimize the risk that real problems get lost. Unless you are the maintainer of the port, Cc: him or her. Unless you are an OpenBSD developer, in case you regularly work with a developer who is committing your ports and who you know is interested in this port, Cc:ing him or her may be useful as well.

How do I do automatic checking?

To do the automatic part of the check, please run the following command over all mdoc(7) and man(7) manual source files contained in the port:

$ mandoc -Tlint -Werror *

If this produces a considerable number of error messages, the port needs USE_GROFF for now. Don't even consider using mandoc for anything here; it's not up to the job yet. Sending a report to the mandoc developers is useful if:

If manual pages look good with groff, never patch them to get rid of mandoc errors. That would merely be a make-work project not helping anyone: it will help neither the upstream manuals nor mandoc.

How do I do manual checking?

If there are no errors, proceed to the manual part of the check. Look at the manuals as formatted by mandoc. Do they look fine? If yes, you do not need USE_GROFF, and there is no need to report anything.

If there are no errors, but mandoc output has serious issues, that is, relevant information is missing or part of the output is seriously garbled, please always report your findings, even if you happen to know it's due to a known issue with mandoc. We do want to know which issues cause serious problems in practice, such that we can address the most pressing issues first.

If mandoc output has serious issues and groff output looks bad as well, then the manuals are probably just broken upstream. In that case, you have the usual options when porting broken software: Abandon the port, ignore the problem, report upstream, and/or patch the bugs away. In case you need help with the latter, talk to schwarze@.

If there are no errors, but mandoc output has minor issues that don't really hinder the user when reading the manual, you are welcome to report these issues as well. In that case, you are even more welcome to first check the mandoc TODO list, to avoid having the same minor issues reported again and again - but in case of doubt, it is always better to report dupes than to let problems go unnoticed.

If there are only very few errors, in particular if you get the impression that mandoc output is just fine all the same, it's your call to either set USE_GROFF=Yes (that's the quicker way out) or to report it and ask whether it's acceptable to remove USE_GROFF in that particular case. Reporting it is often a great help in improving mandoc error reporting, in particular in identifying and removing bogus mandoc error messages.

To speed up the manual checks, in particular if you are often doing mandoc checks on OpenBSD ports, and to reduce the risk of overlooking problems, consider using the gmdiff utility script. It takes the file names of an arbitrary number of manual source files as arguments, runs both groff and mandoc on all the files in turn, and compares the output of both programs. However, bear in mind that you are still doing manual checks with the ultimate goal to judge the quality of mandoc output: All the above points still apply even when you are using the gmdiff script to help your work. Also note that gmdiff will usually find minor formatting differences between both programs, in particular with respect to whitespace. If mandoc output looks good, even if it's slightly different from groff output, USE_GROFF is not needed.

For ease of use, it's possible to call gmdiff from a custom target in mk.conf:

gmdiff:
	@make fake; cd ${WRKINST}${TRUEPREFIX}; find man -type f -path 'man/man*' -print0 | xargs -0r gmdiff | less

What about warnings?

You might wonder about mandoc warnings, as opposed to mandoc errors. In a nutshell, the distinction is that errors may seriously impact the usefulness of the output, while warnings might at the worst cause minor formatting glitches, if at all. If a mandoc warning appears to be related to seriously garbled output, that's probably a bug in mandoc and should always be reported.

That said, it is obvious that warnings are irrelevant for the decision whether to use or not to use mandoc for a given port. They are for manual authors, to help improve manual quality, not for porters.

How can I help upstream?

In case you are one of the port's upstream developers, or know they care about good quality of their manuals and gladly accept patches, it may make sense to use mandoc -Tlint to identify potential formatting issues and to produce patches to be submitted upstream. Usually, there is no need to put such patches into the ports tree.

As with any kind of linting, before changing your mdoc(7) or man(7) source code or sending out patches, first make sure you are chasing real problems in the manuals. The mandoc utility is not perfect. It may produce bogus warnings. We are trying to fix that, but there will always be room for improvement. In case of doubt, report the issue and ask for advice.

3.6 - rc.d(8) scripts

This section is intended to provide some information on writing and installing rc.d(8) scripts.

Ports that install a daemon benefit greatly from having rc.d(8) scripts. Such a script allows the user to easily check whether the daemon is running, and provides an easy and consistent way to start and stop it.

Writing rc.d(8) scripts

Writing an rc.d(8) script is straightforward thanks to the clean and simple design of the rc.subr(8) system, though there are several things to take into account:

  1. The script has to be placed into ${PKGDIR} with a .rc extension, like mpd.rc. This will allow the package tools to pick it up.
  2. Be sure to test all the functions of the script, especially the reload function.
  3. Use ${TRUEPREFIX} when writing the path to the daemon.

Example script

Below is an example of a typical script.

#!/bin/sh
#
# $OpenBSD: munin_node.rc,v 1.6 2013/01/08 11:14:02 kirby Exp $

daemon="${TRUEPREFIX}/sbin/munin-node"

. /etc/rc.d/rc.subr

pexp="/usr/bin/perl -wT $daemon"

rc_pre() {
	install -d -o _munin /var/run/munin
}

rc_cmd $1
A template script can also be found in the templates directory of your ports tree.



www@openbsd.org
$OpenBSD: specialtopics.html,v 1.38 2014/04/18 11:07:21 sthen Exp $