Tuesday, December 25, 2007

What Is Bit-Banging?

Bit-banging is a method of using general-purpose I/O lines to
emulate a serial port. Microcontrollers that include serial-port
modules like SPI[tm] and I2C[tm] manage all synchronization and
timing signals, and this activity is transparent to the user.
With bit-banging, however, each write to the port causes a
single transition at the port pin. And it's up to the user,
first, to provide the correct number of transitions to obtain
the desired waveform and, second, to ensure that the timing
requirements (particularly setup and hold times for reading and
writing data) are met. Because of the overhead associated with
the number of writes to the port, the bit-bang throughput is
usually very slow even when the port itself can be driven quite
fast. This technique is inefficient from a software perspective,
but it can be acceptable in applications where communication is
infrequent (for example, sending an occasional control command).

Thursday, December 13, 2007

Mixed-Language Programming and External Linkage


By Giri
March 2005,
March 2006

Abstract: This article introduces
the concept of linkage and shows how
a simple C++ program fails without
language linkage, but can succeed
with proper linkage.


* Introduction
* The Problem
* The Reason
* The Solution
* Resources

It is a common practice to call
functions of a C library from a C++
program. This works out well as long
as developers restrict themselves to
the standard headers and libraries
that were supplied with the
operating system. But novice
programmers may stumble over
link-time errors as soon as they
try to call functions of their own
C library from a C++ program.
Potential reasons for the failure
include unfamiliarity with linkage
specifications and with how C and
C++ compilers handle symbols during
compilation and linking.

This article briefly introduces the
concept of linkage and shows how a
simple C++ program fails without
language linkage, and succeeds with
proper linkage. Mixing code written
in C++ with code written in C is
relatively straightforward, as C++
is mostly a superset of C. Although
mixing C++ objects with objects in
languages other than C is allowed,
it is a bit more complicated, hence
this article restricts the
discussion to C and C++ objects.


The C++ standard provides a
mechanism called linkage
specification for mixing code that
was written in different programming
languages and was compiled by the
respective compilers, in the same
program. Linkage specification
refers to the protocol for linking
functions or procedures written in
different languages. Linkage is the
term used by the C++ standard to
describe the accessibility of
objects from one file to another or
even within the same file. Three
types of linkage exist:

* No linkage
* Internal linkage
* External linkage

Something internal to a function,
such as its arguments and local
variables, always has no linkage and
hence can be accessed only within
that function.

Sometimes it is necessary to declare
functions and other objects within a
single file in a way that allows
them to reference each other, but
not to be accessible from outside
that file. This can be done through
internal linkage. Symbols with
internal linkage only refer to the
same object within a single source
file. Prefixing the declarations
with the keyword static changes the
linkage of external objects from
external linkage to internal linkage.

Objects that have external linkage
are all considered to be located at
the outermost level of the program.
This is the default linkage for
functions and anything declared
outside of a function. All instances
of a particular name with external
linkage refer to the same object in
the program. If two or more
declarations of the same symbol have
external linkage but with
incompatible types (for example,
mismatch of declaration and
definition), then the program may
either crash or show abnormal
behavior. The rest of the article
discusses one of the issues with
mixed code and provides a
recommended solution with external linkage.

The Problem

In the real world, it is very common
to use the functionality of code
written in one programming language
from code written in another. A
trivial example is a C++ programmer
relying on a standard C library
(libc) for sorting a series of
integers with the "quick sort"
technique. It works because the C
implementation takes care of the
language linkage for us. But we need
to take additional care if we use
our own libraries written in C, from
a C++ program. Otherwise the
compilation may fail with link
errors caused by unresolved symbols.
Consider the following example:

Assume that we're writing C++ code
and wish to call a C function from
it. Here's the code for the callee,
the C routine:

%cat greet.h
extern char *greet();

%cat greet.c
#include "greet.h"

char *greet() {
    return ((char *) "Hello!");
}

%cc -G -o libgreet.so greet.c

Note: The extern keyword declares a
variable or function and specifies
that it has external linkage, i.e.,
its name is visible from files other
than the one in which it's defined.

Let's try to call the C function
greet() from a C++ program.

%cat mixedcode.cpp
#include <iostream.h>
#include "greet.h"

int main() {
    char *greeting = greet();
    cout << greeting << "\n";
    return (0);
}

%CC -lgreet mixedcode.cpp
Undefined first referenced
symbol in file
char*greet() mixedcode.o
ld: fatal: Symbol referencing errors. No output written to a.out

Though the C++ code is linked with
the dynamic library that holds the
implementation for greet(),
libgreet.so, the linking failed with
an undefined symbol error. What went wrong?

The Reason

The reason for the link error is
that a typical C++ compiler mangles
(encodes) function names to support
function overloading. So, the symbol
greet is changed to something else
depending on the algorithm
implemented in the compiler during
the name mangling process. Hence the
object file does not have the symbol
greet anywhere in the symbol table.
The symbol table of mixedcode.o
confirms this. Let's have a look at
the symbol tables of both
libgreet.so and mixedcode.o:

%elfdump -s libgreet.so

Symbol Table Section: .symtab
index value size type bind oth ver shndx name
[1] 0x00000000 0x00000000 FILE LOCL D 0 ABS libgreet.so
[37] 0x00000268 0x00000004 OBJT GLOB D 0 .rodata _lib_version
[38] 0x000102f3 0x00000000 OBJT GLOB D 0 .data1 _edata
[39] 0x00000228 0x00000028 FUNC GLOB D 0 .text greet
[40] 0x0001026c 0x00000000 OBJT GLOB D 0 .dynamic _DYNAMIC

%elfdump -s mixedcode.o

Symbol Table Section: .symtab
index value size type bind oth ver shndx name
[0] 0x00000000 0x00000000 NOTY LOCL D 0 UNDEF
[1] 0x00000000 0x00000000 FILE LOCL D 0 ABS mixedcode.cpp
[2] 0x00000000 0x00000000 SECT LOCL D 0 .rodata
[3] 0x00000000 0x00000000 FUNC GLOB D 0 UNDEF
[4] 0x00000000 0x00000000 FUNC GLOB D 0 UNDEF __1cFgreet6F_pc_
[5] 0x00000000 0x00000000 NOTY GLOB D 0 UNDEF __1cDstdEcout_
[6] 0x00000010 0x00000050 FUNC GLOB D 0 .text main
[7] 0x00000000 0x00000000 NOTY GLOB D 0 ABS __fsr_init_value

%dem __1cFgreet6F_pc_

__1cFgreet6F_pc_ == char*greet()

char*greet() has been mangled to
__1cFgreet6F_pc_ by the Sun Studio 9
C++ compiler. That's the reason why
the static linker (ld) couldn't
match the symbol in the object file.

Note that a C compiler that complies
with the C99 standard may mangle
some names. For example, on systems
in which linkers cannot accept
extended characters, a C compiler
may encode the universal character
name when forming valid external identifiers.

The Solution

The C++ standard provides a
mechanism called linkage
specification to enable smooth
compilation of mixed code. Linkage
between C++ and non-C++ code
fragments is called language
linkage. All function types,
function names, and variable names
have a default C++ language linkage.
Language linkage can be achieved
using the following linkage specification.

Linkage specification:

extern string-literal { declaration-seq }
extern string-literal function-declaration;

The string-literal specifies the
linkage associated with a particular
function, for example, C and C++.
Every C++ implementation provides
for linkage to functions written in
the C language ("C") and linkage to C++ functions ("C++").

The solution to the problem under
discussion is to ask the C++
compiler to use C mangling for the
external functions to be called, so
we can use the functionality of
external C functions from C++ code,
without any issues. We can
accomplish this using the linkage to
C. The following declaration of
greet() in greet.h should resolve
the problem:

extern "C" char *greet();

Because we were calling C code from
a C++ program, C linkage was used
for the routine greet(). The linkage
directive extern "C" tells the
compiler to change from C++ mangling
to C mangling for the function, and
to use C calling conventions while
sending external information to the
linker. In other words, the C
linkage specification forces the C++
compiler to adopt C conventions,
which are not the same as the C++ conventions.

So, let's modify the header greet.h,
and recompile:

%cat greet.h
#if defined __cplusplus
extern "C" {
#endif

extern char *greet();

#if defined __cplusplus
}
#endif

Thursday, November 22, 2007

Multi-file projects and the GNU Make utility

C-Scene Issue #2
Multi-file projects and the GNU Make utility
Author: George Foot
Email: george.foot@merton.ox.ac.uk
Occupation: Student at Merton College, Oxford University, England
IRC nick: gfoot

Disclaimer: The author accepts no liability whatsoever for any
damage this may cause to anything, real, abstract or
virtual, that you may or may not own. Any damage
caused is your responsibility, not mine.

Ownership: The section `Multi-file projects' remains the
property of the author, and is copyright (c) George
Foot May-July 1997. The remaining sections are the
property of CScene and are copyright (c) 1997 by
CScene, all rights reserved. Distribution of this
article, in whole or in part, is subject to the same
conditions as any other CScene article.

0) Introduction

This article will explain firstly why, when and how to split
your C source code between several files sensibly, and it will
then go on to show you how the GNU Make utility can handle all
your compilation and linking automatically. Users of other make
utilities may still find the information useful, but it may
require some adaptation. If in doubt,
try it out, but check the manual first.

1) Multi-file projects

1.1 Why use them?

Firstly, then, why are multi-file projects a good thing?
They appear to complicate things no end, requiring header
files, extern declarations, and meaning you need to search
through more files to find the function you're looking for.

In fact, though, there are strong reasons to split up
projects. When you modify a line of your code, the
compiler has to recompile everything to create a new
executable. However, if your project is in several files
and you modify one of them, the object files for the other
source files are already on disk, so there's no point in
recompiling them. All you need to do is recompile the file
that was changed, and relink the object files. In a large
project this can mean the difference between a lengthy
(several minutes to several hours) rebuild and a ten or
twenty second adjustment.

With a little organisation, splitting a project between
files can make it much easier to find the piece of code
you are looking for. It's simple - you split the code
between the files based upon what the code does. Then if
you're looking for a routine you know exactly where to
find it.

It is much better to create a library from many object
files than from a single object file. Whether or not this
is a real advantage depends on what system you're using, but
when gcc/ld links a library into a program at link time it
tries not to link in unused code. It can only exclude
entire object files from the library at a time, though, so
if you reference any symbols from a particular object file
of a library the whole object file must be linked in. If
the library is very segmented, the resulting executables
can be much smaller than they would be if the library
consisted of a single object file.

Also, since your program is very modular with the minimum
amount of sharing between files there are many other
benefits -- bugs are easier to track down, modules can
often be reused in another project, and last but not
least, other people will find it much easier to understand
what your code is doing.

1.2 When to split up your projects

It is obviously not sensible to split up *everything*;
small programs like `Hello World' can't really be split
anyway since there's nothing to split. Splitting up small
throwaway test programs is pretty pointless too. In
general, though, I split things whenever doing so seems to
improve the layout, development and readability of the
program. This is in fact true most of the time.

The decision about what to split and how is of course
yours; I can only make general suggestions here, which you
may or may not choose to follow.

If you are developing a fairly large project, you should
think before you start how you are going to implement it,
and create several (appropriately named) files initially
to hold your code. Of course, don't hesitate to create new
files later in development, but if you do then you are
changing your mind and should perhaps think about whether
some other structural changes would be appropriate.

For medium-sized projects, you can use the above technique
of course, or you might be able to just start typing, and
split the file up later when it is getting hard to manage.
In my experience, though, it is a great deal simpler to
start off with a scheme in mind and stick to it or adapt
it as the program's needs change during development.

1.3 How to split up projects

Again, this is strictly my opinion; you may (probably
will?) prefer to lay things out differently. This is
touching on the controversial topic of coding style; what
I present here is simply my personal preference (along
with reasons for each of these guidelines):

i) Don't make header files which span several source
files (exception: library header files). It's much
easier to track and usually more efficient if each
header file only declares symbols from one source
file. Otherwise, changing the structure of one
source file (and its header file) may cause more
files to be rebuilt than is really necessary.

ii) Where appropriate, do use more than one header file
for a source file. It is often useful to separate
function prototypes, type definitions, etc, from the
C source file into a header file even when they are
not publicly available. Making one header file for
public symbols and one for private symbols means
that if you change the internals of the file you can
recompile it without having to recompile other files
that use the public header file.

iii) Don't duplicate information in several header files.
If you need to, #include one in the other, but don't
write out the same header information twice. The
reason for this is that if you change the
information in the future you will only need to
change it once, rather than hunting for duplicates
which would also need modifying.

iv) Make each source file #include all the header files
which declare information in the source file. Doing
this means that the compiler is more likely to pick
out mistakes, where you have declared something
differently in the header file to what it is in the
source file.

1.4 Notes on common errors

a) Identifier clashes between source files: In C,
variables and functions are by default public, so that
any C source file may refer to global variables and
functions from another C source file. This is true even
if the file in question does not have a declaration or
prototype for the variable or function. You must,
therefore, ensure that the same symbol name is not used
in two different files. If you don't do this you will
get linker errors and possibly warnings during compilation.

One way of doing this is to prefix public symbols with
some string which depends on the source file they appear
in. For example, all the routines in gfx.c might begin
with the prefix `gfx_'. If you are careful with the way
you split up your program, use sensible function names,
and don't go overboard with global variables, this
shouldn't be a problem anyway.

To prevent a symbol from being visible from outside the
source file it is defined in, prefix its definition
with the keyword `static'. This is useful for small
functions which are used internally by a file, and
won't be needed by any other file.

b) Multiply defined symbols (again): A header file is
literally substituted into your C code in place of the
#include statement. Consequently, if the header file is
#included in more than one source file all the
definitions in the header file will occur in both
source files. This causes them to be defined more than
once, which gives a linker error (see above).

Solution: don't define variables in header files. You
only want to declare them in the header file, and
define them (once only) in the appropriate C source
file, which should #include the header file of course
for type checking. The distinction between a
declaration and a definition is easy to miss for
beginners; a declaration tells the compiler that the
named symbol should exist and should have the specified
type, but it does not cause the compiler to allocate
storage space for it, while a definition does allocate
the space. To make a declaration rather than a
definition, put the keyword `extern' before the declaration.

So, if we have an integer called `counter' which we
want to be publicly available, we would define it in a
source file (one only) as `int counter;' at top level,
and declare it in a header file as `extern int counter;'.

Function prototypes are implicitly extern, so they do
not create this problem.

c) Redefinitions, redeclarations, conflicting types:
Consider what happens if a C source file #includes both
a.h and b.h, and also a.h #includes b.h (which is
perfectly sensible; b.h might define some types that
a.h needs). Now, the C source file #includes b.h twice.
So every #define in b.h occurs twice, every declaration
occurs twice (not actually a problem), every typedef
occurs twice, etc. In theory, since they are exact
duplicates it shouldn't matter, but in practice it is
not valid C and you will probably get compiler errors
or at least warnings.

The solution to this problem is to ensure that the body
of each header file is included only once per source
file. This is generally achieved using preprocessor
directives. We will #define a macro for each header
file, as we enter the header file, and only use the
body of the file if the macro is not already defined.
In practice it is as simple as putting this at the
start of each header file:

#ifndef FILENAME_H
#define FILENAME_H

and then putting this at the end of it:

#endif /* FILENAME_H */

Using C/C++ libraries with Automake and Autoconf

* Introduction.
* configure.ac
* Makefile.am
* Example Files
* Recommended Reading
If you have read Using Automake and Autoconf with C++ then you
already know how to use automake and autoconf to build your
C/C++ programs. This document will show you what you need to add to
those configure.ac and Makefile.am files to link your code with
a shared library.

I have included an example, which links to the examplelib
library used in Building C/C++ libraries with Automake and Autoconf.

The Makefile needs two pieces of information - how to find the
library's header files and how to link to the library itself.
These are traditionally stored in variables ending in CFLAGS
(for the headers' include argument) and LIBS (for the linker
argument). For instance, GTKMM_CFLAGS and GTKMM_LIBS. These
variables will be set in the configure.ac file.

Your configure.ac script should find the library and set the
CFLAGS and LIBS variables:

Libraries which have installed a .pc pkg-config file
Recently, libraries have started to use pkg-config to
provide includes and linker information, instead of the
methods described below. In this case, you should use
the PKG_CHECK_MODULES() macro in your configure.ac file.
For instance:

PKG_CHECK_MODULES(DEPS, gtkmm-2.0 >= 1.3.3 somethingelse-1.0 >= 1.0.2)

DEPS_CFLAGS and DEPS_LIBS will then include the include
and linker options for the library and all of its
dependencies, for you to use in your Makefile.am file. I
have used the DEPS prefix to mean 'dependencies', but
you can use any prefix that you like. Notice that you
can get information about several libraries at once,
putting all of the information into one set of _CFLAGS
and _LIBS variables. You can also use more than one
PKG_CHECK_MODULES() line to put information about
different sets of libraries in separate _CFLAGS and
_LIBS variables.

Of course you must ensure that you have installed pkg-config.

Libraries which have installed an AM_PATH_* macro
Some libraries, such as gtkmm 1.2, install an m4 macro
which makes your life slightly easier. You can call the
macro from your configure.ac. It will set *_CFLAGS and
*_LIBS variables which you can use in your Makefile.am
files. For instance:

AM_PATH_GTKMM(1.2.2,,AC_MSG_ERROR([Cannot find correct gtkmm version]))

This macro will call the gtkmm-config script and set
the GTKMM_CFLAGS and GTKMM_LIBS variables with the data
that it returns. It will also report the library version
found and complain if the library is not installed, or
if it is the wrong version.

When you call aclocal the macro will be copied to the
aclocal.m4 file in your project's directory. If you did
not install the library at the same prefix (e.g. /usr
or /usr/local) as the aclocal tool, then you will need
to call aclocal with the -I argument. For instance:

# aclocal -I /home/myinstalls/share/aclocal
Libraries which have installed a *-config script
Some libraries do not install an AM_PATH_* m4 macro, but
they do install a *-config script. In this situation you
need to call the script and set the variables in your
own code.

For instance,

# GNOME--:
# (These macros are in the 'macros' directory,
# copied from the gnome-libs distribution.)
# GNOME_INIT sets the GNOME_CONFIG variable, among other things:
GNOME_INIT

# The gnome-config script knows about gnomemm:
# ('gnome-config' is installed by GNOME)
# So call gnome-config with some arguments:
GNOMEMM_CFLAGS=`$GNOME_CONFIG --cflags gnomemm`
GNOMEMM_LIBS=`$GNOME_CONFIG --libs gnomemm`
AC_SUBST(GNOMEMM_CFLAGS)
AC_SUBST(GNOMEMM_LIBS)

Libraries with no macro and no script
There are still many libraries which do not use *-config
scripts or macros to make your life easier. The best
thing to do in this situation is to allow the user to
tell the configure script where to find the library. You
can do this with the AC_ARG_WITH() macro. This adds a
command line argument to the configure script and
complains if it isn't used.

For instance:

# Ask user for path to libmysqlclient stuff:
AC_ARG_WITH(mysql,
[  --with-mysql=<path> prefix of MySQL installation. e.g. /usr/local or /usr],
[MYSQL_PREFIX=$with_mysql],
AC_MSG_ERROR([You must call configure with the --with-mysql option.
This tells configure where to find the MySql C library and headers.
e.g. --with-mysql=/usr/local or --with-mysql=/usr]))

MYSQL_LIBS="-L${MYSQL_PREFIX}/lib/mysql -lmysqlclient"
MYSQL_CFLAGS="-I${MYSQL_PREFIX}/include"
AC_SUBST(MYSQL_LIBS)
AC_SUBST(MYSQL_CFLAGS)

The CFLAGS and LIBS variables are used in your Makefile.am files:

For programs
If you are using the library in a program, then you
should do something like the following.

AM_CPPFLAGS = $(DEPS_CFLAGS)
someapp_LDADD = $(DEPS_LIBS)

For libraries
If you are using the library from another library, then
you should do something like the following. This will
not actually link with a shared library - it will just
tell your library that it needs to link with the other
library at run time.

libsomething_la_LIBADD = $(DEPS_LIBS)
Example Files
You may download this example which demonstrates how to link an
executable to the example library used in Building C/C++
libraries with Automake and Autoconf.

Recommended Reading
* Building C/C++ libraries with Automake and Autoconf
* GNU's automake, autoconf, and libtool manuals

Building C/C++ libraries with Automake and Autoconf

* Introduction.
* libtool
* Directory structure
* Installing headers
* Version numbers
* Making your library easy to use
* C++ namespaces
* Example Files
* Recommended Reading
If you have read Using Automake and Autoconf with C++ then you
should already know how to use automake and autoconf to build
your C++ programs. This document will show you how to use the
same tools to build a reusable library. I have included an
example which demonstrates these ideas.

You may also wish to read Using C/C++ libraries with automake
and autoconf to see what users of your library will expect.

Why use libtool?
Each platform has its own way of implementing the shared
(or 'dynamic') library idea, and there are various tools
needed to build these libraries. Libtool delegates to
these platform-specific tools and presents the developer
with a simpler set of options. Automake and autoconf can
use libtool to build libraries for many operating systems and
development environments using the same build files.

Libtool also makes it easy to build a static library or
a dynamic library from the same project.

When you start your project files you need to issue the
'libtoolize' command to add libtool support files to
your project.

You need to call AM_PROG_LIBTOOL in your configure.ac file.

libtool variables
When building an executable you use something like this
in your Makefile.am:

bin_PROGRAMS = someapp
someapp_SOURCES = main.cc

To build a library you use the LTLIBRARIES set of
variables instead:

lib_LTLIBRARIES = libsomething-1.0.la
libsomething_1_0_la_SOURCES = something.h something.cc
Parallel installs
Notice that the library is called
libsomething-1.0.la, including the version number
in its name. This will allow the next version,
libsomething-2.0, to be installed alongside,
without preventing use of the previous version.

Directory structure
Don't use 'src'
When the library is installed, its headers will be
installed in their own directory in the 'include'
directory. Code that uses the library should #include
them like so:

#include <something/something.h>
#include <something/extrabits.h>

If you put your source files in a 'src' directory then
the #include lines in your own headers will not work
when they are installed, and the #includes in your
examples (in the 'examples' directory) will be
misleading. At best, they will include like so:

#include <something.h>
#include <extrabits.h>

I suggest that you put your sources in a directory that
has the same name as your library. Then the examples
inside your distribution and any external examples will
use the same path in their #include directives.

Sources in sub directories
In Using automake and autoconf with C++ I explained how
to build intermediate static libraries in each sub
directory. The idea is very similar when building a
library, but the syntax is slightly different.

* Libtool libraries have the .la suffix, instead
of .a
* We need to use _LIBADD instead of _LDADD.

For instance

lib_LTLIBRARIES = libsomething.la
libsomething_la_SOURCES = main.cc
libsomething_la_LIBADD = sub/libsubstuff.la
This technique is demonstrated in the downloadable example.

Note that, at the time of writing, there are two
problems with libtool that you should be aware of:

* Libtool will not add libtool libraries
recursively. Therefore you need to list all of
the convenience libraries in one place. For
instance:
libsomething_la_LIBADD = sub/libsub.la
* Libtool will not differentiate between two
libraries with the same name in different
directories. Therefore you should probably
include the full path in the name of your
convenience libraries. For instance:
libsomething_la_LIBADD = foo/libfoo.la
foo/sub/libfoo_sub.la goo/libgoo.la

Hopefully these problems will be fixed in the next
version of libtool. Please tell me when they have been
fixed, so that I can update this page.

Installing headers
When the user types 'make install' the library's header files
should be installed as well as the library itself. You can make
this happen by using these variables in your Makefile.am files:

library_includedir = $(includedir)/something-1.0/something
library_include_HEADERS = something.h foo.h

This will put something.h and foo.h in
$(includedir)/something-1.0/something.
Users of the library would then #include your headers like so:

#include <something/something.h>
Parallel installs
Notice that the headers should be installed in a
version-specific directory. This will allow the next
version's headers to be installed alongside in
something-2.0, without preventing use of the previous
version's headers.

The generated config.h header should be installed in the
lib directory, because it is architecture-dependent.
Actually, I'd like a better explanation than that to put here.

For example, in your Makefile.am file:

something_configdir = $(libdir)/something-1.0/include
something_config_DATA = config.h
Version numbers
Your library should have two numbers - the 'release number' and
the 'version number'.

The release number uses a scheme of your own devising. Generally
it indicates how much functionality has been added since the
last version, and how many bugs were fixed.

The version number uses an established scheme to indicate what
type of changes happened to your library's interface. The
following diagram can be found in many configure.ac files:

current:revision:age
|       |        |
|       |        +- increment if interfaces have been added
|       |           set to zero if interfaces have been removed
|       |           or changed
|       +- increment if source code has changed
|          set to zero if current is incremented
+- increment if interfaces have been added, removed or changed

Use this version number in your Makefile.am file:

libsomething_la_LDFLAGS= -version-info $(EXAMPLE_LIBRARY_VERSION) -release $(EXAMPLE_RELEASE)
Making your library easy to use
Experts can use your library if they are given just the headers
and the library, but you can make life much easier for people
who are using automake and autoconf. In my opinion, your library
will appear more complete, and will be used by more people if
you use pkg-config. This tool was created relatively recently to
improve upon the old method, described here. It allows you to
install details about your library, specifically the linker and
include options that should be used with it. Developers can add
a line to their configure.ac files that reads this information
back, along with the options required for your library's
dependencies.

The .pc.in file
Your library should install a .pc file, describing the
linker and include options for your library. But those
are dependent on the --prefix given to the configure
script, so you'll need to create a .pc.in file. For
example:

prefix=@prefix@
exec_prefix=@exec_prefix@
libdir=@libdir@
includedir=@includedir@

Name: something
Description: Some library.
Requires: somethingelse-2.0 somethingmore-1.0
Version: @VERSION@
Libs: -L${libdir} -lsomething-1.0
Cflags: -I${includedir}/something-1.0 -I${libdir}/something-1.0/include

You'll need to mention this new .in file in your
configure.ac script, like so:

AC_OUTPUT( Makefile \
something/Makefile \
something/sub/Makefile \
something-1.0.pc )

And you'll need to mention it in your Makefile.am file,
so that it gets installed and distributed. For instance:

pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = something-1.0.pc
Read Using C/C++ libraries with automake and autoconf to
see how this pkg-config file would be used.

Parallel Installs
The .pc.in file should include the version number in its
name. For instance, something-1.0.pc.in. This will allow
the next version of the library to install its own
something-2.0.pc file alongside, without preventing use
of the previous version.

C++ namespaces
If you are writing a C++ library, I strongly suggest that you
put all the classes in a namespace. For instance, in the header file:

namespace Something
{
  class Example
  {
    ...
  };
} /* namespace Something */

And in the implementation file:

namespace Something
{
  ...
} /* namespace Something */

This will prevent name clashes and make it more obvious when
other code is using the library.

Example Files
You may download this example which demonstrates how to put all
these ideas together.

This example uses some 'generic' variables instead of repeating
the library name several times. This should make the project
files easier to maintain, and it is used to generate the
examplelib-config script automatically. Thanks to Cedric Gustin
for this idea.

The document Using C/C++ libraries with automake and autoconf
contains an example which links to this library.

Recommended Reading
* Using Automake and Autoconf with C++
* GNU's automake, autoconf, and libtool manuals

automake related

Using Automake and Autoconf with C++
* Introduction.
* make and configure
* automake and autoconf
* Sub Directories
* Example Files
* Recommended Reading
* Translations
The automake and autoconf tools can be used to manage C++
projects under Unix. They should save a lot of time compared to
writing Makefiles and configure scripts by hand, while ensuring that your project is
structured according to GNU standards.

However, it is difficult for beginners to get started.
Hopefully, this tutorial will provide enough information for C++
programmers who are new to Unix to create their first C++
projects, while gaining a superficial understanding of what the
tools are doing.

I am not an expert on automake and autoconf, so I welcome
constructive advice on this tutorial. If you find problems with
the examples, please try to provide patches.

make and configure
The make tool can be used to manage multi-file projects. make
uses the Makefile file in your project folder, which lists the
various compiling and linking steps, targets, and dependencies.
make is well explained in C-Scene: Multi-File Projects and the
GNU Make Utility.

A configure script can be used to aid cross-platform compiling.
A suitable configure script should interpret a Makefile.in file
and then create a platform-specific Makefile file. It will do
this after performing several tests to determine the
characteristics of the platform.

This allows a user to type './configure' and then 'make' to
compile a project on his platform.

automake and autoconf
Obviously most well-written Makefiles and configure scripts will
look very similar. In fact, GNU provide guidelines about what
should be in these files. Therefore, GNU created automake and
autoconf to simplify the process and ensure that the Makefile
and configure script conform to GNU standards.

Here is a brief explanation of how these tools are used. You can
see examples of the files used by these tools in the Examples
Files section.

Note: These tools use the m4 programming language. aclocal adds
aclocal.m4 to your project directory, which contains some m4
macros which are needed.

autoconf looks for a file called configure.ac (or,
previously, configure.in). It then creates the configure
script, based on the macros which it finds.

Whenever you add a macro to configure.ac, you should run
aclocal as well as autoconf, because aclocal scans
configure.ac to find which macros it should provide.

Lines which every configure.ac should have
Every configure.ac should have lines like the
following:

AC_INIT(hello.cc)
AM_INIT_AUTOMAKE(hello, 0.1)
AM_CONFIG_HEADER(config.h)
AC_PROG_CC
AC_PROG_CXX
AC_PROG_INSTALL
AC_OUTPUT(Makefile)

The AC_INIT macro can take any source file as an
argument. It just checks that the file is there,
which should, in turn, mean that the source
directory is there.

The AM_INIT_AUTOMAKE line adds several standard
checks. It takes the program name and version
number as arguments.

AC_PROG_CC indicates that the source code may be
in C. If the source code is C++ then we also
need the AC_PROG_CXX macro.

AC_PROG_INSTALL will generate an install target
so that users may just type 'make install' to
install the software.

AC_OUTPUT indicates the name of the Makefile
which will be generated.

Using a Config Header
The AM_CONFIG_HEADER(config.h) line indicates
that you will be using a config.h file. autoconf
will then need a config.h.in file, which it
processes to create the config.h file. This is
#included by your source code and provides a way
for people to customise the configuration for
their platform, via #defines. config.h.in can be
generated automatically with the autoheader tool.

However, you need a stamp-h file in your project
to ensure that automake regenerates config.h
from config.h.in. Type 'touch stamp-h' to add
this file to your project.

automake looks for a file called Makefile.am. It then
creates a Makefile.in, based on the macros which it
finds. This is later used by the configure script (see above).

GNU-style projects, or not
Because automake tries to make a GNU-style
project by default, it will add a COPYING file
and complain if some other necessary informative
text files are missing. You can add blank files
with the following command:
touch NEWS README AUTHORS ChangeLog

If you do not want these GNU-style files, then
you could add the following to your Makefile.am
AUTOMAKE_OPTIONS = foreign

Thanks to Marc van Woerkom for this suggestion.

Telling automake about your source files
Use lines like the following to name your
program and list its source files:

bin_PROGRAMS = hello

hello_SOURCES = hello.h hello.cc main.cc

Note that the second variable is prefixed with
the value of the first variable. This is a
common practice with autoconf and automake.

The Whole Process
Assuming that you have written appropriate Makefile.am
and configure.ac files (there are examples below), you
should be able to build your project by entering the
following commands:

* 'autoheader' - creates config.h.in
* 'touch NEWS README AUTHORS ChangeLog'
* 'touch stamp-h'
* 'aclocal' - adds aclocal.m4 to the directory. Defines
some m4 macros used by the auto tools.
* 'autoconf' - creates configure from configure.ac
* 'automake' - creates Makefile.in from Makefile.am
* './configure' - creates Makefile from Makefile.in
* 'make'

Just repeat the last 5 steps to completely rebuild the
project. Most projects have an autogen.sh script that
runs everything up to the configure step.
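An autogen.sh wrapper for the steps above might look like the following sketch (tool invocations vary between projects; many also pass --add-missing to automake, as here). The snippet writes the script and syntax-checks it without requiring the autotools to be installed:

```shell
# Write a minimal autogen.sh that runs the tools in the order
# described above, then syntax-check it with 'sh -n'.
cat > autogen.sh <<'EOF'
#!/bin/sh
set -e                    # stop at the first failing tool
touch NEWS README AUTHORS ChangeLog stamp-h
aclocal                   # adds aclocal.m4 (m4 macros)
autoheader                # creates config.h.in
autoconf                  # creates configure from configure.ac
automake --add-missing    # creates Makefile.in from Makefile.am
EOF
chmod +x autogen.sh
sh -n autogen.sh          # parse-only check; does not run the tools
```

After running it in a real project, the usual './configure && make' follows.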

Sub Directories
Project files should, of course, be organised in sub folders.
Ideally, all of the source files should be in a folder called
'src' in the project folder, with all of the other files (such
as makefiles, configure scripts, and readmes) separate at the
top. Projects which have several levels of folders are called
'Deep' projects. I have listed the necessary steps here, but you
should look at the Example Files section to see them in context.

When using sub-directories, you need to do the following:

1. Add a SUBDIRS entry to the top-level Makefile.am. For example:

SUBDIRS = doc intl po src tests

Note that the directory names are separated only by spaces.

2. Add a Makefile.am file to every sub directory. The
sub-directories do not need configure.ac files. Be sure to add
the Makefiles to the list in the AC_OUTPUT macro in the top-level configure.ac.

For sub directories containing additional source code
3. Add the AC_PROG_RANLIB macro to your configure.ac.
This will allow you to build code in sub-directories
into temporary libraries, which make will then link in
with the rest of the code.

4. Add some macros to the Makefile.am of any source
directory under src. These will build a non-installing
library. You need to give the library a name beginning
with 'lib', specify the sources, and specify the
locations of any header files. For example:

noinst_LIBRARIES = libfoo.a

libfoo_a_SOURCES = foo.h foo.cc

INCLUDES = -I@top_srcdir@/src/includes

Notice that the SOURCES macro uses the library name with
an underscore instead of a dot. Also, notice the use of
the top_srcdir variable to refer to the top level of the project.

5. Use an LDADD macro in the Makefile.am of a higher
directory to link the temporary library with any code
that uses it. For example,

LDADD = foofiles/libfoo.a
For sub directories containing non-source files
3. The Makefile.am in the sub-directory should contain a
line like the following:

EXTRA_DIST = somefile.txt someotherfile.html

This tells automake that you want the files to be
distributed, but that they do not need to be compiled.

Example Files
Here are some example configure.ac and Makefile.am files. They
are sufficient to manage a C++ project which uses the Standard C++ Library.

See the automake and autoconf manuals for information on the
macros and variable names used in these files. I do not want to
make this seem more complicated by explaining each line of these files.

These examples are for a 'Deep' project with the following
structure:

Makefile.am
configure.ac
src/
    Makefile.am
    hello.h
    hello.cc
    main.cc
    foofiles/
        Makefile.am
        foo.h
        foo.cc

configure.ac
AC_INIT(src/main.cc)
AM_INIT_AUTOMAKE(hello, 0.1)
AM_CONFIG_HEADER(config.h)
AC_PROG_CC
AC_PROG_CXX
AC_PROG_RANLIB
AC_PROG_INSTALL
AC_OUTPUT(Makefile src/Makefile src/foofiles/Makefile)
Makefile.am for the src directory
bin_PROGRAMS = hello

hello_SOURCES = hello.h hello.cc main.cc

SUBDIRS = foofiles

LDADD = foofiles/libfoo.a
Makefile.am for foofiles directory under src
noinst_LIBRARIES = libfoo.a

libfoo_a_SOURCES = foo.h foo.cc

INCLUDES = -I@top_srcdir@/

You may download a simple example project here:

Recommended Reading
* Building C/C++ libraries with automake and autoconf
* Using C/C++ libraries with automake and autoconf
* GNU Autoconf, Automake, and Libtool: New online book by the authors of these tools
* C-Scene: Multi-File Projects and the GNU Make Utility
* GNU's automake and autoconf manuals
* Learning autoconf and automake
* Learning the GNU Development tools
* Programming with GNU Software, Mike Loukides & Andy Oram

Monday, August 20, 2007

Bash: command list (|| &&)


#!/bin/bash
# delete.sh, not-so-cunning file deletion utility.
# Usage: delete filename

E_BADARGS=65

if [ -z "$1" ]
then
  echo "Usage: `basename $0` filename"
  exit $E_BADARGS # No arg? Bail out.
else
  file=$1 # Set filename.
fi

[ ! -f "$file" ] && echo "File \"$file\" not found. \
Cowardly refusing to delete a nonexistent file."
# AND LIST, to give error message if file not present.
# Note echo message continued on to a second line with an escape.

[ ! -f "$file" ] || (rm -f $file; echo "File \"$file\" deleted.")
# OR LIST, to delete file if present.

# Note logic inversion above.
# AND LIST executes on true, OR LIST on false.

exit 0
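The short-circuit behavior the script relies on can be seen in isolation. A minimal sketch:

```shell
# AND list: the second command runs only if the first succeeds.
[ -f /nonexistent-file ] && echo "found"      # prints nothing

# OR list: the second command runs only if the first fails.
[ -f /nonexistent-file ] || echo "not found"  # prints "not found"

# Chaining both gives a compact if/else. Beware: if the middle
# command can fail, the || branch runs too - use a real 'if' then.
[ -z "" ] && result=empty || result=nonempty
echo "$result"                                # prints "empty"
```

This is exactly the logic inversion the comments above describe: an AND list continues on success, an OR list continues on failure.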

Sunday, August 19, 2007

Gnu-make: pattern rule

A pattern rule contains the character `%' (exactly one of them) in the
target; otherwise, it looks exactly like an ordinary rule. The target is
a pattern for matching file names; the `%' matches any nonempty
substring, while other characters match only themselves.

Here are some examples of pattern rules actually predefined in make.
First, the rule that compiles `.c' files into `.o' files:

%.o : %.c
$(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@

defines a rule that can make any file `x.o' from `x.c'. The command uses
the automatic variables `$@' and `$<' to substitute the names of the
target file and the source file in each case where the rule applies (see
section Automatic Variables).

Gnu-make: patsubst

$(var:pattern=replacement)
is equivalent to
$(patsubst pattern,replacement,$(var))

The second shorthand simplifies one of the most common uses of patsubst:
replacing the suffix at the end of file names.

$(var:suffix=replacement)
is equivalent to
$(patsubst %suffix,%replacement,$(var))

For example, you might have a list of object files:

objects = foo.o bar.o baz.o

To get the list of corresponding source files, you could simply write:

$(objects:.o=.c)

instead of using the general form:

$(patsubst %.o,%.c,$(objects))
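Assuming GNU make is on the PATH, both forms can be checked from the shell with a throwaway makefile (printf is used so the recipe tabs come out right):

```shell
# Generate a makefile whose 'all' target echoes the substitution
# reference and the equivalent patsubst call.
printf 'objects = foo.o bar.o baz.o\n\nall:\n\t@echo $(objects:.o=.c)\n\t@echo $(patsubst %%.o,%%.c,$(objects))\n' \
    > patsubst-demo.mk

make -s -f patsubst-demo.mk
# Both lines print: foo.c bar.c baz.c
```

The single quotes keep the shell from expanding $(objects) itself; make does the expansion when it runs the recipe.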

Friday, August 10, 2007

initramfs

initramfs by default loads a cpio image, but it can also load an initrd
(a filesystem) image.

The "initramfs" concept has been in the 2.5 plans since back before
there was a 2.5 kernel. Things have been very quiet on the initramfs
front, however, until the first patch showed up and was merged into the
2.5.46 tree.

The basic idea behind initramfs is that a cpio archive can be attached
to the kernel image itself. At boot time, the kernel unpacks that
archive into a RAM-based disk, which is then mounted and used at the
initial root filesystem. Much of the kernel initialization and bootstrap
code can then be moved into this disk and run in user mode. Tasks like
finding the real root disk, boot-time networking setup, handling of
initrd-style ramdisks, ACPI setup, etc. will be shifted out of the
kernel in this way.

An obvious advantage of this scheme is that the size of the kernel code
itself can shrink. That does not free memory for a running system, since
the Linux kernel already dumps initialization code when it is no longer
needed. But a smaller code base for the kernel itself makes the whole
thing a little easier to maintain, and that is always a good thing. But
the real advantages of initramfs are:

* Customizing the early boot process becomes much easier. Anybody
who needs to change how the system boots can now do so with
user-space code; patching the kernel itself will no longer be required.

* Moving the initialization code into user space makes it easier
to write that code - it has a full C library, memory protection, and so on.

* As pointed out by Alexander Viro: user-space code is required to
deal with the kernel via system calls. This requirement will
flush a lot of in-kernel "magic" currently used by the
initialization code; the result will be cleaner, safer code.



_start() -> start_kernel() -> rest_init()->

kernel_thread(init, NULL, CLONE_FS | CLONE_SIGHAND);

init() -> do_basic_setup():

/*
 * Ok, the machine is now initialized. None of the devices
 * have been touched yet, but the CPU subsystem is up and
 * running, and memory and process management works.
 * Now we can finally start doing some real work..
 */
static void __init do_basic_setup(void)
{
	/* drivers will send hotplug events */
	...
}

do_initcalls() --> populate_rootfs()

Monday, August 06, 2007

Gnu-make: "-include"

If you want make to simply ignore a makefile which does not exist and
cannot be remade, with no error message, use the -include directive
instead of include, like this:

-include filenames...

This acts like include in every way except that there is no error (not
even a warning) if any of the filenames do not exist. For compatibility
with some other make implementations, sinclude is another name for -include.

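Assuming GNU make is available, the difference is easy to demonstrate with two throwaway makefiles:

```shell
# 'include' of a missing file is a fatal error; '-include' is silent.
printf -- '-include no-such-file.mk\nall:\n\t@echo ok\n' > quiet.mk
printf -- 'include no-such-file.mk\nall:\n\t@echo ok\n'  > loud.mk

make -s -f quiet.mk                            # prints "ok"
make -s -f loud.mk || echo "failed as expected"
```

The quiet variant still tries to remake no-such-file.mk first; it only stays silent because no rule exists and -include tolerates that.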
Tuesday, July 10, 2007

CVS import


cvs import is the way you create a new directory or tree of directories in CVS.
You use a cvs import command when you want to add a whole directory to
CVS. CVS import is not used to add a bunch of files to an existing
directory - for that use "cvs add" (see above). Before getting into the
command itself, first pick a place in the existing cvs tree where you
want to add your stuff. For this example, let's say you wanted to add a
directory of "tool" files to cvs at the new directory "common/tool", so
its reference directory would be $CD_SOFT/ref/common/tool/. The argument
you would have to give to the cvs import command will be "common/tool".
The argument is always the full pathname, after the $CD_SOFT/cvs part,
of the root of the directory you want to create, even if some of the
intermediate directories already exist (in this case, "common/" already exists).

cvs import always imports all the files, and all subdirectories, in the
working directory from which it is being run. That is, it imports a
directory tree into the place specified by the argument. So, be careful
not to do something like cd to a directory which contains the root of a
directory tree which you want to import and then issue cvs import giving
as the argument the leaf-of-directory-tree you want to import, e.g. cd
~/work (containing common/tool) and then cvs import common/tool. That
would create $CD_SOFT/cvs/common/tool/common/tool/!! If you only want to
import a single directory, then the root and the leaf are the same
directory, so you can use a sequence of commands as in example 1) below.
But if you really want to import more than one directory, you have to
use a sequence like that in example 2.

Also be careful not to import a directory system that contains a
subdirectory that is itself the result of a CVS checkout, because that
subdirectory will contain a CVS subdirectory. This is very messy to
clean up. You shouldn't ever want to anyway, because cvs import must
always be run from the directory whose files you want to import, and
always takes the fully qualified cvs module name as the argument.

The two other arguments to cvs import are the "vendor tag", and the
"release" tag:

* "vendor tag" is a free form text string you're supposed to use
to identify the vendor of software. Since it's a CVS tag, it
should be all upper case and not have any special characters
save the "_" (like no "." or "-"). Our standard for this tag is
"CD_SOFT", when we're the vendors.
* "release tag", is also a free form text string you're supposed
to use to identify the release of the software you're putting in
CVS. For EPICS software, we use a release tag like "R3_13_6",
for all other software, for the initial release, we use "R1_0".

After you have done the cvs import, be sure to go to the corresponding
reference area and do the initial cvs checkout.


cvs import [options] module vendor-tag release-tag


cd ~/work/common/tool

cvs import common/tool CD_SOFT R1_0

cd $CD_SOFT/ref

cvs checkout common/tool

Say ~/work/common/tool
is the directory where
all the tool files are.
All the files in that
directory will be
imported (unless they're
in the CVSIGNORE set).

Imports all the files
from your working
directory, into cvs/common/tool.

Creates the initial
checkout of the
directory you just
created in cvs.


cd ~/work

cvs import app/myapp CD_SOFT R1_0

cd $CD_SOFT/ref

cvs checkout app/myapp

Say ~/work is the root
directory of where all
the files are of a new
application are. All the
files and all the
subdirectories in that
directory will be
imported into
cvs/app/myapp (unless
they're in the CVSIGNORE set).

Imports all the files
from ~/work, into cvs/app/myapp.


cvs import -m "initial
import" ... app/myapp

As above, but gave a
comment on the command
line rather than making
cvs start an editor and
asking for the comment

Monday, July 09, 2007

Gnu-make: defining variables

There are two ways that a variable in GNU make can have a value; we call
them the two flavors of variables. The two flavors are distinguished in
how they are defined and in what they do when expanded.

The first flavor of variable is a recursively expanded variable.
Variables of this sort are defined by lines using `=' (see section
Setting Variables) or by the define directive (see section Defining
Variables Verbatim). The value you specify is installed verbatim; if it
contains references to other variables, these references are expanded
whenever this variable is substituted (in the course of expanding some
other string). When this happens, it is called recursive expansion.

For example,

foo = $(bar)
bar = $(ugh)
ugh = Huh?

all:;echo $(foo)

will echo `Huh?': `$(foo)' expands to `$(bar)' which expands to `$(ugh)'
which finally expands to `Huh?'.

This flavor of variable is the only sort supported by other versions of
make. It has its advantages and its disadvantages. An advantage (most
would say) is that:

CFLAGS = $(include_dirs) -O
include_dirs = -Ifoo -Ibar

will do what was intended: when `CFLAGS' is expanded in a command, it
will expand to `-Ifoo -Ibar -O'. A major disadvantage is that you cannot
append something on the end of a variable, as in
CFLAGS = $(CFLAGS) -O

because it will cause an infinite loop in the variable expansion.
(Actually make detects the infinite loop and reports an error.)

Another disadvantage is that any functions (see section Functions for
Transforming Text) referenced in the definition will be executed every
time the variable is expanded. This makes make run slower; worse, it
causes the wildcard and shell functions to give unpredictable results
because you cannot easily control when they are called, or even how many times.

To avoid all the problems and inconveniences of recursively expanded
variables, there is another flavor: simply expanded variables.

Simply expanded variables are defined by lines using `:=' (see
section Setting Variables). The value of a simply expanded variable is
scanned once and for all, expanding any references to other variables
and functions, when the variable is defined. The actual value of the
simply expanded variable is the result of expanding the text that you
write. It does not contain any references to other variables; it
contains their values as of the time this variable was defined.

x := foo
y := $(x) bar
x := later

is equivalent to

y := foo bar
x := later

When a simply expanded variable is referenced, its value is substituted verbatim.

GNU-Make: Variables from the Environment

Variables from the Environment
Variables in make can come from the environment in which make is run.
Every environment variable that make sees when it starts up is
transformed into a make variable with the same name and value. But an
explicit assignment in the makefile, or with a command argument,
overrides the environment. (If the `-e' flag is specified, then values
from the environment override assignments in the makefile. See section
Summary of Options. But this is not recommended practice.)

Thus, by setting the variable CFLAGS in your environment, you can cause
all C compilations in most makefiles to use the compiler switches you
prefer. This is safe for variables with standard or conventional
meanings because you know that no makefile will use them for other
things. (But this is not totally reliable; some makefiles set CFLAGS
explicitly and therefore are not affected by the value in the environment.)

When make is invoked recursively, variables defined in the outer
invocation can be passed to inner invocations through the environment
(see section Recursive Use of make). By default, only variables that
came from the environment or the command line are passed to recursive
invocations. You can use the export directive to pass other variables.
See section Communicating Variables to a Sub-make, for full details.
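Assuming GNU make is available, the precedence rules above can be seen with two small generated makefiles (names here are arbitrary):

```shell
# One makefile uses CFLAGS without setting it; the other sets it.
printf 'all:\n\t@echo "CFLAGS=$(CFLAGS)"\n'                > env-demo.mk
printf 'CFLAGS = -O2\nall:\n\t@echo "CFLAGS=$(CFLAGS)"\n'  > explicit.mk

CFLAGS=-g make -s -f env-demo.mk      # env var is seen:       CFLAGS=-g
CFLAGS=-g make -s -f explicit.mk      # makefile wins:         CFLAGS=-O2
CFLAGS=-g make -s -f explicit.mk -e   # -e makes the env win:  CFLAGS=-g
```

This matches the text: the environment seeds make variables, explicit assignments override it, and -e reverses that (and is discouraged).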

Monday, April 02, 2007


JTAG Interface

A JTAG interface is a special four/five-pin interface added to a chip,
designed so that multiple chips on a board can have their JTAG lines
daisy-chained together, and a test probe need only connect to a single
"JTAG port" to have access to all chips on a circuit board. The
connector pins are

1. TDI (Test Data In)
2. TDO (Test Data Out)
3. TCK (Test Clock)
4. TMS (Test Mode Select)
5. TRST (Test ReSeT) optional.

Since only one data line is available, the protocol is necessarily
serial like SPI. The clock input is at the TCK pin. Configuration is
performed by manipulating a state machine one bit at a time through a
TMS pin. One bit of data is transferred in and out per TCK clock pulse
at the TDI and TDO pins, respectively. Different instruction modes can
be loaded to read the chip ID, sample input pins, drive (or float)
output pins, manipulate chip functions, or bypass (pipe TDI to TDO to
logically shorten chains of multiple chips). The operating frequency of
TCK varies depending on the chip, but it is typically 10-100 MHz
(100-10ns per bit).

TAP Instructions

The hardware controller communicates serially with the JTAG-compliant
device through the TAP controller and uses the TCK and TMS inputs to
clock in state-machine commands.
These three TAP instructions manipulate the data:
• SAMPLE/PRELOAD—Used to either SAMPLE the data currently contained in
the BSCs, or to PRELOAD data into the BSCs.
• EXTEST—Performed when the BSCs attached to the JTAG compliant device
input pins act as sensors while the BSCs attached to output pins
propagate data to interconnecting devices. The interconnecting devices
may or may not be JTAG-compliant.
• BYPASS—Reduces the BSC shift path through the device to a single bit
register. For example, if a device contains 401 BSCs and the BYPASS
instruction is executed, the BSCs reduce to one for that device.

Program Flash Memory via JTAG

Engineers perform OBP (on-board programming) by serially shifting data through the BSR and
latching the data into the BSCs. After the appropriate data is loaded
into the BSCs, the EXTEST instruction is used to propagate the BSC
contents to the Flash
memory. For instance, if the JTAG-compliant device contains 224 BSCs,
224 TCKs are used to clock in each bit of data. This represents one BSR shift.
Each series of BSR shifts outputs one logical state, either high or low,
to the Flash memory. Each data bit is clocked in on the rising edge of
TCK, then the data bit is latched into the BSCs. After completely
loading the BSC with all data bits, the EXTEST instruction is used to
perform a BSR shift, which outputs data to the Flash memory. Each BSR
shift increases programming time; minimizing the number of BSR shifts
enables faster programming times.

JTAG Tools

Tuesday, February 27, 2007

variables in bash


A="\"a b c\""

echo $A #"a b c" - three strings

echo "$A" #"a b c" - one string

echo '$A' # $A
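Because echo rejoins its arguments with single spaces, the "three strings" versus "one string" difference above is invisible in its output. It is easier to see with printf, which prints each argument separately. A small sketch:

```shell
A="\"a b c\""

# Unquoted: $A undergoes word splitting -> three arguments.
printf '[%s]\n' $A      # ["a] / [b] / [c"]

# Double-quoted: "$A" stays one argument.
printf '[%s]\n' "$A"    # ["a b c"]

# Single quotes suppress expansion entirely.
printf '[%s]\n' '$A'    # [$A]
```

Counting the output lines of the first call confirms the shell really passed three separate words.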

Blackfin CPLB - 1

Data Structure:

#define MAX_CPLBS (16 * 2)

/*
 * Number of required data CPLB switchtable entries:
 * MEMSIZE / 4 (we mostly install 4M page size CPLBs),
 * approx 16 smaller 1MB page size CPLBs for alignment purposes,
 * 1 for L1 Data Memory,
 * 1 for ASYNC Memory.
 */

#define MAX_SWITCH_D_CPLBS (((CONFIG_MEM_SIZE / 4) + 16 + 1 + 1 + 1) * 2)

/*
 * Number of required instruction CPLB switchtable entries:
 * MEMSIZE / 4 (we mostly install 4M page size CPLBs),
 * approx 12 smaller 1MB page size CPLBs for alignment purposes,
 * 1 for L1 Instruction Memory.
 */

#define MAX_SWITCH_I_CPLBS (((CONFIG_MEM_SIZE / 4) + 12 + 1 + 1) * 2)



struct cplb_desc {
	u32 start;         /* start address */
	u32 end;           /* end address */
	u32 psize;         /* preferred size, if any; otherwise 1MB or 4MB */
	u16 attr;          /* attributes */
	u16 i_conf;        /* I-CPLB DATA */
	u16 d_conf;        /* D-CPLB DATA */
	u16 valid;         /* valid */
	const s8 name[30]; /* name */
};

struct cplb_tab {
	u_long *tab;
	u16 pos;
	u16 size;
};

u_long icplb_table[MAX_CPLBS+1];
u_long dcplb_table[MAX_CPLBS+1];

/* Till here we have been discussing static memory management.
 * However, operating environments commonly define more CPLB descriptors
 * to cover the entire addressable memory than will fit into the
 * available on-chip 16 CPLB MMRs. When this happens, the tables below
 * will be used, holding all the potentially required CPLB descriptors.
 * This is how the Page Descriptor Table is implemented in uClinux/Blackfin.
 */

u_long ipdt_table[MAX_SWITCH_I_CPLBS+1];
u_long dpdt_table[MAX_SWITCH_D_CPLBS+1];

u_long ipdt_swapcount_table[MAX_SWITCH_I_CPLBS];
u_long dpdt_swapcount_table[MAX_SWITCH_D_CPLBS];

struct s_cplb {
	struct cplb_tab init_i;
	struct cplb_tab init_d;
	struct cplb_tab switch_i;
	struct cplb_tab switch_d;
};

static struct cplb_desc cplb_data[] = {
	{
		.start = 0,
		.end = SIZE_4K,
		.psize = SIZE_4K,
		.i_conf = SDRAM_OOPS,
		.d_conf = SDRAM_OOPS,
		.valid = 1,
		.valid = 0,
		.name = "ZERO Pointer Saveguard",
	},
	{
		.start = L1_CODE_START,
		.psize = SIZE_4M,
		.i_conf = L1_IMEMORY,
		.d_conf = 0,
		.valid = 1,
		.name = "L1 I-Memory",
	},
	{
		.start = L1_DATA_A_START,
		.psize = SIZE_4M,
		.i_conf = 0,
		.d_conf = L1_DMEMORY,
		.valid = 1,
		.name = "L1 D-Memory",
	},
	{
		.start = 0,
		.end = 0, /* dynamic */
		.psize = 0,
		.valid = 1,
		.name = "SDRAM Kernel",
	},
	{
		.start = 0, /* dynamic */
		.end = 0, /* dynamic */
		.psize = 0,
		.d_conf = SDRAM_DNON_CHBL,
		.valid = 1,
		.name = "SDRAM RAM MTD",
	},
	{
		.start = 0, /* dynamic */
		.end = 0, /* dynamic */
		.psize = SIZE_1M,
		.d_conf = SDRAM_DNON_CHBL,
		.valid = 1, //(DMA_UNCACHED_REGION > 0)
		.name = "SDRAM Uncached DMA ZONE",
	},
	{
		.start = 0, /* dynamic */
		.end = 0, /* dynamic */
		.psize = 0,
		.attr = SWITCH_T | D_CPLB,
		.i_conf = 0, /* dynamic */
		.d_conf = 0, /* dynamic */
		.valid = 1,
		.name = "SDRAM Reserved Memory",
	},
	{
		.start = ASYNC_BANK0_BASE,
		.psize = 0,
		.attr = SWITCH_T | D_CPLB,
		.d_conf = SDRAM_EBIU,
		.valid = 1,
		.name = "ASYNC Memory",
	},
#if defined(CONFIG_BF561)
	{
		.start = L2_SRAM,
		.end = L2_SRAM_END,
		.psize = SIZE_1M,
		.attr = SWITCH_T | D_CPLB,
		.i_conf = L2_MEMORY,
		.d_conf = L2_MEMORY,
		.valid = 1,
		.valid = 0,
		.name = "L2 Memory",
	},
#endif
};



/* Initialize the tables below using the definitions in cplb_data[]: */

u_long icplb_table[MAX_CPLBS+1];
u_long dcplb_table[MAX_CPLBS+1];

u_long ipdt_table[MAX_SWITCH_I_CPLBS+1];
u_long dpdt_table[MAX_SWITCH_D_CPLBS+1];


static unsigned short __init
fill_cplbtab(struct cplb_tab *table,
	     unsigned long start, unsigned long end,
	     unsigned long block_size, unsigned long cplb_data);

static unsigned short __init
close_cplbtab(struct cplb_tab *table);

static void __init generate_cpl_tables(void);



/* Copy icplb_table[], dcplb_table[] to CPLB MMRs */

Note that when the Cache is disabled, the CPLB MMRs will not be filled.


/* Initialize Instruction CPLBS */

I0.L = (ICPLB_ADDR0 & 0xFFFF);
I0.H = (ICPLB_ADDR0 >> 16);

I1.L = (ICPLB_DATA0 & 0xFFFF);
I1.H = (ICPLB_DATA0 >> 16);

I2.L = _icplb_table;
I2.H = _icplb_table;

r1 = -1; /* end point comparison */
r3 = 15; /* max counter */

/* read entries from table */

.Lread_iaddr:
R0 = [I2++];
CC = R0 == R1;
IF CC JUMP .Lidone;
[I0++] = R0;

R2 = [I2++];
[I1++] = R2;
R3 = R3 + R1;
CC = R3 == R1;
IF !CC JUMP .Lread_iaddr;

.Lidone:
/* Enable Instruction Cache */
P0.l = (IMEM_CONTROL & 0xFFFF);
P0.h = (IMEM_CONTROL >> 16);
R1 = [P0];
R0 = R0 | R1;

/* Anomaly 05000125 */
SSYNC; /* SSYNC required before writing to IMEM_CONTROL. */
.align 8;
[P0] = R0;


/* Initialize Data CPLBS */

I0.L = (DCPLB_ADDR0 & 0xFFFF);
I0.H = (DCPLB_ADDR0 >> 16);

I1.L = (DCPLB_DATA0 & 0xFFFF);
I1.H = (DCPLB_DATA0 >> 16);

I2.L = _dcplb_table;
I2.H = _dcplb_table;

R1 = -1; /* end point comparison */
R3 = 15; /* max counter */

/* read entries from table */
.Lread_daddr:
R0 = [I2++];
cc = R0 == R1;
IF CC JUMP .Lddone;
[I0++] = R0;

R2 = [I2++];
[I1++] = R2;
R3 = R3 + R1;
CC = R3 == R1;
IF !CC JUMP .Lread_daddr;
.Lddone:
P0.L = (DMEM_CONTROL & 0xFFFF);
P0.H = (DMEM_CONTROL >> 16);
R1 = [P0];


R0 = R0 | R1;
/* Anomaly 05000125 */
SSYNC; /* SSYNC required before writing to DMEM_CONTROL. */
.align 8;
[P0] = R0;


Exception -> entry(trap)->extable:


ENTRY(_trap) /* Exception: 4th entry into system event table (supervisor mode) */
	/* Since the kernel stack can be anywhere, it's not guaranteed to be
	 * covered by a CPLB. Switch to an exception stack; use RETN as a
	 * scratch register (for want of a better option).
	 */
	retn = sp;
sp.l = _exception_stack_top;
sp.h = _exception_stack_top;
/* Try to deal with syscalls quickly. */
[--sp] = ASTAT;
[--sp] = (R7:6, P5:4);
r7 = SEQSTAT; /* reason code is in bit 5:0 */
r7 = r7 & r6;
p5.h = _extable;
p5.l = _extable;
p4 = r7;
p5 = p5 + (p4 << 2);
p4 = [p5];
jump (p4);

extable: (64 entries - one per SEQSTAT[EXCAUSE] value (bits 5:0))

/* Put these in the kernel data section - that should always be covered by
 * a CPLB. This is needed to ensure we don't get double fault conditions.
 */

/* Entry for each EXCAUSE[5:0].
 * This table must be in sync with the table in ./kernel/traps.c.
 * The EXCPT instruction can provide 4 bits of EXCAUSE, allowing 16 entries
 * to be user defined.
 */
The Force Exception instruction forces an exception with code uimm4.
When the EXCPT instruction is issued, the sequencer vectors to the
exception handler that the user provides.
Application-level code uses the Force Exception instruction for
operating system calls. The instruction does not set the EVSW bit (bit
3) of the ILAT register.

.long _ex_syscall; /* 0x00 - User Defined - Linux Syscall
.long _ex_soft_bp /* 0x01 - User Defined - Software
breakpoint */
.long _ex_trap_c /* 0x02 - User Defined */
.long _ex_trap_c /* 0x03 - User Defined - Atomic test
and set service */
.long _ex_spinlock /* 0x04 - User Defined */
.long _ex_trap_c /* 0x05 - User Defined */
.long _ex_trap_c /* 0x06 - User Defined */
.long _ex_trap_c /* 0x07 - User Defined */
.long _ex_trap_c /* 0x08 - User Defined */
.long _ex_trap_c /* 0x09 - User Defined */
.long _ex_trap_c /* 0x0A - User Defined */
.long _ex_trap_c /* 0x0B - User Defined */
.long _ex_trap_c /* 0x0C - User Defined */
.long _ex_trap_c /* 0x0D - User Defined */
.long _ex_trap_c /* 0x0E - User Defined */
.long _ex_trap_c /* 0x0F - User Defined */
.long _ex_single_step /* 0x10 - HW Single step */
.long _ex_trap_c /* 0x11 - Trace Buffer Full */
.long _ex_trap_c /* 0x12 - Reserved */
.long _ex_trap_c /* 0x20 - Reserved */
.long _ex_trap_c /* 0x21 - Undefined Instruction */
.long _ex_trap_c /* 0x22 - Illegal Instruction
Combination */
.long _ex_dcplb /* 0x23 - Data CPLB Protection Violation
.long _ex_trap_c /* 0x24 - Data access misaligned */
.long _ex_trap_c /* 0x25 - Unrecoverable Event */
.long _ex_dcplb /* 0x26 - Data CPLB Miss */
.long _ex_trap_c /* 0x27 - Data CPLB Multiple Hits -
Linux Trap Zero */
.long _ex_trap_c /* 0x28 - Emulation Watchpoint */
.long _ex_trap_c /* 0x29 - Instruction fetch access error
(535 only) */
.long _ex_trap_c /* 0x2A - Instruction fetch misaligned
.long _ex_icplb /* 0x2B - Instruction CPLB protection
Violation */
.long _ex_icplb /* 0x2C - Instruction CPLB miss */
.long _ex_trap_c /* 0x2D - Instruction CPLB Multiple Hits
.long _ex_trap_c /* 0x2E - Illegal use of Supervisor
Resource */

/* Slightly simplified and streamlined entry point for CPLB misses.
* This one does not lower the level to IRQ5, and thus can be used to
* patch up CPLB misses on the kernel stack.
* Work around an anomaly: if we see a new DCPLB fault, return
* without doing anything. Then, if we get the same fault again,
* handle it.
*/
p5.l = _last_cplb_fault_retx;
p5.h = _last_cplb_fault_retx;
r7 = [p5];
r6 = retx;
[p5] = r6;
cc = r6 == r7;
if !cc jump _return_from_exception;
/* fall through */
/* Used by the assembly entry point to work around an anomaly. */
_last_cplb_fault_retx:
.long 0;

(R7:6,P5:4) = [sp++];
ASTAT = [sp++];
call __cplb_hdr;

/* To get here, we just tried and failed to change a CPLB
 * entry, so handle things in trap_c (C code) by lowering to
 * IRQ5, just like we normally do. Since this is not a
 * "normal" return path, we have to do a lot of stuff to
 * the stack to get ready, so we can fall through - we
 * need to make a CPLB exception look like a normal exception.
 */

[--sp] = ASTAT;
[--sp] = (R7:6, P5:4);

/* Call C code (trap_c) to handle the exception, which most
* likely involves sending a signal to the current process.
 * To avoid double faults, lower our priority to IRQ5 first. */
P5.h = _exception_to_level5;
P5.l = _exception_to_level5;
p4.l = lo(EVT5);
p4.h = hi(EVT5);
[p4] = p5;

/* Disable all interrupts, but make sure level 5 is enabled so
* we can switch to that level. Save the old mask. */
cli r6;
p4.l = _excpt_saved_imask;
p4.h = _excpt_saved_imask;
[p4] = r6;
r6 = 0x3f;
sti r6;

/* Save the excause into a circular buffer, in case the instruction
 * which caused this exception causes others. */
P5.l = _in_ptr_excause;
P5.h = _in_ptr_excause;
R7 = [P5];
R7 += 4;
R6 = 0xF;
R7 = R7 & R6;
[P5] = R7;
R6.l = _excause_circ_buf;
R6.h = _excause_circ_buf;
R7 = R7 + R6;
p5 = R7;
R6 = SEQSTAT;
[P5] = R6;

(R7:6,P5:4) = [sp++];
ASTAT = [sp++];
raise 5;



.type _cplb_mgr, STT_FUNC;
.type _panic_cplb_error, STT_FUNC;

.align 2

.global __cplb_hdr;
.type __cplb_hdr, STT_FUNC;

/* Mask the contents of SEQSTAT and leave only EXCAUSE in R2 */
R2 = SEQSTAT;
R2 <<= 26;
R2 >>= 26;

R1 = 0x23; /* Data access CPLB protection violation */
CC = R2 == R1;
IF !CC JUMP .Lnot_data_write;
R0 = 2; /* is a write to data space*/
JUMP .Lis_icplb_miss;

.Lnot_data_write:
R1 = 0x2C; /* CPLB miss on an instruction fetch */
CC = R2 == R1;
R0 = 0; /* is_data_miss == False*/
IF CC JUMP .Lis_icplb_miss;

R1 = 0x26;
CC = R2 == R1;
IF !CC JUMP .Lunknown;

R0 = 1; /* is_data_miss == True*/


R1 = 0;

Configuration in uClinux

Everything starts from uClinux-dist/Makefile. Let's trace what happens
when we type "make menuconfig".

0. make menuconfig:

1. ".PHONY: menuconfig
menuconfig: config.in "

A phony target is one that is not really the name of a file. It is just
a name for some commands to be executed when you make an explicit
request. There are two reasons to use a phony target: to avoid a
conflict with a file of the same name, and to improve performance.
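A minimal, hypothetical example of a phony target (not taken from uClinux-dist):

```make
# "clean" is not a file; declaring it .PHONY makes make run the recipe
# unconditionally, even if a file named "clean" happens to exist.
# (Recipe lines must start with a tab.)
.PHONY: clean
clean:
	rm -f *.o
```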

2. ".PHONY: config.tk config.in
config/mkconfig > config.in "
config/mkconfig is a script that dynamically creates menuconfig items
according to vendors/AnalogDevices/<dirs>.
It creates the configuration for the first two menu levels.

3. menuconfig: config.in
$(MAKE) -C $(SCRIPTSDIR)/lxdialog all
@HELP_FILE=config/Configure.help $(CONFIG_SHELL) $(SCRIPTSDIR)/Menuconfig
Displays the first two levels of the menu - then the user chooses save,
and .config is created.

@if [ ! -f .config ]; then echo; \
echo "You have not saved your config, please re-run make config"; \
echo; exit 1; fi
@config/setconfig defaults
Use the default configuration for the applications, uClibc, and Linux.
That is, the configuration files in vendors/AnalogDevices/<boards> are
copied in as the configuration files.

@if egrep "^CONFIG_DEFAULTS_KERNEL=y" .config > /dev/null; then
$(MAKE) linux_menuconfig; fi
@if egrep "^CONFIG_DEFAULTS_MODULES=y" .config > /dev/null; then
$(MAKE) modules_menuconfig; fi
@if egrep "^CONFIG_DEFAULTS_VENDOR=y" .config > /dev/null; then
$(MAKE) config_menuconfig; fi
The user has chosen to change the above settings - so invoke the next
level of the config menu.
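The pattern above - grep .config for a CONFIG_DEFAULTS_* flag and only then recurse into the matching sub-configuration - can be tried standalone (the file and targets here are hypothetical; the make invocations are only echoed):

```shell
# Build a throwaway .config with one flag enabled, one disabled.
cfg=$(mktemp)
printf 'CONFIG_DEFAULTS_KERNEL=y\nCONFIG_DEFAULTS_MODULES=n\n' > "$cfg"

# Same test the uClinux-dist Makefile performs for each flag.
if egrep "^CONFIG_DEFAULTS_KERNEL=y" "$cfg" > /dev/null; then
    echo "would run: make linux_menuconfig"
fi
if egrep "^CONFIG_DEFAULTS_MODULES=y" "$cfg" > /dev/null; then
    echo "would run: make modules_menuconfig"
fi

rm -f "$cfg"
```

Only the first echo fires, because only CONFIG_DEFAULTS_KERNEL is set to y.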

@config/setconfig final
Checks whether the user has chosen to save the updated configuration as
the new default.

4. For the above, if the user chose to configure the applications:

" @if egrep "^CONFIG_DEFAULTS_VENDOR=y" .config > /dev/null; then
$(MAKE) config_menuconfig; fi
$(MAKEARCH) -C config menuconfig


@$(CONFIG_SHELL) -n config.in
$(MAKE) -C $(SCRIPTSDIR)/lxdialog all
@HELP_FILE=Configure.help AUTOCONF_FILE=autoconf.h $(CONFIG_SHELL) $(SCRIPTSDIR)/Menuconfig config.in

As a result, we see the following facts:

1. config/* contains config scripts, like mkconfig and setconfig. It
also contains Makefiles and config files.

2. vendors/AnalogDevices/<boards>/* contains the default configurations.
