Oct 27

The CAN interface and the low-level library that seems to be available in the demo code from ATMEL provide the physical layer (electrical) and the data link layer (frames) of the OSI model. As a result you can send and receive frames that contain up to 8 data bytes. What these data bytes mean is not determined; it is up to the developer to define how to use them.

While searching for CAN reference material, I regularly came across references to CANopen. This standard implements the network layer (variable data length), the transport layer (data segmentation), the session layer (initiate and respond) and the presentation layer (what each byte means).

Every CANopen device makes internal data (process data, parameters) available on the bus via a defined interface, whereby this data is organized in an object dictionary. Entries in the object dictionary are accessed via a 16-bit index and a supplemental 8-bit sub-index. The index range is subdivided into logical segments to organize the structure so that it is easier for users to comprehend. The name of a device, for example, can be read from index 0x1008 (sub-index 0).

Index (hex)    Object
0000              not used
0001-025F     Data Types
0260-0FFF     Reserved for further use
1000-1FFF     Communication Profile Area
2000-5FFF     Manufacturer Specific Profile Area
6000-9FFF     Standardized Device Profile Area
A000-AFFF     Reserved for further use
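
To get a feeling for what such an object dictionary means in code, a minimal sketch in C might look as follows. The entry layout, names and the "SnakeSegment" device name are my own assumptions for illustration, not part of any real CANopen stack:

```c
#include <stdint.h>
#include <stddef.h>

/* One entry in a (hypothetical) CANopen object dictionary. */
typedef struct {
    uint16_t index;     /* 16-bit index, e.g. 0x1008 */
    uint8_t  subindex;  /* 8-bit sub-index */
    const void *data;   /* pointer to the actual object */
    uint8_t  size;      /* size in bytes */
} od_entry_t;

static const char device_name[] = "SnakeSegment";

static const od_entry_t dictionary[] = {
    { 0x1008, 0, device_name, sizeof device_name - 1 }, /* device name */
};

/* Linear search; a real stack would use a sorted table or binary search. */
const od_entry_t *od_lookup(uint16_t index, uint8_t subindex)
{
    for (size_t i = 0; i < sizeof dictionary / sizeof dictionary[0]; i++)
        if (dictionary[i].index == index && dictionary[i].subindex == subindex)
            return &dictionary[i];
    return 0; /* not found */
}
```

An SDO read of index 0x1008, sub-index 0 would then boil down to `od_lookup(0x1008, 0)` and sending back the bytes the entry points to.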

I’m not sure this is needed for my robots, but it is interesting to see what is available for use on the AT90CAN controllers and how this would work in practice.

Oct 26

The CAN interface is a serial interface designed to offer very reliable data transmission in automotive applications. Every device is allowed to send data, but it must first monitor the bus for activity before transmitting. While transmitting, it must also monitor whether the bit it puts on the output is actually present on the bus; if not, it must stop sending and retry once the bus is idle again. This is similar to I2C: an open collector pulls down the data line, so if one device sends a logical one while another sends a logical zero, the zero always wins. The sender of the one reads back a zero and, as a result, stops sending.

The benefit of this is that data is not corrupted: the sender of the zero bit was never disturbed and simply continues. All other receiving devices do not even notice that a second device was trying to send. A second benefit is that lower data always wins over higher data. In other words, if the first byte were an address, the device sending to address 0x00 would be allowed to continue, while the device sending to address 0x01 would stop at the last bit, where the values differ. This allows a simple priority scheme: if the brain has address 0x00, messages to it always take precedence over messages to any other device. Once the message has been transferred, the device that was sending to address 0x01 retries.

Although the example above is correct, CAN uses a message-based protocol, not an address-based one. All devices receive every message, and each device decides whether to act on it or not. The priority of the message is embedded in the identifier bits at the beginning of the message (11 or 29 bits, depending on the CAN revision). A lower identifier value wins over a higher one (as in the address example), so identifier 0 means top priority.
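
The arbitration mechanism can be illustrated with a small simulation in C. The function name and the two-node limitation are my own simplifications; a real bus arbitrates any number of nodes bit by bit:

```c
#include <stdint.h>

/* Simulate CAN arbitration between two 11-bit identifiers.
 * The bus behaves as a wired-AND: a 0 (dominant) bit always wins
 * over a 1 (recessive) bit. A node that sends recessive but reads
 * dominant back loses arbitration and stops transmitting. */
uint16_t arbitrate(uint16_t id_a, uint16_t id_b)
{
    for (int bit = 10; bit >= 0; bit--) {
        int a = (id_a >> bit) & 1;
        int b = (id_b >> bit) & 1;
        int bus = a & b;            /* wired-AND of both outputs */
        if (a != bus) return id_b;  /* A read a dominant bit it did not send */
        if (b != bus) return id_a;  /* B backs off, A continues */
    }
    return id_a; /* identical identifiers; the spec forbids this on one bus */
}
```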

The benefit of a message-based protocol is that devices do not need an address; instead, the message itself declares what kind of information it contains as part of its data. This allows devices to be added without reconfiguring the network, which is especially handy if more segments are added to the snake at a later stage or new functional devices are introduced.

There are four kinds of messages available:

  • Data frame, containing (of course) data
  • Remote Transmit Request, a request from one device to another to trigger it to send information
  • Error frame, containing information on errors that occurred on the CAN bus
  • Overload frame, sent when a device's buffer is full and it can no longer store incoming frames, signalling the other devices to stop sending so that no frames are missed

Next to the message priority, the frame contains one RTR bit that indicates the message is a Remote Transmit Request, then four bits that indicate the number of data bytes in the frame, followed by the actual data bytes. Finally, a 15-bit CRC is added, allowing the receiving devices to verify that the message is not corrupted.
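
The 15-bit CRC can be computed bit by bit with the polynomial from the Bosch specification (x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1). On the AT90CAN controllers this is done in hardware, so the sketch below is only meant to show the principle:

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-wise CAN CRC-15 as described in the Bosch specification.
 * Polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1,
 * i.e. 0x4599 for the low 15 bits. Input is one bit per byte. */
uint16_t can_crc15(const uint8_t *bits, size_t nbits)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < nbits; i++) {
        uint16_t crcnxt = bits[i] ^ ((crc >> 14) & 1);
        crc = (crc << 1) & 0x7FFF;  /* shift, keep 15 bits */
        if (crcnxt)
            crc ^= 0x4599;          /* divide by the polynomial */
    }
    return crc;
}
```

In a real frame the CRC covers all bits from the start of frame up to the end of the data field.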

A big part of the CAN protocol is error handling, ranging from simply retrying a message to fully disconnecting devices from the network if they keep detecting errors. Several errors can be detected:

  • CRC error: all devices calculate the CRC value of the frame, and if it does not match, they send an error frame back indicating the message was corrupted. I'm not sure exactly how this works; I guess the other devices receive the error frame as well and then decide they do not need to send one themselves, so that not every device repeats it. To recover from this error, the message is resent.
  • Acknowledge error: after receiving the last bit of a frame, one (or more) devices must respond within a specific time by pulling the data line low. If the sender does not detect this, the frame is resent. Note that this does not guarantee that all devices have received the message.
  • Form error: several control bits in the message must have a fixed level (delimiters); if a device detects they are not correct, a form error is sent so the transmitter can resend the frame.
  • Bit error: the transmitter detects that the bit it reads back is not the bit it sent. I guess no error frame is sent in this case; detection alone is enough for the device to stop sending data.
  • Stuff error: there is no clock line in the CAN interface, so each device must calibrate its own internal clock to match the other devices. This is done by monitoring the bus and detecting the edges of the data line, at which moments the clock is synchronized. After five consecutive bits of the same polarity, an inverse bit is automatically inserted to prevent the clock signal from getting out of sync. If such a stuff bit is not detected, a stuff error frame is sent so the frame can be repeated.
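
The bit stuffing rule from the last bullet can be sketched as a small C routine; the function name and buffer interface are my own, and a real controller does this in hardware on the fly:

```c
#include <stdint.h>
#include <stddef.h>

/* Insert a stuff bit (the complement) after every run of five equal
 * bits, as CAN does to keep enough edges for clock synchronization.
 * Input/output are one bit per byte; returns the number of bits
 * written to `out` (out must be large enough, at most nbits*6/5+1). */
size_t can_stuff(const uint8_t *in, size_t nbits, uint8_t *out)
{
    size_t n = 0;
    int run = 0, last = -1;
    for (size_t i = 0; i < nbits; i++) {
        out[n++] = in[i];
        if (in[i] == last) {
            run++;
        } else {
            last = in[i];
            run = 1;
        }
        if (run == 5) {            /* five equal bits: insert complement */
            out[n++] = !in[i];
            last = !in[i];         /* the stuff bit starts a new run */
            run = 1;
        }
    }
    return n;
}
```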

When a device detects errors, the cause can be the sender, or the device itself may be at fault due to defective hardware and/or software. When a device has sent more than 127 error frames, the chance is high that it is itself causing the problem; it then automatically stops sending active error frames and starts responding with passive error frames. If it passes 255 error frames, it is disconnected from the bus and can no longer participate in communication.
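
This error confinement behavior maps naturally onto a small state function. The thresholds follow the text above; this is a sketch, and real controllers keep separate transmit and receive error counters that also count down again on successful frames:

```c
#include <stdint.h>

typedef enum { ERROR_ACTIVE, ERROR_PASSIVE, BUS_OFF } can_state_t;

/* Map an error counter to the node state: beyond 127 the node only
 * sends passive error frames, beyond 255 it disconnects itself. */
can_state_t can_error_state(uint16_t error_count)
{
    if (error_count > 255) return BUS_OFF;
    if (error_count > 127) return ERROR_PASSIVE;
    return ERROR_ACTIVE;
}
```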

This is an abstract of the full CAN spec from Bosch, and it all sounds good, but it seems a lot of higher-level software is needed to fully support the CAN protocol. ATMEL has special microcontrollers in the AVR series that include a CAN interface (AT90CAN32, AT90CAN64 and AT90CAN128; the difference is the amount of on-board flash) and a special CAN bus driver (ATA6660). Scanning chapter 19 of the microcontroller spec reveals some sort of message boxes which are sent automatically, after which an interrupt signal is triggered. It seems some level of error handling is done automatically as well, including disabling the bus when multiple errors are found. After downloading the software package AT90CAN128/64/32 Software Library and Examples from the AT90CAN info page, it seems there are device drivers available providing higher-level calls to send and get a frame.

In total it looks promising enough to spend some money on evaluation kits and CAN monitors. The next step is to figure out which ones to buy. I already use AVR Studio 4 in combination with an AVR JTAGICE mkII JTAG interface, and for experiments I use an ATSTK500 board, so it does not take long to decide that I will need an ATDVK90CAN1 evaluation board, which can be ordered from Farnell. That will allow me to test the example code already downloaded and get a better feeling for what is needed in my own application to send/receive frames and to handle errors.

Since a network needs at least two devices, I need something to communicate with. It makes sense to use some sort of PC CAN card in combination with bus monitoring software, so that it can be used now for the basic setup and later to monitor the bus and do some debugging in case I run into problems. If possible, it would also be nice if the PC-based card offered an interface for my own PC applications to send and receive data over the CAN bus.

Some CAN interface cards I found:

  • CAN USB Performance, 279 US$, comes with monitor software, Windows drivers with code examples and Linux device drivers. Additional software with extra features can be bought separately.
  • USBtoCAN, no price mentioned, same as above but at first glance the monitor software shows much more detail than the first one.
  • USB to CAN Compact, no price mentioned, full Windows support including simple CAN monitoring software, no Linux support.
  • CANUSB, 205 EUR, a combo pack of a USB-to-CAN module and the monitor software from WGSoft. This site resells from EasySync. It is a nice device since it uses a USB chip from FTDI, which I have used before in some projects. The interface to their device driver is simple, and there are code examples available, including Linux support. After some more googling I found the monitoring software for free on the CANUSB site.
  • CAN232, 105 EUR, same as above but with a serial interface. There are many open source projects for it, ranging from full monitors to simple interface libraries.

I decided to purchase the CANUSB device and the ATDVK90CAN1 evaluation board.

Oct 25

In order to translate high-level movement commands from the brain, a small microcontroller will be placed between the FB and the interface to all segments. To minimize the time needed for transferring commands, a parallel bus interface will be used towards the micro, with an interrupt signal from the micro to the FB to indicate that attention is required.

Using 8 bidirectional pins, some address outputs, one read output and one write output is all that is needed to create a memory-mapped IO interface. By making sure enough pins are used for the address bus, it might be possible to later add a second board on top that uses the same interface but in a different memory area.

Some thoughts about how communication will take place between the FB and the micro:

  • Read the primary motor cortex status, can be a specific memory location
  • Status of a specific segment, reporting servo current, position, charge voltage, charge current and segment status. This would require one memory location to set the targeted segment, followed by reading multiple memory locations (5 so far).
  • Normally the micro controls the segments; however, for very focused movements it might be wise to have direct write access to a segment for setting maximum current, position, maximum charge current and charging on/off. This can use the same memory area and principle, but then writing to the locations after writing the targeted segment.
  • Give commands to the micro, for example move forward, using the vector model this would require a target vector (X, Y), perhaps even 16 bits and what kind of movement will be required at what speed.

This list is most likely not complete and should be considered work in progress. But I wonder whether using a memory location for each variable would really work: the list above would already require 8 to 12 address locations. As an alternative, a protocol can be defined over the interface, for example byte one is the command, byte two the number of data bytes to follow, and the following n bytes the data. In that case only a very limited number of address lines would be needed, theoretically only one.
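
The protocol-based alternative could be sketched as follows. The command codes and the framing helper are purely hypothetical placeholders:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical command codes for the FB <-> micro protocol. */
enum { CMD_MOVE_FORWARD = 0x01, CMD_READ_SEGMENT = 0x02 };

/* Frame a command as: byte 0 = command, byte 1 = payload length,
 * bytes 2..2+n-1 = payload. Returns the total frame length,
 * or 0 if the output buffer is too small. */
size_t frame_command(uint8_t cmd, const uint8_t *payload, uint8_t len,
                     uint8_t *out, size_t out_size)
{
    if ((size_t)len + 2 > out_size)
        return 0;
    out[0] = cmd;
    out[1] = len;
    for (uint8_t i = 0; i < len; i++)
        out[2 + i] = payload[i];
    return (size_t)len + 2;
}
```

A "move forward" command carrying a 16-bit X and Y target vector would then be one 6-byte frame, written through a single memory-mapped address.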

With respect to the available IO on the FB, pins IOG8..IOG15 can be used as a bidirectional data bus. The direction must be changeable depending on read or write commands; this is possible for the whole 8-bit bus but not per pin, which is not a problem. Pins IOG16..IOG23 can be used as an 8-bit address bus, all set as outputs. Outputs OG3 and OG4 can be used for /RD and /WR, and input IG1 as the /INT signal. Two additional outputs, OG1 and OG5, can be used to drive two LEDs, which always prove helpful during development.

In total, 19 pins are used for interfacing with the micro. Most ATMEGAs have more, so no problem is to be expected there either. This gives a 256-byte memory area, so both the memory-mapped and the protocol-based interface methods are possible. Additionally, all pins are mapped on port G, requiring only one file descriptor to be opened in the application on the FB (/dev/gpiog), which makes maintenance easier and allows a second application to run in parallel using the other pins if needed later on.

For the interface to the segments, an I2C interface is planned. Although perfect as an interface, I'm worried about data integrity. The snake might turn out to be long (1 to 2 meters), requiring long interface lines from head to tail. Combined with multiple servos, all driven by pulsed signals, the electrical noise generated should not be underestimated. A better choice would be RS485, since it is designed for noisy long-distance communication. However, setting up a multi-master system with it will not be easy. Perhaps a better option is to use the interface designed for exactly this: CAN.

Oct 23

Each segment can hold its own battery pack to spread the weight evenly over the snake. Since most servos require somewhere between 4.8 and 6.0V, at least 4 NiMH batteries are needed per segment. 2500mAh versions are available at reasonable cost, providing enough power per servo to run for several hours.

Charging these batteries requires a simple circuit; the simplest is a current source that limits the charging current to 10% of the capacity, so 250mA in this case. Charging can continue for 15 hours, after which it should be stopped. The microcontroller in the segment can switch this circuit on and off depending on the availability of a higher voltage on a dedicated wire, and can take care of stopping the charging process when the 15 hours have passed.
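
The C/10 charging rule with a 15-hour cutoff is simple enough to sketch in a few lines of C. The names and the minute-based timer are my own assumptions:

```c
#include <stdint.h>

#define PACK_CAPACITY_MAH   2500u
#define CHARGE_CURRENT_MA   (PACK_CAPACITY_MAH / 10u)  /* C/10 = 250 mA */
#define CHARGE_TIMEOUT_H    15u

/* Decide whether the charging circuit should (still) be switched on.
 * `charger_present` is whether the higher charge voltage is detected
 * on the dedicated wire; `elapsed_min` is how long this charge cycle
 * has been running. */
int charging_enabled(int charger_present, uint32_t elapsed_min)
{
    if (!charger_present)
        return 0;
    return elapsed_min < CHARGE_TIMEOUT_H * 60u;  /* stop after 15 h */
}
```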

Since most ATMEGA microcontrollers have several AD converters, it would also be possible to increase the charging current and measure the voltage over the battery pack to determine when the batteries are full. This allows quicker charging and (using a small discharge circuit) battery maintenance by fully discharging them from time to time just before rapid charging. Measuring the voltage also allows the segment to check the battery level during operation.

When all segments are connected in parallel, using a diode to prevent current flowing from one pack into another, the brain can be supplied by any pack that still has some energy left. Since the brain can communicate with each segment and retrieve its current energy level, it can decide when it is time to search for food or when it has eaten enough.

It should not be a surprise that I'm not the first to use a microcontroller for charging; there is even a nice application note from ATMEL that gives examples of how to do this. Needless to say, I will use it as a reference for my own design.

One thing to think about in more detail: the servo can be connected directly to the battery, or to the main power line that also feeds the brain. The advantage of the first is that current spikes are kept local; the disadvantage is that if one segment consumes more current, its battery empties faster than the others, which can cause problems during movement, especially if one segment has no power at all, in which case its servo will not move. The advantage of the second is that there is always power to the whole system as long as at least one pack has energy. The downside is that the whole system then draws current from this single pack, resulting in high peaks through the respective diode. A second downside is that the voltage available to the servos is reduced by the forward voltage of the diode, which probably means raising the number of batteries per segment to 5.

Despite the downsides of the second option, I feel it is still preferred. Normally the different packs should balance themselves out: if one pack has the highest voltage, most energy will be delivered by that one, but its voltage drops quickly and another pack takes over as the highest. Given that all segments will consume roughly the same amount of energy, I feel the high-current-through-the-diode problem will not be big.

Oct 22

Thinking more about the fast bus structure: it should be a bidirectional bus allowing multiple masters to initiate communication. In nature, when you want to move your hand, the brain tells the muscles in your arm to act, making the brain the master. When your hand touches something, it tells the brain it bumped into something, making the hand the master. Both can initiate the communication.

With a multi-master architecture, the single wire next to the interface to indicate to the brain that a problem has occurred is no longer needed. This makes the system more versatile: every module can talk to every other module without intervention of the brain, making it possible to extend reflex actions over multiple modules.

A very suitable interface for this is I2C, which is supported in hardware by most ATMEL Mega microcontrollers like the ATMEGA32. Next to the fact that the I2C protocol supports addressing and a variable number of data bytes, it also supports a multi-master setup that detects any bus conflict when two or more masters start transmitting at the same time. It also does not require special driver chips for the electrical interface to the bus; the two lines are simply shared throughout the system.

The FB also has an I2C interface, but this seems to be a software implementation. For the connection between the "co-brain" and the FB, a better solution is something parallel, like an 8-bit interface using 1 or 2 pins as address pins plus a read and a write pin. An interrupt pin set by the co-brain can signal the brain that it wants attention, preventing the need to poll the co-brain's status.

For the batteries, connecting them all in parallel might not be the smartest choice. Most types cannot handle this, or it makes the possible charging methods very limited. Some form of rapid charging is useful; in most cases this means monitoring the charge state and changing the charging method over time. If two batteries are placed in parallel and they differ slightly in characteristics due to production or aging, this will influence the charging response. In other words, you measure the response of a system and not of a single battery. More thought needs to go into this.

Oct 21

Since I now have the development environment for the FOX Board up and running, it's time to think about the hardware architecture. The FB will be used as the brain, in control of deciding what to do. The decisions will be based on several factors: is there enough energy to do it? If searching for prey, what is the best way to navigate through the environment? If prey has been sensed, how to catch it? All together a lot of complex, time-consuming tasks that need to be done in parallel with moving the body and controlling each segment.

The body will indeed consist of "x" segments, most likely all the same but varying slightly in size. These segments have nothing else to do than move a servo from one angle to another at a certain speed dictated by the brain. It would be a good option to include some kind of "self protection" circuit that detects overloads and under-voltages. In that case the segment should take appropriate action as a reflex and inform the brain that something is wrong. The brain can then respond with a slight delay, since the most immediate potential damage is already reduced by the segment itself.

The human body works the same way: you control your hand and fingers with your brain. Whenever you touch something hot, a reflex is triggered to pull away, after which your brain is triggered to take secondary actions. A better example is the closing reflex of your eyes when something approaches your head fast. No time to think, simply close the eyelids to prevent damage to your eyes. Only then tell the brain to open them slowly to see what is wrong and how to respond.

Based on history (and apparently on genetics, as indicated above), you should never mix time-critical tasks with non-time-critical tasks. The brain can set the path to follow and instruct each segment to move; it can also request status, but it should not handle basic maintenance. Since the "x" in "consists of x segments" is not defined, a simple fast bus structure might be the right choice. Something like a serial interface might be an option, allowing multiple segments or sensors to be connected to the same interface with a minimum amount of wires, something like a spinal cord.

Each segment will then have its own small microprocessor that controls the servo position and monitors for danger. It might also have its own power supply (battery), so that the weight distribution is evenly spread out over the whole body, like fat reserves. In case of danger, it can act on its own to prevent damage, using a single wire next to the interface to indicate to the brain that a problem has occurred. By investigating the status of each segment, the brain can figure out which segment (or segments) reported the issue. If this takes a while because an image from a potential camera is being processed, no harm done.

Secondly, while moving there is a lot of information to be calculated and sent to each segment, as previously found with the snake movement simulation program. It might be wise to offload this from the brain as well, or add a "co-brain" that takes care of the detailed calculations when it receives a command from the brain to move forward. Nature solved this by splitting the brain into areas, each responsible for a specific task like motor functions. I quote:

The brain contains a number of areas that project directly to the spinal cord. At the lowest level are motor areas in the medulla and pons. At a higher level are areas in the midbrain, such as the red nucleus, which is responsible for coordinating movements of the arms and legs. At a higher level yet is the primary motor cortex, a strip of tissue located at the posterior edge of the frontal lobe. The primary motor cortex sends projections to the subcortical motor areas, but also sends a massive projection directly to the spinal cord, via the so-called pyramidal tract. This direct corticospinal projection allows for precise voluntary control of the fine details of movements.

This would result in a microcontroller that interfaces directly with the FB and receives global commands like "move forward 1 cm". The microcontroller takes this command and splits it up into several commands to each segment, moving the servos to specific positions. In case of danger, the segment prevents damage and signals the microcontroller, which stops the movement and informs the brain that something happened.

As mentioned, each segment can have its own battery pack to store energy. By connecting them all in parallel to the same power line (the blood vessel), a vast amount of energy can be stored which is available throughout the body of the snake. In a human body, the energy from food is extracted in the intestines and passed to blood vessels, which transport it to the fat reserves. This could result in special segments that, next to movement, also have a secondary task, like taking the raw voltage applied while "eating" and distributing it via the power lines to the various battery packs.

Oct 19

The whole chain, from entering source code in Geany, compiling with gcc-cris, transferring the application to the target using ssh, to finally debugging it with gdbserver, gdb-cris and kdbg, is now working using the makefile below. In Geany, the Execute button runs the make execute command in the folder holding the application source and the makefile.

COMPILER = /usr/local/cris/bin/gcc-cris
CFLAGS = -g -fno-omit-frame-pointer -O0 -mlinux -o
SOURCES = main.c MyUnit.c
TARGET = Hello_World
USER = root@FOXBoard
DESTINATION = /mnt/flash/bin/HelloWorld

# top-level rule to create the program, executed by default if no params are provided
all: compile

# Called by pressing the Compile or Build button in Geany
compile: $(SOURCES)

build: compile

execute: build
<TAB>ssh $(USER) "gdbserver :1234 $(DESTINATION)/$(TARGET)" &
<TAB>sleep 5
<TAB>kdbg -r FOXBoard:1234 $(TARGET)
<TAB>ssh $(USER) "/mnt/flash/bin/StopRemote.sh $(TARGET)"

Everything works great except for one thing: the printf command does not show anything, either in the terminal available in Kdbg or in Geany. Although in theory I do not need printf to debug my application (I can fully debug it as is), using printf to display progress or status in my applications would be very nice.

When I start gdbserver on the FB from an ssh terminal, printf works when using Kdbg (or gdb-cris manually), although not in the terminal of Kdbg but in the terminal that was used to start gdbserver. When starting gdbserver using a ssh root@FOXBoard "gdbserver…" command, printf does not work until the moment the application (and thus gdbserver) is stopped.

My guess is that printf prints to the terminal that was used to start the application. When using the ssh root@FOXBoard "gdbserver…" command, a new shell is started in which the command passed to ssh is executed.

Thinking about this more: over the last days I found myself always having an ssh terminal open to the FOXBoard for maintenance, like organizing files and (previously) killing gdbserver from time to time. So it would not be a bad idea to not fully automate the process. Perhaps a better option is to have three programs open (Geany, an ssh terminal and Kdbg), in which case the order would be:

  • Change code in Geany, execute compilation and secure copy the application to the FB
  • In case of debugging, start gdbserver in the ssh terminal (from the second time on, just arrow up and Enter)
  • In case of normal development, start the application in the ssh terminal
  • In case of debugging, in Kdbg select File | Recent Executables…, select the application and press F1 to reload the source before starting the debugging.

For now I will start working like this to see if it is an acceptable way of working. The only change needed is to alter the command executed when pressing the Compile button in Geany: in Build | Set Includes and Arguments, change the Compile command into make build. As a result, pressing the Compile button will compile and copy the application to the FB, and pressing the Execute button will also start gdbserver and Kdbg.

Oct 13

When debugging is stopped while the application on the remote target has not finished, I want the remote application and gdbserver to be stopped, for several reasons. For one, debugging means finding a problem in an application, so execution is apparently not correct and must be stopped. Secondly, the next time the executable must be copied, the copy will fail since the executable is still locked.

The Execution | Kill command in KDbg does this: it sends a kill command to gdb-cris, which stops execution of the remote program and gdbserver. Perfect; the problem is that this is not done automatically when you close KDbg, so you always need to do it first before returning to the code. As an option, a script can be launched when Kdbg is stopped that investigates whether the application and/or gdbserver are still running. If so, they can be stopped remotely.

Killing gdbserver is done using the kill command. Open a terminal and run the command ssh root@FOXBoard "gdbserver :1234 /mnt/flash/bin/HelloWorld/Hello_World". Next, open a remote shell to the FB and type the command ps. This lists all running processes including their process IDs. If gdbserver is running you will see three processes:

  • 834 root  536 S   sh -c gdbserver :1234 /mnt/flash/bin/HelloWorld/Hello
  • 835 root  512 S   gdbserver :1234 /mnt/flash/bin/HelloWorld/Hello_World
  • 836 root  32 T   /mnt/flash/bin/HelloWorld/Hello_World

The first one is the secure shell used for remotely starting gdbserver, the second is gdbserver itself and the last is the application being executed. The number in front is the process ID, which must be passed to the kill command. Note that killing gdbserver alone does not stop the application: after gdbserver is killed, the application starts (or continues) executing, since it is no longer being held. In parallel, the remote shell sees that gdbserver has finished and wants to close, but it cannot since a second process (the application) is still using the shell. As a result, when processes 834 and 835 are killed using the kill 835 command, the prompt is not released in the terminal that was used for the ssh command. Only when a second command, kill 836, is executed are all applications in the shell finished and the command prompt released.

This is not the behavior I'm looking for, since it means the application continues running out of control after gdbserver is stopped halfway through the debugging process. In most cases this means the issue has been located and the application must be stopped as well, since it is not working correctly.

The correct order to prevent the application from running without gdbserver is to kill the application first and then kill gdbserver. Giving the command kill 836 will kill the application, although it remains visible when listing all running processes. I guess this is because gdbserver still holds a lock on it; giving the command kill 835 then kills gdbserver and the application at the same time.

I'm not sure how yet, but this should all be possible in a shell script that is kept universal, so that it only needs to be written once and can be used for all future FB projects. The first step is to create the basic script. Start gedit StopRemote.sh to create a new file, copy the text below, save and close.

#!/bin/sh
# (c)2008 J.P. van de Kamer
executable=$1
echo "Stop gdbserver and target application on the FOXBoard…"
echo $executable

This script does nothing other than print the first parameter passed to it. To make the script executable, enter the command chmod 0755 StopRemote.sh; this changes the attributes of the script so that anybody who starts it in a shell can execute it. Enter the command ./StopRemote.sh Hello_World, which shows the output below:

jan@ITX-Development:~/FOXBoard/HelloWorld$ ./StopRemote.sh Hello_World
Stop gdbserver and target application on the FOXBoard…
Hello_World

The kill command requires the process ID of the process. In order to find the PID of the application, the command pidof Hello_World can be used; there is no need to pass the full path to the application, the executable filename will do. The result is the PID, which must be passed to the kill command. To do this, we first assign it to a variable using the command

pid=`ssh root@FOXBoard "pidof $executable"`

Note the ` character: everything between two of them is executed and the output is substituted as the value in the assignment. In this case the variable pid is assigned the output of the remote command.

It’s time to make a small change: executing the commands remotely one by one takes a long time, so it’s better to make this a local script that is stored on the FB and executed with a single remote call. As a result, the script so far becomes:

executable=$1
echo "Stopping the target application and gdbserver on the FOXBoard…"
pid=`pidof $executable`
echo The PID of $executable is $pid

When an executable name is passed to the script that is not running, the value of pid is empty. Before killing the PID, a check should be done to see if a value is assigned to pid; if not, the kill command can be skipped.

if [ "$pid" != "" ]
then
  echo The PID of $executable is $pid
else
  echo $executable is not running
fi

A couple of things to note here:

  • The spaces after the [ and before the ] are not optional
  • The $pid must be between "", so the test becomes a string compare between the pid and an empty string
  • The then must be on a new line
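The effect of the empty-string check can be tried with a small local sketch (pid is left empty on purpose here, to simulate an application that is not running):

```shell
#!/bin/sh
# Sketch of the empty-string test: the "" around $pid make this a string
# compare, so the test still works when the variable is empty.
pid=""
if [ "$pid" != "" ]
then
  echo The PID is $pid
else
  echo The application is not running
fi
```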

And now convert it so it actually kills something:

if [ "$pid" != "" ]
then
  echo Stopping $executable…
  kill $pid
else
  echo $executable is not running
fi

Next is to kill gdbserver; a simple copy/paste/modify action will do this. Below is the whole script:

#!/bin/sh
# (c)2008 J.P. van de Kamer
executable=$1
echo "Stopping the target application and gdbserver on the FOXBoard…"

pid=`pidof $executable`

if [ "$pid" != "" ]
then
  echo Stopping $executable…
  kill $pid
else
  echo $executable is not running
fi

pid=`pidof gdbserver`

if [ "$pid" != "" ]
then
  echo Stopping gdbserver…
  kill $pid
else
  echo gdbserver is not running
fi

Copy this script to the FB, in my case to /mnt/flash/bin, using the command scp StopRemote.sh root@FOXBoard:/mnt/flash/bin.

Now a single command ssh root@FOXBoard "/mnt/flash/bin/StopRemote.sh Hello_World" will stop the application and gdbserver.

Oct 6

During my search on how to debug the applications on the FB, I found a very interesting feature of ssh for executing gdbserver on a remote target. Normally executing such an application would require:

  • open a secure shell using ssh root@FOXBoard
  • navigate to the folder
  • start gdbserver using gdbserver :1234 MyProgram
  • when it finishes, closing the terminal would require the exit command

All this can also be done using a “non-documented” feature of ssh to enter the command to execute directly after the target, between quotes.

  • ssh root@FOXBoard 'gdbserver :1234 /mnt/flash/bin/HelloWorld/MyProgram'

This opens the shell, logs on, starts gdbserver in the right location with the right program and logs off when the application finishes. It does not only work for starting gdbserver; any remote command can be placed between the quotes, e.g. ssh root@FOXBoard 'rm /mnt/flash/bin/HelloWorld/MyProgram' will remove the application.

This is perfect for use in a makefile for remote maintenance, in combination with preventing the password from having to be given after every secure command.
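As a sketch of what that could look like in practice (the target names are my own choice, and the fragment assumes the StopRemote.sh script is installed in /mnt/flash/bin and key-based ssh authentication is set up):

```makefile
# Hypothetical makefile fragment for remote maintenance of the FB.
# Recipe lines must be indented with a tab.
stopremote:
	ssh root@FOXBoard "/mnt/flash/bin/StopRemote.sh Hello_World"

clean-remote:
	ssh root@FOXBoard "rm /mnt/flash/bin/HelloWorld/Hello_World"
```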

Oct 6

Searching the web for how to restart an application on a remote target does not provide any clue how to do this. I found one link to another debugger that acts as a wrapper around gdb; it is called DDD. The benefit of this debugger is that it is less integrated, meaning it shows a terminal with gdb running; pressing the buttons basically types the command linked to the button into the terminal and executes it. Any feedback from gdb is shown (of course) in the terminal.

Open a terminal, change to the location that holds the HelloWorld application and enter the command ddd --debugger /usr/local/gdb-cris/gdb-cris; this will start ddd using gdb-cris as the debugger. Use the File | Open Program option to load the HelloWorld application. This will open main.c in the source window.

In a second terminal, start a secure shell to the FB. In the folder containing the HelloWorld application, enter the command gdbserver :1234 Hello_World. I figured out that the IP address of the remote PC that will be used for the debugging is not needed, so less typing.

In the bottom terminal window of ddd, type the command target remote FOXBoard:1234 as you would do with the bare version of gdb. The FB will respond with a "Remote debugging from host xxx" message, indicating the connection has been made.

Any button you now press results in a command being sent to gdbserver. For example, click on the first line of the program and press the STOP icon. This sends the command break "something"; as a result this tool can be used to quickly learn the basic gdb commands.
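The same commands can also be typed by hand in that bottom window. A sketch of a typical session (the variable name counter is just an assumption for illustration, not from the HelloWorld source):

```
(gdb) target remote FOXBoard:1234   # connect to gdbserver on the FB
(gdb) break main                    # set a breakpoint at the start of main
(gdb) continue                      # run until the breakpoint is hit
(gdb) next                          # step over one source line
(gdb) print counter                 # inspect a (hypothetical) variable
(gdb) detach                        # disconnect and let the program run on
```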

In the File menu, there is a restart option. Pressing this button actually restarts ddd….

Reloading the Hello_World application results in the same killing of gdbserver. After another 30 minutes of trying I give up; the only way to restart the application is indeed to kill the current instance of gdbserver, start it again and reload the executable.

Although ddd shows the commands sent to gdb and allows manual intervention, I still like the look and feel of Kdbg, so that will be my main debugger. From time to time I might use ddd or even gdb-cris directly if it is not clear what goes on or if special commands must be given.
