
Currently running a NAS Synology DS212+, DSM 4.0-2228

Two modules are needed:
- usbserial.ko
- ftdi_sio.ko

insmod each one, usbserial first (ftdi_sio depends on it).
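A minimal sketch of the loading sequence (the module path is an assumption; adjust it to wherever the .ko files were copied on your DSM install):

```shell
# usbserial must be loaded before ftdi_sio, which depends on it.
# /lib/modules is an assumed location -- adjust for your DSM install.
MODDIR=${MODDIR:-/lib/modules}
for mod in usbserial ftdi_sio; do
    insmod "$MODDIR/$mod.ko" 2>/dev/null || echo "$mod: insmod failed (already loaded?)"
done
```

lsmod should then list both modules.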

If needed, create the device node:

mknod /dev/usb/ttyUSB0 c 188 0
stty -F /dev/usb/ttyUSB0 1200 sane evenp parenb cs7 -crtscts

Plugging in the USB-serial converter should then show something like this in dmesg:

[1124863.890000] usbcore: registered new interface driver usbserial_generic
[1124863.900000] usbserial: USB Serial Driver core
[1124867.710000] USB Serial support registered for FTDI USB Serial Device
[1124867.710000] ftdi_sio 2-4:1.0: FTDI USB Serial Device converter detected
[1124867.730000] usb 2-4: Detected FT232RL
[1124867.730000] usb 2-4: Number of endpoints 2
[1124867.740000] usb 2-4: Endpoint 1 MaxPacketSize 64
[1124867.740000] usb 2-4: Endpoint 2 MaxPacketSize 64
[1124867.750000] usb 2-4: Setting MaxPacketSize 64
[1124867.770000] usb 2-4: FTDI USB Serial Device converter now attached to ttyUSB0
[1124867.770000] usbcore: registered new interface driver ftdi_sio
[1124867.780000] ftdi_sio: v1.5.0:USB FTDI Serial Converters Driver
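To check for this without scrolling through the whole log, you can grep dmesg (a sketch; the "attached" message comes from the log lines above):

```shell
# Show the most recent attach line; empty output means the ftdi_sio
# driver did not bind to the converter.
dmesg | grep -i "converter now attached" | tail -n 1
```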

This post is a small recipe to perform face detection using a Nokia N900 phone. It’s based on ROS and OpenCV, and shows how these components are mixed together and configured. See this OpenCV wiki page about face detection to understand how it works behind the scenes.

First of all, ROS needs to be installed on the N900. I’ve built several ROS packages, including the latest official release, code-named “C Turtle”. I assume you know how to ssh to your N900 and gain root access.


Be careful while tinkering with your N900: you may “brick” it (if you don’t know what I mean, close this page). I won’t be held responsible if that happens, or if anything else goes wrong. You’ve been warned.

Setting up ros-n900 source

I’ve created a Google Code project dedicated to N900 ports and package development. This project can be reached at http://code.google.com/p/ros-n900/. In the download section, you’ll find deb packages. You can download them directly to your N900, or configure a new APT source pointing to this project:

$ echo "deb http://ros-n900.googlecode.com/files /" >> /etc/apt/sources.list
$ apt-get update

Now install ROS with apt-get:

$ apt-get install ros-cturtle-base

This will install ROS on the /opt partition (usually a 2GB ext3 space), leaving the rootfs untouched. ROS uses ~500MB once installed. You can also install it on a (fast) SD card, formatted with an ext3 filesystem (don’t use FAT32). You’d then need to create a symlink /opt/ros-cturtle-base pointing to your SD card.

Accessing N900 webcams under ROS

The brown-ros-pkg project hosts gscam, a very nice ROS package used to access cameras with GStreamer. Since the N900 webcams are recognized as V4L2 devices, it’s easy to set up a gstreamer pipeline. First install the dependencies. On the N900:

$ apt-get install gstreamer-tools
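Before building pipelines, it’s worth checking the needed elements are actually available (a sketch; gst-inspect-0.10 ships with the GStreamer 0.10 tools):

```shell
# Report any GStreamer element that is missing from the install.
for el in v4l2src videoscale ffmpegcolorspace smokeenc udpsink; do
    gst-inspect-0.10 "$el" >/dev/null 2>&1 || echo "missing element: $el"
done
```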

This example shows how to send video to a PC host using UDP. Device /dev/video0 is the back camera (the big, high-resolution one), /dev/video1 is the front one (low resolution).

# On N900, <PC_IP> being your PC's IP address (usually fixed if following the usbnet howto tutorial)
$ gst-launch v4l2src device=/dev/video1 ! videoscale ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! smokeenc ! udpsink host=<PC_IP> port=1234

# On PC computer:
$ gst-launch-0.10 udpsrc port=1234 ! smokedec ! autovideosink

Here we are! A small window appears, streaming video from the N900. You may even see your face. Now install some more dependencies to build the ROS package. On the N900:

$ apt-get install libgstreamer0.10-dev

Now go to ROS stacks directory.

$ roscd
$ cd ../stacks

Install the gscam package: follow the instructions there, download the gscam archive from the download section, or install it from source:

$ svn co -r 682 http://brown-ros-pkg.googlecode.com/svn/trunk/unstable/gscam gscam

Once installed, build it using rosmake:

$ roscd gscam
$ rosmake -i

(this takes quite a lot of time, be patient…)

gscam requires the environment variable GSCAM_CONFIG to be set. It stores the gstreamer pipeline definition. I had lots of trouble finding the correct pipeline, and finally got help from the ros-users list. The trick is to convert the YUV format (the only one the N900 cams seem to output) into RGB.

$ export GSCAM_CONFIG="v4l2src device=/dev/video0 ! videoscale ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! video/x-raw-rgb ! identity name=ros ! fakesink"

You can manually check it’s working without ROS:

$ gst-launch-0.10 $GSCAM_CONFIG

If it pauses, that’s OK. Now run it using the gscam node. It must be run from within the package’s “bin” directory.

$ roscd gscam
$ cd bin
$ rosrun gscam gscam

At this point, gscam should say it’s “processing…” (of course, a reachable roscore must be running somewhere, for instance on the PC host). Now, back on the PC, install the n900_cam package.
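A quick way to confirm the node can reach the master (the ROS_MASTER_URI value here is an assumption; use your PC's usbnet address and the default port 11311):

```shell
# Point the N900 at the PC's roscore, then list topics as a liveness check.
export ROS_MASTER_URI=${ROS_MASTER_URI:-http://192.168.2.14:11311}
rostopic list 2>/dev/null || echo "roscore not reachable at $ROS_MASTER_URI"
```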

$ roscd
$ cd ../stacks
$ svn co http://ros-n900.googlecode.com/svn/trunk/src/n900_cam
$ roscd n900_cam
$ rosmake -i

Now run the testcam.py node from n900_cam. It retrieves images from its image_topic subscription, adds a circle and displays the result in a window (taken from the ROS tutorials).

$ rosrun n900_cam testcam.py

So far so good. Let’s detect our own face! This is close to the previous example, except that images are now submitted to OpenCV for face detection. The code comes from the OpenCV samples and is glued here to work as a ROS node.

$ rosrun n900_cam facedetect.py

If you can see yourself with a red square around your face, that’s good news. If not, either you’re not human, or something is wrong with the running configuration…


I’ve recently bought a Nokia N900 smartphone. It’s described as an Internet Tablet with phone capabilities. The interesting thing is that it’s 100% Linux-based: you can have full root access. On paper, this phone is awesome; in reality, it suffers from a lot of half-baked applications and poorly maintained software, but still, this opens the door to lots of tinkering…
On the other side, I’ve also discovered ROS. ROS stands for Robot Operating System. It’s a robotics framework offering distributed computing over nodes and a pubsub architecture for inter-process message exchange. It can be programmed using C++, Python and other, less supported, languages. It’s developed by Willow Garage, the guys who built the PR2 robot. If you’ve ever searched for a flexible, powerful and fun robotics framework, or even wanted to develop your own (…), you definitely need to give ROS a try.
There are lots of advantages to running a PC-based robot. For instance, you can easily plug in a USB webcam and give vision to your robot, for minimal cost. Doing this with an embedded cam, like the CMUCam, is certainly fun and interesting, but in the end the performance can’t be compared, and you’ll surely need some computing power to process the incoming images. There are tiny PCs, based on ITX motherboards for instance, to do this. You can install Linux, put ROS on it and start to build your Linux-powered robot. But wait, I also have a very, very small form factor Linux PC, my N900… Why not use it as a robotics platform?
It provides:

  • 2 webcams (front, back)
  • 3-axis accelerometer
  • GPS
  • high resolution touchscreen
  • micro-USB connector, can be used as a USB host with some tinkering
  • Wifi
  • bluetooth
  • Infra-red beam
  • 32GB memory, extendable to 64GB with microSD cards
  • microphone
  • speaker
  • ambient light sensor

Doesn’t it sound awesome as your main robotic platform ?

The idea is thus to install ROS on the N900. Low-level tasks, such as actually driving motors or collecting sensor data, should remain on a microcontroller board, like Jaluino. All collected data and actions would go through the N900, acting as a hub and performing some pre-processing before delegating the more power-hungry tasks to a PC nearby, also running ROS.
It’s been a while since I first installed ROS on the N900. There were lots of trials and errors, highly time-consuming, but it was definitely worth it! I’ve created a dedicated Google Code project, named ros-n900. There you’ll find ROS packages specific to the N900 target, and deb packages to easily install ROS on the N900. You can also follow the instructions on this wiki page I wrote on ros.org: http://www.ros.org/wiki/N900ROS.
Next, we’ll see how to have fun with N900 webcams, ROS and OpenCV!

I now have quite a nice base for my SirBot project: highly configurable, and hopefully more stable now that the implementation is done using Twisted (no threads anymore). With this base, any data coming from the PIC (a request, a response, a message; see SirBot’s doc for more) can be parsed, turned into objects, then spread to whatever application needs them. Now the time has come to build a GUI…

For instance, I’ve experimented with back-EMF while trying to control a DC motor. This was very fun and produced very nice data and graphs. But those were made after the experiment: I just logged raw data to a file, then processed it with an awful Python script to build gnuplot graphs. Where is the real-time? Observing these graphs in real-time is now mandatory. I need to know what’s going on in this bot!

Yes, I now need to build a GUI, and it really pisses me off :) The mere idea of having to write one really makes me sick. It’s so much wasted time in my opinion. I just don’t want to design a GUI, selecting the appropriate layout, putting some code for buttons here and there. Maybe I’d need a graphical GUI builder. But when it comes to dealing with real-time graphs, I’d need to implement a sort of canvas, and draw points on it, and… I just don’t want to spend my time on this; I just want to see the data coming from my PIC. In real-time. And I’d like it flexible, with a lot of widgets: gauges, sliders, knobs, etc…

I first tried, and thought I had, a solution with Flex. Interestingly enough, Flex can be used to build GUIs with real-time data. It’s quite fun to use, the compiler is open-source, and the documentation is awesome. Lots of widgets are available for free, like FusionCharts. And it’s cross-platform, since it’s all about Flash. I prototyped and even wrote a Flash gateway to spread messages from SirBot’s core over a Flash XMLSocket. It worked great, with nice performance. Then I tried to add some colors and… you have to sub-class, add callbacks for whatever-I-don’t-know, etc… I may have missed something, but it looks too complicated for what I want to do. And most importantly, I’d still need to design a layout, add a menu, add a button here, write the code so that when the button is pressed, it switches to this pane. I just don’t have time!

So, how can I do this ? How can I build a rich GUI, with real-time widgets, easy and fast to implement ? Which tool(s) to use ? Is there a tool which can do this ?

Though I’ve never tried it, LabVIEW seems to be a solution, but it’s way, way too expensive, and there’s no free-of-charge edition for the “cheap guy”… Looks like I need a LabVIEW alternative. During this “quest”, I’ve found many interesting projects. What may be surprising is that, most of the time, they are about sound, video and art-related projects.

I first found EyesWeb through BioMobius, which integrates it and adds blocks dedicated to the biomedical field (and also provides an awful GUI builder…). EyesWeb is free and works on Windows. It looks very powerful, and seems to be a real LabVIEW alternative. Its original purpose was to deal with audio and video, in real-time, with motion capture, for artistic projects. I remember having read it was used in an opera, to produce nice visual effects according to what was going on on stage.

I tried to prototype things, and the entry cost is quite high (but this is what I expect for this type of tool). I tried to connect it to SirBot’s core, via a NetReceiver, but it just crashes. This really is a nice tool; I’ll need to spend more time on it if the others don’t do the trick. And the motion capture could be interesting when I add a camera to my bot…

Pure Data
Pure Data is an old project. I’ve seen many incredible videos on YouTube about it. Though it has a visual environment, it’s closer to classical “typed” programming: you have to know the objects’ names, and what you can do with them. Runs under Linux.

I did not give this one a try, but it looks very powerful and fast. It may be too audio-oriented.

Max/MSP
Max/MSP is derived from Pure Data. Lots of nice widgets. Runs under Mac and Windows. Not Linux.

OSC
Open Sound Control. Not a GUI toolkit nor a tool by itself, but a specification: “Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology.” Many, many tools I’ve found, including those listed here (particularly EyesWeb), can use OSC as input. OSC is used in many areas other than sound. I think I’ll need to implement an OSC output in SirBot’s core.
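As a sketch of what such an OSC output could look like on the wire, liblo’s command-line tool can emit single messages (oscsend comes from the liblo-tools package; the address path and value here are made up):

```shell
# Send one float to a hypothetical /sirbot/battery address on a local
# OSC server listening on UDP port 7770.
if command -v oscsend >/dev/null 2>&1; then
    oscsend localhost 7770 /sirbot/battery f 7.4
else
    echo "oscsend not installed (liblo-tools)"
fi
```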

Other fun tools…

Fluxus
From their site: “Fluxus reads live audio, midi or OSC network messages which can be used as a source of animation data for realtime performances or installations. Keyboard or mouse input can also be read for games development, and a physics engine is included for realtime simulations of rigid body dynamics.” Looks fun! While not directly usable for what I want (from what I understand), it may be interesting to keep in mind. For instance, it could be used to “just” produce a visual… something which globally represents the bot’s state. No “scientific” graphs, or the like. Just… visual stuff. Fluxus can be programmed live (see this video for more). No GUI, no visual programming.



SuperCollider
SuperCollider seems very similar to Fluxus. Live coding, no GUI: just type and see/listen to the results. See this video, scratching with a WiiMote :)

There are many other tools and libraries out there (Quartz Composer, vvvv, Csound, …) to deal with real-time data, and lots of artistic projects I’m just discovering (and started to discover with Arduino), which look very, very fun :)

So, which one to choose?
I don’t know yet, but EyesWeb probably fits my needs best. The others are of interest too, but maybe not right now. I have to spend more time on EyesWeb, and build a real prototype for validation, probably based on an OSC server/gateway.

  • I’ve been working, with some other guys, on the jallib project, which aims to provide standard libraries for the jalv2 compiler. Jallib is released under the BSD and ZLIB licenses, and available on Google Code. A Google group is also available; this is where we discuss and debate jalv2, jallib, etc…
  • I’ve written some blog posts on jallib’s blog
  • I’ve migrated my SirBot projects and jalv2 libraries to jallib
  • I’ve re-written an important part of the SirBot communication layer, now based on Twisted. This provides an extremely powerful base, without threads, that is easy to maintain
  • I’ve played with Flex, Adobe’s OpenSource RIA framework, and tried to prototype a Flash-based GUI for SirBot’s monitoring widgets. Quite promising…

Having recently tested this method a lot, I’ve finally determined my "easy and cheap" way to build PCBs using the toner transfer method. I tried a lot of things, different papers. I’ve also tested the professional photoengraving method, which can be considered the most accurate (but certainly not the cheapest).

So, what’s the recipe? It’s close to the last one, but now way faster, since only ~10 minutes are needed. The main difference is you don’t have to be careful while peeling off the paper: the toner sticks really firmly, so there’s no risk of damaging the tracks.

    • I use glossy photo paper for laser printers (135g/m2)
    • wash your PCB board with soap, then with window cleaner (or a detergent with alcohol)
    • sand your board with ultra-thin sandpaper (600 grit, the kind used for car bodywork)
    • wash the board again with soap, then with window cleaner. It must be absolutely dry
    • preheat the board
    • place the paper on the board. Be careful, it’ll instantly stick to the board
    • iron the paper: first smoothly, to help the paper stick firmly, then with a lot of pressure (~1min)
    • continue with the iron tip, redrawing the whole circuit (black tracks will appear through the paper), for ~4-5min
    • total ironing time: ~4-5min
    • then place the board in hot water, no soap. Wait ~3-4 minutes, watching the paper get soaked
    • peel off the paper. Most of it can easily be removed; only the last layer, where the toner sticks, will mostly remain on the board
    • peel off the rest of the paper with a toothbrush. Don’t hesitate, it won’t damage your PCB
    • once done, dry the board. Check if you’ve missed some paper
    • clean the board with window cleaner. While it won’t remove the toner, it’ll help to remove paper residue and get an accurate board
    • you’re done. You’ll then need to etch the board. Once done, remove the toner using acetone (nail polish remover works well)

This video shows the whole recipe:

Now, as a conclusion, here’s a comparison of different PCB creation techniques:

  [Comparison pictures: photoengraving, toner transfer with transparency, toner transfer with photo inkjet paper, toner transfer with photo laser paper]

[EDIT 2009-09-10] : here are some photos of the glossy paper I use, hoping it’ll help choose the correct type.

Two new boards are available from SirBot Modules. Both are using a new bus cable connector, and were designed using Eagle.
    • a new version of the mainboard: with an integrated i2c bus and a better design
    • a DC motor controller board: it can drive existing H-bridges, such as those found in RC toys. Extremely configurable and extensible, highly documented.
Many months (about six) have passed since the last SirBot release; it’s now time for a new one. Just need to finish the ChangeLog…
Weeks ago, while I was trying to interface my mainboard with a DC motor controller board, I had to dive into the i2c protocol. Serial communication couldn’t do the job, as I plan to connect several daughter boards and thus need to address them. Although RS485 was also possible, it implied a lot of changes, particularly on the PC side, where I’d need an adaptor…

Anyway, just like blinking a LED, i2c is a must-have protocol. Several links helped me a lot, like this one explaining the whole theory, step-by-step, message-by-message. But the most helpful documentation remains the PIC 16F88 datasheet, and Application Note AN734A, which explains quite well how to implement an i2c slave as a state machine. An interesting post also explains that this very same app note has a lot of bugs. I wouldn’t say “a lot”, but there is a little bug about clock stretching: CKP must be set high when receiving a NACK. Anyway…

So, this time, this is about setting up an i2c communication bus between two PIC 16F88s, using Jal v2. The first thing to note is that the 16F88 implements SSP (Synchronous Serial Port), but not MSSP (Master Synchronous Serial Port). This means the 16F88 can’t simply be configured to act as an i2c master; all of this must be done in software. The good news is that the jal v2 standard libraries come with an i2c module which can handle all the i2c protocol subtleties, from a master point of view…

This jal v2 master implementation isn’t based on interrupts: since it’s the master which decides to take control of the bus, there’s no need to react to external events… except when using a multi-master i2c bus. To help with this, the 16F88 can be configured to generate interrupts on START/STOP bits (which define when the bus is available or not). A multi-master i2c bus seems great, but I didn’t give it a try, considering all the debugging time it took me to set up a “simple” 1-master/1-slave bus.

So, the master part is OK; now the slave part. This is where things get tough, particularly without any oscilloscope or digital probe/analyzer. The 16F88 can be configured as a hardware i2c slave. An IC address must be set, and it must be the same as the one the master is using… That’s what I first thought, but it’s not exactly true! Actually, SSP must be configured so that SSPADD contains an 8-bit address, that is, with the 8th bit selecting the read/write address type. Whereas in Jal, the IC address is coded on 7 bits. This means, for instance:


Jal i2c master (declared slave address)   Hardware i2c slave address (SSPADD)
0x2E = 0b00101110                         0x5C = 0b01011100
That is, if the slave is declared with address 0x5C, the master will have to talk to 0x2E, as jalv2 actually shifts it left by 1 bit while building a write or read address. As usual (and as everyone seems to report), i2c communication problems mainly come from address issues.
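The 1-bit shift can be checked with simple shell arithmetic, using the addresses above:

```shell
# 7-bit address used in Jal -> 8-bit address seen by SSPADD on the slave.
JAL_ADDR=0x2E
printf 'write address: 0x%02X\n' $(( JAL_ADDR << 1 ))        # 0x5C
printf 'read  address: 0x%02X\n' $(( (JAL_ADDR << 1) | 1 ))  # 0x5D
```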

Setting up an i2c bus is quite easy from a hardware point of view. The SDA and SCL pins must be connected on each side, on a bus using pull-up resistors to +5V. Different values can be used; they determine the stability and the speed of the bus. I’ve successfully tried 2.2K and 3.3K, which are standard values.

Finally, here are two small Jal programs to test this i2c bus. The master 16F88, also connected via UART, gets a character from serial (from a PC), echoes it and sends it to the slave 16F88. The slave gets this char and processes it (char = char + 1…). The master then gets the result and sends it back to the PC. So, if you type “a”, you’ll get “a” as the echo, then “b” as the slave’s result.
PIC 16F88 i2c master: simple_i2c_master.jal
PIC 16F88 i2c slave: simple_i2c_slave.jal
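From the PC side, the round-trip can be exercised directly from the shell (a sketch; /dev/ttyUSB0 and the baud rate are assumptions, match your UART settings):

```shell
# Send 'a' to the master and read back two bytes: the echo "a",
# then the slave's result "b".
PORT=${PORT:-/dev/ttyUSB0}
if [ -c "$PORT" ]; then
    stty -F "$PORT" 9600 raw -echo
    printf 'a' > "$PORT"
    dd if="$PORT" bs=1 count=2 2>/dev/null; echo
else
    echo "no serial port at $PORT"
fi
```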
I’ve just built another board, using the toner transfer method. The last try required lots of trial and error, still leaving unanswered questions… This time, the first attempt was perfect. Here’s my ultimate recipe:


    • I use glossy photo paper for laser printers (135g/m2)
    • wash your PCB board with soap, then with window cleaner (or a detergent with alcohol)
    • sand your board with ultra-thin sandpaper (600 grit, the kind used for car bodywork)
    • wash the board again with soap, then with window cleaner. It must be absolutely dry
    • preheat the board with your iron (max temperature; mine doesn’t heat much)
    • place the paper on the board. Be careful, it’ll instantly stick to the board
    • iron the paper. Apply a lot of pressure, all over the board, for ~15min (yes, a long time, but it may depend on your iron)
    • continue with the iron tip, redrawing the whole circuit (black tracks will appear through the paper), for ~5min
    • total ironing time: ~20min
    • then place the board in hot water, no soap. Soak it for 30min
    • peel off the paper. Most of it can easily be removed; only the last layer, where the toner sticks, will mostly remain on the board
    • soak it again for 1h
    • peel off the rest of the paper. You may need to remove very small portions of paper between the tracks. Leave the paper on the tracks; it won’t matter while etching, since the tracks remain accurate. This can be time-consuming, but be precise. Also, don’t be paranoid: the toner really sticks well to the board and won’t be removed easily
    • once done, dry the board. Check if you’ve missed some paper
    • clean the board with window cleaner. While it won’t remove the toner, it’ll help to remove paper residue and get an accurate board
    • you’re done. You’ll then need to etch the board. Once done, remove the toner using acetone (nail polish remover works well)

Here’s some pictures:

Once all the paper has been peeled off, here’s the global result: tracks are still covered with paper, but the whole looks accurate. Zooming shows the tracks are perfectly accurate, even the holes in the pads. The text is too fuzzy and won’t be rendered well if left as is. Using window cleaner can help remove extra paper residue.
Paper on the tracks is clearly visible here. What’s important is that the limit between tracks and copper is clean and accurate. After etching, here is the result. Everything looks nice. There are two pen traces which I added. The big trace at the top of the board isn’t consistent at all. Using a PCB pen is definitely not reliable…
Once cleaned… Great… Even the text is readable :)
Tired of using a PCB pen to build my boards… Too time-consuming, not really repeatable, dirty… So, I’ve tested this method, and here are some results and thoughts.

This method consists in printing the circuit using a laser printer. The paper is then ironed on the copper side of a board. The toner is transferred: you have a nice circuit drawn on your board. Because the toner is composed of small plastic particles, it protects the tracks while etching the board. Result: an amazing PCB, easy, cheap, fast. That’s the theory…

Originally, this method seems to have been first used by Thomas P. Gootee. While commercial and expensive paper exists for this (Press’n Peel for instance), he observed he could get the same results using some glossy photo paper made for inkjet printers. The kind of paper is the key factor. And the iron temperature too. And also the ironing time. And the way you peel off the paper. And how you prepare your board. And also how you soak your paper. Lots of parameters, few data…

I’m not comfortable using glossy paper for inkjet on a laser printer. It can stick to the fuser and ruin the printer. Some people mention this. Others claim nice results using normal, standard paper. Some also report amazing results using mailing label backing paper, or glossy photo paper for laser printers. For now, I have tried standard paper, label paper and glossy photo paper for laser printers (see the following results, except for label paper: I didn’t even manage to print on it…).

Everyone seems to report it: the board has to be clean. Very clean. Some say it’s important to prepare it using sandpaper (or the like) so the toner has something to grip. I always follow this advice: clean the board with soap, use very thin sandpaper, clean it again with soap, then clean it again with window cleaner to help dry it. It’s ready.

About my iron: it’s an old one, and it doesn’t heat much. I tend to iron for a long time, while I’m not sure it’s a good idea. I think it depends on how the toner was fixed onto the paper.

While I was trying to build a new SirBot Mainboard, I took several pictures to report what I’ve done, what failed and what has been quite a success…


My first attempt was using standard paper. This was on a very small testing board, and the results were amazing. Then I tried on a real PCB (photo). Several times. At least 4 times (maybe 5). And it always failed… Anyway, whatever the paper type, the board has to be a little bit larger than the paper. Ironing will be easier and more even (hopefully).
Again, whatever the paper, the board has to be cleaned and prepared using sandpaper. I use very thin sandpaper, the kind used to sand car bodywork. I clean it using soap and finally window cleaner (or something with alcohol): it helps to get a dry board.
The paper is then put down on the board, toner side on copper side. Pre-heat the board (through another sheet of paper, without any dust or the like). When you stick the paper to the board, be sure it’s placed right, because the toner will instantly grip the board.
Iron the paper. Use a lot of pressure, everywhere. For this board, I tested different ironing times, 5min to 12min, and all attempts failed… If you’re not ironing enough, some tracks won’t stick to the board (I’ve observed this). If you iron too much, tracks will get fuzzy (I’ve never observed it). I think ironing this board (10cm x 10cm) for at least 10min is OK.
So far so good… Once ironed, put the board in water. Hot, cold? With or without soap? Some say putting the board in cold water helps the toner fix to the board. I’ve experimented with it: the toner also seems to diffuse into the paper, making it harder to remove. But remember, that was standard paper, so it may be OK with other types of paper. Soap can also help to remove the paper. I tend to put the board in medium-hot water (same temperature as for dishes), with a little soap…
After 15min, the board shows bubbles on its surface: every piece of paper without toner gets unstuck (remember, standard paper). This clearly shows how well the toner has fixed to the board. That’s promising… You may not observe those bubbles with glossy papers.
You can then start to gently rub the paper. You should be able to easily remove most of it. Only the last layer will cause problems (and it still causes problems…).
You shouldn’t be doing this, but who could resist… Just be sure not to damage any tracks.
Finally, after 30-45min, you can get this type of result. Some paper is still stuck onto the board, and won’t be removed, even after soaking it overnight. Now, 2 options:

  • you used standard paper: the remaining paper makes tracks fuzzy, dirty. You can’t remove it without damaging them. Like the paper, you’re stuck…
  • you used glossy paper for laser printer: there’s also remaining paper (less, though), but tracks still look thin. That’s ok, you can probably etch the board :)
This picture shows tracks covered with paper, using standard paper. Some tracks are damaged, but the most important thing here is that the tracks are fuzzy, due to irregular paper residue. If etched, you won’t get a workable result… Now, did I iron the board too much? Not enough? Some tracks did not stick to the board. I tend to say “not enough”. This is plausible, as my iron doesn’t heat much. Next time I’ll try ironing for at least 20min…
Another one… this time using glossy photo paper for laser printers. While there’s still paper on the tracks, they are accurate, well delimited and quite consistent. Having paper residue is not a problem; it’s the way it sticks that is important.
Some tracks were damaged (still). I needed to double-check the board and correct those errors with a PCB pen. Note: on these pictures, one pen trace doesn’t mean one error; I redrew some of the tracks to make them larger. There were maybe 3 errors on the whole board. I think I didn’t iron enough. Once ready, put the board into the etchant. I continually move the board in the etchant (18min for this one), to be sure the board is etched everywhere equally (hopefully).
Tadaaaa ! Nothing to say except tracks are nice…
Clean off the remaining toner. People say it cannot be removed without acetone. I use nail polish remover without acetone, and it works perfectly.
Tadaaa (again) ! Ready for soldering !
A nice looking result. I can even read “SirBot Project   Mainboard” and the very small date “2008-04-08”.
So, what to say? Using standard paper won’t produce good results? For sure, at least for me. Glossy photo paper for laser printers is the minimum required. This is also a very first result: I needed a lot of attempts, and I’m not even sure it’s all repeatable. The tricky ironing step probably needs more experiments. For now, I need to solder components onto my new “half-professional looking” PCB…
