Just wanted to report on completing a quick electronics prototyping project – a tiny, simple ATTiny45-based USB dongle.
I used the blueprints from here: http://macetech.com/blog/?q=node/46
Milled the board on the Roland Modela in the Media Lab shop, stuffed it with components obtained from DigiKey (links; there are only 5…), and programmed it with a combination of the code from macetech and this Instructables tutorial, and voilà – it works.
The advantage of the ATTiny45-20 (as opposed to the other speed grades of the 45, e.g. the ’10’) is that it works at 5V, which is what you get from USB, and it also has an internal oscillator and PLL that can go over 20MHz (@ 4.5V-5.5V), which exceeds USB’s requirement of 16MHz. The ATTinyX5-10 won’t do, as it doesn’t go over 10MHz. So basically this chip is all you need to create a USB device – how awesome is that?
This is the image I used for milling the board (it is to scale at 500 DPI, meaning you can use it to mill your own). The red marking is the outline for cutting the board out:
Using Poppler, of course!
Poppler is a very useful library for handling PDFs, as I’ve discovered lately. Having tried both MuPDF and ImageMagick’s Magick++ without success, Poppler stepped up to the challenge and paid off.
So here’s a small example of how to work the API (with OpenCV, naturally):
#include <iostream>
#include <opencv2/opencv.hpp>
#include <poppler-document.h>
#include <poppler-page.h>
#include <poppler-page-renderer.h>
#include <poppler-image.h>

using namespace cv;
using namespace std;
using namespace poppler;

Mat readPDFtoCV(const string& filename, int DPI) {
    // load the document and grab the first page
    document* mypdf = document::load_from_file(filename);
    if (mypdf == NULL) {
        cerr << "couldn't read pdf\n";
        return Mat();
    }
    cout << "pdf has " << mypdf->pages() << " pages\n";
    page* mypage = mypdf->create_page(0);

    // render the page to an in-memory image at the requested resolution
    page_renderer renderer;
    renderer.set_render_hint(page_renderer::text_antialiasing);
    image myimage = renderer.render_page(mypage, DPI, DPI);
    cout << "created image of " << myimage.width() << "x" << myimage.height() << "\n";

    // wrap the Poppler buffer in a Mat header (note the row stride) and copy it out
    Mat cvimg;
    if (myimage.format() == image::format_rgb24) {
        Mat(myimage.height(), myimage.width(), CV_8UC3, myimage.data(), myimage.bytes_per_row()).copyTo(cvimg);
    } else if (myimage.format() == image::format_argb32) {
        Mat(myimage.height(), myimage.width(), CV_8UC4, myimage.data(), myimage.bytes_per_row()).copyTo(cvimg);
    } else {
        cerr << "PDF format no good\n";
        return Mat();
    }

    delete mypage;
    delete mypdf;
    return cvimg;
}
All you have to do is give it a filename and the DPI (say you want to render at 100 DPI).
Keep in mind it only renders the first page, but getting the other pages is just as easy.
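A minimal usage sketch (the file names here are just examples) might look like this; depending on the pixel format Poppler gives you, a cv::cvtColor() channel swap may be needed before saving:

#include <opencv2/opencv.hpp>

int main() {
    // render the first page of a (hypothetical) PDF at 100 DPI
    cv::Mat img = readPDFtoCV("drawing.pdf", 100);
    if (img.empty()) return 1;
    cv::imwrite("page0.png", img); // channel order may need a swap, depending on the format
    return 0;
}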
That’s it, enjoy!
Roy.
This is the first post on the blog in Hebrew, since it deals with Israeli names in Hebrew. Lately I’ve been busy mining names from web pages, and I quickly realized I wouldn’t get far without a list of words that are actually names, so I could easily separate the text from the names.
I couldn’t find such a simple list, although the Technion’s MILA (מ.י.ל.ה) site has a fairly extensive lexicon of Hebrew words, with tagging that includes names. Even though it would be easy to pull the names out of there with JAXB over the XML schema, I didn’t do it for lack of time and patience.
So I made a list myself. I started from a database of names I already had, split each entry into first and last name by spaces, and then got on with the mining work, which added a great many names to the collection.
Afterwards I went back to my database and counted the occurrences of each name as a first name and as a last name, to help with future mining. That way you can find more names, for example by taking the word that comes right before a very unambiguous last name.
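Just to make the splitting-and-counting step concrete, here is a minimal sketch (my illustration, not the exact code I used; it assumes a hypothetical UTF-8 text file, names.txt, with one "First Last" name per line):

#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main() {
    std::ifstream in("names.txt"); // hypothetical input file, one full name per line
    std::map<std::string, int> first_counts, last_counts;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string first, last;
        // split on whitespace: first token = first name, second token = family name
        if (ss >> first >> last) {
            first_counts[first]++;
            last_counts[last]++;
        }
    }
    for (const auto& p : first_counts)
        std::cout << p.first << "\t" << p.second << " occurrences as a first name\n";
    return 0;
}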
That said, some names are very confusing in terms of whether they are a first or last name, for example “גל” (Gal), “שלום” (Shalom), or “ברק” (Barak). Others are clearly one or the other, like “אהוד” (Ehud) or “לוי” (Levi).
In any case, here is the list for your free use.
Please keep in mind that this is a very partial list, and the name counts are just as partial.
hebrew_names
This is the first Hebrew-language post on the MTT blog, since it deals with names in Hebrew. This is not a translation of the post above, just a preamble to it. I’ve collected a list of Hebrew first and last names and counted the number of times each name appears as a first or last name in a private database of names. The result may be useful for anyone extracting Hebrew names from the web.
Enjoy!
Roy.
Years ago I wanted to implement PTAM. I was young and naïve 🙂
Well, I got a few moments to spare on a recent sleepless night, and I set out to implement the basic bootstrapping step: initializing a map from a planar object – no known markers needed – and then tracking it for augmented reality purposes.
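To give a flavor of what that bootstrapping amounts to, here is a hedged sketch of the general homography-based idea (not my exact code; it assumes you already have the camera intrinsic matrix K and OpenCV 3.x for cv::decomposeHomographyMat): track features from a reference view, fit a homography with RANSAC, and decompose it into candidate rotations and translations.

#include <opencv2/opencv.hpp>
#include <vector>

void bootstrapFromPlane(const cv::Mat& refGray, const cv::Mat& curGray, const cv::Mat& K)
{
    // detect corners in the reference frame and track them into the current frame
    std::vector<cv::Point2f> refPts, curPts;
    cv::goodFeaturesToTrack(refGray, refPts, 500, 0.01, 10);

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(refGray, curGray, refPts, curPts, status, err);

    // keep only the successfully tracked points
    std::vector<cv::Point2f> p0, p1;
    for (size_t i = 0; i < status.size(); i++)
        if (status[i]) { p0.push_back(refPts[i]); p1.push_back(curPts[i]); }

    // a planar scene induces a homography between the two views
    cv::Mat inliers;
    cv::Mat H = cv::findHomography(p0, p1, cv::RANSAC, 3.0, inliers);

    // decompose into candidate {R, t, n} solutions; the right one still has to be
    // disambiguated (points must lie in front of both cameras, plane must be visible)
    std::vector<cv::Mat> Rs, ts, normals;
    cv::decomposeHomographyMat(H, K, Rs, ts, normals);
}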
Just sharing a code snippet showing how to implement a jQuery+Bootstrap progress bar for a background operation in Tapestry 5. There’s not a lot to it, but it took me a while and some serious digging through the internet to figure out how to make it work. Essentially it’s based on a couple of examples and references I found:
- http://permalink.gmane.org/gmane.comp.java.tapestry.user/85776
- https://github.com/uklance/tapestry-stitch
But I simplified things, because I don’t like the over-engineering Java can easily lead you into…
You already know I love libQGLViewer. So here’s a snippet on how to do AR in a QGLViewer widget. It only requires a couple of tweaks/overloads to the plain vanilla widget setup (using the matrices properly, disabling the mouse binding) and it works.
The major problem I see with getting working AR from OpenCV’s intrinsic and extrinsic camera parameters is translating them to OpenGL. I saw a whole lot of solutions online, and I contributed from my own experience a while back, so I want to reiterate it here in the context of libQGLViewer, with a couple of extra tips.
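The core of it is building the OpenGL projection matrix from the OpenCV intrinsic matrix. Below is a hedged sketch of that conversion under one common convention (it is not lifted verbatim from my widget code): it assumes the modelview matrix already includes the usual OpenCV-to-OpenGL axis flip (y and z negated), and the signs of the principal-point terms change if you handle the flip differently.

#include <opencv2/opencv.hpp>

// K is the 3x3 CV_64F intrinsic matrix; glP receives a column-major 4x4 projection
// matrix suitable for glLoadMatrixd() / QGLViewer.
void intrinsicsToGLProjection(const cv::Mat& K, int width, int height,
                              double znear, double zfar, double glP[16])
{
    const double fx = K.at<double>(0,0), fy = K.at<double>(1,1);
    const double cx = K.at<double>(0,2), cy = K.at<double>(1,2);

    for (int i = 0; i < 16; i++) glP[i] = 0.0;
    glP[0]  =  2.0 * fx / width;                  // focal lengths in NDC units
    glP[5]  =  2.0 * fy / height;
    glP[8]  =  1.0 - 2.0 * cx / width;            // principal point offset
    glP[9]  =  2.0 * cy / height - 1.0;           // (sign depends on your y-flip convention)
    glP[10] = -(zfar + znear) / (zfar - znear);   // standard OpenGL depth mapping
    glP[11] = -1.0;
    glP[14] = -2.0 * zfar * znear / (zfar - znear);
}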
Just sharing a simple recipe for a video stabilizer in OpenCV based on goodFeaturesToTrack() and calcOpticalFlowPyrLK().
Well… it’s a bit more than 20 lines, but it is short. And it doesn’t work for every kind of video (although the results are funny anyway! :).
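For reference, here is a rough sketch of the per-frame step (not the exact code from the recipe, and the trajectory smoothing that makes the output watchable is left out for brevity): track corners from the previous frame, estimate a similarity transform, and warp the current frame back to cancel the motion.

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat stabilizeFrame(const cv::Mat& prevGray, const cv::Mat& curGray, const cv::Mat& curFrame)
{
    // corners in the previous frame, tracked into the current one
    std::vector<cv::Point2f> prevPts, curPts;
    cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 30);

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, curGray, prevPts, curPts, status, err);

    std::vector<cv::Point2f> p0, p1;
    for (size_t i = 0; i < status.size(); i++)
        if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(curPts[i]); }

    // similarity transform mapping the current points back onto the previous frame
    // (OpenCV 3.2+; on older versions use estimateRigidTransform)
    cv::Mat T = cv::estimateAffinePartial2D(p1, p0);
    if (T.empty()) return curFrame.clone();

    cv::Mat stabilized;
    cv::warpAffine(curFrame, stabilized, T, curFrame.size());
    return stabilized;
}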
So lately I’m into Optical Music Recognition (OMR), and a central part of that is doing staff line removal. That is when you get rid of the staff lines that obscure the musical symbols to make recognition much easier. There are a lot of ways to do it, but I’m going to share with you how I did it (fairly easily) with Hidden Markov Models (HMMs), which will also teach us a good lesson on this wonderfully useful approach.
OMR has been around for ages, and if you’re interested in learning about it [Fornes 2014] and [Rebelo 2012] are good summary articles.
The matter of staff line removal has occupied dozens of researchers for as long as OMR has existed; [Dalitz 2008] gives a good overview. Basically, the goal is to remove the staff lines that obscure the musical symbols, so they are easier to recognize.
But the staff lines are connected to the symbols, so simply removing them will cut up the symbols and leave them hardly recognizable.
So let’s see how we could do this with HMMs.
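As a quick refresher on what HMM decoding looks like in code, here is a plain, generic Viterbi decoder (an illustrative sketch only; it is not the staff-removal model itself, and the transition/emission matrices are assumed inputs):

#include <cmath>
#include <vector>

// Most-likely hidden state sequence for a discrete-observation HMM.
// A[i][j] = P(state j | state i), B[i][o] = P(observation o | state i), pi[i] = P(start in state i).
std::vector<int> viterbi(const std::vector<int>& obs,
                         const std::vector<std::vector<double>>& A,
                         const std::vector<std::vector<double>>& B,
                         const std::vector<double>& pi)
{
    const int N = (int)pi.size(), T = (int)obs.size();
    std::vector<std::vector<double>> logp(T, std::vector<double>(N));
    std::vector<std::vector<int>> back(T, std::vector<int>(N, 0));

    for (int i = 0; i < N; i++)
        logp[0][i] = std::log(pi[i]) + std::log(B[i][obs[0]]);

    // dynamic programming over time steps
    for (int t = 1; t < T; t++)
        for (int j = 0; j < N; j++) {
            double best = -INFINITY; int arg = 0;
            for (int i = 0; i < N; i++) {
                double v = logp[t-1][i] + std::log(A[i][j]);
                if (v > best) { best = v; arg = i; }
            }
            logp[t][j] = best + std::log(B[j][obs[t]]);
            back[t][j] = arg;
        }

    // backtrack the best path
    std::vector<int> path(T);
    int arg = 0; double best = -INFINITY;
    for (int i = 0; i < N; i++) if (logp[T-1][i] > best) { best = logp[T-1][i]; arg = i; }
    path[T-1] = arg;
    for (int t = T-1; t > 0; t--) path[t-1] = back[t][path[t]];
    return path;
}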
I came across an extremely simple color balancing algorithm here, and I thought I’d quickly transcode it to OpenCV.
Here’s the gist:
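In a nutshell (and the code below is my hedged sketch of the idea, not the gist verbatim; it assumes an 8-bit, 3-channel image): for each channel, clip a small percentage of pixels at both ends of the histogram and stretch what remains to the full [0,255] range.

#include <opencv2/opencv.hpp>
#include <vector>

void simplestColorBalance(const cv::Mat& src, cv::Mat& dst, float percent)
{
    std::vector<cv::Mat> channels;
    cv::split(src, channels);

    const float half = percent / 2.0f;
    for (cv::Mat& ch : channels) {
        // sort the pixel values to find the low/high cut points
        cv::Mat flat;
        ch.reshape(1, 1).copyTo(flat);
        cv::sort(flat, flat, cv::SORT_EVERY_ROW | cv::SORT_ASCENDING);
        const int n = flat.cols;
        const uchar low  = flat.at<uchar>(cvFloor(n * half / 100.0f));
        const uchar high = flat.at<uchar>(cvCeil(n * (1.0f - half / 100.0f)) - 1);

        // saturate outside [low, high] and stretch the rest to the full range
        ch.setTo(low,  ch < low);
        ch.setTo(high, ch > high);
        cv::normalize(ch, ch, 0, 255, cv::NORM_MINMAX);
    }
    cv::merge(channels, dst);
}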