Got an hour-long video and not really into manually creating subtitles? No plans to put it on YouTube for its automated transcription service? Then try Google Cloud Speech-to-Text! In this post I’ll share some scripts for automating the process and creating an .srt file to go along with your video for displaying the subtitles.
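To give a taste before the full scripts: here’s a minimal sketch of the idea, assuming the audio has already been extracted from the video and uploaded to a GCS bucket. The bucket URI, the FLAC encoding and the naive 7-words-per-cue grouping are illustrative placeholders, and the exact calls vary a bit between versions of the google-cloud-speech client library.

```python
# Minimal sketch: transcribe pre-extracted audio with word timestamps, then write
# naive .srt cues. Bucket URI, encoding and cue grouping are placeholders.
from google.cloud import speech

def srt_time(seconds):
    # format seconds as an SRT timestamp: HH:MM:SS,mmm
    ms = int(round((seconds % 1) * 1000))
    s = int(seconds)
    return "%02d:%02d:%02d,%03d" % (s // 3600, (s % 3600) // 60, s % 60, ms)

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    language_code="en-US",
    enable_word_time_offsets=True,  # word-level timing is what drives the subtitles
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/my-video-audio.flac")
response = client.long_running_recognize(config=config, audio=audio).result()

words = [w for r in response.results for w in r.alternatives[0].words]
with open("subtitles.srt", "w") as srt:
    for i in range(0, len(words), 7):  # naive grouping: 7 words per cue
        chunk = words[i:i + 7]
        srt.write("%d\n%s --> %s\n%s\n\n" % (
            i // 7 + 1,
            srt_time(chunk[0].start_time.total_seconds()),
            srt_time(chunk[-1].end_time.total_seconds()),
            " ".join(w.word for w in chunk)))
```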
Author: Roy
ImageMagick has a built-in montage tool. It’s good enough for casual montaging, but it’s definitely suboptimal for packing images of varying sizes.
Mastering OpenCV 4 – my new book!
I’m very excited to announce the publication of my latest Mastering OpenCV book!
With many new chapters and all the others re-written practically from scratch, this edition is by far the best ever.
The excellent David Millán Escrivá and I go deep and wide across the range of capabilities of OpenCV, explaining the theory and implementing recent real-world vision tasks from the ground up.
It’s been baking for many months in the oven, rising slowly, and finally ready for consumption… yum!
The sources are free to grab: https://github.com/PacktPublishing/Mastering-OpenCV-4-Third-Edition
And copies are available on
Amazon: https://amzn.to/2Ff1mmE
Packt: https://www.packtpub.com/application-development/mastering-opencv-4-third-edition?utm_source=github&utm_medium=repository&utm_campaign=9781789533576
Enjoy reading!
Hey-o
Just sharing a code snippet to warp images to cylindrical coordinates, in case you’re stitching panoramas in Python OpenCV…
This is an improved version of what I showed in class some time ago…
It runs VERY fast: no loops involved, all matrix operations. In C++ this code would look gnarly… Thanks, NumPy!
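In a nutshell, the trick looks roughly like the minimal sketch below; the focal-length parameter and the use of cv2.remap are my illustrative choices, not necessarily the exact code from the snippet.

```python
# Rough sketch of a no-loop cylindrical warp: build the inverse mapping with NumPy
# and let cv2.remap do the sampling. The focal length f is a free parameter.
import cv2
import numpy as np

def cylindrical_warp(img, f):
    h, w = img.shape[:2]
    # destination (cylindrical) pixel grid
    y, x = np.indices((h, w), dtype=np.float32)
    theta = (x - w / 2.0) / f    # angle around the cylinder axis
    height = (y - h / 2.0) / f   # height along the cylinder axis
    # unproject cylinder points to 3D and reproject onto the original image plane
    x_src = (f * np.tan(theta) + w / 2.0).astype(np.float32)
    y_src = (f * height / np.cos(theta) + h / 2.0).astype(np.float32)
    return cv2.remap(img, x_src, y_src, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# e.g. warped = cylindrical_warp(cv2.imread("frame.jpg"), f=700.0)
```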
Enjoy!
Roy
Reporting on a project I worked on for the last few weeks – porting the excellent Gesture Recognition Toolkit (GRT) to Python.
Right now it’s still a pull request: https://github.com/nickgillian/grt/pull/151.
Not exactly porting; rather, I’ve simply added Python bindings to GRT that let you access the GRT C++ APIs from Python.
Did it using the wonderful SWIG project. Such a wondrous tool, SWIG is. Magical.
Here are the deets
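To give a flavor of what the bindings feel like from Python, here’s a rough sketch; the module name GRT and the idea that the methods mirror the C++ API one-to-one are my assumptions – the PR has the real details.

```python
# Rough sketch of SWIG-generated usage; names assume the bindings mirror the
# C++ API (GestureRecognitionPipeline, ClassificationData, ANBC) one-to-one.
import GRT

# a tiny 2D training set with two class labels
training_data = GRT.ClassificationData()
training_data.setNumDimensions(2)
training_data.addSample(1, GRT.VectorFloat([0.1, 0.2]))
training_data.addSample(2, GRT.VectorFloat([0.9, 0.8]))

# wrap a Naive Bayes (ANBC) classifier in a pipeline, train, then predict
pipeline = GRT.GestureRecognitionPipeline()
pipeline.setClassifier(GRT.ANBC())
pipeline.train(training_data)
pipeline.predict(GRT.VectorFloat([0.15, 0.25]))
print(pipeline.getPredictedClassLabel())
```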
This is my first trial at using Jupyter notebook to write a post, hope it makes sense.
I’ve recently taught a class on generative models: http://hi.cs.stonybrook.edu/teaching/cdt450
In class we’ve manipulated face images with neural networks.
One important thing I found helpful is to align the images so the facial features overlap.
It helps the nets learn the variance in faces better, rather than waste their “representation power” on the shift between faces.
The following is some code to align face images using the excellent Dlib (Python bindings) http://dlib.net. First I’m using a standard face detector, and then the facial features extractor; that information is then used for a complete alignment of the face.
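A condensed sketch of that flow is below; the 68-landmark model file, the landmark indices I average for the eyes and mouth, and the output size are my illustrative choices rather than the exact parameters from the notebook.

```python
# Condensed sketch of the detect -> landmarks -> align flow. Model path, eye/mouth
# landmark indices and output size are illustrative choices.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def align_face(img, out_size=256):
    faces = detector(img, 1)
    if not faces:
        return None
    shape = predictor(img, faces[0])
    pts = np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float32)
    # eye and mouth centers from the 68-point annotation
    left_eye = pts[36:42].mean(axis=0)
    right_eye = pts[42:48].mean(axis=0)
    mouth = pts[48:68].mean(axis=0)
    # map eyes and mouth to fixed positions in the output image
    src = np.float32([left_eye, right_eye, mouth])
    dst = np.float32([[0.3 * out_size, 0.35 * out_size],
                      [0.7 * out_size, 0.35 * out_size],
                      [0.5 * out_size, 0.75 * out_size]])
    M = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(img, M, (out_size, out_size))
```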
After the alignment – I’m just having fun with the aligned dataset 🙂
I’ve recently made a tutorial on using Docker for machine learning purposes, and I thought I’d also publish it here: http://hi.cs.stonybrook.edu/teaching/docker4ml
It includes videos, slides and code, with hands-on demonstrations in class.
A GitHub repo holds the code: https://github.com/royshil/Docker4MLTutorial
I made several scripts to make it easy to upload Python code that performs an ML inference (“prediction”) operation on AWS Lambda.
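To give a sense of the shape of such a function, here’s a bare-bones sketch of a Lambda inference handler; the pickled model file, the sklearn-style predict() call and the event fields are placeholders, not the code from the repo.

```python
# Bare-bones sketch of an AWS Lambda handler for ML inference; the model file,
# the predict() call and the event fields are placeholders.
import json
import pickle

# load the model once, outside the handler, so it is reused across warm invocations
with open("model.pkl", "rb") as f:
    MODEL = pickle.load(f)

def lambda_handler(event, context):
    features = json.loads(event["body"])["features"]
    prediction = MODEL.predict([features]).tolist()
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction}),
    }
```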
Enjoy!
Roy.
Last time I posted about cross-compiling TF for the TK1. That, however, was a canned sample from TF, based on the Bazel build system.
Let’s say we want to build our own TF C++ app and just link against TF for inference on the TK1.
Now that’s a huge mess.
First we need to cross-compile TF with everything built in.
Then we need protobuf cross-compiled for the TK1.
Bundle everything together, cross(-compile) our fingers and pray.
The prayer doesn’t help. But let’s see what does…
I’ve been looking around for a solid resource on how to get TensorFlow to run on the Jetson TK1. Most of what I found covered getting TF 0.8 to run, which was the last TF version to allow using cuDNN 6, the latest version available for the TK1.
The TK1 is an aging platform whose support has been halted, but it is still a cheap option for high-powered embedded compute. Unfortunately, being so outdated, it’s impossible to get the latest and greatest DNN stacks working on the TK1’s CUDA GPU, but we can certainly use the CPU!
So a word of disclaimer – this compiled TF version will not use the GPU, just the CPU. However, it will let you run the most recent NN architectures with the latest layer implementations.
Cross-compiling for the TK1 solves the acute problem of storage space on the device itself, as well as compilation speed. On the other hand, it requires setting up a cross-compilation toolchain, which took a while to find.
I am going to assume an Ubuntu 16.04 x86_64 machine, which is what I have, and really you can do this just as well in a VM or a Docker container on Windows.
For a class I’m teaching (on deep learning and art) I had to create a machine that auto-starts a Jupyter notebook with TensorFlow and GPU support. Just create an instance and presto – a Jupyter notebook with TF and GPU!
How awesome is that?
Well… building it wasn’t that simple.
So for your enjoyment – here’s my recipe: