Categories
machine learning programming school work

No More Software Left to Write

TLDR: Traditional software engineering is becoming commoditized. Infrastructure, deployment, and development have become incredibly easy thanks to modern tools and platforms. While this means utility software (business applications) will likely be automated away, there’s still room for creativity and personal impact. Even though most software has already been written or will be handled by AI, developers should focus on writing software that matters to them or makes a difference to others, rather than waiting for even better tools.

The world of software engineering and development is changing at a breakneck pace. As someone who's been doing SWE for nearly 40 years (since I was 6 years old), and professionally for nearly 25 (since I started getting paid for SWE work), I am concerned, but still hopeful.

What do I mean by “no more software left to write”? It means a few things:

  • Software infrastructure has become so widely developed that writing new applications today — by hand; we'll get to AI later — is 100x easier, faster, more secure, and more optimized than just 5 years ago, and this rate of improvement is holding steady, meaning that in 5 years it will be 100x faster / easier / better than today.
  • The amount of software being written has risen dramatically. The sheer volume of applications and projects has increased, and among them are open-source projects that are easily copied or integrated. It's nearly impossible today to find a corner of human pursuit that has been untouched by software or digitization.
  • Software (and software engineering) has become more like LEGO than it has ever been: engineers (builders) can piece together an application in minutes, with advanced features and production-ready backends in just a few lines of code.
  • Software deployment has been automated to an immense degree: an app (web, mobile, desktop) can be packaged and served online (or shipped as a downloadable executable) with a single terminal command, with all backend services, testing, and integrations fully managed by someone else.
  • AI coding has made it possible to “write” full applications ad hoc to a user's needs almost instantaneously, making bespoke engineering obsolete – apps can be generated on the fly for a single use! The age of disposable food containers has arrived in software engineering.
  • Hardware platforms are more generous than ever before, with memory, compute, and disk capabilities that make runtime optimization largely a thing of the past. You can brute-force your way to software success and deal with the consequences later, if they ever become an issue.

All this means it has never been a better time to be a software engineer – and never a worse time to start as one. If you're starting out today, take note of the rate of change in the field – it is exponential. The tools and paradigms used today will be obsolete or woefully outdated in just 2-3 years. Except for the deepest of technologies, application engineering has been commoditized to a pulp.

Categories
code machine learning programming Stream video

CleanStream OBS Plugin: Remove Filler Words with Whisper CPP

CleanStream OBS Plugin is a powerful tool that cleans unwanted words, filler words, and profanities out of live audio streams. Written in C++, this plugin can improve the quality of live streams while saving time and effort in post-processing. In this blog post, we will take a detailed walk-through of the code for my CleanStream OBS plugin, explaining how it is built and its core functionality.

Categories
code graphics machine learning opencv opengl programming video vision

Building an OBS Background Removal Plugin: A Walkthrough

In this blog post, we will take a closer look at the development of the OBS Background Removal Plugin, discussing its key components, functionalities, and the process behind building it. The plugin was created to address the need for virtual green screen and background removal capabilities in OBS (Open Broadcaster Software), a popular live streaming and recording software. With over 500,000 downloads and ongoing contributions from various developers, the OBS Background Removal Plugin has gained significant traction in the streaming community. Whether you’re interested in understanding how this plugin works or considering building a similar plugin yourself, this walkthrough will provide valuable insights.

Categories
code machine learning python

Take a SWIG out of the Gesture Recognition Toolkit (GRT)

Reporting on a project I worked on for the last few weeks – porting the excellent Gesture Recognition Toolkit (GRT) to Python.
Right now it’s still a pull request: https://github.com/nickgillian/grt/pull/151.
Not exactly porting – rather, I've simply added Python bindings to GRT that allow you to access the GRT C++ APIs from Python.
I did it using the wonderful SWIG project. Such a wondrous tool, SWIG is. Magical.
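To give a flavor of what the bindings make possible, here's a purely hypothetical sketch – the module and class names below simply mirror GRT's C++ API (GestureRecognitionPipeline, ClassificationData, ANBC) and may not match exactly what the bindings in the pull request expose:

    # Hypothetical sketch: assumes SWIG exposes GRT's C++ classes under a "GRT" module,
    # with names mirroring the C++ API. The actual bindings may differ.
    import GRT

    # Load labelled training data (GRT's CSV format).
    training_data = GRT.ClassificationData()
    training_data.load("training_data.csv")

    # Build a pipeline around a naive Bayes classifier (ANBC) and train it.
    pipeline = GRT.GestureRecognitionPipeline()
    pipeline.setClassifier(GRT.ANBC())
    pipeline.train(training_data)

    # Classify a new sample; VectorFloat is GRT's float vector type.
    sample = GRT.VectorFloat([0.1, 0.2, 0.3])
    if pipeline.predict(sample):
        print(pipeline.getPredictedClassLabel())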
Here are the deets

Categories
code machine learning opencv programming python vision

Aligning faces with py opencv-dlib combo

This is my first attempt at using a Jupyter notebook to write a post – hope it makes sense.
I’ve recently taught a class on generative models: http://hi.cs.stonybrook.edu/teaching/cdt450
In class we’ve manipulated face images with neural networks.
One important thing I found that helped is to align the images so the facial features overlap.
It helps the nets learn the variance in faces better, rather than waste their “representation power” on the shift between faces.
The following is some code to align face images using the excellent Dlib (Python bindings) http://dlib.net. First I use a standard face detector, and then I run the facial features extractor and use that information for a complete alignment of the face.
After the alignment – I’m just having fun with the aligned dataset 🙂
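For reference, here's a minimal sketch of the detection-plus-landmarks step using dlib's Python bindings – not the notebook's actual code, and it assumes you've downloaded dlib's pretrained 68-point predictor file separately:

    # Minimal sketch: detect a face with dlib and extract the 68 facial landmarks,
    # which can then drive the alignment (e.g. a similarity transform on the eye
    # corners, done with OpenCV in the full version). Assumes the pretrained model
    # file "shape_predictor_68_face_landmarks.dat" is available locally.
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    img = dlib.load_rgb_image("face.jpg")
    faces = detector(img, 1)  # upsample once to catch smaller faces

    for rect in faces:
        shape = predictor(img, rect)
        landmarks = [(p.x, p.y) for p in shape.parts()]
        # landmarks now holds 68 (x, y) points; alignment means warping the image
        # so that chosen points (e.g. eye centers) land on fixed template coordinates.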

Categories
code linux machine learning python

Build your AWS Lambda Machine Learning Function with Docker

I’ve recently made a tutorial on using Docker for machine learning purposes, and I thought I’d also publish it here: http://hi.cs.stonybrook.edu/teaching/docker4ml
It includes videos, slides and code, with hands-on demonstrations in class.
A GitHub repo holds the code: https://github.com/royshil/Docker4MLTutorial
I made several scripts to make it easy to upload Python code that performs an ML inference (“prediction”) operation on AWS Lambda.
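To give an idea of the shape such a function takes, here's a hedged sketch of a Lambda inference handler – the model file name, event keys, and scikit-learn-style predict() call are placeholders, not necessarily what the tutorial's scripts use:

    # Hedged sketch of an AWS Lambda handler for ML inference.
    # "model.pkl" and the "features" event key are placeholders.
    import json
    import pickle

    # Load the model once, at module scope, so warm invocations reuse it.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    def lambda_handler(event, context):
        features = event["features"]            # e.g. a list of numbers from the caller
        prediction = model.predict([features])  # scikit-learn-style predict()
        return {
            "statusCode": 200,
            "body": json.dumps({"prediction": prediction.tolist()}),
        }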
Enjoy!
Roy.

Categories
cmake code linux machine learning programming

Cross Compile TensorFlow C++ app for the Jetson TK1

Last time I posted about cross-compiling TF for the TK1. That, however, was a canned sample from TF, based on the Bazel build system.
Let’s say we want to make our own TF C++ app and just link against TF for inference on the TK1.
Now that’s a huge mess.
First we need to cross-compile TF with everything built in.
Then we need protobuf cross-compiled for the TK1.
Bundle everything together, cross(-compile) our fingers and pray.
The prayer doesn’t help. But let’s see what does…

Categories
linux machine learning Solutions

Cross-compile latest Tensorflow (1.5+) for the Nvidia Jetson TK1

Been looking around for a solid resource on how to get TensorFlow to run on the Jetson TK1. Most of what I found was how to get TF 0.8 to run, which was the last TF version to allow the use of cuDNN 6, the latest version available for the TK1.
The TK1 is an aging platform with halted support, but it is still a cheap option for high-powered embedded compute. Unfortunately, being so outdated, it’s impossible to get the latest and greatest DNNs to work on the TK1’s CUDA GPU, but we can certainly use the CPU!
So a word of disclaimer – this compiled TF version will not use the GPU, just the CPU. However, it will let you run the most recent NN architectures with the latest layer implementations.
Cross-compilation for the TK1 solves the acute problem of space on the device itself, as well as compilation speed. On the other hand, it requires bringing up a cross-compilation toolchain, which took a while to find.
I’m assuming an Ubuntu 16.04 x86_64 machine, which is what I have; you can do this just as well in a VM or a Docker container on Windows.

Categories
linux machine learning python

An automatic Tensorflow-CUDA-Docker-Jupyter machine on Google Cloud Platform


For a class I’m teaching (on deep learning and art) I had to create a machine that auto-starts a Jupyter notebook with TensorFlow and GPU support. Just create an instance and presto – a Jupyter notebook with TF and GPU!
How awesome is that?
Well… building it wasn’t that simple.
So for your enjoyment – here’s my recipe: