When I first released the DeepBeliefSDK for iOS devices, one of the top requests was for an Android version. I’m pleased to say, after some serious technical wrestling, you can now use the image recognition library in your own Android apps! Just download the GitHub repository and run the Android sample code.
I wasn’t expecting the port to be as big a technical challenge as it turned out to be. The algorithm relies on some pretty hefty numerical calculations, and on iOS I was able to use the Accelerate framework, but I was surprised to find that there was no equivalent for Android. I ended up writing my own custom ARM NEON SIMD code, alongside the Eigen library for some linear algebra operations. There don’t seem to be any easy ways to use ATLAS or even OpenBLAS on Android, unfortunately. Both have ARM ports, but neither makes cross-compiling easy, and I couldn’t find any pre-built binaries. I’d love to see that porting happen; it feels like a great Summer of Code project.
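To give a flavor of what hand-written SIMD looks like, here’s a minimal sketch (not the actual SDK code) of a NEON dot product, the kind of inner loop these networks spend most of their time in. The function name `neon_dot` is just for illustration:

```c
#include <arm_neon.h>

// Hypothetical sketch: dot product of two float arrays using NEON intrinsics.
// Processes four floats per iteration, with a scalar loop for the tail.
float neon_dot(const float* a, const float* b, int n) {
    float32x4_t acc = vdupq_n_f32(0.0f);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        acc = vmlaq_f32(acc, va, vb);  // acc += va * vb, lane by lane
    }
    // Horizontal add of the four accumulator lanes.
    float32x2_t sum2 = vadd_f32(vget_low_f32(acc), vget_high_f32(acc));
    sum2 = vpadd_f32(sum2, sum2);
    float result = vget_lane_f32(sum2, 0);
    // Scalar tail for any leftover elements.
    for (; i < n; i++) {
        result += a[i] * b[i];
    }
    return result;
}
```

On a library like Accelerate or OpenBLAS, this is roughly what a call such as a single row of a GEMV boils down to; writing it yourself means owning the alignment, tail handling, and tuning that those libraries normally hide.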
I also hit problems when I was porting multi-core code. I’m a big fan of OpenMP, and the official Android toolchain recently added support for the framework, but if you call it from a non-main thread, it crashes. I could have tried the workaround mentioned in that post, but rebuilding the toolchain was a bit too time-consuming for this project. In the end I was able to do a decent job of using multiple cores, but it was a lot harder than I was hoping.
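For a sense of the alternative, here’s a rough sketch of the kind of manual pthread fan-out you can fall back on when OpenMP isn’t usable. The `WorkChunk` struct, `worker` function, and fixed thread count are hypothetical stand-ins, not the SDK’s actual scheduling code:

```c
#include <pthread.h>
#include <stddef.h>

#define NUM_THREADS 4

// Hypothetical sketch: split the rows of a large computation across
// a fixed pool of pthreads, which is essentially what an OpenMP
// "parallel for" does for you automatically.
typedef struct {
    int start_row;
    int end_row;
    // Pointers to input/output buffers would go here.
} WorkChunk;

static void* worker(void* arg) {
    WorkChunk* chunk = (WorkChunk*)arg;
    for (int row = chunk->start_row; row < chunk->end_row; row++) {
        // process_row(row);  // the per-row kernel, e.g. a NEON dot product
    }
    return NULL;
}

void parallel_rows(int row_count) {
    pthread_t threads[NUM_THREADS];
    WorkChunk chunks[NUM_THREADS];
    const int rows_per_thread = (row_count + NUM_THREADS - 1) / NUM_THREADS;
    for (int t = 0; t < NUM_THREADS; t++) {
        chunks[t].start_row = t * rows_per_thread;
        chunks[t].end_row = (t + 1) * rows_per_thread;
        if (chunks[t].end_row > row_count) {
            chunks[t].end_row = row_count;
        }
        pthread_create(&threads[t], NULL, worker, &chunks[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
    }
}
```

The join at the end is what replaces OpenMP’s implicit barrier; everything else (chunk sizing, load balancing, thread reuse) becomes your problem, which is why losing OpenMP hurts.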
The final hurdle was profiling. I’ve been spoiled by Xcode’s Instruments profiling tool, and the best I could find for the kind of native code I’m running is this injected library that outputs gprof timing information. I ended up resorting to the old standby of shotgun profiling: doing a binary search by commenting out blocks of code to understand where the time is going. I managed to speed up the classification to around 650ms on a Galaxy S5, but I’m sure with better visibility I can squeeze some more out in the future.
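If you end up shotgun profiling too, a crude wall-clock helper goes a long way. This is just a sketch of the general approach, not what the SDK ships with; `now_ms` and `classify_image` are illustrative names:

```c
#include <stdio.h>
#include <time.h>

// Hypothetical sketch: a crude timing helper for shotgun profiling,
// for when a proper native profiler isn't available.
static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (ts.tv_sec * 1000.0) + (ts.tv_nsec / 1000000.0);
}

void classify_image(void) {
    double start = now_ms();
    // ... suspect block of code; comment blocks in and out to
    // binary-search for the hot spot ...
    double elapsed = now_ms() - start;
    // On Android you'd typically route this through
    // __android_log_print from <android/log.h> instead of printf.
    printf("block took %.2f ms\n", elapsed);
}
```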
I think most of my issues came from going against the grain of the platform. On iOS, everything’s C-like compiled code, whereas that’s a special case on Android. Java/JVM is the default, so it’s not too surprising that the tools are focused there instead. I’m excited to see what folks build with this now that it’s out there. I hope you have fun!
Cool stuff. I also recently ported a project from iOS to Android and experienced the same kinds of problems. Especially the lack of any alternative to the Accelerate framework sucks.
I tried to run the Android example code on a Sony Xperia Z1 and got the following segfault:
http://pastebin.com/YZBm8mU5
Any idea what I’m doing wrong?
Hey, thanks very much. I am creating an app for skin disease diagnosis, so I want to build a network for image classification. Can you help me with how to create that .ntwk file in Caffe?