Deep learning is a subset of AI and machine learning that uses multi-layered artificial neural networks to deliver state-of-the-art accuracy in tasks such as object detection, speech recognition, language translation, and others.
Deep learning differs from traditional machine learning techniques in that it can automatically learn representations from data such as images, video, or text, without introducing hand-coded rules or human domain knowledge. Its highly flexible architectures can learn directly from raw data and can increase their predictive accuracy when provided with more data.
Deep learning is commonly used across apps in computer vision, conversational AI and recommendation systems. Computer vision apps use deep learning to gain knowledge from digital images and videos. Conversational AI apps help computers understand and communicate through natural language. Recommendation systems use images, language, and a user’s interests to offer meaningful and relevant search results and services.
Deep learning has led to many recent breakthroughs in AI such as Google DeepMind’s AlphaGo, self-driving cars, intelligent voice assistants and many more. With NVIDIA GPU-accelerated deep learning frameworks, researchers and data scientists can significantly speed up deep learning training that could otherwise take days or weeks, reducing it to hours or days. When models are ready for deployment, developers can rely on GPU-accelerated inference platforms for the cloud, embedded devices, or self-driving cars to deliver high-performance, low-latency inference for the most computationally intensive deep neural networks.
NVIDIA AI Platform for Developers
Developing AI applications starts with training deep neural networks on large datasets. GPU-accelerated deep learning frameworks offer the flexibility to design and train custom deep neural networks, and provide interfaces to commonly used programming languages such as Python and C/C++. Every major deep learning framework, such as TensorFlow and PyTorch, is already GPU-accelerated, so data scientists and researchers can become productive in minutes without any GPU programming.
For AI researchers and application developers, NVIDIA Volta and Turing GPUs powered by Tensor Cores give you an immediate path to faster training and greater deep learning performance. With Tensor Cores enabled, FP32 and FP16 mixed-precision matrix multiply dramatically accelerates your throughput and reduces AI training times.
For developers integrating deep neural networks into their cloud-based or embedded applications, the Deep Learning SDK includes high-performance libraries that implement building-block APIs for integrating training and inference directly into their apps. With a single programming model for all GPU platforms, from desktop to data center to embedded devices, developers can start development on their desktop, scale up in the cloud, and deploy to their edge devices with minimal to no code changes.
NVIDIA provides optimized software stacks to accelerate training and inference phases of the deep learning workflow. Learn more on the links below.
Every AI Framework - Accelerated
Deep learning frameworks offer building blocks for designing, training and validating deep neural networks through a high-level programming interface. Every major deep learning framework, including Caffe2, Chainer, Microsoft Cognitive Toolkit, MXNet, PaddlePaddle, PyTorch and TensorFlow, relies on Deep Learning SDK libraries to deliver high-performance multi-GPU accelerated training. As a framework user, it’s as simple as downloading a framework and instructing it to use GPUs for training. Learn more about deep learning frameworks and explore these examples to get started quickly.
Deep Learning Frameworks
Tensor Core Optimized Model Scripts
Unified Platform
Development to Deployment
Deep learning frameworks are optimized for every GPU platform from Titan V desktop developer GPU to data center grade Tesla GPUs. This allows researchers and data scientist teams to start small and scale out as data, number of experiments, models and team size grows. Since Deep Learning SDK libraries are API compatible across all NVIDIA GPU platforms, when a model is ready to be integrated into an application, developers can test and validate locally on the desktop, and with minimal to no code changes validate and deploy to Tesla datacenter platforms, Jetson embedded platform or DRIVE autonomous driving platform. This improves developer productivity and reduces chances of introducing bugs when going from prototype to production.
Get Started With Hands-On Training
The NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers in AI and accelerated computing. Get certified in the fundamentals of Computer Vision through the hands-on, self-paced course online. Plus, check out two-hour electives on Digital Content Creation, Healthcare, and Intelligent Video Analytics.
06 Apr 2017

Building a platform-agnostic cross-compiling toolchain for an ARM-based robot.
In the last couple of months I’ve had the pleasure of working with a young and very interesting robotics company here in Beijing. They are called Vincross, and they are building a robot called HEXA.

*HEXA taking on a Lion*
What’s interesting about this startup is that they are providing HEXA owners with an SDK which they can use to build their own applications for the HEXA, called Skills.
HEXA owners can then publish their Skills to the Skill Store, where they can also download other developers’ Skills.
A Skill consists of two parts:
- Remote - A web application running on a mobile device used to control the HEXA remotely.
- Robot - A Golang application running on the robot, typically where the core Skill logic lives.
When I arrived at Vincross, they had just shipped their first batch of HEXAs to customers, together with an SDK and command-line interface for Skill development.
The Skill development workflow worked as such:
- User runs `mind init` and a Skill project is scaffolded.
- User writes some Golang code and JavaScript code.
- User runs `mind run` and the code is packaged into an .mpk file, which is uploaded to the robot, compiled and then executed.
The reason it had to be compiled on the robot is that the robot uses an ARM processor, while the developer’s machine is most likely using an x86 processor.
Golang supports cross-compiling to the ARM architecture, but abstracting away the build process of Golang applications on all the platforms supported by the MIND SDK (Windows, Linux and macOS) is not trivial. Add to that the compilation of C++ libraries like OpenCV, and bindings to these libraries using SWIG/CGO, and it’s easy to see why the decision of “let’s just compile on the robot instead” makes a lot of sense.
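For the pure-Go part, that cross-compilation support boils down to a handful of environment variables. A minimal sketch (the settings are standard Go toolchain knobs; the compiler name is the Debian/Ubuntu hard-float cross gcc, needed only once cgo/SWIG enters the picture):

```shell
#!/bin/sh
# Sketch: the cross-compilation settings Go understands out of the box.
# GOARM=7 matches the HEXA's ARMv7 hard-float CPU.
cat > /tmp/go-arm-env.sh <<'EOF'
export GOOS=linux GOARCH=arm GOARM=7
export CC=arm-linux-gnueabihf-gcc CGO_ENABLED=1
EOF
# A build would then be:  . /tmp/go-arm-env.sh && go build ./...
cat /tmp/go-arm-env.sh
```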
The benefits we would reap from cross-compiling are:
- Ability to build third-party or non-Golang libs into Skills.
- Skills can theoretically be developed in any language.
- Shorter build times.
- Skills source code can be proprietary and closed.
As we all know, Apple supports cross-compilation of iOS applications both to the simulator running on x86 and to the actual phone, which runs on ARM. However, it’s easy to see why it’s not possible to develop iOS apps on Windows or Linux: Apple just doesn’t want to spend the time porting their own toolchain, dealing with the ins and outs of a third-party operating system, and keeping up with breaking changes when they already have their own hardware, operating system and Xcode.
So how can we build a cross-compiling toolchain that supports both x86 and ARM targets, while at the same time staying platform agnostic?
We do virtualization where it’s needed. And who does that? Docker does.
As long as we can get the whole cross-compiling toolchain working in Linux, we can ship `mind` as a binary whose responsibilities are very simple:
- To make sure Docker is installed
- To download the latest `mindcli` image
- To forward `mind` subcommands into the `mindcli` docker container
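The forwarding idea itself fits in a few lines of shell. A sketch, not the real CLI: the image name `vincross/mindcli` and the mount point are assumptions, and the command is printed rather than executed so no Docker daemon is needed here.

```shell
#!/bin/sh
# Sketch of the forwarding idea: every `mind` subcommand becomes a `docker run`.
# The image name "vincross/mindcli" and the mount point are assumptions.
mind() {
    # print the command instead of running it, so no Docker daemon is needed here
    echo "docker run --rm -it -v $PWD:/workspace vincross/mindcli $*"
}
mind build | tee /tmp/mind-cli-demo.txt
```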
So as far as platform agnosticism goes, we trust that Docker will provide us with that abstraction, and we pray that they do not mess up too often.
Alright, let’s dig into the implementation details:
Cross compiling C/C++
The first goal was to cross-compile C/C++ applications. More specifically, we wanted to cross-compile OpenCV, since it has a lot of features that are useful when you are building a robot that is supposed to visually understand the world.
We decided that, if we manage to cross compile OpenCV and get Go bindings to OpenCV working, our users should be able to do the same for any other library of their choice.
To cross-compile C++ for ARM, all you need is the correct gcc cross-compiling toolchain for your ARM processor. In our case, the HEXA is equipped with an ARMv7 processor with support for hardware floating-point calculations. Thus, we want the arm-linux-gnueabihf version of the cross-compiling tools.
We also want to install Go and set it up for cross compilation.
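The original Dockerfile snippet did not survive in this copy of the article. As a stand-in, a hypothetical sketch of such an image follows; the cross-toolchain package names are the standard Debian/Ubuntu ones, while the base image, Go version and download URL are assumptions:

```dockerfile
# Hypothetical sketch, not the original Dockerfile.
FROM ubuntu:16.04

# C/C++ cross compilers targeting ARMv7 hard-float, plus common build tools
RUN apt-get update && apt-get install -y \
    gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf \
    build-essential cmake swig git curl

# Go, with ARM cross-compilation set as the default target
RUN curl -fsSL https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz \
    | tar -xz -C /usr/local
ENV PATH=$PATH:/usr/local/go/bin \
    GOOS=linux GOARCH=arm GOARM=7 \
    CC=arm-linux-gnueabihf-gcc CGO_ENABLED=1
```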
The Dockerfile in our open-sourced cross-compiler image sets all of this up. The full version also ensures that future packages installed with `apt-get` will include their respective ARM-architecture versions, which is required when installing the build dependencies of OpenCV.

With this Dockerfile in place and built, all we have to do is `docker run` it with the OpenCV source mounted into the container, install a few dependencies with `apt-get`, and execute `cmake` on the OpenCV-provided cmake file. After a pretty long compile time (luckily we only have to compile once), `${OPENCV_ARTIFACTS_DIR}` contains all of OpenCV’s dynamic libraries and header files.

Golang bindings
To generate Golang bindings for C libraries, one would typically use CGO, and when binding C++ libraries, one would use SWIG. Writing Golang bindings can be a pretty mundane process, but in our case we were lucky: some cool people had already gone through the effort of writing Golang bindings for OpenCV using SWIG.
Now, to cross-compile our Golang application using the Golang bindings to OpenCV, all we need to do is `docker run` our container with our source mounted, tell Go how to find its dependencies, and then compile our application as usual.
Running it on the HEXA.
If OpenCV was compiled as a static library, we would just have to upload the binary to the HEXA, execute it and be done with it.
However, since we are linking against a C++ library, we no longer have a statically linked executable, and it will try to find the shared libraries it depends on at run time.
(We could build OpenCV against musl-libc instead, but since the HEXA is running Ubuntu 14.04, we already have glibc anyway.)
But it’s an easy problem to solve:
- Pack the binary and `${OPENCV_ARTIFACTS_DIR}/lib` into a zip file.
- Upload the zip file to the robot and unzip it.
- On the robot, tell the runtime shared-library loader to look for libraries in our `lib/` directory and execute the application.
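The last step above can be sketched with `LD_LIBRARY_PATH`; the directory and binary names here are placeholders, and a throwaway directory under /tmp stands in for the unzipped Skill payload:

```shell
#!/bin/sh
# Sketch: make the dynamic linker search our bundled lib/ directory at run time.
SKILL_DIR=/tmp/skill-demo
mkdir -p "$SKILL_DIR/lib"            # stands in for the unzipped Skill payload
cd "$SKILL_DIR"
# Real usage would be:  LD_LIBRARY_PATH="$PWD/lib" ./skill
LD_LIBRARY_PATH="$PWD/lib" sh -c 'echo "linker will also search: $LD_LIBRARY_PATH"'
```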
Done! We can now cross-compile C/C++/Golang applications on our PC and pack them together for upload and execution on the robot.
However, we for sure don’t want our dear users to have to go through this whole process, so we need to provide them with some sweet abstractions:
Through the MIND command-line interface, users have everything they need to develop Skills for the HEXA.
MIND Software Development Kit
Let’s start by showing a very basic example of a Skill. All it does is make the HEXA stand up.
In that example, we import the `skill` package and the `hexabody` driver, which we use to make the HEXA stand up on its six legs. These two packages are part of the MIND Binary Only Distribution Package, which previously came prebaked on the HEXA. Since we are now compiling inside of a Docker container instead of on the HEXA, we don’t need to ship the package prebaked on the HEXA. Instead, we just put it inside the `GOPATH` on the cross-compiling-capable container.
Let’s delete some code
All of the things that the previous CLI used to do (entrypoint generation, packaging, uploading, installation, execution, log retrieval, communication with the HEXA over websockets, etc.) can now be accomplished with Linux tools and shell scripts instead of thousands of lines of Golang code.
The key to this functionality is this Golang function.
We can implement `mind build` by doing a `docker run` on the container with the current folder mounted, injecting some environment variables, and executing the following shell script inside the container.

We can pack the Skill together as an `.mpk` file using zip, and then serve the `.mpk` file to the HEXA using Caddy.

In addition, all of the websocket logic was rewritten and simplified using the WebSocket Transfer Agent.
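The zip step mentioned above might be sketched like this; the file names are placeholders, the staging directory is an illustrative layout, and the zip command itself is printed rather than executed since `zip` may not be installed everywhere:

```shell
#!/bin/sh
# Sketch: staging a Skill and packing it as an .mpk (which is just a zip file).
STAGE=/tmp/mpk-demo/stage
mkdir -p "$STAGE/lib"
touch "$STAGE/skill" "$STAGE/lib/libopencv_core.so"   # stand-in artifacts
cd /tmp/mpk-demo
# print the command we would run on a machine with zip installed:
echo "zip -r skill.mpk stage/"
```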
As I mentioned earlier, the only thing the MIND CLI has to do is forward subcommands into the docker container. The only exception is scanning the local network for HEXAs.
When scanning the network for HEXAs, the CLI sends UDP packets to the network’s multicast address and waits for a UDP packet to be sent back by the HEXA containing its name and serial number. When doing this, we cannot NAT to the docker container, since that would cause us to lose the packet’s source address. (Maybe we will run the container on the host network in the future.)
Wrapping it all up
The MIND SDK consists of the following parts:
- XCompile Docker Image - An image preconfigured for cross compilation of C/C++ and Golang code to ARM architecture.
- MIND Binary Only Distribution - Used by the Skill to interface with the HEXA hardware.
- MIND JavaScript SDK - Used by the remote part of the Skill to talk to the HEXA.
- Templates and shell scripts used to generate the Skill main entrypoint.
- Makefiles and shell scripts used to compile and pack a Skill with its 3rd party dependencies and assets into an mpk file.
- Scripts to upload, install and execute Skills on the HEXA.
- Scripts to retrieve logs and communicate with the HEXA in realtime using websockets.
All of the parts listed above go through different build pipelines, to finally be packaged into a single docker image published on Docker Hub.
In front of this docker image stands the MIND Command-line Interface abstracting away all of the docker commands.
Since Docker provides the host-operating-system abstraction layer, after getting it to run on macOS and Linux we had close to zero issues getting the whole toolchain working on Windows, both with and without Hyper-V.
Here is an example showing how a user would go about developing a new Skill for the HEXA using the SDK.
To use OpenCV inside a Skill, we can create a simple Makefile or shellscript for building OpenCV:
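The build script itself is not preserved in this copy. A dry-run sketch follows: it prints the commands it would run (so nothing needs to be installed to try it), the directory names are assumptions, and the toolchain file is the ARM one that OpenCV ships under platforms/linux/ in its source tree:

```shell
#!/bin/sh
# Dry-run sketch of an OpenCV cross-build script (prints commands instead of
# executing them; OPENCV_SRC / OPENCV_ARTIFACTS_DIR are illustrative names).
OPENCV_SRC=/opencv
OPENCV_ARTIFACTS_DIR=$PWD/robot/deps
PLAN=/tmp/opencv-build-plan.txt
: > "$PLAN"
plan() { echo "$*" >> "$PLAN"; }

plan mkdir -p "$OPENCV_SRC/build"
plan cd "$OPENCV_SRC/build"
plan cmake -DCMAKE_TOOLCHAIN_FILE=../platforms/linux/arm-gnueabi.toolchain.cmake \
           -DCMAKE_INSTALL_PREFIX="$OPENCV_ARTIFACTS_DIR" ..
plan make -j4 install
cat "$PLAN"
```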
and build OpenCV inside the cross-compiling container by executing `mind x make`, followed by copying the generated libraries and headers into the `robot/deps` folder before building the Skill.

And lastly, by executing `mind upgrade`, the latest version of the MIND SDK container will be pulled down from Docker Hub.

It’s open source!
We open-sourced the whole MIND Software Development Kit on GitHub and hope that it will be useful to HEXA owners as well as other robotics developers.
If you have any comments or suggestions please feel free to post them in the comment section below.
See you next time!
21 Jul 2016

Docker does not make it easy for those who want to do isolated builds of separate applications using shared code in a monorepo.
There are probably many ways to solve it, but for me, finding a way that works consistently for all of the projects and languages in our code base was not trivial. Here I’m going to present a solution that works for us at Traintracks.
This solution is agnostic to language, package manager, build system, project hierarchy and can be implemented in the same way throughout your whole stack. (Please do comment if you notice a case where it’s not)
So here it goes!
Cached dependencies
If you’ve ever used Scala and SBT, you probably know that you’ll have enough time to grow and cut your toenails (might even start eating them) in between builds if your build cache gets reset at each build.
The immutable nature of Docker, plus the fact that SBT does not have a `package.json` or a `requirements.txt` file like `npm`/`pip`, means that we can’t cache our dependencies easily. Every time we update some code, we are back to zero, because the downloading of dependencies and the building of code happen in the same step.
Build containers to the rescue?
It goes pretty much like this.
- You create a container with all the tools to build your application.
- You run the container and tell it to build your application with your project folder mounted into a folder in the container.
- You execute your build inside of the container and everything is persisted on your host for your next build.
All good? Not really, unless you also mounted your ~/.m2 or ~/.ivy2 folder or redirected them somewhere else, and also don’t mind keeping the same build artifacts shared between your host and the Docker container.
Adding to that, if you are in Vagrant and share your workspace volume with your host and have not set up NFS then be prepared for really slow build times.
Besides, you still want to have your static dependencies cached away and separate from your dynamic dependencies so that your team’s code can be built by all engineers regardless of how broken the internet is at that point. This is particularly relevant if you are behind a corporate firewall or in someplace with internet connectivity issues.
That means that your build container needs to already come shipped with the third party dependencies required before we execute the build in it.
To summarize, we need to do an initial build of the application inside the container before it can act as a pre-cached build container. As dependencies update the build container will be rebuilt.
Let’s continue to the next requirement.
Shared code
Maybe you made a nice library with some transformations that you want to use both in your data ingestion app and in your query application. On top of that, maybe one of the engineers on your team enjoys sitting in IntelliJ with all the Scala projects open in the same workspace, modifying the shared library code and recompiling both of his projects from within the IDE.
How do we build individual applications in isolation when they have shared dependencies above themselves in the project hierarchy?
Let’s imagine a monorepo and try to figure out how to build coolapp and awesomeapp, which both share the dependencies lib1 and lib2. We are going to use Golang for this example instead of Scala (for simplicity), but the same concepts apply.
We can’t just execute `docker build -t coolapp .` inside of coolapp, because lib1 and lib2 are outside of its context. However, we can move the context up one directory and specify the dockerfile like this.
We are getting there. But wait, there is a folder in there that is too fat for our Docker context, and we are not even depending on it.
What if we have so many projects in this repo that the size of the build context we send to docker ends up being a huge build time bottleneck?
Typically we would add a .dockerignore file that tells docker which files to ignore when uploading the context but that won’t work here since what we want to ignore is conditional (depending on which app we are building).
So what we need to do is to cherry pick our build context and send it to docker (Note that we’re using GNU Tar and not BSD Tar).
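Concretely, the cherry-picking can look like this. The directory names follow the example monorepo, and the tarball-to-docker pipe at the end is shown as a comment since it needs a Docker daemon:

```shell
#!/bin/sh
# Cherry-pick only the folders coolapp actually needs into the build context.
set -e
cd /tmp && rm -rf mono && mkdir mono && cd mono
mkdir -p coolapp lib1 lib2 fatproject       # fatproject is the folder we must avoid
tar -cf context.tar coolapp lib1 lib2       # fatproject never enters the context
# Real usage: tar -c coolapp lib1 lib2 | docker build -f coolapp/Dockerfile -t coolapp -
tar -tf context.tar
```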
GNU Tar also takes `--exclude-from`, where you can pass a .gitignore or a .dockerignore. Note that .gitignore has expansion rules not supported by Tar, so you are either going to have to tar dependencies individually and concatenate them, ask git for the relevant files, or align to a unified ignore pattern across your libraries.
Let’s have a look at the Dockerfile in coolapp.
Let’s build the container, jump into it, and look at the contents of our GOPATH.
It has downloaded our dependencies from the internet and also built coolapp-builder with our cherry picked dependencies.
Now suppose we have edited a line of code in lib1 and want to rebuild coolapp. We are going to execute the container with the build context mounted at /mount and tell it to do an rsync between /mount and the corresponding folder in the GOPATH.
Remember what I said about .gitignore files passed into Tar; the same applies here.
Now we just have to build the app again with `go get ./...`, and unless you have new internet dependencies since the last build, the build will be as fast as your CPU and disk allow. The final step is to copy our artifacts to somewhere in the mounted folder.
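That copy step can be sketched as follows; the paths are assumptions, and a /tmp directory stands in for the real container mount so the sketch can run anywhere:

```shell
#!/bin/sh
# Sketch: inside the build container, copy the compiled binary into the mounted
# folder so it survives on the host. /tmp stands in for the real mount here.
set -e
MOUNT=/tmp/mount-demo
GOPATH=/tmp/gopath-demo
mkdir -p "$MOUNT/out" "$GOPATH/bin"
touch "$GOPATH/bin/coolapp"          # stand-in for the real build artifact
cp "$GOPATH/bin/coolapp" "$MOUNT/out/"
ls "$MOUNT/out"
```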
Back on our host we can inspect the folder again
So there is your coolapp binary, ready for you to throw into a plain Linux container without any build tools or source code. This will keep your containers lean and will avoid potential leakage of code.
coolapp.dockerfile might look something like this:
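The file itself is missing from this copy of the post; a minimal sketch in its spirit, where the base image and paths are assumptions:

```dockerfile
# Hypothetical sketch: a lean runtime image for the prebuilt binary.
FROM debian:jessie
COPY out/coolapp /usr/local/bin/coolapp
ENTRYPOINT ["/usr/local/bin/coolapp"]
```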
Good ol’ Makefiles
That was a lot of steps, and it might seem like a very troublesome process, but we can actually wrap all of it in this Makefile and work our way towards a generalised solution that will work for all of our projects.
I have created an example repository that you can clone and try out.
To summarise what all of this gave us:
- Pre-cached dependencies without a requirements file.
- Separation between build/run containers.
- No dirty artifacts on host.
- Support for a project hierarchy of your choice.
- Fast builds on shared disks in Vagrant.
- A unified build system for all your applications.
If you think you might have a better solution than what I presented here, or have some cool improvements, please leave me a comment! I’m more than happy to learn how others have tackled these problems.
12 Jul 2016

In the previous article, Safeguarding your deployments with Packer, we explained in theory how we can use Packer to achieve immutable server configurations.
At Traintracks, we not only use Packer for server deployments but also for our development environment.
There are many benefits to this such as:
- Every engineer’s development environment is the same.
- New engineers can start being productive from day one.
- What works on my machine will work on any other engineer’s machine.
- What works on my machine will (probably) work in production.
- Development environment is host-operating-system agnostic (even works for Windows users).
When using Packer for server deployments you want to keep all of your server configurations as immutable as possible. However, for a development environment, it’s just not practical to throw away your devbox and build a new one every time something in the dev environment has been updated.
Instead of only optimising for immutability and consistency we also need to optimise for efficiency (developer hours cost more than computer hours).
This is why we are gonna bring in two new concepts here:
- Static dependencies (dependencies that do not get updated very often, e.g. the operating system, system packages, and third-party software like Docker, Ansible, git, curl, etc.)
- Dynamic dependencies (in-house tooling and configuration files that are constantly iterated on)
We are going to use Packer to pack all of our static dependencies and Ansible to provision our dynamic dependencies inside of Vagrant.
A simple example to clarify what I mean:
At Traintracks we have a remote working culture but most of our engineers are in Beijing.
That means that everything that requires free and fast access to the greater internet goes into our static dependencies (Packer). Third party installation scripts might be pulling from Amazon S3 (blocked in China).
Kubernetes is downloaded from google servers, which means it is also blocked.
Due to internet connectivity and speed limitations we want these types of dependencies to be downloaded and configured once and then distributed to all the team members without anyone having to jump on a VPN to download software dependencies.
Of course we could host these dependencies on our own servers, and we very often do, but for dependencies that do not change a lot (our static dependencies) we prefer to grab them directly from the correct source once and distribute them everywhere, just like we do for our production servers.
So, enough talking and let’s get to it!
Prerequisites
- Packer 0.10 or above
- Vagrant 1.8.1 or above.
- Ansible 2.0 or above.
Assuming you’re on a Mac and use Homebrew:
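The install commands themselves did not survive in this copy; with 2016-era Homebrew they would have looked roughly like this (package names assumed, and `brew cask` was the cask syntax of the time):

```shell
brew install packer ansible
brew cask install virtualbox vagrant
```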
Packer (Static dependencies)
We have prepared a boilerplate for a Packer configuration, very similar to the one we use at Traintracks, which we will use as our base.
This boilerplate will give you a box containing:
- Ubuntu 16.04
- VirtualBox Guest Additions
- Docker, kubectl and kargo
- git, wget, curl, vim, zsh, htop, tmux, ntp
Let’s start by inspecting the packer folder.
devbox.json is the file that explains to Packer how to build the devbox, which files to copy, and which scripts to run. You can also add provisioners for other image types (EC2, VMware, etc.) in here. If you want to use another base operating system, you define that in here and provide a URL and hash sum for the base image.
preseed.cfg will be fetched by the Ubuntu installer from a local web server that Packer has spun-up that will automate the Ubuntu installation by automatically providing answers to all of the installation prompts.
scripts folder contains scripts that make little sense to perform with Ansible. E.g., ansible.sh installs Ansible and cleanup.sh does final cleanup before exporting the box.
playbook.yml is the ansible playbook where you define packages to be installed and other configurations.
To customise the devbox to your needs you will mainly be interested in devbox.json and playbook.yml.
Now we can go ahead and build the devbox with packer.
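The build command itself is missing from this copy; with the boilerplate’s layout it would be along the lines of:

```shell
cd packer
packer build devbox.json
```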
To see the installation progress, you can either watch the preview screen in the VirtualBox UI (select Show from the Machine menu), or set headless to false in the devbox.json file.
Dynamic dependencies
As mentioned earlier your team might have tooling or configuration that is frequently updated which you want to propagate throughout your team more often than you want to build a new box with Packer.
One example could be a company-wide ssh config or a common zshrc file. The boilerplate contains a simple example of how this is done.
Let’s have a look inside the Vagrantfile. Check out the lines between `# PROVISION START` and `# PROVISION END`.

The first three lines copy your host machine’s default ssh keys into the devbox so that you can access your remote machines from the devbox as you would from your host machine. We also copy your git config so that you can make git commits from within the devbox.
After that you can see that we are calling ansible to do the rest of the provisioning using the ansible/playbook.yml file.
Currently all it does is set the default shell to zsh and copy a zshrc file into the vagrant home folder, but it serves as a template for you to add all of the other tools and configurations that go into the devbox.
For example, you can add a company-wide ssh config that is pushed to git; all your teammates have to do to get the new config is a `git pull` followed by a `vagrant provision`.

Once you notice a dynamic dependency is being updated less frequently, you can move it to the static dependencies instead (a mere copy-paste between two Ansible files).
Now let’s add the box to Vagrant, provision it, and start it up!
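The exact commands are missing from this copy; with a Packer-built box they would look roughly like this (box name and output path assumed):

```shell
vagrant box add devbox builds/devbox.box
vagrant up --provision
vagrant ssh
```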
If everything went well you should be greeted with a shell looking like this.
05 Jul 2016

For me, one of the greatest challenges of building our solution was making sure we had the ability to deploy on-premise, or on any cloud provider.
At the root of all the tools we use to make this possible is Packer.
“Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.” - Hashicorp
By using Packer, we can now pack all of our applications and their dependencies into a deployable image, through a single configuration, that can be easily installed on our cloud clusters or on-premise bare-metal clusters.
Traditionally when deploying a cluster of machines you often do the provisioning through a configuration management tool like Ansible, Puppet or Chef.
But whether you are provisioning thousands of servers or only a dozen, not only will it take a considerable amount of time, but things can also fail at every step along the way, and very often do, even with idempotent provisioning scripts.
That’s because even if it ran correctly last time, maybe links on the internet have changed or an external software package was updated during provisioning and it no longer works. You end up trusting a lot of the internet to be stable which just does not happen in reality.
By using Packer to pack your OS and dependencies into one image, you have defended against the instability of the outside world without sacrificing reproducibility. Throw Packer into your CI/CD pipeline and you can achieve an immutable server configuration, and not have to worry about any of your cluster nodes ending up in an inconsistent state. When one gets ill, you don’t nurse it; you throw it away and get a new one, aligning with the Pets vs. Cattle analogy.

*Happy sysadmin*
We have seen in theory how Packer can be applied to your production servers, but can the same concept be applied to your development environment?
The short answer is yes, so stay tuned (feel free to sign up for our mailing list), because in the next article we will get you familiar with Packer while setting up a “devbox” for you and your team. It’s been a great time saver for me, and I hope it will help you too.