• We love you

    Happy Valentine’s Day! Hate it or love it, on the 14th of February the topic of the day is Valentine’s. The fact is that we should all love each other a little more, so why not use Valentine’s Day as a pretext to amplify that message? So here we are, Kinamo, a hosting and development company, joining in on the Valentine’s spirit.

    Enough excuses already for our cheesy Valentine’s communication. 😉

    We made love cards to share with your special someone, in case you were still looking for a nice card. And if you forgot a special something for Valentine’s, we’ve got you covered. Did you know our baseline is “Kinamo, Your Uptime Partner”? Well, with a little twist, it works for Valentine’s too. 😉

    Read More
  • Accessible computing power for Soil and Water Assessment Tool (SWAT) with Jupyter & R

    Back in 2015 we were contacted by the VUB Department of Hydrology and Hydraulic Engineering (HYDR for short) to see if we could run SWAT (Soil and Water Assessment Tool) and Python scripts in a cloud-based environment. The main goal was to have raw computing power at hand whenever a complex model had to be calculated.
    We extended the original request of setting up a cloud hosting environment for SWAT and started thinking about going to the next level: getting more computing power from cloud-based systems.

    It was not until early 2017 that HYDR and Kinamo revived the original idea and restarted the thinking process on how we could make this work effectively and step beyond the “run some scripts on your laptop or server” approach. Unlocking cloud CPUs would mean instant high-performance computing (HPC) power at our fingertips, and thus major gains in speed and efficiency.

    Introducing Jupyter Notebook

    Jupyter Notebook started out as IPython Notebook and introduced an interactive notebook environment in which you can interact with your data using virtually any programming language. While it originally supported only Python, it has been extended to R, F#, … basically anything for which a Jupyter kernel is available. Check out the extensive (and growing) list of Jupyter kernels on GitHub.

    The initial idea of using Jupyter Notebook came from Ann. It was however uncertain whether SWAT (Soil & Water Assessment Tool) would run in such an environment, since it is called from within the scripting languages used, and above all we aimed to use multiple CPU cores. Let’s face it, what would be the advantage of running an HPC cluster if you only use one core…

    The SWAT question aside, the choice for Jupyter was pretty straightforward. It allows:

    • Independent use of programming languages (as mentioned, the available Jupyter kernels determine which languages you can use in a notebook)
    • Sharing of notebooks, and thus of data and results
    • Integration with Apache Spark, but also spawning Docker containers, using Kubernetes… basically modern-age computing!

    Our first tests used a basic Notebook installation and mainly focused on getting SWAT to run in the Jupyter environment. It proved not to be straightforward, but with the help of the HYDR department we managed to sort out the requirements.
    However, since SWAT is mainly used on Windows machines and is based on .NET, setting it up on Linux requires a Mono installation (Mono provides a cross-platform .NET framework; more information is available on the Mono project website).

    The people from the HYDR department got the models up and running, but the next pitfall was the number of CPU cores. Our primary choice for scripting was R, due to our initial idea of heading in the Mosix direction (an idea we ditched later on). By default any R script will use only one core; you have to explicitly tell R to use multiple cores.

    Going Parallel in R

    If you want R to use multiple cores, you must load additional libraries. We ended up using the “parallel” library in R, although you do need a fairly recent R version. How to effectively make a script use multiple cores depends entirely on your code, but the sketch below shows the general idea.
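
    As an illustration only, here is a minimal sketch of how independent model runs could be spread over several cores with the “parallel” library. The run_swat_scenario() function and the scenarios list are hypothetical placeholders, not our actual setup.

        # Minimal sketch: parallelising independent runs with R's "parallel" library.
        # run_swat_scenario() and `scenarios` are hypothetical placeholders.
        library(parallel)

        run_swat_scenario <- function(scenario_id) {
          # In a real setup this would prepare the inputs, call the SWAT model and
          # read the results back in; here we just simulate some work.
          Sys.sleep(1)
          data.frame(scenario = scenario_id, result = rnorm(1))
        }

        scenarios <- 1:8

        n_cores <- detectCores()              # cores available on this machine
        cl      <- makeCluster(n_cores)       # start a pool of worker processes
        clusterExport(cl, "run_swat_scenario")

        # Each scenario is handled by a worker, so up to n_cores runs happen at once.
        results <- parLapply(cl, scenarios, run_swat_scenario)

        stopCluster(cl)                       # always release the workers when done
        do.call(rbind, results)

    On Linux you could achieve the same with mclapply(scenarios, run_swat_scenario, mc.cores = n_cores), which forks the current process instead of starting a cluster of workers.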

    In the end, we were able to run a single notebook using R as the programming language on 8 cores. Silly as it sounds, that big block of CPU usage actually made us and the HYDR team very happy!

    What’s next?

    This all sounds very academic (and it is), but this post will be continued in a second part. There we’ll dive deeper into our reasons for extending the (very basic) Jupyter Notebook installation with JupyterHub, which allows us to move from a simple, single instance to what we are really aiming for: Jupyter hosting with centralized deployment and excellent data integration. In other words, every data scientist’s dream, and within reach.

    Watch this space!

    … curious about our Jupyter experiments? Contact us, we’d be happy to brainstorm!

  • Kinamo has received ISO 9001:2015 certification!

    As of February 20th, 2017, Kinamo has been granted the International Organization for Standardization (ISO) 9001:2015 certification. By meeting the extensive criteria of this standard, the company confirms that its Quality Management System complies with the standard’s requirements and aims for continuous improvement of products, services and internal processes.

    “Our goal is to provide high-quality and professional hosting services to our customers,” said Dominique Quintelier, CEO.

    By obtaining the ISO 9001:2015 certification, we demonstrate our strong commitment to our customers and our continuous dedication to improving our organization’s efficiency and quality.

    “Obtaining the certification required thorough preparation, review and both internal and external audits. Needless to say, I am very proud of my team for being able to reach this important milestone. It is a significant achievement for Kinamo, but also an important signal to our customers, highlighting our constant aim for improvement of our services.”

    About the ISO 9001:2015 standard

    The ISO 9001:2015 standard is one of the world’s most highly regarded quality management system standards, helping businesses prove their ability to consistently provide products and services that meet and exceed customer requirements. For more information on the ISO 9001:2015 standard, please visit the official ISO website.

    About Kinamo

    Kinamo is a privately held company providing managed hosting services, integrated cloud services, domain name registrations and SSL certificates. Kinamo stays loyal to a vision in which early adoption of and experimentation with new technology is a key element of its service offering, keeping reliability and innovation in balance.

    Would you like to know more about our services and how we can help you in your cloud hosting projects? Feel free to contact us!

  • Working with GPUs in a cloud-based environment… or not?

    Every now and then we get questions about running heavy rendering solutions, computing applications or graphics-intensive desktops in a cloud environment.
    Every time this discussion emerges, the hunger for fancy toys pops into our minds, thanks to the fact that Nvidia’s CUDA technology, which relies on GPUs, saw the light of day.

    But… as it always goes with cheerful moments and pure joy, reality kicks in when you start to build a proposal with real-world concerns in mind, like cost of ownership, the actual practical use of GPUs, performance and, not entirely unimportant, flexibility.

    VMware and Nvidia have come a long way in making the GPU dream available to the masses; it is 2017 now, and using GPUs for a variety of applications is finally becoming possible for the normal human beings among us.
    Do not be fooled: it still costs a pretty penny, but at least the technology is starting to fill in the blanks.

    Behold, the virtual desktop!

    While many people tend to automatically associate a graphics card and its fancy GPU (short for Graphics Processing Unit) with a ton of memory with games and video, the GPU can do much more. Why?
    Check out this excellent (and quite entertaining) video to understand the difference between CPUs and GPUs.

    True… in a virtual environment, with a graphics card with tons of graphics memory available in the server and the correct VMware setup, you can start a true VDI (Virtual Desktop Infrastructure) frenzy. But in our humble opinion, where GPUs truly start to shine is parallel computing (think scientific calculations, tons of processing, …).

    Therefore we at Kinamo are quite happy to walk you through the differences between GPU-enabled cloud environments, why you should care and why you probably should not (if you’re just looking for simple WordPress hosting). Either way, we hope we’ve kept you entertained up to this point!

    Now, let’s get down to business… Forget about the space-age parallel computing for a while and let’s dive into the standard graphics basics!

    And then they invented… 3D acceleration

    You probably remember that fancy tick box in the DirectX 3D settings dialog centuries ago: Enable 3D Hardware Acceleration. If you don’t… well, it was there, we assure you.
    Truth is… not much has changed from a practical point of view.
    3D Hardware Acceleration was kind of a synonym for “make sure you start using that expensive graphics card I paid for”.
    Without 3D hardware acceleration you still had the joy of falling back to the dreaded “software rendering”, which basically meant your hard-working CPU would get even busier rendering your frames and drawing your screens.

    When working with a virtual desktop in a VMware environment (for example) without all the fancy GPU hardware, it is basically the same: you can enable Soft 3D acceleration.
    You do not need GPU hardware in your ESXi host for this, but it comes with a drawback: it is a simple but effective form of rendering with limited DirectX and OpenGL capabilities, fair enough for a Windows Aero desktop with some limited 3D gizmos. Do not expect to start a full-blown 3D rendering application with these settings; most likely you will end up in tears (next to your dying VM).

    So… GPU hardware it is?

    Ever since virtualization hypervisors introduced the ability to use GPUs effectively, adding a graphics card with a decent GPU and plenty of graphics memory has become a sexy option (albeit still quite an expensive one!).

    Once you have added a hardware card to the host, a new world opens.
    VMware offers two additional options. The first is vSGA, which allows you to share the GPUs among your VMs; each VM also needs some video memory assigned to it. It relies on the SVGA 3D driver and is a feasible option, but in our opinion mostly applicable to day-to-day virtual desktops… Again, running Autodesk’s 3ds Max is not really an option.

    If you want real joy… there’s vDGA! Basically, with this feature VMware ESXi allows you to pass the hardware card straight through to the virtual machine.
    Clearly this means instant rendering power, since the card is assigned one-on-one to the virtual machine, but as always: it comes at a price.
    Your virtual machine is locked to its host (after all, it is using the hardware card IN that host), so forget fancy failover / high availability / vMotion and the other wonderful assets of the platform.

    Also, want an additional VM with vDGA? You will need a second card (and so on).
    This is a terrific option performance-wise, but in our opinion, if you go this route you might as well go for a managed dedicated server with a hardware card and use that as a rendering or 3D rig!

    Nvidia’s GRID and vGPU technology

    Luckily Nvidia invented the vGPU; think of it as a virtualisation layer for GPUs.
    We will not dig into the details here (sorry), but to summarize: with vGPUs, each VM gets a resource assigned from your GPU.
    The Nvidia GRID technology (GRID Manager, and yes, it is license-based) allows you to pass that little slice of the GPU to the VM as a native hardware GPU, resulting in true 3D acceleration without the disadvantages of vDGA.

    For more information on Nvidia GRID we suggest you head over to the Nvidia website!

    Google Compute Engine and GPUs? AWS and GPUs?

    Both Google Compute Engine and AWS have announced that they will be offering GPUs in their cloud offerings. While this is still in beta (don’t we love new technology!), it certainly opens the door to deep learning, setting up truly performant rendering applications and effectively using GPUs for scientific computing, calculations, data modelling and so on.

    Since we are not married to our own infrastructure, we are very excited about Google Compute Engine introducing these capabilities. Setting up hybrid cloud environments without limitations is becoming a reality!

    For the tinkering fans among us, Nvidia has released an excellent blog post on setting up nvidia-docker, the layer that allows your Docker container to be aware of the Nvidia hardware. Read up on deploying Nvidia Docker here and be sure to visit the GitHub Nvidia Docker project!

    We hope you enjoyed this article, and if you’re interested in learning (no, not deep learning) why we’re so fond of GPU-enabled computing, drop us a line!