• Discover here how we’ve fixed a blocking error when upgrading ESXi 6.5 to 7.0 update 2 through vSphere Lifecycle Manager.

    Kinamo maintains multiple VMware vSphere clusters for a variety of customers. As you may already know, keeping these clusters up to date is one of the important tasks that comes with maintaining them.

    In this blogpost we will deal with the following error:

    Software or system configuration of host XXX is incompatible.

    We encountered this error when trying to upgrade ESXi 6.5 to 7.0 update 2 through vSphere Lifecycle Manager’s “updates” tab.

    Read More
  • Installing NVIDIA vGPU driver for ESXi through vCenter lifecycle manager

    As an NVIDIA Solution Advisor, Kinamo maintains a VMware vSphere cluster for a VMware Horizon VDI deployment of one of our customers. The ESXi hardware hosts are equipped with NVIDIA Tesla GPUs to accelerate graphical processing for the clients.
    NVIDIA Virtual GPU (vGPU) enables multiple virtual machines (VMs) to have simultaneous, direct access to a single physical GPU, using the same NVIDIA graphics drivers that are deployed on non-virtualized operating systems.
    Installing NVIDIA vGPU ensures that all of our client virtual desktops have unparalleled graphics performance, computing performance and application compatibility, together with the cost-effectiveness and scalability brought by sharing a GPU among multiple workloads.

    This article covers in detail how to install the NVIDIA vGPU driver for ESXi through vCenter lifecycle manager.

    Read More
  • The web is becoming more secure, but so are phishing sites…

    While the Firefox 51 and Chrome 56 roll-out promised a safer web thanks to security warnings and a gentle push toward making websites rely on HTTPS with SSL certificates, it seems phishing sites quickly jumped on the bandwagon and started running “secure and encrypted” phishing scams of their own.

    HTTPS enabled phishing sites on the rise

    As reported by Netcraft on May 17th 2017, the number of phishing websites using HTTPS has risen from 5% to 15%, with even a small peak of 20%, since the release of the “security warning enabled” Chrome and Firefox browsers on January 26th 2017.

    Phishing sites with HTTPS on the rise since January 26th 2017 – Graph (c) Netcraft.com

    To make matters worse, these phishing sites rely on trusted, valid Certificate Authorities like Let’s Encrypt and Comodo.

    The popularity of Let’s Encrypt has also become its weakness: it is very easy to get a valid, browser-trusted certificate for a limited period of time. While this is excellent for automated renewal services, it is also an attractive magnet for parties with less honourable intentions, such as phishing sites. As a reaction to Let’s Encrypt’s free certificates, Comodo has launched a so-called Trial Certificate valid for 90 days. But, as always, “free” is usually too good to be true.

    Top 10 phishiest certificate authorities; unfortunately Let’s Encrypt (yellow) and Comodo (blue) lead the way.

    Let’s Encrypt does use the Safe Browsing API to check a domain before issuing a certificate, but that check happens before the phishing content is typically added, so by the time the harm is done the certificate has already been issued.

    Since browser users are trained to look for a “valid” and “secure” URL, at first glance the website they visit seems legit. This makes the problem even worse: the scam sites are TLS enabled and appear to be trustworthy.

    What will happen?

    While some Certificate Authorities claim it is not the task of a CA to check whether the content behind an issued certificate is reliable, we cannot deny this can become a problem. Looking at the bigger picture, though, it was a problem waiting to happen. We are talking about DV (Domain Validation) certificates: they are cheap or free and the validation method is, to say the least, pretty limited (a simple mail or DNS validation check is usually enough to get you going).

    Our general advice is not to rely on the cheapest SSL certificate option unless you know what you are doing, you know what you are going to use the SSL certificate for, and you can live with the idea that one day the certificate might (let’s hope not) become a problem.

    If you run an organisation or corporation and wish to steer away from this, there are always the less cheap Domain Validation certificates or the more reliable Organisation Validation certificates. From the perspective of an end user: always, and we do mean ALWAYS, check the certificate and the domain name in the address bar. There is a metric called the “Deceptive Domain Score” which usually gives a good indication of how unreliable a domain can be.

    Get a valid – non-free, sorry – SSL certificate at Kinamo. And given the circumstances, we do favor GlobalSign!

    SSL Certificate Comparison

  • Accessible computing power for Soil and Water Assessment Tool (SWAT) with Jupyter & R

    Back in 2015 we were contacted by the VUB Department of Hydrology and Hydraulic Engineering (HYDR for short) to see if we could run SWAT (Soil and Water Assessment Tool) and Python scripts in a cloud-based environment. The main goal was to have raw computing power at hand whenever a complex model had to be calculated.
    We extended the original request for a cloud hosting environment for SWAT and started thinking about going to the next level: getting more computing power from cloud-based systems.

    It was not until early 2017 that HYDR and Kinamo revived the original idea and restarted the thinking process on how we could make this work effectively and step beyond the “run some scripts on your laptop or server” approach. Unlocking cloud CPUs would mean instant high-performance computing (HPC) power at our fingertips, and thus major gains in speed and efficiency.

    Introducing Jupyter Notebook

    Jupyter Notebook originally started as IPython Notebook and introduced an interactive notebook environment in which you can explore your data using any programming language. While it originally used Python, it has been extended to R, F#, … basically anything for which a Jupyter kernel is available. Check out the extensive (and growing) list of Jupyter kernels on GitHub.
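
    As an illustration, making R available as a notebook language boils down to registering an R kernel. The sketch below uses the IRkernel package; check the IRkernel project documentation for the recommended installation route for your setup.

        # Install the IRkernel package (run once, from a plain R session)
        install.packages("IRkernel")

        # Register the R kernel with Jupyter for the current user, so that
        # "R" shows up as a kernel choice in the Notebook interface
        IRkernel::installspec(user = TRUE, name = "ir", displayname = "R")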

    The initial idea for using Jupyter Notebook came from Ann. It was however uncertain whether SWAT (Soil & Water Assessment Tool) would run in such an environment, since it is called from within the scripting languages used, and above all we aimed to use multiple CPU cores. Let’s face it, what would be the advantage of running an HPC cluster if you only use one core…

    The SWAT question aside, the choice for Jupyter was pretty straightforward, it allows:

    • Independent use of programming languages (as mentioned, the available Jupyter kernels determine the languages you can use in the notebook)
    • Sharing of notebooks, and thus data and results
    • Integration with Apache Spark, but also spawning Docker containers, using Kubernetes… basically modern-age computing!

    Our first tests used a basic Notebook installation and mainly focused on getting SWAT to run in the Jupyter environment. It proved not to be straightforward, but with the help of the HYDR department we managed to sort out the requirements.
    However, since SWAT is mainly used on Windows machines and is based on .NET, setting it up on Linux demands a Mono installation (Mono provides a cross-platform .NET framework; more information on the mono-project website).

    The people from the HYDR department got the models up and running, but the next pitfall was the number of CPU cores. Our primary choice for scripting was R, due to our initial idea to head in the Mosix direction (an idea we ditched later on). By default any R script will use only one core; you have to tell R explicitly to use multiple cores.

    Going Parallel in R

    If you want R to use multiple cores, you must load additional libraries. We ended up using the “parallel” library in R, but you do require a fairly recent version. How to effectively make a script use multiple cores depends on your code, but the general pattern is sketched below.
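
    As a minimal sketch (assuming a Linux host; run_swat_model is a hypothetical wrapper around one SWAT simulation), the “parallel” library lets you fan work out over all available cores:

        # Load the parallel library (bundled with recent versions of R)
        library(parallel)

        # Hypothetical wrapper: runs one SWAT simulation for a given
        # parameter set and returns its result
        run_swat_model <- function(params) {
          Sys.sleep(1)          # placeholder for the actual model run
          sum(unlist(params))   # placeholder result
        }

        parameter_sets <- list(list(a = 1), list(a = 2), list(a = 3), list(a = 4))

        # mclapply() forks one worker per core and evaluates the model runs
        # in parallel; mc.cores > 1 relies on forking and therefore only
        # works on Linux or macOS, not on Windows
        results <- mclapply(parameter_sets,
                            run_swat_model,
                            mc.cores = detectCores())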

    In the end, we were able to run one single notebook using R as the programming language on 8 cores. Silly as it sounds, that big block of CPU usage actually made us and the HYDR team very happy!

    What’s next?

    This all sounds very academic (and it is), but this post will be continued in a second part. We’ll dive deeper into our reasons for extending the (very basic) Jupyter Notebook installation with JupyterHub, which allows us to move from a simple, single instance to what we are really aiming for: Jupyter hosting with centralized deployment and excellent data integration. In other words, every data scientist’s dream, and within reach.

    Watch this space!

    … curious about our Jupyter experiments? Contact us, we’d be happy to brainstorm!

  • Working with GPUs in a cloud-based environment… or not?

    Every now and then we get questions about running heavy rendering solutions, computing applications or graphics-intensive desktops in a cloud environment.
    Every time this discussion emerges, the hunger for fancy toys pops into our minds, thanks to Nvidia and their GPU-based CUDA technology seeing the light of day.

    But… as it always goes with cheering moments and pure joy, reality kicks in when you start to build a proposal with real-world things in mind, like cost of ownership, actual practical usage of GPUs, performance and, not entirely unimportant, flexibility.

    VMware and Nvidia have come a long way in making the GPU dream available to the masses; it is 2017 now, and using GPUs for a variety of applications is finally becoming possible for the normal human beings among us.
    Do not be fooled: it still costs a pretty penny, but at least the technology is starting to fill in the blanks.

    Behold, the virtual desktop!

    While many people tend to automatically link a graphics card and its fancy GPU (the acronym for Graphics Processing Unit) with a ton of memory to games and video, the GPU can do more. Why?
    Check out this excellent (and quite entertaining) video to understand the difference between CPUs and GPUs.

    True… in a virtual environment, with a graphics card with tons of graphical memory available in the server and the correct VMware setup, you can start a true VDI (Virtual Desktop Infrastructure) frenzy. But in our humble opinion, the true path where GPUs start to shine is parallel computing (think scientific calculations, tons of processing, …).

    Therefore we at Kinamo are quite happy to inform you about the differences between GPU-enabled cloud environments, why you should care and why you probably shouldn’t (if you’re just looking for simple WordPress hosting). At the very least, we have kept you entertained up to this point!

    However, let’s get down to business… Forget about the parallel space-age computing thing for a while and let’s dive into the standard graphical basics!

    And then they invented… 3D acceleration

    You probably remember that fancy tick box in the DirectX 3D settings dialog centuries ago: Enable 3D Hardware Acceleration. If you don’t… well, it was there, we assure you.
    Truth is… not much has changed, from a practical point of view.
    3D Hardware Acceleration was more or less a synonym for “Make sure you start using that expensive graphics card I paid for”.
    Without 3D hardware acceleration you still had the joy of falling back to the dreaded “software rendering”, which basically meant your hard-working CPU would get even busier rendering your frame rate and drawing your screens.

    When working with a virtual desktop in a VMware environment (for example) without all the fancy GPU stuff, it is basically the same: you can enable Soft 3D acceleration.
    You do not require GPU hardware in your ESXi host for this, but it comes with a drawback. It is a simple but effective renderer with limited DirectX and OpenGL capabilities, fair enough for a Windows Aero desktop with some limited 3D gizmos. Do not expect to start a full-blown 3D rendering application with these settings; most likely you will end up in tears (next to your dying VM).

    So… GPU hardware it is?

    Ever since virtualization hypervisors introduced the ability to use GPUs effectively, adding a graphics card with a decent GPU and graphical memory has become a sexy option (albeit still quite an expensive one!).

    Once you have added a hardware card to the host, a new world opens.
    VMware has two additional options. The first is vSGA, which allows you to share the GPUs among your VMs; each VM also needs some video memory assigned to it. It relies on the SVGA 3D driver and is a feasible option, but in our opinion most applicable for day-to-day virtual desktops… Again, running Autodesk’s 3ds Max is not quite an option.

    If you want real joy… there’s vDGA! Basically, with this option VMware ESXi passes the hardware card straight through to the virtual machine.
    Clearly this means instant rendering power, since the card is assigned one-on-one to the virtual machine, but as always: it comes at a price.
    Your virtual machine is locked to its host (after all, it is using the hardware card IN that host), so forget fancy failover / high availability / vMotion and the other wonderful assets of the platform.

    Also, want an additional VM with vDGA? You will need a second card (and so on).
    This is a terrific option performance-wise, but in our opinion, if you go this route you might as well go for a managed dedicated server with a hardware card and use that as a rendering or 3D rig!

    Nvidia’s GRID and vGPU technology

    Luckily Nvidia invented the vGPU; think of it as a virtualisation layer for GPUs.
    We will not dig into the details here (sorry) but to summarize: with vGPUs, each VM gets a slice of your GPU assigned to it.
    The Nvidia GRID technology (GRID Manager, and yes, it is license based) presents that slice of the GPU to the VM as if it were a native hardware GPU, resulting in true 3D acceleration without the disadvantages of vDGA.

    For more information on Nvidia GRID we suggest you head out to the Nvidia website!

    Google Compute Engine and GPUs? AWS and GPUs?

    Both Google Compute Engine and AWS have announced that they will be offering GPUs in their cloud portfolios. While this is still in beta (don’t we love new technology!), it certainly opens up possibilities for deep learning, setting up truly performant rendering applications and effectively using GPUs for scientific computing, calculations, data modelling and so on.

    Since we are not married to our own infrastructure, we are very excited about Google Compute Engine introducing these capabilities. Setting up hybrid cloud environments without limitations is becoming reality!

    For the tinkering fans among us, Nvidia has released an excellent blog post on setting up nvidia-docker, the layer that allows your docker container to be aware of the Nvidia hardware. Read up on deploying Nvidia Docker here and be sure to visit the GitHub Nvidia Docker project!

    We hope you enjoyed this article and if you’re interested in learning (no, not deep learning) why we’re so fond of GPU enabled computing, drop us a line!

  • SSL certificate installation tests evolve in 2017!

    When it comes to analyzing and debugging, we all have our favorite tools… For our SSL department, the CA Security Council SSL Labs test by Qualys is a true must have.

    This excellent tool analyses an HTTPS installation with a number of tests against known vulnerabilities and standards: the certificate, protocol support, key exchange and cipher strength…
    Known vulnerabilities such as DROWN, BEAST, POODLE and Heartbleed are also tested extensively.

    Since HTTPS is, for the first time in 20 years, gaining the advantage over classic HTTP traffic (probably also thanks in part to the forced “push” by Google, among others), testing an SSL installation becomes a necessity, and the tool will become stricter as the market evolves.

    From 2017 on, the following changes will be incorporated:

    • 3DES: because of the Sweet32 vulnerability, leaving 3DES support enabled for modern browsers will get you a C score.
    • Forward Secrecy: since Edward Snowden revealed several privacy-related breaches, the industry has decided to treat forward secrecy as a requirement; if it is not enabled on the server, a score of B is the best you will get.
    • AEAD suites: authenticated encryption is strongly advised and AEAD suites are the only ones supported by TLS 1.3. AEAD support is required to get an A+ score!
    • TLS Fallback: since the introduction of the POODLE vulnerability most browsers have made adjustments, and TLS_FALLBACK_SCSV is no longer needed to get an A+ rating.
    • Weak ciphers: all ciphers with less than 128 bits will get an F rating, without hesitation!
    • RC4: servers supporting RC4 will be capped at a C score.
    • SHA-1: sites using a SHA-1 certificate will not be treated as secure and chances are (not confirmed yet) that they might get an F rating. We strongly suggest everyone replace their SHA-1 certificate with a SHA-256 certificate!

    The SSL Labs test helps us refine an SSL certificate installation and will help you, as a website or server owner, to follow up on whether your SSL installation is still up to par with the recommended strict HTTPS settings.

    We always suggest everyone frequently audits their SSL installation! If you want to automate that audit, the sketch below shows one way to do it.
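
    As a rough sketch using R and the public SSL Labs assessment API (the endpoint and parameters below follow the Qualys API documentation; www.example.com is just a placeholder host), an automated check could look like this:

        # Query the Qualys SSL Labs assessment API and parse the JSON report
        library(httr)
        library(jsonlite)

        check_ssl <- function(host) {
          res <- GET("https://api.ssllabs.com/api/v2/analyze",
                     query = list(host = host, fromCache = "on", all = "done"))
          fromJSON(content(res, as = "text", encoding = "UTF-8"))
        }

        report <- check_ssl("www.example.com")
        report$status            # "READY" once the assessment has finished
        report$endpoints$grade   # the letter grade(s) discussed above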

    Start here to check your HTTPS installation: casecurity.ssllabs.com