
Jun 23, 2017
 

Agents make it possible to collect events from log files on Linux and Windows devices and forward them to a vRealize Log Insight server or a third-party logging system. Well, we all know that! A lot (and I mean a lot) of folks from the field have asked for the agent to be able to send logs to multiple destinations, possibly over different protocols, and perhaps also to filter the logs before they are sent as the cherry on the cake. With vRealize Log Insight v4.5 we can!

The vRealize Log Insight v4.5 agent allows you to deliver collected events to multiple destinations simultaneously, via the Ingestion API (cfapi) or syslog protocols. It also lets you define multiple destination servers and filter events per destination based on the collection source and the values of event fields.

You might want to do this for several reasons: delivering the same events to multiple vRLI clusters for backup or data recovery purposes, or sending different kinds of events to different IT department logging systems, for example audit/system logs to the security team's server, application logs to the DevOps team's server and system metric logs to the IT team's management system.

So let's look at how you can actually use it.

 

Multiple destination connections can be defined through [server|<destination_id>] sections, where <destination_id> is a connection ID that is unique per configuration (i.e. per liagent.ini). For backward compatibility there can be an unnamed [server] section, which is treated as the master connection; the current implementation assumes it always exists by default. Only the master connection can request configuration from the server, and only if cfapi is used as the connection protocol. All server sections use the existing configuration options to define connection properties, i.e. hostname, proto, port, ssl, etc. The only exception is that the 'hostname' option has no default value for non-master connections. For the [server] section (without a destination_id) the default value for the hostname option remains "loginsight".

 

The following parameters can be defined in server sections: hostname, proto, port, ssl, filter.

Note:

  • The default value for the 'filter' option is {;.*;}, which means accept all events.
  • The {;;} filter can be used to deny any event transmission to the destination, for example when the connection is used only for configuration and auto-update purposes (see the sketch after this list).
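As a minimal sketch (the destination name and hostname below are hypothetical), a destination that should receive no events at all could be defined like this:

[server|config-only]
hostname=li-config.example.com
filter={;;}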

 

  • Example #1 of multi-destination use in liagent.ini

# First (default) destination will receive all collected events.
[server]
hostname=prod1.team.vmware.com

# Second destination will receive just syslog events through the syslog protocol.
[server|syslog-audit]
hostname=third_party_audit_management.vmware.com
proto=syslog
filter={filelog; syslog; }

# Third destination will receive vROps events if they have the level field equal to "error" or "warning" and they are collected by agent config sections whose names begin with "vrops-".
[server|team-prod1]
hostname=vrops-errors.team.vmware.com
filter={; vrops-.*; level == "error" || level == "warning"}

# The filelog section below collects syslog messages, which are sent to [server] and [server|syslog-audit].

[filelog|syslog]
directory=/var/log
include=messages

# For various vROps logs. Note that all section names begin with the "vrops-" prefix, which is used in the third server destination filter above. The logs in these sections are sent to [server] and [server|team-prod1].

[filelog|vrops-ANALYTICS-analytics]
directory=/data/vcops/log
include=analytics*.log*
exclude=analytics*-gc.log*
parser=auto

[filelog|vrops-COLLECTOR-collector]
directory=/data/vcops/log
include=collector.log*
event_marker=^\d{4}-\d{2}-\d{2}[\s]\d{2}:\d{2}:\d{2}\,\d{3}
parser=auto

[filelog|vrops-COLLECTOR-collector_wrapper]
directory=/data/vcops/log
include=collector-wrapper.log*
event_marker=^\d{4}-\d{2}-\d{2}[\s]\d{2}:\d{2}:\d{2}\.\d{3}
parser=auto

Here all logs from the [filelog|syslog] section go to third_party_audit_management.vmware.com over the syslog protocol. The vROps logs from the three filelog sections go to vrops-errors.team.vmware.com if they are of level error or warning. And prod1.team.vmware.com receives all events.

 

  • Example #2 of multi-destination use in liagent.ini

[server]
hostname=10.11.12.13
ssl=no

[server|desktop]
hostname=10.21.22.23
filter={winlog; System; }

[winlog|System]
channel=System

[winlog|Application]
channel=Application

[winlog|Security]
channel=Security

 

In this example all events (from all three channels) go to 10.11.12.13, while 10.21.22.23 only receives events collected from the System channel. The ssl option is not defined for the [server|desktop] section, therefore the default is used, which is ssl=yes.

Note: The defaults for the options in the [server] sections are still the same as before.

; Protocol can be cfapi (Log Insight REST API), syslog. Default:

;proto=cfapi

 

; Log Insight server port to connect to. Default ports for protocols (all TCP):

; syslog: 514; syslog with ssl: 6514; cfapi: 9000; cfapi with ssl: 9543. Default:

;port=9000

 

; SSL usage. Default: (Starting vRLI 4.0 default ssl value is yes)

;ssl=yes

 

It is important to note that sending logs to multiple agent destinations will not produce duplicate events. No event duplication will be seen at the destination servers, i.e. if more than one filter_tuple matches the same event, the event is sent just once. So the following filter definitions are equivalent:

  • filter ={filelog; sample.*; facility > 7},{filelog; sample.*; level == "error"}
  • filter ={filelog; sample.*; facility > 7 || level == "error"}

The events with facility=8 and level="error" will be sent just once.

 

Some additional details about Filter criteria

The filter format for an agent destination looks like { collector_type; collector_filter; event_filter }

The filter selects only logs collected by sections where the collector type (i.e. filelog or winlog) matches the provided collector_type and the collector name (the identifier after the pipe sign, e.g. filelog|syslog) matches the collector_filter regular expression. In addition, if an event_filter is defined, it selects only the events for which that expression evaluates to True.

The collector_filter expression should not contain the ;{} characters. In the case of name-spaced collectors, the collector_filter matches only the name part, ignoring the namespace; i.e. an "ana.*" filter will match the following section names: "com.vmware.vrops.analitics" and "anaconda". The collector_filter regular expression can be an empty string, which does not match any collector name, i.e. no events will be sent. The {;;} filter can be used to deny any event transmission to the destination, for example when the connection is used only for configuration and auto-update purposes.

event_filter is an event field expression that works the same way as the collectors' 'whitelist' option and is evaluated against event field values. It can be an empty string, which is treated as a True expression.

Default value for ‘filter’ option is {;.*;}, which means accept all events.

For every collected event, the connection evaluates the event properties against the filter. All filter_tuples in the list are combined with a logical 'or' operation: if an event passes any filter_tuple, it is sent to the destination server. Every filter_tuple is evaluated by the following steps (a short worked sketch follows the list):
1. If collector_type_list is empty or contains the collector type of the event, then proceed to the next step; otherwise drop the event.
2. If the event's collector name (i.e. the part of the section name after the pipe sign) matches the collector_filter, then proceed to the next step; otherwise drop the event.
3. If event_filter is empty or evaluates to True, then send the event to the destination; otherwise drop the event (including the case when the expression cannot be evaluated because the event does not have the field(s) used in the expression).
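As a quick worked sketch of these steps (the section name, hostname and field values below are hypothetical), consider an agent with a [filelog|sample-app] collector section and this destination:

[server|example]
hostname=li.example.com
filter={filelog; sample-.*; level == "error"}

An event collected by [filelog|sample-app] passes step 1 (its collector type, filelog, is listed), passes step 2 (the name "sample-app" matches the sample-.* regular expression), and is sent only if it has a level field equal to "error" (step 3); otherwise it is dropped for this destination.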

Invalid values of filter options in server sections

Invalid values are generally skipped/ignored and the defaults are used, which results in all events being collected. Some examples of invalid (or non-matching) values for the filter option are:

  • filter ={filelog; samplez.*; facility > 7 || level == "error"} – If there are no log sections matching samplez.*, no events will be collected.
  • filter ={filelog; sample.*; facility < 0 || level == "error"} – There are no events with facility less than zero, and no events will be collected.
  • filter ={filelog; sample.*; facility > 7 && level == "nolevel"} – There are no events with a level called nolevel (assuming nolevel is an invalid value) AND facility greater than 7, so no events will be collected.
  • filter ={winlog; Security; . } – '.' is an invalid value for event_filter; all events collected by the agent will be sent to the server and the filter is ignored.
  • filter ={mylog; sample.*; facility > 7 || level == "error"} – although the collector filter and event filter are valid, mylog is not a valid collector_type, so all events collected by the agent will be sent to the server and the filter is ignored.
  • filter ={winlog; NoName; } – If there is no winlog section called 'NoName', then no events are collected and sent to this destination.
  • filter ={winlog; System } – there is no ';' after System; this is not a valid format, and all log events collected by the agent will be sent to the server.

And last but not least, all errors are reported in the agent log file. Consult the log file if unexpected behavior is encountered and fix the errors reported by the agent.

Note:

  • All filter options are case sensitive.
  • debug_level = 2 gives verbose information about errors as in earlier versions of the agent.
  • These options cannot be used to send importer events to multiple destinations via the Importer tool.
  • The GUI does not yet support applying multi-destination server options to an agent in vRealize Log Insight v4.5.

The post vRealize Log Insight agent multi-destination support appeared first on VMware Cloud Management.


Jun 23, 2017
 

Matthew Evans

By Matt Evans.

An EUC Senior Specialist SE based in the UK. Matt has been with VMware for 9 months and has been working in the Server-Based Computing/EUC space for 20 years. Outside of work he is interested in music and running, having completed the London Marathon in April.


As a member of the End User Compute Specialist Sales Engineer team, I use VMware's cloud hosted demo environment 99% of the time. However, we have all been in the situation where we need to demonstrate the solution but there is no network connection available. That's one of the reasons I built a VMware Horizon environment in Workstation Pro. The other reason is that the 'techie' in me wants to have an environment that I have free rein in. As well as my Horizon environment, I built a Windows 10 virtual machine so that I had a Windows 10 device to manage from my AirWatch demo lab.

The core virtual machines in my demo lab that runs in Workstation Pro are:

  • Windows 2012 R2 – VMware Horizon Server
  • Windows 2008 R2 – Active Directory, DNS and DHCP
  • Windows 2012 R2 – Workspace ONE components
  • Windows 2012 R2 – RDS Host
  • Windows 10 – VDI desktop
  • Windows 8 – VDI desktop
  • Windows 10 – Managed endpoint by AirWatch (SaaS)

As I said, I am a techie at heart, and I like to use three third-party tools with my demo lab that enable me to deep dive, create screenshots and shadow my iPhone. Let me explain.

Apowersoft Phone Manager

VMware Workstation Pro supports USB devices which is critical to this tool. I can connect USB devices, such as my iPhone, to my laptop and map them through to the Virtual Machine. As part of my Horizon demo I always like to show the user experience from an iPhone connecting to a VDI or RDSH application as well as what the Workspace ONE portal looks like and what the user experience looks like.

Apowersoft Phone Manager is similar to iTunes, in the way it helps you manage your device. The one feature it offers that iTunes does not is the ability to shadow the device; you can see the Workspace ONE app running on my phone below. You can also switch to full-screen mode. Being able to shadow the device means that I can deliver a demo from my iPhone or iPad and display it through Workstation Pro and onto a larger screen, typically a projector.

I tend to make screenshots of this type of demo as well as documenting the setup and configuration of Horizon, for example. This leads me on nicely to the next tool.

Apowersoft Phone Manager

Lightscreen

Typically I deliver my product demonstrations out of VMware Test Drive, which is a managed cloud environment, so software updates are managed centrally and not done by me. That's one of the reasons I use Workstation Pro to run my own demo lab, so I can go through the experience of updating and managing the environment as I would like. When I go through an update I tend to take screenshots that I can then use for custom presentations or share with customers and partners so they know what to expect when they go through the updates themselves. It also helps to take screenshots to help articulate a configuration or specific feature with a customer when a real-time screen-sharing tool is not an option. As Workstation Pro allows clipboard redirection between my host OS (laptop) and the virtual machine, it is very easy for me to take screenshots and then copy them out of the virtual machine into OneDrive.

Within Lightscreen I configure a hotkey to take the screenshot, a directory to store the image in and also whether I want to capture the whole screen or just the active window – I tend to use the active window. Then I just run through my sequence, click the hotkey when required and finally when I have finished, copy the images out of the Virtual Machine and paste them to my local drive.

Apowersoft Phone Manager

ASIDE: The one feature that could improve Lightscreen would be if you could also capture short video files, even if that was limited to say 10 seconds.

Apowersoft Phone Manager

Glasswire

In my opinion, I have saved the best until last. Glasswire offers Network Monitoring, Threat Monitoring and a Firewall. Because my virtual machines are pretty isolated in Workstation Pro, I am not really using the software for its main intended purpose. However, I love the Network Monitoring aspect. Being able to monitor the network gives me visibility into what processes are running, what they are doing and what bandwidth they are using. A good example of its use case for me would be to look at the different protocols we support in Horizon and how they perform.

As you may know we support Blast, PCoIP and RDP within Horizon, and Glasswire enables me to monitor exactly what they are doing – sorry, but I did warn you I was a techie at heart. The screenshot below shows all my active processes and their bandwidth usage over a 5-minute period; the time period can be adjusted, as you can see in the top-right corner.

Glasswire

Let me give you an example of a customer scenario. I was involved in a project where we planned to leverage the customer's internal Java-based VPN client to secure a connection for remote workers and then automatically trigger a connection to a cloud-based virtual desktop. To be able to configure this process we needed to know which processes were active when a connection was established, whether they only used HTTPS, and what the differences would be if we connected using the HTML5 client rather than the standard Windows client. First I logged into my Windows client machine in Workstation Pro and launched the Horizon Client; I then made a connection to a hosted Windows 10 machine and checked Glasswire. From the image below you can see that the Horizon client has two processes running, both of which are making an HTTPS connection to the same host.

Glasswire

Secondly, I wanted to see how the Horizon connection varied when using the HTML5 client rather than the Windows client. The HTML5 client is represented by Chrome, as that was the browser I was using. It is still using HTTPS but in this instance it is a single connection to hv6emea.vmwdemo.com

Glasswire

With the help of this information we were able to configure the VPN client to pass these connections through, giving the customer's end users a simple and familiar experience.

Running Glasswire in a clean virtual machine in Workstation Pro is ideal as it allows me to remove many of the other processes that could dilute the information I am trying to gather.

Your Go

Without Workstation Pro I would have had to install my demo lab on my native endpoint, which just wouldn't be viable due to the requirements of AD, DHCP, DNS and, in some cases, multiple versions of the same software. Another option would have been to pay for cloud resources to run the virtual machines. This is generally a good option, but I want access to all the components, for example the hypervisor or AD controller, which is not always possible with cloud-based environments, and I wanted to guarantee 100% control.

Try Workstation Pro for Windows, or Workstation Pro for Linux, for free, today.

The post Using VMware Workstation Pro as the perfect demo tool appeared first on VMware Workstation Zealot.

Jun 23, 2017
 

Donn Bullock

by Donn Bullock, Director of End-User Computing Channel Sales for the Americas.

With over 20 years of high-tech sales & marketing experience, Donn has worked at every level of the IT channel including 3 resellers, 2 manufacturers (Compaq & IBM) and a distributor. He started one of the first (he was told by VMware the actual first) business partner VDI practices focused on VMware in 2006. He joined VMware in July 2012 as the first external hire for the new Commercial Field Sales team and today serves as a channel sales executive for VMware's $500M EUC business in the Americas.

Donn has a BA from Wake Forest University and an MBA in Finance and Marketing from Vanderbilt University and lives in Raleigh, North Carolina.


TruAudio

No matter how millennial you might be (or think you are), sometimes you just can't escape Windows. That's what Bryan Garner, CEO of home audio company TruAudio, and his executive team discovered, much to their collective chagrin. Following the setup of their all-new Apple MacBook Pros, the excitement of having joined the Apple ecosystem was abruptly replaced with the reality of a recently installed, Windows-based financial system. As new Mac converts, Bryan and team immediately set out to rescue their newly acquired IT tools from the setback of Windows non-compatibility.

As a fast-growing small business, TruAudio had been running their financials on QuickBooks Enterprise for many years. In an office exclusively comprised of Windows PCs and servers, upgrading to Microsoft Dynamics seemed like a natural fit given the IT history of the firm, the application's ability to scale with the fast pace of the company's growth, and the recommendation of their trusted IT consultant. That is, until the trending younger executive team decided that Macs were their platform of choice for personal productivity. It was only after setting up their crisp new hardware that they discovered Dynamics has no macOS interface.

After an unfulfilling foray into Oracle's VirtualBox, Bryan and team took the advice of yours truly and had a go with Fusion Pro. The firm has not looked back since. First, VMware's Fusion Pro satisfied their Dynamics needs, allowing each of the users to create a dual-screen desktop environment of both macOS and Windows, running applications simultaneously on either desktop OS (and indeed running Microsoft Office on both).

TruAudio’s Desktop

Along the way, the team also discovered that Fusion resolved the unexpected issue of printing to their respective locally attached Konica Minolta bizhub printers, which also lacked a native macOS driver. Printing through Windows on Fusion was now all they needed. As the firm has continued to grow rapidly, Fusion has resolved other unforeseen issues and found its way into other critical parts of the business. For example, the team also discovered that Crestron, the leader in commercial and home automation, lacks a macOS driver for its products, an easy resolution found once again with Fusion-empowered MacBooks. By connecting the Crestron automation tools into Windows within Fusion, TruAudio has been able to provide the sound testing studio with flexibility for anyone wanting to test out new solutions on TruAudio gear.

In today's IT environment, not every user wants to be locked into a single OS. For many users, application choice dictates platform, not the other way around. And while users seek choice in their BYOD demands, corporate IT departments insist upon control and standardization. VMware Fusion reconciles this potential conflict of end-user choice with control, creating an environment in which both Windows and Mac can not only work together, but empower new options and use cases not available to users before.

Don't believe that Fusion can't make a difference in your company? Just ask Bryan as he runs to his next meeting with MacBook Pro in hand. Try it yourself today, for free.

The post Brand-new Apple MacBook Pro: $3K. VMware Fusion: $160. Running got-to-have-them Windows Apps on my Sweet Setup: Priceless. appeared first on VMware Fusion Blog.

Jun 23, 2017
 

Sign up with the VMware Security Response Center (VSRC) to receive security advisories reported in VMware products. The VSRC works with the security research community and leads the analysis and remediation of software security issues in VMware products.

The post Receive Proactive Security Advisories on Products Your Company may be Using appeared first on Partner News.

Jun 23, 2017
 

A post by Annette Maier, Vice President & General Manager Germany, VMware

It was THE deal in IT history and caused quite a stir: in October 2015, Dell acquired EMC for a record sum of 67 billion US dollars. There was long speculation about what this deal would mean for VMware. As VMware, we will continue to operate largely autonomously and independently of Dell EMC. But with Dell EMC we have a strong partner at our side with whom we can jointly drive our vision of the future. From May 8 to 11, 2017, the first Dell EMC World finally took place in Las Vegas. Among other things, the Dell EMC VDI Complete Solutions were presented there, offering a complete desktop and application virtualization solution. The integrated solution builds on our vision of a digital workspace: in the future, employees should have seamless access to their business applications at any time, from any location and with any device. For IT, however, this should be neither complicated nor expensive. That is the challenge we are addressing with the new solution.

Mobility, desktop and cloud technologies play a major role here. Together with Dell EMC, we are in a strategically favorable position to relieve the burden on companies: managing multiple contracts with different vendors, service level agreements and solution management consoles drives up total cost of ownership and strains already tight IT budgets. That is something we want to change.

What exactly does our solution look like?

By integrating VMware AirWatch with the Dell Client Command Suite, IT administrators can proactively manage off-the-shelf Dell PCs directly through AirWatch, which brings the following advantages: querying and retrieving important system properties, configuring Basic Input/Output System (BIOS) settings and performing recovery actions from the same AirWatch console that is used to manage Windows OS policies, apps and endpoints across the entire enterprise.

We also now offer a cost-effective, clear and simple way to implement client virtualization solutions. We use VMware Horizon, an end-to-end virtualization solution for desktops and applications, to power Dell EMC VDI Complete. Companies thus benefit from a single, easy-to-deploy solution with all the components needed to deliver virtual desktops and applications. In doing so, we cut the time required in half, since separate solutions no longer have to be implemented for virtual desktops and for applications.

We don't just go easy on your IT budget!

Both steps focus primarily on cost reduction. Integrating VMware AirWatch reduces the total cost of ownership of managing PCs, and with VMware Horizon we offer an affordable way to implement VDI software. But it is not only about cost. We want to make IT infrastructure simpler and more agile: managing client software and hardware is still far too complex in most companies today. We want to help companies finally make their employees mobile and thereby also change the way companies work today. I look forward to the continued collaboration with Dell EMC, which will certainly bring some exciting developments!

Would you like to learn more about VMware's strategy? Then follow me and VMware DACH on Twitter!

Jun 23, 2017
 

Purpose:

A few customers inquired about how they can collect metric values directly from vROps. They mainly wanted to get that information using REST APIs. I wrote another blog post detailing how you can do that. I also wrote a Python script to automatically collect metric values from vROps. This post provides details about that script: where you can get it and how it needs to be run.

Introduction:

This Python script uses the vROps Python client to query and gather information directly from the vROps server. Its task is to collect metric values from vROps, and it uses REST APIs at the backend to get the information. So, to run the script you need to download and install the Python client from the vROps server. Also, this script was written and tested in a Python 2.7 environment. Check my other blog post for details about how you can use the Python client to interact with the vROps server.
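For context, here is a minimal Python sketch of what collecting a metric over the vROps REST API can look like with the requests library (the hostname, credentials and stat key are placeholders, and the endpoint paths and parameter names are my assumptions based on the vROps 6.x Suite API, so check them against your version's API documentation):

import requests

VROPS = "https://vrops.example.com"   # hypothetical vROps server
USER, PASSWORD = "admin", "VMware1!"  # hypothetical credentials

# Acquire an authentication token from the Suite API
token = requests.post(
    VROPS + "/suite-api/api/auth/token/acquire",
    json={"username": USER, "password": PASSWORD},
    headers={"Accept": "application/json"},
    verify=False,                     # labs often use self-signed certificates
).json()["token"]

headers = {"Accept": "application/json",
           "Authorization": "vRealizeOpsToken " + token}

# List VirtualMachine resources for the VMware adapter kind
resources = requests.get(
    VROPS + "/suite-api/api/resources",
    params={"adapterKind": "VMWARE", "resourceKind": "VirtualMachine"},
    headers=headers, verify=False,
).json()["resourceList"]

# Fetch the latest value of one stat key for the first resource found
stats = requests.get(
    VROPS + "/suite-api/api/resources/%s/stats/latest" % resources[0]["identifier"],
    params={"statKey": "cpu|usage_average"},
    headers=headers, verify=False,
).json()
print(stats)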

Use Case:

This script will automate the task of collecting metric values directly from vROps for you. You can later use the values as per your requirements.

Where to get it?

You can get the script by going to the GitHub page (https://github.com/sajaldebnath/vrops-metric-collection) and downloading it from there.

There are samples provided in the script.

How to run?

There are two scripts, metric-collection.py and set-config.py. Download both scripts to the same location. Run set-config.py first.

# python <location to file>/set-config.py

It will set the environment. Then run metric-collection.py to collect the metric values.

# python <location to file>/metric-collection.py

Ideally, you should automate the running of this script using a scheduler such as cron. The script should not run less than 5 minutes apart: since vROps collects data every 5 minutes, running this script more frequently than that will result in redundant data. In this mode the "Maximum number of samples to collect" parameter should be left at the default value of 1.
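For example, a crontab entry along these lines (the interpreter and script paths are hypothetical) would run the collection every 5 minutes, the minimum recommended interval:

# Collect vROps metrics every 5 minutes
*/5 * * * * /usr/bin/python /opt/vrops-metric-collection/metric-collection.py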

If you want to run the script one time and get as much data as possible (historical data), then set the above parameter to the desired value. For example, setting it to 5 will result in getting data from the last 5 collections.

How does it collect metric values?

Running set-config.py sets up the environment. It will ask for the following information:

  • Adapter Kind
  • Resource Kind
  • vROps server IP/FQDN
  • user id
  • vROps password
  • Maximum number of samples to collect
  • Number of Keys to Monitor
  • Keys (one by one)

The adapter kind is the adapter for which you want information (e.g., VMware). The resource kind is the type of resource you want to monitor (e.g., VirtualMachine). These two parameters are used to filter the results.

The vROps server and user details are used to connect to the server. The "Maximum number of samples to collect" value determines how many samples we need to collect data for; by default, it is set to 1. Next is the number of keys to monitor; as the name suggests, this determines the number of metric keys. The last parameter, "Keys", takes the key values one by one.

Running the set-config.py script stores these values in JSON format in a config.json file, which is placed in the same directory. Note that the password is not saved as plain text; it is saved encrypted. I agree it is probably not the strongest encryption, but it is at least better than a plain-text password.
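Purely as an illustration (the actual key names used by the script may differ; everything below is hypothetical), the stored config.json could look roughly like this:

{
  "adapterkind": "VMWARE",
  "resourcekind": "VirtualMachine",
  "server": "vrops.example.com",
  "user": "admin",
  "passwd": "<encrypted string>",
  "maxsamples": 1,
  "keys": ["cpu|usage_average", "mem|usage_average"]
}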

Next, run metric-collection.py. This script reads the values from the config.json file and saves the output in JSON format in a metric-data.json file, also in the same location.

Watch the following video for the detailed run and sample output.

 

Conclusion:

You can use the script if you want to get metric values directly from the vROps server. I did not test the script extensively, so it would be a real help if you could test it out and let me know the outcome. Also, any further suggestions are welcome. I hope this helps you as it helped me.

 

The post How to programmatically collect metric values from vROps appeared first on VMware Cloud Management.

Jun 23, 2017
 

WannaCry, which affects the Microsoft Windows operating system, has already infected hundreds of thousands of computers around the world in a relentless ransomware attack on an unprecedented scale.

Ángel Villar Garea, a systems engineer at VMware, discusses and shows how VMware NSX can help protect infrastructures against WannaCry and similar attacks.

If you want to learn more about how to protect yourself from the latest cybersecurity threats with VMware NSX, download your free copy of 'Network Virtualization for Dummies'.

Take a look at our blog and follow @VMware_ES to stay up to date with all the latest VMware news.

 

Jun 23, 2017
 

By Mado Bourgoin, Technical Director, VMware France

The "Digital Equality" study conducted by Roland Berger, Numa and La Femme Digitale reveals that 73% of women believe they have a key role to play in the digital revolution. For that to become reality, a few stereotypes that hold back gender parity will first have to disappear.

What if growth came through women?

According to a 2013 study by the European Commission, gender parity in the digital sector would increase European GDP by 9 billion euros. Parity is of course a matter of social justice, but it is also becoming an economic imperative in the context of the digital revolution. The nature of jobs will be increasingly impacted by digital technologies and new professions will emerge. Women have cards to play, provided a few barriers are removed, starting with education. Only 15% to 20% of students choosing digital tracks are women, even though women account for 45% of science baccalaureate (BAC S) graduates, and at the current rate we would have to wait until 2075 to reach parity in engineering schools. Aware of this challenge, the French government presented a plan in January 2017 for gender balance in digital companies. While this initiative is a welcome and necessary first step, it is also important to remember that parity is also a matter of personal behavior.

Stereotypes die hard

Nothing stands in the way of parity in the digital sector, but certain stereotypes persist and are perpetuated by men and women alike. In France, women culturally tend to exclude themselves from scientific careers, whereas the trend seems to be the opposite in emerging countries. In France, literary careers are still too often the default for girls. And those who have nevertheless persevered in science must operate in a heavily male world where they have to erase their differences and their specificity in order to fit in. To change mindsets, we need awareness of these stereotypes and we need to value complementarity. This concerns everyone, men and women, within families and within companies.

It all starts with awareness

These attitudes are often inherited from our upbringing and our environment. It is not about pointing an accusing finger at these behaviors, but about creating the conditions for this awareness. That can come through training, introspection or discussion. In no case is it about stoking opposition; we must highlight complementarities and point out anomalies. Simple actions such as #JamaisSansElles call out all the talks, seminars and round tables where no woman is represented. Simply becoming aware that an event without a single woman is an incongruity today can be the beginning of a deeper change.

Success also comes in the feminine

As an example, the DEVOXX developer conference was recently held in Paris. Few women were present, even though code, as Axelle Lemaire pointed out, is one of the disciplines of the future. Digital has given code a strategic dimension: it contributes to business innovation and represents real job potential. Women must lead by example to be a source of inspiration, like Aurélie Jean, founder of In Silico Veritas, who explains why she has become a champion of software development for women and has taken on the role of "role model". A growing number of organizations, such as the Women's Forum, show that success also comes in the feminine, and networks such as Girls in Tech seek to accelerate the growth of women innovators in the high-tech industry and the building of successful startups.

It is up to us, companies committed to digital, to set the example and make women want to thrive in this environment.

Illustration copyright: pict rider / Fotolia

Original article on Latribune.fr

 

Jun 22, 2017
 

SRM supports two different replication technologies: Storage Array (Array-Based) Replication and vSphere Replication. One of the key decisions when implementing SRM is which technology to use and for which VMs. The two technologies can be used together in an SRM environment, though not to protect the same VM. Given that, what are the differences?

The post SRM – Array Based Replication vs. vSphere Replication appeared first on Virtual Blocks.

Jun 22, 2017
 

Last August, VMware announced enhancements to Advantage+, our opportunity registration program. This year, we continue the evolution as part of our ongoing commitment to deliver best in class partner programs. We are pleased to announce the following Advantage+ enhancements effective June 19:

  • Lead on Professional Services – Premier and Enterprise Solution Providers and Corporate Resellers can request registration to take the lead on selling Professional Services.
  • Cross-Sell Up-Sell Improvements – Modified validation process to allow for solution-oriented selling.
  • Professional Level Solution Providers eligible to participate in Advantage+ with safeguard registration benefits.

VMware transformational solutions continue to offer immense partner opportunity and these Advantage+ enhancements will streamline the incentive process for our advanced products.

Be sure to review the incentives page on Partner Central for more details.

The post Ad+ Enhancements for 2017: What Partners Need to Know appeared first on Power of Partnership.

Jun 22, 2017
 

VMware Workspace ONE and VMware Horizon Client 4.5 are now available on the Google Play store. Both can be installed on Samsung Galaxy S8 and S8+ smartphones and used with Samsung DeX Station, which enables USB connectivity to peripherals.

The joint Samsung and VMware solution delivers a unified, digital workspace to keep end users productive. Users can seamlessly transition between their on-the-go mobile device and their full-size desk workspace that includes a monitor, mouse and keyboard.

With Samsung and VMware, end users can quickly move from mobile device to desktop for continuous productivity at work.

Workspace ONE is a simple and secure enterprise platform for delivering and managing any application on the Galaxy S8 and S8+ smartphone. Integrating identity management, real-time application delivery and enterprise mobility management, Workspace ONE helps IT engage digital employees, reduce the threat of data leakage and modernize traditional operations for the mobile-cloud era.

With the Samsung DeX Station, end users can turn their Galaxy S8 or S8+ into a true PC experience. When docked at the DeX Station, Galaxy S8 and S8+ phones launch a special DeX mode on the connected, external monitor, and applications can be opened in multiple, separate windows. We recommend connecting a mouse, keyboard and Ethernet cable for added productivity.

With Workspace ONE and Horizon Client 4.5 installed on the device, DeX users can take advantage of on-the-go access to their Windows, mobile and cloud applications. Without compromising corporate security, this combination is an ideal choice for bring-your-own-device (BYOD) users.

Here's an example of a Galaxy S8 running DeX mode on a monitor with a Horizon virtual desktop and Workspace ONE.

With Horizon, DeX users will also experience Blast Extreme adaptive transport for high-performing cloud applications and virtual desktops—right from Samsung Galaxy S8 and S8+ devices.

Meanwhile, users can easily multitask between their personal and corporate worlds and even take phone calls. While docked in the DeX Station, call functionality works seamlessly with a Bluetooth headset, without interrupting the desktop experience.

Organizations that want to test the Galaxy S8 or S8+ with Horizon and/or Workspace ONE can quickly sign up at VMware TestDrive for End-User Computing. IT can conveniently access a free, cloud-based demo environment within minutes—without spinning up internal resources and/or infrastructure.

Go to VMware TestDrive

Learn More

  • See the announcement: Introducing New Samsung Galaxy S8 + VMware Workspace ONE
  • Read more from VentureBeat: Samsung's DeX dock turns the Galaxy S8 into a PC
  • Watch a demo at KRON 4: Samsung's Dex Turns Your Phone into a Work or Home Computer

The post One Device for On-the-Go Mobility & Desktop Productivity—Eliminate Compromise with VMware & Samsung appeared first on VMware End-User Computing Blog.

Jun 22, 2017
 

As our CTO Ray O’Farrell recently mentioned, VMware is committed to helping customers build intelligent infrastructure, which includes the ability to take advantage of Machine Learning within their private and hybrid cloud environments. As part of delivering this vision, the Office of the CTO collaborates with customers and with VMware R&D teams to ensure the […]

The post How to Enable Compute Accelerators on vSphere 6.5 for Machine Learning and Other HPC Workloads appeared first on VMware | OCTO Blog.

Jun 22, 2017
 

This blog was updated on May 22, 2017, with the latest information about the Device Enrollment Program from Apple. Join the conversation on Twitter using #iOSinBusiness.

What is the Device Enrollment Program from Apple?

The Device Enrollment Program provides a fast, streamlined way to deploy your corporate-owned Mac, iOS or tvOS devices. With a mobile device management (MDM) and unified endpoint management solution like VMware AirWatch, IT can:

  • Customize device settings;
  • Activate and supervise devices over the air; and
  • Enable users to set up their own devices out of the box.

[Related: 27 Questions Answered about AirWatch & the Device Enrollment Program from Apple]

What IT challenges does the Device Enrollment Program help address?

The Device Enrollment Program solves several critical requirements for corporate-owned devices. First, organizations save time and money by eliminating high-touch processes for IT. DEP takes configuration time to zero: deployment of corporate-owned devices with DEP means zero-touch configuration for IT, eliminates staging and automates device configuration.

Second, onboarding iOS or macOS devices is streamlined for users. Based on the settings IT configured, users are prompted through Setup Assistant (skipping any unnecessary screens). Users only need to authenticate and don't need to be tech savvy to get the content, apps and email they need on their smartphones.

Finally, supervising iOS devices over the air is possible with the DEP. With supervision, administrators have more control over the device and can disable features like AirDrop, the App Store and account modification. They can also enable features like password protection. Also, the MDM profile cannot be removed, which eliminates the possibility of un-enrollment to protect data and investments in devices and provides the best user experience possible.

What role does AirWatch play in Apple's Device Enrollment Program?

To utilize the Device Enrollment Program, MDM capabilities like those that are part of VMware AirWatch are required. AirWatch integrates with the Device Enrollment Program, enabling organizations to automatically import devices into the console based on order history. Then, administrators can easily configure settings, apply profiles, assign applications and set restrictions that apply automatically when users unbox their devices.

[Related: iOS 10.3, tvOS 10.2 & macOS 10.12.4 Are Live! VMware AirWatch Has Your Mobile Business Covered]

How can I join the Device Enrollment Program from Apple?

First, enroll with Apple and register your organization's information to create an account and designate your administrators. Next, configure your device settings and Setup Assistant steps in the AirWatch console. You then can ship devices directly to your users.

For more information, check out Apple's Device Enrollment Program Guide.

What are the device requirements for the Apple Device Enrollment Program?

The devices must be corporate-owned and purchased directly from Apple or through participating Apple Authorized Resellers.*

*The Device Enrollment Program may not be supported by all Apple Authorized Resellers and carriers.

Where is the Device Enrollment Program available?

The Device Enrollment Program is available in 34 countries: Australia, Austria, Belgium, Brazil, Canada, Czech Republic, Denmark, Finland, France, Germany, Greece, Hong Kong, Hungary, India, Ireland, Italy, Japan, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Singapore, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey, United Arab Emirates, United Kingdom and United States.

What’s available for education with the Device Enrollment Program from Apple?

Both Apple and AirWatch give special consideration to unique education use cases. With Apple School Manager (ASM), Apple has delivered a central place for account creation, role definitions and content purchases. To support ASM, AirWatch designed a console section for education to set up mobile deployments and streamline management of teachers, students, classes, apps and more—whether you have a 1:1 or shared device deployment. After importing data from Apple School Manager, use AirWatch to:

  • Match devices with students or classes;
  • Assign applications (to users or devices); and
  • Configure the new Classroom application, allowing teachers to guide learning on iPads.

Students quickly choose the device with their photo displayed once their teacher has started the class.

Visit apple.com/business/dep/ and apple.com/education/it/ to learn more about the Device Enrollment Program.

 

Jun 22, 2017
 

I have lately started a tradition of copying/pasting reports of events I attend so that the community can read them. As always, they are organized as a mix of (personal) thoughts that, as such, are always questionable, as well as raw notes that I took during the keynotes and breakout sessions.

You can find/read previous reports at these links:

  • Kubecon – 2017 – Berlin
  • Dockercon – 2016 – Seattle
  • Serverlessconf – 2016 – New York

Note some of these reports have private comments meant to be internal considerations to be shared with my team. These comments are removed before posting the blog publicly and replaced with <Redacted comments>.

Have a good read. Hopefully, you will find this small “give back” to the community helpful.

———————————————————————-

Massimo Re Ferré, CNA BU – VMware

Hashidays London report.

London – June 12th 2017

Executive summary and general comments

This was in general a good full day event. Gut feeling is that the audience was fairly technical, which mapped well the spirit of the event (and HashiCorp in general).

There had been nuggets of marketing messages spread primarily by Mitchell H (e.g. "provision, secure, connect and run any infrastructure for any application") but these messages seemed a little bit artificial and bolted on. HashiCorp remains (to me) a very engineering-focused organization where the products market themselves in an (apparently) growing and loyal community of users.

There were very few mentions of Docker and Kubernetes compared to other similar conferences. While this may be due to my personal bias (I tend to attend more containers-focused conferences as of late), I found interesting that there were more time spent talking about HashiCorp view on Serverless than containers and Docker.

The HashiCorp approach to intercept the container trend seems interesting. Nomad seems to be the product they are pushing as a counter answer to the likes of Docker Swarm / Docker EE and Kubernetes. Yet Nomad seems to be a general-purpose scheduler which (almost incidentally) supports Docker containers. However, a lot of the advanced networking and storage workflows available in Kubernetes and in the Docker Swarm/EE stack aren't apparently available in Nomad.

One of the biggest tenets of HashiCorp's strategy is, obviously, multi-cloud. They tend to compete with specific technologies available from specific cloud providers (that only work in said cloud), so the notion of having cloud-agnostic technologies that work seamlessly across different public clouds is something they leverage (a ton).

Terraform seemed to be the star product in terms of highlights and number of sessions. Packer and Vagrant were hardly mentioned outside of the keynote, with Vault, Nomad and Consul sharing the remaining time almost equally.

In terms of the backend services and infrastructures they tend to work with (or that their customers tend to end up on), I would say that the event was 100% centered around public cloud. <Redacted comments>.

All examples, talks, demos, customer scenarios, etc. were focused on public cloud consumption. If I had to guess a share of "sentiment", I'd say AWS gets a good 70%, with GCP at another 20% and Azure at 10%. These are not hard data, just gut feelings.

The monetization strategy for HashiCorp remains (IMO) an interesting challenge. A lot (all?) of the customer talks were based on scenarios where they were using standard open source components. Some of them specifically prided themselves on having built everything using free open source software. There was a mention that at some point one specific customer would buy Enterprise licenses, but the way it was phrased made me think this was intended as a give-back to HashiCorp (to which they owe a lot) rather than driven by specific technical needs for the Enterprise version of the software.

That said, there is no doubt HashiCorp is doing amazing things technologically and their work is very well respected.

In the next few sections there are some raw notes I took during the various speeches throughout the day.

Opening Keynote (Mitchell Hashimoto)

The HashiCorp User Group in London is the largest (1300 people) in the world.

HashiCorp strategy is to … Provision (Vagrant, Packer, Terraform), secure (Vault), connect (Consul) and run (Nomad) any infrastructure for any application.

In the last few years, lots of Enterprise features found their way into many of the above products.

The theme for Consul has been "easing the management of Consul at scale". The family of Autopilot features is an example of that (a set of features that allows Consul to manage itself). Some of these features are only available in the Enterprise version of Consul.

The theme for Vault has been to broaden the feature set. Replication across data centers is one such feature (achieved via log shipping).

Nomad is being adopted by the largest companies first (a very different pattern compared to the other HashiCorp tools). The focus recently has been on solving some interesting problems that surface in these large organizations. One such advancement is Dispatch (HashiCorp's interpretation of Serverless). You can now also run Spark jobs on Nomad.

The theme for Terraform has been to improve platform support. To achieve this, HashiCorp is splitting the Terraform core product from the providers (managing the community of contributors is going to be easier with this model). Terraform will download providers dynamically, but they will be developed and distributed separately from the Terraform product code. In the next version, you can also version the providers and require a specific version in a specific Terraform plan. "terraform init" will download the providers.
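As a minimal sketch (the provider name, region and version constraint below are just placeholders), pinning a provider version in a plan would look roughly like this once providers are split out:

# Providers are no longer bundled with Terraform core:
# "terraform init" downloads the plugin declared below.
provider "aws" {
  # Illustrative constraint: accept any 1.x release of the AWS provider.
  version = "~> 1.0"
  region  = "eu-west-1"
}

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}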

Mitchell brings up the example of the DigitalOcean firewall feature. They didn't know it was coming, but 6 hours after the DO announcement they received a PR from DO that implemented all the firewall features in the DO provider (these situations are much easier to manage when community members contribute to provider modules that are not part of the core Terraform code base).

Modern Secret Management with Vault (Jeff Mitchell)

Vault is not just an encrypted key/value store. For example, generating and managing certificates is something that Vault is proving to be very good at.

One of the key Vault features is that it provides multiple (security related) services fronted with a single API and consistent authn/authz/audit model.
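As a rough illustration of that consistent authz model (the paths below are hypothetical), a minimal Vault policy, which happens to be written in HCL as well, could look like this:

# Hypothetical policy for an "app1" consumer: it may only read
# its own secrets, regardless of which backend serves them.
path "secret/app1/*" {
  capabilities = ["read"]
}

# Everything else is explicitly denied (Vault denies by default anyway).
path "*" {
  capabilities = ["deny"]
}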

Jeff talks about the concept of "Secure Introduction" (i.e. how you provide a client/consumer with a security key in the first place). There is no one-size-fits-all: it varies and depends on your situation, the infrastructure you use, what you trust and don't trust, etc. It also varies depending on whether you are using bare metal, VMs, containers, public cloud, etc., as every one of these models has its own facilities to enable secure introduction.

Jeff then talks about a few scenarios where you could leverage Vault to secure client to app communication, app to app communication, app to DB communications and how to encrypt databases.

Going multi-cloud with Terraform and Nomad (Paddy Foran)

The message of the session focused on multi-cloud. Some of the reasons to choose multi-cloud are resiliency and consuming cloud-specific features (which I read as counter-intuitive to the idea of multi-cloud?).

Terraform provisions infrastructure. Terraform is declarative, graph-based (it will sort out dependencies), predictable and API agnostic.

Nomad schedules apps on infrastructure. Nomad is declarative, scalable, predictable and infrastructure agnostic.

Paddy shows a demo of Terraform / Nomad across AWS and GCP. Paddy explains how you can use outputs of the AWS plan as inputs for the GCP plan and vice versa. This is useful when you need to set up VPN connections between two different clouds and you want to avoid lots of manual configuration (which may be error prone).
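A minimal sketch of that output-to-input hand-off (resource and variable names below are made up for illustration) could look like this:

# --- AWS plan: expose the address the GCP side needs ---
resource "aws_eip" "vpn" {
  vpc = true
}

output "aws_vpn_endpoint_ip" {
  value = "${aws_eip.vpn.public_ip}"
}

# --- GCP plan (separate directory): accept that address as an input ---
variable "aws_vpn_endpoint_ip" {
  description = "Public IP of the AWS-side VPN endpoint"
}

The value can then be read with "terraform output aws_vpn_endpoint_ip" on the AWS side and passed to the GCP side via "terraform apply -var ...", or wired up through a remote state data source.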

Paddy then customizes the standard example.nomad job to deploy on the "datacenters" he created with Terraform (on AWS and GCP). This instantiates a Redis Docker image.
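For reference, the kind of tweak described above would look roughly like this in the example.nomad job file (the datacenter names below are placeholders for whatever Terraform actually created):

job "example" {
  # Placeholder datacenter names, one per cloud provisioned by Terraform.
  datacenters = ["aws-eu-west-1", "gcp-europe-west1"]

  group "cache" {
    count = 1

    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}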

The closing remark of the session is that agnostic tools should be the foundation for multi-cloud.

Running Consul at Massive Scale (James Phillips)

James goes through some fundamental capabilities of Consul (DNS, monitoring, K/V store, etc.).

He then talks about how they have been able to solve scaling problems using a gossip protocol.

It was a very good and technical session, arguably targeted at existing Consul users/customers who wanted to fine-tune their Consul deployments at scale.

Nomad and Next-generation Application Architectures (Armon Dadgar)

Armon starts by defining the role of the scheduler (broadly).

There are a couple of roles that HashiCorp kept in mind when building Nomad: developers (or Nomad consumers) and infrastructure teams (or Nomad operators).

Similarly to Terraform, Nomad is declarative (not imperative). Nomad will know how to do things without you needing to tell it how.

The goal for Nomad was never to build an end-to-end platform but rather to build a tool that would do the scheduling and bring in other HashiCorp (or third party) tools to compose a platform. This after all has always been the HashiCorp spirit of building a single tool that solves a particular problem.

Monolithic applications have intrinsic application complexity. Micro-services applications have intrinsic operational complexity. Frameworks have helped with monoliths much like schedulers are helping now with micro-services.

Schedulers introduce abstractions that help with service composition.

Armon talks about the "Dispatch" jobs in Nomad (HashiCorp's FaaS).

Evolving Your Infrastructure with Terraform (Nicki Watt)

Nicki is the CTO @ OpenCredo.

There is no right or wrong way of doing things with Terraform. It really depends on your situation and scenario.

The first example Nicki talks about is a customer that has used Terraform to deploy infrastructure on AWS to setup Kubernetes.

She walks through the various stages of maturity that customers find themselves in. They usually start with hard-coded values inside a single configuration file. Then they start using variables and applying them to parameterized configuration files.

Customers then move on to a pattern where they usually have a main Terraform configuration file which is composed of reusable and composable modules.

Each module should have very clearly identified inputs and outputs.
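A hypothetical module call in that style, with explicit inputs and a re-exported output (all names below are made up), might look like this:

variable "environment" {
  default = "prod"
}

# Root configuration: compose the setup from reusable modules.
module "k8s_cluster" {
  source       = "./modules/k8s-cluster"
  cluster_name = "${var.environment}-k8s"
  worker_count = 3
}

# Re-export one of the module's outputs so other tooling (or other
# configurations) can consume it; the module itself must define
# an "api_endpoint" output for this reference to work.
output "k8s_api_endpoint" {
  value = "${module.k8s_cluster.api_endpoint}"
}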

The next phase is nested modules (base modules embedded into logical modules).

The last phase is to treat subcomponents of the setup (i.e. Core Infra, RDS, K8s cluster) as totally independent modules. This way you manage these components independently hence limiting the possibility of making a change (e.g. in a variable) that can affect the entire setup.

Now that you have moved to this "distributed" stage of independent components and modules, you need to orchestrate what needs to run first, etc. Different people solve this problem in different ways, from README files that guide you through the manual steps, to off-the-shelf tools such as Jenkins, all the way to DIY orchestration tools.

This was really an awesome session! Very practical and very down to earth!

Operational Maturity with HashiCorp (James Rasell and Iain Gray)

This is a customer talk.

Elsevier has an AWS-first approach. They have roughly 40 development teams (each with 2-3 AWS accounts, and each account has 1-6 VPCs).

This is very hard to manage manually at this scale. Elsevier has established a practice inside the company to streamline and optimize these infrastructure deployments (they call this practice "operational maturity"). This is the charter of the Core Engineering team.

The "operational maturity" team has 5 pillars:

  • Infrastructure governance (base infrastructure consistency across all accounts). They have achieved this via a modular Terraform approach (essentially a catalog of company standard TF modules developers re-use).
  • Release deployment governance
  • Configuration management (everything is under source control)
  • Security governance (an "AMI bakery" that produces secured AMIs and makes them available to developers)
  • Health monitoring

They chose Terraform because:

  • it had a low barrier to entry;
  • it was cloud agnostic; and
  • it codified infrastructure with version control.

Elsevier suggests that in the future they may want to use Terraform Enterprise, which underlines the difficulties of monetizing open source software: they are apparently extracting a great deal of value from Terraform, but HashiCorp is making zero out of it.

Code to Current Account: a Tour of the Monzo Infrastructure (Simon Vans-Colina and Matt Heath)

Enough said. They are running (almost) entirely on free software (with the exception of a small system that allows communication among banks). I assume this implies they are not using any of HashiCorp's paid Enterprise products.

Monzo went through some technology "trial and fail" phases, such as:

  • from Mesos to Kubernetes
  • from RabbitMQ to Linkerd
  • from AWS Cloud Formation to Terraform

They currently have roughly 250 services, all of which communicate with each other over HTTP.

They use Linkerd for inter-services communication. Matt suggests that Linkerd integrates with Consul (if you use Consul).

They found they had to integrate with some banking systems (e.g. Faster Payments) via on-prem infrastructure (Matt: "these services do not provide an API, they rather provide a physical fiber connection"). They appear to be using on-prem capacity mostly as a proxy into AWS.

Terraform Introduction training

The day after the event I attended the one-day "Terraform Introduction" training. This was a mix of lecture and practical exercises. The mix was fair and overall the training wasn't bad (although some of the lecture content was very basic and redundant with what I already knew about Terraform).

The practical side of it guides you through deploying instances on AWS, using modules, variables and Terraform Enterprise towards the end.

I would advise taking this specific training only if you are very new to Terraform, given that it assumes you know nothing. If you have already used Terraform in one way or another, it may be too basic for you.