BTree partners with Chef: Turn your infrastructure into code

BTree is proud to announce its partnership with Chef, under which BTree will work with Chef on solution development for its customers. Now we can help turn your infrastructure into code quickly and securely.

With Chef you can manage servers – 5 or 5,000 of them – by turning your infrastructure into code. Time-consuming activities like manual patching, configuration updates, and per-server service installations go away, and your infrastructure becomes flexible, versionable, human-readable, and testable.
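As an illustration of what "infrastructure as code" looks like in practice, here is a minimal Chef recipe sketch; the package, service, and template names are hypothetical, not part of any BTree or Chef offering:

```ruby
# Hypothetical recipe sketch: installs Apache, keeps it running, and
# manages its config file from a template. Names are illustrative.
package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

template '/etc/httpd/conf/httpd.conf' do
  source 'httpd.conf.erb'
  owner 'root'
  mode '0644'
  notifies :restart, 'service[httpd]'
end
```

Converging a recipe like this on 5 or 5,000 nodes yields the same result, which is what makes the configuration versionable and testable.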

To know more about our partnership with Chef, send us an email.

BTree is now a Spotinst Reseller Partner

BTree is proud to announce that we have partnered with Spotinst to take the risk out of your Spot Instances.

Utilizing your cloud provider’s excess capacity can save you a lot, but taking advantage of those savings requires significant work and still carries risk. Using predictive algorithms, Elastigroup enables you to leverage Spot Instances (AWS) and Low-priority VMs (Azure) with no risk of downtime.

Elastigroup uses predictive algorithms to forecast Spot behavior, capacity trends, pricing, and interruption rates. Whenever there’s a risk of interruption, Elastigroup acts accordingly to rebalance capacity, ensuring 100% availability and no risk of downtime. This means your application always runs on the most cost-efficient collection of instances – the best-priced Spot Instances when available, falling back to on-demand when not, and prioritizing any reserved instances you may already own.

Elastigroup supports the vast majority of common applications, automatically plugging into the architecture you already use (see use cases below). As long as your instance isn’t a single point of failure, you can utilize Elastigroup to start saving.

Send us an email to learn more about how you can leverage Spotinst in your environment.

BTree is now a Cloudistics Reseller Partner

BTree is proud to announce that we have partnered with Cloudistics. Cloudistics resets expectations for enterprise cloud, providing bare-metal performance, unlimited scalability, full multitenancy, government-compliant security, and automated management and updates, all with incredible affordability.

Cloudistics uniquely uses a composable architecture, in which the infrastructure adapts itself to run applications. This enables Cloudistics to provide a turn-key enterprise cloud: network, storage, and compute are interlocked and virtualized, and the infrastructure is hidden from the user behind application-focused control software. The control software is provided through SaaS and directly manages the infrastructure based on application demands, without customer intervention. In this way, customers focus on their applications and services, not their infrastructure.

We are happy to bring the benefits of Cloudistics to our customers in the North America region.

Send us an email to gain the benefits of Cloudistics today.

DevSecOps Workflow with Automated InSpec – Meetup on 01/30 @ 5:30 PM (Herndon, VA)

DevSecOps Workflow with Automated InSpec

InSpec is a platform-agnostic tool, built on RSpec, used to check live systems for policy compliance. Onyx Point has been building a STIG profile to supplement the SCAP Security Guide and has written custom helpers to run the profile(s) automatically as part of an acceptance suite. In this session, Onyx Point will discuss compliance, security, and DevOps, and specifically how users can integrate InSpec profiles to begin a migration from DevOps to DevSecOps. More specifically, the presentation will answer the following questions:

– What does Onyx Point do?

– What is compliance and how does it relate to security?

– Why should you care about compliance, and who does it affect?

– Where does compliance/security fit into the DevOps cycle?

– How do you implement DevSecOps?

– When can you implement it?
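To give a flavor of what an InSpec profile contains, here is a minimal control sketch; the control id, title, and expected value are illustrative, not taken from Onyx Point's STIG profile:

```ruby
# Hypothetical InSpec control: checks one SSH hardening setting.
control 'ssh-01' do
  impact 1.0
  title 'Disallow root login over SSH'
  desc  'Remote root login is a common compliance finding.'
  describe sshd_config do
    its('PermitRootLogin') { should cmp 'no' }
  end
end
```

Run against a live system (e.g. `inspec exec profile/ -t ssh://host`), controls like this produce pass/fail results that can gate an acceptance suite.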

BTree is proud to host this event at its Herndon, VA facility. Please join us at this meetup to learn, share knowledge, and network with the DevSecOps community in the region. Food and drinks will be provided at the event.

We look forward to meeting all of you at the event. Please share this meetup with your friends so they can benefit as well.

BTree is now a Consulting & SI partner of Confluent

We at BTree are proud to announce our partnership with Confluent as their Consulting & SI partner. Our continued focus on the cloud computing space takes a great leap forward now that we have Confluent, a best-in-class streaming platform provider, along with us on our journey.

Here are a few more details about Confluent: Confluent, founded by the creators of open source Apache Kafka™, provides the only streaming platform that enables enterprises to maximize the value of data. Confluent Platform empowers leaders in industries such as retail, logistics, manufacturing, financial services, technology, and media to move data from isolated systems into a real-time data pipeline where they can act on it immediately.

Look out for more details on our solution offerings in this space.

AWS Virtual Tape Library – Picking up steam!

AWS Virtual Tape Library (VTL) already supports these backup applications: NetBackup, Backup Exec, and Veeam. VTL stores your virtual tapes as objects in AWS storage, and each VTL comes preconfigured with media changers and tape drives. This makes it a good storage gateway option during a migration from on-prem to AWS. BTree has extensive expertise in designing your storage gateway solution for such migrations. Our consultants will analyze your landscape and recommend a storage gateway strategy appropriate for you, which may leverage one or more of these Storage Gateway options: AWS File Gateway, AWS Stored Volumes, AWS Cached Volumes, and AWS Virtual Tape Library.

On a related note, Veeam recently announced at VeeamON 2017 that Veeam customers can now leverage VTL on AWS object storage as a scalable and cost-effective tape alternative. The great thing about this solution for Veeam customers is that it works with Veeam Backup & Replication, with zero changes needed to existing backup processes. The lack of process change is an important point, as many organizations build their operational procedures – and their expectations of where different restore points live – on rules based on tape capabilities. With Veeam’s integrated VTL solutions, the data lives on low-latency AWS public cloud storage (Amazon Simple Storage Service, S3) with smart de-staging that allows the data to move to even more cost-efficient Amazon Glacier storage for long-term retention. Veeam currently has two options for VTL: Amazon’s native VTL, or StarWind VTL to Cloud. Additional options will be announced at a future date.

To know more about how to leverage VTL in your environment, send us your query.

Now you can process, store, and monetize your cloud videos with AWS!

AWS recently launched a set of media services to help you process, store, and manage cloud-based videos. Here is a list of the AWS media services for your quick reference:

  • AWS MediaConvert – A file-based video transcoding service
  • AWS MediaLive – A broadcast-grade live video processing service
  • AWS MediaPackage – Reliably prepares and protects your video for delivery over the Internet
  • AWS MediaStore – A storage service optimized for media
  • AWS MediaTailor – Lets video providers insert individually targeted advertising into their video streams without sacrificing broadcast-level quality-of-service
  • Kinesis Video Streams – A fully managed video ingestion and storage service that makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), and other processing.

Now you don’t have to worry about managing complex infrastructure for your media services; just focus on content, and AWS will take care of the rest for you.

Here are a few benefits of each service for your reference:

AWS MediaConvert
  • Broadcast-grade capabilities
  • Reliable and easy to manage
  • Simple, predictable pricing

AWS MediaLive
  • Broadcast-grade capabilities
  • Highly available
  • Increased efficiency and reduced cost
  • Simple deployment and management

AWS MediaPackage
  • Reach a wide range of connected devices
  • Advanced video experiences and content protection
  • Built-in scalability and reliability
  • Easy integration with AWS cloud services

AWS MediaStore
  • High performance, optimized for video
  • Scale with your audience
  • Familiar management tools for access control

AWS MediaTailor
  • Easily deliver targeted ads to any platform
  • Improve viewing experiences
  • Increase the accuracy of ad view reporting

Kinesis Video Streams
  • Stream video from millions of edge devices
  • Easily build vision-enabled apps
  • Durable, searchable storage
  • No infrastructure to manage
  • Build both real-time and batch applications

BTree Family wishes our friends a Merry Christmas and a Happy New Year

It is the time of year when we get to spend time with our families and friends. We at BTree had a wonderful 2017, and we thank God and our friends and families for trusting us to handle some of their complex IT project needs, especially in the DevOps space. We successfully delivered on all our engagements and are very proud of the wonderful BTree family who made it happen for our clients across the Americas. We wish all our friends a Merry Christmas and a Happy New Year. God bless all of you, and God bless America.

Automate with wercker

Our Tweet app, revisited: build, test, and deploy it as containers on AWS using an automated CI/CD pipeline with Wercker.

This post will help you set up a CI/CD pipeline with Wercker for an application running in a Docker container. The same approach can be scaled out to bigger applications as well. It serves as an initial starting point for setting up pipelines and workflows on Wercker.

It also covers the Slack integration step, which posts notifications on your build status.

The application has this structure:



Application: A simple HTML page with your content and an icon invoking an actionable tweet. The application's source code includes the wercker.yml file.


  1. Integrate Wercker with Git – On the profile page, open Settings, then Git connections underneath it. You have the option to connect either your GitHub account or your Bitbucket account.
  2. Create the application – Click the + icon in the top-right corner and choose Create an application. Wercker will auto-populate the repositories from the GitHub account you linked earlier. Choose the code repository for which you would like to configure the CI pipeline, configure the access, and your application will be created. You are all done!
  3. Create the pipelines – Pipelines are collections of steps. A step is a single logical chunk or block of code, and a pipeline is a collection of such steps; a build pipeline, for example, might include a step that builds a Docker image and a step that notifies a Slack channel.

  4. Create the workflows – A workflow is a flow of pipelines: it determines the order in which pipelines are executed. Pipelines can be arranged in series or in parallel. For example, multiple test pipelines could run in parallel at the same time, while the deploy pipeline runs in series after the build.

  5. Manage environment variables – Environment variables can be managed at the organization level (the highest account level), at the application level, or at the pipeline level. Often, organization-wide values are managed at the organization level, while application-related environment variables are maintained at the application level.

Some of the variables we would use are these:

  • slack_url
  • Dockerhub username and password
  • hostname
  • SSH key information

Wercker supports masking (encrypting) passwords. Once masked, a value is decrypted only at runtime when the variable is invoked, providing an added layer of security.

Let's now define the wercker.yml.

This is a declarative YAML file that controls all the steps and pipelines.

Most of the work happens through the declarations in wercker.yml. We can include multiple pipelines – such as build, test, push to Docker Hub, and deploy to each environment – inside the wercker.yml file.

The build section currently contains a basic install test; you can add more tests as needed. It also contains the Slack integration, which posts notifications to your Slack channel. You will have to configure an incoming webhook on Slack.

The push-to-Docker section declares how we push the container image to Docker Hub, and the deploy section contains the commands that deploy the Docker image.
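As a sketch of how these pieces fit together, a wercker.yml along the following lines could work; the step options, image names, and variable names (SLACK_URL, DOCKER_USERNAME, DEPLOY_KEY, HOSTNAME) are placeholders, not the exact values from this project:

```yaml
# Hypothetical wercker.yml sketch: build, push to Docker Hub, deploy over SSH.
box: node
build:
  steps:
    - script:
        name: install dependencies
        code: npm install
    - script:
        name: run tests
        code: npm test
  after-steps:
    - slack-notifier:
        url: $SLACK_URL
        channel: builds
push-to-dockerhub:
  steps:
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: myorg/tweet-app
        tag: latest
deploy:
  steps:
    - add-ssh-key:
        keyname: DEPLOY_KEY
    - script:
        name: deploy over ssh
        code: ssh -o StrictHostKeyChecking=no ec2-user@$HOSTNAME "docker pull myorg/tweet-app && docker restart tweet-app"
```

Each top-level section here is a pipeline you can wire into a workflow in the Wercker UI.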

Most importantly, note the steps being used: we are using predefined steps developed by the community and by the Wercker organization.

Highly notable integrations include internal/docker-push, for pushing the image to a container registry (think of it as a GitHub for Docker images).

The Slack notifier step integrates Wercker with Slack to post notifications about the build step.

The SSH key addition step lets us use SSH keys to authenticate and log in to EC2 servers. You can create SSH keys in the environment variables section.

Once you have generated the keys, copy the public key and paste it into authorized_keys in the .ssh directory inside the home directory of the user you will use to log in to the EC2 server. Note: our deploy step will SSH into the instance and run commands there.
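As a sketch of that key setup (the file names, the `wercker-deploy` comment, and the EC2 host are placeholders; Wercker can also generate the key pair for you in the UI):

```shell
# Generate a passphrase-less key pair for the deploy step. The private key
# goes into a Wercker environment variable; the public key goes onto EC2.
ssh-keygen -t rsa -b 4096 -N "" -C "wercker-deploy" -f ./wercker_deploy_key

# On the EC2 instance (user and host are placeholders), append the public
# key to the deploy user's authorized_keys, e.g.:
#   cat wercker_deploy_key.pub >> ~/.ssh/authorized_keys
```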

With these resources, you can now get started on Wercker.

Have a great start creating your first pipeline. Keep following the blog for more articles and for further refinements to this one.

AUTHOR:  Ravi Devatha