Tag archive: amazon

Bookmarks for 16 Jun 2015 through 19 Jun 2015

These are my links for 16 Jun 2015 through 19 Jun 2015:

  • 10 Things You Should Know About AWS – High Scalability – Ahead of the upcoming 2nd annual re:Invent conference, inspired by Simone Brunozzi’s recent presentation at an AWS Meetup in San Francisco, and collected from a few of my recent Fluxcapacitor.com consulting engagements, I’ve compiled a list of 10 useful time and clock-tick saving tips about AWS.
  • IT Landscape for sysadmins
  • MonitoringScape – The past decade has seen a dramatic shift in how we build applications: clouds, containers and micro-services have displaced the old paradigm of static, monolithic infrastructure. The need for operational visibility has grown tenfold. Thankfully, the monitoring landscape has kept up with the times. We now have a choice of over 100 monitoring tools that provide excellent visibility to every nook and cranny of our IT stack. The modern monitoring landscape has something for everyone: on-prem installations, SaaS applications, open-source tools and high-priced enterprise monitoring suites. However, with so many tools to choose from, the monitoring landscape can be difficult to navigate. MonitoringScape is your guide to the new, exciting world of modern monitoring. Keep in mind that this is a community resource, so your comments and suggestions are very welcome.
  • Provision and Bootstrap AWS instances with Chef – This is a continuation of the previous post, Provision with Chef – baby steps. Today we are going to talk about the Chef-based instance bootstrapping process used by FastCompany.
  • Provision machines with AWS – custom bootstrapper – […] Now I will tell a little more about our instance bootstrap process. Basically, at the end of the previous post we discussed three possible options for automated machine startup: create a different AMI for each server role; install all binaries into one AMI and provide a way to load dynamic config parts through a custom bootstrap script; or use an infrastructure automation framework like Chef or Puppet, which can handle installs and configuration for you. […] [ Note: the article predates the chef-provisioning tool ] A rough sketch of the bootstrap-script option appears after this list.
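The second option above, a generic AMI plus a bootstrap script delivered through user-data, can be sketched with boto3. This is only a minimal illustration, not the article’s actual bootstrapper: the AMI ID, key pair, security group, region and role name are placeholders, and the omnibus install URL is the one Chef documents for installing chef-client.

```python
# Rough sketch: launch an EC2 instance from a generic AMI and bootstrap it
# with Chef via user-data. AMI ID, key pair, security group, region and
# role name are hypothetical placeholders.
import boto3

USER_DATA = """#!/bin/bash
# Install chef-client via the omnibus installer, then converge the node
# against a role. Assumes /etc/chef/client.rb and the validation key are
# baked into the AMI or fetched here (e.g. from S3).
curl -L https://omnitruck.chef.io/install.sh | bash
chef-client --runlist 'role[webserver]'
"""

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # generic base AMI, placeholder
    InstanceType="t2.micro",
    KeyName="my-keypair",                     # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,                       # executed by cloud-init on first boot
)
print(resp["Instances"][0]["InstanceId"])
```

The same user-data script works whether instances are started by hand or by an Auto Scaling group, which is what makes this option attractive compared with baking a new AMI for every role.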

Bookmarks for 29 May 2015 through 10 Jun 2015

These are my links for 29 May 2015 through 10 Jun 2015:

  • My Blog: AWS EC2 Auto Scaling: Basic Configuration – Our goal: create an Auto Scaling EC2 group in a single Availability Zone and use an HTTP status page as a health monitor for our Load Balancer and the Auto Scaling group instances. This exercise shows some Auto Scaling basics and is useful for understanding the underlying concepts, but the Auto Scaling group will not automatically "scale" in response to external metrics such as average CPU usage or total Apache connections (that aspect is covered in this post: AWS EC2 Auto Scaling: External CloudWatch Metric). With the Auto Scaling configuration described here, we obtain a web server cluster that can be grown and shrunk with a simple Auto Scaling API call, and we transfer the monitoring role to the ELB so that failed EC2 instances or web servers are replaced automatically. A minimal sketch of these API calls appears after this list.
  • Autoscaling with custom metrics « That’s Geeky – One of the appeals of cloud computing is the idea of using what you need, when you need it. One of the ways Amazon provides for this is autoscaling. In essence, this allows you to vary the number of (related) running instances according to some metric that is being tracked. In this article, we look at how you can trigger a change in the number of running instances using a custom CloudWatch metric – including the setup of said metric, and a brief look at the interactions between the various autoscaling commands used. A sketch of the metric-plus-alarm wiring also appears after this list.
  • Painless AWS Auto Scaling With EBS Snapshots And Capistrano – Boom – AWS (Amazon Web Services) auto scaling is a simple concept on the surface: you get an AMI, set up rules, and the load balancer takes care of the rest. However, actually getting it done is more complicated. Some choices are worse than others: you could bake an AMI (Amazon Machine Image) before you deploy, but that could add 10 minutes or more to each deployment. Some are dangerous: you could create an AMI after each deploy, but you run the risk that an auto scaling event happens before your AMIs are done. Plus, you end up with a whole variety of AMIs deployed at any given time. Some are similar to what we propose in this tutorial: you could push your code to S3 on each deploy and have user-data scripts that pull it down on each auto scaling event. However you slice it, getting auto scaling to fit into your development workflow in a transparent way takes careful thought and planning. We recently rolled out the following solution at CodePen. It keeps our AMIs static and our application ready for scaling on EBS (Elastic Block Store) snapshots. We can push code using Capistrano and let a few scripts distribute the ever-changing code base to our fleet of servers. I’d like to share the steps required to make it work. This series of posts will walk you through the steps required to build an auto-scaling infrastructure that stays out of your way.
  • coderwall.com : establishing geek cred since 1305712800 – Did you accidentally set node.normal[:foo][:bar] = 'something bad' in your Chef recipe? Then you found that the node's normal attributes persisted between Chef runs, when you really wanted to use the default attribute precedence level in your cookbook's attributes/default.rb file?
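For the basic Auto Scaling configuration described in the first bookmark above, the essential API calls are a launch configuration, an Auto Scaling group tied to a classic ELB, and a desired-capacity change to resize the cluster. The boto3 sketch below is only an illustration under assumed names: the launch configuration name, AMI ID, security group, availability zone and load balancer name are all placeholders, not values from the article.

```python
# Rough sketch: a launch configuration plus an Auto Scaling group that
# delegates health checking to an ELB. All names and IDs are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-0123456789abcdef0",        # AMI with the web server baked in
    InstanceType="t2.micro",
    SecurityGroups=["sg-0123456789abcdef0"],
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a"],       # single AZ, as in the exercise
    LoadBalancerNames=["web-elb"],          # classic ELB, placeholder name
    HealthCheckType="ELB",                  # replace instances the ELB marks unhealthy
    HealthCheckGracePeriod=300,
)

# Growing or shrinking the cluster later is a single API call:
autoscaling.set_desired_capacity(AutoScalingGroupName="web-asg", DesiredCapacity=3)
```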
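Scaling on a custom CloudWatch metric, as in the second bookmark, boils down to three pieces: publish the metric, attach a scaling policy to the group, and wire an alarm to that policy. The sketch below continues from the group created above and uses a hypothetical "ActiveConnections" metric; namespace, metric name and threshold are assumptions, not the article’s values.

```python
# Rough sketch: scale out the group above when a custom metric crosses a
# threshold. Namespace, metric name and threshold are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# 1. Publish the custom metric (e.g. from a cron job on each instance).
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{"MetricName": "ActiveConnections", "Value": 120.0, "Unit": "Count"}],
)

# 2. A simple scaling policy: add one instance when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-connections",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# 3. An alarm on the custom metric that fires the policy.
cloudwatch.put_metric_alarm(
    AlarmName="high-active-connections",
    Namespace="MyApp",
    MetricName="ActiveConnections",
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

A matching scale-in policy and a "LessThanThreshold" alarm complete the loop, so the group shrinks again when the metric falls back.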

Bookmarks for 15 Nov 2014 through 26 Nov 2014

These are my links for 15 Nov 2014 through 26 Nov 2014:

  • Charted – Charted is a tool for automatically visualizing data, created by the Product Science team at Medium. Give it the link to a data file and Charted returns a beautiful, shareable chart of the data. We built Charted with a few core principles in mind: Charted does not store any data. It only fetches and visualizes what the link provides. It also refetches the data every 30 minutes, so the chart is always up-to-date. Charted does not transform or manipulate data. It displays only and exactly what it receives. Any necessary calculations or adjustments must already be reflected in the data. Charted is not a formatting tool. It is deliberately sparse in features. Charted focuses on getting from the data to the visualization with the fewest decisions possible. As a result, we simplified Charted to just a few options. Here’s a walk-through of those options. [ via http://onethingwell.org/post/103638738213 ]
  • Simple Amazon Glacier Uploader – Amazon Glacier is a long-term persistent file-storage system for cold data storage. As a GUI wrapper for the Glacier command line tools, the Simple Amazon Glacier Uploader aims to be an upload and download solution that is as durable as your data. SAGU is a single .jar Glacier interface written in Java for cross-platform accessibility. The use of Java ensures that you will have access to your files regardless of your operating system when it is time to retrieve your data. A minimal programmatic upload sketch, independent of SAGU, appears after this list.
  • Snapper, The ultimate Snapshot Tool for Linux – Snapper is a tool for Linux filesystem snapshot management. Apart from the obvious creation and deletion of snapshots, it can compare snapshots and revert differences between snapshots. In simple terms, this allows root and non-root users to view older versions of files and revert changes. The features include: manual snapshot creation; automatic snapshot creation (e.g. with YaST and zypp); automatic snapshot timelines; showing and reverting changes between snapshots; support for btrfs, ext4 and thin-provisioned LVM volumes; Access Control Lists and Extended Attributes; automatic cleanup of old snapshots; a command line interface; a D-Bus interface; and a PAM module to create snapshots during login and logout.
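SAGU itself is a Java GUI, but to give a sense of what an archive upload to Glacier looks like programmatically, here is a minimal boto3 sketch; this is not SAGU’s own code, and the vault name and file path are placeholders.

```python
# Rough sketch: upload a single archive to an existing Glacier vault and
# keep the returned archive ID (needed later to retrieve or delete it).
# Vault name and file path are placeholders.
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

with open("backup-2014-11.tar.gz", "rb") as f:
    resp = glacier.upload_archive(
        vaultName="my-cold-storage",
        accountId="-",                      # "-" means the calling account
        archiveDescription="monthly backup",
        body=f,
    )

# Store this ID somewhere safe; Glacier has no browsable file listing.
print(resp["archiveId"])
```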

Bookmarks for 3 Nov 2014 through 5 Nov 2014

These are my links for 3 Nov 2014 through 5 Nov 2014: