Tag Archives: Products

Node.js on Google App Engine goes beta

We’re excited to announce that the Node.js runtime on Google App Engine is going beta. Node.js makes it easy for developers to build performant web applications and mobile backends with JavaScript. App Engine provides an easy-to-use platform for developers to build, deploy, manage and automatically scale services on Google’s infrastructure. Combining Node.js and App Engine provides developers with a great platform for building web applications and services that need to operate at Google scale.

Getting started



Getting started with Node.js on App Engine is easy. We’ve built a collection of getting started guides, samples, and interactive tutorials that walk you through creating your code, using our APIs and services, and deploying to production.

When running Node.js on App Engine, you can use the tools and databases you already know and love. Use Express, Hapi, Parse-server or any other web server to build your app. Use MongoDB, Redis, or Google Cloud Datastore to store your data. The runtime is flexible enough to manage most applications and services, but if you want more control over the underlying infrastructure, you can easily migrate to Google Container Engine or Google Compute Engine for full flexibility and control.
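For example, deploying an existing Express app is a short, two-step affair. Here’s a minimal sketch, assuming an app with a server.js entry point and an installed Cloud SDK (the app.yaml contents reflect the Managed VMs-based runtime this beta runs on; the project ID is illustrative):

# app.yaml: tell App Engine to use the Node.js runtime on Managed VMs
cat > app.yaml <<'EOF'
runtime: nodejs
vm: true
EOF

# Deploy to App Engine (newer Cloud SDKs use `gcloud app deploy`)
gcloud preview app deploy app.yaml --project my-node-app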

Using the gcloud npm module, you can take advantage of Google’s advanced APIs and services, including Google BigQuery, Google Cloud Pub/Sub, and the Google Cloud Vision API:
// Authenticate the gcloud module with a project ID and service account
// key (both values are placeholders for your own).
var gcloud = require('gcloud')({
  projectId: 'my-project',
  keyFilename: 'keyfile.json'
});

var vision = gcloud.vision();

// Run OCR on a local image and report whether any text was found.
vision.detectText('./image.jpg', function(err, text) {
  if (err) {
    console.error('Vision API error:', err);
    return;
  }
  if (text.length > 0) {
    console.log('We found text on this image...');
  }
});

Services like the Vision API allow you to take advantage of Google’s unique technology in the cloud to bring life to your applications.

Advanced diagnostic tooling


Deploying Node.js applications to Cloud Platform is just the first step. During the lifespan of any application, you’ll need the ability to diagnose issues in production. Google Cloud Debugger lets you inspect the state of Node.js applications at any code location without stopping or slowing them down. You can set breakpoints, and analyze the state of your application in real time.


When you’re ready to address performance, Google Cloud Trace will help you analyze performance by collecting end-to-end latency data for requests to App Engine URIs and additional data for round-trip RPC calls to App Engine services like Datastore and Memcache.


NodeSource partnership


Along with the Cloud Debugger and Cloud Trace tools, we’re also announcing a partnership with NodeSource. NodeSource delivers enterprise-grade tools and software targeting the unique needs of running server-side JavaScript at scale. The N|Solid™ platform extends the capabilities of Node.js to provide increased developer productivity, protection of critical applications and peak application performance. N|Solid and Cloud Platform make a great match for running enterprise Node.js applications. You can learn more about using N|Solid on Cloud Platform from the NodeSource blog.


Commitment to Node.js and open source


At Google, we’re committed to open source. The new core Node.js Docker runtime, the debug module, the trace tools, and the gcloud npm module are all open source.



We’re thrilled to welcome Node.js developers to Cloud Platform, and we’re committed to making further investments to help make you as productive as possible. This is just the start; keep your ear to the ground to catch the next wave of Node.js support on Cloud Platform.

We can’t wait to hear what you think. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #nodejs channel.

- Posted by Justin Beckwith, Product Manager, Google Cloud Platform

Google Compute Engine boosts high availability controls

Today, we’re introducing a feature to Google Compute Engine Autoscaler and Managed Instance Groups that enables region-specific control when scaling compute resources. This is important for customers that need high availability.

Called Regional Instance Groups, this new feature lets you set up infrastructure that automatically spreads VM instances across three zones of the region and keeps the spread equal as you change group size. Autoscaler works regionally across all three zones, adding and removing instances so that the spread remains equal.

Back in September, we announced the general availability of Google Compute Engine Autoscaler and Managed Instance Groups. Since then, we've seen amazing growth in usage and heard lots of feedback. One of the top requests from customers was to simplify configurations allowing for higher availability.

Now, with region-specific control, in the rare event of a bad build, network issues or zone failure, you're better protected against losing your service. Since your VM instances are equally spread across three zones, two-thirds of them will not be impacted. On top of that, when you’re using Autoscaler, it will notice increased traffic in other zones and adjust the number of instances accordingly. After the faulty zone goes back to normal, the load will be rebalanced to use all three zones again.

Setup is easy. If you choose to use regional configuration, simply select Multi-Zone group while creating a Managed Instance Group. At this point, your choice of location becomes a region, and the scope of your Managed Instance Group and Autoscaler becomes regional.
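If you prefer the command line, here’s a hedged sketch of the equivalent setup with gcloud (group, template, and region names are illustrative, and since the feature is in Alpha the exact flags may differ from what ships today):

# Create a managed instance group spread across the zones of a region
gcloud compute instance-groups managed create my-regional-group \
    --region us-central1 \
    --template my-instance-template \
    --size 3

# Attach an autoscaler that operates across all three zones
gcloud compute instance-groups managed set-autoscaling my-regional-group \
    --region us-central1 \
    --max-num-replicas 12 \
    --target-cpu-utilization 0.6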
To use Regional Instance Groups in Alpha and learn more about the service, sign up here.

- Posted by Jerzy Foryciarz, Product Manager, Google Cloud Platform

Google Cloud Dataproc managed Spark and Hadoop service now GA

Today Google Cloud Dataproc, our managed Apache Hadoop and Apache Spark service, says goodbye to its beta label and is now generally available.

When analyzing data, your attention should be focused on insights, not your tools. Often, popular tools to process data, such as Apache Hadoop and Apache Spark, require a careful balancing act between cost, complexity, scale, and utilization. Unfortunately, this means you focus less on what is important (your data) and more on what should require little or no attention (the cluster processing it).

We created our managed Spark and Hadoop cloud service, Google Cloud Dataproc, to rectify the balance, so that using these powerful data tools is as easy as 1-2-3.

Since Cloud Dataproc entered beta last year, customers have taken advantage of its speed, scalability, and simplicity. We’ve seen them create clusters from three to thousands of virtual CPUs, using our Developers Console and Google Cloud SDK, without wasting time waiting for their cluster to be ready.
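As a sketch of that workflow with the Cloud SDK (cluster name, zone, and sizes are illustrative, and the example jar path matches the 1.0 image), creating a cluster, running the bundled SparkPi example, and tearing the cluster down looks like this:

# Create a small cluster (start operations average 90 seconds or less)
gcloud dataproc clusters create my-cluster \
    --zone us-central1-b \
    --num-workers 2

# Submit the SparkPi example that ships on the cluster image
gcloud dataproc jobs submit spark \
    --cluster my-cluster \
    --class org.apache.spark.examples.SparkPi \
    --jars file:///usr/lib/spark/lib/spark-examples.jar -- 1000

# Delete the cluster so you stop paying for it
gcloud dataproc clusters delete my-cluster --quiet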

With integrations to Google BigQuery, Google Cloud Bigtable, and Google Cloud Storage, which provide reliable storage independent from Dataproc clusters, customers have created clusters only when they need them, saving time and money, without losing data. Cloud Dataproc can also be used in conjunction with Google Cloud Dataflow for batch and real-time stream processing.

While in beta, Cloud Dataproc added several important features including property tuning, VM metadata and tagging, and cluster versioning. In general availability, just like in beta, new versions of Cloud Dataproc, with new features, functionalities and software components, will be frequently released. One example is support for custom machine types, available today.
Cloud Dataproc tips the scale of running Spark and Hadoop in your favor by lowering cost and complexity while increasing scalability and productivity.





Cloud Dataproc minimizes two common and major distractions in data processing, cost and complexity, by providing:

  • Low cost. We believe two things: using Spark and Hadoop should not break the bank, and you should pay for what you actually use. As a result, Cloud Dataproc is priced at only 1 cent per virtual CPU in your cluster per hour, on top of the other Cloud Platform resources you use. Moreover, with per-minute billing and a low 10-minute minimum, you pay for what you actually use, not a rounded (up) approximation (see the worked example after this list).
  • Speed. With Cloud Dataproc, clusters do not take 10, 15, or more minutes to start or stop. On average, Cloud Dataproc start and stop operations take 90 seconds or less. This can be a 2-10x improvement over other on-premises and IaaS solutions. As a result, you spend less time waiting on clusters and more time hands-on with data.
  • Management. Cloud Dataproc clusters don't require specialized administrators or software products. Cloud Dataproc clusters are built on proven Cloud Platform services, such as Google Compute Engine, Google Cloud Networking, and Google Cloud Logging to increase availability while eliminating the need for complicated hands-on cluster administration. Moreover, Cloud Dataproc supports cluster versioning, giving you access to modern, tested, and stable versions of Spark and Hadoop.
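To make the pricing concrete with a hypothetical example: a 24-vCPU cluster that runs for 30 minutes incurs a Cloud Dataproc charge of 24 vCPUs × $0.01 per hour × 0.5 hours = $0.12, on top of the Compute Engine resources used for that half hour.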

Cloud Dataproc makes two often problematic needs in data processing, scale and productivity, easy by being:

  • Modern. Cloud Dataproc is frequently updated with new image versions to support new software releases from the Spark and Hadoop ecosystem. This provides access to the latest stable releases while also ensuring backward compatibility. For general availability we're releasing image version 1.0.0 with support for Hadoop 2.7.2, Spark 1.6.0, Hive 1.2.1, and Pig 0.15.0. Support for other components, such as Apache Zeppelin (incubating), is provided in our GitHub repository for initialization actions.
  • Integrated. Cloud Dataproc has built-in integrations with other Cloud Platform services, such as BigQuery, Cloud Storage, Cloud Bigtable, and Google Cloud Logging so you have more than just a Spark or Hadoop cluster — you have a complete data platform. You can also use Cloud Dataproc initialization actions to extend the functionality of your clusters.

Our growing partner ecosystem offers certified support from several third-party tools and service partners. We're excited to collaborate with technology partners including Arimo, Attunity, Looker, WANdisco, and Zoomdata to make working in Cloud Dataproc even easier. Service providers like Moser, Pythian, and Tectonic are on standby to provide expert support during your Cloud Dataproc implementations. Reach out to any of our partners if you need help getting up and running.

To learn more about Cloud Dataproc, visit the Cloud Dataproc site, follow our getting started guide, take a look at a code example of how you can predict keno outcomes with Cloud Dataproc, or submit your questions and feedback on Stack Overflow.

- Posted by James Malone, Product Manager

BigQuery cost controls now let you set a daily maximum for query costs

Today we’re giving you better cost controls in BigQuery to help you manage your spend, along with improvements to the streaming API, a performance diagnostic tool, and a new way to capture detailed usage logs.

BigQuery is a Google-powered supercomputer that lets you derive meaningful analytics using SQL while paying only for what you use. This makes BigQuery an analytics data warehouse that’s both powerful and flexible. Those accustomed to a traditional fixed-size cluster – where cost is fixed, performance degrades with increased load, and scaling is complex – may find granular cost controls helpful in budgeting your BigQuery usage.

In addition, we’re announcing availability of BigQuery access logs in Audit Logs Beta, improvements to the Streaming API, and a number of UI enhancements. We’re also launching Query Explain to provide insight on how BigQuery executes your queries, how to optimize your queries and how to troubleshoot them.

Custom Quotas: No fear of surprise when the bill comes


Custom quotas allow you to set daily quotas that will help prevent runaway query costs. There are two ways you can set the quota:

  • Project wide: an entire BigQuery project cannot exceed the daily custom quota.
  • Per user: each individual user within a BigQuery project is subject to the daily custom quota.


Query Explain: understand and optimize your queries

Query Explain shows, stage by stage, how BigQuery executes your queries. You can now see if your queries are write, read or compute heavy, and where any performance bottlenecks might be. You can use Query Explain to optimize queries, troubleshoot errors or understand if BigQuery Slots might benefit you.

In the BigQuery Web UI, use the “Explanation” button next to “Results” to see this information.
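If you’d rather work from the command line, the same execution details land in the job metadata. Here’s a hedged sketch with the bq CLI (the query is illustrative, and the field location is an assumption based on the BigQuery jobs API):

# Run a query asynchronously; bq prints the job ID
bq query --nosync 'SELECT COUNT(*) FROM [publicdata:samples.wikipedia]'

# Inspect the per-stage plan under statistics.query.queryPlan
bq show --format=prettyjson -j <job_id>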

Improvements to the Streaming API

Data is most valuable when it’s fresh, but loading data into an analytics data warehouse usually takes time. BigQuery is unique among warehouses in that it can easily ingest a stream of up to 100,000 rows per second per table, available for immediate analysis. Some customers even stream 4.5 million rows per second by sharding ingest across tables. Today we’re bringing several improvements to BigQuery Streaming API.

  • Streaming API in EU locations. It’s not just for the US anymore: you may now use the Streaming API to load data into your BigQuery datasets residing in the EU.
  • Template tables are a new way to manage related tables used for streaming (see the API sketch after this list). They allow an existing table to serve as a template for a streaming insert request. The generated table will have the same schema, and be created in the same dataset and project as the template table. Better yet, when the schema of the template table is updated, the schema of the tables generated from this template will also be updated.
  • No more “warm-up” delay. After streaming the first row into a table, we no longer require a warm-up period of a couple of minutes before the table becomes available for analysis. Your data is available immediately after the first insertion.
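As a sketch of how template tables work at the API level (project, dataset, table, and row fields are all illustrative; the tabledata.insertAll endpoint and its templateSuffix field are the relevant pieces):

# Stream a row into my_events_20160122, creating that table from the
# template table my_events if it doesn't exist yet
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"templateSuffix": "_20160122",
       "rows": [{"json": {"user_id": "1234", "action": "view"}}]}' \
  "https://www.googleapis.com/bigquery/v2/projects/my-project/datasets/my_dataset/tables/my_events/insertAll"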

Create a paper trail of queries with Audit Logs Beta


BigQuery Audit Logs form an audit trail of every query, every job and every action taken in your project, helping you analyze BigQuery usage and access at the project level, or down to individual users or jobs. Please note that Audit Logs is currently in Beta.

Audit Logs can be filtered in Cloud Logging, or exported back to BigQuery with one click, allowing you to analyze your usage and spend in real time using SQL.

With today’s announcements, BigQuery gives you more control and visibility. BigQuery is already very easy to use, and with recently launched products like Datalab (a data science notebook integrated with BigQuery), just about anyone in your organization can become a big data expert. If you’re new to BigQuery, take a look at the Quickstart Guide, and the first 1TB of data processed per month is on us. To fully understand the power of BigQuery, check out the documentation and feel free to ask your questions using the “google-bigquery” tag on Stack Overflow.

-Posted by Tino Tereshko, Technical Program Manager

The next generation of managed MySQL offerings on Cloud SQL

Google Cloud SQL is an easy-to-use service that delivers fully managed MySQL databases. It lets you hand off to Google the mundane, but necessary and often time-consuming, tasks — like applying patches and updates, managing backups and configuring replication — so you can put your focus on building great applications. And because we use vanilla MySQL, it’s easy to connect from just about any application, anywhere.

The first generation of Cloud SQL was launched in October 2011 and has helped thousands of developers and companies build applications. As Compute Engine and Persistent Disk have made great advancements since their launch, the second generation of Cloud SQL builds on their innovation to deliver an even better, more performant MySQL solution at a better price/performance ratio. We’re excited to announce the beta availability of the second generation of Cloud SQL — a new and improved Cloud SQL for Google Cloud Platform.

Speed, more speed and scalability


The two principal goals of the second generation of Cloud SQL are: better performance and scalability per dollar. The performance graph below speaks for itself. Second generation Cloud SQL is more than seven times faster than the first generation of Cloud SQL. And it scales to 10TB of data, 15,000 IOPS and 104GB of RAM per instance — well beyond the first generation.

Source: Google internal testing



Yoga for your database (Cloud SQL is flexible)


Cloud users appreciate flexibility. And while flexibility is not a word frequently associated with relational databases, with Cloud SQL we’ve changed that. Flexibility means easily scaling a database up and down. For example, a database that’s growing in size and number of queries per day might require more CPU cores and RAM. A Cloud SQL instance can be changed to allocate additional resources to the database with minimal downtime. Scaling down is just as easy.
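For instance, scaling an instance up is a single patch operation. A minimal sketch with the gcloud CLI (the instance and tier names are illustrative, and during the beta these commands live under the gcloud beta group):

# Move an instance to a larger tier; Cloud SQL applies the change
# with minimal downtime
gcloud beta sql instances patch my-instance --tier db-n1-standard-4

Scaling down is the same command with a smaller tier.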

Flexibility means easily connecting to your database from any client with Internet access, including Compute Engine, Managed VMs, Container Engine and your workstation. Connectivity from App Engine is only offered for Cloud SQL First Generation right now, but that will change soon. Because we embrace open standards by supporting MySQL Wire Protocol, the standard connection protocol for MySQL databases, you can access your managed Cloud SQL database from just about any application, running anywhere. For example:

  • Use all your favorite tools, such as MySQL Workbench, Toad and the MySQL command-line tool to manage your Cloud SQL instances
  • Get low latency connections from applications running on Compute Engine and Managed VMs
  • Use standard drivers, such as Connector/J, Connector/ODBC, and Connector/NET, making it exceptionally easy to access Cloud SQL from most applications


Flexibility also means easily starting and stopping databases. Many databases must run 24x7, but some are used only occasionally for brief or infrequent tasks. Cloud SQL can be managed using the Cloud Console (our browser-based administration console), the command line (part of our Cloud SDK) or a RESTful API. The command line interface (CLI) and API make Cloud SQL administration scriptable and help users maximize their budgets by running their databases only when they’re needed.
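A hedged sketch of that scripted start/stop pattern using the activation policy (the instance name is illustrative):

# Stop a database that's only needed occasionally...
gcloud beta sql instances patch my-instance --activation-policy NEVER

# ...and start it again just before the batch job runs
gcloud beta sql instances patch my-instance --activation-policy ALWAYS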

The graph below shows the number of active Cloud SQL database instances running over time. Notice the clusters of five sawtooth-like ridges and then a drop for two additional ridges. These clusters show an increased number of databases running during business hours on Monday through Friday each week. Database activity, measured by the number of active databases, falls outside of business hours, especially on the weekends. This repeated rise and fall of database instances is a great example of flexibility. Its magnitude is helped significantly by first generation Cloud SQL’s ability to automatically sleep when it is not being accessed. While this is not a design goal of the second generation of Cloud SQL, users can quickly create and delete, or start and stop databases that only need to run on occasion. Cloud SQL users get the most from their budget because of the service’s flexibility.



What is a "managed" MySQL database?


Cloud SQL delivers fully managed MySQL databases, but what does that really mean? It means Google will apply patches and updates to MySQL, manage your backups, configure replication and provide automatic failover for High Availability (HA) in the event of a zone outage. It also means that you get Google’s operational expertise for your MySQL database. Google’s team of MySQL experts make configuring replication and automatic failover a breeze, so your data is protected and available. They also patch your database when important security updates are delivered. You choose when (day and time of week) the updates should be applied, and Google’s team takes care of the rest. This, combined with Cloud SQL’s automatic encryption of database tables, temporary files and backups, ensures your data is secure.

High Availability, replication and backups are configurable, so you can choose what's appropriate for each of your database instances. For development instances, you can choose to opt out of replication and automatic failover, while your production instances are fully protected. Even though we manage the database, you’re still in control.

Pricing: commitment issues


Getting the best Cloud SQL price doesn’t require you to commit to a one- or three-year contract. To get the best Cloud SQL price, just run your database 24x7 for the month. That’s it. If you use a database infrequently, you’ll be charged by the minute at the standard price. But there’s no need to decide upfront and Google helps find savings for you. No commitment, no strings attached. As a bonus, everyone gets the 100% sustained use discount during Beta, regardless of usage.

Ready to get started?


If you haven’t signed up for Google Cloud Platform, do so now and get a $300 credit to test drive Cloud SQL. The second generation Cloud SQL has inexpensive micro instances for small applications, and easily scales up and out to serve performance-intensive applications.

You can also take advantage of our growing partner ecosystem and tools to make working in Cloud SQL even easier. We’ve partnered with Talend, Attunity, Dbvisit and xPlenty to help you streamline the process of loading your data into Cloud SQL and with analytics products Tableau, Looker, YellowFin and Bime so you can easily create rich visualizations for meaningful insights. We’ve also integrated with ScaleArc and WebYog to help you monitor and manage your database and have partnered with service providers like Pythian, so you can have expert support during your Cloud SQL implementations. Reach out to any of our partners if you need help getting up and running.

Bottom Line


Cloud SQL Second Generation makes what customers love about Cloud SQL First Generation faster and more scalable, at a better price per performance.



- Posted by Brett Hesterberg, Product Manager, Google Cloud Platform

Processing logs at scale using Cloud Dataflow

Logs generated by applications and services can provide an immense amount of information about how your deployment is running and the experiences your users are having as they interact with the products and services. But as deployments grow more complex, gleaning insights from this data becomes more challenging. Logs come from an increasing number of sources, so they can be hard to collate and query for useful information. And building, operating and maintaining your own infrastructure to analyze log data at scale requires extensive expertise in running distributed systems and storage. Today, we’re introducing a new solution paper and reference implementation that will show how you can process logs from multiple sources and extract meaningful information by using Google Cloud Platform and Google Cloud Dataflow.

Log processing typically involves some combination of the following activities:

  • Configuring applications and services
  • Collecting and capturing log files
  • Storing and managing log data
  • Processing and extracting data
  • Persisting insights

Each of those components has its own scaling and management challenges, often using different approaches at different times. These sorts of challenges can slow down the generation of meaningful, actionable information from your log data.

Cloud Platform provides a number of services that can help you to address these challenges. You can use Cloud Logging to collect logs from applications and services, and then store them in Google Cloud Storage buckets or stream them to Pub/Sub topics. Dataflow can read from Cloud Storage or Pub/Sub (and many more), process log data, extract and transform metadata and compute aggregations. You can persist the output from Dataflow in BigQuery, where it can be analyzed or reviewed anytime. These mechanisms are offered as managed services—meaning they can scale when needed. That also means that you don't need to worry about provisioning resources up front.
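As a sketch of the export step (sink, bucket, and topic names are illustrative, and the commands follow current gcloud syntax, which may differ from the setup the solution paper describes):

# Route logs to Cloud Storage for batch processing
gcloud logging sinks create my-gcs-sink \
    storage.googleapis.com/my-log-bucket

# Route the same logs to Pub/Sub for streaming into Dataflow
gcloud logging sinks create my-pubsub-sink \
    pubsub.googleapis.com/projects/my-project/topics/my-log-topic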

The solution paper and reference implementation describe how you can use Dataflow to process log data from multiple sources and persist findings directly in BigQuery. You’ll learn how to configure Cloud Logging to collect logs from applications running in Container Engine, how to export those logs to Cloud Storage, and how to execute the Dataflow processing job. In addition, the solution shows you how to reconfigure Cloud Logging to use Pub/Sub to stream data directly to Dataflow, so you can process logs in real-time.


Check out the Processing Logs at Scale using Cloud Dataflow solution to learn how to combine logging, storage, processing and persistence into a scalable log processing approach. Then take a look at the reference implementation tutorial on Github to deploy a complete end-to-end working example. Feedback is welcome and appreciated; comment here, submit a pull request, create an issue, or find me on Twitter @crcsmnky and let me know how I can help.

- Posted by Sandeep Parikh, Google Solutions Architect

Bringing you more flexibility and better Cloud Networking performance, GA of HTTPS Load Balancing and Akamai joins CDN Interconnect

Google’s global network is a key piece of our foundation, enabling all of Google Cloud Platform’s services. Our customers have reiterated to us the critical importance of business continuity and service quality for their key processes, especially around network performance given today’s media-rich web and mobile applications.

We’re making several important announcements today: the general availability of HTTPS Load Balancing, and sustained performance gains from our software-defined network virtualization stack Andromeda, from which customers gain immediate benefits. We’re also introducing Cloud Router and Subnetworks, which together enable fine-grained network management and control demanded by our leading enterprise customers.

In line with our belief that speed is a feature, we’re also extremely pleased to welcome Akamai into our CDN Interconnect program. Origin traffic from Google egressing out to select Akamai CDN locations will take a private route on Google’s edge network, helping to reduce latency and egress costs for our joint customers. Akamai’s peering with Google at a growing number of points-of-presence across Google’s extensive global networking footprint enables us to deliver to our customers the responsiveness they expect from Google’s services.

General Availability of HTTPS Load Balancing. Google’s private fiber network connects our data centers where your applications run to one of more than 70 global network points of presence. HTTPS Load Balancing deployed at these key points across the globe dramatically reduces latency and increases availability for your customers, which is critically important to achieving the responsiveness users expect from today’s most demanding web and mobile apps. For full details, see the documentation.
Figure 1: Our global load balancing locations





Andromeda. Over the past year, we’ve written about our innovations made in Google’s data centers and networking to serve world-class services like Search, YouTube, Maps and Drive. The Cloud Platform team ensures that the benefits from these gains are passed onto customers with no additional effort on their part. Andromeda, Google’s software-defined network virtualization stack, is behind many of these gains, especially around performance. The chart below shows network throughput gains in Gbits/sec: in a little over a year, throughput has doubled for both single-stream and 200-stream benchmarks.




Subnetworks. Subnetworks allow you to segment IP space into regional prefixes. As a result, you gain fine-grained control over the full logical range in your private IP space, avoiding the need to create multiple networks, and providing full flexibility to create your desired topology.

Additionally, if you’re a VPN customer, you’ll see an immediate enhancement, as subnetworks allow you to configure your VPN gateway with different destination IP ranges per region in the same network. In addition to providing more control over VPN routes, regional targeting affords lower latency compared to a single IP range spanning all regions. Get started with subnetworks here.
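Here’s a minimal sketch of carving up a network with gcloud (network names, regions, and ranges are illustrative; current flag syntax may differ slightly from what’s available at launch):

# Create a custom-mode network, then define a regional prefix per region
gcloud compute networks create my-net --subnet-mode custom
gcloud compute networks subnets create my-subnet-us \
    --network my-net --region us-central1 --range 10.128.0.0/20
gcloud compute networks subnets create my-subnet-eu \
    --network my-net --region europe-west1 --range 10.132.0.0/20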

Cloud Router. With Cloud Router, your enterprise-grade VPN to Google gets dynamic routing. Network topology changes on either end propagate automatically using BGP, eliminating the need to configure static routes or restart VPN tunnels. You get seamless connectivity with no traffic disruption. Learn more here.
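A hedged one-liner sketch (router name, network, region, and ASN are illustrative):

# Create a router that exchanges routes with your on-premises
# gateway over BGP
gcloud compute routers create my-router \
    --network my-net --region us-central1 --asn 64512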

Akamai and CDN Interconnect. Cloud Platform traffic egressing out to select Akamai CDN locations travels over direct peering links and is priced based on Google Cloud Interconnect rates. More information on using Akamai as a CDN Interconnect provider can be found here.

We’ll continue to invest and innovate in our networking capabilities, and pass the benefits of Google’s major networking enhancements to Cloud Platform customers. We always appreciate feedback and would love to learn how we can support your mission-critical workloads. Contact the Cloud Networking team to get started!

Posted by Morgan Dollard, Cloud Networking Product Management Lead

Enhancements to Container Engine and Container Registry

DevOps teams are adopting containers to make their development and deployment simpler. Google Cloud Platform has a complete suite of container offerings including Google Container Engine and Google Container Registry. Today we’re introducing some enhancements to them both, along with updates to our ecosystem to give you more options in managing container images and running services.


Container Registry


Docker Registry V2 API support. You can now push and pull Docker images to Container Registry using the V2 API. This allows you to have content addressable references, parallel layer downloads and digest-based pulls. Docker versions 1.6 and above support the V2 API; we recommend upgrading to the latest version. If you’re using a mix of Docker client versions, see the newest Docker documentation to check compatibility.
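As a quick sketch of the push/pull flow (project and image names are illustrative; at the time of this post pushes go through the gcloud docker wrapper, while newer SDKs let you authenticate Docker directly):

# Tag a local image into your project's registry namespace and push it
docker tag my-image gcr.io/my-project/my-image
gcloud docker push gcr.io/my-project/my-image

# Pull it back from any Docker 1.6+ client that's authenticated
gcloud docker pull gcr.io/my-project/my-image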


Performance enhancements. Based on internal performance testing, this update pulls images 40% faster than the previous version.

Advanced Authentication. If you use a continuous delivery system (and we hope you do), it’s even easier to make it work with Container Registry, see the auth documentation page for details and setup. Learn how it works with popular CI/CD systems including Circle, Codeship, Drone.io, Jenkins, Shippable and Wercker.

TwistLock Integration. TwistLock provides rule violation detection and policy enforcement for containers in a registry or at runtime. They recently completed a Beta with 15 customers with positive results. Using TwistLock with GCR and GKE is really simple. See their blog for more details.



Container Engine


Today, on the heels of the Kubernetes 1.1 release, we’re bringing the latest from Kubernetes to Container Engine users. The performance improvements in this release ensure you can run Google Container Engine in high-scale environments. Additional highlights of this release include:




  • Horizontal pod autoscaling helps smooth out the uneven experience users see when workloads go through spiky periods of utilization: your pods can now scale up and down based on CPU usage (see the kubectl sketch after this list).

  • An HTTP load balancer that routes traffic to different Kubernetes services based on the incoming HTTP request, such as using different services for sub-URLs.

  • A re-architected networking system that uses native iptables and reduces tail latency by up to 80%, virtually eliminating CPU overhead and improving reliability. Available in Beta, you can manually enable this in GKE by running the following shell commands:
             # Annotate each node and restart kube-proxy so it picks up
             # the new iptables proxy mode
             for node in $(kubectl get nodes -o name | cut -f2 -d/); do
                   kubectl annotate node $node \
                      net.beta.kubernetes.io/proxy-mode=iptables;
                   gcloud compute ssh --zone=us-central1-b $node \
                      --command="sudo /etc/init.d/kube-proxy restart";
             done
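For the horizontal pod autoscaling item above, a minimal sketch with kubectl (the controller name and thresholds are illustrative):

# Keep between 2 and 10 replicas, targeting 80% CPU utilization
kubectl autoscale rc my-app --min=2 --max=10 --cpu-percent=80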

These and other updates in the 1.1 release will be rolled out to all Container Engine users over the next week. Send us your feedback and connect with the community on the google-containers mailing list or on the Kubernetes google-containers Slack channel.

If you’re new to the Google Cloud Platform, getting started is easy. Sign up for your free trial here.

- Posted by Kit Merker, Product Manager, Google Cloud Platform

Get away with Google Flights

(Cross-posted on the Google Official Blog)

While winds howl, frost bites and snow falls, people dream of getting away from it all. Every year around this time, we see an uptick in searches for spring and summer travel from people who have had it up to here with winter. And in the middle of one of the coldest, snowiest, iciest winters on record in the U.S., you better believe people are gearing up to grab their suntan lotion and their carry-ons, and hop on a plane. Enter Google Flights, which makes it easy to plan the trip that’s right for you. Here are a few tips to help you book this year’s dream vacation.

Flexibility is key when finding great deals
There’s a travel myth that you can always find the best deals on Tuesday. But actually, you can find good deals any day of the week—especially if you’re flexible with your travel dates. Though it’s sometimes hard to pull the trigger because you’re afraid the price will drop tomorrow (or next Tuesday, maybe?), our experience shows it’s usually best to book right away.

Regardless of which day you sit down to plan your trip, you can use the calendar in Google Flights to scroll through months and see the lowest fare highlighted for each day. If you’re planning even further out, use the lowest fares graph beneath the calendar to see how prices may fluctuate based on the season, holidays or other events. You can also set preferences (such as direct flights only) and our calendar will adjust to show you just those flights and fares that fit the bill. Finally, if you can save more by using a nearby airport or flying on a different day, we’ll show you a tip at the top of your results.
Not sure about your destination? No problem
Sometimes, you know exactly where your destination needs to be—say, when you’re taking a business trip, or headed to a wedding or family reunion. But there are times when all you know is that you want to go somewhere. Maybe you want to go somewhere with a beach, but don’t care if it’s in Greece or the Caribbean. Or you want to visit Southeast Asia, but aren’t sure which countries to visit.
Our research shows more than half of searchers don’t know where they’re going to travel when they sit down to plan. With Google Flights, you can search for regions or whole countries, like “Flights to Europe” and “Flights to Mexico." Or, expand the map to scan the entire world and see accurate prices for all the different cities you can fly to, along with filters for your flight preferences. If you’re in a particularly adventurous—or lazy—mood, select the “I’m Feeling Lucky” button on the map and we’ll suggest ideas for where to go based on popular destinations and your past search history.

But… cheaper isn’t always better
We all love a good deal, but when it comes to choosing flights, cheaper doesn’t always win—and no wonder, when sometimes that means two connections instead of none. On Google Flights, the vast majority of people choose one of the Best flights—considered to be flights that are the best combination of price and convenience. Try it out next time you’re looking for something that fits your schedule, not just your budget.
So once you’ve warmed your hands on that cup of hot cocoa, put them to work on your keyboard or phone. Google Flights is ready to find the best destinations, dates, fares and flights for you to get away from it all.

Source: Google Travel


Show off your hotel with new photo tools in Google My Business

Last June we introduced you to Google My Business — giving you an easy way to improve your hotel's presence on Google and connect with customers, whether they're looking for you on Google Search, Maps or Google+.  

Starting today, you can tell us which image you’d like to appear when customers search for your hotel on Google. You can even add photos specifically for food and drink, rooms and common areas. It's easy! Just log in to Google My Business on the web or in the Android or iOS apps, and visit the Photos section.

If you haven't already done so, visit google.com/mybusiness to verify your hotel and get started. For more information, you can read the full blog post on the Google and Your Business blog.

Posted by Ashwath Rajan, Product Manager, Travel

Source: Google Travel