
New ways to manage sensitive data with the Data Loss Prevention API



If your organization has sensitive and regulated data, you know how much of a challenge it can be to keep it secure and private. The Data Loss Prevention (DLP) API, which went beta in March, can help you quickly find and protect over 50 types of sensitive data such as credit card numbers, names and national ID numbers. And today, we’re announcing several new ways to help protect sensitive data with the DLP API, including redaction, masking and tokenization.

These new data de-identification capabilities help you work with sensitive information while reducing the risk of that data being inadvertently revealed. If, like many enterprises, you follow the principle of least privilege or need-to-know access to data (only use or expose the minimum data required for an approved business process), the DLP API can help you enforce these principles in production applications and data workflows. And because it’s an API, the service can be pointed at virtually any data source or storage system. The DLP API offers native support and scale for scanning large datasets in Google Cloud Storage, Datastore and BigQuery.
Google Cloud DLP API enables our security solutions to scan and classify documents and images from multiple cloud data stores and email sources. This allows us to offer our customers critical security features, such as classification and redaction, which are important for managing data and mitigating risk. Google’s intelligent DLP service enables us to differentiate our offerings and grow our business by delivering high quality results to our customers.  
- Sateesh Narahari, VP of Products, Managed Methods
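For example, the native Cloud Storage scanning mentioned above can be driven by creating an inspection job over a bucket. The following is a minimal, hypothetical sketch against the DLP REST surface; the project ID, bucket path and infoTypes are placeholders, and field names may differ between the beta version described in this post and later versions.

# Hypothetical sketch: scan a Cloud Storage bucket for sensitive data with a DLP inspection job.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/dlpJobs" \
  -d '{
    "inspectJob": {
      "storageConfig": {
        "cloudStorageOptions": { "fileSet": { "url": "gs://my-bucket/**" } }
      },
      "inspectConfig": {
        "infoTypes": [ { "name": "CREDIT_CARD_NUMBER" }, { "name": "US_SOCIAL_SECURITY_NUMBER" } ]
      }
    }
  }'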

New de-identification tools in DLP API

De-identifying data removes identifying information from a dataset, making it more difficult to associate the remaining data with an individual and reducing the risk of exposure.
With the DLP API, you can classify and mask sensitive elements in both structured data and unstructured data.


The DLP API now supports a variety of new data transformation options:

Redaction and suppression 
Redaction and suppression remove entire values or entire records from a dataset. For example, if a support agent working in a customer support UI doesn’t need to see identifying details to troubleshoot the problem, you might decide to redact those values. Or, if you’re analyzing large population trends, you may decide to suppress records that contain unique demographics or rare attributes, since these distinguishing characteristics may pose a greater risk.
The DLP API identifies and redacts a name, social security number, telephone number and email address
Partial masking 
Partial masking obscures part of a sensitive attribute; for example, the last 7 digits of a US telephone number. In this example, a 10-digit phone number retains only the area code.
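As an illustrative sketch (not the exact request from this announcement), a de-identification call that masks the last seven digits of detected phone numbers could look roughly like this; the project ID and sample text are placeholders, and field names may vary across DLP API versions.

# Hypothetical sketch: mask the last 7 digits of detected phone numbers, skipping hyphens.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:deidentify" \
  -d '{
    "item": { "value": "My number is 206-555-0123." },
    "inspectConfig": { "infoTypes": [ { "name": "PHONE_NUMBER" } ] },
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [ {
          "infoTypes": [ { "name": "PHONE_NUMBER" } ],
          "primitiveTransformation": {
            "characterMaskConfig": {
              "maskingCharacter": "#",
              "numberToMask": 7,
              "reverseOrder": true,
              "charactersToIgnore": [ { "charactersToSkip": "-" } ]
            }
          }
        } ]
      }
    }
  }'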
Tokenization or secure hashing
Tokenization, also called secure hashing, is an algorithmic transformation that replaces a direct identifier with a pseudonym or token. This can be very useful in cases where you need to retain a record identifier or join data but don’t want to reveal the sensitive underlying elements. Tokens are key-based and can be configured to be reversible (using the same key) or non-reversible (by not retaining the key).

The DLP API supports the following token types:
  • Format-Preserving Encryption - a token of the same length and character set.




  • Secure, key-based hashes - a token that's a 32-byte hexadecimal string generated using a data encryption key.
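To make the key-based hash option above concrete, here is a hypothetical deidentify request that replaces detected email addresses with a keyed hash token; the transient key name and sample text are placeholders, and a KMS-wrapped key would typically be preferable in production.

# Hypothetical sketch: replace email addresses with a key-based hash token.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:deidentify" \
  -d '{
    "item": { "value": "Contact jane.doe@example.com for details." },
    "inspectConfig": { "infoTypes": [ { "name": "EMAIL_ADDRESS" } ] },
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [ {
          "infoTypes": [ { "name": "EMAIL_ADDRESS" } ],
          "primitiveTransformation": {
            "cryptoHashConfig": {
              "cryptoKey": { "transient": { "name": "example-transient-key" } }
            }
          }
        } ]
      }
    }
  }'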



Dynamic data masking
The DLP API can apply various de-identification and masking techniques in real time, which is sometimes referred to as “Dynamic Data Masking” (DDM). This can be useful if you don’t want to alter your underlying data, but want to mask it when viewed by certain employees or users. For example, you could mask data when it’s presented in a UI, but require special privileges or generate additional audit logs if someone needs to view the underlying personally identifiable information (PII). This way, users aren’t exposed to the identifying data by default, but only when business needs dictate.
With the DLP API, you can prevent users from seeing sensitive data in real time

    Bucketing, K-anonymity and L-Diversity 
    The DLP API offers even more methods that can help you transform and better understand your data. To learn more about bucketing, K-anonymity, and L-Diversity techniques, check out the docs and how-to guides.


    Get started with the DLP API

With these new transformation capabilities, the DLP API can help you classify and protect sensitive data no matter where it’s stored. As with any tool designed to assist with data discovery and classification, there's no certainty that it will be 100% effective in meeting your business needs or obligations. To get started with the DLP API today, take a look at the quickstart guides.

    Smarter attribution for everyone

    In May, we announced Google Attribution, a new free product to help marketers measure the impact of their marketing across devices and across channels. Advertisers participating in our early tests are seeing great results. Starting today, we’re expanding the Attribution beta to hundreds of advertisers.

    We built Google Attribution to bring smarter performance measurement to all advertisers, and to solve the common problems with other attribution solutions.

    Google Attribution is:
• Easy to set up and use: While some attribution solutions can take months to set up, Google Attribution can access the marketing data you need from tools like AdWords and Google Analytics with just a few clicks.
    • Cross-device: Today’s marketers need measurement tools that don't lose track of the customer journey when people switch between devices. Google Attribution uses Google’s device graph to measure the cross-device customer journey and deliver insights into cross-device behavior, all while protecting individual user privacy.
    • Cross-channel: With your marketing spread out across so many channels (like search, display, and email), it can be difficult to determine how each channel is working and which ones are truly driving sales. Google Attribution brings together data across channels so you can get a more comprehensive view of your performance.
    • Easy to take action: Attribution insights are only valuable if you can use them to improve your marketing. Integrations with tools like AdWords make it easy to update your bids or move budget between channels based on the new, more accurate performance data.


    Results from Google Attribution beta customers



Last April, we shared that for AdWords advertisers, data-driven attribution typically delivers more conversions at a similar cost-per-conversion compared to last-click attribution. This shows that data-driven attribution is a better way to measure and optimize the performance of search and shopping ads.

    Today we’re pleased to share that early results from Google Attribution beta customers show that data-driven attribution helps marketers improve their performance across channels.

    Hello Fresh, a meal delivery service, grew conversions by 10% after adopting Google Attribution. By using data-driven attribution to measure across channels like search, display, and email, Google Attribution gives Hello Fresh a more accurate measurement of the number of conversions each channel is driving. And because Google Attribution is integrated with AdWords, Hello Fresh can easily use this more accurate conversion data to optimize their bidding.

    "With Google Attribution, we have been able to automatically integrate cross-channel bidding throughout our AdWords search campaigns. This has resulted in a seamless change in optimization mindset as we are now able to see keyword and query performance more holistically rather than inadvertently focusing on only last-click events.
    - Karl Villanueva Head of Paid Search & Display, HelloFresh

    Pixers, an online marketplace, is also seeing positive results including increased conversions. Google Attribution allows Pixers to more confidently evaluate the performance of their AdWords campaigns and adopt new features that improve performance.

    "By using Google Attribution data we have finally eliminated guesswork from evaluating the performance of campaigns we're running, including shopping and re-marketing. The integration with AdWords also enabled us to gradually roll-out smart bidding strategies across increasing number of campaigns. The results have significantly exceeded expectations as we managed to cut the CPA while obtaining larger conversion volumes."
    - Arkadiusz Kuna, SEM & Remarketing Manager at Pixers

Google Attribution can also help brands get a better understanding of their customers’ path to purchase. eDreams ODIGEO, an online travel company, knows that people don’t usually book flights or hotels after a single interaction with their brand. It often requires multiple interactions, with each touchpoint having a different impact.

“Some channels open the customer journey and bring new customers, whereas other channels are finishers and contribute to close the sales. Google Attribution is helping us to understand the added value of each interaction. It enhances our ability to have a holistic view of how different marketing activities contribute to success.”
    - Manuel Bruscas, Director of Marketing Analytics & Insights, eDreams ODIGEO


    Next steps



    In the coming months we’ll invite more advertisers to use Google Attribution. If you’re interested in receiving a notification when the product is available for you, please sign up here.

    Don’t forget, even before adopting Google Attribution, you can get started with smarter measurement for your AdWords campaigns. With attribution in AdWords you can move from last-click to a better attribution model, like data-driven attribution, that allows you to more accurately measure and optimize search and shopping ads.

    App Engine firewall now generally available



    Securing applications in the cloud is critical for a variety of reasons: restricting access to trusted sources, protecting user data and limiting your application's bandwidth usage in the face of a DoS attack. The App Engine firewall lets you control access to your App Engine app through a set of rules, and is now generally available, ready to secure access to your production applications. Simply set up an application, provide a list of IP ranges to deny or allow, and App Engine does the rest.

    With this release, you can now use the IPv4 and IPv6 address filtering capability in the App Engine firewall to enforce more comprehensive security policies rather than requiring developers to modify their application.

    We have received lots of great feedback from our customers and partners about the security provided by the App Engine firewall, including Reblaze and Cloudflare:
    "Thanks to the newly released App Engine firewall, Reblaze can now prevent even the most resourceful hacker from bypassing our gateways and accessing our customers’ App Engine applications directly. This new feature enables our customers to take advantage of Reblaze's comprehensive web security (including DDoS protection, WAF/IPS, bot mitigation, full remote management, etc.) on App Engine." 
- Tzury Bar Yochay, CTO of Reblaze Technologies
    "With the App Engine firewall, our customers can lock down their application to only accept traffic from Cloudflare IPs. Because Cloudflare uses a reverse-proxy server, this integration further prevents direct access to an application’s origin servers and allows Cloudflare to filter and block malicious activity." 
- Travis Perkins, Head of Alliances at Cloudflare

    Simple and effective 


    Getting started with the App Engine firewall is easy. You can set up rules in the Google Cloud Platform Console, via REST requests in the App Engine Admin API, or with our gcloud CLI.

    For example, let's say you have an application that's being attacked by several addresses on a rogue network. First, get the IP addresses from your application’s request logs. Then, add a deny rule for the rogue network to the firewall. Make sure the default rule is set to allow so that other users can still access the application.
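The same setup can be scripted with the gcloud CLI. Here is a minimal sketch, assuming the placeholder priority and the example network below (your gcloud version's flag names may vary slightly):

# Deny the rogue network (lower priority values are evaluated first).
gcloud app firewall-rules create 100 \
    --action=deny \
    --source-range="192.0.2.0/24" \
    --description="Block rogue network"

# Keep the default rule set to allow so other users can still reach the app.
gcloud app firewall-rules update default --action=allow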

And that's it! No need to modify and redeploy the application; access is now restricted to your whitelisted IP addresses. Requests from IP addresses that match a deny rule receive an HTTP 403 response before they reach your app, which means that your app won't spin up additional instances or be charged for handling them.

    Verify rules for any IP


    Some applications may have complex rulesets, making it hard to determine whether an IP will be allowed or denied. In the Cloud Console, the Test IP tab allows you to enter an IP and see if your firewall will allow or deny the request.

    Here, we want to make sure an internal developer IP is allowed. However, when we test the IP, we can see that the "rogue net" blocking rule takes precedence.
    Rules are evaluated in priority order, with the first match being applied, so we can fix this by allowing the developer IP with a smaller priority value than the blocked network it lies within.
    Another check, and we can see it's working as intended.
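If you prefer the command line, the same check can be run with the firewall rules test-ip subcommand; a quick sketch with a placeholder address:

# Show which firewall rule matches a given address, and whether it is allowed or denied.
gcloud app firewall-rules test-ip 203.0.113.7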
    For more examples and details, check out the full App Engine firewall documentation.

We'd like to thank all of the beta users who gave us feedback, and we encourage anyone with questions, concerns or suggestions to reach out to us by reporting a public issue, posting in the App Engine forum, or messaging us on the App Engine Slack channel.

    Introducing Grafeas: An open-source API to audit and govern your software supply chain



    Building software at scale requires strong governance of the software supply chain, and strong governance requires good data. Today, Google, along with JFrog, Red Hat, IBM, Black Duck, Twistlock, Aqua Security and CoreOS, is pleased to announce Grafeas, an open source initiative to define a uniform way for auditing and governing the modern software supply chain. Grafeas (“scribe” in Greek) provides organizations with a central source of truth for tracking and enforcing policies across an ever growing set of software development teams and pipelines. Build, auditing and compliance tools can use the Grafeas API to store, query and retrieve comprehensive metadata on software components of all kinds.

    As part of Grafeas, Google is also introducing Kritis, a Kubernetes policy engine that helps customers enforce more secure software supply chain policies. Kritis (“judge” in Greek) enables organizations to do real-time enforcement of container properties at deploy time for Kubernetes clusters based on attestations of container image properties (e.g., build provenance and test status) stored in Grafeas.
    “Shopify was looking for a comprehensive way to track and govern all the containers we ship to production. We ship over 6,000 builds every weekday and maintain a registry with over 330,000 container images. By integrating Grafeas and Kritis into our Kubernetes pipeline, we are now able to automatically store vulnerability and build information about every container image that we create and strictly enforce a built-by-Shopify policy: our Kubernetes clusters only run images signed by our builder. Grafeas and Kritis actually help us achieve better security while letting developers focus on their code. We look forward to more companies integrating with the Grafeas and Kritis projects.”  
- Jonathan Pulsifer, Senior Security Engineer at Shopify. (Read more in Shopify’s blog post.)

    The challenge of governance at scale 


    Securing the modern software supply chain is a daunting task for organizations both large and small, exacerbated by several trends:

    • Growing, fragmented toolsets: As an organization grows in size and scope, it tends to use more development languages and tools, making it difficult to maintain visibility and control of its development lifecycle. 
    • Open-source software adoption: While open-source software makes developers more productive, it also complicates auditing and governance. 
    • Decentralization and continuous delivery: The move to decentralize engineering and ship software continuously (e.g., “push on green”) accelerates development velocity, but makes it difficult to follow best practices and standards. 
    • Hybrid cloud deployments: Enterprises increasingly use a mix of on-premises, private and public cloud clusters to get the best of each world, but find it hard to maintain 360-degree visibility into operations across such diverse environments. 
    • Microservice architectures: As organizations break down large systems into container-based microservices, it becomes harder to track all the pieces.

As a result, organizations generate vast quantities of metadata, all in different formats from different vendors, stored in many different places. Without uniform metadata schemas or a central source of truth, CIOs struggle to govern their software supply chains, let alone answer foundational questions like: “Is software component X deployed right now?” “Did all components deployed to production pass required compliance tests?” and “Does vulnerability Y affect any production code?”

    The Grafeas approach 

    Grafeas offers a central, structured knowledge-base of the critical metadata organizations need to successfully manage their software supply chains. It reflects best practices Google has learned building internal security and governance solutions across millions of releases and billions of containers. These include:

    • Using immutable infrastructure (e.g., containers) to establish preventative security postures against persistent advanced threats 
    • Building security controls into the software supply chain, based on comprehensive component metadata and security attestations, to protect production deployments 
    • Keeping the system flexible and ensuring interoperability of developer tools around common specifications and open-source software

    Grafeas is designed from the ground up to help organizations apply these best practices in modern software development environments, using the following features and design points:

    • Universal coverage: Grafeas stores structured metadata against the software component’s unique identifier (e.g., container image digest), so you don’t have to co-locate it with the component’s registry, and so it can store metadata about components from many different repositories. 
    • Hybrid cloud-friendly: Just as you can use JFrog Artifactory as the central, universal component repository across hybrid cloud deployments, you can use the Grafeas API as a central, universal metadata store. 
    • Pluggable: Grafeas makes it easy to add new metadata producers and consumers (for example, if you decide to add or change security scanners, add new build systems, etc.) 
    • Structured: Structured metadata schemas for common metadata types (e.g., vulnerability, build, attestation and package index metadata) let you add new metadata types and providers, and the tools that depend on Grafeas can immediately understand those new sources. 
    • Strong access controls: Grafeas allows you to carefully control access for multiple metadata producers and consumers. 
    • Rich query-ability: With Grafeas, you can easily query all metadata across all of your components so you don’t have to parse monolithic reports on each component.

    Defragmenting and centralizing metadata 

    At each stage of the software supply chain (code, build, test, deploy and operate), different tools generate metadata about various software components. Examples include the identity of the developer, when the code was checked in and built, what vulnerabilities were detected, what tests were passed or failed, and so on. This metadata is then captured by Grafeas. See the image below for a use case of how Grafeas can provide visibility for software development, test and operations teams as well as CIOs.
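As a rough, hypothetical sketch of what that captured metadata can look like, a build tool might attach a build occurrence to a container image digest with a call along these lines; the endpoint, project, note name and digest are placeholders, and the fields follow the open-source Grafeas spec, which may differ by version.

# Hypothetical sketch: record a build occurrence against an image digest in a Grafeas server.
curl -X POST \
  -H "Content-Type: application/json" \
  "https://GRAFEAS_ENDPOINT/v1/projects/my-project/occurrences" \
  -d '{
    "resourceUri": "https://gcr.io/my-project/my-image@sha256:PLACEHOLDER_DIGEST",
    "noteName": "projects/my-project/notes/my-build-note",
    "kind": "BUILD",
    "build": { "provenance": { "id": "build-1234", "builderVersion": "builder-v1" } }
  }'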

    To give a comprehensive, unified view of this metadata, we built Grafeas to promote cross-vendor collaboration and compatibility; we’ve released it as open source, and are working with contributors from across the ecosystem to further develop the platform:

    • JFrog is implementing Grafeas in the JFrog Xray API and will support hybrid cloud workflows that require metadata in one environment (e.g., on-premises in Xray) to be used elsewhere (e.g., on Google Cloud Platform). Read more on JFrog’s blog
    • Red Hat is planning on enhancing the security features and automation of Red Hat Enterprise Linux container technologies in OpenShift with Grafeas. Read more on Red Hat’s blog
• IBM plans to deliver Grafeas and Kritis as part of the IBM Container Service on IBM Cloud, and to integrate its Vulnerability Advisor and DevOps tools with the Grafeas API. Read more on IBM’s blog
• Black Duck is collaborating with Google on the Google artifact metadata API implementation of Grafeas, bringing improved enterprise-grade open source security to Google Container Registry and Google Container Engine. Read more on Black Duck’s blog
    • Twistlock will integrate with Grafeas to publish detailed vulnerability and compliance data directly into orchestration tooling, giving customers more insight and confidence about their container operations. Read more on Twistlock’s blog.
    • Aqua Security will integrate with Grafeas to publish vulnerabilities and violations, and to enforce runtime security policies based on component metadata information. Read more on Aqua’s blog
    • CoreOS is exploring integrations between Grafeas and Tectonic, its enterprise Kubernetes platform, allowing it to extend its image security scanning and application lifecycle governance capabilities. 

    Already, several contributors are planning upcoming Grafeas releases and integrations:

    • JFrog’s Xray implementation of Grafeas API 
    • A Google artifact metadata API implementation of Grafeas, together with Google Container Registry vulnerability scanning 
    • Bi-directional metadata sync between JFrog Xray and the Google artifact metadata API 
    • Black Duck integration with Grafeas and the Google artifact metadata API 
    Building on this momentum, we expect numerous other contributions to the Grafeas project early in 2018.

    Join us!

The way we build and deploy software is undergoing fundamental changes. If organizations are to reap the benefits of containers, microservices, open source and hybrid cloud at scale, they need a strong governance layer to underpin their software development processes. Here are some ways you can learn more about and contribute to the project:


     We hope you will join us!

    Better tools for teams of all sizes

    We’ve heard feedback from businesses of all sizes that they need simpler ways to manage the analytics products they use and the team members who use them. That’s why we’re making new controls available to everyone who uses Analytics, Tag Manager, and Optimize and improving navigation for users of Surveys and Data Studio. These new features will help you more easily manage your accounts, get an overview of your business, and move between products.

    Streamlined account management



    With centralized account management, you can control user access and permissions across multiple products, like Analytics, Tag Manager, and Optimize.

    The first step is to create an organization to represent your business. You then link this organization to all of the different accounts that belong to your business. You can also move accounts between the organizations you create.

    Now you have a central location where administrators for your organization can:
    • Create rules for which types of new users should be allowed access to your organization
    • Audit existing users and decide which products and features they should have access to
    • Remove users who have left your organization or no longer need access to the tools
    • See the last time a user in your organization accessed Google Analytics data
• Allow users to discover who your organization’s admins are and contact them for help


    New home page



    Setting up an organization also gives you access to a new home page that provides an overview of your business. You’ll be able to manage accounts and settings across products and get insights and quick access to the products and features you use most. For example, you might see a large increase in visitors for a specific Analytics property, and then click through to Analytics to investigate where the visitors are coming from.


    Simplified navigation



    Finally, you’ll get a unified user experience across products. Common navigation and product headers make it easy to switch between products and access the data you need. You can view accounts by organization, or see everything you have access to in one place. We’ve also redesigned search, making it possible to search across all of your accounts in a single place.


    Get started



    If your business would benefit from these features, please visit this page to get started. You can also check out the help center for more info.

    These updates will be rolling out over the next few weeks, so please stay tuned if you don’t yet have access.

    Note: If you’re using the enterprise versions of our products, like Analytics 360, you already have access to these features as part of the Google Analytics 360 Suite.

    Google Code-in 2017 is seeking organization applications


    We are now accepting applications for open source organizations who want to participate in Google Code-in 2017. Google Code-in, a global online contest for pre-university students ages 13-17, invites students to learn by contributing to open source software.

    Working with young students is a special responsibility and each year we hear inspiring stories from mentors who participate. To ensure these new, young contributors have a great support system, we select organizations that have gained experience in mentoring students by previously taking part in Google Summer of Code.

    Organizations must apply before Tuesday, October 24 at 16:00 UTC.

Last year, 17 organizations were accepted, and over the last 7 years, 4,553 students from 99 different countries have completed more than 23,651 tasks for participating open source projects. Tasks fall into 5 categories:

    • Code: writing or refactoring 
    • Documentation/Training: creating/editing documents and helping others learn more
    • Outreach/Research: community management, outreach/marketing, or studying problems and recommending solutions
    • Quality Assurance: testing and ensuring code is of high quality
    • User Interface: user experience research or user interface design and interaction

Once an organization is selected for Google Code-in 2017, it will define these tasks and recruit mentors who are interested in providing online support for students.

    You can find a timeline, FAQ and other information about Google Code-in on our website. If you’re an educator interested in sharing Google Code-in with your students, you can find resources here.

    By Josh Simmons, Google Open Source

    Now shipping: Compute Engine machine types with up to 96 vCPUs and 624GB of memory


    Got compute- and memory-hungry applications? We’ve got you covered, with new machine types that have up to 96 vCPUs and 624 GB of memory—a 50% increase in compute resources per Google Compute Engine VM. These machine types run on Intel Xeon Scalable processors (codenamed Skylake), and offer the most vCPUs of any cloud provider on that chipset. Skylake in turn provides up to 20% faster compute performance, 82% faster HPC performance, and almost 2X the memory bandwidth compared with the previous generation Xeon.1

96 vCPU VMs are available in three predefined machine types:

    • Standard: 96 vCPUs and 360 GB of memory
    • High-CPU: 96 vCPUs and 86.4 GB of memory
    • High-Memory: 96 vCPUs and 624 GB of memory


You can also use custom machine types and extended memory with up to 96 vCPUs and 624GB of memory, allowing you to create exactly the machine shape you need, avoid wasted resources, and pay for only what you use.


The new 624GB Skylake instances are certified for SAP HANA scale-up deployments. And if you want to run even larger HANA analytical workloads, scale-out configurations of up to 9.75TB of memory with 16 n1-highmem-96 nodes are also now certified for data warehouses running BW/4HANA.

    You can use these new 96-core machines in beta today in four GCP regions: Central US, West US, West Europe, and East Asia. To get started, visit your GCP Console and create a new instance. Or check out our docs for instructions on creating new virtual machines using the gcloud command line tool.
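For example, creating one of the new predefined instances from the command line might look like the following sketch; the instance names and zone are placeholders, and --min-cpu-platform requests Skylake where it's available.

# Create a 96-vCPU standard instance on the Skylake platform.
gcloud beta compute instances create my-n1-standard-96 \
    --zone=us-central1-a \
    --machine-type=n1-standard-96 \
    --min-cpu-platform="Intel Skylake"

# Or shape a custom machine type with up to 96 vCPUs and 624GB of memory.
gcloud beta compute instances create my-custom-vm \
    --zone=us-central1-a \
    --custom-cpu=96 \
    --custom-memory=624GB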


    Need even more compute power or memory? We’re also working on a range of new, even larger VMs, with up to 4TB of memory. Tell us about your workloads and join our early testing group for new machine types.


1 Based on comparing Intel Xeon Scalable Processor codenamed "Skylake" versus previous generation Intel Xeon processor codenamed "Broadwell." 20% based on SpecINT. 82% based on High Performance Linpack for a 4-node cluster with AVX512. Performance improvements include improvements from use of the Math Kernel Library and Intel AVX512. Performance tests are measured using specific computer systems, components, software, operations and functions, and may have been optimized for performance only on Intel microprocessors. Any change to any of those factors may cause the results to vary. You should consult other information to assist you in fully evaluating your contemplated purchases. For more information go to http://www.intel.com/benchmarks

    Partnering on open source: Managing Google Cloud Platform with Chef


    Managing cloud resources is a critical part of the application lifecycle. That’s why today, we released and open sourced a set of comprehensive cookbooks for Chef users to manage Google Cloud Platform (GCP) resources.

    Chef is a continuous automation platform powered by an awesome community. Together, Chef and GCP enable you to drive continuous automation across infrastructure, compliance and applications.

    The new cookbooks allow you to define an entire GCP infrastructure using Chef recipes. The Chef server then creates the infrastructure, enforces it, and ensures it stays in compliance. The cookbooks are idempotent, meaning you can reapply them when changes are required and still achieve the same result.

The new cookbooks support the following products:

• Google Compute Engine
• Google Container Engine
• Google Cloud DNS
• Google Cloud SQL
• Google Cloud Storage

    We also released a unified authentication cookbook that provides a single authentication mechanism for all the cookbooks.

These new cookbooks are Chef certified, having passed the Chef engineering team’s rigorous quality and review bar, and are open source under the Apache 2.0 license on GCP's GitHub repository.

    We tested the cookbooks on CentOS, Debian, Ubuntu, Windows and other operating systems. Refer to the operating system support matrix for compatibility details. The cookbooks work with Chef Client, Chef Server, Chef Solo, Chef Zero, and Chef Automate.

To learn more about these Chef cookbooks, register for the webinar with me and Chef’s JJ Asghar on 15 October 2017.

    Getting started with Chef on GCP

    Using these new cookbooks is as easy as following these four steps:
    1. Install the cookbooks.
2. Get a service account with privileges for the GCP resources that you want to manage, and enable the APIs for each of the GCP services you will use.
3. Describe your GCP infrastructure in Chef:
   a. Define a gauth_credential resource
   b. Define your GCP infrastructure
    4. Run Chef to apply the recipe.
    Now, let’s discuss these steps in more detail.

    1. Install the cookbooks

    You can find all the GCP cookbooks for Chef on Chef Supermarket. We also provide a “bundle” cookbook that installs every GCP cookbook at once. That way you can choose the granularity of the code you pull into your infrastructure.

    Note: These Google cookbooks require neither administrator privileges nor special privileges/scopes on the machines that Chef runs on. You can install the cookbooks either as a regular user on the machine that will execute the recipe, or on your Chef server; the latter option distributes the cookbooks to all clients.

    The authentication cookbook requires a few of our gems. You can install them using various methods, including using Chef itself:


    chef_gem 'googleauth'
    chef_gem 'google-api-client'


    For more details on how to install the gems, please visit the authentication cookbook documentation.

    Now, you can go ahead and install the Chef cookbooks. Here’s how to install them all with a single command:


    knife cookbook site install google-cloud


    Or, you can install only the cookbooks for select products:


    knife cookbook site install google-gcompute    # Google Compute Engine
    knife cookbook site install google-gcontainer  # Google Container Engine
    knife cookbook site install google-gdns        # Google Cloud DNS
    knife cookbook site install google-gsql        # Google Cloud SQL
    knife cookbook site install google-gstorage    # Google Cloud Storage


    2. Get your service account credentials and enable APIs

    To ensure maximum flexibility and portability, you must authenticate and authorize GCP resources using service account credentials. Using service accounts allows you to restrict the privileges to the minimum necessary to perform the job.

    Note: Because service accounts are portable, you don’t need to run Chef inside GCP. Our cookbooks run on any computer with internet access, including other cloud providers. You might, for example, execute deployments from within a CI/CD system pipeline such as Travis or Jenkins, or from your own development machine.

See the service accounts documentation to learn more about service accounts, and how to create and enable them.

Also make sure to enable the APIs for each of the GCP services you intend to use.
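As a sketch with current gcloud commands (the account name, project ID, key path and role below are placeholders; grant only the roles your recipes actually need):

# Create a service account and download a JSON key for Chef to use.
gcloud iam service-accounts create chef-automation --display-name="Chef automation"
gcloud iam service-accounts keys create ~/chef_sa_key.json \
    --iam-account=chef-automation@MY_PROJECT.iam.gserviceaccount.com

# Grant the service account only the access it needs (Cloud SQL, in this example).
gcloud projects add-iam-policy-binding MY_PROJECT \
    --member="serviceAccount:chef-automation@MY_PROJECT.iam.gserviceaccount.com" \
    --role="roles/cloudsql.admin"

# Enable the APIs the recipes will call.
gcloud services enable sqladmin.googleapis.com compute.googleapis.com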

    3a. Define your authentication mechanism

Once you have your service account, add the following resource block to your recipe to begin authenticating with it. The resource name, here 'mycred', is referenced by the other resources via their credential parameter.


    gauth_credential 'mycred' do
      action :serviceaccount
      path '/home/nelsonjr/my_account.json'
      scopes ['https://www.googleapis.com/auth/compute']
    end


For further details on how to set up or customize authentication, visit the Google Authentication cookbook documentation.

    3b. Define your resources

    You can manage any resource for which we provide a type. The example below creates an SQL instance and database in Cloud SQL. For the full list of resources that you can manage, please refer to the respective cookbook documentation link or to this aggregate summary view.


gsql_instance 'my-app-sql-server' do
  action :create
  project 'google.com:graphite-playground'
  credential 'mycred'
end

gsql_database 'webstore' do
  action :create
  charset 'utf8'
  instance 'my-app-sql-server'
  project 'google.com:graphite-playground'
  credential 'mycred'
end


    Note that the above code has to be described in a recipe within a cookbook. We recommend you have a “profile” wrapper cookbook that describes your infrastructure, and reference the Google cookbooks as a dependency.

    4. Apply your recipe

    Next, we direct Chef to enforce the recipe in the “profile” cookbook. For example:

$ chef-client -z --runlist 'recipe[mycloud::myapp]'

    In this example, mycloud is the “profile” cookbook, and myapp is the recipe that contains the GCP resource declarations.

    Please note that you can apply the recipe from anywhere that Chef can execute recipes (client, server, automation), once or multiple times, or periodically in the background using an agent.

    Next steps

Now you're ready to start managing GCP resources with Chef and reaping the benefits of cross-cloud configuration management. Our plan is to continue to improve the cookbooks and add support for more Google products. We're also preparing to release the technology used to create these cookbooks as open source. If you have questions about this effort, please visit the Chef on GCP discussions forum, or reach out to us at chef-on-gcp@google.com.

    Announcing more Open Source Peer Bonus winners

    We’re excited to announce 2017’s second round of Open Source Peer Bonus winners. Google Open Source established this program six years ago to encourage Googlers to recognize and celebrate the external developers contributing to the open source ecosystem Google depends on.

    The Open Source Peer Bonus program works like this: Googlers nominate open source developers outside of the company who deserve recognition for their contributions to open source projects, including those used by Google. Nominees are reviewed by a volunteer team of engineers and the winners receive our heartfelt thanks and a small token of our appreciation.

    To date, we’ve recognized nearly 600 open source contributors from dozens of countries who have contributed their time and talent to more than 400 open source projects. You can find past winners in recent blog posts: Fall 2016, Spring 2017.

Without further ado, we’d like to recognize the latest round of winners and the projects they worked on. Here are the individuals who gave us permission to thank them publicly:

Name | Project | Name | Project
Mo Jangda | AMP Project | Eric Tang | Material Motion
Osvaldo Lopez | AMP Project | Nicholas Tollervey | micro:bit, Mu
Jason Jean | Angular CLI | Damien George | MicroPython
Henry Zhu | Babel | Tom Spilman | MonoGame
Oscar Boykin | Bazel Scala rules | Arthur Edge | NARKOZ/gitlab
Francesc Alted | Blosc | Sebastian Berg | NumPy
Matt Holt | Caddy | Bogdan-Andrei Iancu | OpenSIPS
Martijn Croonen | Chromium | Amit Ambasta | OR-tools
Raphael Costa | Chromium | Michael Powell | OR-tools
Mariatta Wijaya | CPython | Westbrook Johnson | Polymer
Victor Stinner | CPython | Marten Seemann | quic-go
Derek Parker | Delve | Fabian Henneke | Secure Shell
Thibaut Courouble | devdocs | Chris Fillmore | Shaka Player
David Lechner | ev3dev | Takeshi Komiya | Sphinx
Michael Niedermayer | FFmpeg | Dan Kennedy | SQLite
Mathew Huusko | Firebase | Joe Mistachkin | SQLite
Armin Ronacher | Flask | Richard Hipp | SQLite
Nenad Stojanovski | Forseti Security | Yuxin Wu | Tensorpack
Solly Ross | Heapster | Michael Herzog | three.js
Bjørn Pedersen | Hugo | Takahiro Aoyagi | three.js
Brion Vibber | JS-Interpreter | Jelle Zijlstra | Typeshed
Xiaoyu Zhang | Kubernetes | Verónica López | Women Who Go
Anton Kachurin | Material Components for the Web

    Thank you all so much for your contributions to the open source community and congratulations on being selected!

    By Maria Webb, Google Open Source

    Introducing custom roles, a powerful way to make Cloud IAM policies more precise



    As enterprises move their applications, services and data to the cloud, it’s critical that they put appropriate access controls in place to help ensure that the right people can access the right data at the right time. That’s why we’re excited to announce the beta release of custom roles for Cloud IAM.

    Custom roles offer customers full control of 1,287 public permissions across Google Cloud Platform services. This helps administrators grant users the permissions they need to do their jobs — and only those permissions. Fine-grained access controls help enforce the principle of least privilege for resources and data on GCP.

    “Verily is using custom roles to uphold the highest standards of patient trust by carefully managing the granularity of data access granted to people and programs based on their ‘need-to-know’.” — Harriet Brown, Product Manager for Trust, Compliance, and Data Security at Verily Life Sciences 

    Understanding IAM roles 

    IAM offers three primitive roles for Owner, Editor, and Viewer that make it easy to get started, and over one hundred service-specific predefined roles that combine a curated set of permissions necessary to complete different tasks across GCP. In many cases, predefined roles are sufficient for controlling access to GCP services. For example, the Cloud SQL Viewer predefined role combines 14 permissions necessary to allow users to browse and export databases.

    Custom roles complement the primitive and predefined roles when you need to be even more precise. For example, an auditor may only need to access a database to gather audit findings so they know what data is being collected, but not to read the actual data or perform any other operations. You can build your own “Cloud SQL Inventory” custom role to grant auditors browse access to databases without giving them permission to export their contents.

    How to create custom roles 

    To begin crafting custom roles, we recommend starting from the available predefined roles. These predefined roles are appropriate for most use cases and often only need small changes to the permissions list to meet an organization's requirements. Here’s how you could implement a custom role for the above use case:

    Step 1: Select the predefined role that you’d like to customize, in this case Cloud SQL Viewer:
    Step 2: Clone the predefined role and give it a custom name and ID.  Add or remove the desired permissions for your new custom role. In this case, that’s removing cloudsql.instances.export.
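On the command line, the same customization might look roughly like this with the gcloud beta iam surface mentioned in the next section; the project and role ID are placeholders, and exact flag spellings may vary by gcloud version.

# Clone the Cloud SQL Viewer predefined role into a custom role in your project.
gcloud beta iam roles copy \
    --source="roles/cloudsql.viewer" \
    --destination=cloudSqlInventory \
    --dest-project=MY_PROJECT

# Remove the export permission so auditors can browse databases but not export their contents.
gcloud beta iam roles update cloudSqlInventory \
    --project=MY_PROJECT \
    --remove-permissions=cloudsql.instances.export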

    How to use custom roles 

    Custom roles are available now in the Cloud Console, on the Roles tab under the ‘IAM & admin’ menu; as a REST API; and on the command line as gcloud beta iam. As you create a custom role, you can also assign it a lifecycle stage to inform your users about the readiness of the role for production usage.

    IAM supports custom roles for projects and across entire organizations to centralize development, testing, maintenance, and sharing of roles.


    Maintaining custom roles 

    When using custom roles, it’s important to track what permissions are associated with the roles you create, since available permissions for GCP services evolve and change over time. Unlike GCP predefined roles, you control if and when permissions are added or removed. Returning to our example, if new features are added to the Cloud SQL service — with corresponding new permissions — then you decide whether to add the new permissions to your customized “SQL Inventory” role as you see fit. During your testing, the Cloud Console’s appearance may vary for users who are granted custom roles, since UI elements may be enabled or disabled by specific permissions. To help maintain your custom roles, you can refer to the new IAM permissions change log to find all changes to beta and GA services’ permissions.

    Get started! 

    Interested in customizing Cloud IAM roles in your GCP project? Check out the detailed step-by-step instructions on how to get started here. We hope Cloud IAM custom roles make it easier for organizations to align access controls to their business processes. In conjunction with resource-level IAM policies, which can control access down to specific resources such as Pub/Sub topics or Machine Learning models, security administrators now have the power to publish policies as precise as granting a single user just one permission on a resource — or on whole folders full of projects. We welcome your feedback.