Tag Archives: Security & Identity

Getting to know Cloud IAM

If your cloud environment consists of multiple projects and services accessed by multiple users, you want to be able to specify who can do what, to which resource. On Google Cloud Platform (GCP), that means using Cloud Identity and Access Management (IAM), which gives you the control and visibility you need to centrally manage your cloud resources.

Implementing Cloud IAM is an ongoing, multi-step process. First, you need to configure your users and groups. Then, you need to determine whether to define functional roles—and map them to your users. You also need to determine whether pre-defined roles offered by Cloud IAM meet your organization’s needs, and if not, create custom ones. Of course, you’ll have to test your IAM policies, and revisit them over time to make sure that they continue to meet your needs.

To help you implement these controls, we’ve created a flowchart to help you navigate Cloud IAM. Whether you're new to IAM or familiar with it and just need a gentle reminder, this flowchart can serve as a handy checklist of steps you need to follow.
Let’s drill down into the various steps in the flowchart.

Get started by reading the documentation

If you’re new to Cloud IAM, we highly recommend you take time out to familiarize yourself with the IAM documentation, a comprehensive guide that includes resources to help you follow best practice guidance. In particular, the Understanding Roles document is a great way for you to become more familiar with IAM. This page lists all the available IAM roles per product, and from here you can easily locate the product-specific IAM pages for more detail on their respective IAM roles.

Get hands-on

Once you have familiarized yourself with the overview, it’s time to get hands-on to understand how to actually implement IAM policies. The tutorial on implementing GCP policies has a section on how to map your requirements to IAM and walks through the tutorial implementation using the GCP Console.

Create users and groups

GCP uses Google Accounts for authentication and access management. Best practice is to use groups to set up the IAM policies that meet your requirements. Google groups are a convenient way to apply an access policy to a collection of users. You can grant and change access controls for a whole group at once rather than one-at-a-time, as you do for individual users or service accounts. You can also easily add members to and remove members from a Google group rather than updating a Cloud IAM policy.
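To see why group-based bindings are easier to manage, here's an illustrative sketch using plain Python dictionaries (not a real API client; the group and user names are made up). A single binding to a group covers every member, while per-user grants mean one policy change per person:

```python
# Hypothetical in-memory representation of an IAM policy (not an API client).
policy = {"bindings": []}

def grant(policy, role, member):
    """Add a member to the binding for `role`, creating the binding if needed."""
    for binding in policy["bindings"]:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return
    policy["bindings"].append({"role": role, "members": [member]})

# One binding to a group covers everyone in it; membership changes then
# happen in the group, not in the policy.
grant(policy, "roles/storage.objectViewer", "group:data-readers@example.com")

# Compare: granting to individuals means one policy update per user.
for user in ["user:alice@example.com", "user:bob@example.com"]:
    grant(policy, "roles/storage.objectViewer", user)

print(policy["bindings"][0]["members"])
```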

Understand hierarchy

A key concept in Cloud IAM is its hierarchical approach: policy flows from the Organization node downwards. GCP allows you to group and hierarchically organize all GCP resources into resource containers such as Organizations, Folders and Projects.

The following diagram shows an example of various resources and their hierarchical organization in GCP.
You can set an IAM policy at the organization level, the folder level, the project level, or (in some cases) the resource level. The tables in the Understanding Roles document indicate at what level in the hierarchy you can incorporate an IAM role for a product as part of a policy.

A resource inherits the policies of its parent node. If you set a policy at the Organization level, all its child folders and projects inherit that policy. Likewise, if you set a policy at the Project level, its child resources inherit that policy as well. In other words, the policy for a resource is effectively the union of the policies set on the resource as well as all of the policies it inherits from its ancestors.
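As a rough sketch of that inheritance rule (the resource names and bindings here are invented for illustration), the effective policy on a resource can be computed by walking up the hierarchy and collecting the bindings set at each level:

```python
# Sketch of policy inheritance: the effective policy on a resource is the
# union of its own bindings and those of every ancestor. All names invented.
hierarchy = {
    "projects/billing-prod": "folders/finance",
    "folders/finance": "organizations/example.com",
    "organizations/example.com": None,
}

policies = {
    "organizations/example.com": [("roles/viewer", "group:everyone@example.com")],
    "folders/finance": [("roles/editor", "group:finance@example.com")],
    "projects/billing-prod": [("roles/owner", "user:cfo@example.com")],
}

def effective_policy(resource):
    """Walk up the hierarchy, collecting inherited bindings along the way."""
    bindings = []
    while resource is not None:
        bindings.extend(policies.get(resource, []))
        resource = hierarchy.get(resource)
    return bindings

print(effective_policy("projects/billing-prod"))
```

Note that inheritance only ever adds access: a policy set lower in the hierarchy cannot revoke a role granted higher up.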

Define functional or product-specific roles

When planning your IAM implementation there are two approaches: use a functional role or set access according to data or product type.

Typically the first few IAM policies that you need to map are functional and role-based, for example, the existing Networking and Billing roles.

Restricting access by product, meanwhile, looks at a particular resource and focuses on defining the policy focused on that resource. For example, you may wish to restrict access to specific Cloud Storage buckets, BigQuery datasets or Pub/Sub topics and subscriptions.

Define custom roles

If the predefined IAM roles do not meet your security needs, you can create a custom role with one or more permissions. When creating a custom role, we recommend starting from an existing predefined role and adding or removing permissions, rather than starting from an empty list of permissions.

Creating custom roles is an advanced configuration action since managing them represents additional operational overhead. You're responsible for maintaining your custom roles, and will need to add any new IAM permissions to them. Any changes that Google Cloud makes to predefined roles will not be reflected in your custom roles. You can use the IAM permissions change log to track the history of permission changes.
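Conceptually, deriving a custom role from a predefined one is just set arithmetic on permission names. This sketch uses an illustrative (not authoritative) permission list to show the "start from a predefined role, then adjust" approach:

```python
# Sketch: derive a custom role from a predefined role's permission set,
# rather than building it up from nothing. Permission list is illustrative.
predefined_viewer = {
    "storage.buckets.get",
    "storage.buckets.list",
    "storage.objects.get",
    "storage.objects.list",
}

# Start from the predefined role, remove what is too broad for this job,
# and add the one extra permission it actually needs.
custom_role = (predefined_viewer - {"storage.objects.get"}) | {"storage.buckets.getIamPolicy"}

print(sorted(custom_role))
```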

Define IAM policy

You can grant roles to users by creating a Cloud IAM policy, which is a collection of statements that define who has what type of access. These policies consist of a set of bindings of members (who has access) to one or more IAM roles. Here's an example of a policy in JSON:

  "bindings": [
     "role": "roles/owner",
     "members": [
     "role": "roles/viewer",
     "members": ["user:bob@example.com"]

Using groups to define members is another best practice. It keeps the policies readable and allows you to easily adjust who has access to a resource without having to update the policies themselves.

Test IAM policies

Testing any policies you create is critical. This way, you can safely check whether you’ve inadvertently granted broader access than required or, conversely, locked things down too much before applying the new or updated policies to your production environment.

An important decision is where you'll undertake testing. We recommend using a dedicated Folder, assuming the functional roles assigned at the Organization level are well-defined and unlikely to change.

A quick way to validate policies set at the Project level is to use the Method: projects.setIamPolicy API page, which has a useful Try this API form so you can see the request and response.

Alternatively, you can troubleshoot IAM policies and simulate changes using IAM Explain, part of the Forseti security suite.

Use version control

Infrastructure as code gives you the confidence that you're deploying reproducible, consistent configurations, provides an audit trail for changes and allows you to treat your infrastructure as you do code. An integral component of this approach is version control. Cloud IAM policies form part of your infrastructure definition, and can be managed as part of a version control system.

There are many ways to write IAM policies: as JSON or YAML files, as Cloud Deployment Manager templates, or as Terraform configurations, for example.

The policies, in whichever format you settle upon, can then be kept in a version control system, such as Cloud Source Repositories, which provides private Git repositories. You can then integrate the repositories as part of your chosen deployment workflow.

Apply your policies

Finally, it’s time to apply the policies that you've created. You can apply policies using the gcloud CLI, the IAM API, GCP Console, Cloud Deployment Manager or via open-source software tooling such as Terraform, which includes example IAM policies that can be applied using the gcloud command and the API.

Next steps

Navigating the flowchart provides you with a logical approach to implementing your IAM policies—but it doesn't stop there. Setting up Cloud Audit Logging to monitor your policies is a great next step. This done, you’ll be well on your way to building an environment with the control and visibility you need to centrally manage your GCP resources.

Announcing SSL policies for HTTPS and SSL proxy load balancers

Applications in cloud deployments have diverse security needs. When you use a load balancer as an HTTPS or TLS front end, you need to be able to control how it secures connections to clients. In some cases, your security or compliance requirements may restrict the TLS protocols and ciphers that the load balancer can use. For other applications, you may need the load balancer to support older TLS features in order to accommodate legacy clients. Today we’re introducing SSL policies for HTTPS and SSL Proxy load balancers on GCP, giving you the controls you need for this.

Introducing SSL policies

The TLS protocol is at the heart of how we secure the internet, but in the 20+ years since its inception it has by no means remained static. What began as SSL has evolved through several TLS versions, each one adding new cryptographic techniques and enhancing performance. When using TLS, servers and clients negotiate exactly how to speak TLS to one another, including the version of the protocol and the underlying ciphers. With our new SSL policies feature, you can control both the TLS capabilities your load balancer is willing to negotiate and how those settings are managed over time.

When you create an SSL policy, you specify two things:
  • A minimum TLS version: Setting this to 1.1, for example, means that the load balancer will only negotiate TLS with clients that support TLS version 1.1 or newer.
  • A profile of features: This selects the set of cipher suites that the load balancer can use.
The profile can be either a pre-defined profile or a custom profile:

  • COMPATIBLE allows the broadest set of clients, including those that support only out-of-date TLS features.
  • MODERN supports a wide set of clients using current TLS capabilities.
  • RESTRICTED supports a reduced set of clients that meet stricter security requirements.
  • CUSTOM lets you select the individual TLS features yourself.

To see which TLS features are enabled by each of these profiles, check our SSL policies documentation.

You need only a single gcloud command to create an SSL policy:

gcloud beta compute ssl-policies create web-front-end-app-policy \
       --profile MODERN --min-tls-version 1.1

You can then attach this SSL policy to your load balancer with a second command:

gcloud beta compute target-https-proxies update my-https-lb \
  --ssl-policy web-front-end-app-policy

Here's a preview of the configuration via the console (available soon).

Keeping up with TLS advances

We’ve designed the pre-defined profiles to satisfy a wide set of security needs. We manage these profiles for you, automatically enabling new features that are broadly implemented by clients and disabling features when they're no longer appropriate. With pre-defined profiles, there’s no need to manually revisit your policy in order to keep your load balancer up-to-date with modern TLS capabilities.

If pre-defined profiles aren’t right for you, use a CUSTOM profile instead. Here's a preview of custom profile configuration via the console (available soon).

With a CUSTOM profile, you can select exactly the TLS features that you want and manage the introduction of new features to keep your SSL policy current.

Try SSL policies today 

Learn more about SSL policies online, and give them a spin. We look forward to your feedback!

Toward effective cloud governance: designing policies for GCP customers large and small

When it comes to security and governance, not all orgs are created equal. A mom-and-pop shop has different needs than a large enterprise, and startups have different requirements than, say, a local government.

Google Cloud Platform (GCP) customers come in all shapes and sizes, and so do the identity and access management policies that they put in place. Whether you work for a small company and wear many hats, or for a large enterprise with a clearly defined role, you need a policy baseline to implement your GCP environment.

To get you off to a good start, we've written a series of articles that look at typical customer environments and their identity postures. Using a hypothetical customer, each article shows you how to design GCP policies that meet the policy requirements of the reference organization.

In a first phase, we’ve published use cases about the following organizations:

  • Enterprise customers can have complex organizational structures and mature policies often developed over many years. Typically, they have many users to consider and manage.
  • Startups typically have simpler policy requirements and need to be able to move quickly. However, they still need to ensure that appropriate safeguards are in place, particularly around protection of intellectual property.
  • Education and training providers need to be able to automatically create and destroy safe and sandboxed student environments.

In addition to these articles, we also published a tutorial based on the fictional startup customer to guide you through many of the implementation steps. You can find the tutorial here.

Of course, this is just the beginning, and we are well aware that one size doesn't fit all, or even most! So we encourage you to read them all and blend their guidance to fit your specific use case. In the meantime, if you have any suggestions for more use cases, please let us know and we'll add them to our list.

Use Forseti to make sure your Google Kubernetes Engine clusters are updated for “Meltdown" and “Spectre”

Last month, Project Zero disclosed details about CPU vulnerabilities that have been referred to as “Meltdown” and “Spectre,” and we let you know that Google Cloud has been updated to protect against all known vulnerabilities.

Customers running virtual machines (VMs) on Google Cloud services should continue to follow security best practices and regularly apply all security updates, just as they would for any other operating system vulnerability. We provided a full list of recommended actions for GCP customers to protect against these vulnerabilities.

One recommended action is to update all Google Kubernetes Engine clusters to ensure the underlying VM image is fully patched. You can do this automatically by enabling auto-upgrade on your Kubernetes node pools. Want to make sure all your clusters are running a version patched against these CPU vulnerabilities? The Google Cloud security team developed a scanner that can help.

The scanner is now available within Forseti Security, an open-source security toolkit for GCP, allowing you to quickly identify any Kubernetes Engine clusters that have not yet been patched.

If you’ve already installed Forseti, you’ll need to upgrade to version 1.1.10 and enable the scanner. If not, install Forseti Security on a new project in your GCP organization. The scanner will check the version of the node pools in all Kubernetes Engine clusters running in all your GCP projects on an hourly basis. Forseti writes any violations it finds to its violations table, and optionally sends an email to your GCP admins, to help you identify any lingering Meltdown exposure.

The Forseti toolkit can be used in many different ways to help you stay secure. To learn more about the Forseti community, check out this blog post. Contact discuss@forsetisecurity.org if you have any questions about this tool.


Finer-grained security using custom roles for Cloud IAM

IT security aims to ensure the right people have access to the right resources and use them in the right ways. Making sure those are the only things that can happen is the "principle of least privilege," a cornerstone of enterprise security policy. Custom roles for Cloud IAM make that easier with the power to pick the precise permissions people need to do their jobs—and are now generally available.

Google Cloud Platform (GCP) offers hundreds of predefined roles that range from "Owner" to product- and job-specific roles as narrow as "Cloud Storage Viewer." These are curated combinations of the thousands of IAM permissions that control every API in GCP, from starting a virtual machine to making predictions using machine learning models. For even finer-grained access control, custom roles now offer production-level support for remixing permissions across all GCP services.

Security that’s built to fit

Consider a tool that needs access to multiple GCP services to inventory Cloud Storage buckets, BigQuery tables and Cloud Spanner databases. Enumerating data doesn’t require privileges to decrypt that data. While predefined roles to view an entire project may grant .query, .decrypt and .get as a set, custom roles make it possible to grant .get permission on its own. Since a custom role can also combine permissions from multiple GCP services, you can put all of the permissions for a service account in one place—and then share that new role across your entire organization.

Custom roles aren’t just for services; users can also benefit from roles that are properly tailored to get their jobs done. For example, one regulation may state that a privacy auditor should be able to inspect all the personally-identifiable information (PII) stored about your customers; another, that only full-time employees should process such data. Depending on job roles, it may be too powerful to grant BigQuery Data Owner to an auditor (who shouldn’t be able to delete data); yet BigQuery Data Viewer may be too weak for employees (who also need to search the data and run reports). IAM custom roles allow you to include or exclude permissions to match specific job roles:
“As the largest owner and operator of shopping centers in Australia and New Zealand, data security is crucial to our business. Google Cloud IAM custom roles help us meet our security standards, legislative requirements and remain compliant with the Australian Privacy Principles. With this feature, we can implement identity and access control to the authorized tasks performed by a specific person or machine, allowing us to fine-tune permissions and rigorously conform to the principle of least privilege.” 
— Evgeny Minkevich, Integration Solution Architect, Scentre Group

Managing custom roles

GCP is constantly expanding and evolving, and the set of permissions that control all of its APIs do, too. Almost all permissions are available for customization today, with the exception of a few that are only tested and supported in predefined role combinations. To keep abreast of new permissions, and changes in the support level of existing ones, you can now rely on a central permission change log for all public GCP services as well as a list of all supported permissions in custom roles.

We also suggest some recommended practices for testing, deploying and maintaining your own custom roles. To track and control changes to your custom roles, we’ve improved our integration with Cloud Deployment Manager to create and update custom roles, both within projects and across entire organizations (sample code). Together with existing Deployment Manager features that control how resources are created, organized and secured, IAM custom roles can help automate applying the principle of least privilege.

What’s next

We continue to invest in making IAM more powerful and easier to use, including helping you to create and manage custom roles. That starts with regular updates on permission changes, so you can keep your own custom roles in sync with Google’s new services, roles and permissions. It extends into research with the Forseti Security open source initiative to explain why a permission was granted or denied. We want the principle of "least privilege" to take the least effort, too!

12 best practices for user account, authorization and password management

Account management, authorization and password management can be tricky. For many developers, account management is a dark corner that doesn't get enough attention. For product managers and customers, the resulting experience often falls short of expectations.

Fortunately, Google Cloud Platform (GCP) brings several tools to help you make good decisions around the creation, secure handling and authentication of user accounts (in this context, anyone who identifies themselves to your system: customers or internal users). Whether you're responsible for a website hosted in Google Kubernetes Engine, an API on Apigee, an app using Firebase or another service with authenticated users, this post will lay out the best practices to ensure you have a safe, scalable, usable account authentication system.

1. Hash those passwords

My most important rule for account management is to safely store sensitive user information, including their password. You must treat this data as sacred and handle it appropriately.

Do not store plaintext passwords under any circumstances. Your service should instead store a cryptographically strong hash of the password that cannot be reversed, created with, for example, PBKDF2, SHA3, Scrypt or Bcrypt. The hash should be salted with a value unique to that specific login credential. Do not use deprecated hashing technologies such as MD5 or SHA1, and under no circumstances should you use reversible encryption or try to invent your own hashing algorithm.
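As one minimal sketch of salted, slow hashing using only the Python standard library (the iteration count here is an assumption; tune it to your hardware and current guidance):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, *, iterations: int = 310_000):
    """Return (salt, hash) using PBKDF2-HMAC-SHA256 with a per-credential salt."""
    salt = secrets.token_bytes(16)  # unique salt for this specific credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    *, iterations: int = 310_000) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Note the constant-time comparison on verification: comparing digests with `==` can leak timing information.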

You should design your system assuming it will be compromised eventually. Ask yourself "If my database were exfiltrated today, would my users' safety and security be in peril on my service or other services they use? What can we do to mitigate the potential for damage in the event of a leak?"

Another point: if you could produce a user's password in plaintext at any time other than immediately after the user provides it to you, there's a problem with your implementation.

2. Allow for third-party identity providers if possible

Third-party identity providers enable you to rely on a trusted external service to authenticate a user's identity. Google, Facebook and Twitter are commonly used providers.

You can implement external identity providers alongside your existing internal authentication system using a platform such as Firebase Auth. There are a number of benefits that come with Firebase Auth, including simpler administration, smaller attack surface and a multi-platform SDK. We'll touch on more benefits throughout this list. See our case studies on companies that were able to integrate Firebase Auth in as little as one day.

3. Separate the concept of user identity and user account

Your users are not an email address. They're not a phone number. They're not the unique ID provided by an OAuth response. Your users are the culmination of their unique, personalized data and experience within your service. A well-designed user management system has low coupling and high cohesion between different parts of a user's profile.

Keeping the concepts of user account and credentials separate will greatly simplify the process of implementing third-party identity providers, allowing users to change their username and linking multiple identities to a single user account. In practical terms, it may be helpful to have an internal global identifier for every user and link their profile and authentication identity via that ID as opposed to piling it all in a single record.
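A minimal sketch of that separation might look like the following (the providers, field names and helper functions are all hypothetical, invented for illustration): a stable internal ID anchors the profile, and any number of external identities point at it.

```python
# Sketch of decoupling identity from account: a stable internal ID anchors
# the profile, and any number of external identities map onto it.
import uuid

accounts = {}     # internal_id -> profile data
identities = {}   # (provider, external_id) -> internal_id

def create_account(display_name):
    internal_id = str(uuid.uuid4())
    accounts[internal_id] = {"display_name": display_name}
    return internal_id

def link_identity(provider, external_id, internal_id):
    identities[(provider, external_id)] = internal_id

def resolve(provider, external_id):
    """Return the internal account ID for an external identity, if linked."""
    return identities.get((provider, external_id))

uid = create_account("Ada")
link_identity("password", "ada@example.com", uid)
link_identity("google", "oauth-sub-12345", uid)

# Both identities resolve to the same account; the email can change freely
# without touching the account record itself.
print(resolve("google", "oauth-sub-12345") == uid)
```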

4. Allow multiple identities to link to a single user account

A user who authenticates to your service using their username and password one week might choose Google Sign-In the next without understanding that this could create a duplicate account. Similarly, a user may have very good reason to link multiple email addresses to your service. If you properly separated user identity and authentication, it will be a simple process to link several identities to a single user.

Your backend will need to account for the possibility that a user gets part or all the way through the signup process before they realize they're using a new third-party identity not linked to their existing account in your system. This is most simply achieved by asking the user to provide a common identifying detail, such as email address, phone or username. If that data matches an existing user in your system, require them to also authenticate with a known identity provider and link the new ID to their existing account.

5. Don't block long or complex passwords

NIST has recently updated guidelines on password complexity and strength. Since you are (or will be very soon) using a strong cryptographic hash for password storage, a lot of problems are solved for you. Hashes will always produce a fixed-length output no matter the input length, so your users should be able to use passwords as long as they like. If you must cap password length, only do so based on the maximum POST size allowable by your servers. This is commonly well above 1MB. Seriously.

Your hashed passwords will consist of a small selection of known ASCII characters. If not, you can easily convert a binary hash to Base64. With that in mind, you should allow your users to use literally any characters they wish in their password. If someone wants a password made of Klingon, Emoji and control characters with whitespace on both ends, you should have no technical reason to deny them.
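You can see the fixed-length property directly. (Plain SHA3 here only demonstrates the length behavior; for actual password storage you would use a salted, slow hash.)

```python
import base64
import hashlib

# A strong hash produces fixed-length, ASCII-safe output no matter what
# goes in, so there is no storage reason to restrict password length or
# character set.
passwords = ["hunter2", "🖖 Qapla'  \t" + "x" * 10_000]
lengths = [len(base64.b64encode(hashlib.sha3_256(pw.encode("utf-8")).digest()))
           for pw in passwords]
print(lengths)  # Base64-encoded digests are the same length every time
```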

6. Don't impose unreasonable rules for usernames

It's not unreasonable for a site or service to require usernames longer than two or three characters, block hidden characters and prevent whitespace at the beginning and end of a username. However, some sites go overboard with requirements such as a minimum length of eight characters or by blocking any characters outside of 7-bit ASCII letters and numbers.

A site with tight restrictions on usernames may offer some shortcuts to developers, but it does so at the expense of users, and extreme cases will drive some users away.

There are some cases where the best approach is to assign usernames. If that's the case for your service, ensure the assigned username is user-friendly insofar as they need to recall and communicate it. Alphanumeric IDs should avoid visually ambiguous symbols such as "Il1O0." You're also advised to perform a dictionary scan on any randomly generated string to ensure there are no unintended messages embedded in the username. These same guidelines apply to auto-generated passwords.
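A sketch of such a generator might look like this (the blocklist is a stand-in; a real implementation would scan against a full dictionary):

```python
import secrets
import string

# Alphabet excludes the visually ambiguous characters I, l, 1, O, 0.
AMBIGUOUS = set("Il1O0")
ALPHABET = [c for c in string.ascii_letters + string.digits if c not in AMBIGUOUS]

# Hypothetical short blocklist; a real one would be a full dictionary scan.
BLOCKLIST = {"dang", "heck"}

def assign_username(length=8):
    """Generate a random ID, rejecting any that embed a blocklisted word."""
    while True:
        candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if not any(word in candidate.lower() for word in BLOCKLIST):
            return candidate

name = assign_username()
print(name, all(c not in AMBIGUOUS for c in name))
```

Using `secrets` rather than `random` matters here: assigned IDs should not be predictable from earlier ones.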

7. Allow users to change their username

It's surprisingly common in legacy systems or any platform that provides email accounts not to allow users to change their username. There are very good reasons not to automatically release usernames for reuse, but long-term users of your system will eventually come up with a good reason to use a different username and they likely won't want to create a new account.

You can honor your users' desire to change their usernames by allowing aliases and letting your users choose the primary alias. You can apply any business rules you need on top of this functionality. Some orgs might only allow one username change per year or prevent a user from displaying anything but their primary username. Email providers might ensure users are thoroughly informed of the risks before detaching an old username from their account or perhaps forbid unlinking old usernames entirely.

Choose the right rules for your platform, but make sure they allow your users to grow and change over time.

8. Let your users delete their accounts

A surprising number of services have no self-service means for a user to delete their account and associated data. There are a number of good reasons for a user to close an account permanently and delete all personal data. These concerns need to be balanced against your security and compliance needs, but most regulated environments provide specific guidelines on data retention. A common solution to avoid compliance and hacking concerns is to let users schedule their account for automatic future deletion.

In some circumstances, you may be legally required to comply with a user's request to delete their data in a timely manner. You also greatly increase your exposure in the event of a data breach where the data from "closed" accounts is leaked.

9. Make a conscious decision on session length

An often overlooked aspect of security and authentication is session length. Google puts a lot of effort into ensuring users are who they say they are and will double-check based on certain events or behaviors. Users can take steps to increase their security even further.

Your service may have good reason to keep a session open indefinitely for non-critical analytics purposes, but there should be thresholds after which you ask for password, 2nd factor or other user verification.

Consider how long a user should be able to be inactive before re-authenticating. Verify user identity in all active sessions if someone performs a password reset. Prompt for authentication or 2nd factor if a user changes core aspects of their profile or when they're performing a sensitive action. Consider whether it makes sense to disallow logging in from more than one device or location at a time.

When your service does expire a user session or require re-authentication, prompt the user in real-time or provide a mechanism to preserve any activity they have unsaved since they were last authenticated. It's very frustrating for a user to fill out a long form, submit it some time later and find out all their input has been lost and they must log in again.

10. Use 2-Step Verification

Consider the practical impact on a user of having their account stolen when choosing from 2-Step Verification (also known as 2-factor authentication or just 2FA) methods. SMS 2FA auth has been deprecated by NIST due to multiple weaknesses; however, it may be the most secure option your users will accept for what they consider a trivial service. Offer the most secure 2FA auth you reasonably can. Enabling third-party identity providers and piggybacking on their 2FA is a simple means to boost your security without great expense or effort.
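One widely supported option is time-based one-time passwords. As a sketch, the standard TOTP algorithm (RFC 6238, built on HOTP from RFC 4226) fits in a few lines of stdlib Python; production systems should use a maintained library, but the mechanics are simple:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time step."""
    t = int((at if at is not None else time.time()) // step)
    return hotp(secret, t)

# With the RFC 6238 test secret, the code at T=59 seconds is 287082.
print(totp(b"12345678901234567890", at=59))
```

The authenticator app and your server both derive the code from the shared secret and the clock, so verification needs no network round trip to the client.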

11. Make user IDs case insensitive

Your users don't care and may not even remember the exact case of their username. Usernames should be fully case-insensitive. It's trivial to store usernames and email addresses in all lowercase and transform any input to lowercase before comparing.

Smartphones represent an ever-increasing percentage of user devices. Most of them offer autocorrect and automatic capitalization of plain-text fields. Preventing this behavior at the UI level might not be desirable or completely effective, and your service should be robust enough to handle an email address or username that was unintentionally auto-capitalized.
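A normalization routine along these lines (a minimal sketch; the exact normalization rules are a design choice for your service) handles stray capitalization and whitespace in one place:

```python
import unicodedata

def normalize_username(raw: str) -> str:
    """Normalize a username for storage and comparison: trim, NFKC, casefold."""
    return unicodedata.normalize("NFKC", raw.strip()).casefold()

# An auto-capitalized login still matches the stored form.
print(normalize_username("Ada.Lovelace ") == normalize_username("ada.lovelace"))
```

`str.casefold()` is preferable to `str.lower()` because it also folds characters like German ß that simple lowercasing misses.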

12. Build a secure auth system

If you're using a service like Firebase Auth, a lot of security concerns are handled for you automatically. However, your service will always need to be engineered properly to prevent abuse. Core considerations include implementing a password reset instead of password retrieval, detailed account activity logging, rate limiting login attempts, locking out accounts after too many unsuccessful login attempts and requiring 2-factor authentication for unrecognized devices or accounts that have been idle for extended periods. There are many more aspects to a secure authentication system, so please see the section below for links to more information.

Further reading

There are a number of excellent resources available to guide you through the process of developing, updating or migrating your account and authentication management system. I recommend the following as a starting place:

Three ways to configure robust firewall rules

If you administer firewall rules for Google Cloud VPCs, you want to ensure that the firewall rules you create can only be associated with the correct VM instances by developers in your organization. Without that assurance, it's difficult to manage access to sensitive content hosted on VMs in your VPCs, or to grant those instances access to the internet, and you must carefully audit and monitor the instances to ensure that unintended access is not granted through the use of tags. With Google VPC, there are now multiple ways to achieve the required level of control, which we describe here in detail.

As an example, imagine you want to create a firewall rule to restrict access to sensitive user billing information in a data store running on a set of VMs in your VPC. Further, you’d like to ensure that developers who can create VMs for applications other than the billing frontend cannot enable these VMs to be governed by firewall rules created to allow access to billing data.
Example topology of a VPC setup requiring secure firewall access.
The traditional approach here is to attach tags to VMs and create a firewall rule that allows access to specific tags, e.g., in the above example you could create a firewall rule that allows all VMs with the billing-frontend tag access to all VMs with the tag billing-data. The drawback of this approach is that any developer with Compute InstanceAdmin role for the project can now attach billing-frontend as a tag to their VM, and thus unintentionally gain access to sensitive data.

Configuring Firewall rules with Service Accounts

With the general availability of firewall rules using service accounts, instead of using tags, you can block developers from enabling a firewall rule on their instances unless they have access to the appropriate centrally managed service accounts. Service accounts are special Google accounts that belong to your application or service running on a VM and can be used to authenticate the application or service for resources it needs to access. In the above example, you can create a firewall rule to allow access to the billing-data@ service account only if the originating source service account of the traffic is billing-frontend@.
Firewall setup using source and target service accounts. (Service accounts names are abbreviated for simplicity.)
You can create this firewall rule using the following gcloud command:
gcloud compute firewall-rules create secure-billing-data \
    --network web-network \
    --allow TCP:443 \
    --source-service-accounts billing-frontend@web.iam.gserviceaccount.com \
    --target-service-accounts billing-data@web.iam.gserviceaccount.com
If, in the above example, the billing frontend and billing data applications are autoscaled, you can specify the service accounts for the corresponding applications in the InstanceTemplate configured for creating the VMs.
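For instance, the frontend's instance template might attach its service account like this (the machine type and scopes here are illustrative assumptions, not from the example above):

```shell
# Sketch: bake the service account into the instance template so that
# autoscaled VMs inherit the identity the firewall rule matches on.
gcloud compute instance-templates create billing-frontend-template \
    --network web-network \
    --machine-type e2-medium \
    --service-account billing-frontend@web.iam.gserviceaccount.com \
    --scopes cloud-platform
```

Every VM the managed instance group creates from this template then automatically falls under the secure-billing-data rule, with no per-VM configuration.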

The advantage of this approach is that once you set it up, the firewall rules can remain unchanged even as the underlying IAM permissions change. However, you can currently associate only one service account with a VM, and to change that service account, the instance must be stopped.

Creating custom IAM role for InstanceAdmin

If you want the flexibility of tags but the limitations of service accounts are a concern, you can create a custom role with more restricted permissions that removes the ability to set tags on VMs; do this by omitting the compute.instances.setTags permission. This custom role can include the other permissions present in the InstanceAdmin role and can then be assigned to developers in the organization. With this custom role in place, you can create your firewall rules using tags:
gcloud compute firewall-rules create secure-billing-data \
    --network web-network \
    --allow TCP:443 \
    --source-tags billing-frontend \
    --target-tags billing-data
Note, however, that the permissions assigned to a custom role are static; if new permissions are later added to the InstanceAdmin role, you must add them to your custom role yourself, as and when required.
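A sketch of creating such a role with gcloud (the role ID, title and permission list below are illustrative and abridged; a real role would carry the full InstanceAdmin permission set minus compute.instances.setTags):

```shell
# Custom role: InstanceAdmin-like, but without the ability to set tags.
gcloud iam roles create restrictedInstanceAdmin \
    --project web \
    --title "Instance Admin without tagging" \
    --permissions compute.instances.create,compute.instances.delete,compute.instances.get,compute.instances.list,compute.instances.start,compute.instances.stop
```

You would then grant developers this role instead of roles/compute.instanceAdmin, so only administrators who hold the full role can attach the billing-frontend tag.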

Using subnetworks to partition workloads

You can also create firewall rules using source and destination IP CIDR ranges if the workloads can be partitioned into subnetworks of distinct ranges as shown in the example diagram below.
Firewall setup using source and destination ranges.
To restrict developers’ ability to create VMs in these subnetworks, you can grant the Compute Network User role selectively to developers on specific subnetworks, or use Shared VPC.
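A per-subnet grant might look like the following sketch (the subnet, region and group names are hypothetical):

```shell
# Allow only the billing team to place VMs in the billing subnet.
gcloud compute networks subnets add-iam-policy-binding billing-subnet \
    --region us-central1 \
    --member "group:billing-devs@example.com" \
    --role "roles/compute.networkUser"
```

Developers without this binding cannot create instances in billing-subnet, so they cannot place a VM inside the IP range the firewall rule trusts.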

Here’s how to configure a firewall rule with source and destination ranges using gcloud:
gcloud compute firewall-rules create secure-billing-data \
    --network web-network \
    --allow TCP:443 \
    --source-ranges [SOURCE_CIDR_RANGE] \
    --destination-ranges [DESTINATION_CIDR_RANGE]
This method scales well for large VPCs and accommodates changes in the underlying VMs as long as the network topology remains unchanged. Note, however, that if a VM instance has can_ip_forward enabled, it may send traffic using the above source range and thus gain access to sensitive workloads.
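One way to keep an eye on that caveat is a periodic audit of which instances have forwarding enabled; a hypothetical check (not part of the setup above) might be:

```shell
# List VMs that could forward traffic claiming another source range.
gcloud compute instances list \
    --filter="canIpForward=true" \
    --format="table(name,zone,networkInterfaces[0].networkIP)"
```

Any instance in this output can originate packets from addresses other than its own, so it deserves extra scrutiny in a range-based firewall design.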

As you can see, there’s a lot to consider when configuring firewall rules for your VPCs. We hope these tips help you configure firewall rules in a more secure and efficient manner. To learn more about configuring firewall rules, check out the documentation.

Simplify Cloud VPC firewall management with service accounts

Firewalls provide the first line of network defense for any infrastructure. On Google Cloud Platform (GCP), Google Cloud VPC firewalls do just that—controlling network access to and between all the instances in your VPC. Firewall rules determine who's allowed to talk to whom and more importantly who isn’t. Today, configuring and maintaining IP-based firewall rules is a complex and manual process that can lead to unauthorized access if done incorrectly. That’s why we’re excited to announce a powerful new management feature for Cloud VPC firewall management: support for service accounts.

If you run a complex application on GCP, you’re probably already familiar with service accounts in Cloud Identity and Access Management (IAM) that provide an identity to applications running on virtual machine instances. Service accounts simplify the application management lifecycle by providing mechanisms to manage authentication and authorization of applications. They provide a flexible yet secure mechanism to group virtual machine instances with similar applications and functions with a common identity. Security and access control can subsequently be enforced at the service account level.

Using service accounts, when a cloud-based application scales up or down, new VMs are automatically created from an instance template and assigned the correct service account identity. This way, when the VM boots up, it gets the right set of permissions within the relevant subnet, and firewall rules are automatically configured and applied.

Further, the ability to use Cloud IAM ACLs with service accounts allows application managers to express their firewall rules in the form of intent: for example, allow my “application x” servers to access my “database y.” This eliminates the need to manually manage server IP address lists, while simultaneously reducing the likelihood of human error.
This process is leaps-and-bounds simpler and more manageable than maintaining IP address-based firewall rules, which can neither be automated nor templated for transient VMs with any semblance of ease.

Here at Google Cloud, we want you to deploy applications with the right access controls and permissions, right out of the gate. Click here to learn how to enable service accounts. And to learn more about Cloud IAM and service accounts, visit our documentation for using service accounts with firewalls.

What a year! Google Cloud Platform in 2017

The end of the year is a time for reflection . . . and making lists. As 2017 comes to a close, we thought we’d review some of the most memorable Google Cloud Platform (GCP) product announcements, white papers and how-tos, as judged by popularity with our readership.

As we pulled the data for this post, some definite themes emerged about your interests when it comes to GCP:
  1. You love to hear about advanced infrastructure: CPUs, GPUs, TPUs, better network plumbing and more regions.
  2. How we harden our infrastructure is endlessly interesting to you, as are tips about how to use our security services.
  3. Open source is always a crowd-pleaser, particularly if it presents a cloud-native solution to an age-old problem.
  4. You’re inspired by Google innovation: unique technologies that we developed to address internal, Google-scale problems.

So, without further ado, we present to you the most-read stories of 2017.

Cutting-edge infrastructure

If you subscribe to the “bigger is always better” theory of cloud infrastructure, then you were a happy camper this year. Early in 2017, we announced that GCP would be the first cloud provider to offer Intel Skylake architecture, GPUs for Compute Engine and Cloud Machine Learning became generally available and Shazam talked about why cloud GPUs made sense for them. In the spring, you devoured a piece on the performance of TPUs, and another about the then-largest cloud-based compute cluster. We announced yet more new GPU models and topping it all off, Compute Engine began offering machine types with a whopping 96 vCPUs and 624GB of memory.

It wasn’t just our chip offerings that grabbed your attention — you were pretty jazzed about Google Cloud network infrastructure too. You read deep dives about Espresso, our peering-edge architecture, TCP BBR congestion control and improved Compute Engine latency with Andromeda 2.1. You also dug stories about new networking features: Dedicated Interconnect, Network Service Tiers and GCP’s unique take on sneakernet: Transfer Appliance.

What’s the use of great infrastructure without somewhere to put it? 2017 was also a year of major geographic expansion. We started out the year with six regions, and ended it with 13, adding Northern Virginia, Singapore, Sydney, London, Germany, São Paulo and Mumbai. This was also the year that we shed our Earthly shackles, and expanded to Mars ;)

Security above all

Google has historically gone to great lengths to secure our infrastructure, and this was the year we discussed some of those advanced techniques in our popular Security in plaintext series. Among them: 7 ways we harden our KVM hypervisor, Fuzzing PCI Express and Titan in depth.

You also grooved on new GCP security services: Cloud Key Management and managed SSL certificates for App Engine applications. Finally, you took heart in a white paper on how to implement BeyondCorp as a more secure alternative to VPN, and support for the European GDPR data protection laws across GCP.

Open, hybrid development

When you think about GCP and open source, Kubernetes springs to mind. We open-sourced the container management platform back in 2014, but this year we showed that GCP is an optimal place to run it. It’s consistently among the first cloud services to run the latest version (most recently, Kubernetes 1.8) and comes with advanced management features out of the box. And as of this fall, it’s certified as a conformant Kubernetes distribution, complete with a new name: Google Kubernetes Engine.

Part of Kubernetes’ draw is as a platform-agnostic stepping stone to the cloud. Accordingly, many of you flocked to stories about Kubernetes and containers in hybrid scenarios. Think Pivotal Container Service and Kubernetes’ role in our new partnership with Cisco. The developers among you were smitten with Cloud Container Builder, a stand-alone tool for building container images, regardless of where you deploy them.

But our open source efforts aren’t limited to Kubernetes — we also made significant contributions to Spinnaker 1.0, and helped launch the Istio and Grafeas projects. You ate up our "Partnering on open source" series, featuring the likes of HashiCorp, Chef, Ansible and Puppet. Availability-minded developers loved our Customer Reliability Engineering (CRE) team’s missive on release canaries, and with API design: Choosing between names and identifiers in URLs, our Apigee team showed them a nifty way to have their proverbial cake and eat it too.

Google innovation

In distributed database circles, Google’s Spanner is legendary, so many of you were delighted when we announced Cloud Spanner and a discussion of how it defies the CAP Theorem. Having a scalable database that offers strong consistency and great performance seemed to really change your conception of what’s possible — as did Cloud IoT Core, our platform for connecting and managing “things” at scale. CREs, meanwhile, showed you the Google way to handle an incident.

2017 was also the year machine learning became accessible. For those of you with large datasets, we showed you how to use Cloud Dataprep, Dataflow, and BigQuery to clean up and organize unstructured data. It turns out you don’t need a PhD to learn to use TensorFlow, and for visual learners, we explained how to visualize a variety of neural net architectures with TensorFlow Playground. One Google Developer Advocate even taught his middle-school son TensorFlow and basic linear algebra, as applied to a game of rock-paper-scissors.

Natural language processing also became a mainstay of machine learning-based applications; here, we highlighted it with a lighthearted and relatable example. We launched the Video Intelligence API and showed how Cloud Machine Learning Engine simplifies the process of training a custom object detector. And the makers among you really went for a post that shows you how to add machine learning to your IoT projects with Google AIY Voice Kit. Talk about accessible!

Lastly, we want to thank all our customers, partners and readers for your continued loyalty and support this year, and wish you a peaceful, joyful, holiday season. And be sure to rest up and visit us again Next year. Because if you thought we had a lot to say in 2017, well, hold onto your hats.