
Using Jenkins on Google Compute Engine for distributed builds



Continuous integration has become a standard practice across many software development organizations: it automatically detects changes committed to your software repositories, runs them through unit, integration and functional tests, and finally creates an artifact (a JAR, Docker image, or binary). Jenkins is one of the most popular continuous integration tools, so we created the Compute Engine Plugin, which helps you provision, configure and scale Jenkins build environments on Google Cloud Platform (GCP).

With Jenkins, you define your build and test process, then run it continuously against your latest software changes. But as you scale up your continuous integration practice, you may need to run builds across fleets of machines rather than on a single server. With the Compute Engine Plugin, your DevOps teams can intuitively manage instance templates and launch build instances that automatically register themselves with Jenkins. When Jenkins needs to run jobs but there aren't enough available nodes, it provisions instances on demand based on your templates. Once work in the build system slows down, the plugin automatically deletes your unused instances, so you only pay for the instances you need. This autoscaling is an important feature of a continuous build system, which gets heavy use during primary work hours and much less when developers are off enjoying themselves. For further cost savings, you can also configure the Compute Engine Plugin to create your build instances as Preemptible VMs, which can save you up to 80% on the per-second pricing of your builds.
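The savings from autoscaling and Preemptible VMs compound. Here's a back-of-the-envelope sketch in Python; the $0.10/hour rate and 200 busy hours per month are made-up figures for illustration, not official pricing:

```python
# Illustrative cost comparison (hypothetical numbers, not official pricing):
# an always-on build agent vs. autoscaled agents that only run during busy
# hours, with and without the up-to-80% preemptible discount.

HOURS_PER_MONTH = 730
ON_DEMAND_RATE = 0.10        # assumed $/hour for one build VM
BUSY_HOURS = 200             # assumed hours/month the agents actually run
PREEMPTIBLE_DISCOUNT = 0.80  # "up to 80%" savings from the post

always_on = HOURS_PER_MONTH * ON_DEMAND_RATE
autoscaled = BUSY_HOURS * ON_DEMAND_RATE
autoscaled_preemptible = autoscaled * (1 - PREEMPTIBLE_DISCOUNT)

print(f"always-on:              ${always_on:.2f}/month")
print(f"autoscaled on-demand:   ${autoscaled:.2f}/month")
print(f"autoscaled preemptible: ${autoscaled_preemptible:.2f}/month")
```

Even with these rough numbers, letting the plugin delete idle agents cuts the bill by more than two thirds, and preemptible instances shrink the remainder further.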

Security is another concern with continuous integration systems. A compromise of this key organizational system can put the integrity of your software at risk. The Compute Engine Plugin uses the latest and most secure version of the Jenkins Java Network Launch Protocol (JNLP) remoting protocol. When bootstrapping the build instances, the Compute Engine Plugin creates a one-time SSH key and injects it into each build instance. That way, the impact of those credentials being compromised is limited to a single instance.

The Compute Engine Plugin lets you configure your build instances how you like them, including the networking. For example, you can:

  • Disable external IPs so that worker VMs are not publicly accessible
  • Use Shared VPC networks for greater isolation in your GCP projects
  • Apply custom network tags so that firewall rules can target your build instances precisely


The plugin also lets you attach GPU accelerators and Local SSDs to your instances to run your builds faster. You can also configure the plugin to use any of our wide variety of machine types, matching the CPU and memory requirements of your build instances to the workload for better utilization. Finally, the plugin lets you configure arbitrary startup scripts for your instance templates, in which you can perform the final configuration of your base images before your builds run.

If you use Jenkins on-premises, you can use the Compute Engine Plugin to create an ephemeral build farm in Compute Engine while keeping your Jenkins master and other necessary build dependencies behind your firewall. You can then use this extension of your build farm when you can’t meet demand for build capacity, or as a way to transition your workloads to the cloud in a practical and low-risk way.

Here is an example of the configuration page for an instance template:

Below is a high-level architecture of a scalable build system built with the Jenkins Compute Engine and Google Cloud Storage plugins. The Jenkins administrator configures an IAM service account that Jenkins uses to provision your build instances. Once builds run, Jenkins can upload artifacts to Cloud Storage for archiving (and move them to cheaper storage after a given time threshold).
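The "move artifacts to cheaper storage after a time threshold" step maps to a Cloud Storage lifecycle configuration on the artifact bucket. A sketch of one such rule, assuming a 30-day threshold and Nearline as the target storage class (both illustrative choices):

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30}
      }
    ]
  }
}
```

Applied to the artifact bucket (for example with `gsutil lifecycle set`), this automatically moves build artifacts older than 30 days to cheaper storage.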
Jenkins and continuous integration are powerful tools for modern software development shops, and we hope this plugin makes it easier for you to use Jenkins on GCP. For instructions on getting this set up in your Google Cloud project, follow our solution guide.

Introducing Cloud Memorystore: A fully managed in-memory data store service for Redis



At Redisconf 2018 in San Francisco last month, we announced the public beta of Cloud Memorystore for Redis, a fully-managed in-memory data store service. Today, the public beta is available for everyone to try. Cloud Memorystore provides a scalable, more secure and highly available Redis service fully managed by Google. It’s fully compatible with open source Redis, letting you migrate your applications to Google Cloud Platform (GCP) with zero code changes.

As more and more applications need to process data in real-time, you may want a caching layer in your infrastructure to reduce latency for your applications. Redis delivers fast in-memory caching, support for powerful data structures and features like persistence, replication and pub-sub. For example, data structures like sorted sets make it easy to maintain counters and are widely used to implement gaming leaderboards. Whether it’s simple session caching, developing games played by millions of users or building fast analytical pipelines, developers want to leverage the power of Redis without having to worry about VMs, patches, upgrades, firewall rules, etc.
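As an illustration of why sorted sets fit leaderboards so well, here is a small pure-Python sketch that mimics the semantics of the Redis ZADD and ZREVRANGE commands. Real code would issue these commands through a Redis client; the dict-based simulation just keeps the example self-contained:

```python
# Pure-Python sketch of the Redis sorted-set pattern behind leaderboards.
# In production these would be ZADD / ZREVRANGE calls on a Redis client.

def zadd(board: dict, member: str, score: float) -> None:
    board[member] = score  # like ZADD, this overwrites the member's score

def zrevrange(board: dict, start: int, stop: int):
    # Like ZREVRANGE: members ordered from highest score to lowest,
    # with an inclusive stop index.
    ranked = sorted(board.items(), key=lambda kv: kv[1], reverse=True)
    return [member for member, _ in ranked[start:stop + 1]]

leaderboard = {}
zadd(leaderboard, "alice", 3200)
zadd(leaderboard, "bob", 4100)
zadd(leaderboard, "carol", 3900)

print(zrevrange(leaderboard, 0, 2))  # top three players, best first
```

Because Redis keeps the set ordered by score on every write, fetching the top N players is cheap no matter how many members the leaderboard holds.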

Early adopters have been using Cloud Memorystore for the last few months, and they're thrilled with the service.
"At Descartes Labs, we have long been fans of Redis and its high performance. We have used Redis on everything from storing asynchronous task queues for tens of thousands of CPUs to a centralized persisted key-value pair store for the feature vectors output by our ML models. Cloud Memorystore provides an agile, scalable, no-operations Redis instance that we can instantly provision and scale without administration burdens."
- Tim Kelton, CoFounder and Cloud Architect, Descartes Labs
"Cloud Memorystore has provided us with a highly reliable Redis service and has been powering our critical applications. We have been using Cloud Memorystore as an early adopter and we are impressed with the reliability and performance of the service. Google has helped us forget about our Redis instances with Cloud Memorystore, and now we can focus more time on building our applications."
- George-Cristian, Software Developer, MDG



Feature Summary (Beta)

  • Redis version: 3.2.11
  • Max instance size: 300 GB
  • Max network bandwidth: 12 Gbps
  • High availability with automatic failover: Yes
  • Memory scaling: Yes
  • Stackdriver Monitoring and Logging: Yes
  • Private IP access: Yes
  • IAM roles: Yes
  • Availability SLA¹: Yes
  • On-demand pricing: Yes

¹Applicable for GA release only.

Simple and flexible provisioning
How you choose to deploy Cloud Memorystore for Redis depends on the availability and performance needs of your application. You can deploy Redis as a standalone instance or with a replica to provide high availability. But while replicating a Redis instance provides data redundancy, you still need to do the heavy lifting of health checking, electing a primary, redirecting client connections on failover, and so on. The Cloud Memorystore service takes away all this complexity and makes it easy for you to deploy a Redis instance that meets your application's needs.

Cloud Memorystore provides two tiers of service, Basic and Standard, each with different availability characteristics. Regardless of the tier of service, you can provision a Redis instance as small as 1 GB up to 300 GB. With network throughput up to 12 Gbps, Cloud Memorystore supports applications with very high bandwidth needs.

Here is a summary of the capabilities of each tier:


Feature                          Basic Tier   Standard Tier
Max instance size                300 GB       300 GB
Max network bandwidth            12 Gbps      12 Gbps
Stackdriver Monitoring support   Yes          Yes
Memory scaling¹                  Yes          Yes
Cross-zone replication           No           Yes
Automatic failover               No           Yes
Availability SLA²                No           99.9%

¹Basic Tier instances experience downtime and a full cache flush during scaling; Standard Tier instances experience minimal downtime and may lose a small amount of unreplicated data. ²Applicable for GA release only.

Provisioning a Cloud Memorystore instance is simple: just choose a tier, an instance size that meets your availability and performance needs, and a region. Your Redis instance will be up and running within a few minutes.
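In terms of the API, a create request boils down to exactly those choices. A hedged Python sketch of assembling such a request body; the field names follow our reading of the beta REST API, and `locationId` in particular is an assumption, so consult the API reference for the authoritative shape:

```python
# Sketch of the parameters a Cloud Memorystore instance is created with.
# Field names are assumptions modeled on the beta REST API; verify against
# the official API reference before using.

def make_instance_request(tier: str, memory_gb: int, location: str) -> dict:
    assert tier in ("BASIC", "STANDARD_HA")   # the two service tiers
    assert 1 <= memory_gb <= 300              # beta size limits from the table above
    return {
        "tier": tier,
        "memorySizeGb": memory_gb,
        "locationId": location,               # hypothetical field for placement
    }

req = make_instance_request("STANDARD_HA", 5, "us-central1")
print(req["tier"], req["memorySizeGb"])
```

A Standard Tier request like this one gets cross-zone replication and automatic failover; swapping in "BASIC" trades those away for a lower price.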


“Lift and shift” applications
Once provisioned, using Cloud Memorystore is a breeze. You can connect to the Redis instance using any of the tools and libraries you commonly use in your environment. Cloud Memorystore clients use an IP address to connect to the instance. Applications always connect to a single IP address, and Cloud Memorystore ensures that traffic is directed to the primary if there is a failover.
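This works because Cloud Memorystore speaks the standard open-source Redis wire protocol (RESP), so any Redis client library can talk to it unchanged. For the curious, a minimal Python sketch of the bytes a client puts on the wire for a command:

```python
# Minimal encoder for the Redis wire protocol (RESP), the format every
# Redis client library produces under the hood.

def encode_command(*parts: str) -> bytes:
    # A command is a RESP array: "*<count>\r\n" followed by each argument
    # as a bulk string "$<length>\r\n<bytes>\r\n".
    out = f"*{len(parts)}\r\n".encode()
    for part in parts:
        data = part.encode()
        out += b"$" + str(len(data)).encode() + b"\r\n" + data + b"\r\n"
    return out

print(encode_command("PING"))
print(encode_command("SET", "session:42", "active"))
```

Since the protocol is identical to open-source Redis, pointing an existing client at the Cloud Memorystore IP address really is a zero-code-change migration.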

Other key features
Whether it’s provisioning, monitoring or scaling memory, Cloud Memorystore simplifies common management tasks.

Security
Open-source Redis has very minimal security, and as a developer or administrator, it can be challenging to ensure all Redis instances in your organization are protected. With Cloud Memorystore, Redis instances are deployed using a private IP address, which prevents the instance from being accessed from the internet. You can also use Cloud Identity & Access Management (IAM) roles to ensure granular access for managing the instance. Additionally, authorized networks ensure that the Redis instance is accessible only when connected to the authorized VPC network.

Stackdriver integration
Cloud Memorystore instances publish all of their key metrics to Stackdriver, Google Cloud's monitoring and management suite. You can monitor all of your instances from the Stackdriver dashboard, and use Stackdriver Logging to get more insight into your Redis instances.


Seamless memory scaling
When a mobile application goes viral, it may be necessary to provision a larger Redis instance to meet latency and throughput needs. With Cloud Memorystore you can scale up the instance with a few clicks, and the Standard High Availability tier lets you scale the instance with minimal disruption to the application.

On-demand pricing
Cloud Memorystore provides on-demand pricing with no upfront cost and has per second billing. Moreover, there is no charge for network traffic coming in and out of a Cloud Memorystore instance. For more information, refer to Cloud Memorystore pricing.

Coming soon to Cloud Memorystore
This Cloud Memorystore public beta release is just a starting point for us. Here is a preview of some of the features that are coming soon.

We are excited about what is upcoming for Cloud Memorystore and we would love to hear your feedback! If you have any requests or suggestions, please let us know through Issue Tracker. You can also join the conversation at Cloud Memorystore discussion group.

Sign up for a $300 credit to try Cloud Memorystore and the rest of GCP. Start with a small Redis instance for testing and development, and then when you’re ready, scale up to serve performance-intensive applications.

Want to learn more? Register for the upcoming webinar on Tuesday, June 26th 9:00 am PT to hear all about Cloud Memorystore for Redis.

Building a serverless mobile development pipeline on GCP: new solution documentation



When it comes to mobile applications, automating app distribution helps ensure hardened, consistent delivery and speeds up testing. But mobile application delivery pipelines can be challenging to build, because mobile development environments require you to install specific SDKs, and even distributing beta versions requires specific secrets and signing credentials.

Containers are a great way to distribute mobile applications, since you can incorporate the specific build requirements into the container image. Our new solution, Creating a Serverless Mobile Delivery Pipeline in Google Cloud Platform, demonstrates how you can use our Container Builder product to automate the build and distribution of the beta versions of your mobile application for just pennies a build. Check it out, and let us know what you think!
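As a rough sketch of what such a pipeline's build configuration might look like, here is a hypothetical `cloudbuild.yaml`. The builder image, bucket, and Gradle task are all placeholders; the solution guide has the real configuration:

```yaml
steps:
# Build the app inside a container image with the mobile SDK baked in
# (image name is a placeholder for your own builder image)
- name: 'gcr.io/example-project/android-builder'
  args: ['./gradlew', 'assembleDebug']
artifacts:
  objects:
    # Upload the resulting beta APK for distribution (placeholder paths)
    location: 'gs://example-bucket/builds/'
    paths: ['app/build/outputs/apk/debug/app-debug.apk']
```

Because the SDKs, secrets, and signing credentials live in the container image and build environment rather than on a developer's laptop, every beta build comes out identical and reproducible.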

GCP is building a region in Zürich



Click here for the German version. Danke!


Switzerland is a country famous for pharmaceuticals, manufacturing and banking, and its central location in Europe makes it an attractive location for cloud. Today, we’re announcing a Google Cloud Platform (GCP) region in Zürich to make it easier for businesses to build highly available, performant applications. I am originally from Switzerland, so this cloud infrastructure investment is personally exciting for me.

Zürich will be our sixth region in Europe, joining our future region in Finland, and existing regions in the Netherlands, Belgium, Germany, and the United Kingdom. Overall, the Swiss region brings the total number of existing and announced GCP regions around the world to 20—with more to come!

The Swiss region will open in the first half of 2019. Customers in Switzerland will benefit from lower latency for their cloud-based workloads and data, and the region is also designed for high availability, launching with three zones to protect against service disruptions.

We look forward to welcoming you to the GCP Swiss region, and we’re excited to see what you build with our platform. Our locations page provides updates on the availability of additional services and regions. Contact us to request early access to new regions and help us prioritize what we build next.

Apigee named a Leader in the Gartner Magic Quadrant for Full Life Cycle API Management for the third consecutive time



APIs are the de-facto standard for building and connecting modern applications. But securely delivering, managing and analyzing APIs, data and services, both inside and outside an organization, is complex. And it’s getting even more challenging as enterprise IT environments grow dependent on combinations of public, private and hybrid cloud infrastructures.

Choosing the right APIs can be critical to a platform’s success. Likewise, full lifecycle API management can be a key ingredient in running a successful API-based program. Tools like Gartner’s Magic Quadrant for Full Life Cycle API Management help enterprises evaluate these platforms so they can find the right one to fit their strategy and planning.

Today, we’re thrilled to share that Gartner has recognized Apigee as a Leader in the 2018 Magic Quadrant for Full Life Cycle API Management. This year, Apigee was not only positioned furthest on Gartner’s “completeness of vision” axis for the third time running, it was also positioned highest in “ability to execute.”

Ticketmaster, a leader in ticket sales and distribution, has used Apigee since 2013. The company uses the Apigee platform to enforce consistent security across its APIs, and to help reach new audiences by making it easier for partners and developers to build upon and integrate with Ticketmaster services.

"Apigee has played a key role in helping Ticketmaster build its API program and bring ‘moments of joy’ to fans everywhere, on any platform," said Ismail Elshareef, Ticketmaster's senior vice president of fan experience and open platform.

We’re excited that APIs and API management have become essential to how enterprises deliver applications in and across clouds, and we’re honored that Apigee continues to be recognized as a leader in its category. Most importantly, we look forward to continuing to help customers innovate and accelerate their businesses as part of Google Cloud.

The Gartner 2018 Magic Quadrant for Full Life Cycle API Management is available at no charge here.

To learn more about Apigee, please visit the Apigee website.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available from Apigee here.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Introducing the Kubernetes Podcast from Google



With KubeCon in Copenhagen this week, it’s shaping up to be a busy week for Kubernetes news. Here’s another tidbit: Starting this week, we are launching the Kubernetes Podcast from Google, hosted by yours truly and Google Cloud Kubernetes advocacy lead, Craig Box. In this weekly conversation, we’ll focus on all the great things that are happening in the world of Kubernetes. From the news of the week to interviews with people in the community, we’re helping you stay up to date on everything Kubernetes.

In our first episode, we spoke with Paris Pittman, Kubernetes Community Manager, about the community, how it's different, and how to get involved (if you aren't already). Next week, we're talking about Kubeflow with David Aronchick.



We’re just getting started so we’d love for you to subscribe and give it a listen. You can check out the podcast on Google Play Music Podcasts, iTunes Podcasts, or your favorite podcast client (just search for Kubernetes Podcast and look for our logo). You can also subscribe by scanning the QR code for your platform below.


We hope you enjoy the podcast. Be sure to let us know if there are any particular topics you’d like us to discuss, guests you think we should interview, or technology/projects we should know about by email ([email protected]) or on Twitter (@KubernetesPod).

Expanding our GPU portfolio with NVIDIA Tesla V100



Cloud-based hardware accelerators like Graphic Processing Units, or GPUs, are a great choice for computationally demanding workloads such as machine learning and high-performance computing (HPC). We strive to provide the widest selection of popular accelerators on Google Cloud to meet your needs for flexibility and cost. To that end, we’re excited to announce that NVIDIA Tesla V100 GPUs are now publicly available in beta on Compute Engine and Kubernetes Engine, and that NVIDIA Tesla P100 GPUs are now generally available.

Today's most demanding workloads and industries require the fastest hardware accelerators. You can now select as many as eight NVIDIA Tesla V100 GPUs, 96 vCPUs and 624GB of system memory in a single VM, receiving up to 1 petaflop of mixed-precision hardware acceleration performance. The next generation of NVLink interconnects delivers up to 300GB/s of GPU-to-GPU bandwidth, 9X that of PCIe, boosting performance on deep learning and HPC workloads by up to 40%. NVIDIA V100s are available immediately in the following regions: us-west1, us-central1 and europe-west4. Each V100 GPU is priced as low as $2.48 per hour for on-demand VMs and $1.24 per hour for Preemptible VMs. Like our other GPUs, the V100 is billed by the second, and Sustained Use Discounts apply.
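Per-second billing makes the cost of a burst workload easy to reason about. A quick Python sketch using the on-demand and preemptible rates quoted above; the 90-minute, 8-GPU job is a made-up example:

```python
# Cost of a GPU job under per-second billing, using the V100 rates from
# the post ($2.48/hr on-demand, $1.24/hr preemptible, per GPU).
# The job size itself is a hypothetical example.

def job_cost(gpus: int, seconds: int, hourly_rate: float) -> float:
    return gpus * seconds * hourly_rate / 3600

# A 90-minute training run on 8 V100s:
on_demand = job_cost(8, 90 * 60, 2.48)
preemptible = job_cost(8, 90 * 60, 1.24)
print(f"on-demand:   ${on_demand:.2f}")
print(f"preemptible: ${preemptible:.2f}")
```

For fault-tolerant training jobs that checkpoint regularly, the preemptible rate halves the bill outright.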

Our customers often ask which GPU is the best for their CUDA-enabled computational workload. If you’re seeking a balance between price and performance, the NVIDIA Tesla P100 GPU is a good fit. You can select up to four P100 GPUs, 96 vCPUs and 624GB of memory per virtual machine. Further, the P100 is also now available in europe-west4 (Netherlands) in addition to us-west1, us-central1, us-east1, europe-west1 and asia-east1.

Our GPU portfolio offers a wide selection of performance and price options to help meet your needs. Rather than selecting a one-size-fits-all VM, you can attach our GPUs to custom VM shapes and take advantage of a wide selection of storage options, paying for only the resources you need.


NVIDIA Tesla V100
  • GPUs per VM: 1 or 8 (2 and 4 coming in beta)
  • vCPUs*: 1-96
  • System memory*: 1-624 GB
  • GPU memory: 16 GB
  • Hourly price**: $2.48 Standard, $1.24 Preemptible

NVIDIA Tesla P100
  • GPUs per VM: 1, 2 or 4
  • vCPUs*: 1-96
  • System memory*: 1-624 GB
  • GPU memory: 16 GB
  • Hourly price**: $1.46 Standard, $0.73 Preemptible

NVIDIA Tesla K80
  • GPUs per VM: 1, 2, 4 or 8
  • vCPUs*: 1-64
  • System memory*: 1-416 GB
  • GPU memory: 12 GB
  • Hourly price**: $0.45 Standard, $0.22 Preemptible

* Maximum vCPU count and system memory limit on the instance might be smaller depending on the zone or the number of GPUs selected.
** GPU prices are listed as an hourly rate per GPU attached to a VM and are billed by the second. Pricing for attaching GPUs to preemptible VMs is different from pricing for attaching GPUs to non-preemptible VMs. Prices listed are for US regions; prices for other regions may differ. Additional Sustained Use Discounts of up to 30% apply to GPU on-demand usage only.


Google Cloud makes managing GPU workloads easy for both VMs and containers. On Google Compute Engine, customers can use instance templates and managed instance groups to easily create and scale GPU infrastructure. You can also use NVIDIA V100s and our other GPU offerings in Kubernetes Engine, where Cluster Autoscaler helps provide flexibility by automatically creating nodes with GPUs, and scaling them down to zero when they are no longer in use. Together with Preemptible GPUs, both Compute Engine managed instance groups and Kubernetes Engine’s Autoscaler let you optimize your costs while simplifying infrastructure operations.
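On Kubernetes Engine, requesting a GPU is a one-line resource limit on the pod spec; the Cluster Autoscaler then creates a GPU node if none is available and scales it away when the pod finishes. A minimal sketch, where the pod and image names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training          # hypothetical name
spec:
  containers:
  - name: trainer
    image: gcr.io/example-project/trainer   # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1     # schedules onto a GPU node; the autoscaler
                              # can provision one on demand
```

Combined with preemptible node pools, this gives you GPU capacity that exists only while a job actually needs it.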

LeadStage, a marketing automation provider, is impressed with the value and scale of GPUs on Google Cloud.

"NVIDIA GPUs work great for complex Optical Character Recognition tasks on poor quality data sets. We use V100 and P100 GPUs on Google Compute Engine to convert millions of handwritten documents, survey drawings, and engineering drawings into machine-readable data. The ability to deploy thousands of Preemptible GPU instances in seconds was vastly superior to the capacity and cost of our previous GPU cloud provider." 
— Adam Seabrook, Chief Executive Officer, LeadStage
Chaos Group provides rendering solutions for visual effects, film, architectural, automotive design and media and entertainment, and is impressed with the speed of NVIDIA V100s on Google Cloud.

"V100 GPUs are great for running V-Ray Cloud rendering services. Among all possible hardware configurations that we've tested, V100 ranked #1 on our benchmarking platform. Thanks to V100 GPUs we can use cloud GPUs on-demand on Compute Engine to render our clients' jobs extremely fast."
— Boris Simandoff, Director of Engineering, Chaos Group
If you have computationally demanding workloads, GPUs can be a real game-changer. Check out our GPU page to learn more about how you can benefit from P100, V100 and other Google Cloud GPUs!

Accelerating innovation for cloud-native managed databases



Every application has to store and access operational data, usually in a database. Managed database services can help you ship apps faster and reduce operational toil so you can focus on what makes your business successful and unique. To quote analyst firm Gartner, “Cloud-based deployment models and dbPaaS offerings are growing rapidly as an alternative, more flexible, delivery method¹,” and our customers’ buying habits are no exception.

Managed database services abstract away the underlying infrastructure so you can interact with a database and an API without worrying about servers, racks, and replication. Google Cloud has a strategy of providing managed database services for your favorite open source data stores as well as proprietary technologies developed at Google over the years.

Today, we’re excited to announce a number of cloud database improvements:

  • Commit timestamps for Cloud Spanner, now available
  • Cloud Bigtable replication, now in beta
  • Cloud Memorystore for Redis, now in beta
  • Cloud SQL for PostgreSQL, now generally available

Commit timestamps for Cloud Spanner


Cloud Spanner is the only globally distributed relational database that supports external (strong) consistency across regions and continents, and that ability opens new opportunities for businesses. Since it became generally available last May, we've seen a surge of customers such as Optiva and Bandai Namco building mission-critical systems on Cloud Spanner. We continue to add product features based on customer requests. Most recently, we added commit timestamps to Cloud Spanner, which let you determine the exact ordering of mutations and build changelogs.
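As a sketch of how commit timestamps appear in a schema, a changelog-style table can declare a timestamp column with the `allow_commit_timestamp` option. The table and column names here are illustrative:

```sql
CREATE TABLE SongChanges (
  SongId     STRING(36) NOT NULL,
  ChangeId   STRING(36) NOT NULL,
  -- Populated with the transaction's commit timestamp at write time,
  -- giving an exact ordering of mutations for changelogs.
  LastUpdate TIMESTAMP NOT NULL OPTIONS (allow_commit_timestamp = true)
) PRIMARY KEY (SongId, ChangeId);
```

At write time, the application asks Spanner to fill the column with the commit timestamp of the transaction, so rows sort in the true order the mutations committed.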

Cloud Bigtable replication beta

We are announcing that Cloud Bigtable regional replication is rolling out starting today and will be available to 100% of customers by May 1, 2018. A replicated Cloud Bigtable database can provide higher availability, additional read throughput, higher durability and resilience in the face of zonal failures. With the click of a button, you can now replicate your Cloud Bigtable data set asynchronously across zones within a GCP region, providing a scalable, fully managed, replicated wide-column database service for when low latency, random data access and scalability are critical.

Cloud Memorystore for Redis beta

Support for managed Redis is a popular customer request. On May 9th, we’ll begin offering Cloud Memorystore for Redis in beta, an in-memory data store service for Redis that is scalable, more secure, highly available and fully managed by Google. Compatibility with the Redis protocol means you can migrate your applications with zero code changes.

Redis is commonly used to build solutions such as application caches, gaming leaderboards, and incremental counters. Its fast in-memory caching, powerful data structures and features like replication and pub-sub make it ideal for these low-latency use cases. Redis can be deployed as a standalone instance or replicated for high availability. Cloud Memorystore removes the operational overhead of setting up and managing these instances, so it's easy to deploy what your application needs.

Cloud SQL for PostgreSQL now generally available

PostgreSQL support for Cloud SQL was our #1 customer database request and we are excited it has reached general availability. During the beta period, we added high availability and replication, higher performance instances with up to 416GB of RAM, and support for 19 additional extensions. It also joined the Google Cloud Business Associates Agreement (BAA) for HIPAA-covered customers.

The service is backed by high-availability functionality, Google Cloud support, and a 99.95% availability SLA anywhere in the world. DB-Engines, an independent service that ranks database technologies, named PostgreSQL its database of the year for 2017 because of its explosive growth in popularity.

And to make it easy for you to migrate to a managed database service, Cloud SQL for PostgreSQL runs standard open-source PostgreSQL. Further, we plan to give any improvements that we make to PostgreSQL back to the community.

And stay tuned for further developments, as we continue to incorporate new products and features into our managed database portfolio. Your data grows and changes, and your database should grow along with it—in engine choice, in scope, in features, in reliability and in ease of use. Our goal is to offer the most complete suite of managed database services to keep your data accessible, more secure and available, and let you focus on your business. Reach out to us to let us know what kinds of managed database services you’d like us to offer next.

(1) Source: Gartner IT Market Clock for Database Management Systems 2018, Donald Feinberg, Adam M. Ronthal, Ankush Jain 18 January 2018

Rolling out the red carpet for GSoC 2018 students!

Congratulations to our 2018 Google Summer of Code (GSoC) students and a big thank you to everyone who applied! Our 206 mentoring organizations have chosen the 1,264 students that they'll be working with during the 14th Google Summer of Code. This year’s students come from 64 different countries!

The next step for participating students is the Community Bonding period which runs from April 23rd through May 15th. During this time, students will get up to speed on the culture and code base of their new community. They’ll also get acquainted with their mentor(s) and learn more about the languages or tools they will need to complete their projects. Coding begins May 15th and will continue throughout the summer until August 14th.

To the more than 3,800 students who were not chosen this year - don't be discouraged! Many students apply more than once to GSoC before being accepted. You can improve your odds for next time by contributing to the open source project of your choice directly; organizations are always eager for new contributors! Look around GitHub and elsewhere on the internet for a project that interests you and get started.

Happy coding, everyone!

By Stephanie Taylor, GSoC Program Lead

Cloud SQL for PostgreSQL now generally available and ready for your production workloads



Among open-source relational databases, PostgreSQL is one of the most popular—and the most sought-after by Google Cloud Platform (GCP) users. Today, we're thrilled to announce that PostgreSQL is now generally available and fully supported for all customers on Cloud SQL, our fully managed database service.

Backed by Google’s 24x7 SRE team, high availability with automatic failover, and our SLA, Cloud SQL for PostgreSQL is ready for the demands of your production workloads. It’s built on the strength and reliability of Google Cloud’s infrastructure, scales to support critical workloads and automates all of your backups, replication, patches and updates while ensuring greater than 99.95% availability anywhere in the world. Cloud SQL lets you focus on your application, not your IT operations.

While Cloud SQL for PostgreSQL was in beta, we added high availability and replication, higher performance instances with up to 416GB of RAM, and support for 19 additional extensions. It also joined the Google Cloud Business Associates Agreement (BAA) for HIPAA-covered customers.

Cloud SQL for PostgreSQL runs standard PostgreSQL to maintain compatibility. And when we make improvements to PostgreSQL, we make them available for everyone by contributing to the open source community.

Throughout beta, thousands of customers from a variety of industries such as commercial real estate, satellite imagery, and online retail, deployed workloads on Cloud SQL for PostgreSQL. Here’s how one customer is using Cloud SQL for PostgreSQL to decentralize their data management and scale their business.

How OneMarket decentralizes data management with Cloud SQL


OneMarket is reshaping the way the world shops. Through the power of data, technology, and cross-industry collaboration, OneMarket’s goal is to create better end-to-end retail experiences for consumers.

Built out of Westfield Labs and Westfield Retail Solutions, OneMarket unites retailers, brands, venues and partners to facilitate collaboration on data insights and implement new technologies, such as natural language processing, artificial intelligence and augmented reality at scale.

To build the platform for a network of retailers, venues and technology partners, OneMarket selected GCP, citing its global locations and managed services such as Kubernetes Engine and Cloud SQL.
"I want to focus on business problems. My team uses managed services, like Cloud SQL for PostgreSQL, so we can focus on shipping better quality code and improve our time to market. If we had to worry about servers and systems, we would be spending a lot more time on important, but somewhat insignificant management tasks. As our CTO says, we don’t want to build the plumbing, we want to build the house." 
— Peter McInerney, Senior Director of Technical Operations at OneMarket 
OneMarket's platform comprises 15 microservices, which the team develops, deploys and updates quickly and safely. Each microservice is backed by one or more independent storage services, and Cloud SQL for PostgreSQL backs every microservice with relational data requirements, decentralizing data management and ensuring that each service is independently scalable.
"I sometimes reflect on where we were with Westfield Digital in 2008 and 2009. The team was constantly in the datacenter to maintain servers and manage failed disks. Now, it is so easy to scale."
— Peter McInerney 

Because the team was able to focus on data models rather than database management, developing the OneMarket platform proceeded smoothly and is now in production, reliably processing transactions for its global customers. Using BigQuery and Cloud SQL for PostgreSQL, OneMarket analyzes data and provides insights into consumer behavior and intent to retailers around the world.

Peter’s advice for companies evaluating cloud solutions like Cloud SQL for PostgreSQL: “You just have to give it a go. Pick a non-critical service and get it running in the cloud to begin building confidence.”

Getting started with Cloud SQL for PostgreSQL 


Connecting to a Google Cloud SQL database is the same as connecting to a PostgreSQL database—you use standard connectors and standard tools such as pg_dump to migrate data. If you need assistance, our partner ecosystem can help you get acquainted with Cloud SQL for PostgreSQL. To streamline data transfer, reach out to Google Cloud partners Alooma, Informatica, Segment, Stitch, Talend and Xplenty. For help with visualizing analytics data, try ChartIO, iCharts, Looker, Metabase, and Zoomdata.
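A minimal sketch of that migration path with the standard tools; the hosts, users, and database names below are placeholders:

```bash
# Dump the source database with standard PostgreSQL tooling
# (placeholder host, user, and database names)
pg_dump -h source-db.example.com -U app_user app_db > app_db.sql

# Load the dump into the Cloud SQL for PostgreSQL instance over its IP
psql -h 203.0.113.10 -U postgres -d app_db -f app_db.sql
```

Because Cloud SQL runs standard PostgreSQL, no conversion step sits between the dump and the restore.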

Sign up for a $300 credit to try Cloud SQL and the rest of GCP. You can start with inexpensive micro instances for testing and development, and scale them up to serve performance-intensive applications when you’re ready.

Cloud SQL for PostgreSQL reaching general availability is a huge milestone and the best is still to come. Let us know what other features and capabilities you need with our Issue Tracker and by joining the Cloud SQL discussion group. We’re glad you’re along for the ride, and look forward to your feedback!