Partnering with the financial ecosystem with Google Pay


In every geography where Google Pay is present, our approach is consistently one of partnering with the existing financial services and banking systems to help scale frictionless delivery of financial products and services and to contribute to the goal of financial inclusion. 


This vision has been consistent since our launch in India, and several of our offerings are built on top of NPCI’s pioneering UPI payment network and infrastructure, which has grown over 190X in the last four years and today processes over INR 6 trillion in value. 


At its core, Google Pay is a simple and secure mobile app for sending and receiving money, providing the functionality of a seamless payment experience, which is critical for consumers and merchants. Over the years, we have invested in efforts to bring the convenience of UPI to both online and offline merchants. 


Furthering that objective, in 2019 we announced the launch of the Spot Platform on Google Pay, a surface for merchants of all types - offline or digital native, small or large, across use cases - to find payment-ready users. 


With more and more users embracing Google Pay in India, our Spot platform works as an additional discovery channel for many businesses to build and offer innovative new experiences that drive adoption of their services. The use cases span ticket purchases, food ordering, paying for essential services like utility bills, shopping, and access to various financial products. 


Today we have close to 400 merchant Spots on Google Pay, and in this journey we have seen that financial product offerings perform especially well, with Spot experiences delivered by financial services players like CashE, Groww, 5paisa, and Zest Money seeing significant growth and engagement from users on Google Pay. This engagement underscores that payments platforms are a great surface for delivering financial services to users across the country. 


That being said, many of these Spot experiences, especially in the financial products and services categories - be it insurance, wealth management, credit or other financial services - operate in regulated industries, and each merchant is required to be duly authorised to provide those services before we onboard them onto the platform. As Google Pay, our role is firmly circumscribed to providing these merchants a surface where Google Pay users can discover and benefit from these offerings - be it credit products, insurance or any others. 


There have been a few instances where these offerings have been reported as ‘Google Pay’s offerings’, which fuels misinterpretation. To be clear, we have always viewed our role firmly as that of a partner to the existing financial system, bringing unique skill sets and offerings to drive further adoption of digital payments in the country.


The success of UPI and digital payments in India has opened many new opportunities for the financial services industry to partner deeply with fintech players in the country, and we are encouraged by the progressive and tech-positive approach of the regulators to drive greater financial inclusion. We are committed to playing our role by using technology as a means to level social inequalities and to contribute to this vision while operating within the purview of India’s legal and regulatory frameworks.


Posted by Sajith Sivanandan, Business Head, Payments and NBU, Google APAC


Dev Channel Update for Desktop

The Dev channel has been updated to 95.0.4628.3 for Windows, Linux and Mac.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Prudhvikumar Bommana

Google Chrome

Discovering Anomalous Data with Self-Supervised Learning

Anomaly detection (sometimes called outlier detection or out-of-distribution detection) is one of the most common machine learning applications across many domains, from defect detection in manufacturing to fraudulent transaction detection in finance. It is most often used when it is easy to collect a large number of known-normal examples but anomalous data is rare and difficult to find. As such, one-class classification, such as one-class support vector machine (OC-SVM) or support vector data description (SVDD), is particularly relevant to anomaly detection because it assumes the training data are all normal examples, and aims to identify whether an example belongs to the same distribution as the training data. Unfortunately, these classical algorithms do not benefit from the representation learning that makes machine learning so powerful. On the other hand, substantial progress has been made in learning visual representations from unlabeled data via self-supervised learning, including rotation prediction and contrastive learning. As such, combining one-class classifiers with these recent successes in deep representation learning is an under-explored opportunity for the detection of anomalous data.

In “Learning and Evaluating Representations for Deep One-class Classification”, presented at ICLR 2021, we outline a 2-stage framework that makes use of recent progress on self-supervised representation learning and classic one-class algorithms. The algorithm is simple to train and results in state-of-the-art performance on various benchmarks, including CIFAR, f-MNIST, Cat vs Dog and CelebA. We then follow up on this in “CutPaste: Self-Supervised Learning for Anomaly Detection and Localization”, presented at CVPR 2021, in which we propose a new representation learning algorithm under the same framework for a realistic industrial defect detection problem. The framework achieves a new state-of-the-art on the MVTec benchmark.

A Two-Stage Framework for Deep One-Class Classification
While end-to-end learning has demonstrated success in many machine learning problems, including deep learning algorithm designs, such an approach for deep one-class classifiers often suffers from degeneration, in which the model outputs the same results regardless of the input.

To combat this, we apply a two-stage framework. In the first stage, the model learns deep representations with self-supervision. In the second stage, we adopt one-class classification algorithms, such as OC-SVM or a kernel density estimator, using the learned representations from the first stage. This two-stage algorithm is not only robust to degeneration, but also enables one to build more accurate one-class classifiers. Furthermore, the framework is not limited to specific representation learning and one-class classification algorithms — that is, one can easily plug in different algorithms, which is useful as more advanced approaches are developed.

A deep neural network is trained to generate the representations of input images via self-supervision. We then train one-class classifiers on the learned representations.
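
To make the decoupling concrete, here is a minimal sketch of the two stages, assuming a frozen stage-1 encoder (trained separately with self-supervision) and scikit-learn for the stage-2 one-class model. The extract_features helper and the anomaly-score conventions are illustrative choices, not the reference implementation.

```python
# Minimal sketch of the 2-stage framework (illustrative, not the official code).
# Stage 1: a self-supervised encoder maps images to representation vectors.
# Stage 2: a classic one-class model (OC-SVM or KDE) is fit on those vectors.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import KernelDensity

def extract_features(encoder, images):
    """Run the frozen stage-1 encoder over a list of images (hypothetical helper)."""
    return np.stack([encoder(x) for x in images])

def fit_one_class(train_features, method="ocsvm"):
    """Fit a stage-2 one-class model on normal-only features; return a scoring function."""
    if method == "ocsvm":
        model = OneClassSVM(kernel="rbf", nu=0.1).fit(train_features)
        return lambda z: -model.decision_function(z)   # higher = more anomalous
    model = KernelDensity(bandwidth=1.0).fit(train_features)
    return lambda z: -model.score_samples(z)           # low density = anomalous

# Usage (illustrative): fit on normal-only training data, then score test examples.
# anomaly_score = fit_one_class(extract_features(encoder, normal_images))
# test_scores = anomaly_score(extract_features(encoder, test_images))
```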

Semantic Anomaly Detection
We test the efficacy of our 2-stage framework for anomaly detection by experimenting with two representative self-supervised representation learning algorithms, rotation prediction and contrastive learning.

Rotation prediction refers to a model’s ability to predict the rotated angles of an input image. Due to its promising performance in other computer vision applications, the end-to-end trained rotation prediction network has been widely adopted for one-class classification research. The existing approach typically reuses the built-in rotation prediction classifier for learning representations to conduct anomaly detection, which is suboptimal because those built-in classifiers are not trained for one-class classification.
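
As a hedged illustration of the rotation-prediction pretext task, the PyTorch-style snippet below builds a four-way classification problem over 0/90/180/270 degree rotations; the backbone and rot_head modules and the loss wiring are assumed placeholders rather than the paper's exact setup, and in the two-stage framework only the backbone's representations are kept for the second stage.

```python
import torch
import torch.nn as nn

def rotation_batch(images):
    """Create a 4-way rotation-prediction batch: each image rotated by 0/90/180/270 degrees."""
    rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]  # NCHW batch
    labels = torch.arange(4).repeat_interleave(images.size(0))         # rotation class per copy
    return torch.cat(rotated, dim=0), labels

def rotation_loss(backbone, rot_head, images):
    """Pretext loss: predict which of the four rotations was applied (illustrative wiring)."""
    x, y = rotation_batch(images)
    return nn.functional.cross_entropy(rot_head(backbone(x)), y)
```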

In contrastive learning, a model learns to pull together representations from transformed versions of the same image, while pushing representations of different images away. During training, as images are drawn from the dataset, each is transformed twice with simple augmentations (e.g., random cropping or color changing). We minimize the distance of the representations from the same image to encourage consistency and maximize the distance between different images. However, usual contrastive learning converges to a solution where all the representations of normal examples are uniformly spread out on a sphere. This is problematic because most of the one-class algorithms determine the outliers by checking the proximity of a tested example to the normal training examples, but when all the normal examples are uniformly distributed in an entire space, outliers will always appear close to some normal examples.

To resolve this, we propose distribution augmentation (DA) for one-class contrastive learning. The idea is that instead of learning representations from the training data only, the model learns from the union of the training data plus augmented training examples, where the augmented examples are considered to be different from the original training data. We employ geometric transformations, such as rotation or horizontal flip, for distribution augmentation. With DA, the training data is no longer uniformly distributed in the representation space because some areas are occupied by the augmented data.

Left: Illustrated examples of perfect uniformity from the standard contrastive learning. Right: The reduced uniformity by the proposed distribution augmentation (DA), where the augmented data occupy the space to avoid the uniform distribution of the inlier examples (blue) throughout the whole sphere.
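
As a rough sketch of distribution augmentation in a SimCLR-style setup, the snippet below treats each geometrically transformed copy of an image as its own instance rather than as a positive pair; the simple_augment, encoder, projector, and ntxent_loss names are placeholders for standard contrastive-learning components, not the paper's exact implementation.

```python
import torch

def distribution_augment(images):
    """Distribution augmentation sketch: expand the batch with rotated copies.
    Each rotated copy is treated as a *different* instance (a negative of the
    original), so inlier representations no longer spread over the whole sphere."""
    copies = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]  # NCHW batch
    return torch.cat(copies, dim=0)

# Illustrative contrastive step (names below are placeholders, not a real API):
# big_batch = distribution_augment(batch)
# view1, view2 = simple_augment(big_batch), simple_augment(big_batch)  # crop / color jitter
# loss = ntxent_loss(projector(encoder(view1)), projector(encoder(view2)))
```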

We evaluate the performance of one-class classification in terms of the area under the receiver operating characteristic curve (AUC) on commonly used datasets in computer vision, including CIFAR-10, CIFAR-100, Fashion MNIST, and Cat vs Dog. Images from one class are given as inliers and those from the remaining classes are given as outliers. For example, we see how well cat images are detected as anomalies when dog images are inliers.

Method                                     CIFAR-10     CIFAR-100    f-MNIST      Cat vs. Dog
Ruff et al. (2018)                         64.8         -            -            -
Golan and El-Yaniv (2018)                  86.0         78.7         93.5         88.8
Bergman and Hoshen (2020)                  88.2         -            94.1         -
Hendrycks et al. (2019)                    90.1         -            -            -
Huang et al. (2019)                        86.6         78.8         93.9         -
2-stage framework: rotation prediction     91.3±0.3     84.1±0.6     95.8±0.3     86.4±0.6
2-stage framework: contrastive (DA)        92.5±0.6     86.5±0.7     94.8±0.3     89.6±0.5
Performance comparison of one-class classification methods. Values are the mean AUCs and their standard deviation over 5 runs. AUC ranges from 0 to 100, where 100 is perfect detection.

Given the suboptimal built-in rotation prediction classifiers typically used for rotation prediction approaches, it’s notable that simply replacing the built-in rotation classifier used in the first stage for learning representations with a one-class classifier at the second stage of the proposed framework significantly boosts the performance, from 86 to 91.3 AUC. More generally, the 2-stage framework achieves state-of-the-art performance on all of the above benchmarks.

With classic OC-SVM, which learns a boundary around the representations of normal examples, the 2-stage framework results in higher performance than existing works as measured by image-level AUC.

Texture Anomaly Detection for Industrial Defect Detection
In many real-world applications of anomaly detection, the anomaly is often defined by localized defects instead of entirely different semantics (i.e., being different in general). For example, the detection of texture anomalies is useful for detecting various kinds of industrial defects.

Examples of semantic anomaly detection and defect detection. In semantic anomaly detection, the inlier and outlier differ in general (e.g., one is a dog, the other a cat). In defect detection, the semantics of the inlier and outlier are the same (e.g., both are tiles), but the outlier has a local anomaly.

While learning representations with rotation prediction and distribution-augmented contrastive learning has demonstrated state-of-the-art performance on semantic anomaly detection, those algorithms do not perform well on texture anomaly detection. Instead, we explored different representation learning algorithms that better fit the application.

In our second paper, we propose a new self-supervised learning algorithm for texture anomaly detection. The overall anomaly detection follows the 2-stage framework, but the first stage, in which the model learns deep image representations, is specifically trained to predict whether the image is augmented via a simple CutPaste data augmentation. The idea of CutPaste augmentation is simple — a given image is augmented by randomly cutting a local patch and pasting it back to a different location of the same image. Learning to distinguish normal examples from CutPaste-augmented examples encourages representations to be sensitive to local irregularity of an image.

An illustration of learning representations by predicting CutPaste augmentations. Given an example, the CutPaste augmentation crops a local patch, then pastes it to a randomly selected area of the same image. We then train a binary classifier to distinguish the original image and the CutPaste-augmented image.
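
The augmentation itself is easy to sketch. The snippet below is an illustrative NumPy version of a CutPaste-style transform (the patch-size range and sampling are our own choices, not the paper's exact parameters); stage-1 training then reduces to a binary original-vs-augmented classifier.

```python
import random
import numpy as np

def cutpaste(image):
    """CutPaste-style augmentation sketch: cut a random patch and paste it at a
    different location of the same image (expects an HxW or HxWxC NumPy array)."""
    h, w = image.shape[:2]
    ph = random.randint(h // 10, h // 4)                # patch height (illustrative range)
    pw = random.randint(w // 10, w // 4)                # patch width
    y1, x1 = random.randint(0, h - ph), random.randint(0, w - pw)   # source location
    y2, x2 = random.randint(0, h - ph), random.randint(0, w - pw)   # paste location
    out = image.copy()
    out[y2:y2 + ph, x2:x2 + pw] = image[y1:y1 + ph, x1:x1 + pw]
    return out

# Stage-1 training then becomes binary classification:
# label 0 for original images, label 1 for cutpaste(image).
```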

We use MVTec, a real-world defect detection dataset with 15 object categories, to evaluate the approach above.

Method                               AUC
DOCC (Ruff et al., 2020)             87.9
U-Student (Bergmann et al., 2020)    92.5
Rotation Prediction                  86.3
Contrastive (DA)                     86.5
CutPaste                             95.2
Image-level anomaly detection performance (in AUC) on the MVTec benchmark.

Besides image-level anomaly detection, we use the CutPaste method to locate where the anomaly is, i.e., “patch-level” anomaly detection. We aggregate the patch anomaly scores via upsampling with Gaussian smoothing and visualize them in heatmaps that show where the anomaly is. Interestingly, this provides decently improved localization of anomalies. The below table shows the pixel-level AUC for localization evaluation.

Method                                 AUC
Autoencoder (Bergmann et al., 2019)    86.0
FCDD (Ruff et al., 2020)               92.0
Rotation Prediction                    93.0
Contrastive (DA)                       90.4
CutPaste                               96.0
Pixel-level anomaly localization performance (in AUC) comparison between different algorithms on the MVTec benchmark.
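
The heatmap aggregation described above can be sketched as follows, assuming a coarse grid of per-patch anomaly scores; the upsampling method and smoothing sigma are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def patch_score_heatmap(patch_scores, image_hw, sigma=4.0):
    """Turn a coarse grid of per-patch anomaly scores into an image-sized heatmap
    by nearest-neighbor upsampling followed by Gaussian smoothing."""
    gh, gw = patch_scores.shape
    h, w = image_hw
    upsampled = zoom(patch_scores, (h / gh, w / gw), order=0)  # nearest-neighbor upsample
    return gaussian_filter(upsampled, sigma=sigma)

# Usage (illustrative): scores is a (grid_h, grid_w) array from the patch-level classifier.
# heatmap = patch_score_heatmap(scores, image_hw=(256, 256))
```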

Conclusion
In this work we introduce a novel 2-stage deep one-class classification framework and emphasize the importance of decoupling classifier building from representation learning so that the classifier can be consistent with the target task, one-class classification. Moreover, this approach permits applications of various self-supervised representation learning methods, attaining state-of-the-art performance on various applications of visual one-class classification, from semantic anomaly detection to texture defect detection. We are extending our efforts to build more realistic anomaly detection methods under the scenario where training data is truly unlabeled.

Acknowledgements
We gratefully acknowledge the contribution from other co-authors, including Jinsung Yoon, Minho Jin and Tomas Pfister. We release the code in our GitHub repository.

Source: Google AI Blog


Raising the quality bar with updated guidelines for Wear OS 3.0

Posted by Marcus Leal, Senior Product Manager for Google Play Store


Our Modern Android Developer tools and APIs are designed to help you build high-quality apps your users love, and this extends to form factors such as wearables. Earlier this year we announced updates to our developer tools and APIs to support you in building seamless, high-quality apps for your users. Today we’re announcing new guidelines to help support you in building these experiences.

Updated quality guidelines for Wear OS apps

We’ve started by updating our guidelines to give you a better understanding of what we expect of quality apps on Google Play, and what your users will be expecting for Wear OS 3.0. Some of the major changes are summarized below:

  • There are updated quality requirements for notifications, layout, and Wear functionality. Starting October 13th, Wear OS apps will need to meet these requirements to be published on Google Play.
  • Starting October 13th, Watch Faces will need to comply with our updated guidelines. All watch faces still need to comply with Google Play policies in order to publish on Google Play.

Many developers are already meeting these requirements and won’t need to make many of these changes when migrating to Wear OS 3.0. However, we recommend familiarizing yourself with the full updated guidelines here.

Updated screenshot requirements for Wear OS apps

With these quality guideline updates, we’re also rolling out changes to the Play Store to improve the discoverability of Wear OS apps. In July we launched the ability for people to filter for Wear OS and Watch Faces when searching for apps within the Play Store.

We’re now releasing new screenshot requirements for Wear OS apps to help users better understand your Wear OS app’s functionality when discovering new apps. Starting October 13th, Wear OS apps will need to meet these screenshot requirements to be published on Google Play:

  • Upload screenshots with a minimum size of 384 x 384 pixels, and with a 1:1 aspect ratio.
  • Provide screenshots showing only your app interface — screenshots must demonstrate the actual in-app or in-game experience, focusing on the core features and content so users can anticipate what the app or game experience will be like.
  • Don’t frame your screenshots in a Wear OS watch.
  • Don’t include additional text, graphics, or backgrounds in your Wear OS screenshots that are not part of the interface of your app.
  • Don’t include transparent backgrounds or masking.
List of Wear OS screenshot dos and don’ts.

Similar to mobile, your store listing and the quality of your Wear OS app will influence your search ranking and opportunities for merchandising. In order to put your best foot forward on Google Play, we recommend thinking about the following considerations:

  • Test your app on Wear OS 3.0 devices, and make sure it is working as expected.
  • Make sure your store listing shows that your app is available for Wear OS. One way to do this is to upload a screenshot of your Wear OS app or Watch face in Google Play Console.
  • Most importantly, ensure your Wear OS app meets the new quality requirements.

We hope this transparency helps your development process, and we look forward to seeing more seamless Wear OS experiences on Google Play. Happy Coding!

New beta makes it easier for admins to move folders to shared drives

What’s changing 

We’re launching a new beta that makes it easier for admins and delegated admins to move folders from My Drive to shared drives. This beta will add several usability enhancements including: 
  • Retaining folder IDs (“copyless moves”) to reduce disruption due to the move 
  • Preventing moves that would exceed any shared drive limits 
  • Reparenting any unmovable items under the item owner's My Drive root, and creating shortcuts in the existing hierarchy as a reference 


See below for more information and availability. Eligible customers can use this form to express interest in the beta.



Who’s impacted 

Admins 


Why you’d use it 

Shared drives are a powerful way to empower teams and organizations to store, access, and collaborate on files. With this beta, admins and delegated admins will notice significant improvements when moving folders from My Drive to shared drives. 



Currently, when admins move folders, the existing folder IDs change, existing links to these folders can break, and impacts on shared drive limits are unclear. With this beta, folder IDs will not change and moves that can potentially exceed any shared drive limits will be rejected. 



We hope this streamlined process will allow admins to confidently migrate folders from My Drive to shared drives by providing them with more context on the changes they’re making. 




Additional details 

In the coming months, we will introduce end user support for moving My Drive folders to shared drives. We will provide an update on the Workspace Updates Blog when the end user portion of this feature becomes available. 

Dragging and dropping a folder from My Drive into a shared drive



Getting started 


Availability 

  • Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, and Nonprofits, as well as G Suite Business customers 
  • Not available to Google Workspace Business Starter and G Suite Basic customers, as well as users with personal Google Accounts 

Resources 


Easily make all file types available offline in Google Drive

What’s changing 
Google Drive stores your most important files, whether they are Google Docs, Sheets, Slides, PDFs, images, or any of the hundreds of other file types we support today. Today, we are announcing more ways to make sure they are all accessible to you even when your internet connection is unavailable. 

In 2019, we launched a beta that enabled you to mark non-Google file types, like PDFs, images, and Microsoft Office files, as available offline when using Google Drive on the web. Now, we’re making this functionality generally available. When you mark these files as available offline, you can easily open them from your browser even when you aren’t connected to the internet.

Easily find files offline


 

ChromeOS users can now also use the easily accessible Files app on their Chromebook to select Google Docs, Sheets, and Slides files to be available when offline. This streamlined access eliminates the need to open Google Drive or Google Docs to select files to make them available offline. 

Find files offline
 

Who’s impacted 
Admins and end users 
 
Why you’d use it 
Users can access all of their important Drive files while offline such as when they’re traveling or when there’s poor internet connectivity. 

Additional details 
When offline, non-Google files such as PDFs, images, and Microsoft Office files accessed through Google Drive on the web will need to be opened using apps installed on your computer. This feature is already available for Google Drive for desktop users. 

Getting started 

Rollout pace 
Availability 
  • Available to all Google Workspace customers, as well as Cloud Identity Free, Cloud Identity Premium, G Suite Basic and Business customers. Available on personal accounts as well. 

Resources 

Roadmap 


El Carro extends the flexibility and choices for Oracle databases on Kubernetes

When we released El Carro, our goal was to provide the best possible experience for running Oracle databases on Kubernetes with the help of our operator. Today, we want to take a closer look at how that works. The diagram below shows the high-level architecture of a database that is managed by El Carro. At the core is the actual database instance with its background processes, which run in a single container that contains the Oracle installation. So how does this container image get created, and what goes into it? The image itself is essentially a snapshot of a filesystem that contains an operating system, packages and other software, and custom scripts. Specifically for El Carro, an image is made up of a base OS, required packages, and an Oracle database installation. The image must be stored in a container registry that is accessible by the Kubernetes cluster, and El Carro will expect Oracle binaries to be installed in certain paths—or create symbolic links to those locations.

Architecture Diagram showing the operator controlling the db container.

Initially, El Carro worked with 12c for Enterprise Edition and 18c for Express Edition. And while 12c is still popular with many users, its extended support ended this summer. So the first piece of news is that we have added support for 19c, Oracle’s long-term release. The choice should be easy for any new database deployments, but the options don’t end there.

We know that DBAs have different preferences in how and where software gets installed, and we believe that making different options available will ultimately empower users. With the exception of Express Edition, redistribution is not a right granted by Oracle licenses, which prevents the community from providing a public container registry with usable images. Instead, each user has to build their own image based on binaries they download from Oracle themselves, using their own license agreement with Oracle. All of the other containers used by the El Carro operator use open source software and are made available on our public registry, so that you do not have to build and host them yourself.

Option 1 - Use El Carro to build your own image with GCP

If you are using GCP, we have an easy way for you to create custom images: you upload Oracle binaries and patches to your own GCS bucket and start a Cloud Build job that creates the container image for you and uploads it to your own private container registry. A single build script and serverless cloud services take care of the whole process, so you don’t have to worry about building locally and moving images across the internet. In addition to creating seeded images (see below), this method also allows you to build containers with Oracle patches such as Release Update Revisions (RURs).

Diagram of container image build pipeline where a cloud build job reads installation files from GCS and writes finished images to GCR.

Option 2 - Use El Carro to build your own image locally

You can also use the same Dockerfile and build process from Option 1—but without Google Cloud. Download Oracle installers and patches locally or to a VM used for the builds—then start a script that invokes Docker and builds the image on that machine. Lastly, tag and push the container image to a container registry of your choice. You will have to do a few more steps yourself if you don’t use Cloud Build, but you get the same image and customization options as with Option 1.

Option 3 - Use Oracle build scripts to build your own image

Oracle also maintains an open source repository of scripts for building container images with their database. Maybe you are already using those images with Docker or Kubernetes, or you prefer to use Oracle’s own build method over ours. We recently added functionality to El Carro to make sure that the resulting images work just as well as the ones that El Carro can build for you.

Option 4 - Use Oracle’s Container Registry directly

There is a way to avoid building your own images: the Oracle Container Registry contains pre-built images that can be used with our Kubernetes operator directly and without modification. Because Oracle’s registry can only be accessed by customers, it is protected with a password. After accepting Oracle’s license conditions, you can either copy images to your own registry or configure OCR as a private repository in Kubernetes.

The Power of Seeding

Aside from the installation, it is the creation of a database that takes the longest time in the initial provisioning process, and it is often a frustrating wait before you can log in and use your database for the first time. To reduce this wait, the first two options allow you to build a pre-seeded database image that already contains a snapshot of a created and configured database. That way, this initialization step is moved to the container build process, minimizing the startup time of new database instances.

Aside from reducing the wait time, relying on a seeded image (i.e., including an empty database in the image) can provide consistency in configuration options if the same image is to be used in multiple deployments.

Option                                  Versions         Editions   Patch updates   Seeded images   Automatic build pipeline
Option 1 - El Carro on GCP              12c, 18c, 19c    XE, EE     yes             yes             yes
Option 2 - El Carro local build         12c, 18c, 19c    XE, EE     yes             yes             no
Option 3 - Oracle local build           12c, 18c, 19c    EE         no              no              no
Option 4 - Oracle Container Registry    19c              EE         no              no              n/a

Conclusion

We believe in an open cloud approach and in empowering users with choice and flexibility. In the context of running Oracle databases on Kubernetes, that means you get to choose your database container images. El Carro provides build scripts that allow you not only to customize containers but also to increase security and robustness by baking patches and updates into the container image. Seeding container images with a database further reduces deployment time by avoiding this step on first startup, which is especially useful in environments that create many databases, such as automated test pipelines.

Other users may feel more comfortable receiving support when they use Oracle’s pre-built images from their registry.

The choice is yours. Just know that El Carro is here to help you modernize your Oracle database workloads with Kubernetes. And if you have any other feature requests or choices that matter to you—let us know by filing an issue on Github.

By Bjoern Rost, Product Manager and Ash Gbadamassi, Software Engineer – Cloud Databases

Lessons from our first Community News Summit

“It’s hard being tiny on the internet,” S. Mitra Kalita, URL Media founder and former CNN Digital vice president, said during the inaugural Google News Initiative Community News Summit. “What it takes for me to get a dollar on local news versus a dollar in national news [is] so different.” 


Google was thrilled to bring together such a diverse and insightful group of community news leaders from the U.S. and Canada, many of whom echoed this sentiment from Mitra during the two-day event (Aug. 17 - 18), which focused on the challenges and opportunities local publishers face when growing and monetizing their audiences. 


“Local news is where the rubber meets the road,” said summit host and GNI director Olivia Ma, in her opening remarks to a virtual audience of 495 publishers. “Those of us working on news here at Google take our responsibility to help people find trusted, authoritative local journalism very seriously.”


The COVID-19 pandemic accelerated the need for community news outlets to diversify revenue streams and innovate to find sustainable business models. “The reality was, I needed to get on the digital game plan or die,” Sonny Giles, CEO of the Houston Defender Network, said during a session on mythbusting digital advertising. 


Rebekah Monson, the co-founder of Letterhead and Where.by.us, built on that thought in a conversation about entrepreneurial strategies for maximizing audiences. She noted that local news entrepreneurs have to rally limited resources to succeed. “I don’t know any news orgs that hustle and embrace innovation change and interaction faster or more totally than local news outlets,” Monson said. 


Over the two days, publishers also talked about the wins and lessons learned from connecting with their audiences. While figuring out pricing structures for The Juggernaut, founder Snigda Sur said she was afraid to charge a subscription fee at first, and when she did, she priced it too low. “What I wish I had known is (to) ask and ask for more,” Sur said. “Some of your earliest users are your biggest champions and your biggest ambassadors.”


Danny Sullivan, Google’s search liaison and one of the first people to work in search engine optimization before joining Google, answered questions about how search and rankings work. While local news outlets may publish a mix of stories, he noted that showcasing original local content is important. “We try to have our systems mirror what readers tend to do so we expect to see local stories,” Sullivan said. Watch the full Q&A on YouTube.

An illustration of the GNI Community News Summit created by artist Drew Merit, highlighting key points, quotes, and participants from the two-day virtual event, from Google Search and ranking to entrepreneurial strategies, reader revenue insights, and the shift from journalist to business leader.

In a conversation about balancing editorial and business missions, Pulso founder Liz Alarcón focused on the heart of our shared goals: “At the end of the day, people just want stories that will move them.” That sentiment made sense to Cityside founder Lance Knobel, who also co-founded The Oaklandside, which launched in June 2020 as part of GNI’s Local News Experiments Project. “The journalism is everything and that’s why we exist. That’s unquestionably why people become members and why they give us money.”


If you were unable to join us, you can watch the recording of our main-stage sessions on YouTube and get more information about Google resources and programs by visiting our Google News Initiative website.