10 startups strengthening New York City’s comeback

For a city that never sleeps, New York City became eerily quiet when the COVID-19 pandemic hit last year. The city’s unemployment rate jumped from 3.8% to 20% between April and May 2020, leaving more than 570,000 New Yorkers without work. While the unemployment rate has decreased since the pandemic’s peak, at approximately 9.4% it is still nearly three times its pre-COVID-19 level and nearly twice the national average. Further, job cuts and new hiring have not been felt evenly across industries: while the tech industry boomed throughout the pandemic, the lights on Broadway stayed dark for months and small businesses across the city struggled to stay afloat. New York City needed help.

In the spring of 2021, as New York City was just beginning to vaccinate large segments of its population, Google for Startups, Tech:NYC and my team at Cornell Tech discussed ways to help the city’s economy bounce back. How could we bring our tools to the industries that were struggling the most?

Together, we launched the NYC Recovery Challenge, a new program designed to showcase how tech can support job creation for New York’s small businesses and job seekers. Because the program is laser-focused on job creation and retention in New York City, only startups from across the five boroughs were eligible, with a preference for companies building solutions for industries and New Yorkers hit hard by the pandemic. We formed a community advisory committee from across the city to help evaluate the finalists.

More than 170 New York-based startups applied for the NYC Recovery Challenge. Please join me in congratulating the ten companies selected as NYC Recovery Challenge Fellows.

In addition to mentorship and one-on-one support, the top three finalists also receive up to $100,000 in no-strings-attached funding to accelerate their businesses. Manhattan-based first-prize winner Guava is a banking hub for Black small business owners that connects founders to equitable financial products and a digital community. The runners-up are Long Island City-based startup Coverr, a financial services tool for independent contractors, and Brooklyn-based Shifterr, a digital marketplace connecting hospitality industry employers to independent shift workers seeking gigs.

In addition to the three cash prize winners, the other seven companies selected reflect the distinct opportunities digital technology provides to better connect workers, employers and communities across the city. These startups include a company focused on supporting workers with autism, a mobility company dedicated to eliminating transit deserts, an AI-powered online community marketplace connecting people to bodegas, and novel solutions to identify, bridge and ease access to social services and government resources.

All 10 fellows' companies use digital technology to strengthen a diverse range of formal and informal networks in the city. Strong, dense and diverse networks are the foundation of urban living, constantly fueling creativity, invention and innovation. It’s inspiring to see founders using the power of technology, the strength of our networks and the resiliency of our communities to supercharge New York City’s continued recovery.

7 takeaways from our Black and Latinx Publishers Summit

This month, Google virtually hosted more than 200 publishers at the 2021 Black and Latinx Publishers Summit to discuss how they can grow their businesses using digital advertising. The event featured talks with industry leaders like Local Media Association and CafeMedia on empowering diverse creators, innovating out of a crisis, using analytics to curate content and earning money from sites.

With ad spend predicted to keep rising in 2022, we’re sharing the top seven takeaways from the event to help publishers make the most of this growth.

  1. Increase your reach through collaborations
    Sonny Messiah Jiles and Larry Lee from Local Media Association's Word In Black, a newsroom collaboration of leading Black publishers in the U.S., shared what it means to be in the audience business — not the news business. Reflecting on the Black Lives Matter movement, Sonny and Larry noted that the Black press plays a critical role in elevating voices and serving communities. Collaborations like Word In Black, among others, have proven valuable to publishers looking to grow and serve a niche audience.
  2. Think of your platform as a business
    Showing up and being authentic to yourself, and to your audience, is important. Tomi Akitunde, Vanessa Mota and Jenné Claiborne from CafeMedia’s Remarkable Voices — an eight-week training and mentorship program — spoke about the challenges of feeling confident enough to turn their hobbies into sustainable businesses. For these creators, keeping a list of wins and removing the “perfection” barrier helped keep them grounded and focused.
  3. Treat your website like your digital piece of real estate
    Grow with Google Digital Coach Sandra Garcia shared that, as a small publisher, you are in “the business of you.” She noted that small publishers and business owners can grow their careers by mastering their brands and online presence — including making sure your brand is consistent and up to date across any platforms you’re publishing content on.
  4. Take advantage of productivity tools
    According to The Tilt, content creators spend 30% of their weekly time creating content. The remainder is spent on managing their personal brand, building relationships, selling, marketing and emailing. Using productivity tools like Drive, Gmail and Google Meet to manage emails, calls and documents can help you spend more time on what really matters — creating content.
  5. Start measuring for better marketing
    Getting to know your audience is essential. Eden Hagos from BLACK FOODIE shared how she analyzes her content’s performance to understand what’s resonating with her audience, what channels are driving traffic and where she should invest more time. Tools like Google Analytics can give you insights about your audience and website to help you make strategic business decisions.
  6. Use emerging platforms to generate leads and test content
    Emerging technologies, formats and social networks are a great way to grow your audience and test out new features. Cedric J. Rogers from Culture Genesis shared how his team uses new platforms to grow readership for their main monetized platforms. Rene Alegria and the Mundo Hispanico team also recently tested moving to an infinite scroll on their homepage, which increased time spent on their site by 300%.
  7. Understand what metrics make up your revenue
    Your earnings are a product of your cost-per-click, clickthrough rate and pageviews, as the quick sketch after this list shows. Google AdSense and Google Ad Manager have a variety of features — like Auto ads, Auto optimize and manual experiments — to help you learn how to maximize your metrics and increase your earning potential.
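
As a rough, illustrative sketch of that arithmetic (the numbers below are assumptions for the sake of the example, not AdSense benchmarks, and it simplifies by treating every pageview as one ad impression):

pageviews = 100_000  # monthly pageviews
ctr = 0.015          # clickthrough rate: 1.5% of impressions are clicked
cpc = 0.40           # average cost-per-click, in dollars

monthly_earnings = pageviews * ctr * cpc  # 100,000 * 0.015 * 0.40
print(f"${monthly_earnings:,.2f}")        # $600.00

Improving any one of the three factors multiplies directly into your earnings.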

If you’d like to explore new platforms and start monetizing today, Google AdSense is a great place to start — it’s easy to use and automatically provides optimal ad formats and sizes for your site. For publishers looking to monetize cross platform or manage direct deals, try out Google Ad Manager.

A big thank you to all of our speakers for sharing their wisdom and expertise at this year’s Black and Latinx Publishers Summit. If you’re interested in hearing more, check out the event replay.

Beta Channel Update for Chrome OS

The Beta channel is being updated to 97.0.4692.53 (Platform version: 14324.41.0) for most Chrome OS devices.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Cole Brown,

Google Chrome OS 

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 97 (97.0.4692.56) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Ben Mason
Google Chrome

Beta Channel Update for Desktop

The Beta channel has been updated to 97.0.4692.56 for Windows, Mac and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Prudhvikumar Bommana
Google Chrome

Set asset names by February 9, 2022

Starting in the upcoming Google Ads API v10 release, you’ll have to use unique asset names within your Google Ads account. This affects asset names for image and media bundle asset types. We’re making this change so that, as your collection of assets grows, it’s easier to identify them by human-readable names.

Starting on February 9, 2022, a default asset name will be assigned during creates and updates for all existing Google Ads API and AdWords API versions. If you wish to set the asset name yourself, update your code to do so before this date.
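
For example, with the Google Ads API Python client library you can set the name field explicitly when creating an image asset. This is a minimal sketch, not official sample code: the configuration path, customer ID, asset name and image_bytes variable are placeholders you would supply yourself.

from google.ads.googleads.client import GoogleAdsClient

# Load credentials (the YAML path here is illustrative).
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
asset_service = client.get_service("AssetService")

asset_operation = client.get_type("AssetOperation")
asset = asset_operation.create
# Set a unique, human-readable name instead of relying on the default.
asset.name = "Spring sale banner 300x250 (2022-01)"
asset.type_ = client.enums.AssetTypeEnum.IMAGE
asset.image_asset.data = image_bytes  # raw image content, loaded elsewhere

response = asset_service.mutate_assets(
    customer_id="INSERT_CUSTOMER_ID", operations=[asset_operation]
)
print(response.results[0].resource_name)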

If you have questions while you’re updating your code, please reach out to us on the forum or at googleadsapi-support@google.com.

Training Machine Learning Models More Efficiently with Dataset Distillation

For a machine learning (ML) algorithm to be effective, useful features must be extracted from (often) large amounts of training data. However, this process can be challenging due to the cost of training on such large datasets, both in terms of compute requirements and wall clock time. The idea of distillation plays an important role in these situations by reducing the resources required for the model to be effective. The most widely known form of distillation is model distillation (a.k.a. knowledge distillation), where the predictions of large, complex teacher models are distilled into smaller models.

An alternative option to this model-space approach is dataset distillation [1, 2], in which a large dataset is distilled into a synthetic, smaller dataset. Training a model with such a distilled dataset can reduce the required memory and compute. For example, instead of using all 50,000 images and labels of the CIFAR-10 dataset, one could use a distilled dataset consisting of only 10 synthesized data points (1 image per class) to train an ML model that can still achieve good performance on the unseen test set.

Top: Natural (i.e., unmodified) CIFAR-10 images. Bottom: Distilled dataset (1 image per class) on CIFAR-10 classification task. Using only these 10 synthetic images as training data, a model can achieve test set accuracy of ~51%.

In “Dataset Meta-Learning from Kernel Ridge Regression”, published at ICLR 2021, and “Dataset Distillation with Infinitely Wide Convolutional Networks”, presented at NeurIPS 2021, we introduce two novel dataset distillation algorithms, Kernel Inducing Points (KIP) and Label Solve (LS), which optimize datasets using the loss function arising from kernel regression (a classical machine learning algorithm that fits a linear model to features defined through a kernel). Applying the KIP and LS algorithms, we obtain very efficient distilled datasets for image classification, reducing the datasets to 1, 10, or 50 data points per class while still obtaining state-of-the-art results on a number of benchmark image classification datasets. We are also excited to release our distilled datasets to benefit the wider research community.

Methodology
One of the key theoretical insights about deep neural networks (DNNs) in recent years has been that increasing their width results in more regular behavior that makes them easier to understand. As the width is taken to infinity, DNNs trained by gradient descent converge to the familiar and simpler class of models arising from kernel regression with respect to the neural tangent kernel (NTK), a kernel that measures input similarity by computing dot products of gradients of the neural network. Thanks to the Neural Tangents library, neural kernels for various DNN architectures can be computed in a scalable manner.
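
To make this concrete, here is a minimal sketch of computing an NTK with the Neural Tangents stax API. The toy fully-connected architecture and the input shapes are our own choices for brevity, not the architectures used in our papers.

import jax.numpy as jnp
from neural_tangents import stax

# A toy two-layer fully-connected network. The layer widths only matter for
# finite-width networks; kernel_fn corresponds to the infinite-width limit.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(10),
)

x1 = jnp.ones((3, 32 * 32 * 3))  # e.g., 3 flattened CIFAR-10 images
x2 = jnp.ones((5, 32 * 32 * 3))

# kernel_fn evaluates the infinite-width kernel in closed form;
# 'ntk' requests the neural tangent kernel.
k = kernel_fn(x1, x2, 'ntk')     # shape (3, 5)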

We utilized the above infinite-width limit theory of neural networks to tackle dataset distillation. Dataset distillation can be formulated as a two-stage optimization process: an “inner loop” that trains a model on learned data, and an “outer loop” that optimizes the learned data for performance on natural (i.e., unmodified) data. The infinite-width limit replaces the inner loop of training a finite-width neural network with a simple kernel regression. With the addition of a regularizing term, the kernel regression becomes a kernel ridge-regression (KRR) problem. This is a highly valuable outcome because the kernel ridge regressor (i.e., the predictor from the algorithm) has an explicit formula in terms of its training data (unlike a neural network predictor), which means that one can easily optimize the KRR loss function during the outer loop.
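
Concretely, given support data (X_s, y_s) and ridge parameter λ, the kernel ridge regressor has the closed form f(x) = K(x, X_s) (K(X_s, X_s) + λI)^(-1) y_s. Here is a minimal sketch in JAX, reusing kernel_fn from the snippet above (the function and argument names are ours):

import jax.numpy as jnp

def krr_predict(x_support, y_support, x_target, reg=1e-6):
    # Closed-form kernel ridge regression predictor.
    k_ss = kernel_fn(x_support, x_support, 'ntk')
    k_ts = kernel_fn(x_target, x_support, 'ntk')
    # Solve (K_ss + reg * I) alpha = y_support, then predict K_ts @ alpha.
    alpha = jnp.linalg.solve(k_ss + reg * jnp.eye(k_ss.shape[0]), y_support)
    return k_ts @ alpha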

The original data labels can be represented by one-hot vectors, i.e., the true label is given a value of 1 and all other labels are given values of 0. Thus, an image of a cat would have the label “cat” assigned a 1 value, while the labels for “dog” and “horse” would be 0. The labels we use involve a subsequent mean-centering step, where we subtract the reciprocal of the number of classes from each component (so 0.1 for 10-way classification) so that the expected value of each label component across the dataset is normalized to zero.
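
In code, this label preprocessing is a one-liner (a sketch; class_ids is an assumed array of integer class labels):

import jax
import jax.numpy as jnp

num_classes = 10
one_hot = jax.nn.one_hot(class_ids, num_classes)  # e.g., [0, ..., 1, ..., 0]
centered = one_hot - 1.0 / num_classes            # subtract 0.1 for 10 classes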

While the labels for natural images appear in this standard form, the labels for our learned distilled datasets are free to be optimized for performance. Having obtained the kernel ridge regressor from the inner loop, the KRR loss function in the outer loop computes the mean-square error between the original labels of natural images and the labels predicted by the kernel ridge regressor. KIP optimizes the support data (images and possibly labels) by minimizing the KRR loss function through gradient-based methods. The Label Solve algorithm directly solves for the set of support labels that minimizes the KRR loss function, generating a unique dense label vector for each (natural) support image.
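
Putting the two loops together for KIP, here is a rough sketch building on krr_predict above (x_support and y_support are the learnable distilled data; x_target and y_target are natural images and labels):

import jax
import jax.numpy as jnp

def kip_loss(x_support, y_support, x_target, y_target):
    # Outer loop: mean-square error between natural labels and the
    # predictions of the closed-form inner-loop regressor.
    preds = krr_predict(x_support, y_support, x_target)
    return jnp.mean((preds - y_target) ** 2)

# One gradient computation for updating the learned support set.
grad_x, grad_y = jax.grad(kip_loss, argnums=(0, 1))(
    x_support, y_support, x_target, y_target
)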

Example of labels obtained by label solving. Left and Middle: Sample images with possible labels listed below. The raw, one-hot label is shown in blue and the final LS generated dense label is shown in orange. Right: The covariance matrix between original labels and learned labels. Here, 500 labels were distilled from the CIFAR-10 dataset. A test accuracy of 69.7% is achieved using these labels for kernel ridge-regression.

Distributed Computation
For simplicity, we focus on architectures that consist of convolutional neural networks with pooling layers. Specifically, we focus on the so-called “ConvNet” architecture and its variants because it has been featured in other dataset distillation studies. We used a slightly modified version of ConvNet with a simple architecture given by three blocks of convolution, ReLU, and 2x2 average pooling, followed by a final linear readout layer, with an additional 3x3 convolution and ReLU layer prepended (see our GitHub for precise details).

ConvNet architecture used in DC/DSA. Ours has an additional 3x3 Conv and ReLU prepended.

To compute the neural kernels needed in our work, we used the Neural Tangents library.
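
For instance, the modified ConvNet above can be sketched with the stax API roughly as follows (the channel counts are illustrative; see our GitHub for the exact configuration):

from neural_tangents import stax

def conv_block(channels):
    # One convolution / ReLU / 2x2 average-pooling block.
    return stax.serial(
        stax.Conv(channels, (3, 3), padding='SAME'),
        stax.Relu(),
        stax.AvgPool((2, 2), strides=(2, 2)),
    )

init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Conv(64, (3, 3), padding='SAME'), stax.Relu(),  # prepended layer
    conv_block(128), conv_block(128), conv_block(128),   # three blocks
    stax.Flatten(),
    stax.Dense(10),                                      # linear readout
)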

The first stage of this work, in which we applied KRR, focused on fully-connected networks, whose kernel elements are cheap to compute. But a hurdle facing neural kernels for models with convolutional layers plus pooling is that the computation of each kernel element between two images scales as the square of the number of input pixels (due to the capturing of pixel-pixel correlations by the kernel). So, for the second stage of this work, we needed to distribute the computation of the kernel elements and their gradients across many devices.

Distributed computation for large-scale meta-learning.

We invoke a client-server model of distributed computation in which a server distributes independent workloads to a large pool of client workers. A key part of this is to divide the backpropagation step in a way that is computationally efficient (explained in detail in the paper).

We accomplish this using the open-source tools Courier (part of DeepMind’s Launchpad), which allows us to distribute computations across GPUs working in parallel, and JAX, for which novel usage of the jax.vjp function enables computationally efficient gradients. This distributed framework allows us to utilize hundreds of GPUs per distillation of the dataset, for both the KIP and LS algorithms. Given the compute required for such experiments, we are releasing our distilled datasets to benefit the wider research community.
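
As a rough sketch of the role jax.vjp plays (the sharding scheme and names here are ours; the actual workload partitioning is described in the paper), each worker computes one independent block of the kernel matrix together with a vector-Jacobian product against that block’s cotangent from the outer-loop loss:

import jax

def kernel_block(x_support_shard, x_target_shard):
    # One worker's independent workload: a block of the kernel matrix.
    return kernel_fn(x_target_shard, x_support_shard, 'ntk')

# Forward pass plus a function that backpropagates through this block only.
block, vjp_fn = jax.vjp(kernel_block, x_support_shard, x_target_shard)

# Given the outer-loop loss gradient with respect to this kernel block (its
# cotangent), recover the gradient for this shard of learned support images.
grad_support, _ = vjp_fn(block_cotangent)

The server then only needs to aggregate per-block results, which is what lets hundreds of GPUs contribute to a single distillation.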

Examples
Our first set of distilled images above used KIP to distill CIFAR-10 down to 1 image per class while keeping the labels fixed. Next, in the figure below, we compare the test accuracy of training on natural MNIST images, KIP distilled images with labels fixed, and KIP distilled images with labels optimized. We highlight that learning the labels provides an effective, albeit mysterious, benefit to distilling datasets. Indeed, the resulting set of images provides the best test performance (for infinite-width networks) despite being less interpretable.

MNIST dataset distillation with trainable and non-trainable labels. Top: Natural MNIST data. Middle: Kernel Inducing Point distilled data with fixed labels. Bottom: Kernel Inducing Point distilled data with learned labels.

Results
Our distilled datasets achieve state-of-the-art performance on benchmark image classification datasets, improving performance beyond previous state-of-the-art models that used convolutional architectures, Dataset Condensation (DC) and Dataset Condensation with Differentiable Siamese Augmentation (DSA). In particular, for CIFAR-10 classification tasks, a model trained on a dataset consisting of only 10 distilled data entries (1 image / class, 0.02% of the whole dataset) achieves a 64% test set accuracy. Here, learning labels and an additional image preprocessing step lead to a significant increase in performance beyond the 50% test accuracy shown in our first figure (see our paper for details). With 500 images (50 images / class, 1% of the whole dataset), the model reaches 80% test set accuracy. While these numbers are with respect to neural kernels (using the KRR infinite-width limit), the distilled datasets can be used to train finite-width neural networks as well. In particular, on CIFAR-10 a finite-width ConvNet achieves 50% test accuracy with 10 distilled images and 68% test accuracy with 500, which are still state-of-the-art results. We provide a simple Colab notebook demonstrating this transfer to a finite-width neural network.

Dataset distillation using Kernel Inducing Points (KIP) with a convolutional architecture outperforms prior state-of-the-art models (DC/DSA) on all benchmark settings on image classification tasks. Label Solve (LS, middle columns), while only distilling information in the labels, often outperforms prior state-of-the-art models as well (e.g., on CIFAR-10 with 10 or 50 data points per class).

In some cases, our learned datasets are more effective than a natural dataset one hundred times larger in size.

Conclusion
We believe that our work on dataset distillation opens up many interesting future directions. For instance, our algorithms KIP and LS have demonstrated the effectiveness of using learned labels, an area that remains relatively underexplored. Furthermore, we expect that utilizing efficient kernel approximation methods can help to reduce computational burden and scale up to larger datasets. We hope this work encourages researchers to explore other applications of dataset distillation, including neural architecture search and continual learning, and even potential applications to privacy.

Anyone interested in the KIP and LS learned datasets for further analysis is encouraged to check out our papers [ICLR 2021, NeurIPS 2021] and open-sourced code and datasets available on Github.

Acknowledgements
This project was done in collaboration with Zhourong Chen, Roman Novak and Lechao Xiao. We would like to give special thanks to Samuel S. Schoenholz, who proposed and helped develop the overall strategy for our distributed KIP learning methodology.





Watch With Me on Google TV: Taraji P. Henson’s watchlist

Movies and TV can make us laugh, cry and even shape who we are. Our watchlists can be surprisingly revealing. We’re teaming up with entertainers, artists and cultural icons on a new Watch With Me series on Google TV to share their top picks and give you a behind-the-scenes look at the TV and movies that inspired them.

You may know Taraji P. Henson from her iconic roles in “Baby Boy,” “Empire,” “Hustle & Flow,” “Hidden Figures” and “The Curious Case of Benjamin Button.” While she dominates the big screen, she’s been using her platform to promote representation. “I didn’t get into acting for fame. I do it for the lives that I can touch because representation is very important and helps build empathy for others.”

Taraji is also passionate about eradicating stigmas around mental health and believes entertainment can help bring awareness to the pain that people may be facing silently. “When I found myself doing work in the mental health field, I realized how much movies and TV help depict the struggles that others go through for us to relate to and how we should all be a little more empathetic when we’re dealing with others.”

Google TV showing Watch With Me page with Taraji P. Henson’s watchlist

We sat down with Taraji to learn more about her work and what she watches.

What does your watchlist say about you?

Taraji P. Henson: My watchlist says that I’m someone who loves to laugh and I’m always rooting for the underdog. My picks should hopefully give you insight into people’s lives and how we should open up our hearts and minds.

What’s your favorite holiday ritual?

Taraji P. Henson: My holiday ritual is watching “A Christmas Story” on repeat.

What’s your top choice genre?

Taraji P. Henson: Comedy is my top choice. I always want to laugh! And I think the best way to teach is through comedy because you get to laugh, but you also don’t feel like you’re being preached to.

Do you have a favorite guilty pleasure?

Taraji P. Henson: I love watching reality TV, especially any of the Real Housewives franchises!

What’s one rule you have for watching TV or movies with a group?

Taraji P. Henson: Shut up and be quiet! I don't want to hear you next to me talking or crunching on your popcorn.

Who’s your favorite person to nerd out about shows and movies with?

Taraji P. Henson: My best friend since the seventh grade. She was also an art student with me, and we grew up watching the movies on my watchlist together like “Sixteen Candles” and “Beaches.”

What’s the best way to watch your watchlist?

Taraji P. Henson: Wear some comfy sweats or PJs on the sofa with the lights off. Be sure to turn the sound on loud!

So get comfy and settle into Taraji’s watchlist on Google TV, rolling out over the next few days. Be sure to share your favorites with us using #WatchWithMe!

Announcing Jetpack Glance Alpha for app widgets

Posted by Marcel Pintó Biescas, Developer Relations Engineer, @marxallski

Illustration of a laptop with the Android rocket logo

Android 12 revamps a key feature for many Android users: app widgets. The update makes them more useful, beautiful, and discoverable (84% of users use at least one widget). Today, we’re making it even easier to build them by releasing the first alpha of Jetpack Glance, a new framework built on top of the Jetpack Compose runtime, designed to make it faster and easier to build app widgets for the home screen and other surfaces.

We’d love you to give it a try and share your feedback!

Glance offers the same modern, declarative Kotlin APIs that you’re used to with Jetpack Compose, helping you build beautiful, responsive app widgets with far less code.

Glance “Hello World” widget sample


import androidx.compose.runtime.Composable
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.GlanceAppWidgetReceiver
import androidx.glance.text.Text

// The widget UI is declared with a composable, just like in Jetpack Compose.
class GreetingsWidget(private val name: String) : GlanceAppWidget() {
    @Composable
    override fun Content() {
        Text(text = "Hello $name")
    }
}

// The receiver the system uses to create and update the app widget.
class GreetingsWidgetReceiver : GlanceAppWidgetReceiver() {

    override val glanceAppWidget = GreetingsWidget("Glance")
}

How it works

Glance provides a base set of composables to help build “glanceable” experiences, starting today with app widget components, with more coming. Using the Jetpack Compose runtime, Glance translates composables into actual RemoteViews and displays them in an app widget.


Diagram: Glance structure


This means that Glance requires Compose to be enabled and depends on Runtime, Graphics, and Unit UI Compose layers, but it’s not directly interoperable with other existing Jetpack Compose UI elements. However, state or any other logic within your app can be shared to create a glanceable UI.


What's in Alpha

This initial release introduces the main APIs to enable you to build app widgets in addition to providing interoperability with existing RemoteViews.

Here’s an overview of what the library offers, at a glance:

We are working on bringing even more functionality with default theming, further Android Studio support, and more. Stay tuned for new releases.



Get started with Glance

Check out the sample on GitHub for a quick start. Glance works with the latest stable Android Studio, but since Glance relies on the Compose Runtime, follow the steps in the Jetpack Compose docs to set it up first.

In addition, for a more advanced showcase, check out the demos in the AndroidX repository.


ResponsiveAppWidget.kt demo

The Alpha version is your opportunity to influence the APIs, so please share your feedback and let us know your experience!

Happy Composing with Glance!

Creating new digital businesses with Qaya

When Google moved to “work from home” due to COVID-19 in 2020, I was a Founder-in-Residence in Area 120, Google’s incubator for experimental products. I had spent the prior two years in Area 120 developing Kormo, a jobs marketplace for the “next billion users” in India, Indonesia, and Bangladesh. With time at home to revisit my passion for music and writing, I had a chance to reflect on my belief in creator entrepreneurship, and how to make it part of what I built next.

After spending time with dozens of creators, we consistently heard that building a digital creator business is time-consuming and difficult. This sparked a new project idea: Qaya, a product that provides web storefronts for creators who want to sell products and services directly to their audiences. Today, as part of Area 120, we are announcing Qaya’s U.S. beta launch.

This animation shows a Qaya creator’s storefront on both mobile and desktop. The screens show the storefront home, along with the creator’s selected profile links and products.

Qaya is a small and agile team dedicated to helping creators build businesses on the web. Our project began with a simple idea: creators are the next generation of entrepreneurs. As the CEOs of their own businesses, they need the same commercial tools as any successful founder. Since we began live testing in early 2021, we’ve learned a lot from creators on Qaya, their fans and other creator economy projects.

Creators on Qaya sell everything from trapeze workout guides to wellness training videos, photo filters, beat packs, ASMR read-alouds, productivity templates, knitting patterns and much more. We support pay-gated and free products, with tipping, subscription and other monetization types coming soon.

A mobile view of a creator's product detail page on Qaya. The page shows information about the product, including contents and price.

Creators use Qaya as the hub for their business activity across the web. Many link to their Qaya storefronts from their social media bios, and showcase digital products they upload or products and services hosted on other sites. We provide custom yourname.channel or qaya.store/your-name URLs, with payment functionality built in.

Mobile and desktop renderings of the Qaya page for a creator named Jamie Chung.

We also developed customer management and analytics tools that creators use to connect with their fans and understand sales and content performance.

A creator's Qaya dashboard, containing stats on products and sales over time.

Lastly, we know it’s important for creators to grow their audiences. So we’ve started to integrate with other Google products, including YouTube’s Merch Shelf. If you’re an eligible YouTube creator, you can now promote products from Qaya directly below videos on your YouTube channel.

This image shows a creator using Qaya and the YouTube Merch Shelf simultaneously. The creator's products appear on YouTube, under their videos. Consumers can click through to learn more or buy on Qaya.

We’re focused on the U.S. today, but hope to bring Qaya to more countries soon. And, we’re exploring ways to support creators as they experiment with other types of digital goods.

Google has always invested in creators, from publishers on the early internet to YouTubers today. Our goal with Qaya is to explore new ways to continue this work: giving creators tools to build successful, owner-operated businesses on the web.

If you’re a creator and you’d like to work together, you can request an invitation from Qaya’s site.