#WeArePlay | Meet Steven from Indonesia. More stories from around the world.

Posted by Leticia Lago, Developer Marketing

As we bid farewell to 2023, we're excited to unveil our last #WeArePlay blog post of the year. From Lisbon to Dubai, let’s meet the creators behind game-changing apps that support communities and bring innovation and joy to people.



We’re starting off in Indonesia, where Steven remembers his pocket money quickly running out while traveling around rural areas of Indonesia with his parents. Struck by how much more expensive food items were in the villages compared to Jakarta, he was inspired to create Super, providing more affordable goods outside the capital. The app allows shop owners to buy items stored locally and supply them to their communities at lower prices. It's helped boost the hyperlocal supply chain and raise living standards for rural populations. Steven is keen to point out that “it’s not just about entrepreneurship”, but “social impact”. He hopes to take Super even further and improve economic distribution across the whole of rural Indonesia.



Next, we’re crossing the Java Sea to Singapore, where twin brothers – and marathon runners – Jeromy and Kenny decided to turn their passion for self-care into a journaling app. On Journey, people can log their daily thoughts and work towards their mental health and self-improvement goals using prompts. With the guidance of coaches, they can practice gratitude, record their ambitions, and learn about mindfulness and building self-confidence. “People tell us it helps them find time to invest in themselves and dedicate space to self-care”, says Jeromy. In the future, the pair want to bring in additional coaches to support even more people to achieve their wellness goals.



Now we’re landing in the Middle East where former kindergarten friends Chris and Rene decided to use their experience being expats in Dubai to create a platform for connecting disparate communities across the city. On Hayi حي, locals can share information with their neighbors, find help within the community and connect with those living nearby. “Community is at the heart of everything we do and our goal is to have a positive effect”, says Chris. They’re currently working on creating groups for art and sport enthusiasts to encourage residents to bond over their interests. The pair are also dedicated to sustainability and plan on launching environmental projects, such as wide-scale city clean-ups.



And finally, we’re off to Europe where Lisbon-based university chums Rita, João and Martim saw unexpected success. Initially, the trio created a recipe-sharing platform, SaveCook. But when they launched its companion app, Super Save, which compared prices of recipe ingredients across different supermarkets, it exploded in popularity. With rising inflation, people were hugely thankful to the founders “for providing a major service” at such a crucial time. Next, they’re working on a barcode scanner that tells shoppers where they can buy cheaper versions of products “to help people save as much as they can.”

Discover more founder stories from across the globe in the #WeArePlay collection.




Advancements in machine learning for machine learning

With the recent and accelerated advances in machine learning (ML), machines can understand natural language, engage in conversations, draw images, create videos and more. Modern ML models are programmed and trained using ML programming frameworks, such as TensorFlow, JAX, PyTorch, among many others. These libraries provide high-level instructions to ML practitioners, such as linear algebra operations (e.g., matrix multiplication, convolution, etc.) and neural network layers (e.g., 2D convolution layers, transformer layers). Importantly, practitioners need not worry about how to make their models run efficiently on hardware because an ML framework will automatically optimize the user's model through an underlying compiler. The efficiency of the ML workload, thus, depends on how good the compiler is. A compiler typically relies on heuristics to solve complex optimization problems, often resulting in suboptimal performance.

In this blog post, we present exciting advancements in ML for ML. In particular, we show how we use ML to improve the efficiency of ML workloads! Prior works, both internal and external, have shown that we can use ML to improve the performance of ML programs by making better ML compiler decisions. Although a few datasets for program performance prediction exist, they target small sub-programs, such as basic blocks or kernels. We introduce “TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs” (presented at NeurIPS 2023), which we recently released to fuel more research in ML for program optimization. We hosted a Kaggle competition on the dataset, which recently completed with 792 participants in 616 teams from 66 countries. Furthermore, in “Learning Large Graph Property Prediction via Graph Segment Training”, we cover a novel method to scale graph neural network (GNN) training to handle large programs represented as graphs. The technique both enables training on arbitrarily large graphs on a device with limited memory capacity and improves generalization of the model.


ML compilers

ML compilers are software routines that convert user-written programs (here, mathematical instructions provided by libraries such as TensorFlow) to executables (instructions to execute on the actual hardware). An ML program can be represented as a computation graph, where a node represents a tensor operation (such as matrix multiplication), and an edge represents a tensor flowing from one node to another. ML compilers have to solve many complex optimization problems, including graph-level and kernel-level optimizations. A graph-level optimization requires the context of the entire graph to make optimal decisions and transforms the entire graph accordingly. A kernel-level optimization transforms one kernel (a fused subgraph) at a time, independently of other kernels.

Important optimizations in ML compilers include graph-level and kernel-level optimizations.

To provide a concrete example, imagine a 2×3 matrix (2D tensor) with rows [A B C] and [a b c]:

It can be stored in computer memory as [A B C a b c] or [A a B b C c], known as row- and column-major memory layout, respectively. One important ML compiler optimization is to assign memory layouts to all intermediate tensors in the program. The figure below shows two different layout configurations for the same program. Let’s assume that on the left-hand side, the assigned layouts (in red) are the most efficient option for each individual operator. However, this layout configuration requires the compiler to insert a copy operation to transform the memory layout between the add and convolution operations. On the other hand, the right-hand side configuration might be less efficient for each individual operator, but it doesn’t require the additional memory transformation. The layout assignment optimization has to trade off between local computation efficiency and layout transformation overhead.

A node represents a tensor operator, annotated with its output tensor shape [n0, n1, ...], where ni is the size of dimension i. Layout {d0, d1, ...} represents minor-to-major ordering in memory. Applied configurations are highlighted in red, and other valid configurations are highlighted in blue. A layout configuration specifies the layouts of inputs and outputs of influential operators (i.e., convolution and reshape). A copy operator is inserted when there is a layout mismatch.
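
Returning to the row- and column-major example above, here is a minimal NumPy sketch (independent of any particular ML compiler) showing the two storage orders of the same 2×3 matrix:

```python
import numpy as np

# A 2x3 matrix with rows [A, B, C] and [a, b, c], encoded here as integers.
x = np.array([[1, 2, 3],
              [4, 5, 6]])

# Row-major (C order): rows are contiguous in memory -> [A B C a b c]
row_major = np.ravel(x, order="C")   # array([1, 2, 3, 4, 5, 6])

# Column-major (Fortran order): columns are contiguous -> [A a B b C c]
col_major = np.ravel(x, order="F")   # array([1, 4, 2, 5, 3, 6])

print(row_major, col_major)
```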

If the compiler makes optimal choices, significant speedups can be achieved. For example, we have seen up to a 32% speedup when choosing an optimal layout configuration over the compiler’s default configuration in the XLA benchmark suite.


TpuGraphs dataset

Given the above, we aim to improve ML model efficiency by improving the ML compiler. Specifically, it can be very effective to equip the compiler with a learned cost model that takes in an input program and compiler configuration and then outputs the predicted runtime of the program.

With this motivation, we release TpuGraphs, a dataset for learning cost models for programs running on Google’s custom Tensor Processing Units (TPUs). The dataset targets two XLA compiler configurations: layout (a generalization of row- and column-major ordering from matrices to higher-dimensional tensors) and tiling (configurations of tile sizes). We provide download instructions and starter code on the TpuGraphs GitHub. Each example in the dataset contains a computational graph of an ML workload, a compilation configuration, and the execution time of the graph when compiled with that configuration. The graphs in the dataset are collected from open-source ML programs, featuring popular model architectures, e.g., ResNet, EfficientNet, Mask R-CNN, and Transformer. The dataset provides 25× more graphs than the largest (earlier) graph property prediction dataset (with comparable graph sizes), and the graphs are 770× larger on average than those in existing performance prediction datasets on ML programs. With this greatly expanded scale, we can explore the graph-level prediction task on large graphs for the first time, which is subject to challenges such as scalability, training efficiency, and model quality.

Scale of TpuGraphs compared to other graph property prediction datasets.
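
To give a feel for what a single training example contains, here is an illustrative sketch. The field names and values are hypothetical stand-ins, not the dataset’s actual schema; see the TpuGraphs GitHub for the real format.

```python
# Illustrative structure of one (graph, configuration, runtime) example.
# Field names and values are hypothetical stand-ins, not the dataset's actual schema.
example = {
    "node_opcode": [12, 7, 7, 33],                 # one opcode id per node
    "node_features": [[128, 256], [256, 256],      # e.g., output tensor shapes and
                      [256, 256], [256, 1]],       # other per-node attributes
    "edge_index": [(0, 1), (1, 2), (2, 3)],        # tensor flowing from node i to node j
    "config_features": [0, 1, 1, 0],               # the compiler configuration evaluated
    "runtime_seconds": 0.0042,                     # measured execution time for this pair
}
```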

We provide baseline learned cost models with our dataset (architecture shown below). Our baseline models are based on a GNN since the input program is represented as a graph. Node features, shown in blue below, consist of two parts. The first part is an opcode id, the most important information of a node, which indicates the type of tensor operation. Our baseline models, thus, map an opcode id to an opcode embedding via an embedding lookup table. The opcode embedding is then concatenated with the second part, the rest of the node features, as inputs to a GNN. We combine the node embeddings produced by the GNN to create the fixed-size embedding of the graph using a simple graph pooling reduction (i.e., sum and mean). The resulting graph embedding is then linearly transformed into the final scalar output by a feedforward layer.

Our baseline learned cost model employs a GNN since programs can be naturally represented as graphs.
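
The following is a deliberately simplified NumPy sketch of that pipeline (opcode embedding lookup, feature concatenation, a single round of message passing, sum-and-mean pooling, and a final feedforward layer). It illustrates the architecture described above; it is not the actual baseline implementation, and the sizes and single GNN layer are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_OPCODES, EMBED_DIM, FEAT_DIM, HIDDEN = 120, 16, 8, 32   # arbitrary illustrative sizes

# "Learnable" parameters, randomly initialized here; training is omitted.
opcode_table = rng.normal(size=(NUM_OPCODES, EMBED_DIM))    # opcode id -> opcode embedding
w_node = rng.normal(size=(EMBED_DIM + FEAT_DIM, HIDDEN))    # per-node projection
w_msg = rng.normal(size=(HIDDEN, HIDDEN))                   # message-passing weights
w_out = rng.normal(size=(2 * HIDDEN, 1))                    # graph embedding -> runtime

def predict_runtime(opcode_ids, node_feats, edges):
    """opcode_ids: (N,), node_feats: (N, FEAT_DIM), edges: list of (src, dst) pairs."""
    # 1) Look up opcode embeddings and concatenate the rest of the node features.
    h = np.concatenate([opcode_table[opcode_ids], node_feats], axis=1)
    h = np.tanh(h @ w_node)

    # 2) One round of message passing: each node aggregates its neighbors' states.
    agg = np.zeros_like(h)
    for src, dst in edges:
        agg[dst] += h[src]
    h = np.tanh(h + agg @ w_msg)

    # 3) Graph pooling (sum and mean), then a feedforward layer to a scalar runtime.
    graph_emb = np.concatenate([h.sum(axis=0), h.mean(axis=0)])
    return (graph_emb @ w_out).item()

# Tiny example graph: three nodes connected in a chain.
print(predict_runtime(np.array([3, 17, 5]), rng.normal(size=(3, FEAT_DIM)), [(0, 1), (1, 2)]))
```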

Furthermore, we present Graph Segment Training (GST), a method for scaling GNN training to handle large graphs on a device with limited memory capacity in cases where the prediction task is on the entire graph (i.e., graph-level prediction). Unlike scaling training for node- or edge-level prediction, scaling for graph-level prediction is understudied but crucial to our domain, as computation graphs can contain hundreds of thousands of nodes. In typical GNN training (“Full Graph Training”, on the left below), a GNN model is trained using an entire graph, meaning all nodes and edges of the graph are used to compute gradients. For large graphs, this might be computationally infeasible. In GST, each large graph is partitioned into smaller segments, and a random subset of segments is selected to update the model; embeddings for the remaining segments are produced without saving their intermediate activations (to avoid consuming memory). The embeddings of all segments are then combined to generate an embedding for the original large graph, which is then used for prediction. In addition, we introduce a historical embedding table to efficiently obtain graph segments’ embeddings and segment dropout to mitigate the staleness of historical embeddings. Together, our complete method speeds up the end-to-end training time by 3×.

Comparing Full Graph Training (typical method) vs Graph Segment Training (our proposed method).
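
Below is a schematic Python sketch of one GST step under simplifying assumptions: encode_segment stands in for the GNN encoder, the loss and gradient computation are omitted, and segment dropout is approximated by excluding a cached embedding from the combination. It illustrates the flow of the method, not its actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_segment(segment_feats, params):
    # Stand-in for the GNN encoder applied to one segment's node features.
    return np.tanh(segment_feats @ params).mean(axis=0)

def gst_step(node_feats, params, history, graph_id,
             num_segments=4, num_backprop=1, dropout_rate=0.25):
    """One schematic Graph Segment Training step for a single large graph."""
    # 1) Partition the large graph's nodes into smaller segments.
    segments = np.array_split(node_feats, num_segments)

    # 2) Pick a random subset of segments that will receive gradients this step.
    backprop_ids = set(rng.choice(num_segments, size=num_backprop, replace=False))

    segment_embs = []
    for i, seg in enumerate(segments):
        if i in backprop_ids:
            # Encoded normally; a real framework would keep activations for backprop.
            emb = encode_segment(seg, params)
            history[(graph_id, i)] = emb
        else:
            # Reuse the historical embedding when available (no activations stored);
            # recompute and cache it only if the table has no entry yet.
            emb = history.get((graph_id, i))
            if emb is None:
                emb = encode_segment(seg, params)
                history[(graph_id, i)] = emb
            # Segment dropout: randomly exclude a stale embedding from this step's
            # combination (a rough stand-in for the paper's staleness mitigation).
            if rng.random() < dropout_rate:
                emb = np.zeros_like(emb)
        segment_embs.append(emb)

    # 3) Combine segment embeddings into one graph embedding used for prediction/loss.
    return np.mean(segment_embs, axis=0)

history = {}                                   # the historical embedding table
params = rng.normal(size=(8, 16))
graph_emb = gst_step(rng.normal(size=(1000, 8)), params, history, graph_id=0)
print(graph_emb.shape)                         # (16,)
```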

Kaggle competition

Finally, we ran the “Fast or Slow? Predict AI Model Runtime” competition on the TpuGraphs dataset. The competition ended with 792 participants in 616 teams, who made 10,507 submissions from 66 countries. For 153 users (including 47 in the top 100), this was their first competition. We learned about many interesting new techniques employed by the participating teams, such as:

  • Graph pruning / compression: Instead of using the GST method, many teams experimented with different ways to compress large graphs (e.g., keeping only subgraphs that include the configurable nodes and their immediate neighbors).
  • Feature padding value: Some teams observed that the default padding value of 0 is problematic because 0 clashes with a valid feature value, so using a padding value of -1 can improve the model accuracy significantly.
  • Node features: Some teams observed that additional node features (such as dot general’s contracting dimensions) are important. A few teams found that different encodings of node features also matter.
  • Cross-configuration attention: A winning team designed a simple layer that allows the model to explicitly “compare” configs against each other. This technique proved much better than letting the model score each config individually (one plausible form of such a layer is sketched below).
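
As noted in the last item above, one plausible form of such a cross-configuration layer is standard scaled dot-product attention applied across the candidate configurations of a single graph, so that each configuration’s embedding is refined against its competitors. The NumPy sketch below is illustrative and is not the winning team’s actual implementation:

```python
import numpy as np

def cross_config_attention(config_embs, w_q, w_k, w_v):
    """config_embs: (C, D) embeddings of C candidate configurations for one graph."""
    q, k, v = config_embs @ w_q, config_embs @ w_k, config_embs @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])               # (C, C) pairwise comparisons
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)              # softmax over competing configs
    # Each configuration's representation now mixes in information about its competitors.
    return config_embs + attn @ v

rng = np.random.default_rng(0)
D = 16
refined = cross_config_attention(rng.normal(size=(8, D)),
                                 *(rng.normal(size=(D, D)) for _ in range(3)))
print(refined.shape)   # (8, 16): one refined embedding per candidate configuration
```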

We will debrief the competition and preview the winning solutions at the competition session at the ML for Systems workshop at NeurIPS on December 16, 2023. Finally, congratulations to all the winners and thank you for your contributions to advancing research in ML for systems!


NeurIPS expo

If you are interested in more research about structured data and artificial intelligence, we hosted the NeurIPS Expo panel Graph Learning Meets Artificial Intelligence on December 9, which covered advancing learned cost models and more!


Acknowledgements

Sami Abu-el-Haija (Google Research) contributed significantly to this work and write-up. The research in this post describes joint work with many additional collaborators including Mike Burrows, Kaidi Cao, Bahare Fatemi, Jure Leskovec, Charith Mendis, Dustin Zelle, and Yanqi Zhou.

Source: Google AI Blog


StyleDrop: Text-to-image generation in any style

Text-to-image models trained on large volumes of image-text pairs have enabled the creation of rich and diverse images encompassing many genres and themes. Moreover, popular styles such as “anime” or “steampunk”, when added to the input text prompt, may translate to specific visual outputs. While much effort has been put into prompt engineering, a wide range of styles are simply hard to describe in text form due to the nuances of color schemes, illumination, and other characteristics. As an example, “watercolor painting” may refer to various styles, and using a text prompt that simply says “watercolor painting style” may either result in one specific style or an unpredictable mix of several.

When we refer to "watercolor painting style," which do we mean? Instead of specifying the style in natural language, StyleDrop allows the generation of images that are consistent in style by referring to a style reference image*.

In this blog we introduce “StyleDrop: Text-to-Image Generation in Any Style”, a tool that allows a significantly higher level of stylized text-to-image synthesis. Instead of seeking text prompts to describe the style, StyleDrop uses one or more style reference images that define the style for text-to-image generation. By doing so, StyleDrop enables the generation of images in a style consistent with the reference, while effectively circumventing the burden of text prompt engineering. This is done by efficiently fine-tuning pre-trained text-to-image generation models via adapter tuning on a few style reference images. Moreover, by iteratively fine-tuning StyleDrop on a set of images it generated, it achieves style-consistent image generation from text prompts.


Method overview

StyleDrop is a text-to-image generation model that allows generation of images whose visual styles are consistent with the user-provided style reference images. This is achieved by a couple of iterations of parameter-efficient fine-tuning of pre-trained text-to-image generation models. Specifically, we build StyleDrop on Muse, a text-to-image generative vision transformer.


Muse: text-to-image generative vision transformer

Muse is a state-of-the-art text-to-image generation model based on the masked generative image transformer (MaskGIT). Unlike diffusion models, such as Imagen or Stable Diffusion, Muse represents an image as a sequence of discrete tokens and models their distribution using a transformer architecture. Compared to diffusion models, Muse is known to be faster while achieving competitive generation quality.


Parameter-efficient adapter tuning

StyleDrop is built by fine-tuning the pre-trained Muse model on a few style reference images and their corresponding text prompts. There have been many works on parameter-efficient fine-tuning of transformers, including prompt tuning and Low-Rank Adaptation (LoRA) of large language models. Among those, we opt for adapter tuning, which has been shown to be effective at fine-tuning a large transformer network for language and image generation tasks in a parameter-efficient manner. For example, it introduces fewer than one million trainable parameters to fine-tune a Muse model of 3B parameters, and it requires only 1,000 training steps to converge.

Parameter-efficient adapter tuning of Muse.
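
To illustrate what adapter tuning adds, here is a generic bottleneck-adapter sketch in NumPy: a small down-projection/up-projection pair applied residually to a frozen layer’s output, with only the adapter weights being trained. The dimensions are illustrative, and the actual adapter design used with Muse may differ in placement and details:

```python
import numpy as np

rng = np.random.default_rng(0)
D_MODEL, BOTTLENECK = 2048, 64                 # illustrative sizes, not Muse's actual dims

# Output of a frozen transformer sub-layer (stand-in for a Muse layer's activations).
hidden = rng.normal(size=(16, D_MODEL))        # (sequence length, model dimension)

# The only *trainable* parameters: a small bottleneck adapter.
w_down = rng.normal(size=(D_MODEL, BOTTLENECK)) * 0.02
w_up = np.zeros((BOTTLENECK, D_MODEL))         # zero init, so the adapter starts as identity

def adapter(h):
    # Down-project, apply a nonlinearity, up-project, and add back residually.
    return h + np.maximum(h @ w_down, 0.0) @ w_up

print(adapter(hidden).shape)                   # (16, 2048), same shape as the input
print(w_down.size + w_up.size)                 # 262,144 trainable parameters in this sketch
```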

Iterative training with feedback

While StyleDrop is effective at learning styles from a few style reference images, it is still challenging to learn from a single style reference image. This is because the model may not effectively disentangle the content (i.e., what is in the image) from the style (i.e., how it is being presented), leading to reduced text controllability in generation. For example, as shown below in Steps 1 and 2, a generated image of a chihuahua from StyleDrop trained on a single style reference image shows leakage of content (i.e., the house) from the style reference image. Furthermore, a generated image of a temple looks too similar to the house in the reference image (concept collapse).

We address this issue by training a new StyleDrop model on a subset of synthetic images generated by the first round of StyleDrop (trained on a single image) and selected by the user or by image-text alignment models (e.g., CLIP). By training on multiple image-text-aligned synthetic images, the model can more easily disentangle the style from the content, thus achieving improved image-text alignment.

Iterative training with feedback*. The first round of StyleDrop may result in reduced text controllability, such as content leakage or concept collapse, due to the difficulty of content-style disentanglement. Iterative training using synthetic images, generated by the previous rounds of StyleDrop models and chosen by humans or image-text alignment models, improves the text adherence of stylized text-to-image generation.
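
Schematically, one round of this feedback loop might look like the sketch below. The generate_images, alignment_score, and fine_tune functions are hypothetical stand-ins for the Muse sampler, a CLIP-style scorer (or a human rater), and adapter tuning, respectively:

```python
# Schematic feedback loop; every function below is a hypothetical stand-in,
# not StyleDrop's actual training code or a real CLIP API.

def generate_images(model, prompts):
    return [f"image<{prompt}|{model}>" for prompt in prompts]   # placeholder "images"

def alignment_score(image, prompt):
    return 1.0 if prompt in image else 0.0                      # stand-in for a CLIP score

def fine_tune(model, image_text_pairs):
    return f"{model}+round"                                     # stand-in for adapter tuning

prompts = ["a chihuahua", "a temple", "a banana"]
model = "styledrop_round1"     # adapter-tuned on a single style reference image

# Round 2: keep only synthetic images judged well aligned with their prompts
# (by a human or an image-text alignment model), then fine-tune on that subset.
synthetic = list(zip(generate_images(model, prompts), prompts))
selected = [(img, p) for img, p in synthetic if alignment_score(img, p) > 0.5]
model = fine_tune(model, selected)
print(model, len(selected))
```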

Experiments


StyleDrop gallery

We show the effectiveness of StyleDrop by running experiments on 24 distinct style reference images. As shown below, the images generated by StyleDrop are highly consistent in style with each other and with the style reference image, while depicting various contexts, such as a baby penguin, banana, piano, etc. Moreover, the model can render alphabet images with a consistent style.

Stylized text-to-image generation. Style reference images* are on the left inside the yellow box. Text prompts used are:
First row: a baby penguin, a banana, a bench.
Second row: a butterfly, an F1 race car, a Christmas tree.
Third row: a coffee maker, a hat, a moose.
Fourth row: a robot, a towel, a wood cabin.
Stylized visual character generation. Style reference images* are on the left inside the yellow box. Text prompts used are: (first row) letter 'A', letter 'B', letter 'C', (second row) letter 'E', letter 'F', letter 'G'.

Generating images of my object in my style

Below we show generated images by sampling from two personalized generation distributions, one for an object and another for the style.

Images at the top in the blue border are object reference images from the DreamBooth dataset (teapot, vase, dog and cat), and the image on the left at the bottom in the red border is the style reference image*. Images in the purple border (i.e. the four lower right images) are generated from the style image of the specific object.

Quantitative results

For the quantitative evaluation, we synthesize images from a subset of Parti prompts and measure the image-to-image CLIP score for style consistency and the image-to-text CLIP score for text consistency. We study non–fine-tuned models of Muse and Imagen. Among fine-tuned models, we compare to DreamBooth on Imagen, a state-of-the-art personalized text-to-image method for subjects. We show two versions of StyleDrop, one trained on a single style reference image, and another, “StyleDrop (HF)”, trained iteratively using synthetic images with human feedback as described above. As shown below, StyleDrop (HF) shows a significantly improved style consistency score over its non–fine-tuned counterpart (0.694 vs. 0.556), as well as over DreamBooth on Imagen (0.694 vs. 0.644). We also observe an improved text consistency score with StyleDrop (HF) over StyleDrop (0.322 vs. 0.313). In addition, in a human preference study between DreamBooth on Imagen and StyleDrop on Muse, 86% of the human raters preferred StyleDrop on Muse in terms of consistency to the style reference image.
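
CLIP scores of this kind are commonly computed as cosine similarities between CLIP embeddings. The sketch below uses random placeholder vectors in place of real CLIP image and text encoders, purely to show the computation:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder vectors standing in for CLIP embeddings of a generated image,
# the style reference image, and the text prompt.
rng = np.random.default_rng(0)
generated_emb, style_emb, text_emb = rng.normal(size=(3, 512))

style_consistency = cosine(generated_emb, style_emb)   # image-to-image CLIP score
text_consistency = cosine(generated_emb, text_emb)     # image-to-text CLIP score
print(style_consistency, text_consistency)
```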


Conclusion

StyleDrop achieves style consistency in text-to-image generation using a few style reference images. Google’s AI Principles guided our development of StyleDrop, and we urge the responsible use of the technology. StyleDrop was adapted to create a custom style model in Vertex AI, and we believe it could be a helpful tool for art directors and graphic designers who want to brainstorm or prototype visual assets in their own styles to improve their productivity and boost their creativity, or for businesses that want to generate new media assets that reflect a particular brand. As with other generative AI capabilities, we recommend that practitioners ensure they align with the copyrights of any media assets they use. More results can be found on our project website and YouTube video.


Acknowledgements

This research was conducted by Kihyuk Sohn, Nataniel Ruiz, Kimin Lee, Daniel Castro Chin, Irina Blok, Huiwen Chang, Jarred Barber, Lu Jiang, Glenn Entis, Yuanzhen Li, Yuan Hao, Irfan Essa, Michael Rubinstein, and Dilip Krishnan. We thank owners of images used in our experiments (links for attribution) for sharing their valuable assets.


*See image sources 

Source: Google AI Blog


Google Workspace Updates Weekly Recap – December 15, 2023

2 New updates

Unless otherwise indicated, the features below are available to all Google Workspace customers, and are fully launched or in the process of rolling out. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time. If not, each stage of rollout should take no more than 15 business days to complete.


We have begun enforcing 2-step verification for all admin accounts 
Two-step verification (2SV) is a critical security measure that has been proven to reduce password-based hijacking by more than 50%. We are committed to protecting the security of our users and are taking additional steps to help customers guard against data compromise and prevent account takeovers.

We have begun enforcing 2SV for all admin accounts and will continue this enforcement on an ongoing basis. As of December 2023, this change is already in effect for some customers. When this goes into effect for your organization, you will receive the following notifications:
  • 30 days prior to enforcement in your domain: Super admins will receive various email and in-app notifications informing them of the forthcoming enforcement, encouraging them to verify their admins’ 2SV status. 
  • Once enforcement goes into effect in your domain: All admins will receive email and in-app notifications upon signing into their accounts for the next thirty days. If they do not enable 2SV within this time period, they will be locked out and will need to follow these steps to recover an administrator account.
We highly encourage all administrators to turn on 2SV as soon as possible. Visit the Help Center for more details and further guidance.



Dynamic groups limit increased to 500 
We’re increasing the number of dynamic groups a customer can have from 100 to 500. Dynamic groups are defined as groups whose membership is managed automatically based on specific criteria, such as a user’s department or location. This increase gives admins more flexibility to create dynamic groups as needed and cuts down on manual group management tasks that would otherwise be required. | Rolling out now to Rapid Release and Scheduled Release domains at a gradual pace (up to 15 days for feature visibility). | Available for Google Workspace Frontline Standard, Enterprise Standard and Enterprise Plus, Education Standard and Education Plus, Enterprise Essentials Plus, and Cloud Identity Premium customers only. | Learn more about dynamic groups.


Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Meet Add-ons SDK available in Developer Preview 
The Google Meet Web Add-ons SDK is available through our Developer Preview Program. Developers can use the SDK to bring their app experience right into Meet. End users can install, open, and collaborate in apps right inside a meeting, either as the meeting focal point or in the sidebar, all without ever leaving Meet. | Learn more about the Meet Add-ons SDK.

Huddly cameras bring continuous framing to Google Meet Series One room kits 
As part of our initiative to bring adaptive framing to Google Meet meeting rooms, we’re proud to announce that you can now access Huddly’s continuous framing capability as part of the Series One room kit hardware devices. | Available to all Google Workspace customers using Google Meet Series One room kits only. | Learn more about Google Meet Series One.

Record and share your name pronunciation across Google Workspace products 
From your Google account settings, you can now record your name and share its pronunciation with other users. The pronunciation can be played from your profile card across various Google Workspace tools such as Gmail or Google Docs on web or mobile devices. | Available to Google Workspace Business Starter, Business Standard, Business Plus, Essentials Starter, Enterprise Essentials, Enterprise Essentials Plus, Enterprise Standard, Enterprise Plus, Frontline Starter, Frontline Standard, and Nonprofits customers only. | Learn more about name pronunciation. 

Easy access to people, documents, building blocks and more in Google Docs 
When you move to a blank line within your Doc, you will see an “@” button with the option to select, search for, and insert smart chips (such as people, dates, timers, or files), building blocks, calendar events, groups, and more. | Learn more about bringing smart canvas features to the forefront of your workflow.

Excuse assignments in Google Classroom 
Teachers can mark an assignment for a particular student as “Excused” instead of giving it a 0-100 score. This will exclude that particular assignment from the student’s overall grade. | Learn more about excusing assignments. 

Introducing interactive questions for YouTube videos in Google Classroom 
Educators can now turn any YouTube video into an interactive lesson by adding questions for their students to answer throughout the video. | Available to Education Plus and the Teaching and Learning Upgrade only. | Learn more about interactive videos. 

Introducing the Bitbucket app for Google Chat 
We’re adding the Bitbucket app for Google Chat. Bitbucket is a Git-based code and CI/CD tool optimized for teams using Atlassian’s Jira. | Learn more about the Bitbucket app for Google Chat. 

Use “Profile Discovery” to display basic information only in search results, available in open beta 
Google Workspace admins can now turn on “Profile discovery” for their users. When it's turned on, users can customize how they appear across Google products to people who search for them by their phone number or email. Specifically, users can choose how their name and profile picture are displayed. | Learn more about Profile Discovery.


Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog posts for additional details.


Rapid Release Domains: 
Scheduled Release Domains: 
Rapid and Scheduled Release Domains: 

For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).

Congratulations to the winners of Google’s Immersive Geospatial Challenge

Posted by Bradford Lee – Product Marketing Manager, Augmented Reality, and Ahsan Ashraf – Product Marketing Manager, Google Maps Platform

In September, we launched Google's Immersive Geospatial Challenge on Devpost where we invited developers and creators from all over the world to create an AR experience with Geospatial Creator or a virtual 3D immersive experience with Photorealistic 3D Tiles.

"We were impressed by the innovation and creativity of the projects submitted. Over 2,700 participants across 100+ countries joined to build something they were truly passionate about and to push the boundaries of what is possible. Congratulations to all the winners!" 

 Shahram Izadi, VP of AR at Google

We judged all submissions on five key criteria:

  • Functionality - How are the APIs used in the application?
  • Purpose - What problem is the application solving?
  • Content - How creative is the application?
  • User Experience - How easy is the application to use?
  • Technical Execution - How well does the application showcase Geospatial Creator and/or Photorealistic 3D Tiles?

Many of the entries are working prototypes, which our judges thoroughly enjoyed experiencing and interacting with. Thank you to everyone who participated in this hackathon.



From our outstanding list of submissions, here are the winners of Google’s Immersive Geospatial Challenge:


Category: Best of Entertainment and Events

Winner, AR Experience: World Ensemble

Description: World Ensemble is an audio-visual app that positions sound objects in 3D, creating an immersive audio-visual experience.


Winner, Virtual 3D Experience: Realistic Event Showcaser

Description: Realistic Event Showcaser is a fully configurable and immersive platform to customize your event experience and showcase its unique location stories and charm.


Winner, Virtual 3D Experience: navigAtoR

Description: navigAtoR is an augmented reality app that is changing the way you navigate through cities by providing a three-dimensional map of your surroundings.



Category: Best of Commerce

Winner, AR Experience: love ya

Description: love ya showcases three user scenarios for a special time of year that connect local businesses with users.



Category: Best of Travel and Local Discovery

Winner, AR Experience: Sutro Baths AR Tour

Description: This app offers a guided tour through the Sutro Baths historical landmark using an illuminated walking path, information panels with text and images, and a 3D rendering of how the Sutro Baths swimming pool complex would have appeared to those attending.


Winner, Virtual 3D Experience: Hyper Immersive Panorama

Description: Hyper Immersive Panorama uses real-time facial detection to allow the user to look left, right, up, or down in the virtual 3D environment.


Winner, Virtual 3D Experience: The World is Flooding!

Description: The World is Flooding! allows you to visualize a 3D, realistic flooding view of your neighborhood.


Category: Best of Productivity and Business

Winner, AR Experience: GeoViz

Description: GeoViz revolutionizes architectural design, allowing users to create, modify, and visualize architectural designs in their intended context. The platform facilitates real-time collaboration, letting multiple users contribute to designs and view them in AR on location.



Category: Best of Sustainability

Winner, AR Experience: Geospatial Solar

Description: Geospatial Solar combines the Google Geospatial API with the Google Solar API for instant analysis of a building's solar potential by simply tapping it.


Winner, Virtual 3D Experience: EarthLink - Geospatial Social Media

Description: EarthLink is the first geospatial social media platform that uses 3D photorealistic tiles to enable users to create and share immersive experiences with their friends.


Honorable Mentions

In addition, we have five projects that earned honorable mentions:

  1. Simmy
  2. FrameView
  3. City Hopper
  4. GEOMAZE - The Urban Quest
  5. Geospatial Route Check

Congratulations to the winners and thank you to all the participants! Check out all the amazing projects submitted. We can't wait to see you at the next hackathon.

Open sourcing tools for Google Cloud performance and resource optimization

Over the years, we at Google have identified common requests from customers to optimize their Kubernetes clusters on Google Cloud. Today, we are releasing a set of open source tools to help customers with these tasks, including bin packing, load testing, and performance benchmarking. These tools are designed to help customers optimize their clusters for cost, performance, and scalability.

These common customer requests center on the following use cases:

  1. Does Google Cloud have a bin packing recommendation feature or tool to optimize node usage on GKE Standard?
  2. How can we easily run Aerospike, Cassandra, PgBench, or other popular benchmarking tools on Google Cloud?
  3. How can we load test an application running on Google Cloud? How many requests per second could the app handle given the current size of the existing Google Cloud infrastructure?

The underlying motivation is that customers want evidence-based tooling to help them optimize their Google Cloud resources, optimize for cost, run benchmarks, identify performance bottlenecks, or even start a performance discussion.

For the use cases above, we are open sourcing a set of tools that anyone can install in self-service fashion; each application comprises UI and backend components deployable to your own Google Cloud project. We call this collection of tools sa-tools.


BinPacker

BinPacker recommender for GKE node size
There is currently no bin packing recommendation feature in the GCP Cloud Console. We are open sourcing a tool that visually scans your GKE cluster and recommends the optimal node size for bin packing your workloads. Users can also choose to group selected services together on the same node. The installation guide can be found here.
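
BinPacker's actual recommendation logic lives in the repository; as a rough illustration of the underlying idea, here is a minimal first-fit-decreasing sketch that packs pod CPU requests onto identically sized nodes (real bin packing must also consider memory and other constraints):

```python
def first_fit_decreasing(pod_cpu_requests, node_cpu_capacity):
    """Pack pods (by CPU request, in cores) onto as few identical nodes as possible."""
    free = []          # remaining free CPU on each opened node
    assignment = []    # node index chosen for each pod (in sorted order)
    for cpu in sorted(pod_cpu_requests, reverse=True):
        for i, capacity_left in enumerate(free):
            if cpu <= capacity_left:
                free[i] -= cpu
                assignment.append(i)
                break
        else:
            free.append(node_cpu_capacity - cpu)   # open a new node
            assignment.append(len(free) - 1)
    return len(free), assignment

# Pods requesting 1.5, 0.5, 2, 1, and 0.25 cores fit on two 4-core nodes.
print(first_fit_decreasing([1.5, 0.5, 2.0, 1.0, 0.25], node_cpu_capacity=4.0))
```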

Perfkit Benchmarker with UI


What if you could install an easy-to-use version of Perfkit Benchmarker (PKB) with a click-and-select UI?

With this version, you could simply select the benchmark tool you want to use from a dropdown menu and provide a YAML configuration file. PKB would then automatically spin up a GKE Autopilot cluster with the configuration you have provided and run the benchmark. You could then view the performance metrics results in the UI.

This version of PKB makes it easier to run benchmarks and compare the performance of different systems, even if you don't have much technical experience. The installation guide can be found here.


Web Performance Testing

gTools Performance Testing

We built an open source UI wrapper on top of Locust that runs inside your GCP Project. You can run a Locust farm instance for a specific group of users, in contrast to the generic Locust setup where everyone is able to access the Locust instance. The installation guide can be found here.
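
The wrapper drives ordinary Locust test definitions. A minimal locustfile might look like the following; the target path and wait times are illustrative:

```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-5 seconds between requests.
    wait_time = between(1, 5)

    @task
    def load_home_page(self):
        # Issues a GET against the host configured when the test is launched.
        self.client.get("/")
```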

For more information, you can reach us via the contributor list in the repository.

By Yudy Hendry, Anant Damle, Kozzy Hasebe, Jun Sheng, and Chuan Chen – Cloud Solutions Architects Team

Caritas of Austin: Alleviating Homelessness and Creating a Connected Community


In each of our cities, Google Fiber works with incredible community partners and organizations on digital inclusion and equity issues. In Texas, we’re working with Caritas of Austin to help bring fast, reliable internet to the residents of Espero at Rutland, an affordable and supportive housing community and our newest Gigabit Community. GFiber is providing access to high-speed internet and digital literacy classes at no cost to residents. In today’s guest post, Rachel Hanover, Deputy Director of Espero Rutland Housing Services, shares what this represents for the community.

At Caritas of Austin, we believe all people deserve to have their basic needs met and a stable place to call home. We use a multi-layered approach to make homelessness rare, brief and nonrecurring in Central Texas by helping the unhoused population attain proper housing, employment, education, food and a supportive community.

As technology advances and society transitions to “paperless,” an internet connection is vital for finding permanent housing, applying for jobs and accessing other supplemental benefits like unemployment, food assistance and health insurance. But for tens of millions of Americans, a high-speed internet connection is a luxury they can’t afford. This barrier makes life considerably more challenging to navigate, which is especially true for people experiencing homelessness.

Espero Rutland


In a joint venture to help unhoused individuals find permanent housing, we partnered with The Vecino Group and Austin Housing Finance Corporation to develop Espero Rutland, an affordable housing community with intensive support services that is scheduled to open early next year.


Espero Rutland consists of 171 studio apartments and features many amenities, including an indoor community room, business center, gym and yoga studio, community dining room, and an outdoor courtyard area with lawn games, gazebo, BBQ stations and community garden. 

We employ onsite case managers who work closely with residents to curate a personalized plan to help them manage personal finances, develop vocational skills and apply for supplemental benefit programs. To offer these services, it is imperative that residents have a stable internet connection. 

Creating a connected community with Google Fiber



Caritas of Austin is excited to partner with Google Fiber to provide access to a free, high-speed internet connection in every residential unit and property amenity at Espero Rutland. This partnership, which is part of GFiber’s Gigabit Communities program, will provide broadband internet free of charge to very-low-income households. 

In addition to providing internet services at no cost to residents of Caritas Espero Rutland, GFiber will help to provide laptops and digital literacy classes to our residents. The virtual and onsite classes will help residents learn how to use their new laptops to access job applications, healthcare and supplemental benefits. 

At Caritas of Austin, we are committed to ending homelessness by creating a connected, supported community. Homelessness is a complex issue with no “one size fits all” solution. Through partnerships with local organizations like GFiber, we can help our clients build a solid foundation for their future. 

Empowering those experiencing homelessness transforms individual lives, which contributes to the overall well-being of society, building a stronger, more connected community for everyone. 
Posted by Rachel Hanover, Deputy Director of Espero Rutland Housing Services


Chrome Dev for Desktop Update

The Dev channel has been updated to 122.0.6182.0 for Windows, Mac and Linux.

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvi Bommana
Google Chrome