
Build with Google AI video series, Season 2: more AI patterns

Posted by Joe Fernandez – Google AI Developer Relations

We are off to another exciting year in Artificial Intelligence (AI) and it's time to build more applications with Google AI technology! The Build with Google AI video series is for developers looking to build helpful and practical applications with AI. We focus on useful code projects you can implement and extend in an afternoon to bring the power of artificial intelligence into your workflow or organization. Our first season received over 100,000 views in six weeks! We are glad to see that so many of you liked the series, and we are excited to bring you even more Google AI application projects.

Today, we are launching Season 2 of the Build with Google AI series, featuring projects built with Google's Gemini API technology. The launch of Gemini and the Gemini API has brought developers even more advanced AI capabilities, including advanced reasoning, content generation, information synthesis, and image interpretation. Our goal with this season is to help you put those capabilities to work for you and your organizations.


AI app patterns

The Build with Google AI series features practical application code projects created for you to use and customize. However, we know that you are the best judge of what you or your organization needs to solve day-to-day problems and get work done. That's why each application we feature in this series is also meant to be used as an AI pattern. You can extend these applications immediately to solve problems and provide value for your business, and each one demonstrates a general coding pattern for getting value out of AI technology.

For the second season of the series, we show how you can leverage Google's Gemini AI model capabilities in your applications. Here's what's coming up:

  • AI Slides Reviewer with Google Workspace (3/20) - Image interpretation is one of the Gemini model's biggest new features. We show you how to make practical use of it with a presentation review app for Google Slides that you can customize with your organization's guidelines and recommendations. 
  • AI Flutter Code Agent with Gemini API (3/27) - Code generation was the most popular episode from last season, so we are digging deeper into this topic. Build a code generation extension to write Flutter code and explore user interface designs and looks with just a few words of description.
  • AI Data Agent with Google Cloud (4/3) - Why write code to extract data when you can just ask for it? Build a web application that uses Gemini API's Function Calling feature to translate questions into code calls and data into plain language answers.

Season 1 upgraded to Gemini API: We've upgraded Season 1 tutorials and code projects to use the Gemini API so you can take advantage of the latest in generative AI technology from Google. Check them out!


Learn from the developers

Just like last season, we'll go back to the studio to talk with coders who built these projects so they can share what they learned along the way. How do you make the Gemini model review an entire presentation? What's the most effective way to generate code with AI? How do you get a database to answer questions with the Gemini API? Get insights into coding with AI to jump start your own development project.


New home for AI developer content

Developers interested in Google's AI offerings now have a new home at ai.google.dev. There you'll find a wealth of resources for building with AI from Google, including the Build with Google AI tutorials. Stay tuned for much more content through the rest of the year.

We are excited to bring you the second season of Build with Google AI. Check out Season 2 right now! Use those video comments to let us know what you think and tell us what you'd like to see in future episodes.

Keep learning! Keep building!

Tune in for Google I/O on May 14

Posted by Jeanine Banks – VP & General Manager, Developer X, and Head of Developer Relations

Google I/O is arriving this year on May 14th and you’re invited to join us online! I/O offers something for everyone, whether you are developing a new application, modernizing an existing one, or transforming it into a business.

The Gemini era unlocks new possibilities for developers to build creative and productive AI-enabled applications. I/O is where you’ll hear how you can get from idea to production AI applications faster. We’re excited to share what’s new for mobile, web, and multiplatform development, and how to scale your applications in the cloud. You will be able to dive deeper into topics that interest you with over 100 sessions, workshops, codelabs, and demos.

Visit the Google I/O site and register to stay informed about I/O and other related events coming soon. The livestreamed keynotes start May 14 at 10am PT, so mark your calendar.

If you haven’t already, go try out our newest Google I/O puzzle and head to @googlefordevs on Instagram if you need a hint.

Google Cloud Next ’24 session library is now available

Posted by Max Saltonstall – Developer Relations Engineer

Google Cloud Next 2024 is coming soon, and our session library is live!

Next ’24 covers a ton of ground, so choose your adventure. There's something on the menu for everyone, not just AI.

Developer-focused

Developers, this is your time. We have a huge collection of edutainment in store for you at Next, including:

  • Thousands of Googlers on-site to connect and chat
  • Demos you can play with, try out, poke and see inside of (rather than just watching)
  • Talks from Champion Innovators about how they put cloud to use
  • Gathering spots for classes, interest groups, trainings and hanging out

This year we have more than double the number of advanced technical sessions, and recommendations for startups, small and medium businesses, and sustainability for all. Data scientists and data engineers can shard themselves out into 60+ big data sessions, including going to the cutting edge with BigQuery multi-modal data.


Artificial intelligence

If you want to build your own AI model, LLM, or chatbot, we've got sessions for that, covering ways to use Vertex AI to spin up your own large language models in the cloud, to search your multimedia library, and to maintain equity in the data used for training.


Diversity, equity, and inclusion

Equity and inclusion go way past AI, and we’re really excited to have talks this year addressing allyship for your Muslim colleagues, growing inclusion in your org, and dialogues for change.


Security and data privacy

Don't forget security (really, who does?). Whether you are tackling security at the infrastructure, platform, machine or workload level, we've got sessions for you. Even if you're on multiple clouds, with multiple teams, you still need to get insight into the security and compliance of it all.

Speaking of all these fun chips, what about the salsa? We've got supply chain security with talks on SLSA and GUAC, plus numerous options for serverless workload security and ML data privacy.


Come join us

So, still on the fence?

Come for the magnificent shows in Vegas.

Come for the chance to sit down with expert developers and engineers.

Come for the amazing technical talks and tutorials.

Or just come for the spectacle. We've got it all at Google Cloud Next ’24.

Check out sessions and secure your spot for three days of learning, community-building, and cloud tech with experts and peers at Mandalay Bay Convention Center in Las Vegas, April 9–11.

How recommerce startup Beni uses AI to help you shop secondhand

Posted by Lillian Chen – Global Brand and Content Marketing Manager, Google Accelerator Programs

Sarah Pinner’s passion to reduce waste began as a child when she would reach over and turn off her sibling’s water when they were brushing their teeth. This passion has fueled her throughout her career, from joining zero-waste grocery startup Imperfect Foods to co-founding Beni, an AI-powered browser extension that aggregates and recommends resale options while users shop their favorite brands. Together with her co-founder and Beni CTO Celine Lightfoot, Sarah built Beni to make online apparel resale accessible to everyday shoppers in order to accelerate the circular economy and reduce the burden of fashion on the planet.

Sarah explains how the platform helps connect shoppers to secondhand clothing: “Let’s say you’re looking at a Nike shoe. While on the Nike site, Beni pulls resale listings for that same shoe from over 40 marketplaces like Poshmark or eBay or TheRealReal. Users can simply buy the resale version instead of new to save money and purchase more sustainably. On average, Beni users save about 55% from the new item, and it’s also a lot more sustainable to buy the item secondhand.”

Beni was one of the first companies in the recommerce platform software space, and the competitive landscape is growing. “The more recommerce platforms the better, but Beni is ahead in terms of our partnerships and access to data as well as the ability to search across data,” says Sarah.


How Beni Uses AI

AI helps Beni to ingest all data feeds from their 40+ partnerships into Beni’s database so they can surface the most relevant resale items to the shopper. For example, when Beni receives eBay’s feed for a product search, there may be 100,000 different sizes. The team has trained the Beni model to normalize sizing data. That’s one piece of their categorization.

“When we first started Beni, the intention wasn’t to start a company. It was to solve a problem, and AI has been a great tool to be able to do that,” says Sarah.


Participating in Google for Startups Accelerator: Circular Economy

Beni’s product was built using Google technology, is hosted on Google Cloud, and utilizes Vision API Product Search, Vertex AI, BigQuery, and the Chrome Web Store.

When they heard about the Google for Startups Accelerator: Circular Economy program, it seemed like the perfect fit. “Having been in the circular economy space, and being a software business already using a plethora of Google products, and having a Google Chrome extension - getting plugged into the Google world gave us great insights about very niche questions that are very hard to find online,” says Sarah.

As an affiliate business in resale, Beni’s revenue per transaction is low—a challenge for a business model that requires scale. The Beni team worked one-on-one with Google mentors to best use Google tools in a cost-effective way. Keeping search results relevant is a core piece of the zero-waste model. “Being plugged in and being able to work through ways to improve that relevancy and that reliability with the people in Google who know how to build Google Chrome extensions, know how to use the AI tools on the backend, and deeply understand Search is super helpful.” The Google for Startups Accelerator: Circular Economy program also educated the team in how to selectively use AI tools such as Google’s Vision API Product Search versus building their own tech in-house.

“Having direct access to people at Google was really key for our development and sophisticated use of Google tools. And being a part of a cohort of other circular economy businesses was phenomenal for building connections in the same space,” says Sarah.

Google for Startups Accelerator support extended beyond tech. A program highlight for Sarah was a UX writing deep dive specifically for sustainability. “It showed us all this amazing, tangible research that Google has done about what is actually effective in terms of communicating around sustainability to drive behavior change,” said Sarah. “You can’t shame people into doing things. The way in which you communicate is really important in terms of if people will actually make a change or be receptive.”

Additionally, the new connections made with other circular economy startups and experts in their space was a huge benefit of participating in Google for Startups Accelerator. Mentorship, in particular, provided product-changing value. Google technical mentors shared advice that had a huge impact on the decision for Beni to move from utilizing Vision API Product Search to their own reverse image search. “Our mentors guided us to shift a core part of our technology. It was a big decision and was one of the biggest pieces of mentorship that helped drive us forward. This was a prime example of how the Google for Startups Accelerator program is truly here to support us in building the best products,” says Sarah.


What’s next for Beni

Beni’s mission is straightforward: they’re easing the burden for shoppers to find and buy items secondhand so that they can bring new people into resale and make resale the new norm.

Additionally, Beni is continuing to build out a search platform that searches across secondhand clothing. Beni offers its Chrome extension on desktop and mobile and will add a searchable interface. In addition to building out the platform further, Beni is looking at how it can support other e-commerce platforms and integrate resale into their offerings.

Learn about how to get involved in Google accelerator programs here.

Kubernetes 1.29 is available in the Regular channel of GKE

Kubernetes 1.29 has been available in the GKE Regular Channel since January 26th, and in the Rapid Channel since January 11th, less than 30 days after the OSS release! For more information about the content of Kubernetes 1.29, read the Kubernetes 1.29 Release Notes.

New Features

Using CEL for Validating Admission Policy

Validating admission policies offer a declarative, in-process alternative to validating admission webhooks.

Validating admission policies use the Common Expression Language (CEL) to declare the validation rules of a policy. Validating admission policies are highly configurable, enabling policy authors to define policies that can be parameterized and scoped to resources as needed by cluster administrators. [source]

Validating Admission Policy graduates to beta in 1.29. We are especially excited about the work that Googlers Cici Huang, Joe Betz, and Jiahui Feng have led in this release to get to the beta milestone. As we move toward v1, we are actively working to ensure scalability and would appreciate any end-user feedback. [public doc here for those interested]

You can opt into the beta ValidatingAdmissionPolicy feature by enabling the beta APIs.
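
To make the shape of such a policy concrete, here is a minimal sketch expressed as a Python dict rendered to YAML; the policy name, match rules, and CEL expression are illustrative assumptions, not an example from the release. In practice you would also create a ValidatingAdmissionPolicyBinding to put the policy into effect.

```python
import yaml  # pip install pyyaml

# Minimal sketch of a ValidatingAdmissionPolicy using the 1.29 beta API.
# The name, scoping, and CEL expression are illustrative assumptions.
policy = {
    "apiVersion": "admissionregistration.k8s.io/v1beta1",
    "kind": "ValidatingAdmissionPolicy",
    "metadata": {"name": "replica-limit"},
    "spec": {
        "failurePolicy": "Fail",
        "matchConstraints": {
            "resourceRules": [{
                "apiGroups": ["apps"],
                "apiVersions": ["v1"],
                "operations": ["CREATE", "UPDATE"],
                "resources": ["deployments"],
            }]
        },
        # CEL rule: reject Deployments that request more than 5 replicas.
        "validations": [{"expression": "object.spec.replicas <= 5"}],
    },
}

print(yaml.safe_dump(policy, sort_keys=False))
```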

InitContainers as a Sidecar

InitContainers can now be configured as sidecar containers and kept running alongside normal containers in a Pod. This is only supported by nodes running version 1.29 or later, so ensure all nodes in a cluster are at version 1.29 or later before using this feature in Pods. The feature was long awaited, as evidenced by the fact that Istio has already tested it widely, and the Istio community is working hard to make sure it can be enabled early with minimal disruption for clusters with older nodes. You can participate in the discussion here.
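
As a rough illustration of the mechanism, here is a sketch of a Pod manifest built as a Python dict and rendered to YAML; the container names and images are placeholders, not taken from the post.

```python
import yaml  # pip install pyyaml

# Sketch of a run-to-completion Pod with an init container acting as a
# sidecar. Setting restartPolicy: Always on the init container is what
# keeps it running alongside the main container (nodes must be on 1.29+).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "batch-job-with-sidecar"},
    "spec": {
        "restartPolicy": "Never",
        "initContainers": [{
            "name": "proxy-sidecar",          # placeholder name
            "image": "example/proxy:latest",  # placeholder image
            "restartPolicy": "Always",        # marks it as a sidecar
        }],
        "containers": [{
            "name": "main",
            "image": "example/training-job:latest",  # placeholder image
        }],
    },
}

print(yaml.safe_dump(pod, sort_keys=False))
```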

A big driver for delivering this feature is the growing number of AI/ML workloads, which are often represented by Pods that run to completion. Those Pods need infrastructure sidecars; Istio and GCSFuse are two examples, and Google recognizes this trend.

Implementation of sidecar containers is, and continues to be, a community effort. We are proud to highlight that Googler Sergey Kanzhelev is driving it via the Sidecar working group, and it was a great effort of many other Googlers to make sure this KEP landed so fast. John Howard made sure the early versions of the implementation were tested with Istio, Wojciech Tyczyński ensured a safe rollout via production readiness review, Tim Hockin spent many hours in API review of the feature, and Clayton Coleman gave advice and helped with code reviews.

New APIs

API Priority and Fairness/Flow Control

We are super excited to share that API Priority and Fairness graduated to Stable V1 / GA in 1.29! Controlling the behavior of the Kubernetes API server in an overload situation is a key task for cluster administrators, and this is what APF addresses. This ambitious project was initiated by Googler and founding API Machinery SIG lead Daniel Smith, and expanded to become a community-wide effort. Special thanks to Googler Wojciech Tyczyński and API Machinery members Mike Spreitzer from IBM and Abu Kashem from RedHat for landing this critical feature in Kubernetes 1.29 (more details in the Kubernetes publication). In GKE we tested and utilized it early: in fact, any version above 1.26.4 sets higher kubelet QPS values, trusting the API server to handle the load gracefully.
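
For a sense of what cluster administrators configure, here is a sketch of a PriorityLevelConfiguration using the v1 API that graduated in 1.29; the name and values are assumptions, and a matching FlowSchema would be needed to route requests to this level.

```python
import yaml  # pip install pyyaml

# Sketch of an APF priority level: a small slice of the API server's
# concurrency budget whose excess requests are rejected rather than queued.
priority_level = {
    "apiVersion": "flowcontrol.apiserver.k8s.io/v1",
    "kind": "PriorityLevelConfiguration",
    "metadata": {"name": "low-priority-batch"},  # hypothetical name
    "spec": {
        "type": "Limited",
        "limited": {
            "nominalConcurrencyShares": 5,
            "limitResponse": {"type": "Reject"},
        },
    },
}

print(yaml.safe_dump(priority_level, sort_keys=False))
```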

Deprecations and Removals

  • The previously deprecated v1beta2 Priority and Fairness APIs are no longer served in 1.29, so update usage to v1beta3 before upgrading to 1.29.
  • With the API Priority and Fairness graduation to v1, the v1beta3 Priority and Fairness APIs are newly deprecated in 1.29, and will no longer be served in 1.32.
  • In the Node API, take a look at the changes to the status.kubeProxyVersion field, which will not be populated starting in v1.33. The field is currently populated with the kubelet version, not the kube-proxy version, and might not accurately reflect the kube-proxy version in use. For more information, see KEP-4004.
  • 1.29 removed support for the insecure SHA1 algorithm. To prevent impact on your clusters, you must replace incompatible certificates of webhook servers and extension API servers before upgrading your clusters to version 1.29.
    • GKE will not auto-upgrade clusters with webhook backends using incompatible certificates to 1.29 until you replace the certificates or until version 1.28 reaches end of life. For more information refer to the official GKE documentation.
  • The Ceph CephFS (kubernetes.io/cephfs) and RBD (kubernetes.io/rbd) volume plugins have been deprecated since 1.28 and will be removed in a future release.

Shoutout to the Production Readiness Review (PRR) team

For each new Kubernetes release, a dedicated subgroup of SIG Architecture, composed of very senior contributors in the Kubernetes community, conducts Production Readiness Reviews, going through each feature.

  • OSS Production Readiness Reviews (PRR) reduce toil for all the different Cloud Providers, by shifting the effort onto OSS developers.
  • OSS Production Readiness Reviews surface production safety, observability, and scalability issues with OSS features at design time, when it is still possible to affect the outcomes.
  • By ensuring feature gates, solid enable → disable → enable testing, and attention to upgrade and rollout considerations, OSS Production Readiness Reviews enable rapid mitigation of failures in new features.

As part of this group, we want to thank Googlers John Belamaric and Wojciech Tyczyński for doing this remarkable heavy lifting on unglamorous and often invisible work. Additionally, we’d like to congratulate Googler Joe Betz, who recently graduated as a new PRR reviewer after shadowing the process throughout 2023.

By Jordan Liggitt, Jago Macleod, Sergey Kanzhelev, and Federico Bongiovanni – Google Kubernetes Kernel team

Carbon Limit’s concrete technology is saving the environment using AI

Posted by Lillian Chen – Global Brand and Content Marketing Manager, Google Accelerator Programs

Located in Boca Raton, Carbon Limit aims to decarbonize the concrete industry and take part in saving, protecting, and healing the environment. Cofounder Tim Sperry explains that for him and his cofounders Oro Padron and Christina Stavridi, the mission is personal. “I’ve lost family members [to polluted air]. Oro has his own story, Christina has her own story, and our other core team member Angel just had kids. All of us have our own connection to our mission. And with that, we've developed a really strong company culture,” he says.

Today, Carbon Limit is evolving to create sustainable solutions for the built environment. Their flagship product, CaptureCrete, is an additive that gives concrete the ability to capture and store CO2 directly from the air.

Carbon Limit’s initial prototype — a portable shipping container fitted with solar panels, filtered media, and intake fans — was a direct air capture system. With a business model that was dependent on tax credits and carbon credits, the team decided to pivot. “We took our original technology, which was always meant to capture CO2 to store in concrete as a permanent storage solution to CO2 in the air, and turned that into concrete technology,” explains Tim. “We’re lowering the carbon footprint of concrete projects and problems, and providing the ability to generate valuable carbon credits. It actually pays to use our technology: you’re quantifiably lowering the carbon footprint and improving the environment, and you can make money from these carbon credits.”


How Carbon Limit uses AI

Combating climate change is a race against time, as cofounder and CMO Oro explains: “We are in an industry that moves at a pace that when technology catches up, sometimes it’s too late.”

“We have found that AI actually is not eliminating, it is creating—it is letting our own people discover things about themselves and possibilities that they didn’t know about,” says Oro. “We embrace AI because we are embracing the future, and we strive to be pioneers.”

Artificial intelligence also allows for transparency in a space that can become congested by unreliable data. “We’re developing tools, specifically the digital MRV, which stands for measurement, reporting, and verification of carbon credits,” says Tim. “There is bad press that there’s a lot of fake or unverified carbon credits being sold, generated, or created.” AI gives real-time, real-world data, exposure, and quantification of the carbon credits. Carbon Limit is generating carbon credits with hard tech, bringing trust into tech.


How Carbon Limit uses Google technology

Carbon Limit is a team of developers, programmers, and data scientists working across multiple operating systems, so they needed a centralized system for collaborating. “Google Workspace has allowed us to build our own CRMs with Google Sheets and Google Docs, which we’ve found to be the easiest way to onboard quickly. Google has been an amazing tool for us to communicate internally.” Christina adds, “We have a small but diverse team with ages that vary. Not every single team member is used to using the same tools, so the way Oro has onboarded the team and utilized these tools in a customizable way where they’re easily adoptable and used by every single team member to optimize our work has been super beneficial.”

Additionally, the Carbon Limit team uses Google data for training their CO2-related models, and Google Colab to train them. “We have some models that were made in Python, but utilizing Google Cloud has helped us predict models faster,” says Oro.


Participating in Google for Startups Accelerator: Climate Change

Before Carbon Limit started the Google for Startups Accelerator: Climate Change program, the Carbon Limit team considered integrating artificial intelligence (AI) and machine learning (ML) into their process but wanted to ensure that they were making the right decision. With Google mentorship and support, they went full force with AI and ML algorithms. “Accelerator: Climate Change helped us realize exactly what we needed to do,” says Oro.

Participating in the program also gave Carbon Limit access to resources that helped enhance their SEO. “We learned how to increment our backlinks and how to improve performance, which has been extremely helpful to put us on the map. Our whole backbone has been built thanks to Google Workspace,” says Oro.

“The Google for Startups Accelerator program gave us valuable resources and guidance on what we can do, how we can do it, and what not to do,” says Tim. “The mentorship and learning from people who developed the technology, use the technology, and work with it every day was invaluable for us.” Christina adds, “The mentors also helped us refine our pitch when communicating our solution on different platforms. That was very useful to understand how to speak to different customers and investors.”

The program also led to a new client for Carbon Limit: Google. “That was critical because with Google as an early adopter, that helped us build a significant amount of credibility and validation,” Tim tells us.


What’s next for Carbon Limit

Looking ahead, Carbon Limit will be launching a new technology that can be used in data centers to mitigate electricity use as well as reduce and remove CO2 pollution.

“We went from a carbon capture solution to sustainable solutions because we wanted to go even bigger,” says Tim. “We want to inspire others to do what we’re doing and help create more awareness and a more environmentally friendly world.”

Tim shares, “I love what I do. I love to be able to invent something that didn’t exist. But more importantly, it helps protect my family, my loved ones, future generations, and the environment. And I get to do it with this amazing group of people at Carbon Limit.”

Learn about how to get involved in Google accelerator programs here.

YouTube Ads Creative Analysis

Posted by Brian Craft, Satish Shreenivasa, Huikun Zhang, Manisha Arora and Paul Cubre – gTech Data Science Team


Introduction


Why analyze YouTube ads?

YouTube has billions of monthly logged-in users and every day people watch billions of hours of video and generate billions of views. Businesses can connect with YouTube users using YouTube ads, which are promotional videos that appear on YouTube's website and app, with a variety of video ad formats and goals.

A sample YouTube in-stream skippable video ad

The Challenge

An effective video ad focuses on the ABCDs.

  • Attention: Capturing the viewer's attention till the end.
  • Branding: Helping them hear or visualize the brand.
  • Connection: Making them feel something about the brand.
  • Direction: Encouraging them to take action.

But each YouTube ad has a varying number of components: for instance, objects, background music, or a logo. Each of these components affects the view-through rate (referred to as VTR for the remainder of the post) of the video ad. Therefore, analyzing video ads through the lens of the components in the ad helps businesses understand what about the ad improves VTR. The insights from these analyses can be used to inform the creation of new creatives and to optimize existing creatives to improve VTR.


The Proposal

We propose a machine learning based approach for analyzing a company’s YouTube ads to assess which components affect VTR, for the purpose of optimizing a video ad’s performance. We illustrate how to:

  • Use Google Cloud Video Intelligence API to extract the components of each video ad, using the underlying video files.
  • Transform that extracted data to engineered features that map to actionable business questions.
  • Use a machine learning model to isolate the effect on VTR of each engineered feature.
  • Interpret and act on those insights to improve video ad performance, for instance by altering existing creatives or creating new creatives to be used in an A/B test.

Approach


The Process

The proposed analysis has 5 steps, discussed below.

1. Define Business Questions
Align on a list of business questions that are actionable, for instance “does having a logo in the opening shot affect VTR?” We suggest taking feasibility into account ahead of time; for instance, if a product disclaimer is legally required, there is no reason to assess the impact the disclaimer has on VTR.

2. Raw Component Extraction
Use Google Cloud technologies, such as the Google Cloud Video Intelligence API, and the underlying video files to extract raw components from each video ad. Examples include, but are not limited to, objects appearing in the video at a particular timestamp, the presence of text and its location on the screen, or the presence of specific sounds.

3. Feature Engineering
Using the raw components extracted in step 2, engineer features that align to the business questions defined in step 1. For example, if the business question is “does having a logo in the opening shot affect VTR”, create a feature that labels each video as either 1 (has a logo in the opening shot) or 0 (does not). Repeat this for each feature.

4. Modeling
Create an ML model using the engineered features from step 3, using VTR as the target in the model.

5. Interpretation
Extract statistically significant features from the ML model and interpret their effect on VTR. For example, “there is an xx% observed uplift in VTR when there is a logo in the opening shot.”


Feature Engineering


Data Extraction

Consider two different YouTube video ads for a web browser, each highlighting a different product feature. Ad A has text that says “Built In Virus Protection”, while Ad B has text that says “Automatic Password Saving”.

The raw text can be extracted from each video ad, allowing for the creation of tabular datasets such as the one below. For brevity and simplicity, the example carried forward will deal with text features only and forgo the timestamp dimension.

 Ad   | Detected Raw Text
 -----|---------------------------
 Ad A | Built In Virus Protection
 Ad B | Automatic Password Saving
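
As a sketch of how this extraction might look in Python using the Video Intelligence API's text detection feature (the bucket path is a placeholder for wherever the underlying video file lives):

```python
# pip install google-cloud-videointelligence
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Run text detection on one ad; the gs:// URI is a placeholder.
operation = client.annotate_video(
    request={
        "input_uri": "gs://my-ads-bucket/ad_a.mp4",
        "features": [videointelligence.Feature.TEXT_DETECTION],
    }
)
result = operation.result(timeout=300)

# Each annotation is one piece of detected on-screen text.
for annotation in result.annotation_results[0].text_annotations:
    print(annotation.text)  # e.g. "Built In Virus Protection"
```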


Preprocessing

After extracting the raw components in each ad, preprocessing may need to be applied, such as normalizing case and removing punctuation.

 Ad   | Detected Raw Text         | Processed Text
 -----|---------------------------|---------------------------
 Ad A | Built In Virus Protection | built in virus protection
 Ad B | Automatic Password Saving | automatic password saving
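
A minimal sketch of this preprocessing step in Python:

```python
import string

def preprocess(raw_text: str) -> str:
    """Lowercase the text and strip punctuation so tokens match consistently."""
    return raw_text.lower().translate(
        str.maketrans("", "", string.punctuation)
    ).strip()

print(preprocess("Built In Virus Protection"))  # -> "built in virus protection"
```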


Manual Feature Engineering

Consider a scenario where the goal is to answer the business question, “does having a textual reference to a product feature affect VTR?”

This feature could be built manually by exploring all the text in all the videos in the sample and creating a list of tokens or phrases that indicate a textual reference to a product feature. However, this approach can be time-consuming and limits scaling.

Pseudo code for manual feature engineering
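
In the spirit of that pseudo code, a minimal sketch of the manual approach might look like this; the phrase list is purely hypothetical and would need to be curated by hand:

```python
# Hand-curated phrases (hypothetical) that indicate a product feature callout.
FEATURE_PHRASES = ["virus protection", "password saving", "built in"]

def has_feature_reference(processed_text: str) -> int:
    """Return 1 if any curated phrase appears in the processed ad text."""
    return int(any(phrase in processed_text for phrase in FEATURE_PHRASES))

print(has_feature_reference("built in virus protection"))  # -> 1
```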

AI Based Feature Engineering

Instead of manual feature engineering as described above, the text detected in each video ad creative can be passed to an LLM along with a prompt that performs the feature engineering automatically.

For example, if the goal is to explore the value of highlighting a product feature in a video ad, ask an LLM if the text “‘built in virus protection’ is a feature callout”, followed by asking the LLM if the text “‘automatic password saving’ is a feature callout”.

The answers can be extracted and transformed to a 0 or 1, to later be passed to a machine learning model.

 Ad   | Raw Text                  | Processed Text            | Has Textual Reference to Feature
 -----|---------------------------|---------------------------|---------------------------------
 Ad A | Built In Virus Protection | built in virus protection | Yes
 Ad B | Automatic Password Saving | automatic password saving | Yes
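
A sketch of this LLM-based labeling using the Gemini API via the google-generativeai Python package; the prompt wording, model name, and yes/no parsing are assumptions for illustration:

```python
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro")

def is_feature_callout(processed_text: str) -> int:
    """Ask the LLM whether the ad text is a feature callout; map to 0/1."""
    prompt = (
        "Answer 'yes' or 'no' only. Is the following ad text a product "
        f"feature callout? Text: '{processed_text}'"
    )
    answer = model.generate_content(prompt).text.strip().lower()
    return int(answer.startswith("yes"))

print(is_feature_callout("built in virus protection"))  # expected: 1
```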



Modeling


Training Data

The result of the feature engineering step is a dataframe with columns that align to the initial business questions, which can be joined to a dataframe that has the VTR for each video ad in the sample.

 Ad   | Has Textual Reference to Feature | VTR*
 -----|----------------------------------|------
 Ad A | Yes                              | 10%
 Ad B | Yes                              | 50%


*Values are random and not to be interpreted in any way.

Modeling is done using fixed effects, bootstrapping, and ElasticNet. More information can be found in the post Introducing Discovery Ad Performance Analysis, written by Manisha Arora and Nithya Mahadevan.
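
The linked post covers the full methodology; as a rough sketch of the bootstrapped ElasticNet piece (fixed effects omitted, and the data below invented for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNet
from sklearn.utils import resample

# Invented toy data: an engineered binary feature joined with VTR.
df = pd.DataFrame({
    "has_textual_reference_to_feature": [1, 1, 0, 0, 1, 0],
    "vtr": [0.10, 0.50, 0.08, 0.12, 0.30, 0.15],
})
X, y = df[["has_textual_reference_to_feature"]], df["vtr"]

# Bootstrap the ElasticNet fit to get a distribution over the coefficient.
coefs = []
for _ in range(1000):
    X_boot, y_boot = resample(X, y)
    coefs.append(ElasticNet(alpha=0.01).fit(X_boot, y_boot).coef_[0])

print(f"coefficient: {np.mean(coefs):.4f} (+/- {np.std(coefs):.4f})")
```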

Interpretation

The model output can be used to extract significant features, coefficient values, and standard deviation.

Coefficient Value (+/- X%)
Represents the absolute percentage uplift in VTR. A positive value indicates a positive impact on VTR and a negative value indicates a negative impact.

Significant Value (True/False)
Represents whether the feature has a statistically significant impact on VTR.

 Feature                          | Coefficient* | Standard Deviation* | Significant?*
 ---------------------------------|--------------|---------------------|--------------
 Has Textual Reference to Feature | 0.0222       | 0.000033            | True


*Values are random and not to be interpreted in any way.

In the above hypothetical example, the feature “Has Textual Reference to Feature” has a statistically significant, positive impact on VTR. This can be interpreted as “there is an observed 2.22% absolute uplift in VTR when an ad has a textual reference to a product feature.”

Challenges

Challenges of the above approach are:

  • Interactions among the individual features input into the model are not considered. For example, if “has logo” and “has logo in the lower left” are individual features in the model, their interaction will not be assessed. However, a third feature can be engineered combining the two, such as “has logo + has logo in the lower left”.
  • Inferences are based on historical data and not necessarily representative of future ad creative performance. There is no guarantee that insights will improve VTR.
  • Dimensionality can be a concern, given the number of components in a video ad.

Activation Strategies


Ads Creative Studio

Ads Creative Studio is an effective tool for businesses to create multiple versions of a video by quickly combining text, images, video clips or audio. Use this tool to create new videos quickly by adding/removing features in accordance with model output.

Sample video creation features in Ads Creative Studio

Video Experiments

Design a new creative, varying a component based on the insights from the analysis, and run an AB test. For example, change the size of the logo and set up an experiment using Video Experiments.


Summary


Identifying which components of a YouTube ad affect VTR is difficult, due to the number of components contained in the ad, but there is an incentive for advertisers to optimize their creatives to improve VTR. Google Cloud technologies, GenAI models, and ML can be used to answer creative-centric business questions in a scalable and actionable way. The resulting insights can be used to optimize YouTube ads and achieve business outcomes.


Acknowledgements

We would like to thank our collaborators at Google, specifically Luyang Yu, Vijai Kasthuri Rangan, Ahmad Emad, Chuyi Wang, Kun Chang, Mike Anderson, Yan Sun, Nithya Mahadevan, Tommy Mulc, David Letts, Tony Coconate, Akash Roy Choudhury, Alex Pronin, Toby Yang, Felix Abreu and Anthony Lui.

Congratulations to the winners of Google’s Immersive Geospatial Challenge

Posted by Bradford Lee – Product Marketing Manager, Augmented Reality, and Ahsan Ashraf – Product Marketing Manager, Google Maps Platform

In September, we launched Google's Immersive Geospatial Challenge on Devpost where we invited developers and creators from all over the world to create an AR experience with Geospatial Creator or a virtual 3D immersive experience with Photorealistic 3D Tiles.

"We were impressed by the innovation and creativity of the projects submitted. Over 2,700 participants across 100+ countries joined to build something they were truly passionate about and to push the boundaries of what is possible. Congratulations to all the winners!" 

 Shahram Izadi, VP of AR at Google

We judged all submissions on five key criteria:

  • Functionality - How are the APIs used in the application?
  • Purpose - What problem is the application solving?
  • Content - How creative is the application?
  • User Experience - How easy is the application to use?
  • Technical Execution - How well are you showcasing Geospatial Creator and/or Photorealistic 3D Tiles?

Many of the entries are working prototypes, which our judges thoroughly enjoyed experiencing and interacting with. Thank you to everyone who participated in this hackathon.



From our outstanding list of submissions, here are the winners of Google’s Immersive Geospatial Challenge:


Category: Best of Entertainment and Events

Winner, AR Experience: World Ensemble

Description: World Ensemble is an audio-visual app that positions sound objects in 3D, creating an immersive audio-visual experience.


Winner, Virtual 3D Experience: Realistic Event Showcaser

Description: Realistic Event Showcaser is a fully configurable and immersive platform to customize your event experience and showcase its unique location stories and charm.


Winner, Virtual 3D Experience: navigAtoR

Description: navigAtoR is an augmented reality app that is changing the way you navigate through cities by providing a 3-dimensional map of your surroundings.



Category: Best of Commerce

Winner, AR Experience: love ya

Description: love ya showcases three user scenarios for a special time of year that connect local businesses with users.



Category: Best of Travel and Local Discovery

Winner, AR Experience: Sutro Baths AR Tour

Description: This app offers a guided tour through the Sutro Baths historical landmark using an illuminated walking path, information panels with text and images, and a 3D rendering of how the Sutro Baths swimming pool complex would have appeared to those attending.


Winner, Virtual 3D Experience: Hyper Immersive Panorama

Description: Hyper Immersive Panorama uses real-time facial detection to allow the user to look left, right, up, or down in the virtual 3D environment.


Winner, Virtual 3D Experience: The World is Flooding!

Description: The World is Flooding! allows you to visualize a 3D, realistic flooding view of your neighborhood.


Category: Best of Productivity and Business

Winner, AR Experience: GeoViz

Description: GeoViz revolutionizes architectural design, allowing users to create, modify, and visualize architectural designs in their intended context. The platform facilitates real-time collaboration, letting multiple users contribute to designs and view them in AR on location.



Category: Best of Sustainability

Winner, AR Experience: Geospatial Solar

Description: Geospatial Solar combines the Google Geospatial API with the Google Solar API for instant analysis of a building's solar potential by simply tapping it.


Winner, Virtual 3D Experience: EarthLink - Geospatial Social Media

Description: EarthLink is the first geospatial social media platform that uses 3D photorealistic tiles to enable users to create and share immersive experiences with their friends.


Honorable Mentions

In addition, we have five projects that earned honorable mentions:

  1. Simmy
  2. FrameView
  3. City Hopper
  4. GEOMAZE - The Urban Quest
  5. Geospatial Route Check

Congratulations to the winners and thank you to all the participants! Check out all the amazing projects submitted. We can't wait to see you at the next hackathon.

Finding Stability in Open Source Work

At Google, open source is at the core of our infrastructure, processes, and culture. For the last 19 years, Google’s Open Source Programs Office (OSPO) has enabled our organization to support open source ecosystems through funding, training, mentorship and direct contribution. Every year for the last 5 years, roughly 10% of our workforce has contributed to open source projects as part of their work as well as in their personal time. We’re focused on investing in and protecting open source communities and infrastructure, as well as expanding access to open source opportunities around the world. Every day we seek to promote open and connected ecosystems as the foundation of technological advancement.

For the last four years, researchers in Google's Open Source Programs Office (OSPO) have analyzed our open source contribution activity annually to identify trends and changes in behavior. The goal of this effort has been to increase transparency and accountability across all of the communities we engage with, as well as provide feedback indicators for Alphabet’s internal tools, processes, and policies. In this iteration, our 2022 open source contribution metrics were remarkably consistent with what we found in 2021, which gives us confidence that what we're measuring is a good representation of open source behavior, especially after the extreme outlier year of 2020.


Security remains a priority

At Alphabet, open source software remains a critical component of our infrastructure, products, and services and we continue to rely on the health and availability of open source projects. Through internal efforts and collaboration with industry-led efforts such as OpenSSF, Alphabet is committed to bolstering the security posture of projects, users, and developers of open source software.

In 2021, Google began funding two Linux Foundation contractors to focus exclusively on security, and in 2022 we've continued to sponsor their work to eliminate fragile C language features and APIs in the kernel. We also continue to support the Rust-in-Linux project, with the goal of improving memory safety, strengthening APIs, and reducing the number of bugs overall in the project. In late 2022, Rust infrastructure support landed in the upstream kernel.

The deps.dev project released a public BigQuery dataset, allowing anyone to explore and analyze the dependencies, advisories, ownership, license, and other metadata of open source packages across supported ecosystems, and explore how this metadata has changed over time.
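
As a sketch of how anyone might query that dataset from Python (the table and column names here are assumptions; verify them against the dataset's documentation):

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

# Table and column names are assumptions about the public deps.dev dataset.
query = """
    SELECT Name, Version, System
    FROM `bigquery-public-data.deps_dev_v1.PackageVersions`
    WHERE System = 'NPM'
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.Name, row.Version)
```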

In 2022 we announced:

  • The OSV-Scanner, a free tool enabling open source developers and users to identify and remediate known vulnerabilities in their project's OSS dependencies. The OSV-Scanner provides a supported frontend to the OSV database which connects a project’s list of dependencies with the vulnerabilities that affect them.
  • The GOSST Upstream Team, a dedicated staff of Google open source security engineers who spend 100% of their time working closely with upstream maintainers to improve the security of critical open source projects.
  • Graph for Understanding Artifact Composition (GUAC), which aggregates software security metadata into a high-fidelity graph database, normalizing entity identities and mapping standard relationships between them.

Our contributions continue to scale with our growing workforce

In 2022, roughly 10% of Alphabet's full-time workforce contributed to open source projects hosted on GitHub or Git-on-Borg, our internal production Git service (more details below). This percentage has remained roughly consistent over the last five years, indicating that our open source contribution has continued to scale with the growth of Alphabet. Similar to last year, FTEs represented over 95% of our open source workers, while the remainder includes vendors, independent contractors, temporary staff, and interns who contributed to open source projects during their tenure at Alphabet.

As open source work is core to our ongoing operations, we continue to track engagement over time, helping to compare continuous and sporadic participation. On average, over 45% of our active* contributing population for the year logged an activity on GitHub or Git-on-Borg in an average month (see Figure 1).

Figure 1: Alphabet's monthly active users on GitHub and Git-on-Borg. Over the last five years, monthly active users have continued to increase on both platforms by more than 15% year over year.

Our portfolio of projects remains active

We estimate that more than 2000 projects that originated from Alphabet teams and employees were still active* (not archived). To make this estimate, we chose a broad and variable definition of an open source project, including developer tools, utilities, languages, frameworks, libraries, demos, sample code, models, raw data, designs, and more.

Project counts should not be confused with repositories as projects can include many repositories. Within Alphabet, we maintain over 7500 public repositories on GitHub and 1600 public repositories on Git-on-Borg. Our total repositories under management have reduced over time with the enforcement of a new archiving policy that flags repositories for archiving based on activity levels and owner feedback. Most of these repositories are open to outside contribution: more than 500,000 unique GitHub accounts not affiliated with Alphabet workers contributed to Alphabet projects in 2022.

The majority of our open source work happens outside of Alphabet organizations

The majority of repositories we work on are outside of Alphabet organizations: over the last five years, more than 70% of non-personal GitHub repositories Alphabet contributors interacted with were outside of Google-managed organizations. We updated the methodology behind this metric since our last edition to filter out forks created in the pull request workflow. The top projects (by unique contributors at Alphabet) include Google-initiated projects such as Kubernetes, Apache Beam, and gRPC, as well as community-led projects such as LLVM, Envoy, and Rust.


We continue to invest in the sustainability of open source ecosystems

The mission of the Google Open Source Programs Office remains the same: we sponsor, create, and invest in projects and programs that enable everyone to join and contribute to the global open source ecosystem. In 2022, OSPO provided $5.7M in membership fees and sponsorship funding to 60 key open source projects and organizations. This funding was in addition to our established annual programs:

  • In its 18th year, Google Summer of Code enabled more than 1000 individuals to contribute to more than 150 organizations. Over the lifetime of this program, more than 19,000 individuals from 112 countries have contributed to more than 800 open source organizations across the globe.
  • In its fourth year, Google Season of Docs provided direct grants to 30 open source projects to hire more than 50 technical writers to improve open source project documentation, and published its second case study report highlighting useful open source documentation metrics. More than half of the documentation created in the 2022 program consisted of how-tos, tutorials, and reference documentation; projects primarily wanted to add documentation for missing use cases and fix disorganized documentation.
  • Since 2011, the Google Open Source Peer Bonus Program has awarded bonuses for open source contributions to members of our extended community. In 2022 more than 300 contributors received awards, working in over 40 countries on more than 200 open source projects.

Our open source work will continue to grow and evolve to support the changing needs of our communities. Thank you to our colleagues and community members who continue to dedicate their personal and professional time supporting the open source ecosystem. Follow our work at opensource.google.

By Sophia Vargas – Researcher, Google Open Source Programs Office


About this data:

This report features metrics provided by many teams and programs across Alphabet. Regarding the code and code-adjacent activity data, we wanted to share more details about the derivation of those metrics.

2022 updates: This year, we decided to remove event counts as it is increasingly difficult to differentiate automated activities from human-centered work. Even after filtering out non-human accounts, we couldn’t correlate these events to employee time spent on open source projects, and so we reduced our reporting to focus on our population and scope of effort.

  • Data sources: These data represent activities on repositories hosted on GitHub and our internal production Git service Git-on-Borg. These sources represent a subset of open source activity currently tracked by Google OSPO.
    • GitHub: We continue to use GitHub Archive as the primary source for GitHub data, which is available as a public dataset on BigQuery. Alphabet activity within GitHub is identified by self-registered accounts, which we estimate underreports actual activity.
    • Git-on-Borg: This is our primary platform for internal projects and some of our larger, long running public projects such as Android and Chromium. While we continue to develop on this platform, most of our open source activity has moved to GitHub to increase exposure and encourage community growth.
    • Distinct event types: Note that Git-on-Borg and GitHub APIs produce distinct sets of events—so we report activity metrics per platform. Where GitHub Event logs capture a wide range of activity from code creation and review to issue creation and comments, the Gerrit Event stream (used by Git-on-Borg) only captures code changes and reviews.
  • Driven by humans: We have created many automated bots and systems that can propose changes on various hosting platforms. We have intentionally filtered these data to focus on human-initiated activities.
  • Business and personal: Activity on GitHub reflects a mixture of Alphabet projects, third party projects, experimental efforts, and personal projects. Our metrics report on all of the above unless otherwise specified.
  • Alphabet contributors: Please note that unless additional detail is specified, activity counts attributed to Alphabet open source contributors will include our full-time employees as well as our extended Alphabet community (temps, vendors, contractors, and interns).
  • GitHub Accounts: For counts of GitHub accounts not affiliated with Alphabet, we cannot assume that one account is equivalent to one person, as multiple accounts could be tied to one individual or bot account.
  • *Active counts: Where possible, we will show ‘active users’ defined by logged activity (excluding ‘WatchEvent’) within a specified timeframe (a month, year, etc.) and ‘active repositories’ and ‘active projects’ as those that have enough activity to meet our internal criteria and have not been archived.

Celebrating 25 years of Google Search: developer trends and history

Posted by Google for Developers

This month, Google Search turns 25. A lot has changed over the last quarter of a century when it comes to the development space, but one thing has remained a constant - whether you’re stuck on a problem, reading documentation, learning about new technology, or figuring out the best tech stack for your project, Search has been a helpful tool in getting your questions answered.

What you searched for is a strong signal when it comes to developer trends across web, mobile, cloud, and AI over the years. Let’s take a look at some of the interesting things you’ve looked up* – and some funny queries too – because everyone loves a good retrospective.

*Note: Google Trends data goes as far back as 2004.


Building a better web

After the dot-com bubble popped in 2000–2001, the web continued to advance and the internet exploded. Web development responded by enabling designers to incorporate multimedia into web pages. Cascading Style Sheets (CSS, released in 1997) and Flash video (1996-2017) changed the way web pages looked and moved, and streaming changed the way people consumed video. However, the basic interface and structure of the web page remained the same. With the variety of browsers that came to market, JavaScript frameworks and libraries rose alongside them, since JavaScript runs everywhere together with CSS and HTML. All these shifts led to some fun searches.

How to center a div

You can’t think of web development without CSS. And it turns out, “how to center a div” has been searched for from the beginning. It’s also provided the internet with a wealth of memes over the years.

JavaScript libraries

JavaScript is a front-end programming language that is used to add interactivity and dynamic behavior to web pages. It is one of the most popular programming languages in the world, and it is essential for building modern web applications. But at some point, most developers have to ask themselves what kind of JavaScript they should use. Vanilla? A framework? A library?

Starting in 2007 there was an uptick of searches for jQuery, which peaked in 2013 and started to fall after that. Meanwhile, developers started to show more interest in React and Angular right around the same time as jQuery’s peak. By April of 2018 they all had a similar volume of searches, and soon after React took over, followed by Angular. Nigeria searched for React the most, while Japan preferred jQuery, and Ecuador preferred Angular. Nowadays, the choice of JavaScript framework is the subject of a lot of controversy. What's your favorite? Share your thoughts with us.

Search term volume for “React”, “jQuery”, and “Angular” from 2004 to present day


The rise of mobile

As the web improved, so did mobile. Phones went from cellular to smart. The app economy blossomed. Due to limited infrastructure and financial constraints, many emerging markets in Asia, Africa, and Latin America skipped the desktop era in favor of mobile to get their information and entertainment. Mobile development, Android in particular, kicked into high gear as a response.

Android development

In 2007, Android was released as a developer platform before devices were on the market, along with the first Android Developer Challenge, which launched to support and recognize developers who build great applications. In 2008, the Android OS was released and open sourced, along with T-Mobile’s G1 as the first smartphone to run Android. That same year, the Android Market was released, giving developers an easy way to distribute apps to the Android community. In 2012, the marketplace was rebranded as Google Play. All of this momentum helped add to the frenzy, but searches really took off starting in 2012.

Search term volume for “Android development” from 2007 to 2012

Mobilegeddon

Even web developers couldn’t escape the importance of mobile in its heyday. By 2010, “mobile-first” and “responsive design” became best practices for the web in order to support mobile traffic. As a response to the clear indication that mobile wasn’t going anywhere, by 2015, Google’s search ranking algorithm changed to favor content that is mobile-friendly. After the change was dubbed ‘Mobilegeddon’ by Chuck Price in a post written on Search Engine Watch, developers quickly searched for the term and adjusted their best practices such as responsive and mobile-first design. By 2017, mobile traffic accounted for approximately half of web traffic worldwide, before permanently surpassing desktop traffic in 2020.


Moving to the cloud

Over the last 25 years, cloud development has evolved from a niche technology to a mainstream solution for organizations of all sizes. Being free from managing infrastructure and operations provides a number of advantages like cost savings, speed, and scalability. In the early days, it was mainly used for hosting static websites and applications. But as technology matured, it became increasingly popular for a wider range of applications, including IoT, big data, real-time data, and ML in addition to more modern development practices like containers, microservices, and security.

Cloud computing

As development continued to modernize, developers, IT, and operations figured out fairly quickly that managing infrastructure and servers was painful and expensive. In response, many cloud environment providers launched between 2002-2010, including Google Cloud Platform.

Search term volume for “cloud computing” from 2004 to 2012

Cloud databases

Cloud services extend to storage, databases, and so much more – a necessity as technology becomes more robust, supporting large amounts of data in real time from IoT devices or use cases like ML and large language models. While there were searches for the term “cloud database” as far back as 2004, it spiked in 2017, coinciding with Google Cloud’s Cloud Spanner. And with the latest renaissance of AI technology, it’s pretty likely that this search term will keep going up in the coming months and years.


Present day innovations

Disruptive developer technologies like artificial intelligence and machine learning are infused in development today. From AI-assisted coding to solving problems leveraging big data, AI is permeating our lives. So it’s no wonder developers are searching for some key terms.

Artificial intelligence, machine learning, and more

While some applications of AI, ML, deep learning, and large language models (LLMs) are new, most of the terms aren’t. Even in 2004, AI and ML were search terms of interest. In 2015, most of these terms started to pick back up and continue to trend upwards, with a sharp spike in interest in 2022. That same year, ‘generative AI’ was formally introduced to the world. Python is the most searched coding language closely associated with AI, becoming the most popularly searched language in 2019, finally surpassing Java.

Search term volume for “artificial intelligence”, “machine learning”, “deep learning”, and “generative AI” from 2004 to present day

Looking ahead

While some aspects of development have gotten progressively cleaner, more modern, and more lightweight, there’s now more choice and complexity when it comes to your tech stack. So it’s no wonder “why is my code not working” spiked in both the early days and today. At Google, we’ll do our best to help streamline and simplify technology to help you build smarter and ship faster with new technology like Project IDX, Android Studio Bot, and coding for Bard.

Search term volume for “why is my code not working?” from 2004 to present day

It’s inspiring to see what you have done with the answers to your questions, whether you’re trying to solve specific problems, learning new skills or best practices, figuring out what technology you want to use, or dreaming up your next big idea. We look forward to seeing what the next 25 years bring.

Follow more developer trends and insights on Google for Developers across YouTube, LinkedIn, and Instagram.