In today’s mobile-first world, app publishers who use banner ads must serve them across a greater variety of screen sizes and layouts than ever before. Existing responsive banner ad formats often produce ads that are too small and not optimally tailored to the specifications of each device.
To address this, we’ve created a new banner type called adaptive anchor banners. These banners dynamically adjust creative size to deliver an ad that is ideally sized across all of your users’ devices, without the need to write any custom code.
These banners are designed to replace standard 320x50 and leaderboard banner sizes, as well as smart banners. Here is a comparison of the three formats on a standard mobile device:
Migrating your banner implementation to adaptive
Here are a few simple steps to update your banner implementation to use adaptive banners:
- Ensure your UI supports a variable-height banner. Depending on the constraints or layout mechanism you use to position your banner, you may need to remove height constraints so that the layout accepts a variable content size.
- On iOS, constrain your banner in terms of X and Y position. You may also give it a width constraint, but ensure that any height constraint or content size is a placeholder only. Note that the maximum height is 15% of the device height or 90px, whichever is smaller.
- Use the adaptive banner ad size APIs to get an adaptive ad size. The adaptive ad size APIs are available for different orientations.
Which one you use depends on your use case. If you want to preload ads for a given orientation, use the API for that orientation. If you only need a banner for the current orientation of the device, use the current orientation API.
Once you have an ad size, set that on your banner view as usual before loading an ad. The banner will resize to the adaptive ad size as long as you have laid it out without any conflicting constraints.
- Update your mediation adapters. If you use mediation, update your mediation adapters to the latest version. All open source mediation adapters that support banners have been updated to support the adaptive banner ad size requests. Note that adapters will still only return ad sizes supported by their corresponding ad network SDK, and those ads will be centered in your adaptive banner view.
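The sizing rule mentioned in the steps above (15% of the device height, capped at 90px) is simple to sketch. Here it is in Go, purely as an illustration; the function name is ours and is not part of any SDK:

```go
package main

import (
	"fmt"
	"math"
)

// adaptiveMaxHeight returns the maximum height an adaptive anchor
// banner may occupy: 15% of the device height, capped at 90px.
func adaptiveMaxHeight(deviceHeightPx float64) float64 {
	return math.Min(0.15*deviceHeightPx, 90)
}

func main() {
	// A tall phone: 15% of 800px would be 120px, so the 90px cap applies.
	fmt.Println(adaptiveMaxHeight(800)) // 90
	// A short viewport: 15% of 500px is 75px, under the cap.
	fmt.Println(adaptiveMaxHeight(500)) // 75
}
```

In practice you never compute this yourself; the adaptive ad size APIs return the correct size for the device, but the rule explains why your layout must tolerate a variable banner height.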
Review our developer resources
For further information including detailed implementation guidance, review our developer resources:
- Adaptive banner guide (AdMob iOS | AdMob Android | AdMob Unity | Ad Manager iOS | Ad Manager Android)
- Adaptive banner sample app (AdMob iOS | AdMob Android | Ad Manager iOS | Ad Manager Android)
As always, please reach out on our developer forum if you have any questions.
Source: Google Ads Developer Blog
You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.
If you find a new issue, please let us know by filing a bug.
Source: Google Chrome Releases
Last month’s #AndroidDevSummit was jam-packed with announcements and technical news...so much that we wouldn’t be surprised if you missed something. So all this month, we’ll be diving into key areas from throughout the summit so you don’t miss anything. First up, we’re spotlighting Kotlin, with the top things you should know:
#1: Kotlin momentum on Android
Kotlin is at the heart of modern Android development — and we’ve been excited to see how quickly it has won over developers around the world. At Android Dev Summit we announced that nearly 60% of the top 1000 Android apps on the Play Store now use Kotlin, and we’re seeing more developers adopt it every day. Kotlin has helpful features like null safety, data classes, coroutines, and complete interoperability with the Java programming language. We’re doubling down on Kotlin with more Kotlin-first APIs even beyond AndroidX — we just released KTX extensions, including coroutines support, for Play Core. There’s never been a better time to give Kotlin a try.
#2: Learn more: Getting started with Kotlin & diving into advanced Kotlin with coroutines
If you’re introducing Kotlin into an existing codebase, chances are that you’ll be calling the Java programming language from Kotlin and vice versa. At Android Dev Summit, developer advocates Murat Yener, Nicole Borrelli, and Wenbo Zhu took a look at how nullability, getters, setters, default parameters, exceptions, and more work across the two languages.
For those looking into more advanced Kotlin topics, we recommend watching Jose Alcérreca's and Yigit Boyar's talk that explains how coroutines and Flow can fit together with LiveData in your app's architecture and one on testing coroutines by Sean McQuillan and Manuel Vivo.
#3: Get certified in Kotlin
We announced the launch of our Associate Android Developer certification in Kotlin. Now you can prove your proficiency with modern Kotlin development on Android to your coworkers, your professional network, or even your future employer. As part of this launch, you can take this exam at a discount when using the code ADSCERT99 through January 25.
It’s especially great to hear from you, the Android community, at events like Android Dev Summit: what you want to hear more about, and how we can help with something you’re working on. We asked you to submit your burning questions on Twitter and the livestream, and developer advocates Florina Muntenescu and Sean McQuillan answered your Kotlin and coroutines questions live during our #AskAndroid segment:
You can find the entire playlist of Android Dev Summit sessions and videos here. We’ll continue to spotlight other areas later this month, so keep an eye out and follow Android Developers on Twitter. Thanks so much for letting us be a part of this experience with you!
Java is a registered trademark of Oracle and/or its affiliates.
Source: Android Developers Blog
On-device machine learning (ML) is an essential component in enabling privacy-preserving, always-available and responsive intelligence. This need to bring on-device machine learning to compute and power-limited devices has spurred the development of algorithmically-efficient neural network models and hardware capable of performing billions of math operations per second, while consuming only a few milliwatts of power. The recently launched Google Pixel 4 exemplifies this trend, and ships with the Pixel Neural Core that contains an instantiation of the Edge TPU architecture, Google’s machine learning accelerator for edge computing devices, and powers Pixel 4 experiences such as face unlock, a faster Google Assistant and unique camera features. Similarly, algorithms, such as MobileNets, have been critical for the success of on-device ML by providing compact and efficient neural network models for mobile vision applications.
Today we are pleased to announce the release of source code and checkpoints for MobileNetV3 and the Pixel 4 Edge TPU-optimized counterpart MobileNetEdgeTPU model. These models are the culmination of the latest advances in hardware-aware AutoML techniques as well as several advances in architecture design. On mobile CPUs, MobileNetV3 is twice as fast as MobileNetV2 with equivalent accuracy, and advances the state-of-the-art for mobile computer vision networks. On the Pixel 4 Edge TPU hardware accelerator, the MobileNetEdgeTPU model pushes the boundary further by improving model accuracy while simultaneously reducing the runtime and power consumption.
In contrast with the hand-designed previous version of MobileNet, MobileNetV3 relies on AutoML to find the best possible architecture in a search space friendly to mobile computer vision tasks. To most effectively exploit the search space we deploy two techniques in sequence — MnasNet and NetAdapt. First, we search for a coarse architecture using MnasNet, which uses reinforcement learning to select the optimal configuration from a discrete set of choices. Then we fine-tune the architecture using NetAdapt, a complementary technique that trims under-utilized activation channels in small decrements. To provide the best possible performance under different conditions we have produced both large and small models.
|Comparison of accuracy vs. latency for mobile models on the ImageNet classification task using the Google Pixel 4 CPU.|
The MobileNetV3 search space builds on multiple recent advances in architecture design that we adapt for the mobile environment. First, we introduce a new activation function called hard-swish (h-swish) which is based on the Swish nonlinearity function. The critical drawback of the Swish function is that it is very inefficient to compute on mobile hardware. So, instead we use an approximation that can be efficiently expressed as a product of two piecewise linear functions.
Second, we introduce a mobile-friendly version of the squeeze-and-excitation block, which replaces the classical sigmoid function with a piecewise linear approximation.
Combining h-swish plus mobile-friendly squeeze-and-excitation with a modified version of the inverted bottleneck structure introduced in MobileNetV2 yielded a new building block for MobileNetV3.
|MobileNetV3 extends the MobileNetV2 inverted bottleneck structure by adding h-swish and mobile friendly squeeze-and-excitation as searchable options.|
The following parameters of the building block are searchable:
- Size of expansion layer
- Degree of squeeze-excite compression
- Choice of activation function: h-swish or ReLU
- Number of layers for each resolution block
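The two activations described above are easy to state in code. A minimal sketch in Go (function names are ours; the production implementations live in TensorFlow): ReLU6 clamps its input to [0, 6], the hard sigmoid is ReLU6(x + 3) / 6, and h-swish is x times the hard sigmoid.

```go
package main

import (
	"fmt"
	"math"
)

// relu6 clamps x to the range [0, 6].
func relu6(x float64) float64 {
	return math.Min(math.Max(x, 0), 6)
}

// hardSigmoid is the piecewise linear approximation of the sigmoid
// used in the mobile-friendly squeeze-and-excitation block.
func hardSigmoid(x float64) float64 {
	return relu6(x+3) / 6
}

// hSwish approximates the Swish nonlinearity as a product of two
// piecewise linear functions: x * ReLU6(x + 3) / 6.
func hSwish(x float64) float64 {
	return x * hardSigmoid(x)
}

func main() {
	fmt.Println(hSwish(4))      // 4 (acts as the identity for x >= 3)
	fmt.Println(hardSigmoid(0)) // 0.5
}
```

Because both pieces are built from clamps and multiplies, they map efficiently onto mobile hardware, which is the whole point of the approximation.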
In addition to classification models, we also introduced MobileNetV3 object detection models, which reduced detection latency by 25% relative to MobileNetV2 at the same accuracy for the COCO dataset.
In order to optimize MobileNetV3 for efficient semantic segmentation, we introduced a low-latency segmentation decoder called Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). This new decoder contains three branches, one for low resolution semantic features, one for higher resolution details, and one for light-weight attention. The combination of LR-ASPP and MobileNetV3 reduces the latency by over 35% on the high resolution Cityscapes dataset.
MobileNet for Edge TPUs
The Edge TPU in Pixel 4 is similar in architecture to the Edge TPU in the Coral line of products, but customized to meet the requirements of key camera features in Pixel 4. The accelerator-aware AutoML approach substantially reduces the manual process involved in designing and optimizing neural networks for hardware accelerators. Crafting the neural architecture search space is an important part of this approach and centers around the inclusion of neural network operations that are known to improve hardware utilization. While operations such as squeeze-and-excite and swish non-linearity have been shown to be essential in building compact and fast CPU models, these operations tend to perform suboptimally on Edge TPU and hence are excluded from the search space. The minimalistic variants of MobileNetV3 also forgo the use of these operations (i.e., squeeze-and-excite, swish, and 5x5 convolutions) to allow easier portability to a variety of other hardware accelerators such as DSPs and GPUs.
The neural network architecture search, incentivized to jointly optimize the model accuracy and Edge TPU latency, produces the MobileNetEdgeTPU model that achieves lower latency for a fixed accuracy (or higher accuracy for a fixed latency) than existing mobile models such as MobileNetV2 and minimalistic MobileNetV3. Compared with the EfficientNet-EdgeTPU model (optimized for the Edge TPU in Coral), these models are designed to run at a much lower latency on Pixel 4, albeit at the cost of some loss in accuracy.
Although reducing the model’s power consumption was not a part of the search objective, the lower latency of the MobileNetEdgeTPU models also helps reduce the average Edge TPU power use. The MobileNetEdgeTPU model consumes less than 50% the power of the minimalistic MobileNetV3 model at comparable accuracy.
|Left: Comparison of the accuracy on the ImageNet classification task between MobileNetEdgeTPU and other image classification networks designed for mobile when running on the Pixel 4 Edge TPU. MobileNetEdgeTPU achieves higher accuracy and lower latency compared with other models. Right: Average Edge TPU power in Watts for different classification models running at 30 frames per second (fps).|
The MobileNetEdgeTPU classification model also serves as an effective feature extractor for object detection tasks. Compared with MobileNetV2-based detection models, MobileNetEdgeTPU models offer a significant improvement in model quality (measured as mean average precision, mAP) on the COCO14 minival dataset at comparable runtimes on the Edge TPU. The MobileNetEdgeTPU detection model has a latency of 6.6 ms and achieves an mAP score of 24.3, while MobileNetV2-based detection models achieve an mAP of 22 and take 6.8 ms per inference.
The Need for Hardware-Aware Models
While the results shown above highlight the power, performance, and quality benefits of MobileNetEdgeTPU models, it is important to note that the improvements arise due to the fact that these models have been customized to run on the Edge TPU accelerator.
When running on a mobile CPU, however, MobileNetEdgeTPU delivers inferior performance compared with models that have been tuned specifically for mobile CPUs (MobileNetV3). MobileNetEdgeTPU models perform a much greater number of operations, so it is not surprising that they run slower on mobile CPUs, which exhibit a more linear relationship between a model’s compute requirements and its runtime.
|MobileNetV3 is still the best performing network when using mobile CPU as the deployment target.|
The MobileNetV3 and MobileNetEdgeTPU code, as well as both floating point and quantized checkpoints for ImageNet classification, are available at the MobileNet GitHub page. Open source implementations of MobileNetV3 and MobileNetEdgeTPU object detection are available in the TensorFlow Object Detection API. An open source implementation of MobileNetV3 semantic segmentation is available in TensorFlow through DeepLab.
This work is made possible through a collaboration spanning several teams across Google. We’d like to acknowledge contributions from Berkin Akin, Okan Arikan, Gabriel Bender, Bo Chen, Liang-Chieh Chen, Grace Chu, Eddy Hsu, John Joseph, Pieter-jan Kindermans, Quoc Le, Owen Lin, Hanxiao Liu, Yun Long, Ravi Narayanaswami, Ruoming Pang, Mark Sandler, Mingxing Tan, Vijay Vasudevan, Weijun Wang, Dong Hyuk Woo, Dmitry Kalenichenko, Yunyang Xiong, Yukun Zhu and support from Hartwig Adam, Blaise Agüera y Arcas, Chidu Krishnan and Steve Molloy.
Source: Google AI Blog
Research shows the potential impact of FAW on continent-wide maize yield lies between 8.3 and 20.6 million tonnes per year (of a total expected production of 39 million tonnes per year), with losses of between US$2.48 billion and US$6.19 billion per year (of a US$11.59 billion annual expected value). The impact of FAW is far-reaching, and it is now reported in many countries around the world.
Agriculture is the backbone of Uganda’s economy, employing 70% of the population. It contributes to half of Uganda’s export earnings and a quarter of the country’s gross domestic product (GDP). Fall armyworm poses a great threat to our livelihoods. We are a small group of like-minded developers living and working in Uganda. Most of our relatives grow maize, so the impact of the worm was very close to home. We really felt like we needed to do something about it. The vast damage and yield losses in maize production due to FAW got the attention of global organizations, who are calling for innovators to help. It is the perfect time to apply machine learning. Our goal is to build an intelligent agent to help local farmers fight this pest in order to increase our food security.
Our Google Developer Group (GDG) in Mbale hosted study jams based on the Machine Learning Crash Course in May 2018, alongside several other codelabs. This is where we first got hands-on experience using TensorFlow, and where the foundations were laid for the Farmers Companion app. Finally, we felt that an intelligent solution to help farmers had been conceived.
Equipped with this knowledge and belief, the team embarked on collecting training data from nearby fields. This was done using a smartphone to take images, with the help of some GDG Mbale members. With farmers miles from town, and many fields inaccessible by road (not to mention the floods), this was not as simple as we had first hoped. To hinder us further, our smartphones were (and still are) the only storage we had, limiting the number of images we could capture in a day.
But we persisted! Once gathered, the images were sorted, one at a time, and categorized. With TensorFlow we re-trained a MobileNet model, a technique known as transfer learning. We then used the TensorFlow Lite converter to generate a TensorFlow Lite FlatBuffer file, which we deployed in an Android app. We started with about 3,956 images, but our dataset is constantly growing. We are actively collecting more data to improve our model’s accuracy. The improvements in TensorFlow, with its high-level Keras APIs, have made our approach to deep learning easy and enjoyable, and we are now experimenting with TensorFlow 2.0.
The app is simple for the user. Once installed, the user focuses the camera through the app, on a maize crop. Then an image frame is picked and, using TensorFlow Lite, the image frame is analysed to look for Fall armyworm damage. Depending on the results from this phase, a suggestion of a possible solution is given.
The app is available for download and is constantly being updated as we encourage local farmers to adopt and use it. We strive to ensure a world with #ZeroHunger and believe technology can do a lot to help us achieve this.
We have so far been featured on a national TV station in Uganda, and participated in #hackAgainstHunger and ‘The International Symposium on Agricultural Innovations for Family Farmers’, organized by the Food and Agriculture Organization of the United Nations, where our solution was highlighted. More recently, Google highlighted our work with this film:
We have embarked on scaling the solution to coffee and cassava diseases, and will slowly move on to more crops. We have also introduced virtual reality to showcase good farming practices and training to farmers.
Our plan is to collect more data and to scale the solution to handle more pests and diseases. We are also shifting to cloud services and Firebase to improve and serve our model better despite the lack of resources. With improved hardware and greater localised understanding, there's huge scope for Machine Learning to make a difference in the fight against hunger.
Source: Google Developers Blog
Today’s post is all about Sandro León. Read on!
|Sandro posing in his Noogler hat|
I grew up in Centerville, Ohio, with three sisters, Viviana, Sonia, and Angela. My parents, Alfredo and Emilia, both proud Mexican immigrants, made sure that I knew my heritage, and felt proud of it. Growing up, my sisters and I would help out, working at our parents’ Mexican restaurant, Las Piramides.
Outside of school and work, I’ve always loved listening to music, messing with the latest tech, and playing games with friends. My interest in tech, and my experiences helping family and friends with my limited computer skills, led me to study IT electives in high school. Upon arriving at college, I studied Network Engineering at Sinclair Community College before transferring to the University of Cincinnati (UC), where I completed my B.S. in Computer Engineering.
Throughout university, I grew close to Latino/Hispanic inclusive groups like Latinos en Accion as well as engineering focused teams. Looking for a way to focus my interests even further, I worked with other motivated colleagues to rekindle our Society of Hispanic Professional Engineers (SHPE) chapter at UC. At Google, I work with groups like HOLA (Google’s Employee Resource Group committed to empowering the Latinx community both inside and outside of Google) and Code Next (free Google-run computer science education program that meets Black and Latinx high school students in their own communities) to continue the diversity focused STEM work that got me to where I am. This also includes going back to recruit at SHPE’s convention – the convention that made it happen.
|Sandro and Googlers prepping for the National SHPE convention.|
I’m an IT Resident in Mountain View as part of the IT Residency Program. The program is an immersion into end-to-end IT support at Google, and provides the opportunity to jump-start your career at Google and beyond. My favorite part about the work is that I assist Googlers from all around the world, in-person and remotely, regardless of the team they’re working on. I’ve even had the chance to travel worldwide, visiting and working from the London and Sydney offices. Right now, I’m on rotation with the Google Calendar Site Reliability Team! Learning the ins and outs of keeping production running at Google-scale is amazing as well as a mind-boggling opportunity at times.
Can you tell us about your decision to enter the process?
Even though I’d thought of Google as a dream job when I first learned about the company, I never thought I’d actually get here.
My journey to Google starts and ends with SHPE. When I started studying at the University of Cincinnati, I remembered seeing informational flyers about the Society of Hispanic Professional Engineers. After getting involved with our local chapter, and looking for ways to get us to the National Convention, I discovered and applied for a Google Travel and Conference Scholarship. Soon after applying I got an email, letting me know Google was flying me out to the convention in Kansas City, but I knew I couldn’t go without the team that inspired the idea. So we worked with the university and sponsors and were able to acquire funding for the rest of the group to make the inaugural conference trip together!
Part of registering for the conference was submitting a resume to SHPE, so they could share with attending organizations. I’d never applied to Google as I thought I wouldn’t make it through the tons of other resumes, and even if I did, there wouldn’t be a position for someone with my experience. This was where Google proved me wrong. I’d always romanticized the idea of working in Silicon Valley, with Google at the top of the list. I thought I might visit the Googleplex as a tourist, but didn’t have much confidence that I was employable – especially at Google as a new graduate.
After submitting my resume to SHPE, I never expected Google to reach out, but they did. It took me almost a whole day to respond to the first email because I didn’t believe it, and almost dismissed it as spam.
|Sandro holding a clipboard in front of the Google SHPE convention booth.|
Google had the most helpful recruitment process I’d ever been a part of, and SHPE only helped make it even more surreal. After convincing myself that the email from Google wasn’t spam, I spoke with a recruiter. They made sure that I understood the role and answered all my questions over a phone call. Then they planned to make it possible for me to interview in-person with Googlers at the convention. Being my first SHPE convention, I was overwhelmed by the experience of seeing thousands of professional Hispanic engineers. I was definitely nervous, but having my friends there helped.
|Sandro and Googlers on a trip.|
I build for representation, inclusion, and respect.
What inspires you to come in every day?
I’m inspired to come in every day because I know the people I work with are just as passionate to help me as I am to help them. Every day I work here is an opportunity to open the door for others who might not see themselves here, show them they’re valued by helping, and build a better place for them when they get here. From helping people communicate to reaching quantum supremacy, Google brings people together to create and inspire. I’m also especially honored to work with and support Code Next. I get to make sure that students keep learning.
What do you wish you’d known when you started the process?
I would’ve applied sooner if I’d known that the Google careers site was so comprehensive in listing every opening. I would also recommend that anyone interested in a role take a look at the specific criteria listed. They’re as specific as they can be, and depending on what you’re looking for, you might have a good chance of finding something you’re interested in and qualified for. Don’t dismiss yourself, and always keep looking!
|Sandro in front of Google sign in Mountain View.|
Google actually has tons of YouTube videos about general hiring and interviewing. For my interview for the IT Residency Program, I studied a ton of troubleshooting methodologies, and actually reviewed my notes from my classes/studies.
Do you have any tips you’d like to share with aspiring Googlers?
Googleyness is a thing! There are lots of facets to it, but for me, the most important ones come down to respect and helping others. What was different about Google for me, compared to previous workplaces, is that everyone is invited to bring their whole selves to work, so make sure you’re being yourself during the interview.
Source: Google Student Blog
The Go gopher was created by renowned illustrator Renee French. This image is adapted from a drawing by Egon Elbre.
November 10 marked Go’s 10th anniversary—a milestone that we are lucky enough to celebrate with our global developer community.
In recognition of this milestone, we’re taking a moment to reflect on the tremendous growth and progress Go (also known as golang) has made: from its creation at Google and open sourcing, to many early adopters and enthusiasts, to the global enterprises that now rely on Go every day for critical workloads.
New to Go?
Go is an open-source programming language designed to help developers build fast, reliable, and efficient software at scale. It was created at Google and is now supported by over 2100 contributors, primarily from the open-source community. Go is syntactically similar to C, but with the added benefits of memory safety, garbage collection, structural typing, and CSP-style concurrency.
Most importantly, Go was purposefully designed to improve productivity for multicore, networked machines and large codebases—allowing programmers to rapidly scale both software development and deployment.
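The CSP-style concurrency mentioned above is built from goroutines and channels: independent goroutines that communicate by passing values rather than by sharing memory. A minimal, self-contained sketch:

```go
package main

import "fmt"

// sumSquares wires goroutines together with channels in the CSP
// style: one goroutine produces the numbers 1..n, another squares
// them, and the caller sums the results.
func sumSquares(n int) int {
	nums := make(chan int)
	squares := make(chan int)

	// Producer: send 1..n, then close the channel to signal completion.
	go func() {
		for i := 1; i <= n; i++ {
			nums <- i
		}
		close(nums)
	}()

	// Worker: square each number as it arrives.
	go func() {
		for v := range nums {
			squares <- v * v
		}
		close(squares)
	}()

	sum := 0
	for s := range squares {
		sum += s
	}
	return sum
}

func main() {
	fmt.Println(sumSquares(5)) // 1 + 4 + 9 + 16 + 25 = 55
}
```

Because each stage communicates only over channels, stages can be added or run in parallel without shared-memory locking, which is what makes the model scale so naturally to networked services.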
Millions of Gophers!
Today, Go has more than a million users worldwide, ranging across industries, experience, and engineering disciplines. Go’s simple and expressive syntax, ease-of-use, formatting, and speed have helped it become one of the fastest growing languages—with a thriving open source community.
As Go’s use has grown, more and more foundational services have been built with it. Popular open source applications built on Go include Docker, Hugo, and Kubernetes. Google’s hybrid cloud platform, Anthos, is also built with Go.
Go was first adopted to support large amounts of Google’s services and infrastructure. Today, Go is used by companies including American Express, Dropbox, The New York Times, Salesforce, Target, Capital One, Monzo, Twitch, IBM, Uber, and Mercado Libre. For many enterprises, Go has become their language of choice for building on the cloud.
An Example of Go In the Enterprise
One exciting example of Go in action is at MercadoLibre, which uses Go to scale and modernize its ecommerce ecosystem and to improve cost-efficiency and system response times.
MercadoLibre’s core API team builds and maintains the largest APIs at the center of the company’s microservices solutions. Historically, much of the company’s stack was based on Grails and Groovy backed by relational databases. However, this large framework with multiple layers soon ran into scalability issues.
Converting that legacy architecture to Go as a new, very thin framework for building APIs streamlined those intermediate layers and yielded great performance benefits. For example, one large Go service is now able to run 70,000 requests per machine with just 20 MB of RAM.
“Go was just marvelous for us,” explains Eric Kohan, Software Engineering Manager at MercadoLibre. “It’s very powerful and very easy to learn, and with backend infrastructure has been great for us in terms of scalability.”
Using Go allowed MercadoLibre to cut the number of servers they use for this service to one-eighth the original number (from 32 servers down to four), and each server can operate with less power (originally four CPU cores, now down to two). With Go, the company eliminated 88 percent of its servers and cut CPU usage on the remaining ones in half, producing tremendous cost savings.
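As a quick sanity check of the arithmetic quoted above (the server counts are from the post; the function name is ours):

```go
package main

import "fmt"

// serverReduction computes the fractional reduction in machine
// count: going from 32 servers to 4 is a 0.875 (roughly 88%) cut.
func serverReduction(before, after float64) float64 {
	return 1 - after/before
}

func main() {
	fmt.Println(serverReduction(32, 4)) // 0.875
}
```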
With Go, MercadoLibre’s build times are three times (3x) faster and their test suite runs an amazing 24 times faster. This means the company’s developers can make a change, then build and test that change much faster than they could before.
Today, roughly half of MercadoLibre's traffic is handled by Go applications.
"We really see eye-to-eye with the larger philosophy of the language," Kohan explains. "We love Go's simplicity, and we find that having its very explicit error handling has been a gain for developers because it results in safer, more stable code in production."
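For readers unfamiliar with the explicit error handling Kohan refers to: errors in Go are ordinary values returned alongside results and checked at each call site. A generic sketch (not MercadoLibre's code; the function is our own illustration):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort validates a TCP port string. The error is an ordinary
// return value; the caller must check it explicitly, which is what
// keeps failure handling visible in production code.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %v", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, fmt.Errorf("port %d out of range", p)
	}
	return p, nil
}

func main() {
	if p, err := parsePort("8080"); err == nil {
		fmt.Println(p) // 8080
	}
	if _, err := parsePort("not-a-port"); err != nil {
		fmt.Println("rejected bad input")
	}
}
```

There are no hidden exceptions; every failure path is spelled out where it happens, which is the "safer, more stable code in production" trade-off the quote describes.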
Visit go.dev to Learn More
We’re thrilled by how the Go community continues to grow, through developer usage, enterprise adoption, package contribution, and in many other ways.
Building off of that growth, we’re excited to announce go.dev, a new hub for Go developers.
There you’ll find centralized information for Go packages and modules, a wealth of learning resources to get started with the language, and examples of critical use cases and case studies of companies using Go.
MercadoLibre’s recent experience is just one example of how Go is being used to build fast, reliable, and efficient software at scale.
You can read more about MercadoLibre’s success with Go in the full case study.
Source: Google Developers Blog
Ten years ago, we announced the Go release here on this blog. This weekend we marked Go's 10th birthday as an open-source programming language and ecosystem for building modern networked software.
Go's original target was networked system infrastructure, anticipating what we now call the cloud. Go has become the language of the cloud, but more than that, Go has become the language of the open-source cloud, including Containerd, CoreDNS, Docker, Envoy, Etcd, Istio, Kubernetes, Prometheus, Terraform, and Vitess.
From our earliest days working on Go, we planned for Go to be open source. We knew that bootstrapping a new language and ecosystem was too large a project for one team or even one company to do alone. Go needed a thriving open-source community to curate and grow the ecosystem, to write books and tutorials, to teach courses to developers of all skill levels, and of course to find bugs and work on code improvements and new features. And of course we also wanted to share what we had created with everyone.
Open source at its best is about people working together to accomplish far more than any of them could have done alone. We are incredibly grateful to the thousands of people who have built up Go, its ecosystem, and its community with us over the past decade.
There are over a million Go developers worldwide, and companies all over the globe are looking to hire more. In fact, people often tell us that learning Go helped them get their first jobs in the tech industry. In the end, what we're most proud of about Go is not a well-designed feature or a clever bit of code but the positive impact Go has had in so many people's lives. We aimed to create a language that would help us be better developers, and we are thrilled that Go has helped so many others. Today we launched go.dev to be a hub for all Go developers to learn more and find ways to connect with each other.
As a thank you from us on the Go team at Google to Go contributors and developers worldwide for joining us on Go's journey, we are distributing a commemorative 10th anniversary pin at this month's Go Developer Network meetups. Renee French, who created the Go gopher for the release back in 2009, designed this special pin and also painted the mission control gopher scene at the top of this post. We thank Renee for giving Go so much of her time and a mascot that continues to delight and inspire a decade on.
As #GoTurns10, we hope everyone will take a moment to celebrate the Go community and all we have achieved together. On behalf of the entire Go team at Google, thank you to everyone who has joined us over the past decade. Let's make the next one even more incredible!
By Russ Cox, for the Go team