Tag Archives: AI

AI is bringing back balance to Japanese workers

The “Japanese economic miracle” is a term used to describe the fast-paced growth that Japan saw in the second half of the 20th century. Japan’s rise to become the world’s second-largest economy came with a strong success-driven mentality, and, as in other advanced economies, that mentality left a side effect: work-life imbalance and an overworked population.

Japanese entrepreneur Miku Hirano founded her startup, Cinnamon, to help relieve this burden on Japanese workers. Using artificial intelligence, Cinnamon removes repetitive tasks from office workers’ daily responsibilities, allowing more work to get done faster by fewer people. Cinnamon recently participated in the Google Developers Launchpad Accelerator Japan. We asked Miku to reflect on her path to becoming an entrepreneur and the challenges she faces in her work.

When did you realize you wanted to make an impact on Japanese workers? 

I founded my first startup as a student in 2006, and it was successfully acquired by mixi in 2011, so entrepreneurship is not new to me. Just three years ago, I read a news story about a young woman in Japan who committed suicide after working too much. I did some research and found this was not an isolated incident; in fact, we have a word, karoshi, which means death from overwork.

I was pregnant at the time, and I started to think we should change this working style for the next generation. Work-life balance isn’t just a “nice” aspiration to have. Spending consistent time with your family, pursuing hobbies and being out in nature are directly related to health and happiness.

So how does Cinnamon help restore work-life balance? 

The majority of the time-consuming work that Japanese workers face is the result of “unstructured data.” For example, legal contracts are often 400 pages long, and without a way to summarize them quickly, workers are left to read the entire document, a task that can take up to a week. Cinnamon uses artificial intelligence to summarize the same document in minutes.

What we’re building at Cinnamon is a way to use AI to remove repetitive tasks, which can give workers back hours of their lives each day and increase the quality and output of their work. Advanced technology is the core component that makes Cinnamon work, and Google tools like TensorFlow and Firebase have made it easy to get computers to read and understand a lot of text very quickly.
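
Cinnamon’s models are proprietary, so the sketch below is only a generic illustration of the underlying idea, extractive summarization: score each sentence and keep the top few. It uses plain Python with a simple word-frequency score, and the input file name is hypothetical.

```python
import re
from collections import Counter

def summarize(text, num_sentences=3):
    """Keep the sentences whose words are most frequent overall."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in top)

contract_text = open("contract.txt").read()  # hypothetical input file
print(summarize(contract_text, num_sentences=5))
```

Production systems use far more sophisticated neural models, but the shape of the task is the same: a long document goes in, a short summary comes out.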

Why did you choose to participate in the Launchpad Accelerator? 

We were facing roadblocks in developing large, high-quality AI models effectively and in building strong teams. Google’s program supported exactly those needs.

During our time in the accelerator, we received hands-on mentorship for complex AI model development. We also got to participate in a program called LeadersLab, which gave us in-depth insight into our leadership styles.

What advice do you have for future entrepreneurs?

Start your business today! There’s no reason to wait. Talk to potential customers immediately and get a sense of whether they find your idea valuable. Most importantly, find your inspiration. For me, it’s my kids, because they always inspire me with their genuine, fresh eyes and minds.

Why newsrooms should pay attention to AI

Artificial intelligence is helping transform many businesses, and journalism is no exception. Newsrooms are already using AI to help organize and find videos and images, transcribe interviews in multiple languages and much more. But the industry is still trying to understand the full impact AI can have.

Today, we are releasing a report which highlights how AI offers new powers to journalists across the reporting process, from news gathering to distribution. It also underlines how news organizations that want to explore this potential must be ready to consider and carefully monitor the ethical and editorial implications of these new technologies.

This research is the result of Journalism AI, a year-long collaboration between Polis, the international journalism think tank at the London School of Economics and Political Science, and the Google News Initiative, to educate newsrooms about the potential offered by AI-powered technologies through research, training and networking.

Newsrooms around the world are experimenting with AI, and responses to the Journalism AI survey came from 71 media organizations in 32 countries. Publishers, editors and reporters shared their detailed thoughts on the potential of AI for the news industry, how it is impacting their organizations and the risks and challenges involved with this new wave of technological innovation. 

The findings make it clear that journalism should pay attention to AI, which has the potential for wide-ranging and profound influence on how journalism is made and consumed. 

On one side, AI technologies promise to free up time for journalists to work on the more creative aspects of news production, leaving tedious and repetitive tasks to machines. At a time when the news industry is fighting for economic sustainability and for the public’s trust, it’s easy to see why this promise is highly attractive.

On the other side, via personalization and smart recommendations, AI can help the public cope with news overload, connecting them in a convenient way to credible content that is relevant, useful, and stimulating for their lives.

Newsrooms vary in their AI strategies and implementations, in the challenges they’ve experienced, and in how AI is changing the way they work and structure their business.

Overall, respondents are optimistic about the positive impact that AI can bring, as long as journalists retain their ethical and editorial values and adapt to the new challenges—such as algorithmic bias and the rise of so-called “deepfakes,” in which AI is used to create fake images or videos and pass them off as real.

The report also warns against the risk of perceiving AI simply as a way to cut costs, arguing that it should instead be used to benefit the people who produce the journalism we consume. There are also significant concerns about a growing divide between large organizations with the resources to take advantage of AI’s potential, and smaller ones that risk being left behind.

With AI, the news industry has an opportunity to continue to reinvent itself for the information needs and behavior of people in our data-driven era. But with these new powers come responsibilities to maintain quality, increase editorial diversity and promote transparency in the systems newsrooms create.

Take a read through the Journalism AI report to see the full findings of how media organizations view AI, and what’s next for the industry. 

Using machine learning to tackle Fall Armyworm

Guest post by Nsubuga Hassan, CEO at Hansu Mobile and Intelligent Innovations, Android and Machine Learning Developer

In 2016, Fall armyworm (FAW) was first reported in Africa. It has since devastated maize crops across the continent.

Research shows the potential impact of FAW on continent-wide maize yield lies between 8.3 and 20.6 million tonnes per year, out of a total expected production of 39 million tonnes per year, with losses estimated at between US$2.5 billion and US$6.2 billion per year (of an expected annual value of roughly US$11.6 billion). The impact of FAW is far-reaching, and the pest has now been reported in many countries around the world.

Agriculture is the backbone of Uganda’s economy, employing 70% of the population. It contributes half of Uganda’s export earnings and a quarter of the country’s gross domestic product (GDP). Fall armyworm poses a grave threat to our livelihoods.

We are a small group of like-minded developers living and working in Uganda. Most of our relatives grow maize, so the impact of the worm was very close to home. We really felt we needed to do something about it.

The vast damage and yield losses in maize production due to FAW got the attention of global organizations, which are calling on innovators to help. It is the perfect time to apply machine learning. Our goal is to build an intelligent agent that helps local farmers fight this pest in order to increase our food security.

Based on a Machine Learning Crash Course, our Google Developer Group (GDG) in Mbale hosted study jams in May 2018, alongside several other codelabs. This is where we first got hands-on experience with TensorFlow, and where the foundations were laid for the Farmers Companion app. Finally, we felt an intelligent solution to help farmers had been conceived.

Equipped with this knowledge and belief, the team embarked on collecting training data from nearby fields, using smartphones to take images with the help of some GDG Mbale members. With farmers miles from town, and many fields inaccessible by road (not to mention the floods), this was not as simple as we had first hoped. To inhibit us further, our smartphones were (and still are) the only storage we had, limiting the number of images we could capture in a day.

But we persisted! Once gathered, the images were sorted, one at a time, and categorized. With TensorFlow we re-trained a MobileNet, a technique known as transfer learning. We then used the TensorFlow Lite converter to generate a TensorFlow Lite FlatBuffer file, which we deployed in an Android app.

We started with about 3,956 images, and our dataset is growing all the time; we are actively collecting more data to improve our model’s accuracy. The improvements in TensorFlow, with its high-level Keras APIs, have made our approach to deep learning easy and enjoyable, and we are now experimenting with TensorFlow 2.0.
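
For readers curious about the mechanics, here is a minimal sketch of that retrain-and-convert workflow in TensorFlow/Keras. The base network (MobileNetV2), class names, directory layout and file names are our assumptions for illustration; the team’s actual training setup may differ.

```python
import tensorflow as tf

# Images sorted into one folder per class, e.g. data/healthy and
# data/faw_damage -- a hypothetical layout.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32)

# Transfer learning: reuse pre-trained MobileNet features and train
# only a small new classifier head on top.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained weights

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # healthy vs. FAW damage
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Convert to a TensorFlow Lite FlatBuffer for use in the Android app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("farmers_companion.tflite", "wb") as f:
    f.write(converter.convert())
```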

The app is simple to use. Once it’s installed, the user points the camera at a maize crop through the app. An image frame is then captured and analyzed with TensorFlow Lite to look for Fall armyworm damage, and depending on the results, a possible solution is suggested.
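
The analysis step follows the standard TensorFlow Lite interpreter pattern. The Python sketch below mirrors what an Android app does with each frame; the model file and labels carry over from the hypothetical example above, and the “frame” here is random placeholder data.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="farmers_companion.tflite")
interpreter.allocate_tensors()
input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

# A camera frame resized to the model's input size (placeholder data here).
frame = np.random.rand(1, 224, 224, 3).astype(np.float32)

interpreter.set_tensor(input_info["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_info["index"])[0]

labels = ["healthy", "faw_damage"]  # hypothetical class names
print(labels[int(np.argmax(scores))], float(scores.max()))
```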

The app is available for download and is constantly being updated as we encourage local farmers to adopt and use it. We strive to ensure a world with #ZeroHunger and believe technology can do a lot to help us achieve this.

We have so far been featured on a national TV station in Uganda, participated in #hackAgainstHunger, and presented at the International Symposium on Agricultural Innovation for Family Farmers, organized by the Food and Agriculture Organization of the United Nations, where our solution was highlighted.

More recently, Google highlighted our work in a short film.

We have embarked on scaling the solution to coffee and cassava diseases, and will gradually move on to more crops. We have also introduced virtual reality to showcase good farming practices and provide training to farmers.

Our plan is to collect more data and to scale the solution to handle more pests and diseases. We are also shifting to cloud services and Firebase to serve and improve our model, despite our limited resources. With improved hardware and greater localised understanding, there’s huge scope for machine learning to make a difference in the fight against hunger.

Machine learning meets African agriculture

In 2016, a crop-destroying caterpillar, Fall Armyworm (FAW), was first detected in Africa. The pest has since devastated agriculture, infesting millions of cornfields and threatening food security on the continent. Farmers who rely on harvests for food need to combat the pest, which has now spread to India and China.

That’s where Nazirini Siraji comes in. She is one of several developers working to provide farmers with new tools to fight FAW. After codelabs hosted by a Google developer group in Mbale, Uganda, she created the “Farmers Companion App” using TensorFlow, Google’s open-source machine learning platform. It’s a free app that identifies when a crop has FAW and which stage of its lifecycle the worm is in (and therefore how threatening it is and how far it is likely to spread). It also advises on which pesticides or treatments are best to stop the worm from spreading any further. The app is already working in the field, helping farmers around Mbale to identify FAW.

They continue to improve the app so it can identify more pests and diseases. Nazirini shows the impact that developers can have on agricultural issues like FAW, and across other sectors, too. We visited Nazirini and her team this year to learn more about their story.

Learn more about how others are using TensorFlow to solve all kinds of problems.

Teachable Machine 2.0 makes AI easier for everyone

People are using AI to explore all kinds of ideas—identifying the roots of bad traffic in Los Angeles, improving recycling rates in Singapore, and even experimenting with dance. Getting started with your own machine learning projects might seem intimidating, but Teachable Machine is a web-based tool that makes it fast, easy, and accessible to everyone. 

The first version of Teachable Machine let anyone teach their computer to recognize images using a webcam. For a lot of people, it was their first time experiencing what it’s like to train their own machine learning model: teaching the computer how to recognize patterns in data (images, in this case) and assign new data to categories.

Since then, we’ve heard from lots of people who want to take their Teachable Machine models one step further and use them in their own projects. Teachable Machine 2.0 lets you train your own machine learning model with the click of a button, no coding required, and export it to websites, apps, physical machines and more. Teachable Machine 2.0 can also recognize sounds and poses, like whether you're standing or sitting down.

We collaborated with educators, artists, students and makers of all kinds to figure out how to make the tool useful for them. For example, education researcher Blakeley H. Payne and her teammates have been using Teachable Machine as part of an open-source curriculum that teaches middle-schoolers about AI through a hands-on learning experience.

“Parents—especially of girls—often tell me their child is nervous to learn about AI because they have never coded before,” Blakeley said. “I love using Teachable Machine in the classroom because it empowers these students to be designers of technology without the fear of ‘I've never done this before.’”

But it’s not just for teaching. Steve Saling is an accessibility technology expert who used it to explore ways to improve communication for people with impaired speech. Yining Shi has been using Teachable Machine with her students in the Interactive Telecommunications Program at NYU to explore its potential for game design. And at Google, we’ve been using it to make physical sorting machines easier for anyone to build. Here’s how it all works:

Gather examples

You can use Teachable Machine to recognize images, sounds or poses. Upload your own image files, or capture them live with a mic or webcam. These examples stay on-device, never leaving your computer unless you choose to save your project to Google Drive.


Gathering image examples.

Train your model

With the click of a button, Teachable Machine will train a model based on the examples you provided. All the training happens in your browser, so everything stays on your computer.


Training a model with the click of a button.

Test and tweak

Play with your model on the site to see how it performs. Not to your liking? Tweak the examples and see how it does.


Testing out the model instantly using a webcam.

Use your model

The model you created is powered by TensorFlow.js, an open-source library for machine learning from Google. You can export it to use in websites, apps and more. You can also save your project to Google Drive so you can pick up where you left off.
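
Teachable Machine can also export to other TensorFlow formats, including Keras and TensorFlow Lite. As a rough illustration, a Keras export could be used from Python like this; the file names and input preprocessing are assumptions, so check the snippet that accompanies your own export for the exact details.

```python
import numpy as np
import tensorflow as tf

# Load a model exported from Teachable Machine in Keras format.
model = tf.keras.models.load_model("keras_model.h5", compile=False)

# Classify a single image (image models take 224x224 input).
img = tf.keras.utils.load_img("example.jpg", target_size=(224, 224))
x = np.expand_dims(tf.keras.utils.img_to_array(img) / 255.0, axis=0)

scores = model.predict(x)[0]
print("predicted class index:", int(np.argmax(scores)))
```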

Ready to dive in? Try it out at g.co/teachablemachine.

Drop us a line with your thoughts and ideas, and post what you make or follow along with #teachablemachine. We can’t wait to see what you create.

AI is helping rural patients get crucial medical care

For those of us who live in big cities in developed countries, it’s easy to take access to hospitals and medical specialists for granted. But many rural communities in developing countries have too few medical clinics and doctors. With only prohibitively expensive and slow options for managing their healthcare, many patients in these areas go without diagnosis and treatment.

Rafael Figueroa, an entrepreneur from Brazil, created Portal Telemedicina to help address this problem. His company uses advanced technology to drastically reduce the time and cost barriers to quality medical care. Portal participated in Google’s startup acceleration program, and Rafael has become one of the program’s top global artificial intelligence mentors. 

Now, more than 500 rural clinics and large healthcare institutions throughout Brazil and Angola use Portal’s technology. The company was selected by the United Nations as one of 10 global companies to join Accelerate2030, which supports entrepreneurs working towards achieving the UN’s Sustainable Development Goals.

We asked Rafael to reflect on his path toward entrepreneurship, the work Portal is doing and how technology is making it all possible. 

How does Portal work? 


Here’s an example: with our technology, a patient who lives in the rural rainforest in Brazil can go into their local clinic and get an X-ray. With just a few clicks, local nurses can send the information through the cloud to specialist physicians in São Paulo. Those specialists can then deliver diagnoses to patients from 1,000 miles away.

Portal's telediagnostic platform helps doctors give faster, more accurate diagnoses, using artificial intelligence to help diagnose thousands of patients each day. The system double-checks every diagnosis against the AI prediction and, in case of a discrepancy, automatically sends the exam to three other doctors in order to reduce human error.
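
The double-check logic amounts to a simple rule: agreement confirms, disagreement escalates. The sketch below is purely illustrative; Portal’s real pipeline, data model and review workflow are not public.

```python
from dataclasses import dataclass

REVIEW_PANEL_SIZE = 3  # extra doctors consulted when findings disagree

@dataclass
class Exam:
    exam_id: str
    doctor_finding: str   # e.g. "normal" or "abnormal"
    ai_prediction: str

def route_exam(exam: Exam) -> str:
    """Confirm the diagnosis when doctor and AI agree; escalate otherwise."""
    if exam.doctor_finding == exam.ai_prediction:
        return "confirmed"
    return f"escalated to {REVIEW_PANEL_SIZE} additional reviewers"

print(route_exam(Exam("xr-001", "normal", "abnormal")))
```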

How does AI know what to look for in an exam image? 

Computers are only as “smart” as the information you put into the system. Our platform draws on more than 30 million exams and health records that the computer can “learn” from and reference, so that it can make medical findings at or above human-level accuracy.

How did you know you wanted to be a healthcare entrepreneur? 

I have always strived to help people. I’ve worked with NGOs and have long been aware of the discrepancy in access to care that people face based simply on where they live. In 2013, I had my own healthcare emergency while in Northeast Brazil, and in the absence of a medical specialist, I received a misdiagnosis that almost cost me part of my mobility. I rushed to São Paulo and underwent surgery. I spent six months unable to walk, which gave me plenty of time to think about the lack of access to doctors in remote regions.

How have Google technology and programs helped you grow Portal? 

Google products help us run many aspects of our business, but most importantly we use the machine learning platform TensorFlow, which gives us the image recognition technology that examines the X-rays to make a diagnosis. During Google’s acceleration program, we worked with experts who helped us through our technical challenges and prepared Portal’s technology platform for fast growth.

Do you have any advice for other entrepreneurs?


Don’t presume you know the right path to follow. Prioritize building a platform where you can run experiments very quickly. The key is to put the product in users’ hands as soon as possible and collect their feedback. Based on that, start to develop more sophisticated technology.

Understanding searches better than ever before

If there’s one thing I’ve learned in my 15 years working on Google Search, it’s that people’s curiosity is endless. We see billions of searches every day, and 15 percent of those queries are ones we haven’t seen before, so we’ve built ways to return results for queries we can’t anticipate.

When people like you or me come to Search, we aren’t always quite sure of the best way to formulate a query. We might not know the right words to use, or how to spell something, because often we come to Search looking to learn; we don’t necessarily have the knowledge to begin with.

At its core, Search is about understanding language. It’s our job to figure out what you’re searching for and surface helpful information from the web, no matter how you spell or combine the words in your query. While we’ve continued to improve our language understanding capabilities over the years, we sometimes still don’t quite get it right, particularly with complex or conversational queries. In fact, that’s one of the reasons why people often use “keyword-ese,” typing strings of words that they think we’ll understand, but aren’t actually how they’d naturally ask a question. 

With the latest advancements from our research team in the science of language understanding, made possible by machine learning, we’re making a significant improvement to how we understand queries, representing the biggest leap forward in the past five years and one of the biggest leaps forward in the history of Search.

Applying BERT models to Search
Last year, we introduced and open-sourced a neural network-based technique for natural language processing (NLP) pre-training called Bidirectional Encoder Representations from Transformers, or BERT for short. This technology enables anyone to train their own state-of-the-art question answering system.

This breakthrough was the result of Google research on transformers: models that process words in relation to all the other words in a sentence, rather than one-by-one in order. BERT models can therefore consider the full context of a word by looking at the words that come before and after it—particularly useful for understanding the intent behind search queries.
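
You can get a feel for this bidirectional behavior with the open-sourced BERT models. The sketch below uses the Hugging Face transformers library as a convenient wrapper; that library choice is our own, and none of this reflects how BERT is deployed inside Search.

```python
from transformers import pipeline

# A masked-language-model head on top of the open-sourced BERT weights.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Words on BOTH sides of [MASK] shape the prediction.
for sentence in [
    "He went to the [MASK] to deposit his paycheck.",
    "She sat on the river [MASK] and watched the boats go by.",
]:
    best = fill_mask(sentence)[0]
    print(sentence, "->", best["token_str"], round(best["score"], 3))
```

In both sentences the model weighs the context before and after the blank, which is exactly what a left-to-right model cannot do.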

But it’s not just advancements in software that make this possible: we needed new hardware too. Some of the models we can build with BERT are so complex that they push the limits of what we can do using traditional hardware, so for the first time we’re using the latest Cloud TPUs to serve search results and get you more relevant information quickly.

Cracking your queries
So that’s a lot of technical detail, but what does it all mean for you? Well, by applying BERT models to both ranking and featured snippets in Search, we’re able to do a much better job of helping you find useful information. In fact, when it comes to ranking results, BERT will help Search better understand one in 10 searches in the U.S. in English, and we’ll bring this to more languages and locales over time.

Particularly for longer, more conversational queries, or searches where prepositions like “for” and “to” matter a lot to the meaning, Search will be able to understand the context of the words in your query. You can search in a way that feels natural for you.

To launch these improvements, we did a lot of testing to ensure that the changes actually are more helpful. Here are some examples from our evaluation process that demonstrate BERT’s ability to understand the intent behind your search.

Here’s a search for “2019 brazil traveler to usa need a visa.” The word “to” and its relationship to the other words in the query are particularly important to understanding the meaning. It’s about a Brazilian traveling to the U.S., and not the other way around. Previously, our algorithms wouldn't understand the importance of this connection, and we returned results about U.S. citizens traveling to Brazil. With BERT, Search is able to grasp this nuance and know that the very common word “to” actually matters a lot here, and we can provide a much more relevant result for this query.

BERT in Search: Visa Example

Let’s look at another query: “do estheticians stand a lot at work.” Previously, our systems took a keyword-matching approach, matching the term “stand-alone” in a result with the word “stand” in the query. But that isn’t the right use of the word “stand” in this context. Our BERT models, on the other hand, understand that “stand” relates to the concept of the physical demands of a job, and surface a more useful response.

BERT in Search: Esthetician Example

Here are some other examples where BERT has helped us grasp the subtle nuances of language that computers don’t quite understand the way humans do.

Improving Search in more languages
We’re also applying BERT to make Search better for people across the world. A powerful characteristic of these systems is that they can take learnings from one language and apply them to others. So we can take models that learn from improvements in English (a language where the vast majority of web content exists) and apply them to other languages. This helps us better return relevant results in the many languages that Search is offered in.

For featured snippets, we’re using a BERT model to improve featured snippets in the two dozen countries where this feature is available, and seeing significant improvements in languages like Korean, Hindi and Portuguese.

Search is not a solved problem
No matter what you’re looking for, or what language you speak, we hope you’re able to let go of some of your keyword-ese and search in a way that feels natural for you. But you’ll still stump Google from time to time. Even with BERT, we don’t always get it right. If you search for “what state is south of Nebraska,” BERT’s best guess is a community called “South Nebraska.” (If you've got a feeling it's not in Kansas, you're right.)

Language understanding remains an ongoing challenge, and it keeps us motivated to continue to improve Search. We’re always getting better and working to find the meaning in, and the most helpful information for, every query you send our way.


The Singapore students using Cloud for smarter recycling

Coming up with big ideas in technology used to take the kind of time and money that only large companies had. Now open-source tools—like TensorFlow, which provides access to Google’s machine learning technology—mean anyone with a smart concept has the opportunity to make it a reality. Just ask Arjun Taneja and Vayun Mathur, two friends and high school students from Singapore with a big ambition to improve recycling rates.

Arjun and Vayun realized that separating waste is sometimes confusing and cumbersome—something that can derail people's good intentions to recycle. Using TensorFlow, they built a “Smart Bin” that can identify types of trash and sort them automatically. The Smart Bin uses a camera to take a picture of the object inserted in the tray, then analyzes the picture with a Convolutional Neural Network, a type of machine learning algorithm designed to recognize visual objects.  

To train the algorithm, Arjun and Vayun took around 500 pictures of trash like glass bottles, plastic bottles, metal cans and paper, a process that would normally be laborious and expensive. But by using Colab, Google’s hosted notebook platform, the students could access a high-powered graphics processing unit (GPU) in the cloud for free. They were also able to use Tensor Processing Units (TPUs), Google’s machine learning processors, which power services like Translate, Photos, Search, Assistant and Gmail. These tools let their system analyze large amounts of data at once, so the students could correct the model when it didn’t recognize an object; as a result, the model learned to classify the objects ever more quickly. Once the Smart Bin was trained, all they had to do was place an object in the tray, and the system could predict whether it was metal, plastic, glass or paper, with the answer popping up on a screen.
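
A four-class image classifier of this kind can be expressed compactly in Keras. The sketch below illustrates the general approach rather than the students’ exact network; the directory layout, input size and hyperparameters are assumptions.

```python
import tensorflow as tf

# Roughly 500 photos sorted into folders: trash/metal, trash/plastic,
# trash/glass and trash/paper -- a hypothetical layout.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "trash", image_size=(128, 128), batch_size=16)

# A small convolutional neural network for the four material classes.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # metal/plastic/glass/paper
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```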

Building on their successful trials at home, Arjun and Vayun showcased the Smart Bin with a stall at last week’s Singapore Maker Faire, and they continue to work on other projects. It’s a great example of how tools available in the cloud are cutting out processes and costs that might have held back this kind of invention in the past.

Computing takes a quantum leap forward

Quantum computing: It sounds futuristic because until recently, it was. But today we’re marking a major milestone in quantum computing research that opens up new possibilities for this technology.

Unlike classical computing, which runs everything from your cell phone to a supercomputer, quantum computing is based on the properties of quantum mechanics. As a result, quantum computers could potentially solve problems that would be too difficult or even impossible for classical computers—like designing better batteries, figuring out what molecules might make effective medicines or minimizing emissions from the creation of fertilizer. They could also help improve existing advanced technologies like machine learning. 

Today, the scientific journal Nature has published the results of Google’s efforts to build a quantum computer that can perform a task no classical computer can; this is known in the field as “quantum supremacy.” In practical terms, our chip, which we call Sycamore, performed a computation in 200 seconds that would take the world’s fastest supercomputer 10,000 years.

This achievement is the result of years of research and the dedication of many people. It’s also the beginning of a new journey: figuring out how to put this technology to work. We’re working with the research community and have open sourced tools to enable others to work alongside us to identify new applications. Learn more about the technical details behind this milestone on our AI blog.
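
One of those open-sourced tools is Cirq, Google’s Python framework for writing and simulating quantum circuits. As a toy illustration, unrelated to the supremacy experiment itself, here is a two-qubit circuit that prepares and samples an entangled Bell state:

```python
import cirq

# Two qubits on a line.
q0, q1 = cirq.LineQubit.range(2)

# Hadamard + CNOT prepares the Bell state (|00> + |11>) / sqrt(2).
circuit = cirq.Circuit(
    cirq.H(q0),
    cirq.CNOT(q0, q1),
    cirq.measure(q0, q1, key="m"),
)

# Sample the circuit on a classical simulator.
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key="m"))  # expect only outcomes 0 (|00>) and 3 (|11>)
```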