Tag Archives: machine learning

Earth to exoplanet: Hunting for planets with machine learning

For thousands of years, people have looked up at the stars, recorded observations, and noticed patterns. Some of the first objects early astronomers identified were planets, which the Greeks called “planētai,” or “wanderers,” for their seemingly irregular movement through the night sky. Centuries of study helped people understand that the Earth and other planets in our solar system orbit the sun—a star like many others.

Today, with the help of technologies like telescope optics, space flight, digital cameras, and computers, it’s possible for us to extend our understanding beyond our own sun and detect planets around other stars. Studying these planets—called exoplanets—helps us explore some of our deepest human inquiries about the universe. What else is out there? Are there other planets and solar systems like our own?

Though technology has aided the hunt, finding exoplanets isn’t easy. Compared to their host stars, exoplanets are cold, small and dark—about as tricky to spot as a firefly flying next to a searchlight … from thousands of miles away. But with the help of machine learning, we’ve recently made some progress.

One of the main ways astrophysicists search for exoplanets is by analyzing large amounts of data from NASA’s Kepler mission with both automated software and manual analysis. Kepler observed about 200,000 stars for four years, taking a picture every 30 minutes, creating about 14 billion data points. Those 14 billion data points translate to about 2 quadrillion possible planet orbits! It’s a huge amount of information for even the most powerful computers to analyze, creating a laborious, time-intensive process. To make this process faster and more effective, we turned to machine learning.
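Those figures check out with some back-of-the-envelope arithmetic. A quick sketch (using a rough 365-day year, so the totals are approximate):

```python
# Rough check of the Kepler data-volume figures quoted above.
samples_per_day = 24 * 2          # one brightness measurement every 30 minutes
days = 4 * 365                    # roughly four years of observation
stars = 200_000

measurements_per_star = samples_per_day * days      # 70,080 per star
total_points = stars * measurements_per_star        # ~14 billion in all

print(f"{total_points:,}")  # 14,016,000,000
```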

The measured brightness of a star decreases ever so slightly when an orbiting planet blocks some of the light. The Kepler space telescope observed the brightness of 200,000 stars for 4 years to hunt for these characteristic signals caused by transiting planets.

Machine learning is a way of teaching computers to recognize patterns, and it’s particularly useful in making sense of large amounts of data. The key idea is to let a computer learn by example instead of programming it with specific rules.

I’m a machine learning researcher on the Google AI team with an interest in space, and I started this work as a 20 percent project (an opportunity at Google to work on something that interests you for 20 percent of your time). In the process, I reached out to Andrew, an astrophysicist from UT Austin, to collaborate. Together, we took this technique to the skies and taught a machine learning system how to identify planets around faraway stars.

Using a dataset of more than 15,000 labeled Kepler signals, we created a TensorFlow model to distinguish planets from non-planets. To do this, it had to recognize patterns caused by actual planets, versus patterns caused by other objects like starspots and binary stars. When we tested our model on signals it had never seen before, it correctly identified which signals were planets and which signals were not planets 96 percent of the time. So we knew it worked!
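As a toy illustration of that learn-by-example idea (this is not our actual TensorFlow model, and the data below is entirely synthetic), a classifier can be trained to separate "planet-like" signals from noise using nothing but labeled examples:

```python
import math
import random

# Hypothetical stand-in for a learned classifier: logistic regression on a
# single hand-crafted feature (the depth of the brightness dip). Both the
# feature and the data are invented for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
# Synthetic labeled "signals": planets show a consistent dip depth,
# non-planets (noise, starspots, binary stars) cluster elsewhere.
planets = [(random.gauss(0.8, 0.1), 1) for _ in range(100)]
others = [(random.gauss(0.2, 0.1), 0) for _ in range(100)]
data = planets + others

w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):                      # gradient descent on log loss
    for x, y in data:
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x
        b -= lr * (p - y)

correct = sum((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in data)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.0%}")
```

No rules about transits were programmed in; the decision boundary is learned entirely from the labeled examples.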

Armed with our working model, we shot for the stars, using it to hunt for new planets in Kepler data. To narrow the search, we looked at the 670 stars that were already known to host two or more exoplanets. In doing so, we discovered two new planets: Kepler 80g and Kepler 90i. Significantly, Kepler 90i is the eighth planet discovered orbiting the Kepler 90 star, making it the first known 8-planet system outside of our own.

We used 15,000 labeled Kepler signals to train our machine learning model to identify planet signals. We used this model to hunt for new planets in data from 670 stars, and discovered two planets missed in previous searches.

Some fun facts about our newly discovered planet: it’s 30 percent larger than Earth, and its surface temperature is approximately 800°F—not ideal for your next vacation. It also orbits its star every 14 days, meaning you’d have a birthday there just about every two weeks.

Kepler 90 is the first known 8-planet system outside of our own. In this system, planets orbit closer to their star, and Kepler 90i orbits once every 14 days. (Note that planet sizes and distances from stars are not to scale.)

The sky is the limit (so to speak) when it comes to the possibilities of this technology. So far, we’ve only used our model to search 670 stars out of 200,000. There may be many exoplanets still unfound in Kepler data, and new ideas and techniques like machine learning will help fuel celestial discoveries for many years to come. To infinity, and beyond!

A Summary of the First Conference on Robot Learning



Whether in the form of autonomous vehicles, home assistants or disaster rescue units, robotic systems of the future will need to be able to operate safely and effectively in human-centric environments. In contrast to their industrial counterparts, they will require a very high level of perceptual awareness of the world around them, and the ability to adapt to continuous changes in both their goals and their environment. Machine learning is a natural answer to both the problems of perception and generalization to unseen environments, and with the recent rapid progress in computer vision and learning capabilities, applying these new technologies to the field of robotics is becoming a central research question.

This past November, Google helped kickstart and host the first Conference on Robot Learning (CoRL) at our campus in Mountain View. The goal of CoRL was to bring machine learning and robotics experts together for the first time in a single-track conference, in order to foster new research avenues between the two disciplines. The sold-out conference attracted 350 researchers from many institutions worldwide, who collectively presented 74 original papers, along with 5 keynotes by some of the most innovative researchers in the field.
Prof. Sergey Levine, CoRL 2017 co-chair, answering audience questions.
Sayna Ebrahimi (UC Berkeley) presenting her research.
Videos of the inaugural CoRL are available on the conference website. Additionally, we are delighted to announce that next year, CoRL moves to Europe! CoRL 2018 will be chaired by Professor Aude Billard from the École Polytechnique Fédérale de Lausanne, and will tentatively be held at the Eidgenössische Technische Hochschule (ETH) in Zürich on October 29th-31st, 2018. Looking forward to seeing you there!
Prof. Ken Goldberg, CoRL 2017 co-chair, and Jeffrey Mahler (UC Berkeley) during a break.

Opening the Google AI China Center

Since becoming a professor 12 years ago and joining Google a year ago, I’ve had the good fortune to work with many talented Chinese engineers, researchers and technologists. China is home to many of the world's top experts in artificial intelligence (AI) and machine learning. All three winning teams of the ImageNet Challenge in the past three years have been largely composed of Chinese researchers. Chinese authors contributed 43 percent of all content in the top 100 AI journals in 2015—and when the Association for the Advancement of AI discovered that their annual meeting overlapped with Chinese New Year this year, they rescheduled.

I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make life better for everyone, everywhere. As an AI-first company, this is an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it.

That’s why I am excited to launch the Google AI China Center, our first such center in Asia, at our Google Developer Days event in Shanghai today. This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.

Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China’s strong engineering teams. We’ve already hired some top experts, and will be working to build the team in the months ahead (check our jobs site for open roles!). Along with Dr. Jia Li, Head of Research and Development at Google Cloud AI, I’ll be leading and coordinating the research. Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community.

Humanity is going through a huge transformation thanks to the phenomenal growth of computing and digitization. In just a few years, automatic image classification in photo apps has become a standard feature. And we’re seeing rapid adoption of natural language as an interface with voice assistants like Google Home. At Cloud, we see our enterprise partners using AI to transform their businesses in fascinating ways at an astounding pace. As technology starts to shape human life in more profound ways, we will need to work together to ensure that the AI of tomorrow benefits all of us. 

The Google AI China Center is a small contribution to this goal. We look forward to working with the brightest AI researchers in China to help find solutions to the world’s problems. 

Once again, the science of AI has no borders, and neither do its benefits.

TFGAN: A Lightweight Library for Generative Adversarial Networks



(Crossposted on the Google Open Source Blog)

Training a neural network usually involves defining a loss function, which tells the network how close or far it is from its objective. For example, image classification networks are often given a loss function that penalizes them for giving wrong classifications; a network that mislabels a dog picture as a cat will get a high loss. However, not all problems have easily-defined loss functions, especially if they involve human perception, such as image compression or text-to-speech systems. Generative Adversarial Networks (GANs), a machine learning technique that has led to improvements in a wide range of applications including generating images from text, superresolution, and helping robots learn to grasp, offer a solution. However, GANs introduce new theoretical and software engineering challenges, and it can be difficult to keep up with the rapid pace of GAN research.
A video of a generator improving over time. It begins by producing random noise, and eventually learns to generate MNIST digits.
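The adversarial objective behind that training can be written down in a few lines. Here is a hedged, pure-Python sketch of the standard GAN losses (the discriminator scores below are made-up numbers standing in for real network outputs, and this is not the TFGAN API itself):

```python
import math

# Minimal sketch of the standard GAN losses, using the non-saturating
# generator objective common in practice.

def d_loss(d_real, d_fake):
    # Discriminator wants real samples scored near 1 and fakes near 0.
    return -math.log(d_real) - math.log(1.0 - d_fake)

def g_loss(d_fake):
    # Generator wants its fakes to be scored near 1 by the discriminator.
    return -math.log(d_fake)

# Early in training the discriminator easily spots fakes: low D loss, high G loss.
print(round(d_loss(0.9, 0.1), 3), round(g_loss(0.1), 3))
# At the equilibrium where D outputs 0.5 everywhere, both losses settle.
print(round(d_loss(0.5, 0.5), 3), round(g_loss(0.5), 3))
```

TFGAN wraps objectives like these (and better-behaved variants) in well-tested library functions, so you don't have to re-derive or re-debug them yourself.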
In order to make GANs easier to experiment with, we’ve open sourced TFGAN, a lightweight library designed to make it easy to train and evaluate GANs. It provides the infrastructure to easily train a GAN, well-tested loss and evaluation metrics, and easy-to-use examples that highlight TFGAN’s expressiveness and flexibility. We’ve also released a tutorial that includes a high-level API to quickly get a model trained on your data.
This demonstrates the effect of an adversarial loss on image compression. The top row shows image patches from the ImageNet dataset. The middle row shows the results of compressing and uncompressing an image through an image compression neural network trained on a traditional loss. The bottom row shows the results from a network trained with a traditional loss and an adversarial loss. The GAN-loss images are sharper and more detailed, even if they are less like the original.
TFGAN supports experiments in a few important ways. It provides simple function calls that cover the majority of GAN use-cases so you can get a model running on your data in just a few lines of code, but is built in a modular way to cover more exotic GAN designs as well. You can just use the modules you want — loss, evaluation, features, training, etc. are all independent. TFGAN’s lightweight design also means you can use it alongside other frameworks, or with native TensorFlow code. GAN models written using TFGAN will easily benefit from future infrastructure improvements, and you can select from a large number of already-implemented losses and features without having to rewrite your own. Lastly, the code is well-tested, so you don’t have to worry about numerical or statistical mistakes that are easily made with GAN libraries.
Most neural text-to-speech (TTS) systems produce over-smoothed spectrograms. When applied to the Tacotron TTS system, a GAN can recreate some of the realistic texture, which reduces artifacts in the resulting audio.
When you use TFGAN, you’ll be using the same infrastructure that many Google researchers use, and you’ll have access to the cutting-edge improvements that we develop with the library. Anyone can contribute to the GitHub repositories, which we hope will facilitate code-sharing among ML researchers and users.

By Joel Shor, Senior Software Engineer, Machine Perception

3 ways employees will benefit from digital transformation in 2018

Editor’s note: Business is no longer as usual. New technologies in the workplace, like machine learning and augmented reality, create opportunities for companies to enhance employee productivity. Alan Lepofsky, analyst at Constellation Research, Inc., discusses three key areas where technology will impact work as we know it.

From Baby Boomers to Gen Z, today’s workplace contains a mixture of generations. Although each has grown up with very different technological and cultural experiences, all face similar challenges at work, like information overload and having to stay up-to-date with technology that’s constantly changing. But all is not lost! The future of work is an exciting one that will leverage new tools, technologies and techniques to help people get work done.

At Constellation Research, three of the top areas we’re tracking around employees in the digital workplace are:

  • Using technology to augment how teams accomplish work
  • Using data to guide actions and prioritize projects
  • Using technology to encourage more creativity among teams

Here are some of the things we’re observing.

Augmenting our ability to get more done

No longer a thing of the future, AI is already all around us in a big way—powering the voice input on our phones or the content in our news streams.

While conversations about AI often turn to science fiction, the reality for knowledge workers is that AI is already enhancing how they work, and will continue to do so. We’re already seeing email clients that recommend replies, calendars that automate meeting scheduling, and video services that transcribe content.

The way we create, consume and interact with content is also changing. Legacy whiteboards in meeting rooms are being replaced by large, intelligent and interactive screens that allow people to collaborate whether they're in the same room or across the world. Augmented and virtual reality are moving beyond science fiction (and gaming) to mainstream use cases such as education, product design and retail. While today’s headsets may be cumbersome, soon augmented reality will be everywhere, turning any clear surface into a potential display.

In addition, new input methods including voice dictation and gesture recognition (hands and face) are allowing us to interact with our devices in new ways. I actually wrote a lot of this post by speaking out loud to my phone. 

Using data to derive insights and guide actions

How many miles have you flown this year? How many steps have you taken today? Our personal lives are filled with measurements of our accomplishments and actions. Everything is quantified. But can you say the same for work?

Imagine if you could understand which social media posts are most effective or which meetings lead to more customer wins. We don’t always have the information we need at work to help us be more effective employees. In order to provide employees with meaningful information, data needs to be collected and patterns need to be discovered. But the fragmentation of work across social networks, file sharing, web conferencing and business applications creates quite a challenge.

The solution requires charting the interactions between people, content and devices. These collections are called “graphs” in computer science, and they reveal things like who people work with and what content they interact with. This information can be used to discover patterns, leading to insights about the way people work. In turn, this data can help employees better determine what work should be prioritized and what can be postponed.
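A hypothetical sketch of such a work graph: people and content as nodes, interactions as edges, with a simple traversal surfacing who collaborates through shared content (the names and documents here are invented for illustration):

```python
from collections import defaultdict

# Made-up "work graph" edges linking people to the content they touch.
interactions = [
    ("alice", "roadmap.doc"),
    ("bob", "roadmap.doc"),
    ("bob", "budget.sheet"),
    ("carol", "budget.sheet"),
    ("carol", "launch.deck"),
]

people_by_doc = defaultdict(set)
for person, doc in interactions:
    people_by_doc[doc].add(person)

def collaborators(person):
    """People who interact with the same content as `person`."""
    out = set()
    for members in people_by_doc.values():
        if person in members:
            out |= members
    return out - {person}

print(sorted(collaborators("bob")))  # ['alice', 'carol']
```

Patterns mined from graphs like this are what turn raw interaction logs into insights about how people actually work.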

Everyone becomes a storyteller

Think about the types of content people use at work: email, chat, documents, spreadsheets, presentations. Compare that to your personal life, which is probably dominated by photos and videos. Wouldn’t it be nice if we had a similar level of fun and creativity at work?

In the past, creating compelling graphics or videos was limited to professionals. Today, almost anyone with a camera phone can start creating highly visual content. Most camera applications provide lenses, filters, stickers and other digital tricks to enhance pictures. Some take gorgeous panoramic images and some even create 360° content. Conversations in group messaging applications now include emojis and animated GIFs. Photo-sharing sites can automatically create collages from our best images.

These advances in storytelling are starting to show up in the workplace as well, enabling marketers to create more effective presentations, financial workers to create visually informative spreadsheets and sales people to pitch products with more engaging content. The days of boring content at work are coming to an end.

Delivering in the digital workplace

We’ve witnessed incredible advancements in the tools we use at work over the past 20 years. However, these pale in comparison to what the next decade will be like. The future of work is going to empower employees regardless of skillset or seniority.

If you're ready to embrace the changes and become a digital employee, have your holographic assistant connect with mine so we can discuss this further! ...Or at least take advantage of some of the auto-scheduling features cropping up in your Calendar app.

Source: Google Cloud



A look at one billion drawings from around the world

Since November 2016, people all around the world have drawn one billion doodles in Quick, Draw!, a web game where a neural network tries to recognize your drawings.


That includes 2.9 million cats, 2.9 million hot dogs, and 2.9 million drawings of snowflakes.

Each drawing is unique. But when you step back and look at one billion of them, the differences fade away. Turns out, one billion drawings can remind us of how similar we are.


Take drawings people made of faces. Some have eyebrows.

Some have ears.

Some have hair.

Some are round.

Some are oval.

But if you look at them all together and squint, you notice something interesting: Most people seem to draw faces that are smiling.

These sorts of interesting patterns emerge with lots of drawings. Like how people all over the world have trouble drawing bicycles.

With some exceptions from the rare bicycle-drawing experts.

If you overlay these drawings, you’ll also notice some interesting patterns based on geography. Like the directions that chairs might point:

Or the number of scoops you might get on an ice cream cone.

(Source: Kyle McDonald)

And the strategy you might use to draw a star.

Still, no matter the drawing method, over the last 12 months, people have drawn more stars in Quick, Draw! than there are actual stars visible to the naked eye in the night sky.

If there’s one thing one billion drawings have taught us, it’s that no matter who we are or where we’re from, we’re united by the fun of making silly drawings of the things around us.


Quick, Draw! began as a simple way to let anyone play with machine learning. But these billions of drawings are also a valuable resource for improving machine learning. Researchers at Google have used them to train models like sketch-rnn, which lets people draw with a neural network. And the data we gathered from the game powers tools like AutoDraw, which pairs machine learning with drawings from talented artists to help everyone create anything visual, fast.


There is so much we have yet to discover. To explore a subset of the billion drawings, visit our open dataset. To learn more about how Quick, Draw! was built, read this post. And to draw your own star (or ice cream cone, or bicycle), play a round of Quick, Draw!

Pivot to the cloud: intelligent features in Google Sheets help businesses uncover insights

When it comes to data in spreadsheets, deciphering meaningful insights can be a challenge whether you’re a spreadsheet guru or data analytics pro. But thanks to advances in the cloud and artificial intelligence, you can instantly uncover insights and empower everyone in your organization—not just those with technical or analytics backgrounds—to make more informed decisions.

We launched "Explore" in Sheets to help you decipher your data easily using the power of machine intelligence, and since then we’ve added even more ways for you to intelligently visualize and share your company data. Today, we’re announcing additional features in Google Sheets to help businesses make better use of their data, from pivot tables and formula suggestions powered by machine intelligence, to even more flexible ways to help you analyze your data.

Easier pivot tables, faster insights

Many teams rely on pivot tables to summarize massive data sets and find useful patterns, but creating them manually can be tricky. Now, if you have data organized in a spreadsheet, Sheets can intelligently suggest a pivot table for you.


In the Explore panel, you can also ask questions of your data using everyday language (via natural language processing) and have the answer returned as a pivot table. For example, type “what is the sum of revenue by salesperson?” or “how much revenue does each product category generate?” and Sheets can help you find the right pivot table analysis.
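Under the hood, a question like "sum of revenue by salesperson" boils down to a group-and-aggregate. A hypothetical sketch of that core pivot operation in plain Python (the sales rows are made up):

```python
from collections import defaultdict

# Made-up sales rows: (salesperson, product category, revenue).
rows = [
    ("Ana", "Hardware", 1200),
    ("Ana", "Software", 800),
    ("Ben", "Hardware", 450),
    ("Ben", "Software", 1550),
]

def pivot_sum(rows, key_index, value_index):
    """Sum `value_index` grouped by `key_index` -- the heart of a pivot table."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[key_index]] += row[value_index]
    return dict(totals)

print(pivot_sum(rows, 0, 2))  # revenue by salesperson: {'Ana': 2000, 'Ben': 2000}
print(pivot_sum(rows, 1, 2))  # revenue by category: {'Hardware': 1650, 'Software': 2350}
```

What Sheets adds on top is the natural-language layer: mapping your question to the right grouping key and aggregation automatically.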

In addition, if you want to create a pivot table from scratch, Sheets can suggest a number of relevant tables in the pivot table editor to help you summarize your data faster.

Suggested formulas, quicker answers

We often use basic spreadsheet formulas like =SUM or =AVERAGE for data analysis, but it takes time to make sure all inputs are written correctly. Soon, you may notice suggestions pop up when you type “=” in a cell. Using machine intelligence, Sheets provides full formula suggestions to you based on contextual clues from your spreadsheet data. We designed this to help teams save time and get answers more intuitively.

Formula suggestions in Sheets

Even more Sheets features

We’re also adding more features to make Sheets even better for data analysis:

  • Check out a refreshed UI for pivot tables in Sheets, and new, customizable headings for rows and columns.
  • View your data differently with new pivot table features. When you create a pivot table, you can “show values as a % of totals” to see summarized values as a fraction of grand totals. Once you have a table, you can right-click on a cell to “view details” or even combine pivot table groups to aggregate data the way you need it. We’re also adding new format options, like repeated row labels, to give you more fine-tuned control of how to present your summarized data.
  • Create and edit waterfall charts. Waterfall charts are good for visualizing sequential changes in data, like if you want to see the incremental breakdown of last year’s revenue month-by-month. Select Insert > Chart > Chart type picker and then choose “waterfall.”
  • Quickly import or paste fixed-width formatted data files. Sheets will automatically split up the data into columns for you without needing a delimiter, like commas, between data.

These new Sheets features will roll out in the coming weeks—see specific details here. To learn more about how G Suite can help your business uncover valuable insights and speed up efficiencies, visit the G Suite website. Or check out these tips to help you get started with Sheets.

Pivot to the cloud: intelligent features in Google Sheets help businesses uncover insights

When it comes to data in spreadsheets, deciphering meaningful insights can be a challenge whether you’re a spreadsheet guru or data analytics pro. But thanks to advances in the cloud and artificial intelligence, you can instantly uncover insights and empower everyone in your organization—not just those with technical or analytics backgrounds—to make more informed decisions.

We launched "Explore" in Sheets to help you decipher your data easily using the power of machine intelligence, and since then we’ve added even more ways for you to intelligently visualize and share your company data. Today, we’re announcing additional features to Google Sheets to help businesses make better use of their data, from pivot tables and formula suggestions powered by machine intelligence, to even more flexible ways to help you analyze your data.

Easier pivot tables, faster insights

Many teams rely on pivot tables to summarize massive data sets and find useful patterns, but creating them manually can be tricky. Now, if you have data organized in a spreadsheet, Sheets can intelligently suggest a pivot table for you.


In the Explore panel, you can also ask questions of your data using everyday language (via natural language processing) and have the answer returned as a pivot table. For example, type “what is the sum of revenue by salesperson?” or “how much revenue does each product category generate?” and Sheets can help you find the right pivot table analysis.
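Sheets doesn't publicly document how it maps a natural-language question to a pivot table, but the aggregation that a query like "what is the sum of revenue by salesperson?" ultimately resolves to can be sketched in plain Python. The rows and names below are illustrative, not from any real spreadsheet:

```python
from collections import defaultdict

# Illustrative rows standing in for a spreadsheet range
# (columns: salesperson, product category, revenue).
rows = [
    ("Ana", "Hardware", 1200),
    ("Ben", "Software", 800),
    ("Ana", "Software", 450),
    ("Ben", "Hardware", 300),
]

def sum_by(rows, key_index, value_index):
    """Group rows by one column and sum another -- the pivot behind
    a question like 'sum of revenue by salesperson'."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[key_index]] += row[value_index]
    return dict(totals)

print(sum_by(rows, key_index=0, value_index=2))
# {'Ana': 1650, 'Ben': 1100}
```

The second example query ("how much revenue does each product category generate?") is the same operation with `key_index=1`.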

In addition, if you want to create a pivot table from scratch, Sheets can suggest a number of relevant tables in the pivot table editor to help you summarize your data faster.

Suggested formulas, quicker answers

We often use basic spreadsheet formulas like =SUM or =AVERAGE for data analysis, but it takes time to make sure all inputs are written correctly. Soon, you may notice suggestions pop up when you type “=” in a cell. Using machine intelligence, Sheets provides full formula suggestions to you based on contextual clues from your spreadsheet data. We designed this to help teams save time and get answers more intuitively.
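The machine-intelligence model behind these suggestions isn't described in detail, but the general idea of suggesting a formula from contextual clues can be sketched with a simple hypothetical heuristic: if every cell above the active cell holds a number, propose summing that range. This is an illustration of the concept, not how Sheets actually works:

```python
def suggest_formula(column_letter, values_above):
    """Hypothetical heuristic: if all cells above the active cell are
    numeric, suggest a =SUM over that range; otherwise suggest nothing."""
    if values_above and all(isinstance(v, (int, float)) for v in values_above):
        rng = f"{column_letter}1:{column_letter}{len(values_above)}"
        return f"=SUM({rng})"
    return None

print(suggest_formula("B", [10, 25, 7]))   # =SUM(B1:B3)
print(suggest_formula("B", [10, "n/a"]))   # None (mixed types, no suggestion)
```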

Formula suggestions in Sheets

Even more Sheets features

We’re also adding more features to make Sheets even better for data analysis:

  • Check out a refreshed UI for pivot tables in Sheets, and new, customizable headings for rows and columns.
  • View your data differently with new pivot table features. When you create a pivot table, you can “show values as a % of totals” to see summarized values as a fraction of grand totals. Once you have a table, you can right-click on a cell to “view details” or even combine pivot table groups to aggregate data the way you need it. We’re also adding new format options, like repeated row labels, to give you more fine-tuned control of how to present your summarized data.
  • Create and edit waterfall charts. Waterfall charts are good for visualizing sequential changes in data, like if you want to see the incremental breakdown of last year’s revenue month-by-month. Select Insert > Chart > Chart type picker and then choose “waterfall.”
  • Quickly import or paste fixed-width formatted data files. Sheets will automatically split up the data into columns for you without needing a delimiter, like commas, between data.
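Sheets doesn't expose how its fixed-width splitter works, but a common approach to delimiter-free parsing is to treat character positions that are blank in every line as column boundaries. A minimal sketch of that idea, on made-up data:

```python
def split_fixed_width(lines):
    """Split fixed-width lines into fields by treating character
    positions that are blank in every line as column boundaries."""
    width = max(len(line) for line in lines)
    padded = [line.ljust(width) for line in lines]
    blank = [all(row[i] == " " for row in padded) for i in range(width)]
    # Maximal runs of non-blank positions become the field spans.
    spans, start = [], None
    for i, is_blank in enumerate(blank + [True]):  # sentinel closes last run
        if not is_blank and start is None:
            start = i
        elif is_blank and start is not None:
            spans.append((start, i))
            start = None
    return [[row[s:e].strip() for s, e in spans] for row in padded]

table = [
    "name    city      qty",
    "Ana     Lisbon     12",
    "Ben     Toronto     3",
]
print(split_fixed_width(table))
# [['name', 'city', 'qty'], ['Ana', 'Lisbon', '12'], ['Ben', 'Toronto', '3']]
```

This simple heuristic assumes the columns never touch in any row; a production parser would need to handle ragged alignment.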

These new Sheets features will roll out in the coming weeks—see specific details here. To learn more about how G Suite can help your business uncover valuable insights and work more efficiently, visit the G Suite website. Or check out these tips to help you get started with Sheets.

Source: Google Cloud