Category Archives: Google for Work Blog

Work is going Google

How Google went all in on video meetings (and you can, too)

Editor’s note: This is the first article in a five-part series on Google Hangouts.

I’ve worked at Google for more than a decade and have seen the company expand across geographies—including to Stockholm where I have worked from day one. My coworkers and I build video conferencing technology to help global teams work better together.

It’s sometimes easy to forget what life was like before face-to-face video conferencing (VC) at work, but we struggled with many of the same issues that other companies deal with—cobbled-together communication technologies, dropped calls, expensive solutions. Here’s a look at how we transitioned Google to be a cloud video meeting-first company.

2004 - 2007: Life before Hangouts

In the mid-2000s, Google underwent explosive growth. We grew from nearly 3,000 employees to more than 17,000 across 40 offices globally. Historically, we relied on traditional conference phone bridging and email to communicate across time zones, but phone calls don’t exactly inspire creativity and tone gets lost in translation with email threads.

We realized that the technology we used didn’t mirror how our teams actually like to work together. If I want to sort out a problem or present an idea, I’d rather be face-to-face with my team, not waiting idly on a conference bridge line.

Google decided to go all in on video meetings. We brought in proprietary third-party video conferencing (VC) technology and outfitted large meeting rooms with these devices.

A conference room in Google’s Zurich office in 2007, outfitted with third-party VC technology.

While revolutionary, this VC technology was extremely costly. Each unit could cost upwards of $50,000, and that did not include support, licensing and network maintenance fees. To complicate matters, the units were powered by complex, on-prem infrastructure and required several support technicians. By 2007, nearly 2,400 rooms were equipped with the technology.

Then we broke it.

The system was built to host meetings for team members in the office, but didn't cater to people on the go. As more and more Googlers used video meetings, we reached maximum capacity on the technology’s infrastructure and experienced frequent dropped calls and poor audio/visual (AV) quality. I even remember one of the VC bridges catching on fire! We had to make a change.

2008 - 2013: Taking matters into our own hands

In 2008, we built our own VC solution that could keep up with the rate at which we were growing. We scaled with software and moved meetings to the cloud.

Our earliest “Hangouts” prototype was Gmail Video Chat, a way to connect with contacts directly in Gmail. Hours after releasing the service to the public, it had hundreds of thousands of users.

The earliest software prototype for video conferencing at Google, Gmail Video Chat.

Arthur van der Geer tests out the earliest prototype for Hangouts, go/meet. 

While a good start, we knew we couldn’t scale group video conferencing within Gmail. We built our second iteration, which tied meeting rooms to unique URLs. We introduced it to Googlers in 2009 and the product took off.

During this journey, we also built our own infrastructure (WebRTC) so we no longer had to rely on third-party audio and video components. Our internal IT team created our own VC hardware prototypes; we used touchscreen computers and custom software with the first version of Hangouts and called it “Google Video Conferencing” (“GVC” for short).

Google engineers test the first Google Video Conferencing hardware prototype in 2008.

With each of these elements, we had now built our earliest version of Hangouts. After a few years of testing—and widespread adoption by Googlers—we made the platform available externally to customers in 2014 (“Chromebox for Meetings”). In the first two weeks, we sold more than 2,000 units. By the end of the year, every Google conference room and company device had access to VC.

2014 - today: Transforming how businesses do business

Nearly a decade has passed since we built the first prototype. Face-to-face collaboration is ingrained in Google’s DNA now—more than 16,500 meeting rooms are VC-equipped at Google and our employees join Hangouts 240,000 times per day! That's equivalent to spending more than 10 years per day collaborating in video meetings. And, now, more than 3 million businesses are using Hangouts to transform how they work too.

We learned a lot about what it takes to successfully collaborate as a scaling business. If you’re looking to transition your meetings to the cloud with VC, here are a few things to keep in mind:

  1. Encourage video engagement from the start. Every good idea needs a champion. Be seen as an innovator by evangelizing video engagement in company meetings from the start. Your team will thank you for it.
  2. If you’re going to move to VC, make it available everywhere. We transformed our work culture to be video meeting-first because we made VC ubiquitous. Hangouts Meet brings you a consistent experience across web, mobile and conference rooms. If you’re going to make the switch, go all in and make it accessible to everyone.
  3. Focus on the benefits. Video meetings can help distributed teams feel more engaged and help employees collaborate whenever, and wherever, inspiration strikes. This means you’ll have more diverse perspectives which makes for better quality output.

What’s next? Impactful additions and improvements to Hangouts Meet will be announced soon. All the while, we’re continuing to research how teams work together and how we can evolve VC technology to reflect that collaboration. For example, we’re experimenting with making scheduling easier for teams thanks to the @meet AI bot in the early adopter version of Hangouts Chat.


Source: Google Cloud


Transforming Chile’s health sector with connectivity

Editor’s note: From instant access to medical records, to telemedicine in rural areas, connectivity in the health sector has the power to improve lives. In this guest post, Soledad Munoz Lopez, CIO of the Chilean Ministry of Health, shares with us how Chile implemented a national API-based architecture to help bring better health to millions.

Not long ago, Chile’s Ministry of Health (MINSAL) faced an enormous challenge. Chile’s 1,400 health facilities and 1,000 remote medical facilities lacked reliable connectivity, and many of its healthcare systems could not easily interoperate. This meant healthcare providers couldn’t always count on fast and easy access to medical records.

Earlier efforts to centralize and manage medical records across facilities fell apart because they were costly and far too laborious. And as a result, we missed out on a lot of opportunities. We came to realize that we needed a new approach to IT architecture.

To help ensure that data, applications and services are securely available when and where they’re needed, I’m helping to lead the implementation of a national API-based architecture, powered by Google Cloud’s Apigee. From facilitating smoother public-private partnerships to enabling wider use of services such as telemedicine, we see this as a critical and aggressive move to rapidly improve wellness for our millions of citizens and visitors.

The API-first architecture aligns with a variety of MINSAL’s healthcare efforts, including a national program to connect unconnected healthcare centers, and a plan to digitize all clinic and administrative processes, both for major hospitals and local clinics and primary care centers. It also helps MINSAL’s strategic work, such as better leveraging data and connectivity for public alerts, population health management programs and the Public Health Surveillance initiatives needed for planning and execution of public health policy.

Connecting Chile’s healthcare system

One of the primary areas of concern addressed by the new digital architecture is the ease and speed of integration. As noted above, it’s important that whenever a patient is treated anywhere in Chile, the clinical teams and the patient have access to all the information that has been generated for that patient, regardless of where this information was recorded. This includes data from other health clinics, public or private institutions, laboratories, radiology and images and clinical equipment.

This variety of data sources typifies the heterogeneous environment that an API-first architecture needs to address: applications, devices, patient record systems, management systems, scheduling and so on. Most of these pieces within the MINSAL ecosystem were never designed to interoperate. We chose an API-first approach because APIs abstract all of this back-end complexity into predictable, consistent interfaces that allow developers to more quickly and efficiently connect data, services and apps across the nationwide system. The result is a more seamless experience for doctors and patients and a secure but agile infrastructure for MINSAL.

In a previous attempt to efficiently and scalably integrate health records, started in 2005, Chile utilized a centralized SOA-based architecture. This strategy turned out to be an expensive and inflexible way to try to achieve interoperability. The integration expenses were projected to require at least three times the current budget—untenable in a country where the total budget for development of clinical records is about $40 million annually.

Yet far larger are the costs to the users of an unconnected system, including unnecessary travel, duplicated exams and out-of-pocket costs in general.

Working with Google Cloud Platform (GCP) and local system integrators such as Tecnodata, MINSAL is implementing a health systems technology investment strategy that is much more efficient. The API-based architecture enables any IT professional in any of Chile’s organizations, facilities, institutions and providers to onboard their information systems in an organized, more secure, self-service manner.  

This helps make the national program much more scalable, and involves local industry experts more closely. In addition, these entities can continue to evolve their own local systems as they need, as long as they’re compliant with the common integration strategy. MINSAL has established the policy that all data records be based on API-centric standards like FHIR and HL7, with images based on DICOM.   
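
To make this standards-based, API-centric approach concrete, here is a minimal sketch in Java (standard library only) of how a client system might fetch a FHIR Patient resource through an API-managed endpoint. The endpoint URL, API-key header and resource ID are hypothetical placeholders, not MINSAL’s actual interfaces or credentials.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class FhirPatientLookup {
        public static void main(String[] args) throws Exception {
            // Hypothetical API-gateway endpoint and key -- placeholders, not MINSAL's real values.
            String endpoint = "https://api.example-health.cl/fhir/Patient/12345";
            String apiKey = System.getenv("HEALTH_API_KEY");

            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("GET");
            // FHIR resources are commonly exchanged as JSON.
            conn.setRequestProperty("Accept", "application/fhir+json");
            // API gateways such as Apigee often authenticate callers with an API key header.
            conn.setRequestProperty("x-api-key", apiKey);

            StringBuilder body = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line);
                }
            }
            // A real client would parse the FHIR Patient JSON with a FHIR library
            // rather than print the raw response.
            System.out.println(body);
        }
    }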

All of these connectivity and interoperability efforts help enable important services that benefit Chilean citizens, such as telemedicine. Telemedicine, which enables patients to avoid unnecessary travel and relocation while under medical care, is highly developed in five specialties in Chile: teledermatology, teleophthalmology, telenephrology, teleradiology and tele-electrocardiography.  

An API platform for a healthy future

The Apigee platform has been the accelerator for the entire program, providing visibility and controls that make APIs easier to manage. It also saves MINSAL from developing API management features on its own, since Apigee builds them right into the platform: key management, identity brokering, traffic routing, cyber-threat management, data caching, analytics collection, developer management, a developer portal and many others. As a result of the success of this program, we’re moving towards API-based strategies in more than just the health sector. Here are a few examples:

  • A single registry of individual and institutional health providers

  • An identity service integrated with the National Identity Registry

  • A birth pre-registry

  • A verification of identity service for use during emergency medical services

  • A national pharmaceutical terminology service

  • A patient portal (including pregnancy support, for example)

  • Electronic immunization records

  • Traceability and management of national health insurance accounts

  • An electronic medical prescription model


The API platform helps professionals in the entire network of healthcare systems in Chile access patient information throughout the care cycle. MINSAL was able to reduce costs by sharing information, eliminating delays and reducing the duplication of medical tests. The platform also provides information to apps and websites used by patients, enabling them to see their own health data and gradually take ownership of it.

The promotion of preventive healthcare is a critically important initiative in Chile. API technology supports the monitoring of epidemiological changes in the population, consuming information from operational systems, through the same Apigee API platform that is already interfacing with all the health establishments. This means we now have far better data to begin testing machine learning and using our big data to help focus our health programs on impactful outcomes.

Chile is a leader among Latin American national health programs, and works closely with other countries and organizations to develop and coordinate programs and policies. By working with GCP and adopting an API-based architecture with the explicit goal of improving outcomes and the efficacy of the health care system, we hope to inspire others and pave the way to better health for billions of people.

Source: Google Cloud


Addressing the UK NCSC’s Cloud Security Principles

As your organization adopts more cloud services, it's essential to get a clear picture of how sensitive data will be protected. Many authorities, from government regulators to industry standards bodies and consortia, have provided guidance on how to evaluate cloud security. Notably, the UK National Cyber Security Centre offers a framework built around 14 Cloud Security Principles, and we recently updated our response detailing how we address these principles for both Google Cloud Platform (GCP) and G Suite. Google Cloud customers in the UK public sector can use the response to assess the suitability of these Google Cloud products to host their data with sensitivity levels up to “OFFICIAL,” including “OFFICIAL SENSITIVE.”

The UK National Cyber Security Centre was set up to improve the underlying security of the UK internet and to protect critical services from cyber attacks. Its 14 Cloud Security Principles are expansive and thorough, and include such important considerations as data in-transit protection, supply chain security, identity and authentication and secure use of the service.

The 14 NCSC Cloud Security Principles allow service providers like Google Cloud to highlight the security benefits of our products and services in an easily consumable format. Our response provides details about how GCP and G Suite satisfy the recommendations built into each of the principles, and describes the specific best practices, services and certifications that help us address the goals of each recommendation.

The NCSC also provides detailed ChromeOS deployment guidance to help organizations follow its 12 End User Device Security Principles. With an end-to-end solution encompassing GCP, applications and connected devices, Google Cloud provides the appropriate tools and functionality to allow you to adhere to the NCSC’s stringent security guidelines in letter and spirit.

Our response comes on the heels of GCP opening a new region in London, which allows GCP customers in the UK to improve the latency of their applications.

We look forward to working with all manner of UK customers, regulated and otherwise, as we build out a more secure, intelligent, collaborative and open cloud.

Source: Google Cloud


Search more intuitively using natural language processing in Google Cloud Search

Earlier this year, we launched Google Cloud Search, a new G Suite tool that uses machine learning to help organizations find and access information quickly.

Just like in Google Search, which lets you search queries in a natural, intuitive way, we want to make it easy for you to find information in the workplace using everyday language. According to Gartner research, by 2018, 30 percent or more of enterprise search queries will start with a “what,” “who,” “how” or “when.”*

Today, we’re making it possible to use natural language processing (NLP) technology in Cloud Search so you can track down information—like documents, presentations or meeting details—fast.


Find information fast with Cloud Search

If you’re looking for a Google Doc, you’re more likely to remember who shared it with you than the exact name of a file. Now, you can use NLP technology, an intuitive way to search, to find information quickly in Cloud Search.

Type queries into Cloud Search using natural, everyday language. Ask questions like “Docs shared by Mary,” “Who’s Bob’s manager?” or “What docs need my attention?” and Cloud Search will show you answer cards with relevant information.

Having quicker access to information can help you make better and faster decisions in the workplace. If your organization runs on G Suite Business or Enterprise edition, start using Cloud Search now. If you’re new to Cloud Search, learn more on our website or check out this video to see it in action.

*Gartner, ‘Insight Engines’ Will Power Enterprise Search That is Natural, Total and Proactive, 09 December 2015, refreshed 05 April 2017

Source: Google Cloud


How publishers can take advantage of machine learning

As the publishing world continues to face new challenges amidst the shift to digital, news media and publishers are tasked with unlocking new opportunities. With online news consumption continuing to grow, it’s crucial that publishers take advantage of new technologies to sustain and grow their business. Machine learning yields tremendous value for media companies and can help them tackle the hardest problems: engaging readers, increasing profits, and making newsrooms more efficient. Google has a suite of machine learning tools and services that are easy to use—here are a few ways they can help newsrooms and reporters do their jobs.

1. Improve your newsroom's efficiency 

Editors want their stories to be appealing and to stand out so that people will read them. Finding just the right photograph or video can be key to bringing a story to life. But with ever-pressing deadlines, there’s often not enough time to find that perfect image. This is where Google Cloud Vision and Video Intelligence can simplify the process by tagging images and videos based on the content inside the actual image. This metadata can then be used to make it easier and quicker to find the right visual.

2.  Better understand your audience

News publishers use analytics tools to grow their audiences and understand what those audiences are reading and how they’re discovering content. Google Cloud Natural Language uses machine learning to understand what your content is about, independent of a website’s section and subsection structure (i.e., Sports, Local, etc.). Today, Cloud Natural Language announced a new content classifier and entity sentiment analysis, which dig into the detail of what a story is actually about. For example, an article about a high-tech stadium for the Golden State Warriors may be classified under the “technology” section of a paper, when its content should fall under “technology” and “sports.” This section-independent tagging can increase readership by driving smarter article recommendations and providing better data around trending topics. Naveed Ahmad, Senior Director of Data at Hearst, has emphasized that precision and speed are critical to engaging readers: “Google Cloud Natural Language is unmatched in its accuracy for content classification. At Hearst, we publish several thousand articles a day across 30+ properties and, with natural language processing, we're able to quickly gain insight into what content is being published and how it resonates with our audiences."
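
For a sense of what section-independent tagging looks like in practice, here is a minimal sketch using the google-cloud-language Java client to classify a block of article text. The sample text and the categories it would return are illustrative; this is not necessarily how Hearst’s own pipeline is built.

    import com.google.cloud.language.v1.ClassificationCategory;
    import com.google.cloud.language.v1.ClassifyTextRequest;
    import com.google.cloud.language.v1.ClassifyTextResponse;
    import com.google.cloud.language.v1.Document;
    import com.google.cloud.language.v1.Document.Type;
    import com.google.cloud.language.v1.LanguageServiceClient;

    public class ArticleClassifier {
        public static void main(String[] args) throws Exception {
            // Hypothetical article text; content classification works best on a reasonable amount of prose.
            String articleText = "The Golden State Warriors' new high-tech arena will feature "
                + "courtside sensors, mobile ticketing and an app-driven fan experience.";

            try (LanguageServiceClient language = LanguageServiceClient.create()) {
                Document doc = Document.newBuilder()
                    .setContent(articleText)
                    .setType(Type.PLAIN_TEXT)
                    .build();
                ClassifyTextRequest request = ClassifyTextRequest.newBuilder()
                    .setDocument(doc)
                    .build();
                ClassifyTextResponse response = language.classifyText(request);
                // Each category comes back with a path-style name (e.g. "/Sports") and a confidence score.
                for (ClassificationCategory category : response.getCategoriesList()) {
                    System.out.printf("%s (confidence %.2f)%n",
                        category.getName(), category.getConfidence());
                }
            }
        }
    }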

3. Engage with new audiences

As publications expand their reach into more countries, they have to write for multiple audiences in different languages and many cannot afford multi-language desks. Google Cloud Translation makes translating for different audiences easier by providing a simple interface to translate content into more than 100 languages. Vice launched GoogleFish earlier this year to help editors quickly translate existing Vice articles into the language of their market. Once text was auto-translated, an editor could then push the translation to a local editor to ensure tone and local slang were accurate. Early translation results are very positive and Vice is also uncovering new insights around global content sharing they could not previously identify.

DB Corp, India’s largest newspaper group, publishes 62 editions in four languages and sells about 6 million newspaper copies per day. To serve its growing customer base and diverse readership, its reporters use Google Cloud Translation to capture and document interviews and source material for articles, with accuracy rates of 95 percent for Hindi alone.
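
As a rough illustration of the translation workflow described above, the sketch below uses the google-cloud-translate Java client to translate an article snippet into Hindi. The snippet and target language are examples only, not Vice’s or DB Corp’s actual integration.

    import com.google.cloud.translate.Translate;
    import com.google.cloud.translate.Translate.TranslateOption;
    import com.google.cloud.translate.TranslateOptions;
    import com.google.cloud.translate.Translation;

    public class ArticleTranslator {
        public static void main(String[] args) {
            // Uses Application Default Credentials for authentication.
            Translate translate = TranslateOptions.getDefaultInstance().getService();

            String original = "Machine learning is changing how newsrooms work.";
            // Translate the snippet into Hindi ("hi"); the source language is auto-detected.
            Translation translation = translate.translate(
                original,
                TranslateOption.targetLanguage("hi"));

            System.out.println(translation.getTranslatedText());
        }
    }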

4. Monetize your audience

So far we’ve primarily outlined ways to improve content creation and engagement with readers; however, monetization is a critical piece for all publishers. Using Cloud Datalab, publishers can identify new subscription opportunities and offerings. The metadata collected from image, video, and content tagging creates an invaluable dataset to advertisers, such as audiences interested in local events or personal finance, or those who watch videos about cars or travel. The Washington Post has seen success with their in-house solution through the ability to target native ads to likely interested readers. Lastly, improved content recommendation drives consumption, ultimately improving the bottom line.

5. Experiment with new formats

The ability to share news quickly and efficiently is a major concern for newsrooms across the world. However, today more than ever, readers consume the news in different ways across different platforms, and the “one format fits all” method is not always best. TensorFlow’s “summary.text” feature can help publishers quickly experiment with creating short form content from longer stories. This helps them quickly test the best way to share their content across different platforms. Reddit recently launched a similar “tl;dr bot” that summarizes long posts into digestible snippets.

6. Keep your content safe for everyone

The comments section can be a place of both fruitful discussion and toxicity. Users who comment are frequently the most highly engaged on the site overall, and while publishers want to keep the conversation open, it can quickly spiral out of control into offensive speech and bad language. Jigsaw’s Perspective is an API that uses machine learning to spot harmful comments, which can then be flagged for moderators. Publishers like the New York Times have leveraged Perspective's technology to improve the way all readers engage with comments. By making it easier to moderate conversations at scale, this frees up valuable time for editors and improves online discussion.

Example of the New York Times’ moderator dashboard. Each dot represents a negative comment.
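
For readers who want to try this themselves, here is a minimal sketch of calling the Perspective API’s comments:analyze endpoint from Java with only the standard library. It assumes an API key in a PERSPECTIVE_API_KEY environment variable and scores a single hard-coded comment; it does not reproduce any publisher’s actual moderation pipeline.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class CommentScorer {
        public static void main(String[] args) throws Exception {
            // Perspective API key read from the environment (hypothetical setup).
            String apiKey = System.getenv("PERSPECTIVE_API_KEY");
            URL url = new URL(
                "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=" + apiKey);

            // Request a TOXICITY score for one comment.
            String requestBody =
                "{\"comment\": {\"text\": \"You are a wonderful person.\"},"
              + " \"requestedAttributes\": {\"TOXICITY\": {}}}";

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(requestBody.getBytes(StandardCharsets.UTF_8));
            }

            // The JSON response includes attributeScores.TOXICITY.summaryScore.value (0.0-1.0);
            // a moderation dashboard would parse it and flag comments above a chosen threshold.
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                reader.lines().forEach(System.out::println);
            }
        }
    }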

From the printing press to machine learning, technology continues to spur new opportunities for publishers to reach more people, create engaging content and operate efficiently. We're only beginning to scratch the surface of what machine learning can do for publishers. Keep tabs on The Keyword for the latest developments.

Source: Google Cloud


At New Zealand schools, Chromebooks top the list of learning tools

New Zealand educators are changing their approach to teaching, building personalized learning pathways for every student. Technology plays a key part in this approach. New Zealand has joined the list of countries, including Sweden and the United States, where Chromebooks are the number-one device used in schools, according to analysts at International Data Corporation (IDC).

“Chromebooks continue to be a top choice for schools,” says Arunachalam Muthiah, Senior Market Analyst, IDC NZ. “After Chromebooks’ strong performance in 2016, we see a similar trend in the first half of 2017 with Chromebooks gaining a total shipment market share of 46 percent, continuing to hold their position as the number-one selling device in schools across New Zealand.”

Bombay School students learning about conductivity, electrical circuits and constructing a tune.

Technology is transforming education across the globe, and in New Zealand, schools are using digital tools to help students learn, in the classroom and beyond.

At Bombay School, located in the rural foothills south of Auckland, students could only get an hour a week of computer access. Bombay School’s principal and board decided on a 1:1 “bring your own device” program with Chromebooks, along with secure device management using a Chrome Education license.

Teachers quickly realized that since each student was empowered with a Chromebook, access to learning opportunities increased daily, inspiring students to chart new learning paths. “Technology overcomes constraints,” says Paul Petersen, principal of Bombay School. “If I don’t understand multiplication today, I can learn about it online. I can look for help. I can practice at my own pace, anywhere I am.”

In 2014, Bombay School seniors collectively scored in the 78th percentile for reading; in 2016, they reached nearly the 90th percentile.

Students at Point England School take a digital license quiz to learn about online behavior.

In the Manaiakalani Community of Learning in East Auckland, some students start school with lower achievement levels than students in other school regions. Manaiakalani chose Chromebooks to support its education program goals and manage budget challenges. By bringing Chromebooks to the Manaiakalani schools, “we broke apart the barriers of the 9 a.m. to 3 p.m. school day,” says Dorothy Burt, head of the Manaiakalani Education Program and Digital Learning Coordinator, based at Point England School. Using G Suite for Education tools on their Chromebooks, students can work with other students, teachers, and parents on their lessons in the classroom, the library, or at home.

Dorothy says, “We’re seeing not only engagement, but actual literacy outcomes improve—it’s made a huge difference to the opportunities students will have in the future.”

We look forward to supporting more countries and schools as they redefine teaching and make learning even more accessible for every student, anywhere.

Source: Google Cloud


The Spanish Data Protection Authority (AEPD) confirms compliance of Google Cloud commitments for international data flows

Millions of organizations use Google Cloud services every day, relying on Google to provide world-class privacy and security protections. Data protection is central to our mission, and we're always looking at ways to facilitate our customers’ compliance journey.

Today we’re pleased to announce that the Spanish Data Protection Agency (“Agencia Española de Protección de Datos” or “AEPD”) has issued a decision confirming that the guarantees established by the contractual commitments provided by Google for the international transfers of data to the U.S. connected to its G Suite and Google Cloud Platform (GCP) services are adequate. Therefore, the international transfers to the U.S. under such contractual commitments are deemed authorized by the AEPD provided the conditions established by the AEPD’s decision are met.

This authorization benefits all of our G Suite and GCP customers in Spain, who don’t need to pursue it individually. Rather, customers need to opt in to the relevant model contract clauses (via the online processes described on our Help Centers for G Suite and GCP services, respectively) and notify their relevant transfer to the AEPD’s registry. For more details, please see the AEPD’s decision.

The EU’s Data Protection Authorities had already confirmed earlier this year that Google Cloud services’ contractual commitments fully meet the requirements to legally frame transfers of data from the EU to the rest of the world in accordance with the EU Data Protection Directive 95/46/EC.  

This authorization is an important milestone for Google and its Spanish customers, as it reaffirms that the legal protections underpinning G Suite and GCP international data flows meet European and Spanish regulatory requirements. Furthermore, our customers can count on the fact that Google is committed to complying with the General Data Protection Regulation (GDPR) across G Suite and Google Cloud Platform services.


What does this mean for our customers in the Spanish jurisdiction?

G Suite and GCP customers may benefit from a simplified process with regard to international data transfers via our services provided the conditions established by the AEPD’s decision are met.


What are the key aspects of the authorization from the Spanish data protection authority?

Customers in the Spanish jurisdiction can benefit from the authorization as long as the international transfer of personal data remains in the scope of the authorization. You can read the full authorization here. Customers will still be required to notify the AEPD and may need to comply with additional legal requirements. Please consult a lawyer to obtain legal advice specifically applicable to your business circumstances.


How can customers make use of Google’s authorization?

Customers must sign a contract. The contractual arrangements shall include the Data Processing Amendment (DPA) for G Suite / Data Processing and Security Terms (DPST) for GCP and the EU Model Contractual Clauses (MCCs). Our customers can enter into the relevant model contract clauses via the online processes described here for G Suite services and here for GCP services.


Source: Google Cloud


Schlumberger chooses GCP to deliver new oil and gas technology platform

Google Cloud has a simple but steadfast mission: Give companies technology for new and better ways to serve their customers. We handle the network, computing and security chores; you use our software-defined infrastructure, global databases and artificial intelligence to grow your business with speed and at scale.

A great example of this work is our collaboration with Schlumberger, which has selected Google Cloud as its strategic provider for its clients’ digital journey to the cloud.

For over 90 years, Schlumberger has worked with clients in the oil and gas industry. In this work, Schlumberger generates and uses large amounts of data to safely and efficiently manage hydrocarbon exploration and production. Schlumberger has developed a unique software environment that runs on GCP called DELFI*, a cognitive exploration and production (E&P) environment, announced at the SIS Global Forum, that spans from exploration to production. Customers can combine DELFI with their own proprietary data and science for new insights and faster results.

Today at the Schlumberger customer event SIS Global Forum, I talked about the new ways Google Cloud and Schlumberger are working together. This unique, multi-year collaboration encompasses a range of technologies:

  • Big data: Schlumberger launched the DELFI cognitive E&P environment and the deployment of an E&P Data Lake based on Google BigQuery, Cloud Spanner and Cloud Datastore, with more than 100 million data items comprising over 30 TB of petrotechnical data.

  • Software platforms: Schlumberger announced the launch of its petrotechnical software platforms such as Petrel* E&P and INTERSECT*, running on Google Cloud Platform and integrated into DELFI.

  • High performance computing: Since announcing our relationship at Google Cloud Next, we’ve worked together to optimize the Schlumberger Omega* geophysical data processing platform to run at a scale not possible in traditional data center environments. Using NVIDIA GPUs and Custom Machine Types on Google Cloud, Schlumberger has deployed compute capacity of over 35 petaflops and 10PB of storage on GCP.

  • Artificial intelligence: Schlumberger leverages TensorFlow for complex petrotechnical interpretation of seismic and wellbore data, as well as automation of well-log quality control and 3D seismic interpretation.

  • Extensibility: Schlumberger adopted the Apigee API management platform to provide openness and extensibility for its clients and for partners to add their own intellectual property and workflows in DELFI.

“To improve productivity and performance, DELFI enables our customers to take advantage of our E&P domain science and knowledge, while at the same time fully using disruptive digital technologies from Google Cloud,” said Ashok Belani, Executive VP of Technology, Schlumberger. “This approach ensures that all data is considered when making critical decisions.”

By running on GCP, Schlumberger’s customers can supercharge their applications, whether it’s training machine learning models on our infrastructure, or easier software development and deployment via Kubernetes and containers. We’re also building upon new collaborations with other companies like Nutanix to give Schlumberger the flexibility to run its applications wherever they need to be—on-premises and in the cloud.

Our collaboration with Schlumberger is just the beginning. We’re thrilled the team has chosen Google Cloud to help deliver security, accessibility and innovation through their next generation energy exploration and production technology.  

*Mark of Schlumberger

Source: Google Cloud


Box: Bringing image recognition and OCR to cloud content management

Editor’s note: In this guest editorial, Ben Kus, Box’s Senior Director of Product Management, tells us how Box used Google Cloud Vision to add a new level of image recognition to its product.

Images are the second most common and fastest-growing type of file stored in Box. Trust us: that’s a lot of images.

Ranging from marketing assets to product photos to completed forms captured on a mobile device, these images are relevant to business processes and contain a ton of critical information. And yet, despite the wealth of value in these files, the methods that organizations use to identify, classify and tag images are still mostly manual.

Personal services like Google Photos, on the other hand, have gone far beyond simply storing images. These services intelligently organize photos, making them easier to discover. They also automatically recognize images, producing a list of relevant photos when users search for specific keywords. As we looked at this technology, we thought, "Why can't we bring it to the enterprise?"

The idea was simple: find a way to help our customers get more value from the images they store in Box. We wanted to make image files as easy to find and search through as text documents. We needed the technology to provide high-quality image labeling, be cost-effective and scale to the massive amount of image files stored in Box. We also needed it to handle thousands of image uploads per second and had to ensure that users actually found the image recognition useful. But we didn't want to build a team of machine learning experts to develop yet another image analysis technology—that just wasn't the best use of our resources.

That's where Google Cloud Vision came in. The image analysis results were high-quality, the pay-as-you-go pricing model enabled us to get something to market quickly without an upfront cost (aside from engineering resources), and we trusted that the service backed by Google expertise could seamlessly scale to support our needs. And, since many of the image files in Box contain text—such as licenses, forms and contracts—Cloud Vision’s optical character recognition (OCR) was a huge bonus. It could even recognize handwriting!

Using Google Cloud Vision was straightforward. The API accepts an image file, analyzes the image's content and extracts any printed words, and then returns labels and recognized characters in a JSON response. Google Cloud Vision classifies the image into categories based on similar images, analyzes the content based on the type of analysis provided in the developer's request, and returns the results and a score of confidence in its analysis.
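
Box’s production integration isn’t published in detail, but the request/response flow described above can be sketched with the google-cloud-vision Java client (a higher-level alternative to the API client library mentioned below). The local file path here is just a stand-in for an image uploaded to Box.

    import com.google.cloud.vision.v1.AnnotateImageRequest;
    import com.google.cloud.vision.v1.AnnotateImageResponse;
    import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
    import com.google.cloud.vision.v1.EntityAnnotation;
    import com.google.cloud.vision.v1.Feature;
    import com.google.cloud.vision.v1.Image;
    import com.google.cloud.vision.v1.ImageAnnotatorClient;
    import com.google.protobuf.ByteString;

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Collections;

    public class ImageLabeler {
        public static void main(String[] args) throws Exception {
            // Hypothetical local file standing in for an image uploaded to Box.
            byte[] data = Files.readAllBytes(Paths.get("contract-photo.jpg"));
            Image image = Image.newBuilder().setContent(ByteString.copyFrom(data)).build();

            // Ask for both labels and printed/handwritten text in a single request.
            AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
                .setImage(image)
                .addFeatures(Feature.newBuilder().setType(Feature.Type.LABEL_DETECTION))
                .addFeatures(Feature.newBuilder().setType(Feature.Type.TEXT_DETECTION))
                .build();

            try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
                BatchAnnotateImagesResponse batch =
                    client.batchAnnotateImages(Collections.singletonList(request));
                AnnotateImageResponse response = batch.getResponses(0);

                // Labels describe what is in the image ("document", "vehicle", ...).
                for (EntityAnnotation label : response.getLabelAnnotationsList()) {
                    System.out.printf("label: %s (%.2f)%n", label.getDescription(), label.getScore());
                }
                // The first text annotation holds the full block of recognized text (OCR).
                if (response.getTextAnnotationsCount() > 0) {
                    System.out.println("text: " + response.getTextAnnotations(0).getDescription());
                }
            }
        }
    }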

Photo provided by Box

To securely communicate with Google Cloud Vision, we used the Google API Client Library for Java to establish an HTTPS connection via our proxy server. The simplest way to do this is to modify the JVM's proxy settings (i.e., https.proxyHost and https.proxyPort) and use Java's Authenticator class to provide credentials to the proxy. The downside of this approach is that it affects all of your outgoing connections, which may be undesirable (i.e., if you want other connections to not use the proxy). For this reason, we chose to use the ApacheHttpTransport class instead. It can be configured to use a proxy server only for the connections that it creates. For more information, see this post.
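
Here is a minimal sketch of the two proxy approaches described above, assuming a hypothetical internal proxy host and port; proxy authentication and the rest of the Vision client setup are omitted.

    import com.google.api.client.http.HttpTransport;
    import com.google.api.client.http.apache.ApacheHttpTransport;
    import org.apache.http.HttpHost;

    public class ProxyTransportExample {

        // Approach 1: JVM-wide proxy settings. Every outgoing HTTPS connection in the
        // process will go through the proxy, which may not be what you want.
        static void configureJvmWideProxy() {
            System.setProperty("https.proxyHost", "proxy.internal.example.com"); // hypothetical host
            System.setProperty("https.proxyPort", "3128");
        }

        // Approach 2: an ApacheHttpTransport that routes only its own connections
        // through the proxy; other HTTP clients in the process are unaffected.
        static HttpTransport buildProxiedTransport() {
            HttpHost proxy = new HttpHost("proxy.internal.example.com", 3128); // hypothetical host
            return new ApacheHttpTransport.Builder()
                .setProxy(proxy)
                .build();
        }

        public static void main(String[] args) {
            HttpTransport transport = buildProxiedTransport();
            // This transport can then be passed to the Google API client builder so that
            // only the Cloud Vision traffic uses the proxy.
            System.out.println("Transport ready: " + transport);
        }
    }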

To access Google Cloud Vision, you need credentials—either an API key or a service account. Regardless of which credentials you use, you'll want to keep them secret, so that no one else can use your account (and your money!). For example, do not store your credentials directly in your code or your source tree, do control access to them, do encrypt them at rest, and do cycle them periodically.

So, in order to bring these powerful capabilities to Box, we needed a set of images to send to the API and a destination for the results returned by the API. Now, when an image is uploaded to a folder in Box with the feature enabled—either via the web application or the API—the image is automatically labeled and text is automatically recognized and tagged using metadata. Plus, these metadata and representation values are then indexed for search, which means users can use our web application, a partner integration or even a custom application built on the Box Platform to search for keywords that might be found in their image content. And the search results will appear almost instantly based on Google Cloud Vision’s analysis. Developers can also request the metadata on the image file via the Box API to use elsewhere in an application.

Photo provided by Box

As you can imagine, the ability to automatically classify and label images provides dozens of powerful use cases for Box customers. In our beta, we're working with companies across a number of industries:

  • A retail customer is using image recognition in Box to optimize digital asset management of product photos. With automatic object detection and metadata labels, they can cut out manual tagging and organization of critical images that are central to multi-channel processes.

  • A major media company is using image recognition in Box to automatically tag massive amounts of inbound photos from freelance photographers around the globe. Previously, there was no way they could preview and tag every single image. Now they can automatically analyze more images than ever before, and unlock new ways to derive value from that content.

  • A global real estate firm is leveraging optical character recognition in Box to digitize workflows for paper-based leases and agreements, allowing their employees to skip a manual tagging process while classifying sensitive assets more quickly.

We're excited to continue experimenting with GCP's APIs to help our customers get more out of their content in Box. You can learn more about this from our initial announcement.

Source: Google Cloud