
Calling all superheroes for Web Rangers 2017

It’s time to put your superhero capes on because the Web Rangers contest is back for its third edition and we couldn’t be happier announcing it on Children’s Day. But who is a Web Ranger? If you are a student between the ages of 10 and 17 who knows what it takes to be safe online and helps your friends and family do the same, then you are already a Web Ranger.
The Web Rangers contest is a platform for students to spread the message of internet safety and digital citizenship beyond close friends and family. It’s time to show the world your superpowers, it’s time for Web Rangers 2017!
There are different formats to pick from - choose the category that interests you and put your creativity and skills to the test. The best entries in each format stand to win awesome Chromebooks.

  • Campaign: Why reach a few hundred when you can reach thousands, or maybe millions? Run your own internet safety campaign, either individually or as a team of three. It could be one large initiative or a collection of multiple projects like a social media campaign, a video series, awareness drives or all of them - there is no restriction on the format or the number of initiatives.
  • Project: If you prefer to work individually and have that one amazing idea, then this is for you. The format is completely open - create a video, website, app or a game - but remember, it should empower users to stay safe on the Internet and learn what it takes to be a good digital citizen.
  • Poster: Put your creative hat on - design a poster that captures the theme of Internet safety and send it across to us. It’s as simple as that.

Year after year, the Web Rangers have been surprising us with some incredible entries and we have a hunch that this year is going to be even better. For inspiration and ideas, check out the winning entries from last year and the year before that. To learn more about the rules and format, or to submit your entries, visit this page. The deadline for submitting your entry is 23:59 on January 15, 2018.

Posted by Sunita Mohanty, Director, Trust & Safety

Google Arts & Culture invites you to imagine the archeological artifacts of the future

Consider the question: What object would you like archaeologists 1,000 years from now to remember our present day culture by?



As part of our first Google Arts & Culture Lab experiment in India, we are delighted to collaborate with Mumbai’s Chhatrapati Shivaji Maharaj Vastu Sangrahalaya (CSMVS) and the British Museum in London to unveil Future Relics, an interactive installation that takes participants on a shared journey, connecting past with present whilst looking to the future. Created as part of the landmark exhibition India and the World: A History in Nine Stories, the Future Relics project will blend ancient craft and modern technology to build relics for the future.



Responding directly to India and the World’s exploration of pots as story-telling objects, Future Relics invites audiences to contribute an object that represents our lives today — perhaps an aluminium pressure cooker, a ceiling fan, a mobile phone or an exam paper? Visitors’ contributions will give future generations glimpses into the lives and stories of people who lived in the present day, just as the artefacts of the museum give a glimpse of those who lived and ruled in the first cities of India.



As visitors navigate the museum and stumble upon the Future Relics installation, they will be asked to write, in Hindi, English or Marathi using our Google handwriting tool, an object they would want archaeologists to remember 1,000 years from now. Google Translate will then group similar words together, transcending language and drawing thematic connections across the three languages. Each group of similar words will create a digital vase on which the handwritten words will be printed. Each new, unique word will birth a new vase. As vases appear and grow, a live visualisation will create a growing landscape of vases.
From thousands of contributions, a collection of artifacts will be 3D printed using clay, and gifted to the museum as a relic for future generations to uncover.



Conceived as a new attraction for the landmark exhibition, which also commemorates 70 years of Indian Independence, the show invites visitors to explore connections and comparisons between India and the rest of the world, covering a period of over a million years through 200 historical artifacts from more than 25 institutions. The Indian objects within each section are positioned in a global context, helping visitors explore these connections.



We hope that visitors will enjoy interacting with Future Relics and becoming a part of a time capsule, as we craft new bridges between tech and culture whilst giving audiences a new perspective of archeology in the digital age. All art lovers and cultural enthusiasts can discover the unseen cultural treasures and the first-of-its-kind interactive installation from November 11th at Mumbai’s Chhatrapati Shivaji Maharaj Vastu Sangrahalaya.



There is more coming up, so stay tuned!



By Freya Murray, Program Manager and Creative Lead, Google Cultural Institute Lab

Resonance Audio: Multi-platform spatial audio at scale

Posted by Eric Mauskopf, Product Manager

As humans, we rely on sound to guide us through our environment, help us communicate with others and connect us with what's happening around us. Whether walking along a busy city street or attending a packed music concert, we're able to hear hundreds of sounds coming from different directions. So when it comes to AR, VR, games and even 360 video, you need rich sound to create an engaging immersive experience that makes you feel like you're really there. Today, we're releasing a new spatial audio software development kit (SDK) called Resonance Audio. It's based on technology from Google's VR Audio SDK, and it works at scale across mobile and desktop platforms.

Experience spatial audio in our Audio Factory VR app for Daydream and SteamVR

Performance that scales on mobile and desktop

Bringing rich, dynamic audio environments into your VR, AR, gaming, or video experiences without affecting performance can be challenging. There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments. The SDK uses highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality, even on mobile. We're also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback.

Using geometry-based reverb by assigning acoustic materials to a cathedral in Unity
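For intuition about what Ambisonic spatialization does, here is a sketch of first-order Ambisonic (B-format) encoding, the scheme that higher-order Ambisonics generalizes. This is an illustration of the standard encoding equations, not the SDK's internal implementation:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode a mono sample into first-order B-format (W, X, Y, Z).

    azimuth/elevation are in radians. W carries the omnidirectional
    pressure component (with the traditional -3 dB weighting), while
    X/Y/Z carry figure-of-eight directional components.
    """
    w = sample * (1 / math.sqrt(2))
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return (w, x, y, z)

# A source directly in front (azimuth 0, elevation 0) excites only W and X.
front = encode_first_order(1.0, 0.0, 0.0)
```

Decoding these channels for headphones or a speaker array is where the heavy, optimized DSP in the SDK comes in; the encoding step itself is this cheap, which is why many sources can share one soundfield.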

Multi-platform support for developers and sound designers

We know how important it is that audio solutions integrate seamlessly with your preferred audio middleware and sound design tools. With Resonance Audio, we've released cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAW) to streamline workflows, so you can focus on creating more immersive audio. The SDKs run on Android, iOS, Windows, MacOS and Linux platforms and provide integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs. We also provide native APIs for C/C++, Java, Objective-C and the web. This multi-platform support enables developers to implement sound designs once, and easily deploy their project with consistent sounding results across the top mobile and desktop platforms. Sound designers can save time by using our new DAW plugin for accurately monitoring spatial audio that's destined for YouTube videos or apps developed with Resonance Audio SDKs. Web developers get the open source Resonance Audio Web SDK that works in the top web browsers by using the Web Audio API.

DAW plugin for sound designers to monitor audio destined for YouTube 360 videos or apps developed with the SDK

Cutting-edge features for modeling complex sound environments

By providing powerful tools for accurately modeling complex sound environments, Resonance Audio goes beyond basic 3D spatialization. The SDK enables developers to control the direction acoustic waves propagate from sound sources. For example, when standing behind a guitar player, it can sound quieter than when standing in front. And when facing the direction of the guitar, it can sound louder than when your back is turned.

Controlling sound wave directivity for an acoustic guitar using the SDK
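The directivity behavior described above can be modeled with the classic microphone-pattern family, which blends an omnidirectional term with a dipole term. The parameter names below are our own shorthand for illustration and are not necessarily the SDK's exact API:

```python
import math

def directivity_gain(alpha, sharpness, angle_rad):
    """Microphone-style directivity pattern.

    alpha blends omni and dipole terms: 1.0 -> omnidirectional,
    0.5 -> cardioid, 0.0 -> figure-eight. sharpness narrows the lobe.
    angle_rad is the angle between the source's forward axis and the
    listener.
    """
    return abs(alpha + (1 - alpha) * math.cos(angle_rad)) ** sharpness

# A cardioid source (like the guitar example) is at full level in
# front and silent directly behind.
front = directivity_gain(0.5, 1.0, 0.0)        # 1.0
behind = directivity_gain(0.5, 1.0, math.pi)   # 0.0
```

This matches the guitar example in the text: standing behind the player (angle near π) yields less gain than standing in front.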

Another SDK feature is automatically rendering near-field effects when sound sources get close to a listener's head, providing an accurate perception of distance, even when sources are close to the ear. The SDK also enables sound source spread, by specifying the width of the source, allowing sound to be simulated from a tiny point in space up to a wall of sound. We've also released an Ambisonic recording tool to spatially capture your sound design directly within Unity, save it to a file, and use it anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos.
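Distance rolloff, which the near-field rendering above refines at close range, is commonly modeled as a clamped inverse-distance gain. This is a generic illustration of that idea, not the SDK's exact curve:

```python
def distance_gain(distance_m, min_distance=0.5, max_distance=50.0):
    """Clamped inverse-distance rolloff.

    Full gain at or inside min_distance (so very near sources don't
    blow up), 1/d falloff in between, and flat beyond max_distance.
    """
    d = max(min_distance, min(distance_m, max_distance))
    return min_distance / d

# Gain halves each time distance doubles beyond the minimum.
near = distance_gain(0.5)   # 1.0
mid = distance_gain(1.0)    # 0.5
far = distance_gain(100.0)  # clamped at max_distance -> 0.01
```

Near-field effects then add interaural level and filtering cues on top of this gain when d approaches head size, which is what makes a whisper at the ear feel genuinely close.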

If you're interested in creating rich, immersive soundscapes using cutting-edge spatial audio technology, check out the Resonance Audio documentation on our developer site, let us know what you think through GitHub, and show us what you build with #ResonanceAudio on social media; we'll be resharing our favorites.

Google’s campus roadshow to inspire young aspiring mobile developers

Technology is now deeply embedded in our daily lives. Every day, new startups are founded to create new business models and change the way consumers interact with and use various business services. Intelligent automation and mobile-based solutions are impacting every industry around us. In these changing times, the current learning ecosystem in India can feel limiting for students who are passionate about learning new technologies and enhancing their skills in line with industry expectations.


While India has long maintained its position as a global base of tech talent, there is a growing need to invest further in skilling India’s large base of young students who are keen to learn newer technologies. In line with our objective to train two million developers in India on the latest mobile technologies, we’re excited to announce Mobile Developer Fest, our new campus initiative to inspire thousands of young, aspiring technology developers to kickstart their skilling journey with affordable, world-class skilling and education programs.


Mobile Developer Fest (MDF) is designed to be a day-long event for computer science and engineering students, offering them an opportunity to attend tech sessions across multiple product areas like Machine Learning, Firebase, Android and Progressive Web Apps. Students can also participate in hands-on code labs sessions, and learn directly from Google certified developers. The event will also provide an opportunity for students to become part of Google Developer Student Clubs and University Innovation Fellows.


Starting with Bangalore at The CMR Institute of Technology, we will hold multiple MDFs in leading engineering colleges across 12 states in India. MDFs are open events and computer science and engineering students are eligible to register.


The learning curve doesn’t end at these events. Students will also have access to:


Register your interest for the upcoming MDF near you here: https://events.withgoogle.com/mdf
Can’t make it to the event? Join the conversation by following us on Twitter and subscribing to our YouTube channel.

Posted by: William Florance, Head of Google Developer Training and Social Impact Programs

Five observations from my time at YouTube

Earlier this year, I was asked by Google (because they know I am pre "Sucker M.C.") to work on a Doodle celebrating the 44th anniversary of the music that changed my life. The birth of hip-hop was a fusion of expression and technical innovation that forever changed our culture and Google wanted to celebrate the moment when it all came together.

I had one condition on participating: that the project be authentic and not some tech company’s interpretation of a cultural revolution. They couldn't agree more and the collaboration led to an amazing interactive experience that used technology and Google’s reach to celebrate the birth of hip-hop. It showed me that Google and YouTube know how to listen to feedback (in this case, mine), and are willing to work hard to get things right.

I joined Google and YouTube because I saw a great opportunity to bring tech and music together and do right by artists, the industry and fans. Eight months in, I’m more optimistic than ever that YouTube can do that, but the truth is there’s still a disconnect between YouTube and the rest of the industry.

So, how did YouTube get here? What explains the current state of YouTube’s relationship with the industry? I think there are five factors that explain the current situation.

  1. Late to the party. I get why some in the music industry would be skeptical of their relationship with YouTube. They were late to the subscriptions party and YouTube’s focus for many years was largely just on ads. While they have been at subscriptions for a year, and the numbers are very encouraging, YouTube must prove its credibility when it comes to shepherding its funnel of users into paid subscriptions.

    But since I’ve been here, I’ve been incredibly encouraged by what I’ve seen. The team is serious about subscriptions. And now with YouTube Music and Google Play Music merging, I’m confident they will build an even better subscription service. And with more deals like the one YouTube recently signed with Warner, they’re going to be able to take it global.
  2. Twin-engine growth. The success of streaming subscriptions is one reason why I’m so optimistic about the future. Subscription revenue is still in its infancy, yet it’s already reaping billions for the music industry. It’s not just some business model on a whiteboard; it’s a real and rapidly growing source of cash for labels and artists today.

    Some think ads are the death of the music industry. Ads are not death. Death is death. Irrelevance is death. Fans not being exposed to new music is death. My time at YouTube has me convinced that advertising is another powerful source of growth for the industry. YouTube’s ads hustle has already brought over a billion dollars in 12 months to the industry and it’s growing rapidly. Combined with YouTube’s growing subscription service, they’ve now got two engines taking the industry to a more lucrative place than it’s ever been before.

    But that all depends on whether or not the industry chokes off these new sources of growth. I’m old enough to remember what the industry was saying about iTunes and Spotify before they started contributing billions to its bottom line. The growth that the industry is seeing today proves that ads and subscriptions thrive side by side.
  3. Let’s talk dead presidents. It is important that labels, publishers and YouTube come together to make transparency a reality, as I strongly believe it will help everyone in the industry move the business forward.

    Artists and songwriters need to truly understand what they’re making on different platforms. It’s not enough for YouTube to say that it’s paid over $1 billion to the industry from ads. We (the labels, publishers and YouTube) must shine a light on artist royalties, showing them how much they make from ads compared to subscriptions by geography, and how their revenue in the U.S. compares to other services.

    For instance, critics complain YouTube isn’t paying enough money for ad-supported streams compared to Spotify or Pandora. I was one of them! Then I got here and looked at the numbers myself. At over $3 per thousand streams in the U.S., YouTube is paying out more than other ad supported services.

    Why doesn’t anyone know that? Because YouTube is global and the numbers get diluted by lower contributions in developing markets. But they’re working the ads hustle like crazy so payouts can ramp up quickly all around the world. If they can do that, this industry could double in the next few years.
  4. Fortune AND fame. Every day for the last 30 years, I’ve woken up with the same thought: maybe today’s the day I’m going to meet an artist that’s going to change pop culture. I love watching when an artist goes from obscurity to celebrity. That’s my drug.

    Every artist I’ve ever worked with wanted some fame and fortune. YouTube will deliver fortune … but I think they need to be just as focused on bringing the fame. YouTube is already a great force for breaking new artists; in fact, the majority of music watchtime on YouTube is coming from its recommendations, rather than people searching for what they want to listen to. But YouTube needs to find new ways to promote and break artists and their albums so they have a chance to shine on the platform and connect with their fans. This is one of my biggest priorities and you’ll see more coming soon.
  5. Without safe harbor, we’d all be lost at sea. I’ve spent my professional life fighting for artists to get what they deserve. I’ve worked with the RIAA and the IFPI to fight piracy since back when the main concern was bootlegged tapes. Safe harbor has become an obsession -- with many complaining it’s the cause of all of the industry’s woes. I’m not parroting the company line when I say the focus on copyright safe harbors is a distraction. Safe harbor helps open platforms like YouTube, Facebook, Soundcloud and Instagram give a voice to millions of artists around the world, making the industry more competitive and vibrant.

    Every artist should be concerned if their music shows up online without credit or payment. But YouTube’s team has built a system in Content ID that helps rightsholders earn money no matter who uploads their music. As of 2016, 99.5 percent of music claims on YouTube are matched automatically by Content ID and are either removed or monetized.

    Before Content ID, when a fan shared a song with a friend through a mix tape, it was called piracy. Now it's generated over $2 billion for content owners and goes far beyond what the safe harbor provision requires.

One of the first jobs I ever had in the music business was working as a road manager for Run DMC. Doing that taught me a lesson that has formed the core of what I’ve tried to do my entire career: set things up well so that the artists and fans can come together and make magic happen. I’ve spent my entire life helping artists achieve fame and fortune. I wouldn’t have joined YouTube if I didn’t believe the company was committed to delivering more revenue to artists, labels, publishers and composers -- they just have to set them up well and get out of their way.

With love and respect,

Lyor Cohen

Lyor recently watched “Brothers Gonna Work It Out”

Source: YouTube Blog


Google Developer Days are coming to Europe

Posted by Jason Titus, Vice President, Developer Product Group

I'm happy to share that we've opened registration for the European installment of our global event series — Google Developer Days (GDD). Google Developer Days showcase our latest developer products and platform updates to help you develop high-quality apps, grow & retain an active user base, and tap into tools to earn more.

Google Developer Days — Europe (GDD Europe) will take place on September 5-6 2017, in Krakow, Poland. We'll feature technical talks on a range of products including Android, the Mobile Web, Firebase, Cloud, Machine Learning, and IoT. In addition, we'll offer opportunities for you to join hands-on training sessions, and 1:1 time with Googlers and members of our Google Developers Experts community. We're looking forward to meeting you face-to-face so we can better understand your needs and improve our offerings for you.

If you're interested in joining us at GDD Europe, registration is now open.

Can't make it to Krakow? We've got you covered. All talks will be livestreamed on the Google Developers YouTube channel, and session recordings will be available there after the event. Looking to tune into the action with developers in your own neighborhood? Consider joining a GDD Extended event or organizing one for your local developer community.

Whether you're planning to join us in-person or remotely, stay up-to-date on the latest announcements using #GDDEurope on Twitter, Facebook, and Google+.

We're looking forward to seeing you in Europe soon!

AdMob is heading to Google I/O 2017

Google I/O 2017 is one week away (May 17-19th), and we’ll be there. Google I/O brings together developers from around the globe for an immersive experience focused on exploring the next generation of tech.

This year there will be dozens of talks discussing important topics that matter to you like design & development, growing your business, the latest in mobile tech, and more. We’ll also give you a look into “what’s next” for AdMob.

Be sure to catch the keynotes to be the first to hear about the latest AdMob innovations:

Wednesday, May 17th

Google Keynote
10:00 AM PDT, 5:00 PM GMT
Join Google CEO, Sundar Pichai, as he gives a “first look” into all the latest and greatest technology innovations at Google.
Watch the livestream | Add to Calendar

Growth & Monetization Keynote
3:00 PM PDT, 10:00 PM GMT
Hear from Sridhar Ramaswamy, SVP of Ads & Commerce, on how new ads and monetization innovations can help you build a customer-centric business.
Watch the livestream | Add to Calendar

Here are some of the other key sessions to check out at I/O 2017:

Wednesday, May 17th

Analytics with Firebase: Overview and Updates

4:00 PM PDT, 11:00 PM GMT

Analytics is at the core of your ability to build great apps, grow your user base and earn more money. In this session we will show you what's new with Firebase and how we are building simpler and more powerful reporting that gives you real-time insights into what is happening in your app.

Watch the livestream | Add to Calendar

Thursday, May 18th

Build Great Monetization Experiences with the ALL NEW AdMob 

4:30 PM PDT, 11:30 PM GMT

Successful developers use a combination of payments, ads, and sophisticated analytics to earn more from their apps. In this session we will show you how AdMob has strengthened its platform to give you a more holistic picture of those revenue sources with deeper insights and a more intuitive user experience.

Watch the livestream | Add to Calendar

Friday, May 19th

AdMob and Firebase: Better Together 

8:30 AM PDT, 3:30 PM GMT

Come learn how AdMob and Firebase seamlessly work together to help you optimize and generate more advertising revenue in your app. This session dives into how you can use AdMob and Firebase to understand how ads impact user experience, how different audiences interact with ads, and how to think about lifetime value.

Watch the livestream | Add to Calendar

Find your Apps’ Best Users with Google’s Machine Learning 

10:30 AM PDT, 5:30 PM GMT

In this session we will show you the data you need, the best mathematical models for calculating lifetime value (LTV), and how machine learning is the missing link that converts LTV into actual high value users for your app.

Watch the livestream | Add to Calendar

You can check out the complete Google I/O 2017 agenda here.

If you can’t attend in person, visit an I/O Extended event near you! We’ll be live tweeting and sharing posts from the event on our Twitter, LinkedIn and Google+ channels, using the hashtag #io17. We hope to see you there!

Posted by: Duke Dukellis, Product Manager, AdMob

Source: Inside AdMob


Start planning your Google I/O 2017 schedule!

Posted by Christopher Katsaros, Product Marketing Manager

Whether you're joining us in person or remotely, we're looking forward to connecting with you at Google I/O, on May 17-19. It's the best way to learn about building apps for the Google Assistant, how to go from Zero to App with Firebase, all of the goodies inside Android O, and much more!

Over 150 Technical Sessions, Livestreamed

The show kicks off at 10AM PDT on Wednesday, May 17 with the Google Keynote, an opportunity to hear about the latest product and platform innovations from Google, helping connect you to billions of users around the world. After that, we'll be diving into all of the ways developers can take advantage of this newness in a Developer Keynote at 1PM PDT. From there, the 14 tracks at Google I/O kick off, with over 150 technical sessions livestreamed (i.e. all of them!) at google.com/io.

We've just published more talks on the I/O website, so you can start planning your custom schedule ahead of the conference (shhh! we've got a few more sessions up our sleeve, so don't forget to check back directly after the Developer Keynote).

You can also take advantage of Codelabs - self-paced tutorials on a number of technical topics to get you up and running with a Google product or feature. These Codelabs will be available both to those who are joining us in person at Shoreline, and online for those of you tuning in from around the world. More details will be available on the schedule soon.

Joining in person?

We received a lot of great feedback from attendees last year, and have been working hard since then to make sure this is the best Google I/O yet. To help make it easier to attend your favorite talks and minimize lines, you'll be able to reserve seats across sessions before I/O starts. But don't worry, we're saving a few seats in each session that will be available on a first-come, first-served basis onsite. We've also increased the size of each of the tents this year, giving you more opportunities to see all of your favorite talks in person.

Finally, we've doubled the number of Office Hours available, since you told us that being able to connect directly with Googlers to get your questions answered was extremely valuable. On top of that, all of the sandbox demo areas will be inside climate-controlled structures, making it easier to avoid the elements (but don't forget to bring your layers – Shoreline Amphitheatre is still an outdoor venue, after all).

See you in 3 weeks!

We're looking forward to seeing you in just a few weeks. We've got a few more updates to share before then; be sure to check out the Google I/O website for more details, or follow the conversation using the #io17 hashtag.


Discover more of the things you’re into with Topics on Google+

(Cross-posted from the Google+ Keyword blog)

By Anna Kiyantseva, Product Manager, Google+

Millions of people use Google+ to connect around the things they’re interested in. To help you sort through the many Collections and Communities where people share, we’ve created a new feature called Topics. With Topics, you’ll see a high-quality stream of Collections, Communities and people related to things we think you’ll be interested in.


Today, there are already hundreds of Topics available in English, Spanish and Portuguese, covering everything from black-and-white photography to hiking and camping. So whether you’ve recently discovered the wonders of woodworking, love gardening, or can’t get enough of street photography, there’s a stream of unique and interesting stuff waiting for you on Google+.

To see the recommended Topics, head to your home stream and look for the “Topics to explore” cards. Topics will be rolling out over the next day or so, so don’t worry if you don’t see any suggestions right away.

Hope you enjoy it!


Launch Details
Release track:  
Launching to both Rapid release and Scheduled release

Editions:
Available to all G Suite editions

Rollout pace: 
Full rollout (1-3 days for feature visibility)

Impact: 
All end users

Action:
Change management suggested/FYI

More Information
Google+ Keyword Blog



Introducing the Google Assistant SDK

Posted by Chris Ramsdale, Product Manager

When we first announced the Google Assistant, we talked about helping users get things done no matter what device they're using. We started with Google Allo, Google Home and Pixel phones, and expanded the Assistant ecosystem to include Android Wear and Android phones running Marshmallow and Nougat over the last few months. We also announced that Android Auto and Android TV will get support soon.

Today, we're taking another step towards building out that ecosystem by introducing the developer preview of the Google Assistant SDK. With this SDK you can now start building your own hardware prototypes that include the Google Assistant, like a self-built robot or a voice-enabled smart mirror. This allows you to interact with the Google Assistant from any platform.

The Google Assistant SDK includes a gRPC API, an open source Python client that handles authentication and access to the API, samples and documentation. The SDK allows you to capture a spoken query, for example "what's on my calendar", pass it up to the Google Assistant service and receive an audio response. And while it's ideal for prototyping on Raspberry Pi devices, it also works on many other platforms.
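The gRPC API is a bidirectional stream: the client's first message carries only configuration (audio encoding, sample rate), and every later message carries a chunk of recorded audio. The sketch below illustrates that message ordering with plain Python dictionaries; the names (`request_stream`, `"config"`, `"audio_in"`) are simplified stand-ins for illustration, not the SDK's actual protobuf types:

```python
def request_stream(config, audio_chunks):
    """Yield messages in the order a streaming Assistant client sends them.

    The first message carries only the configuration; each subsequent
    message carries one chunk of captured audio until end of utterance.
    """
    yield {"config": config}
    for chunk in audio_chunks:
        yield {"audio_in": chunk}

config = {"encoding": "LINEAR16", "sample_rate_hz": 16000}
requests = list(request_stream(config, [b"\x00\x01", b"\x02\x03"]))
# requests[0] is the config message; the rest are audio chunks.
```

In a real client the generator would be handed to the gRPC stub, which streams the messages up and yields audio responses back; the config-first ordering is what lets the service start decoding audio as soon as chunks arrive.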

To get started, visit the Google Assistant SDK website for developers, download the SDK, and start building. In addition, Wayne Piekarski from our Developer Relations team has a video introducing the Google Assistant SDK, below.


And for some more inspiration, try our samples or check out an example implementation by Deeplocal, an innovation studio out of Pittsburgh that took the Google Assistant SDK for a spin and built a fun mocktails mixer. You can even build one for yourself: go here to learn more and read their documentation on GitHub. Or check out the video below on how they built their demo from scratch.


This is a developer preview and we have a number of features in development, including hotword support, companion app integration and more. If you're interested in building a commercial product with the Google Assistant, we encourage you to reach out and contact us. We've created a new developer community on Google+ at g.co/assistantsdkdev for developers to keep up to date and discuss ideas. There is also a Stack Overflow tag [google-assistant-sdk] for questions, and a mailing list to keep up to date on SDK news. We look forward to seeing what you create with the Google Assistant SDK!