Update Google Calendar resources using the Calendar Resource APIs

We recently introduced the new Google Calendar experience on the web, including the ability to add more structured data about your buildings and resources. We’re now making it easier to add and edit that information with updates to the existing Calendar Resources API, as well as adding two new APIs: Buildings and Features.

G Suite admins can also use these APIs to keep resource and building information in Google Calendar up to date and in sync with other systems used for facility management.

For more information on the Calendar Resources APIs, check out the API documentation and Help Center links below.
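As a sketch of what keeping building data in sync might look like, the snippet below assembles a Buildings resource body and shows (commented out) how it could be inserted with the Admin SDK Directory API via google-api-python-client. The building ID, name, floor names, and the `creds` variable are hypothetical examples, not values from this announcement.

```python
# Sketch: adding a building via the Admin SDK Directory API.
# All building details below are hypothetical examples.

def make_building_body(building_id, name, floor_names, description=""):
    """Assemble the request body for a Buildings resource."""
    return {
        "buildingId": building_id,
        "buildingName": name,
        "floorNames": floor_names,
        "description": description,
    }

body = make_building_body("hq-west", "HQ West", ["B1", "1", "2", "3"],
                          description="Main campus, west building")

# With google-api-python-client installed and admin-authorized
# credentials in `creds`, the insert call would look like:
# from googleapiclient.discovery import build
# service = build("admin", "directory_v1", credentials=creds)
# service.resources().buildings().insert(
#     customer="my_customer", body=body).execute()
```

A facilities-management sync job could build one such body per building record and issue insert or update calls as records change.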

Launch Details
Release track:
Launching to both Rapid Release and Scheduled Release

Editions:
Available to all G Suite editions

Rollout pace:
Full rollout (1–3 days for feature visibility)

Impact:
Admins only

Action:
Admin action suggested/FYI

More Information
Help Center: Create buildings, features, and resources
The Keyword: Time for a refresh: meet the new Google Calendar for web
G Suite Updates: Introducing the new Calendar Resource API
G Suite Admin SDK > Directory API: Resources.calendars
G Suite Admin SDK > Directory API: Resources.features
G Suite Admin SDK > Directory API: Resources.buildings


Solution: Integrating on-premises storage with Google Cloud using an Avere vFXT



Running compute workloads on the cloud can be a powerful way to leverage the massive resources available on Google Cloud Platform (GCP).

Some workloads, such as 3D rendering or HPC simulations, rely on large datasets to complete individual tasks. This isn't a problem when running jobs on-premises, but how do you synchronize tens, or even hundreds of gigabytes of data with cloud storage in order to run the same workload on the cloud?

Even if your Network Attached Storage (NAS) is situated close enough to a Google Cloud datacenter to mount directly, you can quickly saturate your internet connection when hundreds (or even thousands) of virtual machines attempt to read the same data at the same time from your on-premises NAS.

You could implement a synchronization strategy to ensure files exist both on-premises and in the cloud, but managing data concurrency, storage and resources on your own can be challenging, as would be modifying your existing pipeline to perform these actions.

The Avere vFXT is a virtual appliance that provides a solution for such workloads. The vFXT is a cluster of virtual machines that serves as both read-through cache and POSIX-compliant storage. When you mount the vFXT on your cloud instances, your entire on-premises file structure is represented in the cloud. When files are read on the cloud, they're read from your on-premises NAS, across your secure connection, and onto the vFXT. If a file already exists in the vFXT's cache, it's compared with the on-premises version. If the files on both the cache and on-premises are identical, the file is not re-read, which can save you bandwidth and time to first byte.

As cloud instances are deployed, they mount the vFXT as they would any other filesystem (either NFS or SMB). The data is available when they need it, and your connection is spared oversaturation.
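The validate-on-hit behavior described above can be modeled with a toy read-through cache. This is an illustration of the idea only, not Avere's implementation; a real vFXT validates cached files against the on-premises NAS using file metadata rather than the simple version tags used here.

```python
class ReadThroughCache:
    """Toy model of a read-through cache that re-validates on every hit:
    data is pulled from the origin only when it is missing or stale."""

    def __init__(self, origin):
        self.origin = origin   # path -> (version, data); stands in for the NAS
        self.cache = {}        # path -> (version, data)
        self.data_reads = 0    # full data transfers across the connection

    def read(self, path):
        version, _ = self.origin[path]        # cheap metadata comparison
        cached = self.cache.get(path)
        if cached and cached[0] == version:
            return cached[1]                  # identical: served from cache
        _, data = self.origin[path]           # missing or stale: re-read
        self.data_reads += 1
        self.cache[path] = (version, data)
        return data

nas = {"/scene/tex.exr": (1, b"pixels-v1")}
vfxt = ReadThroughCache(nas)
vfxt.read("/scene/tex.exr")   # first read pulls the data from the NAS
vfxt.read("/scene/tex.exr")   # version unchanged: served from the cache
```

The second read transfers no data, which is exactly how the cache saves bandwidth and time to first byte when thousands of instances request the same files.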
We recently helped Avere put together a partner tutorial that shows how to incorporate an Avere vFXT into your GCP project. It also provides guidance on different ways to connect to Google Cloud, and how to access your vFXT more securely and efficiently.

Check out the tutorial, and let us know what other Google Cloud tools you’d like to learn how to use in your visual effects or HPC pipeline. You can reach me on Twitter at @vfx_agraham.

#NoHacked 3.0: Tips on prevention

Last week on #NoHacked, we shared tips on hack detection and the reasons why you might get hacked. This week we focus on prevention, and here are some tips for you!

  • Be mindful of your sources! Be very careful with free "premium" themes and plugins!

You've probably heard about free premium plugins! If you've ever stumbled upon a site offering, for free, plugins you would normally have to purchase, be very careful. Many hackers lure you in by copying a popular plugin and adding backdoors or malware that allow them to access your site. Read more about a similar case on the Sucuri blog. Additionally, even legitimate, good-quality plugins and themes can become dangerous if:

  • you do not update them as soon as a new version becomes available
  • the developer of the theme or plugin stops updating it, and it becomes outdated over time.
In any case, keeping all of your site's software up to date is essential to keeping hackers out of your website.

  • Botnets in WordPress
    A botnet is a cluster of machines, devices, or websites under the control of a third party, often used to commit malicious acts such as operating spam campaigns, clickbots, or DDoS attacks. It's difficult to detect whether your site has been infected by a botnet because there are often no specific changes to your site. However, your site's reputation, resources, and data are at risk if your site is in a botnet. Learn more about botnets, how to detect them, and how they can affect your site in the Botnets in WordPress and Joomla article.

As usual, if you have any questions, post on our Webmaster Help Forums for help from the friendly community. See you next week!

Beta Channel Update for Chrome OS

The Beta channel has been updated to 64.0.3282.24 (Platform version: 10176.13.1) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. Systems will be receiving updates over the next several days.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).


Kevin Bleicher

Google Chrome

Stable Channel Update for Desktop

The stable channel has been updated to 63.0.3239.108 for Windows, Mac and Linux which will roll out over the coming days/weeks.

Security Fixes and Rewards
Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.

This update includes 2 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.

[$7500][788453] High CVE-2017-15429: UXSS in V8. Reported by Anonymous on 2017-11-24.


We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

As usual, our ongoing internal security work was responsible for a wide range of fixes:

  • [794792] Various fixes from internal audits, fuzzing and other initiatives 
Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.

A list of all changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Krishna Govind
Google Chrome

Beta Channel Update for Desktop

The Chrome team is excited to announce the promotion of Chrome 64 to the beta channel for Windows, Mac and Linux. Chrome 64.0.3282.24 contains our usual under-the-hood performance and stability tweaks, but there are also some cool new features to explore - please head to the Chromium blog to learn more!

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Abdul Syed


Google Chrome

5 ways to improve your hiring process in 2018

Editor’s note: Senior Product Manager Berit Hoffmann leads Hire, a recruiting application Google launched earlier this year. In this post, she shares five ways businesses can improve their hiring process and secure great talent.

With 2018 quickly approaching, businesses are evaluating their hiring needs for the new year.

According to a recent survey of 2,200 hiring managers, 46 percent of U.S. companies need to hire more people but have issues filling open positions with the right candidates. If your company lacks great hiring processes and tools, it can be easy to make sub-optimal hiring decisions, which can have negative repercussions.

We built Hire to help businesses hire the right talent more efficiently, and integrated it with G Suite to help teams collaborate more effectively throughout the process. As your business looks to invest in talent next year, here are five ways to positively impact your hiring outcomes.

1. Define the hiring process for each role.

Take time to define each stage of the hiring process, and think about if and how the process may need to differ. This will help you better tailor your evaluation of each candidate to company expectations, as well as the qualifications of a particular role.

Mobility best practice in connected workspaces: tiered access at Google

Earlier this year, Google reviewed a subset of its own interview data to discover the optimal number of interviews needed in the hiring process to evaluate whether a candidate is right for Google. Statistical analysis showed that four interviews was enough to predict with 86 percent confidence whether someone should be hired. Of course, every company’s hiring process varies according to size, role or industry—some businesses require double that number of interviews, whereas others may only need one interview.

Using Hire to manage your recruiting activities allows you to configure as many hiring process “templates” as you’d like, as well as use different ones for different roles. For example, you might vary the number of interview rounds based on department. Whatever process you define, you can bring all candidate activity and interactions together within Hire. Plus, Hire integrates with G Suite apps, like Gmail and Calendar, to help you coordinate the process.

2. Make jobs discoverable on Google Search.

For many businesses, sourcing candidates is one of the most time-consuming parts of the hiring process, so Google launched Job Search to help employers better showcase job opportunities in Search. Since launch, 60 percent more employers in the United States are showing jobs in Search.

Making your open positions discoverable where people are searching is an important part of attracting the best talent. If you use Hire to post a job, the app automatically formats your public job posting so it is discoverable by job seekers in Google search.

3. Make sure you get timely feedback from interviewers.

The sooner an interviewer provides feedback, the faster your hiring team can reach a decision, which improves the candidate’s experience. To help speed up feedback submissions, some companies like Genius.com use a “silent process” approach. This means interviewers are not allowed to discuss a candidate until they submit written feedback first.

Hire supports this “silent process” approach by hiding other people’s feedback from interviewers until they submit their own. We’ve found that this can incentivize employees to submit feedback faster because they want to see what their colleagues said. 63 percent of Hire interviewers leave feedback within 24 hours of an interview and 75 percent do so within 48 hours.

4. Make sure their feedback is thoughtful, too.

Beyond speedy feedback delivery, it’s perhaps more important to receive quality evaluations. Make sure your interviewers know how to write clear feedback and try to avoid common mistakes such as:

  1. Writing vague statements or summarizing a candidate’s resume.
  2. Restating information from rubrics or questionnaires rather than giving specific examples.
  3. Getting distracted by personality or evaluating attributes unrelated to the job.

One way you can encourage employees to stay focused when they interview a candidate is to assign them a specific topic to cover in the interview. In Hire, topics are included in each interviewer's Google Calendar invitation for easy reference without having to log into the app.

Maintaining a high standard for written feedback helps your team not only make hiring decisions today, but also helps you track candidates for future consideration. Even if you don’t hire someone for a particular role, the person might be a better fit for another position down the road. In Hire, you can find candidates easily with Google’s powerful search technology. Plus, Hire takes past interview feedback into account and ranks previous candidates higher if they’ve had positive feedback.

5. Stop letting internal processes slow you down.

If you don’t manage your hiring process effectively, it can be a huge time sink, especially as employers take longer and longer to hire talent. If your business lags on making a decision, it can mean losing a great candidate.

Implementing a solution like Hire can make it a lot easier for companies to move quickly through the hiring process. Native integrations with the G Suite apps you’re already using can help you cut down on copy-pasting or having to jump between multiple tabs. If you email a candidate in Gmail, it’s automatically synced in Hire so the rest of the hiring team can follow the conversation. And if you need to schedule a multi-slot interview, you can do so easily in Hire which lets you access interviewer availability or even book conference rooms. Since launching in July, we’ve seen the average time between posting a position and hiring a candidate decrease from 128 days to just 21 days (3 weeks!).

Hiring doesn’t have to be hard. Request a demo of Hire to see how you can speed up talent acquisition. Or learn more about how G Suite can help your teams transform the way they work.


Source: Google Cloud


LoWPAN on Android Things

Posted by Dave Smith, Developer Advocate for IoT

Creating robust connections between IoT devices can be difficult. WiFi and Bluetooth are ubiquitous and work well in many scenarios, but suffer limitations when power is constrained or large numbers of devices are required on a single network. In response to this, new communications technologies have arisen to address the power and scalability requirements for IoT.

Low-power Wireless Personal Area Network (LoWPAN) technologies are specifically designed for peer-to-peer usage on constrained battery-powered devices. Devices on the same LoWPAN can communicate with each other using familiar IP networking, allowing developers to use standard application protocols like HTTP and CoAP. The specific LoWPAN technology that we are most excited about is Thread: a secure, fault-tolerant, low-power mesh-networking technology that is quickly becoming an industry standard.

Today we are announcing API support for configuring and managing LoWPAN as a part of Android Things Developer Preview 6.1, including first-class networking support for Thread. By adding an 802.15.4 radio module to one of our developer kits, Android Things devices can communicate directly with other peer devices on a Thread network. These types of low-power connectivity solutions enable Android Things devices to perform edge computing tasks, aggregating data locally from nearby devices to make critical decisions without a constant connection to cloud services. See the LoWPAN API guide for more details on building apps to create and join local mesh networks.

Getting Started

OpenThread makes getting started with LoWPAN on Android Things easy. Choose a supported radio platform, such as the Nordic nRF52840, and download pre-built firmware to enable it as a Network Co-Processor (NCP). Integrate the radio into Android Things using the LoWPAN NCP user driver. You can also expand support to other radio hardware by building your own user drivers. See the LoWPAN user driver API guide for more details.

To get started with DP6.1, use the Android Things Console to download system images and flash existing devices. Then download the LoWPAN sample app to try it out for yourself! LoWPAN isn't the only exciting thing happening in the latest release. See the release notes for the full set of fixes and updates included in DP6.1.

Feedback

Please send us your feedback by filing bug reports and feature requests, as well as asking any questions on Stack Overflow. You can also join Google's IoT Developers Community on Google+, a great resource to get updates and discuss ideas. Also, we have our new hackster.io community, where everyone can share the amazing projects they have built. We look forward to seeing what you build with Android Things!

Improving End-to-End Models For Speech Recognition



Traditional automatic speech recognition (ASR) systems, used for a variety of voice search applications at Google, are comprised of an acoustic model (AM), a pronunciation model (PM) and a language model (LM), all of which are independently trained, and often manually designed, on different datasets [1]. AMs take acoustic features and predict a set of subword units, typically context-dependent or context-independent phonemes. Next, a hand-designed lexicon (the PM) maps a sequence of phonemes produced by the acoustic model to words. Finally, the LM assigns probabilities to word sequences. Training independent components adds complexity and is suboptimal compared to training all components jointly. Over the last several years, end-to-end systems, which attempt to learn these separate components jointly as a single system, have grown in popularity. While these end-to-end models have shown promising results in the literature [2, 3], it is not yet clear if such approaches can improve on current state-of-the-art conventional systems.

Today we are excited to share “State-of-the-art Speech Recognition With Sequence-to-Sequence Models [4],” which describes a new end-to-end model that surpasses the performance of a conventional production system [1]. We show that our end-to-end system achieves a word error rate (WER) of 5.6%, which corresponds to a 16% relative improvement over a strong conventional system which achieves a 6.7% WER. Additionally, the end-to-end model used to output the initial word hypothesis, before any hypothesis rescoring, is 18 times smaller than the conventional model, as it contains no separate LM and PM.
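The 16% relative improvement quoted above follows directly from the two word error rates:

```python
# Relative WER improvement, using the figures quoted in the text.
conventional_wer = 6.7   # % WER, conventional production system
end_to_end_wer = 5.6     # % WER, end-to-end model

relative_improvement = (conventional_wer - end_to_end_wer) / conventional_wer
print(f"{relative_improvement:.0%}")  # prints 16%
```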

Our system builds on the Listen-Attend-Spell (LAS) end-to-end architecture, first presented in [2]. The LAS architecture consists of 3 components. The listener encoder component, which is similar to a standard AM, takes a time-frequency representation of the input speech signal, x, and uses a set of neural network layers to map the input to a higher-level feature representation, henc. The output of the encoder is passed to an attender, which uses henc to learn an alignment between the input features x and the predicted subword units {y1, …, yn}, where each subword is typically a grapheme or wordpiece. Finally, the output of the attention module is passed to the speller (i.e., decoder), similar to an LM, which produces a probability distribution over a set of hypothesized words.
Components of the LAS End-to-End Model.
All components of the LAS model are trained jointly as a single end-to-end neural network, instead of as separate modules like conventional systems, making it much simpler.
Additionally, because the LAS model is fully neural, there is no need for external, manually designed components such as finite state transducers, a lexicon, or text normalization modules. Finally, unlike conventional models, training end-to-end models does not require bootstrapping from decision trees or time alignments generated from a separate system, and can be trained given pairs of text transcripts and the corresponding acoustics.
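The attender's role, turning encoder features into a weighted context vector for the speller, can be illustrated with a toy dot-product attention step. This is a plain-Python illustration of the mechanism only; the actual LAS attender uses learned neural attention layers, and the query and frame values below are made up.

```python
import math

def attend(query, henc):
    """Toy dot-product attention: score each encoder frame against the
    decoder query, normalize the scores with a softmax, and return the
    attention weights plus the weighted context vector."""
    scores = [sum(q * h for q, h in zip(query, frame)) for frame in henc]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(henc[0])
    context = [sum(w * frame[i] for w, frame in zip(weights, henc))
               for i in range(dim)]
    return weights, context

# henc: three encoder frames of dimension 2; query from the speller's state
weights, context = attend([1.0, 0.0],
                          [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
```

The frame most similar to the query receives the largest weight, so the context vector is dominated by the encoder frames that best match the decoder's current state.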

In [4], we introduce a variety of novel structural improvements, including improving the attention vectors passed to the decoder and training with longer subword units (i.e., wordpieces). In addition, we also introduce numerous optimization improvements for training, including the use of minimum word error rate training [5]. These structural and optimization improvements account for the 16% relative improvement over the conventional model.

Another exciting potential application for this research is multi-dialect and multi-lingual systems, where the simplicity of optimizing a single neural network makes such a model very attractive. Here, data for all dialects and languages can be combined to train one network, without the need for a separate AM, PM and LM for each dialect or language. We find that these models work well on 7 English dialects [6] and 9 Indian languages [7], outperforming a model trained separately on each individual language or dialect.

While we are excited by our results, our work is not done. Currently, these models cannot process speech in real time [8, 9], which is a strong requirement for latency-sensitive applications such as voice search. In addition, these models still compare unfavorably to the production system when evaluated on live production data. Furthermore, our end-to-end model is trained on 22,000 audio-text pair utterances, whereas a conventional system is typically trained on significantly larger corpora. Finally, our proposed model is not able to learn proper spellings for rarely used words such as proper nouns, something normally handled with a hand-designed PM. Our ongoing efforts are now focused on addressing these challenges.

Acknowledgements
This work was done as a strong collaborative effort between Google Brain and Speech teams. Contributors include Tara Sainath, Rohit Prabhavalkar, Bo Li, Kanishka Rao, Shankar Kumar, Shubham Toshniwal, Michiel Bacchiani and Johan Schalkwyk from the Speech team; as well as Yonghui Wu, Patrick Nguyen, Zhifeng Chen, Chung-cheng Chiu, Anjuli Kannan, Ron Weiss and Navdeep Jaitly from the Google Brain team. The work is described in more detail in papers [4-11].

References
[1] G. Pundak and T. N. Sainath, “Lower Frame Rate Neural Network Acoustic Models," in Proc. Interspeech, 2016.

[2] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, “Listen, attend and spell,” CoRR, vol. abs/1508.01211, 2015.

[3] R. Prabhavalkar, K. Rao, T. N. Sainath, B. Li, L. Johnson, and N. Jaitly, “A Comparison of Sequence-to-sequence Models for Speech Recognition,” in Proc. Interspeech, 2017.

[4] C.C. Chiu, T.N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R.J. Weiss, K. Rao, K. Gonina, N. Jaitly, B. Li, J. Chorowski and M. Bacchiani, “State-of-the-art Speech Recognition With Sequence-to-Sequence Models,” submitted to ICASSP 2018.

[5] R. Prabhavalkar, T.N. Sainath, Y. Wu, P. Nguyen, Z. Chen, C.C. Chiu and A. Kannan, “Minimum Word Error Rate Training for Attention-based Sequence-to-Sequence Models,” submitted to ICASSP 2018.

[6] B. Li, T.N. Sainath, K. Sim, M. Bacchiani, E. Weinstein, P. Nguyen, Z. Chen, Y. Wu and K. Rao, “Multi-Dialect Speech Recognition With a Single Sequence-to-Sequence Model” submitted to ICASSP 2018.

[7] S. Toshniwal, T.N. Sainath, R.J. Weiss, B. Li, P. Moreno, E. Weinstein and K. Rao, “End-to-End Multilingual Speech Recognition using Encoder-Decoder Models”, submitted to ICASSP 2018.

[8] T.N. Sainath, C.C. Chiu, R. Prabhavalkar, A. Kannan, Y. Wu, P. Nguyen and Z. Chen, “Improving the Performance of Online Neural Transducer Models”, submitted to ICASSP 2018.

[9] D. Lawson*, C.C. Chiu*, G. Tucker*, C. Raffel, K. Swersky, N. Jaitly. “Learning Hard Alignments with Variational Inference”, submitted to ICASSP 2018.

[10] T.N. Sainath, R. Prabhavalkar, S. Kumar, S. Lee, A. Kannan, D. Rybach, V. Schogol, P. Nguyen, B. Li, Y. Wu, Z. Chen and C.C. Chiu, “No Need for a Lexicon? Evaluating the Value of the Pronunciation Lexica in End-to-End Models,” submitted to ICASSP 2018.

[11] A. Kannan, Y. Wu, P. Nguyen, T.N. Sainath, Z. Chen and R. Prabhavalkar. “An Analysis of Incorporating an External Language Model into a Sequence-to-Sequence Model,” submitted to ICASSP 2018.