Google Developer Group Spotlight: A conversation with GDG Juba Lead, Kose

Posted by Aniedi Udo-Obong, Sub-Saharan Africa Regional Lead, Google Developer Groups


The Google Developer Groups Spotlight series interviews inspiring leaders of community meetup groups around the world. Our goal is to learn more about what developers are working on, how they’ve grown their skills with the Google Developer Group community, and what tips they might have for us all.

We recently spoke with Kose, community lead for Google Developer Groups Juba in South Sudan. Check out our conversation with Kose about building a GDG chapter, the future of GDG Juba, and the importance of growing the tech community in South Sudan.

Tell us a little about yourself.

I’m a village-grown software developer and the community lead of GDG Juba. I work with the JavaScript stack, with a focus on the backend. Learning through community has always been part of my journey, even before joining GDG Juba. I love tech volunteerism and building a community around me and beyond. I attended many local developer meetups and learned a lot, which led to my involvement with GDG Juba.

I am currently helping grow the GDG Juba community in South Sudan, and previously volunteered as a mentor in the Google Africa Developer Scholarship 2020.

Why did you want to get involved in tech?

I hail from a remote village in South Sudan with little to no access to technology. My interest in tech has largely been driven by an enthusiasm to build things and to solve farming, agricultural economics, and social issues using technology.

I am currently researching and working on a farmers connection network to help transform our agricultural economics.

What is unique about pursuing a career as a developer in South Sudan?

When it comes to technology, South Sudan is relatively behind compared to our neighbors and beyond. Some challenges include the lack of support, resources, and mentorship among the few technology aspirants. Electricity and internet bills are so costly that, without determination, a hustler won’t sacrifice a day’s earnings to explore and learn the tech spectrum.

At the same time, there are a lot of areas technology developers can dive into. Finance, hospitality, agriculture, transportation, and content creation are all viable fields. As a determined techie, I tasked myself with allocating 10% of everything I do and earn to learning and exploring technology. This helped me to have some time, money, and resources for my tech journey. As for mentorship, I’m building a global network of resourceful folks to help me venture into new areas of the tech sector.

How did you become a GDG lead?

I’ve always been the person who joined tech events as often as I could find registration links. In my college days, I would skip classes to attend events hours away. I hardly ever missed Python Hyderabad, PyCons, and many Android meetups. It was during the International Women's Day (IWD) 2018 event organized by WTM Hyderabad and GDG Hyderabad that I was lucky enough to give a short challenge pitch talk. The conference attendees were excited and amazed, given that I was the only African in the huge Tech Mahindra conference hall. I met a lot of people: organizers, business personalities, and students.


Kose takes the stage for International Women's Day (IWD) 2018

At the end of the conference and subsequent events, I convinced myself to start a similar community. Since starting out with a WhatsApp group chat, we’ve grown to about 200 members on our GDG event platform and gained event partners like Koneta Hub. Since then, GDG Juba has been helping grow the tech community around Juba, South Sudan.

How has the GDG community helped you grow in the tech industry?

From design thinking to public speaking to structuring technical meetups, the GDG community has been an invaluable resource for organizing GDG Juba meetups and sharpening my organizational skills.

As a community lead, I continuously plan more impactful events and conferences and network with potential event partners, speakers, mentors, and organizers. Being part of the GDG community has given me opportunities to share knowledge with others. In one instance, I became a mobile web mentor and judge for the Google Africa Developer Scholarship 2020 program.

What has been the most inspiring part of being a part of your local Google Developer Group?

As a tech aspirant, I had always wanted to be part of a tech community to learn, network, and grow. Unfortunately, back then there wasn't a single tech user group in my locality. The most inspiring thing about being part of this chapter is the network I’ve built and what I’ve learned from the community. I now network with people I could never have connected with on a typical day.


Kose at a GDG Juba meetup

A lot of our meetup attendees now share their knowledge and experiences to inspire others. We are seeing the community engage more with technology. Students tell us they are learning things they hardly get in their college curriculum.

As a learner myself, I am very excited to see folks learn new tech skills and am also happy to see women participating in the tech events. I’m especially proud of the fact that we organized International Women's Day (IWD) 2021, making it possible for us to be featured in a famous local newspaper outlet.

What are some technical resources you have found the most helpful for your professional development?

The official documentation from Google Developers for Android, Firebase, and other products has been, and still is, helpful for understanding and diving into the details of the new things I learn.

In addition to the cool resources from the awesome tech bloggers on the internet, these links are helping me a lot in my adventure:

  1. Google Developers Medium articles
  2. Android Developers Training courses
  3. Udacity Android/Firebase courses
  4. GitHub code review
  5. Google Developers India YouTube channel

What is coming up for GDG Juba that you are most excited about?

Following the Android Study Jam we conducted earlier this year, we are planning to host a mentorship program for Android application development. The program will take participants from scratch to a fully fledged, deployable Android app that the community can use for daily activities. I am particularly excited that we will have a mentor who has been in the industry for quite a long time. I hope to see people who read this article participating in the mentorship program, too!

What would be one piece of advice you have for someone looking to learn more about a specific technology?

Be a learner. Join groups that can mentor your learning journey.

Ready to learn new skills with developers like Kose? Find a Google Developer Group near you, here.

Finding belonging in Google’s Aboriginal and Indigenous Network

In 2013, the Google Earth Outreach team reached out to me with a request. They had been invited to partner on a mapping project in western Canada and were looking for a Googler who could contribute an Indigenous perspective on cultural protocol. They asked if I would be interested in helping. “Absolutely!” was my immediate response. I’m Kanien'kehá:ka (Mohawk), and there have been times in my life and my workplace when it felt like there wasn’t space for me to be Indigenous. This was a great opportunity to lean in. There was also pressure: I could bring my perspective from my community, but Indigenous communities are incredibly diverse. I hoped my Indigenous Studies degree would help me.

The experience was a success and led to more participation in projects in the Indigenous space at Google. Since 2015, I’ve been one of the five people who lead Google’s Aboriginal and Indigenous Network (GAIN), an employee-run group that gives Indigenous Googlers a safe place to nurture our communities.

Finding belonging in a workplace with a large, diverse population can be difficult. We often bend and mould ourselves to fit others’ expectations. It’s hard to be authentic. It’s hard to hold to our core values, and what truly makes us who we are. But I found this in GAIN.

GAIN is a place where we can grow and support one another. But it’s more than that. The group ensures that our communities outside of Google thrive, too. GAIN understands that the individual, family, and community are all connected. No one thrives in isolation, and that’s what powers GAIN.

This work was about self-determination, and starting the process of decolonization through community empowerment.

Some of the work GAIN is proudest of involves creating and launching initiatives in areas like hiring, recruiting, retention, wellness, cultural events, and internet connectivity in Indigenous communities. We work to shed light on the Missing and Murdered Indigenous Women and Girls campaigns, support Indigenous small businesses, and promote racial equity and justice initiatives. GAIN has also helped highlight educational tools such as training with ComIT, online STEM programming, and Grow with Google, which worked with elementary school students in their local library and makerspace in Iqaluit.

We’re also working to make space for the Indigenous community internally. We host screenings of independent Indigenous films, and have invited filmmakers from Wapikoni to lead a discussion. Bob Joseph spoke to Googlers about what we may not know about the Indian Act, and the Cloud Sales team purchased his book for their entire team. More than 80 people in the Canadian offices are attending the University of Alberta’s free course on Indigenous Canada together.

But it was with that original Google Maps project that I found a true home. The purpose of this work is to identify Indigenous territories on Google Maps and to recognize independent Indigenous sovereignties in the same way other governments are recognized. We also encourage Indigenous populations to take ownership of how their bands and cultures are presented online through Street View, Earth, and Maps.

When the Firelight Group (an Indigenous-owned consulting group) founded the Indigenous Mapping Workshop in 2014, they invited Google Earth Outreach as a partner and I was a member of the inaugural planning committee. We brought together 100 participants from Indigenous communities to teach them the tools needed to map out the locations their families rely on for hunting, gathering, trapping and fishing. In these workshops, we taught them how to put their own stories on their own maps, and encouraged them to take what they learned back to their communities. Maps are incredibly powerful tools in the hands of Indigenous communities, especially when they allow for our Indigenous worldview, and our Indigenous stories to be told.

This work was about self-determination, and starting the process of decolonization through community empowerment. We’ve supported Indigenous Mapping Workshops throughout Canada each year since 2014. Last year, the Indigenous Mapping Workshop went virtual and had more than 400 attendees. We expect more than 500 virtual attendees at this year’s conference, with over 100 training sessions. We’ve supported Indigenous Mapping Workshops in Australia and New Zealand as well. 

This opportunity was amazing. It is an honour to spend time with other First Nations, Elders and community members. Being welcomed into communities and sharing their stories is not a gift I’ll soon forget. I am so humbled to be able to help bring these tools, stories and Indigenous voices together. 

Helping people and businesses learn how Search works

Every day, billions of people come to Google to search for questions big and small. Whether it’s finding a recipe, looking for a local coffee shop or searching for information on complex topics like health, civics or finance, Google Search helps you get the information you need -- when you need it. 

But part of accomplishing our mission also means making information open and accessible about how Google Search, itself, works. That’s why we’re transparent about how we design Search, how we improve it and how it works to get you the information you’re looking for. 

Like many of the topics you might search for on Google, Search can seem complicated -- but we make it easy to learn about. Here are a few ways you can get a better understanding of how Google Search works:

A one-stop shop

Today, we’re launching a fully-redesigned How Search Works website that explains the ins and outs of Search -- how we approach the big, philosophical questions, along with the nitty-gritty details about how it all works. 

We first launched this website in 2016, and since then, millions of people have used it to discover more about how Search works. Now, we've updated the site with fresh information, made it easier to navigate and bookmark sections and added links to additional resources that share how Search works and answer common questions.

The website gives you a window into what happens from the moment you start typing in the search bar to the moment you get your search results. It gives an overview of the technology and work that goes into organizing the world’s information, understanding what you’re looking for and then connecting you with the most relevant, helpful information.

On the site, you can find details about how Google’s ranking systems sort through hundreds of billions of web pages and other content in our Search index -- looking at factors like meaning, relevance, quality, usability and context -- to present the most relevant, useful results in a fraction of a second. And you can learn about how we go about making improvements to Search. (There have been 4,500 such improvements in 2020 alone!) As you’ll read about, we rigorously test these changes with the help of thousands of Search Quality Raters all around the world -- people who are highly trained using our extensive guidelines. These rater guidelines are publicly available, and they describe in great detail how Search works to surface great content.


We're always testing changes to Search to provide you with the most helpful results.

Watch and learn

You also can watch our How Search Works video series, a set of easy-to-understand explainers about how Search connects you to helpful, relevant information. Here, you’ll find the answers to common questions like how Autocomplete works (no, it’s not mind-reading), how Google keeps you safe on Search, how ads appear in Search and more. 

And if you’re really in the mood to learn all about Search -- and the real people behind the scenes who are working hard to make it better every single day -- you can watch our “home movie,” “Trillions of Questions, No Easy Answers.” Grab your popcorn!

Trending worldwide

It’s also easy to get a view into what people are searching for around the world using Google Trends. For more than 15 years, we’ve made this tool publicly accessible so anyone can gain insight into how people are using Search to find information. Google Trends is one of the largest publicly available data sets of its kind, using anonymized search interest across different geographies to highlight trending topics, questions and societal shifts. You can think of it as a window into what the world is searching for on the web.

Transparency for website creators

When it comes to the open web, we also invest heavily in helping site owners, publishers, businesses, creators and others succeed and get discovered on Search. At Google Search Central, creators can get expert advice from experienced webmasters, view more than 1,000 educational videos, learn best practices for web development and discover many more tips to maximize their reach on Search.

Every day, we make changes to make Search work better -- some small, some large. We work hard to give site owners and content producers ample notice and advice about changes where there’s actionable information they can use. While we strive to provide as much information as we can, we also have a responsibility to protect the integrity of our results and keep results as clean as possible from search spam.  That’s why, although we share a lot of information about Search updates, we can’t share every detail. Otherwise, bad actors would have the information they need to evade the protections we’ve put in place against deceptive, low-quality content.

Over the last two decades, Google Search has evolved tremendously, but one thing remains core to how we operate: transparency about our approach and commitment to providing universally accessible information to all. Explore our newly refreshed website to discover more as we continue to evolve.


Source: Search


Celebrating our first YouTube Festival in Sub-Saharan Africa

Around 500 hours of video are uploaded every minute and over one billion hours of video are watched every day on YouTube. With more than 70% of YouTube videos being watched on mobile devices and 475 million people in Sub-Saharan Africa projected to have mobile internet access by 2025, YouTube provides advertisers with distinct opportunities to connect and reach a growing market of African consumers right where they are.

This is why we recently hosted our first-ever YouTube Festival in Africa. The festival celebrates Africa’s vibrant ecosystem of YouTube creators and advertisers, while providing exclusive first looks at new features, products, and innovations.

The virtual festival, attended by leading advertisers from across Sub-Saharan Africa, was an opportunity to learn about key emerging trends and global best practices. All this in a bid to empower advertisers to learn about all the new ways they can reach engaged audiences on YouTube.

The day's headline announcements included the introduction of YouTube Select and YouTube Audio Ads, which are designed to help marketers target individuals interested in particular content categories and those who use YouTube for ambient listening.





YouTube Select
YouTube Select allows advertisers to place their ads alongside curated content that is most relevant to their brand. Let’s say you manage a smartphone brand aimed at tech-savvy millennials. You could have your ads play alongside tech review content, for example.

YouTube’s most popular and relevant content, based on topic, audience, or moment, is curated into packages called Lineups. Lineups are built around what’s popular, with a focus on top categories and creators across sports, broadcast, beauty and fashion, and other popular content.

Lineups give advertisers the confidence that the right people are seeing their ads at the right time. Coupled with the existing YouTube targeting capabilities that advertisers know and use every day, ads can be hyper-personalised.




Audio Ads
Audio is a content format on the rise, with people spending an average of 18 hours a week listening to music — and 89% of them do so through on-demand streaming. Now advertisers have a new way of reaching these audiences on YouTube, the most popular destination for streaming music.


YouTube Audio Ads is a new format that allows advertisers to reach people using YouTube in the background and those on the free version of YouTube Music. Fifteen-second, non-skippable ads are currently available, with more formats coming soon.


Advertisers who have tried it are already seeing great success. More than 75% of measured campaigns are driving a significant lift in brand awareness with an audience that is highly engaged.



This is My YouTube
To shine a spotlight on creators, the festival featured an episode of This is My YouTube. The segment invites advertisers to experience YouTube through the eyes of YouTube creators. We find out what they laugh at, who they cry with, learn from, and escape to — and crucially, how they work with brands to bring relevant products and messaging to their followers.

Content creators play a significant role in influencing purchase behaviour. Research shows that conversions from interest to purchase increase by 133% when South African consumers see positive reviews.

Watch our first episode of This is My YouTube with South African YouTube creators Kay Ngonyama and Snikiwe Mhlongo — who, combined, have over 250,000 subscribers.



YouTube + TV
Festival attendees also learned about how YouTube and TV work better together. Today, people watch video content across devices, through different platforms, any time, and anywhere. This change requires advertisers to reach their audiences beyond TV.

Planning across both TV and YouTube gives advertisers an opportunity to extend their reach and drive incremental reach. This is because the consumer journey is not linear: it involves a multitude of touchpoints as a consumer considers a product or service.

While TV has a large reach, research shows that when brands combine TV and digital as part of their marketing strategy, the return on investment is much larger than either medium delivers alone.

With this consideration, when advertisers craft messaging across both mediums, they’re able to be exactly where their consumers are as they navigate the non-linear consumer journey.

Interestingly, we have found that YouTube as an advertising medium lets advertisers achieve TV’s large reach at a much lower cost. And as audiences in Africa, and around the world, consume more content on the platform, why not leverage this screen time as a brand?




A new era for advertising
As the go-to platform for video streaming, YouTube offers an array of curated content for diverse groups of people. Our audience solutions offer a variety of ways for advertisers to reach their valuable audiences.

If you would like to find out more about these YouTube offerings, watch the full festival on-demand below.




Posted by Alex Okosi, Managing Director of Emerging Markets, YouTube EMEA

These researchers are bringing AI to farmers

“Farmers feed the entire world — so how might we support them to be resilient and build sustainable systems that also support global food security?” It’s a question that Diana Akrong found herself asking last year. Diana is a UX researcher based in Accra, Ghana, and the founding member of Google’s Accra UX team.

Across the world, her manager, Dr. Courtney Heldreth, was equally interested in answering this question. Courtney is a social psychologist and staff UX researcher based in Seattle, and both women work as part of Google’s People + Artificial Intelligence Research (PAIR) group. “Looking back on history, we can see how the industrial revolution played a significant role in creating global inequality,” she says. “It set most of Western Europe onto a path of economic dominance that was then followed by both military and political dominance.” Courtney and Diana teamed up on an exploratory effort focused on how AI can help better the lives of small, local farming communities in the Global South. They and their team want to understand farmers’ needs, practices, value systems and social lives — and make sure that Google products reflect these dynamics.

One result of their work is a recently published research paper. The paper — written alongside their colleagues Dr. Jess Holbrook at Google and Dr. Norman Makoto Su of Indiana University and published in the ACM Interactions trade journal — dives into why we need farmer-centered AI research, and what it could mean not just for farmers, but for everyone they feed. I recently took some time to learn more about their work.


How would you explain your job to someone who isn't in tech?

Courtney: I would say I’m a researcher trying to understand underserved and historically marginalized users’ lives and needs so we can create products that work better for them. 

Diana: I’m a researcher who looks at how people interact with technology. My superpower is my curiosity and it’s my mission to understand and advocate for user needs, explore business opportunities and share knowledge.


What’s something on your mind right now? 

Diana: Because of COVID-19, there’s the threat of a major food crisis in India and elsewhere. We’re wondering how we can work with small farms as well as local consumers, policymakers, agricultural workers, agribusiness owners and NGOs to solve this problem.

Agriculture is very close to my heart, personally. Prior to joining Google, I spent a lot of time learning from smallholder farmers across my country and helping design concepts to address their needs. 

“Farmers feed the entire world — so how might we support them to be resilient and build sustainable systems that also support global food security?” Diana Akrong
UX researcher, Google


Courtney: I’ve been thinking about how AI can be seen as this magical, heroic thing, but there are also many risks to using it in places where there aren’t laws to protect people. When I think about Google’s AI Principles — be socially beneficial, be accountable to people, avoid reinforcing bias, prioritize safety — those things define what projects I want to work on. It’s also why my colleague Tabitha Yong and I developed a set of best practices for designing more equitable AI products.


Can you tell me more about your paper, “What Does AI Mean for Smallholder Farmers? A Proposal for Farmer-Centered AI Research,” recently published in ACM Interactions?

Courtney: The impact and failures of AI are often very western and U.S.-centric. We’re trying to think about how to make this more fair and inclusive for communities with different needs around the globe. For example, in our farmer-centered AI research, we know that most existing AI solutions are designed for large farms in the developed world. However, many farmers in the Global South live and work in rural areas, which trail behind urban areas in terms of connectivity and digital adoption. By focusing on the daily realities of these farmers, we can better understand different perspectives, especially those of people who don’t live in the U.S. and Europe, so that Google’s products work for everyone, everywhere.

Why did you want to work at Google?

Diana: I see Google as home to teams with diverse experiences and skills who work collaboratively to tackle complex, important issues that change real people’s lives. I’ve thrived here because I get to work on projects I care about and play a critical role in growing the UX community here in Ghana.

Courtney: I chose Google because we work on the world’s hardest problems. Googlers are fearless, and the reach of Google’s products and services is unprecedented. As someone who comes from an underrepresented group, I never thought I would work here. To be here at this moment is so important to me, my community and my family. When I look at the issues I care about most — marginalized and underrepresented communities — the work we do plays a critical role in preventing algorithmic bias, bridging the digital divide and lessening these inequalities.


How have you seen your research help real people? 

Courtney: In 2018, we worked with Titi Akinsanmi, Google’s Policy and Government Relations Lead for West and Francophone Africa, and PAIR Co-lead and Principal Research Scientist Fernanda Viegas on the report for AI in Nigeria. Since then, the Ministry of Technology and Science reached out to Google to help form a strategy around AI. We’ve seen government bodies in sub-Saharan Africa use this paper as a roadmap to develop their own responsible AI policies.


How should aspiring AI thinkers and future technologists prepare for a career in this field?

Diana: My main advice? Start with people and their needs. A digital solution or AI may not be necessary to solve every problem. The PAIR Guidebook is a great reference for best practices and examples for designing with AI.

Google Workspace Updates Weekly Recap – August 20, 2021

New updates 

Unless otherwise indicated, the features below are fully launched or in the process of rolling out (rollouts should take no more than 15 business days to complete), launching to both Rapid and Scheduled Release at the same time (if not, each stage of rollout should take no more than 15 business days to complete), and available to all Google Workspace and G Suite customers.


Block shares option added to Google Drive sharing emails
Last month, we announced that you could block shares from another user in Google Drive. Now, we're also adding the option to block a user from the sharing notification emails sent from Google Drive. With this addition, you'll be able to start the workflow directly from the email. | Learn more.


Previous announcements 

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.

View more insights and take quick action on the Users, Domain, and Billing home cards in the Admin console
You’ll notice important notifications and improved guidance within the cards to help you easily take action on user, billing, and domain management activities on the Admin console homepage. | Learn more. 


Upload customized audio prompts and greetings to Google Voice automated attendant
You can now upload your own pre-recorded prompts and greetings when setting up an automated attendant in Google Voice, in addition to the standard text-to-speech capability you currently have. | Available to all Google Workspace and G Suite customers with Google Voice standard and premier licenses. | Learn more.


Easily customize theme colors in Sheets and Slides
Now it’s easier to find and select theme colors in Sheets and Slides. | Learn more.


Share where you’re working from in Google Calendar
Starting August 30, 2021, you’ll be able to indicate where you’re working from directly on your calendar. You can add a weekly working location routine and update your location as plans change. Starting August 18, admins will be able to control how the feature is used in their organization. | Learn more.


Limit external messaging to trusted domains in Google Chat
Now, you may choose to limit external chat to people in trusted domains for your entire organization, or set different policies for different OUs. | Learn more.


Dark mode for Google Chat on web
You can now enable dark mode for Google Chat on the web (chat.google.com) and the Google Chat Progressive Web App (PWA). | Learn more.


For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).

Dark mode for Google Chat on web

Quick summary

You can now enable dark mode for Google Chat on the web (chat.google.com) and the Google Chat Progressive Web App (PWA). Dark mode creates a better viewing experience in low-light conditions by reducing brightness and potentially reducing eye strain.




Getting started

  • Admins: There is no admin setting for this feature.
  • End users: Within chat.google.com or the Google Chat PWA, go to Settings > Theme settings and select “Dark mode”.

Rollout pace


Availability

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers




Ripples Nigeria and the power of geojournalism

In 2015, Samuel Ibemere and his colleagues founded Ripples Nigeria, an online newspaper that aims to bring data journalism into the mainstream. They’re particularly focused on geojournalism: harnessing earth data to accurately report on big stories and important changes in the environment. “The media sector cannot stand by idly while other industries in Africa are contributing to help protect the environment,” Samuel tells us. The hope is that, as well as bringing geojournalism into the mainstream in Nigeria, the platform will also help track climate change.


In 2021, Ripples Nigeria received funding from the Google News Initiative Innovation Challenge for its latest project, Eco-Nai+, Nigeria’s first digital geojournalism platform. The Keyword sat down over Google Meet with Chinedu Obe Chidi, Assistant Editor of Ripples Nigeria, Programme Director of Ripples Centre for Data and Investigative Journalism (RCDIJ), and Team Lead of Eco-Nai+ to find out more about the work being done.


How would you define geojournalism and its importance today?

Geojournalism uses scientific data about the earth to report on the environment. It’s a fusion of journalism and earth sciences that creates a brand of journalism allowing us to have objective, visual, measurable, interactive yet broadly accessible coverage of issues surrounding the environment. Without it, people could still write about the environment. But by relying on technical tools — like image geotagging and authoritative open data sources like Google Earth — we can better communicate, from a scientific perspective, how best to interpret changes to the environment. It’s about getting more informed, more reliable coverage of issues like rising sea levels, droughts, rainfall, erosion — the many issues tied to the question of climate change, where technical reporting is vital.


What’s the origin story behind Ripples Nigeria? 


In 2014, two parallel developments drew in a group of young Nigerian professionals in the media space. After years of struggle, Nigeria finally entered the internet age, and the media industry rushed to take advantage of new digital opportunities. With that, investigative and data journalism became even more important, helping resolve local and global concerns around corruption, illiteracy, disease and the environment.

Ripples Nigeria was a product of these fundamental shifts. Realizing the gaps and opportunities at the time, the plan was to build a fiercely independent multimedia platform that would rise to speak truth to power, stay committed to the ideals of solution journalism and become Nigeria’s most influential news source.

Can you tell us about your initial work in data journalism?

We’ve been focusing on data journalism for the past five years. There’s a huge lack of familiarity with the subject on the continent and the more esoteric area of geojournalism is even newer to writers and editors. In 2017, we set up Ripples Centre for Data and Investigative Journalism (RCDIJ) to equip journalists, primarily through our Data Journalism Masterclass, to effectively and accurately embark on data reporting and investigative stories  in key areas like the environment. The Masterclass, in its third year now, has graduated more than one hundred journalists. 

How does Project Eco-Nai+ use data?


We rely on three main sources of data. First, we work with user-generated data from those most impacted by environmental changes, like farmers and other rural workers. We thought that if we could get these people to tell their own stories — what things within their natural operating environment were like five to 10 years ago versus today, for instance — they could contribute valuable data to the platform and help document these changes. Second, we use authoritative sources of data such as Google Earth, data from meteorological agencies, and other third-party official or trusted open data sources. Third, we use data collected by people we deploy to the field — researchers, analysts, data collectors, data and investigative journalists — who look at the environment in different communities where irregularities or changes have attracted our interest. These three sources represent a very broad data set that will form the rich database of Eco-Nai+ digital platform. 

The Ripples Nigeria team stand in front of a minivan smiling to the camera in corporate jumpers and work attire.

The Ripples Nigeria team

What do the next few years look like for Ripples Nigeria?

Beyond creating Nigeria’s first geojournalism digital platform with Eco-Nai+, we want to launch Nigeria’s first geojournalism lab, a center where journalists can access our tools, training and resources. It’s about empowering journalists across the country to be “geojournalists in practice,” and contributing collectively to more accurate, responsible reporting on the environment. Eventually, we intend to scale the project to cover journalists across the African continent.

Ultimately, we want to be able to mobilize different interest groups across Africa to buy into the idea of using data to protect the environment. Yes, we’re well aware of our commercial objectives, but as a social enterprise, we believe that at its core — at a time when climate action is needed and fast — Eco-Nai+ is about much more than profit; it is about lasting social impact. We believe that our social mobilisation agenda is good for the country, good for the continent, good for the industry and good for the environment. 


Video-Touch: Multi-User Remote Robot Control in Google Meet call by DNN-based Gesture Recognition

A guest post by the Engineering team at Video-Touch

Please note that the information, uses, and applications expressed in the below post are solely those of our guest author, Video-Touch.

You may have watched science fiction movies where people control robots with the movements of their bodies. Modern computer vision and robotics approaches allow us to make such an experience real, and no less exciting and incredible.

Inspired by the idea of making remote control and teleoperation practical and accessible during the hard period of the coronavirus pandemic, we came up with the Video-Touch project.

Video-Touch is the first robot-human interaction system that allows multi-user control via video-call applications (e.g. Google Meet, Zoom, Skype) from anywhere in the world.


Figure 1: The Video-Touch system in action: a single user controls a robot during a Video-Touch call. Note the sensors’ feedback when grasping a tube [source video].

We were wondering if it is even possible to control a robot remotely using only your own hands, without any additional devices like gloves or a joystick, and without suffering from significant delay. We decided to use computer vision to recognize movements in real time and instantly pass them to the robot. Thanks to MediaPipe, this is now possible.

Our system looks as follows:

  1. The video conference application captures the user's webcam video on their device and sends it to the robot's computer (“server”);
  2. The user's webcam video stream is captured from the video conference window on the robot's computer via the OBS virtual camera tool;
  3. The recognition module reads the user's movements and gestures with the help of MediaPipe and sends them to the next module via ZeroMQ;
  4. The robotic arm and its gripper are controlled from Python, given the motion capture data.

Figure 2: Overall scheme of the Video-Touch system: from the user's webcam to the robot control module [source video].

As the scheme shows, all the user needs to operate the robot is a stable internet connection and a video conferencing app. All the computation, such as screen capture, hand tracking, gesture recognition, and robot control, is carried out on a separate device (just another laptop) connected to the robot via Wi-Fi. Next, we describe each part of the pipeline in detail.

Video stream and screen capture

One can use any software that sends video from one computer to another. In our experiments, we used a video conference desktop application. The user calls from their device to a computer with a display connected to the robot, so the robot's computer can see the video stream from the user's webcam.

Now we need some mechanism to pass the user's video from the video conference to the Recognition module. We use Open Broadcaster Software (OBS) and its virtual camera tool to capture the open video conference window. This gives us a virtual camera that carries frames from the user's webcam and has its own unique device index that can be used in the Recognition module.

Recognition module

The role of the Recognition module is to capture the user's movements and pass them to the Robot control module. Here is where MediaPipe comes in. We searched for the most efficient and precise computer vision software for hand motion capture. We found many exciting solutions, but MediaPipe turned out to be the only tool suitable for such a challenging task: real-time, on-device, fine-grained hand movement recognition.

We made two key modifications to the MediaPipe Hand Tracking module: we added a gesture recognition calculator and integrated a ZeroMQ message passing mechanism.

At the time of our previous publication we had two versions of the gesture recognition implementation. The first version is depicted in Figure 3 below and does all the computation inside the Hand Gesture Recognition calculator. The calculator takes scaled landmarks as its input, i.e. landmarks normalized to the size of the hand bounding box rather than the whole image. Next, it recognizes one of four gestures (see also Figure 4): “move”, “angle”, “grab” and “no gesture” (the “finger distance” gesture from the paper was experimental and was not included in the final demo) and outputs the gesture class name. Although this version is quite robust and useful, it is based only on simple heuristic rules like “if this landmark[i].x < landmark[j].x then it is a `move` gesture”, and it fails in some real-life cases such as hand rotation.
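A heuristic of that kind can be sketched in a few lines of Python (the real calculator is C++; the landmark indices follow the MediaPipe hand model, but the thresholds and exact rules below are invented for illustration, not the ones used in the demo):

```python
# Illustrative heuristic gesture rules over hand-scaled landmarks.
# lm is a list of 21 (x, y) pairs normalized to the hand bounding box;
# indices follow MediaPipe: 0 = wrist, 4 = thumb tip, 8 = index tip,
# 12 = middle tip. The thresholds here are made up for the sketch.

def classify_gesture(lm):
    """Return one of 'move', 'angle', 'grab', 'no gesture'."""
    wrist, thumb_tip = lm[0], lm[4]
    index_tip, middle_tip = lm[8], lm[12]

    # "grab": fingertips curled toward the wrist (small vertical spread).
    spread = abs(index_tip[1] - wrist[1]) + abs(middle_tip[1] - wrist[1])
    if spread < 0.35:
        return "grab"
    # "move": open hand with the thumb left of the index finger.
    if thumb_tip[0] < index_tip[0]:
        return "move"
    # "angle": thumb clearly crossed to the right of the index finger.
    if thumb_tip[0] > index_tip[0] + 0.1:
        return "angle"
    return "no gesture"
```

Rules like these compare raw coordinates, so they break as soon as the hand rotates, which is exactly the failure mode that motivated a learned classifier.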


Figure 3: Modified MediaPipe Hand Landmark CPU subgraph. Note the HandGestureRecognition calculator

To alleviate the problem of poor generalization, we implemented the second version. We trained a Gradient Boosting classifier from scikit-learn on a manually collected and labeled dataset of 1,000 keypoints: 200 each for the “move”, “angle” and “grab” classes, and 400 for the “no gesture” class. By the way, today such a dataset could easily be obtained using the recently released Jesture AI SDK repo (note: another project by some of our team members).

We used scaled landmarks, angles between joints, and pairwise landmark distances as input to the model to predict the gesture class. Next, we tried passing only the scaled landmarks, without any angles or distances, and this resulted in a similar multi-class accuracy of 91% on a local validation set of 200 keypoints. One more point about this version of the gesture classifier: we were not able to run the scikit-learn model from C++ directly, so we implemented it in Python as part of the Robot control module.
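The feature construction can be sketched as follows (a simplified stand-in, not the project's code; the resulting vectors would then be fed to scikit-learn's `GradientBoostingClassifier`):

```python
import math
from itertools import combinations

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by the segments b-a and b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.acos(max(-1.0, min(1.0, cos)))  # clamp for float safety

def feature_vector(landmarks, joints=()):
    """Scaled landmarks + pairwise distances (+ optional joint angles)."""
    feats = [coord for point in landmarks for coord in point]
    feats += [math.dist(p, q) for p, q in combinations(landmarks, 2)]
    feats += [joint_angle(landmarks[i], landmarks[j], landmarks[k])
              for i, j, k in joints]
    return feats
```

Dropping the distance and angle terms and keeping only the flattened landmarks reproduces the simpler input variant mentioned above.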


Figure 4: Gesture classes recognized by our model (“no gesture” class is not shown).

Right after the publication, we came up with a fully-connected neural network trained in Keras on the very same dataset as the Gradient Boosting model, and it gave an even better result of 93%. We converted this model to the TensorFlow Lite format, and now we are able to run the gesture recognition ML model right inside the Hand Gesture Recognition calculator.


Figure 5: Fully-connected network for gesture recognition converted to TFLite model format.

Once we have the current hand location and the current gesture class, we need to pass them to the Robot control module. We do this with the help of the high-performance asynchronous messaging library ZeroMQ. To implement this in C++, we used the libzmq library and the cppzmq headers. We utilized the request-reply scenario: REP (server) in the C++ code of the Recognition module and REQ (client) in the Python code of the Robot control module.
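The shape of that exchange can be sketched with pyzmq (the endpoint and payload format here are illustrative only; in the real system the REP side lives in the C++ Recognition module):

```python
import zmq

ENDPOINT = "inproc://gestures"  # a real deployment would use tcp://host:port

def make_pair(ctx, endpoint=ENDPOINT):
    """Create the REP (recognition) and REQ (robot control) sockets."""
    rep = ctx.socket(zmq.REP)   # Recognition module: serves results
    rep.bind(endpoint)
    req = ctx.socket(zmq.REQ)   # Robot control module: polls for them
    req.connect(endpoint)
    return rep, req

def one_exchange(req, rep, payload=b"grab 0.52 0.48"):
    """One request-reply round trip: gesture class plus hand center."""
    req.send(b"next")             # REQ: ask for the latest recognition result
    assert rep.recv() == b"next"  # REP: receive the request...
    rep.send(payload)             # ...and reply with the current state
    return req.recv()
```

In the strict REQ/REP pattern every `send` must be answered before the next request, which keeps the robot control loop in lockstep with recognition.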

So using the hand tracking module with our modifications, we are now able to pass the motion capture information to the robot in real-time.

Robot control module

The robot control module is a Python script that takes hand landmarks and the gesture class as input and outputs a robot movement command (on each frame). The script runs on a computer connected to the robot via Wi-Fi. In our experiments we used an MSI laptop with an Nvidia GTX 1050 Ti GPU. We also tried running the whole system on an Intel Core i7 CPU, and it was still real-time with negligible delay, thanks to the highly optimized MediaPipe compute graph implementation.

We use the 6DoF UR10 robot by Universal Robots in our current pipeline. Since the gripper we are using is a two-finger one, we do not need a complete mapping of each landmark to a robot finger keypoint, only the location of the hand's center. Using these center coordinates and the python-urx package, we can change the robot's velocity in a desired direction and orientation: on each frame, we calculate the difference between the current hand center coordinate and the one from the previous frame, which gives us a velocity change vector or angle. In the end, this mechanism looks very similar to how one would control a robot with a joystick.
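The per-frame velocity computation can be sketched as follows (the gain and safety clamp are invented for the sketch; the commented-out python-urx call only indicates roughly how such a command would reach the arm):

```python
# Map the frame-to-frame displacement of the hand center to a velocity
# command, joystick-style. GAIN and MAX_SPEED are hypothetical values.
GAIN = 0.8        # scale from normalized image units to m/s (assumed)
MAX_SPEED = 0.25  # safety clamp on each axis, in m/s (assumed)

def velocity_from_hand(prev_center, center, gain=GAIN, limit=MAX_SPEED):
    """Return a clamped (vx, vy) command from two hand-center positions."""
    clamp = lambda v: max(-limit, min(limit, v))
    vx = clamp((center[0] - prev_center[0]) * gain)
    vy = clamp((center[1] - prev_center[1]) * gain)
    return vx, vy

# With python-urx the command would then be sent along the lines of:
#   robot.speedl([vx, vy, 0, 0, 0, 0], acc=0.2, min_time=0.05)
```

Clamping each axis keeps a sudden hand jump from producing an unsafe arm motion, mirroring the bounded throw of a physical joystick.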


Figure 6: Hand-robot control logic follows the idea of a joystick with pre-defined movement directions [source video].

Tactile perception with high-density tactile sensors

Dexterous manipulation requires high spatial resolution and high-fidelity tactile perception of objects and environments. The newest sensor arrays are well suited for robotic manipulation, as they can be easily attached to any robotic end effector and adapted to any contact surface.


Figure 7: High fidelity tactile sensor array: a) Array placement on the gripper. b) Sensor data when the gripper takes a pipette. c) Sensor data when the gripper takes a tube [source publication].

Video-Touch is equipped with a kind of high-density tactile sensor array, installed in the two-finger robotic gripper with one sensor array attached to each fingertip. A single electrode array can sense a frame area of 5.8 cm² with a resolution of 100 points per frame. The sensing frequency is 120 Hz, and the force detection range per point is 1 to 9 N. Thus, the robot detects the pressure applied to solid or flexible objects grasped by the robotic fingers with a resolution of 200 points (100 points per finger).

The data collected from the sensor arrays are processed and displayed to the user as dynamic finger-contact maps. The pressure sensor arrays allow the user to perceive the grasped object's physical properties, such as compliance, hardness, roughness, shape, and orientation.
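A rough sketch of that processing step (the grid layout and the linear normalization are assumptions, consistent only with the 100-points-per-fingertip and 1 to 9 N figures quoted above):

```python
# Normalize one raw force frame (values in newtons) into a 0..1 contact
# map for display. F_MIN/F_MAX mirror the sensor's stated per-point range.
F_MIN, F_MAX = 1.0, 9.0  # per-point force detection range, in newtons

def contact_map(frame):
    """Map a 2D force frame to [0, 1]; out-of-range values are clamped."""
    span = F_MAX - F_MIN
    return [[max(0.0, min(1.0, (force - F_MIN) / span)) for force in row]
            for row in frame]
```

Rendering such normalized frames at the sensing rate yields the dynamic finger-contact maps the user sees during a grasp.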


Figure 8: Multi-user robotic arm control feature. The users are able to perform a COVID-19 test during a regular video call [source video].

Endnotes

Thus, by using MediaPipe and a robot, we built an effective multi-user robot teleoperation system. Potential future uses of teleoperation systems include medical testing and experiments in difficult-to-access environments like outer space. The system's multi-user functionality addresses the real problem of effective remote collaboration, allowing several people to work together on projects that need manual remote control.

Another nice feature of our pipeline is that one can control the robot using any device with a camera, e.g. a mobile phone. One could also operate other hardware form factors, such as edge devices, mobile robots, or drones instead of a robotic arm. Of course, the current solution has some limitations: latency, the use of the z-coordinate (depth), and the convenience of the gesture types could all be improved. We can’t wait to try the updates from the MediaPipe team, and we look forward to trying new types of grippers (with fingers), two-hand control, or even whole-body control (hello, “Real Steel”!).

We hope the post was useful for you and your work. Keep coding and stay healthy. Thank you very much for your attention!

This blog post is curated by Igor Kibalchich, ML Research Product Manager at Google AI

Pixel and Android Enterprise connect National Australia Bank

Supporting the mobility needs of our employees has long been a top priority at National Australia Bank. As the IT team for a leading bank in Australia, we want our colleagues across all levels of the company to have secure access to the information they need.

When recently evaluating our device strategy, we wanted to reduce the time and costs of supporting legacy devices and multiple platforms. Pixel devices managed with Android Enterprise have been key to this strategic shift, benefiting our customer support teams who spent much of the last year working from home while continuing to support our customers remotely. 

Rapidly enabling teams

The IT team issued more than 2,000 Pixel devices to our customer contact teams, enabling them to continue serving customers remotely at the start of the pandemic. Vodafone helped rapidly launch the solution, using zero-touch enrollment to quickly set up devices with the necessary applications and configurations.

With zero-touch enrollment, each Pixel setup was 20 minutes faster than our previous device enrollments, saving our IT team and colleagues over 500 hours during the initiative. With our communication and collaboration apps available right out of the box, our teams could get to work right away to help customers.

Our contact center teams use Pixel devices that are fully managed, which allows us to provide the necessary security controls and to wipe and re-enroll devices when they are transferred to a new employee. Branch managers use Pixels with the work profile, separating work and personal applications. This gives employees the ability to use the device in a personal capacity while our IT team manages and secures data within the work profile.

Our IT team has received positive feedback from employees about their experience with the work profile. The simplicity and clear separation between work and personal profiles is a great benefit for those who want to build better balance into their day. Moreover, our IT admins have the security tools necessary to safeguard critical data. 

With managed Google Play, we have the flexibility to assign the needed apps to our managed devices, whether they’re fully managed or using the work profile, through the admin console. The ability to assign apps to the right teams is a major time saver and ensures everyone has the resources they need. Branch managers can look up customer service records or answer a ping more quickly from their Pixel, instead of returning to their desk and logging back in to their desktop computer. Android Enterprise has been a catalyst for a more mobile and responsive environment across our teams.

Simplified management and security

Given the security requirements of the financial services industry, protecting customer data and preventing leakage is critical. Pixel security updates from Google provide a reliable cadence of ongoing protection as threats evolve, and the work profile hits the right balance between security and privacy for our teams.

The combination of zero-touch enrollment, consistent security updates and integration with device management tools has been a driving force for our IT team. We see Android Enterprise as a key component to our mobility strategy, providing the flexibility and security our teams require.