"AI First" the mantra for Google I/O 2017
Google’s annual worldwide developer conference (Google I/O) kicked off at the Shoreline Amphitheatre in Mountain View, California on Wednesday morning. Seven thousand people are attending live, and others are viewing the event online at 400 Google I/O Extended events in 85 countries.
This year’s mantra is “AI first,” integrating machine learning with each of Google’s software and hardware products. The keynote was a rapid-fire delivery of announcements:
- A new initiative named Google Lens pairs visual recognition with Google Assistant to act on what your camera sees. Point your phone’s camera at a label containing a network password, and Google Assistant enters the password automatically and connects you to the network. Point the camera at a phone number, and Google Assistant dials that number. Point it at a concert advertisement, and Google Assistant plays a sample of the band’s music and offers to book concert tickets.
- Developing artificially intelligent apps requires two phases: a training phase and an inference phase. In the training phase, the software learns about the problem domain. In the inference phase, the software applies that learning to new situations. (A toy sketch of the two phases appears after this list.)
The training phase is computationally intensive. To address this issue, Google announced its new Cloud TPU, which is available on Google Compute Engine immediately. The Cloud TPU hardware is optimized for both training and inference, and can deliver a whopping 180 teraflops of computing power. Developers can visit http://g.co/tpusignup to sign up.
- Google announced its new Google.ai initiative to coordinate AI efforts and teams. The initiative has three parts: research, tools, and applied AI. The research part includes AutoML, in which neural nets design other neural nets. This task is computationally challenging, and Cloud TPU makes it possible.
- Starting immediately, Google Assistant will accept commands that are typed or tapped as well as spoken. Typing is advantageous because, in public venues, people may not want to speak commands. Typing, tapping and speaking to Google Assistant are all integrated, so an interaction with the Assistant may use all three interaction modes.
- Google Assistant is now available on the iPhone!
- Google Assistant is now available in French, German and several other languages. Google Home will launch in Canada, Australia, France, Germany and Japan.
- Effective immediately, Actions on Google handles purchase transactions. With voice interaction and a fingerprint scan, you can use Google Pay. You don’t have to enter an address or credit card number.
- A new feature in Google Home is called Proactive Assistance. Here’s how it works: Google Home knows about an upcoming event on your calendar, knows where the event takes place, and calculates the travel time to the event given the current traffic conditions. When you say “What’s up?” to Google Home, the device reminds you that it’s time to leave for the upcoming event.
- In the next few months, Google Home will make no-cost, hands-free calls to any landline within the United States or Canada. Google Home recognizes up to six different voices in a household. So if you say “Call Mom,” the device determines which member of the household is making the request, and calls that person’s mother.
- Spotify’s free music service will be available on Google Home.
- Google Home will have Bluetooth support, so you’ll be able to play music from any Bluetooth-enabled device on the Google Home speaker.
- In addition to its voice responses, Google Home will display information on your phone’s screen and, through Chromecast, on your TV.
- Google Photos will have three new features. With Suggested Sharing, Photos identifies the people in your images and offers to share the images with those people. With Shared Libraries, Photos automatically shares images with certain characteristics with people you select. With Photo Books, you can purchase a hard copy of your best images based on criteria that you specify.
- In the next few weeks, YouTube will provide 360-degree video on your Android TV. You’ll issue voice commands to request a certain video. You’ll use your remote to move from side to side within the video scene. Live 360 content will be available.
- Earlier this year, YouTube launched Super Chat where users pay to pin comments on live streams. Users can now trigger physical actions using Super Chat. During the keynote, users paid to drench two fellows known as the Slow Mo Guys with 500 water balloons. All proceeds went to charitable causes.
- TensorFlow is Google’s machine intelligence software library. With the newly announced TensorFlow Lite version of that library, developers can add deep learning capabilities to apps that run on small, mobile devices. Smartphones will become even smarter. (A sketch of what on-device inference might look like also appears after this list.)
- Samsung’s Galaxy S8 and S8+ will add virtual reality features using Google Daydream.
- HTC and Lenovo will use Google Daydream in their standalone VR headsets. All the processing power will be in the headsets. You’ll experience virtual reality without having to attach a cable or a smartphone.
- The new Android Go initiative optimizes the Android system to run on entry-level phones. In this context, an entry-level phone is one with between half a gigabyte and one gigabyte of memory. As one part of this initiative, a Data Saver feature economizes on the use of network resources by compressing the data that’s being sent. Another part, named YouTube Go Offline Sharing, saves videos for viewing when the network isn’t available.
- With Google Expeditions, students experience things that they ordinarily couldn’t, all within the safety and comfort of their own classrooms. Students move around the room while looking at a tablet device’s screen. The tablet shows anything from the terrain in a faraway land to a view from inside the human body.
Later this year, the Expeditions platform will add augmented reality to its repertoire. The tablet’s display will be able to superimpose virtual images onto real objects in the room.
- A new Android App Directory helps users discover new apps. Users can try an app before buying it.
- Google’s Instant Apps API is now available to all Android developers.
- Many Firebase SDKs will soon be made open-source.
- The new Play Console Dashboards summarize app diagnostics to help developers analyze and improve their apps. In addition, a developer can add Firebase Performance Monitoring to an app with only one line of code.
- For enhanced security, Firebase will include phone number authentication. (A sketch appears after this list.)
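To make the training and inference phases from the keynote concrete, here is a toy Kotlin sketch. It is conceptual only; the names and the trivial “model” are invented for illustration and don’t correspond to any Google API.

```kotlin
// Toy illustration of the two AI phases; all names here are invented.
data class Example(val features: DoubleArray, val label: Double)

// Training phase: computationally intensive in real systems, because the
// software learns parameters from many examples. Here the "model" is just
// the average label, standing in for real learned weights.
fun train(data: List<Example>): Double = data.map { it.label }.average()

// Inference phase: comparatively cheap; applies the learned parameters
// to a new, unseen input.
fun infer(model: Double, newFeatures: DoubleArray): Double = model

fun main() {
    val data = listOf(
        Example(doubleArrayOf(1.0), 3.0),
        Example(doubleArrayOf(2.0), 5.0)
    )
    val model = train(data)                    // learn once, expensively
    println(infer(model, doubleArrayOf(1.5)))  // apply cheaply: prints 4.0
}
```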
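And here is roughly what TensorFlow Lite’s on-device inference could look like from a Kotlin app. The library wasn’t released at announcement time, so treat this as a guess based on the Interpreter-style API TensorFlow Lite eventually shipped; the model file name and tensor shapes are made up.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

fun main() {
    // Load a pre-trained, converted .tflite model (file name is hypothetical).
    val interpreter = Interpreter(File("mobilenet.tflite"))

    // One 224x224 RGB image, flattened, and a 1000-way score vector;
    // the shapes must match whatever the model was trained with.
    val input = Array(1) { FloatArray(224 * 224 * 3) }
    val output = Array(1) { FloatArray(1000) }

    // Inference happens entirely on the device; no network round trip.
    interpreter.run(input, output)
    println("Top score: ${output[0].maxOrNull()}")
}
```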
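Firebase’s phone number authentication, mentioned in the last item, might be wired up along these lines. This is a sketch against the PhoneAuthProvider API Firebase shipped for the feature; the phone number and the callback bodies are placeholders.

```kotlin
import com.google.firebase.FirebaseException
import com.google.firebase.auth.PhoneAuthCredential
import com.google.firebase.auth.PhoneAuthProvider
import java.util.concurrent.TimeUnit

fun startPhoneSignIn(activity: android.app.Activity) {
    val callbacks = object : PhoneAuthProvider.OnVerificationStateChangedCallbacks() {
        override fun onVerificationCompleted(credential: PhoneAuthCredential) {
            // Auto-verification succeeded; sign the user in with the credential.
        }
        override fun onVerificationFailed(e: FirebaseException) {
            // Invalid number, quota exceeded, and similar failures land here.
        }
        override fun onCodeSent(verificationId: String,
                                token: PhoneAuthProvider.ForceResendingToken) {
            // An SMS code was sent; prompt the user to type it in.
        }
    }

    PhoneAuthProvider.getInstance().verifyPhoneNumber(
        "+15551234567",        // placeholder number
        60, TimeUnit.SECONDS,  // verification timeout
        activity,              // Activity the callbacks are bound to
        callbacks)
}
```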
Beta availability of Android O
I write books about Android development, so for me, the most interesting announcement was the availability of a beta for the next Android version – codenamed Android O. Alongside this release, developers can now write Android code in the Kotlin programming language. This is a big deal for developers because it’s a departure from Android’s long-standing Java-only tradition.
Kotlin is completely interoperable with Java, so existing Java code will work without modification; a minimal sketch of that interop appears below. New apps can be built using Java, using Kotlin, or using any combination of the two languages. JetBrains (the company that created Kotlin) will work alongside Google to help Kotlin evolve as a language for mobile platform development. Best of all, Kotlin support is available immediately in the new Android Studio 3.0.
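To give a flavor of that interoperability, consider this minimal example. Greeter is a hypothetical legacy Java class already sitting in the project; Kotlin calls it directly, with no wrappers and no changes to the Java side.

```kotlin
// Suppose Greeter is an existing Java class in the same project:
//
//   public class Greeter {
//       public String greet(String name) { return "Hello, " + name + "!"; }
//   }
//
// Kotlin can use it directly; no bridging code is required.
fun main() {
    val greeter = Greeter()            // instantiate the Java class
    println(greeter.greet("Android"))  // prints: Hello, Android!
}
```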