Google I/O 2018

Google I/O took place from 8th to 10th May 2018. My summary covers the keynotes and what’s new in Android. I watched the content on 31st May 2018 via YouTube.

Google Keynote

  1. AI / machine learning
  2. Digital Wellbeing
  3. Google News Initiative
  4. Android
  5. Google Maps
  6. Google Lens
  7. Waymo

Google will train more people in its technologies.

AI

Google AI and machine learning are again the focus: healthcare (images), audio, Morse code.

Gmail is redesigned with AI. Smart Compose: hit Tab to autocomplete. It rolled out in May 2018.

Google Photos is redesigned with AI: share photos with the same person in them, convert a photo of a document to PDF, coloring, light changes. Rolling out in the next couple of months.

TPU 3.0 was presented and is liquid cooled.

Google Assistant is reinforced with Google AI. The goal is a natural language experience. Continued Conversation, Multiple Actions and Pretty Please for polite education are coming to the Google Assistant soon. Start the Assistant with ‘Hey Google’.

This summer the Google Assistant launches integrated interactive features like food ordering with partners like Starbucks or Dunkin’ Donuts. It can also show you a daily summary using your calendar and presenting related information.

The Google Assistant comes to navigation in Google Maps this summer.

The Google Assistant can make real phone calls to reserve a table in a restaurant or make a haircut appointment, then add it to your calendar. It’s in the experimental phase.

Digital Wellbeing

The current FOMO, fear of missing out, should be changed to JOMO, joy of missing out. See the new Google Digital Wellbeing site.

The Android Dashboard will show you where you spend your time.

YouTube has now added take-a-break reminders and a single daily digest notification.

Google News Initiative

A quality journalism project for the next 3 years, with a budget of 300 million dollars.

AI added to Google News will bring you the news you care about. The app uses Material Theming and Newscasts.

The Full Coverage option allows you to compare the news from different sources, a sort of news analysis.

Everyone should get access to the same information.

Android

Android P is AI based, built in partnership with DeepMind. The P goals are

  • Intelligence
  • Simplicity
  • Digital wellbeing

The Android P Beta will not be available on Nexus devices. It will be available on the Essential Phone, Pixels, Sony Xperia XZ2, Nokia 7 plus, …

Intelligence

Adaptive Battery uses on-device machine learning to adapt battery use to the user’s usage pattern. The result so far is a 30% reduction in CPU app wakeups.

Adaptive Brightness learns how you set the brightness slider and then sets it automatically.

App Actions predict the next action you want to take. Developers need to add an actions.xml to their app; the action then becomes available in Google Search, the Google Assistant, the Play Store and Smart Text Selection.
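App Actions were only previewed at I/O, so the exact actions.xml schema may differ; as a purely hypothetical sketch (intent name and URL template invented for illustration), the idea is to map a built-in intent onto a deep link into your app:

```xml
<!-- Hypothetical sketch; the real actions.xml schema was only previewed at I/O 2018. -->
<actions>
    <action intentName="actions.intent.EXAMPLE_ACTION">
        <!-- The fulfillment points to a deep link in the app that handles the action. -->
        <fulfillment urlTemplate="myapp://example{?query}" />
    </action>
</actions>
```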

Slices are interactive snippets of an app’s UI. They can be shown by the Google Assistant. It’s a new API. Example: type ‘Lyft’ into Google Search and see their Slice.

ML Kit is a free set of machine learning APIs. It is based on TensorFlow Lite and is cross-platform (iOS and Android).

Simplicity

New system navigation: a single clean home button. Swipe up to see open apps and 5 predicted apps; swipe up again to see all apps. Slide the home button left and right to scroll through your open apps. The search bar sits at the bottom.

Smart Text Selection is also available from open apps (the Overview).

Audio volume adjusts automatically. You only have to care about on, off and silent.

When rotation lock is on, a rotation option is now displayed in the bottom bar.

More simplifications: notifications, profiles, settings, screenshots, crash dialogs, status bar

Wellbeing

Dashboard, App Timer.

Do Not Disturb mode started by the ‘Shush’ gesture: turn your phone face down and it switches to DND mode.

Wind Down mode: set a sleep time and your phone’s colors will switch to grayscale.

Google Maps

Different types of vehicles (motorcycle added). AI added.

A new ‘For you’ tab shows what is around your location. It uses machine learning to find your matches.

A long press on a place adds it to a shortlist. You can share or save the list. You can vote on locations in real time within a group.

Google Lens

A camera AR feature in Google Maps navigation, called VPS (Visual Positioning System).

Google Lens will be integrated into the camera app on phones compatible with Android P. For example, Google Lens smart text recognition lets you copy and paste text from a photo.

Waymo

Waymo, the self-driving car company from Google, will be hailable via its own app.

Waymo has multiple sensor systems: a vision system, a lidar system and a radar system.

Developer Keynote

The number of developers using Android Studio has tripled in the last 2 years.

  1. New Android Capabilities
  2. Platforms and Tools

New Android Capabilities

Android Slices are coming with templates.

Android Assistant Actions: create Action Links, Action notifications or Routine Suggestions.

PWAs (Progressive Web Apps): Lighthouse version 3.0 is launched.

Clearer design guidance for Material Design. Material Theming makes Material Design more dynamic.

Material Components are available on GitHub.

Cloud Text-to-Speech is available. More in ML Kit.

The Firebase Predictions feature is available thanks to machine learning (ML Kit).

ARCore allows you to add AR to your app.

Platforms and Tools

Kotlin will be the language of choice for the long term. Already 35% of pro developers are using Kotlin. So, learn more Kotlin!

Installed APK sizes are now reduced by Dynamic Delivery. Use the APK Analyzer to see whether your app will profit from Dynamic Delivery. Choose ‘Android App Bundle’ when generating your APK to get Dynamic Delivery. Find the results in the Google Play Console under App bundle explorer.

Google Play Instant: try a game without having to install it first.

Android Jetpack is a set of libraries and tools, designed with Kotlin in mind. Android Studio now integrates a navigation editor.

The emulator now starts close to instantly.

Linux apps run on Chrome OS now.

What’s new in Android

  • dynamic APKs
  • Google APIs
  • Android Jetpack: Android Test, Architecture Components (Room, ViewModel, LiveData, Lifecycles, Paging)
  • android-ktx: Kotlin library
  • Slices are backwards compatible
  • Actions: Deeplinks as visible Intents

Android Support Libraries are renamed to ‘androidx.’ to drop the API-level version numbers from the package names
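For example, a dependency moves from the old support namespace to the independently versioned androidx one (artifact versions here are illustrative):

```groovy
// build.gradle, before: Support Library, version tied to the API level
implementation 'com.android.support:appcompat-v7:28.0.0'

// after: AndroidX, independently versioned
implementation 'androidx.appcompat:appcompat:1.0.0'
```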

An app in the background no longer has access to sensors like the microphone, camera or device rotation. The app needs to run a foreground service instead.

Mockito can now mock final methods and soon static methods.

A magnifier for easier text selection and cursor manipulation is now available.

Location: indoor positioning is now possible with android.net.wifi.rtt.*

Use BiometricPrompt for security checks by fingerprint.

Handle display cutouts by setting the mode of android:windowLayoutInDisplayCutoutMode
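A minimal sketch of how this could look in a theme (style name hypothetical; shortEdges lets content draw into the cutout area):

```xml
<!-- res/values/styles.xml: render into the display cutout's short edges. -->
<style name="AppTheme" parent="Theme.AppCompat">
    <item name="android:windowLayoutInDisplayCutoutMode">shortEdges</item>
</style>
```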

Add a smart reply UI to notifications via RemoteInput.setChoices()

Deprecation policy: new apps and app updates have to target API 26.

ImageDecoder can decode Bitmaps and Drawables, including AnimatedImageDrawable. You can decode multiple times from the same source.

AR is integrated into the emulator.

Google I/O 2017

The conference ran from May 17th to May 20th 2017 (GMT+2). I joined the conference via live stream.

17th

Google Keynote

2 billion active Android devices (smartphones & tablets) today.

From a mobile-first to an AI-first approach. Deep learning is the base.

Touch, voice and vision are the available inputs to computers.

Image recognition now has a lower vision error rate than humans! Noisy pictures can be made so clear that humans can reduce their error rate too. Removing objects like fences from pictures is also possible.

Google Lens is a new product that can understand what you’re looking at and help you take action on it. Example: point your phone camera at a flower and Google Lens can tell you what flower it is.

Google builds AI first data centers.

Training and inference are the two parts of AI. Training is very costly. For this, Google Compute Engine provides new Cloud TPUs in the AI-first data centers.

For the new AI content, google.ai has been created. It provides Research, Tools and Applied AI.

Learning to learn = training neural nets with neural nets

Other applications for ML with vision and AI are pathology analysis or DNA sequencing.

Google Translate will get visual translation.

The iPhone now has the Google Assistant (“Ok Google”). With the Google Assistant SDK everyone can integrate “Ok Google” into their products. Products that have it will carry the label “Google Assistant built in”.

Google Photos now has, among other things, a Photo Book option.

YouTube now has a Super Chat option. It’s a live chat where you can also donate.

The Android O release comes later this summer. In O, a double tap on text containing an address, place or similar selects the entire address or place string.

TensorFlow Lite available for Android.

Vitals in O stands for security enhancements, OS optimizations and developer tools:

  • Apps coming to the Play Store are scanned with machine-learning-based algorithms. To make this more perceptible they created Google Play Protect.
  • Twice as fast boot time with O on Pixel
  • The Play Console dashboard pins issues in your app for the developer
  • Android Studio profilers allow you to analyze network, memory and CPU in very deep detail and at runtime
  • Kotlin will be supported by Android

The beta release of O can be found here.

Android Go is for users with minimal resources. Ex.: YouTube Go, …

Add Lite variants of your app; they will be highlighted in the Play Store. Find the guidelines here.

Daydream is the VR/AR product from Google.

HTC is a leader in VR. Lenovo will also be a partner on VR.

GPS can get you to the door. VPS can get you to the exact item that you need for example in a store.

Google Expeditions + AR for learning in schools.

Google will have a job searching feature.

Developer Keynote

First class support for Kotlin. Use as much Kotlin as you want from 0 to 100%.

Languages are the tools we use to express our thoughts.

Java 8 is supported now. You can also ignore Kotlin and use, for example, lambdas from Java 8.

The team behind Kotlin is the same team that created IntelliJ (JetBrains).

The IDE converts Java code to Kotlin if you paste Java code into a .kt file.

Live debugging for network usage, CPU usage, …

There is no separate SDK Manager anymore; everything is distributed via Maven repositories.

Instant Apps: explore an app without installing it.

The Modularize tool in Android Studio can help divide your app into separate features that can be used to provide an instant app version. With the latest tools it takes 4 to 6 weeks to build an instant app version of an existing app project.

Also use space-saving shared libraries, optimized asset delivery or on-the-wire compression to shrink your instant app.

Finally, publish your instant app version in the Play Console.

App Directory for the Google Assistant.

Custom shortcuts to start an app with the Google Assistant.

Actions on Google Developer Console is a new developer console.

TPU = Tensor Processing Unit

TensorFlow Research Cloud registration can be found here

Lighthouse is a Chrome extension that can help analyze your website.

Firebase now has Cloud Functions.

Firebase Performance Monitoring can help you improve your app, for example its start time. The beta version is available from today.

Assisting the Driver: From Android Phones to Android Cars

The Android Auto app is available for cars that don’t have a built-in navigation system.

The Audi Q8 integrates Android. (Talk by Alfons Pfaller, with very limited English)

Volvo is also partnering with Android. (Talk by Henrik Green)

New release & device targeting tools or: how I learned to stop worrying & love Android diversity!

Test your new app updates with an open beta.

Staged rollouts are possible for new releases.

You can now download older APK releases from the developer console.

Release notes can be entered via copy & paste in the developer console.

The Release Dashboard from developer console allows you to compare against previous releases and analyze the state.

New Device Catalog UI in the developer console.

What’s new in Google’s IoT platform? Ubiquitous computing at Google

IoT = Internet of Things

The IoT platform from Google is Android Things and can be found here

You can build your own hardware boards with Android Things. The Google Assistant SDK allows you to build custom machines.

Find an example of Android Things & TensorFlow here

What’s New in Android

Picture-in-picture can be enabled with a flag in the manifest’s activity tag
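A minimal manifest sketch (activity name hypothetical):

```xml
<!-- Opt an activity into picture-in-picture; it must also be resizeable. -->
<activity
    android:name=".VideoActivity"
    android:supportsPictureInPicture="true"
    android:resizeableActivity="true" />
```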

Color management with new utilities: android.graphics.Color, ColorSpace, ColorLong, Half

Multi-display mode is available with Android O. Test this feature with:

$ adb shell dumpsys display
$ adb shell am start <activity> --display <id>

getMetrics() is now available on any media class

Playback has been improved

WebView has now Safe Browsing and a Multi-Process option

Autofill

Put font files directly into the res/font directory. Use them with “@font/myfont” or R.font.myfont. Fonts are downloadable with a font provider in Google Play Services v11. This gives you access to over 800 Google Fonts.

Auto-sizing TextView will automatically resize the text size.

findViewById() no longer needs a cast: TextView tv = findViewById(R.id.mytextview); works now.

Adaptive icons: provide a background and a foreground; the system then adapts them.

Notification channels allow users to block some of an app’s notifications. From O on you have to use channels, otherwise your notifications will be dropped!

Cached data can be checked with getCacheQuotaBytes() and increased with allocateBytes()

Java programming language updates: java.time, java.nio.file, java.lang.invoke

EmojiCompat is available from the Support Library

Physics-based animations are now available

Alert windows now always have to be of type TYPE_APPLICATION_OVERLAY

18th

No one likes crashing or janky apps! Engineer for high performance with tools from Android & Play

App stability and bugs are the critical points in 50% of 1-star app reviews. 60% of 5-star reviews are driven by speed, design or usability.

The Android vitals dashboard is your tool to get better reviews. Pay attention to the following critical performance points:

  • Stability
    • ANR rate (app not responding = frozen with no response for 5 seconds)
    • Crash rate
  • Battery
    • Stuck wake locks (identify bad use of wake locks)
    • Excessive wakeups
  • Rendering
    • Slow rendering (60 frames per second = 1 frame every 16 ms; if rendering takes 17 ms, a frame is dropped)
    • Frozen frames (the app appears frozen when rendering takes 700 ms)
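The frame numbers above follow directly from the refresh rate; a quick sanity check in Kotlin:

```kotlin
// Frame budget at a given refresh rate, in milliseconds.
fun frameBudgetMs(fps: Int): Double = 1000.0 / fps

fun main() {
    // At 60 fps each frame has roughly 16.7 ms; a 17 ms render misses
    // the budget, so one frame is dropped.
    println("Budget at 60 fps: %.2f ms".format(frameBudgetMs(60)))
    println("17 ms render drops a frame: ${17.0 > frameBudgetMs(60)}")
}
```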

Best practices:

  • Don’t do blocking operations on the UI thread; use e.g. AsyncTasks instead
  • Finish processing quickly when you are a broadcast receiver
  • Be mindful not to introduce deadlocks into your app
  • Use a standard wake lock name for each distinct wake lock in your application, so they are debuggable in case they become stuck
  • Avoid using wake locks entirely. They were introduced in the early days of Android, and many of the use cases that once needed a wake lock no longer do. Examples:
    • Long-running download: use the Download Manager
    • Synchronizing data with an external server: use a Sync Adapter
    • Need to run a background task: use the Job Dispatcher
    • You hold a wake lock to process an intent before the device goes to sleep: use the new JobIntentService from Support Lib v26
  • If you nevertheless need a wake lock:
    • keep the logic around it very simple; any error in the logic can lead to it getting stuck
    • do as little as possible while holding the wake lock
    • use defensive error handling
  • Use the Firebase JobDispatcher instead of wakeups

Architecture Components - Introduction

Architecture Guide now available on developer.android.com

LifecycleOwner can be used by extending activities from LifecycleActivity. LifecycleActivity is just a temporary class until these components reach 1.0; then everything in the support library will implement the LifecycleOwner interface. Then use @OnLifecycleEvent(ON_START) void start() {}

LifecycleObserver

You can extend your listeners from the LiveData class

19th

Introduction to Kotlin

Instances of data classes can be copied with the generated copy() function

Semicolons are optional

The when block is like “case” in Haskell. when can return a value directly.

Higher-order functions (functions that take functions as arguments) are available

Use it like in Groovy when you have a single-parameter lambda expression

The filter function is built in

Green highlighting indicates a smart cast. The casts are done by the compiler.
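The points above can be condensed into one small, self-contained sketch (names invented for illustration):

```kotlin
// Data classes get equals(), hashCode(), toString() and copy() for free.
data class User(val name: String, val age: Int)

// `when` is an expression: it returns a value directly, like case analysis.
fun describe(x: Any): String = when (x) {
    is Int -> "int ${x + 1}"                    // smart cast: x is an Int here
    is String -> "string of length ${x.length}" // smart cast to String
    else -> "unknown"
}

fun main() {
    val alice = User("Alice", 30)
    val olderAlice = alice.copy(age = 31)   // copy with one property changed
    println(olderAlice)                     // User(name=Alice, age=31)

    // Higher-order function with a single-parameter lambda: `it` is implicit.
    val evens = listOf(1, 2, 3, 4).filter { it % 2 == 0 }
    println(evens)                          // [2, 4]

    println(describe(41))                   // int 42
}
```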

Kotlin compiles to JVM bytecode.

Kotlin allows multiplatform projects.

Coroutines are extremely cheap.

Life is great and everything will be ok, Kotlin is here

MainActivity.kt Example

class MainActivity : Activity() {
	override fun onCreate(savedInstanceState: Bundle?) {
		super.onCreate(savedInstanceState)
	}
}

The Kotlin type system models nullability

Getters and setters are implicitly available in Kotlin

Inline functions are possible; no anonymous classes needed

Operator functions have a special call syntax

Make DB operations lazy
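A small sketch of nullability, an implicit plus a custom getter, an operator function and lazy initialization (all names invented for illustration; `by lazy` stands in for a deferred DB operation):

```kotlin
// The type system models nullability: String and String? are different types.
fun shout(s: String?): String = s?.uppercase() ?: "nothing"

class Rect(val width: Int, val height: Int) {
    // Custom getter; width/height already have implicit getters.
    val area: Int
        get() = width * height

    // Operator function: enables the special call syntax r1 + r2.
    operator fun plus(other: Rect) = Rect(width + other.width, height + other.height)
}

// `by lazy` defers an expensive computation until first access.
val expensive: Int by lazy {
    println("computing…")
    42
}

fun main() {
    println(shout(null))                     // nothing
    println(shout("hi"))                     // HI
    println((Rect(2, 3) + Rect(1, 1)).area)  // 12
    println(expensive)                       // first access runs the lazy block
}
```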

Android App Manifest

My summary is up to date for API 25 and based on this documentation.

activity

android:launchMode

Set Activity with multiple instantiations:

  • standard A new intent creates a new instance. This is the default.
  • singleTop Only create a new instance if no instance of the activity exists at the top of the stack. The same behavior can be achieved if the intent contains FLAG_ACTIVITY_SINGLE_TOP

Make the activity always the root of the activity stack. The device can only hold one instance of the activity at a time:

  • singleTask Allows other activities to be part of its task
  • singleInstance No other activities can be part of its task. If it starts another activity, that activity is assigned to a different task, as if FLAG_ACTIVITY_NEW_TASK was in the intent.
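A minimal manifest sketch (activity names hypothetical):

```xml
<!-- android:launchMode selects one of the four modes described above. -->
<activity
    android:name=".DetailActivity"
    android:launchMode="singleTop" />
<activity
    android:name=".MainActivity"
    android:launchMode="singleTask" />
```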

Google Location Services on Android

Please find the corresponding udacity course here.

Table of Contents

  1. Play services
  2. Location and context
  3. Location example
  4. Activity recognition example
  5. Geofencing

Play services

Use them by adding to build.gradle:

compile 'com.google.android.gms:play-services:10.2.1'

Always use a full version number like 10.2.1

Then add to the Manifest:

<meta-data 
	android:name="com.google.gms.version"
	android:value="@integer/google_play_services_version"/>

Create the GoogleApiClient in onCreate()

Connect the GoogleApiClient to the Location Service (or other relevant API) in onStart()

Override onConnectionFailed() and onConnectionSuspended()

Write onConnected() and create a LocationRequest that queries the Location Service

Write onLocationChanged() and get the Location object, which you can then use

Location and context

Fused Location Provider

Access sensors to determine your location by analyzing your

  • GPS
  • cellular connection
  • wifi connection.

Fine and Coarse Location

Fine

Fine location uses GPS, cell and WiFi to get the most accurate position possible, costing you extra battery life.

Add <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/> to your manifest for fine location.

Coarse

Coarse location uses cell or WiFi signals. It is less accurate but costs less battery.

Add <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/> to your manifest for coarse location.

Activity Recognition

Access sensors to determine your activity. It determines if you are

  • in a vehicle
  • on a bicycle
  • on foot
  • standing still
  • tilting.

Location example

Add implements GoogleApiClient.ConnectionCallbacks, GoogleApiClient.OnConnectionFailedListener, LocationListener to your Activity.

Add the following variables:

private GoogleApiClient mGoogleApiClient; 
private LocationRequest mLocationRequest;

In onCreate() build your GoogleApiClient by

mGoogleApiClient = new GoogleApiClient.Builder(this)
                .addApi(LocationServices.API)
                .addConnectionCallbacks(this)
                .addOnConnectionFailedListener(this)
                .build();

In onStart() add

mGoogleApiClient.connect();

In onStop() add

if (mGoogleApiClient.isConnected()) {
	mGoogleApiClient.disconnect();
}

In onConnected() add

mLocationRequest = LocationRequest.create();
mLocationRequest.setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY);
mLocationRequest.setInterval(1000); // Update location every second

LocationServices.FusedLocationApi.requestLocationUpdates(mGoogleApiClient, mLocationRequest, this);

This updates the location periodically. Use the last location to get the location only once:

LocationServices.FusedLocationApi.getLastLocation(mGoogleApiClient);

Other priorities are

  • PRIORITY_BALANCED_POWER_ACCURACY Gives an accuracy of about 100 m. Nicknamed “block accuracy”
  • PRIORITY_HIGH_ACCURACY Gives the finest possible location
  • PRIORITY_LOW_POWER Gives city-level location, about 10 km
  • PRIORITY_NO_POWER Best possible accuracy at no power cost. It gets the information, for example, from other clients that have already requested a location.

The interval at which you read the sensors has a huge impact on battery life. Do you really need to update your location every second? Choose the best interval for an appropriate user experience.

Grab your location from onLocationChanged(Location location)

Activity recognition example

Work-Queue-Processor design pattern. Example: IntentService (start a service, the service handles each intent using a worker thread and stops itself when it runs out of work). Read here for more details.

  1. Create an Intent Service DetectedActivitiesIntentService extends IntentService
    1. Implement onHandleIntent()
    2. Add service to your manifest.xml
  2. Implement your MainActivity
    1. Make your activity implement ConnectionCallbacks and OnConnectionFailedListener
    2. Implement onConnected
    3. Implement onConnectionSuspended
    4. Implement onConnectionFailed
  3. Create a nested BroadcastReceiver within your MainActivity
    1. Implement onReceive
    2. Declare the receiver class in MainActivity as the member variable mBroadCastReceiver
    3. Instantiate the receiver variable
  4. Set up the GoogleApiClient
    1. Connect and disconnect the client in onStart()/onStop()
    2. Unregister the broadcast receiver in onPause()
    3. Register the broadcast receiver in onResume()

Geofencing

Draw a virtual fence around a location in the real world and generate events when devices enter or exit the fence. Here are some examples:

  • Parents: Get a notification when your child or another family member has arrived home.
  • Groups: Use geofencing for social check-in.
  • Games: Virtually hide loot.
  • Shops: Get notifications of special offers (like “Come in the next 5 minutes and you get your coffee for half price”) when the user gets close to the store. Remove the notification if the user exits the geofence.

Properties

Define a Geofence with the builder on the Geofence object and the following properties:

  • Latitude and longitude define the location
  • Radius: defines how close the user has to be
  • Expiration time: how long the geofence will be alive; you can also create permanent ones
  • ID: is unique

Create a GeofencingRequest with addGeofences holding an ArrayList of Geofences.

Monetize Your Android App with Ads

Please find the corresponding udacity course here.

Table of Contents

  1. Monetization options
  2. AdMob

Monetization options

  • paid downloads: Difficult to launch; the user really has to know the value of the app beforehand. Ex.: an app for children
  • subscription: Has long-term revenue but is also difficult to launch.
  • displaying ads: Easy to launch but can cause distraction. Still, it seems to be an appropriate way to monetize an app.
  • in-app purchases: Sell digital goods.

Paid downloads and subscriptions can be configured for your app in the Play Store.

AdMob

AdMob is a platform by Google that connects advertisers’ ads with apps, which are the ad publishers. AdMob uses complex algorithms to determine the appropriate ad for the app user.

Types of ads are

  • Banner Ads
  • Interstitial Ads
  • Native Ads
    • App Install Native Ads
    • Content Native Ads

Banner

Cover a portion of the screen.

Share the screen with app content.

Open a webpage on click.

Can either be text or an image.

Interstitial

Cover the entire screen.

Are perfect at a natural break in your app. Ex.: between levels of a game.

Show text, image or video content.

Native

Customize the look and feel of your ad.

App Install Native Ads can drive app installations by advertising the app.

Content Native Ads allow a more generic combination of text and images covering a broader scope.

Add Google Maps to your Android App

You can find the corresponding udacity course here.

Table of Contents

  1. Play services
  2. Maps
  3. Markers
  4. StreetView

Play services

To use them, add to the build.gradle:

compile 'com.google.android.gms:play-services:10.2.1'

Then add to the Manifest:

<meta-data 
	android:name="com.google.gms.version"
	android:value="@integer/google_play_services_version"/>
<uses-permission
	android:name="android.permission.ACCESS_FINE_LOCATION"/>

Create the GoogleApiClient in onCreate()

Connect the GoogleApiClient to the Location Service (or other relevant API) in onStart()

Override onConnectionFailed() and onConnectionSuspended()

Write onConnected() and create a LocationRequest that queries the Location Service

Write onLocationChanged() and get the Location object, which you can then use

Maps

Generate a SHA-1 key from the terminal by typing:

cd ~/.android

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

Go to the developers console. Create a project. Activate the Maps API. Generate an API key for an Android app and enter your SHA-1 and package name.

Maps fragment

Create an xml fragment with

android:name="com.google.android.gms.maps.MapFragment"

Set attributes on the map fragment:

xmlns:map="http://schemas.android.com/apk/res-auto"
map:cameraBearing="112.5"
map:cameraTargetLat="40.7484"
map:cameraTargetLng="-73.9857"
map:cameraTilt="65"
map:cameraZoom="17"

Google maps object

It allows you to change your map while the app is running.

Look here

Camera Position Options

  • Zoom: Determines how much of the map you see. 0 shows the whole earth; 17–21 is street level.
  • Latitude
  • Longitude
  • Target defines the center of your map by latitude and longitude
  • Bearing is the direction the camera is facing, specified in degrees clockwise from north (east is 45°)
  • Tilt: how much the camera is tilted, in degrees. 0 by default.

Use moveCamera() to instantaneously move to a target. Use animateCamera() to go to a target with a fly-over effect.

Markers

MarkerOptions myPlace = new MarkerOptions()
	.position(new LatLng(47.489805, -122.120502))
	.title("My Place");

  • position takes a LatLng
  • title takes a String
  • snippet for more information

⚠ Making an initially invisible marker visible is faster than creating a new marker => a great way to implement place filtering! (Ex.: hide or show hotels with specific cost levels)

Show or hide a marker’s info window programmatically with marker.showInfoWindow() or marker.hideInfoWindow()

Create an InfoWindowAdapter to fully customize the appearance of a marker’s info window.

Implement .setOnMarkerClickListener() to kick off other actions when clicking on the marker.

Implement onInfoWindowClick() to kick off other actions when the user clicks a marker’s info window.

Custom Markers

Add the icon property to MarkerOptions with .icon(BitmapDescriptorFactory.fromResource(R.drawable.ic_launcher));

Polylines

Creates a trace through all the LatLng points you add to the Polyline with:

PolylineOptions().geodesic(true).add(latLng);

Return to the initial point at the end to get a polygon.

Circles

In onMapReady() add:

m_map.addCircle(new CircleOptions()
     .center(new LatLng(47.489805, -122.120502))
     .radius(5000) // = 5 km
     .strokeColor(Color.GREEN)
     .fillColor(Color.argb(64,0,255,0)));

StreetView

  • setPosition
  • getLocation
  • getPanoramaCamera

By default these 3 options are enabled; manage them with the following if you like:

  • Street Names: isStreetNamesEnabled(), setStreetNamesEnabled(boolean)
  • Zoom Gestures: isZoomGesturesEnabled(), setZoomGesturesEnabled(boolean)
  • User Navigation: isUserNavigationEnabled(), setUserNavigationEnabled(boolean)

User interaction:

  • Detect Camera changes: setOnStreetViewPanoramaCameraChangeListener
  • Detect User Touches on Panorama: setOnStreetViewPanoramaClickListener
  • Detect changes to the Panorama: setOnStreetViewPanoramaChangeListener

Android TV and Google Cast development

This content is taken from the corresponding udacity course.

Table of Contents

  1. Living Room
  2. Google Cast
  3. Android TV

Living Room

The average TV viewer spends 3 h/day!

  • Google Cast: protocol that integrates multiple devices with Android, iOS, Web
  • Android TV: extend your Android app

First extend your app with Google Cast support, then extend it with the Leanback library to allow users to control your app from Android TV.

Google Cast

Sender = casts to a receiver (Android TV, Chromecast). Senders are Android, iOS, Web.

Google Cast is a connecting technology.

Use the Cast SDK to make your app a sender.

The Cast SDK creates a menu item with a cast icon. With it you can establish a connection to your Cast device.

Explore the sample apps to learn about Google Cast.

Becoming a Google Cast developer costs $5 and can be accomplished in the Google Cast SDK Developer Console. Then you can register your receiver applications and your cast devices (= the devices’ serial numbers).

Cast isn’t supported by the emulator! Use a real Android device and a real Cast device for development.

Follow Google’s visual design checklist.

Use Cast Companion Library (CCL).

Provide start and stop cast controls for users at any time. Always show playback controls when casting (e.g. mini controller or fullscreen). Show playback controls even when the device is locked, and via notifications while your app is out of focus. The user should also always be able to disconnect from the device.

If multiple users are connected, only stop the actual cast when the last user disconnects.

If the connection is lost, automatically reconnect users when they’re in your app.

If your app is killed, always try to rejoin the existing session if it’s still running.

2 ways to start casting:

  • Connect and Play
  • Play and Connect

The sender always shows the action. The receiver always shows the state. Don’t confuse your users! Think of a remote control and a TV.

Fade your receiver UI away after 5 seconds.

Receiver

Receivers are written in HTML5 and JavaScript.

The receiver gets a URL generated from the Application ID by the Chromecast. The sender sends the Application ID to the Chromecast.

Types of receiver applications:

  • Default Media Receiver
    • without Application Id
    • for Simple Media
    • no styling and customization possible
  • Styled Media Receiver
    • with Application ID
    • for simple media
    • hosted by Google and designed for streaming audio and video content
    • custom style with your own CSS
    • recommended option
  • Custom Receiver
    • Application ID, for custom media
    • advanced capabilities like DRM
    • full control over all aspects of your application’s behavior

Sender

The sender’s lifecycle:

  1. Manifest
    • minSdkVersion >= 9
    • set the correct Application Theme
    • Add ACCESS_NETWORK_STATE and ACCESS_WIFI_STATE in the production app
  2. menu.xml
    • Add the Cast button
  3. Activity
    • Initialize the Cast API in onCreate()
    • Assign the MediaRouteSelector to the MediaRouteActionProvider in onCreateOptionsMenu()
    • Add a MediaRouterCallback to the MediaRouter instance in onStart()
    • Remove the MediaRouterCallback in onStop() to conserve battery power

Example Cast API initialization:

 mMediaRouter = MediaRouter.getInstance(getApplicationContext());
 mMediaRouteSelector = new MediaRouteSelector.Builder()
 	.addControlCategory(CastMediaControlIntent.categoryForCast("794B7BBF"))
 	.build();
 mMediaRouterCallback = new MyMediaRouterCallback();

Code for Cast button menu item:

 <item
 	android:id="@+id/media_route_menu_item"
 	android:title="Play on..."
 	app:actionProviderClass="android.support.v7.app.MediaRouteActionProvider"
 	app:showAsAction="always"/>

Again use CCL (Cast Companion Library)!

Try out a cast codelab! :)

Android TV

Use the Leanback library; it does most of the work for you.

You can use the TV emulator for development. :)

In your manifest:

  • Add LEANBACK_LAUNCHER intent-filter to your Manifest within your TvActivity
  • Use Theme.Leanback for your TvActivity

Use RecyclerView because of large lists and limited memory on Android TV.

TensorFlow Dev Summit 2017

Please find the livestream from the TensorFlow Dev Summit 2017 here.

The TensorFlow Dev Summit 2017 took place at 9:30am Pacific Time on February 15th, 2017.

Feel free to visit the project’s webpage.

Table of Contents

  1. Introduction
  2. XLA
  3. TensorBoard
  4. API
  5. Uses
  6. Distributed TensorFlow
  7. Wide & Deep Learning
  8. Project Magenta

Introduction (Jeff Dean, Rajat Monga, Megan Kacholia)

TensorFlow is an open-source machine learning platform for everyone and many platforms. It has been open source since November 2015.

At the root of TensorFlow was DistBelief.

TensorFlow works very well in Google Cloud (DIY, Managed).

TensorBoad is a TensorFlow tool that visualizes data to better analyze it.

TensorFlow is at v1.0 and backwards compatibility is guaranteed.

TensorFlow can be used with all kinds of systems, including Android.

XLA (accelerated linear algebra) is an experimental TensorFlow compiler. It can be used for just-in-time or ahead-of-time compilation.

Example usage of TensorFlow in a mobile app: translating the text on a street sign.

TensorFlow is also used for app recommendations in the Play Store and as a spam filter. But there are many more use cases.

You can use TensorFlow to sort materials like cucumbers.

TensorFlow can be used for diagnosis on captures from medical imaging. Examples: retinal images for diabetic retinopathy, recognizing skin melanoma, diagnosing psychiatric brain disorders on MRIs, …

TensorFlow can also paint or make music.

Most TensorFlow projects are written in Python, but many languages like Java, Go, C, … are supported.

XLA (Todd Wang)

XLA is the TensorFlow compiler.

Build your TensorFlow project with XLA just-in-time (JIT) compiler (experimental)!

Not all TensorFlow ops compile. It’s still early days!

Fusions allow less memory operations while operating on data.

XLA program is a set of static, decomposed TF operations.

Use JIT compilation when prototyping. Use compilation caching as you scale. Use AoT (ahead-of-time) compilation for mobile/embedded & latency minimization.

Read the XLA documentation here.

Define your feeds and fetches in the config file in proto.

Compile your graph using the tf_library bazel build macro.

TensorBoard (Dandelion Mané)

Find the example here

Use FileWriter, a Python class that writes data for TensorBoard.

First clean up the graph by giving node names and name scopes. Identical structures in the graph then get the same color. You can also color by device if you use CPU and GPU.

A summary is a specialized TensorFlow op that takes data and outputs a protocol buffer containing “summarized” data. Then pass a summary to FileWriter to get it to disk.

The Embedding Visualizer takes your data and renders it as a 3D diagram. t-SNE shows how the identified data items are grouped, which can be very useful.

Future: Debugger, plugins, shareable TensorBoard for an entire organisation.

API (Martin Wicke)

The Estimator is a model that has a training operation, an evaluation operation and predictions. The model gets inputs and labels to work with.
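The train/evaluate/predict contract can be sketched in plain Python. This is an illustrative stand-in (the class name and the toy majority-label model are made up), not the real tf.estimator API:

```python
# Sketch of the Estimator contract described above: a model object with a
# training operation, an evaluation operation and predictions.
from collections import Counter

class MajorityEstimator:
    """Toy model that always predicts the most common training label."""

    def train(self, inputs, labels):
        # training operation: "fit" by remembering the majority label
        self.majority = Counter(labels).most_common(1)[0][0]
        return self

    def predict(self, inputs):
        # prediction operation
        return [self.majority for _ in inputs]

    def evaluate(self, inputs, labels):
        # evaluation operation: accuracy against held-out labels
        hits = sum(p == l for p, l in zip(self.predict(inputs), labels))
        return hits / len(labels)

est = MajorityEstimator().train([1, 2, 3, 4], ["a", "a", "a", "b"])
print(est.evaluate([5, 6], ["a", "b"]))  # prints 0.5
```

The real Estimator additionally takes care of graphs, sessions and checkpointing, but the interface shape is the same.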

You can also export your model to a SavedModel, a data format with which you can use your model directly in TensorFlow Serving.

Estimators will be released in v1.1 in core; they are already usable from contrib. Releases come every 6 to 8 weeks.

Keras API (François Chollet)

Keras is an API spec for building deep learning models across many platforms.

Access it by tf.keras.

Keras will be in core in v1.2. In v1.1 you can test it from contrib with tf.contrib.keras

Uses

Google DeepMind (Daniel Visentin)

TF is used for optimizing the cooling infrastructure at Google.

TF is used in Gorila (general reinforcement learning architecture).

AlphaGo uses TF.

WaveNet uses TF. WaveNet generates natural-sounding human speech. It can also generate music.

Learning to Learn uses TF.

Skin Cancer (Brett Kuprel)

Dataset was 129k images with 2k melanoma.

Research from Stanford. On Nature Cover 02/2017.

Mobile and Embedded Devices (Pete Warden)

Offer unique UX.

Real-time translation, predicting letters on the keyboard, photo scan, Snapchat features, …

Support Android, iOS and Raspberry Pi.

Android

TF can be used from Android Studio directly:

Grab a TF Android example here.

To go deeper and do more complex builds:

  1. Install TF from here
  2. Install Bazel from here
  3. Download Android SDKs 23+ and NDK 12b+
  4. Set up WORKSPACE file
  5. Run Bazel

TF core is written in C++, Android Apps are written in Java. Use the Android Inference Library.

iOS

  1. Have Xcode 8, automake and libtool
  2. One-shot build
  3. Iterative builds
  4. Use the -Os flag for optimization and use Apple’s accelerate framework.
  5. Link your own apps and compile with -force_load flag

Raspberry Pi

  1. Install with sudo apt-get install -y autoconf automake libtool gcc-4.8 g++-4.8; always build directly on the Raspberry Pi, even though it’s slow
  2. Build with make -f
  3. Use NEON acceleration for optimization

TensorFlow increases the APK size by 12 MB, but this can be reduced by including only the ops that you are actually using, resulting in e.g. under 2 MB.

Distributed TensorFlow (Derek Murray)

Allows using multiple machines for one big model.

Distributed is a TF mode.

You can create Sessions and Servers.

TF Ecosystem (Jonathan Hseu)

Integrate TF with your infrastructure.

Run a cluster manager and a distributed storage.

Only Python can be used for the training library.

TF Serving (Noah Fiedel)

Share your models in production.

Serving is how you apply a ML model after you’ve trained it.

The most common way is an RPC server. This way the model can always be online and kept updated.

ML Toolkit (Ashish Agarwal)

The ML Toolkit provides algorithms that work out of the box. Ex.: KMeans, GMMs, …

Sequence Models

Example is Google Translate.

Sequence-to-sequence models are 2 neural networks: one on the left as encoder, one on the right as decoder, connected by vectors.

An RNN is a unit of computation that you repeat over and over.

There are a bunch of libraries for RNNCells. NASCell is the newest.

Wide & Deep Learning (Heng-Tze Cheng)

Example: app recommendation on Google Play

A wide model is like a table. A deep model is more like points in a Cartesian coordinate system.

Idea: Combine the power of wide and deep learning

Configure the connection and it is made. For tutorial just google “Wide & Deep Learning”.

Project Magenta (Douglas Eck)

Magenta is a project generating music and art using deep learning.

Find the project in the github repository under tensorflow/magenta.

TensorFlow in Medicine (Lily Peng)

Example: there are not enough ophthalmologists in India.

Solution: Let’s train a model for retinography classification.

After a positive validation study, the bottleneck is for now the hardware.

Android Wear development

You can find the corresponding udacity course here.

Table of Contents

  1. Introduction
  2. Wear Notifications
  3. Wear Apps
  4. Watch Faces

Introduction

Ubiquitous computing principles:

  • Create 1 service with multiple views on wear, cars, tv’s etc.
  • Reduce distraction by adapting technology to humans
  • Less is more. Put the user before technology.

The wearable interface has 2 components:

  • Talk to wearable
  • Wearable talks to the user

Wear inputs are:

  • Voice
  • Touch swipes

Phone wakes up the watch. Let the watch sleep as much as possible!

  • GridViewPager: swipe left/right, up/down
  • DotsPageIndicator: swipe left/right and see which page you are on by a row of dots

Wear Notifications

Easiest way: app notifications go directly to the wearable.

Adding an extra action button to a notification adds the action to the wearable notification, but activating it runs the action on the phone.

To run the action on the wearable, use WearableExtender.

Set Notification backgrounds with WearableExtender.setBackground(mBitmap)

Use 400x400 for a static background and 640x400 for a parallax background.

Store backgrounds in the drawable-nodpi directory.

Creating the notification on the wearable is the only way to have the notification only on the wearable and not on the phone.

Example: reply action with voice

Use the class RemoteInput with the String key “extra_voice_reply”.

Then add the Reply-Action to a WearableExtender.

Pages

Extend your notifications with .addPage(secondPageNotification) for multiple page notifications.

Create notification stacks with groups.

See more on the Android developers page.

setLocalOnly(true) makes sure a notification only appears on the phone (ex. upload progress).

Go to Android Studio > File > Import Sample for multiple notification samples and find out what’s possible.

Wear Apps

The Wear APK is embedded in the phone APK. A wearable app only exists with a phone APK.

Wear App set up rules:

  • Use same package name and version number for phone APK and wearable APK.
  • Request same permissions in both wearable and phone APK.
  • Sign wearable and phone APK always with the same developer id.

Use WatchViewStub for layouts. Use BoxInsetLayout for large amounts of text or data.

Communication between Phone App and Wear App via DataItems or Messages.

DataItems

Create a service that extends from WearableListenerService.

Add it to the manifest with intent filter.

Messages

Messages are simpler. Override the corresponding callback methods.

Messages are fast, but can be lost during transmission.

Watch Faces

  • Round & Square
  • Interactive & Ambient1
  • Low Bit & OLED

Start with a sample (Android Studio > File > Import Sample) or template. Use CanvasWatchFaceService. Engine with methods onCreate, onSurfaceChanged, onDraw. Set System UI Elements.

Design Interactive Mode & Ambient Mode separately.

Most of the time will be in Ambient Mode: Design Ambient Mode thoughtfully.

Use Low-Bit Mode in Ambient Mode: Only Black, White, Yellow, Blue, Red, Magenta, Cyan, Green

Only 5% pixels should be illuminated in ambient mode. Keep 95% of pixels truly black.

Avoid Solid Regions! No permanently on pixel. Danger to burn it.

System UI elements: - Cards: Peek cards - Indicators

Context + Data = Design


  1. Ambient mode has a 95% black screen.

Android Performance

You can find the corresponding udacity course here.

Table of Contents

  1. Rendering performance
  2. Compute performance
  3. Memory performance
  4. Battery performance

Why performance? Bad performance is the most common cause of bad reviews. Perf matters for usability and user experience.

Rendering performance

Rendering is the most common performance issue.

Jank is a dropped frame, which happens when the calculations to draw a frame take too long (more than 16 ms).
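The 16 ms budget follows directly from the 60 FPS target:

```python
# At 60 frames per second each frame must be produced within the frame budget;
# taking longer drops the frame, which the user perceives as jank.
FPS_TARGET = 60
frame_budget_ms = 1000 / FPS_TARGET  # milliseconds available per frame
print(round(frame_budget_ms, 1))  # prints 16.7
```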

The rendering pipeline goes from the CPU to the GPU and then to the screen.

Rule of thumb: Get as much data onto the GPU as fast as possible and leave it there without modifying it for as long as possible.

Overdraw Problem

Diagnostics on a mobile device:

  1. Go to Settings > Developer Options
  2. Turn on Debug GPU Overdraw.
  3. In the popup, choose Show overdraw areas.

Now you can see how often you redraw a pixel (the colors of Android’s overdraw debug legend):

  • dark red: 4x Overdraw
  • light red: 3x Overdraw
  • green: 2x Overdraw
  • blue: 1x Overdraw

The 2 ways to remove overdraw:

  • Eliminate unneeded backgrounds and drawables from views that won’t contribute to the final rendered image
  • Define areas of your screen that you know will hide portions of your view

Clipping

Compute performance

Memory performance

Battery performance

Written with ♥ in Kiel.

Gradle for Android and Java

You can find the corresponding udacity course here.

Table of Contents

  1. Gradle Fundamentals
  2. Gradle for Java
  3. Gradle for Android
  4. Special Topics

Gradle Fundamentals

Why Gradle? It allows different app flavors like paid or free, debug and release.

Groovy

Groovy is a scripting language for Java developers.

Groovy functions always return the last expression in the function block.

Closure

Closures are like code blocks and are surrounded by { and }. Example:

def myClosure = {
	...
	...
}

Call it by myClosure().

Make a one line closure def myClosure = { ... }

Higher order = functions as arguments are possible. Example:

def applyTwice(func, arg) {
	func(func(arg))
}

If a closure takes only one argument, that argument is by default called it. Inside a string access it with $it. Example:

def myList = ["Gradle", "Groovy", "Android"]
def printItem = {item -> println "list elem: $item"}
myList.each(printItem)
myList.each{println "list item: $it"}

Classes

class GroovyGreeter {
	...
	...
}

Setters and getters are created automatically; access them with dot notation.

For more Groovy knowledge see Learn X in Y: Groovy and the official Groovy user guide.

Tasks

A task does an action.

Run tasks with the Gradle wrapper

  • ./gradlew tasks Show all tasks runnable from a root project
  • ./gradlew hello Run the “hello” task

A daemon is a process that hangs around in the background of your operating system. Always use a daemon if possible.

  • gradle taskname Run the task “taskname”
  • gradle -q Run in quiet mode to show only the output
  • gradle -b solution.gradle tasks See all tasks of solution.gradle

Typed tasks

Example:

task copyFiles(type: Copy)

Common types are:

  • type: Copy copy files, unpack archives
  • type: Zip create archives
  • type: Delete delete files and folders

See all task types in the Gradle DSL Reference. The Gradle DSL Reference is your best friend: learn it, love it!

incremental builds = doing the minimum amount necessary

Create custom types for tasks

class HelloTask extends DefaultTask {
	String firstName

	@TaskAction
	void doAction() {
		... // use e.g. firstName
	}
}

task hello(type: HelloTask) { firstName = 'Gunda' }

Relationships between tasks

  • dependsOn A depends on B if task A can’t do its work without task B
  • finalizedBy A is finalized by B if every time task A runs, task B should be run afterwards
  • mustRunAfter B must run after A, whenever both task A and task B will be run
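The three relationships can be sketched in a build script like this (the task names are made up for illustration):

```groovy
// Hypothetical tasks illustrating dependsOn, finalizedBy and mustRunAfter.
task compileDocs {
	doLast { println 'compiling docs' }
}

task bundleDocs(dependsOn: compileDocs) {   // bundleDocs can't work without compileDocs
	doLast { println 'bundling docs' }
}

task cleanupTemp {
	doLast { println 'cleaning temp files' }
}
bundleDocs.finalizedBy cleanupTemp          // cleanupTemp runs after every bundleDocs run
cleanupTemp.mustRunAfter compileDocs        // pure ordering, no dependency
```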

Build scripts

Gradle build scripts are written in a domain-specific language (DSL) that sits on top of Groovy.

The Gradle platform is written in Java.

Plugins can be written in any JVM language like Java, Groovy or Scala.

Keep build scripts declarative. Low-level logic belongs in Gradle plugins!

A build script has a delegate object: the entire build script delegates to a project object.

Declare tasks with the method task(taskName) on the project object. gradle tasks then shows them under “Other tasks”.

These are all identical:

project.task("myTask")
task("myTask")
task "myTask"
task myTask

Functions on tasks

  • description Give a description for the task
  • group Name the task group
  • doLast Append an action to the end of the task’s action list
  • doFirst Prepend an action to the front of the task’s action list

Task configuration closure example:

task myTask {
	description "The description of my Task" //this is a function
	group "The group of my Task" //call omitting the parentheses
	doLast {
		...
	}
}

Log levels

  • -d Debug
  • -i Info
  • default Warning & Lifecycle
  • -q Error & Quiet

println statements are in the quiet logging level.

  • gradle -s Put out the stacktrace
  • gradle -S Put out the full stacktrace

Build Lifecycle

Has 3 steps:

  1. Initialization Set up multi-project builds
  2. Configuration Execute the build script and configure all the project’s tasks
  3. Execution Execute the project’s tasks

The best debugger is clear thought and print statements.

Gradle for Java

Use the Gradle plugins that are already available instead of reinventing the wheel.

Add the java plugin with apply plugin: "java". Find a quickstart guide here.

Create a JAR with the command gradle assemble and execute it with gradle execute.

Add repositories to your project with repositories {...}. Define dependencies on artifacts contained in your repositories. Ex.: dependencies { compile ... }.
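Put together, a minimal Java build script might look like this (the guava and junit coordinates are just example artifacts):

```groovy
apply plugin: 'java'

// Where to look for artifacts
repositories {
	mavenCentral()
}

// What the project depends on
dependencies {
	compile 'com.google.guava:guava:18.0'
	testCompile 'junit:junit:4.12'
}
```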

Generate a dependency report with gradle dependencies.

Identify version conflicts with gradle dependencyInsight --dependency dependency-name, which generates the dependency insight report.

Create fancy file collections with custom configurations:

configurations {
	custom
}

dependencies {
	custom 'com.google.guava:guava:18.0'
}

The scheme for archive names is basename-appendix-version-classifier-extension.
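Spelled out with made-up values, the scheme assembles like this:

```python
# Assemble an archive name from the scheme basename-appendix-version-classifier-extension
# (all the values here are hypothetical examples).
basename, appendix, version, classifier, extension = "myapp", "sources", "1.0", "release", "jar"
name = "-".join([basename, appendix, version, classifier]) + "." + extension
print(name)  # prints myapp-sources-1.0-release.jar
```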

High quality software depends on rigorous testing. That’s why tests need to be automated. There exist two sorts of tests:

  • Unit tests: Test individual classes or methods in isolation.
  • Integration tests: Test your code in conjunction with some other systems, libraries or environments.

Add a test dependency with testCompile 'junit:junit:4.12'.

Execute tests with gradle test and get detailed test reports from the build/reports directory.

Find Gradle Plugins on the Gradle Plugin Portal or look at the Standard Gradle plugins.

Set the Gradle version to use in the gradle-wrapper.properties file and always version control it.

Generate the wrapper with gradle wrapper.

Gradle for Android

The entire Android build process is done by Gradle.

Building an APK means executing a Gradle script with the Android Gradle plugin provided by Google.

A “failed to sync” message usually means an error in one of your build scripts.

Folders and files generated by Android Studio in a Gradle project:

Folders:

  • .gradle: Information for incremental build support, like task inputs and outputs
  • .idea: Android Studio’s model of your project
  • build: Outputs generated by your build
  • gradle: Wrapper JAR and wrapper properties

Files:

  • build.gradle: Your Gradle script
  • .iml: Part of Android Studio’s project model
  • gradlew / gradlew.bat: The wrapper scripts
  • local.properties: The location of the Android SDK on your machine

Use build types (in app/build.gradle) to build different versions of an app.
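A sketch of two build types in app/build.gradle (the suffix value is made up):

```groovy
android {
	buildTypes {
		debug {
			applicationIdSuffix ".debug"   // lets debug and release install side by side
		}
		release {
			minifyEnabled true
		}
	}
}
```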

Find the DAC (developer.android.com) documentation on the Android Gradle plugin here.

Create app flavors by adding the following underneath the buildTypes block:

productFlavors {
	free {
		applicationId "com.example.udacity.flavors.free"
	}
	paid {
		applicationId "com.example.udacity.flavors.paid"
	}
}

Create flavor-specific content by right-clicking app > New > Android resource file, entering a file name like “strings” and choosing a source set like “paid” or “free”.

Advanced Android Builds

Application Libraries for Multiproject Builds

Two sorts of libraries:

  • .jar : Java Library file, can be used on non-android projects
  • .aar : Android Library file, can include manifest, fragments, layouts etc.

Create an Android library by right-clicking the project > New > Module and selecting “Android Library”. Add your in-project library with compile project(":mylibrary") in the dependencies block of your build.gradle. This way you can create Activities that are easy to reuse between applications.

Application Signing

Three things needed:

  1. Create a key store and a key
  2. Create a signing config
  3. Assign the signing config to a build type

Create a key store and a key

One-time signing: Navigate to Build > Generate Signed APK and go through the wizard.

Automatic signing: Right-click app > Open Module Settings, switch to the Signing tab and create a new signing configuration. Then go to the Build Types tab, select a build type and assign the signing config.

Multidex Support

Turning Java byte code into Dalvik byte code is called dexing. This creates one table limited to 65,536 method references.

If you have more than 65,536 methods in a project, set multiDexEnabled true in the defaultConfig of your build.gradle file. This will break up the dex table into multiple tables.
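The corresponding build.gradle fragment:

```groovy
android {
	defaultConfig {
		multiDexEnabled true   // split the dex table once past the method limit
	}
}
```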

Proguard

Reduce the size of your app by stripping out unused code and resources. Add minifyEnabled true and shrinkResources true to your buildTypes in your build.gradle file.
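A sketch of such a release build type (the rules file name follows the Android Studio default):

```groovy
android {
	buildTypes {
		release {
			minifyEnabled true       // strip unused code
			shrinkResources true     // strip unused resources
			proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
		}
	}
}
```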

You can also obfuscate your code with proguard.

Android Testing

  • Unit tests: Run on a regular Java VM on your computer. Use it to test generic, non-Android related classes. To test code that calls the Android API use a mocking framework like Mockito or connected tests.
  • Connected tests: Run on an Android device or emulator.
Test locations:

  • general: unit tests in /src/test, connected tests in /src/androidTest
  • flavor “free”: unit tests in /src/testFree, connected tests in /src/androidTestFree

Special Topics

Add the Gradle Versions Plugin to your project to keep your non-Google plugin versions up to date.

More videos on Gradle can be found on Gradle’s youtube channel.

Written with ♥ for Gradle in Kiel.

How to use Git and GitHub

You can find the corresponding udacity course here.

Table of Contents

  1. Command-line basics
  2. Git
  3. GitHub

Command-line basics

Command What does it? Did you know?
mkdir name Create new folder “name”
subl filename.txt Create new file, opened with sublime
pwd Shows what directory you are in pwd = print working directory
ls List the files in this directory

Git

There exist 3 locations for a file:

  • The working directory is the folder where you work. If you create a file, it lives there.
  • The staging area is the area between your directory and the repository. If you want to bring a file from your working directory to the repository it has to pass through the staging area. The staging area holds what will go to the repository.
  • The repository is a folder that contains a documented state of every file in it.

Octopus = strategy Git uses to combine many different versions together.

How often to commit?

  • keep commits small, but not too small
  • one commit per logical change:
    • 1 commit for fixed a typo
    • 1 commit for fixed a bug

Basic commands

Command What does it?
git log Show the log of the current branch
git log --oneline Show short log for the current branch
git log --stat Show the log with changed files log
git log -p 4a60beb Start to show changes at specific log
git show 4a60beb Show only one commit
git diff file1 file2 Compare two versions of a file line by line

All file content in 1 line will always diff ⇒ keep lines short! (max. length of 80 - 120 characters)

Command What does it?
git checkout versionId Go to a specific commit version
git status Show all files changed since the last commit

Run git status frequently!

Command What does it?
git init Make a repository out of a folder
git add Add files to staging area
git commit -m “Message” Bring files from staging area to the repository
git diff --staged Show diff between staging area and repository
git diff Show delta between working directory and staging area
git reset --hard Delete all changes in working directory and staging area. Use carefully
git tag -a v1.0 a87984 Create annotated tag at commit a87984. Always use annotated tags!
git tag v1.0 Create a lightweight tag, which has no extra information.
git log --decorate Show log with tags

Commit message structure

type: Subject

body

footer

The type can be one of these

Type Description
feat: a new feature
fix: a bug fix
docs: changes to documentation
style: formatting, missing semicolons, etc.; no code change
refactor: refactoring production code
test: adding tests, refactoring tests; no production code change
chore: updating build tasks, package manager configs, etc.; no production code change

Subject < 50 characters

It has to start with a capital letter and have no period at the end. Use the imperative tone: “Change”, not “changed” or “changes”.

Body (optional)

Explain the what and why of a commit. Don’t explain the how! The length of each line should be < 72 characters.

Reference issue tracker IDs here. Ex.:

Resolves: #123
See also: #456, #789
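A complete message following this structure might look like this (the issue number is made up):

```text
feat: Add reply action to wear notification

Let users answer directly from the watch so they
don't have to pull out the phone for short replies.

Resolves: #123
```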

Branches

Command What does it?
git branch Show all branches
git branch name-of-branch Create branch
git log --graph --oneline branchname1 branchname2 Generate diagram = Show log of 2 branches graphically
git log --oneline --decorate --graph --all Show all branches and commits in the repository

Always use - instead of spaces in branch names.

Every commit (o) only knows its precursor (←o←o). This means that if one commit is lost, you lose all commits that came before it.

Command What does it?
git checkout -b new-branch-name Corresponds to git branch new-branch-name and git checkout new-branch-name
git merge master coins Merge coins branch into master branch

Always checkout one of the two branches you are planning on merging before doing the merge.

Command What does it?
git merge --abort Restore file to state before start of merge
git merge coins Merge branch coins into active branch

git merge master coins and git merge coins are equal if you are on master branch.

Command What does it?
git branch -d coins Delete branch “coins”
git log -n 1 Limit the log to a fixed number of commits. Here: 1 commit.

GitHub

Command What does it?
git fetch Update your local copy of the remote branches (doesn’t touch your working directory)
git push Push your local commits to the remote repository
git pull git fetch and then git merge
git remote add origin link@website Create remote
git remote -v Show remotes with url for fetch and push

Forking is a fair and better way to start your own work based on an existing repository. It is a clone from GitHub to GitHub. Fork the repository, then clone your fork and start working.

Command What does it?
git clone link@website Clone a repository

A pull request is a GitHub-specific request to pull a branch into the master branch. If it is your repository and you receive a pull request, you can use the merge pull request button to complete it.

Written with ♥ in Kiel.

Advanced Android App Development

Developing Android Apps

You can find the corresponding course at udacity.

Table of Contents

  1. Hard facts
  2. Basics
  3. Intent Framework
  4. Data Persistence in Android
  5. Settings or Preferences
  6. Activity Lifecycle
  7. Data Storage

Hard facts

Key Mobile challenges:

  • low processor power
  • limited RAM
  • intermittent, low bandwidth, high latency data connections
  • impact on battery life

Android 1.0 launched in 2008.

Android OS structure from top to bottom:

  1. Application Layer
  2. Application Framework
  3. C/C++ Libs, Android Runtime
  4. Linux Kernel

App generation/deployment process:


Use responsive design for Android apps, because your app will run on many different devices with different screen sizes. Provide at least small phone, large phone, medium tablet and large tablet designs.

Have a mobile-first policy: the mobile experience is the first consideration when building products. Most internet users come from mobile. Even children use mobiles, not desktops, to access the internet.

A good app should work like a good butler: giving you what you want before you even have to ask for it! No Refresh or Save button. (Using a Refresh button for debugging is allowed.)

The master-detail flow is one of the most used Android App patterns.

Apps are made of 4 types of components: Activities, Services, Broadcast Receivers and Content Providers.

The UI thread needs 60 frames per second (FPS) ⇒ less than 17 ms computation time per frame

Basics

Android Studio Hints

Press Shift twice to open Search Everywhere.

Layouts

Layouts all extend from ViewGroup. All layouts are LayoutManagers (?).

  • LinearLayout: Use for stacking vertically or horizontally
  • RelativeLayout: Powerful layout with tons of possibilities. Layout elements relative to one another.
  • FrameLayout: Use if only 1 child view
  • GridLayout
  • ScrollView: ScrollView can have only one child
  • ConstraintLayout

Logging

The existing log levels are:

  • error
  • warn
  • info
  • debug
  • verbose

ListView

  1. Create visible items + 1 invisible above and underneath
  2. Create new items just in time and hold all created items (visible + invisible) in memory

⚠ The more items, the more memory!

RecyclerView

  1. Create visible items + 1 above and underneath
  2. When a new list item comes into view: update the data of a recycled item from the recycle bin instead of destroying and recreating views

++ Less memory overhead, less view management, smoother scrolling

RecyclerView LayoutManagers

  • Linear: Scroll items either vertically or horizontally. Vertically is the default
  • Grid: Items are laid out in a grid and can scroll either vertically or horizontally.
  • StaggeredGrid: Commonly used for views with content of varying dimensions.
  • Custom: Extend from LayoutManager and create your own.

Adapter

The adapter knows how many list items are in the data set and how to build them. The ListView asks for the size of the data set and then asks for the items to build.

Intent Framework

Intents are like envelopes:

  • explicit: has an exact address on it, with data inside
  • implicit: has an action on it, with data inside

Explicit Intent

new Intent(context, DetailActivity.class)

Often used in startActivity(intent)

Implicit Intent

new Intent(Intent.ACTION_VIEW);

Possible actions to perform are VIEW, PICK, DIAL, MUSIC, CAMERA, …

Share Intent

Is the most used implicit intent.

Use a ShareActionProvider to make it work.

Share Intents will be addressed to anyone who can perform action SEND.

Intent shareIntent = new Intent(Intent.ACTION_SEND);
shareIntent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_WHEN_TASK_RESET);
shareIntent.setType("text/plain");
shareIntent.putExtra(Intent.EXTRA_TEXT, string);

ShareActionProvider mShareActionProvider = (ShareActionProvider) MenuItemCompat.getActionProvider(menuItem);

if (mShareActionProvider != null) {
    mShareActionProvider.setShareIntent(shareIntent);
} else {
    Log.e(TAG, "ShareActionProvider is null");
}

FLAG_ACTIVITY_CLEAR_WHEN_TASK_RESET prevents the activity we share to from staying on our back stack: when we press back, we come back to our app, not to the app we shared to.

Broadcast Intents

Broadcast a message to many apps.

Use the sendBroadcast() method to implement it.

Broadcast Receiver

Broadcast Receiver best use in:

  • Monitoring changes to internet connectivity
  • Charging status (ACTION_POWER_CONNECTED) (Ex. send data when connected to..)

Implementation:

public class MyReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context c, Intent i) {
        //Handle receive
    }
}

Ways to register your BroadcastReceiver:

  1. in the Manifest: triggered when the app is running and when it is terminated
  2. in code/dynamically in an Activity: triggered only while the app is running

Manifest registration

<receiver
    android:name=".MyReceiver">
    <intent-filter>
    ...
    </intent-filter>
</receiver>

Example: GCM with SyncAdapter

Dynamic registration
IntentFilter intentFilter = new IntentFilter("com.myapp.NEW_LIFEFORM");
registerReceiver(myReceiver, intentFilter);

Example: headphones plugged/unplugged while listening to music

Intent Filters

Define an <intent-filter> for every Activity that should be launchable from an implicit intent (in your AndroidManifest.xml).

<intent-filter>
    <action android:name="android.intent.action.VIEW"/>
    <data android:scheme="geo"/>
</intent-filter>

Data Persistence in Android

Data Persistence is the act of saving some data to the phone.

Bundle

Temporary storage. Only use it while the user is actively using your app, i.e. while the app is open. Example: onSaveInstanceState

The data is saved as key-value pairs, where the values can also be complex objects.

SharedPreferences

Saves key-primitiveValue pairs to a file on the Android filesystem. The data is persisted unless you uninstall the app or clear the app's data.

SQLite Database

Organize more complicated text/numeric/boolean data.

Internal/External Storage

Save multimedia or large data files to the internal or external Storage. Internal Storage is on the phone. External storage can be an SD-Card or similar.

Server

Servers are for data that multiple phones will access. The data can be persisted also when deleting the app or using a different phone. An example for a cloud service is Firebase.

Settings or Preferences

Settings can always be added later. Fewer settings at the beginning are better.

Generally the flow goes like this:

  1. User edits and updates a preference.
  2. PreferenceChangeListener triggered for that preference.
  3. The new value is saved to the SharedPreference file.
  4. onSharedPreferenceChanged listeners are triggered.

Adding a setting later is easier for the user than removing one. Removing a setting risks angering the subset of users who relied on the feature that was taken away.

Should it be a setting?

PreferenceFragment

Remark: PreferenceActivity is deprecated since Honeycomb in favor of the more flexible fragment version!

  1. Add dependency compile 'com.android.support:preference-v7:25.1.0'
  2. Create a SettingsFragment and extend it from PreferenceFragmentCompat.
  3. Create a new resource directory called xml and create preference resource files in it like for example pref_main.xml
  4. Add addPreferencesFromResource(R.xml.pref_main); to the onCreatePreferences() method
  5. Add to your AppTheme <item name="preferenceTheme">@style/PreferenceThemeOverlay</item>

PreferenceScreen

Is the root of a Preference hierarchy. It is the container that holds a couple of Preferences like CheckBoxPreference, ListPreference, etc. or even PreferenceScreens.

Common Preferences

  • CheckBoxPreference
  • ListPreference
  • EditTextPreference

SharedPreferences

SharedPreferences describes the file where the preferences are stored. Store private primitive data as key-value pairs with SharedPreferences.

Read your SharedPreferences file with PreferenceManager.getDefaultSharedPreferences(this)

SharedPreferences.Editor

Write your SharedPreferences with the Editor:

SharedPreferences.Editor editor = sharedPreferences.edit();
editor.putBoolean(KEY, value);
editor.apply();

OnSharedPreferenceChangeListener

  1. Let the concerned Activity implement SharedPreferences.OnSharedPreferenceChangeListener
  2. Implement onSharedPreferenceChanged
  3. Register the listener in onCreate with sharedPreferences.registerOnSharedPreferenceChangeListener(this);
  4. Unregister the listener in onDestroy with PreferenceManager.getDefaultSharedPreferences(this).unregisterOnSharedPreferenceChangeListener(this);
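The four steps above can be sketched in one Activity; the reload logic is left as a comment since it depends on your app:

```java
public class MainActivity extends AppCompatActivity
        implements SharedPreferences.OnSharedPreferenceChangeListener {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Step 3: register so we hear about every preference change
        PreferenceManager.getDefaultSharedPreferences(this)
                .registerOnSharedPreferenceChangeListener(this);
    }

    @Override
    public void onSharedPreferenceChanged(SharedPreferences prefs, String key) {
        // Step 2: react to the changed preference, e.g. reload the UI
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // Step 4: unregister to avoid leaking the Activity
        PreferenceManager.getDefaultSharedPreferences(this)
                .unregisterOnSharedPreferenceChangeListener(this);
    }
}
```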

PreferenceChangeListener

Is triggered before a value is saved to the SharedPreferences file. Can prevent an invalid update to a preference.

  1. Implement the Preference.OnPreferenceChangeListener in your activity. implements Preference.OnPreferenceChangeListener
  2. Attach the listener to the preference to listen from in onCreatePreferences with Preference preference = findPreference(getString(R.string.pref_name_key)); preference.setOnPreferenceChangeListener(this);
  3. Implement onPreferenceChange()
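
A minimal validation sketch for step 3, written as if it lived in the SettingsFragment; the size range and the idea of a numeric EditTextPreference are hypothetical examples:

```java
// Called before the new value is written; returning false rejects the update.
@Override
public boolean onPreferenceChange(Preference preference, Object newValue) {
    try {
        float size = Float.parseFloat((String) newValue);
        return size > 0 && size < 3; // accept only values in a sane range
    } catch (NumberFormatException e) {
        Toast.makeText(getContext(),
                "Please enter a number", Toast.LENGTH_SHORT).show();
        return false; // invalid input is never saved to SharedPreferences
    }
}
```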

Activity Lifecycle
  • Active Lifetime: Active ⇄ Paused
  • Visible Lifetime: Active → Paused → Stopped → Restarted → Visible → Active
  • Screen rotation leads to: onPause → onStop → onDestroy → onCreate → onStart → onResume

⚠ Save your app state in a Bundle instance within onSaveInstanceState(). Restore your app state in onCreate() if the Bundle is not null. This way, actions like rotating the device won't affect the user experience.
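
A minimal save/restore sketch; the mScore field, the "score_key" key and R.layout.activity_main are hypothetical examples:

```java
private String mScore; // example state worth preserving across rotation

@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putString("score_key", mScore); // save before the Activity dies
}

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (savedInstanceState != null) {
        // restore only when we are re-created, not on a fresh start
        mScore = savedInstanceState.getString("score_key");
    }
}
```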

  • onStop is always the last method called before the app is killed (for example when it is running in the background and more resources are needed). Clean up any resources that need an orderly teardown in onStop and onPause to make your app a good citizen!
  • onSaveInstanceState is called before onStop
  • onRestoreInstanceState is called after onCreate

Loaders

Create one in 3 steps:

  1. Create a Loader ID
  2. Fill-in the Loader Callbacks
  3. Init the Loader with the LoaderManager

AsyncTaskLoader

Use an AsyncTaskLoader rather than an AsyncTask for background work that is bound to an Activity.

AsyncTaskLoader is the better choice for Activity-bound thread management because it handles lifecycle changes correctly: it delivers the result to the currently active Activity, prevents duplicate background threads, and helps eliminate zombie activities.

The functions of AsyncTaskLoader:

  • onStartLoading
  • forceLoad: Force an asynchronous load
  • loadInBackground: Called on a worker thread to perform the actual load and to return the result of the load operation
  • deliverResult: Sends the result of the load to the registered listener
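
The four functions above fit together as in this sketch; fetchDataFromNetwork() is a hypothetical blocking helper, not a framework call:

```java
public class StringLoader extends AsyncTaskLoader<String> {

    private String mResult; // cached result, survives configuration changes

    public StringLoader(Context context) {
        super(context);
    }

    @Override
    protected void onStartLoading() {
        if (mResult != null) {
            deliverResult(mResult); // reuse cached data, e.g. after rotation
        } else {
            forceLoad(); // kick off loadInBackground on a worker thread
        }
    }

    @Override
    public String loadInBackground() {
        // runs on a worker thread; do the actual (blocking) work here
        return fetchDataFromNetwork(); // hypothetical helper
    }

    @Override
    public void deliverResult(String data) {
        mResult = data; // cache before handing to the registered listener
        super.deliverResult(data);
    }
}
```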

Best practice

  • No Exit Menu item in Android! Exit is the back button.
  • Include menu items with semantic meaning: Exit a session should be available by a menu item “logout”, “sign-out” or similar.
  • Background apps shouldn’t consume resources.
  • Always prepare background apps to die.

AsyncTask

4 steps:

  • onPreExecute()
  • doInBackground(Params…)
  • onProgressUpdate(Progress…)
  • onPostExecute(Result)

execute(), onPreExecute(), onProgressUpdate() and onPostExecute() run on the main UI thread; only doInBackground() runs on a background thread.
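
A skeleton showing where each of the four steps runs; downloadFrom() is a hypothetical blocking helper:

```java
public class DownloadTask extends AsyncTask<URL, Integer, String> {

    @Override
    protected void onPreExecute() {
        // UI thread: e.g. show a progress bar
    }

    @Override
    protected String doInBackground(URL... urls) {
        publishProgress(50); // triggers onProgressUpdate on the UI thread
        return downloadFrom(urls[0]); // hypothetical blocking helper
    }

    @Override
    protected void onProgressUpdate(Integer... progress) {
        // UI thread: update the progress bar
    }

    @Override
    protected void onPostExecute(String result) {
        // UI thread: show the result
    }
}
```

Start it from the UI thread with new DownloadTask().execute(url);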

Storing Data in SQLite

The building blocks and their tests:

  • Data Contract → DB Helper → SQLite Database → Content Provider
  • Tests: Database Test, URIMatcher Test, Content Provider Test

The Database Test is a write-read test on the database.

Data Contract defines all tables with its columns.

DBHelper extends SQLiteOpenHelper. DBHelper creates the database with a version number and a database filename.

Manually increment version numbers each time you release an updated database with a new schema.

onCreate() creates the DB by executing a SQL statement with .execSQL(..).

Implement the onUpgrade() method, which is called when the version number has changed. Use ALTER TABLE or DROP TABLE there.
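
A minimal DBHelper sketch; the weather table, its columns and the file name are hypothetical examples (in a real app they would come from the Data Contract):

```java
public class WeatherDbHelper extends SQLiteOpenHelper {

    private static final String DATABASE_NAME = "weather.db"; // example name
    private static final int DATABASE_VERSION = 2; // bump on schema changes

    public WeatherDbHelper(Context context) {
        super(context, DATABASE_NAME, null, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE weather ("
                + "_id INTEGER PRIMARY KEY AUTOINCREMENT, "
                + "date INTEGER NOT NULL, "
                + "temp REAL NOT NULL);");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // Simplest possible migration: throw the old data away and recreate.
        db.execSQL("DROP TABLE IF EXISTS weather");
        onCreate(db);
    }
}
```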

Database operations (CRUD)

The following operations are available on any database and are known as CRUD:

  • Create
  • Read
  • Update
  • Delete
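
All four CRUD operations in one sketch against the hypothetical weather table from the DBHelper above:

```java
SQLiteDatabase db = dbHelper.getWritableDatabase();

// Create: insert a row and get its id back
ContentValues values = new ContentValues();
values.put("date", System.currentTimeMillis());
values.put("temp", 21.5);
long id = db.insert("weather", null, values);

// Read: query the row we just inserted
Cursor cursor = db.query("weather", null, "_id = ?",
        new String[]{String.valueOf(id)}, null, null, null);

// Update: change a column of that row
values.put("temp", 23.0);
db.update("weather", values, "_id = ?", new String[]{String.valueOf(id)});

// Delete: remove the row again
db.delete("weather", "_id = ?", new String[]{String.valueOf(id)});

cursor.close(); // always close cursors when done
```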

Content Provider

Content Provider makes your data accessible without needing to know how you stored it. So it makes it easy to switch out the datasource.

Widgets and Search need Content Provider. Ex.: GMail Widget, Play Store Search.

SyncAdapter (Get Data from your Server) and CursorLoader (Get data from your Database) need Content Provider too.

A Content Provider can also be shared with other apps.

The Content Provider implementation steps:

  1. Determine URIs
  2. Update Contract
  3. Fill out URIMatcher
  4. Implement Functions

Determine URIs

content://com.example.android.sunshine.app/weather/64111

  • SCHEME: content://
  • AUTHORITY: com.example.android.sunshine.app (the package name)
  • LOCATION: weather (the DB table name)
  • QUERY: 64111 (optional)

ContentObserver refreshes automatically with URI.

  • Base URIs = without Query, are for writing the DB.
  • Special URIs are for reading/querying the DB
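
Building both kinds of URI can be sketched like this; the authority, table and values are hypothetical examples:

```java
// Base URI (no query part), used for writes
Uri base = Uri.parse("content://com.example.android.sunshine.app/weather");

// Special URI for a query: append a location segment
Uri withLocation = base.buildUpon().appendPath("64111").build();

// Or append a row id with the ContentUris helper
Uri withId = ContentUris.withAppendedId(base, 42);
```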

Update Contract

Add the URIs to the Contract and create corresponding URI-builder methods.

Fill out URIMatcher

  • # a number
  • * any string

Implement the URIMatcher in your ContentProvider class.
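
A typical URIMatcher setup inside the ContentProvider; the authority, paths and match codes are hypothetical examples:

```java
private static final int WEATHER = 100;         // match code: whole table
private static final int WEATHER_WITH_ID = 101; // match code: single row

private static final UriMatcher sUriMatcher = buildUriMatcher();

private static UriMatcher buildUriMatcher() {
    // NO_MATCH is the code returned when no registered URI matches
    UriMatcher matcher = new UriMatcher(UriMatcher.NO_MATCH);
    matcher.addURI("com.example.android.sunshine.app", "weather", WEATHER);
    // '#' matches any number, e.g. a row id appended to the base URI
    matcher.addURI("com.example.android.sunshine.app", "weather/#", WEATHER_WITH_ID);
    return matcher;
}
```

Inside query(..) and friends, switch on sUriMatcher.match(uri) to pick the right SQL.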

Add the ContentProvider to the Manifest

<provider
    android:authorities="[PACKAGE_NAME]"
    android:name="[CONTENT_PROVIDER_CLASS]"
/>

Implement Funktions

Implementation order:

  1. onCreate()
  2. getType(Uri)
  3. query(..)
  4. insert(..), update(..), delete(..)
  5. optional: bulkInsert(..)

insert(..), update(..), delete(..) are the write operations

Content Resolver

Access the data via Content Provider with Content Resolver.

Use the Content Resolver, for example, in an AsyncTask:

Cursor myCursor = mContext.getContentResolver().query(..);
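
The one-liner above expands into a full query-and-iterate sketch; the URI and the temp column are hypothetical examples:

```java
// Query through the resolver; we never touch the provider class directly.
Uri uri = Uri.parse("content://com.example.android.sunshine.app/weather");
Cursor cursor = getContentResolver().query(uri, null, null, null, null);

if (cursor != null) {
    try {
        while (cursor.moveToNext()) {
            double temp = cursor.getDouble(cursor.getColumnIndex("temp"));
            // use the value, e.g. bind it to the UI
        }
    } finally {
        cursor.close(); // always release the cursor
    }
}
```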

About Parto

Parto Karwat

Hey there! I’m a fast learner focused on Android. I’ve been watching Android since 2013; my deep dive started in 2016 when I won a full scholarship from Google and Udacity for the Android Developer Nanodegree program. I will use this blog to keep my knowledge well-ordered.

If you have any questions, feel free to write me a message.