Tech Blog: Engineering Best Practices http://52.35.224.131/engineering-best-practices/ Sun, 14 Apr 2024 18:31:21 +0000 Explore essential mobile app engineering best practices with Phunware as we share insights from years of experience since 2009. Learn about feature flags, externalized configurations...

At Phunware, we’ve been building mobile apps since 2009. Along the way, we’ve compiled a large list of Engineering Best Practices. In this blog post, we share some of the most important ones we follow.

When defining a new feature for a mobile app, it’s important to follow best practices to ensure the feature is complete, stable, and easy to maintain.

Let’s use a new Leaderboard screen as an example. A less experienced manager may write user stories for the Engineering team asking them to satisfy acceptance criteria that ensure the proper UX is followed, the UI matches the designs, and the points are populated by the appropriate data source. But there is much more to consider.

Feature Flags

It’s imperative that a mobile app have an external config file. This file typically contains the configuration settings, URLs, strings, and other values that an app needs before it launches. Phunware’s Content Management Engine is a great place for developers to create a JSON-based app config file.

Feature flags are an important component of any config file. Feature flags are simply set to true or false and determine whether a feature should be enabled or not. Using our Leaderboard screen example, we may not want to launch the Leaderboard feature until the first of the month. We can go live with the app release, but keep the flag set to false until we’re ready for users to experience it in production. 

This is also helpful if an issue occurs and the data populating the Leaderboard is corrupt. Rather than delivering a poor user experience, we can temporarily disable the Leaderboard until the issue is resolved.
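
As a rough sketch (the key names here are hypothetical, not Phunware’s actual config schema), a feature flag in a JSON-based config and the corresponding check in Swift might look something like this:

import Foundation

// Hypothetical config shape; real key names will vary by project.
struct AppConfig: Decodable {
    struct FeatureFlags: Decodable {
        let leaderboardEnabled: Bool
    }
    let featureFlags: FeatureFlags
}

// Example JSON as it might be served from a CMS-hosted config file.
let json = Data("""
{
    "featureFlags": {
        "leaderboardEnabled": false
    }
}
""".utf8)

// Force-try for brevity in this sketch.
let config = try! JSONDecoder().decode(AppConfig.self, from: json)

if config.featureFlags.leaderboardEnabled {
    print("Show the Leaderboard entry point")
} else {
    print("Hide the Leaderboard until the flag is flipped to true")
}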

Externalized Strings & Images

The app config file is also a great place to externalize text strings and image URLs.

Let’s say there’s a typo in our Leaderboard screen or a Product Manager simply wants to change the copy. It’s much quicker and easier to update the text in the config file than to make the changes in code, submit to the stores, wait for approval, and then try to get all our users on the latest app version.

At Phunware, we actually take this a step further and externalize strings for each language. For example, we may have a strings_en key for English strings and a strings_es key for Spanish strings. We serve the appropriate text depending on the user’s language settings.
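
For illustration (again with hypothetical key and string names), the config might carry one dictionary of strings per language, and the app can pick the right one from the user’s preferred language:

import Foundation

// Hypothetical externalized strings, one dictionary per language code.
let strings: [String: [String: String]] = [
    "strings_en": ["leaderboard_title": "Leaderboard"],
    "strings_es": ["leaderboard_title": "Tabla de posiciones"]
]

// Serve Spanish text when the user prefers Spanish, otherwise fall back to English.
let prefersSpanish = Locale.preferredLanguages.first?.hasPrefix("es") ?? false
let key = prefersSpanish ? "strings_es" : "strings_en"
let title = strings[key]?["leaderboard_title"] ?? "Leaderboard"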

Externalizing image URLs is also helpful when we want to change images on-the-fly. We’re always uploading new images to Phunware’s Asset Manager and updating URLs.

Analytics

After launching a great feature, we’re going to want to know how it performs. Are users visiting the Leaderboard screen? Are they interacting with the filters?

Analytics is often an afterthought. If we train ourselves to write a corresponding analytics ticket whenever we write a feature ticket, we’ll find that our coverage will be very complete.

Phunware Analytics is a great tool for capturing app launches, unique users, retention cohorts, screen views, and custom event analytics.
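
The exact calls depend on the analytics SDK in use; as a generic sketch (the protocol and event names below are made up for illustration, not the Phunware Analytics API), wiring up the Leaderboard’s events might look like this:

// Hypothetical analytics abstraction, not the Phunware Analytics API.
protocol AnalyticsTracking {
    func track(event: String, properties: [String: String])
}

struct LeaderboardAnalytics {
    let tracker: AnalyticsTracking

    // Fired when the Leaderboard screen appears.
    func trackScreenView() {
        tracker.track(event: "leaderboard_viewed", properties: [:])
    }

    // Fired when the user changes a filter, e.g. "weekly" vs. "all_time".
    func trackFilterChanged(to filter: String) {
        tracker.track(event: "leaderboard_filter_changed", properties: ["filter": filter])
    }
}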

Error Handling

So we wrote user story tickets for the Leaderboard screen and the developers have finished implementing it. But what happens when the API goes down and returns a 500 error? Will the app provide an informative error message, a generic one, or simply get into a bad state?

By writing an error handling ticket we can define the behavior we would like when something goes wrong. At Phunware, we like to use a mix of specific and generic error messages.

For the Leaderboard example, it may be more appropriate to display a message such as “Unable to load Leaderboard data. Please try again later” rather than “An unexpected error has occurred”. However, the latter works well as a catch-all for any situation where a specific error message wasn’t implemented.
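
One way to express that mix in code is to map the failures we anticipated to specific copy and everything else to the generic fallback. A minimal sketch, with a hypothetical error type:

// Hypothetical error type for the Leaderboard feature.
enum LeaderboardError: Error {
    case serverError(statusCode: Int)
    case noConnection
    case unknown
}

func userMessage(for error: LeaderboardError) -> String {
    switch error {
    case .serverError, .noConnection:
        // Specific copy for failures we anticipated.
        return "Unable to load Leaderboard data. Please try again later."
    case .unknown:
        // Generic catch-all for anything we did not explicitly handle.
        return "An unexpected error has occurred."
    }
}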

Deep Links

Chances are, if the new Leaderboard is doing well, someone is going to ask if they can send a push notification promoting the Leaderboard which, when tapped, sends users to the Leaderboard screen.

If we considered deep links when writing the initial user stories then we’re probably covered. These days there are many places that may link into an app. Deep links can come from push notifications, app share URLs, emails, or even websites redirecting to the mobile app.

Considering deep links when implementing a new screen saves the time and overhead of having to do the work in a follow up release.
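
In SwiftUI, for instance, a minimal sketch of routing a hypothetical myapp://leaderboard link could look like this (the URL scheme and host are placeholders):

import SwiftUI

struct LeaderboardDeepLinkView: View {
    @State private var showLeaderboard = false

    var body: some View {
        Text("Home")
            .sheet(isPresented: $showLeaderboard) {
                Text("Leaderboard") // Placeholder for the real Leaderboard screen.
            }
            // Handles links such as myapp://leaderboard coming from push
            // notifications, emails, or websites.
            .onOpenURL { url in
                if url.host == "leaderboard" {
                    showLeaderboard = true
                }
            }
    }
}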

Offline Caching

There are times when our users have a poor network connection. Perhaps they are on a train or their WiFi is down. Ideally, we anticipated that this could happen and designed a good user experience for when it does.

At Phunware, we make sure to cache as much content as possible. If the user launches the app without an internet connection we’ll still display the images and content that were available the last time they launched the app.

While it’s possible some of the content is outdated, this is a better experience than showing a blank screen.

Displaying a banner at the top indicating that the user doesn’t appear to have a connection is also helpful: it tells the user they are being shown something slightly different than they would see with a good connection.
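
There are many ways to implement offline caching; one lightweight sketch is to give URLSession a large URLCache and a cache-friendly request policy so that previously fetched content can still be served when the network is unavailable (the endpoint below is a placeholder):

import Foundation

// Give URLSession a reasonably large on-disk cache.
let cache = URLCache(memoryCapacity: 20_000_000, diskCapacity: 200_000_000, directory: nil)
let configuration = URLSessionConfiguration.default
configuration.urlCache = cache
// Serve cached data when it exists; otherwise go to the network.
configuration.requestCachePolicy = .returnCacheDataElseLoad
let session = URLSession(configuration: configuration)

// Placeholder endpoint; replace with the real leaderboard URL.
let url = URL(string: "https://example.com/leaderboard.json")!
let task = session.dataTask(with: url) { data, _, error in
    if let data = data {
        // Render the (possibly cached) leaderboard content.
        print("Loaded \(data.count) bytes")
    } else {
        // No cached copy and no connection: show the offline banner.
        print("Offline with nothing cached: \(error?.localizedDescription ?? "unknown error")")
    }
}
task.resume()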

Unit Tests

We try to cover as much of our code as possible with unit tests and write them when developing new features.

Unit tests allow developers to be confident the feature works as expected. We set up our build jobs to run unit tests whenever new builds are generated, so if a developer introduces a regression a few months down the road, we catch it right away. This frees up our QA team to focus on edge-case issues rather than discovering breaking changes.
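
As a small illustration (the model and ranking function are hypothetical), a unit test covering the Leaderboard’s sorting logic might look like this:

import XCTest

// Hypothetical model and logic under test.
struct LeaderboardEntry: Equatable {
    let name: String
    let points: Int
}

func rankedEntries(_ entries: [LeaderboardEntry]) -> [LeaderboardEntry] {
    entries.sorted { $0.points > $1.points }
}

final class LeaderboardTests: XCTestCase {
    func testEntriesAreRankedByPointsDescending() {
        let entries = [
            LeaderboardEntry(name: "Ana", points: 120),
            LeaderboardEntry(name: "Ben", points: 300),
        ]

        let ranked = rankedEntries(entries)

        XCTAssertEqual(ranked.map(\.name), ["Ben", "Ana"])
        XCTAssertEqual(ranked.map(\.points), [300, 120])
    }
}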

Documentation

So we wrote the user stories the Engineering team needed to implement the Leaderboard screen. Everything has been externalized, deep links have been tested, analytics are in place, and unit tests are all passing. Now it’s time to update the documentation.

Keeping documentation up to date is very important as codebases and feature sets are always changing. Ensuring we have proper documentation allows team members to quickly look up that deep link URL scheme or the analytics event that fires when a user toggles something in the Leaderboard.

In addition to documentation, this is also a great time to update submission checklists and QA test plans, since we’ll want to make sure the new Leaderboard is tested with each new release.

Store Guidelines

Our final best practice is keeping up to date with Apple’s App Store Review Guidelines and Google Play’s developer policies. We check these weekly because we never know when new guidelines will be announced.

It’s critical to know these before the feature is completed and the app is submitted. There’s nothing worse than getting rejected because we violated a guideline. At that point any deadline we had to launch the app went out the window.

For example, there’s a guideline that requires that any app that allows users to create accounts must also provide a mechanism for users to delete their account. If we knew this when writing that user story for Sign Up and Log In, then we’re covered. If we found out the hard way, then we’ve lost precious time because it may be another sprint or two before the Engineering team can deliver that new flow.

Luckily, we followed the other best practices, so we can disable the affected feature behind its flag for now!

Dev Blog: Barcode Scanning on iOS http://52.35.224.131/dev-blog-barcode-scanning-ios/ Thu, 03 Nov 2022 15:59:49 +0000 http://127.0.0.1/blog/dev-blog-swift-regex-copy/ Learn how to build an iOS barcode scanner that can scan machine readable codes and about what approach might be best for your use case.

In this tutorial you will learn how to build a barcode scanner that can scan machine-readable codes (QR, Codabar, etc.). You will also learn about the various approaches and which one might be best for your use case.

There are many ways to build a code scanner on iOS. Apple’s Vision Framework introduced additional options. We will first go over the classic tried and true method for creating a code scanner, then we will go over the new options. We will carefully consider the pros and cons for each approach.

1. The Classic Approach

Throughout the years most scanners on iOS have likely taken the following approach.

First, AVFoundation is used to set up a video capture session, highlighted in gray in the diagram above. Then an AVCaptureMetadataOutput object is hooked up to the video session’s output. AVCaptureMetadataOutput is then configured to emit barcode information, which is extracted from an AVMetadataObject (highlighted in blue).
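
A minimal sketch of that wiring (error handling, permission prompts and threading details omitted; NSCameraUsageDescription must be set in Info.plist) might look like this:

import AVFoundation
import UIKit

final class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()

        // 1. Feed the camera into a capture session.
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // 2. Attach a metadata output and ask it for the code types we care about.
        let output = AVCaptureMetadataOutput()
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.qr, .ean13, .code128]

        // 3. Show the camera feed.
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        view.layer.addSublayer(previewLayer)

        // In a real app, start the session on a background queue.
        session.startRunning()
    }

    // Called whenever one of the requested code types is detected.
    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        guard let code = metadataObjects.first as? AVMetadataMachineReadableCodeObject else { return }
        print("Scanned \(code.type.rawValue): \(code.stringValue ?? "")")
    }
}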

Pros:

  • When it comes to scannable code formats, there aren’t any formats that are exclusive to the newer approaches. Click here to see a full list of supported code formats.
  • Minimum deployment target is iOS 6. This approach will likely accommodate any OS requirements that you may have.
  • This approach is tried and true. This means there are plenty of code examples and Stack Overflow questions.

Cons:

  • The maximum number of codes that can be scanned at a time is limited. For 1D codes we are limited to one detection at a time; for 2D codes we are limited to four detections at a time. Click here to read more.
  • Unable to scan a mixture of code types. For example, a barcode and a QR code can’t be scanned in one go; instead we must scan them individually.
  • The lack of machine learning may cause issues when dealing with problems like poor focus or glare on images.
  • Developers have reported issues when supporting a variety of code types on iOS 16. A solution could be to use one of the newer approaches for your users on iOS 16 and above.

2. AVFoundation and Vision

For the following approach the basic idea is to feed an image to the Vision Framework. The image is generated using an AVFoundation capture session, similar to the first approach. Click here for an example implementation.

Notice the three Vision Framework classes in the diagram above (in blue). The entry point to the Vision Framework is the VNImageRequestHandler class. We initialize an instance of VNImageRequestHandler using an instance of CMSampleBufferRef.

Note: VNImageRequestHandler ultimately requires an image for Vision to process. When initialized with CMSampleBufferRef the image contained within the CMSampleBufferRef is utilized. In fact there are other initialization options like CGImage, Data, and even URL. See the full list of initializers here.

VNImageRequestHandler performs a Vision request using an instance of VNDetectBarcodesRequest. VNDetectBarcodesRequest is a class that represents our barcode request and returns an array of VNBarcodeObservation objects through a closure.

We get important information from VNBarcodeObservation, for example:

  • The barcode payload which is ultimately the data we are looking for.
  • The symbology which helps us differentiate observations/results when scanning for various types of codes (barcode, QR, etc) simultaneously.
  • The confidence score which helps us determine the accuracy of the observation/result.

In summary, it took three steps to set up Vision:

  1. Initialize an instance of VNImageRequestHandler.
  2. Use VNImageRequestHandler to perform a Vision request using an instance of VNDetectBarcodesRequest.
  3. Set up VNDetectBarcodesRequest to return our results, an array of VNBarcodeObservation objects.
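
Put into code, a sketch of those three steps inside a capture callback could look like the following (assuming frames arrive from an AVCaptureVideoDataOutput, as in the capture-session setup from the first approach):

import AVFoundation
import Vision

final class VisionBarcodeScanner: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Called for each video frame produced by the capture session.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Step 1: initialize the request handler for this frame.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])

        // Step 3 (declared up front): the barcode request returns VNBarcodeObservation objects.
        let request = VNDetectBarcodesRequest { request, _ in
            guard let observations = request.results as? [VNBarcodeObservation] else { return }
            for observation in observations {
                print("Payload: \(observation.payloadStringValue ?? "")")
                print("Symbology: \(observation.symbology.rawValue)")
                print("Confidence: \(observation.confidence)")
            }
        }

        // Step 2: perform the Vision request.
        try? handler.perform([request])
    }
}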

Pros:

  • Computer Vision and Machine Learning algorithms – The Vision Framework is constantly improving. In fact, at the time of writing Apple is on its third revision of the barcode detection algorithm.
  • Customization – Since we are manually hooking things up we are able to customize the UI and the Vision Framework components.
  • Ability to scan a mixture of code formats at once. This means we can scan multiple codes with different symbologies all at once.

Cons:

  • Minimum deployment target of iOS 11; keep in mind that using the latest Vision Framework features will increase the minimum deployment target.
  • Working with newer technology can have its downsides. It may be harder to find tutorials, Stack Overflow questions, and established best practices.

3. DataScannerViewController

If the second approach seemed a bit too complicated, no need to worry. Apple introduced DataScannerViewController, which abstracts the core of the work we did in the second approach. It isn’t exclusive to scannable codes either: it can also scan text. This is similar to what Apple did with UIImagePickerController, in the sense that it’s a drop-in view controller that abstracts various common processes into a single UIViewController subclass. Apple provides a short article that introduces the new DataScannerViewController class and walks through the required setup and configuration.
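
A sketch of the basic setup (availability checks included, delegate handling trimmed) might look like this:

import UIKit
import VisionKit

@MainActor
func presentBarcodeScanner(from presenter: UIViewController) {
    // Requires iOS 16 and a device with the A12 Bionic chip or later.
    guard DataScannerViewController.isSupported, DataScannerViewController.isAvailable else { return }

    let scanner = DataScannerViewController(
        recognizedDataTypes: [.barcode(symbologies: [.qr, .ean13]), .text()],
        qualityLevel: .balanced,
        isHighlightingEnabled: true
    )
    presenter.present(scanner, animated: true) {
        // Begin scanning once the view controller is on screen.
        try? scanner.startScanning()
    }
}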

Pros:

  • Easy to use and setup.
  • Low maintenance, Apple is in charge of maintaining the code.
  • Can also scan text, not exclusive to machine readable codes.

Cons:

  • Minimum deployment target of iOS 16.
  • Only available on devices with the A12 Bionic chip and later.
  • Limited control over the UI. Even though the built-in UI looks great, sometimes we may require something more complex.

Conclusion

We went over the various ways to scan machine readable codes on iOS. We explored the pros and cons of each approach. Now you should be ready to use this knowledge to build or improve on a barcode scanner.

Who knows, you may even choose to take a hybrid approach in order to take advantage of the latest and greatest that Apple has to offer while gracefully downgrading for users on older iOS devices.

Dev Blog: Swift Regex http://52.35.224.131/dev-blog-swift-regex/ Thu, 20 Oct 2022 14:55:43 +0000 http://127.0.0.1/blog/dev-blog-what-developers-should-know-notification-permission-android-13-copy/ Learn more about Swift's new set of APIs allowing developers to write regular expressions (regex) that are more robust and easy to understand.

Introduction

A regular expression (regex) is a sequence of characters that defines a search pattern which can be used for string processing tasks such as find/replace and input validation. Working with regular expressions in the past using NSRegularExpression has always been challenging and error-prone. Swift 5.7 introduces a new set of APIs allowing developers to write regular expressions that are more robust and easy to understand.

Regex Literals

Regex literals are useful when the regex pattern is static. The Swift compiler can check for regex syntax errors at compile time. To create a regular expression using a regex literal, simply wrap your regex pattern in the slash delimiters /…/

let regex = /My flight is departing from (.+?) \((\w{3}?)\)/

Notice the above regex literal also has captures defined in the regex pattern using the parentheses (…). A capture allows information to be extracted from a match for further processing. After the regex is created, we then call wholeMatch(of:) on the input string to see if there’s a match against the regex. A match from each capture will be appended to the regex output (as tuples) and can be accessed by element index. .0 would return the whole matched string, and .1 and .2 would return matches from the first and second captures, respectively.

let input = "My flight is departing from Los Angeles International Airport (LAX)"

if let match = input.wholeMatch(of: regex) {
    print("Match: \(match.0)")
    print("Airport Name: \(match.1)")
    print("Airport Code: \(match.2)")
}
// Match: My flight is departing from Los Angeles International Airport (LAX)
// Airport Name: Los Angeles International Airport
// Airport Code: LAX

You can also assign a name to each capture by adding ?<capture_name> right after the capture group’s opening parenthesis. That way you can easily reference the intended match result, like the example below:

let regex = /My flight is departing from (?<name>.+?) \((?<code>\w{3}?)\)/

if let match = input.wholeMatch(of: regex) {
    print("Airport Name: \(match.name)")
    print("Airport Code: \(match.code)")
}
// Airport Name: Los Angeles International Airport
// Airport Code: LAX

Regex

Along with regex literals, the Regex type can be used to create a regular expression when the regex pattern is dynamically constructed. Search fields in editors are a good example of where dynamic regex patterns may be needed. Keep in mind that Regex will throw a runtime error if the regex pattern is invalid. You can create a Regex by passing the regex pattern as a String. Note that an extended string literal #"…"# is used below so that escaping backslashes within the regex is not required.
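
As a minimal sketch, here is the same airport pattern built at runtime from a String, reusing the input string from the regex literal example above:

let pattern = #"My flight is departing from (.+?) \((\w{3}?)\)"#

do {
    // Throws at runtime if the pattern is invalid.
    let regex = try Regex(pattern)
    if let match = input.wholeMatch(of: regex) {
        // Element 0 of the untyped output is the whole matched string.
        print("Match: \(match.output[0].substring ?? "")")
    }
} catch {
    print("Invalid regex pattern: \(error)")
}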

Regex Builder

Another great tool for creating regular expressions is called regex builder. Regex builder allows developers to use domain-specific language (DSL) to create and compose regular expressions that are well structured. As a result, regex patterns become very easy to read and maintain. If you are already familiar with SwiftUI code, using regex builder will be straightforward.

The following input data represents flight schedules, each consisting of four fields: flight date, departure airport code, arrival airport code, and flight status.

let input =
""" 
9/6/2022   LAX   JFK   On Time
9/6/2022   YYZ   SNA   Delayed
9/7/2022   LAX   SFO   Scheduled
"""

let fieldSeparator = OneOrMore(.whitespace)


let regex = Regex { 
    Capture {
        One(.date(.numeric, locale: Locale(identifier: "en-US"), timeZone: .gmt)) 
    } 
    fieldSeparator
    Capture { 
        OneOrMore(.word) 
    } 
    fieldSeparator
    Capture { 
        OneOrMore(.word)
    }
    fieldSeparator
    Capture { 
        ChoiceOf {
            "On Time"
            "Delayed"
            "Scheduled"
        }
    }
}

Quantifiers like One and OneOrMore are regex builder components allowing us to specify the number of occurrences needed for a match. Other quantifiers are also available such as Optionally, ZeroOrMore, and Repeat.

To parse the flight date, we could have specified the pattern manually with a regex literal such as /\d{1,2}\/\d{1,2}\/\d{4}/. But there’s a better way. Regex builder supports many of the existing parsers provided by the Foundation framework, such as those for dates and numbers, so developers can reuse them. Therefore, we can simply use Foundation’s date parser for the flight date.

Each field in the input data is separated by three whitespace characters. Here we can declare a reusable pattern and assign it to a fieldSeparator variable. Then the variable can be inserted into the regex builder wherever a field separator is needed.

Parsing the departure/arrival airport code is straightforward. We can use the OneOrMore quantifier and word as the type of character class since these airport codes consist of 3 letters.

Finally, ChoiceOf lets us define a fixed set of possible values for parsing the flight status field.

Once we have a complete regex pattern constructed using regex builder, calling matches(of:) on the input string would return enumerated match results:

for match in input.matches(of: regex) {
    print("Flight Date: \(match.1)")
    print("Origin: \(match.2)")
    print("Destination: \(match.3)")
    print("Status: \(match.4)")
    print("========================================")
}
// Flight Date: 2022-09-06 00:00:00 +0000
// Origin: LAX
// Destination: JFK
// Status: On Time 
// ======================================== 
// Flight Date: 2022-09-06 00:00:00 +0000 
// Origin: YYZ 
// Destination: SNA 
// Status: Delayed 
// ======================================== 
// Flight Date: 2022-09-07 00:00:00 +0000 
// Origin: LAX 
// Destination: SFO 
// Status: Scheduled 
// ========================================

Captures can also take an optional transform closure, which allows captured data to be transformed into a custom data structure. We can use the transform closure to convert the captured value (a Substring) from the flight status field into a custom FlightStatus enum, making it easier to perform operations like filtering on the transformed type.

enum FlightStatus: String {
    case onTime = "On Time"
    case delayed = "Delayed"
    case scheduled = "Scheduled"
}

let regex = Regex { 
    ...
    Capture { 
        ChoiceOf {
            "On Time"
            "Delayed"
            "Scheduled"
        }
    } transform: {
        FlightStatus(rawValue: String($0))
    }
}
// Status: FlightStatus.onTime

Final Thoughts

Developers who want to use these new Swift Regex APIs may wonder which API to adopt when converting existing NSRegularExpression code or when writing new code that requires regular expressions. The answer is that it really depends on your requirements. Each of the Swift Regex APIs has its own unique advantage. Regex literals are good for simple, static regex patterns that can be validated at compile time. The Regex type is better suited for regex patterns that are constructed dynamically at runtime. When working with a large input data set requiring more complex regex patterns, regex builder lets developers build regular expressions that are well structured, easy to understand and maintain.

Learn More

Dev Blog: What Developers Should Know About the Notification Permission in Android 13 http://52.35.224.131/dev-blog-what-developers-should-know-notification-permission-android-13/ Tue, 04 Oct 2022 16:09:56 +0000 http://127.0.0.1/blog/navigating-permission-changes-in-ios-14-copy/ How does Android 13's new notification permission affect the ability of apps to post notifications? Learn more in Phunware's latest dev blog.


Android 13 introduces a new runtime permission, android.permission.POST_NOTIFICATIONS, which apps will need to obtain to display some types of notifications. How does this change the ability of apps to post notifications? I’ll attempt to answer that and more in this post. My own research found answers that surprised me.

Why does an app need POST_NOTIFICATIONS permission?

Figure 1: Android 13 system dialog for notifications permission.

The POST_NOTIFICATIONS permission only exists on Android 13 (the permission value “android.permission.POST_NOTIFICATIONS” is only available in code when an app compiles against API 33). When the app is running on devices with Android 12 and lower, the POST_NOTIFICATIONS permission is not needed (and, actually, should not be used; more on this later). On Android 13, some notifications can still be displayed without this permission, such as notifications for foreground services or media sessions as described in the documentation. On Android 13, you can think of this permission as equivalent to the app system setting that enables notifications, except that you can now ask the user to enable notifications through a permission prompt without sending them to the system settings screen.

How can my app check if it has POST_NOTIFICATIONS permission?

As a runtime permission, you would think the obvious way to check for this permission is to call checkSelfPermission with the POST_NOTIFICATIONS permission. But this does not work as expected on pre-Android 13 devices. On pre-Android 13 devices, checkSelfPermission(POST_NOTIFICATIONS) will always return that the permission is denied even when notifications have been enabled in the app system settings. So, don’t call checkSelfPermission(POST_NOTIFICATIONS) if the app is not running on Android 13. Calling areNotificationsEnabled() is still the way to check that the user has enabled notifications for your app. To put it another way, only on Android 13 will checkSelfPermission(POST_NOTIFICATIONS) and areNotificationsEnabled() give you the same answer of whether that app has notifications enabled or not.

How can my app get POST_NOTIFICATIONS permission?

First, even apps that do not ask for POST_NOTIFICATIONS permission (such as apps that have not yet been updated to API 33 to know about this permission) may still obtain it. If an app is already installed, and has notifications enabled, and the device updates to Android 13, the app will be granted the permission to continue to send notifications to users. Similarly, if a user gets a new device with Android 13 and restores apps using the backup and restore feature, those apps will be granted POST_NOTIFICATIONS permission, if notifications were enabled.

For newly installed apps, if an app targets API 32 or lower, the system shows the permission dialog (see Figure 1) the first time your app starts an activity and creates its first notification channel. This is why you will see the permission dialog for apps that have not yet been updated for Android 13.

But as a developer, I was looking to add requesting the POST_NOTIFICATIONS permission to apps. Here’s the code I used:

    private val requestPermissionLauncher =
        registerForActivityResult(
            ActivityResultContracts.RequestPermission()
        ) { isGranted: Boolean ->
            onNotificationPermission(isGranted)
        }
…
        requestPermissionLauncher.launch(POST_NOTIFICATIONS)

Like checkSelfPermission(), this did not work the way I expected. On pre-Android 13 devices, requesting the POST_NOTIFICATIONS permission will always return PERMISSION_DENIED without displaying the system dialog. The same is true if the app targets API 32 or lower, even on devices with Android 13. So, to request the POST_NOTIFICATIONS permission at runtime:

  • Only request it on Android 13 or later
  • Your app must target API 33 or later

Do I need to update my app?

Yes, you should update your app if you don’t want the app to lose the ability to display notifications. Because of the situations described above where an app can get the POST_NOTIFICATIONS permission even when no code asks for it, you may be tempted to procrastinate just a little longer before handling this new permission. But remember the auto-reset permissions for unused apps feature introduced with Android 11 and later rolled out to earlier versions. This feature applies to runtime permissions so it applies to the new POST_NOTIFICATIONS permission. Expect that an app will lose this permission as well if it is not used for some time, so it will need to request it to get it back.

Phunware Launches New Telemedicine Solution http://52.35.224.131/phunware-launches-new-telemedicine-solution/ Thu, 30 Apr 2020 19:10:20 +0000 http://127.0.0.1/blog/smb-mobile-engagement-offer-copy/ Phunware launches a new telemedicine solution for new and existing healthcare customers of its Multiscreen-as-a-Service (MaaS) platform.

Today Phunware announced the launch of a new mobile telemedicine solution for new and existing healthcare customers of its Multiscreen-as-a-Service (MaaS) platform.

“Healthcare organizations are being forced to leverage telemedicine in order to stay competitive with their digital transformation initiatives and to address patient concerns about the safety underlying in-person visits in the wake of COVID-19,” said Randall Crowder, COO of Phunware. “Our new solution offers physicians an out-of-the-box telemedicine platform on mobile with streamlined reimbursement that keeps its referrals in-network to help reduce their patient leakage while enhancing their revenues.”

Read the full article from Proactive

Phunware Offers A Free 60-Day License Of Its Mobile Engagement SDK http://52.35.224.131/smb-mobile-engagement-offer/ Thu, 23 Apr 2020 21:39:49 +0000 http://127.0.0.1/blog/phunware-smart-city-solution-launch-copy/ Phunware to offer a free 60-day license of its Mobile Engagement software to qualifying small and midsize businesses that become Phenom Certified.

Phunware recently announced an offer for a free 60-day license of its Mobile Engagement software development kits (SDKs) to qualifying small and midsize businesses (SMBs). In order to receive the SDK at no cost, the qualifying business must complete the Phunware Phenom Certified Developer Program within the next 60 days.

“Our hearts go out to everyone directly affected by COVID-19, but we are just as concerned about the untold toll this pandemic is having on small and midsize businesses nationwide as they scramble to adapt to emerging state and federal guidance,” said Randall Crowder, COO of Phunware. “Our enterprise cloud platform for mobile is uniquely suited to help them not only adhere to these guidelines, but also to engage and manage customers in a mobile-first world that is rapidly becoming mobile-only.”

Read the full article from Proactive

Sign up for the Phenom Certified Developer Program

Phunware’s Smart City Solution Launches http://52.35.224.131/phunware-smart-city-solution-launch/ Thu, 09 Apr 2020 20:52:27 +0000 http://127.0.0.1/blog/acg-podcast-copy/ Today Phunware announced the launch of a Smart City Pandemic Response Solution to help government officials during the coronavirus (COVID-19) pandemic.

Today Phunware announced the launch of a Smart City Pandemic Response Solution to help government officials address the critical challenges they are facing in their cities due to the coronavirus (COVID-19) pandemic.

“We think it is extremely important for our country’s mayors and city officials to think globally, but act locally during the current COVID-19 pandemic,” said Alan S. Knitowski, President, CEO and Co-Founder of Phunware. “During such trying times, we believe it is critical for local communities to take swift and decisive action from the bottom up to supplement government efforts being led from the top down at both the federal and state level, including a cogent go-forward plan for addressing the needs of citizens and visitors to each city nationwide in safely getting back to a more normal cadence for their personal and professional lives.”

Learn more about the Smart City Pandemic Response Solution

Phunware’s CEO Interviewed for Association for Corporate Growth Virtual Luncheon http://52.35.224.131/acg-podcast/ Wed, 08 Apr 2020 20:10:30 +0000 http://127.0.0.1/blog/phunware-mobile-pandemic-response-solution-launches-copy/ Check out the recent podcast interview with Phunware CEO, Alan S. Knitowski, and Thom Singer in a special episode for Association for Corporate Growth.

Today Phunware’s President, CEO and Co-Founder, Alan S. Knitowski, was scheduled to present at the Association for Corporate Growth (ACG) luncheon. However, with current restrictions surrounding the COVID-19 pandemic, the luncheon and all live events for the foreseeable future have been cancelled. In an effort to continue to provide members with quality content as we all navigate our new normal, ACG Austin/San Antonio conducted an interview with Mr. Knitowski in a special episode of Thom Singer’s podcast.

Listen to the full interview

Phunware Announces 2019 Earnings and Business Developments http://52.35.224.131/2019-earnings-business-update/ Mon, 30 Mar 2020 20:24:12 +0000 http://127.0.0.1/blog/ventilator-registry-launch-copy/ This week Phunware announced its 2019 financial results and provided an update on recent business developments.

This week Phunware announced its 2019 financial results and provided an update on recent business developments.

“Today we are pleased to share our trailing financial results for the Company, which included a dramatic year-over-year revenue transformation from one-time, non-recurring application transactions revenue to annual and multi-year recurring platform subscriptions and services revenue tied to the licensing and use of our Multiscreen as a Service (MaaS) enterprise cloud platform for mobile,” said Alan S. Knitowski, President, CEO and Co-Founder of Phunware. “More importantly, and specific to the subsequent events and recent operational actions taken to address our go-forward business activities while the ongoing COVID-19 coronavirus pandemic continues to unfold worldwide, we have announced a $3 million structured debt financing to address our balance sheet and a furlough of 37 Phunware employees to address our cost structure during the existing governmental stay-in-place orders unique to our business facilities and operations in Central Texas, Southern California and Southern Florida.”

Read the full article from Proactive

Blythe Masters Appointed as Phunware Board of Directors Chair http://52.35.224.131/phunware-board-of-directors-chair-blythe-masters/ Mon, 30 Mar 2020 19:30:44 +0000 http://127.0.0.1/blog/2019-earnings-business-update-copy/ Today Phunware is pleased to announced the appointment of Blythe Masters as the new Chair of the Board.

Today Phunware is pleased to announce the appointment of Blythe Masters as the new Chair of the Board. Ms. Masters succeeds Eric Manlunas, who will remain with Phunware as a Director and Member of both the Compensation Committee and Audit Committee.

“We are living in unprecedented times as the world faces the COVID-19 pandemic, so we are honored and fortunate to have Blythe serve as Chair for Phunware’s Board of Directors,” said Alan S. Knitowski, President, CEO and Co-Founder of Phunware. “Blythe’s proven leadership and experience will be invaluable to helping Phunware navigate the current macro and health environments as we continue to diligently manage cash and drive towards self-sufficiency through operational excellence.”

Read the full article from Proactive

Phunware Announces Launch of its National Ventilator Registry http://52.35.224.131/ventilator-registry-launch/ Fri, 27 Mar 2020 18:50:56 +0000 http://127.0.0.1/blog/issuance-of-senior-convertible-notes-copy/ Phunware asks medical professionals to help compile a National Ventilator Registry launched to help identify and track lifesaving equipment.

Today Phunware announced that it has launched a National Ventilator Registry, calling medical professionals to help compile the registry so clinicians have complete visibility into existing resources and can locate lifesaving equipment.

“We have built a data engine that is capable of managing over a billion active devices and four billion daily transactions, while generating more than 5 terabytes of data each day,” said Randall Crowder, COO of Phunware. “We can leverage our technology to identify and track critical medical assets like ventilators, but we need to act now and we need everyone’s help getting the word out to medical professionals on the frontline so that we can collect the information that we desperately need.”

Read the full article from Proactive

Visit the National Ventilator Registry

Phunware Announces Issuance of Senior Convertible Notes http://52.35.224.131/issuance-of-senior-convertible-notes/ Mon, 23 Mar 2020 20:40:29 +0000 http://127.0.0.1/blog/avia-vetted-product-copy/ Phunware has entered into a financing transaction with Canaccord Genuity for the issuance of senior convertible notes.

Today Phunware announced that it has entered into a financing transaction with Canaccord Genuity for the issuance of senior convertible notes. Upon closing of the sale, Phunware is expected to receive gross cash proceeds of $2.760 million.

Read the full press release

Phunware Recognized as AVIA Vetted Product http://52.35.224.131/avia-vetted-product/ Thu, 19 Mar 2020 21:04:27 +0000 http://127.0.0.1/blog/top-health-system-location-based-services-copy/ AVIA has recognized the Phunware digital front door software as an AVIA Vetted Product based on the needs and criteria of its members.

Today we announced that AVIA has recognized the Phunware digital front door as an AVIA Vetted Product. These products have been proven to address mobile applications effectively based on the needs and criteria of AVIA Members.

“Phunware is honored to have an AVIA Vetted Product, which will allow us to connect with over 25 distinguished health systems who are committed to digital transformation in a mobile-first world,” said Randall Crowder, COO of Phunware. “We look forward to this partnership with AVIA as we continue to offer health systems an enterprise-wide, best-in-class digital front door.”

Read the full article from Proactive

Phunware Location Based Services Deployed at A Leading US Health System http://52.35.224.131/top-health-system-location-based-services/ Mon, 16 Mar 2020 19:45:11 +0000 http://127.0.0.1/blog/phunware-investor-relations-program-hayden-ir-copy/ Phunware announces the deployment of its Location Based Services for a top US health system.

We recently announced that our patented Location Based Services, a key component of the award-winning Multiscreen-as-a-Service (MaaS) platform, has been deployed at a leading US health system spanning 30 facilities and more than 22 million square feet.

“The enterprise rollout of this mobile application enabled by our location-based services is another great example of leadership in healthcare innovation and we’re proud to play our part in building a true digital front door,” said Alan S. Knitowski, President, CEO and Co-Founder of Phunware. “Being able to navigate a complex facility easily makes hospital visits less stressful for patients, while being able to reach and inform patients with the push of a button, saving precious time and increasing staff efficiencies.”

Read the full article from Proactive

Phunware to Launch Investor Relations Program with Hayden IR http://52.35.224.131/phunware-investor-relations-program-hayden-ir/ Tue, 10 Mar 2020 15:52:47 +0000 http://127.0.0.1/blog/phunware-new-customer-wins-applications-copy/ Phunware announces launch of investor relations program with Hayden IR

Phunware announced today it has engaged Hayden IR, a highly recognized, national investor relations firm, to raise its visibility and strengthen its relationships with the investment community.

“Over the past year, we have strengthened our financial position as we approach operating cash flow breakeven and move towards breakeven on an adjusted EBITDA basis,” said Alan S. Knitowski, President, CEO and Co-Founder of Phunware. “To ensure we capitalize on these important milestones, we look forward to working with the team of professionals at Hayden IR to help us target and expand our investor audience and ensure we are communicating effectively with Wall Street.”

Read the full article from Proactive

Phunware Talks New Customer Wins for Application Transactions http://52.35.224.131/phunware-new-customer-wins-applications/ Fri, 06 Mar 2020 16:52:32 +0000 http://127.0.0.1/blog/phunware-appoints-wikipedia-co-founder-larry-sanger-advisory-board-copy/ Phunware appoints Wikipedia co-founder Larry Sanger to Advisory Board.

Phunware announced new customer wins for application transactions using Phunware’s proprietary Audience Engagement solution, which is a managed service capability that enables brands to build custom audiences and deliver targeted media to optimize engagement. 

The Company also recently released new user activity audience capabilities for its Multiscreen-as-a-Service (MaaS) platform that allow brands to create custom user segments, calculate approximate audience sizes and create cross-platform campaigns among users.

“Phunware has been delivering everything you need to succeed on mobile for over a decade, so helping brands engage audiences with digital media is a natural core competency for us in a mobile-first world,” said Luan Dang, CTO and Co-Founder of Phunware. “Our data-enriched media allows brands to optimize their marketing spend, while our blockchain-enabled data exchange provides improved transparency to combat ad fraud and ensure both brand and consumer protection alike.”

New customer wins included Samsung, Live Nation, Ticketmaster, House of Blues, AEG, Madison Square Garden, Metrolink, Coast Electric, Census 2020, the University of Pennsylvania and Truthfinder amongst others.

Read the full article from Proactive

Phunware Adds Top US Cancer Center as Mobile Digital Front Door Customer http://52.35.224.131/phunware-top-rated-cancer-center-digital-front-door/ Mon, 02 Mar 2020 18:28:03 +0000 http://127.0.0.1/blog/phunware-location-based-services-cisco-meraki-copy/ Phunware adds top rated US cancer center as mobile digital front door customer on its Multiscreen-as-a-Service (MaaS) platform.

Phunware has announced that it has added one of the top rated cancer hospitals in the United States as a new customer for its mobile digital front door solution. Phunware’s Multiscreen-as-a-Service (MaaS) platform helps patients and clinicians demystify the healthcare journey for both families and staff. 

“MaaS provides our customers with a true digital front door for their patients and staffs, either end-to-end as a complete turn-key solution off-the-shelf, or as software components and tools that they can license, incorporate and build on their own through convenient and frictionless Github downloads and a comprehensive learning management system known as the Phunware Phenom Certified Developer Program,” said Alan S. Knitowski, President, CEO and Co-Founder of Phunware. “Missed appointments cost the US healthcare system more than $150 billion every year, so we’re extremely excited to enable such a prominent, globally recognized healthcare organization to better manage their patient and clinician experience across more than 14 million square feet of facilities spread over a 40 block downtown metropolitan area.”

Read the full article from Proactive

Phunware’s Location Based Services to be Featured in Cisco Meraki Marketplace http://52.35.224.131/phunware-location-based-services-cisco-meraki/ Tue, 25 Feb 2020 17:12:10 +0000 http://127.0.0.1/blog/phunware-himss20-orlando-florida-copy/ Phunware’s Location Based Services to be Featured in Cisco Meraki Marketplace!

Phunware has announced that Cisco Meraki now features the Company’s Multiscreen-as-a-Service (MaaS) Location Based Services (LBS) app in its Meraki Marketplace, which is an exclusive catalog of Technology Partners like Phunware that showcases applications developed on top of the Meraki platform, allowing customers and partners to view, demo and deploy commercial solutions.

“We recently announced a collaboration debut between Phunware and Cisco Webex called the On My Way mobile app portfolio for South by Southwest (SXSW) attendees in March in conjunction with the Cisco Innovation Hub at Capital Factory, where I’ll be discussing three-dimensional cognitive workspaces,” said Randall Crowder, COO of Phunware. “The Meraki Marketplace will now provide Phunware an important channel to thousands of Cisco Meraki customers across more than 100 countries worldwide who need the very best LBS solutions for their network environments without the risk of deploying unproven technology.”


Read the full article from Proactive

SwiftUI: A Game Changer http://52.35.224.131/swiftui-a-game-changer/ http://52.35.224.131/swiftui-a-game-changer/#comments Wed, 17 Jul 2019 16:09:07 +0000 http://127.0.0.1/blog/the-power-of-machine-learning-on-a-user-device-copy/ Last month at WWDC 2019, Apple released a heap of information to continue building on their software platforms. This year’s event was jam packed with new features such as user profiles on tvOS, standalone AppStore on watchOS and dark mode on iOS. Also announced was the stunning Mac Pro and Pro Display which is a […]

Last month at WWDC 2019, Apple released a heap of information to continue building on their software platforms. This year’s event was jam-packed with new features such as user profiles on tvOS, a standalone App Store on watchOS and dark mode on iOS. Also announced were the stunning Mac Pro, a powerhouse of a machine that can tackle extreme processing tasks, and the Pro Display XDR.

Apple has a recurring theme of releasing mind-blowing features, but nothing was more exciting than the announcement of SwiftUI. As Apple’s VP of Software Engineering, Craig Federighi, announced the new UI toolkit, it felt like a metaphorical bomb dropping in the middle of the room!

Shortly after a quick SwiftUI overview, the keynote was over. Developers were left excited, stunned and filled with hundreds of questions about the new UI framework. It felt like the only thing missing from the SwiftUI announcement was the iconic “One More Thing” introduction slide Steve Jobs was known for using.

This blog post explains what SwiftUI is, the benefits of using SwiftUI compared to the current UI programming approach, and how SwiftUI handles data management.

SwiftUI and Declarative Programming

Let’s take a step back and look at what makes this UI toolkit exciting. SwiftUI lets developers build the designs for their apps in a new declarative way. Until now, native iOS developers have built and maintained their UI through imperative programming. Imperative programming requires the developer to manage every UI state themselves and update each item to keep it in sync with their data models. As your UI elements increase, so does the complexity of your state-management logic, leading to state problems.

With declarative programming, the developer sets the rules that each view should follow and the framework makes sure those guidelines are enforced. As the user interacts with your UI and your data model changes, the view rebuilds itself to reflect those changes automatically. This vastly reduces code complexity and allows developers to create robust user interfaces with fewer lines of code. Other development frameworks, such as React Native and Flutter, have already been using this declarative UI paradigm, and developers love how quickly they can put together a UI and how it produces easy-to-read code.

But the declarative framework is only part of the story. SwiftUI brings even more enhancements to iOS programming, such as live previews in Xcode, drag and drop programming and cross-platform development.

Overview of SwiftUI

In order to display the simplicity and beauty of SwiftUI, I think it’s worth seeing a small sample of code. Let’s think about a single-view app that contains a table view. This is a view that iOS developers have programmed countless times. You immediately think of adding a UITableView through Interface Builder or programmatically, then setting your view controller as its data source and delegate. You then need to add the required data source and delegate functions to fill the content of the table view. Before you know it, this simple table view is up to 30 lines of code.

Here’s the Swift code for a basic table view that displays a list of country names:

class MasterViewController: UITableViewController {
    var countries: [Country] = fullCountryList
 
    override func viewDidLoad() {
        super.viewDidLoad()
    }
 
    // MARK: - Table View
    override func numberOfSections(in tableView: UITableView) -> Int {
        return 1
    }
 
    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return countries.count
    }
 
    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
 
        let country = countries[indexPath.row]
        cell.textLabel?.text = country.name
        return cell
    }
}

Now we can take a look at the code needed to create that same table in SwiftUI:

struct MyTableView : View {
    @State var countries: [Country] = fullCountryList
 
    var body: some View {
        List(countries) { country in
            Text(country.name)
        }
    }
}

Believe it or not, the part of that code that actually displays the table view is the three lines inside the body computed property, and that includes the closing bracket. The List view infers the row count from the countries array and configures each row to display the country’s name.

You’ll notice that MyTableView is of type View. In SwiftUI, a View is a struct that conforms to the View protocol, rather than a class that inherits from a base class like UIView. This protocol requires you to implement the body computed variable, which simply expects a View to be returned. Views are lightweight values that describe how you want your UI to look and SwiftUI handles actually displaying UI on the screen.

Using Xcode 11 and SwiftUI, you now have the canvas on the right panel which shows you a live preview of your code. This preview is created by the PreviewProvider block of code that is automatically added with each new View you create. The beauty of this preview is that it refreshes itself as you make changes to your code without having to build and run with each change.

This will surely decrease development time, since you no longer have to compile your entire project to check minor UI adjustments while working to make your app design pixel perfect to the design specs.
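
For reference, a preview provider for the MyTableView example above might look roughly like this (reusing the fullCountryList data from earlier):

struct MyTableView_Previews: PreviewProvider {
    static var previews: some View {
        // Rendered live in the Xcode canvas and refreshed as the code changes.
        MyTableView()
    }
}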

Data Management with SwiftUI

This only scratches the surface of what SwiftUI brings to iOS development. SwiftUI is easy to use but there are advanced features that allow you to take your app to the next level. Developers will want to dive deeper into how data is managed within SwiftUI. To keep your data and UI in sync, you will need to decide which views will maintain the “source of truth” for your app and which views will simply be passed as reference data.

Let’s imagine we’re developing a media player and working on the Player screen. This will have many UI elements, but we’ll simplify it to the play/pause button and a progress view.

Here’s a rough model:

Here you have the PlayerView with smaller SwiftUI views to maintain the PlayButton and ProgressView. Each SwiftUI view will need the isPlaying attribute to know how to update its own UI state, but if each view is maintaining its own value, this could cause state problems.

Instead, we want there to be a “master” isPlaying attribute that all the SwiftUI views can read and react to. Here’s a better model:

The parent PlayerView will hold the master isPlaying attribute and the child views will only reference this variable. When the user interacts with the child UI elements to manipulate the isPlaying boolean, those changes will make their way through the views that are associated with the variable.

Let’s take a look at what this looks like in our code:

struct PlayerView : View {
    let episode: Episode
    @State private var isPlaying: Bool = false
 
    var body: some View {
        VStack {
            Text(episode.title).foregroundColor(isPlaying ? .white : .gray)
 
            PlayButton()
        }
    }
}

This SwiftUI PlayerView is a vertical StackView that has a Text label with the show title and a PlayButton View.

Swift 5.1 will introduce property wrappers, which allow SwiftUI to use the @State and @Binding keywords to add additional logic to your view’s variables. In the code above, the PlayerView is the owner of the isPlaying attribute, so we indicate this with the @State keyword.

struct PlayButton : View {
    @Binding var isPlaying: Bool
 
    var body: some View {
        Button(action: {
            self.isPlaying.toggle()
        }) {
            Image(systemName: isPlaying ? "pause.circle" : "play.circle")
        }
    }
}

Now looking at the PlayButton code, we have the isPlaying boolean here as well, but we added the @Binding keyword to tell this View that this variable is bound to a @State attribute from a parent view.

When a parent view creates a child view, it can pass the state variable into the child’s binding parameter using the “$” prefix:

struct PlayerView : View {
    let episode: Episode
    @State private var isPlaying: Bool = false
 
    var body: some View {
        VStack {
            Text(episode.title).foregroundColor(isPlaying ? .white : .gray)
 
            PlayButton(isPlaying: $isPlaying)
        }
    }
}

By doing this, when a binding variable is changed by some user interaction, the child view sends that change through the entire view hierarchy up to the state variable so that each view rebuilds itself to reflect this data change. This ensures that all your views maintain the same source of truth with your data models without you having to manage each view manually.

This is a high level introduction to data management with SwiftUI. I encourage you to dig further into this topic by watching the WWDC tech talk, Data Flow Through SwiftUI.

Start Working with SwiftUI

The best way to grow your knowledge of SwiftUI and learn its more advanced functions is to start using it to build an app. The great news is that you don’t have to build an entire app from scratch in order to use SwiftUI. Apple provided classes and protocols that allow you to integrate newly designed SwiftUI views into your existing projects.

So for the next feature you work on in your iOS, watchOS or tvOS project, consider developing one of the views in SwiftUI and integrating it into your project.

If you want to keep digging into SwiftUI, check out Apple’s WWDC tech talks and SwiftUI tutorials.

Here at Phunware, our architects and developers stay up-to-date with the latest changes from Apple WWDC and Google IO. If you’re interested in joining the Phamily, check out our latest job openings. We’re currently looking for Android and iOS software engineers!

The post SwiftUI: A Game Changer appeared first on Phunware.

The Power of Machine Learning on a User Device http://52.35.224.131/the-power-of-machine-learning-on-a-user-device/ Tue, 02 Jul 2019 21:34:35 +0000


Until recently, using machine learning inside your products was not a small task. It required a data center with servers running all the time: dedicated space, memory and bandwidth. Now, using the power of machine learning, we can build new, empowering features directly on a user’s device.

Today, we’re showing you how easy it can be to run your own machine learning on a user device. In our step-by-step tutorial, we’re going to go from getting your data, to training your model on a Mac, to running an iOS app with your newfound powers. Read on for instructions!

Rise of Accessibility for Machine Learning

New tools are making machine learning opportunities more and more accessible. Apple has CoreML, a powerful framework optimized for Apple hardware. And Google has TensorFlow Lite models that are made to fit on phones. Both Apple and Google, at their respective annual conferences, dedicated a significant amount of time talking about how they’ve benefitted from moving machine learning to users’ devices, and how they’re empowering developers on their platforms to do the same. With machine learning on your device, you could add these features through your app:

  • Voice control
  • Facial recognition through an app
  • Offline chatbots to assist with FAQs or onboarding
  • Decipher text from signs for accessibility
  • Scan and store text from business cards or important documents
  • Translate text
  • Recognize objects like cars and identify their make/model/type
  • Convenient typing predictions
  • Keyboards that autocomplete your writing in the style of a famous author
  • Add never-before-seen filters to images
  • Tag photos and videos according to who or what is in them
  • Organize emails and messages by what is most important to you

Advantages of Machine Learning

  1. It’s scalable. As the number of users of your app grows, you don’t have to worry about more server traffic or about Internet connections becoming points of failure. You don’t need to get extra memory and storage, and users avoid bandwidth issues because they don’t have to ping the Internet all the time.
  2. It’s fast. You’re not hindered by internet latency because you are using hardware that is optimized for machine learning.
  3. It’s private. Your users can rest assured knowing the information being analyzed stays private. You are not handling their data; everything happens on their devices, at their behest.

That said, there are still costs associated with machine learning. For example, creating the models that will be used on device still depends on massive amounts of quality data and high-powered machines. Yet even these tools are becoming more readily available and easier to use.

Interested in seeing just how easy it can be? Follow our tutorial below!

Before Getting Started.

  • It will be helpful to know a tiny bit of iOS development, including how to run an app on the simulator through Xcode.
  • Also, familiarity with Swift Playgrounds is helpful but not required.
  • Other than that, we’ll take you through the machine learning process one step at a time.

You can find the full code you’ll be writing at the end of this blog post.

Step 1: Getting the Data.

This tutorial focuses on a kind of machine learning called natural language processing (NLP) – which essentially means, “making sense of words.” Specifically, we will be doing a sentiment analysis. This is where we take a word or phrase and decide what feeling is associated with it. Great use cases for this functionality include marketing analysis of customer feedback, evaluating tester interviews for product design, or getting the lay of the land with comments left on user reviews of a product.

Let’s say you want to use sentiment analysis to organize or display messages in your new messaging app or your new email client. You can group them by tone, or color-code messages to give the user a heads up of what’s coming, or help them decide what they should answer right away, or whatever else you can imagine as a helpful feature. (And again, we can do all this by offloading the processing power and smarts to the user’s device without compromising other features users want, like end-to-end encryption.)

First though, you’ll need to get the data. Ours will come as a CSV. Most major spreadsheet programs can open a CSV, so you can easily see what the data looks like.

DOWNLOAD SAMPLE CSV

As with any data, we want to be transparent with where we got our information. I’ve cleaned up the linked dataset, but the basics of it come courtesy of work done for this paper:

Maas, A., Daly, R., Pham, P., Huang, D., Ng, A. and Potts, C. (2011). Learning Word Vectors for Sentiment Analysis: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. [online] Portland, Oregon, USA: Association for Computational Linguistics, pp.142–150. Available at: http://www.aclweb.org/anthology/P11-1015.

This dataset is basically the CSV form of a simple spreadsheet with two columns.

  • One is labeled “sentiment” and is a column with values of either “Positive” or “Negative”. You may see this in other data sets as 1 for positive and 0 for negative, but for coding purposes we need to format as words instead of integers.
  • The other column is the text of the review, and it is labeled “review” at the top. And there are 25,000 reviews! Go ahead and import this into a spreadsheet to see what it looks like.

This type of machine learning is known as classification and we’ll be making a classifier. The reviews are your “x” inputs, or features, and the “Negative”/“Positive” values – known as labels – are like the “y” values you get as output. Your target prediction is a “Negative” or “Positive” value.

Alright. So if you have downloaded the data, now it’s time to write some code to train the model.

Step 2: Training the Model

Training a model means giving our program a lot of data so that it learns what patterns to look for and how to respond. Once the model is trained, it can be exported as a file to run on a device. That means you’re not taking all those gigabytes of training data with you.

It’s sort of like pouring lots of water over a material to make a sculpture that has the right shape. Our training data is the water. The sculpture is the model. It’s what we’ll use once it is trained and in the right shape.

For this example, we’ll use an Xcode Playground, which is like a blank canvas that runs code and is very useful for experimenting.

  1. Open up Xcode, preferably Xcode 10.2 or later. Your version of iOS should be at least iOS 11. In Xcode go to File > New > Playground. Use macOS as the template, and choose “Blank” from the options. Then click “Next.”
  2. Now it will ask you where to save the project and what to call it. I called mine “CreateMLTextClassifier”.
  3. Save your Playground. It will open up with some boilerplate code. Delete all of that code.

The full code for the playground is available at the end, but we’ll also take you step-by-step.

First we’ll import the frameworks we’ll need at the very top. Add this:

import CreateML
import Foundation
import PlaygroundSupport

Then we’ll create a function that will do the actual magic. Below your import statements, write:

func createSentimentTextClassifier() {
 
}

Now we’ll fill out this function. Write everything in between the brackets until told otherwise. The first thing you’ll write inside the brackets are:

// Load the data from your CSV file
let fileUrl = playgroundSharedDataDirectory.appendingPathComponent("MovieReviewTrainingDatabase.csv")

So we have this line, but in order to make it actually work, we’ll need to set up a folder with our CSV in the right location. What’s happening here is that the Playground is looking for a folder called “Shared Playground Data”. So go ahead and make a folder with that name in your “Documents” directory, and then add the “MovieReviewTrainingDatabase.csv” to that folder. Now the Playground can find it!

Back to coding. Below the fileUrl lines you just wrote, add:

guard let data = try? MLDataTable(contentsOf: fileUrl) else {
return
}

This takes the CSV file and converts it to a table format that the program knows how to handle better for machine learning.

Next, below the “guard let data …” lines you wrote, write:

// Split the data for training and testing
let (trainingData, testingData) = data.randomSplit(by: 0.8, seed: 5)

This will give you data for training and testing. The model will train on 80 percent of what’s in the CSV (that’s what the 0.8 means), and the other 20 percent will be held back for later. The model goes over and over the training data, and the testing data, which the classifier has never seen, can then tell us how well the model would have done in the real world.

As a side note, it’s possible to train your machine learning model so many times on the same data that you “overfit” your model. This means it’s great at working with the training data, but it may not be great at generalizing outside that data. Imagine a facial recognition system that easily identifies my face, but when shown a new face it cannot recognize that it is even a face because it had only ever seen my face. Sort of like that.

Now, below the “trainingData, testingData” lines you wrote, write:

// Make the model
guard let sentimentClassifier = try? MLTextClassifier(trainingData: trainingData, textColumn: "review", labelColumn: "sentiment") else {
return
}

This creates the classifier and trains it with the trainingData we made earlier. The CreateML framework already has something called an MLTextClassifier, which is specifically meant for this kind of use. So we tell it that the column of our spreadsheet/CSV with our text is the one with “review” written at the top, and that the “labelColumn”, the labels we’re trying to predict, is the “sentiment” column of our spreadsheet/CSV.

Now below the previous lines write:

// Training accuracy percentage
let trainingAccuracy = (1.0 - sentimentClassifier.trainingMetrics.classificationError) * 100
print("Training accuracy: \(trainingAccuracy)")

This will let us know during training how accurate our model is getting. It should start small, guessing 50 percent, and then grow to high 90s.

Now below the previous lines write:

// Validation accuracy percentage
let validationAccuracy = (1.0 - sentimentClassifier.validationMetrics.classificationError) * 100
print("Validation accuracy: \(validationAccuracy)")

This tells us how our validation is going. We have already divided the data between training and testing. Within the training set, there is a further split between training and validation data: the model trains repeatedly on the training portion, and before each new cycle of training it is checked against the validation data it has not trained on. It’s yet another standard step that helps avoid overfitting and other such problems.

Now below the previous lines write:

// Testing accuracy percentage
let evaluationMetrics = sentimentClassifier.evaluation(on: testingData)
let evaluationAccuracy = (1.0 - evaluationMetrics.classificationError) * 100
print("Evaluation accuracy: \(evaluationAccuracy)")

This finally tells us how accurate our model is on the testing data after all of our training. It’s the closest thing to a real-world scenario.

Now below the previous lines write:

// Add metadata
let metadata = MLModelMetadata(author: "Matthew Waller", shortDescription: "A model trained to classify the sentiment of messages", version: "1.0")

This is just metadata saying who made the model, a description, and the version.

And the last part of the function, below the previous lines, is:

// Export for use in Core ML
let exportFileUrl = playgroundSharedDataDirectory.appendingPathComponent("MessageSentimentModel.mlmodel")
try? sentimentClassifier.write(to: exportFileUrl, metadata: metadata)

This exports the model so we can drop it in for use in our app.

Now that you’ve made your function you’re ready to run it!

Below the brackets of the function write:

createSentimentTextClassifier()

Now run the Playground! It may automatically run, or you can press the play icon in the lower left corner.

You should see things like the training, validation, and evaluation accuracy pop up in the console. After everything was parsed and analyzed, my training took 8 seconds. My training accuracy was 100.0, and validation and test data evaluation were at around 88 and 89 percent, respectively.

Not a bad result! Even this tutorial on deep learning, a subset of machine learning, using a modest LSTM (“Long Short-Term Memory”) neural net got about 87 percent accuracy on the test data.

With less than 50 lines of code and about 8 seconds of training, we’ve analyzed 25,000 movie reviews and exported a machine learning model for use. Pretty awesome.

Step 3: Putting Machine Learning to Work

It’s time to get the app ready to use our new model.

I’ve made a skeletal app where we can enter some text, and then automatically evaluate it as positive or negative. With that basic feature up and running, you can imagine entering text from any source, knowing how to classify it, and then presenting it in the right way for the convenience of your user. (And in the future, if you have the labeled data, you could do things like determine whether something is or is not important, or divide text into more categories other than just “Positive” or “Negative”.) The project is available on GitHub.

VIEW GITHUB PROJECT

Once you’ve cloned or downloaded the project, open it in Xcode. Next, open a Finder window for the Shared Playground Data folder you created. Then drag and drop the “MessageSentimentModel.mlmodel” file you created through the Playground into the Xcode project, just below the ViewController.swift file.

When it asks you how you want to import it, check all the checkboxes and choose “Create Groups” from the radio-button options.

Now you’re ready to add the code to make the model work.

Go to the ViewController.swift file, and below “sentimentLabel” add:

let sentimentModel = MessageSentimentModel()

Next, uncomment the code in “checkImportanceTapped(_ sender: UIButton)”.

It starts with these lines:

guard let languageModel = try? NLModel(mlModel: sentimentModel.model) else {
return
}

This wraps our model in an even easier-to-use framework so that we can take the user’s input and update the text of the sentimentLabel in one line, like so:

sentimentLabel.text = languageModel.predictedLabel(for: text)

And it’s as simple as that!

Now let’s run it.

If we type in “I’m doing well” I get the label “Positive” at the bottom. So far so good!

And “I had a really bad day” is …

And now, we’re off to the races! Play around with it yourself!

I hope you’ve enjoyed this demonstration and primer on machine learning, and can imagine the potential of running AI on device. At Phunware, we’re always working for better quality code. That means figuring out how to apply the latest technologies (such as on-device machine learning) to challenging, often high-profile projects. In fact, Phunware’s Knowledge Graph uses machine learning and proprietary algorithms to curate over five terabytes of data every day from approximately one billion active devices each month. This data is then used to provide intelligence for brands, marketers and media buyers to better understand their customers, engage and acquire new customers, and create compelling user experiences.

Feel free to reach out with any questions about the myriad possibilities around mobile (or any sized screen) in this field or others. Thank you for reading!

Interested in joining the Phamily? Check out our latest job openings. We’re currently looking for Android and iOS software engineers!

Full Playground code:

import CreateML
import Foundation
import PlaygroundSupport 
 
func createSentimentTextClassifier() {
// Load the data from your CSV file
let fileUrl = playgroundSharedDataDirectory.appendingPathComponent("MovieReviewTrainingDatabase.csv")
 
guard let data = try? MLDataTable(contentsOf: fileUrl) else {
return
}
 
// Split the data for training and testing
let (trainingData, testingData) = data.randomSplit(by: 0.8, seed: 5)
 
// Make the model
guard let sentimentClassifier = try? MLTextClassifier(trainingData: trainingData, textColumn: "review", labelColumn: "sentiment") else {
return
}
 
// Training accuracy percentage
let trainingAccuracy = (1.0 - sentimentClassifier.trainingMetrics.classificationError) * 100
print("Training accuracy: \(trainingAccuracy)")
 
// Validation accuracy percentage
let validationAccuracy = (1.0 - sentimentClassifier.validationMetrics.classificationError) * 100
print("Validation accuracy: \(validationAccuracy)")
 
// Testing accuracy percentage
let evaluationMetrics = sentimentClassifier.evaluation(on: testingData)
let evaluationAccuracy = (1.0 - evaluationMetrics.classificationError) * 100
print("Evaluation accuracy: \(evaluationAccuracy)")
 
// Add metadata
let metadata = MLModelMetadata(author: "Matthew Waller", shortDescription: "A model trained to classify the sentiment of messages", version: "1.0")
 
// Export for use in Core ML
let exportFileUrl = playgroundSharedDataDirectory.appendingPathComponent("MessageSentimentModel.mlmodel")
try? sentimentClassifier.write(to: exportFileUrl, metadata: metadata)
}
 
createSentimentTextClassifier()

The post The Power of Machine Learning on a User Device appeared first on Phunware.

Phunware Team Takeaways from Google I/O 2018 http://52.35.224.131/phunware-takeaways-google-io-2018/ Wed, 30 May 2018 16:01:10 +0000

The world was watching earlier this month as Google CEO Sundar Pichai demonstrated a world first: a very realistic phone call made by a Google Assistant, booking a hair salon appointment on behalf of its “client”. While this moment quickly made headlines, it was only the beginning of three days of debuts, announcements and presentations across the world of Google.

I asked the team to weigh in on the highlights from this year while the excitement is still fresh in our minds. From new features to on-site sessions, we’ve covered quite a bit of ground. Here’s what you need to know, from our team to yours, about the future of Android as shown at Google I/O 2018.


The new Material Sketch plugin, demonstrated.

“I enjoyed the inspirational sessions this year, especially ‘Designing for Inclusion: Insights from John Maeda and Hannah Beachler.’ Seeing two leaders in the design field talk about their experiences and take on the industry was motivational. I am also excited about the new material theming as part of Material Design 2.0, as it enables us to push Android designs to better align with each brand’s guidelines.”

—Ivy Knight, Senior UX/UI Designer


Slices, demonstrated.

“I am really excited about the Navigation library and Slices. Navigation will eliminate a ton of brittle code that we commonly write for Android apps, and I am looking forward to updating Phunware’s App Framework to integrate with it. Slices is really interesting, as it will help our users re-engage with apps that they may have forgotten about. It also enables some really cool use cases such as searching for a doctor’s name and being able to offer the user routing straight to that doctor in a hospital app.”

—Nicholas Pike, Software Architect Android, Product VS


Alex Stolzberg & Nicholas Pike

“I was really excited about the new WorkManager that allows for easy background processes to be performed. You can also easily chain and sequence jobs to make the code very clean for a large amount of processes rather than having a cumbersome nested callback structure, reducing the possibility for bugs when writing features or making changes later on.”

—Alex Stolzberg, Software Engineer Android, Product


L to R, Nicholas Pike, Jon Hancock and Ian Lake. Ian is a former Phunware employee turned Googler who stays involved both with his former coworkers and the larger developer community.

“I’m very excited that Google is taking an opinionated stance on development architectural patterns. Writing apps for Android has been a wild west for years, and having some direction and guidance directly from Google will result in new Android developers entering the field with a greater understanding of how to build complete, stable apps. When those new developers find their first jobs, they’ll be more likely to be comfortable and ready to contribute quickly.”

—Jon Hancock, Software Architect Android


Ram Indani and Romain Guy, an Android Graphics Team Manager, at the in-person app review session.

“I really liked the app review sessions. It shows that Google cares about the applications being developed and is willing to work with the community to improve them. Feedback received from the reviewers is valuable and they ensured that the Googler reviewing the app had expertise in the apps they were reviewing.”

—Ram Indani, Software Engineer Android


L to R, Alex Stolzberg, Ram Indani, Nicholas Pike and Duong Tran.

“I am excited about what the new Cast Application Framework has to offer. Some of the benefits of the new framework include simpler code implementation and ad integration as well as enhanced performance and reliability. Also, new features such as Google Assistant with voice command are automatically included. I was amazed by the Cast Application Framework team’s willingness to work with developers to create unique solutions for our client’s business requirements, such as providing a custom framework and unique branding options.”

—Duong Tran, Software Engineer Android


Want to stay up to date on the latest and greatest in mobile news? Subscribe to our monthly newsletter!

SUBSCRIBE TO THE NEWSLETTER

The post Phunware Team Takeaways from Google I/O 2018 appeared first on Phunware.

Going Worldwide: 7 App Localization Tips for Android Devs http://52.35.224.131/going-worldwide-7-tips/ Wed, 09 May 2018 13:00:30 +0000

(Originally published on May 25, 2017)

Here at Phunware, we are dedicated to making accessible and beautiful apps for an international audience, spanning all languages and locales. We also promote continuous learning for our developers and sharing our knowledge with the developer community. With that purpose in mind, I’d like to pass along a few tips and tricks I learned at DroidCon Boston 2017 that make it easier to adapt your Android apps to reach more users worldwide.

1. Apply automatic right-to-left (RTL) layout support

Languages like Arabic, Hebrew and Persian are written from right to left, which requires a different layout from left-to-right languages like English and Spanish. With the newer Android SDKs, you can skip the step of providing those RTL layouts separately.

MinSDK 17+ automatically updates layouts for RTL with the following:

  • In your AndroidManifest.xml, specify supportsRtl = true.

    • With this setting, Android will update layouts for RTL automatically when the system language calls for it.
  • Use start and end in layout attributes rather than left and right to get the appropriate automatic RTL changes.
  • Remember to add margins on both sides / ends of your layouts.

MinSDK 19+ automatically handles RTL mirroring of vector icons with the automirrored attribute:

  • Define which icons should or should not be mirrored with RTL (for example, the search icon).
  • Reference the Material Design docs for suggestions on what should or should not be mirrored.

2. Prevent grammar issues by using strings.xml with placeholders for arguments instead of concatenating strings

Because grammar differs from language to language, developers cannot assume sentence structure. Rather than concatenating strings together, use placeholders for arguments in strings.xml (Ex: %1$s, %2$d) so your translation service can place them grammatically; a short example follows the list below. Also, make sure your translation service understands that these placeholder values should be left untouched.

  • To help translators understand placeholder values:
    • Specify a placeholder argument name (id="color").
    • Provide a placeholder example (example="blue").
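As a minimal sketch of the placeholder approach (the resource name, arguments and views below are made up for illustration, not taken from a Phunware project), the Kotlin side simply passes the arguments to getString and lets each localized resource decide where they land:

// strings.xml (assumed): <string name="points_message">%1$s has earned %2$d points</string>
// A translation can reorder %1$s and %2$d freely; the code below stays the same.
val message = getString(R.string.points_message, userName, points)   // inside an Activity or Fragment
pointsLabel.text = message   // pointsLabel is a hypothetical TextView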

3. Use <plurals> to handle one-to-many results

This little trick will save you time and hassle (it’s also a suggested Android practice), and it makes for cleaner code. Here’s how it looks:
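The original snippet was shown as an image that has not survived; as a stand-in sketch (the resource name and counts are made up), declare a plurals resource and let getQuantityString pick the right form:

// res/values/strings.xml (assumed):
// <plurals name="review_count">
//     <item quantity="one">%d review</item>
//     <item quantity="other">%d reviews</item>
// </plurals>
val label = resources.getQuantityString(R.plurals.review_count, reviewCount, reviewCount)
// The first reviewCount selects the plural form; the second fills the %d placeholder.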

  • Warning: some languages do not have the concept of plurals. You will have to adjust your plural definitions for those languages accordingly.

4. Speaking of strings, avoid using Spannables to format strings that will be localized

Again, since sentence structure and grammar can change from language to language, the placement of the formatted part of the string might not necessarily be where you’d expect. If you must use a Spannable, don’t use hardcoded indices to format characters (bold, italic, etc.)—you might just BOLD something that makes no sense at all. Instead, programmatically find the parts of the string to format the characters.

Instead of Spannables, you can use:

  • HTML formatting in strings.xml (ex: <b>Hello</b>)
  • Html.fromHtml(String text)
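As a small sketch of that second option (the string name, text and views are assumptions; HtmlCompat from androidx.core is the non-deprecated counterpart of the framework Html.fromHtml(String) named above):

// strings.xml (assumed): <string name="welcome_html">Welcome back, &lt;b&gt;%1$s&lt;/b&gt;!</string>
// The tags are escaped so they survive getString() formatting, then rendered with HtmlCompat.
val source = getString(R.string.welcome_html, userName)
welcomeLabel.text = HtmlCompat.fromHtml(source, HtmlCompat.FROM_HTML_MODE_LEGACY)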

5. Use the “German Test” to check text bounds for truncation or bad layouts

Sometimes, localized text can extend beyond the bounds of your layouts—not good. To check for this, use German. It’s a useful test language for this issue because English-to-German translations result in text expansion of up to 20%, with compound words replacing multiple-word English phrases. At the same time, German uses relatively few special characters, so you’ve got a relatively “pure” test for text bounds.

6. Use the Fastlane Screengrab tool to streamline localization QA

This new tool automates the capture and collection of screenshots across each localized screen in your app, uploading each one to a folder where QA can easily compare and verify each version. Here’s how to use it:

  • First, write espresso tests to go through each screen in your app.
  • Then, set up Fastlane Screengrab to take a snapshot of each screen the tests go through and upload to a folder (it can take in several languages, and run against many devices).
  • Finally, compare and verify screenshots.


(Image source: Fastlane Github.)

7. Use Fastlane Screengrab and Supply to localize on the Google Play Store

Gather the appropriate screenshots with Fastlane Screengrab, then use Fastlane Supply to push up your store metadata, screenshots and .apks quickly and easily. Use Timed Publishing mode so you can review and make changes before final upload. And don’t forget the Google Play character limits for your app listing. (You might want to create a script to count characters and verify that they are within the store limits.)

Finally, here are a few reminders for any developers working on app internationalization and localization:

  • Many languages use special characters that don’t appear in English, so make sure the fonts that you support can handle any special characters needed (not all of them can).
  • Default strings must always be defined in the values/strings.xml file.
  • Watch out for special characters in your strings.xml that must be escaped (Ex: \', \").
  • Keep an eye out for these important Lint warnings:
    • Extra translation (Too many translations)
    • Incomplete translation (Missing translations)
    • Inconsistent number of placeholders (more placeholder arguments in one translation versus another)

I enjoyed sharing these tips with the rest of the Phunware development team and I hope they’ll prove just as useful for you. Want to join us? Phunware is always looking for curious and creative developers who want to work at a company where mobile is top priority. Check out our open positions and let’s get busy changing the world.

This blog post was made with the permission of Phil Corriveau (Intrepid), who presented the class Bonjour, Monde: Optimizing Localization at DroidCon Boston 2017.

Want to learn more? Subscribe to our newsletter for monthly updates on mobile technology, strategy and design.

SUBSCRIBE TO OUR NEWSLETTER

The post Going Worldwide: 7 App Localization Tips for Android Devs appeared first on Phunware.

Android Data Binding with RecyclerViews and MVVM: a Clean Coding Approach http://52.35.224.131/android-clean-coding-approach/ Mon, 08 Jan 2018 20:58:17 +0000

When users open an Android app, what they see is the result of Android developers assigning data from various inputs (databases, the internet, etc.) to elements of the app user interface. Until 2015, the process of assigning (or “binding”) data to UI elements was tedious and potentially messy. During its I/O developer conference that year, however, Google demonstrated its Data Binding Library, which gave developers the ability to streamline and clean up the process in many ways.

When the Library was Beta-released later that fall, I was eager to learn more about Android data binding and its applications, but it was still in its infancy and Google’s disclaimer warned against trusting it in any released app. Fast forward two years to today, and the Android Data Binding Library has matured considerably. The disclaimer is now gone, and I recently began exploring data binding in my daily development work.

Like any good Android developer, one of my main goals is to write clean code, code that “never obscures the designer’s intent but rather is full of crisp abstractions and straightforward lines of control,” as author Grady Booch put it. I have found that using data binding with the Model-View-ViewModel (MVVM) architectural pattern and RecyclerView accomplishes many of the objectives of clean coding, including reducing the requirement for boilerplate code, facilitating code decoupling and improving readability and testability—not to mention reducing development time.

Unfortunately, Google’s examples of using data binding in Android apps are rather simplistic and lack detail. So let’s explore the necessary steps to set up a project with data binding, a RecyclerView and MVVM—and write clean code in the process.

A Quick MVVM Primer / Refresher

MVVM is an architectural pattern that was created to simplify user interface programming. Google appears to be encouraging the use of MVVM for data binding. In fact, the Architecture Components of its Data Binding Library are modeled on the MVVM pattern.

The main components of MVVM are the Model, View and ViewModel, and its structure essentially supports two-way data binding between the latter two.

  • The View defines the user interface structure, layout and design and consists of views, layouts, scroll listeners and so on. It also notifies the ViewModel about different actions.
  • The ViewModel serves as the intermediary between the View and the Model. It provides data to the View via bindings and handles View logic. It calls methods on the Model, provides the Model’s data to the View and notifies the View of updates.
  • The Model is the data domain model and the source of application logic and rules. It provides data to the ViewModel and can update the ViewModel using notification mechanisms such as data access objects, models, repositories and gateways.

As you can see, the View knows about the ViewModel and the ViewModel knows about the Model. The Model, however, doesn’t know about the ViewModel and the ViewModel doesn’t know—or care—about the View. This separation enables each component to grow independently, and this design pattern makes the user interface distinct from the corresponding business logic. The result is easier application development, testing and maintenance.

Data Binding with MVVM and RecyclerView

Follow the steps below to set up Android data binding using MVVM and RecyclerView.

1. Update the Gradle File(s)

The first step in adding data binding to a project is changing the module’s build.gradle file(s). Recent updates to the Android Data Binding Library have enabled easier data binding by adding a data binding closure to the Android closure, and because data binding is included in Google’s Application and Library plugins you no longer need to add a dependency. Instead, use the following closure:
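The closure itself was shown as an image that has not survived; purely as a sketch, on a recent Android Gradle Plugin with the Kotlin DSL the module-level build.gradle.kts enables it like this (in the Groovy DSL of the article’s era, the equivalent was a dataBinding { enabled = true } block inside the android closure):

// build.gradle.kts (module level), assuming a recent Android Gradle Plugin
android {
    buildFeatures {
        dataBinding = true
    }
}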

2. Prepare Your Tags

To use data binding in Layout Files, you must wrap the normal View Groups or Views in <layout> tags containing data tags with variables for bindable methods and binding adapters. Bindable methods are typically referenced with app:data="@{viewModel.data}", where the “viewModel” variable is the ViewModel, set on the binding (more on that later).

To reference the bindable method annotated with @Bindable, you only need to specify viewModel.data. You can still access methods not annotated with @Bindable by using the full method name, such as viewModel.getData. As seen below, to set up a RecyclerView with data binding, just add a method reference from which to acquire the data.

Activity Layout File

Disclaimer: Some attributes, namespaces, etc. have been omitted to highlight how to use data binding.

RecyclerView Adapter Item Layout File

Disclaimer: Some attributes, namespaces, etc. have been omitted to highlight how to use data binding.

3. Set Up the ViewModel

The way you set up and use data binding is similar for both activities and fragments. Depending on the application’s need for the context, UI and lifecycle, you can reference the ViewModel by inflating and binding the View with the data binding library or by inflating it independently and binding to it with the library.

Next, call the appropriate ViewModel methods from the UI. One way to instantiate the binding is to use the DataBindingUtil’s setContentView method. Calling the binding’s setViewModel method sets the ViewModel variable reference, named “viewModel,” as depicted here:
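The code that was depicted here is gone from the archive; the following is a minimal Kotlin sketch under assumed names, using the AndroidX packages (the 2018 article predates AndroidX and used android.databinding). ActivityMainBinding is the class generated from a hypothetical activity_main.xml that declares a <variable name="viewModel"> and a RecyclerView with id recyclerView:

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.databinding.DataBindingUtil
import androidx.recyclerview.widget.LinearLayoutManager

class MainActivity : AppCompatActivity() {

    private lateinit var binding: ActivityMainBinding
    private val viewModel = DataListViewModel()   // assumed ViewModel class

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Inflate the layout and create the binding in one call
        binding = DataBindingUtil.setContentView(this, R.layout.activity_main)
        // Hand the ViewModel to the layout's "viewModel" variable (calls the generated setViewModel)
        binding.viewModel = viewModel
        initRecyclerView()
    }

    // Separate method per the clean coding tip below
    private fun initRecyclerView() {
        // Only the layout manager is set here; the adapter is wired up through the
        // app:adapter binding expression described in sections 6 and 9.
        binding.recyclerView.layoutManager = LinearLayoutManager(this)
    }
}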

Clean Coding Tip: Separate concerns and increase readability by providing individual methods for topics such as binding and RecyclerView initialization.

4. Implement the Adapter

When implementing the Adapter, the ViewHolder needs to set the ViewModel on its binding and handle both binding and unbinding of the View. A lot of online examples don’t show unbinding the View, but it should be done to prevent problems.
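A minimal Kotlin sketch of that idea, with assumed names (ItemDataBinding would be generated from a hypothetical item_data.xml layout declaring a viewModel variable); the important parts are binding in onBindViewHolder and unbinding in onViewRecycled:

import android.view.LayoutInflater
import android.view.ViewGroup
import androidx.recyclerview.widget.RecyclerView

class DataAdapter : RecyclerView.Adapter<DataAdapter.ViewHolder>() {

    private val items = mutableListOf<DataItemViewModel>()

    fun setItems(newItems: List<DataItemViewModel>) {
        items.clear()
        items.addAll(newItems)
        notifyDataSetChanged()
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ViewHolder {
        val inflater = LayoutInflater.from(parent.context)
        return ViewHolder(ItemDataBinding.inflate(inflater, parent, false))
    }

    override fun onBindViewHolder(holder: ViewHolder, position: Int) = holder.bind(items[position])

    override fun getItemCount() = items.size

    // Unbind when the row is recycled so stale bindings don't cause problems
    override fun onViewRecycled(holder: ViewHolder) = holder.unbind()

    class ViewHolder(private val binding: ItemDataBinding) : RecyclerView.ViewHolder(binding.root) {
        fun bind(itemViewModel: DataItemViewModel) {
            binding.viewModel = itemViewModel
            binding.executePendingBindings()
        }

        fun unbind() = binding.unbind()
    }
}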

5. Notify the Adapter for Data Set Changes

In this ViewModel, the data (items) are made available via the method getData(). When you need to notify the Adapter of data set changes, call notifyPropertyChanged(int) instead of notifyChange() (which would notify changes for all of the properties and likely cause issues).
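As a sketch of what such a ViewModel might look like in Kotlin (class and property names are assumed; the data property compiles to the getData() method the layout references, and BR is generated by the data binding compiler):

import androidx.databinding.BaseObservable
import androidx.databinding.Bindable

class DataListViewModel : BaseObservable() {

    @get:Bindable
    var data: List<DataItemViewModel> = emptyList()
        private set(value) {
            field = value
            // Notify only the "data" property; notifyChange() would refresh every binding
            notifyPropertyChanged(BR.data)
        }

    fun onItemsLoaded(newItems: List<DataItemViewModel>) {
        data = newItems
    }
}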

6. Update the Method

This binding adapter method is the other part of the glue to update data in the Adapter. In the MVVM pattern chart, the ViewModel notifies the View of property changes by calling this method. Attribute data is referenced as app:data="@{viewModel.data}" and ViewModel.data references method getData, annotated with @Bindable. When combined with the call to notifyPropertyChanged(BR.data), this reference calls the RecyclerViewDataBinding.bind(RecyclerView, DataAdapter, List), annotated with @BindingAdapter({"app:adapter", "app:data"}).
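The article routes this method through an instance-based binding component (see sections 7 and 8 below). Purely as a simplified sketch, the same glue can be written as a top-level Kotlin @BindingAdapter function (names assumed; the app: prefix is dropped because the library matches on the attribute name):

import androidx.databinding.BindingAdapter
import androidx.recyclerview.widget.RecyclerView

// Called whenever the layout's app:adapter or app:data expressions change
@BindingAdapter("adapter", "data")
fun bindRecyclerView(recyclerView: RecyclerView, adapter: DataAdapter, items: List<DataItemViewModel>?) {
    if (recyclerView.adapter == null) {
        recyclerView.adapter = adapter
    }
    adapter.setItems(items.orEmpty())
}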

Disclaimer: Although some readers may disagree with having an adapter reference in the ViewModel, this ViewModel provides notifications to the view. The components can be unit tested individually with JUnit and Mockito and together with integration / UI tests.

DataItemViewModel : BaseObservable

Model

7. Set the Default Component

To reuse data binding code among multiple classes, set your data binding component as the default component as shown below.
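The snippet that was shown here did not survive; as a brief sketch, registering the article’s AppDataBindingComponent (see section 8) as the default typically happens once, for example in Application.onCreate():

import android.app.Application
import androidx.databinding.DataBindingUtil

class App : Application() {
    override fun onCreate() {
        super.onCreate()
        // Layouts can now resolve instance @BindingAdapter methods through this component
        DataBindingUtil.setDefaultComponent(AppDataBindingComponent())
    }
}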

Clean Coding Tip: Provide a custom Data Binding Component class so you can abstract binding methods from ViewModels and isolate them for testability. Consider mocking the component class for better testing of the component classes.

8. Set Your Data Binding Class Accessor Methods

The data binding library requires classes using the @BindingAdapter annotation to have associated “get” accessor methods.

AppDataBindingComponent : android.databinding.DataBindingComponent

9. Set the Adapter on RecyclerView

This is where you can set the Adapter on RecyclerView and where adapter updates occur.

10. Click Event Handling

When a click event results in handling data accessible in the ViewModel, the best approach is to set the onClick attribute on the View in the bindable layout, with android:onClick="@{viewModel::onClick}" specified for the View. The ViewModel must have an onClick(View) method implemented to handle the click event.
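A minimal sketch of the ViewModel side (the DataItem class and its title field are assumptions for illustration):

import android.util.Log
import android.view.View
import androidx.databinding.BaseObservable

class DataItemViewModel(private val item: DataItem) : BaseObservable() {

    // Referenced from the layout as android:onClick="@{viewModel::onClick}"
    fun onClick(view: View) {
        Log.d("DataItemViewModel", "Item tapped: ${item.title}")
    }
}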

Tips for Keeping Your Code Clean

Some final tips from the trenches for Android data binding:

  • Making extra calls to notifyPropertyChanged(BR.data) or notifyChange() can lead you down a path of producing bugs, including duplicated data.
  • There is a timing bug with the data binding library and the use of ViewModels extending BaseObservable, where calling notifyPropertyChanged(int) or notifyChange() results in no action taking place. This occurs because the OnPropertyChangedCallback hasn’t been added yet. Until the bug is fixed, consider using this temporary fix: add an OnPropertyChangedCallback to the ViewModel for handling the corresponding action. It may help to read the generated data binding classes to better understand the problem.
  • Debugging data binding build issues can be tricky. The error messages don’t provide a clear understanding as to what the issues may be. Sometimes an issue is due to an incorrect object type passed into a binding adapter method. Other times, an issue is caused by using data binding methods prior to setting the ViewModel.

DOWNLOAD SOURCE FROM GITHUB

At Phunware, we’re always working for better quality code. That means figuring out how to apply the latest technologies (such as data binding) to challenging, often high-profile projects. Interested in joining the Phamily? Check out our latest job openings and don’t forget to subscribe to our newsletter.

SUBSCRIBE TO THE NEWSLETTER

The post Android Data Binding with RecyclerViews and MVVM: a Clean Coding Approach appeared first on Phunware.

Look Who’s Talking Now: Exploring Voice and Conversational UX http://52.35.224.131/look-whos-talking-exploring-conversational-ux/ Fri, 20 Oct 2017 18:00:54 +0000

I don’t like talking. I’m the type of person who’s perfectly happy to sit silently in a group. So I felt a bit uncomfortable when “voice” was being touted as THE next big mode in interactive design. Speaking to an app sounded like a chore to me, but I jumped into the verbal side of conversational user experience (UX) design with Susan Miller’s Astrology Zone for Amazon Alexa. Now? I’m excited about what I’ve learned and what can be done with this new mode of interaction.

Conversational UX = Having a Dialogue

Part of a designer’s job is to tell a story to the user. In designing a conversational UX, you’re telling your story through a direct dialogue, whether it’s via voice with a virtual assistant or via text with a chatbot. You do that by creating a script in a developer-friendly syntax that defines the user experience. A few things to remember:

  • Conversations aren’t straight lines. Gone are the days of touch-tone phone tree interactions—there is no single path to follow. That means you have to think about the different ways a conversation could go and anticipate as many variables in your design as you can.
  • Responses will vary. No two users talk the same way, so your design should include some flexibility in how to respond. How this works depends heavily on the technology you use and how much natural language it can recognize. In the case of Astrology Zone, we were building specifically for Amazon Alexa, which has a particular structure and recognition pattern. Google Assistant draws upon more than a decade of search data to enable a deep natural language recognition system, so it will understand variations in phrasing more easily. (Siri performs similarly.)
  • Users want to feel they are talking to a person. Think about it: Wouldn’t it be awkward if somebody you’re talking to suddenly sounded automated? Maintain a normal conversational tone across each interaction. Assistant even has the ability to add in small talk, which can keep the user engaged. With Astrology Zone, we added in messaging that Susan Miller uses to address her followers during sign-off.
  • Remember, “Voice” doesn’t always mean just talking. Remember when I said that I don’t like talking? It turns out many people feel that way. Google, Amazon and Apple have responded by adding more visual components into their virtual assistants. For example, Google Assistant recently updated to allow for text input in addition to voice on the phone, which is very helpful when I can’t speak over my babbling baby. You still need to apply the same design approach, however, whether a user is literally speaking or conversing via text. On mobile, virtual assistants provide an opportunity to enhance the experience with visual elements like images, search suggestions, links and app content.

Conversational UX Guidelines Aren’t Fully Baked Yet

Read the guidelines, but bear in mind they may be incomplete. We’re in the early days of conversational design and documentation is still evolving. While my team was working on Astrology Zone for Alexa, we read everything we could get our hands on and spent plenty of time interacting with Alexa. Still, we quickly hit roadblocks with development. Why? I’d written our phrases according to what the guidelines said we could do—but those guidelines didn’t say what we couldn’t do.

I’m sure the conversational UX guidelines will become more thorough over time. In the meantime, if you have time to test and iterate, you can experiment with how phrases should be set up. If not, stick to the exact wording provided in the guidelines to be safe. Be careful with verbs and connecting words. There are only a limited number available right now (fewer than we thought). And watch out for possessives—they’re a pain. We never could get Alexa to understand “yesterday’s” correctly.

Further Exploration Reveals New Insights

As I got more interested in conversational design, I kicked the tires of multiple virtual assistant options. I had the most fun with Google Assistant. It’s by far the most robust of our new robotic-voiced friends. Its app development tool, Actions on Google, is amazing and enjoyable to use (I wish every tool worked the same way).

After being introduced to Actions at this year’s Google I/O, I dove in and was able to create a demo in no time. Actions allowed me to focus on writing interactions and possible responses rather than formatting and more technical aspects of design because the system trains the action for variants with every input. On the other hand, writing for Alexa feels like diagramming sentences.

This exploration led me to see how fast I could make a basic app to tell facts about my baby, and even add in some personality and expressiveness. I also tinkered with making a demo Guardians of the Galaxy experience—and learned that in the long run, you don’t want invocations and responses to be exactly the same. (“I am Groot!” followed by an “I am Groot” response…. And another “I am Groot!”… Trust me when I say this doesn’t work out well.)

Conversational UX Design Tips

Here are a few tips from my time on the Astrology Zone project and my other explorations:

  • Provide natural guidance. Create an introduction that tells the user what the app can do and provides some simple suggestions for interactions. Users discover and explore conversational apps differently—they can’t just tap around a screen to find features—so you have to help them out.
  • Keep it short and sweet, and let users be brief as well. For example, to initiate a conversational experience, it’s a good idea to allow users to say only the “invocation” (the app name) and some parameters. A user can initiate Astrology Zone on Alexa by simply saying “Astrology Zone Pisces Today.”
  • Let the user mess up and guide them back. If an app only states that something went wrong, the user doesn’t know if it was something they did or something went wrong with the app. That’s frustrating. Instead, provide an error message with the reason for the error (wherever possible), along with options the user can select to get back on track.
  • Test with multiple people who have different accents and speech patterns. You want to make sure users can comfortably converse with your UI.

Conversational Design Could Improve Accessibility

Users with blindness or visual impairments rely on screen readers to understand and interact with digital devices. On mobile, these readers are built into the operating systems—VoiceOver on iOS, TalkBack on Android devices. Conversational design can help enhance these current features by scripting the user’s experience with the app.

Last year, I worked on a proof of concept for a hospital mobile solution that would make indoor mobile wayfinding accessible for users with blindness or limited vision. I had to think through and write the experience—in this case, the user’s dialogue with the app was in the form of gestures. Imagine if this was pushed further with the use of a virtual assistant. Users would be able to navigate hospital facilities through natural conversation, without the cognitive load of dealing with a standard app UX.

We could also harness these discoveries and disciplines to design for any situation where visual intake of information might not be possible or advisable. For example, interacting with a visual UI on your phone is really not a good idea when driving—in fact, it’s illegal in many places. In the near future, apps may switch to voice-only conversational UI when driving is detected.

After all of this exploration and experience, I’ve come to believe that voice and conversational design will soon be an essential part of UX. I look forward to building even more in the future. In the meantime, check out Susan Miller’s Astrology Zone for Amazon Alexa.

LEARN MORE ABOUT ASTROLOGY ZONE’S ALEXA SKILL

The post Look Who’s Talking Now: Exploring Voice and Conversational UX appeared first on Phunware.

Phunware Team Takeaways from Google I/O 2017 http://52.35.224.131/phunware-takeaways-google-io-2017/ Fri, 02 Jun 2017 20:34:18 +0000

Returning to work after an action-packed conference is always an effort, and for the Phunware team that attended Google I/O, that felt especially true. This year, our group of ten attendees included members from multiple departments including engineering and creative. We were delighted and excited by what we saw and learned.

We were stoked to interact with and learn from the Android community, and especially to see the current and potential uses for new Google Assistant features. Here are some takeaways from our favorite sessions, things we’re looking forward to and a little Android-related “phun.”

What Excited You Most at Google I/O 2017?

We asked our group to weigh in on the announcements and products they were most inspired and excited by at Google I/O 2017. Here’s what they had to say:

“Apart from the new Android announcements (like Kotlin and Android Architecture Components), I was most excited about the other conference attendees. Seeing so many passionate developers in one place really gets me inspired.”
– Dustin Tran, Software Engineer (DT)

“I was most excited about Kotlin and the new Android Architecture Components stuff, but I am also very interested in the Google Assistant API and writing apps for that platform. Android Things was also really cool to see in action.”
– Alex Stolzberg, Software Engineer (AS)

“I was most excited about the incredible community collaboration focus this year. So many of the announcements came about because the Android dev community asked for specific things. Google recognized that and invited non-Googlers from the community on stage for the first time ever.”
– Jon Hancock, Software Engineer (JH)

“I really enjoyed talking to some of the Google design team and going to the sessions on the Google Assistant.”
– Ivy Knight, UX / UI Designer (IK)

“I was really excited to be a part of such a huge conference—and to hang out with the California-based Phunware devs I only see every couple of years.”
– Sean Gallagher, Software Architect (SG)

“What was I most excited about at I/O? The amount of code and time we can save with Kotlin and the new Architecture Components.”
– Nick Pike, Software Architect (NP)

Want to stay up to date on the latest and greatest in mobile news? Subscribe to our monthly newsletter!
SUBSCRIBE TO THE NEWSLETTER

What Was the Most Impressive Session at I/O 2017?

Thanks to our ten-person Phunware team, we were able to attend a broad selection of the 150+ sessions offered at I/O this year. Which impressed us the most?

“I was most impressed by What’s New in Android, where we learned about many tools—like an official Android emulator with Google Play pre-installed, and Android Profiler which allows precise and real-time app monitoring—that will make Android development much easier. Equally impressive were the Architecture Components sessions. Google has realized that developers often have to solve the same problems: network calls to retrieve data through orientation changes and caching / persisting that data. Now, they’re providing easier-to-use and standardized components to utilize when implementing these common use cases.”
– DT

“My favorite session was probably the Android Things talk about Developing for Android Things in Android Studio.”
– AS

“My favorite session was Introduction to Kotlin because of the sheer number of jaw-dropping moments. “
– JH

“Building Apps for the Google Assistant got me excited to try building an Assistant app myself. API.ai looks great.”
– IK

“My favorite session was the Office Hours during which we got some really good one-on-one time with Android NDK team devs. They answered a lot of tough questions. Not only were they helpful, they were also great folks!”
– SG

“Life is Great and Everything Will Be Ok, Kotlin Is Here! (Pretty self-explanatory, right?)”
– NP

How About the Best I/O 2017 Puns?

One of the best things about attending conferences like I/O is the inside jokes. In case you’re feeling left out, here are some of the Phunware team’s favorite (terrible) Android-related puns:

“An Android app walks into a bar. Bartender asks, ‘Can I get you a drink?’ The app says, ‘That was my _intent_!'”
– DT

“Ok Google, give me an Android-related pun…”
– AS

“Android puns just require too much Context.”
– JH

“Can’t wait to check out all the FABulous Materials at I/O.”
– IK

“Need some space to store your app data? Google just gave us lots of Room.”
– NP

Interested in joining the Phunware Android dev team and possibly heading to I/O yourself next year? Check out our open opportunities and apply today!

The post Phunware Team Takeaways from Google I/O 2017 appeared first on Phunware.
