Introduction

I love Asana: I'm a PM and they're a product company, and I like data science. I want to explore Asana and its competitors, and I also want to play with and learn about the BERT model. This will be a curiosity-driven journey, and I'm not sure where it will end.

Packages

In [ ]:
# Package to store the versions of packages used
!pip install -q watermark
In [ ]:
# Package to download the BERT models and process data
!pip install -q transformers
In [ ]:
# Package for scraping data from the Google Play Store
# https://pypi.org/project/google-play-scraper/
!pip install -q google_play_scraper
In [ ]:
# File manipulation imports for Google Colab
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir("/content/drive/My Drive/Colab Notebooks/BERT_App_Sentiment_Analysis")
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
In [ ]:
# Imports

# Data manipulation and visualization
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
from matplotlib import rc
from tqdm.notebook import tqdm
import datetime
from time import time


# Deep Learning, NLP and metrics
import sklearn
import torch
import transformers 
from textwrap import wrap
from torch import nn, optim 
from torch.utils import data
from collections import defaultdict
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from transformers import BertModel
from transformers import BertTokenizer
from transformers import AdamW
from transformers import get_linear_schedule_with_warmup

# Web Scraping Imports
# https://pypi.org/project/Pygments/
import json
import pygments
import google_play_scraper
from pygments import highlight
from pygments.lexers import JsonLexer
from pygments.formatters import TerminalFormatter

# Random Seed
#RANDOM_SEED = 99
#np.random.seed(RANDOM_SEED)
#torch.manual_seed(RANDOM_SEED)

%matplotlib inline
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
  import pandas.util.testing as tm
In [ ]:
# Package versions
%reload_ext watermark
%watermark -v -iv
google_play_scraper 0.1
pandas              1.0.5
transformers        3.0.2
torch               1.6.0+cu101
json                2.0.9
seaborn             0.10.1
sklearn             0.22.2.post1
numpy               1.18.5
matplotlib          3.2.2
pygments            2.1.3
CPython 3.6.9
IPython 5.5.0

Web Scraping

In [ ]:
# Listing the apps I want to gather data on
# They're all Asana's competitors in task management
# I took the apps from Asana's comparison page, plus a few other alternatives the app store recommends
# https://asana.com/compare
# Asana, Airtable, Basecamp, Jira, Microsoft To Do
# Monday.com, Smartsheet, Taskade, Trello, Wrike
# The google_play_scraper documentation details how to get the app id for each app
# https://github.com/facundoolano/google-play-scraper
apps_list = ['com.asana.app',
             'com.formagrid.airtable',
             'com.basecamp.bc3',
             'com.atlassian.android.jira.core',
             'com.microsoft.todos',
             'com.monday.monday',
             'com.smartsheet.android',
             'com.taskade.mobile',
             'com.trello',
             'com.wrike']
In [ ]:
# List to store details from the apps
app_details = []

# Loop through the app list and retrieve details of each app
for ap in tqdm(apps_list):

    # Retrieve app details
    info = google_play_scraper.app(ap, lang='en', country='us')

    # Store the details
    app_details.append(info)
100%|██████████| 10/10 [00:01<00:00,  6.24it/s]
In [ ]:
# Function to print a request in JSON format
def print_json(json_object):

    # Generate json format
    json_str = json.dumps(json_object,
                          indent = 2,
                          sort_keys = True,
                          default = str)
    
    # The highlight function from pygments highlights the output text
    # It uses different colors to make it easier to read
    print(highlight(json_str, JsonLexer(), TerminalFormatter()))
In [ ]:
# Check the result in JSON format
print_json(app_details[0])
{
  "adSupported": null,
  "androidVersion": "7.0",
  "androidVersionText": "7.0 and up",
  "appId": "com.asana.app",
  "comments": [
    "Absolute trash. Stay away from this app and just use the web version if you manage too many tasks and uses a lot of tags and different filters. Filter doesn't work correctly. Probably, the app obverloads because there are over a thousand tasks in our project already. And one more thing, when you browse the app for more than a minute and try to reload/refresh the window it doesn't load anymore. I guess this only works for smaller project with few number of tasks. The heck.",
    "Excellent app that allows a consolidated view of everything you have going. Some anomalies could be improved. Task duration would be good to see. Additionally associating subtasks with their project works also make things easier. Overall a great app though. Easy to use and a great mobile app too. Really worth using too improve project management and collaboration.",
    "I love the Asana platform - it's far better than Jira and other competitors in regards to usability. The Android app is functional but does miss some of the features and sections available on the web app, which is disappointing, and limiting at times. Especially frustrating when the missing items are pro/business level plan features (e.g. workload view in portfolios).",
    "It's very easy to be organized with Asana! The app and web versions are clean and allow you to organize your tasks the way you prefer, creatings sections, projects, asign to different people and dates. I use Asana for years and since then I could not find another app that gives me the same organization freedom.",
    "Their UI team has decided to cripple our ability to jot down tasks quickly by blocking the add button with a pop-up after creating each task. And then their suggestion button is broken which has led me to complain here instead of offering constructive suggestions directly. Asana is getting big and clumsy. v6.50.8",
    "App works well but a feature that is seriously missing: to be able to add tasks under certain sections. Currently if you add a new task, it is placed right at the top of the list and you need to drag it down to the relavant section (which is a pain if you have a long list with numerous sections). Please sort this out, would be a game changer",
    "Super user friendly and really helps me manage my tasks and projects better than any other tool I have used before especially because I work on/with multiple projects and clients.",
    "At first looked very promising (specially UI) but dude, it lacks the very basic feature: the due date alerts. Really!!!? I tried to work my way around to sync my tasks with Google calendar to get alerts from there. But after the initial sync it takes 24 hrs or may be more to sync again. So may be you will miss out on tasks that u add with dhe date in next 24 hrs. I found Project Buddy but it is paid service. It is useless for me now, I have to go back to Trello bcz it has due date notifications.",
    "Just downloaded it. I'm unable to sign up. Is there a problem? Tried with work email and also with Gmail but had no joy",
    "Amazing, free app! Love the functionality and aesthetic! Was mentioned by Edmond Lau in the effective engineer and am glad I took a chance on it! This is hands out the best task managing app I have ever used. Only gripes are that sort by date gets unchecked frequently which gets annoying and that there is no dark mode yet. Other than that, great job team!",
    "People this is an incredible usefell app. In a business where communication is difficult in the sense of just too much information and tasks which are not listed will give you the oppurtunity too brain storm any time and it is an incredible way too inform your employer on your work load because it is now officially documented. You can attach info and also add sub tasks. Well done to the guys putting this incredible app together. Happy Fathersday to you all!!!",
    "Powerful but very irritating Calendar: I keep selecting the WEEK view using the blue icon in the right corner, but it resets to MONTH view after every action (e.g. clicking a task). Also, calendar is EXTREMELY sensitive: just touching it will scroll to a different week or month, messing you up.",
    "Love love love Asana! Both the desktop and mobile version are great. Super easy to use, very intuitive and includes a wide range of features (even just with the free version). Would highly recommend, I'm highly addicted to using it both at work and for myself.",
    "I Would have given this app 5 stars but, the tiny font in Asana is a major flaw and Asana seems to ignore all who have mentioned this flaw. Clickup and others have the ability to change the font size but, I don't understand why Asana is unable or uniterested in not fixing this flaw in their app.",
    "I was charged $395.64 all at once after a free trial. I thought I would be charged monthly. Wrong. The invoice I received said $0. I received an invoice stating $395.64 2 DAYS before it was deducted from my account! 2 DAYS to correct any mistake and it must be in writing to. Probably impossible to stop a bank draft in 2 DAYS! Be careful people!",
    "I am trying to use Asana on my android and it will not allow me to reorganize any tasks. Tasks are constantly saying they are offline. I have found no resolution for this issue. It works on my desktop but not on my mobile.",
    "The most recent update has made it impossible to post pictures on project tasks for me and all of my coworkers with Android phones. The blue check mark to post pictures after taking and viewing is now missing.",
    "Intuitive & easy layout. Best task app I've used yet. Only suggestion is the ability to jump from different users without having to logout like with a service like Gmail allows. I use it for personal & my multiple different business needs.",
    "Have been using this app for almost a year now. This made me forget Slack or other chat applications for getting work done amongst teammates, heck we even quietly stopped using WhatsApp. Now that says something. Keep the good work guys \ud83d\udc4f Awaiting dark mode if and when you plan to launch it",
    "It's a great app with a beautiful user interface. The only thing I want you guys to fix is the notifications. We don't get notified at all when something happens. This lost my interest in this app only because of this reason.",
    "Good experience using this app. Therefore I have to pay for opening more modules",
    "Syncing is painfully slow. All I need to do is see my calendar and it can take up to a minute to resolve. I don't understand the issue - the browser experience is almost instant and it should be here as well.",
    "Password field is missing.Unable to login using password. Password field should be in login screen with email magic link",
    "Incredible. Free version has enough to fall in love with this app.",
    "Efficient app and desktop task tracking app. Been using app for years now",
    "UPDATE: if you love to forget what you're supposed to do, if you love it when tasks completely disappear and change recurring settings, if you love completely dropping the ball, then Asana is for you! What a streaming pile of garbage.",
    "I love Asana, but think the App should be as powerful as the desktop experience. Especially a Task app, is very important to be powerful and have all options available on the move imo",
    "Very user friendly and easy to navigate. We only have 4 team members but it is so easy to use and delicate!",
    "Good app for tracking project tasks and their status.",
    "Not able to reorder or arrange columns. Not able to edit stuff. Says error. Having a really tough time with the app.",
    "I love Asana - I can easily check on projects and tasks, add to-dos and set date/time for reminders. Keep up the great work!",
    "Bug: \"Add Tag\" not working on android. \"Save\" button not working. I could not create new tag. Disapointed.",
    "Got update last night now when I take pictures in asana it blows up image on my screen and gives no option to upload after you click ok to use the blue add button is gone please fix",
    "Great app! Need to improve the outlook integration to minimize constant prompts to sign in to asana!",
    "The App is really good but doesn't load over WiFi! ...only loads when i turn on cellular data.Why????",
    "Keeps having mobile app synch problems. No timely answer from support. Still no response to inquiry.",
    "Needs landscape mode. I shouldn't really have to complain about this, it should be standard!",
    "Great system. The app does not allow me to add tasks from gmail using Android. Really Asana? No mobile integration?",
    "Can you allow premium plan users to auto assign task to self so that we can see them in the \"My Task\" module? Or at least allow solo users to purchase only one seat instead of two? Thanks",
    "App have 1 mln downloads, but still have bugs. I can't create tag. And can't edit any messages in Conversations section"
  ],
  "containsAds": null,
  "contentRating": "Everyone",
  "contentRatingDescription": null,
  "currency": "USD",
  "description": "Asana is the work manager for teams. But better. From the small stuff to the big picture, Asana organizes work so teams are clear what to do, why it matters, and how to get it done.\r\n\r\n\u272d Featured \u201cApp of the Day\u201d on the App Store. \u201cThis project management is powerful but not overwhelming\u2014and never lets the process of being productive get in the way of actually getting things done.\u201d\r\n\r\n\u201cAsana is one of the best collaboration and productivity apps for teams and an Editors\u2019 Choice.\u201d  \u2014PC Mag\r\n\r\n\u201cAsana has been instrumental in enabling our team to grow by 6x this year and successfully scale our processes.\u201d \u2014Ryan Bonnici, Chief Marketing Officer, G2\r\n\r\nSee why more than 7500 people give Asana 4.7 out of 5 stars.\r\n\r\n\u2714 TAKE THE GUESSWORK OUT OF WORK\r\nSee who is doing what and by when across the whole team:\r\nCoordinate plans, projects, and tasks in one shared space\r\nSwitch between list, kanban board, and calendar views\r\nOrganize and assign tasks; set due dates\r\nAttach files to tasks so relevant info is easy to find\r\n\r\n\u2714 GET THE WHOLE PICTURE\u2014EVEN ON THE GO\r\nKeep an eye on progress no matter where you are:\r\nInstantly see if projects are on track, at risk, or off track\u2014and why\r\nPost status updates, or request updates from project owners\r\nDrill down into tasks for more information\r\n\r\n\u2714 PUSH WORK ALONG...WITHOUT GETTING PUSHY\r\nMake sure nothing falls through the cracks:\r\nClarify if tasks are high, medium, or low priority\r\nApprove work or mark tasks for approval\r\nSet tasks as milestones to establish critical checkpoints\r\nGet notified when tasks are completed, overdue, and more\r\n\r\n\u2714 ASANA MOBILE + WEB = YOUR A-TEAM\r\nWork efficiently whether you\u2019re at the office or on the go: \r\nAutomatically transcribe voice memos to tasks\r\nConvert photos of whiteboards, charts, or diagrams to tasks\r\nInstantly 
sync work between the app and web\r\nWork offline without worrying about losing your data\r\n\r\n\u201cWe get five times more done per person than companies 10 times bigger than us and relatively stress-free.\u201d - Brett Gurewitz, CEO, Epitaph Records (and guitarist for Bad Religion)\r\n\r\n\u201cIf it\u2019s not in Asana it\u2019s not on my radar. Our work has a lot of moving parts and Asana helps ensure nothing falls through the cracks.\u201d - Elissa Hudson, Senior Marketing Manager, Hubspot\r\n\r\nJoin more than 75,000 organizations and millions of users worldwide who trust Asana to stay organized and in control of their work. Download the Asana app now.\r\n\r\nBy downloading Asana, you agree to our Terms of Service, which you can find at https://asana.com/terms",
  "descriptionHTML": "Asana is the work manager for teams. But better. From the small stuff to the big picture, Asana organizes work so teams are clear what to do, why it matters, and how to get it done.<br><br>\u272d Featured \u201cApp of the Day\u201d on the App Store. \u201cThis project management is powerful but not overwhelming\u2014and never lets the process of being productive get in the way of actually getting things done.\u201d<br><br>\u201cAsana is one of the best collaboration and productivity apps for teams and an Editors\u2019 Choice.\u201d  \u2014PC Mag<br><br>\u201cAsana has been instrumental in enabling our team to grow by 6x this year and successfully scale our processes.\u201d \u2014Ryan Bonnici, Chief Marketing Officer, G2<br><br>See why more than 7500 people give Asana 4.7 out of 5 stars.<br><br>\u2714 TAKE THE GUESSWORK OUT OF WORK<br>See who is doing what and by when across the whole team:<br>Coordinate plans, projects, and tasks in one shared space<br>Switch between list, kanban board, and calendar views<br>Organize and assign tasks; set due dates<br>Attach files to tasks so relevant info is easy to find<br><br>\u2714 GET THE WHOLE PICTURE\u2014EVEN ON THE GO<br>Keep an eye on progress no matter where you are:<br>Instantly see if projects are on track, at risk, or off track\u2014and why<br>Post status updates, or request updates from project owners<br>Drill down into tasks for more information<br><br>\u2714 PUSH WORK ALONG...WITHOUT GETTING PUSHY<br>Make sure nothing falls through the cracks:<br>Clarify if tasks are high, medium, or low priority<br>Approve work or mark tasks for approval<br>Set tasks as milestones to establish critical checkpoints<br>Get notified when tasks are completed, overdue, and more<br><br>\u2714 ASANA MOBILE + WEB = YOUR A-TEAM<br>Work efficiently whether you\u2019re at the office or on the go: <br>Automatically transcribe voice memos to tasks<br>Convert photos of whiteboards, charts, or diagrams to tasks<br>Instantly 
sync work between the app and web<br>Work offline without worrying about losing your data<br><br>\u201cWe get five times more done per person than companies 10 times bigger than us and relatively stress-free.\u201d - Brett Gurewitz, CEO, Epitaph Records (and guitarist for Bad Religion)<br><br>\u201cIf it\u2019s not in Asana it\u2019s not on my radar. Our work has a lot of moving parts and Asana helps ensure nothing falls through the cracks.\u201d - Elissa Hudson, Senior Marketing Manager, Hubspot<br><br>Join more than 75,000 organizations and millions of users worldwide who trust Asana to stay organized and in control of their work. Download the Asana app now.<br><br>By downloading Asana, you agree to our Terms of Service, which you can find at https://asana.com/terms",
  "developer": "Asana, Inc.",
  "developerAddress": null,
  "developerEmail": "support@asana.com",
  "developerId": "Asana,+Inc.",
  "developerInternalID": "9027419648812383370",
  "developerWebsite": "https://asana.com/product",
  "free": true,
  "genre": "Business",
  "genreId": "BUSINESS",
  "headerImage": "https://lh3.googleusercontent.com/4ts1ELx9Kpks2R2KWE_hCTBW63gVqR5UrSgE_vq8XEQPITvoBICGxpCaeWqmnWLqmEyy",
  "histogram": [
    1433,
    529,
    1267,
    3283,
    25052
  ],
  "icon": "https://lh3.googleusercontent.com/EJEviNAy8fAdCNMrcxaZDYLH1AnDnvficaxztxPnEF-fN97TPHud2yKS1sKsuA_kT9Y",
  "inAppProductPrice": null,
  "installs": "1,000,000+",
  "minInstalls": 1000000,
  "offersIAP": false,
  "originalPrice": null,
  "price": 0,
  "privacyPolicy": "http://www.asana.com/privacy",
  "ratings": 31564,
  "recentChanges": "\ud83c\udfb5 Give a little bit...\r\nGive a little bit of appreciation to you.\r\nGive a little bit...\r\nWe've got a little bit of appreciation for you.\r\nNow's the time that we need to share...\r\nSo send a sticker to show you care.\r\nAlright. Ah, yeah. Come along...\r\n\r\nThis update includes Appreciations, improvements to status updates and invites, and bug fixes and general improvements.",
  "recentChangesHTML": "\ud83c\udfb5 Give a little bit...<br>Give a little bit of appreciation to you.<br>Give a little bit...<br>We&#39;ve got a little bit of appreciation for you.<br>Now&#39;s the time that we need to share...<br>So send a sticker to show you care.<br>Alright. Ah, yeah. Come along...<br><br>This update includes Appreciations, improvements to status updates and invites, and bug fixes and general improvements.",
  "released": "Feb 27, 2013",
  "reviews": 9667,
  "sale": false,
  "saleText": null,
  "saleTime": null,
  "score": 4.5836077,
  "screenshots": [
    "https://lh3.googleusercontent.com/a-c_cZ7cTlTHgMmXuG-BqsN6-xm0s_koN56J9_jRhVgd81HSbWT7A48ysMA15ZXFcnA",
    "https://lh3.googleusercontent.com/ZBeNcL0KBzHLkZvmN9TGohTZ1EBdPoQ0BEnBs4eEiAtpZcgRPWAkEtmbA9HxUT3-isI",
    "https://lh3.googleusercontent.com/YQzlPY-Gf0IZcJ23dmX-2WZRt1Sf-xh7d8QteyxVXuUTBXAAEs9ElrWAsU2TaIMPQYo",
    "https://lh3.googleusercontent.com/Od0HDyc248fg5ya7y3b7BcSHz8P-_eQVGqvnln3KJxXRwoSBsnJ9mKEnmBi9mLBnSA",
    "https://lh3.googleusercontent.com/RIEE8eAwXQLotr3jRF0bim47WoGYJ_Iu3W8alWSOnEiImvQee_Vt3r3Uf-imt8_TQpA",
    "https://lh3.googleusercontent.com/WDGYFWrU3MMIwoaHqggYVQT2bCTH4OitaL94oZZcY6pO3CfNCxx-6SkBJy_bP8lBDVyh",
    "https://lh3.googleusercontent.com/k8TYhLxOalU6RPSQFt_8QnUDwBZTcE2UD1CkmQ09T3QYmH7b3g6-jJP4KbqVJPo8cC6L"
  ],
  "size": "14M",
  "summary": "Organize. Plan. Get work done. #withAsana",
  "summaryHTML": "Organize. Plan. Get work done. #withAsana",
  "title": "Asana: Your work manager",
  "updated": 1597360715,
  "url": "https://play.google.com/store/apps/details?id=com.asana.app&hl=en&gl=us",
  "version": "6.51.5",
  "video": "https://www.youtube.com/embed/jY0-gsNImlk?ps=play&vq=large&rel=0&autohide=1&showinfo=0&start=1",
  "videoImage": "https://i.ytimg.com/vi/jY0-gsNImlk/hqdefault.jpg"
}

In [ ]:
# Put the retrieved information into a dataframe
df_app_details = pd.DataFrame(app_details)
In [ ]:
# Save the dataframe to disk

# Retrieve datetime to stamp the file
now = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")

# Save with current datetime
df_app_details.to_csv(f'data/app_details_{now}.csv', header=True, index=False)
In [ ]:
df_app_details.head(3)
Out[ ]:
title description descriptionHTML summary summaryHTML installs minInstalls score ratings reviews histogram price free currency sale saleTime originalPrice saleText offersIAP inAppProductPrice size androidVersion androidVersionText developer developerId developerEmail developerWebsite developerAddress privacyPolicy developerInternalID genre genreId icon headerImage screenshots video videoImage contentRating contentRatingDescription adSupported containsAds released updated version recentChanges recentChangesHTML comments appId url
0 Asana: Your work manager Asana is the work manager for teams. But bette... Asana is the work manager for teams. But bette... Organize. Plan. Get work done. #withAsana Organize. Plan. Get work done. #withAsana 1,000,000+ 1000000 4.583608 31564 9667 [1433, 529, 1267, 3283, 25052] 0 True USD False None None None False None 14M 7.0 7.0 and up Asana, Inc. Asana,+Inc. support@asana.com https://asana.com/product None http://www.asana.com/privacy 9027419648812383370 Business BUSINESS https://lh3.googleusercontent.com/EJEviNAy8fAd... https://lh3.googleusercontent.com/4ts1ELx9Kpks... [https://lh3.googleusercontent.com/a-c_cZ7cTlT... https://www.youtube.com/embed/jY0-gsNImlk?ps=p... https://i.ytimg.com/vi/jY0-gsNImlk/hqdefault.jpg Everyone None None None Feb 27, 2013 1597360715 6.51.5 🎵 Give a little bit...\r\nGive a little bit of... 🎵 Give a little bit...<br>Give a little bit of... [Absolute trash. Stay away from this app and j... com.asana.app https://play.google.com/store/apps/details?id=...
1 Airtable Organize anything you can imagine with Airtabl... Organize anything you can imagine with Airtabl... Organize anything you can imagine with a moder... Organize anything you can imagine with a moder... 100,000+ 100000 3.671053 1546 774 [295, 101, 112, 346, 692] 0 True USD False None None None False None 13M 5.0 5.0 and up Airtable 8024614373053231272 droid@airtable.com https://airtable.com/ None https://airtable.com/privacy 8024614373053231272 Productivity PRODUCTIVITY https://lh3.googleusercontent.com/0AKPNIi6-Dct... https://lh3.googleusercontent.com/xlIqHp_kgI76... [https://lh3.googleusercontent.com/u1iubbqRbdB... https://www.youtube.com/embed/rydOfdGCOBU?ps=p... https://i.ytimg.com/vi/rydOfdGCOBU/hqdefault.jpg Everyone None None None Sep 27, 2016 1596736428 1.4.2 Organize anything you can imagine with Airtabl... Organize anything you can imagine with Airtabl... [Won't let me sign in with Google account unle... com.formagrid.airtable https://play.google.com/store/apps/details?id=...
2 Basecamp 3 <b>Use your company's Basecamp 3 account on-th... <b>Use your company&#39;s Basecamp 3 account o... Basecamp 3, official Android version for the w... Basecamp 3, official Android version for the w... 500,000+ 500000 4.274314 4179 1524 [260, 156, 375, 771, 2617] 0 True USD False None None None False None 7.6M 6.0 6.0 and up Basecamp Basecamp support@basecamp.com https://basecamp.com None https://basecamp.com/privacy 8645525805030592144 Productivity PRODUCTIVITY https://lh3.googleusercontent.com/Mx66p8uDSlbx... https://lh3.googleusercontent.com/DhvIpWbDmOr1... [https://lh3.googleusercontent.com/o_oaonXHNi5... None None Everyone None None None Oct 20, 2015 1582299579 3.18.9 🐛 Bug fixes and improved speed over slow networks 🐛 Bug fixes and improved speed over slow networks [Very bad UX. 1) No progress (% wise) shown fo... com.basecamp.bc3 https://play.google.com/store/apps/details?id=...
In [ ]:
# List to store app reviews
app_reviews = []

# Loop to retrieve and store app reviews
for ap in tqdm(apps_list):

    # Sample reviews from each star rating
    for star in list(range(1, 6)):

        # Extract the most relevant and the most recent reviews
        for sort_order in [google_play_scraper.Sort.MOST_RELEVANT, google_play_scraper.Sort.NEWEST]:
            rvws, _ = google_play_scraper.reviews(ap,
                                                  lang='en',
                                                  country='us',
                                                  sort=sort_order,
                                                  count = 100 if star == 3 else 50,
                                                  filter_score_with = star)
            
            for r in rvws:
                r['sortOrder'] = 'most_relevant' if sort_order == google_play_scraper.Sort.MOST_RELEVANT else 'newest'
                r['appId'] = ap

            # Save reviews
            app_reviews.extend(rvws)
100%|██████████| 10/10 [00:12<00:00,  1.28s/it]
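Since a review can rank among both the most relevant and the newest results, the two sort orders may return overlapping entries. A hedged sketch of deduplicating by reviewId, shown on a small mock list because the real app_reviews comes from the live scrape above:

```python
import pandas as pd

# Mock reviews illustrating the possible overlap between the two sort orders;
# the real app_reviews list is built by the scraping loop above
mock_reviews = [
    {'reviewId': 'gp:AAA', 'content': 'Great app', 'sortOrder': 'most_relevant'},
    {'reviewId': 'gp:AAA', 'content': 'Great app', 'sortOrder': 'newest'},
    {'reviewId': 'gp:BBB', 'content': 'Too buggy', 'sortOrder': 'newest'},
]

# Keep only the first occurrence of each reviewId
df_unique = pd.DataFrame(mock_reviews).drop_duplicates(subset='reviewId')
print(len(df_unique))  # → 2
```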
In [ ]:
# Create a dataframe with the reviews
df_app_reviews = pd.DataFrame(app_reviews)
In [ ]:
# Save the dataframe to disk

# Retrieve datetime to stamp the file
now = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")

# Save with current datetime
df_app_reviews.to_csv(f'data/app_reviews_{now}.csv', header = True, index = False)
In [ ]:
# Loading the csv with app reviews
df_reviews = pd.read_csv(f'data/app_reviews_{now}.csv')
df_reviews.head(3)
Out[ ]:
reviewId userName userImage content score thumbsUpCount reviewCreatedVersion at replyContent repliedAt sortOrder appId
0 gp:AOqpTOHTV7PdCc2qccT5aehUpoV3mB0PaGYuGP6VoAr... Anne Amit https://lh3.googleusercontent.com/a-/AOh14GiyP... Absolute trash. Stay away from this app and ju... 1 7 6.50.8 2020-08-11 02:40:35 NaN NaN most_relevant com.asana.app
1 gp:AOqpTOGjmtD5IJRa-8Rk7hxS02RFs1oyJgdDwFOXbsj... mrk 1 https://lh3.googleusercontent.com/-HCdJh-McJWE... At first looked very promising (specially UI) ... 1 34 6.46.6 2020-06-11 23:50:05 NaN NaN most_relevant com.asana.app
2 gp:AOqpTOHVo18xZthD7fpEma0cvNOrtqWv49Kw9yzdi2O... Kunal Sareen https://lh3.googleusercontent.com/a-/AOh14GhWG... Just downloaded it. I'm unable to sign up. Is ... 1 0 6.51.4 2020-08-06 06:30:13 Hi Kunal I'm sorry to hear you are having trou... 2020-08-13 17:13:13 most_relevant com.asana.app
In [ ]:
df_reviews.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5434 entries, 0 to 5433
Data columns (total 12 columns):
 #   Column                Non-Null Count  Dtype 
---  ------                --------------  ----- 
 0   reviewId              5434 non-null   object
 1   userName              5434 non-null   object
 2   userImage             5434 non-null   object
 3   content               5434 non-null   object
 4   score                 5434 non-null   int64 
 5   thumbsUpCount         5434 non-null   int64 
 6   reviewCreatedVersion  4885 non-null   object
 7   at                    5434 non-null   object
 8   replyContent          1837 non-null   object
 9   repliedAt             1837 non-null   object
 10  sortOrder             5434 non-null   object
 11  appId                 5434 non-null   object
dtypes: int64(2), object(10)
memory usage: 509.6+ KB
In [ ]:
# Plot stars
sns.set(style = 'whitegrid', palette = 'muted', font_scale = 1.5)
rcParams['figure.figsize'] = 15, 9
sns.countplot(df_reviews.score)
plt.xlabel('Stars')
plt.ylabel('Total')
Out[ ]:
Text(0, 0.5, 'Total')
In [ ]:
# Plot appId
sns.set(style = 'whitegrid', palette = 'muted', font_scale = 1)
rcParams['figure.figsize'] = 15, 9
ax = sns.countplot(df_reviews.appId)
ax.set_xticklabels(ax.get_xticklabels(),rotation=30)
plt.xlabel('App')
plt.ylabel('Number of Samples')
Out[ ]:
Text(0, 0.5, 'Number of Samples')
In [ ]:
# Creating a pivot table to see which app x star combination didn't retrieve the desired amount of data
app_x_stars = df_reviews.groupby(['appId', 'score']).size().unstack()
app_x_stars
Out[ ]:
score 1 2 3 4 5
appId
com.asana.app 100 100 200 100 100
com.atlassian.android.jira.core 100 100 200 100 100
com.basecamp.bc3 100 100 170 100 100
com.formagrid.airtable 100 100 146 100 100
com.microsoft.todos 100 100 200 100 100
com.monday.monday 100 100 158 100 100
com.smartsheet.android 100 100 84 100 100
com.taskade.mobile 42 16 72 100 100
com.trello 100 100 200 100 100
com.wrike 100 100 146 100 100
In [ ]:
# Plotting app x stars as a heatmap
sns.heatmap(app_x_stars, linewidths=1, linecolor='white', cmap='Blues')
Out[ ]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f1ff9bb71d0>

Preprocessing

In [ ]:
# Grouping function
# This converts the 1-5 star range into negative (0), neutral (1) and positive (2)
# This is why I gathered twice as much data for 3-star reviews
def group_rating(rating):

    # Initialize the group to -1 to catch any bugs
    grp_rating = -1 

    # Convert ratings to integers
    rating = int(rating)

    # If the rating is above 3, then positive (2)
    if rating > 3:
        grp_rating = 2
    
    # If rating is 3, then neutral (1)
    elif rating == 3:
        grp_rating = 1
    
    # If rating is below 3, then negative (0)
    else:
        grp_rating = 0
    
    return grp_rating
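A quick sanity check of the mapping, using a standalone copy of the same logic so the cell runs on its own:

```python
# Standalone restatement of the star-to-sentiment mapping for a quick check
def group_rating(rating):
    rating = int(rating)
    if rating > 3:
        return 2  # positive
    elif rating == 3:
        return 1  # neutral
    return 0      # negative

print([group_rating(s) for s in [1, 2, 3, 4, 5]])  # → [0, 0, 1, 2, 2]
```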
In [ ]:
# Apply the function to the dataset and create a 'sentiment' column with the output
df_reviews['sentiment'] = df_reviews.score.apply(group_rating)
In [ ]:
df_reviews.head(3)
Out[ ]:
reviewId userName userImage content score thumbsUpCount reviewCreatedVersion at replyContent repliedAt sortOrder appId sentiment
0 gp:AOqpTOHTV7PdCc2qccT5aehUpoV3mB0PaGYuGP6VoAr... Anne Amit https://lh3.googleusercontent.com/a-/AOh14GiyP... Absolute trash. Stay away from this app and ju... 1 7 6.50.8 2020-08-11 02:40:35 NaN NaN most_relevant com.asana.app 0
1 gp:AOqpTOGjmtD5IJRa-8Rk7hxS02RFs1oyJgdDwFOXbsj... mrk 1 https://lh3.googleusercontent.com/-HCdJh-McJWE... At first looked very promising (specially UI) ... 1 34 6.46.6 2020-06-11 23:50:05 NaN NaN most_relevant com.asana.app 0
2 gp:AOqpTOHVo18xZthD7fpEma0cvNOrtqWv49Kw9yzdi2O... Kunal Sareen https://lh3.googleusercontent.com/a-/AOh14GhWG... Just downloaded it. I'm unable to sign up. Is ... 1 0 6.51.4 2020-08-06 06:30:13 Hi Kunal I'm sorry to hear you are having trou... 2020-08-13 17:13:13 most_relevant com.asana.app 0
In [ ]:
# Shuffling the dataframe to avoid biasing the model later on
df_reviews = df_reviews.sample(frac=1).reset_index(drop=True)
In [ ]:
# List with class names
class_names = ['negative', 'neutral', 'positive']
In [ ]:
print(f'Negative: {(len(df_reviews[df_reviews.sentiment == 0])/len(df_reviews))}')
print(f'Neutral: {(len(df_reviews[df_reviews.sentiment == 1])/len(df_reviews))}')
print(f'Positive: {(len(df_reviews[df_reviews.sentiment == 2])/len(df_reviews))}')
Negative: 0.3419212366580788
Neutral: 0.29002576370997424
Positive: 0.368052999631947
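These proportions are just normalized class counts. Here's a tiny plain-Python sketch of the same computation, using a made-up label list since `df_reviews` isn't recreated here:

```python
from collections import Counter

# Toy label list standing in for df_reviews.sentiment
labels = [0, 0, 1, 2, 2, 2, 1, 0, 2, 1]

counts = Counter(labels)
total = len(labels)

# Fraction of each class, keyed by class id
proportions = {cls: counts[cls] / total for cls in sorted(counts)}
print(proportions)  # {0: 0.3, 1: 0.3, 2: 0.4}
```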
In [ ]:
# Plot class distribution
sns.set(style = 'whitegrid', palette = 'muted', font_scale = 1.5)
rcParams['figure.figsize'] = 15, 9
sns.countplot(df_reviews.sentiment)
plt.xlabel('Class')
plt.ylabel('Total')
Out[ ]:
Text(0, 0.5, 'Total')

Downloading the pre-trained BERT tokenizer.

List of available models: https://github.com/google-research/bert

In [ ]:
# Tokenizer download
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-cased')
In [ ]:
# Test text
test_text = 'Just a test sentence. Test 2.'
test_text
Out[ ]:
'Just a test sentence. Test 2.'
In [ ]:
# Tokenize
tokens = tokenizer.tokenize(test_text)
tokens
Out[ ]:
['Just', 'a', 'test', 'sentence', '.', 'Test', '2', '.']
In [ ]:
# Extract the token_ids
token_ids = tokenizer.convert_tokens_to_ids(tokens)
token_ids
Out[ ]:
[2066, 170, 2774, 5650, 119, 5960, 123, 119]
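Under the hood, `convert_tokens_to_ids` is just a vocabulary lookup: each token maps to an integer row in the embedding table. A toy version, using the real `bert-base-cased` ids printed above:

```python
# Miniature vocabulary (these are the actual bert-base-cased ids shown above)
vocab = {'Just': 2066, 'a': 170, 'test': 2774, 'sentence': 5650,
         '.': 119, 'Test': 5960, '2': 123}

tokens = ['Just', 'a', 'test', 'sentence', '.', 'Test', '2', '.']

# Token-to-id conversion is a plain dictionary lookup
token_ids = [vocab[t] for t in tokens]
print(token_ids)  # [2066, 170, 2774, 5650, 119, 5960, 123, 119]
```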
In [ ]:
# Create the encoding object to format the data for the BERT model
encoding = tokenizer.encode_plus(test_text,
                                 max_length = 32,
                                 add_special_tokens = True,
                                 pad_to_max_length = True,
                                 return_attention_mask = True,
                                 return_token_type_ids = False,
                                 return_tensors = 'pt')
Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
In [ ]:
# Print
encoding
Out[ ]:
{'input_ids': tensor([[ 101, 2066,  170, 2774, 5650,  119, 5960,  123,  119,  102,    0,    0,
            0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
            0,    0,    0,    0,    0,    0,    0,    0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0]])}
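To demystify the encoding above, here's a hand-rolled sketch of the padding and attention-mask logic, using the `bert-base-cased` special token ids visible in the output (101 = [CLS], 102 = [SEP], 0 = [PAD]). It approximates what `encode_plus` did, not the library's exact implementation:

```python
def pad_and_mask(token_ids, max_length, cls_id=101, sep_id=102, pad_id=0):
    # Wrap the ids in [CLS] ... [SEP], truncating if needed
    ids = [cls_id] + token_ids[: max_length - 2] + [sep_id]
    # Real tokens get attention 1
    mask = [1] * len(ids)
    # Pad ids with 0 and mask with 0 up to max_length
    ids += [pad_id] * (max_length - len(ids))
    mask += [0] * (max_length - len(mask))
    return ids, mask

ids, mask = pad_and_mask([2066, 170, 2774, 5650, 119, 5960, 123, 119], 32)
print(ids[:12])   # [101, 2066, 170, 2774, 5650, 119, 5960, 123, 119, 102, 0, 0]
print(sum(mask))  # 10 real tokens, matching the attention mask above
```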

Applying the BERT tokenizer to the dataset

In [ ]:
# List for the token lengths
token_length = []
In [ ]:
# Drop NaN values before tokenizing
df_reviews = df_reviews.dropna(subset=['content'], how='all')
df_reviews.reset_index(inplace = True, drop=True)
df_reviews.shape
Out[ ]:
(5434, 13)
In [ ]:
# Loop through the dataset content applying the tokenizer
for content in df_reviews.content:
    tokens = tokenizer.encode(content)
    token_length.append(len(tokens))
In [ ]:
# Sample of contents
df_reviews.content.tail(5)
Out[ ]:
5429    I bought the Garmin intinct. Shouldnt have to ...
5430                                            Great app
5431    After latest update, I am not getting alert wh...
5432    Decent enough app but when it says you can soo...
5433    Overall it is a good app. But it doesn't have ...
Name: content, dtype: object
In [ ]:
# Plot
ax = sns.distplot(token_length)
plt.xlim([0, 200])
plt.xlabel('Token Length')
Out[ ]:
Text(0.5, 0, 'Token Length')

Configurations

In [ ]:
# Model Hyperparameters
EPOCHS = 10
BATCH_SIZE = 16
MAX_LENGTH = 150
LEARNING_RATE = 0.00002 
'''
Spent about 7 hours debugging this model to find out that the learning rate
has to be precisely 2e-5; anything else caused the model not to learn at all
'''
Out[ ]:
'\nSpent about 7 hours debugging this model to find out that the learning rate\nhas to be precisely 2e-5; anything else caused the model not to learn at all\n'
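For context, here is a minimal plain-Python sketch of the linear decay (with zero warmup) that `get_linear_schedule_with_warmup` applies further down. It approximates the transformers scheduler's behaviour rather than reproducing it exactly:

```python
LEARNING_RATE = 2e-5
total_steps = 100  # toy value; the real run uses len(train_data_loader) * EPOCHS

def lr_at_step(step, total_steps, base_lr=LEARNING_RATE, warmup=0):
    # Ramp up linearly during warmup, then decay linearly to zero
    if step < warmup:
        return base_lr * step / max(1, warmup)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup))

print(lr_at_step(0, total_steps))    # 2e-05 (full rate at the start)
print(lr_at_step(50, total_steps))   # 1e-05 (halfway decayed)
print(lr_at_step(100, total_steps))  # 0.0   (fully decayed)
```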

Data Batching

In [ ]:
class DataBatcher(data.Dataset):

    # Constructor
    def __init__(self, review, targets, tokenizer, max_len):

        # Initialize class attributes
        self.review = review
        self.targets = targets
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.review)

    # Method to obtain each review
    def __getitem__(self, item):

        # Load a review
        review = str(self.review[item])

        # Encode the review (use the tokenizer passed to the constructor)
        encoding = self.tokenizer.encode_plus(review,
                                              max_length = self.max_len,
                                              truncation = True,
                                              add_special_tokens = True,
                                              pad_to_max_length = True,
                                              return_attention_mask = True,
                                              return_token_type_ids = False,
                                              return_tensors = 'pt')
        
        # Return the review text, token ids, attention mask and target
        return {'review_text': review,
                'input_ids': encoding['input_ids'].flatten(),
                'attention_mask': encoding['attention_mask'].flatten(),
                'targets': torch.tensor(self.targets[item], dtype = torch.long)}
In [ ]:
# This function creates a data loader to convert the dataset to the BERT format
# torch.utils.data.dataloader.DataLoader
def create_data_loader(df, tokenizer, max_len, batch_size):
    ds = DataBatcher(review = df.content.to_numpy(),
                     targets = df.sentiment.to_numpy(),
                     tokenizer = tokenizer,
                     max_len = max_len)
    
    return data.DataLoader(ds, batch_size = batch_size, num_workers = 4)
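Under the hood, the DataLoader just walks the Dataset and yields fixed-size batches. A toy plain-Python sketch of that batching (ignoring shuffling and worker processes):

```python
def batched(items, batch_size):
    # Walk the items in order, yielding chunks of at most batch_size
    for i in range(0, len(items), batch_size):
        yield items[i : i + batch_size]

# 34 examples with BATCH_SIZE = 16 give two full batches and one remainder
examples = list(range(34))
sizes = [len(b) for b in batched(examples, 16)]
print(sizes)  # [16, 16, 2]
```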
In [ ]:
# Train test split
df_train, df_test = train_test_split(df_reviews, test_size = 0.2) #, random_state = RANDOM_SEED
In [ ]:
# Test validation split
df_valid, df_test = train_test_split(df_test, test_size = 0.5) #, random_state = RANDOM_SEED
In [ ]:
print(f'df_train.shape: {df_train.shape}')
print(f'df_test.shape: {df_test.shape}')
print(f'df_valid.shape: {df_valid.shape}')
df_train.shape: (4347, 13)
df_test.shape: (544, 13)
df_valid.shape: (543, 13)
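The split sizes above follow from sklearn's rounding: as far as I can tell, `train_test_split` takes the ceiling of `n * test_size` for the test portion. A quick arithmetic check in plain Python:

```python
import math

n = 5434                            # total reviews after dropping NaNs
n_hold = math.ceil(n * 0.2)         # first split holds out 20% -> 1087
n_train = n - n_hold                # 4347 training examples
n_test = math.ceil(n_hold * 0.5)    # half of the holdout -> 544
n_valid = n_hold - n_test           # the remaining 543

print(n_train, n_valid, n_test)  # 4347 543 544
```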
In [ ]:
# Load the data_loaders
train_data_loader = create_data_loader(df_train, tokenizer, MAX_LENGTH, BATCH_SIZE)
test_data_loader = create_data_loader(df_test, tokenizer, MAX_LENGTH, BATCH_SIZE)
valid_data_loader = create_data_loader(df_valid, tokenizer, MAX_LENGTH, BATCH_SIZE)
In [ ]:
# Visualize a sample on the training data
sample = next(iter(train_data_loader))
print(sample['input_ids'].shape)
print(sample['attention_mask'].shape)
print(sample['targets'].shape)
torch.Size([16, 150])
torch.Size([16, 150])
torch.Size([16])
In [ ]:
# A full batch of reviews already in BERT format
print(sample)
{'review_text': ['one of the best apps for managing team tasks and projects.', "Good app overall but $25 as the lowest price? Create another plan for \nsingle user for like 10-15 and I'll rate 5 stars. I'm pretty sure I'm not \nthe only one who would like this.", 'An updated review after using it for sometime. My day experience does not do the trick (compared to TickTick or Anydo), also the widget (only one) is frequently being blanck unresponsive (app is allowed to autostart and work in background). The missing of some collaborative features (shared reminders and notes in tasks) is felt. The design is very nice and it has potential but it seems some what behind other trending ToDo and task/time management apps.', 'There is less than zero reason why I should be forced to use an app when a browser works just as well. Aggressively anti-user. Would give zero stars if allowed.', 'Not provide enough functionality and UI is also not great', 'Super slow!', "I like being able to group tasks into multiple lists on the same screen, and the ability to attach images etc is very valuable. However, I wish I could delete individual tasks with a swipe or couple of taps, instead of having to drag all the way to archive. I also don't like having to swipe all the way to the end to create new boards. The ability to use a website to edit tasks on a PC is my favourite feature.", 'This is the only Microsoft app with no support for proper status bar and navigation bar colouring, the devs working on this app are lazy af especially compared to the ones working on the outlook and onedrive app', "The app is much less developed than the web version. Simplicity has reigned at the expense of functionality. My favourite competitor is Wrike and I could fill a book with the features the Wrike app offers which this lacks. Not only are subtasks not supported, but tasks themselves don't even have a description. 
'info boxes' are where you add details and extra materials on a task, but you can't edit those on the app, so you get to write a (very short) task title and that's it.", 'Words cannot express how happy I am to have found this app. As a teacher, writer, and editor, I work with a lot of projects and deadlines and this app makes it possible to Manage it all.', "I connected Zeplin plugin and it works on desktop, but on mobile app my stakeholder don't see it! It is awful UX for 2019. You should provide the same functionality despite of the platform I am using.", 'Nice app', 'could really do with widgets and customised views', 'Its a good app for organization and communication. However, notifications and comments in tasks are sorted from oldest to newest, top to bottom which is very illogical and hard to read', 'Good home management app. My husband and I share a few different lists, like shopping, tasks, movies, etc, and it works for us. It\'s not a 5 stars because when I tried to use it as a personal daily task list, I wasn\'t all that satisfied. I would expect uncompleted "day tasks" to remain visible for the next day, or recurring weekly items which you completed this week, to disappear from your to-do list until the following recurrence.', 'I hate that u have to pay'], 'input_ids': tensor([[  101,  1141,  1104,  ...,     0,     0,     0],
        [  101,  2750, 12647,  ...,     0,     0,     0],
        [  101,  1760,  8054,  ...,     0,     0,     0],
        ...,
        [  101,  2098,   170,  ...,     0,     0,     0],
        [  101,  2750,  1313,  ...,     0,     0,     0],
        [  101,   146,  4819,  ...,     0,     0,     0]]), 'attention_mask': tensor([[1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        ...,
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0]]), 'targets': tensor([2, 1, 1, 0, 1, 0, 2, 0, 1, 2, 0, 2, 1, 1, 2, 1])}

Model

In [ ]:
# Loading the pre-trained BERT model
model_bert = BertModel.from_pretrained('bert-base-cased')
In [ ]:
# Model
model_bert
Out[ ]:
BertModel(
  (embeddings): BertEmbeddings(
    (word_embeddings): Embedding(28996, 768, padding_idx=0)
    (position_embeddings): Embedding(512, 768)
    (token_type_embeddings): Embedding(2, 768)
    (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
    (dropout): Dropout(p=0.1, inplace=False)
  )
  (encoder): BertEncoder(
    (layer): ModuleList(
      (0): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (1): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (2): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (3): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (4): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (5): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (6): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (7): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (8): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (9): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (10): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (11): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
  )
  (pooler): BertPooler(
    (dense): Linear(in_features=768, out_features=768, bias=True)
    (activation): Tanh()
  )
)
In [ ]:
# Inspect the shapes of the last hidden state and the pooled output
last_hidden_state, pooled_output = model_bert(input_ids = encoding['input_ids'], attention_mask = encoding['attention_mask'])
In [ ]:
last_hidden_state.shape
Out[ ]:
torch.Size([1, 32, 768])
In [ ]:
pooled_output.shape
Out[ ]:
torch.Size([1, 768])

Adding the layers specific to my classification task on top of BERT.

Note that the optimizer below receives all of the model's parameters, so the pre-trained BERT weights get fine-tuned together with these new layers.

In [ ]:
class SentimentClassifier(nn.Module):

    # Constructor
    def __init__(self, n_classes):

        # Initialize attributes
        super(SentimentClassifier, self).__init__()

        # Define the pre-trained BERT model
        self.bert = BertModel.from_pretrained('bert-base-cased')

        # Add a dropout layer (nn.Dropout defaults to p = 0.5)
        self.drop1 = nn.Dropout()

        # Add a hidden layer
        self.fc1 = nn.Linear(self.bert.config.hidden_size, 100)

        # Add a dense layer
        self.fc2 = nn.Linear(100, n_classes)

        # Final classification with softmax
        self.softmax = nn.Softmax(dim = 1)

    # Forward method
    def forward(self, input_ids, attention_mask):

        # Run BERT and keep only the pooled [CLS] output
        _, pooled_output = self.bert(input_ids = input_ids, attention_mask = attention_mask)

        # Define the outputs from the created layers
        output = self.drop1(pooled_output)
        output = self.fc1(output)
        output = self.fc2(output)

        # Return
        return self.softmax(output)
In [ ]:
# Setting the device to GPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
device
Out[ ]:
device(type='cuda', index=0)
In [ ]:
# Create instance of the model
model_sentiment_classifier = SentimentClassifier(len(class_names))
In [ ]:
# Send model to the device
model_sentiment_classifier = model_sentiment_classifier.to(device)
In [ ]:
# Load the inputs and attention mask
input_ids = sample['input_ids'].to(device)
attention_mask = sample['attention_mask'].to(device)
In [ ]:
# Print
print(input_ids.shape)
print(attention_mask.shape)
torch.Size([16, 150])
torch.Size([16, 150])
In [ ]:
# Run the sample batch through the (still untrained) model
model_sentiment_classifier(input_ids, attention_mask)
Out[ ]:
tensor([[0.1884, 0.2524, 0.5592],
        [0.2807, 0.3305, 0.3889],
        [0.2970, 0.2641, 0.4389],
        [0.3275, 0.2102, 0.4623],
        [0.3776, 0.3209, 0.3015],
        [0.2210, 0.3212, 0.4578],
        [0.3367, 0.2361, 0.4272],
        [0.3171, 0.1802, 0.5027],
        [0.2176, 0.2531, 0.5293],
        [0.2990, 0.3012, 0.3998],
        [0.1812, 0.2638, 0.5550],
        [0.2374, 0.3093, 0.4533],
        [0.2253, 0.2722, 0.5025],
        [0.2991, 0.3004, 0.4005],
        [0.2198, 0.2879, 0.4922],
        [0.2438, 0.2448, 0.5113]], device='cuda:0', grad_fn=<SoftmaxBackward>)
In [ ]:
# The original BERT fine-tuning recipe uses AdamW: Adam with decoupled weight decay
optimizer = AdamW(model_sentiment_classifier.parameters(), lr = LEARNING_RATE, correct_bias = False)
In [ ]:
# Defining the total number of steps
total_step = len(train_data_loader) * EPOCHS
In [ ]:
# Scheduler that linearly decays the learning rate over training
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, num_training_steps = total_step)
In [ ]:
# Loss function
loss_fn = nn.CrossEntropyLoss().to(device)
#loss_fn = nn.NLLLoss().to(device)
#loss_fn = nn.MultiMarginLoss().to(device)
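One thing worth flagging about this setup: nn.CrossEntropyLoss applies log-softmax to its input internally, and SentimentClassifier already returns softmax probabilities, so the loss effectively softmaxes twice. The model still learns (as the training run below shows), but double-softmax squashes confidence and shrinks gradients, which may be related to the very narrow working learning rate I found. A pure-Python illustration of the squashing, no torch needed:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

logits = [4.0, 0.0, 0.0]
once = softmax(logits)   # what CrossEntropyLoss expects to receive (as logits)
twice = softmax(once)    # what it effectively sees if the model pre-softmaxes

print([round(p, 3) for p in once])   # [0.965, 0.018, 0.018] - confident
print([round(p, 3) for p in twice])  # [0.563, 0.218, 0.218] - squashed
```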
In [ ]:
# Train function
def train_model(model, data_loader, loss_fn, optimizer, device, scheduler, n_examples):

    # Prepare for training
    model = model.train()
    losses = []
    correct_prediction = 0

    # Loop through the batches
    # Full deep learning cycle: forward pass, loss, backprop, optimizer and scheduler step
    for d in data_loader:
        input_ids = d['input_ids'].to(device)
        attention_mask = d['attention_mask'].to(device)
        targets = d['targets'].to(device)
        outputs = model(input_ids = input_ids, attention_mask = attention_mask)

        _, preds = torch.max(outputs, dim = 1)
        loss = loss_fn(outputs, targets)

        correct_prediction += torch.sum(preds == targets)
        losses.append(loss.item())

        loss.backward()
        nn.utils.clip_grad_norm_(model.parameters(), max_norm = 1.0)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()

    return correct_prediction.double() / n_examples, np.mean(losses)
In [ ]:
# Evaluate function
def evaluate_model(model, data_loader, loss_fn, device, n_examples):

    model.eval()
    losses = []
    correct_prediction = 0

    with torch.no_grad():
        for d in data_loader:
            input_ids = d['input_ids'].to(device)
            attention_mask = d['attention_mask'].to(device)
            targets = d['targets'].to(device)
            outputs = model(input_ids = input_ids, attention_mask = attention_mask)

            _, preds = torch.max(outputs, dim = 1)
            loss = loss_fn(outputs, targets)

            correct_prediction += torch.sum(preds == targets)
            losses.append(loss.item())

    return correct_prediction.double() / n_examples, np.mean(losses)
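The accuracy bookkeeping in both functions boils down to argmax-vs-target: take the highest-scoring class per row and count matches. A plain-Python sketch of that logic:

```python
def accuracy(prob_rows, targets):
    # Predicted class = index of the largest value in each row
    preds = [row.index(max(row)) for row in prob_rows]
    correct = sum(p == t for p, t in zip(preds, targets))
    return correct / len(targets)

# Four toy probability rows over 3 classes; the last prediction is wrong
rows = [[0.1, 0.2, 0.7], [0.8, 0.1, 0.1], [0.3, 0.4, 0.3], [0.2, 0.5, 0.3]]
print(accuracy(rows, [2, 0, 1, 2]))  # 0.75
```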

Training

In [ ]:
%%time

# Store the train history
history = defaultdict(list)

# Track the best validation accuracy; the timestamp names the saved model
now = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
best_accuracy = 0

# Loop
for epoch in range(EPOCHS):

    start_time = time()

    print(f'Epoch {epoch+1}/{EPOCHS}')
    print('-' * 10)
    train_acc, train_loss = train_model(model_sentiment_classifier,
                                        train_data_loader,
                                        loss_fn,
                                        optimizer,
                                        device,
                                        scheduler,
                                        len(df_train))
    
    print(f'Train error: {train_loss} Train accuracy: {train_acc}')

    valid_acc, valid_loss = evaluate_model(model_sentiment_classifier,
                                           valid_data_loader,
                                           loss_fn,
                                           device,
                                           len(df_valid))
    
    print(f'Validation error: {valid_loss} Validation accuracy: {valid_acc}')
    print()

    end_time = time()

    print(f'Iteration Time: {end_time - start_time:.2f} seconds')
    print()

    history['train_acc'].append(train_acc)
    history['train_loss'].append(train_loss)

    history['valid_acc'].append(valid_acc)
    history['valid_loss'].append(valid_loss)

    if valid_acc > best_accuracy:
        torch.save(model_sentiment_classifier.state_dict(), f'models/model_sentiment_classifier_{now}.bin')
        best_accuracy = valid_acc
Epoch 1/10
----------
Train error: 0.9397852414232843 Train accuracy: 0.5914423740510697
Validation error: 0.9063669460661271 Validation accuracy: 0.6445672191528545

Iteration Time: 125.50 seconds

Epoch 2/10
----------
Train error: 0.8145275078713894 Train accuracy: 0.732919254658385
Validation error: 0.8095266152830685 Validation accuracy: 0.7403314917127072

Iteration Time: 125.97 seconds

Epoch 3/10
----------
Train error: 0.7411843742079595 Train accuracy: 0.8088336783988958
Validation error: 0.7691685101565193 Validation accuracy: 0.7790055248618785

Iteration Time: 125.73 seconds

Epoch 4/10
----------
Train error: 0.7094255646362024 Train accuracy: 0.8398895790200138
Validation error: 0.7510087893289679 Validation accuracy: 0.8011049723756907

Iteration Time: 125.48 seconds

Epoch 5/10
----------
Train error: 0.6831568909042022 Train accuracy: 0.8679549114331723
Validation error: 0.7873804236159605 Validation accuracy: 0.7605893186003684

Iteration Time: 125.30 seconds

Epoch 6/10
----------
Train error: 0.6748957454281694 Train accuracy: 0.8762364849321371
Validation error: 0.7437187468304354 Validation accuracy: 0.8084714548802947

Iteration Time: 125.18 seconds

Epoch 7/10
----------
Train error: 0.662339240531711 Train accuracy: 0.8888888888888888
Validation error: 0.7426964009509367 Validation accuracy: 0.8066298342541437

Iteration Time: 125.14 seconds

Epoch 8/10
----------
Train error: 0.6572280528352541 Train accuracy: 0.8948700253048079
Validation error: 0.7301992791540483 Validation accuracy: 0.8195211786372008

Iteration Time: 125.08 seconds

Epoch 9/10
----------
Train error: 0.6550940040718106 Train accuracy: 0.8962502875546353
Validation error: 0.7309388623518103 Validation accuracy: 0.8213627992633518

Iteration Time: 124.95 seconds

Epoch 10/10
----------
Train error: 0.6540613220456768 Train accuracy: 0.8976305498044629
Validation error: 0.7298849102328805 Validation accuracy: 0.8195211786372008

Iteration Time: 125.23 seconds

CPU times: user 12min 40s, sys: 8min 4s, total: 20min 45s
Wall time: 21min 6s

Model trained and saved to disk!

In [ ]:
history
Out[ ]:
defaultdict(list,
            {'train_acc': [tensor(0.5914, device='cuda:0', dtype=torch.float64),
              tensor(0.7329, device='cuda:0', dtype=torch.float64),
              tensor(0.8088, device='cuda:0', dtype=torch.float64),
              tensor(0.8399, device='cuda:0', dtype=torch.float64),
              tensor(0.8680, device='cuda:0', dtype=torch.float64),
              tensor(0.8762, device='cuda:0', dtype=torch.float64),
              tensor(0.8889, device='cuda:0', dtype=torch.float64),
              tensor(0.8949, device='cuda:0', dtype=torch.float64),
              tensor(0.8963, device='cuda:0', dtype=torch.float64),
              tensor(0.8976, device='cuda:0', dtype=torch.float64)],
             'train_loss': [0.9397852414232843,
              0.8145275078713894,
              0.7411843742079595,
              0.7094255646362024,
              0.6831568909042022,
              0.6748957454281694,
              0.662339240531711,
              0.6572280528352541,
              0.6550940040718106,
              0.6540613220456768],
             'valid_acc': [tensor(0.6446, device='cuda:0', dtype=torch.float64),
              tensor(0.7403, device='cuda:0', dtype=torch.float64),
              tensor(0.7790, device='cuda:0', dtype=torch.float64),
              tensor(0.8011, device='cuda:0', dtype=torch.float64),
              tensor(0.7606, device='cuda:0', dtype=torch.float64),
              tensor(0.8085, device='cuda:0', dtype=torch.float64),
              tensor(0.8066, device='cuda:0', dtype=torch.float64),
              tensor(0.8195, device='cuda:0', dtype=torch.float64),
              tensor(0.8214, device='cuda:0', dtype=torch.float64),
              tensor(0.8195, device='cuda:0', dtype=torch.float64)],
             'valid_loss': [0.9063669460661271,
              0.8095266152830685,
              0.7691685101565193,
              0.7510087893289679,
              0.7873804236159605,
              0.7437187468304354,
              0.7426964009509367,
              0.7301992791540483,
              0.7309388623518103,
              0.7298849102328805]})
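The `history` defaultdict above is easier to read as curves than as raw tensors. A minimal plotting sketch (`plot_history` is a hypothetical helper name; the accuracy entries are CUDA tensors, so they are cast to floats first):

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot accuracy and loss curves side by side from the training history."""
    train_acc = [float(a) for a in history['train_acc']]
    valid_acc = [float(a) for a in history['valid_acc']]
    epochs = range(1, len(train_acc) + 1)

    fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(12, 4))

    ax_acc.plot(epochs, train_acc, label='Train')
    ax_acc.plot(epochs, valid_acc, label='Validation')
    ax_acc.set_title('Accuracy per Epoch')
    ax_acc.set_xlabel('Epoch')
    ax_acc.set_ylabel('Accuracy')
    ax_acc.legend()

    ax_loss.plot(epochs, [float(l) for l in history['train_loss']], label='Train')
    ax_loss.plot(epochs, [float(l) for l in history['valid_loss']], label='Validation')
    ax_loss.set_title('Loss per Epoch')
    ax_loss.set_xlabel('Epoch')
    ax_loss.set_ylabel('Loss')
    ax_loss.legend()

    return fig
```

Plotting both curves together makes the pattern in the log above visible at a glance: training accuracy keeps climbing while validation accuracy plateaus around epoch 8, a mild overfitting signal.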

Evaluate Model

In [ ]:
# Create a model instance
model = SentimentClassifier(len(class_names))
In [ ]:
# Load the trained weights (map_location keeps this working on CPU-only machines)
model.load_state_dict(torch.load(f'models/model_sentiment_classifier_{now}.bin', map_location = device))
Out[ ]:
<All keys matched successfully>
In [ ]:
# Send model to device
model = model.to(device)
In [ ]:
# Predicting using test data
test_acc, test_loss = evaluate_model(model, test_data_loader, loss_fn, device, len(df_test))
In [ ]:
# Model performance
print(f'Test Accuracy:  {test_acc}')
print(f'Test Loss:      {test_loss}')
Test Accuracy:  0.7867647058823529
Test Loss:      0.7642380612737992
In [ ]:
# Function to collect review texts, predictions, probabilities and true labels
def get_reviews(model, data_loader):
    model = model.eval()

    review_texts = []
    predictions = []
    prediction_probs = []
    real_values = []

    with torch.no_grad():
        for d in data_loader:
            texts = d['review_text']
            input_ids = d['input_ids'].to(device)
            attention_mask = d['attention_mask'].to(device)
            targets = d['targets'].to(device)
            outputs = model(input_ids = input_ids, attention_mask = attention_mask)

            _, preds = torch.max(outputs, dim = 1)

            review_texts.extend(texts)
            predictions.extend(preds)
            prediction_probs.extend(outputs)
            real_values.extend(targets)

    predictions = torch.stack(predictions).cpu()
    prediction_probs = torch.stack(prediction_probs).cpu()
    real_values = torch.stack(real_values).cpu()

    return review_texts, predictions, prediction_probs, real_values
In [ ]:
# Collect predictions and true labels on the test set
y_review_texts, y_pred, y_pred_probs, y_test = get_reviews(model, test_data_loader)
In [ ]:
# Classification report
print(classification_report(y_test, y_pred, target_names = class_names))
              precision    recall  f1-score   support

    negative       0.84      0.84      0.84       187
     neutral       0.67      0.77      0.72       162
    positive       0.84      0.75      0.79       195

    accuracy                           0.79       544
   macro avg       0.79      0.79      0.79       544
weighted avg       0.79      0.79      0.79       544

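The f1-score column in the report above is simply the harmonic mean of precision and recall. A quick sanity check against the neutral row (`f1_score_from` is a hypothetical helper name):

```python
def f1_score_from(precision, recall):
    """Harmonic mean of precision and recall, as reported in the f1-score column."""
    return 2 * precision * recall / (precision + recall)

# Sanity-check the neutral row of the report above:
print(round(f1_score_from(0.67, 0.77), 2))  # → 0.72
```

The harmonic mean penalizes imbalance, which is why neutral's f1 (0.72) sits closer to its weaker precision (0.67) than a plain average would.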
In [ ]:
y_pred_probs
Out[ ]:
tensor([[9.9983e-01, 9.3685e-05, 7.3604e-05],
        [5.8328e-05, 6.1615e-05, 9.9988e-01],
        [4.4207e-05, 4.7958e-04, 9.9948e-01],
        ...,
        [4.6431e-05, 7.9586e-05, 9.9987e-01],
        [9.9936e-01, 5.1191e-04, 1.3140e-04],
        [7.8072e-05, 5.3968e-05, 9.9987e-01]])
In [ ]:
# Function to plot confusion matrix
def show_confusion_matrix(confusion_matrix):
    hmap = sns.heatmap(confusion_matrix, annot = True, fmt = "d", cmap = "Blues")
    hmap.yaxis.set_ticklabels(hmap.yaxis.get_ticklabels(), rotation = 0, ha = "right")
    hmap.xaxis.set_ticklabels(hmap.xaxis.get_ticklabels(), rotation = 30, ha = "right")
    plt.ylabel('Real Sentiment')
    plt.xlabel('BERT Predicted Sentiment')
In [ ]:
# Create confusion matrix
cm = confusion_matrix(y_test, y_pred)
In [ ]:
df_cm = pd.DataFrame(cm, index = class_names, columns = class_names)
In [ ]:
# Result
show_confusion_matrix(df_cm)
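The heatmap above shows raw counts. Row-normalizing the matrix puts per-class recall on the diagonal, which is easier to compare across classes of different support. A sketch with hypothetical counts chosen only for illustration (consistent with the supports and recalls in the report above, not the actual matrix):

```python
import numpy as np

# Hypothetical counts for illustration (rows = true class, cols = predicted;
# row sums match the supports 187 / 162 / 195 from the report)
cm = np.array([[157,  25,   5],
               [ 19, 124,  19],
               [ 10,  38, 147]])

# Normalize each row so the diagonal reads as per-class recall
cm_norm = cm / cm.sum(axis=1, keepdims=True)
```

The same `show_confusion_matrix` helper can render `cm_norm` with `fmt = ".2f"` instead of `"d"`.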
In [ ]:
# Checking one review
idx = 0

review_text = y_review_texts[idx]
true_sentiment = y_test[idx]

pred_df = pd.DataFrame(
    {
        'class_names': class_names,
        'values': y_pred_probs[idx]
    }
)
In [ ]:
print("\n".join(wrap(review_text)))
print()
print(f'Real Sentiment: {class_names[true_sentiment]}')
Forces to accept business trial, impossible to downgrade. Contacting
customer support is convoluted, and impossible. I followed
instructions in Business Class email for support, which ended up with
issue closed before issue was addressed.

Real Sentiment: negative
In [ ]:
# Prediction plot
sns.barplot(x = 'values', y = 'class_names', data = pred_df, orient = 'h')
plt.ylabel('Sentiment')
plt.xlabel('Probability')
plt.xlim([0, 1]);

Testing with new data (a new app review).

In [ ]:
test_text = 'I really love this app. It improved my work organization and efficiency'
In [ ]:
# Apply the same transformation that was applied to the training data, creating the embedding object.
# padding and truncation are set explicitly (pad_to_max_length is deprecated in transformers)
encoded_eval = tokenizer.encode_plus(test_text,
                                     max_length = MAX_LENGTH,
                                     add_special_tokens = True,
                                     return_token_type_ids = False,
                                     padding = 'max_length',
                                     truncation = True,
                                     return_attention_mask = True,
                                     return_tensors = 'pt')
In [ ]:
# Extract the inputs and attention_mask to make a prediction
input_ids = encoded_eval['input_ids'].to(device)
attention_mask = encoded_eval['attention_mask'].to(device)
In [ ]:
# Output (prediction)
output = model(input_ids, attention_mask)
In [ ]:
# Final prediction
probability, prediction = torch.max(output, dim = 1)
In [ ]:
# Print
print(f'\nApp Review Text: {test_text}')
print(f'\nSentiment: {class_names[prediction]}')
print(f'\nProbability: {probability[0]}')
App Review Text: I really love this app. It improved my work organization and efficiency

Sentiment: positive

Probability: 0.999880313873291
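The `torch.max` step above can be mirrored in plain Python, which makes the label-selection logic explicit (`pick_sentiment` is a hypothetical helper name):

```python
def pick_sentiment(probs, class_names):
    """Plain-Python equivalent of the torch.max step above: return the
    most probable class label and its probability from one softmax row."""
    best = max(range(len(probs)), key=probs.__getitem__)
    return class_names[best], probs[best]

# e.g., with a probability row like the positive review above:
label, p = pick_sentiment([7.8e-05, 5.4e-05, 0.99987],
                          ['negative', 'neutral', 'positive'])
print(label)  # → positive
```

This is handy when serving the model outside a notebook, where predictions often arrive as plain lists rather than tensors.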

The End